Revert "Unidoc directives for cluster md files" #22904

Johan Andrén, 2018-03-09 10:59:49 +01:00 (committed by GitHub)
parent 33ca78052c
commit ac408bba8f
6 changed files with 72 additions and 84 deletions


@@ -30,7 +30,7 @@ You can send messages via the @unidoc[ClusterClient] to any actor in the cluster
in the @unidoc[DistributedPubSubMediator] used by the @unidoc[akka.cluster.client.ClusterReceptionist].
The @unidoc[ClusterClientReceptionist] provides methods for registration of actors that
should be reachable from the client. Messages are wrapped in `ClusterClient.Send`,
@scala[@scaladoc[`ClusterClient.SendToAll`](akka.cluster.client.ClusterClient$)]@java[@javadoc[`ClusterClient.SendToAll`](akka.cluster.client.ClusterClient)] or @scala[@scaladoc[`ClusterClient.Publish`](akka.cluster.client.ClusterClient$)]@java[@javadoc[`ClusterClient.Publish`](akka.cluster.client.ClusterClient)].
@scala[@scaladoc[`ClusterClient.SendToAll`](akka.cluster.client.ClusterClient$)]@java[`ClusterClient.SendToAll`] or @scala[@scaladoc[`ClusterClient.Publish`](akka.cluster.client.ClusterClient$)]@java[`ClusterClient.Publish`].
Both the @unidoc[ClusterClient] and the @unidoc[ClusterClientReceptionist] emit events that can be subscribed to.
The @unidoc[ClusterClient] sends out notifications in relation to having received a list of contact points
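
For illustration, a minimal sketch of the flow described above: registering a service with the `ClusterClientReceptionist` inside the cluster and reaching it through a `ClusterClient` from the outside. The `ServiceA` actor, the actor names and the contact-point address are placeholders.

```scala
import akka.actor.{Actor, ActorPath, ActorSystem, Props}
import akka.cluster.client.{ClusterClient, ClusterClientReceptionist, ClusterClientSettings}

class ServiceA extends Actor {
  def receive = { case msg => println(s"serviceA got: $msg") }
}

// On a node inside the cluster: make the actor reachable for clients.
val system = ActorSystem("ClusterSystem")
val service = system.actorOf(Props[ServiceA], "serviceA")
ClusterClientReceptionist(system).registerService(service)

// On the external client system (initial contact points are placeholders).
val clientSystem = ActorSystem("ClientSystem")
val initialContacts = Set(
  ActorPath.fromString("akka.tcp://ClusterSystem@host1:2552/system/receptionist"))
val client = clientSystem.actorOf(
  ClusterClient.props(
    ClusterClientSettings(clientSystem).withInitialContacts(initialContacts)),
  "client")

// The three envelope types mentioned above.
client ! ClusterClient.Send("/user/serviceA", "hello", localAffinity = true)
client ! ClusterClient.SendToAll("/user/serviceA", "hi all")
client ! ClusterClient.Publish("my-topic", "news")
```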


@@ -112,11 +112,11 @@ Two different failure detectors can be configured for these two purposes:
* `akka.cluster.failure-detector` for failure detection within own data center
* `akka.cluster.multi-data-center.failure-detector` for failure detection across different data centers
When @ref[subscribing to cluster events](cluster-usage.md#cluster-subscriber) the @scala[@scaladoc[UnreachableMember](akka.cluster.ClusterEvent$)]@java[@javadoc[UnreachableMember](akka.cluster.ClusterEvent)] and
@scala[@scaladoc[ReachableMember](akka.cluster.ClusterEvent$)]@java[@javadoc[ReachableMember](akka.cluster.ClusterEvent)] events are for observations within the own data center. The same data center as where the
When @ref[subscribing to cluster events](cluster-usage.md#cluster-subscriber) the `UnreachableMember` and
`ReachableMember` events are for observations within the own data center. The same data center as where the
subscription was registered.
For cross data center unreachability notifications you can subscribe to @scala[@scaladoc[ReachableDataCenter](akka.cluster.ClusterEvent$)]@java[@javadoc[ReachableDataCenter](akka.cluster.ClusterEvent)] and @scala[@scaladoc[ReachableDataCenter](akka.cluster.ClusterEvent$)]@java[@javadoc[ReachableDataCenter](akka.cluster.ClusterEvent)]
For cross data center unreachability notifications you can subscribe to `UnreachableDataCenter` and `ReachableDataCenter`
events.
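
A sketch of subscribing to both kinds of reachability notifications mentioned above, assuming a plain cluster-aware actor:

```scala
import akka.actor.{Actor, ActorLogging}
import akka.cluster.Cluster
import akka.cluster.ClusterEvent._

class ReachabilityListener extends Actor with ActorLogging {
  private val cluster = Cluster(context.system)

  override def preStart(): Unit =
    cluster.subscribe(self,
      classOf[UnreachableMember], classOf[ReachableMember],         // own data center
      classOf[UnreachableDataCenter], classOf[ReachableDataCenter]) // across data centers

  override def postStop(): Unit = cluster.unsubscribe(self)

  def receive = {
    case UnreachableMember(m)      => log.info("Member unreachable in own data center: {}", m)
    case ReachableMember(m)        => log.info("Member reachable again in own data center: {}", m)
    case UnreachableDataCenter(dc) => log.info("Data center unreachable: {}", dc)
    case ReachableDataCenter(dc)   => log.info("Data center reachable again: {}", dc)
    case _                         => // initial CurrentClusterState snapshot, ignored here
  }
}
```
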
Heartbeat messages for failure detection across data centers are only performed between a number of the
@@ -132,7 +132,7 @@ It's best to leave the oldest nodes until last.
## Cluster Singleton
The @ref[Cluster Singleton](cluster-singleton.md) is a singleton per data center. If you start the
@unidoc[ClusterSingletonManager] on all nodes and you have defined 3 different data centers there will be
`ClusterSingletonManager` on all nodes and you have defined 3 different data centers there will be
3 active singleton instances in the cluster, one in each data center. This is taken care of automatically,
but is important to be aware of. Designing the system for one singleton per data center makes it possible
for the system to be available also during network partitions between data centers.
@@ -142,11 +142,11 @@ guaranteed to be consistent across data centers when using one leader per data c
difficult to select a single global singleton.
If you need a global singleton you have to pick one data center to host that singleton and only start the
@unidoc[ClusterSingletonManager] on nodes of that data center. If the data center is unreachable from another data center the
`ClusterSingletonManager` on nodes of that data center. If the data center is unreachable from another data center the
singleton is inaccessible, which is a reasonable trade-off when selecting consistency over availability.
The @unidoc[ClusterSingletonProxy] is by default routing messages to the singleton in the own data center, but
it can be started with a `data-center` parameter in the @unidoc[ClusterSingletonProxySettings] to define that
The `ClusterSingletonProxy` is by default routing messages to the singleton in the own data center, but
it can be started with a `data-center` parameter in the `ClusterSingletonProxySettings` to define that
it should route messages to a singleton located in another data center. That is useful for example when
having a global singleton in one data center and accessing it from other data centers.
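
A sketch of such a cross-data-center proxy, assuming a `withDataCenter` setter on `ClusterSingletonProxySettings` corresponding to the `data-center` parameter; the data center name and manager path are placeholders.

```scala
import akka.actor.ActorSystem
import akka.cluster.singleton.{ClusterSingletonProxy, ClusterSingletonProxySettings}

val system = ActorSystem("ClusterSystem")

// Route messages to the singleton hosted in data center "dc-west"
// instead of the singleton in the local data center.
val globalProxy = system.actorOf(
  ClusterSingletonProxy.props(
    singletonManagerPath = "/user/globalConsumer",
    settings = ClusterSingletonProxySettings(system).withDataCenter("dc-west")),
  name = "globalConsumerProxy")
```
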
@@ -180,7 +180,7 @@ is working on a solution that will support multiple active entities that will wo
with Cluster Sharding across multiple data centers.
If you need global entities you have to pick one data center to host that entity type and only start
@unidoc[akka.cluster.sharding.ClusterSharding] on nodes of that data center. If the data center is unreachable from another data center the
`ClusterSharding` on nodes of that data center. If the data center is unreachable from another data center the
entities are inaccessible, which is a reasonable trade-off when selecting consistency over availability.
The Cluster Sharding proxy is by default routing messages to the shard regions in their own data center, but


@@ -51,8 +51,8 @@ Certain message routing and let-it-crash functions may not work when Sigar is no
Cluster metrics extension comes with two built-in collector implementations:
1. @unidoc[akka.cluster.metrics.SigarMetricsCollector] which requires Sigar provisioning, and is more rich/precise
2. @unidoc[akka.cluster.metrics.JmxMetricsCollector], which is used as fall back, and is less rich/precise
1. `akka.cluster.metrics.SigarMetricsCollector`, which requires Sigar provisioning, and is more rich/precise
2. `akka.cluster.metrics.JmxMetricsCollector`, which is used as fall back, and is less rich/precise
You can also plug-in your own metrics collector implementation.
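
For illustration, a sketch of plugging in a custom collector, assuming the `akka.cluster.metrics.collector.provider` key from the extension's reference configuration; the class name is a placeholder.

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// Force a specific MetricsCollector implementation instead of the default
// Sigar-with-JMX-fallback selection. The FQCN below is a placeholder.
val config = ConfigFactory.parseString(
  """
  akka.cluster.metrics.collector.provider = "com.example.MyMetricsCollector"
  """).withFallback(ConfigFactory.load())

val system = ActorSystem("ClusterSystem", config)
```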


@@ -16,7 +16,7 @@ In this context sharding means that actors with an identifier, so called entitie
can be automatically distributed across multiple nodes in the cluster. Each entity
actor runs only at one place, and messages can be sent to the entity without requiring
the sender to know the location of the destination actor. This is achieved by sending
the messages via a @unidoc[ShardRegion](ShardRegion$) actor provided by this extension, which knows how
the messages via a `ShardRegion` actor provided by this extension, which knows how
to route the message with the entity id to the final destination.
Cluster sharding will not be active on members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up)
@@ -51,7 +51,7 @@ Scala
Java
: @@snip [ClusterShardingTest.java]($code$/java/jdocs/sharding/ClusterShardingTest.java) { #counter-actor }
The above actor uses event sourcing and the support provided in @scala[@unidoc[PersistentActor]] @java[@unidoc[AbstractPersistentActor]] to store its state.
The above actor uses event sourcing and the support provided in @scala[`PersistentActor`] @java[`AbstractPersistentActor`] to store its state.
It does not have to be a persistent actor, but in case of failure or migration of entities between nodes it must be able to recover
its state if it is valuable.
@@ -106,9 +106,9 @@ A simple sharding algorithm that works fine in most cases is to take the absolut
the entity identifier modulo number of shards. As a convenience this is provided by the
`ShardRegion.HashCodeMessageExtractor`.
Messages to the entities are always sent via the local @unidoc[ShardRegion](ShardRegion$) The @unidoc[ShardRegion](ShardRegion$) actor reference for a
Messages to the entities are always sent via the local `ShardRegion`. The `ShardRegion` actor reference for a
named entity type is returned by `ClusterSharding.start` and it can also be retrieved with `ClusterSharding.shardRegion`.
The @unidoc[ShardRegion](ShardRegion$) will lookup the location of the shard for the entity if it does not already know its location. It will
The `ShardRegion` will lookup the location of the shard for the entity if it does not already know its location. It will
delegate the message to the right node and it will create the entity actor on demand, i.e. when the
first message for a specific entity is delivered.
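
For reference, a minimal sketch of wiring this together: extractors implementing the hashCode-modulo scheme, `ClusterSharding.start`, and a later `shardRegion` lookup. The `Counter` actor and `EntityEnvelope` message are simplified stand-ins for the entity from the snippets above.

```scala
import akka.actor.{Actor, ActorRef, ActorSystem, Props}
import akka.cluster.sharding.{ClusterSharding, ClusterShardingSettings, ShardRegion}

// Simplified stand-ins for the entity actor and its message envelope.
final case class EntityEnvelope(id: Long, payload: Any)

class Counter extends Actor {
  private var count = 0
  def receive = {
    case "increment" => count += 1
    case "get"       => sender() ! count
  }
}

val extractEntityId: ShardRegion.ExtractEntityId = {
  case EntityEnvelope(id, payload) => (id.toString, payload)
}

val numberOfShards = 100
val extractShardId: ShardRegion.ExtractShardId = {
  case EntityEnvelope(id, _) => (math.abs(id.hashCode) % numberOfShards).toString
}

val system = ActorSystem("ClusterSystem")

// Returns the local ShardRegion actor reference for the "Counter" entity type.
val counterRegion: ActorRef = ClusterSharding(system).start(
  typeName = "Counter",
  entityProps = Props[Counter],
  settings = ClusterShardingSettings(system),
  extractEntityId = extractEntityId,
  extractShardId = extractShardId)

// The same reference can be looked up again by name and used for sending.
ClusterSharding(system).shardRegion("Counter") ! EntityEnvelope(42L, "increment")
```
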
@@ -127,28 +127,28 @@ tutorial named [Akka Cluster Sharding with Scala!](https://github.com/typesafehu
## How it works
The @unidoc[ShardRegion](ShardRegion$) actor is started on each node in the cluster, or group of nodes
tagged with a specific role. The @unidoc[ShardRegion](ShardRegion$) is created with two application specific
The `ShardRegion` actor is started on each node in the cluster, or group of nodes
tagged with a specific role. The `ShardRegion` is created with two application specific
functions to extract the entity identifier and the shard identifier from incoming messages.
A shard is a group of entities that will be managed together. For the first message in a
specific shard the @unidoc[ShardRegion](ShardRegion$) requests the location of the shard from a central coordinator,
specific shard the `ShardRegion` requests the location of the shard from a central coordinator,
the `ShardCoordinator`.
The `ShardCoordinator` decides which @unidoc[ShardRegion](ShardRegion$) shall own the `Shard` and informs
that @unidoc[ShardRegion](ShardRegion$). The region will confirm this request and create the `Shard` supervisor
The `ShardCoordinator` decides which `ShardRegion` shall own the `Shard` and informs
that `ShardRegion`. The region will confirm this request and create the `Shard` supervisor
as a child actor. The individual `Entities` will then be created when needed by the `Shard`
actor. Incoming messages thus travel via the @unidoc[ShardRegion](ShardRegion$) and the `Shard` to the target
actor. Incoming messages thus travel via the `ShardRegion` and the `Shard` to the target
`Entity`.
If the shard home is another @unidoc[ShardRegion](ShardRegion$) instance messages will be forwarded
to that @unidoc[ShardRegion](ShardRegion$) instance instead. While resolving the location of a
If the shard home is another `ShardRegion` instance messages will be forwarded
to that `ShardRegion` instance instead. While resolving the location of a
shard incoming messages for that shard are buffered and later delivered when the
shard home is known. Subsequent messages to the resolved shard can be delivered
to the target destination immediately without involving the `ShardCoordinator`.
Scenario 1:
1. Incoming message M1 to @unidoc[ShardRegion](ShardRegion$) instance R1.
1. Incoming message M1 to `ShardRegion` instance R1.
2. M1 is mapped to shard S1. R1 doesn't know about S1, so it asks the coordinator C for the location of S1.
3. C answers that the home of S1 is R1.
4. R1 creates child actor for the entity E1 and sends buffered messages for S1 to E1 child
@@ -172,30 +172,30 @@ role.
The logic that decides where a shard is to be located is defined in a pluggable shard
allocation strategy. The default implementation `ShardCoordinator.LeastShardAllocationStrategy`
allocates new shards to the @unidoc[ShardRegion](ShardRegion$) with least number of previously allocated shards.
allocates new shards to the `ShardRegion` with least number of previously allocated shards.
This strategy can be replaced by an application specific implementation.
To be able to use newly added members in the cluster the coordinator facilitates rebalancing
of shards, i.e. migrate entities from one node to another. In the rebalance process the
coordinator first notifies all @unidoc[ShardRegion](ShardRegion$) actors that a handoff for a shard has started.
coordinator first notifies all `ShardRegion` actors that a handoff for a shard has started.
That means they will start buffering incoming messages for that shard, in the same way as if the
shard location is unknown. During the rebalance process the coordinator will not answer any
requests for the location of shards that are being rebalanced, i.e. local buffering will
continue until the handoff is completed. The @unidoc[ShardRegion](ShardRegion$) responsible for the rebalanced shard
continue until the handoff is completed. The `ShardRegion` responsible for the rebalanced shard
will stop all entities in that shard by sending the specified `handOffStopMessage`
(default `PoisonPill`) to them. When all entities have been terminated the @unidoc[ShardRegion](ShardRegion$)
(default `PoisonPill`) to them. When all entities have been terminated the `ShardRegion`
owning the entities will acknowledge the handoff as completed to the coordinator.
Thereafter the coordinator will reply to requests for the location of
the shard and thereby allocate a new home for the shard and then buffered messages in the
@unidoc[ShardRegion](ShardRegion$) actors are delivered to the new location. This means that the state of the entities
`ShardRegion` actors are delivered to the new location. This means that the state of the entities
are not transferred or migrated. If the state of the entities are of importance it should be
persistent (durable), e.g. with @ref:[Persistence](persistence.md), so that it can be recovered at the new
location.
The logic that decides which shards to rebalance is defined in a pluggable shard
allocation strategy. The default implementation `ShardCoordinator.LeastShardAllocationStrategy`
picks shards for handoff from the @unidoc[ShardRegion](ShardRegion$) with most number of previously allocated shards.
They will then be allocated to the @unidoc[ShardRegion](ShardRegion$) with least number of previously allocated shards,
picks shards for handoff from the `ShardRegion` with most number of previously allocated shards.
They will then be allocated to the `ShardRegion` with least number of previously allocated shards,
i.e. new members in the cluster. There is a configurable threshold of how large the difference
must be to begin the rebalancing. This strategy can be replaced by an application specific
implementation.
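
Continuing the earlier sketch, the default strategy can be tuned, or replaced by a custom `ShardAllocationStrategy`, when starting sharding; the threshold values are illustrative.

```scala
import akka.actor.{ActorSystem, PoisonPill, Props}
import akka.cluster.sharding.{ClusterSharding, ClusterShardingSettings, ShardCoordinator}

val system = ActorSystem("ClusterSystem")

// Require a difference of more than 3 shards between the most- and
// least-loaded regions before rebalancing, moving at most 2 shards at a time.
val allocationStrategy =
  new ShardCoordinator.LeastShardAllocationStrategy(
    rebalanceThreshold = 3,
    maxSimultaneousRebalance = 2)

ClusterSharding(system).start(
  typeName = "Counter",
  entityProps = Props[Counter],      // entity actor as in the earlier sketch
  settings = ClusterShardingSettings(system),
  extractEntityId = extractEntityId, // extractors as in the earlier sketch
  extractShardId = extractShardId,
  allocationStrategy = allocationStrategy,
  handOffStopMessage = PoisonPill)
```
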
@@ -207,7 +207,7 @@ actor will take over and the state is recovered. During such a failure period sh
with known location are still available, while messages for new (unknown) shards
are buffered until the new `ShardCoordinator` becomes available.
As long as a sender uses the same @unidoc[ShardRegion](ShardRegion$) actor to deliver messages to an entity
As long as a sender uses the same `ShardRegion` actor to deliver messages to an entity
actor the order of the messages is preserved. As long as the buffer limit is not reached
messages are delivered on a best effort basis, with at-most once delivery semantics,
in the same way as ordinary message sending. Reliable end-to-end messaging, with
@@ -280,9 +280,9 @@ See @ref:[How To Startup when Cluster Size Reached](cluster-usage.md#min-members
## Proxy Only Mode
The @unidoc[ShardRegion](ShardRegion$) actor can also be started in proxy only mode, i.e. it will not
The `ShardRegion` actor can also be started in proxy only mode, i.e. it will not
host any entities itself, but knows how to delegate messages to the right location.
A @unidoc[ShardRegion](ShardRegion$) is started in proxy only mode with the method `ClusterSharding.startProxy`
A `ShardRegion` is started in proxy only mode with the `ClusterSharding.startProxy` method.
Also a `ShardRegion` is started in proxy only mode in case if there is no match between the
roles of the current cluster node and the role specified in `ClusterShardingSettings`
passed to the `ClusterSharding.start` method.
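
A sketch of starting such a proxy-only region, reusing the extractors from the earlier sketch; the "backend" role is a placeholder for wherever the real regions run.

```scala
import akka.actor.{ActorRef, ActorSystem}
import akka.cluster.sharding.ClusterSharding

val system = ActorSystem("ClusterSystem")

// Hosts no entities itself, only forwards messages to the regions that do.
val counterProxy: ActorRef = ClusterSharding(system).startProxy(
  typeName = "Counter",
  role = Some("backend"),
  extractEntityId = extractEntityId, // as in the earlier sketch
  extractShardId = extractShardId)
```
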
@@ -365,8 +365,8 @@ Note that stopped entities will be started again when a new message is targeted
## Graceful Shutdown
You can send the @scala[`ShardRegion.GracefulShutdown`] @java[`ShardRegion.gracefulShutdownInstance`] message
to the @unidoc[ShardRegion](ShardRegion$) actor to hand off all shards that are hosted by that @unidoc[ShardRegion](ShardRegion$) and then the
@unidoc[ShardRegion](ShardRegion$) actor will be stopped. You can `watch` the @unidoc[ShardRegion](ShardRegion$) actor to know when it is completed.
to the `ShardRegion` actor to hand off all shards that are hosted by that `ShardRegion` and then the
`ShardRegion` actor will be stopped. You can `watch` the `ShardRegion` actor to know when it is completed.
During this period other regions will buffer messages for those shards in the same way as when a rebalance is
triggered by the coordinator. When the shards have been stopped the coordinator will allocate these shards elsewhere.
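
A sketch of a small helper that performs such a graceful shutdown of the local region and reports completion; `region` would be the reference returned by `ClusterSharding.start` or `shardRegion`.

```scala
import akka.actor.{Actor, ActorLogging, ActorRef, Terminated}
import akka.cluster.sharding.ShardRegion

// Asks the local ShardRegion to hand off its shards and logs when it has stopped.
class RegionShutdown(region: ActorRef) extends Actor with ActorLogging {
  context.watch(region)
  region ! ShardRegion.GracefulShutdown

  def receive = {
    case Terminated(`region`) =>
      log.info("All shards handed off, ShardRegion stopped")
      context.stop(self)
  }
}
```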


@@ -16,12 +16,12 @@ such as single-point of bottleneck. Single-point of failure is also a relevant c
but for some cases this feature takes care of that by making sure that another singleton
instance will eventually be started.
The cluster singleton pattern is implemented by @unidoc[akka.cluster.singleton.ClusterSingletonManager].
The cluster singleton pattern is implemented by `akka.cluster.singleton.ClusterSingletonManager`.
It manages one singleton actor instance among all cluster nodes or a group of nodes tagged with
a specific role. @unidoc[ClusterSingletonManager] is an actor that is supposed to be started on
a specific role. `ClusterSingletonManager` is an actor that is supposed to be started on
all nodes, or all nodes with specified role, in the cluster. The actual singleton actor is
started by the @unidoc[ClusterSingletonManager] on the oldest node by creating a child actor from
supplied @unidoc[akka.actor.Props]. @unidoc[ClusterSingletonManager] makes sure that at most one singleton instance
started by the `ClusterSingletonManager` on the oldest node by creating a child actor from
supplied `Props`. `ClusterSingletonManager` makes sure that at most one singleton instance
is running at any point in time.
The singleton actor is always running on the oldest member with specified role.
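
For illustration, a minimal sketch of starting the manager on every node with a given role; the `Consumer` actor and the "worker" role are placeholders.

```scala
import akka.actor.{Actor, ActorSystem, PoisonPill, Props}
import akka.cluster.singleton.{ClusterSingletonManager, ClusterSingletonManagerSettings}

// Placeholder singleton actor.
class Consumer extends Actor {
  def receive = { case msg => println(s"singleton got: $msg") }
}

val system = ActorSystem("ClusterSystem")

// Started on every node with the "worker" role; only the oldest of those
// nodes actually creates the Consumer child.
system.actorOf(
  ClusterSingletonManager.props(
    singletonProps = Props[Consumer],
    terminationMessage = PoisonPill,
    settings = ClusterSingletonManagerSettings(system).withRole("worker")),
  name = "consumer")
```
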
@@ -35,15 +35,15 @@ take over and a new singleton actor is created. For these failure scenarios ther
not be a graceful hand-over, but more than one active singletons is prevented by all
reasonable means. Some corner cases are eventually resolved by configurable timeouts.
You can access the singleton actor by using the provided @unidoc[akka.cluster.singleton.ClusterSingletonProxy],
You can access the singleton actor by using the provided `akka.cluster.singleton.ClusterSingletonProxy`,
which will route all messages to the current instance of the singleton. The proxy will keep track of
the oldest node in the cluster and resolve the singleton's @unidoc[akka.actor.ActorRef] by explicitly sending the
singleton's `actorSelection` the @unidoc[akka.actor.Identify] message and waiting for it to reply.
the oldest node in the cluster and resolve the singleton's `ActorRef` by explicitly sending the
singleton's `actorSelection` the `akka.actor.Identify` message and waiting for it to reply.
This is performed periodically if the singleton doesn't reply within a certain (configurable) time.
Given the implementation, there might be periods of time during which the @unidoc[akka.actor.ActorRef] is unavailable,
Given the implementation, there might be periods of time during which the `ActorRef` is unavailable,
e.g., when a node leaves the cluster. In these cases, the proxy will buffer the messages sent to the
singleton and then deliver them when the singleton is finally available. If the buffer is full
the @unidoc[ClusterSingletonProxy] will drop old messages when new messages are sent via the proxy.
the `ClusterSingletonProxy` will drop old messages when new messages are sent via the proxy.
The size of the buffer is configurable and it can be disabled by using a buffer size of 0.
It's worth noting that messages can always be lost because of the distributed nature of these actors.
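
And a sketch of accessing the singleton through the proxy described above, assuming the manager from the previous sketch was started under the name `consumer`:

```scala
import akka.actor.ActorSystem
import akka.cluster.singleton.{ClusterSingletonProxy, ClusterSingletonProxySettings}

val system = ActorSystem("ClusterSystem")

// Tracks the oldest node and buffers messages while the singleton
// managed under /user/consumer is unavailable.
val proxy = system.actorOf(
  ClusterSingletonProxy.props(
    singletonManagerPath = "/user/consumer",
    settings = ClusterSingletonProxySettings(system).withRole("worker")),
  name = "consumerProxy")

proxy ! "hello"
```
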
@@ -92,8 +92,8 @@ Scala
Java
: @@snip [ClusterSingletonManagerTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/singleton/TestSingletonMessages.java) { #singleton-message-classes }
On each node in the cluster you need to start the @unidoc[ClusterSingletonManager] and
supply the @unidoc[akka.actor.Props] of the singleton actor, in this case the JMS queue consumer.
On each node in the cluster you need to start the `ClusterSingletonManager` and
supply the `Props` of the singleton actor, in this case the JMS queue consumer.
Scala
: @@snip [ClusterSingletonManagerSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala) { #create-singleton-manager }
@@ -152,17 +152,17 @@ Maven
## Configuration
The following configuration properties are read by the @unidoc[akka.cluster.singleton.ClusterSingletonManagerSettings]
when created with a @unidoc[akka.actor.ActorSystem] parameter. It is also possible to amend the @unidoc[akka.cluster.singleton.ClusterSingletonManagerSettings]
or create it from another config section with the same layout as below. @unidoc[akka.cluster.singleton.ClusterSingletonManagerSettings] is
The following configuration properties are read by the `ClusterSingletonManagerSettings`
when created with a `ActorSystem` parameter. It is also possible to amend the `ClusterSingletonManagerSettings`
or create it from another config section with the same layout as below. `ClusterSingletonManagerSettings` is
a parameter to the `ClusterSingletonManager.props` factory method, i.e. each singleton can be configured
with different settings if needed.
@@snip [reference.conf]($akka$/akka-cluster-tools/src/main/resources/reference.conf) { #singleton-config }
The following configuration properties are read by the @unidoc[ClusterSingletonProxySettings]
when created with a @unidoc[akka.actor.ActorSystem] parameter. It is also possible to amend the @unidoc[ClusterSingletonProxySettings]
or create it from another config section with the same layout as below. @unidoc[ClusterSingletonProxySettings] is
The following configuration properties are read by the `ClusterSingletonProxySettings`
when created with a `ActorSystem` parameter. It is also possible to amend the `ClusterSingletonProxySettings`
or create it from another config section with the same layout as below. `ClusterSingletonProxySettings` is
a parameter to the `ClusterSingletonProxy.props` factory method, i.e. each singleton proxy can be configured
with different settings if needed.
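
A sketch of the different ways of obtaining these settings; the `my-app.singleton-proxy` section name is a placeholder.

```scala
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.cluster.singleton.{ClusterSingletonManagerSettings, ClusterSingletonProxySettings}

val system = ActorSystem("ClusterSystem")

// Read from the default config sections shown above...
val managerSettings = ClusterSingletonManagerSettings(system)
val proxySettings   = ClusterSingletonProxySettings(system)

// ...amended in code...
val tunedManagerSettings = managerSettings
  .withRole("worker")
  .withRemovalMargin(30.seconds)

// ...or created from another config section with the same layout.
val proxyFromOtherSection = ClusterSingletonProxySettings(
  system.settings.config.getConfig("my-app.singleton-proxy"))
```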


@@ -8,7 +8,6 @@ import _root_.io.github.lukehutch.fastclasspathscanner.FastClasspathScanner
import com.lightbend.paradox.markdown._
import com.lightbend.paradox.sbt.ParadoxPlugin.autoImport._
import org.pegdown.Printer
import org.pegdown.ast.DirectiveNode.Source
import org.pegdown.ast._
import sbt.Keys._
import sbt._
@@ -30,31 +29,23 @@ object ParadoxSupport {
class UnidocDirective(allClasses: IndexedSeq[String]) extends InlineDirective("unidoc") {
def render(node: DirectiveNode, visitor: Visitor, printer: Printer): Unit = {
val (directive, label, source) = node.source match {
case direct: Source.Direct => (s"@unidoc[${node.label}](${direct.value})", node.label, direct.value)
case ref: Source.Ref => (s"@unidoc[${node.label}](${ref.value})", node.label, ref.value)
case Source.Empty => (s"@unidoc[${node.label}]", node.label.split('.').last, node.label)
}
if (source.split('[')(0).contains('.')) {
val fqcn = source
if (node.label.split('[')(0).contains('.')) {
val fqcn = node.label
if (allClasses.contains(fqcn)) {
val label = fqcn.split('.').last
syntheticNode("java", javaLabel(label), fqcn, node).accept(visitor)
syntheticNode("scala", scalaLabel(label), fqcn, node).accept(visitor)
syntheticNode("scala", label, fqcn, node).accept(visitor)
} else {
throw new java.lang.IllegalStateException(s"fqcn not found by $directive")
throw new java.lang.IllegalStateException(s"fqcn not found by @unidoc[$fqcn]")
}
}
else {
renderByClassName(directive, source, node, visitor, printer)
renderByClassName(node.label, node, visitor, printer)
}
}
def javaLabel(splitLabel: String): String =
splitLabel.replaceAll("\\\\_", "_").replaceAll("\\[", "<").replaceAll("\\]", ">").replace('_', '?')
def scalaLabel(splitLabel: String): String =
splitLabel.replaceAll("\\\\_", "_")
def javaLabel(label: String): String =
label.replaceAll("\\[", "<").replaceAll("\\]", ">").replace('_', '?')
def syntheticNode(group: String, label: String, fqcn: String, node: DirectiveNode): DirectiveNode = {
val syntheticSource = new DirectiveNode.Source.Direct(fqcn)
@@ -65,33 +56,30 @@ object ParadoxSupport {
))
}
def renderByClassName(directive: String, source: String, node: DirectiveNode, visitor: Visitor, printer: Printer): Unit = {
val sourceWithoutGenericParameters = source.replaceAll("\\\\_", "_").split("\\[")(0)
val labelWithScalaGenerics = scalaLabel(node.label)
val labelWithJavaGenerics = javaLabel(node.label)
val matches = allClasses.filter(_.endsWith('.' + sourceWithoutGenericParameters))
def renderByClassName(label: String, node: DirectiveNode, visitor: Visitor, printer: Printer): Unit = {
val label = node.label.replaceAll("\\\\_", "_")
val labelWithoutGenericParameters = label.split("\\[")(0)
val labelWithJavaGenerics = javaLabel(label)
val matches = allClasses.filter(_.endsWith('.' + labelWithoutGenericParameters))
matches.size match {
case 0 =>
throw new java.lang.IllegalStateException(
s"No matches found for $directive. " +
s"You may want to use the fully qualified class name as @unidoc[fqcn]."
)
throw new java.lang.IllegalStateException(s"No matches found for $label")
case 1 if matches(0).contains("adsl") =>
throw new java.lang.IllegalStateException(s"Match for $directive only found in one language: ${matches(0)}")
throw new java.lang.IllegalStateException(s"Match for $label only found in one language: ${matches(0)}")
case 1 =>
syntheticNode("scala", labelWithScalaGenerics, matches(0), node).accept(visitor)
syntheticNode("scala", label, matches(0), node).accept(visitor)
syntheticNode("java", labelWithJavaGenerics, matches(0), node).accept(visitor)
case 2 if matches.forall(_.contains("adsl")) =>
matches.foreach(m => {
if (!m.contains("javadsl"))
syntheticNode("scala", labelWithScalaGenerics, m, node).accept(visitor)
syntheticNode("scala", label, m, node).accept(visitor)
if (!m.contains("scaladsl"))
syntheticNode("java", labelWithJavaGenerics, m, node).accept(visitor)
})
case n =>
throw new java.lang.IllegalStateException(
s"$n matches found for $directive, but not javadsl/scaladsl: ${matches.mkString(", ")}. " +
s"You may want to use the fully qualified class name as @unidoc[fqcn] instead of ."
s"$n matches found for @unidoc[$label], but not javadsl/scaladsl: ${matches.mkString(", ")}. " +
s"You may want to use the fully qualified class name as @unidoc[fqcn] instead of @unidoc[${label}]."
)
}
}