Unidoc directives for cluster md files #22904
parent 7d67524bb5
commit bcc86f7703
6 changed files with 84 additions and 72 deletions
@@ -30,7 +30,7 @@ You can send messages via the @unidoc[ClusterClient] to any actor in the cluster
 in the @unidoc[DistributedPubSubMediator] used by the @unidoc[akka.cluster.client.ClusterReceptionist].
 The @unidoc[ClusterClientReceptionist] provides methods for registration of actors that
 should be reachable from the client. Messages are wrapped in `ClusterClient.Send`,
-@scala[@scaladoc[`ClusterClient.SendToAll`](akka.cluster.client.ClusterClient$)]@java[`ClusterClient.SendToAll`] or @scala[@scaladoc[`ClusterClient.Publish`](akka.cluster.client.ClusterClient$)]@java[`ClusterClient.Publish`].
+@scala[@scaladoc[`ClusterClient.SendToAll`](akka.cluster.client.ClusterClient$)]@java[@javadoc[`ClusterClient.SendToAll`](akka.cluster.client.ClusterClient)] or @scala[@scaladoc[`ClusterClient.Publish`](akka.cluster.client.ClusterClient$)]@java[@javadoc[`ClusterClient.Publish`](akka.cluster.client.ClusterClient)].

 Both the @unidoc[ClusterClient] and the @unidoc[ClusterClientReceptionist] emit events that can be subscribed to.
 The @unidoc[ClusterClient] sends out notifications in relation to having received a list of contact points
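For orientation, a minimal sketch of registering a service with the receptionist and sending to it through a client. The `ServiceA` actor, host names, and ports are illustrative, not part of the commit:

```scala
import akka.actor.{ Actor, ActorPath, ActorSystem, Props }
import akka.cluster.client.{ ClusterClient, ClusterClientReceptionist, ClusterClientSettings }

class ServiceA extends Actor {
  def receive = { case msg => sender() ! s"echo: $msg" }
}

val system = ActorSystem("ClusterSystem")

// On a cluster node: make the actor reachable for external clients by
// registering it with the receptionist under its actor name.
val service = system.actorOf(Props[ServiceA], "serviceA")
ClusterClientReceptionist(system).registerService(service)

// On the client side (typically a separate ActorSystem outside the cluster):
// point the client at the receptionists of a few known nodes.
val initialContacts = Set(
  ActorPath.fromString("akka.tcp://ClusterSystem@host1:2552/system/receptionist"),
  ActorPath.fromString("akka.tcp://ClusterSystem@host2:2552/system/receptionist"))
val client = system.actorOf(
  ClusterClient.props(ClusterClientSettings(system).withInitialContacts(initialContacts)),
  "client")

// Messages are wrapped so the receptionist knows how to route them.
client ! ClusterClient.Send("/user/serviceA", "hello", localAffinity = true)
client ! ClusterClient.SendToAll("/user/serviceA", "hi all")
client ! ClusterClient.Publish("topic", "news")
```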
@@ -112,11 +112,11 @@ Two different failure detectors can be configured for these two purposes:
 * `akka.cluster.failure-detector` for failure detection within own data center
 * `akka.cluster.multi-data-center.failure-detector` for failure detection across different data centers

-When @ref[subscribing to cluster events](cluster-usage.md#cluster-subscriber) the `UnreachableMember` and
-`ReachableMember` events are for observations within the own data center. The same data center as where the
+When @ref[subscribing to cluster events](cluster-usage.md#cluster-subscriber) the @scala[@scaladoc[UnreachableMember](akka.cluster.ClusterEvent$)]@java[@javadoc[UnreachableMember](akka.cluster.ClusterEvent)] and
+@scala[@scaladoc[ReachableMember](akka.cluster.ClusterEvent$)]@java[@javadoc[ReachableMember](akka.cluster.ClusterEvent)] events are for observations within the own data center. The same data center as where the
 subscription was registered.

-For cross data center unreachability notifications you can subscribe to `UnreachableDataCenter` and `ReachableDataCenter`
+For cross data center unreachability notifications you can subscribe to @scala[@scaladoc[UnreachableDataCenter](akka.cluster.ClusterEvent$)]@java[@javadoc[UnreachableDataCenter](akka.cluster.ClusterEvent)] and @scala[@scaladoc[ReachableDataCenter](akka.cluster.ClusterEvent$)]@java[@javadoc[ReachableDataCenter](akka.cluster.ClusterEvent)]
 events.

 Heartbeat messages for failure detection across data centers are only performed between a number of the
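A small sketch of subscribing to both the per-data-center and the cross-data-center reachability events from inside an actor; the listener actor itself is illustrative:

```scala
import akka.actor.Actor
import akka.cluster.Cluster
import akka.cluster.ClusterEvent._

class ReachabilityListener extends Actor {
  val cluster = Cluster(context.system)

  override def preStart(): Unit =
    cluster.subscribe(self,
      classOf[UnreachableMember], classOf[ReachableMember],         // own data center
      classOf[UnreachableDataCenter], classOf[ReachableDataCenter]) // across data centers

  override def postStop(): Unit = cluster.unsubscribe(self)

  def receive = {
    case UnreachableMember(member)  => // a node in this data center became unreachable
    case ReachableMember(member)    => // it became reachable again
    case UnreachableDataCenter(dc)  => // another data center is unreachable
    case ReachableDataCenter(dc)    => // it became reachable again
    case _: CurrentClusterState     => // initial snapshot delivered on subscribe
  }
}
```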
@@ -132,7 +132,7 @@ It's best to leave the oldest nodes until last.
 ## Cluster Singleton

 The @ref[Cluster Singleton](cluster-singleton.md) is a singleton per data center. If you start the
-`ClusterSingletonManager` on all nodes and you have defined 3 different data centers there will be
+@unidoc[ClusterSingletonManager] on all nodes and you have defined 3 different data centers there will be
 3 active singleton instances in the cluster, one in each data center. This is taken care of automatically,
 but is important to be aware of. Designing the system for one singleton per data center makes it possible
 for the system to be available also during network partitions between data centers.
@@ -142,11 +142,11 @@ guaranteed to be consistent across data centers when using one leader per data c
 difficult to select a single global singleton.

 If you need a global singleton you have to pick one data center to host that singleton and only start the
-`ClusterSingletonManager` on nodes of that data center. If the data center is unreachable from another data center the
+@unidoc[ClusterSingletonManager] on nodes of that data center. If the data center is unreachable from another data center the
 singleton is inaccessible, which is a reasonable trade-off when selecting consistency over availability.

-The `ClusterSingletonProxy` is by default routing messages to the singleton in the own data center, but
-it can be started with a `data-center` parameter in the `ClusterSingletonProxySettings` to define that
+The @unidoc[ClusterSingletonProxy] is by default routing messages to the singleton in the own data center, but
+it can be started with a `data-center` parameter in the @unidoc[ClusterSingletonProxySettings] to define that
 it should route messages to a singleton located in another data center. That is useful for example when
 having a global singleton in one data center and accessing it from other data centers.

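A sketch of pointing a proxy at a singleton hosted in another data center. It assumes a `withDataCenter` setter on `ClusterSingletonProxySettings` as the programmatic counterpart of the `data-center` parameter mentioned above; the manager path and data center name are illustrative:

```scala
import akka.actor.ActorSystem
import akka.cluster.singleton.{ ClusterSingletonProxy, ClusterSingletonProxySettings }

val system = ActorSystem("ClusterSystem")

// Route messages to the global singleton hosted by the nodes of data center "dc-east".
// withDataCenter is an assumption here; verify against the ClusterSingletonProxySettings API.
val proxyToGlobalSingleton = system.actorOf(
  ClusterSingletonProxy.props(
    singletonManagerPath = "/user/globalSingletonManager",
    settings = ClusterSingletonProxySettings(system).withDataCenter("dc-east")),
  name = "globalSingletonProxy")
```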
@@ -180,7 +180,7 @@ is working on a solution that will support multiple active entities that will wo
 with Cluster Sharding across multiple data centers.

 If you need global entities you have to pick one data center to host that entity type and only start
-`ClusterSharding` on nodes of that data center. If the data center is unreachable from another data center the
+@unidoc[akka.cluster.sharding.ClusterSharding] on nodes of that data center. If the data center is unreachable from another data center the
 entities are inaccessible, which is a reasonable trade-off when selecting consistency over availability.

 The Cluster Sharding proxy is by default routing messages to the shard regions in their own data center, but
@@ -51,8 +51,8 @@ Certain message routing and let-it-crash functions may not work when Sigar is no

 Cluster metrics extension comes with two built-in collector implementations:

-1. `akka.cluster.metrics.SigarMetricsCollector`, which requires Sigar provisioning, and is more rich/precise
-2. `akka.cluster.metrics.JmxMetricsCollector`, which is used as fall back, and is less rich/precise
+1. @unidoc[akka.cluster.metrics.SigarMetricsCollector], which requires Sigar provisioning, and is more rich/precise
+2. @unidoc[akka.cluster.metrics.JmxMetricsCollector], which is used as fall back, and is less rich/precise

 You can also plug-in your own metrics collector implementation.

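A sketch of selecting a collector explicitly through configuration. The `akka.cluster.metrics.collector.provider` key is an assumption based on the extension's reference configuration, so verify the exact setting name against reference.conf:

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// Assumed setting path: akka.cluster.metrics.collector.provider (check reference.conf).
val config = ConfigFactory.parseString(
  """
  akka.cluster.metrics.collector {
    # Force the JMX collector instead of relying on the automatic fallback
    provider = "akka.cluster.metrics.JmxMetricsCollector"
  }
  """).withFallback(ConfigFactory.load())

val system = ActorSystem("ClusterSystem", config)
```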
@@ -16,7 +16,7 @@ In this context sharding means that actors with an identifier, so called entitie
 can be automatically distributed across multiple nodes in the cluster. Each entity
 actor runs only at one place, and messages can be sent to the entity without requiring
 the sender to know the location of the destination actor. This is achieved by sending
-the messages via a `ShardRegion` actor provided by this extension, which knows how
+the messages via a @unidoc[ShardRegion](ShardRegion$) actor provided by this extension, which knows how
 to route the message with the entity id to the final destination.

 Cluster sharding will not be active on members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up)
@@ -51,7 +51,7 @@ Scala
 Java
 : @@snip [ClusterShardingTest.java]($code$/java/jdocs/sharding/ClusterShardingTest.java) { #counter-actor }

-The above actor uses event sourcing and the support provided in @scala[`PersistentActor`] @java[`AbstractPersistentActor`] to store its state.
+The above actor uses event sourcing and the support provided in @scala[@unidoc[PersistentActor]] @java[@unidoc[AbstractPersistentActor]] to store its state.
 It does not have to be a persistent actor, but in case of failure or migration of entities between nodes it must be able to recover
 its state if it is valuable.

@@ -106,9 +106,9 @@ A simple sharding algorithm that works fine in most cases is to take the absolut
 the entity identifier modulo number of shards. As a convenience this is provided by the
 `ShardRegion.HashCodeMessageExtractor`.

-Messages to the entities are always sent via the local `ShardRegion`. The `ShardRegion` actor reference for a
+Messages to the entities are always sent via the local @unidoc[ShardRegion](ShardRegion$). The @unidoc[ShardRegion](ShardRegion$) actor reference for a
 named entity type is returned by `ClusterSharding.start` and it can also be retrieved with `ClusterSharding.shardRegion`.
-The `ShardRegion` will lookup the location of the shard for the entity if it does not already know its location. It will
+The @unidoc[ShardRegion](ShardRegion$) will lookup the location of the shard for the entity if it does not already know its location. It will
 delegate the message to the right node and it will create the entity actor on demand, i.e. when the
 first message for a specific entity is delivered.
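As a sketch, a hash-based message extractor together with the calls that start and look up the region. It assumes an `ActorSystem` named `system` in scope and the `Counter` actor from the snippet above; the message types are illustrative:

```scala
import akka.actor.{ ActorRef, Props }
import akka.cluster.sharding.{ ClusterSharding, ClusterShardingSettings, ShardRegion }

final case class Get(counterId: Long)
final case class EntityEnvelope(id: Long, payload: Any)

val extractEntityId: ShardRegion.ExtractEntityId = {
  case EntityEnvelope(id, payload) => (id.toString, payload)
  case msg @ Get(id)               => (id.toString, msg)
}

val numberOfShards = 100
val extractShardId: ShardRegion.ExtractShardId = {
  case EntityEnvelope(id, _) => (id % numberOfShards).toString
  case Get(id)               => (id % numberOfShards).toString
}

// Start the region for the "Counter" entity type and keep the returned ActorRef,
// or look it up again later with shardRegion.
val counterRegion: ActorRef = ClusterSharding(system).start(
  typeName = "Counter",
  entityProps = Props[Counter],
  settings = ClusterShardingSettings(system),
  extractEntityId = extractEntityId,
  extractShardId = extractShardId)

val sameRegion: ActorRef = ClusterSharding(system).shardRegion("Counter")
counterRegion ! EntityEnvelope(42, "increment")
```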
@@ -127,28 +127,28 @@ tutorial named [Akka Cluster Sharding with Scala!](https://github.com/typesafehu

 ## How it works

-The `ShardRegion` actor is started on each node in the cluster, or group of nodes
-tagged with a specific role. The `ShardRegion` is created with two application specific
+The @unidoc[ShardRegion](ShardRegion$) actor is started on each node in the cluster, or group of nodes
+tagged with a specific role. The @unidoc[ShardRegion](ShardRegion$) is created with two application specific
 functions to extract the entity identifier and the shard identifier from incoming messages.
 A shard is a group of entities that will be managed together. For the first message in a
-specific shard the `ShardRegion` requests the location of the shard from a central coordinator,
+specific shard the @unidoc[ShardRegion](ShardRegion$) requests the location of the shard from a central coordinator,
 the `ShardCoordinator`.

-The `ShardCoordinator` decides which `ShardRegion` shall own the `Shard` and informs
-that `ShardRegion`. The region will confirm this request and create the `Shard` supervisor
+The `ShardCoordinator` decides which @unidoc[ShardRegion](ShardRegion$) shall own the `Shard` and informs
+that @unidoc[ShardRegion](ShardRegion$). The region will confirm this request and create the `Shard` supervisor
 as a child actor. The individual `Entities` will then be created when needed by the `Shard`
-actor. Incoming messages thus travel via the `ShardRegion` and the `Shard` to the target
+actor. Incoming messages thus travel via the @unidoc[ShardRegion](ShardRegion$) and the `Shard` to the target
 `Entity`.

-If the shard home is another `ShardRegion` instance messages will be forwarded
-to that `ShardRegion` instance instead. While resolving the location of a
+If the shard home is another @unidoc[ShardRegion](ShardRegion$) instance messages will be forwarded
+to that @unidoc[ShardRegion](ShardRegion$) instance instead. While resolving the location of a
 shard incoming messages for that shard are buffered and later delivered when the
 shard home is known. Subsequent messages to the resolved shard can be delivered
 to the target destination immediately without involving the `ShardCoordinator`.

 Scenario 1:

-1. Incoming message M1 to `ShardRegion` instance R1.
+1. Incoming message M1 to @unidoc[ShardRegion](ShardRegion$) instance R1.
 2. M1 is mapped to shard S1. R1 doesn't know about S1, so it asks the coordinator C for the location of S1.
 3. C answers that the home of S1 is R1.
 4. R1 creates child actor for the entity E1 and sends buffered messages for S1 to E1 child
@@ -172,30 +172,30 @@ role.

 The logic that decides where a shard is to be located is defined in a pluggable shard
 allocation strategy. The default implementation `ShardCoordinator.LeastShardAllocationStrategy`
-allocates new shards to the `ShardRegion` with least number of previously allocated shards.
+allocates new shards to the @unidoc[ShardRegion](ShardRegion$) with least number of previously allocated shards.
 This strategy can be replaced by an application specific implementation.

 To be able to use newly added members in the cluster the coordinator facilitates rebalancing
 of shards, i.e. migrate entities from one node to another. In the rebalance process the
-coordinator first notifies all `ShardRegion` actors that a handoff for a shard has started.
+coordinator first notifies all @unidoc[ShardRegion](ShardRegion$) actors that a handoff for a shard has started.
 That means they will start buffering incoming messages for that shard, in the same way as if the
 shard location is unknown. During the rebalance process the coordinator will not answer any
 requests for the location of shards that are being rebalanced, i.e. local buffering will
-continue until the handoff is completed. The `ShardRegion` responsible for the rebalanced shard
+continue until the handoff is completed. The @unidoc[ShardRegion](ShardRegion$) responsible for the rebalanced shard
 will stop all entities in that shard by sending the specified `handOffStopMessage`
-(default `PoisonPill`) to them. When all entities have been terminated the `ShardRegion`
+(default `PoisonPill`) to them. When all entities have been terminated the @unidoc[ShardRegion](ShardRegion$)
 owning the entities will acknowledge the handoff as completed to the coordinator.
 Thereafter the coordinator will reply to requests for the location of
 the shard and thereby allocate a new home for the shard and then buffered messages in the
-`ShardRegion` actors are delivered to the new location. This means that the state of the entities
+@unidoc[ShardRegion](ShardRegion$) actors are delivered to the new location. This means that the state of the entities
 are not transferred or migrated. If the state of the entities are of importance it should be
 persistent (durable), e.g. with @ref:[Persistence](persistence.md), so that it can be recovered at the new
 location.

 The logic that decides which shards to rebalance is defined in a pluggable shard
 allocation strategy. The default implementation `ShardCoordinator.LeastShardAllocationStrategy`
-picks shards for handoff from the `ShardRegion` with most number of previously allocated shards.
-They will then be allocated to the `ShardRegion` with least number of previously allocated shards,
+picks shards for handoff from the @unidoc[ShardRegion](ShardRegion$) with most number of previously allocated shards.
+They will then be allocated to the @unidoc[ShardRegion](ShardRegion$) with least number of previously allocated shards,
 i.e. new members in the cluster. There is a configurable threshold of how large the difference
 must be to begin the rebalancing. This strategy can be replaced by an application specific
 implementation.
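For example, the built-in strategy can be tuned, or swapped for an application specific `ShardAllocationStrategy`, by passing it to the `ClusterSharding.start` overload that accepts an allocation strategy and a hand-off stop message. A minimal sketch, reusing the extractors and `Counter` entity from the sketch above and with illustrative threshold values:

```scala
import akka.actor.{ PoisonPill, Props }
import akka.cluster.sharding.{ ClusterSharding, ClusterShardingSettings, ShardCoordinator }

// Rebalance only when the difference is at least 3 shards, moving at most 2 shards at a time.
val allocationStrategy =
  new ShardCoordinator.LeastShardAllocationStrategy(rebalanceThreshold = 3, maxSimultaneousRebalance = 2)

val counterRegion = ClusterSharding(system).start(
  typeName = "Counter",
  entityProps = Props[Counter],
  settings = ClusterShardingSettings(system),
  extractEntityId = extractEntityId,
  extractShardId = extractShardId,
  allocationStrategy = allocationStrategy,
  handOffStopMessage = PoisonPill)
```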
@@ -207,7 +207,7 @@ actor will take over and the state is recovered. During such a failure period sh
 with known location are still available, while messages for new (unknown) shards
 are buffered until the new `ShardCoordinator` becomes available.

-As long as a sender uses the same `ShardRegion` actor to deliver messages to an entity
+As long as a sender uses the same @unidoc[ShardRegion](ShardRegion$) actor to deliver messages to an entity
 actor the order of the messages is preserved. As long as the buffer limit is not reached
 messages are delivered on a best effort basis, with at-most once delivery semantics,
 in the same way as ordinary message sending. Reliable end-to-end messaging, with
@@ -280,9 +280,9 @@ See @ref:[How To Startup when Cluster Size Reached](cluster-usage.md#min-members

 ## Proxy Only Mode

-The `ShardRegion` actor can also be started in proxy only mode, i.e. it will not
+The @unidoc[ShardRegion](ShardRegion$) actor can also be started in proxy only mode, i.e. it will not
 host any entities itself, but knows how to delegate messages to the right location.
-A `ShardRegion` is started in proxy only mode with the `ClusterSharding.startProxy` method.
+A @unidoc[ShardRegion](ShardRegion$) is started in proxy only mode with the method `ClusterSharding.startProxy`.
 Also a `ShardRegion` is started in proxy only mode in case if there is no match between the
 roles of the current cluster node and the role specified in `ClusterShardingSettings`
 passed to the `ClusterSharding.start` method.
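A sketch of starting the region in proxy only mode on a node that hosts no entities, reusing the message types and extractors from the earlier sketch; the role value is illustrative:

```scala
import akka.cluster.sharding.ClusterSharding

// This node only forwards "Counter" messages; the entities live on nodes with the "backend" role.
val counterProxy = ClusterSharding(system).startProxy(
  typeName = "Counter",
  role = Some("backend"),
  extractEntityId = extractEntityId,
  extractShardId = extractShardId)

counterProxy ! Get(42)
```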
@@ -365,8 +365,8 @@ Note that stopped entities will be started again when a new message is targeted
 ## Graceful Shutdown

 You can send the @scala[`ShardRegion.GracefulShutdown`] @java[`ShardRegion.gracefulShutdownInstance`] message
-to the `ShardRegion` actor to hand off all shards that are hosted by that `ShardRegion` and then the
-`ShardRegion` actor will be stopped. You can `watch` the `ShardRegion` actor to know when it is completed.
+to the @unidoc[ShardRegion](ShardRegion$) actor to hand off all shards that are hosted by that @unidoc[ShardRegion](ShardRegion$) and then the
+@unidoc[ShardRegion](ShardRegion$) actor will be stopped. You can `watch` the @unidoc[ShardRegion](ShardRegion$) actor to know when it is completed.
 During this period other regions will buffer messages for those shards in the same way as when a rebalance is
 triggered by the coordinator. When the shards have been stopped the coordinator will allocate these shards elsewhere.

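A minimal sketch of the hand-off from inside an actor that owns the region reference (Scala side):

```scala
import akka.actor.{ Actor, ActorRef, Terminated }
import akka.cluster.sharding.ShardRegion

class RegionShutdown(region: ActorRef) extends Actor {
  // Ask the region to hand off its shards, then watch it to learn when it has stopped.
  context.watch(region)
  region ! ShardRegion.GracefulShutdown

  def receive = {
    case Terminated(`region`) =>
      // All shards handed off; safe to continue shutting down this node.
      context.stop(self)
  }
}
```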
@@ -16,12 +16,12 @@ such as single-point of bottleneck. Single-point of failure is also a relevant c
 but for some cases this feature takes care of that by making sure that another singleton
 instance will eventually be started.

-The cluster singleton pattern is implemented by `akka.cluster.singleton.ClusterSingletonManager`.
+The cluster singleton pattern is implemented by @unidoc[akka.cluster.singleton.ClusterSingletonManager].
 It manages one singleton actor instance among all cluster nodes or a group of nodes tagged with
-a specific role. `ClusterSingletonManager` is an actor that is supposed to be started on
+a specific role. @unidoc[ClusterSingletonManager] is an actor that is supposed to be started on
 all nodes, or all nodes with specified role, in the cluster. The actual singleton actor is
-started by the `ClusterSingletonManager` on the oldest node by creating a child actor from
-supplied `Props`. `ClusterSingletonManager` makes sure that at most one singleton instance
+started by the @unidoc[ClusterSingletonManager] on the oldest node by creating a child actor from
+supplied @unidoc[akka.actor.Props]. @unidoc[ClusterSingletonManager] makes sure that at most one singleton instance
 is running at any point in time.

 The singleton actor is always running on the oldest member with specified role.
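For orientation, a minimal sketch of starting the manager on every node. The `Consumer` singleton actor, the termination message, and the role are illustrative; the real example is in the snippet referenced further down:

```scala
import akka.actor.{ ActorSystem, PoisonPill, Props }
import akka.cluster.singleton.{ ClusterSingletonManager, ClusterSingletonManagerSettings }

val system = ActorSystem("ClusterSystem")

// Started on all nodes (or all nodes with the "worker" role); only the oldest one
// actually creates the singleton child from the supplied Props.
system.actorOf(
  ClusterSingletonManager.props(
    singletonProps = Props(classOf[Consumer]),    // Consumer is an illustrative singleton actor
    terminationMessage = PoisonPill,              // sent to the singleton during hand-over
    settings = ClusterSingletonManagerSettings(system).withRole("worker")),
  name = "consumer")
```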
@@ -35,15 +35,15 @@ take over and a new singleton actor is created. For these failure scenarios ther
 not be a graceful hand-over, but more than one active singletons is prevented by all
 reasonable means. Some corner cases are eventually resolved by configurable timeouts.

-You can access the singleton actor by using the provided `akka.cluster.singleton.ClusterSingletonProxy`,
+You can access the singleton actor by using the provided @unidoc[akka.cluster.singleton.ClusterSingletonProxy],
 which will route all messages to the current instance of the singleton. The proxy will keep track of
-the oldest node in the cluster and resolve the singleton's `ActorRef` by explicitly sending the
-singleton's `actorSelection` the `akka.actor.Identify` message and waiting for it to reply.
+the oldest node in the cluster and resolve the singleton's @unidoc[akka.actor.ActorRef] by explicitly sending the
+singleton's `actorSelection` the @unidoc[akka.actor.Identify] message and waiting for it to reply.
 This is performed periodically if the singleton doesn't reply within a certain (configurable) time.
-Given the implementation, there might be periods of time during which the `ActorRef` is unavailable,
+Given the implementation, there might be periods of time during which the @unidoc[akka.actor.ActorRef] is unavailable,
 e.g., when a node leaves the cluster. In these cases, the proxy will buffer the messages sent to the
 singleton and then deliver them when the singleton is finally available. If the buffer is full
-the `ClusterSingletonProxy` will drop old messages when new messages are sent via the proxy.
+the @unidoc[ClusterSingletonProxy] will drop old messages when new messages are sent via the proxy.
 The size of the buffer is configurable and it can be disabled by using a buffer size of 0.

 It's worth noting that messages can always be lost because of the distributed nature of these actors.
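And a matching proxy, reusing `system` from the sketch above; the path and role must line up with how the manager was started:

```scala
import akka.cluster.singleton.{ ClusterSingletonProxy, ClusterSingletonProxySettings }

// Any node can talk to the singleton through the proxy; it resolves and tracks the
// current singleton ActorRef and buffers messages while no singleton is available.
val proxy = system.actorOf(
  ClusterSingletonProxy.props(
    singletonManagerPath = "/user/consumer",
    settings = ClusterSingletonProxySettings(system).withRole("worker")),
  name = "consumerProxy")

proxy ! "hello singleton"
```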
@@ -92,8 +92,8 @@ Scala
 Java
 : @@snip [ClusterSingletonManagerTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/singleton/TestSingletonMessages.java) { #singleton-message-classes }

-On each node in the cluster you need to start the `ClusterSingletonManager` and
-supply the `Props` of the singleton actor, in this case the JMS queue consumer.
+On each node in the cluster you need to start the @unidoc[ClusterSingletonManager] and
+supply the @unidoc[akka.actor.Props] of the singleton actor, in this case the JMS queue consumer.

 Scala
 : @@snip [ClusterSingletonManagerSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala) { #create-singleton-manager }
@@ -152,17 +152,17 @@ Maven

 ## Configuration

-The following configuration properties are read by the `ClusterSingletonManagerSettings`
-when created with a `ActorSystem` parameter. It is also possible to amend the `ClusterSingletonManagerSettings`
-or create it from another config section with the same layout as below. `ClusterSingletonManagerSettings` is
+The following configuration properties are read by the @unidoc[akka.cluster.singleton.ClusterSingletonManagerSettings]
+when created with an @unidoc[akka.actor.ActorSystem] parameter. It is also possible to amend the @unidoc[akka.cluster.singleton.ClusterSingletonManagerSettings]
+or create it from another config section with the same layout as below. @unidoc[akka.cluster.singleton.ClusterSingletonManagerSettings] is
 a parameter to the `ClusterSingletonManager.props` factory method, i.e. each singleton can be configured
 with different settings if needed.

 @@snip [reference.conf]($akka$/akka-cluster-tools/src/main/resources/reference.conf) { #singleton-config }

-The following configuration properties are read by the `ClusterSingletonProxySettings`
-when created with a `ActorSystem` parameter. It is also possible to amend the `ClusterSingletonProxySettings`
-or create it from another config section with the same layout as below. `ClusterSingletonProxySettings` is
+The following configuration properties are read by the @unidoc[ClusterSingletonProxySettings]
+when created with an @unidoc[akka.actor.ActorSystem] parameter. It is also possible to amend the @unidoc[ClusterSingletonProxySettings]
+or create it from another config section with the same layout as below. @unidoc[ClusterSingletonProxySettings] is
 a parameter to the `ClusterSingletonProxy.props` factory method, i.e. each singleton proxy can be configured
 with different settings if needed.
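A sketch of the two ways the settings are typically created, from the ActorSystem defaults and then amended, or from a custom config section with the same layout; the section name `my-app.my-singleton` is illustrative:

```scala
import akka.actor.ActorSystem
import akka.cluster.singleton.{ ClusterSingletonManagerSettings, ClusterSingletonProxySettings }

val system = ActorSystem("ClusterSystem")

// Read from the default config sections and then amended programmatically.
val managerSettings = ClusterSingletonManagerSettings(system).withRole("worker")
val proxySettings = ClusterSingletonProxySettings(system).withBufferSize(1000)

// Or read from another config section that has the same layout as the reference.conf block.
val customManagerSettings =
  ClusterSingletonManagerSettings(system.settings.config.getConfig("my-app.my-singleton"))
```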
@@ -8,6 +8,7 @@ import _root_.io.github.lukehutch.fastclasspathscanner.FastClasspathScanner
 import com.lightbend.paradox.markdown._
 import com.lightbend.paradox.sbt.ParadoxPlugin.autoImport._
 import org.pegdown.Printer
+import org.pegdown.ast.DirectiveNode.Source
 import org.pegdown.ast._
 import sbt.Keys._
 import sbt._
@@ -29,23 +30,31 @@ object ParadoxSupport {

   class UnidocDirective(allClasses: IndexedSeq[String]) extends InlineDirective("unidoc") {
     def render(node: DirectiveNode, visitor: Visitor, printer: Printer): Unit = {
-      if (node.label.split('[')(0).contains('.')) {
-        val fqcn = node.label
+      val (directive, label, source) = node.source match {
+        case direct: Source.Direct => (s"@unidoc[${node.label}](${direct.value})", node.label, direct.value)
+        case ref: Source.Ref => (s"@unidoc[${node.label}](${ref.value})", node.label, ref.value)
+        case Source.Empty => (s"@unidoc[${node.label}]", node.label.split('.').last, node.label)
+      }
+
+      if (source.split('[')(0).contains('.')) {
+        val fqcn = source
         if (allClasses.contains(fqcn)) {
-          val label = fqcn.split('.').last
           syntheticNode("java", javaLabel(label), fqcn, node).accept(visitor)
-          syntheticNode("scala", label, fqcn, node).accept(visitor)
+          syntheticNode("scala", scalaLabel(label), fqcn, node).accept(visitor)
         } else {
-          throw new java.lang.IllegalStateException(s"fqcn not found by @unidoc[$fqcn]")
+          throw new java.lang.IllegalStateException(s"fqcn not found by $directive")
         }
       }
       else {
-        renderByClassName(node.label, node, visitor, printer)
+        renderByClassName(directive, source, node, visitor, printer)
       }
     }

-    def javaLabel(label: String): String =
-      label.replaceAll("\\[", "<").replaceAll("\\]", ">").replace('_', '?')
+    def javaLabel(splitLabel: String): String =
+      splitLabel.replaceAll("\\\\_", "_").replaceAll("\\[", "<").replaceAll("\\]", ">").replace('_', '?')
+
+    def scalaLabel(splitLabel: String): String =
+      splitLabel.replaceAll("\\\\_", "_")

     def syntheticNode(group: String, label: String, fqcn: String, node: DirectiveNode): DirectiveNode = {
       val syntheticSource = new DirectiveNode.Source.Direct(fqcn)
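The two label helpers only differ in how they treat generic parameters: both undo the markdown escaping of underscores, and the Java variant additionally rewrites brackets and wildcards. A worked example of what they produce for an escaped label:

```scala
// Given a directive label such as @unidoc[Flow[Message, Message, \_]] the helpers yield:
scalaLabel("""Flow[Message, Message, \_]""") // "Flow[Message, Message, _]"   (Scala generics kept)
javaLabel("""Flow[Message, Message, \_]""")  // "Flow<Message, Message, ?>"   (Java generics with wildcard)
```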
@@ -56,30 +65,33 @@ object ParadoxSupport {
       ))
     }

-    def renderByClassName(label: String, node: DirectiveNode, visitor: Visitor, printer: Printer): Unit = {
-      val label = node.label.replaceAll("\\\\_", "_")
-      val labelWithoutGenericParameters = label.split("\\[")(0)
-      val labelWithJavaGenerics = javaLabel(label)
-      val matches = allClasses.filter(_.endsWith('.' + labelWithoutGenericParameters))
+    def renderByClassName(directive: String, source: String, node: DirectiveNode, visitor: Visitor, printer: Printer): Unit = {
+      val sourceWithoutGenericParameters = source.replaceAll("\\\\_", "_").split("\\[")(0)
+      val labelWithScalaGenerics = scalaLabel(node.label)
+      val labelWithJavaGenerics = javaLabel(node.label)
+      val matches = allClasses.filter(_.endsWith('.' + sourceWithoutGenericParameters))
       matches.size match {
         case 0 =>
-          throw new java.lang.IllegalStateException(s"No matches found for $label")
+          throw new java.lang.IllegalStateException(
+            s"No matches found for $directive. " +
+              s"You may want to use the fully qualified class name as @unidoc[fqcn]."
+          )
         case 1 if matches(0).contains("adsl") =>
-          throw new java.lang.IllegalStateException(s"Match for $label only found in one language: ${matches(0)}")
+          throw new java.lang.IllegalStateException(s"Match for $directive only found in one language: ${matches(0)}")
         case 1 =>
-          syntheticNode("scala", label, matches(0), node).accept(visitor)
+          syntheticNode("scala", labelWithScalaGenerics, matches(0), node).accept(visitor)
           syntheticNode("java", labelWithJavaGenerics, matches(0), node).accept(visitor)
         case 2 if matches.forall(_.contains("adsl")) =>
           matches.foreach(m => {
             if (!m.contains("javadsl"))
-              syntheticNode("scala", label, m, node).accept(visitor)
+              syntheticNode("scala", labelWithScalaGenerics, m, node).accept(visitor)
             if (!m.contains("scaladsl"))
               syntheticNode("java", labelWithJavaGenerics, m, node).accept(visitor)
           })
         case n =>
           throw new java.lang.IllegalStateException(
-            s"$n matches found for @unidoc[$label], but not javadsl/scaladsl: ${matches.mkString(", ")}. " +
-            s"You may want to use the fully qualified class name as @unidoc[fqcn] instead of @unidoc[${label}]."
+            s"$n matches found for $directive, but not javadsl/scaladsl: ${matches.mkString(", ")}. " +
+            s"You may want to use the fully qualified class name as @unidoc[fqcn] instead of $directive."
           )
       }
     }