Cluster Singleton - General Typed docs cleanup after all API changes #24717 (#27801)

This commit is contained in:
Helena Edelson 2019-09-27 09:18:15 +02:00 committed by Arnout Engelen
parent a7c43cf573
commit ea74f905ea
3 changed files with 130 additions and 88 deletions


@@ -15,21 +15,7 @@ To use Cluster Singleton, you must add the following dependency in your project:
## Introduction
For some use cases it is convenient and sometimes also mandatory to ensure that
you have exactly one actor of a certain type running somewhere in the cluster.
Some examples:
* single point of responsibility for certain cluster-wide consistent decisions, or
coordination of actions across the cluster system
* single entry point to an external system
* single master, many workers
* centralized naming service, or routing logic
Using a singleton should not be the first design choice. It has several drawbacks,
such as creating a single point of bottleneck. A single point of failure is also a relevant concern,
but for some cases this feature takes care of that by making sure that another singleton
instance will eventually be started.
For the full documentation of this feature and for new projects see @ref:[Cluster Singleton - Introduction](typed/cluster-singleton.md#introduction).
The cluster singleton pattern is implemented by `akka.cluster.singleton.ClusterSingletonManager`.
It manages one singleton actor instance among all cluster nodes or a group of nodes tagged with
@@ -39,17 +25,6 @@ started by the `ClusterSingletonManager` on the oldest node by creating a child actor from
supplied `Props`. `ClusterSingletonManager` makes sure that at most one singleton instance
is running at any point in time.
The singleton actor is always running on the oldest member with the specified role.
The oldest member is determined by `akka.cluster.Member#isOlderThan`.
This can change when removing that member from the cluster. Be aware that there is a short time
period when there is no active singleton during the hand-over process.
The cluster @ref:[failure detector](typed/cluster.md#failure-detector) will notice when the oldest node becomes unreachable due to
things like JVM crash, hard shutdown, or network failure. Then a new oldest node will
take over and a new singleton actor is created. For these failure scenarios there will
not be a graceful hand-over, but more than one active singleton is prevented by all
reasonable means. Some corner cases are eventually resolved by configurable timeouts.
You can access the singleton actor by using the provided `akka.cluster.singleton.ClusterSingletonProxy`,
which will route all messages to the current instance of the singleton. The proxy will keep track of
the oldest node in the cluster and resolve the singleton's `ActorRef` by explicitly sending the
@@ -61,34 +36,7 @@ singleton and then deliver them when the singleton is finally available. If the buffer is full
the `ClusterSingletonProxy` will drop old messages when new messages are sent via the proxy.
The size of the buffer is configurable and it can be disabled by using a buffer size of 0.
It's worth noting that messages can always be lost because of the distributed nature of these actors.
As always, additional logic should be implemented in the singleton (acknowledgement) and in the
client (retry) actors to ensure at-least-once message delivery.
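As a rough sketch (not the full example from the example section below), the manager and the proxy could be started like this; the `Consumer` actor, the `worker` role and the actor names are illustrative assumptions:

```scala
import akka.actor.{ Actor, ActorSystem, PoisonPill, Props }
import akka.cluster.singleton._

// Illustrative singleton actor; in practice this is your own actor.
class Consumer extends Actor {
  def receive = { case msg => /* handle cluster-wide work here */ }
}

val system = ActorSystem("ClusterSystem")

// Start the ClusterSingletonManager on every node (with the given role);
// it creates the actual singleton child only on the oldest node.
system.actorOf(
  ClusterSingletonManager.props(
    singletonProps = Props[Consumer](),
    terminationMessage = PoisonPill, // sent to the singleton at hand-over
    settings = ClusterSingletonManagerSettings(system).withRole("worker")),
  name = "consumer")

// Start a proxy on any node that needs to talk to the singleton;
// it routes messages to the current singleton instance.
val proxy = system.actorOf(
  ClusterSingletonProxy.props(
    singletonManagerPath = "/user/consumer",
    settings = ClusterSingletonProxySettings(system).withRole("worker")),
  name = "consumerProxy")

proxy ! "hello"
```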
The singleton instance will not run on members with status @ref:[WeaklyUp](typed/cluster-membership.md#weaklyup-members).
## Potential problems to be aware of
This pattern may seem very tempting to use at first, but it has several drawbacks, some of which are listed below:
* the cluster singleton may quickly become a *performance bottleneck*,
* you cannot rely on the cluster singleton to be *non-stop* available — e.g. when the node on which the singleton has
been running dies, it will take a few seconds for this to be noticed and for the singleton to be migrated to another node,
* in the case of a *network partition* appearing in a Cluster that is using Automatic Downing (see docs for
@ref:[Auto Downing](typed/cluster.md#automatic-vs-manual-downing)),
it may happen that the isolated clusters each decide to spin up their own singleton, meaning that there might be multiple
singletons running in the system, yet the Clusters have no way of finding out about them (because of the partition).
Especially the last point is something you should be aware of — in general, when using the Cluster Singleton pattern
you should take care of downing nodes yourself and not rely on the timing-based auto-down feature.
@@@ warning
**Don't use Cluster Singleton together with Automatic Downing**,
since it allows the cluster to split up into two separate clusters, which in turn will result
in *multiple Singletons* being started, one in each separate cluster!
@@@
See @ref:[Cluster Singleton - Potential problems to be aware of](typed/cluster-singleton.md#potential-problems-to-be-aware-of).
## An Example
@@ -145,21 +93,7 @@ A more comprehensive sample is available in the tutorial named
## Configuration
The following configuration properties are read by the `ClusterSingletonManagerSettings`
when created with an `ActorSystem` parameter. It is also possible to amend the `ClusterSingletonManagerSettings`
or create it from another config section with the same layout as below. `ClusterSingletonManagerSettings` is
a parameter to the `ClusterSingletonManager.props` factory method, i.e. each singleton can be configured
with different settings if needed.
@@snip [reference.conf](/akka-cluster-tools/src/main/resources/reference.conf) { #singleton-config }
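For example, the settings can be amended programmatically before being passed to the `ClusterSingletonManager.props` factory method; the role and retry interval below are illustrative values, not recommendations:

```scala
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.cluster.singleton.ClusterSingletonManagerSettings

val system = ActorSystem("ClusterSystem")

// Start from the defaults in akka.cluster.singleton and override selected properties.
val managerSettings =
  ClusterSingletonManagerSettings(system)
    .withRole("worker")
    .withHandOverRetryInterval(1.second)
```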
The following configuration properties are read by the `ClusterSingletonProxySettings`
when created with an `ActorSystem` parameter. It is also possible to amend the `ClusterSingletonProxySettings`
or create it from another config section with the same layout as below. `ClusterSingletonProxySettings` is
a parameter to the `ClusterSingletonProxy.props` factory method, i.e. each singleton proxy can be configured
with different settings if needed.
@@snip [reference.conf](/akka-cluster-tools/src/main/resources/reference.conf) { #singleton-proxy-config }
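Similarly, a sketch of amending the proxy settings (the values shown are illustrative); a buffer size of 0 disables buffering:

```scala
import akka.actor.ActorSystem
import akka.cluster.singleton.ClusterSingletonProxySettings

val system = ActorSystem("ClusterSystem")

// Start from the defaults in akka.cluster.singleton-proxy and override selected properties.
val proxySettings =
  ClusterSingletonProxySettings(system)
    .withRole("worker")
    .withBufferSize(1000)
```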
For the full documentation of this feature and for new projects see @ref:[Cluster Singleton - configuration](typed/cluster-singleton.md#configuration).
## Supervision
@@ -190,20 +124,5 @@ Java
## Lease
A @ref[lease](coordination.md) can be used as an additional safety measure to ensure that two singletons
don't run at the same time. Reasons why this can happen:
* Network partitions without an appropriate downing provider
* Mistakes in the deployment process leading to two separate Akka Clusters
* Timing issues between removing members from the Cluster on one side of a network partition and shutting them down on the other side
A lease can act as a final backup, meaning that the singleton actor won't be created unless
the lease can be acquired.
To use a lease for the singleton, set `akka.cluster.singleton.use-lease` to the configuration location
of the lease to use. A lease with the name `<actor system name>-singleton-<singleton actor path>` is used and
the owner is set to the @scala[`Cluster(system).selfAddress.hostPort`]@java[`Cluster.get(system).selfAddress().hostPort()`].
If the cluster singleton manager can't acquire the lease, it will keep retrying while it is the oldest node in the cluster.
If the lease is lost, the singleton actor will be terminated and then acquiring the lease will be retried.
For the full documentation of this feature and for new projects see @ref:[Cluster Singleton - Lease](typed/cluster-singleton.md#lease).


@@ -34,6 +34,73 @@ such as single-point of bottleneck. Single-point of failure is also a relevant concern,
but for some cases this feature takes care of that by making sure that another singleton
instance will eventually be started.
### Singleton manager
The cluster singleton pattern manages one singleton actor instance among all cluster nodes or a group of nodes tagged with
a specific role. The singleton manager is an actor that is supposed to be started as early as possible
on all nodes, or all nodes with the specified role, in the cluster.
The actual singleton actor is
* Started on the oldest node by creating a child actor from
the supplied `Props`. The manager makes sure that at most one singleton instance is running at any point in time.
* Always running on the oldest member with the specified role.
The oldest member is determined by `akka.cluster.Member#isOlderThan`.
This can change when removing that member from the cluster. Be aware that there is a short time
period when there is no active singleton during the hand-over process.
The cluster @ref:[failure detector](cluster.md#failure-detector) will notice when the oldest node becomes unreachable due to
things like JVM crash, hard shutdown, or network failure. Then a new oldest node will
take over and a new singleton actor is created. For these failure scenarios there will
not be a graceful hand-over, but more than one active singleton is prevented by all
reasonable means. Some corner cases are eventually resolved by configurable timeouts.
### Singleton proxy
To communicate with a given named singleton in the cluster you can access it through a proxy.
When creating the proxy for a given `singletonName` on a node, if a singleton manager with that name is already
running on that node, no additional manager is started, and an `ActorRef` to the existing one is returned.
The proxy will route all messages to the current instance of the singleton. It keeps track of
the oldest node in the cluster and resolves the singleton's `ActorRef` by explicitly sending the
singleton's `actorSelection` the `akka.actor.Identify` message and waiting for it to reply.
This is performed periodically if the singleton doesn't reply within a certain (configurable) time.
Given the implementation, there might be periods of time during which the `ActorRef` is unavailable,
e.g., when a node leaves the cluster. In these cases, the proxy will buffer the messages sent to the
singleton and then deliver them when the singleton is finally available. If the buffer is full
the proxy will drop old messages when new messages are sent via the proxy.
The size of the buffer is configurable and it can be disabled by using a buffer size of 0.
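The buffer size is tuned through the proxy configuration shown in the Configuration section below; as a minimal sketch, disabling buffering entirely could look like this (assuming the `akka.cluster.singleton-proxy` section from `reference.conf` applies):

```hocon
# Messages sent while the singleton is unavailable are then dropped instead of buffered.
akka.cluster.singleton-proxy.buffer-size = 0
```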
It's worth noting that messages can always be lost because of the distributed nature of these actors.
As always, additional logic should be implemented in the singleton (acknowledgement) and in the
client (retry) actors to ensure at-least-once message delivery.
The singleton instance will not run on members with status @ref:[WeaklyUp](cluster-membership.md#weaklyup-members).
## Potential problems to be aware of
This pattern may seem very tempting to use at first, but it has several drawbacks, some of which are listed below:
* the cluster singleton may quickly become a *performance bottleneck*,
* you cannot rely on the cluster singleton to be *non-stop* available — e.g. when the node on which the singleton has
been running dies, it will take a few seconds for this to be noticed and for the singleton to be migrated to another node,
* in the case of a *network partition* appearing in a Cluster that is using Automatic Downing (see docs for
@ref:[Auto Downing](cluster.md#auto-downing-do-not-use)),
it may happen that the isolated clusters each decide to spin up their own singleton, meaning that there might be multiple
singletons running in the system, yet the Clusters have no way of finding out about them (because of the partition).
Especially the last point is something you should be aware of — in general, when using the Cluster Singleton pattern
you should take care of downing nodes yourself and not rely on the timing-based auto-down feature.
@@@ warning
**Don't use Cluster Singleton together with Automatic Downing**,
since it allows the cluster to split up into two separate clusters, which in turn will result
in *multiple Singletons* being started, one in each separate cluster!
@@@
## Example
Any `Behavior` can be run as a singleton. E.g. a basic counter:
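As a rough sketch of what such a counter and its initialization as a singleton could look like (the `Counter` protocol and the `GlobalCounter` name below are illustrative assumptions, not the snippet from the repository):

```scala
import akka.actor.typed.{ ActorRef, ActorSystem, Behavior, SupervisorStrategy }
import akka.actor.typed.scaladsl.Behaviors
import akka.cluster.typed.{ ClusterSingleton, SingletonActor }

object Counter {
  sealed trait Command
  case object Increment extends Command
  final case class GetValue(replyTo: ActorRef[Int]) extends Command

  def apply(): Behavior[Command] = counter(0)

  private def counter(value: Int): Behavior[Command] =
    Behaviors.receiveMessage {
      case Increment =>
        counter(value + 1)
      case GetValue(replyTo) =>
        replyTo ! value
        Behaviors.same
    }
}

val system: ActorSystem[Nothing] = ??? // your ActorSystem[_]

// init ensures the singleton manager is running on this node and returns a proxy ActorRef.
val proxy: ActorRef[Counter.Command] =
  ClusterSingleton(system).init(
    SingletonActor(
      Behaviors.supervise(Counter()).onFailure[Exception](SupervisorStrategy.restart),
      "GlobalCounter"))

proxy ! Counter.Increment
```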
@@ -85,7 +152,63 @@ Java
: @@snip [SingletonCompileOnlyTest.java](/akka-cluster-typed/src/test/java/jdocs/akka/cluster/typed/SingletonCompileOnlyTest.java) { #stop-message }
## Configuration
The following configuration properties are read by the `ClusterSingletonManagerSettings`
when created with an `ActorSystem` parameter. It is also possible to amend the `ClusterSingletonManagerSettings`
or create it from another config section with the same layout as below. `ClusterSingletonManagerSettings` is
a parameter to the `ClusterSingletonManager.props` factory method, i.e. each singleton can be configured
with different settings if needed.
@@snip [reference.conf](/akka-cluster-tools/src/main/resources/reference.conf) { #singleton-config }
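In the typed API the corresponding settings can also be adjusted programmatically; a minimal sketch, reusing the `Counter` behavior and `system` from the example above and assuming a `worker` role:

```scala
import akka.cluster.typed.{ ClusterSingleton, ClusterSingletonSettings, SingletonActor }

// Start from the configured defaults and restrict the singleton to nodes with the "worker" role.
val singleton =
  SingletonActor(Counter(), "GlobalCounter")
    .withSettings(ClusterSingletonSettings(system).withRole("worker"))

val proxy = ClusterSingleton(system).init(singleton)
```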
The following configuration properties are read by the `ClusterSingletonProxySettings`
when created with an `ActorSystem` parameter. It is also possible to amend the `ClusterSingletonProxySettings`
or create it from another config section with the same layout as below. `ClusterSingletonProxySettings` is
a parameter to the `ClusterSingletonProxy.props` factory method, i.e. each singleton proxy can be configured
with different settings if needed.
@@snip [reference.conf](/akka-cluster-tools/src/main/resources/reference.conf) { #singleton-proxy-config }
## Lease
A @ref[lease](../coordination.md) can be used as an additional safety measure to ensure that two singletons
don't run at the same time. Reasons why this can happen:
* Network partitions without an appropriate downing provider
* Mistakes in the deployment process leading to two separate Akka Clusters
* Timing issues between removing members from the Cluster on one side of a network partition and shutting them down on the other side
A lease can act as a final backup, meaning that the singleton actor won't be created unless
the lease can be acquired.
To use a lease for the singleton, set `akka.cluster.singleton.use-lease` to the configuration location
of the lease to use. A lease with the name `<actor system name>-singleton-<singleton actor path>` is used and
the owner is set to the @scala[`Cluster(system).selfAddress.hostPort`]@java[`Cluster.get(system).selfAddress().hostPort()`].
If the cluster singleton manager can't acquire the lease, it will keep retrying while it is the oldest node in the cluster.
If the lease is lost, the singleton actor will be terminated and then acquiring the lease will be retried.
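For example, a minimal sketch of enabling the lease; the `akka.coordination.lease.kubernetes` location refers to the Kubernetes lease from Akka Management and is an assumption about that module being on the classpath:

```hocon
akka.cluster.singleton {
  # Config location of the lease implementation to use.
  use-lease = "akka.coordination.lease.kubernetes"
}
```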
## Accessing singleton of another data centre
TODO


@@ -247,7 +247,7 @@ If you don't use RP, you should anyway carefully read the [documentation](http
of the Split Brain Resolver and make sure that the solution you are using handles the concerns
described there.
### Auto-downing (DO NOT USE)
### Auto-downing - DO NOT USE
There is an automatic downing feature that you should not use in production. For testing you can enable it with configuration: