Merge pull request #17869 from akka/wip-17447-split-docs-patriknw
=cls #17447 Split Cluster Sharding and Tools docs into java/scala
Commit 664ae2f8f5

21 changed files with 949 additions and 176 deletions

@@ -1,4 +1,4 @@
.. _cluster-client:
.. _cluster-client-scala:

Cluster Client
==============

@@ -10,13 +10,19 @@ contact points. It will establish a connection to a ``ClusterReceptionist`` some
the cluster. It will monitor the connection to the receptionist and establish a new
connection if the link goes down. When looking for a new receptionist it uses fresh
contact points retrieved from previous establishment, or periodically refreshed contacts,
i.e. not necessarily the initial contact points. Also, note it's necessary to change
``akka.actor.provider`` from ``akka.actor.LocalActorRefProvider`` to
``akka.remote.RemoteActorRefProvider`` or ``akka.cluster.ClusterActorRefProvider`` when using
i.e. not necessarily the initial contact points.

.. note::

  ``ClusterClient`` should not be used when sending messages to actors that run
  within the same cluster. Similar functionality as the ``ClusterClient`` is
  provided in a more efficient way by :ref:`distributed-pub-sub-scala` for actors that
  belong to the same cluster.

Also, note it's necessary to change ``akka.actor.provider`` from ``akka.actor.LocalActorRefProvider``
to ``akka.remote.RemoteActorRefProvider`` or ``akka.cluster.ClusterActorRefProvider`` when using
the cluster client.
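
A minimal sketch of that setting in ``application.conf`` (only the property name and values come
from the paragraph above; the surrounding layout is an assumption)::

  akka {
    actor {
      # cluster-aware provider, required for ClusterClient / ClusterClientReceptionist
      provider = "akka.cluster.ClusterActorRefProvider"
    }
  }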

The receptionist is supposed to be started on all nodes, or all nodes with specified role,
in the cluster. The receptionist can be started with the ``ClusterClientReceptionist`` extension
or as an ordinary actor.

@@ -79,30 +85,26 @@ in the cluster.

.. includecode:: ../../../akka-cluster-tools/src/multi-jvm/scala/akka/cluster/client/ClusterClientSpec.scala#client

The ``initialContacts`` parameter is a ``Set[ActorSelection]``, which can be created like this:
The ``initialContacts`` parameter is a ``Set[ActorPath]``, which can be created like this:

.. includecode:: ../../../akka-cluster-tools/src/multi-jvm/scala/akka/cluster/client/ClusterClientSpec.scala#initialContacts
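
For orientation, a rough sketch of building such a set and starting a client (the host names,
ports and system name are made up, and ``system`` is assumed to be the client's ``ActorSystem``)::

  import akka.actor.ActorPath
  import akka.cluster.client.{ ClusterClient, ClusterClientSettings }

  // hypothetical addresses of two nodes that run the ClusterClientReceptionist
  val initialContacts = Set(
    ActorPath.fromString("akka.tcp://OtherSys@host1:2552/system/receptionist"),
    ActorPath.fromString("akka.tcp://OtherSys@host2:2552/system/receptionist"))

  val client = system.actorOf(
    ClusterClient.props(ClusterClientSettings(system).withInitialContacts(initialContacts)),
    "client")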

You will probably define the address information of the initial contact points in configuration or system property.
See also :ref:`cluster-client-config-scala`.

A more comprehensive sample is available in the `Typesafe Activator <http://www.typesafe.com/platform/getstarted>`_
tutorial named `Distributed workers with Akka and Scala! <http://www.typesafe.com/activator/template/akka-distributed-workers>`_
and `Distributed workers with Akka and Java! <http://www.typesafe.com/activator/template/akka-distributed-workers-java>`_.
tutorial named `Distributed workers with Akka and Scala! <http://www.typesafe.com/activator/template/akka-distributed-workers>`_.

ClusterClientReceptionist
----------------------------
ClusterClientReceptionist Extension
-----------------------------------

In the example above the receptionist is started and accessed with the ``akka.cluster.client.ClusterClientReceptionist``.
In the example above the receptionist is started and accessed with the ``akka.cluster.client.ClusterClientReceptionist`` extension.
That is convenient and perfectly fine in most cases, but it can be good to know that it is possible to
start the ``akka.cluster.client.ClusterReceptionist`` actor as an ordinary actor and you can have several
different receptionists at the same time, serving different types of clients.

The ``ClusterClientReceptionist`` can be configured with the following properties:

.. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#receptionist-ext-config

Note that the ``ClusterClientReceptionist`` uses the ``DistributedPubSub`` extension, which is described
in :ref:`distributed-pub-sub`.
in :ref:`distributed-pub-sub-scala`.

It is recommended to load the extension when the actor system is started by defining it in the
``akka.extensions`` configuration property::

@@ -125,3 +127,21 @@ maven::

    <artifactId>akka-cluster-tools_@binVersion@</artifactId>
    <version>@version@</version>
  </dependency>

.. _cluster-client-config-scala:

Configuration
-------------

The ``ClusterClientReceptionist`` extension (or ``ClusterReceptionistSettings``) can be configured
with the following properties:

.. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#receptionist-ext-config

The following configuration properties are read by the ``ClusterClientSettings``
when created with an ``ActorSystem`` parameter. It is also possible to amend the ``ClusterClientSettings``
or create it from another config section with the same layout as below. ``ClusterClientSettings`` is
a parameter to the ``ClusterClient.props`` factory method, i.e. each client can be configured
with different settings if needed.

.. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#cluster-client-config
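
As a sketch of those two options (the section name ``my-app.cluster-client`` is a placeholder,
and it is assumed that the settings companion also accepts a plain ``Config`` as the text above implies)::

  import akka.cluster.client.ClusterClientSettings

  // read the default "akka.cluster.client" section and amend one property
  val amended = ClusterClientSettings(system)
    .withInitialContacts(initialContacts) // the set shown earlier

  // or read everything from a custom section with the same layout as the reference config
  val fromCustomSection = ClusterClientSettings(
    system.settings.config.getConfig("my-app.cluster-client"))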

@@ -152,4 +152,12 @@ You can plug-in your own metrics collector instead of built-in

Look at those two implementations for inspiration.

Custom metrics collector implementation class must be specified in the :ref:`cluster_metrics_configuration_scala`.
Custom metrics collector implementation class must be specified in the
``akka.cluster.metrics.collector.provider`` configuration property.
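
For illustration, a sketch of that property in ``application.conf`` (the collector class name is a
made-up placeholder)::

  akka.cluster.metrics.collector.provider = "com.example.MyMetricsCollector"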

Configuration
-------------

The Cluster metrics extension can be configured with the following properties:

.. includecode:: ../../../akka-cluster-metrics/src/main/resources/reference.conf

@@ -1,4 +1,4 @@
.. _cluster-sharding:
.. _cluster_sharding_scala:

Cluster Sharding
================

@@ -13,7 +13,7 @@ but this feature is not limited to actors with persistent state.

Cluster sharding is typically used when you have many stateful actors that together consume
more resources (e.g. memory) than fit on one machine. If you only have a few stateful actors
it might be easier to run them on a :ref:`cluster-singleton` node.
it might be easier to run them on a :ref:`cluster-singleton-scala` node.

In this context sharding means that actors with an identifier, so called entities,
can be automatically distributed across multiple nodes in the cluster. Each entity

@@ -22,66 +22,8 @@ the sender to know the location of the destination actor. This is achieved by se

the messages via a ``ShardRegion`` actor provided by this extension, which knows how
to route the message with the entity id to the final destination.

An Example in Java
------------------

This is how an entity actor may look:

.. includecode:: ../../../akka-cluster-sharding/src/test/java/akka/cluster/sharding/ClusterShardingTest.java#counter-actor

The above actor uses event sourcing and the support provided in ``UntypedPersistentActor`` to store its state.
It does not have to be a persistent actor, but in case of failure or migration of entities between nodes it must be able to recover
its state if it is valuable.

Note how the ``persistenceId`` is defined. You may define it another way, but it must be unique.

When using the sharding extension you are first, typically at system startup on each node
in the cluster, supposed to register the supported entity types with the ``ClusterSharding.start``
method. ``ClusterSharding.start`` gives you the reference which you can pass along.

.. includecode:: ../../../akka-cluster-sharding/src/test/java/akka/cluster/sharding/ClusterShardingTest.java#counter-start

The ``messageExtractor`` defines application specific methods to extract the entity
identifier and the shard identifier from incoming messages.

.. includecode:: ../../../akka-cluster-sharding/src/test/java/akka/cluster/sharding/ClusterShardingTest.java#counter-extractor

This example illustrates two different ways to define the entity identifier in the messages:

* The ``Get`` message includes the identifier itself.
* The ``EntityEnvelope`` holds the identifier, and the actual message that is
  sent to the entity actor is wrapped in the envelope.

Note how these two message types are handled in the ``entityId`` and ``entityMessage`` methods shown above.
The message sent to the entity actor is what ``entityMessage`` returns and that makes it possible to unwrap envelopes
if needed.

A shard is a group of entities that will be managed together. The grouping is defined by the
``extractShardId`` function shown above. For a specific entity identifier the shard identifier must always
be the same. Otherwise the entity actor might accidentally be started in several places at the same time.

Creating a good sharding algorithm is an interesting challenge in itself. Try to produce a uniform distribution,
i.e. same amount of entities in each shard. As a rule of thumb, the number of shards should be a factor ten greater
than the planned maximum number of cluster nodes. Fewer shards than the number of nodes will result in some nodes
not hosting any shards. Too many shards will result in less efficient management of the shards, e.g. rebalancing
overhead, and increased latency because the coordinator is involved in the routing of the first message for each
shard. The sharding algorithm must be the same on all nodes in a running cluster. It can be changed after stopping
all nodes in the cluster.

A simple sharding algorithm that works fine in most cases is to take the absolute value of the ``hashCode`` of
the entity identifier modulo number of shards. As a convenience this is provided by the
``ShardRegion.HashCodeMessageExtractor``.

Messages to the entities are always sent via the local ``ShardRegion``. The ``ShardRegion`` actor reference for a
named entity type is returned by ``ClusterSharding.start`` and it can also be retrieved with ``ClusterSharding.shardRegion``.
The ``ShardRegion`` will look up the location of the shard for the entity if it does not already know its location. It will
delegate the message to the right node and it will create the entity actor on demand, i.e. when the
first message for a specific entity is delivered.

.. includecode:: ../../../akka-cluster-sharding/src/test/java/akka/cluster/sharding/ClusterShardingTest.java#counter-usage

An Example in Scala
-------------------
An Example
----------

This is how an entity actor may look:

@@ -91,7 +33,8 @@ The above actor uses event sourcing and the support provided in ``PersistentActo

It does not have to be a persistent actor, but in case of failure or migration of entities between nodes it must be able to recover
its state if it is valuable.

Note how the ``persistenceId`` is defined. You may define it another way, but it must be unique.
Note how the ``persistenceId`` is defined. The name of the actor is the entity identifier (utf-8 URL-encoded).
You may define it another way, but it must be unique.
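
As a small sketch of that convention (the ``Counter`` entity type and the empty handlers are placeholders)::

  import akka.persistence.PersistentActor

  class Counter extends PersistentActor {
    // self.path.name is the entity identifier passed in by the ShardRegion (URL-encoded)
    override def persistenceId: String = "Counter-" + self.path.name

    override def receiveRecover: Receive = { case _ => }
    override def receiveCommand: Receive = { case _ => }
  }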

When using the sharding extension you are first, typically at system startup on each node
in the cluster, supposed to register the supported entity types with the ``ClusterSharding.start``

@@ -126,8 +69,9 @@ overhead, and increased latency because the coordinator is involved in the routi

shard. The sharding algorithm must be the same on all nodes in a running cluster. It can be changed after stopping
all nodes in the cluster.

A simple sharding algorithm that works fine in most cases is to take the ``hashCode`` of the entity identifier modulo
number of shards.
A simple sharding algorithm that works fine in most cases is to take the absolute value of the ``hashCode`` of
the entity identifier modulo number of shards. As a convenience this is provided by the
``ShardRegion.HashCodeMessageExtractor``.
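
A hand-written equivalent could look roughly like this (``Get`` and ``EntityEnvelope`` are the message
types from the example above, and the shard count of 100 is an arbitrary assumption)::

  import akka.cluster.sharding.ShardRegion

  val numberOfShards = 100

  val extractShardId: ShardRegion.ExtractShardId = {
    case Get(counterId)        => (math.abs(counterId.hashCode) % numberOfShards).toString
    case EntityEnvelope(id, _) => (math.abs(id.hashCode) % numberOfShards).toString
  }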

Messages to the entities are always sent via the local ``ShardRegion``. The ``ShardRegion`` actor reference for a
named entity type is returned by ``ClusterSharding.start`` and it can also be retrieved with ``ClusterSharding.shardRegion``.

@@ -205,7 +149,7 @@ Thereafter the coordinator will reply to requests for the location of

the shard and thereby allocate a new home for the shard and then buffered messages in the
``ShardRegion`` actors are delivered to the new location. This means that the state of the entities
is not transferred or migrated. If the state of the entities is of importance it should be
persistent (durable), e.g. with ``akka-persistence``, so that it can be recovered at the new
persistent (durable), e.g. with :ref:`persistence-scala`, so that it can be recovered at the new
location.

The logic that decides which shards to rebalance is defined in a pluggable shard

@@ -217,7 +161,7 @@ must be to begin the rebalancing. This strategy can be replaced by an applicatio

implementation.

The state of shard locations in the ``ShardCoordinator`` is persistent (durable) with
``akka-persistence`` to survive failures. Since it is running in a cluster ``akka-persistence``
:ref:`persistence-scala` to survive failures. Since it is running in a cluster :ref:`persistence-scala`
must be configured with a distributed journal. When a crashed or unreachable coordinator
node has been removed (via down) from the cluster a new ``ShardCoordinator`` singleton
actor will take over and the state is recovered. During such a failure period shards

@@ -228,7 +172,7 @@ As long as a sender uses the same ``ShardRegion`` actor to deliver messages to a

actor the order of the messages is preserved. As long as the buffer limit is not reached
messages are delivered on a best effort basis, with at-most once delivery semantics,
in the same way as ordinary message sending. Reliable end-to-end messaging, with
at-least-once semantics can be added by using ``AtLeastOnceDelivery`` in ``akka-persistence``.
at-least-once semantics can be added by using ``AtLeastOnceDelivery`` in :ref:`persistence-scala`.

Some additional latency is introduced for messages targeted to new or previously
unused shards due to the round-trip to the coordinator. Rebalancing of shards may

@@ -275,7 +219,7 @@ for that entity has been received in the ``Shard``. Entities will not be restart

using a ``Passivate``.

Note that the state of the entities themselves will not be restored unless they have been made persistent,
e.g. with ``akka-persistence``.
e.g. with :ref:`persistence-scala`.
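
For orientation, a minimal sketch of an entity passivating itself after an idle period (the ``Stop``
message, the timeout and the actor name are assumptions)::

  import scala.concurrent.duration._
  import akka.actor.{ Actor, ReceiveTimeout }
  import akka.cluster.sharding.ShardRegion

  case object Stop

  class PassivatingEntity extends Actor {
    // ask the parent Shard to passivate this entity after two minutes of inactivity
    context.setReceiveTimeout(120.seconds)

    def receive = {
      case ReceiveTimeout => context.parent ! ShardRegion.Passivate(stopMessage = Stop)
      case Stop           => context.stop(self)
      case _              => // handle domain messages here
    }
  }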

Graceful Shutdown
-----------------

@@ -288,11 +232,7 @@ triggered by the coordinator. When the shards have been stopped the coordinator

When the ``ShardRegion`` has terminated you probably want to ``leave`` the cluster, and shut down the ``ActorSystem``.

This is how to do it in Java:

.. includecode:: ../../../akka-cluster-sharding/src/test/java/akka/cluster/sharding/ClusterShardingTest.java#graceful-shutdown

This is how to do it in Scala:
This is how to do that:

.. includecode:: ../../../akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingGracefulShutdownSpec.scala#graceful-shutdown
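
A rough sketch of the watch-then-leave sequence described above (the coordinating actor and its
wiring are assumptions; the linked sample is the authoritative version)::

  import akka.actor.{ Actor, ActorRef, Terminated }
  import akka.cluster.Cluster
  import akka.cluster.sharding.ShardRegion

  class ShutdownCoordinator(region: ActorRef) extends Actor {
    context.watch(region)
    region ! ShardRegion.GracefulShutdown

    def receive = {
      case Terminated(`region`) =>
        // the local region has handed off its shards; leave the cluster,
        // and terminate the ActorSystem once the member has been removed
        Cluster(context.system).leave(Cluster(context.system).selfAddress)
        context.stop(self)
    }
  }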

@@ -316,11 +256,15 @@ maven::

Configuration
-------------

The ``ClusterSharding`` extension can be configured with the following properties:
The ``ClusterSharding`` extension can be configured with the following properties. These configuration
properties are read by the ``ClusterShardingSettings`` when created with an ``ActorSystem`` parameter.
It is also possible to amend the ``ClusterShardingSettings`` or create it from another config section
with the same layout as below. ``ClusterShardingSettings`` is a parameter to the ``start`` method of
the ``ClusterSharding`` extension, i.e. each entity type can be configured with different settings
if needed.

.. includecode:: ../../../akka-cluster-sharding/src/main/resources/reference.conf#sharding-ext-config
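
As a rough sketch of those options (the role, the custom section name, the ``Counter`` props and
the extractor values are placeholders from the example above)::

  import akka.actor.Props
  import akka.cluster.sharding.{ ClusterSharding, ClusterShardingSettings }

  // default "akka.cluster.sharding" section, restricted to nodes with a given role
  val settings = ClusterShardingSettings(system).withRole("counter-role")

  // alternatively, read from a custom section with the same layout
  val customSettings = ClusterShardingSettings(
    system.settings.config.getConfig("my-sharding"))

  val counterRegion = ClusterSharding(system).start(
    typeName = "Counter",
    entityProps = Props[Counter],
    settings = settings,
    extractEntityId = extractEntityId,
    extractShardId = extractShardId)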

Custom shard allocation strategy can be defined in an optional parameter to
``ClusterSharding.start``. See the API documentation of ``ShardAllocationStrategy``
(Scala) or ``AbstractShardAllocationStrategy`` (Java) for details of how to implement a custom
shard allocation strategy.
``ClusterSharding.start``. See the API documentation of ``ShardAllocationStrategy`` for details of
how to implement a custom shard allocation strategy.

@@ -1,4 +1,4 @@
.. _cluster-singleton:
.. _cluster-singleton-scala:

Cluster Singleton
=================

@@ -61,7 +61,8 @@ This pattern may seem to be very tempting to use at first, but it has several dr

* the cluster singleton may quickly become a *performance bottleneck*,
* you can not rely on the cluster singleton to be *non-stop* available — e.g. when the node on which the singleton has
  been running dies, it will take a few seconds for this to be noticed and the singleton be migrated to another node,
* in the case of a *network partition* appearing in a Cluster that is using Automatic Downing (see Auto Downing docs for :ref:`Scala <automatic-vs-manual-downing-scala>` or :ref:`Java <automatic-vs-manual-downing-java>`),
* in the case of a *network partition* appearing in a Cluster that is using Automatic Downing (see Auto Downing docs for
  :ref:`automatic-vs-manual-downing-scala`),
  it may happen that the isolated clusters each decide to spin up their own singleton, meaning that there might be multiple
  singletons running in the system, yet the Clusters have no way of finding out about them (because of the partition).

@@ -85,14 +86,8 @@ scenario when integrating with external systems.

On each node in the cluster you need to start the ``ClusterSingletonManager`` and
supply the ``Props`` of the singleton actor, in this case the JMS queue consumer.

In Scala:

.. includecode:: ../../../akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala#create-singleton-manager

In Java:

.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/singleton/ClusterSingletonManagerTest.java#create-singleton-manager

Here we limit the singleton to nodes tagged with the ``"worker"`` role, but all nodes, independent of
role, can be used by not specifying ``withRole``.

@@ -104,24 +99,13 @@ Here is how the singleton actor handles the ``terminationMessage`` in this examp

.. includecode:: ../../../akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala#consumer-end

Note that you can send back current state to the ``ClusterSingletonManager`` before terminating.
This message will be sent over to the ``ClusterSingletonManager`` at the new oldest node and it
will be passed to the ``singletonProps`` factory when creating the new singleton instance.

With the names given above, access to the singleton can be obtained from any cluster node using a properly
configured proxy.

In Scala:

.. includecode:: ../../../akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala#create-singleton-proxy

In Java:

.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/singleton/ClusterSingletonManagerTest.java#create-singleton-proxy

A more comprehensive sample is available in the `Typesafe Activator <http://www.typesafe.com/platform/getstarted>`_
tutorial named `Distributed workers with Akka and Scala! <http://www.typesafe.com/activator/template/akka-distributed-workers>`_
and `Distributed workers with Akka and Java! <http://www.typesafe.com/activator/template/akka-distributed-workers-java>`_.
tutorial named `Distributed workers with Akka and Scala! <http://www.typesafe.com/activator/template/akka-distributed-workers>`_.

Dependencies
------------

@@ -139,3 +123,23 @@ maven::

    <artifactId>akka-cluster-tools_@binVersion@</artifactId>
    <version>@version@</version>
  </dependency>

Configuration
-------------

The following configuration properties are read by the ``ClusterSingletonManagerSettings``
when created with an ``ActorSystem`` parameter. It is also possible to amend the ``ClusterSingletonManagerSettings``
or create it from another config section with the same layout as below. ``ClusterSingletonManagerSettings`` is
a parameter to the ``ClusterSingletonManager.props`` factory method, i.e. each singleton can be configured
with different settings if needed.

.. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#singleton-config
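
A rough sketch of passing such settings to the manager (the singleton ``Props`` and the termination
message are placeholders; ``PoisonPill`` is just one possible choice)::

  import akka.actor.{ PoisonPill, Props }
  import akka.cluster.singleton.{ ClusterSingletonManager, ClusterSingletonManagerSettings }

  system.actorOf(
    ClusterSingletonManager.props(
      singletonProps = Props[Consumer], // placeholder for the actual singleton actor
      terminationMessage = PoisonPill,
      settings = ClusterSingletonManagerSettings(system).withRole("worker")),
    name = "consumer")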

The following configuration properties are read by the ``ClusterSingletonProxySettings``
when created with an ``ActorSystem`` parameter. It is also possible to amend the ``ClusterSingletonProxySettings``
or create it from another config section with the same layout as below. ``ClusterSingletonProxySettings`` is
a parameter to the ``ClusterSingletonProxy.props`` factory method, i.e. each singleton proxy can be configured
with different settings if needed.

.. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#singleton-proxy-config
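
And a matching sketch for the proxy side (the manager path mirrors the name used above and is
otherwise an assumption)::

  import akka.cluster.singleton.{ ClusterSingletonProxy, ClusterSingletonProxySettings }

  val proxy = system.actorOf(
    ClusterSingletonProxy.props(
      singletonManagerPath = "/user/consumer",
      settings = ClusterSingletonProxySettings(system).withRole("worker")),
    name = "consumerProxy")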

@@ -307,7 +307,7 @@ you have exactly one actor of a certain type running somewhere in the cluster.

This can be implemented by subscribing to member events, but there are several corner
cases to consider. Therefore, this specific use case is made easily accessible by the
:ref:`cluster-singleton` in the contrib module.
:ref:`cluster-singleton-scala`.

Cluster Sharding
^^^^^^^^^^^^^^^^

@@ -316,7 +316,7 @@ Distributes actors across several nodes in the cluster and supports interaction

with the actors using their logical identifier, but without having to care about
their physical location in the cluster.

See :ref:`cluster-sharding` in the contrib module.
See :ref:`cluster_sharding_scala`.

Distributed Publish Subscribe
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

@@ -325,7 +325,7 @@ Publish-subscribe messaging between actors in the cluster, and point-to-point me

using the logical path of the actors, i.e. the sender does not have to know on which
node the destination actor is running.

See :ref:`distributed-pub-sub` in the contrib module.
See :ref:`distributed-pub-sub-scala`.

Cluster Client
^^^^^^^^^^^^^^

@@ -334,7 +334,15 @@ Communication from an actor system that is not part of the cluster to actors run

somewhere in the cluster. The client does not have to know on which node the destination
actor is running.

See :ref:`cluster-client` in the contrib module.
See :ref:`cluster-client-scala`.

Distributed Data
^^^^^^^^^^^^^^^^

*Akka Distributed Data* is useful when you need to share data between nodes in an
Akka Cluster. The data is accessed with an actor providing a key-value store like API.

See :ref:`distributed_data_scala`.

Failure Detector
^^^^^^^^^^^^^^^^

@@ -530,7 +538,7 @@ Router Example with Pool of Remote Deployed Routees

---------------------------------------------------

Let's take a look at how to use a cluster aware router on a single master node that creates
and deploys workers. To keep track of a single master we use the :ref:`cluster-singleton`
and deploys workers. To keep track of a single master we use the :ref:`cluster-singleton-scala`
in the contrib module. The ``ClusterSingletonManager`` is started on each node.

.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/stats/StatsSampleOneMaster.scala#create-singleton-manager

@@ -1,4 +1,4 @@
.. _distributed-pub-sub:
.. _distributed-pub-sub-scala:

Distributed Publish Subscribe in Cluster
========================================

@@ -12,18 +12,18 @@ This pattern provides a mediator actor, ``akka.cluster.pubsub.DistributedPubSubM

that manages a registry of actor references and replicates the entries to peer
actors among all cluster nodes or a group of nodes tagged with a specific role.

The `DistributedPubSubMediator` is supposed to be started on all nodes,
The ``DistributedPubSubMediator`` actor is supposed to be started on all nodes,
or all nodes with specified role, in the cluster. The mediator can be
started with the ``DistributedPubSub`` or as an ordinary actor.
started with the ``DistributedPubSub`` extension or as an ordinary actor.

Changes are only performed in the own part of the registry and those changes
are versioned. Deltas are disseminated in a scalable way to other nodes with
a gossip protocol. The registry is eventually consistent, i.e. changes are not
immediately visible at other nodes, but typically they will be fully replicated
to all other nodes after a few seconds.
The registry is eventually consistent, i.e. changes are not immediately visible at
other nodes, but typically they will be fully replicated to all other nodes after
a few seconds. Changes are only performed in the own part of the registry and those
changes are versioned. Deltas are disseminated in a scalable way to other nodes with
a gossip protocol.

You can send messages via the mediator on any node to registered actors on
any other node. There is four modes of message delivery.
any other node. There are four modes of message delivery.

**1. DistributedPubSubMediator.Send**

@@ -79,28 +79,8 @@ Successful ``Subscribe`` and ``Unsubscribe`` is acknowledged with

``DistributedPubSubMediator.SubscribeAck`` and ``DistributedPubSubMediator.UnsubscribeAck``
replies.

A Small Example in Java
-----------------------

A subscriber actor:

.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java#subscriber

Subscriber actors can be started on several nodes in the cluster, and all will receive
messages published to the "content" topic.

.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java#start-subscribers

A simple actor that publishes to this "content" topic:

.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java#publisher

It can publish messages to the topic from anywhere in the cluster:

.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java#publish-message

A Small Example in Scala
------------------------
A Small Example
---------------

A subscriber actor:

@@ -122,16 +102,16 @@ It can publish messages to the topic from anywhere in the cluster:

A more comprehensive sample is available in the `Typesafe Activator <http://www.typesafe.com/platform/getstarted>`_
tutorial named `Akka Clustered PubSub with Scala! <http://www.typesafe.com/activator/template/akka-clustering>`_.

DistributedPubSub
--------------------------
DistributedPubSub Extension
---------------------------

In the example above the mediator is started and accessed with the ``akka.cluster.pubsub.DistributedPubSub``.
In the example above the mediator is started and accessed with the ``akka.cluster.pubsub.DistributedPubSub`` extension.
That is convenient and perfectly fine in most cases, but it can be good to know that it is possible to
start the mediator actor as an ordinary actor and you can have several different mediators at the same
time to be able to divide a large number of actors/topics to different mediators. For example you might
want to use different cluster roles for different mediators.
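
As a rough sketch of that option (the role and actor name are placeholders, and the settings
factory is assumed to follow the same pattern as the other settings classes in this module)::

  import akka.cluster.pubsub.{ DistributedPubSubMediator, DistributedPubSubSettings }

  // a dedicated mediator for nodes with the "backend" role, started as an ordinary actor
  val backendMediator = system.actorOf(
    DistributedPubSubMediator.props(DistributedPubSubSettings(system).withRole("backend")),
    name = "backendMediator")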

The ``DistributedPubSub`` can be configured with the following properties:
The ``DistributedPubSub`` extension can be configured with the following properties:

.. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#pub-sub-ext-config

@@ -147,7 +147,7 @@ Similarly to `Actor Classification`_, :class:`EventStream` will automatically re

.. note::

  The event stream is a *local facility*, meaning that it will *not* distribute events to other nodes in a clustered environment (unless you subscribe a Remote Actor to the stream explicitly).
  If you need to broadcast events in an Akka cluster, *without* knowing your recipients explicitly (i.e. obtaining their ActorRefs), you may want to look into: :ref:`distributed-pub-sub`.
  If you need to broadcast events in an Akka cluster, *without* knowing your recipients explicitly (i.e. obtaining their ActorRefs), you may want to look into: :ref:`distributed-pub-sub-scala`.

Default Handlers
----------------