!clt #13711 Move cluster tools from contrib
* new module akka-cluster-tools containing cluster singleton, distributed pub-sub, and cluster client
Commit fb72214d99, parent c39e41c45b.
39 changed files with 792 additions and 683 deletions.

@@ -6,6 +6,9 @@ Networking

   ../common/cluster
   cluster-usage
   ../scala/cluster-singleton
   ../scala/distributed-pub-sub
   ../scala/cluster-client
   cluster-metrics
   remoting
   serialization
@@ -201,4 +201,14 @@ user defined main class and packaging with `sbt-native-packager <https://github.

or `Typesafe ConductR <http://typesafe.com/products/conductr>`_.
Please see :ref:`deployment-scenarios` for more information.

Cluster tools moved to separate module
======================================

The Cluster Singleton, Distributed Pub-Sub, and Cluster Client, previously located in the ``akka-contrib``
jar, have been moved to a separate module named ``akka-cluster-tools``. You need to replace this dependency
if you use any of these tools.

The classes changed package name from ``akka.contrib.pattern`` to ``akka.cluster.singleton``, ``akka.cluster.pubsub``
and ``akka.cluster.client``.

The configuration properties changed name to ``akka.cluster.pub-sub`` and ``akka.cluster.client``.
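
For example, a minimal migration sketch (the exact lines depend on your build and on
which of the tools you use)::

  // sbt: replace the akka-contrib dependency with akka-cluster-tools
  "com.typesafe.akka" %% "akka-cluster-tools" % "@version@"

  // Scala: update imports from akka.contrib.pattern to the new packages
  import akka.cluster.singleton.ClusterSingletonManager
  import akka.cluster.pubsub.DistributedPubSubMediator
  import akka.cluster.client.ClusterClient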

akka-docs/rst/scala/cluster-client.rst (new file, 127 lines)

@@ -0,0 +1,127 @@
.. _cluster-client:

Cluster Client
==============

An actor system that is not part of the cluster can communicate with actors
somewhere in the cluster via the ``ClusterClient``. The client can of course be part of
another cluster. It only needs to know the location of one (or more) nodes to use as initial
contact points. It will establish a connection to a ``ClusterReceptionist`` somewhere in
the cluster. It will monitor the connection to the receptionist and establish a new
connection if the link goes down. When looking for a new receptionist it uses fresh
contact points retrieved from a previous establishment, or periodically refreshed contacts,
i.e. not necessarily the initial contact points. Also, note that it is necessary to change
``akka.actor.provider`` from ``akka.actor.LocalActorRefProvider`` to
``akka.remote.RemoteActorRefProvider`` or ``akka.cluster.ClusterActorRefProvider`` when using
the cluster client.
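
For example, in the client's actor system configuration (a minimal sketch; only the
provider setting shown here is required for the client)::

  akka.actor.provider = "akka.cluster.ClusterActorRefProvider"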

The receptionist is supposed to be started on all nodes, or all nodes with a specified role,
in the cluster. The receptionist can be started with the ``ClusterReceptionistExtension``
or as an ordinary actor.

You can send messages via the ``ClusterClient`` to any actor in the cluster that is registered
in the ``DistributedPubSubMediator`` used by the ``ClusterReceptionist``.
The ``ClusterReceptionistExtension`` provides methods for registration of actors that
should be reachable from the client. Messages are wrapped in ``ClusterClient.Send``,
``ClusterClient.SendToAll`` or ``ClusterClient.Publish``.

**1. ClusterClient.Send**

The message will be delivered to one recipient with a matching path, if any such
exists. If several entries match the path the message will be delivered
to one random destination. The sender() of the message can specify that local
affinity is preferred, i.e. the message is sent to an actor in the same local actor
system as the used receptionist actor, if any such exists, otherwise randomly to any other
matching entry.

**2. ClusterClient.SendToAll**

The message will be delivered to all recipients with a matching path.

**3. ClusterClient.Publish**

The message will be delivered to all recipient actors that have been registered as subscribers
to the named topic.
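
For illustration, sending in the three modes might look like this (a sketch; the
``client`` actor and the paths are placeholders, and the ``localAffinity`` parameter
name is an assumption)::

  client ! ClusterClient.Send("/user/serviceA", "hello", localAffinity = true)
  client ! ClusterClient.SendToAll("/user/serviceB", "hi all")
  client ! ClusterClient.Publish("content", "news")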

Response messages from the destination actor are tunneled via the receptionist
to avoid inbound connections from other cluster nodes to the client, i.e.
the ``sender()``, as seen by the destination actor, is not the client itself.
The ``sender()`` of the response messages, as seen by the client, is preserved
as the original sender(), so the client can choose to send subsequent messages
directly to the actor in the cluster.

While establishing a connection to a receptionist the ``ClusterClient`` will buffer
messages and send them when the connection is established. If the buffer is full
the ``ClusterClient`` will throw ``akka.actor.StashOverflowException``, which can be
handled by the supervision strategy of the parent actor. The size of the buffer
can be configured with the ``stash-capacity`` setting of the mailbox that is
used by the ``ClusterClient`` actor.

.. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#cluster-client-mailbox-config

An Example
----------

On the cluster nodes first start the receptionist. Note, it is recommended to load the extension
when the actor system is started by defining it in the ``akka.extensions`` configuration property::

  akka.extensions = ["akka.cluster.client.ClusterReceptionistExtension"]

Next, register the actors that should be available for the client.

.. includecode:: ../../../akka-cluster-tools/src/multi-jvm/scala/akka/cluster/client/ClusterClientSpec.scala#server
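
For reference, such a registration might look like this (a sketch; ``registerService``
is assumed to be the extension's registration method, and ``Service`` is a placeholder
actor)::

  val serviceActor = system.actorOf(Props[Service], "service")
  ClusterReceptionistExtension(system).registerService(serviceActor)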

On the client you create the ``ClusterClient`` actor and use it as a gateway for sending
messages to the actors identified by their path (without address information) somewhere
in the cluster.

.. includecode:: ../../../akka-cluster-tools/src/multi-jvm/scala/akka/cluster/client/ClusterClientSpec.scala#client

The ``initialContacts`` parameter is a ``Set[ActorSelection]``, which can be created like this:

.. includecode:: ../../../akka-cluster-tools/src/multi-jvm/scala/akka/cluster/client/ClusterClientSpec.scala#initialContacts

You will probably define the address information of the initial contact points in configuration or a system property.
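
A minimal sketch of creating the client (the host names, port and the receptionist's
actor path are placeholder assumptions; use the addresses of your own contact-point
nodes)::

  val initialContacts = Set(
    system.actorSelection("akka.tcp://OtherSys@host1:2552/user/receptionist"),
    system.actorSelection("akka.tcp://OtherSys@host2:2552/user/receptionist"))
  val client = system.actorOf(ClusterClient.props(initialContacts), "client")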

A more comprehensive sample is available in the `Typesafe Activator <http://www.typesafe.com/platform/getstarted>`_
tutorials named `Distributed workers with Akka and Scala! <http://www.typesafe.com/activator/template/akka-distributed-workers>`_
and `Distributed workers with Akka and Java! <http://www.typesafe.com/activator/template/akka-distributed-workers-java>`_.

ClusterReceptionistExtension
----------------------------

In the example above the receptionist is started and accessed with the ``akka.cluster.client.ClusterReceptionistExtension``.
That is convenient and perfectly fine in most cases, but it can be good to know that it is possible to
start the ``akka.cluster.client.ClusterReceptionist`` actor as an ordinary actor, and that you can have several
different receptionists at the same time, serving different types of clients.

The ``ClusterReceptionistExtension`` can be configured with the following properties:

.. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#receptionist-ext-config

Note that the ``ClusterReceptionistExtension`` uses the ``DistributedPubSubExtension``, which is described
in :ref:`distributed-pub-sub`.

It is recommended to load the extension when the actor system is started by defining it in the
``akka.extensions`` configuration property::

  akka.extensions = ["akka.cluster.client.ClusterReceptionistExtension"]

Dependencies
------------

To use the Cluster Client you must add the following dependency to your project.

sbt::

  "com.typesafe.akka" %% "akka-cluster-tools" % "@version@" @crossString@

maven::

  <dependency>
    <groupId>com.typesafe.akka</groupId>
    <artifactId>akka-cluster-tools_@binVersion@</artifactId>
    <version>@version@</version>
  </dependency>

akka-docs/rst/scala/cluster-singleton.rst (new file, 147 lines)

@@ -0,0 +1,147 @@
.. _cluster-singleton:

Cluster Singleton
=================

For some use cases it is convenient and sometimes also mandatory to ensure that
you have exactly one actor of a certain type running somewhere in the cluster.

Some examples:

* single point of responsibility for certain cluster-wide consistent decisions, or
  coordination of actions across the cluster system
* single entry point to an external system
* single master, many workers
* centralized naming service, or routing logic

Using a singleton should not be the first design choice. It has several drawbacks,
such as being a single point of bottleneck. A single point of failure is also a relevant concern,
but for some cases this feature takes care of that by making sure that another singleton
instance will eventually be started.

The cluster singleton pattern is implemented by ``akka.cluster.singleton.ClusterSingletonManager``.
It manages one singleton actor instance among all cluster nodes or a group of nodes tagged with
a specific role. ``ClusterSingletonManager`` is an actor that is supposed to be started on
all nodes, or all nodes with a specified role, in the cluster. The actual singleton actor is
started by the ``ClusterSingletonManager`` on the oldest node by creating a child actor from
the supplied ``Props``. ``ClusterSingletonManager`` makes sure that at most one singleton instance
is running at any point in time.

The singleton actor is always running on the oldest member with the specified role.
The oldest member is determined by ``akka.cluster.Member#isOlderThan``.
This can change when removing that member from the cluster. Be aware that there is a short time
period when there is no active singleton during the hand-over process.

The cluster failure detector will notice when the oldest node becomes unreachable due to
things like JVM crash, hard shut down, or network failure. Then a new oldest node will
take over and a new singleton actor is created. For these failure scenarios there will
not be a graceful hand-over, but more than one active singleton is prevented by all
reasonable means. Some corner cases are eventually resolved by configurable timeouts.

You can access the singleton actor by using the provided ``akka.cluster.singleton.ClusterSingletonProxy``,
which will route all messages to the current instance of the singleton. The proxy will keep track of
the oldest node in the cluster and resolve the singleton's ``ActorRef`` by explicitly sending the
singleton's ``actorSelection`` the ``akka.actor.Identify`` message and waiting for it to reply.
This is performed periodically if the singleton doesn't reply within a certain (configurable) time.
Given the implementation, there might be periods of time during which the ``ActorRef`` is unavailable,
e.g., when a node leaves the cluster. In these cases, the proxy will stash away all messages until it
is able to identify the singleton. It's worth noting that messages can always be lost because of the
distributed nature of these actors. As always, additional logic should be implemented in the singleton
(acknowledgement) and in the client (retry) actors to ensure at-least-once message delivery.
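
For illustration, a proxy might be created like this (a sketch; the ``singletonPath``
and ``role`` parameter names are assumptions, and the path matches the example names
used below)::

  val proxy = system.actorOf(ClusterSingletonProxy.props(
    singletonPath = "/user/singleton/consumer",
    role = Some("worker")),
    name = "consumerProxy")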

Potential problems to be aware of
---------------------------------

This pattern may seem very tempting to use at first, but it has several drawbacks, some of which are listed below:

* the cluster singleton may quickly become a *performance bottleneck*,
* you cannot rely on the cluster singleton to be *non-stop* available — e.g. when the node on which the singleton has
  been running dies, it will take a few seconds for this to be noticed and for the singleton to be migrated to another node,
* in the case of a *network partition* appearing in a Cluster that is using Automatic Downing (see Auto Downing docs for :ref:`Scala <automatic-vs-manual-downing-scala>` or :ref:`Java <automatic-vs-manual-downing-java>`),
  it may happen that the isolated clusters each decide to spin up their own singleton, meaning that there might be multiple
  singletons running in the system, yet the Clusters have no way of finding out about them (because of the partition).

Especially the last point is something you should be aware of — in general when using the Cluster Singleton pattern
you should take care of downing nodes yourself and not rely on the timing based auto-down feature.

.. warning::
  **Be very careful when using Cluster Singleton together with Automatic Downing**,
  since it allows the cluster to split up into two separate clusters, which in turn will result
  in *multiple Singletons* being started, one in each separate cluster!

An Example
----------

Assume that we need one single entry point to an external system: an actor that
receives messages from a JMS queue, with the strict requirement that only one
JMS consumer must exist to make sure that the messages are processed in order.
That is perhaps not how one would like to design things, but a typical real-world
scenario when integrating with external systems.

On each node in the cluster you need to start the ``ClusterSingletonManager`` and
supply the ``Props`` of the singleton actor, in this case the JMS queue consumer.

In Scala:

.. includecode:: ../../../akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala#create-singleton-manager

Here we limit the singleton to nodes tagged with the ``"worker"`` role, but all nodes, independent of
role, can be used by specifying ``None`` as the ``role`` parameter.
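
A sketch of such a start-up (the parameter names ``singletonProps``, ``terminationMessage``
and ``role`` follow the text above; ``singletonName``, ``Consumer``, ``queue`` and ``End``
are placeholder assumptions for this example)::

  system.actorOf(ClusterSingletonManager.props(
    singletonProps = handOverState => Props(classOf[Consumer], handOverState, queue),
    singletonName = "consumer",
    terminationMessage = End,
    role = Some("worker")),
    name = "singleton")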

The corresponding Java API for the ``singletonProps`` function is ``akka.cluster.singleton.ClusterSingletonPropsFactory``.
The Java API takes a plain String for the role parameter, and ``null`` means that all nodes, independent of
role, are used.

In Java:

.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/singleton/ClusterSingletonManagerTest.java#create-singleton-manager

.. note::

  The ``singletonProps``/``singletonPropsFactory`` is invoked when creating
  the singleton actor and it must not use members that are not thread safe, e.g.
  mutable state in an enclosing actor.

Here we use an application specific ``terminationMessage`` to be able to close the
resources before actually stopping the singleton actor. Note that ``PoisonPill`` is a
perfectly fine ``terminationMessage`` if you only need to stop the actor.

Here is how the singleton actor handles the ``terminationMessage`` in this example.

.. includecode:: ../../../akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala#consumer-end

Note that you can send back current state to the ``ClusterSingletonManager`` before terminating.
This message will be sent over to the ``ClusterSingletonManager`` at the new oldest node and it
will be passed to the ``singletonProps`` factory when creating the new singleton instance.
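
A sketch of such a hand-over in the singleton's ``receive`` (``End`` and the ``registry``
state are placeholders)::

  case End =>
    // hand the current state over to the parent ClusterSingletonManager;
    // it is passed to the singletonProps factory on the new oldest node
    context.parent ! registry
    context.stop(self)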

With the names given above, access to the singleton can be obtained from any cluster node using a properly
configured proxy.

In Scala:

.. includecode:: ../../../akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala#create-singleton-proxy

In Java:

.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/singleton/ClusterSingletonManagerTest.java#create-singleton-proxy

A more comprehensive sample is available in the `Typesafe Activator <http://www.typesafe.com/platform/getstarted>`_
tutorials named `Distributed workers with Akka and Scala! <http://www.typesafe.com/activator/template/akka-distributed-workers>`_
and `Distributed workers with Akka and Java! <http://www.typesafe.com/activator/template/akka-distributed-workers-java>`_.

Dependencies
------------

To use the Cluster Singleton you must add the following dependency to your project.

sbt::

  "com.typesafe.akka" %% "akka-cluster-tools" % "@version@" @crossString@

maven::

  <dependency>
    <groupId>com.typesafe.akka</groupId>
    <artifactId>akka-cluster-tools_@binVersion@</artifactId>
    <version>@version@</version>
  </dependency>

akka-docs/rst/scala/distributed-pub-sub.rst (new file, 161 lines)

@@ -0,0 +1,161 @@
.. _distributed-pub-sub:

Distributed Publish Subscribe in Cluster
========================================

How do I send a message to an actor without knowing which node it is running on?

How do I send messages to all actors in the cluster that have registered interest
in a named topic?

This pattern provides a mediator actor, ``akka.cluster.pubsub.DistributedPubSubMediator``,
that manages a registry of actor references and replicates the entries to peer
actors among all cluster nodes or a group of nodes tagged with a specific role.

The ``DistributedPubSubMediator`` is supposed to be started on all nodes,
or all nodes with a specified role, in the cluster. The mediator can be
started with the ``DistributedPubSubExtension`` or as an ordinary actor.

Changes are only performed in the node's own part of the registry and those changes
are versioned. Deltas are disseminated in a scalable way to other nodes with
a gossip protocol. The registry is eventually consistent, i.e. changes are not
immediately visible at other nodes, but typically they will be fully replicated
to all other nodes after a few seconds.

You can send messages via the mediator on any node to registered actors on
any other node. There are four modes of message delivery.

**1. DistributedPubSubMediator.Send**

The message will be delivered to one recipient with a matching path, if any such
exists in the registry. If several entries match the path the message will be sent
via the supplied ``RoutingLogic`` (default random) to one destination. The sender() of the
message can specify that local affinity is preferred, i.e. the message is sent to an actor
in the same local actor system as the used mediator actor, if any such exists, otherwise
routed to any other matching entry. A typical usage of this mode is private chat to one
other user in an instant messaging application. It can also be used for distributing
tasks to registered workers, like a cluster aware router where the routees can
dynamically register themselves.

**2. DistributedPubSubMediator.SendToAll**

The message will be delivered to all recipients with a matching path. Actors with
the same path, without address information, can be registered on different nodes.
On each node there can only be one such actor, since the path is unique within one
local actor system. Typical usage of this mode is to broadcast messages to all replicas
with the same path, e.g. 3 actors on different nodes that all perform the same actions,
for redundancy. You can also optionally specify a property (``allButSelf``) deciding
if the message should be sent to a matching path on the self node or not.

**3. DistributedPubSubMediator.Publish**

Actors may be registered to a named topic instead of a path. This enables many subscribers
on each node. The message will be delivered to all subscribers of the topic. For
efficiency the message is sent over the wire only once per node (that has a matching topic),
and then delivered to all subscribers of the local topic representation. This is the
true pub/sub mode. A typical usage of this mode is a chat room in an instant messaging
application.

**4. DistributedPubSubMediator.Publish with sendOneMessageToEachGroup**

Actors may be subscribed to a named topic with an optional property (``group``).
If subscribing with a group name, each message published to a topic with the
``sendOneMessageToEachGroup`` flag is delivered via the supplied ``RoutingLogic``
(default random) to one actor within each subscribing group.
If all the subscribed actors have the same group name, then this works just like
``Send`` and all messages are delivered to one subscriber.
If all the subscribed actors have different group names, then this works like
normal ``Publish`` and all messages are broadcast to all subscribers.
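
For illustration, a sketch of group subscriptions (the topic and group names are
placeholders, and the exact ``Subscribe``/``Publish`` constructor shapes are assumptions)::

  // each member of group "workers-a" subscribes through its local mediator
  mediator ! DistributedPubSubMediator.Subscribe("jobs", Some("workers-a"), self)
  // delivered to one actor within each subscribing group
  mediator ! DistributedPubSubMediator.Publish("jobs", job, sendOneMessageToEachGroup = true)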

You register actors to the local mediator with ``DistributedPubSubMediator.Put`` or
``DistributedPubSubMediator.Subscribe``. ``Put`` is used together with the ``Send`` and
``SendToAll`` message delivery modes. The ``ActorRef`` in ``Put`` must belong to the same
local actor system as the mediator. ``Subscribe`` is used together with ``Publish``.
Actors are automatically removed from the registry when they are terminated, or you
can explicitly remove entries with ``DistributedPubSubMediator.Remove`` or
``DistributedPubSubMediator.Unsubscribe``.

Successful ``Subscribe`` and ``Unsubscribe`` are acknowledged with
``DistributedPubSubMediator.SubscribeAck`` and ``DistributedPubSubMediator.UnsubscribeAck``
replies.
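
Putting these together, a minimal Scala sketch of a subscriber and a publisher (the
actor bodies are illustrative; the "content" topic mirrors the examples below)::

  import akka.actor.Actor
  import akka.cluster.pubsub.DistributedPubSubExtension
  import akka.cluster.pubsub.DistributedPubSubMediator.{ Publish, Subscribe, SubscribeAck }

  class Subscriber extends Actor {
    // subscribe through the local mediator; the entry is gossiped to other nodes
    DistributedPubSubExtension(context.system).mediator ! Subscribe("content", self)
    def receive = {
      case SubscribeAck(Subscribe("content", None, `self`)) => // now subscribed
      case msg: String => println(s"got $msg")
    }
  }

  class Publisher extends Actor {
    val mediator = DistributedPubSubExtension(context.system).mediator
    def receive = {
      case msg: String => mediator ! Publish("content", msg)
    }
  }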

A Small Example in Java
-----------------------

A subscriber actor:

.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java#subscriber

Subscriber actors can be started on several nodes in the cluster, and all will receive
messages published to the "content" topic.

.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java#start-subscribers

A simple actor that publishes to this "content" topic:

.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java#publisher

It can publish messages to the topic from anywhere in the cluster:

.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java#publish-message

A Small Example in Scala
------------------------

A subscriber actor:

.. includecode:: ../../../akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala#subscriber

Subscriber actors can be started on several nodes in the cluster, and all will receive
messages published to the "content" topic.

.. includecode:: ../../../akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala#start-subscribers

A simple actor that publishes to this "content" topic:

.. includecode:: ../../../akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala#publisher

It can publish messages to the topic from anywhere in the cluster:

.. includecode:: ../../../akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala#publish-message

A more comprehensive sample is available in the `Typesafe Activator <http://www.typesafe.com/platform/getstarted>`_
tutorial named `Akka Clustered PubSub with Scala! <http://www.typesafe.com/activator/template/akka-clustering>`_.

DistributedPubSubExtension
--------------------------

In the example above the mediator is started and accessed with the ``akka.cluster.pubsub.DistributedPubSubExtension``.
That is convenient and perfectly fine in most cases, but it can be good to know that it is possible to
start the mediator actor as an ordinary actor, and that you can have several different mediators at the same
time, to be able to divide a large number of actors/topics between different mediators. For example you might
want to use different cluster roles for different mediators.

The ``DistributedPubSubExtension`` can be configured with the following properties:

.. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#pub-sub-ext-config

It is recommended to load the extension when the actor system is started by defining it in the
``akka.extensions`` configuration property. Otherwise it will be activated when first used,
and then it takes a while for it to be populated.

::

  akka.extensions = ["akka.cluster.pubsub.DistributedPubSubExtension"]

Dependencies
------------

To use Distributed Publish Subscribe you must add the following dependency to your project.

sbt::

  "com.typesafe.akka" %% "akka-cluster-tools" % "@version@" @crossString@

maven::

  <dependency>
    <groupId>com.typesafe.akka</groupId>
    <artifactId>akka-cluster-tools_@binVersion@</artifactId>
    <version>@version@</version>
  </dependency>

@@ -6,6 +6,9 @@ Networking

   ../common/cluster
   cluster-usage
   cluster-singleton
   distributed-pub-sub
   cluster-client
   cluster-metrics
   remoting
   serialization