=doc Links to activator and some doc improvements

Patrik Nordwall 2014-02-21 11:24:01 +01:00
parent a6c29dc064
commit d1a7956d17
7 changed files with 61 additions and 35 deletions

View file

@@ -83,6 +83,9 @@ The ``initialContacts`` parameter is a ``Set[ActorSelection]``, which can be cre
You will probably define the address information of the initial contact points in configuration or as a system property.
More comprehensive samples are available in the `Typesafe Activator <http://typesafe.com/platform/getstarted>`_
tutorials named `Distributed workers with Akka and Scala! <http://typesafe.com/activator/template/akka-distributed-workers>`_
and `Distributed workers with Akka and Java! <http://typesafe.com/activator/template/akka-distributed-workers-java>`_.
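For quick reference, a minimal sketch of wiring up a client could look like the following, assuming
the 2.3-era ``akka.contrib.pattern.ClusterClient`` API; the system names, host/port values and the
``/user/serviceA/worker1`` path are placeholders::

  import akka.actor.{ ActorSelection, ActorSystem }
  import akka.contrib.pattern.ClusterClient

  val system = ActorSystem("ClientSys")

  // Addresses of the initial contact points, normally taken from configuration or a
  // system property; the receptionist is assumed to run at /user/receptionist on the
  // cluster nodes.
  val initialContacts: Set[ActorSelection] = Set(
    system.actorSelection("akka.tcp://OtherSys@host1:2552/user/receptionist"),
    system.actorSelection("akka.tcp://OtherSys@host2:2552/user/receptionist"))

  val client = system.actorOf(ClusterClient.props(initialContacts), "client")

  // Messages are tunneled to an actor that was registered with the receptionist.
  client ! ClusterClient.Send("/user/serviceA/worker1", "hello", localAffinity = true)

The client also understands ``ClusterClient.SendToAll`` and ``ClusterClient.Publish`` for the
other messaging patterns.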
ClusterReceptionistExtension
----------------------------

View file

@@ -3,14 +3,17 @@
Cluster Sharding
================
The typical use case for this feature is when you have many stateful actors that together consume
more resources (e.g. memory) than fit on one machine. You need to distribute them across
several nodes in the cluster and you want to be able to interact with them using their
logical identifier, but without having to care about their physical location in the cluster,
which might also change over time. It could for example be actors representing Aggregate Roots in
Domain-Driven Design terminology. Here we call these actors "entries". These actors
typically have persistent (durable) state, but this feature is not limited to
actors with persistent state.
Cluster sharding is useful when you need to distribute actors across several nodes in the cluster and want to
be able to interact with them using their logical identifier, but without having to care about
their physical location in the cluster, which might also change over time.
It could for example be actors representing Aggregate Roots in Domain-Driven Design terminology.
Here we call these actors "entries". These actors typically have persistent (durable) state,
but this feature is not limited to actors with persistent state.
Cluster sharding is typically used when you have many stateful actors that together consume
more resources (e.g. memory) than fit on one machine. If you only have a few stateful actors
it might be easier to run them on a :ref:`cluster-singleton` node.
In this context sharding means that actors with an identifier, so-called entries,
can be automatically distributed across multiple nodes in the cluster. Each entry
@@ -107,6 +110,9 @@ first message for a specific entry is delivered.
.. includecode:: @contribSrc@/src/multi-jvm/scala/akka/contrib/pattern/ClusterShardingSpec.scala#counter-usage
A more comprehensive sample is available in the `Typesafe Activator <http://typesafe.com/platform/getstarted>`_
tutorial named `Akka Cluster Sharding with Scala! <http://typesafe.com/activator/template/akka-cluster-sharding-scala>`_.
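For quick reference, a condensed sketch along the lines of that counter sample, assuming the
2.3-era contrib API (the ``Counter`` actor and ``EntryEnvelope`` message below are simplified,
hypothetical stand-ins)::

  import akka.actor.{ Actor, ActorRef, ActorSystem, Props }
  import akka.contrib.pattern.{ ClusterSharding, ShardRegion }

  // Simplified entry actor; real entries typically keep durable state with akka-persistence.
  class Counter extends Actor {
    var count = 0
    def receive = {
      case "increment" => count += 1
      case "get"       => sender() ! count
    }
  }

  // Envelope carrying the logical entry identifier.
  case class EntryEnvelope(id: Long, payload: Any)

  val idExtractor: ShardRegion.IdExtractor = {
    case EntryEnvelope(id, payload) => (id.toString, payload)
  }

  val shardResolver: ShardRegion.ShardResolver = msg => msg match {
    case EntryEnvelope(id, _) => (id % 10).toString
  }

  val system = ActorSystem("ClusterSystem")
  ClusterSharding(system).start(
    typeName = "Counter",
    entryProps = Some(Props[Counter]),
    idExtractor = idExtractor,
    shardResolver = shardResolver)

  // All interaction goes through the local shard region; the entry's node is transparent.
  val counterRegion: ActorRef = ClusterSharding(system).shardRegion("Counter")
  counterRegion ! EntryEnvelope(42, "increment")

The ``idExtractor`` decides which entry a message belongs to and the ``shardResolver`` decides
which shard that entry lives in; both should be pure functions of the message.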
How it works
------------

View file

@@ -1,7 +1,7 @@
.. _cluster-singleton:
Cluster Singleton Pattern
=========================
Cluster Singleton
=================
For some use cases it is convenient and sometimes also mandatory to ensure that
you have exactly one actor of a certain type running somewhere in the cluster.
@@ -19,7 +19,7 @@ such as single-point of bottleneck. Single-point of failure is also a relevant c
but for some cases this feature takes care of that by making sure that another singleton
instance will eventually be started.
The cluster singleton pattern is implemented by ``akka.contrib.pattern.ClusterSingletonManager``.
The cluster singleton is implemented by ``akka.contrib.pattern.ClusterSingletonManager``.
It manages one singleton actor instance among all cluster nodes or a group of nodes tagged with
a specific role. ``ClusterSingletonManager`` is an actor that is supposed to be started on
all nodes, or all nodes with the specified role, in the cluster. The actual singleton actor is
@@ -84,10 +84,6 @@ Here is how the singleton actor handles the ``terminationMessage`` in this examp
.. includecode:: @contribSrc@/src/multi-jvm/scala/akka/contrib/pattern/ClusterSingletonManagerSpec.scala#consumer-end
Note that you can send back the current state to the ``ClusterSingletonManager`` before terminating.
This message will be sent over to the ``ClusterSingletonManager`` at the new oldest node and it
will be passed to the ``singletonProps`` factory when creating the new singleton instance.
With the names given above, the path of the singleton actor can be constructed by subscribing to
the ``MemberEvent`` cluster events and sorting the members by age to keep track of the oldest member,
as sketched below.
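A rough sketch of such a proxy, assuming the singleton manager was started at ``/user/singleton``
with the singleton actor named ``consumer`` (hypothetical names, matching the examples on this page)::

  import scala.collection.immutable.SortedSet
  import akka.actor.{ Actor, ActorPath, RootActorPath }
  import akka.cluster.{ Cluster, Member, MemberStatus }
  import akka.cluster.ClusterEvent._

  class ConsumerProxy extends Actor {
    val cluster = Cluster(context.system)
    // Sort members so that the oldest comes first; that node hosts the singleton.
    val ageOrdering = Ordering.fromLessThan[Member] { (a, b) => a.isOlderThan(b) }
    var membersByAge: SortedSet[Member] = SortedSet.empty(ageOrdering)

    override def preStart(): Unit = cluster.subscribe(self, classOf[MemberEvent])
    override def postStop(): Unit = cluster.unsubscribe(self)

    def singletonPath(m: Member): ActorPath =
      RootActorPath(m.address) / "user" / "singleton" / "consumer"

    def receive = {
      case state: CurrentClusterState =>
        membersByAge = SortedSet.empty(ageOrdering) ++
          state.members.filter(_.status == MemberStatus.Up)
      case MemberUp(m)         => membersByAge += m
      case MemberRemoved(m, _) => membersByAge -= m
      case _: MemberEvent      => // other member events are not interesting here
      case msg =>
        // Route anything else to the singleton on the oldest node, keeping the sender.
        membersByAge.headOption foreach { oldest =>
          context.actorSelection(singletonPath(oldest)).tell(msg, sender())
        }
    }
  }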
@@ -109,4 +105,8 @@ A nice alternative to the above proxy is to use :ref:`distributed-pub-sub`. Let
actor register itself with the mediator using a ``DistributedPubSubMediator.Put`` message when it is
started. Send messages to the singleton actor via the mediator with ``DistributedPubSubMediator.SendToAll``.
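A small sketch of that alternative, assuming the mediator is started on all nodes and reusing the
hypothetical consumer naming from above::

  import akka.actor.Actor
  import akka.contrib.pattern.{ DistributedPubSubExtension, DistributedPubSubMediator }

  // The singleton actor registers itself (under its actor path) when it starts.
  class Consumer extends Actor {
    val mediator = DistributedPubSubExtension(context.system).mediator
    override def preStart(): Unit = mediator ! DistributedPubSubMediator.Put(self)

    def receive = {
      case msg => // handle messages delivered via the mediator
    }
  }

  // Any actor in the cluster can then reach the singleton through its logical path,
  // without knowing which node it is currently running on.
  class Producer extends Actor {
    val mediator = DistributedPubSubExtension(context.system).mediator

    def receive = {
      case work =>
        mediator ! DistributedPubSubMediator.SendToAll(
          "/user/singleton/consumer", work, allButSelf = false)
    }
  }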
.. note:: The singleton pattern will be simplified, perhaps provided out-of-the-box, when the cluster handles automatic actor partitioning.
More comprehensive samples are available in the `Typesafe Activator <http://typesafe.com/platform/getstarted>`_
tutorials named `Distributed workers with Akka and Scala! <http://typesafe.com/activator/template/akka-distributed-workers>`_
and `Distributed workers with Akka and Java! <http://typesafe.com/activator/template/akka-distributed-workers-java>`_.
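Putting the pieces together, starting the manager itself is only a few lines on each node. A minimal
sketch, assuming the 2.3-era API in which ``singletonProps`` is a plain ``Props``, and reusing the
hypothetical ``Consumer`` actor and a hypothetical ``worker`` role::

  import akka.actor.{ ActorSystem, PoisonPill, Props }
  import akka.contrib.pattern.ClusterSingletonManager

  val system = ActorSystem("ClusterSystem")

  // Started on every node carrying the "worker" role; the manager makes sure the
  // actual Consumer singleton only runs on the oldest of those nodes.
  system.actorOf(ClusterSingletonManager.props(
    singletonProps = Props[Consumer],
    singletonName = "consumer",
    terminationMessage = PoisonPill,
    role = Some("worker")),
    name = "singleton")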

View file

@@ -108,6 +108,9 @@ It can publish messages to the topic from anywhere in the cluster:
.. includecode:: @contribSrc@/src/multi-jvm/scala/akka/contrib/pattern/DistributedPubSubMediatorSpec.scala#publish-message
A more comprehensive sample is available in the `Typesafe Activator <http://typesafe.com/platform/getstarted>`_
tutorial named `Akka Clustered PubSub with Scala! <http://typesafe.com/activator/template/akka-clustering>`_.
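For quick reference, a condensed sketch along the lines of that sample, assuming the 2.3-era
contrib API (the ``content`` topic and the actor names are placeholders)::

  import akka.actor.{ Actor, ActorLogging }
  import akka.contrib.pattern.{ DistributedPubSubExtension, DistributedPubSubMediator }

  // Subscribes to the "content" topic via the local mediator.
  class Subscriber extends Actor with ActorLogging {
    val mediator = DistributedPubSubExtension(context.system).mediator
    mediator ! DistributedPubSubMediator.Subscribe("content", self)

    def receive = {
      case _: DistributedPubSubMediator.SubscribeAck =>
        log.info("subscribed to topic 'content'")
      case text: String =>
        log.info("got message: {}", text)
    }
  }

  // Publishes to the topic from any node; the mediator fans the message out
  // to all subscribers in the cluster.
  class Publisher extends Actor {
    val mediator = DistributedPubSubExtension(context.system).mediator

    def receive = {
      case text: String => mediator ! DistributedPubSubMediator.Publish("content", text)
    }
  }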
DistributedPubSubExtension
--------------------------

View file

@@ -3,7 +3,7 @@
Reliable Proxy Pattern
======================
Looking at :ref:`message-delivery-guarantees` one might come to the conclusion that
Looking at :ref:`message-delivery-reliability` one might come to the conclusion that
Akka actors are made for blue-sky scenarios: sending messages is the only way
for actors to communicate, and then that is not even guaranteed to work. Is the
whole paradigm built on sand? Of course the answer is an emphatic “No!”.

View file

@@ -281,34 +281,41 @@ has at least the defined number of members.
This callback can be used for other things than starting actors.
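For example, a callback registered with ``Cluster(system).registerOnMemberUp`` might simply log or
kick off any other bootstrap logic; a minimal sketch (the log message is just an example)::

  import akka.actor.ActorSystem
  import akka.cluster.Cluster

  val system = ActorSystem("ClusterSystem")

  // Invoked when this member is marked Up, which together with the
  // akka.cluster.min-nr-of-members setting means the cluster has reached
  // the configured minimum size.
  Cluster(system).registerOnMemberUp {
    system.log.info("Cluster is up with the required number of members")
  }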
Cluster Singleton Pattern
^^^^^^^^^^^^^^^^^^^^^^^^^
Cluster Singleton
^^^^^^^^^^^^^^^^^
For some use cases it is convenient and sometimes also mandatory to ensure that
you have exactly one actor of a certain type running somewhere in the cluster.
This can be implemented by subscribing to member events, but there are several corner
cases to consider. Therefore, this specific use case is made easily accessible by the
:ref:`cluster-singleton` in the contrib module. You can use it as is, or adjust to fit
your specific needs.
:ref:`cluster-singleton` in the contrib module.
Cluster Sharding
^^^^^^^^^^^^^^^^
When you have many stateful actors that together consume more resources (e.g. memory) than fit on one machine
you need to distribute them across several nodes in the cluster. You want to be able to interact with them using their
logical identifier, but without having to care about their physical location in the cluster.
Distributes actors across several nodes in the cluster and supports interaction
with the actors using their logical identifier, but without having to care about
their physical location in the cluster.
See :ref:`cluster-sharding` in the contrib module.
Distributed Publish Subscribe Pattern
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Distributed Publish Subscribe
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Publish-subscribe messaging between actors in the cluster, and point-to-point messaging
using the logical path of the actors, i.e. the sender does not have to know on which
node the destination actor is running.
See :ref:`distributed-pub-sub` in the contrib module.
Cluster Client
^^^^^^^^^^^^^^
Communication from an actor system that is not part of the cluster to actors running
somewhere in the cluster. The client does not have to know on which node the destination
actor is running.
See :ref:`cluster-client` in the contrib module.
Failure Detector

View file

@@ -275,34 +275,41 @@ has at least the defined number of members.
This callback can be used for other things than starting actors.
Cluster Singleton Pattern
^^^^^^^^^^^^^^^^^^^^^^^^^
Cluster Singleton
^^^^^^^^^^^^^^^^^
For some use cases it is convenient and sometimes also mandatory to ensure that
you have exactly one actor of a certain type running somewhere in the cluster.
This can be implemented by subscribing to member events, but there are several corner
cases to consider. Therefore, this specific use case is made easily accessible by the
:ref:`cluster-singleton` in the contrib module. You can use it as is, or adjust to fit
your specific needs.
:ref:`cluster-singleton` in the contrib module.
Cluster Sharding
^^^^^^^^^^^^^^^^
When you have many stateful actors that together consume more resources (e.g. memory) than fit on one machine
you need to distribute them across several nodes in the cluster. You want to be able to interact with them using their
logical identifier, but without having to care about their physical location in the cluster.
Distributes actors across several nodes in the cluster and supports interaction
with the actors using their logical identifier, but without having to care about
their physical location in the cluster.
See :ref:`cluster-sharding` in the contrib module.
Distributed Publish Subscribe Pattern
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Distributed Publish Subscribe
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Publish-subscribe messaging between actors in the cluster, and point-to-point messaging
using the logical path of the actors, i.e. the sender does not have to know on which
node the destination actor is running.
See :ref:`distributed-pub-sub` in the contrib module.
Cluster Client
^^^^^^^^^^^^^^
Communication from an actor system that is not part of the cluster to actors running
somewhere in the cluster. The client does not have to know on which node the destination
actor is running.
See :ref:`cluster-client` in the contrib module.
Failure Detector