Merge pull request #22667 from 2m/wip/example-code-service/2m
Use Example Code Service for samples
commit f87ec658a5
30 changed files with 601 additions and 595 deletions
@@ -195,8 +195,9 @@ message send/receive.
 .. includecode:: ../../../akka-remote-tests/src/multi-jvm/scala/akka/remote/sample/MultiNodeSample.scala
    :include: package,spec

-The easiest way to run this example yourself is to download `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
-and open the tutorial named `Akka Multi-Node Testing Sample with Scala <http://www.lightbend.com/activator/template/akka-sample-multi-node-scala>`_.
+The easiest way to run this example yourself is to download the ready to run
+`Akka Multi-Node Testing Sample with Scala <@exampleCodeService@/akka-samples-multi-node-scala>`_
+together with the tutorial. The source code of this sample can be found in the `Akka Samples Repository <@samples@/akka-sample-multi-node-scala>`_.

 Things to Keep in Mind
 ======================
@@ -206,7 +207,7 @@ surprising ways.

 * Don't issue a shutdown of the first node. The first node is the controller and if it shuts down your test will break.

 * To be able to use ``blackhole``, ``passThrough``, and ``throttle`` you must activate the failure injector and
   throttler transport adapters by specifying ``testTransport(on = true)`` in your MultiNodeConfig.

 * Throttling, shutdown and other failure injections can only be done from the first node, which again is the controller.
@@ -23,7 +23,7 @@ Native Packager
 `sbt-native-packager <https://github.com/sbt/sbt-native-packager>`_ is a tool for creating
 distributions of any type of application, including Akka applications.

 Define sbt version in ``project/build.properties`` file:

 .. code-block:: none
@@ -47,7 +47,7 @@ that you will need to take special care with the network configuration when using
 described here: :ref:`remote-configuration-nat`

 For an example of how to set up a project using Akka Cluster and Docker take a look at the
-`"akka-docker-cluster" activator template`__.
+`"akka-docker-cluster" sample`__.

-__ https://www.lightbend.com/activator/template/akka-docker-cluster
+__ https://github.com/muuki88/activator-akka-docker
@@ -11,17 +11,11 @@ later installed on your machine.
 as part of the `Lightbend Reactive Platform <http://www.lightbend.com/platform>`_ which is made available
 for Java 6 in case your project can not upgrade to Java 8 just yet. It also includes additional commercial features or libraries.

-Getting Started Guides and Template Projects
---------------------------------------------
-
-The best way to start learning Akka is to download `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
-and try out one of Akka Template Projects.
-
 Download
 --------

 There are several ways to download Akka. You can download it as part of the Lightbend Platform
 (as described above). You can download the full distribution, which includes all modules.
 Or you can use a build tool like Maven or SBT to download dependencies from the Akka Maven repository.

 Modules
@@ -108,9 +102,8 @@ For previous Akka versions:
 Using Akka with Maven
 ---------------------

-The simplest way to get started with Akka and Maven is to check out the
-`Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
-tutorial named `Akka Main in Java <http://www.lightbend.com/activator/template/akka-sample-main-java>`_.
+The simplest way to get started with Akka and Maven is to download the ready to run sample
+named `Akka Main in Java <@exampleCodeService@/akka-samples-main-java>`_.

 Since Akka is published to Maven Central (for versions since 2.1-M2), it is
 enough to add the Akka dependencies to the POM. For example, here is the
@@ -144,8 +137,11 @@ For snapshot versions, the snapshot repository needs to be added as well:
 Using Akka with SBT
 -------------------

-The simplest way to get started with Akka and SBT is to use
-`Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_ with one of the SBT `templates <https://www.lightbend.com/activator/templates>`_.
+The simplest way to get started with Akka and SBT is to use a `Giter8 <http://www.foundweekends.org/giter8/>`_ template
+named `Hello Akka! <https://github.com/akka/hello-akka.g8>`_. If you have `sbt` already installed, you can create a project
+from this template by running::
+
+    sbt new akka/hello-akka.g8

 Summary of the essential parts for using Akka with SBT:
@@ -260,4 +256,3 @@ If you have questions you can get help on the `Akka Mailing List <https://groups
 You can also ask for `commercial support <https://www.lightbend.com>`_.

 Thanks for being a part of the Akka community.
@@ -1,7 +1,7 @@
 .. _actors-java:

 ########
  Actors
 ########

 The `Actor Model`_ provides a higher level of abstraction for writing concurrent
@@ -196,8 +196,7 @@ __ Props_
 Techniques for dependency injection and integration with dependency injection frameworks
 are described in more depth in the
 `Using Akka with Dependency Injection <http://letitcrash.com/post/55958814293/akka-dependency-injection>`_
-guideline and the `Akka Java Spring <http://www.lightbend.com/activator/template/akka-java-spring>`_ tutorial
-in Lightbend Activator.
+guideline and the `Akka Java Spring <https://github.com/typesafehub/activator-akka-java-spring>`_ tutorial.

 The Inbox
 ---------
@@ -617,7 +616,7 @@ the :meth:`createReceive` method in the :class:`AbstractActor`:
 .. includecode:: code/jdocs/actor/ActorDocTest.java#createReceive

 The return type is :class:`AbstractActor.Receive` that defines which messages your Actor can handle,
 along with the implementation of how the messages should be processed.
 You can build such behavior with a builder named ``ReceiveBuilder``.
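For context, here is a minimal sketch of the pattern this hunk documents, an ``AbstractActor`` whose behavior is assembled with ``ReceiveBuilder`` (illustrative only; the ``Greeter`` class is made up and is not the ``ActorDocTest.java#createReceive`` snippet):

.. code-block:: java

   import akka.actor.AbstractActor;

   public class Greeter extends AbstractActor {
     @Override
     public Receive createReceive() {
       // ReceiveBuilder assembles the partial function of handled messages
       return receiveBuilder()
           .matchEquals("ping", msg -> getSender().tell("pong", getSelf()))
           .match(String.class, msg -> System.out.println("Greeter got: " + msg))
           .build();
     }
   }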
@@ -632,7 +631,7 @@ trail, you can split the creation of the builder into multiple statements as in
 .. includecode:: code/jdocs/actor/GraduallyBuiltActor.java
    :include: imports,actor

 Using small methods is a good practice, also in actors. It's recommended to delegate the
 actual work of the message processing to methods instead of defining a huge ``ReceiveBuilder``
 with lots of code in each lambda. A well structured actor can look like this:
@@ -652,10 +651,10 @@ to `Javaslang Pattern Matching DSL <http://www.javaslang.io/javaslang-jdocs/#_pa

 If the validation of the ``ReceiveBuilder`` match logic turns out to be a bottleneck for some of your
 actors you can consider to implement it at lower level by extending ``UntypedAbstractActor`` instead
 of ``AbstractActor``. The partial functions created by the ``ReceiveBuilder`` consist of multiple lambda
 expressions for every match statement, where each lambda is referencing the code to be run. This is something
 that the JVM can have problems optimizing and the resulting code might not be as performant as the
 untyped version. When extending ``UntypedAbstractActor`` each message is received as an untyped
 ``Object`` and you have to inspect and cast it to the actual message type in other ways, like this:

 .. includecode:: code/jdocs/actor/ActorDocTest.java#optimized
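For contrast, a minimal sketch of the ``UntypedAbstractActor`` variant described above (an assumption for illustration, not the ``ActorDocTest.java#optimized`` snippet):

.. code-block:: java

   import akka.actor.UntypedAbstractActor;

   public class OptimizedActor extends UntypedAbstractActor {
     @Override
     public void onReceive(Object message) {
       if (message instanceof String) {
         // inspect and cast the untyped message yourself
         String s = (String) message;
         System.out.println("received: " + s);
       } else {
         unhandled(message); // anything else goes to unhandled
       }
     }
   }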
@@ -786,12 +785,12 @@ before stopping the target actor. Simple cleanup tasks can be handled in ``postStop``.
 within a supervisor you control and only in response to a :class:`Terminated`
 message, i.e. not for top-level actors.

 .. _coordinated-shutdown-java:

 Coordinated Shutdown
 --------------------

 There is an extension named ``CoordinatedShutdown`` that will stop certain actors and
 services in a specific order and perform registered tasks during the shutdown process.

 The order of the shutdown phases is defined in configuration ``akka.coordinated-shutdown.phases``.
@@ -803,26 +802,26 @@ More phases can be added in the application's configuration if needed by overriding a phase with an
 additional ``depends-on``. Especially the phases ``before-service-unbind``, ``before-cluster-shutdown`` and
 ``before-actor-system-terminate`` are intended for application specific phases or tasks.

 The default phases are defined in a single linear order, but the phases can be ordered as a
 directed acyclic graph (DAG) by defining the dependencies between the phases.
 The phases are ordered with `topological <https://en.wikipedia.org/wiki/Topological_sorting>`_ sort of the DAG.

 Tasks can be added to a phase with:

 .. includecode:: code/jdocs/actor/ActorDocTest.java#coordinated-shutdown-addTask

 The returned ``CompletionStage<Done>`` should be completed when the task is completed. The task name parameter
 is only used for debugging/logging.

 Tasks added to the same phase are executed in parallel without any ordering assumptions.
 Next phase will not start until all tasks of previous phase have been completed.

 If tasks are not completed within a configured timeout (see :ref:`reference.conf <config-akka-actor>`)
 the next phase will be started anyway. It is possible to configure ``recover=off`` for a phase
 to abort the rest of the shutdown process if a task fails or is not completed within the timeout.

 Tasks should typically be registered as early as possible after system startup. When running
 the coordinated shutdown tasks that have been registered will be performed but tasks that are
 added too late will not be run.

 To start the coordinated shutdown process you can invoke ``runAll`` on the ``CoordinatedShutdown``
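As an illustration of the ``addTask`` API described above, a small sketch (the phase choice, task name and system name are example assumptions, not taken from the commit):

.. code-block:: java

   import akka.Done;
   import akka.actor.ActorSystem;
   import akka.actor.CoordinatedShutdown;
   import java.util.concurrent.CompletableFuture;

   public class ShutdownTaskExample {
     public static void main(String[] args) {
       ActorSystem system = ActorSystem.create("example");
       // register a task in the before-service-unbind phase; completing the
       // returned CompletionStage<Done> tells CoordinatedShutdown it is done
       CoordinatedShutdown.get(system).addTask(
           CoordinatedShutdown.PhaseBeforeServiceUnbind(), "sampleTask",
           () -> CompletableFuture.completedFuture(Done.getInstance()));
     }
   }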
@@ -840,9 +839,9 @@ To enable a hard ``System.exit`` as a final action you can configure::

 When using :ref:`Akka Cluster <cluster_usage_java>` the ``CoordinatedShutdown`` will automatically run
 when the cluster node sees itself as ``Exiting``, i.e. leaving from another node will trigger
 the shutdown process on the leaving node. Tasks for graceful leaving of cluster including graceful
 shutdown of Cluster Singletons and Cluster Sharding are added automatically when Akka Cluster is used,
 i.e. running the shutdown process will also trigger the graceful leaving if it's not already in progress.

 By default, the ``CoordinatedShutdown`` will be run when the JVM process exits, e.g.
 via ``kill SIGTERM`` signal (``SIGINT`` ctrl-c doesn't work). This behavior can be disabled with::
@@ -850,13 +849,13 @@ via ``kill SIGTERM`` signal (``SIGINT`` ctrl-c doesn't work). This behavior can
   akka.coordinated-shutdown.run-by-jvm-shutdown-hook=off

 If you have application specific JVM shutdown hooks it's recommended that you register them via the
 ``CoordinatedShutdown`` so that they are running before Akka internal shutdown hooks, e.g.
 those shutting down Akka Remoting (Artery).

 .. includecode:: code/jdocs/actor/ActorDocTest.java#coordinated-shutdown-jvm-hook

 For some tests it might be undesired to terminate the ``ActorSystem`` via ``CoordinatedShutdown``.
 You can disable that by adding the following to the configuration of the ``ActorSystem`` that is
 used in the test::

   # Don't terminate ActorSystem via CoordinatedShutdown in tests
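A sketch of registering such a hook (assumed, not the ``#coordinated-shutdown-jvm-hook`` snippet itself):

.. code-block:: java

   import akka.actor.ActorSystem;
   import akka.actor.CoordinatedShutdown;

   public class JvmHookExample {
     public static void main(String[] args) {
       ActorSystem system = ActorSystem.create("example");
       // runs before the Akka-internal JVM shutdown hooks, e.g. Artery shutdown
       CoordinatedShutdown.get(system).addJvmShutdownHook(
           () -> System.out.println("custom JVM shutdown hook"));
     }
   }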
@@ -327,10 +327,10 @@ also :ref:`camel-examples-java` that implements both, an asynchronous
 consumer and an asynchronous producer, with the jetty component.

 If the used Camel component is blocking it might be necessary to use a separate
 :ref:`dispatcher <dispatchers-java>` for the producer. The Camel processor is
 invoked by a child actor of the producer and the dispatcher can be defined in
 the deployment section of the configuration. For example, if your producer actor
 has path ``/user/integration/output`` the dispatcher of the child actor can be
 defined with::

   akka.actor.deployment {
@@ -480,13 +480,12 @@ __ https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/j
 Examples
 ========

-The `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
-tutorial named `Akka Camel Samples with Java <http://www.lightbend.com/activator/template/akka-sample-camel-java>`_
+The sample named `Akka Camel Samples with Java <@exampleCodeService@/akka-samples-camel-java>`_ (`source code <@samples@/akka-sample-camel-java>`_)
 contains 3 samples:

 * Asynchronous routing and transformation - This example demonstrates how to implement consumer and
   producer actors that support :ref:`camel-asynchronous-routing-java` with their Camel endpoints.

 * Custom Camel route - Demonstrates the combined usage of a ``Producer`` and a
   ``Consumer`` actor as well as the inclusion of a custom Camel route.
@@ -10,18 +10,18 @@ contact points. It will establish a connection to a ``ClusterReceptionist`` somewhere in
 the cluster. It will monitor the connection to the receptionist and establish a new
 connection if the link goes down. When looking for a new receptionist it uses fresh
 contact points retrieved from previous establishment, or periodically refreshed contacts,
 i.e. not necessarily the initial contact points.

 .. note::

   ``ClusterClient`` should not be used when sending messages to actors that run
   within the same cluster. Similar functionality as the ``ClusterClient`` is
   provided in a more efficient way by :ref:`distributed-pub-sub-java` for actors that
   belong to the same cluster.

 Also, note it's necessary to change ``akka.actor.provider`` from ``local``
 to ``remote`` or ``cluster`` when using
 the cluster client.

 The receptionist is supposed to be started on all nodes, or all nodes with specified role,
 in the cluster. The receptionist can be started with the ``ClusterClientReceptionist`` extension
@@ -77,11 +77,11 @@ The size of the buffer is configurable and it can be disabled by using a buffer size of 0.
 It's worth noting that messages can always be lost because of the distributed nature
 of these actors. As always, additional logic should be implemented in the destination
 (acknowledgement) and in the client (retry) actors to ensure at-least-once message delivery.

 An Example
 ----------

 On the cluster nodes first start the receptionist. Note, it is recommended to load the extension
 when the actor system is started by defining it in the ``akka.extensions`` configuration property::

   akka.extensions = ["akka.cluster.client.ClusterClientReceptionist"]
@@ -103,8 +103,7 @@ The ``initialContacts`` parameter is a ``Set<ActorPath>``, which can be created
 You will probably define the address information of the initial contact points in configuration or system property.
 See also :ref:`cluster-client-config-java`.

-A more comprehensive sample is available in the `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
-tutorial named `Distributed workers with Akka and Java! <http://www.lightbend.com/activator/template/akka-distributed-workers-java>`_.
+A more comprehensive sample is available in the tutorial named `Distributed workers with Akka and Java! <https://github.com/typesafehub/activator-akka-distributed-workers-java>`_.

 ClusterClientReceptionist Extension
 -----------------------------------
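For context, a sketch of building the ``initialContacts`` set and sending through the client (host names, actor paths and system name are made-up assumptions; akin to, but not copied from, the sample):

.. code-block:: java

   import akka.actor.ActorPath;
   import akka.actor.ActorPaths;
   import akka.actor.ActorRef;
   import akka.actor.ActorSystem;
   import akka.cluster.client.ClusterClient;
   import akka.cluster.client.ClusterClientSettings;
   import java.util.HashSet;
   import java.util.Set;

   public class ClientExample {
     public static ActorRef startClient(ActorSystem system) {
       Set<ActorPath> initialContacts = new HashSet<>();
       initialContacts.add(
           ActorPaths.fromString("akka.tcp://OtherSys@host1:2552/system/receptionist"));
       initialContacts.add(
           ActorPaths.fromString("akka.tcp://OtherSys@host2:2552/system/receptionist"));

       ActorRef client = system.actorOf(
           ClusterClient.props(
               ClusterClientSettings.create(system).withInitialContacts(initialContacts)),
           "client");
       // route a message to an actor registered at the receptionists
       client.tell(new ClusterClient.Send("/user/serviceA", "hello", true),
           ActorRef.noSender());
       return client;
     }
   }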
@@ -153,21 +152,21 @@ maven::
     </dependency>

 .. _cluster-client-config-java:

 Configuration
 -------------

 The ``ClusterClientReceptionist`` extension (or ``ClusterReceptionistSettings``) can be configured
 with the following properties:

 .. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#receptionist-ext-config

 The following configuration properties are read by the ``ClusterClientSettings``
 when created with an ``ActorSystem`` parameter. It is also possible to amend the ``ClusterClientSettings``
 or create it from another config section with the same layout as below. ``ClusterClientSettings`` is
 a parameter to the ``ClusterClient.props`` factory method, i.e. each client can be configured
 with different settings if needed.

 .. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#cluster-client-config

 Failure handling
@@ -190,4 +189,4 @@ within a configurable interval. This is configured with the ``reconnect-timeout``
 This can be useful when initial contacts are provided from some kind of service registry, cluster node addresses
 are entirely dynamic and the entire cluster might shut down or crash, be restarted on new addresses. Since the
 client will be stopped in that case a monitoring actor can watch it and upon ``Terminated`` a new set of initial
 contacts can be fetched and a new cluster client started.
@@ -164,9 +164,10 @@ The same type of router could also have been defined in code:

 .. includecode:: code/jdocs/cluster/FactorialFrontend.java#router-deploy-in-code

-The `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_ tutorial named
-`Akka Cluster Samples with Java <http://www.lightbend.com/activator/template/akka-sample-cluster-java>`_.
-contains the full source code and instructions of how to run the **Adaptive Load Balancing** sample.
+The easiest way to run the **Adaptive Load Balancing** example yourself is to download the ready to run
+`Akka Cluster Sample with Java <@exampleCodeService@/akka-samples-cluster-java>`_
+together with the tutorial. It contains instructions on how to run the **Adaptive Load Balancing** sample.
+The source code of this sample can be found in the `Akka Samples Repository <@samples@/akka-sample-cluster-java>`_.

 Subscribe to Metrics Events
 ---------------------------
@@ -44,7 +44,7 @@ the oldest node in the cluster and resolve the singleton's ``ActorRef`` by explicitly sending the
 singleton's ``actorSelection`` the ``akka.actor.Identify`` message and waiting for it to reply.
 This is performed periodically if the singleton doesn't reply within a certain (configurable) time.
 Given the implementation, there might be periods of time during which the ``ActorRef`` is unavailable,
 e.g., when a node leaves the cluster. In these cases, the proxy will buffer the messages sent to the
 singleton and then deliver them when the singleton is finally available. If the buffer is full
 the ``ClusterSingletonProxy`` will drop old messages when new messages are sent via the proxy.
 The size of the buffer is configurable and it can be disabled by using a buffer size of 0.
@@ -63,7 +63,7 @@ This pattern may seem to be very tempting to use at first, but it has several drawbacks
 * the cluster singleton may quickly become a *performance bottleneck*,

 * you can not rely on the cluster singleton to be *non-stop* available — e.g. when the node on which the singleton has
   been running dies, it will take a few seconds for this to be noticed and the singleton be migrated to another node,

 * in the case of a *network partition* appearing in a Cluster that is using Automatic Downing (see docs for
   :ref:`automatic-vs-manual-downing-java`),
   it may happen that the isolated clusters each decide to spin up their own singleton, meaning that there might be multiple
   singletons running in the system, yet the Clusters have no way of finding out about them (because of the partition).
@@ -102,8 +102,7 @@ configured proxy.

 .. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/singleton/ClusterSingletonManagerTest.java#create-singleton-proxy

-A more comprehensive sample is available in the `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
-tutorial named `Distributed workers with Akka and Java! <http://www.lightbend.com/activator/template/akka-distributed-workers-java>`_.
+A more comprehensive sample is available in the tutorial named `Distributed workers with Akka and Java! <https://github.com/typesafehub/activator-akka-distributed-workers-java>`_.

 Dependencies
 ------------
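A sketch of starting such a proxy (assumed names and role; not the ``#create-singleton-proxy`` snippet):

.. code-block:: java

   import akka.actor.ActorRef;
   import akka.actor.ActorSystem;
   import akka.cluster.singleton.ClusterSingletonProxy;
   import akka.cluster.singleton.ClusterSingletonProxySettings;

   public class ProxyExample {
     public static ActorRef createProxy(ActorSystem system) {
       ClusterSingletonProxySettings settings =
           ClusterSingletonProxySettings.create(system).withRole("worker");
       // the path must point at the ClusterSingletonManager that runs the singleton
       return system.actorOf(
           ClusterSingletonProxy.props("/user/consumerSingleton", settings),
           "consumerProxy");
     }
   }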
@@ -126,18 +125,18 @@ maven::
 Configuration
 -------------

 The following configuration properties are read by the ``ClusterSingletonManagerSettings``
 when created with an ``ActorSystem`` parameter. It is also possible to amend the ``ClusterSingletonManagerSettings``
 or create it from another config section with the same layout as below. ``ClusterSingletonManagerSettings`` is
 a parameter to the ``ClusterSingletonManager.props`` factory method, i.e. each singleton can be configured
 with different settings if needed.

 .. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#singleton-config

 The following configuration properties are read by the ``ClusterSingletonProxySettings``
 when created with an ``ActorSystem`` parameter. It is also possible to amend the ``ClusterSingletonProxySettings``
 or create it from another config section with the same layout as below. ``ClusterSingletonProxySettings`` is
 a parameter to the ``ClusterSingletonProxy.props`` factory method, i.e. each singleton proxy can be configured
 with different settings if needed.

 .. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#singleton-proxy-config
@@ -85,9 +85,10 @@ An actor that uses the cluster extension may look like this:
 The actor registers itself as subscriber of certain cluster events. It receives events corresponding to the current state
 of the cluster when the subscription starts and then it receives events for changes that happen in the cluster.

-The easiest way to run this example yourself is to download `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
-and open the tutorial named `Akka Cluster Samples with Java <http://www.lightbend.com/activator/template/akka-sample-cluster-java>`_.
-It contains instructions of how to run the ``SimpleClusterApp``.
+The easiest way to run this example yourself is to download the ready to run
+`Akka Cluster Sample with Java <@exampleCodeService@/akka-samples-cluster-java>`_
+together with the tutorial. It contains instructions on how to run the ``SimpleClusterApp``.
+The source code of this sample can be found in the `Akka Samples Repository <@samples@/akka-sample-cluster-java>`_.

 Joining to Seed Nodes
 ^^^^^^^^^^^^^^^^^^^^^
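A minimal sketch of such a subscriber actor (illustrative only, not the ``SimpleClusterApp`` source):

.. code-block:: java

   import akka.actor.AbstractActor;
   import akka.cluster.Cluster;
   import akka.cluster.ClusterEvent;
   import akka.cluster.ClusterEvent.MemberUp;

   public class ClusterListener extends AbstractActor {
     private final Cluster cluster = Cluster.get(getContext().getSystem());

     @Override
     public void preStart() {
       // deliver the current state as events first, then subsequent changes
       cluster.subscribe(getSelf(), ClusterEvent.initialStateAsEvents(),
           MemberUp.class);
     }

     @Override
     public void postStop() {
       cluster.unsubscribe(getSelf());
     }

     @Override
     public Receive createReceive() {
       return receiveBuilder()
           .match(MemberUp.class, up -> System.out.println("Member up: " + up.member()))
           .build();
     }
   }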
@@ -7,7 +7,7 @@

 *Akka Distributed Data* is useful when you need to share data between nodes in an
 Akka Cluster. The data is accessed with an actor providing a key-value store like API.
 The keys are unique identifiers with type information of the data values. The values
 are *Conflict Free Replicated Data Types* (CRDTs).

 All data entries are spread to all nodes, or nodes with a certain role, in the cluster
@@ -21,21 +21,21 @@ Several useful data types for counters, sets, maps and registers are provided and
 you can also implement your own custom data types.

 It is eventually consistent and geared toward providing high read and write availability
 (partition tolerance), with low latency. Note that in an eventually consistent system a read may return an
 out-of-date value.

 Using the Replicator
 ====================

 The ``akka.cluster.ddata.Replicator`` actor provides the API for interacting with the data.
 The ``Replicator`` actor must be started on each node in the cluster, or group of nodes tagged
 with a specific role. It communicates with other ``Replicator`` instances with the same path
 (without address) that are running on other nodes. For convenience it can be used with the
 ``akka.cluster.ddata.DistributedData`` extension but it can also be started as an ordinary
 actor using the ``Replicator.props``. If it is started as an ordinary actor it is important
 that it is given the same name, started on same path, on all nodes.

 Cluster members with status :ref:`WeaklyUp <weakly_up_java>`
 will participate in Distributed Data. This means that the data will be replicated to the
 :ref:`WeaklyUp <weakly_up_java>` nodes with the background gossip protocol. Note that it
 will not participate in any actions where the consistency mode is to read/write from all
@@ -43,9 +43,9 @@ nodes or the majority of nodes. The :ref:`WeaklyUp <weakly_up_java>` node is not counted
 as part of the cluster. So 3 nodes + 5 :ref:`WeaklyUp <weakly_up_java>` is essentially a
 3 node cluster as far as consistent actions are concerned.

 Below is an example of an actor that schedules tick messages to itself and for each tick
 adds or removes elements from an ``ORSet`` (observed-remove set). It also subscribes to
 changes of this.

 .. includecode:: code/jdocs/ddata/DataBot.java#data-bot
@@ -83,7 +83,7 @@ You supply a write consistency level which has the following meaning:
 When you specify to write to ``n`` out of ``x`` nodes, the update will first replicate to ``n`` nodes. If there are not
 enough Acks after 1/5th of the timeout, the update will be replicated to ``n`` other nodes. If there are less than n nodes
 left all of the remaining nodes are used. Reachable nodes are preferred over unreachable nodes.

 Note that ``WriteMajority`` has a ``minCap`` parameter that is useful to specify to achieve better safety for small clusters.

 .. includecode:: code/jdocs/ddata/DistributedDataDocTest.java#update
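For context, a sketch of sending an ``Update`` from inside an actor (the counter key and actor are hypothetical; not the ``#update`` snippet):

.. code-block:: java

   import akka.actor.AbstractActor;
   import akka.actor.ActorRef;
   import akka.cluster.Cluster;
   import akka.cluster.ddata.DistributedData;
   import akka.cluster.ddata.Key;
   import akka.cluster.ddata.PNCounter;
   import akka.cluster.ddata.PNCounterKey;
   import akka.cluster.ddata.Replicator;

   public class CounterUpdater extends AbstractActor {
     private final ActorRef replicator =
         DistributedData.get(getContext().getSystem()).replicator();
     private final Cluster node = Cluster.get(getContext().getSystem());
     private final Key<PNCounter> counterKey = PNCounterKey.create("counter");

     @Override
     public Receive createReceive() {
       return receiveBuilder()
           .matchEquals("increment", msg ->
               // writeLocal: update the local replica, gossip in the background
               replicator.tell(
                   new Replicator.Update<>(counterKey, PNCounter.create(),
                       Replicator.writeLocal(), curr -> curr.increment(node, 1)),
                   getSelf()))
           .build();
     }
   }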
@@ -111,7 +111,7 @@ or maintain local correlation data structures.
 .. includecode:: code/jdocs/ddata/DistributedDataDocTest.java#update-request-context

 .. _replicator_get_java:

 Get
 ---
@@ -159,7 +159,7 @@ The consistency level that is supplied in the :ref:`replicator_update_java` and :ref:`replicator_get_java`
 specifies per request how many replicas that must respond successfully to a write and read request.

 For low latency reads you use ``ReadLocal`` with the risk of retrieving stale data, i.e. updates
 from other nodes might not be visible yet.

 When using ``writeLocal`` the update is only written to the local replica and then disseminated
 in the background with the gossip protocol, which can take a few seconds to spread to all nodes.
@@ -171,7 +171,7 @@ and you will not receive the value.
 If consistency is important, you can ensure that a read always reflects the most recent
 write by using the following formula::

   (nodes_written + nodes_read) > N

 where N is the total number of nodes in the cluster, or the number of nodes with the role that is
 used for the ``Replicator``.
@@ -181,15 +181,15 @@ and reading from 4 nodes, or writing to 5 nodes and reading from 3 nodes.

 By combining ``WriteMajority`` and ``ReadMajority`` levels a read always reflects the most recent write.
 The ``Replicator`` writes and reads to a majority of replicas, i.e. **N / 2 + 1**. For example,
 in a 5 node cluster it writes to 3 nodes and reads from 3 nodes. In a 6 node cluster it writes
 to 4 nodes and reads from 4 nodes.

 For small clusters (<7) the risk of membership changes between a WriteMajority and ReadMajority
 is rather high and then the nice properties of combining majority write and reads are not
 guaranteed. Therefore the ``ReadMajority`` and ``WriteMajority`` have a ``minCap`` parameter that
 is useful to specify to achieve better safety for small clusters. It means that if the cluster
 size is smaller than the majority size it will use the ``minCap`` number of nodes but at most
 the total size of the cluster.

 Here is an example of using ``writeMajority`` and ``readMajority``:
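A sketch of these consistency levels in use (key and element names are assumptions; the ``replicator``, ``node`` and ``self`` references would come from the surrounding actor):

.. code-block:: java

   import static java.util.concurrent.TimeUnit.SECONDS;

   import akka.actor.ActorRef;
   import akka.cluster.Cluster;
   import akka.cluster.ddata.Key;
   import akka.cluster.ddata.ORSet;
   import akka.cluster.ddata.ORSetKey;
   import akka.cluster.ddata.Replicator;
   import akka.cluster.ddata.Replicator.ReadMajority;
   import akka.cluster.ddata.Replicator.WriteMajority;
   import scala.concurrent.duration.Duration;

   public class MajorityExample {
     public static void readThenWrite(ActorRef replicator, Cluster node, ActorRef self) {
       Key<ORSet<String>> setKey = ORSetKey.create("set");
       ReadMajority readMajority = new ReadMajority(Duration.create(3, SECONDS));
       WriteMajority writeMajority = new WriteMajority(Duration.create(3, SECONDS));

       // read from a majority of nodes ...
       replicator.tell(new Replicator.Get<>(setKey, readMajority), self);
       // ... and, after the GetSuccess/NotFound reply, write to a majority
       replicator.tell(
           new Replicator.Update<>(setKey, ORSet.create(), writeMajority,
               curr -> curr.add(node, "element")),
           self);
     }
   }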
@@ -202,7 +202,7 @@ Here is an example of using ``writeMajority`` and ``readMajority``:
 In some rare cases, when performing an ``Update`` it is needed to first try to fetch latest data from
 other nodes. That can be done by first sending a ``Get`` with ``ReadMajority`` and then continue with
 the ``Update`` when the ``GetSuccess``, ``GetFailure`` or ``NotFound`` reply is received. This might be
 needed when you need to base a decision on latest information or when removing entries from ``ORSet``
 or ``ORMap``. If an entry is added to an ``ORSet`` or ``ORMap`` from one node and removed from another
 node the entry will only be removed if the added entry is visible on the node where the removal is
 performed (hence the name observed-removed set).
@@ -215,11 +215,11 @@ The following example illustrates how to do that:

 *Caveat:* Even if you use ``writeMajority`` and ``readMajority`` there is a small risk that you may
 read stale data if the cluster membership has changed between the ``Update`` and the ``Get``.
 For example, in a cluster of 5 nodes when you ``Update`` and that change is written to 3 nodes:
 n1, n2, n3. Then 2 more nodes are added and a ``Get`` request is reading from 4 nodes, which
 happens to be n4, n5, n6, n7, i.e. the value on n1, n2, n3 is not seen in the response of the
 ``Get`` request.

 Subscribe
 ---------
@@ -255,10 +255,10 @@ Subscribers will receive ``Replicator.DataDeleted``.

 .. warning::

   As deleted keys continue to be included in the stored data on each node as well as in gossip
   messages, a continuous series of updates and deletes of top-level entities will result in
   growing memory usage until an ActorSystem runs out of memory. To use Akka Distributed Data
   where frequent adds and removes are required, you should use a fixed number of top-level data
   types that support both updates and removals, for example ``ORMap`` or ``ORSet``.

 .. _delta_crdt_java:
@@ -280,7 +280,7 @@ to nodes in different order than the causal order of the updates. For this example
 can result in that set ``{'a', 'b', 'd'}`` can be seen before element 'c' is seen. Eventually
 it will be ``{'a', 'b', 'c', 'd'}``.

 Note that the full state is occasionally also replicated for delta-CRDTs, for example when
 new nodes are added to the cluster or when deltas could not be propagated because
 of network partitions or similar problems.
@@ -294,7 +294,7 @@ Data Types
 The data types must be convergent (stateful) CRDTs and implement the ``ReplicatedData`` trait,
 i.e. they provide a monotonic merge function and the state changes always converge.

 You can use your own custom ``AbstractReplicatedData`` or ``AbstractDeltaReplicatedData`` types,
 and several types are provided by this package, such as:

 * Counters: ``GCounter``, ``PNCounter``
@@ -307,7 +307,7 @@ Counters

 ``GCounter`` is a "grow only counter". It only supports increments, no decrements.

 It works in a similar way as a vector clock. It keeps track of one counter per node and the total
 value is the sum of these counters. The ``merge`` is implemented by taking the maximum count for
 each node.
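The merge rule lends itself to a small sketch (``nodeA`` and ``nodeB`` stand for the ``Cluster`` extension of two different member nodes; an illustration, not from the docs):

.. code-block:: java

   import akka.cluster.Cluster;
   import akka.cluster.ddata.GCounter;

   public class GCounterMergeSketch {
     public static GCounter merged(Cluster nodeA, Cluster nodeB) {
       GCounter a = GCounter.create().increment(nodeA, 2); // node A counted 2
       GCounter b = GCounter.create().increment(nodeB, 3); // node B counted 3
       // merge keeps the maximum per node, so the total value is 2 + 3 = 5
       return a.merge(b);
     }
   }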
@@ -372,18 +372,18 @@ such as the following specialized maps.
 ``ORMultiMap`` (observed-remove multi-map) is a multi-map implementation that wraps an
 ``ORMap`` with an ``ORSet`` for the map's value.

 ``PNCounterMap`` (positive negative counter map) is a map of named counters. It is a specialized
 ``ORMap`` with ``PNCounter`` values.

 ``LWWMap`` (last writer wins map) is a specialized ``ORMap`` with ``LWWRegister`` (last writer wins register)
 values.

 .. includecode:: code/jdocs/ddata/DistributedDataDocTest.java#ormultimap

 When a data entry is changed the full state of that entry is replicated to other nodes, i.e.
 when you update a map the whole map is replicated. Therefore, instead of using one ``ORMap``
 with 1000 elements it is more efficient to split that up in 10 top level ``ORMap`` entries
 with 100 elements each. Top level entries are replicated individually, which has the
 trade-off that different entries may not be replicated at the same time and you may see
 inconsistencies between related entries. Separate top level entries cannot be updated atomically
 together.
@@ -472,16 +472,16 @@ Note that the elements of the sets are sorted so the SHA-1 digests are the same
 for the same elements.

 You register the serializer in configuration:

 .. includecode:: ../scala/code/docs/ddata/DistributedDataDocSpec.scala#japi-serializer-config

 Using compression can sometimes be a good idea to reduce the data size. Gzip compression is
 provided by the ``akka.cluster.ddata.protobuf.SerializationSupport`` trait:

 .. includecode:: code/jdocs/ddata/protobuf/TwoPhaseSetSerializerWithCompression.java#compression

 The two embedded ``GSet`` can be serialized as illustrated above, but in general when composing
 new data types from the existing built in types it is better to make use of the existing
 serializer for those types. This can be done by declaring those as bytes fields in protobuf:

 .. includecode:: ../../src/main/protobuf/TwoPhaseSetMessages.proto#twophaseset2
@@ -498,12 +498,12 @@ look like for the ``TwoPhaseSet``:
 Durable Storage
 ---------------

 By default the data is only kept in memory. It is redundant since it is replicated to other nodes
 in the cluster, but if you stop all nodes the data is lost, unless you have saved it
 elsewhere.

 Entries can be configured to be durable, i.e. stored on local disk on each node. The stored data will be loaded
 next time the replicator is started, i.e. when actor system is restarted. This means data will survive as
 long as at least one node from the old cluster takes part in a new cluster. The keys of the durable entries
 are configured with::
@@ -515,10 +515,10 @@ All entries can be made durable by specifying::

   akka.cluster.distributed-data.durable.keys = ["*"]

 `LMDB <https://github.com/lmdbjava/lmdbjava/>`_ is the default storage implementation. It is
 possible to replace that with another implementation by implementing the actor protocol described in
 ``akka.cluster.ddata.DurableStore`` and defining the ``akka.cluster.distributed-data.durable.store-actor-class``
 property for the new implementation.

 The location of the files for the data is configured with::
@@ -532,35 +532,35 @@ The location of the files for the data is configured with::

 When running in production you may want to configure the directory to a specific
 path (alt 2), since the default directory contains the remote port of the
 actor system to make the name unique. If using a dynamically assigned
 port (0) it will be different each time and the previously stored data
 will not be loaded.

 Making the data durable has of course a performance cost. By default, each update is flushed
 to disk before the ``UpdateSuccess`` reply is sent. For better performance, but with the risk of losing
 the last writes if the JVM crashes, you can enable write behind mode. Changes are then accumulated during
 a time period before it is written to LMDB and flushed to disk. Enabling write behind is especially
 efficient when performing many writes to the same key, because it is only the last value for each key
 that will be serialized and stored. The risk of losing writes if the JVM crashes is small since the
 data is typically replicated to other nodes immediately according to the given ``WriteConsistency``.

 ::

   akka.cluster.distributed-data.lmdb.write-behind-interval = 200 ms

 Note that you should be prepared to receive ``WriteFailure`` as reply to an ``Update`` of a
 durable entry if the data could not be stored for some reason. When enabling ``write-behind-interval``
 such errors will only be logged and ``UpdateSuccess`` will still be the reply to the ``Update``.

 There is one important caveat when it comes to pruning of :ref:`crdt_garbage_java` for durable data.
 If an old data entry that was never pruned is injected and merged with existing data after
 the pruning markers have been removed the value will not be correct. The time-to-live
 of the markers is defined by configuration
 ``akka.cluster.distributed-data.durable.remove-pruning-marker-after`` and is in the magnitude of days.
 This would be possible if a node with durable data didn't participate in the pruning
 (e.g. it was shut down) and later started after this time. A node with durable data should not
 be stopped for longer time than this duration and if it is joining again after this
 duration its data should first be manually removed (from the lmdb directory).

 .. _crdt_garbage_java:
@@ -573,19 +573,19 @@ from one node it will associate the identifier of that node forever. That can become a problem
 for long running systems with many cluster nodes being added and removed. To solve this problem
 the ``Replicator`` performs pruning of data associated with nodes that have been removed from the
 cluster. Data types that need pruning have to implement the ``RemovedNodePruning`` trait. See the
 API documentation of the ``Replicator`` for details.

 Samples
 =======

-Several interesting samples are included and described in the `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
-tutorial named `Akka Distributed Data Samples with Java <http://www.lightbend.com/activator/template/akka-sample-distributed-data-java>`_.
+Several interesting samples are included and described in the
+tutorial named `Akka Distributed Data Samples with Java <@exampleCodeService@/akka-samples-distributed-data-java>`_ (`source code <@samples@/akka-sample-distributed-data-java>`_)

 * Low Latency Voting Service
 * Highly Available Shopping Cart
 * Distributed Service Registry
 * Replicated Cache
 * Replicated Metrics

 Limitations
 ===========
@@ -598,7 +598,7 @@ all domains. Sometimes you need strong consistency.
 It is not intended for *Big Data*. The number of top level entries should not exceed 100000.
 When a new node is added to the cluster all these entries are transferred (gossiped) to the
 new node. The entries are split up in chunks and all existing nodes collaborate in the gossip,
 but it will take a while (tens of seconds) to transfer all entries and this means that you
 cannot have too many top level entries. The current recommended limit is 100000. We will
 be able to improve this if needed, but the design is still not intended for billions of entries.
@@ -607,7 +607,7 @@ All data is held in memory, which is another reason why it is not intended for *Big Data*.
 When a data entry is changed the full state of that entry may be replicated to other nodes
 if it doesn't support :ref:`delta_crdt_java`. The full state is also replicated for delta-CRDTs,
 for example when new nodes are added to the cluster or when deltas could not be propagated because
 of network partitions or similar problems. This means that you cannot have too large
 data entries, because then the remote message size will be too large.

 Learn More about CRDTs
@@ -641,8 +641,7 @@ maven::

 Configuration
 =============

 The ``DistributedData`` extension can be configured with the following properties:

 .. includecode:: ../../../akka-distributed-data/src/main/resources/reference.conf#distributed-data
@@ -1,7 +1,7 @@
 .. _fsm-java:

 #####
  FSM
 #####
@@ -452,6 +452,7 @@ zero.
 Examples
 ========

-A bigger FSM example contrasted with Actor's :meth:`become`/:meth:`unbecome` can be found in
-the `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_ template named
-`Akka FSM in Scala <http://www.lightbend.com/activator/template/akka-sample-fsm-java-lambda>`_
+A bigger FSM example contrasted with Actor's :meth:`become`/:meth:`unbecome` can be
+downloaded as a ready to run `Akka FSM sample <@exampleCodeService@/akka-samples-fsm-java>`_
+together with a tutorial. The source code of this sample can be found in the
+`Akka Samples Repository <@samples@/akka-sample-fsm-java>`_.
@@ -3,8 +3,9 @@ The Obligatory Hello World
 ##########################

 The actor based version of the tough problem of printing a
-well-known greeting to the console is introduced in a `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
-tutorial named `Akka Main in Java <http://www.lightbend.com/activator/template/akka-sample-main-java>`_.
+well-known greeting to the console is introduced in a ready to run `Akka Main sample <@exampleCodeService@/akka-samples-main-java>`_
+together with a tutorial. The source code of this sample can be found in the
+`Akka Samples Repository <@samples@/akka-sample-main-java>`_.

 The tutorial illustrates the generic launcher class :class:`akka.Main` which expects only
 one command line argument: the class name of the application’s main actor. This
@@ -12,7 +13,10 @@ main method will then create the infrastructure needed for running the actors,
 start the given main actor and arrange for the whole application to shut down
 once the main actor terminates.

-There is also another `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
-tutorial in the same problem domain that is named `Hello Akka! <http://www.lightbend.com/activator/template/hello-akka>`_.
-It describes the basics of Akka in more depth.
+There is also a `Giter8 <http://www.foundweekends.org/giter8/>`_ template in the same problem domain
+that is named `Hello Akka! <https://github.com/akka/hello-akka.g8>`_.
+It describes the basics of Akka in more depth. If you have `sbt` already installed, you can create a project
+from this template by running::
+
+    sbt new akka/hello-akka.g8
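For context, a minimal sketch of a main actor that ``akka.Main`` can launch (hypothetical class and package names), started for example with ``java -cp <classpath> akka.Main com.example.HelloWorld``:

.. code-block:: java

   package com.example;

   import akka.actor.AbstractActor;

   public class HelloWorld extends AbstractActor {
     @Override
     public void preStart() {
       System.out.println("Hello World!");
       // akka.Main shuts the application down once this main actor terminates
       getContext().stop(getSelf());
     }

     @Override
     public Receive createReceive() {
       return receiveBuilder().build(); // no messages handled in this sketch
     }
   }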
@@ -116,9 +116,10 @@ and the actor will unconditionally be stopped. If persistence of an event is rejected before it is
 stored, e.g. due to serialization error, ``onPersistRejected`` will be invoked (logging a warning
 by default), and the actor continues with next message.

-The easiest way to run this example yourself is to download `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
-and open the tutorial named `Akka Persistence Samples in Java with Lambdas <http://www.lightbend.com/activator/template/akka-sample-persistence-java-lambda>`_.
-It contains instructions on how to run the ``PersistentActorExample``.
+The easiest way to run this example yourself is to download the ready to run
+`Akka Persistence Sample with Java <@exampleCodeService@/akka-samples-persistence-java>`_
+together with the tutorial. It contains instructions on how to run the ``PersistentActorExample``.
+The source code of this sample can be found in the `Akka Samples Repository <@samples@/akka-sample-persistence-java>`_.

 .. note::
@@ -171,10 +172,10 @@ It should typically not be used when events have been deleted.

 .. includecode:: code/jdocs/persistence/LambdaPersistenceDocTest.java#recovery-no-snap

 Another example, which can be fun for experiments but probably not in a real application, is setting an
 upper bound to the replay which allows the actor to be replayed to a certain point "in the past"
 instead of its most up to date state. Note that after that it is a bad idea to persist new
 events because a later recovery will probably be confused by the new events that follow the
 events that were previously skipped.

 .. includecode:: code/jdocs/persistence/LambdaPersistenceDocTest.java#recovery-custom
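A sketch of such a bounded replay (the sequence number is an arbitrary example; not the ``#recovery-custom`` snippet):

.. code-block:: java

   import akka.persistence.AbstractPersistentActor;
   import akka.persistence.Recovery;

   public abstract class BoundedReplayActor extends AbstractPersistentActor {
     @Override
     public Recovery recovery() {
       // replay events only up to sequence number 457, "back in the past"
       return Recovery.create(457L);
     }
   }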
@@ -202,35 +203,35 @@ is called (logging the error by default), and the actor will be stopped.

 .. _internal-stash-java:

 Internal stash
 --------------

 The persistent actor has a private :ref:`stash <stash-java>` for internally caching incoming messages during
 :ref:`recovery <recovery-java>` or the ``persist``/``persistAll`` method persisting events. You can still
 use/inherit from the ``Stash`` interface. The internal stash cooperates with the normal stash by hooking into
 ``unstashAll`` method and making sure messages are unstashed properly to the internal stash to maintain ordering
 guarantees.

 You should be careful to not send more messages to a persistent actor than it can keep up with, otherwise the number
 of stashed messages will grow without bounds. It can be wise to protect against ``OutOfMemoryError`` by defining a
 maximum stash capacity in the mailbox configuration::

   akka.actor.default-mailbox.stash-capacity=10000

 Note that the stash capacity is per actor. If you have many persistent actors, e.g. when using cluster sharding,
 you may need to define a small stash capacity to ensure that the total number of stashed messages in the system
 doesn't consume too much memory. Additionally, the persistent actor defines three strategies to handle failure when the
 internal stash capacity is exceeded. The default overflow strategy is the ``ThrowOverflowExceptionStrategy``, which
 discards the current received message and throws a ``StashOverflowException``, causing actor restart if the default
 supervision strategy is used. You can override the ``internalStashOverflowStrategy`` method to return
 ``DiscardToDeadLetterStrategy`` or ``ReplyToStrategy`` for any "individual" persistent actor, or define the "default"
 for all persistent actors by providing FQCN, which must be a subclass of ``StashOverflowStrategyConfigurator``, in the
 persistence configuration::

   akka.persistence.internal-stash-overflow-strategy=
     "akka.persistence.ThrowExceptionConfigurator"

 The ``DiscardToDeadLetterStrategy`` strategy also has a pre-packaged companion configurator
 ``akka.persistence.DiscardConfigurator``.

 You can also query the default strategy via the Akka persistence extension singleton::

@ -238,7 +239,7 @@ You can also query the default strategy via the Akka persistence extension singl

  Persistence.get(getContext().getSystem()).defaultInternalStashOverflowStrategy();
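
For instance, overriding the strategy for an individual actor could look like this minimal
sketch (Scala API; the actor and its id are hypothetical):

.. code-block:: scala

   import akka.persistence.{ DiscardToDeadLetterStrategy, PersistentActor, StashOverflowStrategy }

   class DroppingActor extends PersistentActor {
     override def persistenceId: String = "dropping-actor-1"

     // Overflowing stashed messages go to dead letters instead of
     // throwing a StashOverflowException.
     override def internalStashOverflowStrategy: StashOverflowStrategy =
       DiscardToDeadLetterStrategy

     override def receiveRecover: Receive = { case _ => }

     override def receiveCommand: Receive = {
       case msg: String => persist(msg) { e => sender() ! s"persisted $e" }
     }
   }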

.. note::
  A bounded mailbox should be avoided in a persistent actor, since messages coming
  from the storage backends may be discarded. Use a bounded stash instead.

@ -332,10 +333,10 @@ While it is possible to nest mixed ``persist`` and ``persistAsync`` with keeping
it is not a recommended practice, as it may lead to overly complex nesting.

.. warning::
  While it is possible to nest ``persist`` calls within one another,
  it is *not* legal to call ``persist`` from any other thread than the actor's message processing thread.
  For example, it is not legal to call ``persist`` from Futures! Doing so will break the guarantees
  that the persist methods aim to provide. Always call ``persist`` and ``persistAsync`` from within
  the Actor's receive block (or methods synchronously invoked from there).
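
To make the boundary concrete, here is a minimal sketch (Scala API, hypothetical actor)
contrasting a legal nested ``persist`` with the illegal call from another thread:

.. code-block:: scala

   import akka.persistence.PersistentActor

   class NestedPersist extends PersistentActor {
     override def persistenceId: String = "nested-1"
     override def receiveRecover: Receive = { case _ => }

     override def receiveCommand: Receive = {
       case "cmd" =>
         persist("outer-evt") { _ =>
           // Legal: the handler runs on the actor's message processing thread,
           // so a nested persist keeps all guarantees.
           persist("inner-evt") { _ => sender() ! "done" }
         }
       // Illegal (do NOT do this): calling persist from a Future callback would
       // run on another thread and break the guarantees, e.g.
       //   Future { ... }.foreach(_ => persist("evt")(_ => ()))
     }
   }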

.. _failures-java:

@ -848,7 +849,7 @@ The journal plugin class must have a constructor with one of these signatures:
The plugin section of the actor system's config will be passed in the config constructor parameter. The config path
of the plugin is passed in the ``String`` parameter.

The ``plugin-dispatcher`` is the dispatcher used for the plugin actor. If not specified, it defaults to
``akka.persistence.dispatchers.default-plugin-dispatcher``.

Don't run journal tasks/futures on the system default dispatcher, since that might starve other tasks.

@ -877,7 +878,7 @@ The snapshot store plugin class must have a constructor with one of these signat
The plugin section of the actor system's config will be passed in the config constructor parameter. The config path
of the plugin is passed in the ``String`` parameter.

The ``plugin-dispatcher`` is the dispatcher used for the plugin actor. If not specified, it defaults to
``akka.persistence.dispatchers.default-plugin-dispatcher``.

Don't run snapshot store tasks/futures on the system default dispatcher, since that might starve other tasks.

@ -88,7 +88,7 @@ The example above only illustrates the bare minimum of properties you have to ad
All settings are described in :ref:`remote-configuration-artery-java`.

.. note::
  Aeron requires a 64bit JVM to work reliably.

Canonical address
^^^^^^^^^^^^^^^^^

@ -249,8 +249,8 @@ remote system. This still however may pose a security risk, and one may want to
only a specific set of known actors by enabling the whitelist feature.

To enable remote deployment whitelisting set the ``akka.remote.deployment.enable-whitelist`` value to ``on``.
The list of allowed classes has to be configured on the "remote" system, in other words on the system onto which
others will be attempting to remote deploy Actors. That system, locally, knows best which Actors it should or
should not allow others to remote deploy onto it. The full settings section may for example look like this:

.. includecode:: ../../../akka-remote/src/test/scala/akka/remote/RemoteDeploymentWhitelistSpec.scala#whitelist-config
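
As a sketch of the shape such a settings section takes (the class name below is a made-up
placeholder, and the exact keys should be verified against the reference configuration):

.. code-block:: none

   akka.remote.deployment {
     enable-whitelist = on
     whitelist = [
       "com.example.SafeToDeployActor"
     ]
   }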

@ -269,7 +269,7 @@ so if network security is not considered as enough protection the classic remoti

Best practice is that Akka remoting nodes should only be accessible from the adjacent network.

It is also security best-practice to :ref:`disable the Java serializer <disable-java-serializer-java-artery>` because of
its multiple `known attack surfaces <https://community.hpe.com/t5/Security-Research/The-perils-of-Java-deserialization/ba-p/6838995>`_.

Untrusted Mode

@ -291,12 +291,12 @@ a denial of service attack). :class:`PossiblyHarmful` covers the predefined
messages like :class:`PoisonPill` and :class:`Kill`, but it can also be added
as a marker trait to user-defined messages.
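
Marking a user-defined message is a one-liner; a minimal sketch (Scala, with a hypothetical message):

.. code-block:: scala

   import akka.actor.PossiblyHarmful

   // Dropped by the remoting layer when untrusted mode is enabled.
   case object WipeLocalCache extends PossiblyHarmful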

.. warning::
  Untrusted mode does not give full protection against attacks by itself.
  It makes it slightly harder to perform malicious or unintended actions but
  it should be complemented with :ref:`disabled Java serializer <disable-java-serializer-java-artery>`.
  Additional protection can be achieved when running in an untrusted network by
  network security (e.g. firewalls).

Messages sent with actor selection are by default discarded in untrusted mode, but

@ -342,7 +342,7 @@ may not be delivered to the destination:

* during a network partition when the Aeron session is broken; this is automatically recovered once the partition is over
* when sending too many messages without flow control and thereby filling up the outbound send queue (``outbound-message-queue-size`` config)
* if serialization or deserialization of a message fails (only that message will be dropped)
* if an unexpected exception occurs in the remoting infrastructure

In short, Actor message delivery is “at-most-once” as described in :ref:`message-delivery-reliability`

@ -350,39 +350,39 @@ In short, Actor message delivery is “at-most-once” as described in :ref:`mes
Some messages in Akka are called system messages and those cannot be dropped because that would result
in an inconsistent state between the systems. Such messages are used for essentially two features: remote death
watch and remote deployment. These messages are delivered by Akka remoting with “exactly-once” guarantee by
confirming each message and resending unconfirmed messages. If a system message cannot be delivered anyway, the
association with the destination system is irrecoverably failed, and ``Terminated`` is signaled for all watched
actors on the remote system. It is placed in a so called quarantined state. Quarantine usually does not
happen if remote watch or remote deployment is not used.

Each ``ActorSystem`` instance has a unique identifier (UID), which is important for differentiating between
incarnations of a system when it is restarted with the same hostname and port. It is the specific
incarnation (UID) that is quarantined. The only way to recover from this state is to restart one of the
actor systems.

Messages that are sent to and received from a quarantined system will be dropped. However, it is possible to
send messages with ``actorSelection`` to the address of a quarantined system, which is useful to probe if the
system has been restarted.

An association will be quarantined when:

* Cluster node is removed from the cluster membership.
* Remote failure detector triggers, i.e. remote watch is used. This is different when :ref:`Akka Cluster <cluster_usage_java>`
  is used. The unreachable observation by the cluster failure detector can go back to reachable if the network
  partition heals. A cluster member is not quarantined when the failure detector triggers.
* Overflow of the system message delivery buffer, e.g. because of too many ``watch`` requests at the same time
  (``system-message-buffer-size`` config).
* Unexpected exception occurs in the control subchannel of the remoting infrastructure.

The UID of the ``ActorSystem`` is exchanged in a two-way handshake when the first message is sent to
a destination. The handshake will be retried until the other system replies and no other messages will
pass through until the handshake is completed. If the handshake cannot be established within a timeout
(``handshake-timeout`` config) the association is stopped (freeing up resources). Queued messages will be
dropped if the handshake cannot be established. It will not be quarantined, because the UID is unknown.
A new handshake attempt will start when the next message is sent to the destination.

Handshake requests are actually also sent periodically to be able to establish a working connection
when the destination system has been restarted.

Watching Remote Actors
^^^^^^^^^^^^^^^^^^^^^^

@ -459,12 +459,12 @@ For more information please see :ref:`serialization-java`.
ByteBuffer based serialization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Artery introduces a new serialization mechanism which allows the ``ByteBufferSerializer`` to directly write into a
shared :class:`java.nio.ByteBuffer` instead of being forced to allocate and return an ``Array[Byte]`` for each serialized
message. For high-throughput messaging this API change can yield significant performance benefits, so we recommend
changing your serializers to use this new mechanism.

This new API also plays well with new versions of Google Protocol Buffers and other serialization libraries, which gained
the ability to serialize directly into and from ByteBuffers.

As the new feature only changes how bytes are read and written, and the rest of the serialization infrastructure

@ -474,13 +474,13 @@ Implementing an :class:`akka.serialization.ByteBufferSerializer` works the same

.. includecode:: code/jdocs/actor/ByteBufferSerializerDocTest.java#ByteBufferSerializer-interface

Implementing a serializer for Artery is therefore as simple as implementing this interface, and binding the serializer
as usual (which is explained in :ref:`serialization-java`).

Implementations should typically extend ``SerializerWithStringManifest`` and in addition to the ``ByteBuffer`` based
``toBinary`` and ``fromBinary`` methods also implement the array based ``toBinary`` and ``fromBinary`` methods.
The array based methods will be used when ``ByteBuffer`` is not used, e.g. in Akka Persistence.

Note that the array based methods can be implemented by delegation like this:

.. includecode:: code/jdocs/actor/ByteBufferSerializerDocTest.java#bytebufserializer-with-manifest
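
Putting the pieces together, a complete serializer of this shape could look like the following
minimal sketch (Scala API; the ``Ping`` message and the identifier value are made up):

.. code-block:: scala

   import java.nio.ByteBuffer
   import java.nio.charset.StandardCharsets
   import akka.serialization.{ ByteBufferSerializer, SerializerWithStringManifest }

   final case class Ping(message: String) // hypothetical message type

   class PingSerializer extends SerializerWithStringManifest with ByteBufferSerializer {
     override def identifier: Int = 41444 // must be unique within the actor system
     override def manifest(o: AnyRef): String = "PING"

     // ByteBuffer based API used by Artery: write directly into the shared buffer.
     override def toBinary(o: AnyRef, buf: ByteBuffer): Unit = o match {
       case Ping(msg) => buf.put(msg.getBytes(StandardCharsets.UTF_8))
     }

     override def fromBinary(buf: ByteBuffer, manifest: String): AnyRef = {
       val bytes = new Array[Byte](buf.remaining)
       buf.get(bytes)
       Ping(new String(bytes, StandardCharsets.UTF_8))
     }

     // Array based API (used e.g. by Akka Persistence), implemented by delegation.
     override def toBinary(o: AnyRef): Array[Byte] =
       o.asInstanceOf[Ping].message.getBytes(StandardCharsets.UTF_8)

     override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
       fromBinary(ByteBuffer.wrap(bytes), manifest)
   }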

@ -492,38 +492,38 @@ Disabling the Java Serializer

It is possible to completely disable Java Serialization for the entire Actor system.

Java serialization is known to be slow and `prone to attacks
<https://community.hpe.com/t5/Security-Research/The-perils-of-Java-deserialization/ba-p/6838995>`_
of various kinds - it never was designed for high throughput messaging after all. However, it is very
convenient to use, thus it remained the default serialization mechanism that Akka used to
serialize user messages as well as some of its internal messages in previous versions.
Since the release of Artery, Akka internals do not rely on Java serialization anymore (exceptions to that being ``java.lang.Throwable`` and "remote deployment").

.. note::
  Akka does not use Java Serialization for any of its internal messages.
  It is highly encouraged to disable Java serialization, so please plan to do so at the earliest possibility you have in your project.

One may think that network bandwidth and latency limit the performance of remote messaging, but serialization is a more typical bottleneck.

For user messages, the default serializer, implemented using Java serialization, remains available and enabled.
We do however recommend to disable it entirely and utilise a proper serialization library instead in order to effectively utilise
the improved performance and ability for rolling deployments using Artery. Libraries that we recommend to use include,
but are not limited to, `Kryo`_ by using the `akka-kryo-serialization`_ library or `Google Protocol Buffers`_ if you want
more control over the schema evolution of your messages.

In order to completely disable Java Serialization in your Actor system you need to add the following configuration to
your ``application.conf``:

.. code-block:: ruby

   akka.actor.allow-java-serialization = off

This will completely disable the use of ``akka.serialization.JavaSerialization`` by the
Akka Serialization extension, instead ``DisabledJavaSerializer`` will
be inserted which will fail explicitly if attempts to use Java serialization are made.

The log messages emitted by such a serializer SHOULD be treated as potential
attacks which the serializer prevented, as they MAY indicate an external operator
attempting to send malicious messages intending to use Java serialization as an attack vector.
The attempts are logged with the SECURITY marker.

@ -561,9 +561,9 @@ That is not done by the router.
Remoting Sample
---------------

There is a more extensive remote example that comes with `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_.
The tutorial named `Akka Remote Samples with Java <http://www.lightbend.com/activator/template/akka-sample-remote-java>`_
demonstrates both remote deployment and look-up of remote actors.
You can download a ready to run `remoting sample <@exampleCodeService@/akka-samples-remote-java>`_
together with a tutorial for a more hands-on experience. The source code of this sample can be found in the
`Akka Samples Repository <@samples@/akka-sample-remote-java>`_.

Performance tuning
------------------

@ -605,7 +605,7 @@ Messages destined for actors not matching any of these patterns are sent using t
External, shared Aeron media driver
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Aeron transport is running in a so called `media driver <https://github.com/real-logic/Aeron/wiki/Media-Driver-Operation>`_.
By default, Akka starts the media driver embedded in the same JVM process as the application. This is
convenient and simplifies operational concerns by only having one process to start and monitor.

@ -623,15 +623,15 @@ The needed classpath::

  Agrona-0.5.4.jar:aeron-driver-1.0.1.jar:aeron-client-1.0.1.jar

You find those jar files on `maven central <http://search.maven.org/>`_, or you can create a
package with your preferred build tool.

You can pass `Aeron properties <https://github.com/real-logic/Aeron/wiki/Configuration-Options>`_ as
command line ``-D`` system properties::

  -Daeron.dir=/dev/shm/aeron

You can also define Aeron properties in a file::

  java io.aeron.driver.MediaDriver config/aeron.properties

@ -643,21 +643,21 @@ An example of such a properties file::

  aeron.rcv.buffer.length=16384
  aeron.rcv.initial.window.length=2097152
  agrona.disable.bounds.checks=true

  aeron.threading.mode=SHARED_NETWORK

  # low latency settings
  #aeron.threading.mode=DEDICATED
  #aeron.sender.idle.strategy=org.agrona.concurrent.BusySpinIdleStrategy
  #aeron.receiver.idle.strategy=org.agrona.concurrent.BusySpinIdleStrategy

  # use same directory as in the akka.remote.artery.advanced.aeron-dir config
  # of the Akka application
  aeron.dir=/dev/shm/aeron

Read more about the media driver in the `Aeron documentation <https://github.com/real-logic/Aeron/wiki/Media-Driver-Operation>`_.

To use the external media driver from the Akka application you need to define the following two
configuration properties::

  akka.remote.artery.advanced {

@ -180,8 +180,8 @@ remote system. This still however may pose a security risk, and one may want to
only a specific set of known actors by enabling the whitelist feature.

To enable remote deployment whitelisting set the ``akka.remote.deployment.enable-whitelist`` value to ``on``.
The list of allowed classes has to be configured on the "remote" system, in other words on the system onto which
others will be attempting to remote deploy Actors. That system, locally, knows best which Actors it should or
should not allow others to remote deploy onto it. The full settings section may for example look like this:

.. includecode:: ../../../akka-remote/src/test/scala/akka/remote/RemoteDeploymentWhitelistSpec.scala#whitelist-config

@ -283,46 +283,46 @@ For more information please see :ref:`serialization-java`.
Disabling the Java Serializer
-----------------------------

Java serialization is known to be slow and `prone to attacks
<https://community.hpe.com/t5/Security-Research/The-perils-of-Java-deserialization/ba-p/6838995>`_
of various kinds - it never was designed for high throughput messaging after all. However, it is very
convenient to use, thus it remained the default serialization mechanism that Akka used to
serialize user messages as well as some of its internal messages in previous versions.
Since the release of Artery, Akka internals do not rely on Java serialization anymore (one exception being ``java.lang.Throwable``).

.. warning::
  Please note that Akka 2.5 by default does not use any Java Serialization for its own internal messages, unlike 2.4 where
  by default it still did for a few of the messages. If you want a 2.4.x system to communicate with a 2.5.x system, for
  example during a rolling deployment, you should first enable ``additional-serialization-bindings`` on the old systems.
  You must do so on all nodes participating in a cluster, otherwise the mis-aligned serialization
  configurations will cause deserialization errors on the receiving nodes. These additional serialization bindings are
  enabled by default in Akka 2.5.x.

.. note::
  When using the new remoting implementation (codename Artery), Akka does not use Java Serialization for any of its internal messages.
  It is highly encouraged to disable Java serialization, so please plan to do so at the earliest possibility you have in your project.

One may think that network bandwidth and latency limit the performance of remote messaging, but serialization is a more typical bottleneck.

For user messages, the default serializer, implemented using Java serialization, remains available and enabled.
We do however recommend to disable it entirely and utilise a proper serialization library instead in order to effectively utilise
the improved performance and ability for rolling deployments using Artery. Libraries that we recommend to use include,
but are not limited to, `Kryo`_ by using the `akka-kryo-serialization`_ library or `Google Protocol Buffers`_ if you want
more control over the schema evolution of your messages.

In order to completely disable Java Serialization in your Actor system you need to add the following configuration to
your ``application.conf``:

.. code-block:: ruby

   akka.actor.allow-java-serialization = off

This will completely disable the use of ``akka.serialization.JavaSerialization`` by the
Akka Serialization extension, instead ``DisabledJavaSerializer`` will
be inserted which will fail explicitly if attempts to use Java serialization are made.

The log messages emitted by such a serializer SHOULD be treated as potential
attacks which the serializer prevented, as they MAY indicate an external operator
attempting to send malicious messages intending to use Java serialization as an attack vector.
The attempts are logged with the SECURITY marker.

@ -353,16 +353,16 @@ A group of remote actors can be configured as:

This configuration setting will send messages to the defined remote actor paths.
It requires that you create the destination actors on the remote nodes with matching paths.
That is not done by the router.

.. _remote-sample-java:

Remoting Sample
^^^^^^^^^^^^^^^

There is a more extensive remote example that comes with `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_.
The tutorial named `Akka Remote Samples with Java <http://www.lightbend.com/activator/template/akka-sample-remote-java>`_
demonstrates both remote deployment and look-up of remote actors.
You can download a ready to run `remoting sample <@exampleCodeService@/akka-samples-remote-java>`_
together with a tutorial for a more hands-on experience. The source code of this sample can be found in the
`Akka Samples Repository <@samples@/akka-sample-remote-java>`_.

Remote Events
-------------

@ -424,11 +424,11 @@ An ``ActorSystem`` should not be exposed via Akka Remote over plain TCP to an un
It should be protected by network security, such as a firewall. If that is not considered as enough protection
:ref:`TLS with mutual authentication <remote-tls-java>` should be enabled.

Best practice is that Akka remoting nodes should only be accessible from the adjacent network. Note that if TLS is
enabled with mutual authentication there is still a risk that an attacker can gain access to a valid certificate by
compromising any node with certificates issued by the same internal PKI tree.

It is also security best-practice to :ref:`disable the Java serializer <disable-java-serializer-java>` because of
its multiple `known attack surfaces <https://community.hpe.com/t5/Security-Research/The-perils-of-Java-deserialization/ba-p/6838995>`_.

.. _remote-tls-java:

@ -452,15 +452,15 @@ Next the actual SSL/TLS parameters have to be configured::

    netty.ssl.security {
      key-store = "/example/path/to/mykeystore.jks"
      trust-store = "/example/path/to/mytruststore.jks"

      key-store-password = "changeme"
      key-password = "changeme"
      trust-store-password = "changeme"

      protocol = "TLSv1.2"

      enabled-algorithms = [TLS_DHE_RSA_WITH_AES_128_GCM_SHA256]

      random-number-generator = "AES128CounterSecureRNG"
    }
  }

@ -473,11 +473,11 @@ According to `RFC 7525 <https://tools.ietf.org/html/rfc7525>`_ the recommended a

- TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384

Creating and working with keystores and certificates is well documented in the
`Generating X.509 Certificates <http://typesafehub.github.io/ssl-config/CertificateGeneration.html#using-keytool>`_
section of Lightbend's SSL-Config library.

Since Akka remoting is inherently :ref:`peer-to-peer <symmetric-communication>` both the key-store as well as trust-store
need to be configured on each remoting node participating in the cluster.

The official `Java Secure Socket Extension documentation <http://docs.oracle.com/javase/7/jdocs/technotes/guides/security/jsse/JSSERefGuide.html>`_

@ -485,14 +485,14 @@ as well as the `Oracle documentation on creating KeyStore and TrustStores <https
are both great resources to research when setting up security on the JVM. Please consult those resources when troubleshooting
and configuring SSL.

Since Akka 2.5.0 mutual authentication between TLS peers is enabled by default.

Mutual authentication means that the passive side (the TLS server side) of a connection will also request and verify
a certificate from the connecting peer. Without this mode only the client side is requesting and verifying certificates.
While Akka is a peer-to-peer technology, each connection between nodes starts out from one side (the "client") towards
the other (the "server").

Note that if TLS is enabled with mutual authentication there is still a risk that an attacker can gain access to a valid certificate
by compromising any node with certificates issued by the same internal PKI tree.

See also a description of the settings in the :ref:`remote-configuration-java` section.

@ -521,13 +521,13 @@ a denial of service attack). :class:`PossiblyHarmful` covers the predefined
messages like :class:`PoisonPill` and :class:`Kill`, but it can also be added
as a marker trait to user-defined messages.

.. warning::
  Untrusted mode does not give full protection against attacks by itself.
  It makes it slightly harder to perform malicious or unintended actions but
  it should be complemented with :ref:`disabled Java serializer <disable-java-serializer-java>`.
  Additional protection can be achieved when running in an untrusted network by
  network security (e.g. firewalls) and/or enabling
  :ref:`TLS with mutual authentication <remote-tls-java>`.

Messages sent with actor selection are by default discarded in untrusted mode, but

@ -566,7 +566,7 @@ untrusted mode when incoming via the remoting layer:

Remote Configuration
^^^^^^^^^^^^^^^^^^^^

There are lots of configuration properties that are related to remoting in Akka. We refer to the
:ref:`reference configuration <config-akka-remote>` for more information.

.. note::

@ -221,10 +221,9 @@ __ Props_
singleton scope.

Techniques for dependency injection and integration with dependency injection frameworks
are described in more depth in the
`Using Akka with Dependency Injection <http://letitcrash.com/post/55958814293/akka-dependency-injection>`_
guideline and the `Akka Java Spring <http://www.lightbend.com/activator/template/akka-java-spring>`_ tutorial
in Lightbend Activator.
are described in more depth in the
`Using Akka with Dependency Injection <http://letitcrash.com/post/55958814293/akka-dependency-injection>`_
guideline and the `Akka Java Spring <https://github.com/typesafehub/activator-akka-java-spring>`_ tutorial.

The Inbox
---------

@ -338,7 +337,7 @@ occupying it. ``ActorSelection`` cannot be watched for this reason. It is
possible to resolve the current incarnation's ``ActorRef`` living under the
path by sending an ``Identify`` message to the ``ActorSelection`` which
will be replied to with an ``ActorIdentity`` containing the correct reference
(see :ref:`actorSelection-scala`). This can also be done with the ``resolveOne``
method of the :class:`ActorSelection`, which returns a ``Future`` of the matching
:class:`ActorRef`.

@ -359,7 +358,7 @@ Registering a monitor is easy:

It should be noted that the :class:`Terminated` message is generated
independent of the order in which registration and termination occur.
In particular, the watching actor will receive a :class:`Terminated` message even if the
watched actor has already been terminated at the time of registration.

Registering multiple times does not necessarily lead to multiple messages being

@ -506,8 +505,8 @@ of that reply is guaranteed, it still is a normal message.

.. includecode:: code/docs/actor/ActorDocSpec.scala#identify

You can also acquire an :class:`ActorRef` for an :class:`ActorSelection` with
the ``resolveOne`` method of the :class:`ActorSelection`. It returns a ``Future``
of the matching :class:`ActorRef` if such an actor exists. It is completed with
failure ``akka.actor.ActorNotFound`` if no such actor exists or the identification
didn't complete within the supplied ``timeout``.
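
A minimal sketch of ``resolveOne`` (the actor path and timeout are illustrative, and an
``ActorSystem`` named ``system`` is assumed to be in scope):

.. code-block:: scala

   import akka.actor.ActorRef
   import akka.util.Timeout
   import scala.concurrent.Future
   import scala.concurrent.duration._

   implicit val resolveTimeout: Timeout = 3.seconds

   // Completes with the ActorRef of the current incarnation, or fails with
   // ActorNotFound if no actor lives under the path within the timeout.
   val worker: Future[ActorRef] =
     system.actorSelection("/user/worker").resolveOne()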

@ -790,7 +789,7 @@ before stopping the target actor. Simple cleanup tasks can be handled in ``postS
Coordinated Shutdown
--------------------

There is an extension named ``CoordinatedShutdown`` that will stop certain actors and
services in a specific order and perform registered tasks during the shutdown process.

The order of the shutdown phases is defined in configuration ``akka.coordinated-shutdown.phases``.

@ -802,26 +801,26 @@ More phases can be be added in the application's configuration if needed by over
additional ``depends-on``. Especially the phases ``before-service-unbind``, ``before-cluster-shutdown`` and
``before-actor-system-terminate`` are intended for application specific phases or tasks.

The default phases are defined in a single linear order, but the phases can be ordered as a
directed acyclic graph (DAG) by defining the dependencies between the phases.
The phases are ordered with `topological <https://en.wikipedia.org/wiki/Topological_sorting>`_ sort of the DAG.

Tasks can be added to a phase with:

.. includecode:: code/docs/actor/ActorDocSpec.scala#coordinated-shutdown-addTask

The returned ``Future[Done]`` should be completed when the task is completed. The task name parameter
is only used for debugging/logging.
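
For example, registering a task in the ``before-service-unbind`` phase could look like this
sketch (the task name and body are placeholders; ``system`` is an ``ActorSystem`` in scope):

.. code-block:: scala

   import akka.Done
   import akka.actor.CoordinatedShutdown
   import scala.concurrent.Future

   CoordinatedShutdown(system).addTask(
     CoordinatedShutdown.PhaseBeforeServiceUnbind, "someTaskName") { () =>
     // Complete the returned Future[Done] when the cleanup has finished.
     Future.successful(Done)
   }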

Tasks added to the same phase are executed in parallel without any ordering assumptions.
The next phase will not start until all tasks of the previous phase have been completed.

If tasks are not completed within a configured timeout (see :ref:`reference.conf <config-akka-actor>`)
the next phase will be started anyway. It is possible to configure ``recover=off`` for a phase
to abort the rest of the shutdown process if a task fails or is not completed within the timeout.

Tasks should typically be registered as early as possible after system startup. When running
the coordinated shutdown, tasks that have been registered will be performed but tasks that are
added too late will not be run.

To start the coordinated shutdown process you can invoke ``run`` on the ``CoordinatedShutdown``
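
A minimal sketch of invoking it (assuming an ``ActorSystem`` named ``system`` is in scope):

.. code-block:: scala

   import akka.Done
   import akka.actor.CoordinatedShutdown
   import scala.concurrent.Future

   // Runs all phases in order; the Future completes when shutdown has finished.
   // It is safe to call run() multiple times, it will only run once.
   val done: Future[Done] = CoordinatedShutdown(system).run()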

@ -839,9 +838,9 @@ To enable a hard ``System.exit`` as a final action you can configure::

When using :ref:`Akka Cluster <cluster_usage_scala>` the ``CoordinatedShutdown`` will automatically run
when the cluster node sees itself as ``Exiting``, i.e. leaving from another node will trigger
the shutdown process on the leaving node. Tasks for graceful leaving of the cluster, including graceful
shutdown of Cluster Singletons and Cluster Sharding, are added automatically when Akka Cluster is used,
i.e. running the shutdown process will also trigger the graceful leaving if it's not already in progress.

By default, the ``CoordinatedShutdown`` will be run when the JVM process exits, e.g.
via ``kill SIGTERM`` signal (``SIGINT`` ctrl-c doesn't work). This behavior can be disabled with::

@ -849,13 +848,13 @@ via ``kill SIGTERM`` signal (``SIGINT`` ctrl-c doesn't work). This behavior can

  akka.coordinated-shutdown.run-by-jvm-shutdown-hook=off

If you have application specific JVM shutdown hooks it's recommended that you register them via the
``CoordinatedShutdown`` so that they are running before Akka internal shutdown hooks, e.g.
those shutting down Akka Remoting (Artery).

.. includecode:: code/docs/actor/ActorDocSpec.scala#coordinated-shutdown-jvm-hook

For some tests it might be undesired to terminate the ``ActorSystem`` via ``CoordinatedShutdown``.
You can disable that by adding the following to the configuration of the ``ActorSystem`` that is
used in the test::

  # Don't terminate ActorSystem via CoordinatedShutdown in tests

@ -322,10 +322,10 @@ also :ref:`camel-examples` that implements both, an asynchronous
consumer and an asynchronous producer, with the jetty component.

If the used Camel component is blocking it might be necessary to use a separate
:ref:`dispatcher <dispatchers-scala>` for the producer. The Camel processor is
invoked by a child actor of the producer and the dispatcher can be defined in
the deployment section of the configuration. For example, if your producer actor
has path ``/user/integration/output`` the dispatcher of the child actor can be
defined with::

  akka.actor.deployment {

@ -473,13 +473,12 @@ __ https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/j
Examples
========

The `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
tutorial named `Akka Camel Samples with Scala <http://www.lightbend.com/activator/template/akka-sample-camel-scala>`_
The sample named `Akka Camel Samples with Scala <@exampleCodeService@/akka-samples-camel-scala>`_ (`source code <@samples@/akka-sample-camel-scala>`_)
contains 3 samples:

* Asynchronous routing and transformation - This example demonstrates how to implement consumer and
  producer actors that support :ref:`camel-asynchronous-routing` with their Camel endpoints.

* Custom Camel route - Demonstrates the combined usage of a ``Producer`` and a
  ``Consumer`` actor as well as the inclusion of a custom Camel route.

@ -10,18 +10,18 @@ contact points. It will establish a connection to a ``ClusterReceptionist`` some
the cluster. It will monitor the connection to the receptionist and establish a new
connection if the link goes down. When looking for a new receptionist it uses fresh
contact points retrieved from previous establishment, or periodically refreshed contacts,
i.e. not necessarily the initial contact points.

.. note::

  ``ClusterClient`` should not be used when sending messages to actors that run
  within the same cluster. Similar functionality as the ``ClusterClient`` is
  provided in a more efficient way by :ref:`distributed-pub-sub-scala` for actors that
  belong to the same cluster.

Also, note it's necessary to change ``akka.actor.provider`` from ``local``
to ``remote`` or ``cluster`` when using
the cluster client.

The receptionist is supposed to be started on all nodes, or all nodes with specified role,
in the cluster. The receptionist can be started with the ``ClusterClientReceptionist`` extension

@ -77,11 +77,11 @@ The size of the buffer is configurable and it can be disabled by using a buffer
It's worth noting that messages can always be lost because of the distributed nature
of these actors. As always, additional logic should be implemented in the destination
(acknowledgement) and in the client (retry) actors to ensure at-least-once message delivery.

An Example
----------

On the cluster nodes first start the receptionist. Note, it is recommended to load the extension
when the actor system is started by defining it in the ``akka.extensions`` configuration property::

  akka.extensions = ["akka.cluster.client.ClusterClientReceptionist"]

@ -103,8 +103,7 @@ The ``initialContacts`` parameter is a ``Set[ActorPath]``, which can be created
You will probably define the address information of the initial contact points in configuration or system property.
See also :ref:`cluster-client-config-scala`.
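
A minimal sketch of creating a client (host names, ports and the system name are placeholders,
and ``system`` is an ``ActorSystem`` in scope):

.. code-block:: scala

   import akka.actor.ActorPath
   import akka.cluster.client.{ ClusterClient, ClusterClientSettings }

   val initialContacts = Set(
     ActorPath.fromString("akka.tcp://OtherSys@host1:2552/system/receptionist"),
     ActorPath.fromString("akka.tcp://OtherSys@host2:2552/system/receptionist"))

   val client = system.actorOf(
     ClusterClient.props(
       ClusterClientSettings(system).withInitialContacts(initialContacts)),
     "client")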

A more comprehensive sample is available in the `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
tutorial named `Distributed workers with Akka and Scala! <http://www.lightbend.com/activator/template/akka-distributed-workers>`_.
A more comprehensive sample is available in the tutorial named `Distributed workers with Akka and Scala! <https://github.com/typesafehub/activator-akka-distributed-workers-scala>`_.

ClusterClientReceptionist Extension
-----------------------------------

@ -153,21 +152,21 @@ maven::

  </dependency>

.. _cluster-client-config-scala:

Configuration
-------------

The ``ClusterClientReceptionist`` extension (or ``ClusterReceptionistSettings``) can be configured
with the following properties:

.. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#receptionist-ext-config

The following configuration properties are read by the ``ClusterClientSettings``
when created with an ``ActorSystem`` parameter. It is also possible to amend the ``ClusterClientSettings``
or create it from another config section with the same layout as below. ``ClusterClientSettings`` is
a parameter to the ``ClusterClient.props`` factory method, i.e. each client can be configured
with different settings if needed.

.. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#cluster-client-config

Failure handling

@ -190,4 +189,4 @@ within a configurable interval. This is configured with the ``reconnect-timeout`

This can be useful when initial contacts are provided from some kind of service registry, cluster node addresses
are entirely dynamic, and the entire cluster might shut down or crash and be restarted on new addresses. Since the
client will be stopped in that case a monitoring actor can watch it and upon ``Terminated`` a new set of initial
contacts can be fetched and a new cluster client started.

@ -157,9 +157,10 @@ The same type of router could also have been defined in code:

.. includecode:: code/docs/cluster/FactorialFrontend.scala#router-deploy-in-code

The `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_ tutorial named
`Akka Cluster Samples with Scala <http://www.lightbend.com/activator/template/akka-sample-cluster-scala>`_
contains the full source code and instructions of how to run the **Adaptive Load Balancing** sample.
The easiest way to run the **Adaptive Load Balancing** example yourself is to download the ready to run
`Akka Cluster Sample with Scala <@exampleCodeService@/akka-samples-cluster-scala>`_
together with the tutorial. It contains instructions on how to run the **Adaptive Load Balancing** sample.
The source code of this sample can be found in the `Akka Samples Repository <@samples@/akka-sample-cluster-scala>`_.

Subscribe to Metrics Events
---------------------------

@ -22,13 +22,13 @@ the sender to know the location of the destination actor. This is achieved by se
the messages via a ``ShardRegion`` actor provided by this extension, which knows how
to route the message with the entity id to the final destination.

Cluster sharding will not be active on members with status :ref:`WeaklyUp <weakly_up_scala>`
if that feature is enabled.

.. warning::
  **Don't use Cluster Sharding together with Automatic Downing**,
  since it allows the cluster to split up into two separate clusters, which in turn will result
  in *multiple shards and entities* being started, one in each separate cluster!
  See :ref:`automatic-vs-manual-downing-scala`.

An Example

@ -63,23 +63,23 @@ This example illustrates two different ways to define the entity identifier in t
sent to the entity actor is wrapped in the envelope.

Note how these two message types are handled in the ``extractEntityId`` function shown above.
The message sent to the entity actor is the second part of the tuple returned by the ``extractEntityId`` and that makes it
possible to unwrap envelopes if needed.

A shard is a group of entities that will be managed together. The grouping is defined by the
``extractShardId`` function shown above. For a specific entity identifier the shard identifier must always
be the same.

Creating a good sharding algorithm is an interesting challenge in itself. Try to produce a uniform distribution,
i.e. the same amount of entities in each shard. As a rule of thumb, the number of shards should be a factor ten greater
than the planned maximum number of cluster nodes. Fewer shards than the number of nodes will result in some nodes
not hosting any shards. Too many shards will result in less efficient management of the shards, e.g. rebalancing
overhead, and increased latency because the coordinator is involved in the routing of the first message for each
shard. The sharding algorithm must be the same on all nodes in a running cluster. It can be changed after stopping
all nodes in the cluster.

A simple sharding algorithm that works fine in most cases is to take the absolute value of the ``hashCode`` of
the entity identifier modulo the number of shards. As a convenience this is provided by the
``ShardRegion.HashCodeMessageExtractor``.
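
For reference, such extractor functions could be sketched like this (the message types and
shard count are hypothetical):

.. code-block:: scala

   import akka.cluster.sharding.ShardRegion

   final case class EntityEnvelope(id: Long, payload: Any)
   final case class Get(counterId: Long)

   val extractEntityId: ShardRegion.ExtractEntityId = {
     case EntityEnvelope(id, payload) => (id.toString, payload)
     case msg @ Get(id)               => (id.toString, msg)
   }

   val numberOfShards = 100

   // The same entity id must always map to the same shard id.
   val extractShardId: ShardRegion.ExtractShardId = {
     case EntityEnvelope(id, _) => (id % numberOfShards).toString
     case Get(id)               => (id % numberOfShards).toString
   }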

Messages to the entities are always sent via the local ``ShardRegion``. The ``ShardRegion`` actor reference for a

@ -90,8 +90,8 @@ first message for a specific entity is delivered.

.. includecode:: ../../../akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingSpec.scala#counter-usage
A more comprehensive sample is available in the `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
|
||||
tutorial named `Akka Cluster Sharding with Scala! <http://www.lightbend.com/activator/template/akka-cluster-sharding-scala>`_.
|
||||
A more comprehensive sample is available in the
|
||||
tutorial named `Akka Cluster Sharding with Scala! <https://github.com/typesafehub/activator-akka-cluster-sharding-scala>`_.
|
||||
|
||||
How it works
|
||||
------------
|
||||
|
|

@ -151,9 +151,9 @@ That means they will start buffering incoming messages for that shard, in the sa
shard location is unknown. During the rebalance process the coordinator will not answer any
requests for the location of shards that are being rebalanced, i.e. local buffering will
continue until the handoff is completed. The ``ShardRegion`` responsible for the rebalanced shard
will stop all entities in that shard by sending the specified ``handOffStopMessage``
will stop all entities in that shard by sending the specified ``handOffStopMessage``
(default ``PoisonPill``) to them. When all entities have been terminated the ``ShardRegion``
owning the entities will acknowledge the handoff as completed to the coordinator.
owning the entities will acknowledge the handoff as completed to the coordinator.
Thereafter the coordinator will reply to requests for the location of
the shard and thereby allocate a new home for the shard and then buffered messages in the
``ShardRegion`` actors are delivered to the new location. This means that the state of the entities

@ -170,7 +170,7 @@ must be to begin the rebalancing. This strategy can be replaced by an applicatio
implementation.

The state of shard locations in the ``ShardCoordinator`` is persistent (durable) with
:ref:`distributed_data_scala` or :ref:`persistence-scala` to survive failures. When a crashed or
:ref:`distributed_data_scala` or :ref:`persistence-scala` to survive failures. When a crashed or
unreachable coordinator node has been removed (via down) from the cluster a new ``ShardCoordinator`` singleton
actor will take over and the state is recovered. During such a failure period shards
with known location are still available, while messages for new (unknown) shards

@ -211,7 +211,7 @@ This mode is enabled with configuration (enabled by default)::

  akka.cluster.sharding.state-store-mode = ddata

The state of the ``ShardCoordinator`` will be replicated inside a cluster by the
The state of the ``ShardCoordinator`` will be replicated inside a cluster by the
:ref:`distributed_data_scala` module with ``WriteMajority``/``ReadMajority`` consistency.
The state of the coordinator is not durable, it's not stored to disk. When all nodes in
the cluster have been stopped the state is lost and not needed any more.

@ -222,10 +222,10 @@ disk. The stored entities are started also after a complete cluster restart.
Cluster Sharding is using its own Distributed Data ``Replicator`` per node role. In this way you can use a subset of
all nodes for some entity types and another subset for other entity types. Each such replicator has a name
that contains the node role and therefore the role configuration must be the same on all nodes in the
cluster, i.e. you can't change the roles when performing a rolling upgrade.

The settings for Distributed Data are configured in the section
``akka.cluster.sharding.distributed-data``. It's not possible to have different
cluster, i.e. you can't change the roles when performing a rolling upgrade.

The settings for Distributed Data are configured in the section
``akka.cluster.sharding.distributed-data``. It's not possible to have different
``distributed-data`` settings for different sharding entity types.

Persistence Mode

@ -244,7 +244,7 @@ Startup after minimum number of members
It's good to use Cluster Sharding with the Cluster setting ``akka.cluster.min-nr-of-members`` or
``akka.cluster.role.<role-name>.min-nr-of-members``. That will defer the allocation of the shards
until at least that number of regions have been started and registered to the coordinator. This
avoids having many shards allocated to the first region that registers, only to later be
avoids having many shards allocated to the first region that registers, only to later be
rebalanced to other nodes.

See :ref:`min-members_scala` for more information about ``min-nr-of-members``.
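
For example, a rough sketch of deferring shard allocation until at least three regions are
up (the number 3 is an assumption made for this sketch)::

  akka.cluster.min-nr-of-members = 3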

@ -277,16 +277,16 @@ Remembering Entities
--------------------

The list of entities in each ``Shard`` can be made persistent (durable) by setting
the ``rememberEntities`` flag to true in ``ClusterShardingSettings`` when calling
``ClusterSharding.start``. When configured to remember entities, whenever a ``Shard``
the ``rememberEntities`` flag to true in ``ClusterShardingSettings`` when calling
``ClusterSharding.start``. When configured to remember entities, whenever a ``Shard``
is rebalanced onto another node or recovers after a crash it will recreate all the
entities which were previously running in that ``Shard``. To permanently stop entities,
entities which were previously running in that ``Shard``. To permanently stop entities,
a ``Passivate`` message must be sent to the parent of the entity actor, otherwise the
entity will be automatically restarted after the entity restart backoff specified in
entity will be automatically restarted after the entity restart backoff specified in
the configuration.
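
A rough sketch of enabling this when starting sharding (the entity type name, props and
extractor functions are assumptions carried over from the example above)::

  import akka.actor.Props
  import akka.cluster.sharding.{ ClusterSharding, ClusterShardingSettings }

  val counterRegion = ClusterSharding(system).start(
    typeName = "Counter",
    entityProps = Props[Counter],
    settings = ClusterShardingSettings(system).withRememberEntities(true),
    extractEntityId = extractEntityId,
    extractShardId = extractShardId)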

When :ref:`Distributed Data mode <cluster_sharding_mode_scala>` is used the identifiers of the entities are
stored in :ref:`ddata_durable_scala` of Distributed Data. You may want to change the
stored in :ref:`ddata_durable_scala` of Distributed Data. You may want to change the
configuration of the ``akka.cluster.sharding.distributed-data.durable.lmdb.dir``, since
the default directory contains the remote port of the actor system. If using a dynamically
assigned port (0) it will be different each time and the previously stored data will not

@ -300,7 +300,7 @@ using a ``Passivate``.
Note that the state of the entities themselves will not be restored unless they have been made persistent,
e.g. with :ref:`persistence-scala`.

The performance cost of ``rememberEntities`` is rather high when starting/stopping entities and when
The performance cost of ``rememberEntities`` is rather high when starting/stopping entities and when
shards are rebalanced. This cost increases with the number of entities per shard and we currently don't
recommend using it with more than 10000 entities per shard.

@ -327,7 +327,7 @@ You can send the message ``ShardRegion.GracefulShutdown`` message to the ``Shard
During this period other regions will buffer messages for those shards in the same way as when a rebalance is
triggered by the coordinator. When the shards have been stopped the coordinator will allocate these shards elsewhere.

This is performed automatically by the :ref:`coordinated-shutdown-scala` and is therefore part of the
This is performed automatically by the :ref:`coordinated-shutdown-scala` and is therefore part of the
graceful leaving process of a cluster member.

.. _RemoveInternalClusterShardingData-scala:

@ -341,7 +341,7 @@ Note that this is not application data.

There is a utility program ``akka.cluster.sharding.RemoveInternalClusterShardingData``
that removes this data.

.. warning::

   Never use this program while there are running Akka Cluster nodes that are

@ -355,11 +355,11 @@ and there was a network partition.
.. warning::
   **Don't use Cluster Sharding together with Automatic Downing**,
   since it allows the cluster to split up into two separate clusters, which in turn will result
   in *multiple shards and entities* being started, one in each separate cluster!
   in *multiple shards and entities* being started, one in each separate cluster!
   See :ref:`automatic-vs-manual-downing-scala`.

Use this program as a standalone Java main program::

  java -classpath <jar files, including akka-cluster-sharding>
    akka.cluster.sharding.RemoveInternalClusterShardingData
      -2.3 entityType1 entityType2 entityType3

@ -405,7 +405,7 @@ if needed.
.. includecode:: ../../../akka-cluster-sharding/src/main/resources/reference.conf#sharding-ext-config

A custom shard allocation strategy can be defined in an optional parameter to
``ClusterSharding.start``. See the API documentation of ``ShardAllocationStrategy`` for details of
``ClusterSharding.start``. See the API documentation of ``ShardAllocationStrategy`` for details of
how to implement a custom shard allocation strategy.

@ -44,7 +44,7 @@ the oldest node in the cluster and resolve the singleton's ``ActorRef`` by expli
singleton's ``actorSelection`` the ``akka.actor.Identify`` message and waiting for it to reply.
This is performed periodically if the singleton doesn't reply within a certain (configurable) time.
Given the implementation, there might be periods of time during which the ``ActorRef`` is unavailable,
e.g., when a node leaves the cluster. In these cases, the proxy will buffer the messages sent to the
e.g., when a node leaves the cluster. In these cases, the proxy will buffer the messages sent to the
singleton and then deliver them when the singleton is finally available. If the buffer is full
the ``ClusterSingletonProxy`` will drop old messages when new messages are sent via the proxy.
The size of the buffer is configurable and it can be disabled by using a buffer size of 0.
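
A rough sketch of tuning (or, with 0, disabling) this buffer in configuration; the value
1000 is an assumption made for this sketch::

  akka.cluster.singleton-proxy.buffer-size = 1000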

@ -63,7 +63,7 @@ This pattern may seem to be very tempting to use at first, but it has several dr
* the cluster singleton may quickly become a *performance bottleneck*,
* you cannot rely on the cluster singleton to be *non-stop* available — e.g. when the node on which the singleton has
  been running dies, it will take a few seconds for this to be noticed and the singleton be migrated to another node,
* in the case of a *network partition* appearing in a Cluster that is using Automatic Downing (see Auto Downing docs for
* in the case of a *network partition* appearing in a Cluster that is using Automatic Downing (see Auto Downing docs for
  :ref:`automatic-vs-manual-downing-scala`),
  it may happen that the isolated clusters each decide to spin up their own singleton, meaning that there might be multiple
  singletons running in the system, yet the Clusters have no way of finding out about them (because of the partition).

@ -106,8 +106,7 @@ configured proxy.

.. includecode:: ../../../akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala#create-singleton-proxy

A more comprehensive sample is available in the `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
tutorial named `Distributed workers with Akka and Scala! <http://www.lightbend.com/activator/template/akka-distributed-workers>`_.
A more comprehensive sample is available in the tutorial named `Distributed workers with Akka and Scala! <https://github.com/typesafehub/activator-akka-distributed-workers-java>`_.

Dependencies
------------

@ -130,18 +129,18 @@ maven::
Configuration
-------------

The following configuration properties are read by the ``ClusterSingletonManagerSettings``
when created with an ``ActorSystem`` parameter. It is also possible to amend the ``ClusterSingletonManagerSettings``
or create it from another config section with the same layout as below. ``ClusterSingletonManagerSettings`` is
a parameter to the ``ClusterSingletonManager.props`` factory method, i.e. each singleton can be configured
The following configuration properties are read by the ``ClusterSingletonManagerSettings``
when created with an ``ActorSystem`` parameter. It is also possible to amend the ``ClusterSingletonManagerSettings``
or create it from another config section with the same layout as below. ``ClusterSingletonManagerSettings`` is
a parameter to the ``ClusterSingletonManager.props`` factory method, i.e. each singleton can be configured
with different settings if needed.

.. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#singleton-config
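
A rough sketch of passing amended settings to the factory method (the singleton actor class,
its name and the role are assumptions made for this sketch)::

  import akka.actor.{ PoisonPill, Props }
  import akka.cluster.singleton.{ ClusterSingletonManager, ClusterSingletonManagerSettings }

  system.actorOf(
    ClusterSingletonManager.props(
      singletonProps = Props[MySingletonActor],
      terminationMessage = PoisonPill,
      settings = ClusterSingletonManagerSettings(system).withRole("worker")),
    name = "singletonManager")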

The following configuration properties are read by the ``ClusterSingletonProxySettings``
when created with an ``ActorSystem`` parameter. It is also possible to amend the ``ClusterSingletonProxySettings``
or create it from another config section with the same layout as below. ``ClusterSingletonProxySettings`` is
a parameter to the ``ClusterSingletonProxy.props`` factory method, i.e. each singleton proxy can be configured
The following configuration properties are read by the ``ClusterSingletonProxySettings``
when created with an ``ActorSystem`` parameter. It is also possible to amend the ``ClusterSingletonProxySettings``
or create it from another config section with the same layout as below. ``ClusterSingletonProxySettings`` is
a parameter to the ``ClusterSingletonProxy.props`` factory method, i.e. each singleton proxy can be configured
with different settings if needed.

.. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#singleton-proxy-config

@ -79,9 +79,10 @@ An actor that uses the cluster extension may look like this:
The actor registers itself as subscriber of certain cluster events. It receives events corresponding to the current state
of the cluster when the subscription starts and then it receives events for changes that happen in the cluster.

The easiest way to run this example yourself is to download `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
and open the tutorial named `Akka Cluster Samples with Scala <http://www.lightbend.com/activator/template/akka-sample-cluster-scala>`_.
It contains instructions of how to run the ``SimpleClusterApp``.
The easiest way to run this example yourself is to download the ready to run
`Akka Cluster Sample with Scala <@exampleCodeService@/akka-samples-cluster-scala>`_
together with the tutorial. It contains instructions on how to run the ``SimpleClusterApp``.
The source code of this sample can be found in the `Akka Samples Repository <@samples@/akka-sample-cluster-scala>`_.

Joining to Seed Nodes
^^^^^^^^^^^^^^^^^^^^^

@ -169,7 +170,7 @@ It can also be performed programmatically with ``Cluster(system).down(address)``

A pre-packaged solution for the downing problem is provided by
`Split Brain Resolver <http://developer.lightbend.com/docs/akka-commercial-addons/current/split-brain-resolver.html>`_,
which is part of the `Lightbend Reactive Platform <http://www.lightbend.com/platform>`_.
which is part of the `Lightbend Reactive Platform <http://www.lightbend.com/platform>`_.
If you don’t use RP, you should still carefully read the `documentation <http://developer.lightbend.com/docs/akka-commercial-addons/current/split-brain-resolver.html>`_
of the Split Brain Resolver and make sure that the solution you are using handles the concerns
described there.

@ -218,13 +219,13 @@ It can also be performed programmatically with:
Note that this command can be issued to any member in the cluster, not necessarily the
one that is leaving.
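
For instance, a node can ask the cluster to remove itself; a minimal sketch, assuming the
usual ``system`` in scope::

  import akka.cluster.Cluster

  val cluster = Cluster(system)
  cluster.leave(cluster.selfAddress)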

The :ref:`coordinated-shutdown-scala` will automatically run when the cluster node sees itself as
``Exiting``, i.e. leaving from another node will trigger the shutdown process on the leaving node.
Tasks for graceful leaving of the cluster including graceful shutdown of Cluster Singletons and
Cluster Sharding are added automatically when Akka Cluster is used, i.e. running the shutdown
process will also trigger the graceful leaving if it's not already in progress.
The :ref:`coordinated-shutdown-scala` will automatically run when the cluster node sees itself as
``Exiting``, i.e. leaving from another node will trigger the shutdown process on the leaving node.
Tasks for graceful leaving of the cluster including graceful shutdown of Cluster Singletons and
Cluster Sharding are added automatically when Akka Cluster is used, i.e. running the shutdown
process will also trigger the graceful leaving if it's not already in progress.

Normally this is handled automatically, but in case of network failures during this process it might still
Normally this is handled automatically, but in case of network failures during this process it might still
be necessary to set the node’s status to ``Down`` in order to complete the removal.

.. _weakly_up_scala:

@ -236,7 +237,7 @@ If a node is ``unreachable`` then gossip convergence is not possible and therefo
``leader`` actions are also not possible. However, we still might want new nodes to join
the cluster in this scenario.

``Joining`` members will be promoted to ``WeaklyUp`` and become part of the cluster if
``Joining`` members will be promoted to ``WeaklyUp`` and become part of the cluster if
convergence can't be reached. Once gossip convergence is reached, the leader will move ``WeaklyUp``
members to ``Up``.

@ -332,9 +333,10 @@ network failures and JVM crashes, in addition to graceful termination of watched
actor. Death watch generates the ``Terminated`` message to the watching actor when the
unreachable cluster node has been downed and removed.

The `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_ tutorial named
`Akka Cluster Samples with Scala <http://www.lightbend.com/activator/template/akka-sample-cluster-scala>`_.
contains the full source code and instructions of how to run the **Worker Dial-in Example**.
The easiest way to run the **Worker Dial-in Example** yourself is to download the ready to run
`Akka Cluster Sample with Scala <@exampleCodeService@/akka-samples-cluster-scala>`_
together with the tutorial. It contains instructions on how to run the **Worker Dial-in Example** sample.
The source code of this sample can be found in the `Akka Samples Repository <@samples@/akka-sample-cluster-scala>`_.

Node Roles
^^^^^^^^^^

@ -624,9 +626,10 @@ The router is configured with ``routees.paths``:::
This means that user requests can be sent to ``StatsService`` on any node and it will use
``StatsWorker`` on all nodes.

The `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_ tutorial named
`Akka Cluster Samples with Scala <http://www.lightbend.com/activator/template/akka-sample-cluster-scala>`_.
contains the full source code and instructions of how to run the **Router Example with Group of Routees**.
The easiest way to run the **Router Example with Group of Routees** yourself is to download the ready to run
`Akka Cluster Sample with Scala <@exampleCodeService@/akka-samples-cluster-scala>`_
together with the tutorial. It contains instructions on how to run the **Router Example with Group of Routees** sample.
The source code of this sample can be found in the `Akka Samples Repository <@samples@/akka-sample-cluster-scala>`_.

Router with Pool of Remote Deployed Routees
-------------------------------------------

@ -700,9 +703,10 @@ All nodes start ``ClusterSingletonProxy`` and the ``ClusterSingletonManager``. T
  }
}

The `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_ tutorial named
`Akka Cluster Samples with Scala <http://www.lightbend.com/activator/template/akka-sample-cluster-scala>`_.
contains the full source code and instructions of how to run the **Router Example with Pool of Remote Deployed Routees**.
The easiest way to run the **Router Example with Pool of Remote Deployed Routees** yourself is to download the ready to run
`Akka Cluster Sample with Scala <@exampleCodeService@/akka-samples-cluster-scala>`_
together with the tutorial. It contains instructions on how to run the **Router Example with Pool of Remote Deployed Routees** sample.
The source code of this sample can be found in the `Akka Samples Repository <@samples@/akka-sample-cluster-scala>`_.

Cluster Metrics
^^^^^^^^^^^^^^^

@ -775,7 +779,7 @@ Management
HTTP
----

Information and management of the cluster is available with an HTTP API.
Information and management of the cluster is available with an HTTP API.
See documentation of `akka/akka-cluster-management <https://github.com/akka/akka-cluster-management>`_.

.. _cluster_jmx_scala:

@ -803,7 +807,7 @@ Command Line
------------

.. warning::
   **Deprecation warning** - The command line script has been deprecated and is scheduled for removal
   **Deprecation warning** - The command line script has been deprecated and is scheduled for removal
   in the next major version. Use the :ref:`cluster_http_scala` API with `curl <https://curl.haxx.se/>`_
   or similar instead.

@ -7,7 +7,7 @@

*Akka Distributed Data* is useful when you need to share data between nodes in an
Akka Cluster. The data is accessed with an actor providing a key-value store like API.
The keys are unique identifiers with type information of the data values. The values
The keys are unique identifiers with type information of the data values. The values
are *Conflict Free Replicated Data Types* (CRDTs).

All data entries are spread to all nodes, or nodes with a certain role, in the cluster

@ -21,22 +21,22 @@ Several useful data types for counters, sets, maps and registers are provided an
you can also implement your own custom data types.

It is eventually consistent and geared toward providing high read and write availability
(partition tolerance), with low latency. Note that in an eventually consistent system a read may return an
(partition tolerance), with low latency. Note that in an eventually consistent system a read may return an
out-of-date value.

Using the Replicator
====================

The ``akka.cluster.ddata.Replicator`` actor provides the API for interacting with the data.
The ``Replicator`` actor must be started on each node in the cluster, or group of nodes tagged
with a specific role. It communicates with other ``Replicator`` instances with the same path
The ``Replicator`` actor must be started on each node in the cluster, or group of nodes tagged
with a specific role. It communicates with other ``Replicator`` instances with the same path
(without address) that are running on other nodes. For convenience it can be used with the
``akka.cluster.ddata.DistributedData`` extension but it can also be started as an ordinary
actor using the ``Replicator.props``. If it is started as an ordinary actor it is important
that it is given the same name, started on the same path, on all nodes.
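
A minimal sketch of obtaining the replicator through the extension (assuming the usual
``system`` in scope)::

  import akka.cluster.ddata.DistributedData

  val replicator = DistributedData(system).replicator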

Cluster members with status :ref:`WeaklyUp <weakly_up_scala>`,
Cluster members with status :ref:`WeaklyUp <weakly_up_scala>`,
will participate in Distributed Data. This means that the data will be replicated to the
:ref:`WeaklyUp <weakly_up_scala>` nodes with the background gossip protocol. Note that it
will not participate in any actions where the consistency mode is to read/write from all

@ -44,9 +44,9 @@ nodes or the majority of nodes. The :ref:`WeaklyUp <weakly_up_scala>` node is no
as part of the cluster. So 3 nodes + 5 :ref:`WeaklyUp <weakly_up_scala>` is essentially a
3 node cluster as far as consistent actions are concerned.

Below is an example of an actor that schedules tick messages to itself and for each tick
Below is an example of an actor that schedules tick messages to itself and for each tick
adds or removes elements from an ``ORSet`` (observed-remove set). It also subscribes to
changes of this.
changes of this.

.. includecode:: code/docs/ddata/DistributedDataDocSpec.scala#data-bot

@ -84,10 +84,10 @@ You supply a write consistency level which has the following meaning:
When you specify to write to ``n`` out of ``x`` nodes, the update will first replicate to ``n`` nodes. If there are not
enough Acks after 1/5th of the timeout, the update will be replicated to ``n`` other nodes. If there are fewer than ``n`` nodes
left all of the remaining nodes are used. Reachable nodes are preferred over unreachable nodes.

Note that ``WriteMajority`` has a ``minCap`` parameter that is useful to specify to achieve better safety for small clusters.

.. includecode:: code/docs/ddata/DistributedDataDocSpec.scala#update
.. includecode:: code/docs/ddata/DistributedDataDocSpec.scala#update

As a reply to the ``Update`` a ``Replicator.UpdateSuccess`` is sent to the sender of the
``Update`` if the value was successfully replicated according to the supplied consistency

@ -112,7 +112,7 @@ or maintain local correlation data structures.
.. includecode:: code/docs/ddata/DistributedDataDocSpec.scala#update-request-context

.. _replicator_get_scala:

Get
---

@ -160,7 +160,7 @@ The consistency level that is supplied in the :ref:`replicator_update_scala` and
specifies per request how many replicas must respond successfully to a write and read request.

For low latency reads you use ``ReadLocal`` with the risk of retrieving stale data, i.e. updates
from other nodes might not be visible yet.
from other nodes might not be visible yet.

When using ``WriteLocal`` the update is only written to the local replica and then disseminated
in the background with the gossip protocol, which can take a few seconds to spread to all nodes.

@ -172,7 +172,7 @@ and you will not receive the value.
If consistency is important, you can ensure that a read always reflects the most recent
write by using the following formula::

  (nodes_written + nodes_read) > N
  (nodes_written + nodes_read) > N

where N is the total number of nodes in the cluster, or the number of nodes with the role that is
used for the ``Replicator``.
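
For instance, in a 7-node cluster the condition holds when writing to 4 nodes and reading
from 4 nodes; a rough sketch of the corresponding consistency levels (the 3 second timeout
is an assumption made for this sketch)::

  import scala.concurrent.duration._
  import akka.cluster.ddata.Replicator.{ ReadFrom, WriteTo }

  val writeTo4 = WriteTo(n = 4, timeout = 3.seconds)
  val readFrom4 = ReadFrom(n = 4, timeout = 3.seconds)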

@ -182,7 +182,7 @@ and reading from 4 nodes, or writing to 5 nodes and reading from 3 nodes.

By combining ``WriteMajority`` and ``ReadMajority`` levels a read always reflects the most recent write.
The ``Replicator`` writes and reads to a majority of replicas, i.e. **N / 2 + 1**. For example,
in a 5 node cluster it writes to 3 nodes and reads from 3 nodes. In a 6 node cluster it writes
in a 5 node cluster it writes to 3 nodes and reads from 3 nodes. In a 6 node cluster it writes
to 4 nodes and reads from 4 nodes.

You can define a minimum number of nodes for ``WriteMajority`` and ``ReadMajority``,

@ -193,11 +193,11 @@ If the minCap is higher then **N / 2 + 1** the minCap will be used.
For example, if the ``minCap`` is 5, ``WriteMajority`` and ``ReadMajority`` for a cluster of 3 nodes will be 3, for
a cluster of 6 nodes it will be 5 and for a cluster of 12 nodes it will be 7 (**N / 2 + 1**).

For small clusters (<7) the risk of membership changes between a ``WriteMajority`` and ``ReadMajority``
For small clusters (<7) the risk of membership changes between a ``WriteMajority`` and ``ReadMajority``
is rather high and then the nice properties of combining majority write and reads are not
guaranteed. Therefore the ``ReadMajority`` and ``WriteMajority`` have a ``minCap`` parameter that
is useful to specify to achieve better safety for small clusters. It means that if the cluster
size is smaller than the majority size it will use the ``minCap`` number of nodes but at most
guaranteed. Therefore the ``ReadMajority`` and ``WriteMajority`` have a ``minCap`` parameter that
is useful to specify to achieve better safety for small clusters. It means that if the cluster
size is smaller than the majority size it will use the ``minCap`` number of nodes but at most
the total size of the cluster.
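
A rough sketch of such levels (the 3 second timeout is an assumption made for this sketch)::

  import scala.concurrent.duration._
  import akka.cluster.ddata.Replicator.{ ReadMajority, WriteMajority }

  // majority reads/writes, but never fewer than 5 nodes (capped by the cluster size)
  val writeMajority = WriteMajority(timeout = 3.seconds, minCap = 5)
  val readMajority = ReadMajority(timeout = 3.seconds, minCap = 5)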

Here is an example of using ``WriteMajority`` and ``ReadMajority``:

@ -211,7 +211,7 @@ Here is an example of using ``WriteMajority`` and ``ReadMajority``:
In some rare cases, when performing an ``Update`` it is necessary to first try to fetch the latest data from
other nodes. That can be done by first sending a ``Get`` with ``ReadMajority`` and then continuing with
the ``Update`` when the ``GetSuccess``, ``GetFailure`` or ``NotFound`` reply is received. This might be
needed when you need to base a decision on latest information or when removing entries from ``ORSet``
needed when you need to base a decision on latest information or when removing entries from ``ORSet``
or ``ORMap``. If an entry is added to an ``ORSet`` or ``ORMap`` from one node and removed from another
node the entry will only be removed if the added entry is visible on the node where the removal is
performed (hence the name observed-removed set).

@ -224,11 +224,11 @@ The following example illustrates how to do that:

*Caveat:* Even if you use ``WriteMajority`` and ``ReadMajority`` there is a small risk that you may
read stale data if the cluster membership has changed between the ``Update`` and the ``Get``.
For example, in a cluster of 5 nodes when you ``Update`` and that change is written to 3 nodes:
n1, n2, n3. Then 2 more nodes are added and a ``Get`` request is reading from 4 nodes, which
happens to be n4, n5, n6, n7, i.e. the value on n1, n2, n3 is not seen in the response of the
For example, in a cluster of 5 nodes when you ``Update`` and that change is written to 3 nodes:
n1, n2, n3. Then 2 more nodes are added and a ``Get`` request is reading from 4 nodes, which
happens to be n4, n5, n6, n7, i.e. the value on n1, n2, n3 is not seen in the response of the
``Get`` request.

Subscribe
---------

@ -268,10 +268,10 @@ to after receiving and transforming `DeleteSuccess`.

.. warning::

   As deleted keys continue to be included in the stored data on each node as well as in gossip
   messages, a continuous series of updates and deletes of top-level entities will result in
   growing memory usage until an ActorSystem runs out of memory. To use Akka Distributed Data
   where frequent adds and removes are required, you should use a fixed number of top-level data
   As deleted keys continue to be included in the stored data on each node as well as in gossip
   messages, a continuous series of updates and deletes of top-level entities will result in
   growing memory usage until an ActorSystem runs out of memory. To use Akka Distributed Data
   where frequent adds and removes are required, you should use a fixed number of top-level data
   types that support both updates and removals, for example ``ORMap`` or ``ORSet``.

.. _delta_crdt_scala:

@ -293,7 +293,7 @@ to nodes in different order than the causal order of the updates. For this examp
can result in that set ``{'a', 'b', 'd'}`` can be seen before element 'c' is seen. Eventually
it will be ``{'a', 'b', 'c', 'd'}``.

Note that the full state is occasionally also replicated for delta-CRDTs, for example when
Note that the full state is occasionally also replicated for delta-CRDTs, for example when
new nodes are added to the cluster or when deltas could not be propagated because
of network partitions or similar problems.

@ -320,7 +320,7 @@ Counters

``GCounter`` is a "grow only counter". It only supports increments, no decrements.

It works in a similar way as a vector clock. It keeps track of one counter per node and the total
It works in a similar way as a vector clock. It keeps track of one counter per node and the total
value is the sum of these counters. The ``merge`` is implemented by taking the maximum count for
each node.
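
A tiny sketch of that behaviour (``system`` and the counter received from another replica are
assumptions made for this sketch)::

  import akka.cluster.Cluster
  import akka.cluster.ddata.GCounter

  implicit val node = Cluster(system)

  // increment this node's slot by 3
  val counter = GCounter.empty + 3
  // merge keeps the maximum count per node, so concurrent increments are never lost
  val merged = counter.merge(counterFromOtherReplica)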

@ -389,14 +389,14 @@ such as the following specialized maps.
It is a specialized ``ORMap`` with ``PNCounter`` values.

``LWWMap`` (last writer wins map) is a specialized ``ORMap`` with ``LWWRegister`` (last writer wins register)
values.
values.

.. includecode:: code/docs/ddata/DistributedDataDocSpec.scala#ormultimap

When a data entry is changed the full state of that entry is replicated to other nodes, i.e.
when you update a map the whole map is replicated. Therefore, instead of using one ``ORMap``
with 1000 elements it is more efficient to split that up in 10 top level ``ORMap`` entries
with 100 elements each. Top level entries are replicated individually, which has the
with 1000 elements it is more efficient to split that up in 10 top level ``ORMap`` entries
with 100 elements each. Top level entries are replicated individually, which has the
trade-off that different entries may not be replicated at the same time and you may see
inconsistencies between related entries. Separate top level entries cannot be updated atomically
together.

@ -485,16 +485,16 @@ Note that the elements of the sets are sorted so the SHA-1 digests are the same
for the same elements.

You register the serializer in configuration:

.. includecode:: code/docs/ddata/DistributedDataDocSpec.scala#serializer-config

Using compression can sometimes be a good idea to reduce the data size. Gzip compression is
provided by the ``akka.cluster.ddata.protobuf.SerializationSupport`` trait:

.. includecode:: code/docs/ddata/protobuf/TwoPhaseSetSerializer.scala#compression

The two embedded ``GSet`` can be serialized as illustrated above, but in general when composing
new data types from the existing built in types it is better to make use of the existing
new data types from the existing built in types it is better to make use of the existing
serializer for those types. This can be done by declaring those as bytes fields in protobuf:

.. includecode:: ../../src/main/protobuf/TwoPhaseSetMessages.proto#twophaseset2

@ -511,12 +511,12 @@ look like for the ``TwoPhaseSet``:
Durable Storage
---------------

By default the data is only kept in memory. It is redundant since it is replicated to other nodes
in the cluster, but if you stop all nodes the data is lost, unless you have saved it
elsewhere.
By default the data is only kept in memory. It is redundant since it is replicated to other nodes
in the cluster, but if you stop all nodes the data is lost, unless you have saved it
elsewhere.

Entries can be configured to be durable, i.e. stored on local disk on each node. The stored data will be loaded
next time the replicator is started, i.e. when the actor system is restarted. This means data will survive as
next time the replicator is started, i.e. when the actor system is restarted. This means data will survive as
long as at least one node from the old cluster takes part in a new cluster. The keys of the durable entries
are configured with::

@ -528,10 +528,10 @@ All entries can be made durable by specifying::

  akka.cluster.distributed-data.durable.keys = ["*"]

`LMDB <https://symas.com/products/lightning-memory-mapped-database/>`_ is the default storage implementation. It is
possible to replace that with another implementation by implementing the actor protocol described in
`LMDB <https://symas.com/products/lightning-memory-mapped-database/>`_ is the default storage implementation. It is
possible to replace that with another implementation by implementing the actor protocol described in
``akka.cluster.ddata.DurableStore`` and defining the ``akka.cluster.distributed-data.durable.store-actor-class``
property for the new implementation.
property for the new implementation.

The location of the files for the data is configured with::

@ -545,33 +545,33 @@ The location of the files for the data is configured with::

When running in production you may want to configure the directory to a specific
path (alt 2), since the default directory contains the remote port of the
actor system to make the name unique. If using a dynamically assigned
port (0) it will be different each time and the previously stored data
actor system to make the name unique. If using a dynamically assigned
port (0) it will be different each time and the previously stored data
will not be loaded.

Making the data durable has of course a performance cost. By default, each update is flushed
to disk before the ``UpdateSuccess`` reply is sent. For better performance, but with the risk of losing
to disk before the ``UpdateSuccess`` reply is sent. For better performance, but with the risk of losing
the last writes if the JVM crashes, you can enable write behind mode. Changes are then accumulated during
a time period before it is written to LMDB and flushed to disk. Enabling write behind is especially
efficient when performing many writes to the same key, because it is only the last value for each key
that will be serialized and stored. The risk of losing writes if the JVM crashes is small since the
efficient when performing many writes to the same key, because it is only the last value for each key
that will be serialized and stored. The risk of losing writes if the JVM crashes is small since the
data is typically replicated to other nodes immediately according to the given ``WriteConsistency``.

::

  akka.cluster.distributed-data.lmdb.write-behind-interval = 200 ms

Note that you should be prepared to receive ``WriteFailure`` as reply to an ``Update`` of a
Note that you should be prepared to receive ``WriteFailure`` as reply to an ``Update`` of a
durable entry if the data could not be stored for some reason. When enabling ``write-behind-interval``
such errors will only be logged and ``UpdateSuccess`` will still be the reply to the ``Update``.

There is one important caveat when it comes to pruning of :ref:`crdt_garbage_scala` for durable data.
If an old data entry that was never pruned is injected and merged with existing data after
If an old data entry that was never pruned is injected and merged with existing data after
the pruning markers have been removed the value will not be correct. The time-to-live
of the markers is defined by the configuration
of the markers is defined by the configuration
``akka.cluster.distributed-data.durable.remove-pruning-marker-after`` and is in the magnitude of days.
This would be possible if a node with durable data didn't participate in the pruning
(e.g. it was shut down) and later started after this time. A node with durable data should not
(e.g. it was shut down) and later started after this time. A node with durable data should not
be stopped for a longer time than this duration and if it is joining again after this
duration its data should first be manually removed (from the lmdb directory).

@ -586,13 +586,13 @@ from one node it will associate the identifier of that node forever. That can be
for long running systems with many cluster nodes being added and removed. To solve this problem
the ``Replicator`` performs pruning of data associated with nodes that have been removed from the
cluster. Data types that need pruning have to implement the ``RemovedNodePruning`` trait. See the
API documentation of the ``Replicator`` for details.
API documentation of the ``Replicator`` for details.

Samples
=======

Several interesting samples are included and described in the `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
tutorial named `Akka Distributed Data Samples with Scala <http://www.lightbend.com/activator/template/akka-sample-distributed-data-scala>`_.
Several interesting samples are included and described in the
tutorial named `Akka Distributed Data Samples with Scala <@exampleCodeService@/akka-samples-distributed-data-scala>`_ (`source code <@samples@/akka-sample-distributed-data-scala>`_)

* Low Latency Voting Service
* Highly Available Shopping Cart

@ -611,7 +611,7 @@ all domains. Sometimes you need strong consistency.
It is not intended for *Big Data*. The number of top level entries should not exceed 100000.
When a new node is added to the cluster all these entries are transferred (gossiped) to the
new node. The entries are split up in chunks and all existing nodes collaborate in the gossip,
but it will take a while (tens of seconds) to transfer all entries and this means that you
but it will take a while (tens of seconds) to transfer all entries and this means that you
cannot have too many top level entries. The current recommended limit is 100000. We will
be able to improve this if needed, but the design is still not intended for billions of entries.

@ -620,7 +620,7 @@ All data is held in memory, which is another reason why it is not intended for *
When a data entry is changed the full state of that entry may be replicated to other nodes
if it doesn't support :ref:`delta_crdt_scala`. The full state is also replicated for delta-CRDTs,
for example when new nodes are added to the cluster or when deltas could not be propagated because
of network partitions or similar problems. This means that you cannot have too large
of network partitions or similar problems. This means that you cannot have too large
data entries, because then the remote message size will be too large.

Learn More about CRDTs

@ -654,8 +654,8 @@ maven::

Configuration
=============

The ``DistributedData`` extension can be configured with the following properties:

.. includecode:: ../../../akka-distributed-data/src/main/resources/reference.conf#distributed-data

@ -16,14 +16,14 @@ The ``DistributedPubSubMediator`` actor is supposed to be started on all nodes,
or all nodes with specified role, in the cluster. The mediator can be
started with the ``DistributedPubSub`` extension or as an ordinary actor.

The registry is eventually consistent, i.e. changes are not immediately visible at
The registry is eventually consistent, i.e. changes are not immediately visible at
other nodes, but typically they will be fully replicated to all other nodes after
a few seconds. Changes are only performed in the own part of the registry and those
a few seconds. Changes are only performed in the own part of the registry and those
changes are versioned. Deltas are disseminated in a scalable way to other nodes with
a gossip protocol.

Cluster members with status :ref:`WeaklyUp <weakly_up_scala>`,
will participate in Distributed Publish Subscribe, i.e. subscribers on nodes with
Cluster members with status :ref:`WeaklyUp <weakly_up_scala>`,
will participate in Distributed Publish Subscribe, i.e. subscribers on nodes with
``WeaklyUp`` status will receive published messages if the publisher and subscriber are on
the same side of a network partition.

@ -31,26 +31,26 @@ You can send messages via the mediator on any node to registered actors on
any other node.

There are two different modes of message delivery, explained in the sections
:ref:`distributed-pub-sub-publish-scala` and :ref:`distributed-pub-sub-send-scala` below.
:ref:`distributed-pub-sub-publish-scala` and :ref:`distributed-pub-sub-send-scala` below.

A more comprehensive sample is available in the `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
tutorial named `Akka Clustered PubSub with Scala! <http://www.lightbend.com/activator/template/akka-clustering>`_.
A more comprehensive sample is available in the
tutorial named `Akka Clustered PubSub with Scala! <https://github.com/typesafehub/activator-akka-clustering>`_.

.. _distributed-pub-sub-publish-scala:

Publish
-------

This is the true pub/sub mode. A typical usage of this mode is a chat room in an instant
This is the true pub/sub mode. A typical usage of this mode is a chat room in an instant
messaging application.
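
A rough sketch of the flow described below, as seen from inside an actor (the topic name
and the message are assumptions made for this sketch)::

  import akka.cluster.pubsub.{ DistributedPubSub, DistributedPubSubMediator }

  val mediator = DistributedPubSub(system).mediator
  // subscriber side: register self to the "chatroom" topic
  mediator ! DistributedPubSubMediator.Subscribe("chatroom", self)
  // publisher side: deliver to all subscribers of the topic
  mediator ! DistributedPubSubMediator.Publish("chatroom", "hello")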

Actors are registered to a named topic. This enables many subscribers on each node.
The message will be delivered to all subscribers of the topic.
Actors are registered to a named topic. This enables many subscribers on each node.
The message will be delivered to all subscribers of the topic.

For efficiency the message is sent over the wire only once per node (that has a matching topic),
and then delivered to all subscribers of the local topic representation.

You register actors to the local mediator with ``DistributedPubSubMediator.Subscribe``.
You register actors to the local mediator with ``DistributedPubSubMediator.Subscribe``.
Successful ``Subscribe`` and ``Unsubscribe`` is acknowledged with
``DistributedPubSubMediator.SubscribeAck`` and ``DistributedPubSubMediator.UnsubscribeAck``
replies. The acknowledgment means that the subscription is registered, but it can still

@ -109,15 +109,15 @@ Send
This is a point-to-point mode where each message is delivered to one destination,
but you still do not have to know where the destination is located.
A typical usage of this mode is private chat to one other user in an instant messaging
application. It can also be used for distributing tasks to registered workers, like a
application. It can also be used for distributing tasks to registered workers, like a
cluster aware router where the routees dynamically can register themselves.

The message will be delivered to one recipient with a matching path, if any such
exists in the registry. If several entries match the path because it has been registered
on several nodes the message will be sent via the supplied ``RoutingLogic`` (default random)
on several nodes the message will be sent via the supplied ``RoutingLogic`` (default random)
to one destination. The sender() of the message can specify that local affinity is preferred,
i.e. the message is sent to an actor in the same local actor system as the used mediator actor,
if any such exists, otherwise route to any other matching entry.
if any such exists, otherwise route to any other matching entry.

You register actors to the local mediator with ``DistributedPubSubMediator.Put``.
The ``ActorRef`` in ``Put`` must belong to the same local actor system as the mediator.
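
A rough sketch of the ``Put``/``Send`` flow (the destination actor and its path are
assumptions made for this sketch)::

  import akka.cluster.pubsub.{ DistributedPubSub, DistributedPubSubMediator }

  val mediator = DistributedPubSub(system).mediator
  // register a destination actor under its actor path (local system only)
  mediator ! DistributedPubSubMediator.Put(destinationActor)
  // deliver to one matching destination somewhere in the cluster
  mediator ! DistributedPubSubMediator.Send("/user/destination", "hello", localAffinity = true)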

@ -150,11 +150,11 @@ It can send messages to the path from anywhere in the cluster:
.. includecode:: ../../../akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala#send-message

It is also possible to broadcast messages to the actors that have been registered with
``Put``. Send a ``DistributedPubSubMediator.SendToAll`` message to the local mediator and the wrapped message
``Put``. Send a ``DistributedPubSubMediator.SendToAll`` message to the local mediator and the wrapped message
will then be delivered to all recipients with a matching path. Actors with
the same path, without address information, can be registered on different nodes.
On each node there can only be one such actor, since the path is unique within one
local actor system.
local actor system.

Typical usage of this mode is to broadcast messages to all replicas
with the same path, e.g. 3 actors on different nodes that all perform the same actions,

@ -493,6 +493,7 @@ zero.
Examples
========

A bigger FSM example contrasted with Actor's :meth:`become`/:meth:`unbecome` can be found in
the `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_ template named
`Akka FSM in Scala <http://www.lightbend.com/activator/template/akka-sample-fsm-scala>`_
A bigger FSM example contrasted with Actor's :meth:`become`/:meth:`unbecome` can be
downloaded as a ready to run `Akka FSM sample <@exampleCodeService@/akka-samples-fsm-scala>`_
together with a tutorial. The source code of this sample can be found in the
`Akka Samples Repository <@samples@/akka-sample-fsm-scala>`_.

@ -3,8 +3,9 @@ The Obligatory Hello World
##########################

The actor based version of the tough problem of printing a
well-known greeting to the console is introduced in a `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
tutorial named `Akka Main in Scala <http://www.lightbend.com/activator/template/akka-sample-main-scala>`_.
well-known greeting to the console is introduced in a ready to run `Akka Main sample <@exampleCodeService@/akka-samples-main-scala>`_
together with a tutorial. The source code of this sample can be found in the
`Akka Samples Repository <@samples@/akka-sample-main-scala>`_.

The tutorial illustrates the generic launcher class :class:`akka.Main` which expects only
one command line argument: the class name of the application’s main actor. This

@ -12,7 +13,9 @@ main method will then create the infrastructure needed for running the actors,
start the given main actor and arrange for the whole application to shut down
once the main actor terminates.
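
A sketch of launching it from the command line (the main actor class name is an assumption
made for this sketch)::

  java -classpath <all Akka and application jars> akka.Main com.example.HelloActor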

There is also another `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
tutorial in the same problem domain that is named `Hello Akka! <http://www.lightbend.com/activator/template/hello-akka>`_.
It describes the basics of Akka in more depth.
There is also a `Giter8 <http://www.foundweekends.org/giter8/>`_ template in the same problem domain
that is named `Hello Akka! <https://github.com/akka/hello-akka.g8>`_.
It describes the basics of Akka in more depth. If you have `sbt` already installed, you can create a project
from this template by running::

  sbt new akka/hello-akka.g8

@ -103,16 +103,17 @@ about successful state changes by publishing events.
When persisting events with ``persist`` it is guaranteed that the persistent actor will not receive further commands between
the ``persist`` call and the execution(s) of the associated event handler. This also holds for multiple ``persist``
calls in context of a single command. Incoming messages are :ref:`stashed <internal-stash-scala>` until the ``persist``
is completed.
is completed.

If persistence of an event fails, ``onPersistFailure`` will be invoked (logging the error by default),
and the actor will unconditionally be stopped. If persistence of an event is rejected before it is
stored, e.g. due to serialization error, ``onPersistRejected`` will be invoked (logging a warning
by default) and the actor continues with the next message.
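
A minimal sketch of that guarantee inside a ``PersistentActor`` (``Cmd``, ``Evt``, ``Ack`` and
``updateState`` are assumptions made for this sketch)::

  override def receiveCommand: Receive = {
    case Cmd(data) =>
      persist(Evt(data)) { event =>
        // runs after the event has been stored; no other command
        // was processed between the persist call and this handler
        updateState(event)
        sender() ! Ack
      }
  }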

The easiest way to run this example yourself is to download `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_
and open the tutorial named `Akka Persistence Samples with Scala <http://www.lightbend.com/activator/template/akka-sample-persistence-scala>`_.
It contains instructions on how to run the ``PersistentActorExample``.
The easiest way to run this example yourself is to download the ready to run
`Akka Persistence Sample with Scala <@exampleCodeService@/akka-samples-persistence-scala>`_
together with the tutorial. It contains instructions on how to run the ``PersistentActorExample``.
The source code of this sample can be found in the `Akka Samples Repository <@samples@/akka-sample-persistence-scala>`_.

.. note::

@ -159,7 +160,7 @@ Recovery customization
^^^^^^^^^^^^^^^^^^^^^^

Applications may also customise how recovery is performed by returning a customised ``Recovery`` object
in the ``recovery`` method of a ``PersistentActor``.
in the ``recovery`` method of a ``PersistentActor``.

To skip loading snapshots and replay all events you can use ``SnapshotSelectionCriteria.None``.
This can be useful if the snapshot serialization format has changed in an incompatible way.

@ -167,10 +168,10 @@ It should typically not be used when events have been deleted.

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#recovery-no-snap

Another example, which can be fun for experiments but probably not in a real application, is setting an
upper bound to the replay which allows the actor to be replayed to a certain point "in the past"
instead of to its most up to date state. Note that after that it is a bad idea to persist new
events because a later recovery will probably be confused by the new events that follow the
Another example, which can be fun for experiments but probably not in a real application, is setting an
upper bound to the replay which allows the actor to be replayed to a certain point "in the past"
instead of to its most up to date state. Note that after that it is a bad idea to persist new
events because a later recovery will probably be confused by the new events that follow the
events that were previously skipped.

.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#recovery-custom
@ -202,34 +203,34 @@ is called (logging the error by default) and the actor will be stopped.
|
|||
|
||||
.. _internal-stash-scala:
|
||||
|
||||
Internal stash
|
||||
Internal stash
|
||||
--------------
|
||||
|
||||
The persistent actor has a private :ref:`stash <stash-scala>` for internally caching incoming messages during
:ref:`recovery <recovery-scala>` or while the ``persist``/``persistAll`` methods are persisting events. You can still use/inherit from the
``Stash`` interface. The internal stash cooperates with the normal stash by hooking into the ``unstashAll`` method and
making sure messages are unstashed properly to the internal stash to maintain ordering guarantees.

You should be careful to not send more messages to a persistent actor than it can keep up with, otherwise the number
of stashed messages will grow without bounds. It can be wise to protect against ``OutOfMemoryError`` by defining a
maximum stash capacity in the mailbox configuration::

  akka.actor.default-mailbox.stash-capacity=10000

Note that the stash capacity is per actor. If you have many persistent actors, e.g. when using cluster sharding,
you may need to define a small stash capacity to ensure that the total number of stashed messages in the system
doesn't consume too much memory. Additionally, the persistent actor defines three strategies to handle failure when the
internal stash capacity is exceeded. The default overflow strategy is the ``ThrowOverflowExceptionStrategy``, which
discards the current received message and throws a ``StashOverflowException``, causing an actor restart if the default
supervision strategy is used. You can override the ``internalStashOverflowStrategy`` method to return
``DiscardToDeadLetterStrategy`` or ``ReplyToStrategy`` for any "individual" persistent actor, or define the "default"
for all persistent actors by providing an FQCN, which must be a subclass of ``StashOverflowStrategyConfigurator``, in the
persistence configuration::

  akka.persistence.internal-stash-overflow-strategy=
    "akka.persistence.ThrowExceptionConfigurator"

The ``DiscardToDeadLetterStrategy`` strategy also has a pre-packaged companion configurator
``akka.persistence.DiscardConfigurator``.

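For instance, a hypothetical actor could opt into dropping overflowing messages to dead letters like this (a sketch, not a complete actor):

.. code-block:: scala

  import akka.persistence.{ DiscardToDeadLetterStrategy, PersistentActor, StashOverflowStrategy }

  class BoundedStashActor extends PersistentActor {
    override def persistenceId = "bounded-stash-1"

    // drop messages that overflow the internal stash to dead letters
    // instead of throwing a StashOverflowException
    override def internalStashOverflowStrategy: StashOverflowStrategy =
      DiscardToDeadLetterStrategy

    override def receiveRecover: Receive = { case _ => () }
    override def receiveCommand: Receive = { case cmd => persist(cmd) { _ => () } }
  }
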
You can also query the default strategy via the Akka persistence extension singleton::
@ -237,7 +238,7 @@ You can also query the default strategy via the Akka persistence extension singl

  Persistence(context.system).defaultInternalStashOverflowStrategy

.. note::
  A bounded mailbox should be avoided in a persistent actor, since messages coming from storage backends may
  be discarded. Use a bounded stash instead.

.. _persist-async-scala:
@ -334,10 +335,10 @@ While it is possible to nest mixed ``persist`` and ``persistAsync`` with keeping

it is not a recommended practice, as it may lead to overly complex nesting.

.. warning::
  While it is possible to nest ``persist`` calls within one another,
  it is *not* legal to call ``persist`` from any other thread than the Actor's message processing thread.
  For example, it is not legal to call ``persist`` from Futures! Doing so will break the guarantees
  that the persist methods aim to provide. Always call ``persist`` and ``persistAsync`` from within
  the Actor's receive block (or methods synchronously invoked from there).

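A common safe pattern is to pipe the asynchronous result back to the actor as a message and persist it from ``receiveCommand``; a sketch (message and names are made up):

.. code-block:: scala

  import akka.pattern.pipe
  import akka.persistence.PersistentActor
  import scala.concurrent.Future

  class LookupThenPersist extends PersistentActor {
    import context.dispatcher
    override def persistenceId = "lookup-1"

    override def receiveRecover: Receive = { case _ => () }

    override def receiveCommand: Receive = {
      case "command" =>
        // WRONG would be: Future { ... }.foreach(evt => persist(evt) { ... })
        // RIGHT: send the result back to self and persist from the receive block
        Future { "looked-up-data" }.pipeTo(self)
      case data: String =>
        persist(data) { _ => () } // persist is invoked from the actor's message processing thread
    }
  }
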
.. _failures-scala:
@ -865,7 +866,7 @@ The journal plugin class must have a constructor with one of these signatures:

The plugin section of the actor system's config will be passed in the config constructor parameter. The config path
of the plugin is passed in the ``String`` parameter.

The ``plugin-dispatcher`` is the dispatcher used for the plugin actor. If not specified, it defaults to
``akka.persistence.dispatchers.default-plugin-dispatcher``.

Don't run journal tasks/futures on the system default dispatcher, since that might starve other tasks.

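For illustration only, a skeleton journal plugin using the two-argument constructor might start out like this (the class name is made up; the abstract method signatures are those of ``AsyncWriteJournal``):

.. code-block:: scala

  import scala.collection.immutable
  import scala.concurrent.Future
  import scala.util.Try

  import akka.persistence.journal.AsyncWriteJournal
  import akka.persistence.{ AtomicWrite, PersistentRepr }
  import com.typesafe.config.Config

  // `config` is this plugin's section of the actor system configuration,
  // `configPath` is the path at which that section is defined
  class MyJournal(config: Config, configPath: String) extends AsyncWriteJournal {
    override def asyncWriteMessages(
      messages: immutable.Seq[AtomicWrite]): Future[immutable.Seq[Try[Unit]]] = ???
    override def asyncDeleteMessagesTo(persistenceId: String, toSequenceNr: Long): Future[Unit] = ???
    override def asyncReplayMessages(persistenceId: String, fromSequenceNr: Long,
      toSequenceNr: Long, max: Long)(recoveryCallback: PersistentRepr => Unit): Future[Unit] = ???
    override def asyncReadHighestSequenceNr(
      persistenceId: String, fromSequenceNr: Long): Future[Long] = ???
  }
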
@ -894,7 +895,7 @@ The snapshot store plugin class must have a constructor with one of these signat

The plugin section of the actor system's config will be passed in the config constructor parameter. The config path
of the plugin is passed in the ``String`` parameter.

The ``plugin-dispatcher`` is the dispatcher used for the plugin actor. If not specified, it defaults to
``akka.persistence.dispatchers.default-plugin-dispatcher``.

Don't run snapshot store tasks/futures on the system default dispatcher, since that might starve other tasks.

@ -88,7 +88,7 @@ The example above only illustrates the bare minimum of properties you have to ad

All settings are described in :ref:`remote-configuration-artery-scala`.

.. note::
  Aeron requires a 64-bit JVM to work reliably.

Canonical address
^^^^^^^^^^^^^^^^^
@ -249,8 +249,8 @@ remote system. This still however may pose a security risk, and one may want to

only a specific set of known actors by enabling the whitelist feature.

To enable remote deployment whitelisting set the ``akka.remote.deployment.enable-whitelist`` value to ``on``.
The list of allowed classes has to be configured on the "remote" system, in other words on the system onto which
others will be attempting to remote deploy Actors. That system, locally, knows best which Actors it should or
should not allow others to remote deploy onto it. The full settings section may for example look like this:

.. includecode:: ../../../akka-remote/src/test/scala/akka/remote/RemoteDeploymentWhitelistSpec.scala#whitelist-config
@ -269,7 +269,7 @@ so if network security is not considered as enough protection the classic remoti

Best practice is that Akka remoting nodes should only be accessible from the adjacent network.

It is also security best practice to :ref:`disable the Java serializer <disable-java-serializer-java-artery>` because of
its multiple `known attack surfaces <https://community.hpe.com/t5/Security-Research/The-perils-of-Java-deserialization/ba-p/6838995>`_.

Untrusted Mode
@ -291,12 +291,12 @@ a denial of service attack). :class:`PossiblyHarmful` covers the predefined

messages like :class:`PoisonPill` and :class:`Kill`, but it can also be added
as a marker trait to user-defined messages.

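For example (a hypothetical user-defined message):

.. code-block:: scala

  import akka.actor.PossiblyHarmful

  // marked as PossiblyHarmful, so it is dropped when received via remoting in untrusted mode
  final case class ShutdownNode(reason: String) extends PossiblyHarmful
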
.. warning::
  Untrusted mode does not give full protection against attacks by itself.
  It makes it slightly harder to perform malicious or unintended actions but
  it should be complemented with :ref:`disabled Java serializer <disable-java-serializer-scala-artery>`.
  Additional protection can be achieved when running in an untrusted network by
  network security (e.g. firewalls).

Messages sent with actor selection are by default discarded in untrusted mode, but
@ -342,7 +342,7 @@ may not be delivered to the destination:

* during a network partition and the Aeron session is broken, this is automatically recovered once the partition is over
* when sending too many messages without flow control and thereby filling up the outbound send queue (``outbound-message-queue-size`` config)
* if serialization or deserialization of a message fails (only that message will be dropped)
* if an unexpected exception occurs in the remoting infrastructure

In short, Actor message delivery is “at-most-once” as described in :ref:`message-delivery-reliability`
@ -350,39 +350,39 @@ In short, Actor message delivery is “at-most-once” as described in :ref:`mes

Some messages in Akka are called system messages and those cannot be dropped because that would result
in an inconsistent state between the systems. Such messages are used for essentially two features: remote death
watch and remote deployment. These messages are delivered by Akka remoting with an “exactly-once” guarantee by
confirming each message and resending unconfirmed messages. If a system message anyway cannot be delivered the
association with the destination system has irrecoverably failed, and Terminated is signaled for all watched
actors on the remote system. It is placed in a so-called quarantined state. Quarantine usually does not
happen if remote watch or remote deployment is not used.

Each ``ActorSystem`` instance has a unique identifier (UID), which is important for differentiating between
incarnations of a system when it is restarted with the same hostname and port. It is the specific
incarnation (UID) that is quarantined. The only way to recover from this state is to restart one of the
actor systems.

Messages that are sent to and received from a quarantined system will be dropped. However, it is possible to
send messages with ``actorSelection`` to the address of a quarantined system, which is useful to probe if the
system has been restarted.

An association will be quarantined when:

* A cluster node is removed from the cluster membership.
* The remote failure detector triggers, i.e. remote watch is used. This is different when :ref:`Akka Cluster <cluster_usage_scala>`
  is used. The unreachable observation by the cluster failure detector can go back to reachable if the network
  partition heals. A cluster member is not quarantined when the failure detector triggers.
* Overflow of the system message delivery buffer, e.g. because of too many ``watch`` requests at the same time
  (``system-message-buffer-size`` config).
* An unexpected exception occurs in the control subchannel of the remoting infrastructure.

The UID of the ``ActorSystem`` is exchanged in a two-way handshake when the first message is sent to
a destination. The handshake will be retried until the other system replies and no other messages will
pass through until the handshake is completed. If the handshake cannot be established within a timeout
(``handshake-timeout`` config) the association is stopped (freeing up resources). Queued messages will be
dropped if the handshake cannot be established. It will not be quarantined, because the UID is unknown.
A new handshake attempt will start when the next message is sent to the destination.

Handshake requests are actually also sent periodically to be able to establish a working connection
when the destination system has been restarted.

Watching Remote Actors
^^^^^^^^^^^^^^^^^^^^^^
@ -459,12 +459,12 @@ For more information please see :ref:`serialization-scala`.

ByteBuffer based serialization
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Artery introduces a new serialization mechanism which allows the ``ByteBufferSerializer`` to directly write into a
shared :class:`java.nio.ByteBuffer` instead of being forced to allocate and return an ``Array[Byte]`` for each serialized
message. For high-throughput messaging this API change can yield significant performance benefits, so we recommend
changing your serializers to use this new mechanism.

This new API also plays well with new versions of Google Protocol Buffers and other serialization libraries, which gained
the ability to serialize directly into and from ByteBuffers.

As the new feature only changes how bytes are read and written, and the rest of the serialization infrastructure
@ -474,13 +474,13 @@ Implementing an :class:`akka.serialization.ByteBufferSerializer` works the same

.. includecode:: ../../../akka-actor/src/main/scala/akka/serialization/Serializer.scala#ByteBufferSerializer

Implementing a serializer for Artery is therefore as simple as implementing this interface, and binding the serializer
as usual (which is explained in :ref:`serialization-scala`).

Implementations should typically extend ``SerializerWithStringManifest`` and in addition to the ``ByteBuffer`` based
``toBinary`` and ``fromBinary`` methods also implement the array based ``toBinary`` and ``fromBinary`` methods.
The array based methods will be used when ``ByteBuffer`` is not used, e.g. in Akka Persistence.

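A rough sketch of such a serializer (the class name, manifest and identifier are made up for illustration):

.. code-block:: scala

  import java.nio.ByteBuffer
  import java.nio.charset.StandardCharsets

  import akka.serialization.{ ByteBufferSerializer, SerializerWithStringManifest }

  // hypothetical serializer for a single String-based message type
  class StringSerializer extends SerializerWithStringManifest with ByteBufferSerializer {
    override def identifier: Int = 4711 // made up, must be unique per actor system
    override def manifest(o: AnyRef): String = "str"

    // ByteBuffer based API used by Artery
    override def toBinary(o: AnyRef, buf: ByteBuffer): Unit =
      buf.put(o.asInstanceOf[String].getBytes(StandardCharsets.UTF_8))

    override def fromBinary(buf: ByteBuffer, manifest: String): AnyRef = {
      val bytes = new Array[Byte](buf.remaining())
      buf.get(bytes)
      new String(bytes, StandardCharsets.UTF_8)
    }

    // array based API used everywhere else, e.g. Akka Persistence
    override def toBinary(o: AnyRef): Array[Byte] =
      o.asInstanceOf[String].getBytes(StandardCharsets.UTF_8)

    override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
      new String(bytes, StandardCharsets.UTF_8)
  }
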
Note that the array based methods can be implemented by delegation like this:

.. includecode:: code/docs/actor/ByteBufferSerializerDocSpec.scala#bytebufserializer-with-manifest
@ -492,40 +492,40 @@ Disabling the Java Serializer

It is possible to completely disable Java Serialization for the entire Actor system.

Java serialization is known to be slow and `prone to attacks
<https://community.hpe.com/t5/Security-Research/The-perils-of-Java-deserialization/ba-p/6838995>`_
of various kinds - it never was designed for high throughput messaging after all. However, it is very
convenient to use, thus it remained the default serialization mechanism that Akka used to
serialize user messages as well as some of its internal messages in previous versions.
Since the release of Artery, Akka internals do not rely on Java serialization anymore (exceptions to that being ``java.lang.Throwable`` and "remote deployment").

.. note::
  Akka does not use Java Serialization for any of its internal messages.
  It is highly encouraged to disable Java serialization, so please plan to do so at the earliest possibility you have in your project.

One may think that network bandwidth and latency limit the performance of remote messaging, but serialization is a more typical bottleneck.

For user messages, the default serializer, implemented using Java serialization, remains available and enabled.
We do however recommend to disable it entirely and utilise a proper serialization library instead in order to effectively utilise
the improved performance and ability for rolling deployments using Artery. Libraries that we recommend to use include,
but are not limited to, `Kryo`_ by using the `akka-kryo-serialization`_ library or `Google Protocol Buffers`_ if you want
more control over the schema evolution of your messages.

In order to completely disable Java Serialization in your Actor system you need to add the following configuration to
your ``application.conf``:

.. code-block:: ruby

  akka.actor.allow-java-serialization = off

This will completely disable the use of ``akka.serialization.JavaSerialization`` by the
Akka Serialization extension; instead ``DisabledJavaSerializer`` will
be inserted, which will fail explicitly if attempts to use Java serialization are made.

It will also enable the above mentioned ``enable-additional-serialization-bindings``.

The log messages emitted by such a serializer SHOULD be treated as potential
attacks which the serializer prevented, as they MAY indicate an external operator
attempting to send malicious messages intending to use Java serialization as an attack vector.
The attempts are logged with the SECURITY marker.

@ -563,9 +563,9 @@ That is not done by the router.

Remoting Sample
---------------

There is a more extensive remote example that comes with `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_.
The tutorial named `Akka Remote Samples with Scala <http://www.lightbend.com/activator/template/akka-sample-remote-scala>`_
demonstrates both remote deployment and look-up of remote actors.
You can download a ready to run `remoting sample <@exampleCodeService@/akka-samples-remote-scala>`_
together with a tutorial for a more hands-on experience. The source code of this sample can be found in the
`Akka Samples Repository <@samples@/akka-sample-remote-scala>`_.

Performance tuning
------------------
@ -607,7 +607,7 @@ Messages destined for actors not matching any of these patterns are sent using t

External, shared Aeron media driver
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Aeron transport is running in a so-called `media driver <https://github.com/real-logic/Aeron/wiki/Media-Driver-Operation>`_.
By default, Akka starts the media driver embedded in the same JVM process as the application. This is
convenient and simplifies operational concerns by only having one process to start and monitor.

@ -625,15 +625,15 @@ The needed classpath::

  Agrona-0.5.4.jar:aeron-driver-1.0.1.jar:aeron-client-1.0.1.jar

You can find those jar files on `maven central <http://search.maven.org/>`_, or you can create a
package with your preferred build tool.

You can pass `Aeron properties <https://github.com/real-logic/Aeron/wiki/Configuration-Options>`_ as
command line `-D` system properties::

  -Daeron.dir=/dev/shm/aeron

You can also define Aeron properties in a file::

  java io.aeron.driver.MediaDriver config/aeron.properties
@ -645,21 +645,21 @@ An example of such a properties file::

  aeron.rcv.buffer.length=16384
  aeron.rcv.initial.window.length=2097152
  agrona.disable.bounds.checks=true

  aeron.threading.mode=SHARED_NETWORK

  # low latency settings
  #aeron.threading.mode=DEDICATED
  #aeron.sender.idle.strategy=org.agrona.concurrent.BusySpinIdleStrategy
  #aeron.receiver.idle.strategy=org.agrona.concurrent.BusySpinIdleStrategy

  # use the same directory as in the akka.remote.artery.advanced.aeron-dir config
  # of the Akka application
  aeron.dir=/dev/shm/aeron

Read more about the media driver in the `Aeron documentation <https://github.com/real-logic/Aeron/wiki/Media-Driver-Operation>`_.

To use the external media driver from the Akka application you need to define the following two
configuration properties::

  akka.remote.artery.advanced {
@ -186,8 +186,8 @@ remote system. This still however may pose a security risk, and one may want to

only a specific set of known actors by enabling the whitelist feature.

To enable remote deployment whitelisting set the ``akka.remote.deployment.enable-whitelist`` value to ``on``.
The list of allowed classes has to be configured on the "remote" system, in other words on the system onto which
others will be attempting to remote deploy Actors. That system, locally, knows best which Actors it should or
should not allow others to remote deploy onto it. The full settings section may for example look like this:

.. includecode:: ../../../akka-remote/src/test/scala/akka/remote/RemoteDeploymentWhitelistSpec.scala#whitelist-config
@ -290,9 +290,9 @@ Disabling the Java Serializer

-----------------------------

Since the ``2.4.11`` release of Akka it is possible to entirely disable the default Java Serialization mechanism.
Please note that the :ref:`new remoting implementation (codename Artery) <remoting-artery-scala>` does not use Java
serialization for internal messages by default. For compatibility reasons, the current remoting still uses Java
serialization for some classes, however you can disable it in this remoting implementation as well by following
the steps below.

The first step is to enable some additional serializers that replace previous Java serialization of some internal
@ -305,53 +305,53 @@ enabled with this configuration:

    # Set this to on to enable serialization-bindings defined in
    # additional-serialization-bindings. Those are by default not included
    # for backwards compatibility reasons. They are enabled by default if
    # akka.remote.artery.enabled=on.
    enable-additional-serialization-bindings = on
  }

The reason these are not enabled by default is wire-level compatibility between any 2.4.x Actor Systems.
If you roll out a new cluster, where all nodes are on the same Akka version and can enable these serializers, it is recommended to
enable this setting. When using :ref:`remoting-artery-scala` these serializers are enabled by default.

.. warning::
  Please note that when enabling ``additional-serialization-bindings`` when using the old remoting,
  you must do so on all nodes participating in a cluster, otherwise the mis-aligned serialization
  configurations will cause deserialization errors on the receiving nodes.

Java serialization is known to be slow and `prone to attacks
<https://community.hpe.com/t5/Security-Research/The-perils-of-Java-deserialization/ba-p/6838995>`_
of various kinds - it never was designed for high throughput messaging after all. However, it is very
convenient to use, thus it remained the default serialization mechanism that Akka used to
serialize user messages as well as some of its internal messages in previous versions.
Since the release of Artery, Akka internals do not rely on Java serialization anymore (one exception being ``java.lang.Throwable``).

.. note::
  When using the new remoting implementation (codename Artery), Akka does not use Java Serialization for any of its internal messages.
  It is highly encouraged to disable Java serialization, so please plan to do so at the earliest possibility you have in your project.

One may think that network bandwidth and latency limit the performance of remote messaging, but serialization is a more typical bottleneck.

For user messages, the default serializer, implemented using Java serialization, remains available and enabled.
We do however recommend to disable it entirely and utilise a proper serialization library instead in order to effectively utilise
the improved performance and ability for rolling deployments using Artery. Libraries that we recommend to use include,
but are not limited to, `Kryo`_ by using the `akka-kryo-serialization`_ library or `Google Protocol Buffers`_ if you want
more control over the schema evolution of your messages.

In order to completely disable Java Serialization in your Actor system you need to add the following configuration to
your ``application.conf``:

.. code-block:: ruby

  akka.actor.allow-java-serialization = off

This will completely disable the use of ``akka.serialization.JavaSerialization`` by the
Akka Serialization extension; instead ``DisabledJavaSerializer`` will
be inserted, which will fail explicitly if attempts to use Java serialization are made.

It will also enable the above mentioned ``enable-additional-serialization-bindings``.

The log messages emitted by such a serializer SHOULD be treated as potential
attacks which the serializer prevented, as they MAY indicate an external operator
attempting to send malicious messages intending to use Java serialization as an attack vector.
The attempts are logged with the SECURITY marker.
@ -382,16 +382,16 @@ A group of remote actors can be configured as:

This configuration setting will send messages to the defined remote actor paths.
It requires that you create the destination actors on the remote nodes with matching paths.
That is not done by the router.

.. _remote-sample-scala:

Remoting Sample
^^^^^^^^^^^^^^^

There is a more extensive remote example that comes with `Lightbend Activator <http://www.lightbend.com/platform/getstarted>`_.
The tutorial named `Akka Remote Samples with Scala <http://www.lightbend.com/activator/template/akka-sample-remote-scala>`_
demonstrates both remote deployment and look-up of remote actors.
You can download a ready to run `remoting sample <@exampleCodeService@/akka-samples-remote-scala>`_
together with a tutorial for a more hands-on experience. The source code of this sample can be found in the
`Akka Samples Repository <@samples@/akka-sample-remote-scala>`_.

Remote Events
-------------
@ -453,7 +453,7 @@ An ``ActorSystem`` should not be exposed via Akka Remote over plain TCP to an un

It should be protected by network security, such as a firewall. If that is not considered as enough protection
:ref:`TLS with mutual authentication <remote-tls-scala>` should be enabled.

It is also security best-practice to :ref:`disable the Java serializer <disable-java-serializer-scala>` because of
its multiple `known attack surfaces <https://community.hpe.com/t5/Security-Research/The-perils-of-Java-deserialization/ba-p/6838995>`_.

.. _remote-tls-scala:
@ -477,15 +477,15 @@ Next the actual SSL/TLS parameters have to be configured::

    netty.ssl.security {
      key-store = "/example/path/to/mykeystore.jks"
      trust-store = "/example/path/to/mytruststore.jks"

      key-store-password = "changeme"
      key-password = "changeme"
      trust-store-password = "changeme"

      protocol = "TLSv1.2"

      enabled-algorithms = [TLS_DHE_RSA_WITH_AES_128_GCM_SHA256]

      random-number-generator = "AES128CounterSecureRNG"
    }
  }
@ -500,11 +500,11 @@ According to `RFC 7525 <https://tools.ietf.org/html/rfc7525>`_ the recommended a

You should always check the latest information about security and algorithm recommendations though before you configure your system.

Creating and working with keystores and certificates is well documented in the
`Generating X.509 Certificates <http://typesafehub.github.io/ssl-config/CertificateGeneration.html#using-keytool>`_
section of Lightbend's SSL-Config library.

Since Akka remoting is inherently :ref:`peer-to-peer <symmetric-communication>`, both the key-store as well as trust-store
need to be configured on each remoting node participating in the cluster.

The official `Java Secure Socket Extension documentation <http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html>`_
@ -512,11 +512,11 @@ as well as the `Oracle documentation on creating KeyStore and TrustStores <https

are both great resources to research when setting up security on the JVM. Please consult those resources when troubleshooting
and configuring SSL.

Since Akka 2.5.0 mutual authentication between TLS peers is enabled by default.

Mutual authentication means that the passive side (the TLS server side) of a connection will also request and verify
a certificate from the connecting peer. Without this mode only the client side is requesting and verifying certificates.
While Akka is a peer-to-peer technology, each connection between nodes starts out from one side (the "client") towards
the other (the "server").

Note that if TLS is enabled with mutual authentication there is still a risk that an attacker can gain access to a valid certificate
@ -549,12 +549,12 @@ messages like :class:`PoisonPill` and :class:`Kill`, but it can also be added

as a marker trait to user-defined messages.

.. warning::
  Untrusted mode does not give full protection against attacks by itself.
  It makes it slightly harder to perform malicious or unintended actions but
  it should be complemented with :ref:`disabled Java serializer <disable-java-serializer-scala>`.
  Additional protection can be achieved when running in an untrusted network by
  network security (e.g. firewalls) and/or enabling :ref:`TLS with mutual
  authentication <remote-tls-scala>`.

Messages sent with actor selection are by default discarded in untrusted mode, but
@ -593,7 +593,7 @@ untrusted mode when incoming via the remoting layer:

Remote Configuration
^^^^^^^^^^^^^^^^^^^^

There are lots of configuration properties that are related to remoting in Akka. We refer to the
:ref:`reference configuration <config-akka-remote>` for more information.

.. note::
@ -79,7 +79,8 @@ object SphinxDoc {

      "sigarVersion" -> Dependencies.Compile.sigar.revision,
      "sigarLoaderVersion" -> Dependencies.Compile.Provided.sigarLoader.revision,
      "github" -> GitHub.url(v),
      "samples" -> "http://github.com/akka/akka-samples"
      "samples" -> "http://github.com/akka/akka-samples/tree/master",
      "exampleCodeService" -> "https://example.lightbend.com/v1/download"
    )
  },
  preprocess := {