=doc Fixed and normalized spellings

parent 7c4acc4f33
commit ab0c60eca7

34 changed files with 69 additions and 69 deletions

@@ -54,7 +54,7 @@ actor system instance at that ``hostname:port``. Akka uses the UID to be able to
 reliably trigger remote death watch. This means that the same actor system can never
 join a cluster again once it's been removed from that cluster. To re-join an actor
 system with the same ``hostname:port`` to a cluster you have to stop the actor system
-and start a new one with the same ``hotname:port`` which will then receive a different
+and start a new one with the same ``hostname:port`` which will then receive a different
 UID.

 The cluster membership state is a specialized `CRDT`_, which means that it has a monotonic
@@ -345,7 +345,7 @@ Failure Detection and Unreachability
 - unreachable*
 unreachable is not a real member states but more of a flag in addition
 to the state signaling that the cluster is unable to talk to this node,
-after beeing unreachable the failure detector may detect it as reachable
+after being unreachable the failure detector may detect it as reachable
 again and thereby remove the flag
@@ -33,7 +33,7 @@ This is a schematic overview of the test conductor.

 The test conductor server is responsible for coordinating barriers and sending commands to the test conductor
 clients that act upon them, e.g. throttling network traffic to/from another client. More information on the
-possible operations is availible in the ``akka.remote.testconductor.Conductor`` API documentation.
+possible operations is available in the ``akka.remote.testconductor.Conductor`` API documentation.

 The Multi Node Spec
 ===================
@@ -227,7 +227,7 @@ observer.
 Ordering of Local Message Sends
 -------------------------------

-Assuming strict FIFO mailboxes the abovementioned caveat of non-transitivity of
+Assuming strict FIFO mailboxes the aforementioned caveat of non-transitivity of
 the message ordering guarantee is eliminated under certain conditions. As you
 will note, these are quite subtle as it stands, and it is even possible that
 future performance optimizations will invalidate this whole paragraph. The
@@ -61,7 +61,7 @@ Metrics extension periodically publishes current snapshot of the cluster metrics

 The publication period is controlled by the ``akka.cluster.metrics.collector.sample-period`` setting.

-The payload of the ``akka.cluster.metris.ClusterMetricsChanged`` event will contain
+The payload of the ``akka.cluster.metrics.ClusterMetricsChanged`` event will contain
 latest metrics of the node as well as other cluster member nodes metrics gossip
 which was received during the collector sample period.
@@ -88,7 +88,7 @@ seed nodes in the existing cluster.
 If you don't configure seed nodes you need to join the cluster programmatically or manually.

 Manual joining can be performed by using ref:`cluster_jmx_java` or :ref:`cluster_command_line_java`.
-Joining programatically can be performed with ``Cluster.get(system).join``. Unsuccessful join attempts are
+Joining programmatically can be performed with ``Cluster.get(system).join``. Unsuccessful join attempts are
 automatically retried after the time period defined in configuration property ``retry-unsuccessful-join-after``.
 Retries can be disabled by setting the property to ``off``.
@@ -127,7 +127,7 @@ status of the unreachable member must be changed to 'Down'. Changing status to '
 can be performed automatically or manually. By default it must be done manually, using
 :ref:`cluster_jmx_java` or :ref:`cluster_command_line_java`.

-It can also be performed programatically with ``Cluster.get(system).down(address)``.
+It can also be performed programmatically with ``Cluster.get(system).down(address)``.

 You can enable automatic downing with configuration::
@@ -157,7 +157,7 @@ above.

 A more graceful exit can be performed if you tell the cluster that a node shall leave.
 This can be performed using :ref:`cluster_jmx_java` or :ref:`cluster_command_line_java`.
-It can also be performed programatically with ``Cluster.get(system).leave(address)``.
+It can also be performed programmatically with ``Cluster.get(system).leave(address)``.

 Note that this command can be issued to any member in the cluster, not necessarily the
 one that is leaving. The cluster extension, but not the actor system or JVM, of the
@@ -115,7 +115,7 @@ type :class:`ActorRef`.

 This classification requires an :class:`ActorSystem` in order to perform book-keeping
 operations related to the subscribers being Actors, which can terminate without first
-unsubscribing from the EventBus. ManagedActorClassification maitains a system Actor which
+unsubscribing from the EventBus. ManagedActorClassification maintains a system Actor which
 takes care of unsubscribing terminated actors automatically.

 The necessary methods to be implemented are illustrated with the following example:
@@ -148,7 +148,7 @@ it can be subscribed like this:

 .. includecode:: code/docs/event/LoggingDocTest.java#deadletters

-Similarily to `Actor Classification`_, :class:`EventStream` will automatically remove subscibers when they terminate.
+Similarly to `Actor Classification`_, :class:`EventStream` will automatically remove subscribers when they terminate.

 .. note::
 The event stream is a *local facility*, meaning that it will *not* distribute events to other nodes in a clustered environment (unless you subscribe a Remote Actor to the stream explicitly).
@@ -31,7 +31,7 @@ First, we define what our ``Extension`` should do:
 .. includecode:: code/docs/extension/ExtensionDocTest.java
 :include: extension

-Then we need to create an ``ExtensionId`` for our extension so we can grab ahold of it.
+Then we need to create an ``ExtensionId`` for our extension so we can grab a hold of it.

 .. includecode:: code/docs/extension/ExtensionDocTest.java
 :include: imports
@@ -107,7 +107,7 @@ Test Application
 ----------------

 The following section shows the effects of the different directives in practice,
-wherefor a test setup is needed. First off, we need a suitable supervisor:
+where a test setup is needed. First off, we need a suitable supervisor:

 .. includecode:: code/docs/actor/FaultHandlingTest.java
 :include: supervisor
@@ -67,7 +67,7 @@ to listen for TCP connections on a particular :class:`InetSocketAddress`; the
 port may be specified as ``0`` in order to bind to a random port.

 The actor sending the :class:`Bind` message will receive a :class:`Bound`
-message signalling that the server is ready to accept incoming connections;
+message signaling that the server is ready to accept incoming connections;
 this message also contains the :class:`InetSocketAddress` to which the socket
 was actually bound (i.e. resolved IP address and correct port number).
@@ -90,7 +90,7 @@ explained below.
 The last line shows a possibility to pass constructor arguments regardless of
 the context it is being used in. The presence of a matching constructor is
 verified during construction of the :class:`Props` object, resulting in an
-:class:`IllegalArgumentEception` if no or multiple matching constructors are
+:class:`IllegalArgumentException` if no or multiple matching constructors are
 found.

 Dangerous Variants
@@ -107,7 +107,7 @@ Test Application
 ----------------

 The following section shows the effects of the different directives in practice,
-wherefor a test setup is needed. First off, we need a suitable supervisor:
+where a test setup is needed. First off, we need a suitable supervisor:

 .. includecode:: code/docs/actorlambda/FaultHandlingTest.java
 :include: supervisor
@@ -184,7 +184,7 @@ In this case, a persistent actor must be recovered explicitly by sending it a ``

 .. warning::

-If ``preStart`` is overriden by an empty implementation, incoming commands will not be processed by the
+If ``preStart`` is overridden by an empty implementation, incoming commands will not be processed by the
 ``PersistentActor`` until it receives a ``Recover`` and finishes recovery.

 In order to completely skip recovery, you can signal it with ``Recover.create(0L)``
@@ -253,7 +253,7 @@ Deferring actions until preceding persist handlers have executed

 Sometimes when working with ``persistAsync`` you may find that it would be nice to define some actions in terms of
 ''happens-after the previous ``persistAsync`` handlers have been invoked''. ``PersistentActor`` provides an utility method
-called ``defer``, which works similarily to ``persistAsync`` yet does not persist the passed in event. It is recommended to
+called ``defer``, which works similarly to ``persistAsync`` yet does not persist the passed in event. It is recommended to
 use it for *read* operations, and actions which do not have corresponding events in your domain model.

 Using this method is very similar to the persist family of methods, yet it does **not** persist the passed in event.
@@ -452,7 +452,7 @@ destinations the destinations will see gaps in the sequence. It is not possible
 However, you can send a custom correlation identifier in the message to the destination. You must then retain
 a mapping between the internal ``deliveryId`` (passed into the ``deliveryIdToMessage`` function) and your custom
 correlation id (passed into the message). You can do this by storing such mapping in a ``Map(correlationId -> deliveryId)``
-from which you can retrive the ``deliveryId`` to be passed into the ``confirmDelivery`` method once the receiver
+from which you can retrieve the ``deliveryId`` to be passed into the ``confirmDelivery`` method once the receiver
 of your message has replied with your custom correlation id.

 The ``AbstractPersistentActorWithAtLeastOnceDelivery`` class has a state consisting of unconfirmed messages and a
@@ -499,12 +499,12 @@ Plugins can be selected either by "default", for all persistent actors and views
 or "individually", when persistent actor or view defines it's own set of plugins.

 When persistent actor or view does NOT override ``journalPluginId`` and ``snapshotPluginId`` methods,
-persistense extension will use "default" journal and snapshot-store plugins configured in the ``reference.conf``::
+persistence extension will use "default" journal and snapshot-store plugins configured in the ``reference.conf``::

 akka.persistence.journal.plugin = ""
 akka.persistence.snapshot-store.plugin = ""

-However, these entires are provided as empty "", and require explicit user configuration via override in the user ``application.conf``.
+However, these entries are provided as empty "", and require explicit user configuration via override in the user ``application.conf``.
 For an example of journal plugin which writes messages to LevelDB see :ref:`local-leveldb-journal-java-lambda`.
 For an example of snapshot store plugin which writes snapshots as individual files to the local filesystem see :ref:`local-snapshot-store-java-lambda`.
@@ -663,6 +663,6 @@ the actor or view will be serviced by these specific persistence plugins instead
 .. includecode:: ../java/code/docs/persistence/PersistenceMultiDocTest.java#override-plugins

 Note that ``journalPluginId`` and ``snapshotPluginId`` must refer to properly configured ``reference.conf``
-plugin entires with standard ``class`` property as well as settings which are specific for those plugins, i.e.:
+plugin entries with standard ``class`` property as well as settings which are specific for those plugins, i.e.:

 .. includecode:: ../scala/code/docs/persistence/PersistenceMultiDocSpec.scala#override-config
@@ -186,7 +186,7 @@ In this case, a persistent actor must be recovered explicitly by sending it a ``

 .. warning::

-If ``preStart`` is overriden by an empty implementation, incoming commands will not be processed by the
+If ``preStart`` is overridden by an empty implementation, incoming commands will not be processed by the
 ``PersistentActor`` until it receives a ``Recover`` and finishes recovery.

 In order to completely skip recovery, you can signal it with ``Recover.create(0L)``
@@ -256,7 +256,7 @@ Deferring actions until preceding persist handlers have executed

 Sometimes when working with ``persistAsync`` you may find that it would be nice to define some actions in terms of
 ''happens-after the previous ``persistAsync`` handlers have been invoked''. ``PersistentActor`` provides an utility method
-called ``defer``, which works similarily to ``persistAsync`` yet does not persist the passed in event. It is recommended to
+called ``defer``, which works similarly to ``persistAsync`` yet does not persist the passed in event. It is recommended to
 use it for *read* operations, and actions which do not have corresponding events in your domain model.

 Using this method is very similar to the persist family of methods, yet it does **not** persist the passed in event.
@@ -456,7 +456,7 @@ destinations the destinations will see gaps in the sequence. It is not possible
 However, you can send a custom correlation identifier in the message to the destination. You must then retain
 a mapping between the internal ``deliveryId`` (passed into the ``deliveryIdToMessage`` function) and your custom
 correlation id (passed into the message). You can do this by storing such mapping in a ``Map(correlationId -> deliveryId)``
-from which you can retrive the ``deliveryId`` to be passed into the ``confirmDelivery`` method once the receiver
+from which you can retrieve the ``deliveryId`` to be passed into the ``confirmDelivery`` method once the receiver
 of your message has replied with your custom correlation id.

 The ``UntypedPersistentActorWithAtLeastOnceDelivery`` class has a state consisting of unconfirmed messages and a
@@ -510,12 +510,12 @@ Plugins can be selected either by "default", for all persistent actors and views
 or "individually", when persistent actor or view defines it's own set of plugins.

 When persistent actor or view does NOT override ``journalPluginId`` and ``snapshotPluginId`` methods,
-persistense extension will use "default" journal and snapshot-store plugins configured in the ``reference.conf``::
+persistence extension will use "default" journal and snapshot-store plugins configured in the ``reference.conf``::

 akka.persistence.journal.plugin = ""
 akka.persistence.snapshot-store.plugin = ""

-However, these entires are provided as empty "", and require explicit user configuration via override in the user ``application.conf``.
+However, these entries are provided as empty "", and require explicit user configuration via override in the user ``application.conf``.
 For an example of journal plugin which writes messages to LevelDB see :ref:`local-leveldb-journal-java`.
 For an example of snapshot store plugin which writes snapshots as individual files to the local filesystem see :ref:`local-snapshot-store-java`.
@@ -714,6 +714,6 @@ the actor or view will be serviced by these specific persistence plugins instead
 .. includecode:: ../java/code/docs/persistence/PersistenceMultiDocTest.java#override-plugins

 Note that ``journalPluginId`` and ``snapshotPluginId`` must refer to properly configured ``reference.conf``
-plugin entires with standard ``class`` property as well as settings which are specific for those plugins, i.e.:
+plugin entries with standard ``class`` property as well as settings which are specific for those plugins, i.e.:

 .. includecode:: ../scala/code/docs/persistence/PersistenceMultiDocSpec.scala#override-config
@@ -444,7 +444,7 @@ SSL can be used as the remote transport by adding ``akka.remote.netty.ssl``
 to the ``enabled-transport`` configuration section. See a description of the settings
 in the :ref:`remote-configuration-java` section.

-The SSL support is implemented with Java Secure Socket Extension, please consult the offical
+The SSL support is implemented with Java Secure Socket Extension, please consult the official
 `Java Secure Socket Extension documentation <http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html>`_
 and related resources for troubleshooting.
@@ -652,7 +652,7 @@ Start with the routing logic:

 ``select`` will be called for each message and in this example pick a few destinations by round-robin,
 by reusing the existing ``RoundRobinRoutingLogic`` and wrap the result in a ``SeveralRoutees``
-instance. ``SeveralRoutees`` will send the message to all of the supplied routues.
+instance. ``SeveralRoutees`` will send the message to all of the supplied routes.

 The implementation of the routing logic must be thread safe, since it might be used outside of actors.
@@ -4,7 +4,7 @@ Typed Actors
 Akka Typed Actors is an implementation of the `Active Objects <http://en.wikipedia.org/wiki/Active_object>`_ pattern.
 Essentially turning method invocations into asynchronous dispatch instead of synchronous that has been the default way since Smalltalk came out.

-Typed Actors consist of 2 "parts", a public interface and an implementation, and if you've done any work in "enterprise" Java, this will be very familiar to you. As with normal Actors you have an external API (the public interface instance) that will delegate methodcalls asynchronously to
+Typed Actors consist of 2 "parts", a public interface and an implementation, and if you've done any work in "enterprise" Java, this will be very familiar to you. As with normal Actors you have an external API (the public interface instance) that will delegate method calls asynchronously to
 a private instance of the implementation.

 The advantage of Typed Actors vs. Actors is that with TypedActors you have a static contract, and don't need to define your own messages, the downside is that it places some limitations on what you can do and what you can't, i.e. you can't use become/unbecome.
@@ -54,7 +54,7 @@ dispatcher to use, see more below). Here are some examples of how to create a
 The second line shows how to pass constructor arguments to the :class:`Actor`
 being created. The presence of a matching constructor is verified during
 construction of the :class:`Props` object, resulting in an
-:class:`IllegalArgumentEception` if no or multiple matching constructors are
+:class:`IllegalArgumentException` if no or multiple matching constructors are
 found.

 The third line demonstrates the use of a :class:`Creator<T extends Actor>`. The
@@ -67,7 +67,7 @@ If you have been creating EventStreams manually, you now have to provide an acto

 .. includecode:: ../../../akka-actor-tests/src/test/scala/akka/event/EventStreamSpec.scala#event-bus-start-unsubscriber-scala

-Please note that this change affects you only if you have implemented your own busses, Akka's own ``context.eventStream``
+Please note that this change affects you only if you have implemented your own buses, Akka's own ``context.eventStream``
 is still there and does not require any attention from you concerning this change.

 FSM notifies on same state transitions
@@ -169,7 +169,7 @@ and ``AbstractPersistentId``.
 The rationale behind this change being stricter de-coupling of your Actor hierarchy and the logical
 "which persistent entity this actor represents".

-In case you want to perserve the old behavior of providing the actor's path as the default ``persistenceId``, you can easily
+In case you want to preserve the old behavior of providing the actor's path as the default ``persistenceId``, you can easily
 implement it yourself either as a helper trait or simply by overriding ``persistenceId`` as follows::

 override def persistenceId = self.path.toStringWithoutAddress
@@ -39,11 +39,11 @@ This concept remains the same in Akka ``2.3.4``, yet we rename ``processorId`` t
 and persistent messages can be used from different classes not only ``PersistentActor`` (Views, directly from Journals etc).

 Please note that ``persistenceId`` is **abstract** in the new API classes (``PersistentActor`` and ``PersistentView``),
-and we do **not** provide a default (actor-path derrived) value for it like we did for ``processorId``.
+and we do **not** provide a default (actor-path derived) value for it like we did for ``processorId``.
 The rationale behind this change being stricter de-coupling of your Actor hierarchy and the logical "which persistent entity this actor represents".
 A longer discussion on this subject can be found on `issue #15436 <https://github.com/akka/akka/issues/15436>`_ on github.

-In case you want to perserve the old behavior of providing the actor's path as the default ``persistenceId``, you can easily
+In case you want to preserve the old behavior of providing the actor's path as the default ``persistenceId``, you can easily
 implement it yourself either as a helper trait or simply by overriding ``persistenceId`` as follows::

 override def persistenceId = self.path.toStringWithoutAddress
@@ -57,9 +57,9 @@ Removed Processor in favour of extending PersistentActor with persistAsync
 The ``Processor`` is now deprecated since ``2.3.4`` and will be removed in ``2.4.x``.
 It's semantics replicated in ``PersistentActor`` in the form of an additional ``persist`` method: ``persistAsync``.

-In essence, the difference betwen ``persist`` and ``persistAsync`` is that the former will stash all incomming commands
+In essence, the difference between ``persist`` and ``persistAsync`` is that the former will stash all incoming commands
 until all persist callbacks have been processed, whereas the latter does not stash any commands. The new ``persistAsync``
-should be used in cases of low consistency yet high responsiveness requirements, the Actor can keep processing incomming
+should be used in cases of low consistency yet high responsiveness requirements, the Actor can keep processing incoming
 commands, even though not all previous events have been handled.

 When these ``persist`` and ``persistAsync`` are used together in the same ``PersistentActor``, the ``persist``
@@ -126,7 +126,7 @@ should always be valid for replay.
 Renamed View to PersistentView, which receives plain messages (Persistent() wrapper is gone)
 ============================================================================================
 Views used to receive messages wrapped as ``Persistent(payload, seqNr)``, this is no longer the case and views receive
-the ``payload`` as message from the ``Journal`` directly. The rationale here is that the wrapper aproach was inconsistent
+the ``payload`` as message from the ``Journal`` directly. The rationale here is that the wrapper approach was inconsistent
 with the other Akka Persistence APIs and also is not easily "discoverable" (you have to *know* you will be getting this Persistent wrapper).

 Instead, since ``2.3.4``, views get plain messages, and can use additional methods provided by the ``View`` to identify if a message
@@ -78,7 +78,7 @@ explained below.
 The last line shows a possibility to pass constructor arguments regardless of
 the context it is being used in. The presence of a matching constructor is
 verified during construction of the :class:`Props` object, resulting in an
-:class:`IllegalArgumentEception` if no or multiple matching constructors are
+:class:`IllegalArgumentException` if no or multiple matching constructors are
 found.

 Dangerous Variants
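
A minimal sketch of the constructor-argument style discussed in this hunk; ``MyActor`` and its arguments are
hypothetical, and, as the context lines say, the matching constructor is checked while the ``Props`` value is
built rather than when the actor is started::

   import akka.actor.{ Actor, Props }

   class MyActor(name: String, retries: Int) extends Actor {
     def receive = { case msg => println(s"$name ($retries retries): $msg") }
   }

   // throws IllegalArgumentException right here if no (or several) constructors match
   val props = Props(classOf[MyActor], "worker", 3)
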
@@ -57,7 +57,7 @@ Metrics extension periodically publishes current snapshot of the cluster metrics

 The publication period is controlled by the ``akka.cluster.metrics.collector.sample-period`` setting.

-The payload of the ``akka.cluster.metris.ClusterMetricsChanged`` event will contain
+The payload of the ``akka.cluster.metrics.ClusterMetricsChanged`` event will contain
 latest metrics of the node as well as other cluster member nodes metrics gossip
 which was received during the collector sample period.
@@ -58,13 +58,13 @@ if needed.

 A shard is a group of entries that will be managed together. The grouping is defined by the
 ``shardResolver`` function shown above. For a specific entry identifier the shard identifier must always
-be the same. Otherwise the entry actor might accidentily be started in several places at the same time.
+be the same. Otherwise the entry actor might accidentally be started in several places at the same time.

 Creating a good sharding algorithm is an interesting challenge in itself. Try to produce a uniform distribution,
 i.e. same amount of entries in each shard. As a rule of thumb, the number of shards should be a factor ten greater
 than the planned maximum number of cluster nodes. Less shards than number of nodes will result in that some nodes
 will not host any shards. Too many shards will result in less efficient management of the shards, e.g. rebalancing
-overhead, and increased latency because the corrdinator is involved in the routing of the first message for each
+overhead, and increased latency because the coordinator is involved in the routing of the first message for each
 shard. The sharding algorithm must be the same on all nodes in a running cluster. It can be changed after stopping
 all nodes in the cluster.
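
A rough sketch of a ``shardResolver`` of the kind described above, assuming the ``akka-contrib`` cluster sharding
of this release where ``ShardRegion.ShardResolver`` is a plain function from message to shard identifier; the
envelope type and the modulo factor are made up for the example::

   import akka.contrib.pattern.ShardRegion

   final case class EntryEnvelope(id: Long, payload: Any)

   // derive the shard id from the entry id only, so the same entry always maps to the same shard
   val shardResolver: ShardRegion.ShardResolver = {
     case EntryEnvelope(id, _) => (id % 100).toString
   }
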
@@ -121,7 +121,7 @@ Creating a good sharding algorithm is an interesting challenge in itself. Try to
 i.e. same amount of entries in each shard. As a rule of thumb, the number of shards should be a factor ten greater
 than the planned maximum number of cluster nodes. Less shards than number of nodes will result in that some nodes
 will not host any shards. Too many shards will result in less efficient management of the shards, e.g. rebalancing
-overhead, and increased latency because the corrdinator is involved in the routing of the first message for each
+overhead, and increased latency because the coordinator is involved in the routing of the first message for each
 shard. The sharding algorithm must be the same on all nodes in a running cluster. It can be changed after stopping
 all nodes in the cluster.
@@ -88,7 +88,7 @@ In Scala:
 Here we limit the singleton to nodes tagged with the ``"worker"`` role, but all nodes, independent of
 role, can be used by specifying ``None`` as ``role`` parameter.

-The corresponding Java API for the ``singeltonProps`` function is ``akka.cluster.singleton.ClusterSingletonPropsFactory``.
+The corresponding Java API for the ``singletonProps`` function is ``akka.cluster.singleton.ClusterSingletonPropsFactory``.
 The Java API takes a plain String for the role parameter and ``null`` means that all nodes, independent of
 role, are used.
@@ -82,7 +82,7 @@ seed nodes in the existing cluster.
 If you don't configure seed nodes you need to join the cluster programmatically or manually.

 Manual joining can be performed by using ref:`cluster_jmx_scala` or :ref:`cluster_command_line_scala`.
-Joining programatically can be performed with ``Cluster(system).join``. Unsuccessful join attempts are
+Joining programmatically can be performed with ``Cluster(system).join``. Unsuccessful join attempts are
 automatically retried after the time period defined in configuration property ``retry-unsuccessful-join-after``.
 Retries can be disabled by setting the property to ``off``.
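
A small sketch of the programmatic joining mentioned above, for the case where the target address is known at
runtime; the host, port and system name are placeholders::

   import akka.actor.{ ActorSystem, Address }
   import akka.cluster.Cluster

   val system = ActorSystem("ClusterSystem")
   // join an existing cluster by pointing at any node that is already a member
   val joinAddress = Address("akka.tcp", "ClusterSystem", "host1", 2551)
   Cluster(system).join(joinAddress)
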
@@ -121,7 +121,7 @@ status of the unreachable member must be changed to 'Down'. Changing status to '
 can be performed automatically or manually. By default it must be done manually, using
 :ref:`cluster_jmx_scala` or :ref:`cluster_command_line_scala`.

-It can also be performed programatically with ``Cluster(system).down(address)``.
+It can also be performed programmatically with ``Cluster(system).down(address)``.

 You can enable automatic downing with configuration::
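
For reference, the automatic downing configuration referred to above is, in this Akka generation, a single
setting; the timeout value here is only an example::

   akka.cluster.auto-down-unreachable-after = 10s
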
@@ -151,7 +151,7 @@ above.

 A more graceful exit can be performed if you tell the cluster that a node shall leave.
 This can be performed using :ref:`cluster_jmx_scala` or :ref:`cluster_command_line_scala`.
-It can also be performed programatically with ``Cluster(system).leave(address)``.
+It can also be performed programmatically with ``Cluster(system).leave(address)``.

 Note that this command can be issued to any member in the cluster, not necessarily the
 one that is leaving. The cluster extension, but not the actor system or JVM, of the
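
A short sketch of a node asking the cluster to remove itself gracefully; since the command can be issued from any
member, the same call works with another node's address instead of ``selfAddress``::

   import akka.actor.ActorSystem
   import akka.cluster.Cluster

   val system = ActorSystem("ClusterSystem")
   val cluster = Cluster(system)
   // the node passes through Leaving and Exiting before ending up Removed
   cluster.leave(cluster.selfAddress)
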
@@ -598,7 +598,7 @@ of code should only run for a specific role.
 .. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala#test-statsService

 Once again we take advantage of the facilities in :ref:`testkit <akka-testkit>` to verify expected behavior.
-Here using ``testActor`` as sender (via ``ImplicitSender``) and verifing the reply with ``expectMsgPF``.
+Here using ``testActor`` as sender (via ``ImplicitSender``) and verifying the reply with ``expectMsgPF``.

 In the above code you can see ``node(third)``, which is useful facility to get the root actor reference of
 the actor system for a specific role. This can also be used to grab the ``akka.actor.Address`` of that node.
@@ -115,7 +115,7 @@ type :class:`ActorRef`.

 This classification requires an :class:`ActorSystem` in order to perform book-keeping
 operations related to the subscribers being Actors, which can terminate without first
-unsubscribing from the EventBus. ManagedActorClassification maitains a system Actor which
+unsubscribing from the EventBus. ManagedActorClassification maintains a system Actor which
 takes care of unsubscribing terminated actors automatically.

 The necessary methods to be implemented are illustrated with the following example:
@@ -143,7 +143,7 @@ how a simple subscription works:

 .. includecode:: code/docs/event/LoggingDocSpec.scala#deadletters

-Similarily to `Actor Classification`_, :class:`EventStream` will automatically remove subscibers when they terminate.
+Similarly to `Actor Classification`_, :class:`EventStream` will automatically remove subscribers when they terminate.

 .. note::
 The event stream is a *local facility*, meaning that it will *not* distribute events to other nodes in a clustered environment (unless you subscribe a Remote Actor to the stream explicitly).
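
A minimal sketch of the kind of subscription the ``#deadletters`` include refers to, assuming a listener actor
that simply prints whatever dead letter it receives::

   import akka.actor.{ Actor, ActorRef, ActorSystem, DeadLetter, Props }

   class DeadLetterListener extends Actor {
     def receive = { case d: DeadLetter => println(d) }
   }

   val system = ActorSystem("example")
   val listener: ActorRef = system.actorOf(Props[DeadLetterListener], "dead-letter-listener")
   // the listener is removed from the stream automatically when it terminates
   system.eventStream.subscribe(listener, classOf[DeadLetter])
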
@@ -27,7 +27,7 @@ First, we define what our ``Extension`` should do:
 .. includecode:: code/docs/extension/ExtensionDocSpec.scala
 :include: extension

-Then we need to create an ``ExtensionId`` for our extension so we can grab ahold of it.
+Then we need to create an ``ExtensionId`` for our extension so we can grab a hold of it.

 .. includecode:: code/docs/extension/ExtensionDocSpec.scala
 :include: extensionid
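
A compact sketch of the ``ExtensionId`` shape being referred to; the counter extension below is illustrative and
not the code pulled in by the include directives::

   import java.util.concurrent.atomic.AtomicLong
   import akka.actor._

   class CountExtensionImpl extends Extension {
     private val counter = new AtomicLong(0)
     def increment(): Long = counter.incrementAndGet()
   }

   object CountExtension extends ExtensionId[CountExtensionImpl] with ExtensionIdProvider {
     // lookup is used when the extension is loaded via configuration at ActorSystem start-up
     override def lookup() = CountExtension
     override def createExtension(system: ExtendedActorSystem) = new CountExtensionImpl
   }

   // "grab a hold of it" wherever an ActorSystem is in scope
   val system = ActorSystem("example")
   CountExtension(system).increment()
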
@@ -116,7 +116,7 @@ Test Application
 ----------------

 The following section shows the effects of the different directives in practice,
-wherefor a test setup is needed. First off, we need a suitable supervisor:
+where a test setup is needed. First off, we need a suitable supervisor:

 .. includecode:: code/docs/actor/FaultHandlingDocSpec.scala
 :include: supervisor
@@ -68,7 +68,7 @@ to listen for TCP connections on a particular :class:`InetSocketAddress`; the
 port may be specified as ``0`` in order to bind to a random port.

 The actor sending the :class:`Bind` message will receive a :class:`Bound`
-message signalling that the server is ready to accept incoming connections;
+message signaling that the server is ready to accept incoming connections;
 this message also contains the :class:`InetSocketAddress` to which the socket
 was actually bound (i.e. resolved IP address and correct port number).
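
A minimal sketch of the Bind/Bound exchange described above; binding to port ``0`` picks a random port, as noted
in the context lines::

   import java.net.InetSocketAddress
   import akka.actor.{ Actor, ActorLogging }
   import akka.io.{ IO, Tcp }

   class Server extends Actor with ActorLogging {
     import context.system
     // ask the TCP manager to listen; replies are sent back to this actor
     IO(Tcp) ! Tcp.Bind(self, new InetSocketAddress("localhost", 0))

     def receive = {
       case Tcp.Bound(localAddress) =>
         // localAddress carries the resolved IP address and the actual port
         log.info("bound to {}", localAddress)
       case Tcp.CommandFailed(_: Tcp.Bind) =>
         context.stop(self)
     }
   }
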
@@ -170,7 +170,7 @@ In this case, a persistent actor must be recovered explicitly by sending it a ``

 .. warning::

-If ``preStart`` is overriden by an empty implementation, incoming commands will not be processed by the
+If ``preStart`` is overridden by an empty implementation, incoming commands will not be processed by the
 ``PersistentActor`` until it receives a ``Recover`` and finishes recovery.

 In order to completely skip recovery, you can signal it with ``Recover(toSequenceNr = OL)``
@@ -226,7 +226,7 @@ The ordering between events is still guaranteed ("evt-b-1" will be sent after "e
 .. includecode:: code/docs/persistence/PersistenceDocSpec.scala#persist-async

 .. note::
-In order to implement the pattern known as "*command sourcing*" simply call ``persistAsync(cmd)(...)`` right away on all incomming
+In order to implement the pattern known as "*command sourcing*" simply call ``persistAsync(cmd)(...)`` right away on all incoming
 messages, and handle them in the callback.

 .. warning::
@@ -240,7 +240,7 @@ Deferring actions until preceding persist handlers have executed

 Sometimes when working with ``persistAsync`` you may find that it would be nice to define some actions in terms of
 ''happens-after the previous ``persistAsync`` handlers have been invoked''. ``PersistentActor`` provides an utility method
-called ``defer``, which works similarily to ``persistAsync`` yet does not persist the passed in event. It is recommended to
+called ``defer``, which works similarly to ``persistAsync`` yet does not persist the passed in event. It is recommended to
 use it for *read* operations, and actions which do not have corresponding events in your domain model.

 Using this method is very similar to the persist family of methods, yet it does **not** persist the passed in event.
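
A short sketch of ``defer`` used together with ``persistAsync`` as described above; the event payloads and the
persistence id are arbitrary::

   import akka.persistence.PersistentActor

   class MyPersistentActor extends PersistentActor {
     override def persistenceId = "defer-sample"

     override def receiveRecover: Receive = { case _ => /* rebuild state from events here */ }

     override def receiveCommand: Receive = {
       case c: String =>
         persistAsync(s"evt-$c-1") { e => sender() ! e }
         persistAsync(s"evt-$c-2") { e => sender() ! e }
         // invoked only after the two handlers above have run; nothing is written to the journal for it
         defer(s"evt-$c-3") { e => sender() ! e }
     }
   }
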
@@ -447,7 +447,7 @@ destinations the destinations will see gaps in the sequence. It is not possible
 However, you can send a custom correlation identifier in the message to the destination. You must then retain
 a mapping between the internal ``deliveryId`` (passed into the ``deliveryIdToMessage`` function) and your custom
 correlation id (passed into the message). You can do this by storing such mapping in a ``Map(correlationId -> deliveryId)``
-from which you can retrive the ``deliveryId`` to be passed into the ``confirmDelivery`` method once the receiver
+from which you can retrieve the ``deliveryId`` to be passed into the ``confirmDelivery`` method once the receiver
 of your message has replied with your custom correlation id.

 The ``AtLeastOnceDelivery`` trait has a state consisting of unconfirmed messages and a
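
A rough sketch of the bookkeeping described above, keeping a ``Map`` from the custom correlation id to the
internal ``deliveryId`` so that ``confirmDelivery`` can be called when the reply comes back; the message types are
invented, and a real actor would also persist events so the map survives recovery::

   import akka.actor.ActorPath
   import akka.persistence.{ AtLeastOnceDelivery, PersistentActor }

   case class Ship(correlationId: String, payload: Any)
   case class Shipped(correlationId: String)

   class ShippingActor(destination: ActorPath) extends PersistentActor with AtLeastOnceDelivery {
     override def persistenceId = "shipping"

     // correlationId -> deliveryId
     private var pending = Map.empty[String, Long]

     override def receiveCommand: Receive = {
       case Ship(correlationId, payload) =>
         deliver(destination, deliveryId => {
           pending += correlationId -> deliveryId
           Ship(correlationId, payload)
         })
       case Shipped(correlationId) =>
         pending.get(correlationId).foreach(confirmDelivery)
         pending -= correlationId
     }

     override def receiveRecover: Receive = { case _ => /* replay events here */ }
   }
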
@@ -503,12 +503,12 @@ Plugins can be selected either by "default", for all persistent actors and views
 or "individually", when persistent actor or view defines it's own set of plugins.

 When persistent actor or view does NOT override ``journalPluginId`` and ``snapshotPluginId`` methods,
-persistense extension will use "default" journal and snapshot-store plugins configured in the ``reference.conf``::
+persistence extension will use "default" journal and snapshot-store plugins configured in the ``reference.conf``::

 akka.persistence.journal.plugin = ""
 akka.persistence.snapshot-store.plugin = ""

-However, these entires are provided as empty "", and require explicit user configuration via override in the user ``application.conf``.
+However, these entries are provided as empty "", and require explicit user configuration via override in the user ``application.conf``.
 For an example of journal plugin which writes messages to LevelDB see :ref:`local-leveldb-journal`.
 For an example of snapshot store plugin which writes snapshots as individual files to the local filesystem see :ref:`local-snapshot-store`.
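
The empty defaults shown above have to be pointed at concrete plugins in ``application.conf``; with the bundled
LevelDB journal and local snapshot store that would look roughly like this::

   akka.persistence.journal.plugin = "akka.persistence.journal.leveldb"
   akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot-store.local"
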
@@ -709,6 +709,6 @@ the actor or view will be serviced by these specific persistence plugins instead
 .. includecode:: code/docs/persistence/PersistenceMultiDocSpec.scala#override-plugins

 Note that ``journalPluginId`` and ``snapshotPluginId`` must refer to properly configured ``reference.conf``
-plugin entires with standard ``class`` property as well as settings which are specific for those plugins, i.e.:
+plugin entries with standard ``class`` property as well as settings which are specific for those plugins, i.e.:

 .. includecode:: code/docs/persistence/PersistenceMultiDocSpec.scala#override-config
@@ -448,7 +448,7 @@ SSL can be used as the remote transport by adding ``akka.remote.netty.ssl``
 to the ``enabled-transport`` configuration section. See a description of the settings
 in the :ref:`remote-configuration-scala` section.

-The SSL support is implemented with Java Secure Socket Extension, please consult the offical
+The SSL support is implemented with Java Secure Socket Extension, please consult the official
 `Java Secure Socket Extension documentation <http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html>`_
 and related resources for troubleshooting.
@@ -651,7 +651,7 @@ Start with the routing logic:

 ``select`` will be called for each message and in this example pick a few destinations by round-robin,
 by reusing the existing ``RoundRobinRoutingLogic`` and wrap the result in a ``SeveralRoutees``
-instance. ``SeveralRoutees`` will send the message to all of the supplied routues.
+instance. ``SeveralRoutees`` will send the message to all of the supplied routes.

 The implementation of the routing logic must be thread safe, since it might be used outside of actors.
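
The logic sketched in those lines, reusing round-robin and wrapping the picks in ``SeveralRoutees``, comes out
roughly like this; the redundancy factor is arbitrary::

   import scala.collection.immutable
   import akka.routing.{ RoundRobinRoutingLogic, Routee, RoutingLogic, SeveralRoutees }

   class RedundancyRoutingLogic(nbrCopies: Int) extends RoutingLogic {
     private val roundRobin = RoundRobinRoutingLogic()

     // must be thread safe, since select may be invoked outside of an actor
     override def select(message: Any, routees: immutable.IndexedSeq[Routee]): Routee = {
       val targets = (1 to nbrCopies).map(_ => roundRobin.select(message, routees))
       SeveralRoutees(targets)
     }
   }
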
@@ -411,7 +411,7 @@ Resolving Conflicts with Implicit ActorRef
 ------------------------------------------

 If you want the sender of messages inside your TestKit-based tests to be the ``testActor``
-simply mix in ``ÌmplicitSender`` into your test.
+simply mix in ``ImplicitSender`` into your test.

 .. includecode:: code/docs/testkit/PlainWordSpec.scala#implicit-sender
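
Mixing the trait in looks roughly like this, assuming ScalaTest; the echo actor is only there to have something
to talk to::

   import akka.actor.{ Actor, ActorSystem, Props }
   import akka.testkit.{ ImplicitSender, TestKit }
   import org.scalatest.WordSpecLike

   class EchoActor extends Actor {
     def receive = { case msg => sender() ! msg }
   }

   class MySpec extends TestKit(ActorSystem("MySpec")) with ImplicitSender with WordSpecLike {
     "an actor" must {
       "reply to the testActor when ImplicitSender is mixed in" in {
         val echo = system.actorOf(Props[EchoActor])
         echo ! "hello"
         expectMsg("hello")   // the implicit sender above was testActor
       }
     }
   }
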
@@ -736,7 +736,7 @@ options:
 .. includecode:: code/docs/testkit/TestkitDocSpec.scala#logging-receive

 .
-If the abovementioned setting is not given in the :ref:`configuration`, this method will
+If the aforementioned setting is not given in the :ref:`configuration`, this method will
 pass through the given :class:`Receive` function unmodified, meaning that
 there is no runtime cost unless actually enabled.
@@ -815,11 +815,11 @@ Some `Specs2 <http://specs2.org>`_ users have contributed examples of how to wor
 with :class:`org.specs2.specification.Scope`.
 * The Specification traits provide a :class:`Duration` DSL which uses partly
 the same method names as :class:`scala.concurrent.duration.Duration`, resulting in ambiguous
-implicits if ``scala.concurrent.duration._`` is imported. There are two work-arounds:
+implicits if ``scala.concurrent.duration._`` is imported. There are two workarounds:

 * either use the Specification variant of Duration and supply an implicit
 conversion to the Akka Duration. This conversion is not supplied with the
-Akka distribution because that would mean that our JAR files would dependon
+Akka distribution because that would mean that our JAR files would depend on
 Specs2, which is not justified by this little feature.

 * or mix :class:`org.specs2.time.NoTimeConversions` into the Specification.
@@ -11,7 +11,7 @@ Typed Actors
 Akka Typed Actors is an implementation of the `Active Objects <http://en.wikipedia.org/wiki/Active_object>`_ pattern.
 Essentially turning method invocations into asynchronous dispatch instead of synchronous that has been the default way since Smalltalk came out.

-Typed Actors consist of 2 "parts", a public interface and an implementation, and if you've done any work in "enterprise" Java, this will be very familiar to you. As with normal Actors you have an external API (the public interface instance) that will delegate methodcalls asynchronously to
+Typed Actors consist of 2 "parts", a public interface and an implementation, and if you've done any work in "enterprise" Java, this will be very familiar to you. As with normal Actors you have an external API (the public interface instance) that will delegate method calls asynchronously to
 a private instance of the implementation.

 The advantage of Typed Actors vs. Actors is that with TypedActors you have a
@@ -143,7 +143,7 @@ This call is asynchronous, and the Future returned can be used for asynchronous
 Stopping Typed Actors
 ---------------------

-Since Akkas Typed Actors are backed by Akka Actors they must be stopped when they aren't needed anymore.
+Since Akka's Typed Actors are backed by Akka Actors they must be stopped when they aren't needed anymore.

 .. includecode:: code/docs/actor/TypedActorDocSpec.scala
 :include: typed-actor-stop
@@ -305,7 +305,7 @@ reply-to address in the message, which both burdens the user with this task but
 also places this aspect of protocol design where it belongs.

 The other prominent difference is the removal of the :class:`Actor` trait. In
-order to avoid closing over instable references from different execution
+order to avoid closing over unstable references from different execution
 contexts (e.g. Future transformations) we turned all remaining methods that
 were on this trait into messages: the behavior receives the
 :class:`ActorContext` as an argument during processing and the lifecycle hooks