=doc #18849 Improve orthography & grammar
For all docs:

* remove consecutive duplicate words
* Improve use of commata
* Improve use of articles
* Improve consistent use of singular/plural
* Simplify run-on sentences

Review iterations:

* Integrate @rkuhn review points

  - bring back the comma for the interjection
  - ‘to not’ is not inverted if the infinitive form still follows
  - Elegantly connect a run-on sentence with a semicolon
  - Correct semantic error
  - Strictly monotonically preserve math expressions
  - Use correct English futures

* Cross sync changes to files in scala, java & java-lambda documentation files using git diff -u | patch
This commit is contained in:
parent 81cba2e580
commit dff87ad04f

11 changed files with 290 additions and 296 deletions
@@ -30,7 +30,7 @@ Akka persistence is a separate jar file. Make sure that you have the following d
 <version>@version@</version>
 </dependency>

-Akka persistence extension comes with few built-in persistence plugins, including
+The Akka persistence extension comes with few built-in persistence plugins, including
 in-memory heap based journal, local file-system based snapshot-store and LevelDB based journal.

 LevelDB based plugins will require the following additional dependency declaration::

@@ -51,7 +51,7 @@ Architecture
 * *AbstractPersistentActor*: Is a persistent, stateful actor. It is able to persist events to a journal and can react to
 them in a thread-safe manner. It can be used to implement both *command* as well as *event sourced* actors.
-When a persistent actor is started or restarted, journaled messages are replayed to that actor, so that it can
+When a persistent actor is started or restarted, journaled messages are replayed to that actor so that it can
 recover internal state from these messages.

 * *AbstractPersistentView*: A view is a persistent, stateful actor that receives journaled messages that have been written by another

@@ -63,13 +63,13 @@ Architecture
 * *AsyncWriteJournal*: A journal stores the sequence of messages sent to a persistent actor. An application can control which messages
 are journaled and which are received by the persistent actor without being journaled. The storage backend of a journal is pluggable.
-Persistence extension comes with a "leveldb" journal plugin, which writes to the local filesystem,
-and replicated journals are available as `Community plugins`_.
+The persistence extension comes with a "leveldb" journal plugin which writes to the local filesystem.
+Replicated journals are available as `Community plugins`_.

 * *Snapshot store*: A snapshot store persists snapshots of a persistent actor's or a view's internal state. Snapshots are
 used for optimizing recovery times. The storage backend of a snapshot store is pluggable.
-Persistence extension comes with a "local" snapshot storage plugin, which writes to the local filesystem,
-and replicated snapshot stores are available as `Community plugins`_.
+The persistence extension comes with a "local" snapshot storage plugin which writes to the local filesystem.
+Replicated snapshot stores are available as `Community plugins`_.

 * *Event sourcing*. Based on the building blocks described above, Akka persistence provides abstractions for the
 development of event sourced applications (see section :ref:`event-sourcing-java-lambda`)

@@ -82,13 +82,13 @@ Event sourcing
 ==============

 The basic idea behind `Event Sourcing`_ is quite simple. A persistent actor receives a (non-persistent) command
-which is first validated if it can be applied to the current state. Here, validation can mean anything, from simple
+which is first validated if it can be applied to the current state. Here validation can mean anything, from simple
 inspection of a command message's fields up to a conversation with several external services, for example.
 If validation succeeds, events are generated from the command, representing the effect of the command. These events
 are then persisted and, after successful persistence, used to change the actor's state. When the persistent actor
 needs to be recovered, only the persisted events are replayed of which we know that they can be successfully applied.
 In other words, events cannot fail when being replayed to a persistent actor, in contrast to commands. Event sourced
-actors may of course also process commands that do not change application state, such as query commands, for example.
+actors may of course also process commands that do not change application state such as query commands for example.

 .. _Event Sourcing: http://martinfowler.com/eaaDev/EventSourcing.html
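To make the command/event cycle described in this hunk concrete, here is a minimal illustrative sketch of an event sourced actor. It assumes the Akka 2.4-era lambda API (``AbstractPersistentActor`` with ``receiveCommand``/``receiveRecover``); the ``Cmd`` and ``Evt`` classes are hypothetical and are not the ones used in the referenced samples.

.. code-block:: java

   import akka.japi.pf.ReceiveBuilder;
   import akka.persistence.AbstractPersistentActor;
   import scala.PartialFunction;
   import scala.runtime.BoxedUnit;

   import java.io.Serializable;
   import java.util.ArrayList;
   import java.util.List;

   public class ExampleEventSourcedActor extends AbstractPersistentActor {

     // Hypothetical command and event types, for illustration only.
     public static class Cmd implements Serializable {
       public final String data;
       public Cmd(String data) { this.data = data; }
     }

     public static class Evt implements Serializable {
       public final String data;
       public Evt(String data) { this.data = data; }
     }

     private final List<String> state = new ArrayList<>();

     @Override
     public String persistenceId() {
       return "example-event-sourced-actor";
     }

     @Override
     public PartialFunction<Object, BoxedUnit> receiveRecover() {
       // Replayed events are applied directly; they are known to be valid.
       return ReceiveBuilder.match(Evt.class, evt -> state.add(evt.data)).build();
     }

     @Override
     public PartialFunction<Object, BoxedUnit> receiveCommand() {
       return ReceiveBuilder.match(Cmd.class, cmd -> {
         // 1. validate the command (trivially accepted here)
         // 2. derive an event representing its effect and persist it
         persist(new Evt(cmd.data), evt -> {
           // 3. only after successful persistence: update state and reply
           state.add(evt.data);
           sender().tell(evt.data + " persisted", self());
         });
       }).build();
     }
   }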
@@ -132,10 +132,10 @@ Note that the stash capacity is per actor. If you have many persistent actors, e
 you may need to define a small stash capacity to ensure that the total number of stashed messages in the system
 don't consume too much memory.

-If persistence of an event fails, ``onPersistFailure`` will be invoked (logging the error by default)
+If persistence of an event fails, ``onPersistFailure`` will be invoked (logging the error by default),
 and the actor will unconditionally be stopped. If persistence of an event is rejected before it is
 stored, e.g. due to serialization error, ``onPersistRejected`` will be invoked (logging a warning
-by default) and the actor continues with next message.
+by default), and the actor continues with next message.

 The easiest way to run this example yourself is to download `Typesafe Activator <http://www.typesafe.com/platform/getstarted>`_
 and open the tutorial named `Akka Persistence Samples in Java with Lambdas <http://www.typesafe.com/activator/template/akka-sample-persistence-java-lambda>`_.

@@ -176,7 +176,7 @@ Recovery customization
 ^^^^^^^^^^^^^^^^^^^^^^

 Applications may also customise how recovery is performed by returning a customised ``Recovery`` object
-in the ``recovery`` method of a ``AbstractPersistentActor``, for example setting an upper bound to the replay,
+in the ``recovery`` method of a ``AbstractPersistentActor``, for example setting an upper bound to the replay
 which allows the actor to be replayed to a certain point "in the past" instead to its most up to date state:

 .. includecode:: code/docs/persistence/LambdaPersistenceDocTest.java#recovery-custom
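As a sketch of such a customised ``Recovery`` object (assuming an Akka 2.4.x release where the ``Recovery.create`` factory methods are available), replay could be capped at a fixed sequence number like this:

.. code-block:: java

   import akka.persistence.Recovery;

   // Inside an AbstractPersistentActor subclass: replay events only up to
   // sequence number 457, instead of recovering to the latest state.
   @Override
   public Recovery recovery() {
     return Recovery.create(457L);
   }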
@@ -193,25 +193,25 @@ A persistent actor can query its own recovery status via the methods
 .. includecode:: code/docs/persistence/LambdaPersistenceDocTest.java#recovery-status

 Sometimes there is a need for performing additional initialization when the
-recovery has completed, before processing any other message sent to the persistent actor.
+recovery has completed before processing any other message sent to the persistent actor.
 The persistent actor will receive a special :class:`RecoveryCompleted` message right after recovery
 and before any other received messages.

 .. includecode:: code/docs/persistence/LambdaPersistenceDocTest.java#recovery-completed

 If there is a problem with recovering the state of the actor from the journal, ``onRecoveryFailure``
-is called (logging the error by default) and the actor will be stopped.
+is called (logging the error by default), and the actor will be stopped.


 Relaxed local consistency requirements and high throughput use-cases
 --------------------------------------------------------------------

-If faced with relaxed local consistency requirements and high throughput demands sometimes ``PersistentActor`` and it's
+If faced with relaxed local consistency requirements and high throughput demands sometimes ``PersistentActor`` and its
 ``persist`` may not be enough in terms of consuming incoming Commands at a high rate, because it has to wait until all
 Events related to a given Command are processed in order to start processing the next Command. While this abstraction is
 very useful for most cases, sometimes you may be faced with relaxed requirements about consistency – for example you may
-want to process commands as fast as you can, assuming that Event will eventually be persisted and handled properly in
-the background and retroactively reacting to persistence failures if needed.
+want to process commands as fast as you can, assuming that the Event will eventually be persisted and handled properly in
+the background, retroactively reacting to persistence failures if needed.

 The ``persistAsync`` method provides a tool for implementing high-throughput persistent actors. It will *not*
 stash incoming Commands while the Journal is still working on persisting and/or user code is executing event callbacks.

@@ -222,7 +222,7 @@ The ordering between events is still guaranteed ("evt-b-1" will be sent after "e
 .. includecode:: code/docs/persistence/LambdaPersistenceDocTest.java#persist-async

 .. note::
-In order to implement the pattern known as "*command sourcing*" simply call ``persistAsync`` on all incoming messages right away,
+In order to implement the pattern known as "*command sourcing*" simply call ``persistAsync`` on all incoming messages right away
 and handle them in the callback.

 .. warning::
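A minimal sketch of the "command sourcing" idea from the note above, again assuming the lambda API; the ``String`` command type is only a stand-in:

.. code-block:: java

   @Override
   public PartialFunction<Object, BoxedUnit> receiveCommand() {
     return ReceiveBuilder.match(String.class, cmd ->
       // journal the raw command right away; incoming commands are NOT stashed
       persistAsync(cmd, c -> {
         // handle the command only after it has been written by the journal
         sender().tell("handled " + c, self());
       })
     ).build();
   }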
@@ -271,9 +271,9 @@ When sending two commands to this ``PersistentActor``, the persist handlers will

 .. includecode:: code/docs/persistence/LambdaPersistenceDocTest.java#nested-persist-persist-caller

-First the "outer layer" of persist calls is issued and their callbacks applied, after these have successfully completed
+First the "outer layer" of persist calls is issued and their callbacks are applied. After these have successfully completed,
 the inner callbacks will be invoked (once the events they are persisting have been confirmed to be persisted by the journal).
-And only after all these handlers have been successfully invoked, the next command will delivered to the persistent Actor.
+Only after all these handlers have been successfully invoked will the next command be delivered to the persistent Actor.
 In other words, the stashing of incoming commands that is guaranteed by initially calling ``persist()`` on the outer layer
 is extended until all nested ``persist`` callbacks have been handled.

@@ -286,14 +286,14 @@ In this case no stashing is happening, yet the events are still persisted and ca
 .. includecode:: code/docs/persistence/LambdaPersistenceDocTest.java#nested-persistAsync-persistAsync-caller

 While it is possible to nest mixed ``persist`` and ``persistAsync`` with keeping their respective semantics
-it is not a recommended practice as it may lead to overly complex nesting.
+it is not a recommended practice, as it may lead to overly complex nesting.

 .. _failures-lambda:

 Failures
 --------

-If persistence of an event fails, ``onPersistFailure`` will be invoked (logging the error by default)
+If persistence of an event fails, ``onPersistFailure`` will be invoked (logging the error by default),
 and the actor will unconditionally be stopped.

 The reason that it cannot resume when persist fails is that it is unknown if the even was actually

@@ -305,11 +305,11 @@ is provided to support such restarts.
 .. includecode:: code/docs/persistence/LambdaPersistenceDocTest.java#backoff

 If persistence of an event is rejected before it is stored, e.g. due to serialization error,
-``onPersistRejected`` will be invoked (logging a warning by default) and the actor continues with
+``onPersistRejected`` will be invoked (logging a warning by default), and the actor continues with
 next message.

 If there is a problem with recovering the state of the actor from the journal when the actor is
-started, ``onRecoveryFailure`` is called (logging the error by default) and the actor will be stopped.
+started, ``onRecoveryFailure`` is called (logging the error by default), and the actor will be stopped.

 Atomic writes
 -------------
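The ``akka.pattern.BackoffSupervisor`` mentioned above can be wired up roughly as follows. This is a sketch assuming Akka 2.4.x; ``ExampleEventSourcedActor`` is the hypothetical actor from the earlier sketch and ``system`` is an existing ``ActorSystem``.

.. code-block:: java

   import akka.actor.ActorRef;
   import akka.actor.Props;
   import akka.pattern.BackoffSupervisor;
   import scala.concurrent.duration.Duration;

   import java.util.concurrent.TimeUnit;

   // Restart the persistent actor with an exponential back-off:
   // 3 s, 6 s, 12 s, ... capped at 30 s, with 20% random noise added.
   Props childProps = Props.create(ExampleEventSourcedActor.class);
   Props supervisorProps = BackoffSupervisor.props(
     childProps,
     "examplePersistentActor",
     Duration.create(3, TimeUnit.SECONDS),
     Duration.create(30, TimeUnit.SECONDS),
     0.2);
   ActorRef supervisor = system.actorOf(supervisorProps, "exampleSupervisor");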
@@ -327,7 +327,7 @@ command, i.e. ``onPersistRejected`` is called with an exception (typically ``Uns
 Batch writes
 ------------

-To optimize throughput, a persistent actor internally batches events to be stored under high load before
+In order to optimize throughput, a persistent actor internally batches events to be stored under high load before
 writing them to the journal (as a single batch). The batch size dynamically grows from 1 under low and moderate loads
 to a configurable maximum size (default is ``200``) under high load. When using ``persistAsync`` this increases
 the maximum throughput dramatically.

@@ -340,12 +340,12 @@ writing the previous batch. Batch writes are never timer-based which keeps laten
 Message deletion
 ----------------

-It is possible to delete all messages (journaled by a single persistent actor) up to a specified sequence number,
-persistent actors may call the ``deleteMessages`` method.
+It is possible to delete all messages (journaled by a single persistent actor) up to a specified sequence number;
+Persistent actors may call the ``deleteMessages`` method to this end.

-Deleting messages in event sourcing based applications is typically either not used at all, or used in conjunction with
+Deleting messages in event sourcing based applications is typically either not used at all or used in conjunction with
 :ref:`snapshotting <snapshots>`, i.e. after a snapshot has been successfully stored, a ``deleteMessages(toSequenceNr)``
-up until the sequence number of the data held by that snapshot can be issued, to safely delete the previous events,
+up until the sequence number of the data held by that snapshot can be issued to safely delete the previous events
 while still having access to the accumulated state during replays - by loading the snapshot.

 The result of the ``deleteMessages`` request is signaled to the persistent actor with a ``DeleteMessagesSuccess``
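A sketch of the snapshot-then-delete pattern described above, using the lambda API; how often you snapshot and whether you delete at all remains an application-specific choice:

.. code-block:: java

   import akka.persistence.DeleteMessagesSuccess;
   import akka.persistence.SaveSnapshotSuccess;

   @Override
   public PartialFunction<Object, BoxedUnit> receiveCommand() {
     return ReceiveBuilder
       .match(SaveSnapshotSuccess.class, success ->
         // the snapshot is safely stored, so events up to its sequence number
         // are no longer needed for recovery and can be deleted
         deleteMessages(success.metadata().sequenceNr()))
       .match(DeleteMessagesSuccess.class, done -> {
         // deletion confirmed by the journal
       })
       .build();
   }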
@@ -381,7 +381,7 @@ implements an exponential-backoff strategy which allows for more breathing room
 restarts of the persistent actor.

 .. note::
-Journal implementations may choose to implement a retry mechanisms, e.g. such that only after a write fails N number
+Journal implementations may choose to implement a retry mechanism, e.g. such that only after a write fails N number
 of times a persistence failure is signalled back to the user. In other words, once a journal returns a failure,
 it is considered *fatal* by Akka Persistence, and the persistent actor which caused the failure will be stopped.

@@ -401,15 +401,15 @@ automatically by Akka, leaving the target actor no way to refuse stopping itself

 This can be dangerous when used with :class:`PersistentActor` due to the fact that incoming commands are *stashed* while
 the persistent actor is awaiting confirmation from the Journal that events have been written when ``persist()`` was used.
-Since the incoming commands will be drained from the Actor's mailbox and put into it's internal stash while awaiting the
+Since the incoming commands will be drained from the Actor's mailbox and put into its internal stash while awaiting the
 confirmation (thus, before calling the persist handlers) the Actor **may receive and (auto)handle the PoisonPill
 before it processes the other messages which have been put into its stash**, causing a pre-mature shutdown of the Actor.

 .. warning::
 Consider using explicit shut-down messages instead of :class:`PoisonPill` when working with persistent actors.

-The example below highlights how messages arrive in the Actor's mailbox and how they interact with it's internal stashing
-mechanism when ``persist()`` is used, notice the early stop behaviour that occurs when ``PoisonPill`` is used:
+The example below highlights how messages arrive in the Actor's mailbox and how they interact with its internal stashing
+mechanism when ``persist()`` is used. Notice the early stop behaviour that occurs when ``PoisonPill`` is used:

 .. includecode:: code/docs/persistence/LambdaPersistenceDocTest.java#safe-shutdown
 .. includecode:: code/docs/persistence/LambdaPersistenceDocTest.java#safe-shutdown-example-bad

@@ -444,10 +444,9 @@ and setting the “initial behavior” in the constructor by calling the :meth:`

 .. includecode:: code/docs/persistence/LambdaPersistenceDocTest.java#view

-The ``persistenceId`` identifies the persistent actor from which the view receives journaled messages. It is not necessary
+The ``persistenceId`` identifies the persistent actor from which the view receives journaled messages. It is not necessary that
 the referenced persistent actor is actually running. Views read messages from a persistent actor's journal directly. When a
-persistent actor is started later and begins to write new messages, the corresponding view is updated automatically, by
-default.
+persistent actor is started later and begins to write new messages, by default the corresponding view is updated automatically.

 It is possible to determine if a message was sent from the Journal or from another actor in user-land by calling the ``isPersistent``
 method. Having that said, very often you don't need this information at all and can simply apply the same logic to both cases

@@ -483,7 +482,7 @@ of replayed messages for manual updates can be limited with the ``replayMax`` pa
 Recovery
 --------

-Initial recovery of persistent views works in the very same way as for a persistent actor (i.e. by sending a ``Recover`` message
+Initial recovery of persistent views works the very same way as for persistent actors (i.e. by sending a ``Recover`` message
 to self). The maximum number of replayed messages during initial recovery is determined by ``autoUpdateReplayMax``.
 Further possibilities to customize initial recovery are explained in section :ref:`recovery-java`.

@@ -496,7 +495,7 @@ A persistent view must have an identifier that doesn't change across different a
 The identifier must be defined with the ``viewId`` method.

 The ``viewId`` must differ from the referenced ``persistenceId``, unless :ref:`snapshots-java-lambda` of a view and its
-persistent actor shall be shared (which is what applications usually do not want).
+persistent actor should be shared (which is what applications usually do not want).

 .. _snapshots-java-lambda:
@@ -529,11 +528,11 @@ To disable snapshot-based recovery, applications should use ``SnapshotSelectionC
 saved snapshot matches the specified ``SnapshotSelectionCriteria`` will replay all journaled messages.

 .. note::
-In order to use snapshots a default snapshot-store (``akka.persistence.snapshot-store.plugin``) must be configured,
+In order to use snapshots, a default snapshot-store (``akka.persistence.snapshot-store.plugin``) must be configured,
 or the persistent actor can pick a snapshot store explicitly by overriding ``String snapshotPluginId()``.

-Since it is acceptable for some applications to not use any snapshotting, it is legal to not configure a snapshot store,
-however Akka will log a warning message when this situation is detected and then continue to operate until
+Since it is acceptable for some applications to not use any snapshotting, it is legal to not configure a snapshot store.
+However Akka will log a warning message when this situation is detected and then continue to operate until
 an actor tries to store a snapshot, at which point the the operation will fail (by replying with an ``SaveSnapshotFailure`` for example).

 Note that :ref:`cluster_sharding_java` is using snapshots, so if you use Cluster Sharding you need to define a snapshot store plugin.
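A compact sketch of saving snapshots and recovering from them with the lambda API; the ``String`` event type and the every-1000-events policy are arbitrary choices for illustration:

.. code-block:: java

   import akka.japi.pf.ReceiveBuilder;
   import akka.persistence.AbstractPersistentActor;
   import akka.persistence.SaveSnapshotFailure;
   import akka.persistence.SaveSnapshotSuccess;
   import akka.persistence.SnapshotOffer;
   import scala.PartialFunction;
   import scala.runtime.BoxedUnit;

   import java.util.ArrayList;
   import java.util.List;

   public class SnapshotExampleActor extends AbstractPersistentActor {
     private List<String> state = new ArrayList<>();
     private int eventsSinceSnapshot = 0;

     @Override
     public String persistenceId() { return "snapshot-example"; }

     @Override
     @SuppressWarnings("unchecked")
     public PartialFunction<Object, BoxedUnit> receiveRecover() {
       return ReceiveBuilder
         // recovery starts from the youngest matching snapshot, if one exists
         .match(SnapshotOffer.class, offer -> state = (List<String>) offer.snapshot())
         // ... followed by the events persisted after that snapshot
         .match(String.class, evt -> state.add(evt))
         .build();
     }

     @Override
     public PartialFunction<Object, BoxedUnit> receiveCommand() {
       return ReceiveBuilder
         .match(String.class, cmd -> persist(cmd, evt -> {
           state.add(evt);
           if (++eventsSinceSnapshot >= 1000) {  // arbitrary snapshot interval
             saveSnapshot(new ArrayList<>(state));
             eventsSinceSnapshot = 0;
           }
         }))
         .match(SaveSnapshotSuccess.class, ok -> { /* optionally delete old events */ })
         .match(SaveSnapshotFailure.class, fail -> { /* log and carry on */ })
         .build();
     }
   }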
@@ -570,17 +569,16 @@ To send messages with at-least-once delivery semantics to destinations you can e
 class instead of ``AbstractPersistentActor`` on the sending side. It takes care of re-sending messages when they
 have not been confirmed within a configurable timeout.

-The state of the sending actor, including which messages that have been sent and still not been
-confirmed by the recepient, must be persistent so that it can survive a crash of the sending actor
+The state of the sending actor, including which messages have been sent that have not been
+confirmed by the recepient must be persistent so that it can survive a crash of the sending actor
 or JVM. The ``AbstractPersistentActorWithAtLeastOnceDelivery`` class does not persist anything by itself.
 It is your responsibility to persist the intent that a message is sent and that a confirmation has been
 received.

 .. note::

-At-least-once delivery implies that original message send order is not always preserved
-and the destination may receive duplicate messages. That means that the
-semantics do not match those of a normal :class:`ActorRef` send operation:
+At-least-once delivery implies that original message send order is not always preserved,
+and the destination may receive duplicate messages. Semantics do not match those of a normal :class:`ActorRef` send operation:

 * it is not at-most-once delivery

@@ -588,9 +586,9 @@ received.
 possible resends

 * after a crash and restart of the destination messages are still
-delivered—to the new actor incarnation
+delivered to the new actor incarnation

-These semantics is similar to what an :class:`ActorPath` represents (see
+These semantics are similar to what an :class:`ActorPath` represents (see
 :ref:`actor-lifecycle-scala`), therefore you need to supply a path and not a
 reference when delivering messages. The messages are sent to the path with
 an actor selection.

@@ -613,10 +611,10 @@ the destination actor. When recovering, messages will be buffered until they hav
 Once recovery has completed, if there are outstanding messages that have not been confirmed (during the message replay),
 the persistent actor will resend these before sending any other messages.

-Deliver requires a ``deliveryIdToMessage`` function to pass the provided ``deliveryId`` into the message so that correlation
+Deliver requires a ``deliveryIdToMessage`` function to pass the provided ``deliveryId`` into the message so that the correlation
 between ``deliver`` and ``confirmDelivery`` is possible. The ``deliveryId`` must do the round trip. Upon receipt
-of the message, destination actor will send the same``deliveryId`` wrapped in a confirmation message back to the sender.
-The sender will then use it to call ``confirmDelivery`` method to complete delivery routine.
+of the message, the destination actor will send the same``deliveryId`` wrapped in a confirmation message back to the sender.
+The sender will then use it to call ``confirmDelivery`` method to complete the delivery routine.

 .. includecode:: code/docs/persistence/LambdaPersistenceDocTest.java#at-least-once-example
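The round trip described above could look roughly like the sketch below with the lambda API. ``Msg``, ``Confirm``, ``MsgSent`` and ``MsgConfirmed`` are hypothetical message and event types, and the destination actor is assumed to reply with a ``Confirm`` carrying the received ``deliveryId``.

.. code-block:: java

   import akka.actor.ActorPath;
   import akka.japi.pf.ReceiveBuilder;
   import akka.persistence.AbstractPersistentActorWithAtLeastOnceDelivery;
   import scala.PartialFunction;
   import scala.runtime.BoxedUnit;

   import java.io.Serializable;

   public class ExampleDeliverySender extends AbstractPersistentActorWithAtLeastOnceDelivery {

     // Hypothetical wire messages and persisted events.
     public static class Msg implements Serializable {
       public final long deliveryId; public final String payload;
       public Msg(long deliveryId, String payload) { this.deliveryId = deliveryId; this.payload = payload; }
     }
     public static class Confirm implements Serializable {
       public final long deliveryId;
       public Confirm(long deliveryId) { this.deliveryId = deliveryId; }
     }
     static class MsgSent implements Serializable {
       final String payload;
       MsgSent(String payload) { this.payload = payload; }
     }
     static class MsgConfirmed implements Serializable {
       final long deliveryId;
       MsgConfirmed(long deliveryId) { this.deliveryId = deliveryId; }
     }

     private final ActorPath destination;

     public ExampleDeliverySender(ActorPath destination) { this.destination = destination; }

     @Override
     public String persistenceId() { return "at-least-once-sketch"; }

     @Override
     public PartialFunction<Object, BoxedUnit> receiveCommand() {
       return ReceiveBuilder
         // persist the intent first, then deliver from the event handler
         .match(String.class, payload -> persist(new MsgSent(payload), this::updateState))
         // the destination echoes the deliveryId back; persist the confirmation
         .match(Confirm.class, confirm -> persist(new MsgConfirmed(confirm.deliveryId), this::updateState))
         .build();
     }

     @Override
     public PartialFunction<Object, BoxedUnit> receiveRecover() {
       // replaying the same events restores the delivery bookkeeping
       return ReceiveBuilder.match(Object.class, this::updateState).build();
     }

     private void updateState(Object event) {
       if (event instanceof MsgSent) {
         final MsgSent sent = (MsgSent) event;
         deliver(destination, deliveryId -> new Msg(deliveryId, sent.payload));
       } else if (event instanceof MsgConfirmed) {
         confirmDelivery(((MsgConfirmed) event).deliveryId);
       }
     }
   }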
@@ -634,8 +632,8 @@ sequence number. It does not store this state itself. You must persist events co
 ``deliver`` and ``confirmDelivery`` invocations from your ``PersistentActor`` so that the state can
 be restored by calling the same methods during the recovery phase of the ``PersistentActor``. Sometimes
 these events can be derived from other business level events, and sometimes you must create separate events.
-During recovery calls to ``deliver`` will not send out the message, but it will be sent later
-if no matching ``confirmDelivery`` was performed.
+During recovery, calls to ``deliver`` will not send out messages, those will be sent later
+if no matching ``confirmDelivery`` will have been performed.

 Support for snapshots is provided by ``getDeliverySnapshot`` and ``setDeliverySnapshot``.
 The ``AtLeastOnceDeliverySnapshot`` contains the full delivery state, including unconfirmed messages.

@@ -656,7 +654,7 @@ configured with the ``akka.persistence.at-least-once-delivery.warn-after-number-
 configuration key. The method can be overridden by implementation classes to return non-default values.

 The ``AbstractPersistentActorWithAtLeastOnceDelivery`` class holds messages in memory until their successful delivery has been confirmed.
-The limit of maximum number of unconfirmed messages that the actor is allowed to hold in memory
+The maximum number of unconfirmed messages that the actor is allowed to hold in memory
 is defined by the ``maxUnconfirmedMessages`` method. If this limit is exceed the ``deliver`` method will
 not accept more messages and it will throw ``AtLeastOnceDelivery.MaxUnconfirmedMessagesExceededException``.
 The default value can be configured with the ``akka.persistence.at-least-once-delivery.max-unconfirmed-messages``

@@ -696,7 +694,7 @@ Then in order for it to be used on events coming to and from the journal you mus
 It is possible to bind multiple adapters to one class *for recovery*, in which case the ``fromJournal`` methods of all
 bound adapters will be applied to a given matching event (in order of definition in the configuration). Since each adapter may
 return from ``0`` to ``n`` adapted events (called as ``EventSeq``), each adapter can investigate the event and if it should
-indeed adapt it return the adapted event(s) for it, other adapters which do not have anything to contribute during this
+indeed adapt it return the adapted event(s) for it. Other adapters which do not have anything to contribute during this
 adaptation simply return ``EventSeq.empty``. The adapted events are then delivered in-order to the ``PersistentActor`` during replay.

 .. note::
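A sketch of such an adapter; the ``OldEvent``/``NewEvent`` types and the upgrade rule are made up, while the ``EventAdapter``/``EventSeq`` names are as provided by ``akka.persistence.journal`` in Akka 2.4.x:

.. code-block:: java

   import akka.persistence.journal.EventAdapter;
   import akka.persistence.journal.EventSeq;

   public class UpgradeEventAdapter implements EventAdapter {

     // Hypothetical event representations.
     public static class OldEvent { public final String data; public OldEvent(String data) { this.data = data; } }
     public static class NewEvent { public final String data; public NewEvent(String data) { this.data = data; } }

     @Override
     public String manifest(Object event) {
       return ""; // no manifest needed in this sketch
     }

     @Override
     public Object toJournal(Object event) {
       return event; // write side: pass events through unchanged
     }

     @Override
     public EventSeq fromJournal(Object event, String manifest) {
       if (event instanceof OldEvent) {
         // this adapter is responsible for OldEvent: emit the upgraded form
         return EventSeq.single(new NewEvent(((OldEvent) event).data));
       } else {
         // nothing to contribute for events handled by other adapters
         return EventSeq.empty();
       }
     }
   }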
@@ -735,7 +733,7 @@ The customer can be in one of the following states:

 ``LookingAround`` customer is browsing the site, but hasn't added anything to the shopping cart
 ``Shopping`` customer has recently added items to the shopping cart
-``Inactive`` customer has items in the shopping cart, but hasn't added anything recently,
+``Inactive`` customer has items in the shopping cart, but hasn't added anything recently
 ``Paid`` customer has purchased the items

 .. note::

@@ -744,7 +742,7 @@ The customer can be in one of the following states:
 ``String identifier()`` method. This is required in order to simplify the serialization of FSM states.
 String identifiers should be unique!

-Customer's actions are "recorded" as a sequence of "domain events", which are persisted. Those events are replayed on actor's
+Customer's actions are "recorded" as a sequence of "domain events" which are persisted. Those events are replayed on actor's
 start in order to restore the latest customer's state:

 .. includecode:: ../../../akka-persistence/src/test/java/akka/persistence/fsm/AbstractPersistentFSMTest.java#customer-domain-events
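The states listed above can be modelled as an enum implementing ``PersistentFSM.FSMState``, for example as in the sketch below; the identifier strings just have to stay stable and unique:

.. code-block:: java

   import akka.persistence.fsm.PersistentFSM;

   // Domain states of the customer FSM; the identifier is what ends up in the
   // journal, so it must never change once events have been persisted.
   public enum UserState implements PersistentFSM.FSMState {
     LOOKING_AROUND("Looking Around"),
     SHOPPING("Shopping"),
     INACTIVE("Inactive"),
     PAID("Paid");

     private final String identifier;

     UserState(String identifier) { this.identifier = identifier; }

     @Override
     public String identifier() { return identifier; }
   }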
@@ -769,10 +767,10 @@ Storage plugins

 Storage backends for journals and snapshot stores are pluggable in the Akka persistence extension.

-Directory of persistence journal and snapshot store plugins is available at the Akka Community Projects page, see `Community plugins`_
+A directory of persistence journal and snapshot store plugins is available at the Akka Community Projects page, see `Community plugins`_

 Plugins can be selected either by "default", for all persistent actors and views,
-or "individually", when persistent actor or view defines it's own set of plugins.
+or "individually", when persistent actor or view defines its own set of plugins.

 When persistent actor or view does NOT override ``journalPluginId`` and ``snapshotPluginId`` methods,
 persistence extension will use "default" journal and snapshot-store plugins configured in the ``reference.conf``::

@@ -817,7 +815,7 @@ The journal plugin instance is an actor so the methods corresponding to requests
 are executed sequentially. It may delegate to asynchronous libraries, spawn futures, or delegate to other
 actors to achive parallelism.

-The journal plugin class must have a constructor without parameters or constructor with one ``com.typesafe.config.Config``
+The journal plugin class must have a constructor without parameters or a constructor with one ``com.typesafe.config.Config``
 parameter. The plugin section of the actor system's config will be passed in the config constructor parameter.

 Don't run journal tasks/futures on the system default dispatcher, since that might starve other tasks.

@@ -853,7 +851,7 @@ Pre-packaged plugins
 Local LevelDB journal
 ---------------------

-LevelDB journal plugin config entry is ``akka.persistence.journal.leveldb`` and it writes messages to a local LevelDB
+The LevelDB journal plugin config entry is ``akka.persistence.journal.leveldb``. It writes messages to a local LevelDB
 instance. Enable this plugin by defining config property:

 .. includecode:: ../scala/code/docs/persistence/PersistencePluginDocSpec.scala#leveldb-plugin-config

@@ -871,7 +869,7 @@ LevelDB based plugins will also require the following additional dependency decl
 <version>1.8</version>
 </dependency>

-The default location of the LevelDB files is a directory named ``journal`` in the current working
+The default location of LevelDB files is a directory named ``journal`` in the current working
 directory. This location can be changed by configuration where the specified path can be relative or absolute:

 .. includecode:: ../scala/code/docs/persistence/PersistencePluginDocSpec.scala#journal-config

@@ -890,7 +888,7 @@ backup node.
 .. warning::

 A shared LevelDB instance is a single point of failure and should therefore only be used for testing
-purposes. Highly-available, replicated journal are available as `Community plugins`_.
+purposes. Highly-available, replicated journals are available as `Community plugins`_.

 A shared LevelDB instance is started by instantiating the ``SharedLeveldbStore`` actor.

@@ -919,7 +917,7 @@ i.e. only the first injection is used.
 Local snapshot store
 --------------------

-Local snapshot store plugin config entry is ``akka.persistence.snapshot-store.local`` and it writes snapshot files to
+Local snapshot store plugin config entry is ``akka.persistence.snapshot-store.local``. It writes snapshot files to
 the local filesystem. Enable this plugin by defining config property:

 .. includecode:: ../scala/code/docs/persistence/PersistencePluginDocSpec.scala#leveldb-snapshot-plugin-config

@@ -952,8 +950,7 @@ For more advanced schema evolution techniques refer to the :ref:`persistence-sch
 Testing
 =======

-When running tests with LevelDB default settings in ``sbt``, make sure to set ``fork := true`` in your sbt project
-otherwise, you'll see an ``UnsatisfiedLinkError``. Alternatively, you can switch to a LevelDB Java port by setting
+When running tests with LevelDB default settings in ``sbt``, make sure to set ``fork := true`` in your sbt project. Otherwise, you'll see an ``UnsatisfiedLinkError``. Alternatively, you can switch to a LevelDB Java port by setting

 .. includecode:: ../scala/code/docs/persistence/PersistencePluginDocSpec.scala#native-config

@@ -974,21 +971,21 @@ in your Akka configuration. The LevelDB Java port is for testing purposes only.
 Multiple persistence plugin configurations
 ==========================================

-By default, persistent actor or view will use "default" journal and snapshot store plugins
+By default, a persistent actor or view will use "default" journal and snapshot store plugins
 configured in the following sections of the ``reference.conf`` configuration resource:

 .. includecode:: ../scala/code/docs/persistence/PersistenceMultiDocSpec.scala#default-config

-Note that in this case actor or view overrides only ``persistenceId`` method:
+Note that in this case the actor or view overrides only ``persistenceId`` method:

 .. includecode:: ../java/code/docs/persistence/PersistenceMultiDocTest.java#default-plugins

-When persistent actor or view overrides ``journalPluginId`` and ``snapshotPluginId`` methods,
+When a persistent actor or view overrides ``journalPluginId`` and ``snapshotPluginId`` methods,
 the actor or view will be serviced by these specific persistence plugins instead of the defaults:

 .. includecode:: ../java/code/docs/persistence/PersistenceMultiDocTest.java#override-plugins

 Note that ``journalPluginId`` and ``snapshotPluginId`` must refer to properly configured ``reference.conf``
-plugin entries with standard ``class`` property as well as settings which are specific for those plugins, i.e.:
+plugin entries with a standard ``class`` property as well as settings which are specific for those plugins, i.e.:

 .. includecode:: ../scala/code/docs/persistence/PersistenceMultiDocSpec.scala#override-config
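Overriding the plugin ids looks like the following sketch; the ``chronicle`` plugin ids are hypothetical and must match entries that actually exist in your configuration:

.. code-block:: java

   import akka.persistence.AbstractPersistentActor;

   public abstract class ActorWithOverriddenPlugins extends AbstractPersistentActor {

     // Both ids must point at properly configured plugin sections.
     @Override
     public String journalPluginId() {
       return "akka.persistence.chronicle.journal";
     }

     @Override
     public String snapshotPluginId() {
       return "akka.persistence.chronicle.snapshot-store";
     }
   }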
@@ -80,7 +80,7 @@ executions of the query.

 The stream is not completed when it reaches the end of the currently used `persistenceIds`,
 but it continues to push new `persistenceIds` when new persistent actors are created.
-Corresponding query that is completed when it reaches the end of the currently
+Corresponding query that is completed when it reaches the end of the
 currently used `persistenceIds` is provided by ``currentPersistenceIds``.

 The LevelDB write journal is notifying the query side as soon as new ``persistenceIds`` are
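A sketch of the two queries, assuming Akka 2.4.x with the LevelDB read journal on the classpath; the exact method names (such as ``allPersistenceIds``) and return types changed in later Akka versions, so treat the signatures as an assumption:

.. code-block:: java

   import akka.NotUsed;
   import akka.actor.ActorSystem;
   import akka.persistence.query.PersistenceQuery;
   import akka.persistence.query.journal.leveldb.javadsl.LeveldbReadJournal;
   import akka.stream.ActorMaterializer;
   import akka.stream.javadsl.Source;

   public class PersistenceIdsQueryExample {
     public static void main(String[] args) {
       ActorSystem system = ActorSystem.create("query-example");
       ActorMaterializer mat = ActorMaterializer.create(system);

       LeveldbReadJournal queries = PersistenceQuery.get(system)
           .getReadJournalFor(LeveldbReadJournal.class, LeveldbReadJournal.Identifier());

       // "live" stream: keeps pushing ids of newly created persistent actors
       Source<String, NotUsed> live = queries.allPersistenceIds();
       // finite stream: completes at the currently known end
       Source<String, NotUsed> current = queries.currentPersistenceIds();

       live.runForeach(System.out::println, mat);
     }
   }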
|
|||
|
|
@ -196,7 +196,7 @@ Materialize view using mapAsync
|
|||
If the target database does not provide a reactive streams ``Subscriber`` that can perform writes,
|
||||
you may have to implement the write logic using plain functions or Actors instead.
|
||||
|
||||
In case your write logic is state-less and you just need to convert the events from one data data type to another
|
||||
In case your write logic is state-less and you just need to convert the events from one data type to another
|
||||
before writing into the alternative datastore, then the projection is as simple as:
|
||||
|
||||
.. includecode:: code/docs/persistence/PersistenceQueryDocTest.java#projection-into-different-store-simple-classes
|
||||
|
|
|
|||
|
|
@ -210,7 +210,7 @@ we are familiar with it, it does its job well and Akka is using it internally as
|
|||
|
||||
While being able to read messages with missing fields is half of the solution, you also need to deal with the missing
|
||||
values somehow. This is usually modeled as some kind of default value, or by representing the field as an ``Optional<T>``
|
||||
See below for an example how reading an optional field from from a serialized protocol buffers message might look like.
|
||||
See below for an example how reading an optional field from a serialized protocol buffers message might look like.
|
||||
|
||||
.. includecode:: code/docs/persistence/PersistenceSchemaEvolutionDocTest.java#protobuf-read-optional-model
|
||||
|
||||
|
|
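As an illustration of the tolerant read, assuming a hypothetical protobuf-generated class ``SeatReservedV2`` (generated code exposes ``has…``/``get…`` accessors for optional fields):

.. code-block:: java

   // Fragment: tolerant read of a field that older serialized events may lack.
   private String seatNumberOrDefault(SeatReservedV2 proto) {
     if (proto.hasSeatNr()) {
       return proto.getSeatNr();   // newer events carry the field
     } else {
       return "UNKNOWN";           // older events fall back to a default
     }
   }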
@@ -234,7 +234,7 @@ Rename fields

 **Situation:**
 When first designing the system the ``SeatReverved`` event featured an ``code`` field.
-After some time you discover that what what was originally called ``code`` actually means ``seatNr``, thus the model
+After some time you discover that what was originally called ``code`` actually means ``seatNr``, thus the model
 should be changed to reflect this concept more accurately.

@@ -268,7 +268,7 @@ swiftly and refactor your models fearlessly as you go on with the project.

 **Solution 2 - by manually handling the event versions:**
 Another solution, in case your serialization format does not support renames as easily as the above mentioned formats,
-is versioning your schema. For example, you could have made your events events carry an additional field called ``_version``
+is versioning your schema. For example, you could have made your events carry an additional field called ``_version``
 which was set to ``1`` (because it was the initial schema), and once you change the schema you bump this number to ``2``,
 and write an adapter which can perform the rename.
@@ -34,7 +34,7 @@ Akka persistence is a separate jar file. Make sure that you have the following d
 <version>@version@</version>
 </dependency>

-Akka persistence extension comes with few built-in persistence plugins, including
+The Akka persistence extension comes with few built-in persistence plugins, including
 in-memory heap based journal, local file-system based snapshot-store and LevelDB based journal.

 LevelDB based plugins will require the following additional dependency declaration::

@@ -55,7 +55,7 @@ Architecture

 * *UntypedPersistentActor*: Is a persistent, stateful actor. It is able to persist events to a journal and can react to
 them in a thread-safe manner. It can be used to implement both *command* as well as *event sourced* actors.
-When a persistent actor is started or restarted, journaled messages are replayed to that actor, so that it can
+When a persistent actor is started or restarted, journaled messages are replayed to that actor so that it can
 recover internal state from these messages.

 * *UntypedPersistentView*: A view is a persistent, stateful actor that receives journaled messages that have been written by another

@@ -67,13 +67,13 @@ Architecture

 * *AsyncWriteJournal*: A journal stores the sequence of messages sent to a persistent actor. An application can control which messages
 are journaled and which are received by the persistent actor without being journaled. Journal maintains *highestSequenceNr* that is increased on each message.
-The storage backend of a journal is pluggable. Persistence extension comes with a "leveldb" journal plugin, which writes to the local filesystem,
-and replicated journals are available as `Community plugins`_.
+The storage backend of a journal is pluggable. The persistence extension comes with a "leveldb" journal plugin, which writes to the local filesystem.
+Replicated journals are available as `Community plugins`_.

 * *Snapshot store*: A snapshot store persists snapshots of a persistent actor's or a view's internal state. Snapshots are
 used for optimizing recovery times. The storage backend of a snapshot store is pluggable.
-Persistence extension comes with a "local" snapshot storage plugin, which writes to the local filesystem,
-and replicated snapshot stores are available as `Community plugins`_.
+The persistence extension comes with a "local" snapshot storage plugin, which writes to the local filesystem.
+Replicated snapshot stores are available as `Community plugins`_.

 .. _Community plugins: http://akka.io/community/

@@ -83,13 +83,13 @@ Event sourcing
 ==============

 The basic idea behind `Event Sourcing`_ is quite simple. A persistent actor receives a (non-persistent) command
-which is first validated if it can be applied to the current state. Here, validation can mean anything, from simple
+which is first validated if it can be applied to the current state. Here validation can mean anything from simple
 inspection of a command message's fields up to a conversation with several external services, for example.
 If validation succeeds, events are generated from the command, representing the effect of the command. These events
 are then persisted and, after successful persistence, used to change the actor's state. When the persistent actor
 needs to be recovered, only the persisted events are replayed of which we know that they can be successfully applied.
 In other words, events cannot fail when being replayed to a persistent actor, in contrast to commands. Event sourced
-actors may of course also process commands that do not change application state, such as query commands, for example.
+actors may of course also process commands that do not change application state such as query commands for example.

 .. _Event Sourcing: http://martinfowler.com/eaaDev/EventSourcing.html
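For comparison with the lambda-based docs above, the same command/event cycle with the plain Java API could be sketched along the following lines (``UntypedPersistentActor`` with ``onReceiveCommand``/``onReceiveRecover``; ``Evt`` is again a hypothetical event type):

.. code-block:: java

   import akka.japi.Procedure;
   import akka.persistence.UntypedPersistentActor;

   import java.io.Serializable;
   import java.util.ArrayList;
   import java.util.List;

   public class ExampleUntypedPersistentActor extends UntypedPersistentActor {

     public static class Evt implements Serializable {
       public final String data;
       public Evt(String data) { this.data = data; }
     }

     private final List<String> state = new ArrayList<>();

     @Override
     public String persistenceId() { return "example-untyped-persistent-actor"; }

     @Override
     public void onReceiveRecover(Object msg) {
       if (msg instanceof Evt) {
         state.add(((Evt) msg).data);   // replayed events are applied directly
       }
     }

     @Override
     public void onReceiveCommand(Object msg) {
       if (msg instanceof String) {
         // validate the command, derive an event and persist it;
         // the state is only updated after successful persistence
         persist(new Evt((String) msg), new Procedure<Evt>() {
           public void apply(Evt evt) {
             state.add(evt.data);
             getSender().tell(evt.data + " persisted", getSelf());
           }
         });
       } else {
         unhandled(msg);
       }
     }
   }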
@@ -133,10 +133,10 @@ Note that the stash capacity is per actor. If you have many persistent actors, e
 you may need to define a small stash capacity to ensure that the total number of stashed messages in the system
 don't consume too much memory.

-If persistence of an event fails, ``onPersistFailure`` will be invoked (logging the error by default)
+If persistence of an event fails, ``onPersistFailure`` will be invoked (logging the error by default),
 and the actor will unconditionally be stopped. If persistence of an event is rejected before it is
 stored, e.g. due to serialization error, ``onPersistRejected`` will be invoked (logging a warning
-by default) and the actor continues with next message.
+by default), and the actor continues with the next message.

 The easiest way to run this example yourself is to download `Typesafe Activator <http://www.typesafe.com/platform/getstarted>`_
 and open the tutorial named `Akka Persistence Samples with Java <http://www.typesafe.com/activator/template/akka-sample-persistence-java>`_.

@@ -178,7 +178,7 @@ Recovery customization
 ^^^^^^^^^^^^^^^^^^^^^^

 Applications may also customise how recovery is performed by returning a customised ``Recovery`` object
-in the ``recovery`` method of a ``UntypedPersistentActor``, for example setting an upper bound to the replay,
+in the ``recovery`` method of a ``UntypedPersistentActor``, for example setting an upper bound to the replay
 which allows the actor to be replayed to a certain point "in the past" instead to its most up to date state:

 .. includecode:: code/docs/persistence/PersistenceDocTest.java#recovery-custom

@@ -195,7 +195,7 @@ A persistent actor can query its own recovery status via the methods
 .. includecode:: code/docs/persistence/PersistenceDocTest.java#recovery-status

 Sometimes there is a need for performing additional initialization when the
-recovery has completed, before processing any other message sent to the persistent actor.
+recovery has completed before processing any other message sent to the persistent actor.
 The persistent actor will receive a special :class:`RecoveryCompleted` message right after recovery
 and before any other received messages.

@@ -209,12 +209,12 @@ is called (logging the error by default) and the actor will be stopped.
 Relaxed local consistency requirements and high throughput use-cases
 --------------------------------------------------------------------

-If faced with relaxed local consistency requirements and high throughput demands sometimes ``PersistentActor`` and it's
+If faced with relaxed local consistency requirements and high throughput demands sometimes ``PersistentActor`` and its
 ``persist`` may not be enough in terms of consuming incoming Commands at a high rate, because it has to wait until all
 Events related to a given Command are processed in order to start processing the next Command. While this abstraction is
 very useful for most cases, sometimes you may be faced with relaxed requirements about consistency – for example you may
-want to process commands as fast as you can, assuming that Event will eventually be persisted and handled properly in
-the background and retroactively reacting to persistence failures if needed.
+want to process commands as fast as you can, assuming that the Event will eventually be persisted and handled properly in
+the background, retroactively reacting to persistence failures if needed.

 The ``persistAsync`` method provides a tool for implementing high-throughput persistent actors. It will *not*
 stash incoming Commands while the Journal is still working on persisting and/or user code is executing event callbacks.

@@ -225,7 +225,7 @@ The ordering between events is still guaranteed ("evt-b-1" will be sent after "e
 .. includecode:: code/docs/persistence/PersistenceDocTest.java#persist-async

 .. note::
-In order to implement the pattern known as "*command sourcing*" simply ``persistAsync`` all incoming messages right away,
+In order to implement the pattern known as "*command sourcing*" simply ``persistAsync`` all incoming messages right away
 and handle them in the callback.

 .. warning::

@@ -274,9 +274,9 @@ When sending two commands to this ``PersistentActor``, the persist handlers will

 .. includecode:: code/docs/persistence/PersistenceDocTest.java#nested-persist-persist-caller

-First the "outer layer" of persist calls is issued and their callbacks applied, after these have successfully completed
+First the "outer layer" of persist calls is issued and their callbacks are applied. After these have successfully completed,
 the inner callbacks will be invoked (once the events they are persisting have been confirmed to be persisted by the journal).
-And only after all these handlers have been successfully invoked, the next command will delivered to the persistent Actor.
+Only after all these handlers have been successfully invoked will the next command be delivered to the persistent Actor.
 In other words, the stashing of incoming commands that is guaranteed by initially calling ``persist()`` on the outer layer
 is extended until all nested ``persist`` callbacks have been handled.

@@ -284,35 +284,35 @@ It is also possible to nest ``persistAsync`` calls, using the same pattern:

 .. includecode:: code/docs/persistence/PersistenceDocTest.java#nested-persistAsync-persistAsync

-In this case no stashing is happening, yet the events are still persisted and callbacks executed in the expected order:
+In this case no stashing is happening, yet events are still persisted and callbacks are executed in the expected order:

 .. includecode:: code/docs/persistence/PersistenceDocTest.java#nested-persistAsync-persistAsync-caller

 While it is possible to nest mixed ``persist`` and ``persistAsync`` with keeping their respective semantics
-it is not a recommended practice as it may lead to overly complex nesting.
+it is not a recommended practice, as it may lead to overly complex nesting.

 .. _failures-java:

 Failures
 --------

-If persistence of an event fails, ``onPersistFailure`` will be invoked (logging the error by default)
+If persistence of an event fails, ``onPersistFailure`` will be invoked (logging the error by default),
 and the actor will unconditionally be stopped.

-The reason that it cannot resume when persist fails is that it is unknown if the even was actually
+The reason that it cannot resume when persist fails is that it is unknown if the event was actually
 persisted or not, and therefore it is in an inconsistent state. Restarting on persistent failures
-will most likely fail anyway, since the journal is probably unavailable. It is better to stop the
+will most likely fail anyway since the journal is probably unavailable. It is better to stop the
 actor and after a back-off timeout start it again. The ``akka.pattern.BackoffSupervisor`` actor
 is provided to support such restarts.

 .. includecode:: code/docs/persistence/PersistenceDocTest.java#backoff

 If persistence of an event is rejected before it is stored, e.g. due to serialization error,
-``onPersistRejected`` will be invoked (logging a warning by default) and the actor continues with
+``onPersistRejected`` will be invoked (logging a warning by default), and the actor continues with
 next message.

 If there is a problem with recovering the state of the actor from the journal when the actor is
-started, ``onRecoveryFailure`` is called (logging the error by default) and the actor will be stopped.
+started, ``onRecoveryFailure`` is called (logging the error by default), and the actor will be stopped.

 Atomic writes
 -------------

@@ -330,7 +330,7 @@ command, i.e. ``onPersistRejected`` is called with an exception (typically ``Uns
 Batch writes
 ------------

-To optimize throughput, a persistent actor internally batches events to be stored under high load before
+In order to optimize throughput a persistent actor internally batches events to be stored under high load before
 writing them to the journal (as a single batch). The batch size dynamically grows from 1 under low and moderate loads
 to a configurable maximum size (default is ``200``) under high load. When using ``persistAsync`` this increases
 the maximum throughput dramatically.

@@ -343,22 +343,22 @@ writing the previous batch. Batch writes are never timer-based which keeps laten
 Message deletion
 ----------------

-It is possible to delete all messages (journaled by a single persistent actor) up to a specified sequence number,
-persistent actors may call the ``deleteMessages`` method.
+It is possible to delete all messages (journaled by a single persistent actor) up to a specified sequence number;
+Persistent actors may call the ``deleteMessages`` method to this end.

 Deleting messages in event sourcing based applications is typically either not used at all, or used in conjunction with
 :ref:`snapshotting <snapshots>`, i.e. after a snapshot has been successfully stored, a ``deleteMessages(toSequenceNr)``
-up until the sequence number of the data held by that snapshot can be issued, to safely delete the previous events,
+up until the sequence number of the data held by that snapshot can be issued to safely delete the previous events
 while still having access to the accumulated state during replays - by loading the snapshot.

 The result of the ``deleteMessages`` request is signaled to the persistent actor with a ``DeleteMessagesSuccess``
 message if the delete was successful or a ``DeleteMessagesFailure`` message if it failed.

-Message deletion doesn't affect highest sequence number of journal, even if all messages were deleted from journal after ``deleteMessages`` invocation.
+Message deletion doesn't affect the highest sequence number of the journal, even if all messages were deleted from it after ``deleteMessages`` invocation.

 Persistence status handling
 ---------------------------
-Persisting, deleting and replaying messages can either succeed or fail.
+Persisting, deleting, and replaying messages can either succeed or fail.

 +---------------------------------+-----------------------------+-------------------------------+-----------------------------------+
 | **Method** | **Success** | **Failure / Rejection** | **After failure handler invoked** |

@@ -377,7 +377,7 @@ the user can override in the ``PersistentActor``. The default implementations of
 (``error`` for persist/recovery failures, and ``warning`` for others), logging the failure cause and information about
 which message caused the failure.

-For critical failures, such as recovery or persisting events failing, the persistent actor will be stopped after the failure
+For critical failures such as recovery or persisting events failing the persistent actor will be stopped after the failure
 handler is invoked. This is because if the underlying journal implementation is signalling persistence failures it is most
 likely either failing completely or overloaded and restarting right-away and trying to persist the event again will most
 likely not help the journal recover – as it would likely cause a `Thundering herd problem`_, as many persistent actors

@@ -386,7 +386,7 @@ implements an exponential-backoff strategy which allows for more breathing room
 restarts of the persistent actor.

 .. note::
-Journal implementations may choose to implement a retry mechanisms, e.g. such that only after a write fails N number
+Journal implementations may choose to implement a retry mechanism, e.g. such that only after a write fails N number
 of times a persistence failure is signalled back to the user. In other words, once a journal returns a failure,
 it is considered *fatal* by Akka Persistence, and the persistent actor which caused the failure will be stopped.

@@ -399,22 +399,22 @@ restarts of the persistent actor.
 Safely shutting down persistent actors
 --------------------------------------

-Special care should be given when when shutting down persistent actors from the outside.
+Special care should be given when shutting down persistent actors from the outside.
 With normal Actors it is often acceptable to use the special :ref:`PoisonPill <poison-pill-java>` message
 to signal to an Actor that it should stop itself once it receives this message – in fact this message is handled
 automatically by Akka, leaving the target actor no way to refuse stopping itself when given a poison pill.

 This can be dangerous when used with :class:`PersistentActor` due to the fact that incoming commands are *stashed* while
 the persistent actor is awaiting confirmation from the Journal that events have been written when ``persist()`` was used.
-Since the incoming commands will be drained from the Actor's mailbox and put into it's internal stash while awaiting the
+Since the incoming commands will be drained from the Actor's mailbox and put into its internal stash while awaiting the
 confirmation (thus, before calling the persist handlers) the Actor **may receive and (auto)handle the PoisonPill
 before it processes the other messages which have been put into its stash**, causing a pre-mature shutdown of the Actor.

 .. warning::
 Consider using explicit shut-down messages instead of :class:`PoisonPill` when working with persistent actors.

-The example below highlights how messages arrive in the Actor's mailbox and how they interact with it's internal stashing
-mechanism when ``persist()`` is used, notice the early stop behaviour that occurs when ``PoisonPill`` is used:
+The example below highlights how messages arrive in the Actor's mailbox and how they interact with its internal stashing
+mechanism when ``persist()`` is used. Notice the early stop behaviour that occurs when ``PoisonPill`` is used:

 .. includecode:: code/docs/persistence/PersistenceDocTest.java#safe-shutdown
 .. includecode:: code/docs/persistence/PersistenceDocTest.java#safe-shutdown-example-bad

@@ -449,10 +449,10 @@ and the ``persistenceId`` methods.

 .. includecode:: code/docs/persistence/PersistenceDocTest.java#view

-The ``persistenceId`` identifies the persistent actor from which the view receives journaled messages. It is not necessary
+The ``persistenceId`` identifies the persistent actor from which the view receives journaled messages. It is not necessary that
 the referenced persistent actor is actually running. Views read messages from a persistent actor's journal directly. When a
-persistent actor is started later and begins to write new messages, the corresponding view is updated automatically, by
-default.
+persistent actor is started later and begins to write new messages, by
+default the corresponding view is updated automatically.

 It is possible to determine if a message was sent from the Journal or from another actor in user-land by calling the ``isPersistent``
 method. Having that said, very often you don't need this information at all and can simply apply the same logic to both cases

@@ -488,7 +488,7 @@ of replayed messages for manual updates can be limited with the ``replayMax`` pa
 Recovery
 --------

-Initial recovery of persistent views works in the very same way as for a persistent actor (i.e. by sending a ``Recover`` message
+Initial recovery of persistent views works the very same way as for persistent actors (i.e. by sending a ``Recover`` message
 to self). The maximum number of replayed messages during initial recovery is determined by ``autoUpdateReplayMax``.
 Further possibilities to customize initial recovery are explained in section :ref:`recovery-java`.

@@ -501,18 +501,18 @@ A persistent view must have an identifier that doesn't change across different a
 The identifier must be defined with the ``viewId`` method.

 The ``viewId`` must differ from the referenced ``persistenceId``, unless :ref:`snapshots-java` of a view and its
-persistent actor shall be shared (which is what applications usually do not want).
+persistent actor should be shared (which is what applications usually do not want).

 .. _snapshots-java:

 Snapshots
 =========

-Snapshots can dramatically reduce recovery times of persistent actor and views. The following discusses snapshots
-in context of persistent actor but this is also applicable to persistent views.
+Snapshots can dramatically reduce recovery times of persistent actors and views. The following discusses snapshots
+in context of persistent actors but this is also applicable to persistent views.

 Persistent actor can save snapshots of internal state by calling the ``saveSnapshot`` method. If saving of a snapshot
 succeeds, the persistent actor receives a ``SaveSnapshotSuccess`` message, otherwise a ``SaveSnapshotFailure`` message
|
||||
Persistent actors can save snapshots of internal state by calling the ``saveSnapshot`` method. If saving of a snapshot
|
||||
succeeds, the persistent actor receives a ``SaveSnapshotSuccess`` message, otherwise a ``SaveSnapshotFailure`` message.
|
||||
|
||||
.. includecode:: code/docs/persistence/PersistenceDocTest.java#save-snapshot
|
||||
|
||||
|
|
@ -534,12 +534,12 @@ To disable snapshot-based recovery, applications should use ``SnapshotSelectionC
|
|||
saved snapshot matches the specified ``SnapshotSelectionCriteria`` will replay all journaled messages.
|
||||
|
||||
.. note::
|
||||
In order to use snapshots a default snapshot-store (``akka.persistence.snapshot-store.plugin``) must be configured,
|
||||
In order to use snapshots, a default snapshot-store (``akka.persistence.snapshot-store.plugin``) must be configured,
|
||||
or the persistent actor can pick a snapshot store explicitly by overriding ``String snapshotPluginId()``.
|
||||
|
||||
Since it is acceptable for some applications to not use any snapshotting, it is legal to not configure a snapshot store,
|
||||
however Akka will log a warning message when this situation is detected and then continue to operate until
|
||||
an actor tries to store a snapshot, at which point the the operation will fail (by replying with an ``SaveSnapshotFailure`` for example).
|
||||
Since it is acceptable for some applications to not use any snapshotting, it is legal to not configure a snapshot store.
|
||||
However, Akka will log a warning message when this situation is detected and then continue to operate until
|
||||
an actor tries to store a snapshot, at which point the operation will fail (by replying with an ``SaveSnapshotFailure`` for example).
|
||||
|
||||
Note that :ref:`cluster_sharding_java` is using snapshots, so if you use Cluster Sharding you need to define a snapshot store plugin.
|
||||
|
||||
|
|
@ -575,17 +575,16 @@ To send messages with at-least-once delivery semantics to destinations you can e
|
|||
class instead of ``UntypedPersistentActor`` on the sending side. It takes care of re-sending messages when they
|
||||
have not been confirmed within a configurable timeout.
|
||||
|
||||
The state of the sending actor, including which messages that have been sent and still not been
|
||||
confirmed by the recepient, must be persistent so that it can survive a crash of the sending actor
|
||||
The state of the sending actor, including which messages have been sent that have not been
|
||||
confirmed by the recipient must be persistent so that it can survive a crash of the sending actor
|
||||
or JVM. The ``UntypedPersistentActorWithAtLeastOnceDelivery`` class does not persist anything by itself.
|
||||
It is your responsibility to persist the intent that a message is sent and that a confirmation has been
|
||||
received.
|
||||
|
||||
.. note::
|
||||
|
||||
At-least-once delivery implies that original message send order is not always preserved
|
||||
and the destination may receive duplicate messages. That means that the
|
||||
semantics do not match those of a normal :class:`ActorRef` send operation:
|
||||
At-least-once delivery implies that original message sending order is not always preserved,
|
||||
and the destination may receive duplicate messages. Semantics do not match those of a normal :class:`ActorRef` send operation:
|
||||
|
||||
* it is not at-most-once delivery
|
||||
|
||||
|
|
@ -593,9 +592,9 @@ received.
|
|||
possible resends
|
||||
|
||||
* after a crash and restart of the destination messages are still
|
||||
delivered—to the new actor incarnation
|
||||
delivered to the new actor incarnation
|
||||
|
||||
These semantics is similar to what an :class:`ActorPath` represents (see
|
||||
These semantics are similar to what an :class:`ActorPath` represents (see
|
||||
:ref:`actor-lifecycle-scala`), therefore you need to supply a path and not a
|
||||
reference when delivering messages. The messages are sent to the path with
|
||||
an actor selection.
|
||||
|
|
@ -618,10 +617,10 @@ the destination actor. When recovering, messages will be buffered until they hav
|
|||
Once recovery has completed, if there are outstanding messages that have not been confirmed (during the message replay),
|
||||
the persistent actor will resend these before sending any other messages.
|
||||
|
||||
Deliver requires a ``deliveryIdToMessage`` function to pass the provided ``deliveryId`` into the message so that correlation
|
||||
Deliver requires a ``deliveryIdToMessage`` function to pass the provided ``deliveryId`` into the message so that the correlation
|
||||
between ``deliver`` and ``confirmDelivery`` is possible. The ``deliveryId`` must do the round trip. Upon receipt
|
||||
of the message, destination actor will send the same``deliveryId`` wrapped in a confirmation message back to the sender.
|
||||
The sender will then use it to call ``confirmDelivery`` method to complete delivery routine.
|
||||
of the message, the destination actor will send the same ``deliveryId`` wrapped in a confirmation message back to the sender.
|
||||
The sender will then use it to call the ``confirmDelivery`` method to complete the delivery routine.
|
||||
|
||||
.. includecode:: code/docs/persistence/PersistenceDocTest.java#at-least-once-example
|
||||
|
||||
|
|
@ -639,8 +638,8 @@ sequence number. It does not store this state itself. You must persist events co
|
|||
``deliver`` and ``confirmDelivery`` invocations from your ``PersistentActor`` so that the state can
|
||||
be restored by calling the same methods during the recovery phase of the ``PersistentActor``. Sometimes
|
||||
these events can be derived from other business level events, and sometimes you must create separate events.
|
||||
During recovery calls to ``deliver`` will not send out the message, but it will be sent later
|
||||
if no matching ``confirmDelivery`` was performed.
|
||||
During recovery, calls to ``deliver`` will not send out messages, those will be sent later
|
||||
if no matching ``confirmDelivery`` will have been performed.
|
||||
|
||||
Support for snapshots is provided by ``getDeliverySnapshot`` and ``setDeliverySnapshot``.
|
||||
The ``AtLeastOnceDeliverySnapshot`` contains the full delivery state, including unconfirmed messages.
|
||||
|
|
@ -668,7 +667,7 @@ configured with the ``akka.persistence.at-least-once-delivery.warn-after-number-
|
|||
configuration key. The method can be overridden by implementation classes to return non-default values.
|
||||
|
||||
The ``UntypedPersistentActorWithAtLeastOnceDelivery`` class holds messages in memory until their successful delivery has been confirmed.
|
||||
The limit of maximum number of unconfirmed messages that the actor is allowed to hold in memory
|
||||
The maximum number of unconfirmed messages that the actor is allowed to hold in memory
|
||||
is defined by the ``maxUnconfirmedMessages`` method. If this limit is exceeded the ``deliver`` method will
|
||||
not accept more messages and it will throw ``AtLeastOnceDelivery.MaxUnconfirmedMessagesExceededException``.
|
||||
The default value can be configured with the ``akka.persistence.at-least-once-delivery.max-unconfirmed-messages``
|
||||
|
|
@ -708,7 +707,7 @@ Then in order for it to be used on events coming to and from the journal you mus
|
|||
It is possible to bind multiple adapters to one class *for recovery*, in which case the ``fromJournal`` methods of all
|
||||
bound adapters will be applied to a given matching event (in order of definition in the configuration). Since each adapter may
|
||||
return from ``0`` to ``n`` adapted events (called as ``EventSeq``), each adapter can investigate the event and if it should
|
||||
indeed adapt it return the adapted event(s) for it, other adapters which do not have anything to contribute during this
|
||||
indeed adapt it return the adapted event(s) for it. Other adapters which do not have anything to contribute during this
|
||||
adaptation simply return ``EventSeq.empty``. The adapted events are then delivered in-order to the ``PersistentActor`` during replay.
|
||||
|
||||
.. note::
|
||||
|
|
@ -719,22 +718,22 @@ Storage plugins
|
|||
|
||||
Storage backends for journals and snapshot stores are pluggable in the Akka persistence extension.
|
||||
|
||||
Directory of persistence journal and snapshot store plugins is available at the Akka Community Projects page, see `Community plugins`_
|
||||
A directory of persistence journal and snapshot store plugins is available at the Akka Community Projects page, see `Community plugins`_
|
||||
|
||||
Plugins can be selected either by "default", for all persistent actors and views,
|
||||
or "individually", when persistent actor or view defines it's own set of plugins.
|
||||
or "individually", when a persistent actor or view defines its own set of plugins.
|
||||
|
||||
When persistent actor or view does NOT override ``journalPluginId`` and ``snapshotPluginId`` methods,
|
||||
persistence extension will use "default" journal and snapshot-store plugins configured in the ``reference.conf``::
|
||||
When a persistent actor or view does NOT override the ``journalPluginId`` and ``snapshotPluginId`` methods,
|
||||
the persistence extension will use the "default" journal and snapshot-store plugins configured in the ``reference.conf``::
|
||||
|
||||
akka.persistence.journal.plugin = ""
akka.persistence.snapshot-store.plugin = ""
However, these entries are provided as empty "", and require explicit user configuration via override in the user ``application.conf``.
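For example, an ``application.conf`` override that selects the pre-packaged LevelDB journal and local snapshot store (their plugin ids are described below) might look like this::

  akka.persistence.journal.plugin = "akka.persistence.journal.leveldb"
  akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot-store.local"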
For an example of journal plugin which writes messages to LevelDB see :ref:`local-leveldb-journal-java`.
For an example of snapshot store plugin which writes snapshots as individual files to the local filesystem see :ref:`local-snapshot-store-java`.
|
||||
For an example of a journal plugin which writes messages to LevelDB see :ref:`local-leveldb-journal-java`.
|
||||
For an example of a snapshot store plugin which writes snapshots as individual files to the local filesystem see :ref:`local-snapshot-store-java`.
|
||||
|
||||
Applications can provide their own plugins by implementing a plugin API and activate them by configuration.
|
||||
Applications can provide their own plugins by implementing a plugin API and activating them by configuration.
|
||||
Plugin development requires the following imports:
|
||||
|
||||
.. includecode:: code/docs/persistence/PersistencePluginDocTest.java#plugin-imports
|
||||
|
|
@ -769,7 +768,7 @@ The journal plugin instance is an actor so the methods corresponding to requests
|
|||
are executed sequentially. It may delegate to asynchronous libraries, spawn futures, or delegate to other
|
||||
actors to achieve parallelism.
|
||||
|
||||
The journal plugin class must have a constructor without parameters or constructor with one ``com.typesafe.config.Config``
|
||||
The journal plugin class must have a constructor without parameters or a constructor with one ``com.typesafe.config.Config``
|
||||
parameter. The plugin section of the actor system's config will be passed in the config constructor parameter.
|
||||
|
||||
Don't run journal tasks/futures on the system default dispatcher, since that might starve other tasks.
|
||||
|
|
@ -799,9 +798,9 @@ Don't run snapshot store tasks/futures on the system default dispatcher, since t
|
|||
|
||||
Plugin TCK
|
||||
----------
|
||||
In order to help developers build correct and high quality storage plugins, we provide an Technology Compatibility Kit (`TCK <http://en.wikipedia.org/wiki/Technology_Compatibility_Kit>`_ for short).
|
||||
In order to help developers build correct and high quality storage plugins, we provide a Technology Compatibility Kit (`TCK <http://en.wikipedia.org/wiki/Technology_Compatibility_Kit>`_ for short).
|
||||
|
||||
The TCK is usable from Java as well as Scala projects, for Java you need to include the akka-persistence-tck dependency::
|
||||
The TCK is usable from Java as well as Scala projects. For Java you need to include the akka-persistence-tck dependency::
|
||||
|
||||
<dependency>
|
||||
<groupId>com.typesafe.akka</groupId>
|
||||
|
|
@ -815,8 +814,8 @@ To include the Journal TCK tests in your test suite simply extend the provided `
|
|||
.. includecode:: ./code/docs/persistence/PersistencePluginDocTest.java#journal-tck-java
|
||||
|
||||
We also provide a simple benchmarking class ``JavaJournalPerfSpec`` which includes all the tests that ``JavaJournalSpec``
|
||||
has, and also performs some longer operations on the Journal while printing it's performance stats. While it is NOT aimed
|
||||
to provide a proper benchmarking environment it can be used to get a rough feel about your journals performance in the most
|
||||
has, and also performs some longer operations on the Journal while printing its performance stats. While it is NOT aimed
|
||||
to provide a proper benchmarking environment it can be used to get a rough feel about your journal's performance in the most
|
||||
typical scenarios.
|
||||
|
||||
In order to include the ``SnapshotStore`` TCK tests in your test suite simply extend the ``SnapshotStoreSpec``:
|
||||
|
|
@ -839,7 +838,7 @@ Pre-packaged plugins
|
|||
Local LevelDB journal
|
||||
---------------------
|
||||
|
||||
LevelDB journal plugin config entry is ``akka.persistence.journal.leveldb`` and it writes messages to a local LevelDB
|
||||
The LevelDB journal plugin config entry is ``akka.persistence.journal.leveldb``. It writes messages to a local LevelDB
|
||||
instance. Enable this plugin by defining config property:
|
||||
|
||||
.. includecode:: ../scala/code/docs/persistence/PersistencePluginDocSpec.scala#leveldb-plugin-config
|
||||
|
|
@ -876,7 +875,7 @@ backup node.
|
|||
.. warning::
|
||||
|
||||
A shared LevelDB instance is a single point of failure and should therefore only be used for testing
|
||||
purposes. Highly-available, replicated journal are available as `Community plugins`_.
|
||||
purposes. Highly-available, replicated journals are available as `Community plugins`_.
|
||||
|
||||
A shared LevelDB instance is started by instantiating the ``SharedLeveldbStore`` actor.
|
||||
|
||||
|
|
@ -905,7 +904,7 @@ i.e. only the first injection is used.
|
|||
Local snapshot store
|
||||
--------------------
|
||||
|
||||
Local snapshot store plugin config entry is ``akka.persistence.snapshot-store.local`` and it writes snapshot files to
|
||||
The local snapshot store plugin config entry is ``akka.persistence.snapshot-store.local``. It writes snapshot files to
|
||||
the local filesystem. Enable this plugin by defining config property:
|
||||
|
||||
.. includecode:: ../scala/code/docs/persistence/PersistencePluginDocSpec.scala#leveldb-snapshot-plugin-config
|
||||
|
|
@ -938,8 +937,8 @@ For more advanced schema evolution techniques refer to the :ref:`persistence-sch
|
|||
Testing
|
||||
=======
|
||||
|
||||
When running tests with LevelDB default settings in ``sbt``, make sure to set ``fork := true`` in your sbt project
|
||||
otherwise, you'll see an ``UnsatisfiedLinkError``. Alternatively, you can switch to a LevelDB Java port by setting
|
||||
When running tests with LevelDB default settings in ``sbt``, make sure to set ``fork := true`` in your sbt project.
Otherwise, you'll see an ``UnsatisfiedLinkError``. Alternatively, you can switch to a LevelDB Java port by setting
.. includecode:: ../scala/code/docs/persistence/PersistencePluginDocSpec.scala#native-config
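The ``fork`` setting itself is a one-line addition to the build definition, for example in ``build.sbt``::

  fork := true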
@ -966,21 +965,21 @@ to the :ref:`reference configuration <config-akka-persistence>`.
|
|||
Multiple persistence plugin configurations
|
||||
==========================================
|
||||
|
||||
By default, persistent actor or view will use "default" journal and snapshot store plugins
|
||||
By default, a persistent actor or view will use the "default" journal and snapshot store plugins
|
||||
configured in the following sections of the ``reference.conf`` configuration resource:
|
||||
|
||||
.. includecode:: ../scala/code/docs/persistence/PersistenceMultiDocSpec.scala#default-config
|
||||
|
||||
Note that in this case actor or view overrides only ``persistenceId`` method:
|
||||
Note that in this case the actor or view overrides only ``persistenceId`` method:
|
||||
|
||||
.. includecode:: ../java/code/docs/persistence/PersistenceMultiDocTest.java#default-plugins
|
||||
|
||||
When persistent actor or view overrides ``journalPluginId`` and ``snapshotPluginId`` methods,
|
||||
When a persistent actor or view overrides the ``journalPluginId`` and ``snapshotPluginId`` methods,
|
||||
the actor or view will be serviced by these specific persistence plugins instead of the defaults:
|
||||
|
||||
.. includecode:: ../java/code/docs/persistence/PersistenceMultiDocTest.java#override-plugins
|
||||
|
||||
Note that ``journalPluginId`` and ``snapshotPluginId`` must refer to properly configured ``reference.conf``
|
||||
plugin entries with standard ``class`` property as well as settings which are specific for those plugins, i.e.:
|
||||
plugin entries with a standard ``class`` property as well as settings which are specific for those plugins, i.e.:
|
||||
|
||||
.. includecode:: ../scala/code/docs/persistence/PersistenceMultiDocSpec.scala#override-config
@ -280,7 +280,7 @@ configuration.
|
|||
.. includecode:: ../scala/code/docs/routing/RouterDocSpec.scala#config-balancing-pool2
|
||||
|
||||
The ``BalancingPool`` automatically uses a special ``BalancingDispatcher`` for its
|
||||
routees - disregarding any dispatcher that is set on the the routee Props object.
|
||||
routees - disregarding any dispatcher that is set on the routee Props object.
|
||||
This is needed in order to implement the balancing semantics via
|
||||
sharing the same mailbox by all the routees.
|
||||
|
||||
|
|
@ -388,7 +388,7 @@ TailChoppingPool and TailChoppingGroup
|
|||
--------------------------------------
|
||||
|
||||
The TailChoppingRouter will first send the message to one, randomly picked, routee
|
||||
and then after a small delay to to a second routee (picked randomly from the remaining routees) and so on.
|
||||
and then after a small delay to a second routee (picked randomly from the remaining routees) and so on.
|
||||
It waits for first reply it gets back and forwards it back to original sender. Other replies are discarded.
|
||||
|
||||
The goal of this router is to decrease latency by performing redundant queries to multiple routees, assuming that
|
||||
|
|
@ -436,7 +436,7 @@ There is 3 ways to define what data to use for the consistent hash key.
|
|||
The key is part of the message and it's convenient to define it together
|
||||
with the message definition.
|
||||
|
||||
* The messages can be be wrapped in a ``akka.routing.ConsistentHashingRouter.ConsistentHashableEnvelope``
|
||||
* The messages can be wrapped in a ``akka.routing.ConsistentHashingRouter.ConsistentHashableEnvelope``
|
||||
to define what data to use for the consistent hash key. The sender knows
|
||||
the key to use.
|
||||
|
||||
|
|
@ -507,7 +507,7 @@ to every routee of a router.
|
|||
|
||||
In this example the router receives the ``Broadcast`` message, extracts its payload
|
||||
(``"Watch out for Davy Jones' locker"``), and then sends the payload on to all of the router's
|
||||
routees. It is up to each each routee actor to handle the received payload message.
|
||||
routees. It is up to each routee actor to handle the received payload message.
|
||||
|
||||
PoisonPill Messages
|
||||
-------------------
@ -75,7 +75,7 @@ executions of the query.
|
|||
|
||||
The stream is not completed when it reaches the end of the currently used `persistenceIds`,
|
||||
but it continues to push new `persistenceIds` when new persistent actors are created.
|
||||
Corresponding query that is completed when it reaches the end of the currently
|
||||
Corresponding query that is completed when it reaches the end of the
|
||||
currently used `persistenceIds` is provided by ``currentPersistenceIds``.
|
||||
|
||||
The LevelDB write journal is notifying the query side as soon as new ``persistenceIds`` are
@ -192,7 +192,7 @@ Materialize view using mapAsync
|
|||
If the target database does not provide a reactive streams ``Subscriber`` that can perform writes,
|
||||
you may have to implement the write logic using plain functions or Actors instead.
|
||||
|
||||
In case your write logic is state-less and you just need to convert the events from one data data type to another
|
||||
In case your write logic is state-less and you just need to convert the events from one data type to another
|
||||
before writing into the alternative datastore, then the projection is as simple as:
|
||||
|
||||
.. includecode:: code/docs/persistence/query/PersistenceQueryDocSpec.scala#projection-into-different-store-simple
@ -210,7 +210,7 @@ we are familiar with it, it does its job well and Akka is using it internally as
|
|||
|
||||
While being able to read messages with missing fields is half of the solution, you also need to deal with the missing
|
||||
values somehow. This is usually modeled as some kind of default value, or by representing the field as an ``Option[T]``
|
||||
See below for an example how reading an optional field from from a serialized protocol buffers message might look like.
|
||||
See below for an example how reading an optional field from a serialized protocol buffers message might look like.
|
||||
|
||||
.. includecode:: code/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala#protobuf-read-optional-model
@ -234,7 +234,7 @@ Rename fields
|
|||
|
||||
**Situation:**
|
||||
When first designing the system the ``SeatReverved`` event featured an ``code`` field.
|
||||
After some time you discover that what what was originally called ``code`` actually means ``seatNr``, thus the model
|
||||
After some time you discover that what was originally called ``code`` actually means ``seatNr``, thus the model
|
||||
should be changed to reflect this concept more accurately.
|
||||
|
||||
|
||||
|
|
@ -268,7 +268,7 @@ swiftly and refactor your models fearlessly as you go on with the project.
|
|||
|
||||
**Solution 2 - by manually handling the event versions:**
|
||||
Another solution, in case your serialization format does not support renames as easily as the above mentioned formats,
|
||||
is versioning your schema. For example, you could have made your events events carry an additional field called ``_version``
|
||||
is versioning your schema. For example, you could have made your events carry an additional field called ``_version``
|
||||
which was set to ``1`` (because it was the initial schema), and once you change the schema you bump this number to ``2``,
|
||||
and write an adapter which can perform the rename.
@ -26,7 +26,7 @@ Akka persistence is a separate jar file. Make sure that you have the following d
|
|||
|
||||
"com.typesafe.akka" %% "akka-persistence" % "@version@" @crossString@
|
||||
|
||||
Akka persistence extension comes with few built-in persistence plugins, including
|
||||
The Akka persistence extension comes with few built-in persistence plugins, including
|
||||
in-memory heap based journal, local file-system based snapshot-store and LevelDB based journal.
|
||||
|
||||
LevelDB based plugins will require the following additional dependency declaration::
|
||||
|
|
@ -39,7 +39,7 @@ Architecture
|
|||
|
||||
* *PersistentActor*: Is a persistent, stateful actor. It is able to persist events to a journal and can react to
|
||||
them in a thread-safe manner. It can be used to implement both *command* as well as *event sourced* actors.
|
||||
When a persistent actor is started or restarted, journaled messages are replayed to that actor, so that it can
|
||||
When a persistent actor is started or restarted, journaled messages are replayed to that actor so that it can
|
||||
recover internal state from these messages.
|
||||
|
||||
* *PersistentView*: A view is a persistent, stateful actor that receives journaled messages that have been written by another
|
||||
|
|
@ -51,13 +51,13 @@ Architecture
|
|||
|
||||
* *AsyncWriteJournal*: A journal stores the sequence of messages sent to a persistent actor. An application can control which messages
|
||||
are journaled and which are received by the persistent actor without being journaled. Journal maintains *highestSequenceNr* that is increased on each message.
|
||||
The storage backend of a journal is pluggable. Persistence extension comes with a "leveldb" journal plugin, which writes to the local filesystem,
|
||||
and replicated journals are available as `Community plugins`_.
|
||||
The storage backend of a journal is pluggable. The persistence extension comes with a "leveldb" journal plugin, which writes to the local filesystem.
|
||||
Replicated journals are available as `Community plugins`_.
|
||||
|
||||
* *Snapshot store*: A snapshot store persists snapshots of a persistent actor's or a view's internal state. Snapshots are
|
||||
used for optimizing recovery times. The storage backend of a snapshot store is pluggable.
|
||||
Persistence extension comes with a "local" snapshot storage plugin, which writes to the local filesystem,
|
||||
and replicated snapshot stores are available as `Community plugins`_.
|
||||
The persistence extension comes with a "local" snapshot storage plugin, which writes to the local filesystem.
|
||||
Replicated snapshot stores are available as `Community plugins`_.
|
||||
|
||||
.. _Community plugins: http://akka.io/community/
|
||||
|
||||
|
|
@ -67,13 +67,13 @@ Event sourcing
|
|||
==============
|
||||
|
||||
The basic idea behind `Event Sourcing`_ is quite simple. A persistent actor receives a (non-persistent) command
|
||||
which is first validated if it can be applied to the current state. Here, validation can mean anything, from simple
|
||||
which is first validated if it can be applied to the current state. Here validation can mean anything, from simple
|
||||
inspection of a command message's fields up to a conversation with several external services, for example.
|
||||
If validation succeeds, events are generated from the command, representing the effect of the command. These events
|
||||
are then persisted and, after successful persistence, used to change the actor's state. When the persistent actor
|
||||
needs to be recovered, only the persisted events are replayed of which we know that they can be successfully applied.
|
||||
In other words, events cannot fail when being replayed to a persistent actor, in contrast to commands. Event sourced
|
||||
actors may of course also process commands that do not change application state, such as query commands, for example.
|
||||
actors may of course also process commands that do not change application state such as query commands for example.
|
||||
|
||||
.. _Event Sourcing: http://martinfowler.com/eaaDev/EventSourcing.html
|
||||
|
||||
|
|
@ -117,10 +117,10 @@ Note that the stash capacity is per actor. If you have many persistent actors, e
|
|||
you may need to define a small stash capacity to ensure that the total number of stashed messages in the system
|
||||
don't consume too much memory.
|
||||
|
||||
If persistence of an event fails, ``onPersistFailure`` will be invoked (logging the error by default)
|
||||
If persistence of an event fails, ``onPersistFailure`` will be invoked (logging the error by default),
|
||||
and the actor will unconditionally be stopped. If persistence of an event is rejected before it is
|
||||
stored, e.g. due to serialization error, ``onPersistRejected`` will be invoked (logging a warning
|
||||
by default) and the actor continues with next message.
|
||||
by default) and the actor continues with the next message.
|
||||
|
||||
The easiest way to run this example yourself is to download `Typesafe Activator <http://www.typesafe.com/platform/getstarted>`_
|
||||
and open the tutorial named `Akka Persistence Samples with Scala <http://www.typesafe.com/activator/template/akka-sample-persistence-scala>`_.
|
||||
|
|
@ -161,7 +161,7 @@ Recovery customization
|
|||
^^^^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
Applications may also customise how recovery is performed by returning a customised ``Recovery`` object
|
||||
in the ``recovery`` method of a ``PersistentActor``, for example setting an upper bound to the replay,
|
||||
in the ``recovery`` method of a ``PersistentActor``, for example setting an upper bound to the replay
|
||||
which allows the actor to be replayed to a certain point "in the past" instead to its most up to date state:
|
||||
|
||||
.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#recovery-custom
|
||||
|
|
@ -178,7 +178,7 @@ A persistent actor can query its own recovery status via the methods
|
|||
.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#recovery-status
|
||||
|
||||
Sometimes there is a need for performing additional initialization when the
|
||||
recovery has completed, before processing any other message sent to the persistent actor.
|
||||
recovery has completed before processing any other message sent to the persistent actor.
The persistent actor will receive a special :class:`RecoveryCompleted` message right after recovery
and before any other received messages.
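A minimal sketch of performing such initialization in ``receiveRecover`` (the id, event type and state are illustrative)::

  import akka.persistence.{ PersistentActor, RecoveryCompleted }

  class InitializingActor extends PersistentActor {
    override def persistenceId = "initializing-actor-1"

    var state: List[String] = Nil
    def updateState(evt: String): Unit = state = evt :: state

    override def receiveRecover: Receive = {
      case RecoveryCompleted =>
        // replay is done, perform additional initialization before any other message is processed
      case evt: String => updateState(evt)
    }

    override def receiveCommand: Receive = {
      case cmd: String => persist(cmd)(updateState)
    }
  }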
@ -192,12 +192,12 @@ is called (logging the error by default) and the actor will be stopped.
|
|||
Relaxed local consistency requirements and high throughput use-cases
|
||||
--------------------------------------------------------------------
|
||||
|
||||
If faced with relaxed local consistency requirements and high throughput demands sometimes ``PersistentActor`` and it's
|
||||
If faced with relaxed local consistency requirements and high throughput demands sometimes ``PersistentActor`` and its
|
||||
``persist`` may not be enough in terms of consuming incoming Commands at a high rate, because it has to wait until all
|
||||
Events related to a given Command are processed in order to start processing the next Command. While this abstraction is
|
||||
very useful for most cases, sometimes you may be faced with relaxed requirements about consistency – for example you may
|
||||
want to process commands as fast as you can, assuming that Event will eventually be persisted and handled properly in
|
||||
the background and retroactively reacting to persistence failures if needed.
|
||||
want to process commands as fast as you can, assuming that the Event will eventually be persisted and handled properly in
|
||||
the background, retroactively reacting to persistence failures if needed.
|
||||
|
||||
The ``persistAsync`` method provides a tool for implementing high-throughput persistent actors. It will *not*
|
||||
stash incoming Commands while the Journal is still working on persisting and/or user code is executing event callbacks.
|
||||
|
|
@ -209,7 +209,7 @@ The ordering between events is still guaranteed ("evt-b-1" will be sent after "e
|
|||
|
||||
.. note::
|
||||
In order to implement the pattern known as "*command sourcing*" simply call ``persistAsync(cmd)(...)`` right away on all incoming
messages, and handle them in the callback.
messages and handle them in the callback.
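A minimal sketch of this pattern (the actor name and ``updateState`` helper are illustrative)::

  import akka.persistence.PersistentActor

  class CommandSourcedActor extends PersistentActor {
    override def persistenceId = "command-sourced-1"

    def updateState(cmd: Any): Unit = () // apply the command to internal state

    override def receiveCommand: Receive = {
      case cmd => persistAsync(cmd)(updateState) // persist every incoming command as-is
    }

    override def receiveRecover: Receive = {
      case cmd => updateState(cmd)
    }
  }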
.. warning::
|
||||
The callback will not be invoked if the actor is restarted (or stopped) in between the call to
|
||||
|
|
@ -259,9 +259,9 @@ When sending two commands to this ``PersistentActor``, the persist handlers will
|
|||
|
||||
.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#nested-persist-persist-caller
|
||||
|
||||
First the "outer layer" of persist calls is issued and their callbacks applied, after these have successfully completed
|
||||
First the "outer layer" of persist calls is issued and their callbacks are applied. After these have successfully completed,
|
||||
the inner callbacks will be invoked (once the events they are persisting have been confirmed to be persisted by the journal).
|
||||
And only after all these handlers have been successfully invoked, the next command will delivered to the persistent Actor.
|
||||
Only after all these handlers have been successfully invoked will the next command be delivered to the persistent Actor.
|
||||
In other words, the stashing of incoming commands that is guaranteed by initially calling ``persist()`` on the outer layer
|
||||
is extended until all nested ``persist`` callbacks have been handled.
|
||||
|
||||
|
|
@ -269,35 +269,35 @@ It is also possible to nest ``persistAsync`` calls, using the same pattern:
|
|||
|
||||
.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#nested-persistAsync-persistAsync
|
||||
|
||||
In this case no stashing is happening, yet the events are still persisted and callbacks executed in the expected order:
|
||||
In this case no stashing is happening, yet events are still persisted and callbacks are executed in the expected order:
|
||||
|
||||
.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#nested-persistAsync-persistAsync-caller
|
||||
|
||||
While it is possible to nest mixed ``persist`` and ``persistAsync`` with keeping their respective semantics
|
||||
it is not a recommended practice as it may lead to overly complex nesting.
|
||||
it is not a recommended practice, as it may lead to overly complex nesting.
|
||||
|
||||
.. _failures-scala:
|
||||
|
||||
Failures
|
||||
--------
|
||||
|
||||
If persistence of an event fails, ``onPersistFailure`` will be invoked (logging the error by default)
|
||||
If persistence of an event fails, ``onPersistFailure`` will be invoked (logging the error by default),
|
||||
and the actor will unconditionally be stopped.
|
||||
|
||||
The reason that it cannot resume when persist fails is that it is unknown if the even was actually
|
||||
The reason that it cannot resume when persist fails is that it is unknown if the event was actually
|
||||
persisted or not, and therefore it is in an inconsistent state. Restarting on persistent failures
|
||||
will most likely fail anyway, since the journal is probably unavailable. It is better to stop the
|
||||
will most likely fail anyway since the journal is probably unavailable. It is better to stop the
|
||||
actor and after a back-off timeout start it again. The ``akka.pattern.BackoffSupervisor`` actor
is provided to support such restarts.
.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#backoff
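A sketch of wrapping a persistent actor in such a back-off supervisor; ``MyPersistentActor`` and the timing values are illustrative::

  import akka.actor.{ ActorSystem, Props }
  import akka.pattern.BackoffSupervisor
  import scala.concurrent.duration._

  val system = ActorSystem("example")
  val childProps = Props[MyPersistentActor]

  val supervisorProps = BackoffSupervisor.props(
    childProps,
    childName = "myPersistentActor",
    minBackoff = 3.seconds,
    maxBackoff = 30.seconds,
    randomFactor = 0.2) // adds jitter so that many actors do not restart at the same instant

  system.actorOf(supervisorProps, "backoffSupervisor")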
If persistence of an event is rejected before it is stored, e.g. due to serialization error,
|
||||
``onPersistRejected`` will be invoked (logging a warning by default) and the actor continues with
|
||||
``onPersistRejected`` will be invoked (logging a warning by default), and the actor continues with
|
||||
next message.
|
||||
|
||||
If there is a problem with recovering the state of the actor from the journal when the actor is
|
||||
started, ``onRecoveryFailure`` is called (logging the error by default) and the actor will be stopped.
|
||||
started, ``onRecoveryFailure`` is called (logging the error by default), and the actor will be stopped.
|
||||
|
||||
Atomic writes
|
||||
-------------
|
||||
|
|
@ -317,7 +317,7 @@ command, i.e. ``onPersistRejected`` is called with an exception (typically ``Uns
|
|||
Batch writes
|
||||
------------
|
||||
|
||||
To optimize throughput, a persistent actor internally batches events to be stored under high load before
|
||||
In order to optimize throughput, a persistent actor internally batches events to be stored under high load before
|
||||
writing them to the journal (as a single batch). The batch size dynamically grows from 1 under low and moderate loads
|
||||
to a configurable maximum size (default is ``200``) under high load. When using ``persistAsync`` this increases
|
||||
the maximum throughput dramatically.
|
||||
|
|
@ -330,22 +330,22 @@ writing the previous batch. Batch writes are never timer-based which keeps laten
|
|||
Message deletion
|
||||
----------------
|
||||
|
||||
It is possible to delete all messages (journaled by a single persistent actor) up to a specified sequence number,
|
||||
persistent actors may call the ``deleteMessages`` method.
|
||||
It is possible to delete all messages (journaled by a single persistent actor) up to a specified sequence number;
|
||||
Persistent actors may call the ``deleteMessages`` method to this end.
|
||||
|
||||
Deleting messages in event sourcing based applications is typically either not used at all, or used in conjunction with
|
||||
:ref:`snapshotting <snapshots>`, i.e. after a snapshot has been successfully stored, a ``deleteMessages(toSequenceNr)``
|
||||
up until the sequence number of the data held by that snapshot can be issued, to safely delete the previous events,
|
||||
up until the sequence number of the data held by that snapshot can be issued to safely delete the previous events
|
||||
while still having access to the accumulated state during replays - by loading the snapshot.
The result of the ``deleteMessages`` request is signaled to the persistent actor with a ``DeleteMessagesSuccess``
message if the delete was successful or a ``DeleteMessagesFailure`` message if it failed.
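A sketch of the snapshot-then-delete pattern, including handling of the status messages (the event and state types are illustrative)::

  import akka.persistence._

  class SnapshottingActor extends PersistentActor {
    override def persistenceId = "snapshotting-actor-1"

    var state: List[String] = Nil

    override def receiveCommand: Receive = {
      case cmd: String =>
        persist(cmd) { evt =>
          state = evt :: state
          if (state.size % 100 == 0) saveSnapshot(state) // snapshot every 100 events
        }
      case SaveSnapshotSuccess(metadata) =>
        // the snapshot covers everything up to metadata.sequenceNr,
        // so earlier events are no longer needed for recovery
        deleteMessages(metadata.sequenceNr)
      case SaveSnapshotFailure(_, cause)       => // e.g. log and retry later
      case DeleteMessagesSuccess(toSequenceNr) => // deletion completed
      case DeleteMessagesFailure(cause, toSeq) => // deletion failed, e.g. log it
    }

    override def receiveRecover: Receive = {
      case SnapshotOffer(_, snapshot: List[String] @unchecked) => state = snapshot
      case evt: String                                         => state = evt :: state
    }
  }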
Message deletion doesn't affect highest sequence number of journal, even if all messages were deleted from journal after ``deleteMessages`` invocation.
|
||||
Message deletion doesn't affect the highest sequence number of the journal, even if all messages were deleted from it after ``deleteMessages`` invocation.
|
||||
|
||||
Persistence status handling
|
||||
---------------------------
|
||||
Persisting, deleting and replaying messages can either succeed or fail.
|
||||
Persisting, deleting, and replaying messages can either succeed or fail.
|
||||
|
||||
+---------------------------------+-----------------------------+-------------------------------+-----------------------------------+
|
||||
| **Method** | **Success** | **Failure / Rejection** | **After failure handler invoked** |
|
||||
|
|
@ -373,7 +373,7 @@ implements an exponential-backoff strategy which allows for more breathing room
|
|||
restarts of the persistent actor.
|
||||
|
||||
.. note::
|
||||
Journal implementations may choose to implement a retry mechanisms, e.g. such that only after a write fails N number
|
||||
Journal implementations may choose to implement a retry mechanism, e.g. such that only after a write fails N number
|
||||
of times a persistence failure is signalled back to the user. In other words, once a journal returns a failure,
|
||||
it is considered *fatal* by Akka Persistence, and the persistent actor which caused the failure will be stopped.
|
||||
|
||||
|
|
@ -386,22 +386,22 @@ restarts of the persistent actor.
|
|||
Safely shutting down persistent actors
|
||||
--------------------------------------
|
||||
|
||||
Special care should be given when when shutting down persistent actors from the outside.
|
||||
Special care should be given when shutting down persistent actors from the outside.
|
||||
With normal Actors it is often acceptable to use the special :ref:`PoisonPill <poison-pill-scala>` message
|
||||
to signal to an Actor that it should stop itself once it receives this message – in fact this message is handled
|
||||
automatically by Akka, leaving the target actor no way to refuse stopping itself when given a poison pill.
|
||||
|
||||
This can be dangerous when used with :class:`PersistentActor` due to the fact that incoming commands are *stashed* while
|
||||
the persistent actor is awaiting confirmation from the Journal that events have been written when ``persist()`` was used.
|
||||
Since the incoming commands will be drained from the Actor's mailbox and put into it's internal stash while awaiting the
|
||||
Since the incoming commands will be drained from the Actor's mailbox and put into its internal stash while awaiting the
|
||||
confirmation (thus, before calling the persist handlers) the Actor **may receive and (auto)handle the PoisonPill
|
||||
before it processes the other messages which have been put into its stash**, causing a pre-mature shutdown of the Actor.
|
||||
|
||||
.. warning::
|
||||
Consider using explicit shut-down messages instead of :class:`PoisonPill` when working with persistent actors.
|
||||
|
||||
The example below highlights how messages arrive in the Actor's mailbox and how they interact with it's internal stashing
|
||||
mechanism when ``persist()`` is used, notice the early stop behaviour that occurs when ``PoisonPill`` is used:
|
||||
The example below highlights how messages arrive in the Actor's mailbox and how they interact with its internal stashing
|
||||
mechanism when ``persist()`` is used. Notice the early stop behaviour that occurs when ``PoisonPill`` is used:
|
|
|
@ -436,10 +436,9 @@ methods.
|
|||
|
||||
.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#view
|
||||
|
||||
The ``persistenceId`` identifies the persistent actor from which the view receives journaled messages. It is not necessary
|
||||
The ``persistenceId`` identifies the persistent actor from which the view receives journaled messages. It is not necessary that
|
||||
the referenced persistent actor is actually running. Views read messages from a persistent actor's journal directly. When a
|
||||
persistent actor is started later and begins to write new messages, the corresponding view is updated automatically, by
|
||||
default.
persistent actor is started later and begins to write new messages, by default the corresponding view is updated automatically.
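A sketch of a view attached to such a persistent actor (the identifiers are illustrative; the ``isPersistent`` method is explained below)::

  import akka.persistence.PersistentView

  class MyView extends PersistentView {
    override def persistenceId: String = "my-persistent-actor-1" // actor whose journal is read
    override def viewId: String = "my-persistent-actor-view-1"

    def receive: Receive = {
      case payload if isPersistent =>
        // event replayed or forwarded from the persistent actor's journal
      case other =>
        // regular message sent directly to this view
    }
  }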
It is possible to determine if a message was sent from the Journal or from another actor in user-land by calling the ``isPersistent``
|
||||
method. Having that said, very often you don't need this information at all and can simply apply the same logic to both cases
|
||||
|
|
@ -475,7 +474,7 @@ of replayed messages for manual updates can be limited with the ``replayMax`` pa
|
|||
Recovery
|
||||
--------
|
||||
|
||||
Initial recovery of persistent views works in the very same way as for a persistent actor (i.e. by sending a ``Recover`` message
|
||||
Initial recovery of persistent views works the very same way as for persistent actors (i.e. by sending a ``Recover`` message
|
||||
to self). The maximum number of replayed messages during initial recovery is determined by ``autoUpdateReplayMax``.
|
||||
Further possibilities to customize initial recovery are explained in section :ref:`recovery`.
|
||||
|
||||
|
|
@ -488,7 +487,7 @@ A persistent view must have an identifier that doesn't change across different a
|
|||
The identifier must be defined with the ``viewId`` method.
|
||||
|
||||
The ``viewId`` must differ from the referenced ``persistenceId``, unless :ref:`snapshots` of a view and its
|
||||
persistent actor shall be shared (which is what applications usually do not want).
|
||||
persistent actor should be shared (which is what applications usually do not want).
|
||||
|
||||
.. _snapshots:
|
||||
|
||||
|
|
@ -525,12 +524,12 @@ To disable snapshot-based recovery, applications should use ``SnapshotSelectionC
|
|||
saved snapshot matches the specified ``SnapshotSelectionCriteria`` will replay all journaled messages.
|
||||
|
||||
.. note::
|
||||
In order to use snapshots a default snapshot-store (``akka.persistence.snapshot-store.plugin``) must be configured,
|
||||
In order to use snapshots, a default snapshot-store (``akka.persistence.snapshot-store.plugin``) must be configured,
|
||||
or the ``PersistentActor`` can pick a snapshot store explicitly by overriding ``def snapshotPluginId: String``.
|
||||
|
||||
Since it is acceptable for some applications to not use any snapshotting, it is legal to not configure a snapshot store,
|
||||
however Akka will log a warning message when this situation is detected and then continue to operate until
|
||||
an actor tries to store a snapshot, at which point the the operation will fail (by replying with an ``SaveSnapshotFailure`` for example).
|
||||
Since it is acceptable for some applications to not use any snapshotting, it is legal to not configure a snapshot store.
|
||||
However, Akka will log a warning message when this situation is detected and then continue to operate until
|
||||
an actor tries to store a snapshot, at which point the operation will fail (by replying with an ``SaveSnapshotFailure`` for example).
|
||||
|
||||
Note that :ref:`cluster_sharding_scala` is using snapshots, so if you use Cluster Sharding you need to define a snapshot store plugin.
|
||||
|
||||
|
|
@ -570,17 +569,17 @@ To send messages with at-least-once delivery semantics to destinations you can m
|
|||
trait to your ``PersistentActor`` on the sending side. It takes care of re-sending messages when they
|
||||
have not been confirmed within a configurable timeout.
|
||||
|
||||
The state of the sending actor, including which messages that have been sent and still not been
|
||||
confirmed by the recepient, must be persistent so that it can survive a crash of the sending actor
|
||||
The state of the sending actor, including which messages have been sent that have not been
|
||||
confirmed by the recipient must be persistent so that it can survive a crash of the sending actor
|
||||
or JVM. The ``AtLeastOnceDelivery`` trait does not persist anything by itself. It is your
|
||||
responsibility to persist the intent that a message is sent and that a confirmation has been
|
||||
received.
|
||||
|
||||
.. note::
|
||||
|
||||
At-least-once delivery implies that original message send order is not always preserved
|
||||
and the destination may receive duplicate messages. That means that the
|
||||
semantics do not match those of a normal :class:`ActorRef` send operation:
|
||||
At-least-once delivery implies that original message sending order is not always preserved,
|
||||
and the destination may receive duplicate messages.
|
||||
Semantics do not match those of a normal :class:`ActorRef` send operation:
|
||||
|
||||
* it is not at-most-once delivery
|
||||
|
||||
|
|
@ -588,9 +587,9 @@ received.
|
|||
possible resends
|
||||
|
||||
* after a crash and restart of the destination messages are still
|
||||
delivered—to the new actor incarnation
|
||||
delivered to the new actor incarnation
|
||||
|
||||
These semantics is similar to what an :class:`ActorPath` represents (see
|
||||
These semantics are similar to what an :class:`ActorPath` represents (see
|
||||
:ref:`actor-lifecycle-scala`), therefore you need to supply a path and not a
|
||||
reference when delivering messages. The messages are sent to the path with
|
||||
an actor selection.
|
||||
|
|
@ -613,10 +612,10 @@ the destination actor. When recovering, messages will be buffered until they hav
|
|||
Once recovery has completed, if there are outstanding messages that have not been confirmed (during the message replay),
|
||||
the persistent actor will resend these before sending any other messages.
|
||||
|
||||
Deliver requires a ``deliveryIdToMessage`` function to pass the provided ``deliveryId`` into the message so that correlation
|
||||
Deliver requires a ``deliveryIdToMessage`` function to pass the provided ``deliveryId`` into the message so that the correlation
|
||||
between ``deliver`` and ``confirmDelivery`` is possible. The ``deliveryId`` must do the round trip. Upon receipt
|
||||
of the message, destination actor will send the same``deliveryId`` wrapped in a confirmation message back to the sender.
|
||||
The sender will then use it to call ``confirmDelivery`` method to complete delivery routine.
|
||||
of the message, the destination actor will send the same ``deliveryId`` wrapped in a confirmation message back to the sender.
The sender will then use it to call ``confirmDelivery`` method to complete the delivery routine.
.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#at-least-once-example
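For reference, a compact sketch of the full round trip (all names are illustrative)::

  import akka.actor.{ Actor, ActorPath }
  import akka.persistence.{ AtLeastOnceDelivery, PersistentActor }

  case class Msg(deliveryId: Long, payload: String)
  case class Confirm(deliveryId: Long)

  sealed trait Evt
  case class MsgSent(payload: String) extends Evt
  case class MsgConfirmed(deliveryId: Long) extends Evt

  class MySender(destination: ActorPath) extends PersistentActor with AtLeastOnceDelivery {
    override def persistenceId: String = "my-sender"

    override def receiveCommand: Receive = {
      case payload: String     => persist(MsgSent(payload))(updateState)
      case Confirm(deliveryId) => persist(MsgConfirmed(deliveryId))(updateState)
    }

    override def receiveRecover: Receive = {
      case evt: Evt => updateState(evt)
    }

    def updateState(evt: Evt): Unit = evt match {
      case MsgSent(payload)         => deliver(destination)(deliveryId => Msg(deliveryId, payload))
      case MsgConfirmed(deliveryId) => confirmDelivery(deliveryId)
    }
  }

  class MyDestination extends Actor {
    def receive = {
      case Msg(deliveryId, payload) =>
        // ... handle payload ...
        sender() ! Confirm(deliveryId) // return the deliveryId to complete the round trip
    }
  }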
@ -634,8 +633,8 @@ sequence number. It does not store this state itself. You must persist events co
|
|||
``deliver`` and ``confirmDelivery`` invocations from your ``PersistentActor`` so that the state can
|
||||
be restored by calling the same methods during the recovery phase of the ``PersistentActor``. Sometimes
|
||||
these events can be derived from other business level events, and sometimes you must create separate events.
|
||||
During recovery calls to ``deliver`` will not send out the message, but it will be sent later
|
||||
if no matching ``confirmDelivery`` was performed.
|
||||
During recovery, calls to ``deliver`` will not send out messages, those will be sent later
|
||||
if no matching ``confirmDelivery`` will have been performed.
|
||||
|
||||
Support for snapshots is provided by ``getDeliverySnapshot`` and ``setDeliverySnapshot``.
|
||||
The ``AtLeastOnceDeliverySnapshot`` contains the full delivery state, including unconfirmed messages.
|
||||
|
|
@ -663,7 +662,7 @@ configured with the ``akka.persistence.at-least-once-delivery.warn-after-number-
|
|||
configuration key. The method can be overridden by implementation classes to return non-default values.
|
||||
|
||||
The ``AtLeastOnceDelivery`` trait holds messages in memory until their successful delivery has been confirmed.
|
||||
The limit of maximum number of unconfirmed messages that the actor is allowed to hold in memory
|
||||
The maximum number of unconfirmed messages that the actor is allowed to hold in memory
|
||||
is defined by the ``maxUnconfirmedMessages`` method. If this limit is exceeded the ``deliver`` method will
|
||||
not accept more messages and it will throw ``AtLeastOnceDelivery.MaxUnconfirmedMessagesExceededException``.
|
||||
The default value can be configured with the ``akka.persistence.at-least-once-delivery.max-unconfirmed-messages``
|
||||
|
|
@ -703,7 +702,7 @@ Then in order for it to be used on events coming to and from the journal you mus
|
|||
It is possible to bind multiple adapters to one class *for recovery*, in which case the ``fromJournal`` methods of all
|
||||
bound adapters will be applied to a given matching event (in order of definition in the configuration). Since each adapter may
|
||||
return from ``0`` to ``n`` adapted events (called as ``EventSeq``), each adapter can investigate the event and if it should
|
||||
indeed adapt it return the adapted event(s) for it, other adapters which do not have anything to contribute during this
|
||||
indeed adapt it return the adapted event(s) for it. Other adapters which do not have anything to contribute during this
adaptation simply return ``EventSeq.empty``. The adapted events are then delivered in-order to the ``PersistentActor`` during replay.
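A sketch of one such adapter; ``MyEvent`` is an illustrative domain event, and events this adapter does not recognise are answered with ``EventSeq.empty`` so that the other bound adapters can contribute::

  import akka.persistence.journal.{ EventAdapter, EventSeq }

  case class MyEvent(data: String) // illustrative domain event

  class MyEventAdapter extends EventAdapter {
    override def manifest(event: Any): String = "" // no manifest needed in this sketch

    override def toJournal(event: Any): Any = event // write side: pass events through unchanged

    override def fromJournal(event: Any, manifest: String): EventSeq = event match {
      case e: MyEvent => EventSeq.single(e) // this adapter is responsible for MyEvent
      case _          => EventSeq.empty     // leave everything else to the other adapters
    }
  }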
.. note::
|
||||
|
|
@ -742,7 +741,7 @@ The customer can be in one of the following states:
|
|||
|
||||
``LookingAround`` customer is browsing the site, but hasn't added anything to the shopping cart
|
||||
``Shopping`` customer has recently added items to the shopping cart
|
||||
``Inactive`` customer has items in the shopping cart, but hasn't added anything recently,
|
||||
``Inactive`` customer has items in the shopping cart, but hasn't added anything recently
|
||||
``Paid`` customer has purchased the items
|
||||
|
||||
.. note::
|
||||
|
|
@ -751,12 +750,12 @@ The customer can be in one of the following states:
|
|||
``def identifier: String`` method. This is required in order to simplify the serialization of FSM states.
|
||||
String identifiers should be unique!
|
||||
|
||||
Customer's actions are "recorded" as a sequence of "domain events", which are persisted. Those events are replayed on actor's
|
||||
Customer's actions are "recorded" as a sequence of "domain events" which are persisted. Those events are replayed on an actor's
|
||||
start in order to restore the latest customer's state:
|
||||
|
||||
.. includecode:: ../../../akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala#customer-domain-events
|
||||
|
||||
Customer state data represents the items in customer's shopping cart:
|
||||
Customer state data represents the items in a customer's shopping cart:
|
||||
|
||||
.. includecode:: ../../../akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala#customer-states-data
|
||||
|
||||
|
|
@ -778,22 +777,22 @@ Storage plugins
Storage backends for journals and snapshot stores are pluggable in the Akka persistence extension.

Directory of persistence journal and snapshot store plugins is available at the Akka Community Projects page, see `Community plugins`_
A directory of persistence journal and snapshot store plugins is available at the Akka Community Projects page, see `Community plugins`_

Plugins can be selected either by "default", for all persistent actors and views,
or "individually", when persistent actor or view defines it's own set of plugins.
Plugins can be selected either by "default" for all persistent actors and views,
or "individually", when a persistent actor or view defines its own set of plugins.

When persistent actor or view does NOT override ``journalPluginId`` and ``snapshotPluginId`` methods,
persistence extension will use "default" journal and snapshot-store plugins configured in the ``reference.conf``::
When a persistent actor or view does NOT override the ``journalPluginId`` and ``snapshotPluginId`` methods,
the persistence extension will use the "default" journal and snapshot-store plugins configured in ``reference.conf``::

    akka.persistence.journal.plugin = ""
    akka.persistence.snapshot-store.plugin = ""

However, these entries are provided as empty "", and require explicit user configuration via override in the user ``application.conf``.
For an example of journal plugin which writes messages to LevelDB see :ref:`local-leveldb-journal`.
For an example of snapshot store plugin which writes snapshots as individual files to the local filesystem see :ref:`local-snapshot-store`.
For an example of a journal plugin which writes messages to LevelDB see :ref:`local-leveldb-journal`.
For an example of a snapshot store plugin which writes snapshots as individual files to the local filesystem see :ref:`local-snapshot-store`.

Applications can provide their own plugins by implementing a plugin API and activate them by configuration.
Applications can provide their own plugins by implementing a plugin API and activating them by configuration.
Plugin development requires the following imports:

.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#plugin-imports
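
As a concrete illustration of overriding those empty defaults, a minimal sketch selecting the pre-packaged plugins described below; this would normally live in ``application.conf``::

  import akka.actor.ActorSystem
  import com.typesafe.config.ConfigFactory

  // Override the empty defaults with the pre-packaged LevelDB journal and
  // local snapshot store.
  val config = ConfigFactory.parseString("""
    akka.persistence.journal.plugin = "akka.persistence.journal.leveldb"
    akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot-store.local"
    """)

  val system = ActorSystem("example", config.withFallback(ConfigFactory.load()))
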
@ -828,7 +827,7 @@ The journal plugin instance is an actor so the methods corresponding to requests
are executed sequentially. It may delegate to asynchronous libraries, spawn futures, or delegate to other
actors to achieve parallelism.

The journal plugin class must have a constructor without parameters or constructor with one ``com.typesafe.config.Config``
The journal plugin class must have a constructor without parameters or a constructor with one ``com.typesafe.config.Config``
parameter. The plugin section of the actor system's config will be passed in the config constructor parameter.

Don't run journal tasks/futures on the system default dispatcher, since that might starve other tasks.
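
A minimal sketch of how such a plugin is typically wired up in configuration; the ``docs.MyJournal`` class name is hypothetical, and the dedicated dispatcher addresses the warning above::

  import com.typesafe.config.ConfigFactory

  // Hypothetical journal plugin entry: "docs.MyJournal" is assumed to take a
  // single com.typesafe.config.Config constructor parameter, which receives
  // exactly this "my-journal" section.
  val pluginConfig = ConfigFactory.parseString("""
    my-journal {
      class = "docs.MyJournal"
      # keep plugin work off the system default dispatcher
      plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
    }
    akka.persistence.journal.plugin = "my-journal"
    """)
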
@ -851,16 +850,16 @@ The snapshot store instance is an actor so the methods corresponding to requests
are executed sequentially. It may delegate to asynchronous libraries, spawn futures, or delegate to other
actors to achieve parallelism.

The snapshot store plugin class must have a constructor without parameters or constructor with one ``com.typesafe.config.Config``
The snapshot store plugin class must have a constructor without parameters or a constructor with one ``com.typesafe.config.Config``
parameter. The plugin section of the actor system's config will be passed in the config constructor parameter.

Don't run snapshot store tasks/futures on the system default dispatcher, since that might starve other tasks.

Plugin TCK
----------

In order to help developers build correct and high quality storage plugins, we provide an Technology Compatibility Kit (`TCK <http://en.wikipedia.org/wiki/Technology_Compatibility_Kit>`_ for short).
In order to help developers build correct and high quality storage plugins, we provide a Technology Compatibility Kit (`TCK <http://en.wikipedia.org/wiki/Technology_Compatibility_Kit>`_ for short).

The TCK is usable from Java as well as Scala projects, for Scala you need to include the akka-persistence-tck dependency::
The TCK is usable from Java as well as Scala projects. For Scala you need to include the akka-persistence-tck dependency::

    "com.typesafe.akka" %% "akka-persistence-tck" % "@version@" % "test"

@ -869,8 +868,8 @@ To include the Journal TCK tests in your test suite simply extend the provided `
.. includecode:: ./code/docs/persistence/PersistencePluginDocSpec.scala#journal-tck-scala

We also provide a simple benchmarking class ``JournalPerfSpec`` which includes all the tests that ``JournalSpec``
has, and also performs some longer operations on the Journal while printing it's performance stats. While it is NOT aimed
to provide a proper benchmarking environment it can be used to get a rough feel about your journals performance in the most
has, and also performs some longer operations on the Journal while printing its performance stats. While it is NOT aimed
to provide a proper benchmarking environment it can be used to get a rough feel about your journal's performance in the most
typical scenarios.

In order to include the ``SnapshotStore`` TCK tests in your test suite simply extend the ``SnapshotStoreSpec``:
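
As a rough illustration only: how the configuration is supplied (constructor parameter versus overriding a ``config`` member) and whether additional capability-flag methods must be implemented depends on the Akka version, so the ``journal-tck-scala`` snippet referenced above remains authoritative::

  import akka.persistence.journal.JournalSpec
  import com.typesafe.config.ConfigFactory

  // Hypothetical TCK suite for a journal registered under "my.journal.plugin".
  class MyJournalSpec extends JournalSpec(
    config = ConfigFactory.parseString(
      """akka.persistence.journal.plugin = "my.journal.plugin""""))
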
@ -895,7 +894,7 @@ Pre-packaged plugins
Local LevelDB journal
---------------------

LevelDB journal plugin config entry is ``akka.persistence.journal.leveldb`` and it writes messages to a local LevelDB
The LevelDB journal plugin config entry is ``akka.persistence.journal.leveldb``. It writes messages to a local LevelDB
instance. Enable this plugin by defining config property:

.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#leveldb-plugin-config

@ -905,7 +904,7 @@ LevelDB based plugins will also require the following additional dependency decl
    "org.iq80.leveldb" % "leveldb" % "0.7"
    "org.fusesource.leveldbjni" % "leveldbjni-all" % "1.8"

The default location of the LevelDB files is a directory named ``journal`` in the current working
The default location of LevelDB files is a directory named ``journal`` in the current working
directory. This location can be changed by configuration where the specified path can be relative or absolute:

.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#journal-config
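
For illustration, enabling the plugin and relocating its files might look like this; the path is an arbitrary example and may be relative or absolute::

  import com.typesafe.config.ConfigFactory

  // Enable the LevelDB journal and point it at a custom directory.
  val journalConfig = ConfigFactory.parseString("""
    akka.persistence.journal.plugin = "akka.persistence.journal.leveldb"
    akka.persistence.journal.leveldb.dir = "target/example/journal"
    """)
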
@ -925,7 +924,7 @@ backup node.
.. warning::

  A shared LevelDB instance is a single point of failure and should therefore only be used for testing
  purposes. Highly-available, replicated journal are available as `Community plugins`_.
  purposes. Highly-available, replicated journals are available as `Community plugins`_.

A shared LevelDB instance is started by instantiating the ``SharedLeveldbStore`` actor.
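
A minimal sketch of starting and injecting the shared store; the system and actor names are illustrative::

  import akka.actor.{ ActorSystem, Props }
  import akka.persistence.journal.leveldb.{ SharedLeveldbJournal, SharedLeveldbStore }

  val system = ActorSystem("example")

  // Start the single shared store (test setups only, see the warning above) ...
  val store = system.actorOf(Props[SharedLeveldbStore], "store")

  // ... and inject it into the shared LevelDB journal of this actor system.
  SharedLeveldbJournal.setStore(store, system)
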
@ -954,7 +953,7 @@ i.e. only the first injection is used.
Local snapshot store
--------------------

Local snapshot store plugin config entry is ``akka.persistence.snapshot-store.local`` and it writes snapshot files to
The local snapshot store plugin config entry is ``akka.persistence.snapshot-store.local``. It writes snapshot files to
the local filesystem. Enable this plugin by defining config property:

.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#leveldb-snapshot-plugin-config
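
Analogous to the journal example above, a minimal sketch of enabling the local snapshot store and changing its target directory; the path is an arbitrary example::

  import com.typesafe.config.ConfigFactory

  // Enable the local snapshot store and choose where snapshot files are written.
  val snapshotConfig = ConfigFactory.parseString("""
    akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot-store.local"
    akka.persistence.snapshot-store.local.dir = "target/example/snapshots"
    """)
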
@ -989,8 +988,7 @@ For more advanced schema evolution techniques refer to the :ref:`persistence-sch
Testing
=======

When running tests with LevelDB default settings in ``sbt``, make sure to set ``fork := true`` in your sbt project
otherwise, you'll see an ``UnsatisfiedLinkError``. Alternatively, you can switch to a LevelDB Java port by setting
When running tests with LevelDB default settings in ``sbt``, make sure to set ``fork := true`` in your sbt project. Otherwise, you'll see an ``UnsatisfiedLinkError``. Alternatively, you can switch to a LevelDB Java port by setting

.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#native-config
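
For illustration, the two alternatives look roughly as follows. First the sbt setting::

  // build.sbt: fork the test JVM so the native LevelDB library can be loaded
  fork := true

and, alternatively, the switch to the LevelDB Java port (mirroring the ``native-config`` snippet referenced above)::

  import com.typesafe.config.ConfigFactory

  // Use the pure-Java LevelDB port instead of the native library.
  val javaPortConfig = ConfigFactory.parseString(
    "akka.persistence.journal.leveldb.native = off")
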
@ -1017,21 +1015,21 @@ to the :ref:`reference configuration <config-akka-persistence>`.
Multiple persistence plugin configurations
==========================================

By default, persistent actor or view will use "default" journal and snapshot store plugins
By default, a persistent actor or view will use the "default" journal and snapshot store plugins
configured in the following sections of the ``reference.conf`` configuration resource:

.. includecode:: code/docs/persistence/PersistenceMultiDocSpec.scala#default-config

Note that in this case actor or view overrides only ``persistenceId`` method:
Note that in this case the actor or view overrides only the ``persistenceId`` method:

.. includecode:: code/docs/persistence/PersistenceMultiDocSpec.scala#default-plugins

When persistent actor or view overrides ``journalPluginId`` and ``snapshotPluginId`` methods,
When the persistent actor or view overrides the ``journalPluginId`` and ``snapshotPluginId`` methods,
the actor or view will be serviced by these specific persistence plugins instead of the defaults:

.. includecode:: code/docs/persistence/PersistenceMultiDocSpec.scala#override-plugins

Note that ``journalPluginId`` and ``snapshotPluginId`` must refer to properly configured ``reference.conf``
plugin entries with standard ``class`` property as well as settings which are specific for those plugins, i.e.:
plugin entries with a standard ``class`` property as well as settings which are specific for those plugins, i.e.:

.. includecode:: code/docs/persistence/PersistenceMultiDocSpec.scala#override-config
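
A minimal sketch of such an actor-level override; the plugin ids are hypothetical configuration paths that must match entries configured as described above::

  import akka.persistence.PersistentActor

  class AccountActor extends PersistentActor {
    override def persistenceId = "account-1"

    // Use dedicated plugins instead of the defaults; both ids are hypothetical
    // configuration paths pointing at properly configured plugin entries.
    override def journalPluginId = "account.journal-plugin"
    override def snapshotPluginId = "account.snapshot-store-plugin"

    override def receiveCommand: Receive = {
      case cmd: String => persist(cmd) { evt => sender() ! evt }
    }

    override def receiveRecover: Receive = {
      case _ => // rebuild state from replayed events
    }
  }
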
@ -279,7 +279,7 @@ configuration.
.. includecode:: code/docs/routing/RouterDocSpec.scala#config-balancing-pool2

The ``BalancingPool`` automatically uses a special ``BalancingDispatcher`` for its
routees - disregarding any dispatcher that is set on the the routee Props object.
routees - disregarding any dispatcher that is set on the routee Props object.
This is needed in order to implement the balancing semantics via
sharing the same mailbox by all the routees.
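
For illustration, a minimal sketch of creating such a pool in code; the ``Worker`` routee is hypothetical::

  import akka.actor.{ Actor, ActorSystem, Props }
  import akka.routing.BalancingPool

  // Hypothetical routee; all five instances below pull work from one shared mailbox.
  class Worker extends Actor {
    def receive = {
      case job => sender() ! s"done: $job"
    }
  }

  val system = ActorSystem("example")
  val router = system.actorOf(BalancingPool(5).props(Props[Worker]), "balancing-pool-router")
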
@ -387,7 +387,7 @@ TailChoppingPool and TailChoppingGroup
--------------------------------------

The TailChoppingRouter will first send the message to one, randomly picked, routee
and then after a small delay to to a second routee (picked randomly from the remaining routees) and so on.
and then after a small delay to a second routee (picked randomly from the remaining routees) and so on.
It waits for the first reply it gets back and forwards it back to the original sender. Other replies are discarded.

The goal of this router is to decrease latency by performing redundant queries to multiple routees, assuming that
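
A minimal sketch of creating this router programmatically, reusing the hypothetical ``Worker`` routee from the ``BalancingPool`` sketch above; the pool size and timings are illustrative, and named arguments are used because the remaining parameters have defaults::

  import scala.concurrent.duration._
  import akka.actor.{ ActorSystem, Props }
  import akka.routing.TailChoppingPool

  val system = ActorSystem("example")

  // Query one routee first, another every 20 millis, give up after 10 seconds.
  val router = system.actorOf(
    TailChoppingPool(nrOfInstances = 5, within = 10.seconds, interval = 20.millis)
      .props(Props[Worker]),
    "tail-chopping-router")
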
@ -435,7 +435,7 @@ There is 3 ways to define what data to use for the consistent hash key.
  The key is part of the message and it's convenient to define it together
  with the message definition.

* The messages can be be wrapped in a ``akka.routing.ConsistentHashingRouter.ConsistentHashableEnvelope``
* The messages can be wrapped in a ``akka.routing.ConsistentHashingRouter.ConsistentHashableEnvelope``
  to define what data to use for the consistent hash key. The sender knows
  the key to use.
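
A minimal sketch of the envelope approach from the last bullet, again reusing the hypothetical ``Worker`` routee; the ``Entry`` message type is made up for the example::

  import akka.actor.{ ActorSystem, Props }
  import akka.routing.ConsistentHashingPool
  import akka.routing.ConsistentHashingRouter.ConsistentHashableEnvelope

  final case class Entry(key: String, value: String)

  val system = ActorSystem("example")

  // Messages with equal hash keys are always routed to the same routee.
  val router = system.actorOf(ConsistentHashingPool(10).props(Props[Worker]), "hashing-router")

  // The sender picks the key by wrapping the message in an envelope.
  router ! ConsistentHashableEnvelope(message = Entry("hello", "HELLO"), hashKey = "hello")
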
@ -506,7 +506,7 @@ to every routee of a router.
In this example the router receives the ``Broadcast`` message, extracts its payload
(``"Watch out for Davy Jones' locker"``), and then sends the payload on to all of the router's
routees. It is up to each each routee actor to handle the received payload message.
routees. It is up to each routee actor to handle the received payload message.

PoisonPill Messages
-------------------
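
For illustration, sending such a broadcast looks roughly like this, where ``router`` is any pool or group router such as the ones sketched above::

  import akka.routing.Broadcast

  // The router unwraps the Broadcast envelope and sends the payload to every routee.
  router ! Broadcast("Watch out for Davy Jones' locker")
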