Normalize Event Sourcing wording #29577 (#29856)

Josep Prat 2020-12-07 08:41:43 +01:00 committed by GitHub
parent 8b71fac817
commit 12513ec7df
13 changed files with 45 additions and 46 deletions


@@ -31,7 +31,7 @@ Scala
Java
: @@snip [ClusterShardingTest.java](/akka-docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-actor }
-The above actor uses event sourcing and the support provided in @scala[`PersistentActor`] @java[`AbstractPersistentActor`] to store its state.
+The above actor uses Event Sourcing and the support provided in @scala[`PersistentActor`] @java[`AbstractPersistentActor`] to store its state.
It does not have to be a persistent actor, but in case of failure or migration of entities between nodes it must be able to recover
its state if it is valuable.


@@ -23,7 +23,7 @@ remote transport will place a limit on the message size.
Writing your actors such that every interaction could possibly be remote is the
safe, pessimistic bet. It means to only rely on those properties which are
-always guaranteed and which are discussed in detail below. This has
+always guaranteed and which are discussed in detail below. This has
some overhead in the actor's implementation. If you are willing to sacrifice full
location transparency—for example in case of a group of closely collaborating
actors—you can place them always on the same JVM and enjoy stricter guarantees
@@ -38,8 +38,8 @@ role of the “Dead Letter Office”.
These are the rules for message sends (i.e. the `tell` or `!` method, which
also underlies the `ask` pattern):
-* **at-most-once delivery**, i.e. no guaranteed delivery
-* **message ordering per sender–receiver pair**
+* **at-most-once delivery**, i.e. no guaranteed delivery
+* **message ordering per sender–receiver pair**
The first rule is typically found also in other actor implementations while the
second is specific to Akka.
@@ -49,14 +49,14 @@ second is specific to Akka.
When it comes to describing the semantics of a delivery mechanism, there are
three basic categories:
-* **at-most-once** delivery means that for each message handed to the
+* **at-most-once** delivery means that for each message handed to the
mechanism, that message is delivered once or not at all; in more casual terms
it means that messages may be lost.
-* **at-least-once** delivery means that for each message handed to the
+* **at-least-once** delivery means that for each message handed to the
mechanism potentially multiple attempts are made at delivering it, such that
at least one succeeds; again, in more casual terms this means that messages
may be duplicated but not lost.
-* **exactly-once** delivery means that for each message handed to the mechanism
+* **exactly-once** delivery means that for each message handed to the mechanism
exactly one delivery is made to the recipient; the message can neither be
lost nor duplicated.
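
To make these semantics concrete, here is a minimal sketch in plain Scala (all names invented for illustration) of how a receiver can turn at-least-once delivery into effectively-once processing by discarding duplicates:

```scala
// Hypothetical sketch: a receiver that de-duplicates messages by id,
// so that at-least-once *delivery* results in at-most-once *processing*
// of the business logic per message id.
final case class Envelope(id: Long, payload: String)

final class DeduplicatingReceiver {
  private var processed = Set.empty[Long] // ids already handled

  def receive(msg: Envelope): Unit =
    if (!processed.contains(msg.id)) {
      processed += msg.id
      handle(msg.payload) // business logic runs once per id
    } // a redelivered duplicate falls through and is ignored

  private def handle(payload: String): Unit =
    println(s"processing: $payload")
}
```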
@@ -121,7 +121,7 @@ The guarantee is illustrated in the following:
> Actor `A1` sends messages `M1`, `M2`, `M3` to `A2`
> Actor `A3` sends messages `M4`, `M5`, `M6` to `A2`
This means that:
1. If `M1` is delivered it must be delivered before `M2` and `M3`
2. If `M2` is delivered it must be delivered before `M3`
3. If `M4` is delivered it must be delivered before `M5` and `M6`
@@ -129,7 +129,6 @@ This means that:
5. `A2` can see messages from `A1` interleaved with messages from `A3`
6. Since there is no guaranteed delivery, any of the messages may be dropped, i.e. not arrive at `A2`
@@@ note
It is important to note that Akka's guarantee applies to the order in which
@@ -202,14 +201,14 @@ actually do apply the best effort to keep our tests stable. A local `tell`
operation can however fail for the same reasons as a normal method call can on
the JVM:
-* `StackOverflowError`
-* `OutOfMemoryError`
-* other `VirtualMachineError`
+* `StackOverflowError`
+* `OutOfMemoryError`
+* other `VirtualMachineError`
In addition, local sends can fail in Akka-specific ways:
-* if the mailbox does not accept the message (e.g. full BoundedMailbox)
-* if the receiving actor fails while processing the message or is already
+* if the mailbox does not accept the message (e.g. full BoundedMailbox)
+* if the receiving actor fails while processing the message or is already
terminated
While the first is a matter of configuration the second deserves some
@@ -226,16 +225,16 @@ will note, these are quite subtle as it stands, and it is even possible that
future performance optimizations will invalidate this whole paragraph. The
possibly non-exhaustive list of counter-indications is:
-* Before receiving the first reply from a top-level actor, there is a lock
+* Before receiving the first reply from a top-level actor, there is a lock
which protects an internal interim queue, and this lock is not fair; the
implication is that enqueue requests from different senders which arrive
during the actor's construction (figuratively, the details are more involved)
may be reordered depending on low-level thread scheduling. Since completely
fair locks do not exist on the JVM this is unfixable.
-* The same mechanism is used during the construction of a Router, more
+* The same mechanism is used during the construction of a Router, more
precisely the routed ActorRef, hence the same problem exists for actors
deployed with Routers.
-* As mentioned above, the problem occurs anywhere a lock is involved during
+* As mentioned above, the problem occurs anywhere a lock is involved during
enqueueing, which may also apply to custom mailboxes.
This list has been compiled carefully, but other problematic scenarios may have
@@ -243,7 +242,7 @@ escaped our analysis.
### How does Local Ordering relate to Network Ordering
-The rule that *for a given pair of actors, messages sent directly from the first
+The rule that *for a given pair of actors, messages sent directly from the first
to the second will not be received out-of-order* holds for messages sent over the
network with the TCP based Akka remote transport protocol.
@@ -272,23 +271,23 @@ powerful, higher-level abstractions on top of it.
As discussed above a straight-forward answer to the requirement of reliable
delivery is an explicit ACK–RETRY protocol. In its simplest form this requires
-* a way to identify individual messages to correlate message with
+* a way to identify individual messages to correlate message with
acknowledgement
-* a retry mechanism which will resend messages if not acknowledged in time
-* a way for the receiver to detect and discard duplicates
+* a retry mechanism which will resend messages if not acknowledged in time
+* a way for the receiver to detect and discard duplicates
The third becomes necessary by virtue of the acknowledgements not being guaranteed
-to arrive either.
+to arrive either.
An ACK-RETRY protocol with business-level acknowledgements and de-duplication using identifiers is
-supported by the @ref:[Reliable Delivery](../typed/reliable-delivery.md) feature.
+supported by the @ref:[Reliable Delivery](../typed/reliable-delivery.md) feature.
Another way of implementing the third part would be to make processing the messages
idempotent on the level of the business logic.
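
For illustration, here is a sketch of such an ACK-RETRY protocol built on the classic `AtLeastOnceDelivery` trait of Akka Persistence (the message, event, and actor names are invented for the example; the typed Reliable Delivery feature linked above is the newer alternative):

```scala
import akka.actor.ActorPath
import akka.persistence.{AtLeastOnceDelivery, PersistentActor}

final case class Msg(deliveryId: Long, payload: String)
final case class Confirm(deliveryId: Long)

sealed trait Evt
final case class MsgSent(payload: String) extends Evt
final case class MsgConfirmed(deliveryId: Long) extends Evt

// Persists its intent to send, then redelivers Msg until the
// destination replies with Confirm and the delivery is confirmed.
class ReliableSender(destination: ActorPath)
    extends PersistentActor with AtLeastOnceDelivery {

  override def persistenceId: String = "reliable-sender-1"

  override def receiveCommand: Receive = {
    case payload: String     => persist(MsgSent(payload))(updateState)
    case Confirm(deliveryId) => persist(MsgConfirmed(deliveryId))(updateState)
  }

  override def receiveRecover: Receive = {
    case evt: Evt => updateState(evt) // resurrects unconfirmed deliveries
  }

  private def updateState(evt: Evt): Unit = evt match {
    case MsgSent(payload)         => deliver(destination)(id => Msg(id, payload))
    case MsgConfirmed(deliveryId) => confirmDelivery(deliveryId)
  }
}
```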
### Event Sourcing
-Event sourcing (and sharding) is what makes large websites scale to
+Event Sourcing (and sharding) is what makes large websites scale to
billions of users, and the idea is quite simple: when a component (think actor)
processes a command it will generate a list of events representing the effect
of the command. These events are stored in addition to being applied to the
@@ -299,7 +298,7 @@ components may consume the event stream as a means to replicate the component's
state on a different continent or to react to changes). If the component's
state is lost—due to a machine failure or by being pushed out of a cache—it can
be reconstructed by replaying the event stream (usually employing
-snapshots to speed up the process). @ref:[Event sourcing](../typed/persistence.md#event-sourcing-concepts) is supported by
+snapshots to speed up the process). @ref:[Event Sourcing](../typed/persistence.md#event-sourcing-concepts) is supported by
Akka Persistence.
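
As a minimal sketch of the idea with the typed `EventSourcedBehavior` API (command, event, and state types invented for the example): commands produce persisted events, and the event handler is the only place state changes, both during normal operation and during replay.

```scala
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

object Counter {
  sealed trait Command
  case object Increment extends Command

  sealed trait Event
  case object Incremented extends Event

  final case class State(value: Int)

  def apply(entityId: String): EventSourcedBehavior[Command, Event, State] =
    EventSourcedBehavior[Command, Event, State](
      persistenceId = PersistenceId.ofUniqueId(entityId),
      emptyState = State(0),
      // commands are turned into events...
      commandHandler = (_, cmd) =>
        cmd match {
          case Increment => Effect.persist(Incremented)
        },
      // ...and events deterministically update the state,
      // both live and when replaying the journal
      eventHandler = (state, evt) =>
        evt match {
          case Incremented => State(state.value + 1)
        })
}
```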
### Mailbox with Explicit Acknowledgement
@@ -335,7 +334,7 @@ sender's code more than is gained in debug output clarity.
The dead letter service follows the same rules with respect to delivery
guarantees as all other message sends, hence it cannot be used to implement
-guaranteed delivery.
+guaranteed delivery.
### How do I Receive Dead Letters?


@@ -197,7 +197,7 @@ Java
## Performance and denormalization
-When building systems using @ref:[Event sourcing](typed/persistence.md#event-sourcing-concepts) and CQRS ([Command & Query Responsibility Segregation](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/jj554200%28v=pandp.10%29)) techniques
+When building systems using @ref:[Event Sourcing](typed/persistence.md#event-sourcing-concepts) and CQRS ([Command & Query Responsibility Segregation](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/jj554200%28v=pandp.10%29)) techniques
it is tremendously important to realise that the write-side has completely different needs from the read-side,
and separating those concerns into datastores that are optimised for either side makes it possible to offer the best
experience for the write and read sides independently.


@@ -1,5 +1,5 @@
---
-project.description: Akka Persistence Classic, event sourcing with Akka, At-Least-Once delivery, snapshots, recovery and replay with Akka actors.
+project.description: Akka Persistence Classic, Event Sourcing with Akka, At-Least-Once delivery, snapshots, recovery and replay with Akka actors.
---
# Classic Persistence
@@ -44,12 +44,12 @@ Replicated journals are available as [Community plugins](https://akka.io/community/)
* *Snapshot store*: A snapshot store persists snapshots of a persistent actor's state. Snapshots are
used for optimizing recovery times. The storage backend of a snapshot store is pluggable.
The persistence extension comes with a "local" snapshot storage plugin, which writes to the local filesystem. Replicated snapshot stores are available as [Community plugins](https://akka.io/community/)
-* *Event sourcing*. Based on the building blocks described above, Akka persistence provides abstractions for the
-development of event sourced applications (see section @ref:[Event sourcing](typed/persistence.md#event-sourcing-concepts)).
+* *Event Sourcing*. Based on the building blocks described above, Akka persistence provides abstractions for the
+development of event sourced applications (see section @ref:[Event Sourcing](typed/persistence.md#event-sourcing-concepts)).
## Example
-Akka persistence supports event sourcing with the @scala[`PersistentActor` trait]@java[`AbstractPersistentActor` abstract class]. An actor that extends this @scala[trait]@java[class] uses the
+Akka persistence supports Event Sourcing with the @scala[`PersistentActor` trait]@java[`AbstractPersistentActor` abstract class]. An actor that extends this @scala[trait]@java[class] uses the
`persist` method to persist and handle events. The behavior of @scala[a `PersistentActor`]@java[an `AbstractPersistentActor`]
is defined by implementing @scala[`receiveRecover`]@java[`createReceiveRecover`] and @scala[`receiveCommand`]@java[`createReceive`]. This is demonstrated in the following example.
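
The snipped example itself is not visible in this diff; as a rough, minimal sketch of the shape such an actor takes (names invented for illustration):

```scala
import akka.persistence.PersistentActor

class Counter extends PersistentActor {
  override def persistenceId: String = "counter-1"

  private var count = 0

  // replayed events rebuild the state during recovery
  override def receiveRecover: Receive = {
    case "incremented" => count += 1
  }

  // commands persist events; state changes in the persist callback
  override def receiveCommand: Receive = {
    case "increment" =>
      persist("incremented") { _ =>
        count += 1
        sender() ! count
      }
  }
}
```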
@@ -453,7 +453,7 @@ timer-based which keeps latencies at a minimum.
It is possible to delete all messages (journaled by a single persistent actor) up to a specified sequence number;
Persistent actors may call the `deleteMessages` method to this end.
-Deleting messages in event sourcing based applications is typically either not used at all, or used in conjunction with
+Deleting messages in Event Sourcing based applications is typically either not used at all, or used in conjunction with
[snapshotting](#snapshots), i.e. after a snapshot has been successfully stored, a `deleteMessages(toSequenceNr)`
up until the sequence number of the data held by that snapshot can be issued to safely delete the previous events
while still having access to the accumulated state during replays - by loading the snapshot.
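
A sketch of that pattern (hypothetical actor, arbitrary snapshot cadence): save a snapshot periodically and, once the store confirms it with `SaveSnapshotSuccess`, delete the events it covers.

```scala
import akka.persistence.{PersistentActor, SaveSnapshotSuccess, SnapshotOffer}

class SnapshottingCounter extends PersistentActor {
  override def persistenceId: String = "counter-2"

  private var count = 0

  override def receiveRecover: Receive = {
    case SnapshotOffer(_, snapshot: Int) => count = snapshot // start from snapshot
    case "incremented"                   => count += 1       // replay newer events
  }

  override def receiveCommand: Receive = {
    case "increment" =>
      persist("incremented") { _ =>
        count += 1
        if (count % 100 == 0) saveSnapshot(count) // arbitrary cadence
      }
    case SaveSnapshotSuccess(metadata) =>
      // the snapshot covers everything up to metadata.sequenceNr,
      // so earlier events can be safely deleted
      deleteMessages(metadata.sequenceNr)
  }
}
```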
@@ -750,7 +750,7 @@ configuration key. The method can be overridden by implementation classes to ret
## Event Adapters
-In long running projects using event sourcing sometimes the need arises to detach the data model from the domain model
+In long running projects using Event Sourcing sometimes the need arises to detach the data model from the domain model
completely.
Event Adapters help in situations where:
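
(The list of situations is truncated by this hunk.) For reference, a sketch of the shape an adapter takes with the classic `EventAdapter` API; the domain and data classes are invented for illustration:

```scala
import akka.persistence.journal.{EventAdapter, EventSeq}

// invented domain event (what the actor works with)
final case class ItemAdded(itemId: String)
// invented data-model event (what is actually stored in the journal)
final case class ItemAddedV2(itemId: String, source: String)

class ItemEventAdapter extends EventAdapter {
  override def manifest(event: Any): String = "" // no manifest needed here

  // domain -> data model, on the way into the journal
  override def toJournal(event: Any): Any = event match {
    case ItemAdded(id) => ItemAddedV2(id, source = "unknown")
    case other         => other
  }

  // data -> domain model, on the way out of the journal
  override def fromJournal(event: Any, manifest: String): EventSeq = event match {
    case ItemAddedV2(id, _) => EventSeq.single(ItemAdded(id))
    case other              => EventSeq.single(other)
  }
}
```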


@@ -388,7 +388,7 @@ Links to reference documentation:
The correspondence of the classic `PersistentActor` is @scala[`akka.persistence.typed.scaladsl.EventSourcedBehavior`]@java[`akka.persistence.typed.javadsl.EventSourcedBehavior`].
-The Typed API is much more guided to facilitate event sourcing best practises. It also has tighter integration with
+The Typed API is much more guided to facilitate Event Sourcing best practices. It also has tighter integration with
Cluster Sharding.
Links to reference documentation:


@@ -103,9 +103,9 @@ Java
## Event deletion
-Deleting events in event sourcing based applications is typically either not used at all, or used in conjunction with snapshotting.
+Deleting events in Event Sourcing based applications is typically either not used at all, or used in conjunction with snapshotting.
By deleting events you will lose the history of how the system changed before it reached current state, which is
-one of the main reasons for using event sourcing in the first place.
+one of the main reasons for using Event Sourcing in the first place.
If snapshot-based retention is enabled, after a snapshot has been successfully stored, a delete of the events
(journaled by a single event sourced actor) up until the sequence number of the data held by that snapshot can be issued.
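
With the typed API this is configured declaratively rather than by calling delete operations by hand; a sketch, assuming some existing `EventSourcedBehavior` value:

```scala
import akka.persistence.typed.scaladsl.{EventSourcedBehavior, RetentionCriteria}

def withSnapshotRetention[C, E, S](
    behavior: EventSourcedBehavior[C, E, S]): EventSourcedBehavior[C, E, S] =
  behavior.withRetention(
    // snapshot every 100 events, keep the last 2 snapshots, and delete
    // the events covered by a completed snapshot
    RetentionCriteria
      .snapshotEvery(numberOfEvents = 100, keepNSnapshots = 2)
      .withDeleteEventsOnSnapshot)
```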


@@ -47,9 +47,9 @@ provides tools to facilitate in building GDPR capable systems.
@@@
-### Event sourcing concepts
+### Event Sourcing concepts
-See an [introduction to EventSourcing](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/jj591559%28v=pandp.10%29) at MSDN.
+See an [introduction to Event Sourcing](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/jj591559%28v=pandp.10%29) at MSDN.
Another excellent article about "thinking in Events" is [Events As First-Class Citizens](https://hackernoon.com/events-as-first-class-citizens-8633e8479493)
by Randy Shoup. It is a short and recommended read if you're starting developing Events based applications.


@@ -8,7 +8,7 @@ warning or deprecation period. It is also not recommended to use this module in
@@@
-@ref[Event sourcing](./persistence.md) with `EventSourcedBehavior`s is based on the single writer principle, which means that there can only be one active instance of a `EventSourcedBehavior`
+@ref[Event Sourcing](./persistence.md) with `EventSourcedBehavior`s is based on the single writer principle, which means that there can only be one active instance of a `EventSourcedBehavior`
with a given `persistenceId`. Otherwise, multiple instances would store interleaving events based on different states, and when these events would later be replayed it would not be possible to reconstruct the correct state.
This restriction means that in the event of network partitions, and for a short time during rolling re-deploys, some
@@ -421,7 +421,7 @@ For a snapshot plugin to support replication it needs to store and read metadata
To attach the metadata when reading the snapshot the `akka.persistence.SnapshotMetadata.apply` factory overload taking a `metadata` parameter is used.
The @apidoc[SnapshotStoreSpec] in the Persistence TCK provides a capability flag `supportsMetadata` to toggle verification that metadata is handled correctly.
-The following plugins support replicated event sourcing:
+The following plugins support Replicated Event Sourcing:
* [Akka Persistence Cassandra](https://doc.akka.io/docs/akka-persistence-cassandra/current/index.html) versions 1.0.3+
* [Akka Persistence Spanner](https://doc.akka.io/docs/akka-persistence-spanner/current/overview.html) versions 1.0.0-RC4+