parent 8b71fac817
commit 12513ec7df
13 changed files with 45 additions and 46 deletions
@@ -56,7 +56,7 @@ trait ReplicatedShardingExtension extends Extension {
 }

 /**
- * Represents the sharding instances for the replicas of one replicated event sourcing entity type
+ * Represents the sharding instances for the replicas of one Replicated Event Sourcing entity type
  *
  * Not for user extension.
  */
@@ -75,7 +75,7 @@ class ReplicatedShardingDirectReplicationSpec extends ScalaTestWithActorTestKit
       replicaAProbe.expectNoMessage()
     }

-    "ignore messages not from replicated event sourcing" in {
+    "ignore messages not from Replicated Event Sourcing" in {
       val replicaAProbe = createTestProbe[ShardingEnvelope[PublishedEvent]]()

       val replicationActor = spawn(
@@ -1191,7 +1191,7 @@ abstract class ShardCoordinator(
 /**
  * Singleton coordinator that decides where to allocate shards.
  *
- * Users can migrate to using DData to store state then either event sourcing or ddata to store
+ * Users can migrate to using DData to store state then either Event Sourcing or ddata to store
  * the remembered entities.
  *
  * @see [[ClusterSharding$ ClusterSharding extension]]
@@ -31,7 +31,7 @@ Scala
 Java
 : @@snip [ClusterShardingTest.java](/akka-docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-actor }

-The above actor uses event sourcing and the support provided in @scala[`PersistentActor`] @java[`AbstractPersistentActor`] to store its state.
+The above actor uses Event Sourcing and the support provided in @scala[`PersistentActor`] @java[`AbstractPersistentActor`] to store its state.
 It does not have to be a persistent actor, but in case of failure or migration of entities between nodes it must be able to recover
 its state if it is valuable.

@@ -38,8 +38,8 @@ role of the “Dead Letter Office”.
 These are the rules for message sends (i.e. the `tell` or `!` method, which
 also underlies the `ask` pattern):

-* **at-most-once delivery**, i.e. no guaranteed delivery
-* **message ordering per sender–receiver pair**
+* **at-most-once delivery**, i.e. no guaranteed delivery
+* **message ordering per sender–receiver pair**

 The first rule is typically found also in other actor implementations while the
 second is specific to Akka.
@@ -49,14 +49,14 @@ second is specific to Akka.
 When it comes to describing the semantics of a delivery mechanism, there are
 three basic categories:

-* **at-most-once** delivery means that for each message handed to the
+* **at-most-once** delivery means that for each message handed to the
  mechanism, that message is delivered once or not at all; in more casual terms
  it means that messages may be lost.
-* **at-least-once** delivery means that for each message handed to the
+* **at-least-once** delivery means that for each message handed to the
  mechanism potentially multiple attempts are made at delivering it, such that
  at least one succeeds; again, in more casual terms this means that messages
  may be duplicated but not lost.
-* **exactly-once** delivery means that for each message handed to the mechanism
+* **exactly-once** delivery means that for each message handed to the mechanism
  exactly one delivery is made to the recipient; the message can neither be
  lost nor duplicated.

@@ -129,7 +129,6 @@ This means that:
 5. `A2` can see messages from `A1` interleaved with messages from `A3`
 6. Since there is no guaranteed delivery, any of the messages may be dropped, i.e. not arrive at `A2`

-
 @@@ note

 It is important to note that Akka’s guarantee applies to the order in which
@@ -202,14 +201,14 @@ actually do apply the best effort to keep our tests stable. A local `tell`
 operation can however fail for the same reasons as a normal method call can on
 the JVM:

-* `StackOverflowError`
-* `OutOfMemoryError`
-* other `VirtualMachineError`
+* `StackOverflowError`
+* `OutOfMemoryError`
+* other `VirtualMachineError`

 In addition, local sends can fail in Akka-specific ways:

-* if the mailbox does not accept the message (e.g. full BoundedMailbox)
-* if the receiving actor fails while processing the message or is already
+* if the mailbox does not accept the message (e.g. full BoundedMailbox)
+* if the receiving actor fails while processing the message or is already
  terminated

 While the first is a matter of configuration the second deserves some
@@ -226,16 +225,16 @@ will note, these are quite subtle as it stands, and it is even possible that
 future performance optimizations will invalidate this whole paragraph. The
 possibly non-exhaustive list of counter-indications is:

-* Before receiving the first reply from a top-level actor, there is a lock
+* Before receiving the first reply from a top-level actor, there is a lock
  which protects an internal interim queue, and this lock is not fair; the
  implication is that enqueue requests from different senders which arrive
  during the actor’s construction (figuratively, the details are more involved)
  may be reordered depending on low-level thread scheduling. Since completely
  fair locks do not exist on the JVM this is unfixable.
-* The same mechanism is used during the construction of a Router, more
+* The same mechanism is used during the construction of a Router, more
  precisely the routed ActorRef, hence the same problem exists for actors
  deployed with Routers.
-* As mentioned above, the problem occurs anywhere a lock is involved during
+* As mentioned above, the problem occurs anywhere a lock is involved during
  enqueueing, which may also apply to custom mailboxes.

 This list has been compiled carefully, but other problematic scenarios may have
@@ -272,10 +271,10 @@ powerful, higher-level abstractions on top of it.
 As discussed above a straight-forward answer to the requirement of reliable
 delivery is an explicit ACK–RETRY protocol. In its simplest form this requires

-* a way to identify individual messages to correlate message with
+* a way to identify individual messages to correlate message with
  acknowledgement
-* a retry mechanism which will resend messages if not acknowledged in time
-* a way for the receiver to detect and discard duplicates
+* a retry mechanism which will resend messages if not acknowledged in time
+* a way for the receiver to detect and discard duplicates

 The third becomes necessary by virtue of the acknowledgements not being guaranteed
 to arrive either.
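The requirements listed in the hunk above are what the classic `AtLeastOnceDelivery` mixin of Akka Persistence covers on the sending side. The sketch below follows its documented usage pattern; the message protocol, event types and persistence id are invented for illustration, and duplicate detection on the receiver remains a separate concern.

```scala
import akka.actor.ActorPath
import akka.persistence.{ AtLeastOnceDelivery, PersistentActor }

// Illustrative message protocol: Msg carries a correlation id, Confirm acknowledges it.
case class Msg(deliveryId: Long, payload: String)
case class Confirm(deliveryId: Long)

sealed trait Event
case class MsgSent(payload: String) extends Event
case class MsgConfirmed(deliveryId: Long) extends Event

class ReliableSender(destination: ActorPath) extends PersistentActor with AtLeastOnceDelivery {
  override def persistenceId: String = "reliable-sender-sample" // illustrative id

  override def receiveCommand: Receive = {
    case payload: String     => persist(MsgSent(payload))(updateState)
    case Confirm(deliveryId) => persist(MsgConfirmed(deliveryId))(updateState)
  }

  override def receiveRecover: Receive = {
    case evt: Event => updateState(evt)
  }

  // deliver() keeps resending until confirmDelivery() is called for the delivery id.
  private def updateState(event: Event): Unit = event match {
    case MsgSent(payload)         => deliver(destination)(deliveryId => Msg(deliveryId, payload))
    case MsgConfirmed(deliveryId) => confirmDelivery(deliveryId)
  }
}
```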
@@ -288,7 +287,7 @@ idempotent on the level of the business logic.

 ### Event Sourcing

-Event sourcing (and sharding) is what makes large websites scale to
+Event Sourcing (and sharding) is what makes large websites scale to
 billions of users, and the idea is quite simple: when a component (think actor)
 processes a command it will generate a list of events representing the effect
 of the command. These events are stored in addition to being applied to the
@@ -299,7 +298,7 @@ components may consume the event stream as a means to replicate the component’s
 state on a different continent or to react to changes). If the component’s
 state is lost—due to a machine failure or by being pushed out of a cache—it can
 be reconstructed by replaying the event stream (usually employing
-snapshots to speed up the process). @ref:[Event sourcing](../typed/persistence.md#event-sourcing-concepts) is supported by
+snapshots to speed up the process). @ref:[Event Sourcing](../typed/persistence.md#event-sourcing-concepts) is supported by
 Akka Persistence.

 ### Mailbox with Explicit Acknowledgement
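The two hunks above edit the documentation passage that states the Event Sourcing idea: commands produce events, events are stored, and state is rebuilt by replaying them. A framework-free sketch of that idea, with invented command, event and state types:

```scala
// A minimal, framework-free sketch of the idea described above: commands produce
// events, events are stored, and state is rebuilt by replaying (folding) them.
final case class Deposit(amount: Long)   // command (hypothetical)
final case class Deposited(amount: Long) // event (hypothetical)
final case class Account(balance: Long = 0L) {
  def applyEvent(e: Deposited): Account = copy(balance = balance + e.amount)
}

object AccountLogic {
  // Command handling: validate against current state, emit events (not yet applied).
  def handle(state: Account, cmd: Deposit): List[Deposited] =
    if (cmd.amount > 0) List(Deposited(cmd.amount)) else Nil

  // Recovery: replay the stored event stream to reconstruct the state.
  def replay(events: Seq[Deposited]): Account =
    events.foldLeft(Account())((state, e) => state.applyEvent(e))
}
```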
@@ -197,7 +197,7 @@ Java

 ## Performance and denormalization

-When building systems using @ref:[Event sourcing](typed/persistence.md#event-sourcing-concepts) and CQRS ([Command & Query Responsibility Segregation](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/jj554200%28v=pandp.10%29)) techniques
+When building systems using @ref:[Event Sourcing](typed/persistence.md#event-sourcing-concepts) and CQRS ([Command & Query Responsibility Segregation](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/jj554200%28v=pandp.10%29)) techniques
 it is tremendously important to realise that the write-side has completely different needs from the read-side,
 and separating those concerns into datastores that are optimised for either side makes it possible to offer the best
 experience for the write and read sides independently.
@@ -1,5 +1,5 @@
 ---
-project.description: Akka Persistence Classic, event sourcing with Akka, At-Least-Once delivery, snapshots, recovery and replay with Akka actors.
+project.description: Akka Persistence Classic, Event Sourcing with Akka, At-Least-Once delivery, snapshots, recovery and replay with Akka actors.
 ---
 # Classic Persistence

@@ -44,12 +44,12 @@ Replicated journals are available as [Community plugins](https://akka.io/communi
 * *Snapshot store*: A snapshot store persists snapshots of a persistent actor's state. Snapshots are
  used for optimizing recovery times. The storage backend of a snapshot store is pluggable.
  The persistence extension comes with a "local" snapshot storage plugin, which writes to the local filesystem. Replicated snapshot stores are available as [Community plugins](https://akka.io/community/)
-* *Event sourcing*. Based on the building blocks described above, Akka persistence provides abstractions for the
-  development of event sourced applications (see section @ref:[Event sourcing](typed/persistence.md#event-sourcing-concepts)).
+* *Event Sourcing*. Based on the building blocks described above, Akka persistence provides abstractions for the
+  development of event sourced applications (see section @ref:[Event Sourcing](typed/persistence.md#event-sourcing-concepts)).

 ## Example

-Akka persistence supports event sourcing with the @scala[`PersistentActor` trait]@java[`AbstractPersistentActor` abstract class]. An actor that extends this @scala[trait]@java[class] uses the
+Akka persistence supports Event Sourcing with the @scala[`PersistentActor` trait]@java[`AbstractPersistentActor` abstract class]. An actor that extends this @scala[trait]@java[class] uses the
 `persist` method to persist and handle events. The behavior of @scala[a `PersistentActor`]@java[an `AbstractPersistentActor`]
 is defined by implementing @scala[`receiveRecover`]@java[`createReceiveRecover`] and @scala[`receiveCommand`]@java[`createReceive`]. This is demonstrated in the following example.

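The hunk above touches the introduction to the classic `PersistentActor` example. A minimal sketch of such an actor, closely modelled on the documented counter-style example; the `Cmd`/`Evt` types and the persistence id are illustrative:

```scala
import akka.persistence.{ PersistentActor, SnapshotOffer }

// Hypothetical command, event and state types for this sketch.
case class Cmd(data: String)
case class Evt(data: String)
case class ExampleState(events: List[String] = Nil) {
  def updated(evt: Evt): ExampleState = copy(evt.data :: events)
  def size: Int = events.length
}

class ExamplePersistentActor extends PersistentActor {
  override def persistenceId: String = "sample-id-1" // illustrative id

  var state = ExampleState()

  // Replayed events and snapshots rebuild the state on recovery.
  override def receiveRecover: Receive = {
    case evt: Evt                                 => state = state.updated(evt)
    case SnapshotOffer(_, snapshot: ExampleState) => state = snapshot
  }

  // Commands are handled here; events are persisted, then applied to the state.
  override def receiveCommand: Receive = {
    case Cmd(data) =>
      persist(Evt(data)) { event =>
        state = state.updated(event)
      }
    case "snap"  => saveSnapshot(state)
    case "print" => println(state)
  }
}
```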
@@ -453,7 +453,7 @@ timer-based which keeps latencies at a minimum.
 It is possible to delete all messages (journaled by a single persistent actor) up to a specified sequence number;
 Persistent actors may call the `deleteMessages` method to this end.

-Deleting messages in event sourcing based applications is typically either not used at all, or used in conjunction with
+Deleting messages in Event Sourcing based applications is typically either not used at all, or used in conjunction with
 [snapshotting](#snapshots), i.e. after a snapshot has been successfully stored, a `deleteMessages(toSequenceNr)`
 up until the sequence number of the data held by that snapshot can be issued to safely delete the previous events
 while still having access to the accumulated state during replays - by loading the snapshot.
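The paragraph edited above describes the usual pattern: store a snapshot first, then issue `deleteMessages(toSequenceNr)` for the events the snapshot covers. A hedged sketch of that pattern, assuming an invented actor, event type and snapshot cadence:

```scala
import akka.actor.ActorLogging
import akka.persistence.{ DeleteMessagesSuccess, PersistentActor, SaveSnapshotSuccess, SnapshotOffer }

// Illustrative actor: snapshot periodically, then delete the events the snapshot covers.
class CompactingActor extends PersistentActor with ActorLogging {
  override def persistenceId: String = "compacting-sample" // illustrative id

  private var state: List[String] = Nil

  override def receiveRecover: Receive = {
    case event: String                                        => state = event :: state
    case SnapshotOffer(_, snapshot: List[String] @unchecked)  => state = snapshot
  }

  override def receiveCommand: Receive = {
    case event: String =>
      persist(event) { e =>
        state = e :: state
        if (state.size % 100 == 0) saveSnapshot(state) // arbitrary cadence for the sketch
      }
    case SaveSnapshotSuccess(metadata) =>
      // Only delete events once the snapshot that covers them is safely stored.
      deleteMessages(metadata.sequenceNr)
    case DeleteMessagesSuccess(toSequenceNr) =>
      log.info("Deleted events up to sequenceNr {}", toSequenceNr)
  }
}
```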
@@ -750,7 +750,7 @@ configuration key. The method can be overridden by implementation classes to ret

 ## Event Adapters

-In long running projects using event sourcing sometimes the need arises to detach the data model from the domain model
+In long running projects using Event Sourcing sometimes the need arises to detach the data model from the domain model
 completely.

 Event Adapters help in situations where:
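The Event Adapters section edited above is about decoupling the domain model from the data model written to the journal. A small sketch of such an adapter; the event types and the mapping are invented for illustration:

```scala
import akka.persistence.journal.{ EventAdapter, EventSeq }

// Hypothetical domain event and its journal (data-model) representation.
final case class ItemAdded(itemId: String)                  // domain event
final case class ItemAddedV2(itemId: String, note: String)  // data-model representation

class ItemEventAdapter extends EventAdapter {
  override def manifest(event: Any): String = "" // no manifest needed in this sketch

  // Domain event -> what actually gets written to the journal.
  override def toJournal(event: Any): Any = event match {
    case ItemAdded(id) => ItemAddedV2(id, note = "")
    case other         => other
  }

  // Journal representation -> domain event(s) handed back to the actor.
  override def fromJournal(event: Any, manifest: String): EventSeq = event match {
    case ItemAddedV2(id, _) => EventSeq.single(ItemAdded(id))
    case other              => EventSeq.single(other)
  }
}
```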
@@ -388,7 +388,7 @@ Links to reference documentation:

 The correspondence of the classic `PersistentActor` is @scala[`akka.persistence.typed.scaladsl.EventSourcedBehavior`]@java[`akka.persistence.typed.javadsl.EventSourcedBehavior`].

-The Typed API is much more guided to facilitate event sourcing best practises. It also has tighter integration with
+The Typed API is much more guided to facilitate Event Sourcing best practices. It also has tighter integration with
 Cluster Sharding.

 Links to reference documentation:
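For the typed `EventSourcedBehavior` mentioned above, a behavior is assembled from a persistence id, an empty state, a command handler and an event handler. A minimal sketch with invented command, event and state types:

```scala
import akka.actor.typed.Behavior
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{ Effect, EventSourcedBehavior }

// Illustrative command/event/state types.
object Cart {
  sealed trait Command
  final case class AddItem(item: String) extends Command

  sealed trait Event
  final case class ItemAdded(item: String) extends Event

  final case class State(items: List[String] = Nil)

  def apply(cartId: String): Behavior[Command] =
    EventSourcedBehavior[Command, Event, State](
      persistenceId = PersistenceId.ofUniqueId(cartId),
      emptyState = State(),
      commandHandler = (_, command) =>
        command match {
          case AddItem(item) => Effect.persist(ItemAdded(item))
        },
      eventHandler = (state, event) =>
        event match {
          case ItemAdded(item) => state.copy(items = item :: state.items)
        })
}
```

An actor spawned from `Cart("cart-1")` rebuilds its state on restart by replaying its `ItemAdded` events.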
@@ -103,9 +103,9 @@ Java

 ## Event deletion

-Deleting events in event sourcing based applications is typically either not used at all, or used in conjunction with snapshotting.
+Deleting events in Event Sourcing based applications is typically either not used at all, or used in conjunction with snapshotting.
 By deleting events you will lose the history of how the system changed before it reached current state, which is
-one of the main reasons for using event sourcing in the first place.
+one of the main reasons for using Event Sourcing in the first place.

 If snapshot-based retention is enabled, after a snapshot has been successfully stored, a delete of the events
 (journaled by a single event sourced actor) up until the sequence number of the data held by that snapshot can be issued.
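The snapshot-based retention referred to above can be attached to a typed `EventSourcedBehavior` so that events covered by a stored snapshot are deleted automatically. A small sketch; the counter behavior, its id and the every-100-events cadence are illustrative:

```scala
import akka.actor.typed.Behavior
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{ Effect, EventSourcedBehavior, RetentionCriteria }

// Illustrative counter: snapshot every 100 events and delete the events each snapshot covers.
object CompactedCounter {
  final case class Increment(amount: Int)

  def apply(counterId: String): Behavior[Increment] =
    EventSourcedBehavior[Increment, Int, Long](
      persistenceId = PersistenceId.ofUniqueId(counterId),
      emptyState = 0L,
      commandHandler = (_, command) => Effect.persist(command.amount),
      eventHandler = (state, delta) => state + delta)
      .withRetention(
        RetentionCriteria.snapshotEvery(numberOfEvents = 100, keepNSnapshots = 2).withDeleteEventsOnSnapshot)
}
```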
@@ -47,9 +47,9 @@ provides tools to facilitate in building GDPR capable systems.

 @@@

-### Event sourcing concepts
+### Event Sourcing concepts

-See an [introduction to EventSourcing](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/jj591559%28v=pandp.10%29) at MSDN.
+See an [introduction to Event Sourcing](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/jj591559%28v=pandp.10%29) at MSDN.

 Another excellent article about "thinking in Events" is [Events As First-Class Citizens](https://hackernoon.com/events-as-first-class-citizens-8633e8479493)
 by Randy Shoup. It is a short and recommended read if you're starting developing Events based applications.
@@ -8,7 +8,7 @@ warning or deprecation period. It is also not recommended to use this module in

 @@@

-@ref[Event sourcing](./persistence.md) with `EventSourcedBehavior`s is based on the single writer principle, which means that there can only be one active instance of a `EventSourcedBehavior`
+@ref[Event Sourcing](./persistence.md) with `EventSourcedBehavior`s is based on the single writer principle, which means that there can only be one active instance of a `EventSourcedBehavior`
 with a given `persistenceId`. Otherwise, multiple instances would store interleaving events based on different states, and when these events would later be replayed it would not be possible to reconstruct the correct state.

 This restriction means that in the event of network partitions, and for a short time during rolling re-deploys, some
@@ -421,7 +421,7 @@ For a snapshot plugin to support replication it needs to store and read metadata
 To attach the metadata when reading the snapshot the `akka.persistence.SnapshotMetadata.apply` factory overload taking a `metadata` parameter is used.
 The @apidoc[SnapshotStoreSpec] in the Persistence TCK provides a capability flag `supportsMetadata` to toggle verification that metadata is handled correctly.

-The following plugins support replicated event sourcing:
+The following plugins support Replicated Event Sourcing:

 * [Akka Persistence Cassandra](https://doc.akka.io/docs/akka-persistence-cassandra/current/index.html) versions 1.0.3+
 * [Akka Persistence Spanner](https://doc.akka.io/docs/akka-persistence-spanner/current/overview.html) versions 1.0.0-RC4+
@@ -26,7 +26,7 @@ import akka.util.JavaDurationConverters._

 /**
  * [[DurableProducerQueue]] that can be used with [[akka.actor.typed.delivery.ProducerController]]
- * for reliable delivery of messages. It is implemented with event sourcing and stores one
+ * for reliable delivery of messages. It is implemented with Event Sourcing and stores one
  * event before sending the message to the destination and one event for the confirmation
  * that the message has been delivered and processed.
  *
@@ -160,7 +160,7 @@ final class DiscardConfigurator extends StashOverflowStrategyConfigurator {
 }

 /**
- * Scala API: A persistent Actor - can be used to implement command or event sourcing.
+ * Scala API: A persistent Actor - can be used to implement command or Event Sourcing.
  */
 trait PersistentActor extends Eventsourced with PersistenceIdentity {
   def receive = receiveCommand
@@ -290,7 +290,7 @@ trait PersistentActor extends Eventsourced with PersistenceIdentity {
 }

 /**
- * Java API: an persistent actor - can be used to implement command or event sourcing.
+ * Java API: an persistent actor - can be used to implement command or Event Sourcing.
  */
 abstract class AbstractPersistentActor extends AbstractActor with AbstractPersistentActorLike {
