diff --git a/akka-cluster-sharding-typed/src/main/scala/akka/cluster/sharding/typed/ReplicatedShardingExtension.scala b/akka-cluster-sharding-typed/src/main/scala/akka/cluster/sharding/typed/ReplicatedShardingExtension.scala index af35f26cc7..70367eb44b 100644 --- a/akka-cluster-sharding-typed/src/main/scala/akka/cluster/sharding/typed/ReplicatedShardingExtension.scala +++ b/akka-cluster-sharding-typed/src/main/scala/akka/cluster/sharding/typed/ReplicatedShardingExtension.scala @@ -56,7 +56,7 @@ trait ReplicatedShardingExtension extends Extension { } /** - * Represents the sharding instances for the replicas of one replicated event sourcing entity type + * Represents the sharding instances for the replicas of one Replicated Event Sourcing entity type * * Not for user extension. */ diff --git a/akka-cluster-sharding-typed/src/test/scala/akka/cluster/sharding/typed/ReplicatedShardingDirectReplicationSpec.scala b/akka-cluster-sharding-typed/src/test/scala/akka/cluster/sharding/typed/ReplicatedShardingDirectReplicationSpec.scala index cefd31899e..905fd8b6be 100644 --- a/akka-cluster-sharding-typed/src/test/scala/akka/cluster/sharding/typed/ReplicatedShardingDirectReplicationSpec.scala +++ b/akka-cluster-sharding-typed/src/test/scala/akka/cluster/sharding/typed/ReplicatedShardingDirectReplicationSpec.scala @@ -75,7 +75,7 @@ class ReplicatedShardingDirectReplicationSpec extends ScalaTestWithActorTestKit replicaAProbe.expectNoMessage() } - "ignore messages not from replicated event sourcing" in { + "ignore messages not from Replicated Event Sourcing" in { val replicaAProbe = createTestProbe[ShardingEnvelope[PublishedEvent]]() val replicationActor = spawn( diff --git a/akka-cluster-sharding/src/main/scala/akka/cluster/sharding/ShardCoordinator.scala b/akka-cluster-sharding/src/main/scala/akka/cluster/sharding/ShardCoordinator.scala index 7539a686df..e70b34573e 100644 --- a/akka-cluster-sharding/src/main/scala/akka/cluster/sharding/ShardCoordinator.scala +++ 
b/akka-cluster-sharding/src/main/scala/akka/cluster/sharding/ShardCoordinator.scala @@ -1191,7 +1191,7 @@ abstract class ShardCoordinator( /** * Singleton coordinator that decides where to allocate shards. * - * Users can migrate to using DData to store state then either event sourcing or ddata to store + * Users can migrate to using DData to store state, then either Event Sourcing or DData to store * the remembered entities. * * @see [[ClusterSharding$ ClusterSharding extension]] diff --git a/akka-docs/src/main/paradox/cluster-sharding.md b/akka-docs/src/main/paradox/cluster-sharding.md index 1d860f4983..a9801d668c 100644 --- a/akka-docs/src/main/paradox/cluster-sharding.md +++ b/akka-docs/src/main/paradox/cluster-sharding.md @@ -31,7 +31,7 @@ Scala Java : @@snip [ClusterShardingTest.java](/akka-docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-actor } -The above actor uses event sourcing and the support provided in @scala[`PersistentActor`] @java[`AbstractPersistentActor`] to store its state. +The above actor uses Event Sourcing and the support provided in @scala[`PersistentActor`] @java[`AbstractPersistentActor`] to store its state. It does not have to be a persistent actor, but in case of failure or migration of entities between nodes it must be able to recover its state if it is valuable. diff --git a/akka-docs/src/main/paradox/general/message-delivery-reliability.md b/akka-docs/src/main/paradox/general/message-delivery-reliability.md index 825d906697..517d7b481c 100644 --- a/akka-docs/src/main/paradox/general/message-delivery-reliability.md +++ b/akka-docs/src/main/paradox/general/message-delivery-reliability.md @@ -23,7 +23,7 @@ remote transport will place a limit on the message size. Writing your actors such that every interaction could possibly be remote is the safe, pessimistic bet. It means to only rely on those properties which are -always guaranteed and which are discussed in detail below.
This has +always guaranteed and which are discussed in detail below. This has some overhead in the actor’s implementation. If you are willing to sacrifice full location transparency—for example in case of a group of closely collaborating actors—you can place them always on the same JVM and enjoy stricter guarantees @@ -38,8 +38,8 @@ role of the “Dead Letter Office”. These are the rules for message sends (i.e. the `tell` or `!` method, which also underlies the `ask` pattern): - * **at-most-once delivery**, i.e. no guaranteed delivery - * **message ordering per sender–receiver pair** +* **at-most-once delivery**, i.e. no guaranteed delivery +* **message ordering per sender–receiver pair** The first rule is typically found also in other actor implementations while the second is specific to Akka. @@ -49,14 +49,14 @@ second is specific to Akka. When it comes to describing the semantics of a delivery mechanism, there are three basic categories: - * **at-most-once** delivery means that for each message handed to the +* **at-most-once** delivery means that for each message handed to the mechanism, that message is delivered once or not at all; in more casual terms it means that messages may be lost. - * **at-least-once** delivery means that for each message handed to the +* **at-least-once** delivery means that for each message handed to the mechanism potentially multiple attempts are made at delivering it, such that at least one succeeds; again, in more casual terms this means that messages may be duplicated but not lost. - * **exactly-once** delivery means that for each message handed to the mechanism +* **exactly-once** delivery means that for each message handed to the mechanism exactly one delivery is made to the recipient; the message can neither be lost nor duplicated. @@ -121,7 +121,7 @@ The guarantee is illustrated in the following: > Actor `A3` sends messages `M4`, `M5`, `M6` to `A2` This means that: - + 1. 
If `M1` is delivered it must be delivered before `M2` and `M3` 2. If `M2` is delivered it must be delivered before `M3` 3. If `M4` is delivered it must be delivered before `M5` and `M6` @@ -129,7 +129,6 @@ This means that: 5. `A2` can see messages from `A1` interleaved with messages from `A3` 6. Since there is no guaranteed delivery, any of the messages may be dropped, i.e. not arrive at `A2` - @@@ note It is important to note that Akka’s guarantee applies to the order in which @@ -202,14 +201,14 @@ actually do apply the best effort to keep our tests stable. A local `tell` operation can however fail for the same reasons as a normal method call can on the JVM: - * `StackOverflowError` - * `OutOfMemoryError` - * other `VirtualMachineError` +* `StackOverflowError` +* `OutOfMemoryError` +* other `VirtualMachineError` In addition, local sends can fail in Akka-specific ways: - * if the mailbox does not accept the message (e.g. full BoundedMailbox) - * if the receiving actor fails while processing the message or is already +* if the mailbox does not accept the message (e.g. full BoundedMailbox) +* if the receiving actor fails while processing the message or is already terminated While the first is a matter of configuration the second deserves some @@ -226,16 +225,16 @@ will note, these are quite subtle as it stands, and it is even possible that future performance optimizations will invalidate this whole paragraph. The possibly non-exhaustive list of counter-indications is: - * Before receiving the first reply from a top-level actor, there is a lock +* Before receiving the first reply from a top-level actor, there is a lock which protects an internal interim queue, and this lock is not fair; the implication is that enqueue requests from different senders which arrive during the actor’s construction (figuratively, the details are more involved) may be reordered depending on low-level thread scheduling. Since completely fair locks do not exist on the JVM this is unfixable. 
- * The same mechanism is used during the construction of a Router, more +* The same mechanism is used during the construction of a Router, more precisely the routed ActorRef, hence the same problem exists for actors deployed with Routers. - * As mentioned above, the problem occurs anywhere a lock is involved during +* As mentioned above, the problem occurs anywhere a lock is involved during enqueueing, which may also apply to custom mailboxes. This list has been compiled carefully, but other problematic scenarios may have @@ -243,7 +242,7 @@ escaped our analysis. ### How does Local Ordering relate to Network Ordering -The rule that *for a given pair of actors, messages sent directly from the first +The rule that *for a given pair of actors, messages sent directly from the first to the second will not be received out-of-order* holds for messages sent over the network with the TCP based Akka remote transport protocol. @@ -272,23 +271,23 @@ powerful, higher-level abstractions on top of it. As discussed above a straight-forward answer to the requirement of reliable delivery is an explicit ACK–RETRY protocol. In its simplest form this requires - * a way to identify individual messages to correlate message with +* a way to identify individual messages to correlate message with acknowledgement - * a retry mechanism which will resend messages if not acknowledged in time - * a way for the receiver to detect and discard duplicates +* a retry mechanism which will resend messages if not acknowledged in time +* a way for the receiver to detect and discard duplicates The third becomes necessary by virtue of the acknowledgements not being guaranteed -to arrive either. +to arrive either. An ACK-RETRY protocol with business-level acknowledgements and de-duplication using identifiers is -supported by the @ref:[Reliable Delivery](../typed/reliable-delivery.md) feature. +supported by the @ref:[Reliable Delivery](../typed/reliable-delivery.md) feature. 
Another way of implementing the third part would be to make processing the messages idempotent on the level of the business logic. ### Event Sourcing -Event sourcing (and sharding) is what makes large websites scale to +Event Sourcing (and sharding) is what makes large websites scale to billions of users, and the idea is quite simple: when a component (think actor) processes a command it will generate a list of events representing the effect of the command. These events are stored in addition to being applied to the @@ -299,7 +298,7 @@ components may consume the event stream as a means to replicate the component’ state on a different continent or to react to changes). If the component’s state is lost—due to a machine failure or by being pushed out of a cache—it can be reconstructed by replaying the event stream (usually employing -snapshots to speed up the process). @ref:[Event sourcing](../typed/persistence.md#event-sourcing-concepts) is supported by +snapshots to speed up the process). @ref:[Event Sourcing](../typed/persistence.md#event-sourcing-concepts) is supported by Akka Persistence. ### Mailbox with Explicit Acknowledgement @@ -335,7 +334,7 @@ sender’s code more than is gained in debug output clarity. The dead letter service follows the same rules with respect to delivery guarantees as all other message sends, hence it cannot be used to implement -guaranteed delivery. +guaranteed delivery. ### How do I Receive Dead Letters? 
diff --git a/akka-docs/src/main/paradox/persistence-query.md b/akka-docs/src/main/paradox/persistence-query.md index 271d464f2d..231038479f 100644 --- a/akka-docs/src/main/paradox/persistence-query.md +++ b/akka-docs/src/main/paradox/persistence-query.md @@ -197,7 +197,7 @@ Java ## Performance and denormalization -When building systems using @ref:[Event sourcing](typed/persistence.md#event-sourcing-concepts) and CQRS ([Command & Query Responsibility Segregation](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/jj554200%28v=pandp.10%29)) techniques +When building systems using @ref:[Event Sourcing](typed/persistence.md#event-sourcing-concepts) and CQRS ([Command & Query Responsibility Segregation](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/jj554200%28v=pandp.10%29)) techniques it is tremendously important to realise that the write-side has completely different needs from the read-side, and separating those concerns into datastores that are optimised for either side makes it possible to offer the best experience for the write and read sides independently. diff --git a/akka-docs/src/main/paradox/persistence.md b/akka-docs/src/main/paradox/persistence.md index 6c58f2f6dd..7309a61bce 100644 --- a/akka-docs/src/main/paradox/persistence.md +++ b/akka-docs/src/main/paradox/persistence.md @@ -1,5 +1,5 @@ --- -project.description: Akka Persistence Classic, event sourcing with Akka, At-Least-Once delivery, snapshots, recovery and replay with Akka actors. +project.description: Akka Persistence Classic, Event Sourcing with Akka, At-Least-Once delivery, snapshots, recovery and replay with Akka actors. --- # Classic Persistence @@ -44,12 +44,12 @@ Replicated journals are available as [Community plugins](https://akka.io/communi * *Snapshot store*: A snapshot store persists snapshots of a persistent actor's state. Snapshots are used for optimizing recovery times. The storage backend of a snapshot store is pluggable. 
The persistence extension comes with a "local" snapshot storage plugin, which writes to the local filesystem. Replicated snapshot stores are available as [Community plugins](https://akka.io/community/) - * *Event sourcing*. Based on the building blocks described above, Akka persistence provides abstractions for the -development of event sourced applications (see section @ref:[Event sourcing](typed/persistence.md#event-sourcing-concepts)). + * *Event Sourcing*. Based on the building blocks described above, Akka persistence provides abstractions for the +development of event sourced applications (see section @ref:[Event Sourcing](typed/persistence.md#event-sourcing-concepts)). ## Example -Akka persistence supports event sourcing with the @scala[`PersistentActor` trait]@java[`AbstractPersistentActor` abstract class]. An actor that extends this @scala[trait]@java[class] uses the +Akka persistence supports Event Sourcing with the @scala[`PersistentActor` trait]@java[`AbstractPersistentActor` abstract class]. An actor that extends this @scala[trait]@java[class] uses the `persist` method to persist and handle events. The behavior of @scala[a `PersistentActor`]@java[an `AbstractPersistentActor`] is defined by implementing @scala[`receiveRecover`]@java[`createReceiveRecover`] and @scala[`receiveCommand`]@java[`createReceive`]. This is demonstrated in the following example. @@ -453,7 +453,7 @@ timer-based which keeps latencies at a minimum. It is possible to delete all messages (journaled by a single persistent actor) up to a specified sequence number; Persistent actors may call the `deleteMessages` method to this end. -Deleting messages in event sourcing based applications is typically either not used at all, or used in conjunction with +Deleting messages in Event Sourcing based applications is typically either not used at all, or used in conjunction with [snapshotting](#snapshots), i.e. 
after a snapshot has been successfully stored, a `deleteMessages(toSequenceNr)` up until the sequence number of the data held by that snapshot can be issued to safely delete the previous events while still having access to the accumulated state during replays - by loading the snapshot. @@ -750,7 +750,7 @@ configuration key. The method can be overridden by implementation classes to ret ## Event Adapters -In long running projects using event sourcing sometimes the need arises to detach the data model from the domain model +In long-running projects using Event Sourcing, the need sometimes arises to detach the data model from the domain model completely. Event Adapters help in situations where: diff --git a/akka-docs/src/main/paradox/typed/from-classic.md b/akka-docs/src/main/paradox/typed/from-classic.md index b5124d7bae..9e7f7aec3b 100644 --- a/akka-docs/src/main/paradox/typed/from-classic.md +++ b/akka-docs/src/main/paradox/typed/from-classic.md @@ -388,7 +388,7 @@ Links to reference documentation: The correspondence of the classic `PersistentActor` is @scala[`akka.persistence.typed.scaladsl.EventSourcedBehavior`]@java[`akka.persistence.typed.javadsl.EventSourcedBehavior`]. -The Typed API is much more guided to facilitate event sourcing best practises. It also has tighter integration with +The Typed API is much more guided to facilitate Event Sourcing best practices. It also has tighter integration with Cluster Sharding. Links to reference documentation: diff --git a/akka-docs/src/main/paradox/typed/persistence-snapshot.md b/akka-docs/src/main/paradox/typed/persistence-snapshot.md index 227609871c..22dfb1982f 100644 --- a/akka-docs/src/main/paradox/typed/persistence-snapshot.md +++ b/akka-docs/src/main/paradox/typed/persistence-snapshot.md @@ -103,9 +103,9 @@ Java ## Event deletion -Deleting events in event sourcing based applications is typically either not used at all, or used in conjunction with snapshotting.
+Deleting events in Event Sourcing based applications is typically either not used at all, or used in conjunction with snapshotting. By deleting events you will lose the history of how the system changed before it reached current state, which is -one of the main reasons for using event sourcing in the first place. +one of the main reasons for using Event Sourcing in the first place. If snapshot-based retention is enabled, after a snapshot has been successfully stored, a delete of the events (journaled by a single event sourced actor) up until the sequence number of the data held by that snapshot can be issued. diff --git a/akka-docs/src/main/paradox/typed/persistence.md b/akka-docs/src/main/paradox/typed/persistence.md index 3afb7e89e2..e27b772ea0 100644 --- a/akka-docs/src/main/paradox/typed/persistence.md +++ b/akka-docs/src/main/paradox/typed/persistence.md @@ -47,9 +47,9 @@ provides tools to facilitate in building GDPR capable systems. @@@ -### Event sourcing concepts +### Event Sourcing concepts -See an [introduction to EventSourcing](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/jj591559%28v=pandp.10%29) at MSDN. +See an [introduction to Event Sourcing](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/jj591559%28v=pandp.10%29) at MSDN. Another excellent article about "thinking in Events" is [Events As First-Class Citizens](https://hackernoon.com/events-as-first-class-citizens-8633e8479493) by Randy Shoup. It is a short and recommended read if you're starting developing Events based applications. diff --git a/akka-docs/src/main/paradox/typed/replicated-eventsourcing.md b/akka-docs/src/main/paradox/typed/replicated-eventsourcing.md index 3a3e7bd1e3..bb0f1979e5 100644 --- a/akka-docs/src/main/paradox/typed/replicated-eventsourcing.md +++ b/akka-docs/src/main/paradox/typed/replicated-eventsourcing.md @@ -8,7 +8,7 @@ warning or deprecation period. 
It is also not recommended to use this module in @@@ -@ref[Event sourcing](./persistence.md) with `EventSourcedBehavior`s is based on the single writer principle, which means that there can only be one active instance of a `EventSourcedBehavior` +@ref[Event Sourcing](./persistence.md) with `EventSourcedBehavior`s is based on the single writer principle, which means that there can only be one active instance of an `EventSourcedBehavior` with a given `persistenceId`. Otherwise, multiple instances would store interleaving events based on different states, and when these events would later be replayed it would not be possible to reconstruct the correct state. This restriction means that in the event of network partitions, and for a short time during rolling re-deploys, some @@ -421,7 +421,7 @@ For a snapshot plugin to support replication it needs to store and read metadata To attach the metadata when reading the snapshot the `akka.persistence.SnapshotMetadata.apply` factory overload taking a `metadata` parameter is used. The @apidoc[SnapshotStoreSpec] in the Persistence TCK provides a capability flag `supportsMetadata` to toggle verification that metadata is handled correctly.
-The following plugins support replicated event sourcing: +The following plugins support Replicated Event Sourcing: * [Akka Persistence Cassandra](https://doc.akka.io/docs/akka-persistence-cassandra/current/index.html) versions 1.0.3+ * [Akka Persistence Spanner](https://doc.akka.io/docs/akka-persistence-spanner/current/overview.html) versions 1.0.0-RC4+ diff --git a/akka-persistence-typed/src/main/scala/akka/persistence/typed/delivery/EventSourcedProducerQueue.scala b/akka-persistence-typed/src/main/scala/akka/persistence/typed/delivery/EventSourcedProducerQueue.scala index de29bb80f9..db2a901370 100644 --- a/akka-persistence-typed/src/main/scala/akka/persistence/typed/delivery/EventSourcedProducerQueue.scala +++ b/akka-persistence-typed/src/main/scala/akka/persistence/typed/delivery/EventSourcedProducerQueue.scala @@ -26,7 +26,7 @@ import akka.util.JavaDurationConverters._ /** * [[DurableProducerQueue]] that can be used with [[akka.actor.typed.delivery.ProducerController]] - * for reliable delivery of messages. It is implemented with event sourcing and stores one + * for reliable delivery of messages. It is implemented with Event Sourcing and stores one * event before sending the message to the destination and one event for the confirmation * that the message has been delivered and processed. * diff --git a/akka-persistence/src/main/scala/akka/persistence/PersistentActor.scala b/akka-persistence/src/main/scala/akka/persistence/PersistentActor.scala index 474b2a9641..8b4d26bc11 100644 --- a/akka-persistence/src/main/scala/akka/persistence/PersistentActor.scala +++ b/akka-persistence/src/main/scala/akka/persistence/PersistentActor.scala @@ -160,7 +160,7 @@ final class DiscardConfigurator extends StashOverflowStrategyConfigurator { } /** - * Scala API: A persistent Actor - can be used to implement command or event sourcing. + * Scala API: A persistent Actor - can be used to implement command or Event Sourcing. 
*/ trait PersistentActor extends Eventsourced with PersistenceIdentity { def receive = receiveCommand @@ -290,7 +290,7 @@ trait PersistentActor extends Eventsourced with PersistenceIdentity { } /** - * Java API: an persistent actor - can be used to implement command or event sourcing. + * Java API: a persistent actor - can be used to implement command or Event Sourcing. */ abstract class AbstractPersistentActor extends AbstractActor with AbstractPersistentActorLike {