Rename sbt akka modules

Co-authored-by: Sean Glover <sean@seanglover.com>
Matthew de Detrich 2023-01-05 11:10:50 +01:00 committed by GitHub
parent b92b749946
commit 24c03cde19
2930 changed files with 1466 additions and 1462 deletions
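For a consuming build, this rename amounts to swapping the `akka-*` artifact names for their `pekko-*` counterparts under the `org.apache.pekko` group (internal sbt `projectId`s drop the prefix entirely, e.g. `akka-actor` becomes `actor`). A minimal `build.sbt` sketch of the before/after — the version value here is illustrative, not taken from this commit:

```scala
// build.sbt -- sketch only; PekkoVersion is an illustrative placeholder,
// substitute the actual released Pekko version.
val PekkoVersion = "1.0.0"

libraryDependencies ++= Seq(
  // before this commit the modules still carried the inherited akka-* names:
  //   "org.apache.pekko" %% "akka-actor"   % PekkoVersion
  //   "org.apache.pekko" %% "akka-testkit" % PekkoVersion % Test
  // after the rename they follow the pekko-* naming:
  "org.apache.pekko" %% "pekko-actor"   % PekkoVersion,
  "org.apache.pekko" %% "pekko-testkit" % PekkoVersion % Test
)
```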


@@ -7,19 +7,19 @@
To use Classic Actors, add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
-bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group="org.apache.pekko"
-artifact="akka-actor_$scala.binary.version$"
+artifact="pekko-actor_$scala.binary.version$"
version=PekkoVersion
group2="org.apache.pekko"
-artifact2="akka-testkit_$scala.binary.version$"
+artifact2="pekko-testkit_$scala.binary.version$"
scope2=test
version2=PekkoVersion
}
-@@project-info{ projectId="akka-actor" }
+@@project-info{ projectId="actor" }
## Introduction
@@ -351,7 +351,7 @@ The remaining visible methods are user-overridable life-cycle hooks which are
described in the following:
Scala
-: @@snip [Actor.scala](/akka-actor/src/main/scala/org/apache/pekko/actor/Actor.scala) { #lifecycle-hooks }
+: @@snip [Actor.scala](/actor/src/main/scala/org/apache/pekko/actor/Actor.scala) { #lifecycle-hooks }
Java
: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #lifecycle-callbacks }
@@ -785,7 +785,7 @@ An Actor has to
@java[define its initial receive behavior by implementing the @javadoc[createReceive](pekko.actor.AbstractActor#createReceive()) method in the `AbstractActor`:]
Scala
-: @@snip [Actor.scala](/akka-actor/src/main/scala/org/apache/pekko/actor/Actor.scala) { #receive }
+: @@snip [Actor.scala](/actor/src/main/scala/org/apache/pekko/actor/Actor.scala) { #receive }
Java
: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #createReceive }


@@ -109,7 +109,7 @@ dynamic in this way. ActorRefs may safely be exposed to other bundles.
To bootstrap Akka inside an OSGi environment, you can use the @apidoc[osgi.ActorSystemActivator](osgi.ActorSystemActivator) class
to conveniently set up the @apidoc[ActorSystem](actor.ActorSystem).
-@@snip [Activator.scala](/akka-osgi/src/test/scala/docs/osgi/Activator.scala) { #Activator }
+@@snip [Activator.scala](/osgi/src/test/scala/docs/osgi/Activator.scala) { #Activator }
The goal here is to map the OSGi lifecycle more directly to the Akka lifecycle. The @apidoc[ActorSystemActivator](osgi.ActorSystemActivator) creates
the actor system with a class loader that finds resources (`application.conf` and `reference.conf` files) and classes


@@ -14,15 +14,15 @@ It is not advised to build new applications with Cluster Client, and existing us
To use Cluster Client, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
-bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group=org.apache.pekko
-artifact=akka-cluster-tools_$scala.binary.version$
+artifact=pekko-cluster-tools_$scala.binary.version$
version=PekkoVersion
}
-@@project-info{ projectId="akka-cluster-tools" }
+@@project-info{ projectId="cluster-tools" }
## Introduction
@@ -123,28 +123,28 @@ pekko.extensions = ["org.apache.pekko.cluster.client.ClusterClientReceptionist"]
Next, register the actors that should be available for the client.
Scala
-: @@snip [ClusterClientSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/client/ClusterClientSpec.scala) { #server }
+: @@snip [ClusterClientSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/client/ClusterClientSpec.scala) { #server }
Java
-: @@snip [ClusterClientTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/client/ClusterClientTest.java) { #server }
+: @@snip [ClusterClientTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/client/ClusterClientTest.java) { #server }
On the client, you create the @apidoc[ClusterClient] actor and use it as a gateway for sending
messages to the actors identified by their path (without address information) somewhere
in the cluster.
Scala
-: @@snip [ClusterClientSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/client/ClusterClientSpec.scala) { #client }
+: @@snip [ClusterClientSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/client/ClusterClientSpec.scala) { #client }
Java
-: @@snip [ClusterClientTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/client/ClusterClientTest.java) { #client }
+: @@snip [ClusterClientTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/client/ClusterClientTest.java) { #client }
The `initialContacts` parameter is a @scala[`Set[ActorPath]`]@java[`Set<ActorPath>`], which can be created like this:
Scala
-: @@snip [ClusterClientSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/client/ClusterClientSpec.scala) { #initialContacts }
+: @@snip [ClusterClientSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/client/ClusterClientSpec.scala) { #initialContacts }
Java
-: @@snip [ClusterClientTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/client/ClusterClientTest.java) { #initialContacts }
+: @@snip [ClusterClientTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/client/ClusterClientTest.java) { #initialContacts }
You will probably define the address information of the initial contact points in configuration or system property.
See also @ref:[Configuration](#cluster-client-config).
@@ -178,18 +178,18 @@ receptionists), as they become available. The code illustrates subscribing to th
initial state.
Scala
-: @@snip [ClusterClientSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/client/ClusterClientSpec.scala) { #clientEventsListener }
+: @@snip [ClusterClientSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/client/ClusterClientSpec.scala) { #clientEventsListener }
Java
-: @@snip [ClusterClientTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/client/ClusterClientTest.java) { #clientEventsListener }
+: @@snip [ClusterClientTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/client/ClusterClientTest.java) { #clientEventsListener }
Similarly we can have an actor that behaves in a similar fashion for learning what cluster clients are connected to a @apidoc[ClusterClientReceptionist]:
Scala
-: @@snip [ClusterClientSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/client/ClusterClientSpec.scala) { #receptionistEventsListener }
+: @@snip [ClusterClientSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/client/ClusterClientSpec.scala) { #receptionistEventsListener }
Java
-: @@snip [ClusterClientTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/client/ClusterClientTest.java) { #receptionistEventsListener }
+: @@snip [ClusterClientTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/client/ClusterClientTest.java) { #receptionistEventsListener }
<a id="cluster-client-config"></a>
## Configuration
@@ -197,7 +197,7 @@ Java
The @apidoc[ClusterClientReceptionist] extension (or @apidoc[cluster.client.ClusterReceptionistSettings]) can be configured
with the following properties:
-@@snip [reference.conf](/akka-cluster-tools/src/main/resources/reference.conf) { #receptionist-ext-config }
+@@snip [reference.conf](/cluster-tools/src/main/resources/reference.conf) { #receptionist-ext-config }
The following configuration properties are read by the @apidoc[ClusterClientSettings]
when created with a @scala[@scaladoc[`ActorSystem`](pekko.actor.ActorSystem)]@java[@javadoc[`ActorSystem`](pekko.actor.ActorSystem)] parameter. It is also possible to amend the @apidoc[ClusterClientSettings]
@@ -205,7 +205,7 @@ or create it from another config section with the same layout as below. @apidoc[
a parameter to the @scala[@scaladoc[`ClusterClient.props`](pekko.cluster.client.ClusterClient$)]@java[@javadoc[`ClusterClient.props`](pekko.cluster.client.ClusterClient$)] factory method, i.e. each client can be configured
with different settings if needed.
-@@snip [reference.conf](/akka-cluster-tools/src/main/resources/reference.conf) { #cluster-client-config }
+@@snip [reference.conf](/cluster-tools/src/main/resources/reference.conf) { #cluster-client-config }
## Failure handling


@@ -22,10 +22,10 @@ For the full documentation of this feature and for new projects see @ref:[Multi-
This is how to create a singleton proxy for a specific data center:
Scala
-: @@snip [ClusterSingletonManagerSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/singleton/ClusterSingletonManagerSpec.scala) { #create-singleton-proxy-dc }
+: @@snip [ClusterSingletonManagerSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/singleton/ClusterSingletonManagerSpec.scala) { #create-singleton-proxy-dc }
Java
-: @@snip [ClusterSingletonManagerTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/singleton/ClusterSingletonManagerTest.java) { #create-singleton-proxy-dc }
+: @@snip [ClusterSingletonManagerTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/singleton/ClusterSingletonManagerTest.java) { #create-singleton-proxy-dc }
If using the own data center as the `withDataCenter` parameter that would be a proxy for the singleton in the own data center, which
is also the default if `withDataCenter` is not given.
@@ -37,7 +37,7 @@ For the full documentation of this feature and for new projects see @ref:[Multi-
This is how to create a sharding proxy for a specific data center:
Scala
-: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #proxy-dc }
+: @@snip [ClusterShardingSpec.scala](/cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #proxy-dc }
Java
: @@snip [ClusterShardingTest.java](/docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #proxy-dc }


@@ -5,11 +5,11 @@
To use Cluster Metrics Extension, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
-bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group=org.apache.pekko
-artifact=akka-cluster-metrics_$scala.binary.version$
+artifact=pekko-cluster-metrics_$scala.binary.version$
version=PekkoVersion
}
@@ -20,7 +20,7 @@ and add the following configuration stanza to your `application.conf`
pekko.extensions = [ "pekko.cluster.metrics.ClusterMetricsExtension" ]
```
-@@project-info{ projectId="akka-cluster-metrics" }
+@@project-info{ projectId="cluster-metrics" }
## Introduction
@@ -208,4 +208,4 @@ Custom metrics collector implementation class must be specified in the
The Cluster metrics extension can be configured with the following properties:
-@@snip [reference.conf](/akka-cluster-metrics/src/main/resources/reference.conf)
+@@snip [reference.conf](/cluster-metrics/src/main/resources/reference.conf)


@@ -78,7 +78,7 @@ Set it to a lower value if you want to limit total number of routees.
The same type of router could also have been defined in code:
Scala
-: @@snip [StatsService.scala](/akka-cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsService.scala) { #router-lookup-in-code }
+: @@snip [StatsService.scala](/cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsService.scala) { #router-lookup-in-code }
Java
: @@snip [StatsService.java](/docs/src/test/java/jdocs/cluster/StatsService.java) { #router-lookup-in-code }
@@ -99,7 +99,7 @@ the average number of characters per word when all results have been collected.
Messages:
Scala
-: @@snip [StatsMessages.scala](/akka-cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsMessages.scala) { #messages }
+: @@snip [StatsMessages.scala](/cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsMessages.scala) { #messages }
Java
: @@snip [StatsMessages.java](/docs/src/test/java/jdocs/cluster/StatsMessages.java) { #messages }
@@ -107,7 +107,7 @@ Java
The worker that counts number of characters in each word:
Scala
-: @@snip [StatsWorker.scala](/akka-cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsWorker.scala) { #worker }
+: @@snip [StatsWorker.scala](/cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsWorker.scala) { #worker }
Java
: @@snip [StatsWorker.java](/docs/src/test/java/jdocs/cluster/StatsWorker.java) { #worker }
@@ -116,7 +116,7 @@ The service that receives text from users and splits it up into words, delegates
@@@ div { .group-scala }
-@@snip [StatsService.scala](/akka-cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsService.scala) { #service }
+@@snip [StatsService.scala](/cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsService.scala) { #service }
@@@
@@ -179,7 +179,7 @@ Set it to a lower value if you want to limit total number of routees.
The same type of router could also have been defined in code:
Scala
-: @@snip [StatsService.scala](/akka-cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsService.scala) { #router-deploy-in-code }
+: @@snip [StatsService.scala](/cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsService.scala) { #router-deploy-in-code }
Java
: @@snip [StatsService.java](/docs/src/test/java/jdocs/cluster/StatsService.java) { #router-deploy-in-code }


@@ -8,15 +8,15 @@ For the full documentation of this feature and for new projects see @ref:[Cluste
To use Cluster Sharding, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
-bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group=org.apache.pekko
-artifact=akka-cluster-sharding_$scala.binary.version$
+artifact=pekko-cluster-sharding_$scala.binary.version$
version=PekkoVersion
}
-@@project-info{ projectId="akka-cluster-sharding" }
+@@project-info{ projectId="cluster-sharding" }
## Introduction
@@ -27,7 +27,7 @@ For an introduction to Sharding concepts see @ref:[Cluster Sharding](typed/clust
This is what an entity actor may look like:
Scala
-: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #counter-actor }
+: @@snip [ClusterShardingSpec.scala](/cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #counter-actor }
Java
: @@snip [ClusterShardingTest.java](/docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-actor }
@@ -47,7 +47,7 @@ when there is no match between the roles of the current cluster node and the rol
`ClusterShardingSettings`.
Scala
-: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #counter-start }
+: @@snip [ClusterShardingSpec.scala](/cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #counter-start }
Java
: @@snip [ClusterShardingTest.java](/docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-start }
@@ -56,7 +56,7 @@ The @scala[`extractEntityId` and `extractShardId` are two] @java[`messageExtract
identifier and the shard identifier from incoming messages.
Scala
-: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #counter-extractor }
+: @@snip [ClusterShardingSpec.scala](/cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #counter-extractor }
Java
: @@snip [ClusterShardingTest.java](/docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-extractor }
@@ -94,7 +94,7 @@ delegate the message to the right node and it will create the entity actor on de
first message for a specific entity is delivered.
Scala
-: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #counter-usage }
+: @@snip [ClusterShardingSpec.scala](/cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #counter-usage }
Java
: @@snip [ClusterShardingTest.java](/docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-usage }
@@ -172,7 +172,7 @@ the `rememberEntities` flag to true in `ClusterShardingSettings` when calling
extract from the `EntityId`.
Scala
-: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #extractShardId-StartEntity }
+: @@snip [ClusterShardingSpec.scala](/cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #extractShardId-StartEntity }
Java
: @@snip [ClusterShardingTest.java](/docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #extractShardId-StartEntity }
@@ -184,7 +184,7 @@ you need to create an intermediate parent actor that defines the `supervisorStra
child entity actor.
Scala
-: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #supervisor }
+: @@snip [ClusterShardingSpec.scala](/cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #supervisor }
Java
: @@snip [ClusterShardingTest.java](/docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #supervisor }
@@ -192,7 +192,7 @@ Java
You start such a supervisor in the same way as if it was the entity actor.
Scala
-: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #counter-supervisor-start }
+: @@snip [ClusterShardingSpec.scala](/cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #counter-supervisor-start }
Java
: @@snip [ClusterShardingTest.java](/docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-supervisor-start }


@@ -8,15 +8,15 @@ For the full documentation of this feature and for new projects see @ref:[Cluste
To use Cluster Singleton, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
-bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group=org.apache.pekko
-artifact=akka-cluster-tools_$scala.binary.version$
+artifact=pekko-cluster-tools_$scala.binary.version$
version=PekkoVersion
}
-@@project-info{ projectId="akka-cluster-tools" }
+@@project-info{ projectId="cluster-tools" }
## Introduction
@@ -55,19 +55,19 @@ Before explaining how to create a cluster singleton actor, let's define message
which will be used by the singleton.
Scala
-: @@snip [ClusterSingletonManagerSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/singleton/ClusterSingletonManagerSpec.scala) { #singleton-message-classes }
+: @@snip [ClusterSingletonManagerSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/singleton/ClusterSingletonManagerSpec.scala) { #singleton-message-classes }
Java
-: @@snip [ClusterSingletonManagerTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/singleton/TestSingletonMessages.java) { #singleton-message-classes }
+: @@snip [ClusterSingletonManagerTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/singleton/TestSingletonMessages.java) { #singleton-message-classes }
On each node in the cluster you need to start the `ClusterSingletonManager` and
supply the `Props` of the singleton actor, in this case the JMS queue consumer.
Scala
-: @@snip [ClusterSingletonManagerSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/singleton/ClusterSingletonManagerSpec.scala) { #create-singleton-manager }
+: @@snip [ClusterSingletonManagerSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/singleton/ClusterSingletonManagerSpec.scala) { #create-singleton-manager }
Java
-: @@snip [ClusterSingletonManagerTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/singleton/ClusterSingletonManagerTest.java) { #create-singleton-manager }
+: @@snip [ClusterSingletonManagerTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/singleton/ClusterSingletonManagerTest.java) { #create-singleton-manager }
Here we limit the singleton to nodes tagged with the `"worker"` role, but all nodes, independent of
role, can be used by not specifying `withRole`.
@@ -79,19 +79,19 @@ perfectly fine `terminationMessage` if you only need to stop the actor.
Here is how the singleton actor handles the `terminationMessage` in this example.
Scala
-: @@snip [ClusterSingletonManagerSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/singleton/ClusterSingletonManagerSpec.scala) { #consumer-end }
+: @@snip [ClusterSingletonManagerSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/singleton/ClusterSingletonManagerSpec.scala) { #consumer-end }
Java
-: @@snip [ClusterSingletonManagerTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/singleton/Consumer.java) { #consumer-end }
+: @@snip [ClusterSingletonManagerTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/singleton/Consumer.java) { #consumer-end }
With the names given above, access to the singleton can be obtained from any cluster node using a properly
configured proxy.
Scala
-: @@snip [ClusterSingletonManagerSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/singleton/ClusterSingletonManagerSpec.scala) { #create-singleton-proxy }
+: @@snip [ClusterSingletonManagerSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/singleton/ClusterSingletonManagerSpec.scala) { #create-singleton-proxy }
Java
-: @@snip [ClusterSingletonManagerTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/singleton/ClusterSingletonManagerTest.java) { #create-singleton-proxy }
+: @@snip [ClusterSingletonManagerTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/singleton/ClusterSingletonManagerTest.java) { #create-singleton-proxy }
A more comprehensive sample is available in the tutorial named
@scala[[Distributed workers with Akka and Scala!](https://github.com/typesafehub/activator-akka-distributed-workers)]@java[[Distributed workers with Akka and Java!](https://github.com/typesafehub/activator-akka-distributed-workers-java)].


@@ -24,15 +24,15 @@ recommendation if you don't have other preferences or constraints.
To use Akka Cluster add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
-bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
+bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group="org.apache.pekko"
-artifact="akka-cluster_$scala.binary.version$"
+artifact="pekko-cluster_$scala.binary.version$"
version=PekkoVersion
}
-@@project-info{ projectId="akka-cluster" }
+@@project-info{ projectId="cluster" }
## When and where to use Akka Cluster
@@ -53,7 +53,7 @@ Java
And the minimum configuration required is to set a host/port for remoting and the `pekko.actor.provider = "cluster"`.
-@@snip [BasicClusterExampleSpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #config-seeds }
+@@snip [BasicClusterExampleSpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #config-seeds }
The actor registers itself as subscriber of certain cluster events. It receives events corresponding to the current state
of the cluster when the subscription starts and then it receives events for changes that happen in the cluster.
@@ -321,12 +321,12 @@ add the `sbt-multi-jvm` plugin and the dependency to `akka-multi-node-testkit`.
First, as described in @ref:[Multi Node Testing](multi-node-testing.md), we need some scaffolding to configure the @scaladoc[MultiNodeSpec](pekko.remote.testkit.MultiNodeSpec).
Define the participating @ref:[roles](typed/cluster.md#node-roles) and their @ref:[configuration](#configuration) in an object extending @scaladoc[MultiNodeConfig](pekko.remote.testkit.MultiNodeConfig):
-@@snip [StatsSampleSpec.scala](/akka-cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsSampleSpec.scala) { #MultiNodeConfig }
+@@snip [StatsSampleSpec.scala](/cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsSampleSpec.scala) { #MultiNodeConfig }
Define one concrete test class for each role/node. These will be instantiated on the different nodes (JVMs). They can be
implemented differently, but often they are the same and extend an abstract test class, as illustrated here.
-@@snip [StatsSampleSpec.scala](/akka-cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsSampleSpec.scala) { #concrete-tests }
+@@snip [StatsSampleSpec.scala](/cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsSampleSpec.scala) { #concrete-tests }
Note the naming convention of these classes. The name of the classes must end with `MultiJvmNode1`, `MultiJvmNode2`
and so on. It is possible to define another suffix to be used by the `sbt-multi-jvm`, but the default should be
@@ -334,18 +334,18 @@ fine in most cases.
Then the abstract `MultiNodeSpec`, which takes the `MultiNodeConfig` as constructor parameter.
-@@snip [StatsSampleSpec.scala](/akka-cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsSampleSpec.scala) { #abstract-test }
+@@snip [StatsSampleSpec.scala](/cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsSampleSpec.scala) { #abstract-test }
Most of this can be extracted to a separate trait to avoid repeating this in all your tests.
Typically you begin your test by starting up the cluster and let the members join, and create some actors.
That can be done like this:
-@@snip [StatsSampleSpec.scala](/akka-cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsSampleSpec.scala) { #startup-cluster }
+@@snip [StatsSampleSpec.scala](/cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsSampleSpec.scala) { #startup-cluster }
From the test you interact with the cluster using the `Cluster` extension, e.g. @scaladoc[join](pekko.cluster.Cluster#join(address:org.apache.pekko.actor.Address):Unit).
-@@snip [StatsSampleSpec.scala](/akka-cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsSampleSpec.scala) { #join }
+@@snip [StatsSampleSpec.scala](/cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsSampleSpec.scala) { #join }
Notice how the *testActor* from @ref:[testkit](testing.md) is added as @ref:[subscriber](#cluster-subscriber)
to cluster changes and then waiting for certain events, such as in this case all members becoming 'Up'.
@@ -353,7 +353,7 @@ to cluster changes and then waiting for certain events, such as in this case all
The above code was running for all roles (JVMs). @scaladoc[runOn](pekko.remote.testkit.MultiNodeSpec#runOn(nodes:org.apache.pekko.remote.testconductor.RoleName*)(thunk:=%3EUnit):Unit) is a convenient utility to declare that a certain block
of code should only run for a specific role.
-@@snip [StatsSampleSpec.scala](/akka-cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsSampleSpec.scala) { #test-statsService }
+@@snip [StatsSampleSpec.scala](/cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsSampleSpec.scala) { #test-statsService }
Once again we take advantage of the facilities in @ref:[testkit](testing.md) to verify expected behavior.
Here using `testActor` as sender (via @scaladoc[ImplicitSender](pekko.testkit.ImplicitSender)) and verifying the reply with @scaladoc[expectMsgType](org.apache.pekko.testkit.TestKit#expectMsgType[T](max:scala.concurrent.duration.FiniteDuration)(implicitt:scala.reflect.ClassTag[T]):T).
@@ -361,7 +361,7 @@ Here using `testActor` as sender (via @scaladoc[ImplicitSender](pekko.testkit.Im
In the above code you can see `node(third)`, which is useful facility to get the root actor reference of
the actor system for a specific role. This can also be used to grab the @scaladoc[actor.Address](pekko.actor.Address) of that node.
-@@snip [StatsSampleSpec.scala](/akka-cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsSampleSpec.scala) { #addresses }
+@@snip [StatsSampleSpec.scala](/cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsSampleSpec.scala) { #addresses }
@@@


@@ -93,10 +93,10 @@ If you accidentally mix Akka versions, for example through transitive
dependencies, you might get a warning at run time such as:
```
-You are using version 2.6.6 of Akka, but it appears you (perhaps indirectly) also depend on older versions
+You are using version 2.6.6 of Pekko, but it appears you (perhaps indirectly) also depend on older versions
of related artifacts. You can solve this by adding an explicit dependency on version 2.6.6 of the
-[akka-persistence-query] artifacts to your project. Here's a complete collection of detected
-artifacts: (2.5.3, [akka-persistence-query]), (2.6.6, [akka-actor, akka-cluster]).
+[pekko-persistence-query] artifacts to your project. Here's a complete collection of detected
+artifacts: (2.5.3, [pekko-persistence-query]), (2.6.6, [pekko-actor, pekko-cluster]).
See also: https://doc.akka.io/docs/akka/current/common/binary-compatibility-rules.html#mixed-versioning-is-not-allowed
```


@@ -18,7 +18,7 @@ Most relevant default phases
| before-actor-system-terminate | Phase for custom application tasks that are to be run after cluster shutdown and before `ActorSystem` termination. |
reference.conf (HOCON)
-: @@snip [reference.conf](/akka-actor/src/main/resources/reference.conf) { #coordinated-shutdown-phases }
+: @@snip [reference.conf](/actor/src/main/resources/reference.conf) { #coordinated-shutdown-phases }
More phases can be added in the application's `application.conf` if needed by overriding a phase with an
additional `depends-on`.


@@ -12,11 +12,11 @@ Akka Coordination is a set of tools for distributed coordination.
symbol1=PekkoVersion
value1="$pekko.version$"
group="org.apache.pekko"
-artifact="akka-coordination_$scala.binary.version$"
+artifact="pekko-coordination_$scala.binary.version$"
version=PekkoVersion
}
-@@project-info{ projectId="akka-coordination" }
+@@project-info{ projectId="coordination" }
## Lease
@@ -107,7 +107,7 @@ The configuration must define the `lease-class` property for the FQCN of the lea
The lease implementation should have support for the following properties where the defaults come from `pekko.coordination.lease`:
-@@snip [reference.conf](/akka-coordination/src/main/resources/reference.conf) { #defaults }
+@@snip [reference.conf](/coordination/src/main/resources/reference.conf) { #defaults }
This configuration location is passed into `getLease`.

View file

@ -34,43 +34,43 @@ See @ref:[Migration hints](#migrating-from-akka-management-discovery-before-1-0-
## Module info
@@dependency[sbt,Gradle,Maven] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group="org.apache.pekko"
artifact="akka-discovery_$scala.binary.version$"
artifact="pekko-discovery_$scala.binary.version$"
version=PekkoVersion
}
@@project-info{ projectId="akka-discovery" }
@@project-info{ projectId="discovery" }
## How it works
Loading the extension:
Scala
: @@snip [CompileOnlySpec.scala](/akka-discovery/src/test/scala/doc/org/apache/pekko/discovery/CompileOnlySpec.scala) { #loading }
: @@snip [CompileOnlySpec.scala](/discovery/src/test/scala/doc/org/apache/pekko/discovery/CompileOnlySpec.scala) { #loading }
Java
: @@snip [CompileOnlyTest.java](/akka-discovery/src/test/java/jdoc/org/apache/pekko/discovery/CompileOnlyTest.java) { #loading }
: @@snip [CompileOnlyTest.java](/discovery/src/test/java/jdoc/org/apache/pekko/discovery/CompileOnlyTest.java) { #loading }
A `Lookup` contains a mandatory `serviceName` and an optional `portName` and `protocol`. How these are interpreted is discovery
method dependent, e.g. DNS does an A/AAAA record query if any of the fields are missing, and an SRV query for a full lookup:
Scala
: @@snip [CompileOnlySpec.scala](/akka-discovery/src/test/scala/doc/org/apache/pekko/discovery/CompileOnlySpec.scala) { #basic }
: @@snip [CompileOnlySpec.scala](/discovery/src/test/scala/doc/org/apache/pekko/discovery/CompileOnlySpec.scala) { #basic }
Java
: @@snip [CompileOnlyTest.java](/akka-discovery/src/test/java/jdoc/org/apache/pekko/discovery/CompileOnlyTest.java) { #basic }
: @@snip [CompileOnlyTest.java](/discovery/src/test/java/jdoc/org/apache/pekko/discovery/CompileOnlyTest.java) { #basic }
`portName` and `protocol` are optional and their meaning is interpreted by the method.
Scala
: @@snip [CompileOnlySpec.scala](/akka-discovery/src/test/scala/doc/org/apache/pekko/discovery/CompileOnlySpec.scala) { #full }
: @@snip [CompileOnlySpec.scala](/discovery/src/test/scala/doc/org/apache/pekko/discovery/CompileOnlySpec.scala) { #full }
Java
: @@snip [CompileOnlyTest.java](/akka-discovery/src/test/java/jdoc/org/apache/pekko/discovery/CompileOnlyTest.java) { #full }
: @@snip [CompileOnlyTest.java](/discovery/src/test/java/jdoc/org/apache/pekko/discovery/CompileOnlyTest.java) { #full }
Port can be used when a service opens multiple ports, e.g. an HTTP port and an Akka remoting port.
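The full lookup string follows the DNS SRV naming convention from RFC 2782, `_portName._protocol.serviceName`. Purely as an illustration of that convention (this is not the Pekko `Lookup` API), a minimal parser might look like:

```java
// Sketch of the RFC 2782 SRV naming convention used by full lookups:
// "_portName._protocol.serviceName". Illustrative only, not the Pekko API.
public final class SrvName {
    public final String serviceName;
    public final String portName;
    public final String protocol;

    public SrvName(String serviceName, String portName, String protocol) {
        this.serviceName = serviceName;
        this.portName = portName;
        this.protocol = protocol;
    }

    /** Parses "_port._proto.service"; throws if the shape does not match. */
    public static SrvName parse(String srv) {
        // limit 3 keeps any dots inside the service name intact
        String[] parts = srv.split("\\.", 3);
        if (parts.length != 3 || !parts[0].startsWith("_") || !parts[1].startsWith("_")) {
            throw new IllegalArgumentException("not an SRV name: " + srv);
        }
        return new SrvName(parts[2], parts[0].substring(1), parts[1].substring(1));
    }
}
```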

View file

@ -280,4 +280,4 @@ paper by Mark Shapiro et. al.
The @apidoc[cluster.ddata.DistributedData] extension can be configured with the following properties:
@@snip [reference.conf](/akka-distributed-data/src/main/resources/reference.conf) { #distributed-data }
@@snip [reference.conf](/distributed-data/src/main/resources/reference.conf) { #distributed-data }

View file

@ -8,15 +8,15 @@ For the new API see @ref[Distributed Publish Subscribe in Cluster](./typed/distr
To use Distributed Publish Subscribe you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group="org.apache.pekko"
artifact="akka-cluster-tools_$scala.binary.version$"
artifact="pekko-cluster-tools_$scala.binary.version$"
version=PekkoVersion
}
@@project-info{ projectId="akka-cluster-tools" }
@@project-info{ projectId="cluster-tools" }
## Introduction
@ -84,35 +84,35 @@ can explicitly remove entries with `DistributedPubSubMediator.Unsubscribe`.
An example of a subscriber actor:
Scala
: @@snip [DistributedPubSubMediatorSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #subscriber }
: @@snip [DistributedPubSubMediatorSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #subscriber }
Java
: @@snip [DistributedPubSubMediatorTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorTest.java) { #subscriber }
: @@snip [DistributedPubSubMediatorTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorTest.java) { #subscriber }
Subscriber actors can be started on several nodes in the cluster, and all will receive
messages published to the "content" topic.
Scala
: @@snip [DistributedPubSubMediatorSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #start-subscribers }
: @@snip [DistributedPubSubMediatorSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #start-subscribers }
Java
: @@snip [DistributedPubSubMediatorTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorTest.java) { #start-subscribers }
: @@snip [DistributedPubSubMediatorTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorTest.java) { #start-subscribers }
A simple actor that publishes to this "content" topic:
Scala
: @@snip [DistributedPubSubMediatorSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #publisher }
: @@snip [DistributedPubSubMediatorSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #publisher }
Java
: @@snip [DistributedPubSubMediatorTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorTest.java) { #publisher }
: @@snip [DistributedPubSubMediatorTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorTest.java) { #publisher }
It can publish messages to the topic from anywhere in the cluster:
Scala
: @@snip [DistributedPubSubMediatorSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #publish-message }
: @@snip [DistributedPubSubMediatorSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #publish-message }
Java
: @@snip [DistributedPubSubMediatorTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorTest.java) { #publish-message }
: @@snip [DistributedPubSubMediatorTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorTest.java) { #publish-message }
### Topic Groups
@ -169,35 +169,35 @@ can explicitly remove entries with @apidoc[DistributedPubSubMediator.Remove].
An example of a destination actor:
Scala
: @@snip [DistributedPubSubMediatorSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #send-destination }
: @@snip [DistributedPubSubMediatorSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #send-destination }
Java
: @@snip [DistributedPubSubMediatorTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorTest.java) { #send-destination }
: @@snip [DistributedPubSubMediatorTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorTest.java) { #send-destination }
Destination actors can be started on several nodes in the cluster, and all will receive
messages sent to the path (without address information).
Scala
: @@snip [DistributedPubSubMediatorSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #start-send-destinations }
: @@snip [DistributedPubSubMediatorSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #start-send-destinations }
Java
: @@snip [DistributedPubSubMediatorTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorTest.java) { #start-send-destinations }
: @@snip [DistributedPubSubMediatorTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorTest.java) { #start-send-destinations }
A simple actor that sends to the path:
Scala
: @@snip [DistributedPubSubMediatorSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #sender }
: @@snip [DistributedPubSubMediatorSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #sender }
Java
: @@snip [DistributedPubSubMediatorTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorTest.java) { #sender }
: @@snip [DistributedPubSubMediatorTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorTest.java) { #sender }
It can send messages to the path from anywhere in the cluster:
Scala
: @@snip [DistributedPubSubMediatorSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #send-message }
: @@snip [DistributedPubSubMediatorSpec.scala](/cluster-tools/src/multi-jvm/scala/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #send-message }
Java
: @@snip [DistributedPubSubMediatorTest.java](/akka-cluster-tools/src/test/java/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorTest.java) { #send-message }
: @@snip [DistributedPubSubMediatorTest.java](/cluster-tools/src/test/java/org/apache/pekko/cluster/pubsub/DistributedPubSubMediatorTest.java) { #send-message }
It is also possible to broadcast messages to the actors that have been registered with
@apidoc[DistributedPubSubMediator.Put]. Send a @apidoc[DistributedPubSubMediator.SendToAll] message to the local mediator and the wrapped message
@ -221,7 +221,7 @@ want to use different cluster roles for different mediators.
The `DistributedPubSub` extension can be configured with the following properties:
@@snip [reference.conf](/akka-cluster-tools/src/main/resources/reference.conf) { #pub-sub-ext-config }
@@snip [reference.conf](/cluster-tools/src/main/resources/reference.conf) { #pub-sub-ext-config }
It is recommended to load the extension when the actor system is started by defining it in
`pekko.extensions` configuration property. Otherwise it will be activated when first used
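A sketch of that configuration entry (the fully qualified class name is an assumption based on the `org.apache.pekko.cluster.pubsub` package used in the snippets above):

```hocon
# application.conf — hypothetical sketch: start the mediator with the
# actor system instead of lazily on first use.
pekko.extensions = ["org.apache.pekko.cluster.pubsub.DistributedPubSub"]
```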

View file

@ -40,9 +40,9 @@ At present the query is based on _tags_. So if you have not tagged your objects,
The example below shows how to get the `DurableStateStoreQuery` from the `DurableStateStoreRegistry` extension.
Scala
: @@snip [DurableStateStoreQueryUsageCompileOnlySpec.scala](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/DurableStateStoreQueryUsageCompileOnlySpec.scala) { #get-durable-state-store-query-example }
: @@snip [DurableStateStoreQueryUsageCompileOnlySpec.scala](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/DurableStateStoreQueryUsageCompileOnlySpec.scala) { #get-durable-state-store-query-example }
Java
: @@snip [DurableStateStoreQueryUsageCompileOnlyTest.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/DurableStateStoreQueryUsageCompileOnlyTest.java) { #get-durable-state-store-query-example }
: @@snip [DurableStateStoreQueryUsageCompileOnlyTest.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/DurableStateStoreQueryUsageCompileOnlyTest.java) { #get-durable-state-store-query-example }
The @apidoc[DurableStateChange] elements can be `UpdatedDurableState` or `DeletedDurableState`.

View file

@ -5,7 +5,7 @@ Originally conceived as a way to send messages to groups of actors, the
implementing a simple interface:
Scala
: @@snip [EventBus.scala](/akka-actor/src/main/scala/org/apache/pekko/event/EventBus.scala) { #event-bus-api }
: @@snip [EventBus.scala](/actor/src/main/scala/org/apache/pekko/event/EventBus.scala) { #event-bus-api }
Java
: @@snip [EventBusDocTest.java](/docs/src/test/java/jdocs/event/EventBusDocTest.java) { #event-bus-api }

View file

@ -15,105 +15,105 @@ nondeterministic when loading the configuration.
<a id="config-akka-actor"></a>
### akka-actor
@@snip [reference.conf](/akka-actor/src/main/resources/reference.conf)
@@snip [reference.conf](/actor/src/main/resources/reference.conf)
<a id="config-akka-actor-typed"></a>
### akka-actor-typed
@@snip [reference.conf](/akka-actor-typed/src/main/resources/reference.conf)
@@snip [reference.conf](/actor-typed/src/main/resources/reference.conf)
<a id="config-akka-cluster-typed"></a>
### akka-cluster-typed
@@snip [reference.conf](/akka-cluster-typed/src/main/resources/reference.conf)
@@snip [reference.conf](/cluster-typed/src/main/resources/reference.conf)
<a id="config-akka-cluster"></a>
### akka-cluster
@@snip [reference.conf](/akka-cluster/src/main/resources/reference.conf)
@@snip [reference.conf](/cluster/src/main/resources/reference.conf)
<a id="config-akka-discovery"></a>
### akka-discovery
@@snip [reference.conf](/akka-discovery/src/main/resources/reference.conf)
@@snip [reference.conf](/discovery/src/main/resources/reference.conf)
<a id="config-akka-coordination"></a>
### akka-coordination
@@snip [reference.conf](/akka-coordination/src/main/resources/reference.conf)
@@snip [reference.conf](/coordination/src/main/resources/reference.conf)
<a id="config-akka-multi-node-testkit"></a>
### akka-multi-node-testkit
@@snip [reference.conf](/akka-multi-node-testkit/src/main/resources/reference.conf)
@@snip [reference.conf](/multi-node-testkit/src/main/resources/reference.conf)
<a id="config-akka-persistence-typed"></a>
### akka-persistence-typed
@@snip [reference.conf](/akka-persistence-typed/src/main/resources/reference.conf)
@@snip [reference.conf](/persistence-typed/src/main/resources/reference.conf)
<a id="config-akka-persistence"></a>
### akka-persistence
@@snip [reference.conf](/akka-persistence/src/main/resources/reference.conf)
@@snip [reference.conf](/persistence/src/main/resources/reference.conf)
<a id="config-akka-persistence-query"></a>
### akka-persistence-query
@@snip [reference.conf](/akka-persistence-query/src/main/resources/reference.conf)
@@snip [reference.conf](/persistence-query/src/main/resources/reference.conf)
<a id="config-akka-persistence-testkit"></a>
### akka-persistence-testkit
@@snip [reference.conf](/akka-persistence-testkit/src/main/resources/reference.conf)
@@snip [reference.conf](/persistence-testkit/src/main/resources/reference.conf)
<a id="config-akka-remote-artery"></a>
### akka-remote artery
@@snip [reference.conf](/akka-remote/src/main/resources/reference.conf) { #shared #artery type=none }
@@snip [reference.conf](/remote/src/main/resources/reference.conf) { #shared #artery type=none }
<a id="config-akka-remote"></a>
### akka-remote classic (deprecated)
@@snip [reference.conf](/akka-remote/src/main/resources/reference.conf) { #shared #classic type=none }
@@snip [reference.conf](/remote/src/main/resources/reference.conf) { #shared #classic type=none }
<a id="config-akka-testkit"></a>
### akka-testkit
@@snip [reference.conf](/akka-testkit/src/main/resources/reference.conf)
@@snip [reference.conf](/testkit/src/main/resources/reference.conf)
<a id="config-cluster-metrics"></a>
### akka-cluster-metrics
@@snip [reference.conf](/akka-cluster-metrics/src/main/resources/reference.conf)
@@snip [reference.conf](/cluster-metrics/src/main/resources/reference.conf)
<a id="config-cluster-tools"></a>
### akka-cluster-tools
@@snip [reference.conf](/akka-cluster-tools/src/main/resources/reference.conf)
@@snip [reference.conf](/cluster-tools/src/main/resources/reference.conf)
<a id="config-cluster-sharding-typed"></a>
### akka-cluster-sharding-typed
@@snip [reference.conf](/akka-cluster-sharding-typed/src/main/resources/reference.conf)
@@snip [reference.conf](/cluster-sharding-typed/src/main/resources/reference.conf)
<a id="config-cluster-sharding"></a>
### akka-cluster-sharding
@@snip [reference.conf](/akka-cluster-sharding/src/main/resources/reference.conf)
@@snip [reference.conf](/cluster-sharding/src/main/resources/reference.conf)
<a id="config-distributed-data"></a>
### akka-distributed-data
@@snip [reference.conf](/akka-distributed-data/src/main/resources/reference.conf)
@@snip [reference.conf](/distributed-data/src/main/resources/reference.conf)
<a id="config-akka-stream"></a>
### akka-stream
@@snip [reference.conf](/akka-stream/src/main/resources/reference.conf)
@@snip [reference.conf](/stream/src/main/resources/reference.conf)
<a id="config-akka-stream-testkit"></a>
### akka-stream-testkit
@@snip [reference.conf](/akka-stream-testkit/src/main/resources/reference.conf)
@@snip [reference.conf](/stream-testkit/src/main/resources/reference.conf)

View file

@ -1,3 +1,3 @@
# Configuration
@@snip [reference.conf](/akka-stream/src/main/resources/reference.conf)
@@snip [reference.conf](/stream/src/main/resources/reference.conf)

View file

@ -8,15 +8,15 @@ For the new API see @ref[Logging](typed/logging.md).
To use Logging, you must at least use the Akka actors dependency in your project, and will most likely want to configure logging via the SLF4J module (@ref:[see below](#slf4j)).
@@dependency[sbt,Maven,Gradle] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group="org.apache.pekko"
artifact="akka-actor_$scala.binary.version$"
artifact="pekko-actor_$scala.binary.version$"
version=PekkoVersion
}
@@project-info{ projectId="akka-slf4j" }
@@project-info{ projectId="slf4j" }
## Introduction
@ -435,12 +435,12 @@ load is high.
A starting point for configuration of `logback.xml` for production:
@@snip [logback.xml](/akka-actor-typed-tests/src/test/resources/logback-doc-prod.xml)
@@snip [logback.xml](/actor-typed-tests/src/test/resources/logback-doc-prod.xml)
For development you might want to log to standard out, but also have all `DEBUG` level logging to file, like
in this example:
@@snip [logback.xml](/akka-actor-typed-tests/src/test/resources/logback-doc-dev.xml)
@@snip [logback.xml](/actor-typed-tests/src/test/resources/logback-doc-dev.xml)
Place the `logback.xml` file in `src/main/resources/logback.xml`. For tests you can define different
logging configuration in `src/test/resources/logback-test.xml`.
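A minimal `logback.xml` along those lines could look like the sketch below; the appender and pattern choices are illustrative and are not the production/development snippets referenced above:

```xml
<!-- src/main/resources/logback.xml — illustrative minimal sketch -->
<configuration>
  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%date{ISO8601} %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="STDOUT"/>
  </root>
</configuration>
```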

View file

@ -39,14 +39,14 @@ So in Akka, to run all the multi-JVM tests in the akka-remote project use (at
the sbt prompt):
```none
akka-remote-tests/multi-jvm:test
remote-tests/multi-jvm:test
```
Or one can change to the `akka-remote-tests` project first, and then run the
tests:
```none
project akka-remote-tests
project remote-tests
multi-jvm:test
```

View file

@ -8,16 +8,16 @@ project.description: Multi node testing of distributed systems built with Akka.
To use Multi Node Testing, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group=org.apache.pekko
artifact=akka-multi-node-testkit_$scala.binary.version$
artifact=pekko-multi-node-testkit_$scala.binary.version$
version=PekkoVersion
scope=test
}
@@project-info{ projectId="akka-multi-node-testkit" }
@@project-info{ projectId="multi-node-testkit" }
## Multi Node Testing Concepts
@ -172,17 +172,17 @@ complete the test names.
First we need some scaffolding to hook up the @apidoc[MultiNodeSpec] with your favorite test framework. Let's define a trait
`STMultiNodeSpec` that uses ScalaTest to start and stop `MultiNodeSpec`.
@@snip [STMultiNodeSpec.scala](/akka-remote-tests/src/test/scala/org/apache/pekko/remote/testkit/STMultiNodeSpec.scala) { #example }
@@snip [STMultiNodeSpec.scala](/remote-tests/src/test/scala/org/apache/pekko/remote/testkit/STMultiNodeSpec.scala) { #example }
Then we need to define a configuration. Let's use two nodes `"node1"` and `"node2"` and call it
`MultiNodeSampleConfig`.
@@snip [MultiNodeSample.scala](/akka-remote-tests/src/multi-jvm/scala/org/apache/pekko/remote/sample/MultiNodeSample.scala) { #package #config }
@@snip [MultiNodeSample.scala](/remote-tests/src/multi-jvm/scala/org/apache/pekko/remote/sample/MultiNodeSample.scala) { #package #config }
And then finally the node test code, which starts the two nodes and demonstrates a barrier, as well as a remote actor
message send/receive.
@@snip [MultiNodeSample.scala](/akka-remote-tests/src/multi-jvm/scala/org/apache/pekko/remote/sample/MultiNodeSample.scala) { #package #spec }
@@snip [MultiNodeSample.scala](/remote-tests/src/multi-jvm/scala/org/apache/pekko/remote/sample/MultiNodeSample.scala) { #package #spec }
## Things to Keep in Mind

View file

@ -32,10 +32,10 @@ To demonstrate the features of the @scala[`PersistentFSM` trait]@java[`AbstractP
The contract of our "WebStoreCustomerFSMActor" is that it accepts the following commands:
Scala
: @@snip [PersistentFSMSpec.scala](/akka-persistence/src/test/scala/org/apache/pekko/persistence/fsm/PersistentFSMSpec.scala) { #customer-commands }
: @@snip [PersistentFSMSpec.scala](/persistence/src/test/scala/org/apache/pekko/persistence/fsm/PersistentFSMSpec.scala) { #customer-commands }
Java
: @@snip [AbstractPersistentFSMTest.java](/akka-persistence/src/test/java/org/apache/pekko/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-commands }
: @@snip [AbstractPersistentFSMTest.java](/persistence/src/test/java/org/apache/pekko/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-commands }
`AddItem` - sent when the customer adds an item to the shopping cart
`Buy` - when the customer finishes the purchase
@ -45,10 +45,10 @@ Java
The customer can be in one of the following states:
Scala
: @@snip [PersistentFSMSpec.scala](/akka-persistence/src/test/scala/org/apache/pekko/persistence/fsm/PersistentFSMSpec.scala) { #customer-states }
: @@snip [PersistentFSMSpec.scala](/persistence/src/test/scala/org/apache/pekko/persistence/fsm/PersistentFSMSpec.scala) { #customer-states }
Java
: @@snip [AbstractPersistentFSMTest.java](/akka-persistence/src/test/java/org/apache/pekko/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-states }
: @@snip [AbstractPersistentFSMTest.java](/persistence/src/test/java/org/apache/pekko/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-states }
`LookingAround` customer is browsing the site, but hasn't added anything to the shopping cart
`Shopping` customer has recently added items to the shopping cart
@ -67,26 +67,26 @@ Customer's actions are "recorded" as a sequence of "domain events" which are per
start in order to restore the latest customer's state:
Scala
: @@snip [PersistentFSMSpec.scala](/akka-persistence/src/test/scala/org/apache/pekko/persistence/fsm/PersistentFSMSpec.scala) { #customer-domain-events }
: @@snip [PersistentFSMSpec.scala](/persistence/src/test/scala/org/apache/pekko/persistence/fsm/PersistentFSMSpec.scala) { #customer-domain-events }
Java
: @@snip [AbstractPersistentFSMTest.java](/akka-persistence/src/test/java/org/apache/pekko/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-domain-events }
: @@snip [AbstractPersistentFSMTest.java](/persistence/src/test/java/org/apache/pekko/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-domain-events }
Customer state data represents the items in a customer's shopping cart:
Scala
: @@snip [PersistentFSMSpec.scala](/akka-persistence/src/test/scala/org/apache/pekko/persistence/fsm/PersistentFSMSpec.scala) { #customer-states-data }
: @@snip [PersistentFSMSpec.scala](/persistence/src/test/scala/org/apache/pekko/persistence/fsm/PersistentFSMSpec.scala) { #customer-states-data }
Java
: @@snip [AbstractPersistentFSMTest.java](/akka-persistence/src/test/java/org/apache/pekko/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-states-data }
: @@snip [AbstractPersistentFSMTest.java](/persistence/src/test/java/org/apache/pekko/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-states-data }
Here is how everything is wired together:
Scala
: @@snip [PersistentFSMSpec.scala](/akka-persistence/src/test/scala/org/apache/pekko/persistence/fsm/PersistentFSMSpec.scala) { #customer-fsm-body }
: @@snip [PersistentFSMSpec.scala](/persistence/src/test/scala/org/apache/pekko/persistence/fsm/PersistentFSMSpec.scala) { #customer-fsm-body }
Java
: @@snip [AbstractPersistentFSMTest.java](/akka-persistence/src/test/java/org/apache/pekko/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-fsm-body }
: @@snip [AbstractPersistentFSMTest.java](/persistence/src/test/java/org/apache/pekko/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-fsm-body }
@@@ note
@ -96,27 +96,27 @@ Override the `applyEvent` method to define how state data is affected by domain
@@@
Scala
: @@snip [PersistentFSMSpec.scala](/akka-persistence/src/test/scala/org/apache/pekko/persistence/fsm/PersistentFSMSpec.scala) { #customer-apply-event }
: @@snip [PersistentFSMSpec.scala](/persistence/src/test/scala/org/apache/pekko/persistence/fsm/PersistentFSMSpec.scala) { #customer-apply-event }
Java
: @@snip [AbstractPersistentFSMTest.java](/akka-persistence/src/test/java/org/apache/pekko/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-apply-event }
: @@snip [AbstractPersistentFSMTest.java](/persistence/src/test/java/org/apache/pekko/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-apply-event }
`andThen` can be used to define actions which will be executed following the event's persistence - convenient for "side effects" like sending a message or logging.
Notice that actions defined in the `andThen` block are not executed on recovery:
Scala
: @@snip [PersistentFSMSpec.scala](/akka-persistence/src/test/scala/org/apache/pekko/persistence/fsm/PersistentFSMSpec.scala) { #customer-andthen-example }
: @@snip [PersistentFSMSpec.scala](/persistence/src/test/scala/org/apache/pekko/persistence/fsm/PersistentFSMSpec.scala) { #customer-andthen-example }
Java
: @@snip [AbstractPersistentFSMTest.java](/akka-persistence/src/test/java/org/apache/pekko/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-andthen-example }
: @@snip [AbstractPersistentFSMTest.java](/persistence/src/test/java/org/apache/pekko/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-andthen-example }
A snapshot of state data can be persisted by calling the `saveStateSnapshot()` method:
Scala
: @@snip [PersistentFSMSpec.scala](/akka-persistence/src/test/scala/org/apache/pekko/persistence/fsm/PersistentFSMSpec.scala) { #customer-snapshot-example }
: @@snip [PersistentFSMSpec.scala](/persistence/src/test/scala/org/apache/pekko/persistence/fsm/PersistentFSMSpec.scala) { #customer-snapshot-example }
Java
: @@snip [AbstractPersistentFSMTest.java](/akka-persistence/src/test/java/org/apache/pekko/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-snapshot-example }
: @@snip [AbstractPersistentFSMTest.java](/persistence/src/test/java/org/apache/pekko/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-snapshot-example }
On recovery, state data is initialized according to the latest available snapshot; the remaining domain events are then replayed, triggering the
`applyEvent` method.
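The recovery order described here — latest snapshot first, then the remaining events through the event handler — can be sketched, independent of the PersistentFSM API, as a left fold (names are illustrative):

```java
import java.util.List;
import java.util.function.BiFunction;

// Sketch of the recovery order described above: start from the latest
// snapshot, then replay the remaining events through the same event
// handler used for live updates. Not the PersistentFSM API itself.
public final class Recovery {
    public static <S, E> S recover(S snapshot, List<E> remainingEvents,
                                   BiFunction<S, E, S> applyEvent) {
        S state = snapshot;
        for (E event : remainingEvents) {
            state = applyEvent.apply(state, event); // replay one domain event
        }
        return state;
    }
}
```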
@ -144,26 +144,26 @@ The following is the shopping cart example above converted to an `EventSourcedBe
The new commands, note the replyTo field for getting the current cart.
Scala
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #commands }
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #commands }
Java
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #commands }
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #commands }
The states of the FSM are represented using the `EventSourcedBehavior`'s state parameter along with the event and command handlers. Here are the states:
Scala
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #state }
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #state }
Java
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #state }
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #state }
The command handler has a separate section for each of the PersistentFSM's states:
Scala
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #command-handler }
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #command-handler }
Java
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #command-handler }
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #command-handler }
Note that there is no explicit support for state timeout as with PersistentFSM but the same behavior can be achieved
using `Behaviors.withTimers`. If the timer is the same for all events then it can be hard coded, otherwise the
@ -172,34 +172,34 @@ constructing a `SnapshotAdapter`. This can be added to an internal event and the
must also be taken to restart timers on recovery in the signal handler:
Scala
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #signal-handler }
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #signal-handler }
Java
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #signal-handler }
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #signal-handler }
Then the event handler:
Scala
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #event-handler }
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #event-handler }
Java
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #event-handler }
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #event-handler }
The last step is the adapters that will allow the new @apidoc[EventSourcedBehavior] to read the old data:
Scala
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #event-adapter }
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #event-adapter }
Java
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #event-adapter }
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #event-adapter }
The snapshot adapter needs to adapt an internal type of PersistentFSM, so a helper function is provided to build the @apidoc[SnapshotAdapter]:
Scala
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #snapshot-adapter }
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #snapshot-adapter }
Java
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #snapshot-adapter }
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #snapshot-adapter }
That concludes all the steps to allow an @apidoc[EventSourcedBehavior] to read a `PersistentFSM`'s data. Once the new code has been running,
you cannot roll back, as the PersistentFSM will not be able to read data written by Persistence Typed.
@@ -20,10 +20,10 @@ A journal plugin extends `AsyncWriteJournal`.
`AsyncWriteJournal` is an actor and the methods to be implemented are:
Scala
: @@snip [AsyncWriteJournal.scala](/akka-persistence/src/main/scala/org/apache/pekko/persistence/journal/AsyncWriteJournal.scala) { #journal-plugin-api }
: @@snip [AsyncWriteJournal.scala](/persistence/src/main/scala/org/apache/pekko/persistence/journal/AsyncWriteJournal.scala) { #journal-plugin-api }
Java
: @@snip [AsyncWritePlugin.java](/akka-persistence/src/main/java/org/apache/pekko/persistence/journal/japi/AsyncWritePlugin.java) { #async-write-plugin-api }
: @@snip [AsyncWritePlugin.java](/persistence/src/main/java/org/apache/pekko/persistence/journal/japi/AsyncWritePlugin.java) { #async-write-plugin-api }
If the storage backend API only supports synchronous, blocking writes, the methods should be implemented as:
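The concrete snippet is elided in this hunk. As a general illustration only (this is not Pekko's actual plugin API — the names below are made up for the sketch), the usual pattern is to run the blocking backend call on a dedicated executor and return an already-started future:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingWriteExample {
    // Hypothetical blocking storage call standing in for a real backend API.
    static String blockingWrite(String payload) {
        return "persisted:" + payload;
    }

    public static void main(String[] args) throws Exception {
        // Run the blocking call on a dedicated executor — never the system
        // default dispatcher — so slow writes cannot starve other work.
        ExecutorService journalDispatcher = Executors.newFixedThreadPool(2);
        CompletableFuture<String> result =
            CompletableFuture.supplyAsync(() -> blockingWrite("event-1"), journalDispatcher);
        System.out.println(result.get()); // prints "persisted:event-1"
        journalDispatcher.shutdown();
    }
}
```

The same idea applies in Scala with `Future { ... }(journalDispatcher)`: wrap the synchronous call rather than blocking the caller.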
@@ -36,10 +36,10 @@ Java
A journal plugin must also implement the methods defined in `AsyncRecovery` for replays and sequence number recovery:
Scala
: @@snip [AsyncRecovery.scala](/akka-persistence/src/main/scala/org/apache/pekko/persistence/journal/AsyncRecovery.scala) { #journal-plugin-api }
: @@snip [AsyncRecovery.scala](/persistence/src/main/scala/org/apache/pekko/persistence/journal/AsyncRecovery.scala) { #journal-plugin-api }
Java
: @@snip [AsyncRecoveryPlugin.java](/akka-persistence/src/main/java/org/apache/pekko/persistence/journal/japi/AsyncRecoveryPlugin.java) { #async-replay-plugin-api }
: @@snip [AsyncRecoveryPlugin.java](/persistence/src/main/java/org/apache/pekko/persistence/journal/japi/AsyncRecoveryPlugin.java) { #async-replay-plugin-api }
A journal plugin can be activated with the following minimal configuration:
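The minimal configuration itself is elided in this hunk. A typical shape looks roughly like the following (the plugin id and implementation class here are placeholders, not real names):

```
# Hypothetical plugin id and implementation class — substitute your own.
pekko.persistence.journal.plugin = "my-journal"
my-journal {
  class = "com.example.MyJournal"
}
```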
@@ -68,10 +68,10 @@ Don't run journal tasks/futures on the system default dispatcher, since that mig
A snapshot store plugin must extend the `SnapshotStore` actor and implement the following methods:
Scala
: @@snip [SnapshotStore.scala](/akka-persistence/src/main/scala/org/apache/pekko/persistence/snapshot/SnapshotStore.scala) { #snapshot-store-plugin-api }
: @@snip [SnapshotStore.scala](/persistence/src/main/scala/org/apache/pekko/persistence/snapshot/SnapshotStore.scala) { #snapshot-store-plugin-api }
Java
: @@snip [SnapshotStorePlugin.java](/akka-persistence/src/main/java/org/apache/pekko/persistence/snapshot/japi/SnapshotStorePlugin.java) { #snapshot-store-plugin-api }
: @@snip [SnapshotStorePlugin.java](/persistence/src/main/java/org/apache/pekko/persistence/snapshot/japi/SnapshotStorePlugin.java) { #snapshot-store-plugin-api }
A snapshot store plugin can be activated with the following minimal configuration:
@@ -99,14 +99,14 @@ Don't run snapshot store tasks/futures on the system default dispatcher, since t
In order to help developers build correct and high-quality storage plugins, we provide a Technology Compatibility Kit ([TCK](https://en.wikipedia.org/wiki/Technology_Compatibility_Kit) for short).
The TCK is usable from Java as well as Scala projects. To test your implementation (independently of language) you need to include the akka-persistence-tck dependency:
The TCK is usable from Java as well as Scala projects. To test your implementation (independently of language) you need to include the pekko-persistence-tck dependency:
@@dependency[sbt,Maven,Gradle] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group="org.apache.pekko"
artifact="akka-persistence-tck_$scala.binary.version$"
artifact="pekko-persistence-tck_$scala.binary.version$"
version=PekkoVersion
}
@@ -159,4 +159,4 @@ for the default `LeveldbReadJournal.Identifier`.
It can be configured with the following properties:
@@snip [reference.conf](/akka-persistence-query/src/main/resources/reference.conf) { #query-leveldb }
@@snip [reference.conf](/persistence-query/src/main/resources/reference.conf) { #query-leveldb }
@@ -134,10 +134,10 @@ with the given `tags`. The journal may support other ways of doing tagging - aga
how exactly this is implemented depends on the used journal. Here is an example of such a tagging with an @apidoc[typed.*.EventSourcedBehavior]:
Scala
: @@snip [BasicPersistentActorCompileOnly.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/BasicPersistentBehaviorCompileOnly.scala) { #tagging-query }
: @@snip [BasicPersistentActorCompileOnly.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/BasicPersistentBehaviorCompileOnly.scala) { #tagging-query }
Java
: @@snip [BasicPersistentBehaviorTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/BasicPersistentBehaviorTest.java) { #tagging-query }
: @@snip [BasicPersistentBehaviorTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/BasicPersistentBehaviorTest.java) { #tagging-query }
@@@ note
@@ -11,14 +11,14 @@ For the full documentation of this feature and for new projects see @ref:[Event
To use Akka Persistence, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group="org.apache.pekko"
artifact="akka-persistence_$scala.binary.version$"
artifact="pekko-persistence_$scala.binary.version$"
version=PekkoVersion
group2="org.apache.pekko"
artifact2="akka-persistence-testkit_$scala.binary.version$"
artifact2="pekko-persistence-testkit_$scala.binary.version$"
version2=PekkoVersion
scope2=test
}
@@ -26,7 +26,7 @@ To use Akka Persistence, you must add the following dependency in your project:
You also have to select a journal plugin and optionally a snapshot store plugin, see
@ref:[Persistence Plugins](persistence-plugins.md).
@@project-info{ projectId="akka-persistence" }
@@project-info{ projectId="persistence" }
## Introduction
@@ -52,7 +52,7 @@ Define the library dependencies with the complete version. For example:
@@@vars
```
libraryDependencies += "org.apache.pekko" % "akka-remote_$scala.binary.version$" % "2.6.14+72-53943d99-SNAPSHOT"
libraryDependencies += "org.apache.pekko" % "pekko-remote_$scala.binary.version$" % "2.6.14+72-53943d99-SNAPSHOT"
```
@@@
@@ -80,7 +80,7 @@ Define the library dependencies with the timestamp as version. For example:
<dependencies>
<dependency>
<groupId>org.apache.pekko</groupId>
<artifactId>akka-remote_$scala.binary.version$</artifactId>
<artifactId>pekko-remote_$scala.binary.version$</artifactId>
<version>2.6.14+72-53943d99-SNAPSHOT</version>
</dependency>
</dependencies>
@@ -725,10 +725,10 @@ used for individual streams when they are materialized.
Setting attributes on individual streams can be done like so:
Scala
: @@snip [StreamAttributeDocSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/StreamAttributeDocSpec.scala) { #attributes-on-stream }
: @@snip [StreamAttributeDocSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/StreamAttributeDocSpec.scala) { #attributes-on-stream }
Java
: @@snip [StreamAttributeDocTest.java](/akka-stream-tests/src/test/java/org/apache/pekko/stream/StreamAttributeDocTest.java) { #attributes-on-stream }
: @@snip [StreamAttributeDocTest.java](/stream-tests/src/test/java/org/apache/pekko/stream/StreamAttributeDocTest.java) { #attributes-on-stream }
### Stream cancellation available upstream
@@ -585,7 +585,7 @@ remained the same, we recommend reading the @ref:[Serialization](serialization.m
Implementing an `org.apache.pekko.serialization.ByteBufferSerializer` works the same way as any other serializer,
Scala
: @@snip [Serializer.scala](/akka-actor/src/main/scala/org/apache/pekko/serialization/Serializer.scala) { #ByteBufferSerializer }
: @@snip [Serializer.scala](/actor/src/main/scala/org/apache/pekko/serialization/Serializer.scala) { #ByteBufferSerializer }
Java
: @@snip [ByteBufferSerializerDocTest.java](/docs/src/test/java/jdocs/actor/ByteBufferSerializerDocTest.java) { #ByteBufferSerializer-interface }
@@ -25,15 +25,15 @@ such as [HTTP](https://doc.akka.io/docs/akka-http/current/),
To use Akka Remoting, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group=org.apache.pekko
artifact=akka-remote_$scala.binary.version$
artifact=pekko-remote_$scala.binary.version$
version=PekkoVersion
}
@@project-info{ projectId="akka-remote" }
@@project-info{ projectId="remote" }
Classic remoting depends on Netty. This needs to be explicitly added as a dependency so that users
not using classic remoting do not have to have Netty on the classpath:
@@ -287,7 +287,7 @@ The list of allowed classes has to be configured on the "remote" system, in othe
others will be attempting to remote deploy Actors. That system, locally, knows best which Actors it should or
should not allow others to remote deploy onto it. The full settings section may for example look like this:
@@snip [RemoteDeploymentAllowListSpec.scala](/akka-remote/src/test/scala/org/apache/pekko/remote/classic/RemoteDeploymentAllowListSpec.scala) { #allow-list-config }
@@snip [RemoteDeploymentAllowListSpec.scala](/remote/src/test/scala/org/apache/pekko/remote/classic/RemoteDeploymentAllowListSpec.scala) { #allow-list-config }
Actor classes not included in the allow list will not be allowed to be remote deployed onto this system.
@@ -182,7 +182,7 @@ by specifying the strategy when defining the router.
Setting the strategy is done like this:
Scala
: @@snip [RoutingSpec.scala](/akka-actor-tests/src/test/scala/org/apache/pekko/routing/RoutingSpec.scala) { #supervision }
: @@snip [RoutingSpec.scala](/actor-tests/src/test/scala/org/apache/pekko/routing/RoutingSpec.scala) { #supervision }
Java
: @@snip [RouterDocTest.java](/docs/src/test/java/jdocs/routing/RouterDocTest.java) { #supervision }
@@ -35,15 +35,15 @@ in serialization-bindings configuration. Typically you will create a marker @sca
for that purpose and let the messages @scala[extend]@java[implement] that.
Scala
: @@snip [SerializationDocSpec.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #marker-interface }
: @@snip [SerializationDocSpec.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #marker-interface }
Java
: @@snip [MySerializable.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/MySerializable.java) { #marker-interface }
: @@snip [MySerializable.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/MySerializable.java) { #marker-interface }
Then you configure the class name of the marker @scala[trait]@java[interface] in `serialization-bindings` to
one of the supported Jackson formats: `jackson-json` or `jackson-cbor`.
@@snip [config](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #serialization-bindings }
@@snip [config](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #serialization-bindings }
A good convention would be to name the marker interface `CborSerializable` or `JsonSerializable`.
In this documentation we have used `MySerializable` to make it clear that the marker interface itself is not
@@ -107,17 +107,17 @@ MismatchedInputException: Cannot construct instance of `...` (although at least
That is probably because the class has a constructor with a single parameter, like:
Java
: @@snip [SerializationDocTest.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/SerializationDocTest.java) { #one-constructor-param-1 }
: @@snip [SerializationDocTest.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/SerializationDocTest.java) { #one-constructor-param-1 }
That can be solved by adding @javadoc[@JsonCreator](com.fasterxml.jackson.annotation.JsonCreator) or @javadoc[@JsonProperty](com.fasterxml.jackson.annotation.JsonProperty) annotations:
Java
: @@snip [SerializationDocTest.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/SerializationDocTest.java) { #one-constructor-param-2 }
: @@snip [SerializationDocTest.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/SerializationDocTest.java) { #one-constructor-param-2 }
or
Java
: @@snip [SerializationDocTest.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/SerializationDocTest.java) { #one-constructor-param-3 }
: @@snip [SerializationDocTest.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/SerializationDocTest.java) { #one-constructor-param-3 }
The `ParameterNamesModule` is configured with `JsonCreator.Mode.PROPERTIES` as described in the
@@ -134,10 +134,10 @@ and @javadoc[@JsonSubTypes](com.fasterxml.jackson.annotation.JsonSubTypes) annot
Example:
Scala
: @@snip [SerializationDocSpec.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #polymorphism }
: @@snip [SerializationDocSpec.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #polymorphism }
Java
: @@snip [SerializationDocTest.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/SerializationDocTest.java) { #polymorphism }
: @@snip [SerializationDocTest.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/SerializationDocTest.java) { #polymorphism }
If you haven't defined the annotations you will see an exception like this:
@@ -174,7 +174,7 @@ The easiest workaround is to define the case objects as case class without any f
Alternatively, you can define an intermediate trait for the case object and a custom deserializer for it. The example below builds on the previous `Animal` sample by adding a fictitious, single-instance new animal, a `Unicorn`.
Scala
: @@snip [SerializationDocSpec.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #polymorphism-case-object }
: @@snip [SerializationDocSpec.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #polymorphism-case-object }
The case object `Unicorn` can't be used in a @javadoc[@JsonSubTypes](com.fasterxml.jackson.annotation.JsonSubTypes) annotation, but its trait can. When serializing the case object, we need to know which type tag to use, hence the @javadoc[@JsonTypeName](com.fasterxml.jackson.annotation.JsonTypeName) annotation on the object. When deserializing, Jackson will only know about the trait variant; therefore we need a custom deserializer that returns the case object.
@@ -183,7 +183,7 @@ On the other hand, if the ADT only has case objects, you can solve it by impleme
@javadoc[StdDeserializer](com.fasterxml.jackson.databind.deser.std.StdDeserializer).
Scala
: @@snip [CustomAdtSerializer.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/CustomAdtSerializer.scala) { #adt-trait-object }
: @@snip [CustomAdtSerializer.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/CustomAdtSerializer.scala) { #adt-trait-object }
### Enumerations
@@ -194,7 +194,7 @@ statically specify the type information to a field. When using the `@JsonScalaEn
value is serialized as a JsonString.
Scala
: @@snip [JacksonSerializerSpec.scala](/akka-serialization-jackson/src/test/scala/org/apache/pekko/serialization/jackson/JacksonSerializerSpec.scala) { #jackson-scala-enumeration }
: @@snip [JacksonSerializerSpec.scala](/serialization-jackson/src/test/scala/org/apache/pekko/serialization/jackson/JacksonSerializerSpec.scala) { #jackson-scala-enumeration }
@@@
@@ -222,39 +222,39 @@ Adding an optional field can be done without any migration code. The default val
Old class:
Scala
: @@snip [ItemAdded.java](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v1/ItemAdded.scala) { #add-optional }
: @@snip [ItemAdded.java](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v1/ItemAdded.scala) { #add-optional }
Java
: @@snip [ItemAdded.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v1/ItemAdded.java) { #add-optional }
: @@snip [ItemAdded.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v1/ItemAdded.java) { #add-optional }
New class with a new optional `discount` property and a new `note` field with default value:
Scala
: @@snip [ItemAdded.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2a/ItemAdded.scala) { #add-optional }
: @@snip [ItemAdded.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2a/ItemAdded.scala) { #add-optional }
Java
: @@snip [ItemAdded.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2a/ItemAdded.java) { #add-optional }
: @@snip [ItemAdded.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2a/ItemAdded.java) { #add-optional }
### Add Mandatory Field
Let's say we want to have a mandatory `discount` property without a default value instead:
Scala
: @@snip [ItemAdded.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2b/ItemAdded.scala) { #add-mandatory }
: @@snip [ItemAdded.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2b/ItemAdded.scala) { #add-mandatory }
Java
: @@snip [ItemAdded.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2b/ItemAdded.java) { #add-mandatory }
: @@snip [ItemAdded.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2b/ItemAdded.java) { #add-mandatory }
To add a new mandatory field we have to use a @apidoc[JacksonMigration] class and set the default value in the migration code.
This is what a migration class would look like for adding a `discount` field:
Scala
: @@snip [ItemAddedMigration.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2b/ItemAddedMigration.scala) { #add-mandatory }
: @@snip [ItemAddedMigration.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2b/ItemAddedMigration.scala) { #add-mandatory }
Java
: @@snip [ItemAddedMigration.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2b/ItemAddedMigration.java) { #add-mandatory }
: @@snip [ItemAddedMigration.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2b/ItemAddedMigration.java) { #add-mandatory }
Override the @scala[@scaladoc[currentVersion](pekko.serialization.jackson.JacksonMigration#currentVersion:Int)]@java[@javadoc[currentVersion()](pekko.serialization.jackson.JacksonMigration#currentVersion())] method to define the version number of the current (latest) version. The first version,
when no migration was used, is always 1. Increase this version number whenever you perform a change that is not
@@ -269,7 +269,7 @@ to get access to mutators.
The migration class must be defined in the configuration file:
@@snip [config](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #migrations-conf }
@@snip [config](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #migrations-conf }
The same thing could have been done for the `note` field, adding a default value of `""` in the `ItemAddedMigration`.
@@ -278,18 +278,18 @@ The same thing could have been done for the `note` field, adding a default value
Let's say that we want to rename the `productId` field to `itemId` in the previous example.
Scala
: @@snip [ItemAdded.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2c/ItemAdded.scala) { #rename }
: @@snip [ItemAdded.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2c/ItemAdded.scala) { #rename }
Java
: @@snip [ItemAdded.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2c/ItemAdded.java) { #rename }
: @@snip [ItemAdded.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2c/ItemAdded.java) { #rename }
The migration code would look like:
Scala
: @@snip [ItemAddedMigration.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2c/ItemAddedMigration.scala) { #rename }
: @@snip [ItemAddedMigration.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2c/ItemAddedMigration.scala) { #rename }
Java
: @@snip [ItemAddedMigration.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2c/ItemAddedMigration.java) { #rename }
: @@snip [ItemAddedMigration.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2c/ItemAddedMigration.java) { #rename }
### Structural Changes
@@ -298,34 +298,34 @@ In a similar way we can do arbitrary structural changes.
Old class:
Scala
: @@snip [Customer.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v1/Customer.scala) { #structural }
: @@snip [Customer.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v1/Customer.scala) { #structural }
Java
: @@snip [Customer.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v1/Customer.java) { #structural }
: @@snip [Customer.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v1/Customer.java) { #structural }
New class:
Scala
: @@snip [Customer.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2a/Customer.scala) { #structural }
: @@snip [Customer.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2a/Customer.scala) { #structural }
Java
: @@snip [Customer.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2a/Customer.java) { #structural }
: @@snip [Customer.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2a/Customer.java) { #structural }
with the `Address` class:
Scala
: @@snip [Address.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2a/Address.scala) { #structural }
: @@snip [Address.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2a/Address.scala) { #structural }
Java
: @@snip [Address.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2a/Address.java) { #structural }
: @@snip [Address.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2a/Address.java) { #structural }
The migration code would look like:
Scala
: @@snip [CustomerMigration.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2a/CustomerMigration.scala) { #structural }
: @@snip [CustomerMigration.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2a/CustomerMigration.scala) { #structural }
Java
: @@snip [CustomerMigration.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2a/CustomerMigration.java) { #structural }
: @@snip [CustomerMigration.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2a/CustomerMigration.java) { #structural }
### Rename Class
@@ -334,32 +334,32 @@ It is also possible to rename the class. For example, let's rename `OrderAdded`
Old class:
Scala
: @@snip [OrderAdded.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v1/OrderAdded.scala) { #rename-class }
: @@snip [OrderAdded.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v1/OrderAdded.scala) { #rename-class }
Java
: @@snip [OrderAdded.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v1/OrderAdded.java) { #rename-class }
: @@snip [OrderAdded.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v1/OrderAdded.java) { #rename-class }
New class:
Scala
: @@snip [OrderPlaced.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2a/OrderPlaced.scala) { #rename-class }
: @@snip [OrderPlaced.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2a/OrderPlaced.scala) { #rename-class }
Java
: @@snip [OrderPlaced.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2a/OrderPlaced.java) { #rename-class }
: @@snip [OrderPlaced.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2a/OrderPlaced.java) { #rename-class }
The migration code would look like:
Scala
: @@snip [OrderPlacedMigration.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2a/OrderPlacedMigration.scala) { #rename-class }
: @@snip [OrderPlacedMigration.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2a/OrderPlacedMigration.scala) { #rename-class }
Java
: @@snip [OrderPlacedMigration.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2a/OrderPlacedMigration.java) { #rename-class }
: @@snip [OrderPlacedMigration.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2a/OrderPlacedMigration.java) { #rename-class }
Note the override of the @apidoc[transformClassName(fromVersion, className)](JacksonMigration) {scala="#transformClassName(fromVersion:Int,className:String):String" java="#transformClassName(int,java.lang.String)"} method to define the new class name.
That type of migration must be configured with the old class name as key. The actual class can be removed.
@@snip [config](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #migrations-conf-rename }
@@snip [config](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #migrations-conf-rename }
### Remove from serialization-bindings
@@ -369,7 +369,7 @@ during rolling update with serialization changes, or when reading old stored dat
when changing from Jackson serializer to another serializer (e.g. Protobuf) and thereby changing the serialization
binding, but it should still be possible to deserialize old data with Jackson.
@@snip [config](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #allowed-class-prefix }
@@snip [config](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #allowed-class-prefix }
It's a list of class names or prefixes of class names.
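As a sketch of what such an entry might look like (the key is taken from the snippet label above; the class names are placeholders):

```
# Hypothetical entries — list your own classes or package prefixes.
pekko.serialization.jackson.allowed-class-prefix =
  ["com.example.OldEvent", "com.example.events."]
```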
@@ -390,43 +390,43 @@ Let's take, for example, the case above where we [renamed a field](#rename-field
The starting schema is:
Scala
: @@snip [ItemAdded.java](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v1/ItemAdded.scala) { #add-optional }
: @@snip [ItemAdded.java](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v1/ItemAdded.scala) { #add-optional }
Java
: @@snip [ItemAdded.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v1/ItemAdded.java) { #add-optional }
: @@snip [ItemAdded.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v1/ItemAdded.java) { #add-optional }
In a first deployment, we still don't make any change to the event class:
Scala
: @@snip [ItemAdded.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v1/ItemAdded.scala) { #forward-one-rename }
: @@snip [ItemAdded.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v1/ItemAdded.scala) { #forward-one-rename }
Java
: @@snip [ItemAdded.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v1/ItemAdded.java) { #forward-one-rename }
: @@snip [ItemAdded.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v1/ItemAdded.java) { #forward-one-rename }
but we introduce a migration that can read the newer schema which is versioned `2`:
Scala
: @@snip [ItemAddedMigration.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v1withv2/ItemAddedMigration.scala) { #forward-one-rename }
: @@snip [ItemAddedMigration.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v1withv2/ItemAddedMigration.scala) { #forward-one-rename }
Java
: @@snip [ItemAddedMigration.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v1withv2/ItemAddedMigration.java) { #forward-one-rename }
: @@snip [ItemAddedMigration.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v1withv2/ItemAddedMigration.java) { #forward-one-rename }
Once all running nodes have the new migration code which can read version `2` of `ItemAdded` we can proceed with the
second step. So, we deploy the updated event:
Scala
: @@snip [ItemAdded.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2c/ItemAdded.scala) { #rename }
: @@snip [ItemAdded.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2c/ItemAdded.scala) { #rename }
Java
: @@snip [ItemAdded.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2c/ItemAdded.java) { #rename }
: @@snip [ItemAdded.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2c/ItemAdded.java) { #rename }
and the final migration code which no longer needs forward-compatibility code:
Scala
: @@snip [ItemAddedMigration.scala](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2c/ItemAddedMigration.scala) { #rename }
: @@snip [ItemAddedMigration.scala](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/v2c/ItemAddedMigration.scala) { #rename }
Java
: @@snip [ItemAddedMigration.java](/akka-serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2c/ItemAddedMigration.java) { #rename }
: @@snip [ItemAddedMigration.java](/serialization-jackson/src/test/java/jdoc/org/apache/pekko/serialization/jackson/v2c/ItemAddedMigration.java) { #rename }
@ -434,7 +434,7 @@ Java
The following Jackson modules are enabled by default:
@@snip [reference.conf](/akka-serialization-jackson/src/main/resources/reference.conf) { #jackson-modules }
@@snip [reference.conf](/serialization-jackson/src/main/resources/reference.conf) { #jackson-modules }
You can amend the configuration `pekko.serialization.jackson.jackson-modules` to enable other modules.
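For example, enabling an extra module might look like the following sketch; the Afterburner module is used purely as an illustration and would need to be on the classpath:

```
pekko.serialization.jackson.jackson-modules += "com.fasterxml.jackson.module.afterburner.AfterburnerModule"
```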
@ -446,7 +446,7 @@ Java compiler option is enabled.
JSON can be rather verbose and for large messages it can be beneficial to compress large payloads. For
the `jackson-json` binding the default configuration is:
@@snip [reference.conf](/akka-serialization-jackson/src/main/resources/reference.conf) { #compression }
@@snip [reference.conf](/serialization-jackson/src/main/resources/reference.conf) { #compression }
Supported compression algorithms are: gzip, lz4. Use 'off' to disable compression.
Gzip is generally slower than lz4.
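Overriding the compression settings could look like this sketch; the threshold value is only an example:

```
pekko.serialization.jackson.jackson-json.compression {
  # Supported: off, gzip, lz4
  algorithm = gzip
  # Only compress payloads at least this large
  compress-larger-than = 32 KiB
}
```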
@ -481,12 +481,12 @@ By default the configuration for the Jackson serializers and their @javadoc[Obje
the `pekko.serialization.jackson` section. It is possible to override that configuration in a more
specific `pekko.serialization.jackson.<binding name>` section.
@@snip [config](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #specific-config }
@@snip [config](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #specific-config }
It's also possible to define several bindings and use different configuration for them. For example,
different settings for remote messages and persisted events.
@@snip [config](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #several-config }
@@snip [config](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #several-config }
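As a sketch of the pattern (the binding name, identifier, and class name below are placeholders): a dedicated binding is declared under `pekko.actor` and then configured under `pekko.serialization.jackson.<binding name>`:

```
pekko.actor {
  serializers {
    jackson-json-event = "org.apache.pekko.serialization.jackson.JacksonJsonSerializer"
  }
  serialization-identifiers {
    jackson-json-event = 9001
  }
  serialization-bindings {
    "com.myservice.MyEvent" = jackson-json-event
  }
}
# Settings that apply only to the jackson-json-event binding
pekko.serialization.jackson.jackson-json-event {
  serialization-features.WRITE_DATES_AS_TIMESTAMPS = on
}
```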
### Manifest-less serialization
@ -511,7 +511,7 @@ Since this configuration can only be applied to a single root type, you will usu
apply it to a per binding configuration, not to the regular `jackson-json` or `jackson-cbor`
configurations.
@@snip [config](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #manifestless }
@@snip [config](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #manifestless }
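A manifest-less binding might be sketched as follows, assuming a single root event type (all names here are placeholders):

```
pekko.actor {
  serializers.jackson-cbor-event = "org.apache.pekko.serialization.jackson.JacksonCborSerializer"
  serialization-identifiers.jackson-cbor-event = 9002
  serialization-bindings."com.myservice.MyEvent" = jackson-cbor-event
}
pekko.serialization.jackson.jackson-cbor-event {
  # Skip writing the class name into the manifest...
  type-in-manifest = off
  # ...and always deserialize to this root type
  deserialization-type = "com.myservice.MyEvent"
}
```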
Note that Akka remoting already implements manifest compression, and so this optimization will have
no significant impact for messages sent over remoting. It's only useful for messages serialized for
@ -522,7 +522,7 @@ other purposes, such as persistence or distributed data.
Additional Jackson serialization features can be enabled/disabled in configuration. The default values from
Jackson are used aside from the following that are changed in Akka's default configuration.
@@snip [reference.conf](/akka-serialization-jackson/src/main/resources/reference.conf) { #features }
@@snip [reference.conf](/serialization-jackson/src/main/resources/reference.conf) { #features }
### Date/time format
@ -531,6 +531,6 @@ ISO-8601 (rfc3339) `yyyy-MM-dd'T'HH:mm:ss.SSSZ` format instead of numeric arrays
interoperability but it is slower. If you don't need the ISO format for interoperability with external systems
you can change the following configuration for better performance of date/time fields.
@@snip [config](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #date-time }
@@snip [config](/serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #date-time }
Jackson is still able to deserialize the other format independently of this setting.
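Switching to the faster numeric representation would then amount to flipping the corresponding Jackson features, roughly:

```
pekko.serialization.jackson.serialization-features {
  WRITE_DATES_AS_TIMESTAMPS = on
  WRITE_DURATIONS_AS_TIMESTAMPS = on
}
```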

View file

@ -198,10 +198,10 @@ To serialize actor references to/from string representation you would use the @a
For example here's how a serializer could look for `Ping` and `Pong` messages:
Scala
: @@snip [PingSerializer.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/PingSerializer.scala) { #serializer }
: @@snip [PingSerializer.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/PingSerializer.scala) { #serializer }
Java
: @@snip [PingSerializerExampleTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/PingSerializerExampleTest.java) { #serializer }
: @@snip [PingSerializerExampleTest.java](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/PingSerializerExampleTest.java) { #serializer }
Serialization of Classic @apidoc[actor.ActorRef] is described in @ref:[Classic Serialization](serialization-classic.md#serializing-actorrefs).
Classic and Typed actor references have the same serialization format so they can be interchanged.

View file

@ -16,15 +16,15 @@ To use Akka Split Brain Resolver is part of `akka-cluster` and you probably alre
dependency included. Otherwise, add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group=org.apache.pekko
artifact=akka-cluster_$scala.binary.version$
artifact=pekko-cluster_$scala.binary.version$
version=PekkoVersion
}
@@project-info{ projectId="akka-cluster" }
@@project-info{ projectId="cluster" }
## Enable the Split Brain Resolver
@ -127,7 +127,7 @@ have been stable for a certain time period. Continuously adding more nodes while
partition does not influence this timeout, since the status of those nodes will not be changed to Up
while there are unreachable nodes. Joining nodes are not counted in the logic of the strategies.
@@snip [reference.conf](/akka-cluster/src/main/resources/reference.conf) { #split-brain-resolver }
@@snip [reference.conf](/cluster/src/main/resources/reference.conf) { #split-brain-resolver }
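A minimal override of the `stable-after` timeout might look like this; the 20 s value is purely illustrative:

```
pekko.cluster.split-brain-resolver {
  stable-after = 20s
}
```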
Set `pekko.cluster.split-brain-resolver.stable-after` to a shorter duration to have quicker removal of crashed nodes,
at the price of risking too early action on transient network partitions that otherwise would have healed. Do not
@ -210,7 +210,7 @@ Configuration:
pekko.cluster.split-brain-resolver.active-strategy=keep-majority
```
@@snip [reference.conf](/akka-cluster/src/main/resources/reference.conf) { #keep-majority }
@@snip [reference.conf](/cluster/src/main/resources/reference.conf) { #keep-majority }
### Static Quorum
@ -276,7 +276,7 @@ Configuration:
pekko.cluster.split-brain-resolver.active-strategy=static-quorum
```
@@snip [reference.conf](/akka-cluster/src/main/resources/reference.conf) { #static-quorum }
@@snip [reference.conf](/cluster/src/main/resources/reference.conf) { #static-quorum }
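Selecting this strategy and sizing the quorum could be sketched as follows; the quorum size of 3 assumes a cluster of five nodes:

```
pekko.cluster.split-brain-resolver {
  active-strategy = static-quorum
  static-quorum {
    # Minimum number of nodes the surviving side must have
    quorum-size = 3
  }
}
```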
### Keep Oldest
@ -315,7 +315,7 @@ Configuration:
pekko.cluster.split-brain-resolver.active-strategy=keep-oldest
```
@@snip [reference.conf](/akka-cluster/src/main/resources/reference.conf) { #keep-oldest }
@@snip [reference.conf](/cluster/src/main/resources/reference.conf) { #keep-oldest }
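Enabling the strategy might look like the sketch below; `down-if-alone = on` makes a lone oldest node down itself rather than survive on its own:

```
pekko.cluster.split-brain-resolver {
  active-strategy = keep-oldest
  keep-oldest {
    down-if-alone = on
  }
}
```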
### Down All
@ -374,7 +374,7 @@ pekko {
}
```
@@snip [reference.conf](/akka-cluster/src/main/resources/reference.conf) { #lease-majority }
@@snip [reference.conf](/cluster/src/main/resources/reference.conf) { #lease-majority }
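Wiring in a lease implementation could be sketched like this; the Kubernetes lease name is an assumption and must match the lease module actually added to the project:

```
pekko.cluster.split-brain-resolver {
  active-strategy = lease-majority
  lease-majority {
    # Assumed name; provided by a separate lease dependency
    lease-implementation = "pekko.coordination.lease.kubernetes"
  }
}
```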
See also configuration and additional dependency in [Kubernetes Lease](https://doc.akka.io/docs/akka-management/current/kubernetes-lease.html)

View file

@ -8,19 +8,19 @@ project.description: An intuitive and safe way to do asynchronous, non-blocking
To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group="org.apache.pekko"
artifact="akka-stream_$scala.binary.version$"
artifact="pekko-stream_$scala.binary.version$"
version=PekkoVersion
group2="org.apache.pekko"
artifact2="akka-stream-testkit_$scala.binary.version$"
artifact2="pekko-stream-testkit_$scala.binary.version$"
version2=PekkoVersion
scope2=test
}
@@project-info{ projectId="akka-stream" }
@@project-info{ projectId="stream" }
@@toc { depth=2 }

View file

@ -41,10 +41,10 @@ See also:
The `ActorFlow.ask` sends a message to the actor. The actor expects `Asking` messages which contain the actor ref for replies of type `Reply`. When the actor for replies receives a reply, the `ActorFlow.ask` stream stage emits the reply and the `map` extracts the message `String`.
Scala
: @@snip [ask.scala](/akka-stream-typed/src/test/scala/docs/scaladsl/ActorFlowSpec.scala) { #imports #ask-actor #ask }
: @@snip [ask.scala](/stream-typed/src/test/scala/docs/scaladsl/ActorFlowSpec.scala) { #imports #ask-actor #ask }
Java
: @@snip [ask.java](/akka-stream-typed/src/test/java/docs/javadsl/ActorFlowCompileTest.java) { #ask-actor #ask }
: @@snip [ask.java](/stream-typed/src/test/java/docs/javadsl/ActorFlowCompileTest.java) { #ask-actor #ask }
## Reactive Streams semantics

View file

@ -38,10 +38,10 @@ The `askWithStatus` operator requires
The `ActorFlow.askWithStatus` sends a message to the actor. The actor expects `AskingWithStatus` messages which contain the actor ref for replies of type @scala[`StatusReply[String]`]@java[`StatusReply<String>`]. When the actor for replies receives a reply, the `ActorFlow.askWithStatus` stream stage emits the reply and the `map` extracts the message `String`.
Scala
: @@snip [ask.scala](/akka-stream-typed/src/test/scala/docs/scaladsl/ActorFlowSpec.scala) { #imports #ask-actor #ask }
: @@snip [ask.scala](/stream-typed/src/test/scala/docs/scaladsl/ActorFlowSpec.scala) { #imports #ask-actor #ask }
Java
: @@snip [ask.java](/akka-stream-typed/src/test/java/docs/javadsl/ActorFlowCompileTest.java) { #ask-actor #ask }
: @@snip [ask.java](/stream-typed/src/test/java/docs/javadsl/ActorFlowCompileTest.java) { #ask-actor #ask }
## Reactive Streams semantics

View file

@ -35,10 +35,10 @@ See also:
## Examples
Scala
: @@snip [ActorSourceSinkExample.scala](/akka-stream-typed/src/test/scala/docs/org/apache/pekko/stream/typed/ActorSourceSinkExample.scala) { #actor-sink-ref-with-backpressure }
: @@snip [ActorSourceSinkExample.scala](/stream-typed/src/test/scala/docs/org/apache/pekko/stream/typed/ActorSourceSinkExample.scala) { #actor-sink-ref-with-backpressure }
Java
: @@snip [ActorSinkWithAckExample.java](/akka-stream-typed/src/test/java/docs/org/apache/pekko/stream/typed/ActorSinkWithAckExample.java) { #actor-sink-ref-with-backpressure }
: @@snip [ActorSinkWithAckExample.java](/stream-typed/src/test/java/docs/org/apache/pekko/stream/typed/ActorSinkWithAckExample.java) { #actor-sink-ref-with-backpressure }
## Reactive Streams semantics

View file

@ -35,7 +35,7 @@ See also:
## Examples
Scala
: @@snip [ActorSourceSinkExample.scala](/akka-stream-typed/src/test/scala/docs/org/apache/pekko/stream/typed/ActorSourceSinkExample.scala) { #actor-source-ref }
: @@snip [ActorSourceSinkExample.scala](/stream-typed/src/test/scala/docs/org/apache/pekko/stream/typed/ActorSourceSinkExample.scala) { #actor-source-ref }
Java
: @@snip [ActorSourceExample.java](/akka-stream-typed/src/test/java/docs/org/apache/pekko/stream/typed/ActorSourceExample.java) { #actor-source-ref }
: @@snip [ActorSourceExample.java](/stream-typed/src/test/java/docs/org/apache/pekko/stream/typed/ActorSourceExample.java) { #actor-source-ref }

View file

@ -48,10 +48,10 @@ In this example we create the stream in an actor which itself reacts on the dema
Scala
: @@snip [ActorSourceSinkExample.scala](/akka-stream-typed/src/test/scala/docs/org/apache/pekko/stream/typed/ActorSourceSinkExample.scala) { #actor-source-with-backpressure }
: @@snip [ActorSourceSinkExample.scala](/stream-typed/src/test/scala/docs/org/apache/pekko/stream/typed/ActorSourceSinkExample.scala) { #actor-source-with-backpressure }
Java
: @@snip [snip](/akka-stream-typed/src/test/java/docs/org/apache/pekko/stream/typed/ActorSourceWithBackpressureExample.java) { #sample }
: @@snip [snip](/stream-typed/src/test/java/docs/org/apache/pekko/stream/typed/ActorSourceWithBackpressureExample.java) { #sample }
## Reactive Streams semantics

View file

@ -29,10 +29,10 @@ This API was added in Akka 2.6.0 and @ref:[may be changed](../../../common/may-c
This example wraps a `flow` handling @scala[`Int`s]@java[`Integer`s], and retries elements unless the result is 0 or negative, or `maxRetries` is hit.
Scala
: @@snip [RetryFlowSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/RetryFlowSpec.scala) { #withBackoff-demo }
: @@snip [RetryFlowSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/RetryFlowSpec.scala) { #withBackoff-demo }
Java
: @@snip [RetryFlowTest.java](/akka-stream-tests/src/test/java/org/apache/pekko/stream/javadsl/RetryFlowTest.java) { #withBackoff-demo }
: @@snip [RetryFlowTest.java](/stream-tests/src/test/java/org/apache/pekko/stream/javadsl/RetryFlowTest.java) { #withBackoff-demo }
## Reactive Streams semantics

View file

@ -30,10 +30,10 @@ This API was added in Akka 2.6.0 and @ref:[may be changed](../../../common/may-c
This example wraps a `flow` handling @scala[`Int`s]@java[`Integer`s] with `SomeContext` in context, and retries elements unless the result is 0 or negative, or `maxRetries` is hit.
Scala
: @@snip [RetryFlowSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/RetryFlowSpec.scala) { #retry-success }
: @@snip [RetryFlowSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/RetryFlowSpec.scala) { #retry-success }
Java
: @@snip [RetryFlowTest.java](/akka-stream-tests/src/test/java/org/apache/pekko/stream/javadsl/RetryFlowTest.java) { #retry-success }
: @@snip [RetryFlowTest.java](/stream-tests/src/test/java/org/apache/pekko/stream/javadsl/RetryFlowTest.java) { #retry-success }
## Reactive Streams semantics

View file

@ -18,7 +18,7 @@ which will be completed with a result of the Java @javadoc[Collector](java.util.
Given a stream of numbers we can collect the numbers into a collection using a Java `Collector`
Java
: @@snip [SinkTest.java](/akka-stream-tests/src/test/java/org/apache/pekko/stream/javadsl/SinkTest.java) { #collect-to-list }
: @@snip [SinkTest.java](/stream-tests/src/test/java/org/apache/pekko/stream/javadsl/SinkTest.java) { #collect-to-list }
## Reactive Streams semantics

View file

@ -27,10 +27,10 @@ See also:
This prints out every element to standard out.
Scala
: @@snip [snip](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/SinkSpec.scala) { #foreach }
: @@snip [snip](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/SinkSpec.scala) { #foreach }
Java
: @@snip [snip](/akka-stream-tests/src/test/java/org/apache/pekko/stream/javadsl/SinkTest.java) { #foreach }
: @@snip [snip](/stream-tests/src/test/java/org/apache/pekko/stream/javadsl/SinkTest.java) { #foreach }
## Reactive Streams semantics

View file

@ -17,7 +17,7 @@ after this the stream is canceled. If no element is emitted, the @scala[`Future`
## Example
Scala
: @@snip [HeadSinkSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/HeadSinkSpec.scala) { #head-operator-example }
: @@snip [HeadSinkSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/HeadSinkSpec.scala) { #head-operator-example }
Java
: @@snip [SinkDocExamples.java](/docs/src/test/java/jdocs/stream/operators/SinkDocExamples.java) { #head-operator-example }

View file

@ -17,7 +17,7 @@ completes. If the stream completes with no elements the @scala[`Future`] @java[`
## Example
Scala
: @@snip [LastSinkSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/LastSinkSpec.scala) { #last-operator-example }
: @@snip [LastSinkSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/LastSinkSpec.scala) { #last-operator-example }
Java
: @@snip [SinkDocExamples.java](/docs/src/test/java/jdocs/stream/operators/SinkDocExamples.java) { #last-operator-example }

View file

@ -18,7 +18,7 @@ completed with @scala[`None`] @java[an empty `Optional`].
## Example
Scala
: @@snip [LastSinkSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/LastSinkSpec.scala) { #lastOption-operator-example }
: @@snip [LastSinkSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/LastSinkSpec.scala) { #lastOption-operator-example }
Java
: @@snip [SinkDocExamples.java](/docs/src/test/java/jdocs/stream/operators/SinkDocExamples.java) { #lastOption-operator-example }

View file

@ -19,7 +19,7 @@ Materializes into a @scala[`Future`] @java[`CompletionStage`] that will be compl
## Example
Scala
: @@snip [SinkReduceSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/SinkSpec.scala) { #reduce-operator-example }
: @@snip [SinkReduceSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/SinkSpec.scala) { #reduce-operator-example }
Java
: @@snip [SinkDocExamples.java](/docs/src/test/java/jdocs/stream/operators/SinkDocExamples.java) { #reduce-operator-example }

View file

@ -20,7 +20,7 @@ if more element are emitted the sink will cancel the stream
Given a stream of numbers we can collect the numbers into a collection with the `seq` operator
Scala
: @@snip [SinkSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/SinkSpec.scala) { #seq-operator-example }
: @@snip [SinkSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/SinkSpec.scala) { #seq-operator-example }
Java
: @@snip [SinkDocExamples.java](/docs/src/test/java/jdocs/stream/operators/SinkDocExamples.java) { #seq-operator-example }

View file

@ -20,7 +20,7 @@ If there is a failure signaled in the stream the @scala[`Future`] @java[`Complet
## Example
Scala
: @@snip [TakeLastSinkSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/TakeLastSinkSpec.scala) { #takeLast-operator-example }
: @@snip [TakeLastSinkSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/TakeLastSinkSpec.scala) { #takeLast-operator-example }
Java
: @@snip [SinkDocExamples.java](/docs/src/test/java/jdocs/stream/operators/SinkDocExamples.java) { #takeLast-operator-example }

View file

@ -27,7 +27,7 @@ Both streams will be materialized together.
## Example
Scala
: @@snip [FlowConcatSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowConcatSpec.scala) { #concat }
: @@snip [FlowConcatSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowConcatSpec.scala) { #concat }
Java
: @@snip [SourceOrFlow.java](/docs/src/test/java/jdocs/stream/operators/SourceOrFlow.java) { #concat }

View file

@ -20,7 +20,7 @@ To defer the materialization of the given sources (or to completely avoid its ma
## Example
Scala
: @@snip [FlowConcatSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowConcatAllLazySpec.scala) { #concatAllLazy }
: @@snip [FlowConcatSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowConcatAllLazySpec.scala) { #concatAllLazy }
Java
: @@snip [SourceOrFlow.java](/docs/src/test/java/jdocs/stream/operators/SourceOrFlow.java) { #concatAllLazy }

View file

@ -22,7 +22,7 @@ If materialized values needs to be collected `concatLazyMat` is available.
## Example
Scala
: @@snip [FlowConcatSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowConcatSpec.scala) { #concatLazy }
: @@snip [FlowConcatSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowConcatSpec.scala) { #concatLazy }
Java
: @@snip [SourceOrFlow.java](/docs/src/test/java/jdocs/stream/operators/SourceOrFlow.java) { #concatLazy }

View file

@ -17,7 +17,7 @@ source completes the rest of the other stream will be emitted.
## Example
Scala
: @@snip [FlowInterleaveSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowInterleaveSpec.scala) { #interleave }
: @@snip [FlowInterleaveSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowInterleaveSpec.scala) { #interleave }
Java
: @@snip [SourceOrFlow.java](/docs/src/test/java/jdocs/stream/operators/SourceOrFlow.java) { #interleave }

View file

@ -18,7 +18,7 @@ the flow is complete.
## Example
Scala
: @@snip [FlowInterleaveSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowInterleaveAllSpec.scala) { #interleaveAll }
: @@snip [FlowInterleaveSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowInterleaveAllSpec.scala) { #interleaveAll }
Java
: @@snip [SourceOrFlow.java](/docs/src/test/java/jdocs/stream/operators/SourceOrFlow.java) { #interleaveAll }

View file

@ -16,7 +16,7 @@ Merge multiple sources. Picks elements randomly if all sources has elements read
## Example
Scala
: @@snip [FlowMergeSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowMergeSpec.scala) { #merge }
: @@snip [FlowMergeSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowMergeSpec.scala) { #merge }
Java
: @@snip [SourceOrFlow.java](/docs/src/test/java/jdocs/stream/operators/SourceOrFlow.java) { #merge }

View file

@ -15,7 +15,7 @@ Merge multiple sources. Picks elements randomly if all sources has elements read
## Example
Scala
: @@snip [FlowMergeSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowMergeAllSpec.scala) { #merge-all }
: @@snip [FlowMergeSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowMergeAllSpec.scala) { #merge-all }
Java
: @@snip [SourceOrFlow.java](/docs/src/test/java/jdocs/stream/operators/SourceOrFlow.java) { #merge-all }

View file

@ -18,7 +18,7 @@ prefer the left source (see examples).
## Example
Scala
: @@snip [FlowMergeSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowMergeSpec.scala) { #mergePreferred }
: @@snip [FlowMergeSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowMergeSpec.scala) { #mergePreferred }
Java
: @@snip [SourceOrFlow.java](/docs/src/test/java/jdocs/stream/operators/SourceOrFlow.java) { #mergePreferred }

View file

@ -18,7 +18,7 @@ prioritized and similarly for the right source. The priorities for each source m
## Example
Scala
: @@snip [FlowMergeSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowMergeSpec.scala) { #mergePrioritized }
: @@snip [FlowMergeSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowMergeSpec.scala) { #mergePrioritized }
Java
: @@snip [SourceOrFlow.java](/docs/src/test/java/jdocs/stream/operators/SourceOrFlow.java) { #mergePrioritized }

View file

@ -17,7 +17,7 @@ smallest element.
## Example
Scala
: @@snip [FlowMergeSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowMergeSpec.scala) { #merge-sorted }
: @@snip [FlowMergeSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowMergeSpec.scala) { #merge-sorted }
Java
: @@snip [SourceOrFlow.java](/docs/src/test/java/jdocs/stream/operators/SourceOrFlow.java) { #merge-sorted }

View file

@ -22,7 +22,7 @@ Signal errors downstream, regardless which of the two sources emitted the error.
## Example
Scala
: @@snip [FlowOrElseSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowOrElseSpec.scala) { #or-else }
: @@snip [FlowOrElseSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowOrElseSpec.scala) { #or-else }
Java
: @@snip [SourceOrFlow.java](/docs/src/test/java/jdocs/stream/operators/SourceOrFlow.java) { #or-else }

View file

@ -36,7 +36,7 @@ use @ref(prependLazy)[prependLazy.md]
## Example
Scala
: @@snip [FlowOrElseSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowPrependSpec.scala) { #prepend }
: @@snip [FlowOrElseSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowPrependSpec.scala) { #prepend }
Java
: @@snip [SourceOrFlow.java](/docs/src/test/java/jdocs/stream/operators/SourceOrFlow.java) { #prepend }

View file

@ -22,7 +22,7 @@ See also @ref[prepend](prepend.md) which is detached.
## Example
Scala
: @@snip [FlowPrependSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowPrependSpec.scala) { #prependLazy }
: @@snip [FlowPrependSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowPrependSpec.scala) { #prependLazy }
Java
: @@snip [SourceOrFlow.java](/docs/src/test/java/jdocs/stream/operators/SourceOrFlow.java) { #prependLazy }

View file

@ -22,7 +22,7 @@ See also:
## Examples
Scala
: @@snip [FlowZipSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowZipSpec.scala) { #zip }
: @@snip [FlowZipSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowZipSpec.scala) { #zip }
Java
: @@snip [SourceOrFlow.java](/docs/src/test/java/jdocs/stream/operators/SourceOrFlow.java) { #zip }

View file

@ -24,7 +24,7 @@ See also:
## Examples
Scala
: @@snip [FlowZipWithSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowZipWithSpec.scala) { #zip-with }
: @@snip [FlowZipWithSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowZipWithSpec.scala) { #zip-with }
Java
: @@snip [SourceOrFlow.java](/docs/src/test/java/jdocs/stream/operators/SourceOrFlow.java) { #zip-with }

View file

@ -22,7 +22,7 @@ See also:
## Example
Scala
: @@snip [FlowZipWithIndexSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowZipWithIndexSpec.scala) { #zip-with-index }
: @@snip [FlowZipWithIndexSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowZipWithIndexSpec.scala) { #zip-with-index }
Java
: @@snip [SourceOrFlow.java](/docs/src/test/java/jdocs/stream/operators/SourceOrFlow.java) { #zip-with-index }

View file

@ -7,7 +7,7 @@ Integration with Reactive Streams, materializes into a @javadoc[Subscriber](java
## Signature
Scala
: @@snip[JavaFlowSupport.scala](/akka-stream/src/main/scala-jdk-9/akka/stream/scaladsl/JavaFlowSupport.scala) { #asSubscriber }
: @@snip[JavaFlowSupport.scala](/stream/src/main/scala-jdk-9/akka/stream/scaladsl/JavaFlowSupport.scala) { #asSubscriber }
Java
: @@snip[JavaFlowSupport.java](/docs/src/test/java-jdk9-only/jdocs/stream/operators/source/AsSubscriber.java) { #api }

View file

@ -19,19 +19,19 @@ terminated with an exception.
## Examples
Scala
: @@snip [cycle.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/SourceSpec.scala) { #cycle }
: @@snip [cycle.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/SourceSpec.scala) { #cycle }
Java
: @@snip [cycle.java](/akka-stream-tests/src/test/java/org/apache/pekko/stream/javadsl/SourceTest.java) { #cycle }
: @@snip [cycle.java](/stream-tests/src/test/java/org/apache/pekko/stream/javadsl/SourceTest.java) { #cycle }
When the iterator is empty, the stream will be terminated with an _IllegalArgumentException_
Scala
: @@snip [cycleError.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/SourceSpec.scala) { #cycle-error }
: @@snip [cycleError.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/SourceSpec.scala) { #cycle-error }
Java
: @@snip [cycle.java](/akka-stream-tests/src/test/java/org/apache/pekko/stream/javadsl/SourceTest.java) { #cycle-error }
: @@snip [cycle.java](/stream-tests/src/test/java/org/apache/pekko/stream/javadsl/SourceTest.java) { #cycle-error }
## Reactive Streams semantics

View file

@ -7,7 +7,7 @@ Integration with Reactive Streams, subscribes to a @javadoc[Publisher](java.util
## Signature
Scala
: @@snip[JavaFlowSupport.scala](/akka-stream/src/main/scala-jdk-9/akka/stream/scaladsl/JavaFlowSupport.scala) { #fromPublisher }
: @@snip[JavaFlowSupport.scala](/stream/src/main/scala-jdk-9/akka/stream/scaladsl/JavaFlowSupport.scala) { #fromPublisher }
Java
: @@snip[JavaFlowSupport.java](/docs/src/test/java-jdk9-only/jdocs/stream/operators/source/FromPublisher.java) { #api }

View file

@ -17,7 +17,7 @@ prioritized and similarly for the rest of the sources. The priorities for each s
## Example
Scala
: @@snip [FlowMergeSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowMergeSpec.scala) { #mergePrioritizedN }
: @@snip [FlowMergeSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/FlowMergeSpec.scala) { #mergePrioritizedN }
Java
: @@snip [SourceOrFlow.java](/docs/src/test/java/jdocs/stream/operators/SourceOrFlow.java) { #mergePrioritizedN }

View file

@ -23,10 +23,10 @@ See also:
This example prints the first 4 elements emitted by `Source.repeat`.
Scala
: @@snip [snip](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/SourceSpec.scala) { #repeat }
: @@snip [snip](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/SourceSpec.scala) { #repeat }
Java
: @@snip [snip](/akka-stream-tests/src/test/java/org/apache/pekko/stream/javadsl/SourceTest.java) { #repeat }
: @@snip [snip](/stream-tests/src/test/java/org/apache/pekko/stream/javadsl/SourceTest.java) { #repeat }

View file

@ -21,10 +21,10 @@ See also:
## Examples
Scala
: @@snip [source.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/SourceSpec.scala) { #imports #source-single }
: @@snip [source.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/SourceSpec.scala) { #imports #source-single }
Java
: @@snip [source.java](/akka-stream-tests/src/test/java/org/apache/pekko/stream/javadsl/SourceTest.java) { #imports #source-single }
: @@snip [source.java](/stream-tests/src/test/java/org/apache/pekko/stream/javadsl/SourceTest.java) { #imports #source-single }
## Reactive Streams semantics


@@ -27,7 +27,7 @@ The `KillSwitch` @scala[trait] @java[interface] allows to:
Scala
: @@snip [KillSwitch.scala](/akka-stream/src/main/scala/org/apache/pekko/stream/KillSwitch.scala) { #kill-switch }
: @@snip [KillSwitch.scala](/stream/src/main/scala/org/apache/pekko/stream/KillSwitch.scala) { #kill-switch }
After the first call to either `shutdown` or `abort`, all subsequent calls to any of these methods will be ignored.
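This first-call-wins behavior can be illustrated with a small stand-alone analogue (a sketch only, not Pekko's actual `KillSwitch` implementation; the class and field names here are hypothetical):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

// Simplified analogue of a kill switch: only the first shutdown/abort takes effect.
class SimpleKillSwitch {
    private final AtomicBoolean used = new AtomicBoolean(false);
    final CompletableFuture<String> completion = new CompletableFuture<>();

    void shutdown() {
        // complete the "stream" normally, but only if no earlier call won the race
        if (used.compareAndSet(false, true)) completion.complete("completed");
    }

    void abort(Throwable cause) {
        // fail the "stream"; ignored if shutdown/abort already happened
        if (used.compareAndSet(false, true)) completion.completeExceptionally(cause);
    }
}
```

Calling `shutdown()` and then `abort(...)` leaves the completion successful, mirroring the "all subsequent calls are ignored" semantics described above.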
Stream completion is performed by both


@@ -68,7 +68,7 @@ Scala
: @@snip [GraphDSLDocSpec.scala](/docs/src/test/scala/docs/stream/GraphDSLDocSpec.scala) { #simple-graph-dsl }
Java
: @@snip [GraphDSLTest.java](/akka-stream-tests/src/test/java/org/apache/pekko/stream/javadsl/GraphDslTest.java) { #simple-graph-dsl }
: @@snip [GraphDSLTest.java](/stream-tests/src/test/java/org/apache/pekko/stream/javadsl/GraphDslTest.java) { #simple-graph-dsl }
@@@ note
@@ -103,16 +103,16 @@ Scala
: @@snip [GraphDSLDocSpec.scala](/docs/src/test/scala/docs/stream/GraphDSLDocSpec.scala) { #graph-dsl-reusing-a-flow }
Java
: @@snip [GraphDSLTest.java](/akka-stream-tests/src/test/java/org/apache/pekko/stream/javadsl/GraphDslTest.java) { #graph-dsl-reusing-a-flow }
: @@snip [GraphDSLTest.java](/stream-tests/src/test/java/org/apache/pekko/stream/javadsl/GraphDslTest.java) { #graph-dsl-reusing-a-flow }
In some cases we may have a list of graph elements, for example if they are dynamically created.
If these graphs have similar signatures, we can construct a graph collecting all their materialized values as a collection:
Scala
: @@snip [GraphOpsIntegrationSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/GraphOpsIntegrationSpec.scala) { #graph-from-list }
: @@snip [GraphOpsIntegrationSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/GraphOpsIntegrationSpec.scala) { #graph-from-list }
Java
: @@snip [GraphDSLTest.java](/akka-stream-tests/src/test/java/org/apache/pekko/stream/javadsl/GraphDslTest.java) { #graph-from-list }
: @@snip [GraphDSLTest.java](/stream-tests/src/test/java/org/apache/pekko/stream/javadsl/GraphDslTest.java) { #graph-from-list }
<a id="partial-graph-dsl"></a>
@@ -303,7 +303,7 @@ this purpose exists the special type `BidiFlow` which is a graph that
has exactly two open inlets and two open outlets. The corresponding shape is
called `BidiShape` and is defined like this:
@@snip [Shape.scala](/akka-stream/src/main/scala/org/apache/pekko/stream/Shape.scala) { #bidi-shape }
@@snip [Shape.scala](/stream/src/main/scala/org/apache/pekko/stream/Shape.scala) { #bidi-shape }
A bidirectional flow is defined just like a unidirectional `Flow` as
@@ -370,7 +370,7 @@ Scala
: @@snip [GraphDSLDocSpec.scala](/docs/src/test/scala/docs/stream/GraphDSLDocSpec.scala) { #graph-dsl-matvalue }
Java
: @@snip [GraphDSLTest.java](/akka-stream-tests/src/test/java/org/apache/pekko/stream/javadsl/GraphDslTest.java) { #graph-dsl-matvalue }
: @@snip [GraphDSLTest.java](/stream-tests/src/test/java/org/apache/pekko/stream/javadsl/GraphDslTest.java) { #graph-dsl-matvalue }
Be careful not to introduce a cycle where the materialized value actually contributes to the materialized value.
@@ -380,7 +380,7 @@ Scala
: @@snip [GraphDSLDocSpec.scala](/docs/src/test/scala/docs/stream/GraphDSLDocSpec.scala) { #graph-dsl-matvalue-cycle }
Java
: @@snip [GraphDSLTest.java](/akka-stream-tests/src/test/java/org/apache/pekko/stream/javadsl/GraphDslTest.java) { #graph-dsl-matvalue-cycle }
: @@snip [GraphDSLTest.java](/stream-tests/src/test/java/org/apache/pekko/stream/javadsl/GraphDslTest.java) { #graph-dsl-matvalue-cycle }
<a id="graph-cycles"></a>


@@ -137,10 +137,10 @@ There is a bidi implementing this protocol provided by @apidoc[Framing.simpleFra
@scala[@scaladoc[JsonFraming](pekko.stream.scaladsl.JsonFraming$#objectScanner(maximumObjectLength:Int):org.apache.pekko.stream.scaladsl.Flow[org.apache.pekko.util.ByteString,org.apache.pekko.util.ByteString,org.apache.pekko.NotUsed])]@java[@javadoc[JsonFraming](pekko.stream.javadsl.JsonFraming#objectScanner(int))] separates valid JSON objects from incoming @apidoc[util.ByteString] objects:
Scala
: @@snip [JsonFramingSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/JsonFramingSpec.scala) { #using-json-framing }
: @@snip [JsonFramingSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/scaladsl/JsonFramingSpec.scala) { #using-json-framing }
Java
: @@snip [JsonFramingTest.java](/akka-stream-tests/src/test/java/org/apache/pekko/stream/javadsl/JsonFramingTest.java) { #using-json-framing }
: @@snip [JsonFramingTest.java](/stream-tests/src/test/java/org/apache/pekko/stream/javadsl/JsonFramingTest.java) { #using-json-framing }
### TLS
@@ -151,10 +151,10 @@ see the @scala[@scaladoc[`Tcp Scaladoc`](pekko.stream.scaladsl.Tcp)]@java[@javad
Using TLS requires a keystore and a truststore and then a somewhat involved dance of configuring the SSLEngine and the details for how the session should be negotiated:
Scala
: @@snip [TcpSpec.scala](/akka-stream-tests/src/test/scala/org/apache/pekko/stream/io/TcpSpec.scala) { #setting-up-ssl-engine }
: @@snip [TcpSpec.scala](/stream-tests/src/test/scala/org/apache/pekko/stream/io/TcpSpec.scala) { #setting-up-ssl-engine }
Java
: @@snip [TcpTest.java](/akka-stream-tests/src/test/java/org/apache/pekko/stream/javadsl/TcpTest.java) { #setting-up-ssl-engine }
: @@snip [TcpTest.java](/stream-tests/src/test/java/org/apache/pekko/stream/javadsl/TcpTest.java) { #setting-up-ssl-engine }
The `SSLEngine` instance can then be used with the binding or outgoing connection factory methods.
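The JSSE side of that dance is plain `javax.net.ssl`; a minimal client-mode engine setup might look like the following sketch (using the default `SSLContext` for brevity, whereas a real setup would initialize the context from your keystore and truststore; the host and port are illustrative):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

class SslEngineSetup {
    // Build a client-mode SSLEngine for the given peer (peer info enables session caching).
    static SSLEngine clientEngine(String host, int port) {
        try {
            SSLContext ctx = SSLContext.getDefault(); // or an SSLContext built from your keystore
            SSLEngine engine = ctx.createSSLEngine(host, port);
            engine.setUseClientMode(true);            // this side initiates the TLS handshake
            return engine;
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException("no default TLS context available", e);
        }
    }
}
```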


@@ -211,4 +211,4 @@ Java
Other settings can be set globally in your `application.conf`, by overriding any of the following values
in the `pekko.stream.materializer.stream-ref.*` keyspace:
@@snip [reference.conf](/akka-stream/src/main/resources/reference.conf) { #stream-ref }
@@snip [reference.conf](/stream/src/main/resources/reference.conf) { #stream-ref }
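As an illustration, such an override in `application.conf` could look like this sketch (the setting names are taken from Akka's reference configuration and should be verified against the `reference.conf` bundled with your Pekko version):

```hocon
pekko.stream.materializer.stream-ref {
  # elements buffered on the receiving side before backpressure is applied
  buffer-capacity = 64
  # how long a stream ref waits for the remote side to subscribe before failing
  subscription-timeout = 10 seconds
}
```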


@@ -8,16 +8,16 @@ For the new API see @ref[testing](typed/testing.md).
To use Akka Testkit, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group="org.apache.pekko"
artifact="akka-testkit_$scala.binary.version$"
artifact="pekko-testkit_$scala.binary.version$"
version=PekkoVersion
scope="test"
}
@@project-info{ projectId="akka-testkit" }
@@project-info{ projectId="testkit" }
## Introduction
@@ -457,7 +457,7 @@ This code can be used to forward messages, e.g. in a chain `A --> Probe -->
B`, as long as a certain protocol is obeyed.
Scala
: @@snip [TestProbeSpec.scala](/akka-testkit/src/test/scala/org/apache/pekko/testkit/TestProbeSpec.scala) { #autopilot }
: @@snip [TestProbeSpec.scala](/testkit/src/test/scala/org/apache/pekko/testkit/TestProbeSpec.scala) { #autopilot }
Java
: @@snip [TestKitDocTest.java](/docs/src/test/java/jdocs/testkit/TestKitDocTest.java) { #test-auto-pilot }


@@ -44,37 +44,37 @@ To facilitate this dynamic aspect you can also subscribe to changes with the `Re
These imports are used in the following example:
Scala
: @@snip [ReceptionistExample](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/ReceptionistExample.scala) { #import }
: @@snip [ReceptionistExample](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/ReceptionistExample.scala) { #import }
Java
: @@snip [ReceptionistExample](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/ReceptionistExample.java) { #import }
: @@snip [ReceptionistExample](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/ReceptionistExample.java) { #import }
First we create a `PingService` actor and register it with the `Receptionist` against a
@apidoc[receptionist.ServiceKey] that will later be used to look up the reference:
Scala
: @@snip [ReceptionistExample](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/ReceptionistExample.scala) { #ping-service }
: @@snip [ReceptionistExample](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/ReceptionistExample.scala) { #ping-service }
Java
: @@snip [ReceptionistExample](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/ReceptionistExample.java) { #ping-service }
: @@snip [ReceptionistExample](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/ReceptionistExample.java) { #ping-service }
Then we have another actor that requires a `PingService` to be constructed:
Scala
: @@snip [ReceptionistExample](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/ReceptionistExample.scala) { #pinger }
: @@snip [ReceptionistExample](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/ReceptionistExample.scala) { #pinger }
Java
: @@snip [ReceptionistExample](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/ReceptionistExample.java) { #pinger }
: @@snip [ReceptionistExample](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/ReceptionistExample.java) { #pinger }
Finally in the guardian actor we spawn the service as well as subscribing to any actors registering
against the @apidoc[receptionist.ServiceKey]. Subscribing means that the guardian actor will be informed of any
new registrations via a `Listing` message:
Scala
: @@snip [ReceptionistExample](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/ReceptionistExample.scala) { #pinger-guardian }
: @@snip [ReceptionistExample](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/ReceptionistExample.scala) { #pinger-guardian }
Java
: @@snip [ReceptionistExample](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/ReceptionistExample.java) { #pinger-guardian }
: @@snip [ReceptionistExample](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/ReceptionistExample.java) { #pinger-guardian }
Each time a new `PingService` is registered (which is just a single time in this example) the
guardian actor spawns a `Pinger` for each currently known `PingService`. The `Pinger`
@@ -85,10 +85,10 @@ of the current state without receiving further updates by sending the `Reception
receptionist. An example of using `Receptionist.Find`:
Scala
: @@snip [ReceptionistExample](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/ReceptionistExample.scala) { #find }
: @@snip [ReceptionistExample](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/ReceptionistExample.scala) { #find }
Java
: @@snip [ReceptionistExample](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/ReceptionistExample.java) { #find }
: @@snip [ReceptionistExample](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/ReceptionistExample.java) { #find }
Also note how a @apidoc[messageAdapter](actor.typed.*.ActorContext) {scala="#messageAdapter[U](f:U=%3ET)(implicitevidence$1:scala.reflect.ClassTag[U]):org.apache.pekko.actor.typed.ActorRef[U]" java="#messageAdapter(java.lang.Class,org.apache.pekko.japi.function.Function)"} is used to convert the `Receptionist.Listing` to a message type that
the `PingManager` understands.
@@ -100,10 +100,10 @@ The command can optionally send an acknowledgement once the local receptionist h
that all subscribers have seen that the instance has been removed, it may still receive messages from subscribers for some time after this.
Scala
: @@snip [ReceptionistExample](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/ReceptionistExample.scala) { #deregister }
: @@snip [ReceptionistExample](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/ReceptionistExample.scala) { #deregister }
Java
: @@snip [ReceptionistExample](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/ReceptionistExample.java) { #deregister }
: @@snip [ReceptionistExample](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/ReceptionistExample.java) { #deregister }
## Cluster Receptionist


@@ -55,10 +55,10 @@ If a behavior needs to use the `ActorContext`, for example to spawn child actors
@scala[`context.self`]@java[`context.getSelf()`], it can be obtained by wrapping construction with @apidoc[Behaviors.setup](typed.*.Behaviors$) {scala="#setup[T](factory:org.apache.pekko.actor.typed.scaladsl.ActorContext[T]=%3Eorg.apache.pekko.actor.typed.Behavior[T]):org.apache.pekko.actor.typed.Behavior[T]" java="#setup(org.apache.pekko.japi.function.Function)"}:
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #hello-world-main }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #hello-world-main }
Java
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #hello-world-main-setup }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #hello-world-main-setup }
#### ActorContext Thread Safety
@@ -75,10 +75,10 @@ system are directed to the root actor. The root actor is defined by the behavior
named `HelloWorldMain` in the example below:
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #hello-world }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #hello-world }
Java
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #hello-world }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #hello-world }
For very simple applications the guardian may contain the actual application logic and handle messages. As soon as the application
handles more than one concern the guardian should instead just bootstrap the application, spawn the various subsystems as
@@ -106,19 +106,19 @@ is started, it spawns a child actor described by the `HelloWorld` behavior. Addi
`SayHello` message, it creates a child actor defined by the behavior `HelloWorldBot`:
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #hello-world-main }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #hello-world-main }
Java
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #hello-world-main }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #hello-world-main }
To specify a dispatcher when spawning an actor use @apidoc[DispatcherSelector]. If not specified, the actor will
use the default dispatcher, see @ref:[Default dispatcher](dispatchers.md#default-dispatcher) for details.
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #hello-world-main-with-dispatchers }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #hello-world-main-with-dispatchers }
Java
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #hello-world-main-with-dispatchers }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #hello-world-main-with-dispatchers }
Refer to @ref:[Actors](actors.md#first-example) for a walk-through of the above examples.
@@ -138,18 +138,18 @@ similar to how `ActorSystem.actorOf` can be used in classic actors with the diff
The guardian behavior can be defined as:
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/SpawnProtocolDocSpec.scala) { #imports1 #main }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/SpawnProtocolDocSpec.scala) { #imports1 #main }
Java
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/SpawnProtocolDocTest.java) { #imports1 #main }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/SpawnProtocolDocTest.java) { #imports1 #main }
and the @apidoc[ActorSystem](typed.ActorSystem) can be created with that `main` behavior and asked to spawn other actors:
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/SpawnProtocolDocSpec.scala) { #imports2 #system-spawn }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/SpawnProtocolDocSpec.scala) { #imports2 #system-spawn }
Java
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/SpawnProtocolDocTest.java) { #imports2 #system-spawn }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/SpawnProtocolDocTest.java) { #imports2 #system-spawn }
The @apidoc[SpawnProtocol$] can also be used at other places in the actor hierarchy. It doesn't have to be the root
guardian actor.
@@ -170,14 +170,14 @@ When an actor is stopped, it receives the @apidoc[PostStop](typed.PostStop) sign
Here is an illustrating example:
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/GracefulStopDocSpec.scala) {
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/GracefulStopDocSpec.scala) {
#imports
#master-actor
#worker-actor
}
Java
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/GracefulStopDocTest.java) {
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/GracefulStopDocTest.java) {
#imports
#master-actor
#worker-actor
@@ -194,10 +194,10 @@ an actor can @apidoc[watch](typed.*.ActorContext) {scala="#watch[U](other:org.ap
termination (see @ref:[Stopping Actors](#stopping-actors)) of the watched actor.
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/GracefulStopDocSpec.scala) { #master-actor-watch }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/GracefulStopDocSpec.scala) { #master-actor-watch }
Java
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/GracefulStopDocTest.java) { #master-actor-watch }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/GracefulStopDocTest.java) { #master-actor-watch }
An alternative to @apidoc[watch](typed.*.ActorContext) {scala="#watch[U](other:org.apache.pekko.actor.typed.ActorRef[U]):Unit" java="#watch(org.apache.pekko.actor.typed.ActorRef)"} is @apidoc[watchWith](typed.*.ActorContext) {scala="#watchWith[U](other:org.apache.pekko.actor.typed.ActorRef[U],msg:T):Unit" java="#watchWith(org.apache.pekko.actor.typed.ActorRef,T)"}, which allows specifying a custom message instead of the `Terminated`.
This is often preferred over using `watch` and the `Terminated` signal because additional information can
@@ -206,10 +206,10 @@ be included in the message that can be used later when receiving it.
Similar example as above, but using `watchWith` and replies to the original requestor when the job has finished.
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/GracefulStopDocSpec.scala) { #master-actor-watchWith }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/GracefulStopDocSpec.scala) { #master-actor-watchWith }
Java
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/GracefulStopDocTest.java) { #master-actor-watchWith }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/GracefulStopDocTest.java) { #master-actor-watchWith }
Note how the `replyToWhenDone` is included in the `watchWith` message and then used later when receiving the
`JobTerminated` message.
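Outside of Pekko, the essence of `watchWith` (deliver a caller-chosen message when the watched party terminates) can be sketched as a completion callback. This is a hand-rolled analogue for illustration, not the actual API:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

class Watcher {
    // Watch 'job'; when it terminates (success or failure), deliver 'terminationMsg'
    // (which can carry extra context such as a replyTo reference) to 'inbox'.
    static <M> void watchWith(CompletableFuture<?> job, M terminationMsg, Consumer<M> inbox) {
        job.whenComplete((value, error) -> inbox.accept(terminationMsg));
    }
}
```

The custom message plays the role of `JobTerminated` above: the watcher composes it up front, so it can embed whatever data will be needed when it is later received.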


@@ -10,14 +10,14 @@ You are viewing the documentation for the new actor APIs, to view the Akka Class
To use Akka Actors, add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group=org.apache.pekko
artifact=akka-actor-typed_$scala.binary.version$
artifact=pekko-actor-typed_$scala.binary.version$
version=PekkoVersion
group2=org.apache.pekko
artifact2=akka-actor-testkit-typed_$scala.binary.version$
artifact2=pekko-actor-testkit-typed_$scala.binary.version$
version2=PekkoVersion
scope2=test
}
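The renames in this dependency block follow one mechanical rule: the group id becomes `org.apache.pekko` and the `akka-` artifact prefix becomes `pekko-`, while the Scala binary-version suffix stays untouched. A sketch of that mapping (the helper name is ours, not part of any build tool):

```java
// Maps an old Akka artifact id to its Pekko equivalent by swapping the prefix.
class ArtifactRename {
    static String toPekko(String artifact) {
        return artifact.startsWith("akka-")
            ? "pekko-" + artifact.substring("akka-".length())
            : artifact; // already renamed, or not an Akka module: leave unchanged
    }
}
```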
@@ -26,7 +26,7 @@ Both the Java and Scala DSLs of Akka modules are bundled in the same JAR. For a
when using an IDE such as Eclipse or IntelliJ, you can disable the auto-importer from suggesting `javadsl`
imports when working in Scala, or vice versa. See @ref:[IDE Tips](../additional/ide.md).
@@project-info{ projectId="akka-actor-typed" }
@@project-info{ projectId="actor-typed" }
## Akka Actors
@@ -55,10 +55,10 @@ look like?
In all of the following these imports are assumed:
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #imports }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #imports }
Java
: @@snip [IntroSpec.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #imports }
: @@snip [IntroSpec.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #imports }
With these in place we can define our first Actor, and it will say
hello!
@@ -66,10 +66,10 @@ hello!
![hello-world1.png](./images/hello-world1.png)
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #hello-world-actor }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #hello-world-actor }
Java
: @@snip [IntroSpec.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #hello-world-actor }
: @@snip [IntroSpec.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #hello-world-actor }
This small piece of code defines two message types, one for commanding the
Actor to greet someone and one that the Actor will use to confirm that it has
@@ -114,10 +114,10 @@ of messages have been reached.
![hello-world2.png](./images/hello-world2.png)
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #hello-world-bot }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #hello-world-bot }
Java
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #hello-world-bot }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #hello-world-bot }
@scala[Note how this Actor manages the counter by changing the behavior for each `Greeted` reply
rather than using any variables.]@java[Note how this Actor manages the counter with an instance variable.]
@@ -127,18 +127,18 @@ message at a time.
A third actor spawns the `Greeter` and the `HelloWorldBot` and starts the interaction between those.
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #hello-world-main }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #hello-world-main }
Java
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #hello-world-main }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #hello-world-main }
Now we want to try out this Actor, so we must start an ActorSystem to host it:
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #hello-world }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #hello-world }
Java
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #hello-world }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #hello-world }
We start an Actor system from the defined `HelloWorldMain` behavior and send two `SayHello` messages that
will kick off the interaction between two separate `HelloWorldBot` actors and the single `Greeter` actor.
@@ -168,7 +168,7 @@ You will also need to add a @ref:[logging dependency](logging.md) to see that ou
#### Here is another example that you can edit and run in the browser:
@@fiddle [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #fiddle_code template=Akka layout=v75 minheight=400px }
@@fiddle [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #fiddle_code template=Akka layout=v75 minheight=400px }
@@@
@@ -197,10 +197,10 @@ chat room Actor will disseminate all posted messages to all currently connected
client Actors. The protocol definition could look like the following:
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #chatroom-protocol }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #chatroom-protocol }
Java
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #chatroom-protocol }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #chatroom-protocol }
Initially the client Actors only get access to an @apidoc[typed.ActorRef[GetSession]]
which allows them to make the first step. Once a client's session has been
@@ -217,10 +217,10 @@ full protocol that can involve multiple Actors and that can evolve over
multiple steps. Here's the implementation of the chat room protocol:
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #chatroom-behavior }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #chatroom-behavior }
Java
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #chatroom-behavior }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #chatroom-behavior }
The state is managed by changing behavior rather than using any variables.
@@ -260,10 +260,10 @@ problematic, so passing an @scala[`ActorRef[PublishSessionMessage]`]@java[`Actor
In order to see this chat room in action we need to write a client Actor that can use it:
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #chatroom-gabbler }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #chatroom-gabbler }
Java
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #chatroom-gabbler }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #chatroom-gabbler }
From this behavior we can create an Actor that will accept a chat room session,
post a message, wait to see it published, and then terminate. The last step
@@ -288,10 +288,10 @@ nonsensical) or we start both of them from a third Actor—our only sensible
choice:
Scala
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #chatroom-main }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/IntroSpec.scala) { #chatroom-main }
Java
: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #chatroom-main }
: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/IntroTest.java) { #chatroom-main }
In good tradition we call the `Main` Actor what it is: it directly
corresponds to the `main` method in a traditional Java application. This
@@ -339,10 +339,10 @@ Let's repeat the chat room sample from @ref:[A more complex example above](#a-mo
using `AbstractBehavior`. The protocol for interacting with the actor looks the same:
Scala
: @@snip [OOIntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/OOIntroSpec.scala) { #chatroom-protocol }
: @@snip [OOIntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/OOIntroSpec.scala) { #chatroom-protocol }
Java
: @@snip [OOIntroTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/OOIntroTest.java) { #chatroom-protocol }
: @@snip [OOIntroTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/OOIntroTest.java) { #chatroom-protocol }
Initially the client Actors only get access to an @scala[`ActorRef[GetSession]`]@java[`ActorRef<GetSession>`]
which allows them to make the first step. Once a client's session has been
@@ -359,10 +359,10 @@ full protocol that can involve multiple Actors and that can evolve over
multiple steps. Here's the `AbstractBehavior` implementation of the chat room protocol:
Scala
: @@snip [OOIntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/OOIntroSpec.scala) { #chatroom-behavior }
: @@snip [OOIntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/OOIntroSpec.scala) { #chatroom-behavior }
Java
: @@snip [OOIntroTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/OOIntroTest.java) { #chatroom-behavior }
: @@snip [OOIntroTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/OOIntroTest.java) { #chatroom-behavior }
The state is managed through fields in the class, just like with a regular object-oriented class.
As the state is mutable, we never return a different behavior from the message logic, but can return
@@ -418,10 +418,10 @@ In order to see this chat room in action we need to write a client Actor that ca
@scala[, for this stateless actor it doesn't make much sense to use the `AbstractBehavior` so let's just reuse the functional style gabbler from the sample above]:
Scala
: @@snip [OOIntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/OOIntroSpec.scala) { #chatroom-gabbler }
: @@snip [OOIntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/OOIntroSpec.scala) { #chatroom-gabbler }
Java
: @@snip [OOIntroTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/OOIntroTest.java) { #chatroom-gabbler }
: @@snip [OOIntroTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/OOIntroTest.java) { #chatroom-gabbler }
Now to try things out we must start both a chat room and a gabbler and of
course we do this inside an Actor system. Since there can be only one user guardian
@ -432,10 +432,10 @@ choice:
Scala
: @@snip [OOIntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/OOIntroSpec.scala) { #chatroom-main }
: @@snip [OOIntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/OOIntroSpec.scala) { #chatroom-main }
Java
: @@snip [OOIntroTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/OOIntroTest.java) { #chatroom-main }
: @@snip [OOIntroTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/OOIntroTest.java) { #chatroom-main }
In good tradition we call the `Main` Actor what it is: it directly
corresponds to the `main` method in a traditional Java application. This


@ -111,10 +111,10 @@ if you see this in log messages.
You can retrieve information about what data center a member belongs to:
Scala
: @@snip [BasicClusterExampleSpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #dcAccess }
: @@snip [BasicClusterExampleSpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #dcAccess }
Java
: @@snip [BasicClusterExampleTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #dcAccess }
: @@snip [BasicClusterExampleTest.java](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #dcAccess }
## Failure Detection
@ -171,10 +171,10 @@ having a global singleton in one data center and accessing it from other data ce
This is how to create a singleton proxy for a specific data center:
Scala
: @@snip [SingletonCompileOnlySpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/SingletonCompileOnlySpec.scala) { #create-singleton-proxy-dc }
: @@snip [SingletonCompileOnlySpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/SingletonCompileOnlySpec.scala) { #create-singleton-proxy-dc }
Java
: @@snip [SingletonCompileOnlyTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/SingletonCompileOnlyTest.java) { #create-singleton-proxy-dc }
: @@snip [SingletonCompileOnlyTest.java](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/SingletonCompileOnlyTest.java) { #create-singleton-proxy-dc }
Passing the own data center as the `withDataCenter` parameter creates a proxy for the singleton in the own data center, which
is also the default if `withDataCenter` is not given.
@ -208,18 +208,18 @@ accessing them from other data centers.
This is how to create a sharding proxy for a specific data center:
Scala
: @@snip [MultiDcClusterShardingSpec.scala](/akka-cluster-sharding-typed/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/typed/MultiDcClusterShardingSpec.scala) { #proxy-dc }
: @@snip [MultiDcClusterShardingSpec.scala](/cluster-sharding-typed/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/typed/MultiDcClusterShardingSpec.scala) { #proxy-dc }
Java
: @@snip [ShardingCompileOnlyTest.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #proxy-dc }
: @@snip [ShardingCompileOnlyTest.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #proxy-dc }
and it can also be used with an `EntityRef`:
Scala
: @@snip [MultiDcClusterShardingSpec.scala](/akka-cluster-sharding-typed/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/typed/MultiDcClusterShardingSpec.scala) { #proxy-dc-entityref }
: @@snip [MultiDcClusterShardingSpec.scala](/cluster-sharding-typed/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/typed/MultiDcClusterShardingSpec.scala) { #proxy-dc-entityref }
Java
: @@snip [ShardingCompileOnlyTest.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #proxy-dc-entityref }
: @@snip [ShardingCompileOnlyTest.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #proxy-dc-entityref }
Another way to manage global entities is to make sure that certain entity ids are located in
only one data center by routing the messages to the right region. For example, the routing function


@ -5,15 +5,15 @@
To use Akka Sharded Daemon Process, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group=org.apache.pekko
artifact=akka-cluster-sharding-typed_$scala.binary.version$
artifact=pekko-cluster-sharding-typed_$scala.binary.version$
version=PekkoVersion
}
@@project-info{ projectId="akka-cluster-sharding-typed" }
@@project-info{ projectId="cluster-sharding-typed" }
## Introduction
@ -34,10 +34,10 @@ To set up a set of actors running with Sharded Daemon process each node in the c
when starting up:
Scala
: @@snip [ShardedDaemonProcessExample.scala](/akka-cluster-sharding-typed/src/test/scala/org/apache/pekko/cluster/sharding/typed/scaladsl/ShardedDaemonProcessSpec.scala) { #tag-processing }
: @@snip [ShardedDaemonProcessExample.scala](/cluster-sharding-typed/src/test/scala/org/apache/pekko/cluster/sharding/typed/scaladsl/ShardedDaemonProcessSpec.scala) { #tag-processing }
Java
: @@snip [ShardedDaemonProcessExample.java](/akka-cluster-sharding-typed/src/test/java/org/apache/pekko/cluster/sharding/typed/javadsl/ShardedDaemonProcessCompileOnlyTest.java) { #tag-processing }
: @@snip [ShardedDaemonProcessExample.java](/cluster-sharding-typed/src/test/java/org/apache/pekko/cluster/sharding/typed/javadsl/ShardedDaemonProcessCompileOnlyTest.java) { #tag-processing }
An additional factory method is provided for further configurability and for providing a graceful stop message for the actor.


@ -10,15 +10,15 @@ You are viewing the documentation for the new actor APIs, to view the Akka Class
To use Akka Cluster Sharding, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group=org.apache.pekko
artifact=akka-cluster-sharding-typed_$scala.binary.version$
artifact=pekko-cluster-sharding-typed_$scala.binary.version$
version=PekkoVersion
}
@@project-info{ projectId="akka-cluster-sharding-typed" }
@@project-info{ projectId="cluster-sharding-typed" }
## Introduction
@ -61,37 +61,37 @@ See @ref:[Downing](cluster.md#downing).
Sharding is accessed via the @apidoc[typed.*.ClusterSharding] extension
Scala
: @@snip [ShardingCompileOnlySpec.scala](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #sharding-extension }
: @@snip [ShardingCompileOnlySpec.scala](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #sharding-extension }
Java
: @@snip [ShardingCompileOnlyTest.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #import #sharding-extension }
: @@snip [ShardingCompileOnlyTest.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #import #sharding-extension }
It is common for sharding to be used with persistence; however, any @apidoc[typed.Behavior] can be used with sharding, e.g. a basic counter:
Scala
: @@snip [ShardingCompileOnlySpec.scala](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #counter }
: @@snip [ShardingCompileOnlySpec.scala](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #counter }
Java
: @@snip [ShardingCompileOnlyTest.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #counter }
: @@snip [ShardingCompileOnlyTest.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #counter }
Each Entity type has a key that is then used to retrieve an EntityRef for a given entity identifier.
Note in the sample's @scala[`Counter.apply`]@java[`Counter.create`] function that the `entityId` parameter is not
used; it is included to demonstrate how one can pass it to an entity. Another way to do this is to send the `entityId` as part of the message if needed.
Scala
: @@snip [ShardingCompileOnlySpec.scala](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #init }
: @@snip [ShardingCompileOnlySpec.scala](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #init }
Java
: @@snip [ShardingCompileOnlyTest.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #init }
: @@snip [ShardingCompileOnlyTest.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #init }
Messages to a specific entity are then sent via an @apidoc[typed.*.EntityRef]. The `entityId` and the name of the Entity's key can be retrieved from the `EntityRef`.
It is also possible to wrap messages in a @apidoc[typed.ShardingEnvelope] or define extractor functions and send messages directly to the shard region.
Scala
: @@snip [ShardingCompileOnlySpec.scala](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #send }
: @@snip [ShardingCompileOnlySpec.scala](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #send }
Java
: @@snip [ShardingCompileOnlyTest.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #send }
: @@snip [ShardingCompileOnlyTest.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #send }
Cluster sharding @apidoc[init](typed.*.ClusterSharding) {scala="#init[M,E](entity:org.apache.pekko.cluster.sharding.typed.scaladsl.Entity[M,E]):org.apache.pekko.actor.typed.ActorRef[E]" java="#init(org.apache.pekko.cluster.sharding.typed.javadsl.Entity)"} should be called on every node for each entity type. Which nodes entity actors are created on
can be controlled with @ref:[roles](cluster.md#node-roles). `init` will create a `ShardRegion` or a proxy depending on whether the node's role matches
@ -102,10 +102,10 @@ The behavior factory lambda passed to the init method is defined on each node an
Specifying the role:
Scala
: @@snip [ShardingCompileOnlySpec.scala](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #roles }
: @@snip [ShardingCompileOnlySpec.scala](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #roles }
Java
: @@snip [ShardingCompileOnlyTest.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #roles }
: @@snip [ShardingCompileOnlyTest.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #roles }
### A note about EntityRef and serialization
@ -131,18 +131,18 @@ persistence to ensure that there is only one active entity for each `Persistence
Here is an example of a persistent actor that is used as a sharded entity:
Scala
: @@snip [HelloWorldPersistentEntityExample.scala](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/HelloWorldPersistentEntityExample.scala) { #persistent-entity }
: @@snip [HelloWorldPersistentEntityExample.scala](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/HelloWorldPersistentEntityExample.scala) { #persistent-entity }
Java
: @@snip [HelloWorldPersistentEntityExample.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/HelloWorldPersistentEntityExample.java) { #persistent-entity-import #persistent-entity }
: @@snip [HelloWorldPersistentEntityExample.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/HelloWorldPersistentEntityExample.java) { #persistent-entity-import #persistent-entity }
To initialize and use the entity:
Scala
: @@snip [HelloWorldPersistentEntityExample.scala](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/HelloWorldPersistentEntityExample.scala) { #persistent-entity-usage }
: @@snip [HelloWorldPersistentEntityExample.scala](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/HelloWorldPersistentEntityExample.scala) { #persistent-entity-usage }
Java
: @@snip [HelloWorldPersistentEntityExample.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/HelloWorldPersistentEntityExample.java) { #persistent-entity-usage-import #persistent-entity-usage }
: @@snip [HelloWorldPersistentEntityExample.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/HelloWorldPersistentEntityExample.java) { #persistent-entity-usage-import #persistent-entity-usage }
Note how a unique @apidoc[persistence.typed.PersistenceId] can be constructed from the @apidoc[typed.*.EntityTypeKey] and the `entityId`
provided by the @apidoc[typed.*.EntityContext] in the factory function for the @apidoc[typed.Behavior]. This is a typical way
@ -164,7 +164,7 @@ be the same. Otherwise the entity actor might accidentally be started in several
By default the shard identifier is the absolute value of the `hashCode` of the entity identifier modulo
the total number of shards. The number of shards is configured by:
@@snip [reference.conf](/akka-cluster-sharding-typed/src/main/resources/reference.conf) { #number-of-shards }
@@snip [reference.conf](/cluster-sharding-typed/src/main/resources/reference.conf) { #number-of-shards }
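The default extraction described above can be sketched in a few lines; `DefaultShardId` and its parameters are illustrative names, not the Pekko implementation:

```java
// Illustrative sketch of the default shard-id extraction: the absolute value
// of the entity id's hashCode modulo the configured number of shards
// (pekko.cluster.sharding.number-of-shards).
public class DefaultShardId {
    public static String shardId(String entityId, int numberOfShards) {
        // hashCode % numberOfShards lies in (-numberOfShards, numberOfShards),
        // so taking the absolute value afterwards is safe.
        return String.valueOf(Math.abs(entityId.hashCode() % numberOfShards));
    }
}
```

Because the mapping depends only on the entity id and the shard count, the configured number of shards must be the same on all nodes, or an entity id could map to different shards on different nodes.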
As a rule of thumb, the number of shards should be a factor ten greater than the planned maximum number of
cluster nodes. It doesn't have to be exact. Fewer shards than the number of nodes will mean that some nodes will
@ -214,18 +214,18 @@ This can be used, for example, to match up Kafka Partition consumption with shar
To use it set it as the allocation strategy on your @apidoc[typed.*.Entity]:
Scala
: @@snip [ExternalShardAllocationCompileOnlySpec](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ExternalShardAllocationCompileOnlySpec.scala) { #entity }
: @@snip [ExternalShardAllocationCompileOnlySpec](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ExternalShardAllocationCompileOnlySpec.scala) { #entity }
Java
: @@snip [ExternalShardAllocationCompileOnlyTest](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ExternalShardAllocationCompileOnlyTest.java) { #entity }
: @@snip [ExternalShardAllocationCompileOnlyTest](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ExternalShardAllocationCompileOnlyTest.java) { #entity }
Any shardId that has not yet been allocated will be allocated to the requesting node. To make explicit allocations:
Scala
: @@snip [ExternalShardAllocationCompileOnlySpec](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ExternalShardAllocationCompileOnlySpec.scala) { #client }
: @@snip [ExternalShardAllocationCompileOnlySpec](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ExternalShardAllocationCompileOnlySpec.scala) { #client }
Java
: @@snip [ExternalShardAllocationCompileOnlyTest](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ExternalShardAllocationCompileOnlyTest.java) { #client }
: @@snip [ExternalShardAllocationCompileOnlyTest](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ExternalShardAllocationCompileOnlyTest.java) { #client }
Any new or moved shard allocations will be moved on the next rebalance.
@ -266,18 +266,18 @@ of `Passivate` and termination of the entity. Such buffered messages are thereaf
to a new incarnation of the entity.
Scala
: @@snip [ShardingCompileOnlySpec.scala](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #counter-passivate }
: @@snip [ShardingCompileOnlySpec.scala](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #counter-passivate }
Java
: @@snip [ShardingCompileOnlyTest.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #counter-passivate }
: @@snip [ShardingCompileOnlyTest.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #counter-passivate }
and then initialized with:
Scala
: @@snip [ShardingCompileOnlySpec.scala](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #counter-passivate-init }
: @@snip [ShardingCompileOnlySpec.scala](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #counter-passivate-init }
Java
: @@snip [ShardingCompileOnlyTest.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #counter-passivate-init }
: @@snip [ShardingCompileOnlyTest.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #counter-passivate-init }
Note that in the above example the `stopMessage` is specified as `GoodByeCounter`. That message will be sent to
the entity when it's supposed to stop itself due to rebalance or passivation. If the `stopMessage` is not defined
@ -320,7 +320,7 @@ Idle entities can be automatically passivated when they have not received a mess
This is currently the default strategy, for compatibility, and is enabled automatically with a timeout of 2 minutes.
Specify a different idle timeout with configuration:
@@snip [passivation idle timeout](/akka-cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #passivation-idle-timeout type=conf }
@@snip [passivation idle timeout](/cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #passivation-idle-timeout type=conf }
Or specify the idle timeout as a duration using the @apidoc[withPassivationStrategy](typed.ClusterShardingSettings) {scala="#withPassivationStrategy(settings:org.apache.pekko.cluster.sharding.typed.ClusterShardingSettings.PassivationStrategySettings):org.apache.pekko.cluster.sharding.typed.ClusterShardingSettings" java="#withPassivationStrategy(org.apache.pekko.cluster.sharding.typed.ClusterShardingSettings.PassivationStrategySettings)"} method on `ClusterShardingSettings`.
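Conceptually, idle passivation only needs a last-access timestamp per entity. The following self-contained sketch (illustrative names, not the Pekko implementation) shows the idea:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Track the last message time per entity and report which entities have been
// idle for at least the configured timeout, making them passivation candidates.
public class IdleTracker {
    private final long timeoutMillis;
    private final Map<String, Long> lastAccess = new HashMap<>();

    public IdleTracker(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    public void accessed(String entityId, long nowMillis) {
        lastAccess.put(entityId, nowMillis);
    }

    public Set<String> idleEntities(long nowMillis) {
        Set<String> idle = new HashSet<>();
        for (Map.Entry<String, Long> e : lastAccess.entrySet()) {
            if (nowMillis - e.getValue() >= timeoutMillis) idle.add(e.getKey());
        }
        return idle;
    }
}
```

With the default 2-minute timeout, an entity that last received a message 120 seconds ago becomes a candidate while recently active entities stay untouched.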
@ -335,7 +335,7 @@ The configurable limit is for a whole shard region and is divided evenly among t
A recommended passivation strategy, which will become the new default passivation strategy in future versions of Akka
Cluster Sharding, can be enabled with configuration:
@@snip [passivation new default strategy](/akka-cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #passivation-new-default-strategy type=conf }
@@snip [passivation new default strategy](/cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #passivation-new-default-strategy type=conf }
This default strategy uses a [composite passivation strategy](#composite-passivation-strategies) which combines
recency-based and frequency-based tracking: the main area is configured with a [segmented least recently used
@ -345,13 +345,13 @@ enabled.
The active entity limit for the default strategy can be configured:
@@snip [passivation new default strategy configured](/akka-cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #passivation-new-default-strategy-configured type=conf }
@@snip [passivation new default strategy configured](/cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #passivation-new-default-strategy-configured type=conf }
Or using the @apidoc[withActiveEntityLimit](typed.ClusterShardingSettings.PassivationStrategySettings) {scala="#withActiveEntityLimit(limit:Int):org.apache.pekko.cluster.sharding.typed.ClusterShardingSettings.PassivationStrategySettings" java="#withActiveEntityLimit(int)"} method on `ClusterShardingSettings.PassivationStrategySettings`.
An [idle entity timeout](#idle-entity-passivation) can also be enabled and configured for this strategy:
@@snip [passivation new default strategy with idle](/akka-cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #passivation-new-default-strategy-with-idle type=conf }
@@snip [passivation new default strategy with idle](/cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #passivation-new-default-strategy-with-idle type=conf }
Or using the @apidoc[withIdleEntityPassivation](typed.ClusterShardingSettings.PassivationStrategySettings) {scala="#withIdleEntityPassivation(settings:org.apache.pekko.cluster.sharding.typed.ClusterShardingSettings.PassivationStrategySettings.IdleSettings):org.apache.pekko.cluster.sharding.typed.ClusterShardingSettings.PassivationStrategySettings" java="#withIdleEntityPassivation(org.apache.pekko.cluster.sharding.typed.ClusterShardingSettings.PassivationStrategySettings.IdleSettings)"} method on `ClusterShardingSettings.PassivationStrategySettings`.
@ -367,7 +367,7 @@ _replacement policy_ to be chosen, an _active entity limit_ to be set, and can o
entities](#idle-entity-passivation). For example, a custom strategy can be configured to use the [least recently used
policy](#least-recently-used-policy):
@@snip [custom passivation strategy](/akka-cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #custom-passivation-strategy type=conf }
@@snip [custom passivation strategy](/cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #custom-passivation-strategy type=conf }
The active entity limit and replacement policy can also be configured using the `withPassivationStrategy` method on
`ClusterShardingSettings`, passing custom `ClusterShardingSettings.PassivationStrategySettings`.
@ -383,7 +383,7 @@ policy](#segmented-least-recently-used-policy) for a variation that also disting
Configure a passivation strategy to use the least recently used policy:
@@snip [LRU policy](/akka-cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #lru-policy type=conf }
@@snip [LRU policy](/cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #lru-policy type=conf }
Or using the @apidoc[withLeastRecentlyUsedReplacement](typed.ClusterShardingSettings.PassivationStrategySettings) {scala="#withLeastRecentlyUsedReplacement():org.apache.pekko.cluster.sharding.typed.ClusterShardingSettings.PassivationStrategySettings" java="#withLeastRecentlyUsedReplacement()"} method on `ClusterShardingSettings.PassivationStrategySettings`.
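As a rough illustration of the policy (not the Pekko implementation; the names and eviction API are assumptions), least-recently-used tracking can be sketched with an access-ordered map:

```java
import java.util.LinkedHashMap;
import java.util.Optional;

// Each access moves the entity to the most-recent position; once the active
// entity limit is exceeded, the entity at the least-recent position is
// returned as the passivation candidate.
public class LruTracker {
    private final int limit;
    private final LinkedHashMap<String, Boolean> entities =
        new LinkedHashMap<>(16, 0.75f, true); // access-order iteration

    public LruTracker(int limit) {
        this.limit = limit;
    }

    public Optional<String> accessed(String entityId) {
        entities.put(entityId, Boolean.TRUE);
        if (entities.size() > limit) {
            String evict = entities.keySet().iterator().next(); // least recently used
            entities.remove(evict);
            return Optional.of(evict);
        }
        return Optional.empty();
    }
}
```

The access-ordered `LinkedHashMap` keeps its iteration order from least to most recently accessed, which is exactly the replacement order a least-recently-used policy needs.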
@ -406,11 +406,11 @@ popular than others, to prioritize those entities that are accessed more frequen
To configure a segmented least recently used (SLRU) policy, with two levels and a protected segment limited to 80% of
the total limit:
@@snip [SLRU policy](/akka-cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #slru-policy type=conf }
@@snip [SLRU policy](/cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #slru-policy type=conf }
Or to configure a 4-level segmented least recently used (S4LRU) policy, with 4 evenly divided levels:
@@snip [S4LRU policy](/akka-cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #s4lru-policy type=conf }
@@snip [S4LRU policy](/cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #s4lru-policy type=conf }
Or using custom `ClusterShardingSettings.PassivationStrategySettings.LeastRecentlyUsedSettings`.
@ -424,7 +424,7 @@ will be accessed again; as seen in cyclic access patterns.
Configure a passivation strategy to use the most recently used policy:
@@snip [MRU policy](/akka-cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #mru-policy type=conf }
@@snip [MRU policy](/cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #mru-policy type=conf }
Or using the @apidoc[withMostRecentlyUsedReplacement](typed.ClusterShardingSettings.PassivationStrategySettings) {scala="#withMostRecentlyUsedReplacement():org.apache.pekko.cluster.sharding.typed.ClusterShardingSettings.PassivationStrategySettings" java="#withMostRecentlyUsedReplacement()"} method on `ClusterShardingSettings.PassivationStrategySettings`.
@ -439,7 +439,7 @@ policy](#least-frequently-used-with-dynamic-aging-policy) for a variation that a
Configure automatic passivation to use the least frequently used policy:
@@snip [LFU policy](/akka-cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #lfu-policy type=conf }
@@snip [LFU policy](/cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #lfu-policy type=conf }
Or using the @apidoc[withLeastFrequentlyUsedReplacement](typed.ClusterShardingSettings.PassivationStrategySettings) {scala="#withLeastFrequentlyUsedReplacement():org.apache.pekko.cluster.sharding.typed.ClusterShardingSettings.PassivationStrategySettings" java="#withLeastFrequentlyUsedReplacement()"} method on `ClusterShardingSettings.PassivationStrategySettings`.
@ -457,7 +457,7 @@ popularity can have more impact on a least frequently used policy if the active
Configure dynamic aging with the least frequently used policy:
@@snip [LFUDA policy](/akka-cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #lfuda-policy type=conf }
@@snip [LFUDA policy](/cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #lfuda-policy type=conf }
Or using custom `ClusterShardingSettings.PassivationStrategySettings.LeastFrequentlyUsedSettings`.
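The effect of dynamic aging can be sketched as follows (illustrative names, not the Pekko implementation): an entity's priority is its access count plus a cluster age, and the age is raised to the evicted priority on each passivation, so stale popularity decays relative to new activity.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Least-frequently-used with dynamic aging: new entities start at the current
// cluster age, and the age advances to the evicted entity's priority, so
// long-inactive but once-popular entities eventually become evictable.
public class LfudaTracker {
    private final int limit;
    private long age = 0;
    private final Map<String, Long> priority = new HashMap<>();

    public LfudaTracker(int limit) {
        this.limit = limit;
    }

    public Optional<String> accessed(String entityId) {
        // New entities start at age + 1; existing ones gain one per access.
        priority.merge(entityId, age + 1, (p, ignored) -> p + 1);
        if (priority.size() > limit) {
            Map.Entry<String, Long> evict = null;
            for (Map.Entry<String, Long> e : priority.entrySet()) {
                if (evict == null || e.getValue() < evict.getValue()) evict = e;
            }
            priority.remove(evict.getKey());
            age = evict.getValue(); // dynamic aging
            return Optional.of(evict.getKey());
        }
        return Optional.empty();
    }
}
```

Note that without aging a brand-new entity (count 1) is always the first eviction candidate; raising the age on eviction is what lets newcomers eventually outrank entities whose popularity is long past.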
@ -478,11 +478,11 @@ The admission window tracks newly activated entities. When an entity is replaced
opportunity to enter the main entity tracking area, based on the [admission filter](#admission-filter). The admission
window can be enabled by selecting a policy (while the regular replacement policy is for the main area):
@@snip [admission window policy](/akka-cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #admission-window-policy type=conf }
@@snip [admission window policy](/cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #admission-window-policy type=conf }
The proportion of the active entity limit used for the admission window can be configured (the default is 1%):
@@snip [admission window proportion](/akka-cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #admission-window-proportion type=conf }
@@snip [admission window proportion](/cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #admission-window-proportion type=conf }
The proportion for the admission window can also be adapted and optimized dynamically, by enabling an [admission window
optimizer](#admission-window-optimizer).
@ -499,7 +499,7 @@ The optimizer currently available uses a simple hill-climbing algorithm, which s
provides an optimal active rate (where entities are already active when accessed, the _cache hit rate_). Enable
adaptive window sizing by configuring the `hill-climbing` window optimizer:
@@snip [admission window optimizer](/akka-cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #admission-window-optimizer type=conf }
@@snip [admission window optimizer](/cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #admission-window-optimizer type=conf }
See the `reference.conf` for parameters that can be tuned for the hill climbing admission window optimizer.
@ -514,7 +514,7 @@ the cluster sharding node, selecting the entity that is estimated to be accessed
automatically ages entries, using the approach from the _TinyLFU_ cache admission algorithm. Enable an admission filter
by configuring the `frequency-sketch` admission filter:
@@snip [admission policy](/akka-cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #admission-policy type=conf }
@@snip [admission policy](/cluster-sharding/src/test/scala/org/apache/pekko/cluster/sharding/ClusterShardingSettingsSpec.scala) { #admission-policy type=conf }
See the `reference.conf` for parameters that can be tuned for the frequency sketch admission filter.
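A drastically simplified sketch can illustrate the two ingredients named above, saturating small counters and periodic halving ("aging"). This is loosely in the spirit of TinyLFU and not the Pekko implementation; all names are illustrative.

```java
// Frequency estimates per entity id with saturating 4-bit-style counters that
// are periodically halved, so old activity fades over time.
public class FrequencySketch {
    private final int[] counters;
    private final int resetAfter;
    private int additions = 0;

    public FrequencySketch(int width, int resetAfter) {
        this.counters = new int[width];
        this.resetAfter = resetAfter;
    }

    private int index(String id) {
        return Math.floorMod(id.hashCode(), counters.length);
    }

    public void increment(String id) {
        int i = index(id);
        if (counters[i] < 15) counters[i]++;      // saturate like a 4-bit counter
        if (++additions >= resetAfter) {          // age: halve every counter
            for (int j = 0; j < counters.length; j++) counters[j] /= 2;
            additions = 0;
        }
    }

    public int frequency(String id) {
        return counters[index(id)];
    }
}
```

An admission filter would compare `frequency(candidate)` against `frequency(victim)` and only admit the candidate when its estimated frequency is higher. A real sketch hashes each id with several hash functions to reduce collisions; a single counter per id is used here for brevity.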
@ -735,20 +735,20 @@ Two requests to inspect the cluster state are available:
a Region and what entities are alive for each of them.
Scala
: @@snip [ShardingCompileOnlySpec.scala](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #get-shard-region-state }
: @@snip [ShardingCompileOnlySpec.scala](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #get-shard-region-state }
Java
: @@snip [ShardingCompileOnlyTest.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #get-shard-region-state }
: @@snip [ShardingCompileOnlyTest.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #get-shard-region-state }
@apidoc[cluster.sharding.typed.GetClusterShardingStats] which will query all the regions in the cluster and reply with a
@apidoc[ShardRegion.ClusterShardingStats] containing the identifiers of the shards running in each region and a count
of entities that are alive in each shard.
Scala
: @@snip [ShardingCompileOnlySpec.scala](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #get-cluster-sharding-stats }
: @@snip [ShardingCompileOnlySpec.scala](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #get-cluster-sharding-stats }
Java
: @@snip [ShardingCompileOnlyTest.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #get-cluster-sharding-stats }
: @@snip [ShardingCompileOnlyTest.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #get-cluster-sharding-stats }
If any shard queries failed, for example due to timeout if a shard was too busy to reply within the configured `pekko.cluster.sharding.shard-region-query-timeout`,
`ShardRegion.CurrentShardRegionState` and `ShardRegion.ClusterShardingStats` will also include the set of shard identifiers by region that failed.
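When regions are busy, the timeout can be raised in configuration; a sketch (the value shown is illustrative, not a recommendation):

```hocon
# Give busy shards more time to answer state/stats queries
pekko.cluster.sharding.shard-region-query-timeout = 10s
```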
@ -801,7 +801,7 @@ provider when there was a network partition.
Use this program as a standalone Java main program:
```
java -classpath <jar files, including akka-cluster-sharding>
java -classpath <jar files, including pekko-cluster-sharding>
org.apache.pekko.cluster.sharding.RemoveInternalClusterShardingData
-2.3 entityType1 entityType2 entityType3
```
@ -829,9 +829,9 @@ One important configuration property is `number-of-shards` as described in @ref:
You may also need to tune the configuration properties `rebalance-absolute-limit` and `rebalance-relative-limit`
as described in @ref:[Shard allocation](#shard-allocation).
@@snip [reference.conf](/akka-cluster-sharding/src/main/resources/reference.conf) { #sharding-ext-config }
@@snip [reference.conf](/cluster-sharding/src/main/resources/reference.conf) { #sharding-ext-config }
@@snip [reference.conf](/akka-cluster-sharding-typed/src/main/resources/reference.conf) { #sharding-ext-config }
@@snip [reference.conf](/cluster-sharding-typed/src/main/resources/reference.conf) { #sharding-ext-config }
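A configuration sketch combining these properties (the values are illustrative assumptions; consult the snippets above for the actual defaults):

```hocon
pekko.cluster.sharding {
  number-of-shards = 1000
  # Never move more than this many shards in one rebalance round...
  rebalance-absolute-limit = 20
  # ...nor more than this fraction of the total number of shards
  rebalance-relative-limit = 0.1
}
```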
## Example project

View file

@ -7,15 +7,15 @@ You are viewing the documentation for the new actor APIs, to view the Akka Class
To use Cluster Singleton, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group=org.apache.pekko
artifact=akka-cluster-typed_$scala.binary.version$
artifact=pekko-cluster-typed_$scala.binary.version$
version=PekkoVersion
}
@@project-info{ projectId="akka-cluster-typed" }
@@project-info{ projectId="cluster-typed" }
## Introduction
@ -117,20 +117,20 @@ See @ref:[Downing](cluster.md#downing).
Any @apidoc[Behavior](typed.Behavior) can be run as a singleton. E.g. a basic counter:
Scala
: @@snip [SingletonCompileOnlySpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/SingletonCompileOnlySpec.scala) { #counter }
: @@snip [SingletonCompileOnlySpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/SingletonCompileOnlySpec.scala) { #counter }
Java
: @@snip [SingletonCompileOnlyTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/SingletonCompileOnlyTest.java) { #counter }
: @@snip [SingletonCompileOnlyTest.java](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/SingletonCompileOnlyTest.java) { #counter }
Then on every node in the cluster, or every node with a given role, use the @apidoc[ClusterSingleton$] extension
to spawn the singleton. An instance will run per data centre of the cluster:
Scala
: @@snip [SingletonCompileOnlySpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/SingletonCompileOnlySpec.scala) { #singleton }
: @@snip [SingletonCompileOnlySpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/SingletonCompileOnlySpec.scala) { #singleton }
Java
: @@snip [SingletonCompileOnlyTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/SingletonCompileOnlyTest.java) { #import #singleton }
: @@snip [SingletonCompileOnlyTest.java](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/SingletonCompileOnlyTest.java) { #import #singleton }
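In outline, the two snippets amount to something like the following (a minimal sketch assuming the `pekko-cluster-typed` typed APIs; the `Counter` protocol and the singleton name `"GlobalCounter"` are illustrative):

```scala
import org.apache.pekko.actor.typed.{ ActorRef, ActorSystem, Behavior }
import org.apache.pekko.actor.typed.scaladsl.Behaviors
import org.apache.pekko.cluster.typed.{ ClusterSingleton, SingletonActor }

object Counter {
  sealed trait Command
  case object Increment extends Command
  final case class GetValue(replyTo: ActorRef[Int]) extends Command

  def apply(): Behavior[Command] = counter(0)

  private def counter(value: Int): Behavior[Command] =
    Behaviors.receiveMessage {
      case Increment         => counter(value + 1)
      case GetValue(replyTo) => replyTo ! value; Behaviors.same
    }
}

// Call on every node; only the oldest eligible node actually runs the
// singleton, and the returned ActorRef is a proxy that routes messages to it.
def initCounterSingleton(system: ActorSystem[_]): ActorRef[Counter.Command] =
  ClusterSingleton(system).init(SingletonActor(Counter(), "GlobalCounter"))
```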
## Supervision
@ -140,10 +140,10 @@ a backoff:
Scala
: @@snip [SingletonCompileOnlySpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/SingletonCompileOnlySpec.scala) { #backoff}
: @@snip [SingletonCompileOnlySpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/SingletonCompileOnlySpec.scala) { #backoff}
Java
: @@snip [SingletonCompileOnlyTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/SingletonCompileOnlyTest.java) { #backoff}
: @@snip [SingletonCompileOnlyTest.java](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/SingletonCompileOnlyTest.java) { #backoff}
Be aware that this means there will be times when the singleton won't be running as restart is delayed.
See @ref[Fault Tolerance](./fault-tolerance.md) for a full list of supervision options.
@ -157,10 +157,10 @@ singleton actor is terminated.
If the shutdown logic does not include any asynchronous actions it can be executed in the @apidoc[PostStop$] signal handler.
Scala
: @@snip [SingletonCompileOnlySpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/SingletonCompileOnlySpec.scala) { #stop-message }
: @@snip [SingletonCompileOnlySpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/SingletonCompileOnlySpec.scala) { #stop-message }
Java
: @@snip [SingletonCompileOnlyTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/SingletonCompileOnlyTest.java) { #stop-message }
: @@snip [SingletonCompileOnlyTest.java](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/SingletonCompileOnlyTest.java) { #stop-message }
## Lease
@ -193,12 +193,12 @@ or create it from another config section with the same layout as below. `Cluster
a parameter to the @apidoc[ClusterSingletonManager.props](ClusterSingletonManager$) {scala="#props(singletonProps:org.apache.pekko.actor.Props,terminationMessage:Any,settings:org.apache.pekko.cluster.singleton.ClusterSingletonManagerSettings):org.apache.pekko.actor.Props" java="#props(org.apache.pekko.actor.Props,java.lang.Object,org.apache.pekko.cluster.singleton.ClusterSingletonManagerSettings)"} factory method, i.e. each singleton can be configured
with different settings if needed.
@@snip [reference.conf](/akka-cluster-tools/src/main/resources/reference.conf) { #singleton-config }
@@snip [reference.conf](/cluster-tools/src/main/resources/reference.conf) { #singleton-config }
The following configuration properties are read by the @apidoc[ClusterSingletonSettings](typed.ClusterSingletonSettings)
when created with a @apidoc[ActorSystem](typed.ActorSystem) parameter. `ClusterSingletonSettings` is an optional parameter in
@apidoc[ClusterSingleton.init](ClusterSingleton) {scala="#init[M](singleton:org.apache.pekko.cluster.typed.SingletonActor[M]):org.apache.pekko.actor.typed.ActorRef[M]" java="#init(org.apache.pekko.cluster.typed.SingletonActor)"}. It is also possible to amend the @apidoc[ClusterSingletonProxySettings]
or create it from another config section with the same layout as below.
@@snip [reference.conf](/akka-cluster-tools/src/main/resources/reference.conf) { #singleton-proxy-config }
@@snip [reference.conf](/cluster-tools/src/main/resources/reference.conf) { #singleton-proxy-config }

View file

@ -26,15 +26,15 @@ recommendation if you don't have other preferences or constraints.
To use Akka Cluster add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group=org.apache.pekko
artifact=akka-cluster-typed_$scala.binary.version$
artifact=pekko-cluster-typed_$scala.binary.version$
version=PekkoVersion
}
@@project-info{ projectId="akka-cluster-typed" }
@@project-info{ projectId="cluster-typed" }
## Cluster API Extension
@ -51,23 +51,23 @@ It does this through these references on the @apidoc[typed.Cluster$] extension:
All of the examples below assume the following imports:
Scala
: @@snip [BasicClusterExampleSpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #cluster-imports }
: @@snip [BasicClusterExampleSpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #cluster-imports }
Java
: @@snip [BasicClusterExampleTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #cluster-imports }
: @@snip [BasicClusterExampleTest.java](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #cluster-imports }
<a id="basic-cluster-configuration"></a>
The minimum configuration required is to set a host/port for remoting and the `pekko.actor.provider = "cluster"`.
@@snip [BasicClusterExampleSpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #config-seeds }
@@snip [BasicClusterExampleSpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #config-seeds }
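Spelled out, such a minimal configuration might look like this (the actor system name `ClusterSystem` and the host/port values are illustrative assumptions):

```hocon
pekko {
  actor.provider = "cluster"

  remote.artery.canonical {
    hostname = "127.0.0.1"
    port = 2551
  }

  cluster.seed-nodes = [
    "pekko://ClusterSystem@127.0.0.1:2551",
    "pekko://ClusterSystem@127.0.0.1:2552"
  ]
}
```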
Accessing the @apidoc[typed.Cluster$] extension on each node:
Scala
: @@snip [BasicClusterExampleSpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #cluster-create }
: @@snip [BasicClusterExampleSpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #cluster-create }
Java
: @@snip [BasicClusterExampleTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #cluster-create }
: @@snip [BasicClusterExampleTest.java](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #cluster-create }
@@@ note
@ -80,18 +80,18 @@ Java
If not using configuration to specify @ref:[seed nodes to join](#joining), joining the cluster can be done programmatically via the @scala[@scaladoc[manager](pekko.cluster.typed.Cluster#manager:org.apache.pekko.actor.typed.ActorRef[org.apache.pekko.cluster.typed.ClusterCommand])]@java[@javadoc[manager()](pekko.cluster.typed.Cluster#manager())].
Scala
: @@snip [BasicClusterExampleSpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #cluster-join }
: @@snip [BasicClusterExampleSpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #cluster-join }
Java
: @@snip [BasicClusterExampleTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #cluster-join }
: @@snip [BasicClusterExampleTest.java](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #cluster-join }
@ref:[Leaving](#leaving) the cluster and @ref:[downing](#downing) a node are similar:
Scala
: @@snip [BasicClusterExampleSpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #cluster-leave }
: @@snip [BasicClusterExampleSpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #cluster-leave }
Java
: @@snip [BasicClusterExampleTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #cluster-leave }
: @@snip [BasicClusterExampleTest.java](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #cluster-leave }
### Cluster Subscriptions
@ -102,18 +102,18 @@ for the node going through the @ref:[Membership Lifecycle](cluster-membership.md
This example subscribes to a @scala[`subscriber: ActorRef[MemberEvent]`]@java[`ActorRef<MemberEvent> subscriber`]:
Scala
: @@snip [BasicClusterExampleSpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #cluster-subscribe }
: @@snip [BasicClusterExampleSpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #cluster-subscribe }
Java
: @@snip [BasicClusterExampleTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #cluster-subscribe }
: @@snip [BasicClusterExampleTest.java](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #cluster-subscribe }
Then asking a node to leave:
Scala
: @@snip [BasicClusterExampleSpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #cluster-leave-example }
: @@snip [BasicClusterExampleSpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #cluster-leave-example }
Java
: @@snip [BasicClusterExampleTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #cluster-leave-example }
: @@snip [BasicClusterExampleTest.java](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #cluster-leave-example }
### Cluster State
@ -199,10 +199,10 @@ Joining programmatically is useful when **dynamically discovering** other nodes
at startup through an external tool or API.
Scala
: @@snip [BasicClusterExampleSpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #join-seed-nodes }
: @@snip [BasicClusterExampleSpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #join-seed-nodes }
Java
: @@snip [BasicClusterExampleTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #join-seed-nodes }
: @@snip [BasicClusterExampleTest.java](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #join-seed-nodes }
The seed node address list has the same semantics as the configured `seed-nodes`, and the underlying
implementation of the process is the same, see @ref:[Joining configured seed nodes](#joining-configured-seed-nodes).
@ -321,10 +321,10 @@ of the own node are available from the @scala[@scaladoc[selfMember](pekko.cluste
actors:
Scala
: @@snip [BasicClusterExampleSpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #hasRole }
: @@snip [BasicClusterExampleSpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #hasRole }
Java
: @@snip [BasicClusterExampleTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #hasRole }
: @@snip [BasicClusterExampleTest.java](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #hasRole }
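Roles themselves are assigned per node in configuration; a sketch (the role name `backend` is an illustrative assumption):

```hocon
# This node advertises the "backend" role to the rest of the cluster;
# Member.hasRole("backend") will then be true for its Member entry.
pekko.cluster.roles = ["backend"]
```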
## Failure Detector

View file

@ -34,7 +34,7 @@ Typed and classic can interact the following ways:
In the examples the `pekko.actor` package is aliased to `classic`.
Scala
: @@snip [ClassicWatchingTypedSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedSpec.scala) { #import-alias }
: @@snip [ClassicWatchingTypedSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedSpec.scala) { #import-alias }
@@@
@ -46,44 +46,44 @@ While coexisting your application will likely still have a classic ActorSystem.
so that new code and migrated parts don't rely on the classic system:
Scala
: @@snip [ClassicWatchingTypedSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedSpec.scala) { #adapter-import #convert-classic }
: @@snip [ClassicWatchingTypedSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedSpec.scala) { #adapter-import #convert-classic }
Java
: @@snip [ClassicWatchingTypedTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedTest.java) { #adapter-import #convert-classic }
: @@snip [ClassicWatchingTypedTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedTest.java) { #adapter-import #convert-classic }
Then for new typed actors here's how you create, watch and send messages to
it from a classic actor.
Scala
: @@snip [ClassicWatchingTypedSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedSpec.scala) { #typed }
: @@snip [ClassicWatchingTypedSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedSpec.scala) { #typed }
Java
: @@snip [ClassicWatchingTypedTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedTest.java) { #typed }
: @@snip [ClassicWatchingTypedTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedTest.java) { #typed }
The top level classic actor is created in the usual way:
Scala
: @@snip [ClassicWatchingTypedSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedSpec.scala) { #create-classic }
: @@snip [ClassicWatchingTypedSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedSpec.scala) { #create-classic }
Java
: @@snip [ClassicWatchingTypedTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedTest.java) { #create-classic }
: @@snip [ClassicWatchingTypedTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedTest.java) { #create-classic }
Then it can create a typed actor, watch it, and send a message to it:
Scala
: @@snip [ClassicWatchingTypedSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedSpec.scala) { #classic-watch }
: @@snip [ClassicWatchingTypedSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedSpec.scala) { #classic-watch }
Java
: @@snip [ClassicWatchingTypedTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedTest.java) { #classic-watch }
: @@snip [ClassicWatchingTypedTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedTest.java) { #classic-watch }
@scala[There is one `import` that is needed to make that work.] @java[We import the Adapter class and
call static methods for conversion.]
Scala
: @@snip [ClassicWatchingTypedSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedSpec.scala) { #adapter-import }
: @@snip [ClassicWatchingTypedSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedSpec.scala) { #adapter-import }
Java
: @@snip [ClassicWatchingTypedTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedTest.java) { #adapter-import }
: @@snip [ClassicWatchingTypedTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/ClassicWatchingTypedTest.java) { #adapter-import }
@scala[That adds implicit extension methods to classic and typed `ActorSystem`, `ActorContext` and `ActorRef` in both directions.]
@ -93,10 +93,10 @@ Note the inline comments in the example above.
This method of using a top level classic actor is the suggested path for this type of co-existence. However, if you prefer to start with a typed top level actor then you can use the @scala[implicit @scaladoc[spawn](pekko.actor.typed.scaladsl.adapter.package$$ClassicActorSystemOps#spawn[T](behavior:org.apache.pekko.actor.typed.Behavior[T],name:String,props:org.apache.pekko.actor.typed.Props):org.apache.pekko.actor.typed.ActorRef[T]) -method]@java[@javadoc[Adapter.spawn](pekko.actor.typed.javadsl.Adapter#spawn(org.apache.pekko.actor.ActorSystem,org.apache.pekko.actor.typed.Behavior,java.lang.String,org.apache.pekko.actor.typed.Props))] directly from the typed system:
Scala
: @@snip [TypedWatchingClassicSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/TypedWatchingClassicSpec.scala) { #create }
: @@snip [TypedWatchingClassicSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/TypedWatchingClassicSpec.scala) { #create }
Java
: @@snip [TypedWatchingClassicTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/TypedWatchingClassicTest.java) { #create }
: @@snip [TypedWatchingClassicTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/TypedWatchingClassicTest.java) { #create }
The above classic-typed difference is further elaborated in @ref:[the `ActorSystem` section](./from-classic.md#actorsystem) of "Learning Akka Typed from Classic".
@ -108,28 +108,28 @@ The following will show how to create, watch and send messages back and forth fr
classic actor:
Scala
: @@snip [TypedWatchingClassicSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/TypedWatchingClassicSpec.scala) { #classic }
: @@snip [TypedWatchingClassicSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/TypedWatchingClassicSpec.scala) { #classic }
Java
: @@snip [TypedWatchingClassicTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/TypedWatchingClassicTest.java) { #classic }
: @@snip [TypedWatchingClassicTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/TypedWatchingClassicTest.java) { #classic }
<a id="top-level-typed-actor-classic-system"></a>
Creating the actor system and the typed actor:
Scala
: @@snip [TypedWatchingClassicSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/TypedWatchingClassicSpec.scala) { #create }
: @@snip [TypedWatchingClassicSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/TypedWatchingClassicSpec.scala) { #create }
Java
: @@snip [TypedWatchingClassicTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/TypedWatchingClassicTest.java) { #create }
: @@snip [TypedWatchingClassicTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/TypedWatchingClassicTest.java) { #create }
Then the typed actor creates the classic actor, watches it and sends and receives a response:
Scala
: @@snip [TypedWatchingClassicSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/TypedWatchingClassicSpec.scala) { #typed }
: @@snip [TypedWatchingClassicSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/coexistence/TypedWatchingClassicSpec.scala) { #typed }
Java
: @@snip [TypedWatchingClassicTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/TypedWatchingClassicTest.java) { #typed }
: @@snip [TypedWatchingClassicTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/coexistence/TypedWatchingClassicTest.java) { #typed }
@@@ div { .group-scala }

View file

@ -58,10 +58,10 @@ details @ref:[here](#blocking-needs-careful-management).
To select a dispatcher use `DispatcherSelector` to create a `Props` instance for spawning your actor:
Scala
: @@snip [DispatcherDocSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/DispatchersDocSpec.scala) { #spawn-dispatcher }
: @@snip [DispatcherDocSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/DispatchersDocSpec.scala) { #spawn-dispatcher }
Java
: @@snip [DispatcherDocTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/DispatchersDocTest.java) { #spawn-dispatcher }
: @@snip [DispatcherDocTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/DispatchersDocTest.java) { #spawn-dispatcher }
`DispatcherSelector` has a few convenience methods:
@ -72,7 +72,7 @@ Java
The final example shows how to load a custom dispatcher from configuration and relies on this being in your `application.conf`:
<!-- Same between Java and Scala -->
@@snip [DispatcherDocSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/DispatchersDocSpec.scala) { #config }
@@snip [DispatcherDocSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/DispatchersDocSpec.scala) { #config }
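Such an `application.conf` entry typically defines a dispatcher by name; a sketch (the name `your-dispatcher` and the pool size are illustrative assumptions):

```hocon
your-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    # A fixed pool, e.g. for wrapping blocking calls
    fixed-pool-size = 32
  }
  throughput = 1
}
```

It would then be selected with `DispatcherSelector.fromConfig("your-dispatcher")`.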
## Types of dispatchers

View file

@ -10,15 +10,15 @@ You are viewing the documentation for the new actor APIs, to view the Akka Class
To use Akka Cluster Distributed Data, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group=org.apache.pekko
artifact=akka-cluster-typed_$scala.binary.version$
artifact=pekko-cluster-typed_$scala.binary.version$
version=PekkoVersion
}
@@project-info{ projectId="akka-cluster-typed" }
@@project-info{ projectId="cluster-typed" }
## Introduction
@ -53,10 +53,10 @@ and the actual CRDTs are defined in the `pekko.cluster.ddata` package, for examp
available from:
Scala
: @@snip [ReplicatorSpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/ddata/typed/scaladsl/ReplicatorDocSpec.scala) { #selfUniqueAddress }
: @@snip [ReplicatorSpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/ddata/typed/scaladsl/ReplicatorDocSpec.scala) { #selfUniqueAddress }
Java
: @@snip [ReplicatorTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/ddata/typed/javadsl/ReplicatorDocSample.java) { #selfUniqueAddress }
: @@snip [ReplicatorTest.java](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/ddata/typed/javadsl/ReplicatorDocSample.java) { #selfUniqueAddress }
The replicator can contain multiple entries, each containing a replicated data type. We therefore need to create a
key identifying the entry and helping us know what type it has, and then use that key for every interaction with
@ -74,10 +74,10 @@ This sample uses the replicated data type `GCounter` to implement a counter that
cluster:
Scala
: @@snip [ReplicatorSpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/ddata/typed/scaladsl/ReplicatorDocSpec.scala) { #sample }
: @@snip [ReplicatorSpec.scala](/cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/ddata/typed/scaladsl/ReplicatorDocSpec.scala) { #sample }
Java
: @@snip [ReplicatorTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/ddata/typed/javadsl/ReplicatorDocSample.java) { #sample }
: @@snip [ReplicatorTest.java](/cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/ddata/typed/javadsl/ReplicatorDocSample.java) { #sample }
Although you can interact with the `Replicator` using the @scala[`ActorRef[Replicator.Command]`]@java[`ActorRef<Replicator.Command>`]
from @scala[`DistributedData(ctx.system).replicator`]@java[`DistributedData(ctx.getSystem()).replicator()`] it's
@ -103,7 +103,7 @@ it contains five values:
There is an alternative way of constructing the function for the `Update` message:
Scala
: @@snip [ReplicatorSpec.scala](/akka-cluster-typed/src/test/scala/org/apache/pekko/cluster/ddata/typed/scaladsl/ReplicatorCompileOnlyTest.scala) { #curried-update }
: @@snip [ReplicatorSpec.scala](/cluster-typed/src/test/scala/org/apache/pekko/cluster/ddata/typed/scaladsl/ReplicatorCompileOnlyTest.scala) { #curried-update }
@@@
@ -144,7 +144,7 @@ incoming message can be used when the `GetSuccess` response from the replicator
An alternative way of constructing the function for the `Get` and `Delete` messages:
Scala
: @@snip [ReplicatorSpec.scala](/akka-cluster-typed/src/test/scala/org/apache/pekko/cluster/ddata/typed/scaladsl/ReplicatorCompileOnlyTest.scala) { #curried-get }
: @@snip [ReplicatorSpec.scala](/cluster-typed/src/test/scala/org/apache/pekko/cluster/ddata/typed/scaladsl/ReplicatorCompileOnlyTest.scala) { #curried-get }
@@@
@ -775,7 +775,7 @@ paper by Mark Shapiro et. al.
The `DistributedData` extension can be configured with the following properties:
@@snip [reference.conf](/akka-distributed-data/src/main/resources/reference.conf) { #distributed-data }
@@snip [reference.conf](/distributed-data/src/main/resources/reference.conf) { #distributed-data }
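A couple of commonly tuned entries, as a sketch (the values and the key pattern are illustrative, not recommendations):

```hocon
pekko.cluster.distributed-data {
  # How often replicated data is gossiped to other nodes
  gossip-interval = 2 s
  # Keys (or key prefixes ending in *) persisted to durable storage
  durable.keys = ["settings-*"]
}
```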
## Example project

View file

@ -4,15 +4,15 @@ You are viewing the documentation for the new actor APIs, to view the Akka Class
## Module info
The distributed publish subscribe topic API is available and usable with the core `akka-actor-typed` module, however it will only be distributed
The distributed publish subscribe topic API is available and usable with the core `pekko-actor-typed` module, however it will only be distributed
when used in a clustered application:
@@dependency[sbt,Maven,Gradle] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group="org.apache.pekko"
artifact="akka-cluster-typed_$scala.binary.version$"
artifact="pekko-cluster-typed_$scala.binary.version$"
version=PekkoVersion
}
@ -26,26 +26,26 @@ The identity of the topic is a tuple of the type of messages that can be publish
to not define multiple topics with different types and the same topic name.
Scala
: @@snip [PubSubExample.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/pubsub/PubSubExample.scala) { #start-topic }
: @@snip [PubSubExample.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/pubsub/PubSubExample.scala) { #start-topic }
Java
: @@snip [PubSubExample.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/pubsub/PubSubExample.java) { #start-topic }
: @@snip [PubSubExample.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/pubsub/PubSubExample.java) { #start-topic }
Local actors can then subscribe to the topic (and unsubscribe from it):
Scala
: @@snip [PubSubExample.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/pubsub/PubSubExample.scala) { #subscribe }
: @@snip [PubSubExample.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/pubsub/PubSubExample.scala) { #subscribe }
Java
: @@snip [PubSubExample.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/pubsub/PubSubExample.java) { #subscribe }
: @@snip [PubSubExample.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/pubsub/PubSubExample.java) { #subscribe }
And publish messages to the topic:
Scala
: @@snip [PubSubExample.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/pubsub/PubSubExample.scala) { #publish }
: @@snip [PubSubExample.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/pubsub/PubSubExample.scala) { #publish }
Java
: @@snip [PubSubExample.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/pubsub/PubSubExample.java) { #publish }
: @@snip [PubSubExample.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/pubsub/PubSubExample.java) { #publish }
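The topic mechanics above can be sketched without pekko as a minimal local registry (a pekko-free illustration; `LocalTopic` and its methods are hypothetical names, not the pekko API). Publishing delivers only to handlers registered at that moment, mirroring the no-replay semantics of the real topic:

```scala
import scala.collection.mutable

// A pekko-free sketch of a local topic: subscribe registers a handler,
// publish delivers to the handlers registered at that moment. A subscriber
// added after a publish never sees the earlier message (no replay).
final class LocalTopic[T](val name: String) {
  private val handlers = mutable.ListBuffer.empty[T => Unit]

  def subscribe(handler: T => Unit): Unit =
    handlers += handler

  // Returns how many subscribers the message was delivered to.
  def publish(message: T): Int = {
    handlers.foreach(_.apply(message))
    handlers.size
  }
}
```

A message published before any subscription is simply dropped, just as a real topic delivers nothing when the registry has no subscribers yet.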
## Pub Sub Scalability
@@ -64,6 +64,6 @@ for the topic will not be sent to it.
As in @ref:[Message Delivery Reliability](../general/message-delivery-reliability.md) of Akka, the message delivery guarantee in distributed pub-sub mode is **at-most-once delivery**. In other words, messages can be lost over the wire. In addition, the registry of nodes which have subscribers is eventually consistent,
meaning that subscribing an actor on one node will have a short delay before it is known on other nodes and published to.
If you are looking for at-least-once delivery guarantee, we recommend [Alpakka Kafka](https://doc.akka.io/docs/alpakka-kafka/current/).
If you are looking for at-least-once delivery guarantee, we recommend [Alpakka Kafka](https://doc.akka.io/docs/alpakka-kafka/current/).
View file
@@ -6,7 +6,7 @@
We can take the previous bank account example one step further by handling the commands within the state as well.
Scala
: @@snip [AccountExampleWithCommandHandlersInDurableState.scala](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithCommandHandlersInDurableState.scala) { #account-entity }
: @@snip [AccountExampleWithCommandHandlersInDurableState.scala](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithCommandHandlersInDurableState.scala) { #account-entity }
Take note of how the command handler is delegating to `applyCommand` in the `Account` (state), which is implemented
in the concrete `EmptyAccount`, `OpenedAccount`, and `ClosedAccount`.
@@ -26,7 +26,7 @@ illustrates using `null` as the `emptyState`.]
is used in command handlers at the outer layer before delegating to the state or other methods.]
Scala
: @@snip [AccountExampleWithOptionDurableState.scala](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithOptionDurableState.scala) { #account-entity }
: @@snip [AccountExampleWithOptionDurableState.scala](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithOptionDurableState.scala) { #account-entity }
Java
: @@snip [AccountExampleWithNullDurableState.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithNullDurableState.java) { #account-entity }
: @@snip [AccountExampleWithNullDurableState.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithNullDurableState.java) { #account-entity }
View file
@@ -8,21 +8,21 @@ project.description: Durable State with Akka Persistence enables actors to persi
To use Akka Persistence, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=org.apache.pekko bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
symbol1=PekkoVersion
value1="$pekko.version$"
group=org.apache.pekko
artifact=akka-persistence-typed_$scala.binary.version$
artifact=pekko-persistence-typed_$scala.binary.version$
version=PekkoVersion
group2=org.apache.pekko
artifact2=akka-persistence-testkit_$scala.binary.version$
artifact2=pekko-persistence-testkit_$scala.binary.version$
version2=PekkoVersion
scope2=test
}
You also have to select a durable state store plugin, see @ref:[Persistence Plugins](../../persistence-plugins.md).
@@project-info{ projectId="akka-persistence-typed" }
@@project-info{ projectId="persistence-typed" }
## Introduction
@@ -45,10 +45,10 @@ is ensured, have a look at the @ref:[Cluster Sharding and DurableStateBehavior](
Let's start with a simple example that models a counter using an Akka persistent actor. The minimum required for a @apidoc[DurableStateBehavior] is:
Scala
: @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #structure }
: @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #structure }
Java
: @@snip [DurableStatePersistentBehaviorTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #structure }
: @@snip [DurableStatePersistentBehaviorTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #structure }
The first important thing to notice is the `Behavior` of a persistent actor is typed to the type of the `Command`
because this is the type of message a persistent actor should receive. In Akka this is now enforced by the type system.
@@ -109,18 +109,18 @@ Let's fill in the details of the example.
Commands:
Scala
: @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #command }
: @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #command }
Java
: @@snip [DurableStatePersistentBehaviorTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #command }
: @@snip [DurableStatePersistentBehaviorTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #command }
The state stores the latest value of the counter.
Scala
: @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #state }
: @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #state }
Java
: @@snip [DurableStatePersistentBehaviorTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #state }
: @@snip [DurableStatePersistentBehaviorTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #state }
The command handler handles the commands `Increment`, `IncrementBy` and `GetValue`.
@@ -129,19 +129,19 @@ The command handler handles the commands `Increment`, `IncrementBy` and `GetValu
* `GetValue` retrieves the value of the counter from the State and replies with it to the actor passed in
Scala
: @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #command-handler }
: @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #command-handler }
Java
: @@snip [DurableStatePersistentBehaviorTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #command-handler }
: @@snip [DurableStatePersistentBehaviorTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #command-handler }
@scala[These are used to create a `DurableStateBehavior`:]
@java[These are defined in a `DurableStateBehavior`:]
Scala
: @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #behavior }
: @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #behavior }
Java
: @@snip [DurableStatePersistentBehaviorTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #behavior }
: @@snip [DurableStatePersistentBehaviorTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #behavior }
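The counter walked through above can be condensed into a pekko-free sketch of the command-handler pattern (`Persist` and `Reply` are hypothetical stand-ins for pekko's `Effect` API, not real pekko types):

```scala
// Pekko-free sketch of the counter's command handling: the handler maps
// (state, command) to either a new state to persist or a reply.
object CounterLogic {
  final case class State(value: Int)

  sealed trait Command
  case object Increment extends Command
  final case class IncrementBy(amount: Int) extends Command
  case object GetValue extends Command

  sealed trait Result
  final case class Persist(newState: State) extends Result
  final case class Reply(value: Int) extends Result

  // Increment and IncrementBy persist a new state; GetValue replies with
  // the current value without persisting anything.
  def handle(state: State, cmd: Command): Result = cmd match {
    case Increment      => Persist(State(state.value + 1))
    case IncrementBy(n) => Persist(State(state.value + n))
    case GetValue       => Reply(state.value)
  }
}
```

The real `DurableStateBehavior` wires such a handler together with a `PersistenceId` and an empty state; the sketch only shows the pure decision logic.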
## Effects and Side Effects
@@ -185,19 +185,19 @@ send an acknowledgement as a reply to the `ActorRef` passed in the command.
Example of effects and side-effects:
Scala
: @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #effects }
: @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #effects }
Java
: @@snip [DurableStatePersistentBehaviorTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #effects }
: @@snip [DurableStatePersistentBehaviorTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #effects }
The most common way to have a side-effect is to use the `thenRun` method on `Effect`. In case you have multiple side-effects
that need to be run for several commands, you can factor them out into functions and reuse them for all the commands. For example:
Scala
: @@snip [PersistentActorCompileOnlyTest.scala](/akka-persistence-typed/src/test/scala/org/apache/pekko/persistence/typed/scaladsl/PersistentActorCompileOnlyTest.scala) { #commonChainedEffects }
: @@snip [PersistentActorCompileOnlyTest.scala](/persistence-typed/src/test/scala/org/apache/pekko/persistence/typed/scaladsl/PersistentActorCompileOnlyTest.scala) { #commonChainedEffects }
Java
: @@snip [PersistentActorCompileOnlyTest.java](/akka-persistence-typed/src/test/java/org/apache/pekko/persistence/typed/javadsl/PersistentActorCompileOnlyTest.java) { #commonChainedEffects }
: @@snip [PersistentActorCompileOnlyTest.java](/persistence-typed/src/test/java/org/apache/pekko/persistence/typed/javadsl/PersistentActorCompileOnlyTest.java) { #commonChainedEffects }
### Side effects ordering and guarantees
@ -228,10 +228,10 @@ Cluster Sharding ensures that there is only one active entity (or actor instance
If the @apidoc[DurableStateBehavior] needs to use the @apidoc[typed.*.ActorContext], for example to spawn child actors, it can be obtained by wrapping construction with `Behaviors.setup`:
Scala
: @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #actor-context }
: @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #actor-context }
Java
: @@snip [BasicPersistentBehaviorTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #actor-context }
: @@snip [BasicPersistentBehaviorTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #actor-context }
## Changing Behavior
@@ -258,18 +258,18 @@ Once it is started then one can look it up with `GetPost`, modify it with `Chang
The state is captured by:
Scala
: @@snip [BlogPostEntityDurableState.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.scala) { #state }
: @@snip [BlogPostEntityDurableState.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.scala) { #state }
Java
: @@snip [BlogPostEntityDurableState.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.java) { #state }
: @@snip [BlogPostEntityDurableState.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.java) { #state }
The commands, of which only a subset are valid depending on the state:
Scala
: @@snip [BlogPostEntityDurableState.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.scala) { #commands }
: @@snip [BlogPostEntityDurableState.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.scala) { #commands }
Java
: @@snip [BlogPostEntityDurableState.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.java) { #commands }
: @@snip [BlogPostEntityDurableState.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.java) { #commands }
@java[The command handler to process each command is decided by the state class (or state predicate) that is
given to the `forStateType` of the `CommandHandlerBuilder` and the match cases in the builders.]
@@ -278,18 +278,18 @@ It typically becomes two levels of pattern matching, first on the state and then
Delegating to methods like `addPost`, `changeBody`, `publish` etc. is a good practice because the one-line cases give a nice overview of the message dispatch.
Scala
: @@snip [BlogPostEntityDurableState.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.scala) { #command-handler }
: @@snip [BlogPostEntityDurableState.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.scala) { #command-handler }
Java
: @@snip [BlogPostEntityDurableState.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.java) { #command-handler }
: @@snip [BlogPostEntityDurableState.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.java) { #command-handler }
And finally the behavior is created @scala[from the `DurableStateBehavior.apply`]:
Scala
: @@snip [BlogPostEntityDurableState.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.scala) { #behavior }
: @@snip [BlogPostEntityDurableState.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.scala) { #behavior }
Java
: @@snip [BlogPostEntityDurableState.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.java) { #behavior }
: @@snip [BlogPostEntityDurableState.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.java) { #behavior }
This can be refactored one or two steps further by defining the command handlers in the state class as
illustrated in @ref:[command handlers in the state](persistence-style.md#command-handlers-in-the-state).
@@ -312,17 +312,17 @@ After validation errors or after persisting events, using a `thenRun` side effec
be sent to the `ActorRef`.
Scala
: @@snip [BlogPostEntityDurableState.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.scala) { #reply-command }
: @@snip [BlogPostEntityDurableState.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.scala) { #reply-command }
Java
: @@snip [BlogPostEntityDurableState.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.java) { #reply-command }
: @@snip [BlogPostEntityDurableState.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.java) { #reply-command }
Scala
: @@snip [BlogPostEntityDurableState.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.scala) { #reply }
: @@snip [BlogPostEntityDurableState.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.scala) { #reply }
Java
: @@snip [BlogPostEntityDurableState.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.java) { #reply }
: @@snip [BlogPostEntityDurableState.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/BlogPostEntityDurableState.java) { #reply }
Since this is such a common pattern there is a reply effect for this purpose. It has the nice property that
@@ -333,18 +333,18 @@ created with @scala[`Effect.reply`]@java[`Effect().reply`], @scala[`Effect.noRep
@scala[`Effect.thenReply`]@java[`Effect().thenReply`], or @scala[`Effect.thenNoReply`]@java[`Effect().thenNoReply`].
Scala
: @@snip [AccountExampleWithCommandHandlersInDurableState.scala](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithCommandHandlersInDurableState.scala) { #withEnforcedReplies }
: @@snip [AccountExampleWithCommandHandlersInDurableState.scala](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithCommandHandlersInDurableState.scala) { #withEnforcedReplies }
Java
: @@snip [AccountExampleWithNullDurableState.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithNullDurableState.java) { #withEnforcedReplies }
: @@snip [AccountExampleWithNullDurableState.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithNullDurableState.java) { #withEnforcedReplies }
The commands must have a field of @scala[`ActorRef[ReplyMessageType]`]@java[`ActorRef<ReplyMessageType>`] that can then be used to send a reply.
Scala
: @@snip [AccountExampleWithCommandHandlersInDurableState.scala](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithCommandHandlersInDurableState.scala) { #reply-command }
: @@snip [AccountExampleWithCommandHandlersInDurableState.scala](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithCommandHandlersInDurableState.scala) { #reply-command }
Java
: @@snip [AccountExampleWithNullDurableState.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithNullDurableState.java) { #reply-command }
: @@snip [AccountExampleWithNullDurableState.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithNullDurableState.java) { #reply-command }
The `ReplyEffect` is created with @scala[`Effect.reply`]@java[`Effect().reply`], @scala[`Effect.noReply`]@java[`Effect().noReply`],
@scala[`Effect.thenReply`]@java[`Effect().thenReply`], or @scala[`Effect.thenNoReply`]@java[`Effect().thenNoReply`].
@@ -353,10 +353,10 @@ The `ReplyEffect` is created with @scala[`Effect.reply`]@java[`Effect().reply`],
`EventSourcedBehaviorWithEnforcedReplies`, as opposed to newCommandHandlerBuilder when using `EventSourcedBehavior`.]
Scala
: @@snip [AccountExampleWithCommandHandlersInDurableState.scala](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithCommandHandlersInDurableState.scala) { #reply }
: @@snip [AccountExampleWithCommandHandlersInDurableState.scala](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithCommandHandlersInDurableState.scala) { #reply }
Java
: @@snip [AccountExampleWithNullDurableState.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithNullDurableState.java) { #reply }
: @@snip [AccountExampleWithNullDurableState.java](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/AccountExampleWithNullDurableState.java) { #reply }
These effects will send the reply message even when @scala[`DurableStateBehavior.withEnforcedReplies`]@java[`DurableStateBehaviorWithEnforcedReplies`]
is not used, but then there will be no compilation errors if the reply decision is left out.
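The enforced-reply idea can be illustrated without pekko (a sketch with hypothetical names; `Reply` stands in for pekko's `ReplyEffect`): making the reply the handler's return type turns a forgotten reply into a compile error rather than a silent runtime omission.

```scala
// Pekko-free sketch of enforced replies for a bank-account-style entity:
// every branch of the handler must produce a Reply, so there is no way
// to "fall through" without answering the caller.
object EnforcedReplies {
  final case class State(balance: Int)

  sealed trait Confirmation
  case object Accepted extends Confirmation
  final case class Rejected(reason: String) extends Confirmation

  // The reply and the resulting state travel together.
  final case class Reply(confirmation: Confirmation, newState: State)

  def withdraw(state: State, amount: Int): Reply =
    if (amount <= state.balance) Reply(Accepted, State(state.balance - amount))
    else Reply(Rejected("insufficient funds"), state)
}
```

Deleting either branch's `Reply(...)` would fail to compile, which is the same guarantee `withEnforcedReplies` gives through the `ReplyEffect` return type.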
@@ -378,10 +378,10 @@ Persistence allows you to use tags in persistence query. Tagging allows you to i
and separately consume them as a stream through the `DurableStateStoreQuery` interface.
Scala
: @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #tagging }
: @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #tagging }
Java
: @@snip [DurableStatePersistentBehaviorTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #tagging }
: @@snip [DurableStatePersistentBehaviorTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #tagging }
## Wrapping DurableStateBehavior
@@ -390,7 +390,7 @@ other behaviors such as `Behaviors.setup` in order to access the `ActorContext` 
to access the logger from within the `ActorContext` to log for debugging the `commandHandler`.
Scala
: @@snip [DurableStatePersistentActorCompileOnly.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #wrapPersistentBehavior }
: @@snip [DurableStatePersistentActorCompileOnly.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #wrapPersistentBehavior }
Java
: @@snip [DurableStatePersistentBehaviorTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #wrapPersistentBehavior }
: @@snip [DurableStatePersistentBehaviorTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorTest.java) { #wrapPersistentBehavior }
View file
@@ -22,18 +22,18 @@ ensure the thread safety and that it is non-blocking.
Let's build an extension to manage a shared database connection pool.
Scala
: @@snip [ExtensionDocSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/extensions/ExtensionDocSpec.scala) { #shared-resource }
: @@snip [ExtensionDocSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/extensions/ExtensionDocSpec.scala) { #shared-resource }
Java
: @@snip [ExtensionDocTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/extensions/ExtensionDocTest.java) { #shared-resource }
: @@snip [ExtensionDocTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/extensions/ExtensionDocTest.java) { #shared-resource }
First create an @apidoc[actor.typed.Extension]; it will be created only once per ActorSystem:
Scala
: @@snip [ExtensionDocSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/extensions/ExtensionDocSpec.scala) { #extension }
: @@snip [ExtensionDocSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/extensions/ExtensionDocSpec.scala) { #extension }
Java
: @@snip [ExtensionDocTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/extensions/ExtensionDocTest.java) { #extension }
: @@snip [ExtensionDocTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/extensions/ExtensionDocTest.java) { #extension }
This is the public API of your extension. Internally in this example we instantiate our expensive database connection.
@@ -41,18 +41,18 @@ Then create an @apidoc[actor.typed.ExtensionId] to identify the extension.
@scala[A good convention is to let the companion object of the `Extension` be the `ExtensionId`.]@java[A good convention is to define the `ExtensionId` as a static inner class of the `Extension`.]
Scala
: @@snip [ExtensionDocSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/extensions/ExtensionDocSpec.scala) { #extension-id }
: @@snip [ExtensionDocSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/extensions/ExtensionDocSpec.scala) { #extension-id }
Java
: @@snip [ExtensionDocTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/extensions/ExtensionDocTest.java) { #extension-id }
: @@snip [ExtensionDocTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/extensions/ExtensionDocTest.java) { #extension-id }
Then finally to use the extension it can be looked up:
Scala
: @@snip [ExtensionDocSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/extensions/ExtensionDocSpec.scala) { #usage }
: @@snip [ExtensionDocSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/extensions/ExtensionDocSpec.scala) { #usage }
Java
: @@snip [ExtensionDocTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/extensions/ExtensionDocTest.java) { #usage }
: @@snip [ExtensionDocTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/extensions/ExtensionDocTest.java) { #usage }
The `DatabaseConnectionPool` can be looked up in this way any number of times and it will return the same instance.
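The once-per-system guarantee can be sketched without pekko (the class names follow the example above, but this is an illustration, not the pekko `ExtensionId` API): the id object caches one extension instance per system, here identified by name only.

```scala
import java.util.concurrent.ConcurrentHashMap

// Stand-in for the expensive shared resource the extension manages.
final class DatabaseConnectionPool(systemName: String) {
  val id: String = s"pool-for-$systemName"
}

// Sketch of an ExtensionId: repeated lookups for the same system
// return the very same instance, created at most once.
object DatabaseConnectionPoolId {
  private val instances = new ConcurrentHashMap[String, DatabaseConnectionPool]()

  def apply(systemName: String): DatabaseConnectionPool =
    instances.computeIfAbsent(systemName, name => new DatabaseConnectionPool(name))
}
```

`computeIfAbsent` makes creation atomic, which is the same thread-safety property the real extension machinery provides per `ActorSystem`.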
@@ -63,7 +63,7 @@ To be able to load extensions from your Akka configuration you must add FQCNs of
in the `pekko.actor.typed.extensions` section of the config you provide to your `ActorSystem`.
Scala
: @@snip [ExtensionDocSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/extensions/ExtensionDocSpec.scala) { #config }
: @@snip [ExtensionDocSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/extensions/ExtensionDocSpec.scala) { #config }
Java
: ```ruby
View file
@@ -38,36 +38,36 @@ This example restarts the actor when it fails with an @javadoc[IllegalStateExcep
Scala
: @@snip [SupervisionCompileOnly.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #restart }
: @@snip [SupervisionCompileOnly.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #restart }
Java
: @@snip [SupervisionCompileOnlyTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #restart }
: @@snip [SupervisionCompileOnlyTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #restart }
Or, to instead ignore the failure and resume processing the next message:
Scala
: @@snip [SupervisionCompileOnly.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #resume }
: @@snip [SupervisionCompileOnly.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #resume }
Java
: @@snip [SupervisionCompileOnlyTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #resume }
: @@snip [SupervisionCompileOnlyTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #resume }
More complicated restart strategies can be used, e.g. to restart no more than 10
times in a 10-second period:
Scala
: @@snip [SupervisionCompileOnly.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #restart-limit }
: @@snip [SupervisionCompileOnly.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #restart-limit }
Java
: @@snip [SupervisionCompileOnlyTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #restart-limit }
: @@snip [SupervisionCompileOnlyTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #restart-limit }
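One way to model the "no more than N restarts within a time window" decision is sketched below (a pekko-free illustration of the counting logic; the real strategy is configured via `SupervisorStrategy.restart.withLimit` and tracked internally by pekko):

```scala
// Sketch of windowed restart limiting: a restart is allowed only while
// fewer than maxRestarts of the previous restarts fall inside the window.
object RestartLimiter {
  def allowRestart(
      previousRestarts: List[Long], // timestamps in millis
      now: Long,
      maxRestarts: Int,
      windowMillis: Long): Boolean =
    previousRestarts.count(t => now - t < windowMillis) < maxRestarts
}
```

Restarts older than the window stop counting against the limit, so a long-stable actor regains its full restart budget.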
To handle different exceptions with different strategies, calls to @apidoc[supervise](typed.*.Behaviors$) {scala="#supervise[T](wrapped:org.apache.pekko.actor.typed.Behavior[T]):org.apache.pekko.actor.typed.scaladsl.Behaviors.Supervise[T]" java="#supervise(org.apache.pekko.actor.typed.Behavior)"}
can be nested:
Scala
: @@snip [SupervisionCompileOnly.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #multiple }
: @@snip [SupervisionCompileOnly.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #multiple }
Java
: @@snip [SupervisionCompileOnlyTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #multiple }
: @@snip [SupervisionCompileOnlyTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #multiple }
For a full list of strategies see the public methods on @apidoc[actor.typed.SupervisorStrategy].
@@ -87,18 +87,18 @@ With the @ref:[functional style](style-guide.md#functional-versus-object-oriente
to store state by changing behavior e.g.
Scala
: @@snip [SupervisionCompileOnly.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #wrap }
: @@snip [SupervisionCompileOnly.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #wrap }
Java
: @@snip [SupervisionCompileOnlyTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #wrap }
: @@snip [SupervisionCompileOnlyTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #wrap }
When doing this, supervision only needs to be added to the top level:
Scala
: @@snip [SupervisionCompileOnly.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #top-level }
: @@snip [SupervisionCompileOnly.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #top-level }
Java
: @@snip [SupervisionCompileOnlyTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #top-level }
: @@snip [SupervisionCompileOnlyTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #top-level }
Each returned behavior will be re-wrapped automatically with the supervisor.
@@ -108,10 +108,10 @@ Child actors are often started in a @apidoc[setup](typed.*.Behaviors$) {scala="#
The child actors are stopped to avoid resource leaks of creating new child actors each time the parent is restarted.
Scala
: @@snip [SupervisionCompileOnly.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #restart-stop-children }
: @@snip [SupervisionCompileOnly.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #restart-stop-children }
Java
: @@snip [SupervisionCompileOnlyTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #restart-stop-children }
: @@snip [SupervisionCompileOnlyTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #restart-stop-children }
It is possible to override this so that child actors are not influenced when the parent actor is restarted.
The restarted parent instance will then have the same children as before the failure.
@@ -122,10 +122,10 @@ when parent is restarted the @apidoc[supervise](typed.*.Behaviors$) {scala="#sup
like this:
Scala
-: @@snip [SupervisionCompileOnly.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #restart-keep-children }
+: @@snip [SupervisionCompileOnly.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #restart-keep-children }
Java
-: @@snip [SupervisionCompileOnlyTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #restart-keep-children }
+: @@snip [SupervisionCompileOnlyTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #restart-keep-children }
That means that the @apidoc[setup](typed.*.Behaviors$) {scala="#setup[T](factory:org.apache.pekko.actor.typed.scaladsl.ActorContext[T]=%3Eorg.apache.pekko.actor.typed.Behavior[T]):org.apache.pekko.actor.typed.Behavior[T]" java="#setup(org.apache.pekko.japi.function.Function)"} block will only be run when the parent actor is first started, and not when it is
restarted.
@@ -137,10 +137,10 @@ it has created, much like the @apidoc[PostStop] signal when the @ref[actor stops
The returned behavior from the @apidoc[PreRestart] signal is ignored.
Scala
-: @@snip [SupervisionCompileOnly.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #restart-PreRestart-signal }
+: @@snip [SupervisionCompileOnly.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/supervision/SupervisionCompileOnly.scala) { #restart-PreRestart-signal }
Java
-: @@snip [SupervisionCompileOnlyTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #restart-PreRestart-signal }
+: @@snip [SupervisionCompileOnlyTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/supervision/SupervisionCompileOnlyTest.java) { #restart-PreRestart-signal }
Note that @apidoc[PostStop] is not emitted for a restart, so typically you need to handle both @apidoc[PreRestart] and @apidoc[PostStop]
to cleanup resources.
@@ -167,7 +167,7 @@ There might be cases when you want the original exception to bubble up the hiera
Scala
-: @@snip [FaultToleranceDocSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/FaultToleranceDocSpec.scala) { #bubbling-example }
+: @@snip [FaultToleranceDocSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/FaultToleranceDocSpec.scala) { #bubbling-example }
Java
-: @@snip [SupervisionCompileOnlyTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/BubblingSample.java) { #bubbling-example }
+: @@snip [SupervisionCompileOnlyTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/BubblingSample.java) { #bubbling-example }
@@ -78,18 +78,18 @@ the @ref:[functional style](style-guide.md#functional-versus-object-oriented-sty
Classic HelloWorld actor:
Scala
-: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/fromclassic/ClassicSample.scala) { #hello-world-actor }
+: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/fromclassic/ClassicSample.scala) { #hello-world-actor }
Java
-: @@snip [IntroSpec.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/fromclassic/ClassicSample.java) { #hello-world-actor }
+: @@snip [IntroSpec.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/fromclassic/ClassicSample.java) { #hello-world-actor }
Typed HelloWorld actor:
Scala
-: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/fromclassic/TypedSample.scala) { #hello-world-actor }
+: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/fromclassic/TypedSample.scala) { #hello-world-actor }
Java
-: @@snip [IntroSpec.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/fromclassic/TypedSample.java) { #hello-world-actor }
+: @@snip [IntroSpec.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/fromclassic/TypedSample.java) { #hello-world-actor }
Why is it called `Behavior` and not `Actor`?
@@ -322,10 +322,10 @@ collection for bookkeeping of children, such as a @scala[`Map[String, ActorRef[C
@java[`Map<String, ActorRef<Child.Command>>`]. It can look like this:
Scala
-: @@snip [IntroSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/fromclassic/TypedSample.scala) { #children }
+: @@snip [IntroSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/fromclassic/TypedSample.scala) { #children }
Java
-: @@snip [IntroSpec.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/fromclassic/TypedSample.java) { #children }
+: @@snip [IntroSpec.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/fromclassic/TypedSample.java) { #children }
Remember to remove entries from the `Map` when the children are terminated. For that purpose it's
convenient to use `watchWith`, as illustrated in the example above, because then you can include the
@@ -19,29 +19,29 @@ This example demonstrates how to:
The events the FSM can receive become the type of message the Actor can receive:
Scala
-: @@snip [FSMSocSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/FSMDocSpec.scala) { #simple-events }
+: @@snip [FSMSocSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/FSMDocSpec.scala) { #simple-events }
Java
-: @@snip [FSMSocTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/FSMDocTest.java) { #simple-events }
+: @@snip [FSMSocTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/FSMDocTest.java) { #simple-events }
`SetTarget` is needed for starting it up, setting the destination for the
`Batches` to be passed on; `Queue` will add to the internal queue while
`Flush` will mark the end of a burst.
Scala
-: @@snip [FSMSocSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/FSMDocSpec.scala) { #storing-state }
+: @@snip [FSMSocSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/FSMDocSpec.scala) { #storing-state }
Java
-: @@snip [FSMSocTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/FSMDocTest.java) { #storing-state }
+: @@snip [FSMSocTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/FSMDocTest.java) { #storing-state }
Each state becomes a distinct behavior and after processing a message the next state in the form of a `Behavior`
is returned.
Scala
-: @@snip [FSMSocSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/FSMDocSpec.scala) { #simple-state }
+: @@snip [FSMSocSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/FSMDocSpec.scala) { #simple-state }
Java
-: @@snip [FSMSocTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/FSMDocTest.java) { #simple-state}
+: @@snip [FSMSocTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/FSMDocTest.java) { #simple-state}
@@@ div { .group-scala }
The method `idle` above makes use of `Behaviors.unhandled` which advises the system to reuse the previous behavior,
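The FSM docs touched by this hunk model each state as a behavior that returns the next behavior. A framework-free sketch of that style for the `Buncher` example (plain Java, not the Pekko API; message encoding and names are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

public class BuncherSketch {
    // A behavior consumes one message and returns the behavior for the next
    // message, mirroring "return the next state as a Behavior".
    interface Behavior extends Function<String, Behavior> {}

    static final List<List<String>> delivered = new ArrayList<>();

    // "idle" state: a Queue message moves to "active" with a fresh buffer.
    static Behavior idle() {
        return msg -> msg.startsWith("queue:")
                ? active(new ArrayList<>(List.of(msg.substring(6))))
                : idle(); // unhandled: stay idle
    }

    // "active" state: buffer Queue messages; Flush delivers the batch and
    // returns to idle.
    static Behavior active(List<String> buffer) {
        return msg -> {
            if (msg.startsWith("queue:")) {
                buffer.add(msg.substring(6));
                return active(buffer);
            } else if (msg.equals("flush")) {
                delivered.add(List.copyOf(buffer));
                return idle();
            }
            return active(buffer);
        };
    }

    public static void main(String[] args) {
        Behavior b = idle();
        for (String m : List.of("queue:a", "queue:b", "flush", "queue:c", "flush")) {
            b = b.apply(m);
        }
        System.out.println(delivered); // [[a, b], [c]]
    }
}
```

The state data (`buffer`) lives in the closure of each state function, which is the same trick the functional-style snippets use instead of mutable fields.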
@@ -34,19 +34,19 @@ Tell is asynchronous which means that the method returns right away. After the s
With the given protocol and actor behavior:
Scala
-: @@snip [InteractionPatternsSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #fire-and-forget-definition }
+: @@snip [InteractionPatternsSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #fire-and-forget-definition }
Java
-: @@snip [InteractionPatternsTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #fire-and-forget-definition }
+: @@snip [InteractionPatternsTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #fire-and-forget-definition }
Fire and forget looks like this:
Scala
-: @@snip [InteractionPatternsSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #fire-and-forget-doit }
+: @@snip [InteractionPatternsSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #fire-and-forget-doit }
Java
-: @@snip [InteractionPatternsTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #fire-and-forget-doit }
+: @@snip [InteractionPatternsTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #fire-and-forget-doit }
**Useful when:**
@@ -73,28 +73,28 @@ In Akka the recipient of responses has to be encoded as a field in the message i
With the following protocol:
Scala
-: @@snip [InteractionPatternsSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #request-response-protocol }
+: @@snip [InteractionPatternsSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #request-response-protocol }
Java
-: @@snip [InteractionPatternsTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #request-response-protocol }
+: @@snip [InteractionPatternsTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #request-response-protocol }
The sender would use its own @scala[`ActorRef[Response]`]@java[`ActorRef<Response>`], which it can access through @scala[@scaladoc[ActorContext.self](actor.typed.scaladsl.ActorContext#self:org.apache.pekko.actor.typed.ActorRef[T])]@java[@javadoc[ActorContext.getSelf()](pekko.actor.typed.javadsl.ActorContext#getSelf())], for the `replyTo`.
Scala
-: @@snip [InteractionPatternsSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #request-response-send }
+: @@snip [InteractionPatternsSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #request-response-send }
Java
-: @@snip [InteractionPatternsTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #request-response-send }
+: @@snip [InteractionPatternsTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #request-response-send }
On the receiving side the @scala[`ActorRef[Response]`]@java[`ActorRef<Response>`] can then be used to send one or more responses back:
Scala
-: @@snip [InteractionPatternsSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #request-response-respond }
+: @@snip [InteractionPatternsSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #request-response-respond }
Java
-: @@snip [InteractionPatternsTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #request-response-respond }
+: @@snip [InteractionPatternsTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #request-response-respond }
**Useful when:**
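The request–response hunk above is about a protocol in which the reply address travels inside the request message as a `replyTo` field. A framework-free sketch of that encoding (plain Java, not the Pekko API; `Request` and the mailbox wiring are hypothetical):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

public class RequestResponseSketch {
    // The request carries its own reply address, like ActorRef[Response]
    // carried in the message protocol.
    record Request(String query, Consumer<String> replyTo) {}

    // The "actor": drains its mailbox and replies via the embedded replyTo.
    static void run(Queue<Request> mailbox) {
        Request r;
        while ((r = mailbox.poll()) != null) {
            r.replyTo().accept("response to: " + r.query());
        }
    }

    public static void main(String[] args) {
        Queue<Request> mailbox = new ArrayDeque<>();
        StringBuilder received = new StringBuilder();
        // The sender passes a reference to itself as the replyTo.
        mailbox.add(new Request("GiveMeCookies", received::append));
        run(mailbox);
        System.out.println(received); // response to: GiveMeCookies
    }
}
```

Because the responder only sees the `replyTo` reference, it never needs to know who the sender is, which is the point the docs make about encoding the recipient in the message.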
@@ -119,10 +119,10 @@ Most often the sending actor does not, and should not, support receiving the res
![adapted-response.png](./images/adapted-response.png)
Scala
-: @@snip [InteractionPatternsSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #adapted-response }
+: @@snip [InteractionPatternsSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #adapted-response }
Java
-: @@snip [InteractionPatternsTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #adapted-response }
+: @@snip [InteractionPatternsTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #adapted-response }
You can register several message adapters for different message classes.
It's only possible to have one message adapter per message class to make sure
@@ -170,10 +170,10 @@ See also the [Generic response wrapper](#generic-response-wrapper) for replies t
![ask-from-actor.png](./images/ask-from-actor.png)
Scala
-: @@snip [InteractionPatternsSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #actor-ask }
+: @@snip [InteractionPatternsSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #actor-ask }
Java
-: @@snip [InteractionPatternsTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #actor-ask }
+: @@snip [InteractionPatternsTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #actor-ask }
The response adapting function is running in the receiving actor and can safely access its state, but if it throws an exception the actor is stopped.
@@ -207,10 +207,10 @@ to send a message to an actor and get a `Future[Response]` back. `ask` takes imp
![ask-from-outside.png](./images/ask-from-outside.png)
Scala
-: @@snip [InteractionPatternsSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #standalone-ask }
+: @@snip [InteractionPatternsSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #standalone-ask }
Java
-: @@snip [InteractionPatternsTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #standalone-ask }
+: @@snip [InteractionPatternsTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #standalone-ask }
Note that validation errors are also explicit in the message protocol. The `GiveMeCookies` request can reply
with `Cookies` or `InvalidRequest`. The requestor has to decide how to handle an `InvalidRequest` reply. Sometimes
@@ -218,10 +218,10 @@ it should be treated as a failed @scala[@scaladoc[Future](scala.concurrent.Futur
requestor side. See also the [Generic response wrapper](#generic-response-wrapper) for replies that are either a success or an error.
Scala
-: @@snip [InteractionPatternsSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #standalone-ask-fail-future }
+: @@snip [InteractionPatternsSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #standalone-ask-fail-future }
Java
-: @@snip [InteractionPatternsTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #standalone-ask-fail-future }
+: @@snip [InteractionPatternsTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #standalone-ask-fail-future }
**Useful when:**
@@ -252,10 +252,10 @@ Errors are preferably sent as a text describing what is wrong, but using excepti
**Example actor to actor ask:**
Scala
-: @@snip [InteractionPatternsSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #actor-ask-with-status }
+: @@snip [InteractionPatternsSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #actor-ask-with-status }
Java
-: @@snip [InteractionPatternsTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsAskWithStatusTest.java) { #actor-ask-with-status }
+: @@snip [InteractionPatternsTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsAskWithStatusTest.java) { #actor-ask-with-status }
A validation error is turned into a `Failure` for the message adapter. In this case we are explicitly handling the validation error separately from
other ask failures.
@@ -263,18 +263,18 @@ other ask failures.
**Example ask from the outside:**
Scala
-: @@snip [InteractionPatternsSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #standalone-ask-with-status }
+: @@snip [InteractionPatternsSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #standalone-ask-with-status }
Java
-: @@snip [InteractionPatternsTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsAskWithStatusTest.java) { #standalone-ask-with-status }
+: @@snip [InteractionPatternsTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsAskWithStatusTest.java) { #standalone-ask-with-status }
Note that validation errors are also explicit in the message protocol, but encoded as the wrapper type, constructed using @scala[@scaladoc[StatusReply.Error(text)](pekko.pattern.StatusReply$$Error$#apply[T](errorMessage:String):org.apache.pekko.pattern.StatusReply[T])]@java[@javadoc[StatusReply.error(text)](pekko.pattern.StatusReply$#error(java.lang.String))]:
Scala
-: @@snip [InteractionPatternsSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #standalone-ask-with-status-fail-future }
+: @@snip [InteractionPatternsSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #standalone-ask-with-status-fail-future }
Java
-: @@snip [InteractionPatternsTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsAskWithStatusTest.java) { #standalone-ask-with-status-fail-future }
+: @@snip [InteractionPatternsTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsAskWithStatusTest.java) { #standalone-ask-with-status-fail-future }
## Ignoring replies
@@ -286,10 +286,10 @@ In some situations an actor has a response for a particular request message but
With the same protocol as the @ref[request response](#request-response) above, if the sender would prefer to ignore the reply it could pass @scala[`system.ignoreRef`]@java[`system.ignoreRef()`] for the `replyTo`, which it can access through @scala[`ActorContext.system.ignoreRef`]@java[`ActorContext.getSystem().ignoreRef()`].
Scala
-: @@snip [InteractionPatternsSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #ignore-reply }
+: @@snip [InteractionPatternsSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #ignore-reply }
Java
-: @@snip [InteractionPatternsTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #ignore-reply }
+: @@snip [InteractionPatternsTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #ignore-reply }
**Useful when:**
@@ -316,10 +316,10 @@ this purpose the `ActorContext` provides a @apidoc[pipeToSelf](typed.*.ActorCont
An actor, `CustomerRepository`, is invoking a method on `CustomerDataAccess` that returns a @scala[@scaladoc[Future](scala.concurrent.Future)]@java[@javadoc[CompletionStage](java.util.concurrent.CompletionStage)].
Scala
-: @@snip [InteractionPatternsSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #pipeToSelf }
+: @@snip [InteractionPatternsSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #pipeToSelf }
Java
-: @@snip [InteractionPatternsTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #pipeToSelf }
+: @@snip [InteractionPatternsTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #pipeToSelf }
It could be tempting to just use @scala[`onComplete on the Future`]@java[`a callback on the CompletionStage`], but
that introduces the risk of accessing internal state of the actor that is not thread-safe from an external thread.
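The `pipeToSelf` docs referenced in this hunk warn against mutating actor state from a future callback. A framework-free sketch of the safe alternative (plain Java, not the Pekko API; the mailbox-of-runnables encoding is a hypothetical simplification): the callback only wraps the result as a message and enqueues it, so the actor's own thread remains the sole writer of its state.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.LinkedBlockingQueue;

public class PipeToSelfSketch {
    // The actor's mailbox; messages are applied one at a time by one thread.
    static final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();
    static int state = 0; // only ever touched by the actor thread

    static void pipeToSelf(CompletableFuture<Integer> fut) {
        // The callback may run on a pool thread, but it does NOT touch state;
        // it only enqueues a message carrying the result (or a fallback).
        fut.whenComplete((value, err) ->
            mailbox.add(() -> state += (err == null ? value : 0)));
    }

    public static void main(String[] args) throws Exception {
        pipeToSelf(CompletableFuture.supplyAsync(() -> 41));
        // "Actor thread": take the piped message and apply it to state.
        mailbox.take().run();
        state += 1;
        System.out.println(state); // 42
    }
}
```

Mutating `state` directly inside `whenComplete` would be the unsafe version the docs describe: two threads racing on unsynchronized actor internals.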
@@ -353,10 +353,10 @@ As the protocol of the session actor is not a public API but rather an implement
![per-session-child.png](./images/per-session-child.png)
Scala
-: @@snip [InteractionPatternsSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #per-session-child }
+: @@snip [InteractionPatternsSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #per-session-child }
Java
-: @@snip [InteractionPatternsTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #per-session-child }
+: @@snip [InteractionPatternsTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #per-session-child }
In an actual session child you would likely want to include some form of timeout as well (see @ref:[scheduling messages to self](#scheduling-messages-to-self)).
@@ -390,19 +390,19 @@ function and sent back to the `replyTo`. If replies don't arrive within the `tim
aggregated and sent back to the `replyTo`.
Scala
-: @@snip [AggregatorSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/AggregatorSpec.scala) { #usage }
+: @@snip [AggregatorSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/AggregatorSpec.scala) { #usage }
Java
-: @@snip [AggregatorTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/AggregatorTest.java) { #usage }
+: @@snip [AggregatorTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/AggregatorTest.java) { #usage }
The implementation of the `Aggregator`:
Scala
-: @@snip [Aggregator.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/Aggregator.scala) { #behavior }
+: @@snip [Aggregator.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/Aggregator.scala) { #behavior }
Java
-: @@snip [Aggregator.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/Aggregator.java) { #behavior }
+: @@snip [Aggregator.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/Aggregator.java) { #behavior }
**Useful when:**
@@ -437,10 +437,10 @@ example rather than a built in @apidoc[actor.typed.Behavior] in Akka. It is inte
![tail-chopping.png](./images/tail-chopping.png)
Scala
-: @@snip [TailChopping.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/TailChopping.scala) { #behavior }
+: @@snip [TailChopping.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/TailChopping.scala) { #behavior }
Java
-: @@snip [TailChopping.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/TailChopping.java) { #behavior }
+: @@snip [TailChopping.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/TailChopping.java) { #behavior }
**Useful when:**
@@ -467,10 +467,10 @@ The following example demonstrates how to use timers to schedule messages to an
The `Buncher` actor buffers a burst of incoming messages and delivers them as a batch after a timeout or when the number of batched messages exceeds a maximum size.
Scala
-: @@snip [InteractionPatternsSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #timer }
+: @@snip [InteractionPatternsSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/InteractionPatternsSpec.scala) { #timer }
Java
-: @@snip [InteractionPatternsTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #timer }
+: @@snip [InteractionPatternsTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/InteractionPatternsTest.java) { #timer }
There are a few things worth noting here:
@@ -539,10 +539,10 @@ An alternative is to send the `entityId` in the message and have the reply sent
![sharded-response.png](./images/sharded-response.png)
Scala
-: @@snip [sharded.response](/akka-cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #sharded-response }
+: @@snip [sharded.response](/cluster-sharding-typed/src/test/scala/docs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #sharded-response }
Java
-: @@snip [sharded.response](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingReplyCompileOnlyTest.java) { #sharded-response }
+: @@snip [sharded.response](/cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingReplyCompileOnlyTest.java) { #sharded-response }
A disadvantage is that a message adapter can't be used so the response has to be in the protocol of the actor being responded to. Additionally the @apidoc[typed.*.EntityTypeKey]
could be included in the message if it is not known statically.
@@ -33,10 +33,10 @@ which can slow down the operations of your code if it was performed synchronousl
The @apidoc[typed.*.ActorContext] provides access to an [org.slf4j.Logger](https://www.slf4j.org/api/org/slf4j/Logger.html) for a specific actor.
Scala
-: @@snip [LoggingDocExamples.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/LoggingDocExamples.scala) { #context-log }
+: @@snip [LoggingDocExamples.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/LoggingDocExamples.scala) { #context-log }
Java
-: @@snip [LoggingDocExamples.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/LoggingDocExamples.java) { #context-log }
+: @@snip [LoggingDocExamples.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/LoggingDocExamples.java) { #context-log }
The `Logger` via the `ActorContext` will automatically have a name that corresponds to the @apidoc[Behavior] of the
actor when the log is accessed the first time. The class name when using @apidoc[AbstractBehavior] or the class @scala[or object]
@@ -44,10 +44,10 @@ name where the `Behavior` is defined when using the functional style. You can se
with the @apidoc[setLoggerName](typed.*.ActorContext) {scala="#setLoggerName(name:String):Unit" java="#setLoggerName(java.lang.String)"} of the `ActorContext`.
Scala
-: @@snip [LoggingDocExamples.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/LoggingDocExamples.scala) { #logger-name }
+: @@snip [LoggingDocExamples.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/LoggingDocExamples.scala) { #logger-name }
Java
-: @@snip [LoggingDocExamples.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/LoggingDocExamples.java) { #logger-name }
+: @@snip [LoggingDocExamples.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/LoggingDocExamples.java) { #logger-name }
The convention is to use logger names like fully qualified class names. The parameter to `setLoggerName`
can be a `String` or a `Class`, where the latter is convenience for the class name.
@@ -72,10 +72,10 @@ events will not include the `akkaSource` MDC value. This is the recommended way
of an actor, including logging from @scala[@scaladoc[Future](scala.concurrent.Future)]@java[@javadoc[CompletionStage](java.util.concurrent.CompletionStage)] callbacks.
Scala
-: @@snip [LoggingDocExamples.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/LoggingDocExamples.scala) { #logger-factory }
+: @@snip [LoggingDocExamples.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/LoggingDocExamples.scala) { #logger-factory }
Java
-: @@snip [LoggingDocExamples.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/LoggingDocExamples.java) { #logger-factory }
+: @@snip [LoggingDocExamples.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/LoggingDocExamples.java) { #logger-factory }
### Placeholder arguments
@@ -96,7 +96,7 @@ problem you can use the `trace2`, `debug2`, `info2`, `warn2` or `error2` extensi
by `import org.apache.pekko.actor.typed.scaladsl.LoggerOps` or `import org.apache.pekko.actor.typed.scaladsl._`.
Scala
-: @@snip [LoggingDocExamples.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/LoggingDocExamples.scala) { #info2 }
+: @@snip [LoggingDocExamples.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/LoggingDocExamples.scala) { #info2 }
When using the methods for 3 or more argument placeholders, the compiler will not be able to convert
the method parameters to the vararg array when they contain primitive values such as `Int`,
@@ -105,13 +105,13 @@ To work around this problem you can use the `traceN`, `debugN`, `infoN`, `warnN`
methods that are added by the same `LoggerOps` import.
Scala
-: @@snip [LoggingDocExamples.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/LoggingDocExamples.scala) { #infoN }
+: @@snip [LoggingDocExamples.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/LoggingDocExamples.scala) { #infoN }
If you find it tedious to add the import of `LoggerOps` at many places you can make those additional methods
available with a single implicit conversion placed in a root package object of your code:
Scala
-: @@snip [package.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/myapp/package.scala) { #loggerops-package-implicit }
+: @@snip [package.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/myapp/package.scala) { #loggerops-package-implicit }
@@@
@@ -121,10 +121,10 @@ If you want very detailed logging of messages and signals you can decorate a @ap
with @apidoc[Behaviors.logMessages](Behaviors$) {scala="#logMessages[T](logOptions:org.apache.pekko.actor.typed.LogOptions,behavior:org.apache.pekko.actor.typed.Behavior[T]):org.apache.pekko.actor.typed.Behavior[T]" java="#logMessages(org.apache.pekko.actor.typed.LogOptions,org.apache.pekko.actor.typed.Behavior)"}.
Scala
-: @@snip [LoggingDocExamples.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/LoggingDocExamples.scala) { #logMessages }
+: @@snip [LoggingDocExamples.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/LoggingDocExamples.scala) { #logMessages }
Java
-: @@snip [LoggingDocExamples.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/LoggingDocExamples.java) { #logMessages }
+: @@snip [LoggingDocExamples.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/LoggingDocExamples.java) { #logMessages }
## MDC
@ -137,10 +137,10 @@ list and be put in the MDC attribute `akkaTags`. This can be used to categorize
to allow easier filtering of logs:
Scala
: @@snip [LoggingDocExamples.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/LoggingDocExamples.scala) { #tags }
: @@snip [LoggingDocExamples.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/LoggingDocExamples.scala) { #tags }
Java
: @@snip [LoggingDocExamples.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/LoggingDocExamples.java) { #tags }
: @@snip [LoggingDocExamples.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/LoggingDocExamples.java) { #tags }
In addition to these two built-in MDC attributes you can also decorate a @apidoc[Behavior] with @apidoc[Behaviors.withMdc](Behaviors$) {scala="#withMdc[T](staticMdc:Map[String,String],mdcForMessage:T=%3EMap[String,String])(behavior:org.apache.pekko.actor.typed.Behavior[T])(implicitevidence$4:scala.reflect.ClassTag[T]):org.apache.pekko.actor.typed.Behavior[T]" java="#withMdc(java.lang.Class,java.util.Map,org.apache.pekko.japi.function.Function,org.apache.pekko.actor.typed.Behavior)"} or
use the [org.slf4j.MDC](https://www.slf4j.org/api/org/slf4j/MDC.html) API directly.
@@ -149,10 +149,10 @@ The `Behaviors.withMdc` decorator has two parts. A static `Map` of MDC attribute
and a dynamic `Map` that can be constructed for each message.
Scala
: @@snip [LoggingDocExamples.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/LoggingDocExamples.scala) { #withMdc }
: @@snip [LoggingDocExamples.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/LoggingDocExamples.scala) { #withMdc }
Java
: @@snip [LoggingDocExamples.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/LoggingDocExamples.java) { #withMdc }
: @@snip [LoggingDocExamples.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/LoggingDocExamples.java) { #withMdc }
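Combining the two parts might look like this minimal sketch, assuming Pekko's typed `Behaviors.withMdc` (the `Order` message protocol and MDC keys are hypothetical):

```scala
import org.apache.pekko.actor.typed.Behavior
import org.apache.pekko.actor.typed.scaladsl.Behaviors

sealed trait Message
final case class Order(id: String) extends Message

// staticMdc is attached to every log event from this behavior;
// mdcForMessage is recomputed for each incoming message.
val behavior: Behavior[Message] =
  Behaviors.withMdc[Message](
    staticMdc = Map("system" -> "shop"),
    mdcForMessage = { case Order(id) => Map("orderId" -> id) })(
    Behaviors.receiveMessage { _ =>
      Behaviors.same
    })
```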
If you use the MDC API directly, be aware that MDC is typically implemented with a @javadoc[ThreadLocal](java.lang.ThreadLocal) by the SLF4J backend.
Akka clears the MDC if logging is performed via the @scala[@scaladoc[log](pekko.actor.typed.scaladsl.ActorContext#log:org.slf4j.Logger)]@java[@javadoc[getLog()](pekko.actor.typed.javadsl.ActorContext#getLog())] of the `ActorContext` and it is cleared
@@ -197,7 +197,7 @@ load is high.
A starting point for configuration of `logback.xml` for production:
@@snip [logback.xml](/akka-actor-typed-tests/src/test/resources/logback-doc-prod.xml)
@@snip [logback.xml](/actor-typed-tests/src/test/resources/logback-doc-prod.xml)
Note that the [AsyncAppender](https://logback.qos.ch/apidocs/ch/qos/logback/classic/AsyncAppender.html) may drop log events if the queue becomes full, which may happen if the
logging backend can't keep up with the throughput of produced log events. Dropping log events is necessary
@@ -216,7 +216,7 @@ The ELK-stack is commonly used as logging infrastructure for production:
For development you might want to log to standard out, but also send all debug-level logging to a file, like
in this example:
@@snip [logback.xml](/akka-actor-typed-tests/src/test/resources/logback-doc-dev.xml)
@@snip [logback.xml](/actor-typed-tests/src/test/resources/logback-doc-dev.xml)
Place the `logback.xml` file in `src/main/resources/logback.xml`. For tests you can define different
logging configuration in `src/test/resources/logback-test.xml`.
@@ -36,14 +36,14 @@ For advanced use cases it is also possible to defer mailbox selection to config
To select a specific mailbox for an actor use @apidoc[MailboxSelector](MailboxSelector$) to create a @apidoc[Props](typed.Props) instance for spawning your actor:
Scala
: @@snip [MailboxDocSpec.scala](/akka-actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/MailboxDocSpec.scala) { #select-mailbox }
: @@snip [MailboxDocSpec.scala](/actor-typed-tests/src/test/scala/docs/org/apache/pekko/typed/MailboxDocSpec.scala) { #select-mailbox }
Java
: @@snip [MailboxDocTest.java](/akka-actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/MailboxDocTest.java) { #select-mailbox }
: @@snip [MailboxDocTest.java](/actor-typed-tests/src/test/java/jdocs/org/apache/pekko/typed/MailboxDocTest.java) { #select-mailbox }
@apidoc[fromConfig](MailboxSelector$) {scala="#fromConfig(path:String):org.apache.pekko.actor.typed.MailboxSelector" java="#fromConfig(java.lang.String)"} takes an absolute config path to a block defining the mailbox in the config file:
@@snip [MailboxDocSpec.scala](/akka-actor-typed-tests/src/test/resources/mailbox-config-sample.conf) { }
@@snip [MailboxDocSpec.scala](/actor-typed-tests/src/test/resources/mailbox-config-sample.conf) { }
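Both selection styles can be sketched as follows, assuming Pekko's typed `MailboxSelector`; the config path and child names are hypothetical placeholders, not taken from the sample config:

```scala
import org.apache.pekko.actor.typed.{ Behavior, MailboxSelector }
import org.apache.pekko.actor.typed.scaladsl.Behaviors

val parent: Behavior[Unit] = Behaviors.setup { context =>
  val worker: Behavior[String] = Behaviors.receiveMessage(_ => Behaviors.same)

  // Pick a bounded mailbox with capacity 100 directly in code.
  context.spawn(worker, "bounded", MailboxSelector.bounded(100))

  // Or resolve the mailbox from an absolute config path (hypothetical).
  context.spawn(worker, "from-config",
    MailboxSelector.fromConfig("my-app.my-special-mailbox"))

  Behaviors.empty
}
```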
### Default Mailbox
@@ -11,10 +11,10 @@ To demonstrate this consider an example of a shopping application. A customer ca
* Paid
Scala
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #state }
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #state }
Java
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #state }
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #state }
And the commands that can result in state changes:
@@ -29,10 +29,10 @@ And the following read only commands:
* Get current cart
Scala
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #commands }
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #commands }
Java
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #commands }
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #commands }
The command handler of the EventSourcedBehavior is used to convert the commands that change the state of the FSM
to events, and reply to commands.
@@ -40,18 +40,18 @@ to events, and reply to commands.
@scala[The command handler:]@java[The `forStateType` command handler can be used:]
Scala
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #command-handler }
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #command-handler }
Java
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #command-handler }
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #command-handler }
The event handler is used to change state once the events have been persisted. When the EventSourcedBehavior is restarted
the events are replayed to get back into the correct state.
Scala
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/akka-persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #event-handler }
: @@snip [PersistentFsmToTypedMigrationSpec.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationSpec.scala) { #event-handler }
Java
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/akka-persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #event-handler }
: @@snip [PersistentFsmToTypedMigrationCompileOnlyTest.java](/persistence-typed/src/test/java/jdocs/org/apache/pekko/persistence/typed/PersistentFsmToTypedMigrationCompileOnlyTest.java) { #event-handler }
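How the command handler and event handler fit together can be sketched as a bare `EventSourcedBehavior` skeleton, assuming Pekko's persistence-typed API; the `Command`, `Event`, and `State` types here are hypothetical stand-ins, not the migration example's actual protocol:

```scala
import org.apache.pekko.actor.typed.Behavior
import org.apache.pekko.persistence.typed.PersistenceId
import org.apache.pekko.persistence.typed.scaladsl.{ Effect, EventSourcedBehavior }

sealed trait Command
sealed trait Event
final case class State(items: List[String])

// The command handler turns commands into persisted events (Effect);
// the event handler applies events to the state, both on persist and
// when events are replayed after a restart.
def shoppingCart(cartId: String): Behavior[Command] =
  EventSourcedBehavior[Command, Event, State](
    persistenceId = PersistenceId.ofUniqueId(cartId),
    emptyState = State(Nil),
    commandHandler = (state, command) => Effect.none,
    eventHandler = (state, event) => state)
```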