Add dependency snippets to relevant doc sections (#24418)

* Add dependency snippets to relevant doc sections

* Add 'Dependency' headings

Tried to consistently add them to each section introducing a module, after
the introduction but before the first code sample.

* Make Dependency sections more consistent
Arnout Engelen 2018-01-31 17:19:19 +01:00 committed by GitHub
parent 36372bb2a5
commit e33db45139
13 changed files with 229 additions and 234 deletions


@@ -31,6 +31,16 @@ See @ref:[Downing](cluster-usage.md#automatic-vs-manual-downing).
@@@
+## Dependency
+To use Akka Cluster Sharding, add the module to your project:
+@@dependency[sbt,Maven,Gradle] {
+  group="com.typesafe.akka"
+  artifact="akka-cluster-sharding_$scala.binary_version$"
+  version="$akka.version$"
+}
## An Example
This is how an entity actor may look:
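For readers unfamiliar with the `@@dependency` directive added above: Paradox expands it into per-build-tool snippets. A sketch of the sbt form it would render to (the version shown is a placeholder, not something pinned by this commit):

```scala
// build.sbt -- hypothetical rendering of the @@dependency directive above.
// The %% operator appends the Scala binary version to the artifact name,
// matching "akka-cluster-sharding_$scala.binary_version$".
libraryDependencies +=
  "com.typesafe.akka" %% "akka-cluster-sharding" % "2.5.9" // placeholder version
```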


@@ -2,34 +2,15 @@
For an introduction to the Akka Cluster concepts please see @ref:[Cluster Specification](common/cluster.md).
-## Preparing Your Project for Clustering
-The Akka cluster is a separate jar file. Make sure that you have the following dependency in your project:
-sbt
-:   @@@vars
-    ```
-    "com.typesafe.akka" %% "akka-cluster" % "$akka.version$"
-    ```
-    @@@
-Gradle
-:   @@@vars
-    ```
-    compile group: 'com.typesafe.akka', name: 'akka-cluster_$scala.binary_version$', version: '$akka.version$'
-    ```
-    @@@
-Maven
-:   @@@vars
-    ```
-    <dependency>
-      <groupId>com.typesafe.akka</groupId>
-      <artifactId>akka-cluster_$scala.binary_version$</artifactId>
-      <version>$akka.version$</version>
-    </dependency>
-    ```
-    @@@
+## Dependency
+To use Akka Cluster, add the module to your project:
+@@dependency[sbt,Maven,Gradle] {
+  group="com.typesafe.akka"
+  artifact="akka-cluster_$scala.binary_version$"
+  version="$akka.version$"
+}
## A Simple Cluster Example
@@ -99,10 +80,10 @@ The actor registers itself as subscriber of certain cluster events. It receives
of the cluster when the subscription starts and then it receives events for changes that happen in the cluster.
The easiest way to run this example yourself is to download the ready to run
@scala[@extref[Akka Cluster Sample with Scala](ecs:akka-samples-cluster-scala)]
@java[@extref[Akka Cluster Sample with Java](ecs:akka-samples-cluster-java)]
together with the tutorial. It contains instructions on how to run the `SimpleClusterApp`.
The source code of this sample can be found in the
@scala[@extref[Akka Samples Repository](samples:akka-sample-cluster-scala)]@java[@extref[Akka Samples Repository](samples:akka-sample-cluster-java)].
## Joining to Seed Nodes
@@ -152,7 +133,7 @@ seed nodes in the existing cluster. Note that if you stop all seed nodes at the
and restart them with the same `seed-nodes` configuration they will join themselves and
form a new cluster instead of joining remaining nodes of the existing cluster. That is
likely not desired and should be avoided by listing several nodes as seed nodes for redundancy
and not stopping all of them at the same time.
You may also use @scala[`Cluster(system).joinSeedNodes`]@java[`Cluster.get(system).joinSeedNodes`] to join programmatically,
which is attractive when dynamically discovering other nodes at startup by using some external tool or API.
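For concreteness, a typical `seed-nodes` configuration with the redundancy recommended above (the actor system name, hosts, and ports are placeholders) might look like:

```
akka.cluster.seed-nodes = [
  "akka.tcp://ClusterSystem@host1:2552",
  "akka.tcp://ClusterSystem@host2:2552"
]
```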
@@ -402,7 +383,7 @@ The easiest way to run **Worker Dial-in Example** example yourself is to downloa
@scala[@extref[Akka Cluster Sample with Scala](ecs:akka-samples-cluster-scala)]
@java[@extref[Akka Cluster Sample with Java](ecs:akka-samples-cluster-java)]
together with the tutorial. It contains instructions on how to run the **Worker Dial-in Example** sample.
The source code of this sample can be found in the
@scala[@extref[Akka Samples Repository](samples:akka-sample-cluster-scala)]@java[@extref[Akka Samples Repository](samples:akka-sample-cluster-java)].
## Node Roles
@@ -723,7 +704,7 @@ The easiest way to run **Router Example with Group of Routees** example yourself
@scala[@extref[Akka Cluster Sample with Scala](ecs:akka-samples-cluster-scala)]
@java[@extref[Akka Cluster Sample with Java](ecs:akka-samples-cluster-java)]
together with the tutorial. It contains instructions on how to run the **Router Example with Group of Routees** sample.
The source code of this sample can be found in the
@scala[@extref[Akka Samples Repository](samples:akka-sample-cluster-scala)]@java[@extref[Akka Samples Repository](samples:akka-sample-cluster-java)].
### Router with Pool of Remote Deployed Routees
@@ -770,7 +751,7 @@ and deploys workers. To keep track of a single master we use the @ref:[Cluster S
in the cluster-tools module. The `ClusterSingletonManager` is started on each node:
Scala
:   @@@vars
    ```
    system.actorOf(
      ClusterSingletonManager.props(
@@ -797,7 +778,7 @@ Scala
      name = "statsServiceProxy")
    ```
    @@@
Java
:   @@snip [StatsSampleOneMasterMain.java]($code$/java/jdocs/cluster/StatsSampleOneMasterMain.java) { #singleton-proxy }
@@ -824,7 +805,7 @@ The easiest way to run **Router Example with Pool of Remote Deployed Routees** e
@scala[@extref[Akka Cluster Sample with Scala](ecs:akka-samples-cluster-scala)]
@java[@extref[Akka Cluster Sample with Java](ecs:akka-samples-cluster-java)]
together with the tutorial. It contains instructions on how to run the **Router Example with Pool of Remote Deployed Routees** sample.
The source code of this sample can be found in the
@scala[@extref[Akka Samples Repository](samples:akka-sample-cluster-scala)]@java[@extref[Akka Samples Repository](samples:akka-sample-cluster-java)].
## Cluster Metrics
## Cluster Metrics ## Cluster Metrics
@@ -886,16 +867,16 @@ the actor system for a specific role. This can also be used to grab the `akka.ac
@@snip [StatsSampleSpec.scala]($akka$/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsSampleSpec.scala) { #addresses }
@@@
@@@ div { .group-java }
## How to Test
Currently testing with the `sbt-multi-jvm` plugin is only documented for Scala.
Go to the corresponding Scala version of this page for details.
@@@
## Management


@@ -19,6 +19,16 @@ It is eventually consistent and geared toward providing high read and write avai
(partition tolerance), with low latency. Note that in an eventually consistent system a read may return an
out-of-date value.
+## Dependency
+To use Akka Distributed Data, add the module to your project:
+@@dependency[sbt,Maven,Gradle] {
+  group="com.typesafe.akka"
+  artifact="akka-distributed-data_$scala.binary_version$"
+  version="$akka.version$"
+}
## Using the Replicator
The `akka.cluster.ddata.Replicator` actor provides the API for interacting with the data.
@@ -62,7 +72,7 @@ function that only uses the data parameter and stable fields from enclosing scop
for example not access the sender (@scala[`sender()`]@java[`getSender()`]) reference of an enclosing actor.
`Update` is intended to only be sent from an actor running in the same local `ActorSystem`
as the `Replicator`, because the `modify` function is typically not serializable.
@@ -80,9 +90,9 @@ at least **N/2 + 1** replicas, where N is the number of nodes in the cluster
* `WriteAll` the value will immediately be written to all nodes in the cluster
  (or all nodes in the cluster role group)
When you specify to write to `n` out of `x` nodes, the update will first replicate to `n` nodes.
If there are not enough Acks after 1/5th of the timeout, the update will be replicated to `n` other
nodes. If there are fewer than `n` nodes left, all of the remaining nodes are used. Reachable nodes
are preferred over unreachable nodes.
Note that `WriteMajority` has a `minCap` parameter that is useful to specify to achieve better safety for small clusters.
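As a sketch of the arithmetic described above (an illustration of the documented behavior, not Akka's actual implementation), a majority quorum with a `minCap` floor could be computed as:

```scala
// Illustration only: a majority write quorum derived from cluster size n,
// with a hypothetical minCap floor raising the quorum for small clusters.
def majorityWithMinCap(minCap: Int, n: Int): Int =
  if (n <= minCap) n                 // never require more nodes than exist
  else math.max(minCap, n / 2 + 1)   // plain majority N/2 + 1, floored at minCap

println(majorityWithMinCap(0, 5)) // 3: plain majority of 5 nodes
println(majorityWithMinCap(5, 8)) // 5: minCap raises the quorum above 8/2 + 1
```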


@@ -13,64 +13,26 @@ Akka persistence is inspired by and the official replacement of the [eventsource
concepts and architecture of [eventsourced](https://github.com/eligosource/eventsourced) but significantly differs on API and implementation level. See also
@ref:[migration-eventsourced-2.3](project/migration-guide-eventsourced-2.3.x.md)
-## Dependencies
-Akka persistence is a separate jar file. Make sure that you have the following dependency in your project:
-sbt
-:   @@@vars
-    ```
-    "com.typesafe.akka" %% "akka-persistence" % "$akka.version$"
-    ```
-    @@@
-Gradle
-:   @@@vars
-    ```
-    compile group: 'com.typesafe.akka', name: 'akka-persistence_$scala.binary_version$', version: '$akka.version$'
-    ```
-    @@@
-Maven
-:   @@@vars
-    ```
-    <dependency>
-      <groupId>com.typesafe.akka</groupId>
-      <artifactId>akka-persistence_$scala.binary_version$</artifactId>
-      <version>$akka.version$</version>
-    </dependency>
-    ```
-    @@@
-The Akka persistence extension comes with few built-in persistence plugins, including
+## Dependency
+To use Akka Persistence, add the module to your project:
+@@dependency[sbt,Maven,Gradle] {
+  group="com.typesafe.akka"
+  artifact="akka-persistence_$scala.binary_version$"
+  version="$akka.version$"
+}
+The Akka Persistence extension comes with a few built-in persistence plugins, including
in-memory heap based journal, local file-system based snapshot-store and LevelDB based journal.
-LevelDB based plugins will require the following additional dependency declaration:
-sbt
-:   @@@vars
-    ```
-    "org.fusesource.leveldbjni" % "leveldbjni-all" % "1.8"
-    ```
-    @@@
-Gradle
-:   @@@vars
-    ```
-    compile group: 'org.fusesource.leveldbjni', name: 'leveldbjni-all', version: '1.8'
-    ```
-    @@@
-Maven
-:   @@@vars
-    ```
-    <dependency>
-      <groupId>org.fusesource.leveldbjni</groupId>
-      <artifactId>leveldbjni-all</artifactId>
-      <version>1.8</version>
-    </dependency>
-    ```
-    @@@
+LevelDB-based plugins will require the following additional dependency:
+@@dependency[sbt,Maven,Gradle] {
+  group="org.fusesource.leveldbjni"
+  artifact="leveldbjni-all"
+  version="1.8"
+}
## Architecture
@@ -103,8 +65,8 @@ needs to be recovered, only the persisted events are replayed of which we know t
In other words, events cannot fail when being replayed to a persistent actor, in contrast to commands. Event sourced
actors may of course also process commands that do not change application state such as query commands for example.
Another excellent article about "thinking in Events" is [Events As First-Class Citizens](https://hackernoon.com/events-as-first-class-citizens-8633e8479493) by Randy Shoup. It is a short and recommended read if you're starting
to develop event-based applications.
Akka persistence supports event sourcing with the @scala[`PersistentActor` trait]@java[`AbstractPersistentActor` abstract class]. An actor that extends this @scala[trait]@java[class] uses the
`persist` method to persist and handle events. The behavior of @scala[a `PersistentActor`]@java[an `AbstractPersistentActor`]
@@ -188,8 +150,8 @@ By default, a persistent actor is automatically recovered on start and on restar
New messages sent to a persistent actor during recovery do not interfere with replayed messages.
They are stashed and received by a persistent actor after recovery phase completes.
The number of concurrent recoveries that can be in progress at the same time is limited
to not overload the system and the backend data store. When exceeding the limit the actors will wait
until other recoveries have been completed. This is configured by:
```
@@ -316,7 +278,7 @@
Persistence.get(getContext().getSystem()).defaultInternalStashOverflowStrategy();
```
@@@
@@@ note
The bounded mailbox should be avoided in the persistent actor, by which the messages come from storage backends may The bounded mailbox should be avoided in the persistent actor, by which the messages come from storage backends may
@@ -649,7 +611,7 @@ akka.persistence.journal.leveldb.replay-filter {
<a id="snapshots"></a>
## Snapshots
As you model your domain using actors, you may notice that some actors may be prone to accumulating extremely long event logs and experiencing long recovery times. Sometimes, the right approach may be to split out into a set of shorter lived actors. However, when this is not an option, you can use snapshots to reduce recovery times drastically.
Persistent actors can save snapshots of internal state by calling the `saveSnapshot` method. If saving of a snapshot
succeeds, the persistent actor receives a `SaveSnapshotSuccess` message, otherwise a `SaveSnapshotFailure` message
@@ -1057,7 +1019,7 @@ akka {
      plugin = "akka.persistence.snapshot-store.local"
      auto-start-snapshot-stores = ["akka.persistence.snapshot-store.local"]
    }
  }
}
@@ -1238,7 +1200,7 @@ sbt
Gradle
:   @@@vars
    ```
    compile group: 'org.fusesource.leveldbjni', name: 'leveldbjni-all', version: '1.8'
    ```
    @@@
@@ -1252,7 +1214,7 @@ Maven
    </dependency>
    ```
    @@@
The default location of LevelDB files is a directory named `journal` in the current working
directory. This location can be changed by configuration where the specified path can be relative or absolute:
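For example, the setting controlling that location (the path shown is an arbitrary placeholder) is:

```
akka.persistence.journal.leveldb.dir = "target/example/journal"
```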
@@ -1260,11 +1222,11 @@ directory. This location can be changed by configuration where the specified pat
With this plugin, each actor system runs its own private LevelDB instance.
One peculiarity of LevelDB is that the deletion operation does not remove messages from the journal, but adds
a "tombstone" for each deleted message instead. In the case of heavy journal usage, especially one including frequent
deletes, this may be an issue as users may find themselves dealing with continuously increasing journal sizes. To
this end, LevelDB offers a special journal compaction function that is exposed via the following configuration:
@@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #compaction-intervals-config }
<a id="shared-leveldb-journal"></a>
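The `#compaction-intervals-config` snippet referenced above is not inlined in this diff; as a sketch, the configuration block has this shape (the persistence id and interval values are placeholders):

```
akka.persistence.journal.leveldb.compaction-intervals {
  sample-persistence-id = 100
  # the wildcard entry applies to all other persistence ids
  "*" = 250
}
```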


@@ -1,7 +1,16 @@
# Streams Quickstart Guide
-Create a project and add the akka-streams dependency to the build tool of your
-choice.
+## Dependency
+To use Akka Streams, add the module to your project:
+@@dependency[sbt,Maven,Gradle] {
+  group="com.typesafe.akka"
+  artifact="akka-stream_$scala.binary_version$"
+  version="$akka.version$"
+}
+## First steps
A stream usually begins at a source, so this is also how we start an Akka
Stream. Before we create one, we import the full complement of streaming tools:


@@ -13,7 +13,7 @@ flows and sinks. This makes them easily testable by wiring them up to other
sources or sinks, or some test harnesses that `akka-testkit` or
`akka-stream-testkit` provide.
-## Built in sources, sinks and combinators
+## Built-in sources, sinks and combinators
Testing a custom sink can be as simple as attaching a source that emits
elements from a predefined collection, running a constructed test flow and
@@ -93,11 +93,18 @@ provides tools specifically for writing stream tests. This module comes with
two main components that are `TestSource` and `TestSink` which
provide sources and sinks that materialize to probes that allow fluent API.
-@@@ note
-Be sure to add the module `akka-stream-testkit` to your dependencies.
-@@@
+### Dependency
+To use Akka Stream TestKit, add the module to your project:
+@@dependency[sbt,Maven,Gradle] {
+  group="com.typesafe.akka"
+  artifact="akka-stream-testkit_$scala.binary_version$"
+  version="$akka.version$"
+  scope="test"
+}
+### Using the TestKit
A sink returned by `TestSink.probe` allows manual control over demand and
assertions over elements coming downstream.
@@ -149,4 +156,4 @@ Never use this setting in production or benchmarks. This is a testing tool to pr
during tests, but it reduces the throughput of streams. A warning message will be logged if you have this setting
enabled.
@@@


@@ -7,9 +7,9 @@ perform tests.
Akka comes with a dedicated module `akka-testkit` for supporting tests.
-## Dependencies
-Be sure to add the module `akka-testkit` to your dependencies.
+## Dependency
+To use Akka Testkit, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"


@@ -1,4 +1,4 @@
# Actors
@@@ warning
@@ -9,15 +9,11 @@ This module is currently marked as @ref:[may change](../common/may-change.md) in
@@@
-### Migrating to 2.5.9
-* `EffectfulActorContext` has been renamed to `BehaviourTestKit`
-* `Inbox` has been renamed to `TestInbox` to allign with `TestProbe`
-* Separated into modules e.g. `akka-actor-typed` `akka-persistence-typed` along with matching package names
+## Dependency
To use Akka Typed add the following dependency:
-@@dependency [sbt,Maven,Gradle] {
+@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-actor-typed_2.12
version=$akka.version$
@@ -27,7 +23,7 @@ To use Akka Typed add the following dependency:
As discussed in @ref:[Actor Systems](../general/actor-systems.md) Actors are about
sending messages between independent units of computation, but what does that
look like?
In all of the following these imports are assumed:
@@ -258,7 +254,7 @@ or the `onMessage` function for user messages.
This particular `main` Actor is created using `Behaviors.deferred`, which is like a factory for a behavior.
Creation of the behavior instance is deferred until the actor is started, as opposed to `Behaviors.immutable`
that creates the behavior instance immediately before the actor is running. The factory function in
`deferred` is passed the `ActorContext` as parameter and that can for example be used for spawning child actors.
This `main` Actor creates the chat room and the gabbler and the session between them is initiated, and when the
gabbler is finished we will receive the `Terminated` event due to having
@@ -358,3 +354,10 @@ address. While we cannot statically express the “current” state of an Actor,
can express the current state of a protocol between two Actors, since that is
just given by the last message type that was received or sent.
+## Migrating
+### Migrating to 2.5.9
+* `EffectfulActorContext` has been renamed to `BehaviourTestKit`
+* `Inbox` has been renamed to `TestInbox` to align with `TestProbe`
+* Separated into modules, e.g. `akka-actor-typed` and `akka-persistence-typed`, along with matching package names


@@ -1,25 +1,27 @@
# Cluster Sharding
-For an introduction to Sharding concepts see @ref:[Cluster Sharding](../cluster-sharding.md). This documentation shows how to use the typed
-Cluster Sharding API.
@@@ warning
This module is currently marked as @ref:[may change](../common/may-change.md) in the sense
of being the subject of active research. This means that API or semantics can
change without warning or deprecation period and it is not recommended to use
this module in production just yet—you have been warned.
@@@
-To use cluster sharding add the following dependency:
-@@dependency [sbt,Maven,Gradle] {
+## Dependency
+To use Akka Cluster Sharding, add the module to your project:
+@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-cluster-sharding-typed_2.12
version=$akka.version$
}
+For an introduction to Sharding concepts see @ref:[Cluster Sharding](../cluster-sharding.md). This documentation shows how to use the typed
+Cluster Sharding API.
## Basic example
Sharding is accessed via the `ClusterSharding` extension
@@ -38,7 +40,7 @@ Scala
Java
:   @@snip [ShardingCompileOnlyTest.java]($akka$/akka-cluster-sharding-typed/src/test/java/jdoc/akka/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #counter }
Each Entity type has a key that is then used to retrieve an EntityRef for a given entity identifier.
Scala
:   @@snip [ShardingCompileOnlySpec.scala]($akka$/akka-cluster-sharding-typed/src/test/scala/doc/akka/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #spawn }
@@ -46,7 +48,7 @@ Scala
Java
:   @@snip [ShardingCompileOnlyTest.java]($akka$/akka-cluster-sharding-typed/src/test/java/jdoc/akka/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #spawn }
Messages to a specific entity are then sent via an EntityRef.
It is also possible to wrap messages in a `ShardingEnvelope` or define extractor functions and send messages directly to the shard region.
Scala
@@ -57,7 +59,7 @@ Java
## Persistence example
When using sharding, entities can be moved to different nodes in the cluster. Persistence can be used to recover the state of
an actor after it has moved. Currently Akka typed only has a Scala API for persistence; you can track the progress of the
Java API [here](https://github.com/akka/akka/issues/24193).
@ -72,5 +74,5 @@ To create the entity:
Scala Scala
: @@snip [ShardingCompileOnlySpec.scala]($akka$/akka-cluster-sharding-typed/src/test/scala/doc/akka/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #persistence } : @@snip [ShardingCompileOnlySpec.scala]($akka$/akka-cluster-sharding-typed/src/test/scala/doc/akka/cluster/sharding/typed/ShardingCompileOnlySpec.scala) { #persistence }
Sending messages to entities is the same as the example above. The only difference is that now when an entity is moved the state will be restored. Sending messages to entities is the same as the example above. The only difference is that now when an entity is moved the state will be restored.
See @ref:[persistence](persistence.md) for more details. See @ref:[persistence](persistence.md) for more details.

View file

@ -1,22 +1,5 @@
# Cluster Singleton # Cluster Singleton
@@@ warning
This module is currently marked as @ref:[may change](../common/may-change.md) in the sense
of being the subject of active research. This means that API or semantics can
change without warning or deprecation period and it is not recommended to use
this module in production just yet—you have been warned.
@@@
To use the cluster singletons add the following dependency:
@@dependency [sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-cluster-typed_2.12
version=$akka.version$
}
For some use cases it is convenient and sometimes also mandatory to ensure that For some use cases it is convenient and sometimes also mandatory to ensure that
you have exactly one actor of a certain type running somewhere in the cluster. you have exactly one actor of a certain type running somewhere in the cluster.
@ -33,7 +16,26 @@ such as single-point of bottleneck. Single-point of failure is also a relevant c
but for some cases this feature takes care of that by making sure that another singleton but for some cases this feature takes care of that by making sure that another singleton
instance will eventually be started. instance will eventually be started.
# Example @@@ warning
This module is currently marked as @ref:[may change](../common/may-change.md) in the sense
of being the subject of active research. This means that API or semantics can
change without warning or deprecation period and it is not recommended to use
this module in production just yet—you have been warned.
@@@
## Dependency
To use Akka Cluster Singleton, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-cluster-typed_$scala.binary_version$
version=$akka.version$
}
## Example
Any `Behavior` can be run as a singleton. E.g. a basic counter: Any `Behavior` can be run as a singleton. E.g. a basic counter:

View file

@ -1,24 +1,30 @@
# Cluster # Cluster
For an introduction to Akka Cluster concepts see @ref:[Cluster Specification](../common/cluster.md). This documentation shows how to use the typed
Cluster API.
@@@ warning @@@ warning
This module is currently marked as @ref:[may change](../common/may-change.md) in the sense This module is currently marked as @ref:[may change](../common/may-change.md) in the sense
of being the subject of active research. This means that API or semantics can of being the subject of active research. This means that API or semantics can
change without warning or deprecation period and it is not recommended to use change without warning or deprecation period and it is not recommended to use
this module in production just yet—you have been warned. this module in production just yet—you have been warned.
@@@ @@@
To use the testkit add the following dependency: ## Dependency
@@dependency [sbt,Maven,Gradle] { To use Akka Cluster Typed, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka group=com.typesafe.akka
artifact=akka-cluster-typed_$scala.binary_version$ artifact=akka-cluster-typed_$scala.binary_version$
version=$akka.version$ version=$akka.version$
} }
For an introduction to Akka Cluster concepts see @ref:[Cluster Specification](../common/cluster.md). This documentation shows how to use the typed ## Examples
Cluster API. All of the examples below assume the following imports:
All of the examples below assume the following imports:
Scala Scala
: @@snip [BasicClusterExampleSpec.scala]($akka$/akka-cluster-typed/src/test/scala/docs/akka/cluster/typed/BasicClusterExampleSpec.scala) { #cluster-imports } : @@snip [BasicClusterExampleSpec.scala]($akka$/akka-cluster-typed/src/test/scala/docs/akka/cluster/typed/BasicClusterExampleSpec.scala) { #cluster-imports }
@ -26,7 +32,7 @@ Scala
Java Java
: @@snip [BasicClusterExampleTest.java]($akka$/akka-cluster-typed/src/test/java/jdocs/akka/cluster/typed/BasicClusterExampleTest.java) { #cluster-imports } : @@snip [BasicClusterExampleTest.java]($akka$/akka-cluster-typed/src/test/java/jdocs/akka/cluster/typed/BasicClusterExampleTest.java) { #cluster-imports }
And the minimum configuration required is to set a host/port for remoting and the `cluster` actor provider: And the minimum configuration required is to set a host/port for remoting and the `cluster` actor provider:
Scala Scala
: @@snip [BasicClusterExampleTest.java]($akka$/akka-cluster-typed/src/test/java/jdocs/akka/cluster/typed/BasicClusterExampleTest.java) { #cluster-imports } : @@snip [BasicClusterExampleTest.java]($akka$/akka-cluster-typed/src/test/java/jdocs/akka/cluster/typed/BasicClusterExampleTest.java) { #cluster-imports }
@ -34,11 +40,11 @@ Scala
Java Java
: @@snip [BasicClusterExampleTest.java]($akka$/akka-cluster-typed/src/test/java/jdocs/akka/cluster/typed/BasicClusterExampleTest.java) { #cluster-imports } : @@snip [BasicClusterExampleTest.java]($akka$/akka-cluster-typed/src/test/java/jdocs/akka/cluster/typed/BasicClusterExampleTest.java) { #cluster-imports }
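As a sketch, such a minimal configuration could look like the following. The system name, addresses, and port are placeholders, and the exact remoting keys depend on the Akka version and the transport you choose:

```hocon
akka {
  actor.provider = "cluster"
  remote.artery {
    enabled = on
    canonical.hostname = "127.0.0.1"
    canonical.port = 2551
  }
  cluster.seed-nodes = [
    "akka://ClusterSystem@127.0.0.1:2551"
  ]
}
```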
## Cluster API extension ## Cluster API extension
The typed Cluster extension gives access to management tasks (Joining, Leaving, Downing, …) and subscription of The typed Cluster extension gives access to management tasks (Joining, Leaving, Downing, …) and subscription of
cluster membership events (MemberUp, MemberRemoved, UnreachableMember, etc). Those are exposed as two different actor cluster membership events (MemberUp, MemberRemoved, UnreachableMember, etc). Those are exposed as two different actor
references, i.e. it's a message-based API. references, i.e. it's a message-based API.
The references are on the `Cluster` extension: The references are on the `Cluster` extension:
@ -55,7 +61,7 @@ The Cluster extensions gives you access to:
* state: The current `CurrentClusterState` * state: The current `CurrentClusterState`
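The message-based shape of the subscription side can be sketched without Akka (illustrative names, not the Akka classes): subscribers register a callback, standing in for an `ActorRef`, and are notified of membership events such as `MemberUp`:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative sketch only -- not the Akka classes. Subscribers register a
// callback (standing in for an ActorRef) and are notified of membership events.
class MembershipEvents {
    interface Event {}
    record MemberUp(String address) implements Event {}
    record MemberRemoved(String address) implements Event {}

    private final List<Consumer<Event>> subscribers = new ArrayList<>();

    void subscribe(Consumer<Event> subscriber) { subscribers.add(subscriber); }

    void publish(Event event) { subscribers.forEach(s -> s.accept(event)); }
}
```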
### Cluster Management ### Cluster Management
If not using configuration to specify seeds, joining the cluster can be done programmatically via the `manager`. If not using configuration to specify seeds, joining the cluster can be done programmatically via the `manager`.
@ -97,10 +103,10 @@ Java
## Serialization ## Serialization
See [serialization](https://doc.akka.io/docs/akka/current/scala/serialization.html) for how messages are sent between See [serialization](https://doc.akka.io/docs/akka/current/scala/serialization.html) for how messages are sent between
ActorSystems. Actor references are typically included in the messages, ActorSystems. Actor references are typically included in the messages,
since there is no `sender`. To serialize actor references to/from string representation you will use the `ActorRefResolver`. since there is no `sender`. To serialize actor references to/from string representation you will use the `ActorRefResolver`.
For example here's how a serializer could look for the `Ping` and `Pong` messages above: For example here's how a serializer could look for the `Ping` and `Pong` messages above:
Scala Scala
: @@snip [PingSerializer.scala]($akka$/akka-cluster-typed/src/test/scala/docs/akka/cluster/typed/PingSerializer.scala) { #serializer } : @@snip [PingSerializer.scala]($akka$/akka-cluster-typed/src/test/scala/docs/akka/cluster/typed/PingSerializer.scala) { #serializer }

View file

@ -1,4 +1,8 @@
# Persistence # Persistence
Akka Persistence is a library for building event sourced actors. For background about how it works
see the @ref:[untyped Akka Persistence section](../persistence.md). This documentation shows how the typed API for persistence
works and assumes you know what is meant by `Command`, `Event` and `State`.
@@@ warning @@@ warning
@ -6,36 +10,34 @@ This module is currently marked as @ref:[may change](../common/may-change.md) in
of being the subject of active research. This means that API or semantics can of being the subject of active research. This means that API or semantics can
change without warning or deprecation period and it is not recommended to use change without warning or deprecation period and it is not recommended to use
this module in production just yet—you have been warned. this module in production just yet—you have been warned.
@@@ @@@
@@@ warning @@@ warning
This module only has a Scala DSL. See [#24193](https://github.com/akka/akka/issues/24193) This module only has a Scala DSL. See [#24193](https://github.com/akka/akka/issues/24193)
to track progress and to contribute to the Java DSL. to track progress and to contribute to the Java DSL.
@@@ @@@
To use typed persistence add the following dependency: ## Dependency
@@dependency [sbt,Maven,Gradle] { To use Akka Persistence Typed, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka group=com.typesafe.akka
artifact=akka-persistence-typed_$scala.binary_version$ artifact=akka-persistence-typed_$scala.binary_version$
version=$akka.version$ version=$akka.version$
} }
## Example
Akka Persistence is a library for building event sourced actors. For background about how it works
see the @ref:[untyped Akka Persistence section](../persistence.md). This documentation shows how the typed API for persistence
works and assumes you know what is meant by `Command`, `Event` and `State`.
Let's start with a simple example. The minimum required for a `PersistentBehavior` is: Let's start with a simple example. The minimum required for a `PersistentBehavior` is:
Scala Scala
: @@snip [BasicPersistentBehaviorsSpec.scala]($akka$/akka-persistence-typed/src/test/scala/docs/akka/persistence/typed/BasicPersistentBehaviorsSpec.scala) { #structure } : @@snip [BasicPersistentBehaviorsSpec.scala]($akka$/akka-persistence-typed/src/test/scala/docs/akka/persistence/typed/BasicPersistentBehaviorsSpec.scala) { #structure }
The first important thing to notice is that the `Behavior` of a persistent actor is typed to the type of the `Command` The first important thing to notice is that the `Behavior` of a persistent actor is typed to the type of the `Command`
because this is the type of message a persistent actor should receive. In Akka Typed this is now enforced by the type system. because this is the type of message a persistent actor should receive. In Akka Typed this is now enforced by the type system.
The event and state are only used internally. The event and state are only used internally.
@ -59,19 +61,19 @@ A command handler returns an `Effect` directive that defines what event or event
* `Effect.none` no events are to be persisted, for example a read-only command * `Effect.none` no events are to be persisted, for example a read-only command
* `Effect.unhandled` the command is unhandled (not supported) in current state * `Effect.unhandled` the command is unhandled (not supported) in current state
External side effects can be performed after successful persist with the `andThen` function e.g `Effect.persist(..).andThen`. External side effects can be performed after successful persist with the `andThen` function e.g `Effect.persist(..).andThen`.
In the example below a reply is sent to the `replyTo` ActorRef. Note that the new state after applying In the example below a reply is sent to the `replyTo` ActorRef. Note that the new state after applying
the event is passed as a parameter to the `andThen` function. the event is passed as a parameter to the `andThen` function.
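The contract can be sketched in plain code (hypothetical names, not the Akka API): a command handler returns the events to persist plus an `andThen` callback that runs with the new state after a successful persist:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative sketch only -- not the Akka API. State is a simple counter,
// the single event is "incremented", and an Effect carries the events to
// persist plus a callback to run after a successful persist.
class EffectSketch {
    record Effect(List<String> events, Consumer<Integer> andThen) {}

    static final List<String> journal = new ArrayList<>();   // persisted events
    static final List<Integer> replies = new ArrayList<>();  // observed side effects

    // Pure event handler: fold one event into the state.
    static int applyEvent(int state, String event) {
        return event.equals("incremented") ? state + 1 : state;
    }

    // Command handler: "increment" persists an event and then replies with the
    // new state; anything else is a read-only command (like Effect.none).
    static Effect onCommand(int state, String command) {
        if (command.equals("increment"))
            return new Effect(List.of("incremented"), replies::add);
        return new Effect(List.of(), s -> {});
    }

    // Persist the events, update the state, then run the side effect.
    static int run(int state, String command) {
        Effect effect = onCommand(state, command);
        for (String event : effect.events()) {
            journal.add(event);
            state = applyEvent(state, event);
        }
        effect.andThen().accept(state);
        return state;
    }
}
```

Note how the side effect only runs after the event is in the journal and the state has been updated, mirroring the `Effect.persist(..).andThen` ordering described above.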
### Event handler ### Event handler
When an event has been persisted successfully the current state is updated by applying the When an event has been persisted successfully the current state is updated by applying the
event to the current state with the `eventHandler` function. event to the current state with the `eventHandler` function.
The event handler returns the new state, which must be immutable so you return a new instance of the state. The event handler returns the new state, which must be immutable so you return a new instance of the state.
The same event handler is also used when the entity is started up to recover its state from the stored events. The same event handler is also used when the entity is started up to recover its state from the stored events.
It is not recommended to perform side effects It is not recommended to perform side effects
in the event handler, as those are also executed during recovery of a persistent actor. in the event handler, as those are also executed during recovery of a persistent actor.
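This is also why recovery works at all: replaying the journal is just a fold of the stored events over the empty state with the same pure event handler (a sketch, not the Akka implementation):

```java
import java.util.List;

// Illustrative sketch only. Recovery folds the stored events over the empty
// state with the same pure event handler used at runtime -- which is why the
// handler must not perform side effects.
class RecoverySketch {
    static int applyEvent(int state, String event) {
        return event.equals("incremented") ? state + 1 : state;
    }

    static int recover(int emptyState, List<String> storedEvents) {
        int state = emptyState;
        for (String event : storedEvents) {
            state = applyEvent(state, event);  // replayed, never re-persisted
        }
        return state;
    }
}
```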
## Basic example ## Basic example
@ -106,18 +108,18 @@ The behavior can then be run as with any normal typed actor as described in [typ
## Larger example ## Larger example
After processing a message plain typed actors are able to return the `Behavior` that is used After processing a message plain typed actors are able to return the `Behavior` that is used
for next message. for next message.
As you can see in the above examples this is not supported by typed persistent actors. Instead, the state is As you can see in the above examples this is not supported by typed persistent actors. Instead, the state is
returned by `eventHandler`. The reason a new behavior can't be returned is that behavior is part of the actor's returned by `eventHandler`. The reason a new behavior can't be returned is that behavior is part of the actor's
state and must also carefully be reconstructed during recovery. If it would have been supported it would mean state and must also carefully be reconstructed during recovery. If it would have been supported it would mean
that the behavior must be restored when replaying events and also encoded in the state anyway when snapshots are used. that the behavior must be restored when replaying events and also encoded in the state anyway when snapshots are used.
That would be very prone to mistakes and thus not allowed in Typed Persistence. That would be very prone to mistakes and thus not allowed in Typed Persistence.
For simple actors you can use the same set of command handlers independent of what state the entity is in, For simple actors you can use the same set of command handlers independent of what state the entity is in,
as shown in above example. For more complex actors it's useful to be able to change the behavior in the sense as shown in above example. For more complex actors it's useful to be able to change the behavior in the sense
that different functions for processing commands may be defined depending on what state the actor is in. This is useful when implementing finite state machine (FSM) like entities. that different functions for processing commands may be defined depending on what state the actor is in. This is useful when implementing finite state machine (FSM) like entities.
The next example shows how to define different behavior based on the current `State`. It is an actor that The next example shows how to define different behavior based on the current `State`. It is an actor that
represents the state of a blog post. Before a post is started the only command it can process is `AddPost`. Once it is started represents the state of a blog post. Before a post is started the only command it can process is `AddPost`. Once it is started
@ -133,7 +135,7 @@ The commands (only a subset are valid depending on state):
Scala Scala
: @@snip [InDepthPersistentBehaviorSpec.scala]($akka$/akka-persistence-typed/src/test/scala/docs/akka/persistence/typed/InDepthPersistentBehaviorSpec.scala) { #commands } : @@snip [InDepthPersistentBehaviorSpec.scala]($akka$/akka-persistence-typed/src/test/scala/docs/akka/persistence/typed/InDepthPersistentBehaviorSpec.scala) { #commands }
The command handler used to process each command is decided by `CommandHandler.byState`, The command handler used to process each command is decided by `CommandHandler.byState`,
which is a function from `State => CommandHandler`: which is a function from `State => CommandHandler`:
Scala Scala
@ -149,8 +151,8 @@ And a different `CommandHandler` for after the post has been added:
Scala Scala
: @@snip [InDepthPersistentBehaviorSpec.scala]($akka$/akka-persistence-typed/src/test/scala/docs/akka/persistence/typed/InDepthPersistentBehaviorSpec.scala) { #post-added-command-handler } : @@snip [InDepthPersistentBehaviorSpec.scala]($akka$/akka-persistence-typed/src/test/scala/docs/akka/persistence/typed/InDepthPersistentBehaviorSpec.scala) { #post-added-command-handler }
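The dispatch idea can be sketched generically (illustrative names only, not the Akka API): a function from the current state to a command handler selects which handler processes the next command, FSM-style:

```java
import java.util.function.BiFunction;
import java.util.function.Function;

// Illustrative sketch only -- hypothetical names, not the Akka API. The handler
// applied to a command is chosen from the current state, FSM-style.
class ByStateSketch {
    enum State { BLANK, ADDED, PUBLISHED }

    // Before the post exists only "add" is valid; afterwards only "publish".
    // Commands that don't match are left unhandled (state unchanged).
    static final BiFunction<State, String, State> initialHandler =
            (state, cmd) -> cmd.equals("add") ? State.ADDED : state;
    static final BiFunction<State, String, State> postAddedHandler =
            (state, cmd) -> cmd.equals("publish") ? State.PUBLISHED : state;

    // The State => CommandHandler function.
    static final Function<State, BiFunction<State, String, State>> byState =
            state -> state == State.BLANK ? initialHandler : postAddedHandler;

    static State handle(State state, String cmd) {
        return byState.apply(state).apply(state, cmd);
    }
}
```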
The event handler is always the same independent of state. The main reason for not making the event handler The event handler is always the same independent of state. The main reason for not making the event handler
part of the `CommandHandler` is that all events must be handled and that is typically independent of what the part of the `CommandHandler` is that all events must be handled and that is typically independent of what the
current state is. The event handler can of course still decide what to do based on the state if that is needed. current state is. The event handler can of course still decide what to do based on the state if that is needed.
Scala Scala
@ -163,14 +165,14 @@ Scala
## Serialization ## Serialization
The same @ref:[serialization](../serialization.md) mechanism as for untyped The same @ref:[serialization](../serialization.md) mechanism as for untyped
actors is also used in Akka Typed, also for persistent actors. When picking serialization solution for the events actors is also used in Akka Typed, also for persistent actors. When picking serialization solution for the events
you should also consider that it must be possible to read old events when the application has evolved. you should also consider that it must be possible to read old events when the application has evolved.
Strategies for that can be found in the @ref:[schema evolution](../persistence-schema-evolution.md). Strategies for that can be found in the @ref:[schema evolution](../persistence-schema-evolution.md).
## Recovery ## Recovery
Since it is strongly discouraged to perform side effects in `applyEvent`, Since it is strongly discouraged to perform side effects in `applyEvent`,
side effects should be performed once recovery has completed in the `onRecoveryCompleted` callback: side effects should be performed once recovery has completed in the `onRecoveryCompleted` callback:
Scala Scala

View file

@ -1,4 +1,13 @@
# Testing # Testing
Testing can either be done asynchronously using a real `ActorSystem` or synchronously on the testing thread using the `BehaviorTestKit`.
For testing logic in a `Behavior` in isolation synchronous testing is preferred. For testing interactions between multiple
actors a more realistic asynchronous test is preferred.
Certain `Behavior`s will be hard to test synchronously, e.g. if they spawn `Future`s and you rely on a callback to complete
before observing the effect you want to test. Further support for controlling the scheduler and execution context used
will be added.
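The synchronous style can be sketched without Akka: a `TestInbox`-like double (hypothetical class, not the testkit API) captures messages on the calling thread so the test can assert on them immediately, with no `ActorSystem` involved:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Consumer;

// Illustrative sketch only -- not the Akka testkit API. Messages are captured
// on the calling thread and can be asserted on immediately, without an ActorSystem.
class TestInboxSketch<T> implements Consumer<T> {
    private final Deque<T> messages = new ArrayDeque<>();

    @Override
    public void accept(T message) { messages.add(message); }

    T receiveMessage() { return messages.remove(); }

    boolean hasMessages() { return !messages.isEmpty(); }
}
```

A behavior under test would be handed the inbox in place of a real reply-to reference, and the test drains the inbox synchronously.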
@@@ warning @@@ warning
@ -6,28 +15,20 @@ This module is currently marked as @ref:[may change](../common/may-change.md) in
of being the subject of active research. This means that API or semantics can of being the subject of active research. This means that API or semantics can
change without warning or deprecation period and it is not recommended to use change without warning or deprecation period and it is not recommended to use
this module in production just yet—you have been warned. this module in production just yet—you have been warned.
@@@ @@@
To use the testkit add the following dependency: ## Dependency
@@dependency [sbt,Maven,Gradle] { To use Akka TestKit Typed, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka group=com.typesafe.akka
artifact=akka-testkit-typed_$scala.binary_version$ artifact=akka-testkit-typed_$scala.binary_version$
version=$akka.version$ version=$akka.version$
scope=test scope=test
} }
Testing can either be done asynchronously using a real `ActorSystem` or synchronously on the testing thread using the `BehaviousTestKit`.
For testing logic in a `Behavior` in isolation synchronous testing is preferred. For testing interactions between multiple
actors a more realistic asynchronous test is preferred.
Certain `Behavior`s will be hard to test synchronously e.g. if they spawn Future's and you rely on a callback to complete
before observing the effect you want to test. Further support for controlling the scheduler and execution context used
will be added.
## Synchronous behaviour testing ## Synchronous behaviour testing
The following demonstrates how to test: The following demonstrates how to test:
@ -69,13 +70,13 @@ make use of the `TestInbox` which allows the creation of an `ActorRef` that can
### Spawning children ### Spawning children
With a name: With a name:
Scala Scala
: @@snip [BasicSyncTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/sync/BasicSyncTestingSpec.scala) { #test-child } : @@snip [BasicSyncTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/sync/BasicSyncTestingSpec.scala) { #test-child }
Java Java
: @@snip [BasicSyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/sync/BasicSyncTestingTest.java) { #test-child } : @@snip [BasicSyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/sync/BasicSyncTestingTest.java) { #test-child }
Anonymously: Anonymously:
@ -83,7 +84,7 @@ Scala
: @@snip [BasicSyncTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/sync/BasicSyncTestingSpec.scala) { #test-anonymous-child } : @@snip [BasicSyncTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/sync/BasicSyncTestingSpec.scala) { #test-anonymous-child }
Java Java
: @@snip [BasicSyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/sync/BasicSyncTestingTest.java) { #test-anonymous-child } : @@snip [BasicSyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/sync/BasicSyncTestingTest.java) { #test-anonymous-child }
### Sending messages ### Sending messages
@ -94,7 +95,7 @@ Scala
: @@snip [BasicSyncTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/sync/BasicSyncTestingSpec.scala) { #test-message } : @@snip [BasicSyncTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/sync/BasicSyncTestingSpec.scala) { #test-message }
Java Java
: @@snip [BasicSyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/sync/BasicSyncTestingTest.java) { #test-message } : @@snip [BasicSyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/sync/BasicSyncTestingTest.java) { #test-message }
Another use case is sending a message to a child actor. You can do this by looking up the `TestInbox` for Another use case is sending a message to a child actor. You can do this by looking up the `TestInbox` for
a child actor from the `BehaviorTestKit`: a child actor from the `BehaviorTestKit`:
@ -103,7 +104,7 @@ Scala
: @@snip [BasicSyncTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/sync/BasicSyncTestingSpec.scala) { #test-child-message } : @@snip [BasicSyncTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/sync/BasicSyncTestingSpec.scala) { #test-child-message }
Java Java
: @@snip [BasicSyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/sync/BasicSyncTestingTest.java) { #test-child-message } : @@snip [BasicSyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/sync/BasicSyncTestingTest.java) { #test-child-message }
For anonymous children the actor names are generated in a deterministic way: For anonymous children the actor names are generated in a deterministic way:
@ -111,27 +112,27 @@ Scala
: @@snip [BasicSyncTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/sync/BasicSyncTestingSpec.scala) { #test-child-message-anonymous } : @@snip [BasicSyncTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/sync/BasicSyncTestingSpec.scala) { #test-child-message-anonymous }
Java Java
: @@snip [BasicSyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/sync/BasicSyncTestingTest.java) { #test-child-message-anonymous } : @@snip [BasicSyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/sync/BasicSyncTestingTest.java) { #test-child-message-anonymous }
### Testing other effects ### Testing other effects
The `BehaviorTestkit` keeps track of other effects you can verify; look at the sub-classes of `akka.testkit.typed.Effect`: The `BehaviorTestkit` keeps track of other effects you can verify; look at the sub-classes of `akka.testkit.typed.Effect`:
* SpawnedAdapter * SpawnedAdapter
* Stopped * Stopped
* Watched * Watched
* Unwatched * Unwatched
* Scheduled * Scheduled
See the other public methods and API documentation on `BehaviorTestkit` for other types of verification. See the other public methods and API documentation on `BehaviorTestkit` for other types of verification.
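The effect-tracking idea can be sketched as follows (hypothetical types, not `akka.testkit.typed`): running the behavior records each side effect, and the test pops effects off the log to verify them in order:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch only -- hypothetical types, not akka.testkit.typed.
// Running a behavior records each side effect; the test then pops effects
// off the log to verify them in order.
class EffectLog {
    interface Effect {}
    record Spawned(String childName) implements Effect {}
    record Stopped(String childName) implements Effect {}
    record Watched(String childName) implements Effect {}

    private final Deque<Effect> effects = new ArrayDeque<>();

    void record(Effect effect) { effects.add(effect); }

    Effect expectEffect() { return effects.remove(); }
}
```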
## Asynchronous testing ## Asynchronous testing
Asynchronous testing uses a real `ActorSystem` that allows you to test your Actors in a more realistic environment. Asynchronous testing uses a real `ActorSystem` that allows you to test your Actors in a more realistic environment.
The minimal setup consists of the test procedure, which provides the desired stimuli, the actor under test, The minimal setup consists of the test procedure, which provides the desired stimuli, the actor under test,
and an actor receiving replies. Bigger systems replace the actor under test with a network of actors, apply stimuli and an actor receiving replies. Bigger systems replace the actor under test with a network of actors, apply stimuli
at varying injection points and arrange results to be sent from different emission points, but the basic principle stays at varying injection points and arrange results to be sent from different emission points, but the basic principle stays
the same in that a single procedure drives the test. the same in that a single procedure drives the test.
### Basic example ### Basic example
@ -142,10 +143,10 @@ Scala
: @@snip [BasicAsyncTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/async/BasicAsyncTestingSpec.scala) { #under-test } : @@snip [BasicAsyncTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/async/BasicAsyncTestingSpec.scala) { #under-test }
Java Java
: @@snip [BasicAsyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/async/BasicAsyncTestingTest.java) { #under-test } : @@snip [BasicAsyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/async/BasicAsyncTestingTest.java) { #under-test }
Tests extend `TestKit` or mix in `TestKitBase`. This provides access to: Tests extend `TestKit` or mix in `TestKitBase`. This provides access to:
* An ActorSystem * An ActorSystem
* Methods for spawning Actors. These are created under the root guardian * Methods for spawning Actors. These are created under the root guardian
* Methods for creating system actors * Methods for creating system actors
@ -153,7 +154,7 @@ Scala
: @@snip [BasicTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/async/BasicAsyncTestingSpec.scala) { #test-header } : @@snip [BasicTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/async/BasicAsyncTestingSpec.scala) { #test-header }
Java Java
: @@snip [BasicAsyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/async/BasicAsyncTestingTest.java) { #test-header } : @@snip [BasicAsyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/async/BasicAsyncTestingTest.java) { #test-header }
Your test is responsible for shutting down the `ActorSystem`, e.g. using `BeforeAndAfterAll` when using ScalaTest. Your test is responsible for shutting down the `ActorSystem`, e.g. using `BeforeAndAfterAll` when using ScalaTest.
@ -161,19 +162,19 @@ Scala
: @@snip [BasicTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/async/BasicAsyncTestingSpec.scala) { #test-shutdown } : @@snip [BasicTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/async/BasicAsyncTestingSpec.scala) { #test-shutdown }
Java Java
: @@snip [BasicAsyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/async/BasicAsyncTestingTest.java) { #test-shutdown } : @@snip [BasicAsyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/async/BasicAsyncTestingTest.java) { #test-shutdown }
The following demonstrates: The following demonstrates:
* Creating a typed actor from the `TestKit`'s system using `spawn` * Creating a typed actor from the `TestKit`'s system using `spawn`
* Creating a typed `TestProbe` * Creating a typed `TestProbe`
* Verifying that the actor under test responds via the `TestProbe` * Verifying that the actor under test responds via the `TestProbe`
Scala Scala
: @@snip [BasicTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/async/BasicAsyncTestingSpec.scala) { #test-spawn } : @@snip [BasicTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/async/BasicAsyncTestingSpec.scala) { #test-spawn }
Java Java
: @@snip [BasicAsyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/async/BasicAsyncTestingTest.java) { #test-spawn } : @@snip [BasicAsyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/async/BasicAsyncTestingTest.java) { #test-spawn }
Actors can also be spawned anonymously: Actors can also be spawned anonymously:
@ -181,7 +182,7 @@ Scala
: @@snip [BasicTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/async/BasicAsyncTestingSpec.scala) { #test-spawn-anonymous } : @@snip [BasicTestingSpec.scala]($akka$/akka-actor-typed-tests/src/test/scala/docs/akka/typed/testing/async/BasicAsyncTestingSpec.scala) { #test-spawn-anonymous }
Java Java
: @@snip [BasicAsyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/async/BasicAsyncTestingTest.java) { #test-spawn-anonymous } : @@snip [BasicAsyncTestingTest.java]($akka$/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/testing/async/BasicAsyncTestingTest.java) { #test-spawn-anonymous }
### Controlling the scheduler ### Controlling the scheduler