rename akka-docs dir to docs (#62)

PJ Fanning 2022-12-02 10:49:40 +01:00 committed by GitHub
parent 13dce0ec69
commit 708da8caec
GPG key ID: 4AEE18F83AFDEB23
1029 changed files with 2033 additions and 2039 deletions


@@ -28,7 +28,7 @@ jobs:
       uses: coursier/cache-action@v6.4.0
     - name: create the Akka site
-      run: sbt -Dakka.genjavadoc.enabled=true "Javaunidoc/doc; Compile/unidoc; akka-docs/paradox"
+      run: sbt -Dakka.genjavadoc.enabled=true "Javaunidoc/doc; Compile/unidoc; docs/paradox"
    - name: Install Coursier command line tool
      run: curl -fLo cs https://git.io/coursier-cli-linux && chmod +x cs && ./cs

.gitignore (8 changes)

@@ -45,11 +45,9 @@ _akka_cluster/
 _dump
 _mb
 activemq-data
-akka-contrib/rst_preprocessed/
-akka-docs-dev/rst_preprocessed/
-akka-docs/_build/
-akka-docs/exts/
-akka-docs/rst_preprocessed/
+docs/_build/
+docs/exts/
+docs/rst_preprocessed/
 akka-osgi/src/main/resources/*.conf
 akka.sublime-project
 akka.sublime-workspace
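The ignore-pattern change above is a straight prefix swap. A minimal sketch of that mapping, using bash parameter expansion (the variable names are illustrative, not part of the commit):

```shell
# Illustrative only: map an old ignore entry to its renamed form by
# swapping the akka-docs/ prefix for docs/.
old="akka-docs/_build/"
new="${old/akka-docs\//docs/}"   # bash pattern substitution
echo "$new"                      # prints: docs/_build/
```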


@@ -70,7 +70,7 @@ The steps are exactly the same for everyone involved in the project, including t
 - Please write additional tests covering your feature and adjust existing ones if needed before submitting your pull request. The `validatePullRequest` sbt task ([explained below](#the-validatepullrequest-task)) may come in handy to verify your changes are correct.
 - Use the `verifyCodeStyle` sbt task to ensure your code is properly formatted and includes the proper copyright headers.
 1. Once your feature is complete, prepare the commit following our [Creating Commits And Writing Commit Messages](#creating-commits-and-writing-commit-messages). For example, a good commit message would be: `Adding compression support for Manifests #22222` (note the reference to the ticket it aimed to resolve).
-1. If it's a new feature or a change of behavior, document it on the [akka-docs](https://github.com/apache/incubator-pekko/tree/main/akka-docs). When the feature touches Scala and Java DSL, document both the Scala and Java APIs.
+1. If it's a new feature or a change of behavior, document it on the [docs](https://github.com/apache/incubator-pekko/tree/main/docs). When the feature touches Scala and Java DSL, document both the Scala and Java APIs.
 1. Now it's finally time to [submit the pull request](https://help.github.com/articles/using-pull-requests)!
 - Please make sure to include a reference to the issue you're solving *in the comment* for the Pull Request, as this will cause the PR to be linked properly with the issue. Examples of good phrases for this are: "Resolves #1234" or "Refs #1234".
 1. If you are a first time contributor, a core member must approve the CI to run for your pull request.

@@ -218,7 +218,7 @@ The Pekko build includes a special task called `validatePullRequest`, which inve
 then running tests only on those projects.
 For example, changing something in `akka-actor` would cause tests to be run in all projects which depend on it
-(e.g. `akka-actor-tests`, `akka-stream`, `akka-docs` etc.).
+(e.g. `akka-actor-tests`, `akka-stream`, `docs` etc.).
 To use the task, simply type `validatePullRequest`, and the output should include entries like shown below:

@@ -226,7 +226,7 @@ To use the task, simply type `validatePullRequest`, and the output should includ
 > validatePullRequest
 [info] Diffing [HEAD] to determine changed modules in PR...
 [info] Detected uncomitted changes in directories (including in dependency analysis): [akka-protobuf,project]
-[info] Detected changes in directories: [akka-actor-tests, project, akka-stream, akka-docs, akka-persistence]
+[info] Detected changes in directories: [akka-actor-tests, project, akka-stream, docs, akka-persistence]
 ```

 By default, changes are diffed with the `main` branch when working locally. If you want to validate against a different

@@ -279,7 +279,7 @@ rolling upgrade to the next version.
 All wire protocol changes that may concern rolling upgrades should be documented in the
 [Rolling Update Changelog](https://pekko.apache.org/)
-(found in akka-docs/src/main/paradox/project/rolling-update.md)
+(found in docs/src/main/paradox/project/rolling-update.md)
 ### Protobuf

@@ -321,12 +321,12 @@ To build the documentation locally:
 ```shell
 sbt
-akka-docs/paradox
+docs/paradox
 ```
-The generated HTML documentation is in `akka-docs/target/paradox/site/main/index.html`.
+The generated HTML documentation is in `docs/target/paradox/site/main/index.html`.
-Alternatively, use `akka-docs/paradoxBrowse` to open the generated docs in your default web browser.
+Alternatively, use `docs/paradoxBrowse` to open the generated docs in your default web browser.
 #### Links to API documentation

@@ -567,7 +567,7 @@ Scala has proven the most viable way to do it, as long as you keep the following
 Documentation of Pekko Streams operators is automatically enforced.
 If a method exists on Source / Sink / Flow, or any other class listed in `project/StreamOperatorsIndexGenerator.scala`,
-it must also have a corresponding documentation page under `akka-docs/src/main/paradox/streams/operators/...`.
+it must also have a corresponding documentation page under `docs/src/main/paradox/streams/operators/...`.
 Pekko Streams operators' consistency is enforced by `ConsistencySpec`, normally an operator should exist on both Source / SubSource, Flow / SubFlow, Sink / SubSink.

@@ -577,7 +577,7 @@ docs pages in there to see the pattern in action. In general the page must consi
 - the title, including where the operator is defined (e.g. `ActorFlow.ask` or `Source.map`)
 - a short explanation of what this operator does, 1 sentence is optimal
 - an image explaining the operator more visually (whenever possible)
-- a link to the operators' "category" (these are listed in `akka-docs/src/main/paradox/categories`)
+- a link to the operators' "category" (these are listed in `docs/src/main/paradox/categories`)
 - the method signature snippet (use the built in directives to generate it)
 - a longer explanation about the operator and its exact semantics (when it pulls, cancels, signals elements)
 - at least one usage example
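After this change, the local documentation build described above uses the renamed project id. A minimal sketch of where the generated site lands, with the path taken from the hunk above (the shell variable names are illustrative only):

```shell
# Illustrative sketch: the Paradox site output path after the module rename.
module=docs   # formerly akka-docs
site="$module/target/paradox/site/main/index.html"
echo "$site"  # prints: docs/target/paradox/site/main/index.html
```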


@@ -71,7 +71,7 @@ It is possible to release a revised documentation to the already existing releas
 1. Switch to a new branch for your documentation change, make the change
 1. Build documentation locally with:
    ```sh
-   sbt akka-docs/paradoxBrowse
+   sbt docs/paradoxBrowse
    ```
 1. If the generated documentation looks good, create a PR to the `docs/v2.6.4` branch you created earlier.
 1. It should automatically be published by GitHub Actions on merge.


@@ -1,36 +0,0 @@
-# Issue Tracking
-
-Akka is using GitHub Issues as its issue tracking system.
-
-## Browsing
-
-### Tickets
-
-Before filing a ticket, please check the existing [Akka tickets](https://github.com/akka/akka/issues) for earlier reports of the same
-problem. You are very welcome to comment on existing tickets, especially if you
-have reproducible test cases that you can share.
-
-### Roadmaps
-
-Short and long-term plans are published in the [akka/akka-meta](https://github.com/akka/akka-meta/issues) repository.
-
-## Creating tickets
-
-*Please include the versions of Scala and Akka and relevant configuration files.*
-
-You can create a [new ticket](https://github.com/akka/akka/issues/new) if you
-have registered a GitHub user account.
-
-Thanks a lot for reporting bugs and suggesting features!
-
-## Submitting Pull Requests
-
-@@@ note
-
-*A pull request is worth a thousand +1's.* -- Old Klangian Proverb
-
-@@@
-
-Pull Requests fixing issues or adding functionality are very welcome.
-Please read [CONTRIBUTING.md](https://github.com/akka/akka/blob/main/CONTRIBUTING.md) for
-more information about contributing to Akka.


@@ -85,7 +85,7 @@ lazy val aggregatedProjects: Seq[ProjectReference] = userProjects ++ List[Projec
     streamTests,
     streamTestsTck)
-lazy val root = Project(id = "akka", base = file("."))
+lazy val root = Project(id = "pekko", base = file("."))
   .aggregate(aggregatedProjects: _*)
   .enablePlugins(PublishRsyncPlugin)
   .settings(rootSettings: _*)

@@ -203,7 +203,7 @@ lazy val distributedData = akkaModule("akka-distributed-data")
   .configs(MultiJvm)
   .enablePlugins(MultiNodeScalaTest)
-lazy val docs = akkaModule("akka-docs")
+lazy val docs = akkaModule("docs")
   .configs(Jdk9.TestJdk9)
   .dependsOn(
     actor,


@@ -71,10 +71,10 @@ The `createReceive` method has no arguments and returns @javadoc[AbstractActor.R
 Here is an example:
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #imports1 #my-actor }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #imports1 #my-actor }
 Java
-: @@snip [MyActor.java](/akka-docs/src/test/java/jdocs/actor/MyActor.java) { #imports #my-actor }
+: @@snip [MyActor.java](/docs/src/test/java/jdocs/actor/MyActor.java) { #imports #my-actor }
 Please note that the Akka Actor @scala[`receive`] message loop is exhaustive, which is different compared to Erlang and the late Scala Actors. This means that you
 need to provide a pattern match for all messages that it can accept and if you

@@ -97,7 +97,7 @@ construction.
 #### Here is another example that you can edit and run in the browser:
-@@fiddle [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #fiddle_code template="Akka" layout="v75" minheight="400px" }
+@@fiddle [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #fiddle_code template="Akka" layout="v75" minheight="400px" }
 @@@

@@ -110,10 +110,10 @@ dispatcher to use, see more below). Here are some examples of how to create a
 `Props` instance.
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #creating-props }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #creating-props }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-props #creating-props }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-props #creating-props }
 The second variant shows how to pass constructor arguments to the

@@ -135,10 +135,10 @@ for cases when the actor constructor takes value classes as arguments.
 #### Dangerous Variants
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #creating-props-deprecated }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #creating-props-deprecated }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #creating-props-deprecated }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #creating-props-deprecated }
 This method is not recommended being used within another actor because it
 encourages to close over the enclosing scope, resulting in non-serializable

@@ -170,13 +170,13 @@ There are two edge cases in actor creation with @scaladoc[actor.Props](pekko.act
 * An actor with @scaladoc[AnyVal](scala.AnyVal) arguments.
-@@snip [PropsEdgeCaseSpec.scala](/akka-docs/src/test/scala/docs/actor/PropsEdgeCaseSpec.scala) { #props-edge-cases-value-class }
-@@snip [PropsEdgeCaseSpec.scala](/akka-docs/src/test/scala/docs/actor/PropsEdgeCaseSpec.scala) { #props-edge-cases-value-class-example }
+@@snip [PropsEdgeCaseSpec.scala](/docs/src/test/scala/docs/actor/PropsEdgeCaseSpec.scala) { #props-edge-cases-value-class }
+@@snip [PropsEdgeCaseSpec.scala](/docs/src/test/scala/docs/actor/PropsEdgeCaseSpec.scala) { #props-edge-cases-value-class-example }
 * An actor with default constructor values.
-@@snip [PropsEdgeCaseSpec.scala](/akka-docs/src/test/scala/docs/actor/PropsEdgeCaseSpec.scala) { #props-edge-cases-default-values }
+@@snip [PropsEdgeCaseSpec.scala](/docs/src/test/scala/docs/actor/PropsEdgeCaseSpec.scala) { #props-edge-cases-default-values }
 In both cases, an @javadoc[IllegalArgumentException](java.lang.IllegalArgumentException) will be thrown stating
 no matching constructor could be found.
@@ -197,10 +197,10 @@ arguments as constructor parameters, since within static method]
 the given code block will not retain a reference to its enclosing scope:
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #props-factory }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #props-factory }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #props-factory }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #props-factory }
 Another good practice is to declare what messages an Actor can receive
 @scala[in the companion object of the Actor]

@@ -208,10 +208,10 @@ Another good practice is to declare what messages an Actor can receive
 which makes easier to know what it can receive:
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #messages-in-companion }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #messages-in-companion }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #messages-in-companion }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #messages-in-companion }
 ### Creating Actors with Props

@@ -220,20 +220,20 @@ Actors are created by passing a @apidoc[actor.Props] instance into the
 @apidoc[actor.ActorContext].
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #system-actorOf }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #system-actorOf }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-actorRef }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-actorRef }
 Using the `ActorSystem` will create top-level actors, supervised by the
 actor systems provided guardian actor while using an actors context will
 create a child actor.
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #context-actorOf }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #context-actorOf }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #context-actorOf }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #context-actorOf }
 It is recommended to create a hierarchy of children, grand-children and so on
 such that it fits the logical failure-handling structure of the application,

@@ -266,7 +266,7 @@ value classes.
 In these cases you should either unpack the arguments or create the props by
 calling the constructor manually:
-@@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #actor-with-value-class-argument }
+@@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #actor-with-value-class-argument }
 @@@

@@ -278,10 +278,10 @@ are cases when a factory method must be used, for example when the actual
 constructor arguments are determined by a dependency injection framework.
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #creating-indirectly }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #creating-indirectly }
 Java
-: @@snip [DependencyInjectionDocTest.java](/akka-docs/src/test/java/jdocs/actor/DependencyInjectionDocTest.java) { #import #creating-indirectly }
+: @@snip [DependencyInjectionDocTest.java](/docs/src/test/java/jdocs/actor/DependencyInjectionDocTest.java) { #import #creating-indirectly }
 @@@ warning
@@ -343,7 +343,7 @@ In addition, it offers:
 You can import the members in the `context` to avoid prefixing access with `context.`
-@@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #import-context }
+@@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #import-context }
 @@@

@@ -354,7 +354,7 @@ Scala
 : @@snip [Actor.scala](/akka-actor/src/main/scala/org/apache/pekko/actor/Actor.scala) { #lifecycle-hooks }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #lifecycle-callbacks }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #lifecycle-callbacks }
 The implementations shown above are the defaults provided by the @scala[@scaladoc[Actor](pekko.actor.Actor) trait.] @java[@javadoc[AbstractActor](pekko.actor.AbstractActor) class.]

@@ -426,10 +426,10 @@ termination (see @ref:[Stopping Actors](#stopping-actors)). This service is prov
 Registering a monitor is easy:
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #watch }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #watch }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-terminated #watch }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-terminated #watch }
 It should be noted that the @apidoc[actor.Terminated] message is generated
 independently of the order in which registration and termination occur.

@@ -454,10 +454,10 @@ no `Terminated` message for that actor will be processed anymore.
 Right after starting the actor, its @scala[@scaladoc[preStart](pekko.actor.Actor#preStart():Unit)]@java[@javadoc[preStart](pekko.actor.AbstractActor#preStart())] method is invoked.
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #preStart }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #preStart }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #preStart }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #preStart }
 This method is called when the actor is first created. During restarts, it is
 called by the default implementation of @scala[@scaladoc[postRestart](pekko.actor.Actor#postRestart(reason:Throwable):Unit)]@java[@javadoc[postRestart](pekko.actor.AbstractActor#postRestart(java.lang.Throwable))], which means that

@@ -529,10 +529,10 @@ paths—logical or physical—and receive back an @apidoc[actor.ActorSelection]
 result:
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #selection-local }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #selection-local }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #selection-local }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #selection-local }
 @@@ note

@@ -561,10 +561,10 @@ The path elements of an actor selection may contain wildcard patterns allowing f
 broadcasting of messages to that section:
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #selection-wildcard }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #selection-wildcard }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #selection-wildcard }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #selection-wildcard }
 Messages can be sent via the @apidoc[actor.ActorSelection] and the path of the
 `ActorSelection` is looked up when delivering each message. If the selection
@ -581,10 +581,10 @@ negative result is generated. Please note that this does not mean that delivery
of that reply is guaranteed, it still is a normal message. of that reply is guaranteed, it still is a normal message.
Scala Scala
: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #identify } : @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #identify }
Java Java
: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-identify #identify } : @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-identify #identify }
You can also acquire an `ActorRef` for an `ActorSelection` with You can also acquire an `ActorRef` for an `ActorSelection` with
the @apidoc[resolveOne](actor.ActorSelection) {scala="#resolveOne(timeout:scala.concurrent.duration.FiniteDuration):scala.concurrent.Future[actor.ActorRef]" java="#resolveOne(java.time.Duration)"} method of the `ActorSelection`. It returns a @scala[@scaladoc[Future](scala.concurrent.Future)]@java[@javadoc[CompletionStage](java.util.concurrent.CompletionStage)] the @apidoc[resolveOne](actor.ActorSelection) {scala="#resolveOne(timeout:scala.concurrent.duration.FiniteDuration):scala.concurrent.Future[actor.ActorRef]" java="#resolveOne(java.time.Duration)"} method of the `ActorSelection`. It returns a @scala[@scaladoc[Future](scala.concurrent.Future)]@java[@javadoc[CompletionStage](java.util.concurrent.CompletionStage)]
@ -595,10 +595,10 @@ didn't complete within the supplied `timeout`.
Remote actor addresses may also be looked up, if @ref:[remoting](remoting.md) is enabled: Remote actor addresses may also be looked up, if @ref:[remoting](remoting.md) is enabled:
Scala Scala
: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #selection-remote } : @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #selection-remote }
Java Java
: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #selection-remote } : @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #selection-remote }
An example demonstrating actor look-up is given in @ref:[Remoting Sample](remoting-artery.md#looking-up-remote-actors). An example demonstrating actor look-up is given in @ref:[Remoting Sample](remoting-artery.md#looking-up-remote-actors).
@ -617,10 +617,10 @@ state) and works great with pattern matching at the receiver side.]
Here is an @scala[example:] @java[example of an immutable message:] Here is an @scala[example:] @java[example of an immutable message:]
Scala Scala
: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #immutable-message-definition #immutable-message-instantiation } : @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #immutable-message-definition #immutable-message-instantiation }
Java Java
: @@snip [ImmutableMessage.java](/akka-docs/src/test/java/jdocs/actor/ImmutableMessage.java) { #immutable-message } : @@snip [ImmutableMessage.java](/docs/src/test/java/jdocs/actor/ImmutableMessage.java) { #immutable-message }
## Send messages ## Send messages
@ -658,10 +658,10 @@ This is the preferred way of sending messages. No blocking waiting for a
message. This gives the best concurrency and scalability characteristics. message. This gives the best concurrency and scalability characteristics.
Scala Scala
: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #tell } : @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #tell }
Java Java
: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #tell } : @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #tell }
@@@ div { .group-scala } @@@ div { .group-scala }
@ -695,10 +695,10 @@ The `ask` pattern involves actors as well as futures, hence it is offered as
a use pattern rather than a method on @apidoc[actor.ActorRef]: a use pattern rather than a method on @apidoc[actor.ActorRef]:
Scala Scala
: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #ask-pipeTo } : @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #ask-pipeTo }
Java Java
: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-ask #ask-pipe } : @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-ask #ask-pipe }
This example demonstrates `ask` together with the @scala[@scaladoc[pipeTo](pekko.pattern.PipeToSupport$PipeableFuture#pipeTo(recipient:org.apache.pekko.actor.ActorRef)(implicitsender:org.apache.pekko.actor.ActorRef):scala.concurrent.Future[T])]@java[@javadoc[pipeTo](pekko.pattern.PipeToSupport.PipeableCompletionStage#pipeTo(org.apache.pekko.actor.ActorRef,org.apache.pekko.actor.ActorRef))] pattern on This example demonstrates `ask` together with the @scala[@scaladoc[pipeTo](pekko.pattern.PipeToSupport$PipeableFuture#pipeTo(recipient:org.apache.pekko.actor.ActorRef)(implicitsender:org.apache.pekko.actor.ActorRef):scala.concurrent.Future[T])]@java[@javadoc[pipeTo](pekko.pattern.PipeToSupport.PipeableCompletionStage#pipeTo(org.apache.pekko.actor.ActorRef,org.apache.pekko.actor.ActorRef))] pattern on
@@ -727,10 +727,10 @@ are treated specially by the ask pattern.]
 @@@
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #reply-exception }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #reply-exception }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #reply-exception }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #reply-exception }
 If the actor does not complete the @scala[@scaladoc[Future](scala.concurrent.Future)]@java[@javadoc[CompletionStage](java.util.concurrent.CompletionStage)], it will expire after the timeout period,
 @scala[completing it with an @scaladoc[AskTimeoutException](pekko.pattern.AskTimeoutException). The timeout is taken from one of the following locations in order of precedence:]
@@ -740,11 +740,11 @@ If the actor does not complete the @scala[@scaladoc[Future](scala.concurrent.Fut
 1. explicitly given timeout as in:
-@@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #using-explicit-timeout }
+@@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #using-explicit-timeout }
 2. implicit argument of type `org.apache.pekko.util.Timeout`, e.g.
-@@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #using-implicit-timeout }
+@@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #using-implicit-timeout }
 @@@
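The context lines above describe `ask` completing a future and failing it with an `AskTimeoutException` when no reply arrives within the deadline. As a rough standalone sketch of that ask-with-deadline idea using plain `CompletableFuture` (the names and the `ask` helper are illustrative, not the Pekko API):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Sketch: the "actor" answers by completing a future; the caller attaches a
// deadline, so an unanswered ask fails instead of hanging forever.
public class AskSketch {
    // Hypothetical stand-in for an actor that replies asynchronously.
    static CompletableFuture<String> ask(String message, long timeoutMillis) {
        CompletableFuture<String> reply = new CompletableFuture<>();
        if (!"ignore-me".equals(message)) {
            reply.complete("echo:" + message); // the actor replies
        }
        // If no reply arrives in time the future completes exceptionally with
        // TimeoutException (Pekko uses AskTimeoutException instead).
        return reply.orTimeout(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) {
        System.out.println(ask("hello", 100).join()); // prints echo:hello
        try {
            ask("ignore-me", 50).join();
        } catch (Exception e) {
            System.out.println(e.getCause() instanceof TimeoutException); // true
        }
    }
}
```

As in the documentation's precedence list, the deadline here is an explicit argument; a real Pekko `ask` can also pick it up implicitly from a `Timeout` in scope.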
@@ -773,10 +773,10 @@ through a 'mediator'. This can be useful when writing actors that work as
 routers, load-balancers, replicators etc.
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #forward }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #forward }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #forward }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #forward }
 ## Receive messages
@@ -788,7 +788,7 @@ Scala
 : @@snip [Actor.scala](/akka-actor/src/main/scala/org/apache/pekko/actor/Actor.scala) { #receive }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #createReceive }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #createReceive }
 @@@ div { .group-scala }
@@ -807,23 +807,23 @@ You can build such behavior with a builder named @javadoc[receiveBuilder](pekko.
 @@@
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #imports1 #my-actor }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #imports1 #my-actor }
 Java
-: @@snip [MyActor.java](/akka-docs/src/test/java/jdocs/actor/MyActor.java) { #imports #my-actor }
+: @@snip [MyActor.java](/docs/src/test/java/jdocs/actor/MyActor.java) { #imports #my-actor }
 @@@ div { .group-java }
 In case you want to provide many `match` cases but want to avoid creating a long call
 trail, you can split the creation of the builder into multiple statements as in the example:
-@@snip [GraduallyBuiltActor.java](/akka-docs/src/test/java/jdocs/actor/GraduallyBuiltActor.java) { #imports #actor }
+@@snip [GraduallyBuiltActor.java](/docs/src/test/java/jdocs/actor/GraduallyBuiltActor.java) { #imports #actor }
 Using small methods is a good practice, also in actors. It's recommended to delegate the
 actual work of the message processing to methods instead of defining a huge `ReceiveBuilder`
 with lots of code in each lambda. A well-structured actor can look like this:
-@@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #well-structured }
+@@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #well-structured }
 That has benefits such as:
@@ -845,7 +845,7 @@ that the JVM can have problems optimizing and the resulting code might not be as
 untyped version. When extending `UntypedAbstractActor` each message is received as an untyped
 `Object` and you have to inspect and cast it to the actual message type in other ways, like this:
-@@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #optimized }
+@@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #optimized }
 @@@
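The "well-structured actor" advice in the context above — keep the dispatch small and delegate each message type to its own method — can be sketched outside Pekko with plain pattern matching (the message types and method names here are made up for illustration):

```java
// Sketch of "delegate the work to methods": the dispatch stays a few lines,
// and each message type is handled in a small, testable method.
public class WellStructured {
    record Msg1(String payload) {}
    record Msg2(int value) {}

    static String receive(Object message) {
        if (message instanceof Msg1 m) return onMsg1(m);
        if (message instanceof Msg2 m) return onMsg2(m);
        return unhandled(message);
    }

    static String onMsg1(Msg1 m) { return "msg1:" + m.payload(); }
    static String onMsg2(Msg2 m) { return "msg2:" + (m.value() * 2); }
    static String unhandled(Object m) { return "unhandled"; }

    public static void main(String[] args) {
        System.out.println(receive(new Msg1("hi"))); // prints msg1:hi
    }
}
```

A `ReceiveBuilder` plays the role of `receive` here; the benefit either way is that no lambda grows into a wall of logic.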
@@ -860,10 +860,10 @@ message was sent without an actor or future context) then the sender
 defaults to a 'dead-letter' actor ref.
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #reply-without-sender }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #reply-without-sender }
 Java
-: @@snip [MyActor.java](/akka-docs/src/test/java/jdocs/actor/MyActor.java) { #reply }
+: @@snip [MyActor.java](/docs/src/test/java/jdocs/actor/MyActor.java) { #reply }
 ## Receive timeout
@@ -882,10 +882,10 @@ periods).
 To cancel the sending of receive timeout notifications, use `cancelReceiveTimeout`.
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #receive-timeout }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #receive-timeout }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #receive-timeout }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #receive-timeout }
 Messages marked with @apidoc[NotInfluenceReceiveTimeout] will not reset the timer. This can be useful when
 @apidoc[actor.ReceiveTimeout] should be fired by external inactivity but not influenced by internal activity,
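The receive-timeout bookkeeping described in the lines above — every ordinary message pushes the inactivity deadline out, while messages marked `NotInfluenceReceiveTimeout` leave it alone — can be sketched with a logical clock (this is an illustration of the rule, not the Pekko implementation):

```java
// Sketch: track an inactivity deadline; ordinary messages reset it, marked
// messages (e.g. internal ticks) do not, so external silence still fires it.
public class ReceiveTimeoutSketch {
    interface NotInfluenceReceiveTimeout {} // marker, mirroring the doc text
    record Tick() implements NotInfluenceReceiveTimeout {}

    final long timeoutMillis;
    long deadline;

    ReceiveTimeoutSketch(long now, long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
        this.deadline = now + timeoutMillis;
    }

    /** Returns true if ReceiveTimeout would have fired by `now`. */
    boolean onMessage(Object msg, long now) {
        boolean fired = now >= deadline;
        if (!(msg instanceof NotInfluenceReceiveTimeout)) {
            deadline = now + timeoutMillis; // ordinary messages reset the timer
        }
        return fired;
    }
}
```

With a 100 ms timeout, "work" at t=50 resets the deadline to 150, but a stream of `Tick`s would never keep the timeout from firing.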
@@ -901,10 +901,10 @@ to use the support for named timers. The lifecycle of scheduled messages can be
 when the actor is restarted and that is taken care of by the timers.
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/TimerDocSpec.scala) { #timers }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/TimerDocSpec.scala) { #timers }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/TimerDocTest.java) { #timers }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/TimerDocTest.java) { #timers }
 The @ref:[Scheduler](scheduler.md#schedule-periodically) documentation describes the difference between
 `fixed-delay` and `fixed-rate` scheduling. If you are uncertain of which one to use you should pick
@@ -927,10 +927,10 @@ termination of the actor is performed asynchronously, i.e. `stop` may return bef
 the actor is stopped.
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #stoppingActors-actor }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #stoppingActors-actor }
 Java
-: @@snip [MyStoppingActor.java](/akka-docs/src/test/java/jdocs/actor/MyStoppingActor.java) { #my-stopping-actor }
+: @@snip [MyStoppingActor.java](/docs/src/test/java/jdocs/actor/MyStoppingActor.java) { #my-stopping-actor }
 Processing of the current message, if any, will continue before the actor is stopped,
@@ -958,10 +958,10 @@ The `postStop()` hook is invoked after an actor is fully stopped. This
 enables cleaning up of resources:
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #postStop }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #postStop }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #postStop }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #postStop }
 @@@ note
@@ -982,10 +982,10 @@ ordinary messages and will be handled after messages that were already queued
 in the mailbox.
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #poison-pill }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #poison-pill }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #poison-pill }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #poison-pill }
 <a id="killing-actors"></a>
 ### Killing an Actor
@@ -998,10 +998,10 @@ See @ref:[What Supervision Means](general/supervision.md#supervision-directives)
 Use `Kill` like this:
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #kill }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #kill }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #kill }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #kill }
 In general, it is not recommended to overly rely on either `PoisonPill` or `Kill` in
 designing your actor interactions, as often a protocol-level message like `PleaseCleanupAndStop`
@@ -1014,10 +1014,10 @@ over which design you do not have control over.
 termination of several actors:
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #gracefulStop}
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #gracefulStop}
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-gracefulStop #gracefulStop}
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-gracefulStop #gracefulStop}
 When `gracefulStop()` returns successfully, the actors @scala[@scaladoc[postStop](pekko.actor.Actor#postStop():Unit)]@java[@javadoc[postStop](pekko.actor.AbstractActor#postStop())] hook
 will have been executed: there exists a happens-before edge between the end of
@@ -1059,10 +1059,10 @@ Please note that the actor will revert to its original behavior when restarted b
 To hotswap the Actor behavior using `become`:
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #hot-swap-actor }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #hot-swap-actor }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #hot-swap-actor }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #hot-swap-actor }
 This variant of the `become` method is useful for many different things,
 such as to implement a Finite State Machine (FSM). It will replace the current behavior (i.e. the top of the behavior
@@ -1076,14 +1076,14 @@ in the long run, otherwise this amounts to a memory leak (which is why this
 behavior is not the default).
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #swapper }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #swapper }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #swapper }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #swapper }
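The behavior-stack semantics described in the context above — `become` replaces the top of the stack by default, pushes when asked to keep the old behavior, and `unbecome` pops back — can be sketched with a plain deque (illustrative only, not the Pekko `ActorContext` API):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Function;

// Sketch of hotswapping: the current behavior is the top of a stack of
// message handlers. Unbalanced pushes are exactly the memory leak the
// documentation warns about, which is why replace is the default.
public class HotSwapSketch {
    final Deque<Function<String, String>> behaviors = new ArrayDeque<>();

    HotSwapSketch(Function<String, String> initial) { behaviors.push(initial); }

    void become(Function<String, String> behavior, boolean discardOld) {
        if (discardOld) behaviors.pop(); // default: replace the top behavior
        behaviors.push(behavior);        // discardOld=false stacks instead
    }

    void unbecome() { if (behaviors.size() > 1) behaviors.pop(); }

    String receive(String msg) { return behaviors.peek().apply(msg); }
}
```

On restart such a stack would be rebuilt from the initial behavior, matching the note that an actor reverts to its original behavior when restarted.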
 ### Encoding Scala Actors nested receives without accidentally leaking memory
-See this @extref[Unnested receive example](github:akka-docs/src/test/scala/docs/actor/UnnestedReceives.scala).
+See this @extref[Unnested receive example](github:docs/src/test/scala/docs/actor/UnnestedReceives.scala).
 ## Stash
@@ -1120,10 +1120,10 @@ control over the mailbox, see the documentation on mailboxes: @ref:[Mailboxes](m
 Here is an example of the @scala[@scaladoc[Stash](pekko.actor.Stash) trait] @java[@javadoc[AbstractActorWithStash](pekko.actor.AbstractActorWithStash) class] in action:
 Scala
-: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #stash }
+: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #stash }
 Java
-: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #stash }
+: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #stash }
 Invoking `stash()` adds the current message (the message that the
 actor received last) to the actor's stash. It is typically invoked
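The stash mechanics described above — `stash()` puts the current message aside, and unstashing prepends the stashed messages to the mailbox so they are handled before anything newer — can be sketched with two deques (illustrative only, not the Pekko `Stash` implementation):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Sketch of stash()/unstashAll(): stashed messages keep their order and,
// when unstashed, jump ahead of messages that arrived in the meantime.
public class StashSketch {
    final ArrayDeque<String> mailbox = new ArrayDeque<>();
    final ArrayDeque<String> stash = new ArrayDeque<>();

    void tell(String msg) { mailbox.addLast(msg); }      // normal delivery
    void stashCurrent(String msg) { stash.addLast(msg); } // stash()

    void unstashAll() {
        // prepend the whole stash to the mailbox, preserving stash order
        while (!stash.isEmpty()) mailbox.addFirst(stash.removeLast());
    }

    List<String> drain() {
        List<String> processed = new ArrayList<>();
        while (!mailbox.isEmpty()) processed.add(mailbox.removeFirst());
        return processed;
    }
}
```

So if "a" and "b" are stashed and "c" arrives afterwards, processing order after `unstashAll()` is a, b, c.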
@@ -1187,7 +1187,7 @@ For example, imagine you have a set of actors which are either `Producers` or `C
 have an actor share both behaviors. This can be achieved without having to duplicate code by extracting the behaviors to
 traits and implementing the actor's `receive` as a combination of these partial functions.
-@@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #receive-orElse }
+@@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #receive-orElse }
 Instead of inheritance the same pattern can be applied via composition - compose the receive method using partial functions from delegates.
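The composition idea above — build `receive` by chaining partial handlers with `orElse` — can be sketched in plain Java with `Optional`-returning handlers standing in for Scala's partial functions (the `Handler` type and producer/consumer handlers are made up for illustration):

```java
import java.util.Optional;
import java.util.function.Function;

// Sketch of receive-orElse: each handler covers some messages; the combined
// behavior tries them in order and handles whatever either one handles.
public class ComposeReceive {
    interface Handler extends Function<Object, Optional<String>> {
        default Handler orElse(Handler next) {
            return msg -> apply(msg).or(() -> next.apply(msg));
        }
    }

    static final Handler producer =
        msg -> "produce".equals(msg) ? Optional.of("produced") : Optional.empty();
    static final Handler consumer =
        msg -> "consume".equals(msg) ? Optional.of("consumed") : Optional.empty();

    // the actor's receive, composed from the two partial behaviors
    static final Handler receive = producer.orElse(consumer);
}
```

Whether the handlers come from inherited traits or injected delegates, the composed `receive` is the same chain.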
@@ -1223,10 +1223,10 @@ One useful usage of this pattern is to disable creation of new `ActorRefs` for c
 achieved by overriding @scala[@scaladoc[preRestart](pekko.actor.Actor#preRestart(reason:Throwable,message:Option[Any]):Unit)]@java[@javadoc[preRestart](pekko.actor.AbstractActor#preRestart(java.lang.Throwable,java.util.Optional))]. Below is the default implementation of these lifecycle hooks:
 Scala
-: @@snip [InitializationDocSpec.scala](/akka-docs/src/test/scala/docs/actor/InitializationDocSpec.scala) { #preStartInit }
+: @@snip [InitializationDocSpec.scala](/docs/src/test/scala/docs/actor/InitializationDocSpec.scala) { #preStartInit }
 Java
-: @@snip [InitializationDocTest.java](/akka-docs/src/test/java/jdocs/actor/InitializationDocTest.java) { #preStartInit }
+: @@snip [InitializationDocTest.java](/docs/src/test/java/jdocs/actor/InitializationDocTest.java) { #preStartInit }
 Please note, that the child actors are *still restarted*, but no new @apidoc[actor.ActorRef] is created. One can recursively apply
@@ -1243,10 +1243,10 @@ and use @apidoc[become()](actor.ActorContext) {scala="#become(behavior:org.apach
 of the actor.
 Scala
-: @@snip [InitializationDocSpec.scala](/akka-docs/src/test/scala/docs/actor/InitializationDocSpec.scala) { #messageInit }
+: @@snip [InitializationDocSpec.scala](/docs/src/test/scala/docs/actor/InitializationDocSpec.scala) { #messageInit }
 Java
-: @@snip [InitializationDocTest.java](/akka-docs/src/test/java/jdocs/actor/InitializationDocTest.java) { #messageInit }
+: @@snip [InitializationDocTest.java](/docs/src/test/java/jdocs/actor/InitializationDocTest.java) { #messageInit }
 If the actor may receive messages before it has been initialized, a useful tool can be the `Stash` to save messages
 until the initialization finishes, and replaying them after the actor became initialized.

View file

@@ -10,10 +10,10 @@ For the full documentation of this feature and for new projects see @ref:[Multi-
 You can retrieve information about what data center a member belongs to:
 Scala
-: @@snip [ClusterDocSpec.scala](/akka-docs/src/test/scala/docs/cluster/ClusterDocSpec.scala) { #dcAccess }
+: @@snip [ClusterDocSpec.scala](/docs/src/test/scala/docs/cluster/ClusterDocSpec.scala) { #dcAccess }
 Java
-: @@snip [ClusterDocTest.java](/akka-docs/src/test/java/jdocs/cluster/ClusterDocTest.java) { #dcAccess }
+: @@snip [ClusterDocTest.java](/docs/src/test/java/jdocs/cluster/ClusterDocTest.java) { #dcAccess }
 For the full documentation of this feature and for new projects see @ref:[Multi-DC Cluster](typed/cluster-dc.md#membership).
@@ -40,6 +40,6 @@ Scala
 : @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #proxy-dc }
 Java
-: @@snip [ClusterShardingTest.java](/akka-docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #proxy-dc }
+: @@snip [ClusterShardingTest.java](/docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #proxy-dc }
 For the full documentation of this feature and for new projects see @ref:[Multi-DC Cluster](typed/cluster-dc.md#cluster-sharding).

View file

@@ -135,18 +135,18 @@ Let's take a look at this router in action. What can be more demanding than calc
 The backend worker that performs the factorial calculation:
 Scala
-: @@snip [FactorialBackend.scala](/akka-docs/src/test/scala/docs/cluster/FactorialBackend.scala) { #backend }
+: @@snip [FactorialBackend.scala](/docs/src/test/scala/docs/cluster/FactorialBackend.scala) { #backend }
 Java
-: @@snip [FactorialBackend.java](/akka-docs/src/test/java/jdocs/cluster/FactorialBackend.java) { #backend }
+: @@snip [FactorialBackend.java](/docs/src/test/java/jdocs/cluster/FactorialBackend.java) { #backend }
 The frontend that receives user jobs and delegates to the backends via the router:
 Scala
-: @@snip [FactorialFrontend.scala](/akka-docs/src/test/scala/docs/cluster/FactorialFrontend.scala) { #frontend }
+: @@snip [FactorialFrontend.scala](/docs/src/test/scala/docs/cluster/FactorialFrontend.scala) { #frontend }
 Java
-: @@snip [FactorialFrontend.java](/akka-docs/src/test/java/jdocs/cluster/FactorialFrontend.java) { #frontend }
+: @@snip [FactorialFrontend.java](/docs/src/test/java/jdocs/cluster/FactorialFrontend.java) { #frontend }
 As you can see, the router is defined in the same way as other routers, and in this case it is configured as follows:
@@ -177,20 +177,20 @@ other things work in the same way as other routers.
 The same type of router could also have been defined in code:
 Scala
-: @@snip [FactorialFrontend.scala](/akka-docs/src/test/scala/docs/cluster/FactorialFrontend.scala) { #router-lookup-in-code #router-deploy-in-code }
+: @@snip [FactorialFrontend.scala](/docs/src/test/scala/docs/cluster/FactorialFrontend.scala) { #router-lookup-in-code #router-deploy-in-code }
 Java
-: @@snip [FactorialFrontend.java](/akka-docs/src/test/java/jdocs/cluster/FactorialFrontend.java) { #router-lookup-in-code #router-deploy-in-code }
+: @@snip [FactorialFrontend.java](/docs/src/test/java/jdocs/cluster/FactorialFrontend.java) { #router-lookup-in-code #router-deploy-in-code }
 ## Subscribe to Metrics Events
 It is possible to subscribe to the metrics events directly to implement other functionality.
 Scala
-: @@snip [MetricsListener.scala](/akka-docs/src/test/scala/docs/cluster/MetricsListener.scala) { #metrics-listener }
+: @@snip [MetricsListener.scala](/docs/src/test/scala/docs/cluster/MetricsListener.scala) { #metrics-listener }
 Java
-: @@snip [MetricsListener.java](/akka-docs/src/test/java/jdocs/cluster/MetricsListener.java) { #metrics-listener }
+: @@snip [MetricsListener.java](/docs/src/test/java/jdocs/cluster/MetricsListener.java) { #metrics-listener }
 ## Custom Metrics Collector

View file

@@ -81,7 +81,7 @@ Scala
 : @@snip [StatsService.scala](/akka-cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsService.scala) { #router-lookup-in-code }
 Java
-: @@snip [StatsService.java](/akka-docs/src/test/java/jdocs/cluster/StatsService.java) { #router-lookup-in-code }
+: @@snip [StatsService.java](/docs/src/test/java/jdocs/cluster/StatsService.java) { #router-lookup-in-code }
 See @ref:[reference configuration](general/configuration-reference.md#config-akka-cluster) for further descriptions of the settings.
@@ -102,7 +102,7 @@ Scala
 : @@snip [StatsMessages.scala](/akka-cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsMessages.scala) { #messages }
 Java
-: @@snip [StatsMessages.java](/akka-docs/src/test/java/jdocs/cluster/StatsMessages.java) { #messages }
+: @@snip [StatsMessages.java](/docs/src/test/java/jdocs/cluster/StatsMessages.java) { #messages }
 The worker that counts number of characters in each word:
@@ -110,7 +110,7 @@ Scala
 : @@snip [StatsWorker.scala](/akka-cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsWorker.scala) { #worker }
 Java
-: @@snip [StatsWorker.java](/akka-docs/src/test/java/jdocs/cluster/StatsWorker.java) { #worker }
+: @@snip [StatsWorker.java](/docs/src/test/java/jdocs/cluster/StatsWorker.java) { #worker }
 The service that receives text from users and splits it up into words, delegates to workers and aggregates:
@@ -122,8 +122,8 @@ The service that receives text from users and splits it up into words, delegates
 @@@ div { .group-java }
-@@snip [StatsService.java](/akka-docs/src/test/java/jdocs/cluster/StatsService.java) { #service }
+@@snip [StatsService.java](/docs/src/test/java/jdocs/cluster/StatsService.java) { #service }
-@@snip [StatsAggregator.java](/akka-docs/src/test/java/jdocs/cluster/StatsAggregator.java) { #aggregator }
+@@snip [StatsAggregator.java](/docs/src/test/java/jdocs/cluster/StatsAggregator.java) { #aggregator }
 @@@
@@ -182,7 +182,7 @@ Scala
 : @@snip [StatsService.scala](/akka-cluster-metrics/src/multi-jvm/scala/org/apache/pekko/cluster/metrics/sample/StatsService.scala) { #router-deploy-in-code }
 Java
-: @@snip [StatsService.java](/akka-docs/src/test/java/jdocs/cluster/StatsService.java) { #router-deploy-in-code }
+: @@snip [StatsService.java](/docs/src/test/java/jdocs/cluster/StatsService.java) { #router-deploy-in-code }
 See @ref:[reference configuration](general/configuration-reference.md#config-akka-cluster) for further descriptions of the settings.
@@ -208,7 +208,7 @@ Scala
 @@@
 Java
-: @@snip [StatsSampleOneMasterMain.java](/akka-docs/src/test/java/jdocs/cluster/StatsSampleOneMasterMain.java) { #create-singleton-manager }
+: @@snip [StatsSampleOneMasterMain.java](/docs/src/test/java/jdocs/cluster/StatsSampleOneMasterMain.java) { #create-singleton-manager }
 We also need an actor on each node that keeps track of where current single master exists and
 delegates jobs to the `StatsService`. That is provided by the `ClusterSingletonProxy`:
@@ -225,7 +225,7 @@ Scala
 @@@
 Java
-: @@snip [StatsSampleOneMasterMain.java](/akka-docs/src/test/java/jdocs/cluster/StatsSampleOneMasterMain.java) { #singleton-proxy }
+: @@snip [StatsSampleOneMasterMain.java](/docs/src/test/java/jdocs/cluster/StatsSampleOneMasterMain.java) { #singleton-proxy }
 The `ClusterSingletonProxy` receives text from users and delegates to the current `StatsService`, the single
 master. It listens to cluster events to lookup the `StatsService` on the oldest node.

View file

@@ -30,7 +30,7 @@ Scala
 : @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #counter-actor }
 Java
-: @@snip [ClusterShardingTest.java](/akka-docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-actor }
+: @@snip [ClusterShardingTest.java](/docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-actor }
 The above actor uses Event Sourcing and the support provided in @scala[`PersistentActor`] @java[`AbstractPersistentActor`] to store its state.
 It does not have to be a persistent actor, but in case of failure or migration of entities between nodes it must be able to recover
@@ -50,7 +50,7 @@ Scala
 : @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #counter-start }
 Java
-: @@snip [ClusterShardingTest.java](/akka-docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-start }
+: @@snip [ClusterShardingTest.java](/docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-start }
 The @scala[`extractEntityId` and `extractShardId` are two] @java[`messageExtractor` defines] application specific @scala[functions] @java[methods] to extract the entity
 identifier and the shard identifier from incoming messages.
@@ -59,7 +59,7 @@ Scala
 : @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #counter-extractor }
 Java
-: @@snip [ClusterShardingTest.java](/akka-docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-extractor }
+: @@snip [ClusterShardingTest.java](/docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-extractor }
 This example illustrates two different ways to define the entity identifier in the messages:
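The extractor split described in the context above — one function pulls the entity identifier out of a message, another derives the shard identifier from it — is commonly implemented as a stable hash of the entity id modulo the number of shards. A rough standalone sketch (the `Get` message and method names are made up for illustration, not the sharding API):

```java
// Sketch of extractEntityId/extractShardId: the shard id must be a pure
// function of the message, so the same entity always lands on the same shard.
public class ShardExtractor {
    record Get(long counterId) {} // hypothetical message carrying the entity id

    static String extractEntityId(Get msg) {
        return String.valueOf(msg.counterId());
    }

    static String extractShardId(Get msg, int numberOfShards) {
        // stable hash of the entity id, bucketed into numberOfShards shards
        String entityId = extractEntityId(msg);
        return String.valueOf(Math.abs(entityId.hashCode()) % numberOfShards);
    }
}
```

Determinism is the important property: repeated extraction for the same message must always yield the same shard, otherwise entities would migrate unpredictably.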
@@ -97,7 +97,7 @@ Scala
: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #counter-usage }

Java
: @@snip [ClusterShardingTest.java](/docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-usage }

## How it works
@@ -175,7 +175,7 @@ Scala
: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #extractShardId-StartEntity }

Java
: @@snip [ClusterShardingTest.java](/docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #extractShardId-StartEntity }

## Supervision
@@ -187,7 +187,7 @@ Scala
: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #supervisor }

Java
: @@snip [ClusterShardingTest.java](/docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #supervisor }

You start such a supervisor in the same way as if it was the entity actor.

@@ -195,7 +195,7 @@ Scala
: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/org/apache/pekko/cluster/sharding/ClusterShardingSpec.scala) { #counter-supervisor-start }

Java
: @@snip [ClusterShardingTest.java](/docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-supervisor-start }

Note that stopped entities will be started again when a new message is targeted to the entity.
@@ -113,19 +113,19 @@ To accomplish this add a parent supervisor actor which will be used to create th
Below is an example implementation (credit to [this StackOverflow answer](https://stackoverflow.com/questions/36701898/how-to-supervise-cluster-singleton-in-akka/36716708#36716708))

Scala
: @@snip [ClusterSingletonSupervision.scala](/docs/src/test/scala/docs/cluster/singleton/ClusterSingletonSupervision.scala) { #singleton-supervisor-actor }

Java
: @@snip [SupervisorActor.java](/docs/src/test/java/jdocs/cluster/singleton/SupervisorActor.java) { #singleton-supervisor-actor }

And used here

Scala
: @@snip [ClusterSingletonSupervision.scala](/docs/src/test/scala/docs/cluster/singleton/ClusterSingletonSupervision.scala) { #singleton-supervisor-actor-usage }

Java
: @@snip [ClusterSingletonSupervision.java](/docs/src/test/java/jdocs/cluster/singleton/ClusterSingletonSupervision.java) { #singleton-supervisor-actor-usage-imports }
@@snip [ClusterSingletonSupervision.java](/docs/src/test/java/jdocs/cluster/singleton/ClusterSingletonSupervision.java) { #singleton-supervisor-actor-usage }

## Lease
@@ -46,10 +46,10 @@ It joins the cluster and an actor subscribes to cluster membership events and lo
An actor that uses the cluster extension may look like this:

Scala
: @@snip [SimpleClusterListener.scala](/docs/src/test/scala/docs/cluster/SimpleClusterListener.scala) { type=scala }

Java
: @@snip [SimpleClusterListener.java](/docs/src/test/java/jdocs/cluster/SimpleClusterListener.java) { type=java }

And the minimum configuration required is to set a host/port for remoting and the `akka.actor.provider = "cluster"`.
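Spelled out as configuration, that minimum looks roughly like the following — the hostname and port are placeholder values, and the exact remoting keys should be checked against the reference configuration of your version:

```hocon
akka {
  actor.provider = "cluster"
  remote.artery.canonical {
    hostname = "127.0.0.1" # externally reachable address of this node
    port = 25520           # 0 picks a random free port
  }
}
```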
@@ -80,10 +80,10 @@ You may also join programmatically, which is attractive when dynamically discove
at startup by using some external tool or API.

Scala
: @@snip [ClusterDocSpec.scala](/docs/src/test/scala/docs/cluster/ClusterDocSpec.scala) { #join-seed-nodes }

Java
: @@snip [ClusterDocTest.java](/docs/src/test/java/jdocs/cluster/ClusterDocTest.java) { #join-seed-nodes-imports #join-seed-nodes }

For more information see @ref[tuning joins](typed/cluster.md#tuning-joins)
@@ -91,10 +91,10 @@ It's also possible to specifically join a single node as illustrated in below ex
preferred since it has redundancy and retry mechanisms built-in.

Scala
: @@snip [SimpleClusterListener2.scala](/docs/src/test/scala/docs/cluster/SimpleClusterListener2.scala) { #join }

Java
: @@snip [SimpleClusterListener2.java](/docs/src/test/java/jdocs/cluster/SimpleClusterListener2.java) { #join }

## Leaving
@@ -111,10 +111,10 @@ You can subscribe to change notifications of the cluster membership by using
@scala[@scaladoc[Cluster(system).subscribe](pekko.cluster.Cluster#subscribe(subscriber:org.apache.pekko.actor.ActorRef,to:Class[_]*):Unit)]@java[@javadoc[Cluster.get(system).subscribe](pekko.cluster.Cluster#subscribe(org.apache.pekko.actor.ActorRef,org.apache.pekko.cluster.ClusterEvent.SubscriptionInitialStateMode,java.lang.Class...))].

Scala
: @@snip [SimpleClusterListener2.scala](/docs/src/test/scala/docs/cluster/SimpleClusterListener2.scala) { #subscribe }

Java
: @@snip [SimpleClusterListener2.java](/docs/src/test/java/jdocs/cluster/SimpleClusterListener2.java) { #subscribe }

A snapshot of the full state, @apidoc[CurrentClusterState](ClusterEvent.CurrentClusterState), is sent to the subscriber
as the first message, followed by events for incremental updates.
@@ -127,19 +127,19 @@ This is expected behavior. When the node has been accepted in the cluster you wi
receive `MemberUp` for that node, and other nodes.

Scala
: @@snip [SimpleClusterListener2.scala](/docs/src/test/scala/docs/cluster/SimpleClusterListener2.scala) { #join #subscribe }

Java
: @@snip [SimpleClusterListener2.java](/docs/src/test/java/jdocs/cluster/SimpleClusterListener2.java) { #join #subscribe }
To avoid receiving an empty `CurrentClusterState` at the beginning, you can use it as shown in the following example,
to defer subscription until the `MemberUp` event for the own node is received:

Scala
: @@snip [SimpleClusterListener2.scala](/docs/src/test/scala/docs/cluster/SimpleClusterListener2.scala) { #join #register-on-memberup }

Java
: @@snip [SimpleClusterListener2.java](/docs/src/test/java/jdocs/cluster/SimpleClusterListener2.java) { #join #register-on-memberup }
If you find it inconvenient to handle the `CurrentClusterState` you can use

@@ -150,10 +150,10 @@ listening to the events when they occurred in the past. Note that those initial
to the current state and it is not the full history of all changes that have actually occurred in the cluster.

Scala
: @@snip [SimpleClusterListener.scala](/docs/src/test/scala/docs/cluster/SimpleClusterListener.scala) { #subscribe }

Java
: @@snip [SimpleClusterListener.java](/docs/src/test/java/jdocs/cluster/SimpleClusterListener.java) { #subscribe }

### Worker Dial-in Example
@@ -169,18 +169,18 @@ added or removed to the cluster dynamically.
Messages:

Scala
: @@snip [TransformationMessages.scala](/docs/src/test/scala/docs/cluster/TransformationMessages.scala) { #messages }

Java
: @@snip [TransformationMessages.java](/docs/src/test/java/jdocs/cluster/TransformationMessages.java) { #messages }

The backend worker that performs the transformation job:

Scala
: @@snip [TransformationBackend.scala](/docs/src/test/scala/docs/cluster/TransformationBackend.scala) { #backend }

Java
: @@snip [TransformationBackend.java](/docs/src/test/java/jdocs/cluster/TransformationBackend.java) { #backend }
Note that the `TransformationBackend` actor subscribes to cluster events to detect new,
potential, frontend nodes, and sends them a registration message so that they know
@@ -189,10 +189,10 @@ that they can use the backend worker.
The frontend that receives user jobs and delegates to one of the registered backend workers:

Scala
: @@snip [TransformationFrontend.scala](/docs/src/test/scala/docs/cluster/TransformationFrontend.scala) { #frontend }

Java
: @@snip [TransformationFrontend.java](/docs/src/test/java/jdocs/cluster/TransformationFrontend.java) { #frontend }
Note that the `TransformationFrontend` actor watches the registered backend workers
to be able to remove them from its list of available backend workers.
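Stripped of the actor machinery, the frontend's bookkeeping amounts to the following plain-Java sketch — the method names and `String` stand-ins for actor references are illustrative, not the example's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the frontend's bookkeeping: keep a list of registered backend
// workers, hand out jobs round-robin, and drop a worker when it terminates.
class FrontendRegistry {
    private final List<String> backends = new ArrayList<>(); // stand-ins for ActorRefs
    private int jobCounter = 0;

    void register(String backend) {     // on a registration message from a backend
        if (!backends.contains(backend)) backends.add(backend);
    }

    void onTerminated(String backend) { // on Terminated(ref), delivered via watch
        backends.remove(backend);
    }

    String delegate() {                 // on an incoming user job
        if (backends.isEmpty()) throw new IllegalStateException("service unavailable");
        return backends.get(jobCounter++ % backends.size());
    }
}
```

Because terminated workers are removed eagerly, a job is never delegated to a backend the frontend already knows is gone.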
@@ -237,10 +237,10 @@ be invoked when the current member status is changed to 'Up'. This can additiona
`akka.cluster.min-nr-of-members` optional configuration to defer an action until the cluster has reached a certain size.

Scala
: @@snip [FactorialFrontend.scala](/docs/src/test/scala/docs/cluster/FactorialFrontend.scala) { #registerOnUp }

Java
: @@snip [FactorialFrontendMain.java](/docs/src/test/java/jdocs/cluster/FactorialFrontendMain.java) { #registerOnUp }
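For reference, the setting mentioned above is a plain configuration entry of this shape (the value is just an example):

```hocon
akka.cluster.min-nr-of-members = 3
```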
## How To Cleanup when Member is Removed
@@ -73,10 +73,10 @@ Here's how a @apidoc[CircuitBreaker] would be configured for:
Scala
: @@snip [CircuitBreakerDocSpec.scala](/docs/src/test/scala/docs/circuitbreaker/CircuitBreakerDocSpec.scala) { #imports1 #circuit-breaker-initialization }

Java
: @@snip [DangerousJavaActor.java](/docs/src/test/java/jdocs/circuitbreaker/DangerousJavaActor.java) { #imports1 #circuit-breaker-initialization }

### Future & Synchronous based API
@@ -85,10 +85,10 @@ Once a circuit breaker actor has been initialized, interacting with that actor i
The Synchronous API would also wrap your call with the circuit breaker logic, however, it uses the @scala[@scaladoc[withSyncCircuitBreaker](pekko.pattern.CircuitBreaker#withSyncCircuitBreaker[T](body:=%3ET):T)]@java[@javadoc[callWithSyncCircuitBreaker](pekko.pattern.CircuitBreaker#callWithSyncCircuitBreaker(java.util.concurrent.Callable))] and receives a method that is not wrapped in a @scala[@scaladoc[Future](scala.concurrent.Future)]@java[@javadoc[CompletionStage](java.util.concurrent.CompletionStage)].

Scala
: @@snip [CircuitBreakerDocSpec.scala](/docs/src/test/scala/docs/circuitbreaker/CircuitBreakerDocSpec.scala) { #circuit-breaker-usage }

Java
: @@snip [DangerousJavaActor.java](/docs/src/test/java/jdocs/circuitbreaker/DangerousJavaActor.java) { #circuit-breaker-usage }
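Conceptually the breaker is a small state machine. The following stripped-down sketch — not the Akka/Pekko implementation, just the Closed/Open/Half-Open transitions it is built around — shows the mechanics behind the `maxFailures`, `callTimeout`-style protection and `resetTimeout` parameters (call timeouts are omitted here for brevity):

```java
import java.util.concurrent.Callable;

// Toy circuit breaker: Closed until maxFailures consecutive failures, then Open
// (calls fail fast) until resetTimeout elapses, then Half-Open (one trial call).
class TinyBreaker {
    enum State { CLOSED, OPEN, HALF_OPEN }

    private final int maxFailures;
    private final long resetTimeoutMillis;
    private int failures = 0;
    private long openedAt = 0;
    private State state = State.CLOSED;

    TinyBreaker(int maxFailures, long resetTimeoutMillis) {
        this.maxFailures = maxFailures;
        this.resetTimeoutMillis = resetTimeoutMillis;
    }

    <T> T call(Callable<T> body) throws Exception {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt >= resetTimeoutMillis) {
                state = State.HALF_OPEN;      // allow a single trial call
            } else {
                throw new IllegalStateException("circuit breaker is open");
            }
        }
        try {
            T result = body.call();
            failures = 0;
            state = State.CLOSED;             // success closes the breaker
            return result;
        } catch (Exception e) {
            failures++;
            if (state == State.HALF_OPEN || failures >= maxFailures) {
                state = State.OPEN;           // trip (or re-trip) the breaker
                openedAt = System.currentTimeMillis();
            }
            throw e;
        }
    }

    State state() { return state; }
}
```

The real breaker adds call timeouts, listener callbacks, and thread-safe state handling on top of this shape.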
@@@ note

@@ -115,10 +115,10 @@ Type of `defineFailureFn`: @scala[@scaladoc[Try[T]](scala.util.Try) => @scaladoc
@java[The response of a protected call is modelled using @javadoc[Optional[T]](java.util.Optional) for a successful return value and @javadoc[Optional](java.util.Optional)[@javadoc[Throwable](java.lang.Throwable)] for exceptions.] This function should return `true` if the call should increase failure count, else false.

Scala
: @@snip [CircuitBreakerDocSpec.scala](/docs/src/test/scala/docs/circuitbreaker/CircuitBreakerDocSpec.scala) { #even-no-as-failure }

Java
: @@snip [EvenNoFailureJavaExample.java](/docs/src/test/java/jdocs/circuitbreaker/EvenNoFailureJavaExample.java) { #even-no-as-failure }

### Low level API
@@ -138,7 +138,7 @@ The below example doesn't make a remote call when the state is *HalfOpen*. Using
@@@

Scala
: @@snip [CircuitBreakerDocSpec.scala](/docs/src/test/scala/docs/circuitbreaker/CircuitBreakerDocSpec.scala) { #circuit-breaker-tell-pattern }

Java
: @@snip [TellPatternJavaActor.java](/docs/src/test/java/jdocs/circuitbreaker/TellPatternJavaActor.java) { #circuit-breaker-tell-pattern }
@@ -30,10 +30,10 @@ The phases are ordered with [topological](https://en.wikipedia.org/wiki/Topologi
Tasks can be added to a phase like in this example which allows a certain actor to react before termination starts:

Scala
: @@snip [snip](/docs/src/test/scala/docs/actor/typed/CoordinatedActorShutdownSpec.scala) { #coordinated-shutdown-addTask }

Java
: @@snip [snip](/docs/src/test/java/jdocs/actor/typed/CoordinatedActorShutdownTest.java) { #coordinated-shutdown-addTask }

The returned @scala[`Future[Done]`] @java[`CompletionStage<Done>`] should be completed when the task is completed. The task name parameter
is only used for debugging/logging.
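The topological ordering of phases mentioned at the top of this section is an ordinary dependency sort over each phase's depends-on set. A Kahn's-algorithm sketch — the phase names below follow the documented defaults but should be treated as illustrative; the authoritative list lives in the reference configuration:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// A phase becomes ready once every phase it depends on has run.
class PhaseOrdering {
    static List<String> order(Map<String, Set<String>> dependsOn) {
        Map<String, Integer> remaining = new HashMap<>();      // unmet dependencies
        Map<String, List<String>> dependents = new HashMap<>(); // reverse edges
        for (Map.Entry<String, Set<String>> e : dependsOn.entrySet()) {
            remaining.put(e.getKey(), e.getValue().size());
            for (String dep : e.getValue())
                dependents.computeIfAbsent(dep, k -> new ArrayList<>()).add(e.getKey());
        }
        Deque<String> ready = new ArrayDeque<>();
        for (Map.Entry<String, Integer> e : remaining.entrySet())
            if (e.getValue() == 0) ready.add(e.getKey());
        List<String> result = new ArrayList<>();
        while (!ready.isEmpty()) {
            String phase = ready.poll();
            result.add(phase);
            for (String dependent : dependents.getOrDefault(phase, List.of()))
                if (remaining.merge(dependent, -1, Integer::sum) == 0) ready.add(dependent);
        }
        if (result.size() != dependsOn.size())
            throw new IllegalStateException("cycle among phases");
        return result;
    }
}
```

Because ordering is by dependencies rather than a fixed list, custom phases can be spliced in anywhere by declaring what they depend on.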
@@ -48,20 +48,20 @@ to abort the rest of the shutdown process if a task fails or is not completed wi
If cancellation of previously added tasks is required:

Scala
: @@snip [snip](/docs/src/test/scala/docs/actor/typed/CoordinatedActorShutdownSpec.scala) { #coordinated-shutdown-cancellable }

Java
: @@snip [snip](/docs/src/test/java/jdocs/actor/typed/CoordinatedActorShutdownTest.java) { #coordinated-shutdown-cancellable }

In the above example, it may be more convenient to simply stop the actor when it's done shutting down, rather than send back a done message,
and for the shutdown task to not complete until the actor is terminated. A convenience method is provided that adds a task that sends
a message to the actor and then watches its termination (there is currently no corresponding functionality for the new actors API @github[see #29056](#29056)):

Scala
: @@snip [ActorDocSpec.scala](/docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #coordinated-shutdown-addActorTerminationTask }

Java
: @@snip [ActorDocTest.java](/docs/src/test/java/jdocs/actor/ActorDocTest.java) { #coordinated-shutdown-addActorTerminationTask }

Tasks should typically be registered as early as possible after system startup. When running
the coordinated shutdown tasks that have been registered will be performed but tasks that are
@@ -71,10 +71,10 @@ To start the coordinated shutdown process you can either invoke `terminate()` on
extension and pass it a class implementing @apidoc[CoordinatedShutdown.Reason] for informational purposes:

Scala
: @@snip [snip](/docs/src/test/scala/docs/actor/typed/CoordinatedActorShutdownSpec.scala) { #coordinated-shutdown-run }

Java
: @@snip [snip](/docs/src/test/java/jdocs/actor/typed/CoordinatedActorShutdownTest.java) { #coordinated-shutdown-run }

It's safe to call the @scala[`run`] @java[`runAll`] method multiple times. It will only run once.
@@ -106,10 +106,10 @@ If you have application specific JVM shutdown hooks it's recommended that you re
those shutting down Akka Remoting (Artery).

Scala
: @@snip [snip](/docs/src/test/scala/docs/actor/typed/CoordinatedActorShutdownSpec.scala) { #coordinated-shutdown-jvm-hook }

Java
: @@snip [snip](/docs/src/test/java/jdocs/actor/typed/CoordinatedActorShutdownTest.java) { #coordinated-shutdown-jvm-hook }

For some tests it might be undesired to terminate the `ActorSystem` via `CoordinatedShutdown`.
You can disable that by adding the following to the configuration of the `ActorSystem` that is
@@ -38,10 +38,10 @@ Any lease implementation should provide the following guarantees:
To acquire a lease:

Scala
: @@snip [LeaseDocSpec.scala](/docs/src/test/scala/docs/coordination/LeaseDocSpec.scala) { #lease-usage }

Java
: @@snip [LeaseDocTest.java](/docs/src/test/java/jdocs/coordination/LeaseDocTest.java) { #lease-usage }

Acquiring a lease returns a @scala[Future]@java[CompletionStage] as lease implementations typically are implemented
via a third party system such as the Kubernetes API server or Zookeeper.
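To make the acquire-returns-a-future shape concrete, here is an in-memory toy — a real implementation would talk to an external coordinator, which is why the API is asynchronous; the class and method names are illustrative, not the library's:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Toy lease: at most one owner per lease name. Real implementations back this
// with an external store (Kubernetes API server, ZooKeeper, ...), hence the
// CompletableFuture-based signatures.
class InMemoryLease {
    private final ConcurrentMap<String, String> owners = new ConcurrentHashMap<>();

    CompletableFuture<Boolean> acquire(String leaseName, String ownerName) {
        // putIfAbsent is atomic: only the first caller for a name wins.
        String prev = owners.putIfAbsent(leaseName, ownerName);
        return CompletableFuture.completedFuture(prev == null || prev.equals(ownerName));
    }

    CompletableFuture<Boolean> release(String leaseName, String ownerName) {
        // remove(key, value) only releases when the caller is the current owner.
        return CompletableFuture.completedFuture(owners.remove(leaseName, ownerName));
    }
}
```

Note that acquiring is a no-op for the current owner, and releasing someone else's lease fails — two of the guarantees a real lease must also uphold.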
@@ -56,10 +56,10 @@ It is important to pick a lease name that will be unique for your use case. If a
in a Cluster the cluster host port can be used:

Scala
: @@snip [LeaseDocSpec.scala](/docs/src/test/scala/docs/coordination/LeaseDocSpec.scala) { #cluster-owner }

Java
: @@snip [LeaseDocTest.java](/docs/src/test/java/jdocs/coordination/LeaseDocTest.java) { #cluster-owner }

For use cases where multiple different leases are used on the same node, something unique must be added to the name. For example
a lease can be used with Cluster Sharding and in this case the shard Id is included in the lease name for each shard.
@@ -88,10 +88,10 @@ Implementations should extend
the @scala[`org.apache.pekko.coordination.lease.scaladsl.Lease`]@java[`org.apache.pekko.coordination.lease.javadsl.Lease`]

Scala
: @@snip [LeaseDocSpec.scala](/docs/src/test/scala/docs/coordination/LeaseDocSpec.scala) { #lease-example }

Java
: @@snip [LeaseDocTest.java](/docs/src/test/java/jdocs/coordination/LeaseDocTest.java) { #lease-example }

The methods should provide the following guarantees:

@@ -112,7 +112,7 @@ The lease implementation should have support for the following properties where
This configuration location is passed into `getLease`.

Scala
: @@snip [LeaseDocSpec.scala](/docs/src/test/scala/docs/coordination/LeaseDocSpec.scala) { #lease-config }

Java
: @@snip [LeaseDocSpec.scala](/docs/src/test/scala/docs/coordination/LeaseDocSpec.scala) { #lease-config }
@@ -95,15 +95,15 @@ The mapping between Akka service discovery terminology and SRV terminology:
Configure `akka-dns` to be used as the discovery implementation in your `application.conf`:

@@snip[application.conf](/docs/src/test/scala/docs/discovery/DnsDiscoveryDocSpec.scala){ #configure-dns }
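The referenced snippet boils down to selecting the DNS implementation by name, roughly:

```hocon
akka.discovery.method = akka-dns
```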
From there on, you can use the generic API that hides which discovery method is being used by calling:

Scala
: @@snip[snip](/docs/src/test/scala/docs/discovery/DnsDiscoveryDocSpec.scala){ #lookup-dns }

Java
: @@snip[snip](/docs/src/test/java/jdocs/discovery/DnsDiscoveryDocTest.java){ #lookup-dns }

### DNS records used
@@ -22,10 +22,10 @@ Dispatchers are part of core Akka, which means that they are part of the akka-ac
Dispatchers implement the @scala[@scaladoc[ExecutionContext](scala.concurrent.ExecutionContext)]@java[@javadoc[Executor](java.util.concurrent.Executor)] interface and can thus be used to run @scala[@scaladoc[Future](scala.concurrent.Future)]@java[@javadoc[CompletableFuture](java.util.concurrent.CompletableFuture)] invocations etc.

Scala
: @@snip [DispatcherDocSpec.scala](/docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #lookup }

Java
: @@snip [DispatcherDocTest.java](/docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #lookup }
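Because a dispatcher is an `Executor`, any `Executor`-accepting API can run work on it. In this self-contained sketch a plain JDK pool stands in for the looked-up dispatcher (in an actor system it would come from the dispatchers lookup shown in the snippet above):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;

// Any Executor works here — including a dispatcher obtained from the actor
// system. The JDK pool in the test below is just a self-contained stand-in.
class DispatcherAsExecutor {
    static int runOn(Executor dispatcher) {
        // supplyAsync schedules the lambda on the given Executor's threads.
        return CompletableFuture.supplyAsync(() -> 21 * 2, dispatcher).join();
    }
}
```

This is the whole point of the interface: futures, streams, and third-party libraries can all share the actor system's configured thread pools.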
## Setting the dispatcher for an Actor

@@ -33,7 +33,7 @@ So in case you want to give your @apidoc[actor.Actor] a different dispatcher tha
is to configure the dispatcher:

<!--same config text for Scala & Java-->
@@snip [DispatcherDocSpec.scala](/docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #my-dispatcher-config }
@@@ note

@@ -47,7 +47,7 @@ You can read more about parallelism in the JDK's [ForkJoinPool documentation](ht
Another example that uses the "thread-pool-executor":

<!--same config text for Scala & Java-->
@@snip [DispatcherDocSpec.scala](/docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #fixed-pool-size-dispatcher-config }
@@@ note @@@ note
@@ -61,23 +61,23 @@ For more options, see @ref[Dispatchers](typed/dispatchers.md) and the `default-d

Then you create the actor as usual and define the dispatcher in the deployment configuration.

Scala
-: @@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #defining-dispatcher-in-config }
+: @@snip [DispatcherDocSpec.scala](/docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #defining-dispatcher-in-config }

Java
-: @@snip [DispatcherDocTest.java](/akka-docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #defining-dispatcher-in-config }
+: @@snip [DispatcherDocTest.java](/docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #defining-dispatcher-in-config }

<!--same config text for Scala & Java-->
-@@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #dispatcher-deployment-config }
+@@snip [DispatcherDocSpec.scala](/docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #dispatcher-deployment-config }

An alternative to the deployment configuration is to define the dispatcher in code.
If you define the `dispatcher` in the deployment configuration then this value will be used instead
of the programmatically provided parameter.

Scala
-: @@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #defining-dispatcher-in-code }
+: @@snip [DispatcherDocSpec.scala](/docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #defining-dispatcher-in-code }

Java
-: @@snip [DispatcherDocTest.java](/akka-docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #defining-dispatcher-in-code }
+: @@snip [DispatcherDocTest.java](/docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #defining-dispatcher-in-code }
@@@ note
@@ -43,10 +43,10 @@ adds or removes elements from a @apidoc[ORSet](cluster.ddata.ORSet) (observed-re

changes of this.

Scala
-: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #data-bot }
+: @@snip [DistributedDataDocSpec.scala](/docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #data-bot }

Java
-: @@snip [DataBot.java](/akka-docs/src/test/java/jdocs/ddata/DataBot.java) { #data-bot }
+: @@snip [DataBot.java](/docs/src/test/java/jdocs/ddata/DataBot.java) { #data-bot }

<a id="replicator-update"></a>

### Update
@@ -68,10 +68,10 @@ for example not access the sender (@scala[@scaladoc[sender()](pekko.actor.Actor#

as the `Replicator`, because the `modify` function is typically not serializable.

Scala
-: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #update }
+: @@snip [DistributedDataDocSpec.scala](/docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #update }

Java
-: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #update }
+: @@snip [DistributedDataDocTest.java](/docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #update }

As a reply to the `Update`, a @apidoc[Replicator.UpdateSuccess](cluster.ddata.Replicator.UpdateSuccess) is sent to the sender of the
`Update` if the value was successfully replicated according to the supplied
@@ -81,17 +81,17 @@ or was rolled back. It may still have been replicated to some nodes, and will ev

be replicated to all nodes with the gossip protocol.

Scala
-: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #update-response1 }
+: @@snip [DistributedDataDocSpec.scala](/docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #update-response1 }

Java
-: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #update-response1 }
+: @@snip [DistributedDataDocTest.java](/docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #update-response1 }

Scala
-: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #update-response2 }
+: @@snip [DistributedDataDocSpec.scala](/docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #update-response2 }

Java
-: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #update-response2 }
+: @@snip [DistributedDataDocTest.java](/docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #update-response2 }

You will always see your own writes. For example if you send two @apidoc[cluster.ddata.Replicator.Update] messages
changing the value of the same `key`, the `modify` function of the second message will
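The convergence guarantees behind these replies come from the merge function of each replicated data type. Purely as an illustration (this is not Akka's API), a grow-only counter in the spirit of those types can be sketched: each node increments its own slot, and merging takes the per-node maximum, so concurrent updates converge regardless of merge order.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical grow-only counter sketch: state is a per-node count,
// merge is the per-node maximum, value is the sum over all nodes.
public class GCounterSketch {
    final Map<String, Long> perNode = new HashMap<>();

    GCounterSketch increment(String node) {
        perNode.merge(node, 1L, Long::sum);
        return this;
    }

    GCounterSketch merge(GCounterSketch other) {
        GCounterSketch out = new GCounterSketch();
        out.perNode.putAll(perNode);
        // Taking the max per node makes merge commutative and idempotent.
        other.perNode.forEach((n, v) -> out.perNode.merge(n, v, Math::max));
        return out;
    }

    long value() {
        return perNode.values().stream().mapToLong(Long::longValue).sum();
    }
}
```

Merging in either direction yields the same value, which is what lets replicas gossip state freely and still agree.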
@@ -107,10 +107,10 @@ way to pass contextual information (e.g. original sender) without having to use

or maintain local correlation data structures.

Scala
-: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #update-request-context }
+: @@snip [DistributedDataDocSpec.scala](/docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #update-request-context }

Java
-: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #update-request-context }
+: @@snip [DistributedDataDocTest.java](/docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #update-request-context }

<a id="replicator-get"></a>

### Get
@@ -121,37 +121,37 @@ To retrieve the current value of a data you send @apidoc[Replicator.Get](cluster

`Replicator`. You supply a consistency level which has the following meaning:

Scala
-: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #get }
+: @@snip [DistributedDataDocSpec.scala](/docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #get }

Java
-: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #get }
+: @@snip [DistributedDataDocTest.java](/docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #get }

As a reply to the `Get`, a @apidoc[Replicator.GetSuccess](cluster.ddata.Replicator.GetSuccess) is sent to the sender of the
`Get` if the value was successfully retrieved according to the supplied @ref:[read consistency level](typed/distributed-data.md#read-consistency) within the supplied timeout. Otherwise a @apidoc[Replicator.GetFailure](cluster.ddata.Replicator.GetFailure) is sent.
If the key does not exist the reply will be @apidoc[Replicator.NotFound](cluster.ddata.Replicator.NotFound).

Scala
-: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #get-response1 }
+: @@snip [DistributedDataDocSpec.scala](/docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #get-response1 }

Java
-: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #get-response1 }
+: @@snip [DistributedDataDocTest.java](/docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #get-response1 }

Scala
-: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #get-response2 }
+: @@snip [DistributedDataDocSpec.scala](/docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #get-response2 }

Java
-: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #get-response2 }
+: @@snip [DistributedDataDocTest.java](/docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #get-response2 }

In the @apidoc[cluster.ddata.Replicator.Get] message you can pass an optional request context in the same way as for the
@apidoc[cluster.ddata.Replicator.Update] message, described above. For example the original sender can be passed and replied
to after receiving and transforming @apidoc[cluster.ddata.Replicator.GetSuccess].

Scala
-: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #get-request-context }
+: @@snip [DistributedDataDocSpec.scala](/docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #get-request-context }

Java
-: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #get-request-context }
+: @@snip [DistributedDataDocTest.java](/docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #get-request-context }

### Subscribe
@@ -168,10 +168,10 @@ The subscriber is automatically removed if the subscriber is terminated. A subsc

also be deregistered with the @apidoc[Replicator.Unsubscribe](cluster.ddata.Replicator.Unsubscribe) message.

Scala
-: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #subscribe }
+: @@snip [DistributedDataDocSpec.scala](/docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #subscribe }

Java
-: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #subscribe }
+: @@snip [DistributedDataDocTest.java](/docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #subscribe }

### Consistency
@@ -180,24 +180,24 @@ For the full documentation of this feature and for new projects see @ref:[Distri

Here is an example of using @apidoc[cluster.ddata.Replicator.WriteMajority] and @apidoc[cluster.ddata.Replicator.ReadMajority]:

Scala
-: @@snip [ShoppingCart.scala](/akka-docs/src/test/scala/docs/ddata/ShoppingCart.scala) { #read-write-majority }
+: @@snip [ShoppingCart.scala](/docs/src/test/scala/docs/ddata/ShoppingCart.scala) { #read-write-majority }

Java
-: @@snip [ShoppingCart.java](/akka-docs/src/test/java/jdocs/ddata/ShoppingCart.java) { #read-write-majority }
+: @@snip [ShoppingCart.java](/docs/src/test/java/jdocs/ddata/ShoppingCart.java) { #read-write-majority }

Scala
-: @@snip [ShoppingCart.scala](/akka-docs/src/test/scala/docs/ddata/ShoppingCart.scala) { #get-cart }
+: @@snip [ShoppingCart.scala](/docs/src/test/scala/docs/ddata/ShoppingCart.scala) { #get-cart }

Java
-: @@snip [ShoppingCart.java](/akka-docs/src/test/java/jdocs/ddata/ShoppingCart.java) { #get-cart }
+: @@snip [ShoppingCart.java](/docs/src/test/java/jdocs/ddata/ShoppingCart.java) { #get-cart }

Scala
-: @@snip [ShoppingCart.scala](/akka-docs/src/test/scala/docs/ddata/ShoppingCart.scala) { #add-item }
+: @@snip [ShoppingCart.scala](/docs/src/test/scala/docs/ddata/ShoppingCart.scala) { #add-item }

Java
-: @@snip [ShoppingCart.java](/akka-docs/src/test/java/jdocs/ddata/ShoppingCart.java) { #add-item }
+: @@snip [ShoppingCart.java](/docs/src/test/java/jdocs/ddata/ShoppingCart.java) { #add-item }

In some rare cases, when performing an @apidoc[cluster.ddata.Replicator.Update] it is necessary to first try to fetch the latest data from
other nodes. That can be done by first sending a @apidoc[cluster.ddata.Replicator.Get] with @apidoc[cluster.ddata.Replicator.ReadMajority] and then continue with
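The reason a `ReadMajority` followed by a `WriteMajority` observes the latest data is simple arithmetic: two majorities of the same cluster always overlap in at least one node. A sketch of just that calculation (ignoring any additional parameters the real consistency levels may take):

```java
public class Majority {
    // Minimum number of replicas that must acknowledge for a majority
    // read or write in a cluster of n nodes: strictly more than half.
    static int of(int n) {
        return n / 2 + 1;
    }

    public static void main(String[] args) {
        // In a 5-node cluster, 3 acks form a majority; any two majorities
        // (3 + 3 > 5) must share at least one node, so a majority read
        // sees the result of every completed majority write.
        System.out.println(Majority.of(5)); // prints 3
    }
}
```

This overlap is the entire consistency argument: no matter which nodes answer, at least one of them participated in the earlier write.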
@@ -210,10 +210,10 @@ performed (hence the name observed-removed set).

The following example illustrates how to do that:

Scala
-: @@snip [ShoppingCart.scala](/akka-docs/src/test/scala/docs/ddata/ShoppingCart.scala) { #remove-item }
+: @@snip [ShoppingCart.scala](/docs/src/test/scala/docs/ddata/ShoppingCart.scala) { #remove-item }

Java
-: @@snip [ShoppingCart.java](/akka-docs/src/test/java/jdocs/ddata/ShoppingCart.java) { #remove-item }
+: @@snip [ShoppingCart.java](/docs/src/test/java/jdocs/ddata/ShoppingCart.java) { #remove-item }

@@@ warning
@@ -231,10 +231,10 @@ happens to be n4, n5, n6, n7, i.e. the value on n1, n2, n3 is not seen in the re

For the full documentation of this feature and for new projects see @ref:[Distributed Data - Delete](typed/distributed-data.md#delete).

Scala
-: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #delete }
+: @@snip [DistributedDataDocSpec.scala](/docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #delete }

Java
-: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #delete }
+: @@snip [DistributedDataDocTest.java](/docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #delete }

@@@ warning
@@ -8,7 +8,7 @@ Scala

: @@snip [EventBus.scala](/akka-actor/src/main/scala/org/apache/pekko/event/EventBus.scala) { #event-bus-api }

Java
-: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #event-bus-api }
+: @@snip [EventBusDocTest.java](/docs/src/test/java/jdocs/event/EventBusDocTest.java) { #event-bus-api }

@@@ note
@@ -48,18 +48,18 @@ compare subscribers and how exactly to classify them.

The necessary methods to be implemented are illustrated with the following example:

Scala
-: @@snip [EventBusDocSpec.scala](/akka-docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #lookup-bus }
+: @@snip [EventBusDocSpec.scala](/docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #lookup-bus }

Java
-: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #lookup-bus }
+: @@snip [EventBusDocTest.java](/docs/src/test/java/jdocs/event/EventBusDocTest.java) { #lookup-bus }

A test for this implementation may look like this:

Scala
-: @@snip [EventBusDocSpec.scala](/akka-docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #lookup-bus-test }
+: @@snip [EventBusDocSpec.scala](/docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #lookup-bus-test }

Java
-: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #lookup-bus-test }
+: @@snip [EventBusDocTest.java](/docs/src/test/java/jdocs/event/EventBusDocTest.java) { #lookup-bus-test }

This classifier is efficient in case no subscribers exist for a particular event.
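Stripped of Akka specifics, a lookup-classified bus is just a map from classifier to subscriber list. This hypothetical sketch (a `String` topic stands in for the classifier, a `Consumer` for the subscriber) shows why publishing is cheap when no subscriber matches: it is a single map lookup.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal "lookup classification" bus sketch: classify an event to a key,
// keep subscribers per key, publish by exact-match lookup.
public class LookupBusSketch {
    private final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    void subscribe(String topic, Consumer<String> handler) {
        subscribers.computeIfAbsent(topic, t -> new ArrayList<>()).add(handler);
    }

    void publish(String topic, String event) {
        // No subscribers for this topic: one lookup, no iteration.
        subscribers.getOrDefault(topic, List.of()).forEach(h -> h.accept(event));
    }
}
```

The real abstraction adds generic event/classifier types and thread-safe bookkeeping, but the routing logic is this lookup.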
@@ -76,18 +76,18 @@ classifier hierarchy.

The necessary methods to be implemented are illustrated with the following example:

Scala
-: @@snip [EventBusDocSpec.scala](/akka-docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #subchannel-bus }
+: @@snip [EventBusDocSpec.scala](/docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #subchannel-bus }

Java
-: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #subchannel-bus }
+: @@snip [EventBusDocTest.java](/docs/src/test/java/jdocs/event/EventBusDocTest.java) { #subchannel-bus }

A test for this implementation may look like this:

Scala
-: @@snip [EventBusDocSpec.scala](/akka-docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #subchannel-bus-test }
+: @@snip [EventBusDocSpec.scala](/docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #subchannel-bus-test }

Java
-: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #subchannel-bus-test }
+: @@snip [EventBusDocTest.java](/docs/src/test/java/jdocs/event/EventBusDocTest.java) { #subchannel-bus-test }

This classifier is also efficient in case no subscribers are found for an
event, but it uses conventional locking to synchronize an internal classifier
@@ -106,18 +106,18 @@ stations by geographical reachability (for old-school radio-wave transmission).

The necessary methods to be implemented are illustrated with the following example:

Scala
-: @@snip [EventBusDocSpec.scala](/akka-docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #scanning-bus }
+: @@snip [EventBusDocSpec.scala](/docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #scanning-bus }

Java
-: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #scanning-bus }
+: @@snip [EventBusDocTest.java](/docs/src/test/java/jdocs/event/EventBusDocTest.java) { #scanning-bus }

A test for this implementation may look like this:

Scala
-: @@snip [EventBusDocSpec.scala](/akka-docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #scanning-bus-test }
+: @@snip [EventBusDocSpec.scala](/docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #scanning-bus-test }

Java
-: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #scanning-bus-test }
+: @@snip [EventBusDocTest.java](/docs/src/test/java/jdocs/event/EventBusDocTest.java) { #scanning-bus-test }

This classifier always takes time proportional to the number of
subscriptions, independent of how many actually match.
@@ -136,18 +136,18 @@ takes care of unsubscribing terminated actors automatically.

The necessary methods to be implemented are illustrated with the following example:

Scala
-: @@snip [EventBusDocSpec.scala](/akka-docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #actor-bus }
+: @@snip [EventBusDocSpec.scala](/docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #actor-bus }

Java
-: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #actor-bus }
+: @@snip [EventBusDocTest.java](/docs/src/test/java/jdocs/event/EventBusDocTest.java) { #actor-bus }

A test for this implementation may look like this:

Scala
-: @@snip [EventBusDocSpec.scala](/akka-docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #actor-bus-test }
+: @@snip [EventBusDocSpec.scala](/docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #actor-bus-test }

Java
-: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #actor-bus-test }
+: @@snip [EventBusDocTest.java](/docs/src/test/java/jdocs/event/EventBusDocTest.java) { #actor-bus-test }

This classifier is still generic in the event type, and it is efficient for
all use cases.
@@ -163,19 +163,19 @@ how a simple subscription works. Given a simple actor:

@@@ div { .group-scala }

-@@snip [LoggingDocSpec.scala](/akka-docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #deadletters }
+@@snip [LoggingDocSpec.scala](/docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #deadletters }

@@@

@@@ div { .group-java }

-@@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #imports-deadletter }
+@@snip [LoggingDocTest.java](/docs/src/test/java/jdocs/event/LoggingDocTest.java) { #imports-deadletter }
-@@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #deadletter-actor }
+@@snip [LoggingDocTest.java](/docs/src/test/java/jdocs/event/LoggingDocTest.java) { #deadletter-actor }

it can be subscribed like this:

-@@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #deadletters }
+@@snip [LoggingDocTest.java](/docs/src/test/java/jdocs/event/LoggingDocTest.java) { #deadletters }

@@@
@@ -185,10 +185,10 @@ is implemented in the event stream, it is possible to subscribe to a group of ev

subscribing to their common superclass as demonstrated in the following example:

Scala
-: @@snip [LoggingDocSpec.scala](/akka-docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #superclass-subscription-eventstream }
+: @@snip [LoggingDocSpec.scala](/docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #superclass-subscription-eventstream }

Java
-: @@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #superclass-subscription-eventstream }
+: @@snip [LoggingDocTest.java](/docs/src/test/java/jdocs/event/LoggingDocTest.java) { #superclass-subscription-eventstream }

Similarly to @ref:[Actor Classification](#actor-classification), @apidoc[event.EventStream] will automatically remove subscribers when they terminate.
@@ -250,18 +250,18 @@ However, in case you find yourself in need of debugging these kinds of low level

it's still possible to subscribe to them explicitly:

Scala
-: @@snip [LoggingDocSpec.scala](/akka-docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #suppressed-deadletters }
+: @@snip [LoggingDocSpec.scala](/docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #suppressed-deadletters }

Java
-: @@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #suppressed-deadletters }
+: @@snip [LoggingDocTest.java](/docs/src/test/java/jdocs/event/LoggingDocTest.java) { #suppressed-deadletters }

or all dead letters (including the suppressed ones):

Scala
-: @@snip [LoggingDocSpec.scala](/akka-docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #all-deadletters }
+: @@snip [LoggingDocSpec.scala](/docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #all-deadletters }

Java
-: @@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #all-deadletters }
+: @@snip [LoggingDocTest.java](/docs/src/test/java/jdocs/event/LoggingDocTest.java) { #all-deadletters }

### Other Uses
@@ -24,40 +24,40 @@ So let's create a sample extension that lets us count the number of times someth

First, we define what our @apidoc[Extension](actor.Extension) should do:

Scala
-: @@snip [ExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #extension }
+: @@snip [ExtensionDocSpec.scala](/docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #extension }

Java
-: @@snip [ExtensionDocTest.java](/akka-docs/src/test/java/jdocs/extension/ExtensionDocTest.java) { #imports #extension }
+: @@snip [ExtensionDocTest.java](/docs/src/test/java/jdocs/extension/ExtensionDocTest.java) { #imports #extension }

Then we need to create an @apidoc[ExtensionId](actor.ExtensionId) for our extension so we can grab a hold of it.

Scala
-: @@snip [ExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #extensionid }
+: @@snip [ExtensionDocSpec.scala](/docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #extensionid }

Java
-: @@snip [ExtensionDocTest.java](/akka-docs/src/test/java/jdocs/extension/ExtensionDocTest.java) { #imports #extensionid }
+: @@snip [ExtensionDocTest.java](/docs/src/test/java/jdocs/extension/ExtensionDocTest.java) { #imports #extensionid }

Wicked! Now all we need to do is to actually use it:

Scala
-: @@snip [ExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #extension-usage }
+: @@snip [ExtensionDocSpec.scala](/docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #extension-usage }

Java
-: @@snip [ExtensionDocTest.java](/akka-docs/src/test/java/jdocs/extension/ExtensionDocTest.java) { #extension-usage }
+: @@snip [ExtensionDocTest.java](/docs/src/test/java/jdocs/extension/ExtensionDocTest.java) { #extension-usage }

Or from inside of an Akka Actor:

Scala
-: @@snip [ExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #extension-usage-actor }
+: @@snip [ExtensionDocSpec.scala](/docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #extension-usage-actor }

Java
-: @@snip [ExtensionDocTest.java](/akka-docs/src/test/java/jdocs/extension/ExtensionDocTest.java) { #extension-usage-actor }
+: @@snip [ExtensionDocTest.java](/docs/src/test/java/jdocs/extension/ExtensionDocTest.java) { #extension-usage-actor }

@@@ div { .group-scala }

You can also hide the extension behind traits:

-@@snip [ExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #extension-usage-actor-trait }
+@@snip [ExtensionDocSpec.scala](/docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #extension-usage-actor-trait }

@@@
@@ -70,7 +70,7 @@ To be able to load extensions from your Akka configuration you must add FQCNs of

in the `akka.extensions` section of the config you provide to your @apidoc[ActorSystem](actor.ActorSystem).

Scala
-: @@snip [ExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #config }
+: @@snip [ExtensionDocSpec.scala](/docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #config }

Java
: @@@vars
@@ -93,23 +93,23 @@ The @ref:[configuration](general/configuration.md) can be used for application s

Sample configuration:

-@@snip [SettingsExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/SettingsExtensionDocSpec.scala) { #config }
+@@snip [SettingsExtensionDocSpec.scala](/docs/src/test/scala/docs/extension/SettingsExtensionDocSpec.scala) { #config }

The @apidoc[Extension](actor.Extension):

Scala
-: @@snip [SettingsExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/SettingsExtensionDocSpec.scala) { #imports #extension #extensionid }
+: @@snip [SettingsExtensionDocSpec.scala](/docs/src/test/scala/docs/extension/SettingsExtensionDocSpec.scala) { #imports #extension #extensionid }

Java
-: @@snip [SettingsExtensionDocTest.java](/akka-docs/src/test/java/jdocs/extension/SettingsExtensionDocTest.java) { #imports #extension #extensionid }
+: @@snip [SettingsExtensionDocTest.java](/docs/src/test/java/jdocs/extension/SettingsExtensionDocTest.java) { #imports #extension #extensionid }

Use it:

Scala
-: @@snip [SettingsExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/SettingsExtensionDocSpec.scala) { #extension-usage-actor }
+: @@snip [SettingsExtensionDocSpec.scala](/docs/src/test/scala/docs/extension/SettingsExtensionDocSpec.scala) { #extension-usage-actor }

Java
-: @@snip [SettingsExtensionDocTest.java](/akka-docs/src/test/java/jdocs/extension/SettingsExtensionDocTest.java) { #extension-usage-actor }
+: @@snip [SettingsExtensionDocTest.java](/docs/src/test/java/jdocs/extension/SettingsExtensionDocTest.java) { #extension-usage-actor }
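The settings-extension pattern described here boils down to parsing configuration once and exposing it as typed fields on a cached object. This sketch uses `java.util.Properties` as a stand-in for the real config object; the key names and defaults are illustrative, not taken from the snippet:

```java
import java.util.Properties;

// Hypothetical settings holder: parse config eagerly in the constructor
// so malformed values fail at startup, then expose typed, immutable fields.
public class SettingsSketch {
    final String dbUri;
    final int circuitBreakerTimeoutSeconds;

    SettingsSketch(Properties config) {
        this.dbUri = config.getProperty("myapp.db.uri", "jdbc:h2:mem:test");
        this.circuitBreakerTimeoutSeconds =
            Integer.parseInt(config.getProperty("myapp.circuit-breaker.timeout", "30"));
    }
}
```

In the real pattern the `ExtensionId` guarantees one such instance per actor system, so every actor reads the same parsed values without re-parsing the config.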
## Library extensions
@@ -36,7 +36,7 @@

# Full Source Code of the Fault Tolerance Sample

Scala
-: @@snip [FaultHandlingDocSample.scala](/akka-docs/src/test/scala/docs/actor/FaultHandlingDocSample.scala) { #all }
+: @@snip [FaultHandlingDocSample.scala](/docs/src/test/scala/docs/actor/FaultHandlingDocSample.scala) { #all }

Java
-: @@snip [FaultHandlingDocSample.java](/akka-docs/src/test/java/jdocs/actor/FaultHandlingDocSample.java) { #all }
+: @@snip [FaultHandlingDocSample.java](/docs/src/test/java/jdocs/actor/FaultHandlingDocSample.java) { #all }
@ -42,10 +42,10 @@ in more depth.
For the sake of demonstration let us consider the following strategy:
Scala
: @@snip [FaultHandlingDocSpec.scala](/docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #strategy }
Java
: @@snip [FaultHandlingTest.java](/docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #strategy }
We have chosen a few well-known exception types in order to demonstrate the
application of the fault handling directives described in @ref:[supervision](general/supervision.md).
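For orientation, such a strategy declaration roughly takes the following shape. This is a hedged sketch based on the directives discussed in this page, not the authoritative snippet; the exact exception-to-directive mapping lives in the linked source files:

```scala
import org.apache.pekko.actor.{ Actor, OneForOneStrategy, SupervisorStrategy }
import org.apache.pekko.actor.SupervisorStrategy._
import scala.concurrent.duration._

class Supervisor extends Actor {
  // Apply a directive per exception type; allow at most 10 restarts
  // within one minute before the child is stopped for good.
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1.minute) {
      case _: ArithmeticException      => Resume   // keep accumulated state
      case _: NullPointerException     => Restart  // start over from a clean state
      case _: IllegalArgumentException => Stop     // terminate the child
      case _: Exception                => Escalate // let our own supervisor decide
    }

  def receive = {
    case p: org.apache.pekko.actor.Props => sender() ! context.actorOf(p)
  }
}
```

The partial function maps each failure type onto one of the four directives; anything it does not match is handled by the default strategy.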
@ -100,7 +100,7 @@ in the same way as the default strategy defined above.
You can combine your own strategy with the default strategy:
@@snip [FaultHandlingDocSpec.scala](/docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #default-strategy-fallback }
@@@
@ -143,73 +143,73 @@ The following section shows the effects of the different directives in practice,
where a test setup is needed. First off, we need a suitable supervisor:
Scala
: @@snip [FaultHandlingDocSpec.scala](/docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #supervisor }
Java
: @@snip [FaultHandlingTest.java](/docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #supervisor }
This supervisor will be used to create a child, with which we can experiment:
Scala
: @@snip [FaultHandlingDocSpec.scala](/docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #child }
Java
: @@snip [FaultHandlingTest.java](/docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #child }
The test is easier by using the utilities described in @scala[@ref:[Testing Actor Systems](testing.md)]@java[@ref:[TestKit](testing.md)],
where `TestProbe` provides an actor ref useful for receiving and inspecting replies.
Scala
: @@snip [FaultHandlingDocSpec.scala](/docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #testkit }
Java
: @@snip [FaultHandlingTest.java](/docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #testkit }
Let us create actors:
Scala
: @@snip [FaultHandlingDocSpec.scala](/docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #create }
Java
: @@snip [FaultHandlingTest.java](/docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #create }
The first test shall demonstrate the `Resume` directive, so we try it out by
setting some non-initial state in the actor and have it fail:
Scala
: @@snip [FaultHandlingDocSpec.scala](/docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #resume }
Java
: @@snip [FaultHandlingTest.java](/docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #resume }
As you can see the value 42 survives the fault handling directive. Now, if we
change the failure to a more serious `NullPointerException`, that will no
longer be the case:
Scala
: @@snip [FaultHandlingDocSpec.scala](/docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #restart }
Java
: @@snip [FaultHandlingTest.java](/docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #restart }
And finally in case of the fatal `IllegalArgumentException` the child will be
terminated by the supervisor:
Scala
: @@snip [FaultHandlingDocSpec.scala](/docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #stop }
Java
: @@snip [FaultHandlingTest.java](/docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #stop }
Up to now the supervisor was completely unaffected by the child's failure,
because the directives set did handle it. In case of an `Exception`, this is not
true anymore and the supervisor escalates the failure.
Scala
: @@snip [FaultHandlingDocSpec.scala](/docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #escalate-kill }
Java
: @@snip [FaultHandlingTest.java](/docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #escalate-kill }
The supervisor itself is supervised by the top-level actor provided by the
`ActorSystem`, which has the default policy to restart in case of all
@ -222,19 +222,19 @@ In case this is not desired (which depends on the use case), we need to use a
different supervisor which overrides this behavior.
Scala
: @@snip [FaultHandlingDocSpec.scala](/docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #supervisor2 }
Java
: @@snip [FaultHandlingTest.java](/docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #supervisor2 }
With this parent, the child survives the escalated restart, as demonstrated in
the last test:
Scala
: @@snip [FaultHandlingDocSpec.scala](/docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #escalate-restart }
Java
: @@snip [FaultHandlingTest.java](/docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #escalate-restart }
## Delayed restarts for classic actors
@ -270,11 +270,11 @@ If the 'on stop' strategy is used for sharded actors a final termination message
The termination message is configured with:
@@snip [BackoffSupervisorDocSpec.scala](/docs/src/test/scala/docs/pattern/BackoffSupervisorDocSpec.scala) { #backoff-sharded }
And must be used for passivation:
@@snip [BackoffSupervisorDocSpec.scala](/docs/src/test/scala/docs/pattern/BackoffSupervisorDocSpec.scala) { #backoff-sharded-passivation }
### Simple backoff
@ -283,10 +283,10 @@ The following snippet shows how to create a backoff supervisor which will start
because of a failure, in increasing intervals of 3, 6, 12, 24 and finally 30 seconds:
Scala
: @@snip [BackoffSupervisorDocSpec.scala](/docs/src/test/scala/docs/pattern/BackoffSupervisorDocSpec.scala) { #backoff-stop }
Java
: @@snip [BackoffSupervisorDocTest.java](/docs/src/test/java/jdocs/pattern/BackoffSupervisorDocTest.java) { #backoff-stop }
Using a `randomFactor` to add a little bit of additional variance to the backoff intervals
is highly recommended, in order to avoid multiple actors restarting at the exact same point in time,
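A minimal sketch of such a backoff supervisor follows. It assumes a trivial `EchoActor` as the supervised child (a stand-in introduced here for illustration); the exact options used by the docs are in the linked snippet:

```scala
import org.apache.pekko.actor.{ Actor, Props }
import org.apache.pekko.pattern.{ BackoffOpts, BackoffSupervisor }
import scala.concurrent.duration._

class EchoActor extends Actor { // hypothetical child actor
  def receive = { case msg => sender() ! msg }
}

val supervisorProps = BackoffSupervisor.props(
  BackoffOpts.onStop(
    Props[EchoActor](),
    childName = "myEcho",
    minBackoff = 3.seconds,  // first restart attempt after 3 s
    maxBackoff = 30.seconds, // doubling intervals are capped at 30 s
    randomFactor = 0.2))     // add up to 20% jitter to each interval

// system.actorOf(supervisorProps, "echoSupervisor")
```

The `onStop` variant restarts the child after it stops; the jitter from `randomFactor` keeps a fleet of such children from all restarting simultaneously.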
@ -302,10 +302,10 @@ The following snippet shows how to create a backoff supervisor which will start
because of some exception, in increasing intervals of 3, 6, 12, 24 and finally 30 seconds:
Scala
: @@snip [BackoffSupervisorDocSpec.scala](/docs/src/test/scala/docs/pattern/BackoffSupervisorDocSpec.scala) { #backoff-fail }
Java
: @@snip [BackoffSupervisorDocTest.java](/docs/src/test/java/jdocs/pattern/BackoffSupervisorDocTest.java) { #backoff-fail }
### Customization
@ -323,13 +323,13 @@ Only available on `BackoffOnStopOptions`:
Some examples:
@@snip [BackoffSupervisorDocSpec.scala](/docs/src/test/scala/docs/pattern/BackoffSupervisorDocSpec.scala) { #backoff-custom-stop }
The above code sets up a back-off supervisor that requires the child actor to send a `org.apache.pekko.pattern.BackoffSupervisor.Reset` message
to its parent when a message is successfully processed, resetting the back-off. It also uses a default stopping strategy: any exception
will cause the child to stop.
@@snip [BackoffSupervisorDocSpec.scala](/docs/src/test/scala/docs/pattern/BackoffSupervisorDocSpec.scala) { #backoff-custom-fail }
The above code sets up a back-off supervisor that stops and starts the child after back-off if `MyException` is thrown; any other exception will be
escalated. The back-off is automatically reset if the child does not throw any errors within 10 seconds.


@ -39,28 +39,28 @@ send them on after the burst ended or a flush request is received.
First, consider all of the below to use these import statements:
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #simple-imports }
Java
: @@snip [Buncher.java](/docs/src/test/java/jdocs/actor/fsm/Buncher.java) { #simple-imports }
The contract of our “Buncher” actor is that it accepts or produces the following messages:
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #simple-events }
Java
: @@snip [Events.java](/docs/src/test/java/jdocs/actor/fsm/Events.java) { #simple-events }
`SetTarget` is needed for starting it up, setting the destination for the
`Batches` to be passed on; `Queue` will add to the internal queue while
`Flush` will mark the end of a burst.
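For orientation, the message and state types referenced here roughly take the following shape. This is a sketch reconstructed from the surrounding description; the authoritative definitions are in the linked snippets:

```scala
import org.apache.pekko.actor.ActorRef
import scala.collection.immutable

// received events
final case class SetTarget(ref: ActorRef) // where to send batches
final case class Queue(obj: Any)          // enqueue one element
case object Flush                         // end of a burst

// sent event
final case class Batch(obj: immutable.Seq[Any])

// FSM states: nothing queued vs. something queued
sealed trait State
case object Idle extends State
case object Active extends State

// FSM state data: target and the accumulated queue
sealed trait Data
case object Uninitialized extends Data
final case class Todo(target: ActorRef, queue: immutable.Seq[Any]) extends Data
```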
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #simple-state }
Java
: @@snip [Buncher.java](/docs/src/test/java/jdocs/actor/fsm/Buncher.java) { #simple-state }
The actor can be in two states: no message queued (aka `Idle`) or some
message queued (aka `Active`). It will stay in the `Active` state as long as
@ -71,10 +71,10 @@ the actual queue of messages.
Now let's take a look at the skeleton for our FSM actor:
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #simple-fsm }
Java
: @@snip [Buncher.java](/docs/src/test/java/jdocs/actor/fsm/Buncher.java) { #simple-fsm }
The basic strategy is to declare the actor, @scala[mixing in the `FSM` trait]@java[by inheriting the `AbstractFSM` class]
and specifying the possible states and data values as type parameters. Within
@ -102,10 +102,10 @@ which is not handled by the `when()` block is passed to the
`whenUnhandled()` block:
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #unhandled-elided }
Java
: @@snip [Buncher.java](/docs/src/test/java/jdocs/actor/fsm/Buncher.java) { #unhandled-elided }
The first case handled here is adding `Queue()` requests to the internal
queue and going to the `Active` state (this does the obvious thing of staying
@ -120,10 +120,10 @@ multiple such blocks and all of them will be tried for matching behavior in
case a state transition occurs (i.e. only when the state actually changes).
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #transition-elided }
Java
: @@snip [Buncher.java](/docs/src/test/java/jdocs/actor/fsm/Buncher.java) { #transition-elided }
The transition callback is a @scala[partial function]@java[builder constructed by `matchState`, followed by zero or multiple `state`], which takes as input a pair of
states—the current and the next state. @scala[The FSM trait includes a convenience
@ -145,10 +145,10 @@ To verify that this buncher actually works, it is quite easy to write a test
using the @scala[@ref:[Testing Actor Systems which is conveniently bundled with ScalaTest traits into `AkkaSpec`](testing.md)]@java[@ref:[TestKit](testing.md), here using JUnit as an example]:
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #test-code }
Java
: @@snip [BuncherTest.java](/docs/src/test/java/jdocs/actor/fsm/BuncherTest.java) { #test-code }
## Reference
@ -164,10 +164,10 @@ Actor since an Actor is created to drive the FSM.
]
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #simple-fsm }
Java
: @@snip [Buncher.java](/docs/src/test/java/jdocs/actor/fsm/Buncher.java) { #simple-fsm }
@@@ note
@ -221,10 +221,10 @@ which is conveniently given using the @scala[partial function literal]@java[stat
demonstrated below:
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #when-syntax }
Java
: @@snip [Buncher.java](/docs/src/test/java/jdocs/actor/fsm/Buncher.java) { #when-syntax }
@@@ div { .group-scala }
@ -246,10 +246,10 @@ states. If you want to leave the handling of a state “unhandled” (more below
it still needs to be declared like this:
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #NullFunction }
Java
: @@snip [FSMDocTest.java](/docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #NullFunction }
### Defining the Initial State
@ -270,10 +270,10 @@ do something else in this case you can specify that with
`whenUnhandled(stateFunction)`:
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #unhandled-syntax }
Java
: @@snip [FSMDocTest.java](/docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #unhandled-syntax }
Within this handler the state of the FSM may be queried using the
`stateName` method.
@ -313,10 +313,10 @@ does not modify the state transition.
All modifiers can be chained to achieve a nice and concise description:
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #modifier-syntax }
Java
: @@snip [FSMDocTest.java](/docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #modifier-syntax }
The parentheses are not actually needed in all cases, but they visually
distinguish between modifiers and their arguments and therefore make the code
@ -355,10 +355,10 @@ resulting state is needed as it is not possible to modify the transition in
progress.
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #transition-syntax }
Java
: @@snip [FSMDocTest.java](/docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #transition-syntax }
@@@ div { .group-scala }
@ -375,10 +375,10 @@ It is also possible to pass a function object accepting two states to
a method:
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #alt-transition-syntax }
Java
: @@snip [FSMDocTest.java](/docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #alt-transition-syntax }
The handlers registered with this method are stacked, so you can intersperse
`onTransition` blocks with `when` blocks as suits your design. It
@ -430,13 +430,13 @@ transformed using Scalas full supplement of functional programming tools. In
order to retain type inference, there is a helper function which may be used in
case some common handling logic shall be applied to different clauses:
@@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #transform-syntax }
It goes without saying that the arguments to this method may also be stored, to
be used several times, e.g. when applying the same transformation to several
`when()` blocks:
@@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #alt-transform-syntax }
@@@
@ -499,20 +499,20 @@ may not be used within a `when` block).
@@@
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #stop-syntax }
Java
: @@snip [FSMDocTest.java](/docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #stop-syntax }
You can use `onTermination(handler)` to specify custom code that is
executed when the FSM is stopped. The handler is a partial function which takes
a `StopEvent(reason, stateName, stateData)` as argument:
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #termination-syntax }
Java
: @@snip [FSMDocTest.java](/docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #termination-syntax }
As for the `whenUnhandled` case, this handler is not stacked, so each
invocation of `onTermination` replaces the previously installed handler.
@ -545,10 +545,10 @@ The setting `akka.actor.debug.fsm` in @ref:[configuration](general/configuration
event trace by `LoggingFSM` instances:
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #logging-fsm }
Java
: @@snip [FSMDocTest.java](/docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #logging-fsm }
This FSM will log at DEBUG level:
@ -567,10 +567,10 @@ log which may be used during debugging (for tracing how the FSM entered a
certain failure state) or for other creative uses:
Scala
: @@snip [FSMDocSpec.scala](/docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #logging-fsm }
Java
: @@snip [FSMDocTest.java](/docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #logging-fsm }
The `logDepth` defaults to zero, which turns off the event log.


@ -18,17 +18,17 @@ Akka offers tiny helpers for use with @scala[@scaladoc[Future](scala.concurrent.
@scala[`org.apache.pekko.pattern.after`]@java[@javadoc[org.apache.pekko.pattern.Patterns.after](pekko.pattern.Patterns#after)] makes it easy to complete a @scala[@scaladoc[Future](scala.concurrent.Future)]@java[@javadoc[CompletionStage](java.util.concurrent.CompletionStage)] with a value or exception after a timeout.
Scala
: @@snip [FutureDocSpec.scala](/docs/src/test/scala/docs/future/FutureDocSpec.scala) { #after }
Java
: @@snip [FutureDocTest.java](/docs/src/test/java/jdocs/future/FutureDocTest.java) { #imports #after }
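A common use is racing a slow operation against a timeout via `Future.firstCompletedOf`; a minimal sketch (assuming `lookup` as a hypothetical slow operation):

```scala
import java.util.concurrent.TimeoutException
import scala.concurrent.Future
import scala.concurrent.duration._
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.pattern.after

implicit val system: ActorSystem = ActorSystem("example")
import system.dispatcher // ExecutionContext for the futures below

def lookup: Future[Int] = Future { 42 } // hypothetical slow operation

// Fail the combined future if lookup does not answer within 200 ms.
val withTimeout: Future[Int] = Future.firstCompletedOf(
  Seq(
    lookup,
    after(200.millis, system.scheduler)(
      Future.failed(new TimeoutException("lookup timed out")))))
```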
## Retry
@scala[`org.apache.pekko.pattern.retry`]@java[@javadoc[org.apache.pekko.pattern.Patterns.retry](pekko.pattern.Patterns#retry)] will retry a @scala[@scaladoc[Future](scala.concurrent.Future)]@java[@javadoc[CompletionStage](java.util.concurrent.CompletionStage)] some number of times with a delay between each attempt.
Scala
: @@snip [FutureDocSpec.scala](/docs/src/test/scala/docs/future/FutureDocSpec.scala) { #retry }
Java
: @@snip [FutureDocTest.java](/docs/src/test/java/jdocs/future/FutureDocTest.java) { #imports #retry }
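A minimal sketch of `retry` (with `flakyCall` as a hypothetical unreliable operation):

```scala
import scala.concurrent.Future
import scala.concurrent.duration._
import org.apache.pekko.actor.{ ActorSystem, Scheduler }
import org.apache.pekko.pattern.retry

implicit val system: ActorSystem = ActorSystem("example")
import system.dispatcher // implicit ExecutionContext
implicit val scheduler: Scheduler = system.scheduler

def flakyCall(): Future[String] = Future { "ok" } // hypothetical unreliable operation

// Try up to 5 times, waiting 100 ms between attempts; the resulting
// future fails with the last error if every attempt fails.
val retried: Future[String] = retry(() => flakyCall(), 5, 100.millis)
```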




@ -176,10 +176,10 @@ e.g. in the reference configuration. The settings as merged with the reference
and parsed by the actor system can be displayed like this:
Scala
: @@snip [ConfigDocSpec.scala](/docs/src/test/scala/docs/config/ConfigDocSpec.scala) { #dump-config }
Java
: @@snip [ConfigDocTest.java](/docs/src/test/java/jdocs/config/ConfigDocTest.java) { #dump-config }
## A Word About ClassLoaders
@ -225,10 +225,10 @@ my.other.setting = "hello"
```
Scala
: @@snip [ConfigDocSpec.scala](/docs/src/test/scala/docs/config/ConfigDocSpec.scala) { #separate-apps }
Java
: @@snip [ConfigDocTest.java](/docs/src/test/java/jdocs/config/ConfigDocTest.java) { #separate-apps }
These two samples demonstrate different variations of the “lift-a-subtree”
trick: in the first case, the configuration accessible from within the actor
@ -266,10 +266,10 @@ the @apidoc[ActorSystem](typed.ActorSystem).
Scala Scala
: @@snip [ConfigDocSpec.scala](/akka-docs/src/test/scala/docs/config/ConfigDocSpec.scala) { #imports #custom-config } : @@snip [ConfigDocSpec.scala](/docs/src/test/scala/docs/config/ConfigDocSpec.scala) { #imports #custom-config }
Java Java
: @@snip [ConfigDocTest.java](/akka-docs/src/test/java/jdocs/config/ConfigDocTest.java) { #imports #custom-config } : @@snip [ConfigDocTest.java](/docs/src/test/java/jdocs/config/ConfigDocTest.java) { #imports #custom-config }
## Reading configuration from a custom location ## Reading configuration from a custom location
@ -314,10 +314,10 @@ You can also combine your custom config with the usual config,
that might look like: that might look like:
Scala Scala
: @@snip [ConfigDocSpec.scala](/akka-docs/src/test/scala/docs/config/ConfigDocSpec.scala) { #custom-config-2 } : @@snip [ConfigDocSpec.scala](/docs/src/test/scala/docs/config/ConfigDocSpec.scala) { #custom-config-2 }
Java Java
: @@snip [ConfigDocTest.java](/akka-docs/src/test/java/jdocs/config/ConfigDocTest.java) { #custom-config-2 } : @@snip [ConfigDocTest.java](/docs/src/test/java/jdocs/config/ConfigDocTest.java) { #custom-config-2 }
When working with [Config](https://lightbend.github.io/config/latest/api/com/typesafe/config/Config.html) objects, keep in mind that there are When working with [Config](https://lightbend.github.io/config/latest/api/com/typesafe/config/Config.html) objects, keep in mind that there are
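The "combine your custom config with the usual config" idea that this hunk touches boils down to HOCON's fallback merging: keys set in the custom config win, and anything missing falls through to `application.conf` and ultimately the library's bundled `reference.conf`. A hypothetical fragment (the setting names here are invented for illustration):

```hocon
# custom.conf — hypothetical overrides, consulted first
pekko.loglevel = "DEBUG"
my.own.setting = 42

# Every key NOT set above is resolved from application.conf,
# which in turn falls back to the bundled reference.conf,
# so a custom file only needs to list the deviations.
```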


@@ -71,10 +71,10 @@ Since Akka runs on the JVM there are still some rules to be followed.
 Most importantly, you must not close over internal Actor state and exposing it to other threads:
 Scala
-: @@snip [SharedMutableStateDocSpec.scala](/akka-docs/src/test/scala/docs/actor/typed/SharedMutableStateDocSpec.scala) { #mutable-state }
+: @@snip [SharedMutableStateDocSpec.scala](/docs/src/test/scala/docs/actor/typed/SharedMutableStateDocSpec.scala) { #mutable-state }
 Java
-: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/actor/typed/SharedMutableStateDocTest.java) { #mutable-state }
+: @@snip [DistributedDataDocTest.java](/docs/src/test/java/jdocs/actor/typed/SharedMutableStateDocTest.java) { #mutable-state }

 * Messages **should** be immutable, this is to avoid the shared mutable state trap.


@@ -15,7 +15,7 @@
 version="1.1"
 inkscape:version="0.48.2 r9819"
 sodipodi:docname="actor_lifecycle.svg"
-inkscape:export-filename="D:\workspace\akka\akka-docs\rst\images\actor_lifecycle.png"
+inkscape:export-filename="D:\workspace\akka\docs\rst\images\actor_lifecycle.png"
 inkscape:export-xdpi="136.88808"
 inkscape:export-ydpi="136.88808">
 <defs


Some files were not shown because too many files have changed in this diff.