Use absolute snippet paths (#25607)
* Support absolute snippet path in signature directive
* Removed $akka$ from snippet paths
* Remove $code$ snippet alias
* Remove $code$ snippet prefix
parent a1b4ac7c8e
commit 3eb9b3a1a6

217 changed files with 2022 additions and 2025 deletions
@@ -61,10 +61,10 @@ the messages should be processed. You can build such behavior with a builder nam
 Here is an example:

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #imports1 #my-actor }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #imports1 #my-actor }

 Java
-: @@snip [MyActor.java]($code$/java/jdocs/actor/MyActor.java) { #imports #my-actor }
+: @@snip [MyActor.java](/akka-docs/src/test/java/jdocs/actor/MyActor.java) { #imports #my-actor }

 Please note that the Akka Actor @scala[`receive`] message loop is exhaustive, which
 is different compared to Erlang and the late Scala Actors. This means that you

@@ -89,7 +89,7 @@ construction.

 #### Here is another example that you can edit and run in the browser:

-@@fiddle [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #fiddle_code template=Akka layout=v75 minheight=400px }
+@@fiddle [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #fiddle_code template=Akka layout=v75 minheight=400px }

 @@@

@@ -102,10 +102,10 @@ dispatcher to use, see more below). Here are some examples of how to create a
 `Props` instance.

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #creating-props }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #creating-props }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #import-props #creating-props }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-props #creating-props }

 The second variant shows how to pass constructor arguments to the
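As a rough illustration of what the `#creating-props` snippets referenced above contain — the class and values here are placeholders, not the literal snippet contents — `Props` construction in Scala looks like this:

```scala
import akka.actor.{ Actor, Props }

class DemoActor(magicNumber: Int) extends Actor {
  def receive = {
    case x: Int => sender() ! (x + magicNumber)
  }
}

object PropsDemo {
  // the Props(classOf[...], args...) variant verifies that a matching
  // constructor exists when the Props is created, not when the actor starts
  val props: Props = Props(classOf[DemoActor], 42)
}
```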
@@ -127,10 +127,10 @@ for cases when the actor constructor takes value classes as arguments.

 #### Dangerous Variants
 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #creating-props-deprecated }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #creating-props-deprecated }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #creating-props-deprecated }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #creating-props-deprecated }

 This method is not recommended to be used within another actor because it
 encourages closing over the enclosing scope, resulting in non-serializable
@@ -162,13 +162,13 @@ There are two edge cases in actor creation with `Props`:

 * An actor with `AnyVal` arguments.

-@@snip [PropsEdgeCaseSpec.scala]($code$/scala/docs/actor/PropsEdgeCaseSpec.scala) { #props-edge-cases-value-class }
+@@snip [PropsEdgeCaseSpec.scala](/akka-docs/src/test/scala/docs/actor/PropsEdgeCaseSpec.scala) { #props-edge-cases-value-class }

-@@snip [PropsEdgeCaseSpec.scala]($code$/scala/docs/actor/PropsEdgeCaseSpec.scala) { #props-edge-cases-value-class-example }
+@@snip [PropsEdgeCaseSpec.scala](/akka-docs/src/test/scala/docs/actor/PropsEdgeCaseSpec.scala) { #props-edge-cases-value-class-example }

 * An actor with default constructor values.

-@@snip [PropsEdgeCaseSpec.scala]($code$/scala/docs/actor/PropsEdgeCaseSpec.scala) { #props-edge-cases-default-values }
+@@snip [PropsEdgeCaseSpec.scala](/akka-docs/src/test/scala/docs/actor/PropsEdgeCaseSpec.scala) { #props-edge-cases-default-values }

 In both cases an `IllegalArgumentException` will be thrown stating
 no matching constructor could be found.

@@ -189,10 +189,10 @@ arguments as constructor parameters, since within static method]
 the given code block will not retain a reference to its enclosing scope:

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #props-factory }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #props-factory }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #props-factory }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #props-factory }

 Another good practice is to declare what messages an Actor can receive
 @scala[in the companion object of the Actor]
@@ -200,10 +200,10 @@ Another good practice is to declare what messages an Actor can receive
 which makes it easier to know what it can receive:

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #messages-in-companion }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #messages-in-companion }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #messages-in-companion }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #messages-in-companion }

 ### Creating Actors with Props

@@ -212,20 +212,20 @@ Actors are created by passing a `Props` instance into the
 `ActorContext`.

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #system-actorOf }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #system-actorOf }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #import-actorRef }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-actorRef }

 Using the `ActorSystem` will create top-level actors, supervised by the
 actor system’s provided guardian actor, while using an actor’s context will
 create a child actor.

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #context-actorOf }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #context-actorOf }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #context-actorOf }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #context-actorOf }

 It is recommended to create a hierarchy of children, grand-children and so on
 such that it fits the logical failure-handling structure of the application,
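For orientation, a minimal sketch of both creation paths (actor classes and names are illustrative, not the exact snippet contents):

```scala
import akka.actor.{ Actor, ActorRef, ActorSystem, Props }

class WordCounter extends Actor {
  def receive = { case text: String => sender() ! text.split(" ").length }
}

class Supervisor extends Actor {
  // created from an actor's context: a child, supervised by this actor
  val child: ActorRef = context.actorOf(Props[WordCounter], "counter")
  def receive = { case text: String => child.forward(text) }
}

object Main extends App {
  val system = ActorSystem("demo")
  // created from the system: a top-level actor under the user guardian
  val top: ActorRef = system.actorOf(Props[Supervisor], "supervisor")
}
```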
@@ -258,7 +258,7 @@ value classes.
 In these cases you should either unpack the arguments or create the props by
 calling the constructor manually:

-@@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #actor-with-value-class-argument }
+@@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #actor-with-value-class-argument }

 @@@

@@ -270,10 +270,10 @@ are cases when a factory method must be used, for example when the actual
 constructor arguments are determined by a dependency injection framework.

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #creating-indirectly }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #creating-indirectly }

 Java
-: @@snip [DependencyInjectionDocTest.java]($code$/java/jdocs/actor/DependencyInjectionDocTest.java) { #import #creating-indirectly }
+: @@snip [DependencyInjectionDocTest.java](/akka-docs/src/test/java/jdocs/actor/DependencyInjectionDocTest.java) { #import #creating-indirectly }

 @@@ warning

@@ -301,10 +301,10 @@ to a notification service) and watching other actors’ lifecycle. For these
 purposes there is the `Inbox` class:

 Scala
-: @@snip [ActorDSLSpec.scala]($akka$/akka-actor-tests/src/test/scala/akka/actor/ActorDSLSpec.scala) { #inbox }
+: @@snip [ActorDSLSpec.scala](/akka-actor-tests/src/test/scala/akka/actor/ActorDSLSpec.scala) { #inbox }

 Java
-: @@snip [InboxDocTest.java]($code$/java/jdocs/actor/InboxDocTest.java) { #inbox }
+: @@snip [InboxDocTest.java](/akka-docs/src/test/java/jdocs/actor/InboxDocTest.java) { #inbox }

 @@@ div { .group-scala }

@@ -314,7 +314,7 @@ in this example the sender reference will be that of the actor hidden away
 within the inbox. This allows the reply to be received on the last line.
 Watching an actor is quite simple as well:

-@@snip [ActorDSLSpec.scala]($akka$/akka-actor-tests/src/test/scala/akka/actor/ActorDSLSpec.scala) { #watch }
+@@snip [ActorDSLSpec.scala](/akka-actor-tests/src/test/scala/akka/actor/ActorDSLSpec.scala) { #watch }

 @@@

@@ -324,7 +324,7 @@ The `send` method wraps a normal `tell` and supplies the internal
 actor’s reference as the sender. This allows the reply to be received on the
 last line. Watching an actor is quite simple as well:

-@@snip [InboxDocTest.java]($code$/java/jdocs/actor/InboxDocTest.java) { #watch }
+@@snip [InboxDocTest.java](/akka-docs/src/test/java/jdocs/actor/InboxDocTest.java) { #watch }

 @@@

@@ -371,7 +371,7 @@ time).

 You can import the members in the `context` to avoid prefixing access with `context.`

-@@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #import-context }
+@@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #import-context }

 @@@

@@ -379,10 +379,10 @@ The remaining visible methods are user-overridable life-cycle hooks which are
 described in the following:

 Scala
-: @@snip [Actor.scala]($akka$/akka-actor/src/main/scala/akka/actor/Actor.scala) { #lifecycle-hooks }
+: @@snip [Actor.scala](/akka-actor/src/main/scala/akka/actor/Actor.scala) { #lifecycle-hooks }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #lifecycle-callbacks }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #lifecycle-callbacks }

 The implementations shown above are the defaults provided by the @scala[`Actor` trait.] @java[`AbstractActor` class.]
@@ -455,10 +455,10 @@ termination (see [Stopping Actors](#stopping-actors)). This service is provided
 Registering a monitor is easy:

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #watch }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #watch }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #import-terminated #watch }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-terminated #watch }

 It should be noted that the `Terminated` message is generated
 independent of the order in which registration and termination occur.
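The `#watch` snippet is essentially the following pattern, sketched here with placeholder names:

```scala
import akka.actor.{ Actor, ActorRef, Props, Terminated }

class WatchActor extends Actor {
  val child: ActorRef = context.actorOf(Props.empty, "child")
  context.watch(child) // registers this actor to receive the child's Terminated

  var lastSender: ActorRef = context.system.deadLetters

  def receive = {
    case "kill" =>
      context.stop(child)
      lastSender = sender()
    case Terminated(`child`) =>
      lastSender ! "finished" // arrives even if watch was registered after termination
  }
}
```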
@@ -484,10 +484,10 @@ no `Terminated` message for that actor will be processed anymore.
 Right after starting the actor, its `preStart` method is invoked.

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #preStart }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #preStart }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #preStart }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #preStart }

 This method is called when the actor is first created. During restarts it is
 called by the default implementation of `postRestart`, which means that

@@ -561,10 +561,10 @@ paths—logical or physical—and receive back an `ActorSelection` with the
 result:

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #selection-local }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #selection-local }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #selection-local }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #selection-local }

 @@@ note

@@ -593,10 +593,10 @@ The path elements of an actor selection may contain wildcard patterns allowing f
 broadcasting of messages to that section:

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #selection-wildcard }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #selection-wildcard }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #selection-wildcard }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #selection-wildcard }

 Messages can be sent via the `ActorSelection` and the path of the
 `ActorSelection` is looked up when delivering each message. If the selection

@@ -613,10 +613,10 @@ negative result is generated. Please note that this does not mean that delivery
 of that reply is guaranteed, it still is a normal message.

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #identify }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #identify }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #import-identify #identify }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-identify #identify }

 You can also acquire an `ActorRef` for an `ActorSelection` with
 the `resolveOne` method of the `ActorSelection`. It returns a `Future`
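A sketch of the `Identify`/`ActorIdentity` round trip (the selection path here is made up for the example):

```scala
import akka.actor.{ Actor, ActorIdentity, Identify, Terminated }

class Follower extends Actor {
  val identifyId = 1
  context.actorSelection("/user/another") ! Identify(identifyId)

  def receive = {
    case ActorIdentity(`identifyId`, Some(ref)) =>
      context.watch(ref) // the selection resolved to a live ActorRef
    case ActorIdentity(`identifyId`, None) =>
      context.stop(self) // no actor matched the selection
    case Terminated(_) =>
      context.stop(self)
  }
}
```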
@@ -628,10 +628,10 @@ didn't complete within the supplied `timeout`.
 Remote actor addresses may also be looked up, if @ref:[remoting](remoting.md) is enabled:

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #selection-remote }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #selection-remote }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #selection-remote }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #selection-remote }

 An example demonstrating actor look-up is given in @ref:[Remoting Sample](remoting.md#remote-sample).

@@ -650,10 +650,10 @@ state) and works great with pattern matching at the receiver side.]
 Here is an @scala[example:] @java[example of an immutable message:]

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #immutable-message-definition #immutable-message-instantiation }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #immutable-message-definition #immutable-message-instantiation }

 Java
-: @@snip [ImmutableMessage.java]($code$/java/jdocs/actor/ImmutableMessage.java) { #immutable-message }
+: @@snip [ImmutableMessage.java](/akka-docs/src/test/java/jdocs/actor/ImmutableMessage.java) { #immutable-message }

 ## Send messages
@@ -691,10 +691,10 @@ This is the preferred way of sending messages. No blocking waiting for a
 message. This gives the best concurrency and scalability characteristics.

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #tell }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #tell }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #tell }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #tell }

 @@@ div { .group-scala }
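In Scala the `#tell` snippet boils down to the following (the target and message are placeholders):

```scala
import akka.actor.{ Actor, ActorRef }

class Greeter(target: ActorRef) extends Actor {
  def receive = {
    case "start" =>
      target ! "hello"           // fire-and-forget; the implicit sender is `self`
      target.tell("hello", self) // the same send, spelled out explicitly
  }
}
```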
@@ -728,10 +728,10 @@ The `ask` pattern involves actors as well as futures, hence it is offered as
 a use pattern rather than a method on `ActorRef`:

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #ask-pipeTo }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #ask-pipeTo }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #import-ask #ask-pipe }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-ask #ask-pipe }

 This example demonstrates `ask` together with the `pipeTo` pattern on
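A condensed sketch of ask-then-pipe (the actor refs and message are placeholders; the reply type is left as `Any` for brevity):

```scala
import akka.actor.ActorRef
import akka.pattern.{ ask, pipe }
import akka.util.Timeout
import scala.concurrent.ExecutionContext
import scala.concurrent.duration._

object AskPipeDemo {
  def askAndPipe(worker: ActorRef, receiver: ActorRef)(implicit ec: ExecutionContext): Unit = {
    implicit val timeout: Timeout = 5.seconds
    // `?` returns a Future of the reply; pipeTo delivers the result to
    // `receiver` as an ordinary message instead of blocking for it
    (worker ? "job").pipeTo(receiver)
  }
}
```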
@@ -767,10 +767,10 @@ are treated specially by the ask pattern.
 @@@

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #reply-exception }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #reply-exception }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #reply-exception }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #reply-exception }

 If the actor does not complete the future, it will expire after the timeout period,
 @scala[completing it with an `AskTimeoutException`. The timeout is taken from one of the following locations in order of precedence:]

@@ -780,11 +780,11 @@ If the actor does not complete the future, it will expire after the timeout peri

 1. explicitly given timeout as in:

-@@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #using-explicit-timeout }
+@@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #using-explicit-timeout }

 2. implicit argument of type `akka.util.Timeout`, e.g.

-@@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #using-implicit-timeout }
+@@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #using-implicit-timeout }

 @@@

@@ -816,10 +816,10 @@ through a 'mediator'. This can be useful when writing actors that work as
 routers, load-balancers, replicators etc.

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #forward }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #forward }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #forward }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #forward }

 ## Receive messages

@@ -829,10 +829,10 @@ An Actor has to
 @java[define its initial receive behavior by implementing the `createReceive` method in the `AbstractActor`:]

 Scala
-: @@snip [Actor.scala]($akka$/akka-actor/src/main/scala/akka/actor/Actor.scala) { #receive }
+: @@snip [Actor.scala](/akka-actor/src/main/scala/akka/actor/Actor.scala) { #receive }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #createReceive }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #createReceive }

 @@@ div { .group-scala }

@@ -851,23 +851,23 @@ You can build such behavior with a builder named `ReceiveBuilder`. Here is an ex
 @@@

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #imports1 #my-actor }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #imports1 #my-actor }

 Java
-: @@snip [MyActor.java]($code$/java/jdocs/actor/MyActor.java) { #imports #my-actor }
+: @@snip [MyActor.java](/akka-docs/src/test/java/jdocs/actor/MyActor.java) { #imports #my-actor }

 @@@ div { .group-java }

 In case you want to provide many `match` cases but want to avoid creating a long call
 trail, you can split the creation of the builder into multiple statements as in the example:

-@@snip [GraduallyBuiltActor.java]($code$/java/jdocs/actor/GraduallyBuiltActor.java) { #imports #actor }
+@@snip [GraduallyBuiltActor.java](/akka-docs/src/test/java/jdocs/actor/GraduallyBuiltActor.java) { #imports #actor }

 Using small methods is a good practice, also in actors. It's recommended to delegate the
 actual work of the message processing to methods instead of defining a huge `ReceiveBuilder`
 with lots of code in each lambda. A well structured actor can look like this:

-@@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #well-structured }
+@@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #well-structured }

 That has benefits such as:

@@ -889,7 +889,7 @@ that the JVM can have problems optimizing and the resulting code might not be as
 untyped version. When extending `UntypedAbstractActor` each message is received as an untyped
 `Object` and you have to inspect and cast it to the actual message type in other ways, like this:

-@@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #optimized }
+@@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #optimized }

 @@@

@@ -904,10 +904,10 @@ message was sent without an actor or future context) then the sender
 defaults to a 'dead-letter' actor ref.

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #reply-without-sender }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #reply-without-sender }

 Java
-: @@snip [MyActor.java]($code$/java/jdocs/actor/MyActor.java) { #reply }
+: @@snip [MyActor.java](/akka-docs/src/test/java/jdocs/actor/MyActor.java) { #reply }

 ## Receive timeout
@@ -924,10 +924,10 @@ Once set, the receive timeout stays in effect (i.e. continues firing repeatedly
 periods). Pass in `Duration.Undefined` to switch off this feature.

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #receive-timeout }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #receive-timeout }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #receive-timeout }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #receive-timeout }

 Messages marked with `NotInfluenceReceiveTimeout` will not reset the timer. This can be useful when
 `ReceiveTimeout` should be fired by external inactivity but not influenced by internal activity,
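The pattern the `#receive-timeout` snippets show, sketched with an illustrative actor:

```scala
import akka.actor.{ Actor, ReceiveTimeout }
import scala.concurrent.duration._

class InactivityActor extends Actor {
  context.setReceiveTimeout(30.seconds) // ReceiveTimeout fires after 30s of silence

  def receive = {
    case "hello" => // any ordinary message resets the timer
    case ReceiveTimeout =>
      context.setReceiveTimeout(Duration.Undefined) // switch the feature off again
      context.stop(self)
  }
}
```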
@@ -943,10 +943,10 @@ to use the support for named timers. The lifecycle of scheduled messages can be
 when the actor is restarted and that is taken care of by the timers.

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/TimerDocSpec.scala) { #timers }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/TimerDocSpec.scala) { #timers }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/TimerDocTest.java) { #timers }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/TimerDocTest.java) { #timers }

 Each timer has a key and can be replaced or cancelled. It's guaranteed that a message from the
 previous incarnation of the timer with the same key is not received, even though it might already
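Sketched usage of the `Timers` trait (the key and message types are illustrative; the exact snippet contents may differ):

```scala
import akka.actor.{ Actor, Timers }
import scala.concurrent.duration._

class PeriodicActor extends Actor with Timers {
  private case object TickKey
  private case object Tick

  // keyed timer: starting another timer with the same key replaces the old one,
  // and all timers are cancelled automatically when the actor stops or restarts
  timers.startPeriodicTimer(TickKey, Tick, 1.second)

  def receive = {
    case Tick => // periodic work here
  }
}
```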
@@ -966,10 +966,10 @@ termination of the actor is performed asynchronously, i.e. `stop` may return bef
 the actor is stopped.

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #stoppingActors-actor }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #stoppingActors-actor }

 Java
-: @@snip [MyStoppingActor.java]($code$/java/jdocs/actor/MyStoppingActor.java) { #my-stopping-actor }
+: @@snip [MyStoppingActor.java](/akka-docs/src/test/java/jdocs/actor/MyStoppingActor.java) { #my-stopping-actor }

 Processing of the current message, if any, will continue before the actor is stopped,

@@ -997,10 +997,10 @@ The `postStop()` hook is invoked after an actor is fully stopped. This
 enables cleaning up of resources:

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #postStop }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #postStop }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #postStop }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #postStop }

 @@@ note

@@ -1021,10 +1021,10 @@ ordinary messages and will be handled after messages that were already queued
 in the mailbox.

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #poison-pill }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #poison-pill }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #poison-pill }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #poison-pill }

 <a id="killing-actors"></a>
 ### Killing an Actor

@@ -1038,10 +1038,10 @@ See @ref:[What Supervision Means](general/supervision.md#supervision-directives)
 Use `Kill` like this:

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #kill }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #kill }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #kill }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #kill }

 In general though it is not recommended to overly rely on either `PoisonPill` or `Kill` in
 designing your actor interactions, as often times a protocol-level message like `PleaseCleanupAndStop`
@@ -1054,10 +1054,10 @@ over which design you do not have control over.
 termination of several actors:

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #gracefulStop}
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #gracefulStop}

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #import-gracefulStop #gracefulStop}
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #import-gracefulStop #gracefulStop}

 When `gracefulStop()` returns successfully, the actor’s `postStop()` hook
 will have been executed: there exists a happens-before edge between the end of
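The shape of the `#gracefulStop` snippets, roughly (the blocking `Await` is only for illustration):

```scala
import akka.actor.ActorRef
import akka.pattern.gracefulStop
import scala.concurrent.Await
import scala.concurrent.duration._

object StopDemo {
  def stopActor(ref: ActorRef): Boolean =
    try {
      // sends PoisonPill by default; the Future completes when Terminated is seen
      Await.result(gracefulStop(ref, 5.seconds), 6.seconds)
    } catch {
      case _: akka.pattern.AskTimeoutException => false // not stopped within 5 seconds
    }
}
```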
@@ -1088,7 +1088,7 @@ services in a specific order and perform registered tasks during the shutdown pr
 The order of the shutdown phases is defined in configuration `akka.coordinated-shutdown.phases`.
 The default phases are defined as:

-@@snip [reference.conf]($akka$/akka-actor/src/main/resources/reference.conf) { #coordinated-shutdown-phases }
+@@snip [reference.conf](/akka-actor/src/main/resources/reference.conf) { #coordinated-shutdown-phases }

 More phases can be added in the application's configuration if needed by overriding a phase with an
 additional `depends-on`. Especially the phases `before-service-unbind`, `before-cluster-shutdown` and

@@ -1101,10 +1101,10 @@ The phases are ordered with [topological](https://en.wikipedia.org/wiki/Topologi
 Tasks can be added to a phase with:

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #coordinated-shutdown-addTask }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #coordinated-shutdown-addTask }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #coordinated-shutdown-addTask }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #coordinated-shutdown-addTask }

 The returned @scala[`Future[Done]`] @java[`CompletionStage<Done>`] should be completed when the task is completed. The task name parameter
 is only used for debugging/logging.
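A minimal sketch of registering such a task (the phase choice and task name are just examples):

```scala
import akka.Done
import akka.actor.{ ActorSystem, CoordinatedShutdown }
import scala.concurrent.Future

object ShutdownTasks {
  def register(system: ActorSystem): Unit =
    CoordinatedShutdown(system).addTask(
      CoordinatedShutdown.PhaseBeforeServiceUnbind, "someTaskName") { () =>
      // perform the cleanup and complete the Future[Done] when finished
      Future.successful(Done)
    }
}
```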
@@ -1124,10 +1124,10 @@ To start the coordinated shutdown process you can invoke @scala[`run`] @java[`ru
 extension:

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #coordinated-shutdown-run }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #coordinated-shutdown-run }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #coordinated-shutdown-run }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #coordinated-shutdown-run }

 It's safe to call the @scala[`run`] @java[`runAll`] method multiple times. It will only run once.

@@ -1157,10 +1157,10 @@ If you have application specific JVM shutdown hooks it's recommended that you re
 those shutting down Akka Remoting (Artery).

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #coordinated-shutdown-jvm-hook }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #coordinated-shutdown-jvm-hook }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #coordinated-shutdown-jvm-hook }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #coordinated-shutdown-jvm-hook }

 For some tests it might be undesired to terminate the `ActorSystem` via `CoordinatedShutdown`.
 You can disable that by adding the following to the configuration of the `ActorSystem` that is
@@ -1193,10 +1193,10 @@ Please note that the actor will revert to its original behavior when restarted b
 To hotswap the Actor behavior using `become`:

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #hot-swap-actor }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #hot-swap-actor }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #hot-swap-actor }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #hot-swap-actor }

 This variant of the `become` method is useful for many different things,
 such as to implement a Finite State Machine (FSM, for an example see @scala[[Dining
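The `#hot-swap-actor` snippet is along these lines:

```scala
import akka.actor.Actor

class HotSwapActor extends Actor {
  import context._

  def angry: Receive = {
    case "foo" => sender() ! "I am already angry?"
    case "bar" => become(happy) // replace the current behavior
  }

  def happy: Receive = {
    case "bar" => sender() ! "I am already happy :-)"
    case "foo" => become(angry)
  }

  def receive = {
    case "foo" => become(angry)
    case "bar" => become(happy)
  }
}
```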
@@ -1212,10 +1212,10 @@ in the long run, otherwise this amounts to a memory leak (which is why this
 behavior is not the default).

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #swapper }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #swapper }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #swapper }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #swapper }

 ### Encoding Scala Actors nested receives without accidentally leaking memory

@@ -1257,10 +1257,10 @@ control over the mailbox, see the documentation on mailboxes: @ref:[Mailboxes](m
 Here is an example of the @scala[`Stash`] @java[`AbstractActorWithStash` class] in action:

 Scala
-: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #stash }
+: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #stash }

 Java
-: @@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #stash }
+: @@snip [ActorDocTest.java](/akka-docs/src/test/java/jdocs/actor/ActorDocTest.java) { #stash }

 Invoking `stash()` adds the current message (the message that the
 actor received last) to the actor's stash. It is typically invoked
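The `#stash` snippet follows roughly this shape:

```scala
import akka.actor.{ Actor, Stash }

class ActorWithProtocol extends Actor with Stash {
  def receive = {
    case "open" =>
      unstashAll() // replay everything stashed while the actor was closed
      context.become({
        case "write" => // do the writing
        case "close" =>
          unstashAll()
          context.unbecome()
        case _ => stash()
      }, discardOld = false)
    case _ => stash() // save the message until the actor is "open"
  }
}
```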
@@ -1348,7 +1348,7 @@ For example, imagine you have a set of actors which are either `Producers` or `C
 have an actor share both behaviors. This can be achieved without having to duplicate code by extracting the behaviors to
 traits and implementing the actor's `receive` as a combination of these partial functions.

-@@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #receive-orElse }
+@@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #receive-orElse }

 Instead of inheritance the same pattern can be applied via composition - compose the receive method using partial functions from delegates.

@@ -1384,10 +1384,10 @@ One useful usage of this pattern is to disable creation of new `ActorRefs` for c
 achieved by overriding `preRestart()`. Below is the default implementation of these lifecycle hooks:

 Scala
-: @@snip [InitializationDocSpec.scala]($code$/scala/docs/actor/InitializationDocSpec.scala) { #preStartInit }
+: @@snip [InitializationDocSpec.scala](/akka-docs/src/test/scala/docs/actor/InitializationDocSpec.scala) { #preStartInit }

 Java
-: @@snip [InitializationDocTest.java]($code$/java/jdocs/actor/InitializationDocTest.java) { #preStartInit }
+: @@snip [InitializationDocTest.java](/akka-docs/src/test/java/jdocs/actor/InitializationDocTest.java) { #preStartInit }

 Please note that the child actors are *still restarted*, but no new `ActorRef` is created. One can recursively apply

@@ -1404,10 +1404,10 @@ and use `become()` or a finite state-machine state transition to encode the init
 of the actor.

 Scala
-: @@snip [InitializationDocSpec.scala]($code$/scala/docs/actor/InitializationDocSpec.scala) { #messageInit }
+: @@snip [InitializationDocSpec.scala](/akka-docs/src/test/scala/docs/actor/InitializationDocSpec.scala) { #messageInit }

 Java
-: @@snip [InitializationDocTest.java]($code$/java/jdocs/actor/InitializationDocTest.java) { #messageInit }
+: @@snip [InitializationDocTest.java](/akka-docs/src/test/java/jdocs/actor/InitializationDocTest.java) { #messageInit }

 If the actor may receive messages before it has been initialized, a useful tool can be the `Stash` to save messages
 until the initialization finishes, and replay them after the actor has been initialized.

@@ -79,7 +79,7 @@ exhaustiveness.
 Here is an example where the compiler will warn you that the match in
 receive isn't exhaustive:

-@@snip [Faq.scala]($code$/scala/docs/faq/Faq.scala) { #exhaustiveness-check }
+@@snip [Faq.scala](/akka-docs/src/test/scala/docs/faq/Faq.scala) { #exhaustiveness-check }

 ## Remoting

@@ -112,8 +112,8 @@ dynamic in this way. ActorRefs may safely be exposed to other bundles.
 To bootstrap Akka inside an OSGi environment, you can use the `akka.osgi.ActorSystemActivator` class
 to conveniently set up the ActorSystem.

-@@snip [Activator.scala]($akka$/akka-osgi/src/test/scala/docs/osgi/Activator.scala) { #Activator }
+@@snip [Activator.scala](/akka-osgi/src/test/scala/docs/osgi/Activator.scala) { #Activator }

 The goal here is to map the OSGi lifecycle more directly to the Akka lifecycle. The `ActorSystemActivator` creates
 the actor system with a class loader that finds resources (`application.conf` and `reference.conf` files) and classes
-from the application bundle and all transitive dependencies.
+from the application bundle and all transitive dependencies.
@@ -55,10 +55,10 @@ value and providing an @scala[implicit] `ExecutionContext` to be used for it,
 @scala[for these examples we're going to use the default global one, but YMMV:]

 Scala
-: @@snip [AgentDocSpec.scala]($code$/scala/docs/agent/AgentDocSpec.scala) { #create }
+: @@snip [AgentDocSpec.scala](/akka-docs/src/test/scala/docs/agent/AgentDocSpec.scala) { #create }

 Java
-: @@snip [AgentDocTest.java]($code$/java/jdocs/agent/AgentDocTest.java) { #import-agent #create type=java }
+: @@snip [AgentDocTest.java](/akka-docs/src/test/java/jdocs/agent/AgentDocTest.java) { #import-agent #create type=java }

 ## Reading an Agent's value

@@ -66,10 +66,10 @@ Agents can be dereferenced (you can get an Agent's value) by invoking the Agent
 with @scala[parentheses] @java[`get()`] like this:

 Scala
-: @@snip [AgentDocSpec.scala]($code$/scala/docs/agent/AgentDocSpec.scala) { #read-apply #read-get }
+: @@snip [AgentDocSpec.scala](/akka-docs/src/test/scala/docs/agent/AgentDocSpec.scala) { #read-apply #read-get }

 Java
-: @@snip [AgentDocTest.java]($code$/java/jdocs/agent/AgentDocTest.java) { #read-get type=java }
+: @@snip [AgentDocTest.java](/akka-docs/src/test/java/jdocs/agent/AgentDocTest.java) { #read-get type=java }

 Reading an Agent's current value does not involve any message passing and
 happens immediately. So while updates to an Agent are asynchronous, reading the

@@ -80,7 +80,7 @@ state of an Agent is synchronous.
 You can also get a `Future` to the Agent's value, that will be completed after the
 currently queued updates have completed:

-@@snip [AgentDocTest.java]($code$/java/jdocs/agent/AgentDocTest.java) { #import-future #read-future type=java }
+@@snip [AgentDocTest.java](/akka-docs/src/test/java/jdocs/agent/AgentDocTest.java) { #import-future #read-future type=java }

 See @ref:[Futures](futures.md) for more information on `Futures`.

@@ -97,10 +97,10 @@ occur in order. You apply a value or a function by invoking the `send`
 function.

 Scala
-: @@snip [AgentDocSpec.scala]($code$/scala/docs/agent/AgentDocSpec.scala) { #send }
+: @@snip [AgentDocSpec.scala](/akka-docs/src/test/scala/docs/agent/AgentDocSpec.scala) { #send }

 Java
-: @@snip [AgentDocTest.java]($code$/java/jdocs/agent/AgentDocTest.java) { #import-function #send type=java }
+: @@snip [AgentDocTest.java](/akka-docs/src/test/java/jdocs/agent/AgentDocTest.java) { #import-function #send type=java }

 You can also dispatch a function to update the internal state but on its own
 thread. This does not use the reactive thread pool and can be used for
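A compact sketch of the Agent lifecycle shown in these snippets (the value and updates are illustrative):

```scala
import akka.agent.Agent
import scala.concurrent.ExecutionContext.Implicits.global

object AgentDemo {
  val counter = Agent(0)       // create an Agent holding an initial value
  counter send 7               // replace the value outright, asynchronously
  counter send (_ + 1)         // apply a function; sends from one thread run in order
  val current: Int = counter() // reading is synchronous and immediate
}
```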
@@ -109,19 +109,19 @@ method. Dispatches using either `sendOff` or `send` will still be executed
 in order.

 Scala
-: @@snip [AgentDocSpec.scala]($code$/scala/docs/agent/AgentDocSpec.scala) { #send-off }
+: @@snip [AgentDocSpec.scala](/akka-docs/src/test/scala/docs/agent/AgentDocSpec.scala) { #send-off }

 Java
-: @@snip [AgentDocTest.java]($code$/java/jdocs/agent/AgentDocTest.java) { #import-function #send-off type=java }
+: @@snip [AgentDocTest.java](/akka-docs/src/test/java/jdocs/agent/AgentDocTest.java) { #import-function #send-off type=java }

 All `send` methods also have a corresponding `alter` method that returns a `Future`.
 See @ref:[`Future`s](futures.md) for more information on `Future`s.

 Scala
-: @@snip [AgentDocSpec.scala]($code$/scala/docs/agent/AgentDocSpec.scala) { #alter #alter-off }
+: @@snip [AgentDocSpec.scala](/akka-docs/src/test/scala/docs/agent/AgentDocSpec.scala) { #alter #alter-off }

 Java
-: @@snip [AgentDocTest.java]($code$/java/jdocs/agent/AgentDocTest.java) { #import-future #import-function #alter #alter-off type=java }
+: @@snip [AgentDocTest.java](/akka-docs/src/test/java/jdocs/agent/AgentDocTest.java) { #import-future #import-function #alter #alter-off type=java }

 @@@ div { .group-scala }

@@ -130,7 +130,7 @@ Java
 You can also get a `Future` to the Agent's value, that will be completed after the
 currently queued updates have completed:

-@@snip [AgentDocSpec.scala]($code$/scala/docs/agent/AgentDocSpec.scala) { #read-future }
+@@snip [AgentDocSpec.scala](/akka-docs/src/test/scala/docs/agent/AgentDocSpec.scala) { #read-future }

 See @ref:[`Future`s](futures.md) for more information on `Future`s.

@@ -143,7 +143,7 @@ as-is. They are so-called 'persistent'.

 Example of monadic usage:

-@@snip [AgentDocSpec.scala]($code$/scala/docs/agent/AgentDocSpec.scala) { #monadic-example }
+@@snip [AgentDocSpec.scala](/akka-docs/src/test/scala/docs/agent/AgentDocSpec.scala) { #monadic-example }

 @@@

@@ -163,6 +163,6 @@ transaction is aborted. @scala[Here's an example:]

 @@@ div { .group-scala }

-@@snip [AgentDocSpec.scala]($code$/scala/docs/agent/AgentDocSpec.scala) { #transfer-example }
+@@snip [AgentDocSpec.scala](/akka-docs/src/test/scala/docs/agent/AgentDocSpec.scala) { #transfer-example }

 @@@
 @@@
@@ -54,10 +54,10 @@ APIs. The [camel-extra](http://code.google.com/p/camel-extra/) project provides
 Here's an example of using Camel's integration components in Akka.

 Scala
-: @@snip [Introduction.scala]($code$/scala/docs/camel/Introduction.scala) { #Consumer-mina }
+: @@snip [Introduction.scala](/akka-docs/src/test/scala/docs/camel/Introduction.scala) { #Consumer-mina }

 Java
-: @@snip [MyEndpoint.java]($code$/java/jdocs/camel/MyEndpoint.java) { #Consumer-mina }
+: @@snip [MyEndpoint.java](/akka-docs/src/test/java/jdocs/camel/MyEndpoint.java) { #Consumer-mina }

 The above example exposes an actor over a TCP endpoint via Apache
 Camel's [Mina component](http://camel.apache.org/mina2.html). The actor implements the @scala[`endpointUri`]@java[`getEndpointUri`] method to define

@@ -68,7 +68,7 @@ component), the actor's @scala[`endpointUri`]@java[`getEndpointUri`] method shou

 @@@ div { .group-scala }

-@@snip [Introduction.scala]($code$/scala/docs/camel/Introduction.scala) { #Consumer }
+@@snip [Introduction.scala](/akka-docs/src/test/scala/docs/camel/Introduction.scala) { #Consumer }

 @@@

@@ -85,10 +85,10 @@ Actors can also trigger message exchanges with external systems i.e. produce to
 Camel endpoints.

 Scala
-: @@snip [Introduction.scala]($code$/scala/docs/camel/Introduction.scala) { #imports #Producer }
+: @@snip [Introduction.scala](/akka-docs/src/test/scala/docs/camel/Introduction.scala) { #imports #Producer }

 Java
-: @@snip [Orders.java]($code$/java/jdocs/camel/Orders.java) { #Producer }
+: @@snip [Orders.java](/akka-docs/src/test/java/jdocs/camel/Orders.java) { #Producer }

 In the above example, any message sent to this actor will be sent to
 the JMS queue @scala[`orders`]@java[`Orders`]. Producer actors may choose from the same set of Camel

@@ -98,7 +98,7 @@ components as Consumer actors do.

 Below is an example of how to send a message to the `Orders` producer.

-@@snip [ProducerTestBase.java]($code$/java/jdocs/camel/ProducerTestBase.java) { #TellProducer }
+@@snip [ProducerTestBase.java](/akka-docs/src/test/java/jdocs/camel/ProducerTestBase.java) { #TellProducer }

 @@@

@@ -127,10 +127,10 @@ The @extref[Camel](github:akka-camel/src/main/scala/akka/camel/Camel.scala) @sca
 Below you can see how you can get access to these Apache Camel objects.

 Scala
-: @@snip [Introduction.scala]($code$/scala/docs/camel/Introduction.scala) { #CamelExtension }
+: @@snip [Introduction.scala](/akka-docs/src/test/scala/docs/camel/Introduction.scala) { #CamelExtension }

 Java
-: @@snip [CamelExtensionTest.java]($code$/java/jdocs/camel/CamelExtensionTest.java) { #CamelExtension }
+: @@snip [CamelExtensionTest.java](/akka-docs/src/test/java/jdocs/camel/CamelExtensionTest.java) { #CamelExtension }

 The `CamelExtension` is loaded only once per `ActorSystem`, which makes it safe to call the `CamelExtension` at any point in your code to get to the
 Apache Camel objects associated with it. There is one [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) and one `ProducerTemplate` for every `ActorSystem` that uses a `CamelExtension`.

@@ -141,10 +141,10 @@ This interface define a single method `getContext()` used to load the [CamelCont
 Below is an example of how to add the ActiveMQ component to the [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java), which is required when you would like to use the ActiveMQ component.

 Scala
-: @@snip [Introduction.scala]($code$/scala/docs/camel/Introduction.scala) { #CamelExtensionAddComponent }
+: @@snip [Introduction.scala](/akka-docs/src/test/scala/docs/camel/Introduction.scala) { #CamelExtensionAddComponent }

 Java
-: @@snip [CamelExtensionTest.java]($code$/java/jdocs/camel/CamelExtensionTest.java) { #CamelExtensionAddComponent }
+: @@snip [CamelExtensionTest.java](/akka-docs/src/test/java/jdocs/camel/CamelExtensionTest.java) { #CamelExtensionAddComponent }

 The [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) joins the lifecycle of the `ActorSystem` and `CamelExtension` it is associated with; the [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) is started when
 the `CamelExtension` is created, and it is shut down when the associated `ActorSystem` is shut down. The same is true for the `ProducerTemplate`.

@@ -159,19 +159,19 @@ requested the actor to be created. Some Camel components can take a while to sta
 The @extref[Camel](github:akka-camel/src/main/scala/akka/camel/Camel.scala) @scala[trait]@java[interface] allows you to find out when the endpoint is activated or deactivated.

 Scala
-: @@snip [Introduction.scala]($code$/scala/docs/camel/Introduction.scala) { #CamelActivation }
+: @@snip [Introduction.scala](/akka-docs/src/test/scala/docs/camel/Introduction.scala) { #CamelActivation }

 Java
-: @@snip [ActivationTestBase.java]($code$/java/jdocs/camel/ActivationTestBase.java) { #CamelActivation }
+: @@snip [ActivationTestBase.java](/akka-docs/src/test/java/jdocs/camel/ActivationTestBase.java) { #CamelActivation }

 The above code shows that you can get a `Future` to the activation of the route from the endpoint to the actor, or you can wait in a blocking fashion on the activation of the route.
 An `ActivationTimeoutException` is thrown if the endpoint could not be activated within the specified timeout. Deactivation works in a similar fashion:

 Scala
-: @@snip [Introduction.scala]($code$/scala/docs/camel/Introduction.scala) { #CamelDeactivation }
+: @@snip [Introduction.scala](/akka-docs/src/test/scala/docs/camel/Introduction.scala) { #CamelDeactivation }

 Java
-: @@snip [ActivationTestBase.java]($code$/java/jdocs/camel/ActivationTestBase.java) { #CamelDeactivation }
+: @@snip [ActivationTestBase.java](/akka-docs/src/test/java/jdocs/camel/ActivationTestBase.java) { #CamelDeactivation }

 Deactivation of a Consumer or a Producer actor happens when the actor is terminated. For a Consumer, the route to the actor is stopped. For a Producer, the [SendProcessor](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/processor/SendProcessor.java) is stopped.
 A `DeActivationTimeoutException` is thrown if the associated Camel objects could not be deactivated within the specified timeout.

@@ -184,10 +184,10 @@ For example, the following actor class (Consumer1) implements the
 messages from the `file:data/input/actor` Camel endpoint.

 Scala
-: @@snip [Consumers.scala]($code$/scala/docs/camel/Consumers.scala) { #Consumer1 }
+: @@snip [Consumers.scala](/akka-docs/src/test/scala/docs/camel/Consumers.scala) { #Consumer1 }

 Java
-: @@snip [Consumer1.java]($code$/java/jdocs/camel/Consumer1.java) { #Consumer1 }
+: @@snip [Consumer1.java](/akka-docs/src/test/java/jdocs/camel/Consumer1.java) { #Consumer1 }

 Whenever a file is put into the data/input/actor directory, its content is
 picked up by the Camel [file component](http://camel.apache.org/file2.html) and sent as message to the

@@ -200,10 +200,10 @@ component to start an embedded [Jetty](http://www.eclipse.org/jetty/) server, ac
 from localhost on port 8877.

 Scala
-: @@snip [Consumers.scala]($code$/scala/docs/camel/Consumers.scala) { #Consumer2 }
+: @@snip [Consumers.scala](/akka-docs/src/test/scala/docs/camel/Consumers.scala) { #Consumer2 }

 Java
-: @@snip [Consumer2.java]($code$/java/jdocs/camel/Consumer2.java) { #Consumer2 }
+: @@snip [Consumer2.java](/akka-docs/src/test/java/jdocs/camel/Consumer2.java) { #Consumer2 }

 After starting the actor, clients can send messages to that actor by POSTing to
 `http://localhost:8877/camel/default`. The actor sends a response by using the

@@ -231,10 +231,10 @@ special akka.camel.Ack message (positive acknowledgement) or a akka.actor.Status
 acknowledgement).

 Scala
-: @@snip [Consumers.scala]($code$/scala/docs/camel/Consumers.scala) { #Consumer3 }
+: @@snip [Consumers.scala](/akka-docs/src/test/scala/docs/camel/Consumers.scala) { #Consumer3 }

 Java
-: @@snip [Consumer3.java]($code$/java/jdocs/camel/Consumer3.java) { #Consumer3 }
+: @@snip [Consumer3.java](/akka-docs/src/test/java/jdocs/camel/Consumer3.java) { #Consumer3 }

 <a id="camel-timeout"></a>
 ### Consumer timeout

@@ -252,10 +252,10 @@ result in the [Exchange](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0
 The timeout on the consumer actor can be overridden with the `replyTimeout`, as shown below.

 Scala
-: @@snip [Consumers.scala]($code$/scala/docs/camel/Consumers.scala) { #Consumer4 }
+: @@snip [Consumers.scala](/akka-docs/src/test/scala/docs/camel/Consumers.scala) { #Consumer4 }

 Java
-: @@snip [Consumer4.java]($code$/java/jdocs/camel/Consumer4.java) { #Consumer4 }
+: @@snip [Consumer4.java](/akka-docs/src/test/java/jdocs/camel/Consumer4.java) { #Consumer4 }

 ## Producer Actors

@@ -263,10 +263,10 @@ For sending messages to Camel endpoints, actors need to @scala[mixin the @extref
 @java[inherit from the @extref[UntypedProducerActor](github:akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala) class] and implement the `getEndpointUri` method.

 Scala
-: @@snip [Producers.scala]($code$/scala/docs/camel/Producers.scala) { #Producer1 }
+: @@snip [Producers.scala](/akka-docs/src/test/scala/docs/camel/Producers.scala) { #Producer1 }

 Java
-: @@snip [Producer1.java]($code$/java/jdocs/camel/Producer1.java) { #Producer1 }
+: @@snip [Producer1.java](/akka-docs/src/test/java/jdocs/camel/Producer1.java) { #Producer1 }

 Producer1 inherits a default implementation of the @scala[`receive`]@java[`onReceive`] method from the
 @scala[Producer trait]@java[@extref[UntypedProducerActor](github:akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala)] class. To customize a producer actor's default behavior you must override the

@@ -282,10 +282,10 @@ following example uses the ask pattern to send a message to a
 Producer actor and waits for a response.

 Scala
-: @@snip [Producers.scala]($code$/scala/docs/camel/Producers.scala) { #AskProducer }
+: @@snip [Producers.scala](/akka-docs/src/test/scala/docs/camel/Producers.scala) { #AskProducer }

 Java
-: @@snip [ProducerTestBase.java]($code$/java/jdocs/camel/ProducerTestBase.java) { #AskProducer }
+: @@snip [ProducerTestBase.java](/akka-docs/src/test/java/jdocs/camel/ProducerTestBase.java) { #AskProducer }

 The future contains the response `CamelMessage`, or an `AkkaCamelException` when an error occurred, which contains the headers of the response.
@ -298,22 +298,22 @@ message is forwarded to a target actor instead of being replied to the original
|
|||
sender.
|
||||
|
||||
Scala
|
||||
: @@snip [Producers.scala]($code$/scala/docs/camel/Producers.scala) { #RouteResponse }
|
||||
: @@snip [Producers.scala](/akka-docs/src/test/scala/docs/camel/Producers.scala) { #RouteResponse }
|
||||
|
||||
Java
|
||||
: @@snip [ResponseReceiver.java]($code$/java/jdocs/camel/ResponseReceiver.java) { #RouteResponse }
|
||||
@@snip [Forwarder.java]($code$/java/jdocs/camel/Forwarder.java) { #RouteResponse }
|
||||
@@snip [OnRouteResponseTestBase.java]($code$/java/jdocs/camel/OnRouteResponseTestBase.java) { #RouteResponse }
|
||||
: @@snip [ResponseReceiver.java](/akka-docs/src/test/java/jdocs/camel/ResponseReceiver.java) { #RouteResponse }
|
||||
@@snip [Forwarder.java](/akka-docs/src/test/java/jdocs/camel/Forwarder.java) { #RouteResponse }
|
||||
@@snip [OnRouteResponseTestBase.java](/akka-docs/src/test/java/jdocs/camel/OnRouteResponseTestBase.java) { #RouteResponse }
|
||||
|
||||
Before producing messages to endpoints, producer actors can pre-process them by
|
||||
overriding the @scala[@extref[Producer](github:akka-camel/src/main/scala/akka/camel/Producer.scala).transformOutgoingMessage]
|
||||
@java[@extref[UntypedProducerActor](github:akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala).onTransformOutgoingMessag] method.
|
||||
|
||||
Scala
|
||||
: @@snip [Producers.scala]($code$/scala/docs/camel/Producers.scala) { #TransformOutgoingMessage }
|
||||
: @@snip [Producers.scala](/akka-docs/src/test/scala/docs/camel/Producers.scala) { #TransformOutgoingMessage }
|
||||
|
||||
Java
|
||||
: @@snip [Transformer.java]($code$/java/jdocs/camel/Transformer.java) { #TransformOutgoingMessage }
|
||||
: @@snip [Transformer.java](/akka-docs/src/test/java/jdocs/camel/Transformer.java) { #TransformOutgoingMessage }
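
For orientation, a hedged sketch of such a pre-processing override (endpoint URI and transformation are illustrative):

```scala
import akka.actor.Actor
import akka.camel.{ CamelMessage, Producer }

class Transformer extends Actor with Producer {
  def endpointUri = "direct:transformer" // illustrative

  // runs before the message is sent to the endpoint
  override def transformOutgoingMessage(msg: Any): Any = msg match {
    case m: CamelMessage => m.mapBody((body: String) => body + " - transformed")
    case other           => other
  }
}
```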

### Producer configuration options
@@ -323,10 +323,10 @@ respectively). By default, the producer initiates an in-out message exchange
with the endpoint. For initiating an in-only exchange, producer actors have to override the @scala[`oneway`]@java[`isOneway`] method to return true.

Scala
-: @@snip [Producers.scala]($code$/scala/docs/camel/Producers.scala) { #Oneway }
+: @@snip [Producers.scala](/akka-docs/src/test/scala/docs/camel/Producers.scala) { #Oneway }

Java
-: @@snip [OnewaySender.java]($code$/java/jdocs/camel/OnewaySender.java) { #Oneway }
+: @@snip [OnewaySender.java](/akka-docs/src/test/java/jdocs/camel/OnewaySender.java) { #Oneway }
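
A minimal sketch of such an override (the endpoint is illustrative):

```scala
import akka.actor.Actor
import akka.camel.Producer

class OnewaySender extends Actor with Producer {
  def endpointUri = "activemq:FOO.BAR"  // illustrative endpoint
  override def oneway: Boolean = true   // initiate in-only exchanges
}
```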

### Message correlation
@@ -334,10 +334,10 @@ To correlate request with response messages, applications can set the
`Message.MessageExchangeId` message header.

Scala
-: @@snip [Producers.scala]($code$/scala/docs/camel/Producers.scala) { #Correlate }
+: @@snip [Producers.scala](/akka-docs/src/test/scala/docs/camel/Producers.scala) { #Correlate }

Java
-: @@snip [ProducerTestBase.java]($code$/java/jdocs/camel/ProducerTestBase.java) { #Correlate }
+: @@snip [ProducerTestBase.java](/akka-docs/src/test/java/jdocs/camel/ProducerTestBase.java) { #Correlate }
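
Setting that header could look like the following sketch (`producer` and the id are illustrative):

```scala
import akka.camel.CamelMessage

// attach a correlation id so the response can be matched to this request
producer ! CamelMessage("some body", Map(CamelMessage.MessageExchangeId -> "123"))
```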

### ProducerTemplate
@@ -346,19 +346,19 @@ convenient way for actors to produce messages to Camel endpoints. Actors may als
`ProducerTemplate` for producing messages to endpoints.

Scala
-: @@snip [Producers.scala]($code$/scala/docs/camel/Producers.scala) { #ProducerTemplate }
+: @@snip [Producers.scala](/akka-docs/src/test/scala/docs/camel/Producers.scala) { #ProducerTemplate }

Java
-: @@snip [MyActor.java]($code$/java/jdocs/camel/MyActor.java) { #ProducerTemplate }
+: @@snip [MyActor.java](/akka-docs/src/test/java/jdocs/camel/MyActor.java) { #ProducerTemplate }
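
A hedged sketch of obtaining and using the template from inside an actor (endpoint URI illustrative):

```scala
import akka.actor.Actor
import akka.camel.CamelExtension

class MyActor extends Actor {
  def receive = {
    case msg =>
      val template = CamelExtension(context.system).template
      template.sendBody("direct:news", msg) // fire-and-forget in-only exchange
  }
}
```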

For initiating a two-way message exchange, one of the
`ProducerTemplate.request*` methods must be used.

Scala
-: @@snip [Producers.scala]($code$/scala/docs/camel/Producers.scala) { #RequestProducerTemplate }
+: @@snip [Producers.scala](/akka-docs/src/test/scala/docs/camel/Producers.scala) { #RequestProducerTemplate }

Java
-: @@snip [RequestBodyActor.java]($code$/java/jdocs/camel/RequestBodyActor.java) { #RequestProducerTemplate }
+: @@snip [RequestBodyActor.java](/akka-docs/src/test/java/jdocs/camel/RequestBodyActor.java) { #RequestProducerTemplate }

<a id="camel-asynchronous-routing"></a>
## Asynchronous routing
@@ -463,12 +463,12 @@ reference an `ActorRef` directly as shown in the example below. The route starts
ends at the target actor.

Scala
-: @@snip [CustomRoute.scala]($code$/scala/docs/camel/CustomRoute.scala) { #CustomRoute }
+: @@snip [CustomRoute.scala](/akka-docs/src/test/scala/docs/camel/CustomRoute.scala) { #CustomRoute }

Java
-: @@snip [Responder.java]($code$/java/jdocs/camel/Responder.java) { #CustomRoute }
-@@snip [CustomRouteBuilder.java]($code$/java/jdocs/camel/CustomRouteBuilder.java) { #CustomRoute }
-@@snip [CustomRouteTestBase.java]($code$/java/jdocs/camel/CustomRouteTestBase.java) { #CustomRoute }
+: @@snip [Responder.java](/akka-docs/src/test/java/jdocs/camel/Responder.java) { #CustomRoute }
+@@snip [CustomRouteBuilder.java](/akka-docs/src/test/java/jdocs/camel/CustomRouteBuilder.java) { #CustomRoute }
+@@snip [CustomRouteTestBase.java](/akka-docs/src/test/java/jdocs/camel/CustomRouteTestBase.java) { #CustomRoute }

@java[The `CamelPath.toCamelUri` method converts the `ActorRef` to the Camel actor component URI format which points to the actor endpoint as described above.]
When a message is received on the jetty endpoint, it is routed to the `Responder` actor, which in turn replies back to the client of
@@ -487,10 +487,10 @@ The following examples demonstrate how to extend a route to a consumer actor for
handling exceptions thrown by that actor.

Scala
-: @@snip [CustomRoute.scala]($code$/scala/docs/camel/CustomRoute.scala) { #ErrorThrowingConsumer }
+: @@snip [CustomRoute.scala](/akka-docs/src/test/scala/docs/camel/CustomRoute.scala) { #ErrorThrowingConsumer }

Java
-: @@snip [ErrorThrowingConsumer.java]($code$/java/jdocs/camel/ErrorThrowingConsumer.java) { #ErrorThrowingConsumer }
+: @@snip [ErrorThrowingConsumer.java](/akka-docs/src/test/java/jdocs/camel/ErrorThrowingConsumer.java) { #ErrorThrowingConsumer }

The above `ErrorThrowingConsumer` sends the `Failure` back to the sender in `preRestart`
because the `Exception` that is thrown in the actor would
@@ -105,28 +105,28 @@ akka.extensions = ["akka.cluster.client.ClusterClientReceptionist"]
Next, register the actors that should be available for the client.

Scala
-: @@snip [ClusterClientSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/client/ClusterClientSpec.scala) { #server }
+: @@snip [ClusterClientSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/client/ClusterClientSpec.scala) { #server }

Java
-: @@snip [ClusterClientTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java) { #server }
+: @@snip [ClusterClientTest.java](/akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java) { #server }

On the client you create the @unidoc[ClusterClient] actor and use it as a gateway for sending
messages to the actors identified by their path (without address information) somewhere
in the cluster.

Scala
-: @@snip [ClusterClientSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/client/ClusterClientSpec.scala) { #client }
+: @@snip [ClusterClientSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/client/ClusterClientSpec.scala) { #client }

Java
-: @@snip [ClusterClientTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java) { #client }
+: @@snip [ClusterClientTest.java](/akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java) { #client }
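
A sketch of creating and using such a client (`initialContacts` is the set described just below; the service path is illustrative):

```scala
import akka.cluster.client.{ ClusterClient, ClusterClientSettings }

val client = system.actorOf(
  ClusterClient.props(
    ClusterClientSettings(system).withInitialContacts(initialContacts)),
  "client")

// send to a registered actor somewhere in the cluster
client ! ClusterClient.Send("/user/serviceA/serviceActor", "hello", localAffinity = true)
```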

The `initialContacts` parameter is a @scala[`Set[ActorPath]`]@java[`Set<ActorPath>`], which can be created like this:

Scala
-: @@snip [ClusterClientSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/client/ClusterClientSpec.scala) { #initialContacts }
+: @@snip [ClusterClientSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/client/ClusterClientSpec.scala) { #initialContacts }

Java
-: @@snip [ClusterClientTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java) { #initialContacts }
+: @@snip [ClusterClientTest.java](/akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java) { #initialContacts }
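
For example (host names and the system name are illustrative):

```scala
import akka.actor.ActorPath

val initialContacts = Set(
  ActorPath.fromString("akka.tcp://OtherSys@host1:2552/system/receptionist"),
  ActorPath.fromString("akka.tcp://OtherSys@host2:2552/system/receptionist"))
```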

You will probably define the address information of the initial contact points in configuration or a system property.
See also [Configuration](#cluster-client-config).
@@ -160,18 +160,18 @@ receptionists), as they become available. The code illustrates subscribing to th
initial state.

Scala
-: @@snip [ClusterClientSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/client/ClusterClientSpec.scala) { #clientEventsListener }
+: @@snip [ClusterClientSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/client/ClusterClientSpec.scala) { #clientEventsListener }

Java
-: @@snip [ClusterClientTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java) { #clientEventsListener }
+: @@snip [ClusterClientTest.java](/akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java) { #clientEventsListener }

Similarly we can have an actor that behaves in a similar fashion for learning what cluster clients are connected to a @unidoc[ClusterClientReceptionist]:

Scala
-: @@snip [ClusterClientSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/client/ClusterClientSpec.scala) { #receptionistEventsListener }
+: @@snip [ClusterClientSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/client/ClusterClientSpec.scala) { #receptionistEventsListener }

Java
-: @@snip [ClusterClientTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java) { #receptionistEventsListener }
+: @@snip [ClusterClientTest.java](/akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java) { #receptionistEventsListener }

<a id="cluster-client-config"></a>
## Configuration
@@ -179,7 +179,7 @@ Java
The @unidoc[ClusterClientReceptionist] extension (or @unidoc[akka.cluster.client.ClusterReceptionistSettings]) can be configured
with the following properties:

-@@snip [reference.conf]($akka$/akka-cluster-tools/src/main/resources/reference.conf) { #receptionist-ext-config }
+@@snip [reference.conf](/akka-cluster-tools/src/main/resources/reference.conf) { #receptionist-ext-config }

The following configuration properties are read by the @unidoc[ClusterClientSettings]
when created with an @scala[@scaladoc[`ActorSystem`](akka.actor.ActorSystem)]@java[@javadoc[`ActorSystem`](akka.actor.ActorSystem)] parameter. It is also possible to amend the @unidoc[ClusterClientSettings]
@@ -187,7 +187,7 @@ or create it from another config section with the same layout as below. @unidoc[
a parameter to the @scala[@scaladoc[`ClusterClient.props`](akka.cluster.client.ClusterClient$)]@java[@javadoc[`ClusterClient.props`](akka.cluster.client.ClusterClient$)] factory method, i.e. each client can be configured
with different settings if needed.

-@@snip [reference.conf]($akka$/akka-cluster-tools/src/main/resources/reference.conf) { #cluster-client-config }
+@@snip [reference.conf](/akka-cluster-tools/src/main/resources/reference.conf) { #cluster-client-config }

## Failure handling
@@ -98,10 +98,10 @@ if you see this in log messages.
You can retrieve information about what data center a member belongs to:

Scala
-: @@snip [ClusterDocSpec.scala]($code$/scala/docs/cluster/ClusterDocSpec.scala) { #dcAccess }
+: @@snip [ClusterDocSpec.scala](/akka-docs/src/test/scala/docs/cluster/ClusterDocSpec.scala) { #dcAccess }

Java
-: @@snip [ClusterDocTest.java]($code$/java/jdocs/cluster/ClusterDocTest.java) { #dcAccess }
+: @@snip [ClusterDocTest.java](/akka-docs/src/test/java/jdocs/cluster/ClusterDocTest.java) { #dcAccess }
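
A minimal sketch of that access, assuming a classic `Cluster` extension:

```scala
import akka.cluster.Cluster

val cluster = Cluster(system)
val dc: String = cluster.selfDataCenter // the data center of this node
```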

## Failure Detection
@@ -156,10 +156,10 @@ having a global singleton in one data center and accessing it from other data ce
This is how to create a singleton proxy for a specific data center:

Scala
-: @@snip [ClusterSingletonManagerSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala) { #create-singleton-proxy-dc }
+: @@snip [ClusterSingletonManagerSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala) { #create-singleton-proxy-dc }

Java
-: @@snip [ClusterSingletonManagerTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/singleton/ClusterSingletonManagerTest.java) { #create-singleton-proxy-dc }
+: @@snip [ClusterSingletonManagerTest.java](/akka-cluster-tools/src/test/java/akka/cluster/singleton/ClusterSingletonManagerTest.java) { #create-singleton-proxy-dc }
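
A hedged sketch, assuming the singleton manager runs at the illustrative path `/user/consumer` and a data center named `"B"`:

```scala
import akka.cluster.singleton.{ ClusterSingletonProxy, ClusterSingletonProxySettings }

val proxyDcB = system.actorOf(
  ClusterSingletonProxy.props(
    singletonManagerPath = "/user/consumer", // illustrative manager path
    settings = ClusterSingletonProxySettings(system).withDataCenter("B")),
  name = "consumerProxyDcB")
```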

If `withDataCenter` is given the node's own data center, the result is a proxy for the singleton in that data center, which
is also the default if `withDataCenter` is not given.
@@ -193,10 +193,10 @@ accessing them from other data centers.
This is how to create a sharding proxy for a specific data center:

Scala
-: @@snip [ClusterShardingSpec.scala]($akka$/akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingSpec.scala) { #proxy-dc }
+: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingSpec.scala) { #proxy-dc }

Java
-: @@snip [ClusterShardingTest.java]($code$/java/jdocs/sharding/ClusterShardingTest.java) { #proxy-dc }
+: @@snip [ClusterShardingTest.java](/akka-docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #proxy-dc }
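
A hedged sketch, assuming the `extractEntityId`/`extractShardId` functions from the sharding documentation and a data center named `"B"`:

```scala
import akka.cluster.sharding.ClusterSharding

val counterProxyDcB = ClusterSharding(system).startProxy(
  typeName = "Counter",
  role = None,
  dataCenter = Some("B"), // the data center hosting the entities
  extractEntityId = extractEntityId,
  extractShardId = extractShardId)
```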

Another way to manage global entities is to make sure that certain entity ids are located in
only one data center by routing the messages to the right region. For example, the routing function
@@ -138,18 +138,18 @@ Let's take a look at this router in action. What can be more demanding than calc
The backend worker that performs the factorial calculation:

Scala
-: @@snip [FactorialBackend.scala]($code$/scala/docs/cluster/FactorialBackend.scala) { #backend }
+: @@snip [FactorialBackend.scala](/akka-docs/src/test/scala/docs/cluster/FactorialBackend.scala) { #backend }

Java
-: @@snip [FactorialBackend.java]($code$/java/jdocs/cluster/FactorialBackend.java) { #backend }
+: @@snip [FactorialBackend.java](/akka-docs/src/test/java/jdocs/cluster/FactorialBackend.java) { #backend }

The frontend that receives user jobs and delegates to the backends via the router:

Scala
-: @@snip [FactorialFrontend.scala]($code$/scala/docs/cluster/FactorialFrontend.scala) { #frontend }
+: @@snip [FactorialFrontend.scala](/akka-docs/src/test/scala/docs/cluster/FactorialFrontend.scala) { #frontend }

Java
-: @@snip [FactorialFrontend.java]($code$/java/jdocs/cluster/FactorialFrontend.java) { #frontend }
+: @@snip [FactorialFrontend.java](/akka-docs/src/test/java/jdocs/cluster/FactorialFrontend.java) { #frontend }

As you can see, the router is defined in the same way as other routers, and in this case it is configured as follows:
@@ -180,10 +180,10 @@ other things work in the same way as other routers.
The same type of router could also have been defined in code:

Scala
-: @@snip [FactorialFrontend.scala]($code$/scala/docs/cluster/FactorialFrontend.scala) { #router-lookup-in-code #router-deploy-in-code }
+: @@snip [FactorialFrontend.scala](/akka-docs/src/test/scala/docs/cluster/FactorialFrontend.scala) { #router-lookup-in-code #router-deploy-in-code }

Java
-: @@snip [FactorialFrontend.java]($code$/java/jdocs/cluster/FactorialFrontend.java) { #router-lookup-in-code #router-deploy-in-code }
+: @@snip [FactorialFrontend.java](/akka-docs/src/test/java/jdocs/cluster/FactorialFrontend.java) { #router-lookup-in-code #router-deploy-in-code }

The easiest way to run the **Adaptive Load Balancing** example yourself is to download the ready-to-run
@scala[@extref[Akka Cluster Sample with Scala](ecs:akka-samples-cluster-scala)] @java[@extref[Akka Cluster Sample with Java](ecs:akka-samples-cluster-java)]
@@ -196,10 +196,10 @@ The source code of this sample can be found in the
It is possible to subscribe to the metrics events directly to implement other functionality.

Scala
-: @@snip [MetricsListener.scala]($code$/scala/docs/cluster/MetricsListener.scala) { #metrics-listener }
+: @@snip [MetricsListener.scala](/akka-docs/src/test/scala/docs/cluster/MetricsListener.scala) { #metrics-listener }

Java
-: @@snip [MetricsListener.java]($code$/java/jdocs/cluster/MetricsListener.java) { #metrics-listener }
+: @@snip [MetricsListener.java](/akka-docs/src/test/java/jdocs/cluster/MetricsListener.java) { #metrics-listener }
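
A hedged sketch of such a subscriber (what it does with the metrics is illustrative):

```scala
import akka.actor.Actor
import akka.cluster.metrics.{ ClusterMetricsChanged, ClusterMetricsExtension }

class MetricsListener extends Actor {
  val extension = ClusterMetricsExtension(context.system)

  override def preStart(): Unit = extension.subscribe(self)
  override def postStop(): Unit = extension.unsubscribe(self)

  def receive = {
    case ClusterMetricsChanged(nodeMetrics) =>
      nodeMetrics.foreach(m => println(s"Metrics for ${m.address}"))
  }
}
```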

## Custom Metrics Collector
@@ -217,4 +217,4 @@ Custom metrics collector implementation class must be specified in the

The Cluster metrics extension can be configured with the following properties:

-@@snip [reference.conf]($akka$/akka-cluster-metrics/src/main/resources/reference.conf)
+@@snip [reference.conf](/akka-cluster-metrics/src/main/resources/reference.conf)
@@ -72,10 +72,10 @@ Set it to a lower value if you want to limit total number of routees.
The same type of router could also have been defined in code:

Scala
-: @@snip [StatsService.scala]($akka$/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsService.scala) { #router-lookup-in-code }
+: @@snip [StatsService.scala](/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsService.scala) { #router-lookup-in-code }

Java
-: @@snip [StatsService.java]($code$/java/jdocs/cluster/StatsService.java) { #router-lookup-in-code }
+: @@snip [StatsService.java](/akka-docs/src/test/java/jdocs/cluster/StatsService.java) { #router-lookup-in-code }

See the [configuration](#cluster-configuration) section for further descriptions of the settings.
@@ -93,31 +93,31 @@ the average number of characters per word when all results have been collected.
Messages:

Scala
-: @@snip [StatsMessages.scala]($akka$/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsMessages.scala) { #messages }
+: @@snip [StatsMessages.scala](/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsMessages.scala) { #messages }

Java
-: @@snip [StatsMessages.java]($code$/java/jdocs/cluster/StatsMessages.java) { #messages }
+: @@snip [StatsMessages.java](/akka-docs/src/test/java/jdocs/cluster/StatsMessages.java) { #messages }

The worker that counts the number of characters in each word:

Scala
-: @@snip [StatsWorker.scala]($akka$/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsWorker.scala) { #worker }
+: @@snip [StatsWorker.scala](/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsWorker.scala) { #worker }

Java
-: @@snip [StatsWorker.java]($code$/java/jdocs/cluster/StatsWorker.java) { #worker }
+: @@snip [StatsWorker.java](/akka-docs/src/test/java/jdocs/cluster/StatsWorker.java) { #worker }

The service that receives text from users and splits it up into words, delegates to workers and aggregates:

@@@ div { .group-scala }

-@@snip [StatsService.scala]($akka$/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsService.scala) { #service }
+@@snip [StatsService.scala](/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsService.scala) { #service }

@@@

@@@ div { .group-java }

-@@snip [StatsService.java]($code$/java/jdocs/cluster/StatsService.java) { #service }
-@@snip [StatsAggregator.java]($code$/java/jdocs/cluster/StatsAggregator.java) { #aggregator }
+@@snip [StatsService.java](/akka-docs/src/test/java/jdocs/cluster/StatsService.java) { #service }
+@@snip [StatsAggregator.java](/akka-docs/src/test/java/jdocs/cluster/StatsAggregator.java) { #aggregator }

@@@
@@ -180,10 +180,10 @@ Set it to a lower value if you want to limit total number of routees.
The same type of router could also have been defined in code:

Scala
-: @@snip [StatsService.scala]($akka$/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsService.scala) { #router-deploy-in-code }
+: @@snip [StatsService.scala](/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsService.scala) { #router-deploy-in-code }

Java
-: @@snip [StatsService.java]($code$/java/jdocs/cluster/StatsService.java) { #router-deploy-in-code }
+: @@snip [StatsService.java](/akka-docs/src/test/java/jdocs/cluster/StatsService.java) { #router-deploy-in-code }

See the [configuration](#cluster-configuration) section for further descriptions of the settings.
@@ -206,7 +206,7 @@ Scala
@@@

Java
-: @@snip [StatsSampleOneMasterMain.java]($code$/java/jdocs/cluster/StatsSampleOneMasterMain.java) { #create-singleton-manager }
+: @@snip [StatsSampleOneMasterMain.java](/akka-docs/src/test/java/jdocs/cluster/StatsSampleOneMasterMain.java) { #create-singleton-manager }

We also need an actor on each node that keeps track of where the current single master exists and
delegates jobs to the `StatsService`. That is provided by the `ClusterSingletonProxy`:
@@ -223,7 +223,7 @@ Scala
@@@

Java
-: @@snip [StatsSampleOneMasterMain.java]($code$/java/jdocs/cluster/StatsSampleOneMasterMain.java) { #singleton-proxy }
+: @@snip [StatsSampleOneMasterMain.java](/akka-docs/src/test/java/jdocs/cluster/StatsSampleOneMasterMain.java) { #singleton-proxy }

The `ClusterSingletonProxy` receives text from users and delegates to the current `StatsService`, the single
master. It listens to cluster events to look up the `StatsService` on the oldest node.
@@ -55,10 +55,10 @@ See @ref:[Downing](cluster-usage.md#automatic-vs-manual-downing).
This is what an entity actor may look like:

Scala
-: @@snip [ClusterShardingSpec.scala]($akka$/akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingSpec.scala) { #counter-actor }
+: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingSpec.scala) { #counter-actor }

Java
-: @@snip [ClusterShardingTest.java]($code$/java/jdocs/sharding/ClusterShardingTest.java) { #counter-actor }
+: @@snip [ClusterShardingTest.java](/akka-docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-actor }
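
A hedged sketch of such an entity (messages, event and id scheme are illustrative, not the repository snippet):

```scala
import akka.persistence.PersistentActor

case object Increment
final case class Get(counterId: Long)
final case class CounterChanged(delta: Int)

class Counter extends PersistentActor {
  // a stable id derived from the entity name assigned by Cluster Sharding
  override def persistenceId: String = "Counter-" + self.path.name

  var count = 0

  override def receiveRecover: Receive = {
    case CounterChanged(delta) => count += delta
  }

  override def receiveCommand: Receive = {
    case Increment => persist(CounterChanged(+1)) { evt => count += evt.delta }
    case Get(_)    => sender() ! count
  }
}
```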

The above actor uses event sourcing and the support provided in @scala[`PersistentActor`] @java[`AbstractPersistentActor`] to store its state.
It does not have to be a persistent actor, but in case of failure or migration of entities between nodes it must be able to recover
@@ -75,19 +75,19 @@ in case if there is no match between the roles of the current cluster node and t
`ClusterShardingSettings`.

Scala
-: @@snip [ClusterShardingSpec.scala]($akka$/akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingSpec.scala) { #counter-start }
+: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingSpec.scala) { #counter-start }

Java
-: @@snip [ClusterShardingTest.java]($code$/java/jdocs/sharding/ClusterShardingTest.java) { #counter-start }
+: @@snip [ClusterShardingTest.java](/akka-docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-start }
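
For reference, the start call is essentially of this shape (a sketch, using the `Counter` entity above and the extractor functions sketched next):

```scala
import akka.actor.{ ActorRef, Props }
import akka.cluster.sharding.{ ClusterSharding, ClusterShardingSettings }

val counterRegion: ActorRef = ClusterSharding(system).start(
  typeName = "Counter",
  entityProps = Props[Counter],
  settings = ClusterShardingSettings(system),
  extractEntityId = extractEntityId,
  extractShardId = extractShardId)
```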

The @scala[`extractEntityId` and `extractShardId` are two] @java[`messageExtractor` defines] application-specific @scala[functions] @java[methods] to extract the entity
identifier and the shard identifier from incoming messages.

Scala
-: @@snip [ClusterShardingSpec.scala]($akka$/akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingSpec.scala) { #counter-extractor }
+: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingSpec.scala) { #counter-extractor }

Java
-: @@snip [ClusterShardingTest.java]($code$/java/jdocs/sharding/ClusterShardingTest.java) { #counter-extractor }
+: @@snip [ClusterShardingTest.java](/akka-docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-extractor }
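
A sketch of the two functions, reusing the illustrative `Get` message from the entity sketch above and an assumed `EntityEnvelope` wrapper:

```scala
import akka.cluster.sharding.ShardRegion

final case class EntityEnvelope(id: Long, payload: Any)

val extractEntityId: ShardRegion.ExtractEntityId = {
  case EntityEnvelope(id, payload) => (id.toString, payload)
  case msg @ Get(id)               => (id.toString, msg)
}

val numberOfShards = 100

val extractShardId: ShardRegion.ExtractShardId = {
  case EntityEnvelope(id, _) => (id % numberOfShards).toString
  case Get(id)               => (id % numberOfShards).toString
}
```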

This example illustrates two different ways to define the entity identifier in the messages:
@@ -122,10 +122,10 @@ delegate the message to the right node and it will create the entity actor on de
first message for a specific entity is delivered.

Scala
-: @@snip [ClusterShardingSpec.scala]($akka$/akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingSpec.scala) { #counter-usage }
+: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingSpec.scala) { #counter-usage }

Java
-: @@snip [ClusterShardingTest.java]($code$/java/jdocs/sharding/ClusterShardingTest.java) { #counter-usage }
+: @@snip [ClusterShardingTest.java](/akka-docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-usage }
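
A usage sketch with the illustrative messages from above:

```scala
import akka.actor.ActorRef
import akka.cluster.sharding.ClusterSharding

val counterRegion: ActorRef = ClusterSharding(system).shardRegion("Counter")
counterRegion ! Get(123)
counterRegion ! EntityEnvelope(123, Increment)
```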

@@@ div { .group-scala }
@@ -344,10 +344,10 @@ the `rememberEntities` flag to true in `ClusterShardingSettings` when calling
extract from the `EntityId`.

Scala
-: @@snip [ClusterShardingSpec.scala]($akka$/akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingSpec.scala) { #extractShardId-StartEntity }
+: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingSpec.scala) { #extractShardId-StartEntity }

Java
-: @@snip [ClusterShardingTest.java]($code$/java/jdocs/sharding/ClusterShardingTest.java) { #extractShardId-StartEntity }
+: @@snip [ClusterShardingTest.java](/akka-docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #extractShardId-StartEntity }
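
A sketch of handling `ShardRegion.StartEntity` in the shard extractor, with `numberOfShards` and the messages as above:

```scala
import akka.cluster.sharding.ShardRegion

val extractShardId: ShardRegion.ExtractShardId = {
  case EntityEnvelope(id, _)       => (id % numberOfShards).toString
  case Get(id)                     => (id % numberOfShards).toString
  // StartEntity carries the EntityId (a String) when entities are remembered
  case ShardRegion.StartEntity(id) => (id.toLong % numberOfShards).toString
}
```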

When configured to remember entities, whenever a `Shard` is rebalanced onto another
node or recovers after a crash it will recreate all the entities which were previously
@@ -381,18 +381,18 @@ you need to create an intermediate parent actor that defines the `supervisorStra
child entity actor.

Scala
-: @@snip [ClusterShardingSpec.scala]($akka$/akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingSpec.scala) { #supervisor }
+: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingSpec.scala) { #supervisor }

Java
-: @@snip [ClusterShardingTest.java]($code$/java/jdocs/sharding/ClusterShardingTest.java) { #supervisor }
+: @@snip [ClusterShardingTest.java](/akka-docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #supervisor }

You start such a supervisor in the same way as if it were the entity actor.

Scala
-: @@snip [ClusterShardingSpec.scala]($akka$/akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingSpec.scala) { #counter-supervisor-start }
+: @@snip [ClusterShardingSpec.scala](/akka-cluster-sharding/src/multi-jvm/scala/akka/cluster/sharding/ClusterShardingSpec.scala) { #counter-supervisor-start }

Java
-: @@snip [ClusterShardingTest.java]($code$/java/jdocs/sharding/ClusterShardingTest.java) { #counter-supervisor-start }
+: @@snip [ClusterShardingTest.java](/akka-docs/src/test/java/jdocs/sharding/ClusterShardingTest.java) { #counter-supervisor-start }

Note that stopped entities will be started again when a new message is targeted to the entity.
@@ -466,7 +466,7 @@ with the same layout as below. `ClusterShardingSettings` is a parameter to the `
the `ClusterSharding` extension, i.e. each entity type can be configured with different settings
if needed.

-@@snip [reference.conf]($akka$/akka-cluster-sharding/src/main/resources/reference.conf) { #sharding-ext-config }
+@@snip [reference.conf](/akka-cluster-sharding/src/main/resources/reference.conf) { #sharding-ext-config }

A custom shard allocation strategy can be defined in an optional parameter to
`ClusterSharding.start`. See the API documentation of @scala[`ShardAllocationStrategy`] @java[`AbstractShardAllocationStrategy`] for details
@@ -99,19 +99,19 @@ Before explaining how to create a cluster singleton actor, let's define message
which will be used by the singleton.

Scala
-: @@snip [ClusterSingletonManagerSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala) { #singleton-message-classes }
+: @@snip [ClusterSingletonManagerSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala) { #singleton-message-classes }

Java
-: @@snip [ClusterSingletonManagerTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/singleton/TestSingletonMessages.java) { #singleton-message-classes }
+: @@snip [ClusterSingletonManagerTest.java](/akka-cluster-tools/src/test/java/akka/cluster/singleton/TestSingletonMessages.java) { #singleton-message-classes }

On each node in the cluster you need to start the `ClusterSingletonManager` and
supply the `Props` of the singleton actor, in this case the JMS queue consumer.

Scala
-: @@snip [ClusterSingletonManagerSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala) { #create-singleton-manager }
+: @@snip [ClusterSingletonManagerSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala) { #create-singleton-manager }

Java
-: @@snip [ClusterSingletonManagerTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/singleton/ClusterSingletonManagerTest.java) { #create-singleton-manager }
+: @@snip [ClusterSingletonManagerTest.java](/akka-cluster-tools/src/test/java/akka/cluster/singleton/ClusterSingletonManagerTest.java) { #create-singleton-manager }
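
A hedged sketch of that call (the `Consumer` actor and name are illustrative; `PoisonPill` is a fine termination message if you only need to stop the actor):

```scala
import akka.actor.{ PoisonPill, Props }
import akka.cluster.singleton.{ ClusterSingletonManager, ClusterSingletonManagerSettings }

system.actorOf(
  ClusterSingletonManager.props(
    singletonProps = Props[Consumer], // the singleton actor, e.g. the JMS queue consumer
    terminationMessage = PoisonPill,
    settings = ClusterSingletonManagerSettings(system).withRole("worker")),
  name = "consumer")
```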

Here we limit the singleton to nodes tagged with the `"worker"` role, but all nodes, independent of
role, can be used by not specifying `withRole`.
@@ -123,19 +123,19 @@ perfectly fine `terminationMessage` if you only need to stop the actor.
Here is how the singleton actor handles the `terminationMessage` in this example.

Scala
-: @@snip [ClusterSingletonManagerSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala) { #consumer-end }
+: @@snip [ClusterSingletonManagerSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala) { #consumer-end }

Java
-: @@snip [ClusterSingletonManagerTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/singleton/Consumer.java) { #consumer-end }
+: @@snip [ClusterSingletonManagerTest.java](/akka-cluster-tools/src/test/java/akka/cluster/singleton/Consumer.java) { #consumer-end }

With the names given above, access to the singleton can be obtained from any cluster node using a properly
configured proxy.

Scala
-: @@snip [ClusterSingletonManagerSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala) { #create-singleton-proxy }
+: @@snip [ClusterSingletonManagerSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/singleton/ClusterSingletonManagerSpec.scala) { #create-singleton-proxy }

Java
-: @@snip [ClusterSingletonManagerTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/singleton/ClusterSingletonManagerTest.java) { #create-singleton-proxy }
+: @@snip [ClusterSingletonManagerTest.java](/akka-cluster-tools/src/test/java/akka/cluster/singleton/ClusterSingletonManagerTest.java) { #create-singleton-proxy }
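
A sketch of such a proxy, matching the names assumed in the manager sketch above:

```scala
import akka.cluster.singleton.{ ClusterSingletonProxy, ClusterSingletonProxySettings }

val proxy = system.actorOf(
  ClusterSingletonProxy.props(
    singletonManagerPath = "/user/consumer",
    settings = ClusterSingletonProxySettings(system).withRole("worker")),
  name = "consumerProxy")
```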

A more comprehensive sample is available in the tutorial named
@scala[[Distributed workers with Akka and Scala!](https://github.com/typesafehub/activator-akka-distributed-workers)]@java[[Distributed workers with Akka and Java!](https://github.com/typesafehub/activator-akka-distributed-workers-java)].
@@ -148,7 +148,7 @@ or create it from another config section with the same layout as below. `Cluster
a parameter to the `ClusterSingletonManager.props` factory method, i.e. each singleton can be configured
with different settings if needed.

-@@snip [reference.conf]($akka$/akka-cluster-tools/src/main/resources/reference.conf) { #singleton-config }
+@@snip [reference.conf](/akka-cluster-tools/src/main/resources/reference.conf) { #singleton-config }

The following configuration properties are read by the `ClusterSingletonProxySettings`
when created with an `ActorSystem` parameter. It is also possible to amend the `ClusterSingletonProxySettings`
@@ -156,23 +156,23 @@ or create it from another config section with the same layout as below. `Cluster
a parameter to the `ClusterSingletonProxy.props` factory method, i.e. each singleton proxy can be configured
with different settings if needed.

-@@snip [reference.conf]($akka$/akka-cluster-tools/src/main/resources/reference.conf) { #singleton-proxy-config }
+@@snip [reference.conf](/akka-cluster-tools/src/main/resources/reference.conf) { #singleton-proxy-config }

## Supervision

Sometimes it is useful to add supervision for the Cluster Singleton itself. To accomplish this you need to add a parent supervisor actor which will be used to create the 'real' singleton instance. Below is an example implementation (credit to [this StackOverflow answer](https://stackoverflow.com/a/36716708/779513)).

Scala
-: @@snip [ClusterSingletonSupervision.scala]($akka$/akka-docs/src/test/scala/docs/cluster/singleton/ClusterSingletonSupervision.scala) { #singleton-supervisor-actor }
+: @@snip [ClusterSingletonSupervision.scala](/akka-docs/src/test/scala/docs/cluster/singleton/ClusterSingletonSupervision.scala) { #singleton-supervisor-actor }

Java
-: @@snip [SupervisorActor.java]($akka$/akka-docs/src/test/java/jdocs/cluster/singleton/SupervisorActor.java) { #singleton-supervisor-actor }
+: @@snip [SupervisorActor.java](/akka-docs/src/test/java/jdocs/cluster/singleton/SupervisorActor.java) { #singleton-supervisor-actor }

And used here

Scala
-: @@snip [ClusterSingletonSupervision.scala]($akka$/akka-docs/src/test/scala/docs/cluster/singleton/ClusterSingletonSupervision.scala) { #singleton-supervisor-actor-usage }
+: @@snip [ClusterSingletonSupervision.scala](/akka-docs/src/test/scala/docs/cluster/singleton/ClusterSingletonSupervision.scala) { #singleton-supervisor-actor-usage }

Java
-: @@snip [ClusterSingletonSupervision.java]($akka$/akka-docs/src/test/java/jdocs/cluster/singleton/ClusterSingletonSupervision.java) { #singleton-supervisor-actor-usage-imports }
-@@snip [ClusterSingletonSupervision.java]($akka$/akka-docs/src/test/java/jdocs/cluster/singleton/ClusterSingletonSupervision.java) { #singleton-supervisor-actor-usage }
+: @@snip [ClusterSingletonSupervision.java](/akka-docs/src/test/java/jdocs/cluster/singleton/ClusterSingletonSupervision.java) { #singleton-supervisor-actor-usage-imports }
+@@snip [ClusterSingletonSupervision.java](/akka-docs/src/test/java/jdocs/cluster/singleton/ClusterSingletonSupervision.java) { #singleton-supervisor-actor-usage }
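
The essence of that pattern is a generic parent that creates the 'real' singleton as its child and applies the supplied strategy to it; a sketch:

```scala
import akka.actor.{ Actor, ActorRef, Props, SupervisorStrategy }

class SupervisorActor(childProps: Props, override val supervisorStrategy: SupervisorStrategy)
    extends Actor {
  val child: ActorRef = context.actorOf(childProps, "supervised-child")
  // everything else is passed straight through to the supervised child
  def receive: Receive = { case msg => child forward msg }
}
```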
@@ -155,10 +155,10 @@ ip-addresses or host names of the machines in `application.conf` instead of `127
An actor that uses the cluster extension may look like this:

Scala
-: @@snip [SimpleClusterListener.scala]($code$/scala/docs/cluster/SimpleClusterListener.scala) { type=scala }
+: @@snip [SimpleClusterListener.scala](/akka-docs/src/test/scala/docs/cluster/SimpleClusterListener.scala) { type=scala }

Java
-: @@snip [SimpleClusterListener.java]($code$/java/jdocs/cluster/SimpleClusterListener.java) { type=java }
+: @@snip [SimpleClusterListener.java](/akka-docs/src/test/java/jdocs/cluster/SimpleClusterListener.java) { type=java }

The actor registers itself as subscriber of certain cluster events. It receives events corresponding to the current state
of the cluster when the subscription starts and then it receives events for changes that happen in the cluster.
@@ -239,10 +239,10 @@ supposed to be the first seed node, and that should be placed first in the param
`joinSeedNodes`.

Scala
-: @@snip [ClusterDocSpec.scala]($code$/scala/docs/cluster/ClusterDocSpec.scala) { #join-seed-nodes }
+: @@snip [ClusterDocSpec.scala](/akka-docs/src/test/scala/docs/cluster/ClusterDocSpec.scala) { #join-seed-nodes }

Java
-: @@snip [ClusterDocTest.java]($code$/java/jdocs/cluster/ClusterDocTest.java) { #join-seed-nodes-imports #join-seed-nodes }
+: @@snip [ClusterDocTest.java](/akka-docs/src/test/java/jdocs/cluster/ClusterDocTest.java) { #join-seed-nodes-imports #join-seed-nodes }
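
A sketch of joining programmatically (host names and the system name are illustrative):

```scala
import akka.actor.Address
import akka.cluster.Cluster

val cluster = Cluster(system)
cluster.joinSeedNodes(List(
  Address("akka.tcp", "ClusterSystem", "host1", 2552),
  Address("akka.tcp", "ClusterSystem", "host2", 2552)))
```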

Unsuccessful attempts to contact seed nodes are automatically retried after the time period defined in
the configuration property `seed-node-timeout`. An unsuccessful attempt to join a specific seed node is
@@ -367,10 +367,10 @@ This can be performed using [JMX](#cluster-jmx) or [HTTP](#cluster-http).
It can also be performed programmatically with:

Scala
-: @@snip [ClusterDocSpec.scala]($code$/scala/docs/cluster/ClusterDocSpec.scala) { #leave }
+: @@snip [ClusterDocSpec.scala](/akka-docs/src/test/scala/docs/cluster/ClusterDocSpec.scala) { #leave }

Java
-: @@snip [ClusterDocTest.java]($code$/java/jdocs/cluster/ClusterDocTest.java) { #leave }
+: @@snip [ClusterDocTest.java](/akka-docs/src/test/java/jdocs/cluster/ClusterDocTest.java) { #leave }
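
Essentially:

```scala
import akka.cluster.Cluster

val cluster = Cluster(system)
cluster.leave(cluster.selfAddress) // ask the cluster to remove this node gracefully
```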

Note that this command can be issued to any member in the cluster, not necessarily the
one that is leaving.
@@ -413,10 +413,10 @@ You can subscribe to change notifications of the cluster membership by using
@scala[`Cluster(system).subscribe`]@java[`Cluster.get(system).subscribe`].

Scala
-: @@snip [SimpleClusterListener2.scala]($code$/scala/docs/cluster/SimpleClusterListener2.scala) { #subscribe }
+: @@snip [SimpleClusterListener2.scala](/akka-docs/src/test/scala/docs/cluster/SimpleClusterListener2.scala) { #subscribe }

Java
-: @@snip [SimpleClusterListener2.java]($code$/java/jdocs/cluster/SimpleClusterListener2.java) { #subscribe }
+: @@snip [SimpleClusterListener2.java](/akka-docs/src/test/java/jdocs/cluster/SimpleClusterListener2.java) { #subscribe }

A snapshot of the full state, `akka.cluster.ClusterEvent.CurrentClusterState`, is sent to the subscriber
as the first message, followed by events for incremental updates.
@@ -434,10 +434,10 @@ listening to the events when they occurred in the past. Note that those initial
to the current state; it is not the full history of all changes that actually occurred in the cluster.

Scala
-: @@snip [SimpleClusterListener.scala]($code$/scala/docs/cluster/SimpleClusterListener.scala) { #subscribe }
+: @@snip [SimpleClusterListener.scala](/akka-docs/src/test/scala/docs/cluster/SimpleClusterListener.scala) { #subscribe }

Java
-: @@snip [SimpleClusterListener.java]($code$/java/jdocs/cluster/SimpleClusterListener.java) { #subscribe }
+: @@snip [SimpleClusterListener.java](/akka-docs/src/test/java/jdocs/cluster/SimpleClusterListener.java) { #subscribe }
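
A sketch of such a subscription from inside an actor (event classes chosen for illustration):

```scala
import akka.cluster.Cluster
import akka.cluster.ClusterEvent._

// e.g. in preStart: replay the current state as events first, then live events
val cluster = Cluster(context.system)
cluster.subscribe(self, initialStateMode = InitialStateAsEvents,
  classOf[MemberEvent], classOf[UnreachableMember])
```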

The events to track the life-cycle of members are:

@@ -473,18 +473,18 @@ added or removed to the cluster dynamically.
Messages:

Scala
-: @@snip [TransformationMessages.scala]($code$/scala/docs/cluster/TransformationMessages.scala) { #messages }
+: @@snip [TransformationMessages.scala](/akka-docs/src/test/scala/docs/cluster/TransformationMessages.scala) { #messages }

Java
-: @@snip [TransformationMessages.java]($code$/java/jdocs/cluster/TransformationMessages.java) { #messages }
+: @@snip [TransformationMessages.java](/akka-docs/src/test/java/jdocs/cluster/TransformationMessages.java) { #messages }

The backend worker that performs the transformation job:

Scala
-: @@snip [TransformationBackend.scala]($code$/scala/docs/cluster/TransformationBackend.scala) { #backend }
+: @@snip [TransformationBackend.scala](/akka-docs/src/test/scala/docs/cluster/TransformationBackend.scala) { #backend }

Java
-: @@snip [TransformationBackend.java]($code$/java/jdocs/cluster/TransformationBackend.java) { #backend }
+: @@snip [TransformationBackend.java](/akka-docs/src/test/java/jdocs/cluster/TransformationBackend.java) { #backend }

Note that the `TransformationBackend` actor subscribes to cluster events to detect new,
potential frontend nodes, and sends them a registration message so that they know
@@ -493,10 +493,10 @@ that they can use the backend worker.
The frontend that receives user jobs and delegates to one of the registered backend workers:

Scala
-: @@snip [TransformationFrontend.scala]($code$/scala/docs/cluster/TransformationFrontend.scala) { #frontend }
+: @@snip [TransformationFrontend.scala](/akka-docs/src/test/scala/docs/cluster/TransformationFrontend.scala) { #frontend }

Java
-: @@snip [TransformationFrontend.java]($code$/java/jdocs/cluster/TransformationFrontend.java) { #frontend }
+: @@snip [TransformationFrontend.java](/akka-docs/src/test/java/jdocs/cluster/TransformationFrontend.java) { #frontend }

Note that the `TransformationFrontend` actor watches the registered backend
to be able to remove it from its list of available backend workers.
@@ -551,10 +551,10 @@ be invoked when the current member status is changed to 'Up', i.e. the cluster
has at least the defined number of members.

Scala
-: @@snip [FactorialFrontend.scala]($code$/scala/docs/cluster/FactorialFrontend.scala) { #registerOnUp }
+: @@snip [FactorialFrontend.scala](/akka-docs/src/test/scala/docs/cluster/FactorialFrontend.scala) { #registerOnUp }

Java
-: @@snip [FactorialFrontendMain.java]($code$/java/jdocs/cluster/FactorialFrontendMain.java) { #registerOnUp }
+: @@snip [FactorialFrontendMain.java](/akka-docs/src/test/java/jdocs/cluster/FactorialFrontendMain.java) { #registerOnUp }
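
A sketch of registering such a callback (the started actor is illustrative):

```scala
import akka.actor.Props
import akka.cluster.Cluster

Cluster(system).registerOnMemberUp {
  // runs once this member's status has become 'Up'
  system.actorOf(Props[FactorialFrontend], "factorialFrontend")
}
```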

This callback can be used for other things than starting actors.
@@ -721,12 +721,12 @@ add the `sbt-multi-jvm` plugin and the dependency to `akka-multi-node-testkit`.
First, as described in @ref:[Multi Node Testing](multi-node-testing.md), we need some scaffolding to configure the `MultiNodeSpec`.
Define the participating roles and their [configuration](#cluster-configuration) in an object extending `MultiNodeConfig`:

-@@snip [StatsSampleSpec.scala]($akka$/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsSampleSpec.scala) { #MultiNodeConfig }
+@@snip [StatsSampleSpec.scala](/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsSampleSpec.scala) { #MultiNodeConfig }

Define one concrete test class for each role/node. These will be instantiated on the different nodes (JVMs). They can be
implemented differently, but often they are the same and extend an abstract test class, as illustrated here.

-@@snip [StatsSampleSpec.scala]($akka$/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsSampleSpec.scala) { #concrete-tests }
+@@snip [StatsSampleSpec.scala](/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsSampleSpec.scala) { #concrete-tests }

Note the naming convention of these classes. The name of the classes must end with `MultiJvmNode1`, `MultiJvmNode2`
and so on. It is possible to define another suffix to be used by `sbt-multi-jvm`, but the default should be
@@ -734,18 +734,18 @@ fine in most cases.

Then the abstract `MultiNodeSpec`, which takes the `MultiNodeConfig` as constructor parameter.

-@@snip [StatsSampleSpec.scala]($akka$/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsSampleSpec.scala) { #abstract-test }
+@@snip [StatsSampleSpec.scala](/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsSampleSpec.scala) { #abstract-test }

Most of this can be extracted to a separate trait to avoid repeating this in all your tests.

Typically you begin your test by starting up the cluster, letting the members join, and creating some actors.
That can be done like this:

-@@snip [StatsSampleSpec.scala]($akka$/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsSampleSpec.scala) { #startup-cluster }
+@@snip [StatsSampleSpec.scala](/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsSampleSpec.scala) { #startup-cluster }

From the test you interact with the cluster using the `Cluster` extension, e.g. `join`.

-@@snip [StatsSampleSpec.scala]($akka$/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsSampleSpec.scala) { #join }
+@@snip [StatsSampleSpec.scala](/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsSampleSpec.scala) { #join }

Notice how the *testActor* from @ref:[testkit](testing.md) is added as a [subscriber](#cluster-subscriber)
to cluster changes and then waits for certain events, such as, in this case, all members becoming 'Up'.
@@ -753,7 +753,7 @@ to cluster changes and then waiting for certain events, such as in this case all
The above code was running for all roles (JVMs). `runOn` is a convenient utility to declare that a certain block
of code should only run for a specific role.

-@@snip [StatsSampleSpec.scala]($akka$/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsSampleSpec.scala) { #test-statsService }
+@@snip [StatsSampleSpec.scala](/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsSampleSpec.scala) { #test-statsService }

Once again we take advantage of the facilities in @ref:[testkit](testing.md) to verify expected behavior.
Here we use `testActor` as sender (via `ImplicitSender`) and verify the reply with `expectMsgPF`.
@@ -761,7 +761,7 @@ Here using `testActor` as sender (via `ImplicitSender`) and verifying the reply
In the above code you can see `node(third)`, which is a useful facility to get the root actor reference of
the actor system for a specific role. This can also be used to grab the `akka.actor.Address` of that node.

-@@snip [StatsSampleSpec.scala]($akka$/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsSampleSpec.scala) { #addresses }
+@@snip [StatsSampleSpec.scala](/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsSampleSpec.scala) { #addresses }

@@@
@@ -73,10 +73,10 @@ Here's how a `CircuitBreaker` would be configured for:

Scala
-: @@snip [CircuitBreakerDocSpec.scala]($code$/scala/docs/circuitbreaker/CircuitBreakerDocSpec.scala) { #imports1 #circuit-breaker-initialization }
+: @@snip [CircuitBreakerDocSpec.scala](/akka-docs/src/test/scala/docs/circuitbreaker/CircuitBreakerDocSpec.scala) { #imports1 #circuit-breaker-initialization }

Java
-: @@snip [DangerousJavaActor.java]($code$/java/jdocs/circuitbreaker/DangerousJavaActor.java) { #imports1 #circuit-breaker-initialization }
+: @@snip [DangerousJavaActor.java](/akka-docs/src/test/java/jdocs/circuitbreaker/DangerousJavaActor.java) { #imports1 #circuit-breaker-initialization }
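
A sketch of such a configuration (the thresholds and the callback are illustrative):

```scala
import scala.concurrent.duration._
import akka.pattern.CircuitBreaker

import system.dispatcher // implicit ExecutionContext, assuming an ActorSystem `system`

val breaker =
  new CircuitBreaker(
    system.scheduler,
    maxFailures = 5,          // failures before opening the circuit
    callTimeout = 10.seconds, // a call slower than this counts as a failure
    resetTimeout = 1.minute)  // time before trying a single call again
    .onOpen(println("circuit breaker opened"))
```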

### Future & Synchronous based API
@@ -85,10 +85,10 @@ Once a circuit breaker actor has been initialized, interacting with that actor i
The synchronous API also wraps your call with the circuit breaker logic; however, it uses `withSyncCircuitBreaker` and receives a method that is not wrapped in a `Future`.

Scala
-: @@snip [CircuitBreakerDocSpec.scala]($code$/scala/docs/circuitbreaker/CircuitBreakerDocSpec.scala) { #circuit-breaker-usage }
+: @@snip [CircuitBreakerDocSpec.scala](/akka-docs/src/test/scala/docs/circuitbreaker/CircuitBreakerDocSpec.scala) { #circuit-breaker-usage }

Java
-: @@snip [DangerousJavaActor.java]($code$/java/jdocs/circuitbreaker/DangerousJavaActor.java) { #circuit-breaker-usage }
+: @@snip [DangerousJavaActor.java](/akka-docs/src/test/java/jdocs/circuitbreaker/DangerousJavaActor.java) { #circuit-breaker-usage }
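
Continuing the sketch above (`dangerousCall` stands in for the risky operation):

```scala
import scala.concurrent.Future

def dangerousCall: String = "ok" // illustrative

// Future-based variant: the body runs asynchronously, guarded by the breaker
val asyncResult: Future[String] = breaker.withCircuitBreaker(Future(dangerousCall))

// synchronous variant: the call happens on the current thread
val syncResult: String = breaker.withSyncCircuitBreaker(dangerousCall)
```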

@@@ note
@@ -121,10 +121,10 @@ Type of `defineFailureFn`: @scala[`Try[T] ⇒ Boolean`]@java[`BiFunction[Optiona
@java[The response of a protected call is modelled using `Optional[T]` for a successful return value and `Optional[Throwable]` for exceptions.] This function should return `true` if the call should increase the failure count, else `false`.

Scala
-: @@snip [CircuitBreakerDocSpec.scala]($code$/scala/docs/circuitbreaker/CircuitBreakerDocSpec.scala) { #even-no-as-failure }
+: @@snip [CircuitBreakerDocSpec.scala](/akka-docs/src/test/scala/docs/circuitbreaker/CircuitBreakerDocSpec.scala) { #even-no-as-failure }

Java
-: @@snip [EvenNoFailureJavaExample.java]($code$/java/jdocs/circuitbreaker/EvenNoFailureJavaExample.java) { #even-no-as-failure }
+: @@snip [EvenNoFailureJavaExample.java](/akka-docs/src/test/java/jdocs/circuitbreaker/EvenNoFailureJavaExample.java) { #even-no-as-failure }

### Low level API
@@ -139,7 +139,7 @@ The below example doesn't make a remote call when the state is *HalfOpen*. Using
@@@

Scala
-: @@snip [CircuitBreakerDocSpec.scala]($code$/scala/docs/circuitbreaker/CircuitBreakerDocSpec.scala) { #circuit-breaker-tell-pattern }
+: @@snip [CircuitBreakerDocSpec.scala](/akka-docs/src/test/scala/docs/circuitbreaker/CircuitBreakerDocSpec.scala) { #circuit-breaker-tell-pattern }

Java
-: @@snip [TellPatternJavaActor.java]($code$/java/jdocs/circuitbreaker/TellPatternJavaActor.java) { #circuit-breaker-tell-pattern }
+: @@snip [TellPatternJavaActor.java](/akka-docs/src/test/java/jdocs/circuitbreaker/TellPatternJavaActor.java) { #circuit-breaker-tell-pattern }
@@ -21,7 +21,7 @@ when finite-ness does not matter; this is a supertype of `FiniteDuration`
In Scala durations are constructable using a mini-DSL and support all expected
arithmetic operations:

-@@snip [Sample.scala]($code$/scala/docs/duration/Sample.scala) { #dsl }
+@@snip [Sample.scala](/akka-docs/src/test/scala/docs/duration/Sample.scala) { #dsl }
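
For instance:

```scala
import scala.concurrent.duration._

val fivesec = 5.seconds
val threemillis = 3.millis
val diff = fivesec - threemillis
assert(diff < fivesec)
val fourmillis = threemillis * 4 / 3 // arithmetic works as expected
```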

@@@ note
@@ -37,9 +37,9 @@ might go wrong, depending on what starts the next line.
Java provides less syntactic sugar, so you have to spell out the operations as
method calls instead:

-@@snip [Java.java]($code$/java/jdocs/duration/Java.java) { #import }
+@@snip [Java.java](/akka-docs/src/test/java/jdocs/duration/Java.java) { #import }

-@@snip [Java.java]($code$/java/jdocs/duration/Java.java) { #dsl }
+@@snip [Java.java](/akka-docs/src/test/java/jdocs/duration/Java.java) { #dsl }

## Deadline
@@ -48,8 +48,8 @@ of an absolute point in time, and support deriving a duration from this by calcu
difference between now and the deadline. This is useful when you want to keep one overall
deadline without having to take care of the book-keeping wrt. the passing of time yourself:

-@@snip [Sample.scala]($code$/scala/docs/duration/Sample.scala) { #deadline }
+@@snip [Sample.scala](/akka-docs/src/test/scala/docs/duration/Sample.scala) { #deadline }
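
For instance:

```scala
import scala.concurrent.duration._

val deadline = 10.seconds.fromNow
// ... do some work ...
val rest = deadline.timeLeft    // duration remaining until the deadline
val late = deadline.isOverdue() // true once the point in time has passed
```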

In Java you create these from durations:

-@@snip [Java.java]($code$/java/jdocs/duration/Java.java) { #deadline }
+@@snip [Java.java](/akka-docs/src/test/java/jdocs/duration/Java.java) { #deadline }
@@ -31,17 +31,17 @@ gives excellent performance in most cases.
Dispatchers implement the `ExecutionContext` interface and can thus be used to run `Future` invocations etc.

Scala
-: @@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #lookup }
+: @@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #lookup }

Java
-: @@snip [DispatcherDocTest.java]($code$/java/jdocs/dispatcher/DispatcherDocTest.java) { #lookup }
+: @@snip [DispatcherDocTest.java](/akka-docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #lookup }
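
Essentially (the dispatcher key is illustrative and must exist in configuration):

```scala
import scala.concurrent.ExecutionContext

// a configured dispatcher doubles as an ExecutionContext for Futures
implicit val executionContext: ExecutionContext =
  system.dispatchers.lookup("my-dispatcher")
```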
|
||||
## Setting the dispatcher for an Actor
|
||||
|
||||
So in case you want to give your `Actor` a different dispatcher than the default, you need to do two things, of which the first
|
||||
is to configure the dispatcher:
|
||||
|
||||
<!--same config text for Scala & Java-->
|
||||
@@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #my-dispatcher-config }
|
||||
@@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #my-dispatcher-config }

@@@ note

@@ -55,7 +55,7 @@ You can read more about parallelism in the JDK's [ForkJoinPool documentation](ht

Another example that uses the "thread-pool-executor":

<!--same config text for Scala & Java-->
@@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #fixed-pool-size-dispatcher-config }
@@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #fixed-pool-size-dispatcher-config }

@@@ note

@@ -69,23 +69,23 @@ For more options, see the default-dispatcher section of the @ref:[configuration]

Then you create the actor as usual and define the dispatcher in the deployment configuration.

Scala
: @@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #defining-dispatcher-in-config }
: @@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #defining-dispatcher-in-config }

Java
: @@snip [DispatcherDocTest.java]($code$/java/jdocs/dispatcher/DispatcherDocTest.java) { #defining-dispatcher-in-config }
: @@snip [DispatcherDocTest.java](/akka-docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #defining-dispatcher-in-config }

<!--same config text for Scala & Java-->
@@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #dispatcher-deployment-config }
@@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #dispatcher-deployment-config }

An alternative to the deployment configuration is to define the dispatcher in code.
If you define the `dispatcher` in the deployment configuration then this value will be used instead
of the programmatically provided parameter.

Scala
: @@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #defining-dispatcher-in-code }
: @@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #defining-dispatcher-in-code }

Java
: @@snip [DispatcherDocTest.java]($code$/java/jdocs/dispatcher/DispatcherDocTest.java) { #defining-dispatcher-in-code }
: @@snip [DispatcherDocTest.java](/akka-docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #defining-dispatcher-in-code }
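
For reference, attaching the dispatcher in code is a one-liner on `Props`; a sketch, assuming a `MyActor` class and the `my-dispatcher` configuration id:

```scala
import akka.actor.Props

// the dispatcher id must exist in configuration; deployment configuration,
// if present, overrides this programmatic choice
val myActor =
  context.actorOf(Props[MyActor].withDispatcher("my-dispatcher"), "myactor1")
```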

@@@ note

@@ -140,40 +140,40 @@ There are 3 different types of message dispatchers:

Configuring a dispatcher with fixed thread pool size, e.g. for actors that perform blocking IO:

@@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #fixed-pool-size-dispatcher-config }
@@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #fixed-pool-size-dispatcher-config }

And then using it:

Scala
: @@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #defining-fixed-pool-size-dispatcher }
: @@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #defining-fixed-pool-size-dispatcher }

Java
: @@snip [DispatcherDocTest.java]($code$/java/jdocs/dispatcher/DispatcherDocTest.java) { #defining-fixed-pool-size-dispatcher }
: @@snip [DispatcherDocTest.java](/akka-docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #defining-fixed-pool-size-dispatcher }

Another example that uses the thread pool based on the number of cores (e.g. for CPU-bound tasks):

<!--same config text for Scala & Java-->
@@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) {#my-thread-pool-dispatcher-config }
@@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) {#my-thread-pool-dispatcher-config }

A different kind of dispatcher that uses an affinity pool may increase throughput in cases where there is a relatively small
number of actors that maintain some internal state. The affinity pool tries its best to ensure that an actor is always
scheduled to run on the same thread. This actor-to-thread pinning aims to decrease CPU cache misses, which can result
in a significant throughput improvement.

@@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #affinity-pool-dispatcher-config }
@@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #affinity-pool-dispatcher-config }
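
The affinity pool snippet is not inlined; assuming the `affinity-pool-executor` executor key that was introduced for this pool, its configuration presumably looks roughly like this (all values illustrative):

```hocon
affinity-pool-dispatcher {
  type = Dispatcher
  # pins actors to threads to improve cache locality (assumed key name)
  executor = "affinity-pool-executor"
  affinity-pool-executor {
    parallelism-min = 8
    parallelism-max = 16
  }
  throughput = 100
}
```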

Configuring a `PinnedDispatcher`:

<!--same config text for Scala & Java-->
@@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) {#my-pinned-dispatcher-config }
@@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) {#my-pinned-dispatcher-config }
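
A `PinnedDispatcher` declaration is short; a sketch of the referenced configuration:

```hocon
my-pinned-dispatcher {
  executor = "thread-pool-executor"
  type = PinnedDispatcher
}
```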

And then using it:

Scala
: @@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #defining-pinned-dispatcher }
: @@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #defining-pinned-dispatcher }

Java
: @@snip [DispatcherDocTest.java]($code$/java/jdocs/dispatcher/DispatcherDocTest.java) { #defining-pinned-dispatcher }
: @@snip [DispatcherDocTest.java](/akka-docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #defining-pinned-dispatcher }

Note that `thread-pool-executor` configuration as per the above `my-thread-pool-dispatcher` example is
NOT applicable. This is because every actor will have its own thread pool when using `PinnedDispatcher`,

@@ -193,10 +193,10 @@ is typically that (network) I/O occurs under the covers.

Scala
: @@snip [BlockingDispatcherSample.scala]($akka$/akka-docs/src/test/scala/docs/actor/BlockingDispatcherSample.scala) { #blocking-in-actor }
: @@snip [BlockingDispatcherSample.scala](/akka-docs/src/test/scala/docs/actor/BlockingDispatcherSample.scala) { #blocking-in-actor }

Java
: @@snip [BlockingDispatcherSample.java]($akka$/akka-docs/src/test/java/jdocs/actor/BlockingActor.java) { #blocking-in-actor }
: @@snip [BlockingDispatcherSample.java](/akka-docs/src/test/java/jdocs/actor/BlockingActor.java) { #blocking-in-actor }
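
A minimal Scala sketch of the problematic pattern discussed here:

```scala
import akka.actor.Actor

class BlockingActor extends Actor {
  def receive = {
    case i: Int =>
      // blocking, simulating e.g. long database or network I/O;
      // this occupies a dispatcher thread for the whole five seconds
      Thread.sleep(5000)
      println(s"Blocking operation finished: $i")
  }
}
```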

When facing this, you

@@ -206,10 +206,10 @@ find bottlenecks or run out of memory or threads when the application runs

under increased load.

Scala
: @@snip [BlockingDispatcherSample.scala]($akka$/akka-docs/src/test/scala/docs/actor/BlockingDispatcherSample.scala) { #blocking-in-future }
: @@snip [BlockingDispatcherSample.scala](/akka-docs/src/test/scala/docs/actor/BlockingDispatcherSample.scala) { #blocking-in-future }

Java
: @@snip [BlockingDispatcherSample.java]($akka$/akka-docs/src/test/java/jdocs/actor/BlockingFutureActor.java) { #blocking-in-future }
: @@snip [BlockingDispatcherSample.java](/akka-docs/src/test/java/jdocs/actor/BlockingFutureActor.java) { #blocking-in-future }
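
A sketch of this still-problematic variant: the `Future` runs on the actor's own dispatcher, so the blocking merely moves to another of its threads:

```scala
import akka.actor.Actor
import scala.concurrent.{ ExecutionContext, Future }

class BlockingFutureActor extends Actor {
  implicit val ec: ExecutionContext = context.dispatcher

  def receive = {
    case i: Int =>
      Future {
        Thread.sleep(5000) // blocks a thread of the default dispatcher
        println(s"Blocking future finished: $i")
      }
  }
}
```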

### Problem: Blocking on default dispatcher

@@ -256,17 +256,17 @@ including Streams, Http and other reactive libraries built on top of it.

Let's set up an application with the above `BlockingFutureActor` and the following `PrintActor`.

Scala
: @@snip [BlockingDispatcherSample.scala]($akka$/akka-docs/src/test/scala/docs/actor/BlockingDispatcherSample.scala) { #print-actor }
: @@snip [BlockingDispatcherSample.scala](/akka-docs/src/test/scala/docs/actor/BlockingDispatcherSample.scala) { #print-actor }

Java
: @@snip [BlockingDispatcherSample.java]($akka$/akka-docs/src/test/java/jdocs/actor/PrintActor.java) { #print-actor }
: @@snip [BlockingDispatcherSample.java](/akka-docs/src/test/java/jdocs/actor/PrintActor.java) { #print-actor }

Scala
: @@snip [BlockingDispatcherSample.scala]($akka$/akka-docs/src/test/scala/docs/actor/BlockingDispatcherSample.scala) { #blocking-main }
: @@snip [BlockingDispatcherSample.scala](/akka-docs/src/test/scala/docs/actor/BlockingDispatcherSample.scala) { #blocking-main }

Java
: @@snip [BlockingDispatcherSample.java]($akka$/akka-docs/src/test/java/jdocs/actor/BlockingDispatcherTest.java) { #blocking-main }
: @@snip [BlockingDispatcherSample.java](/akka-docs/src/test/java/jdocs/actor/BlockingDispatcherTest.java) { #blocking-main }
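
The main program referenced above is roughly of this shape (a sketch, assuming the two actor classes from the previous snippets):

```scala
import akka.actor.{ ActorSystem, Props }

val system = ActorSystem("blocking-sample")
val actor1 = system.actorOf(Props(new BlockingFutureActor))
val actor2 = system.actorOf(Props(new PrintActor))

// flood both actors; the blocking actor starves the shared default dispatcher
for (i <- 1 to 100) {
  actor1 ! i
  actor2 ! i
}
```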

Here the app is sending 100 messages to `BlockingFutureActor` and `PrintActor` and large numbers

@@ -326,7 +326,7 @@ In `application.conf`, the dispatcher dedicated to blocking behavior should

be configured as follows:

<!--same config text for Scala & Java-->
@@snip [BlockingDispatcherSample.scala]($akka$/akka-docs/src/test/scala/docs/actor/BlockingDispatcherSample.scala) { #my-blocking-dispatcher-config }
@@snip [BlockingDispatcherSample.scala](/akka-docs/src/test/scala/docs/actor/BlockingDispatcherSample.scala) { #my-blocking-dispatcher-config }
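
The dedicated dispatcher referenced here is a `thread-pool-executor` with a fixed pool, along these lines (pool size illustrative):

```hocon
my-blocking-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 16
  }
  throughput = 1
}
```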

A `thread-pool-executor` based dispatcher allows us to set a limit on the number of threads it will host,
and this way we gain tight control over at most how many blocked threads will be in the system.

@@ -339,10 +339,10 @@ Whenever blocking has to be done, use the above configured dispatcher

instead of the default one:

Scala
: @@snip [BlockingDispatcherSample.scala]($akka$/akka-docs/src/test/scala/docs/actor/BlockingDispatcherSample.scala) { #separate-dispatcher }
: @@snip [BlockingDispatcherSample.scala](/akka-docs/src/test/scala/docs/actor/BlockingDispatcherSample.scala) { #separate-dispatcher }

Java
: @@snip [BlockingDispatcherSample.java]($akka$/akka-docs/src/test/java/jdocs/actor/SeparateDispatcherFutureActor.java) { #separate-dispatcher }
: @@snip [BlockingDispatcherSample.java](/akka-docs/src/test/java/jdocs/actor/SeparateDispatcherFutureActor.java) { #separate-dispatcher }
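
A sketch of the fix: look up the dedicated dispatcher and run the blocking work there, leaving the default dispatcher free:

```scala
import akka.actor.Actor
import scala.concurrent.{ ExecutionContext, Future }

class SeparateDispatcherFutureActor extends Actor {
  implicit val ec: ExecutionContext =
    context.system.dispatchers.lookup("my-blocking-dispatcher")

  def receive = {
    case i: Int =>
      Future {
        Thread.sleep(5000) // blocks a thread of the dedicated pool only
        println(s"Blocking future finished: $i")
      }
  }
}
```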

The thread pool behavior is shown in the diagram below.

@@ -61,10 +61,10 @@ adds or removes elements from a `ORSet` (observed-remove set). It also subscribe

changes of this.

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #data-bot }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #data-bot }

Java
: @@snip [DataBot.java]($code$/java/jdocs/ddata/DataBot.java) { #data-bot }
: @@snip [DataBot.java](/akka-docs/src/test/java/jdocs/ddata/DataBot.java) { #data-bot }

<a id="replicator-update"></a>
### Update

@@ -104,10 +104,10 @@ are preferred over unreachable nodes.

Note that `WriteMajority` has a `minCap` parameter that is useful for achieving better safety in small clusters.

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #update }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #update }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #update }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #update }
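
A rough Scala sketch of an `Update` against the classic `Replicator` API of this era (`Counter1Key` is an illustrative key; an `ActorSystem` named `system` is assumed):

```scala
import akka.cluster.Cluster
import akka.cluster.ddata.{ DistributedData, PNCounter, PNCounterKey }
import akka.cluster.ddata.Replicator._

implicit val node: Cluster = Cluster(system)
val replicator = DistributedData(system).replicator
val Counter1Key = PNCounterKey("counter1")

// the modify function is applied to the local replica;
// the result is then disseminated according to the write consistency
replicator ! Update(Counter1Key, PNCounter.empty, WriteLocal)(_ + 1)
```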

As a reply to the `Update` a `Replicator.UpdateSuccess` is sent to the sender of the
`Update` if the value was successfully replicated according to the supplied consistency

@@ -117,17 +117,17 @@ or was rolled back. It may still have been replicated to some nodes, and will ev

be replicated to all nodes with the gossip protocol.

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #update-response1 }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #update-response1 }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #update-response1 }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #update-response1 }

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #update-response2 }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #update-response2 }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #update-response2 }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #update-response2 }

You will always see your own writes. For example if you send two `Update` messages
changing the value of the same `key`, the `modify` function of the second message will

@@ -139,10 +139,10 @@ way to pass contextual information (e.g. original sender) without having to use

or maintain local correlation data structures.

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #update-request-context }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #update-request-context }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #update-request-context }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #update-request-context }

<a id="replicator-get"></a>
### Get

@@ -162,10 +162,10 @@ at least **N/2 + 1** replicas, where N is the number of nodes in the cluster

Note that `ReadMajority` has a `minCap` parameter that is useful for achieving better safety in small clusters.

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #get }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #get }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #get }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #get }
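
A short sketch of the corresponding `Get`, reusing the names from the `Update` sketch above:

```scala
import scala.concurrent.duration._

replicator ! Get(Counter1Key, ReadLocal)
// or, trading latency for stronger consistency:
replicator ! Get(Counter1Key, ReadMajority(timeout = 5.seconds))
```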

As a reply to the `Get` a `Replicator.GetSuccess` is sent to the sender of the
`Get` if the value was successfully retrieved according to the supplied consistency

@@ -173,17 +173,17 @@ level within the supplied timeout. Otherwise a `Replicator.GetFailure` is sent.

If the key does not exist the reply will be `Replicator.NotFound`.

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #get-response1 }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #get-response1 }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #get-response1 }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #get-response1 }

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #get-response2 }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #get-response2 }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #get-response2 }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #get-response2 }

You will always read your own writes. For example if you send an `Update` message
followed by a `Get` of the same `key` the `Get` will retrieve the change that was

@@ -196,10 +196,10 @@ In the `Get` message you can pass an optional request context in the same way as

to after receiving and transforming `GetSuccess`.

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #get-request-context }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #get-request-context }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #get-request-context }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #get-request-context }

### Consistency

@@ -252,24 +252,24 @@ the total size of the cluster.

Here is an example of using `WriteMajority` and `ReadMajority`:

Scala
: @@snip [ShoppingCart.scala]($code$/scala/docs/ddata/ShoppingCart.scala) { #read-write-majority }
: @@snip [ShoppingCart.scala](/akka-docs/src/test/scala/docs/ddata/ShoppingCart.scala) { #read-write-majority }

Java
: @@snip [ShoppingCart.java]($code$/java/jdocs/ddata/ShoppingCart.java) { #read-write-majority }
: @@snip [ShoppingCart.java](/akka-docs/src/test/java/jdocs/ddata/ShoppingCart.java) { #read-write-majority }

Scala
: @@snip [ShoppingCart.scala]($code$/scala/docs/ddata/ShoppingCart.scala) { #get-cart }
: @@snip [ShoppingCart.scala](/akka-docs/src/test/scala/docs/ddata/ShoppingCart.scala) { #get-cart }

Java
: @@snip [ShoppingCart.java]($code$/java/jdocs/ddata/ShoppingCart.java) { #get-cart }
: @@snip [ShoppingCart.java](/akka-docs/src/test/java/jdocs/ddata/ShoppingCart.java) { #get-cart }

Scala
: @@snip [ShoppingCart.scala]($code$/scala/docs/ddata/ShoppingCart.scala) { #add-item }
: @@snip [ShoppingCart.scala](/akka-docs/src/test/scala/docs/ddata/ShoppingCart.scala) { #add-item }

Java
: @@snip [ShoppingCart.java]($code$/java/jdocs/ddata/ShoppingCart.java) { #add-item }
: @@snip [ShoppingCart.java](/akka-docs/src/test/java/jdocs/ddata/ShoppingCart.java) { #add-item }

In some rare cases, when performing an `Update` it is necessary to first try to fetch the latest data from
other nodes. That can be done by first sending a `Get` with `ReadMajority` and then continue with

@@ -282,10 +282,10 @@ performed (hence the name observed-removed set).

The following example illustrates how to do that:

Scala
: @@snip [ShoppingCart.scala]($code$/scala/docs/ddata/ShoppingCart.scala) { #remove-item }
: @@snip [ShoppingCart.scala](/akka-docs/src/test/scala/docs/ddata/ShoppingCart.scala) { #remove-item }

Java
: @@snip [ShoppingCart.java]($code$/java/jdocs/ddata/ShoppingCart.java) { #remove-item }
: @@snip [ShoppingCart.java](/akka-docs/src/test/java/jdocs/ddata/ShoppingCart.java) { #remove-item }

@@@ warning

@@ -311,10 +311,10 @@ The subscriber is automatically removed if the subscriber is terminated. A subsc

also be deregistered with the `Replicator.Unsubscribe` message.

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #subscribe }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #subscribe }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #subscribe }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #subscribe }
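
A sketch of subscribing and handling change notifications, again reusing `Counter1Key` from the earlier sketches:

```scala
import akka.actor.Actor
import akka.cluster.ddata.DistributedData
import akka.cluster.ddata.Replicator.{ Changed, Subscribe }

class CounterListener extends Actor {
  val replicator = DistributedData(context.system).replicator
  // registration; the replicator notifies the subscriber of changes
  replicator ! Subscribe(Counter1Key, self)

  def receive = {
    case c @ Changed(Counter1Key) =>
      println(s"Current count: ${c.get(Counter1Key).value}")
  }
}
```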

### Delete

@@ -336,10 +336,10 @@ In the *Delete* message you can pass an optional request context in the same way

to after receiving and transforming *DeleteSuccess*.

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #delete }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #delete }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #delete }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #delete }

@@@ warning

@@ -406,10 +406,10 @@ as two internal `GCounter`s. Merge is handled by merging the internal P and N co

The value of the counter is the value of the P counter minus the value of the N counter.

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #pncounter }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #pncounter }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #pncounter }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #pncounter }

`GCounter` and `PNCounter` have support for [delta-CRDT](#delta-crdt) and don't need causal
delivery of deltas.

@@ -420,10 +420,10 @@ values they are guaranteed to be replicated together as one unit, which is somet

related data.

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #pncountermap }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #pncountermap }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #pncountermap }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #pncountermap }

### Sets

@@ -432,10 +432,10 @@ the data type to use. The elements can be any type of values that can be seriali

Merge is the union of the two sets.

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #gset }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #gset }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #gset }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #gset }

`GSet` has support for [delta-CRDT](#delta-crdt) and it doesn't require causal delivery of deltas.

@@ -449,10 +449,10 @@ called "birth dot". The version vector and the dots are used by the `merge` func

track causality of the operations and resolve concurrent updates.

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #orset }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #orset }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #orset }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #orset }

`ORSet` has support for [delta-CRDT](#delta-crdt) and it requires causal delivery of deltas.

@@ -486,10 +486,10 @@ uses delta propagation to deliver updates. Effectively, the update for map is th

being the key and full update for the respective value (`ORSet`, `PNCounter` or `LWWRegister`) kept in the map.

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #ormultimap }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #ormultimap }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #ormultimap }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #ormultimap }

When a data entry is changed the full state of that entry is replicated to other nodes, i.e.
when you update a map, the whole map is replicated. Therefore, instead of using one `ORMap`

@@ -525,10 +525,10 @@ in the below section about `LWWRegister`.

to `true`. Thereafter it cannot be changed. `true` wins over `false` in merge.

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #flag }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #flag }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #flag }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #flag }

`LWWRegister` (last writer wins register) can hold any (serializable) value.

@@ -540,20 +540,20 @@ Merge takes the register updated by the node with lowest address (`UniqueAddress

if the timestamps are exactly the same.

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #lwwregister }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #lwwregister }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #lwwregister }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #lwwregister }

Instead of using timestamps based on `System.currentTimeMillis()` it is possible to
use a timestamp value based on something else, for example an increasing version number
from a database record that is used for optimistic concurrency control.

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #lwwregister-custom-clock }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #lwwregister-custom-clock }

Java
: @@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #lwwregister-custom-clock }
: @@snip [DistributedDataDocTest.java](/akka-docs/src/test/java/jdocs/ddata/DistributedDataDocTest.java) { #lwwregister-custom-clock }
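
A sketch of such a custom clock in Scala (`Record` is an illustrative type):

```scala
import akka.cluster.ddata.LWWRegister

final case class Record(version: Int, name: String)

// use the record's version number as the "timestamp" for merging
val recordClock = new LWWRegister.Clock[Record] {
  override def apply(currentTimestamp: Long, value: Record): Long =
    value.version
}
```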

For first-write-wins semantics you can use the `LWWRegister#reverseClock` instead of the
`LWWRegister#defaultClock`.

@@ -579,10 +579,10 @@ to keep track of addition and removals. A `TwoPhaseSet` is a set where an eleme

removed, but never added again thereafter.

Scala
: @@snip [TwoPhaseSet.scala]($code$/scala/docs/ddata/TwoPhaseSet.scala) { #twophaseset }
: @@snip [TwoPhaseSet.scala](/akka-docs/src/test/scala/docs/ddata/TwoPhaseSet.scala) { #twophaseset }

Java
: @@snip [TwoPhaseSet.java]($code$/java/jdocs/ddata/TwoPhaseSet.java) { #twophaseset }
: @@snip [TwoPhaseSet.java](/akka-docs/src/test/java/jdocs/ddata/TwoPhaseSet.java) { #twophaseset }
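
A minimal sketch along those lines: one `GSet` records additions and another records removals:

```scala
import akka.cluster.ddata.{ GSet, ReplicatedData }

final case class TwoPhaseSet(
    adds: GSet[String] = GSet.empty,
    removals: GSet[String] = GSet.empty)
    extends ReplicatedData {
  type T = TwoPhaseSet

  def add(element: String): TwoPhaseSet = copy(adds = adds.add(element))
  def remove(element: String): TwoPhaseSet = copy(removals = removals.add(element))
  // an element is in the set if it was added and never removed
  def elements: Set[String] = adds.elements.diff(removals.elements)

  override def merge(that: TwoPhaseSet): TwoPhaseSet =
    copy(adds = adds.merge(that.adds), removals = removals.merge(that.removals))
}
```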

Data types should be immutable, i.e. "modifying" methods should return a new instance.

@@ -602,15 +602,15 @@ deterministically in the serialization.

This is a protobuf representation of the above `TwoPhaseSet`:

@@snip [TwoPhaseSetMessages.proto]($code$/../main/protobuf/TwoPhaseSetMessages.proto) { #twophaseset }
@@snip [TwoPhaseSetMessages.proto](/akka-docs/src/test/../main/protobuf/TwoPhaseSetMessages.proto) { #twophaseset }

The serializer for the `TwoPhaseSet`:

Scala
: @@snip [TwoPhaseSetSerializer.scala]($code$/scala/docs/ddata/protobuf/TwoPhaseSetSerializer.scala) { #serializer }
: @@snip [TwoPhaseSetSerializer.scala](/akka-docs/src/test/scala/docs/ddata/protobuf/TwoPhaseSetSerializer.scala) { #serializer }

Java
: @@snip [TwoPhaseSetSerializer.java]($code$/java/jdocs/ddata/protobuf/TwoPhaseSetSerializer.java) { #serializer }
: @@snip [TwoPhaseSetSerializer.java](/akka-docs/src/test/java/jdocs/ddata/protobuf/TwoPhaseSetSerializer.java) { #serializer }

Note that the elements of the sets are sorted so the SHA-1 digests are the same
for the same elements.

@@ -618,25 +618,25 @@ for the same elements.

You register the serializer in configuration:

Scala
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #serializer-config }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #serializer-config }

Java
: @@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #japi-serializer-config }
: @@snip [DistributedDataDocSpec.scala](/akka-docs/src/test/scala/docs/ddata/DistributedDataDocSpec.scala) { #japi-serializer-config }

Using compression can sometimes be a good idea to reduce the data size. Gzip compression is
provided by the @scala[`akka.cluster.ddata.protobuf.SerializationSupport` trait]@java[`akka.cluster.ddata.protobuf.AbstractSerializationSupport` interface]:

Scala
: @@snip [TwoPhaseSetSerializer.scala]($code$/scala/docs/ddata/protobuf/TwoPhaseSetSerializer.scala) { #compression }
: @@snip [TwoPhaseSetSerializer.scala](/akka-docs/src/test/scala/docs/ddata/protobuf/TwoPhaseSetSerializer.scala) { #compression }

Java
: @@snip [TwoPhaseSetSerializerWithCompression.java]($code$/java/jdocs/ddata/protobuf/TwoPhaseSetSerializerWithCompression.java) { #compression }
: @@snip [TwoPhaseSetSerializerWithCompression.java](/akka-docs/src/test/java/jdocs/ddata/protobuf/TwoPhaseSetSerializerWithCompression.java) { #compression }

The two embedded `GSet`s can be serialized as illustrated above, but in general when composing
new data types from the existing built-in types it is better to make use of the existing
serializer for those types. This can be done by declaring those as bytes fields in protobuf:

@@snip [TwoPhaseSetMessages.proto]($code$/../main/protobuf/TwoPhaseSetMessages.proto) { #twophaseset2 }
@@snip [TwoPhaseSetMessages.proto](/akka-docs/src/test/../main/protobuf/TwoPhaseSetMessages.proto) { #twophaseset2 }

and use the methods `otherMessageToProto` and `otherMessageFromBinary` that are provided
by the `SerializationSupport` trait to serialize and deserialize the `GSet` instances. This

@@ -644,10 +644,10 @@ works with any type that has a registered Akka serializer. This is how such an s

look like for the `TwoPhaseSet`:

Scala
: @@snip [TwoPhaseSetSerializer2.scala]($code$/scala/docs/ddata/protobuf/TwoPhaseSetSerializer2.scala) { #serializer }
: @@snip [TwoPhaseSetSerializer2.scala](/akka-docs/src/test/scala/docs/ddata/protobuf/TwoPhaseSetSerializer2.scala) { #serializer }

Java
: @@snip [TwoPhaseSetSerializer2.java]($code$/java/jdocs/ddata/protobuf/TwoPhaseSetSerializer2.java) { #serializer }
: @@snip [TwoPhaseSetSerializer2.java](/akka-docs/src/test/java/jdocs/ddata/protobuf/TwoPhaseSetSerializer2.java) { #serializer }

<a id="ddata-durable"></a>
### Durable Storage

@@ -792,4 +792,4 @@ paper by Mark Shapiro et. al.

The `DistributedData` extension can be configured with the following properties:

@@snip [reference.conf]($akka$/akka-distributed-data/src/main/resources/reference.conf) { #distributed-data }
@@snip [reference.conf](/akka-distributed-data/src/main/resources/reference.conf) { #distributed-data }

@@ -76,35 +76,35 @@ can explicitly remove entries with `DistributedPubSubMediator.Unsubscribe`.

An example of a subscriber actor:

Scala
: @@snip [DistributedPubSubMediatorSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #subscriber }
: @@snip [DistributedPubSubMediatorSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #subscriber }

Java
: @@snip [DistributedPubSubMediatorTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #subscriber }
: @@snip [DistributedPubSubMediatorTest.java](/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #subscriber }
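
The referenced subscriber is essentially of this shape (a sketch):

```scala
import akka.actor.{ Actor, ActorLogging }
import akka.cluster.pubsub.{ DistributedPubSub, DistributedPubSubMediator }

class Subscriber extends Actor with ActorLogging {
  import DistributedPubSubMediator.{ Subscribe, SubscribeAck }

  val mediator = DistributedPubSub(context.system).mediator
  // subscribe to the topic named "content"
  mediator ! Subscribe("content", self)

  def receive = {
    case s: String =>
      log.info("Got {}", s)
    case SubscribeAck(Subscribe("content", None, `self`)) =>
      log.info("subscribed")
  }
}
```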

Subscriber actors can be started on several nodes in the cluster, and all will receive
messages published to the "content" topic.

Scala
: @@snip [DistributedPubSubMediatorSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #start-subscribers }
: @@snip [DistributedPubSubMediatorSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #start-subscribers }

Java
: @@snip [DistributedPubSubMediatorTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #start-subscribers }
: @@snip [DistributedPubSubMediatorTest.java](/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #start-subscribers }

A simple actor that publishes to this "content" topic:

Scala
: @@snip [DistributedPubSubMediatorSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #publisher }
: @@snip [DistributedPubSubMediatorSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #publisher }

Java
: @@snip [DistributedPubSubMediatorTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #publisher }
: @@snip [DistributedPubSubMediatorTest.java](/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #publisher }

It can publish messages to the topic from anywhere in the cluster:

Scala
: @@snip [DistributedPubSubMediatorSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #publish-message }
: @@snip [DistributedPubSubMediatorSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #publish-message }

Java
: @@snip [DistributedPubSubMediatorTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #publish-message }
: @@snip [DistributedPubSubMediatorTest.java](/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #publish-message }
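
A publisher sketch for the same topic:

```scala
import akka.actor.Actor
import akka.cluster.pubsub.{ DistributedPubSub, DistributedPubSubMediator }

class Publisher extends Actor {
  import DistributedPubSubMediator.Publish

  val mediator = DistributedPubSub(context.system).mediator

  def receive = {
    case in: String =>
      // delivered to every subscriber of "content", cluster-wide
      mediator ! Publish("content", in.toUpperCase)
  }
}
```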

### Topic Groups

@@ -161,35 +161,35 @@ can explicitly remove entries with `DistributedPubSubMediator.Remove`.

An example of a destination actor:

Scala
: @@snip [DistributedPubSubMediatorSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #send-destination }
: @@snip [DistributedPubSubMediatorSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #send-destination }

Java
: @@snip [DistributedPubSubMediatorTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #send-destination }
: @@snip [DistributedPubSubMediatorTest.java](/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #send-destination }

Destination actors can be started on several nodes in the cluster, and all will receive
messages sent to the path (without address information).

Scala
: @@snip [DistributedPubSubMediatorSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #start-send-destinations }
: @@snip [DistributedPubSubMediatorSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #start-send-destinations }

Java
: @@snip [DistributedPubSubMediatorTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #start-send-destinations }
: @@snip [DistributedPubSubMediatorTest.java](/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #start-send-destinations }

A simple actor that sends to the path:

Scala
: @@snip [DistributedPubSubMediatorSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #sender }
: @@snip [DistributedPubSubMediatorSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #sender }

Java
: @@snip [DistributedPubSubMediatorTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #sender }
: @@snip [DistributedPubSubMediatorTest.java](/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #sender }

It can send messages to the path from anywhere in the cluster:

Scala
: @@snip [DistributedPubSubMediatorSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #send-message }
: @@snip [DistributedPubSubMediatorSpec.scala](/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala) { #send-message }

Java
: @@snip [DistributedPubSubMediatorTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #send-message }
: @@snip [DistributedPubSubMediatorTest.java](/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #send-message }
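
Point-to-point `Send` differs from `Publish` mainly in the message type; a sketch:

```scala
import akka.cluster.pubsub.DistributedPubSubMediator.Send

// routed to one actor registered with Put at the matching path;
// localAffinity prefers an instance running on the sending node
mediator ! Send(path = "/user/destination", msg = "hello", localAffinity = true)
```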

It is also possible to broadcast messages to the actors that have been registered with
`Put`. Send the `DistributedPubSubMediator.SendToAll` message to the local mediator and the wrapped message

@@ -213,7 +213,7 @@ want to use different cluster roles for different mediators.

The `DistributedPubSub` extension can be configured with the following properties:

@@snip [reference.conf]($akka$/akka-cluster-tools/src/main/resources/reference.conf) { #pub-sub-ext-config }
@@snip [reference.conf](/akka-cluster-tools/src/main/resources/reference.conf) { #pub-sub-ext-config }

It is recommended to load the extension when the actor system is started by defining it in
the `akka.extensions` configuration property. Otherwise it will be activated when first used

@@ -5,10 +5,10 @@ Originally conceived as a way to send messages to groups of actors, the

implementing a simple interface:

Scala
: @@snip [EventBus.scala]($akka$/akka-actor/src/main/scala/akka/event/EventBus.scala) { #event-bus-api }
: @@snip [EventBus.scala](/akka-actor/src/main/scala/akka/event/EventBus.scala) { #event-bus-api }

Java
: @@snip [EventBusDocTest.java]($code$/java/jdocs/event/EventBusDocTest.java) { #event-bus-api }
: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #event-bus-api }

@@@ note

@@ -48,18 +48,18 @@ compare subscribers and how exactly to classify.

The necessary methods to be implemented are illustrated with the following example:

Scala
: @@snip [EventBusDocSpec.scala]($code$/scala/docs/event/EventBusDocSpec.scala) { #lookup-bus }
: @@snip [EventBusDocSpec.scala](/akka-docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #lookup-bus }

Java
: @@snip [EventBusDocTest.java]($code$/java/jdocs/event/EventBusDocTest.java) { #lookup-bus }
: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #lookup-bus }
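
The lookup-classified bus referenced here follows this shape (a sketch):

```scala
import akka.actor.ActorRef
import akka.event.{ EventBus, LookupClassification }

final case class MsgEnvelope(topic: String, payload: Any)

class LookupBusImpl extends EventBus with LookupClassification {
  type Event = MsgEnvelope
  type Classifier = String
  type Subscriber = ActorRef

  // extracts the classifier from the published event
  override protected def classify(event: Event): Classifier = event.topic

  // delivers the payload to one subscriber
  override protected def publish(event: Event, subscriber: Subscriber): Unit =
    subscriber ! event.payload

  // subscribers are kept in an ordered set, so they must be comparable
  override protected def compareSubscribers(a: Subscriber, b: Subscriber): Int =
    a.compareTo(b)

  // hint for the initial size of the classifier-to-subscribers map
  override protected def mapSize: Int = 128
}
```

Subscribing an actor with `subscribe(ref, "greetings")` and then calling `publish(MsgEnvelope("greetings", "Hello"))` would deliver `"Hello"` to it.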

A test for this implementation may look like this:

Scala
: @@snip [EventBusDocSpec.scala]($code$/scala/docs/event/EventBusDocSpec.scala) { #lookup-bus-test }
: @@snip [EventBusDocSpec.scala](/akka-docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #lookup-bus-test }

Java
: @@snip [EventBusDocTest.java]($code$/java/jdocs/event/EventBusDocTest.java) { #lookup-bus-test }
: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #lookup-bus-test }

This classifier is efficient in case no subscribers exist for a particular event.

@@ -76,18 +76,18 @@ classifier hierarchy.

The necessary methods to be implemented are illustrated with the following example:

Scala
: @@snip [EventBusDocSpec.scala]($code$/scala/docs/event/EventBusDocSpec.scala) { #subchannel-bus }
: @@snip [EventBusDocSpec.scala](/akka-docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #subchannel-bus }

Java
: @@snip [EventBusDocTest.java]($code$/java/jdocs/event/EventBusDocTest.java) { #subchannel-bus }
: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #subchannel-bus }

A test for this implementation may look like this:

Scala
: @@snip [EventBusDocSpec.scala]($code$/scala/docs/event/EventBusDocSpec.scala) { #subchannel-bus-test }
: @@snip [EventBusDocSpec.scala](/akka-docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #subchannel-bus-test }

Java
: @@snip [EventBusDocTest.java]($code$/java/jdocs/event/EventBusDocTest.java) { #subchannel-bus-test }
: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #subchannel-bus-test }

This classifier is also efficient in case no subscribers are found for an
event, but it uses conventional locking to synchronize an internal classifier

@@ -106,18 +106,18 @@ stations by geographical reachability (for old-school radio-wave transmission).

The necessary methods to be implemented are illustrated with the following example:

Scala
: @@snip [EventBusDocSpec.scala]($code$/scala/docs/event/EventBusDocSpec.scala) { #scanning-bus }
: @@snip [EventBusDocSpec.scala](/akka-docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #scanning-bus }

Java
: @@snip [EventBusDocTest.java]($code$/java/jdocs/event/EventBusDocTest.java) { #scanning-bus }
: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #scanning-bus }

A test for this implementation may look like this:

Scala
: @@snip [EventBusDocSpec.scala]($code$/scala/docs/event/EventBusDocSpec.scala) { #scanning-bus-test }
: @@snip [EventBusDocSpec.scala](/akka-docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #scanning-bus-test }

Java
: @@snip [EventBusDocTest.java]($code$/java/jdocs/event/EventBusDocTest.java) { #scanning-bus-test }
: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #scanning-bus-test }

This classifier always takes time proportional to the number of
subscriptions, independent of how many actually match.

@@ -137,18 +137,18 @@ takes care of unsubscribing terminated actors automatically.

The necessary methods to be implemented are illustrated with the following example:

Scala
: @@snip [EventBusDocSpec.scala]($code$/scala/docs/event/EventBusDocSpec.scala) { #actor-bus }
: @@snip [EventBusDocSpec.scala](/akka-docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #actor-bus }

Java
: @@snip [EventBusDocTest.java]($code$/java/jdocs/event/EventBusDocTest.java) { #actor-bus }
: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #actor-bus }

A test for this implementation may look like this:

Scala
: @@snip [EventBusDocSpec.scala]($code$/scala/docs/event/EventBusDocSpec.scala) { #actor-bus-test }
: @@snip [EventBusDocSpec.scala](/akka-docs/src/test/scala/docs/event/EventBusDocSpec.scala) { #actor-bus-test }

Java
: @@snip [EventBusDocTest.java]($code$/java/jdocs/event/EventBusDocTest.java) { #actor-bus-test }
: @@snip [EventBusDocTest.java](/akka-docs/src/test/java/jdocs/event/EventBusDocTest.java) { #actor-bus-test }

This classifier is still generic in the event type, and it is efficient for
all use cases.

@@ -165,20 +165,20 @@ how a simple subscription works. Given a simple actor:

@@@ div { .group-scala }

@@snip [LoggingDocSpec.scala]($code$/scala/docs/event/LoggingDocSpec.scala) { #deadletters }
@@snip [LoggingDocSpec.scala](/akka-docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #deadletters }

@@@

@@@ div { .group-java }

@@snip [LoggingDocTest.java]($code$/java/jdocs/event/LoggingDocTest.java) { #imports-deadletter }
@@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #imports-deadletter }

@@snip [LoggingDocTest.java]($code$/java/jdocs/event/LoggingDocTest.java) { #deadletter-actor }
@@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #deadletter-actor }

it can be subscribed like this:
It can be subscribed like this:

@@snip [LoggingDocTest.java]($code$/java/jdocs/event/LoggingDocTest.java) { #deadletters }
@@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #deadletters }

@@@

@@ -188,10 +188,10 @@ is implemented in the event stream, it is possible to subscribe to a group of ev

subscribing to their common superclass as demonstrated in the following example:

Scala
: @@snip [LoggingDocSpec.scala]($code$/scala/docs/event/LoggingDocSpec.scala) { #superclass-subscription-eventstream }
: @@snip [LoggingDocSpec.scala](/akka-docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #superclass-subscription-eventstream }

Java
: @@snip [LoggingDocTest.java]($code$/java/jdocs/event/LoggingDocTest.java) { #superclass-subscription-eventstream }
: @@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #superclass-subscription-eventstream }
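
A sketch of superclass subscription on the event stream (`jazzListener` and `musicListener` are hypothetical `ActorRef`s):

```scala
abstract class AllKindsOfMusic { def artist: String }
final case class Jazz(artist: String) extends AllKindsOfMusic
final case class Electronic(artist: String) extends AllKindsOfMusic

system.eventStream.subscribe(jazzListener, classOf[Jazz])
system.eventStream.subscribe(musicListener, classOf[AllKindsOfMusic])

system.eventStream.publish(Electronic("Parov Stelar")) // musicListener only
system.eventStream.publish(Jazz("Sonny Rollins"))      // both listeners
```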

Similarly to [Actor Classification](#actor-classification), `EventStream` will automatically remove subscribers when they terminate.

@@ -253,18 +253,18 @@ However, in case you find yourself in need of debugging these kinds of low level

it's still possible to subscribe to them explicitly:

Scala
: @@snip [LoggingDocSpec.scala]($code$/scala/docs/event/LoggingDocSpec.scala) { #suppressed-deadletters }
: @@snip [LoggingDocSpec.scala](/akka-docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #suppressed-deadletters }

Java
: @@snip [LoggingDocTest.java]($code$/java/jdocs/event/LoggingDocTest.java) { #suppressed-deadletters }
: @@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #suppressed-deadletters }

or all dead letters (including the suppressed ones):

Scala
: @@snip [LoggingDocSpec.scala]($code$/scala/docs/event/LoggingDocSpec.scala) { #all-deadletters }
: @@snip [LoggingDocSpec.scala](/akka-docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #all-deadletters }

Java
: @@snip [LoggingDocTest.java]($code$/java/jdocs/event/LoggingDocTest.java) { #all-deadletters }
: @@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #all-deadletters }

### Other Uses

@@ -21,40 +21,40 @@ So let's create a sample extension that lets us count the number of times someth

First, we define what our `Extension` should do:

Scala
: @@snip [ExtensionDocSpec.scala]($code$/scala/docs/extension/ExtensionDocSpec.scala) { #extension }
: @@snip [ExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #extension }

Java
: @@snip [ExtensionDocTest.java]($code$/java/jdocs/extension/ExtensionDocTest.java) { #imports #extension }
: @@snip [ExtensionDocTest.java](/akka-docs/src/test/java/jdocs/extension/ExtensionDocTest.java) { #imports #extension }

Then we need to create an `ExtensionId` for our extension so we can grab a hold of it.

Scala
: @@snip [ExtensionDocSpec.scala]($code$/scala/docs/extension/ExtensionDocSpec.scala) { #extensionid }
: @@snip [ExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #extensionid }

Java
: @@snip [ExtensionDocTest.java]($code$/java/jdocs/extension/ExtensionDocTest.java) { #imports #extensionid }
: @@snip [ExtensionDocTest.java](/akka-docs/src/test/java/jdocs/extension/ExtensionDocTest.java) { #imports #extensionid }
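
A sketch of the counting extension and its `ExtensionId`:

```scala
import java.util.concurrent.atomic.AtomicLong
import akka.actor._

class CountExtensionImpl extends Extension {
  // state shared by everyone that looks the extension up on the same system
  private val counter = new AtomicLong(0)
  def increment(): Long = counter.incrementAndGet()
}

object CountExtension extends ExtensionId[CountExtensionImpl] with ExtensionIdProvider {
  // required by ExtensionIdProvider so the extension can be loaded from config
  override def lookup = CountExtension
  // called by Akka exactly once per actor system
  override def createExtension(system: ExtendedActorSystem) = new CountExtensionImpl
}
```

Looking it up is then as simple as `CountExtension(system).increment()`.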

Wicked! Now all we need to do is to actually use it:

Scala
: @@snip [ExtensionDocSpec.scala]($code$/scala/docs/extension/ExtensionDocSpec.scala) { #extension-usage }
: @@snip [ExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #extension-usage }

Java
: @@snip [ExtensionDocTest.java]($code$/java/jdocs/extension/ExtensionDocTest.java) { #extension-usage }
: @@snip [ExtensionDocTest.java](/akka-docs/src/test/java/jdocs/extension/ExtensionDocTest.java) { #extension-usage }

Or from inside of an Akka Actor:

Scala
: @@snip [ExtensionDocSpec.scala]($code$/scala/docs/extension/ExtensionDocSpec.scala) { #extension-usage-actor }
: @@snip [ExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #extension-usage-actor }

Java
: @@snip [ExtensionDocTest.java]($code$/java/jdocs/extension/ExtensionDocTest.java) { #extension-usage-actor }
: @@snip [ExtensionDocTest.java](/akka-docs/src/test/java/jdocs/extension/ExtensionDocTest.java) { #extension-usage-actor }

@@@ div { .group-scala }

You can also hide the extension behind traits:

@@snip [ExtensionDocSpec.scala]($code$/scala/docs/extension/ExtensionDocSpec.scala) { #extension-usage-actor-trait }
@@snip [ExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #extension-usage-actor-trait }

@@@

@@ -66,7 +66,7 @@ To be able to load extensions from your Akka configuration you must add FQCNs of

in the `akka.extensions` section of the config you provide to your `ActorSystem`.

Scala
: @@snip [ExtensionDocSpec.scala]($code$/scala/docs/extension/ExtensionDocSpec.scala) { #config }
: @@snip [ExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #config }

Java
: @@@vars

@@ -89,23 +89,23 @@ The @ref:[configuration](general/configuration.md) can be used for application s

Sample configuration:

@@snip [SettingsExtensionDocSpec.scala]($code$/scala/docs/extension/SettingsExtensionDocSpec.scala) { #config }
@@snip [SettingsExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/SettingsExtensionDocSpec.scala) { #config }

The `Extension`:

Scala
: @@snip [SettingsExtensionDocSpec.scala]($code$/scala/docs/extension/SettingsExtensionDocSpec.scala) { #imports #extension #extensionid }
: @@snip [SettingsExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/SettingsExtensionDocSpec.scala) { #imports #extension #extensionid }

Java
: @@snip [SettingsExtensionDocTest.java]($code$/java/jdocs/extension/SettingsExtensionDocTest.java) { #imports #extension #extensionid }
: @@snip [SettingsExtensionDocTest.java](/akka-docs/src/test/java/jdocs/extension/SettingsExtensionDocTest.java) { #imports #extension #extensionid }
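
A sketch of such a settings extension (the `myapp.*` keys are illustrative):

```scala
import java.util.concurrent.TimeUnit
import akka.actor._
import com.typesafe.config.Config
import scala.concurrent.duration._

class SettingsImpl(config: Config) extends Extension {
  // settings are parsed once, when the extension is created
  val DbUri: String = config.getString("myapp.db.uri")
  val CircuitBreakerTimeout: FiniteDuration =
    config.getDuration("myapp.circuit-breaker.timeout", TimeUnit.MILLISECONDS).millis
}

object Settings extends ExtensionId[SettingsImpl] with ExtensionIdProvider {
  override def lookup = Settings
  override def createExtension(system: ExtendedActorSystem) =
    new SettingsImpl(system.settings.config)
}
```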

Use it:

Scala
: @@snip [SettingsExtensionDocSpec.scala]($code$/scala/docs/extension/SettingsExtensionDocSpec.scala) { #extension-usage-actor }
: @@snip [SettingsExtensionDocSpec.scala](/akka-docs/src/test/scala/docs/extension/SettingsExtensionDocSpec.scala) { #extension-usage-actor }

Java
: @@snip [SettingsExtensionDocTest.java]($code$/java/jdocs/extension/SettingsExtensionDocTest.java) { #extension-usage-actor }
: @@snip [SettingsExtensionDocTest.java](/akka-docs/src/test/java/jdocs/extension/SettingsExtensionDocTest.java) { #extension-usage-actor }

## Library extensions

@@ -125,4 +125,4 @@ this could be important is in tests.

The ``akka.library-extensions`` must never be assigned (`= ["Extension"]`) instead of appending, as this will break
the library-extension mechanism and make behavior depend on class path ordering.

@@@
@@@

@@ -36,7 +36,7 @@

# Full Source Code of the Fault Tolerance Sample

Scala
: @@snip [FaultHandlingDocSample.scala]($code$/scala/docs/actor/FaultHandlingDocSample.scala) { #all }
: @@snip [FaultHandlingDocSample.scala](/akka-docs/src/test/scala/docs/actor/FaultHandlingDocSample.scala) { #all }

Java
: @@snip [FaultHandlingDocSample.java]($code$/java/jdocs/actor/FaultHandlingDocSample.java) { #all }
: @@snip [FaultHandlingDocSample.java](/akka-docs/src/test/java/jdocs/actor/FaultHandlingDocSample.java) { #all }

@@ -36,10 +36,10 @@ in more depth.

For the sake of demonstration let us consider the following strategy:

Scala
: @@snip [FaultHandlingDocSpec.scala]($code$/scala/docs/actor/FaultHandlingDocSpec.scala) { #strategy }
: @@snip [FaultHandlingDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #strategy }

Java
: @@snip [FaultHandlingTest.java]($code$/java/jdocs/actor/FaultHandlingTest.java) { #strategy }
: @@snip [FaultHandlingTest.java](/akka-docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #strategy }
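
From the narrative below, the referenced strategy is presumably shaped like this (a sketch, placed inside the supervisor):

```scala
import akka.actor.OneForOneStrategy
import akka.actor.SupervisorStrategy._
import scala.concurrent.duration._

override val supervisorStrategy =
  OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1.minute) {
    case _: ArithmeticException      => Resume
    case _: NullPointerException     => Restart
    case _: IllegalArgumentException => Stop
    case _: Exception                => Escalate
  }
```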

We have chosen a few well-known exception types in order to demonstrate the
application of the fault handling directives described in @ref:[supervision](general/supervision.md).

@@ -94,7 +94,7 @@ in the same way as the default strategy defined above.

You can combine your own strategy with the default strategy:

@@snip [FaultHandlingDocSpec.scala]($code$/scala/docs/actor/FaultHandlingDocSpec.scala) { #default-strategy-fallback }
@@snip [FaultHandlingDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #default-strategy-fallback }

@@@

@@ -135,73 +135,73 @@ The following section shows the effects of the different directives in practice,

where a test setup is needed. First off, we need a suitable supervisor:

Scala
: @@snip [FaultHandlingDocSpec.scala]($code$/scala/docs/actor/FaultHandlingDocSpec.scala) { #supervisor }
: @@snip [FaultHandlingDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #supervisor }

Java
: @@snip [FaultHandlingTest.java]($code$/java/jdocs/actor/FaultHandlingTest.java) { #supervisor }
: @@snip [FaultHandlingTest.java](/akka-docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #supervisor }

This supervisor will be used to create a child, with which we can experiment:

Scala
: @@snip [FaultHandlingDocSpec.scala]($code$/scala/docs/actor/FaultHandlingDocSpec.scala) { #child }
: @@snip [FaultHandlingDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #child }

Java
: @@snip [FaultHandlingTest.java]($code$/java/jdocs/actor/FaultHandlingTest.java) { #child }
: @@snip [FaultHandlingTest.java](/akka-docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #child }

The test is made easier by using the utilities described in @scala[@ref:[Testing Actor Systems](testing.md)]@java[@ref:[TestKit](testing.md)],
where `TestProbe` provides an actor ref useful for receiving and inspecting replies.

Scala
: @@snip [FaultHandlingDocSpec.scala]($code$/scala/docs/actor/FaultHandlingDocSpec.scala) { #testkit }
: @@snip [FaultHandlingDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #testkit }

Java
: @@snip [FaultHandlingTest.java]($code$/java/jdocs/actor/FaultHandlingTest.java) { #testkit }
: @@snip [FaultHandlingTest.java](/akka-docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #testkit }

Let us create actors:

Scala
: @@snip [FaultHandlingDocSpec.scala]($code$/scala/docs/actor/FaultHandlingDocSpec.scala) { #create }
: @@snip [FaultHandlingDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #create }

Java
: @@snip [FaultHandlingTest.java]($code$/java/jdocs/actor/FaultHandlingTest.java) { #create }
: @@snip [FaultHandlingTest.java](/akka-docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #create }

The first test shall demonstrate the `Resume` directive, so we try it out by
setting some non-initial state in the actor and having it fail:

Scala
: @@snip [FaultHandlingDocSpec.scala]($code$/scala/docs/actor/FaultHandlingDocSpec.scala) { #resume }
: @@snip [FaultHandlingDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #resume }

Java
: @@snip [FaultHandlingTest.java]($code$/java/jdocs/actor/FaultHandlingTest.java) { #resume }
: @@snip [FaultHandlingTest.java](/akka-docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #resume }
|
||||
|
||||
As you can see the value 42 survives the fault handling directive. Now, if we
|
||||
change the failure to a more serious `NullPointerException`, that will no
|
||||
longer be the case:
|
||||
|
||||
Scala
|
||||
: @@snip [FaultHandlingDocSpec.scala]($code$/scala/docs/actor/FaultHandlingDocSpec.scala) { #restart }
|
||||
: @@snip [FaultHandlingDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #restart }
|
||||
|
||||
Java
|
||||
: @@snip [FaultHandlingTest.java]($code$/java/jdocs/actor/FaultHandlingTest.java) { #restart }
|
||||
: @@snip [FaultHandlingTest.java](/akka-docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #restart }
|
||||
|
||||
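
Putting the pieces together, the interaction these tests exercise looks roughly like this (a sketch using the `Child` shape above, assuming the TestKit's implicit sender so that `expectMsg` sees the replies):

```scala
supervisor ! akka.actor.Props[Child]
val child = expectMsgType[akka.actor.ActorRef]

child ! 42                       // set some non-initial state
child ! new ArithmeticException  // Resume: the same instance keeps running
child ! "get"
expectMsg(42)                    // state survived the failure

child ! new NullPointerException // Restart: a fresh instance takes over
child ! "get"
expectMsg(0)                     // back to the default value
```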

And finally in case of the fatal `IllegalArgumentException` the child will be
terminated by the supervisor:

Scala
: @@snip [FaultHandlingDocSpec.scala]($code$/scala/docs/actor/FaultHandlingDocSpec.scala) { #stop }
: @@snip [FaultHandlingDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #stop }

Java
: @@snip [FaultHandlingTest.java]($code$/java/jdocs/actor/FaultHandlingTest.java) { #stop }
: @@snip [FaultHandlingTest.java](/akka-docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #stop }

Up to now the supervisor was completely unaffected by the child’s failure,
because the configured directives handled it. In case of an `Exception`, this is not
true anymore and the supervisor escalates the failure.

Scala
: @@snip [FaultHandlingDocSpec.scala]($code$/scala/docs/actor/FaultHandlingDocSpec.scala) { #escalate-kill }
: @@snip [FaultHandlingDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #escalate-kill }

Java
: @@snip [FaultHandlingTest.java]($code$/java/jdocs/actor/FaultHandlingTest.java) { #escalate-kill }
: @@snip [FaultHandlingTest.java](/akka-docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #escalate-kill }

The supervisor itself is supervised by the top-level actor provided by the
`ActorSystem`, which has the default policy to restart in case of all

@@ -214,16 +214,16 @@ In case this is not desired (which depends on the use case), we need to use a

different supervisor which overrides this behavior.

Scala
: @@snip [FaultHandlingDocSpec.scala]($code$/scala/docs/actor/FaultHandlingDocSpec.scala) { #supervisor2 }
: @@snip [FaultHandlingDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #supervisor2 }

Java
: @@snip [FaultHandlingTest.java]($code$/java/jdocs/actor/FaultHandlingTest.java) { #supervisor2 }
: @@snip [FaultHandlingTest.java](/akka-docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #supervisor2 }

With this parent, the child survives the escalated restart, as demonstrated in
the last test:

Scala
: @@snip [FaultHandlingDocSpec.scala]($code$/scala/docs/actor/FaultHandlingDocSpec.scala) { #escalate-restart }
: @@snip [FaultHandlingDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FaultHandlingDocSpec.scala) { #escalate-restart }

Java
: @@snip [FaultHandlingTest.java]($code$/java/jdocs/actor/FaultHandlingTest.java) { #escalate-restart }
: @@snip [FaultHandlingTest.java](/akka-docs/src/test/java/jdocs/actor/FaultHandlingTest.java) { #escalate-restart }

@@ -40,28 +40,28 @@ send them on after the burst ended or a flush request is received.

First, consider all of the below to use these import statements:

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #simple-imports }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #simple-imports }

Java
: @@snip [Buncher.java]($code$/java/jdocs/actor/fsm/Buncher.java) { #simple-imports }
: @@snip [Buncher.java](/akka-docs/src/test/java/jdocs/actor/fsm/Buncher.java) { #simple-imports }

The contract of our “Buncher” actor is that it accepts or produces the following messages:

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #simple-events }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #simple-events }

Java
: @@snip [Events.java]($code$/java/jdocs/actor/fsm/Events.java) { #simple-events }
: @@snip [Events.java](/akka-docs/src/test/java/jdocs/actor/fsm/Events.java) { #simple-events }

`SetTarget` is needed for starting it up, setting the destination for the
`Batches` to be passed on; `Queue` will add to the internal queue while
`Flush` will mark the end of a burst.

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #simple-state }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #simple-state }

Java
: @@snip [Buncher.java]($code$/java/jdocs/actor/fsm/Buncher.java) { #simple-state }
: @@snip [Buncher.java](/akka-docs/src/test/java/jdocs/actor/fsm/Buncher.java) { #simple-state }

The actor can be in two states: no message queued (aka `Idle`) or some
message queued (aka `Active`). It will stay in the `Active` state as long as
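
In code, the two states and the accompanying data are typically modeled as small algebraic data types; a sketch under the names used in the prose (the authoritative definitions are in the snippet):

```scala
import akka.actor.ActorRef

sealed trait State
case object Idle extends State
case object Active extends State

sealed trait Data
case object Uninitialized extends Data
// target receives the finished Batch; queue holds the burst collected so far
final case class Todo(target: ActorRef, queue: Seq[Any]) extends Data
```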

@@ -72,10 +72,10 @@ the actual queue of messages.

Now let’s take a look at the skeleton for our FSM actor:

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #simple-fsm }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #simple-fsm }

Java
: @@snip [Buncher.java]($code$/java/jdocs/actor/fsm/Buncher.java) { #simple-fsm }
: @@snip [Buncher.java](/akka-docs/src/test/java/jdocs/actor/fsm/Buncher.java) { #simple-fsm }

The basic strategy is to declare the actor, @scala[mixing in the `FSM` trait]@java[by inheriting the `AbstractFSM` class]
and specifying the possible states and data values as type parameters. Within
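
Reduced to its bones, such a skeleton looks roughly like this (a sketch reusing the `State`/`Data` types above and assuming a `SetTarget` message from the events snippet; the real `Buncher` has more clauses):

```scala
import akka.actor.FSM

class Buncher extends FSM[State, Data] {

  startWith(Idle, Uninitialized) // initial state and initial state data

  when(Idle) {
    case Event(SetTarget(ref), Uninitialized) =>
      stay using Todo(ref, Vector.empty) // remember the target, remain Idle
  }

  // ... when(Active) { ... }, whenUnhandled { ... }, onTransition { ... } ...

  initialize() // performs the transition into the initial state
}
```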

@@ -103,10 +103,10 @@ which is not handled by the `when()` block is passed to the

`whenUnhandled()` block:

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #unhandled-elided }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #unhandled-elided }

Java
: @@snip [Buncher.java]($code$/java/jdocs/actor/fsm/Buncher.java) { #unhandled-elided }
: @@snip [Buncher.java](/akka-docs/src/test/java/jdocs/actor/fsm/Buncher.java) { #unhandled-elided }

The first case handled here is adding `Queue()` requests to the internal
queue and going to the `Active` state (this does the obvious thing of staying

@@ -121,10 +121,10 @@ multiple such blocks and all of them will be tried for matching behavior in

case a state transition occurs (i.e. only when the state actually changes).

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #transition-elided }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #transition-elided }

Java
: @@snip [Buncher.java]($code$/java/jdocs/actor/fsm/Buncher.java) { #transition-elided }
: @@snip [Buncher.java](/akka-docs/src/test/java/jdocs/actor/fsm/Buncher.java) { #transition-elided }

The transition callback is a @scala[partial function]@java[builder constructed by `matchState`, followed by zero or multiple `state`], which takes as input a pair of
states—the current and the next state. @scala[The FSM trait includes a convenience

@@ -146,10 +146,10 @@ To verify that this buncher actually works, it is quite easy to write a test

using the @scala[@ref:[Testing Actor Systems which is conveniently bundled with ScalaTest traits into `AkkaSpec`](testing.md)]@java[@ref:[TestKit](testing.md), here using JUnit as an example]:

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #test-code }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #test-code }

Java
: @@snip [BuncherTest.java]($code$/java/jdocs/actor/fsm/BuncherTest.java) { #test-code }
: @@snip [BuncherTest.java](/akka-docs/src/test/java/jdocs/actor/fsm/BuncherTest.java) { #test-code }

## Reference

@@ -165,10 +165,10 @@ Actor since an Actor is created to drive the FSM.

]

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #simple-fsm }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #simple-fsm }

Java
: @@snip [Buncher.java]($code$/java/jdocs/actor/fsm/Buncher.java) { #simple-fsm }
: @@snip [Buncher.java](/akka-docs/src/test/java/jdocs/actor/fsm/Buncher.java) { #simple-fsm }

@@@ note

@@ -222,10 +222,10 @@ which is conveniently given using the @scala[partial function literal]@java[stat

demonstrated below:

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #when-syntax }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #when-syntax }

Java
: @@snip [Buncher.java]($code$/java/jdocs/actor/fsm/Buncher.java) { #when-syntax }
: @@snip [Buncher.java](/akka-docs/src/test/java/jdocs/actor/fsm/Buncher.java) { #when-syntax }

@@@ div { .group-scala }

@@ -247,10 +247,10 @@ states. If you want to leave the handling of a state “unhandled” (more below

it still needs to be declared like this:

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #NullFunction }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #NullFunction }

Java
: @@snip [FSMDocTest.java]($code$/java/jdocs/actor/fsm/FSMDocTest.java) { #NullFunction }
: @@snip [FSMDocTest.java](/akka-docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #NullFunction }

### Defining the Initial State

@@ -271,10 +271,10 @@ do something else in this case you can specify that with

`whenUnhandled(stateFunction)`:

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #unhandled-syntax }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #unhandled-syntax }

Java
: @@snip [FSMDocTest.java]($code$/java/jdocs/actor/fsm/FSMDocTest.java) { #unhandled-syntax }
: @@snip [FSMDocTest.java](/akka-docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #unhandled-syntax }

Within this handler the state of the FSM may be queried using the
`stateName` method.
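
A sketch of the shape such a handler takes inside an FSM body, logging the unexpected event together with the current state:

```scala
whenUnhandled {
  // common handler for events no when() block matched in the current state
  case Event(msg, stateData) =>
    log.warning("received unhandled event {} in state {}", msg, stateName)
    stay
}
```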

@@ -314,10 +314,10 @@ does not modify the state transition.

All modifiers can be chained to achieve a nice and concise description:

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #modifier-syntax }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #modifier-syntax }

Java
: @@snip [FSMDocTest.java]($code$/java/jdocs/actor/fsm/FSMDocTest.java) { #modifier-syntax }
: @@snip [FSMDocTest.java](/akka-docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #modifier-syntax }

The parentheses are not actually needed in all cases, but they visually
distinguish between modifiers and their arguments and therefore make the code

@@ -356,10 +356,10 @@ resulting state is needed as it is not possible to modify the transition in

progress.

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #transition-syntax }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #transition-syntax }

Java
: @@snip [FSMDocTest.java]($code$/java/jdocs/actor/fsm/FSMDocTest.java) { #transition-syntax }
: @@snip [FSMDocTest.java](/akka-docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #transition-syntax }
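
For orientation, an `onTransition` block matches on pairs of states with the `->` extractor; a fragment sketched inside an FSM body (the `Tick` message, the state names, and `scala.concurrent.duration._` being imported are assumptions):

```scala
onTransition {
  case Idle -> Active => setTimer("timeout", Tick, 1.second, repeat = true)
  case Active -> _    => cancelTimer("timeout")
  case x -> Idle      => log.info("entering Idle from " + x)
}
```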

@@@ div { .group-scala }

@@ -376,10 +376,10 @@ It is also possible to pass a function object accepting two states to

a method:

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #alt-transition-syntax }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #alt-transition-syntax }

Java
: @@snip [FSMDocTest.java]($code$/java/jdocs/actor/fsm/FSMDocTest.java) { #alt-transition-syntax }
: @@snip [FSMDocTest.java](/akka-docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #alt-transition-syntax }

The handlers registered with this method are stacked, so you can intersperse
`onTransition` blocks with `when` blocks as suits your design. It

@@ -431,13 +431,13 @@ transformed using Scala’s full supplement of functional programming tools. In

order to retain type inference, there is a helper function which may be used in
case some common handling logic shall be applied to different clauses:

@@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #transform-syntax }
@@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #transform-syntax }

It goes without saying that the arguments to this method may also be stored, to
be used several times, e.g. when applying the same transformation to several
`when()` blocks:

@@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #alt-transform-syntax }
@@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #alt-transform-syntax }

@@@

@@ -495,20 +495,20 @@ may not be used within a `when` block).

@@@

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #stop-syntax }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #stop-syntax }

Java
: @@snip [FSMDocTest.java]($code$/java/jdocs/actor/fsm/FSMDocTest.java) { #stop-syntax }
: @@snip [FSMDocTest.java](/akka-docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #stop-syntax }

You can use `onTermination(handler)` to specify custom code that is
executed when the FSM is stopped. The handler is a partial function which takes
a `StopEvent(reason, stateName, stateData)` as argument:

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #termination-syntax }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #termination-syntax }

Java
: @@snip [FSMDocTest.java]($code$/java/jdocs/actor/fsm/FSMDocTest.java) { #termination-syntax }
: @@snip [FSMDocTest.java](/akka-docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #termination-syntax }

As for the `whenUnhandled` case, this handler is not stacked, so each
invocation of `onTermination` replaces the previously installed handler.
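
The three possible stop reasons can be matched separately; a sketch of the shape such a handler takes:

```scala
onTermination {
  case StopEvent(FSM.Normal, state, data)         => // clean stop via stop()
  case StopEvent(FSM.Shutdown, state, data)       => // stopped from the outside
  case StopEvent(FSM.Failure(cause), state, data) => // stopped after a failure
}
```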

@@ -541,10 +541,10 @@ The setting `akka.actor.debug.fsm` in @ref:[configuration](general/configuration

event trace by `LoggingFSM` instances:

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #logging-fsm }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #logging-fsm }

Java
: @@snip [FSMDocTest.java]($code$/java/jdocs/actor/fsm/FSMDocTest.java) { #logging-fsm }
: @@snip [FSMDocTest.java](/akka-docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #logging-fsm }

This FSM will log at DEBUG level:

@@ -563,10 +563,10 @@ log which may be used during debugging (for tracing how the FSM entered a

certain failure state) or for other creative uses:

Scala
: @@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #logging-fsm }
: @@snip [FSMDocSpec.scala](/akka-docs/src/test/scala/docs/actor/FSMDocSpec.scala) { #logging-fsm }

Java
: @@snip [FSMDocTest.java]($code$/java/jdocs/actor/fsm/FSMDocTest.java) { #logging-fsm }
: @@snip [FSMDocTest.java](/akka-docs/src/test/java/jdocs/actor/fsm/FSMDocTest.java) { #logging-fsm }

The `logDepth` defaults to zero, which turns off the event log.

@@ -33,10 +33,10 @@ it will use its default dispatcher as the `ExecutionContext`, or you can use the

by the @scala[`ExecutionContext` companion object]@java[`ExecutionContexts` class] to wrap `Executors` and `ExecutorServices`, or even create your own.

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #diy-execution-context }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #diy-execution-context }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports1 #diy-execution-context }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #imports1 #diy-execution-context }

### Within Actors

@@ -48,10 +48,10 @@ to reuse the dispatcher for running the Futures by importing

@scala[`context.dispatcher`]@java[`getContext().dispatcher()`].

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #context-dispatcher }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #context-dispatcher }

Java
: @@snip [ActorWithFuture.java]($code$/java/jdocs/future/ActorWithFuture.java) { #context-dispatcher }
: @@snip [ActorWithFuture.java](/akka-docs/src/test/java/jdocs/future/ActorWithFuture.java) { #context-dispatcher }

## Use with Actors

@@ -62,10 +62,10 @@ Using @scala[an `Actor`'s `?`]@java[the `ActorRef`'s `ask`] method to send a mes

To wait for and retrieve the actual result the simplest method is:

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #ask-blocking }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #ask-blocking }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports1 #ask-blocking }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #imports1 #ask-blocking }

This will cause the current thread to block and wait for the @scala[`Actor`]@java[`AbstractActor`] to 'complete' the `Future` with its reply.
Blocking is discouraged though as it will cause performance problems.
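
The blocking variant boils down to `ask` plus `Await.result`; a minimal sketch, assuming `actor` and `msg` are in scope and the reply is a `String`:

```scala
import scala.concurrent.Await
import scala.concurrent.duration._
import akka.pattern.ask
import akka.util.Timeout

implicit val timeout = Timeout(5.seconds)
val future = actor ? msg // the ? operator comes from the ask import
// blocks the current thread until the reply arrives or the timeout expires
val result = Await.result(future, timeout.duration).asInstanceOf[String]
```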

@@ -86,7 +86,7 @@ asynchronous composition as described below.

When using non-blocking it is better to use the `mapTo` method to safely try to cast a `Future` to an expected type:

@@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #map-to }
@@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #map-to }

The `mapTo` method will return a new `Future` that contains the result if the cast was successful,
or a `ClassCastException` if not. Handling `Exception`s will be discussed further within this documentation.
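
In code this is a one-liner; a sketch, again assuming `actor` and `msg` exist and an implicit `Timeout` is in scope:

```scala
import scala.concurrent.Future
import akka.pattern.ask

// ask returns Future[Any]; mapTo narrows it, failing the Future with a
// ClassCastException if the reply is not a String
val future: Future[String] = ask(actor, msg).mapTo[String]
```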

@@ -99,10 +99,10 @@ Another useful message-transfer pattern is "pipe", which is to send the result o

The pipe pattern can be used by importing @java[`akka.pattern.PatternsCS.pipe`.]@scala[`akka.pattern.pipe`, and defining or importing an implicit instance of `ExecutionContext` in the scope.]

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #pipe-to-usage }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #pipe-to-usage }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports-ask #imports-pipe #pipe-to-usage }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #imports-ask #imports-pipe #pipe-to-usage }

To see how this works in more detail, let's introduce a small example consisting of three different actors,
`UserProxyActor`, `UserDataActor` and `UserActivityActor`.

@@ -120,26 +120,26 @@ then it gets the corresponding result from the appropriate backend actor based o

The message types you send to `UserProxyActor` are `GetUserData` and `GetUserActivities`:

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #pipe-to-proxy-messages }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #pipe-to-proxy-messages }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #pipe-to-proxy-messages }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #pipe-to-proxy-messages }

and `UserData` and @scala[`List[UserActivity]`]@java[`ArrayList<UserActivity>`] are returned to the original sender in the end.

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #pipe-to-returned-data }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #pipe-to-returned-data }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #pipe-to-returned-data }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #pipe-to-returned-data }

The backend `UserDataActor` and `UserActivityActor` are defined as follows:

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #pipe-to-user-data-actor }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #pipe-to-user-data-actor }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #pipe-to-user-data-actor }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #pipe-to-user-data-actor }

`UserDataActor` holds the data in memory, so that it can return the current state of the user data quickly upon a request.

@@ -147,10 +147,10 @@ On the other hand, `UserActivityActor` queries into a `repository` to retrieve h

sends the result to the `sender()` which is `UserProxy` in this case, with the pipe pattern.

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #pipe-to-user-activity-actor }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #pipe-to-user-activity-actor }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports-pipe #pipe-to-user-activity-actor }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #imports-pipe #pipe-to-user-activity-actor }

Since it needs to talk to the separate `repository`, it takes time to retrieve the list of `UserActivity`,
hence the return type of `queryHistoricalActivities` is @scala[`Future`]@java[`CompletableFuture`].

@@ -160,10 +160,10 @@ so that the result of the @scala[`Future`]@java[`CompletableFuture`] is sent to

Finally, the definition of `UserProxyActor` is as below.

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #pipe-to-proxy-actor }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #pipe-to-proxy-actor }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports-ask #imports-pipe #pipe-to-proxy-actor }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #imports-ask #imports-pipe #pipe-to-proxy-actor }
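
A rough sketch of what such a proxy can look like in Scala; the constructor parameters and the exact message shapes are assumptions for illustration:

```scala
import akka.actor.{ Actor, ActorRef }
import akka.pattern.{ ask, pipe }
import akka.util.Timeout
import scala.concurrent.duration._

class UserProxyActor(userData: ActorRef, userActivities: ActorRef) extends Actor {
  import context.dispatcher // ExecutionContext for ask/pipe
  implicit val timeout = Timeout(5.seconds)

  def receive = {
    // forward the request, pipe the eventual reply back to the asker
    case m: GetUserData       => (userData ? m).pipeTo(sender())
    case m: GetUserActivities => (userActivities ? m).pipeTo(sender())
  }
}
```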

Note that the @scala[`pipeTo`]@java[`pipe`] method is used together with the @scala[`?`]@java[`ask`] method.
Using @scala[`pipeTo`]@java[`pipe`] with the @scala[`?`]@java[`ask`] method is a common practice when you want to relay a message from one actor to another.

@@ -175,10 +175,10 @@ If you find yourself creating a pool of @scala[`Actor`s]@java[`AbstractActor`s]

there is an easier (and faster) way:

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #future-eval }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #future-eval }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports2 #future-eval }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #imports2 #future-eval }

In the above code the block passed to `Future` will be executed by the default `Dispatcher`,
with the return value of the block used to complete the `Future` (in this case, the result would be the string: "HelloWorld").
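
A minimal sketch of this direct construction, assuming an implicit `ExecutionContext` is in scope:

```scala
import scala.concurrent.Future

// the block runs asynchronously on the ExecutionContext and its
// return value completes the Future
val future: Future[String] = Future {
  "Hello" + "World"
}
future foreach println // eventually prints "HelloWorld"
```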

@@ -188,32 +188,32 @@ and we also avoid the overhead of managing an @scala[`Actor`]@java[`AbstractActo

You can also create already completed Futures using the @scala[`Future` companion]@java[`Futures` class], which can be either successes:

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #successful }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #successful }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #successful }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #successful }

Or failures:

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #failed }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #failed }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #failed }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #failed }

It is also possible to create an empty `Promise`, to be filled later, and obtain the corresponding `Future`:

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #promise }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #promise }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #promise }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #promise }

@@@ div { .group-java }

For these examples `PrintResult` is defined as follows:

@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #print-result }
@@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #print-result }

@@@

@@ -227,10 +227,10 @@ which performs some operation on the result of the `Future`, and returning a new

The return value of the `map` method is another `Future` that will contain the new result:

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #map }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #map }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports2 #map }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #imports2 #map }

In this example we are joining two strings together within a `Future`. Instead of waiting for @scala[`this`]@java[`f1`] to complete,
we apply our function that calculates the length of the string using the `map` method.

@@ -247,24 +247,24 @@ the `Future` has already been completed, when one of these methods is called.

The `map` method is fine if we are modifying a single `Future`,
but if 2 or more `Future`s are involved `map` will not allow you to combine them together:

@@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #wrong-nested-map }
@@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #wrong-nested-map }

`f3` is a `Future[Future[Int]]` instead of the desired `Future[Int]`. Instead, the `flatMap` method should be used:

@@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #flat-map }
@@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #flat-map }
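
The difference in one picture, sketched with an implicit `ExecutionContext` assumed in scope:

```scala
import scala.concurrent.Future

val f1 = Future { "Hello" + "World" }
val f2 = Future { 3 }

// map alone nests the futures:
val nested: Future[Future[Int]] = f1.map(x => f2.map(_ * x.length))
// flatMap flattens the inner future away:
val flat: Future[Int] = f1.flatMap(x => f2.map(_ * x.length))
```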

Composing futures using nested combinators can sometimes become quite complicated and hard to read, in these cases using Scala's
'for comprehensions' usually yields more readable code. See the next section for examples.

If you need to do conditional propagation, you can use `filter`:

@@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #filter }
@@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #filter }

### For Comprehensions

Since `Future` has a `map`, `filter` and `flatMap` method it can be used in a 'for comprehension':

@@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #for-comprehension }
@@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #for-comprehension }
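
A sketch of the shape such a comprehension takes (the arithmetic is illustrative; an implicit `ExecutionContext` is assumed):

```scala
val f = for {
  a <- Future(10 / 2) // 10 / 2 = 5
  b <- Future(a + 1)  //  5 + 1 = 6
  c <- Future(a - 1)  //  5 - 1 = 4
  if c > 3            // desugars to Future.filter
} yield b * c         //  6 * 4 = 24
```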

Something to keep in mind when doing this is that even though it looks like parts of the above example can run in parallel,
each step of the for comprehension is run sequentially. This will happen on separate threads for each step but

@@ -282,13 +282,13 @@ A common use case for this is combining the replies of several `Actor`s into a s

without resorting to calling `Await.result` or `Await.ready` to block for each result.
First an example of using `Await.result`:

@@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #composing-wrong }
@@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #composing-wrong }

Here we wait for the results from the first 2 `Actor`s before sending that result to the third `Actor`.
We called `Await.result` 3 times, which caused our little program to block 3 times before getting our final result.
Now compare that to this example:

@@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #composing }
@@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #composing }

Here we have 2 actors processing a single message each. Once the 2 results are available
(note that we don't block to get these results!), they are being added together and sent to a third `Actor`,

@@ -309,10 +309,10 @@ below are some examples on how that can be done in a non-blocking fashion.

@@@

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #sequence-ask }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #sequence-ask }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports3 #sequence }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #imports3 #sequence }

To better explain what happened in the example, `Future.sequence` is taking the @scala[`List[Future[Int]]`]@java[`Iterable<Future<Integer>>`]
and turning it into a @scala[`Future[List[Int]]`]@java[`Future<Iterable<Integer>>`]. We can then use `map` to work with the @scala[`List[Int]`]@java[`Iterable<Integer>`] directly,
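
Stripped of the actor interaction, the type-level story is small enough to sketch inline (implicit `ExecutionContext` assumed):

```scala
import scala.concurrent.Future

val listOfFutures: List[Future[Int]] = List.fill(10)(Future(1))
// invert List[Future[Int]] into Future[List[Int]]
val futureList: Future[List[Int]] = Future.sequence(listOfFutures)
// then operate on the whole list at once
val sum: Future[Int] = futureList.map(_.sum) // eventually 10
```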

@@ -323,16 +323,16 @@ The `traverse` method is similar to `sequence`, but it takes a sequence of `A` a

@java[and returns a `Future<Iterable<B>>`, enabling parallel map over the sequence, if you use `Futures.future` to create the `Future`.]

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #traverse }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #traverse }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports4 #traverse }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #imports4 #traverse }

@@@ div { .group-scala }

This is the same result as this example:

@@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #sequence }
@@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #sequence }

But it may be faster to use `traverse` as it doesn't have to create an intermediate `List[Future[Int]]`.

@@ -345,10 +345,10 @@ and then applies the function to all elements in the sequence of futures, non-bl

the execution will be started when the last of the Futures is completed.

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #fold }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #fold }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports5 #fold }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #imports5 #fold }

That's all it takes!
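
A sketch of folding a batch of futures into a single sum, using the `Future.fold` helper of the Scala standard library of this documentation's era (implicit `ExecutionContext` assumed):

```scala
import scala.concurrent.Future

val futures = (1 to 10).map(i => Future(i * 10))
// zero value 0, combined with each result as it becomes available
val futureSum: Future[Int] = Future.fold(futures)(0)(_ + _) // eventually 550
```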

@@ -357,10 +357,10 @@ In some cases you don't have a start-value and you're able to use the value of t

as the start-value, you can use `reduce`, it works like this:

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #reduce }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #reduce }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports6 #reduce }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #imports6 #reduce }

Same as with `fold`, the execution will be done asynchronously when the last of the `Future` is completed,
you can also parallelize it by chunking your futures into sub-sequences and reduce them, and then reduce the reduced results again.

@@ -371,24 +371,24 @@ Sometimes you just want to listen to a `Future` being completed, and react to th

For this `Future` supports `onComplete`, `onSuccess` and `onFailure`, of which the last two are specializations of the first.

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #onSuccess }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #onSuccess }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #onSuccess }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #onSuccess }

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #onFailure }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #onFailure }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #onFailure }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #onFailure }

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #onComplete }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #onComplete }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #onComplete }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #onComplete }

## Define Ordering

@@ -399,10 +399,10 @@ the specified callback, a `Future` that will have the same result as the `Future

which allows for ordering like in the following sample:

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #and-then }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #and-then }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #and-then }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #and-then }

## Auxiliary Methods

@@ -410,19 +410,19 @@ Java

if the first `Future` fails.

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #fallback-to }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #fallback-to }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #fallback-to }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #fallback-to }

You can also combine two Futures into a new `Future` that will hold a tuple of the two Futures successful results,
using the `zip` operation.

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #zip }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #zip }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #zip }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #zip }

## Exceptions

@@ -435,10 +435,10 @@ It is also possible to handle an `Exception` by returning a different result.

This is done with the `recover` method. For example:

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #recover }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #recover }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #recover }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #recover }

In this example, if the actor replied with an `akka.actor.Status.Failure` containing the `ArithmeticException`,
our `Future` would have a result of 0. The `recover` method works very similarly to the standard try/catch blocks,
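
A sketch of such a recovery, assuming `actor` and `msg1` are in scope together with an implicit `Timeout` and `ExecutionContext`:

```scala
import akka.pattern.ask

val future = ask(actor, msg1).mapTo[Int].recover {
  // substitute 0 whenever the computation failed with an ArithmeticException
  case _: ArithmeticException => 0
}
```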

@@ -449,30 +449,30 @@ You can also use the `recoverWith` method, which has the same relationship to `r

and is used like this:

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #try-recover }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #try-recover }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #try-recover }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #try-recover }

## After

`akka.pattern.after` makes it easy to complete a `Future` with a value or exception after a timeout.

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #after }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #after }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports7 #after }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #imports7 #after }
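
A typical use is racing a slow `Future` against a timeout; a sketch, assuming an `ActorSystem` named `system` and an implicit `ExecutionContext`:

```scala
import akka.pattern.after
import scala.concurrent.Future
import scala.concurrent.duration._

// fails after 200 millis unless `future` wins the race below
val delayed = after(200.millis, using = system.scheduler)(
  Future.failed(new IllegalStateException("timed out")))
val future = Future { Thread.sleep(1000); "foo" }
val result = Future.firstCompletedOf(Seq(future, delayed))
```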

## Retry

@scala[`akka.pattern.retry`]@java[`akka.pattern.PatternsCS.retry`] will retry an operation that produces a @scala[`Future`]@java[`CompletionStage`] some number of times with a delay between each attempt.

Scala
: @@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #retry }
: @@snip [FutureDocSpec.scala](/akka-docs/src/test/scala/docs/future/FutureDocSpec.scala) { #retry }

Java
: @@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports8 #retry }
: @@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #imports8 #retry }
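
A sketch of the Scala variant, where `attempt` is a hypothetical `() => Future[String]` that may fail transiently:

```scala
import akka.pattern.retry
import scala.concurrent.duration._

implicit val scheduler = system.scheduler // schedules the delays
implicit val ec = system.dispatcher

// run `attempt`, retrying up to 3 times with 200 millis between attempts
val retried = retry(() => attempt(), attempts = 3, delay = 200.millis)
```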

@@@ div { .group-java }

@@ -515,7 +515,7 @@ All *async* methods without an explicit Executor are performed using the `ForkJo

When non-async methods are applied on a not yet completed `CompletionStage`, they are completed by
the thread which completes the initial `CompletionStage`:

@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #apply-completion-thread }
@@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #apply-completion-thread }

In this example a Scala `Future` is converted to a `CompletionStage` just like Akka does.
The completion is delayed: we are calling `thenApply` multiple times on a not yet complete `CompletionStage`, then

@@ -530,7 +530,7 @@ default `thenApply` breaks the chain and executes on `ForkJoinPool.commonPool()`

In the next example `thenApply` methods are executed on an already completed `Future`/`CompletionStage`:

@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #apply-main-thread }
@@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #apply-main-thread }

The first `thenApply` is still executed on `ForkJoinPool.commonPool()` (because it is actually `thenApplyAsync`,
which is always executed on the global Java pool).

@@ -546,11 +546,11 @@ and stages are executed on the current thread - the thread which called second a

As mentioned above, default *async* methods are always executed on `ForkJoinPool.commonPool()`:

@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #apply-async-default }
@@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #apply-async-default }

`CompletionStage` also has *async* methods which take an `Executor` as a second parameter, just like `Future`:

@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #apply-async-executor }
@@snip [FutureDocTest.java](/akka-docs/src/test/java/jdocs/future/FutureDocTest.java) { #apply-async-executor }

This example behaves like `Future`: every stage is executed on an explicitly specified `Executor`.

@@ -310,7 +310,7 @@ substitutions.

You may also specify and parse the configuration programmatically in other ways when instantiating
the `ActorSystem`.

@@snip [ConfigDocSpec.scala]($code$/scala/docs/config/ConfigDocSpec.scala) { #imports #custom-config }
@@snip [ConfigDocSpec.scala](/akka-docs/src/test/scala/docs/config/ConfigDocSpec.scala) { #imports #custom-config }
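
A minimal sketch of the programmatic route, with an illustrative setting:

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// parse a config fragment and let the usual application.conf/reference.conf
// stack serve as fallback
val customConf = ConfigFactory.parseString("""
  akka.log-dead-letters = off
""")
val system = ActorSystem("MySystem", ConfigFactory.load(customConf))
```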

## Reading configuration from a custom location

@@ -353,7 +353,7 @@ you could put a config string in code using

You can also combine your custom config with the usual config,
that might look like:

@@snip [ConfigDoc.java]($code$/java/jdocs/config/ConfigDoc.java) { #java-custom-config }
@@snip [ConfigDoc.java](/akka-docs/src/test/java/jdocs/config/ConfigDoc.java) { #java-custom-config }

When working with `Config` objects, keep in mind that there are
three "layers" in the cake:

@@ -388,7 +388,7 @@ things like dispatcher, mailbox, router settings, and remote deployment.

Configuration of these features is described in the chapters detailing the corresponding
topics. An example may look like this:

@@snip [ConfigDocSpec.scala]($code$/scala/docs/config/ConfigDocSpec.scala) { #deployment-section }
@@snip [ConfigDocSpec.scala](/akka-docs/src/test/scala/docs/config/ConfigDocSpec.scala) { #deployment-section }

@@@ note

@@ -422,64 +422,64 @@ Each Akka module has a reference configuration file with the default values.

<a id="config-akka-actor"></a>
### akka-actor

@@snip [reference.conf]($akka$/akka-actor/src/main/resources/reference.conf)
@@snip [reference.conf](/akka-actor/src/main/resources/reference.conf)

<a id="config-akka-agent"></a>
### akka-agent

@@snip [reference.conf]($akka$/akka-agent/src/main/resources/reference.conf)
@@snip [reference.conf](/akka-agent/src/main/resources/reference.conf)

<a id="config-akka-camel"></a>
### akka-camel

@@snip [reference.conf]($akka$/akka-camel/src/main/resources/reference.conf)
@@snip [reference.conf](/akka-camel/src/main/resources/reference.conf)

<a id="config-akka-cluster"></a>
### akka-cluster

@@snip [reference.conf]($akka$/akka-cluster/src/main/resources/reference.conf)
@@snip [reference.conf](/akka-cluster/src/main/resources/reference.conf)

<a id="config-akka-multi-node-testkit"></a>
### akka-multi-node-testkit

@@snip [reference.conf]($akka$/akka-multi-node-testkit/src/main/resources/reference.conf)
@@snip [reference.conf](/akka-multi-node-testkit/src/main/resources/reference.conf)

<a id="config-akka-persistence"></a>
### akka-persistence

@@snip [reference.conf]($akka$/akka-persistence/src/main/resources/reference.conf)
@@snip [reference.conf](/akka-persistence/src/main/resources/reference.conf)

<a id="config-akka-remote"></a>
### akka-remote

@@snip [reference.conf]($akka$/akka-remote/src/main/resources/reference.conf) { #shared #classic type=none }
@@snip [reference.conf](/akka-remote/src/main/resources/reference.conf) { #shared #classic type=none }

<a id="config-akka-remote-artery"></a>
### akka-remote (artery)

@@snip [reference.conf]($akka$/akka-remote/src/main/resources/reference.conf) { #shared #artery type=none }
@@snip [reference.conf](/akka-remote/src/main/resources/reference.conf) { #shared #artery type=none }

<a id="config-akka-testkit"></a>
### akka-testkit

@@snip [reference.conf]($akka$/akka-testkit/src/main/resources/reference.conf)
@@snip [reference.conf](/akka-testkit/src/main/resources/reference.conf)

<a id="config-cluster-metrics"></a>
### akka-cluster-metrics

@@snip [reference.conf]($akka$/akka-cluster-metrics/src/main/resources/reference.conf)
@@snip [reference.conf](/akka-cluster-metrics/src/main/resources/reference.conf)

<a id="config-cluster-tools"></a>
### akka-cluster-tools

@@snip [reference.conf]($akka$/akka-cluster-tools/src/main/resources/reference.conf)
@@snip [reference.conf](/akka-cluster-tools/src/main/resources/reference.conf)

<a id="config-cluster-sharding"></a>
### akka-cluster-sharding

@@snip [reference.conf]($akka$/akka-cluster-sharding/src/main/resources/reference.conf)
@@snip [reference.conf](/akka-cluster-sharding/src/main/resources/reference.conf)

<a id="config-distributed-data"></a>
### akka-distributed-data

@@snip [reference.conf]($akka$/akka-distributed-data/src/main/resources/reference.conf)
@@snip [reference.conf](/akka-distributed-data/src/main/resources/reference.conf)

@@ -67,6 +67,6 @@ Since Akka runs on the JVM there are still some rules to be followed.

* Closing over internal Actor state and exposing it to other threads

@@snip [SharedMutableStateDocSpec.scala]($code$/scala/docs/actor/SharedMutableStateDocSpec.scala) { #mutable-state }
@@snip [SharedMutableStateDocSpec.scala](/akka-docs/src/test/scala/docs/actor/SharedMutableStateDocSpec.scala) { #mutable-state }

* Messages **should** be immutable, this is to avoid the shared mutable state trap.

@@ -1,3 +1,3 @@

# Configuration

@@snip [reference.conf]($akka$/akka-stream/src/main/resources/reference.conf)
@@snip [reference.conf](/akka-stream/src/main/resources/reference.conf)

@@ -210,13 +210,13 @@ to recover before the persistent actor is started.

The following Scala snippet shows how to create a backoff supervisor which will start the given echo actor after it has stopped
because of a failure, in increasing intervals of 3, 6, 12, 24 and finally 30 seconds:

@@snip [BackoffSupervisorDocSpec.scala]($code$/scala/docs/pattern/BackoffSupervisorDocSpec.scala) { #backoff-stop }
@@snip [BackoffSupervisorDocSpec.scala](/akka-docs/src/test/scala/docs/pattern/BackoffSupervisorDocSpec.scala) { #backoff-stop }

The above is equivalent to this Java code:

@@snip [BackoffSupervisorDocTest.java]($code$/java/jdocs/pattern/BackoffSupervisorDocTest.java) { #backoff-imports }
@@snip [BackoffSupervisorDocTest.java](/akka-docs/src/test/java/jdocs/pattern/BackoffSupervisorDocTest.java) { #backoff-imports }

@@snip [BackoffSupervisorDocTest.java]($code$/java/jdocs/pattern/BackoffSupervisorDocTest.java) { #backoff-stop }
@@snip [BackoffSupervisorDocTest.java](/akka-docs/src/test/java/jdocs/pattern/BackoffSupervisorDocTest.java) { #backoff-stop }
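
For orientation, the Scala variant of such a setup looks roughly like this (a sketch, assuming `childProps` holds the echo actor's `Props` and `system` is the `ActorSystem`):

```scala
import akka.pattern.{ Backoff, BackoffSupervisor }
import scala.concurrent.duration._

val supervisor = BackoffSupervisor.props(
  Backoff.onStop(
    childProps,
    childName = "myEcho",
    minBackoff = 3.seconds,  // first restart after 3s, then doubling
    maxBackoff = 30.seconds, // capped at 30s
    randomFactor = 0.2))     // adds up to 20% jitter to each interval

system.actorOf(supervisor, name = "echoSupervisor")
```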

Using a `randomFactor` to add a little bit of additional variance to the backoff intervals
is highly recommended, in order to avoid multiple actors restarting at the exact same point in time,

@@ -231,23 +231,23 @@ crashes and the supervision strategy decides that it should restart.

The following Scala snippet shows how to create a backoff supervisor which will start the given echo actor after it has crashed
because of some exception, in increasing intervals of 3, 6, 12, 24 and finally 30 seconds:

@@snip [BackoffSupervisorDocSpec.scala]($code$/scala/docs/pattern/BackoffSupervisorDocSpec.scala) { #backoff-fail }
@@snip [BackoffSupervisorDocSpec.scala](/akka-docs/src/test/scala/docs/pattern/BackoffSupervisorDocSpec.scala) { #backoff-fail }

The above is equivalent to this Java code:

@@snip [BackoffSupervisorDocTest.java]($code$/java/jdocs/pattern/BackoffSupervisorDocTest.java) { #backoff-imports }
@@snip [BackoffSupervisorDocTest.java](/akka-docs/src/test/java/jdocs/pattern/BackoffSupervisorDocTest.java) { #backoff-imports }

@@snip [BackoffSupervisorDocTest.java]($code$/java/jdocs/pattern/BackoffSupervisorDocTest.java) { #backoff-fail }
@@snip [BackoffSupervisorDocTest.java](/akka-docs/src/test/java/jdocs/pattern/BackoffSupervisorDocTest.java) { #backoff-fail }

The `akka.pattern.BackoffOptions` can be used to customize the behavior of the back-off supervisor actor; below are some examples:

@@snip [BackoffSupervisorDocSpec.scala]($code$/scala/docs/pattern/BackoffSupervisorDocSpec.scala) { #backoff-custom-stop }
@@snip [BackoffSupervisorDocSpec.scala](/akka-docs/src/test/scala/docs/pattern/BackoffSupervisorDocSpec.scala) { #backoff-custom-stop }

The above code sets up a back-off supervisor that requires the child actor to send an `akka.pattern.BackoffSupervisor.Reset` message
to its parent when a message is successfully processed, resetting the back-off. It also uses a default stopping strategy; any exception
will cause the child to stop.

@@snip [BackoffSupervisorDocSpec.scala]($code$/scala/docs/pattern/BackoffSupervisorDocSpec.scala) { #backoff-custom-fail }
@@snip [BackoffSupervisorDocSpec.scala](/akka-docs/src/test/scala/docs/pattern/BackoffSupervisorDocSpec.scala) { #backoff-custom-fail }

The above code sets up a back-off supervisor that restarts the child after back-off if `MyException` is thrown; any other exception will be
escalated. The back-off is automatically reset if the child does not throw any errors within 10 seconds.

@ -40,10 +40,10 @@ The easiest way to see the actor hierarchy in action is to print `ActorRef` inst

In your Hello World project, navigate to the `com.lightbend.akka.sample` package and create a new @scala[Scala file called `ActorHierarchyExperiments.scala`]@java[Java file called `ActorHierarchyExperiments.java`] here. Copy and paste the code from the snippet below to this new source file. Save your file and run `sbt "runMain com.lightbend.akka.sample.ActorHierarchyExperiments"` to observe the output.

Scala
: @@snip [ActorHierarchyExperiments.scala]($code$/scala/tutorial_1/ActorHierarchyExperiments.scala) { #print-refs }
: @@snip [ActorHierarchyExperiments.scala](/akka-docs/src/test/scala/tutorial_1/ActorHierarchyExperiments.scala) { #print-refs }

Java
: @@snip [ActorHierarchyExperiments.java]($code$/java/jdocs/tutorial_1/ActorHierarchyExperiments.java) { #print-refs }
: @@snip [ActorHierarchyExperiments.java](/akka-docs/src/test/java/jdocs/tutorial_1/ActorHierarchyExperiments.java) { #print-refs }
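
If you are not following along in the sample project, here is a condensed Scala sketch of the same experiment (the class and actor names follow the tutorial):

```scala
import akka.actor.{ Actor, ActorSystem, Props }

class PrintMyActorRefActor extends Actor {
  override def receive: Receive = {
    case "printit" =>
      val secondRef = context.actorOf(Props.empty, "second-actor")
      println(s"Second: $secondRef")
  }
}

object ActorHierarchyExperiments extends App {
  val testSystem = ActorSystem("testSystem")
  val firstRef = testSystem.actorOf(Props[PrintMyActorRefActor], "first-actor")
  println(s"First: $firstRef")
  firstRef ! "printit"
}
```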

Note the way we used a message to ask the first actor to do its work. We sent the message by using the parent's reference: @scala[`firstRef ! "printit"`]@java[`firstRef.tell("printit", ActorRef.noSender())`]. When the code executes, the output includes the references for the first actor and the child it created as part of the `printit` case. Your output should look similar to the following:

@ -79,18 +79,18 @@ The Akka actor API exposes many lifecycle hooks that you can override in an acto

Let's use the `preStart()` and `postStop()` lifecycle hooks in a simple experiment to observe the behavior when we stop an actor. First, add the following two actor classes to your project:

Scala
: @@snip [ActorHierarchyExperiments.scala]($code$/scala/tutorial_1/ActorHierarchyExperiments.scala) { #start-stop }
: @@snip [ActorHierarchyExperiments.scala](/akka-docs/src/test/scala/tutorial_1/ActorHierarchyExperiments.scala) { #start-stop }

Java
: @@snip [ActorHierarchyExperiments.java]($code$/java/jdocs/tutorial_1/ActorHierarchyExperiments.java) { #start-stop }
: @@snip [ActorHierarchyExperiments.java](/akka-docs/src/test/java/jdocs/tutorial_1/ActorHierarchyExperiments.java) { #start-stop }
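
In outline, the two actor classes amount to the following Scala sketch: the first actor creates the second in `preStart()`, so stopping the first also stops the second:

```scala
import akka.actor.{ Actor, Props }

class StartStopActor1 extends Actor {
  override def preStart(): Unit = {
    println("first started")
    context.actorOf(Props[StartStopActor2], "second")
  }
  override def postStop(): Unit = println("first stopped")

  override def receive: Receive = {
    case "stop" => context.stop(self)
  }
}

class StartStopActor2 extends Actor {
  override def preStart(): Unit = println("second started")
  override def postStop(): Unit = println("second stopped")

  // Actor.emptyBehavior is a Receive that matches no messages
  override def receive: Receive = Actor.emptyBehavior
}
```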

And create a 'main' class like above to start the actors and then send them a `"stop"` message:

Scala
: @@snip [ActorHierarchyExperiments.scala]($code$/scala/tutorial_1/ActorHierarchyExperiments.scala) { #start-stop-main }
: @@snip [ActorHierarchyExperiments.scala](/akka-docs/src/test/scala/tutorial_1/ActorHierarchyExperiments.scala) { #start-stop-main }

Java
: @@snip [ActorHierarchyExperiments.java]($code$/java/jdocs/tutorial_1/ActorHierarchyExperiments.java) { #start-stop-main }
: @@snip [ActorHierarchyExperiments.java](/akka-docs/src/test/java/jdocs/tutorial_1/ActorHierarchyExperiments.java) { #start-stop-main }

You can again use `sbt` to start this program. The output should look like this:

@ -115,18 +115,18 @@ stop and restart the child. If you don't change the default strategy all failure

Let's observe the default strategy in a simple experiment. Add the following classes to your project, just as you did with the previous ones:

Scala
: @@snip [ActorHierarchyExperiments.scala]($code$/scala/tutorial_1/ActorHierarchyExperiments.scala) { #supervise }
: @@snip [ActorHierarchyExperiments.scala](/akka-docs/src/test/scala/tutorial_1/ActorHierarchyExperiments.scala) { #supervise }

Java
: @@snip [ActorHierarchyExperiments.java]($code$/java/jdocs/tutorial_1/ActorHierarchyExperiments.java) { #supervise }
: @@snip [ActorHierarchyExperiments.java](/akka-docs/src/test/java/jdocs/tutorial_1/ActorHierarchyExperiments.java) { #supervise }
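
Roughly, the experiment consists of a supervisor that forwards a failure trigger to its child, and a child that throws; a Scala sketch:

```scala
import akka.actor.{ Actor, Props }

class SupervisingActor extends Actor {
  val child = context.actorOf(Props[SupervisedActor], "supervised-actor")

  override def receive: Receive = {
    case "failChild" => child ! "fail"
  }
}

class SupervisedActor extends Actor {
  override def preStart(): Unit = println("supervised actor started")
  override def postStop(): Unit = println("supervised actor stopped")

  override def receive: Receive = {
    case "fail" =>
      println("supervised actor fails now")
      throw new Exception("I failed!") // the default strategy restarts the child
  }
}
```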

And run with:

Scala
: @@snip [ActorHierarchyExperiments.scala]($code$/scala/tutorial_1/ActorHierarchyExperiments.scala) { #supervise-main }
: @@snip [ActorHierarchyExperiments.scala](/akka-docs/src/test/scala/tutorial_1/ActorHierarchyExperiments.scala) { #supervise-main }

Java
: @@snip [ActorHierarchyExperiments.java]($code$/java/jdocs/tutorial_1/ActorHierarchyExperiments.java) { #supervise-main }
: @@snip [ActorHierarchyExperiments.java](/akka-docs/src/test/java/jdocs/tutorial_1/ActorHierarchyExperiments.java) { #supervise-main }

You should see output similar to the following:

@ -24,10 +24,10 @@ We can define the first actor, the IotSupervisor, with a few simple lines of cod

1. Paste the following code into the new file to define the IotSupervisor.

Scala
: @@snip [IotSupervisor.scala]($code$/scala/tutorial_2/IotSupervisor.scala) { #iot-supervisor }
: @@snip [IotSupervisor.scala](/akka-docs/src/test/scala/tutorial_2/IotSupervisor.scala) { #iot-supervisor }

Java
: @@snip [IotSupervisor.java]($code$/java/jdocs/tutorial_2/IotSupervisor.java) { #iot-supervisor }
: @@snip [IotSupervisor.java](/akka-docs/src/test/java/jdocs/tutorial_2/IotSupervisor.java) { #iot-supervisor }
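
For orientation, the Scala version of such a supervisor can look like this (a sketch mirroring the referenced snippet):

```scala
import akka.actor.{ Actor, ActorLogging, Props }

object IotSupervisor {
  def props(): Props = Props(new IotSupervisor)
}

class IotSupervisor extends Actor with ActorLogging {
  override def preStart(): Unit = log.info("IoT Application started")
  override def postStop(): Unit = log.info("IoT Application stopped")

  // No need to handle any messages yet
  override def receive: Receive = Actor.emptyBehavior
}
```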

The code is similar to the actor examples we used in the previous experiments, but notice:

@ -37,10 +37,10 @@ The code is similar to the actor examples we used in the previous experiments, b

To provide the `main` entry point that creates the actor system, add the following code to the new @scala[`IotApp` object] @java[`IotMain` class].

Scala
: @@snip [IotApp.scala]($code$/scala/tutorial_2/IotApp.scala) { #iot-app }
: @@snip [IotApp.scala](/akka-docs/src/test/scala/tutorial_2/IotApp.scala) { #iot-app }

Java
: @@snip [IotMain.java]($code$/java/jdocs/tutorial_2/IotMain.java) { #iot-app }
: @@snip [IotMain.java](/akka-docs/src/test/java/jdocs/tutorial_2/IotMain.java) { #iot-app }
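
The entry point boils down to creating the actor system, starting the supervisor, and terminating the system when ENTER is pressed; a Scala sketch:

```scala
import akka.actor.ActorSystem
import scala.io.StdIn

object IotApp {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem("iot-system")
    try {
      // Create the top-level supervisor
      val supervisor = system.actorOf(IotSupervisor.props(), "iot-supervisor")
      // Exit the system after ENTER is pressed
      StdIn.readLine()
    } finally {
      system.terminate()
    }
  }
}
```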

The application does little, other than print out that it is started. But we have the first actor in place and we are ready to add other actors.

@ -37,10 +37,10 @@ The protocol for obtaining the current temperature from the device actor is simp

We need two messages, one for the request, and one for the reply. Our first attempt might look like the following:

Scala
: @@snip [DeviceInProgress.scala]($code$/scala/tutorial_3/DeviceInProgress.scala) { #read-protocol-1 }
: @@snip [DeviceInProgress.scala](/akka-docs/src/test/scala/tutorial_3/DeviceInProgress.scala) { #read-protocol-1 }

Java
: @@snip [DeviceInProgress.java]($code$/java/jdocs/tutorial_3/DeviceInProgress.java) { #read-protocol-1 }
: @@snip [DeviceInProgress.java](/akka-docs/src/test/java/jdocs/tutorial_3/DeviceInProgress.java) { #read-protocol-1 }

These two messages seem to cover the required functionality. However, the approach we choose must take into account the distributed nature of the application. While the basic mechanism is the same for communicating with an actor on the local JVM as with a remote actor, we need to keep the following in mind:

@ -123,20 +123,20 @@ For the full details on delivery guarantees please refer to the @ref:[reference

Our first query protocol was correct, but did not take into account distributed application execution. If we want to implement resends in the actor that queries a device actor (because of timed out requests), or if we want to query multiple actors, we need to be able to correlate requests and responses. Hence, we add one more field to our messages, so that an ID can be provided by the requester (we will add this code to our app in a later step):

Scala
: @@snip [DeviceInProgress.scala]($code$/scala/tutorial_3/DeviceInProgress.scala) { #read-protocol-2 }
: @@snip [DeviceInProgress.scala](/akka-docs/src/test/scala/tutorial_3/DeviceInProgress.scala) { #read-protocol-2 }

Java
: @@snip [DeviceInProgress2.java]($code$/java/jdocs/tutorial_3/inprogress2/DeviceInProgress2.java) { #read-protocol-2 }
: @@snip [DeviceInProgress2.java](/akka-docs/src/test/java/jdocs/tutorial_3/inprogress2/DeviceInProgress2.java) { #read-protocol-2 }
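
Side by side, the two protocol versions differ only in the `requestId` field; a Scala sketch (wrapped in objects here only so that both versions can coexist):

```scala
object FirstAttempt {
  case object ReadTemperature
  final case class RespondTemperature(value: Option[Double])
}

object WithRequestId {
  // The requester picks the ID, so it can correlate replies and implement resends
  final case class ReadTemperature(requestId: Long)
  final case class RespondTemperature(requestId: Long, value: Option[Double])
}
```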

## Defining the device actor and its read protocol

As we learned in the Hello World example, each actor defines the type of messages it will accept. Our device actor has the responsibility to use the same ID parameter for the response of a given query, which would make it look like the following.

Scala
: @@snip [DeviceInProgress.scala]($code$/scala/tutorial_3/DeviceInProgress.scala) { #device-with-read }
: @@snip [DeviceInProgress.scala](/akka-docs/src/test/scala/tutorial_3/DeviceInProgress.scala) { #device-with-read }

Java
: @@snip [DeviceInProgress2.java]($code$/java/jdocs/tutorial_3/inprogress2/DeviceInProgress2.java) { #device-with-read }
: @@snip [DeviceInProgress2.java](/akka-docs/src/test/java/jdocs/tutorial_3/inprogress2/DeviceInProgress2.java) { #device-with-read }

Note in the code that:

@ -152,10 +152,10 @@ Based on the simple actor above, we could write a simple test. In the `com.light

You can run this test @java[by running `mvn test` or] by running `test` at the sbt prompt.

Scala
: @@snip [DeviceSpec.scala]($code$/scala/tutorial_3/DeviceSpec.scala) { #device-read-test }
: @@snip [DeviceSpec.scala](/akka-docs/src/test/scala/tutorial_3/DeviceSpec.scala) { #device-read-test }

Java
: @@snip [DeviceTest.java]($code$/java/jdocs/tutorial_3/DeviceTest.java) { #device-read-test }
: @@snip [DeviceTest.java](/akka-docs/src/test/java/jdocs/tutorial_3/DeviceTest.java) { #device-read-test }
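
Using akka-testkit with ScalaTest, the core of such a test can be sketched as follows; it assumes the `Device` actor and its messages from the snippet above:

```scala
import akka.actor.ActorSystem
import akka.testkit.{ TestKit, TestProbe }
import org.scalatest.{ BeforeAndAfterAll, Matchers, WordSpecLike }

class DeviceSpec extends TestKit(ActorSystem("DeviceSpec"))
    with WordSpecLike with Matchers with BeforeAndAfterAll {

  override def afterAll(): Unit = TestKit.shutdownActorSystem(system)

  "A Device actor" should {
    "reply with an empty reading if no temperature is known" in {
      val probe = TestProbe()
      val deviceActor = system.actorOf(Device.props("group", "device"))

      deviceActor.tell(Device.ReadTemperature(requestId = 42), probe.ref)
      val response = probe.expectMsgType[Device.RespondTemperature]
      response.requestId should ===(42L)
      response.value should ===(None)
    }
  }
}
```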

Now, the actor needs a way to change the state of the temperature when it receives a message from the sensor.

@ -164,10 +164,10 @@ Now, the actor needs a way to change the state of the temperature when it receiv

The purpose of the write protocol is to update the `currentTemperature` field when the actor receives a message that contains the temperature. Again, it is tempting to define the write protocol as a very simple message, something like this:

Scala
: @@snip [DeviceInProgress.scala]($code$/scala/tutorial_3/DeviceInProgress.scala) { #write-protocol-1 }
: @@snip [DeviceInProgress.scala](/akka-docs/src/test/scala/tutorial_3/DeviceInProgress.scala) { #write-protocol-1 }

Java
: @@snip [DeviceInProgress3.java]($code$/java/jdocs/tutorial_3/DeviceInProgress3.java) { #write-protocol-1 }
: @@snip [DeviceInProgress3.java](/akka-docs/src/test/java/jdocs/tutorial_3/DeviceInProgress3.java) { #write-protocol-1 }

However, this approach does not take into account that the sender of the record temperature message can never be sure if the message was processed or not. We have seen that Akka does not guarantee delivery of these messages and leaves it to the application to provide success notifications. In our case, we would like to send an acknowledgment to the sender once we have updated our last temperature recording, e.g. @scala[`final case class TemperatureRecorded(requestId: Long)`]@java[`TemperatureRecorded`].
Just like in the case of temperature queries and responses, it is a good idea to include an ID field to provide maximum flexibility.
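
With an acknowledgment and an ID field added, the write protocol becomes (Scala sketch):

```scala
final case class RecordTemperature(requestId: Long, value: Double)
final case class TemperatureRecorded(requestId: Long)
```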

@ -177,18 +177,18 @@ Just like in the case of temperature queries and responses, it is a good idea to

Putting the read and write protocol together, the device actor looks like the following example:

Scala
: @@snip [Device.scala]($code$/scala/tutorial_3/Device.scala) { #full-device }
: @@snip [Device.scala](/akka-docs/src/test/scala/tutorial_3/Device.scala) { #full-device }

Java
: @@snip [Device.java]($code$/java/jdocs/tutorial_3/Device.java) { #full-device }
: @@snip [Device.java](/akka-docs/src/test/java/jdocs/tutorial_3/Device.java) { #full-device }
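
The heart of the combined actor is a `receive` that handles both protocols; a Scala sketch, assuming the message types above live in a `Device` companion object:

```scala
import akka.actor.{ Actor, ActorLogging }

class Device(groupId: String, deviceId: String) extends Actor with ActorLogging {
  import Device._

  var lastTemperatureReading: Option[Double] = None

  override def receive: Receive = {
    case RecordTemperature(id, value) =>
      log.info("Recorded temperature reading {} with {}", value, id)
      lastTemperatureReading = Some(value)
      sender() ! TemperatureRecorded(id) // acknowledge the write
    case ReadTemperature(id) =>
      sender() ! RespondTemperature(id, lastTemperatureReading)
  }
}
```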

We should also write a new test case now, exercising both the read/query and write/record functionality together:

Scala:
: @@snip [DeviceSpec.scala]($code$/scala/tutorial_3/DeviceSpec.scala) { #device-write-read-test }
: @@snip [DeviceSpec.scala](/akka-docs/src/test/scala/tutorial_3/DeviceSpec.scala) { #device-write-read-test }

Java:
: @@snip [DeviceTest.java]($code$/java/jdocs/tutorial_3/DeviceTest.java) { #device-write-read-test }
: @@snip [DeviceTest.java](/akka-docs/src/test/java/jdocs/tutorial_3/DeviceTest.java) { #device-write-read-test }

## What's Next?

@ -78,10 +78,10 @@ The messages that we will use to communicate registration requests and

their acknowledgement have a simple definition:

Scala
: @@snip [DeviceManager.scala]($code$/scala/tutorial_4/DeviceManager.scala) { #device-manager-msgs }
: @@snip [DeviceManager.scala](/akka-docs/src/test/scala/tutorial_4/DeviceManager.scala) { #device-manager-msgs }

Java
: @@snip [DeviceManager.java]($code$/java/jdocs/tutorial_4/DeviceManager.java) { #device-manager-msgs }
: @@snip [DeviceManager.java](/akka-docs/src/test/java/jdocs/tutorial_4/DeviceManager.java) { #device-manager-msgs }

In this case we have not included a request ID field in the messages. Since registration happens once, when the component connects the system to some network protocol, the ID is not important. However, it is usually a best practice to include a request ID.

@ -97,10 +97,10 @@ message is preserved in the upper layers.* We will show you in the next section

The device actor registration code looks like the following. Modify your example to match.

Scala
: @@snip [Device.scala]($code$/scala/tutorial_4/Device.scala) { #device-with-register }
: @@snip [Device.scala](/akka-docs/src/test/scala/tutorial_4/Device.scala) { #device-with-register }

Java
: @@snip [Device.java]($code$/java/jdocs/tutorial_4/Device.java) { #device-with-register }
: @@snip [Device.java](/akka-docs/src/test/java/jdocs/tutorial_4/Device.java) { #device-with-register }
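
The interesting part is matching on the actor's own IDs; in the Scala version the backticks match the incoming values against the existing `groupId` and `deviceId` instead of binding new variables. A self-contained sketch of just the registration handling:

```scala
import akka.actor.{ Actor, ActorLogging }

object DeviceManager {
  final case class RequestTrackDevice(groupId: String, deviceId: String)
  case object DeviceRegistered
}

class Device(groupId: String, deviceId: String) extends Actor with ActorLogging {
  import DeviceManager._

  override def receive: Receive = {
    // Backticks: match against this actor's own IDs rather than binding new names
    case RequestTrackDevice(`groupId`, `deviceId`) =>
      sender() ! DeviceRegistered
    case RequestTrackDevice(gId, dId) =>
      log.warning(
        "Ignoring TrackDevice request for {}-{}. This actor is responsible for {}-{}.",
        gId, dId, groupId, deviceId)
  }
}
```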

@@@ note { .group-scala }

@ -111,10 +111,10 @@ We used a feature of scala pattern matching where we can check to see if a certa

We can now write two new test cases, one exercising successful registration, the other testing the case when IDs don't match:

Scala
: @@snip [DeviceSpec.scala]($code$/scala/tutorial_4/DeviceSpec.scala) { #device-registration-tests }
: @@snip [DeviceSpec.scala](/akka-docs/src/test/scala/tutorial_4/DeviceSpec.scala) { #device-registration-tests }

Java
: @@snip [DeviceTest.java]($code$/java/jdocs/tutorial_4/DeviceTest.java) { #device-registration-tests }
: @@snip [DeviceTest.java](/akka-docs/src/test/java/jdocs/tutorial_4/DeviceTest.java) { #device-registration-tests }

@@@ note

@ -138,27 +138,27 @@ We also want to keep the ID of the original sender of the request so that our de

sender while @scala[`!`] @java[`tell`] sets the sender to be the current actor. Just like with our device actor, we ensure that we don't respond to wrong group IDs. Add the following to your source file:

Scala
: @@snip [DeviceGroup.scala]($code$/scala/tutorial_4/DeviceGroup.scala) { #device-group-register }
: @@snip [DeviceGroup.scala](/akka-docs/src/test/scala/tutorial_4/DeviceGroup.scala) { #device-group-register }

Java
: @@snip [DeviceGroup.java]($code$/java/jdocs/tutorial_4/DeviceGroup.java) { #device-group-register }
: @@snip [DeviceGroup.java](/akka-docs/src/test/java/jdocs/tutorial_4/DeviceGroup.java) { #device-group-register }
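
In sketch form, the lookup-or-create logic of the group actor reads roughly like this (a fragment of its `receive`; `deviceIdToActor` is assumed to be a field of the actor):

```scala
var deviceIdToActor = Map.empty[String, ActorRef]

override def receive: Receive = {
  case trackMsg @ RequestTrackDevice(`groupId`, _) =>
    deviceIdToActor.get(trackMsg.deviceId) match {
      case Some(deviceActor) =>
        deviceActor forward trackMsg // forward keeps the original sender
      case None =>
        log.info("Creating device actor for {}", trackMsg.deviceId)
        val deviceActor = context.actorOf(
          Device.props(groupId, trackMsg.deviceId), s"device-${trackMsg.deviceId}")
        deviceIdToActor += trackMsg.deviceId -> deviceActor
        deviceActor forward trackMsg
    }

  case RequestTrackDevice(gId, _) =>
    log.warning("Ignoring TrackDevice request for {}. This actor is responsible for {}.",
      gId, groupId)
}
```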

Just as we did with the device, we test this new functionality. We also test that the actors returned for the two different IDs are actually different, and we attempt to record a temperature reading for each of the devices to see if the actors are responding.

Scala
: @@snip [DeviceGroupSpec.scala]($code$/scala/tutorial_4/DeviceGroupSpec.scala) { #device-group-test-registration }
: @@snip [DeviceGroupSpec.scala](/akka-docs/src/test/scala/tutorial_4/DeviceGroupSpec.scala) { #device-group-test-registration }

Java
: @@snip [DeviceGroupTest.java]($code$/java/jdocs/tutorial_4/DeviceGroupTest.java) { #device-group-test-registration }
: @@snip [DeviceGroupTest.java](/akka-docs/src/test/java/jdocs/tutorial_4/DeviceGroupTest.java) { #device-group-test-registration }

If a device actor already exists for the registration request, we would like to use
the existing actor instead of a new one. We have not tested this yet, so we need to fix this:

Scala
: @@snip [DeviceGroupSpec.scala]($code$/scala/tutorial_4/DeviceGroupSpec.scala) { #device-group-test3 }
: @@snip [DeviceGroupSpec.scala](/akka-docs/src/test/scala/tutorial_4/DeviceGroupSpec.scala) { #device-group-test3 }

Java
: @@snip [DeviceGroupTest.java]($code$/java/jdocs/tutorial_4/DeviceGroupTest.java) { #device-group-test3 }
: @@snip [DeviceGroupTest.java](/akka-docs/src/test/java/jdocs/tutorial_4/DeviceGroupTest.java) { #device-group-test3 }

### Keeping track of the device actors in the group

@ -177,19 +177,19 @@ Unfortunately, the `Terminated` message only contains the `ActorRef` of the chil

Adding the functionality to identify the actor results in this:

Scala
: @@snip [DeviceGroup.scala]($code$/scala/tutorial_4/DeviceGroup.scala) { #device-group-remove }
: @@snip [DeviceGroup.scala](/akka-docs/src/test/scala/tutorial_4/DeviceGroup.scala) { #device-group-remove }

Java
: @@snip [DeviceGroup.java]($code$/java/jdocs/tutorial_4/DeviceGroup.java) { #device-group-remove }
: @@snip [DeviceGroup.java](/akka-docs/src/test/java/jdocs/tutorial_4/DeviceGroup.java) { #device-group-remove }
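
The essential case is small; a fragment, assuming the actor keeps two maps, `deviceIdToActor: Map[String, ActorRef]` and the reverse `actorToDeviceId: Map[ActorRef, String]`:

```scala
case Terminated(deviceActor) =>
  val deviceId = actorToDeviceId(deviceActor)
  log.info("Device actor for {} has been terminated", deviceId)
  actorToDeviceId -= deviceActor
  deviceIdToActor -= deviceId
```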

So far we have no means to find out which devices the group device actor keeps track of and, therefore, we cannot test our new functionality yet. To make it testable, we add a new query capability (message @scala[`RequestDeviceList(requestId: Long)`] @java[`RequestDeviceList`]) that lists the currently active
device IDs:

Scala
: @@snip [DeviceGroup.scala]($code$/scala/tutorial_4/DeviceGroup.scala) { #device-group-full }
: @@snip [DeviceGroup.scala](/akka-docs/src/test/scala/tutorial_4/DeviceGroup.scala) { #device-group-full }

Java
: @@snip [DeviceGroup.java]($code$/java/jdocs/tutorial_4/DeviceGroup.java) { #device-group-full }
: @@snip [DeviceGroup.java](/akka-docs/src/test/java/jdocs/tutorial_4/DeviceGroup.java) { #device-group-full }

We are almost ready to test the removal of devices. But we still need the following capabilities:

@ -201,20 +201,20 @@ We are almost ready to test the removal of devices. But, we still need the follo

We add two more test cases now. In the first, we test that we get back the list of proper IDs once we have added a few devices. The second test case makes sure that the device ID is properly removed after the device actor has been stopped:

Scala
: @@snip [DeviceGroupSpec.scala]($code$/scala/tutorial_4/DeviceGroupSpec.scala) { #device-group-list-terminate-test }
: @@snip [DeviceGroupSpec.scala](/akka-docs/src/test/scala/tutorial_4/DeviceGroupSpec.scala) { #device-group-list-terminate-test }

Java
: @@snip [DeviceGroupTest.java]($code$/java/jdocs/tutorial_4/DeviceGroupTest.java) { #device-group-list-terminate-test }
: @@snip [DeviceGroupTest.java](/akka-docs/src/test/java/jdocs/tutorial_4/DeviceGroupTest.java) { #device-group-list-terminate-test }

## Creating device manager actors

Going up to the next level in our hierarchy, we need to create the entry point for our device manager component in the `DeviceManager` source file. This actor is very similar to the device group actor, but creates device group actors instead of device actors:

Scala
: @@snip [DeviceManager.scala]($code$/scala/tutorial_4/DeviceManager.scala) { #device-manager-full }
: @@snip [DeviceManager.scala](/akka-docs/src/test/scala/tutorial_4/DeviceManager.scala) { #device-manager-full }

Java
: @@snip [DeviceManager.java]($code$/java/jdocs/tutorial_4/DeviceManager.java) { #device-manager-full }
: @@snip [DeviceManager.java](/akka-docs/src/test/java/jdocs/tutorial_4/DeviceManager.java) { #device-manager-full }

We leave tests of the device manager as an exercise for you since it is very similar to the tests we have already written for the group
actor.

@ -48,10 +48,10 @@ for each device actor, with respect to a temperature query:

Summarizing these in message types, we can add the following to `DeviceGroup`:

Scala
: @@snip [DeviceGroup.scala]($code$/scala/tutorial_5/DeviceGroup.scala) { #query-protocol }
: @@snip [DeviceGroup.scala](/akka-docs/src/test/scala/tutorial_5/DeviceGroup.scala) { #query-protocol }

Java
: @@snip [DeviceGroup.java]($code$/java/jdocs/tutorial_5/DeviceGroup.java) { #query-protocol }
: @@snip [DeviceGroup.java](/akka-docs/src/test/java/jdocs/tutorial_5/DeviceGroup.java) { #query-protocol }

## Implementing the query

@ -89,10 +89,10 @@ until the timeout to mark these as not available.

Putting this together, the outline of our `DeviceGroupQuery` actor looks like this:

Scala
: @@snip [DeviceGroupQuery.scala]($code$/scala/tutorial_5/DeviceGroupQuery.scala) { #query-outline }
: @@snip [DeviceGroupQuery.scala](/akka-docs/src/test/scala/tutorial_5/DeviceGroupQuery.scala) { #query-outline }

Java
: @@snip [DeviceGroupQuery.java]($code$/java/jdocs/tutorial_5/DeviceGroupQuery.java) { #query-outline }
: @@snip [DeviceGroupQuery.java](/akka-docs/src/test/java/jdocs/tutorial_5/DeviceGroupQuery.java) { #query-outline }

#### Tracking actor state

@ -123,10 +123,10 @@ To accomplish this, add the following to your `DeviceGroupQuery` source file:

Scala
: @@snip [DeviceGroupQuery.scala]($code$/scala/tutorial_5/DeviceGroupQuery.scala) { #query-state }
: @@snip [DeviceGroupQuery.scala](/akka-docs/src/test/scala/tutorial_5/DeviceGroupQuery.scala) { #query-state }

Java
: @@snip [DeviceGroupQuery.java]($code$/java/jdocs/tutorial_5/DeviceGroupQuery.java) { #query-state }
: @@snip [DeviceGroupQuery.java](/akka-docs/src/test/java/jdocs/tutorial_5/DeviceGroupQuery.java) { #query-state }

It is not yet clear how we will "mutate" the `repliesSoFar` and `stillWaiting` data structures. One important thing to note is that the function `waitingForReplies` **does not handle the messages directly. It returns a `Receive` function that will handle the messages**. This means that if we call `waitingForReplies` again, with different parameters,
then it returns a brand new `Receive` that will use those new parameters.
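
To see the pattern in isolation, here is a tiny, unrelated Scala sketch of an actor whose state lives entirely in the parameters of the function that produced the current `Receive`:

```scala
import akka.actor.Actor

class CountingActor extends Actor {
  override def receive: Receive = counting(0)

  // Each call returns a fresh Receive that closes over the new count
  def counting(count: Int): Receive = {
    case "inc" => context.become(counting(count + 1))
    case "get" => sender() ! count
  }
}
```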

@ -153,10 +153,10 @@ only the first call will have any effect, the rest is ignored.

With all this knowledge, we can create the `receivedResponse` method:

Scala
: @@snip [DeviceGroupQuery.scala]($code$/scala/tutorial_5/DeviceGroupQuery.scala) { #query-collect-reply }
: @@snip [DeviceGroupQuery.scala](/akka-docs/src/test/scala/tutorial_5/DeviceGroupQuery.scala) { #query-collect-reply }

Java
: @@snip [DeviceGroupQuery.java]($code$/java/jdocs/tutorial_5/DeviceGroupQuery.java) { #query-collect-reply }
: @@snip [DeviceGroupQuery.java](/akka-docs/src/test/java/jdocs/tutorial_5/DeviceGroupQuery.java) { #query-collect-reply }

It is quite natural to ask at this point: what have we gained by using the `context.become()` trick instead of
making the `repliesSoFar` and `stillWaiting` structures mutable fields of the actor (i.e. `var`s)? In this

@ -171,10 +171,10 @@ with the solution we have used here as it helps structuring more complex actor c

Our query actor is now done:

Scala
: @@snip [DeviceGroupQuery.scala]($code$/scala/tutorial_5/DeviceGroupQuery.scala) { #query-full }
: @@snip [DeviceGroupQuery.scala](/akka-docs/src/test/scala/tutorial_5/DeviceGroupQuery.scala) { #query-full }

Java
: @@snip [DeviceGroupQuery.java]($code$/java/jdocs/tutorial_5/DeviceGroupQuery.java) { #query-full }
: @@snip [DeviceGroupQuery.java](/akka-docs/src/test/java/jdocs/tutorial_5/DeviceGroupQuery.java) { #query-full }

### Testing the query actor

@ -185,46 +185,46 @@ to the query actor, so we can pass in @scala[`TestProbe`] @java[`TestKit`] refer

there are two devices and both report a temperature:

Scala
: @@snip [DeviceGroupQuerySpec.scala]($code$/scala/tutorial_5/DeviceGroupQuerySpec.scala) { #query-test-normal }
: @@snip [DeviceGroupQuerySpec.scala](/akka-docs/src/test/scala/tutorial_5/DeviceGroupQuerySpec.scala) { #query-test-normal }

Java
: @@snip [DeviceGroupQueryTest.java]($code$/java/jdocs/tutorial_5/DeviceGroupQueryTest.java) { #query-test-normal }
: @@snip [DeviceGroupQueryTest.java](/akka-docs/src/test/java/jdocs/tutorial_5/DeviceGroupQueryTest.java) { #query-test-normal }

That was the happy case, but we know that sometimes devices cannot provide a temperature measurement. This
scenario is just slightly different from the previous one:

Scala
: @@snip [DeviceGroupQuerySpec.scala]($code$/scala/tutorial_5/DeviceGroupQuerySpec.scala) { #query-test-no-reading }
: @@snip [DeviceGroupQuerySpec.scala](/akka-docs/src/test/scala/tutorial_5/DeviceGroupQuerySpec.scala) { #query-test-no-reading }

Java
: @@snip [DeviceGroupQueryTest.java]($code$/java/jdocs/tutorial_5/DeviceGroupQueryTest.java) { #query-test-no-reading }
: @@snip [DeviceGroupQueryTest.java](/akka-docs/src/test/java/jdocs/tutorial_5/DeviceGroupQueryTest.java) { #query-test-no-reading }

We also know that sometimes device actors stop before answering:

Scala
: @@snip [DeviceGroupQuerySpec.scala]($code$/scala/tutorial_5/DeviceGroupQuerySpec.scala) { #query-test-stopped }
: @@snip [DeviceGroupQuerySpec.scala](/akka-docs/src/test/scala/tutorial_5/DeviceGroupQuerySpec.scala) { #query-test-stopped }

Java
: @@snip [DeviceGroupQueryTest.java]($code$/java/jdocs/tutorial_5/DeviceGroupQueryTest.java) { #query-test-stopped }
: @@snip [DeviceGroupQueryTest.java](/akka-docs/src/test/java/jdocs/tutorial_5/DeviceGroupQueryTest.java) { #query-test-stopped }

If you remember, there is another case related to device actors stopping. It is possible that we get a normal reply
from a device actor, but then receive a `Terminated` for the same actor later. In this case, we would like to keep
the first reply and not mark the device as `DeviceNotAvailable`. We should test this, too:

Scala
: @@snip [DeviceGroupQuerySpec.scala]($code$/scala/tutorial_5/DeviceGroupQuerySpec.scala) { #query-test-stopped-later }
: @@snip [DeviceGroupQuerySpec.scala](/akka-docs/src/test/scala/tutorial_5/DeviceGroupQuerySpec.scala) { #query-test-stopped-later }

Java
: @@snip [DeviceGroupQueryTest.java]($code$/java/jdocs/tutorial_5/DeviceGroupQueryTest.java) { #query-test-stopped-later }
: @@snip [DeviceGroupQueryTest.java](/akka-docs/src/test/java/jdocs/tutorial_5/DeviceGroupQueryTest.java) { #query-test-stopped-later }

The final case is when not all devices respond in time. To keep our test relatively fast, we will construct the
`DeviceGroupQuery` actor with a smaller timeout:

Scala
: @@snip [DeviceGroupQuerySpec.scala]($code$/scala/tutorial_5/DeviceGroupQuerySpec.scala) { #query-test-timeout }
: @@snip [DeviceGroupQuerySpec.scala](/akka-docs/src/test/scala/tutorial_5/DeviceGroupQuerySpec.scala) { #query-test-timeout }

Java
: @@snip [DeviceGroupQueryTest.java]($code$/java/jdocs/tutorial_5/DeviceGroupQueryTest.java) { #query-test-timeout }
: @@snip [DeviceGroupQueryTest.java](/akka-docs/src/test/java/jdocs/tutorial_5/DeviceGroupQueryTest.java) { #query-test-timeout }

Our query now works as expected; it is time to include this new functionality in the `DeviceGroup` actor.

@ -234,10 +234,10 @@ Including the query feature in the group actor is fairly simple now. We did all

itself, the group actor only needs to create it with the right initial parameters and nothing else.

Scala
: @@snip [DeviceGroup.scala]($code$/scala/tutorial_5/DeviceGroup.scala) { #query-added }
: @@snip [DeviceGroup.scala](/akka-docs/src/test/scala/tutorial_5/DeviceGroup.scala) { #query-added }

Java
: @@snip [DeviceGroup.java]($code$/java/jdocs/tutorial_5/DeviceGroup.java) { #query-added }
: @@snip [DeviceGroup.java](/akka-docs/src/test/java/jdocs/tutorial_5/DeviceGroup.java) { #query-added }

It is probably worth restating what we said at the beginning of the chapter. By keeping the temporary state that is only relevant to the query itself in a separate actor we keep the group actor implementation very simple. It delegates
everything to child actors and therefore does not have to keep state that is not relevant to its core business. Also, multiple queries can now run in parallel; in fact, as many as needed. In our case querying an individual device actor is a fast operation, but if this were not the case, for example, because the remote sensors need to be contacted over the network, this design would significantly improve throughput.

@ -245,10 +245,10 @@ everything to child actors and therefore does not have to keep state that is not

We close this chapter by testing that everything works together. This test is a variant of the previous ones, now exercising the group query feature:

Scala
: @@snip [DeviceGroupSpec.scala]($code$/scala/tutorial_5/DeviceGroupSpec.scala) { #group-query-integration-test }
: @@snip [DeviceGroupSpec.scala](/akka-docs/src/test/scala/tutorial_5/DeviceGroupSpec.scala) { #group-query-integration-test }

Java
: @@snip [DeviceGroupTest.java]($code$/java/jdocs/tutorial_5/DeviceGroupTest.java) { #group-query-integration-test }
: @@snip [DeviceGroupTest.java](/akka-docs/src/test/java/jdocs/tutorial_5/DeviceGroupTest.java) { #group-query-integration-test }

## Summary

In the context of the IoT system, this guide introduced the following concepts, among others. You can follow the links to review them if necessary:

@ -148,7 +148,7 @@ Finally the promise returned by Patterns.ask() is fulfilled as a failure, includ

Let's have a look at the example code:

@@snip [SupervisedAsk.java]($code$/java/jdocs/pattern/SupervisedAsk.java)
@@snip [SupervisedAsk.java](/akka-docs/src/test/java/jdocs/pattern/SupervisedAsk.java)

In the `askOf` method the `SupervisorCreator` is sent the user message.
The `SupervisorCreator` creates a `SupervisorActor` and forwards the message.

@ -161,7 +161,7 @@ Afterwards the actor hierarchy is stopped.

Finally we are able to execute an actor and receive the results or exceptions.

@@snip [SupervisedAskSpec.java]($code$/java/jdocs/pattern/SupervisedAskSpec.java)
@@snip [SupervisedAskSpec.java](/akka-docs/src/test/java/jdocs/pattern/SupervisedAskSpec.java)

@@@

@ -15,19 +15,19 @@ To use TCP, you must add the following dependency in your project:

The code snippets throughout this section assume the following imports:

Scala
: @@snip [IODocSpec.scala]($code$/scala/docs/io/IODocSpec.scala) { #imports }
: @@snip [IODocSpec.scala](/akka-docs/src/test/scala/docs/io/IODocSpec.scala) { #imports }

Java
: @@snip [IODocTest.java]($code$/java/jdocs/io/japi/IODocTest.java) { #imports }
: @@snip [IODocTest.java](/akka-docs/src/test/java/jdocs/io/japi/IODocTest.java) { #imports }

All of the Akka I/O APIs are accessed through manager objects. When using an I/O API, the first step is to acquire a
reference to the appropriate manager. The code below shows how to acquire a reference to the `Tcp` manager.

Scala
: @@snip [IODocSpec.scala]($code$/scala/docs/io/IODocSpec.scala) { #manager }
: @@snip [IODocSpec.scala](/akka-docs/src/test/scala/docs/io/IODocSpec.scala) { #manager }

Java
: @@snip [EchoManager.java]($code$/java/jdocs/io/japi/EchoManager.java) { #manager }
: @@snip [EchoManager.java](/akka-docs/src/test/java/jdocs/io/japi/EchoManager.java) { #manager }

The manager is an actor that handles the underlying low-level I/O resources (selectors, channels) and instantiates
workers for specific tasks, such as listening to incoming connections.

@ -35,10 +35,10 @@ workers for specific tasks, such as listening to incoming connections.

## Connecting

Scala
: @@snip [IODocSpec.scala]($code$/scala/docs/io/IODocSpec.scala) { #client }
: @@snip [IODocSpec.scala](/akka-docs/src/test/scala/docs/io/IODocSpec.scala) { #client }

Java
: @@snip [IODocTest.java]($code$/java/jdocs/io/japi/IODocTest.java) { #client }
: @@snip [IODocTest.java](/akka-docs/src/test/java/jdocs/io/japi/IODocTest.java) { #client }
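
Condensed to its essentials, the client side of such a connection can be sketched in Scala as:

```scala
import java.net.InetSocketAddress
import akka.actor.Actor
import akka.io.{ IO, Tcp }

class Client(remote: InetSocketAddress) extends Actor {
  import Tcp._
  import context.system // implicit ActorSystem for the IO extension lookup

  IO(Tcp) ! Connect(remote) // ask the TCP manager for an outgoing connection

  def receive: Receive = {
    case CommandFailed(_: Connect) =>
      context.stop(self) // the connection could not be established
    case Connected(remoteAddress, localAddress) =>
      sender() ! Register(self) // route events of this connection to this actor
  }
}
```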

The first step of connecting to a remote address is sending a
@scala[`Connect` message]@java[message by the `TcpMessage.connect` method] to the TCP manager; in addition to the simplest form shown above there

@ -81,10 +81,10 @@ fine-grained connection close events, see [Closing Connections](#closing-connect

## Accepting connections

Scala
: @@snip [IODocSpec.scala]($code$/scala/docs/io/IODocSpec.scala) { #server }
: @@snip [IODocSpec.scala](/akka-docs/src/test/scala/docs/io/IODocSpec.scala) { #server }

Java
: @@snip [IODocTest.java]($code$/java/jdocs/io/japi/IODocTest.java) { #server }
: @@snip [IODocTest.java](/akka-docs/src/test/java/jdocs/io/japi/IODocTest.java) { #server }

To create a TCP server and listen for inbound connections, a @scala[`Bind` command]@java[message by the `TcpMessage.bind` method]
has to be sent to the TCP manager. This will instruct the TCP manager

@ -104,10 +104,10 @@ actor in the system to the connection actor (i.e. the actor which sent the

`Connected` message). The simplistic handler is defined as:

Scala
: @@snip [IODocSpec.scala]($code$/scala/docs/io/IODocSpec.scala) { #simplistic-handler }
: @@snip [IODocSpec.scala](/akka-docs/src/test/scala/docs/io/IODocSpec.scala) { #simplistic-handler }

Java
: @@snip [IODocTest.java]($code$/java/jdocs/io/japi/IODocTest.java) { #simplistic-handler }
: @@snip [IODocTest.java](/akka-docs/src/test/java/jdocs/io/japi/IODocTest.java) { #simplistic-handler }

For a more complete sample which also takes into account the possibility of
failures when sending, please see [Throttling Reads and Writes](#throttling-reads-and-writes) below.

@ -245,18 +245,18 @@ to the client before fully closing the connection. This is enabled using a flag

upon connection activation (observe the @scala[`Register` message]@java[`TcpMessage.register` method]):

Scala
: @@snip [EchoServer.scala]($code$/scala/docs/io/EchoServer.scala) { #echo-manager }
: @@snip [EchoServer.scala](/akka-docs/src/test/scala/docs/io/EchoServer.scala) { #echo-manager }

Java
: @@snip [EchoManager.java]($code$/java/jdocs/io/japi/EchoManager.java) { #echo-manager }
: @@snip [EchoManager.java](/akka-docs/src/test/java/jdocs/io/japi/EchoManager.java) { #echo-manager }

With this preparation let us dive into the handler itself:

Scala
: @@snip [EchoServer.scala]($code$/scala/docs/io/EchoServer.scala) { #simple-echo-handler }
: @@snip [EchoServer.scala](/akka-docs/src/test/scala/docs/io/EchoServer.scala) { #simple-echo-handler }

Java
: @@snip [SimpleEchoHandler.java]($code$/java/jdocs/io/japi/SimpleEchoHandler.java) { #simple-echo-handler }
: @@snip [SimpleEchoHandler.java](/akka-docs/src/test/java/jdocs/io/japi/SimpleEchoHandler.java) { #simple-echo-handler }

The principle is simple: when having written a chunk, always wait for the
`Ack` to come back before sending the next chunk. While waiting, we switch

@ -264,10 +264,10 @@ behavior such that new incoming data are buffered. The helper functions used

are a bit lengthy but not complicated:

Scala
: @@snip [EchoServer.scala]($code$/scala/docs/io/EchoServer.scala) { #simple-helpers }
: @@snip [EchoServer.scala](/akka-docs/src/test/scala/docs/io/EchoServer.scala) { #simple-helpers }

Java
: @@snip [SimpleEchoHandler.java]($code$/java/jdocs/io/japi/SimpleEchoHandler.java) { #simple-helpers }
: @@snip [SimpleEchoHandler.java](/akka-docs/src/test/java/jdocs/io/japi/SimpleEchoHandler.java) { #simple-helpers }

The most interesting part is probably the last: an `Ack` removes the oldest
data chunk from the buffer, and if that was the last chunk then we either close

@ -289,10 +289,10 @@ how end-to-end back-pressure is realized across a TCP connection.

## NACK-Based Write Back-Pressure with Suspending

Scala
: @@snip [EchoServer.scala]($code$/scala/docs/io/EchoServer.scala) { #echo-handler }
: @@snip [EchoServer.scala](/akka-docs/src/test/scala/docs/io/EchoServer.scala) { #echo-handler }

Java
: @@snip [EchoHandler.java]($code$/java/jdocs/io/japi/EchoHandler.java) { #echo-handler }
: @@snip [EchoHandler.java](/akka-docs/src/test/java/jdocs/io/japi/EchoHandler.java) { #echo-handler }

The principle here is to keep writing until a `CommandFailed` is
received, using acknowledgements only to prune the resend buffer. When such a

@ -300,10 +300,10 @@ failure was received, transition into a different state for handling and handle

resending of all queued data:

Scala
: @@snip [EchoServer.scala]($code$/scala/docs/io/EchoServer.scala) { #buffering }
: @@snip [EchoServer.scala](/akka-docs/src/test/scala/docs/io/EchoServer.scala) { #buffering }

Java
: @@snip [EchoHandler.java]($code$/java/jdocs/io/japi/EchoHandler.java) { #buffering }
: @@snip [EchoHandler.java](/akka-docs/src/test/java/jdocs/io/japi/EchoHandler.java) { #buffering }

It should be noted that all writes which are currently buffered have also been
sent to the connection actor upon entering this state, which means that the

@ -317,10 +317,10 @@ the first ten writes after a failure before resuming the optimistic

write-through behavior.

Scala
: @@snip [EchoServer.scala]($code$/scala/docs/io/EchoServer.scala) { #closing }
: @@snip [EchoServer.scala](/akka-docs/src/test/scala/docs/io/EchoServer.scala) { #closing }

Java
: @@snip [EchoHandler.java]($code$/java/jdocs/io/japi/EchoHandler.java) { #closing }
: @@snip [EchoHandler.java](/akka-docs/src/test/java/jdocs/io/japi/EchoHandler.java) { #closing }

Closing the connection while still sending all data is a bit more involved than
in the ACK-based approach: the idea is to always send all outstanding messages

@ -330,10 +330,10 @@ behavior to await the `WritingResumed` event and start over.

The helper functions are very similar to the ACK-based case:

Scala
: @@snip [EchoServer.scala]($code$/scala/docs/io/EchoServer.scala) { #helpers }
: @@snip [EchoServer.scala](/akka-docs/src/test/scala/docs/io/EchoServer.scala) { #helpers }

Java
: @@snip [EchoHandler.java]($code$/java/jdocs/io/japi/EchoHandler.java) { #helpers }
: @@snip [EchoHandler.java](/akka-docs/src/test/java/jdocs/io/japi/EchoHandler.java) { #helpers }

## Read Back-Pressure with Pull Mode

@ -346,10 +346,10 @@ With the Pull mode this buffer can be completely eliminated as the following sni

demonstrates:

Scala
: @@snip [ReadBackPressure.scala]($code$/scala/docs/io/ReadBackPressure.scala) { #pull-reading-echo }
: @@snip [ReadBackPressure.scala](/akka-docs/src/test/scala/docs/io/ReadBackPressure.scala) { #pull-reading-echo }

Java
: @@snip [JavaReadBackPressure.java]($code$/java/jdocs/io/JavaReadBackPressure.java) { #pull-reading-echo }
: @@snip [JavaReadBackPressure.java](/akka-docs/src/test/java/jdocs/io/JavaReadBackPressure.java) { #pull-reading-echo }

The idea here is that reading is not resumed until the previous write has been
completely acknowledged by the connection actor. Every pull mode connection

@ -363,10 +363,10 @@ To enable pull reading on an outbound connection the `pullMode` parameter of

the @scala[`Connect`]@java[`TcpMessage.connect` method] should be set to `true`:

Scala
: @@snip [ReadBackPressure.scala]($code$/scala/docs/io/ReadBackPressure.scala) { #pull-mode-connect }
: @@snip [ReadBackPressure.scala](/akka-docs/src/test/scala/docs/io/ReadBackPressure.scala) { #pull-mode-connect }

Java
: @@snip [JavaReadBackPressure.java]($code$/java/jdocs/io/JavaReadBackPressure.java) { #pull-mode-connect }
: @@snip [JavaReadBackPressure.java](/akka-docs/src/test/java/jdocs/io/JavaReadBackPressure.java) { #pull-mode-connect }

### Pull Mode Reading for Inbound Connections

@ -375,10 +375,10 @@ connections but it is possible to create a listener actor with this mode of read

by setting the `pullMode` parameter of the @scala[`Bind` command]@java[`TcpMessage.bind` method] to `true`:

Scala
: @@snip [ReadBackPressure.scala]($code$/scala/docs/io/ReadBackPressure.scala) { #pull-mode-bind }
: @@snip [ReadBackPressure.scala](/akka-docs/src/test/scala/docs/io/ReadBackPressure.scala) { #pull-mode-bind }

Java
: @@snip [JavaReadBackPressure.java]($code$/java/jdocs/io/JavaReadBackPressure.java) { #pull-mode-bind }
: @@snip [JavaReadBackPressure.java](/akka-docs/src/test/java/jdocs/io/JavaReadBackPressure.java) { #pull-mode-bind }

One of the effects of this setting is that all connections accepted by this listener
actor will use pull mode reading.

@ -392,10 +392,10 @@ Listener actors with pull mode start suspended so to start accepting connections

a @scala[`ResumeAccepting` command]@java[message by the `TcpMessage.resumeAccepting` method] has to be sent to the listener actor after binding was successful:

Scala
: @@snip [ReadBackPressure.scala]($code$/scala/docs/io/ReadBackPressure.scala) { #pull-accepting #pull-accepting-cont }
: @@snip [ReadBackPressure.scala](/akka-docs/src/test/scala/docs/io/ReadBackPressure.scala) { #pull-accepting #pull-accepting-cont }

Java
: @@snip [JavaReadBackPressure.java]($code$/java/jdocs/io/JavaReadBackPressure.java) { #pull-accepting }
: @@snip [JavaReadBackPressure.java](/akka-docs/src/test/java/jdocs/io/JavaReadBackPressure.java) { #pull-accepting }

As shown in the example, after handling an incoming connection we need to resume accepting again.

@ -30,10 +30,10 @@ offered using distinct IO extensions described below.

### Simple Send

Scala
: @@snip [UdpDocSpec.scala]($code$/scala/docs/io/UdpDocSpec.scala) { #sender }
: @@snip [UdpDocSpec.scala](/akka-docs/src/test/scala/docs/io/UdpDocSpec.scala) { #sender }

Java
: @@snip [UdpDocTest.java]($code$/java/jdocs/io/UdpDocTest.java) { #sender }
: @@snip [UdpDocTest.java](/akka-docs/src/test/java/jdocs/io/UdpDocTest.java) { #sender }
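
A Scala sketch of such a fire-and-forget sender, close to the referenced snippet:

```scala
import java.net.InetSocketAddress
import akka.actor.{ Actor, ActorRef }
import akka.io.{ IO, Udp }
import akka.util.ByteString

class SimpleSender(remote: InetSocketAddress) extends Actor {
  import context.system

  IO(Udp) ! Udp.SimpleSender // ask the UDP manager for a simple-sender worker

  def receive: Receive = {
    case Udp.SimpleSenderReady =>
      context.become(ready(sender()))
  }

  def ready(send: ActorRef): Receive = {
    case msg: String =>
      send ! Udp.Send(ByteString(msg), remote) // no reply is expected
  }
}
```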

The simplest form of UDP usage is to just send datagrams without the need of
getting a reply. To this end a “simple sender” facility is provided as

@ -55,10 +55,10 @@ want to close the ephemeral port the sender is bound to.

### Bind (and Send)

Scala
: @@snip [UdpDocSpec.scala]($code$/scala/docs/io/UdpDocSpec.scala) { #listener }
: @@snip [UdpDocSpec.scala](/akka-docs/src/test/scala/docs/io/UdpDocSpec.scala) { #listener }

Java
: @@snip [UdpDocTest.java]($code$/java/jdocs/io/UdpDocTest.java) { #listener }
: @@snip [UdpDocTest.java](/akka-docs/src/test/java/jdocs/io/UdpDocTest.java) { #listener }

If you want to implement a UDP server which listens on a socket for incoming
datagrams then you need to use the @scala[`Bind`]@java[`UdpMessage.bind`] message as shown above. The

@ -84,10 +84,10 @@ connection is only able to send to the `remoteAddress` it was connected to,

and will receive datagrams only from that address.

Scala
: @@snip [UdpDocSpec.scala]($code$/scala/docs/io/UdpDocSpec.scala) { #connected }
: @@snip [UdpDocSpec.scala](/akka-docs/src/test/scala/docs/io/UdpDocSpec.scala) { #connected }

Java
: @@snip [UdpDocTest.java]($code$/java/jdocs/io/UdpDocTest.java) { #connected }
: @@snip [UdpDocTest.java](/akka-docs/src/test/java/jdocs/io/UdpDocTest.java) { #connected }

Consequently the example shown here looks quite similar to the previous one;
the biggest difference is the absence of remote address information in

@ -114,23 +114,23 @@ class which @scala[extends]@java[implements] `akka.io.Inet.SocketOption`. Provid

for opening a datagram channel by overriding the `create` method.

Scala
: @@snip [ScalaUdpMulticast.scala]($code$/scala/docs/io/ScalaUdpMulticast.scala) { #inet6-protocol-family }
: @@snip [ScalaUdpMulticast.scala](/akka-docs/src/test/scala/docs/io/ScalaUdpMulticast.scala) { #inet6-protocol-family }

Java
: @@snip [JavaUdpMulticast.java]($code$/java/jdocs/io/JavaUdpMulticast.java) { #inet6-protocol-family }
: @@snip [JavaUdpMulticast.java](/akka-docs/src/test/java/jdocs/io/JavaUdpMulticast.java) { #inet6-protocol-family }

Another socket option will be needed to join a multicast group.

Scala
: @@snip [ScalaUdpMulticast.scala]($code$/scala/docs/io/ScalaUdpMulticast.scala) { #multicast-group }
: @@snip [ScalaUdpMulticast.scala](/akka-docs/src/test/scala/docs/io/ScalaUdpMulticast.scala) { #multicast-group }

Java
: @@snip [JavaUdpMulticast.java]($code$/java/jdocs/io/JavaUdpMulticast.java) { #multicast-group }
: @@snip [JavaUdpMulticast.java](/akka-docs/src/test/java/jdocs/io/JavaUdpMulticast.java) { #multicast-group }

Socket options must be provided to the @scala[`UdpMessage.Bind`]@java[`UdpMessage.bind`] message.

Scala
: @@snip [ScalaUdpMulticast.scala]($code$/scala/docs/io/ScalaUdpMulticast.scala) { #bind }
: @@snip [ScalaUdpMulticast.scala](/akka-docs/src/test/scala/docs/io/ScalaUdpMulticast.scala) { #bind }

Java
: @@snip [JavaUdpMulticast.java]($code$/java/jdocs/io/JavaUdpMulticast.java) { #bind }
: @@snip [JavaUdpMulticast.java](/akka-docs/src/test/java/jdocs/io/JavaUdpMulticast.java) { #bind }

@ -33,10 +33,10 @@ is accessible @scala[through the `IO` entry point]@java[by querying an `ActorSys

looks up the TCP manager and returns its `ActorRef`:

Scala
: @@snip [IODocSpec.scala]($code$/scala/docs/io/IODocSpec.scala) { #manager }
: @@snip [IODocSpec.scala](/akka-docs/src/test/scala/docs/io/IODocSpec.scala) { #manager }

Java
: @@snip [EchoManager.java]($code$/java/jdocs/io/japi/EchoManager.java) { #manager }
: @@snip [EchoManager.java](/akka-docs/src/test/java/jdocs/io/japi/EchoManager.java) { #manager }

The manager receives I/O command messages and instantiates worker actors in response. The worker actors present
themselves to the API user in the reply to the command that was sent. For example, after a `Connect` command sent to

@ -115,4 +115,4 @@ A `ByteStringBuilder` can be wrapped in a `java.io.OutputStream` via the `asOutp

## Architecture in-depth

For further details on the design and internal architecture see @ref:[I/O Layer Design](common/io-layer.md).

@ -25,11 +25,11 @@ Create a `LoggingAdapter` and use the `error`, `warning`, `info`, or `debug` met

as illustrated in this example:

Scala
: @@snip [LoggingDocSpec.scala]($code$/scala/docs/event/LoggingDocSpec.scala) { #my-actor }
: @@snip [LoggingDocSpec.scala](/akka-docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #my-actor }

Java
: @@snip [LoggingDocTest.java]($code$/java/jdocs/event/LoggingDocTest.java) { #imports }
@@snip [LoggingDocTest.java]($code$/java/jdocs/event/LoggingDocTest.java) { #my-actor }
: @@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #imports }
@@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #my-actor }
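
On the Scala side the setup is only a couple of lines; a sketch:

```scala
import akka.actor.Actor
import akka.event.Logging

class MyActor extends Actor {
  val log = Logging(context.system, this)

  override def preStart(): Unit = log.debug("Starting")

  def receive: Receive = {
    case "test" => log.info("Received test")
    case x      => log.warning("Received unknown message: {}", x)
  }
}
```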

@@@ div { .group-scala }

@ -65,10 +65,10 @@ the same line with the same severity). You may pass an array as the only

substitution argument to have its elements be treated individually:

Scala
: @@snip [LoggingDocSpec.scala]($code$/scala/docs/event/LoggingDocSpec.scala) { #array }
: @@snip [LoggingDocSpec.scala](/akka-docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #array }

Java
: @@snip [LoggingDocTest.java]($code$/java/jdocs/event/LoggingDocTest.java) { #array }
: @@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #array }

The Java `Class` of the log source is also included in the generated
`LogEvent`. In case of a simple string this is replaced with a “marker”

@ -259,7 +259,7 @@ using implicit parameters and thus fully customizable: create your own

instance of `LogSource[T]` and have it in scope when creating the
logger.

@@snip [LoggingDocSpec.scala]($code$/scala/docs/event/LoggingDocSpec.scala) { #my-source }
@@snip [LoggingDocSpec.scala](/akka-docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #my-source }

This example creates a log source which mimics traditional usage of Java
loggers, which are based upon the originating object’s class name as log

@ -332,11 +332,11 @@ logger available in the 'akka-slf4j' module.

Example of creating a listener:

Scala
: @@snip [LoggingDocSpec.scala]($code$/scala/docs/event/LoggingDocSpec.scala) { #my-event-listener }
: @@snip [LoggingDocSpec.scala](/akka-docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #my-event-listener }

Java
: @@snip [LoggingDocTest.java]($code$/java/jdocs/event/LoggingDocTest.java) { #imports #imports-listener }
@@snip [LoggingDocTest.java]($code$/java/jdocs/event/LoggingDocTest.java) { #my-event-listener }
: @@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #imports #imports-listener }
@@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #my-event-listener }

## Logging to stdout during startup and shutdown

@ -512,11 +512,11 @@ if it is not set to a new map. Use `log.clearMDC()`.

@@@

Scala
: @@snip [LoggingDocSpec.scala]($code$/scala/docs/event/LoggingDocSpec.scala) { #mdc }
: @@snip [LoggingDocSpec.scala](/akka-docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #mdc }

Java
: @@snip [LoggingDocTest.java]($code$/java/jdocs/event/LoggingDocTest.java) { #imports-mdc }
@@snip [LoggingDocTest.java]($code$/java/jdocs/event/LoggingDocTest.java) { #mdc-actor }
: @@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #imports-mdc }
@@snip [LoggingDocTest.java](/akka-docs/src/test/java/jdocs/event/LoggingDocTest.java) { #mdc-actor }

@@@ div { .group-scala }

@ -524,7 +524,7 @@ For convenience, you can mix in the `log` member into actors, instead of definin

This trait also lets you override `def mdc(msg: Any): MDC` for specifying MDC values
depending on the current message and lets you forget about the cleanup as well, since it already does it for you.

@@snip [LoggingDocSpec.scala]($code$/scala/docs/event/LoggingDocSpec.scala) { #mdc-actor }
@@snip [LoggingDocSpec.scala](/akka-docs/src/test/scala/docs/event/LoggingDocSpec.scala) { #mdc-actor }

@@@

@ -25,15 +25,15 @@ by having that actor @scala[extend]@java[implement] the parameterized @scala[tra

an example:

Scala
: @@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #required-mailbox-class }
: @@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #required-mailbox-class }

Java
: @@snip [MyBoundedActor.java]($code$/java/jdocs/actor/MyBoundedActor.java) { #my-bounded-untyped-actor }
: @@snip [MyBoundedActor.java](/akka-docs/src/test/java/jdocs/actor/MyBoundedActor.java) { #my-bounded-untyped-actor }
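
In Scala the marker is a mixin; a minimal sketch, where the semantics type is what gets mapped to a concrete mailbox in configuration:

```scala
import akka.actor.Actor
import akka.dispatch.{ BoundedMessageQueueSemantics, RequiresMessageQueue }

class MyBoundedActor extends Actor
    with RequiresMessageQueue[BoundedMessageQueueSemantics] {

  def receive: Receive = {
    case msg => println(msg)
  }
}
```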

The type parameter to the `RequiresMessageQueue` @scala[trait]@java[interface] needs to be mapped to a mailbox in
configuration like this:

@@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #bounded-mailbox-config #required-mailbox-config }
@@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #bounded-mailbox-config #required-mailbox-config }

Now every time you create an actor of type `MyBoundedActor` it will try to get a bounded
mailbox. If the actor has a different mailbox configured in deployment, either directly or via

@ -199,46 +199,46 @@ The following mailboxes should only be used with zero `mailbox-push-timeout-time

How to create a PriorityMailbox:

Scala
: @@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #prio-mailbox }
: @@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #prio-mailbox }

Java
: @@snip [DispatcherDocTest.java]($code$/java/jdocs/dispatcher/DispatcherDocTest.java) { #prio-mailbox }
: @@snip [DispatcherDocTest.java](/akka-docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #prio-mailbox }
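
A Scala version of such a mailbox, close to the referenced snippet; note the `(settings, config)` constructor, which Akka uses to instantiate the mailbox reflectively:

```scala
import akka.actor.{ ActorSystem, PoisonPill }
import akka.dispatch.{ PriorityGenerator, UnboundedPriorityMailbox }
import com.typesafe.config.Config

class MyPrioMailbox(settings: ActorSystem.Settings, config: Config)
    extends UnboundedPriorityMailbox(
      PriorityGenerator {
        case "highpriority" => 0 // high-priority messages are handled first
        case "lowpriority"  => 2 // low-priority messages are handled last
        case PoisonPill     => 3 // PoisonPill goes after everything else
        case otherwise      => 1 // everything else lands in between
      })
```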
|
||||
|
||||
And then add it to the configuration:
|
||||
|
||||
@@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #prio-dispatcher-config }
|
||||
@@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #prio-dispatcher-config }

And then an example of how you would use it:

Scala
: @@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #prio-dispatcher }
: @@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #prio-dispatcher }

Java
: @@snip [DispatcherDocTest.java]($code$/java/jdocs/dispatcher/DispatcherDocTest.java) { #prio-dispatcher }
: @@snip [DispatcherDocTest.java](/akka-docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #prio-dispatcher }

It is also possible to configure a mailbox type directly like this (this is a top-level configuration entry):

Scala
: @@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #prio-mailbox-config #mailbox-deployment-config }
: @@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #prio-mailbox-config #mailbox-deployment-config }

Java
: @@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #prio-mailbox-config-java #mailbox-deployment-config }
: @@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #prio-mailbox-config-java #mailbox-deployment-config }

And then use it either from deployment like this:

Scala
: @@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #defining-mailbox-in-config }
: @@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #defining-mailbox-in-config }

Java
: @@snip [DispatcherDocTest.java]($code$/java/jdocs/dispatcher/DispatcherDocTest.java) { #defining-mailbox-in-config }
: @@snip [DispatcherDocTest.java](/akka-docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #defining-mailbox-in-config }

Or code like this:

Scala
: @@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #defining-mailbox-in-code }
: @@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #defining-mailbox-in-code }

Java
: @@snip [DispatcherDocTest.java]($code$/java/jdocs/dispatcher/DispatcherDocTest.java) { #defining-mailbox-in-code }
: @@snip [DispatcherDocTest.java](/akka-docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #defining-mailbox-in-code }

### ControlAwareMailbox

@@ -247,40 +247,40 @@ immediately no matter how many other messages are already in its mailbox.

It can be configured like this:

@@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #control-aware-mailbox-config }
@@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #control-aware-mailbox-config }

Control messages need to extend the `ControlMessage` trait:

Scala
: @@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #control-aware-mailbox-messages }
: @@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #control-aware-mailbox-messages }

Java
: @@snip [DispatcherDocTest.java]($code$/java/jdocs/dispatcher/DispatcherDocTest.java) { #control-aware-mailbox-messages }
: @@snip [DispatcherDocTest.java](/akka-docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #control-aware-mailbox-messages }
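A sketch of the two pieces (names assumed): the dispatcher configuration selecting the control-aware mailbox, and a message type marked as a control message:

```scala
import akka.dispatch.ControlMessage
import com.typesafe.config.ConfigFactory

// Messages extending ControlMessage are dequeued before ordinary messages.
case object MyControlMessage extends ControlMessage

val config = ConfigFactory.parseString("""
  control-aware-dispatcher {
    mailbox-type = "akka.dispatch.UnboundedControlAwareMailbox"
  }
""")
```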

And then an example of how you would use it:

Scala
: @@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #control-aware-dispatcher }
: @@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #control-aware-dispatcher }

Java
: @@snip [DispatcherDocTest.java]($code$/java/jdocs/dispatcher/DispatcherDocTest.java) { #control-aware-dispatcher }
: @@snip [DispatcherDocTest.java](/akka-docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #control-aware-dispatcher }

## Creating your own Mailbox type

An example is worth a thousand quacks:

Scala
: @@snip [MyUnboundedMailbox.scala]($code$/scala/docs/dispatcher/MyUnboundedMailbox.scala) { #mailbox-marker-interface }
: @@snip [MyUnboundedMailbox.scala](/akka-docs/src/test/scala/docs/dispatcher/MyUnboundedMailbox.scala) { #mailbox-marker-interface }

Java
: @@snip [MyUnboundedMessageQueueSemantics.java]($code$/java/jdocs/dispatcher/MyUnboundedMessageQueueSemantics.java) { #mailbox-marker-interface }
: @@snip [MyUnboundedMessageQueueSemantics.java](/akka-docs/src/test/java/jdocs/dispatcher/MyUnboundedMessageQueueSemantics.java) { #mailbox-marker-interface }

Scala
: @@snip [MyUnboundedMailbox.scala]($code$/scala/docs/dispatcher/MyUnboundedMailbox.scala) { #mailbox-implementation-example }
: @@snip [MyUnboundedMailbox.scala](/akka-docs/src/test/scala/docs/dispatcher/MyUnboundedMailbox.scala) { #mailbox-implementation-example }

Java
: @@snip [MyUnboundedMailbox.java]($code$/java/jdocs/dispatcher/MyUnboundedMailbox.java) { #mailbox-implementation-example }
: @@snip [MyUnboundedMailbox.java](/akka-docs/src/test/java/jdocs/dispatcher/MyUnboundedMailbox.java) { #mailbox-implementation-example }
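To sketch what those two snippets amount to (a rough, illustrative implementation; the real snippet may differ in details):

```scala
import java.util.concurrent.ConcurrentLinkedQueue
import akka.actor.{ ActorRef, ActorSystem }
import akka.dispatch.{ Envelope, MailboxType, MessageQueue, ProducesMessageQueue }
import com.typesafe.config.Config

object MyUnboundedMailbox {
  // Marker trait used for mailbox requirements (see below).
  trait MyUnboundedMessageQueueSemantics

  class MyMessageQueue extends MessageQueue with MyUnboundedMessageQueueSemantics {
    private final val queue = new ConcurrentLinkedQueue[Envelope]()

    def enqueue(receiver: ActorRef, handle: Envelope): Unit = queue.offer(handle)
    def dequeue(): Envelope = queue.poll()
    def numberOfMessages: Int = queue.size
    def hasMessages: Boolean = !queue.isEmpty
    def cleanUp(owner: ActorRef, deadLetters: MessageQueue): Unit =
      while (hasMessages) deadLetters.enqueue(owner, dequeue())
  }
}

// The mailbox type; Akka creates it reflectively via this constructor signature.
class MyUnboundedMailbox extends MailboxType
  with ProducesMessageQueue[MyUnboundedMailbox.MyMessageQueue] {

  def this(settings: ActorSystem.Settings, config: Config) = this()

  final override def create(owner: Option[ActorRef], system: Option[ActorSystem]): MessageQueue =
    new MyUnboundedMailbox.MyMessageQueue()
}
```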

And then you specify the FQCN of your MailboxType as the value of the "mailbox-type" in the dispatcher
configuration, or the mailbox configuration.

@@ -299,15 +299,15 @@ dispatcher or mailbox setting using it.

You can also use the mailbox as a requirement on the dispatcher like this:

@@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #custom-mailbox-config-java }
@@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #custom-mailbox-config-java }
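A sketch of such a dispatcher-level requirement in configuration (identifiers are assumptions):

```scala
import com.typesafe.config.ConfigFactory

val config = ConfigFactory.parseString("""
  custom-dispatcher {
    mailbox-requirement = "docs.dispatcher.MyUnboundedMessageQueueSemantics"
  }
  akka.actor.mailbox.requirements {
    "docs.dispatcher.MyUnboundedMessageQueueSemantics" = custom-dispatcher-mailbox
  }
  custom-dispatcher-mailbox {
    mailbox-type = "docs.dispatcher.MyUnboundedMailbox"
  }
""")
```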

Or by defining the requirement on your actor class like this:

Scala
: @@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #require-mailbox-on-actor }
: @@snip [DispatcherDocSpec.scala](/akka-docs/src/test/scala/docs/dispatcher/DispatcherDocSpec.scala) { #require-mailbox-on-actor }

Java
: @@snip [DispatcherDocTest.java]($code$/java/jdocs/dispatcher/DispatcherDocTest.java) { #require-mailbox-on-actor }
: @@snip [DispatcherDocTest.java](/akka-docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #require-mailbox-on-actor }

## Special Semantics of `system.actorOf`

@@ -10,7 +10,7 @@ To configure it in your project you should do the following steps:

1. Add it as a plugin by adding the following to your project/plugins.sbt (see the sketch after this list):

@@snip [plugins.sbt]($akka$/project/plugins.sbt) { #sbt-multi-jvm }
@@snip [plugins.sbt](/project/plugins.sbt) { #sbt-multi-jvm }

2. Add multi-JVM testing to `build.sbt` or `project/Build.scala` by enabling `MultiJvmPlugin` and
setting the `MultiJvm` config, as sketched below.
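A sketch of both build files (the plugin version shown is only an assumption):

```scala
// project/plugins.sbt
addSbtPlugin("com.typesafe.sbt" % "sbt-multi-jvm" % "0.4.0")
```

And the corresponding `build.sbt` wiring (project name assumed):

```scala
// build.sbt
lazy val root = (project in file("."))
  .enablePlugins(MultiJvmPlugin) // provided by the sbt-multi-jvm plugin
  .configs(MultiJvm)
```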

@@ -169,17 +169,17 @@ complete the test names.

First we need some scaffolding to hook up the `MultiNodeSpec` with your favorite test framework. Let's define a trait
`STMultiNodeSpec` that uses ScalaTest to start and stop `MultiNodeSpec`.

@@snip [STMultiNodeSpec.scala]($akka$/akka-remote-tests/src/test/scala/akka/remote/testkit/STMultiNodeSpec.scala) { #example }
@@snip [STMultiNodeSpec.scala](/akka-remote-tests/src/test/scala/akka/remote/testkit/STMultiNodeSpec.scala) { #example }

Then we need to define a configuration. Let's use two nodes `node1` and `node2` and call it
`MultiNodeSampleConfig`.

@@snip [MultiNodeSample.scala]($akka$/akka-remote-tests/src/multi-jvm/scala/akka/remote/sample/MultiNodeSample.scala) { #package #config }
@@snip [MultiNodeSample.scala](/akka-remote-tests/src/multi-jvm/scala/akka/remote/sample/MultiNodeSample.scala) { #package #config }
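The configuration object boils down to declaring the two roles, roughly:

```scala
import akka.remote.testkit.MultiNodeConfig

object MultiNodeSampleConfig extends MultiNodeConfig {
  val node1 = role("node1")
  val node2 = role("node2")
}
```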

And then finally the node test code, which starts the two nodes and demonstrates a barrier, and a remote actor
message send/receive.

@@snip [MultiNodeSample.scala]($akka$/akka-remote-tests/src/multi-jvm/scala/akka/remote/sample/MultiNodeSample.scala) { #package #spec }
@@snip [MultiNodeSample.scala](/akka-remote-tests/src/multi-jvm/scala/akka/remote/sample/MultiNodeSample.scala) { #package #spec }

The easiest way to run this example yourself is to download the ready-to-run
@extref[Akka Multi-Node Testing Sample with Scala](ecs:akka-samples-multi-node-scala)

@@ -31,10 +31,10 @@ To demonstrate the features of the @scala[`PersistentFSM` trait]@java[`AbstractP

The contract of our "WebStoreCustomerFSMActor" is that it accepts the following commands:

Scala
: @@snip [PersistentFSMSpec.scala]($akka$/akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala) { #customer-commands }
: @@snip [PersistentFSMSpec.scala](/akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala) { #customer-commands }

Java
: @@snip [AbstractPersistentFSMTest.java]($akka$/akka-persistence/src/test/java/akka/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-commands }
: @@snip [AbstractPersistentFSMTest.java](/akka-persistence/src/test/java/akka/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-commands }
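To give a flavour of the command protocol described here, a sketch (the `Item` type and its fields are assumptions):

```scala
final case class Item(id: String, name: String, price: Float)

sealed trait Command
final case class AddItem(item: Item) extends Command
case object Buy extends Command
```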

`AddItem` - sent when the customer adds an item to the shopping cart
`Buy` - sent when the customer finishes the purchase

@@ -44,10 +44,10 @@ Java

The customer can be in one of the following states:

Scala
: @@snip [PersistentFSMSpec.scala]($akka$/akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala) { #customer-states }
: @@snip [PersistentFSMSpec.scala](/akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala) { #customer-states }

Java
: @@snip [AbstractPersistentFSMTest.java]($akka$/akka-persistence/src/test/java/akka/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-states }
: @@snip [AbstractPersistentFSMTest.java](/akka-persistence/src/test/java/akka/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-states }
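States in a persistent FSM carry a string `identifier` that is used when persisting state changes; a sketch of two of the states named below:

```scala
import akka.persistence.fsm.PersistentFSM.FSMState

sealed trait UserState extends FSMState
case object LookingAround extends UserState {
  override def identifier: String = "Looking Around"
}
case object Shopping extends UserState {
  override def identifier: String = "Shopping"
}
```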

`LookingAround` - customer is browsing the site, but hasn't added anything to the shopping cart
`Shopping` - customer has recently added items to the shopping cart

@@ -66,26 +66,26 @@ Customer's actions are "recorded" as a sequence of "domain events" which are per

start in order to restore the latest customer's state:

Scala
: @@snip [PersistentFSMSpec.scala]($akka$/akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala) { #customer-domain-events }
: @@snip [PersistentFSMSpec.scala](/akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala) { #customer-domain-events }

Java
: @@snip [AbstractPersistentFSMTest.java]($akka$/akka-persistence/src/test/java/akka/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-domain-events }
: @@snip [AbstractPersistentFSMTest.java](/akka-persistence/src/test/java/akka/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-domain-events }

Customer state data represents the items in a customer's shopping cart:

Scala
: @@snip [PersistentFSMSpec.scala]($akka$/akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala) { #customer-states-data }
: @@snip [PersistentFSMSpec.scala](/akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala) { #customer-states-data }

Java
: @@snip [AbstractPersistentFSMTest.java]($akka$/akka-persistence/src/test/java/akka/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-states-data }
: @@snip [AbstractPersistentFSMTest.java](/akka-persistence/src/test/java/akka/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-states-data }

Here is how everything is wired together:

Scala
: @@snip [PersistentFSMSpec.scala]($akka$/akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala) { #customer-fsm-body }
: @@snip [PersistentFSMSpec.scala](/akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala) { #customer-fsm-body }

Java
: @@snip [AbstractPersistentFSMTest.java]($akka$/akka-persistence/src/test/java/akka/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-fsm-body }
: @@snip [AbstractPersistentFSMTest.java](/akka-persistence/src/test/java/akka/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-fsm-body }

@@@ note

@@ -95,27 +95,27 @@ Override the `applyEvent` method to define how state data is affected by domain

@@@

Scala
: @@snip [PersistentFSMSpec.scala]($akka$/akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala) { #customer-apply-event }
: @@snip [PersistentFSMSpec.scala](/akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala) { #customer-apply-event }

Java
: @@snip [AbstractPersistentFSMTest.java]($akka$/akka-persistence/src/test/java/akka/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-apply-event }
: @@snip [AbstractPersistentFSMTest.java](/akka-persistence/src/test/java/akka/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-apply-event }

`andThen` can be used to define actions which will be executed following the event's persistence - convenient for "side effects" like sending a message or logging.
Notice that actions defined in the `andThen` block are not executed on recovery:

Scala
: @@snip [PersistentFSMSpec.scala]($akka$/akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala) { #customer-andthen-example }
: @@snip [PersistentFSMSpec.scala](/akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala) { #customer-andthen-example }

Java
: @@snip [AbstractPersistentFSMTest.java]($akka$/akka-persistence/src/test/java/akka/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-andthen-example }
: @@snip [AbstractPersistentFSMTest.java](/akka-persistence/src/test/java/akka/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-andthen-example }

A snapshot of state data can be persisted by calling the `saveStateSnapshot()` method:

Scala
: @@snip [PersistentFSMSpec.scala]($akka$/akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala) { #customer-snapshot-example }
: @@snip [PersistentFSMSpec.scala](/akka-persistence/src/test/scala/akka/persistence/fsm/PersistentFSMSpec.scala) { #customer-snapshot-example }

Java
: @@snip [AbstractPersistentFSMTest.java]($akka$/akka-persistence/src/test/java/akka/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-snapshot-example }
: @@snip [AbstractPersistentFSMTest.java](/akka-persistence/src/test/java/akka/persistence/fsm/AbstractPersistentFSMTest.java) { #customer-snapshot-example }

On recovery, state data is initialized according to the latest available snapshot, then the remaining domain events are replayed, triggering the
`applyEvent` method.

@@ -12,30 +12,30 @@ A journal plugin extends `AsyncWriteJournal`.

`AsyncWriteJournal` is an actor and the methods to be implemented are:

Scala
: @@snip [AsyncWriteJournal.scala]($akka$/akka-persistence/src/main/scala/akka/persistence/journal/AsyncWriteJournal.scala) { #journal-plugin-api }
: @@snip [AsyncWriteJournal.scala](/akka-persistence/src/main/scala/akka/persistence/journal/AsyncWriteJournal.scala) { #journal-plugin-api }

Java
: @@snip [AsyncWritePlugin.java]($akka$/akka-persistence/src/main/java/akka/persistence/journal/japi/AsyncWritePlugin.java) { #async-write-plugin-api }
: @@snip [AsyncWritePlugin.java](/akka-persistence/src/main/java/akka/persistence/journal/japi/AsyncWritePlugin.java) { #async-write-plugin-api }

If the storage backend API only supports synchronous, blocking writes, the methods should be implemented as:

Scala
: @@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #sync-journal-plugin-api }
: @@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #sync-journal-plugin-api }

Java
: @@snip [LambdaPersistencePluginDocTest.java]($code$/java/jdocs/persistence/LambdaPersistencePluginDocTest.java) { #sync-journal-plugin-api }
: @@snip [LambdaPersistencePluginDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistencePluginDocTest.java) { #sync-journal-plugin-api }

A journal plugin must also implement the methods defined in `AsyncRecovery` for replays and sequence number recovery:

Scala
: @@snip [AsyncRecovery.scala]($akka$/akka-persistence/src/main/scala/akka/persistence/journal/AsyncRecovery.scala) { #journal-plugin-api }
: @@snip [AsyncRecovery.scala](/akka-persistence/src/main/scala/akka/persistence/journal/AsyncRecovery.scala) { #journal-plugin-api }

Java
: @@snip [AsyncRecoveryPlugin.java]($akka$/akka-persistence/src/main/java/akka/persistence/journal/japi/AsyncRecoveryPlugin.java) { #async-replay-plugin-api }
: @@snip [AsyncRecoveryPlugin.java](/akka-persistence/src/main/java/akka/persistence/journal/japi/AsyncRecoveryPlugin.java) { #async-replay-plugin-api }

A journal plugin can be activated with the following minimal configuration:

@@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #journal-plugin-config }
@@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #journal-plugin-config }
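The shape of that configuration is roughly as follows (plugin id and class name are assumptions):

```scala
import com.typesafe.config.ConfigFactory

val config = ConfigFactory.parseString("""
  # Path to the journal plugin to be used
  akka.persistence.journal.plugin = "my-journal"

  # My custom journal plugin
  my-journal {
    # Class name of the plugin.
    class = "docs.persistence.MyJournal"
    # Dispatcher for the plugin actor.
    plugin-dispatcher = "akka.actor.default-dispatcher"
  }
""")
```

A snapshot store is activated analogously via `akka.persistence.snapshot-store.plugin`.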

The journal plugin instance is an actor so the methods corresponding to requests from persistent actors
are executed sequentially. It may delegate to asynchronous libraries, spawn futures, or delegate to other

@@ -60,14 +60,14 @@ Don't run journal tasks/futures on the system default dispatcher, since that mig

A snapshot store plugin must extend the `SnapshotStore` actor and implement the following methods:

Scala
: @@snip [SnapshotStore.scala]($akka$/akka-persistence/src/main/scala/akka/persistence/snapshot/SnapshotStore.scala) { #snapshot-store-plugin-api }
: @@snip [SnapshotStore.scala](/akka-persistence/src/main/scala/akka/persistence/snapshot/SnapshotStore.scala) { #snapshot-store-plugin-api }

Java
: @@snip [SnapshotStorePlugin.java]($akka$/akka-persistence/src/main/java/akka/persistence/snapshot/japi/SnapshotStorePlugin.java) { #snapshot-store-plugin-api }
: @@snip [SnapshotStorePlugin.java](/akka-persistence/src/main/java/akka/persistence/snapshot/japi/SnapshotStorePlugin.java) { #snapshot-store-plugin-api }

A snapshot store plugin can be activated with the following minimal configuration:

@@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #snapshot-store-plugin-config }
@@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #snapshot-store-plugin-config }

The snapshot store instance is an actor so the methods corresponding to requests from persistent actors
are executed sequentially. It may delegate to asynchronous libraries, spawn futures, or delegate to other

@@ -102,10 +102,10 @@ The TCK is usable from Java as well as Scala projects. To test your implementati

To include the Journal TCK tests in your test suite simply extend the provided @scala[`JournalSpec`]@java[`JavaJournalSpec`]:

Scala
: @@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #journal-tck-scala }
: @@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #journal-tck-scala }

Java
: @@snip [LambdaPersistencePluginDocTest.java]($code$/java/jdocs/persistence/LambdaPersistencePluginDocTest.java) { #journal-tck-java }
: @@snip [LambdaPersistencePluginDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistencePluginDocTest.java) { #journal-tck-java }
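A sketch of such a spec (the plugin id is an assumption):

```scala
import akka.persistence.CapabilityFlag
import akka.persistence.journal.JournalSpec
import com.typesafe.config.ConfigFactory

class MyJournalSpec extends JournalSpec(
  config = ConfigFactory.parseString(
    """akka.persistence.journal.plugin = "my.journal.plugin"""")) {

  // tells the TCK whether the optional rejection tests should run
  override def supportsRejectingNonSerializableObjects: CapabilityFlag =
    CapabilityFlag.off()
}
```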

Please note that some of the tests are optional, and by overriding the `supports...` methods you give the
TCK the needed information about which tests to run. You can implement these methods using @scala[boolean values or] the

@@ -119,19 +119,19 @@ typical scenarios.

In order to include the `SnapshotStore` TCK tests in your test suite extend the `SnapshotStoreSpec`:

Scala
: @@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #snapshot-store-tck-scala }
: @@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #snapshot-store-tck-scala }

Java
: @@snip [LambdaPersistencePluginDocTest.java]($code$/java/jdocs/persistence/LambdaPersistencePluginDocTest.java) { #snapshot-store-tck-java }
: @@snip [LambdaPersistencePluginDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistencePluginDocTest.java) { #snapshot-store-tck-java }

In case your plugin requires some setting up (starting a mock database, removing temporary files etc.) you can override the
`beforeAll` and `afterAll` methods to hook into the test lifecycle:

Scala
: @@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #journal-tck-before-after-scala }
: @@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #journal-tck-before-after-scala }

Java
: @@snip [LambdaPersistencePluginDocTest.java]($code$/java/jdocs/persistence/LambdaPersistencePluginDocTest.java) { #journal-tck-before-after-java }
: @@snip [LambdaPersistencePluginDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistencePluginDocTest.java) { #journal-tck-before-after-java }
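A sketch of those lifecycle hooks (what is started and stopped is entirely up to your plugin):

```scala
import akka.persistence.CapabilityFlag
import akka.persistence.journal.JournalSpec
import com.typesafe.config.ConfigFactory

class MyJournalWithLifecycleSpec extends JournalSpec(
  config = ConfigFactory.parseString(
    """akka.persistence.journal.plugin = "my.journal.plugin"""")) {

  override def supportsRejectingNonSerializableObjects: CapabilityFlag =
    CapabilityFlag.off()

  override def beforeAll(): Unit = {
    super.beforeAll()
    // start a mock database or create temporary files here
  }

  override def afterAll(): Unit = {
    // stop the database and clean up temporary files here
    super.afterAll()
  }
}
```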

We *highly recommend* including these specifications in your test suite, as they cover a broad range of cases you
might have otherwise forgotten to test for when writing a plugin from scratch.

@@ -23,10 +23,10 @@ The `ReadJournal` is retrieved via the `akka.persistence.query.PersistenceQuery`

extension:

Scala
: @@snip [LeveldbPersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/LeveldbPersistenceQueryDocSpec.scala) { #get-read-journal }
: @@snip [LeveldbPersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/LeveldbPersistenceQueryDocSpec.scala) { #get-read-journal }

Java
: @@snip [LeveldbPersistenceQueryDocTest.java]($code$/java/jdocs/persistence/query/LeveldbPersistenceQueryDocTest.java) { #get-read-journal }
: @@snip [LeveldbPersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/query/LeveldbPersistenceQueryDocTest.java) { #get-read-journal }
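In Scala, that retrieval boils down to something like:

```scala
import akka.actor.ActorSystem
import akka.persistence.query.PersistenceQuery
import akka.persistence.query.journal.leveldb.scaladsl.LeveldbReadJournal

val system = ActorSystem("query-example")

// obtain the LevelDB read journal via the PersistenceQuery extension
val queries: LeveldbReadJournal =
  PersistenceQuery(system).readJournalFor[LeveldbReadJournal](LeveldbReadJournal.Identifier)
```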

## Supported Queries

@@ -36,10 +36,10 @@ Java

identified by `persistenceId`.

Scala
: @@snip [LeveldbPersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/LeveldbPersistenceQueryDocSpec.scala) { #EventsByPersistenceId }
: @@snip [LeveldbPersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/LeveldbPersistenceQueryDocSpec.scala) { #EventsByPersistenceId }

Java
: @@snip [LeveldbPersistenceQueryDocTest.java]($code$/java/jdocs/persistence/query/LeveldbPersistenceQueryDocTest.java) { #EventsByPersistenceId }
: @@snip [LeveldbPersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/query/LeveldbPersistenceQueryDocTest.java) { #EventsByPersistenceId }
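Used roughly like this (`system` and `queries` as obtained above; the persistence id is an assumption):

```scala
import akka.NotUsed
import akka.persistence.query.EventEnvelope
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Source

implicit val mat = ActorMaterializer()(system)

// stream all events of the persistent actor "user-1", in sequence-number order
val events: Source[EventEnvelope, NotUsed] =
  queries.eventsByPersistenceId("user-1", 0L, Long.MaxValue)

events.runForeach(envelope => println(s"event: ${envelope.event}"))
```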

You can retrieve a subset of all events by specifying `fromSequenceNr` and `toSequenceNr`
or use `0L` and @scala[`Long.MaxValue`]@java[`Long.MAX_VALUE`] respectively to retrieve all events. Note that

@@ -68,10 +68,10 @@ backend journal.

`persistenceIds` is used for retrieving all `persistenceIds` of all persistent actors.

Scala
: @@snip [LeveldbPersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/LeveldbPersistenceQueryDocSpec.scala) { #AllPersistenceIds }
: @@snip [LeveldbPersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/LeveldbPersistenceQueryDocSpec.scala) { #AllPersistenceIds }

Java
: @@snip [LeveldbPersistenceQueryDocTest.java]($code$/java/jdocs/persistence/query/LeveldbPersistenceQueryDocTest.java) { #AllPersistenceIds }
: @@snip [LeveldbPersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/query/LeveldbPersistenceQueryDocTest.java) { #AllPersistenceIds }

The returned event stream is unordered and you can expect a different order for multiple
executions of the query.

@@ -93,19 +93,19 @@ backend journal.

all domain events of an Aggregate Root type.

Scala
: @@snip [LeveldbPersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/LeveldbPersistenceQueryDocSpec.scala) { #EventsByTag }
: @@snip [LeveldbPersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/LeveldbPersistenceQueryDocSpec.scala) { #EventsByTag }

Java
: @@snip [LeveldbPersistenceQueryDocTest.java]($code$/java/jdocs/persistence/query/LeveldbPersistenceQueryDocTest.java) { #EventsByTag }
: @@snip [LeveldbPersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/query/LeveldbPersistenceQueryDocTest.java) { #EventsByTag }

To tag events you create an @ref:[Event Adapter](persistence.md#event-adapters) that wraps the events in `akka.persistence.journal.Tagged`
with the given `tags`.

Scala
: @@snip [LeveldbPersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/LeveldbPersistenceQueryDocSpec.scala) { #tagger }
: @@snip [LeveldbPersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/LeveldbPersistenceQueryDocSpec.scala) { #tagger }

Java
: @@snip [LeveldbPersistenceQueryDocTest.java]($code$/java/jdocs/persistence/query/LeveldbPersistenceQueryDocTest.java) { #tagger }
: @@snip [LeveldbPersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/query/LeveldbPersistenceQueryDocTest.java) { #tagger }
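A sketch of such a write-side event adapter (the tagging rule is an assumption):

```scala
import akka.persistence.journal.{ Tagged, WriteEventAdapter }

// Wrap selected events in Tagged so the journal can index them by tag.
class MyTaggingEventAdapter extends WriteEventAdapter {
  override def manifest(event: Any): String = ""

  override def toJournal(event: Any): Any = event match {
    case s: String if s.contains("green") => Tagged(event, Set("green"))
    case _                                => event
  }
}
```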

You can use `NoOffset` to retrieve all events with a given tag or retrieve a subset of all
events by specifying a `Sequence` `offset`. The `offset` corresponds to an ordered sequence number for

@@ -153,4 +153,4 @@ for the default `LeveldbReadJournal.Identifier`.

It can be configured with the following properties:

@@snip [reference.conf]($akka$/akka-persistence-query/src/main/resources/reference.conf) { #query-leveldb }
@@snip [reference.conf](/akka-persistence-query/src/main/resources/reference.conf) { #query-leveldb }

@@ -45,10 +45,10 @@ databases). For example, given a library that provides a `akka.persistence.query`

journal is as simple as:

Scala
: @@snip [PersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #basic-usage }
: @@snip [PersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #basic-usage }

Java
: @@snip [PersistenceQueryDocTest.java]($code$/java/jdocs/persistence/PersistenceQueryDocTest.java) { #basic-usage }
: @@snip [PersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceQueryDocTest.java) { #basic-usage }

Journal implementers are encouraged to put this identifier in a variable known to the user, such that one can access it via
@scala[`readJournalFor[NoopJournal](NoopJournal.identifier)`]@java[`getJournalFor(NoopJournal.class, NoopJournal.identifier)`], however this is not enforced.

@@ -78,18 +78,18 @@ By default this stream should be assumed to be a "live" stream, which means that

persistence ids as they come into the system:

Scala
: @@snip [PersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #all-persistence-ids-live }
: @@snip [PersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #all-persistence-ids-live }

Java
: @@snip [PersistenceQueryDocTest.java]($code$/java/jdocs/persistence/PersistenceQueryDocTest.java) { #all-persistence-ids-live }
: @@snip [PersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceQueryDocTest.java) { #all-persistence-ids-live }

If your usage does not require a live stream, you can use the `currentPersistenceIds` query:

Scala
: @@snip [PersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #all-persistence-ids-snap }
: @@snip [PersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #all-persistence-ids-snap }

Java
: @@snip [PersistenceQueryDocTest.java]($code$/java/jdocs/persistence/PersistenceQueryDocTest.java) { #all-persistence-ids-snap }
: @@snip [PersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceQueryDocTest.java) { #all-persistence-ids-snap }

#### EventsByPersistenceIdQuery and CurrentEventsByPersistenceIdQuery

@@ -98,10 +98,10 @@ however, since it is a stream it is possible to keep it alive and watch for addi

persistent actor identified by the given `persistenceId`.

Scala
: @@snip [PersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #events-by-persistent-id }
: @@snip [PersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #events-by-persistent-id }

Java
: @@snip [PersistenceQueryDocTest.java]($code$/java/jdocs/persistence/PersistenceQueryDocTest.java) { #events-by-persistent-id }
: @@snip [PersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceQueryDocTest.java) { #events-by-persistent-id }

Most journals will have to fall back to polling in order to achieve this,
which can typically be configured with a `refresh-interval` configuration property.

@@ -121,10 +121,10 @@ Some journals may support tagging of events via an @ref:[Event Adapters](persist

how exactly this is implemented depends on the journal used. Here is an example of such a tagging event adapter:

Scala
: @@snip [LeveldbPersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/LeveldbPersistenceQueryDocSpec.scala) { #tagger }
: @@snip [LeveldbPersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/LeveldbPersistenceQueryDocSpec.scala) { #tagger }

Java
: @@snip [LeveldbPersistenceQueryDocTest.java]($code$/java/jdocs/persistence/query/LeveldbPersistenceQueryDocTest.java) { #tagger }
: @@snip [LeveldbPersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/query/LeveldbPersistenceQueryDocTest.java) { #tagger }

@@@ note

@@ -142,10 +142,10 @@ In the example below we query all events which have been tagged (we assume this

tag - for example if the journal stored the events as json it may try to find those with the field `tag` set to this value etc.).

Scala
: @@snip [PersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #events-by-tag }
: @@snip [PersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #events-by-tag }

Java
: @@snip [PersistenceQueryDocTest.java]($code$/java/jdocs/persistence/PersistenceQueryDocTest.java) { #events-by-tag }
: @@snip [PersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceQueryDocTest.java) { #events-by-tag }

As you can see, we can use all the usual stream operators available from @ref:[Streams](stream/index.md) on the resulting query stream,
including for example taking the first 10 and cancelling the stream. It is worth pointing out that the built-in `EventsByTag`

@@ -166,24 +166,24 @@ is defined as the second type parameter of the returned `Source`, which allows j

specialised query object, as demonstrated in the sample below:

Scala
: @@snip [PersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #advanced-journal-query-types }
: @@snip [PersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #advanced-journal-query-types }

Java
: @@snip [PersistenceQueryDocTest.java]($code$/java/jdocs/persistence/PersistenceQueryDocTest.java) { #advanced-journal-query-types }
: @@snip [PersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceQueryDocTest.java) { #advanced-journal-query-types }

Scala
: @@snip [PersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #advanced-journal-query-definition }
: @@snip [PersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #advanced-journal-query-definition }

Java
: @@snip [PersistenceQueryDocTest.java]($code$/java/jdocs/persistence/PersistenceQueryDocTest.java) { #advanced-journal-query-definition }
: @@snip [PersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceQueryDocTest.java) { #advanced-journal-query-definition }

Scala
: @@snip [PersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #advanced-journal-query-usage }
: @@snip [PersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #advanced-journal-query-usage }

Java
: @@snip [PersistenceQueryDocTest.java]($code$/java/jdocs/persistence/PersistenceQueryDocTest.java) { #advanced-journal-query-usage }
: @@snip [PersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceQueryDocTest.java) { #advanced-journal-query-usage }

## Performance and denormalization

@@ -215,10 +215,10 @@ If the read datastore exposes a [Reactive Streams](http://reactive-streams.org)

is as simple as using the read journal and feeding it into the database's driver interface, for example like so:

Scala
: @@snip [PersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #projection-into-different-store-rs }
: @@snip [PersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #projection-into-different-store-rs }

Java
: @@snip [PersistenceQueryDocTest.java]($code$/java/jdocs/persistence/PersistenceQueryDocTest.java) { #projection-into-different-store-rs }
: @@snip [PersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceQueryDocTest.java) { #projection-into-different-store-rs }
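A sketch of such a pipeline, assuming the driver exposes a Reactive Streams `Subscriber` (names and the tag are assumptions; `queries` and an implicit materializer as obtained earlier):

```scala
import akka.persistence.query.NoOffset
import akka.stream.scaladsl.Sink
import org.reactivestreams.Subscriber

// assumption: the database driver hands us a Subscriber to write into
val dbWriter: Subscriber[Any] = ???

queries
  .eventsByTag("invoice-paid", NoOffset)
  .map(_.event)
  .runWith(Sink.fromSubscriber(dbWriter))
```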

### Materialize view using mapAsync

@@ -229,17 +229,17 @@ In case your write logic is state-less and you need to convert the events from o

before writing into the alternative datastore, then the projection will look like this:

Scala
: @@snip [PersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #projection-into-different-store-simple-classes }
: @@snip [PersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #projection-into-different-store-simple-classes }

Java
: @@snip [PersistenceQueryDocTest.java]($code$/java/jdocs/persistence/PersistenceQueryDocTest.java) { #projection-into-different-store-simple-classes }
: @@snip [PersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceQueryDocTest.java) { #projection-into-different-store-simple-classes }

Scala
: @@snip [PersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #projection-into-different-store-simple }
: @@snip [PersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #projection-into-different-store-simple }

Java
: @@snip [PersistenceQueryDocTest.java]($code$/java/jdocs/persistence/PersistenceQueryDocTest.java) { #projection-into-different-store-simple }
: @@snip [PersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceQueryDocTest.java) { #projection-into-different-store-simple }
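A sketch of the `mapAsync` variant (the `ExampleStore` and the tag are assumptions; `queries` and an implicit materializer as obtained earlier):

```scala
import akka.Done
import akka.persistence.query.NoOffset
import akka.stream.scaladsl.Sink
import scala.concurrent.Future

// assumption: some store with an asynchronous save returning Future[Done]
trait ExampleStore { def save(event: Any): Future[Done] }
val store: ExampleStore = ???

queries
  .eventsByTag("invoice-paid", NoOffset)
  .mapAsync(parallelism = 1)(envelope => store.save(envelope.event))
  .runWith(Sink.ignore)
```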

### Resumable projections

@@ -252,17 +252,17 @@ you need to do some complex logic that would be best handled inside an Actor bef

into the other datastore:

Scala
: @@snip [PersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #projection-into-different-store-actor-run }
: @@snip [PersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #projection-into-different-store-actor-run }

Java
: @@snip [PersistenceQueryDocTest.java]($code$/java/jdocs/persistence/PersistenceQueryDocTest.java) { #projection-into-different-store-actor-run }
: @@snip [PersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceQueryDocTest.java) { #projection-into-different-store-actor-run }

Scala
: @@snip [PersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #projection-into-different-store-actor }
: @@snip [PersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #projection-into-different-store-actor }

Java
: @@snip [PersistenceQueryDocTest.java]($code$/java/jdocs/persistence/PersistenceQueryDocTest.java) { #projection-into-different-store-actor }
: @@snip [PersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceQueryDocTest.java) { #projection-into-different-store-actor }

<a id="read-journal-plugin-api"></a>
## Query plugins

@@ -295,18 +295,18 @@ As illustrated below one of the implementations can delegate to the other.

Below is a simple journal implementation:

Scala
: @@snip [PersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #my-read-journal }
: @@snip [PersistenceQueryDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #my-read-journal }

Java
: @@snip [PersistenceQueryDocTest.java]($code$/java/jdocs/persistence/PersistenceQueryDocTest.java) { #my-read-journal }
: @@snip [PersistenceQueryDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceQueryDocTest.java) { #my-read-journal }

And `eventsByTag` could, for example, be backed by an Actor like this:

Scala
: @@snip [MyEventsByTagPublisher.scala]($code$/scala/docs/persistence/query/MyEventsByTagPublisher.scala) { #events-by-tag-publisher }
: @@snip [MyEventsByTagPublisher.scala](/akka-docs/src/test/scala/docs/persistence/query/MyEventsByTagPublisher.scala) { #events-by-tag-publisher }

Java
: @@snip [MyEventsByTagJavaPublisher.java]($code$/java/jdocs/persistence/query/MyEventsByTagJavaPublisher.java) { #events-by-tag-publisher }
: @@snip [MyEventsByTagJavaPublisher.java](/akka-docs/src/test/java/jdocs/persistence/query/MyEventsByTagJavaPublisher.java) { #events-by-tag-publisher }

The `ReadJournalProvider` class must have a constructor with one of these signatures:

@@ -168,22 +168,22 @@ For more in-depth explanations on how serialization picks the serializer to use

First we start by defining our domain model class, here representing a person:

Scala
: @@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #simplest-custom-serializer-model }
: @@snip [PersistenceSchemaEvolutionDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #simplest-custom-serializer-model }

Java
: @@snip [PersistenceSchemaEvolutionDocTest.java]($code$/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #simplest-custom-serializer-model }
: @@snip [PersistenceSchemaEvolutionDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #simplest-custom-serializer-model }

Next we implement a serializer (or extend an existing one to be able to handle the new `Person` class):

Scala
: @@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #simplest-custom-serializer }
: @@snip [PersistenceSchemaEvolutionDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #simplest-custom-serializer }

Java
: @@snip [PersistenceSchemaEvolutionDocTest.java]($code$/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #simplest-custom-serializer }
: @@snip [PersistenceSchemaEvolutionDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #simplest-custom-serializer }
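As a rough sketch of the shape of such a serializer (the `Person` model and the pipe-separated encoding are illustrative assumptions, not the snippet's contents):

```scala
import java.nio.charset.StandardCharsets
import akka.serialization.SerializerWithStringManifest

final case class Person(name: String, surname: String)

class SimplestPossiblePersonSerializer extends SerializerWithStringManifest {
  private val Utf8 = StandardCharsets.UTF_8
  private val PersonManifest = classOf[Person].getName

  // unique identifier, stored alongside each serialized message
  override def identifier: Int = 1234567

  override def manifest(obj: AnyRef): String = obj.getClass.getName

  override def toBinary(obj: AnyRef): Array[Byte] = obj match {
    case p: Person => s"${p.name}|${p.surname}".getBytes(Utf8)
    case _         => throw new IllegalArgumentException(s"Unable to serialize $obj")
  }

  override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef = manifest match {
    case PersonManifest =>
      val Array(name, surname) = new String(bytes, Utf8).split("\\|")
      Person(name, surname)
    case _ => throw new IllegalArgumentException(s"Unknown manifest: $manifest")
  }
}
```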

And finally we register the serializer and bind it to handle the `docs.persistence.Person` class:

@@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #simplest-custom-serializer-config }
@@snip [PersistenceSchemaEvolutionDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #simplest-custom-serializer-config }
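The binding configuration has roughly this shape (names follow the sketch above):

```scala
import com.typesafe.config.ConfigFactory

val config = ConfigFactory.parseString("""
  akka.actor {
    serializers {
      person = "docs.persistence.SimplestPossiblePersonSerializer"
    }
    serialization-bindings {
      "docs.persistence.Person" = person
    }
  }
""")
```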

Deserialization will be performed by the same serializer which serialized the message initially
because of the `identifier` being stored together with the message.

@@ -219,16 +219,16 @@ values somehow. This is usually modeled as some kind of default value, or by rep

See below for an example of how reading an optional field from a serialized protocol buffers message might look.

Scala
: @@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #protobuf-read-optional-model }
: @@snip [PersistenceSchemaEvolutionDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #protobuf-read-optional-model }

Java
: @@snip [PersistenceSchemaEvolutionDocTest.java]($code$/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #protobuf-read-optional-model }
: @@snip [PersistenceSchemaEvolutionDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #protobuf-read-optional-model }

Next we prepare a protocol definition using the protobuf Interface Description Language, which we'll use to generate
the serializer code to be used on the Akka Serialization layer (notice that the schema approach allows us to rename
fields, as long as the numeric identifiers of the fields do not change):

@@snip [FlightAppModels.proto]($code$/../main/protobuf/FlightAppModels.proto) { #protobuf-read-optional-proto }
@@snip [FlightAppModels.proto](/akka-docs/src/test/../main/protobuf/FlightAppModels.proto) { #protobuf-read-optional-proto }

The serializer implementation uses the protobuf-generated classes to marshal the payloads.
Optional fields can be handled explicitly, or missing values detected, by calling the `has...` methods on the protobuf object,

@@ -236,10 +236,10 @@ which we do for `seatType` in order to use a `Unknown` type in case the event wa

the field to this event type:

Scala
: @@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #protobuf-read-optional }
: @@snip [PersistenceSchemaEvolutionDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #protobuf-read-optional }

Java
: @@snip [PersistenceSchemaEvolutionDocTest.java]($code$/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #protobuf-read-optional }
: @@snip [PersistenceSchemaEvolutionDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #protobuf-read-optional }

<a id="rename-field"></a>
### Rename fields

@@ -265,7 +265,7 @@ add the overhead of having to maintain the schema. When using serializers like t

This is how such a rename would look in protobuf:

@@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #protobuf-rename-proto }
@@snip [PersistenceSchemaEvolutionDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #protobuf-rename-proto }

It is important to learn about the strengths and limitations of your serializers, in order to be able to move
swiftly and refactor your models fearlessly as you go on with the project.

@@ -294,10 +294,10 @@ or using a library like @scala[[Stamina](https://github.com/scalapenos/stamina)]

The following snippet showcases how one could apply renames if working with plain JSON (using @scala[`spray.json.JsObject`]@java[a `JsObject` as an example JSON representation]):

Scala
: @@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #rename-plain-json }
: @@snip [PersistenceSchemaEvolutionDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #rename-plain-json }

Java
: @@snip [PersistenceSchemaEvolutionDocTest.java]($code$/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #rename-plain-json }
: @@snip [PersistenceSchemaEvolutionDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #rename-plain-json }

As you can see, manually handling renames induces some boilerplate onto the EventAdapter; however, much of it
you will find is common infrastructure code that can be either provided by an external library (for promotion management)

@@ -363,19 +363,19 @@ Other events (**E**) can just be passed through.

The serializer detects that the string manifest points to a removed event type and skips attempting to deserialize it:

Scala
: @@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #string-serializer-skip-deleved-event-by-manifest }
: @@snip [PersistenceSchemaEvolutionDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #string-serializer-skip-deleved-event-by-manifest }

Java
: @@snip [PersistenceSchemaEvolutionDocTest.java]($code$/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #string-serializer-skip-deleved-event-by-manifest }
: @@snip [PersistenceSchemaEvolutionDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #string-serializer-skip-deleved-event-by-manifest }

The EventAdapter we implemented is aware of `EventDeserializationSkipped` events (our "Tombstones"),
and emits an empty `EventSeq` whenever such an object is encountered:

Scala
: @@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #string-serializer-skip-deleved-event-by-manifest-adapter }
: @@snip [PersistenceSchemaEvolutionDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #string-serializer-skip-deleved-event-by-manifest-adapter }

Java
: @@snip [PersistenceSchemaEvolutionDocTest.java]($code$/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #string-serializer-skip-deleved-event-by-manifest-adapter }
: @@snip [PersistenceSchemaEvolutionDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #string-serializer-skip-deleved-event-by-manifest-adapter }
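A sketch of the adapter side of this pattern (type names assumed):

```scala
import akka.persistence.journal.{ EventAdapter, EventSeq }

// the "tombstone" emitted by the serializer instead of a deleted event type
case object EventDeserializationSkipped

class SkippedEventsAwareAdapter extends EventAdapter {
  override def manifest(event: Any): String = ""
  override def toJournal(event: Any): Any = event

  override def fromJournal(event: Any, manifest: String): EventSeq = event match {
    case EventDeserializationSkipped => EventSeq.empty // drop the tombstone
    case _                           => EventSeq.single(event)
  }
}
```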

<a id="detach-domain-from-data-model"></a>
### Detach domain model from data model

@@ -405,20 +405,20 @@ include additional data for the event (e.g. tags), for ease of later querying.

We will use the following domain and data models to showcase how the separation can be implemented by the adapter:

Scala
: @@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #detach-models }
: @@snip [PersistenceSchemaEvolutionDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #detach-models }

Java
: @@snip [PersistenceSchemaEvolutionDocTest.java]($code$/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #detach-models }
: @@snip [PersistenceSchemaEvolutionDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #detach-models }

The `EventAdapter` takes care of converting from one model to the other (in both directions),
allowing the models to be completely detached from each other, such that they can be optimised independently
as long as the mapping logic is able to convert between them:

Scala
: @@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #detach-models-adapter }
: @@snip [PersistenceSchemaEvolutionDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #detach-models-adapter }

Java
: @@snip [PersistenceSchemaEvolutionDocTest.java]($code$/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #detach-models-adapter }
: @@snip [PersistenceSchemaEvolutionDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #detach-models-adapter }

The same technique could also be used directly in the Serializer if the end result of marshalling is bytes.
Then the serializer can simply convert the bytes to the domain object by using the generated protobuf builders.

@@ -441,10 +441,10 @@ The journal plugin notices that the incoming event type is JSON (for example by

event) and stores the incoming object directly.

Scala
: @@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #detach-models-adapter-json }
: @@snip [PersistenceSchemaEvolutionDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #detach-models-adapter-json }

Java
: @@snip [PersistenceSchemaEvolutionDocTest.java]($code$/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #detach-models-adapter-json }
: @@snip [PersistenceSchemaEvolutionDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #detach-models-adapter-json }

@@@ note

@@ -500,10 +500,10 @@ and the address change is handled similarly:

Scala
: @@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #split-events-during-recovery }
: @@snip [PersistenceSchemaEvolutionDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #split-events-during-recovery }

Java
: @@snip [PersistenceSchemaEvolutionDocTest.java]($code$/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #split-events-during-recovery }
: @@snip [PersistenceSchemaEvolutionDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #split-events-during-recovery }

By returning an `EventSeq` from the event adapter, the recovered event can be converted to multiple events before
being delivered to the persistent actor.

@@ -95,10 +95,10 @@ Akka persistence supports event sourcing with the @scala[`PersistentActor` trait

is defined by implementing @scala[`receiveRecover`]@java[`createReceiveRecover`] and @scala[`receiveCommand`]@java[`createReceive`]. This is demonstrated in the following example.

Scala
: @@snip [PersistentActorExample.scala]($code$/scala/docs/persistence/PersistentActorExample.scala) { #persistent-actor-example }
: @@snip [PersistentActorExample.scala](/akka-docs/src/test/scala/docs/persistence/PersistentActorExample.scala) { #persistent-actor-example }

Java
: @@snip [PersistentActorExample.java]($code$/java/jdocs/persistence/PersistentActorExample.java) { #persistent-actor-example }
: @@snip [PersistentActorExample.java](/akka-docs/src/test/java/jdocs/persistence/PersistentActorExample.java) { #persistent-actor-example }
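Separately from the referenced example, a stripped-down sketch of the same pattern (names are illustrative):

```scala
import akka.actor.ActorLogging
import akka.persistence.PersistentActor

// A command is validated in receiveCommand, persisted as an event, and the
// event handler updates state; receiveRecover replays the same events on start.
class CounterActor extends PersistentActor with ActorLogging {
  override def persistenceId: String = "counter-1"

  private var count = 0

  override def receiveRecover: Receive = {
    case n: Int => count += n // replayed events rebuild the state
  }

  override def receiveCommand: Receive = {
    case n: Int =>
      persist(n) { evt =>
        count += evt
        log.info("count is now {}", count)
      }
  }
}
```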
|
||||
|
||||
The example defines two data types, `Cmd` and `Evt` to represent commands and events, respectively. The
|
||||
`state` of the `ExamplePersistentActor` is a list of persisted event data contained in `ExampleState`.
@@ -151,10 +151,10 @@ A persistent actor must have an identifier that doesn't change across different

The identifier must be defined with the `persistenceId` method.

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #persistence-id-override }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #persistence-id-override }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #persistence-id-override }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #persistence-id-override }

@@@ note

@@ -199,10 +199,10 @@ This can be useful if snapshot serialization format has changed in an incompatib

It should typically not be used when events have been deleted.

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #recovery-no-snap }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #recovery-no-snap }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #recovery-no-snap }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #recovery-no-snap }

Another possible recovery customization, which can be useful for debugging, is setting an
upper bound on the replay, causing the actor to be replayed only up to a certain point "in the past" (instead of being replayed to its most up-to-date state). Note that after that it is a bad idea to persist new

@@ -210,28 +210,28 @@ events because a later recovery will probably be confused by the new events that

events that were previously skipped.

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #recovery-custom }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #recovery-custom }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #recovery-custom }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #recovery-custom }

Recovery can be disabled by returning `Recovery.none()` in the `recovery` method of a `PersistentActor`:

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #recovery-disabled }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #recovery-disabled }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #recovery-disabled }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #recovery-disabled }
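A compact sketch of these recovery customizations, with placeholder identifiers and handlers (not taken from the snippets above):

```scala
import akka.persistence.{ PersistentActor, Recovery, SnapshotSelectionCriteria }

class RecoverySample extends PersistentActor {
  override def persistenceId = "recovery-sample" // hypothetical id

  // Skip snapshots and replay events only, up to sequence number 457:
  override def recovery: Recovery =
    Recovery(fromSnapshot = SnapshotSelectionCriteria.None, toSequenceNr = 457L)

  // Or disable recovery entirely:
  // override def recovery: Recovery = Recovery.none

  override def receiveRecover: Receive = { case _ => () /* apply events */ }
  override def receiveCommand: Receive = { case _ => () }
}
```
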
#### Recovery status

A persistent actor can query its own recovery status via the methods

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #recovery-status }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #recovery-status }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #recovery-status }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #recovery-status }

Sometimes there is a need for performing additional initialization when the
recovery has completed, before processing any other message sent to the persistent actor.

@@ -239,10 +239,10 @@ The persistent actor will receive a special `RecoveryCompleted` message right af

and before any other received messages.

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #recovery-completed }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #recovery-completed }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #recovery-completed }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #recovery-completed }
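A minimal sketch of this pattern, assuming a hypothetical `performInit()` helper:

```scala
import akka.persistence.{ PersistentActor, RecoveryCompleted }

class InitAfterRecovery extends PersistentActor {
  override def persistenceId = "init-sample" // hypothetical id

  override def receiveRecover: Receive = {
    case RecoveryCompleted =>
      // Safe point for one-time initialization: all events (and the latest
      // snapshot, if any) have been applied by now.
      performInit()
    case _ => () // apply replayed events to state here
  }

  override def receiveCommand: Receive = { case _ => () }

  private def performInit(): Unit = () // placeholder
}
```
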
The actor will always receive a `RecoveryCompleted` message, even if there are no events
in the journal and the snapshot store is empty, or if it's a new persistent actor with a previously

@@ -325,10 +325,10 @@ In the below example, the event callbacks may be called "at any time", even afte

The ordering between events is still guaranteed ("evt-b-1" will be sent after "evt-a-2", which will be sent after "evt-a-1" etc.).

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #persist-async }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #persist-async }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #persist-async }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #persist-async }
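A sketch of the `persistAsync` pattern described above (identifiers are placeholders):

```scala
import akka.persistence.PersistentActor

// With persistAsync the command stream is not stashed while events are
// being written, trading strict command/event interleaving for throughput.
class HighThroughputActor extends PersistentActor {
  override def persistenceId = "persist-async-sample" // hypothetical id

  override def receiveRecover: Receive = { case _ => () }

  override def receiveCommand: Receive = {
    case c: String =>
      sender() ! c // reply immediately, before persistence completes
      persistAsync(s"evt-$c-1") { e => sender() ! e }
      persistAsync(s"evt-$c-2") { e => sender() ! e }
  }
}
```
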
@@@ note

@@ -357,10 +357,10 @@ Using those methods is very similar to the persist family of methods, yet they d

It will be kept in memory and used when invoking the handler.

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #defer }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #defer }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #defer }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #defer }

Notice that the `sender()` is **safe** to access in the handler callback, and will be pointing to the original sender
of the command for which this `defer` or `deferAsync` handler was called.
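As a sketch (placeholder names, mirroring the behavior just described), a `deferAsync` handler runs after the previously issued persist handlers without writing anything to the journal:

```scala
import akka.persistence.PersistentActor

class DeferringActor extends PersistentActor {
  override def persistenceId = "defer-sample" // hypothetical id

  override def receiveRecover: Receive = { case _ => () }

  override def receiveCommand: Receive = {
    case c: String =>
      persistAsync(s"evt-$c-1") { e => sender() ! e }
      persistAsync(s"evt-$c-2") { e => sender() ! e }
      // Runs after both persist handlers above; no journal write happens
      // for it, and sender() is still the original sender of the command.
      deferAsync(s"evt-$c-3") { e => sender() ! e }
  }
}
```
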
@@ -368,18 +368,18 @@ of the command for which this `defer` or `deferAsync` handler was called.

The calling side will get the responses in this (guaranteed) order:

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #defer-caller }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #defer-caller }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #defer-caller }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #defer-caller }

You can also call `defer` or `deferAsync` with `persist`.

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #defer-with-persist }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #defer-with-persist }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #defer-with-persist }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #defer-with-persist }

@@@ warning

@@ -400,18 +400,18 @@ those situations, as well as their implication on the stashing behavior (that `p

example two persist calls are issued, and each of them issues another persist inside its callback:

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #nested-persist-persist }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #nested-persist-persist }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #nested-persist-persist }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #nested-persist-persist }

When sending two commands to this `PersistentActor`, the persist handlers will be executed in the following order:

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #nested-persist-persist-caller }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #nested-persist-persist-caller }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #nested-persist-persist-caller }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #nested-persist-persist-caller }

First the "outer layer" of persist calls is issued and their callbacks are applied. After these have successfully completed,
the inner callbacks will be invoked (once the events they are persisting have been confirmed to be persisted by the journal).
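A bare-bones sketch of the nesting shape being discussed (names are placeholders, not the snippet sources):

```scala
import akka.persistence.PersistentActor

class NestedPersistSample extends PersistentActor {
  override def persistenceId = "nested-persist-sample" // hypothetical id

  override def receiveRecover: Receive = { case _ => () }

  override def receiveCommand: Receive = {
    case c: String =>
      // Two "outer" persists; each issues an "inner" persist in its callback.
      persist(s"$c-outer-1") { outer =>
        sender() ! outer
        persist(s"$c-inner-1") { inner => sender() ! inner }
      }
      persist(s"$c-outer-2") { outer =>
        sender() ! outer
        persist(s"$c-inner-2") { inner => sender() ! inner }
      }
  }
}
```
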
@@ -422,18 +422,18 @@ is extended until all nested `persist` callbacks have been handled.

It is also possible to nest `persistAsync` calls, using the same pattern:

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #nested-persistAsync-persistAsync }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #nested-persistAsync-persistAsync }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #nested-persistAsync-persistAsync }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #nested-persistAsync-persistAsync }

In this case no stashing is happening, yet events are still persisted and callbacks are executed in the expected order:

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #nested-persistAsync-persistAsync-caller }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #nested-persistAsync-persistAsync-caller }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #nested-persistAsync-persistAsync-caller }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #nested-persistAsync-persistAsync-caller }

While it is possible to nest mixed `persist` and `persistAsync` calls while keeping their respective semantics,
it is not a recommended practice, as it may lead to overly complex nesting.

@@ -461,10 +461,10 @@ actor and after a back-off timeout start it again. The `akka.pattern.BackoffSupe

is provided to support such restarts.

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #backoff }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #backoff }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #backoff }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #backoff }
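A sketch of wiring up such a back-off supervisor (child name and durations are arbitrary example values):

```scala
import scala.concurrent.duration._
import akka.actor.{ ActorSystem, Props }
import akka.pattern.{ Backoff, BackoffSupervisor }

object BackoffSample {
  // Restart the child with exponential back-off after it *stops*
  // (which is what a persistence failure causes).
  def supervisorProps(childProps: Props): Props =
    BackoffSupervisor.props(
      Backoff.onStop(
        childProps,
        childName = "myPersistentActor", // placeholder
        minBackoff = 3.seconds,
        maxBackoff = 30.seconds,
        randomFactor = 0.2)) // jitter avoids synchronized restart storms

  def start(system: ActorSystem, childProps: Props): Unit =
    system.actorOf(supervisorProps(childProps), "supervisor")
}
```
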
If persistence of an event is rejected before it is stored, e.g. due to a serialization error,
`onPersistRejected` will be invoked (logging a warning by default), and the actor continues with

@@ -580,24 +580,24 @@ The example below highlights how messages arrive in the Actor's mailbox and how

mechanism when `persist()` is used. Notice the early stop behavior that occurs when `PoisonPill` is used:

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #safe-shutdown }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #safe-shutdown }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #safe-shutdown }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #safe-shutdown }

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #safe-shutdown-example-bad }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #safe-shutdown-example-bad }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #safe-shutdown-example-bad }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #safe-shutdown-example-bad }

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #safe-shutdown-example-good }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #safe-shutdown-example-good }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #safe-shutdown-example-good }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #safe-shutdown-example-good }
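The essence of the "good" variant, as a sketch with a hypothetical application-level stop message:

```scala
import akka.persistence.PersistentActor

case object Shutdown // hypothetical explicit stop command

class SafeShutdownActor extends PersistentActor {
  override def persistenceId = "safe-shutdown-sample" // hypothetical id

  override def receiveRecover: Receive = { case _ => () }

  override def receiveCommand: Receive = {
    case Shutdown =>
      // Handled like any other command, so it is only processed after all
      // previously received commands and their persist handlers - unlike
      // PoisonPill, which can stop the actor while handlers are pending.
      context.stop(self)
    case c: String =>
      persist(s"handled-$c") { e => sender() ! e }
  }
}
```
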
<a id="replay-filter"></a>
### Replay Filter

@@ -640,23 +640,23 @@ Persistent actors can save snapshots of internal state by calling the `saveSnap

succeeds, the persistent actor receives a `SaveSnapshotSuccess` message, otherwise a `SaveSnapshotFailure` message

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #save-snapshot }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #save-snapshot }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #save-snapshot }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #save-snapshot }

where `metadata` is of type `SnapshotMetadata`:

@@snip [SnapshotProtocol.scala]($akka$/akka-persistence/src/main/scala/akka/persistence/SnapshotProtocol.scala) { #snapshot-metadata }
@@snip [SnapshotProtocol.scala](/akka-persistence/src/main/scala/akka/persistence/SnapshotProtocol.scala) { #snapshot-metadata }

During recovery, the persistent actor is offered a previously saved snapshot via a `SnapshotOffer` message from
which it can initialize internal state.

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #snapshot-offer }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #snapshot-offer }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #snapshot-offer }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #snapshot-offer }

The replayed messages that follow the `SnapshotOffer` message, if any, are younger than the offered snapshot.
They finally recover the persistent actor to its current (i.e. latest) state.
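Both sides of the snapshot round-trip, sketched with placeholder state and identifiers:

```scala
import akka.persistence.{ PersistentActor, SaveSnapshotFailure, SaveSnapshotSuccess, SnapshotOffer }

class SnapshottingActor extends PersistentActor {
  override def persistenceId = "snapshot-sample" // hypothetical id

  var state: List[String] = Nil

  override def receiveRecover: Receive = {
    case SnapshotOffer(_, snapshot: List[String @unchecked]) =>
      state = snapshot // start from the snapshot, then replay younger events
    case evt: String =>
      state = evt :: state
  }

  override def receiveCommand: Receive = {
    case "snap"                        => saveSnapshot(state)
    case SaveSnapshotSuccess(_)        => () // e.g. delete older snapshots here
    case SaveSnapshotFailure(_, cause) => () // log and possibly retry later
    case evt: String =>
      persist(evt) { e => state = e :: state }
  }
}
```
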
@@ -665,10 +665,10 @@ In general, a persistent actor is only offered a snapshot if that persistent act

and at least one of these snapshots matches the `SnapshotSelectionCriteria` that can be specified for recovery.

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #snapshot-criteria }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #snapshot-criteria }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #snapshot-criteria }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #snapshot-criteria }

If not specified, they default to @scala[`SnapshotSelectionCriteria.Latest`]@java[`SnapshotSelectionCriteria.latest()`] which selects the latest (= youngest) snapshot.
To disable snapshot-based recovery, applications should use @scala[`SnapshotSelectionCriteria.None`]@java[`SnapshotSelectionCriteria.none()`]. A recovery where no

@@ -786,10 +786,10 @@ of the message, the destination actor will send the same `deliveryId` wrapped i

The sender will then use it to call the `confirmDelivery` method to complete the delivery routine.

Scala
: @@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #at-least-once-example }
: @@snip [PersistenceDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceDocSpec.scala) { #at-least-once-example }

Java
: @@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #at-least-once-example }
: @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #at-least-once-example }
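The `deliver`/`confirmDelivery` round-trip, sketched with a hypothetical protocol:

```scala
import akka.actor.ActorSelection
import akka.persistence.{ AtLeastOnceDelivery, PersistentActor }

case class Msg(deliveryId: Long, payload: String)   // sent to the destination
case class Confirm(deliveryId: Long)                // acknowledgement

sealed trait Evt
case class MsgSent(payload: String) extends Evt
case class MsgConfirmed(deliveryId: Long) extends Evt

class MyDeliverer(destination: ActorSelection)
    extends PersistentActor with AtLeastOnceDelivery {

  override def persistenceId = "at-least-once-sample" // hypothetical id

  override def receiveCommand: Receive = {
    case payload: String       => persist(MsgSent(payload))(updateState)
    case Confirm(deliveryId)   => persist(MsgConfirmed(deliveryId))(updateState)
  }

  override def receiveRecover: Receive = {
    case evt: Evt => updateState(evt) // re-establishes unconfirmed deliveries
  }

  def updateState(evt: Evt): Unit = evt match {
    case MsgSent(payload)         => deliver(destination)(id => Msg(id, payload))
    case MsgConfirmed(deliveryId) => confirmDelivery(deliveryId)
  }
}
```
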
The `deliveryId` generated by the persistence module is a strictly monotonically increasing sequence number
without gaps. The same sequence is used for all destinations of the actor, i.e. when sending to multiple

@@ -864,14 +864,14 @@ json instead of serializing the object to its binary representation.

Implementing an EventAdapter is rather straightforward:

Scala
: @@snip [PersistenceEventAdapterDocSpec.scala]($code$/scala/docs/persistence/PersistenceEventAdapterDocSpec.scala) { #identity-event-adapter }
: @@snip [PersistenceEventAdapterDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceEventAdapterDocSpec.scala) { #identity-event-adapter }

Java
: @@snip [PersistenceEventAdapterDocTest.java]($code$/java/jdocs/persistence/PersistenceEventAdapterDocTest.java) { #identity-event-adapter }
: @@snip [PersistenceEventAdapterDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceEventAdapterDocTest.java) { #identity-event-adapter }
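A sketch of an identity adapter together with the configuration that binds it; plugin path, class names and binding key below are placeholders, not the snippet sources:

```scala
import com.typesafe.config.ConfigFactory
import akka.persistence.journal.{ EventAdapter, EventSeq }

class MyEventAdapter extends EventAdapter {
  override def manifest(event: Any): String = ""
  override def toJournal(event: Any): Any = event
  override def fromJournal(event: Any, manifest: String): EventSeq =
    EventSeq.single(event)
}

object AdapterConfigSample {
  // The binding lives under the journal plugin's own config path:
  val config = ConfigFactory.parseString(
    """
    akka.persistence.journal.leveldb {
      event-adapters {
        my-adapter = "docs.MyEventAdapter"   # fully qualified class name
      }
      event-adapter-bindings {
        "docs.MyDomainEvent" = my-adapter    # applied to this event type
      }
    }
    """)
}
```
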
Then in order for it to be used on events coming to and from the journal you must bind it using the below configuration syntax:

@@snip [PersistenceEventAdapterDocSpec.scala]($code$/scala/docs/persistence/PersistenceEventAdapterDocSpec.scala) { #event-adapters-config }
@@snip [PersistenceEventAdapterDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceEventAdapterDocSpec.scala) { #event-adapters-config }

It is possible to bind multiple adapters to one class *for recovery*, in which case the `fromJournal` methods of all
bound adapters will be applied to a given matching event (in order of definition in the configuration). Since each adapter may

@@ -913,10 +913,10 @@ Applications can provide their own plugins by implementing a plugin API and acti

Plugin development requires the following imports:

Scala
: @@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #plugin-imports }
: @@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #plugin-imports }

Java
: @@snip [LambdaPersistencePluginDocTest.java]($code$/java/jdocs/persistence/LambdaPersistencePluginDocTest.java) { #plugin-imports }
: @@snip [LambdaPersistencePluginDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistencePluginDocTest.java) { #plugin-imports }

### Eager initialization of persistence plugin

@@ -958,7 +958,7 @@ akka {

The LevelDB journal plugin config entry is `akka.persistence.journal.leveldb`. It writes messages to a local LevelDB
instance. Enable this plugin by defining the config property:

@@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #leveldb-plugin-config }
@@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #leveldb-plugin-config }

LevelDB based plugins will also require the following additional dependency declaration:

@@ -971,7 +971,7 @@ LevelDB based plugins will also require the following additional dependency decl

The default location of LevelDB files is a directory named `journal` in the current working
directory. This location can be changed by configuration where the specified path can be relative or absolute:

@@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #journal-config }
@@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #journal-config }

With this plugin, each actor system runs its own private LevelDB instance.

@@ -980,7 +980,7 @@ a "tombstone" for each deleted message instead. In the case of heavy journal usa

deletes, this may be an issue as users may find themselves dealing with continuously increasing journal sizes. To
this end, LevelDB offers a special journal compaction function that is exposed via the following configuration:

@@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #compaction-intervals-config }
@@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #compaction-intervals-config }

<a id="shared-leveldb-journal"></a>
### Shared LevelDB journal

@@ -1005,29 +1005,29 @@ This plugin has been supplanted by [Persistence Plugin Proxy](#persistence-plugi

A shared LevelDB instance is started by instantiating the `SharedLeveldbStore` actor.

Scala
: @@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #shared-store-creation }
: @@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #shared-store-creation }

Java
: @@snip [LambdaPersistencePluginDocTest.java]($code$/java/jdocs/persistence/LambdaPersistencePluginDocTest.java) { #shared-store-creation }
: @@snip [LambdaPersistencePluginDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistencePluginDocTest.java) { #shared-store-creation }

By default, the shared instance writes journaled messages to a local directory named `journal` in the current
working directory. The storage location can be changed by configuration:

@@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #shared-store-config }
@@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #shared-store-config }

Actor systems that use a shared LevelDB store must activate the `akka.persistence.journal.leveldb-shared`
plugin.

@@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #shared-journal-config }
@@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #shared-journal-config }

This plugin must be initialized by injecting the (remote) `SharedLeveldbStore` actor reference. Injection is
done by calling the `SharedLeveldbJournal.setStore` method with the actor reference as argument.

Scala
: @@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #shared-store-usage }
: @@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #shared-store-usage }

Java
: @@snip [LambdaPersistencePluginDocTest.java]($code$/java/jdocs/persistence/LambdaPersistencePluginDocTest.java) { #shared-store-usage }
: @@snip [LambdaPersistencePluginDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistencePluginDocTest.java) { #shared-store-usage }

Internal journal commands (sent by persistent actors) are buffered until injection completes. Injection is idempotent,
i.e. only the first injection is used.
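A sketch of the injection step, locating a possibly remote store via `Identify` (the store path is a placeholder):

```scala
import akka.actor.{ Actor, ActorIdentity, Identify }
import akka.persistence.journal.leveldb.SharedLeveldbJournal

class SharedStoreUsage extends Actor {
  override def preStart(): Unit =
    context.actorSelection(
      "akka.tcp://example@127.0.0.1:2552/user/store") ! Identify(1)

  def receive = {
    case ActorIdentity(1, Some(store)) =>
      // Buffered journal commands are released once the store is injected.
      SharedLeveldbJournal.setStore(store, context.system)
    case ActorIdentity(1, None) =>
      () // store not (yet) available; retry or escalate
  }
}
```
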
@@ -1038,12 +1038,12 @@ i.e. only the first injection is used.

The local snapshot store plugin config entry is `akka.persistence.snapshot-store.local`. It writes snapshot files to
the local filesystem. Enable this plugin by defining the config property:

@@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #leveldb-snapshot-plugin-config }
@@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #leveldb-snapshot-plugin-config }

The default storage location is a directory named `snapshots` in the current working
directory. This can be changed by configuration where the specified path can be relative or absolute:

@@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #snapshot-config }
@@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #snapshot-config }

Note that it is not mandatory to specify a snapshot store plugin. If you don't use snapshots
you don't have to configure it.

@@ -1097,7 +1097,7 @@ Serialization of snapshots and payloads of `Persistent` messages is configurable

it must add

@@snip [PersistenceSerializerDocSpec.scala]($code$/scala/docs/persistence/PersistenceSerializerDocSpec.scala) { #custom-serializer-config }
@@snip [PersistenceSerializerDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceSerializerDocSpec.scala) { #custom-serializer-config }

to the application configuration. If not specified, a default serializer is used.

@@ -1107,11 +1107,11 @@ For more advanced schema evolution techniques refer to the @ref:[Persistence - S

When running tests with LevelDB default settings in `sbt`, make sure to set `fork := true` in your sbt project. Otherwise, you'll see an `UnsatisfiedLinkError`. Alternatively, you can switch to a LevelDB Java port by setting

@@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #native-config }
@@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #native-config }

or

@@snip [PersistencePluginDocSpec.scala]($code$/scala/docs/persistence/PersistencePluginDocSpec.scala) { #shared-store-native-config }
@@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #shared-store-native-config }

in your Akka configuration. The LevelDB Java port is for testing purposes only.

@@ -1144,29 +1144,29 @@ to the @ref:[reference configuration](general/configuration.md#config-akka-persi

By default, a persistent actor will use the "default" journal and snapshot store plugins
configured in the following sections of the `reference.conf` configuration resource:

@@snip [PersistenceMultiDocSpec.scala]($code$/scala/docs/persistence/PersistenceMultiDocSpec.scala) { #default-config }
@@snip [PersistenceMultiDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceMultiDocSpec.scala) { #default-config }

Note that in this case the actor overrides only the `persistenceId` method:

Scala
: @@snip [PersistenceMultiDocSpec.scala]($code$/scala/docs/persistence/PersistenceMultiDocSpec.scala) { #default-plugins }
: @@snip [PersistenceMultiDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceMultiDocSpec.scala) { #default-plugins }

Java
: @@snip [PersistenceMultiDocTest.java]($code$/java/jdocs/persistence/PersistenceMultiDocTest.java) { #default-plugins }
: @@snip [PersistenceMultiDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceMultiDocTest.java) { #default-plugins }

When the persistent actor overrides the `journalPluginId` and `snapshotPluginId` methods,
the actor will be serviced by these specific persistence plugins instead of the defaults:

Scala
: @@snip [PersistenceMultiDocSpec.scala]($code$/scala/docs/persistence/PersistenceMultiDocSpec.scala) { #override-plugins }
: @@snip [PersistenceMultiDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceMultiDocSpec.scala) { #override-plugins }

Java
: @@snip [PersistenceMultiDocTest.java]($code$/java/jdocs/persistence/PersistenceMultiDocTest.java) { #override-plugins }
: @@snip [PersistenceMultiDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceMultiDocTest.java) { #override-plugins }

Note that `journalPluginId` and `snapshotPluginId` must refer to properly configured `reference.conf`
plugin entries with a standard `class` property as well as settings which are specific for those plugins, i.e.:

@@snip [PersistenceMultiDocSpec.scala]($code$/scala/docs/persistence/PersistenceMultiDocSpec.scala) { #override-config }
@@snip [PersistenceMultiDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceMultiDocSpec.scala) { #override-config }
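A sketch of such an override; the plugin ids below are placeholders and must match real, fully configured entries:

```scala
import akka.persistence.PersistentActor

class ActorWithOwnPlugins extends PersistentActor {
  override def persistenceId = "plugin-override-sample" // hypothetical id

  // Each id must point at a config path that has a `class` property
  // plus the plugin's own settings.
  override def journalPluginId  = "my-app.custom-journal"        // placeholder
  override def snapshotPluginId = "my-app.custom-snapshot-store" // placeholder

  override def receiveRecover: Receive = { case _ => () }
  override def receiveCommand: Receive = { case _ => () }
}
```
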
## Give persistence plugin configurations at runtime

@@ -1177,10 +1177,10 @@ the actor will use the declared `Config` objects with a fallback on the default

It allows a dynamic configuration of the journal and the snapshot store at runtime:

Scala
: @@snip [PersistenceMultiDocSpec.scala]($code$/scala/docs/persistence/PersistenceMultiDocSpec.scala) { #runtime-config }
: @@snip [PersistenceMultiDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceMultiDocSpec.scala) { #runtime-config }

Java
: @@snip [PersistenceMultiDocTest.java]($code$/java/jdocs/persistence/PersistenceMultiDocTest.java) { #runtime-config }
: @@snip [PersistenceMultiDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceMultiDocTest.java) { #runtime-config }

## See also

@@ -260,10 +260,10 @@ which in this sample corresponds to `sampleActorSystem@127.0.0.1:2553`.

Once you have configured the properties above you would do the following in code:

Scala
: @@snip [RemoteDeploymentDocSpec.scala]($code$/scala/docs/remoting/RemoteDeploymentDocSpec.scala) { #sample-actor }
: @@snip [RemoteDeploymentDocSpec.scala](/akka-docs/src/test/scala/docs/remoting/RemoteDeploymentDocSpec.scala) { #sample-actor }

Java
: @@snip [RemoteDeploymentDocTest.java]($code$/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #sample-actor }
: @@snip [RemoteDeploymentDocTest.java](/akka-docs/src/test/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #sample-actor }

The actor class `SampleActor` has to be available to the runtimes using it, i.e. the classloader of the
actor systems has to have a JAR containing the class.

@@ -300,26 +300,26 @@ precedence.

With these imports:

Scala
: @@snip [RemoteDeploymentDocSpec.scala]($code$/scala/docs/remoting/RemoteDeploymentDocSpec.scala) { #import }
: @@snip [RemoteDeploymentDocSpec.scala](/akka-docs/src/test/scala/docs/remoting/RemoteDeploymentDocSpec.scala) { #import }

Java
: @@snip [RemoteDeploymentDocTest.java]($code$/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #import }
: @@snip [RemoteDeploymentDocTest.java](/akka-docs/src/test/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #import }

and a remote address like this:

Scala
: @@snip [RemoteDeploymentDocSpec.scala]($code$/scala/docs/remoting/RemoteDeploymentDocSpec.scala) { #make-address-artery }
: @@snip [RemoteDeploymentDocSpec.scala](/akka-docs/src/test/scala/docs/remoting/RemoteDeploymentDocSpec.scala) { #make-address-artery }

Java
: @@snip [RemoteDeploymentDocTest.java]($code$/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #make-address-artery }
: @@snip [RemoteDeploymentDocTest.java](/akka-docs/src/test/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #make-address-artery }

you can advise the system to create a child on that remote node like so:

Scala
: @@snip [RemoteDeploymentDocSpec.scala]($code$/scala/docs/remoting/RemoteDeploymentDocSpec.scala) { #deploy }
: @@snip [RemoteDeploymentDocSpec.scala](/akka-docs/src/test/scala/docs/remoting/RemoteDeploymentDocSpec.scala) { #deploy }

Java
: @@snip [RemoteDeploymentDocTest.java]($code$/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #deploy }
: @@snip [RemoteDeploymentDocTest.java](/akka-docs/src/test/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #deploy }
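The whole flow in one sketch (system name, address and actor class are placeholders):

```scala
import akka.actor.{ Actor, ActorSystem, AddressFromURIString, Deploy, Props }
import akka.remote.RemoteScope

class SampleActor extends Actor {
  def receive = { case msg => println(s"received $msg on ${self.path}") }
}

object RemoteDeploySample {
  def deploy(system: ActorSystem): Unit = {
    // Artery-style address of the remote node (placeholder values):
    val address = AddressFromURIString("akka://sampleActorSystem@127.0.0.1:2553")

    // The child is created on the remote node, its ActorRef is local:
    val ref = system.actorOf(
      Props[SampleActor].withDeploy(Deploy(scope = RemoteScope(address))),
      "sampleActor")

    ref ! "hello" // processed remotely
  }
}
```
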
### Remote deployment whitelist

@@ -334,7 +334,7 @@ The list of allowed classes has to be configured on the "remote" system, in othe

others will be attempting to remote deploy Actors. That system, locally, knows best which Actors it should or
should not allow others to remote deploy onto it. The full settings section may for example look like this:

@@snip [RemoteDeploymentWhitelistSpec.scala]($akka$/akka-remote/src/test/scala/akka/remote/RemoteDeploymentWhitelistSpec.scala) { #whitelist-config }
@@snip [RemoteDeploymentWhitelistSpec.scala](/akka-remote/src/test/scala/akka/remote/RemoteDeploymentWhitelistSpec.scala) { #whitelist-config }

Actor classes not included in the whitelist will not be allowed to be remote deployed onto this system.

@@ -662,10 +662,10 @@ remained the same, we recommend reading the @ref:[Serialization](serialization.m

Implementing an `akka.serialization.ByteBufferSerializer` works the same way as any other serializer,

Scala
: @@snip [Serializer.scala]($akka$/akka-actor/src/main/scala/akka/serialization/Serializer.scala) { #ByteBufferSerializer }
: @@snip [Serializer.scala](/akka-actor/src/main/scala/akka/serialization/Serializer.scala) { #ByteBufferSerializer }

Java
: @@snip [ByteBufferSerializerDocTest.java]($code$/java/jdocs/actor/ByteBufferSerializerDocTest.java) { #ByteBufferSerializer-interface }
: @@snip [ByteBufferSerializerDocTest.java](/akka-docs/src/test/java/jdocs/actor/ByteBufferSerializerDocTest.java) { #ByteBufferSerializer-interface }

Implementing a serializer for Artery is therefore as simple as implementing this interface, and binding the serializer
as usual (which is explained in @ref:[Serialization](serialization.md)).
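A sketch of such a serializer for a single `String` message type; the identifier and manifest values are arbitrary placeholders:

```scala
import java.nio.ByteBuffer
import java.nio.charset.StandardCharsets
import akka.serialization.{ ByteBufferSerializer, SerializerWithStringManifest }

class SampleByteBufferSerializer
    extends SerializerWithStringManifest with ByteBufferSerializer {

  override def identifier: Int = 1337 // placeholder, must be unique
  override def manifest(o: AnyRef): String = "sample"

  // ByteBuffer-based API used by Artery (no intermediate arrays):
  override def toBinary(o: AnyRef, buf: ByteBuffer): Unit =
    buf.put(o.asInstanceOf[String].getBytes(StandardCharsets.UTF_8))

  override def fromBinary(buf: ByteBuffer, manifest: String): AnyRef = {
    val bytes = new Array[Byte](buf.remaining())
    buf.get(bytes)
    new String(bytes, StandardCharsets.UTF_8)
  }

  // Array-based API (used outside Artery), delegating where possible:
  override def toBinary(o: AnyRef): Array[Byte] =
    o.asInstanceOf[String].getBytes(StandardCharsets.UTF_8)

  override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
    fromBinary(ByteBuffer.wrap(bytes), manifest)
}
```
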
@@ -677,10 +677,10 @@ The array based methods will be used when `ByteBuffer` is not used, e.g. in Akka

Note that the array based methods can be implemented by delegation like this:

Scala
: @@snip [ByteBufferSerializerDocSpec.scala]($code$/scala/docs/actor/ByteBufferSerializerDocSpec.scala) { #bytebufserializer-with-manifest }
: @@snip [ByteBufferSerializerDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ByteBufferSerializerDocSpec.scala) { #bytebufserializer-with-manifest }

Java
: @@snip [ByteBufferSerializerDocTest.java]($code$/java/jdocs/actor/ByteBufferSerializerDocTest.java) { #bytebufserializer-with-manifest }
: @@snip [ByteBufferSerializerDocTest.java](/akka-docs/src/test/java/jdocs/actor/ByteBufferSerializerDocTest.java) { #bytebufserializer-with-manifest }

<a id="disable-java-serializer"></a>
### Disabling the Java Serializer

@@ -693,14 +693,14 @@ It is absolutely feasible to combine remoting with @ref:[Routing](routing.md).

A pool of remote deployed routees can be configured as:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-remote-round-robin-pool-artery }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-remote-round-robin-pool-artery }

This configuration setting will clone the actor defined in the `Props` of the `remotePool` 10
times and deploy it evenly distributed across the two given target nodes.

A group of remote actors can be configured as:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-remote-round-robin-group-artery }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-remote-round-robin-group-artery }

This configuration setting will send messages to the defined remote actor paths.
It requires that you create the destination actors on the remote nodes with matching paths.

@@ -909,7 +909,7 @@ There are lots of configuration properties that are related to remoting in Akka.

Setting properties like the listening IP and port number programmatically is
best done by using something like the following:

@@snip [RemoteDeploymentDocTest.java]($code$/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #programmatic-artery }
@@snip [RemoteDeploymentDocTest.java](/akka-docs/src/test/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #programmatic-artery }

@@@

@@ -164,10 +164,10 @@ which in this sample corresponds to `sampleActorSystem@127.0.0.1:2553`.

Once you have configured the properties above you would do the following in code:

Scala
: @@snip [RemoteDeploymentDocSpec.scala]($code$/scala/docs/remoting/RemoteDeploymentDocSpec.scala) { #sample-actor }
: @@snip [RemoteDeploymentDocSpec.scala](/akka-docs/src/test/scala/docs/remoting/RemoteDeploymentDocSpec.scala) { #sample-actor }

Java
: @@snip [RemoteDeploymentDocTest.java]($code$/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #sample-actor }
: @@snip [RemoteDeploymentDocTest.java](/akka-docs/src/test/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #sample-actor }

The actor class `SampleActor` has to be available to the runtimes using it, i.e. the classloader of the
actor systems has to have a JAR containing the class.

@@ -209,26 +209,26 @@ precedence.

With these imports:

Scala
: @@snip [RemoteDeploymentDocSpec.scala]($code$/scala/docs/remoting/RemoteDeploymentDocSpec.scala) { #import }
: @@snip [RemoteDeploymentDocSpec.scala](/akka-docs/src/test/scala/docs/remoting/RemoteDeploymentDocSpec.scala) { #import }

Java
: @@snip [RemoteDeploymentDocTest.java]($code$/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #import }
: @@snip [RemoteDeploymentDocTest.java](/akka-docs/src/test/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #import }

and a remote address like this:

Scala
: @@snip [RemoteDeploymentDocSpec.scala]($code$/scala/docs/remoting/RemoteDeploymentDocSpec.scala) { #make-address }
: @@snip [RemoteDeploymentDocSpec.scala](/akka-docs/src/test/scala/docs/remoting/RemoteDeploymentDocSpec.scala) { #make-address }

Java
: @@snip [RemoteDeploymentDocTest.java]($code$/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #make-address }
: @@snip [RemoteDeploymentDocTest.java](/akka-docs/src/test/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #make-address }

you can advise the system to create a child on that remote node like so:

Scala
: @@snip [RemoteDeploymentDocSpec.scala]($code$/scala/docs/remoting/RemoteDeploymentDocSpec.scala) { #deploy }
: @@snip [RemoteDeploymentDocSpec.scala](/akka-docs/src/test/scala/docs/remoting/RemoteDeploymentDocSpec.scala) { #deploy }

Java
: @@snip [RemoteDeploymentDocTest.java]($code$/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #deploy }
: @@snip [RemoteDeploymentDocTest.java](/akka-docs/src/test/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #deploy }

<a id="remote-deployment-whitelist"></a>
### Remote deployment whitelist

@@ -244,7 +244,7 @@ The list of allowed classes has to be configured on the "remote" system, in othe

others will be attempting to remote deploy Actors. That system, locally, knows best which Actors it should or
should not allow others to remote deploy onto it. The full settings section may for example look like this:

@@snip [RemoteDeploymentWhitelistSpec.scala]($akka$/akka-remote/src/test/scala/akka/remote/RemoteDeploymentWhitelistSpec.scala) { #whitelist-config }
@@snip [RemoteDeploymentWhitelistSpec.scala](/akka-remote/src/test/scala/akka/remote/RemoteDeploymentWhitelistSpec.scala) { #whitelist-config }

Actor classes not included in the whitelist will not be allowed to be remote deployed onto this system.

@@ -345,14 +345,14 @@ It is absolutely feasible to combine remoting with @ref:[Routing](routing.md).

A pool of remote deployed routees can be configured as:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-remote-round-robin-pool }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-remote-round-robin-pool }

This configuration setting will clone the actor defined in the `Props` of the `remotePool` 10
times and deploy it evenly distributed across the two given target nodes.

A group of remote actors can be configured as:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-remote-round-robin-group }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-remote-round-robin-group }

This configuration setting will send messages to the defined remote actor paths.
It requires that you create the destination actors on the remote nodes with matching paths.

@@ -587,7 +587,7 @@ There are lots of configuration properties that are related to remoting in Akka.

Setting properties like the listening IP and port number programmatically is
best done by using something like the following:

@@snip [RemoteDeploymentDocTest.java]($code$/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #programmatic }
@@snip [RemoteDeploymentDocTest.java](/akka-docs/src/test/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #programmatic }

@@@

@@ -26,10 +26,10 @@ also possible to [create your own](#custom-router).

The following example illustrates how to use a `Router` and manage the routees from within an actor.

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #router-in-actor }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #router-in-actor }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #router-in-actor }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #router-in-actor }

We create a `Router` and specify that it should use `RoundRobinRoutingLogic` when routing the
messages to the routees.
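The pattern in a self-contained sketch (`Worker` is a placeholder routee class):

```scala
import akka.actor.{ Actor, Props, Terminated }
import akka.routing.{ ActorRefRoutee, RoundRobinRoutingLogic, Router }

class Worker extends Actor {
  def receive = { case _ => () /* do the work */ }
}

class Master extends Actor {
  var router: Router = {
    // Create and watch five routees, then wrap them in a Router.
    val routees = Vector.fill(5) {
      val r = context.actorOf(Props[Worker])
      context.watch(r)
      ActorRefRoutee(r)
    }
    Router(RoundRobinRoutingLogic(), routees)
  }

  def receive = {
    case Terminated(a) =>
      // Replace a terminated routee with a fresh one.
      router = router.removeRoutee(a)
      val r = context.actorOf(Props[Worker])
      context.watch(r)
      router = router.addRoutee(r)
    case work =>
      router.route(work, sender())
  }
}
```
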
@@ -97,22 +97,22 @@ few exceptions. These are documented in the [Specially Handled Messages](#router

The following code and configuration snippets show how to create a [round-robin](#round-robin-router) router that forwards messages to five `Worker` routees. The
routees will be created as the router's children.

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-round-robin-pool }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-round-robin-pool }

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #round-robin-pool-1 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #round-robin-pool-1 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #round-robin-pool-1 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #round-robin-pool-1 }

Here is the same example, but with the router configuration provided programmatically instead of
from configuration.

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #round-robin-pool-2 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #round-robin-pool-2 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #round-robin-pool-2 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #round-robin-pool-2 }
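Both variants, sketched (actor names and the `Worker` class are placeholders):

```scala
import akka.actor.{ Actor, ActorRef, ActorSystem, Props }
import akka.routing.{ FromConfig, RoundRobinPool }

class Worker extends Actor {
  def receive = { case _ => () }
}

object PoolSample {
  def create(system: ActorSystem): Unit = {
    // From configuration: requires an akka.actor.deployment entry
    // for /router1 declaring the round-robin pool.
    val router1: ActorRef =
      system.actorOf(FromConfig.props(Props[Worker]), "router1")

    // Programmatically, with no deployment configuration needed:
    val router2: ActorRef =
      system.actorOf(RoundRobinPool(5).props(Props[Worker]), "router2")
  }
}
```
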
#### Remote Deployed Routees

@@ -123,10 +123,10 @@ fashion. In order to deploy routees remotely, wrap the router configuration in a

deployment requires the `akka-remote` module to be included in the classpath.

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #remoteRoutees }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #remoteRoutees }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #remoteRoutees }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #remoteRoutees }

#### Senders

@@ -134,20 +134,20 @@ By default, when a routee sends a message, it will @ref:[implicitly set itself a

](actors.md#actors-tell-sender).

Scala
: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #reply-without-sender }
: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #reply-without-sender }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #reply-with-self }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #reply-with-self }

However, it is often useful for routees to set the *router* as a sender. For example, you might want
to set the router as the sender if you want to hide the details of the routees behind the router.
The following code snippet shows how to set the parent router as sender.

Scala
: @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #reply-with-sender }
: @@snip [ActorDocSpec.scala](/akka-docs/src/test/scala/docs/actor/ActorDocSpec.scala) { #reply-with-sender }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #reply-with-parent }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #reply-with-parent }
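As a sketch (assuming the routee is a child of the router, as in a pool):

```scala
import akka.actor.Actor

class Routee extends Actor {
  def receive = {
    case request: String =>
      val reply = s"handled: $request"
      // sender() ! reply would expose this routee as the sender; instead,
      // attribute the reply to the parent, i.e. the router:
      sender().tell(reply, context.parent)
  }
}
```
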
#### Supervision

@@ -176,10 +176,10 @@ by specifying the strategy when defining the router.

Setting the strategy is done like this:

Scala
: @@snip [RoutingSpec.scala]($akka$/akka-actor-tests/src/test/scala/akka/routing/RoutingSpec.scala) { #supervision }
: @@snip [RoutingSpec.scala](/akka-actor-tests/src/test/scala/akka/routing/RoutingSpec.scala) { #supervision }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #supervision }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #supervision }

@@@ note

@@ -200,42 +200,42 @@ to these paths, wildcards can be used and will result in the same @ref:[semantics as

The example below shows how to create a router by providing it with the path strings of three
routee actors.

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-round-robin-group }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-round-robin-group }

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #round-robin-group-1 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #round-robin-group-1 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #round-robin-group-1 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #round-robin-group-1 }

Here is the same example, but with the router configuration provided programmatically instead of
from configuration.

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #paths #round-robin-group-2 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #paths #round-robin-group-2 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #paths #round-robin-group-2 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #paths #round-robin-group-2 }
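The programmatic group variant, sketched with placeholder paths:

```scala
import akka.actor.{ ActorRef, ActorSystem }
import akka.routing.RoundRobinGroup

object GroupSample {
  def create(system: ActorSystem): ActorRef = {
    // The routees must already exist at these paths; the group only
    // looks them up, it does not create them.
    val paths = List("/user/workers/w1", "/user/workers/w2", "/user/workers/w3")
    system.actorOf(RoundRobinGroup(paths).props(), "router4")
  }
}
```
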
|
||||
|
||||
The routee actors are created externally from the router:
|
||||
|
||||
Scala
|
||||
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #create-workers }
|
||||
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #create-workers }
|
||||
|
||||
Java
|
||||
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #create-workers }
|
||||
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #create-workers }
|
||||
|
||||
|
||||
Scala
|
||||
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #create-worker-actors }
|
||||
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #create-worker-actors }
|
||||
|
||||
Java
|
||||
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #create-worker-actors }
|
||||
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #create-worker-actors }
|
||||
|
||||
The paths may contain protocol and address information for actors running on remote hosts.
|
||||
Remoting requires the `akka-remote` module to be included in the classpath.
|
||||
|
||||
@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-remote-round-robin-group }
|
||||
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-remote-round-robin-group }
|
||||
|
||||
## Router usage
|
||||
|
||||
|
|
@ -246,10 +246,10 @@ Note that deployment paths in the configuration starts with `/parent/` followed
|
|||
of the router actor.
|
||||
|
||||
Scala
|
||||
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #create-parent }
|
||||
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #create-parent }
|
||||
|
||||
Java
|
||||
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #create-parent }
|
||||
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #create-parent }
|
||||
|
||||
<a id="round-robin-router"></a>
|
||||
### RoundRobinPool and RoundRobinGroup
|
||||
|
|
@ -258,39 +258,39 @@ Routes in a [round-robin](http://en.wikipedia.org/wiki/Round-robin) fashion to i
|
|||
|
||||
RoundRobinPool defined in configuration:
|
||||
|
||||
@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-round-robin-pool }
|
||||
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-round-robin-pool }
|
||||
|
||||
Scala
|
||||
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #round-robin-pool-1 }
|
||||
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #round-robin-pool-1 }
|
||||
|
||||
Java
|
||||
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #round-robin-pool-1 }
|
||||
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #round-robin-pool-1 }
|
||||
|
||||
RoundRobinPool defined in code:
|
||||
|
||||
Scala
|
||||
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #round-robin-pool-2 }
|
||||
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #round-robin-pool-2 }
|
||||
|
||||
Java
|
||||
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #round-robin-pool-2 }
|
||||
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #round-robin-pool-2 }
|
||||
|
||||
RoundRobinGroup defined in configuration:
|
||||
|
||||
@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-round-robin-group }
|
||||
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-round-robin-group }
|
||||
|
||||
Scala
|
||||
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #round-robin-group-1 }
|
||||
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #round-robin-group-1 }
|
||||
|
||||
Java
|
||||
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #round-robin-group-1 }
|
||||
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #round-robin-group-1 }
|
||||
|
||||
RoundRobinGroup defined in code:
|
||||
|
||||
Scala
|
||||
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #paths #round-robin-group-2 }
|
||||
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #paths #round-robin-group-2 }
|
||||
|
||||
Java
|
||||
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #paths #round-robin-group-2 }
|
||||
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #paths #round-robin-group-2 }

### RandomPool and RandomGroup

@ -298,39 +298,39 @@ This router type selects one of its routees randomly for each message.

RandomPool defined in configuration:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-random-pool }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-random-pool }

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #random-pool-1 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #random-pool-1 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #random-pool-1 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #random-pool-1 }

RandomPool defined in code:

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #random-pool-2 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #random-pool-2 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #random-pool-2 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #random-pool-2 }

RandomGroup defined in configuration:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-random-group }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-random-group }

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #random-group-1 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #random-group-1 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #random-group-1 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #random-group-1 }

RandomGroup defined in code:

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #paths #random-group-2 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #paths #random-group-2 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #paths #random-group-2 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #paths #random-group-2 }

<a id="balancing-pool"></a>

### BalancingPool

@ -362,27 +362,27 @@ as described in [Specially Handled Messages](#router-special-messages).

BalancingPool defined in configuration:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-balancing-pool }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-balancing-pool }

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #balancing-pool-1 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #balancing-pool-1 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #balancing-pool-1 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #balancing-pool-1 }

BalancingPool defined in code:

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #balancing-pool-2 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #balancing-pool-2 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #balancing-pool-2 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #balancing-pool-2 }

Additional configuration for the balancing dispatcher, which is used by the pool,
can be specified in the `pool-dispatcher` section of the router deployment
configuration.

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-balancing-pool2 }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-balancing-pool2 }

The `BalancingPool` automatically uses a special `BalancingDispatcher` for its
routees - disregarding any dispatcher that is set on the routee Props object.

@ -395,14 +395,14 @@ can be configured as explained in @ref:[Dispatchers](dispatchers.md). In situati
routees are expected to perform blocking operations it may be useful to replace it
with a `thread-pool-executor`, hinting at the number of allocated threads explicitly:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-balancing-pool3 }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-balancing-pool3 }

It is also possible to change the `mailbox` used by the balancing dispatcher for
scenarios where the default unbounded mailbox is not well suited. Such a scenario
could arise when there is a need to manage priority for each message.
You can then implement a priority mailbox and configure your dispatcher:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-balancing-pool4 }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-balancing-pool4 }
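
As a rough sketch of the shape such a deployment section takes (the `/router-balance` path, pool size, and thread counts are illustrative, not taken from the snippets; `Worker` is the stand-in routee class from the earlier sketch):

```scala
import akka.actor.{ActorSystem, Props}
import akka.routing.FromConfig
import com.typesafe.config.ConfigFactory

val config = ConfigFactory.parseString("""
  akka.actor.deployment {
    /router-balance {
      router = balancing-pool
      nr-of-instances = 5
      pool-dispatcher {
        executor = "thread-pool-executor"
        # pin the number of threads for blocking routees
        thread-pool-executor {
          core-pool-size-min = 5
          core-pool-size-max = 5
        }
      }
    }
  }
""")

val system = ActorSystem("example", config)
// FromConfig picks up the deployment section matching the actor's path
val router = system.actorOf(FromConfig.props(Props[Worker]), "router-balance")
```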

@@@ note

@ -428,21 +428,21 @@ since their mailbox size is unknown

SmallestMailboxPool defined in configuration:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-smallest-mailbox-pool }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-smallest-mailbox-pool }

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #smallest-mailbox-pool-1 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #smallest-mailbox-pool-1 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #smallest-mailbox-pool-1 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #smallest-mailbox-pool-1 }

SmallestMailboxPool defined in code:

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #smallest-mailbox-pool-2 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #smallest-mailbox-pool-2 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #smallest-mailbox-pool-2 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #smallest-mailbox-pool-2 }

There is no Group variant of the SmallestMailboxPool because the size of the mailbox
and the internal dispatching state of the actor is not practically available from the paths

@ -454,41 +454,41 @@ A broadcast router forwards the message it receives to *all* its routees.

BroadcastPool defined in configuration:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-broadcast-pool }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-broadcast-pool }

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #broadcast-pool-1 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #broadcast-pool-1 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #broadcast-pool-1 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #broadcast-pool-1 }

BroadcastPool defined in code:

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #broadcast-pool-2 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #broadcast-pool-2 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #broadcast-pool-2 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #broadcast-pool-2 }

BroadcastGroup defined in configuration:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-broadcast-group }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-broadcast-group }

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #broadcast-group-1 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #broadcast-group-1 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #broadcast-group-1 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #broadcast-group-1 }

BroadcastGroup defined in code:

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #paths #broadcast-group-2 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #paths #broadcast-group-2 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #paths #broadcast-group-2 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #paths #broadcast-group-2 }

@@@ note

@ -509,41 +509,41 @@ It is expecting at least one reply within a configured duration, otherwise it wi

ScatterGatherFirstCompletedPool defined in configuration:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-scatter-gather-pool }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-scatter-gather-pool }

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #scatter-gather-pool-1 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #scatter-gather-pool-1 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #scatter-gather-pool-1 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #scatter-gather-pool-1 }

ScatterGatherFirstCompletedPool defined in code:

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #scatter-gather-pool-2 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #scatter-gather-pool-2 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #scatter-gather-pool-2 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #scatter-gather-pool-2 }

ScatterGatherFirstCompletedGroup defined in configuration:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-scatter-gather-group }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-scatter-gather-group }

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #scatter-gather-group-1 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #scatter-gather-group-1 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #scatter-gather-group-1 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #scatter-gather-group-1 }

ScatterGatherFirstCompletedGroup defined in code:

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #paths #scatter-gather-group-2 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #paths #scatter-gather-group-2 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #paths #scatter-gather-group-2 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #paths #scatter-gather-group-2 }

### TailChoppingPool and TailChoppingGroup

@ -559,39 +559,39 @@ This optimisation was described nicely in a blog post by Peter Bailis:

TailChoppingPool defined in configuration:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-tail-chopping-pool }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-tail-chopping-pool }

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #tail-chopping-pool-1 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #tail-chopping-pool-1 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #tail-chopping-pool-1 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #tail-chopping-pool-1 }

TailChoppingPool defined in code:

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #tail-chopping-pool-2 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #tail-chopping-pool-2 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #tail-chopping-pool-2 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #tail-chopping-pool-2 }

TailChoppingGroup defined in configuration:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-tail-chopping-group }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-tail-chopping-group }

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #tail-chopping-group-1 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #tail-chopping-group-1 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #tail-chopping-group-1 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #tail-chopping-group-1 }

TailChoppingGroup defined in code:

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #paths #tail-chopping-group-2 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #paths #tail-chopping-group-2 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #paths #tail-chopping-group-2 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #paths #tail-chopping-group-2 }
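
A minimal in-code sketch (assuming the `system` and `Worker` from the earlier sketches): `within` bounds the total wait for a reply, `interval` is the delay before the next routee is tried.

```scala
import scala.concurrent.duration._
import akka.actor.Props
import akka.routing.TailChoppingPool

val router = system.actorOf(
  TailChoppingPool(nrOfInstances = 5, within = 10.seconds, interval = 20.millis)
    .props(Props[Worker]),
  "tail-chopping-router")
```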

### ConsistentHashingPool and ConsistentHashingGroup

@ -618,17 +618,17 @@ the same time for one router. The @scala[`hashMapping`]@java[`withHashMapper`] i

Code example:

Scala
: @@snip [ConsistentHashingRouterDocSpec.scala]($code$/scala/docs/routing/ConsistentHashingRouterDocSpec.scala) { #cache-actor }
: @@snip [ConsistentHashingRouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/ConsistentHashingRouterDocSpec.scala) { #cache-actor }

Java
: @@snip [ConsistentHashingRouterDocTest.java]($code$/java/jdocs/routing/ConsistentHashingRouterDocTest.java) { #cache-actor }
: @@snip [ConsistentHashingRouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/ConsistentHashingRouterDocTest.java) { #cache-actor }

Scala
: @@snip [ConsistentHashingRouterDocSpec.scala]($code$/scala/docs/routing/ConsistentHashingRouterDocSpec.scala) { #consistent-hashing-router }
: @@snip [ConsistentHashingRouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/ConsistentHashingRouterDocSpec.scala) { #consistent-hashing-router }

Java
: @@snip [ConsistentHashingRouterDocTest.java]($code$/java/jdocs/routing/ConsistentHashingRouterDocTest.java) { #consistent-hashing-router }
: @@snip [ConsistentHashingRouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/ConsistentHashingRouterDocTest.java) { #consistent-hashing-router }

In the above example you see that the `Get` message implements `ConsistentHashable` itself,
while the `Entry` message is wrapped in a `ConsistentHashableEnvelope`. The `Evict`

@ -636,39 +636,39 @@ message is handled by the `hashMapping` partial function.

ConsistentHashingPool defined in configuration:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-consistent-hashing-pool }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-consistent-hashing-pool }

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #consistent-hashing-pool-1 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #consistent-hashing-pool-1 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #consistent-hashing-pool-1 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #consistent-hashing-pool-1 }

ConsistentHashingPool defined in code:

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #consistent-hashing-pool-2 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #consistent-hashing-pool-2 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #consistent-hashing-pool-2 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #consistent-hashing-pool-2 }

ConsistentHashingGroup defined in configuration:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-consistent-hashing-group }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-consistent-hashing-group }

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #consistent-hashing-group-1 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #consistent-hashing-group-1 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #consistent-hashing-group-1 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #consistent-hashing-group-1 }

ConsistentHashingGroup defined in code:

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #paths #consistent-hashing-group-2 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #paths #consistent-hashing-group-2 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #paths #consistent-hashing-group-2 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #paths #consistent-hashing-group-2 }

`virtual-nodes-factor` is the number of virtual nodes per routee that is used in the
consistent hash node ring to make the distribution more uniform.
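
To make the hashing concrete, here is a hedged sketch of sending through such a router with a `ConsistentHashableEnvelope` (the `Cache`/`Entry` names follow the example referenced above, but the code here is illustrative, not the verbatim snippet):

```scala
import akka.actor.{Actor, Props}
import akka.routing.ConsistentHashingPool
import akka.routing.ConsistentHashingRouter.ConsistentHashableEnvelope

final case class Entry(key: String, value: String)

class Cache extends Actor {
  var store = Map.empty[String, String]
  def receive = {
    case Entry(k, v) => store += (k -> v)
  }
}

val cache = system.actorOf(ConsistentHashingPool(10).props(Props[Cache]), "cache")

// messages with equal hashKey values always land on the same routee
cache ! ConsistentHashableEnvelope(message = Entry("hello", "HELLO"), hashKey = "hello")
cache ! ConsistentHashableEnvelope(message = Entry("hi", "HI"), hashKey = "hi")
```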

@ -694,10 +694,10 @@ The example below shows how you would use a `Broadcast` message to send a very i
to every routee of a router.

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #broadcastDavyJonesWarning }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #broadcastDavyJonesWarning }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #broadcastDavyJonesWarning }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #broadcastDavyJonesWarning }
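
In essence, those snippets boil down to a single line (assuming `router` is any router `ActorRef` from the earlier examples):

```scala
import akka.routing.Broadcast

router ! Broadcast("Watch out for Davy Jones' locker")
```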

In this example the router receives the `Broadcast` message, extracts its payload
(`"Watch out for Davy Jones' locker"`), and then sends the payload on to all of the router's

@ -718,10 +718,10 @@ receives a `PoisonPill` message, that actor will be stopped. See the @ref:[Poiso
documentation for details.

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #poisonPill }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #poisonPill }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #poisonPill }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #poisonPill }

For a router, which normally passes on messages to routees, it is important to realise that
`PoisonPill` messages are processed by the router only. `PoisonPill` messages sent to a router

@ -740,10 +740,10 @@ routee will receive the `PoisonPill` message. Note that this will stop all route
routees aren't children of the router, i.e. even routees programmatically provided to the router.

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #broadcastPoisonPill }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #broadcastPoisonPill }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #broadcastPoisonPill }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #broadcastPoisonPill }
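
The key line in those snippets is the wrapping itself; a sketch:

```scala
import akka.actor.PoisonPill
import akka.routing.Broadcast

// Wrapped in Broadcast, the PoisonPill is forwarded to every routee
// instead of being processed by the router itself
router ! Broadcast(PoisonPill)
```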

With the code shown above, each routee will receive a `PoisonPill` message. Each routee will
continue to process its messages as normal, eventually processing the `PoisonPill`. This will

@ -771,10 +771,10 @@ supervision directive that is applied to the router. Routees that are not the ro
those that were created externally to the router, will not be affected.

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #kill }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #kill }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #kill }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #kill }

As with the `PoisonPill` message, there is a distinction between killing a router, which
indirectly kills its children (who happen to be routees), and killing routees directly (some of whom

@ -782,10 +782,10 @@ may not be children.) To kill routees directly the router should be sent a `Kill`
in a `Broadcast` message.

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #broadcastKill }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #broadcastKill }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #broadcastKill }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #broadcastKill }

### Management Messages

@ -817,13 +817,13 @@ pressure is lower than certain threshold. Both thresholds are configurable.

Pool with default resizer defined in configuration:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-resize-pool }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-resize-pool }

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #resize-pool-1 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #resize-pool-1 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #resize-pool-1 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #resize-pool-1 }

Several more configuration options are available and described in the `akka.actor.deployment.default.resizer`
section of the reference @ref:[configuration](general/configuration.md).

@ -831,10 +831,10 @@ section of the reference @ref:[configuration](general/configuration.md).

Pool with resizer defined in code:

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #resize-pool-2 }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #resize-pool-2 }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #resize-pool-2 }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #resize-pool-2 }

*It is also worth pointing out that if you define the ``router`` in the configuration file then this value
will be used instead of any programmatically sent parameters.*
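
A hedged in-code sketch of a resizable pool (the bounds are arbitrary; `Worker` is the stand-in routee class from the earlier sketches):

```scala
import akka.actor.Props
import akka.routing.{DefaultResizer, RoundRobinPool}

val resizer = DefaultResizer(lowerBound = 2, upperBound = 15)
val router = system.actorOf(
  RoundRobinPool(nrOfInstances = 5, resizer = Some(resizer)).props(Props[Worker]),
  "resizable-router")
```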

@ -867,13 +867,13 @@ The memory usage is O(n) where n is the number of sizes you allow, i.e. upperBou

Pool with `OptimalSizeExploringResizer` defined in configuration:

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-optimal-size-exploring-resize-pool }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-optimal-size-exploring-resize-pool }

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #optimal-size-exploring-resize-pool }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #optimal-size-exploring-resize-pool }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #optimal-size-exploring-resize-pool }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #optimal-size-exploring-resize-pool }

Several more configuration options are available and described in the `akka.actor.deployment.default.optimal-size-exploring-resizer`
section of the reference @ref:[configuration](general/configuration.md).

@ -928,10 +928,10 @@ The router created in this example is replicating each message to a few destinat

Start with the routing logic:

Scala
: @@snip [CustomRouterDocSpec.scala]($code$/scala/docs/routing/CustomRouterDocSpec.scala) { #routing-logic }
: @@snip [CustomRouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/CustomRouterDocSpec.scala) { #routing-logic }

Java
: @@snip [CustomRouterDocTest.java]($code$/java/jdocs/routing/CustomRouterDocTest.java) { #routing-logic }
: @@snip [CustomRouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/CustomRouterDocTest.java) { #routing-logic }

`select` will be called for each message and will, in this example, pick a few destinations in round-robin
fashion, reusing the existing `RoundRobinRoutingLogic` and wrapping the result in a `SeveralRoutees`
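
The referenced snippet implements roughly the following idea (a sketch, not the verbatim snippet):

```scala
import scala.collection.immutable
import akka.routing.{Routee, RoutingLogic, RoundRobinRoutingLogic, SeveralRoutees}

class RedundancyRoutingLogic(nbrCopies: Int) extends RoutingLogic {
  private val roundRobin = RoundRobinRoutingLogic()

  override def select(message: Any, routees: immutable.IndexedSeq[Routee]): Routee = {
    // pick nbrCopies destinations round-robin and treat them as one target
    val targets = (1 to nbrCopies).map(_ => roundRobin.select(message, routees))
    SeveralRoutees(targets)
  }
}
```

`SeveralRoutees` then delivers the same message to each of the wrapped routees.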

@ -942,10 +942,10 @@ The implementation of the routing logic must be thread safe, since it might be u

A unit test of the routing logic:

Scala
: @@snip [CustomRouterDocSpec.scala]($code$/scala/docs/routing/CustomRouterDocSpec.scala) { #unit-test-logic }
: @@snip [CustomRouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/CustomRouterDocSpec.scala) { #unit-test-logic }

Java
: @@snip [CustomRouterDocTest.java]($code$/java/jdocs/routing/CustomRouterDocTest.java) { #unit-test-logic }
: @@snip [CustomRouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/CustomRouterDocTest.java) { #unit-test-logic }

You could stop here and use the `RedundancyRoutingLogic` with an `akka.routing.Router`
as described in [A Simple Router](#simple-router).

@ -956,27 +956,27 @@ Create a class that extends `Pool`, `Group` or `CustomRouterConfig`. That class
for the routing logic and holds the configuration for the router. Here we make it a `Group`.

Scala
: @@snip [CustomRouterDocSpec.scala]($code$/scala/docs/routing/CustomRouterDocSpec.scala) { #group }
: @@snip [CustomRouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/CustomRouterDocSpec.scala) { #group }

Java
: @@snip [RedundancyGroup.java]($code$/java/jdocs/routing/RedundancyGroup.java) { #group }
: @@snip [RedundancyGroup.java](/akka-docs/src/test/java/jdocs/routing/RedundancyGroup.java) { #group }

This can be used exactly as the router actors provided by Akka.

Scala
: @@snip [CustomRouterDocSpec.scala]($code$/scala/docs/routing/CustomRouterDocSpec.scala) { #usage-1 }
: @@snip [CustomRouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/CustomRouterDocSpec.scala) { #usage-1 }

Java
: @@snip [CustomRouterDocTest.java]($code$/java/jdocs/routing/CustomRouterDocTest.java) { #usage-1 }
: @@snip [CustomRouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/CustomRouterDocTest.java) { #usage-1 }

Note that we added a constructor in `RedundancyGroup` that takes a `Config` parameter.
That makes it possible to define it in configuration.

Scala
: @@snip [CustomRouterDocSpec.scala]($code$/scala/docs/routing/CustomRouterDocSpec.scala) { #config }
: @@snip [CustomRouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/CustomRouterDocSpec.scala) { #config }

Java
: @@snip [CustomRouterDocSpec.scala]($code$/scala/docs/routing/CustomRouterDocSpec.scala) { #jconfig }
: @@snip [CustomRouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/CustomRouterDocSpec.scala) { #jconfig }

Note the fully qualified class name in the `router` property. The router class must extend
`akka.routing.RouterConfig` (`Pool`, `Group` or `CustomRouterConfig`) and have a

@ -984,10 +984,10 @@ constructor with one `com.typesafe.config.Config` parameter.

The deployment section of the configuration is passed to the constructor.

Scala
: @@snip [CustomRouterDocSpec.scala]($code$/scala/docs/routing/CustomRouterDocSpec.scala) { #usage-2 }
: @@snip [CustomRouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/CustomRouterDocSpec.scala) { #usage-2 }

Java
: @@snip [CustomRouterDocTest.java]($code$/java/jdocs/routing/CustomRouterDocTest.java) { #usage-2 }
: @@snip [CustomRouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/CustomRouterDocTest.java) { #usage-2 }

## Configuring Dispatchers

@ -997,7 +997,7 @@ The dispatcher for created children of the pool will be taken from

To make it easy to define the dispatcher of the routees of the pool you can
define the dispatcher inline in the deployment section of the config.

@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-pool-dispatcher }
@@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #config-pool-dispatcher }

That is the only thing you need to do to enable a dedicated dispatcher for a
pool.

@ -1019,10 +1019,10 @@ property in their constructor or factory method, custom routers have to
implement the method in a suitable way.

Scala
: @@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #dispatchers }
: @@snip [RouterDocSpec.scala](/akka-docs/src/test/scala/docs/routing/RouterDocSpec.scala) { #dispatchers }

Java
: @@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #dispatchers }
: @@snip [RouterDocTest.java](/akka-docs/src/test/java/jdocs/routing/RouterDocTest.java) { #dispatchers }

@@@ note

@ -53,34 +53,34 @@ by the `akka.scheduler.tick-duration` configuration property.

## Some examples

Scala
: @@snip [SchedulerDocSpec.scala]($code$/scala/docs/actor/SchedulerDocSpec.scala) { #imports1 }
: @@snip [SchedulerDocSpec.scala](/akka-docs/src/test/scala/docs/actor/SchedulerDocSpec.scala) { #imports1 }

Java
: @@snip [SchedulerDocTest.java]($code$/java/jdocs/actor/SchedulerDocTest.java) { #imports1 }
: @@snip [SchedulerDocTest.java](/akka-docs/src/test/java/jdocs/actor/SchedulerDocTest.java) { #imports1 }

Schedule to send the "foo"-message to the testActor after 50ms:

Scala
: @@snip [SchedulerDocSpec.scala]($code$/scala/docs/actor/SchedulerDocSpec.scala) { #schedule-one-off-message }
: @@snip [SchedulerDocSpec.scala](/akka-docs/src/test/scala/docs/actor/SchedulerDocSpec.scala) { #schedule-one-off-message }

Java
: @@snip [SchedulerDocTest.java]($code$/java/jdocs/actor/SchedulerDocTest.java) { #schedule-one-off-message }
: @@snip [SchedulerDocTest.java](/akka-docs/src/test/java/jdocs/actor/SchedulerDocTest.java) { #schedule-one-off-message }

Schedule a @scala[function]@java[`Runnable`], that sends the current time to the testActor, to be executed after 50ms:

Scala
: @@snip [SchedulerDocSpec.scala]($code$/scala/docs/actor/SchedulerDocSpec.scala) { #schedule-one-off-thunk }
: @@snip [SchedulerDocSpec.scala](/akka-docs/src/test/scala/docs/actor/SchedulerDocSpec.scala) { #schedule-one-off-thunk }

Java
: @@snip [SchedulerDocTest.java]($code$/java/jdocs/actor/SchedulerDocTest.java) { #schedule-one-off-thunk }
: @@snip [SchedulerDocTest.java](/akka-docs/src/test/java/jdocs/actor/SchedulerDocTest.java) { #schedule-one-off-thunk }

Schedule to send the "Tick"-message to the `tickActor` after 0ms repeating every 50ms:

Scala
: @@snip [SchedulerDocSpec.scala]($code$/scala/docs/actor/SchedulerDocSpec.scala) { #schedule-recurring }
: @@snip [SchedulerDocSpec.scala](/akka-docs/src/test/scala/docs/actor/SchedulerDocSpec.scala) { #schedule-recurring }

Java
: @@snip [SchedulerDocTest.java]($code$/java/jdocs/actor/SchedulerDocTest.java) { #schedule-recurring }
: @@snip [SchedulerDocTest.java](/akka-docs/src/test/java/jdocs/actor/SchedulerDocTest.java) { #schedule-recurring }
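
The three snippets correspond roughly to the following calls (a sketch; `testActor` and `tickActor` stand in for your own actors, and the implicit `ExecutionContext` comes from the system dispatcher):

```scala
import scala.concurrent.duration._
import akka.actor.ActorSystem

val system = ActorSystem("example")
import system.dispatcher // ExecutionContext for running the scheduled tasks

// one-off message after 50 ms
system.scheduler.scheduleOnce(50.milliseconds, testActor, "foo")

// one-off function after 50 ms
system.scheduler.scheduleOnce(50.milliseconds) {
  testActor ! System.currentTimeMillis()
}

// "Tick" immediately and then every 50 ms
system.scheduler.schedule(0.milliseconds, 50.milliseconds, tickActor, "Tick")
```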

@@@ warning

@ -95,7 +95,7 @@ necessary parameters) and then call the method when the message is received.

## From `akka.actor.ActorSystem`

@@snip [ActorSystem.scala]($akka$/akka-actor/src/main/scala/akka/actor/ActorSystem.scala) { #scheduler }
@@snip [ActorSystem.scala](/akka-actor/src/main/scala/akka/actor/ActorSystem.scala) { #scheduler }

@@@ warning

@ -112,10 +112,10 @@ different one using the `akka.scheduler.implementation` configuration
property. The referenced class must implement the following interface:

Scala
: @@snip [Scheduler.scala]($akka$/akka-actor/src/main/scala/akka/actor/Scheduler.scala) { #scheduler }
: @@snip [Scheduler.scala](/akka-actor/src/main/scala/akka/actor/Scheduler.scala) { #scheduler }

Java
: @@snip [AbstractScheduler.java]($akka$/akka-actor/src/main/java/akka/actor/AbstractScheduler.java) { #scheduler }
: @@snip [AbstractScheduler.java](/akka-actor/src/main/java/akka/actor/AbstractScheduler.java) { #scheduler }

## The Cancellable interface

@ -131,4 +131,4 @@ scheduled task was canceled or will (eventually) have run.

@@@

@@snip [Scheduler.scala]($akka$/akka-actor/src/main/scala/akka/actor/Scheduler.scala) { #cancellable }
@@snip [Scheduler.scala](/akka-actor/src/main/scala/akka/actor/Scheduler.scala) { #cancellable }
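
In practice the interface is used like this (a sketch continuing the recurring-tick example above):

```scala
import akka.actor.Cancellable

val task: Cancellable =
  system.scheduler.schedule(0.milliseconds, 50.milliseconds, tickActor, "Tick")

// returns true only for the invocation that actually cancelled the task
val wasCancelled = task.cancel()
assert(task.isCancelled)
```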

@ -24,12 +24,12 @@ For Akka to know which `Serializer` to use for what, you need edit your [Configu
in the "akka.actor.serializers"-section you bind names to implementations of the `akka.serialization.Serializer`
you wish to use, like this:

@@snip [SerializationDocSpec.scala]($code$/scala/docs/serialization/SerializationDocSpec.scala) { #serialize-serializers-config }
@@snip [SerializationDocSpec.scala](/akka-docs/src/test/scala/docs/serialization/SerializationDocSpec.scala) { #serialize-serializers-config }

After you've bound names to different implementations of `Serializer` you need to wire up which classes
should be serialized using which `Serializer`; this is done in the "akka.actor.serialization-bindings"-section:

@@snip [SerializationDocSpec.scala]($code$/scala/docs/serialization/SerializationDocSpec.scala) { #serialization-bindings-config }
@@snip [SerializationDocSpec.scala](/akka-docs/src/test/scala/docs/serialization/SerializationDocSpec.scala) { #serialization-bindings-config }
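
Put together, the two sections have this shape (a sketch; the `docs.serialization.*` class names are placeholders for your own serializer and message marker):

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

val config = ConfigFactory.parseString("""
  akka.actor {
    serializers {
      myown = "docs.serialization.MyOwnSerializer"
    }
    serialization-bindings {
      "docs.serialization.MyOwnSerializable" = myown
    }
  }
""")

val system = ActorSystem("example", config)
```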

You only need to specify the name of an interface or abstract base class of the
messages. In case of ambiguity, i.e. the message implements several of the

@ -79,11 +79,11 @@ Alternatively, you can disable all Java serialization which then automatically w

Normally, messages sent between local actors (i.e. same JVM) do not undergo serialization. For testing, sometimes, it may be desirable to force serialization on all messages (both remote and local). If you want to do this in order to verify that your messages are serializable you can enable the following config option:

@@snip [SerializationDocSpec.scala]($code$/scala/docs/serialization/SerializationDocSpec.scala) { #serialize-messages-config }
@@snip [SerializationDocSpec.scala](/akka-docs/src/test/scala/docs/serialization/SerializationDocSpec.scala) { #serialize-messages-config }

If you want to verify that your `Props` are serializable you can enable the following config option:

@@snip [SerializationDocSpec.scala]($code$/scala/docs/serialization/SerializationDocSpec.scala) { #serialize-creators-config }
@@snip [SerializationDocSpec.scala](/akka-docs/src/test/scala/docs/serialization/SerializationDocSpec.scala) { #serialize-creators-config }

@@@ warning

@ -97,17 +97,17 @@ If you want to programmatically serialize/deserialize using Akka Serialization,
here are some examples:

Scala
: @@snip [SerializationDocSpec.scala]($code$/scala/docs/serialization/SerializationDocSpec.scala) { #imports }
: @@snip [SerializationDocSpec.scala](/akka-docs/src/test/scala/docs/serialization/SerializationDocSpec.scala) { #imports }

Java
: @@snip [SerializationDocTest.java]($code$/java/jdocs/serialization/SerializationDocTest.java) { #imports }
: @@snip [SerializationDocTest.java](/akka-docs/src/test/java/jdocs/serialization/SerializationDocTest.java) { #imports }

Scala
: @@snip [SerializationDocSpec.scala]($code$/scala/docs/serialization/SerializationDocSpec.scala) { #programmatic }
: @@snip [SerializationDocSpec.scala](/akka-docs/src/test/scala/docs/serialization/SerializationDocSpec.scala) { #programmatic }

Java
: @@snip [SerializationDocTest.java]($code$/java/jdocs/serialization/SerializationDocTest.java) { #programmatic }
: @@snip [SerializationDocTest.java](/akka-docs/src/test/java/jdocs/serialization/SerializationDocTest.java) { #programmatic }
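
The programmatic round-trip boils down to roughly this (a sketch):

```scala
import akka.serialization.SerializationExtension

val serialization = SerializationExtension(system)

val original = "woohoo"
// look up the serializer registered for this object's class
val serializer = serialization.findSerializerFor(original)

val bytes: Array[Byte] = serializer.toBinary(original)
val back = serializer.fromBinary(bytes, manifest = None)
assert(back == original)
```
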
For more information, have a look at the `ScalaDoc` for `akka.serialization._`

@ -120,17 +120,17 @@ The first code snippet on this page contains a configuration file that reference

A custom `Serializer` has to inherit from @scala[`akka.serialization.Serializer`]@java[`akka.serialization.JSerializer`] and can be defined like the following:

Scala
: @@snip [SerializationDocSpec.scala]($code$/scala/docs/serialization/SerializationDocSpec.scala) { #imports }
: @@snip [SerializationDocSpec.scala](/akka-docs/src/test/scala/docs/serialization/SerializationDocSpec.scala) { #imports }

Java
: @@snip [SerializationDocTest.java]($code$/java/jdocs/serialization/SerializationDocTest.java) { #imports }
: @@snip [SerializationDocTest.java](/akka-docs/src/test/java/jdocs/serialization/SerializationDocTest.java) { #imports }

Scala
: @@snip [SerializationDocSpec.scala]($code$/scala/docs/serialization/SerializationDocSpec.scala) { #my-own-serializer }
: @@snip [SerializationDocSpec.scala](/akka-docs/src/test/scala/docs/serialization/SerializationDocSpec.scala) { #my-own-serializer }

Java
: @@snip [SerializationDocTest.java]($code$/java/jdocs/serialization/SerializationDocTest.java) { #my-own-serializer }
: @@snip [SerializationDocTest.java](/akka-docs/src/test/java/jdocs/serialization/SerializationDocTest.java) { #my-own-serializer }

The manifest is a type hint so that the same serializer can be used for different
classes. The manifest parameter in @scala[`fromBinary`]@java[`fromBinaryJava`] is the class of the object that

@ -160,10 +160,10 @@ class name if you used `includeManifest=true`, otherwise it will be the empty st

This is what a `SerializerWithStringManifest` looks like:

Scala
: @@snip [SerializationDocSpec.scala]($code$/scala/docs/serialization/SerializationDocSpec.scala) { #my-own-serializer2 }
: @@snip [SerializationDocSpec.scala](/akka-docs/src/test/scala/docs/serialization/SerializationDocSpec.scala) { #my-own-serializer2 }

Java
: @@snip [SerializationDocTest.java]($code$/java/jdocs/serialization/SerializationDocTest.java) { #my-own-serializer2 }
: @@snip [SerializationDocTest.java](/akka-docs/src/test/java/jdocs/serialization/SerializationDocTest.java) { #my-own-serializer2 }
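
A hedged sketch of such a serializer (the `Greeting` message, manifest string, and identifier value are illustrative):

```scala
import java.nio.charset.StandardCharsets
import akka.serialization.SerializerWithStringManifest

final case class Greeting(who: String)

class GreetingSerializer extends SerializerWithStringManifest {
  private val GreetingManifest = "greeting"

  // must be unique among all serializers of the ActorSystem
  override def identifier: Int = 1234567

  override def manifest(obj: AnyRef): String = obj match {
    case _: Greeting => GreetingManifest
  }

  override def toBinary(obj: AnyRef): Array[Byte] = obj match {
    case Greeting(who) => who.getBytes(StandardCharsets.UTF_8)
  }

  override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef = manifest match {
    case GreetingManifest => Greeting(new String(bytes, StandardCharsets.UTF_8))
  }
}
```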

You must also bind it to a name in your [Configuration]() and then list which classes
should be serialized using it.

@ -186,17 +17,17 @@ address which shall be the recipient of the serialized information. Use
`Serialization.serializedActorPath(actorRef)` like this:

Scala
: @@snip [SerializationDocSpec.scala]($code$/scala/docs/serialization/SerializationDocSpec.scala) { #imports }
: @@snip [SerializationDocSpec.scala](/akka-docs/src/test/scala/docs/serialization/SerializationDocSpec.scala) { #imports }

Java
: @@snip [SerializationDocTest.java]($code$/java/jdocs/serialization/SerializationDocTest.java) { #imports }
: @@snip [SerializationDocTest.java](/akka-docs/src/test/java/jdocs/serialization/SerializationDocTest.java) { #imports }

Scala
: @@snip [SerializationDocSpec.scala]($code$/scala/docs/serialization/SerializationDocSpec.scala) { #actorref-serializer }
: @@snip [SerializationDocSpec.scala](/akka-docs/src/test/scala/docs/serialization/SerializationDocSpec.scala) { #actorref-serializer }

Java
: @@snip [SerializationDocTest.java]($code$/java/jdocs/serialization/SerializationDocTest.java) { #actorref-serializer }
: @@snip [SerializationDocTest.java](/akka-docs/src/test/java/jdocs/serialization/SerializationDocTest.java) { #actorref-serializer }
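
The essential calls are these (a sketch; the system must be an `ExtendedActorSystem` on the deserializing side):

```scala
import akka.actor.{ActorRef, ExtendedActorSystem}
import akka.serialization.Serialization

// inside toBinary: full path string including the system's address
def serializeRef(ref: ActorRef): String =
  Serialization.serializedActorPath(ref)

// inside fromBinary: resolve the path back into an ActorRef
def deserializeRef(system: ExtendedActorSystem, path: String): ActorRef =
  system.provider.resolveActorRef(path)
```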

This assumes that serialization happens in the context of sending a message
through the remote transport. There are other uses of serialization, though,

@ -212,10 +212,10 @@ the appropriate address to use when sending to `remoteAddr` you can use
`ActorRefProvider.getExternalAddressFor(remoteAddr)` like this:

Scala
: @@snip [SerializationDocSpec.scala]($code$/scala/docs/serialization/SerializationDocSpec.scala) { #external-address }
: @@snip [SerializationDocSpec.scala](/akka-docs/src/test/scala/docs/serialization/SerializationDocSpec.scala) { #external-address }

Java
: @@snip [SerializationDocTest.java]($code$/java/jdocs/serialization/SerializationDocTest.java) { #external-address }
: @@snip [SerializationDocTest.java](/akka-docs/src/test/java/jdocs/serialization/SerializationDocTest.java) { #external-address }

@@@ note

@ -242,10 +242,10 @@ There is also a default remote address which is the one used by cluster support
(and typical systems have just this one); you can get it like this:

Scala
: @@snip [SerializationDocSpec.scala]($code$/scala/docs/serialization/SerializationDocSpec.scala) { #external-address-default }
: @@snip [SerializationDocSpec.scala](/akka-docs/src/test/scala/docs/serialization/SerializationDocSpec.scala) { #external-address-default }

Java
: @@snip [SerializationDocTest.java]($code$/java/jdocs/serialization/SerializationDocTest.java) { #external-address-default }
: @@snip [SerializationDocTest.java](/akka-docs/src/test/java/jdocs/serialization/SerializationDocTest.java) { #external-address-default }

Another solution is to encapsulate your serialization code in `Serialization.withTransportInformation`.
It ensures the actorRefs are serialized using the system's default address when

@ -18,7 +18,7 @@ This operator is included in:

## Signature

@@signature [ActorFlow.scala]($akka$/akka-stream-typed/src/main/scala/akka/stream/typed/scaladsl/ActorFlow.scala) { #ask }
@@signature [ActorFlow.scala](/akka-stream-typed/src/main/scala/akka/stream/typed/scaladsl/ActorFlow.scala) { #ask }

@@@

@ -31,8 +31,8 @@ a `IOResult` upon reaching the end of the file or if there is a failure.

Scala
: @@snip [ask.scala]($akka$/akka-stream-typed/src/test/scala/akka/stream/typed/scaladsl/ActorFlowSpec.scala) { #imports #ask-actor #ask }
: @@snip [ask.scala](/akka-stream-typed/src/test/scala/akka/stream/typed/scaladsl/ActorFlowSpec.scala) { #imports #ask-actor #ask }

Java
: @@snip [ask.java]($akka$/akka-stream-typed/src/test/java/akka/stream/typed/javadsl/ActorFlowCompileTest.java) { #ask-actor #ask }
: @@snip [ask.java](/akka-stream-typed/src/test/java/akka/stream/typed/javadsl/ActorFlowCompileTest.java) { #ask-actor #ask }
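
A compact sketch of the ask protocol (the `Asking`/`Answer` message types mirror the referenced spec but are written out here for illustration):

```scala
import akka.NotUsed
import akka.actor.typed.ActorRef
import akka.stream.scaladsl.Flow
import akka.stream.typed.scaladsl.ActorFlow
import akka.util.Timeout

final case class Asking(payload: String, replyTo: ActorRef[Answer])
final case class Answer(payload: String)

// each stream element is sent as an Asking; the actor's Answer is emitted downstream
def askFlow(ref: ActorRef[Asking])(implicit timeout: Timeout): Flow[String, Answer, NotUsed] =
  ActorFlow.ask(ref)((el: String, replyTo: ActorRef[Answer]) => Asking(el, replyTo))
```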

@ -18,7 +18,7 @@ This operator is included in:

## Signature

@@signature [ActorSink.scala]($akka$/akka-stream-typed/src/main/scala/akka/stream/typed/scaladsl/ActorSink.scala) { #actorRef }
@@signature [ActorSink.scala](/akka-stream-typed/src/main/scala/akka/stream/typed/scaladsl/ActorSink.scala) { #actorRef }

@@@

@ -8,7 +8,7 @@ Emit the contents of a file.

## Signature

@@signature [FileIO.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/FileIO.scala) { #fromPath }
@@signature [FileIO.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/FileIO.scala) { #fromPath }

@@@

@ -8,7 +8,7 @@ Create a sink which will write incoming `ByteString` s to a given file path.

## Signature

@@signature [FileIO.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/FileIO.scala) { #toPath }
@@signature [FileIO.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/FileIO.scala) { #toPath }

@@@

@ -8,7 +8,7 @@ Creates a `Flow` from a `Sink` and a `Source` where the Flow's input will be sen

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #fromSinkAndSource }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #fromSinkAndSource }

@@@

@ -8,7 +8,7 @@ Allows coupling termination (cancellation, completion, erroring) of Sinks and So

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #fromSinkAndSourceCoupled }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #fromSinkAndSourceCoupled }

@@@

@ -8,7 +8,7 @@ Creates a real `Flow` upon receiving the first element by calling relevant `flow

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #lazyInitAsync }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #lazyInitAsync }

@@@

@ -7,7 +7,7 @@ Send the elements from the stream to an `ActorRef`.

@@@ div { .group-scala }

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #actorRef }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #actorRef }

@@@

## Description

@ -14,18 +14,18 @@ to provide back pressure onto the sink.

Actor to be interacted with:

Scala
: @@snip [IntegrationDocSpec.scala]($code$/scala/docs/stream/IntegrationDocSpec.scala) { #actorRefWithAck-actor }
: @@snip [IntegrationDocSpec.scala](/akka-docs/src/test/scala/docs/stream/IntegrationDocSpec.scala) { #actorRefWithAck-actor }

Java
: @@snip [IntegrationDocTest.java]($code$/java/jdocs/stream/IntegrationDocTest.java) { #actorRefWithAck-actor }
: @@snip [IntegrationDocTest.java](/akka-docs/src/test/java/jdocs/stream/IntegrationDocTest.java) { #actorRefWithAck-actor }

Using the `actorRefWithAck` operator with the above actor:

Scala
: @@snip [IntegrationDocSpec.scala]($code$/scala/docs/stream/IntegrationDocSpec.scala) { #actorRefWithAck }
: @@snip [IntegrationDocSpec.scala](/akka-docs/src/test/scala/docs/stream/IntegrationDocSpec.scala) { #actorRefWithAck }

Java
: @@snip [IntegrationDocTest.java]($code$/java/jdocs/stream/IntegrationDocTest.java) { #actorRefWithAck }
: @@snip [IntegrationDocTest.java](/akka-docs/src/test/java/jdocs/stream/IntegrationDocTest.java) { #actorRefWithAck }

## Reactive Streams semantics

@ -7,7 +7,7 @@ Integration with Reactive Streams, materializes into a `org.reactivestreams.Publ

@@@ div { .group-scala }

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #asPublisher }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #asPublisher }

@@@

@ -8,7 +8,7 @@ Immediately cancel the stream

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #cancelled }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #cancelled }

@@@

@ -8,7 +8,7 @@ Combine several sinks into one using a user specified strategy

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #combine }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #combine }

@@@

@ -8,7 +8,7 @@ Fold over emitted element with a function, where each invocation will get the ne

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #fold }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #fold }

@@@
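
For orientation, a minimal usage sketch (assuming an in-scope `ActorSystem` and materializer):

```scala
import scala.concurrent.Future
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}

implicit val system: ActorSystem = ActorSystem("example")
implicit val mat: ActorMaterializer = ActorMaterializer()

// folds 0 + 1 + 2 + ... + 10 and materializes the final accumulator
val sum: Future[Int] = Source(1 to 10).runWith(Sink.fold(0)(_ + _))
```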

@ -8,7 +8,7 @@ Invoke a given procedure for each element received.

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #forEach }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #forEach }

@@@

@ -8,7 +8,7 @@ Like `foreach` but allows up to `parallellism` procedure calls to happen in para

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #foreachParallel }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #foreachParallel }

@@@

@ -8,7 +8,7 @@ Integration with Reactive Streams, wraps a `org.reactivestreams.Subscriber` as a

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #fromSubscriber }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #fromSubscriber }

@@@

@ -8,7 +8,7 @@ Materializes into a @scala[`Future`] @java[`CompletionStage`] which completes wi

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #head }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #head }

@@@

@ -8,7 +8,7 @@ Materializes into a @scala[`Future[Option[T]]`] @java[`CompletionStage<Optional<

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #headOption }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #headOption }

@@@

@ -8,7 +8,7 @@ Consume all elements but discards them.

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #ignore }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #ignore }

@@@
|
||||
|
||||
|
|
|
|||
|
|
@ -8,7 +8,7 @@ Materializes into a @scala[`Future`] @java[`CompletionStage`] which will complet
|
|||
|
||||
## Signature
|
||||
|
||||
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #last }
|
||||
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #last }
|
||||
|
||||
@@@

@@ -8,7 +8,7 @@ Materialize a @scala[`Future[Option[T]]`] @java[`CompletionStage<Optional<T>>`]

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #lastOption }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #lastOption }
@@@

@@ -8,7 +8,7 @@ Creates a real `Sink` upon receiving the first element.

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #lazyInitAsync }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #lazyInitAsync }
@@@

@@ -8,7 +8,7 @@ Invoke a callback when the stream has completed or failed.

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #onComplete }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #onComplete }
@@@
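
A minimal `Sink.onComplete` sketch (illustrative; not part of this diff):

```scala
import akka.Done
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }
import scala.util.{ Failure, Success }

object OnCompleteExample extends App {
  implicit val system: ActorSystem = ActorSystem("docs")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  // The callback receives Success(Done) on normal completion and
  // Failure(cause) if the stream fails.
  Source(1 to 3).runWith(Sink.onComplete {
    case Success(Done) => println("stream completed"); system.terminate()
    case Failure(ex)   => println(s"stream failed: $ex"); system.terminate()
  })
}
```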

@@ -8,7 +8,7 @@ Materializes this Sink, immediately returning (1) its materialized value, and (2

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #preMaterialize }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #preMaterialize }
@@@

@@ -8,7 +8,7 @@ Materialize a `SinkQueue` that can be pulled to trigger demand through the sink.

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #queue }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #queue }
@@@
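
An illustrative `Sink.queue` sketch (not from the commit; pull-based consumption under the usual setup assumptions):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }

object QueueExample extends App {
  implicit val system: ActorSystem = ActorSystem("docs")
  implicit val mat: ActorMaterializer = ActorMaterializer()
  import system.dispatcher

  val queue = Source(1 to 2).runWith(Sink.queue[Int]())

  // Each pull() signals demand for exactly one element; None means the
  // stream has completed.
  queue.pull().foreach { first =>
    println(s"first pull: $first") // Some(1)
    queue.cancel()                 // stop consuming early
    system.terminate()
  }
}
```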

@@ -8,7 +8,7 @@ Apply a reduction function on the incoming elements and pass the result to the n

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #reduce }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #reduce }
@@@
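
A minimal `Sink.reduce` sketch (illustrative only):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }
import scala.concurrent.Future

object ReduceExample extends App {
  implicit val system: ActorSystem = ActorSystem("docs")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  // Like fold, but the first element serves as the starting value;
  // the materialized Future fails on an empty stream.
  val max: Future[Int] = Source(List(3, 1, 4, 1, 5)).runWith(Sink.reduce[Int](_ max _))
  max.foreach { m => println(m); system.terminate() }(system.dispatcher)
}
```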

@@ -8,7 +8,7 @@ Collect values emitted from the stream into a collection.

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #seq }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #seq }
@@@
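
An illustrative `Sink.seq` sketch (same setup assumptions as above):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }
import scala.collection.immutable
import scala.concurrent.Future

object SeqExample extends App {
  implicit val system: ActorSystem = ActorSystem("docs")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  // Collects every element into an immutable Seq; only safe for
  // streams that are known to be bounded.
  val all: Future[immutable.Seq[Int]] = Source(1 to 3).runWith(Sink.seq)
  all.foreach { xs => println(xs); system.terminate() }(system.dispatcher) // Vector(1, 2, 3)
}
```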

@@ -8,7 +8,7 @@ Collect the last `n` values emitted from the stream into a collection.

## Signature

@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #takeLast }
@@signature [Sink.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #takeLast }
@@@
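
A minimal `Sink.takeLast` sketch (illustrative only):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }
import scala.collection.immutable
import scala.concurrent.Future

object TakeLastExample extends App {
  implicit val system: ActorSystem = ActorSystem("docs")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  // Keeps a sliding window of the last n elements and materializes it
  // once the stream completes.
  val lastTwo: Future[immutable.Seq[Int]] = Source(1 to 5).runWith(Sink.takeLast(2))
  lastTwo.foreach { xs => println(xs); system.terminate() }(system.dispatcher) // Seq(4, 5)
}
```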

@@ -7,7 +7,7 @@ Attaches the given `Sink` to this `Flow`, meaning that elements that pass throug

@@@ div { .group-scala }

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #alsoTo }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #alsoTo }
@@@
## Description
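
The description body falls outside the hunk context; as a rough illustration of `alsoTo` (not part of the commit):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }

object AlsoToExample extends App {
  implicit val system: ActorSystem = ActorSystem("docs")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  // Every element is sent to the side sink as well as downstream,
  // which makes alsoTo handy as a wire tap, e.g. for logging.
  Source(1 to 3)
    .alsoTo(Sink.foreach(n => println(s"side channel: $n")))
    .runWith(Sink.foreach(n => println(s"downstream: $n")))
}
```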

@@ -7,7 +7,7 @@ Stream the values of an `immutable.Seq`.

@@@ div { .group-scala }

## Signature

@@signature [Source.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Source.scala) { #apply }
@@signature [Source.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Source.scala) { #apply }
@@@
## Description
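
An illustrative `Source.apply` sketch (not from the diff):

```scala
import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }

object SourceApplyExample extends App {
  implicit val system: ActorSystem = ActorSystem("docs")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  // Source(...) takes an immutable.Seq and emits its values in order,
  // then completes.
  val numbers: Source[Int, NotUsed] = Source(List(1, 2, 3))
  numbers.runWith(Sink.foreach(println))
}
```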

@@ -7,7 +7,7 @@ Use the `ask` pattern to send a request-reply message to the target `ref` actor.

@@@ div { .group-scala }

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #ask }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #ask }
@@@
## Description
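
A rough `ask` sketch (not part of the commit; it assumes the `ask[S](parallelism)(ref)` overload with an implicit `Timeout`, and the `Doubler` actor is a made-up example):

```scala
import akka.actor.{ Actor, ActorRef, ActorSystem, Props }
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }
import akka.util.Timeout
import scala.concurrent.duration._

object AskExample extends App {
  implicit val system: ActorSystem = ActorSystem("docs")
  implicit val mat: ActorMaterializer = ActorMaterializer()
  implicit val timeout: Timeout = Timeout(3.seconds)

  // A toy target actor that replies to every Int with its double.
  class Doubler extends Actor {
    def receive: Receive = { case n: Int => sender() ! n * 2 }
  }
  val doubler: ActorRef = system.actorOf(Props(new Doubler), "doubler")

  // Each element is sent to the actor with ask; the stream continues
  // with the typed replies, here Int.
  Source(1 to 3).ask[Int](parallelism = 4)(doubler).runWith(Sink.foreach(println))
}
```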

@@ -7,7 +7,7 @@ If the time between the emission of an element and the following downstream dema

@@@ div { .group-scala }

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #backpressureTimeout }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #backpressureTimeout }
@@@

@@ -7,7 +7,7 @@ Allow for a slower downstream by passing incoming elements and a summary into an

@@@ div { .group-scala }

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #batch }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #batch }
@@@

@@ -7,7 +7,7 @@ Allow for a slower downstream by passing incoming elements and a summary into an

@@@ div { .group-scala }

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #batchWeighted }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #batchWeighted }
@@@

@@ -7,7 +7,7 @@ Allow for temporarily faster upstream events by buffering `size` elements.

@@@ div { .group-scala }

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #buffer }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #buffer }
@@@
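
An illustrative `buffer` sketch (not from the diff):

```scala
import akka.actor.ActorSystem
import akka.stream.{ ActorMaterializer, OverflowStrategy }
import akka.stream.scaladsl.{ Sink, Source }

object BufferExample extends App {
  implicit val system: ActorSystem = ActorSystem("docs")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  // Hold up to 100 elements from a bursty upstream; once the buffer is
  // full this variant backpressures instead of dropping elements.
  Source(1 to 1000)
    .buffer(100, OverflowStrategy.backpressure)
    .runWith(Sink.foreach(println))
}
```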

@@ -8,7 +8,7 @@ Apply a partial function to each incoming element, if the partial function is de

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #collect }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #collect }
@@@
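
A minimal `collect` sketch (illustrative only):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }

object CollectExample extends App {
  implicit val system: ActorSystem = ActorSystem("docs")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  // The partial function filters and maps in one step: elements it is
  // not defined for are simply dropped.
  Source(List("1", "two", "3"))
    .collect { case s if s.forall(_.isDigit) => s.toInt }
    .runWith(Sink.foreach(println)) // prints 1 and 3
}
```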

@@ -8,7 +8,7 @@ Transform this stream by testing the type of each of the elements on which the e

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #collectType }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #collectType }
@@@

@@ -8,7 +8,7 @@ If the completion of the stream does not happen until the provided timeout, the

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #completionTimeout }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #completionTimeout }
@@@

@@ -8,7 +8,7 @@ After completion of the original upstream the elements of the given source will

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #concat }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #concat }
@@@

@@ -8,7 +8,7 @@ Allow for a slower downstream by passing incoming elements and a summary into an

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #conflate }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #conflate }
@@@
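
An illustrative `conflate` sketch (not part of the commit; `throttle` stands in for a slow consumer):

```scala
import akka.actor.ActorSystem
import akka.stream.{ ActorMaterializer, ThrottleMode }
import akka.stream.scaladsl.{ Sink, Source }
import scala.concurrent.duration._

object ConflateExample extends App {
  implicit val system: ActorSystem = ActorSystem("docs")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  // While the throttled downstream is busy, incoming numbers are summed
  // into a single aggregate instead of backpressuring upstream.
  Source(1 to 100)
    .conflate(_ + _)
    .throttle(1, 1.second, 1, ThrottleMode.Shaping)
    .runWith(Sink.foreach(total => println(s"aggregated: $total")))
}
```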

@@ -8,7 +8,7 @@ Allow for a slower downstream by passing incoming elements and a summary into an

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #conflateWithSeed }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #conflateWithSeed }
@@@

@@ -8,7 +8,7 @@ Delay every element passed through with a specific duration.

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #delay }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #delay }
@@@
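
A minimal `delay` sketch (illustrative only):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }
import scala.concurrent.duration._

object DelayExample extends App {
  implicit val system: ActorSystem = ActorSystem("docs")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  // Each element is emitted downstream no earlier than 500 millis
  // after it entered the stage.
  Source(1 to 3)
    .delay(500.millis)
    .runWith(Sink.foreach(println))
}
```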

@@ -8,7 +8,7 @@ Detach upstream demand from downstream demand without detaching the stream rates

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #detach }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #detach }
@@@

@@ -8,7 +8,7 @@ Each upstream element will either be diverted to the given sink, or the downstre

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #divertTo }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #divertTo }
@@@

@@ -8,7 +8,7 @@ Drop `n` elements and then pass any subsequent element downstream.

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #drop }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #drop }
@@@
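
An illustrative `drop` sketch (same setup assumptions as the earlier examples):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }

object DropExample extends App {
  implicit val system: ActorSystem = ActorSystem("docs")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  // The first two elements are discarded; everything after passes through.
  Source(1 to 5).drop(2).runWith(Sink.foreach(println)) // prints 3, 4, 5
}
```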

@@ -8,7 +8,7 @@ Drop elements as long as a predicate function returns true for the element

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #dropWhile }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #dropWhile }
@@@
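
A minimal `dropWhile` sketch (illustrative only):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }

object DropWhileExample extends App {
  implicit val system: ActorSystem = ActorSystem("docs")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  // Elements are dropped only until the predicate is first false; after
  // that every element passes, even ones matching the predicate again.
  Source(List(1, 2, 5, 1)).dropWhile(_ < 3).runWith(Sink.foreach(println)) // prints 5, 1
}
```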

@@ -8,7 +8,7 @@ Drop elements until a timeout has fired

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #dropWithin }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #dropWithin }
@@@

@@ -8,7 +8,7 @@ Like `extrapolate`, but does not have the `initial` argument, and the `Iterator`

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #expand }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #expand }
@@@

@@ -8,7 +8,7 @@ Allow for a faster downstream by expanding the last emitted element to an `Itera

## Signature

@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #extrapolate }
@@signature [Flow.scala](/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #extrapolate }
@@@
Some files were not shown because too many files have changed in this diff.