* ../scala and ../java links
* removed -java -scala from anchors
* fix FIXMEs of unresolved links
* and some more weird link texts
This commit is contained in:
Patrik Nordwall 2017-05-11 17:27:57 +02:00
parent 4b260fd9fc
commit 3d9a997036
106 changed files with 778 additions and 850 deletions

View file

@ -1,4 +0,0 @@
# Experimental Modules
The label experimental caused confusion and discomfort and has therefore been replaced with "May Change"
please see @ref:[Modules marked "May Change"](../scala/common/may-change.md).

View file

@ -1,4 +0,0 @@
# Experimental Modules
The label experimental caused confusion and discomfort and has therefore been replaced with "May Change"
please see @ref:[Modules marked "May Change"](../scala/common/may-change.md).

View file

@ -17,8 +17,9 @@ its syntax from Erlang.
Since Akka enforces parental supervision, every actor is supervised and
(potentially) the supervisor of its children, it is advisable that you
familiarize yourself with @ref:[Actor Systems](../scala/general/actor-systems.md) and <!-- FIXME: More than one link target with name supervision in path Some(/java/actors.rst) --> supervision and it
may also help to read @ref:[Actor References, Paths and Addresses](../scala/general/addressing.md).
familiarize yourself with @ref:[Actor Systems](general/actor-systems.md) and
@ref:[supervision](general/supervision.md) and it
may also help to read @ref:[Actor References, Paths and Addresses](general/addressing.md).
@@@
@ -140,7 +141,7 @@ create a child actor.
It is recommended to create a hierarchy of children, grand-children and so on
such that it fits the logical failure-handling structure of the application,
see @ref:[Actor Systems](../scala/general/actor-systems.md).
see @ref:[Actor Systems](general/actor-systems.md).
The call to `actorOf` returns an instance of `ActorRef`. This is a
handle to the actor instance and the only way to interact with it. The
@ -158,7 +159,7 @@ another child to the same parent an *InvalidActorNameException* is thrown.
Actors are automatically started asynchronously when created.
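As a minimal sketch of the `Props`/`actorOf` flow described above (class and actor names are illustrative, not taken from this commit):
```
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class CreateDemo {
  // a trivial actor; constructor parameters would be passed through Props
  static class Greeter extends AbstractActor {
    @Override
    public Receive createReceive() {
      return receiveBuilder()
          .match(String.class, s -> System.out.println("Hello " + s))
          .build();
    }
  }

  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("demo");
    // actorOf starts the actor asynchronously and returns its ActorRef,
    // the only handle for interacting with the actor instance
    ActorRef greeter = system.actorOf(Props.create(Greeter.class), "greeter");
    greeter.tell("world", ActorRef.noSender());
  }
}
```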
<a id="actor-create-factory-java"></a>
<a id="actor-create-factory"></a>
### Dependency Injection
If your actor has a constructor that takes parameters then those need to
@ -175,7 +176,7 @@ constructor arguments are determined by a dependency injection framework.
You might be tempted at times to offer an `IndirectActorProducer`
which always returns the same instance, e.g. by using a static field. This is
not supported, as it goes against the meaning of an actor restart, which is
described here: @ref:[What Restarting Means](../scala/general/supervision.md#supervision-restart).
described here: @ref:[What Restarting Means](general/supervision.md#supervision-restart).
When using a dependency injection framework, actor beans *MUST NOT* have
singleton scope.
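A hedged sketch of an `IndirectActorProducer` (the produced actor and its injected dependency are placeholders); the key point is that `produce` must return a fresh instance on every call:
```
import akka.actor.AbstractActor;
import akka.actor.Actor;
import akka.actor.IndirectActorProducer;

public class InjectedProducer implements IndirectActorProducer {
  // stands in for a collaborator that a DI framework would supply
  private final String dependency;

  public InjectedProducer(String dependency) {
    this.dependency = dependency;
  }

  @Override
  public Class<? extends Actor> actorClass() {
    return InjectedActor.class;
  }

  @Override
  public Actor produce() {
    // a fresh instance per call; caching one would break restart semantics
    return new InjectedActor(dependency);
  }

  public static class InjectedActor extends AbstractActor {
    private final String dependency;

    public InjectedActor(String dependency) {
      this.dependency = dependency;
    }

    @Override
    public Receive createReceive() {
      return receiveBuilder().matchAny(m -> {}).build();
    }
  }
}

// usage sketch: system.actorOf(Props.create(InjectedProducer.class, "dep"))
```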
@ -236,7 +237,7 @@ time).
* parent supervisor
* supervised children
* lifecycle monitoring
* hotswap behavior stack as described in [Become/Unbecome](#actor-hotswap-java)
* hotswap behavior stack as described in [Become/Unbecome](#actor-hotswap)
The remaining visible methods are user-overridable life-cycle hooks which are
described in the following:
@ -285,11 +286,11 @@ occupying it. `ActorSelection` cannot be watched for this reason. It is
possible to resolve the current incarnation's `ActorRef` living under the
path by sending an `Identify` message to the `ActorSelection` which
will be replied to with an `ActorIdentity` containing the correct reference
(see [Identifying Actors via Actor Selection](#actorselection-java)). This can also be done with the `resolveOne`
(see [Identifying Actors via Actor Selection](#actorselection)). This can also be done with the `resolveOne`
method of the `ActorSelection`, which returns a `Future` of the matching
`ActorRef`.
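For illustration, a sketch of the `Identify`/`ActorIdentity` handshake just described (the path and correlation id are assumptions, and `getActorRef()` is assumed to return an `Optional` in this Akka version):
```
import akka.actor.AbstractActor;
import akka.actor.ActorIdentity;
import akka.actor.ActorSelection;
import akka.actor.Identify;

public class Resolver extends AbstractActor {
  @Override
  public void preStart() {
    ActorSelection selection = getContext().actorSelection("/user/serviceA");
    selection.tell(new Identify("serviceA"), getSelf());
  }

  @Override
  public Receive createReceive() {
    return receiveBuilder()
        .match(ActorIdentity.class, id ->
            // an empty Optional means no actor lives under the path right now
            id.getActorRef().ifPresent(ref -> getContext().watch(ref)))
        .build();
  }
}
```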
<a id="deathwatch-java"></a>
<a id="deathwatch"></a>
### Lifecycle Monitoring aka DeathWatch
In order to be notified when another actor terminates (i.e. stops permanently,
@ -322,7 +323,7 @@ using `context.unwatch(target)`. This works even if the `Terminated`
message has already been enqueued in the mailbox; after calling `unwatch`
no `Terminated` message for that actor will be processed anymore.
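A minimal DeathWatch sketch (names assumed): the watcher registers interest and reacts to the `Terminated` message:
```
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Terminated;

public class Watcher extends AbstractActor {
  private final ActorRef target;

  public Watcher(ActorRef target) {
    this.target = target;
    getContext().watch(target); // register for the Terminated notification
  }

  @Override
  public Receive createReceive() {
    return receiveBuilder()
        .match(Terminated.class, t -> t.getActor().equals(target),
            t -> System.out.println("target terminated"))
        .build();
  }
}
```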
<a id="start-hook-java"></a>
<a id="start-hook"></a>
### Start Hook
Right after starting the actor, its `preStart` method is invoked.
@ -337,13 +338,13 @@ Initialization code which is part of the actors constructor will always be
called when an instance of the actor class is created, which happens at every
restart.
<a id="restart-hook-java"></a>
<a id="restart-hook"></a>
### Restart Hooks
All actors are supervised, i.e. linked to another actor with a fault
handling strategy. Actors may be restarted in case an exception is thrown while
processing a message (see <!-- FIXME: More than one link target with name supervision in path Some(/java/actors.rst) --> supervision). This restart involves the hooks
mentioned above:
processing a message (see @ref:[supervision](general/supervision.md)).
This restart involves the hooks mentioned above:
1. The old actor is informed by calling `preRestart` with the exception
@ -374,11 +375,11 @@ usual.
Be aware that the ordering of failure notifications relative to user messages
is not deterministic. In particular, a parent might restart its child before
it has processed the last messages sent by the child before the failure.
See @ref:[Discussion: Message Ordering](../scala/general/message-delivery-reliability.md#message-ordering) for details.
See @ref:[Discussion: Message Ordering](general/message-delivery-reliability.md#message-ordering) for details.
@@@
<a id="stop-hook-java"></a>
<a id="stop-hook"></a>
### Stop Hook
After stopping an actor, its `postStop` hook is called, which may be used
@ -387,10 +388,10 @@ to run after message queuing has been disabled for this actor, i.e. messages
sent to a stopped actor will be redirected to the `deadLetters` of the
`ActorSystem`.
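A sketch of the lifecycle hooks discussed above (the hook bodies are illustrative):
```
import akka.actor.AbstractActor;

public class ResourceActor extends AbstractActor {
  @Override
  public void preStart() {
    // acquire resources, start timers, contact other actors, ...
  }

  @Override
  public void postStop() {
    // release resources; anything sent to this actor from now on
    // ends up in the ActorSystem's deadLetters
  }

  @Override
  public Receive createReceive() {
    return receiveBuilder().matchAny(m -> {}).build();
  }
}
```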
<a id="actorselection-java"></a>
<a id="actorselection"></a>
## Identifying Actors via Actor Selection
As described in @ref:[Actor References, Paths and Addresses](../scala/general/addressing.md), each actor has a unique logical path, which
As described in @ref:[Actor References, Paths and Addresses](general/addressing.md), each actor has a unique logical path, which
is obtained by following the chain of actors from child to parent until
reaching the root of the actor system, and it has a physical path, which may
differ if the supervision chain includes any remote supervisors. These paths
@ -408,7 +409,7 @@ It is always preferable to communicate with other Actors using their ActorRef
instead of relying upon ActorSelection. Exceptions are
>
* sending messages using the @ref:[At-Least-Once Delivery](persistence.md#at-least-once-delivery-java) facility
* sending messages using the @ref:[At-Least-Once Delivery](persistence.md#at-least-once-delivery) facility
* initiating first contact with a remote system
In all other cases ActorRefs can be provided during Actor creation or
@ -460,7 +461,7 @@ Remote actor addresses may also be looked up, if @ref:[remoting](remoting.md) is
@@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #selection-remote }
An example demonstrating actor look-up is given in @ref:[Remoting Sample](remoting.md#remote-sample-java).
An example demonstrating actor look-up is given in @ref:[Remoting Sample](remoting.md#remote-sample).
## Messages and immutability
@ -496,7 +497,7 @@ In all these methods you have the option of passing along your own `ActorRef`.
Make it a practice of doing so, because it allows the receiver actors to respond
to your message, since the sender reference is sent along with the message.
<a id="actors-tell-sender-java"></a>
<a id="actors-tell-sender"></a>
### Tell: Fire-forget
This is the preferred way of sending messages. No blocking waiting for a
@ -513,7 +514,7 @@ different one. Outside of an actor and if no reply is needed the second
argument can be `null`; if a reply is needed outside of an actor you can use
the ask-pattern described next.
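A sketch of both variants (`target` and the message are placeholders):
```
// inside an actor: pass getSelf() so the receiver can reply to you
target.tell("hello", getSelf());

// outside an actor, when no reply is expected:
target.tell("hello", ActorRef.noSender());
```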
<a id="actors-ask-java"></a>
<a id="actors-ask"></a>
### Ask: Send-And-Receive-Future
The `ask` pattern involves actors as well as futures, hence it is offered as
@ -573,7 +574,7 @@ on the enclosing actor from within the callback. This would break the actor
encapsulation and may introduce synchronization bugs and race conditions because
the callback will be scheduled concurrently to the enclosing actor. Unfortunately
there is not yet a way to detect these illegal accesses at compile time. See also:
@ref:[Actors and shared mutable state](../scala/general/jmm.md#jmm-shared-state)
@ref:[Actors and shared mutable state](general/jmm.md#jmm-shared-state)
@@@
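A hedged sketch of `ask` combined with `pipe` (the `worker` ref and message are assumptions); piping the future delivers the reply as an ordinary message instead of closing over actor state:
```
import static akka.pattern.Patterns.ask;
import static akka.pattern.Patterns.pipe;

import akka.util.Timeout;
import java.util.concurrent.TimeUnit;
import scala.concurrent.Future;

// inside an actor's message handler:
Timeout timeout = new Timeout(5, TimeUnit.SECONDS);
Future<Object> reply = ask(worker, "job-42", timeout);
// deliver the eventual reply as a message rather than from a callback
pipe(reply, getContext().dispatcher()).to(getSender());
```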
@ -586,7 +587,7 @@ routers, load-balancers, replicators etc.
@@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #forward }
<a id="actors-receive-java"></a>
<a id="actors-receive"></a>
## Receive messages
An actor has to define its initial receive behavior by implementing
@ -667,7 +668,7 @@ Messages marked with `NotInfluenceReceiveTimeout` will not reset the timer. This
`ReceiveTimeout` should be fired by external inactivity but not influenced by internal activity,
e.g. scheduled tick messages.
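A sketch of arming and disarming the timeout (the duration is chosen arbitrarily):
```
import akka.actor.AbstractActor;
import akka.actor.ReceiveTimeout;
import scala.concurrent.duration.Duration;

public class InactivityActor extends AbstractActor {
  public InactivityActor() {
    // arm a 30 second inactivity window
    getContext().setReceiveTimeout(Duration.create(30, "seconds"));
  }

  @Override
  public Receive createReceive() {
    return receiveBuilder()
        .match(ReceiveTimeout.class, t -> {
          // react to the inactivity, then switch the timer off again
          getContext().setReceiveTimeout(Duration.Undefined());
        })
        .build();
  }
}
```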
<a id="stopping-actors-java"></a>
<a id="stopping-actors"></a>
## Stopping actors
Actors are stopped by invoking the `stop` method of an `ActorRefFactory`,
@ -687,7 +688,7 @@ Termination of an actor proceeds in two steps: first the actor suspends its
mailbox processing and sends a stop command to all its children, then it keeps
processing the internal termination notifications from its children until the last one is
gone, finally terminating itself (invoking `postStop`, dumping mailbox,
publishing `Terminated` on the [DeathWatch](#deathwatch-java), telling
publishing `Terminated` on the [DeathWatch](#deathwatch), telling
its supervisor). This procedure ensures that actor system sub-trees terminate
in an orderly fashion, propagating the stop command to the leaves and
collecting their confirmation back to the stopped supervisor. If one of the
@ -714,7 +715,7 @@ message which will eventually arrive.
@@@
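For reference, stopping via the context or the system (a sketch; the refs are placeholders):
```
// inside an actor: the context is the ActorRefFactory
getContext().stop(getSelf());   // stop yourself
// getContext().stop(child);    // or stop a child you created

// at the top level, the system is the ActorRefFactory:
// system.stop(ref);
```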
<a id="poison-pill-java"></a>
<a id="poison-pill"></a>
### PoisonPill
You can also send an actor the `akka.actor.PoisonPill` message, which will
@ -755,7 +756,7 @@ message, i.e. not for top-level actors.
@@@
<a id="coordinated-shutdown-java"></a>
<a id="coordinated-shutdown"></a>
### Coordinated Shutdown
There is an extension named `CoordinatedShutdown` that will stop certain actors and
@ -784,7 +785,7 @@ is only used for debugging/logging.
Tasks added to the same phase are executed in parallel without any ordering assumptions.
The next phase will not start until all tasks of the previous phase have been completed.
If tasks are not completed within a configured timeout (see @ref:[reference.conf](../scala/general/configuration.md#config-akka-actor))
If tasks are not completed within a configured timeout (see @ref:[reference.conf](general/configuration.md#config-akka-actor))
the next phase will be started anyway. It is possible to configure `recover=off` for a phase
to abort the rest of the shutdown process if a task fails or is not completed within the timeout.
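A sketch of registering a task with the extension (the task name and phase choice are illustrative assumptions):
```
import akka.Done;
import akka.actor.CoordinatedShutdown;
import java.util.concurrent.CompletableFuture;

// the returned CompletionStage signals when the task is done,
// subject to the configured timeout of the phase
CoordinatedShutdown.get(system).addTask(
    CoordinatedShutdown.PhaseBeforeServiceUnbind(), "log-shutdown",
    () -> {
      System.out.println("shutting down");
      return CompletableFuture.completedFuture(Done.getInstance());
    });
```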
@ -837,7 +838,7 @@ akka.coordinated-shutdown.run-by-jvm-shutdown-hook = off
akka.cluster.run-coordinated-shutdown-when-down = off
```
<a id="actor-hotswap-java"></a>
<a id="actor-hotswap"></a>
## Become/Unbecome
Akka supports hotswapping the Actor's message loop (e.g. its implementation) at
@ -870,7 +871,7 @@ behavior is not the default).
@@snip [ActorDocTest.java]($code$/java/jdocs/actor/ActorDocTest.java) { #swapper }
<a id="stash-java"></a>
<a id="stash"></a>
## Stash
The `AbstractActorWithStash` class enables an actor to temporarily stash away messages
@ -931,14 +932,14 @@ then you should use the `AbstractActorWithUnboundedStash` class instead.
@@@
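A minimal stash sketch (the message protocol is assumed): park messages until an "open" arrives, then replay them:
```
import akka.actor.AbstractActorWithStash;

public class GatedActor extends AbstractActorWithStash {
  @Override
  public Receive createReceive() {
    return receiveBuilder()
        .matchEquals("open", s -> {
          unstashAll(); // re-enqueue everything stashed so far
          getContext().become(receiveBuilder()
              .matchEquals("close", c -> getContext().unbecome())
              .matchAny(m -> System.out.println("handling " + m))
              .build(), false);
        })
        .matchAny(m -> stash()) // park messages until opened
        .build();
  }
}
```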
<a id="killing-actors-java"></a>
<a id="killing-actors"></a>
## Killing an Actor
You can kill an actor by sending a `Kill` message. This will cause the actor
to throw an `ActorKilledException`, triggering a failure. The actor will
suspend operation and its supervisor will be asked how to handle the failure,
which may mean resuming the actor, restarting it or terminating it completely.
See @ref:[What Supervision Means](../scala/general/supervision.md#supervision-directives) for more information.
See @ref:[What Supervision Means](general/supervision.md#supervision-directives) for more information.
Use `Kill` like this:
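The snippet referenced by the original document lies outside this hunk; an equivalent call looks like this (`victim` is whatever `ActorRef` you want to fail):
```
// triggers an ActorKilledException inside the target actor
victim.tell(akka.actor.Kill.getInstance(), ActorRef.noSender());
```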
@ -957,8 +958,7 @@ lost. It is important to understand that it is not put back on the mailbox. So
if you want to retry processing of a message, you need to deal with it yourself
by catching the exception and retry your flow. Make sure that you put a bound
on the number of retries since you don't want a system to livelock (so
consuming a lot of cpu cycles without making progress). Another possibility
would be to have a look at the <!-- FIXME: unresolved link reference: mailbox-acking --> mailbox-acking.
consuming a lot of CPU cycles without making progress).
### What happens to the mailbox
@ -969,7 +969,7 @@ messages on that mailbox will be there as well.
### What happens to the actor
If code within an actor throws an exception, that actor is suspended and the
supervision process is started (see <!-- FIXME: More than one link target with name supervision in path Some(/java/actors.rst) --> supervision). Depending on the
supervision process is started (see @ref:[supervision](general/supervision.md)). Depending on the
supervisor's decision the actor is resumed (as if nothing happened), restarted
(wiping out its internal state and starting from scratch) or terminated.
@ -1008,7 +1008,7 @@ Please note, that the child actors are *still restarted*, but no new `ActorRef`
the same principles for the children, ensuring that their `preStart()` method is called only at the creation of their
refs.
For more information see @ref:[What Restarting Means](../scala/general/supervision.md#supervision-restart).
For more information see @ref:[What Restarting Means](general/supervision.md#supervision-restart).
### Initialization via message passing

View file

@ -81,7 +81,7 @@ See @ref:[Futures](futures.md) for more information on `Futures`.
## Configuration
There are several configuration properties for the agents module, please refer
to the @ref:[reference configuration](../scala/general/configuration.md#config-akka-agent).
to the @ref:[reference configuration](general/configuration.md#config-akka-agent).
## Deprecated Transactional Agents

View file

@ -32,8 +32,8 @@ The above example exposes an actor over a TCP endpoint via Apache
Camel's [Mina component](http://camel.apache.org/mina2.html). The actor implements the *getEndpointUri* method to define
an endpoint from which it can receive messages. After starting the actor, TCP
clients can immediately send messages to and receive responses from that
actor. If the message exchange should go over HTTP (via Camel's <!-- FIXME: duplicate target id: jetty component --> `Jetty
component`_), the actor's *getEndpointUri* method should return a different URI, for instance "jetty:[http://localhost:8877/example](http://localhost:8877/example)".
actor. If the message exchange should go over HTTP (via Camel's Jetty
component), the actor's *getEndpointUri* method should return a different URI, for instance `jetty:http://localhost:8877/example`.
In the above case an extra constructor is added that can set the endpoint URI, which would result in
the *getEndpointUri* returning the URI that was set using this constructor.
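As a sketch of such a consumer (the endpoint URI and names are assumed, mirroring the TCP example described above):
```
import akka.camel.CamelMessage;
import akka.camel.javaapi.UntypedConsumerActor;

public class TcpConsumer extends UntypedConsumerActor {
  @Override
  public String getEndpointUri() {
    // the endpoint this actor consumes from
    return "mina2:tcp://localhost:6200?textline=true";
  }

  @Override
  public void onReceive(Object message) {
    if (message instanceof CamelMessage) {
      CamelMessage camelMessage = (CamelMessage) message;
      String body = camelMessage.getBodyAs(String.class, getCamelContext());
      // reply to the TCP client with the received body
      getSender().tell("received: " + body, getSelf());
    } else {
      unhandled(message);
    }
  }
}
```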
@ -72,13 +72,13 @@ You can also create a CamelMessage yourself with the appropriate body and header
The akka-camel module is implemented as an Akka Extension, the `CamelExtension` object.
Extensions will only be loaded once per `ActorSystem`, which will be managed by Akka.
The `CamelExtension` object provides access to the [Camel](@github@/akka-camel/src/main/scala/akka/camel/Camel.scala) interface.
The [Camel](@github@/akka-camel/src/main/scala/akka/camel/Camel.scala) interface in turn provides access to two important Apache Camel objects, the [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) and the <!-- FIXME: duplicate target id: producertemplate --> `ProducerTemplate`_.
The [Camel](@github@/akka-camel/src/main/scala/akka/camel/Camel.scala) interface in turn provides access to two important Apache Camel objects, the [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) and the `ProducerTemplate`.
Below you can see how you can get access to these Apache Camel objects.
@@snip [CamelExtensionTest.java]($code$/java/jdocs/camel/CamelExtensionTest.java) { #CamelExtension }
A `CamelExtension` is loaded only once per `ActorSystem`, which makes it safe to call the `CamelExtension` at any point in your code to get to the
Apache Camel objects associated with it. There is one [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) and one <!-- FIXME: duplicate target id: producertemplate --> `ProducerTemplate`_ for every one `ActorSystem` that uses a `CamelExtension`.
Apache Camel objects associated with it. There is one [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) and one `ProducerTemplate` for every `ActorSystem` that uses a `CamelExtension`.
By default, a new [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) is created when the `CamelExtension` starts. If you want to inject your own context instead,
you can implement the [ContextProvider](@github@/akka-camel/src/main/scala/akka/camel/ContextProvider.scala) interface and add the FQCN of your implementation in the config, as the value of the "akka.camel.context-provider".
This interface defines a single method `getContext()` used to load the [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java).
@ -88,7 +88,7 @@ Below an example on how to add the ActiveMQ component to the [CamelContext](http
@@snip [CamelExtensionTest.java]($code$/java/jdocs/camel/CamelExtensionTest.java) { #CamelExtensionAddComponent }
The [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) joins the lifecycle of the `ActorSystem` and `CamelExtension` it is associated with; the [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) is started when
the `CamelExtension` is created, and it is shut down when the associated `ActorSystem` is shut down. The same is true for the <!-- FIXME: unresolved link reference: producertemplate --> `ProducerTemplate`_.
the `CamelExtension` is created, and it is shut down when the associated `ActorSystem` is shut down. The same is true for the `ProducerTemplate`.
The `CamelExtension` is used by both *Producer* and *Consumer* actors to interact with Apache Camel internally.
You can access the `CamelExtension` inside a *Producer* or a *Consumer* using the `camel` method, or get straight at the *CamelContext*
@ -125,8 +125,8 @@ actor. Messages consumed by actors from Camel endpoints are of type
[CamelMessage](#camelmessage). These are immutable representations of Camel messages.
Here's another example that sets the endpointUri to
`jetty:http://localhost:8877/camel/default`. It causes Camel's <!-- FIXME: duplicate target id: jetty component --> `Jetty
component`_ to start an embedded [Jetty](http://www.eclipse.org/jetty/) server, accepting HTTP connections
`jetty:http://localhost:8877/camel/default`. It causes Camel's Jetty
component to start an embedded [Jetty](http://www.eclipse.org/jetty/) server, accepting HTTP connections
from localhost on port 8877.
@@snip [Consumer2.java]($code$/java/jdocs/camel/Consumer2.java) { #Consumer2 }
@ -138,7 +138,7 @@ client the response type should be [CamelMessage](#camelmessage). For any other
new CamelMessage object is created by akka-camel with the actor response as message
body.
<a id="camel-acknowledgements-java"></a>
<a id="camel-acknowledgements"></a>
### Delivery acknowledgements
With in-out message exchanges, clients usually know that a message exchange is
@ -158,14 +158,14 @@ acknowledgement).
@@snip [Consumer3.java]($code$/java/jdocs/camel/Consumer3.java) { #Consumer3 }
<a id="camel-timeout-java"></a>
<a id="camel-timeout"></a>
### Consumer timeout
Camel Exchanges (and their corresponding endpoints) that support two-way communications need to wait for a response from
an actor before returning it to the initiating client.
For some endpoint types, timeout values can be defined in an endpoint-specific
way which is described in the documentation of the individual <!-- FIXME: duplicate target id: camel components --> `Camel
components`_. Another option is to configure timeouts on the level of consumer actors.
way which is described in the documentation of the individual Camel
components. Another option is to configure timeouts on the level of consumer actors.
Two-way communications between a Camel endpoint and an actor are
initiated by sending the request message to the actor with the [ask](@github@/akka-actor/src/main/scala/akka/pattern/Patterns.scala) pattern
@ -197,7 +197,7 @@ Producer actor and waits for a response.
The future contains the response CamelMessage, or an `AkkaCamelException` when an error occurred, which contains the headers of the response.
<a id="camel-custom-processing-java"></a>
<a id="camel-custom-processing"></a>
### Custom Processing
Instead of replying to the initial sender, producer actors can implement custom
@ -235,7 +235,7 @@ To correlate request with response messages, applications can set the
### ProducerTemplate
The [UntypedProducerActor](@github@/akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala) class is a very convenient way for actors to produce messages to Camel endpoints.
Actors may also use a Camel <!-- FIXME: unresolved link reference: producertemplate --> `ProducerTemplate`_ for producing messages to endpoints.
Actors may also use a Camel `ProducerTemplate` for producing messages to endpoints.
@@snip [MyActor.java]($code$/java/jdocs/camel/MyActor.java) { #ProducerTemplate }
@ -244,7 +244,7 @@ For initiating a two-way message exchange, one of the
@@snip [RequestBodyActor.java]($code$/java/jdocs/camel/RequestBodyActor.java) { #RequestProducerTemplate }
<a id="camel-asynchronous-routing-java"></a>
<a id="camel-asynchronous-routing"></a>
## Asynchronous routing
In-out message exchanges between endpoints and actors are
@ -259,18 +259,18 @@ asynchronous routing engine. Asynchronous responses are wrapped and added to the
producer actor's mailbox for later processing. By default, response messages are
returned to the initial sender but this can be overridden by Producer
implementations (see also description of the `onRouteResponse` method
in [Custom Processing](#camel-custom-processing-java)).
in [Custom Processing](#camel-custom-processing)).
However, asynchronous two-way message exchanges, without allocating a thread for
the full duration of the exchange, cannot be generically supported by Camel's
asynchronous routing engine alone. This must be supported by the individual
<!-- FIXME: duplicate target id: camel components --> `Camel components`_ (from which endpoints are created) as well. They must be
Camel components (from which endpoints are created) as well. They must be
able to suspend any work started for request processing (thereby freeing threads
to do other work) and resume processing when the response is ready. This is
currently the case for a [subset of components](http://camel.apache.org/asynchronous-routing-engine.html) such as the <!-- FIXME: duplicate target id: jetty component --> `Jetty component`_.
currently the case for a [subset of components](http://camel.apache.org/asynchronous-routing-engine.html) such as the Jetty component.
All other Camel components can still be used, of course, but they will cause
allocation of a thread for the duration of an in-out message exchange. There's
also [Examples](#camel-examples-java) that implements both, an asynchronous
also [Examples](#camel-examples) that implements both an asynchronous
consumer and an asynchronous producer, with the jetty component.
If the used Camel component is blocking it might be necessary to use a separate
@ -297,22 +297,22 @@ most use cases, some applications may require more specialized routes to actors.
The akka-camel module provides two mechanisms for customizing routes to actors,
which will be explained in this section. These are:
* Usage of [Akka Camel components](#camel-components-java) to access actors.
* Usage of [Akka Camel components](#camel-components) to access actors.
Any Camel route can use these components to access Akka actors.
* [Intercepting route construction](#camel-intercepting-route-construction-java) to actors.
* [Intercepting route construction](#camel-intercepting-route-construction) to actors.
This option gives you the ability to change routes that have already been added to Camel.
Consumer actors have a hook into the route definition process which can be used to change the route.
<a id="camel-components-java"></a>
<a id="camel-components"></a>
### Akka Camel components
Akka actors can be accessed from Camel routes using the <!-- FIXME: duplicate target id: actor --> `actor`_ Camel component. This component can be used to
Akka actors can be accessed from Camel routes using the actor Camel component. This component can be used to
access any Akka actor (not only consumer actors) from Camel routes, as described in the following sections.
<a id="access-to-actors-java"></a>
<a id="access-to-actors"></a>
### Access to actors
To access actors from custom Camel routes, the <!-- FIXME: duplicate target id: actor --> `actor`_ Camel
To access actors from custom Camel routes, the actor Camel
component should be used. It fully supports Camel's [asynchronous routing
engine](http://camel.apache.org/asynchronous-routing-engine.html).
@ -336,14 +336,14 @@ for instance `10 seconds` except that
in the url it is handy to use a +
between the amount and the unit, like
for example `200+millis`
See also [Consumer timeout](#camel-timeout-java).|
See also [Consumer timeout](#camel-timeout).|
|autoAck | Boolean | true |
If set to true, in-only message exchanges
are auto-acknowledged when the message is
added to the actor's mailbox. If set to
false, actors must acknowledge the
receipt of the message.
See also [Delivery acknowledgements](#camel-acknowledgements-java). |
See also [Delivery acknowledgements](#camel-acknowledgements). |
Here's an actor endpoint URI example containing an actor path:
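The concrete URI is outside this hunk; a representative form (system and actor names assumed) would be:
```
akka://some-system/user/myconsumer?autoAck=false&replyTimeout=100+millis
```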
@ -364,10 +364,10 @@ The *CamelPath.toCamelUri* converts the *ActorRef* to the Camel actor component
When a message is received on the jetty endpoint, it is routed to the Responder actor, which in return replies back to the client of
the HTTP request.
<a id="camel-intercepting-route-construction-java"></a>
<a id="camel-intercepting-route-construction"></a>
### Intercepting route construction
The previous section, [Akka Camel components](#camel-components-java), explained how to setup a route to
The previous section, [Akka Camel components](#camel-components), explained how to set up a route to
an actor manually.
It was the application's responsibility to define the route and add it to the current CamelContext.
This section explains a more convenient way to define custom routes: akka-camel is still setting up the routes to consumer actors
@ -391,13 +391,13 @@ returns a ProcessorDefinition (in the above example, the ProcessorDefinition
returned by the end method. See the [org.apache.camel.model](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/model/) package for
details). After executing the route definition handler, akka-camel finally calls
`to(targetActorUri)` on the returned ProcessorDefinition to complete the
route to the consumer actor (where targetActorUri is the actor component URI as described in [Access to actors](#access-to-actors-java)).
route to the consumer actor (where targetActorUri is the actor component URI as described in [Access to actors](#access-to-actors)).
If the actor cannot be found, an *ActorNotRegisteredException* is thrown.
*) Before passing the RouteDefinition instance to the route definition handler,
akka-camel may make some further modifications to it.
<a id="camel-examples-java"></a>
<a id="camel-examples"></a>
## Examples
The sample named [Akka Camel Samples with Java](@exampleCodeService@/akka-samples-camel-java) ([source code](@samples@/akka-sample-camel-java))
@ -405,7 +405,7 @@ contains 3 samples:
>
* Asynchronous routing and transformation - This example demonstrates how to implement consumer and
producer actors that support [Asynchronous routing](#camel-asynchronous-routing-java) with their Camel endpoints.
producer actors that support [Asynchronous routing](#camel-asynchronous-routing) with their Camel endpoints.
* Custom Camel route - Demonstrates the combined usage of a `Producer` and a
`Consumer` actor as well as the inclusion of a custom Camel route.
* Quartz Scheduler Example - Showing how simple it is to implement a cron-style scheduler by
@ -414,7 +414,7 @@ using the Camel Quartz component
## Configuration
There are several configuration properties for the Camel module; please refer
to the @ref:[reference configuration](../scala/general/configuration.md#config-akka-camel).
to the @ref:[reference configuration](general/configuration.md#config-akka-camel).
## Additional Resources

View file

@ -101,7 +101,7 @@ The `initialContacts` parameter is a `Set<ActorPath>`, which can be created like
@@snip [ClusterClientTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java) { #initialContacts }
You will probably define the address information of the initial contact points in configuration or as a system property.
See also [Configuration](#cluster-client-config-java).
See also [Configuration](#cluster-client-config).
A more comprehensive sample is available in the tutorial named [Distributed workers with Akka and Java!](https://github.com/typesafehub/activator-akka-distributed-workers-java).
@ -155,7 +155,7 @@ maven:
</dependency>
```
<a id="cluster-client-config-java"></a>
<a id="cluster-client-config"></a>
## Configuration
The `ClusterClientReceptionist` extension (or `ClusterReceptionistSettings`) can be configured

View file

@ -29,7 +29,7 @@ and add the following configuration stanza to your `application.conf`
akka.extensions = [ "akka.cluster.metrics.ClusterMetricsExtension" ]
```
Cluster members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up-java),
Cluster members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up),
will participate in Cluster Metrics collection and dissemination.
## Metrics Collector
@ -120,7 +120,7 @@ It can be configured to use a specific MetricsSelector to produce the probabilit
* `mix` / `MixMetricsSelector` - Combines heap, cpu and load. Weights based on mean of remaining capacity of the combined selectors.
* Any custom implementation of `akka.cluster.metrics.MetricsSelector`
The collected metrics values are smoothed with [exponential weighted moving average](http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average). In the @ref:[cluster_configuration_java](cluster-usage.md#cluster-configuration-java) you can adjust how quickly past data is decayed compared to new data.
The collected metrics values are smoothed with [exponential weighted moving average](http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average). In the @ref:[Cluster configuration](cluster-usage.md#cluster-configuration) you can adjust how quickly past data is decayed compared to new data.
Let's take a look at this router in action. What can be more demanding than calculating factorials?

View file

@ -19,7 +19,7 @@ the sender to know the location of the destination actor. This is achieved by se
the messages via a `ShardRegion` actor provided by this extension, which knows how
to route the message with the entity id to the final destination.
Cluster sharding will not be active on members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up-java)
Cluster sharding will not be active on members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up)
if that feature is enabled.
@@@ warning
@ -27,7 +27,7 @@ if that feature is enabled.
**Don't use Cluster Sharding together with Automatic Downing**,
since it allows the cluster to split up into two separate clusters, which in turn will result
in *multiple shards and entities* being started, one in each separate cluster!
See @ref:[Downing](cluster-usage.md#automatic-vs-manual-downing-java).
See @ref:[Downing](cluster-usage.md#automatic-vs-manual-downing).
@@@
@ -166,7 +166,7 @@ must be to begin the rebalancing. This strategy can be replaced by an applicatio
implementation.
The state of shard locations in the `ShardCoordinator` is persistent (durable) with
@ref:[distributed_data_java](distributed-data.md) or @ref:[Persistence](persistence.md) to survive failures. When a crashed or
@ref:[Distributed Data](distributed-data.md) or @ref:[Persistence](persistence.md) to survive failures. When a crashed or
unreachable coordinator node has been removed (via down) from the cluster a new `ShardCoordinator` singleton
actor will take over and the state is recovered. During such a failure period shards
with known location are still available, while messages for new (unknown) shards
@ -183,11 +183,11 @@ unused shards due to the round-trip to the coordinator. Rebalancing of shards ma
also add latency. This should be considered when designing the application specific
shard resolution, e.g. to avoid too fine grained shards.
<a id="cluster-sharding-mode-java"></a>
<a id="cluster-sharding-mode"></a>
## Distributed Data vs. Persistence Mode
The state of the coordinator and the state of [cluster_sharding_remembering_java](#cluster-sharding-remembering-java) of the shards
are persistent (durable) to survive failures. @ref:[distributed_data_java](distributed-data.md) or @ref:[Persistence](persistence.md)
The state of the coordinator and the state of [Remembering Entities](#cluster-sharding-remembering) of the shards
are persistent (durable) to survive failures. @ref:[Distributed Data](distributed-data.md) or @ref:[Persistence](persistence.md)
can be used for the storage. Distributed Data is used by default.
The functionality when using the two modes is the same. If your sharded entities are not using Akka Persistence
@ -207,11 +207,11 @@ akka.cluster.sharding.state-store-mode = ddata
```
The state of the `ShardCoordinator` will be replicated inside a cluster by the
@ref:[distributed_data_java](distributed-data.md) module with `WriteMajority`/`ReadMajority` consistency.
@ref:[Distributed Data](distributed-data.md) module with `WriteMajority`/`ReadMajority` consistency.
The state of the coordinator is not durable; it is not stored to disk. When all nodes in
the cluster have been stopped the state is lost and not needed any more.
The state of [cluster_sharding_remembering_java](#cluster-sharding-remembering-java) is also durable, i.e. it is stored to
The state of [Remembering Entities](#cluster-sharding-remembering) is also durable, i.e. it is stored to
disk. The stored entities are started also after a complete cluster restart.
Cluster Sharding uses its own Distributed Data `Replicator` per node role. In this way you can use a subset of
@ -241,7 +241,7 @@ until at least that number of regions have been started and registered to the co
avoids allocating many shards to the first region that registers, only to have them
rebalanced to other nodes later.
See @ref:[min-members_java](cluster-usage.md#min-members-java) for more information about `min-nr-of-members`.
See @ref:[How To Startup when Cluster Size Reached](cluster-usage.md#min-members) for more information about `min-nr-of-members`.
## Proxy Only Mode
@ -263,7 +263,7 @@ then supposed to stop itself. Incoming messages will be buffered by the `Shard`
between reception of `Passivate` and termination of the entity. Such buffered messages
are thereafter delivered to a new incarnation of the entity.
<a id="cluster-sharding-remembering-java"></a>
<a id="cluster-sharding-remembering"></a>
## Remembering Entities
The list of entities in each `Shard` can be made persistent (durable) by setting
@ -275,8 +275,8 @@ a `Passivate` message must be sent to the parent of the entity actor, otherwise
entity will be automatically restarted after the entity restart backoff specified in
the configuration.
When [Distributed Data mode](#cluster-sharding-mode-java) is used the identifiers of the entities are
stored in @ref:[ddata_durable_java](distributed-data.md#ddata-durable-java) of Distributed Data. You may want to change the
When [Distributed Data mode](#cluster-sharding-mode) is used the identifiers of the entities are
stored in @ref:[Durable Storage](distributed-data.md#ddata-durable) of Distributed Data. You may want to change the
configuration of `akka.cluster.sharding.distributed-data.durable.lmdb.dir`, since
the default directory contains the remote port of the actor system. If using a dynamically
assigned port (0) it will be different each time and the previously stored data will not
@ -316,10 +316,10 @@ to the `ShardRegion` actor to handoff all shards that are hosted by that `ShardR
During this period other regions will buffer messages for those shards in the same way as when a rebalance is
triggered by the coordinator. When the shards have been stopped the coordinator will allocate these shards elsewhere.
This is performed automatically by the @ref:[Coordinated Shutdown](actors.md#coordinated-shutdown-java) and is therefore part of the
This is performed automatically by the @ref:[Coordinated Shutdown](actors.md#coordinated-shutdown) and is therefore part of the
graceful leaving process of a cluster member.
<a id="removeinternalclustershardingdata-java"></a>
<a id="removeinternalclustershardingdata"></a>
## Removal of Internal Cluster Sharding Data
The Cluster Sharding coordinator stores the locations of the shards using Akka Persistence.
@ -346,7 +346,7 @@ and there was a network partition.
**Don't use Cluster Sharding together with Automatic Downing**,
since it allows the cluster to split up into two separate clusters, which in turn will result
in *multiple shards and entities* being started, one in each separate cluster!
See @ref:[Downing](cluster-usage.md#automatic-vs-manual-downing-java).
See @ref:[Downing](cluster-usage.md#automatic-vs-manual-downing).
@@@

View file

@ -50,7 +50,7 @@ It's worth noting that messages can always be lost because of the distributed na
As always, additional logic should be implemented in the singleton (acknowledgement) and in the
client (retry) actors to ensure at-least-once message delivery.
The singleton instance will not run on members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up-java).
The singleton instance will not run on members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up).
## Potential problems to be aware of
@ -60,7 +60,7 @@ This pattern may seem to be very tempting to use at first, but it has several dr
* you cannot rely on the cluster singleton to be *non-stop* available — e.g. when the node on which the singleton has
been running dies, it will take a few seconds for this to be noticed and the singleton be migrated to another node,
* in the case of a *network partition* appearing in a Cluster that is using Automatic Downing (see docs for
@ref:[Downing](cluster-usage.md#automatic-vs-manual-downing-java)),
@ref:[Downing](cluster-usage.md#automatic-vs-manual-downing)),
it may happen that the isolated clusters each decide to spin up their own singleton, meaning that there might be multiple
singletons running in the system, yet the Clusters have no way of finding out about them (because of the partition).

View file

@ -1,6 +1,6 @@
# Cluster Usage
For introduction to the Akka Cluster concepts please see <!-- FIXME: More than one link target with name cluster in path Some(/java/cluster-usage.rst) --> cluster.
For introduction to the Akka Cluster concepts please see @ref:[Cluster Specification](common/cluster.md).
## Preparing Your Project for Clustering
@ -14,7 +14,7 @@ The Akka cluster is a separate jar file. Make sure that you have the following d
</dependency>
```
<a id="cluster-simple-example-java"></a>
<a id="cluster-simple-example"></a>
## A Simple Cluster Example
The following configuration enables the `Cluster` extension to be used.
@ -65,7 +65,7 @@ The `akka.cluster.seed-nodes` should normally also be added to your `application
@@@ note
If you are running Akka in a Docker container or the nodes for some other reason have separate internal and
external ip addresses you must configure remoting according to @ref:[Akka behind NAT or in a Docker container](remoting.md#remote-configuration-nat-java)
external IP addresses you must configure remoting according to @ref:[Akka behind NAT or in a Docker container](remoting.md#remote-configuration-nat)
@@@
@ -94,7 +94,7 @@ it sends a message to all seed nodes and then sends join command to the one that
answers first. If none of the seed nodes replied (they might not be started yet)
it retries this procedure until successful or shutdown.
You define the seed nodes in the [cluster_configuration_java](#cluster-configuration-java) file (application.conf):
You define the seed nodes in the [configuration](#cluster-configuration) file (application.conf):
```
akka.cluster.seed-nodes = [
@ -125,7 +125,7 @@ seed nodes in the existing cluster.
If you don't configure seed nodes you need to join the cluster programmatically or manually.
Manual joining can be performed by using [cluster_jmx_java](#cluster-jmx-java) or [cluster_http_java](#cluster-http-java).
Manual joining can be performed by using [JMX](#cluster-jmx) or [HTTP](#cluster-http).
Joining programmatically can be performed with `Cluster.get(system).join`. Unsuccessful join attempts are
automatically retried after the time period defined in configuration property `retry-unsuccessful-join-after`.
Retries can be disabled by setting the property to `off`.
@ -160,7 +160,7 @@ when you start the `ActorSystem`.
@@@
<a id="automatic-vs-manual-downing-java"></a>
<a id="automatic-vs-manual-downing"></a>
## Downing
When a member is considered by the failure detector to be unreachable the
@ -168,7 +168,7 @@ leader is not allowed to perform its duties, such as changing status of
new joining members to 'Up'. The node must first become reachable again, or the
status of the unreachable member must be changed to 'Down'. Changing status to 'Down'
can be performed automatically or manually. By default it must be done manually, using
[cluster_jmx_java](#cluster-jmx-java) or [cluster_http_java](#cluster-http-java).
[JMX](#cluster-jmx) or [HTTP](#cluster-http).
It can also be performed programmatically with `Cluster.get(system).down(address)`.
@ -201,7 +201,7 @@ can also happen because of long GC pauses or system overload.
We recommend against using the auto-down feature of Akka Cluster in production.
This is crucial for correct behavior if you use @ref:[Cluster Singleton](cluster-singleton.md) or
@ref:[cluster_sharding_java](cluster-sharding.md), especially together with Akka @ref:[Persistence](persistence.md).
@ref:[Cluster Sharding](cluster-sharding.md), especially together with Akka @ref:[Persistence](persistence.md).
For Akka Persistence with Cluster Sharding it can result in corrupt data in case
of network partitions.
@ -216,7 +216,7 @@ as unreachable and removed after the automatic or manual downing as described
above.
A more graceful exit can be performed if you tell the cluster that a node shall leave.
This can be performed using [cluster_jmx_java](#cluster-jmx-java) or [cluster_http_java](#cluster-http-java).
This can be performed using [JMX](#cluster-jmx) or [HTTP](#cluster-http).
It can also be performed programmatically with:
@@snip [ClusterDocTest.java]($code$/java/jdocs/cluster/ClusterDocTest.java) { #leave }
@ -224,7 +224,7 @@ It can also be performed programmatically with:
Note that this command can be issued to any member in the cluster, not necessarily the
one that is leaving.
The @ref:[Coordinated Shutdown](actors.md#coordinated-shutdown-java) will automatically run when the cluster node sees itself as
The @ref:[Coordinated Shutdown](actors.md#coordinated-shutdown) will automatically run when the cluster node sees itself as
`Exiting`, i.e. leaving from another node will trigger the shutdown process on the leaving node.
Tasks for graceful leaving of cluster including graceful shutdown of Cluster Singletons and
Cluster Sharding are added automatically when Akka Cluster is used, i.e. running the shutdown
@ -233,7 +233,7 @@ process will also trigger the graceful leaving if it's not already in progress.
Normally this is handled automatically, but in case of network failures during this process it might still
be necessary to set the nodes status to `Down` in order to complete the removal.
<a id="weakly-up-java"></a>
<a id="weakly-up"></a>
## WeaklyUp Members
If a node is `unreachable` then gossip convergence is not possible and therefore any
@ -255,7 +255,7 @@ in this state, but you should be aware of that members on the other side of a ne
have no knowledge about the existence of the new members. You should for example not count
`WeaklyUp` members in quorum decisions.
<a id="cluster-subscriber-java"></a>
<a id="cluster-subscriber"></a>
## Subscribe to Cluster Events
You can subscribe to change notifications of the cluster membership by using
@ -349,7 +349,7 @@ and it is typically defined in the start script as a system property or environm
The roles of the nodes are part of the membership information in `MemberEvent` that you can subscribe to.
<a id="min-members-java"></a>
<a id="min-members"></a>
## How To Startup when Cluster Size Reached
A common use case is to start actors after the cluster has been initialized,
@ -385,7 +385,7 @@ This callback can be used for other things than starting actors.
You can do some clean up in a `registerOnMemberRemoved` callback, which will
be invoked when the current member status is changed to 'Removed' or the cluster has been shut down.
An alternative is to register tasks to the @ref:[Coordinated Shutdown](actors.md#coordinated-shutdown-java).
An alternative is to register tasks to the @ref:[Coordinated Shutdown](actors.md#coordinated-shutdown).
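A sketch of both callbacks (the started actor class is a placeholder):
```
import akka.actor.Props;
import akka.cluster.Cluster;

Cluster cluster = Cluster.get(system);
// start application actors only when this node's status is Up
cluster.registerOnMemberUp(() ->
    system.actorOf(Props.create(MainWorker.class), "worker"));
// clean up when the node has been removed (or the cluster shut down)
cluster.registerOnMemberRemoved(() -> system.terminate());
```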
@@@ note
@ -411,7 +411,7 @@ Distributes actors across several nodes in the cluster and supports interaction
with the actors using their logical identifier, but without having to care about
their physical location in the cluster.
See @ref:[cluster_sharding_java](cluster-sharding.md).
See @ref:[Cluster Sharding](cluster-sharding.md).
## Distributed Publish Subscribe
@ -434,7 +434,7 @@ See @ref:[Cluster Client](cluster-client.md).
*Akka Distributed Data* is useful when you need to share data between nodes in an
Akka Cluster. The data is accessed with an actor providing a key-value store like API.
See @ref:[distributed_data_java](distributed-data.md).
See @ref:[Distributed Data](distributed-data.md).
## Failure Detector
@ -472,7 +472,7 @@ phi = -log10(1 - F(timeSinceLastHeartbeat))
where F is the cumulative distribution function of a normal distribution with mean
and standard deviation estimated from historical heartbeat inter-arrival times.
In the [cluster_configuration_java](#cluster-configuration-java) you can adjust the `akka.cluster.failure-detector.threshold`
In the [configuration](#cluster-configuration) you can adjust the `akka.cluster.failure-detector.threshold`
to define when a *phi* value is considered to be a failure.
A low `threshold` is prone to generate many false positives but ensures
@ -498,7 +498,7 @@ a standard deviation of 100 ms.
To be able to survive sudden abnormalities, such as garbage collection pauses and
transient network failures the failure detector is configured with a margin,
`akka.cluster.failure-detector.acceptable-heartbeat-pause`. You may want to
adjust the [cluster_configuration_java](#cluster-configuration-java) of this depending on you environment.
adjust the [configuration](#cluster-configuration) of this depending on your environment.
This is how the curve looks for `acceptable-heartbeat-pause` configured to
3 seconds.
@ -510,7 +510,7 @@ actor. Death watch generates the `Terminated` message to the watching actor when
unreachable cluster node has been downed and removed.
If you encounter suspicious false positives when the system is under load you should
define a separate dispatcher for the cluster actors as described in [cluster_dispatcher_java](#cluster-dispatcher-java).
define a separate dispatcher for the cluster actors as described in [Cluster Dispatcher](#cluster-dispatcher).
## Cluster Aware Routers
@ -521,7 +521,7 @@ automatically unregistered from the router. When new nodes join the cluster addi
routees are added to the router, according to the configuration. Routees are also added
when a node becomes reachable again, after having been unreachable.
Cluster aware routers make use of members with status [WeaklyUp](#weakly-up-java).
Cluster aware routers make use of members with status [WeaklyUp](#weakly-up).
There are two distinct types of routers.
@ -565,7 +565,7 @@ the router will try to use them as soon as the member status is changed to 'Up'.
The actor paths without address information that are defined in `routees.paths` are used for selecting the
actors to which the messages will be forwarded by the router.
Messages will be forwarded to the routees using @ref:[ActorSelection](actors.md#actorselection-java), so the same delivery semantics should be expected.
Messages will be forwarded to the routees using @ref:[ActorSelection](actors.md#actorselection), so the same delivery semantics should be expected.
It is possible to limit the lookup of routees to member nodes tagged with a certain role by specifying `use-role`.
`max-total-nr-of-instances` defines the total number of routees in the cluster. By default `max-total-nr-of-instances`
@ -576,7 +576,7 @@ The same type of router could also have been defined in code:
@@snip [StatsService.java]($code$/java/jdocs/cluster/StatsService.java) { #router-lookup-in-code }
See [cluster_configuration_java](#cluster-configuration-java) section for further descriptions of the settings.
See the [configuration](#cluster-configuration) section for further descriptions of the settings.
### Router Example with Group of Routees
@ -660,7 +660,7 @@ The same type of router could also have been defined in code:
@@snip [StatsService.java]($code$/java/jdocs/cluster/StatsService.java) { #router-deploy-in-code }
See [cluster_configuration_java](#cluster-configuration-java) section for further descriptions of the settings.
See the [configuration](#cluster-configuration) section for further descriptions of the settings.
### Router Example with Pool of Remote Deployed Routees
@ -705,13 +705,13 @@ and to the registered subscribers on the system event bus with the help of `clus
## Management
<a id="cluster-http-java"></a>
<a id="cluster-http"></a>
### HTTP
Information and management of the cluster is available with an HTTP API.
See documentation of [akka/akka-cluster-management](https://github.com/akka/akka-cluster-management).
See documentation of [Akka Management](http://developer.lightbend.com/docs/akka-management/current/).
<a id="cluster-jmx-java"></a>
<a id="cluster-jmx"></a>
### JMX
Information and management of the cluster is available as JMX MBeans with the root name `akka.Cluster`.
@ -728,18 +728,18 @@ From JMX you can:
Member nodes are identified by their address, in the format *akka.<protocol>://<actor-system-name>@<hostname>:<port>*.
<a id="cluster-command-line-java"></a>
<a id="cluster-command-line"></a>
### Command Line
@@@ warning
**Deprecation warning** - The command line script has been deprecated and is scheduled for removal
in the next major version. Use the [cluster_http_java](#cluster-http-java) API with [curl](https://curl.haxx.se/)
in the next major version. Use the [HTTP management](#cluster-http) API with [curl](https://curl.haxx.se/)
or similar instead.
@@@
The cluster can be managed with the script `akka-cluster` provided in the Akka github repository here: @[github@/akka-cluster/jmx-client](mailto:github@/akka-cluster/jmx-client). Place the script and the `jmxsh-R5.jar` library in the same directory.
The cluster can be managed with the script `akka-cluster` provided in the Akka github repository here: [@github@/akka-cluster/jmx-client](@github@/akka-cluster/jmx-client). Place the script and the `jmxsh-R5.jar` library in the same directory.
Run it without parameters to see instructions about how to use the script:
@ -771,11 +771,11 @@ To be able to use the script you must enable remote monitoring and management wh
as described in [Monitoring and Management Using JMX Technology](http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html).
Make sure you understand the security implications of enabling remote monitoring and management.
<a id="cluster-configuration-java"></a>
<a id="cluster-configuration"></a>
## Configuration
There are several configuration properties for the cluster. We refer to the
@ref:[reference configuration](../scala/general/configuration.md#config-akka-cluster) for more information.
@ref:[reference configuration](general/configuration.md#config-akka-cluster) for more information.
### Cluster Info Logging
@ -785,7 +785,7 @@ You can silence the logging of cluster events at info level with configuration p
akka.cluster.log-info = off
```
<a id="cluster-dispatcher-java"></a>
<a id="cluster-dispatcher"></a>
### Cluster Dispatcher
Under the hood the cluster extension is implemented with actors and it can be necessary

View file

@ -13,7 +13,7 @@ dispatchers in this ActorSystem. If no ExecutionContext is given, it will fallba
`akka.actor.default-dispatcher.default-executor.fallback`. By default this is a "fork-join-executor", which
gives excellent performance in most cases.
<a id="dispatcher-lookup-java"></a>
<a id="dispatcher-lookup"></a>
## Looking up a Dispatcher
Dispatchers implement the `ExecutionContext` interface and can thus be used to run `Future` invocations etc.
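A lookup sketch (the dispatcher id is an assumed configuration path):
```
import scala.concurrent.ExecutionContext;

// returns the dispatcher configured under "my-dispatcher"
ExecutionContext ex = system.dispatchers().lookup("my-dispatcher");
// the dispatcher can now run Futures, e.g. Futures.future(task, ex)
```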
@ -47,7 +47,7 @@ You can read more about it in the JDK's [ThreadPoolExecutor documentation](https
@@@
For more options, see the default-dispatcher section of the <!-- FIXME: More than one link target with name configuration in path Some(/java/dispatchers.rst) --> configuration.
For more options, see the default-dispatcher section of the @ref:[configuration](general/configuration.md).
Then you create the actor as usual and define the dispatcher in the deployment configuration.

View file

@ -29,12 +29,12 @@ with a specific role. It communicates with other `Replicator` instances with the
actor using the `Replicator.props`. If it is started as an ordinary actor it is important
that it is given the same name, started on the same path, on all nodes.
Cluster members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up-java),
Cluster members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up),
will participate in Distributed Data. This means that the data will be replicated to the
@ref:[WeaklyUp](cluster-usage.md#weakly-up-java) nodes with the background gossip protocol. Note that it
@ref:[WeaklyUp](cluster-usage.md#weakly-up) nodes with the background gossip protocol. Note that it
will not participate in any actions where the consistency mode is to read/write from all
nodes or the majority of nodes. The @ref:[WeaklyUp](cluster-usage.md#weakly-up-java) node is not counted
as part of the cluster. So 3 nodes + 5 @ref:[WeaklyUp](cluster-usage.md#weakly-up-java) is essentially a
nodes or the majority of nodes. The @ref:[WeaklyUp](cluster-usage.md#weakly-up) node is not counted
as part of the cluster. So 3 nodes + 5 @ref:[WeaklyUp](cluster-usage.md#weakly-up) is essentially a
3 node cluster as far as consistent actions are concerned.
Below is an example of an actor that schedules tick messages to itself and for each tick
@ -43,7 +43,7 @@ changes of this.
@@snip [DataBot.java]($code$/java/jdocs/ddata/DataBot.java) { #data-bot }
<a id="replicator-update-java"></a>
<a id="replicator-update"></a>
### Update
To modify and replicate a data value you send a `Replicator.Update` message to the local
@ -76,16 +76,10 @@ at least **N/2 + 1** replicas, where N is the number of nodes in the cluster
* `WriteAll` the value will immediately be written to all nodes in the cluster
(or all nodes in the cluster role group)
When you specify to write to
`n`
out of
`x`
nodes, the update will first replicate to
`n`
nodes. If there are not
: enough Acks after 1/5th of the timeout, the update will be replicated to `n` other nodes. If there are less than n nodes
left all of the remaining nodes are used. Reachable nodes are prefered over unreachable nodes.
When you specify to write to `n` out of `x` nodes, the update will first replicate to `n` nodes.
If there are not enough Acks after 1/5th of the timeout, the update will be replicated to `n` other
nodes. If there are fewer than `n` nodes left all of the remaining nodes are used. Reachable nodes
are preferred over unreachable nodes.
Note that `WriteMajority` has a `minCap` parameter that is useful to specify to achieve better safety for small clusters.
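A hedged sketch of such an update from inside an actor, assuming a replicated `PNCounter` stored under the key `"counter"` (the key name and timeout are assumptions):

```
import akka.actor.ActorRef;
import akka.cluster.Cluster;
import akka.cluster.ddata.DistributedData;
import akka.cluster.ddata.Key;
import akka.cluster.ddata.PNCounter;
import akka.cluster.ddata.PNCounterKey;
import akka.cluster.ddata.Replicator;
import scala.concurrent.duration.Duration;
import java.util.concurrent.TimeUnit;

// increment a replicated counter, waiting for acks from a majority of nodes
final Cluster node = Cluster.get(getContext().getSystem());
final ActorRef replicator = DistributedData.get(getContext().getSystem()).replicator();
final Key<PNCounter> counterKey = PNCounterKey.create("counter");
final Replicator.WriteConsistency writeMajority =
    new Replicator.WriteMajority(Duration.create(3, TimeUnit.SECONDS), 0); // minCap = 0
replicator.tell(
    new Replicator.Update<PNCounter>(counterKey, PNCounter.create(), writeMajority,
        curr -> curr.increment(node, 1)),
    getSelf());
```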
@ -113,7 +107,7 @@ or maintain local correlation data structures.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #update-request-context }
<a id="replicator-get-java"></a>
<a id="replicator-get"></a>
### Get
To retrieve the current value of a data you send `Replicator.Get` message to the
@ -155,7 +149,7 @@ to after receiving and transforming `GetSuccess`.
### Consistency
The consistency level that is supplied in the [replicator_update_java](#replicator-update-java) and [replicator_get_java](#replicator-get-java)
The consistency level that is supplied in the [Update](#replicator-update) and [Get](#replicator-get)
specifies per request how many replicas must respond successfully to a write and read request.
For low latency reads you use `ReadLocal` with the risk of retrieving stale data, i.e. updates
@ -186,6 +180,14 @@ The `Replicator` writes and reads to a majority of replicas, i.e. **N / 2 + 1**.
in a 5 node cluster it writes to 3 nodes and reads from 3 nodes. In a 6 node cluster it writes
to 4 nodes and reads from 4 nodes.
You can define a minimum number of nodes for `WriteMajority` and `ReadMajority`;
this will minimize the risk of reading stale data. The minimum cap is
provided by the `minCap` property of `WriteMajority` and `ReadMajority` and defines the required majority.
If the `minCap` is higher than **N / 2 + 1** the `minCap` will be used.
For example, if the `minCap` is 5, the `WriteMajority` and `ReadMajority` for a cluster of 3 nodes will be 3, for a
cluster of 6 nodes it will be 5 and for a cluster of 12 nodes it will be 7 ( **N / 2 + 1** ).
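Continuing the `Update` sketch above, a majority read capped at a minimum of 5 nodes might look like this (the key and timeout are the same assumptions):

```
// read from a majority of nodes, but never fewer than 5
final Replicator.ReadConsistency readMajority =
    new Replicator.ReadMajority(Duration.create(3, TimeUnit.SECONDS), 5);
replicator.tell(new Replicator.Get<PNCounter>(counterKey, readMajority), getSelf());
```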
For small clusters (<7) the risk of membership changes between a WriteMajority and ReadMajority
is rather high, and then the nice properties of combining majority writes and reads are not
guaranteed. Therefore the `ReadMajority` and `WriteMajority` have a `minCap` parameter that
@ -265,7 +267,7 @@ types that support both updates and removals, for example `ORMap` or `ORSet`.
@@@
<a id="delta-crdt-java"></a>
<a id="delta-crdt"></a>
### delta-CRDT
[Delta State Replicated Data Types](http://arxiv.org/abs/1603.01529)
@ -321,7 +323,7 @@ The value of the counter is the value of the P counter minus the value of the N
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #pncounter }
`GCounter` and `PNCounter` have support for [delta_crdt_java](#delta-crdt-java) and don't need causal
`GCounter` and `PNCounter` have support for [delta-CRDT](#delta-crdt) and don't need causal
delivery of deltas.
Several related counters can be managed in a map with the `PNCounterMap` data type.
@ -339,7 +341,7 @@ Merge is simply the union of the two sets.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #gset }
`GSet` has support for [delta_crdt_java](#delta-crdt-java) and it doesn't require causal delivery of deltas.
`GSet` has support for [delta-CRDT](#delta-crdt) and it doesn't require causal delivery of deltas.
If you need add and remove operations you should use the `ORSet` (observed-remove set).
Elements can be added and removed any number of times. If an element is concurrently added and
@ -352,7 +354,7 @@ track causality of the operations and resolve concurrent updates.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #orset }
`ORSet` has support for [delta_crdt_java](#delta-crdt-java) and it requires causal delivery of deltas.
`ORSet` has support for [delta-CRDT](#delta-crdt) and it requires causal delivery of deltas.
### Maps
@ -490,7 +492,7 @@ look like for the `TwoPhaseSet`:
@@snip [TwoPhaseSetSerializer2.java]($code$/java/jdocs/ddata/protobuf/TwoPhaseSetSerializer2.java) { #serializer }
<a id="ddata-durable-java"></a>
<a id="ddata-durable"></a>
### Durable Storage
By default the data is only kept in memory. It is redundant since it is replicated to other nodes
@ -553,7 +555,7 @@ Note that you should be prepared to receive `WriteFailure` as reply to an `Updat
durable entry if the data could not be stored for some reason. When enabling `write-behind-interval`
such errors will only be logged and `UpdateSuccess` will still be the reply to the `Update`.
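A sketch of such a configuration (the key names `counter` and `settings-*` are assumptions; a `*` suffix matches keys by prefix):

```
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

// sketch: make matching entries durable and batch writes with write-behind,
// in which case storage errors are only logged and UpdateSuccess is still replied
Config durable = ConfigFactory.parseString(
    "akka.cluster.distributed-data.durable.keys = [\"counter\", \"settings-*\"] \n"
    + "akka.cluster.distributed-data.durable.lmdb.write-behind-interval = 200 ms");
```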
There is one important caveat when it comes pruning of [crdt_garbage_java](#crdt-garbage-java) for durable data.
There is one important caveat when it comes pruning of [CRDT Garbage](#crdt-garbage) for durable data.
If an old data entry that was never pruned is injected and merged with existing data after
the pruning markers have been removed, the value will not be correct. The time-to-live
of the markers is defined by configuration
@ -563,7 +565,7 @@ This would be possible if a node with durable data didn't participate in the pru
be stopped for longer time than this duration and if it is joining again after this
duration its data should first be manually removed (from the lmdb directory).
<a id="crdt-garbage-java"></a>
<a id="crdt-garbage"></a>
### CRDT Garbage
One thing that can be problematic with CRDTs is that some data types accumulate history (garbage).
@ -602,7 +604,7 @@ be able to improve this if needed, but the design is still not intended for bill
All data is held in memory, which is another reason why it is not intended for *Big Data*.
When a data entry is changed the full state of that entry may be replicated to other nodes
if it doesn't support [delta_crdt_java](#delta-crdt-java). The full state is also replicated for delta-CRDTs,
if it doesn't support [delta-CRDT](#delta-crdt). The full state is also replicated for delta-CRDTs,
for example when new nodes are added to the cluster or when deltas could not be propagated because
of network partitions or similar problems. This means that you cannot have too large
data entries, because then the remote message size will be too large.

View file

@ -19,7 +19,7 @@ a few seconds. Changes are only performed in the own part of the registry and th
changes are versioned. Deltas are disseminated in a scalable way to other nodes with
a gossip protocol.
Cluster members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up-java),
Cluster members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up),
will participate in Distributed Publish Subscribe, i.e. subscribers on nodes with
`WeaklyUp` status will receive published messages if the publisher and subscriber are on
the same side of a network partition.
@ -28,9 +28,9 @@ You can send messages via the mediator on any node to registered actors on
any other node.
There are two different modes of message delivery, explained in the sections
[Publish](#distributed-pub-sub-publish-java) and [Send](#distributed-pub-sub-send-java) below.
[Publish](#distributed-pub-sub-publish) and [Send](#distributed-pub-sub-send) below.
<a id="distributed-pub-sub-publish-java"></a>
<a id="distributed-pub-sub-publish"></a>
## Publish
This is the true pub/sub mode. A typical usage of this mode is a chat room in an instant
@ -94,7 +94,7 @@ to subscribers that subscribed without a group id.
@@@
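A minimal sketch of the Publish mode from inside an actor (the topic name `content` and the payload are assumptions):

```
import akka.actor.ActorRef;
import akka.cluster.pubsub.DistributedPubSub;
import akka.cluster.pubsub.DistributedPubSubMediator;

// subscribe this actor to the "content" topic, typically from preStart
ActorRef mediator = DistributedPubSub.get(getContext().getSystem()).mediator();
mediator.tell(new DistributedPubSubMediator.Subscribe("content", getSelf()), getSelf());
// any actor on any node can then publish to the topic
mediator.tell(new DistributedPubSubMediator.Publish("content", "hello subscribers"), getSelf());
```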
<a id="distributed-pub-sub-send-java"></a>
<a id="distributed-pub-sub-send"></a>
## Send
This is a point-to-point mode where each message is delivered to one destination,
@ -174,10 +174,10 @@ akka.extensions = ["akka.cluster.pubsub.DistributedPubSub"]
## Delivery Guarantee
As in @ref:[Message Delivery Reliability](../scala/general/message-delivery-reliability.md) of Akka, message delivery guarantee in distributed pub sub modes is **at-most-once delivery**.
As in @ref:[Message Delivery Reliability](general/message-delivery-reliability.md) of Akka, message delivery guarantee in distributed pub sub modes is **at-most-once delivery**.
In other words, messages can be lost over the wire.
If you are looking for at-least-once delivery guarantee, we recommend [Kafka Akka Streams integration](https://github.com/akka/reactive-kafka).
If you are looking for an at-least-once delivery guarantee, we recommend [Kafka Akka Streams integration](http://doc.akka.io/docs/akka-stream-kafka/current/home.html).
## Dependencies

View file

@ -95,11 +95,11 @@ A test for this implementation may look like this:
This classifier always takes time proportional to the number of
subscriptions, independent of how many actually match.
<a id="actor-classification-java"></a>
<a id="actor-classification"></a>
### Actor Classification
This classification was originally developed specifically for implementing
@ref:[DeathWatch](actors.md#deathwatch-java): subscribers as well as classifiers are of
@ref:[DeathWatch](actors.md#deathwatch): subscribers as well as classifiers are of
type `ActorRef`.
This classification requires an `ActorSystem` in order to perform book-keeping
@ -118,7 +118,7 @@ A test for this implementation may look like this:
This classifier is still generic in the event type, and it is efficient for
all use cases.
<a id="event-stream-java"></a>
<a id="event-stream"></a>
## Event Stream
The event stream is the main event bus of each actor system: it is used for
@ -178,7 +178,7 @@ event class have been done)
### Dead Letters
As described at @ref:[Stopping actors](actors.md#stopping-actors-java), messages queued when an actor
As described at @ref:[Stopping actors](actors.md#stopping-actors), messages queued when an actor
terminates or sent after its death are re-routed to the dead letter mailbox,
which by default will publish the messages wrapped in `DeadLetter`. This
wrapper holds the original sender, receiver and message of the envelope which

View file

@ -56,10 +56,10 @@ akka {
The sky is the limit!
By the way, did you know that Akka's `Typed Actors`, `Serialization` and other features are implemented as Akka Extensions?
<a id="extending-akka-java-settings"></a>
<a id="extending-akka-settings"></a>
### Application specific settings
The <!-- FIXME: More than one link target with name configuration in path Some(/java/extending-akka.rst) --> configuration can be used for application specific settings. A good practice is to place those settings in an Extension.
The @ref:[configuration](general/configuration.md) can be used for application specific settings. A good practice is to place those settings in an Extension.
Sample configuration:

View file

@ -1,5 +1,5 @@
<a id="fault-tolerance-sample-java"></a>
<a id="fault-tolerance-sample"></a>
# Diagrams of the Fault Tolerance Sample
![faulttolerancesample-normal-flow.png](../images/faulttolerancesample-normal-flow.png)

View file

@ -1,6 +1,6 @@
# Fault Tolerance
As explained in @ref:[Actor Systems](../scala/general/actor-systems.md) each actor is the supervisor of its
As explained in @ref:[Actor Systems](general/actor-systems.md) each actor is the supervisor of its
children, and as such each actor defines fault handling supervisor strategy.
This strategy cannot be changed afterwards as it is an integral part of the
actor systems structure.
@ -34,7 +34,7 @@ For the sake of demonstration let us consider the following strategy:
@@snip [FaultHandlingTest.java]($code$/java/jdocs/actor/FaultHandlingTest.java) { #strategy }
I have chosen a few well-known exception types in order to demonstrate the
application of the fault handling directives described in <!-- FIXME: More than one link target with name supervision in path Some(/java/fault-tolerance.rst) --> supervision.
application of the fault handling directives described in @ref:[supervision](general/supervision.md).
First off, it is a one-for-one strategy, meaning that each child is treated
separately (an all-for-one strategy works very similarly, the only difference
is that any decision is applied to all children of the supervisor, not only the
@ -96,7 +96,7 @@ by overriding the `logFailure` method.
## Supervision of Top-Level Actors
Top-level actors are those which are created using `system.actorOf()`, and
they are children of the @ref:[User Guardian](../scala/general/supervision.md#user-guardian). There are no
they are children of the @ref:[User Guardian](general/supervision.md#user-guardian). There are no
special rules applied in this case, the guardian simply applies the configured
strategy.
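As a sketch, that configured strategy can be swapped for the stopping one via a real setting (the system name is an assumption):

```
import akka.actor.ActorSystem;
import com.typesafe.config.ConfigFactory;

// make the user guardian stop, rather than restart, failing top-level actors
ActorSystem system = ActorSystem.create("example", ConfigFactory.parseString(
    "akka.actor.guardian-supervisor-strategy = \"akka.actor.StoppingSupervisorStrategy\"")
    .withFallback(ConfigFactory.load()));
```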
@ -111,7 +111,7 @@ This supervisor will be used to create a child, with which we can experiment:
@@snip [FaultHandlingTest.java]($code$/java/jdocs/actor/FaultHandlingTest.java) { #child }
The test is easier by using the utilities described in <!-- FIXME: More than one link target with name akka-testkit in path Some(/java/fault-tolerance.rst) --> akka-testkit,
The test is easier by using the utilities described in @ref:[TestKit](testing.md),
where `TestProbe` provides an actor ref useful for receiving and inspecting replies.
@@snip [FaultHandlingTest.java]($code$/java/jdocs/actor/FaultHandlingTest.java) { #testkit }

View file

@ -96,7 +96,7 @@ data is available via `stateData` as shown, and the new state data would be
available as `nextStateData`.
To verify that this buncher actually works, it is quite easy to write a test
using the <!-- FIXME: More than one link target with name akka-testkit in path Some(/java/fsm.rst) --> akka-testkit, here using JUnit as an example:
using the @ref:[TestKit](testing.md), here using JUnit as an example:
@@snip [BuncherTest.java]($code$/java/jdocs/actor/fsm/BuncherTest.java) { #test-code }
@ -384,12 +384,12 @@ In case you override `postStop` and want to have your
## Testing and Debugging Finite State Machines
During development and for troubleshooting FSMs need care just as any other
actor. There are specialized tools available as described in @ref:[TestFSMRef](../scala/testing.md#testfsmref)
actor. There are specialized tools available as described in @ref:[TestFSMRef](testing.md#testfsmref)
and in the following.
### Event Tracing
The setting `akka.actor.debug.fsm` in <!-- FIXME: More than one link target with name configuration in path Some(/java/fsm.rst) --> configuration enables logging of an
The setting `akka.actor.debug.fsm` in @ref:[configuration](general/configuration.md) enables logging of an
event trace by `LoggingFSM` instances:
@@snip [FSMDocTest.java]($code$/java/jdocs/actor/fsm/FSMDocTest.java) { #logging-fsm }
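Note that the trace is only visible if debug level logging is enabled as well; a minimal sketch of the required settings:

```
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

// enable the FSM event trace together with debug level logging
Config config = ConfigFactory.parseString(
    "akka.loglevel = DEBUG \n"
    + "akka.actor.debug.fsm = on");
```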
@ -403,7 +403,7 @@ messages
* all state transitions
Life cycle changes and special messages can be logged as described for
@ref:[Actors](../scala/testing.md#actor-logging-scala).
@ref:[Actors](testing.md#actor-logging).
### Rolling Event Log

View file

@ -9,7 +9,7 @@ code could share it for the profit of all. Where applicable it might also make
sense to add to the `akka.pattern` package for creating an [OTP-like library](http://www.erlang.org/doc/man_index.html).
You might find some of the patterns described in the Scala chapter of
@ref:[HowTo: Common Patterns](../scala/howto.md) useful even though the example code is written in Scala.
@ref:[HowTo: Common Patterns](howto.md) useful even though the example code is written in Scala.
## Scheduling Periodic Messages

View file

@ -120,7 +120,7 @@ Once a connection has been established data can be sent to it from any actor in
Tcp.Write
: The simplest `WriteCommand` implementation which wraps a `ByteString` instance and an "ack" event.
A `ByteString` (as explained in @ref:[this section](io.md#bytestring-java)) models one or more chunks of immutable
A `ByteString` (as explained in @ref:[this section](io.md#bytestring)) models one or more chunks of immutable
in-memory data with a maximum (total) size of 2 GB (2^31 bytes).
Tcp.WriteFile

View file

@ -73,7 +73,7 @@ not error handling. In other words, data may still be lost, even if every write
@@@
<a id="bytestring-java"></a>
<a id="bytestring"></a>
### ByteString
To maintain isolation, actors should communicate with immutable objects only. `ByteString` is an

View file

@ -63,7 +63,7 @@ akka {
```
To customize the logging further or take other actions for dead letters you can subscribe
to the @ref:[Event Stream](event-bus.md#event-stream-java).
to the @ref:[Event Stream](event-bus.md#event-stream).
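A minimal sketch of such a subscription (`DeadLetterMonitor` is an assumed application-defined actor):

```
import akka.actor.ActorRef;
import akka.actor.DeadLetter;
import akka.actor.Props;

// route DeadLetter notifications to a dedicated listener actor
ActorRef monitor = system.actorOf(Props.create(DeadLetterMonitor.class), "deadLetterMonitor");
system.eventStream().subscribe(monitor, DeadLetter.class);
```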
### Auxiliary logging options
@ -153,7 +153,7 @@ akka {
}
```
<a id="logging-remote-java"></a>
<a id="logging-remote"></a>
### Auxiliary remote logging options
If you want to see all messages that are sent through remoting at DEBUG log level, use the following config option. Note that this logs the messages as they are sent by the transport layer, not by an actor.
@ -197,7 +197,7 @@ akka {
}
```
Also see the logging options for TestKit: @ref:[actor.logging-java](testing.md#actor-logging-java).
Also see the logging options for TestKit: @ref:[Actor Logging](testing.md#actor-logging).
### Turn Off Logging
@ -221,12 +221,12 @@ that receives the log events in the same order they were emitted.
@@@ note
The event handler actor does not have a bounded inbox and is run on the default dispatcher. This means
that logging extreme amounts of data may affect your application badly. This can be somewhat mitigated by using an async logging backend though. (See [Using the SLF4J API directly](#slf4j-directly-java))
that logging extreme amounts of data may affect your application badly. This can be somewhat mitigated by using an async logging backend though. (See [Using the SLF4J API directly](#slf4j-directly))
@@@
You can configure which event handlers are created at system start-up and listen to logging events. That is done using the
`loggers` element in the <!-- FIXME: More than one link target with name configuration in path Some(/java/logging.rst) --> configuration.
`loggers` element in the @ref:[configuration](general/configuration.md).
Here you can also define the log level. More fine grained filtering based on the log source
can be implemented in a custom `LoggingFilter`, which can be defined in the `logging-filter`
configuration property.
@ -241,7 +241,7 @@ akka {
}
```
The default one logs to STDOUT and is registered by default. It is not intended to be used for production. There is also an [SLF4J](#slf4j-java)
The default one logs to STDOUT and is registered by default. It is not intended to be used for production. There is also an [SLF4J](#slf4j)
logger available in the 'akka-slf4j' module.
Example of creating a listener:
@ -257,7 +257,7 @@ Instead log messages are printed to stdout (System.out). The default log level f
stdout logger is `WARNING` and it can be silenced completely by setting
`akka.stdout-loglevel=OFF`.
<a id="slf4j-java"></a>
<a id="slf4j"></a>
## SLF4J
Akka provides a logger for [SLF4J](http://www.slf4j.org/). This module is available in the 'akka-slf4j.jar'.
@ -273,7 +273,7 @@ It has a single dependency: the slf4j-api jar. In your runtime, you also need a
```
You need to enable the Slf4jLogger in the `loggers` element in
the <!-- FIXME: More than one link target with name configuration in path Some(/java/logging.rst) --> configuration. Here you can also define the log level of the event bus.
the @ref:[configuration](general/configuration.md). Here you can also define the log level of the event bus.
More fine grained log levels can be defined in the configuration of the SLF4J backend
(e.g. logback.xml). You should also define `akka.event.slf4j.Slf4jLoggingFilter` in
the `logging-filter` configuration property. It will filter the log events using the backend
@ -316,7 +316,7 @@ shown below:
final LoggingAdapter log = Logging.getLogger(system.eventStream(), "my.string");
```
<a id="slf4j-directly-java"></a>
<a id="slf4j-directly"></a>
### Using the SLF4J API directly
If you use the SLF4J API directly in your application, remember that the logging operations will block
@ -452,13 +452,13 @@ A more advanced (including most Akka added information) example pattern would be
<pattern>%date{ISO8601} level=[%level] marker=[%marker] logger=[%logger] akkaSource=[%X{akkaSource}] sourceActorSystem=[%X{sourceActorSystem}] sourceThread=[%X{sourceThread}] mdc=[ticket-#%X{ticketNumber}: %X{ticketDesc}] - msg=[%msg]%n----%n</pattern>
```
<a id="jul-java"></a>
<a id="jul"></a>
## java.util.logging
Akka includes a logger for [java.util.logging](https://docs.oracle.com/javase/8/docs/api/java/util/logging/package-summary.html#package.description).
You need to enable the `akka.event.jul.JavaLogger` in the `loggers` element in
the <!-- FIXME: More than one link target with name configuration in path Some(/java/logging.rst) --> configuration. Here you can also define the log level of the event bus.
the @ref:[configuration](general/configuration.md). Here you can also define the log level of the event bus.
More fine grained log levels can be defined in the configuration of the logging backend.
You should also define `akka.event.jul.JavaLoggingFilter` in
the `logging-filter` configuration property. It will filter the log events using the backend

View file

@ -81,7 +81,7 @@ all domain events of an Aggregate Root type.
@@snip [LeveldbPersistenceQueryDocTest.java]($code$/java/jdocs/persistence/query/LeveldbPersistenceQueryDocTest.java) { #EventsByTag }
To tag events you create an @ref:[Event Adapters](persistence.md#event-adapters-java) that wraps the events in a `akka.persistence.journal.Tagged`
To tag events you create an @ref:[Event Adapters](persistence.md#event-adapters) that wraps the events in a `akka.persistence.journal.Tagged`
with the given `tags`.
@@snip [LeveldbPersistenceQueryDocTest.java]($code$/java/jdocs/persistence/query/LeveldbPersistenceQueryDocTest.java) { #tagger }

View file

@ -79,7 +79,7 @@ If your usage does not require a live stream, you can use the `currentPersistenc
#### EventsByPersistenceIdQuery and CurrentEventsByPersistenceIdQuery
`eventsByPersistenceId` is a query equivalent to replaying a @ref:[PersistentActor](persistence.md#event-sourcing-java),
`eventsByPersistenceId` is a query equivalent to replaying a @ref:[PersistentActor](persistence.md#event-sourcing),
however, since it is a stream it is possible to keep it alive and watch for additional incoming events persisted by the
persistent actor identified by the given `persistenceId`.
@ -98,7 +98,7 @@ The goal of this query is to allow querying for all events which are "tagged" wi
That includes the use case to query all domain events of an Aggregate Root type.
Please refer to your read journal plugin's documentation to find out if and how it is supported.
Some journals may support tagging of events via an @ref:[Event Adapters](persistence.md#event-adapters-java) that wraps the events in a
Some journals may support tagging of events via an @ref:[Event Adapters](persistence.md#event-adapters) that wraps the events in a
`akka.persistence.journal.Tagged` with the given `tags`. The journal may support other ways of doing tagging - again,
how exactly this is implemented depends on the used journal. Here is an example of such a tagging event adapter:
@ -116,7 +116,7 @@ on relational databases, yet may be hard to implement efficiently on plain key-v
@@@
In the example below we query all events which have been tagged (we assume this was performed by the write-side using an
@ref:[EventAdapter](persistence.md#event-adapters-java), or that the journal is smart enough that it can figure out what we mean by this
@ref:[EventAdapter](persistence.md#event-adapters), or that the journal is smart enough that it can figure out what we mean by this
tag - for example if the journal stored the events as json it may try to find those with the field `tag` set to this value etc.).
@@snip [PersistenceQueryDocTest.java]($code$/java/jdocs/persistence/PersistenceQueryDocTest.java) { #events-by-tag }
@ -131,7 +131,7 @@ If your usage does not require a live stream, you can use the `currentEventsByTa
### Materialized values of queries
Journals are able to provide additional information related to a query by exposing @ref:[Materialized values](stream/stream-quickstart.md#materialized-values-quick-java),
Journals are able to provide additional information related to a query by exposing @ref:[Materialized values](stream/stream-quickstart.md#materialized-values-quick),
which are a feature of @ref:[Streams](stream/index.md) that allows exposing additional values at stream materialization time.
More advanced query journals may use this technique to expose information about the character of the materialized
@ -147,7 +147,7 @@ specialised query object, as demonstrated in the sample below:
## Performance and denormalization
When building systems using @ref:[Event sourcing](persistence.md#event-sourcing-java) and CQRS ([Command & Query Responsibility Segregation](https://msdn.microsoft.com/en-us/library/jj554200.aspx)) techniques
When building systems using @ref:[Event sourcing](persistence.md#event-sourcing) and CQRS ([Command & Query Responsibility Segregation](https://msdn.microsoft.com/en-us/library/jj554200.aspx)) techniques
it is tremendously important to realise that the write-side has completely different needs from the read-side,
and separating those concerns into datastores that are optimised for either side makes it possible to offer the best
experience for the write and read sides independently.
@ -202,7 +202,7 @@ into the other datastore:
@@snip [PersistenceQueryDocTest.java]($code$/java/jdocs/persistence/PersistenceQueryDocTest.java) { #projection-into-different-store-actor }
<a id="read-journal-plugin-api-java"></a>
<a id="read-journal-plugin-api"></a>
## Query plugins
Query plugins are various (mostly community driven) `ReadJournal` implementations for all kinds

View file

@ -56,10 +56,10 @@ definition - in a backwards compatible way - such that the new deserialization c
The most common schema changes you will likely encounter are:
* [adding a field to an event type](#add-field-java),
* [remove or rename field in event type](#rename-field-java),
* [remove event type](#remove-event-class-java),
* [split event into multiple smaller events](#split-large-event-into-smaller-java).
* [adding a field to an event type](#add-field),
* [removing or renaming a field in an event type](#rename-field),
* [removing an event type](#remove-event-class),
* [splitting an event into multiple smaller events](#split-large-event-into-smaller).
The following sections will explain some patterns which can be used to safely evolve your schema when facing those changes.
@ -121,7 +121,7 @@ serializers, and the yellow payload indicates the user provided event (by callin
As you can see, the `PersistentMessage` acts as an envelope around the payload, adding various fields related to the
origin of the event (`persistenceId`, `sequenceNr` and more).
More advanced techniques (e.g. [Remove event class and ignore events](#remove-event-class-java)) will dive into using the manifests for increasing the
More advanced techniques (e.g. [Remove event class and ignore events](#remove-event-class)) will dive into using the manifests for increasing the
flexibility of the persisted vs. exposed types even more. However, for now we will focus on the simpler evolution techniques,
which only concern configuring the payload serializers.
@ -169,7 +169,7 @@ Deserialization will be performed by the same serializer which serialized the me
because of the `identifier` being stored together with the message.
Please refer to the @ref:[Akka Serialization](serialization.md) documentation for more advanced use of serializers,
especially the @ref:[Serializer with String Manifest](serialization.md#string-manifest-serializer-java) section since it is very useful for Persistence based applications
especially the @ref:[Serializer with String Manifest](serialization.md#string-manifest-serializer) section since it is very useful for Persistence based applications
dealing with schema evolutions, as we will see in some of the examples below.
## Schema evolution in action
@ -179,7 +179,7 @@ some of the various options one might go about handling the described situation.
a complete guide, so feel free to adapt these techniques depending on your serializer's capabilities
and/or other domain specific limitations.
<a id="add-field-java"></a>
<a id="add-field"></a>
### Add fields
**Situation:**
@ -213,7 +213,7 @@ the field to this event type:
@@snip [PersistenceSchemaEvolutionDocTest.java]($code$/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #protobuf-read-optional }
<a id="rename-field-java"></a>
<a id="rename-field"></a>
### Rename fields
**Situation:**
@ -281,7 +281,7 @@ changes in the message format.
@@@
<a id="remove-event-class-java"></a>
<a id="remove-event-class"></a>
### Remove event class and ignore events
**Situation:**
@ -293,7 +293,7 @@ and should be deleted. You still have to be able to replay from a journal which
The problem of removing an event type from the domain model is not so much its removal as the implications
for the recovery mechanisms that this entails. For example, a naive way of filtering out certain kinds of events from
being delivered to a recovering `PersistentActor` is pretty simple, as one can simply filter them out in an @ref:[EventAdapter](persistence.md#event-adapters-java):
being delivered to a recovering `PersistentActor` is pretty simple, as one can filter them out in an @ref:[EventAdapter](persistence.md#event-adapters):
![persistence-drop-event.png](../images/persistence-drop-event.png)
@ -322,7 +322,7 @@ this before starting to deserialize the object.
This approach allows us to *remove the original class from our classpath*, which makes for fewer "old" classes lying around in the project.
This can for example be implemented by using an `SerializerWithStringManifest`
(documented in depth in @ref:[Serializer with String Manifest](serialization.md#string-manifest-serializer-java)). By looking at the string manifest, the serializer can notice
(documented in depth in @ref:[Serializer with String Manifest](serialization.md#string-manifest-serializer)). By looking at the string manifest, the serializer can notice
that the type is no longer needed, and skip the deserialization all-together:
![persistence-drop-event-serializer.png](../images/persistence-drop-event-serializer.png)
@ -340,7 +340,7 @@ and emits an empty `EventSeq` whenever such an object is encountered:
@@snip [PersistenceSchemaEvolutionDocTest.java]($code$/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #string-serializer-skip-deleved-event-by-manifest-adapter }
<a id="detach-domain-from-data-model-java"></a>
<a id="detach-domain-from-data-model"></a>
### Detach domain model from data model
**Situation:**
@ -378,14 +378,14 @@ as long as the mapping logic is able to convert between them:
The same technique could also be used directly in the Serializer if the end result of marshalling is bytes.
Then the serializer can simply convert the bytes to the domain object by using the generated protobuf builders.
<a id="store-human-readable-java"></a>
<a id="store-human-readable"></a>
### Store events as human-readable data model
**Situation:**
You want to keep your persisted events in a human-readable format, for example JSON.
**Solution:**
This is a special case of the [Detach domain model from data model](#detach-domain-from-data-model-java) pattern, and thus requires some co-operation
This is a special case of the [Detach domain model from data model](#detach-domain-from-data-model) pattern, and thus requires some co-operation
from the Journal implementation to achieve this.
An example of a Journal which may implement this pattern is MongoDB, however other databases such as PostgreSQL
@ -425,7 +425,7 @@ that provides that functionality, or implement one yourself.
@@@
<a id="split-large-event-into-smaller-java"></a>
<a id="split-large-event-into-smaller"></a>
### Split large event into fine-grained events
**Situation:**

View file

@ -56,9 +56,9 @@ used for optimizing recovery times. The storage backend of a snapshot store is p
The persistence extension comes with a "local" snapshot storage plugin which writes to the local filesystem.
Replicated snapshot stores are available as [Community plugins](http://akka.io/community/).
* *Event sourcing*. Based on the building blocks described above, Akka persistence provides abstractions for the
development of event sourced applications (see section [Event sourcing](#event-sourcing-java))
development of event sourced applications (see section [Event sourcing](#event-sourcing))
<a id="event-sourcing-java"></a>
<a id="event-sourcing"></a>
## Event sourcing
The basic idea behind [Event Sourcing](http://martinfowler.com/eaaDev/EventSourcing.html) is quite simple. A persistent actor receives a (non-persistent) command
@ -95,7 +95,7 @@ about successful state changes by publishing events.
When persisting events with `persist` it is guaranteed that the persistent actor will not receive further commands between
the `persist` call and the execution(s) of the associated event handler. This also holds for multiple `persist`
calls in context of a single command. Incoming messages are [stashed](#internal-stash-java) until the `persist`
calls in the context of a single command. Incoming messages are [stashed](#internal-stash) until the `persist`
is completed.
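A minimal sketch of this guarantee (the event type and `persistenceId` are assumptions): the handler passed to `persist` runs, and updates state, before the next command is processed.

```
import akka.persistence.AbstractPersistentActor;

public class CounterActor extends AbstractPersistentActor {
  private int state = 0;

  @Override
  public String persistenceId() {
    return "counter-1";
  }

  @Override
  public Receive createReceiveRecover() {
    // replayed events are applied the same way as live ones
    return receiveBuilder().match(Integer.class, evt -> state += evt).build();
  }

  @Override
  public Receive createReceive() {
    return receiveBuilder()
        .match(Integer.class, cmd ->
            // no further commands are received until this handler has run
            persist(cmd, evt -> state += evt))
        .build();
  }
}
```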
If persistence of an event fails, `onPersistFailure` will be invoked (logging the error by default),
@ -135,7 +135,7 @@ behavior is corrupted.
@@@
<a id="recovery-java"></a>
<a id="recovery"></a>
### Recovery
By default, a persistent actor is automatically recovered on start and on restart by replaying journaled messages.
@ -158,7 +158,7 @@ recovery in the future, store its `ActorPath` explicitly in your persisted event
@@@
<a id="recovery-custom-java"></a>
<a id="recovery-custom"></a>
#### Recovery customization
Applications may also customise how recovery is performed by returning a customised `Recovery` object
@ -198,11 +198,11 @@ and before any other received messages.
If there is a problem with recovering the state of the actor from the journal, `onRecoveryFailure`
is called (logging the error by default), and the actor will be stopped.
<a id="internal-stash-java"></a>
<a id="internal-stash"></a>
### Internal stash
The persistent actor has a private @ref:[stash](actors.md#stash-java) for internally caching incoming messages during
[recovery](#recovery-java) or the `persist\persistAll` method persisting events. You can still
The persistent actor has a private @ref:[stash](actors.md#stash) for internally caching incoming messages during
[recovery](#recovery) or the `persist\persistAll` method persisting events. You can still
use/inherit from the `Stash` interface. The internal stash cooperates with the normal stash by hooking into the
`unstashAll` method and making sure messages are unstashed properly to the internal stash to maintain ordering
guarantees.
@ -277,7 +277,7 @@ The callback will not be invoked if the actor is restarted (or stopped) in betwe
@@@
<a id="defer-java"></a>
<a id="defer"></a>
### Deferring actions until preceding persist handlers have executed
Sometimes when working with `persistAsync` or `persist` you may find that it would be nice to define some actions in terms of
@ -306,7 +306,7 @@ The callback will not be invoked if the actor is restarted (or stopped) in betwe
@@@
<a id="nested-persist-calls-java"></a>
<a id="nested-persist-calls"></a>
### Nested persist calls
It is possible to call `persist` and `persistAsync` inside their respective callback blocks and they will properly
@ -350,7 +350,7 @@ the Actor's receive block (or methods synchronously invoked from there).
@@@
<a id="failures-java"></a>
<a id="failures"></a>
### Failures
If persistence of an event fails, `onPersistFailure` will be invoked (logging the error by default),
@ -371,7 +371,7 @@ next message.
If there is a problem with recovering the state of the actor from the journal when the actor is
started, `onRecoveryFailure` is called (logging the error by default), and the actor will be stopped.
Note that failure to load a snapshot is also treated like this, but you can disable loading of snapshots
if you for example know that serialization format has changed in an incompatible way, see [Recovery customization](#recovery-custom-java).
if you, for example, know that the serialization format has changed in an incompatible way, see [Recovery customization](#recovery-custom).
### Atomic writes
@ -438,7 +438,7 @@ For critical failures, such as recovery or persisting events failing, the persis
handler is invoked. This is because if the underlying journal implementation is signalling persistence failures it is most
likely either failing completely or overloaded and restarting right-away and trying to persist the event again will most
likely not help the journal recover as it would likely cause a [Thundering herd problem](https://en.wikipedia.org/wiki/Thundering_herd_problem), as many persistent actors
would restart and try to persist their events again. Instead, using a `BackoffSupervisor` (as described in [Failures](#failures-java)) which
would restart and try to persist their events again. Instead, use a `BackoffSupervisor` (as described in [Failures](#failures)), which
implements an exponential-backoff strategy that allows for more breathing room for the journal to recover between
restarts of the persistent actor.
@ -452,11 +452,11 @@ Check the documentation of the journal implementation you are using for details
@@@
<a id="safe-shutdown-java"></a>
<a id="safe-shutdown"></a>
### Safely shutting down persistent actors
Special care should be given when shutting down persistent actors from the outside.
With normal Actors it is often acceptable to use the special @ref:[PoisonPill](actors.md#poison-pill-java) message
With normal Actors it is often acceptable to use the special @ref:[PoisonPill](actors.md#poison-pill) message
to signal to an Actor that it should stop itself once it receives this message. In fact, this message is handled
automatically by Akka, leaving the target actor no way to refuse stopping itself when given a poison pill.
@ -481,7 +481,7 @@ mechanism when `persist()` is used. Notice the early stop behaviour that occurs
@@snip [LambdaPersistenceDocTest.java]($code$/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #safe-shutdown-example-good }
<a id="replay-filter-java"></a>
<a id="replay-filter"></a>
### Replay Filter
There could be cases where event streams are corrupted and multiple writers (i.e. multiple persistent actor instances)
@ -549,7 +549,7 @@ Since it is acceptable for some applications to not use any snapshotting, it is
However Akka will log a warning message when this situation is detected and then continue to operate until
an actor tries to store a snapshot, at which point the operation will fail (by replying with an `SaveSnapshotFailure` for example).
Note that @ref:[cluster_sharding_java](cluster-sharding.md) is using snapshots, so if you use Cluster Sharding you need to define a snapshot store plugin.
Note that @ref:[Cluster Sharding](cluster-sharding.md) uses snapshots, so if you use Cluster Sharding you need to define a snapshot store plugin.
@@@
@ -572,7 +572,7 @@ status messages as illustrated in the following table.
|`deleteSnapshot(Long)` | `DeleteSnapshotSuccess` | `DeleteSnapshotFailure` |
|`deleteSnapshots(SnapshotSelectionCriteria)` | `DeleteSnapshotsSuccess` | `DeleteSnapshotsFailure`|
<a id="at-least-once-delivery-java"></a>
<a id="at-least-once-delivery"></a>
## At-Least-Once Delivery
To send messages with at-least-once delivery semantics to destinations you can extend the `AbstractPersistentActorWithAtLeastOnceDelivery`
@ -597,7 +597,7 @@ possible resends
delivered to the new actor incarnation
These semantics are similar to what an `ActorPath` represents (see
@ref:[Actor Lifecycle](../scala/actors.md#actor-lifecycle-scala)), therefore you need to supply a path and not a
@ref:[Actor Lifecycle](actors.md#actor-lifecycle)), therefore you need to supply a path and not a
reference when delivering messages. The messages are sent to the path with
an actor selection.
@ -676,7 +676,7 @@ not accept more messages and it will throw `AtLeastOnceDelivery.MaxUnconfirmedMe
The default value can be configured with the `akka.persistence.at-least-once-delivery.max-unconfirmed-messages`
configuration key. The method can be overridden by implementation classes to return non-default values.
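The core calls are `deliver` and `confirmDelivery`; a hedged fragment (the `Msg`/`Confirm` classes and the destination `ActorPath` are assumptions):

```
// typically called from the persist handler of the triggering event
deliver(destination, deliveryId -> new Msg(deliveryId, "payload"));
// and once the destination has acknowledged, from the handler of the confirmation event
confirmDelivery(confirm.deliveryId);
```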
<a id="event-adapters-java"></a>
<a id="event-adapters"></a>
## Event Adapters
In long running projects using event sourcing sometimes the need arises to detach the data model from the domain model
@ -713,11 +713,11 @@ adaptation simply return `EventSeq.empty`. The adapted events are then delivered
@@@ note
For more advanced schema evolution techniques refer to the @ref:[Persistence - Schema Evolution](../scala/persistence-schema-evolution.md) documentation.
For more advanced schema evolution techniques refer to the @ref:[Persistence - Schema Evolution](persistence-schema-evolution.md) documentation.
@@@
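A skeletal adapter might look like the following sketch (the class name and conversions are assumptions); returning `EventSeq.empty()` from `fromJournal` filters an event out of recovery:

```
import akka.persistence.journal.EventAdapter;
import akka.persistence.journal.EventSeq;

public class MyEventAdapter implements EventAdapter {
  @Override
  public String manifest(Object event) {
    return ""; // no manifest needed in this sketch
  }

  @Override
  public Object toJournal(Object event) {
    return event; // convert the domain event to the data model here
  }

  @Override
  public EventSeq fromJournal(Object event, String manifest) {
    return EventSeq.single(event); // or EventSeq.empty() to drop it
  }
}
```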
<a id="persistent-fsm-java"></a>
<a id="persistent-fsm"></a>
## Persistent FSM
`AbstractPersistentFSM` handles the incoming messages in an FSM-like fashion.
@ -805,8 +805,8 @@ akka.persistence.snapshot-store.plugin = ""
```
However, these entries are provided as empty "", and require explicit user configuration via override in the user `application.conf`.
For an example of journal plugin which writes messages to LevelDB see [Local LevelDB journal](#local-leveldb-journal-java).
For an example of snapshot store plugin which writes snapshots as individual files to the local filesystem see [Local snapshot store](#local-snapshot-store-java).
For an example of journal plugin which writes messages to LevelDB see [Local LevelDB journal](#local-leveldb-journal).
For an example of snapshot store plugin which writes snapshots as individual files to the local filesystem see [Local snapshot store](#local-snapshot-store).
Applications can provide their own plugins by implementing a plugin API and activate them by configuration.
Plugin development requires the following imports:
@ -820,7 +820,7 @@ to start a certain plugin eagerly. In order to do that, you should first add the
under the `akka.extensions` key. Then, specify the IDs of plugins you wish to start automatically under
`akka.persistence.journal.auto-start-journals` and `akka.persistence.snapshot-store.auto-start-snapshot-stores`.
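A sketch of what that might look like for the default LevelDB journal (the plugin id is the pre-packaged one described below):

```
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

// eagerly start the LevelDB journal at actor system start-up
Config config = ConfigFactory.parseString(
    "akka.extensions = [\"akka.persistence.Persistence\"] \n"
    + "akka.persistence.journal.auto-start-journals = [\"akka.persistence.journal.leveldb\"]");
```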
<a id="journal-plugin-api-java"></a>
<a id="journal-plugin-api"></a>
### Journal plugin API
A journal plugin extends `AsyncWriteJournal`.
@ -929,7 +929,7 @@ might have otherwise forgotten to test for when writing a plugin from scratch.
## Pre-packaged plugins
<a id="local-leveldb-journal-java"></a>
<a id="local-leveldb-journal"></a>
### Local LevelDB journal
The LevelDB journal plugin config entry is `akka.persistence.journal.leveldb`. It writes messages to a local LevelDB
@ -959,7 +959,7 @@ directory. This location can be changed by configuration where the specified pat
With this plugin, each actor system runs its own private LevelDB instance.
<a id="shared-leveldb-journal-java"></a>
<a id="shared-leveldb-journal"></a>
### Shared LevelDB journal
A LevelDB instance can also be shared by multiple actor systems (on the same or on different nodes). This, for
@ -975,7 +975,7 @@ purposes. Highly-available, replicated journals are available as [Community plug
@@@ note
This plugin has been supplanted by [Persistence Plugin Proxy](#persistence-plugin-proxy-java).
This plugin has been supplanted by [Persistence Plugin Proxy](#persistence-plugin-proxy).
@@@
@ -1001,7 +1001,7 @@ done by calling the `SharedLeveldbJournal.setStore` method with the actor refere
Internal journal commands (sent by persistent actors) are buffered until injection completes. Injection is idempotent
i.e. only the first injection is used.
<a id="local-snapshot-store-java"></a>
<a id="local-snapshot-store"></a>
### Local snapshot store
Local snapshot store plugin config entry is `akka.persistence.snapshot-store.local`. It writes snapshot files to
@ -1017,7 +1017,7 @@ directory. This can be changed by configuration where the specified path can be
Note that it is not mandatory to specify a snapshot store plugin. If you don't use snapshots
you don't have to configure it.
<a id="persistence-plugin-proxy-java"></a>
<a id="persistence-plugin-proxy"></a>
### Persistence Plugin Proxy
A persistence plugin proxy allows sharing of journals and snapshot stores across multiple actor systems (on the same or
@ -1055,7 +1055,7 @@ The proxied persistence plugin can (and should) be configured using its original
@@@
<a id="custom-serialization-java"></a>
<a id="custom-serialization"></a>
## Custom serialization
Serialization of snapshots and payloads of `Persistent` messages is configurable with Akka's
@ -1070,7 +1070,7 @@ it must add
to the application configuration. If not specified, a default serializer is used.
For more advanced schema evolution techniques refer to the @ref:[Persistence - Schema Evolution](../scala/persistence-schema-evolution.md) documentation.
For more advanced schema evolution techniques refer to the @ref:[Persistence - Schema Evolution](persistence-schema-evolution.md) documentation.
## Testing
@ -1086,19 +1086,19 @@ in your Akka configuration. The LevelDB Java port is for testing purposes only.
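A sketch of a test configuration selecting the Java port (so no native LevelDB library is needed):

```
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

// use the LevelDB journal with the Java port enabled for tests
Config testConfig = ConfigFactory.parseString(
    "akka.persistence.journal.plugin = \"akka.persistence.journal.leveldb\" \n"
    + "akka.persistence.journal.leveldb.native = off");
```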
@@@ warning
It is not possible to test persistence provided classes (i.e. [PersistentActor](#event-sourcing-java)
and [AtLeastOnceDelivery](#at-least-once-delivery-java)) using `TestActorRef` due to its *synchronous* nature.
It is not possible to test persistence provided classes (i.e. [PersistentActor](#event-sourcing)
and [AtLeastOnceDelivery](#at-least-once-delivery)) using `TestActorRef` due to its *synchronous* nature.
These traits need to be able to perform asynchronous tasks in the background in order to handle internal persistence
related events.
When testing Persistence based projects always rely on @ref:[asynchronous messaging using the TestKit](testing.md#async-integration-testing-java).
When testing Persistence based projects always rely on @ref:[asynchronous messaging using the TestKit](testing.md#async-integration-testing).
@@@
## Configuration
There are several configuration properties for the persistence module, please refer
to the @ref:[reference configuration](../scala/general/configuration.md#config-akka-persistence).
to the @ref:[reference configuration](general/configuration.md#config-akka-persistence).
## Multiple persistence plugin configurations

View file

@ -2,7 +2,7 @@
@@@ note
This page describes the @ref:[may change](../scala/common/may-change.md) remoting subsystem, codenamed *Artery* that will eventually
This page describes the @ref:[may change](common/may-change.md) remoting subsystem, codenamed *Artery* that will eventually
replace the old remoting implementation. For the current stable remoting system please refer to @ref:[Remoting](remoting.md).
@@@
@ -91,7 +91,7 @@ listening for connections and handling messages as not to interfere with other a
@@@
The example above only illustrates the bare minimum of properties you have to add to enable remoting.
All settings are described in [Remote Configuration](#remote-configuration-artery-java).
All settings are described in [Remote Configuration](#remote-configuration-artery).
@@@ note
@ -115,7 +115,7 @@ real network.
In cases, where Network Address Translation (NAT) is used or other network bridging is involved, it is important
to configure the system so that it understands that there is a difference between its externally visible, canonical
address and between the host-port pair that is used to listen for connections. See [Akka behind NAT or in a Docker container](#remote-configuration-nat-artery-java)
address and the host-port pair that is used to listen for connections. See [Akka behind NAT or in a Docker container](#remote-configuration-nat-artery)
for details.
## Acquiring references to remote actors
@ -170,7 +170,7 @@ and automatically reply to with a `ActorIdentity` message containing the
the `ActorSelection`, which returns a `Future` of the matching
`ActorRef`.
For more details on how actor addresses and paths are formed and used, please refer to @ref:[Actor References, Paths and Addresses](../scala/general/addressing.md).
For more details on how actor addresses and paths are formed and used, please refer to @ref:[Actor References, Paths and Addresses](general/addressing.md).
@@@ note
@ -278,7 +278,7 @@ Actor classes not included in the whitelist will not be allowed to be remote dep
An `ActorSystem` should not be exposed via Akka Remote (Artery) over plain Aeron/UDP to an untrusted network (e.g. internet).
It should be protected by network security, such as a firewall. There is currently no support for encryption with Artery
so if network security is not considered sufficient protection the classic remoting with
@ref:[TLS and mutual authentication](remoting.md#remote-tls-java) should be used.
@ref:[TLS and mutual authentication](remoting.md#remote-tls) should be used.
Best practice is that Akka remoting nodes should only be accessible from the adjacent network.
@ -354,7 +354,7 @@ marking them `PossiblyHarmful` so that a client cannot forge them.
Akka remoting uses Aeron as the underlying message transport. Aeron uses UDP and adds
among other things reliable delivery and session semantics, very similar to TCP. This means that
the order of the messages are preserved, which is needed for the @ref:[Actor message ordering guarantees](../scala/general/message-delivery-reliability.md#message-ordering).
the order of the messages is preserved, which is needed for the @ref:[Actor message ordering guarantees](general/message-delivery-reliability.md#message-ordering).
Under normal circumstances all messages will be delivered but there are cases when messages
may not be delivered to the destination:
@ -363,7 +363,7 @@ may not be delivered to the destination:
* if serialization or deserialization of a message fails (only that message will be dropped)
* if an unexpected exception occurs in the remoting infrastructure
In short, Actor message delivery is “at-most-once” as described in @ref:[Message Delivery Reliability](../scala/general/message-delivery-reliability.md)
In short, Actor message delivery is “at-most-once” as described in @ref:[Message Delivery Reliability](general/message-delivery-reliability.md).
Some messages in Akka are called system messages and those cannot be dropped because that would result
in an inconsistent state between the systems. Such messages are used for essentially two features; remote death
@ -405,7 +405,7 @@ when the destination system has been restarted.
### Watching Remote Actors
Watching a remote actor is API wise not different than watching a local actor, as described in
@ref:[Lifecycle Monitoring aka DeathWatch](actors.md#deathwatch-java). However, it is important to note, that unlike in the local case, remoting has to handle
@ref:[Lifecycle Monitoring aka DeathWatch](actors.md#deathwatch). However, it is important to note that, unlike in the local case, remoting has to handle
the case when a remote actor does not terminate gracefully by sending a system message to notify the watcher actor about
the event, but is instead hosted on a system which stopped abruptly (crashed). These situations are handled
by the built-in failure detector.
@ -432,7 +432,7 @@ phi = -log10(1 - F(timeSinceLastHeartbeat))
where F is the cumulative distribution function of a normal distribution with mean
and standard deviation estimated from historical heartbeat inter-arrival times.
In the [Remote Configuration](#remote-configuration-artery-java) you can adjust the `akka.remote.watch-failure-detector.threshold`
In the [Remote Configuration](#remote-configuration-artery) you can adjust the `akka.remote.watch-failure-detector.threshold`
to define when a *phi* value is considered to be a failure.
A low `threshold` is prone to generate many false positives but ensures
@ -458,7 +458,7 @@ a standard deviation of 100 ms.
To be able to survive sudden abnormalities, such as garbage collection pauses and
transient network failures, the failure detector is configured with a margin,
`akka.remote.watch-failure-detector.acceptable-heartbeat-pause`. You may want to
adjust the [Remote Configuration](#remote-configuration-artery-java) of this depending on you environment.
adjust the [Remote Configuration](#remote-configuration-artery) of this depending on your environment.
This is how the curve looks for `acceptable-heartbeat-pause` configured to
3 seconds.
@ -471,7 +471,7 @@ those actors are serializable. Failing to do so will cause the system to behave
For more information please see @ref:[Serialization](serialization.md).
<a id="remote-bytebuffer-serialization-java"></a>
<a id="remote-bytebuffer-serialization"></a>
### ByteBuffer based serialization
Artery introduces a new serialization mechanism which allows the `ByteBufferSerializer` to directly write into a
@ -545,7 +545,7 @@ The attempts are logged with the SECURITY marker.
Please note that this option does not stop you from manually invoking java serialization.
Please note that this means that you will have to configure different serializers which will be able to handle all of your
remote messages. Please refer to the @ref:[Serialization](serialization.md) documentation as well as [ByteBuffer based serialization](#remote-bytebuffer-serialization-java) to learn how to do this.
remote messages. Please refer to the @ref:[Serialization](serialization.md) documentation as well as [ByteBuffer based serialization](#remote-bytebuffer-serialization) to learn how to do this.
## Routers with Remote Destinations
@ -748,15 +748,15 @@ crashes unexpectedly.
for production systems.
The location of the file can be controlled via the *akka.remote.artery.advanced.flight-recorder.destination* setting (see
@ref:[akka-remote (artery)](../scala/general/configuration.md#config-akka-remote-artery) for details). By default, a file with the *.afr* extension is produced in the temporary
@ref:[akka-remote (artery)](general/configuration.md#config-akka-remote-artery) for details). By default, a file with the *.afr* extension is produced in the temporary
directory of the operating system. In cases where the flight recorder causes issues, it can be disabled by adding the
setting *akka.remote.artery.advanced.flight-recorder.enabled=off*, although this is not recommended.
<a id="remote-configuration-artery-java"></a>
<a id="remote-configuration-artery"></a>
## Remote Configuration
There are lots of configuration properties that are related to remoting in Akka. We refer to the
@ref:[reference configuration](../scala/general/configuration.md#config-akka-remote-artery) for more information.
@ref:[reference configuration](general/configuration.md#config-akka-remote-artery) for more information.
@@@ note
@ -767,7 +767,7 @@ best done by using something like the following:
@@@
<a id="remote-configuration-nat-artery-java"></a>
<a id="remote-configuration-nat-artery"></a>
### Akka behind NAT or in a Docker container
In setups involving Network Address Translation (NAT), Load Balancers or Docker

View file

@ -9,7 +9,7 @@ peer-to-peer fashion and it has limitations for client-server setups. In
particular Akka Remoting does not work transparently with Network Address Translation,
Load Balancers, or in Docker containers. For symmetric communication in these situations
network and/or Akka configuration will have to be changed as described in
@ref:[Peer-to-Peer vs. Client-Server](../scala/general/remoting.md#symmetric-communication).
@ref:[Peer-to-Peer vs. Client-Server](general/remoting.md#symmetric-communication).
@@@
@ -62,7 +62,7 @@ listening for connections and handling messages as not to interfere with other a
@@@
The example above only illustrates the bare minimum of properties you have to add to enable remoting.
All settings are described in [Remote Configuration](#remote-configuration-java).
All settings are described in [Remote Configuration](#remote-configuration).
## Looking up Remote Actors
@ -95,7 +95,7 @@ the `ActorSelection`, which returns a `CompletionStage` of the matching
@@@ note
For more details on how actor addresses and paths are formed and used, please refer to @ref:[Actor References, Paths and Addresses](../scala/general/addressing.md).
For more details on how actor addresses and paths are formed and used, please refer to @ref:[Actor References, Paths and Addresses](general/addressing.md).
@@@
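As a quick illustration, a lookup amounts to a sketch like this (an `ActorSystem` named `system` is assumed to be in scope; the path is a placeholder):

```java
import akka.actor.ActorRef;
import akka.actor.ActorSelection;

// Look up an actor on another node by its full path and fire a message at it.
ActorSelection selection =
    system.actorSelection("akka.tcp://RemoteSystem@127.0.0.1:2552/user/worker");
selection.tell("hello", ActorRef.noSender());
```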
@ -186,7 +186,7 @@ you can advise the system to create a child on that remote node like so:
@@snip [RemoteDeploymentDocTest.java]($code$/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #deploy }
<a id="remote-deployment-whitelist-java"></a>
<a id="remote-deployment-whitelist"></a>
### Remote deployment whitelist
As remote deployment can potentially be abused by both users and even attackers a whitelist feature
@ -227,7 +227,7 @@ is restarted. After a restart communication can be resumed again and the link ca
## Watching Remote Actors
Watching a remote actor is no different from watching a local actor, as described in
@ref:[Lifecycle Monitoring aka DeathWatch](actors.md#deathwatch-java).
@ref:[Lifecycle Monitoring aka DeathWatch](actors.md#deathwatch).
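A hedged sketch (the `worker` reference is assumed to point at an actor on another node):

```java
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Terminated;

// Watching a remote ActorRef looks exactly like watching a local one.
public class Watcher extends AbstractActor {
  public Watcher(ActorRef worker) {
    // Terminated is delivered when the worker stops, or when its node
    // is deemed unreachable by the failure detector.
    getContext().watch(worker);
  }

  @Override
  public Receive createReceive() {
    return receiveBuilder()
        .match(Terminated.class, t -> getContext().stop(getSelf()))
        .build();
  }
}
```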
### Failure Detector
@ -251,7 +251,7 @@ phi = -log10(1 - F(timeSinceLastHeartbeat))
where F is the cumulative distribution function of a normal distribution with mean
and standard deviation estimated from historical heartbeat inter-arrival times.
In the [Remote Configuration](#remote-configuration-java) you can adjust the `akka.remote.watch-failure-detector.threshold`
In the [Remote Configuration](#remote-configuration) you can adjust the `akka.remote.watch-failure-detector.threshold`
to define when a *phi* value is considered to be a failure.
A low `threshold` is prone to generate many false positives but ensures
@ -277,7 +277,7 @@ a standard deviation of 100 ms.
To be able to survive sudden abnormalities, such as garbage collection pauses and
transient network failures the failure detector is configured with a margin,
`akka.remote.watch-failure-detector.acceptable-heartbeat-pause`. You may want to
adjust the [Remote Configuration](#remote-configuration-java) of this depending on you environment.
adjust this setting in the [Remote Configuration](#remote-configuration) depending on your environment.
This is how the curve looks for `acceptable-heartbeat-pause` configured to
3 seconds.
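For example, a hedged tuning sketch (the values are illustrative, not recommendations):

```java
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

// Raise the phi threshold and tolerate longer heartbeat pauses,
// e.g. for environments with long GC pauses or jittery networks.
final Config failureDetectorTuning = ConfigFactory.parseString(
    "akka.remote.watch-failure-detector.threshold = 12 \n"
        + "akka.remote.watch-failure-detector.acceptable-heartbeat-pause = 10 s \n");
```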
@ -290,7 +290,7 @@ those actors are serializable. Failing to do so will cause the system to behave
For more information please see @ref:[Serialization](serialization.md).
<a id="disable-java-serializer-java"></a>
<a id="disable-java-serializer"></a>
### Disabling the Java Serializer
Java serialization is known to be slow and [prone to attacks](https://community.hpe.com/t5/Security-Research/The-perils-of-Java-deserialization/ba-p/6838995)
@ -344,7 +344,7 @@ The attempts are logged with the SECURITY marker.
Please note that this option does not stop you from manually invoking Java serialization.
Please note that this means that you will have to configure different serializers which will be able to handle all of your
remote messages. Please refer to the @ref:[Serialization](serialization.md) documentation as well as @ref:[ByteBuffer based serialization](remoting-artery.md#remote-bytebuffer-serialization-java) to learn how to do this.
remote messages. Please refer to the @ref:[Serialization](serialization.md) documentation as well as @ref:[ByteBuffer based serialization](remoting-artery.md#remote-bytebuffer-serialization) to learn how to do this.
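A sketch of the relevant settings (verify the exact keys against your Akka version):

```java
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

// Refuse Java serialization entirely; every remote message must then be
// covered by another serializer via akka.actor.serialization-bindings.
final Config disableJavaSerialization = ConfigFactory.parseString(
    "akka.actor.allow-java-serialization = off \n"
        + "akka.actor.enable-additional-serialization-bindings = on \n");
```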
## Routers with Remote Destinations
@ -365,7 +365,7 @@ This configuration setting will send messages to the defined remote actor paths.
It requires that you create the destination actors on the remote nodes with matching paths.
That is not done by the router.
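A hedged sketch of such a configuration (system name, hosts and paths are placeholders), paired with creating the router from it:

```java
import akka.actor.ActorRef;
import akka.routing.FromConfig;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

// Group router whose routees live on other nodes; the routee actors
// must already exist at these (placeholder) paths.
final Config config = ConfigFactory.parseString(
    "akka.actor.deployment { \n"
        + "  /remoteWorkers { \n"
        + "    router = round-robin-group \n"
        + "    routees.paths = [ \n"
        + "      \"akka.tcp://app@10.0.0.1:2552/user/worker\", \n"
        + "      \"akka.tcp://app@10.0.0.2:2552/user/worker\"] \n"
        + "  } \n"
        + "} \n").withFallback(ConfigFactory.load());

// After creating the system with this config:
ActorRef router = system.actorOf(FromConfig.getInstance().props(), "remoteWorkers");
```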
<a id="remote-sample-java"></a>
<a id="remote-sample"></a>
## Remoting Sample
You can download a ready to run [remoting sample](@exampleCodeService@/akka-samples-remote-java)
@ -428,21 +428,21 @@ which includes the addresses of local and remote ActorSystems.
To intercept generic remoting related errors, listen to `RemotingErrorEvent` which holds the `Throwable` cause.
<a id="remote-security-java"></a>
<a id="remote-security"></a>
## Remote Security
An `ActorSystem` should not be exposed via Akka Remote over plain TCP to an untrusted network (e.g. the internet).
It should be protected by network security, such as a firewall. If that is not considered enough protection,
[TLS with mutual authentication](#remote-tls-java) should be enabled.
[TLS with mutual authentication](#remote-tls) should be enabled.
Best practice is that Akka remoting nodes should only be accessible from the adjacent network. Note that if TLS is
enabled with mutual authentication there is still a risk that an attacker can gain access to a valid certificate by
compromising any node with certificates issued by the same internal PKI tree.
It is also security best-practice to [disable the Java serializer](#disable-java-serializer-java) because of
It is also security best-practice to [disable the Java serializer](#disable-java-serializer) because of
its multiple [known attack surfaces](https://community.hpe.com/t5/Security-Research/The-perils-of-Java-deserialization/ba-p/6838995).
<a id="remote-tls-java"></a>
<a id="remote-tls"></a>
### Configuring SSL/TLS for Akka Remoting
SSL can be used as the remote transport by adding `akka.remote.netty.ssl` to the `enabled-transports` configuration section.
@ -495,7 +495,7 @@ Creating and working with keystores and certificates is well documented in the
[Generating X.509 Certificates](http://typesafehub.github.io/ssl-config/CertificateGeneration.html#using-keytool)
section of Lightbend's SSL-Config library.
Since an Akka remoting is inherently @ref:[peer-to-peer](../scala/general/remoting.md#symmetric-communication) both the key-store as well as trust-store
Since Akka remoting is inherently @ref:[peer-to-peer](general/remoting.md#symmetric-communication), both the key-store and the trust-store
need to be configured on each remoting node participating in the cluster.
The official [Java Secure Socket Extension documentation](http://docs.oracle.com/javase/7/jdocs/technotes/guides/security/jsse/JSSERefGuide.html)
@ -513,7 +513,7 @@ the other (the "server").
Note that if TLS is enabled with mutual authentication there is still a risk that an attacker can gain access to a valid certificate
by compromising any node with certificates issued by the same internal PKI tree.
See also a description of the settings in the @ref:[Remote Configuration](../scala/remoting.md#remote-configuration-scala) section.
See also a description of the settings in the @ref:[Remote Configuration](remoting.md#remote-configuration) section.
@@@ note
@ -546,10 +546,10 @@ as a marker trait to user-defined messages.
Untrusted mode does not give full protection against attacks by itself.
It makes it slightly harder to perform malicious or unintended actions but
it should be complemented with [disabled Java serializer](#disable-java-serializer-java).
it should be complemented with [disabled Java serializer](#disable-java-serializer).
Additional protection can be achieved when running in an untrusted network by
network security (e.g. firewalls) and/or enabling
[TLS with mutual authentication](#remote-tls-java).
[TLS with mutual authentication](#remote-tls).
@@@
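A sketch of enabling untrusted mode together with a whitelisted selection path (the path is illustrative):

```java
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

// Inbound remote messages are restricted; actor selections are only
// permitted to the explicitly trusted paths listed below.
final Config untrusted = ConfigFactory.parseString(
    "akka.remote.untrusted-mode = on \n"
        + "akka.remote.trusted-selection-paths = [\"/user/receptionist\"] \n");
```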
@ -588,11 +588,11 @@ marking them `PossiblyHarmful` so that a client cannot forge them.
@@@
<a id="remote-configuration-java"></a>
<a id="remote-configuration"></a>
## Remote Configuration
There are lots of configuration properties that are related to remoting in Akka. We refer to the
@ref:[reference configuration](../scala/general/configuration.md#config-akka-remote) for more information.
@ref:[reference configuration](general/configuration.md#config-akka-remote) for more information.
@@@ note
@ -603,7 +603,7 @@ best done by using something like the following:
@@@
<a id="remote-configuration-nat-java"></a>
<a id="remote-configuration-nat"></a>
### Akka behind NAT or in a Docker container
In setups involving Network Address Translation (NAT), Load Balancers or Docker

View file

@ -6,9 +6,9 @@ routees yourselves or use a self contained router actor with configuration capab
Different routing strategies can be used, according to your application's needs. Akka comes with
several useful routing strategies right out of the box. But, as you will see in this chapter, it is
also possible to [create your own](#custom-router-java).
also possible to [create your own](#custom-router).
<a id="simple-router-java"></a>
<a id="simple-router"></a>
## A Simple Router
The following example illustrates how to use a `Router` and manage the routees from within an actor.
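In outline it amounts to the following sketch (`Worker` and `Work` are placeholder classes):

```java
import java.util.ArrayList;
import java.util.List;
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.actor.Terminated;
import akka.routing.ActorRefRoutee;
import akka.routing.RoundRobinRoutingLogic;
import akka.routing.Routee;
import akka.routing.Router;

public class Master extends AbstractActor {
  private Router router;

  {
    // Create five workers, watch them, and wrap them as routees.
    List<Routee> routees = new ArrayList<>();
    for (int i = 0; i < 5; i++) {
      ActorRef r = getContext().actorOf(Props.create(Worker.class));
      getContext().watch(r);
      routees.add(new ActorRefRoutee(r));
    }
    router = new Router(new RoundRobinRoutingLogic(), routees);
  }

  @Override
  public Receive createReceive() {
    return receiveBuilder()
        .match(Work.class, message -> router.route(message, getSender()))
        .match(Terminated.class, message -> {
          // Replace a terminated routee with a fresh worker; Router is
          // immutable, so each change produces a new Router value.
          router = router.removeRoutee(message.actor());
          ActorRef r = getContext().actorOf(Props.create(Worker.class));
          getContext().watch(r);
          router = router.addRoutee(new ActorRefRoutee(r));
        })
        .build();
  }
}
```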
@ -40,9 +40,9 @@ outside of actors.
@@@ note
In general, any message sent to a router will be sent onwards to its routees, but there is one exception.
The special [Broadcast Messages](#broadcast-messages-java) will send to *all* of a router's routees.
However, do not use [Broadcast Messages](#broadcast-messages-java) when you use [BalancingPool](#balancing-pool-java) for routees
as described in [Specially Handled Messages](#router-special-messages-java).
The special [Broadcast Messages](#broadcast-messages) will be sent to *all* of a router's routees.
However, do not use [Broadcast Messages](#broadcast-messages) when you use [BalancingPool](#balancing-pool) for routees
as described in [Specially Handled Messages](#router-special-messages).
@@@
@ -72,13 +72,13 @@ original sender, not to the router actor.
@@@ note
In general, any message sent to a router will be sent onwards to its routees, but there are a
few exceptions. These are documented in the [Specially Handled Messages](#router-special-messages-java) section below.
few exceptions. These are documented in the [Specially Handled Messages](#router-special-messages) section below.
@@@
### Pool
The following code and configuration snippets show how to create a [round-robin](#round-robin-router-java) router that forwards messages to five `Worker` routees. The
The following code and configuration snippets show how to create a [round-robin](#round-robin-router) router that forwards messages to five `Worker` routees. The
routees will be created as the router's children.
@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-round-robin-pool }
@ -103,7 +103,7 @@ deployment requires the `akka-remote` module to be included in the classpath.
#### Senders
When a routee sends a message, it can @ref:[set itself as the sender
](actors.md#actors-tell-sender-java).
](actors.md#actors-tell-sender).
@@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #reply-with-self }
@ -190,7 +190,7 @@ of the router actor.
@@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #create-parent }
<a id="round-robin-router-java"></a>
<a id="round-robin-router"></a>
### RoundRobinPool and RoundRobinGroup
Routes in a [round-robin](http://en.wikipedia.org/wiki/Round-robin) fashion to its routees.
@ -239,7 +239,7 @@ RandomGroup defined in code:
@@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #paths #random-group-2 }
<a id="balancing-pool-java"></a>
<a id="balancing-pool"></a>
### BalancingPool
A Router that will try to redistribute work from busy routees to idle routees.
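For instance (a sketch; `Worker` is a placeholder):

```java
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.routing.BalancingPool;

// Five workers pulling from one shared mailbox, so idle routees pick up work.
ActorRef balancer =
    system.actorOf(new BalancingPool(5).props(Props.create(Worker.class)), "balancer");
```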
@ -262,8 +262,8 @@ a restriction on the message queue implementation as BalancingPool does.
@@@ note
Do not use [Broadcast Messages](#broadcast-messages-java) when you use [BalancingPool](#balancing-pool-java) for routers,
as described in [Specially Handled Messages](#router-special-messages-java).
Do not use [Broadcast Messages](#broadcast-messages) when you use [BalancingPool](#balancing-pool) for routers,
as described in [Specially Handled Messages](#router-special-messages).
@@@
@ -368,7 +368,7 @@ BroadcastGroup defined in code:
Broadcast routers always broadcast *every* message to their routees. If you do not want to
broadcast every message, then you can use a non-broadcasting router and use
[Broadcast Messages](#broadcast-messages-java) as needed.
[Broadcast Messages](#broadcast-messages) as needed.
@@@
@ -488,7 +488,7 @@ ConsistentHashingGroup defined in code:
`virtual-nodes-factor` is the number of virtual nodes per routee that is used in the
consistent hash node ring to make the distribution more uniform.
<a id="router-special-messages-java"></a>
<a id="router-special-messages"></a>
## Specially Handled Messages
Most messages sent to router actors will be forwarded according to the routers' routing logic.
@ -496,9 +496,9 @@ However there are a few types of messages that have special behavior.
Note that these special messages, except for the `Broadcast` message, are only handled by
self contained router actors and not by the `akka.routing.Router` component described
in [A Simple Router](#simple-router-java).
in [A Simple Router](#simple-router).
<a id="broadcast-messages-java"></a>
<a id="broadcast-messages"></a>
### Broadcast Messages
A `Broadcast` message can be used to send a message to *all* of a router's routees. When a router
@ -516,8 +516,8 @@ routees. It is up to each routee actor to handle the received payload message.
@@@ note
Do not use [Broadcast Messages](#broadcast-messages-java) when you use [BalancingPool](#balancing-pool-java) for routers.
Routees on [BalancingPool](#balancing-pool-java) shares the same mailbox instance, thus some routees can
Do not use [Broadcast Messages](#broadcast-messages) when you use [BalancingPool](#balancing-pool) for routers.
Routees on [BalancingPool](#balancing-pool) share the same mailbox instance, so some routees can
possibly get the broadcast message multiple times, while other routees get no broadcast message.
@@@
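Usage is a one-liner sketch (the message is illustrative):

```java
import akka.actor.ActorRef;
import akka.routing.Broadcast;

// Wrapping a message in Broadcast delivers the payload to every routee.
router.tell(new Broadcast("flush-caches"), ActorRef.noSender());
```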
@ -525,7 +525,7 @@ possibly get the broadcast message multiple times, while other routees get no br
### PoisonPill Messages
A `PoisonPill` message has special handling for all actors, including for routers. When any actor
receives a `PoisonPill` message, that actor will be stopped. See the @ref:[PoisonPill](actors.md#poison-pill-java)
receives a `PoisonPill` message, that actor will be stopped. See the @ref:[PoisonPill](actors.md#poison-pill)
documentation for details.
@@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #poisonPill }
@ -538,7 +538,7 @@ However, a `PoisonPill` message sent to a router may still affect its routees, b
stop the router and when the router stops it also stops its children. Stopping children is normal
actor behavior. The router will stop routees that it has created as children. Each child will
process its current message and then stop. This may lead to some messages being unprocessed.
See the documentation on @ref:[Stopping actors](actors.md#stopping-actors-java) for more information.
See the documentation on @ref:[Stopping actors](actors.md#stopping-actors) for more information.
If you wish to stop a router and its routees, but you would like the routees to first process all
the messages currently in their mailboxes, then you should not send a `PoisonPill` message to the
@ -550,10 +550,8 @@ routees aren't children of the router, i.e. even routees programmatically provid
With the code shown above, each routee will receive a `PoisonPill` message. Each routee will
continue to process its messages as normal, eventually processing the `PoisonPill`. This will
cause the routee to stop. After all routees have stopped the router will itself be <!-- FIXME: unresolved link reference: stopped
automatically <note-router-terminated-children-java> --> stopped
automatically <note-router-terminated-children-java> unless it is a dynamic router, e.g. using
a resizer.
cause the routee to stop. After all routees have stopped the router will itself be stopped
automatically unless it is a dynamic router, e.g. using a resizer.
@@@ note
@ -565,7 +563,7 @@ discusses in more detail how `PoisonPill` messages can be used to shut down rout
### Kill Messages
`Kill` messages are another type of message that has special handling. See
@ref:[Killing an Actor](actors.md#killing-actors-java) for general information about how actors handle `Kill` messages.
@ref:[Killing an Actor](actors.md#killing-actors) for general information about how actors handle `Kill` messages.
When a `Kill` message is sent to a router the router processes the message internally, and does
*not* send it on to its routees. The router will throw an `ActorKilledException` and fail. It
@ -598,7 +596,7 @@ an ordinary message you are not guaranteed that the routees have been changed wh
is routed. If you need to know when the change has been applied you can send `AddRoutee` followed by `GetRoutees`
and when you receive the `Routees` reply you know that the preceding change has been applied.
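A sketch of that handshake (the `worker` and `probe` references are assumed to exist):

```java
import akka.actor.ActorRef;
import akka.routing.ActorRefRoutee;
import akka.routing.AddRoutee;
import akka.routing.GetRoutees;

// Because GetRoutees is sent after AddRoutee, the Routees reply confirms
// that the preceding AddRoutee has been applied.
router.tell(new AddRoutee(new ActorRefRoutee(worker)), ActorRef.noSender());
router.tell(GetRoutees.getInstance(), probe); // probe receives akka.routing.Routees
```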
<a id="resizable-routers-java"></a>
<a id="resizable-routers"></a>
## Dynamically Resizable Pool
All pools can be used with a fixed number of routees or with a resize strategy to adjust the number
@ -619,7 +617,7 @@ Pool with default resizer defined in configuration:
@@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #resize-pool-1 }
Several more configuration options are available and described in the `akka.actor.deployment.default.resizer`
section of the reference <!-- FIXME: More than one link target with name configuration in path Some(/java/routing.rst) --> configuration.
section of the reference @ref:[configuration](general/configuration.md).
Pool with resizer defined in code:
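In code this typically looks like the following sketch (`Worker` is a placeholder):

```java
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.routing.DefaultResizer;
import akka.routing.RoundRobinPool;

// Let the pool grow and shrink between 2 and 15 routees based on pressure.
DefaultResizer resizer = new DefaultResizer(2, 15);
ActorRef router = system.actorOf(
    new RoundRobinPool(5).withResizer(resizer).props(Props.create(Worker.class)),
    "resizablePool");
```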
@ -660,7 +658,7 @@ Pool with `OptimalSizeExploringResizer` defined in configuration:
@@snip [RouterDocTest.java]($code$/java/jdocs/routing/RouterDocTest.java) { #optimal-size-exploring-resize-pool }
Several more configuration options are available and described in the `akka.actor.deployment.default.optimal-size-exploring-resizer`
section of the reference <!-- FIXME: More than one link target with name configuration in path Some(/java/routing.rst) --> configuration.
section of the reference @ref:[configuration](general/configuration.md).
@@@ note
@ -674,7 +672,7 @@ Dispatchers](#configuring-dispatchers) for more information.
@@@
<a id="router-design-java"></a>
<a id="router-design"></a>
## How Routing is Designed within Akka
On the surface routers look like normal actors, but they are actually implemented differently.
@ -693,7 +691,7 @@ routers were implemented with normal actors. Fortunately all of this complexity
consumers of the routing API. However, it is something to be aware of when implementing your own
routers.
<a id="custom-router-java"></a>
<a id="custom-router"></a>
## Custom Router
You can create your own router should you not find any of the ones provided by Akka sufficient for your needs.
@ -701,7 +699,7 @@ In order to roll your own router you have to fulfill certain criteria which are
Before creating your own router you should consider whether a normal actor with router-like
behavior might do the job just as well as a full-blown router. As explained
[above](#router-design-java), the primary benefit of routers over normal actors is their
[above](#router-design), the primary benefit of routers over normal actors is their
higher performance. But they are somewhat more complicated to write than normal actors. Therefore if
lower maximum throughput is acceptable in your application you may wish to stick with traditional
actors. This section, however, assumes that you wish to get maximum performance and so demonstrates
@ -724,7 +722,7 @@ A unit test of the routing logic:
@@snip [CustomRouterDocTest.java]($code$/java/jdocs/routing/CustomRouterDocTest.java) { #unit-test-logic }
You could stop here and use the `RedundancyRoutingLogic` with a `akka.routing.Router`
as described in [A Simple Router](#simple-router-java).
as described in [A Simple Router](#simple-router).
Let us continue and make this into a self contained, configurable, router actor.
@ -752,7 +750,7 @@ The deployment section of the configuration is passed to the constructor.
## Configuring Dispatchers
The dispatcher for created children of the pool will be taken from
`Props` as described in @ref:[Dispatchers](../scala/dispatchers.md).
`Props` as described in @ref:[Dispatchers](dispatchers.md).
To make it easy to define the dispatcher of the routees of the pool you can
define the dispatcher inline in the deployment section of the config.
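A hedged sketch of such an inline definition (pool sizes are illustrative):

```java
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

// The pool-dispatcher block configures the dispatcher used by the routees only.
final Config poolDispatcher = ConfigFactory.parseString(
    "akka.actor.deployment { \n"
        + "  /poolWithDispatcher { \n"
        + "    router = random-pool \n"
        + "    nr-of-instances = 5 \n"
        + "    pool-dispatcher { \n"
        + "      fork-join-executor.parallelism-min = 5 \n"
        + "      fork-join-executor.parallelism-max = 5 \n"
        + "    } \n"
        + "  } \n"
        + "} \n");
```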

View file

@ -97,7 +97,7 @@ bytes to different objects.
Then you only need to fill in the blanks, bind it to a name in your @ref:[configuration](general/configuration.md) and then
list which classes should be serialized using it.
<a id="string-manifest-serializer-java"></a>
<a id="string-manifest-serializer"></a>
### Serializer with String Manifest
The `Serializer` illustrated above supports a class based manifest (type hint).

View file

@ -538,7 +538,7 @@ states.
These stages can transform the rate of incoming elements since there are stages that emit multiple elements for a
single input (e.g. `mapConcat`) or consume multiple elements before emitting one output (e.g. `filter`).
However, these rate transformations are data-driven, i.e. it is the incoming elements that define how the
rate is affected. This is in contrast with [detached-stages-overview_java](#detached-stages-overview-java) which can change their processing behavior
rate is affected. This is in contrast with [detached stages](#detached-stages-overview) which can change their processing behavior
depending on being backpressured by downstream or not.
### map
@ -996,7 +996,7 @@ Delay every element passed through with a specific duration.
**completes** when upstream completes and buffered elements have been drained
<a id="detached-stages-overview-java"></a>
<a id="detached-stages-overview"></a>
## Backpressure aware stages
These stages are aware of the backpressure provided by their downstreams and able to adapt their behavior to that signal.

View file

@ -231,7 +231,7 @@ needs to return a different object that provides the necessary interaction capab
Unlike actors though, each of the processing stages might provide a materialized value, so when we compose multiple
stages or modules, we need to combine the materialized value as well (there are default rules which make this easier,
for example *to()* and *via()* take care of the most common case of taking the materialized value to the left.
See @ref:[Combining materialized values](../../scala/stream/stream-flows-and-basics.md#flow-combine-mat-scala) for details). We demonstrate how this works by a code example and a diagram which
See @ref:[Combining materialized values](../stream/stream-flows-and-basics.md#flow-combine-mat) for details). We demonstrate how this works by a code example and a diagram which
graphically demonstrates what is happening.
The propagation of the individual materialized values from the enclosed modules towards the top will look like this:
@ -273,7 +273,7 @@ the `CompletionStage<Sink>` part, and wraps the other two values in a custom cas
@@@ note
The nested structure in the above example is not necessary for combining the materialized values; it just
demonstrates how the two features work together. See @ref:[Operator Fusion](stream-flows-and-basics.md#operator-fusion-java) for further examples
demonstrates how the two features work together. See @ref:[Operator Fusion](stream-flows-and-basics.md#operator-fusion) for further examples
of combining materialized values without nesting and hierarchy involved.
@@@
@ -284,7 +284,7 @@ We have seen that we can use `named()` to introduce a nesting level in the fluid
`create()` from `GraphDSL`). Apart from having the effect of adding a nesting level, `named()` is actually
a shorthand for calling `withAttributes(Attributes.name("someName"))`. Attributes provide a way to fine-tune certain
aspects of the materialized running entity. For example, buffer sizes for asynchronous stages can be controlled via
attributes (see @ref:[Buffers for asynchronous stages](stream-rate.md#async-stream-buffers-java)). When it comes to hierarchic composition, attributes are inherited
attributes (see @ref:[Buffers for asynchronous stages](stream-rate.md#async-stream-buffers)). When it comes to hierarchic composition, attributes are inherited
by nested modules, unless they override them with a custom value.
The code below, a modification of an earlier example sets the `inputBuffer` attribute on certain modules, but not

View file

@ -11,7 +11,7 @@ This part also serves as supplementary material for the main body of documentati
open while reading the manual and look for examples demonstrating various streaming concepts
as they appear in the main body of documentation.
If you need a quick reference of the available processing stages used in the recipes see @ref:[stages-overview_java](stages-overview.md).
If you need a quick reference of the available processing stages used in the recipes see @ref:[stages overview](stages-overview.md).
## Working with Flows
@ -79,7 +79,7 @@ demand comes in and then reset the stage state. It will then complete the stage.
@@snip [RecipeDigest.java]($code$/java/jdocs/stream/javadsl/cookbook/RecipeDigest.java) { #calculating-digest2 }
<a id="cookbook-parse-lines-java"></a>
<a id="cookbook-parse-lines"></a>
### Parsing lines from a stream of ByteStrings
**Situation:** A stream of bytes is given as a stream of `ByteString` s containing lines terminated by line ending
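The usual building block for this recipe is `Framing.delimiter`; a sketch (the frame length is illustrative):

```java
import akka.NotUsed;
import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Framing;
import akka.stream.javadsl.FramingTruncation;
import akka.util.ByteString;

// Chop the incoming ByteString stream into lines of at most 256 bytes,
// tolerating a truncated final frame.
final Flow<ByteString, String, NotUsed> lines =
    Framing.delimiter(ByteString.fromString("\r\n"), 256, FramingTruncation.ALLOW)
        .map(ByteString::utf8String);
```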

View file

@ -13,7 +13,7 @@ might be easy to make with a custom `GraphStage`
@@@
<a id="graphstage-java"></a>
<a id="graphstage"></a>
## Custom processing with GraphStage
The `GraphStage` abstraction can be used to create arbitrary graph processing stages with any number of input
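In outline, a minimal source-shaped stage looks like this sketch (it emits increasing integers on demand):

```java
import akka.stream.Attributes;
import akka.stream.Outlet;
import akka.stream.SourceShape;
import akka.stream.stage.AbstractOutHandler;
import akka.stream.stage.GraphStage;
import akka.stream.stage.GraphStageLogic;

// A Source-shaped stage that pushes 1, 2, 3, ... whenever downstream pulls.
public class NumbersSource extends GraphStage<SourceShape<Integer>> {
  public final Outlet<Integer> out = Outlet.create("NumbersSource.out");
  private final SourceShape<Integer> shape = SourceShape.of(out);

  @Override
  public SourceShape<Integer> shape() {
    return shape;
  }

  @Override
  public GraphStageLogic createLogic(Attributes inheritedAttributes) {
    return new GraphStageLogic(shape()) {
      private int counter = 1; // all mutable state lives inside the logic

      {
        setHandler(out, new AbstractOutHandler() {
          @Override
          public void onPull() throws Exception {
            push(out, counter);
            counter += 1;
          }
        });
      }
    };
  }
}
```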
@ -283,7 +283,7 @@ the `Materializer` you're using is able to provide you with a logger.
Please note that you can always simply use a logging library directly inside a Stage.
Make sure to use an asynchronous appender however, to not accidentally block the stage when writing to files etc.
See @ref:[Using the SLF4J API directly](../logging.md#slf4j-directly-java) for more details on setting up async appenders in SLF4J.
See @ref:[Using the SLF4J API directly](../logging.md#slf4j-directly) for more details on setting up async appenders in SLF4J.
@@@
@ -337,7 +337,7 @@ when a future completes:
### Integration with actors
**This section is a stub and will be extended in the next release**
**This is a :ref:`may change <may-change>` feature***
**This is a @ref:[may change](../common/may-change.md) feature**
It is possible to acquire an ActorRef that can be addressed from the outside of the stage, similarly to how
`AsyncCallback` allows injecting asynchronous events into a stage logic. This reference can be obtained

View file

@ -1,6 +1,6 @@
# Dynamic stream handling
<a id="kill-switch-java"></a>
<a id="kill-switch"></a>
## Controlling graph completion with KillSwitch
A `KillSwitch` allows the completion of graphs of `FlowShape` from the outside. It consists of a flow element that
@ -18,7 +18,7 @@ Graph completion is performed by both
A `KillSwitch` can control the completion of one or multiple streams, and therefore comes in two different flavours.
<a id="unique-kill-switch-java"></a>
<a id="unique-kill-switch"></a>
### UniqueKillSwitch
`UniqueKillSwitch` allows controlling the completion of **one** materialized `Graph` of `FlowShape`. Refer to the
@ -32,7 +32,7 @@ below for usage examples.
@@snip [KillSwitchDocTest.java]($code$/java/jdocs/stream/KillSwitchDocTest.java) { #unique-abort }
<a id="shared-kill-switch-java"></a>
<a id="shared-kill-switch"></a>
### SharedKillSwitch
A `SharedKillSwitch` allows controlling the completion of an arbitrary number of graphs of `FlowShape`. It can be
@ -123,6 +123,6 @@ than 3 seconds are forcefully removed (and their stream failed).
The resulting Flow now has a type of `Flow[String, String, UniqueKillSwitch]` representing a publish-subscribe
channel which can be used any number of times to attach new producers or consumers. In addition, it materializes
to a `UniqueKillSwitch` (see [UniqueKillSwitch](#unique-kill-switch-java)) that can be used to deregister a single user externally:
to a `UniqueKillSwitch` (see [UniqueKillSwitch](#unique-kill-switch)) that can be used to deregister a single user externally:
@@snip [HubDocTest.java]($code$/java/jdocs/stream/HubDocTest.java) { #pub-sub-4 }

View file

@ -36,7 +36,7 @@ elements that cause the division by zero are effectively dropped.
@@@ note
Be aware that dropping elements may result in deadlocks in graphs with
cycles, as explained in @ref:[Graph cycles, liveness and deadlocks](stream-graphs.md#graph-cycles-java).
cycles, as explained in @ref:[Graph cycles, liveness and deadlocks](stream-graphs.md#graph-cycles).
@@@
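The dropping behaviour comes from installing a resuming supervision strategy; a sketch at the materializer level:

```java
import akka.actor.ActorSystem;
import akka.stream.ActorMaterializer;
import akka.stream.ActorMaterializerSettings;
import akka.stream.Materializer;
import akka.stream.Supervision;

final ActorSystem system = ActorSystem.create("streams");
// Resume = drop the failing element and keep the stream alive.
final Materializer mat = ActorMaterializer.create(
    ActorMaterializerSettings.create(system)
        .withSupervisionStrategy(Supervision.getResumingDecider()),
    system);
```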

View file

@ -1,6 +1,6 @@
# Basics and working with Flows
<a id="core-concepts-java"></a>
<a id="core-concepts"></a>
## Core concepts
Akka Streams is a library to process and transfer a sequence of elements using bounded buffer space. This
@ -36,7 +36,7 @@ is running.
Processing Stage
: The common name for all building blocks that build up a Graph.
Examples of a processing stage would be operations like `map()`, `filter()`, custom `GraphStage` s and graph
junctions like `Merge` or `Broadcast`. For the full list of built-in processing stages see @ref:[stages-overview_java](stages-overview.md)
junctions like `Merge` or `Broadcast`. For the full list of built-in processing stages see @ref:[stages overview](stages-overview.md)
When we talk about *asynchronous, non-blocking backpressure* we mean that the processing stages available in Akka
@ -45,7 +45,7 @@ will use asynchronous means to slow down a fast producer, without blocking its t
design, since entities that need to wait (a fast producer waiting on a slow consumer) will not block the thread but
can hand it back for further use to an underlying thread-pool.
<a id="defining-and-running-streams-java"></a>
<a id="defining-and-running-streams"></a>
## Defining and running streams
Linear processing pipelines can be expressed in Akka Streams using the following core abstractions:
@ -131,7 +131,7 @@ In accordance to the Reactive Streams specification ([Rule 2.13](https://github.
Akka Streams do not allow `null` to be passed through the stream as an element. In case you want to model the concept
of absence of a value we recommend using `java.util.Optional` which is available since Java 8.
<a id="back-pressure-explained-java"></a>
<a id="back-pressure-explained"></a>
## Back-pressure explained
Akka Streams implement an asynchronous non-blocking back-pressure protocol standardised by the [Reactive Streams](http://reactive-streams.org/)
@ -141,7 +141,7 @@ The user of the library does not have to write any explicit back-pressure handli
and dealt with automatically by all of the provided Akka Streams processing stages. It is possible however to add
explicit buffer stages with overflow strategies that can influence the behaviour of the stream. This is especially important
in complex processing graphs which may even contain loops (which *must* be treated with very special
care, as explained in @ref:[Graph cycles, liveness and deadlocks](stream-graphs.md#graph-cycles-java)).
care, as explained in @ref:[Graph cycles, liveness and deadlocks](stream-graphs.md#graph-cycles)).
The back pressure protocol is defined in terms of the number of elements a downstream `Subscriber` is able to receive
and buffer, referred to as `demand`.
@ -156,7 +156,7 @@ different Reactive Streams implementations.
Akka Streams implements these concepts as `Source`, `Flow` (referred to as `Processor` in Reactive Streams)
and `Sink` without exposing the Reactive Streams interfaces directly.
If you need to integrate with other Reactive Stream libraries read @ref:[Integrating with Reactive Streams](stream-integrations.md#reactive-streams-integration-java).
If you need to integrate with other Reactive Stream libraries read @ref:[Integrating with Reactive Streams](stream-integrations.md#reactive-streams-integration).
@@@
@ -198,7 +198,7 @@ it will have to abide to this back-pressure by applying one of the below strateg
As we can see, this scenario effectively means that the `Subscriber` will *pull* the elements from the Publisher;
this mode of operation is referred to as pull-based back-pressure.
<a id="stream-materialization-java"></a>
<a id="stream-materialization"></a>
## Stream Materialization
When constructing flows and graphs in Akka Streams think of them as preparing a blueprint, an execution plan.
@ -222,7 +222,7 @@ yet will materialize that stage multiple times.
@@@
<a id="operator-fusion-java"></a>
<a id="operator-fusion"></a>
### Operator Fusion
By default Akka Streams will fuse the stream operators. This means that the processing steps of a flow or
@ -282,7 +282,7 @@ resulting values. Some examples of using these combiners are illustrated in the
@@@ note
In Graphs it is possible to access the materialized value from inside the stream processing graph. For details see @ref:[Accessing the materialized value inside the Graph](stream-graphs.md#graph-matvalue-java).
In Graphs it is possible to access the materialized value from inside the stream processing graph. For details see @ref:[Accessing the materialized value inside the Graph](stream-graphs.md#graph-matvalue).
@@@

View file

@ -11,14 +11,14 @@ Some graph operations which are common enough and fit the linear style of Flows,
streams, such that the second one is consumed after the first one has completed), may have shorthand methods defined on
`Flow` or `Source` themselves, however you should keep in mind that those are also implemented as graph junctions.
<a id="graph-dsl-java"></a>
<a id="graph-dsl"></a>
## Constructing Graphs
Graphs are built from simple Flows which serve as the linear connections within the graphs as well as junctions
which serve as fan-in and fan-out points for Flows. Thanks to the junctions having meaningful types based on their behaviour
and making them explicit elements these elements should be rather straightforward to use.
Akka Streams currently provide these junctions (for a detailed list see @ref:[stages-overview_java](stages-overview.md)):
Akka Streams currently provide these junctions (for a detailed list see @ref:[stages overview](stages-overview.md)):
* **Fan-out**
@ -68,7 +68,7 @@ is passed to it and return the inlets and outlets of the resulting copy so that
Another alternative is to pass existing graphs—of any shape—into the factory method that produces a
new graph. The difference between these approaches is that importing using `builder.add(...)` ignores the
materialized value of the imported graph while importing via the factory method allows its inclusion;
for more details see @ref:[Stream Materialization](../../scala/stream/stream-flows-and-basics.md#stream-materialization-scala).
for more details see @ref:[Stream Materialization](../stream/stream-flows-and-basics.md#stream-materialization).
In the example below we prepare a graph that consists of two parallel streams,
in which we re-use the same instance of `Flow`, yet it will properly be
@ -76,7 +76,7 @@ materialized as two connections between the corresponding Sources and Sinks:
@@snip [GraphDSLDocTest.java]($code$/java/jdocs/stream/GraphDSLDocTest.java) { #graph-dsl-reusing-a-flow }
<a id="partial-graph-dsl-java"></a>
<a id="partial-graph-dsl"></a>
## Constructing and combining Partial Graphs
Sometimes it is not possible (or needed) to construct the entire computation graph in one place, but instead construct
@ -110,7 +110,7 @@ A partial graph also verifies that all ports are either connected or part of the
@@@
<a id="constructing-sources-sinks-flows-from-partial-graphs-java"></a>
<a id="constructing-sources-sinks-flows-from-partial-graphs"></a>
## Constructing Sources, Sinks and Flows from Partial Graphs
Instead of treating a `Graph` as simply a collection of flows and junctions which may not yet all be
@ -153,7 +153,7 @@ The same can be done for a `Sink` but in this case it will be fan-out:
@@snip [StreamPartialGraphDSLDocTest.java]($code$/java/jdocs/stream/StreamPartialGraphDSLDocTest.java) { #sink-combine }
<a id="bidi-flow-java"></a>
<a id="bidi-flow"></a>
## Bidirectional Flows
A graph topology that is often useful is that of two flows going in opposite
@ -186,7 +186,7 @@ turns an object into a sequence of bytes.
The other stage that we talked about is a little more involved since reversing
a framing protocol means that any received chunk of bytes may correspond to
zero or more messages. This is best implemented using a `GraphStage`
(see also @ref:[Custom processing with GraphStage](stream-customize.md#graphstage-java)).
(see also @ref:[Custom processing with GraphStage](stream-customize.md#graphstage)).
@@snip [BidiFlowDocTest.java]($code$/java/jdocs/stream/BidiFlowDocTest.java) { #framing }
@ -199,7 +199,7 @@ together and also turned around with the `.reversed()` method. The test
simulates both parties of a network communication protocol without actually
having to open a network connection—the flows can just be connected directly.
<a id="graph-matvalue-java"></a>
<a id="graph-matvalue"></a>
## Accessing the materialized value inside the Graph
In certain cases it might be necessary to feed back the materialized value of a Graph (partial, closed or backing a
@ -215,7 +215,7 @@ The following example demonstrates a case where the materialized `CompletionStag
@@snip [GraphDSLDocTest.java]($code$/java/jdocs/stream/GraphDSLDocTest.java) { #graph-dsl-matvalue-cycle }
<a id="graph-cycles-java"></a>
<a id="graph-cycles"></a>
## Graph cycles, liveness and deadlocks
Cycles in bounded stream topologies need special considerations to avoid potential deadlocks and other liveness issues.

View file

@ -322,7 +322,7 @@ The numbers in parenthesis illustrates how many calls that are in progress at
the same time. Here the downstream demand and thereby the number of concurrent
calls are limited by the buffer size (4) of the `ActorMaterializerSettings`.
<a id="reactive-streams-integration-java"></a>
<a id="reactive-streams-integration"></a>
## Integrating with Reactive Streams
[Reactive Streams](http://reactive-streams.org/) defines a standard for asynchronous stream processing with non-blocking
@ -427,7 +427,7 @@ type-safe and safe to implement `akka.stream.stage.GraphStage`. It can also
expose a "stage actor ref" is needed to be addressed as-if an Actor.
Custom stages implemented using `GraphStage` are also automatically fusable.
To learn more about implementing custom stages using it refer to @ref:[Custom processing with GraphStage](stream-customize.md#graphstage-java).
To learn more about implementing custom stages using it refer to @ref:[Custom processing with GraphStage](stream-customize.md#graphstage).
@@@
@ -482,7 +482,7 @@ type-safe and safe to implement `akka.stream.stage.GraphStage`. It can also
expose a "stage actor ref" is needed to be addressed as-if an Actor.
Custom stages implemented using `GraphStage` are also automatically fusable.
To learn more about implementing custom stages using it refer to @ref:[Custom processing with GraphStage](../../scala/stream/stream-customize.md#graphstage-scala).
To learn more about implementing custom stages using it refer to @ref:[Custom processing with GraphStage](../stream/stream-customize.md#graphstage).
@@@

View file

@ -63,13 +63,13 @@ composition, therefore it may take some careful study of this subject until you
feel familiar with the tools and techniques. The documentation is here to help
and for best results we recommend the following approach:
* Read the @ref:[Quick Start Guide](stream-quickstart.md#stream-quickstart-java) to get a feel for how streams
* Read the @ref:[Quick Start Guide](stream-quickstart.md#stream-quickstart) to get a feel for how streams
look and what they can do.
* The top-down learners may want to peruse the @ref:[Design Principles behind Akka Streams](../../scala/general/stream/stream-design.md) at this
* The top-down learners may want to peruse the @ref:[Design Principles behind Akka Streams](../general/stream/stream-design.md) at this
point.
* The bottom-up learners may feel more at home rummaging through the
@ref:[Streams Cookbook](stream-cookbook.md).
* For a complete overview of the built-in processing stages you can look at the
table in @ref:[stages-overview_java](stages-overview.md)
table in @ref:[stages overview](stages-overview.md)
* The other sections can be read sequentially or as needed during the previous
steps, each digging deeper into specific topics.

View file

@ -67,7 +67,7 @@ When writing such end-to-end back-pressured systems you may sometimes end up in
in which *either side is waiting for the other one to start the conversation*. One does not need to look far
to find examples of such back-pressure loops. In the two examples shown previously, we always assumed that the side we
are connecting to would start the conversation, which effectively means both sides are back-pressured and can not get
the conversation started. There are multiple ways of dealing with this which are explained in depth in @ref:[Graph cycles, liveness and deadlocks](stream-graphs.md#graph-cycles-java),
the conversation started. There are multiple ways of dealing with this which are explained in depth in @ref:[Graph cycles, liveness and deadlocks](stream-graphs.md#graph-cycles),
however in client-server scenarios it is often the simplest to make either side simply send an initial message.
@@@ note

View file

@ -43,7 +43,7 @@ not be able to operate at full capacity <a id="^1" href="#1">[1]</a>.
@@@ note

Asynchronous stream processing stages have internal buffers to make communication between them more efficient.
For more details about the behavior of these and how to add additional buffers refer to @ref:[Buffers and working with rate](../../scala/stream/stream-rate.md).
For more details about the behavior of these and how to add additional buffers refer to @ref:[Buffers and working with rate](stream-rate.md).

@@@
## Parallel processing
@ -63,7 +63,7 @@ One drawback of the example code above that it does not preserve the ordering of
if children like to track their "own" pancakes. In those cases the `Balance` and `Merge` stages should be replaced
by strict round-robin balancing and merging stages that put in and take out pancakes in a strict order.
A more detailed example of creating a worker pool can be found in the cookbook: @ref:[Balancing jobs to a fixed pool of workers](../../scala/stream/stream-cookbook.md#cookbook-balance-scala)
A more detailed example of creating a worker pool can be found in the cookbook: @ref:[Balancing jobs to a fixed pool of workers](stream-cookbook.md#cookbook-balance)
## Combining pipelining and parallel processing

View file

@ -1,9 +1,9 @@
<a id="stream-quickstart-java"></a>
<a id="stream-quickstart"></a>
# Quick Start Guide
Create a project and add the akka-streams dependency to the build tool of your
choice as described in @ref:[Using a build tool](../../java/guide/quickstart.md).
choice as described in @ref:[Using a build tool](../guide/quickstart.md).
A stream usually begins at a source, so this is also how we start an Akka
Stream. Before we create one, we import the full complement of streaming tools:
@ -146,7 +146,7 @@ combinator will assert *back-pressure* upstream.
This is basically all there is to Akka Streams in a nutshell—glossing over the
fact that there are dozens of sources and sinks and many more stream
transformation combinators to choose from, see also @ref:[stages-overview_java](stages-overview.md).
transformation combinators to choose from, see also @ref:[stages overview](stages-overview.md).
# Reactive Tweets
@ -167,7 +167,7 @@ Here's the data model we'll be working with throughout the quickstart examples:
@@@ note
If you would like to get an overview of the used vocabulary first instead of diving head-first
into an actual example you can have a look at the @ref:[Core concepts](stream-flows-and-basics.md#core-concepts-java) and @ref:[Defining and running streams](stream-flows-and-basics.md#defining-and-running-streams-java)
into an actual example you can have a look at the @ref:[Core concepts](stream-flows-and-basics.md#core-concepts) and @ref:[Defining and running streams](stream-flows-and-basics.md#defining-and-running-streams)
sections of the docs, and then come back to this quickstart to see it all pieced together into a simple example application.
@@@
@ -183,7 +183,7 @@ which will be responsible for materializing and running the streams we are about
@@snip [TwitterStreamQuickstartDocTest.java]($code$/java/jdocs/stream/TwitterStreamQuickstartDocTest.java) { #materializer-setup }
The `ActorMaterializer` can optionally take `ActorMaterializerSettings` which can be used to define
materialization properties, such as default buffer sizes (see also @ref:[Buffers for asynchronous stages](stream-rate.md#async-stream-buffers-java)), the dispatcher to
materialization properties, such as default buffer sizes (see also @ref:[Buffers for asynchronous stages](stream-rate.md#async-stream-buffers)), the dispatcher to
be used by the pipeline etc. These can be overridden with `withAttributes` on `Flow`, `Source`, `Sink` and `Graph`.
Let's assume we have a stream of tweets readily available. In Akka this is expressed as a `Source<Out, M>`:
@ -195,7 +195,7 @@ more advanced graph elements to finally be consumed by a `Sink<In,M3>`.
The first type parameter—`Tweet` in this case—designates the kind of elements produced
by the source while the `M` type parameters describe the object that is created during
materialization ([see below](#materialized-values-quick-java))—`NotUsed` (from the `scala.runtime`
materialization ([see below](#materialized-values-quick))—`NotUsed` (from the `akka`
package) means that no value is produced; it is the generic equivalent of `void`.
The operations should look familiar to anyone who has used the Scala Collections library,
@ -204,7 +204,7 @@ only make sense in streaming and vice versa):
@@snip [TwitterStreamQuickstartDocTest.java]($code$/java/jdocs/stream/TwitterStreamQuickstartDocTest.java) { #authors-filter-map }
Finally in order to @ref:[materialize](stream-flows-and-basics.md#stream-materialization-java) and run the stream computation we need to attach
Finally in order to @ref:[materialize](stream-flows-and-basics.md#stream-materialization) and run the stream computation we need to attach
the Flow to a `Sink<T, M>` that will get the Flow running. The simplest way to do this is to call
`runWith(sink)` on a `Source<Out, M>`. For convenience a number of common Sinks are predefined and collected as static methods on
the `Sink` class.
@ -274,14 +274,14 @@ Both `Graph` and `RunnableGraph` are *immutable, thread-safe, and freely shareab
A graph can also have one of several other shapes, with one or more unconnected ports. Having unconnected ports
expresses a graph that is a *partial graph*. Concepts around composing and nesting graphs in large structures are
explained in detail in @ref:[Modularity, Composition and Hierarchy](stream-composition.md). It is also possible to wrap complex computation graphs
as Flows, Sinks or Sources, which will be explained in detail in @ref:[Constructing and combining Partial Graphs](stream-graphs.md#partial-graph-dsl-java).
as Flows, Sinks or Sources, which will be explained in detail in @ref:[Constructing and combining Partial Graphs](stream-graphs.md#partial-graph-dsl).
## Back-pressure in action
One of the main advantages of Akka Streams is that they *always* propagate back-pressure information from stream Sinks
(Subscribers) to their Sources (Publishers). It is not an optional feature, and is enabled at all times. To learn more
about the back-pressure protocol used by Akka Streams and all other Reactive Streams compatible implementations read
@ref:[Back-pressure explained](stream-flows-and-basics.md#back-pressure-explained-java).
@ref:[Back-pressure explained](stream-flows-and-basics.md#back-pressure-explained).
A typical problem that applications not using Akka Streams often face is that they are unable to process the incoming data fast enough,
either temporarily or by design, and will start buffering incoming data until there's no more space to buffer, resulting
@ -295,7 +295,7 @@ The `buffer` element takes an explicit and required `OverflowStrategy`, which de
when it receives another element while it is full. Strategies provided include dropping the oldest element (`dropHead`),
dropping the entire buffer, signalling failures etc. Be sure to pick and choose the strategy that fits your use case best.
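For example, a hedged sketch (a `tweets` source, a materializer `mat` and the `slowComputation` step are assumed placeholders):

```java
import akka.stream.OverflowStrategy;
import akka.stream.javadsl.Sink;

// Keep at most 10 elements in flight; on overflow drop the oldest buffered one.
tweets
    .buffer(10, OverflowStrategy.dropHead())
    .map(t -> slowComputation(t))
    .runWith(Sink.ignore(), mat);
```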
<a id="materialized-values-quick-java"></a>
<a id="materialized-values-quick"></a>
## Materialized values
So far we've been only processing data using Flows and consuming it into some kind of external Sink - be it by printing
@ -336,7 +336,7 @@ will be different, as illustrated by this example:
@@snip [TwitterStreamQuickstartDocTest.java]($code$/java/jdocs/stream/TwitterStreamQuickstartDocTest.java) { #tweets-runnable-flow-materialized-twice }
Many elements in Akka Streams provide materialized values which can be used for obtaining either results of computation or
steering these elements which will be discussed in detail in @ref:[Stream Materialization](stream-flows-and-basics.md#stream-materialization-java). Summing up this section, now we know
steering these elements which will be discussed in detail in @ref:[Stream Materialization](stream-flows-and-basics.md#stream-materialization). Summing up this section, now we know
what happens behind the scenes when we run this one-liner, which is equivalent to the multi line version above:
@@snip [TwitterStreamQuickstartDocTest.java]($code$/java/jdocs/stream/TwitterStreamQuickstartDocTest.java) { #tweets-fold-count-oneline }

View file

@ -3,7 +3,7 @@
When upstream and downstream rates differ, especially when the throughput has spikes, it can be useful to introduce
buffers in a stream. In this chapter we cover how buffers are used in Akka Streams.
<a id="async-stream-buffers-java"></a>
<a id="async-stream-buffers"></a>
## Buffers for asynchronous stages
In this section we will discuss internal buffers that are introduced as an optimization when using asynchronous stages.
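These buffer sizes can be set globally in configuration or per materializer; a sketch (an `ActorSystem` named `system` is assumed):

```java
import akka.stream.ActorMaterializer;
import akka.stream.ActorMaterializerSettings;
import akka.stream.Materializer;

// Shrink the internal buffers to a single element, which is useful for
// observing ordering and backpressure effects (usually not for production).
final Materializer smallBuffers = ActorMaterializer.create(
    ActorMaterializerSettings.create(system).withInputBuffer(1, 1),
    system);
```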

View file

@ -60,8 +60,8 @@ instead of using `TestActorRef` whenever possible.
Due to the synchronous nature of `TestActorRef` it will **not** work with some support
traits that Akka provides as they require asynchronous behaviours to function properly.
Examples of traits that do not mix well with test actor refs are @ref:[PersistentActor](persistence.md#event-sourcing-java)
and @ref:[AtLeastOnceDelivery](persistence.md#at-least-once-delivery-java) provided by @ref:[Akka Persistence](persistence.md).
Examples of traits that do not mix well with test actor refs are @ref:[PersistentActor](persistence.md#event-sourcing)
and @ref:[AtLeastOnceDelivery](persistence.md#at-least-once-delivery) provided by @ref:[Akka Persistence](persistence.md).
@@@
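Such cases call for the asynchronous style, sketched below in outline (`Worker` and the ping/pong messages are placeholders):

```java
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.testkit.javadsl.TestKit;

new TestKit(system) {{
  final ActorRef subject = system.actorOf(Props.create(Worker.class));
  // getRef() is the test probe actor; replies from subject land in its queue.
  subject.tell("ping", getRef());
  expectMsgEquals("pong");
}};
```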
@ -140,7 +140,7 @@ Feel free to experiment with the possibilities, and if you find useful
patterns, don't hesitate to let the Akka forums know about them! Who knows,
common operations might even be worked into nice DSLs.
<a id="async-integration-testing-java"></a>
<a id="async-integration-testing"></a>
## Asynchronous Integration Testing with `TestKit`
When you are reasonably sure that your actor's business logic is correct, the
@ -319,7 +319,7 @@ for managing time constraints:
@@snip [TestKitDocTest.java]($code$/java/jdocs/testkit/TestKitDocTest.java) { #test-within }
The block in `within` must complete after a @ref:[Duration](../scala/common/duration.md) which
The block in `within` must complete after a @ref:[Duration](common/duration.md) which
is between `min` and `max`, where the former defaults to zero. The
deadline calculated by adding the `max` parameter to the block's start
time is implicitly available within the block to all examination methods, if
@ -639,7 +639,7 @@ actor semantics
exception stack traces
* Exclusion of certain classes of dead-lock scenarios
<a id="actor-logging-java"></a>
<a id="actor-logging"></a>
## Tracing Actor Invocations
The testing facilities described up to this point were aiming at formulating
@ -682,4 +682,4 @@ akka {
## Configuration
There are several configuration properties for the TestKit module, please refer
to the @ref:[reference configuration](../scala/general/configuration.md#config-akka-testkit).
to the @ref:[reference configuration](general/configuration.md#config-akka-testkit).

View file

@ -161,7 +161,7 @@ as an input parameter to TypedActor.get(…).
By having your Typed Actor implementation class implement `TypedActor.Supervisor`
you can define the strategy to use for supervising child actors, as described in
<!-- FIXME: More than one link target with name supervision in path Some(/java/typed-actors.rst) --> supervision and @ref:[Fault Tolerance](fault-tolerance.md).
@ref:[supervision](general/supervision.md) and @ref:[Fault Tolerance](fault-tolerance.md).
## Receive arbitrary messages

View file

@ -41,13 +41,13 @@ construction).
### Life-cycle management
Life-cycle hooks are also exposed as DSL elements (see @ref:[Start Hook](actors.md#start-hook-scala) and @ref:[Stop Hook](actors.md#stop-hook-scala)), where later invocations of the methods shown below will replace the contents of the respective hooks:
Life-cycle hooks are also exposed as DSL elements (see @ref:[Start Hook](actors.md#start-hook) and @ref:[Stop Hook](actors.md#stop-hook)), where later invocations of the methods shown below will replace the contents of the respective hooks:
@@snip [ActorDSLSpec.scala]($akka$/akka-actor-tests/src/test/scala/akka/actor/ActorDSLSpec.scala) { #simple-start-stop }
The above is enough if the logical life-cycle of the actor matches the restart
cycles (i.e. `whenStopping` is executed before a restart and `whenStarting`
afterwards). If that is not desired, use the following two hooks (see @ref:[Restart Hooks](actors.md#restart-hook-scala)):
afterwards). If that is not desired, use the following two hooks (see @ref:[Restart Hooks](actors.md#restart-hook)):
@@snip [ActorDSLSpec.scala]($akka$/akka-actor-tests/src/test/scala/akka/actor/ActorDSLSpec.scala) { #failing-actor }

View file

@ -17,8 +17,8 @@ its syntax from Erlang.
Since Akka enforces parental supervision every actor is supervised and
(potentially) the supervisor of its children, it is advisable that you
familiarize yourself with @ref:[Actor Systems](../scala/general/actor-systems.md) and <!-- FIXME: More than one link target with name supervision in path Some(/scala/actors.rst) --> supervision and it
may also help to read @ref:[Actor References, Paths and Addresses](../scala/general/addressing.md).
familiarize yourself with @ref:[Actor Systems](general/actor-systems.md) and @ref:[supervision](general/supervision.md)
and it may also help to read @ref:[Actor References, Paths and Addresses](general/addressing.md).
@@@
@ -157,7 +157,7 @@ create a child actor.
It is recommended to create a hierarchy of children, grand-children and so on
such that it fits the logical failure-handling structure of the application,
see @ref:[Actor Systems](../scala/general/actor-systems.md).
see @ref:[Actor Systems](general/actor-systems.md).
The call to `actorOf` returns an instance of `ActorRef`. This is a
handle to the actor instance and the only way to interact with it. The
@ -200,7 +200,7 @@ constructor arguments are determined by a dependency injection framework.
You might be tempted at times to offer an `IndirectActorProducer`
which always returns the same instance, e.g. by using a `lazy val`. This is
not supported, as it goes against the meaning of an actor restart, which is
described here: @ref:[What Restarting Means](../scala/general/supervision.md#supervision-restart).
described here: @ref:[What Restarting Means](general/supervision.md#supervision-restart).
When using a dependency injection framework, actor beans *MUST NOT* have
singleton scope.
@ -276,7 +276,7 @@ described in the following:
The implementations shown above are the defaults provided by the `Actor`
trait.
<a id="actor-lifecycle-scala"></a>
<a id="actor-lifecycle"></a>
### Actor Lifecycle
![actor_lifecycle.png](../images/actor_lifecycle.png)
@ -295,7 +295,7 @@ are notified of the termination. After the incarnation is stopped, the path can
be reused again by creating an actor with `actorOf()`. In this case the
name of the new incarnation will be the same as the previous one but the
UIDs will differ. An actor can be stopped by the actor itself, another actor
or the `ActorSystem` (see [Stopping actors](#stopping-actors-scala)).
or the `ActorSystem` (see [Stopping actors](#stopping-actors)).
@@@ note
@ -317,11 +317,11 @@ occupying it. `ActorSelection` cannot be watched for this reason. It is
possible to resolve the current incarnation's `ActorRef` living under the
path by sending an `Identify` message to the `ActorSelection` which
will be replied to with an `ActorIdentity` containing the correct reference
(see [actorSelection-scala](#actorselection-scala)). This can also be done with the `resolveOne`
(see [ActorSelection](#actorselection)). This can also be done with the `resolveOne`
method of the `ActorSelection`, which returns a `Future` of the matching
`ActorRef`.
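A rough sketch of both approaches, with the path and correlation id as illustrative values:

```scala
import akka.actor.{ Actor, ActorIdentity, ActorRef, Identify }
import akka.util.Timeout
import scala.concurrent.Future
import scala.concurrent.duration._

class Resolver extends Actor {
  val selection = context.actorSelection("/user/worker") // path is illustrative
  selection ! Identify("worker-lookup")                  // any value works as correlation id

  implicit val timeout: Timeout = 3.seconds
  val resolved: Future[ActorRef] = selection.resolveOne() // fails if nothing lives there

  def receive = {
    case ActorIdentity("worker-lookup", Some(ref)) =>
      context.watch(ref) // the current incarnation under that path
    case ActorIdentity("worker-lookup", None) =>
      context.stop(self) // no actor lives at that path
  }
}
```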
<a id="deathwatch-scala"></a>
<a id="deathwatch"></a>
### Lifecycle Monitoring aka DeathWatch
In order to be notified when another actor terminates (i.e. stops permanently,
@ -352,7 +352,7 @@ using `context.unwatch(target)`. This works even if the `Terminated`
message has already been enqueued in the mailbox; after calling `unwatch`
no `Terminated` message for that actor will be processed anymore.
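For reference, a minimal sketch of registering and reacting to `Terminated` (the `Worker` class and message names are illustrative):

```scala
import akka.actor.{ Actor, Props, Terminated }

class Watcher extends Actor {
  val child = context.actorOf(Props[Worker], "worker") // Worker is a hypothetical actor class
  context.watch(child) // register for the child's Terminated message

  def receive = {
    case "stop-child" => context.stop(child)
    case Terminated(`child`) =>
      // the watched actor has stopped permanently; its ActorRef must not be reused
      context.stop(self)
  }
}
```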
<a id="start-hook-scala"></a>
<a id="start-hook"></a>
### Start Hook
Right after starting the actor, its `preStart` method is invoked.
@ -367,12 +367,12 @@ Initialization code which is part of the actors constructor will always be
called when an instance of the actor class is created, which happens at every
restart.
<a id="restart-hook-scala"></a>
<a id="restart-hook"></a>
### Restart Hooks
All actors are supervised, i.e. linked to another actor with a fault
handling strategy. Actors may be restarted in case an exception is thrown while
processing a message (see <!-- FIXME: More than one link target with name supervision in path Some(/scala/actors.rst) --> supervision). This restart involves the hooks
processing a message (see @ref:[supervision](general/supervision.md)). This restart involves the hooks
mentioned above:
1.
@ -404,11 +404,11 @@ usual.
Be aware that the ordering of failure notifications relative to user messages
is not deterministic. In particular, a parent might restart its child before
it has processed the last messages sent by the child before the failure.
See @ref:[Discussion: Message Ordering](../scala/general/message-delivery-reliability.md#message-ordering) for details.
See @ref:[Discussion: Message Ordering](general/message-delivery-reliability.md#message-ordering) for details.
@@@
<a id="stop-hook-scala"></a>
<a id="stop-hook"></a>
### Stop Hook
After stopping an actor, its `postStop` hook is called, which may be used
@ -417,10 +417,10 @@ to run after message queuing has been disabled for this actor, i.e. messages
sent to a stopped actor will be redirected to the `deadLetters` of the
`ActorSystem`.
<a id="actorselection-scala"></a>
<a id="actorselection"></a>
## Identifying Actors via Actor Selection
As described in @ref:[Actor References, Paths and Addresses](../scala/general/addressing.md), each actor has a unique logical path, which
As described in @ref:[Actor References, Paths and Addresses](general/addressing.md), each actor has a unique logical path, which
is obtained by following the chain of actors from child to parent until
reaching the root of the actor system, and it has a physical path, which may
differ if the supervision chain includes any remote supervisors. These paths
@ -438,7 +438,7 @@ It is always preferable to communicate with other Actors using their ActorRef
instead of relying upon ActorSelection. Exceptions are
>
* sending messages using the @ref:[At-Least-Once Delivery](persistence.md#at-least-once-delivery-scala) facility
* sending messages using the @ref:[At-Least-Once Delivery](persistence.md#at-least-once-delivery) facility
* initiating first contact with a remote system
In all other cases ActorRefs can be provided during Actor creation or
@ -487,7 +487,7 @@ Remote actor addresses may also be looked up, if @ref:[remoting](remoting.md) is
@@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #selection-remote }
An example demonstrating actor look-up is given in @ref:[Remoting Sample](remoting.md#remote-sample-scala).
An example demonstrating actor look-up is given in @ref:[Remoting Sample](remoting.md#remote-sample).
## Messages and immutability
@ -528,7 +528,7 @@ remoting. So always prefer `tell` for performance, and only `ask` if you must.
@@@
<a id="actors-tell-sender-scala"></a>
<a id="actors-tell-sender"></a>
### Tell: Fire-forget
This is the preferred way of sending messages. No blocking waiting for a
@ -544,7 +544,7 @@ to reply to the original sender, by using `sender() ! replyMsg`.
If invoked from an instance that is **not** an Actor the sender will be
the `deadLetters` actor reference by default.
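For reference, a minimal sketch of tell and replying via `sender()` (actor and message values are illustrative):

```scala
import akka.actor.Actor

class Greeter extends Actor {
  def receive = {
    case "hello" => sender() ! "hi there" // reply goes to whoever sent "hello"
  }
}

// elsewhere, with greeter: ActorRef in scope:
// greeter ! "hello"              // implicit sender() when used inside an actor
// greeter.tell("hello", self)    // the explicit equivalent
```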
<a id="actors-ask-scala"></a>
<a id="actors-ask"></a>
### Ask: Send-And-Receive-Future
The `ask` pattern involves actors as well as futures, hence it is offered as
@ -603,7 +603,7 @@ on the enclosing actor from within the callback. This would break the actor
encapsulation and may introduce synchronization bugs and race conditions because
the callback will be scheduled concurrently to the enclosing actor. Unfortunately
there is not yet a way to detect these illegal accesses at compile time.
See also: @ref:[Actors and shared mutable state](../scala/general/jmm.md#jmm-shared-state)
See also: @ref:[Actors and shared mutable state](general/jmm.md#jmm-shared-state)
@@@
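A common way to stay safe is to turn the future's result back into an ordinary message with `pipeTo`, so no callback ever touches actor state concurrently; a minimal sketch (names are illustrative):

```scala
import akka.actor.{ Actor, ActorRef }
import akka.pattern.{ ask, pipe }
import akka.util.Timeout
import scala.concurrent.duration._

class Requester(worker: ActorRef) extends Actor {
  import context.dispatcher // ExecutionContext for mapTo/pipeTo
  implicit val timeout: Timeout = 5.seconds

  def receive = {
    case "run" =>
      // the future's result is delivered to self as a message
      (worker ? "job").mapTo[String].pipeTo(self)
    case result: String =>
      println(s"got $result") // safe: processed like any other message
  }
}
```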
@ -664,7 +664,7 @@ Messages marked with `NotInfluenceReceiveTimeout` will not reset the timer. This
`ReceiveTimeout` should be fired by external inactivity but not influenced by internal activity,
e.g. scheduled tick messages.
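A minimal sketch of setting and handling the timeout (the duration and reaction are illustrative):

```scala
import akka.actor.{ Actor, ReceiveTimeout }
import scala.concurrent.duration._

class Idle extends Actor {
  context.setReceiveTimeout(30.seconds) // reset by every ordinary message

  def receive = {
    case ReceiveTimeout =>
      context.stop(self) // no message received for 30 seconds
    case _ => // any other message resets the timer
  }
}
```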
<a id="stopping-actors-scala"></a>
<a id="stopping-actors"></a>
## Stopping actors
Actors are stopped by invoking the `stop` method of an `ActorRefFactory`,
@ -684,7 +684,7 @@ Termination of an actor proceeds in two steps: first the actor suspends its
mailbox processing and sends a stop command to all its children, then it keeps
processing the internal termination notifications from its children until the last one is
gone, finally terminating itself (invoking `postStop`, dumping mailbox,
publishing `Terminated` on the [DeathWatch](#deathwatch-scala), telling
publishing `Terminated` on the [DeathWatch](#deathwatch), telling
its supervisor). This procedure ensures that actor system sub-trees terminate
in an orderly fashion, propagating the stop command to the leaves and
collecting their confirmation back to the stopped supervisor. If one of the
@ -711,7 +711,7 @@ message which will eventually arrive.
@@@
<a id="poison-pill-scala"></a>
<a id="poison-pill"></a>
### PoisonPill
You can also send an actor the `akka.actor.PoisonPill` message, which will
@ -748,7 +748,7 @@ message, i.e. not for top-level actors.
@@@
<a id="coordinated-shutdown-scala"></a>
<a id="coordinated-shutdown"></a>
### Coordinated Shutdown
There is an extension named `CoordinatedShutdown` that will stop certain actors and
@ -777,7 +777,7 @@ is only used for debugging/logging.
Tasks added to the same phase are executed in parallel without any ordering assumptions.
The next phase will not start until all tasks of the previous phase have been completed.
If tasks are not completed within a configured timeout (see @ref:[reference.conf](../scala/general/configuration.md#config-akka-actor))
If tasks are not completed within a configured timeout (see @ref:[reference.conf](general/configuration.md#config-akka-actor))
the next phase will be started anyway. It is possible to configure `recover=off` for a phase
to abort the rest of the shutdown process if a task fails or is not completed within the timeout.
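A minimal sketch of registering such a task (phase constant is real, the task name and body are illustrative); the returned `Future[Done]` must complete before the phase is considered done:

```scala
import akka.Done
import akka.actor.{ ActorSystem, CoordinatedShutdown }
import scala.concurrent.Future

val system = ActorSystem("demo")

CoordinatedShutdown(system).addTask(
  CoordinatedShutdown.PhaseBeforeServiceUnbind, "logShutdown") { () =>
  println("shutting down")
  Future.successful(Done)
}
```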
@ -869,7 +869,7 @@ behavior is not the default).
See this [Unnested receive example](@github@/akka-docs/rst/scala/code/docs/actor/UnnestedReceives.scala).
<a id="stash-scala"></a>
<a id="stash"></a>
## Stash
The *Stash* trait enables an actor to temporarily stash away messages
@ -938,14 +938,14 @@ then you should use the `UnboundedStash` trait instead.
@@@
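For reference, a minimal sketch of the stash/unstash cycle (actor and message names are illustrative):

```scala
import akka.actor.{ Actor, Stash }

class DbWriter extends Actor with Stash {
  def receive = waiting

  def waiting: Receive = {
    case "open" =>
      unstashAll()         // prepend everything stashed back onto the mailbox
      context.become(open)
    case _ => stash()      // park messages until the connection is open
  }

  def open: Receive = {
    case msg => println(s"writing $msg")
  }
}
```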
<a id="killing-actors-scala"></a>
<a id="killing-actors"></a>
## Killing an Actor
You can kill an actor by sending a `Kill` message. This will cause the actor
to throw an `ActorKilledException`, triggering a failure. The actor will
suspend operation and its supervisor will be asked how to handle the failure,
which may mean resuming the actor, restarting it or terminating it completely.
See @ref:[What Supervision Means](../scala/general/supervision.md#supervision-directives) for more information.
See @ref:[What Supervision Means](general/supervision.md#supervision-directives) for more information.
Use `Kill` like this:
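A minimal sketch, with `victim` standing in for the target `ActorRef`:

```scala
// victim: ActorRef is assumed to be in scope
victim ! akka.actor.Kill
```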
@ -967,8 +967,7 @@ lost. It is important to understand that it is not put back on the mailbox. So
if you want to retry processing of a message, you need to deal with it yourself
by catching the exception and retrying your flow. Make sure that you put a bound
on the number of retries since you don't want a system to livelock (so
consuming a lot of cpu cycles without making progress). Another possibility
would be to have a look at the <!-- FIXME: unresolved link reference: mailbox-acking --> mailbox-acking.
consuming a lot of CPU cycles without making progress).
### What happens to the mailbox
@ -979,7 +978,7 @@ messages on that mailbox will be there as well.
### What happens to the actor
If code within an actor throws an exception, that actor is suspended and the
supervision process is started (see <!-- FIXME: More than one link target with name supervision in path Some(/scala/actors.rst) --> supervision). Depending on the
supervision process is started (see @ref:[supervision](general/supervision.md)). Depending on the
supervisors decision the actor is resumed (as if nothing happened), restarted
(wiping out its internal state and starting from scratch) or terminated.
@ -1033,7 +1032,7 @@ Please note, that the child actors are *still restarted*, but no new `ActorRef`
the same principles for the children, ensuring that their `preStart()` method is called only at the creation of their
refs.
For more information see @ref:[What Restarting Means](../scala/general/supervision.md#supervision-restart).
For more information see @ref:[What Restarting Means](general/supervision.md#supervision-restart).
### Initialization via message passing

View file

@ -30,7 +30,7 @@ Follow the instructions for the `JavaAppPackaging` in the [sbt-native-packager p
You can use both Akka remoting and Akka Cluster inside of Docker containers. But note
that you will need to take special care with the network configuration when using Docker,
described here: @ref:[Akka behind NAT or in a Docker container](../../scala/remoting.md#remote-configuration-nat)
described here: @ref:[Akka behind NAT or in a Docker container](../remoting.md#remote-configuration-nat)
For an example of how to set up a project using Akka Cluster and Docker take a look at the
["akka-docker-cluster" sample](https://github.com/muuki88/activator-akka-docker).

View file

@ -93,12 +93,12 @@ be different
If you still do not see anything, look at what the logging of remote
life-cycle events tells you (normally logged at INFO level) or switch on
@ref:[Auxiliary remote logging options](../../java/logging.md#logging-remote-java)
@ref:[Auxiliary remote logging options](../logging.md#logging-remote)
to see all sent and received messages (logged at DEBUG level).
### Which options shall I enable when debugging remoting issues?
Have a look at the @ref:[Remote Configuration](../../java/remoting.md#remote-configuration-java), the typical candidates are:
Have a look at the @ref:[Remote Configuration](../remoting.md#remote-configuration), the typical candidates are:
* *akka.remote.log-sent-messages*
* *akka.remote.log-received-messages*
@ -170,4 +170,4 @@ To enable different types of debug logging add the following to your configurati
* `akka.actor.debug.autoreceive` will log all *special* messages like `Kill`, `PoisonPill` etc. sent to all actors
* `akka.actor.debug.lifecycle` will log all actor lifecycle events of all actors
Read more about it in the docs for @ref:[Logging](../../java/logging.md) and @ref:[actor.logging-scala](../../scala/testing.md#actor-logging-scala).
Read more about it in the docs for @ref:[Logging](../logging.md) and @ref:[actor.logging-scala](../testing.md#actor-logging).

View file

@ -109,7 +109,7 @@ Example of monadic usage:
## Configuration
There are several configuration properties for the agents module, please refer
to the @ref:[reference configuration](../scala/general/configuration.md#config-akka-agent).
to the @ref:[reference configuration](general/configuration.md#config-akka-agent).
## Deprecated Transactional Agents

View file

@ -33,8 +33,8 @@ The above example exposes an actor over a TCP endpoint via Apache
Camel's [Mina component](http://camel.apache.org/mina2.html). The actor implements the endpointUri method to define
an endpoint from which it can receive messages. After starting the actor, TCP
clients can immediately send messages to and receive responses from that
actor. If the message exchange should go over HTTP (via Camel's <!-- FIXME: duplicate target id: jetty component --> `Jetty
component`_), only the actor's endpointUri method must be changed.
actor. If the message exchange should go over HTTP (via Camel's Jetty
component), only the actor's endpointUri method must be changed.
@@snip [Introduction.scala]($code$/scala/docs/camel/Introduction.scala) { #Consumer }
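Roughly, such a consumer can look like this (endpoint URI and reply text are illustrative):

```scala
import akka.camel.{ CamelMessage, Consumer }

class TcpEcho extends Consumer {
  def endpointUri = "mina2:tcp://localhost:6200?textline=true"

  def receive = {
    case msg: CamelMessage =>
      sender() ! ("received: " + msg.bodyAs[String]) // reply goes back over the endpoint
  }
}
```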
@ -70,13 +70,13 @@ You can also create a CamelMessage yourself with the appropriate body and header
The akka-camel module is implemented as an Akka Extension, the `CamelExtension` object.
Extensions will only be loaded once per `ActorSystem`, which will be managed by Akka.
The `CamelExtension` object provides access to the [Camel](@github@/akka-camel/src/main/scala/akka/camel/Camel.scala) trait.
The [Camel](@github@/akka-camel/src/main/scala/akka/camel/Camel.scala) trait in turn provides access to two important Apache Camel objects, the [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) and the <!-- FIXME: duplicate target id: producertemplate --> `ProducerTemplate`_.
The [Camel](@github@/akka-camel/src/main/scala/akka/camel/Camel.scala) trait in turn provides access to two important Apache Camel objects, the [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) and the `ProducerTemplate`.
Below you can see how you can get access to these Apache Camel objects.
@@snip [Introduction.scala]($code$/scala/docs/camel/Introduction.scala) { #CamelExtension }
One `CamelExtension` is only loaded once for every one `ActorSystem`, which makes it safe to call the `CamelExtension` at any point in your code to get to the
Apache Camel objects associated with it. There is one [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) and one <!-- FIXME: duplicate target id: producertemplate --> `ProducerTemplate`_ for every one `ActorSystem` that uses a `CamelExtension`.
Apache Camel objects associated with it. There is one [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) and one `ProducerTemplate` for every one `ActorSystem` that uses a `CamelExtension`.
By default, a new [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) is created when the `CamelExtension` starts. If you want to inject your own context instead,
you can extend the [ContextProvider](@github@/akka-camel/src/main/scala/akka/camel/ContextProvider.scala) trait and add the FQCN of your implementation in the config, as the value of the "akka.camel.context-provider".
This interface defines a single method `getContext` used to load the [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java).
@ -86,7 +86,7 @@ Below an example on how to add the ActiveMQ component to the [CamelContext](http
@@snip [Introduction.scala]($code$/scala/docs/camel/Introduction.scala) { #CamelExtensionAddComponent }
The [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) joins the lifecycle of the `ActorSystem` and `CamelExtension` it is associated with; the [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) is started when
the `CamelExtension` is created, and it is shut down when the associated `ActorSystem` is shut down. The same is true for the <!-- FIXME: unresolved link reference: producertemplate --> `ProducerTemplate`_.
the `CamelExtension` is created, and it is shut down when the associated `ActorSystem` is shut down. The same is true for the `ProducerTemplate`.
The `CamelExtension` is used by both *Producer* and *Consumer* actors to interact with Apache Camel internally.
You can access the `CamelExtension` inside a *Producer* or a *Consumer* using the `camel` definition, or get straight at the *CamelContext* using the `camelContext` definition.
@ -121,8 +121,8 @@ actor. Messages consumed by actors from Camel endpoints are of type
[CamelMessage](@github@/akka-camel/src/main/scala/akka/camel/CamelMessage.scala). These are immutable representations of Camel messages.
Here's another example that sets the endpointUri to
`jetty:http://localhost:8877/camel/default`. It causes Camel's <!-- FIXME: duplicate target id: jetty component --> `Jetty
component`_ to start an embedded [Jetty](http://www.eclipse.org/jetty/) server, accepting HTTP connections
`jetty:http://localhost:8877/camel/default`. It causes Camel's Jetty
component to start an embedded [Jetty](http://www.eclipse.org/jetty/) server, accepting HTTP connections
from localhost on port 8877.
@@snip [Consumers.scala]($code$/scala/docs/camel/Consumers.scala) { #Consumer2 }
@ -160,8 +160,8 @@ acknowledgement).
Camel Exchanges (and their corresponding endpoints) that support two-way communications need to wait for a response from
an actor before returning it to the initiating client.
For some endpoint types, timeout values can be defined in an endpoint-specific
way which is described in the documentation of the individual <!-- FIXME: duplicate target id: camel components --> `Camel
components`_. Another option is to configure timeouts on the level of consumer actors.
way which is described in the documentation of the individual Camel
components. Another option is to configure timeouts on the level of consumer actors.
Two-way communications between a Camel endpoint and an actor are
initiated by sending the request message to the actor with the [ask](@github@/akka-actor/src/main/scala/akka/pattern/AskSupport.scala) pattern
@ -193,7 +193,7 @@ Producer actor and waits for a response.
The future contains the response CamelMessage, or an `AkkaCamelException` when an error occurred, which contains the headers of the response.
<a id="camel-custom-processing-scala"></a>
<a id="camel-custom-processing"></a>
### Custom Processing
Instead of replying to the initial sender, producer actors can implement custom
@ -227,8 +227,8 @@ To correlate request with response messages, applications can set the
### ProducerTemplate
The [Producer](@github@/akka-camel/src/main/scala/akka/camel/Producer.scala) trait is a very
convenient way for actors to produce messages to Camel endpoints. Actors may also use a Camel <!-- FIXME: unresolved link reference: producertemplate --> `ProducerTemplate`_ for producing
messages to endpoints.
convenient way for actors to produce messages to Camel endpoints. Actors may also use a Camel
`ProducerTemplate` for producing messages to endpoints.
@@snip [Producers.scala]($code$/scala/docs/camel/Producers.scala) { #ProducerTemplate }
@ -252,15 +252,16 @@ asynchronous routing engine. Asynchronous responses are wrapped and added to the
producer actor's mailbox for later processing. By default, response messages are
returned to the initial sender but this can be overridden by Producer
implementations (see also description of the `routeResponse` method
in [Custom Processing](#camel-custom-processing-scala)).
in [Custom Processing](#camel-custom-processing)).
However, asynchronous two-way message exchanges, without allocating a thread for
the full duration of exchange, cannot be generically supported by Camel's
asynchronous routing engine alone. This must be supported by the individual
<!-- FIXME: duplicate target id: camel components --> `Camel components`_ (from which endpoints are created) as well. They must be
Camel components (from which endpoints are created) as well. They must be
able to suspend any work started for request processing (thereby freeing threads
to do other work) and resume processing when the response is ready. This is
currently the case for a [subset of components](http://camel.apache.org/asynchronous-routing-engine.html) such as the <!-- FIXME: duplicate target id: jetty component --> `Jetty component`_.
currently the case for a [subset of components](http://camel.apache.org/asynchronous-routing-engine.html)
such as the Jetty component.
All other Camel components can still be used, of course, but they will cause
allocation of a thread for the duration of an in-out message exchange. There's
also [Examples](#camel-examples) that implements both, an asynchronous
@ -290,7 +291,7 @@ most use cases, some applications may require more specialized routes to actors.
The akka-camel module provides two mechanisms for customizing routes to actors,
which will be explained in this section. These are:
* Usage of [camel-components](#camel-components-2) to access actors.
* Usage of [camel components](#camel-components-2) to access actors.
Any Camel route can use these components to access Akka actors.
* [Intercepting route construction](#camel-intercepting-route-construction) to actors.
This option gives you the ability to change routes that have already been added to Camel.
@ -299,13 +300,13 @@ Consumer actors have a hook into the route definition process which can be used
<a id="camel-components-2"></a>
### Akka Camel components
Akka actors can be accessed from Camel routes using the <!-- FIXME: duplicate target id: actor --> `actor`_ Camel component. This component can be used to
Akka actors can be accessed from Camel routes using the actor Camel component. This component can be used to
access any Akka actor (not only consumer actors) from Camel routes, as described in the following sections.
<a id="access-to-actors"></a>
### Access to actors
To access actors from custom Camel routes, the <!-- FIXME: duplicate target id: actor --> `actor`_ Camel
To access actors from custom Camel routes, the actor Camel
component should be used. It fully supports Camel's [asynchronous routing
engine](http://camel.apache.org/asynchronous-routing-engine.html).
@ -357,7 +358,7 @@ the HTTP request.
<a id="camel-intercepting-route-construction"></a>
### Intercepting route construction
The previous section, [camel-components](#camel-components-2), explained how to setup a route to an actor manually.
The previous section, [camel components](#camel-components-2), explained how to set up a route to an actor manually.
It was the application's responsibility to define the route and add it to the current CamelContext.
This section explains a more convenient way to define custom routes: akka-camel still sets up the routes to consumer actors (and adds these routes to the current CamelContext), but applications can define extensions to these routes.
Extensions can be defined with Camel's [Java DSL](http://camel.apache.org/dsl.html) or [Scala DSL](http://camel.apache.org/scala-dsl.html).
@ -403,7 +404,7 @@ using the Camel Quartz component
## Configuration
There are several configuration properties for the Camel module, please refer
to the @ref:[reference configuration](../scala/general/configuration.md#config-akka-camel).
to the @ref:[reference configuration](general/configuration.md#config-akka-camel).
## Additional Resources

View file

@ -101,7 +101,7 @@ The `initialContacts` parameter is a `Set[ActorPath]`, which can be created like
@@snip [ClusterClientSpec.scala]($akka$/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/client/ClusterClientSpec.scala) { #initialContacts }
You will probably define the address information of the initial contact points in configuration or system property.
See also [Configuration](#cluster-client-config-scala).
See also [Configuration](#cluster-client-config).
A more comprehensive sample is available in the tutorial named [Distributed workers with Akka and Scala!](https://github.com/typesafehub/activator-akka-distributed-workers-scala).
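For reference, a minimal sketch of constructing the contacts and the client (system name, host and port values are illustrative):

```scala
import akka.actor.{ ActorPath, ActorSystem }
import akka.cluster.client.{ ClusterClient, ClusterClientSettings }

val system = ActorSystem("client")

val initialContacts = Set(
  ActorPath.fromString("akka.tcp://OtherSys@host1:2552/system/receptionist"),
  ActorPath.fromString("akka.tcp://OtherSys@host2:2552/system/receptionist"))

val client = system.actorOf(
  ClusterClient.props(ClusterClientSettings(system).withInitialContacts(initialContacts)),
  "client")

// route a message to a service registered with the receptionist on the cluster side
client ! ClusterClient.Send("/user/serviceA", "hello", localAffinity = true)
```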
@ -155,7 +155,7 @@ maven:
</dependency>
```
<a id="cluster-client-config-scala"></a>
<a id="cluster-client-config"></a>
## Configuration
The `ClusterClientReceptionist` extension (or `ClusterReceptionistSettings`) can be configured

View file

@ -25,7 +25,7 @@ and add the following configuration stanza to your `application.conf`
akka.extensions = [ "akka.cluster.metrics.ClusterMetricsExtension" ]
```
Cluster members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up-scala), if that feature is enabled,
Cluster members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up), if that feature is enabled,
will participate in Cluster Metrics collection and dissemination.
## Metrics Collector
@ -112,7 +112,7 @@ It can be configured to use a specific MetricsSelector to produce the probabilit
* `mix` / `MixMetricsSelector` - Combines heap, cpu and load. Weights based on mean of remaining capacity of the combined selectors.
* Any custom implementation of `akka.cluster.metrics.MetricsSelector`
The collected metrics values are smoothed with [exponential weighted moving average](http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average). In the @ref:[cluster_configuration_scala](cluster-usage.md#cluster-configuration-scala) you can adjust how quickly past data is decayed compared to new data.
The collected metrics values are smoothed with [exponential weighted moving average](http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average). In the @ref:[Cluster configuration](cluster-usage.md#cluster-configuration) you can adjust how quickly past data is decayed compared to new data.
Let's take a look at this router in action. What can be more demanding than calculating factorials?
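A sketch of how such a metrics-aware router can be defined in code (the `FactorialBackend` actor and the settings values are illustrative):

```scala
import akka.actor.{ ActorSystem, Props }
import akka.cluster.metrics.{ AdaptiveLoadBalancingPool, SystemLoadAverageMetricsSelector }
import akka.cluster.routing.{ ClusterRouterPool, ClusterRouterPoolSettings }

val system = ActorSystem("ClusterSystem")

val backend = system.actorOf(
  ClusterRouterPool(
    AdaptiveLoadBalancingPool(SystemLoadAverageMetricsSelector),
    ClusterRouterPoolSettings(
      totalInstances = 100,
      maxInstancesPerNode = 3,
      allowLocalRoutees = false,
      useRole = None)
  ).props(Props[FactorialBackend]), // FactorialBackend is a hypothetical worker actor
  name = "factorialBackendRouter")
```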

View file

@ -19,7 +19,7 @@ the sender to know the location of the destination actor. This is achieved by se
the messages via a `ShardRegion` actor provided by this extension, which knows how
to route the message with the entity id to the final destination.
Cluster sharding will not be active on members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up-scala)
Cluster sharding will not be active on members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up)
if that feature is enabled.
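As a rough sketch of how a shard region is typically started and how messages carry the entity id (the envelope type, shard count, and entity actor are illustrative):

```scala
import akka.actor.{ ActorRef, ActorSystem, Props }
import akka.cluster.sharding.{ ClusterSharding, ClusterShardingSettings, ShardRegion }

final case class EntityEnvelope(id: Long, payload: Any) // illustrative message type

val extractEntityId: ShardRegion.ExtractEntityId = {
  case EntityEnvelope(id, payload) => (id.toString, payload)
}
val extractShardId: ShardRegion.ExtractShardId = {
  case EntityEnvelope(id, _) => (id % 100).toString // 100 shards
}

val system = ActorSystem("ClusterSystem")
val region: ActorRef = ClusterSharding(system).start(
  typeName = "Counter",
  entityProps = Props[Counter], // Counter is a hypothetical entity actor
  settings = ClusterShardingSettings(system),
  extractEntityId = extractEntityId,
  extractShardId = extractShardId)

// messages are sent to the region, which routes them by entity id
region ! EntityEnvelope(42L, "increment")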
@@@ warning
@ -27,7 +27,7 @@ if that feature is enabled.
**Don't use Cluster Sharding together with Automatic Downing**,
since it allows the cluster to split up into two separate clusters, which in turn will result
in *multiple shards and entities* being started, one in each separate cluster!
See @ref:[Downing](../java/cluster-usage.md#automatic-vs-manual-downing-java).
See @ref:[Downing](cluster-usage.md#automatic-vs-manual-downing).
@@@
@ -169,7 +169,7 @@ must be to begin the rebalancing. This strategy can be replaced by an applicatio
implementation.
The state of shard locations in the `ShardCoordinator` is persistent (durable) with
@ref:[distributed_data_scala](distributed-data.md) or @ref:[Persistence](persistence.md) to survive failures. When a crashed or
@ref:[Distributed Data](distributed-data.md) or @ref:[Persistence](persistence.md) to survive failures. When a crashed or
unreachable coordinator node has been removed (via down) from the cluster a new `ShardCoordinator` singleton
actor will take over and the state is recovered. During such a failure period shards
with known location are still available, while messages for new (unknown) shards
@ -186,11 +186,11 @@ unused shards due to the round-trip to the coordinator. Rebalancing of shards ma
also add latency. This should be considered when designing the application specific
shard resolution, e.g. to avoid too fine grained shards.
<a id="cluster-sharding-mode-scala"></a>
<a id="cluster-sharding-mode"></a>
## Distributed Data vs. Persistence Mode
The state of the coordinator and the state of [cluster_sharding_remembering_scala](#cluster-sharding-remembering-scala) of the shards
are persistent (durable) to survive failures. @ref:[distributed_data_scala](distributed-data.md) or @ref:[Persistence](persistence.md)
The state of the coordinator and the state of [Remembering Entities](#cluster-sharding-remembering) of the shards
are persistent (durable) to survive failures. @ref:[Distributed Data](distributed-data.md) or @ref:[Persistence](persistence.md)
can be used for the storage. Distributed Data is used by default.
The functionality when using the two modes is the same. If your sharded entities are not using Akka Persistence
@ -210,11 +210,11 @@ akka.cluster.sharding.state-store-mode = ddata
```
The state of the `ShardCoordinator` will be replicated inside a cluster by the
@ref:[distributed_data_scala](distributed-data.md) module with `WriteMajority`/`ReadMajority` consistency.
@ref:[Distributed Data](distributed-data.md) module with `WriteMajority`/`ReadMajority` consistency.
The state of the coordinator is not durable; it is not stored to disk. When all nodes in
the cluster have been stopped the state is lost and not needed any more.
The state of [cluster_sharding_remembering_scala](#cluster-sharding-remembering-scala) is also durable, i.e. it is stored to
The state of [Remembering Entities](#cluster-sharding-remembering) is also durable, i.e. it is stored to
disk. The stored entities are started also after a complete cluster restart.
Cluster Sharding is using its own Distributed Data `Replicator` per node role. In this way you can use a subset of
@ -244,7 +244,7 @@ until at least that number of regions have been started and registered to the co
avoids that many shards are allocated to the first region that registers and only later are
rebalanced to other nodes.
See @ref:[min-members_scala](cluster-usage.md#min-members-scala) for more information about `min-nr-of-members`.
See @ref:[How To Startup when Cluster Size Reached](cluster-usage.md#min-members) for more information about `min-nr-of-members`.
## Proxy Only Mode
@ -266,7 +266,7 @@ then supposed to stop itself. Incoming messages will be buffered by the `Shard`
between reception of `Passivate` and termination of the entity. Such buffered messages
are thereafter delivered to a new incarnation of the entity.
<a id="cluster-sharding-remembering-scala"></a>
<a id="cluster-sharding-remembering"></a>
## Remembering Entities
The list of entities in each `Shard` can be made persistent (durable) by setting
@ -278,8 +278,8 @@ a `Passivate` message must be sent to the parent of the entity actor, otherwise
entity will be automatically restarted after the entity restart backoff specified in
the configuration.
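In recent Akka versions the flag in question is `remember-entities`; as a configuration sketch (verify the exact key against the reference configuration):

```
akka.cluster.sharding.remember-entities = on
```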
When [Distributed Data mode](#cluster-sharding-mode-scala) is used the identifiers of the entities are
stored in @ref:[ddata_durable_scala](distributed-data.md#ddata-durable-scala) of Distributed Data. You may want to change the
When [Distributed Data mode](#cluster-sharding-mode) is used the identifiers of the entities are
stored in @ref:[Durable Storage](distributed-data.md#ddata-durable) of Distributed Data. You may want to change the
configuration of the `akka.cluster.sharding.distributed-data.durable.lmdb.dir`, since
the default directory contains the remote port of the actor system. If using a dynamically
assigned port (0) it will be different each time and the previously stored data will not
@ -318,10 +318,10 @@ You can send the message `ShardRegion.GracefulShutdown` message to the `ShardReg
During this period other regions will buffer messages for those shards in the same way as when a rebalance is
triggered by the coordinator. When the shards have been stopped the coordinator will allocate these shards elsewhere.
This is performed automatically by the @ref:[Coordinated Shutdown](actors.md#coordinated-shutdown-scala) and is therefore part of the
This is performed automatically by the @ref:[Coordinated Shutdown](actors.md#coordinated-shutdown) and is therefore part of the
graceful leaving process of a cluster member.
<a id="removeinternalclustershardingdata-scala"></a>
<a id="removeinternalclustershardingdata"></a>
## Removal of Internal Cluster Sharding Data
The Cluster Sharding coordinator stores the locations of the shards using Akka Persistence.
@ -348,7 +348,7 @@ and there was a network partition.
**Don't use Cluster Sharding together with Automatic Downing**,
since it allows the cluster to split up into two separate clusters, which in turn will result
in *multiple shards and entities* being started, one in each separate cluster!
See @ref:[Downing](cluster-usage.md#automatic-vs-manual-downing-scala).
See @ref:[Downing](cluster-usage.md#automatic-vs-manual-downing).
@@@

View file

@ -50,7 +50,7 @@ It's worth noting that messages can always be lost because of the distributed na
As always, additional logic should be implemented in the singleton (acknowledgement) and in the
client (retry) actors to ensure at-least-once message delivery.
The singleton instance will not run on members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up-scala).
The singleton instance will not run on members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up).
## Potential problems to be aware of
@ -60,7 +60,7 @@ This pattern may seem to be very tempting to use at first, but it has several dr
* you cannot rely on the cluster singleton to be *non-stop* available — e.g. when the node on which the singleton has
been running dies, it will take a few seconds for this to be noticed and the singleton be migrated to another node,
* in the case of a *network partition* appearing in a Cluster that is using Automatic Downing (see Auto Downing docs for
@ref:[Downing](cluster-usage.md#automatic-vs-manual-downing-scala)),
@ref:[Downing](cluster-usage.md#automatic-vs-manual-downing)),
it may happen that the isolated clusters each decide to spin up their own singleton, meaning that there might be multiple
singletons running in the system, yet the Clusters have no way of finding out about them (because of the partition).

View file

@ -1,6 +1,6 @@
# Cluster Usage
For introduction to the Akka Cluster concepts please see <!-- FIXME: More than one link target with name cluster in path Some(/scala/cluster-usage.rst) --> cluster.
For introduction to the Akka Cluster concepts please see @ref:[Cluster Specification](common/cluster.md).
## Preparing Your Project for Clustering
@ -89,7 +89,7 @@ it sends a message to all seed nodes and then sends join command to the one that
answers first. If none of the seed nodes replies (they might not be started yet)
it retries this procedure until successful or shut down.
You define the seed nodes in the [cluster_configuration_scala](#cluster-configuration-scala) file (application.conf):
You define the seed nodes in the [configuration](#cluster-configuration) file (application.conf):
```
akka.cluster.seed-nodes = [
@ -120,7 +120,7 @@ seed nodes in the existing cluster.
If you don't configure seed nodes you need to join the cluster programmatically or manually.
Manual joining can be performed by using [cluster_jmx_scala](#cluster-jmx-scala) or [cluster_http_scala](#cluster-http-scala).
Manual joining can be performed by using [JMX](#cluster-jmx) or [HTTP](#cluster-http).
Joining programmatically can be performed with `Cluster(system).join`. Unsuccessful join attempts are
automatically retried after the time period defined in configuration property `retry-unsuccessful-join-after`.
Retries can be disabled by setting the property to `off`.
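A minimal sketch of a programmatic join (address values are illustrative):

```scala
import akka.actor.{ ActorSystem, Address }
import akka.cluster.Cluster

val system = ActorSystem("ClusterSystem")
val cluster = Cluster(system)

// join the cluster reachable at the given node
cluster.join(Address("akka.tcp", "ClusterSystem", "host1", 2552))
```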
@ -156,7 +156,7 @@ when you start the `ActorSystem`.
@@@
<a id="automatic-vs-manual-downing-scala"></a>
<a id="automatic-vs-manual-downing"></a>
## Downing
When a member is considered by the failure detector to be unreachable the
@ -164,7 +164,7 @@ leader is not allowed to perform its duties, such as changing status of
new joining members to 'Up'. The node must first become reachable again, or the
status of the unreachable member must be changed to 'Down'. Changing status to 'Down'
can be performed automatically or manually. By default it must be done manually, using
[cluster_jmx_scala](#cluster-jmx-scala) or [cluster_http_scala](#cluster-http-scala).
[JMX](#cluster-jmx) or [HTTP](#cluster-http).
It can also be performed programmatically with `Cluster(system).down(address)`.
@ -197,7 +197,7 @@ can also happen because of long GC pauses or system overload.
We recommend against using the auto-down feature of Akka Cluster in production.
This is crucial for correct behavior if you use @ref:[Cluster Singleton](cluster-singleton.md) or
@ref:[cluster_sharding_scala](cluster-sharding.md), especially together with Akka @ref:[Persistence](persistence.md).
@ref:[Cluster Sharding](cluster-sharding.md), especially together with Akka @ref:[Persistence](persistence.md).
For Akka Persistence with Cluster Sharding it can result in corrupt data in case
of network partitions.
@ -212,7 +212,7 @@ as unreachable and removed after the automatic or manual downing as described
above.
A more graceful exit can be performed if you tell the cluster that a node shall leave.
This can be performed using [cluster_jmx_scala](#cluster-jmx-scala) or [cluster_http_scala](#cluster-http-scala).
This can be performed using [JMX](#cluster-jmx) or [HTTP](#cluster-http).
It can also be performed programmatically with:
@@snip [ClusterDocSpec.scala]($code$/scala/docs/cluster/ClusterDocSpec.scala) { #leave }
@ -220,7 +220,7 @@ It can also be performed programmatically with:
Note that this command can be issued to any member in the cluster, not necessarily the
one that is leaving.
The @ref:[Coordinated Shutdown](actors.md#coordinated-shutdown-scala) will automatically run when the cluster node sees itself as
The @ref:[Coordinated Shutdown](actors.md#coordinated-shutdown) will automatically run when the cluster node sees itself as
`Exiting`, i.e. leaving from another node will trigger the shutdown process on the leaving node.
Tasks for graceful leaving of cluster including graceful shutdown of Cluster Singletons and
Cluster Sharding are added automatically when Akka Cluster is used, i.e. running the shutdown
@ -229,7 +229,7 @@ process will also trigger the graceful leaving if it's not already in progress.
Normally this is handled automatically, but in case of network failures during this process it might still
be necessary to set the node's status to `Down` in order to complete the removal.
<a id="weakly-up-scala"></a>
<a id="weakly-up"></a>
## WeaklyUp Members
If a node is `unreachable` then gossip convergence is not possible and therefore any
@ -251,7 +251,7 @@ in this state, but you should be aware of that members on the other side of a ne
have no knowledge about the existence of the new members. You should for example not count
`WeaklyUp` members in quorum decisions.
<a id="cluster-subscriber-scala"></a>
<a id="cluster-subscriber"></a>
## Subscribe to Cluster Events
You can subscribe to change notifications of the cluster membership by using
@ -346,7 +346,7 @@ and it is typically defined in the start script as a system property or environm
The roles of the nodes is part of the membership information in `MemberEvent` that you can subscribe to.
<a id="min-members-scala"></a>
<a id="min-members"></a>
## How To Startup when Cluster Size Reached
A common use case is to start actors after the cluster has been initialized,
@ -382,7 +382,7 @@ This callback can be used for other things than starting actors.
You can do some cleanup in a `registerOnMemberRemoved` callback, which will
be invoked when the current member status is changed to 'Removed' or the cluster has been shut down.
An alternative is to register tasks to the @ref:[Coordinated Shutdown](actors.md#coordinated-shutdown-scala).
An alternative is to register tasks to the @ref:[Coordinated Shutdown](actors.md#coordinated-shutdown).
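A minimal sketch of both callbacks (the `MainActor` class is a hypothetical placeholder):

```scala
import akka.actor.{ ActorSystem, Props }
import akka.cluster.Cluster

val system = ActorSystem("ClusterSystem")
val cluster = Cluster(system)

cluster.registerOnMemberUp {
  // start application actors only once this node is a full member
  system.actorOf(Props[MainActor], "main")
}

cluster.registerOnMemberRemoved {
  // invoked when this member is Removed or the cluster has been shut down
  system.terminate()
}
```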
@@@ note
@ -408,7 +408,7 @@ Distributes actors across several nodes in the cluster and supports interaction
with the actors using their logical identifier, but without having to care about
their physical location in the cluster.
See @ref:[cluster_sharding_scala](cluster-sharding.md)
See @ref:[Cluster Sharding](cluster-sharding.md)
## Distributed Publish Subscribe
@ -431,7 +431,7 @@ See @ref:[Cluster Client](cluster-client.md).
*Akka Distributed Data* is useful when you need to share data between nodes in an
Akka Cluster. The data is accessed with an actor providing a key-value store like API.
See @ref:[distributed_data_scala](distributed-data.md).
See @ref:[Distributed Data](distributed-data.md).
## Failure Detector
@ -469,7 +469,7 @@ phi = -log10(1 - F(timeSinceLastHeartbeat))
where F is the cumulative distribution function of a normal distribution with mean
and standard deviation estimated from historical heartbeat inter-arrival times.
In the [cluster_configuration_scala](#cluster-configuration-scala) you can adjust the `akka.cluster.failure-detector.threshold`
In the [configuration](#cluster-configuration) you can adjust the `akka.cluster.failure-detector.threshold`
to define when a *phi* value is considered to be a failure.
A low `threshold` is prone to generate many false positives but ensures
@ -495,7 +495,7 @@ a standard deviation of 100 ms.
To be able to survive sudden abnormalities, such as garbage collection pauses and
transient network failures the failure detector is configured with a margin,
`akka.cluster.failure-detector.acceptable-heartbeat-pause`. You may want to
adjust the [cluster_configuration_scala](#cluster-configuration-scala) of this depending on you environment.
adjust the [configuration](#cluster-configuration) of this depending on your environment.
This is how the curve looks for `acceptable-heartbeat-pause` configured to
3 seconds.
@ -507,9 +507,9 @@ actor. Death watch generates the `Terminated` message to the watching actor when
unreachable cluster node has been downed and removed.
If you encounter suspicious false positives when the system is under load you should
define a separate dispatcher for the cluster actors as described in [cluster_dispatcher_scala](#cluster-dispatcher-scala).
define a separate dispatcher for the cluster actors as described in [Cluster Dispatcher](#cluster-dispatcher).
<a id="cluster-aware-routers-scala"></a>
<a id="cluster-aware-routers"></a>
## Cluster Aware Routers
All @ref:[routers](routing.md) can be made aware of member nodes in the cluster, i.e.
@ -519,7 +519,7 @@ automatically unregistered from the router. When new nodes join the cluster, add
routees are added to the router, according to the configuration. Routees are also added
when a node becomes reachable again, after having been unreachable.
Cluster aware routers make use of members with status [WeaklyUp](#weakly-up-scala) if that feature
Cluster aware routers make use of members with status [WeaklyUp](#weakly-up) if that feature
is enabled.
There are two distinct types of routers.
@ -564,7 +564,7 @@ the router will try to use them as soon as the member status is changed to 'Up'.
The actor paths without address information that are defined in `routees.paths` are used for selecting the
actors to which the messages will be forwarded by the router.
Messages will be forwarded to the routees using @ref:[ActorSelection](actors.md#actorselection-scala), so the same delivery semantics should be expected.
Messages will be forwarded to the routees using @ref:[ActorSelection](actors.md#actorselection), so the same delivery semantics should be expected.
It is possible to limit the lookup of routees to member nodes tagged with a certain role by specifying `use-role`.
`max-total-nr-of-instances` defines total number of routees in the cluster. By default `max-total-nr-of-instances`
@ -575,7 +575,7 @@ The same type of router could also have been defined in code:
@@snip [StatsService.scala]($akka$/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsService.scala) { #router-lookup-in-code }
See [cluster_configuration_scala](#cluster-configuration-scala) section for further descriptions of the settings.
See [configuration](#cluster-configuration) section for further descriptions of the settings.
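For reference, a configuration-based definition of such a group router might look like this (actor names and the role value are illustrative):

```
akka.actor.deployment {
  /statsService/workerRouter {
    router = consistent-hashing-group
    routees.paths = ["/user/statsWorker"]
    cluster {
      enabled = on
      allow-local-routees = on
      use-role = compute
    }
  }
}
```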
### Router Example with Group of Routees
@ -658,7 +658,7 @@ The same type of router could also have been defined in code:
@@snip [StatsService.scala]($akka$/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsService.scala) { #router-deploy-in-code }
See [cluster_configuration_scala](#cluster-configuration-scala) section for further descriptions of the settings.
See [configuration](#cluster-configuration) section for further descriptions of the settings.
### Router Example with Pool of Remote Deployed Routees
@ -717,13 +717,13 @@ and to the registered subscribers on the system event bus with the help of `clus
## How to Test
@ref:[Multi Node Testing](../scala/multi-node-testing.md) is useful for testing cluster applications.
@ref:[Multi Node Testing](multi-node-testing.md) is useful for testing cluster applications.
Set up your project according to the instructions in @ref:[Multi Node Testing](../scala/multi-node-testing.md) and @ref:[Multi JVM Testing](../scala/multi-jvm-testing.md), i.e.
Set up your project according to the instructions in @ref:[Multi Node Testing](multi-node-testing.md) and @ref:[Multi JVM Testing](multi-jvm-testing.md), i.e.
add the `sbt-multi-jvm` plugin and the dependency to `akka-multi-node-testkit`.
First, as described in @ref:[Multi Node Testing](../scala/multi-node-testing.md), we need some scaffolding to configure the `MultiNodeSpec`.
Define the participating roles and their [cluster_configuration_scala](#cluster-configuration-scala) in an object extending `MultiNodeConfig`:
First, as described in @ref:[Multi Node Testing](multi-node-testing.md), we need some scaffolding to configure the `MultiNodeSpec`.
Define the participating roles and their [configuration](#cluster-configuration) in an object extending `MultiNodeConfig`:
@@snip [StatsSampleSpec.scala]($akka$/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsSampleSpec.scala) { #MultiNodeConfig }
@ -751,7 +751,7 @@ From the test you interact with the cluster using the `Cluster` extension, e.g.
@@snip [StatsSampleSpec.scala]($akka$/akka-cluster-metrics/src/multi-jvm/scala/akka/cluster/metrics/sample/StatsSampleSpec.scala) { #join }
Notice how the *testActor* from @ref:[testkit](testing.md) is added as [subscriber](#cluster-subscriber-scala)
Notice how the *testActor* from @ref:[testkit](testing.md) is added as [subscriber](#cluster-subscriber)
to cluster changes and then waiting for certain events, such as in this case all members becoming 'Up'.
The above code was running for all roles (JVMs). `runOn` is a convenient utility to declare that a certain block
@ -769,13 +769,13 @@ the actor system for a specific role. This can also be used to grab the `akka.ac
## Management
<a id="cluster-http-scala"></a>
<a id="cluster-http"></a>
### HTTP
Information and management of the cluster is available with an HTTP API.
See documentation of [akka/akka-cluster-management](https://github.com/akka/akka-cluster-management).
See documentation of [Akka Management](http://developer.lightbend.com/docs/akka-management/current/).
<a id="cluster-jmx-scala"></a>
<a id="cluster-jmx"></a>
### JMX
Information and management of the cluster is available as JMX MBeans with the root name `akka.Cluster`.
@ -792,18 +792,18 @@ From JMX you can:
Member nodes are identified by their address, in the format *akka.<protocol>://<actor-system-name>@<hostname>:<port>*.
<a id="cluster-command-line-scala"></a>
<a id="cluster-command-line"></a>
### Command Line
@@@ warning
**Deprecation warning** - The command line script has been deprecated and is scheduled for removal
in the next major version. Use the [cluster_http_scala](#cluster-http-scala) API with [curl](https://curl.haxx.se/)
in the next major version. Use the [HTTP management](#cluster-http) API with [curl](https://curl.haxx.se/)
or similar instead.
@@@
The cluster can be managed with the script `akka-cluster` provided in the Akka github repository here: @[github@/akka-cluster/jmx-client](mailto:github@/akka-cluster/jmx-client). Place the script and the `jmxsh-R5.jar` library in the same directory.
The cluster can be managed with the script `akka-cluster` provided in the Akka github repository here: [@github@/akka-cluster/jmx-client](@github@/akka-cluster/jmx-client). Place the script and the `jmxsh-R5.jar` library in the same directory.
Run it without parameters to see instructions about how to use the script:
@ -835,11 +835,11 @@ To be able to use the script you must enable remote monitoring and management wh
as described in [Monitoring and Management Using JMX Technology](http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html).
Make sure you understand the security implications of enabling remote monitoring and management.
<a id="cluster-configuration-scala"></a>
<a id="cluster-configuration"></a>
## Configuration
There are several configuration properties for the cluster. We refer to the
@ref:[reference configuration](../scala/general/configuration.md#config-akka-cluster) for more information.
@ref:[reference configuration](general/configuration.md#config-akka-cluster) for more information.
### Cluster Info Logging
@ -849,7 +849,7 @@ You can silence the logging of cluster events at info level with configuration p
akka.cluster.log-info = off
```
<a id="cluster-dispatcher-scala"></a>
<a id="cluster-dispatcher"></a>
### Cluster Dispatcher
Under the hood the cluster extension is implemented with actors and it can be necessary

View file

@ -12,7 +12,7 @@ This means that the new JARs are a drop-in replacement for the old one
Binary compatibility is maintained between:
* **minor** and **patch** versions - please note that the meaning of "minor" has shifted to be more restrictive with Akka `2.4.0`, read [24versioningChange](#24versioningchange) for details.
* **minor** and **patch** versions - please note that the meaning of "minor" has shifted to be more restrictive with Akka `2.4.0`, read [Change in versioning scheme](#24versioningchange) for details.
Binary compatibility is **NOT** maintained between:
@ -20,7 +20,7 @@ Binary compatibility is **NOT** maintained between:
* any versions of **may change** modules read @ref:[Modules marked "May Change"](may-change.md) for details
* a few notable exclusions explained below
Specific examples (please read [24versioningChange](#24versioningchange) to understand the difference in "before 2.4 era" and "after 2.4 era"):
Specific examples (please read [Change in versioning scheme](#24versioningchange) to understand the difference in "before 2.4 era" and "after 2.4 era"):
```
# [epoch.major.minor] era
@ -44,12 +44,7 @@ Some modules are excluded from the binary compatibility guarantees, such as:
>
* `*-testkit` modules - since these are to be used only in tests, which usually are re-compiled and run on demand
*
`*-tck`
modules - since they may want to add new tests (or force configuring something), in order to discover possible
: failures in an existing implementation that the TCK is supposed to be testing.
Compatibility here is not *guaranteed*, however it is attempted to make the upgrade prosess as smooth as possible.
* `*-tck` modules - since they may want to add new tests (or force configuring something), in order to discover possible failures in an existing implementation that the TCK is supposed to be testing. Compatibility here is not *guaranteed*; however, the aim is to make the upgrade process as smooth as possible.
* all @ref:[may change](may-change.md) modules - which by definition are subject to rapid iteration and change. Read more about that in @ref:[Modules marked "May Change"](may-change.md)
<a id="24versioningchange"></a>

View file

@ -103,13 +103,6 @@ example invoking a synchronous-only API.
@@@
@@@ note
There is also a `CircuitBreakerProxy` actor that you can use, which is an alternative implementation of the pattern.
The main difference is that it is intended to be used only for request-reply interactions with another actor. See <!-- FIXME: unresolved link reference: circuit-breaker-proxy --> circuit-breaker-proxy
@@@
### Control failure count explicitly
By default, the circuit breaker treats any `Exception` as a failure in the synchronous API, and a failed `Future` as a failure in the future-based API.
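A minimal sketch of overriding that default with an explicit failure definition (the `compute` function and thresholds are illustrative):

```scala
import akka.actor.ActorSystem
import akka.pattern.CircuitBreaker
import scala.concurrent.Future
import scala.concurrent.duration._
import scala.util.{ Failure, Success, Try }

val system = ActorSystem("demo")
import system.dispatcher

val breaker = new CircuitBreaker(
  system.scheduler,
  maxFailures = 5,
  callTimeout = 10.seconds,
  resetTimeout = 1.minute)

// count negative results as failures too, not only exceptions
val defineFailure: Try[Int] => Boolean = {
  case Success(n) => n < 0
  case Failure(_) => true
}

def compute(): Int = 42 // stand-in for the real, possibly failing call

val guarded: Future[Int] = breaker.withCircuitBreaker(Future(compute()), defineFailure)
```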

View file

@ -260,55 +260,35 @@ members in quorum decisions.
#### State Diagram for the Member States (`akka.cluster.allow-weakly-up-members=off`)
![member-states.png](../images/member-states.png)
![member-states.png](../../images/member-states.png)
#### State Diagram for the Member States (`akka.cluster.allow-weakly-up-members=on`)
![member-states-weakly-up.png](../images/member-states-weakly-up.png)
![member-states-weakly-up.png](../../images/member-states-weakly-up.png)
#### Member States
*
**joining**
: transient state when joining a cluster
* **joining** - transient state when joining a cluster
*
**weakly up**
: transient state while network split (only if `akka.cluster.allow-weakly-up-members=on`)
* **weakly up** - transient state while network split (only if `akka.cluster.allow-weakly-up-members=on`)
*
**up**
: normal operating state
* **up** - normal operating state
*
**leaving**
/
**exiting**
: states during graceful removal
* **leaving** / **exiting** - states during graceful removal
*
**down**
: marked as down (no longer part of cluster decisions)
* **down** - marked as down (no longer part of cluster decisions)
*
**removed**
: tombstone state (no longer a member)
* **removed** - tombstone state (no longer a member)
#### User Actions
*
**join**
: join a single node to a cluster - can be explicit or automatic on
* **join** - join a single node to a cluster - can be explicit or automatic on
startup if a node to join have been specified in the configuration
*
**leave**
: tell a node to leave the cluster gracefully
* **leave** - tell a node to leave the cluster gracefully
*
**down**
: mark a node as down
* **down** - mark a node as down
#### Leader Actions
@ -321,15 +301,8 @@ The `leader` has the following duties:
#### Failure Detection and Unreachability
*
fd*
: the failure detector of one of the monitoring nodes has triggered
* **fd*** - the failure detector of one of the monitoring nodes has triggered
causing the monitored node to be marked as unreachable
*
unreachable*
: unreachable is not a real member states but more of a flag in addition
to the state signaling that the cluster is unable to talk to this node,
after being unreachable the failure detector may detect it as reachable
again and thereby remove the flag
* **unreachable*** - unreachable is not a real member states but more of a flag in addition to the state signaling that the cluster is unable to talk to this node, after being unreachable the failure detector may detect it as reachable again and thereby remove the flag

View file

@ -13,7 +13,7 @@ dispatchers in this ActorSystem. If no ExecutionContext is given, it will fallba
`akka.actor.default-dispatcher.default-executor.fallback`. By default this is a "fork-join-executor", which
gives excellent performance in most cases.
<a id="dispatcher-lookup-scala"></a>
<a id="dispatcher-lookup"></a>
## Looking up a Dispatcher
Dispatchers implement the `ExecutionContext` interface and can thus be used to run `Future` invocations etc.
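A minimal sketch of looking one up and using it as an `ExecutionContext` (the dispatcher name is illustrative and must exist in your configuration):

```scala
import akka.actor.ActorSystem
import scala.concurrent.{ ExecutionContext, Future }

val system = ActorSystem("demo")

implicit val ec: ExecutionContext = system.dispatchers.lookup("my-dispatcher")

Future {
  println("runs on my-dispatcher, not on the default dispatcher")
}
```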
@ -48,7 +48,7 @@ You can read more about it in the JDK's [ThreadPoolExecutor documentation](https
@@@
For more options, see the default-dispatcher section of the <!-- FIXME: More than one link target with name configuration in path Some(/scala/dispatchers.rst) --> configuration.
For more options, see the default-dispatcher section of the @ref:[configuration](general/configuration.md).
Then you create the actor as usual and define the dispatcher in the deployment configuration.
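As a configuration sketch, with the dispatcher and actor names illustrative:

```
my-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    parallelism-max = 10
  }
}

akka.actor.deployment {
  /myactor {
    dispatcher = my-dispatcher
  }
}
```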

View file

@ -29,12 +29,12 @@ with a specific role. It communicates with other `Replicator` instances with the
actor using the `Replicator.props`. If it is started as an ordinary actor it is important
that it is given the same name, started on the same path, on all nodes.
Cluster members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up-scala),
Cluster members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up),
will participate in Distributed Data. This means that the data will be replicated to the
@ref:[WeaklyUp](cluster-usage.md#weakly-up-scala) nodes with the background gossip protocol. Note that it
@ref:[WeaklyUp](cluster-usage.md#weakly-up) nodes with the background gossip protocol. Note that it
will not participate in any actions where the consistency mode is to read/write from all
nodes or the majority of nodes. The @ref:[WeaklyUp](cluster-usage.md#weakly-up-scala) node is not counted
as part of the cluster. So 3 nodes + 5 @ref:[WeaklyUp](cluster-usage.md#weakly-up-scala) is essentially a
nodes or the majority of nodes. The @ref:[WeaklyUp](cluster-usage.md#weakly-up) node is not counted
as part of the cluster. So 3 nodes + 5 @ref:[WeaklyUp](cluster-usage.md#weakly-up) is essentially a
3 node cluster as far as consistent actions are concerned.
Below is an example of an actor that schedules tick messages to itself and for each tick
@ -43,7 +43,7 @@ changes of this.
@@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #data-bot }
<a id="replicator-update-scala"></a>
<a id="replicator-update"></a>
### Update
To modify and replicate a data value you send a `Replicator.Update` message to the local
@ -76,16 +76,10 @@ at least **N/2 + 1** replicas, where N is the number of nodes in the cluster
* `WriteAll` the value will immediately be written to all nodes in the cluster
(or all nodes in the cluster role group)
When you specify to write to `n` out of `x` nodes, the update will first replicate to `n` nodes.
If there are not enough Acks after 1/5th of the timeout, the update will be replicated to `n` other
nodes. If there are fewer than `n` nodes left all of the remaining nodes are used. Reachable nodes
are preferred over unreachable nodes.
Note that `WriteMajority` has a `minCap` parameter that is useful to specify to achieve better safety for small clusters.
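For illustration, a minimal sketch of an `Update` that writes to a majority of nodes, assuming an `ActorSystem` in scope as `system` and cluster membership (the key name is invented for the example):

```
import akka.cluster.Cluster
import akka.cluster.ddata._
import akka.cluster.ddata.Replicator._
import scala.concurrent.duration._

implicit val node = Cluster(system)
val replicator = DistributedData(system).replicator
val CounterKey = PNCounterKey("my-counter")

// replicate the increment to at least N/2 + 1 nodes within the timeout
replicator ! Update(CounterKey, PNCounter(), WriteMajority(timeout = 3.seconds))(_ + 1)
```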
@ -113,7 +107,7 @@ or maintain local correlation data structures.
@@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #update-request-context }
<a id="replicator-get-scala"></a>
<a id="replicator-get"></a>
### Get
To retrieve the current value of a data you send `Replicator.Get` message to the
@ -155,7 +149,7 @@ to after receiving and transforming `GetSuccess`.
### Consistency
The consistency level that is supplied in the [replicator_update_scala](#replicator-update-scala) and [replicator_get_scala](#replicator-get-scala)
The consistency level that is supplied in the [Update](#replicator-update) and [Get](#replicator-get)
specifies per request how many replicas that must respond successfully to a write and read request.
For low latency reads you use `ReadLocal` with the risk of retrieving stale data, i.e. updates
@ -277,7 +271,7 @@ types that support both updates and removals, for example `ORMap` or `ORSet`.
@@@
<a id="delta-crdt-scala"></a>
<a id="delta-crdt"></a>
### delta-CRDT
[Delta State Replicated Data Types](http://arxiv.org/abs/1603.01529)
@ -333,7 +327,7 @@ The value of the counter is the value of the P counter minus the value of the N
@@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #pncounter }
`GCounter` and `PNCounter` have support for [delta_crdt_scala](#delta-crdt-scala) and don't need causal
`GCounter` and `PNCounter` have support for [delta-CRDT](#delta-crdt) and don't need causal
delivery of deltas.
Several related counters can be managed in a map with the `PNCounterMap` data type.
@ -351,7 +345,7 @@ Merge is simply the union of the two sets.
@@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #gset }
`GSet` has support for [delta_crdt_scala](#delta-crdt-scala) and it doesn't require causal delivery of deltas.
`GSet` has support for [delta-CRDT](#delta-crdt) and it doesn't require causal delivery of deltas.
If you need add and remove operations you should use the `ORSet` (observed-remove set).
Elements can be added and removed any number of times. If an element is concurrently added and
@ -364,7 +358,7 @@ track causality of the operations and resolve concurrent updates.
@@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #orset }
`ORSet` has support for [delta_crdt_scala](#delta-crdt-scala) and it requires causal delivery of deltas.
`ORSet` has support for [delta-CRDT](#delta-crdt) and it requires causal delivery of deltas.
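A minimal sketch of `ORSet` usage, assuming an implicit `Cluster` node in scope (required by the add/remove operations) and an `ActorSystem` named `system`:

```
import akka.cluster.Cluster
import akka.cluster.ddata.ORSet

implicit val node = Cluster(system)

val s0 = ORSet.empty[String]
val s1 = s0 + "a" + "b"  // elements can be added...
val s2 = s1 - "a"        // ...and removed any number of times

// merge tracks causality of the operations, so the causally later
// removal of "a" wins in the merged result
val merged = s1.merge(s2)
println(merged.elements) // Set("b")
```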
### Maps
@ -502,7 +496,7 @@ look like for the `TwoPhaseSet`:
@@snip [TwoPhaseSetSerializer2.scala]($code$/scala/docs/ddata/protobuf/TwoPhaseSetSerializer2.scala) { #serializer }
<a id="ddata-durable-scala"></a>
<a id="ddata-durable"></a>
### Durable Storage
By default the data is only kept in memory. It is redundant since it is replicated to other nodes
@ -565,7 +559,7 @@ Note that you should be prepared to receive `WriteFailure` as reply to an `Updat
durable entry if the data could not be stored for some reason. When enabling `write-behind-interval`
such errors will only be logged and `UpdateSuccess` will still be the reply to the `Update`.
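A configuration sketch for making entries durable (the directory is illustrative; `write-behind-interval` trades the safety described above for throughput):

```
akka.cluster.distributed-data.durable {
  keys = ["*"]
  lmdb.dir = "ddata"
  lmdb.write-behind-interval = 200 ms
}
```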
There is one important caveat when it comes pruning of [crdt_garbage_scala](#crdt-garbage-scala) for durable data.
There is one important caveat when it comes to pruning of [CRDT Garbage](#crdt-garbage) for durable data.
If an old data entry that was never pruned is injected and merged with existing data after
the pruning markers have been removed, the value will not be correct. The time-to-live
of the markers is defined by configuration
@ -575,7 +569,7 @@ This would be possible if a node with durable data didn't participate in the pru
be stopped for a longer time than this duration and if it is joining again after this
duration its data should first be manually removed (from the lmdb directory).
<a id="crdt-garbage-scala"></a>
<a id="crdt-garbage"></a>
### CRDT Garbage
One thing that can be problematic with CRDTs is that some data types accumulate history (garbage).
@ -614,7 +608,7 @@ be able to improve this if needed, but the design is still not intended for bill
All data is held in memory, which is another reason why it is not intended for *Big Data*.
When a data entry is changed the full state of that entry may be replicated to other nodes
if it doesn't support [delta_crdt_scala](#delta-crdt-scala). The full state is also replicated for delta-CRDTs,
if it doesn't support [delta-CRDT](#delta-crdt). The full state is also replicated for delta-CRDTs,
for example when new nodes are added to the cluster or when deltas could not be propagated because
of network partitions or similar problems. This means that you cannot have too large
data entries, because then the remote message size will be too large.

View file

@ -19,7 +19,7 @@ a few seconds. Changes are only performed in the own part of the registry and th
changes are versioned. Deltas are disseminated in a scalable way to other nodes with
a gossip protocol.
Cluster members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up-scala),
Cluster members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up),
will participate in Distributed Publish Subscribe, i.e. subscribers on nodes with
`WeaklyUp` status will receive published messages if the publisher and subscriber are on
same side of a network partition.
@ -28,12 +28,12 @@ You can send messages via the mediator on any node to registered actors on
any other node.
There are two different modes of message delivery, explained in the sections
[Publish](#distributed-pub-sub-publish-scala) and [Send](#distributed-pub-sub-send-scala) below.
[Publish](#distributed-pub-sub-publish) and [Send](#distributed-pub-sub-send) below.
A more comprehensive sample is available in the
tutorial named [Akka Clustered PubSub with Scala!](https://github.com/typesafehub/activator-akka-clustering).
<a id="distributed-pub-sub-publish-scala"></a>
<a id="distributed-pub-sub-publish"></a>
## Publish
This is the true pub/sub mode. A typical usage of this mode is a chat room in an instant
@ -97,7 +97,7 @@ to subscribers that subscribed without a group id.
@@@
<a id="distributed-pub-sub-send-scala"></a>
<a id="distributed-pub-sub-send"></a>
## Send
This is a point-to-point mode where each message is delivered to one destination,
@ -177,10 +177,10 @@ akka.extensions = ["akka.cluster.pubsub.DistributedPubSub"]
## Delivery Guarantee
As in @ref:[Message Delivery Reliability](../scala/general/message-delivery-reliability.md) of Akka, message delivery guarantee in distributed pub sub modes is **at-most-once delivery**.
As in @ref:[Message Delivery Reliability](general/message-delivery-reliability.md) of Akka, message delivery guarantee in distributed pub sub modes is **at-most-once delivery**.
In other words, messages can be lost over the wire.
If you are looking for at-least-once delivery guarantee, we recommend [Kafka Akka Streams integration](https://github.com/akka/reactive-kafka).
If you are looking for at-least-once delivery guarantee, we recommend [Kafka Akka Streams integration](http://doc.akka.io/docs/akka-stream-kafka/current/home.html).
## Dependencies

View file

@ -95,11 +95,11 @@ A test for this implementation may look like this:
This classifier always takes time proportional to the number of
subscriptions, independent of how many actually match.
<a id="actor-classification-scala"></a>
<a id="actor-classification"></a>
### Actor Classification
This classification was originally developed specifically for implementing
@ref:[DeathWatch](actors.md#deathwatch-scala): subscribers as well as classifiers are of
@ref:[DeathWatch](actors.md#deathwatch): subscribers as well as classifiers are of
type `ActorRef`.
This classification requires an `ActorSystem` in order to perform book-keeping
@ -118,7 +118,7 @@ A test for this implementation may look like this:
This classifier is still generic in the event type, and it is efficient for
all use cases.
<a id="event-stream-scala"></a>
<a id="event-stream"></a>
## Event Stream
The event stream is the main event bus of each actor system: it is used for
@ -172,7 +172,7 @@ event class have been done)
### Dead Letters
As described at @ref:[Stopping actors](actors.md#stopping-actors-scala), messages queued when an actor
As described at @ref:[Stopping actors](actors.md#stopping-actors), messages queued when an actor
terminates or sent after its death are re-routed to the dead letter mailbox,
which by default will publish the messages wrapped in `DeadLetter`. This
wrapper holds the original sender, receiver and message of the envelope which

View file

@ -52,10 +52,10 @@ in the `akka.extensions` section of the config you provide to your `ActorSystem`
The sky is the limit!
By the way, did you know that Akka's `Typed Actors`, `Serialization` and other features are implemented as Akka Extensions?
<a id="extending-akka-scala-settings"></a>
<a id="extending-akka-settings"></a>
### Application specific settings
The <!-- FIXME: More than one link target with name configuration in path Some(/scala/extending-akka.rst) --> configuration can be used for application specific settings. A good practice is to place those settings in an Extension.
The @ref:[configuration](general/configuration.md) can be used for application specific settings. A good practice is to place those settings in an Extension.
Sample configuration:
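A hypothetical illustration of what such application specific settings could look like (all keys invented for the example):

```
myapp {
  db {
    uri = "mongodb://example1.com:27017,example2.com:27017"
    circuit-breaker {
      timeout = 30 seconds
    }
  }
}
```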

View file

@ -1,5 +1,5 @@
<a id="fault-tolerance-sample-scala"></a>
<a id="fault-tolerance-sample"></a>
# Diagrams of the Fault Tolerance Sample
![faulttolerancesample-normal-flow.png](../images/faulttolerancesample-normal-flow.png)

View file

@ -1,6 +1,6 @@
# Fault Tolerance
As explained in @ref:[Actor Systems](../scala/general/actor-systems.md) each actor is the supervisor of its
As explained in @ref:[Actor Systems](general/actor-systems.md) each actor is the supervisor of its
children, and as such each actor defines fault handling supervisor strategy.
This strategy cannot be changed afterwards as it is an integral part of the
actor systems structure.
@ -34,7 +34,7 @@ For the sake of demonstration let us consider the following strategy:
@@snip [FaultHandlingDocSpec.scala]($code$/scala/docs/actor/FaultHandlingDocSpec.scala) { #strategy }
I have chosen a few well-known exception types in order to demonstrate the
application of the fault handling directives described in <!-- FIXME: More than one link target with name supervision in path Some(/scala/fault-tolerance.rst) --> supervision.
application of the fault handling directives described in @ref:[supervision](general/supervision.md).
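A sketch of such a strategy along the lines of the snipped example (the exception choices are illustrative):

```
import akka.actor._
import akka.actor.SupervisorStrategy._
import scala.concurrent.duration._

class Supervisor extends Actor {
  override val supervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1.minute) {
      case _: ArithmeticException      => Resume
      case _: NullPointerException     => Restart
      case _: IllegalArgumentException => Stop
      case _: Exception                => Escalate
    }

  def receive = {
    // create children on request so the strategy can be demonstrated
    case p: Props => sender() ! context.actorOf(p)
  }
}
```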
First off, it is a one-for-one strategy, meaning that each child is treated
separately (an all-for-one strategy works very similarly, the only difference
is that any decision is applied to all children of the supervisor, not only the
@ -104,7 +104,7 @@ by overriding the `logFailure` method.
## Supervision of Top-Level Actors
Top-level actors are those which are created using `system.actorOf()`, and
they are children of the @ref:[User Guardian](../scala/general/supervision.md#user-guardian). There are no
they are children of the @ref:[User Guardian](general/supervision.md#user-guardian). There are no
special rules applied in this case, the guardian simply applies the configured
strategy.

View file

@ -433,7 +433,7 @@ and in the following.
### Event Tracing
The setting `akka.actor.debug.fsm` in <!-- FIXME: More than one link target with name configuration in path Some(/scala/fsm.rst) --> configuration enables logging of an
The setting `akka.actor.debug.fsm` in @ref:[configuration](general/configuration.md) enables logging of an
event trace by `LoggingFSM` instances:
@@snip [FSMDocSpec.scala]($code$/scala/docs/actor/FSMDocSpec.scala) { #logging-fsm }
@ -447,7 +447,7 @@ messages
* all state transitions
Life cycle changes and special messages can be logged as described for
@ref:[Actors](testing.md#actor-logging-scala).
@ref:[Actors](testing.md#actor-logging).
### Rolling Event Log

View file

@ -119,7 +119,7 @@ includes the following suggestions:
>
* Do the blocking call within an actor (or a set of actors managed by a router
[@ref:[Java](../../java/routing.md), @ref:[Scala](../../scala/routing.md)]), making sure to
@ref:[router](../routing.md), making sure to
configure a thread pool which is either dedicated for this purpose or
sufficiently sized.
* Do the blocking call within a `Future`, ensuring an upper bound on
@ -143,9 +143,8 @@ on which DBMS is deployed on what hardware.
@@@ note
Configuring thread pools is a task best delegated to Akka, simply configure
in the `application.conf` and instantiate through an `ActorSystem`
[@ref:[Java](../../java/dispatchers.md#dispatcher-lookup-java), @ref:[Scala
](../../scala/dispatchers.md#dispatcher-lookup-scala)]
in the `application.conf` and instantiate through an
@ref:[`ActorSystem`](../dispatchers.md#dispatcher-lookup)
@@@
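For illustration, such a dedicated dispatcher might be declared in `application.conf` like this (the dispatcher name and pool size are invented for the example):

```
blocking-io-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 16
  }
  throughput = 1
}
```

An actor can then be run on it via `props.withDispatcher("blocking-io-dispatcher")`.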
@ -167,4 +166,4 @@ actor, which in turn will recursively stop all its child actors, the system
guardian.
If you want to execute some operations while terminating `ActorSystem`,
look at `CoordinatedShutdown` [@ref:[Java](../../java/actors.md#coordinated-shutdown-java), @ref:[Scala](../../scala/actors.md#coordinated-shutdown-scala)]
look at @ref:[`CoordinatedShutdown`](../actors.md#coordinated-shutdown).

View file

@ -4,8 +4,7 @@ The previous section about @ref:[Actor Systems](actor-systems.md) explained how
hierarchies and are the smallest unit when building an application. This
section looks at one such actor in isolation, explaining the concepts you
encounter while implementing it. For a more in depth reference with all the
details please refer to
@ref:[Actors (Scala)](../../scala/actors.md) and @ref:[Actors (Java)](../../java/actors.md).
details please refer to @ref:[Actors](../actors.md).
An actor is a container for [State](#state), [Behavior](#behavior), a [Mailbox](#mailbox), [Child Actors](#child-actors)
and a [Supervisor Strategy](#supervisor-strategy). All of this is encapsulated behind an [Actor
@ -32,7 +31,7 @@ publishes this information itself.
Actor objects will typically contain some variables which reflect possible
states the actor may be in. This can be an explicit state machine (e.g. using
the @ref:[FSM](../../scala/fsm.md) module), or it could be a counter, set of listeners,
the @ref:[FSM](../fsm.md) module), or it could be a counter, set of listeners,
pending requests, etc. These data are what make an actor valuable, and they
must be protected from corruption by other actors. The good news is that Akka
actors conceptually each have their own light-weight thread, which is
@ -53,7 +52,7 @@ the actor. This is to enable the ability of self-healing of the system.
Optionally, an actor's state can be automatically recovered to the state
before a restart by persisting received messages and replaying them after
restart (see @ref:[Persistence](../../scala/persistence.md)).
restart (see @ref:[Persistence](../persistence.md)).
## Behavior

View file

@ -241,11 +241,7 @@ supported.
## Application specific settings
The configuration can also be used for application specific settings.
A good practice is to place those settings in an Extension, as described in:
>
* Scala API: @ref:[extending-akka-scala.settings](../../scala/extending-akka.md#extending-akka-scala-settings)
* Java API: @ref:[extending-akka-java.settings](../../java/extending-akka.md#extending-akka-java-settings)
A good practice is to place those settings in an @ref:[Extension](../extending-akka.md#extending-akka-settings).
## Configuring multiple ActorSystem

View file

@ -280,14 +280,11 @@ acknowledgement
The third becomes necessary by virtue of the acknowledgements not being guaranteed
to arrive either. An ACK-RETRY protocol with business-level acknowledgements is
supported by @ref:[At-Least-Once Delivery](../../scala/persistence.md#at-least-once-delivery-scala) of the Akka Persistence module. Duplicates can be
detected by tracking the identifiers of messages sent via @ref:[At-Least-Once Delivery](../../scala/persistence.md#at-least-once-delivery-scala).
supported by @ref:[At-Least-Once Delivery](../persistence.md#at-least-once-delivery) of the Akka Persistence module. Duplicates can be
detected by tracking the identifiers of messages sent via @ref:[At-Least-Once Delivery](../persistence.md#at-least-once-delivery).
Another way of implementing the third part would be to make processing the messages
idempotent on the level of the business logic.
Another example of implementing all three requirements is shown at
<!-- FIXME: unresolved link reference: reliable-proxy --> reliable-proxy (which is now superseded by @ref:[At-Least-Once Delivery](../../scala/persistence.md#at-least-once-delivery-scala)).
### Event Sourcing
Event sourcing (and sharding) is what makes large websites scale to
@ -301,7 +298,7 @@ components may consume the event stream as a means to replicate the component
state on a different continent or to react to changes). If the components
state is lost—due to a machine failure or by being pushed out of a cache—it can
easily be reconstructed by replaying the event stream (usually employing
snapshots to speed up the process). @ref:[Event sourcing](../../scala/persistence.md#event-sourcing-scala) is supported by
snapshots to speed up the process). @ref:[Event sourcing](../persistence.md#event-sourcing) is supported by
Akka Persistence.
### Mailbox with Explicit Acknowledgement
@ -314,8 +311,6 @@ guarantees are otherwise sufficient to fulfill the applications requirements.
Please note that the caveats for [The Rules for In-JVM (Local) Message Sends](#the-rules-for-in-jvm-local-message-sends)
do apply.
An example implementation of this pattern is shown at <!-- FIXME: unresolved link reference: mailbox-acking --> mailbox-acking.
<a id="deadletters"></a>
## Dead Letters
@ -344,8 +339,8 @@ guaranteed delivery.
### How do I Receive Dead Letters?
An actor can subscribe to class `akka.actor.DeadLetter` on the event
stream, see @ref:[Event Stream](../../java/event-bus.md#event-stream-java) (Java) or @ref:[Event Stream](../../scala/event-bus.md#event-stream-scala)
(Scala) for how to do that. The subscribed actor will then receive all dead
stream, see @ref:[Event Stream](../event-bus.md#event-stream)
for how to do that. The subscribed actor will then receive all dead
letters published in the (local) system from that point onwards. Dead letters
are not propagated over the network, if you want to collect them in one place
you will have to subscribe one actor per network node and forward them

View file

@ -66,7 +66,7 @@ containers violates assumption 1, unless additional steps are taken in the
network configuration to allow symmetric communication between involved systems.
In such situations Akka can be configured to bind to a different network
address than the one used for establishing connections between Akka nodes.
See @ref:[Akka behind NAT or in a Docker container](../../scala/remoting.md#remote-configuration-nat).
See @ref:[Akka behind NAT or in a Docker container](../remoting.md#remote-configuration-nat).
## Marking Points for Scaling Up with Routers
@ -81,4 +81,4 @@ up a configurable number of children of the desired type and route to them in
the configured fashion. Once such a router has been declared, its configuration
can be freely overridden from the configuration file, including mixing it with
the remote deployment of (some of) the children. Read more about
this in @ref:[Routing (Scala)](../../scala/routing.md) and @ref:[Routing (Java)](../../java/routing.md).
this in @ref:[Routing](../routing.md).

View file

@ -195,7 +195,7 @@ Provided as a built-in pattern the `akka.pattern.BackoffSupervisor` implements t
This pattern is useful when the started actor fails <a id="^1" href="#1">[1]</a> because some external resource is not available,
and we need to give it some time to start up again. One of the prime examples of when this is useful is
when a @ref:[PersistentActor](../../scala/persistence.md) fails (by stopping) with a persistence failure - which indicates that
when a @ref:[PersistentActor](../persistence.md) fails (by stopping) with a persistence failure - which indicates that
the database may be down or overloaded; in such situations it makes most sense to give it a little bit of time
to recover before the persistent actor is started.
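A sketch of wiring up such a backoff supervisor (`system` and `childProps` are assumed: any `ActorSystem` and the `Props` of the actor to be supervised; the names are illustrative):

```
import akka.pattern.{ Backoff, BackoffSupervisor }
import scala.concurrent.duration._

val supervisorProps = BackoffSupervisor.props(
  Backoff.onStop(
    childProps,
    childName = "myPersistentActor",
    minBackoff = 3.seconds,
    maxBackoff = 30.seconds,
    randomFactor = 0.2)) // jitter, so all children do not restart at once

system.actorOf(supervisorProps, "mySupervisor")
```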
@ -279,4 +279,4 @@ Please note that creating one-off actors from an all-for-one supervisor entails
that failures escalated by the temporary actor will affect all the permanent
ones. If this is not desired, install an intermediate supervisor; this can very
easily be done by declaring a router of size 1 for the worker, see
@ref:[Routing](../../scala/routing.md) or @ref:[Routing](../../java/routing.md).
@ref:[Routing](../routing.md).

View file

@ -120,7 +120,7 @@ Once a connection has been established data can be sent to it from any actor in
Tcp.Write
: The simplest `WriteCommand` implementation which wraps a `ByteString` instance and an "ack" event.
A `ByteString` (as explained in @ref:[this section](io.md#bytestring-scala)) models one or more chunks of immutable
A `ByteString` (as explained in @ref:[this section](io.md#bytestring)) models one or more chunks of immutable
in-memory data with a maximum (total) size of 2 GB (2^31 bytes).
Tcp.WriteFile

View file

@ -73,7 +73,7 @@ not error handling. In other words, data may still be lost, even if every write
@@@
<a id="bytestring-scala"></a>
<a id="bytestring"></a>
### ByteString
To maintain isolation, actors should communicate with immutable objects only. `ByteString` is an

View file

@ -65,7 +65,7 @@ akka {
```
To customize the logging further or take other actions for dead letters you can subscribe
to the @ref:[Event Stream](event-bus.md#event-stream-scala).
to the @ref:[Event Stream](event-bus.md#event-stream).
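For example, a minimal listener might look like this (the listener name is illustrative; `system` is assumed to be the `ActorSystem`):

```
import akka.actor.{ Actor, ActorLogging, DeadLetter, Props }

class DeadLetterListener extends Actor with ActorLogging {
  def receive = {
    case d: DeadLetter =>
      log.info("DeadLetter from {} to {}: {}", d.sender, d.recipient, d.message)
  }
}

val listener = system.actorOf(Props[DeadLetterListener], "deadLetterListener")
system.eventStream.subscribe(listener, classOf[DeadLetter])
```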
### Auxiliary logging options
@ -213,7 +213,7 @@ akka {
}
```
Also see the logging options for TestKit: @ref:[actor.logging-scala](testing.md#actor-logging-scala).
Also see the logging options for TestKit: @ref:[Logging](testing.md#actor-logging).
### Translating Log Source to String and Class
@ -266,12 +266,12 @@ that receives the log events in the same order they were emitted.
The event handler actor does not have a bounded inbox and is run on the default dispatcher. This means
that logging extreme amounts of data may affect your application badly. This can be somewhat mitigated by
using an async logging backend though. (See [Using the SLF4J API directly](#slf4j-directly-scala))
using an async logging backend though. (See [Using the SLF4J API directly](#slf4j-directly))
@@@
You can configure which event handlers are created at system start-up and listen to logging events. That is done using the
`loggers` element in the <!-- FIXME: More than one link target with name configuration in path Some(/scala/logging.rst) --> configuration.
`loggers` element in the @ref:[configuration](general/configuration.md).
Here you can also define the log level. More fine grained filtering based on the log source
can be implemented in a custom `LoggingFilter`, which can be defined in the `logging-filter`
configuration property.
@ -287,7 +287,7 @@ akka {
```
The default one logs to STDOUT and is registered by default. It is not intended
to be used for production. There is also an [SLF4J](#slf4j-scala)
to be used for production. There is also an [SLF4J](#slf4j)
logger available in the 'akka-slf4j' module.
Example of creating a listener:
@ -301,7 +301,7 @@ Instead log messages are printed to stdout (System.out). The default log level f
stdout logger is `WARNING` and it can be silenced completely by setting
`akka.stdout-loglevel=OFF`.
<a id="slf4j-scala"></a>
<a id="slf4j"></a>
## SLF4J
Akka provides a logger for [SL4FJ](http://www.slf4j.org/). This module is available in the 'akka-slf4j.jar'.
@ -313,7 +313,7 @@ libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.1.3"
```
You need to enable the Slf4jLogger in the `loggers` element in
the <!-- FIXME: More than one link target with name configuration in path Some(/scala/logging.rst) --> configuration. Here you can also define the log level of the event bus.
the @ref:[configuration](general/configuration.md). Here you can also define the log level of the event bus.
More fine grained log levels can be defined in the configuration of the SLF4J backend
(e.g. logback.xml). You should also define `akka.event.slf4j.Slf4jLoggingFilter` in
the `logging-filter` configuration property. It will filter the log events using the backend
@ -356,7 +356,7 @@ shown below:
val log = Logging(system.eventStream, "my.nice.string")
```
<a id="slf4j-directly-scala"></a>
<a id="slf4j-directly"></a>
### Using the SLF4J API directly
If you use the SLF4J API directly in your application, remember that the logging operations will block
@ -496,13 +496,13 @@ A more advanced (including most Akka added information) example pattern would be
<pattern>%date{ISO8601} level=[%level] marker=[%marker] logger=[%logger] akkaSource=[%X{akkaSource}] sourceActorSystem=[%X{sourceActorSystem}] sourceThread=[%X{sourceThread}] mdc=[ticket-#%X{ticketNumber}: %X{ticketDesc}] - msg=[%msg]%n----%n</pattern>
```
<a id="jul-scala"></a>
<a id="jul"></a>
## java.util.logging
Akka includes a logger for [java.util.logging](https://docs.oracle.com/javase/8/docs/api/java/util/logging/package-summary.html#package.description).
You need to enable the `akka.event.jul.JavaLogger` in the `loggers` element in
the <!-- FIXME: More than one link target with name configuration in path Some(/scala/logging.rst) --> configuration. Here you can also define the log level of the event bus.
the @ref:[configuration](general/configuration.md). Here you can also define the log level of the event bus.
More fine grained log levels can be defined in the configuration of the logging backend.
You should also define `akka.event.jul.JavaLoggingFilter` in
the `logging-filter` configuration property. It will filter the log events using the backend

View file

@ -77,7 +77,7 @@ all domain events of an Aggregate Root type.
@@snip [LeveldbPersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/LeveldbPersistenceQueryDocSpec.scala) { #EventsByTag }
To tag events you create an @ref:[Event Adapters](persistence.md#event-adapters-scala) that wraps the events in a `akka.persistence.journal.Tagged`
To tag events you create an @ref:[Event Adapter](persistence.md#event-adapters) that wraps the events in an `akka.persistence.journal.Tagged`
with the given `tags`.
@@snip [LeveldbPersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/LeveldbPersistenceQueryDocSpec.scala) { #tagger }

View file

@ -75,7 +75,7 @@ If your usage does not require a live stream, you can use the `currentPersistenc
#### EventsByPersistenceIdQuery and CurrentEventsByPersistenceIdQuery
`eventsByPersistenceId` is a query equivalent to replaying a @ref:[PersistentActor](persistence.md#event-sourcing-scala),
`eventsByPersistenceId` is a query equivalent to replaying a @ref:[PersistentActor](persistence.md#event-sourcing),
however, since it is a stream it is possible to keep it alive and watch for additional incoming events persisted by the
persistent actor identified by the given `persistenceId`.
@ -94,7 +94,7 @@ The goal of this query is to allow querying for all events which are "tagged" wi
That includes the use case to query all domain events of an Aggregate Root type.
Please refer to your read journal plugin's documentation to find out if and how it is supported.
Some journals may support tagging of events via an @ref:[Event Adapters](persistence.md#event-adapters-scala) that wraps the events in a
Some journals may support tagging of events via an @ref:[Event Adapter](persistence.md#event-adapters) that wraps the events in a
`akka.persistence.journal.Tagged` with the given `tags`. The journal may support other ways of doing tagging - again,
how exactly this is implemented depends on the used journal. Here is an example of such a tagging event adapter:
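A sketch of such an adapter (the class name and tagging rule are illustrative):

```
import akka.persistence.journal.{ Tagged, WriteEventAdapter }

class MyTaggingEventAdapter extends WriteEventAdapter {
  override def manifest(event: Any): String = ""

  override def toJournal(event: Any): Any = event match {
    case s: String => Tagged(s, Set("tag-a")) // tag all String events
    case other     => other
  }
}
```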
@ -112,7 +112,7 @@ on relational databases, yet may be hard to implement efficiently on plain key-v
@@@
In the example below we query all events which have been tagged (we assume this was performed by the write-side using an
@ref:[EventAdapter](persistence.md#event-adapters-scala), or that the journal is smart enough that it can figure out what we mean by this
@ref:[EventAdapter](persistence.md#event-adapters), or that the journal is smart enough that it can figure out what we mean by this
tag - for example if the journal stored the events as json it may try to find those with the field `tag` set to this value etc.).
@@snip [PersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #events-by-tag }
@ -127,7 +127,7 @@ If your usage does not require a live stream, you can use the `currentEventsByTa
### Materialized values of queries
Journals are able to provide additional information related to a query by exposing @ref:[Materialized values](stream/stream-quickstart.md#materialized-values-quick-scala),
Journals are able to provide additional information related to a query by exposing @ref:[Materialized values](stream/stream-quickstart.md#materialized-values-quick),
which are a feature of @ref:[Streams](stream/index.md) that allows exposing additional values at stream materialization time.
More advanced query journals may use this technique to expose information about the character of the materialized
@ -143,7 +143,7 @@ specialised query object, as demonstrated in the sample below:
## Performance and denormalization
When building systems using @ref:[Event sourcing](persistence.md#event-sourcing-scala) and CQRS ([Command & Query Responsibility Segregation](https://msdn.microsoft.com/en-us/library/jj554200.aspx)) techniques
When building systems using @ref:[Event sourcing](persistence.md#event-sourcing) and CQRS ([Command & Query Responsibility Segregation](https://msdn.microsoft.com/en-us/library/jj554200.aspx)) techniques
it is tremendously important to realise that the write-side has completely different needs from the read-side,
and separating those concerns into datastores that are optimised for either side makes it possible to offer the best
experience for the write and read sides independently.
@ -196,7 +196,7 @@ into the other datastore:
@@snip [PersistenceQueryDocSpec.scala]($code$/scala/docs/persistence/query/PersistenceQueryDocSpec.scala) { #projection-into-different-store-actor }
<a id="read-journal-plugin-api-scala"></a>
<a id="read-journal-plugin-api"></a>
## Query plugins
Query plugins are various (mostly community driven) `ReadJournal` implementations for all kinds

View file

@ -56,10 +56,10 @@ definition - in a backwards compatible way - such that the new deserialization c
The most common schema changes you will likely encounter are:
* [adding a field to an event type](#add-field-scala),
* [remove or rename field in event type](#rename-field-scala),
* [remove event type](#remove-event-class-scala),
* [split event into multiple smaller events](#split-large-event-into-smaller-scala).
* [adding a field to an event type](#add-field),
* [remove or rename field in event type](#rename-field),
* [remove event type](#remove-event-class),
* [split event into multiple smaller events](#split-large-event-into-smaller).
The following sections will explain some patterns which can be used to safely evolve your schema when facing those changes.
@ -121,7 +121,7 @@ serializers, and the yellow payload indicates the user provided event (by callin
As you can see, the `PersistentMessage` acts as an envelope around the payload, adding various fields related to the
origin of the event (`persistenceId`, `sequenceNr` and more).
More advanced techniques (e.g. [Remove event class and ignore events](#remove-event-class-scala)) will dive into using the manifests for increasing the
More advanced techniques (e.g. [Remove event class and ignore events](#remove-event-class)) will dive into using the manifests for increasing the
flexibility of the persisted vs. exposed types even more. However for now we will focus on the simpler evolution techniques,
concerning simply configuring the payload serializers.
@ -169,7 +169,7 @@ Deserialization will be performed by the same serializer which serialized the me
because of the `identifier` being stored together with the message.
Please refer to the @ref:[Akka Serialization](serialization.md) documentation for more advanced use of serializers,
especially the @ref:[Serializer with String Manifest](serialization.md#string-manifest-serializer-scala) section since it is very useful for Persistence based applications
especially the @ref:[Serializer with String Manifest](serialization.md#string-manifest-serializer) section since it is very useful for Persistence based applications
dealing with schema evolutions, as we will see in some of the examples below.
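For orientation, a sketch of a `SerializerWithStringManifest` (the identifier, manifest string and wire format are invented for the example; the serializer must also be registered in the serialization configuration):

```
import akka.serialization.SerializerWithStringManifest

final case class ItemAdded(id: String)

class ItemSerializer extends SerializerWithStringManifest {
  val ItemAddedManifest = "item-added-v2"

  override def identifier: Int = 1234567

  override def manifest(o: AnyRef): String = o match {
    case _: ItemAdded => ItemAddedManifest
  }

  override def toBinary(o: AnyRef): Array[Byte] = o match {
    case ItemAdded(id) => id.getBytes("UTF-8")
  }

  override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
    manifest match {
      case ItemAddedManifest => ItemAdded(new String(bytes, "UTF-8"))
    }
}
```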
## Schema evolution in action
@ -179,7 +179,7 @@ some of the various options one might go about handling the described situation.
a complete guide, so feel free to adapt these techniques depending on your serializer's capabilities
and/or other domain specific limitations.
<a id="add-field-scala"></a>
<a id="add-field"></a>
### Add fields
**Situation:**
@ -213,7 +213,7 @@ the field to this event type:
@@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #protobuf-read-optional }
<a id="rename-field-scala"></a>
<a id="rename-field"></a>
### Rename fields
**Situation:**
@ -279,7 +279,7 @@ changes in the message format.
@@@
<a id="remove-event-class-scala"></a>
<a id="remove-event-class"></a>
### Remove event class and ignore events
**Situation:**
@ -291,7 +291,7 @@ and should be deleted. You still have to be able to replay from a journal which
The problem of removing an event type from the domain model is not so much its removal as the implications
for the recovery mechanisms that this entails. For example, a naive way of filtering out certain kinds of events from
being delivered to a recovering `PersistentActor` is pretty simple, as one can simply filter them out in an @ref:[EventAdapter](persistence.md#event-adapters-scala):
being delivered to a recovering `PersistentActor` is pretty simple, as one can simply filter them out in an @ref:[EventAdapter](persistence.md#event-adapters):
![persistence-drop-event.png](../images/persistence-drop-event.png)
>
@ -320,7 +320,7 @@ this before starting to deserialize the object.
This approach allows us to *remove the original class from our classpath*, which makes for fewer "old" classes lying around in the project.
This can for example be implemented by using a `SerializerWithStringManifest`
(documented in depth in @ref:[Serializer with String Manifest](serialization.md#string-manifest-serializer-scala)). By looking at the string manifest, the serializer can notice
(documented in depth in @ref:[Serializer with String Manifest](serialization.md#string-manifest-serializer)). By looking at the string manifest, the serializer can notice
that the type is no longer needed, and skip the deserialization altogether:
![persistence-drop-event-serializer.png](../images/persistence-drop-event-serializer.png)
@ -338,7 +338,7 @@ and emits and empty `EventSeq` whenever such object is encoutered:
@@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #string-serializer-skip-deleved-event-by-manifest-adapter }
<a id="detach-domain-from-data-model-scala"></a>
<a id="detach-domain-from-data-model"></a>
### Detach domain model from data model
**Situation:**
@ -376,14 +376,14 @@ as long as the mapping logic is able to convert between them:
The same technique could also be used directly in the Serializer if the end result of marshalling is bytes.
Then the serializer can simply convert the bytes to the domain object by using the generated protobuf builders.
<a id="store-human-readable-scala"></a>
<a id="store-human-readable"></a>
### Store events as human-readable data model
**Situation:**
You want to keep your persisted events in a human-readable format, for example JSON.
**Solution:**
This is a special case of the [Detach domain model from data model](#detach-domain-from-data-model-scala) pattern, and thus requires some co-operation
This is a special case of the [Detach domain model from data model](#detach-domain-from-data-model) pattern, and thus requires some co-operation
from the Journal implementation to achieve this.
An example of a Journal which may implement this pattern is MongoDB, however other databases such as PostgreSQL
@ -423,7 +423,7 @@ that provides that functionality, or implement one yourself.
@@@
<a id="split-large-event-into-smaller-scala"></a>
<a id="split-large-event-into-smaller"></a>
### Split large event into fine-grained events
**Situation:**

View file

@ -11,7 +11,7 @@ communication with at-least-once message delivery semantics.
Akka persistence is inspired by and the official replacement of the [eventsourced](https://github.com/eligosource/eventsourced) library. It follows the same
concepts and architecture of [eventsourced](https://github.com/eligosource/eventsourced) but significantly differs on API and implementation level. See also
@ref:[migration-eventsourced-2.3](../scala/project/migration-guide-eventsourced-2.3.x.md)
@ref:[migration-eventsourced-2.3](project/migration-guide-eventsourced-2.3.x.md)
## Dependencies
@ -48,7 +48,7 @@ used for optimizing recovery times. The storage backend of a snapshot store is p
The persistence extension comes with a "local" snapshot storage plugin, which writes to the local filesystem.
Replicated snapshot stores are available as [Community plugins](http://akka.io/community/).
<a id="event-sourcing-scala"></a>
<a id="event-sourcing"></a>
## Event sourcing
The basic idea behind [Event Sourcing](http://martinfowler.com/eaaDev/EventSourcing.html) is quite simple. A persistent actor receives a (non-persistent) command
@ -85,7 +85,7 @@ about successful state changes by publishing events.
When persisting events with `persist` it is guaranteed that the persistent actor will not receive further commands between
the `persist` call and the execution(s) of the associated event handler. This also holds for multiple `persist`
calls in context of a single command. Incoming messages are [stashed](#internal-stash-scala) until the `persist`
calls in context of a single command. Incoming messages are [stashed](#internal-stash) until the `persist`
is completed.
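For illustration, a minimal sketch of a persistent actor using `persist` (command, event and actor names are invented for the example):

```
import akka.persistence.PersistentActor

final case class Add(item: String)
final case class Added(item: String)

class CartActor extends PersistentActor {
  override def persistenceId = "cart-1"

  private var items: List[String] = Nil

  override def receiveRecover: Receive = {
    case Added(item) => items = item :: items // replayed on (re)start
  }

  override def receiveCommand: Receive = {
    case Add(item) =>
      // no other commands are processed until the event handler has run
      persist(Added(item)) { evt =>
        items = evt.item :: items
        sender() ! "ack"
      }
  }
}
```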
If persistence of an event fails, `onPersistFailure` will be invoked (logging the error by default),
@ -109,7 +109,7 @@ behavior when replaying the events. When replay is completed it will use the new
@@@
<a id="persistence-id-scala"></a>
<a id="persistence-id"></a>
### Identifiers
A persistent actor must have an identifier that doesn't change across different actor incarnations.
@ -126,7 +126,7 @@ behavior is corrupted.
@@@
<a id="recovery-scala"></a>
<a id="recovery"></a>
### Recovery
By default, a persistent actor is automatically recovered on start and on restart by replaying journaled messages.
@ -149,7 +149,7 @@ recovery in the future, store its `ActorPath` explicitly in your persisted event
@@@
<a id="recovery-custom-scala"></a>
<a id="recovery-custom"></a>
#### Recovery customization
Applications may also customise how recovery is performed by returning a customised `Recovery` object
@ -193,11 +193,11 @@ unused `persistenceId`.
If there is a problem with recovering the state of the actor from the journal, `onRecoveryFailure`
is called (logging the error by default) and the actor will be stopped.
<a id="internal-stash-scala"></a>
<a id="internal-stash"></a>
### Internal stash
The persistent actor has a private @ref:[stash](actors.md#stash-scala) for internally caching incoming messages during
[recovery](#recovery-scala) or the `persist\persistAll` method persisting events. You can still use/inherit from the
The persistent actor has a private @ref:[stash](actors.md#stash) for internally caching incoming messages during
[recovery](#recovery) or the `persist\persistAll` method persisting events. You can still use/inherit from the
`Stash` interface. The internal stash cooperates with the normal stash by hooking into `unstashAll` method and
making sure messages are unstashed properly to the internal stash to maintain ordering guarantees.
@ -240,7 +240,7 @@ be discarded. You can use bounded stash instead of it.
@@@
<a id="persist-async-scala"></a>
<a id="persist-async"></a>
### Relaxed local consistency requirements and high throughput use-cases
If faced with relaxed local consistency requirements and high throughput demands sometimes `PersistentActor` and its
@ -272,7 +272,7 @@ The callback will not be invoked if the actor is restarted (or stopped) in betwe
@@@
<a id="defer-scala"></a>
<a id="defer"></a>
### Deferring actions until preceding persist handlers have executed
Sometimes when working with `persistAsync` or `persist` you may find that it would be nice to define some actions in terms of
@ -303,7 +303,7 @@ The callback will not be invoked if the actor is restarted (or stopped) in betwe
@@@
<a id="nested-persist-calls-scala"></a>
<a id="nested-persist-calls"></a>
### Nested persist calls
It is possible to call `persist` and `persistAsync` inside their respective callback blocks and they will properly
@ -347,7 +347,7 @@ the Actor's receive block (or methods synchronously invoked from there).
@@@
<a id="failures-scala"></a>
<a id="failures"></a>
### Failures
If persistence of an event fails, `onPersistFailure` will be invoked (logging the error by default),
@ -368,7 +368,7 @@ next message.
If there is a problem with recovering the state of the actor from the journal when the actor is
started, `onRecoveryFailure` is called (logging the error by default), and the actor will be stopped.
Note that failure to load snapshot is also treated like this, but you can disable loading of snapshots
if you for example know that serialization format has changed in an incompatible way, see [Recovery customization](#recovery-custom-scala).
if you for example know that serialization format has changed in an incompatible way, see [Recovery customization](#recovery-custom).
### Atomic writes
@ -436,7 +436,7 @@ For critical failures, such as recovery or persisting events failing, the persis
handler is invoked. This is because if the underlying journal implementation is signalling persistence failures it is most
likely either failing completely or overloaded and restarting right-away and trying to persist the event again will most
likely not help the journal recover as it would likely cause a [Thundering herd problem](https://en.wikipedia.org/wiki/Thundering_herd_problem), as many persistent actors
would restart and try to persist their events again. Instead, using a `BackoffSupervisor` (as described in [Failures](#failures-scala)) which
would restart and try to persist their events again. Instead, using a `BackoffSupervisor` (as described in [Failures](#failures)) which
implements an exponential-backoff strategy which allows for more breathing room for the journal to recover between
restarts of the persistent actor.
@ -450,11 +450,11 @@ Check the documentation of the journal implementation you are using for details
@@@
<a id="safe-shutdown-scala"></a>
<a id="safe-shutdown"></a>
### Safely shutting down persistent actors
Special care should be given when shutting down persistent actors from the outside.
With normal Actors it is often acceptable to use the special @ref:[PoisonPill](actors.md#poison-pill-scala) message
With normal Actors it is often acceptable to use the special @ref:[PoisonPill](actors.md#poison-pill) message
to signal to an Actor that it should stop itself once it receives this message. In fact this message is handled
automatically by Akka, leaving the target actor no way to refuse stopping itself when given a poison pill.
@ -479,7 +479,7 @@ mechanism when `persist()` is used. Notice the early stop behaviour that occurs
@@snip [PersistenceDocSpec.scala]($code$/scala/docs/persistence/PersistenceDocSpec.scala) { #safe-shutdown-example-good }
<a id="replay-filter-scala"></a>
<a id="replay-filter"></a>
### Replay Filter
There could be cases where event streams are corrupted and multiple writers (i.e. multiple persistent actor instances)
@ -552,7 +552,7 @@ Since it is acceptable for some applications to not use any snapshotting, it is
However, Akka will log a warning message when this situation is detected and then continue to operate until
an actor tries to store a snapshot, at which point the operation will fail (by replying with a `SaveSnapshotFailure` for example).
Note that @ref:[cluster_sharding_scala](cluster-sharding.md) is using snapshots, so if you use Cluster Sharding you need to define a snapshot store plugin.
Note that @ref:[Cluster Sharding](cluster-sharding.md) is using snapshots, so if you use Cluster Sharding you need to define a snapshot store plugin.
@@@
@ -579,7 +579,7 @@ If failure messages are left unhandled by the actor, a default warning log messa
No default action is performed on the success messages, however you're free to handle them e.g. in order to delete
an in memory representation of the snapshot, or in the case of failure to attempt save the snapshot again.
<a id="at-least-once-delivery-scala"></a>
<a id="at-least-once-delivery"></a>
## At-Least-Once Delivery
To send messages with at-least-once delivery semantics to destinations you can mix-in `AtLeastOnceDelivery`
@ -605,7 +605,7 @@ possible resends
delivered to the new actor incarnation
These semantics are similar to what an `ActorPath` represents (see
@ref:[Actor Lifecycle](actors.md#actor-lifecycle-scala)), therefore you need to supply a path and not a
@ref:[Actor Lifecycle](actors.md#actor-lifecycle)), therefore you need to supply a path and not a
reference when delivering messages. The messages are sent to the path with
an actor selection.
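A minimal sketch of such a sender (message, event and actor names are invented for the example):

```
import akka.actor.ActorPath
import akka.persistence.{ AtLeastOnceDelivery, PersistentActor }

final case class Msg(deliveryId: Long, payload: String)
final case class Confirm(deliveryId: Long)

sealed trait Evt
final case class MsgSent(payload: String) extends Evt
final case class MsgConfirmed(deliveryId: Long) extends Evt

class Sender(destination: ActorPath)
  extends PersistentActor with AtLeastOnceDelivery {

  override def persistenceId = "sender-1"

  override def receiveCommand: Receive = {
    case payload: String     => persist(MsgSent(payload))(updateState)
    case Confirm(deliveryId) => persist(MsgConfirmed(deliveryId))(updateState)
  }

  override def receiveRecover: Receive = {
    case evt: Evt => updateState(evt) // resends are re-established on recovery
  }

  private def updateState(evt: Evt): Unit = evt match {
    case MsgSent(payload)         => deliver(destination)(id => Msg(id, payload))
    case MsgConfirmed(deliveryId) => confirmDelivery(deliveryId)
  }
}
```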
@ -684,7 +684,7 @@ not accept more messages and it will throw `AtLeastOnceDelivery.MaxUnconfirmedMe
The default value can be configured with the `akka.persistence.at-least-once-delivery.max-unconfirmed-messages`
configuration key. The method can be overridden by implementation classes to return non-default values.
<a id="event-adapters-scala"></a>
<a id="event-adapters"></a>
## Event Adapters
In long running projects using event sourcing sometimes the need arises to detach the data model from the domain model
@ -1083,19 +1083,19 @@ in your Akka configuration. The LevelDB Java port is for testing purposes only.
@@@ warning
It is not possible to test persistence provided classes (i.e. [PersistentActor](#event-sourcing-scala)
and [AtLeastOnceDelivery](#at-least-once-delivery-scala)) using `TestActorRef` due to its *synchronous* nature.
It is not possible to test persistence provided classes (i.e. [PersistentActor](#event-sourcing)
and [AtLeastOnceDelivery](#at-least-once-delivery)) using `TestActorRef` due to its *synchronous* nature.
These traits need to be able to perform asynchronous tasks in the background in order to handle internal persistence
related events.
When testing Persistence based projects always rely on @ref:[asynchronous messaging using the TestKit](testing.md#async-integration-testing-scala).
When testing Persistence based projects always rely on @ref:[asynchronous messaging using the TestKit](testing.md#async-integration-testing).
@@@
## Configuration
There are several configuration properties for the persistence module, please refer
to the @ref:[reference configuration](../scala/general/configuration.md#config-akka-persistence).
to the @ref:[reference configuration](general/configuration.md#config-akka-persistence).
## Multiple persistence plugin configurations

View file

@ -33,7 +33,7 @@ All Akka releases are published via Sonatype to Maven Central, see
## Snapshots Repository
Nightly builds are available in [http://repo.akka.io/snapshots](http://repo.akka.io/snapshots)/ as both `SNAPSHOT` and
Nightly builds are available in [http://repo.akka.io/snapshots](http://repo.akka.io/snapshots/) as both `SNAPSHOT` and
timestamped versions.
For timestamped versions, pick a timestamp from

View file

@ -66,7 +66,7 @@ public class SomeActor extends AbstractActor {
}
```
See @ref:[Receive messages](../../java/actors.md#actors-receive-java) documentation for more advice about how to implement
See @ref:[Receive messages](../../java/actors.md#actors-receive) documentation for more advice about how to implement
`createReceive`.
A few new methods have been added with deprecation of the old. Worth noting is `preRestart`.
@ -212,7 +212,7 @@ public class SomeActor extends AbstractActor {
}
```
See @ref:[Receive messages](../../java/actors.md#actors-receive-java) documentation for more advice about how to implement
See @ref:[Receive messages](../../java/actors.md#actors-receive) documentation for more advice about how to implement
`createReceive`.
Similar with `UntypedActorWithStash`, `UntypedPersistentActor`, and
@ -227,7 +227,7 @@ Use plain `system.actorOf` instead of the DSL to create Actors if you have been
### ExtensionKey Deprecation
`ExtensionKey` is a shortcut for writing @ref:[Akka Extensions](../../scala/extending-akka.md) but extensions created with it
`ExtensionKey` is a shortcut for writing @ref:[Akka Extensions](../extending-akka.md) but extensions created with it
cannot be used from Java and it does in fact not save many lines of code over directly implementing `ExtensionId`.
Old:
@ -269,8 +269,7 @@ The change is source compatible and such library should be recompiled and releas
and replaced by `GraphStage` in 2.0-M2. The `GraphStage` API has all features (and even more) as the
previous APIs and is even nicer to use.
Please refer to the GraphStage documentation @ref:[ for Scala](../../scala/stream/stream-customize.md#graphstage-scala) or
the documentation @ref:[for Java](../../scala/stream/stream-customize.md#graphstage-scala), for details on building custom GraphStages.
Please refer to the @ref:[GraphStage documentation](../stream/stream-customize.md#graphstage), for details on building custom GraphStages.
`StatefulStage` would be migrated to a simple `GraphStage` that contains some mutable state in its `GraphStageLogic`,
and `PushPullStage` directly translate to graph stages.
@ -310,7 +309,7 @@ and one would have to validate each implementation of such Actor using the React
The replacement API is the powerful `GraphStage`. It has all features that raw Actors provided for implementing Stream
stages and adds additional protocol and type-safety. You can learn all about it in the documentation:
:ref:`stream-customize-scala`and @ref:[Custom stream processing in JavaDSL](../../java/stream/stream-customize.md).
@ref:[Custom stream processing](../stream/stream-customize.md).
You should also read the blog post series on the official team blog, starting with [Mastering GraphStages, part I](http://blog.akka.io/streams/2016/07/30/mastering-graph-stage-part-1),
which explains using and implementing GraphStages in more practical terms than the reference documentation.
@ -338,7 +337,7 @@ In 2.4 fusing stages together into the same actor could be completely disabled w
`akka.stream.materializer.auto-fusing`. The new materializer introduced in Akka 2.5 does not support disabling fusing,
so this setting does not have any effect any more and has been deprecated. Running each stage in a stream on a separate
actor can be done by adding explicit async boundaries around every stage. How to add asynchronous boundaries can be seen
in @ref:[Operator Fusion](../../java/stream/stream-flows-and-basics.md#operator-fusion-java) (Java) and @ref:[Operator Fusion](../../scala/stream/stream-flows-and-basics.md#operator-fusion-scala) (Scala).
in @ref:[Operator Fusion](../stream/stream-flows-and-basics.md#operator-fusion).
## Remote
@ -437,8 +436,7 @@ to the better. It might also be in conflict with your previous shutdown code so
read the documentation for the Coordinated Shutdown and revisit your own implementations.
Most likely your implementation will not be needed any more or it can be simplified.
More information can be found in the @ref:[documentation for Scala](../../scala/actors.md#coordinated-shutdown-scala) or
@ref:[documentation for Java](../../java/actors.md#coordinated-shutdown-java)
More information can be found in the @ref:[documentation](../actors.md#coordinated-shutdown).
For some tests it might be undesired to terminate the `ActorSystem` via `CoordinatedShutdown`.
You can disable that by adding the following to the configuration of the `ActorSystem` that is
@ -454,7 +452,7 @@ akka.cluster.run-coordinated-shutdown-when-down = off
<a id="mig25-weaklyup"></a>
### WeaklyUp
@ref:[weakly_up_scala](../../scala/cluster-usage.md#weakly-up-scala) is now enabled by default, but it can be disabled with configuration option:
@ref:[WeaklyUp](../cluster-usage.md#weakly-up) is now enabled by default, but it can be disabled with configuration option:
```
akka.cluster.allow-weakly-up-members = off
@ -467,8 +465,7 @@ you might need to enable/disable it in configuration when performing rolling upg
### Cluster Sharding state-store-mode
Distributed Data mode is now the default `state-store-mode` for Cluster Sharding. The persistence mode
is also supported. Read more in the documentation @ref:[for Scala](../../scala/cluster-sharding.md#cluster-sharding-mode-scala) or
the documentation @ref:[for Java](../../java/cluster-sharding.md#cluster-sharding-mode-java).
is also supported. Read more in the @ref:[documentation](../cluster-sharding.md#cluster-sharding-mode).
It's important to use the same mode on all nodes in the cluster, i.e. if you perform a rolling upgrade
from 2.4.16 you might need to change the `state-store-mode` to be the same (`persistence` is default
@ -478,7 +475,7 @@ in 2.4.x):
akka.cluster.sharding.state-store-mode = persistence
```
Note that the stored @ref:[cluster_sharding_remembering_java](../../java/cluster-sharding.md#cluster-sharding-remembering-java) data with `persistence` mode cannot
Note that the stored @ref:[Remembering Entities](../cluster-sharding.md#cluster-sharding-remembering) data with `persistence` mode cannot
be migrated to the `ddata` mode. Such entities must be started again in some other way when using
`ddata` mode.
@ -488,7 +485,7 @@ There is a new cluster management tool with HTTP API that has the same functiona
The HTTP API gives you access to cluster membership information as JSON including full reachability status between the nodes.
It supports the ordinary cluster operations such as join, leave, and down.
See documentation of [akka/akka-cluster-management](https://github.com/akka/akka-cluster-management).
See documentation of [Akka Management](http://developer.lightbend.com/docs/akka-management/current/).
The command line script for cluster management has been deprecated and is scheduled for removal
in the next major version. Use the HTTP API with [curl](https://curl.haxx.se/) or similar instead.
@ -529,13 +526,12 @@ version of Akka.
### Removal of PersistentView
After being deprecated for a long time, and replaced by
@ref:[Persistence Query](../persistence-query.md), `PersistentView` has now been removed.
The corresponding query type is `EventsByPersistenceId`. There are several alternatives for connecting the `Source`
to an actor corresponding to a previous `PersistentView` actor which are documented in @ref:[Integration](../stream/stream-integrations.md).
The consuming actor may be a plain `Actor` or a `PersistentActor` if it needs to store its own state (e.g. `fromSequenceNr` offset).
@ -543,10 +539,10 @@ Please note that Persistence Query is not experimental/may-change anymore in Akk
### Persistence Plugin Proxy
A new @ref:[persistence plugin proxy](../persistence.md#persistence-plugin-proxy) was added, that allows sharing of an otherwise
non-sharable journal or snapshot store. The proxy is available by setting `akka.persistence.journal.plugin` or
`akka.persistence.snapshot-store.plugin` to `akka.persistence.journal.proxy` or `akka.persistence.snapshot-store.proxy`,
respectively. The proxy supplants the @ref:[Shared LevelDB journal](../persistence.md#shared-leveldb-journal).
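A configuration sketch of the proxy; the `target-journal-plugin` and `start-target-journal` keys are recalled from the reference configuration and should be verified against `reference.conf` for your version:

```scala
import com.typesafe.config.ConfigFactory

val proxyConfig = ConfigFactory.parseString("""
  akka.persistence.journal.plugin = "akka.persistence.journal.proxy"
  # The otherwise non-sharable journal that the proxy delegates to.
  akka.persistence.journal.proxy.target-journal-plugin = "akka.persistence.journal.leveldb"
  # Exactly one node should start the target journal.
  akka.persistence.journal.proxy.start-target-journal = on
""")
```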
## Persistence Query
@ -623,7 +619,7 @@ separate library outside of Akka.
### JavaLogger
`akka.contrib.jul.JavaLogger` has been deprecated and included in `akka-actor` instead as
`akka.event.jul.JavaLogger`. See @ref:[documentation](../logging.md#jul).
The `JavaLoggingAdapter` has also been deprecated, but not included in `akka-actor`.
Feel free to copy the source into your project or create a separate library outside of Akka.
@ -640,7 +636,7 @@ a separate library outside of Akka.
### ReliableProxy
`ReliableProxy` has been deprecated. Use @ref:[At-Least-Once Delivery](../persistence.md#at-least-once-delivery) instead. `ReliableProxy`
was only intended as an example and doesn't have full production quality. If there is demand
for a lightweight (non-durable) at-least once delivery mechanism we are open for a design discussion.
@ -2,7 +2,7 @@
@@@ note
This page describes the @ref:[may change](common/may-change.md) remoting subsystem, codenamed *Artery* that will eventually replace the
old remoting implementation. For the current stable remoting system please refer to @ref:[Remoting](remoting.md).
@@@
@ -87,7 +87,7 @@ listening for connections and handling messages as not to interfere with other a
@@@
The example above only illustrates the bare minimum of properties you have to add to enable remoting.
All settings are described in [Remote Configuration](#remote-configuration-artery).
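As a hedged sketch, the bare minimum usually boils down to something like the following (host and port are placeholders):

```scala
import com.typesafe.config.ConfigFactory

val arteryConfig = ConfigFactory.parseString("""
  akka.actor.provider = remote
  akka.remote.artery {
    enabled = on
    canonical.hostname = "127.0.0.1"
    canonical.port = 25520
  }
""")
```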
@@@ note
@ -111,7 +111,7 @@ real network.
In cases where Network Address Translation (NAT) is used or other network bridging is involved, it is important
to configure the system so that it understands that there is a difference between its externally visible, canonical
address and the host-port pair that is used to listen for connections. See [Akka behind NAT or in a Docker container](#remote-configuration-nat-artery)
for details.
## Acquiring references to remote actors
@ -166,7 +166,7 @@ and automatically reply to with a `ActorIdentity` message containing the
the `ActorSelection`, which returns a `Future` of the matching
`ActorRef`.
For more details on how actor addresses and paths are formed and used, please refer to @ref:[Actor References, Paths and Addresses](general/addressing.md).
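A small sketch of resolving a remote reference, assuming an `ActorSystem` named `system` and a purely illustrative remote path:

```scala
import akka.actor.{ ActorRef, ActorSelection }
import akka.util.Timeout
import scala.concurrent.Future
import scala.concurrent.duration._

val selection: ActorSelection =
  system.actorSelection("akka://sample@127.0.0.1:25520/user/service")

implicit val timeout: Timeout = Timeout(5.seconds)
// Resolves the ActorRef of the current incarnation behind the path,
// failing the Future if no actor lives there.
val ref: Future[ActorRef] = selection.resolveOne()
```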
@@@ note
@ -274,11 +274,11 @@ Actor classes not included in the whitelist will not be allowed to be remote dep
An `ActorSystem` should not be exposed via Akka Remote (Artery) over plain Aeron/UDP to an untrusted network (e.g. internet).
It should be protected by network security, such as a firewall. There is currently no support for encryption with Artery
so if network security is not considered enough protection the classic remoting with
@ref:[TLS and mutual authentication](remoting.md#remote-tls) should be used.
Best practice is that Akka remoting nodes should only be accessible from the adjacent network.
It is also security best practice to @ref:[disable the Java serializer](remoting-artery.md#disable-java-serializer-java-artery) because of
its multiple [known attack surfaces](https://community.hpe.com/t5/Security-Research/The-perils-of-Java-deserialization/ba-p/6838995).
### Untrusted Mode
@ -350,7 +350,7 @@ marking them `PossiblyHarmful` so that a client cannot forge them.
Akka remoting is using Aeron as underlying message transport. Aeron is using UDP and adds
among other things reliable delivery and session semantics, very similar to TCP. This means that
the order of the messages is preserved, which is needed for the @ref:[Actor message ordering guarantees](general/message-delivery-reliability.md#message-ordering).
Under normal circumstances all messages will be delivered but there are cases when messages
may not be delivered to the destination:
@ -359,7 +359,7 @@ may not be delivered to the destination:
* if serialization or deserialization of a message fails (only that message will be dropped)
* if an unexpected exception occurs in the remoting infrastructure
In short, Actor message delivery is “at-most-once” as described in @ref:[Message Delivery Reliability](general/message-delivery-reliability.md).
Some messages in Akka are called system messages and those cannot be dropped because that would result
in an inconsistent state between the systems. Such messages are used for essentially two features: remote death
@ -401,7 +401,7 @@ when the destination system has been restarted.
### Watching Remote Actors
Watching a remote actor is API wise not different than watching a local actor, as described in
@ref:[Lifecycle Monitoring aka DeathWatch](actors.md#deathwatch). However, it is important to note that, unlike in the local case, remoting has to handle
not only the case where a remote actor terminates gracefully, sending a system message to notify the watcher actor about
the event, but also the case where it is hosted on a system which stopped abruptly (crashed). These situations are handled
by the built-in failure detector.
@ -428,7 +428,7 @@ phi = -log10(1 - F(timeSinceLastHeartbeat))
where F is the cumulative distribution function of a normal distribution with mean
and standard deviation estimated from historical heartbeat inter-arrival times.
In the [Remote Configuration](#remote-configuration-artery) you can adjust the `akka.remote.watch-failure-detector.threshold`
to define when a *phi* value is considered to be a failure.
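For intuition, a self-contained sketch of the phi calculation, using a logistic approximation of the normal CDF (the approximation constants are a common erf-free formula and are not taken from this page):

```scala
import scala.math.{ exp, log10 }

// mean and stdDeviation are estimated from the history of
// heartbeat inter-arrival times.
def phi(timeSinceLastHeartbeat: Double, mean: Double, stdDeviation: Double): Double = {
  val y = (timeSinceLastHeartbeat - mean) / stdDeviation
  val cdf = 1.0 / (1.0 + exp(-y * (1.5976 + 0.070566 * y * y)))
  -log10(1.0 - cdf)
}

// With a mean of 1000 ms and a standard deviation of 100 ms, a heartbeat
// that is 400 ms overdue yields phi of roughly 4.7.
println(phi(1400, 1000, 100))
```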
A low `threshold` is prone to generate many false positives but ensures
@ -454,7 +454,7 @@ a standard deviation of 100 ms.
To be able to survive sudden abnormalities, such as garbage collection pauses and
transient network failures the failure detector is configured with a margin,
`akka.remote.watch-failure-detector.acceptable-heartbeat-pause`. You may want to
adjust the [Remote Configuration](#remote-configuration-artery) of this depending on your environment.
This is how the curve looks for `acceptable-heartbeat-pause` configured to
3 seconds.
@ -467,7 +467,7 @@ those actors are serializable. Failing to do so will cause the system to behave
For more information please see @ref:[Serialization](serialization.md).
<a id="remote-bytebuffer-serialization"></a>
### ByteBuffer based serialization
Artery introduces a new serialization mechanism which allows the `ByteBufferSerializer` to directly write into a
@ -543,7 +543,7 @@ The attempts are logged with the SECURITY marker.
Please note that this option does not stop you from manually invoking java serialization.
Please note that this means that you will have to configure different serializers which will be able to handle all of your
remote messages. Please refer to the @ref:[Serialization](serialization.md) documentation as well as [ByteBuffer based serialization](#remote-bytebuffer-serialization) to learn how to do this.
## Routers with Remote Destinations
@ -746,15 +746,15 @@ crashes unexpectedly.
for production systems.
The location of the file can be controlled via the *akka.remote.artery.advanced.flight-recorder.destination* setting (see
@ref:[akka-remote (artery)](general/configuration.md#config-akka-remote-artery) for details). By default, a file with the *.afr* extension is produced in the temporary
directory of the operating system. In cases where the flight recorder causes issues, it can be disabled by adding the
setting *akka.remote.artery.advanced.flight-recorder.enabled=off*, although this is not recommended.
<a id="remote-configuration-artery"></a>
## Remote Configuration
There are lots of configuration properties that are related to remoting in Akka. We refer to the
@ref:[reference configuration](general/configuration.md#config-akka-remote-artery) for more information.
@@@ note
@ -765,7 +765,7 @@ best done by using something like the following:
@@@
<a id="remote-configuration-nat-artery"></a>
### Akka behind NAT or in a Docker container
In setups involving Network Address Translation (NAT), Load Balancers or Docker
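A hedged sketch of the distinction; treat the `bind.*` keys as an assumption to verify against the reference configuration of your Akka version:

```scala
import com.typesafe.config.ConfigFactory

val natConfig = ConfigFactory.parseString("""
  # The address other nodes use to reach this system (externally visible).
  akka.remote.artery.canonical.hostname = "external.example.com"
  akka.remote.artery.canonical.port = 25520
  # The local interface the system actually listens on inside NAT/Docker.
  akka.remote.artery.bind.hostname = "0.0.0.0"
  akka.remote.artery.bind.port = 25520
""")
```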
@ -58,7 +58,7 @@ listening for connections and handling messages as not to interfere with other a
@@@
The example above only illustrates the bare minimum of properties you have to add to enable remoting.
All settings are described in [Remote Configuration](#remote-configuration).
## Types of Remote Interaction
@ -100,7 +100,7 @@ the `ActorSelection`, which returns a `Future` of the matching
@@@ note
For more details on how actor addresses and paths are formed and used, please refer to @ref:[Actor References, Paths and Addresses](general/addressing.md).
@@@
@ -191,7 +191,7 @@ you can advise the system to create a child on that remote node like so:
@@snip [RemoteDeploymentDocSpec.scala]($code$/scala/docs/remoting/RemoteDeploymentDocSpec.scala) { #deploy }
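The referenced snippet is not reproduced here; a hedged sketch of programmatic remote deployment looks roughly like this (`SampleActor` and the address are illustrative):

```scala
import akka.actor.{ Actor, ActorSystem, AddressFromURIString, Deploy, Props }
import akka.remote.RemoteScope

class SampleActor extends Actor {
  def receive = { case msg => sender() ! msg }
}

object RemoteDeployExample extends App {
  val system = ActorSystem("sample")
  val address = AddressFromURIString("akka.tcp://sample@127.0.0.1:2553")

  // The child is created on the remote node identified by the address.
  val ref = system.actorOf(
    Props[SampleActor].withDeploy(Deploy(scope = RemoteScope(address))),
    "remoteSample")
}
```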
<a id="remote-deployment-whitelist"></a>
### Remote deployment whitelist
As remote deployment can potentially be abused by both users and even attackers a whitelist feature
@ -232,7 +232,7 @@ is restarted. After a restart communication can be resumed again and the link ca
## Watching Remote Actors
Watching a remote actor is not different than watching a local actor, as described in
@ref:[Lifecycle Monitoring aka DeathWatch](actors.md#deathwatch).
### Failure Detector
@ -256,7 +256,7 @@ phi = -log10(1 - F(timeSinceLastHeartbeat))
where F is the cumulative distribution function of a normal distribution with mean
and standard deviation estimated from historical heartbeat inter-arrival times.
In the [Remote Configuration](#remote-configuration) you can adjust the `akka.remote.watch-failure-detector.threshold`
to define when a *phi* value is considered to be a failure.
A low `threshold` is prone to generate many false positives but ensures
@ -282,7 +282,7 @@ a standard deviation of 100 ms.
To be able to survive sudden abnormalities, such as garbage collection pauses and
transient network failures the failure detector is configured with a margin,
`akka.remote.watch-failure-detector.acceptable-heartbeat-pause`. You may want to
adjust the [Remote Configuration](#remote-configuration) of this depending on your environment.
This is how the curve looks for `acceptable-heartbeat-pause` configured to
3 seconds.
@ -295,7 +295,7 @@ those actors are serializable. Failing to do so will cause the system to behave
For more information please see @ref:[Serialization](serialization.md).
<a id="disable-java-serializer"></a>
### Disabling the Java Serializer
Since the `2.4.11` release of Akka it is possible to entirely disable the default Java Serialization mechanism.
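In configuration this amounts to something like the following sketch (key names as remembered from the reference configuration; verify for your version):

```scala
import com.typesafe.config.ConfigFactory

val noJavaSerialization = ConfigFactory.parseString("""
  akka.actor.allow-java-serialization = off
  # Enables the additional bindings that make many built-in messages
  # serializable without Java serialization.
  akka.actor.enable-additional-serialization-bindings = on
""")
```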
@ -372,7 +372,7 @@ The attempts are logged with the SECURITY marker.
Please note that this option does not stop you from manually invoking java serialization.
Please note that this means that you will have to configure different serializers which will be able to handle all of your
remote messages. Please refer to the @ref:[Serialization](serialization.md) documentation as well as @ref:[ByteBuffer based serialization](remoting-artery.md#remote-bytebuffer-serialization) to learn how to do this.
## Routers with Remote Destinations
@ -393,7 +393,7 @@ This configuration setting will send messages to the defined remote actor paths.
It requires that you create the destination actors on the remote nodes with matching paths.
That is not done by the router.
<a id="remote-sample"></a>
## Remoting Sample
You can download a ready to run [remoting sample](@exampleCodeService@/akka-samples-remote-scala)
@ -456,17 +456,17 @@ To be notified when the remoting subsystem has been shut down, listen to `Remot
To intercept generic remoting related errors, listen to `RemotingErrorEvent` which holds the `Throwable` cause.
<a id="remote-security"></a>
## Remote Security
An `ActorSystem` should not be exposed via Akka Remote over plain TCP to an untrusted network (e.g. internet).
It should be protected by network security, such as a firewall. If that is not considered enough protection
[TLS with mutual authentication](#remote-tls) should be enabled.
It is also security best-practice to [disable the Java serializer](#disable-java-serializer) because of
its multiple [known attack surfaces](https://community.hpe.com/t5/Security-Research/The-perils-of-Java-deserialization/ba-p/6838995).
<a id="remote-tls"></a>
### Configuring SSL/TLS for Akka Remoting
SSL can be used as the remote transport by adding `akka.remote.netty.ssl` to the `enabled-transports` configuration section.
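A configuration sketch with placeholder stores and passwords (the algorithm and RNG choices are illustrative, not recommendations):

```scala
import com.typesafe.config.ConfigFactory

val tlsConfig = ConfigFactory.parseString("""
  akka.remote.enabled-transports = ["akka.remote.netty.ssl"]
  akka.remote.netty.ssl.security {
    key-store = "/path/to/keystore.jks"
    key-store-password = "changeme"
    key-password = "changeme"
    trust-store = "/path/to/truststore.jks"
    trust-store-password = "changeme"
    protocol = "TLSv1.2"
    enabled-algorithms = ["TLS_RSA_WITH_AES_128_CBC_SHA"]
    random-number-generator = "AES128CounterSecureRNG"
  }
""")
```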
@ -521,7 +521,7 @@ Creating and working with keystores and certificates is well documented in the
[Generating X.509 Certificates](http://typesafehub.github.io/ssl-config/CertificateGeneration.html#using-keytool)
section of Lightbend's SSL-Config library.
Since Akka remoting is inherently @ref:[peer-to-peer](general/remoting.md#symmetric-communication), both the key-store and the trust-store
need to be configured on each remoting node participating in the cluster.
The official [Java Secure Socket Extension documentation](http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html)
@ -539,7 +539,7 @@ the other (the "server").
Note that if TLS is enabled with mutual authentication there is still a risk that an attacker can gain access to a valid certificate
by compromising any node with certificates issued by the same internal PKI tree.
See also a description of the settings in the [Remote Configuration](#remote-configuration) section.
@@@ note
@ -572,11 +572,9 @@ as a marker trait to user-defined messages.
Untrusted mode does not give full protection against attacks by itself.
It makes it slightly harder to perform malicious or unintended actions but
it should be complemented with [disabled Java serializer](#disable-java-serializer).
Additional protection can be achieved when running in an untrusted network by
network security (e.g. firewalls) and/or enabling [TLS with mutual authentication](#remote-tls).
@@@
@ -615,11 +613,11 @@ marking them `PossiblyHarmful` so that a client cannot forge them.
@@@
<a id="remote-configuration"></a>
## Remote Configuration
There are lots of configuration properties that are related to remoting in Akka. We refer to the
@ref:[reference configuration](general/configuration.md#config-akka-remote) for more information.
@@@ note
@ -6,9 +6,9 @@ routees yourselves or use a self contained router actor with configuration capab
Different routing strategies can be used, according to your application's needs. Akka comes with
several useful routing strategies right out of the box. But, as you will see in this chapter, it is
also possible to [create your own](#custom-router).
<a id="simple-router"></a>
## A Simple Router
The following example illustrates how to use a `Router` and manage the routees from within an actor.
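A sketch in the spirit of the referenced example (`Worker` is a stand-in routee):

```scala
import akka.actor.{ Actor, Props, Terminated }
import akka.routing.{ ActorRefRoutee, RoundRobinRoutingLogic, Router }

class Worker extends Actor {
  def receive = { case work => // process the work
  }
}

class Master extends Actor {
  private var router = {
    val routees = Vector.fill(5) {
      val r = context.actorOf(Props[Worker])
      context.watch(r)
      ActorRefRoutee(r)
    }
    Router(RoundRobinRoutingLogic(), routees)
  }

  def receive = {
    case Terminated(a) =>
      // Replace a terminated routee to keep the pool at full strength.
      router = router.removeRoutee(a)
      val r = context.actorOf(Props[Worker])
      context.watch(r)
      router = router.addRoutee(r)
    case work =>
      router.route(work, sender())
  }
}
```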
@ -40,9 +40,9 @@ outside of actors.
@@@ note
In general, any message sent to a router will be sent onwards to its routees, but there is one exception.
The special [Broadcast Messages](#broadcast-messages) will send to *all* of a router's routees.
However, do not use [Broadcast Messages](#broadcast-messages) when you use [BalancingPool](#balancing-pool) for routees
as described in [Specially Handled Messages](#router-special-messages).
@@@
@ -72,13 +72,13 @@ original sender, not to the router actor.
@@@ note
In general, any message sent to a router will be sent onwards to its routees, but there are a
few exceptions. These are documented in the [Specially Handled Messages](#router-special-messages) section below.
@@@
### Pool
The following code and configuration snippets show how to create a [round-robin](#round-robin-router) router that forwards messages to five `Worker` routees. The
routees will be created as the router's children.
@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #config-round-robin-pool }
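The equivalent pool defined in code is a one-liner; a sketch, inside an actor, with `Worker` as in the example above:

```scala
import akka.actor.{ ActorRef, Props }
import akka.routing.RoundRobinPool

val router: ActorRef =
  context.actorOf(RoundRobinPool(5).props(Props[Worker]), "router")
```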
@ -103,7 +103,7 @@ deployment requires the `akka-remote` module to be included in the classpath.
#### Senders
By default, when a routee sends a message, it will @ref:[implicitly set itself as the sender
](actors.md#actors-tell-sender).
@@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #reply-without-sender }
@ -190,7 +190,7 @@ of the router actor.
@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #create-parent }
<a id="round-robin-router"></a>
### RoundRobinPool and RoundRobinGroup
Routes in a [round-robin](http://en.wikipedia.org/wiki/Round-robin) fashion to its routees.
@ -239,7 +239,7 @@ RandomGroup defined in code:
@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #paths #random-group-2 }
<a id="balancing-pool"></a>
### BalancingPool
A Router that will try to redistribute work from busy routees to idle routees.
@ -262,8 +262,8 @@ a restriction on the message queue implementation as BalancingPool does.
@@@ note
Do not use [Broadcast Messages](#broadcast-messages) when you use [BalancingPool](#balancing-pool) for routers,
as described in [Specially Handled Messages](#router-special-messages).
@@@
@ -368,7 +368,7 @@ BroadcastGroup defined in code:
Broadcast routers always broadcast *every* message to their routees. If you do not want to
broadcast every message, then you can use a non-broadcasting router and use
[Broadcast Messages](#broadcast-messages) as needed.
@@@
@ -488,7 +488,7 @@ ConsistentHashingGroup defined in code:
`virtual-nodes-factor` is the number of virtual nodes per routee that is used in the
consistent hash node ring to make the distribution more uniform.
<a id="router-special-messages"></a>
## Specially Handled Messages
Most messages sent to router actors will be forwarded according to the routers' routing logic.
@ -496,9 +496,9 @@ However there are a few types of messages that have special behavior.
Note that these special messages, except for the `Broadcast` message, are only handled by
self contained router actors and not by the `akka.routing.Router` component described
in [A Simple Router](#simple-router).
<a id="broadcast-messages"></a>
### Broadcast Messages
A `Broadcast` message can be used to send a message to *all* of a router's routees. When a router
@ -516,8 +516,8 @@ routees. It is up to each routee actor to handle the received payload message.
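In code this is just a wrapper, sketched here assuming `router` is the `ActorRef` of a router actor:

```scala
import akka.routing.Broadcast

// Every routee receives the unwrapped "hello" payload.
router ! Broadcast("hello")
```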
@@@ note
Do not use [Broadcast Messages](#broadcast-messages) when you use [BalancingPool](#balancing-pool) for routers.
Routees on [BalancingPool](#balancing-pool) share the same mailbox instance, thus some routees can
possibly get the broadcast message multiple times, while other routees get no broadcast message.
@@@
@ -525,7 +525,7 @@ possibly get the broadcast message multiple times, while other routees get no br
### PoisonPill Messages
A `PoisonPill` message has special handling for all actors, including for routers. When any actor
receives a `PoisonPill` message, that actor will be stopped. See the @ref:[PoisonPill](actors.md#poison-pill)
documentation for details.
@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #poisonPill }
@ -538,7 +538,7 @@ However, a `PoisonPill` message sent to a router may still affect its routees, b
stop the router and when the router stops it also stops its children. Stopping children is normal
actor behavior. The router will stop routees that it has created as children. Each child will
process its current message and then stop. This may lead to some messages being unprocessed.
See the documentation on @ref:[Stopping actors](actors.md#stopping-actors) for more information.
If you wish to stop a router and its routees, but you would like the routees to first process all
the messages currently in their mailboxes, then you should not send a `PoisonPill` message to the
@ -550,10 +550,8 @@ routees aren't children of the router, i.e. even routees programmatically provid
With the code shown above, each routee will receive a `PoisonPill` message. Each routee will
continue to process its messages as normal, eventually processing the `PoisonPill`. This will
cause the routee to stop. After all routees have stopped the router will itself be stopped
automatically unless it is a dynamic router, e.g. using a resizer.
@@@ note
@ -565,7 +563,7 @@ discusses in more detail how `PoisonPill` messages can be used to shut down rout
### Kill Messages
`Kill` messages are another type of message that has special handling. See
@ref:[Killing an Actor](actors.md#killing-actors) for general information about how actors handle `Kill` messages.
When a `Kill` message is sent to a router the router processes the message internally, and does
*not* send it on to its routees. The router will throw an `ActorKilledException` and fail. It
@ -598,7 +596,7 @@ an ordinary message you are not guaranteed that the routees have been changed wh
is routed. If you need to know when the change has been applied you can send `AddRoutee` followed by `GetRoutees`
and when you receive the `Routees` reply you know that the preceding change has been applied.
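Sketched, with `router` and `worker` as assumed references:

```scala
import akka.routing.{ ActorRefRoutee, AddRoutee, GetRoutees }

router ! AddRoutee(ActorRefRoutee(worker))
// The Routees reply to GetRoutees confirms the change has been applied.
router ! GetRoutees
```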
<a id="resizable-routers"></a>
## Dynamically Resizable Pool
Most pools can be used with a fixed number of routees or with a resize strategy to adjust the number
@ -619,7 +617,7 @@ Pool with default resizer defined in configuration:
@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #resize-pool-1 }
Several more configuration options are available and described in `akka.actor.deployment.default.resizer`
section of the reference @ref:[configuration](general/configuration.md).
Pool with resizer defined in code:
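In outline (a sketch; `Worker` as above, inside an actor):

```scala
import akka.actor.{ ActorRef, Props }
import akka.routing.{ DefaultResizer, RoundRobinPool }

val resizer = DefaultResizer(lowerBound = 2, upperBound = 15)
val router: ActorRef =
  context.actorOf(RoundRobinPool(5, Some(resizer)).props(Props[Worker]), "resizable")
```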
@ -661,7 +659,7 @@ Pool with `OptimalSizeExploringResizer` defined in configuration:
@@snip [RouterDocSpec.scala]($code$/scala/docs/routing/RouterDocSpec.scala) { #optimal-size-exploring-resize-pool }
Several more configuration options are available and described in `akka.actor.deployment.default.optimal-size-exploring-resizer`
section of the reference @ref:[configuration](general/configuration.md).
@@@ note
@ -675,7 +673,7 @@ Dispatchers](#configuring-dispatchers) for more information.
@@@
<a id="router-design"></a>
## How Routing is Designed within Akka
On the surface routers look like normal actors, but they are actually implemented differently.
@ -694,7 +692,7 @@ routers were implemented with normal actors. Fortunately all of this complexity
consumers of the routing API. However, it is something to be aware of when implementing your own
routers.
<a id="custom-router"></a>
## Custom Router
You can create your own router should you not find any of the ones provided by Akka sufficient for your needs.
@ -702,7 +700,7 @@ In order to roll your own router you have to fulfill certain criteria which are
Before creating your own router you should consider whether a normal actor with router-like
behavior might do the job just as well as a full-blown router. As explained
[above](#router-design), the primary benefit of routers over normal actors is their
higher performance. But they are somewhat more complicated to write than normal actors. Therefore if
lower maximum throughput is acceptable in your application you may wish to stick with traditional
actors. This section, however, assumes that you wish to get maximum performance and so demonstrates
@ -725,7 +723,7 @@ A unit test of the routing logic:
@@snip [CustomRouterDocSpec.scala]($code$/scala/docs/routing/CustomRouterDocSpec.scala) { #unit-test-logic }
You could stop here and use the `RedundancyRoutingLogic` with an `akka.routing.Router`
as described in [A Simple Router](#simple-router).
Let us continue and make this into a self-contained, configurable router actor.
@ -20,8 +20,8 @@ Java deserialization is [known to be vulnerable](https://community.hpe.com/t5/Se
Akka Remoting uses the Java serializer as its default configuration, which makes it vulnerable in its default form. The documentation of how to disable the Java serializer was not complete. The documentation of how to enable mutual authentication was missing (only described in reference.conf).
To protect against such attacks the system should be updated to Akka *2.4.17* or later and be configured with
@ref:[disabled Java serializer](../remoting.md#disable-java-serializer). Additional protection can be achieved when running in an
untrusted network by enabling @ref:[TLS with mutual authentication](../remoting.md#remote-tls).
Please subscribe to the [akka-security](https://groups.google.com/forum/#!forum/akka-security) mailing list to be notified promptly about future security issues.
@ -18,9 +18,9 @@ to ensure that a fix can be provided without delay.
## Security Related Documentation
* @ref:[Disabling the Java Serializer](../remoting.md#disable-java-serializer)
* @ref:[Remote deployment whitelist](../remoting.md#remote-deployment-whitelist)
* @ref:[Remote Security](../remoting.md#remote-security)
## Fixed Security Vulnerabilities
@ -92,7 +92,7 @@ bytes to different objects.
Then you only need to fill in the blanks, bind it to a name in your configuration and then
list which classes should be serialized using it.
<a id="string-manifest-serializer"></a>
### Serializer with String Manifest
The `Serializer` illustrated above supports a class based manifest (type hint).
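A minimal sketch of the string-manifest variant (`Customer` and the byte format are purely illustrative):

```scala
import akka.serialization.SerializerWithStringManifest

final case class Customer(name: String)

class CustomerSerializer extends SerializerWithStringManifest {
  // Must be unique among all serializers in the ActorSystem.
  override def identifier: Int = 1234567

  private val CustomerManifest = "customer"

  override def manifest(obj: AnyRef): String = obj match {
    case _: Customer => CustomerManifest
  }

  override def toBinary(obj: AnyRef): Array[Byte] = obj match {
    case Customer(name) => name.getBytes("UTF-8")
  }

  override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
    manifest match {
      case CustomerManifest => Customer(new String(bytes, "UTF-8"))
    }
}
```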
@ -528,7 +528,7 @@ states (for example `Try` in Scala).
These stages can transform the rate of incoming elements since there are stages that emit multiple elements for a
single input (e.g. `mapConcat`) or consume multiple elements before emitting one output (e.g. `filter`).
However, these rate transformations are data-driven, i.e. it is the incoming elements that define how the
rate is affected. This is in contrast with [detached stages](#detached-stages-overview) which can change their processing behavior
depending on being backpressured by downstream or not.
### map
@ -986,7 +986,7 @@ Delay every element passed through with a specific duration.
**completes** when upstream completes and buffered elements have been drained
<a id="detached-stages-overview"></a>
## Backpressure aware stages
These stages are aware of the backpressure provided by their downstreams and able to adapt their behavior to that signal.
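For example, `conflate` summarises elements while downstream backpressures; a sketch with a deliberately slow (throttled) consumer:

```scala
import akka.stream.ThrottleMode
import akka.stream.scaladsl.{ Sink, Source }
import scala.concurrent.duration._

// Fast upstream elements are summed together whenever the throttled
// downstream is not ready to receive them one by one.
val graph = Source(1 to 1000)
  .conflate(_ + _)
  .throttle(10, 1.second, 10, ThrottleMode.Shaping)
  .to(Sink.foreach(println))
```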
@ -232,7 +232,7 @@ needs to return a different object that provides the necessary interaction capab
Unlike actors though, each of the processing stages might provide a materialized value, so when we compose multiple
stages or modules, we need to combine the materialized value as well (there are default rules which make this easier,
for example *to()* and *via()* take care of the most common case of taking the materialized value to the left.
See @ref:[Combining materialized values](stream-flows-and-basics.md#flow-combine-mat) for details). We demonstrate how this works by a code example and a diagram which
graphically demonstrates what is happening.
The propagation of the individual materialized values from the enclosed modules towards the top will look like this:
@ -272,7 +272,7 @@ the `Future[Sink]` part, and wraps the other two values in a custom case class `
@@@ note
The nested structure in the above example is not necessary for combining the materialized values, it just
demonstrates how the two features work together. See @ref:[Combining materialized values](stream-flows-and-basics.md#flow-combine-mat) for further examples
of combining materialized values without nesting and hierarchy involved.
@@@
@ -283,7 +283,7 @@ We have seen that we can use `named()` to introduce a nesting level in the fluid
`create()` from `GraphDSL`). Apart from having the effect of adding a nesting level, `named()` is actually
a shorthand for calling `withAttributes(Attributes.name("someName"))`. Attributes provide a way to fine-tune certain
aspects of the materialized running entity. For example buffer sizes for asynchronous stages can be controlled via
attributes (see @ref:[Buffers for asynchronous stages](stream-rate.md#async-stream-buffers)). When it comes to hierarchic composition, attributes are inherited
by nested modules, unless they override them with a custom value.
The code below, a modification of an earlier example, sets the `inputBuffer` attribute on certain modules, but not
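In miniature, setting such an attribute looks like this sketch:

```scala
import akka.stream.Attributes
import akka.stream.scaladsl.Flow

// Only this section of the graph gets the smaller input buffer;
// nested modules inherit it unless they override it themselves.
val tweaked = Flow[Int].map(_ * 2)
  .withAttributes(Attributes.inputBuffer(initial = 1, max = 1))
```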
@ -11,7 +11,7 @@ This part also serves as supplementary material for the main body of documentati
open while reading the manual and look for examples demonstrating various streaming concepts
as they appear in the main body of documentation.
If you need a quick reference of the available processing stages used in the recipes see @ref:[stages overview](stages-overview.md).
## Working with Flows
@ -77,7 +77,7 @@ demand comes in and then reset the stage state. It will then complete the stage.
@@snip [RecipeDigest.scala]($code$/scala/docs/stream/cookbook/RecipeDigest.scala) { #calculating-digest }
<a id="cookbook-parse-lines"></a>
### Parsing lines from a stream of ByteStrings
**Situation:** A stream of bytes is given as a stream of `ByteString` s containing lines terminated by line ending
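The recipe is built around `Framing.delimiter`; a sketch (the frame length limit and delimiter are illustrative):

```scala
import akka.NotUsed
import akka.stream.scaladsl.{ Flow, Framing }
import akka.util.ByteString

// Splits the incoming ByteStrings on "\r\n" and decodes each frame
// as UTF-8; maximumFrameLength guards against unbounded buffering.
val lineParser: Flow[ByteString, String, NotUsed] =
  Framing.delimiter(ByteString("\r\n"), maximumFrameLength = 100, allowTruncation = true)
    .map(_.utf8String)
```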
@ -181,7 +181,7 @@ element. If this function would return a pair of the two argument it would be ex
@@snip [RecipeManualTrigger.scala]($code$/scala/docs/stream/cookbook/RecipeManualTrigger.scala) { #manually-triggered-stream-zipwith }
<a id="cookbook-balance"></a>
### Balancing jobs to a fixed pool of workers
**Situation:** Given a stream of jobs and a worker process expressed as a `Flow` create a pool of workers
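A sketch of the worker-pool recipe (the `balancer` helper mirrors the general shape of the cookbook solution):

```scala
import akka.NotUsed
import akka.stream.FlowShape
import akka.stream.scaladsl.{ Balance, Flow, GraphDSL, Merge }

// Fans jobs out to workerCount copies of the worker flow and merges
// the results back into a single stream.
def balancer[In, Out](worker: Flow[In, Out, Any], workerCount: Int): Flow[In, Out, NotUsed] =
  Flow.fromGraph(GraphDSL.create() { implicit b =>
    import GraphDSL.Implicits._
    val balance = b.add(Balance[In](workerCount, waitForAllDownstreams = true))
    val merge = b.add(Merge[Out](workerCount))
    for (_ <- 1 to workerCount)
      balance ~> worker.async ~> merge
    FlowShape(balance.in, merge.out)
  })
```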
@ -13,7 +13,7 @@ might be easy to make with a custom `GraphStage`
@@@
<a id="graphstage"></a>
## Custom processing with GraphStage
The `GraphStage` abstraction can be used to create arbitrary graph processing stages with any number of input
@ -286,7 +286,7 @@ In that sense, it serves a very similar purpose as `ActorLogging` does for Actor
Please note that you can always simply use a logging library directly inside a Stage.
Make sure to use an asynchronous appender however, to not accidentally block the stage when writing to files etc.
See @ref:[Using the SLF4J API directly](../logging.md#slf4j-directly) for more details on setting up async appenders in SLF4J.
@@@
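For reference, the smallest useful `GraphStage` is a source; a sketch that emits increasing integers on demand:

```scala
import akka.stream.{ Attributes, Outlet, SourceShape }
import akka.stream.stage.{ GraphStage, GraphStageLogic, OutHandler }

class NumbersSource extends GraphStage[SourceShape[Int]] {
  val out: Outlet[Int] = Outlet("NumbersSource")
  override val shape: SourceShape[Int] = SourceShape(out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) {
      // All mutable state lives inside the logic, one instance per
      // materialization.
      private var counter = 1

      setHandler(out, new OutHandler {
        override def onPull(): Unit = {
          push(out, counter)
          counter += 1
        }
      })
    }
}
```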
@ -340,7 +340,7 @@ when a future completes:
### Integration with actors
**This section is a stub and will be extended in the next release**
**This is a @ref:[may change](../common/may-change.md) feature**
It is possible to acquire an ActorRef that can be addressed from the outside of the stage, similarly to how
`AsyncCallback` allows injecting asynchronous events into a stage logic. This reference can be obtained
@ -1,6 +1,6 @@
# Dynamic stream handling
<a id="kill-switch"></a>
## Controlling graph completion with KillSwitch
A `KillSwitch` allows the completion of graphs of `FlowShape` from the outside. It consists of a flow element that
@ -17,7 +17,7 @@ Graph completion is performed by both
A `KillSwitch` can control the completion of one or multiple streams, and therefore comes in two different flavours.
<a id="unique-kill-switch"></a>
### UniqueKillSwitch
`UniqueKillSwitch` allows you to control the completion of **one** materialized `Graph` of `FlowShape`. Refer to the
@ -31,7 +31,7 @@ below for usage examples.
@@snip [KillSwitchDocSpec.scala]($code$/scala/docs/stream/KillSwitchDocSpec.scala) { #unique-abort }
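In outline (a sketch, assuming an implicit materializer is in scope):

```scala
import akka.stream.KillSwitches
import akka.stream.scaladsl.{ Keep, Sink, Source }
import scala.concurrent.duration._

val (killSwitch, done) =
  Source.tick(0.seconds, 100.millis, "tick")
    .viaMat(KillSwitches.single[String])(Keep.right)
    .toMat(Sink.ignore)(Keep.both)
    .run()

killSwitch.shutdown() // completes the stream from the outside
// killSwitch.abort(new RuntimeException("boom")) would fail it instead
```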
<a id="shared-kill-switch"></a>
### SharedKillSwitch
A `SharedKillSwitch` allows you to control the completion of an arbitrary number of graphs of `FlowShape`. It can be
@ -122,6 +122,6 @@ than 3 seconds are forcefully removed (and their stream failed).
The resulting Flow now has a type of `Flow[String, String, UniqueKillSwitch]` representing a publish-subscribe
channel which can be used any number of times to attach new producers or consumers. In addition, it materializes
to a `UniqueKillSwitch` (see [UniqueKillSwitch](#unique-kill-switch)) that can be used to deregister a single user externally:
@@snip [HubsDocSpec.scala]($code$/scala/docs/stream/HubsDocSpec.scala) { #pub-sub-4 }
@ -36,7 +36,7 @@ elements that cause the division by zero are effectively dropped.
@@@ note
Be aware that dropping elements may result in deadlocks in graphs with
cycles, as explained in @ref:[Graph cycles, liveness and deadlocks](stream-graphs.md#graph-cycles).
@@@
@ -1,6 +1,6 @@
# Basics and working with Flows
<a id="core-concepts"></a>
## Core concepts
Akka Streams is a library to process and transfer a sequence of elements using bounded buffer space. This
@ -36,7 +36,7 @@ is running.
Processing Stage
: The common name for all building blocks that build up a Graph.
Examples of a processing stage would be operations like `map()`, `filter()`, custom `GraphStage` s and graph
junctions like `Merge` or `Broadcast`. For the full list of built-in processing stages see @ref:[stages overview](stages-overview.md)
When we talk about *asynchronous, non-blocking backpressure* we mean that the processing stages available in Akka
@ -45,7 +45,7 @@ will use asynchronous means to slow down a fast producer, without blocking its t
design, since entities that need to wait (a fast producer waiting on a slow consumer) will not block the thread but
can hand it back for further use to an underlying thread-pool.
<a id="defining-and-running-streams"></a>
## Defining and running streams
Linear processing pipelines can be expressed in Akka Streams using the following core abstractions:
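The abstractions referred to are `Source`, `Flow`, `Sink` and `RunnableGraph`; in miniature they compose like this sketch:

```scala
import akka.Done
import akka.stream.scaladsl.{ Flow, Keep, RunnableGraph, Sink, Source }
import scala.concurrent.Future

val source = Source(1 to 100)
val flow = Flow[Int].map(_.toString)
val sink = Sink.foreach[String](println)

// Nothing runs yet: this is only a blueprint until it is materialized.
val runnable: RunnableGraph[Future[Done]] =
  source.via(flow).toMat(sink)(Keep.right)
```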
@ -135,7 +135,7 @@ In accordance to the Reactive Streams specification ([Rule 2.13](https://github.
Akka Streams do not allow `null` to be passed through the stream as an element. In case you want to model the concept
of absence of a value we recommend using `scala.Option` or `scala.util.Either`.
<a id="back-pressure-explained"></a>
## Back-pressure explained
Akka Streams implement an asynchronous non-blocking back-pressure protocol standardised by the [Reactive Streams](http://reactive-streams.org/)
@ -145,7 +145,7 @@ The user of the library does not have to write any explicit back-pressure handli
and dealt with automatically by all of the provided Akka Streams processing stages. It is possible however to add
explicit buffer stages with overflow strategies that can influence the behaviour of the stream. This is especially important
in complex processing graphs which may even contain loops (which *must* be treated with very special
care, as explained in @ref:[Graph cycles, liveness and deadlocks](stream-graphs.md#graph-cycles)).
The back pressure protocol is defined in terms of the number of elements a downstream `Subscriber` is able to receive
and buffer, referred to as `demand`.
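An explicit buffer stage with an overflow strategy, as mentioned above, looks like this in miniature:

```scala
import akka.stream.OverflowStrategy
import akka.stream.scaladsl.Source

// Keeps at most 10 elements in the buffer and drops the oldest
// buffered element when a faster upstream overflows it.
val buffered = Source(1 to 1000)
  .buffer(10, OverflowStrategy.dropHead)
```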
@ -160,7 +160,7 @@ different Reactive Streams implementations.
Akka Streams implements these concepts as `Source`, `Flow` (referred to as `Processor` in Reactive Streams)
and `Sink` without exposing the Reactive Streams interfaces directly.
If you need to integrate with other Reactive Stream libraries read @ref:[Integrating with Reactive Streams](stream-integrations.md#reactive-streams-integration).
@@@
@ -202,7 +202,7 @@ it will have to abide to this back-pressure by applying one of the below strateg
As we can see, this scenario effectively means that the `Subscriber` will *pull* the elements from the Publisher;
this mode of operation is referred to as pull-based back-pressure.
<a id="stream-materialization"></a>
## Stream Materialization
When constructing flows and graphs in Akka Streams think of them as preparing a blueprint, an execution plan.
@ -226,7 +226,7 @@ yet will materialize that stage multiple times.
@@@
<a id="operator-fusion"></a>
### Operator Fusion
By default Akka Streams will fuse the stream operators. This means that the processing steps of a flow or
@ -275,7 +275,7 @@ The new fusing behavior can be disabled by setting the configuration parameter `
In that case you can still manually fuse those graphs which shall run on fewer Actors. With the exception of the
`SslTlsStage` and the `groupBy` operator all built-in processing stages can be fused.
<a id="flow-combine-mat"></a>
### Combining materialized values
Since every processing stage in Akka Streams can provide a materialized value after being materialized, it is necessary
@ -287,7 +287,7 @@ resulting values. Some examples of using these combiners are illustrated in the
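A minimal sketch of one such combiner, keeping the `Sink`'s materialized value with `Keep.right`:

```scala
import akka.stream.scaladsl.{ Keep, RunnableGraph, Sink, Source }
import scala.concurrent.Future

// Keep.right selects the Sink's Future[Int] instead of the
// Source's NotUsed as the materialized value of the whole graph.
val runnable: RunnableGraph[Future[Int]] =
  Source(1 to 10).toMat(Sink.fold[Int, Int](0)(_ + _))(Keep.right)
```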
@@@ note
In Graphs it is possible to access the materialized value from inside the stream processing graph. For details see @ref:[Accessing the materialized value inside the Graph](stream-graphs.md#graph-matvalue).
@@@
@ -11,14 +11,14 @@ Some graph operations which are common enough and fit the linear style of Flows,
streams, such that the second one is consumed after the first one has completed), may have shorthand methods defined on
`Flow` or `Source` themselves, however you should keep in mind that those are also implemented as graph junctions.
<a id="graph-dsl"></a>
## Constructing Graphs
Graphs are built from simple Flows which serve as the linear connections within the graphs as well as junctions
which serve as fan-in and fan-out points for Flows. Thanks to the junctions having meaningful types based on their behaviour
and being explicit elements, they should be rather straightforward to use.
Akka Streams currently provide these junctions (for a detailed list see @ref:[stages overview](stages-overview.md)):
* **Fan-out**
@ -73,7 +73,7 @@ is passed to it and return the inlets and outlets of the resulting copy so that
Another alternative is to pass existing graphs—of any shape—into the factory method that produces a
new graph. The difference between these approaches is that importing using `builder.add(...)` ignores the
materialized value of the imported graph while importing via the factory method allows its inclusion;
for more details see @ref:[Stream Materialization](stream-flows-and-basics.md#stream-materialization).
In the example below we prepare a graph that consists of two parallel streams,
in which we re-use the same instance of `Flow`, yet it will properly be
@ -81,7 +81,7 @@ materialized as two connections between the corresponding Sources and Sinks:
@@snip [GraphDSLDocSpec.scala]($code$/scala/docs/stream/GraphDSLDocSpec.scala) { #graph-dsl-reusing-a-flow }
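A sketch of the same idea, adding one `Flow` instance to the graph twice:

```scala
import akka.stream.ClosedShape
import akka.stream.scaladsl.{ Broadcast, Flow, GraphDSL, Merge, RunnableGraph, Sink, Source }

val sharedFlow = Flow[Int].map(_ * 2)

// The single sharedFlow value is materialized as two independent
// connections between the broadcast and the merge.
val graph = RunnableGraph.fromGraph(GraphDSL.create() { implicit builder =>
  import GraphDSL.Implicits._
  val bcast = builder.add(Broadcast[Int](2))
  val merge = builder.add(Merge[Int](2))
  Source(1 to 10) ~> bcast
  bcast ~> sharedFlow ~> merge
  bcast ~> sharedFlow ~> merge
  merge ~> Sink.ignore
  ClosedShape
})
```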
<a id="partial-graph-dsl"></a>
## Constructing and combining Partial Graphs
Sometimes it is not possible (or needed) to construct the entire computation graph in one place, but instead construct
@ -116,7 +116,7 @@ A partial graph also verifies that all ports are either connected or part of the
@@@
<a id="constructing-sources-sinks-flows-from-partial-graphs"></a>
## Constructing Sources, Sinks and Flows from Partial Graphs
Instead of treating a partial graph as simply a collection of flows and junctions which may not yet all be
@ -207,7 +207,7 @@ using `add()` twice.
@@snip [GraphDSLDocSpec.scala]($code$/scala/docs/stream/GraphDSLDocSpec.scala) { #graph-dsl-components-use }
<a id="bidi-flow"></a>
## Bidirectional Flows
A graph topology that is often useful is that of two flows going in opposite
@ -240,7 +240,7 @@ turns an object into a sequence of bytes.
The other stage that we talked about is a little more involved since reversing
a framing protocol means that any received chunk of bytes may correspond to
zero or more messages. This is best implemented using a `GraphStage`
(see also @ref:[Custom processing with GraphStage](stream-customize.md#graphstage)).
@@snip [BidiFlowDocSpec.scala]($code$/scala/docs/stream/BidiFlowDocSpec.scala) { #framing }
@ -253,7 +253,7 @@ together and also turned around with the `.reversed` method. The test
simulates both parties of a network communication protocol without actually
having to open a network connection—the flows can just be connected directly.
<a id="graph-matvalue"></a>
## Accessing the materialized value inside the Graph
In certain cases it might be necessary to feed back the materialized value of a Graph (partial, closed or backing a
@ -269,7 +269,7 @@ The following example demonstrates a case where the materialized `Future` of a f
@@snip [GraphDSLDocSpec.scala]($code$/scala/docs/stream/GraphDSLDocSpec.scala) { #graph-dsl-matvalue-cycle }
<a id="graph-cycles"></a>
## Graph cycles, liveness and deadlocks
Cycles in bounded stream topologies need special considerations to avoid potential deadlocks and other liveness issues.
@ -322,7 +322,7 @@ The numbers in parenthesis illustrates how many calls that are in progress at
the same time. Here the downstream demand and thereby the number of concurrent
calls are limited by the buffer size (4) of the `ActorMaterializerSettings`.
<a id="reactive-streams-integration"></a>
## Integrating with Reactive Streams
[Reactive Streams](http://reactive-streams.org/) defines a standard for asynchronous stream processing with non-blocking
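Integration is direct; a sketch with a hypothetical publisher and subscriber from another Reactive Streams implementation:

```scala
import akka.stream.scaladsl.{ Sink, Source }
import org.reactivestreams.{ Publisher, Subscriber }

// Consumes a third-party Publisher and feeds a third-party Subscriber
// through an Akka Streams transformation in between.
def integrate(pub: Publisher[Int], sub: Subscriber[Int]) =
  Source.fromPublisher(pub).map(_ + 1).to(Sink.fromSubscriber(sub))
```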
@ -427,7 +427,7 @@ type-safe and safe to implement `akka.stream.stage.GraphStage`. It can also
expose a "stage actor ref" if it needs to be addressed as-if an Actor.
Custom stages implemented using `GraphStage` are also automatically fusable.
To learn more about implementing custom stages using it refer to @ref:[Custom processing with GraphStage](stream-customize.md#graphstage).
@@@
@ -482,7 +482,7 @@ type-safe and safe to implement `akka.stream.stage.GraphStage`. It can also
expose a "stage actor ref" if it needs to be addressed as-if an Actor.
Custom stages implemented using `GraphStage` are also automatically fusable.
To learn more about implementing custom stages using it refer to @ref:[Custom processing with GraphStage](stream-customize.md#graphstage).
@@@
@ -63,13 +63,13 @@ composition, therefore it may take some careful study of this subject until you
feel familiar with the tools and techniques. The documentation is here to help
and for best results we recommend the following approach:
* Read the @ref:[Quick Start Guide](stream-quickstart.md#stream-quickstart) to get a feel for what streams
look like and what they can do.
* The top-down learners may want to peruse the @ref:[Design Principles behind Akka Streams](../general/stream/stream-design.md) at this
point.
* The bottom-up learners may feel more at home rummaging through the
@ref:[Streams Cookbook](stream-cookbook.md).
* For a complete overview of the built-in processing stages you can look at the
table in @ref:[stages overview](stages-overview.md)
* The other sections can be read sequentially or as needed during the previous
steps, each digging deeper into specific topics.
@ -67,7 +67,7 @@ When writing such end-to-end back-pressured systems you may sometimes end up in
in which *either side is waiting for the other one to start the conversation*. One does not need to look far
to find examples of such back-pressure loops. In the two examples shown previously, we always assumed that the side we
are connecting to would start the conversation, which effectively means both sides are back-pressured and cannot get
the conversation started. There are multiple ways of dealing with this which are explained in depth in @ref:[Graph cycles, liveness and deadlocks](stream-graphs.md#graph-cycles),
however in client-server scenarios it is often the simplest to make either side simply send an initial message.
@@@ note