Fix or improve some formatting in the new docs

parent 7ca11b08ab
commit 9cc0f5ea98

43 changed files with 894 additions and 414 deletions

@@ -408,7 +408,6 @@ result:
 It is always preferable to communicate with other Actors using their ActorRef
 instead of relying upon ActorSelection. Exceptions are
 
->
 * sending messages using the @ref:[At-Least-Once Delivery](persistence.md#at-least-once-delivery) facility
 * initiating first contact with a remote system
 
@@ -465,9 +464,12 @@ An example demonstrating actor look-up is given in @ref:[Remoting Sample](remoti
 
 ## Messages and immutability
 
-**IMPORTANT**: Messages can be any kind of object but have to be
-immutable. Akka can’t enforce immutability (yet) so this has to be by
-convention.
+@@@ warning { title=IMPORTANT }
+
+Messages can be any kind of object but have to be immutable. Akka can’t enforce
+immutability (yet) so this has to be by convention.
+
+@@@
 
 Here is an example of an immutable message:
 
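For reference, an immutable message of the kind this hunk describes is typically a plain value object. A minimal Scala sketch (the class and its fields are invented for illustration, not taken from the commit):

```scala
// Immutable by construction: only final values, no mutable state.
final case class Register(user: String, attempts: Int)
```
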
@@ -1029,4 +1031,4 @@ the potential issues is that messages might be lost when sent to remote actors.
 an uninitialized state might lead to the condition that it receives a user message before the initialization has been
 done.
 
-@@@
\ No newline at end of file
+@@@

@@ -2,9 +2,9 @@
 
 Agents in Akka are inspired by [agents in Clojure](http://clojure.org/agents).
 
-@@@ warning
+@@@ warning { title="Deprecation warning" }
 
-**Deprecation warning** - Agents have been deprecated and are scheduled for removal
+Agents have been deprecated and are scheduled for removal
 in the next major version. We have found that their leaky abstraction (they do not
 work over the network) make them inferior to pure Actors, and in face of the soon
 inclusion of Akka Typed we see little value in maintaining the current Agents.

@@ -90,4 +90,4 @@ Agents participating in enclosing STM transaction is a deprecated feature in 2.3
 If an Agent is used within an enclosing `Scala STM transaction`, then it will participate in
 that transaction. If you send to an Agent within a transaction then the dispatch
 to the Agent will be held until that transaction commits, and discarded if the
-transaction is aborted.
\ No newline at end of file
+transaction is aborted.

@@ -42,23 +42,23 @@ The `ClusterClientReceptionist` sends out notifications in relation to having re
 from a `ClusterClient`. This notification enables the server containing the receptionist to become aware of
 what clients are connected.
 
-**1. ClusterClient.Send**
+1. **ClusterClient.Send**
 
-The message will be delivered to one recipient with a matching path, if any such
-exists. If several entries match the path the message will be delivered
-to one random destination. The `sender` of the message can specify that local
-affinity is preferred, i.e. the message is sent to an actor in the same local actor
-system as the used receptionist actor, if any such exists, otherwise random to any other
-matching entry.
+    The message will be delivered to one recipient with a matching path, if any such
+    exists. If several entries match the path the message will be delivered
+    to one random destination. The `sender` of the message can specify that local
+    affinity is preferred, i.e. the message is sent to an actor in the same local actor
+    system as the used receptionist actor, if any such exists, otherwise random to any other
+    matching entry.
 
-**2. ClusterClient.SendToAll**
+2. **ClusterClient.SendToAll**
 
-The message will be delivered to all recipients with a matching path.
+    The message will be delivered to all recipients with a matching path.
 
-**3. ClusterClient.Publish**
+3. **ClusterClient.Publish**
 
-The message will be delivered to all recipients Actors that have been registered as subscribers
-to the named topic.
+    The message will be delivered to all recipient Actors that have been registered as subscribers
+    to the named topic.
 
 Response messages from the destination actor are tunneled via the receptionist
 to avoid inbound connections from other cluster nodes to the client, i.e.
 
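To make the three delivery modes above concrete, here is a hedged Scala sketch of a client talking to the receptionist; the contact address, service path and messages are placeholders, not taken from the commit:

```scala
import akka.actor.{ ActorPath, ActorSystem }
import akka.cluster.client.{ ClusterClient, ClusterClientSettings }

val system = ActorSystem("client")
val initialContacts = Set(
  ActorPath.fromString("akka.tcp://OtherSys@host1:2552/system/receptionist"))

val client = system.actorOf(
  ClusterClient.props(ClusterClientSettings(system).withInitialContacts(initialContacts)))

client ! ClusterClient.Send("/user/serviceA", "hello", localAffinity = true) // one matching recipient
client ! ClusterClient.SendToAll("/user/serviceA", "hi all")                 // every matching path
client ! ClusterClient.Publish("my-topic", "news")                           // all topic subscribers
```
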
@@ -193,4 +193,4 @@ within a configurable interval. This is configured with the `reconnect-timeout`,
 This can be useful when initial contacts are provided from some kind of service registry, cluster node addresses
 are entirely dynamic and the entire cluster might shut down or crash, be restarted on new addresses. Since the
 client will be stopped in that case a monitoring actor can watch it and upon `Terminated` a new set of initial
-contacts can be fetched and a new cluster client started.
\ No newline at end of file
+contacts can be fetched and a new cluster client started.

@@ -57,7 +57,6 @@ identifier and the shard identifier from incoming messages.
 
 This example illustrates two different ways to define the entity identifier in the messages:
 
->
 * The `Get` message includes the identifier itself.
 * The `EntityEnvelope` holds the identifier, and the actual message that is
   sent to the entity actor is wrapped in the envelope.
 
@@ -418,4 +417,4 @@ a `ShardRegion.ClusterShardingStats` containing the identifiers of the shards ru
 of entities that are alive in each shard.
 
 The purpose of these messages is testing and monitoring; they are not provided to give access to
-directly sending messages to the individual entities.
\ No newline at end of file
+directly sending messages to the individual entities.

@@ -74,33 +74,37 @@ where you'd use periods to denote sub-sections, like this: `"foo.bar.my-dispatch
 
 There are 3 different types of message dispatchers:
 
-* Dispatcher
-  * This is an event-based dispatcher that binds a set of Actors to a thread pool. It is the default dispatcher
-    used if one is not specified.
+* **Dispatcher**
+
+  This is an event-based dispatcher that binds a set of Actors to a thread
+  pool. It is the default dispatcher used if one is not specified.
+
   * Sharability: Unlimited
   * Mailboxes: Any, creates one per Actor
   * Use cases: Default dispatcher, Bulkheading
-  *
-    Driven by:
-    `java.util.concurrent.ExecutorService`
-    : specify using "executor" using "fork-join-executor",
-    "thread-pool-executor" or the FQCN of
-    an `akka.dispatcher.ExecutorServiceConfigurator`
+  * Driven by: `java.util.concurrent.ExecutorService`.
+    Specify using "executor" using "fork-join-executor", "thread-pool-executor" or the FQCN of
+    an `akka.dispatcher.ExecutorServiceConfigurator`.
 
-* PinnedDispatcher
-  * This dispatcher dedicates a unique thread for each actor using it; i.e. each actor will have its own thread pool with only one thread in the pool.
+* **PinnedDispatcher**
+
+  This dispatcher dedicates a unique thread for each actor using it; i.e.
+  each actor will have its own thread pool with only one thread in the pool.
+
   * Sharability: None
   * Mailboxes: Any, creates one per Actor
   * Use cases: Bulkheading
-  *
-    Driven by: Any
-    `akka.dispatch.ThreadPoolExecutorConfigurator`
-    : by default a "thread-pool-executor"
+  * Driven by: Any `akka.dispatch.ThreadPoolExecutorConfigurator`.
+    By default a "thread-pool-executor".
 
-* CallingThreadDispatcher
-  * This dispatcher runs invocations on the current thread only. This dispatcher does not create any new threads,
-    but it can be used from different threads concurrently for the same actor. See @ref:[Java-CallingThreadDispatcher](testing.md#java-callingthreaddispatcher)
-    for details and restrictions.
+* **CallingThreadDispatcher**
+
+  This dispatcher runs invocations on the current thread only. This
+  dispatcher does not create any new threads, but it can be used from
+  different threads concurrently for the same actor.
+  See @ref:[Java-CallingThreadDispatcher](testing.md#java-callingthreaddispatcher)
+  for details and restrictions.
+
   * Sharability: Unlimited
   * Mailboxes: Any, creates one per Actor per Thread (on demand)
   * Use cases: Testing
 
@@ -135,4 +139,4 @@ and that pool will have only one thread.
 Note that it's not guaranteed that the *same* thread is used over time, since the core pool timeout
 is used for `PinnedDispatcher` to keep resource usage down in case of idle actors. To use the same
 thread all the time you need to add `thread-pool-executor.allow-core-timeout=off` to the
-configuration of the `PinnedDispatcher`.
\ No newline at end of file
+configuration of the `PinnedDispatcher`.

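As a companion to the two dispatcher hunks above, this is roughly what a `PinnedDispatcher` configuration with the core-timeout disabled looks like; the dispatcher name is illustrative:

```
my-pinned-dispatcher {
  type = PinnedDispatcher
  executor = "thread-pool-executor"
  # keep the single dedicated thread alive even while the actor is idle
  thread-pool-executor.allow-core-timeout = off
}
```
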
@@ -16,7 +16,7 @@ Read the following source code. The inlined comments explain the different piece
 the fault handling and why they are added. It is also highly recommended to run this
 sample as it is easy to follow the log output to understand what is happening at runtime.
 
-@@toc
+@@toc { depth=1 }
 
 @@@ index
 
@@ -157,4 +157,4 @@ different supervisor which overrides this behavior.
 With this parent, the child survives the escalated restart, as demonstrated in
 the last test:
 
-@@snip [FaultHandlingTest.java]($code$/java/jdocs/actor/FaultHandlingTest.java) { #escalate-restart }
\ No newline at end of file
+@@snip [FaultHandlingTest.java]($code$/java/jdocs/actor/FaultHandlingTest.java) { #escalate-restart }

@@ -7,13 +7,11 @@ an Akka Actor and is best described in the [Erlang design principles](http://www
 
 A FSM can be described as a set of relations of the form:
 
->
-**State(S) x Event(E) -> Actions (A), State(S')**
+> **State(S) x Event(E) -> Actions (A), State(S')**
 
 These relations are interpreted as meaning:
 
->
-*If we are in state S and the event E occurs, we should perform the actions A
+> *If we are in state S and the event E occurs, we should perform the actions A
 and make a transition to the state S'.*
 
 ## A Simple Example
 
@@ -52,7 +50,6 @@ The basic strategy is to declare the actor, by inheriting the `AbstractFSM` clas
 and specifying the possible states and data values as type parameters. Within
 the body of the actor a DSL is used for declaring the state machine:
 
->
 * `startWith` defines the initial state and initial data
 * then there is one `when(<state>) { ... }` declaration per state to be
   handled (could potentially be multiple ones, the passed
 
@@ -121,7 +118,6 @@ the FSM logic.
 
 The `AbstractFSM` class takes two type parameters:
 
->
 1. the supertype of all state names, usually an enum,
 2. the type of the state data which are tracked by the `AbstractFSM` module
    itself.
 
@@ -139,8 +135,9 @@ internal state explicit in a few well-known places.
 
 A state is defined by one or more invocations of the method
 
->
-`when(<name>[, stateTimeout = <timeout>])(stateFunction)`.
+```
+when(<name>[, stateTimeout = <timeout>])(stateFunction)
+```
 
 The given name must be an object which is type-compatible with the first type
 parameter given to the `AbstractFSM` class. This object is used as a hash key,
 
@@ -179,8 +176,9 @@ It is recommended practice to declare the states as an enum and then verify that
 
 Each FSM needs a starting point, which is declared using
 
->
-`startWith(state, data[, timeout])`
+```
+startWith(state, data[, timeout])
+```
 
 The optionally given timeout argument overrides any specification given for the
 desired initial state. If you want to cancel a default timeout, use
 
@@ -260,8 +258,9 @@ Up to this point, the FSM DSL has been centered on states and events. The dual
 view is to describe it as a series of transitions. This is enabled by the
 method
 
->
-`onTransition(handler)`
+```
+onTransition(handler)
+```
 
 which associates actions with a transition instead of with a state and event.
 The handler is a partial function which takes a pair of states as input; no
 
@@ -310,8 +309,9 @@ the listener.
 Besides state timeouts, FSM manages timers identified by `String` names.
 You may set a timer using
 
->
-`setTimer(name, msg, interval, repeat)`
+```
+setTimer(name, msg, interval, repeat)
+```
 
 where `msg` is the message object which will be sent after the duration
 `interval` has elapsed. If `repeat` is `true`, then the timer is
 
@@ -321,15 +321,17 @@ adding the new timer.
 
 Timers may be canceled using
 
->
-`cancelTimer(name)`
+```
+cancelTimer(name)
+```
 
 which is guaranteed to work immediately, meaning that the scheduled message
 will not be processed after this call even if the timer already fired and
 queued it. The status of any timer may be inquired with
 
->
-`isTimerActive(name)`
+```
+isTimerActive(name)
+```
 
 These named timers complement state timeouts because they are not affected by
 intervening reception of other messages.
 
@@ -338,8 +340,9 @@ intervening reception of other messages.
 
 The FSM is stopped by specifying the result state as
 
->
-`stop([reason[, data]])`
+```
+stop([reason[, data]])
+```
 
 The reason must be one of `Normal` (which is the default), `Shutdown`
 or `Failure(reason)`, and the second argument may be given to change the
 
@@ -396,7 +399,6 @@ event trace by `LoggingFSM` instances:
 
 This FSM will log at DEBUG level:
 
->
 * all processed events, including `StateTimeout` and scheduled timer
   messages
 * every setting and cancellation of named timers
 
@@ -434,4 +436,4 @@ zero.
 A bigger FSM example contrasted with Actor's `become`/`unbecome` can be
 downloaded as a ready to run @extref[Akka FSM sample](ecs:akka-samples-fsm-java)
 together with a tutorial. The source code of this sample can be found in the
-@extref[Akka Samples Repository](samples:akka-sample-fsm-java).
\ No newline at end of file
+@extref[Akka Samples Repository](samples:akka-sample-fsm-java).

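Since the FSM hunks above convert the `when`/`startWith`/timer signatures into fenced code, a compact, self-contained Scala sketch may help tie them together; all states, data types and messages here are invented for illustration:

```scala
import akka.actor.{ ActorRef, FSM }
import scala.concurrent.duration._

sealed trait State
case object Idle extends State
case object Active extends State

sealed trait Data
case object Uninitialized extends Data
final case class Target(ref: ActorRef) extends Data

final case class SetTarget(ref: ActorRef)

class Example extends FSM[State, Data] {
  // startWith(state, data[, timeout]) defines the initial state and data.
  startWith(Idle, Uninitialized)

  // One when(<state>)(stateFunction) block per state to be handled.
  when(Idle) {
    case Event(SetTarget(ref), Uninitialized) =>
      goto(Active) using Target(ref)
  }

  // A state timeout delivers StateTimeout if no message arrives in time.
  when(Active, stateTimeout = 1.second) {
    case Event(StateTimeout, _) =>
      stop() // stop([reason[, data]]) defaults to the Normal reason
  }

  initialize()
}
```
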
@@ -3,7 +3,6 @@
 UDP is a connectionless datagram protocol which offers two different ways of
 communication on the JDK level:
 
->
 * sockets which are free to send datagrams to any destination and receive
   datagrams from any origin
 * sockets which are restricted to communication with one specific remote
 
@@ -98,4 +97,4 @@ Another socket option will be needed to join a multicast group.
 
 Socket options must be provided to the `UdpMessage.bind` command.
 
-@@snip [JavaUdpMulticast.java]($code$/java/jdocs/io/JavaUdpMulticast.java) { #bind }
\ No newline at end of file
+@@snip [JavaUdpMulticast.java]($code$/java/jdocs/io/JavaUdpMulticast.java) { #bind }

@@ -263,7 +263,6 @@ stdout logger is `WARNING` and it can be silenced completely by setting
 Akka provides a logger for [SLF4J](http://www.slf4j.org/). This module is available in the 'akka-slf4j.jar'.
 It has a single dependency: the slf4j-api jar. In your runtime, you also need an SLF4J backend. We recommend [Logback](http://logback.qos.ch/):
 
->
 ```xml
 <dependency>
   <groupId>ch.qos.logback</groupId>
 
@@ -499,4 +498,4 @@ shown below:
 
 ```java
 final LoggingAdapter log = Logging.getLogger(system.eventStream(), "my.string");
-```
\ No newline at end of file
+```

@@ -37,7 +37,7 @@ type to `PersistentActor` s and @ref:[persistence queries](persistence-query.md)
 instead of having to explicitly deal with different schemas.
 
-In summary, schema evolution in event sourced systems exposes the following characteristics
-:
+In summary, schema evolution in event sourced systems exposes the following characteristics:
+
 * Allow the system to continue operating without large scale migrations to be applied,
 * Allow the system to read "old" events from the underlying storage, however present them in a "new" view to the application logic,
 * Transparently promote events to the latest versions during recovery (or queries) such that the business logic need not consider multiple versions of events
 
@@ -111,7 +111,7 @@ The below figure explains how the default serialization scheme works, and how it
 user provided message itself, which we will from here on refer to as the `payload` (highlighted in yellow):
 
 
->
+
 Akka Persistence provided serializers wrap the user payload in an envelope containing all persistence-relevant information.
 **If the Journal uses provided Protobuf serializers for the wrapper types (e.g. PersistentRepr), then the payload will
 be serialized using the user configured serializer, and if none is provided explicitly, Java serialization will be used for it.**
 
@@ -234,7 +234,7 @@ add the overhead of having to maintain the schema. When using serializers like t
 (except renaming the field and method used during serialization) is needed to perform such evolution:
 
 
->
+
 This is how such a rename would look in protobuf:
 
 @@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #protobuf-rename-proto }
 
@@ -262,7 +262,7 @@ automatically by the serializer. You can do these kinds of "promotions" either m
 or using a library like [Stamina](https://github.com/javapenos/stamina) which helps to create those `V1->V2->V3->...->Vn` promotion chains without much boilerplate.
 
 
->
+
 The following snippet showcases how one could apply renames if working with plain JSON (using a
 `JsObject` as an example JSON representation):
 
@@ -296,7 +296,7 @@ for the recovery mechanisms that this entails. For example, a naive way of filte
 being delivered to a recovering `PersistentActor` is pretty simple, as one can simply filter them out in an @ref:[EventAdapter](persistence.md#event-adapters):
 
 
->
+
 The `EventAdapter` can drop old events (**O**) by emitting an empty `EventSeq`.
 Other events can simply be passed through (**E**).
 
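A minimal Scala sketch of such a filtering `EventAdapter`; the event types are invented for illustration:

```scala
import akka.persistence.journal.{ EventAdapter, EventSeq }

final case class OldEvent(data: String) // deprecated event type to drop
final case class NewEvent(data: String)

class FilteringAdapter extends EventAdapter {
  override def manifest(event: Any): String = ""

  override def toJournal(event: Any): Any = event

  // Drop old events (O) during recovery, pass everything else through (E).
  override def fromJournal(event: Any, manifest: String): EventSeq = event match {
    case _: OldEvent => EventSeq.empty
    case other       => EventSeq.single(other)
  }
}
```
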
@@ -311,7 +311,6 @@ In the just described technique we have saved the PersistentActor from receiving
 out in the `EventAdapter`, however the event itself still was deserialized and loaded into memory.
 This has two notable *downsides*:
 
->
 * first, that the deserialization was actually performed, so we spent some of our time budget on the
   deserialization, even though the event does not contribute anything to the persistent actors state.
 * second, that we are *unable to remove the event class* from the system – since the serializer still needs to create
 
@@ -326,7 +325,7 @@ This can for example be implemented by using a `SerializerWithStringManifest`
 that the type is no longer needed, and skip the deserialization altogether:
 
 
->
+
 The serializer is aware of the old event types that need to be skipped (**O**), and can skip deserializing them altogether
 by simply returning a "tombstone" (**T**), which the EventAdapter converts into an empty EventSeq.
 Other events (**E**) can simply be passed through.
 
@@ -360,7 +359,7 @@ classes which very often may be less user-friendly yet highly optimised for thro
 these types in a 1:1 style as illustrated below:
 
 
->
+
 Domain events (**A**) are adapted to the data model events (**D**) by the `EventAdapter`.
 The data model can be a format natively understood by the journal, such that it can store it more efficiently or
 include additional data for the event (e.g. tags), for ease of later querying.
 
@@ -442,7 +441,7 @@ on what the user actually intended to change (instead of the composite `UserDeta
 of our model).
 
 
->
+
 The `EventAdapter` splits the incoming event into smaller, more fine-grained events during recovery.
 
 During recovery however, we now need to convert the old `V1` model into the `V2` representation of the change.
 
@@ -452,4 +451,4 @@ and the address change is handled similarly:
 @@snip [PersistenceSchemaEvolutionDocTest.java]($code$/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #split-events-during-recovery }
 
 By returning an `EventSeq` from the event adapter, the recovered event can be converted to multiple events before
-being delivered to the persistent actor.
\ No newline at end of file
+being delivered to the persistent actor.

@@ -703,10 +703,11 @@ Artery has been designed for low latency and as a result it can be CPU hungry wh
 This is not always desirable. It is possible to tune the tradeoff between CPU usage and latency with
 the following configuration:
 
->
+```
 # Values can be from 1 to 10, where 10 strongly prefers low latency
 # and 1 strongly prefers less CPU usage
 akka.remote.artery.advanced.idle-cpu-level = 1
+```
 
 By setting this value to a lower number, it tells Akka to do longer "sleeping" periods on its thread dedicated
 for [spin-waiting](https://en.wikipedia.org/wiki/Busy_waiting) and hence reducing CPU load when there is no
 
@@ -789,4 +790,4 @@ akka {
    }
   }
 }
-```
\ No newline at end of file
+```

@@ -633,4 +633,4 @@ Keep in mind that local.address will most likely be in one of the private network ra
 * *172.16.0.0 - 172.31.255.255* (network class B)
 * *192.168.0.0 - 192.168.255.255* (network class C)
 
-For further details see [RFC 1597]([https://tools.ietf.org/html/rfc1597](https://tools.ietf.org/html/rfc1597)) and [RFC 1918]([https://tools.ietf.org/html/rfc1918](https://tools.ietf.org/html/rfc1918)).
+For further details see [RFC 1597](https://tools.ietf.org/html/rfc1597) and [RFC 1918](https://tools.ietf.org/html/rfc1918).

File diff suppressed because it is too large

@@ -11,12 +11,8 @@ be processed arrive and leave the stage. In this view, a `Source` is nothing els
 output port, or, a `BidiFlow` is a "box" with exactly two input and two output ports. In the figure below
 we illustrate the most commonly used stages viewed as "boxes".
 
-|
-|
 
 
 
-|
-|
 
 The *linear* stages are `Source`, `Sink`
 and `Flow`, as these can be used to compose strict chains of processing stages.
 
@@ -35,12 +31,8 @@ to interact with. One good example is the `Http` server component, which is enco
 The following figure demonstrates various composite stages, that contain various other types of stages internally, but
 hiding them behind a *shape* that looks like a `Source`, `Flow`, etc.
 
-|
-|
 
 
 
-|
-|
 
 One interesting example above is a `Flow` which is composed of a disconnected `Sink` and `Source`.
 This can be achieved by using the `fromSinkAndSource()` constructor method on `Flow` which takes the two parts as
 parameters.
 
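A brief Scala sketch of the `fromSinkAndSource()` construction mentioned above; the element types are arbitrary:

```scala
import akka.NotUsed
import akka.stream.scaladsl.{ Flow, Sink, Source }

// A Flow whose input side discards elements and whose output side
// emits an unrelated stream: the two halves are disconnected.
val disconnected: Flow[String, Int, NotUsed] =
  Flow.fromSinkAndSource(Sink.ignore, Source(1 to 10))
```
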
@@ -62,12 +54,8 @@ These mechanics allow arbitrary nesting of modules. For example the following fi
 that is built from a composite `Source` and a composite `Sink` (which in turn contains a composite
 `Flow`).
 
-|
-|
 
 
 
-|
-|
 
 The above diagram contains one more shape that we have not seen yet, which is called `RunnableGraph`. It turns
 out, that if we wire all exposed ports together, so that no more open ports remain, we get a module that is *closed*.
 This is what the `RunnableGraph` class represents. This is the shape that a `Materializer` can take
 
@@ -93,12 +81,8 @@ Once we have hidden the internals of our components, they act like any other bui
 we hide some of the internals of our composites, the result looks just like if any other predefined component has been
 used:
 
-|
-|
 
 
 
-|
-|
 
 If we look at usage of built-in components, and our custom components, there is no difference in usage as the code
 snippet below demonstrates.
 
@@ -115,12 +99,8 @@ operate on are uniform across all DSLs and fit together nicely.
 
 As a first example, let's look at a more complex layout:
 
-|
-|
 
 
 
-|
-|
 
 The diagram shows a `RunnableGraph` (remember, if there are no unwired ports, the graph is closed, and therefore
 can be materialized) that encapsulates a non-trivial stream processing network. It contains fan-in, fan-out stages,
 directed and non-directed cycles. The `runnable()` method of the `GraphDSL` factory object allows the creation of a
 
@@ -133,19 +113,13 @@ It is possible to refer to the ports, so another version might look like this:
 
 @@snip [CompositionDocTest.java]($code$/java/jdocs/stream/CompositionDocTest.java) { #complex-graph-alt }
 
-|
-|
 
 Similar to the case in the first section, so far we have not considered modularity. We created a complex graph, but
 the layout is flat, not modularized. We will modify our example, and create a reusable component with the graph DSL.
 The way to do it is to use the `create()` method on `GraphDSL` factory. If we remove the sources and sinks
 from the previous example, what remains is a partial graph:
 
-|
-|
 
 
 
-|
-|
 
 We can recreate a similar graph in code, using the DSL in a similar way as before:
 
 @@snip [CompositionDocTest.java]($code$/java/jdocs/stream/CompositionDocTest.java) { #partial-graph }
 
@@ -159,12 +133,8 @@ matching built-in ones.
 The resulting graph is already a properly wrapped module, so there is no need to call *named()* to encapsulate the graph, but
 it is a good practice to give names to modules to help debugging.
 
-|
-|
 
 
 
-|
-|
 
 Since our partial graph has the right shape, it can be already used in the simpler, linear DSL:
 
 @@snip [CompositionDocTest.java]($code$/java/jdocs/stream/CompositionDocTest.java) { #partial-use }
 
@@ -175,12 +145,8 @@ has a `fromGraph()` method that just adds the DSL to a `FlowShape`. There are si
 For convenience, it is also possible to skip the partial graph creation, and use one of the convenience creator methods.
 To demonstrate this, we will create the following graph:
 
-|
-|
 
 
 
-|
-|
 
 The code version of the above closed graph might look like this:
 
 @@snip [CompositionDocTest.java]($code$/java/jdocs/stream/CompositionDocTest.java) { #partial-flow-dsl }
 
@@ -236,12 +202,8 @@ graphically demonstrates what is happening.
 
 The propagation of the individual materialized values from the enclosed modules towards the top will look like this:
 
-|
-|
 
 
 
-|
-|
 
 To implement the above, first, we create a composite `Source`, where the enclosed `Source` has a
 materialized type of `CompletableFuture<Optional<Integer>>`. By using the combiner function `Keep.left()`, the resulting materialized
 type is of the nested module (indicated by the color *red* on the diagram):
 
@@ -297,11 +259,7 @@ the same attribute explicitly set. `nestedSource` gets the default attributes fr
 on the other hand has this attribute set, so it will be used by all nested modules. `nestedFlow` will inherit from `nestedSink`
 except the `map` stage which has again an explicitly provided attribute overriding the inherited one.
 
-|
-|
 
 
 
-|
-|
 
 This diagram illustrates the inheritance process for the example code (representing the materializer default attributes
 as the color *red*, the attributes set on `nestedSink` as *blue* and the attributes set on `nestedFlow` as *green*).
 
@@ -156,7 +156,7 @@ Sometimes we want to map elements into multiple groups simultaneously.
 To achieve the desired result, we attack the problem in two steps:
 
 * first, using a function `topicMapper` that gives a list of topics (groups) a message belongs to, we transform our
-  stream of `Message` to a stream of :class:`Pair<Message, Topic>`` where for each topic the message belongs to a separate pair
+  stream of `Message` to a stream of `Pair<Message, Topic>` where for each topic the message belongs to a separate pair
   will be emitted. This is achieved by using `mapConcat`
 * Then we take this new stream of message topic pairs (containing a separate pair for each topic a given message
   belongs to) and feed it into groupBy, using the topic as the group key.
 
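The two steps in the recipe above could look roughly like this in Scala; `Message`, `Topic` and `topicMapper` are stand-ins for the types the recipe assumes:

```scala
import akka.stream.scaladsl.Source

final case class Message(text: String)
final case class Topic(name: String)

def topicMapper(m: Message): List[Topic] =
  if (m.text.contains("akka")) List(Topic("all"), Topic("akka")) else List(Topic("all"))

val messages = Source(List(Message("hi akka"), Message("hello")))

val byTopic =
  messages
    .mapConcat(msg => topicMapper(msg).map(topic => (msg, topic))) // one pair per topic
    .groupBy(maxSubstreams = 16, _._2)                             // substream per topic
```
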
@@ -374,4 +374,4 @@ but only if this does not interfere with normal traffic.
 
 There is a built-in operation that allows to do this directly:
 
-@@snip [RecipeKeepAlive.java]($code$/java/jdocs/stream/javadsl/cookbook/RecipeKeepAlive.java) { #inject-keepalive }
\ No newline at end of file
+@@snip [RecipeKeepAlive.java]($code$/java/jdocs/stream/javadsl/cookbook/RecipeKeepAlive.java) { #inject-keepalive }

@@ -92,12 +92,8 @@ the initial state while orange indicates the end state. If an operation is not l
 to call it while the port is in that state. If an event is not listed for a state, then that event cannot happen
 in that state.
 
-|
-|
 
 
 
-|
-|
 
 The following operations are available for *input* ports:
 
 * `pull(in)` requests a new element from an input port. This is only possible after the port has been pushed by upstream.
 
@@ -127,12 +123,8 @@ the initial state while orange indicates the end state. If an operation is not l
 to call it while the port is in that state. If an event is not listed for a state, then that event cannot happen
 in that state.
 
-|
-|
 
 
 
-|
-|
 
 Finally, there are two methods available for convenience to complete the stage and all of its ports:
 
 * `completeStage()` is equivalent to closing all output ports and cancelling all input ports.
 
@@ -144,7 +136,7 @@ of actions which will greatly simplify some use cases at the cost of some extra
 between the two APIs could be described as that the first one is signal driven from the outside, while this API
 is more active and drives its surroundings.
 
-The operations of this part of the :class:`GraphStage` API are:
+The operations of this part of the `GraphStage` API are:
 
 * `emit(out, elem)` and `emitMultiple(out, Iterable(elem1, elem2))` replaces the `OutHandler` with a handler that emits
   one or more elements when there is demand, and then reinstalls the current handlers
 
@@ -158,7 +150,7 @@ The following methods are safe to call after invoking `emit` and `read` (and wil
 operation when those are done): `complete(out)`, `completeStage()`, `emit`, `emitMultiple`, `abortEmitting()`
 and `abortReading()`
 
-An example of how this API simplifies a stage can be found below in the second version of the :class:`Duplicator`.
+An example of how this API simplifies a stage can be found below in the second version of the `Duplicator`.
 
 ### Custom linear processing stages using GraphStage
 
@@ -169,20 +161,12 @@ Such a stage can be illustrated as a box with two flows as it is
 seen in the illustration below. Demand flowing upstream leading to elements
 flowing downstream.
 
-|
-|
 
 
 
-|
-|
 
 To illustrate these concepts we create a small `GraphStage` that implements the `map` transformation.
 
-|
-|
 
 
 
-|
-|
 
 Map calls `push(out)` from the `onPush()` handler and it also calls `pull()` from the `onPull` handler resulting in the
 conceptual wiring above, and fully expressed in code below:
 
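The `map` stage this passage refers to is small enough to sketch inline; the following Scala version mirrors the shape of the documented example (type and port names are the conventional ones, not taken from the commit):

```scala
import akka.stream.{ Attributes, FlowShape, Inlet, Outlet }
import akka.stream.stage.{ GraphStage, GraphStageLogic, InHandler, OutHandler }

class Map[A, B](f: A => B) extends GraphStage[FlowShape[A, B]] {

  val in: Inlet[A] = Inlet("Map.in")
  val out: Outlet[B] = Outlet("Map.out")

  override val shape: FlowShape[A, B] = FlowShape.of(in, out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) {
      // onPush: grab the element, transform it, and push it downstream.
      setHandler(in, new InHandler {
        override def onPush(): Unit = push(out, f(grab(in)))
      })
      // onPull: forward downstream demand to the upstream.
      setHandler(out, new OutHandler {
        override def onPull(): Unit = pull(in)
      })
    }
}
```
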
@@ -194,12 +178,8 @@ demand is passed along upstream elements passed on downstream.
 To demonstrate a many-to-one stage we will implement
 filter. The conceptual wiring of `Filter` looks like this:
 
-|
-|
 
 
 
-|
-|
 
 As we see above, if the given predicate matches the current element we are propagating it downwards, otherwise
 we return the “ball” to our upstream so that we get the new element. This is achieved by modifying the map
 example by adding a conditional in the `onPush` handler and decide between a `pull(in)` or `push(out)` call
 
@@ -210,12 +190,8 @@ example by adding a conditional in the `onPush` handler and decide between a `pu
 To complete the picture we define a one-to-many transformation as the next step. We chose a straightforward example stage
 that emits every upstream element twice downstream. The conceptual wiring of this stage looks like this:
 
-|
-|
 
 
 
-|
-|
 
 This is a stage that has state: an option with the last element it has seen indicating if it
 has duplicated this last element already or not. We must also make sure to emit the extra element
 if the upstream completes.
 
@@ -238,12 +214,8 @@ reinstate the original handlers:
 Finally, to demonstrate all of the stages above, we put them together into a processing chain,
 which conceptually would correspond to the following structure:
 
-|
-|
 
 
 
-|
-|
 
 In code this is only a few lines, using the `via` method to use our custom stages in a stream:
 
 @@snip [GraphStageDocTest.java]($code$/java/jdocs/stream/GraphStageDocTest.java) { #graph-stage-chain }
 
@@ -251,12 +223,8 @@ In code this is only a few lines, using the `via` method to use our custom stages in a str
 If we attempt to draw the sequence of events, it shows that there is one "event token"
 in circulation in a potential chain of stages, just like our conceptual "railroad tracks" representation predicts.
 
-|
-|
 
 
 
-|
-|
 
 ### Completion
 
 Completion handling usually (but not exclusively) comes into the picture when processing stages need to emit
 
@@ -400,21 +368,13 @@ The next diagram illustrates the event sequence for a buffer with capacity of tw
 the downstream demand is slow to start and the buffer will fill up with upstream elements before any demand
 is seen from downstream.
 
-|
-|
 
 
 
-|
-|
 
 Another scenario would be where the demand from downstream starts coming in before any element is pushed
 into the buffer stage.
 
-|
-|
 
 
 
-|
-|
 
 The first difference we can notice is that our `Buffer` stage is automatically pulling its upstream on
 initialization. The buffer has demand for up to two elements without any downstream demand.
 
@@ -452,4 +412,4 @@ Cleaning up resources should be done in `GraphStageLogic.postStop` and not in th
 callbacks. The reason for this is that when the stage itself completes or is failed there is no signal from the upstreams
 or the downstreams. Even for stages that do not complete or fail in this manner, this can happen when the
 `Materializer` is shutdown or the `ActorSystem` is terminated while a stream is still running, what is called an
-"abrupt termination".
\ No newline at end of file
+"abrupt termination".

@@ -246,12 +246,8 @@ work on the tasks in parallel. It is important to note that asynchronous boundar
 flow where elements are passed asynchronously (as in other streaming libraries), but instead attributes always work
 by adding information to the flow graph that has been constructed up to this point:
 
-|
-|
 
 
 
-|
-|
 
 This means that everything that is inside the red bubble will be executed by one actor and everything outside of it
 by another. This scheme can be applied successively, always having one such boundary enclose the previous ones plus all
 processing stages that have been added since them.
 
@@ -303,4 +299,4 @@ been signalled already – thus the ordering in the case of zipping is defined b
 
 If you find yourself in need of fine grained control over order of emitted elements in fan-in
 scenarios consider using `MergePreferred` or `GraphStage` – which gives you full control over how the
-merge is performed.
\ No newline at end of file
+merge is performed.

@@ -41,10 +41,12 @@ be able to operate at full throughput because they will wait on a previous or su
 pancake example frying the second half of the pancake is usually faster than frying the first half, `fryingPan2` will
 not be able to operate at full capacity <a id="^1" href="#1">[1]</a>.
 
-note::
-: Asynchronous stream processing stages have internal buffers to make communication between them more efficient.
+@@@ note
+
+Asynchronous stream processing stages have internal buffers to make communication between them more efficient.
 For more details about the behavior of these and how to add additional buffers refer to @ref:[Buffers and working with rate](stream-rate.md).
 
+@@@
+
 ## Parallel processing
 
@@ -102,4 +104,4 @@ at the entry point of the pipeline. This only matters however if the processing
 deviation.
 
 > <a id="1" href="#^1">[1]</a> Roland's reason for this seemingly suboptimal procedure is that he prefers the temperature of the second pan
-to be slightly lower than the first in order to achieve a more homogeneous result.
\ No newline at end of file
+to be slightly lower than the first in order to achieve a more homogeneous result.

@@ -437,7 +437,6 @@ result:
 It is always preferable to communicate with other Actors using their ActorRef
 instead of relying upon ActorSelection. Exceptions are
 
->
 * sending messages using the @ref:[At-Least-Once Delivery](persistence.md#at-least-once-delivery) facility
 * initiating first contact with a remote system
 
@@ -491,12 +490,15 @@ An example demonstrating actor look-up is given in @ref:[Remoting Sample](remoti
 
 ## Messages and immutability
 
-**IMPORTANT**: Messages can be any kind of object but have to be
-immutable. Scala can’t enforce immutability (yet) so this has to be by
-convention. Primitives like String, Int, Boolean are always immutable. Apart
-from these the recommended approach is to use Scala case classes which are
-immutable (if you don’t explicitly expose the state) and works great with
-pattern matching at the receiver side.
+@@@ warning { title=IMPORTANT }
+
+Messages can be any kind of object but have to be immutable. Scala can’t enforce
+immutability (yet) so this has to be by convention. Primitives like String, Int,
+Boolean are always immutable. Apart from these the recommended approach is to
+use Scala case classes which are immutable (if you don’t explicitly expose the
+state) and works great with pattern matching at the receiver side.
+
+@@@
 
 Here is an example:
 
@@ -581,11 +583,11 @@ taken from one of the following locations in order of precedence:
 
 1. explicitly given timeout as in:
 
-@@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #using-explicit-timeout }
+    @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #using-explicit-timeout }
 
 2. implicit argument of type `akka.util.Timeout`, e.g.
 
-@@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #using-implicit-timeout }
+    @@snip [ActorDocSpec.scala]($code$/scala/docs/actor/ActorDocSpec.scala) { #using-implicit-timeout }
 
 See @ref:[Futures](futures.md) for more information on how to await or query a
 future.
 
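For orientation, the two timeout styles whose snippets are re-indented above look roughly like this in application code; `actorRef` and the message are placeholders:

```scala
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Future
import scala.concurrent.duration._

// 1. explicitly given timeout
val f1: Future[Any] = actorRef.ask("request")(5.seconds)

// 2. implicit argument of type akka.util.Timeout
implicit val timeout: Timeout = Timeout(5.seconds)
val f2: Future[Any] = actorRef ? "request"
```
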
@@ -1053,4 +1055,4 @@ the potential issues is that messages might be lost when sent to remote actors.
 an uninitialized state might lead to the condition that it receives a user message before the initialization has been
 done.
 
-@@@
\ No newline at end of file
+@@@

@@ -115,11 +115,8 @@ akka.protocol://system@host:1234/user/my/actor/hierarchy/path
 
 Observe all the parts you need here:
 
-*
-  `protocol`
-  is the protocol to be used to communicate with the remote system.
-  : Most of the cases this is *tcp*.
-
+* `protocol` is the protocol to be used to communicate with the remote system.
+  In most cases this is *tcp*.
 * `system` is the remote system’s name (must match exactly, case-sensitive!)
 * `host` is the remote system’s IP address or DNS name, and it must match that
   system’s configuration (i.e. *akka.remote.netty.tcp.hostname*)
 
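Assembled, the parts of such a path are used in a lookup like the following; protocol, system name, host, port and actor path are placeholders:

```scala
// akka.<protocol>://<system>@<host>:<port>/<path-to-actor>
val selection =
  context.actorSelection("akka.tcp://app@10.0.0.1:2552/user/serviceA/worker")
selection ! "hello"
```
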
@@ -2,9 +2,9 @@
 
 Agents in Akka are inspired by [agents in Clojure](http://clojure.org/agents).
 
-@@@ warning
+@@@ warning { title="Deprecation warning" }
 
-**Deprecation warning** - Agents have been deprecated and are scheduled for removal
+Agents have been deprecated and are scheduled for removal
 in the next major version. We have found that their leaky abstraction (they do not
 work over the network) make them inferior to pure Actors, and in face of the soon
 inclusion of Akka Typed we see little value in maintaining the current Agents.

@@ -120,4 +120,4 @@ that transaction. If you send to an Agent within a transaction then the dispatch
 to the Agent will be held until that transaction commits, and discarded if the
 transaction is aborted. Here's an example:
 
-@@snip [AgentDocSpec.scala]($code$/scala/docs/agent/AgentDocSpec.scala) { #transfer-example }
\ No newline at end of file
+@@snip [AgentDocSpec.scala]($code$/scala/docs/agent/AgentDocSpec.scala) { #transfer-example }

@@ -42,23 +42,23 @@ The `ClusterClientReceptionist` sends out notifications in relation to having re
 from a `ClusterClient`. This notification enables the server containing the receptionist to become aware of
 what clients are connected.
 
-**1. ClusterClient.Send**
+1. **ClusterClient.Send**
 
-The message will be delivered to one recipient with a matching path, if any such
-exists. If several entries match the path the message will be delivered
-to one random destination. The sender() of the message can specify that local
-affinity is preferred, i.e. the message is sent to an actor in the same local actor
-system as the used receptionist actor, if any such exists, otherwise random to any other
-matching entry.
+    The message will be delivered to one recipient with a matching path, if any such
+    exists. If several entries match the path the message will be delivered
+    to one random destination. The sender() of the message can specify that local
+    affinity is preferred, i.e. the message is sent to an actor in the same local actor
+    system as the used receptionist actor, if any such exists, otherwise random to any other
+    matching entry.
 
-**2. ClusterClient.SendToAll**
+2. **ClusterClient.SendToAll**
 
-The message will be delivered to all recipients with a matching path.
+    The message will be delivered to all recipients with a matching path.
 
-**3. ClusterClient.Publish**
+3. **ClusterClient.Publish**
 
-The message will be delivered to all recipients Actors that have been registered as subscribers
-to the named topic.
+    The message will be delivered to all recipient Actors that have been registered as subscribers
+    to the named topic.
 
 Response messages from the destination actor are tunneled via the receptionist
 to avoid inbound connections from other cluster nodes to the client, i.e.
 
@@ -193,4 +193,4 @@ within a configurable interval. This is configured with the `reconnect-timeout`,
 This can be useful when initial contacts are provided from some kind of service registry, cluster node addresses
 are entirely dynamic and the entire cluster might shut down or crash, be restarted on new addresses. Since the
 client will be stopped in that case a monitoring actor can watch it and upon `Terminated` a new set of initial
-contacts can be fetched and a new cluster client started.
\ No newline at end of file
+contacts can be fetched and a new cluster client started.

@@ -57,7 +57,6 @@ identifier and the shard identifier from incoming messages.
 
 This example illustrates two different ways to define the entity identifier in the messages:
 
->
 * The `Get` message includes the identifier itself.
 * The `EntityEnvelope` holds the identifier, and the actual message that is
   sent to the entity actor is wrapped in the envelope.
 
@@ -420,4 +419,4 @@ a `ShardRegion.ClusterShardingStats` containing the identifiers of the shards ru
 of entities that are alive in each shard.
 
 The purpose of these messages is testing and monitoring; they are not provided to give access to
-directly sending messages to the individual entities.
\ No newline at end of file
+directly sending messages to the individual entities.

@@ -38,7 +38,6 @@ You can read more about parallelism in the JDK's [ForkJoinPool documentation](ht
 
 Another example that uses the "thread-pool-executor":
 
->
 @@snip [DispatcherDocSpec.scala]($code$/scala/docs/dispatcher/DispatcherDocSpec.scala) { #fixed-pool-size-dispatcher-config }
 
 @@@ note
 
@@ -75,33 +74,37 @@ where you'd use periods to denote sub-sections, like this: `"foo.bar.my-dispatch
 
 There are 3 different types of message dispatchers:
 
-* Dispatcher
-  * This is an event-based dispatcher that binds a set of Actors to a thread pool. It is the default dispatcher
-    used if one is not specified.
+* **Dispatcher**
+
+  This is an event-based dispatcher that binds a set of Actors to a thread
+  pool. It is the default dispatcher used if one is not specified.
+
   * Sharability: Unlimited
   * Mailboxes: Any, creates one per Actor
   * Use cases: Default dispatcher, Bulkheading
-  *
-    Driven by:
-    `java.util.concurrent.ExecutorService`
-    : specify using "executor" using "fork-join-executor",
-    "thread-pool-executor" or the FQCN of
-    an `akka.dispatcher.ExecutorServiceConfigurator`
+  * Driven by: `java.util.concurrent.ExecutorService`.
+    Specify using "executor" using "fork-join-executor", "thread-pool-executor" or the FQCN of
+    an `akka.dispatcher.ExecutorServiceConfigurator`.
 
-* PinnedDispatcher
-  * This dispatcher dedicates a unique thread for each actor using it; i.e. each actor will have its own thread pool with only one thread in the pool.
+* **PinnedDispatcher**
+
+  This dispatcher dedicates a unique thread for each actor using it; i.e.
+  each actor will have its own thread pool with only one thread in the pool.
+
   * Sharability: None
   * Mailboxes: Any, creates one per Actor
   * Use cases: Bulkheading
-  *
-    Driven by: Any
-    `akka.dispatch.ThreadPoolExecutorConfigurator`
-    : by default a "thread-pool-executor"
+  * Driven by: Any `akka.dispatch.ThreadPoolExecutorConfigurator`.
+    By default a "thread-pool-executor".
 
-* CallingThreadDispatcher
-  * This dispatcher runs invocations on the current thread only. This dispatcher does not create any new threads,
-    but it can be used from different threads concurrently for the same actor. See @ref:[Scala-CallingThreadDispatcher](testing.md#scala-callingthreaddispatcher)
-    for details and restrictions.
+* **CallingThreadDispatcher**
+
+  This dispatcher runs invocations on the current thread only. This
+  dispatcher does not create any new threads, but it can be used from
+  different threads concurrently for the same actor.
+  See @ref:[Scala-CallingThreadDispatcher](testing.md#scala-callingthreaddispatcher)
+  for details and restrictions.
+
   * Sharability: Unlimited
   * Mailboxes: Any, creates one per Actor per Thread (on demand)
   * Use cases: Testing
 
@@ -136,4 +139,4 @@ and that pool will have only one thread.
 Note that it's not guaranteed that the *same* thread is used over time, since the core pool timeout
 is used for `PinnedDispatcher` to keep resource usage down in case of idle actors. To use the same
 thread all the time you need to add `thread-pool-executor.allow-core-timeout=off` to the
-configuration of the `PinnedDispatcher`.
\ No newline at end of file
+configuration of the `PinnedDispatcher`.

@@ -16,7 +16,7 @@ Read the following source code. The inlined comments explain the different piece
 the fault handling and why they are added. It is also highly recommended to run this
 sample as it is easy to follow the log output to understand what is happening at runtime.
 
-@@toc
+@@toc { depth=1 }
 
 @@@ index
 
@@ -164,4 +164,4 @@ different supervisor which overrides this behavior.
 With this parent, the child survives the escalated restart, as demonstrated in
 the last test:
 
-@@snip [FaultHandlingDocSpec.scala]($code$/scala/docs/actor/FaultHandlingDocSpec.scala) { #escalate-restart }
\ No newline at end of file
+@@snip [FaultHandlingDocSpec.scala]($code$/scala/docs/actor/FaultHandlingDocSpec.scala) { #escalate-restart }

@@ -7,13 +7,11 @@ is best described in the [Erlang design principles](http://www.erlang.org/docume
 
 A FSM can be described as a set of relations of the form:
 
->
-**State(S) x Event(E) -> Actions (A), State(S')**
+> **State(S) x Event(E) -> Actions (A), State(S')**
 
 These relations are interpreted as meaning:
 
->
-*If we are in state S and the event E occurs, we should perform the actions A
+> *If we are in state S and the event E occurs, we should perform the actions A
 and make a transition to the state S'.*
 
 ## A Simple Example
 
@@ -50,7 +48,6 @@ The basic strategy is to declare the actor, mixing in the `FSM` trait
 and specifying the possible states and data values as type parameters. Within
 the body of the actor a DSL is used for declaring the state machine:
 
->
 * `startWith` defines the initial state and initial data
 * then there is one `when(<state>) { ... }` declaration per state to be
   handled (could potentially be multiple ones, the passed
 
@@ -131,7 +128,6 @@ the FSM logic.
 
 The `FSM` trait takes two type parameters:
 
->
 1. the supertype of all state names, usually a sealed trait with case objects
    extending it,
 2. the type of the state data which are tracked by the `FSM` module
 
@@ -150,8 +146,9 @@ internal state explicit in a few well-known places.
 
 A state is defined by one or more invocations of the method
 
->
-`when(<name>[, stateTimeout = <timeout>])(stateFunction)`.
+```
+when(<name>[, stateTimeout = <timeout>])(stateFunction)
+```
 
 The given name must be an object which is type-compatible with the first type
 parameter given to the `FSM` trait. This object is used as a hash key,
 
@@ -194,8 +191,9 @@ it still needs to be declared like this:
 
 Each FSM needs a starting point, which is declared using
 
->
-`startWith(state, data[, timeout])`
+```
+startWith(state, data[, timeout])
+```
 
 The optionally given timeout argument overrides any specification given for the
 desired initial state. If you want to cancel a default timeout, use
 
@@ -275,8 +273,9 @@ Up to this point, the FSM DSL has been centered on states and events. The dual
 view is to describe it as a series of transitions. This is enabled by the
 method
 
->
-`onTransition(handler)`
+```
+onTransition(handler)
+```
 
 which associates actions with a transition instead of with a state and event.
 The handler is a partial function which takes a pair of states as input; no
 
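A short Scala sketch of such a transition handler, assuming the states and a `Tick` timer message from a surrounding `FSM` definition:

```scala
onTransition {
  case Idle -> Active => setTimer("tick", Tick, 1.second, repeat = true)
  case Active -> Idle => cancelTimer("tick")
}
```
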
@@ -354,8 +353,9 @@ be used several times, e.g. when applying the same transformation to several
 Besides state timeouts, FSM manages timers identified by `String` names.
 You may set a timer using
 
->
-`setTimer(name, msg, interval, repeat)`
+```
+setTimer(name, msg, interval, repeat)
+```
 
 where `msg` is the message object which will be sent after the duration
 `interval` has elapsed. If `repeat` is `true`, then the timer is
 
@@ -365,15 +365,17 @@ adding the new timer.
 
 Timers may be canceled using
 
->
-`cancelTimer(name)`
+```
+cancelTimer(name)
+```
 
 which is guaranteed to work immediately, meaning that the scheduled message
 will not be processed after this call even if the timer already fired and
 queued it. The status of any timer may be inquired with
 
->
-`isTimerActive(name)`
+```
+isTimerActive(name)
+```
 
 These named timers complement state timeouts because they are not affected by
 intervening reception of other messages.
 
@@ -382,8 +384,9 @@ intervening reception of other messages.
 
 The FSM is stopped by specifying the result state as
 
->
-`stop([reason[, data]])`
+```
+stop([reason[, data]])
+```
 
 The reason must be one of `Normal` (which is the default), `Shutdown`
 or `Failure(reason)`, and the second argument may be given to change the
 
@@ -440,7 +443,6 @@ event trace by `LoggingFSM` instances:
 
 This FSM will log at DEBUG level:
 
->
 * all processed events, including `StateTimeout` and scheduled timer
   messages
 * every setting and cancellation of named timers
 
@@ -478,4 +480,4 @@ zero.
 A bigger FSM example contrasted with Actor's `become`/`unbecome` can be
 downloaded as a ready to run @extref[Akka FSM sample](ecs:akka-samples-fsm-scala)
 together with a tutorial. The source code of this sample can be found in the
-@extref[Akka Samples Repository](samples:akka-sample-fsm-scala).
\ No newline at end of file
+@extref[Akka Samples Repository](samples:akka-sample-fsm-scala).

@@ -166,4 +166,4 @@ actor, which in turn will recursively stop all its child actors, the system
 guardian.
 
 If you want to execute some operations while terminating `ActorSystem`,
-look at @ref:[`CoordinatedShutdown`](../actors.md#coordinated-shutdown).
\ No newline at end of file
+look at @ref:[`CoordinatedShutdown`](../actors.md#coordinated-shutdown).

@@ -169,7 +169,7 @@ environment independent settings and then override some settings for specific en

Specifying system property with `-Dconfig.resource=/dev.conf` will load the `dev.conf` file, which includes the `application.conf`

dev.conf:
### dev.conf

```
include "application"
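A minimal sketch of what such an environment-specific file could contain; the `loglevel` override is a hypothetical example, not taken from the diff:

```
include "application"

akka {
  # hypothetical override for the development environment
  loglevel = "DEBUG"
}
```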
@@ -211,7 +211,8 @@ res1: java.lang.String =

# String: 1
"b" : 12
}
}```
}
```

The comments preceding every item give detailed information about the origin of
the setting (file & line number) plus possible comments which were present,
@@ -483,4 +484,4 @@ Each Akka module has a reference configuration file with the default values.

<a id="config-distributed-data"></a>
### akka-distributed-data

@@snip [reference.conf]($akka$/akka-distributed-data/src/main/resources/reference.conf)
@@snip [reference.conf]($akka$/akka-distributed-data/src/main/resources/reference.conf)
@@ -108,4 +108,4 @@ is the only one trying, the operation will succeed.

>
* The Art of Multiprocessor Programming, M. Herlihy and N Shavit, 2008. ISBN 978-0123705914
* Java Concurrency in Practice, B. Goetz, T. Peierls, J. Bloch, J. Bowbeer, D. Holmes and D. Lea, 2006. ISBN 978-0321349606
* Java Concurrency in Practice, B. Goetz, T. Peierls, J. Bloch, J. Bowbeer, D. Holmes and D. Lea, 2006. ISBN 978-0321349606
@@ -21,6 +21,6 @@ The easiest way is to use the @scala[`akka-scala-seed` in [Get started with Ligh

1. Unzip the zip file and rename the directory to your preference.

1. Read the @scala[[Guide](http://developer.lightbend.com/guides/hello-akka/)] @java[[Guide](http://developer.lightbend.com/guides/hello-akka/)] for this example project. It describes the example, the basic concepts of actors and how to run the "Hello World" application. ** **FIXME the Guide is in progress [here](https://github.com/akka/akka-scala-seed.g8/pull/4/files#diff-179702d743b88d85b3971cba561e6ace)**.
1. Read the @scala[[Guide](http://developer.lightbend.com/guides/hello-akka/)] @java[[Guide](http://developer.lightbend.com/guides/hello-akka/)] for this example project. It describes the example, the basic concepts of actors and how to run the "Hello World" application. **FIXME the Guide is in progress [here](https://github.com/akka/akka-scala-seed.g8/pull/4/files#diff-179702d743b88d85b3971cba561e6ace)**.

After that you can go back here and you are ready to dive deeper.
@@ -100,9 +100,13 @@ message is preserved in the upper layers.* We will show you in the next section

We also add a safeguard against requests that come with a mismatched group or device ID. This is how the resulting
code looks:

>@scala[NOTE: We used a feature of scala pattern matching where we can match if a certain field equals to an expected
@@@ note { .group-scala }

We used a feature of Scala pattern matching where we can match if a certain field equals an expected
value. This is achieved by variables included in backticks, like `` `variable` ``, and it means that the pattern
only match if it contains the value of `variable` in that position.]
only matches if it contains the value of `variable` in that position.

@@@

Scala
: @@snip [Device.scala]($code$/scala/tutorial_3/Device.scala) { #device-with-register }
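As a generic illustration of this Scala feature (unrelated to the tutorial's actual messages), a pattern with a back-quoted variable only matches the value held by that variable:

```scala
val expectedId = 42L

def check(id: Long): String = id match {
  case `expectedId` => "matches the expected id" // only matches 42L
  case other        => s"unexpected id $other"
}

check(42L) // "matches the expected id"
check(7L)  // "unexpected id 7"
```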
@@ -113,11 +117,15 @@ Java

We should not leave features untested, so we immediately write two new test cases, one exercising successful
registration, the other testing the case when IDs don't match:

> NOTE: We used the `expectNoMsg()` helper method from @scala[`TestProbe`] @java[`TestKit`]. This assertion waits until the defined time-limit
@@@ note

We used the `expectNoMsg()` helper method from @scala[`TestProbe`] @java[`TestKit`]. This assertion waits until the defined time-limit
and fails if it receives any messages during this period. If no messages are received during the waiting period the
assertion passes. It is usually a good idea to keep these timeouts low (but not too low) because they add significant
test execution time otherwise.

@@@

Scala
: @@snip [DeviceSpec.scala]($code$/scala/tutorial_3/DeviceSpec.scala) { #device-registration-tests }
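A minimal sketch of that assertion style with `TestProbe`; the system name and the 500 milliseconds bound are illustrative values:

```scala
import akka.actor.ActorSystem
import akka.testkit.TestProbe
import scala.concurrent.duration._

implicit val system: ActorSystem = ActorSystem("test")

val probe = TestProbe()
// fails if any message arrives within the window, passes otherwise
probe.expectNoMsg(500.millis)
```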
@@ -3,7 +3,6 @@

UDP is a connectionless datagram protocol which offers two different ways of
communication on the JDK level:

>
* sockets which are free to send datagrams to any destination and receive
datagrams from any origin
* sockets which are restricted to communication with one specific remote
@@ -98,4 +97,4 @@ Another socket option will be needed to join a multicast group.

Socket options must be provided to the `UdpMessage.Bind` message.

@@snip [ScalaUdpMulticast.scala]($code$/scala/docs/io/ScalaUdpMulticast.scala) { #bind }
@@snip [ScalaUdpMulticast.scala]($code$/scala/docs/io/ScalaUdpMulticast.scala) { #bind }
@@ -307,7 +307,6 @@ stdout logger is `WARNING` and it can be silenced completely by setting

Akka provides a logger for [SLF4J](http://www.slf4j.org/). This module is available in the 'akka-slf4j.jar'.
It has a single dependency: the slf4j-api jar. In your runtime, you also need a SLF4J backend. We recommend [Logback](http://logback.qos.ch/):

>
```scala
libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.1.3"
```
@@ -543,4 +542,4 @@ shown below:

```scala
val log = Logging(system.eventStream, "my.nice.string")
```
```
@@ -37,7 +37,7 @@ type to `PersistentActor` s and @ref:[persistence queries](persistence-query.md)

instead of having to explicitly deal with different schemas.

In summary, schema evolution in event sourced systems exposes the following characteristics:
:

* Allow the system to continue operating without large scale migrations to be applied,
* Allow the system to read "old" events from the underlying storage, however present them in a "new" view to the application logic,
* Transparently promote events to the latest versions during recovery (or queries) such that the business logic need not consider multiple versions of events
@@ -111,7 +111,7 @@ The below figure explains how the default serialization scheme works, and how it

user provided message itself, which we will from here on refer to as the `payload` (highlighted in yellow):


>

Akka Persistence provided serializers wrap the user payload in an envelope containing all persistence-relevant information.
**If the Journal uses provided Protobuf serializers for the wrapper types (e.g. PersistentRepr), then the payload will
be serialized using the user configured serializer, and if none is provided explicitly, Java serialization will be used for it.**
@@ -234,7 +234,7 @@ add the overhead of having to maintain the schema. When using serializers like t

(except renaming the field and method used during serialization) is needed to perform such evolution:


>

This is how such a rename would look in protobuf:

@@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #protobuf-rename-proto }
@@ -262,7 +262,7 @@ automatically by the serializer. You can do these kinds of "promotions" either m

or using a library like [Stamina](https://github.com/scalapenos/stamina) which helps to create those `V1->V2->V3->...->Vn` promotion chains without much boilerplate.


>

The following snippet showcases how one could apply renames if working with plain JSON (using `spray.json.JsObject`):

@@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #rename-plain-json }
@@ -294,7 +294,7 @@ for the recovery mechanisms that this entails. For example, a naive way of filte

being delivered to a recovering `PersistentActor` is pretty simple, as one can simply filter them out in an @ref:[EventAdapter](persistence.md#event-adapters):


>

The `EventAdapter` can drop old events (**O**) by emitting an empty `EventSeq`.
Other events can simply be passed through (**E**).
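A sketch of such a filtering adapter, assuming a hypothetical `OldEvent` type that should no longer reach the actor:

```scala
import akka.persistence.journal.{ EventAdapter, EventSeq }

// hypothetical event type that should no longer be replayed
case class OldEvent(data: String)

class FilteringEventAdapter extends EventAdapter {
  override def manifest(event: Any): String = ""

  override def toJournal(event: Any): Any = event // write side unchanged

  override def fromJournal(event: Any, manifest: String): EventSeq =
    event match {
      case _: OldEvent => EventSeq.empty         // drop the obsolete event
      case other       => EventSeq.single(other) // pass everything else through
    }
}
```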
@@ -309,7 +309,6 @@ In the just described technique we have saved the PersistentActor from receiving

out in the `EventAdapter`, however the event itself still was deserialized and loaded into memory.
This has two notable *downsides*:

>
* first, that the deserialization was actually performed, so we spent some of our time budget on the
deserialization, even though the event does not contribute anything to the persistent actor's state.
* second, that we are *unable to remove the event class* from the system – since the serializer still needs to create
@@ -324,7 +323,7 @@ This can for example be implemented by using a `SerializerWithStringManifest`

that the type is no longer needed, and skip the deserialization altogether:


>

The serializer is aware of the old event types that need to be skipped (**O**), and can skip deserializing them altogether
by simply returning a "tombstone" (**T**), which the EventAdapter converts into an empty EventSeq.
Other events (**E**) can simply be passed through.
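A sketch of the tombstone idea with `SerializerWithStringManifest`; the `Tombstone` marker, the manifest string and the serializer id are illustrative assumptions, and the real serialization logic is elided:

```scala
import akka.serialization.SerializerWithStringManifest

case object Tombstone // hypothetical marker for "skipped" events

class SkippingSerializer extends SerializerWithStringManifest {
  override def identifier: Int = 1234567 // illustrative, must be unique

  private val OldEventManifest = "old-event"

  override def manifest(o: AnyRef): String = o.getClass.getName

  override def toBinary(o: AnyRef): Array[Byte] =
    Array.emptyByteArray // sketch only; real serialization elided

  override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
    manifest match {
      case OldEventManifest => Tombstone // skip deserializing removed types
      case other            => sys.error(s"sketch: handle $other here")
    }
}
```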
@@ -358,7 +357,7 @@ classes which very often may be less user-friendly yet highly optimised for thro

these types in a 1:1 style as illustrated below:


>

Domain events (**A**) are adapted to the data model events (**D**) by the `EventAdapter`.
The data model can be a format natively understood by the journal, such that it can store it more efficiently or
include additional data for the event (e.g. tags), for ease of later querying.
@@ -440,7 +439,7 @@ on what the user actually intended to change (instead of the composite `UserDeta

of our model).


>

The `EventAdapter` splits the incoming event into smaller more fine grained events during recovery.

During recovery however, we now need to convert the old `V1` model into the `V2` representation of the change.
@@ -450,4 +449,4 @@ and the address change is handled similarly:

@@snip [PersistenceSchemaEvolutionDocSpec.scala]($code$/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #split-events-during-recovery }

By returning an `EventSeq` from the event adapter, the recovered event can be converted to multiple events before
being delivered to the persistent actor.
being delivered to the persistent actor.
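For illustration, a `fromJournal` that fans one stored composite event out into two finer-grained events might look like this; all event types here are hypothetical:

```scala
import akka.persistence.journal.{ EventAdapter, EventSeq }

// hypothetical V1 composite event and its V2 replacements
case class UserDetailsChanged(name: String, address: String)
case class UserNameChanged(name: String)
case class UserAddressChanged(address: String)

class SplittingAdapter extends EventAdapter {
  override def manifest(event: Any): String = ""
  override def toJournal(event: Any): Any = event

  override def fromJournal(event: Any, manifest: String): EventSeq =
    event match {
      case UserDetailsChanged(name, address) =>
        // replay the old composite change as two fine grained events
        EventSeq(UserNameChanged(name), UserAddressChanged(address))
      case other => EventSeq.single(other)
    }
}
```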
@@ -701,10 +701,11 @@ Artery has been designed for low latency and as a result it can be CPU hungry wh

This is not always desirable. It is possible to tune the tradeoff between CPU usage and latency with
the following configuration:

>
```
# Values can be from 1 to 10, where 10 strongly prefers low latency
# and 1 strongly prefers less CPU usage
akka.remote.artery.advanced.idle-cpu-level = 1
```

By setting this value to a lower number, it tells Akka to do longer "sleeping" periods on its thread dedicated
for [spin-waiting](https://en.wikipedia.org/wiki/Busy_waiting) and hence reducing CPU load when there is no
@@ -787,4 +788,4 @@ akka {

}
}
}
```
```
@@ -658,4 +658,4 @@ Keep in mind that local.address will most likely be in one of private network ra

* *172.16.0.0 - 172.31.255.255* (network class B)
* *192.168.0.0 - 192.168.255.255* (network class C)

For further details see [RFC 1597]([https://tools.ietf.org/html/rfc1597](https://tools.ietf.org/html/rfc1597)) and [RFC 1918]([https://tools.ietf.org/html/rfc1918](https://tools.ietf.org/html/rfc1918)).
For further details see [RFC 1597](https://tools.ietf.org/html/rfc1597) and [RFC 1918](https://tools.ietf.org/html/rfc1918).
@@ -1,6 +1,6 @@

# Java Serialization, Fixed in Akka 2.4.17

## Date
### Date

10 February 2017
@@ -52,4 +52,4 @@ It will also be fixed in 2.5-M2 or 2.5.0-RC1.

### Acknowledgements

We would like to thank Alvaro Munoz at Hewlett Packard Enterprise Security & Adrian Bravo at Workday for their thorough investigation and bringing this issue to our attention.
We would like to thank Alvaro Munoz at Hewlett Packard Enterprise Security & Adrian Bravo at Workday for their thorough investigation and bringing this issue to our attention.
@@ -24,10 +24,10 @@ to ensure that a fix can be provided without delay.

## Fixed Security Vulnerabilities

@@toc { depth=1 }
@@toc { .list depth=1 }

@@@ index

* [2017-02-10-java-serialization](2017-02-10-java-serialization.md)

@@@
@@@
File diff suppressed because it is too large
@@ -11,12 +11,8 @@ be processed arrive and leave the stage. In this view, a `Source` is nothing els

output port, or, a `BidiFlow` is a "box" with exactly two input and two output ports. In the figure below
we illustrate the most commonly used stages viewed as "boxes".



The *linear* stages are `Source`, `Sink`
and `Flow`, as these can be used to compose strict chains of processing stages.
Fan-in and Fan-out stages usually have multiple input or multiple output ports, therefore they allow building
@@ -35,12 +31,8 @@ to interact with. One good example is the `Http` server component, which is enco

The following figure demonstrates various composite stages that contain various other types of stages internally,
hiding them behind a *shape* that looks like a `Source`, `Flow`, etc.



One interesting example above is a `Flow` which is composed of a disconnected `Sink` and `Source`.
This can be achieved by using the `fromSinkAndSource()` constructor method on `Flow` which takes the two parts as
parameters.
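As a quick sketch of that constructor, with element types chosen arbitrarily here:

```scala
import akka.NotUsed
import akka.stream.scaladsl.{ Flow, Sink, Source }

// the input side discards everything, while the output side
// emits an unrelated stream of numbers
val disconnected: Flow[String, Int, NotUsed] =
  Flow.fromSinkAndSource(Sink.ignore, Source(1 to 3))
```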
@@ -62,12 +54,8 @@ These mechanics allow arbitrary nesting of modules. For example the following fi

that is built from a composite `Source` and a composite `Sink` (which in turn contains a composite
`Flow`).



The above diagram contains one more shape that we have not seen yet, which is called `RunnableGraph`. It turns
out that if we wire all exposed ports together, so that no more open ports remain, we get a module that is *closed*.
This is what the `RunnableGraph` class represents. This is the shape that a `Materializer` can take
@@ -93,12 +81,8 @@ Once we have hidden the internals of our components, they act like any other bui

we hide some of the internals of our composites, the result looks just as if any other predefined component had been
used:



If we look at usage of built-in components, and our custom components, there is no difference in usage as the code
snippet below demonstrates.
@@ -115,12 +99,8 @@ operate on are uniform across all DSLs and fit together nicely.

As a first example, let's look at a more complex layout:



The diagram shows a `RunnableGraph` (remember, if there are no unwired ports, the graph is closed, and therefore
can be materialized) that encapsulates a non-trivial stream processing network. It contains fan-in, fan-out stages,
directed and non-directed cycles. The `runnable()` method of the `GraphDSL` object allows the creation of a
@@ -134,19 +114,13 @@ explicitly, and it is not necessary to import our linear stages via `add()`, so

@@snip [CompositionDocSpec.scala]($code$/scala/docs/stream/CompositionDocSpec.scala) { #complex-graph-alt }

Similar to the case in the first section, so far we have not considered modularity. We created a complex graph, but
the layout is flat, not modularized. We will modify our example, and create a reusable component with the graph DSL.
The way to do it is to use the `create()` factory method on `GraphDSL`. If we remove the sources and sinks
from the previous example, what remains is a partial graph:



We can recreate a similar graph in code, using the DSL in a similar way as before:

@@snip [CompositionDocSpec.scala]($code$/scala/docs/stream/CompositionDocSpec.scala) { #partial-graph }
@@ -160,12 +134,8 @@ matching built-in ones.

The resulting graph is already a properly wrapped module, so there is no need to call *named()* to encapsulate the graph, but
it is a good practice to give names to modules to help debugging.



Since our partial graph has the right shape, it can already be used in the simpler, linear DSL:

@@snip [CompositionDocSpec.scala]($code$/scala/docs/stream/CompositionDocSpec.scala) { #partial-use }
@@ -176,12 +146,8 @@ has a `fromGraph()` method that just adds the DSL to a `FlowShape`. There are si

For convenience, it is also possible to skip the partial graph creation, and use one of the convenience creator methods.
To demonstrate this, we will create the following graph:



The code version of the above closed graph might look like this:

@@snip [CompositionDocSpec.scala]($code$/scala/docs/stream/CompositionDocSpec.scala) { #partial-flow-dsl }
@@ -237,12 +203,8 @@ graphically demonstrates what is happening.

The propagation of the individual materialized values from the enclosed modules towards the top will look like this:



To implement the above, first, we create a composite `Source`, where the enclosed `Source` has a
materialized type of `Promise[Option[Int]]`. By using the combiner function `Keep.left`, the resulting materialized
type is of the nested module (indicated by the color *red* on the diagram):
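A sketch of what such a composite `Source` could look like; `Source.maybe` materializes as a `Promise[Option[Int]]`, and `Keep.left` keeps that value through the composition (the name "nestedSource" is illustrative):

```scala
import akka.stream.scaladsl.{ Flow, Keep, Source }
import scala.concurrent.Promise

val nestedSource: Source[Int, Promise[Option[Int]]] =
  Source.maybe[Int]
    .viaMat(Flow[Int].map(_ * 2))(Keep.left) // keep the Promise of Source.maybe
    .named("nestedSource")
```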
@@ -296,11 +258,7 @@ the same attribute explicitly set. `nestedSource` gets the default attributes fr

on the other hand has this attribute set, so it will be used by all nested modules. `nestedFlow` will inherit from `nestedSink`
except the `map` stage which has again an explicitly provided attribute overriding the inherited one.



This diagram illustrates the inheritance process for the example code (representing the materializer default attributes
as the color *red*, the attributes set on `nestedSink` as *blue* and the attributes set on `nestedFlow` as *green*).
as the color *red*, the attributes set on `nestedSink` as *blue* and the attributes set on `nestedFlow` as *green*).
@@ -95,12 +95,8 @@ the initial state while orange indicates the end state. If an operation is not l

to call it while the port is in that state. If an event is not listed for a state, then that event cannot happen
in that state.



The following operations are available for *input* ports:

* `pull(in)` requests a new element from an input port. This is only possible after the port has been pushed by upstream.
@@ -130,12 +126,8 @@ the initial state while orange indicates the end state. If an operation is not l

to call it while the port is in that state. If an event is not listed for a state, then that event cannot happen
in that state.



Finally, there are two methods available for convenience to complete the stage and all of its ports:

* `completeStage()` is equivalent to closing all output ports and cancelling all input ports.
@@ -147,7 +139,7 @@ of actions which will greatly simplify some use cases at the cost of some extra

between the two APIs could be described as follows: the first one is signal driven from the outside, while this API
is more active and drives its surroundings.

The operations of this part of the :class:`GraphStage` API are:
The operations of this part of the `GraphStage` API are:

* `emit(out, elem)` and `emitMultiple(out, Iterable(elem1, elem2))` replace the `OutHandler` with a handler that emits
one or more elements when there is demand, and then reinstalls the current handlers
@@ -161,7 +153,7 @@ The following methods are safe to call after invoking `emit` and `read` (and wil

operation when those are done): `complete(out)`, `completeStage()`, `emit`, `emitMultiple`, `abortEmitting()`
and `abortReading()`

An example of how this API simplifies a stage can be found below in the second version of the :class:`Duplicator`.
An example of how this API simplifies a stage can be found below in the second version of the `Duplicator`.

### Custom linear processing stages using GraphStage
@@ -172,20 +164,12 @@ Such a stage can be illustrated as a box with two flows as it is

seen in the illustration below. Demand flowing upstream leading to elements
flowing downstream.



To illustrate these concepts we create a small `GraphStage` that implements the `map` transformation.



Map calls `push(out)` from the `onPush()` handler and it also calls `pull()` from the `onPull` handler resulting in the
conceptual wiring above, and fully expressed in code below:
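The referenced code is not part of this diff; a sketch of such a map stage, following the handler wiring just described, could look like this:

```scala
import akka.stream.{ Attributes, FlowShape, Inlet, Outlet }
import akka.stream.stage.{ GraphStage, GraphStageLogic, InHandler, OutHandler }

class Map[A, B](f: A => B) extends GraphStage[FlowShape[A, B]] {

  val in = Inlet[A]("Map.in")
  val out = Outlet[B]("Map.out")

  override val shape = FlowShape.of(in, out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) {
      setHandler(in, new InHandler {
        // push the transformed element downstream on every upstream push
        override def onPush(): Unit = push(out, f(grab(in)))
      })
      setHandler(out, new OutHandler {
        // translate downstream demand into an upstream pull
        override def onPull(): Unit = pull(in)
      })
    }
}
```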
@@ -197,12 +181,8 @@ demand is passed along upstream elements passed on downstream.

To demonstrate a many-to-one stage we will implement
filter. The conceptual wiring of `Filter` looks like this:



As we see above, if the given predicate matches the current element we are propagating it downwards, otherwise
we return the “ball” to our upstream so that we get the new element. This is achieved by modifying the map
example by adding a conditional in the `onPush` handler and deciding between a `pull(in)` or `push(out)` call
@@ -213,12 +193,8 @@ example by adding a conditional in the `onPush` handler and decide between a `pu

To complete the picture we define a one-to-many transformation as the next step. We chose a straightforward example stage
that emits every upstream element twice downstream. The conceptual wiring of this stage looks like this:



This is a stage that has state: an option with the last element it has seen indicating if it
has duplicated this last element already or not. We must also make sure to emit the extra element
if the upstream completes.
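For reference, the simpler variant mentioned above, which uses `emitMultiple` instead of manual bookkeeping, might be sketched as follows:

```scala
import akka.stream.{ Attributes, FlowShape, Inlet, Outlet }
import akka.stream.stage.{ GraphStage, GraphStageLogic, InHandler, OutHandler }

class Duplicator[A] extends GraphStage[FlowShape[A, A]] {

  val in = Inlet[A]("Duplicator.in")
  val out = Outlet[A]("Duplicator.out")

  override val shape = FlowShape.of(in, out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) {
      setHandler(in, new InHandler {
        override def onPush(): Unit = {
          val elem = grab(in)
          // emit the element twice; emitMultiple temporarily replaces
          // the OutHandler and reinstates it when done
          emitMultiple(out, List(elem, elem))
        }
      })
      setHandler(out, new OutHandler {
        override def onPull(): Unit = pull(in)
      })
    }
}
```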
@@ -241,12 +217,8 @@ reinstate the original handlers:

Finally, to demonstrate all of the stages above, we put them together into a processing chain,
which conceptually would correspond to the following structure:



In code this is only a few lines, using `via` to use our custom stages in a stream:

@@snip [GraphStageDocSpec.scala]($code$/scala/docs/stream/GraphStageDocSpec.scala) { #graph-stage-chain }
@@ -254,12 +226,8 @@ In code this is only a few lines, using the `via` use our custom stages in a str

If we attempt to draw the sequence of events, it shows that there is one "event token"
in circulation in a potential chain of stages, just like our conceptual "railroad tracks" representation predicts.



### Completion

Completion handling usually (but not exclusively) comes into the picture when processing stages need to emit
@@ -403,21 +371,13 @@ The next diagram illustrates the event sequence for a buffer with capacity of tw

the downstream demand is slow to start and the buffer will fill up with upstream elements before any demand
is seen from downstream.



Another scenario would be where the demand from downstream starts coming in before any element is pushed
into the buffer stage.



The first difference we can notice is that our `Buffer` stage is automatically pulling its upstream on
initialization. The buffer has demand for up to two elements without any downstream demand.
@@ -487,4 +447,4 @@ shown in the linked sketch the author encountered such a density of compiler Sta

that he gave up).

It is interesting to note that a simplified form of this problem has found its way into the [dotty test suite](https://github.com/lampepfl/dotty/pull/1186/files).
Dotty is the development version of Scala on its way to Scala 3.
Dotty is the development version of Scala on its way to Scala 3.
@@ -250,12 +250,8 @@ work on the tasks in parallel. It is important to note that asynchronous boundar

flow where elements are passed asynchronously (as in other streaming libraries), but instead attributes always work
by adding information to the flow graph that has been constructed up to this point:



This means that everything that is inside the red bubble will be executed by one actor and everything outside of it
by another. This scheme can be applied successively, always having one such boundary enclose the previous ones plus all
processing stages that have been added since them.
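A sketch of introducing such a boundary with the `async` attribute; the stages themselves are arbitrary examples:

```scala
import akka.stream.scaladsl.{ Sink, Source }

// everything before .async runs on one actor, the rest on another
val graph =
  Source(1 to 10)
    .map(_ + 1)
    .async
    .map(_ * 2)
    .to(Sink.ignore)
```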
@@ -308,4 +304,4 @@ been signalled already – thus the ordering in the case of zipping is defined b

If you find yourself in need of fine grained control over order of emitted elements in fan-in
scenarios consider using `MergePreferred` or `GraphStage` – which gives you full control over how the
merge is performed.
merge is performed.
@@ -88,11 +88,15 @@ example we make a small detour to highlight some of the theory behind this.

## A Little Bit of Theory

The [Actor Model](http://en.wikipedia.org/wiki/Actor_model1. send a finite number of messages to Actors it knows2. create a finite number of new Actors3. designate the behavior to be applied to the next message) as defined by Hewitt, Bishop and Steiger in 1973 is a
computational model that expresses exactly what it means for computation to be
distributed. The processing units—Actors—can only communicate by exchanging
messages and upon reception of a message an Actor can do the following three
fundamental actions:
The [Actor Model](http://en.wikipedia.org/wiki/Actor_model) as defined by
Hewitt, Bishop and Steiger in 1973 is a computational model that expresses
exactly what it means for computation to be distributed. The processing
units—Actors—can only communicate by exchanging messages and upon reception of a
message an Actor can do the following three fundamental actions:

1. send a finite number of messages to Actors it knows
2. create a finite number of new Actors
3. designate the behavior to be applied to the next message

The Akka Typed project expresses these actions using behaviors and addresses.
Messages can be sent to an address and behind this façade there is a behavior
@@ -284,4 +288,4 @@ having to worry about timeouts and spurious failures. Another side-effect is

that behaviors can nicely be composed and decorated, see the `And`,
`Or`, `Widened`, `ContextAware` combinators; nothing about
these is special or internal, new combinators can be written as external
libraries or tailor-made for each project.
libraries or tailor-made for each project.