Remove blockquotes around lists in the docs

Peter Vlugter, 2017-05-19 11:36:53 +12:00
parent 9cc0f5ea98
commit bcd311a035
27 changed files with 49 additions and 100 deletions


@@ -390,7 +390,6 @@ akka-camel may make some further modifications to it.
 The sample named @extref[Akka Camel Samples with Java](ecs:akka-samples-camel-java) (@extref[source code](samples:akka-sample-camel-java))
 contains 3 samples:
->
 * Asynchronous routing and transformation - This example demonstrates how to implement consumer and
 producer actors that support [Asynchronous routing](#camel-asynchronous-routing) with their Camel endpoints.
 * Custom Camel route - Demonstrates the combined usage of a `Producer` and a


@@ -24,7 +24,6 @@ while in the second case this is not automatically done. The second parameter
 to `Logging.getLogger` is the source of this logging channel. The source
 object is translated to a String according to the following rules:
->
 * if it is an Actor or ActorRef, its path is used
 * in case of a String it is used as is
 * in case of a class an approximation of its simpleName


@@ -319,7 +319,6 @@ There is no Group variant of the BalancingPool.
 A Router that tries to send to the non-suspended child routee with fewest messages in mailbox.
 The selection is done in this order:
->
 * pick any idle routee (not processing message) with empty mailbox
 * pick any routee with empty mailbox
 * pick routee with fewest pending messages in mailbox


@@ -190,7 +190,6 @@ encoded in the provided `RunnableGraph`. To be able to interact with the running
 needs to return a different object that provides the necessary interaction capabilities. In other words, the
 `RunnableGraph` can be seen as a factory, which creates:
->
 * a network of running processing entities, inaccessible from the outside
 * a materialized value, optionally providing a controlled interaction capability with the network


@@ -314,7 +314,6 @@ by calling `getStageActorRef(receive)` passing in a function that takes a `Pair`
 or `unwatch(ref)` methods. The reference can be also watched by external actors. The current limitations of this
 `ActorRef` are:
->
 * they are not location transparent, they cannot be accessed via remoting.
 * they cannot be returned as materialized values.
 * they cannot be accessed from the constructor of the `GraphStageLogic`, but they can be accessed from the


@@ -228,7 +228,6 @@ yet will materialize that stage multiple times.
 By default Akka Streams will fuse the stream operators. This means that the processing steps of a flow or
 stream graph can be executed within the same Actor and has two consequences:
->
 * passing elements from one processing stage to the next is a lot faster between fused
 stages due to avoiding the asynchronous messaging overhead
 * fused stream processing stages does not run in parallel to each other, meaning that


@@ -22,7 +22,6 @@ Akka Streams currently provide these junctions (for a detailed list see @ref:[st
 * **Fan-out**
->
 * `Broadcast<T>` *(1 input, N outputs)* given an input element emits to each output
 * `Balance<T>` *(1 input, N outputs)* given an input element emits to one of its output ports
 * `UnzipWith<In,A,B,...>` *(1 input, N outputs)* takes a function of 1 input that given a value for each input emits N output elements (where N <= 20)
@@ -30,7 +29,6 @@ Akka Streams currently provide these junctions (for a detailed list see @ref:[st
 * **Fan-in**
->
 * `Merge<In>` *(N inputs , 1 output)* picks randomly from inputs pushing them one by one to its output
 * `MergePreferred<In>` like `Merge` but if elements are available on `preferred` port, it picks from it, otherwise randomly from `others`
 * `ZipWith<A,B,...,Out>` *(N inputs, 1 output)* which takes a function of N inputs that given a value for each input emits 1 output element


@@ -28,7 +28,6 @@ This is how this setup would look like implemented as a stream:
 The two `map` stages in sequence (encapsulated in the "frying pan" flows) will be executed in a pipelined way,
 basically doing the same as Roland with his frying pans:
->
 1. A `ScoopOfBatter` enters `fryingPan1`
 2. `fryingPan1` emits a HalfCookedPancake once `fryingPan2` becomes available
 3. `fryingPan2` takes the HalfCookedPancake
@@ -87,7 +86,6 @@ in sequence.
 It is also possible to organize parallelized stages into pipelines. This would mean employing four chefs:
->
 * the first two chefs prepare half-cooked pancakes from batter, in parallel, then putting those on a large enough
 flat surface.
 * the second two chefs take these and fry their other side in their own pans, then they put the pancakes on a shared


@@ -8,7 +8,6 @@ perform tests.
 Akka comes with a dedicated module `akka-testkit` for supporting tests at
 different levels, which fall into two clearly distinct categories:
->
 * Testing isolated pieces of code without involving the actor model, meaning
 without multiple threads; this implies completely deterministic behavior
 concerning the ordering of events and no concurrency concerns and will be
@@ -130,7 +129,6 @@ underlying actor:
 You may of course mix and match both modi operandi of `TestActorRef` as
 suits your test needs:
->
 * one common use case is setting up the actor into a specific internal state
 before sending the test message
 * another is to verify correct internal state transitions after having sent
@@ -190,7 +188,6 @@ out, in which case they use the default value from configuration item
 `akka.test.single-expect-default` which itself defaults to 3 seconds (or they
 obey the innermost enclosing `Within` as detailed [below](#testkit-within)). The full signatures are:
->
 *
 `public <T> T expectMsgEquals(FiniteDuration max, T msg)`
 The given message object must be received within the specified time; the
@@ -632,7 +629,6 @@ send returns and no `InterruptedException` will be thrown.
 To summarize, these are the features with the `CallingThreadDispatcher`
 has to offer:
->
 * Deterministic execution of single-threaded tests while retaining nearly full
 actor semantics
 * Full message processing history leading up to the point of failure in


@@ -89,7 +89,6 @@ and we know how to create a Typed Actor from that, so let's look at calling thes
 Methods returning:
->
 * `void` will be dispatched with `fire-and-forget` semantics, exactly like `ActorRef.tell`
 * `scala.concurrent.Future<?>` will use `send-request-reply` semantics, exactly like `ActorRef.ask`
 * `akka.japi.Option<?>` will use `send-request-reply` semantics, but *will* block to wait for an answer,
@@ -175,7 +174,6 @@ e.g. when interfacing with untyped actors.
 By having your Typed Actor implementation class implement any and all of the following:
->
 * `TypedActor.PreStart`
 * `TypedActor.PostStop`
 * `TypedActor.PreRestart`


@@ -380,7 +380,6 @@ akka-camel may make some further modifications to it.
 The sample named @extref[Akka Camel Samples with Scala](ecs:akka-samples-camel-scala) (@extref[source code](samples:akka-sample-camel-scala))
 contains 3 samples:
->
 * Asynchronous routing and transformation - This example demonstrates how to implement consumer and
 producer actors that support [Asynchronous routing](#camel-asynchronous-routing) with their Camel endpoints.
 * Custom Camel route - Demonstrates the combined usage of a `Producer` and a


@@ -42,7 +42,6 @@ OK: 3.1.n --> 3.2.0 ...
 Some modules are excluded from the binary compatibility guarantees, such as:
->
 * `*-testkit` modules - since these are to be used only in tests, which usually are re-compiled and run on demand
 * `*-tck` modules - since they may want to add new tests (or force configuring something), in order to discover possible failures in an existing implementation that the TCK is supposed to be testing. Compatibility here is not *guaranteed*, however it is attempted to make the upgrade prosess as smooth as possible.
 * all @ref:[may change](may-change.md) modules - which by definition are subject to rapid iteration and change. Read more about that in @ref:[Modules marked "May Change"](may-change.md)


@@ -46,7 +46,6 @@ Now, the difficulty in designing such a system is how to decide who should
 supervise what. There is of course no single best solution, but there are a few
 guidelines which might be helpful:
->
 * If one actor manages the work another actor is doing, e.g. by passing on
 sub-tasks, then the manager should supervise the child. The reason is that
 the manager knows which kind of failures are expected and how to handle
@@ -117,7 +116,6 @@ under increased load.
 The non-exhaustive list of adequate solutions to the “blocking problem”
 includes the following suggestions:
->
 * Do the blocking call within an actor (or a set of actors managed by a router
 @ref:[router](../routing.md), making sure to
 configure a thread pool which is either dedicated for this purpose or


@@ -357,7 +357,6 @@ that might look like:
 When working with `Config` objects, keep in mind that there are
 three "layers" in the cake:
->
 * `ConfigFactory.defaultOverrides()` (system properties)
 * the app's settings
 * `ConfigFactory.defaultReference()` (reference.conf)
@@ -365,7 +364,6 @@ three "layers" in the cake:
 The normal goal is to customize the middle layer while leaving the
 other two alone.
->
 * `ConfigFactory.load()` loads the whole stack
 * the overloads of `ConfigFactory.load()` let you specify a
 different middle layer
@@ -402,7 +400,6 @@ You can use asterisks as wildcard matches for the actor path sections, so you co
 `/*/sampleActor` and that would match all `sampleActor` on that level in the hierarchy.
 In addition, please note:
->
 * you can also use wildcards in the last position to match all actors at a certain level: `/someParent/*`
 * you can use double-wildcards in the last position to match all child actors and their children
 recursively: `/someParent/**`


@@ -113,13 +113,12 @@ other message dissemination features (unless stated otherwise).
 The guarantee is illustrated in the following:
->
-Actor `A1` sends messages `M1`, `M2`, `M3` to `A2`
->
-Actor `A3` sends messages `M4`, `M5`, `M6` to `A2`
->
+> Actor `A1` sends messages `M1`, `M2`, `M3` to `A2`
+>
+> Actor `A3` sends messages `M4`, `M5`, `M6` to `A2`
 This means that:
-:
 1. If `M1` is delivered it must be delivered before `M2` and `M3`
 2. If `M2` is delivered it must be delivered before `M3`
 3. If `M4` is delivered it must be delivered before `M5` and `M6`
@@ -140,14 +139,13 @@ order.
 Please note that this rule is **not transitive**:
->
-Actor `A` sends message `M1` to actor `C`
->
-Actor `A` then sends message `M2` to actor `B`
->
-Actor `B` forwards message `M2` to actor `C`
->
-Actor `C` may receive `M1` and `M2` in any order
+> Actor `A` sends message `M1` to actor `C`
+>
+> Actor `A` then sends message `M2` to actor `B`
+>
+> Actor `B` forwards message `M2` to actor `C`
+>
+> Actor `C` may receive `M1` and `M2` in any order
 Causal transitive ordering would imply that `M2` is never received before
 `M1` at actor `C` (though any of them might be lost). This ordering can be
@@ -173,12 +171,11 @@ Please note, that the ordering guarantees discussed above only hold for user mes
 of an actor is communicated by special system messages that are not ordered relative to ordinary user messages. In
 particular:
->
-Child actor `C` sends message `M` to its parent `P`
->
-Child actor fails with failure `F`
->
-Parent actor `P` might receive the two events either in order `M`, `F` or `F`, `M`
+> Child actor `C` sends message `M` to its parent `P`
+>
+> Child actor fails with failure `F`
+>
+> Parent actor `P` might receive the two events either in order `M`, `F` or `F`, `M`
 The reason for this is that internal system messages has their own mailboxes therefore the ordering of enqueue calls of
 a user and system message cannot guarantee the ordering of their dequeue times.
@@ -251,14 +248,13 @@ As explained in the previous section local message sends obey transitive causal
 ordering under certain conditions. This ordering can be violated due to different
 message delivery latencies. For example:
->
-Actor `A` on node-1 sends message `M1` to actor `C` on node-3
->
-Actor `A` on node-1 then sends message `M2` to actor `B` on node-2
->
-Actor `B` on node-2 forwards message `M2` to actor `C` on node-3
->
-Actor `C` may receive `M1` and `M2` in any order
+> Actor `A` on node-1 sends message `M1` to actor `C` on node-3
+>
+> Actor `A` on node-1 then sends message `M2` to actor `B` on node-2
+>
+> Actor `B` on node-2 forwards message `M2` to actor `C` on node-3
+>
+> Actor `C` may receive `M1` and `M2` in any order
 It might take longer time for `M1` to "travel" to node-3 than it takes
 for `M2` to "travel" to node-3 via node-2.


@@ -14,7 +14,6 @@ Akka is built upon a conscious decision to offer APIs that are minimal and consi
 From this follows that the principles implemented by Akka Streams are:
->
 * all features are explicit in the API, no magic
 * supreme compositionality: combined pieces retain the function of each part
 * exhaustive model of the domain of distributed bounded stream processing
@@ -25,7 +24,6 @@ This means that we provide all the tools necessary to express any stream process
 One important consequence of offering only features that can be relied upon is the restriction that Akka Streams cannot ensure that all objects sent through a processing topology will be processed. Elements can be dropped for a number of reasons:
->
 * plain user code can consume one element in a *map(...)* stage and produce an entirely different one as its result
 * common stream operators drop elements intentionally, e.g. take/drop/filter/conflate/buffer/…
 * stream failure will tear down the stream without waiting for processing to finish, all elements that are in flight will be discarded
@@ -51,7 +49,6 @@ This means that `Sink.asPublisher(true)` (for enabling fan-out support) must be
 We expect libraries to be built on top of Akka Streams, in fact Akka HTTP is one such example that lives within the Akka project itself. In order to allow users to profit from the principles that are described for Akka Streams above, the following rules are established:
->
 * libraries shall provide their users with reusable pieces, i.e. expose factories that return graphs, allowing full compositionality
 * libraries may optionally and additionally provide facilities that consume and materialize graphs
@@ -71,7 +68,6 @@ Exceptions from this need to be well-justified and carefully documented.
 Akka Streams must enable a library to express any stream processing utility in terms of immutable blueprints. The most common building blocks are
->
 * Source: something with exactly one output stream
 * Sink: something with exactly one input stream
 * Flow: something with exactly one input and one output stream


@@ -106,6 +106,5 @@ is the only one trying, the operation will succeed.
 ## Recommended literature
->
 * The Art of Multiprocessor Programming, M. Herlihy and N Shavit, 2008. ISBN 978-0123705914
 * Java Concurrency in Practice, B. Goetz, T. Peierls, J. Bloch, J. Bowbeer, D. Holmes and D. Lea, 2006. ISBN 978-0321349606


@@ -25,7 +25,6 @@ class MyActor extends Actor with akka.actor.ActorLogging {
 The second parameter to the `Logging` is the source of this logging channel.
 The source object is translated to a String according to the following rules:
->
 * if it is an Actor or ActorRef, its path is used
 * in case of a String it is used as is
 * in case of a class an approximation of its simpleName


@@ -188,7 +188,6 @@ together with the tutorial. The source code of this sample can be found in the @
 There are a couple of things to keep in mind when writing multi node tests or else your tests might behave in
 surprising ways.
->
 * Don't issue a shutdown of the first node. The first node is the controller and if it shuts down your test will break.
 * To be able to use `blackhole`, `passThrough`, and `throttle` you must activate the failure injector and
 throttler transport adapters by specifying `testTransport(on = true)` in your MultiNodeConfig.


@@ -319,7 +319,6 @@ There is no Group variant of the BalancingPool.
 A Router that tries to send to the non-suspended child routee with fewest messages in mailbox.
 The selection is done in this order:
->
 * pick any idle routee (not processing message) with empty mailbox
 * pick any routee with empty mailbox
 * pick routee with fewest pending messages in mailbox


@@ -191,7 +191,6 @@ encoded in the provided `RunnableGraph`. To be able to interact with the running
 needs to return a different object that provides the necessary interaction capabilities. In other words, the
 `RunnableGraph` can be seen as a factory, which creates:
->
 * a network of running processing entities, inaccessible from the outside
 * a materialized value, optionally providing a controlled interaction capability with the network


@@ -317,7 +317,6 @@ by calling `getStageActorRef(receive)` passing in a function that takes a `Pair`
 or `unwatch(ref)` methods. The reference can be also watched by external actors. The current limitations of this
 `ActorRef` are:
->
 * they are not location transparent, they cannot be accessed via remoting.
 * they cannot be returned as materialized values.
 * they cannot be accessed from the constructor of the `GraphStageLogic`, but they can be accessed from the


@@ -232,7 +232,6 @@ yet will materialize that stage multiple times.
 By default Akka Streams will fuse the stream operators. This means that the processing steps of a flow or
 stream graph can be executed within the same Actor and has two consequences:
->
 * passing elements from one processing stage to the next is a lot faster between fused
 stages due to avoiding the asynchronous messaging overhead
 * fused stream processing stages does not run in parallel to each other, meaning that


@ -22,7 +22,6 @@ Akka Streams currently provide these junctions (for a detailed list see @ref:[st
* **Fan-out** * **Fan-out**
>
* `Broadcast[T]` *(1 input, N outputs)* given an input element emits to each output * `Broadcast[T]` *(1 input, N outputs)* given an input element emits to each output
* `Balance[T]` *(1 input, N outputs)* given an input element emits to one of its output ports * `Balance[T]` *(1 input, N outputs)* given an input element emits to one of its output ports
* `UnzipWith[In,A,B,...]` *(1 input, N outputs)* takes a function of 1 input that given a value for each input emits N output elements (where N <= 20) * `UnzipWith[In,A,B,...]` *(1 input, N outputs)* takes a function of 1 input that given a value for each input emits N output elements (where N <= 20)
@ -30,7 +29,6 @@ Akka Streams currently provide these junctions (for a detailed list see @ref:[st
* **Fan-in** * **Fan-in**
>
* `Merge[In]` *(N inputs, 1 output)* picks randomly from inputs pushing them one by one to its output
* `MergePreferred[In]` like `Merge` but if elements are available on `preferred` port, it picks from it, otherwise randomly from `others`
* `ZipWith[A,B,...,Out]` *(N inputs, 1 output)* which takes a function of N inputs that given a value for each input emits 1 output element
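A minimal sketch of combining one fan-out and one fan-in junction, assuming the Akka Streams 2.4/2.5 `GraphDSL` API (the object name `JunctionSketch` and the multiplier flows are made up for illustration):

```scala
import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, ClosedShape}
import akka.stream.scaladsl.{Broadcast, Flow, GraphDSL, Merge, RunnableGraph, Sink, Source}
import scala.concurrent.Await
import scala.concurrent.duration._

object JunctionSketch {
  implicit val system = ActorSystem("junctions")
  implicit val materializer = ActorMaterializer()

  // Broadcast each element to two branches, transform each branch
  // differently, then merge both branches back into a single stream.
  val graph = RunnableGraph.fromGraph(GraphDSL.create(Sink.seq[Int]) { implicit b => sink =>
    import GraphDSL.Implicits._
    val bcast = b.add(Broadcast[Int](2))
    val merge = b.add(Merge[Int](2))
    Source(1 to 3) ~> bcast
    bcast ~> Flow[Int].map(_ * 10)  ~> merge
    bcast ~> Flow[Int].map(_ * 100) ~> merge
    merge ~> sink
    ClosedShape
  })

  def run(): Seq[Int] = {
    val result = Await.result(graph.run(), 3.seconds)
    system.terminate()
    result.sorted  // Merge ordering is nondeterministic, so sort for inspection
  }
}
```

Since `Merge` picks from its inputs in a nondeterministic order, the result is sorted before inspection.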
@ -182,7 +180,6 @@ In general a custom `Shape` needs to be able to provide all its input and output
able to create a new instance from given ports. There are some predefined shapes provided to avoid unnecessary
boilerplate:
>
* `SourceShape`, `SinkShape`, `FlowShape` for simpler shapes,
* `UniformFanInShape` and `UniformFanOutShape` for junctions with multiple input (or output) ports
of the same type,
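As a hedged sketch of how a predefined `FlowShape` is typically used (assuming the 2.4/2.5 `GraphDSL` API; `ShapeSketch` and the branch logic are illustrative only), a partial graph with one open inlet and one open outlet can be packaged as a reusable `Flow`:

```scala
import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, FlowShape}
import akka.stream.scaladsl.{Broadcast, Flow, GraphDSL, Merge, Sink, Source}
import scala.concurrent.Await
import scala.concurrent.duration._

object ShapeSketch {
  implicit val system = ActorSystem("shapes")
  implicit val materializer = ActorMaterializer()

  // A partial graph wrapped as a Flow via FlowShape: each element is
  // emitted twice, once unchanged and once incremented.
  val duplicateAndIncrement: Flow[Int, Int, akka.NotUsed] =
    Flow.fromGraph(GraphDSL.create() { implicit b =>
      import GraphDSL.Implicits._
      val bcast = b.add(Broadcast[Int](2))
      val merge = b.add(Merge[Int](2))
      bcast ~> merge
      bcast ~> Flow[Int].map(_ + 1) ~> merge
      FlowShape(bcast.in, merge.out)  // the open ports define the shape
    })

  def run(): Seq[Int] = {
    val result = Await.result(
      Source(List(1, 2)).via(duplicateAndIncrement).runWith(Sink.seq), 3.seconds)
    system.terminate()
    result.sorted
  }
}
```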

View file

@ -28,7 +28,6 @@ This is how this setup would look like implemented as a stream:
The two `map` stages in sequence (encapsulated in the "frying pan" flows) will be executed in a pipelined way,
basically doing the same as Roland with his frying pans:
>
1. A `ScoopOfBatter` enters `fryingPan1`
2. `fryingPan1` emits a HalfCookedPancake once `fryingPan2` becomes available
3. `fryingPan2` takes the HalfCookedPancake
@ -87,7 +86,6 @@ in sequence.
It is also possible to organize parallelized stages into pipelines. This would mean employing four chefs:
>
* the first two chefs prepare half-cooked pancakes from batter, in parallel, then putting those on a large enough
flat surface.
* the second two chefs take these and fry their other side in their own pans, then they put the pancakes on a shared
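The four-chef setup above can be sketched roughly as follows, assuming the 2.4/2.5 `GraphDSL` API (the domain types and the `PancakeSketch` object are hypothetical stand-ins for the cookbook's example, not its exact code):

```scala
import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, FlowShape}
import akka.stream.scaladsl.{Balance, Flow, GraphDSL, Merge, Sink, Source}
import scala.concurrent.Await
import scala.concurrent.duration._

object PancakeSketch {
  // Hypothetical domain types for the pancake analogy
  case class ScoopOfBatter()
  case class HalfCookedPancake()
  case class Pancake()

  implicit val system = ActorSystem("pancakes")
  implicit val materializer = ActorMaterializer()

  val fryingPan1: Flow[ScoopOfBatter, HalfCookedPancake, NotUsed] =
    Flow[ScoopOfBatter].map(_ => HalfCookedPancake())
  val fryingPan2: Flow[HalfCookedPancake, Pancake, NotUsed] =
    Flow[HalfCookedPancake].map(_ => Pancake())

  // Two parallel two-stage pipelines: Balance dispatches scoops to either
  // pipeline, Merge collects the finished pancakes. `.async` marks the
  // stages to run concurrently instead of being fused together.
  val kitchen: Flow[ScoopOfBatter, Pancake, NotUsed] =
    Flow.fromGraph(GraphDSL.create() { implicit b =>
      import GraphDSL.Implicits._
      val dispatch = b.add(Balance[ScoopOfBatter](2))
      val merge = b.add(Merge[Pancake](2))
      dispatch ~> fryingPan1.async ~> fryingPan2.async ~> merge
      dispatch ~> fryingPan1.async ~> fryingPan2.async ~> merge
      FlowShape(dispatch.in, merge.out)
    })

  def run(): Int = {
    val pancakes = Await.result(
      Source(List.fill(4)(ScoopOfBatter())).via(kitchen).runWith(Sink.seq), 3.seconds)
    system.terminate()
    pancakes.size
  }
}
```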

View file

@ -16,7 +16,6 @@ perform tests.
Akka comes with a dedicated module `akka-testkit` for supporting tests at
different levels, which fall into two clearly distinct categories:
>
* Testing isolated pieces of code without involving the actor model, meaning
without multiple threads; this implies completely deterministic behavior
concerning the ordering of events and no concurrency concerns and will be
@ -156,7 +155,6 @@ underlying actor:
You may of course mix and match both modi operandi of `TestActorRef` as
suits your test needs:
>
* one common use case is setting up the actor into a specific internal state
before sending the test message
* another is to verify correct internal state transitions after having sent
@ -207,7 +205,6 @@ actor—are stopped.
The above mentioned `expectMsg` is not the only method for formulating
assertions concerning received messages. Here is the full list:
>
*
`expectMsg[T](d: Duration, msg: T): T`
The given message object must be received within the specified time; the
@ -280,7 +277,6 @@ provided hint for easier debugging.
In addition to message reception assertions there are also methods which help
with message flows:
>
*
`receiveOne(d: Duration): AnyRef`
Tries to receive one message for at most the given time interval and
@ -703,7 +699,6 @@ send returns and no `InterruptedException` will be thrown.
To summarize, these are the features the `CallingThreadDispatcher`
has to offer:
>
* Deterministic execution of single-threaded tests while retaining nearly full
actor semantics
* Full message processing history leading up to the point of failure in
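A small sketch of the deterministic behavior this enables, assuming the `akka-testkit` API of that era (the `Echo` actor and `CallingThreadSketch` object are made up for illustration): a `TestActorRef` runs its actor on the `CallingThreadDispatcher`, so an `ask` is processed synchronously and its `Future` is already completed on return.

```scala
import akka.actor.{Actor, ActorSystem, Props}
import akka.pattern.ask
import akka.testkit.TestActorRef
import akka.util.Timeout
import scala.concurrent.Await
import scala.concurrent.duration._

// A trivial actor that echoes every message back to the sender
class Echo extends Actor {
  def receive = { case msg => sender() ! msg }
}

object CallingThreadSketch {
  implicit val system = ActorSystem("testkit")
  implicit val timeout = Timeout(1.second)

  def run(): String = {
    // TestActorRef dispatches on the CallingThreadDispatcher: the message
    // is processed on the calling thread, single-threaded and deterministic.
    val ref = TestActorRef(Props[Echo])
    val future = ref ? "hello"
    assert(future.isCompleted)  // no waiting was necessary
    val reply = Await.result(future, 1.second).asInstanceOf[String]
    system.terminate()
    reply
  }
}
```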

View file

@ -99,7 +99,6 @@ and we know how to create a Typed Actor from that, so let's look at calling thes
Methods returning:
>
* `Unit` will be dispatched with `fire-and-forget` semantics, exactly like `ActorRef.tell`
* `scala.concurrent.Future[_]` will use `send-request-reply` semantics, exactly like `ActorRef.ask`
* `scala.Option[_]` will use `send-request-reply` semantics, but *will* block to wait for an answer,
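These return-type semantics can be sketched with a small interface, loosely following the `Squarer` example from the Typed Actor docs (the implementation details here are illustrative, not the docs' exact code):

```scala
import akka.actor.{ActorSystem, TypedActor, TypedProps}
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

trait Squarer {
  def squareDontCare(i: Int): Unit          // fire-and-forget, like ActorRef.tell
  def square(i: Int): Future[Int]           // non-blocking send-request-reply, like ask
  def squareNowPlease(i: Int): Option[Int]  // send-request-reply that blocks for the answer
}

class SquarerImpl extends Squarer {
  def squareDontCare(i: Int): Unit = ()     // nothing is sent back to the caller
  def square(i: Int): Future[Int] = Future.successful(i * i)
  def squareNowPlease(i: Int): Option[Int] = Some(i * i)
}

object TypedActorSketch {
  val system = ActorSystem("typed")

  // The proxy implements Squarer; each method call becomes a message send
  // whose delivery semantics are chosen by the method's return type.
  val squarer: Squarer = TypedActor(system).typedActorOf(TypedProps[SquarerImpl]())

  def run(): Int = {
    val n = Await.result(squarer.square(3), 1.second)
    system.terminate()
    n
  }
}
```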
@ -170,13 +169,11 @@ you can define the strategy to use for supervising child actors, as described in
By having your Typed Actor implementation class implement any and all of the following:
>
>
* `TypedActor.PreStart`
* `TypedActor.PostStop`
* `TypedActor.PreRestart`
* `TypedActor.PostRestart`
>
You can hook into the lifecycle of your Typed Actor.
## Receive arbitrary messages