diff --git a/akka-docs/src/main/paradox/java/camel.md b/akka-docs/src/main/paradox/java/camel.md index 78ac42b9b9..7221901871 100644 --- a/akka-docs/src/main/paradox/java/camel.md +++ b/akka-docs/src/main/paradox/java/camel.md @@ -390,7 +390,6 @@ akka-camel may make some further modifications to it. The sample named @extref[Akka Camel Samples with Java](ecs:akka-samples-camel-java) (@extref[source code](samples:akka-sample-camel-java)) contains 3 samples: -> * Asynchronous routing and transformation - This example demonstrates how to implement consumer and producer actors that support [Asynchronous routing](#camel-asynchronous-routing) with their Camel endpoints. * Custom Camel route - Demonstrates the combined usage of a `Producer` and a @@ -413,4 +412,4 @@ For an introduction to akka-camel 1, see also the [Appendix E - Akka and Camel]( Other, more advanced external articles (for version 1) are: * [Akka Consumer Actors: New Features and Best Practices](http://krasserm.blogspot.com/2011/02/akka-consumer-actors-new-features-and.html) - * [Akka Producer Actors: New Features and Best Practices](http://krasserm.blogspot.com/2011/02/akka-producer-actor-new-features-and.html) \ No newline at end of file + * [Akka Producer Actors: New Features and Best Practices](http://krasserm.blogspot.com/2011/02/akka-producer-actor-new-features-and.html) diff --git a/akka-docs/src/main/paradox/java/logging.md b/akka-docs/src/main/paradox/java/logging.md index 0272dfe71d..771ca6cef6 100644 --- a/akka-docs/src/main/paradox/java/logging.md +++ b/akka-docs/src/main/paradox/java/logging.md @@ -24,7 +24,6 @@ while in the second case this is not automatically done. The second parameter to `Logging.getLogger` is the source of this logging channel. 
The source object is translated to a String according to the following rules: -> * if it is an Actor or ActorRef, its path is used * in case of a String it is used as is * in case of a class an approximation of its simpleName diff --git a/akka-docs/src/main/paradox/java/routing.md b/akka-docs/src/main/paradox/java/routing.md index dc1e009549..0495c950d0 100644 --- a/akka-docs/src/main/paradox/java/routing.md +++ b/akka-docs/src/main/paradox/java/routing.md @@ -319,7 +319,6 @@ There is no Group variant of the BalancingPool. A Router that tries to send to the non-suspended child routee with fewest messages in mailbox. The selection is done in this order: -> * pick any idle routee (not processing message) with empty mailbox * pick any routee with empty mailbox * pick routee with fewest pending messages in mailbox @@ -784,4 +783,4 @@ It is not allowed to configure the `routerDispatcher` to be a `akka.dispatch.BalancingDispatcherConfigurator` since the messages meant for the special router actor cannot be processed by any other actor. -@@@ \ No newline at end of file +@@@ diff --git a/akka-docs/src/main/paradox/java/stream/stream-composition.md b/akka-docs/src/main/paradox/java/stream/stream-composition.md index 275df25a50..295cb0964b 100644 --- a/akka-docs/src/main/paradox/java/stream/stream-composition.md +++ b/akka-docs/src/main/paradox/java/stream/stream-composition.md @@ -190,7 +190,6 @@ encoded in the provided `RunnableGraph`. To be able to interact with the running needs to return a different object that provides the necessary interaction capabilities. 
In other words, the `RunnableGraph` can be seen as a factory, which creates: -> * a network of running processing entities, inaccessible from the outside * a materialized value, optionally providing a controlled interaction capability with the network diff --git a/akka-docs/src/main/paradox/java/stream/stream-customize.md b/akka-docs/src/main/paradox/java/stream/stream-customize.md index 78df1752b1..83048f9120 100644 --- a/akka-docs/src/main/paradox/java/stream/stream-customize.md +++ b/akka-docs/src/main/paradox/java/stream/stream-customize.md @@ -314,7 +314,6 @@ by calling `getStageActorRef(receive)` passing in a function that takes a `Pair` or `unwatch(ref)` methods. The reference can be also watched by external actors. The current limitations of this `ActorRef` are: -> * they are not location transparent, they cannot be accessed via remoting. * they cannot be returned as materialized values. * they cannot be accessed from the constructor of the `GraphStageLogic`, but they can be accessed from the diff --git a/akka-docs/src/main/paradox/java/stream/stream-flows-and-basics.md b/akka-docs/src/main/paradox/java/stream/stream-flows-and-basics.md index 658e94d395..4e29af8c2e 100644 --- a/akka-docs/src/main/paradox/java/stream/stream-flows-and-basics.md +++ b/akka-docs/src/main/paradox/java/stream/stream-flows-and-basics.md @@ -228,7 +228,6 @@ yet will materialize that stage multiple times. By default Akka Streams will fuse the stream operators. 
This means that the processing steps of a flow or stream graph can be executed within the same Actor and has two consequences: -> * passing elements from one processing stage to the next is a lot faster between fused stages due to avoiding the asynchronous messaging overhead * fused stream processing stages does not run in parallel to each other, meaning that diff --git a/akka-docs/src/main/paradox/java/stream/stream-graphs.md b/akka-docs/src/main/paradox/java/stream/stream-graphs.md index dede47722f..61e3211797 100644 --- a/akka-docs/src/main/paradox/java/stream/stream-graphs.md +++ b/akka-docs/src/main/paradox/java/stream/stream-graphs.md @@ -22,20 +22,18 @@ Akka Streams currently provide these junctions (for a detailed list see @ref:[st * **Fan-out** -> - * `Broadcast` – *(1 input, N outputs)* given an input element emits to each output - * `Balance` – *(1 input, N outputs)* given an input element emits to one of its output ports - * `UnzipWith` – *(1 input, N outputs)* takes a function of 1 input that given a value for each input emits N output elements (where N <= 20) - * `UnZip` – *(1 input, 2 outputs)* splits a stream of `Pair` tuples into two streams, one of type `A` and one of type `B` + * `Broadcast` – *(1 input, N outputs)* given an input element emits to each output + * `Balance` – *(1 input, N outputs)* given an input element emits to one of its output ports + * `UnzipWith` – *(1 input, N outputs)* takes a function of 1 input that given a value for each input emits N output elements (where N <= 20) + * `UnZip` – *(1 input, 2 outputs)* splits a stream of `Pair` tuples into two streams, one of type `A` and one of type `B` * **Fan-in** -> - * `Merge` – *(N inputs , 1 output)* picks randomly from inputs pushing them one by one to its output - * `MergePreferred` – like `Merge` but if elements are available on `preferred` port, it picks from it, otherwise randomly from `others` - * `ZipWith` – *(N inputs, 1 output)* which takes a function of N inputs that 
given a value for each input emits 1 output element - * `Zip` – *(2 inputs, 1 output)* is a `ZipWith` specialised to zipping input streams of `A` and `B` into a `Pair(A,B)` tuple stream - * `Concat` – *(2 inputs, 1 output)* concatenates two streams (first consume one, then the second one) + * `Merge` – *(N inputs , 1 output)* picks randomly from inputs pushing them one by one to its output + * `MergePreferred` – like `Merge` but if elements are available on `preferred` port, it picks from it, otherwise randomly from `others` + * `ZipWith` – *(N inputs, 1 output)* which takes a function of N inputs that given a value for each input emits 1 output element + * `Zip` – *(2 inputs, 1 output)* is a `ZipWith` specialised to zipping input streams of `A` and `B` into a `Pair(A,B)` tuple stream + * `Concat` – *(2 inputs, 1 output)* concatenates two streams (first consume one, then the second one) One of the goals of the GraphDSL DSL is to look similar to how one would draw a graph on a whiteboard, so that it is simple to translate a design from whiteboard to code and be able to relate those two. Let's illustrate this by translating @@ -300,4 +298,4 @@ arc that injects a single element using `Source.single`. @@snip [GraphCyclesDocTest.java]($code$/java/jdocs/stream/GraphCyclesDocTest.java) { #zipping-live } When we run the above example we see that processing starts and never stops. The important takeaway from this example -is that balanced cycles often need an initial "kick-off" element to be injected into the cycle. \ No newline at end of file +is that balanced cycles often need an initial "kick-off" element to be injected into the cycle. 
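Reviewer note: the `Zip` junction described in the stream-graphs hunks above pairs one element from each input into a tuple. A minimal plain-Java sketch of that pairing semantics (not the Akka API; the class and method names here are hypothetical) — the zipped output completes when the shorter input is exhausted:

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for the Zip junction's semantics: pair up elements
// from two inputs, completing when the shorter input runs out of elements.
public class ZipSketch {
    static <A, B> List<Map.Entry<A, B>> zip(List<A> as, List<B> bs) {
        List<Map.Entry<A, B>> out = new ArrayList<>();
        int n = Math.min(as.size(), bs.size());
        for (int i = 0; i < n; i++) {
            out.add(new SimpleEntry<>(as.get(i), bs.get(i)));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(zip(List.of(1, 2, 3), List.of("a", "b")));
    }
}
```

In the real `GraphDSL` the same shape is a `Zip[A,B]` (Scala) or `Zip` of `Pair` (Java) junction with two input ports and one output port; this sketch only illustrates the element-pairing contract.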
diff --git a/akka-docs/src/main/paradox/java/stream/stream-parallelism.md b/akka-docs/src/main/paradox/java/stream/stream-parallelism.md index c3c058754b..7b7b51be94 100644 --- a/akka-docs/src/main/paradox/java/stream/stream-parallelism.md +++ b/akka-docs/src/main/paradox/java/stream/stream-parallelism.md @@ -28,7 +28,6 @@ This is how this setup would look like implemented as a stream: The two `map` stages in sequence (encapsulated in the "frying pan" flows) will be executed in a pipelined way, basically doing the same as Roland with his frying pans: -> 1. A `ScoopOfBatter` enters `fryingPan1` 2. `fryingPan1` emits a HalfCookedPancake once `fryingPan2` becomes available 3. `fryingPan2` takes the HalfCookedPancake @@ -87,7 +86,6 @@ in sequence. It is also possible to organize parallelized stages into pipelines. This would mean employing four chefs: -> * the first two chefs prepare half-cooked pancakes from batter, in parallel, then putting those on a large enough flat surface. * the second two chefs take these and fry their other side in their own pans, then they put the pancakes on a shared diff --git a/akka-docs/src/main/paradox/java/testing.md b/akka-docs/src/main/paradox/java/testing.md index 20bc4a1383..4c8406ee37 100644 --- a/akka-docs/src/main/paradox/java/testing.md +++ b/akka-docs/src/main/paradox/java/testing.md @@ -8,7 +8,6 @@ perform tests. 
Akka comes with a dedicated module `akka-testkit` for supporting tests at different levels, which fall into two clearly distinct categories: -> * Testing isolated pieces of code without involving the actor model, meaning without multiple threads; this implies completely deterministic behavior concerning the ordering of events and no concurrency concerns and will be @@ -130,7 +129,6 @@ underlying actor: You may of course mix and match both modi operandi of `TestActorRef` as suits your test needs: -> * one common use case is setting up the actor into a specific internal state before sending the test message * another is to verify correct internal state transitions after having sent @@ -190,7 +188,6 @@ out, in which case they use the default value from configuration item `akka.test.single-expect-default` which itself defaults to 3 seconds (or they obey the innermost enclosing `Within` as detailed [below](#testkit-within)). The full signatures are: -> * `public  T expectMsgEquals(FiniteDuration max, T msg)` The given message object must be received within the specified time; the @@ -632,7 +629,6 @@ send returns and no `InterruptedException` will be thrown. To summarize, these are the features with the `CallingThreadDispatcher` has to offer: -> * Deterministic execution of single-threaded tests while retaining nearly full actor semantics * Full message processing history leading up to the point of failure in @@ -682,4 +678,4 @@ akka { ## Configuration There are several configuration properties for the TestKit module, please refer -to the @ref:[reference configuration](general/configuration.md#config-akka-testkit). \ No newline at end of file +to the @ref:[reference configuration](general/configuration.md#config-akka-testkit). 
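Reviewer note: the testing.md hunk above documents the `expectMsgEquals(FiniteDuration max, T msg)` contract — the expected message must arrive within the deadline. A minimal sketch of that receive-within-deadline behavior using only `java.util.concurrent` (this is a hypothetical model, not the real `akka-testkit` implementation):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical model of TestKit's expectMsgEquals contract: the expected
// message must arrive on the probe's queue within the deadline, else fail.
public class ExpectMsgSketch {
    final BlockingQueue<Object> inbox = new LinkedBlockingQueue<>();

    <T> T expectMsgEquals(long maxMillis, T msg) throws InterruptedException {
        Object received = inbox.poll(maxMillis, TimeUnit.MILLISECONDS);
        if (received == null) {
            throw new AssertionError("timeout (" + maxMillis + " ms) waiting for " + msg);
        }
        if (!received.equals(msg)) {
            throw new AssertionError("expected " + msg + " but got " + received);
        }
        @SuppressWarnings("unchecked")
        T typed = (T) received;
        return typed;
    }

    public static void main(String[] args) throws InterruptedException {
        ExpectMsgSketch probe = new ExpectMsgSketch();
        probe.inbox.put("pong"); // stand-in for the actor under test replying
        System.out.println(probe.expectMsgEquals(100, "pong"));
    }
}
```

The real TestKit additionally obeys the innermost enclosing `Within` block and the `akka.test.single-expect-default` fallback; this sketch only captures the core timeout-plus-equality check.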
diff --git a/akka-docs/src/main/paradox/java/typed-actors.md b/akka-docs/src/main/paradox/java/typed-actors.md index 8d3f760895..aa92beff82 100644 --- a/akka-docs/src/main/paradox/java/typed-actors.md +++ b/akka-docs/src/main/paradox/java/typed-actors.md @@ -89,7 +89,6 @@ and we know how to create a Typed Actor from that, so let's look at calling thes Methods returning: -> * `void` will be dispatched with `fire-and-forget` semantics, exactly like `ActorRef.tell` * `scala.concurrent.Future` will use `send-request-reply` semantics, exactly like `ActorRef.ask` * `akka.japi.Option` will use `send-request-reply` semantics, but *will* block to wait for an answer, @@ -175,7 +174,6 @@ e.g. when interfacing with untyped actors. By having your Typed Actor implementation class implement any and all of the following: -> * `TypedActor.PreStart` * `TypedActor.PostStop` * `TypedActor.PreRestart` @@ -208,4 +206,4 @@ In order to round robin among a few instances of such actors, you can simply cre and then facade it with a `TypedActor` like shown in the example below. This works because typed actors of course communicate using the same mechanisms as normal actors, and methods calls on them get transformed into message sends of `MethodCall` messages. -@@snip [TypedActorDocTest.java]($code$/java/jdocs/actor/TypedActorDocTest.java) { #typed-router } \ No newline at end of file +@@snip [TypedActorDocTest.java]($code$/java/jdocs/actor/TypedActorDocTest.java) { #typed-router } diff --git a/akka-docs/src/main/paradox/scala/camel.md b/akka-docs/src/main/paradox/scala/camel.md index 69e68dc24f..446aed45a6 100644 --- a/akka-docs/src/main/paradox/scala/camel.md +++ b/akka-docs/src/main/paradox/scala/camel.md @@ -380,7 +380,6 @@ akka-camel may make some further modifications to it. 
The sample named @extref[Akka Camel Samples with Scala](ecs:akka-samples-camel-scala) (@extref[source code](samples:akka-sample-camel-scala)) contains 3 samples: -> * Asynchronous routing and transformation - This example demonstrates how to implement consumer and producer actors that support [Asynchronous routing](#camel-asynchronous-routing) with their Camel endpoints. * Custom Camel route - Demonstrates the combined usage of a `Producer` and a @@ -403,4 +402,4 @@ For an introduction to akka-camel 1, see also the [Appendix E - Akka and Camel]( Other, more advanced external articles (for version 1) are: * [Akka Consumer Actors: New Features and Best Practices](http://krasserm.blogspot.com/2011/02/akka-consumer-actors-new-features-and.html) - * [Akka Producer Actors: New Features and Best Practices](http://krasserm.blogspot.com/2011/02/akka-producer-actor-new-features-and.html) \ No newline at end of file + * [Akka Producer Actors: New Features and Best Practices](http://krasserm.blogspot.com/2011/02/akka-producer-actor-new-features-and.html) diff --git a/akka-docs/src/main/paradox/scala/common/binary-compatibility-rules.md b/akka-docs/src/main/paradox/scala/common/binary-compatibility-rules.md index d510656554..fedc14ca3e 100644 --- a/akka-docs/src/main/paradox/scala/common/binary-compatibility-rules.md +++ b/akka-docs/src/main/paradox/scala/common/binary-compatibility-rules.md @@ -42,7 +42,6 @@ OK: 3.1.n --> 3.2.0 ... Some modules are excluded from the binary compatibility guarantees, such as: -> * `*-testkit` modules - since these are to be used only in tests, which usually are re-compiled and run on demand * `*-tck` modules - since they may want to add new tests (or force configuring something), in order to discover possible failures in an existing implementation that the TCK is supposed to be testing. Compatibility here is not *guaranteed*, however it is attempted to make the upgrade prosess as smooth as possible. 
* all @ref:[may change](may-change.md) modules - which by definition are subject to rapid iteration and change. Read more about that in @ref:[Modules marked "May Change"](may-change.md) @@ -138,4 +137,4 @@ there is no guarantee objects can be cleanly deserialized if serialized with a d The internal Akka Protobuf serializers that can be enabled explicitly with `enable-additional-serialization-bindings` or implicitly with `akka.actor.allow-java-serialization = off` (which is preferable from a security standpoint) -does not suffer from this problem. \ No newline at end of file +does not suffer from this problem. diff --git a/akka-docs/src/main/paradox/scala/general/actor-systems.md b/akka-docs/src/main/paradox/scala/general/actor-systems.md index e43af69192..7666158e6b 100644 --- a/akka-docs/src/main/paradox/scala/general/actor-systems.md +++ b/akka-docs/src/main/paradox/scala/general/actor-systems.md @@ -46,7 +46,6 @@ Now, the difficulty in designing such a system is how to decide who should supervise what. There is of course no single best solution, but there are a few guidelines which might be helpful: -> * If one actor manages the work another actor is doing, e.g. by passing on sub-tasks, then the manager should supervise the child. The reason is that the manager knows which kind of failures are expected and how to handle @@ -117,7 +116,6 @@ under increased load. 
The non-exhaustive list of adequate solutions to the “blocking problem” includes the following suggestions: -> * Do the blocking call within an actor (or a set of actors managed by a router @ref:[router](../routing.md), making sure to configure a thread pool which is either dedicated for this purpose or diff --git a/akka-docs/src/main/paradox/scala/general/configuration.md b/akka-docs/src/main/paradox/scala/general/configuration.md index 67949fe7b0..beb6613dff 100644 --- a/akka-docs/src/main/paradox/scala/general/configuration.md +++ b/akka-docs/src/main/paradox/scala/general/configuration.md @@ -357,7 +357,6 @@ that might look like: When working with `Config` objects, keep in mind that there are three "layers" in the cake: -> * `ConfigFactory.defaultOverrides()` (system properties) * the app's settings * `ConfigFactory.defaultReference()` (reference.conf) @@ -365,7 +364,6 @@ three "layers" in the cake: The normal goal is to customize the middle layer while leaving the other two alone. -> * `ConfigFactory.load()` loads the whole stack * the overloads of `ConfigFactory.load()` let you specify a different middle layer @@ -402,7 +400,6 @@ You can use asterisks as wildcard matches for the actor path sections, so you co `/*/sampleActor` and that would match all `sampleActor` on that level in the hierarchy. 
In addition, please note: -> * you can also use wildcards in the last position to match all actors at a certain level: `/someParent/*` * you can use double-wildcards in the last position to match all child actors and their children recursively: `/someParent/**` diff --git a/akka-docs/src/main/paradox/scala/general/message-delivery-reliability.md b/akka-docs/src/main/paradox/scala/general/message-delivery-reliability.md index 35df07acd0..6fb154834a 100644 --- a/akka-docs/src/main/paradox/scala/general/message-delivery-reliability.md +++ b/akka-docs/src/main/paradox/scala/general/message-delivery-reliability.md @@ -113,13 +113,12 @@ other message dissemination features (unless stated otherwise). The guarantee is illustrated in the following: +> Actor `A1` sends messages `M1`, `M2`, `M3` to `A2` > -Actor `A1` sends messages `M1`, `M2`, `M3` to `A2` -> -Actor `A3` sends messages `M4`, `M5`, `M6` to `A2` -> +> Actor `A3` sends messages `M4`, `M5`, `M6` to `A2` + This means that: -: + 1. If `M1` is delivered it must be delivered before `M2` and `M3` 2. If `M2` is delivered it must be delivered before `M3` 3. If `M4` is delivered it must be delivered before `M5` and `M6` @@ -140,14 +139,13 @@ order. Please note that this rule is **not transitive**: +> Actor `A` sends message `M1` to actor `C` > -Actor `A` sends message `M1` to actor `C` +> Actor `A` then sends message `M2` to actor `B` > -Actor `A` then sends message `M2` to actor `B` +> Actor `B` forwards message `M2` to actor `C` > -Actor `B` forwards message `M2` to actor `C` -> -Actor `C` may receive `M1` and `M2` in any order +> Actor `C` may receive `M1` and `M2` in any order Causal transitive ordering would imply that `M2` is never received before `M1` at actor `C` (though any of them might be lost). 
This ordering can be @@ -173,12 +171,11 @@ Please note, that the ordering guarantees discussed above only hold for user mes of an actor is communicated by special system messages that are not ordered relative to ordinary user messages. In particular: +> Child actor `C` sends message `M` to its parent `P` > -Child actor `C` sends message `M` to its parent `P` +> Child actor fails with failure `F` > -Child actor fails with failure `F` -> -Parent actor `P` might receive the two events either in order `M`, `F` or `F`, `M` +> Parent actor `P` might receive the two events either in order `M`, `F` or `F`, `M` The reason for this is that internal system messages has their own mailboxes therefore the ordering of enqueue calls of a user and system message cannot guarantee the ordering of their dequeue times. @@ -251,14 +248,13 @@ As explained in the previous section local message sends obey transitive causal ordering under certain conditions. This ordering can be violated due to different message delivery latencies. For example: +> Actor `A` on node-1 sends message `M1` to actor `C` on node-3 > -Actor `A` on node-1 sends message `M1` to actor `C` on node-3 +> Actor `A` on node-1 then sends message `M2` to actor `B` on node-2 > -Actor `A` on node-1 then sends message `M2` to actor `B` on node-2 +> Actor `B` on node-2 forwards message `M2` to actor `C` on node-3 > -Actor `B` on node-2 forwards message `M2` to actor `C` on node-3 -> -Actor `C` may receive `M1` and `M2` in any order +> Actor `C` may receive `M1` and `M2` in any order It might take longer time for `M1` to "travel" to node-3 than it takes for `M2` to "travel" to node-3 via node-2. @@ -358,4 +354,4 @@ seeing a `akka.dispatch.Terminate` message dropped means that two termination requests were given, but of course only one can succeed. 
In the same vein, you might see `akka.actor.Terminated` messages from children while stopping a hierarchy of actors turning up in dead letters if the parent -is still watching the child when the parent terminates. \ No newline at end of file +is still watching the child when the parent terminates. diff --git a/akka-docs/src/main/paradox/scala/general/stream/stream-design.md b/akka-docs/src/main/paradox/scala/general/stream/stream-design.md index a6e173675a..7cd85fbba9 100644 --- a/akka-docs/src/main/paradox/scala/general/stream/stream-design.md +++ b/akka-docs/src/main/paradox/scala/general/stream/stream-design.md @@ -14,7 +14,6 @@ Akka is built upon a conscious decision to offer APIs that are minimal and consi From this follows that the principles implemented by Akka Streams are: -> * all features are explicit in the API, no magic * supreme compositionality: combined pieces retain the function of each part * exhaustive model of the domain of distributed bounded stream processing @@ -25,7 +24,6 @@ This means that we provide all the tools necessary to express any stream process One important consequence of offering only features that can be relied upon is the restriction that Akka Streams cannot ensure that all objects sent through a processing topology will be processed. Elements can be dropped for a number of reasons: -> * plain user code can consume one element in a *map(...)* stage and produce an entirely different one as its result * common stream operators drop elements intentionally, e.g. take/drop/filter/conflate/buffer/… * stream failure will tear down the stream without waiting for processing to finish, all elements that are in flight will be discarded @@ -51,7 +49,6 @@ This means that `Sink.asPublisher(true)` (for enabling fan-out support) must be We expect libraries to be built on top of Akka Streams, in fact Akka HTTP is one such example that lives within the Akka project itself. 
In order to allow users to profit from the principles that are described for Akka Streams above, the following rules are established: -> * libraries shall provide their users with reusable pieces, i.e. expose factories that return graphs, allowing full compositionality * libraries may optionally and additionally provide facilities that consume and materialize graphs @@ -71,7 +68,6 @@ Exceptions from this need to be well-justified and carefully documented. Akka Streams must enable a library to express any stream processing utility in terms of immutable blueprints. The most common building blocks are -> * Source: something with exactly one output stream * Sink: something with exactly one input stream * Flow: something with exactly one input and one output stream @@ -102,4 +98,4 @@ The ability for failures to propagate faster than data elements is essential for A recovery element (i.e. any transformation that absorbs an `onError` signal and turns that into possibly more data elements followed normal stream completion) acts as a bulkhead that confines a stream collapse to a given region of the stream topology. Within the collapsed region buffered elements may be lost, but the outside is not affected by the failure. -This works in the same fashion as a `try`–`catch` expression: it marks a region in which exceptions are caught, but the exact amount of code that was skipped within this region in case of a failure might not be known precisely—the placement of statements matters. \ No newline at end of file +This works in the same fashion as a `try`–`catch` expression: it marks a region in which exceptions are caught, but the exact amount of code that was skipped within this region in case of a failure might not be known precisely—the placement of statements matters. 
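Reviewer note: the stream-design hunk above compares a recovery element to a `try`–`catch` bulkhead — in-flight elements inside the collapsed region may be lost, but the failure is absorbed and the stream completes normally. A minimal sketch of that confinement idea (hypothetical names; not the Akka `recover` operator itself):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch of "recovery element as bulkhead": a failing
// transformation collapses its region; the recovery absorbs the error
// and completes the downstream with a fallback element instead.
public class RecoverSketch {
    static List<String> runWithRecovery(List<Integer> in, Function<Integer, String> f, String fallback) {
        List<String> out = new ArrayList<>();
        try {
            for (int element : in) {
                out.add(f.apply(element)); // may throw: later elements are discarded
            }
        } catch (RuntimeException failure) {
            out.add(fallback);             // absorb the failure, then complete normally
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> result = runWithRecovery(
            List.of(1, 2, 3),
            i -> { if (i == 3) throw new RuntimeException("boom"); return "ok-" + i; },
            "recovered");
        System.out.println(result); // [ok-1, ok-2, recovered]
    }
}
```

As with `try`–`catch`, the placement of the recovery matters: everything upstream of it is inside the bulkhead, everything downstream is shielded from the collapse.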
diff --git a/akka-docs/src/main/paradox/scala/general/terminology.md b/akka-docs/src/main/paradox/scala/general/terminology.md index 46c8a3028c..c9f77fa8b6 100644 --- a/akka-docs/src/main/paradox/scala/general/terminology.md +++ b/akka-docs/src/main/paradox/scala/general/terminology.md @@ -106,6 +106,5 @@ is the only one trying, the operation will succeed. ## Recommended literature -> * The Art of Multiprocessor Programming, M. Herlihy and N Shavit, 2008. ISBN 978-0123705914 * Java Concurrency in Practice, B. Goetz, T. Peierls, J. Bloch, J. Bowbeer, D. Holmes and D. Lea, 2006. ISBN 978-0321349606 diff --git a/akka-docs/src/main/paradox/scala/logging.md b/akka-docs/src/main/paradox/scala/logging.md index 3aa21eb3b0..4af5c55f8c 100644 --- a/akka-docs/src/main/paradox/scala/logging.md +++ b/akka-docs/src/main/paradox/scala/logging.md @@ -25,7 +25,6 @@ class MyActor extends Actor with akka.actor.ActorLogging { The second parameter to the `Logging` is the source of this logging channel. The source object is translated to a String according to the following rules: -> * if it is an Actor or ActorRef, its path is used * in case of a String it is used as is * in case of a class an approximation of its simpleName diff --git a/akka-docs/src/main/paradox/scala/multi-node-testing.md b/akka-docs/src/main/paradox/scala/multi-node-testing.md index c5420ca15a..7ed562dd6a 100644 --- a/akka-docs/src/main/paradox/scala/multi-node-testing.md +++ b/akka-docs/src/main/paradox/scala/multi-node-testing.md @@ -188,7 +188,6 @@ together with the tutorial. The source code of this sample can be found in the @ There are a couple of things to keep in mind when writing multi node tests or else your tests might behave in surprising ways. -> * Don't issue a shutdown of the first node. The first node is the controller and if it shuts down your test will break. 
* To be able to use `blackhole`, `passThrough`, and `throttle` you must activate the failure injector and throttler transport adapters by specifying `testTransport(on = true)` in your MultiNodeConfig. @@ -201,4 +200,4 @@ thread. This also means that you shouldn't use them from inside an actor, a futu ## Configuration There are several configuration properties for the Multi-Node Testing module, please refer -to the @ref:[reference configuration](general/configuration.md#config-akka-multi-node-testkit). \ No newline at end of file +to the @ref:[reference configuration](general/configuration.md#config-akka-multi-node-testkit). diff --git a/akka-docs/src/main/paradox/scala/routing.md b/akka-docs/src/main/paradox/scala/routing.md index e4c44a90ab..3449e2e7c1 100644 --- a/akka-docs/src/main/paradox/scala/routing.md +++ b/akka-docs/src/main/paradox/scala/routing.md @@ -319,7 +319,6 @@ There is no Group variant of the BalancingPool. A Router that tries to send to the non-suspended child routee with fewest messages in mailbox. The selection is done in this order: -> * pick any idle routee (not processing message) with empty mailbox * pick any routee with empty mailbox * pick routee with fewest pending messages in mailbox @@ -785,4 +784,4 @@ It is not allowed to configure the `routerDispatcher` to be a `akka.dispatch.BalancingDispatcherConfigurator` since the messages meant for the special router actor cannot be processed by any other actor. -@@@ \ No newline at end of file +@@@ diff --git a/akka-docs/src/main/paradox/scala/stream/stream-composition.md b/akka-docs/src/main/paradox/scala/stream/stream-composition.md index 875b59336c..d7bf780916 100644 --- a/akka-docs/src/main/paradox/scala/stream/stream-composition.md +++ b/akka-docs/src/main/paradox/scala/stream/stream-composition.md @@ -191,7 +191,6 @@ encoded in the provided `RunnableGraph`. To be able to interact with the running needs to return a different object that provides the necessary interaction capabilities. 
In other words, the `RunnableGraph` can be seen as a factory, which creates: -> * a network of running processing entities, inaccessible from the outside * a materialized value, optionally providing a controlled interaction capability with the network diff --git a/akka-docs/src/main/paradox/scala/stream/stream-customize.md b/akka-docs/src/main/paradox/scala/stream/stream-customize.md index 32e4b360b8..0f23d92ec2 100644 --- a/akka-docs/src/main/paradox/scala/stream/stream-customize.md +++ b/akka-docs/src/main/paradox/scala/stream/stream-customize.md @@ -317,7 +317,6 @@ by calling `getStageActorRef(receive)` passing in a function that takes a `Pair` or `unwatch(ref)` methods. The reference can be also watched by external actors. The current limitations of this `ActorRef` are: -> * they are not location transparent, they cannot be accessed via remoting. * they cannot be returned as materialized values. * they cannot be accessed from the constructor of the `GraphStageLogic`, but they can be accessed from the diff --git a/akka-docs/src/main/paradox/scala/stream/stream-flows-and-basics.md b/akka-docs/src/main/paradox/scala/stream/stream-flows-and-basics.md index 1a9bae68db..0cd2c6d167 100644 --- a/akka-docs/src/main/paradox/scala/stream/stream-flows-and-basics.md +++ b/akka-docs/src/main/paradox/scala/stream/stream-flows-and-basics.md @@ -232,7 +232,6 @@ yet will materialize that stage multiple times. By default Akka Streams will fuse the stream operators. 
This means that the processing steps of a flow or stream graph can be executed within the same Actor and has two consequences: -> * passing elements from one processing stage to the next is a lot faster between fused stages due to avoiding the asynchronous messaging overhead * fused stream processing stages does not run in parallel to each other, meaning that diff --git a/akka-docs/src/main/paradox/scala/stream/stream-graphs.md b/akka-docs/src/main/paradox/scala/stream/stream-graphs.md index 1d034fe3f0..ec9a0c9451 100644 --- a/akka-docs/src/main/paradox/scala/stream/stream-graphs.md +++ b/akka-docs/src/main/paradox/scala/stream/stream-graphs.md @@ -22,20 +22,18 @@ Akka Streams currently provide these junctions (for a detailed list see @ref:[st * **Fan-out** -> - * `Broadcast[T]` – *(1 input, N outputs)* given an input element emits to each output - * `Balance[T]` – *(1 input, N outputs)* given an input element emits to one of its output ports - * `UnzipWith[In,A,B,...]` – *(1 input, N outputs)* takes a function of 1 input that given a value for each input emits N output elements (where N <= 20) - * `UnZip[A,B]` – *(1 input, 2 outputs)* splits a stream of `(A,B)` tuples into two streams, one of type `A` and one of type `B` + * `Broadcast[T]` – *(1 input, N outputs)* given an input element emits to each output + * `Balance[T]` – *(1 input, N outputs)* given an input element emits to one of its output ports + * `UnzipWith[In,A,B,...]` – *(1 input, N outputs)* takes a function of 1 input that given a value for each input emits N output elements (where N <= 20) + * `UnZip[A,B]` – *(1 input, 2 outputs)* splits a stream of `(A,B)` tuples into two streams, one of type `A` and one of type `B` * **Fan-in** -> - * `Merge[In]` – *(N inputs , 1 output)* picks randomly from inputs pushing them one by one to its output - * `MergePreferred[In]` – like `Merge` but if elements are available on `preferred` port, it picks from it, otherwise randomly from `others` - * 
`ZipWith[A,B,...,Out]` – *(N inputs, 1 output)* which takes a function of N inputs that given a value for each input emits 1 output element
- * `Zip[A,B]` – *(2 inputs, 1 output)* is a `ZipWith` specialised to zipping input streams of `A` and `B` into an `(A,B)` tuple stream
- * `Concat[A]` – *(2 inputs, 1 output)* concatenates two streams (first consume one, then the second one)
+ * `Merge[In]` – *(N inputs, 1 output)* picks randomly from inputs pushing them one by one to its output
+ * `MergePreferred[In]` – like `Merge` but if elements are available on `preferred` port, it picks from it, otherwise randomly from `others`
+ * `ZipWith[A,B,...,Out]` – *(N inputs, 1 output)* which takes a function of N inputs that, given a value for each input, emits 1 output element
+ * `Zip[A,B]` – *(2 inputs, 1 output)* is a `ZipWith` specialised to zipping input streams of `A` and `B` into an `(A,B)` tuple stream
+ * `Concat[A]` – *(2 inputs, 1 output)* concatenates two streams (consumes the first one, then the second one)
 One of the goals of the GraphDSL DSL is to look similar to how one would draw a graph on a whiteboard, so that it is simple to
 translate a design from whiteboard to code and be able to relate those two. Let's illustrate this by translating
@@ -182,7 +180,6 @@ In general a custom `Shape` needs to be able to provide all its input and output
 able to create a new instance from given ports. There are some predefined shapes provided to avoid unnecessary boilerplate:
->
 * `SourceShape`, `SinkShape`, `FlowShape` for simpler shapes,
 * `UniformFanInShape` and `UniformFanOutShape` for junctions with multiple input (or output) ports of the same type,
@@ -361,4 +358,4 @@ arc that injects a single element using `Source.single`.
 @@snip [GraphCyclesSpec.scala]($code$/scala/docs/stream/GraphCyclesSpec.scala) { #zipping-live }
 When we run the above example we see that processing starts and never stops.
 The important takeaway from this example
-is that balanced cycles often need an initial "kick-off" element to be injected into the cycle.
\ No newline at end of file
+is that balanced cycles often need an initial "kick-off" element to be injected into the cycle.
diff --git a/akka-docs/src/main/paradox/scala/stream/stream-parallelism.md b/akka-docs/src/main/paradox/scala/stream/stream-parallelism.md
index 7c378638db..e2bc035e27 100644
--- a/akka-docs/src/main/paradox/scala/stream/stream-parallelism.md
+++ b/akka-docs/src/main/paradox/scala/stream/stream-parallelism.md
@@ -28,7 +28,6 @@ This is how this setup would look like implemented as a stream:
 The two `map` stages in sequence (encapsulated in the "frying pan" flows) will be executed in a pipelined way,
 basically doing the same as Roland with his frying pans:
->
 1. A `ScoopOfBatter` enters `fryingPan1`
 2. `fryingPan1` emits a HalfCookedPancake once `fryingPan2` becomes available
 3. `fryingPan2` takes the HalfCookedPancake
@@ -87,7 +86,6 @@ in sequence.
 It is also possible to organize parallelized stages into pipelines. This would mean employing four chefs:
->
 * the first two chefs prepare half-cooked pancakes from batter, in parallel, then put those on a large enough flat surface.
 * the second two chefs take these and fry their other side in their own pans, then they put the pancakes on a shared
@@ -104,4 +102,4 @@ at the entry point of the pipeline. This only matters however if the processing
 deviation.
 > [1] Roland's reason for this seemingly suboptimal procedure is that he prefers the temperature of the second pan
-to be slightly lower than the first in order to achieve a more homogeneous result.
\ No newline at end of file
+to be slightly lower than the first in order to achieve a more homogeneous result.
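The pipelined hand-off described above can be sketched outside Akka Streams with a plain bounded queue between two threads. This is a rough analogy only, not the Akka Streams API; the names `fry1`/`fry2` are hypothetical stand-ins for the two `map` stages ("frying pans"):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class PancakePipeline {

    // Stage 1: stands in for the first `map` (fryingPan1): batter -> half-cooked.
    static String fry1(int scoop) { return "half-" + scoop; }

    // Stage 2: stands in for the second `map` (fryingPan2): half-cooked -> done.
    static String fry2(String half) { return half.replace("half-", "pancake-"); }

    static List<String> run() throws InterruptedException {
        // A capacity-1 hand-off models the back-pressured boundary between the two
        // stages: stage 1 may already fry scoop n+1 while stage 2 finishes pancake n.
        BlockingQueue<String> handOff = new ArrayBlockingQueue<>(1);
        List<String> done = new ArrayList<>();

        Thread stage1 = new Thread(() -> {
            for (int scoop = 1; scoop <= 3; scoop++) {
                try {
                    handOff.put(fry1(scoop)); // blocks until stage 2 took the previous one
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });
        stage1.start();
        for (int i = 0; i < 3; i++) {
            done.add(fry2(handOff.take())); // elements arrive strictly in order
        }
        stage1.join();
        return done;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // [pancake-1, pancake-2, pancake-3]
    }
}
```

Note that, as in the pipelined stream, ordering is preserved: the parallelism comes only from the two stages overlapping in time, not from reordering elements.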
diff --git a/akka-docs/src/main/paradox/scala/testing.md b/akka-docs/src/main/paradox/scala/testing.md
index 3f3b04e4dc..b581916af6 100644
--- a/akka-docs/src/main/paradox/scala/testing.md
+++ b/akka-docs/src/main/paradox/scala/testing.md
@@ -16,7 +16,6 @@ perform tests.
 Akka comes with a dedicated module `akka-testkit` for supporting tests at
 different levels, which fall into two clearly distinct categories:
->
 * Testing isolated pieces of code without involving the actor model, meaning
 without multiple threads; this implies completely deterministic behavior
 concerning the ordering of events and no concurrency concerns and will be
@@ -156,7 +155,6 @@ underlying actor:
 You may of course mix and match both modi operandi of `TestActorRef` as suits
 your test needs:
->
 * one common use case is setting up the actor into a specific internal state
 before sending the test message
 * another is to verify correct internal state transitions after having sent
@@ -207,7 +205,6 @@ actor—are stopped.
 The above-mentioned `expectMsg` is not the only method for formulating
 assertions concerning received messages. Here is the full list:
->
 * `expectMsg[T](d: Duration, msg: T): T`
 The given message object must be received within the specified time; the
@@ -280,7 +277,6 @@ provided hint for easier debugging.
 In addition to message reception assertions there are also methods which help
 with message flows:
->
 * `receiveOne(d: Duration): AnyRef`
 Tries to receive one message for at most the given time interval and
@@ -703,7 +699,6 @@ send returns and no `InterruptedException` will be thrown.
 To summarize, these are the features which the `CallingThreadDispatcher` has
 to offer:
->
 * Deterministic execution of single-threaded tests while retaining nearly full
 actor semantics
 * Full message processing history leading up to the point of failure in
@@ -820,4 +815,4 @@ when writing the tests or alternatively the `sequential` keyword.
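The `expectMsg`-style assertions listed above boil down to "the given message must arrive within a deadline". A minimal plain-Java sketch of that idea (not the `akka-testkit` API; `ProbeSketch` and `tell` are hypothetical names standing in for the test actor):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class ProbeSketch {
    // Queue of messages "received" by the probe, like the testActor's mailbox.
    private final BlockingQueue<Object> queue = new LinkedBlockingQueue<>();

    // Stands in for sending a message to the test actor.
    public void tell(Object msg) { queue.add(msg); }

    // expectMsg-like: the given message object must be received within the
    // specified time; it is also returned for further inspection.
    public <T> T expectMsg(long timeoutMillis, T expected) throws InterruptedException {
        Object received = queue.poll(timeoutMillis, TimeUnit.MILLISECONDS);
        if (!expected.equals(received)) {
            throw new AssertionError("expected " + expected + ", got " + received);
        }
        @SuppressWarnings("unchecked")
        T t = (T) received;
        return t;
    }

    public static void main(String[] args) throws InterruptedException {
        ProbeSketch probe = new ProbeSketch();
        probe.tell("hello");
        System.out.println(probe.expectMsg(1000, "hello")); // hello
    }
}
```

The real TestKit adds the pieces this sketch omits: a configurable default timeout, time dilation, and the whole family of `expectMsg*`/`receive*` variants.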
 ## Configuration
 There are several configuration properties for the TestKit module, please refer
-to the @ref:[reference configuration](general/configuration.md#config-akka-testkit).
\ No newline at end of file
+to the @ref:[reference configuration](general/configuration.md#config-akka-testkit).
diff --git a/akka-docs/src/main/paradox/scala/typed-actors.md b/akka-docs/src/main/paradox/scala/typed-actors.md
index 55043ea319..2d767ca722 100644
--- a/akka-docs/src/main/paradox/scala/typed-actors.md
+++ b/akka-docs/src/main/paradox/scala/typed-actors.md
@@ -99,7 +99,6 @@ and we know how to create a Typed Actor from that, so let's look at calling thes
 Methods returning:
->
 * `Unit` will be dispatched with `fire-and-forget` semantics, exactly like `ActorRef.tell`
 * `scala.concurrent.Future[_]` will use `send-request-reply` semantics, exactly like `ActorRef.ask`
 * `scala.Option[_]` will use `send-request-reply` semantics, but *will* block to wait for an answer,
@@ -170,13 +169,11 @@ you can define the strategy to use for supervising child actors, as described in
 By having your Typed Actor implementation class implement any and all of the following:
->
->
 * `TypedActor.PreStart`
 * `TypedActor.PostStop`
 * `TypedActor.PreRestart`
 * `TypedActor.PostRestart`
->
+
 You can hook into the lifecycle of your Typed Actor.
 ## Receive arbitrary messages
@@ -226,4 +223,4 @@ In order to round robin among a few instances of such actors, you can simply create
 and then facade it with a `TypedActor` as shown in the example below. This works because typed actors of course communicate
 using the same mechanisms as normal actors, and method calls on them get transformed into message sends of `MethodCall` messages.
-@@snip [TypedActorDocSpec.scala]($code$/scala/docs/actor/TypedActorDocSpec.scala) { #typed-router }
\ No newline at end of file
+@@snip [TypedActorDocSpec.scala]($code$/scala/docs/actor/TypedActorDocSpec.scala) { #typed-router }
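The three return-type semantics for Typed Actor method calls can be illustrated with plain Java (no Akka dependency; `Squarer` here is a hypothetical plain interface modeled on the docs' example, using `CompletableFuture`/`Optional` in place of the Scala `Future`/`Option` types):

```java
import java.util.Optional;
import java.util.concurrent.CompletableFuture;

public class SquarerSketch {

    // Hypothetical interface mirroring the three call semantics described above.
    interface Squarer {
        void logIt(int i);                        // fire-and-forget, like ActorRef.tell
        CompletableFuture<Integer> square(int i); // send-request-reply, like ActorRef.ask
        Optional<Integer> squareNow(int i);       // send-request-reply, but blocks for the answer
    }

    static class SquarerImpl implements Squarer {
        public void logIt(int i) { /* no reply is ever sent back to the caller */ }

        public CompletableFuture<Integer> square(int i) {
            // The reply arrives asynchronously; the caller decides when to wait.
            return CompletableFuture.supplyAsync(() -> i * i);
        }

        public Optional<Integer> squareNow(int i) {
            // Blocks the calling thread until the answer is available.
            return Optional.of(square(i).join());
        }
    }

    public static void main(String[] args) {
        Squarer s = new SquarerImpl();
        s.logIt(1);                             // nothing to wait for
        System.out.println(s.square(3).join()); // 9
        System.out.println(s.squareNow(4).get()); // 16
    }
}
```

The key point the sketch makes visible is that only the return type of the method decides whether the caller is decoupled from (`void`/`Future`) or blocked on (`Option`) the actor's reply.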