Replace processing stage with operator

Richard Imaoka 2018-05-19 10:15:20 +09:00
parent 9784f65ced
commit 984c081757
49 changed files with 77 additions and 77 deletions

View file

@@ -4,7 +4,7 @@ It took quite a while until we were reasonably happy with the look and feel of t
@@@ note
As detailed in the introduction, keep in mind that the Akka Streams API is completely decoupled from the Reactive Streams interfaces, which are an implementation detail for how to pass stream data between individual processing stages.
As detailed in the introduction, keep in mind that the Akka Streams API is completely decoupled from the Reactive Streams interfaces, which are an implementation detail for how to pass stream data between individual operators.
@@@

View file

@@ -205,7 +205,7 @@ Distributed Data is intended to solve the following challenges:
Actors are a fundamental model for concurrency, but there are common patterns where their use requires the user
to implement the same pattern over and over. Very common is the scenario where a chain, or graph, of actors needs to
process a potentially large, or infinite, stream of sequential events and properly coordinate resource usage so that
faster processing stages does not overwhelm slower ones in the chain or graph. Streams provide a higher-level
faster operators do not overwhelm slower ones in the chain or graph. Streams provide a higher-level
abstraction on top of actors that simplifies writing such processing networks, handling all the fine details in the
background and providing a safe, typed, composable programming model. Streams is also an implementation
of the [Reactive Streams standard](http://www.reactive-streams.org) which enables integration with all third

View file

@@ -2,7 +2,7 @@
Creates a real `Flow` upon receiving the first element by calling the relevant `flowFactory` given as an argument.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }

View file

@@ -2,7 +2,7 @@
Attaches the given `Sink` to this `Flow`, meaning that elements that pass through this `Flow` will also be sent to the `Sink`.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@ div { .group-scala }
## Signature
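To make the behavior concrete, here is an editor's sketch (not part of the commit's diff), assuming an Akka 2.5-era setup with an `ActorMaterializer`; the `system`/`mat` names are illustrative, and the later sketches on this page reuse this setup:

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl._

implicit val system = ActorSystem("demo")
implicit val mat = ActorMaterializer()

// each element is sent to the attached Sink as well as downstream
Source(1 to 3)
  .alsoTo(Sink.foreach(n => println(s"side: $n")))
  .runForeach(n => println(s"main: $n"))
```

Unlike `wireTap`, the `Sink` attached with `alsoTo` participates in backpressure of the main flow.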

View file

@@ -2,7 +2,7 @@
Use the `ask` pattern to send a request-reply message to the target `ref` actor.
@ref[Asynchronous processing stages](../index.md#asynchronous-processing-stages)
@ref[Asynchronous operators](../index.md#asynchronous-operators)
@@@ div { .group-scala }
## Signature
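A hedged sketch with a classic actor (the `Doubler` actor and all names are invented for illustration; assumes the Akka 2.5 `ask[S](parallelism)(ref)` signature with an implicit `Timeout`, setup as in the first sketch):

```scala
import akka.actor.{ Actor, Props }
import akka.util.Timeout
import scala.concurrent.duration._

class Doubler extends Actor {
  def receive = { case n: Int => sender() ! n * 2 } // the reply completes the ask
}

implicit val timeout: Timeout = 3.seconds
val doubler = system.actorOf(Props[Doubler], "doubler")

Source(1 to 3).ask[Int](parallelism = 4)(doubler).runForeach(println) // 2, 4, 6
```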

View file

@@ -2,7 +2,7 @@
Apply a partial function to each incoming element; if the partial function is defined for an element, the returned value is passed downstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
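For example (editor's sketch, setup as in the first sketch above):

```scala
// only Ints match the partial function; other elements are dropped
Source(List[Any](1, "two", 3))
  .collect { case n: Int => n * 10 }
  .runForeach(println) // 10, 30
```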

View file

@@ -2,7 +2,7 @@
Transform this stream by testing the type of each element as it passes through this processing step, keeping only elements that are instances of the provided type.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
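A sketch, assuming the operator documented here takes the type as a type parameter (setup as above):

```scala
// keeps only the elements that are instances of Int
Source(List[Any](1, "two", 3)).collectType[Int].runForeach(println) // 1, 3
```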

View file

@@ -2,7 +2,7 @@
Detach upstream demand from downstream demand without detaching the stream rates.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }

View file

@@ -2,7 +2,7 @@
Each upstream element will be diverted either to the given sink or to the downstream consumer, according to the predicate function applied to the element.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
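A small sketch (editor's illustration, setup as above):

```scala
val odds = Sink.foreach[Int](n => println(s"odd: $n"))

// elements for which the predicate returns true go to `odds`, the rest flow on
Source(1 to 5).divertTo(odds, _ % 2 != 0).runForeach(n => println(s"even: $n"))
```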

View file

@@ -2,7 +2,7 @@
Drop `n` elements and then pass any subsequent element downstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
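For example (sketch, setup as above):

```scala
Source(1 to 5).drop(2).runForeach(println) // 3, 4, 5
```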

View file

@@ -2,7 +2,7 @@
Drop elements as long as a predicate function returns true for the element.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
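For example (sketch, setup as above):

```scala
// dropping stops permanently once the predicate first fails
Source(List(1, 2, 3, 1)).dropWhile(_ < 3).runForeach(println) // 3, 1
```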

View file

@@ -2,7 +2,7 @@
Filter the incoming elements using a predicate.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
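For example (sketch, setup as above):

```scala
Source(1 to 10).filter(_ % 2 == 0).runForeach(println) // 2, 4, 6, 8, 10
```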

View file

@@ -2,7 +2,7 @@
Filter the incoming elements using a predicate.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }

View file

@@ -2,7 +2,7 @@
Start with the current value `zero` and then apply the current and next value to the given function; when upstream completes, the current value is emitted downstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
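For example (sketch, setup as above):

```scala
// a single 15 is emitted when upstream completes
Source(1 to 5).fold(0)(_ + _).runForeach(println)
```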

View file

@@ -2,7 +2,7 @@
Just like `fold` but receiving a function that returns a @scala[`Future`] @java[`CompletionStage`] of the next value.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
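A sketch (setup as above):

```scala
import scala.concurrent.Future

// like fold, but each accumulator step may be asynchronous
Source(1 to 5).foldAsync(0)((acc, n) => Future.successful(acc + n)).runForeach(println) // 15
```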

View file

@@ -2,7 +2,7 @@
Accumulate incoming events until the specified number of elements have been accumulated and then pass the collection of elements downstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
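For example (sketch, setup as above):

```scala
Source(1 to 7).grouped(3).runForeach(println) // Vector(1, 2, 3), Vector(4, 5, 6), Vector(7)
```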

View file

@@ -2,7 +2,7 @@
Intersperse the stream with the provided element, similar to `List.mkString`.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
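For example (sketch, setup as above):

```scala
// optional start, separator and end, like mkString("[", ", ", "]")
Source(1 to 3).map(_.toString).intersperse("[", ", ", "]").runForeach(print) // [1, 2, 3]
```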

View file

@@ -2,7 +2,7 @@
Limit the number of elements from upstream to the given `max` number.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
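A sketch (setup as above):

```scala
// fails the stream with a StreamLimitReachedException if more than 100 elements arrive
val all = Source(1 to 50).limit(100).runWith(Sink.seq) // succeeds here: only 50 elements
```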

View file

@@ -2,7 +2,7 @@
Ensure stream boundedness by evaluating the cost of incoming elements using a cost function.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }

View file

@@ -2,7 +2,7 @@
Log elements flowing through the stream as well as completion and erroring.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
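A sketch (setup as above; without further attributes, elements are logged at debug level via the system logger):

```scala
Source(1 to 3).log("demo-stream").runWith(Sink.ignore)
```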

View file

@@ -2,7 +2,7 @@
Transform each element in the stream by calling a mapping function with it and passing the returned value downstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
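For example (sketch, setup as above):

```scala
Source(1 to 3).map(_ * 2).runForeach(println) // 2, 4, 6
```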

View file

@@ -2,7 +2,7 @@
Pass incoming elements to a function that returns a @scala[`Future`] @java[`CompletionStage`] result.
@ref[Asynchronous processing stages](../index.md#asynchronous-processing-stages)
@ref[Asynchronous operators](../index.md#asynchronous-operators)
@@@div { .group-scala }
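For example (sketch, setup as above):

```scala
import scala.concurrent.Future
import system.dispatcher // ExecutionContext for the Futures

// up to 4 Futures in flight; emission order matches input order
Source(1 to 5).mapAsync(parallelism = 4)(n => Future(n * 2)).runForeach(println)
```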

View file

@@ -2,7 +2,7 @@
Like `mapAsync` but @scala[`Future`] @java[`CompletionStage`] results are passed downstream as they arrive regardless of the order of the elements that triggered them.
@ref[Asynchronous processing stages](../index.md#asynchronous-processing-stages)
@ref[Asynchronous operators](../index.md#asynchronous-operators)
@@@div { .group-scala }
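A sketch (imports and setup as in the `mapAsync` example):

```scala
// results are emitted as soon as their Future completes, regardless of input order
Source(1 to 5).mapAsyncUnordered(parallelism = 4)(n => Future(n * 2)).runWith(Sink.ignore)
```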

View file

@@ -2,7 +2,7 @@
Transform each element into zero or more elements that are individually passed downstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
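For example (sketch, setup as above):

```scala
// the function must return an immutable collection of output elements
Source(1 to 3).mapConcat(n => List(n, n)).runForeach(println) // 1, 1, 2, 2, 3, 3
```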

View file

@@ -2,7 +2,7 @@
While similar to `recover`, this stage can be used to transform an error signal to a different one *without* logging it as an error in the process.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
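A sketch (setup as above):

```scala
// translate the failure cause; unlike recover, no replacement element is emitted
val remapped = Source(1 to 3).map(n => 6 / (n - 2)).mapError {
  case _: ArithmeticException => new IllegalStateException("division failed")
} // downstream now fails with IllegalStateException instead of ArithmeticException
```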

View file

@@ -2,7 +2,7 @@
Allow sending of one last element downstream when a failure has happened upstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
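For example (sketch, setup as above):

```scala
Source(1 to 5)
  .map(n => if (n == 3) throw new RuntimeException("boom") else n)
  .recover { case _: RuntimeException => -1 } // one final element, then completion
  .runForeach(println) // 1, 2, -1
```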

View file

@@ -2,7 +2,7 @@
Allow switching to an alternative Source when a failure has happened upstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }

View file

@@ -2,7 +2,7 @@
RecoverWithRetries allows switching to an alternative Source on flow failure.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
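A sketch (setup as above):

```scala
Source(1 to 3)
  .map(n => if (n == 2) throw new RuntimeException("boom") else n)
  .recoverWithRetries(attempts = 1, { case _: RuntimeException => Source(List(-1, -2)) })
  .runForeach(println) // 1, -1, -2
```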

View file

@@ -2,7 +2,7 @@
Start with the first element, and then apply the current and next value to the given function; when upstream completes, the current value is emitted downstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
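For example (sketch, setup as above):

```scala
// like fold but uses the first element as the zero
Source(1 to 5).reduce(_ + _).runForeach(println) // 15
```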

View file

@@ -2,7 +2,7 @@
Emit its current value, which starts at `zero`, and then apply the current and next value to the given function, emitting the next current value.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
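For example (sketch, setup as above):

```scala
// emits every intermediate value, starting with zero itself
Source(1 to 4).scan(0)(_ + _).runForeach(println) // 0, 1, 3, 6, 10
```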

View file

@@ -2,7 +2,7 @@
Just like `scan` but receiving a function that returns a @scala[`Future`] @java[`CompletionStage`] of the next value.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }

View file

@@ -2,7 +2,7 @@
Provide a sliding window over the incoming stream and pass the windows as groups of elements downstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
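For example (sketch, setup as above):

```scala
Source(1 to 4).sliding(2).runForeach(println) // Vector(1, 2), Vector(2, 3), Vector(3, 4)
```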

View file

@@ -2,7 +2,7 @@
Transform each element into zero or more elements that are individually passed downstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }

View file

@@ -2,7 +2,7 @@
Pass `n` incoming elements downstream and then complete.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
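For example (sketch, setup as above):

```scala
// upstream is cancelled once the three elements have been emitted
Source(1 to 100).take(3).runForeach(println) // 1, 2, 3
```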

View file

@@ -2,7 +2,7 @@
Pass elements downstream as long as a predicate function returns true for the element; include the element on which the predicate first returns false, and then complete.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
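A sketch of the default (exclusive) behavior; the inclusive variant described above also emits the first failing element (setup as above):

```scala
Source(1 to 5).takeWhile(_ < 3).runForeach(println) // 1, 2
```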

View file

@@ -2,7 +2,7 @@
Limit the throughput to a specific number of elements per time unit, or a specific total cost per time unit, where a function has to be provided to calculate the individual cost of each element.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
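For example (sketch, setup as above):

```scala
import akka.stream.ThrottleMode
import scala.concurrent.duration._

// at most 2 elements per second (burst of 2), shaped (delayed) rather than failed
Source(1 to 10).throttle(2, 1.second, 2, ThrottleMode.shaping).runForeach(println)
```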

View file

@@ -2,7 +2,7 @@
Watch a specific `ActorRef` and signal a failure downstream once the actor terminates.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }

View file

@@ -2,7 +2,7 @@
Attaches the given `Sink` to this `Flow` as a wire tap, meaning that elements that pass through will also be sent to the wire-tap `Sink`, without the latter affecting the mainline flow.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@ref[Simple operators](../index.md#simple-operators)
@@@div { .group-scala }
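A sketch, assuming the `wireTap` overload taking a `Sink` available in the Akka 2.5 line this commit targets (setup as above):

```scala
// the tapped Sink cannot slow down the main flow
Source(1 to 3).wireTap(Sink.foreach[Int](n => println(s"tap: $n"))).runForeach(println)
```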

View file

@@ -107,7 +107,7 @@ Sources and sinks for reading and writing files can be found on `FileIO`.
|FileIO|<a name="frompath"></a>@ref[fromPath](FileIO/fromPath.md)|Emit the contents of a file.|
|FileIO|<a name="topath"></a>@ref[toPath](FileIO/toPath.md)|Create a sink which will write incoming `ByteString` s to a given file path.|
## Simple processing stages
## Simple operators
These stages can transform the rate of incoming elements since there are stages that emit multiple elements for a
single input (e.g. `mapConcat`) or consume multiple elements before emitting one output (e.g. `filter`).
@@ -160,7 +160,7 @@ depending on being backpressured by downstream or not.
|Flow|<a name="fromsinkandsource"></a>@ref[fromSinkAndSource](Flow/fromSinkAndSource.md)|Creates a `Flow` from a `Sink` and a `Source` where the Flow's input will be sent to the `Sink` and the `Flow` 's output will come from the Source.|
|Flow|<a name="fromsinkandsourcecoupled"></a>@ref[fromSinkAndSourceCoupled](Flow/fromSinkAndSourceCoupled.md)|Allows coupling termination (cancellation, completion, erroring) of Sinks and Sources while creating a Flow between them.|
## Asynchronous processing stages
## Asynchronous operators
These stages encapsulate an asynchronous computation, properly handling backpressure while taking care of the asynchronous
operation at the same time (usually handling the completion of a @scala[`Future`] @java[`CompletionStage`]).

View file

@@ -18,7 +18,7 @@ the modularity aspects of the library.
## Basics of composition and modularity
Every processing stage used in Akka Streams can be imagined as a "box" with input and output ports where elements to
Every operator used in Akka Streams can be imagined as a "box" with input and output ports where elements to
be processed arrive and leave the stage. In this view, a `Source` is nothing more than a "box" with a single
output port, or a `BidiFlow` is a "box" with exactly two input and two output ports. In the figure below
we illustrate the most commonly used stages viewed as "boxes".
@@ -26,7 +26,7 @@ we illustrate the most common used stages viewed as "boxes".
![compose_shapes.png](../images/compose_shapes.png)
The *linear* stages are `Source`, `Sink`
and `Flow`, as these can be used to compose strict chains of processing stages.
and `Flow`, as these can be used to compose strict chains of operators.
Fan-in and Fan-out stages usually have multiple input or multiple output ports, and therefore allow building
more complex graph layouts, not only chains. `BidiFlow` stages are usually useful in IO related tasks, where
there are input and output channels to be handled. Due to the specific shape of `BidiFlow` it is easy to
@@ -243,7 +243,7 @@ needs to return a different object that provides the necessary interaction capab
* a network of running processing entities, inaccessible from the outside
* a materialized value, optionally providing a controlled interaction capability with the network
Unlike actors though, each of the processing stages might provide a materialized value, so when we compose multiple
Unlike actors though, each of the operators might provide a materialized value, so when we compose multiple
stages or modules, we need to combine the materialized value as well (there are default rules which make this easier,
for example *to()* and *via()* take care of the most common case of taking the materialized value to the left.
See @ref:[Combining materialized values](stream-flows-and-basics.md#flow-combine-mat) for details).
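As a hedged illustration of the "box" view and the default left-biased materialized value of *to()* and *via()* (editor's sketch, setup as in the first sketch on this page):

```scala
import akka.NotUsed

val source = Source(1 to 3)             // one open output port
val flow   = Flow[Int].map(_ * 2)       // one input, one output
val sink   = Sink.foreach[Int](println) // one open input port

// all ports are now closed; to()/via() kept the leftmost materialized value (NotUsed)
val runnable: RunnableGraph[NotUsed] = source.via(flow).to(sink)
runnable.run()
```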

View file

@@ -21,7 +21,7 @@ This part also serves as supplementary material for the main body of documentati
open while reading the manual and look for examples demonstrating various streaming concepts
as they appear in the main body of documentation.
If you need a quick reference of the available processing stages used in the recipes see @ref:[operator index](operators/index.md).
If you need a quick reference of the available operators used in the recipes see @ref:[operator index](operators/index.md).
## Working with Flows

View file

@@ -14,7 +14,7 @@ To use Akka Streams, add the module to your project:
While the processing vocabulary of Akka Streams is quite rich (see the @ref:[Streams Cookbook](stream-cookbook.md) for examples) it
is sometimes necessary to define new transformation stages either because some functionality is missing from the
stock operations, or for performance reasons. In this part we show how to build custom processing stages and graph
stock operations, or for performance reasons. In this part we show how to build custom operators and graph
junctions of various kinds.
@@@ note
@@ -28,7 +28,7 @@ might be easy to make with a custom @ref[`GraphStage`](stream-customize.md)
<a id="graphstage"></a>
## Custom processing with GraphStage
The `GraphStage` abstraction can be used to create arbitrary graph processing stages with any number of input
The `GraphStage` abstraction can be used to create arbitrary operators with any number of input
or output ports. It is a counterpart of the `GraphDSL.create()` method which creates new stream processing
stages by composing others. Where `GraphStage` differs is that it creates a stage that is itself not divisible into
smaller ones, and allows state to be maintained inside it in a safe way.
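As a minimal sketch of such a non-divisible operator (editor's illustration following the `GraphStage` pattern described here; `MapStage` is an invented name):

```scala
import akka.stream.{ Attributes, FlowShape, Inlet, Outlet }
import akka.stream.stage.{ GraphStage, GraphStageLogic, InHandler, OutHandler }

// a one-input, one-output operator equivalent to map(f)
class MapStage[A, B](f: A => B) extends GraphStage[FlowShape[A, B]] {
  val in  = Inlet[A]("MapStage.in")
  val out = Outlet[B]("MapStage.out")
  override val shape = FlowShape.of(in, out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) {
      setHandler(in, new InHandler {
        override def onPush(): Unit = push(out, f(grab(in))) // transform and forward
      })
      setHandler(out, new OutHandler {
        override def onPull(): Unit = pull(in) // propagate downstream demand upstream
      })
    }
}

// usage: Source(1 to 3).via(Flow.fromGraph(new MapStage[Int, Int](_ * 2)))
```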
@@ -181,7 +181,7 @@ and `abortReading()`
An example of how this API simplifies a stage can be found below in the second version of the `Duplicator`.
### Custom linear processing stages using GraphStage
### Custom linear operators using GraphStage
GraphStage allows for custom linear processing stages by letting them
have one input and one output and using `FlowShape` as their shape.
@@ -276,7 +276,7 @@ in circulation in a potential chain of stages, just like our conceptual "railroa
### Completion
Completion handling usually (but not exclusively) comes into the picture when processing stages need to emit
Completion handling usually (but not exclusively) comes into the picture when operators need to emit
a few more elements after their upstream source has been completed. We have seen an example of this in our
first `Duplicator` implementation where the last element needs to be doubled even after the upstream neighbor
stage has been completed. This can be done by overriding the `onUpstreamFinish` method in @scala[`InHandler`]@java[`AbstractInHandler`].
@@ -462,7 +462,7 @@ Scala
Java
: @@snip [GraphStageDocTest.java]($code$/java/jdocs/stream/GraphStageDocTest.java) { #detached }
## Thread safety of custom processing stages
## Thread safety of custom operators
All of the above custom stages (linear or graph) provide a few simple guarantees that implementors can rely on.
:

View file

@@ -45,13 +45,13 @@ Graph
: A description of a stream processing topology, defining the pathways through which elements shall flow when the stream
is running.
Processing Stage
Operator
: The common name for all building blocks that build up a Graph.
Examples of a processing stage would be operations like `map()`, `filter()`, custom operators extending @ref[`GraphStage`s](stream-customize.md) and graph
junctions like `Merge` or `Broadcast`. For the full list of built-in processing stages see the @ref:[operator index](operators/index.md)
Examples of operators are `map()`, `filter()`, custom ones extending @ref[`GraphStage`s](stream-customize.md) and graph
junctions like `Merge` or `Broadcast`. For the full list of built-in operators see the @ref:[operator index](operators/index.md)
When we talk about *asynchronous, non-blocking backpressure* we mean that the processing stages available in Akka
When we talk about *asynchronous, non-blocking backpressure* we mean that the operators available in Akka
Streams will not use blocking calls but asynchronous message passing to exchange messages between each other, and they
will use asynchronous means to slow down a fast producer, without blocking its thread. This is a thread-pool friendly
design, since entities that need to wait (a fast producer waiting on a slow consumer) will not block the thread but
@@ -63,15 +63,15 @@ can hand it back for further use to an underlying thread-pool.
Linear processing pipelines can be expressed in Akka Streams using the following core abstractions:
Source
: A processing stage with *exactly one output*, emitting data elements whenever downstream processing stages are
: An operator with *exactly one output*, emitting data elements whenever downstream operators are
ready to receive them.
Sink
: A processing stage with *exactly one input*, requesting and accepting data elements possibly slowing down the upstream
: An operator with *exactly one input*, requesting and accepting data elements possibly slowing down the upstream
producer of elements
Flow
: A processing stage which has *exactly one input and output*, which connects its upstream and downstream by
: An operator which has *exactly one input and output*, which connects its upstream and downstream by
transforming the data elements flowing through it.
RunnableGraph
@@ -83,7 +83,7 @@ a `Flow` to a `Sink` to get a new sink. After a stream is properly terminated by
it will be represented by the `RunnableGraph` type, indicating that it is ready to be executed.
It is important to remember that even after constructing the `RunnableGraph` by connecting all the source, sink and
different processing stages, no data will flow through it until it is materialized. Materialization is the process of
different operators, no data will flow through it until it is materialized. Materialization is the process of
allocating all resources needed to run the computation described by a Graph (in Akka Streams this will often involve
starting up Actors). Thanks to Flows being a description of the processing pipeline they are *immutable,
thread-safe, and freely shareable*, which means that it is for example safe to share and send them between actors, to have
@@ -132,7 +132,7 @@ Scala
Java
: @@snip [FlowDocTest.java]($code$/java/jdocs/stream/FlowDocTest.java) { #materialization-runWith }
It is worth pointing out that since processing stages are *immutable*, connecting them returns a new processing stage,
It is worth pointing out that since operators are *immutable*, connecting them returns a new operator,
instead of modifying the existing instance, so while constructing long flows, remember to assign the new value to a variable or run it:
Scala
@@ -143,8 +143,8 @@ Java
@@@ note
By default Akka Streams elements support **exactly one** downstream processing stage.
Making fan-out (supporting multiple downstream processing stages) an explicit opt-in feature allows default stream elements to
By default Akka Streams elements support **exactly one** downstream operator.
Making fan-out (supporting multiple downstream operators) an explicit opt-in feature allows default stream elements to
be less complex and more efficient. Also it allows for greater flexibility on *how exactly* to handle the multicast scenarios,
by providing named fan-out elements such as broadcast (signals all down-stream elements) or balance (signals one of available down-stream elements).
@@ -196,7 +196,7 @@ Akka Streams implement an asynchronous non-blocking back-pressure protocol stand
specification, which Akka is a founding member of.
The user of the library does not have to write any explicit back-pressure handling code — it is built in
and dealt with automatically by all of the provided Akka Streams processing stages. It is possible however to add
and dealt with automatically by all of the provided Akka Streams operators. It is possible however to add
explicit buffer stages with overflow strategies that can influence the behavior of the stream. This is especially important
in complex processing graphs which may even contain loops (which *must* be treated with very special
care, as explained in @ref:[Graph cycles, liveness and deadlocks](stream-graphs.md#graph-cycles)).
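For instance, an explicit buffer with an overflow strategy can be added like this (editor's sketch, setup as in the first sketch):

```scala
import akka.stream.OverflowStrategy

Source(1 to 1000)
  .buffer(64, OverflowStrategy.backpressure) // hold up to 64 elements, then backpressure
  .map(_ * 2)
  .runWith(Sink.ignore)
```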
@@ -287,9 +287,9 @@ yet will materialize that stage multiple times.
By default Akka Streams will fuse the stream operators. This means that the processing steps of a flow or
stream graph can be executed within the same Actor, which has two consequences:
* passing elements from one processing stage to the next is a lot faster between fused
* passing elements from one operator to the next is a lot faster between fused
stages due to avoiding the asynchronous messaging overhead
* fused stream processing stages does not run in parallel to each other, meaning that
* fused stream operators do not run in parallel to each other, meaning that
only up to one CPU core is used for each fused part
To allow for parallel processing you will have to insert asynchronous boundaries manually into your flows and
@@ -312,11 +312,11 @@ by adding information to the flow graph that has been constructed up to this poi
This means that everything that is inside the red bubble will be executed by one actor and everything outside of it
by another. This scheme can be applied successively, always having one such boundary enclose the previous ones plus all
processing stages that have been added since them.
operators that have been added since them.
@@@ warning
Without fusing (i.e. up to version 2.0-M2) each stream processing stage had an implicit input buffer
Without fusing (i.e. up to version 2.0-M2) each stream operator had an implicit input buffer
that holds a few elements for efficiency reasons. If your flow graphs contain cycles then these buffers
may have been crucial in order to avoid deadlocks. With fusing these implicit buffers are no longer
there, data elements are passed without buffering between fused stages. In those cases where buffering
@@ -328,7 +328,7 @@ is needed in order to allow the stream to run at all, you will have to insert ex
<a id="flow-combine-mat"></a>
### Combining materialized values
Since every processing stage in Akka Streams can provide a materialized value after being materialized, it is necessary
Since every operator in Akka Streams can provide a materialized value after being materialized, it is necessary
to somehow express how these values should be composed to a final value when we plug these stages together. For this,
many operator methods have variants that take an additional argument, a function, that will be used to combine the
resulting values. Some examples of using these combiners are illustrated in the example below.
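The "example below" refers to a snippet not reproduced on this page; as a hedged stand-in, a classic combination (setup as in the first sketch):

```scala
import scala.concurrent.duration._

// Source.tick materializes a Cancellable, Sink.foreach a Future[Done];
// Keep.both exposes both as a tuple
val (cancellable, done) =
  Source.tick(0.seconds, 1.second, "tick")
    .toMat(Sink.foreach(println))(Keep.both)
    .run()

cancellable.cancel() // stops the ticks; `done` then completes
```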

View file

@@ -32,7 +32,7 @@ Java
![tcp-stream-bind.png](../images/tcp-stream-bind.png)
Next, we handle *each* incoming connection using a `Flow` which will be used as the processing stage
Next, we handle *each* incoming connection using a `Flow` which will be used as the operator
to handle and emit `ByteString` s from and to the TCP Socket. Since one `ByteString` does not have to necessarily
correspond to exactly one line of text (the client might be sending the line in chunks) we use the @scala[`Framing.delimiter`]@java[`delimiter`]
helper Flow @java[from `akka.stream.javadsl.Framing`] to chunk the inputs up into actual lines of text. The last boolean
@@ -78,7 +78,7 @@ Java
The `repl` flow we use to handle the server interaction first prints the server's response, then awaits on input from
the command line (this blocking call is used here for the sake of simplicity) and converts it to a
`ByteString` which is then sent over the wire to the server. Then we connect the TCP pipeline to this
processing stage - at this point it will be materialized and start processing data once the server responds with
operator - at this point it will be materialized and start processing data once the server responds with
an *initial message*.
A resilient REPL client would be more sophisticated than this, for example it should split out the input reading into
@@ -172,7 +172,7 @@ Scala
Java
: @@snip [StreamFileDocTest.java]($code$/java/jdocs/stream/io/StreamFileDocTest.java) { #file-source }
Please note that these processing stages are backed by Actors and by default are configured to run on a pre-configured
Please note that these operators are backed by Actors and by default are configured to run on a pre-configured
threadpool-backed dispatcher dedicated for File IO. This is very important as it isolates the blocking file IO operations from the rest
of the ActorSystem allowing each dispatcher to be utilised in the most efficient way. If you want to configure a custom
dispatcher for file IO operations globally, you can do so by changing the `akka.stream.materializer.blocking-io-dispatcher`,

View file

@@ -12,12 +12,12 @@ To use Akka Streams, add the module to your project:
## Introduction
Akka Streams processing stages (be it simple operators on Flows and Sources or graph junctions) are "fused" together
Akka Streams operators (be it simple operators on Flows and Sources or graph junctions) are "fused" together
and executed sequentially by default. This avoids the overhead of events crossing asynchronous boundaries but
limits the flow to execute at most one stage at any given time.
In many cases it is useful to be able to concurrently execute the stages of a flow; this is done by explicitly marking
them as asynchronous using the @scala[`async`]@java[`async()`] method. Each processing stage marked as asynchronous will run in a
them as asynchronous using the @scala[`async`]@java[`async()`] method. Each operator marked as asynchronous will run in a
dedicated actor internally, while all stages not marked asynchronous will run in one single actor.
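For example (editor's sketch, setup as in the first sketch on this page):

```scala
Source(1 to 1000)
  .map(_ + 1).async // everything up to here runs in its own actor
  .map(_ * 2)       // runs in another actor, pipelined with the stage above
  .runWith(Sink.ignore)
```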
We will illustrate through the example of pancake cooking how streams can be used for various processing patterns,
@@ -58,7 +58,7 @@ not be able to operate at full capacity <a id="^1" href="#1">[1]</a>.
@@@ note
Asynchronous stream processing stages have internal buffers to make communication between them more efficient.
Asynchronous stream operators have internal buffers to make communication between them more efficient.
For more details about the behavior of these and how to add additional buffers refer to @ref:[Buffers and working with rate](stream-rate.md).
@@@
@@ -92,7 +92,7 @@ The two concurrency patterns that we demonstrated as means to increase throughpu
In fact, it is rather simple to combine the two approaches and streams provide
a nice unifying language to express and compose them.
First, let's look at how we can parallelize pipelined processing stages. In the case of pancakes this means that we
First, let's look at how we can parallelize pipelined operators. In the case of pancakes this means that we
will employ two chefs, each working using Roland's pipelining method, but we use the two chefs in parallel, just like
Patrik used the two frying pans. This is how it looks if expressed as streams:

View file

@@ -65,7 +65,7 @@ These situations are exactly those where the internal batching buffering strateg
### Internal buffers and their effect
As we have explained, for performance reasons Akka Streams introduces a buffer for every asynchronous processing stage.
As we have explained, for performance reasons Akka Streams introduces a buffer for every asynchronous operator.
The purpose of these buffers is solely optimization; in fact, a size of 1 would be the most natural choice if there
were no need for throughput improvements. Therefore it is recommended to keep these buffer sizes small,
and increase them only to a level suitable for the throughput requirements of the application. Default buffer sizes
@@ -110,7 +110,7 @@ a leading 1 though which is caused by an initial prefetch of the @scala[@scalado
@@@ note
In general, when time or rate driven processing stages exhibit strange behavior, one of the first solutions to try
In general, when time or rate driven operators exhibit strange behavior, one of the first solutions to try
should be to decrease the input buffer of the affected elements to 1.
@@@
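Such a reduction can be expressed via attributes (a sketch; `Attributes.inputBuffer` is the real API, the rest is illustrative, setup as in the first sketch):

```scala
import akka.stream.Attributes
import scala.concurrent.duration._

// shrink the operator's input buffer to a single element while debugging
val ticks = Source.tick(0.seconds, 100.millis, "tick")
  .withAttributes(Attributes.inputBuffer(initial = 1, max = 1))
```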

View file

@@ -24,9 +24,9 @@ object StreamOperatorsIndexGenerator extends AutoPlugin {
"Sink stages",
"Additional Sink and Source converters",
"File IO Sinks and Sources",
"Simple processing stages",
"Simple operators",
"Flow stages composed of Sinks and Sources",
"Asynchronous processing stages",
"Asynchronous operators",
"Timer driven stages",
"Backpressure aware stages",
"Nesting and flattening stages",