consistent wording; stream ops are "operators" (#25064)

Konrad `ktoso` Malawski 2018-05-09 16:50:32 +02:00 committed by GitHub
parent d2f2d50b6b
commit 7fa28b3488
GPG key ID: 4AEE18F83AFDEB23
15 changed files with 73 additions and 73 deletions

@@ -178,7 +178,7 @@ but if 2 or more `Future`s are involved `map` will not allow you to combine them
@@snip [FutureDocSpec.scala]($code$/scala/docs/future/FutureDocSpec.scala) { #flat-map }
-Composing futures using nested combinators it can sometimes become quite complicated and hard to read, in these cases using Scala's
+Composing futures using nested operators it can sometimes become quite complicated and hard to read, in these cases using Scala's
'for comprehensions' usually yields more readable code. See next section for examples.
If you need to do conditional propagation, you can use `filter`:
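For illustration, the composition styles the quoted passage contrasts can be sketched with the Scala standard library alone (no Akka needed); the futures and values here are made up for the example:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val f1 = Future { 2 }
val f2 = Future { 4 }

// A for comprehension desugars to flatMap/map and usually reads
// better than nesting the operators by hand.
val sum: Future[Int] = for {
  a <- f1
  b <- f2
} yield a + b

// Conditional propagation: the future fails with a
// NoSuchElementException if the predicate does not hold.
val even: Future[Int] = sum.filter(_ % 2 == 0)

println(Await.result(even, 1.second)) // prints 6
```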

@@ -143,7 +143,7 @@ Scala
Java
: @@snip [PersistenceQueryDocTest.java]($code$/java/jdocs/persistence/PersistenceQueryDocTest.java) { #events-by-tag }
-As you can see, we can use all the usual stream combinators available from @ref:[Streams](stream/index.md) on the resulting query stream,
+As you can see, we can use all the usual stream operators available from @ref:[Streams](stream/index.md) on the resulting query stream,
including for example taking the first 10 and cancelling the stream. It is worth pointing out that the built-in `EventsByTag`
query has an optionally supported offset parameter (of type `Long`) which the journals can use to implement resumable-streams.
For example a journal may be able to use a WHERE clause to begin the read starting from a specific row, or in a datastore

@@ -158,7 +158,7 @@ we have a stream of streams, where every substream will serve identical words.
To count the words, we need to process the stream of streams (the actual groups
containing identical words). `groupBy` returns a @scala[`SubFlow`] @java[`SubSource`], which
means that we transform the resulting substreams directly. In this case we use
-the `reduce` combinator to aggregate the word itself and the number of its
+the `reduce` operator to aggregate the word itself and the number of its
occurrences within a @scala[tuple `(String, Integer)`] @java[`Pair<String, Integer>`]. Each substream will then
emit one final value—precisely such a pair—when the overall input completes. As
a last step we merge back these values from the substreams into one single
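The per-substream reduce described in this hunk can be mimicked on a strict collection, which mirrors what each substream does (a stand-alone analogy, not the snippet from the docs themselves):

```scala
val words = List("akka", "is", "fun", "akka", "fun", "akka")

// Mirror of the stream version: pair each word with a count of 1,
// group identical words (the "substreams"), then reduce each group
// to a single (word, occurrences) pair.
val counts: Map[String, Int] =
  words
    .map(w => (w, 1))
    .groupBy(_._1)
    .map { case (_, pairs) =>
      pairs.reduce((l, r) => (l._1, l._2 + r._2))
    }

// counts contains e.g. "akka" -> 3, "fun" -> 2, "is" -> 1
```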

@@ -484,7 +484,7 @@ or the downstreams. Even for stages that do not complete or fail in this manner,
@@@ div { .group-scala }
-## Extending Flow Combinators with Custom Operators
+## Extending Flow Operators with Custom Operators
The most general way of extending any `Source`, `Flow` or `SubFlow` (e.g. from `groupBy`) is
demonstrated above: create a graph of flow-shape like the `Duplicator` example given above and use the `.via(...)`
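The Scala-side extension pattern itself (an implicit class bolting a new operator onto an existing type) can be sketched without Akka by extending `Iterator`; `duplicateEach` is a made-up name echoing the `Duplicator` example, and with Akka Streams the body would delegate to `.via(...)` with a custom stage instead of `flatMap`:

```scala
// An implicit class is the standard Scala way to add new operators
// to a type you do not own.
implicit class ExtraIteratorOps[A](it: Iterator[A]) {
  def duplicateEach: Iterator[A] = it.flatMap(a => Iterator(a, a))
}

val doubled = Iterator(1, 2, 3).duplicateEach.toList
println(doubled) // prints List(1, 1, 2, 2, 3, 3)
```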

@@ -318,7 +318,7 @@ is needed in order to allow the stream to run at all, you will have to insert ex
Since every processing stage in Akka Streams can provide a materialized value after being materialized, it is necessary
to somehow express how these values should be composed to a final value when we plug these stages together. For this,
-many combinator methods have variants that take an additional argument, a function, that will be used to combine the
+many operator methods have variants that take an additional argument, a function, that will be used to combine the
resulting values. Some examples of using these combiners are illustrated in the example below.
Scala

@@ -107,7 +107,7 @@ Scala
Java
: @@snip [QuickStartDocTest.java]($code$/java/jdocs/stream/QuickStartDocTest.java) { #transform-source }
-First we use the `scan` combinator to run a computation over the whole
+First we use the `scan` operator to run a computation over the whole
stream: starting with the number 1 (@scala[`BigInt(1)`]@java[`BigInteger.ONE`]) we multiple by each of
the incoming numbers, one after the other; the scan operation emits the initial
value and then every calculation result. This yields the series of factorial
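The stream `scan` described here behaves like `scanLeft` on a strict Scala collection, emitting the seed and then every intermediate result; a stand-alone sketch of the factorial computation (collections stand in for the stream):

```scala
// Same shape as the stream version: seed BigInt(1), multiply by each
// incoming number; the seed and every intermediate product appear
// in the output.
val factorials = (1 to 5).scanLeft(BigInt(1))(_ * _)

println(factorials.toList) // prints List(1, 1, 2, 6, 24, 120)
```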
@@ -185,7 +185,7 @@ Java
All operations so far have been time-independent and could have been performed
in the same fashion on strict collections of elements. The next line
demonstrates that we are in fact dealing with streams that can flow at a
-certain speed: we use the `throttle` combinator to slow down the stream to 1
+certain speed: we use the `throttle` operator to slow down the stream to 1
element per second.
If you run this program you will see one line printed per second. One aspect
@@ -195,14 +195,14 @@ JVM does not crash with an OutOfMemoryError, even though you will also notice
that running the streams happens in the background, asynchronously (this is the
reason for the auxiliary information to be provided as a @scala[`Future`]@java[`CompletionStage`], in the future). The
secret that makes this work is that Akka Streams implicitly implement pervasive
-flow control, all combinators respect back-pressure. This allows the throttle
+flow control, all operators respect back-pressure. This allows the throttle
combinator to signal to all its upstream sources of data that it can only
accept elements at a certain rate—when the incoming rate is higher than one per
-second the throttle combinator will assert *back-pressure* upstream.
+second the throttle operator will assert *back-pressure* upstream.
This is basically all there is to Akka Streams in a nutshell—glossing over the
fact that there are dozens of sources and sinks and many more stream
-transformation combinators to choose from, see also @ref:[operator index](operators/index.md).
+transformation operators to choose from, see also @ref:[operator index](operators/index.md).
# Reactive Tweets

@@ -29,7 +29,7 @@ Java
The same strategy can be applied for sources as well. In the next example we
have a source that produces an infinite stream of elements. Such source can be
tested by asserting that first arbitrary number of elements hold some
-condition. Here the `take` combinator and `Sink.seq` are very useful.
+condition. Here the `take` operator and `Sink.seq` are very useful.
Scala
: @@snip [StreamTestKitDocSpec.scala]($code$/scala/docs/stream/StreamTestKitDocSpec.scala) { #grouped-infinite }
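The testing idea in this hunk, asserting on a finite prefix of an infinite source, can be sketched with a plain `Iterator` standing in for the `Source`/`Sink.seq` pair (an analogy, not the referenced test code):

```scala
// An infinite "source" of elements.
val infinite = Iterator.from(1)

// `take` turns it into a finite prefix; this is the analogue of
// running the stream through `take` into `Sink.seq` and asserting
// on the materialized sequence.
val firstTen = infinite.take(10).toList

assert(firstTen == (1 to 10).toList)
```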

@@ -278,7 +278,7 @@ A side-effect of this is that behaviors can now be tested in isolation without
having to be packaged into an Actor, tests can run fully synchronously without
having to worry about timeouts and spurious failures. Another side-effect is
that behaviors can nicely be composed and decorated, for example `Behaviors.tap`
-is not special or using something internal. New combinators can be written as
+is not special or using something internal. New operators can be written as
external libraries or tailor-made for each project.
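The composability claim in this hunk can be sketched with plain functions, modeling a behavior as a message handler; this is an analogy of the decoration shape, not the real `akka.actor.typed` API:

```scala
// Model a behavior as a plain message handler.
type Handler[A] = A => Unit

// A tap-like decorator: run a side effect, then delegate to the
// inner handler -- the same shape as decorators like Behaviors.tap,
// written entirely outside the decorated code.
def tap[A](onMessage: A => Unit)(inner: Handler[A]): Handler[A] =
  msg => { onMessage(msg); inner(msg) }

val seen = scala.collection.mutable.ListBuffer.empty[String]
val base: Handler[String] = msg => seen += s"handled:$msg"
val decorated = tap[String](msg => seen += s"tapped:$msg")(base)

decorated("ping")
println(seen.toList) // prints List(tapped:ping, handled:ping)
```

Because the decorator is an ordinary function, it can be unit-tested synchronously, which is exactly the side effect the passage describes.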
## A Little Bit of Theory