!str #19005 make groupBy et al return a SubFlow

A SubFlow (or SubSource) is not a Graph, it is an unfinished builder
that accepts transformations. This allows us to capture the substreams’
transformations before materializing the flow, which will be very
helpful in fully fusing all operators.

Another change is that groupBy now requires a maxSubstreams parameter in
order to bound its resource usage. In exchange the matching merge can be
unbounded. This trades silent deadlock for explicit stream failure.

This commit also changes all uses of Predef.identity to use `conforms`
and removes the HTTP impl.util.identityFunc.
This commit is contained in:
Roland Kuhn 2015-11-25 19:58:48 +01:00
parent 654fa41443
commit 1500d1f36d
56 changed files with 3484 additions and 720 deletions


@@ -322,6 +322,62 @@ Update procedure
*There is no simple update procedure. The affected stages must be ported to the new ``GraphStage`` DSL manually. Please
read the* ``GraphStage`` *documentation (TODO) for details.*
GroupBy, SplitWhen and SplitAfter now return SubFlow or SubSource
=================================================================
Previously the ``groupBy``, ``splitWhen``, and ``splitAfter`` combinators
returned a type that included a :class:`Source` within its elements.
Transforming these substreams was only possible by nesting the respective
combinators inside a ``map`` of the outer stream. This has been made both more
convenient and safer by transforming the substreams directly instead: the
return type is now a :class:`SubSource` (for sources) or a :class:`SubFlow`
(for flows) that does not implement the :class:`Graph` interface and therefore
only represents an unfinished intermediate builder step.
Update Procedure
----------------
The transformations that were done on the substreams need to be lifted up one
level. This only works for cases where the processing topology is homogeneous
for all substreams.
Example
^^^^^^^
::
  Flow.<Integer> create()
    // This no longer works!
    .groupBy(i -> i % 2)
    // This no longer works!
    .map(pair -> pair.second().map(i -> i + 3))
    // This no longer works!
    .flatten(FlattenStrategy.concat())
This is now implemented as
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/MigrationsJava.java#group-flatten
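The new shape (group by key, transform every substream, then merge the substreams back into one stream) can be sketched with plain ``java.util.stream`` collectors. This is a stand-in for illustration only, not the Akka API; the helper name is made up:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupMapMerge {
  // Analogue of groupBy(i -> i % 2).map(i -> i + 3).mergeSubstreams():
  // partition the input by key, transform every element of each group,
  // then flatten the groups back into a single output list.
  static List<Integer> groupMapMerge(List<Integer> in) {
    Map<Integer, List<Integer>> groups =
        in.stream().collect(Collectors.groupingBy(i -> i % 2));
    return groups.values().stream()
        .flatMap(g -> g.stream().map(i -> i + 3))
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    // elements 1..4 split into even/odd groups, each shifted by 3
    System.out.println(groupMapMerge(List.of(1, 2, 3, 4)));
  }
}
```

Unlike the eager collector above, the Akka version stays fully streaming; the sketch only mirrors the group/transform/merge structure of the new API.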
Example 2
^^^^^^^^^
::
  Flow.<String> create()
    // This no longer works!
    .groupBy(i -> i)
    // This no longer works!
    .map(pair ->
      pair.second().runFold(new Pair<>(pair.first(), 0),
        (acc, word) -> new Pair<>(word, acc.second() + 1)))
    // This no longer works!
    .mapAsyncUnordered(4, i -> i)
This is now implemented as
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/MigrationsJava.java#group-fold
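The fold-per-substream pattern of this example (group identical words, fold each group to a count, merge the results) can be sketched with plain ``java.util.stream``; this is an analogy for the concept, not the Akka API:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class WordCount {
  // Group identical words (the groupBy step) and fold each group down
  // to its size (the per-substream fold), merging the per-group results
  // into a single map (the merge step).
  static Map<String, Long> count(List<String> words) {
    return words.stream()
        .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
  }

  public static void main(String[] args) {
    System.out.println(count(List.of("hello", "world", "hello")));
  }
}
```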
Semantic change in ``isHoldingUpstream`` in the DetachedStage DSL
=================================================================
@@ -437,7 +493,7 @@ should be replaced by:
.. includecode:: code/docs/MigrationsJava.java#query-param
SynchronousFileSource and SynchronousFileSink
=============================================
Both have been replaced by ``Source.file(…)`` and ``Sink.file(…)`` due to discoverability issues
paired with names which leaked internal implementation details.
@@ -553,4 +609,4 @@ Example
should be replaced by
.. includecode:: code/docs/MigrationsJava.java#output-input-stream-source-sink


@@ -107,23 +107,28 @@ Implementing reduce-by-key
elements.
The "hello world" of reduce-by-key style operations is *wordcount* which we demonstrate below. Given a stream of words
we first create a new stream ``wordStreams`` that groups the words according to the ``i -> i`` function, i.e. now
we first create a new stream that groups the words according to the ``i -> i`` function, i.e. now
we have a stream of streams, where every substream will serve identical words.
To count the words, we need to process the stream of streams (the actual groups containing identical words). By mapping
over the groups and using ``fold`` (remember that ``fold`` automatically materializes and runs the stream it is used
on) we get a stream with elements of ``Future[String,Int]``. Now all we need is to flatten this stream, which
can be achieved by calling ``mapAsync`` with ``i -> i`` identity function.
To count the words, we need to process the stream of streams (the actual groups
containing identical words). ``groupBy`` returns a :class:`SubSource`, which
means that we transform the resulting substreams directly. In this case we use
the ``fold`` combinator to aggregate the word itself and the number of its
occurrences within a :class:`Pair<String, Integer>`. Each substream will then
emit one final value—precisely such a pair—when the overall input completes. As
a last step we merge back these values from the substreams into one single
output stream.
There is one tricky issue to be noted here. The careful reader probably noticed that we put a ``buffer`` between the
``mapAsync()`` operation that flattens the stream of futures and the actual stream of futures. The reason for this is
that the substreams produced by ``groupBy()`` can only complete when the original upstream source completes. This means
that ``mapAsync()`` cannot pull for more substreams because it still waits on folding futures to finish, but these
futures never finish if the additional group streams are not consumed. This typical deadlock situation is resolved by
this buffer, which is either able to contain all the group streams (which ensures that they are already running and folding)
or fails with an explicit failure instead of a silent deadlock.
One noteworthy detail pertains to the ``MAXIMUM_DISTINCT_WORDS`` parameter:
this defines the breadth of the merge operation. Akka Streams is focused on
bounded resource consumption and the number of concurrently open inputs to the
merge operator describes the amount of resources needed by the merge itself.
Therefore only a finite number of substreams can be active at any given time.
If the ``groupBy`` operator encounters more keys than this number then the
stream cannot continue without violating its resource bound; in this case
``groupBy`` will terminate with a failure.
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/cookbook/RecipeReduceByKey.java#word-count
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/cookbook/RecipeReduceByKeyTest.java#word-count
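The bounded-substreams behavior described above can be sketched in plain Java, independent of Akka: counting proceeds normally as long as the number of distinct keys stays within the bound, and one key too many fails explicitly instead of silently deadlocking. The helper name and exception choice are illustrative only:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BoundedGroupBy {
  // Analogue of groupBy(maxSubstreams, keyFn): each distinct key opens a
  // "substream" (a map entry); exceeding the bound fails the whole run.
  static Map<String, Integer> countBounded(List<String> words, int maxSubstreams) {
    Map<String, Integer> counts = new HashMap<>();
    for (String w : words) {
      if (!counts.containsKey(w) && counts.size() == maxSubstreams)
        throw new IllegalStateException("too many substreams, limit " + maxSubstreams);
      counts.merge(w, 1, Integer::sum);
    }
    return counts;
  }

  public static void main(String[] args) {
    // two distinct keys, bound of two: fine
    System.out.println(countBounded(List.of("a", "b", "a"), 2));
  }
}
```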
By extracting the parts specific to *wordcount* into
@@ -133,13 +138,14 @@ By extracting the parts specific to *wordcount* into
we get a generalized version below:
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/cookbook/RecipeReduceByKey.java#reduce-by-key-general
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/cookbook/RecipeReduceByKeyTest.java#reduce-by-key-general
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/cookbook/RecipeReduceByKey.java#reduce-by-key-general2
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/cookbook/RecipeReduceByKeyTest.java#reduce-by-key-general2
.. note::
Please note that the reduce-by-key version we discussed above is sequential, in other words it is **NOT** a
parallelization pattern like mapReduce and similar frameworks.
Please note that the reduce-by-key version we discussed above is sequential
in reading the overall input stream; in other words, it is **NOT** a
parallelization pattern like MapReduce and similar frameworks.
Sorting elements to multiple groups with groupBy
------------------------------------------------
@@ -150,12 +156,12 @@ Sometimes we want to map elements into multiple groups simultaneously.
To achieve the desired result, we attack the problem in two steps:
* first, using a function ``topicMapper`` that gives a list of topics (groups) a message belongs to, we transform our
stream of ``Message`` to a stream of ``(Message, Topic)`` where for each topic the message belongs to a separate pair
stream of ``Message`` to a stream of :class:`Pair<Message, Topic>` where for each topic the message belongs to a separate pair
will be emitted. This is achieved by using ``mapConcat``
* Then we take this new stream of message topic pairs (containing a separate pair for each topic a given message
belongs to) and feed it into groupBy, using the topic as the group key.
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/cookbook/RecipeMultiGroupBy.java#multi-groupby
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/cookbook/RecipeMultiGroupByTest.java#multi-groupby
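The two steps above (``mapConcat`` into ``(Message, Topic)`` pairs, then ``groupBy`` on the topic) can be sketched with plain ``java.util.stream``; this is a conceptual stand-in for the Akka recipe, with an illustrative helper name:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class MultiGroupBy {
  // Step 1 (mapConcat analogue): emit one (message, topic) entry per
  // topic the message belongs to, via topicMapper.
  // Step 2 (groupBy analogue): group the entries by topic, keeping the
  // messages of each group.
  static Map<String, List<String>> byTopic(
      List<String> messages, Function<String, List<String>> topicMapper) {
    return messages.stream()
        .flatMap(m -> topicMapper.apply(m).stream().map(t -> Map.entry(m, t)))
        .collect(Collectors.groupingBy(Map.Entry::getValue,
            Collectors.mapping(Map.Entry::getKey, Collectors.toList())));
  }

  public static void main(String[] args) {
    // a toy topicMapper: every word of a message is a topic
    System.out.println(byTopic(
        List.of("red apple", "red sky"),
        m -> List.of(m.split(" "))));
  }
}
```

Note how a message belonging to two topics ends up in two groups, exactly the effect the recipe achieves with ``mapConcat`` before ``groupBy``.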
Working with Graphs
===================


@@ -231,8 +231,8 @@ resulting values. Some examples of using these combiners are illustrated in the
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/FlowDocTest.java#flow-mat-combine
.. note::
In Graphs it is possible to access the materialized value from inside the stream processing graph. For details see
:ref:`graph-matvalue-java`
In Graphs it is possible to access the materialized value from inside the stream processing graph. For details see :ref:`graph-matvalue-java`.
Stream ordering
===============


@@ -4,60 +4,89 @@
Testing streams
###############
Verifying behaviour of Akka Stream sources, flows and sinks can be done using various code patterns and libraries. Here we will discuss testing these elements using:
Verifying behaviour of Akka Stream sources, flows and sinks can be done using
various code patterns and libraries. Here we will discuss testing these
elements using:
- simple sources, sinks and flows;
- sources and sinks in combination with :class:`TestProbe` from the :mod:`akka-testkit` module;
- sources and sinks specifically crafted for writing tests from the :mod:`akka-stream-testkit` module.
It is important to keep your data processing pipeline as separate sources, flows and sinks. This makes them easily testable by wiring them up to other sources or sinks, or some test harnesses that :mod:`akka-testkit` or :mod:`akka-stream-testkit` provide.
It is important to keep your data processing pipeline as separate sources,
flows and sinks. This makes them easily testable by wiring them up to other
sources or sinks, or some test harnesses that :mod:`akka-testkit` or
:mod:`akka-stream-testkit` provide.
Built in sources, sinks and combinators
=======================================
Testing a custom sink can be as simple as attaching a source that emits elements from a predefined collection, running a constructed test flow and asserting on the results that sink produced. Here is an example of a test for a sink:
Testing a custom sink can be as simple as attaching a source that emits
elements from a predefined collection, running a constructed test flow and
asserting on the results that sink produced. Here is an example of a test for a
sink:
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/StreamTestKitDocTest.java#strict-collection
The same strategy can be applied for sources as well. In the next example we have a source that produces an infinite stream of elements. Such source can be tested by asserting that first arbitrary number of elements hold some condition. Here the :code:`grouped` combinator and :code:`Sink.head` are very useful.
The same strategy can be applied for sources as well. In the next example we
have a source that produces an infinite stream of elements. Such a source can
be tested by asserting that some condition holds for an arbitrary number of its
initial elements. Here the ``grouped`` combinator and ``Sink.head`` are very useful.
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/StreamTestKitDocTest.java#grouped-infinite
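The underlying idea (asserting a property on a finite prefix of an infinite stream) can be sketched without Akka using ``java.util.stream``; the class and method names here are illustrative:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class InfinitePrefixTest {
  // Analogue of grouped(n) + Sink.head on an infinite source: take only
  // the first n elements of an unbounded stream so assertions terminate.
  static List<Integer> firstGroup(int n) {
    return Stream.iterate(0, i -> i + 2)   // infinite stream of even numbers
        .limit(n)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    List<Integer> group = firstGroup(10);
    // every element of the first group must satisfy the condition
    if (!group.stream().allMatch(i -> i % 2 == 0)) throw new AssertionError();
    System.out.println(group);
  }
}
```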
When testing a flow we need to attach a source and a sink. As both stream ends are under our control, we can choose sources that tests various edge cases of the flow and sinks that ease assertions.
When testing a flow we need to attach a source and a sink. As both stream ends
are under our control, we can choose sources that test various edge cases of
the flow and sinks that ease assertions.
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/StreamTestKitDocTest.java#folded-stream
TestKit
=======
Akka Stream offers integration with Actors out of the box. This support can be used for writing stream tests that use familiar :class:`TestProbe` from the :mod:`akka-testkit` API.
Akka Stream offers integration with Actors out of the box. This support can be
used for writing stream tests that use familiar :class:`TestProbe` from the
:mod:`akka-testkit` API.
One of the more straightforward tests would be to materialize stream to a :class:`Future` and then use :code:`pipe` pattern to pipe the result of that future to the probe.
One of the more straightforward tests would be to materialize the stream to a
:class:`Future` and then use the ``pipe`` pattern to pipe the result of that
future to the probe.
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/StreamTestKitDocTest.java#pipeto-testprobe
Instead of materializing to a future, we can use a :class:`Sink.actorRef` that sends all incoming elements to the given :class:`ActorRef`. Now we can use assertion methods on :class:`TestProbe` and expect elements one by one as they arrive. We can also assert stream completion by expecting for :code:`onCompleteMessage` which was given to :class:`Sink.actorRef`.
Instead of materializing to a future, we can use :class:`Sink.actorRef`, which
sends all incoming elements to the given :class:`ActorRef`. Now we can use
assertion methods on :class:`TestProbe` and expect elements one by one as they
arrive. We can also assert stream completion by expecting the
``onCompleteMessage`` which was given to :class:`Sink.actorRef`.
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/StreamTestKitDocTest.java#sink-actorref
Similarly to :class:`Sink.actorRef` that provides control over received elements, we can use :class:`Source.actorRef` and have full control over elements to be sent.
Just as :class:`Sink.actorRef` provides control over received elements, we can
use :class:`Source.actorRef` to have full control over the elements to be
sent.
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/StreamTestKitDocTest.java#source-actorref
Streams TestKit
===============
You may have noticed various code patterns that emerge when testing stream pipelines. Akka Stream has a separate :mod:`akka-stream-testkit` module that provides tools specifically for writing stream tests. This module comes with two main components that are :class:`TestSource` and :class:`TestSink` which provide sources and sinks that materialize to probes that allow fluent API.
You may have noticed various code patterns that emerge when testing stream
pipelines. Akka Stream has a separate :mod:`akka-stream-testkit` module that
provides tools specifically for writing stream tests. This module comes with
two main components, :class:`TestSource` and :class:`TestSink`, which provide
sources and sinks that materialize to probes exposing a fluent assertion API.
.. note::
Be sure to add the module :mod:`akka-stream-testkit` to your dependencies.
A sink returned by :code:`TestSink.probe` allows manual control over demand and assertions over elements coming downstream.
A sink returned by ``TestSink.probe`` allows manual control over demand and
assertions over elements coming downstream.
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/StreamTestKitDocTest.java#test-sink-probe
A source returned by :code:`TestSource.probe` can be used for asserting demand or controlling when stream is completed or ended with an error.
A source returned by ``TestSource.probe`` can be used for asserting demand or
controlling when the stream is completed or ended with an error.
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/StreamTestKitDocTest.java#test-source-probe
@@ -68,3 +97,4 @@ You can also inject exceptions and test sink behaviour on error conditions.
Test source and sink can be used together in combination when testing flows.
.. includecode:: ../../../akka-samples/akka-docs-java-lambda/src/test/java/docs/stream/StreamTestKitDocTest.java#test-source-and-sink