Move stream documentation to their own files

And generate the index
This commit is contained in:
Konrad Malawski 2018-04-18 11:44:37 +02:00 committed by Arnout Engelen
parent 7b29b08d46
commit 054b70c41b
183 changed files with 2025 additions and 2864 deletions

View file

@ -0,0 +1,21 @@
Sources and sinks for integrating with `java.io.InputStream` and `java.io.OutputStream` can be found on
`StreamConverters`. As these are blocking APIs, the implementations of these stages are run on a separate
dispatcher, configured through the `akka.stream.blocking-io-dispatcher` setting.
@@@ warning
Be aware that `asInputStream` and `asOutputStream` materialize `InputStream` and `OutputStream` respectively as
blocking API implementations. They will block the thread until data becomes available from upstream.
Because of this blocking nature these objects cannot be used in a `mapMaterializedValue` section, as that causes a deadlock
of the stream materialization process.
For example, the following snippet will fail with a timeout exception:
```scala
...
.toMat(StreamConverters.asInputStream().mapMaterializedValue { inputStream ⇒
inputStream.read() // this could block forever
...
}).run()
```
@@@
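By contrast, a minimal sketch of safe usage (the actor system name and payload are illustrative, not taken from the linked sources): materialize the `InputStream` first, and only read from it after `runWith` has returned, outside of `mapMaterializedValue`:

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Source, StreamConverters}
import akka.util.ByteString

implicit val system = ActorSystem("demo")
implicit val materializer = ActorMaterializer()

// materialize the InputStream first ...
val inputStream: java.io.InputStream =
  Source.single(ByteString("hello"))
    .runWith(StreamConverters.asInputStream())

// ... and only then read from it, after materialization has completed
val firstByte = inputStream.read()
inputStream.close()
```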

View file

@ -0,0 +1,2 @@
These stages encapsulate an asynchronous computation, properly handling backpressure while the asynchronous
operation is in progress (usually handling the completion of a @scala[`Future`] @java[`CompletionStage`]).

View file

@ -0,0 +1 @@
These stages are aware of the backpressure provided by their downstreams and are able to adapt their behavior to that signal.

View file

@ -0,0 +1,2 @@
These stages take multiple streams as their input and provide a single output combining the elements from all of
the inputs in different ways.

View file

@ -0,0 +1,2 @@
These have one input and multiple outputs. They might route the elements between different outputs, or emit elements on
multiple outputs at the same time.

View file

@ -0,0 +1 @@
Sources and sinks for reading and writing files can be found on `FileIO`.

View file

@ -0,0 +1,4 @@
These stages either take a stream and turn it into a stream of streams (nesting) or they take a stream that contains
nested streams and turn them into a stream of elements instead (flattening).
See the [Substreams](stream-substream.md) page for more detail and code samples.

View file

@ -0,0 +1,5 @@
These stages can transform the rate of incoming elements since there are stages that emit multiple elements for a
single input (e.g. `mapConcat`) or consume multiple elements before emitting one output (e.g. `filter`).
However, these rate transformations are data-driven, i.e. it is the incoming elements that define how the
rate is affected. This is in contrast with [detached stages](#backpressure-aware-stages) which can change their processing behavior
depending on being backpressured by downstream or not.
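As a small, hypothetical illustration of such data-driven rate changes (not tied to any specific operator page below):

```scala
import akka.stream.scaladsl.Source

// one input element becomes three output elements (data-driven rate increase)
val tripled = Source(1 to 3).mapConcat(i => List(i, i, i)) // 1,1,1,2,2,2,3,3,3

// several input elements may be consumed before one is emitted (rate decrease)
val multiplesOfThree = Source(1 to 9).filter(_ % 3 == 0)   // 3, 6, 9
```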

View file

@ -0,0 +1 @@
These built-in sinks are available from @scala[`akka.stream.scaladsl.Sink`] @java[`akka.stream.javadsl.Sink`]:

View file

@ -0,0 +1 @@
These built-in sources are available from @scala[`akka.stream.scaladsl.Source`] @java[`akka.stream.javadsl.Source`]:

View file

@ -0,0 +1 @@
These stages take the passing of time into consideration when operating.

View file

@ -0,0 +1 @@
These stages process elements using timers, delaying, dropping or grouping elements for certain time durations.

View file

@ -1,2 +1,3 @@
RedirectMatch 301 ^(.*)/scala/(.*) $1/$2?language=scala
RedirectMatch 301 ^(.*)/java/(.*) $1/$2?language=java
RedirectMatch 301 ^(.*)/stream/stages-overview\.html.* $1/stream/operators/index.html

View file

@ -19,9 +19,9 @@
* [stream-refs](stream-refs.md)
* [stream-parallelism](stream-parallelism.md)
* [stream-testkit](stream-testkit.md)
* [stages-overview](stages-overview.md)
* [stream-substream](stream-substream.md)
* [stream-cookbook](stream-cookbook.md)
* [../general/stream/stream-configuration](../general/stream/stream-configuration.md)
* [operators](operators/index.md)
@@@

View file

@ -0,0 +1,18 @@
# FileIO.fromPath
Emit the contents of a file.
@ref[File IO Sinks and Sources](../index.md#file-io-sinks-and-sources)
@@@div { .group-scala }
## Signature
@@signature [FileIO.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/FileIO.scala) { #fromPath }
@@@
## Description
Emit the contents of a file as `ByteString`s, materializing into a @scala[`Future`] @java[`CompletionStage`] which will be completed with
an `IOResult` upon reaching the end of the file or if there is a failure.
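A hypothetical usage sketch (the file path and actor system name are illustrative):

```scala
import java.nio.file.Paths
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.FileIO

implicit val system = ActorSystem("demo")
implicit val materializer = ActorMaterializer()

// stream the file in ByteString chunks and print them as UTF-8 text;
// the Future[IOResult] is the source's materialized value
FileIO.fromPath(Paths.get("example.txt"))
  .map(_.utf8String)
  .runForeach(println)
```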

View file

@ -0,0 +1,14 @@
# FileIO.toPath
Create a sink which will write incoming `ByteString`s to a given file path.
@ref[File IO Sinks and Sources](../index.md#file-io-sinks-and-sources)
@@@div { .group-scala }
## Signature
@@signature [FileIO.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/FileIO.scala) { #toPath }
@@@
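A hypothetical usage sketch (the file path and contents are illustrative):

```scala
import java.nio.file.Paths
import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, IOResult}
import akka.stream.scaladsl.{FileIO, Source}
import akka.util.ByteString
import scala.concurrent.Future

implicit val system = ActorSystem("demo")
implicit val materializer = ActorMaterializer()

// write three lines to the file; materializes into a Future[IOResult]
val result: Future[IOResult] =
  Source(List("a", "b", "c"))
    .map(line => ByteString(line + "\n"))
    .runWith(FileIO.toPath(Paths.get("out.txt")))
```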

View file

@ -0,0 +1,22 @@
# Flow.fromSinkAndSource
Creates a `Flow` from a `Sink` and a `Source` where the `Flow`'s input will be sent to the `Sink` and the `Flow`'s output will come from the `Source`.
@ref[Flow stages composed of Sinks and Sources](../index.md#flow-stages-composed-of-sinks-and-sources)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #fromSinkAndSource }
@@@
## Description
Creates a `Flow` from a `Sink` and a `Source` where the `Flow`'s input will be sent to the `Sink`
and the `Flow`'s output will come from the `Source`.
Note that termination events, like completion and cancellation, are not automatically propagated through to the "other side"
of such a composed `Flow`. Use `Flow.fromSinkAndSourceCoupled` if you want to couple termination of both ends,
which is for example useful when handling WebSocket connections.
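A minimal sketch of such a composed `Flow` (element values are illustrative):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source}

implicit val system = ActorSystem("demo")
implicit val materializer = ActorMaterializer()

// incoming elements go to the Sink, outgoing elements come from the Source
val flow: Flow[String, Int, akka.NotUsed] =
  Flow.fromSinkAndSource(Sink.foreach[String](println), Source(1 to 3))

// "hello" is printed by the Sink side, 1, 2, 3 are emitted on the Source side
Source.single("hello").via(flow).runForeach(println)
```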

View file

@ -0,0 +1,33 @@
# Flow.fromSinkAndSourceCoupled
Allows coupling termination (cancellation, completion, erroring) of Sinks and Sources while creating a Flow between them.
@ref[Flow stages composed of Sinks and Sources](../index.md#flow-stages-composed-of-sinks-and-sources)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #fromSinkAndSourceCoupled }
@@@
## Description
Allows coupling termination (cancellation, completion, erroring) of Sinks and Sources while creating a Flow between them.
Similar to `Flow.fromSinkAndSource`, however it couples the termination of these two stages.
E.g. if the emitted `Flow` gets a cancellation, the `Source` is of course cancelled,
however the `Sink` will also be completed. The table below illustrates the effects in detail:
| Returned Flow | Sink (in) | Source (out) |
|-------------------------------------------------|-----------------------------|---------------------------------|
| cause: upstream (sink-side) receives completion | effect: receives completion | effect: receives cancel |
| cause: upstream (sink-side) receives error | effect: receives error | effect: receives cancel |
| cause: downstream (source-side) receives cancel | effect: completes | effect: receives cancel |
| effect: cancels upstream, completes downstream | effect: completes | cause: signals complete |
| effect: cancels upstream, errors downstream | effect: receives error | cause: signals error or throws |
| effect: cancels upstream, completes downstream | cause: cancels | effect: receives cancel |
The order in which the *in* and *out* sides receive their respective completion signals is not defined; do not rely on its ordering.
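A minimal sketch of the coupling described above (element values are illustrative):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Flow, Sink, Source}

implicit val system = ActorSystem("demo")
implicit val materializer = ActorMaterializer()

val coupled = Flow.fromSinkAndSourceCoupled(Sink.ignore, Source(1 to 3))

// prints 1, 2, 3; when the inner Source completes, the coupled flow cancels the
// infinite Source.repeat upstream and completes Sink.ignore, so the stream shuts down
Source.repeat("tick").via(coupled).runForeach(println)
```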

View file

@ -1,9 +1,17 @@
# lazyInitAsync
# Flow.lazyInitAsync
Creates a real `Flow` upon receiving the first element by calling the `flowFactory` given as an argument.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #lazyInitAsync }
@@@
## Description
Creates a real `Flow` upon receiving the first element by calling the `flowFactory` given as an argument.
@ -30,5 +38,3 @@ Adheres to the `ActorAttributes.SupervisionStrategy` attribute.
@@@
## Example

View file

@ -2,6 +2,8 @@
Send the elements from the stream to an `ActorRef`.
@ref[Sink stages](../index.md#sink-stages)
@@@ div { .group-scala }
## Signature
@ -21,5 +23,4 @@ Send the elements from the stream to an `ActorRef`. No backpressure so care must
@@@
## Example

View file

@ -1,7 +1,8 @@
# actorRefWithAck
Send the elements from the stream to an `ActorRef` which must then acknowledge reception after completing a message,
to provide back pressure onto the sink.
Send the elements from the stream to an `ActorRef` which must then acknowledge reception after completing a message, to provide back pressure onto the sink.
@ref[Sink stages](../index.md#sink-stages)
## Description
@ -17,5 +18,3 @@ to provide back pressure onto the sink.
@@@
## Example

View file

@ -2,19 +2,13 @@
Integration with Reactive Streams, materializes into an `org.reactivestreams.Publisher`.
@ref[Sink stages](../index.md#sink-stages)
@@@ div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #asPublisher }
@@@
## Description
@@@div { .callout }
@@@
## Example

View file

@ -0,0 +1,26 @@
# cancelled
Immediately cancel the stream.
@ref[Sink stages](../index.md#sink-stages)
@@@div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #cancelled }
@@@
## Description
Immediately cancel the stream.
@@@div { .callout }
**cancels** immediately
@@@
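A minimal sketch (illustrative only): attaching `Sink.cancelled` to an otherwise infinite source shuts the stream down immediately.

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}

implicit val system = ActorSystem("demo")
implicit val materializer = ActorMaterializer()

// the sink cancels right away, so the infinite source never gets any demand
// and nothing is printed
Source.repeat("tick").map { t => println(t); t }.runWith(Sink.cancelled[String])
```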

View file

@ -0,0 +1,27 @@
# combine
Combine several sinks into one using a user-specified strategy.
@ref[Sink stages](../index.md#sink-stages)
@@@div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #combine }
@@@
## Description
Combine several sinks into one using a user-specified strategy.
@@@div { .callout }
**cancels** depends on the strategy
**backpressures** depends on the strategy
@@@
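A minimal sketch using `Broadcast` as the combining strategy (the sink names are illustrative):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Broadcast, Sink, Source}

implicit val system = ActorSystem("demo")
implicit val materializer = ActorMaterializer()

val sink1 = Sink.foreach[Int](i => println(s"sink 1 got $i"))
val sink2 = Sink.foreach[Int](i => println(s"sink 2 got $i"))

// Broadcast is the user-specified strategy: every element goes to both sinks
val combined = Sink.combine(sink1, sink2)(Broadcast[Int](_))

Source(1 to 3).runWith(combined)
```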

View file

@ -0,0 +1,32 @@
# fold
Fold over emitted elements with a function, where each invocation will get the new element and the result from the previous fold invocation.
@ref[Sink stages](../index.md#sink-stages)
@@@div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #fold }
@@@
## Description
Fold over emitted elements with a function, where each invocation will get the new element and the result from the
previous fold invocation. The first invocation will be provided the `zero` value.
Materializes into a @scala[`Future`] @java[`CompletionStage`] that will complete with the last state when the stream has completed.
This stage allows combining values into a result without a global mutable state by instead passing the state along
between invocations.
@@@div { .callout }
**cancels** never
**backpressures** when the previous fold function invocation has not yet completed
@@@
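A minimal sketch summing a stream of numbers (values are illustrative):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Future

implicit val system = ActorSystem("demo")
implicit val materializer = ActorMaterializer()

// starts from the zero value 0 and adds each element; completes with 15
val sum: Future[Int] = Source(1 to 5).runWith(Sink.fold[Int, Int](0)(_ + _))
```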

View file

@ -2,8 +2,16 @@
Invoke a given procedure for each element received.
@ref[Sink stages](../index.md#sink-stages)
@@@div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #forEach }
@@@
## Description
Invoke a given procedure for each element received. Note that it is not safe to mutate shared state from the procedure.
@ -22,5 +30,4 @@ Note that it is not safe to mutate state from the procedure.
@@@
## Example

View file

@ -2,8 +2,16 @@
Like `foreach` but allows up to `parallelism` procedure calls to happen in parallel.
@ref[Sink stages](../index.md#sink-stages)
@@@div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #foreachParallel }
@@@
## Description
Like `foreach` but allows up to `parallelism` procedure calls to happen in parallel.
@ -17,5 +25,3 @@ Like `foreach` but allows up to `parallellism` procedure calls to happen in para
@@@
## Example

View file

@ -0,0 +1,14 @@
# fromSubscriber
Integration with Reactive Streams, wraps a `org.reactivestreams.Subscriber` as a sink.
@ref[Sink stages](../index.md#sink-stages)
@@@div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #fromSubscriber }
@@@
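A minimal sketch (the `someSubscriber` parameter is a placeholder for a Subscriber obtained from any Reactive Streams compatible library, not part of Akka itself):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}
import org.reactivestreams.Subscriber

implicit val system = ActorSystem("demo")
implicit val materializer = ActorMaterializer()

// wrap the third-party Subscriber as a Sink and run the stream into it
def connect(someSubscriber: Subscriber[Int]): akka.NotUsed =
  Source(1 to 10).runWith(Sink.fromSubscriber(someSubscriber))
```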

View file

@ -1,10 +1,17 @@
# head
Materializes into a @scala[`Future`] @java[`CompletionStage`] which completes with the first value arriving,
after this the stream is canceled.
Materializes into a @scala[`Future`] @java[`CompletionStage`] which completes with the first value arriving, after this the stream is canceled.
@ref[Sink stages](../index.md#sink-stages)
@@@div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #head }
@@@
## Description
Materializes into a @scala[`Future`] @java[`CompletionStage`] which completes with the first value arriving,
@ -19,5 +26,3 @@ after this the stream is canceled. If no element is emitted, the @scala[`Future`
@@@
## Example

View file

@ -1,10 +1,17 @@
# headOption
# Sink.headOption
Materializes into a @scala[`Future[Option[T]]`] @java[`CompletionStage<Optional<T>>`] which completes with the first value arriving wrapped in @scala[`Some`] @java[`Optional`],
or @scala[a `None`] @java[an empty Optional] if the stream completes without any elements emitted.
Materializes into a @scala[`Future[Option[T]]`] @java[`CompletionStage<Optional<T>>`] which completes with the first value arriving wrapped in @scala[`Some`] @java[`Optional`], or @scala[a `None`] @java[an empty Optional] if the stream completes without any elements emitted.
@ref[Sink stages](../index.md#sink-stages)
@@@div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #headOption }
@@@
## Description
Materializes into a @scala[`Future[Option[T]]`] @java[`CompletionStage<Optional<T>>`] which completes with the first value arriving wrapped in @scala[`Some`] @java[`Optional`],
@ -19,5 +26,4 @@ or @scala[a `None`] @java[an empty Optional] if the stream completes without any
@@@
## Example

View file

@ -1,9 +1,17 @@
# ignore
# Sink.ignore
Consume all elements but discard them.
@ref[Sink stages](../index.md#sink-stages)
@@@div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #ignore }
@@@
## Description
Consume all elements but discard them. Useful when a stream has to be consumed but there is no need to actually
@ -18,5 +26,4 @@ do anything with the elements.
@@@
## Example

View file

@ -1,10 +1,17 @@
# last
# Sink.last
Materializes into a @scala[`Future`] @java[`CompletionStage`] which will complete with the last value emitted when the stream
completes.
Materializes into a @scala[`Future`] @java[`CompletionStage`] which will complete with the last value emitted when the stream completes.
@ref[Sink stages](../index.md#sink-stages)
@@@div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #last }
@@@
## Description
Materializes into a @scala[`Future`] @java[`CompletionStage`] which will complete with the last value emitted when the stream
@ -19,5 +26,4 @@ completes. If the stream completes with no elements the @scala[`Future`] @java[`
@@@
## Example

View file

@ -1,10 +1,17 @@
# lastOption
# Sink.lastOption
Materialize a @scala[`Future[Option[T]]`] @java[`CompletionStage<Optional<T>>`] which completes with the last value
emitted wrapped in an @scala[`Some`] @java[`Optional`] when the stream completes.
Materialize a @scala[`Future[Option[T]]`] @java[`CompletionStage<Optional<T>>`] which completes with the last value emitted wrapped in @scala[a `Some`] @java[an `Optional`] when the stream completes.
@ref[Sink stages](../index.md#sink-stages)
@@@div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #lastOption }
@@@
## Description
Materialize a @scala[`Future[Option[T]]`] @java[`CompletionStage<Optional<T>>`] which completes with the last value
@ -20,5 +27,3 @@ completed with @scala[`None`] @java[an empty `Optional`].
@@@
## Example

View file

@ -0,0 +1,33 @@
# Sink.lazyInitAsync
Creates a real `Sink` upon receiving the first element.
@ref[Sink stages](../index.md#sink-stages)
@@@div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #lazyInitAsync }
@@@
## Description
Creates a real `Sink` upon receiving the first element. The internal `Sink` will not be created if there are no elements,
because of completion or error.
- If upstream completes before an element was received then the @scala[`Future`]@java[`CompletionStage`] is completed with @scala[`None`]@java[an empty `Optional`].
- If upstream fails before an element was received, `sinkFactory` throws an exception, or materialization of the internal
sink fails then the @scala[`Future`]@java[`CompletionStage`] is completed with the exception.
- Otherwise the @scala[`Future`]@java[`CompletionStage`] is completed with the materialized value of the internal sink.
@@@div { .callout }
**cancels** never
**backpressures** when initialized and when created sink backpressures
@@@
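A minimal sketch, assuming the `() => Future[Sink[...]]` factory shape referenced by the signature above (the inner fold sink is illustrative):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Future

implicit val system = ActorSystem("demo")
implicit val materializer = ActorMaterializer()

// the inner fold sink is only created once the first element arrives
val lazySink = Sink.lazyInitAsync(() => Future.successful(Sink.fold[Int, Int](0)(_ + _)))

// Future[Option[Future[Int]]]: completed with None if no element was ever received
val materialized = Source(1 to 3).runWith(lazySink)
```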

View file

@ -1,9 +1,17 @@
# onComplete
# Sink.onComplete
Invoke a callback when the stream has completed or failed.
@ref[Sink stages](../index.md#sink-stages)
@@@div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #onComplete }
@@@
## Description
Invoke a callback when the stream has completed or failed.
@ -17,5 +25,4 @@ Invoke a callback when the stream has completed or failed.
@@@
## Example

View file

@ -0,0 +1,18 @@
# Sink.preMaterialize
Materializes this Sink, immediately returning (1) its materialized value, and (2) a new Sink that can be used to consume elements 'into' the pre-materialized one.
@ref[Sink stages](../index.md#sink-stages)
@@@div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #preMaterialize }
@@@
## Description
Materializes this Sink, immediately returning (1) its materialized value, and (2) a new Sink that can be used to consume elements 'into' the pre-materialized one. Useful when you need the materialized value of a Sink before handing it out to someone else to materialize it for you.
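A minimal sketch, assuming `preMaterialize()` returns the materialized value together with a new `Sink` as described above; the use of `Sink.head` is illustrative:

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}

implicit val system = ActorSystem("demo")
implicit val materializer = ActorMaterializer()

// firstElement: Future[Int] is available right away,
// preMaterializedSink: Sink[Int, NotUsed] can be handed out and run later
val (firstElement, preMaterializedSink) = Sink.head[Int].preMaterialize()

Source.single(42).runWith(preMaterializedSink)
```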

View file

@ -1,9 +1,17 @@
# queue
# Sink.queue
Materialize a `SinkQueue` that can be pulled to trigger demand through the sink.
@ref[Sink stages](../index.md#sink-stages)
@@@div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #queue }
@@@
## Description
Materialize a `SinkQueue` that can be pulled to trigger demand through the sink. The queue contains
@ -18,5 +26,3 @@ a buffer in case stream emitting elements faster than queue pulling them.
@@@
## Example

View file

@ -0,0 +1,29 @@
# Sink.reduce
Apply a reduction function on the incoming elements and pass the result to the next invocation.
@ref[Sink stages](../index.md#sink-stages)
@@@div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #reduce }
@@@
## Description
Apply a reduction function on the incoming elements and pass the result to the next invocation. The first invocation
receives the first two elements of the flow.
Materializes into a @scala[`Future`] @java[`CompletionStage`] that will be completed by the last result of the reduction function.
@@@div { .callout }
**cancels** never
**backpressures** when the previous reduction function invocation has not yet completed
@@@
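A minimal sketch summing a stream of numbers without a zero value (values are illustrative):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}
import scala.concurrent.Future

implicit val system = ActorSystem("demo")
implicit val materializer = ActorMaterializer()

// no zero value: the first invocation combines the first two elements; completes with 15
val sum: Future[Int] = Source(1 to 5).runWith(Sink.reduce[Int](_ + _))
```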

View file

@ -1,10 +1,17 @@
# seq
# Sink.seq
Collect values emitted from the stream into a collection, the collection is available through a @scala[`Future`] @java[`CompletionStage`] or
which completes when the stream completes.
Collect values emitted from the stream into a collection.
@ref[Sink stages](../index.md#sink-stages)
@@@div { .group-scala }
## Signature
@@signature [Sink.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Sink.scala) { #seq }
@@@
## Description
Collect values emitted from the stream into a collection, the collection is available through a @scala[`Future`] @java[`CompletionStage`] or
@ -18,5 +25,4 @@ if more element are emitted the sink will cancel the stream
@@@
## Example

View file

@ -2,6 +2,8 @@
Attaches the given `Sink` to this `Flow`, meaning that elements that pass through this `Flow` will also be sent to the `Sink`.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@ div { .group-scala }
## Signature
@ -25,5 +27,4 @@ Attaches the given `Sink` to this `Flow`, meaning that elements that pass throug
@@@
## Example

View file

@ -1,6 +1,8 @@
# apply
Stream the values of an `immutable.
Stream the values of an `immutable.Seq`.
@ref[Source stages](../index.md#source-stages)
@@@ div { .group-scala }
## Signature
@ -12,7 +14,6 @@ Stream the values of an `immutable.
Stream the values of an `immutable.Seq`.
@@@div { .callout }
**emits** the next value of the seq
@ -21,16 +22,3 @@ Stream the values of an `immutable.Seq`.
@@@
@@@ div { .group-java }
### from
Stream the values of an `Iterable`. Make sure the `Iterable` is immutable or at least not modified after being used
as a source.
@@@
@@@
## Example

View file

@ -2,6 +2,8 @@
Use the `ask` pattern to send a request-reply message to the target `ref` actor.
@ref[Asynchronous processing stages](../index.md#asynchronous-processing-stages)
@@@ div { .group-scala }
## Signature
@ -33,5 +35,3 @@ Adheres to the [[ActorAttributes.SupervisionStrategy]] attribute.
@@@
## Example

View file

@ -1,7 +1,8 @@
# backpressureTimeout
If the time between the emission of an element and the following downstream demand exceeds the provided timeout,
the stream is failed with a `TimeoutException`.
If the time between the emission of an element and the following downstream demand exceeds the provided timeout, the stream is failed with a `TimeoutException`.
@ref[Time aware stages](../index.md#time-aware-stages)
@@@ div { .group-scala }
## Signature
@ -29,5 +30,3 @@ check is one period (equals to timeout value).
@@@
## Example

View file

@ -2,6 +2,8 @@
Fan-out the stream to several streams.
@ref[Fan-out stages](../index.md#fan-out-stages)
## Description
Fan-out the stream to several streams. Each upstream element is emitted to the first available downstream consumer.
@ -19,5 +21,3 @@ Fan-out the stream to several streams. Each upstream element is emitted to the f
@@@
## Example

View file

@ -1,7 +1,8 @@
# batch
Allow for a slower downstream by passing incoming elements and a summary into an aggregate function as long as there
is backpressure and a maximum number of batched elements is not yet reached.
Allow for a slower downstream by passing incoming elements and a summary into an aggregate function as long as there is backpressure and a maximum number of batched elements is not yet reached.
@ref[Backpressure aware stages](../index.md#backpressure-aware-stages)
@@@ div { .group-scala }
## Signature
@ -33,5 +34,3 @@ aggregated to the batched value.
@@@
## Example

View file

@ -1,7 +1,8 @@
# batchWeighted
Allow for a slower downstream by passing incoming elements and a summary into an aggregate function as long as there
is backpressure and a maximum weight batched elements is not yet reached.
Allow for a slower downstream by passing incoming elements and a summary into an aggregate function as long as there is backpressure and a maximum weight of batched elements is not yet reached.
@ref[Backpressure aware stages](../index.md#backpressure-aware-stages)
@@@ div { .group-scala }
## Signature
@ -31,5 +32,3 @@ aggregated to the batched value.
@@@
## Example

View file

@ -2,6 +2,8 @@
Emit each incoming element to each of `n` outputs.
@ref[Fan-out stages](../index.md#fan-out-stages)
## Description
Emit each incoming element to each of `n` outputs.
@ -19,5 +21,4 @@ Emit each incoming element each of `n` outputs.
@@@
## Example

View file

@ -2,6 +2,8 @@
Allow for temporarily faster upstream events by buffering `size` elements.
@ref[Backpressure aware stages](../index.md#backpressure-aware-stages)
@@@ div { .group-scala }
## Signature
@ -14,7 +16,7 @@ Allow for a temporarily faster upstream events by buffering `size` elements.
Allow for temporarily faster upstream events by buffering `size` elements. When the buffer is full, a new element is
handled according to the specified `OverflowStrategy`:
* `backpressure` backpressue is applied upstream
* `backpressure` backpressure is applied upstream
* `dropHead` drops the oldest element in the buffer to make space for the new element
* `dropTail` drops the youngest element in the buffer to make space for the new element
* `dropBuffer` drops the entire buffer and buffers the new element
@ -27,11 +29,10 @@ handled according to the specified `OverflowStrategy`:
**emits** when downstream stops backpressuring and there is a pending element in the buffer
**backpressures** when `OverflowStrategy` is `backpressue` and buffer is full
**backpressures** when `OverflowStrategy` is `backpressure` and buffer is full
**completes** when upstream completes and buffered elements have been drained, or when `OverflowStrategy` is `fail`, the buffer is full and a new element arrives
@@@
## Example

View file

@ -1,10 +1,17 @@
# collect
Apply a partial function to each incoming element, if the partial function is defined for a value the returned
value is passed downstream.
Apply a partial function to each incoming element; if the partial function is defined for a value, the returned value is passed downstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #collect }
@@@
## Description
Apply a partial function to each incoming element; if the partial function is defined for a value, the returned
@ -21,5 +28,3 @@ value is passed downstream. Can often replace `filter` followed by `map` to achi
@@@
## Example

View file

@ -0,0 +1,16 @@
# collectType
Transform this stream by only passing through those elements that are instances of the provided type as they pass through this processing step.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #collectType }
@@@
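A minimal sketch (element values are illustrative): only the `String` elements pass through, and the stream is re-typed accordingly.

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Source

implicit val system = ActorSystem("demo")
implicit val materializer = ActorMaterializer()

// only the String elements pass through; the stream is re-typed to String
Source(List[Any](1, "two", 3.0, "four"))
  .collectType[String]
  .runForeach(println) // prints "two" and "four"
```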

View file

@ -1,10 +1,17 @@
# completionTimeout
If the completion of the stream does not happen until the provided timeout, the stream is failed
with a `TimeoutException`.
If the completion of the stream does not happen within the provided timeout, the stream is failed with a `TimeoutException`.
@ref[Time aware stages](../index.md#time-aware-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #completionTimeout }
@@@
## Description
If the completion of the stream does not happen within the provided timeout, the stream is failed
@ -23,5 +30,3 @@ with a `TimeoutException`.
@@@
## Example

View file

@ -2,8 +2,16 @@
After completion of the original upstream the elements of the given source will be emitted.
@ref[Fan-in stages](../index.md#fan-in-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #concat }
@@@
## Description
After completion of the original upstream the elements of the given source will be emitted.
@ -19,5 +27,3 @@ After completion of the original upstream the elements of the given source will
@@@
## Example

View file

@ -1,10 +1,17 @@
# conflate
Allow for a slower downstream by passing incoming elements and a summary into an aggregate function as long as
there is backpressure.
Allow for a slower downstream by passing incoming elements and a summary into an aggregate function as long as there is backpressure.
@ref[Backpressure aware stages](../index.md#backpressure-aware-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #conflate }
@@@
## Description
Allow for a slower downstream by passing incoming elements and a summary into an aggregate function as long as
@ -22,5 +29,3 @@ average of incoming numbers, if aggregation should lead to a different type `con
@@@
## Example

View file

@ -1,10 +1,17 @@
# conflateWithSeed
Allow for a slower downstream by passing incoming elements and a summary into an aggregate function as long as there
is backpressure.
Allow for a slower downstream by passing incoming elements and a summary into an aggregate function as long as there is backpressure.
@ref[Backpressure aware stages](../index.md#backpressure-aware-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #conflateWithSeed }
@@@
## Description
Allow for a slower downstream by passing incoming elements and a summary into an aggregate function as long as there
@ -22,5 +29,4 @@ transform it to the summary type.
@@@
## Example

View file

@ -2,8 +2,16 @@
Delay every element passed through with a specific duration.
@ref[Timer driven stages](../index.md#timer-driven-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #delay }
@@@
## Description
Delay every element passed through with a specific duration.
@ -20,5 +28,3 @@ Delay every element passed through with a specific duration.
@@@
## Example

View file

@ -2,8 +2,16 @@
Detach upstream demand from downstream demand without detaching the stream rates.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #detach }
@@@
## Description
Detach upstream demand from downstream demand without detaching the stream rates.
@ -19,5 +27,3 @@ Detach upstream demand from downstream demand without detaching the stream rates
@@@
## Example

View file

@ -2,8 +2,16 @@
Each upstream element will either be diverted to the given sink or to the downstream consumer, according to the predicate function applied to the element.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #divertTo }
@@@
## Description
Each upstream element will either be diverted to the given sink or to the downstream consumer, according to the predicate function applied to the element.
@ -21,5 +29,3 @@ Each upstream element will either be diverted to the given sink, or the downstre
@@@
## Example

View file

@ -2,8 +2,16 @@
Drop `n` elements and then pass any subsequent element downstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #drop }
@@@
## Description
Drop `n` elements and then pass any subsequent element downstream.
@ -19,5 +27,3 @@ Drop `n` elements and then pass any subsequent element downstream.
@@@
## Example

View file

@ -1,9 +1,17 @@
# dropWhile
dropWhile
Drop elements as long as a predicate function returns true for the element.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #dropWhile }
@@@
## Description
Drop elements as long as a predicate function returns true for the element.
@ -19,5 +27,3 @@ Drop elements as long as a predicate function return true for the element
@@@
## Example

View file

@ -1,9 +1,17 @@
# dropWithin
dropWithin
Drop elements until a timeout has fired
@ref[Timer driven stages](../index.md#timer-driven-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #dropWithin }
@@@
## Description
Drop elements until a timeout has fired
@ -19,5 +27,3 @@ Drop elements until a timeout has fired
@@@
## Example

View file

@ -1,16 +1,23 @@
# expand
Like `extrapolate`, but does not have the `initial` argument, and the `Iterator` is also used in lieu of the original
element, allowing for it to be rewritten and/or filtered.
Like `extrapolate`, but does not have the `initial` argument, and the `Iterator` is also used in lieu of the original element, allowing for it to be rewritten and/or filtered.
@ref[Backpressure aware stages](../index.md#backpressure-aware-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #expand }
@@@
## Description
Like `extrapolate`, but does not have the `initial` argument, and the `Iterator` is also used in lieu of the original
element, allowing for it to be rewritten and/or filtered.
See @ref:[Understanding extrapolate and expand](../stream-rate.md#understanding-extrapolate-and-expand) for more information
See @ref:[Understanding extrapolate and expand](../../stream-rate.md#understanding-extrapolate-and-expand) for more information
and examples.
@ -24,5 +31,3 @@ and examples.
@@@
## Example

View file

@ -2,8 +2,16 @@
Allow for a faster downstream by expanding the last emitted element to an `Iterator`.
@ref[Backpressure aware stages](../index.md#backpressure-aware-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #extrapolate }
@@@
## Description
Allow for a faster downstream by expanding the last emitted element to an `Iterator`. For example, an
@ -14,7 +22,7 @@ All original elements are always emitted unchanged - the `Iterator` is only used
Includes an optional `initial` argument to prevent blocking the entire stream when there are multiple producers.
See @ref:[Understanding extrapolate and expand](../stream-rate.md#understanding-extrapolate-and-expand) for more information
See @ref:[Understanding extrapolate and expand](../../stream-rate.md#understanding-extrapolate-and-expand) for more information
and examples.
@ -28,5 +36,3 @@ and examples.
@@@
## Example

View file

@ -2,8 +2,16 @@
Filter the incoming elements using a predicate.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #filter }
@@@
## Description
Filter the incoming elements using a predicate. If the predicate returns true the element is passed downstream; if
@ -20,5 +28,3 @@ it returns false the element is discarded.
@@@
## Example

View file

@ -2,8 +2,16 @@
Filter the incoming elements using a predicate.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #filterNot }
@@@
## Description
Filter the incoming elements using a predicate. If the predicate returns false the element is passed downstream; if
@ -20,5 +28,3 @@ it returns true the element is discarded.
@@@
## Example

View file

@ -1,10 +1,17 @@
# flatMapConcat
Transform each input element into a `Source` whose elements are then flattened into the output stream through
concatenation.
Transform each input element into a `Source` whose elements are then flattened into the output stream through concatenation.
@ref[Nesting and flattening stages](../index.md#nesting-and-flattening-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #flatMapConcat }
@@@
## Description
Transform each input element into a `Source` whose elements are then flattened into the output stream through
@ -21,5 +28,3 @@ concatenation. This means each source is fully consumed before consumption of th
@@@
## Example

View file

@ -1,10 +1,17 @@
# flatMapMerge
Transform each input element into a `Source` whose elements are then flattened into the output stream through
merging.
Transform each input element into a `Source` whose elements are then flattened into the output stream through merging.
@ref[Nesting and flattening stages](../index.md#nesting-and-flattening-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #flatMapMerge }
@@@
## Description
Transform each input element into a `Source` whose elements are then flattened into the output stream through
@ -21,5 +28,4 @@ merging. The maximum number of merged sources has to be specified.
@@@
## Example

View file

@ -1,10 +1,17 @@
# fold
Start with current value `zero` and then apply the current and next value to the given function, when upstream
complete the current value is emitted downstream.
Start with current value `zero` and then apply the current and next value to the given function; when upstream completes, the current value is emitted downstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #fold }
@@@
## Description
Start with current value `zero` and then apply the current and next value to the given function, when upstream
@ -21,5 +28,3 @@ complete the current value is emitted downstream.
@@@
## Example

View file

@ -2,8 +2,16 @@
Just like `fold` but receiving a function that results in a @scala[`Future`] @java[`CompletionStage`] of the next value.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #foldAsync }
@@@
## Description
Just like `fold` but receiving a function that results in a @scala[`Future`] @java[`CompletionStage`] of the next value.
@ -19,5 +27,3 @@ Just like `fold` but receiving a function that results in a @scala[`Future`] @ja
@@@
## Example

View file

@ -2,8 +2,16 @@
Demultiplex the incoming stream into separate output streams.
@ref[Nesting and flattening stages](../index.md#nesting-and-flattening-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #groupBy }
@@@
## Description
Demultiplex the incoming stream into separate output streams.
@ -18,5 +26,3 @@ there is an element pending for a group whose substream backpressures
@@@
## Example

View file

@ -1,10 +1,17 @@
# grouped
Accumulate incoming events until the specified number of elements have been accumulated and then pass the collection of
elements downstream.
Accumulate incoming events until the specified number of elements have been accumulated and then pass the collection of elements downstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #grouped }
@@@
## Description
Accumulate incoming events until the specified number of elements have been accumulated and then pass the collection of
@ -21,5 +28,3 @@ elements downstream.
@@@
## Example

View file

@ -1,10 +1,17 @@
# groupedWeightedWithin
Chunk up this stream into groups of elements received within a time window, or limited by the weight of the elements,
whatever happens first.
Chunk up this stream into groups of elements received within a time window, or limited by the weight of the elements, whatever happens first.
@ref[Timer driven stages](../index.md#timer-driven-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #groupedWeightedWithin }
@@@
## Description
Chunk up this stream into groups of elements received within a time window, or limited by the weight of the elements,
@ -23,5 +30,3 @@ but not if no elements has been grouped (i.e: no empty groups), or when weight l
@@@
## Example

View file

@ -1,10 +1,17 @@
# groupedWithin
Chunk up this stream into groups of elements received within a time window, or limited by the number of the elements,
whatever happens first.
Chunk up this stream into groups of elements received within a time window, or limited by the number of the elements, whatever happens first.
@ref[Timer driven stages](../index.md#timer-driven-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #groupedWithin }
@@@
## Description
Chunk up this stream into groups of elements received within a time window, or limited by the number of the elements,
@ -23,5 +30,3 @@ but not if no elements has been grouped (i.e: no empty groups), or when limit ha
@@@
## Example

View file

@ -1,10 +1,17 @@
# idleTimeout
If the time between two processed elements exceeds the provided timeout, the stream is failed
with a `TimeoutException`.
If the time between two processed elements exceeds the provided timeout, the stream is failed with a `TimeoutException`.
@ref[Time aware stages](../index.md#time-aware-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #idleTimeout }
@@@
## Description
If the time between two processed elements exceeds the provided timeout, the stream is failed
@ -24,5 +31,3 @@ check is one period (equals to timeout value).
@@@
## Example

View file

@ -2,8 +2,16 @@
Delays the initial element by the specified duration.
@ref[Timer driven stages](../index.md#timer-driven-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #initialDelay }
@@@
## Description
Delays the initial element by the specified duration.
@ -21,5 +29,3 @@ Delays the initial element by the specified duration.
@@@
## Example

View file

@ -1,10 +1,17 @@
# initialTimeout
If the first element has not passed through this stage before the provided timeout, the stream is failed
with a `TimeoutException`.
If the first element has not passed through this stage before the provided timeout, the stream is failed with a `TimeoutException`.
@ref[Time aware stages](../index.md#time-aware-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #initialTimeout }
@@@
## Description
If the first element has not passed through this stage before the provided timeout, the stream is failed
@ -23,5 +30,3 @@ with a `TimeoutException`.
@@@
## Example

View file

@ -2,8 +2,16 @@
Emits a specifiable number of elements from the original source, then from the provided source and repeats.
@ref[Fan-in stages](../index.md#fan-in-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #interleave }
@@@
## Description
Emits a specifiable number of elements from the original source, then from the provided source and repeats. If one
@ -20,5 +28,3 @@ source completes the rest of the other stream will be emitted.
@@@
## Example

View file

@ -1,9 +1,17 @@
# intersperse
Intersperse stream with provided element similar to `List.
Intersperse stream with provided element similar to `List.mkString`.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #intersperse }
@@@
## Description
Intersperse stream with provided element similar to `List.mkString`. It can inject start and end marker elements into the stream.
@ -19,5 +27,3 @@ Intersperse stream with provided element similar to `List.mkString`. It can inje
@@@
## Example

View file

@ -2,8 +2,16 @@
Injects additional (configured) elements if upstream does not emit for a configured amount of time.
@ref[Time aware stages](../index.md#time-aware-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #keepAlive }
@@@
## Description
Injects additional (configured) elements if upstream does not emit for a configured amount of time.
@ -21,5 +29,3 @@ Injects additional (configured) elements if upstream does not emit for a configu
@@@
## Example

View file

@ -2,8 +2,16 @@
Limit the number of elements from upstream to the given `max` number.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #limit }
@@@
## Description
Limit the number of elements from upstream to the given `max` number.
@ -19,5 +27,3 @@ Limit number of element from upstream to given `max` number.
@@@
## Example

View file

@ -2,8 +2,16 @@
Ensure stream boundedness by evaluating the cost of incoming elements using a cost function.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #limitWeighted }
@@@
## Description
Ensure stream boundedness by evaluating the cost of incoming elements using a cost function.
@ -20,5 +28,3 @@ Evaluated cost of each element defines how many elements will be allowed to trav
@@@
## Example

View file

@ -2,8 +2,16 @@
Log elements flowing through the stream as well as completion and erroring.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #log }
@@@
## Description
Log elements flowing through the stream as well as completion and erroring. By default element and
@ -21,5 +29,3 @@ This can be changed by calling @scala[`Attributes.logLevels(...)`] @java[`Attrib
@@@
## Example

View file

@ -2,8 +2,16 @@
Transform each element in the stream by calling a mapping function with it and passing the returned value downstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #map }
@@@
## Description
Transform each element in the stream by calling a mapping function with it and passing the returned value downstream.
@ -19,5 +27,4 @@ Transform each element in the stream by calling a mapping function with it and p
@@@
## Example

View file

@ -2,8 +2,16 @@
Pass incoming elements to a function that returns a @scala[`Future`] @java[`CompletionStage`] result.
@ref[Asynchronous processing stages](../index.md#asynchronous-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #mapAsync }
@@@
## Description
Pass incoming elements to a function that returns a @scala[`Future`] @java[`CompletionStage`] result. When the @scala[`Future`] @java[`CompletionStage`] arrives the result is passed
@ -23,5 +31,3 @@ If a @scala[`Future`] @java[`CompletionStage`] fails, the stream also fails (unl
@@@
## Example

View file

@ -1,10 +1,17 @@
# mapAsyncUnordered
Like `mapAsync` but @scala[`Future`] @java[`CompletionStage`] results are passed downstream as they arrive regardless of the order of the elements
that triggered them.
Like `mapAsync` but @scala[`Future`] @java[`CompletionStage`] results are passed downstream as they arrive regardless of the order of the elements that triggered them.
@ref[Asynchronous processing stages](../index.md#asynchronous-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #mapAsyncUnordered }
@@@
## Description
Like `mapAsync` but @scala[`Future`] @java[`CompletionStage`] results are passed downstream as they arrive regardless of the order of the elements
@ -23,5 +30,3 @@ If a @scala[`Future`] @java[`CompletionStage`] fails, the stream also fails (unl
@@@
## Example

View file

@ -2,8 +2,16 @@
Transform each element into zero or more elements that are individually passed downstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #mapConcat }
@@@
## Description
Transform each element into zero or more elements that are individually passed downstream.
@ -19,5 +27,3 @@ Transform each element into zero or more elements that are individually passed d
@@@
## Example

View file

@ -1,10 +1,17 @@
# mapError
While similar to `recover` this stage can be used to transform an error signal to a different one *without* logging
it as an error in the process.
While similar to `recover` this stage can be used to transform an error signal to a different one *without* logging it as an error in the process.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #mapError }
@@@
## Description
While similar to `recover` this stage can be used to transform an error signal to a different one *without* logging
@ -25,5 +32,3 @@ Similarily to `recover` throwing an exception inside `mapError` _will_ be logged
@@@
## Example

View file

@ -2,8 +2,16 @@
Merge multiple sources.
@ref[Fan-in stages](../index.md#fan-in-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #merge }
@@@
## Description
Merge multiple sources. Picks elements randomly if all sources have elements ready.
@ -19,5 +27,3 @@ Merge multiple sources. Picks elements randomly if all sources has elements read
@@@
## Example

View file

@ -2,6 +2,8 @@
Merge multiple sources.
@ref[Fan-in stages](../index.md#fan-in-stages)
## Signature
## Description
@ -19,5 +21,3 @@ Merge multiple sources. Prefer one source if all sources has elements ready.
@@@
## Example

View file

@ -2,6 +2,8 @@
Merge multiple sources.
@ref[Fan-in stages](../index.md#fan-in-stages)
## Signature
## Description
@ -20,5 +22,3 @@ sources has elements ready the relative priorities for those sources are used to
@@@
## Example

View file

@ -2,8 +2,16 @@
Merge multiple sources.
@ref[Fan-in stages](../index.md#fan-in-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #mergeSorted }
@@@
## Description
Merge multiple sources. Waits for one element to be ready from each input stream and emits the
@ -20,5 +28,3 @@ smallest element.
@@@
## Example

View file

@ -2,8 +2,16 @@
Materializes to a `FlowMonitor` that monitors messages flowing through or completion of the stage.
@ref[Watching status stages](../index.md#watching-status-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #monitor }
@@@
## Description
Materializes to a `FlowMonitor` that monitors messages flowing through or completion of the stage. The stage otherwise
@ -21,5 +29,3 @@ event, and may therefore affect performance.
@@@
## Example

View file

@ -1,10 +1,17 @@
# orElse
If the primary source completes without emitting any elements, the elements from the secondary source
are emitted.
If the primary source completes without emitting any elements, the elements from the secondary source are emitted.
@ref[Fan-in stages](../index.md#fan-in-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #orElse }
@@@
## Description
If the primary source completes without emitting any elements, the elements from the secondary source
@ -28,5 +35,3 @@ without emitting and the secondary stream already has completed or when the seco
@@@
## Example

View file

@ -2,6 +2,8 @@
Fan-out the stream to several streams.
@ref[Fan-out stages](../index.md#fan-out-stages)
## Signature
## Description
@ -22,5 +24,3 @@ partitioner function applied to the element.
@@@
## Example

View file

@ -1,10 +1,17 @@
# prefixAndTail
Take up to *n* elements from the stream (less than *n* only if the upstream completes before emitting *n* elements)
and returns a pair containing a strict sequence of the taken element and a stream representing the remaining elements.
Take up to *n* elements from the stream (less than *n* only if the upstream completes before emitting *n* elements) and returns a pair containing a strict sequence of the taken elements and a stream representing the remaining elements.
@ref[Nesting and flattening stages](../index.md#nesting-and-flattening-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #prefixAndTail }
@@@
## Description
Take up to *n* elements from the stream (less than *n* only if the upstream completes before emitting *n* elements)
@ -21,5 +28,4 @@ and returns a pair containing a strict sequence of the taken element and a strea
@@@
## Example

View file

@ -2,8 +2,16 @@
Prepends the given source to the flow, consuming it until completion before the original source is consumed.
@ref[Fan-in stages](../index.md#fan-in-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #prepend }
@@@
## Description
Prepends the given source to the flow, consuming it until completion before the original source is consumed.
@ -21,5 +29,3 @@ If materialized values needs to be collected `prependMat` is available.
@@@
## Example

View file

@ -2,8 +2,16 @@
Allow sending of one last element downstream when a failure has happened upstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #recover }
@@@
## Description
Allow sending of one last element downstream when a failure has happened upstream.
@ -21,5 +29,3 @@ Throwing an exception inside `recover` _will_ be logged on ERROR level automatic
@@@
## Example

View file

@ -2,8 +2,16 @@
Allow switching to an alternative Source when a failure has happened upstream.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #recoverWith }
@@@
## Description
Allow switching to an alternative Source when a failure has happened upstream.
@ -21,5 +29,3 @@ Throwing an exception inside `recoverWith` _will_ be logged on ERROR level autom
@@@
## Example

View file

@ -2,8 +2,16 @@
RecoverWithRetries allows switching to an alternative Source on flow failure.
@ref[Simple processing stages](../index.md#simple-processing-stages)
@@@div { .group-scala }
## Signature
@@signature [Flow.scala]($akka$/akka-stream/src/main/scala/akka/stream/scaladsl/Flow.scala) { #recoverWithRetries }
@@@
## Description
RecoverWithRetries allows switching to an alternative Source on flow failure. It will stay in effect after
@ -25,5 +33,3 @@ This stage can recover the failure signal, but not the skipped elements, which w
@@@
## Example

Some files were not shown because too many files have changed in this diff