Fixes #28769
The use case for this is if you have a sequence of elements that has been
partitioned across multiple streams and you want to merge them back
together in order. It will typically be used in combination with
`zipWithIndex` to assign each element its index in the sequence, followed
by a `Partition`, followed by the processing of the different substreams
with different flows (each flow emitting exactly one output for each
input), and then merging with this stage, using the index from
`zipWithIndex`.
A more concrete use case: if you're consuming messages from a message
broker and you have a flow that you wish to apply to some messages but
not others, you can partition the message stream according to which
messages should be processed by the flow and which should bypass it, and
then bring the elements back together for in-order acknowledgement. If an
ordinary merge were used instead of this stage, the messages that bypass
the processing flow would likely overtake the messages going through it,
and the resulting out-of-order offset acknowledgement would lead to
dropped messages on failure.
I've included a minimal version of the above example in the documentation.
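A minimal sketch of that pattern, using the stage added here (the partitioning predicate and the processing flow are invented for illustration):
```scala
import akka.NotUsed
import akka.stream.FlowShape
import akka.stream.scaladsl.{ Flow, GraphDSL, MergeSequence, Partition, Source }

// Hypothetical processing flow: exactly one output per input, index carried along.
val process: Flow[(String, Long), (String, Long), NotUsed] =
  Flow[(String, Long)].map { case (msg, idx) => (msg.toUpperCase, idx) }

val inOrder: Source[(String, Long), NotUsed] =
  Source(List("apply", "bypass", "apply")).zipWithIndex
    .via(Flow.fromGraph(GraphDSL.create() { implicit b =>
      import GraphDSL.Implicits._
      // Route each message either through the processing flow or around it.
      val partition =
        b.add(Partition[(String, Long)](2, msg => if (msg._1.startsWith("a")) 0 else 1))
      // Merge back in order, keyed on the index assigned by zipWithIndex.
      val merge = b.add(MergeSequence[(String, Long)](2)(_._2))
      partition.out(0) ~> process ~> merge
      partition.out(1) ~> merge
      FlowShape(partition.in, merge.out)
    }))
```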
There's only one read, so the test was relying on both the Data and the
Failed elements being in the shared queue when that read takes place.
Remove the Data element so that the poll on the shared queue waits for
the Failed to be added.
Ref #28829
* Add scalafix plugin for jdk 9.
* Add command alias sortImports.
* Exclude some sources from SortImports.
* Update SortImports to 0.4.0
* Sort imports with `sortImports` command.
* scalafix ExplicitNonNullaryApply prepare
+ Temporarily use com.sandinh:sbt-scalafix because of scalacenter/scalafix#1098
+ Add ExplicitNonNullaryApply rule to .scalafix.conf
+ Manually fix a NonNullaryApply case in DeathWatchSpec that caused
`fixall` to fail because the ExplicitNonNullaryApply rule incorrectly rewrote
`context unbecome` to `context unbecome()` instead of `context.unbecome()`
* scalafix ExplicitNonNullaryApply
Fixed by enabling only the ExplicitNonNullaryApply rule in .scalafix.conf and then running:
```
% sbt -Dakka.build.scalaVersion=2.13.1
> fixall
```
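For reference, a minimal .scalafix.conf enabling only that rule would look like this (a sketch of the standard scalafix configuration format):
```
rules = [
  ExplicitNonNullaryApply
]
```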
* scalafmtAll
* Revert to ch.epfl.scala:sbt-scalafix
Co-authored-by: Bùi Việt Thành <thanhbv@sandinh.net>
* Deprecate the internal sameThread ExecutionContext and use a new one for all internal use sites
* Use the respective Scala version's standard library "same thread" ExecutionContext
* Fall back to the old inline implementation on 2.12 when reflection isn't possible
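On Scala 2.13 the standard library provides such a calling-thread executor; a minimal sketch of the idea (the value names are invented):
```scala
import scala.concurrent.{ ExecutionContext, Future }

// Runs callbacks on the calling thread; only suitable for cheap,
// non-blocking work such as trivial transformations.
val sameThread: ExecutionContext = ExecutionContext.parasitic

// Map a Future without a thread hop.
val doubled: Future[Int] = Future.successful(21).map(_ * 2)(sameThread)
```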
Notably fixes the case where upstream finished before the connection
was successfully established, and avoids RSTing the incoming stream
when the outgoing stream is done (which is now possible due to the
cancellation reason being propagated).
This way the stack trace will be more helpful because it contains the stage
that actually triggered the materialization.
Otherwise, we would only fail during `preStart` in the interpreter, where the
stage is failed and the error propagated through the stream, making it
hard to figure out what happened.
Also improve the message itself to contain the user-provided name of the
sink/source.
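For context, the name referred to here is the one users can attach with the `named` attribute; a minimal sketch (the name string is invented):
```scala
import akka.stream.scaladsl.Source

// The name attached here can now appear in the materialization error message.
val billingEvents = Source.single(1).named("billing-events")
```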
* allow Sink.queue concurrent pulling (see the sketch after this list)
* replace methods with default parameters by two overloaded methods to pass the binary compatibility check :/
* replace ⇒ with =>
* reformat
* add javadsl
* fix PR comments and add concurrency to Sink.queue
* fix merge after auto resolving
* duplicate changes to javadsl
* revert source changes
* add graceful terminations
* clean up tests
* optimize imports
* trigger rebuild
* cover the case where the materializer shuts down before async callbacks are processed
* vars to vals; fix require messages
* disable compatibility check for @InternalApi private[akka] class
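A small sketch of what concurrent pulling enables, assuming the new overload takes the maximum number of concurrent pulls as its parameter:
```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{ Sink, Source }
import scala.concurrent.Future

implicit val system: ActorSystem = ActorSystem("queue-demo")

// The argument is assumed to be the maximum number of concurrent pulls.
val queue = Source(1 to 100).runWith(Sink.queue[Int](2))

// Two pulls can now be outstanding at the same time; previously, calling
// pull() again before the first future completed would fail the queue.
val first: Future[Option[Int]] = queue.pull()
val second: Future[Option[Int]] = queue.pull()
```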
This will also mean that completion will not be blocked by elements that
will later be filtered out.
One particular use case is a kind of partitioning, where you put several
consumers behind a broadcast and each consumer filters out the elements it
does not handle. In that case, the broadcast can get head-of-line blocked
when one of the consumers currently has no demand, even though it wouldn't
have to handle any of the pending elements because they would all be
filtered out.
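A sketch of that partitioning-by-filtering setup, with invented stream contents:
```scala
import akka.actor.ActorSystem
import akka.stream.ClosedShape
import akka.stream.scaladsl.{ Broadcast, Flow, GraphDSL, RunnableGraph, Sink, Source }

implicit val system: ActorSystem = ActorSystem("filter-partition")

RunnableGraph
  .fromGraph(GraphDSL.create() { implicit b =>
    import GraphDSL.Implicits._
    val bcast = b.add(Broadcast[Int](2))
    Source(1 to 10) ~> bcast
    // Each branch filters out what the other handles; with eager filtering,
    // elements a branch would drop no longer require that branch to have demand.
    bcast.out(0) ~> Flow[Int].filter(_ % 2 == 0) ~> Sink.foreach[Int](n => println(s"even: $n"))
    bcast.out(1) ~> Flow[Int].filter(_ % 2 != 0) ~> Sink.foreach[Int](n => println(s"odd: $n"))
    ClosedShape
  })
  .run()
```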
* Unfortunately it seems the jdk9-only tests could not actually be compiled.
With these changes they can be compiled and run again.
* Always link to jdk11 for java.* javadocs
* Update sbt-paradox-akka to fix linking to inner classes for javadoc