Fixes #28769
The use case for this is when a sequence of elements has been partitioned
across multiple streams and you want to merge them back together in order.
It will typically be used in combination with `zipWithIndex` to define the
index for the sequence, followed by a `Partition`, followed by processing
the different substreams with different flows (each flow emitting exactly
one output for each input), and then merging with this stage, using the
index from `zipWithIndex`.
A more concrete use case: if you're consuming messages from a message
broker and you have a flow that you wish to apply to some messages but
not others, you can partition the message stream according to which
messages should be processed by the flow and which should bypass it, and
then bring the elements back together before acknowledgement. If an
ordinary merge were used instead, the messages that bypass the processing
flow would likely overtake the messages going through the processing flow,
and the result would be out-of-order offset acknowledgement, which would
lead to dropped messages on failure.
I've included a minimal version of the above example in the documentation.
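A minimal sketch of that pattern, assuming the new stage is the `MergeSequence` operator keyed by the index produced by `zipWithIndex`; the `Msg` type, the `needsProcessing` predicate, and the `processing` flow are made up for illustration:

```scala
import akka.NotUsed
import akka.stream.FlowShape
import akka.stream.scaladsl.{ Flow, GraphDSL, MergeSequence, Partition }

// Hypothetical message type, for illustration only.
final case class Msg(body: String, needsProcessing: Boolean)

// The processing flow must emit exactly one output per input so that
// the indices assigned by zipWithIndex stay complete.
val processing: Flow[(Msg, Long), (Msg, Long), NotUsed] =
  Flow[(Msg, Long)].map { case (msg, idx) => (msg.copy(body = msg.body.toUpperCase), idx) }

val processSomeInOrder: Flow[Msg, Msg, NotUsed] =
  Flow.fromGraph(GraphDSL.create() { implicit b =>
    import GraphDSL.Implicits._

    // Tag every element with its position in the stream.
    val index     = b.add(Flow[Msg].zipWithIndex)
    // Route elements that need processing to outlet 0, let the rest bypass via outlet 1.
    val partition = b.add(Partition[(Msg, Long)](2, e => if (e._1.needsProcessing) 0 else 1))
    // Merge the substreams back together in the original order, keyed by the index.
    val merge     = b.add(MergeSequence[(Msg, Long)](2)(_._2))
    // Drop the index once ordering has been restored (e.g. before acknowledging offsets).
    val unwrap    = b.add(Flow[(Msg, Long)].map(_._1))

    index.out ~> partition.in
    partition.out(0) ~> processing ~> merge
    partition.out(1)               ~> merge
    merge.out ~> unwrap.in

    FlowShape(index.in, unwrap.out)
  })
```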
Refs #29111
This seems to happen only with TLS 1.3. In that case, remaining data in
`transportInBuffer` was left there instead of being put back onto the
chopping block.
Then `doWrap` was run, but `doUnwrap` was never called again because only
the chopping block was checked for outstanding data, not the buffer.
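Boiled down, the corrected condition looks roughly like this; the names are illustrative, not the actual TLSActor fields:

```scala
// Illustrative only: after doWrap, run doUnwrap again if *either* the chopping
// block *or* the transport-in buffer still holds unconsumed data.
def mustUnwrapAgain(choppingBlockHasData: Boolean, transportInBufferHasData: Boolean): Boolean =
  choppingBlockHasData || transportInBufferHasData
```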
Refs #29110
TLSActor could get caught in a spin-loop on connection termination
because there was an implicit assumption that when inbound is closed
(the peer has sent `close_notify`), the local SSLEngine would also
automatically send a `close_notify` and close the connection.
Therefore, it would stay in `flushOutbound`, pumping in a loop.
This is no longer true with TLS 1.3; more precisely, the behavior can be
configured with `-Djdk.tls.acknowledgeCloseNotify`, which is `false` by
default, leading to half-open connections.
The solution is to not support half-open TLS connections for now and to
consider a connection closed as soon as `isInboundClosed` is true and
there is no outstanding data.
(To support half-open connections, this fix would have to be reverted
and `flushOutbound` fixed accordingly.)
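A rough sketch of the new closing condition; `connectionClosed` and `outstandingData` are illustrative names rather than the actual TLSActor internals, while `isInboundDone` is the standard `SSLEngine` accessor:

```scala
import javax.net.ssl.SSLEngine

// Illustrative only: consider the connection closed once the inbound side is
// done (peer sent close_notify) and nothing remains to be flushed, instead of
// waiting for the engine to emit an outbound close_notify on its own.
def connectionClosed(engine: SSLEngine, outstandingData: Boolean): Boolean =
  engine.isInboundDone && !outstandingData
```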
Refs #28993
The previous check, `nextDeadline - time < 0`, required the nanoTime resolution
to actually be high enough to see that the deadline had already passed. If it
had not, the current keep-alive was missed, and so were all future ones (until
a regular element triggered another push/pull cycle).
Now, with `>=`, it also works in that case and just fails noisily if our
assumptions are not true.
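A minimal illustration of the difference, assuming the usual overflow-safe nanoTime comparison style; these helper names are not the actual stage internals:

```scala
// With coarse nanoTime resolution, `now` can end up exactly equal to `nextDeadline`.
def deadlinePassedOld(nextDeadline: Long, now: Long): Boolean =
  nextDeadline - now < 0   // misses the keep-alive when now == nextDeadline
def deadlinePassedNew(nextDeadline: Long, now: Long): Boolean =
  now - nextDeadline >= 0  // also fires when now == nextDeadline
```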
It's not clear how this could have happened. On my machine, timers
trigger 1-2 tick durations too late (but at least ~2 ms). How that could
coincide exactly with the deadline at nanoTime resolution is hard to see.
* Add scalafix plugin for JDK 9.
* Add command alias sortImports.
* Exclude some sources from SortImports.
* Update SortImports to 0.4.0
* Sort imports with `sortImports` command.
* scalafix ExplicitNonNullaryApply prepare
+ Temporarily use com.sandinh:sbt-scalafix because of scalacenter/scalafix#1098
+ Add ExplicitNonNullaryApply rule to .scalafix.conf
+ Manually fix a NonNullaryApply case in DeathWatchSpec that caused
`fixall` to fail because the ExplicitNonNullaryApply rule incorrectly rewrites
`context unbecome` to `context unbecome()` instead of `context.unbecome()`
* scalafix ExplicitNonNullaryApply
Fixed by enabling only the ExplicitNonNullaryApply rule in .scalafix.conf and then running:
```
% sbt -Dakka.build.scalaVersion=2.13.1
> fixall
```
* scalafmtAll
* Revert to ch.epfl.scala:sbt-scalafix
Co-authored-by: Bùi Việt Thành <thanhbv@sandinh.net>
* Deprecate the internal sameThread ExecutionContext and use a new one for all internal use sites
* Use the respective Scala version's standard library "same thread" ExecutionContext (see the sketch below)
* Fall back to the old inline implementation on 2.12 when reflection isn't possible
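As a sketch of what that means on Scala 2.13+, the standard library ships a run-on-calling-thread executor for cheap, non-blocking callbacks; the `sameThread` name below is illustrative and not the deprecated internal one:

```scala
import scala.concurrent.{ ExecutionContext, Future }

// Scala 2.13+ standard library "same thread" executor.
implicit val sameThread: ExecutionContext = ExecutionContext.parasitic

// Example: map an already-completed future without a thread hop.
val doubled: Future[Int] = Future.successful(21).map(_ * 2)
```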
Notably fixes the case where upstream finished before the connection
was successfully established, and avoids RSTing the incoming stream
when the outgoing stream is done (which is now possible due to the
cancellation reason being propagated).
* update MiMa latestPatch
* Even later latest
* exclude jdk9 classes in 2.6.x excludes
* mima exclude for the `SystemMaterializer.materializer` type (see the sketch below)
Co-authored-by: Johan Andrén <johan@markatta.com>
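For illustration, one way such an exclusion can be expressed with the sbt MiMa plugin; the problem class and fully-qualified member name are assumptions based on the bullet above:

```scala
// build.sbt (sketch): filter out the reported change in the materializer member's type.
import com.typesafe.tools.mima.core._

mimaBinaryIssueFilters += ProblemFilters.exclude[IncompatibleResultTypeProblem](
  "akka.stream.SystemMaterializer.materializer" // assumed fully-qualified member name
)
```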