Fixes #28769
The use case for this is when you have a sequence of elements that has been
partitioned across multiple streams and you want to merge them back
together in order. It will typically be used in combination with
`zipWithIndex` to define the index for the sequence, followed by a
`Partition`, followed by processing the different substreams with
different flows (each flow emitting exactly one output for each input),
and then merging with this stage, using the index from `zipWithIndex`.
A more concrete use case: if you're consuming messages from a message
broker and have a flow that you wish to apply to some messages but not
others, you can partition the message stream according to which messages
should be processed by the flow and which should bypass it, and then
bring the elements back together for in-order acknowledgement. If an
ordinary merge were used instead, the messages that bypass the processing
flow would likely overtake the messages going through it, and the
resulting out-of-order offset acknowledgement would lead to dropping
messages on failure.
I've included a minimal version of the above example in the documentation.
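Roughly, the example looks like this (a sketch only, assuming the stage is exposed as `MergeSequence` with a `(inputPorts)(extractSequence)` factory; the message type and partitioning logic are illustrative):

```scala
import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.FlowShape
import akka.stream.scaladsl.{ Flow, GraphDSL, MergeSequence, Partition, Sink, Source }

object MergeSequenceExample extends App {
  implicit val system: ActorSystem = ActorSystem("example")

  val processAndBypass: Flow[(String, Long), (String, Long), NotUsed] =
    Flow.fromGraph(GraphDSL.create() { implicit b =>
      import GraphDSL.Implicits._

      // route messages either through the processing flow (port 0) or the bypass (port 1)
      val partition = b.add(Partition[(String, Long)](2, {
        case (msg, _) => if (msg.startsWith("important")) 0 else 1
      }))
      // merge the substreams back together in order of the zipWithIndex index
      val merge = b.add(MergeSequence[(String, Long)](2)(_._2))

      // must emit exactly one element for every input element
      val process = Flow[(String, Long)].map { case (msg, idx) => (msg.toUpperCase, idx) }

      partition.out(0) ~> process ~> merge
      partition.out(1)            ~> merge

      FlowShape(partition.in, merge.out)
    })

  // index with zipWithIndex, run through the graph, then e.g. acknowledge offsets in order
  Source(List("important a", "other b", "important c")).zipWithIndex
    .via(processAndBypass)
    .runWith(Sink.foreach(println))
}
```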
New type StatusReply simplifies the very common use case of replying to a request with either a successful reply or an error reply. Defining such replies for every actor is repetitive, with the additional overhead of having to make sure each sealed top type plus its two concrete reply classes has working serialization.
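As a sketch of the intended usage (the `AddItem` protocol below is illustrative; only `StatusReply` itself comes from this change):

```scala
import akka.Done
import akka.actor.typed.ActorRef
import akka.actor.typed.scaladsl.Behaviors
import akka.pattern.StatusReply

object StatusReplyExample {
  // Illustrative protocol: a single generic StatusReply[T] replaces a hand-rolled
  // sealed reply trait plus separate success/error classes per actor.
  final case class AddItem(item: String, replyTo: ActorRef[StatusReply[Done]])

  val cart = Behaviors.receiveMessage[AddItem] {
    case AddItem(item, replyTo) if item.nonEmpty =>
      // successful reply
      replyTo ! StatusReply.Ack
      Behaviors.same
    case AddItem(_, replyTo) =>
      // error reply, no per-actor error class or serializer needed
      replyTo ! StatusReply.Error("item must not be empty")
      Behaviors.same
  }
}
```

On the asking side this pairs with the `askWithStatus` variants, which unwrap the success value and fail the returned future for an error reply.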
Especially in the Scala 2.13 version, the previous `if xs.iterator.isEmpty`
check for the common case was a big problem, since it expensively created an
iterator even for the most common ++=(ByteString) case.
It showed up in profiles: the usual implementation in 2.13 goes through
SeqOps.isEmpty -> lengthCompare, which checks knownSize != -1 for
the fast path. That doesn't sound too bad, but it introduces enough
indirection that the inliner may not be able to inline all of it,
which then leads to a megamorphic call site, e.g. for the call to lengthCompare.
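A simplified sketch of the kind of fast path this points at (Scala 2.13; `isEmptyFast` is an illustrative helper, not the actual Akka code):

```scala
import akka.util.ByteString

object ByteStringFastPathSketch {
  // Illustrative only: dispatch on the concrete ByteString type before falling back
  // to the generic IterableOnce path, so the common ++=(ByteString) case never
  // allocates an iterator just to test emptiness.
  def isEmptyFast(xs: IterableOnce[Byte]): Boolean = xs match {
    case bs: ByteString => bs.isEmpty // O(1), monomorphic, no iterator allocation
    case other =>
      val ks = other.knownSize
      if (ks >= 0) ks == 0            // size known without traversal
      else !other.iterator.hasNext    // last resort: allocate an iterator
  }
}
```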
This allows tests to override `clearLogsAfterEachTest` to have the logs
cleared after every successful test. It also provides an explicit
`clearCapturedLogs` for suites that can't clear the logs after each test
but still want to limit the logs carried over between tests.
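A sketch of how this could look in a ScalaTest suite, assuming `clearLogsAfterEachTest` is a Boolean override point and `clearCapturedLogs()` is callable from the test body (names taken from the description above; the test itself is illustrative):

```scala
import akka.actor.testkit.typed.scaladsl.{ LogCapturing, ScalaTestWithActorTestKit }
import org.scalatest.wordspec.AnyWordSpecLike

class MySpec extends ScalaTestWithActorTestKit with AnyWordSpecLike with LogCapturing {

  // assumed hook from this change: clear the captured logs after every successful
  // test, so a later failure only prints the logs from the failing test
  override def clearLogsAfterEachTest: Boolean = true

  "the component" must {
    "only show relevant logs when it fails" in {
      // ... noisy setup whose logs we do not care about ...
      clearCapturedLogs() // assumed hook: explicitly drop everything captured so far
      // ... the part whose logs should be shown if this test fails ...
    }
  }
}
```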
* Since DeathWatchNotification is sent over the control channel it may overtake
other messages that have been sent from the same actor before it stopped.
* It can be confusing that Terminated can't be used as an end-of-conversation marker (see the sketch after this list).
* In classic Remoting we didn't have this problem because all messages were sent over
the same connection.
* don't send DeathWatchNotification when the system is terminating
* when using Cluster we can rely on the other side publishing AddressTerminated
  when the member has been removed
* it's actually already a race condition that will often result in the DeathWatchNotification
  from the terminating side never being sent anyway
* in DeathWatch.scala it will remove the watchedBy when receiving AddressTerminated, and that
may (sometimes) happen before tellWatchersWeDied
* same for Unwatch
* to avoid sending many Unwatch messages when the watcher's ActorSystem is terminated
* the same race exists for Unwatch as for DeathWatchNotification, if RemoteWatcher publishes
  AddressTerminated before the watcher is terminated
* config for the flush timeout, and the possibility to disable it
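To make the end-of-conversation point above concrete, here is a sketch of the pattern that can be surprising (classic actors, illustrative names):

```scala
import akka.actor.{ Actor, ActorRef, Terminated }

// The watcher treats Terminated as an end-of-conversation marker. Because
// DeathWatchNotification travels over the control channel, Terminated may overtake
// the worker's last ordinary messages, so results sent just before the worker
// stopped can still be in flight when Terminated is processed.
class Collector(worker: ActorRef) extends Actor {
  context.watch(worker)

  def receive: Receive = {
    case result: String =>
      // ordinary remote message from the worker
      println(s"got result: $result")
    case Terminated(`worker`) =>
      // NOT safe to assume all earlier messages from the worker have arrived
      context.stop(self)
  }
}
```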