Before, for every round of `parse` a new buffer was allocated that was then
copied again by `ByteString.fromArray`. This effectively more than doubled
the allocation rate while inflating. For bulk data, as is expected with
compressed data, this can make a big difference in throughput.

The slight downside of keeping the buffer is that the stage now uses more memory
by default, even while idle. deflate/gzip's window is 64 kB, which also happens
to be the default `maxBytesPerChunk` setting. The additional buffer is therefore
expected to less than double the existing memory footprint while more than
halving the allocation rate, which seems like a good trade-off.
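
For illustration, a minimal sketch of the resulting allocation pattern; the `Inflater` wrapper and its `parse` signature are hypothetical stand-ins, not the actual stage internals:

```scala
import akka.util.ByteString

// Hypothetical stand-in for the inflating stage's parse loop.
final class Inflater(maxBytesPerChunk: Int) {
  // Before: the array was allocated inside parse(), so every round created a
  // fresh buffer that ByteString.fromArray then copied a second time.
  // After: one buffer is kept for the lifetime of the stage.
  private val buffer = new Array[Byte](maxBytesPerChunk)

  def parse(inflate: Array[Byte] => Int): ByteString = {
    val bytesRead = inflate(buffer)            // fill the reused buffer
    ByteString.fromArray(buffer, 0, bytesRead) // single copy of the live bytes
  }
}
```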
- MergeHub materialization now produces a DrainingControl object alongside the Sink
- The DrainingControl can be invoked to initiate the draining procedure (see the sketch below)
- Once draining has been initiated, no new producers can be connected to the Hub
- The Hub completes as soon as all the currently connected producers complete
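
A minimal sketch of the draining flow; the `sourceWithDraining` and `drainAndComplete()` names reflect my reading of the new API and may differ in detail:

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{ MergeHub, Sink, Source }

object MergeHubDrainingExample extends App {
  implicit val system: ActorSystem = ActorSystem("drain-example")

  val consumer = Sink.foreach[String](println)

  // Materialization yields the Sink that producers attach to
  // plus the DrainingControl.
  val (sink, draining) =
    MergeHub.sourceWithDraining[String](perProducerBufferSize = 16)
      .to(consumer)
      .run()

  Source(List("a", "b", "c")).runWith(sink)

  // Initiate draining: no new producers can attach, and the Hub completes
  // once all currently connected producers have completed.
  draining.drainAndComplete()
}
```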
* Remove @switch when it doesn't take effect
* Use ActorRef.noSender
* Minor tweaks to SchedulerSpec
* Disambiguate TypedActor for Scala 3
* Bump ScalaTest to a version compatible with Scala 3
* Bump ScalaCheck
* Disambiguate Event in SupervisorHierarchySpec
* Scala 3 compatible EventBusSpec
* Prevent private unused variables from being erased by Scala 3
* Bump mockito
* Explicit actorRef2Scala import
* restore original .scalafix.conf
* Scala 3 compatible tailrec
* Reminder to re-add switch annotation in case
* Move to nowarn instead of silencer
* Bump to Scala 2.12.13
* Cross compatible annotations
* fix docs generation
* adapt the build for Scala 3
* fix errors except bus
* remove more SerialVersion from trait
* scalacheck only from scalatest
* cross-compile akka-actor-tests
* restore cross-compilation
* early initializers workaround
* scalacheck switch
* cross compatible FSM.State class
* cross compatible LARS spec
* Change results to pass LineNumberSpec
* fix stackoverflow in AsyncDnsResolverIntegrationSpec
* FSM.State unapply
* fix Scala 2.13 mima
* SerialVersionRemover compiler plugin
* removed unused nowarns
* Memory leak in auto-closing of sub inlets/outlets fixed (#29966)
* Avoid separate nested instances of the AsyncCallback state classes for each async callback.
* New attribute with source location information introduced and added to stages that take lambdas
* Better default toString for GraphStageLogic, including source location where possible,
  used for debugging, errors and stream snapshots
A new queue implementation that drops new elements immediately when the buffer is full. Unlike Source.queue with OverflowStrategy.dropNew, it does not use async callbacks, which could still result in OOM errors (#25798).
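
A minimal sketch of how the new queue is used, assuming a single-argument `Source.queue(bufferSize)` overload whose materialized queue offers elements synchronously:

```scala
import akka.actor.ActorSystem
import akka.stream.QueueOfferResult
import akka.stream.scaladsl.{ Sink, Source }

object BoundedQueueExample extends App {
  implicit val system: ActorSystem = ActorSystem("queue-example")

  // No OverflowStrategy parameter: dropping when full is the only behavior.
  val queue =
    Source.queue[Int](100)
      .to(Sink.foreach(println))
      .run()

  // offer is synchronous: the element is either enqueued or dropped right
  // away, so no async callback state can pile up and cause OOM.
  queue.offer(1) match {
    case QueueOfferResult.Enqueued => // accepted
    case QueueOfferResult.Dropped  => // buffer was full, element dropped
    case other                     => system.log.warning("unexpected: {}", other)
  }

  queue.complete()
}
```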
Fixes #28769
The use case for this is when you have a sequence of elements that has been
partitioned across multiple streams and you want to merge them back
together in order. It will typically be used in combination with
`zipWithIndex` to define the index for the sequence, followed by a
`Partition`, followed by the processing of the different substreams with
different flows (each flow emitting exactly one output for each input),
and then merging with this stage, using the index from `zipWithIndex`.

A more concrete use case: if you're consuming messages from a message
broker and you have a flow that you wish to apply to some messages but
not others, you can partition the message stream according to which
messages should be processed by the flow and which should bypass it, and
then bring the elements back together before acknowledging them. If an
ordinary merge were used instead, the messages that bypass the processing
flow would likely overtake the messages going through it, and the
resulting out-of-order offset acknowledgement would lead to dropping
messages on failure.
I've included a minimal version of the above example in the documentation.
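
For reference, a minimal sketch of that pattern; the `Message` type, the partitioning predicate, and the `processing` flow are hypothetical placeholders:

```scala
import akka.NotUsed
import akka.stream.FlowShape
import akka.stream.scaladsl.{ Flow, GraphDSL, MergeSequence, Partition }

object MergeSequenceExample {
  final case class Message(body: String)

  // Stand-in for the real work; must emit exactly one output per input.
  val processing: Flow[(Message, Long), (Message, Long), NotUsed] =
    Flow[(Message, Long)].map(identity)

  val inOrder: Flow[Message, Message, NotUsed] =
    Flow[Message]
      .zipWithIndex // attach a sequence number to every message
      .via(Flow.fromGraph(GraphDSL.create() { implicit b =>
        import GraphDSL.Implicits._

        // Port 0: messages that need processing; port 1: bypass.
        val partition = b.add(Partition[(Message, Long)](2,
          { case (msg, _) => if (msg.body.startsWith("process")) 0 else 1 }))
        // Merge back in order, using the index from zipWithIndex.
        val merge = b.add(MergeSequence[(Message, Long)](2)(_._2))

        partition.out(0) ~> processing ~> merge
        partition.out(1) ~> merge

        FlowShape(partition.in, merge.out)
      }))
      .map(_._1) // drop the sequence number, e.g. before acknowledging
}
```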