Add throttle combinator without bucket size in parameters (#23808)

* +str Add throttle combinator without bucket size in parameters

* Add mima filter plus comment formatting

* Reviewed throttle messages across all classes

* move mima filter to 2.4.6

* change to throttleEven, plus grammar fixes

* fix formatting

* changed mima filter according to new API

* fixes according to feedback
Alexander Golubev 2017-11-17 06:20:12 -05:00 committed by Patrik Nordwall
parent b51d720b18
commit a7b1a6675b
8 changed files with 282 additions and 37 deletions


@@ -1337,13 +1337,25 @@ class SubSource[+Out, +Mat](delegate: scaladsl.SubFlow[Out, Mat, scaladsl.Source
* Throttle implements the token bucket model. There is a bucket with a given token capacity (burst size or maximumBurst).
* Tokens drop into the bucket at a given rate and can be `spared` for later use up to the bucket capacity
* to allow some burstiness. Whenever the stream wants to send an element, it takes as many
* tokens from the bucket as the number of elements. If there are none, throttle waits until the
* bucket accumulates enough tokens. The bucket is full when the stream has just been materialized and started.
* tokens from the bucket as the element costs. If there are not enough tokens, throttle waits until the
* bucket accumulates enough tokens. Elements that cost more than the allowed burst will be delayed proportionally
* to their cost minus the available tokens, meeting the target rate. The bucket is full when the stream has just been materialized and started.
*
* Parameter `mode` manages behaviour when upstream is faster than throttle rate:
* - [[akka.stream.ThrottleMode.Shaping]] makes pauses before emitting messages to meet throttle rate
* - [[akka.stream.ThrottleMode.Enforcing]] fails with exception when upstream is faster than throttle rate
*
* It is recommended to use non-zero burst sizes as they improve both performance and throttling precision by allowing
* the implementation to avoid using the scheduler when input rates fall below the enforced limit and to reduce
* most of the inaccuracy caused by the scheduler resolution (which is in the range of milliseconds).
*
* WARNING: Be aware that throttle uses a scheduler to slow down the stream. The scheduler has a minimum
* resolution for triggering the next push, so the throttler cannot pause for less than that between emitted
* elements. This matters when the burst is 0 and the rate is higher than roughly 30 events per second; consider
* another solution if you expect events to be spread evenly at small intervals (30 milliseconds or less).
* In other words, the throttler always enforces the rate limit, but in certain cases (mostly due to limited
* scheduler resolution) it enforces a tighter bound than the one prescribed. This can also be mitigated by increasing the burst size.
*
* '''Emits when''' upstream emits an element and the configured time per element has elapsed
*
* '''Backpressures when''' downstream backpressures or the incoming rate is higher than the speed limit
@@ -1351,6 +1363,8 @@ class SubSource[+Out, +Mat](delegate: scaladsl.SubFlow[Out, Mat, scaladsl.Source
* '''Completes when''' upstream completes
*
* '''Cancels when''' downstream cancels
*
* @see [[#throttleEven]]
*/
def throttle(elements: Int, per: FiniteDuration, maximumBurst: Int,
mode: ThrottleMode): javadsl.SubSource[Out, Mat] =
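
// Illustrative usage sketch (not part of this diff): the scaladsl counterpart of the
// element-based throttle above. The ActorSystem name and object name are arbitrary.
// Limits a source to 10 elements per second, allowing bursts of up to 20 stored tokens.
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.stream.{ ActorMaterializer, ThrottleMode }
import akka.stream.scaladsl.{ Sink, Source }

object ThrottleShapingExample extends App {
  implicit val system = ActorSystem("throttle-example")
  implicit val materializer = ActorMaterializer()

  Source(1 to 100)
    .throttle(10, 1.second, 20, ThrottleMode.Shaping) // Shaping pauses instead of failing
    .runWith(Sink.foreach(println))
}
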
@@ -1365,15 +1379,26 @@ class SubSource[+Out, +Mat](delegate: scaladsl.SubFlow[Out, Mat, scaladsl.Source
* Throttle implements the token bucket model. There is a bucket with a given token capacity (burst size or maximumBurst).
* Tokens drop into the bucket at a given rate and can be `spared` for later use up to the bucket capacity
* to allow some burstiness. Whenever the stream wants to send an element, it takes as many
* tokens from the bucket as the element cost. If there are not enough tokens, throttle waits until the
* tokens from the bucket as the element costs. If there are not enough tokens, throttle waits until the
* bucket accumulates enough tokens. Elements that cost more than the allowed burst will be delayed proportionally
* to their cost minus the available tokens, meeting the target rate.
* to their cost minus the available tokens, meeting the target rate. The bucket is full when the stream has just been materialized and started.
*
* Parameter `mode` manages behaviour when upstream is faster than throttle rate:
* - [[akka.stream.ThrottleMode.Shaping]] makes pauses before emitting messages to meet throttle rate
* - [[akka.stream.ThrottleMode.Enforcing]] fails with exception when upstream is faster than throttle rate. Enforcing
* cannot emit elements that cost more than the maximumBurst
*
* It is recommended to use non-zero burst sizes as they improve both performance and throttling precision by allowing
* the implementation to avoid using the scheduler when input rates fall below the enforced limit and to reduce
* most of the inaccuracy caused by the scheduler resolution (which is in the range of milliseconds).
*
* WARNING: Be aware that throttle uses a scheduler to slow down the stream. The scheduler has a minimum
* resolution for triggering the next push, so the throttler cannot pause for less than that between emitted
* elements. This matters when the burst is 0 and the rate is higher than roughly 30 events per second; consider
* another solution if you expect events to be spread evenly at small intervals (30 milliseconds or less).
* In other words, the throttler always enforces the rate limit, but in certain cases (mostly due to limited
* scheduler resolution) it enforces a tighter bound than the one prescribed. This can also be mitigated by increasing the burst size.
*
* '''Emits when''' upstream emits an element and the configured time per element has elapsed
*
* '''Backpressures when''' downstream backpressures or the incoming rate is higher than the speed limit
@@ -1381,11 +1406,40 @@ class SubSource[+Out, +Mat](delegate: scaladsl.SubFlow[Out, Mat, scaladsl.Source
* '''Completes when''' upstream completes
*
* '''Cancels when''' downstream cancels
*
* @see [[#throttleEven]]
*/
def throttle(cost: Int, per: FiniteDuration, maximumBurst: Int,
costCalculation: function.Function[Out, Integer], mode: ThrottleMode): javadsl.SubSource[Out, Mat] =
new SubSource(delegate.throttle(cost, per, maximumBurst, costCalculation.apply _, mode))
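
// Illustrative usage sketch (not part of this diff): cost-based throttling via the
// scaladsl API. Here each element's cost is its length in characters, so throughput
// is capped at roughly 1000 characters per second with room for a 2000-character burst.
import scala.concurrent.duration._
import akka.stream.ThrottleMode
import akka.stream.scaladsl.Source

object CostBasedThrottleExample {
  val lines: Source[String, akka.NotUsed] =
    Source(List("short", "a considerably longer line of text", "tiny"))
      .throttle(1000, 1.second, 2000, (line: String) => line.length, ThrottleMode.Shaping)
}
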
/**
* This is a simplified version of throttle that spreads events evenly across the given time interval.
*
* Use this combinator when you just need to slow down a stream without worrying about the exact amount
* of time between events.
*
* If you want to be sure that no time interval contains more than the specified number of events, you need to use
* [[throttle()]] with the maximumBurst attribute.
*
* @see [[#throttle]]
*/
def throttleEven(elements: Int, per: FiniteDuration, mode: ThrottleMode): javadsl.SubSource[Out, Mat] =
new SubSource(delegate.throttle(elements, per, Int.MaxValue, mode))
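
// Illustrative usage sketch (not part of this diff): assuming the scaladsl side of this
// commit exposes the same throttleEven, this spaces elements evenly at about 5 per second
// without having to choose a maximumBurst value.
import scala.concurrent.duration._
import akka.stream.ThrottleMode
import akka.stream.scaladsl.Source

object ThrottleEvenExample {
  val evenlyPaced: Source[Int, akka.NotUsed] =
    Source(1 to 50)
      .throttleEven(5, 1.second, ThrottleMode.Shaping)
}
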
/**
* This is a simplified version of throttle that spreads events evenly across the given time interval.
*
* Use this combinator when you just need to slow down a stream without worrying about the exact amount
* of time between events.
*
* If you want to be sure that no time interval contains more than the specified number of events, you need to use
* [[throttle()]] with the maximumBurst attribute.
*
* @see [[#throttle]]
*/
def throttleEven(cost: Int, per: FiniteDuration,
costCalculation: (Out) ⇒ Int, mode: ThrottleMode): javadsl.SubSource[Out, Mat] =
new SubSource(delegate.throttle(cost, per, Int.MaxValue, costCalculation.apply _, mode))
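
// Illustrative usage sketch (not part of this diff): the cost-based throttleEven, again
// assuming a matching scaladsl method. Rate-limits by payload size to roughly 1024 bytes
// per second with no burst parameter to tune.
import scala.concurrent.duration._
import akka.stream.ThrottleMode
import akka.stream.scaladsl.Source
import akka.util.ByteString

object ThrottleEvenByCostExample {
  val limited: Source[ByteString, akka.NotUsed] =
    Source(List(ByteString("chunk-1"), ByteString("a much larger chunk of payload data")))
      .throttleEven(1024, 1.second, (bytes: ByteString) => bytes.length, ThrottleMode.Shaping)
}
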
/**
* Detaches upstream demand from downstream demand without detaching the
* stream rates; in other words acts like a buffer of size 1.