Merge branch 'master' into scalatest310
commit ec208cad08
34 changed files with 497 additions and 137 deletions
@@ -92,7 +92,7 @@ Java
 @@@ note
 
-Using the `CircuitBreaker` companion object's @scala[*apply*]@java[*create*] method
+Using the `CircuitBreaker`'s companion object @scala[*apply*]@java[*create*] method
 will return a `CircuitBreaker` where callbacks are executed in the caller's thread.
 This can be useful if the asynchronous `Future` behavior is unnecessary, for
 example invoking a synchronous-only API.
@@ -101,11 +101,11 @@ example invoking a synchronous-only API.
 ### Control failure count explicitly
 
-By default, the circuit breaker treat `Exception` as failure in synchronized API, or failed `Future` as failure in future based API.
-Failure will increment failure count, when failure count reach the *maxFailures*, circuit breaker will be opened.
-However, some applications might requires certain exception to not increase failure count, or vice versa,
-sometime we want to increase the failure count even if the call succeeded.
-Akka circuit breaker provides a way to achieve such use case:
+By default, the circuit breaker treats `Exception` as failure in synchronized API, or failed `Future` as failure in future based API.
+On failure, the failure count will increment. If the failure count reaches the *maxFailures*, the circuit breaker will be opened.
+However, some applications may require certain exceptions to not increase the failure count.
+In other cases one may want to increase the failure count even if the call succeeded.
+Akka circuit breaker provides a way to achieve such use cases:
 
 * `withCircuitBreaker`
 * `withSyncCircuitBreaker`
@@ -113,7 +113,7 @@ Akka circuit breaker provides a way to achieve such use case:
 * `callWithCircuitBreakerCS`
 * `callWithSyncCircuitBreaker`
 
-All methods above accepts an argument `defineFailureFn`
+All methods above accept an argument `defineFailureFn`
 
 Type of `defineFailureFn`: @scala[`Try[T] => Boolean`]@java[`BiFunction[Optional[T], Optional[Throwable], java.lang.Boolean]`]
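As an aside to the hunk above: the contract of `defineFailureFn` can be illustrated without Akka. The sketch below is a plain-Java stand-in (the class name, the `String` result type and the classification rules are hypothetical, not Akka's API); it mirrors the Java signature `BiFunction[Optional[T], Optional[Throwable], java.lang.Boolean]`, where returning `true` means "count this call as a failure".

```java
import java.util.Optional;
import java.util.function.BiFunction;

public class DefineFailureFnSketch {
  // Hypothetical classifier with the same shape as defineFailureFn:
  // (result, exception) -> true means "count as failure".
  static final BiFunction<Optional<String>, Optional<Throwable>, Boolean> defineFailureFn =
      (result, exception) -> {
        if (exception.isPresent()) {
          // example rule: do not count IllegalArgumentException towards maxFailures
          return !(exception.get() instanceof IllegalArgumentException);
        }
        // example rule: count a successful call as failure if the reply is an error body
        return result.map(body -> body.startsWith("ERROR")).orElse(false);
      };

  public static void main(String[] args) {
    System.out.println(defineFailureFn.apply(Optional.of("OK"), Optional.empty())); // false
    System.out.println(defineFailureFn.apply(Optional.of("ERROR: boom"), Optional.empty())); // true
    System.out.println(
        defineFailureFn.apply(Optional.empty(), Optional.of(new IllegalArgumentException()))); // false
  }
}
```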
@@ -128,9 +128,14 @@ Java
 ### Low level API
 
-The low-level API allows you to describe the behavior of the CircuitBreaker in detail, including deciding what to return to the calling `Actor` in case of success or failure. This is especially useful when expecting the remote call to send a reply. CircuitBreaker doesn't support `Tell Protection` (protecting against calls that expect a reply) natively at the moment, so you need to use the low-level power-user APIs, `succeed` and `fail` methods, as well as `isClose`, `isOpen`, `isHalfOpen` to implement it.
+The low-level API allows you to describe the behavior of the CircuitBreaker in detail, including deciding what to return to the calling `Actor` in case of success or failure. This is especially useful when expecting the remote call to send a reply.
+CircuitBreaker doesn't support `Tell Protection` (protecting against calls that expect a reply) natively at the moment.
+Thus you need to use the low-level power-user APIs, `succeed` and `fail` methods, as well as `isClose`, `isOpen`, `isHalfOpen` to implement it.
 
-As can be seen in the examples below, a `Tell Protection` pattern could be implemented by using the `succeed` and `fail` methods, which would count towards the `CircuitBreaker` counts. In the example, a call is made to the remote service if the `breaker.isClosed`, and once a response is received, the `succeed` method is invoked, which tells the `CircuitBreaker` to keep the breaker closed. If on the other hand an error or timeout is received, we trigger a `fail` and the breaker accrues this failure towards its count for opening the breaker.
+As can be seen in the examples below, a `Tell Protection` pattern could be implemented by using the `succeed` and `fail` methods, which would count towards the `CircuitBreaker` counts.
+In the example, a call is made to the remote service if the `breaker.isClosed`.
+Once a response is received, the `succeed` method is invoked, which tells the `CircuitBreaker` to keep the breaker closed.
+On the other hand, if an error or timeout is received we trigger a `fail`, and the breaker accrues this failure towards its count for opening the breaker.
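The `succeed`/`fail` bookkeeping described above can be sketched as a toy state machine. To be clear, this is not Akka's `CircuitBreaker` (there is no half-open state and no reset timeout here); the class and its `maxFailures` handling are a minimal, hypothetical illustration of how manually invoked `succeed` and `fail` calls drive the open/closed decision in a `Tell Protection` style flow.

```java
public class ManualBreakerSketch {
  private final int maxFailures;
  private int failureCount = 0;
  private boolean open = false;

  public ManualBreakerSketch(int maxFailures) {
    this.maxFailures = maxFailures;
  }

  public boolean isClosed() { return !open; }

  public boolean isOpen() { return open; }

  // invoked manually when a reply arrives in time
  public void succeed() { failureCount = 0; }

  // invoked manually on an error reply or a timeout
  public void fail() {
    failureCount++;
    if (failureCount >= maxFailures) open = true; // trip the breaker
  }

  public static void main(String[] args) {
    ManualBreakerSketch breaker = new ManualBreakerSketch(3);
    if (breaker.isClosed()) { /* only now would we send to the remote actor */ }
    breaker.fail();    // timeout
    breaker.fail();    // error reply
    breaker.succeed(); // good reply resets the count
    breaker.fail();
    breaker.fail();
    breaker.fail();
    System.out.println(breaker.isOpen()); // true: three consecutive failures tripped it
  }
}
```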
 
 @@@ note
@@ -48,7 +48,7 @@ Another example that uses the "thread-pool-executor":
 @@@ note
 
-The thread pool executor dispatcher is implemented using by a `java.util.concurrent.ThreadPoolExecutor`.
+The thread pool executor dispatcher is implemented using a `java.util.concurrent.ThreadPoolExecutor`.
 You can read more about it in the JDK's [ThreadPoolExecutor documentation](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html).
 
 @@@
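For readers unfamiliar with the underlying JDK class, here is a minimal stand-alone sketch of `java.util.concurrent.ThreadPoolExecutor` itself. The pool sizes and the submitted task are arbitrary example values, unrelated to any particular Akka dispatcher configuration:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThreadPoolExample {
  // Build a small fixed-size pool; the size here is an arbitrary example value.
  static ThreadPoolExecutor newPool(int size) {
    return new ThreadPoolExecutor(
        size, size,                   // core and maximum pool size
        60L, TimeUnit.SECONDS,        // keep-alive time for threads above core size
        new LinkedBlockingQueue<>()); // unbounded work queue
  }

  public static void main(String[] args) throws Exception {
    ThreadPoolExecutor pool = newPool(4);
    int result = pool.submit(() -> 21 * 2).get(); // task runs on a pool thread
    System.out.println(result); // 42
    pool.shutdown();
  }
}
```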
@@ -50,18 +50,18 @@ class MyActor extends Actor with akka.actor.ActorLogging {
 @@@
 
 The first parameter to @scala[`Logging`] @java[`Logging.getLogger`] could also be any
-`LoggingBus`, specifically @scala[`system.eventStream`] @scala[`system.eventStream()`]; in the demonstrated
-case, the actor system's address is included in the `akkaSource`
-representation of the log source (see @ref:[Logging Thread, Akka Source and Actor System in MDC](#logging-thread-akka-source-and-actor-system-in-mdc))
+`LoggingBus`, specifically @scala[`system.eventStream`] @java[`system.eventStream()`].
+In the demonstrated case, the actor system's address is included in the `akkaSource`
+representation of the log source (see @ref:[Logging Thread, Akka Source and Actor System in MDC](#logging-thread-akka-source-and-actor-system-in-mdc)),
+while in the second case this is not automatically done.
 The second parameter to @scala[`Logging`] @java[`Logging.getLogger`] is the source of this logging channel.
 The source object is translated to a String according to the following rules:
 
 * if it is an Actor or ActorRef, its path is used
 * in case of a String it is used as is
-* in case of a class an approximation of its simpleName
-* and in all other cases @scala[a compile error occurs unless an implicit
-  `LogSource[T]` is in scope for the type in question] @java[the simpleName of its class]
+* in case of a Class an approximation of its `simpleName` is used
+* in all other cases @scala[a compile error occurs unless an implicit
+  `LogSource[T]` is in scope for the type in question] @java[the `simpleName` of its class] is used
 
 The log message may contain argument placeholders `{}`, which will be
 substituted if the log level is enabled. Giving more arguments than
@@ -3,6 +3,9 @@ project.description: Migrating to Akka 2.6.
 ---
 # Migration Guide 2.5.x to 2.6.x
 
+An overview of the changes in Akka 2.6 is presented in the [What's new in Akka 2.6 video](https://akka.io/blog/news/2019/12/12/akka-26-intro)
+and the [release announcement](https://akka.io/blog/news/2019/11/06/akka-2.6.0-released).
+
 Akka 2.6.x is binary backwards compatible with 2.5.x with the ordinary exceptions listed in the
 @ref:[Binary Compatibility Rules](../common/binary-compatibility-rules.md).
@@ -16,6 +16,19 @@ End the current substream whenever a predicate returns `true`, starting a new substream for the next element.
 End the current substream whenever a predicate returns `true`, starting a new substream for the next element.
 
+## Example
+
+Given some time series data source we would like to split the stream into sub-streams for each second.
+By using `sliding` we can compare the timestamp of the current and next element to decide when to split.
+
+Scala
+: @@snip [Split.scala](/akka-docs/src/test/scala/docs/stream/operators/sourceorflow/Split.scala) { #splitAfter }
+
+Java
+: @@snip [Split.java](/akka-docs/src/test/java/jdocs/stream/operators/sourceorflow/Split.java) { #splitAfter }
+
+An alternative way of implementing this is shown in @ref:[splitWhen example](splitWhen.md#example).
+
 ## Reactive Streams semantics
 
 @@@div { .callout }
@@ -16,6 +16,20 @@ Split off elements into a new substream whenever a predicate function returns `true`.
 Split off elements into a new substream whenever a predicate function returns `true`.
 
+## Example
+
+Given some time series data source we would like to split the stream into sub-streams for each second.
+We need to compare the timestamp of the previous and current element to decide when to split. This
+decision can be implemented in a `statefulMapConcat` operator preceding the `splitWhen`.
+
+Scala
+: @@snip [Split.scala](/akka-docs/src/test/scala/docs/stream/operators/sourceorflow/Split.scala) { #splitWhen }
+
+Java
+: @@snip [Split.java](/akka-docs/src/test/java/jdocs/stream/operators/sourceorflow/Split.java) { #splitWhen }
+
+An alternative way of implementing this is shown in @ref:[splitAfter example](splitAfter.md#example).
+
 ## Reactive Streams semantics
 
 @@@div { .callout }
@@ -34,7 +34,8 @@ systems. The API of Akka’s Actors has borrowed some of its syntax from Erlang.
 ## First example
 
 If you are new to Akka you might want to start with reading the @ref:[Getting Started Guide](guide/introduction.md)
-and then come back here to learn more.
+and then come back here to learn more. We also recommend watching the short
+[introduction video to Akka actors](https://akka.io/blog/news/2019/12/03/akka-typed-actor-intro-video).
 
 It is helpful to become familiar with the foundational, external and internal
 ecosystem of your Actors, to see what you can leverage and customize as needed, see
@@ -132,7 +133,7 @@ Scala
 Java
 : @@snip [IntroTest.java](/akka-actor-typed-tests/src/test/java/jdocs/akka/typed/IntroTest.java) { #hello-world }
 
-We start an Actor system from the defined `HelloWorldMain` behavior and send two `Start` messages that
+We start an Actor system from the defined `HelloWorldMain` behavior and send two `SayHello` messages that
 will kick-off the interaction between two separate `HelloWorldBot` actors and the single `Greeter` actor.
 
 An application normally consists of a single `ActorSystem`, running many actors, per JVM.
@@ -27,6 +27,9 @@ It could for example be actors representing Aggregate Roots in Domain-Driven Design terminology.
 Here we call these actors "entities". These actors typically have persistent (durable) state,
 but this feature is not limited to actors with persistent state.
 
+The [Introduction to Akka Cluster Sharding video](https://akka.io/blog/news/2019/12/16/akka-cluster-sharding-intro-video)
+is a good starting point for learning Cluster Sharding.
+
 Cluster sharding is typically used when you have many stateful actors that together consume
 more resources (e.g. memory) than fit on one machine. If you only have a few stateful actors
 it might be easier to run them on a @ref:[Cluster Singleton](cluster-singleton.md) node.
@@ -209,8 +209,8 @@ The thread information was recorded using the YourKit profiler, however any good profiler
 has this feature (including the free and bundled with the Oracle JDK [VisualVM](https://visualvm.github.io/), as well as [Java Mission Control](https://openjdk.java.net/projects/jmc/)).
 
 The orange portion of the thread shows that it is idle. Idle threads are fine -
-they're ready to accept new work. However, large amount of turquoise (blocked, or sleeping as in our example) threads
-is very bad and leads to thread starvation.
+they're ready to accept new work. However, a large number of turquoise (blocked, or sleeping as in our example) threads
+leads to thread starvation.
 
 @@@ note
@@ -267,7 +267,7 @@ unless you @ref:[set up a separate dispatcher for the actor](../dispatchers.md#s
 
 ### Solution: Dedicated dispatcher for blocking operations
 
-One of the most efficient methods of isolating the blocking behavior such that it does not impact the rest of the system
+One of the most efficient methods of isolating the blocking behavior, such that it does not impact the rest of the system,
 is to prepare and use a dedicated dispatcher for all those blocking operations.
 This technique is often referred to as "bulk-heading" or simply "isolating blocking".
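A dedicated dispatcher of this kind is declared in configuration. The sketch below follows the shape used elsewhere in the Akka docs; the dispatcher name `my-blocking-dispatcher` and the pool size are example values:

```
my-blocking-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    # bound the number of threads available for blocking calls
    fixed-pool-size = 16
  }
  throughput = 1
}
```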
@@ -30,6 +30,9 @@ allows for very high transaction rates and efficient replication. A stateful actor
 events to the actor, allowing it to rebuild its state. This can be either the full history of changes
 or starting from a checkpoint in a snapshot which can dramatically reduce recovery times.
 
+The [Event Sourcing with Akka 2.6 video](https://akka.io/blog/news/2020/01/07/akka-event-sourcing-video)
+is a good starting point for learning Event Sourcing.
+
 @@@ note
 
 The General Data Protection Regulation (GDPR) requires that personal information must be deleted at the request of users.
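The replay-based recovery described in the hunk above can be sketched independently of Akka Persistence. Everything below is hypothetical (the `Deposited` event and the integer balance are stand-ins): starting from a snapshot value, each stored event is applied in order to rebuild the state.

```java
import java.util.List;

public class RecoverySketch {
  // Hypothetical event: a signed delta to an account balance
  record Deposited(int amount) {}

  // Replay: start from a snapshot (checkpoint) and apply every later event in order
  static int recover(int snapshotBalance, List<Deposited> eventsSinceSnapshot) {
    int state = snapshotBalance;
    for (Deposited e : eventsSinceSnapshot) {
      state += e.amount(); // each event rebuilds a bit of state
    }
    return state;
  }

  public static void main(String[] args) {
    // full history from scratch: the snapshot is the empty state
    System.out.println(recover(0, List.of(new Deposited(100), new Deposited(-30)))); // 70
    // from a checkpoint: only events after the snapshot need replaying
    System.out.println(recover(70, List.of(new Deposited(5)))); // 75
  }
}
```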
@@ -0,0 +1,108 @@
+/*
+ * Copyright (C) 2020 Lightbend Inc. <https://www.lightbend.com>
+ */
+
+package jdocs.stream.operators.sourceorflow;
+
+import akka.actor.ActorSystem;
+import akka.japi.Pair;
+import akka.japi.function.Creator;
+import akka.japi.function.Function;
+import akka.stream.javadsl.Sink;
+import akka.stream.javadsl.Source;
+
+import java.time.Duration;
+import java.time.Instant;
+import java.time.LocalDateTime;
+import java.time.ZoneOffset;
+import java.util.Collections;
+
+public class Split {
+  public static void splitWhenExample(String[] args) {
+    ActorSystem system = ActorSystem.create();
+
+    // #splitWhen
+    Source.range(1, 100)
+        .throttle(1, Duration.ofMillis(100))
+        .map(elem -> new Pair<>(elem, Instant.now()))
+        .statefulMapConcat(
+            () -> {
+              return new Function<Pair<Integer, Instant>, Iterable<Pair<Integer, Boolean>>>() {
+                // stateful decision in statefulMapConcat
+                // keep track of time bucket (one per second)
+                LocalDateTime currentTimeBucket =
+                    LocalDateTime.ofInstant(Instant.ofEpochMilli(0), ZoneOffset.UTC);
+
+                @Override
+                public Iterable<Pair<Integer, Boolean>> apply(
+                    Pair<Integer, Instant> elemTimestamp) {
+                  LocalDateTime time =
+                      LocalDateTime.ofInstant(elemTimestamp.second(), ZoneOffset.UTC);
+                  LocalDateTime bucket = time.withNano(0);
+                  boolean newBucket = !bucket.equals(currentTimeBucket);
+                  if (newBucket) currentTimeBucket = bucket;
+                  return Collections.singleton(new Pair<>(elemTimestamp.first(), newBucket));
+                }
+              };
+            })
+        .splitWhen(elemDecision -> elemDecision.second()) // split when time bucket changes
+        .map(elemDecision -> elemDecision.first())
+        .fold(0, (acc, notUsed) -> acc + 1) // count the elements in each substream
+        .to(Sink.foreach(System.out::println))
+        .run(system);
+    // 3
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 7
+    // #splitWhen
+  }
+
+  public static void splitAfterExample(String[] args) {
+    ActorSystem system = ActorSystem.create();
+
+    // #splitAfter
+    Source.range(1, 100)
+        .throttle(1, Duration.ofMillis(100))
+        .map(elem -> new Pair<>(elem, Instant.now()))
+        .sliding(2, 1)
+        .splitAfter(
+            slidingElements -> {
+              if (slidingElements.size() == 2) {
+                Pair<Integer, Instant> current = slidingElements.get(0);
+                Pair<Integer, Instant> next = slidingElements.get(1);
+                LocalDateTime currentBucket =
+                    LocalDateTime.ofInstant(current.second(), ZoneOffset.UTC).withNano(0);
+                LocalDateTime nextBucket =
+                    LocalDateTime.ofInstant(next.second(), ZoneOffset.UTC).withNano(0);
+                return !currentBucket.equals(nextBucket);
+              } else {
+                return false;
+              }
+            })
+        .map(slidingElements -> slidingElements.get(0).first())
+        .fold(0, (acc, notUsed) -> acc + 1) // count the elements in each substream
+        .to(Sink.foreach(System.out::println))
+        .run(system);
+    // 3
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 6
+    // note that the very last element is never included due to sliding,
+    // but that would not be a problem for an infinite stream
+    // #splitAfter
+  }
+}
@@ -0,0 +1,101 @@
+/*
+ * Copyright (C) 2019-2020 Lightbend Inc. <https://www.lightbend.com>
+ */
+
+package docs.stream.operators.sourceorflow
+
+import java.time.Instant
+import java.time.LocalDateTime
+import java.time.ZoneOffset
+
+import scala.concurrent.duration._
+
+import akka.stream.scaladsl.Sink
+import akka.stream.scaladsl.Source
+
+object Split {
+  def splitWhenExample(args: Array[String]): Unit = {
+    import akka.actor.ActorSystem
+
+    implicit val system: ActorSystem = ActorSystem()
+
+    //#splitWhen
+    Source(1 to 100)
+      .throttle(1, 100.millis)
+      .map(elem => (elem, Instant.now()))
+      .statefulMapConcat(() => {
+        // stateful decision in statefulMapConcat
+        // keep track of time bucket (one per second)
+        var currentTimeBucket = LocalDateTime.ofInstant(Instant.ofEpochMilli(0), ZoneOffset.UTC)
+
+        {
+          case (elem, timestamp) =>
+            val time = LocalDateTime.ofInstant(timestamp, ZoneOffset.UTC)
+            val bucket = time.withNano(0)
+            val newBucket = bucket != currentTimeBucket
+            if (newBucket)
+              currentTimeBucket = bucket
+            List((elem, newBucket))
+        }
+      })
+      .splitWhen(_._2) // split when time bucket changes
+      .map(_._1)
+      .fold(0)((acc, _) => acc + 1) // count the elements in each substream
+      .to(Sink.foreach(println))
+      .run()
+    // 3
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 7
+    //#splitWhen
+  }
+
+  def splitAfterExample(args: Array[String]): Unit = {
+    import akka.actor.ActorSystem
+
+    implicit val system: ActorSystem = ActorSystem()
+
+    //#splitAfter
+    Source(1 to 100)
+      .throttle(1, 100.millis)
+      .map(elem => (elem, Instant.now()))
+      .sliding(2)
+      .splitAfter { slidingElements =>
+        if (slidingElements.size == 2) {
+          val current = slidingElements.head
+          val next = slidingElements.tail.head
+          val currentBucket = LocalDateTime.ofInstant(current._2, ZoneOffset.UTC).withNano(0)
+          val nextBucket = LocalDateTime.ofInstant(next._2, ZoneOffset.UTC).withNano(0)
+          currentBucket != nextBucket
+        } else {
+          false
+        }
+      }
+      .map(_.head._1)
+      .fold(0)((acc, _) => acc + 1) // count the elements in each substream
+      .to(Sink.foreach(println))
+      .run()
+    // 3
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 10
+    // 6
+    // note that the very last element is never included due to sliding,
+    // but that would not be a problem for an infinite stream
+    //#splitAfter
+  }
+
+}