+str #18142 ask pattern integration for akka streams

progressed with cleanup, removing the same thread exec context is
weird... causes issues :-/ Need to debug more, could be that some race
also exists in mapAsync then :\

WIP

finish ask impl via watch stage

mima

consistency spec

fix paradox, and fix adding ask/watch to javadsl source

follow up review
Konrad Malawski 2018-01-14 00:21:00 +09:00 committed by Konrad `ktoso` Malawski
parent 5040ce82f1
commit 4714f16dcf
18 changed files with 643 additions and 47 deletions


@@ -1135,7 +1135,40 @@ Adheres to the `ActorAttributes.SupervisionStrategy` attribute.
**completes** when upstream completes and all elements have been emitted from the internal flow
**cancels** when downstream cancels
**completes** when upstream completes and all futures have been completed and all elements have been emitted
---------------------------------------------------------------
### watch
Watch a specific `ActorRef` and signal a failure downstream once the actor terminates.
The signaled failure will be a @java[@javadoc:[WatchedActorTerminatedException](akka.stream.WatchedActorTerminatedException)]
@scala[@scaladoc[WatchedActorTerminatedException](akka.stream.WatchedActorTerminatedException)].
**emits** when upstream emits
**backpressures** when downstream backpressures
**completes** when upstream completes
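As an illustration, a minimal sketch of using `watch` (the `WorkerActor` class and the marker value are assumptions made up for this example, not part of the API):

```scala
import akka.actor.{ ActorRef, ActorSystem, Props }
import akka.stream.{ ActorMaterializer, WatchedActorTerminatedException }
import akka.stream.scaladsl.{ Sink, Source }

implicit val system = ActorSystem("watch-example")
implicit val materializer = ActorMaterializer()

// `WorkerActor` is a hypothetical actor class, used only for illustration
val worker: ActorRef = system.actorOf(Props[WorkerActor])

Source(1 to 100)
  .watch(worker) // the stream fails if `worker` terminates
  .recover {
    // replace the termination failure with a final marker element
    case _: WatchedActorTerminatedException => -1
  }
  .runWith(Sink.ignore)
```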
---------------------------------------------------------------
### ask
Specialized stage implementing the @scala[@extref[ask](github:akka-actor/src/main/scala/akka/pattern/AskSupport.scala)]@java[@extref[ask](github:akka-actor/src/main/scala/akka/pattern/Patterns.scala)] pattern for inter-op with untyped actors.
The stream will be failed with a @java[@javadoc:[WatchedActorTerminatedException](akka.stream.WatchedActorTerminatedException)]
@scala[@scaladoc[WatchedActorTerminatedException](akka.stream.WatchedActorTerminatedException)] if the target actor terminates,
or with an @java[@javadoc:[AskTimeoutException](akka.pattern.AskTimeoutException)] @scala[@scaladoc[AskTimeoutException](akka.pattern.AskTimeoutException)] if any of the asks times out.
**emits** when the futures created internally by the ask pattern (in submission order) are completed
**backpressures** when the number of futures reaches the configured parallelism and the downstream backpressures
**fails** when the passed in actor terminates, or a timeout is exceeded in any of the asks performed
**completes** when upstream completes and all futures have been completed and all elements have been emitted
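A minimal usage sketch of the `ask` operator (the `translator` actor ref and the timeout value are illustrative assumptions):

```scala
import akka.actor.ActorRef
import akka.stream.scaladsl.{ Sink, Source }
import akka.util.Timeout
import scala.concurrent.duration._

// `translator` is a hypothetical ActorRef replying with a String per element
implicit val askTimeout: Timeout = 3.seconds

Source(List("hello", "hi"))
  .ask[String](parallelism = 4)(translator) // at most 4 outstanding asks
  .runWith(Sink.seq)
```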
---------------------------------------------------------------
@@ -1215,6 +1248,28 @@ If a @scala[`Future`] @java[`CompletionStage`] fails, the stream also fails (unl
**completes** when upstream completes and all @scala[`Future` s] @java[`CompletionStage` s] have been completed and all elements have been emitted
---------------------------------------------------------------
### ask
Use the `ask` pattern to send a request-reply message to the target `ref` actor.
If any of the asks times out it will fail the stream with a [[akka.pattern.AskTimeoutException]].
The `mapTo` class parameter is used to cast the incoming responses to the expected response type.
Similar to the plain ask pattern, the target actor is allowed to reply with `akka.actor.Status`.
An `akka.actor.Status#Failure` will cause the stage to fail with the cause carried in the `Failure` message.
Adheres to the [[ActorAttributes.SupervisionStrategy]] attribute.
**emits** when the ask @scala[`Future`] @java[`CompletionStage`] returned by the provided function finishes for the next element in sequence
**backpressures** when the number of ask @scala[`Future` s] @java[`CompletionStage` s] reaches the configured parallelism and the downstream backpressures
**completes** when upstream completes and all ask @scala[`Future` s] @java[`CompletionStage` s] have been completed and all elements have been emitted
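For illustration, a sketch of an actor suitable as an `ask` target; replying with a `Status.Failure` fails the stage with the carried cause. The `Translator` name mirrors the snippets in this commit, but the empty-word check is an invented detail:

```scala
import akka.actor.{ Actor, Status }

class Translator extends Actor {
  def receive = {
    case word: String =>
      if (word.isEmpty)
        // this reply fails the asking stage with the given cause
        sender() ! Status.Failure(new IllegalArgumentException("empty word"))
      else
        sender() ! word.toUpperCase // completes the ask future normally
  }
}
```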
---------------------------------------------------------------
<br/>


@@ -8,18 +8,20 @@ For piping the elements of a stream as messages to an ordinary actor you can use
Messages can be sent to a stream with `Source.queue` or via the `ActorRef` that is
materialized by `Source.actorRef`.
### mapAsync + ask
### ask
A nice way to delegate some processing of elements in a stream to an actor is to
use `ask` in `mapAsync`. The back-pressure of the stream is maintained by
the @scala[`Future`]@java[`CompletionStage`] of the `ask` and the mailbox of the actor will not be filled with
more messages than the given `parallelism` of the `mapAsync` stage.
### ask
A nice way to delegate some processing of elements in a stream to an actor is to use `ask`.
The back-pressure of the stream is maintained by the @scala[`Future`]@java[`CompletionStage`] of
the `ask` and the mailbox of the actor will not be filled with more messages than the given
`parallelism` of the `ask` stage (similarly to how the `mapAsync` stage works).
Scala
: @@snip [IntegrationDocSpec.scala]($code$/scala/docs/stream/IntegrationDocSpec.scala) { #mapAsync-ask }
: @@snip [IntegrationDocSpec.scala]($code$/scala/docs/stream/IntegrationDocSpec.scala) { #ask }
Java
: @@snip [IntegrationDocTest.java]($code$/java/jdocs/stream/IntegrationDocTest.java) { #mapAsync-ask }
: @@snip [IntegrationDocTest.java]($code$/java/jdocs/stream/IntegrationDocTest.java) { #ask }
Note that the messages received in the actor will be in the same order as
the stream elements, i.e. the `parallelism` does not change the ordering
@@ -29,8 +31,9 @@ is already a message in the mailbox when the actor has completed previous
message.
The actor must reply to the @scala[`sender()`]@java[`getSender()`] for each message from the stream. That
reply will complete the @scala[`Future`]@java[`CompletionStage`] of the `ask` and it will be the element that
is emitted downstreams from `mapAsync`.
reply will complete the @scala[`Future`]@java[`CompletionStage`] of the `ask` and it will be the element that is emitted downstream.
If the target actor is stopped, the stage will fail with a `WatchedActorTerminatedException`.
Scala
: @@snip [IntegrationDocSpec.scala]($code$/scala/docs/stream/IntegrationDocSpec.scala) { #ask-actor }
@@ -38,20 +41,21 @@ Scala
Java
: @@snip [IntegrationDocTest.java]($code$/java/jdocs/stream/IntegrationDocTest.java) { #ask-actor }
The stream can be completed with failure by sending `akka.actor.Status.Failure`
as reply from the actor.
The stream can be completed with failure by sending `akka.actor.Status.Failure` as reply from the actor.
If the `ask` fails due to timeout the stream will be completed with a
`TimeoutException` failure. If that is not the desired outcome you can use `recover`
on the `ask` @scala[`Future`]@java[`CompletionStage`].
on the `ask` @scala[`Future`]@java[`CompletionStage`], or use one of the "restart" stages to restart it.
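A sketch of recovering from an ask timeout in the manual `mapAsync` + `ask` style (the names `words` and `ref`, and the fallback value, are assumptions standing in for the surrounding snippets):

```scala
import akka.pattern.{ ask, AskTimeoutException }
import akka.stream.scaladsl.Sink
import akka.util.Timeout
import scala.concurrent.duration._

implicit val askTimeout: Timeout = 1.second
import system.dispatcher // ExecutionContext needed by `recover`

words
  .mapAsync(parallelism = 4) { elem =>
    (ref ? elem).mapTo[String].recover {
      // substitute a fallback element instead of failing the stream
      case _: AskTimeoutException => s"fallback-$elem"
    }
  }
  .runWith(Sink.ignore)
```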
If you don't care about the reply values and only use them as back-pressure signals you
can use `Sink.ignore` after the `mapAsync` stage and then actor is effectively a sink
can use `Sink.ignore` after the `ask` stage and then actor is effectively a sink
of the stream.
The same pattern can be used with @ref:[Actor routers](../routing.md). Then you
can use `mapAsyncUnordered` for better efficiency if you don't care about the
order of the emitted downstream elements (the replies).
Note that while you could implement the same concept using `mapAsync`, that style would not be aware of the actor terminating.
If you intend to ask multiple actors by using @ref:[Actor routers](../routing.md), you should use
`mapAsyncUnordered` and perform the ask manually there, since the ordering of the replies is not important:
multiple actors are asked concurrently to begin with, and no single actor is the one to be watched by the stage.
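A sketch of this router variant, assuming a round-robin pool of the `Translator` workers shown in the snippets (pool size and the `words`/`system` names are illustrative):

```scala
import akka.actor.{ ActorRef, Props }
import akka.pattern.ask
import akka.routing.RoundRobinPool
import akka.stream.scaladsl.Sink
import akka.util.Timeout
import scala.concurrent.duration._

implicit val askTimeout: Timeout = 5.seconds

// a pool of hypothetical Translator workers behind a single router ref
val workers: ActorRef = system.actorOf(RoundRobinPool(5).props(Props[Translator]))

words
  .mapAsyncUnordered(parallelism = 5)(elem => (workers ? elem).mapTo[String])
  .runWith(Sink.ignore)
```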
### Sink.actorRefWithAck


@@ -257,7 +257,7 @@ public class IntegrationDocTest extends AbstractJavaTest {
public DatabaseService(ActorRef probe) {
this.probe = probe;
}
@Override
public Receive createReceive() {
return receiveBuilder()
@@ -272,11 +272,11 @@ public class IntegrationDocTest extends AbstractJavaTest {
//#sometimes-slow-service
static class SometimesSlowService {
private final Executor ec;
public SometimesSlowService(Executor ec) {
this.ec = ec;
}
private final AtomicInteger runningCount = new AtomicInteger();
public CompletionStage<String> convert(String s) {
@@ -292,7 +292,7 @@ public class IntegrationDocTest extends AbstractJavaTest {
}
}
//#sometimes-slow-service
//#ask-actor
static class Translator extends AbstractActor {
@Override
@@ -308,22 +308,21 @@ public class IntegrationDocTest extends AbstractJavaTest {
}
}
//#ask-actor
@SuppressWarnings("unchecked")
@Test
public void mapAsyncPlusAsk() throws Exception {
//#mapAsync-ask
public void askStage() throws Exception {
//#ask
Source<String, NotUsed> words =
Source.from(Arrays.asList("hello", "hi"));
Timeout askTimeout = Timeout.apply(5, TimeUnit.SECONDS);
words
.mapAsync(5, elem -> ask(ref, elem, askTimeout))
.map(elem -> (String) elem)
.ask(5, ref, String.class, askTimeout)
// continue processing of the replies from the actor
.map(elem -> elem.toLowerCase())
.runWith(Sink.ignore(), mat);
//#mapAsync-ask
//#ask
}


@@ -140,19 +140,18 @@ class IntegrationDocSpec extends AkkaSpec(IntegrationDocSpec.config) {
implicit val materializer = ActorMaterializer()
val ref: ActorRef = system.actorOf(Props[Translator])
"mapAsync + ask" in {
//#mapAsync-ask
import akka.pattern.ask
"ask" in {
//#ask
implicit val askTimeout = Timeout(5.seconds)
val words: Source[String, NotUsed] =
Source(List("hello", "hi"))
words
.mapAsync(parallelism = 5)(elem ⇒ (ref ? elem).mapTo[String])
.ask[String](parallelism = 5)(ref)
// continue processing of the replies from the actor
.map(_.toLowerCase)
.runWith(Sink.ignore)
//#mapAsync-ask
//#ask
}
"calling external service with mapAsync" in {