From 38539372b5ec4f2f9a3f4d7e375da9d4c8e12feb Mon Sep 17 00:00:00 2001
From: Roland Kuhn
Date: Sat, 20 Dec 2014 16:56:22 +0100
Subject: [PATCH] some more doc fixes

---
 .../code/docs/stream/ActorPublisherDocSpec.scala |  4 ++++
 .../rst/scala/http/custom-directives.rst         |  6 ++++++
 akka-docs-dev/rst/scala/stream-integrations.rst  | 16 +++++++++++-----
 akka-docs-dev/rst/scala/stream-introduction.rst  |  4 +++-
 akka-docs-dev/rst/scala/stream-quickstart.rst    | 10 ++++++----
 akka-docs-dev/rst/stream-design.rst              |  6 ++++--
 6 files changed, 34 insertions(+), 12 deletions(-)
 create mode 100644 akka-docs-dev/rst/scala/http/custom-directives.rst

diff --git a/akka-docs-dev/rst/scala/code/docs/stream/ActorPublisherDocSpec.scala b/akka-docs-dev/rst/scala/code/docs/stream/ActorPublisherDocSpec.scala
index ac7b73e5d6..b1d49778c9 100644
--- a/akka-docs-dev/rst/scala/code/docs/stream/ActorPublisherDocSpec.scala
+++ b/akka-docs-dev/rst/scala/code/docs/stream/ActorPublisherDocSpec.scala
@@ -48,6 +48,10 @@ object ActorPublisherDocSpec {
     @tailrec final def deliverBuf(): Unit =
       if (totalDemand > 0) {
+        /*
+         * totalDemand is a Long and could be larger than
+         * what buf.splitAt can accept
+         */
         if (totalDemand <= Int.MaxValue) {
           val (use, keep) = buf.splitAt(totalDemand.toInt)
           buf = keep
diff --git a/akka-docs-dev/rst/scala/http/custom-directives.rst b/akka-docs-dev/rst/scala/http/custom-directives.rst
new file mode 100644
index 0000000000..4d8cafc5f4
--- /dev/null
+++ b/akka-docs-dev/rst/scala/http/custom-directives.rst
@@ -0,0 +1,6 @@
+.. _Custom Directives:
+
+Custom Directives
+=================
+
+*TODO*
diff --git a/akka-docs-dev/rst/scala/stream-integrations.rst b/akka-docs-dev/rst/scala/stream-integrations.rst
index da60f72d76..c5c75208f3 100644
--- a/akka-docs-dev/rst/scala/stream-integrations.rst
+++ b/akka-docs-dev/rst/scala/stream-integrations.rst
@@ -61,8 +61,8 @@ This is how it can be used as input :class:`Source` to a :class:`Flow`:
 
 .. includecode:: code/docs/stream/ActorPublisherDocSpec.scala#actor-publisher-usage
 
-You can only attach one subscriber to this publisher. Use ``Sink.fanoutPublisher`` to enable
-multiple subscribers.
+You can only attach one subscriber to this publisher. Use a ``Broadcast``
+element or attach a ``Sink.fanoutPublisher`` to enable multiple subscribers.
 
 ActorSubscriber
 ^^^^^^^^^^^^^^^
@@ -132,6 +132,12 @@ That means that back-pressure works as expected. For example if the ``emailServe
 is the bottleneck it will limit the rate at which incoming tweets are retrieved and email
 addresses looked up.
 
+The final piece of this pipeline is to generate the demand that pulls the tweet
+authors' information through the emailing pipeline: we attach a ``Sink.ignore``
+which makes it all run. If our email process returned some interesting data for
+further transformation we would of course not ignore it but send that result
+stream onwards for further processing or storage.
+
 Note that ``mapAsync`` preserves the order of the stream elements. In this example the order
 is not important and then we can use the more efficient ``mapAsyncUnordered``:
 
@@ -176,7 +182,7 @@ For example, if 5 elements have been requested by downstream there will be at mo
 futures in progress.
 
 ``mapAsync`` emits the future results in the same order as the input elements
-were received. That means that completed results are only emitted downstreams
+were received. That means that completed results are only emitted downstream
 when earlier results have been completed and emitted. One slow call will thereby delay
 the results of all successive calls, even though they are completed before the slow call.
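The ordering guarantee is easiest to see in a small self-contained sketch. The following is purely illustrative (not part of the patch itself); it assumes the single-argument ``mapAsync(f)`` shape described in these docs, a ``FlowMaterializer`` set up as in the quickstart, and a made-up ``lookup`` service call::

   import scala.concurrent.Future
   import akka.actor.ActorSystem
   import akka.stream.FlowMaterializer
   import akka.stream.scaladsl.{ Sink, Source }

   object MapAsyncOrderingSketch extends App {
     implicit val system = ActorSystem("ordering-demo")
     implicit val materializer = FlowMaterializer() // assumed setup, see the quickstart
     import system.dispatcher

     // hypothetical service call: even numbers are slow, odd numbers are fast
     def lookup(i: Int): Future[Int] = Future {
       Thread.sleep(if (i % 2 == 0) 100 else 10)
       i
     }

     // mapAsync emits 1, 2, 3, 4, 5: the slow future for 2 can hold back the
     // already completed result for 3
     Source(1 to 5).mapAsync(lookup)
       .runWith(Sink.foreach[Int](i => println(s"ordered: $i")))

     // mapAsyncUnordered emits results as soon as their futures complete,
     // so the fast odd numbers may overtake the slow even ones
     Source(1 to 5).mapAsyncUnordered(lookup)
       .runWith(Sink.foreach[Int](i => println(s"unordered: $i")))

     // shutdown of the ActorSystem is omitted for brevity
   }

With ``mapAsync`` the slow result holds back already completed later results; with ``mapAsyncUnordered`` results are emitted in completion order.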
@@ -353,7 +359,7 @@ A :class:`Flow` can also be materialized to a :class:`Subscriber`, :class:`Publi
 A publisher can be connected to a subscriber with the ``subscribe`` method.
 
 It is also possible to expose a :class:`Source` as a :class:`Publisher`
-by using the ``publisher`` :class:`Sink`:
+by using the Publisher-:class:`Sink`:
 
 .. includecode:: code/docs/stream/ReactiveStreamsDocSpec.scala#source-publisher
 
@@ -372,7 +378,7 @@ The buffer size controls how far apart the slowest subscriber can be from the fa
 before slowing down the stream.
 
 To make the picture complete, it is also possible to expose a :class:`Sink` as a :class:`Subscriber`
-by using the ``subscriber`` :class:`Source`:
+by using the Subscriber-:class:`Source`:
 
 .. includecode:: code/docs/stream/ReactiveStreamsDocSpec.scala#sink-subscriber
diff --git a/akka-docs-dev/rst/scala/stream-introduction.rst b/akka-docs-dev/rst/scala/stream-introduction.rst
index 3ca729d719..c27e636809 100644
--- a/akka-docs-dev/rst/scala/stream-introduction.rst
+++ b/akka-docs-dev/rst/scala/stream-introduction.rst
@@ -35,7 +35,7 @@ efficiently and with bounded resource usage—no more OutOfMemoryErrors. In orde
 to achieve this our streams need to be able to limit the buffering that they
 employ, they need to be able to slow down producers if the consumers cannot
 keep up. This feature is called back-pressure and is at the core of the
-[Reactive Streams](http://reactive-streams.org/) initiative of which Akka is a
+`Reactive Streams`_ initiative of which Akka is a
 founding member. For you this means that the hard problem of propagating and
 reacting to back-pressure has been incorporated in the design of Akka Streams
 already, so you have one less thing to worry about; it also means that Akka
@@ -43,6 +43,8 @@ Streams interoperate seamlessly with all other Reactive Streams implementations
 (where Reactive Streams interfaces define the interoperability SPI while
 implementations like Akka Streams offer a nice user API).
 
+.. _Reactive Streams: http://reactive-streams.org/
+
 How to read these docs
 ======================
 
diff --git a/akka-docs-dev/rst/scala/stream-quickstart.rst b/akka-docs-dev/rst/scala/stream-quickstart.rst
index 2908b299a8..df9ced101c 100644
--- a/akka-docs-dev/rst/scala/stream-quickstart.rst
+++ b/akka-docs-dev/rst/scala/stream-quickstart.rst
@@ -6,10 +6,12 @@ Quick Start: Reactive Tweets
 A typical use case for stream processing is consuming a live stream of data that we want to extract or aggregate
 some other data from. In this example we'll consider consuming a stream of tweets and extracting information
 concerning Akka from them.
-We will also consider the problem inherent to all non-blocking streaming solutions – "*What if the subscriber is slower
-to consume the live stream of data?*" i.e. it is unable to keep up with processing the live data. Traditionally the solution
-is often to buffer the elements, but this can (and usually *will*) cause eventual buffer overflows and instability of such systems.
-Instead Akka Streams depend on internal backpressure signals that allow to control what should happen in such scenarios.
+We will also consider the problem inherent to all non-blocking streaming
+solutions: *"What if the subscriber is too slow to consume the live stream of
+data?"*. Traditionally the solution is often to buffer the elements, but this
+can—and usually will—cause eventual buffer overflows and instability of such
+systems. Instead Akka Streams depend on internal backpressure signals that
+allow control of what should happen in such scenarios.
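One way to make the back-pressure behaviour tangible is a fast producer paired with a deliberately slow subscriber. The sketch below is illustrative only, not part of the patch; it assumes the ``ActorSystem`` and ``FlowMaterializer`` setup used throughout the quickstart, and the ``Thread.sleep`` merely stands in for any slow consumer::

   import akka.actor.ActorSystem
   import akka.stream.FlowMaterializer
   import akka.stream.scaladsl.{ Sink, Source }

   object SlowSubscriberSketch extends App {
     implicit val system = ActorSystem("backpressure-demo")
     implicit val materializer = FlowMaterializer() // assumed setup, as in the quickstart

     // A fast source of one million elements and a slow sink: because demand is
     // signaled upstream, the source emits only as fast as the sink consumes,
     // apart from a small internal buffer, so memory usage stays bounded.
     Source(1 to 1000000)
       .map { i => println(s"produced $i"); i }
       .runWith(Sink.foreach[Int] { i =>
         Thread.sleep(5) // simulates a slow subscriber; blocking here is only for the demo
         println(s"consumed $i")
       })
   }

Interleaving the two ``println`` outputs shows that only a small window of elements is ever produced ahead of what has already been consumed.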
 
 Here's the data model we'll be working with throughout the quickstart examples:
 
diff --git a/akka-docs-dev/rst/stream-design.rst b/akka-docs-dev/rst/stream-design.rst
index 29beda444e..6226a006c8 100644
--- a/akka-docs-dev/rst/stream-design.rst
+++ b/akka-docs-dev/rst/stream-design.rst
@@ -30,6 +30,8 @@ Another aspect of materialization is that we want to support distributed stream
 Interoperation with other Reactive Streams implementations
 ----------------------------------------------------------
 
+Akka Streams fully implement the Reactive Streams specification and interoperate with all other conformant implementations. We chose to completely separate the Reactive Streams interfaces (which we regard as an SPI) from the user-level API. In order to obtain a :class:`Publisher` or :class:`Subscriber` from an Akka Stream topology, a corresponding :class:`PublisherSink` or :class:`SubscriberSource` must be used.
+
 All stream Processors produced by the default materialization of Akka Streams are restricted to having a single Subscriber, additional Subscribers will be rejected. The reason for this is that the stream topologies described using our DSL never require fan-out behavior from the Publisher sides of the elements, all fan-out is done using explicit elements like ``Broadcast[T]``.
 
 This means that ``Sink.fanoutPublisher`` must be used where multicast behavior is needed for interoperation with other Reactive Streams implementations.
@@ -69,11 +71,11 @@ Other topologies can always be expressed as a combination of a PartialFlowGraph
 The difference between Error and Failure
 ----------------------------------------
 
-The starting point for this discussion is the [definition given by the Reactive Manifesto](http://www.reactivemanifesto.org/glossary#Failure). Translated to streams this means that an error is accessible within the stream as a normal data element, while a failure means that the stream itself has failed and is collapsing. In concrete terms, on the Reactive Streams interface level data elements (including errors) are signaled via ``onNext`` while failures raise the ``onError`` signal.
+The starting point for this discussion is the `definition given by the Reactive Manifesto <http://www.reactivemanifesto.org/glossary#Failure>`_. Translated to streams this means that an error is accessible within the stream as a normal data element, while a failure means that the stream itself has failed and is collapsing. In concrete terms, on the Reactive Streams interface level data elements (including errors) are signaled via ``onNext`` while failures raise the ``onError`` signal.
 
 .. note::
 
-  Unfortunately the method name for signaling _failure_ to a Subscriber is called ``onError`` for historical reasons. Always keep in mind that the Reactive Streams interfaces (Publisher/Subscription/Subscriber) are modeling the low-level infrastructure for passing streams between execution units, and errors on this level are precisely the failures that we are talking about on the higher level that is modeled by Akka Streams.
+  Unfortunately the method name for signaling *failure* to a Subscriber is called ``onError`` for historical reasons.
Always keep in mind that the Reactive Streams interfaces (Publisher/Subscription/Subscriber) are modeling the low-level infrastructure for passing streams between execution units, and errors on this level are precisely the failures that we are talking about on the higher level that is modeled by Akka Streams. There is only limited support for treating ``onError`` in Akka Streams compared to the operators that are available for the transformation of data elements, which is intentional in the spirit of the previous paragraph. Since ``onError`` signals that the stream is collapsing, its ordering semantics are not the same as for stream completion: transformation stages of any kind will just collapse with the stream, possibly still holding elements in implicit or explicit buffers. This means that data elements emitted before a failure can still be lost if the ``onError`` overtakes them.
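The distinction can be sketched by taking a transformation that may throw and either letting the exception escape (a failure that collapses the stream via ``onError``) or capturing it in a ``Try`` so that it travels downstream as a normal data element via ``onNext``. The example below is illustrative only and not part of the patch; it assumes the quickstart-style materializer setup, and ``parse`` and the sample input are made up::

   import scala.util.{ Failure, Success, Try }
   import akka.actor.ActorSystem
   import akka.stream.FlowMaterializer
   import akka.stream.scaladsl.{ Sink, Source }

   object ErrorVersusFailureSketch extends App {
     implicit val system = ActorSystem("error-vs-failure")
     implicit val materializer = FlowMaterializer() // assumed setup, as in the quickstart

     // hypothetical transformation that throws NumberFormatException on bad input
     def parse(s: String): Int = s.toInt

     val input = List("1", "2", "oops", "4")

     // Failure: the exception escapes the map stage, the stream collapses with
     // onError and "4" is never processed.
     Source(input).map(parse)
       .runWith(Sink.foreach[Int](n => println(s"parsed $n")))

     // Error as data: the same exception is captured in a Try and travels
     // downstream as a normal element, so the stream completes normally.
     Source(input).map(s => Try(parse(s)))
       .runWith(Sink.foreach[Try[Int]] {
         case Success(n)  => println(s"parsed $n")
         case Failure(ex) => println(s"could not parse: ${ex.getMessage}")
       })
   }

In the first variant the remaining elements are lost, exactly as described above; in the second the stream carries its errors as data and downstream stages can decide how to react to them.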