Remove docs for the (deprecated) contrib subproject (#22926)

This commit is contained in:
Arnout Engelen 2017-05-12 06:54:46 -07:00 committed by GitHub
parent 4cb9c2436f
commit 1c830d224e
10 changed files with 0 additions and 693 deletions

Binary file not shown (deleted image, 32 KiB).


@@ -1,109 +0,0 @@
.. _aggregator:
Aggregator Pattern
==================
.. warning::
**Deprecation warning** - ``Aggregator`` has been deprecated and is scheduled for removal
in the next major version. Feel free to copy the source into your project or create
a separate library outside of Akka.
The aggregator pattern supports writing actors that aggregate data from multiple
other actors and update their state based on those responses. Doing this by hand is
tricky, and it gets even harder when you want to optionally aggregate more data based
on the runtime state of the actor, or take certain actions (sending another message
and getting another response) based on two or more previous responses.
A common thought is to use the ask pattern to request information from other
actors. However, ask creates another actor specifically for the ask, and we cannot
use a callback on the returned future to update the actor's state, because the thread
executing the callback is not defined and the callback would likely close over the
current actor's mutable state.
The aggregator pattern solves such scenarios by making sure that all of this logic
runs on the same actor, within the scope of that actor's receive.
Introduction
------------
The aggregator pattern allows match patterns to be dynamically added to and removed
from an actor from inside the message handling logic. All match patterns are called
from the receive loop and run in the thread handling the incoming message. These
dynamically added patterns and logic can safely read and/or modify this actor's
mutable state without risking integrity or concurrency issues.
Usage
-----
To use the aggregator pattern, you need to extend the :class:`Aggregator` trait.
The trait takes care of :class:`receive` and actors extending this trait should
not override :class:`receive`. The trait provides the :class:`expect`,
:class:`expectOnce`, and :class:`unexpect` calls. The :class:`expect` and
:class:`expectOnce` calls return a handle that can be used for later de-registration
by passing the handle to :class:`unexpect`.
:class:`expect` is often used for standing matches such as catching error messages or timeouts.
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/pattern/AggregatorSpec.scala#expect-timeout
:class:`expectOnce` is used for matching the initial message as well as other requested messages.
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/pattern/AggregatorSpec.scala#initial-expect
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/pattern/AggregatorSpec.scala#expect-balance
:class:`unexpect` is used to de-register a previously registered :class:`expect`, for example when expecting
multiple responses only until a timeout fires or until the logic dictates that the :class:`expect` no longer applies.
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/pattern/AggregatorSpec.scala#unexpect-sample
As the name suggests, :class:`expect` keeps the partial function matching any
received messages until :class:`unexpect` is called or the actor terminates,
whichever comes first. On the other hand, :class:`expectOnce` removes the partial
function once a match has been established.
It is a common pattern to register the initial expectOnce from the construction
of the actor to accept the initial message. Once that message is received, the
actor starts doing all aggregations and sends the response back to the original
requester. The aggregator should terminate after the response is sent (or timed
out). A different original request should use a different actor instance.
As you can see, aggregator actors are generally stateful, short-lived actors.
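For illustration, a rough sketch of that shape (with hypothetical ``GetBalance``/``Balance`` messages; the real, complete samples are referenced below):

.. code-block:: scala

   import akka.actor.{ Actor, ActorRef }
   import akka.contrib.pattern.Aggregator

   // Hypothetical protocol, for illustration only
   case class GetBalance(account: String)
   case class Balance(account: String, amount: BigDecimal)

   class BalanceAggregator(bankA: ActorRef, bankB: ActorRef) extends Actor with Aggregator {
     var results = List.empty[Balance]
     var requester: ActorRef = Actor.noSender

     // Registered at construction time: accept the initial request exactly once
     expectOnce {
       case GetBalance(account) =>
         requester = sender() // save the original sender() for the final reply
         bankA ! GetBalance(account)
         bankB ! GetBalance(account)
     }

     // A standing match for the responses; the returned handle allows
     // de-registration via unexpect once both answers have arrived
     val balanceHandle = expect {
       case b: Balance =>
         results ::= b
         if (results.size == 2) {
           unexpect(balanceHandle)
           requester ! results
           context.stop(self)
         }
     }
   }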
Sample Use Case - AccountBalanceRetriever
-----------------------------------------
The example below shows a typical and intended use of the aggregator pattern.
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/pattern/AggregatorSpec.scala#demo-code
Sample Use Case - Multiple Response Aggregation and Chaining
------------------------------------------------------------
A shorter example showing aggregating responses and chaining further requests.
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/pattern/AggregatorSpec.scala#chain-sample
Pitfalls
--------
* The current implementation does not match the sender of the message. This is
designed to work with :class:`ActorSelection` as well as :class:`ActorRef`.
Without the sender(), there is a chance a received message can be matched by
more than one partial function. The partial function that was registered via
:class:`expect` or :class:`expectOnce` first (chronologically) and is not yet
de-registered by :class:`unexpect` takes precedence in this case. Developers
should make sure the messages can be uniquely matched or the wrong logic can
be executed for a certain message.
* The :class:`sender` referenced in any :class:`expect` or :class:`expectOnce`
logic refers to the sender() of that particular message and not the sender() of
the original message. The original sender() still needs to be saved so a final
response can be sent back.
* :class:`context.become` is not supported when extending the :class:`Aggregator`
trait.
* We strongly recommend against overriding :class:`receive`. If your use case
  really dictates it, you may do so with extreme caution. Always provide a pattern
  match handling aggregator messages among your :class:`receive` pattern matches,
  as follows::

     case msg if handleMessage(msg) ⇒ // noop
     // the side effects of handleMessage do the actual matching
Sorry, there is not yet a Java implementation of the aggregator pattern available.


@@ -1,59 +0,0 @@
.. _circuit-breaker-proxy:
Circuit-Breaker Actor
=====================
This is an alternative implementation of the :ref:`Akka Circuit Breaker Pattern <circuit-breaker>`.
The main difference is that it is intended to be used only for request-reply interactions with an actor, using the Circuit-Breaker as a proxy of the target actor
in order to provide the same fail-fast functionality and a protocol similar to the circuit-breaker implementation in Akka.
.. warning::
**Deprecation warning** - ``CircuitBreakerProxy`` has been deprecated and is scheduled for removal
in the next major version. ``akka.pattern.CircuitBreaker`` with explicit ``ask`` requests can be used instead.
Usage
-----
Let's assume we have an actor wrapping a back-end service that is able to respond to ``Request`` calls with a ``Response`` object
containing an ``Either[String, String]`` to distinguish successful and failed responses. The service may also slow down
under heavy workload.
A simple implementation of such a service is given by this class:
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/circuitbreaker/sample/CircuitBreaker.scala#simple-service
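The referenced sample is roughly of this shape (a sketch with hypothetical ``Request``/``Response`` definitions, not the exact code included above):

.. code-block:: scala

   import akka.actor.Actor
   import scala.util.Random

   // Hypothetical protocol, for illustration only
   final case class Request(payload: String)
   final case class Response(result: Either[String, String])

   class SimpleService extends Actor {
     def receive = {
       case Request(payload) =>
         // simulate an unreliable, occasionally slow back-end
         if (Random.nextInt(10) == 0) Thread.sleep(2000)
         val result: Either[String, String] =
           if (payload.isEmpty) Left("empty payload") else Right(s"handled: $payload")
         sender() ! Response(result)
     }
   }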
If we want to interface with this service using the Circuit Breaker we can use two approaches:
Using a non-conversational approach:
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/circuitbreaker/sample/CircuitBreaker.scala#basic-sample
Using the ``ask`` pattern; in this case it is useful to be able to map circuit-open failures to the same type of failure
returned by the service (a ``Left[String]`` in our case):
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/circuitbreaker/sample/CircuitBreaker.scala#ask-sample
If it is not possible to define a specific error response, you can map the open-circuit notification to a failure.
That also means that your ``CircuitBreakerActor`` will be useful to protect you from timeouts caused by extra workload or
temporary failures in the target actor.
You can do that in two ways:
The first is to use the ``askWithCircuitBreaker`` method on the ``ActorRef`` or ``ActorSelection`` instance pointing to
your circuit breaker proxy (enabled by ``import akka.contrib.circuitbreaker.Implicits.askWithCircuitBreaker``)
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/circuitbreaker/sample/CircuitBreaker.scala#ask-with-circuit-breaker-sample
The second is to map the future response of your ``ask`` pattern application with ``failForOpenCircuit``,
enabled by ``import akka.contrib.circuitbreaker.Implicits.futureExtensions``
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/circuitbreaker/sample/CircuitBreaker.scala#ask-with-failure-sample
Direct Communication With The Target Actor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
To send messages to the `target` actor without expecting any response you can wrap your message in a ``TellOnly`` or a ``Passthrough``
envelope. The difference between the two is that ``TellOnly`` will forward the message only while the circuit is closed, whereas
``Passthrough`` will do so in any state. You can, for example, use the ``Passthrough`` envelope to wrap a ``PoisonPill``
message to terminate the target actor. That will cause the circuit breaker proxy to be terminated too.


@@ -1,82 +0,0 @@
.. _akka-contrib:
External Contributions
======================
.. warning::
**Deprecation warning** - ``akka-contrib`` has been deprecated and is scheduled for removal
in the next major version. The reason is to reduce the amount of things to maintain in
the core Akka projects. Contributions to the core of Akka or its satellite projects
are welcome. Contributions that don't fit into existing modules can be hosted in
new Akka Github repositories in the ``akka`` Github organization or outside of it
depending on what kind of library it is. Please ask.
This subproject provides a home to modules contributed by external developers
which may or may not move into the officially supported code base over time.
The conditions under which this transition can occur include:
* there must be enough interest in the module to warrant inclusion in the
standard distribution,
* the module must be actively maintained and
* code quality must be good enough to allow efficient maintenance by the Akka
core development team
If a contribution turns out not to “take off”, it may be removed again at a
later time.
Caveat Emptor
-------------
A module in this subproject doesn't have to obey the rule of staying binary
compatible between minor releases. Breaking API changes may be introduced in
minor releases without notice as we refine and simplify based on your feedback.
A module may be dropped in any release without prior deprecation. The Lightbend
subscription does not cover support for these modules.
The Current List of Modules
---------------------------
.. toctree::
reliable-proxy
throttle
jul
peek-mailbox
aggregator
receive-pipeline
circuitbreaker
Suggested Way of Using these Contributions
------------------------------------------
Since the Akka team does not restrict updates to this subproject even during
otherwise binary compatible releases, and modules may be removed without
deprecation, it is suggested to copy the source files into your own code base,
changing the package name. This way you can choose when to update or which
fixes to include (to keep binary compatibility if needed) and later releases of
Akka do not potentially break your application.
Suggested Format of Contributions
---------------------------------
Each contribution should be a self-contained unit, consisting of one source
file or one exclusively used package, without dependencies to other modules in
this subproject; it may depend on everything else in the Akka distribution,
though. This ensures that contributions may be moved into the standard
distribution individually. The module shall be within a subpackage of
``akka.contrib``.
Each module must be accompanied by a test suite which verifies that the
provided features work, possibly complemented by integration and unit tests.
The tests should follow the :ref:`developer_guidelines` and go into the
``src/test/scala`` or ``src/test/java`` directories (with package name matching
the module which is being tested). As an example, if the module were called
``akka.contrib.pattern.ReliableProxy``, then the test suite should be called
``akka.contrib.pattern.ReliableProxySpec``.
Each module must also have proper documentation in `reStructured Text`_ format.
The documentation should be a single ``<module>.rst`` file in the
``akka-contrib/docs`` directory, including a link from ``index.rst`` (this file).
.. _reStructured Text: http://sphinx.pocoo.org/rest.html


@@ -1,17 +0,0 @@
Java Logging (JUL)
==================
This extension module provides a logging backend which uses the `java.util.logging` (j.u.l)
API to do the endpoint logging for `akka.event.Logging`.
Provided with this module is an implementation of `akka.event.LoggingAdapter` which is independent of any `ActorSystem` being in place. This means that j.u.l can be used as the backend, via the Akka Logging API, for both Actor and non-Actor codebases.
To enable j.u.l as the `akka.event.Logging` backend, use the following Akka config::

    loggers = ["akka.contrib.jul.JavaLogger"]
To access the `akka.event.Logging` API from non-Actor code, mix in `akka.contrib.jul.JavaLogging`.
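For example, a non-actor class could look roughly like this (a sketch; it assumes the ``JavaLogging`` mixin exposes a ``log`` adapter in the usual Akka style):

.. code-block:: scala

   import akka.contrib.jul.JavaLogging
   import scala.util.control.NonFatal

   // Hypothetical non-actor component, for illustration only
   class BatchImporter extends JavaLogging {
     def run(): Unit = {
       log.info("starting import")
       try {
         // ... business logic ...
       } catch {
         case NonFatal(e) => log.error(e, "import failed")
       }
     }
   }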
This module is preferred over SLF4J with its JDK14 backend, due to integration issues resulting in the incorrect handling of `threadId`, `className` and `methodName`.
This extension module was contributed by Sam Halliday.


@@ -1,59 +0,0 @@
.. _mailbox-acking:
Mailbox with Explicit Acknowledgement
=====================================
.. warning::
**Deprecation warning** - ``PeekMailbox`` has been deprecated and is scheduled for removal
in the next major version. Use an explicit supervisor or proxy actor instead.
When an Akka actor is processing a message and an exception occurs, the normal
behavior is for the actor to drop that message, and then continue with the next
message after it has been restarted. This is in some cases not the desired
solution, e.g. when using failure and supervision to manage a connection to an
unreliable resource; the actor could after the restart go into a buffering mode
(i.e. change its behavior) and retry the real processing later, when the
unreliable resource is back online.
One way to do this is by sending all messages through the supervisor and
buffering them there, acknowledging successful processing in the child; another
way is to build an explicit acknowledgement mechanism into the mailbox. The
idea with the latter is that a message is reprocessed in case of failure until
the mailbox is told that processing was successful.
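Conceptually, the acknowledging actor has roughly this shape (a sketch; it assumes the actor runs on a dispatcher configured with the contrib ``PeekMailboxType`` and that the extension offers an ``ack()`` convenience, as in the linked implementation and demo below):

.. code-block:: scala

   import akka.actor.Actor
   import akka.contrib.mailbox.PeekMailboxExtension

   class UnreliableWorker extends Actor {
     def receive = {
       case msg =>
         doStuff(msg)               // may throw; the message then stays at the head of the mailbox
         PeekMailboxExtension.ack() // only now is the message removed from the mailbox
     }

     // talks to the unreliable resource; elided here
     def doStuff(msg: Any): Unit = ???
   }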
The pattern is implemented `here
<@github@/akka-contrib/src/main/scala/akka/contrib/mailbox/PeekMailbox.scala>`_.
A demonstration of how to use it (although for brevity not a perfect example)
is the following:
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/mailbox/PeekMailboxSpec.scala
:include: demo
:exclude: business-logic-elided
Running this application (try it in the Akka sources by saying
``sbt akka-contrib/test:run``) may produce the following output (note the
processing of “World” on lines 2 and 16):
.. code-block:: none
Hello
World
[ERROR] [12/17/2012 16:28:36.581] [MySystem-peek-dispatcher-5] [akka://MySystem/user/myActor] DONTWANNA
java.lang.Exception: DONTWANNA
at akka.contrib.mailbox.MyActor.doStuff(PeekMailbox.scala:105)
at akka.contrib.mailbox.MyActor$$anonfun$receive$1.applyOrElse(PeekMailbox.scala:98)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:425)
at akka.actor.ActorCell.invoke(ActorCell.scala:386)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:230)
at akka.dispatch.Mailbox.run(Mailbox.scala:212)
at akka.dispatch.ForkJoinExecutorConfigurator$MailboxExecutionTask.exec(AbstractDispatcher.scala:502)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:262)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:975)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1478)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:104)
World
Normally one would want to make processing idempotent (i.e. it does not matter
if a message is processed twice) or ``context.become`` a different behavior
upon restart; the above example included the ``println(msg)`` call just to
demonstrate the re-processing.


@@ -1,142 +0,0 @@
.. _receive-pipeline:
Receive Pipeline Pattern
========================
.. warning::
**Deprecation warning** - ``ReceivePipeline`` has been deprecated and is scheduled for removal
in the next major version. Feel free to copy the source into your project or create
a separate library outside of Akka.
The Receive Pipeline Pattern lets you define general interceptors for your messages
and plug an arbitrary amount of them into your Actors.
It's useful for defining cross-cutting aspects to be applied to all or many of your Actors.
Some Possible Use Cases
-----------------------
* Measure the time spent for processing the messages
* Audit messages with an associated author
* Log all important messages
* Secure restricted messages
* Text internationalization
Interceptors
------------
Multiple interceptors can be added to actors that mix in the :class:`ReceivePipeline` trait.
These interceptors internally define layers of decorators around the actor's behavior. The first interceptor
defines an outer decorator which delegates to a decorator corresponding to the second interceptor and so on,
until the last interceptor which defines a decorator for the actor's :class:`Receive`.
The first or outermost interceptor receives messages sent to the actor.
For each message received by an interceptor, the interceptor will typically perform some
processing based on the message and decide whether
or not to pass the received message (or some other message) to the next interceptor.
An :class:`Interceptor` is a type alias for
:class:`PartialFunction[Any, Delegation]`. The :class:`Any` input is the message
it receives from the previous interceptor (or, in the case of the first interceptor,
the message that was sent to the actor).
The :class:`Delegation` return type is used to control what (if any)
message is passed on to the next interceptor.
A simple example
----------------
To pass a transformed message to the actor
(or next inner interceptor) an interceptor can return :class:`Inner(newMsg)` where :class:`newMsg` is the transformed message.
The following interceptor shows this. It intercepts :class:`Int` messages,
adds one to them and passes on the incremented value to the next interceptor.
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/pattern/ReceivePipelineSpec.scala#interceptor-sample1
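The referenced sample is roughly of this shape (a sketch, assuming the ``ReceivePipeline`` types from ``akka.contrib.pattern``):

.. code-block:: scala

   import akka.contrib.pattern.ReceivePipeline
   import akka.contrib.pattern.ReceivePipeline.Inner

   object Interceptors {
     // Intercepts Int messages and passes the incremented value on
     // to the next (inner) interceptor or the actor's own receive
     val increment: ReceivePipeline.Interceptor = {
       case i: Int => Inner(i + 1)
     }
   }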
Building the Pipeline
---------------------
To give your Actor the ability to pipeline received messages in this way, you'll need to mixin with the
:class:`ReceivePipeline` trait. It has two methods for controlling the pipeline, :class:`pipelineOuter`
and :class:`pipelineInner`, both receiving an :class:`Interceptor`.
The first one adds the interceptor at the
beginning of the pipeline and the second one adds it at the end, just before the current
Actor's behavior.
In this example we mix the :class:`ReceivePipeline` trait into our Actor and
we add :class:`Increment` and :class:`Double` interceptors with :class:`pipelineInner`.
They will be applied in this very order.
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/pattern/ReceivePipelineSpec.scala#in-actor
If we add :class:`Double` with :class:`pipelineOuter` it will be applied before :class:`Increment`, so the output is 11:
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/pattern/ReceivePipelineSpec.scala#in-actor-outer
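Put together, a sketch of such an actor (consistent with the description above: sending ``5`` prints ``11`` when ``Double`` is added with ``pipelineOuter``):

.. code-block:: scala

   import akka.actor.Actor
   import akka.contrib.pattern.ReceivePipeline
   import akka.contrib.pattern.ReceivePipeline.Inner

   class PipelinedActor extends Actor with ReceivePipeline {

     // Added at the beginning of the pipeline: doubles the value first
     pipelineOuter { case i: Int => Inner(i * 2) }

     // Added at the end of the pipeline, just before receive: increments
     pipelineInner { case i: Int => Inner(i + 1) }

     def receive = {
       case any => println(any) // sending 5 prints 11: (5 * 2) + 1
     }
   }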
Interceptors Mixin
------------------
Defining the whole pipeline inside the Actor implementation is good for showing the pattern, but it isn't
very practical. The real flexibility of this pattern comes when you define every interceptor in its own
trait and then mix any of them into your Actors.
Let's see it in an example. We have the following model:
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/pattern/ReceivePipelineSpec.scala#mixin-model
and these two interceptors defined, each one in its own trait:
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/pattern/ReceivePipelineSpec.scala#mixin-interceptors
The first one intercepts any messages having
an internationalized text and replaces it with the resolved text before resuming with the chain. The second one
intercepts any message with an author defined and prints it before resuming the chain with the message unchanged.
But since :class:`I18n` adds the interceptor with :class:`pipelineInner` and :class:`Audit` adds it with
:class:`pipelineOuter`, the audit will happen before the internationalization.
So if we mixin both interceptors in our Actor, we will see the following output for these example messages:
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/pattern/ReceivePipelineSpec.scala#mixin-actor
Unhandled Messages
------------------
With all this behavior chaining going on, what happens to unhandled messages? It can be explained with
a simple rule.
.. note::
Every message not handled by an interceptor will be passed to the next one in the chain. If none
of the interceptors handles a message, the current Actor's behavior will receive it, and if the
behavior doesn't handle it either, it will be treated as usual with the unhandled method.
Sometimes, however, it is desirable for an interceptor to break the chain. You can do so by explicitly indicating
that the message has been completely handled by the interceptor, by returning
:class:`HandledCompletely`.
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/pattern/ReceivePipelineSpec.scala#unhandled
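A sketch of an interceptor that breaks the chain for a particular message (assuming ``HandledCompletely`` from the ``ReceivePipeline`` companion):

.. code-block:: scala

   import akka.actor.Actor
   import akka.contrib.pattern.ReceivePipeline
   import akka.contrib.pattern.ReceivePipeline.HandledCompletely

   class PingFilter extends Actor with ReceivePipeline {

     // "ping" is answered here and never reaches inner interceptors or receive
     pipelineOuter {
       case "ping" =>
         sender() ! "pong"
         HandledCompletely
     }

     def receive = {
       case msg => println(s"business message: $msg")
     }
   }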
Processing after delegation
---------------------------
But what if you want to perform some action after the actor has processed the message (for example to
measure the message processing time)?
In order to support such use cases, the :class:`Inner` return type has a method :class:`andAfter` which accepts
a code block that can perform some action after the message has been processed by subsequent inner interceptors.
The following is a simple interceptor that times message processing:
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/pattern/ReceivePipelineSpec.scala#interceptor-after
.. note::
The :class:`andAfter` code blocks are run on return from handling the message with the next inner handler and
on the same thread. It is therefore safe for the :class:`andAfter` logic to close over the interceptor's state.
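A sketch of such a timing interceptor (assuming ``Inner``'s ``andAfter`` as described above):

.. code-block:: scala

   import akka.actor.Actor
   import akka.contrib.pattern.ReceivePipeline
   import akka.contrib.pattern.ReceivePipeline.Inner

   class TimedActor extends Actor with ReceivePipeline {

     pipelineOuter {
       case msg =>
         val start = System.nanoTime
         // the block runs after all inner interceptors and receive have handled msg
         Inner(msg) andAfter {
           val elapsedMs = (System.nanoTime - start) / 1e6
           println(f"handling $msg took $elapsedMs%.2f ms")
         }
     }

     def receive = {
       case msg => // actual message handling elided
     }
   }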
Using Receive Pipelines with Persistence
----------------------------------------
When using ``ReceivePipeline`` together with :ref:`PersistentActor<persistence-scala>` make sure to
mix in the traits in the following order for them to properly co-operate::
class ExampleActor extends PersistentActor with ReceivePipeline {
/* ... */
}
The order is important here because of how both traits use internal "around" methods to implement their features;
if they are mixed in the other way around, it will not work as expected. If you want to learn more about how exactly this
works, you can read up on Scala's `type linearization mechanism`_.
.. _type linearization mechanism: http://ktoso.github.io/scala-types-of-types/#type-linearization-vs-the-diamond-problem


@@ -1,155 +0,0 @@
.. _reliable-proxy:
Reliable Proxy Pattern
======================
.. warning::
**Deprecation warning** - ``ReliableProxy`` has been deprecated and is scheduled for removal
in the next major version. Use :ref:`at-least-once-delivery-scala` instead.
Looking at :ref:`message-delivery-reliability` one might come to the conclusion that
Akka actors are made for blue-sky scenarios: sending messages is the only way
for actors to communicate, and then that is not even guaranteed to work. Is the
whole paradigm built on sand? Of course the answer is an emphatic “No!”.
A local message send—within the same JVM instance—is not likely to fail, and if
it does, the reason is one of the following:
* it was meant to fail (due to consciously choosing a bounded mailbox, which
upon overflow will have to drop messages)
* or it failed due to a catastrophic VM error, e.g. an
:class:`OutOfMemoryError`, a memory access violation (“segmentation fault”,
GPF, etc.), JVM bug—or someone calling ``System.exit()``.
In all of these cases, the actor was very likely not in a position to process
the message anyway, so this part of the non-guarantee is not problematic.
It is a lot more likely for an unintended message delivery failure to occur
when a message send crosses JVM boundaries, i.e. an intermediate unreliable
network is involved. If someone unplugs an ethernet cable, or a power failure
shuts down a router, messages will be lost while the actors would be able to
process them just fine.
.. note::
This does not mean that message send semantics are different between local
and remote operations, it just means that in practice there is a difference
between how good the “best effort” is.
Introducing the Reliable Proxy
------------------------------
.. image:: ReliableProxy.png
To bridge the disparity between “local” and “remote” sends is the goal of this
pattern. When sending from A to B must be as reliable as in-JVM, regardless of
the deployment, then you can interject a reliable tunnel and send through that
instead. The tunnel consists of two end-points, where the ingress point P (the
“proxy”) is a child of A and the egress point E is a child of P, deployed onto
the same network node where B lives. Messages sent to P will be wrapped in an
envelope, tagged with a sequence number and sent to E, who verifies that the
received envelope has the right sequence number (the next expected one) and
forwards the contained message to B. When B receives this message, the
``sender()`` will be a reference to the sender() of the original message to P.
Reliability is added by E replying to orderly received messages with an ACK, so
that P can tick those messages off its resend list. If ACKs do not come in a
timely fashion, P will try to resend until successful.
Exactly what does it guarantee?
-------------------------------
Sending via a :class:`ReliableProxy` makes the message send exactly as reliable
as if the represented target were to live within the same JVM, provided that
the remote actor system does not terminate. In effect, both ends (i.e. JVM and
actor system) must be considered as one when evaluating the reliability of this
communication channel. The benefit is that the network in-between is taken out
of that equation.
Connecting to the target
^^^^^^^^^^^^^^^^^^^^^^^^
The ``proxy`` tries to connect to the ``target`` using the mechanism outlined in
:ref:`actorSelection-scala`. Once connected, if the ``tunnel`` terminates the ``proxy``
will optionally try to reconnect to the target using the same process.
Note that during the reconnection process there is a possibility that a message
could be delivered to the ``target`` more than once. Consider the case where a message
is delivered to the ``target`` and the target system crashes before the ACK
is sent to the ``sender``. After the ``proxy`` reconnects to the ``target`` it
will start resending all of the messages that it has not received an ACK for, and
the message that it never got an ACK for will be redelivered. Either this possibility
should be considered in the design of the ``target`` or reconnection should be disabled.
How to use it
-------------
Since this implementation does not offer much in the way of configuration,
simply instantiate a proxy wrapping a target ``ActorPath``. From Java it looks
like this:
.. includecode:: @contribSrc@/src/test/java/akka/contrib/pattern/ReliableProxyTest.java
:include: import,demo-proxy
And from Scala like this:
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/pattern/ReliableProxyDocSpec.scala#demo
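Roughly, the Scala version amounts to something like this (a sketch, assuming a ``props`` factory that takes the target ``ActorPath`` and the *retryAfter* interval):

.. code-block:: scala

   import scala.concurrent.duration._
   import akka.actor.{ Actor, ActorPath }
   import akka.contrib.pattern.ReliableProxy

   class ProxyParent(targetPath: ActorPath) extends Actor {

     // retry un-acknowledged sends every 100 ms
     val proxy = context.actorOf(ReliableProxy.props(targetPath, 100.millis), "proxy")

     def receive = {
       // forward so that the target sees the original sender()
       case msg => proxy forward msg
     }
   }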
Since the :class:`ReliableProxy` actor is an :ref:`fsm-scala`, it also offers
the capability to subscribe to state transitions. If you need to know when all
enqueued messages have been received by the remote end-point (and consequently
been forwarded to the target), you can subscribe to the FSM notifications and
observe a transition from state :class:`ReliableProxy.Active` to state
:class:`ReliableProxy.Idle`.
.. includecode:: @contribSrc@/src/test/java/akka/contrib/pattern/ReliableProxyTest.java#demo-transition
From Scala it would look like so:
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/pattern/ReliableProxyDocSpec.scala#demo-transition
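A sketch of a monitoring actor that reacts to that transition (``FSM.SubscribeTransitionCallBack`` and the transition messages are part of core Akka FSM):

.. code-block:: scala

   import akka.actor.{ Actor, ActorRef, FSM }
   import akka.contrib.pattern.ReliableProxy

   class ProxyMonitor(proxy: ActorRef) extends Actor {

     // ask the proxy (an FSM) to notify us about its state transitions
     proxy ! FSM.SubscribeTransitionCallBack(self)

     def receive = {
       case FSM.CurrentState(_, state) =>
         // the current state, sent once right after subscribing
       case FSM.Transition(_, ReliableProxy.Active, ReliableProxy.Idle) =>
         // every enqueued message has been acknowledged and forwarded to the target
       case FSM.Transition(_, _, _) => // other transitions are not interesting here
     }
   }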
Configuration
^^^^^^^^^^^^^
* Set ``akka.reliable-proxy.debug`` to ``on`` to turn on extra debug logging for your
:class:`ReliableProxy` actors.
* ``akka.reliable-proxy.default-connect-interval`` is used only if you create a :class:`ReliableProxy`
with no reconnections (that is, ``reconnectAfter == None``). The default value is the value of the configuration
property ``akka.remote.retry-gate-closed-for``. For example, if ``akka.remote.retry-gate-closed-for`` is ``5 s``
then the :class:`ReliableProxy` will send an ``Identify`` message to the *target* every 5 seconds
to try to resolve the :class:`ActorPath` to an :class:`ActorRef` so that messages can be sent to the *target*.
The Actor Contract
------------------
Messages it Processes
^^^^^^^^^^^^^^^^^^^^^
* :class:`FSM.SubscribeTransitionCallBack` and :class:`FSM.UnsubscribeTransitionCallBack`, see :ref:`fsm-scala`
* :class:`ReliableProxy.Unsent`, see the API documentation for details.
* any other message is transferred through the reliable tunnel and forwarded to the designated target actor
Messages it Sends
^^^^^^^^^^^^^^^^^
* :class:`FSM.CurrentState` and :class:`FSM.Transition`, see :ref:`fsm-scala`
* :class:`ReliableProxy.TargetChanged` is sent to the FSM transition subscribers if the proxy reconnects to a
new target.
* :class:`ReliableProxy.ProxyTerminated` is sent to the FSM transition subscribers if the proxy is stopped.
Exceptions it Escalates
^^^^^^^^^^^^^^^^^^^^^^^
* no specific exception types
* any exception encountered by either the local or remote end-point is escalated (only fatal VM errors)
Arguments it Takes
^^^^^^^^^^^^^^^^^^
* *target* is the :class:`ActorPath` to the actor to which the tunnel shall reliably deliver
messages, ``B`` in the above illustration.
* *retryAfter* is the timeout for receiving ACK messages from the remote
end-point; once it fires, all outstanding message sends will be retried.
* *reconnectAfter* is an optional interval between connection attempts. It is also used as the interval
between receiving a ``Terminated`` for the tunnel and attempting to reconnect to the target actor.
* *maxConnectAttempts* is an optional maximum number of attempts to connect to the target while in
the ``Connecting`` state.


@@ -1,70 +0,0 @@
Throttling Actor Messages
=========================
Introduction
------------
.. warning::
**Deprecation warning** - ``TimerBasedThrottler`` has been deprecated and is scheduled for removal
in the next major version. Use Akka Streams instead, see
:ref:`migration guide <migration-guide-TimerBasedThrottler>`.
Suppose you are writing an application that makes HTTP requests to an external
web service and that this web service has a restriction in place: you may not
make more than 10 requests in 1 minute. You will get blocked or need to pay if
you don't stay under this limit. In such a scenario you will want to employ
a *message throttler*.
This extension module provides a simple implementation of a throttling actor,
the :class:`TimerBasedThrottler`.
How to use it
-------------
You can use a :class:`TimerBasedThrottler` as follows. From Java it looks
like this:
.. includecode:: @contribSrc@/src/test/java/akka/contrib/throttle/TimerBasedThrottlerTest.java#demo-code
And from Scala like this:
.. includecode:: @contribSrc@/src/test/scala/akka/contrib/throttle/TimerBasedThrottlerSpec.scala#demo-code
Please refer to the JavaDoc/ScalaDoc documentation for the details.
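In outline, the Scala usage amounts to something like this (a sketch following the referenced sample; the ``msgsPer`` syntax comes from the ``Throttler`` companion's implicits):

.. code-block:: scala

   import scala.concurrent.duration._
   import akka.actor.{ Actor, ActorSystem, Props }
   import akka.contrib.throttle.TimerBasedThrottler
   import akka.contrib.throttle.Throttler._

   class Printer extends Actor {
     def receive = { case msg => println(msg) }
   }

   object ThrottleExample extends App {
     val system = ActorSystem("throttle-example")
     val printer = system.actorOf(Props[Printer], "printer")

     // lets through at most 3 messages per second
     val throttler = system.actorOf(Props(classOf[TimerBasedThrottler], 3 msgsPer 1.second))

     // tell the throttler where to forward the messages it lets through
     throttler ! SetTarget(Some(printer))

     // delivered to the printer at the configured rate
     (1 to 10).foreach(i => throttler ! s"message $i")
   }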
The guarantees
--------------
:class:`TimerBasedThrottler` uses a timer internally. When the throttler's rate is 3 msg/s,
for example, the throttler will start a timer that triggers
every second and each time will give the throttler exactly three "vouchers";
each voucher gives the throttler a right to deliver a message. In this way,
at most 3 messages will be sent out by the throttler in each interval.
It should be noted that such timer-based throttlers provide relatively **weak guarantees**:
* Only *start times* are taken into account. This may be a problem if, for example, the
throttler is used to throttle requests to an external web service. If a web request
takes very long on the server then the rate *observed on the server* may be higher.
* A timer-based throttler only makes guarantees for the intervals of its own timer. In
our example, no more than 3 messages are delivered within such intervals. Other
intervals on the timeline, however, may contain more calls.
The two cases are illustrated in the two figures below, each showing a timeline and three
intervals of the timer. The message delivery times chosen by the throttler are indicated
by dots, and as you can see, each interval contains at most 3 points, so the throttler
works correctly. Still, there is in each example an interval (the red one) that is
problematic. In the first scenario, this is because the delivery times are merely the
start times of longer requests (indicated by the four bars above the timeline that start
at the dots), so that the server observes four requests during the red interval. In the
second scenario, the messages are centered around one of the points in time where the
timer triggers, causing the red interval to contain too many messages.
.. image:: throttler.png
For some application scenarios, the guarantees provided by a timer-based throttler might
be too weak. Charles Cordingley's `blog post <http://www.cordinc.com/blog/2010/04/java-multichannel-asynchronous.html>`_
discusses a throttler with stronger guarantees (it solves problem 2 from above).
Future versions of this module may feature throttlers with better guarantees.

Binary file not shown (deleted image, 3.9 KiB).