add preprocessor for RST docs, see #2461 and #2431

The idea is to filter the sources, replacing @<var>@ occurrences with
the mapping for <var> (which is currently hard-coded). @@ -> @. In order
to make this work, I had to move the doc sources one directory down
(into akka-docs/rst) so that the filtered result could be in a sibling
directory so that relative links (to _sphinx plugins or real code) would
continue to work.

While I was at it I also changed it so that WARNINGs and ERRORs are not
swallowed into the debug dump anymore but printed at [warn] level
(minimum).

One piece of fallout is that the (online) html build is now run after
the normal one, not in parallel.
Roland 2012-09-21 10:47:58 +02:00
parent c0f60da8cc
commit 9bc01ae265
266 changed files with 270 additions and 182 deletions


@@ -0,0 +1,757 @@
.. _actors-scala:
################
Actors (Scala)
################
The `Actor Model`_ provides a higher level of abstraction for writing concurrent
and distributed systems. It alleviates the developer from having to deal with
explicit locking and thread management, making it easier to write correct
concurrent and parallel systems. Actors were defined in the 1973 paper by Carl
Hewitt but have been popularized by the Erlang language, and used for example at
Ericsson with great success to build highly concurrent and reliable telecom
systems.
The API of Akka's Actors is similar to Scala Actors, which has borrowed some of
its syntax from Erlang.
.. _Actor Model: http://en.wikipedia.org/wiki/Actor_model
Creating Actors
===============
.. note::
Since Akka enforces parental supervision, every actor is supervised and
(potentially) the supervisor of its children. It is advisable that you
familiarize yourself with :ref:`actor-systems` and :ref:`supervision`; it
may also help to read :ref:`actorOf-vs-actorFor` (the whole of
:ref:`addressing` is recommended reading in any case).
Defining an Actor class
-----------------------
Actor classes are implemented by extending the Actor class and implementing the
:meth:`receive` method. The :meth:`receive` method should define a series of case
statements (which together have the type ``PartialFunction[Any, Unit]``) that define
which messages your Actor can handle, using standard Scala pattern matching,
along with the implementation of how the messages should be processed.
Here is an example:
.. includecode:: code/docs/actor/ActorDocSpec.scala
:include: imports1,my-actor
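The included example is not inlined here; a minimal sketch of such an actor
(the ``MyActor`` name and the messages are illustrative) could look like this:

.. code-block:: scala

   import akka.actor.Actor
   import akka.event.Logging

   class MyActor extends Actor {
     val log = Logging(context.system, this)

     def receive = {
       case "test" => log.info("received test")
       case _      => log.info("received unknown message") // default case
     }
   }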
Please note that the Akka Actor ``receive`` message loop is exhaustive, which
differs from Erlang and Scala Actors. This means that you need to provide a
pattern match for all messages that it can accept, and if you want to be able
to handle unknown messages then you need to have a default case as in the
example above. Otherwise an ``akka.actor.UnhandledMessage(message, sender,
recipient)`` will be published to the ``ActorSystem``'s ``EventStream``.
The result of the :meth:`receive` method is a partial function object, which is
stored within the actor as its “initial behavior”, see `Become/Unbecome`_ for
further information on changing the behavior of an actor after its
construction.
.. note::
The initial behavior of an Actor is extracted before the constructor is run,
so if you want to base your initial behavior on member state, you should
use ``become`` in the constructor.
Creating Actors with default constructor
----------------------------------------
.. includecode:: code/docs/actor/ActorDocSpec.scala
:include: imports2,system-actorOf
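As a sketch, creating a top-level actor this way might look as follows (the
system and actor names are illustrative):

.. code-block:: scala

   import akka.actor.{ ActorSystem, Props }

   val system = ActorSystem("mySystem")
   // a top-level actor, supervised by the system's user guardian
   val myActor = system.actorOf(Props[MyActor], name = "myactor")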
The call to :meth:`actorOf` returns an instance of ``ActorRef``. This is a handle to
the ``Actor`` instance which you can use to interact with the ``Actor``. The
``ActorRef`` is immutable and has a one to one relationship with the Actor it
represents. The ``ActorRef`` is also serializable and network-aware. This means
that you can serialize it, send it over the wire and use it on a remote host and
it will still be representing the same Actor on the original node, across the
network.
In the above example the actor was created from the system. It is also possible
to create actors from other actors with the actor ``context``. The difference is
how the supervisor hierarchy is arranged. When using the context the current
actor will be the supervisor of the created child actor. When using the system
it will be a top-level actor, supervised by the system (internal guardian actor).
.. includecode:: code/docs/actor/ActorDocSpec.scala#context-actorOf
The name parameter is optional, but you should preferably name your actors,
since the name is used in log messages and for identifying actors. The name
must not be empty or start with ``$``, but it may contain URL-encoded characters
(e.g. ``%20`` for a blank space). If the given name is already in use by another
child of the same parent actor, an `InvalidActorNameException` is thrown.
Actors are automatically started asynchronously when created.
When the ``Actor`` is created, its ``preStart`` callback method (defined on the
``Actor`` trait) is automatically invoked. This is an excellent place to
add initialization code for the actor.
.. code-block:: scala

   override def preStart() = {
     ... // initialization code
   }
Creating Actors with non-default constructor
--------------------------------------------
If your Actor has a constructor that takes parameters then you can't create it
using ``actorOf(Props[TYPE])``. Instead you can use a variant of ``actorOf`` that takes
a call-by-name block in which you can create the Actor in any way you like.
Here is an example:
.. includecode:: code/docs/actor/ActorDocSpec.scala#creating-constructor
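For illustration, a sketch of the call-by-name variant (the ``MyActor``
constructor argument is hypothetical):

.. code-block:: scala

   // assumes a class MyActor(name: String) extends Actor
   val myActor = system.actorOf(Props(new MyActor("arg")), name = "myactor")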
.. warning::
You might be tempted at times to offer an ``Actor`` factory which always
returns the same instance, e.g. by using a ``lazy val`` or an
``object ... extends Actor``. This is not supported, as it goes against the
meaning of an actor restart, which is described here:
:ref:`supervision-restart`.
.. warning::
Also avoid passing mutable state into the constructor of the Actor, since
the call-by-name block can be executed by another thread.
Props
-----
``Props`` is a configuration class to specify options for the creation
of actors. Here are some examples on how to create a ``Props`` instance.
.. includecode:: code/docs/actor/ActorDocSpec.scala#creating-props-config
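A few illustrative ways of constructing ``Props`` (a sketch; the
``my-dispatcher`` name assumes a dispatcher configured under that key):

.. code-block:: scala

   import akka.actor.Props

   val props1 = Props[MyActor]                         // actor with a no-arg constructor
   val props2 = Props(new MyActor)                     // call-by-name creator block
   val props3 = props1.withDispatcher("my-dispatcher") // deploy on a specific dispatcher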
Creating Actors with Props
--------------------------
Actors are created by passing in a ``Props`` instance into the ``actorOf`` factory method.
.. includecode:: code/docs/actor/ActorDocSpec.scala#creating-props
Creating Actors using anonymous classes
---------------------------------------
When spawning actors for specific sub-tasks from within an actor, it may be convenient to include the code to be executed directly in place, using an anonymous class.
.. includecode:: code/docs/actor/ActorDocSpec.scala#anonymous-actor
.. warning::
In this case you need to carefully avoid closing over the containing actor's
reference, i.e. do not call methods on the enclosing actor from within the
anonymous Actor class. This would break the actor encapsulation and may
introduce synchronization bugs and race conditions because the other actor's
code will be scheduled concurrently to the enclosing actor. Unfortunately
there is not yet a way to detect these illegal accesses at compile time.
See also: :ref:`jmm-shared-state`
Actor API
=========
The :class:`Actor` trait defines only one abstract method, the above mentioned
:meth:`receive`, which implements the behavior of the actor.
If the current actor behavior does not match a received message,
:meth:`unhandled` is called, which by default publishes an
``akka.actor.UnhandledMessage(message, sender, recipient)`` on the actor
system's event stream (set configuration item ``akka.actor.debug.unhandled``
to ``on`` to have them converted into actual Debug messages).
In addition, it offers:
* :obj:`self` reference to the :class:`ActorRef` of the actor
* :obj:`sender` reference to the sender Actor of the last received message, typically used as described in :ref:`Actor.Reply`
* :obj:`supervisorStrategy` user-overridable definition of the strategy to use for supervising child actors
* :obj:`context` exposes contextual information for the actor and the current message, such as:
* factory methods to create child actors (:meth:`actorOf`)
* system that the actor belongs to
* parent supervisor
* supervised children
* lifecycle monitoring
* hotswap behavior stack as described in :ref:`Actor.HotSwap`
You can import the members in the :obj:`context` to avoid prefixing access with ``context.``
.. includecode:: code/docs/actor/ActorDocSpec.scala#import-context
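A sketch of this import in use:

.. code-block:: scala

   import akka.actor.{ Actor, Props }

   class FirstActor extends Actor {
     import context._ // actorOf, become, stop, ... usable without the context. prefix
     val myActor = actorOf(Props[MyActor], name = "myactor")

     def receive = {
       case x => myActor ! x
     }
   }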
The remaining visible methods are user-overridable life-cycle hooks which are
described in the following::

  def preStart() {}

  def preRestart(reason: Throwable, message: Option[Any]) {
    context.children foreach (context.stop(_))
    postStop()
  }

  def postRestart(reason: Throwable) { preStart() }

  def postStop() {}
The implementations shown above are the defaults provided by the :class:`Actor`
trait.
.. _deathwatch-scala:
Lifecycle Monitoring aka DeathWatch
-----------------------------------
In order to be notified when another actor terminates (i.e. stops permanently,
not temporary failure and restart), an actor may register itself for reception
of the :class:`Terminated` message dispatched by the other actor upon
termination (see `Stopping Actors`_). This service is provided by the
:class:`DeathWatch` component of the actor system.
Registering a monitor is easy:
.. includecode:: code/docs/actor/ActorDocSpec.scala#watch
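A sketch of registering for and reacting to termination (the child actor and
the messages are illustrative):

.. code-block:: scala

   import akka.actor.{ Actor, Props, Terminated }

   class WatchActor extends Actor {
     val child = context.actorOf(Props.empty, "child") // an actor doing nothing
     context.watch(child) // register for the child's Terminated message

     def receive = {
       case "kill"              => context.stop(child)
       case Terminated(`child`) => println("the child has terminated")
     }
   }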
It should be noted that the :class:`Terminated` message is generated
independent of the order in which registration and termination occur.
Registering multiple times does not necessarily lead to multiple messages being
generated, but there is no guarantee that only exactly one such message is
received: if termination of the watched actor has generated and queued the
message, and another registration is done before this message has been
processed, then a second message will be queued, because registering for
monitoring of an already terminated actor leads to the immediate generation of
the :class:`Terminated` message.
It is also possible to deregister from watching another actor's liveliness
using ``context.unwatch(target)``, but obviously this cannot guarantee
non-reception of the :class:`Terminated` message because that may already have
been queued.
Start Hook
----------
Right after starting the actor, its :meth:`preStart` method is invoked.
::

  override def preStart() {
    // registering with other actors
    someService ! Register(self)
  }
Restart Hooks
-------------
All actors are supervised, i.e. linked to another actor with a fault
handling strategy. Actors will be restarted in case an exception is thrown while
processing a message. This restart involves the hooks mentioned above:
1. The old actor is informed by calling :meth:`preRestart` with the exception
which caused the restart and the message which triggered that exception; the
latter may be ``None`` if the restart was not caused by processing a
message, e.g. when a supervisor does not trap the exception and is restarted
in turn by its supervisor. This method is the best place for cleaning up,
preparing hand-over to the fresh actor instance, etc.
By default it stops all children and calls :meth:`postStop`.
2. The initial factory from the ``actorOf`` call is used
to produce the fresh instance.
3. The new actor's :meth:`postRestart` method is invoked with the exception
which caused the restart. By default the :meth:`preStart`
is called, just as in the normal start-up case.
An actor restart replaces only the actual actor object; the contents of the
mailbox are unaffected by the restart, so processing of messages will resume
after the :meth:`postRestart` hook returns. The message
that triggered the exception will not be received again. Any message
sent to an actor while it is being restarted will be queued to its mailbox as
usual.
Stop Hook
---------
After stopping an actor, its :meth:`postStop` hook is called, which may be used
e.g. for deregistering this actor from other services. This hook is guaranteed
to run after message queuing has been disabled for this actor, i.e. messages
sent to a stopped actor will be redirected to the :obj:`deadLetters` of the
:obj:`ActorSystem`.
Identifying Actors
==================
As described in :ref:`addressing`, each actor has a unique logical path, which
is obtained by following the chain of actors from child to parent until
reaching the root of the actor system, and it has a physical path, which may
differ if the supervision chain includes any remote supervisors. These paths
are used by the system to look up actors, e.g. when a remote message is
received and the recipient is searched, but they are also useful more directly:
actors may look up other actors by specifying absolute or relative
paths—logical or physical—and receive back an :class:`ActorRef` with the
result::
context.actorFor("/user/serviceA/aggregator") // will look up this absolute path
context.actorFor("../joe") // will look up sibling beneath same supervisor
The supplied path is parsed as a :class:`java.net.URI`, which basically means
that it is split on ``/`` into path elements. If the path starts with ``/``, it
is absolute and the look-up starts at the root guardian (which is the parent of
``"/user"``); otherwise it starts at the current actor. If a path element equals
``..``, the look-up will take a step “up” towards the supervisor of the
currently traversed actor, otherwise it will step “down” to the named child.
It should be noted that the ``..`` in actor paths here always means the logical
structure, i.e. the supervisor.
If the path being looked up does not exist, a special actor reference is
returned which behaves like the actor system's dead letter queue but retains
its identity (i.e. the path which was looked up).
Remote actor addresses may also be looked up, if remoting is enabled::
context.actorFor("akka://app@otherhost:1234/user/serviceB")
These look-ups return a (possibly remote) actor reference immediately, so you
will have to send to it and await a reply in order to verify that ``serviceB``
is actually reachable and running. An example demonstrating actor look-up is
given in :ref:`remote-lookup-sample-scala`.
Messages and immutability
=========================
**IMPORTANT**: Messages can be any kind of object but have to be
immutable. Scala can't enforce immutability (yet) so this has to be by
convention. Primitives like String, Int, Boolean are always immutable. Apart
from these the recommended approach is to use Scala case classes, which are
immutable (if you don't explicitly expose the state) and work great with
pattern matching at the receiver side.
Here is an example:
.. code-block:: scala

   // define the case class
   case class Register(user: User)

   // create a new case class message
   val message = Register(user)
Other good message types are ``scala.Tuple2``, ``scala.List``, and ``scala.Map``,
which are all immutable and great for pattern matching.
Send messages
=============
Messages are sent to an Actor through one of the following methods.
* ``!`` means “fire-and-forget”, e.g. send a message asynchronously and return
immediately. Also known as ``tell``.
* ``?`` sends a message asynchronously and returns a :class:`Future`
representing a possible reply. Also known as ``ask``.
Message ordering is guaranteed on a per-sender basis.
.. note::
There are performance implications of using ``ask`` since something needs to
keep track of when it times out, there needs to be something that bridges
a ``Promise`` into an ``ActorRef`` and it also needs to be reachable through
remoting. So always prefer ``tell`` for performance, and only ``ask`` if you must.
Tell: Fire-forget
-----------------
This is the preferred way of sending messages. No blocking waiting for a
message. This gives the best concurrency and scalability characteristics.
.. code-block:: scala

   actor ! "hello"
If invoked from within an Actor, then the sending actor reference will be
implicitly passed along with the message and available to the receiving Actor
in its ``sender: ActorRef`` member field. The target actor can use this
to reply to the original sender, by using ``sender ! replyMsg``.
If invoked from an instance that is **not** an Actor the sender will be
:obj:`deadLetters` actor reference by default.
Ask: Send-And-Receive-Future
----------------------------
The ``ask`` pattern involves actors as well as futures, hence it is offered as
a use pattern rather than a method on :class:`ActorRef`:
.. includecode:: code/docs/actor/ActorDocSpec.scala#ask-pipeTo
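Along the lines of the included example, here is a sketch of three asks
composed into one future (``actorA`` through ``actorD``, ``system`` and the
``Result`` class are assumed to be in scope; on newer Akka versions an implicit
``ExecutionContext`` such as ``system.dispatcher`` is required for the
composition):

.. code-block:: scala

   import akka.pattern.{ ask, pipe }
   import akka.util.Timeout
   import scala.concurrent.Future     // akka.dispatch.Future on older versions
   import scala.concurrent.duration._ // akka.util.duration._ on older versions
   import system.dispatcher           // execution context for the for-comprehension

   case object Request                             // hypothetical protocol
   case class Result(x: Int, s: String, d: Double)

   implicit val timeout = Timeout(5 seconds)       // used by the `?` calls below

   val f: Future[Result] =
     for {
       x <- (actorA ? Request).mapTo[Int]          // three asks, composed ...
       s <- (actorB ? Request).mapTo[String]
       d <- (actorC ? Request).mapTo[Double]
     } yield Result(x, s, d)                       // ... into one Future

   f pipeTo actorD // deliver the eventual Result to actorD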
This example demonstrates ``ask`` together with the ``pipeTo`` pattern on
futures, because this is likely to be a common combination. Please note that
all of the above is completely non-blocking and asynchronous: ``ask`` produces
a :class:`Future`, three of which are composed into a new future using the
for-comprehension and then ``pipeTo`` installs an ``onComplete``-handler on the
future to affect the submission of the aggregated :class:`Result` to another
actor.
Using ``ask`` will send a message to the receiving Actor as with ``tell``, and
the receiving actor must reply with ``sender ! reply`` in order to complete the
returned :class:`Future` with a value. The ``ask`` operation involves creating
an internal actor for handling this reply, which needs to have a timeout after
which it is destroyed in order not to leak resources; see more below.
.. warning::
To complete the future with an exception you need to send a Failure message to the sender.
This is *not done automatically* when an actor throws an exception while processing a message.
.. includecode:: code/docs/actor/ActorDocSpec.scala#reply-exception
If the actor does not complete the future, it will expire after the timeout
period, completing it with an :class:`AskTimeoutException`. The timeout is
taken from one of the following locations in order of precedence:
1. explicitly given timeout as in:
.. includecode:: code/docs/actor/ActorDocSpec.scala#using-explicit-timeout
2. implicit argument of type :class:`akka.util.Timeout`, e.g.
.. includecode:: code/docs/actor/ActorDocSpec.scala#using-implicit-timeout
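Both variants might look like this in practice (a sketch; the message and
timeout values are illustrative):

.. code-block:: scala

   import akka.pattern.ask
   import akka.util.Timeout
   import scala.concurrent.duration._ // akka.util.duration._ on older versions

   // 1. explicitly given timeout
   val future1 = myActor.ask("hello")(5 seconds)

   // 2. implicit Timeout picked up by `?`
   implicit val timeout = Timeout(5 seconds)
   val future2 = myActor ? "hello"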
See :ref:`futures-scala` for more information on how to await or query a
future.
The ``onComplete``, ``onSuccess``, or ``onFailure`` methods of the ``Future`` can be
used to register a callback to get a notification when the Future completes,
giving you a way to avoid blocking.
.. warning::
When using future callbacks, such as ``onComplete``, ``onSuccess``, and ``onFailure``,
inside actors you need to carefully avoid closing over
the containing actor's reference, i.e. do not call methods or access mutable state
on the enclosing actor from within the callback. This would break the actor
encapsulation and may introduce synchronization bugs and race conditions because
the callback will be scheduled concurrently to the enclosing actor. Unfortunately
there is not yet a way to detect these illegal accesses at compile time.
See also: :ref:`jmm-shared-state`
Forward message
---------------
You can forward a message from one actor to another. This means that the
original sender address/reference is maintained even though the message is going
through a 'mediator'. This can be useful when writing actors that work as
routers, load-balancers, replicators etc.
.. code-block:: scala

   myActor.forward(message)
Receive messages
================
An Actor has to implement the ``receive`` method to receive messages:
.. code-block:: scala

   def receive: PartialFunction[Any, Unit]
Note: Akka has an alias to the ``PartialFunction[Any, Unit]`` type called
``Receive`` (``akka.actor.Actor.Receive``), so you can use this type instead for
clarity. But most often you don't need to spell it out.
This method should return a ``PartialFunction``, e.g. a match/case clause in
which the message can be matched against the different case clauses using Scala
pattern matching. Here is an example:
.. includecode:: code/docs/actor/ActorDocSpec.scala
:include: imports1,my-actor
.. _Actor.Reply:
Reply to messages
=================
If you want to have a handle for replying to a message, you can use
``sender``, which gives you an ActorRef. You can reply by sending to
that ActorRef with ``sender ! replyMsg``. You can also store the ActorRef
for replying later, or pass it on to other actors. If there is no sender (a
message was sent without an actor or future context) then the sender
defaults to a 'dead-letter' actor ref.
.. code-block:: scala

   case request =>
     val result = process(request)
     sender ! result // will have dead-letter actor as default
Initial receive timeout
=======================
A timeout mechanism can be used to receive a message when no initial message is
received within a certain time. To receive this timeout you have to set the
``receiveTimeout`` property and declare a case handling the ``ReceiveTimeout``
object.
.. includecode:: code/docs/actor/ActorDocSpec.scala#receive-timeout
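A sketch of how this can look (the timeout value and the messages are
illustrative):

.. code-block:: scala

   import akka.actor.{ Actor, ReceiveTimeout }
   import scala.concurrent.duration._ // akka.util.duration._ on older versions

   class MyReceiveTimeoutActor extends Actor {
     // request a ReceiveTimeout message if no message arrives within 30 seconds
     context.setReceiveTimeout(30 seconds)

     def receive = {
       case "Hello"        => // a real message resets the timer
       case ReceiveTimeout => throw new RuntimeException("received timeout")
     }
   }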
.. _stopping-actors-scala:
Stopping actors
===============
Actors are stopped by invoking the :meth:`stop` method of an ``ActorRefFactory``,
i.e. ``ActorContext`` or ``ActorSystem``. Typically the context is used for stopping
child actors and the system for stopping top level actors. The actual termination of
the actor is performed asynchronously, i.e. :meth:`stop` may return before the actor is
stopped.
Processing of the current message, if any, will continue before the actor is stopped,
but additional messages in the mailbox will not be processed. By default these
messages are sent to the :obj:`deadLetters` of the :obj:`ActorSystem`, but that
depends on the mailbox implementation.
Termination of an actor proceeds in two steps: first the actor suspends its
mailbox processing and sends a stop command to all its children, then it keeps
processing the termination messages from its children until the last one is
gone, finally terminating itself (invoking :meth:`postStop`, dumping mailbox,
publishing :class:`Terminated` on the :ref:`DeathWatch <deathwatch-scala>`, telling
its supervisor). This procedure ensures that actor system sub-trees terminate
in an orderly fashion, propagating the stop command to the leaves and
collecting their confirmation back to the stopped supervisor. If one of the
actors does not respond (i.e. processing a message for extended periods of time
and therefore not receiving the stop command), this whole process will be
stuck.
Upon :meth:`ActorSystem.shutdown()`, the system guardian actors will be
stopped, and the aforementioned process will ensure proper termination of the
whole system.
The :meth:`postStop()` hook is invoked after an actor is fully stopped. This
enables cleaning up of resources:
.. code-block:: scala

   override def postStop() = {
     // close some file or database connection
   }
.. note::
Since stopping an actor is asynchronous, you cannot immediately reuse the
name of the child you just stopped; this will result in an
:class:`InvalidActorNameException`. Instead, :meth:`watch()` the terminating
actor and create its replacement in response to the :class:`Terminated`
message which will eventually arrive.
PoisonPill
----------
You can also send an actor the ``akka.actor.PoisonPill`` message, which will
stop the actor when the message is processed. ``PoisonPill`` is enqueued as
an ordinary message and will be handled after messages that were already
queued in the mailbox.
Graceful Stop
-------------
:meth:`gracefulStop` is useful if you need to wait for termination or compose ordered
termination of several actors:
.. includecode:: code/docs/actor/ActorDocSpec.scala#gracefulStop
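A sketch of waiting for ordered termination (``actorRef`` and ``system`` are
assumed to be in scope):

.. code-block:: scala

   import akka.pattern.gracefulStop
   import scala.concurrent.Await      // akka.dispatch.Await on older versions
   import scala.concurrent.duration._ // akka.util.duration._ on older versions

   try {
     val stopped = gracefulStop(actorRef, 5 seconds)(system)
     Await.result(stopped, 6 seconds)
     // the actor has been stopped and postStop() has run
   } catch {
     case e: akka.pattern.AskTimeoutException => // the actor wasn't stopped in time
   }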
When ``gracefulStop()`` returns successfully, the actor's ``postStop()`` hook
will have been executed: there exists a happens-before edge between the end of
``postStop()`` and the return of ``gracefulStop()``.
.. warning::
Keep in mind that an actor stopping and its name being deregistered are
separate events which happen asynchronously from each other. Therefore it may
be that you will find the name still in use after ``gracefulStop()``
returned. In order to guarantee proper deregistration, only reuse names from
within a supervisor you control and only in response to a :class:`Terminated`
message, i.e. not for top-level actors.
.. _Actor.HotSwap:
Become/Unbecome
===============
Upgrade
-------
Akka supports hotswapping the Actor's message loop (i.e. its implementation) at
runtime: invoke the ``context.become`` method from within the Actor.
``become`` takes a ``PartialFunction[Any, Unit]`` that implements
the new message handler. The hotswapped code is kept in a Stack which can be
pushed and popped.
.. warning::
Please note that the actor will revert to its original behavior when restarted by its Supervisor.
To hotswap the Actor behavior using ``become``:
.. includecode:: code/docs/actor/ActorDocSpec.scala#hot-swap-actor
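A sketch of such a hotswap (the states and messages are illustrative):

.. code-block:: scala

   import akka.actor.Actor

   class HotSwapActor extends Actor {
     import context._

     def angry: Receive = {
       case "foo" => sender ! "I am already angry?"
       case "bar" => become(happy)
     }

     def happy: Receive = {
       case "bar" => sender ! "I am already happy :-)"
       case "foo" => become(angry)
     }

     def receive = {
       case "foo" => become(angry)
       case "bar" => become(happy)
     }
   }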
The ``become`` method is useful for many different things, but a particularly
nice example is using it to implement a Finite State Machine
(FSM): `Dining Hakkers`_.
.. _Dining Hakkers: http://github.com/akka/akka/blob/master/akka-samples/akka-sample-fsm/src/main/scala/DiningHakkersOnBecome.scala
Here is another cute little example of ``become`` and ``unbecome`` in action:
.. includecode:: code/docs/actor/ActorDocSpec.scala#swapper
Encoding Scala Actors' nested receives without accidentally leaking memory
---------------------------------------------------------------------------
See this `Unnested receive example <https://github.com/akka/akka/blob/master/akka-docs/scala/code/docs/actor/UnnestedReceives.scala>`_.
Downgrade
---------
Since the hotswapped code is pushed to a Stack you can downgrade the code as
well: invoke the ``context.unbecome`` method from within the Actor.
This will pop the Stack and replace the Actor's implementation with the
``PartialFunction[Any, Unit]`` that is at the top of the Stack.
Here's how you use the ``unbecome`` method:
.. code-block:: scala

   def receive = {
     case "revert" => context.unbecome()
   }
Stash
=====
The `Stash` trait enables an actor to temporarily stash away messages
that cannot or should not be handled using the actor's current
behavior. Upon changing the actor's message handler, i.e., right
before invoking ``context.become`` or ``context.unbecome``, all
stashed messages can be "unstashed", thereby prepending them to the actor's
mailbox. This way, the stashed messages can be processed in the same
order as they have been received originally.
.. warning::
Please note that the ``Stash`` can only be used together with actors
that have a deque-based mailbox. For this, configure the
``mailbox-type`` of the dispatcher to be a deque-based mailbox, such as
``akka.dispatch.UnboundedDequeBasedMailbox`` (see :ref:`dispatchers-scala`).
Here is an example of the ``Stash`` in action:
.. includecode:: code/docs/actor/ActorDocSpec.scala#stash
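A sketch of such a stashing protocol (the messages are illustrative):

.. code-block:: scala

   import akka.actor.{ Actor, Stash }

   class ActorWithProtocol extends Actor with Stash {
     def receive = {
       case "open" =>
         unstashAll()
         context.become({
           case "write" => // do writing...
           case "close" =>
             unstashAll()
             context.unbecome()
           case msg => stash() // keep everything else for later
         }, discardOld = false)
       case msg => stash() // not open yet: defer the message
     }
   }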
Invoking ``stash()`` adds the current message (the message that the
actor received last) to the actor's stash. It is typically invoked
when handling the default case in the actor's message handler to stash
messages that aren't handled by the other cases. It is illegal to
stash the same message twice; doing so results in an
``IllegalStateException`` being thrown. The stash may also be bounded
in which case invoking ``stash()`` may lead to a capacity violation,
which results in a ``StashOverflowException``. The capacity of the
stash can be configured using the ``stash-capacity`` setting (an ``Int``) of the
dispatcher's configuration.
Invoking ``unstashAll()`` enqueues messages from the stash to the
actor's mailbox until the capacity of the mailbox (if any) has been
reached (note that messages from the stash are prepended to the
mailbox). In case a bounded mailbox overflows, a
``MessageQueueAppendFailedException`` is thrown.
The stash is guaranteed to be empty after calling ``unstashAll()``.
The stash is backed by a ``scala.collection.immutable.Vector``. As a
result, even a very large number of messages may be stashed without a
major impact on performance.
.. warning::
Note that the ``Stash`` trait must be mixed into (a subclass of) the
``Actor`` trait before any trait/class that overrides the ``preRestart``
callback. This means it's not possible to write
``Actor with MyActor with Stash`` if ``MyActor`` overrides ``preRestart``.
Note that the stash is not persisted across restarts of an actor,
unlike the actor's mailbox. Therefore, it should be managed like other
parts of the actor's state which have the same property.
Killing an Actor
================
You can also kill an actor by sending it a ``Kill`` message. This will cause the
actor to throw an ``ActorKilledException``, triggering a failure that is handled
through the regular supervision semantics.
Use it like this:
.. code-block:: scala

   // kill the actor called 'victim'
   victim ! Kill
Actors and exceptions
=====================
It can happen that, while a message is being processed by an actor, some
kind of exception is thrown, e.g. a database exception.
What happens to the Message
---------------------------
If an exception is thrown while a message is being processed (i.e. after it has
been taken out of the mailbox and handed over to ``receive``), then this message
will be lost. It is important to understand that it is not put back in the
mailbox. So if you want to retry processing of a message, you need to deal with
it yourself by catching the exception and retrying your flow. Make sure that
you put a bound on the number of retries, since you don't want a system to
livelock (consuming a lot of CPU cycles without making progress).
What happens to the mailbox
---------------------------
If an exception is thrown while a message is being processed, nothing happens to
the mailbox. If the actor is restarted, the same mailbox will be there, and all
the messages in it will be there as well.
What happens to the actor
-------------------------
If an exception is thrown, the actor instance is discarded and a new instance is
created. This new instance is then used behind the actor references to this actor
(so this is invisible to the developer). Note that this means that the current
state of the failing actor instance is lost if you don't store and restore it in
the ``preRestart`` and ``postRestart`` callbacks.
Extending Actors using PartialFunction chaining
===============================================
A somewhat advanced but very useful way of defining a base message handler and
then extending it, either through inheritance or delegation, is to use
``PartialFunction.orElse`` chaining.
.. includecode:: code/docs/actor/ActorDocSpec.scala#receive-orElse
Or:
.. includecode:: code/docs/actor/ActorDocSpec.scala#receive-orElse2
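As a sketch, such chaining could look like this (the handlers and messages are
hypothetical):

.. code-block:: scala

   import akka.actor.Actor

   class BaseActor extends Actor {
     def handleCommon: Actor.Receive = {
       case "ping" => sender ! "pong"
     }
     def receive = handleCommon
   }

   class ExtendedActor extends BaseActor {
     def handleSpecific: Actor.Receive = {
       case "hello" => sender ! "world"
     }
     // specific cases first, common cases as fallback
     override def receive = handleSpecific orElse handleCommon
   }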


@@ -0,0 +1,131 @@
.. _agents-scala:
################
Agents (Scala)
################
Agents in Akka are inspired by `agents in Clojure`_.
.. _agents in Clojure: http://clojure.org/agents
Agents provide asynchronous change of individual locations. Agents are bound to
a single storage location for their lifetime, and only allow mutation of that
location (to a new state) to occur as a result of an action. Update actions are
functions that are asynchronously applied to the Agent's state and whose return
value becomes the Agent's new state. The state of an Agent should be immutable.
While updates to Agents are asynchronous, the state of an Agent is always
immediately available for reading by any thread (using ``get`` or ``apply``)
without any messages.
Agents are reactive. The update actions of all Agents get interleaved amongst
threads in a thread pool. At any point in time, at most one ``send`` action for
each Agent is being executed. Actions dispatched to an agent from another thread
will occur in the order they were sent, potentially interleaved with actions
dispatched to the same agent from other sources.
If an Agent is used within an enclosing transaction, then it will participate in
that transaction. Agents are integrated with Scala STM - any dispatches made in
a transaction are held until that transaction commits, and are discarded if it
is retried or aborted.
Creating and stopping Agents
============================
Agents are created by invoking ``Agent(value)`` passing in the Agent's initial
value:
.. includecode:: code/docs/agent/AgentDocSpec.scala#create
Note that creating an Agent requires an implicit ``ActorSystem`` (for creating
the underlying actors). See :ref:`actor-systems` for more information about
actor systems. An ActorSystem can be in implicit scope when creating an Agent:
.. includecode:: code/docs/agent/AgentDocSpec.scala#create-implicit-system
Or the ActorSystem can be passed explicitly when creating an Agent:
.. includecode:: code/docs/agent/AgentDocSpec.scala#create-explicit-system
An Agent will be running until you invoke ``close`` on it. Then it will be
eligible for garbage collection (unless you hold on to it in some way).
.. includecode:: code/docs/agent/AgentDocSpec.scala#close
Updating Agents
===============
You update an Agent by sending a function that transforms the current value or
by sending just a new value. The Agent will apply the new value or function
atomically and asynchronously. The update is done in a fire-forget manner and
you are only guaranteed that it will be applied. There is no guarantee of when
the update will be applied but dispatches to an Agent from a single thread will
occur in order. You apply a value or a function by invoking the ``send``
function.
.. includecode:: code/docs/agent/AgentDocSpec.scala#send
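A sketch of creating and updating an Agent (assuming an implicit
``ActorSystem`` in scope):

.. code-block:: scala

   import akka.agent.Agent

   val agent = Agent(5) // initial value; needs an implicit ActorSystem

   // send a new value; applied asynchronously, in send order
   agent send 7

   // send functions transforming the current value
   agent send (_ + 1)
   agent send (_ * 2)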
You can also dispatch a function to update the internal state but on its own
thread. This does not use the reactive thread pool and can be used for
long-running or blocking operations. You do this with the ``sendOff``
method. Dispatches using either ``sendOff`` or ``send`` will still be executed
in order.
.. includecode:: code/docs/agent/AgentDocSpec.scala#send-off
Reading an Agent's value
========================
Agents can be dereferenced (you can get an Agent's value) by invoking the Agent
with parentheses like this:
.. includecode:: code/docs/agent/AgentDocSpec.scala#read-apply
Or by using the get method:
.. includecode:: code/docs/agent/AgentDocSpec.scala#read-get
Reading an Agent's current value does not involve any message passing and
happens immediately. So while updates to an Agent are asynchronous, reading the
state of an Agent is synchronous.
Awaiting an Agent's value
=========================
It is also possible to read the value after all currently queued sends have
completed. You can do this with ``await``:
.. includecode:: code/docs/agent/AgentDocSpec.scala#read-await
You can also get a ``Future`` to this value, that will be completed after the
currently queued updates have completed:
.. includecode:: code/docs/agent/AgentDocSpec.scala#read-future
Transactional Agents
====================
If an Agent is used within an enclosing transaction, then it will participate in
that transaction. If you send to an Agent within a transaction then the dispatch
to the Agent will be held until that transaction commits, and discarded if the
transaction is aborted. Here's an example:
.. includecode:: code/docs/agent/AgentDocSpec.scala#transfer-example
Monadic usage
=============
Agents are also monadic, allowing you to compose operations using
for-comprehensions. In monadic usage, new Agents are created leaving the
original Agents untouched. So the old values (Agents) are still available
as-is. They are so-called 'persistent'.
Example of monadic usage:
.. includecode:: code/docs/agent/AgentDocSpec.scala#monadic-example


@@ -0,0 +1,570 @@
.. _camel-scala:
##############
Camel (Scala)
##############
Additional Resources
====================
For an introduction to akka-camel 2, see also Peter Gabryanczyk's talk `Migrating akka-camel module to Akka 2.x`_.
For an introduction to akka-camel 1, see also the `Appendix E - Akka and Camel`_
(pdf) of the book `Camel in Action`_.
.. _Appendix E - Akka and Camel: http://www.manning.com/ibsen/appEsample.pdf
.. _Camel in Action: http://www.manning.com/ibsen/
.. _Migrating akka-camel module to Akka 2.x: http://skillsmatter.com/podcast/scala/akka-2-x
Other, more advanced external articles (for version 1) are:
* `Akka Consumer Actors: New Features and Best Practices <http://krasserm.blogspot.com/2011/02/akka-consumer-actors-new-features-and.html>`_
* `Akka Producer Actors: New Features and Best Practices <http://krasserm.blogspot.com/2011/02/akka-producer-actor-new-features-and.html>`_
Introduction
============
The akka-camel module allows Untyped Actors to receive
and send messages over a great variety of protocols and APIs.
In addition to the native Scala and Java actor API, actors can now exchange
messages with other systems over a large number of protocols and APIs such as
HTTP, SOAP, TCP, FTP, SMTP or JMS, to mention a few. At the moment,
approximately 80 protocols and APIs are supported.
Apache Camel
------------
The akka-camel module is based on `Apache Camel`_, a powerful and light-weight
integration framework for the JVM. For an introduction to Apache Camel you may
want to read this `Apache Camel article`_. Camel comes with a
large number of `components`_ that provide bindings to different protocols and
APIs. The `camel-extra`_ project provides further components.
.. _Apache Camel: http://camel.apache.org/
.. _Apache Camel article: http://architects.dzone.com/articles/apache-camel-integration
.. _components: http://camel.apache.org/components.html
.. _camel-extra: http://code.google.com/p/camel-extra/
Consumer
--------
Usage of Camel's integration components in Akka is essentially a
one-liner. Here's an example.
.. includecode:: code/docs/camel/Introduction.scala#Consumer-mina
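The included example is along these lines (a sketch; the endpoint URI and the
reply text are illustrative):

.. code-block:: scala

   import akka.camel.{ CamelMessage, Consumer }

   class MinaConsumer extends Consumer {
     def endpointUri = "mina:tcp://localhost:6200?textline=true"

     def receive = {
       case msg: CamelMessage => sender ! ("Hello " + msg.body)
     }
   }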
The above example exposes an actor over a TCP endpoint via Apache
Camel's `Mina component`_. The actor implements the endpointUri method to define
an endpoint from which it can receive messages. After starting the actor, TCP
clients can immediately send messages to and receive responses from that
actor. If the message exchange should go over HTTP (via Camel's `Jetty
component`_), only the actor's endpointUri method must be changed.
.. _Mina component: http://camel.apache.org/mina.html
.. _Jetty component: http://camel.apache.org/jetty.html
.. includecode:: code/docs/camel/Introduction.scala#Consumer
Producer
--------
Actors can also trigger message exchanges with external systems, i.e. produce
messages to Camel endpoints.
.. includecode:: code/docs/camel/Introduction.scala
:include: imports,Producer
In the above example, any message sent to this actor will be sent to
the JMS queue ``orders``. Producer actors may choose from the same set of Camel
components as Consumer actors do.
CamelMessage
------------
The number of Camel components is constantly increasing. The akka-camel module
can support these in a plug-and-play manner. Just add them to your application's
classpath, define a component-specific endpoint URI and use it to exchange
messages over the component-specific protocols or APIs. This is possible because
Camel components bind protocol-specific message formats to a Camel-specific
`normalized message format`__. The normalized message format hides
protocol-specific details from Akka and makes it therefore very easy to support
a large number of protocols through a uniform Camel component interface. The
akka-camel module further converts mutable Camel messages into immutable
representations which are used by Consumer and Producer actors for pattern
matching, transformation, serialization or storage. In the above example of the Orders Producer,
the XML message is put in the body of a newly created Camel Message with an empty set of headers.
You can also create a CamelMessage yourself with the appropriate body and headers as you see fit.
__ https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/Message.java
CamelExtension
--------------
The akka-camel module is implemented as an Akka Extension, the ``CamelExtension`` object.
Extensions will only be loaded once per ``ActorSystem``, which will be managed by Akka.
The ``CamelExtension`` object provides access to the `Camel`_ trait.
The `Camel`_ trait in turn provides access to two important Apache Camel objects, the `CamelContext`_ and the `ProducerTemplate`_.
Below you can see how you can get access to these Apache Camel objects.
.. includecode:: code/docs/camel/Introduction.scala#CamelExtension
The ``CamelExtension`` is loaded only once per ``ActorSystem``, which makes it
safe to call the ``CamelExtension`` at any point in your code to get to the
Apache Camel objects associated with it. There is one `CamelContext`_ and one
`ProducerTemplate`_ per ``ActorSystem`` that uses a ``CamelExtension``.
Below is an example of how to add the ActiveMQ component to the `CamelContext`_,
which is required when you would like to use the ActiveMQ component.
.. includecode:: code/docs/camel/Introduction.scala#CamelExtensionAddComponent
The `CamelContext`_ joins the lifecycle of the ``ActorSystem`` and ``CamelExtension`` it is associated with; the `CamelContext`_ is started when
the ``CamelExtension`` is created, and it is shut down when the associated ``ActorSystem`` is shut down. The same is true for the `ProducerTemplate`_.
The ``CamelExtension`` is used by both `Producer` and `Consumer` actors to interact with Apache Camel internally.
You can access the ``CamelExtension`` inside a `Producer` or a `Consumer` using the ``camel`` definition, or get straight at the `CamelContext` using the ``camelContext`` definition.
Actors are created and started asynchronously. When a `Consumer` actor is created, the `Consumer` is published at its Camel endpoint (more precisely, the route is added to the `CamelContext`_ from the `Endpoint`_ to the actor).
When a `Producer` actor is created, a `SendProcessor`_ and `Endpoint`_ are created so that the Producer can send messages to it.
Publication is done asynchronously; setting up an endpoint may still be in progress after you have
requested the actor to be created. Some Camel components can take a while to startup, and in some cases you might want to know when the endpoints are activated and ready to be used.
The `Camel`_ trait allows you to find out when the endpoint is activated or deactivated.
.. includecode:: code/docs/camel/Introduction.scala#CamelActivation
The above code shows that you can get a ``Future`` to the activation of the route from the endpoint to the actor, or you can wait in a blocking fashion on the activation of the route.
An ``ActivationTimeoutException`` is thrown if the endpoint could not be activated within the specified timeout. Deactivation works in a similar fashion:
.. includecode:: code/docs/camel/Introduction.scala#CamelDeactivation
Deactivation of a Consumer or a Producer actor happens when the actor is terminated. For a Consumer, the route to the actor is stopped. For a Producer, the `SendProcessor`_ is stopped.
A ``DeActivationTimeoutException`` is thrown if the associated camel objects could not be deactivated within the specified timeout.
.. _Camel: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/Camel.scala
.. _CamelContext: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java
.. _ProducerTemplate: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/ProducerTemplate.java
.. _SendProcessor: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/processor/SendProcessor.java
.. _Endpoint: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/Endpoint.java
Consumer Actors
================
For objects to receive messages, they must mix in the `Consumer`_
trait. For example, the following actor class (Consumer1) implements the
endpointUri method, which is declared in the Consumer trait, in order to receive
messages from the ``file:data/input/actor`` Camel endpoint.
.. _Consumer: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/Consumer.scala
.. includecode:: code/docs/camel/Consumers.scala#Consumer1
Whenever a file is put into the data/input/actor directory, its content is
picked up by the Camel `file component`_ and sent as a message to the
actor. Messages consumed by actors from Camel endpoints are of type
`CamelMessage`_. These are immutable representations of Camel messages.
.. _file component: http://camel.apache.org/file2.html
.. _Message: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/CamelMessage.scala
Here's another example that sets the endpointUri to
``jetty:http://localhost:8877/camel/default``. It causes Camel's `Jetty
component`_ to start an embedded `Jetty`_ server, accepting HTTP connections
from localhost on port 8877.
.. _Jetty component: http://camel.apache.org/jetty.html
.. _Jetty: http://www.eclipse.org/jetty/
.. includecode:: code/docs/camel/Consumers.scala#Consumer2
After starting the actor, clients can send messages to that actor by POSTing to
``http://localhost:8877/camel/default``. The actor sends a response by using the
sender `!` method. For returning a message body and headers to the HTTP
client the response type should be `CamelMessage`_. For any other response type, a
new CamelMessage object is created by akka-camel with the actor response as message
body.
.. _CamelMessage: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/CamelMessage.scala
.. _camel-acknowledgements:
Delivery acknowledgements
-------------------------
With in-out message exchanges, clients usually know that a message exchange is
done when they receive a reply from a consumer actor. The reply message can be a
CamelMessage (or any object which is then internally converted to a CamelMessage) on
success, and a Failure message on failure.
With in-only message exchanges, by default, an exchange is done when a message
is added to the consumer actor's mailbox. Any failure or exception that occurs
during processing of that message by the consumer actor cannot be reported back
to the endpoint in this case. To allow consumer actors to positively or
negatively acknowledge the receipt of a message from an in-only message
exchange, they need to override the ``autoAck`` method to return false.
In this case, consumer actors must reply either with a
special akka.camel.Ack message (positive acknowledgement) or an akka.actor.Status.Failure (negative
acknowledgement).
.. includecode:: code/docs/camel/Consumers.scala#Consumer3
.. _camel-timeout:
Consumer timeout
----------------
Camel Exchanges (and their corresponding endpoints) that support two-way communications need to wait for a response from
an actor before returning it to the initiating client.
For some endpoint types, timeout values can be defined in an endpoint-specific
way which is described in the documentation of the individual `Camel
components`_. Another option is to configure timeouts on the level of consumer actors.
.. _Camel components: http://camel.apache.org/components.html
Two-way communications between a Camel endpoint and an actor are
initiated by sending the request message to the actor with the `ask`_ pattern
and the actor replies to the endpoint when the response is ready. The ask request to the actor can time out, which will
result in the `Exchange`_ failing with a TimeoutException set on the failure of the `Exchange`_.
The timeout on the consumer actor can be overridden with the ``replyTimeout``, as shown below.
.. includecode:: code/docs/camel/Consumers.scala#Consumer4
.. _Exchange: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/Exchange.java
.. _ask: http://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/pattern/AskSupport.scala
Producer Actors
===============
For sending messages to Camel endpoints, actors need to mix in the `Producer`_ trait and implement the endpointUri method.
.. includecode:: code/docs/camel/Producers.scala#Producer1
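A sketch matching the description below (the endpoint URI is illustrative):

.. code-block:: scala

   import akka.actor.Actor
   import akka.camel.Producer

   class Producer1 extends Actor with Producer {
     def endpointUri = "http://localhost:8080/news"
   }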
Producer1 inherits a default implementation of the receive method from the
Producer trait. To customize a producer actor's default behavior you must override the `Producer`_.transformResponse and
`Producer`_.transformOutgoingMessage methods. This is explained later in more detail.
Producer Actors cannot override the default `Producer`_.receive method.
Any message sent to a `Producer`_ actor will be sent to
the associated Camel endpoint, in the above example to
``http://localhost:8080/news``. The `Producer`_ always sends messages asynchronously. Response messages (if supported by the
configured endpoint) will, by default, be returned to the original sender. The
following example uses the ask pattern to send a message to a
Producer actor and waits for a response.
.. includecode:: code/docs/camel/Producers.scala#AskProducer
The future contains the response CamelMessage, or an ``AkkaCamelException`` when an error occurred, which contains the headers of the response.
.. _camel-custom-processing-scala:
Custom Processing
-----------------
Instead of replying to the initial sender, producer actors can implement custom
response processing by overriding the routeResponse method. In the following example, the response
message is forwarded to a target actor instead of being replied to the original
sender.
.. includecode:: code/docs/camel/Producers.scala#RouteResponse
Before producing messages to endpoints, producer actors can pre-process them by
overriding the `Producer`_.transformOutgoingMessage method.
.. includecode:: code/docs/camel/Producers.scala#TransformOutgoingMessage
Producer configuration options
------------------------------
The interaction of producer actors with Camel endpoints can be configured to be
one-way or two-way (by initiating in-only or in-out message exchanges,
respectively). By default, the producer initiates an in-out message exchange
with the endpoint. For initiating an in-only exchange, producer actors have to override the oneway method to return true.
.. includecode:: code/docs/camel/Producers.scala#Oneway
Message correlation
-------------------
To correlate request with response messages, applications can set the
`Message.MessageExchangeId` message header.
.. includecode:: code/docs/camel/Producers.scala#Correlate
ProducerTemplate
----------------
The `Producer`_ trait is a very
convenient way for actors to produce messages to Camel endpoints. Actors may also use a Camel `ProducerTemplate`_ for producing
messages to endpoints.
.. includecode:: code/docs/camel/Producers.scala#ProducerTemplate
For initiating a two-way message exchange, one of the
``ProducerTemplate.request*`` methods must be used.
.. includecode:: code/docs/camel/Producers.scala#RequestProducerTemplate
.. _Producer: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/Producer.scala
.. _ProducerTemplate: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/ProducerTemplate.java
.. _camel-asynchronous-routing:
Asynchronous routing
====================
In-out message exchanges between endpoints and actors are
designed to be asynchronous. This is the case for both consumer and producer
actors.
* A consumer endpoint sends request messages to its consumer actor using the ``!``
(tell) operator and the actor returns responses with ``sender !`` once they are
ready.
* A producer actor sends request messages to its endpoint using Camel's
asynchronous routing engine. Asynchronous responses are wrapped and added to the
producer actor's mailbox for later processing. By default, response messages are
returned to the initial sender but this can be overridden by Producer
implementations (see also description of the ``routeResponse`` method
in :ref:`camel-custom-processing-scala`).
However, asynchronous two-way message exchanges, without allocating a thread for
the full duration of exchange, cannot be generically supported by Camel's
asynchronous routing engine alone. This must be supported by the individual
`Camel components`_ (from which endpoints are created) as well. They must be
able to suspend any work started for request processing (thereby freeing threads
to do other work) and resume processing when the response is ready. This is
currently the case for a `subset of components`_ such as the `Jetty component`_.
All other Camel components can still be used, of course, but they will cause
allocation of a thread for the duration of an in-out message exchange. There's
also a :ref:`camel-async-example` that implements both an asynchronous
consumer and an asynchronous producer, with the Jetty component.
.. _Camel components: http://camel.apache.org/components.html
.. _subset of components: http://camel.apache.org/asynchronous-routing-engine.html
.. _Jetty component: http://camel.apache.org/jetty.html
Custom Camel routes
===================
In all the examples so far, routes to consumer actors have been automatically
constructed by akka-camel when the actor was started. Although the default
route construction templates, used by akka-camel internally, are sufficient for
most use cases, some applications may require more specialized routes to actors.
The akka-camel module provides two mechanisms for customizing routes to actors,
which will be explained in this section. These are:
* Usage of :ref:`camel-components` to access actors.
Any Camel route can use these components to access Akka actors.
* :ref:`camel-intercepting-route-construction` to actors.
This option gives you the ability to change routes that have already been added to Camel.
Consumer actors have a hook into the route definition process which can be used to change the route.
.. _camel-components:
Akka Camel components
---------------------
Akka actors can be accessed from Camel routes using the `actor`_ Camel component. This component can be used to
access any Akka actor (not only consumer actors) from Camel routes, as described in the following sections.
.. _actor: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/internal/component/ActorComponent.scala
.. _access-to-actors:
Access to actors
----------------
To access actors from custom Camel routes, the `actor`_ Camel
component should be used. It fully supports Camel's `asynchronous routing
engine`_.
.. _actor: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/internal/component/ActorComponent.scala
.. _asynchronous routing engine: http://camel.apache.org/asynchronous-routing-engine.html
This component accepts the following endpoint URI format:
* ``[<actor-path>]?<options>``
where ``<actor-path>`` is the ``ActorPath`` to the actor. The ``<options>`` are
name-value pairs separated by ``&`` (i.e. ``name1=value1&name2=value2&...``).
URI options
^^^^^^^^^^^
The following URI options are supported:
+--------------+----------+---------+-------------------------------------------+
| Name | Type | Default | Description |
+==============+==========+=========+===========================================+
| replyTimeout | Duration | false | The reply timeout, specified in the same |
| | | | way that you use the duration in akka, |
| | | | for instance ``10 seconds`` except that |
| | | | in the url it is handy to use a + |
| | | | between the amount and the unit, like |
| | | | for example ``200+millis`` |
| | | | |
| | | | See also :ref:`camel-timeout`. |
+--------------+----------+---------+-------------------------------------------+
| autoAck | Boolean | true | If set to true, in-only message exchanges |
| | | | are auto-acknowledged when the message is |
| | | | added to the actor's mailbox. If set to |
| | | | false, actors must acknowledge the |
| | | | receipt of the message. |
| | | | |
| | | | See also :ref:`camel-acknowledgements`. |
+--------------+----------+---------+-------------------------------------------+
Here's an actor endpoint URI example containing an actor path::

  akka://some-system/user/myconsumer?autoAck=false&replyTimeout=100+millis
In the following example, a custom route to an actor is created using the
actor's path. The akka.camel package contains an implicit ``toActorRouteDefinition``
that allows a route to reference an ``ActorRef`` directly, as shown in the
example below. The route starts from a `Jetty`_ endpoint and ends at the target actor.
.. includecode:: code/docs/camel/CustomRoute.scala#CustomRoute
When a message is received on the Jetty endpoint, it is routed to the Responder
actor, which in turn replies back to the client of the HTTP request.
.. _camel-intercepting-route-construction:
Intercepting route construction
-------------------------------
The previous section, :ref:`camel-components`, explained how to setup a route to an actor manually.
It was the application's responsibility to define the route and add it to the current CamelContext.
This section explains a more convenient way to define custom routes: akka-camel still sets up the routes to consumer actors (and adds these routes to the current CamelContext), but applications can define extensions to these routes.
Extensions can be defined with Camel's `Java DSL`_ or `Scala DSL`_.
For example, an extension could be a custom error handler that redelivers messages from an endpoint to an actor's bounded mailbox when the mailbox is full.
.. _Java DSL: http://camel.apache.org/dsl.html
.. _Scala DSL: http://camel.apache.org/scala-dsl.html
The following example demonstrates how to extend a route to a consumer actor for
handling exceptions thrown by that actor.
.. includecode:: code/docs/camel/CustomRoute.scala#ErrorThrowingConsumer
The above ``ErrorThrowingConsumer`` sends the ``Failure`` back to the sender in ``preRestart``
because the exception thrown in the actor would
otherwise just crash the actor; by default the actor would be restarted, and the response would never reach the client of the ``Consumer``.
The akka-camel module creates a RouteDefinition instance by calling
from(endpointUri) on a Camel RouteBuilder (where endpointUri is the endpoint URI
of the consumer actor) and passes that instance as argument to the route
definition handler \*). The route definition handler then extends the route and
returns a ProcessorDefinition (in the above example, the ProcessorDefinition
returned by the end method. See the `org.apache.camel.model`__ package for
details). After executing the route definition handler, akka-camel finally calls
to(targetActorUri) on the returned ProcessorDefinition to complete the
route to the consumer actor (where targetActorUri is the actor component URI as described in :ref:`access-to-actors`).
If the actor cannot be found, an ``ActorNotRegisteredException`` is thrown.
\*) Before passing the RouteDefinition instance to the route definition handler,
akka-camel may make some further modifications to it.
__ https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/model/
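The following pseudo-Scala sketch summarizes these steps; the value names are
illustrative and do not correspond to the actual akka-camel internals:
.. code-block:: scala
   // 1. akka-camel creates the route definition for the consumer endpoint
   val rd: RouteDefinition = routeBuilder.from(endpointUri)
   // 2. the application's route definition handler extends the route
   val pd: ProcessorDefinition[_] = routeDefinitionHandler(rd)
   // 3. akka-camel completes the route to the consumer actor
   pd.to(targetActorUri)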
.. _camel-examples:
Examples
========
.. _camel-async-example:
Asynchronous routing and transformation example
-----------------------------------------------
This example demonstrates how to implement consumer and producer actors that
support :ref:`camel-asynchronous-routing` with their Camel endpoints. The sample
application transforms the content of the Akka homepage, http://akka.io, by
replacing every occurrence of *Akka* with *AKKA*. To run this example, add
a Boot class that starts the actors. After starting
the :ref:`microkernel-scala`, direct the browser to http://localhost:8875 and the
transformed Akka homepage should be displayed. Please note that this example
will probably not work if you're behind an HTTP proxy.
The following figure gives an overview of how the example actors interact with
external systems and with each other. A browser sends a GET request to
http://localhost:8875 which is the published endpoint of the ``HttpConsumer``
actor. The ``HttpConsumer`` actor forwards the request to the ``HttpProducer``
actor which retrieves the Akka homepage from http://akka.io. The retrieved HTML
is then forwarded to the ``HttpTransformer`` actor which replaces all occurrences
of *Akka* with *AKKA*. The transformation result is sent back to the ``HttpConsumer``,
which finally returns it to the browser.
.. image:: ../modules/camel-async-interact.png
Implementing the example actor classes and wiring them together is rather easy
as shown in the following snippet.
.. includecode:: code/docs/camel/HttpExample.scala#HttpExample
The `jetty endpoints`_ of HttpConsumer and HttpProducer support asynchronous
in-out message exchanges and do not allocate threads for the full duration of
the exchange. This is achieved by using `Jetty continuations`_ on the
consumer-side and by using `Jetty's asynchronous HTTP client`_ on the producer
side. The following high-level sequence diagram illustrates that.
.. _jetty endpoints: http://camel.apache.org/jetty.html
.. _Jetty continuations: http://wiki.eclipse.org/Jetty/Feature/Continuations
.. _Jetty's asynchronous HTTP client: http://wiki.eclipse.org/Jetty/Tutorial/HttpClient
.. image:: ../modules/camel-async-sequence.png
Custom Camel route example
--------------------------
This example demonstrates the combined usage of a ``Producer`` and a
``Consumer`` actor as well as the inclusion of a custom Camel route. The
following figure gives an overview.
.. image:: ../modules/camel-custom-route.png
* A consumer actor receives a message from an HTTP client
* It forwards the message to another actor that transforms the message (encloses
the original message in hyphens)
* The transformer actor forwards the transformed message to a producer actor
* The producer actor sends the message to a custom Camel route beginning at the
``direct:welcome`` endpoint
* A processor (transformer) in the custom Camel route prepends "Welcome" to the
original message and creates a result message
* The producer actor sends the result back to the consumer actor which returns
it to the HTTP client
The consumer, transformer and
producer actor implementations are as follows.
.. includecode:: code/docs/camel/CustomRouteExample.scala#CustomRouteExample
The producer actor knows where to send the reply because the consumer and
transformer actors have forwarded the original sender reference as well. The
application configuration and the route starting from ``direct:welcome`` are done in the code above.
To run the example, add the lines shown in the example to a Boot class, then start the :ref:`microkernel-scala` and POST a message to
``http://localhost:8877/camel/welcome``.
.. code-block:: none
curl -H "Content-Type: text/plain" -d "Anke" http://localhost:8877/camel/welcome
The response should be:
.. code-block:: none
Welcome - Anke -
Quartz Scheduler Example
------------------------
Here is an example showing how simple it is to implement a cron-style scheduler by
using the Camel Quartz component in Akka.
The following example creates a "timer" actor which fires a message every 2
seconds:
.. includecode:: code/docs/camel/QuartzExample.scala#Quartz
For more information about the Camel Quartz component, see here:
http://camel.apache.org/quartz.html
View file
@ -0,0 +1,409 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.actor
import language.postfixOps
//#imports1
import akka.actor.Actor
import akka.actor.Props
import akka.event.Logging
//#imports1
import scala.concurrent.Future
import akka.actor.{ ActorRef, ActorSystem }
import org.scalatest.{ BeforeAndAfterAll, WordSpec }
import org.scalatest.matchers.MustMatchers
import akka.testkit._
import akka.util._
import scala.concurrent.util.duration._
import akka.actor.Actor.Receive
import scala.concurrent.Await
//#my-actor
class MyActor extends Actor {
val log = Logging(context.system, this)
def receive = {
case "test" log.info("received test")
case _ log.info("received unknown message")
}
}
//#my-actor
case class DoIt(msg: ImmutableMessage)
case class Message(s: String)
//#context-actorOf
class FirstActor extends Actor {
val myActor = context.actorOf(Props[MyActor], name = "myactor")
//#context-actorOf
def receive = {
case x ⇒ sender ! x
}
}
class AnonymousActor extends Actor {
//#anonymous-actor
def receive = {
case m: DoIt ⇒
context.actorOf(Props(new Actor {
def receive = {
case DoIt(msg) ⇒
val replyMsg = doSomeDangerousWork(msg)
sender ! replyMsg
context.stop(self)
}
def doSomeDangerousWork(msg: ImmutableMessage): String = { "done" }
})) forward m
}
//#anonymous-actor
}
//#system-actorOf
object Main extends App {
val system = ActorSystem("MySystem")
val myActor = system.actorOf(Props[MyActor], name = "myactor")
//#system-actorOf
}
class ReplyException extends Actor {
def receive = {
case _ ⇒
//#reply-exception
try {
val result = operation()
sender ! result
} catch {
case e: Exception ⇒
sender ! akka.actor.Status.Failure(e)
throw e
}
//#reply-exception
}
def operation(): String = { "Hi" }
}
//#swapper
case object Swap
class Swapper extends Actor {
import context._
val log = Logging(system, this)
def receive = {
case Swap ⇒
log.info("Hi")
become {
case Swap ⇒
log.info("Ho")
unbecome() // resets the latest 'become' (just for fun)
}
}
}
object SwapperApp extends App {
val system = ActorSystem("SwapperSystem")
val swap = system.actorOf(Props[Swapper], name = "swapper")
swap ! Swap // logs Hi
swap ! Swap // logs Ho
swap ! Swap // logs Hi
swap ! Swap // logs Ho
swap ! Swap // logs Hi
swap ! Swap // logs Ho
}
//#swapper
//#receive-orElse
abstract class GenericActor extends Actor {
// to be defined in subclassing actor
def specificMessageHandler: Receive
// generic message handler
def genericMessageHandler: Receive = {
case event ⇒ printf("generic: %s\n", event)
}
def receive = specificMessageHandler orElse genericMessageHandler
}
class SpecificActor extends GenericActor {
def specificMessageHandler = {
case event: MyMsg ⇒ printf("specific: %s\n", event.subject)
}
}
case class MyMsg(subject: String)
//#receive-orElse
//#receive-orElse2
trait ComposableActor extends Actor {
private var receives: List[Receive] = List()
protected def registerReceive(receive: Receive) {
receives = receive :: receives
}
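// tries the most recently registered Receive first, then falls back to earlier ones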
def receive = receives reduce { _ orElse _ }
}
class MyComposableActor extends ComposableActor {
override def preStart() {
registerReceive({
case "foo" /* Do something */
})
registerReceive({
case "bar" /* Do something */
})
}
}
//#receive-orElse2
class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
"import context" in {
//#import-context
class FirstActor extends Actor {
import context._
val myActor = actorOf(Props[MyActor], name = "myactor")
def receive = {
case x ⇒ myActor ! x
}
}
//#import-context
val first = system.actorOf(Props(new FirstActor), name = "first")
system.stop(first)
}
"creating actor with AkkaSpec.actorOf" in {
val myActor = system.actorOf(Props[MyActor])
// testing the actor
// TODO: convert docs to AkkaSpec(Map(...))
val filter = EventFilter.custom {
case e: Logging.Info ⇒ true
case _ ⇒ false
}
system.eventStream.publish(TestEvent.Mute(filter))
system.eventStream.subscribe(testActor, classOf[Logging.Info])
myActor ! "test"
expectMsgPF(1 second) { case Logging.Info(_, _, "received test") ⇒ true }
myActor ! "unknown"
expectMsgPF(1 second) { case Logging.Info(_, _, "received unknown message") ⇒ true }
system.eventStream.unsubscribe(testActor)
system.eventStream.publish(TestEvent.UnMute(filter))
system.stop(myActor)
}
"creating actor with constructor" in {
class MyActor(arg: String) extends Actor {
def receive = { case _ ⇒ () }
}
//#creating-constructor
// allows passing in arguments to the MyActor constructor
val myActor = system.actorOf(Props(new MyActor("...")), name = "myactor")
//#creating-constructor
system.stop(myActor)
}
"creating a Props config" in {
//#creating-props-config
import akka.actor.Props
val props1 = Props.empty
val props2 = Props[MyActor]
val props3 = Props(new MyActor)
val props4 = Props(
creator = { () ⇒ new MyActor },
dispatcher = "my-dispatcher")
val props5 = props1.withCreator(new MyActor)
val props6 = props5.withDispatcher("my-dispatcher")
//#creating-props-config
}
"creating actor with Props" in {
//#creating-props
import akka.actor.Props
val myActor = system.actorOf(Props[MyActor].withDispatcher("my-dispatcher"), name = "myactor2")
//#creating-props
system.stop(myActor)
}
"using implicit timeout" in {
val myActor = system.actorOf(Props(new FirstActor))
//#using-implicit-timeout
import scala.concurrent.util.duration._
import akka.util.Timeout
import akka.pattern.ask
implicit val timeout = Timeout(5 seconds)
val future = myActor ? "hello"
//#using-implicit-timeout
Await.result(future, timeout.duration) must be("hello")
}
"using explicit timeout" in {
val myActor = system.actorOf(Props(new FirstActor))
//#using-explicit-timeout
import scala.concurrent.util.duration._
import akka.pattern.ask
val future = myActor.ask("hello")(5 seconds)
//#using-explicit-timeout
Await.result(future, 5 seconds) must be("hello")
}
"using receiveTimeout" in {
//#receive-timeout
import akka.actor.ReceiveTimeout
import scala.concurrent.util.duration._
class MyActor extends Actor {
context.setReceiveTimeout(30 milliseconds)
def receive = {
case "Hello" //...
case ReceiveTimeout throw new RuntimeException("received timeout")
}
}
//#receive-timeout
}
"using hot-swap" in {
//#hot-swap-actor
class HotSwapActor extends Actor {
import context._
def angry: Receive = {
case "foo" sender ! "I am already angry?"
case "bar" become(happy)
}
def happy: Receive = {
case "bar" sender ! "I am already happy :-)"
case "foo" become(angry)
}
def receive = {
case "foo" become(angry)
case "bar" become(happy)
}
}
//#hot-swap-actor
val actor = system.actorOf(Props(new HotSwapActor), name = "hot")
}
"using Stash" in {
//#stash
import akka.actor.Stash
class ActorWithProtocol extends Actor with Stash {
def receive = {
case "open"
unstashAll()
context.become {
case "write" // do writing...
case "close"
unstashAll()
context.unbecome()
case msg ⇒ stash()
}
case msg ⇒ stash()
}
}
//#stash
}
"using watch" in {
//#watch
import akka.actor.{ Actor, Props, Terminated }
class WatchActor extends Actor {
val child = context.actorOf(Props.empty, "child")
context.watch(child) // <-- this is the only call needed for registration
var lastSender = system.deadLetters
def receive = {
case "kill" context.stop(child); lastSender = sender
case Terminated(`child`) lastSender ! "finished"
}
}
//#watch
val a = system.actorOf(Props(new WatchActor))
implicit val sender = testActor
a ! "kill"
expectMsg("finished")
}
"using pattern gracefulStop" in {
val actorRef = system.actorOf(Props[MyActor])
//#gracefulStop
import akka.pattern.gracefulStop
import scala.concurrent.Await
try {
val stopped: Future[Boolean] = gracefulStop(actorRef, 5 seconds)(system)
Await.result(stopped, 6 seconds)
// the actor has been stopped
} catch {
case e: akka.pattern.AskTimeoutException ⇒ // the actor wasn't stopped within 5 seconds
}
//#gracefulStop
}
"using pattern ask / pipeTo" in {
val actorA, actorB, actorC, actorD = system.actorOf(Props.empty)
//#ask-pipeTo
import akka.pattern.{ ask, pipe }
import system.dispatcher // The ExecutionContext that will be used
case class Result(x: Int, s: String, d: Double)
case object Request
implicit val timeout = Timeout(5 seconds) // needed for `?` below
val f: Future[Result] =
for {
x ← ask(actorA, Request).mapTo[Int] // call pattern directly
s ← (actorB ask Request).mapTo[String] // call by implicit conversion
d ← (actorC ? Request).mapTo[Double] // call by symbolic name
} yield Result(x, s, d)
f pipeTo actorD // .. or ..
pipe(f) to actorD
//#ask-pipeTo
}
"replying with own or other sender" in {
val actor = system.actorOf(Props(new Actor {
def receive = {
case ref: ActorRef ⇒
//#reply-with-sender
sender.tell("reply", context.parent) // replies will go back to parent
sender.!("reply")(context.parent) // alternative syntax (beware of the parens!)
//#reply-with-sender
case x ⇒
//#reply-without-sender
sender ! x // replies will go to this actor
//#reply-without-sender
}
}))
implicit val me = testActor
actor ! 42
expectMsg(42)
lastSender must be === actor
actor ! me
expectMsg("reply")
lastSender must be === system.actorFor("/user")
expectMsg("reply")
lastSender must be === system.actorFor("/user")
}
}
View file
@ -0,0 +1,211 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.actor
import language.postfixOps
import akka.testkit.{ AkkaSpec ⇒ MyFavoriteTestFrameWorkPlusAkkaTestKit }
//#test-code
import akka.actor.Props
class FSMDocSpec extends MyFavoriteTestFrameWorkPlusAkkaTestKit {
"simple finite state machine" must {
//#fsm-code-elided
//#simple-imports
import akka.actor.{ Actor, ActorRef, FSM }
import scala.concurrent.util.duration._
//#simple-imports
//#simple-events
// received events
case class SetTarget(ref: ActorRef)
case class Queue(obj: Any)
case object Flush
// sent events
case class Batch(obj: Seq[Any])
//#simple-events
//#simple-state
// states
sealed trait State
case object Idle extends State
case object Active extends State
sealed trait Data
case object Uninitialized extends Data
case class Todo(target: ActorRef, queue: Seq[Any]) extends Data
//#simple-state
//#simple-fsm
class Buncher extends Actor with FSM[State, Data] {
//#fsm-body
startWith(Idle, Uninitialized)
//#when-syntax
when(Idle) {
case Event(SetTarget(ref), Uninitialized) ⇒
stay using Todo(ref, Vector.empty)
}
//#when-syntax
//#transition-elided
onTransition {
case Active -> Idle ⇒
stateData match {
case Todo(ref, queue) ⇒ ref ! Batch(queue)
}
}
//#transition-elided
//#when-syntax
when(Active, stateTimeout = 1 second) {
case Event(Flush | StateTimeout, t: Todo) ⇒
goto(Idle) using t.copy(queue = Vector.empty)
}
//#when-syntax
//#unhandled-elided
whenUnhandled {
// common code for both states
case Event(Queue(obj), t @ Todo(_, v)) ⇒
goto(Active) using t.copy(queue = v :+ obj)
case Event(e, s) ⇒
log.warning("received unhandled request {} in state {}/{}", e, stateName, s)
stay
}
//#unhandled-elided
//#fsm-body
initialize
}
//#simple-fsm
object DemoCode {
trait StateType
case object SomeState extends StateType
case object Processing extends StateType
case object Error extends StateType
case object Idle extends StateType
case object Active extends StateType
class Dummy extends Actor with FSM[StateType, Int] {
class X
val newData = 42
object WillDo
object Tick
//#modifier-syntax
when(SomeState) {
case Event(msg, _) ⇒
goto(Processing) using (newData) forMax (5 seconds) replying (WillDo)
}
//#modifier-syntax
//#transition-syntax
onTransition {
case Idle -> Active ⇒ setTimer("timeout", Tick, 1 second, true)
case Active -> _ ⇒ cancelTimer("timeout")
case x -> Idle ⇒ log.info("entering Idle from " + x)
}
//#transition-syntax
//#alt-transition-syntax
onTransition(handler _)
def handler(from: StateType, to: StateType) {
// handle it here ...
}
//#alt-transition-syntax
//#stop-syntax
when(Error) {
case Event("stop", _)
// do cleanup ...
stop()
}
//#stop-syntax
//#transform-syntax
when(SomeState)(transform {
case Event(bytes: Array[Byte], read) ⇒ stay using (read + bytes.length)
case Event(bytes: List[Byte], read) ⇒ stay using (read + bytes.size)
} using {
case s @ FSM.State(state, read, timeout, stopReason, replies) if read > 1000 ⇒
goto(Processing)
})
//#transform-syntax
//#alt-transform-syntax
val processingTrigger: PartialFunction[State, State] = {
case s @ FSM.State(state, read, timeout, stopReason, replies) if read > 1000 ⇒
goto(Processing)
}
when(SomeState)(transform {
case Event(bytes: Array[Byte], read) ⇒ stay using (read + bytes.length)
case Event(bytes: List[Byte], read) ⇒ stay using (read + bytes.size)
} using processingTrigger)
//#alt-transform-syntax
//#termination-syntax
onTermination {
case StopEvent(FSM.Normal, state, data) ⇒ // ...
case StopEvent(FSM.Shutdown, state, data) ⇒ // ...
case StopEvent(FSM.Failure(cause), state, data) ⇒ // ...
}
//#termination-syntax
//#unhandled-syntax
whenUnhandled {
case Event(x: X, data) ⇒
log.info("Received unhandled event: " + x)
stay
case Event(msg, _) ⇒
log.warning("Received unknown event: " + msg)
goto(Error)
}
//#unhandled-syntax
}
//#logging-fsm
import akka.actor.LoggingFSM
class MyFSM extends Actor with LoggingFSM[StateType, Data] {
//#body-elided
override def logDepth = 12
onTermination {
case StopEvent(FSM.Failure(_), state, data) ⇒
val lastEvents = getLog.mkString("\n\t")
log.warning("Failure in state " + state + " with data " + data + "\n" +
"Events leading up to this point:\n\t" + lastEvents)
}
// ...
//#body-elided
}
//#logging-fsm
}
//#fsm-code-elided
"batch correctly" in {
val buncher = system.actorOf(Props(new Buncher))
buncher ! SetTarget(testActor)
buncher ! Queue(42)
buncher ! Queue(43)
expectMsg(Batch(Seq(42, 43)))
buncher ! Queue(44)
buncher ! Flush
buncher ! Queue(45)
expectMsg(Batch(Seq(44)))
expectMsg(Batch(Seq(45)))
}
"batch not if uninitialized" in {
val buncher = system.actorOf(Props(new Buncher))
buncher ! Queue(42)
expectNoMsg
}
}
}
//#test-code
View file
@ -0,0 +1,294 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.actor
import language.postfixOps
//#all
//#imports
import akka.actor._
import akka.actor.SupervisorStrategy._
import scala.concurrent.util.duration._
import scala.concurrent.util.Duration
import akka.util.Timeout
import akka.event.LoggingReceive
import akka.pattern.{ ask, pipe }
import com.typesafe.config.ConfigFactory
//#imports
/**
* Runs the sample
*/
object FaultHandlingDocSample extends App {
import Worker._
val config = ConfigFactory.parseString("""
akka.loglevel = DEBUG
akka.actor.debug {
receive = on
lifecycle = on
}
""")
val system = ActorSystem("FaultToleranceSample", config)
val worker = system.actorOf(Props[Worker], name = "worker")
val listener = system.actorOf(Props[Listener], name = "listener")
// start the work and listen on progress
// note that the listener is used as sender of the tell,
// i.e. it will receive replies from the worker
worker.tell(Start, sender = listener)
}
/**
* Listens on progress from the worker and shuts down the system when enough
* work has been done.
*/
class Listener extends Actor with ActorLogging {
import Worker._
// If we don't get any progress within 15 seconds then the service is unavailable
context.setReceiveTimeout(15 seconds)
def receive = {
case Progress(percent) ⇒
log.info("Current progress: {} %", percent)
if (percent >= 100.0) {
log.info("That's all, shutting down")
context.system.shutdown()
}
case ReceiveTimeout ⇒
// No progress within 15 seconds, ServiceUnavailable
log.error("Shutting down due to unavailable service")
context.system.shutdown()
}
}
//#messages
object Worker {
case object Start
case object Do
case class Progress(percent: Double)
}
//#messages
/**
* Worker performs some work when it receives the `Start` message.
* It will continuously notify the sender of the `Start` message
* of current ``Progress``. The `Worker` supervises the `CounterService`.
*/
class Worker extends Actor with ActorLogging {
import Worker._
import CounterService._
implicit val askTimeout = Timeout(5 seconds)
// Stop the CounterService child if it throws ServiceUnavailable
override val supervisorStrategy = OneForOneStrategy() {
case _: CounterService.ServiceUnavailable ⇒ Stop
}
// The sender of the initial Start message will continuously be notified about progress
var progressListener: Option[ActorRef] = None
val counterService = context.actorOf(Props[CounterService], name = "counter")
val totalCount = 51
import context.dispatcher // Use this Actor's Dispatcher as ExecutionContext
def receive = LoggingReceive {
case Start if progressListener.isEmpty ⇒
progressListener = Some(sender)
context.system.scheduler.schedule(Duration.Zero, 1 second, self, Do)
case Do ⇒
counterService ! Increment(1)
counterService ! Increment(1)
counterService ! Increment(1)
// Send current progress to the initial sender
counterService ? GetCurrentCount map {
case CurrentCount(_, count) ⇒ Progress(100.0 * count / totalCount)
} pipeTo progressListener.get
}
}
//#messages
object CounterService {
case class Increment(n: Int)
case object GetCurrentCount
case class CurrentCount(key: String, count: Long)
class ServiceUnavailable(msg: String) extends RuntimeException(msg)
private case object Reconnect
}
//#messages
/**
* Adds the value received in `Increment` message to a persistent
* counter. Replies with `CurrentCount` when it is asked for `CurrentCount`.
* `CounterService` supervises `Storage` and `Counter`.
*/
class CounterService extends Actor {
import CounterService._
import Counter._
import Storage._
// Restart the storage child when StorageException is thrown.
// After 3 restarts within 5 seconds it will be stopped.
override val supervisorStrategy = OneForOneStrategy(maxNrOfRetries = 3, withinTimeRange = 5 seconds) {
case _: Storage.StorageException ⇒ Restart
}
val key = self.path.name
var storage: Option[ActorRef] = None
var counter: Option[ActorRef] = None
var backlog = IndexedSeq.empty[(ActorRef, Any)]
val MaxBacklog = 10000
import context.dispatcher // Use this Actor's Dispatcher as ExecutionContext
override def preStart() {
initStorage()
}
/**
* The child storage is restarted in case of failure, but after 3 restarts,
* if it is still failing, it will be stopped. Better to back off than to keep
* failing. When it has been stopped we will schedule a Reconnect after a delay.
* Watch the child so we receive Terminated message when it has been terminated.
*/
def initStorage() {
storage = Some(context.watch(context.actorOf(Props[Storage], name = "storage")))
// Tell the counter, if any, to use the new storage
counter foreach { _ ! UseStorage(storage) }
// We need the initial value to be able to operate
storage.get ! Get(key)
}
def receive = LoggingReceive {
case Entry(k, v) if k == key && counter == None ⇒
// Reply from Storage of the initial value, now we can create the Counter
val c = context.actorOf(Props(new Counter(key, v)))
counter = Some(c)
// Tell the counter to use current storage
c ! UseStorage(storage)
// and send the buffered backlog to the counter
for ((replyTo, msg) ← backlog) c.tell(msg, sender = replyTo)
backlog = IndexedSeq.empty
case msg @ Increment(n) ⇒ forwardOrPlaceInBacklog(msg)
case msg @ GetCurrentCount ⇒ forwardOrPlaceInBacklog(msg)
case Terminated(actorRef) if Some(actorRef) == storage ⇒
// After 3 restarts the storage child is stopped.
// We receive Terminated because we watch the child, see initStorage.
storage = None
// Tell the counter that there is no storage for the moment
counter foreach { _ ! UseStorage(None) }
// Try to re-establish storage after a while
context.system.scheduler.scheduleOnce(10 seconds, self, Reconnect)
case Reconnect ⇒
// Re-establish storage after the scheduled delay
initStorage()
}
def forwardOrPlaceInBacklog(msg: Any) {
// We need the initial value from storage before we can start delegating to the counter.
// Before that we place the messages in a backlog, to be sent to the counter when
// it is initialized.
counter match {
case Some(c) ⇒ c forward msg
case None ⇒
if (backlog.size >= MaxBacklog)
throw new ServiceUnavailable("CounterService not available, lack of initial value")
backlog = backlog :+ (sender, msg)
}
}
}
//#messages
object Counter {
case class UseStorage(storage: Option[ActorRef])
}
//#messages
/**
* The in memory count variable that will send current
* value to the `Storage`, if there is any storage
* available at the moment.
*/
class Counter(key: String, initialValue: Long) extends Actor {
import Counter._
import CounterService._
import Storage._
var count = initialValue
var storage: Option[ActorRef] = None
def receive = LoggingReceive {
case UseStorage(s) ⇒
storage = s
storeCount()
case Increment(n) ⇒
count += n
storeCount()
case GetCurrentCount ⇒
sender ! CurrentCount(key, count)
}
def storeCount() {
// Delegate dangerous work, to protect our valuable state.
// We can continue without storage.
storage foreach { _ ! Store(Entry(key, count)) }
}
}
//#messages
object Storage {
case class Store(entry: Entry)
case class Get(key: String)
case class Entry(key: String, value: Long)
class StorageException(msg: String) extends RuntimeException(msg)
}
//#messages
/**
* Saves key/value pairs to persistent storage when receiving `Store` message.
* Replies with current value when receiving `Get` message.
* Will throw StorageException if the underlying data store is out of order.
*/
class Storage extends Actor {
import Storage._
val db = DummyDB
def receive = LoggingReceive {
case Store(Entry(key, count)) ⇒ db.save(key, count)
case Get(key) ⇒ sender ! Entry(key, db.load(key).getOrElse(0L))
}
}
//#dummydb
object DummyDB {
import Storage.StorageException
private var db = Map[String, Long]()
@throws(classOf[StorageException])
def save(key: String, value: Long): Unit = synchronized {
if (11 <= value && value <= 14) throw new StorageException("Simulated store failure " + value)
db += (key -> value)
}
@throws(classOf[StorageException])
def load(key: String): Option[Long] = synchronized {
db.get(key)
}
}
//#dummydb
//#all
View file
@ -0,0 +1,156 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.actor
import language.postfixOps
//#testkit
import akka.testkit.{ AkkaSpec, ImplicitSender, EventFilter }
import akka.actor.{ ActorRef, Props, Terminated }
//#testkit
object FaultHandlingDocSpec {
//#supervisor
//#child
import akka.actor.Actor
//#child
//#supervisor
//#supervisor
class Supervisor extends Actor {
//#strategy
import akka.actor.OneForOneStrategy
import akka.actor.SupervisorStrategy._
import scala.concurrent.util.duration._
override val supervisorStrategy = OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1 minute) {
case _: ArithmeticException ⇒ Resume
case _: NullPointerException ⇒ Restart
case _: IllegalArgumentException ⇒ Stop
case _: Exception ⇒ Escalate
}
//#strategy
def receive = {
case p: Props ⇒ sender ! context.actorOf(p)
}
}
//#supervisor
//#supervisor2
class Supervisor2 extends Actor {
//#strategy2
import akka.actor.OneForOneStrategy
import akka.actor.SupervisorStrategy._
import scala.concurrent.util.duration._
override val supervisorStrategy = OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1 minute) {
case _: ArithmeticException ⇒ Resume
case _: NullPointerException ⇒ Restart
case _: IllegalArgumentException ⇒ Stop
case _: Exception ⇒ Escalate
}
//#strategy2
def receive = {
case p: Props ⇒ sender ! context.actorOf(p)
}
// override default to kill all children during restart
override def preRestart(cause: Throwable, msg: Option[Any]) {}
}
//#supervisor2
//#child
class Child extends Actor {
var state = 0
def receive = {
case ex: Exception ⇒ throw ex
case x: Int ⇒ state = x
case "get" ⇒ sender ! state
}
}
//#child
}
//#testkit
class FaultHandlingDocSpec extends AkkaSpec with ImplicitSender {
//#testkit
import FaultHandlingDocSpec._
//#testkit
"A supervisor" must {
"apply the chosen strategy for its child" in {
//#testkit
//#create
val supervisor = system.actorOf(Props[Supervisor], "supervisor")
supervisor ! Props[Child]
val child = expectMsgType[ActorRef] // retrieve answer from TestKit's testActor
//#create
EventFilter[ArithmeticException](occurrences = 1) intercept {
//#resume
child ! 42 // set state to 42
child ! "get"
expectMsg(42)
child ! new ArithmeticException // crash it
child ! "get"
expectMsg(42)
//#resume
}
EventFilter[NullPointerException](occurrences = 1) intercept {
//#restart
child ! new NullPointerException // crash it harder
child ! "get"
expectMsg(0)
//#restart
}
EventFilter[IllegalArgumentException](occurrences = 1) intercept {
//#stop
watch(child) // have testActor watch child
child ! new IllegalArgumentException // break it
expectMsgPF() {
case t @ Terminated(`child`) if t.existenceConfirmed ⇒ ()
}
child.isTerminated must be(true)
//#stop
}
EventFilter[Exception]("CRASH", occurrences = 4) intercept {
//#escalate-kill
supervisor ! Props[Child] // create new child
val child2 = expectMsgType[ActorRef]
watch(child2)
child2 ! "get" // verify it is alive
expectMsg(0)
child2 ! new Exception("CRASH") // escalate failure
expectMsgPF() {
case t @ Terminated(`child2`) if t.existenceConfirmed ⇒ ()
}
//#escalate-kill
//#escalate-restart
val supervisor2 = system.actorOf(Props[Supervisor2], "supervisor2")
supervisor2 ! Props[Child]
val child3 = expectMsgType[ActorRef]
child3 ! 23
child3 ! "get"
expectMsg(23)
child3 ! new Exception("CRASH")
child3 ! "get"
expectMsg(0)
//#escalate-restart
}
//#testkit
// code here
}
}
}
//#testkit
View file
@ -0,0 +1,64 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.actor
import language.postfixOps
//#imports1
import akka.actor.Actor
import akka.actor.Props
import scala.concurrent.util.duration._
//#imports1
import org.scalatest.{ BeforeAndAfterAll, WordSpec }
import org.scalatest.matchers.MustMatchers
import akka.testkit._
class SchedulerDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
"schedule a one-off task" in {
//#schedule-one-off-message
//Use the system's dispatcher as ExecutionContext
import system.dispatcher
//Schedules to send the "foo"-message to the testActor after 50ms
system.scheduler.scheduleOnce(50 milliseconds, testActor, "foo")
//#schedule-one-off-message
expectMsg(1 second, "foo")
//#schedule-one-off-thunk
//Schedules a function to be executed (send the current time) to the testActor after 50ms
system.scheduler.scheduleOnce(50 milliseconds) {
testActor ! System.currentTimeMillis
}
//#schedule-one-off-thunk
}
"schedule a recurring task" in {
//#schedule-recurring
val Tick = "tick"
val tickActor = system.actorOf(Props(new Actor {
def receive = {
case Tick ⇒ //Do something
}
}))
//Use system's dispatcher as ExecutionContext
import system.dispatcher
//This will schedule to send the Tick-message
//to the tickActor after 0ms repeating every 50ms
val cancellable =
system.scheduler.schedule(0 milliseconds,
50 milliseconds,
tickActor,
Tick)
//This cancels further Ticks to be sent
cancellable.cancel()
//#schedule-recurring
system.stop(tickActor)
}
}
View file
@ -0,0 +1,179 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.actor
import language.postfixOps
//#imports
import scala.concurrent.{ Promise, Future, Await }
import scala.concurrent.util.duration._
import akka.actor.{ ActorContext, TypedActor, TypedProps }
//#imports
import org.scalatest.{ BeforeAndAfterAll, WordSpec }
import org.scalatest.matchers.MustMatchers
import akka.testkit._
//#typed-actor-iface
trait Squarer {
//#typed-actor-iface-methods
def squareDontCare(i: Int): Unit //fire-forget
def square(i: Int): Future[Int] //non-blocking send-request-reply
def squareNowPlease(i: Int): Option[Int] //blocking send-request-reply
def squareNow(i: Int): Int //blocking send-request-reply
//#typed-actor-iface-methods
}
//#typed-actor-iface
//#typed-actor-impl
class SquarerImpl(val name: String) extends Squarer {
def this() = this("default")
//#typed-actor-impl-methods
import TypedActor.dispatcher //So we can create Promises
def squareDontCare(i: Int): Unit = i * i //Nobody cares :(
def square(i: Int): Future[Int] = Promise.successful(i * i).future
def squareNowPlease(i: Int): Option[Int] = Some(i * i)
def squareNow(i: Int): Int = i * i
//#typed-actor-impl-methods
}
//#typed-actor-impl
import java.lang.String.{ valueOf ⇒ println } //Mr funny man avoids printing to stdout AND keeping docs alright
//#typed-actor-supercharge
trait Foo {
def doFoo(times: Int): Unit = println("doFoo(" + times + ")")
}
trait Bar {
import TypedActor.dispatcher //So we have an implicit dispatcher for our Promise
def doBar(str: String): Future[String] = Promise.successful(str.toUpperCase).future
}
class FooBar extends Foo with Bar
//#typed-actor-supercharge
class TypedActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
"get the TypedActor extension" in {
val someReference: AnyRef = null
try {
//#typed-actor-extension-tools
import akka.actor.TypedActor
//Returns the Typed Actor Extension
val extension = TypedActor(system) //system is an instance of ActorSystem
//Returns whether the reference is a Typed Actor Proxy or not
TypedActor(system).isTypedActor(someReference)
//Returns the backing Akka Actor behind an external Typed Actor Proxy
TypedActor(system).getActorRefFor(someReference)
//Returns the current ActorContext,
// method only valid within methods of a TypedActor implementation
val c: ActorContext = TypedActor.context
//Returns the external proxy of the current Typed Actor,
// method only valid within methods of a TypedActor implementation
val s: Squarer = TypedActor.self[Squarer]
//Returns a contextual instance of the Typed Actor Extension
//this means that if you create other Typed Actors with this,
//they will become children to the current Typed Actor.
TypedActor(TypedActor.context)
//#typed-actor-extension-tools
} catch {
case e: Exception ⇒ //dun care
}
}
"create a typed actor" in {
//#typed-actor-create1
val mySquarer: Squarer =
TypedActor(system).typedActorOf(TypedProps[SquarerImpl]())
//#typed-actor-create1
//#typed-actor-create2
val otherSquarer: Squarer =
TypedActor(system).typedActorOf(TypedProps(classOf[Squarer], new SquarerImpl("foo")), "name")
//#typed-actor-create2
//#typed-actor-calls
//#typed-actor-call-oneway
mySquarer.squareDontCare(10)
//#typed-actor-call-oneway
//#typed-actor-call-future
val fSquare = mySquarer.square(10) //A Future[Int]
//#typed-actor-call-future
//#typed-actor-call-option
val oSquare = mySquarer.squareNowPlease(10) //Option[Int]
//#typed-actor-call-option
//#typed-actor-call-strict
val iSquare = mySquarer.squareNow(10) //Int
//#typed-actor-call-strict
//#typed-actor-calls
Await.result(fSquare, 3 seconds) must be === 100
oSquare must be === Some(100)
iSquare must be === 100
//#typed-actor-stop
TypedActor(system).stop(mySquarer)
//#typed-actor-stop
//#typed-actor-poisonpill
TypedActor(system).poisonPill(otherSquarer)
//#typed-actor-poisonpill
}
"proxy any ActorRef" in {
//#typed-actor-remote
val typedActor: Foo with Bar =
TypedActor(system).
typedActorOf(
TypedProps[FooBar],
system.actorFor("akka://SomeSystem@somehost:2552/user/some/foobar"))
//Use "typedActor" as a FooBar
//#typed-actor-remote
}
"create hierarchies" in {
try {
//#typed-actor-hierarchy
//Inside your Typed Actor
val childSquarer: Squarer = TypedActor(TypedActor.context).typedActorOf(TypedProps[SquarerImpl]())
//Use "childSquarer" as a Squarer
//#typed-actor-hierarchy
} catch {
case e: Exception ⇒ //ignore
}
}
"supercharge" in {
//#typed-actor-supercharge-usage
val awesomeFooBar: Foo with Bar = TypedActor(system).typedActorOf(TypedProps[FooBar]())
awesomeFooBar.doFoo(10)
val f = awesomeFooBar.doBar("yes")
TypedActor(system).poisonPill(awesomeFooBar)
//#typed-actor-supercharge-usage
Await.result(f, 3 seconds) must be === "YES"
}
}
View file
@ -0,0 +1,50 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.actor
import akka.actor._
import scala.collection.mutable.ListBuffer
/**
* Requirements are as follows:
* The first thing the actor needs to do, is to subscribe to a channel of events,
* Then it must replay (process) all "old" events
* Then it has to wait for a GoAhead signal to begin processing the new events
* It mustn't "miss" events that happen between catching up with the old events and getting the GoAhead signal
*/
class UnnestedReceives extends Actor {
import context.become
//If you need to store sender/senderFuture you can change it to ListBuffer[(Any, Channel)]
val queue = new ListBuffer[Any]()
//This method processes a message/event
def process(msg: Any): Unit = println("processing: " + msg)
//This method subscribes the actor to the event bus
def subscribe() {} //Your external stuff
//This method retrieves all prior messages/events
def allOldMessages() = List()
override def preStart {
//We override preStart to be sure that the first message the actor gets is
//'Replay, that message will start to be processed _after_ the actor is started
self ! 'Replay
//Then we subscribe to the stream of messages/events
subscribe()
}
def receive = {
case 'Replay ⇒ //Our first message should be a 'Replay message, all others are invalid
allOldMessages() foreach process //Process all old messages/events
become { //Switch behavior to look for the GoAhead signal
case 'GoAhead ⇒ //When we get the GoAhead signal we process all our buffered messages/events
queue foreach process
queue.clear
become { //Then we change behaviour to process incoming messages/events as they arrive
case msg ⇒ process(msg)
}
case msg ⇒ //While we haven't gotten the GoAhead signal, buffer all incoming messages
queue += msg //Here you have full control, you can handle overflow etc
}
}
}
View file
@ -0,0 +1,192 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.agent
import language.postfixOps
import akka.agent.Agent
import scala.concurrent.util.duration._
import akka.util.Timeout
import akka.testkit._
class AgentDocSpec extends AkkaSpec {
"create and close" in {
//#create
import akka.agent.Agent
val agent = Agent(5)
//#create
//#close
agent.close()
//#close
}
"create with implicit system" in {
//#create-implicit-system
import akka.actor.ActorSystem
import akka.agent.Agent
implicit val system = ActorSystem("app")
val agent = Agent(5)
//#create-implicit-system
agent.close()
system.shutdown()
}
"create with explicit system" in {
//#create-explicit-system
import akka.actor.ActorSystem
import akka.agent.Agent
val system = ActorSystem("app")
val agent = Agent(5)(system)
//#create-explicit-system
agent.close()
system.shutdown()
}
"send and sendOff" in {
val agent = Agent(0)
import system.dispatcher
//#send
// send a value
agent send 7
// send a function
agent send (_ + 1)
agent send (_ * 2)
//#send
def longRunningOrBlockingFunction = (i: Int) ⇒ i * 1
//#send-off
// sendOff a function
agent sendOff (longRunningOrBlockingFunction)
//#send-off
val result = agent.await(Timeout(5 seconds))
result must be === 16
}
"read with apply" in {
val agent = Agent(0)
//#read-apply
val result = agent()
//#read-apply
result must be === 0
}
"read with get" in {
val agent = Agent(0)
//#read-get
val result = agent.get
//#read-get
result must be === 0
}
"read with await" in {
val agent = Agent(0)
//#read-await
import scala.concurrent.util.duration._
import akka.util.Timeout
implicit val timeout = Timeout(5 seconds)
val result = agent.await
//#read-await
result must be === 0
}
"read with future" in {
val agent = Agent(0)
//#read-future
import scala.concurrent.Await
implicit val timeout = Timeout(5 seconds)
val future = agent.future
val result = Await.result(future, timeout.duration)
//#read-future
result must be === 0
}
"transfer example" in {
//#transfer-example
import akka.agent.Agent
import scala.concurrent.util.duration._
import akka.util.Timeout
import scala.concurrent.stm._
def transfer(from: Agent[Int], to: Agent[Int], amount: Int): Boolean = {
atomic { txn ⇒
if (from.get < amount) false
else {
from send (_ - amount)
to send (_ + amount)
true
}
}
}
val from = Agent(100)
val to = Agent(20)
val ok = transfer(from, to, 50)
implicit val timeout = Timeout(5 seconds)
val fromValue = from.await // -> 50
val toValue = to.await // -> 70
//#transfer-example
fromValue must be === 50
toValue must be === 70
}
"monadic example" in {
//#monadic-example
val agent1 = Agent(3)
val agent2 = Agent(5)
// uses foreach
var result = 0
for (value ← agent1) {
result = value + 1
}
// uses map
val agent3 = for (value ← agent1) yield value + 1
// or using map directly
val agent4 = agent1 map (_ + 1)
// uses flatMap
val agent5 = for {
value1 ← agent1
value2 ← agent2
} yield value1 + value2
//#monadic-example
result must be === 4
agent3() must be === 4
agent4() must be === 4
agent5() must be === 8
agent1.close()
agent2.close()
agent3.close()
agent4.close()
agent5.close()
}
}
View file
@ -0,0 +1,73 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.camel
import language.postfixOps
object Consumers {
object Sample1 {
//#Consumer1
import akka.camel.{ CamelMessage, Consumer }
class Consumer1 extends Consumer {
def endpointUri = "file:data/input/actor"
def receive = {
case msg: CamelMessage ⇒ println("received %s" format msg.bodyAs[String])
}
}
//#Consumer1
}
object Sample2 {
//#Consumer2
import akka.camel.{ CamelMessage, Consumer }
class Consumer2 extends Consumer {
def endpointUri = "jetty:http://localhost:8877/camel/default"
def receive = {
case msg: CamelMessage ⇒ sender ! ("Hello %s" format msg.bodyAs[String])
}
}
//#Consumer2
}
object Sample3 {
//#Consumer3
import akka.camel.{ CamelMessage, Consumer }
import akka.camel.Ack
import akka.actor.Status.Failure
class Consumer3 extends Consumer {
override def autoAck = false
def endpointUri = "jms:queue:test"
def receive = {
case msg: CamelMessage ⇒
sender ! Ack
// on success
// ..
val someException = new Exception("e1")
// on failure
sender ! Failure(someException)
}
}
//#Consumer3
}
object Sample4 {
//#Consumer4
import akka.camel.{ CamelMessage, Consumer }
import scala.concurrent.util.duration._
class Consumer4 extends Consumer {
def endpointUri = "jetty:http://localhost:8877/camel/default"
override def replyTimeout = 500 millis
def receive = {
case msg: CamelMessage ⇒ sender ! ("Hello %s" format msg.bodyAs[String])
}
}
//#Consumer4
}
}
View file
@ -0,0 +1,63 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.camel
import akka.camel.CamelMessage
import akka.actor.Status.Failure
import language.existentials
object CustomRoute {
object Sample1 {
//#CustomRoute
import akka.actor.{ Props, ActorSystem, Actor, ActorRef }
import akka.camel.{ CamelMessage, CamelExtension }
import org.apache.camel.builder.RouteBuilder
import akka.camel._
class Responder extends Actor {
def receive = {
case msg: CamelMessage ⇒
sender ! (msg.mapBody {
body: String "received %s" format body
})
}
}
class CustomRouteBuilder(system: ActorSystem, responder: ActorRef) extends RouteBuilder {
def configure {
from("jetty:http://localhost:8877/camel/custom").to(responder)
}
}
val system = ActorSystem("some-system")
val camel = CamelExtension(system)
val responder = system.actorOf(Props[Responder], name = "TestResponder")
camel.context.addRoutes(new CustomRouteBuilder(system, responder))
//#CustomRoute
}
object Sample2 {
//#ErrorThrowingConsumer
import akka.camel.Consumer
import org.apache.camel.builder.Builder
import org.apache.camel.model.RouteDefinition
class ErrorThrowingConsumer(override val endpointUri: String) extends Consumer {
def receive = {
case msg: CamelMessage throw new Exception("error: %s" format msg.body)
}
override def onRouteDefinition(rd: RouteDefinition) = {
// Catch any exception and handle it by returning the exception message as response
rd.onException(classOf[Exception]).handled(true).transform(Builder.exceptionMessage).end
}
final override def preRestart(reason: Throwable, message: Option[Any]) {
sender ! Failure(reason)
}
}
//#ErrorThrowingConsumer
}
}
View file
@ -0,0 +1,50 @@
package docs.camel
object CustomRouteExample {
{
//#CustomRouteExample
import akka.actor.{ Actor, ActorRef, Props, ActorSystem }
import akka.camel.{ CamelMessage, Consumer, Producer, CamelExtension }
import org.apache.camel.builder.RouteBuilder
import org.apache.camel.{ Exchange, Processor }
class Consumer3(transformer: ActorRef) extends Actor with Consumer {
def endpointUri = "jetty:http://0.0.0.0:8877/camel/welcome"
def receive = {
// Forward a string representation of the message body to transformer
case msg: CamelMessage ⇒ transformer.forward(msg.bodyAs[String])
}
}
class Transformer(producer: ActorRef) extends Actor {
def receive = {
// example: transform message body "foo" to "- foo -" and forward result to producer
case msg: CamelMessage ⇒ producer.forward(msg.mapBody((body: String) ⇒ "- %s -" format body))
}
}
class Producer1 extends Actor with Producer {
def endpointUri = "direct:welcome"
}
class CustomRouteBuilder extends RouteBuilder {
def configure {
from("direct:welcome").process(new Processor() {
def process(exchange: Exchange) {
// Create a 'welcome' message from the input message
exchange.getOut.setBody("Welcome %s" format exchange.getIn.getBody)
}
})
}
}
// the below lines can be added to a Boot class, so that you can run the example from a MicroKernel
val system = ActorSystem("some-system")
val producer = system.actorOf(Props[Producer1])
val mediator = system.actorOf(Props(new Transformer(producer)))
val consumer = system.actorOf(Props(new Consumer3(mediator)))
CamelExtension(system).context.addRoutes(new CustomRouteBuilder)
//#CustomRouteExample
}
}
View file
@ -0,0 +1,47 @@
package docs.camel
object HttpExample {
{
//#HttpExample
import org.apache.camel.Exchange
import akka.actor.{ Actor, ActorRef, Props, ActorSystem }
import akka.camel.{ Producer, CamelMessage, Consumer }
import akka.actor.Status.Failure
class HttpConsumer(producer: ActorRef) extends Consumer {
def endpointUri = "jetty:http://0.0.0.0:8875/"
def receive = {
case msg ⇒ producer forward msg
}
}
class HttpProducer(transformer: ActorRef) extends Actor with Producer {
def endpointUri = "jetty://http://akka.io/?bridgeEndpoint=true"
override def transformOutgoingMessage(msg: Any) = msg match {
case msg: CamelMessage ⇒ msg.addHeaders(msg.headers(Set(Exchange.HTTP_PATH)))
}
override def routeResponse(msg: Any) { transformer forward msg }
}
class HttpTransformer extends Actor {
def receive = {
case msg: CamelMessage ⇒ sender ! (msg.mapBody { body: Array[Byte] ⇒ new String(body).replaceAll("Akka ", "AKKA ") })
case msg: Failure ⇒ sender ! msg
}
}
// Create the actors. This can be done in a Boot class so you can
// run the example in the MicroKernel. Just add the below three lines to your Boot class.
val system = ActorSystem("some-system")
val httpTransformer = system.actorOf(Props[HttpTransformer])
val httpProducer = system.actorOf(Props(new HttpProducer(httpTransformer)))
val httpConsumer = system.actorOf(Props(new HttpConsumer(httpProducer)))
//#HttpExample
}
}
View file
@ -0,0 +1,104 @@
package docs.camel
import akka.actor.{ Props, ActorSystem }
import akka.camel.CamelExtension
import language.postfixOps
import akka.util.Timeout
object Introduction {
def foo = {
//#Consumer-mina
import akka.camel.{ CamelMessage, Consumer }
class MyEndpoint extends Consumer {
def endpointUri = "mina:tcp://localhost:6200?textline=true"
def receive = {
case msg: CamelMessage ⇒ { /* ... */ }
case _ ⇒ { /* ... */ }
}
}
// start and expose actor via tcp
import akka.actor.{ ActorSystem, Props }
val system = ActorSystem("some-system")
val mina = system.actorOf(Props[MyEndpoint])
//#Consumer-mina
}
def bar = {
//#Consumer
import akka.camel.{ CamelMessage, Consumer }
class MyEndpoint extends Consumer {
def endpointUri = "jetty:http://localhost:8877/example"
def receive = {
case msg: CamelMessage ⇒ { /* ... */ }
case _ ⇒ { /* ... */ }
}
}
//#Consumer
}
def baz = {
//#Producer
import akka.actor.Actor
import akka.camel.{ Producer, Oneway }
import akka.actor.{ ActorSystem, Props }
class Orders extends Actor with Producer with Oneway {
def endpointUri = "jms:queue:Orders"
}
val sys = ActorSystem("some-system")
val orders = sys.actorOf(Props[Orders])
orders ! <order amount="100" currency="PLN" itemId="12345"/>
//#Producer
}
{
//#CamelExtension
val system = ActorSystem("some-system")
val camel = CamelExtension(system)
val camelContext = camel.context
val producerTemplate = camel.template
//#CamelExtension
}
{
//#CamelExtensionAddComponent
// import org.apache.activemq.camel.component.ActiveMQComponent
val system = ActorSystem("some-system")
val camel = CamelExtension(system)
val camelContext = camel.context
// camelContext.addComponent("activemq", ActiveMQComponent.activeMQComponent("vm://localhost?broker.persistent=false"))
//#CamelExtensionAddComponent
}
{
//#CamelActivation
import akka.camel.{ CamelMessage, Consumer }
import scala.concurrent.util.duration._
class MyEndpoint extends Consumer {
def endpointUri = "mina:tcp://localhost:6200?textline=true"
def receive = {
case msg: CamelMessage ⇒ { /* ... */ }
case _ ⇒ { /* ... */ }
}
}
val system = ActorSystem("some-system")
val camel = CamelExtension(system)
val actorRef = system.actorOf(Props[MyEndpoint])
// get a future reference to the activation of the endpoint of the Consumer Actor
val activationFuture = camel.activationFutureFor(actorRef)(timeout = 10 seconds, executor = system.dispatcher)
//#CamelActivation
//#CamelDeactivation
system.stop(actorRef)
// get a future reference to the deactivation of the endpoint of the Consumer Actor
val deactivationFuture = camel.deactivationFutureFor(actorRef)(timeout = 10 seconds, executor = system.dispatcher)
//#CamelDeactivation
}
}
View file
@ -0,0 +1,128 @@
package docs.camel
import akka.camel.CamelExtension
import language.postfixOps
object Producers {
object Sample1 {
//#Producer1
import akka.actor.Actor
import akka.actor.{ Props, ActorSystem }
import akka.camel.{ Producer, CamelMessage }
import akka.util.Timeout
class Producer1 extends Actor with Producer {
def endpointUri = "http://localhost:8080/news"
}
//#Producer1
//#AskProducer
import akka.pattern.ask
import scala.concurrent.util.duration._
implicit val timeout = Timeout(10 seconds)
val system = ActorSystem("some-system")
val producer = system.actorOf(Props[Producer1])
val future = producer.ask("some request").mapTo[CamelMessage]
//#AskProducer
}
object Sample2 {
//#RouteResponse
import akka.actor.{ Actor, ActorRef }
import akka.camel.{ Producer, CamelMessage }
import akka.actor.{ Props, ActorSystem }
class ResponseReceiver extends Actor {
def receive = {
case msg: CamelMessage ⇒
// do something with the forwarded response
}
}
class Forwarder(uri: String, target: ActorRef) extends Actor with Producer {
def endpointUri = uri
override def routeResponse(msg: Any) { target forward msg }
}
val system = ActorSystem("some-system")
val receiver = system.actorOf(Props[ResponseReceiver])
val forwardResponse = system.actorOf(Props(new Forwarder("http://localhost:8080/news/akka", receiver)))
// the Forwarder sends out a request to the web page and forwards the response to
// the ResponseReceiver
forwardResponse ! "some request"
//#RouteResponse
}
object Sample3 {
//#TransformOutgoingMessage
import akka.actor.Actor
import akka.camel.{ Producer, CamelMessage }
class Transformer(uri: String) extends Actor with Producer {
def endpointUri = uri
def upperCase(msg: CamelMessage) = msg.mapBody {
body: String ⇒ body.toUpperCase
}
override def transformOutgoingMessage(msg: Any) = msg match {
case msg: CamelMessage ⇒ upperCase(msg)
}
}
//#TransformOutgoingMessage
}
object Sample4 {
//#Oneway
import akka.actor.{ Actor, Props, ActorSystem }
import akka.camel.Producer
class OnewaySender(uri: String) extends Actor with Producer {
def endpointUri = uri
override def oneway: Boolean = true
}
val system = ActorSystem("some-system")
val producer = system.actorOf(Props(new OnewaySender("activemq:FOO.BAR")))
producer ! "Some message"
//#Oneway
}
object Sample5 {
//#Correlate
import akka.camel.{ Producer, CamelMessage }
import akka.actor.Actor
import akka.actor.{ Props, ActorSystem }
class Producer2 extends Actor with Producer {
def endpointUri = "activemq:FOO.BAR"
}
val system = ActorSystem("some-system")
val producer = system.actorOf(Props[Producer2])
producer ! CamelMessage("bar", Map(CamelMessage.MessageExchangeId -> "123"))
//#Correlate
}
object Sample6 {
//#ProducerTemplate
import akka.actor.Actor
class MyActor extends Actor {
def receive = {
case msg ⇒
val template = CamelExtension(context.system).template
template.sendBody("direct:news", msg)
}
}
//#ProducerTemplate
}
object Sample7 {
//#RequestProducerTemplate
import akka.actor.Actor
class MyActor extends Actor {
def receive = {
case msg ⇒
val template = CamelExtension(context.system).template
sender ! template.requestBody("direct:news", msg)
}
}
//#RequestProducerTemplate
}
}
View file
@ -0,0 +1,47 @@
package docs.camel
object PublishSubscribe {
{
//#PubSub
import akka.actor.{ Actor, ActorRef, ActorSystem, Props }
import akka.camel.{ Producer, CamelMessage, Consumer }
class Subscriber(name: String, uri: String) extends Actor with Consumer {
def endpointUri = uri
def receive = {
case msg: CamelMessage ⇒ println("%s received: %s" format (name, msg.body))
}
}
class Publisher(name: String, uri: String) extends Actor with Producer {
def endpointUri = uri
// one-way communication with JMS
override def oneway = true
}
class PublisherBridge(uri: String, publisher: ActorRef) extends Actor with Consumer {
def endpointUri = uri
def receive = {
case msg: CamelMessage ⇒ {
publisher ! msg.bodyAs[String]
sender ! ("message published")
}
}
}
// Add below to a Boot class
// Setup publish/subscribe example
val system = ActorSystem("some-system")
val jmsUri = "jms:topic:test"
val jmsSubscriber1 = system.actorOf(Props(new Subscriber("jms-subscriber-1", jmsUri)))
val jmsSubscriber2 = system.actorOf(Props(new Subscriber("jms-subscriber-2", jmsUri)))
val jmsPublisher = system.actorOf(Props(new Publisher("jms-publisher", jmsUri)))
val jmsPublisherBridge = system.actorOf(Props(new PublisherBridge("jetty:http://0.0.0.0:8877/camel/pub/jms", jmsPublisher)))
//#PubSub
}
}
View file
@ -0,0 +1,30 @@
package docs.camel
object QuartzExample {
//#Quartz
import akka.actor.{ ActorSystem, Props }
import akka.camel.{ Consumer }
class MyQuartzActor extends Consumer {
def endpointUri = "quartz://example?cron=0/2+*+*+*+*+?"
def receive = {
case msg ⇒ println("==============> received %s " format msg)
} // end receive
} // end MyQuartzActor
object MyQuartzActor {
def main(str: Array[String]) {
val system = ActorSystem("my-quartz-system")
system.actorOf(Props[MyQuartzActor])
} // end main
} // end MyQuartzActor
//#Quartz
}
View file
@ -0,0 +1,73 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.dataflow
import language.postfixOps
import scala.concurrent.util.duration._
import scala.concurrent.{ Await, Future, Promise }
import org.scalatest.WordSpec
import org.scalatest.matchers.MustMatchers
import scala.util.{ Try, Failure, Success }
class DataflowDocSpec extends WordSpec with MustMatchers {
//#import-akka-dataflow
import akka.dataflow._ //to get the flow method and implicit conversions
//#import-akka-dataflow
//#import-global-implicit
import scala.concurrent.ExecutionContext.Implicits.global
//#import-global-implicit
"demonstrate flow using hello world" in {
def println[T](any: Try[T]): Unit = any.get must be === "Hello world!"
//#simplest-hello-world
flow { "Hello world!" } onComplete println
//#simplest-hello-world
//#nested-hello-world-a
flow {
val f1 = flow { "Hello" }
f1() + " world!"
} onComplete println
//#nested-hello-world-a
//#nested-hello-world-b
flow {
val f1 = flow { "Hello" }
val f2 = flow { "world!" }
f1() + " " + f2()
} onComplete println
//#nested-hello-world-b
}
"demonstrate the use of dataflow variables" in {
def println[T](any: Try[T]): Unit = any.get must be === 20
//#dataflow-variable-a
flow {
val v1, v2 = Promise[Int]()
// v1 will become the value of v2 + 10 when v2 gets a value
v1 << v2() + 10
v2 << flow { 5 } // As you can see, no blocking!
v1() + v2()
} onComplete println
//#dataflow-variable-a
}
"demonstrate the difference between for and flow" in {
def println[T](any: Try[T]): Unit = any.get must be === 2
//#for-vs-flow
val f1, f2 = Future { 1 }
val usingFor = for { v1 ← f1; v2 ← f2 } yield v1 + v2
val usingFlow = flow { f1() + f2() }
usingFor onComplete println
usingFlow onComplete println
//#for-vs-flow
}
}

View file

@@ -0,0 +1,228 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.dispatcher
import language.postfixOps
import org.scalatest.{ BeforeAndAfterAll, WordSpec }
import org.scalatest.matchers.MustMatchers
import akka.testkit.AkkaSpec
import akka.event.Logging
import akka.event.LoggingAdapter
import scala.concurrent.util.duration._
import akka.actor.{ Props, Actor, PoisonPill, ActorSystem }
object DispatcherDocSpec {
val config = """
//#my-dispatcher-config
my-dispatcher {
# Dispatcher is the name of the event-based dispatcher
type = Dispatcher
# What kind of ExecutionService to use
executor = "fork-join-executor"
# Configuration for the fork join pool
fork-join-executor {
# Min number of threads to cap factor-based parallelism number to
parallelism-min = 2
# Parallelism (threads) ... ceil(available processors * factor)
parallelism-factor = 2.0
# Max number of threads to cap factor-based parallelism number to
parallelism-max = 10
}
# Throughput defines the maximum number of messages to be
# processed per actor before the thread jumps to the next actor.
# Set to 1 for as fair as possible.
throughput = 100
}
//#my-dispatcher-config
//#my-thread-pool-dispatcher-config
my-thread-pool-dispatcher {
# Dispatcher is the name of the event-based dispatcher
type = Dispatcher
# What kind of ExecutionService to use
executor = "thread-pool-executor"
# Configuration for the thread pool
thread-pool-executor {
# minimum number of threads to cap factor-based core number to
core-pool-size-min = 2
# No of core threads ... ceil(available processors * factor)
core-pool-size-factor = 2.0
# maximum number of threads to cap factor-based number to
core-pool-size-max = 10
}
# Throughput defines the maximum number of messages to be
# processed per actor before the thread jumps to the next actor.
# Set to 1 for as fair as possible.
throughput = 100
}
//#my-thread-pool-dispatcher-config
//#my-pinned-dispatcher-config
my-pinned-dispatcher {
executor = "thread-pool-executor"
type = PinnedDispatcher
}
//#my-pinned-dispatcher-config
//#my-bounded-config
my-dispatcher-bounded-queue {
type = Dispatcher
executor = "thread-pool-executor"
thread-pool-executor {
core-pool-size-factor = 8.0
max-pool-size-factor = 16.0
}
# Specifies the bounded capacity of the mailbox queue
mailbox-capacity = 100
throughput = 3
}
//#my-bounded-config
//#my-balancing-config
my-balancing-dispatcher {
type = BalancingDispatcher
executor = "thread-pool-executor"
thread-pool-executor {
core-pool-size-factor = 8.0
max-pool-size-factor = 16.0
}
}
//#my-balancing-config
//#prio-dispatcher-config
prio-dispatcher {
mailbox-type = "docs.dispatcher.DispatcherDocSpec$MyPrioMailbox"
}
//#prio-dispatcher-config
//#prio-dispatcher-config-java
prio-dispatcher-java {
mailbox-type = "docs.dispatcher.DispatcherDocTestBase$MyPrioMailbox"
//Other dispatcher configuration goes here
}
//#prio-dispatcher-config-java
"""
//#prio-mailbox
import akka.dispatch.PriorityGenerator
import akka.dispatch.UnboundedPriorityMailbox
import com.typesafe.config.Config
// We inherit, in this case, from UnboundedPriorityMailbox
// and seed it with the priority generator
class MyPrioMailbox(settings: ActorSystem.Settings, config: Config) extends UnboundedPriorityMailbox(
// Create a new PriorityGenerator, lower prio means more important
PriorityGenerator {
// 'highpriority messages should be treated first if possible
case 'highpriority ⇒ 0
// 'lowpriority messages should be treated last if possible
case 'lowpriority ⇒ 2
// PoisonPill when no other left
case PoisonPill ⇒ 3
// We default to 1, which is in between high and low
case otherwise ⇒ 1
})
//#prio-mailbox
class MyActor extends Actor {
def receive = {
case x ⇒
}
}
//#mailbox-implementation-example
class MyUnboundedMailbox extends akka.dispatch.MailboxType {
import akka.actor.{ ActorRef, ActorSystem }
import com.typesafe.config.Config
import java.util.concurrent.ConcurrentLinkedQueue
import akka.dispatch.{
Envelope,
MessageQueue,
QueueBasedMessageQueue,
UnboundedMessageQueueSemantics
}
// This constructor signature must exist, it will be called by Akka
def this(settings: ActorSystem.Settings, config: Config) = this()
// The create method is called to create the MessageQueue
final override def create(owner: Option[ActorRef], system: Option[ActorSystem]): MessageQueue =
new QueueBasedMessageQueue with UnboundedMessageQueueSemantics {
final val queue = new ConcurrentLinkedQueue[Envelope]()
}
}
//#mailbox-implementation-example
}
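To put MyUnboundedMailbox to use, a dispatcher would reference it via mailbox-type, in the same way prio-dispatcher does above. A hedged configuration sketch; the dispatcher name is invented for illustration:
  my-custom-mailbox-dispatcher {
    mailbox-type = "docs.dispatcher.DispatcherDocSpec$MyUnboundedMailbox"
  }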
class DispatcherDocSpec extends AkkaSpec(DispatcherDocSpec.config) {
import DispatcherDocSpec.MyActor
"defining dispatcher" in {
val context = system
//#defining-dispatcher
import akka.actor.Props
val myActor =
context.actorOf(Props[MyActor].withDispatcher("my-dispatcher"), "myactor1")
//#defining-dispatcher
}
"defining dispatcher with bounded queue" in {
val dispatcher = system.dispatchers.lookup("my-dispatcher-bounded-queue")
}
"defining pinned dispatcher" in {
val context = system
//#defining-pinned-dispatcher
val myActor =
context.actorOf(Props[MyActor].withDispatcher("my-pinned-dispatcher"), "myactor2")
//#defining-pinned-dispatcher
}
"defining priority dispatcher" in {
//#prio-dispatcher
// We create a new Actor that just prints out what it processes
val a = system.actorOf(
Props(new Actor {
val log: LoggingAdapter = Logging(context.system, this)
self ! 'lowpriority
self ! 'lowpriority
self ! 'highpriority
self ! 'pigdog
self ! 'pigdog2
self ! 'pigdog3
self ! 'highpriority
self ! PoisonPill
def receive = {
case x ⇒ log.info(x.toString)
}
}).withDispatcher("prio-dispatcher"))
/*
Logs:
'highpriority
'highpriority
'pigdog
'pigdog2
'pigdog3
'lowpriority
'lowpriority
*/
//#prio-dispatcher
awaitCond(a.isTerminated, 5 seconds)
}
"defining balancing dispatcher" in {
val dispatcher = system.dispatchers.lookup("my-balancing-dispatcher")
}
}
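A looked-up dispatcher can also drive Futures directly, assuming (as in this Akka version) that a MessageDispatcher is a scala.concurrent.ExecutionContext. A minimal sketch under that assumption:
  implicit val ec: scala.concurrent.ExecutionContext =
    system.dispatchers.lookup("my-dispatcher")
  scala.concurrent.Future { "this runs on my-dispatcher" }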

View file

@@ -0,0 +1,99 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.event
import akka.testkit.AkkaSpec
import akka.actor.Actor
import akka.actor.Props
object LoggingDocSpec {
//#my-actor
import akka.event.Logging
class MyActor extends Actor {
val log = Logging(context.system, this)
override def preStart() = {
log.debug("Starting")
}
override def preRestart(reason: Throwable, message: Option[Any]) {
log.error(reason, "Restarting due to [{}] when processing [{}]",
reason.getMessage, message.getOrElse(""))
}
def receive = {
case "test" log.info("Received test")
case x log.warning("Received unknown message: {}", x)
}
}
//#my-actor
//#my-event-listener
import akka.event.Logging.InitializeLogger
import akka.event.Logging.LoggerInitialized
import akka.event.Logging.Error
import akka.event.Logging.Warning
import akka.event.Logging.Info
import akka.event.Logging.Debug
class MyEventListener extends Actor {
def receive = {
case InitializeLogger(_) ⇒ sender ! LoggerInitialized
case Error(cause, logSource, logClass, message) ⇒ // ...
case Warning(logSource, logClass, message) ⇒ // ...
case Info(logSource, logClass, message) ⇒ // ...
case Debug(logSource, logClass, message) ⇒ // ...
}
}
//#my-event-listener
//#my-source
import akka.event.LogSource
import akka.actor.ActorSystem
object MyType {
implicit val logSource: LogSource[AnyRef] = new LogSource[AnyRef] {
def genString(o: AnyRef): String = o.getClass.getName
override def getClazz(o: AnyRef): Class[_] = o.getClass
}
}
class MyType(system: ActorSystem) {
import MyType._
import akka.event.Logging
val log = Logging(system, this)
}
//#my-source
}
class LoggingDocSpec extends AkkaSpec {
import LoggingDocSpec.MyActor
"use a logging actor" in {
val myActor = system.actorOf(Props(new MyActor))
myActor ! "test"
}
"allow registration to dead letters" in {
//#deadletters
import akka.actor.{ Actor, DeadLetter, Props }
val listener = system.actorOf(Props(new Actor {
def receive = {
case d: DeadLetter ⇒ println(d)
}
}))
system.eventStream.subscribe(listener, classOf[DeadLetter])
//#deadletters
}
"demonstrate logging more arguments" in {
//#array
val args = Array("The", "brown", "fox", "jumps", 42)
system.log.debug("five parameters: {}, {}, {}, {}, {}", args)
//#array
}
}
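In the same vein as the dead-letter subscription above, unhandled messages are published to the event stream as akka.actor.UnhandledMessage and can be observed the same way. A sketch along those lines:
  import akka.actor.UnhandledMessage
  // `listener` would be an actor analogous to the DeadLetter listener above
  system.eventStream.subscribe(listener, classOf[UnhandledMessage])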

View file

@@ -0,0 +1,91 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.extension
import java.util.concurrent.atomic.AtomicLong
import akka.actor.Actor
import akka.testkit.AkkaSpec
//#extension
import akka.actor.Extension
class CountExtensionImpl extends Extension {
//Since this Extension is a shared instance
// per ActorSystem we need to be threadsafe
private val counter = new AtomicLong(0)
//This is the operation this Extension provides
def increment() = counter.incrementAndGet()
}
//#extension
//#extensionid
import akka.actor.ExtensionId
import akka.actor.ExtensionIdProvider
import akka.actor.ExtendedActorSystem
object CountExtension
extends ExtensionId[CountExtensionImpl]
with ExtensionIdProvider {
//The lookup method is required by ExtensionIdProvider,
// so we return ourselves here, this allows us
// to configure our extension to be loaded when
// the ActorSystem starts up
override def lookup = CountExtension
//This method will be called by Akka
// to instantiate our Extension
override def createExtension(system: ExtendedActorSystem) = new CountExtensionImpl
}
//#extensionid
object ExtensionDocSpec {
val config = """
//#config
akka {
extensions = ["docs.extension.CountExtension"]
}
//#config
"""
//#extension-usage-actor
class MyActor extends Actor {
def receive = {
case someMessage ⇒
CountExtension(context.system).increment()
}
}
//#extension-usage-actor
//#extension-usage-actor-trait
trait Counting { self: Actor ⇒
def increment() = CountExtension(context.system).increment()
}
class MyCounterActor extends Actor with Counting {
def receive = {
case someMessage ⇒ increment()
}
}
//#extension-usage-actor-trait
}
class ExtensionDocSpec extends AkkaSpec(ExtensionDocSpec.config) {
import ExtensionDocSpec._
"demonstrate how to create an extension in Scala" in {
//#extension-usage
CountExtension(system).increment
//#extension-usage
}
"demonstrate how to lookup a configured extension in Scala" in {
//#extension-lookup
system.extension(CountExtension)
//#extension-lookup
}
}

View file

@@ -0,0 +1,78 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.extension
//#imports
import akka.actor.Extension
import akka.actor.ExtensionId
import akka.actor.ExtensionIdProvider
import akka.actor.ExtendedActorSystem
import scala.concurrent.util.Duration
import com.typesafe.config.Config
import java.util.concurrent.TimeUnit
//#imports
import akka.actor.Actor
import akka.testkit.AkkaSpec
//#extension
class SettingsImpl(config: Config) extends Extension {
val DbUri: String = config.getString("myapp.db.uri")
val CircuitBreakerTimeout: Duration = Duration(config.getMilliseconds("myapp.circuit-breaker.timeout"), TimeUnit.MILLISECONDS)
}
//#extension
//#extensionid
object Settings extends ExtensionId[SettingsImpl] with ExtensionIdProvider {
override def lookup = Settings
override def createExtension(system: ExtendedActorSystem) = new SettingsImpl(system.settings.config)
}
//#extensionid
object SettingsExtensionDocSpec {
val config = """
//#config
myapp {
db {
uri = "mongodb://example1.com:27017,example2.com:27017"
}
circuit-breaker {
timeout = 30 seconds
}
}
//#config
"""
//#extension-usage-actor
class MyActor extends Actor {
val settings = Settings(context.system)
val connection = connect(settings.DbUri, settings.CircuitBreakerTimeout)
//#extension-usage-actor
def receive = {
case someMessage ⇒
}
def connect(dbUri: String, circuitBreakerTimeout: Duration) = {
"dummy"
}
}
}
class SettingsExtensionDocSpec extends AkkaSpec(SettingsExtensionDocSpec.config) {
"demonstrate how to create application specific settings extension in Scala" in {
//#extension-usage
val dbUri = Settings(system).DbUri
val circuitBreakerTimeout = Settings(system).CircuitBreakerTimeout
//#extension-usage
}
}
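The "30 seconds" value parses because Typesafe Config understands duration strings when read through getMilliseconds. A standalone sketch of that behaviour:
  import com.typesafe.config.ConfigFactory
  val c = ConfigFactory.parseString("myapp.circuit-breaker.timeout = 30 seconds")
  c.getMilliseconds("myapp.circuit-breaker.timeout") // 30000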

View file

@@ -0,0 +1,393 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.future
import language.postfixOps
import akka.testkit._
import akka.actor.{ Actor, Props }
import akka.actor.Status
import akka.util.Timeout
import scala.concurrent.util.duration._
import java.lang.IllegalStateException
import scala.concurrent.{ Await, ExecutionContext, Future, Promise }
import scala.util.{ Failure, Success }
object FutureDocSpec {
class MyActor extends Actor {
def receive = {
case x: String ⇒ sender ! x.toUpperCase
case x: Int if x < 0 ⇒ sender ! Status.Failure(new ArithmeticException("Negative values not supported"))
case x: Int ⇒ sender ! x
}
}
case object GetNext
class OddActor extends Actor {
var n = 1
def receive = {
case GetNext ⇒
sender ! n
n += 2
}
}
}
class FutureDocSpec extends AkkaSpec {
import FutureDocSpec._
import system.dispatcher
"demonstrate usage custom ExecutionContext" in {
val yourExecutorServiceGoesHere = java.util.concurrent.Executors.newSingleThreadExecutor()
//#diy-execution-context
import scala.concurrent.{ ExecutionContext, Promise }
implicit val ec = ExecutionContext.fromExecutorService(yourExecutorServiceGoesHere)
// Do stuff with your brand new shiny ExecutionContext
val f = Promise.successful("foo")
// Then shut your ExecutionContext down at some
// appropriate place in your program/application
ec.shutdown()
//#diy-execution-context
}
"demonstrate usage of blocking from actor" in {
val actor = system.actorOf(Props[MyActor])
val msg = "hello"
//#ask-blocking
import scala.concurrent.Await
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.util.duration._
implicit val timeout = Timeout(5 seconds)
val future = actor ? msg // enabled by the ask import
val result = Await.result(future, timeout.duration).asInstanceOf[String]
//#ask-blocking
result must be("HELLO")
}
"demonstrate usage of mapTo" in {
val actor = system.actorOf(Props[MyActor])
val msg = "hello"
implicit val timeout = Timeout(5 seconds)
//#map-to
import scala.concurrent.Future
import akka.pattern.ask
val future: Future[String] = ask(actor, msg).mapTo[String]
//#map-to
Await.result(future, timeout.duration) must be("HELLO")
}
"demonstrate usage of simple future eval" in {
//#future-eval
import scala.concurrent.Await
import scala.concurrent.Future
import scala.concurrent.util.duration._
val future = Future {
"Hello" + "World"
}
val result = Await.result(future, 1 second)
//#future-eval
result must be("HelloWorld")
}
"demonstrate usage of map" in {
//#map
val f1 = Future {
"Hello" + "World"
}
val f2 = f1 map { x ⇒
x.length
}
val result = Await.result(f2, 1 second)
result must be(10)
f1.value must be(Some(Success("HelloWorld")))
//#map
}
"demonstrate wrong usage of nested map" in {
//#wrong-nested-map
val f1 = Future {
"Hello" + "World"
}
val f2 = Future.successful(3)
val f3 = f1 map { x ⇒
f2 map { y ⇒
x.length * y
}
}
//#wrong-nested-map
Await.ready(f3, 1 second)
}
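What makes the nested map above "wrong" is the resulting type: the inner map produces another Future, so f3 is a Future[Future[Int]] rather than a flat Future[Int]. Spelled out as a sketch (with f1 and f2 as above):
  // nested map ⇒ nested type; flatMap (next example) flattens it
  val nested: Future[Future[Int]] = f1 map { x ⇒ f2 map { y ⇒ x.length * y } }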
"demonstrate usage of flatMap" in {
//#flat-map
val f1 = Future {
"Hello" + "World"
}
val f2 = Future.successful(3)
val f3 = f1 flatMap { x ⇒
f2 map { y ⇒
x.length * y
}
}
val result = Await.result(f3, 1 second)
result must be(30)
//#flat-map
}
"demonstrate usage of filter" in {
//#filter
val future1 = Future.successful(4)
val future2 = future1.filter(_ % 2 == 0)
val result = Await.result(future2, 1 second)
result must be(4)
val failedFilter = future1.filter(_ % 2 == 1).recover {
case m: NoSuchElementException ⇒ 0 //When filter fails, it will have a java.util.NoSuchElementException
}
val result2 = Await.result(failedFilter, 1 second)
result2 must be(0) //Can only be 0 when there was a MatchError
//#filter
}
"demonstrate usage of for comprehension" in {
//#for-comprehension
val f = for {
a ← Future(10 / 2) // 10 / 2 = 5
b ← Future(a + 1) // 5 + 1 = 6
c ← Future(a - 1) // 5 - 1 = 4
if c > 3 // Future.filter
} yield b * c // 6 * 4 = 24
// Note that the execution of futures a, b, and c
// are not done in parallel.
val result = Await.result(f, 1 second)
result must be(24)
//#for-comprehension
}
"demonstrate wrong way of composing" in {
val actor1 = system.actorOf(Props[MyActor])
val actor2 = system.actorOf(Props[MyActor])
val actor3 = system.actorOf(Props[MyActor])
val msg1 = 1
val msg2 = 2
implicit val timeout = Timeout(5 seconds)
import scala.concurrent.Await
import akka.pattern.ask
//#composing-wrong
val f1 = ask(actor1, msg1)
val f2 = ask(actor2, msg2)
val a = Await.result(f1, 1 second).asInstanceOf[Int]
val b = Await.result(f2, 1 second).asInstanceOf[Int]
val f3 = ask(actor3, (a + b))
val result = Await.result(f3, 1 second).asInstanceOf[Int]
//#composing-wrong
result must be(3)
}
"demonstrate composing" in {
val actor1 = system.actorOf(Props[MyActor])
val actor2 = system.actorOf(Props[MyActor])
val actor3 = system.actorOf(Props[MyActor])
val msg1 = 1
val msg2 = 2
implicit val timeout = Timeout(5 seconds)
import scala.concurrent.Await
import akka.pattern.ask
//#composing
val f1 = ask(actor1, msg1)
val f2 = ask(actor2, msg2)
val f3 = for {
a ← f1.mapTo[Int]
b ← f2.mapTo[Int]
c ← ask(actor3, (a + b)).mapTo[Int]
} yield c
val result = Await.result(f3, 1 second).asInstanceOf[Int]
//#composing
result must be(3)
}
"demonstrate usage of sequence with actors" in {
implicit val timeout = Timeout(5 seconds)
val oddActor = system.actorOf(Props[OddActor])
//#sequence-ask
// oddActor returns odd numbers sequentially from 1 as a List[Future[Int]]
val listOfFutures = List.fill(100)(akka.pattern.ask(oddActor, GetNext).mapTo[Int])
// now we have a Future[List[Int]]
val futureList = Future.sequence(listOfFutures)
// Find the sum of the odd numbers
val oddSum = Await.result(futureList.map(_.sum), 1 second).asInstanceOf[Int]
oddSum must be(10000)
//#sequence-ask
}
"demonstrate usage of sequence" in {
//#sequence
val futureList = Future.sequence((1 to 100).toList.map(x ⇒ Future(x * 2 - 1)))
val oddSum = Await.result(futureList.map(_.sum), 1 second).asInstanceOf[Int]
oddSum must be(10000)
//#sequence
}
"demonstrate usage of traverse" in {
//#traverse
val futureList = Future.traverse((1 to 100).toList)(x ⇒ Future(x * 2 - 1))
val oddSum = Await.result(futureList.map(_.sum), 1 second).asInstanceOf[Int]
oddSum must be(10000)
//#traverse
}
"demonstrate usage of fold" in {
//#fold
val futures = for (i ← 1 to 1000) yield Future(i * 2) // Create a sequence of Futures
val futureSum = Future.fold(futures)(0)(_ + _)
Await.result(futureSum, 1 second) must be(1001000)
//#fold
}
"demonstrate usage of reduce" in {
//#reduce
val futures = for (i ← 1 to 1000) yield Future(i * 2) // Create a sequence of Futures
val futureSum = Future.reduce(futures)(_ + _)
Await.result(futureSum, 1 second) must be(1001000)
//#reduce
}
"demonstrate usage of recover" in {
implicit val timeout = Timeout(5 seconds)
val actor = system.actorOf(Props[MyActor])
val msg1 = -1
//#recover
val future = akka.pattern.ask(actor, msg1) recover {
case e: ArithmeticException ⇒ 0
}
//#recover
Await.result(future, 1 second) must be(0)
}
"demonstrate usage of recoverWith" in {
implicit val timeout = Timeout(5 seconds)
val actor = system.actorOf(Props[MyActor])
val msg1 = -1
//#try-recover
val future = akka.pattern.ask(actor, msg1) recoverWith {
case e: ArithmeticException ⇒ Future.successful(0)
case foo: IllegalArgumentException ⇒ Future.failed[Int](new IllegalStateException("All br0ken!"))
}
//#try-recover
Await.result(future, 1 second) must be(0)
}
"demonstrate usage of zip" in {
val future1 = Future { "foo" }
val future2 = Future { "bar" }
//#zip
val future3 = future1 zip future2 map { case (a, b) ⇒ a + " " + b }
//#zip
Await.result(future3, 1 second) must be("foo bar")
}
"demonstrate usage of andThen" in {
def loadPage(s: String) = s
val url = "foo bar"
def log(cause: Throwable) = ()
def watchSomeTV = ()
//#and-then
val result = Future { loadPage(url) } andThen {
case Failure(exception) ⇒ log(exception)
} andThen {
case _ ⇒ watchSomeTV
}
//#and-then
Await.result(result, 1 second) must be("foo bar")
}
"demonstrate usage of fallbackTo" in {
val future1 = Future { "foo" }
val future2 = Future { "bar" }
val future3 = Future { "pigdog" }
//#fallback-to
val future4 = future1 fallbackTo future2 fallbackTo future3
//#fallback-to
Await.result(future4, 1 second) must be("foo")
}
"demonstrate usage of onSuccess & onFailure & onComplete" in {
{
val future = Future { "foo" }
//#onSuccess
future onSuccess {
case "bar" println("Got my bar alright!")
case x: String println("Got some random string: " + x)
}
//#onSuccess
Await.result(future, 1 second) must be("foo")
}
{
val future = Future.failed[String](new IllegalStateException("OHNOES"))
//#onFailure
future onFailure {
case ise: IllegalStateException if ise.getMessage == "OHNOES" ⇒
//OHNOES! We are in deep trouble, do something!
case e: Exception ⇒
//Do something else
}
//#onFailure
}
{
val future = Future { "foo" }
def doSomethingOnSuccess(r: String) = ()
def doSomethingOnFailure(t: Throwable) = ()
//#onComplete
future onComplete {
case Success(result) ⇒ doSomethingOnSuccess(result)
case Failure(failure) ⇒ doSomethingOnFailure(failure)
}
//#onComplete
Await.result(future, 1 second) must be("foo")
}
}
"demonstrate usage of Future.successful & Future.failed" in {
//#successful
val future = Future.successful("Yay!")
//#successful
//#failed
val otherFuture = Future.failed[String](new IllegalArgumentException("Bang!"))
//#failed
Await.result(future, 1 second) must be("Yay!")
intercept[IllegalArgumentException] { Await.result(otherFuture, 1 second) }
}
"demonstrate usage of pattern.after" in {
//#after
import akka.pattern.after
val delayed = after(200 millis, using = system.scheduler)(Future.failed(
new IllegalStateException("OHNOES")))
val future = Future { Thread.sleep(1000); "foo" }
val result = future either delayed
//#after
intercept[IllegalStateException] { Await.result(result, 2 second) }
}
}

View file

@@ -0,0 +1,84 @@
/**
* Copyright (C) 2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.io
//#imports
import akka.actor._
import akka.util.{ ByteString, ByteStringBuilder, ByteIterator }
//#imports
abstract class BinaryDecoding {
//#decoding
implicit val byteOrder = java.nio.ByteOrder.BIG_ENDIAN
val FrameDecoder = for {
frameLenBytes ← IO.take(4)
frameLen = frameLenBytes.iterator.getInt
frame ← IO.take(frameLen)
} yield {
val in = frame.iterator
val n = in.getInt
val m = in.getInt
val a = Array.newBuilder[Short]
val b = Array.newBuilder[Long]
for (i ← 1 to n) {
a += in.getShort
b += in.getInt
}
val data = Array.ofDim[Double](m)
in.getDoubles(data)
(a.result, b.result, data)
}
//#decoding
}
abstract class RestToSeq {
implicit val byteOrder = java.nio.ByteOrder.BIG_ENDIAN
val bytes: ByteString
val in = bytes.iterator
//#rest-to-seq
val n = in.getInt
val m = in.getInt
// ... in.get...
val rest: ByteString = in.toSeq
//#rest-to-seq
}
abstract class BinaryEncoding {
//#encoding
implicit val byteOrder = java.nio.ByteOrder.BIG_ENDIAN
val a: Array[Short]
val b: Array[Long]
val data: Array[Double]
val frameBuilder = ByteString.newBuilder
val n = a.length
val m = data.length
frameBuilder.putInt(n)
frameBuilder.putInt(m)
for (i ← 0 to n - 1) {
frameBuilder.putShort(a(i))
frameBuilder.putLong(b(i))
}
frameBuilder.putDoubles(data)
val frame = frameBuilder.result()
//#encoding
//#sending
val socket: IO.SocketHandle
socket.write(ByteString.newBuilder.putInt(frame.length).result)
socket.write(frame)
//#sending
}
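The length-prefixed frame written in the sending snippet is exactly what FrameDecoder above consumes: four bytes of length, then the payload. A tiny self-contained round-trip sketch with ByteString and ByteIterator:
  implicit val order = java.nio.ByteOrder.BIG_ENDIAN
  val framed = ByteString.newBuilder.putInt(3).putBytes(Array[Byte](1, 2, 3)).result()
  val it = framed.iterator
  val len = it.getInt // 3
  val payload = it.toByteString // the remaining three bytes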

View file

@@ -0,0 +1,226 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.io
import language.postfixOps
//#imports
import akka.actor._
import akka.util.{ ByteString, ByteStringBuilder }
import java.net.InetSocketAddress
//#imports
//#actor
class HttpServer(port: Int) extends Actor {
val state = IO.IterateeRef.Map.async[IO.Handle]()(context.dispatcher)
override def preStart {
IOManager(context.system) listen new InetSocketAddress(port)
}
def receive = {
case IO.NewClient(server) ⇒
val socket = server.accept()
state(socket) flatMap (_ ⇒ HttpServer.processRequest(socket))
case IO.Read(socket, bytes) ⇒
state(socket)(IO Chunk bytes)
case IO.Closed(socket, cause) ⇒
state(socket)(IO EOF)
state -= socket
}
}
//#actor
//#actor-companion
object HttpServer {
import HttpIteratees._
def processRequest(socket: IO.SocketHandle): IO.Iteratee[Unit] =
IO repeat {
for {
request ← readRequest
} yield {
val rsp = request match {
case Request("GET", "ping" :: Nil, _, _, headers, _)
OKResponse(ByteString("<p>pong</p>"),
request.headers.exists { case Header(n, v) n.toLowerCase == "connection" && v.toLowerCase == "keep-alive" })
case req
OKResponse(ByteString("<p>" + req.toString + "</p>"),
request.headers.exists { case Header(n, v) n.toLowerCase == "connection" && v.toLowerCase == "keep-alive" })
}
socket write OKResponse.bytes(rsp).compact
if (!rsp.keepAlive) socket.close()
}
}
}
//#actor-companion
//#request-class
case class Request(meth: String, path: List[String], query: Option[String], httpver: String, headers: List[Header], body: Option[ByteString])
case class Header(name: String, value: String)
//#request-class
//#constants
object HttpConstants {
val SP = ByteString(" ")
val HT = ByteString("\t")
val CRLF = ByteString("\r\n")
val COLON = ByteString(":")
val PERCENT = ByteString("%")
val PATH = ByteString("/")
val QUERY = ByteString("?")
}
//#constants
//#read-request
object HttpIteratees {
import HttpConstants._
def readRequest =
for {
requestLine ← readRequestLine
(meth, (path, query), httpver) = requestLine
headers ← readHeaders
body ← readBody(headers)
} yield Request(meth, path, query, httpver, headers, body)
//#read-request
//#read-request-line
def ascii(bytes: ByteString): String = bytes.decodeString("US-ASCII").trim
def readRequestLine =
for {
meth ← IO takeUntil SP
uri ← readRequestURI
_ ← IO takeUntil SP // ignore the rest
httpver ← IO takeUntil CRLF
} yield (ascii(meth), uri, ascii(httpver))
//#read-request-line
//#read-request-uri
def readRequestURI = IO peek 1 flatMap {
case PATH ⇒
for {
path ← readPath
query ← readQuery
} yield (path, query)
case _ ⇒ sys.error("Not Implemented")
}
//#read-request-uri
//#read-path
def readPath = {
def step(segments: List[String]): IO.Iteratee[List[String]] = IO peek 1 flatMap {
case PATH ⇒ IO drop 1 flatMap (_ ⇒ readUriPart(pathchar) flatMap (segment ⇒ step(segment :: segments)))
case _ ⇒ segments match {
case "" :: rest ⇒ IO Done rest.reverse
case _ ⇒ IO Done segments.reverse
}
}
step(Nil)
}
//#read-path
//#read-query
def readQuery: IO.Iteratee[Option[String]] = IO peek 1 flatMap {
case QUERY ⇒ IO drop 1 flatMap (_ ⇒ readUriPart(querychar) map (Some(_)))
case _ ⇒ IO Done None
}
//#read-query
//#read-uri-part
val alpha = Set.empty ++ ('a' to 'z') ++ ('A' to 'Z') map (_.toByte)
val digit = Set.empty ++ ('0' to '9') map (_.toByte)
val hexdigit = digit ++ (Set.empty ++ ('a' to 'f') ++ ('A' to 'F') map (_.toByte))
val subdelim = Set('!', '$', '&', '\'', '(', ')', '*', '+', ',', ';', '=') map (_.toByte)
val pathchar = alpha ++ digit ++ subdelim ++ (Set(':', '@') map (_.toByte))
val querychar = pathchar ++ (Set('/', '?') map (_.toByte))
def readUriPart(allowed: Set[Byte]): IO.Iteratee[String] = for {
str ← IO takeWhile allowed map ascii
pchar ← IO peek 1 map (_ == PERCENT)
all ← if (pchar) readPChar flatMap (ch ⇒ readUriPart(allowed) map (str + ch + _)) else IO Done str
} yield all
def readPChar = IO take 3 map {
case Seq('%', rest @ _*) if rest forall hexdigit ⇒
java.lang.Integer.parseInt(rest map (_.toChar) mkString, 16).toChar
}
//#read-uri-part
//#read-headers
def readHeaders = {
def step(found: List[Header]): IO.Iteratee[List[Header]] = {
IO peek 2 flatMap {
case CRLF ⇒ IO takeUntil CRLF flatMap (_ ⇒ IO Done found)
case _ ⇒ readHeader flatMap (header ⇒ step(header :: found))
}
}
step(Nil)
}
def readHeader =
for {
name ← IO takeUntil COLON
value ← IO takeUntil CRLF flatMap readMultiLineValue
} yield Header(ascii(name), ascii(value))
def readMultiLineValue(initial: ByteString): IO.Iteratee[ByteString] = IO peek 1 flatMap {
case SP ⇒ IO takeUntil CRLF flatMap (bytes ⇒ readMultiLineValue(initial ++ bytes))
case _ ⇒ IO Done initial
}
//#read-headers
//#read-body
def readBody(headers: List[Header]) =
if (headers.exists(header ⇒ header.name == "Content-Length" || header.name == "Transfer-Encoding"))
IO.takeAll map (Some(_))
else
IO Done None
//#read-body
}
//#ok-response
object OKResponse {
import HttpConstants.CRLF
val okStatus = ByteString("HTTP/1.1 200 OK")
val contentType = ByteString("Content-Type: text/html; charset=utf-8")
val cacheControl = ByteString("Cache-Control: no-cache")
val date = ByteString("Date: ")
val server = ByteString("Server: Akka")
val contentLength = ByteString("Content-Length: ")
val connection = ByteString("Connection: ")
val keepAlive = ByteString("Keep-Alive")
val close = ByteString("Close")
def bytes(rsp: OKResponse) = {
new ByteStringBuilder ++=
okStatus ++= CRLF ++=
contentType ++= CRLF ++=
cacheControl ++= CRLF ++=
date ++= ByteString(new java.util.Date().toString) ++= CRLF ++=
server ++= CRLF ++=
contentLength ++= ByteString(rsp.body.length.toString) ++= CRLF ++=
connection ++= (if (rsp.keepAlive) keepAlive else close) ++= CRLF ++= CRLF ++= rsp.body result
}
}
case class OKResponse(body: ByteString, keepAlive: Boolean)
//#ok-response
//#main
object Main extends App {
val port = Option(System.getenv("PORT")) map (_.toInt) getOrElse 8080
val system = ActorSystem()
val server = system.actorOf(Props(new HttpServer(port)))
}
//#main
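A quick manual check against a running instance of Main (sketch; assumes the default port 8080 and a local server):
  println(scala.io.Source.fromURL("http://localhost:8080/ping").mkString)
  // prints <p>pong</p>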

View file

@@ -0,0 +1,16 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.pattern
// this part will not appear in the docs
//#all-of-it
class ScalaTemplate {
println("Hello, Template!")
//#uninteresting-stuff
// don't show this plumbing
//#uninteresting-stuff
}
//#all-of-it

View file

@@ -0,0 +1,52 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.remoting
import akka.actor.{ ExtendedActorSystem, ActorSystem, Actor, ActorRef }
import akka.testkit.{ AkkaSpec, ImplicitSender }
//#import
import akka.actor.{ Props, Deploy, Address, AddressFromURIString }
import akka.remote.RemoteScope
//#import
object RemoteDeploymentDocSpec {
class Echo extends Actor {
def receive = {
case x ⇒ sender ! self
}
}
}
class RemoteDeploymentDocSpec extends AkkaSpec("""
akka.actor.provider = "akka.remote.RemoteActorRefProvider"
akka.remote.netty.port = 0
""") with ImplicitSender {
import RemoteDeploymentDocSpec._
val other = ActorSystem("remote", system.settings.config)
val address = other.asInstanceOf[ExtendedActorSystem].provider.getExternalAddressFor(Address("akka", "s", "host", 1)).get
override def atTermination() { other.shutdown() }
"demonstrate programmatic deployment" in {
//#deploy
val ref = system.actorOf(Props[Echo].withDeploy(Deploy(scope = RemoteScope(address))))
//#deploy
ref.path.address must be(address)
ref ! "test"
expectMsgType[ActorRef].path.address must be(address)
}
"demonstrate address extractor" in {
//#make-address
val one = AddressFromURIString("akka://sys@host:1234")
val two = Address("akka", "sys", "host", 1234) // this gives the same
//#make-address
one must be === two
}
}

View file

@@ -0,0 +1,73 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.routing
import akka.testkit.AkkaSpec
import akka.testkit.ImplicitSender
object ConsistentHashingRouterDocSpec {
//#cache-actor
import akka.actor.Actor
import akka.routing.ConsistentHashingRouter.ConsistentHashable
class Cache extends Actor {
var cache = Map.empty[String, String]
def receive = {
case Entry(key, value) ⇒ cache += (key -> value)
case Get(key) ⇒ sender ! cache.get(key)
case Evict(key) ⇒ cache -= key
}
}
case class Evict(key: String)
case class Get(key: String) extends ConsistentHashable {
override def consistentHashKey: Any = key
}
case class Entry(key: String, value: String)
//#cache-actor
}
class ConsistentHashingRouterDocSpec extends AkkaSpec with ImplicitSender {
import ConsistentHashingRouterDocSpec._
"demonstrate usage of ConsistentHashableRouter" in {
//#consistent-hashing-router
import akka.actor.Props
import akka.routing.ConsistentHashingRouter
import akka.routing.ConsistentHashingRouter.ConsistentHashMapping
import akka.routing.ConsistentHashingRouter.ConsistentHashableEnvelope
def hashMapping: ConsistentHashMapping = {
case Evict(key) ⇒ key
}
val cache = system.actorOf(Props[Cache].withRouter(ConsistentHashingRouter(10,
hashMapping = hashMapping)), name = "cache")
cache ! ConsistentHashableEnvelope(
message = Entry("hello", "HELLO"), hashKey = "hello")
cache ! ConsistentHashableEnvelope(
message = Entry("hi", "HI"), hashKey = "hi")
cache ! Get("hello")
expectMsg(Some("HELLO"))
cache ! Get("hi")
expectMsg(Some("HI"))
cache ! Evict("hi")
cache ! Get("hi")
expectMsg(None)
//#consistent-hashing-router
}
}

View file

@@ -0,0 +1,29 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.routing
import RouterDocSpec.MyActor
import akka.testkit.AkkaSpec
import akka.routing.RoundRobinRouter
import akka.actor.{ ActorRef, Props, Actor }
object RouterDocSpec {
class MyActor extends Actor {
def receive = {
case _ ⇒
}
}
}
class RouterDocSpec extends AkkaSpec {
import RouterDocSpec._
//#dispatchers
val router: ActorRef = system.actorOf(Props[MyActor]
.withRouter(RoundRobinRouter(5, routerDispatcher = "router")) // head will run on "router" dispatcher
.withDispatcher("workers")) // MyActor workers will run on "workers" dispatcher
//#dispatchers
}

View file

@@ -0,0 +1,94 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.routing
import language.postfixOps
import akka.routing.{ ScatterGatherFirstCompletedRouter, BroadcastRouter, RandomRouter, RoundRobinRouter }
import annotation.tailrec
import akka.actor.{ Props, Actor }
import scala.concurrent.util.duration._
import akka.util.Timeout
import scala.concurrent.Await
import akka.pattern.ask
import akka.routing.SmallestMailboxRouter
case class FibonacciNumber(nbr: Int)
//#printlnActor
class PrintlnActor extends Actor {
def receive = {
case msg ⇒
println("Received message '%s' in actor %s".format(msg, self.path.name))
}
}
//#printlnActor
//#fibonacciActor
class FibonacciActor extends Actor {
def receive = {
case FibonacciNumber(nbr) ⇒ sender ! fibonacci(nbr)
}
private def fibonacci(n: Int): Int = {
@tailrec
def fib(n: Int, b: Int, a: Int): Int = n match {
case 0 ⇒ a
case _ ⇒ fib(n - 1, a + b, b)
}
fib(n, 1, 0)
}
}
//#fibonacciActor
//#parentActor
class ParentActor extends Actor {
def receive = {
case "rrr"
//#roundRobinRouter
val roundRobinRouter =
context.actorOf(Props[PrintlnActor].withRouter(RoundRobinRouter(5)), "router")
1 to 10 foreach {
i roundRobinRouter ! i
}
//#roundRobinRouter
case "rr"
//#randomRouter
val randomRouter =
context.actorOf(Props[PrintlnActor].withRouter(RandomRouter(5)), "router")
1 to 10 foreach {
i randomRouter ! i
}
//#randomRouter
case "smr"
//#smallestMailboxRouter
val smallestMailboxRouter =
context.actorOf(Props[PrintlnActor].withRouter(SmallestMailboxRouter(5)), "router")
1 to 10 foreach {
i smallestMailboxRouter ! i
}
//#smallestMailboxRouter
case "br"
//#broadcastRouter
val broadcastRouter =
context.actorOf(Props[PrintlnActor].withRouter(BroadcastRouter(5)), "router")
broadcastRouter ! "this is a broadcast message"
//#broadcastRouter
case "sgfcr"
//#scatterGatherFirstCompletedRouter
val scatterGatherFirstCompletedRouter = context.actorOf(
Props[FibonacciActor].withRouter(ScatterGatherFirstCompletedRouter(
nrOfInstances = 5, within = 2 seconds)), "router")
implicit val timeout = Timeout(5 seconds)
val futureResult = scatterGatherFirstCompletedRouter ? FibonacciNumber(10)
val result = Await.result(futureResult, timeout.duration)
//#scatterGatherFirstCompletedRouter
println("The result of calculating Fibonacci for 10 is %d".format(result))
}
}
//#parentActor
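Driving the ParentActor examples, as a sketch (the system name is arbitrary; note that each case creates a child named "router", so send only one trigger per ParentActor instance):
  object RouterExampleMain extends App {
    val system = akka.actor.ActorSystem("router-examples")
    val parent = system.actorOf(Props[ParentActor])
    parent ! "rrr" // or "rr", "smr", "br", "sgfcr"
  }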

View file

@@ -0,0 +1,158 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.routing
import akka.actor.{ Actor, Props, ActorSystem, ActorLogging }
import com.typesafe.config.ConfigFactory
import akka.routing.FromConfig
import akka.routing.ConsistentHashingRouter.ConsistentHashable
import akka.testkit.AkkaSpec
import akka.testkit.ImplicitSender
object RouterWithConfigDocSpec {
val config = ConfigFactory.parseString("""
//#config-round-robin
akka.actor.deployment {
/myrouter1 {
router = round-robin
nr-of-instances = 5
}
}
//#config-round-robin
//#config-resize
akka.actor.deployment {
/myrouter2 {
router = round-robin
resizer {
lower-bound = 2
upper-bound = 15
}
}
}
//#config-resize
//#config-random
akka.actor.deployment {
/myrouter3 {
router = random
nr-of-instances = 5
}
}
//#config-random
//#config-smallest-mailbox
akka.actor.deployment {
/myrouter4 {
router = smallest-mailbox
nr-of-instances = 5
}
}
//#config-smallest-mailbox
//#config-broadcast
akka.actor.deployment {
/myrouter5 {
router = broadcast
nr-of-instances = 5
}
}
//#config-broadcast
//#config-scatter-gather
akka.actor.deployment {
/myrouter6 {
router = scatter-gather
nr-of-instances = 5
within = 10 seconds
}
}
//#config-scatter-gather
//#config-consistent-hashing
akka.actor.deployment {
/myrouter7 {
router = consistent-hashing
nr-of-instances = 5
virtual-nodes-factor = 10
}
}
//#config-consistent-hashing
""")
case class Message(nbr: Int) extends ConsistentHashable {
override def consistentHashKey = nbr
}
class ExampleActor extends Actor with ActorLogging {
def receive = {
case Message(nbr) ⇒
log.debug("Received %s in router %s".format(nbr, self.path.name))
sender ! nbr
}
}
}
class RouterWithConfigDocSpec extends AkkaSpec(RouterWithConfigDocSpec.config) with ImplicitSender {
import RouterWithConfigDocSpec._
"demonstrate configured round-robin router" in {
//#configurableRouting
val router = system.actorOf(Props[ExampleActor].withRouter(FromConfig()),
"myrouter1")
//#configurableRouting
1 to 10 foreach { i ⇒ router ! Message(i) }
receiveN(10)
}
"demonstrate configured random router" in {
val router = system.actorOf(Props[ExampleActor].withRouter(FromConfig()),
"myrouter3")
1 to 10 foreach { i ⇒ router ! Message(i) }
receiveN(10)
}
"demonstrate configured smallest-mailbox router" in {
val router = system.actorOf(Props[ExampleActor].withRouter(FromConfig()),
"myrouter4")
1 to 10 foreach { i ⇒ router ! Message(i) }
receiveN(10)
}
"demonstrate configured broadcast router" in {
val router = system.actorOf(Props[ExampleActor].withRouter(FromConfig()),
"myrouter5")
1 to 10 foreach { i ⇒ router ! Message(i) }
receiveN(5 * 10)
}
"demonstrate configured scatter-gather router" in {
val router = system.actorOf(Props[ExampleActor].withRouter(FromConfig()),
"myrouter6")
1 to 10 foreach { i ⇒ router ! Message(i) }
receiveN(10)
}
"demonstrate configured consistent-hashing router" in {
val router = system.actorOf(Props[ExampleActor].withRouter(FromConfig()),
"myrouter7")
1 to 10 foreach { i ⇒ router ! Message(i) }
receiveN(10)
}
"demonstrate configured round-robin router with resizer" in {
//#configurableRoutingWithResizer
val router = system.actorOf(Props[ExampleActor].withRouter(FromConfig()),
"myrouter2")
//#configurableRoutingWithResizer
1 to 10 foreach { i ⇒ router ! Message(i) }
receiveN(10)
}
}

View file

@@ -0,0 +1,52 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.routing
import akka.actor.{ Actor, Props, ActorSystem }
import com.typesafe.config.ConfigFactory
import akka.routing.FromConfig
case class Message(nbr: Int)
class ExampleActor extends Actor {
def receive = {
case Message(nbr) ⇒ println("Received %s in router %s".format(nbr, self.path.name))
}
}
object RouterWithConfigExample extends App {
val config = ConfigFactory.parseString("""
//#config
akka.actor.deployment {
/router {
router = round-robin
nr-of-instances = 5
}
}
//#config
//#config-resize
akka.actor.deployment {
/router2 {
router = round-robin
resizer {
lower-bound = 2
upper-bound = 15
}
}
}
//#config-resize
""")
val system = ActorSystem("Example", config)
//#configurableRouting
val router = system.actorOf(Props[ExampleActor].withRouter(FromConfig()),
"router")
//#configurableRouting
1 to 10 foreach { i ⇒ router ! Message(i) }
//#configurableRoutingWithResizer
val router2 = system.actorOf(Props[ExampleActor].withRouter(FromConfig()),
"router2")
//#configurableRoutingWithResizer
1 to 10 foreach { i ⇒ router2 ! Message(i) }
}

View file

@@ -0,0 +1,53 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.routing
import akka.routing.RoundRobinRouter
import akka.actor.{ ActorRef, Props, Actor, ActorSystem }
import akka.routing.DefaultResizer
import akka.remote.routing.RemoteRouterConfig
case class Message1(nbr: Int)
class ExampleActor1 extends Actor {
def receive = {
case Message1(nbr) ⇒ println("Received %s in router %s".format(nbr, self.path.name))
}
}
object RoutingProgrammaticallyExample extends App {
val system = ActorSystem("RPE")
//#programmaticRoutingNrOfInstances
val router1 = system.actorOf(Props[ExampleActor1].withRouter(
RoundRobinRouter(nrOfInstances = 5)))
//#programmaticRoutingNrOfInstances
1 to 6 foreach { i ⇒ router1 ! Message1(i) }
//#programmaticRoutingRoutees
val actor1 = system.actorOf(Props[ExampleActor1])
val actor2 = system.actorOf(Props[ExampleActor1])
val actor3 = system.actorOf(Props[ExampleActor1])
val routees = Vector[ActorRef](actor1, actor2, actor3)
val router2 = system.actorOf(Props().withRouter(
RoundRobinRouter(routees = routees)))
//#programmaticRoutingRoutees
1 to 6 foreach { i ⇒ router2 ! Message1(i) }
//#programmaticRoutingWithResizer
val resizer = DefaultResizer(lowerBound = 2, upperBound = 15)
val router3 = system.actorOf(Props[ExampleActor1].withRouter(
RoundRobinRouter(resizer = Some(resizer))))
//#programmaticRoutingWithResizer
1 to 6 foreach { i ⇒ router3 ! Message1(i) }
//#remoteRoutees
import akka.actor.{ Address, AddressFromURIString }
val addresses = Seq(
Address("akka", "remotesys", "otherhost", 1234),
AddressFromURIString("akka://othersys@anotherhost:1234"))
val routerRemote = system.actorOf(Props[ExampleActor1].withRouter(
RemoteRouterConfig(RoundRobinRouter(5), addresses)))
//#remoteRoutees
}

View file

@@ -0,0 +1,227 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
//#extract-transport
package object akka {
// needs to be inside the akka package because accessing unsupported API !
def transportOf(system: actor.ExtendedActorSystem): remote.RemoteTransport =
system.provider match {
case r: remote.RemoteActorRefProvider ⇒ r.transport
case _ ⇒
throw new UnsupportedOperationException(
"this method requires the RemoteActorRefProvider to be configured")
}
}
//#extract-transport
package docs.serialization {
import org.scalatest.matchers.MustMatchers
import akka.testkit._
//#imports
import akka.actor.{ ActorRef, ActorSystem }
import akka.serialization._
import com.typesafe.config.ConfigFactory
//#imports
import akka.actor.ExtensionKey
import akka.actor.ExtendedActorSystem
import akka.actor.Extension
import akka.actor.Address
import akka.remote.RemoteActorRefProvider
//#my-own-serializer
class MyOwnSerializer extends Serializer {
// This is whether "fromBinary" requires a "clazz" or not
def includeManifest: Boolean = false
// Pick a unique identifier for your Serializer,
// you've got a couple of billions to choose from,
// 0 - 16 is reserved by Akka itself
def identifier = 1234567
// "toBinary" serializes the given object to an Array of Bytes
def toBinary(obj: AnyRef): Array[Byte] = {
// Put the code that serializes the object here
//#...
Array[Byte]()
//#...
}
// "fromBinary" deserializes the given array,
// using the type hint (if any, see "includeManifest" above)
// into the optionally provided classLoader.
def fromBinary(bytes: Array[Byte],
clazz: Option[Class[_]]): AnyRef = {
// Put your code that deserializes here
//#...
null
//#...
}
}
//#my-own-serializer
trait MyOwnSerializable
case class Customer(name: String) extends MyOwnSerializable
class SerializationDocSpec extends AkkaSpec {
"demonstrate configuration of serialize messages" in {
//#serialize-messages-config
val config = ConfigFactory.parseString("""
akka {
actor {
serialize-messages = on
}
}
""")
//#serialize-messages-config
val a = ActorSystem("system", config)
a.settings.SerializeAllMessages must be(true)
a.shutdown()
}
"demonstrate configuration of serialize creators" in {
//#serialize-creators-config
val config = ConfigFactory.parseString("""
akka {
actor {
serialize-creators = on
}
}
""")
//#serialize-creators-config
val a = ActorSystem("system", config)
a.settings.SerializeAllCreators must be(true)
a.shutdown()
}
"demonstrate configuration of serializers" in {
//#serialize-serializers-config
val config = ConfigFactory.parseString("""
akka {
actor {
serializers {
java = "akka.serialization.JavaSerializer"
proto = "akka.remote.serialization.ProtobufSerializer"
myown = "docs.serialization.MyOwnSerializer"
}
}
}
""")
//#serialize-serializers-config
val a = ActorSystem("system", config)
a.shutdown()
}
"demonstrate configuration of serialization-bindings" in {
//#serialization-bindings-config
val config = ConfigFactory.parseString("""
akka {
actor {
serializers {
java = "akka.serialization.JavaSerializer"
proto = "akka.remote.serialization.ProtobufSerializer"
myown = "docs.serialization.MyOwnSerializer"
}
serialization-bindings {
"java.lang.String" = java
"docs.serialization.Customer" = java
"com.google.protobuf.Message" = proto
"docs.serialization.MyOwnSerializable" = myown
"java.lang.Boolean" = myown
}
}
}
""")
//#serialization-bindings-config
val a = ActorSystem("system", config)
SerializationExtension(a).serializerFor(classOf[String]).getClass must equal(classOf[JavaSerializer])
SerializationExtension(a).serializerFor(classOf[Customer]).getClass must equal(classOf[JavaSerializer])
SerializationExtension(a).serializerFor(classOf[java.lang.Boolean]).getClass must equal(classOf[MyOwnSerializer])
a.shutdown()
}
"demonstrate the programmatic API" in {
//#programmatic
val system = ActorSystem("example")
// Get the Serialization Extension
val serialization = SerializationExtension(system)
// Have something to serialize
val original = "woohoo"
// Find the Serializer for it
val serializer = serialization.findSerializerFor(original)
// Turn it into bytes
val bytes = serializer.toBinary(original)
// Turn it back into an object
val back = serializer.fromBinary(bytes, manifest = None)
// Voilà!
back must equal(original)
//#programmatic
system.shutdown()
}
"demonstrate serialization of ActorRefs" in {
val theActorRef: ActorRef = system.deadLetters
val theActorSystem: ActorSystem = system
//#actorref-serializer
// Serialize
// (beneath toBinary)
// If there is no transportAddress it means that
// this Serializer isn't called within a piece of
// code that sets it, so you either need to supply
// an address yourself or simply use the local path.
val identifier: String = Serialization.currentTransportAddress.value match {
case null ⇒ theActorRef.path.toString
case address ⇒ theActorRef.path.toStringWithAddress(address)
}
// Then just serialize the identifier however you like
// Deserialize
// (beneath fromBinary)
val deserializedActorRef = theActorSystem actorFor identifier
// Then just use the ActorRef
//#actorref-serializer
//#external-address
object ExternalAddress extends ExtensionKey[ExternalAddressExt]
class ExternalAddressExt(system: ExtendedActorSystem) extends Extension {
def addressFor(remoteAddr: Address): Address =
system.provider.getExternalAddressFor(remoteAddr) getOrElse
(throw new UnsupportedOperationException("cannot send to " + remoteAddr))
}
def serializeTo(ref: ActorRef, remote: Address): String =
ref.path.toStringWithAddress(ExternalAddress(theActorSystem).addressFor(remote))
//#external-address
}
"demonstrate how to do default Akka serialization of ActorRef" in {
val theActorSystem: ActorSystem = system
//#external-address-default
object ExternalAddress extends ExtensionKey[ExternalAddressExt]
class ExternalAddressExt(system: ExtendedActorSystem) extends Extension {
def addressForAkka: Address = akka.transportOf(system).address
}
def serializeAkkaDefault(ref: ActorRef): String =
ref.path.toStringWithAddress(ExternalAddress(theActorSystem).addressForAkka)
//#external-address-default
}
}
}

View file

@@ -0,0 +1,47 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.testkit
//#plain-spec
import akka.actor.ActorSystem
import akka.actor.Actor
import akka.actor.Props
import akka.testkit.TestKit
import org.scalatest.WordSpec
import org.scalatest.matchers.MustMatchers
import org.scalatest.BeforeAndAfterAll
import akka.testkit.ImplicitSender
object MySpec {
class EchoActor extends Actor {
def receive = {
case x ⇒ sender ! x
}
}
}
//#implicit-sender
class MySpec(_system: ActorSystem) extends TestKit(_system) with ImplicitSender
with WordSpec with MustMatchers with BeforeAndAfterAll {
//#implicit-sender
def this() = this(ActorSystem("MySpec"))
import MySpec._
override def afterAll {
system.shutdown()
}
"An Echo actor" must {
"send back messages unchanged" in {
val echo = system.actorOf(Props[EchoActor])
echo ! "hello world"
expectMsg("hello world")
}
}
}
//#plain-spec

View file

@@ -0,0 +1,158 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.testkit
import language.postfixOps
//#testkit-usage
import scala.util.Random
import org.scalatest.BeforeAndAfterAll
import org.scalatest.WordSpec
import org.scalatest.matchers.ShouldMatchers
import com.typesafe.config.ConfigFactory
import akka.actor.Actor
import akka.actor.ActorRef
import akka.actor.ActorSystem
import akka.actor.Props
import akka.testkit.DefaultTimeout
import akka.testkit.ImplicitSender
import akka.testkit.TestKit
import scala.concurrent.util.duration._
/**
* a Test to show some TestKit examples
*/
class TestKitUsageSpec
extends TestKit(ActorSystem("TestKitUsageSpec",
ConfigFactory.parseString(TestKitUsageSpec.config)))
with DefaultTimeout with ImplicitSender
with WordSpec with ShouldMatchers with BeforeAndAfterAll {
import TestKitUsageSpec._
val echoRef = system.actorOf(Props(new EchoActor))
val forwardRef = system.actorOf(Props(new ForwardingActor(testActor)))
val filterRef = system.actorOf(Props(new FilteringActor(testActor)))
val randomHead = Random.nextInt(6)
val randomTail = Random.nextInt(10)
val headList = Seq().padTo(randomHead, "0")
val tailList = Seq().padTo(randomTail, "1")
val seqRef = system.actorOf(Props(new SequencingActor(testActor, headList, tailList)))
override def afterAll {
system.shutdown()
}
"An EchoActor" should {
"Respond with the same message it receives" in {
within(500 millis) {
echoRef ! "test"
expectMsg("test")
}
}
}
"A ForwardingActor" should {
"Forward a message it receives" in {
within(500 millis) {
forwardRef ! "test"
expectMsg("test")
}
}
}
"A FilteringActor" should {
"Filter all messages, except expected messagetypes it receives" in {
var messages = Seq[String]()
within(500 millis) {
filterRef ! "test"
expectMsg("test")
filterRef ! 1
expectNoMsg
filterRef ! "some"
filterRef ! "more"
filterRef ! 1
filterRef ! "text"
filterRef ! 1
receiveWhile(500 millis) {
case msg: String ⇒ messages = msg +: messages
}
}
messages.length should be(3)
messages.reverse should be(Seq("some", "more", "text"))
}
}
"A SequencingActor" should {
"receive an interesting message at some point " in {
within(500 millis) {
ignoreMsg {
case msg: String ⇒ msg != "something"
}
seqRef ! "something"
expectMsg("something")
ignoreMsg {
case msg: String ⇒ msg == "1"
}
expectNoMsg
ignoreNoMsg
}
}
}
}
object TestKitUsageSpec {
// Define your test specific configuration here
val config = """
akka {
loglevel = "WARNING"
}
"""
/**
* An Actor that echoes everything you send to it
*/
class EchoActor extends Actor {
def receive = {
case msg ⇒ sender ! msg
}
}
/**
* An Actor that forwards every message to a next Actor
*/
class ForwardingActor(next: ActorRef) extends Actor {
def receive = {
case msg ⇒ next ! msg
}
}
/**
* An Actor that only forwards certain messages to a next Actor
*/
class FilteringActor(next: ActorRef) extends Actor {
def receive = {
case msg: String ⇒ next ! msg
case _ ⇒ None
}
}
/**
* An actor that sends a sequence of messages with a random head list, an
* interesting value and a random tail list. The idea is that you would
* like to test that the interesting value is received and that you can't
* be bothered with the rest
*/
class SequencingActor(next: ActorRef, head: Seq[String], tail: Seq[String])
extends Actor {
def receive = {
case msg ⇒ {
head foreach { next ! _ }
next ! msg
tail foreach { next ! _ }
}
}
}
}
//#testkit-usage

View file

@@ -0,0 +1,290 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.testkit
import language.postfixOps
import scala.util.Success
//#imports-test-probe
import akka.testkit.TestProbe
import scala.concurrent.util.duration._
import akka.actor._
import scala.concurrent.Future
//#imports-test-probe
import akka.testkit.AkkaSpec
import akka.testkit.DefaultTimeout
import akka.testkit.ImplicitSender
import scala.util.control.NonFatal
object TestkitDocSpec {
case object Say42
case object Unknown
class MyActor extends Actor {
def receive = {
case Say42 ⇒ sender ! 42
case "some work" ⇒ sender ! "some result"
}
}
//#my-double-echo
class MyDoubleEcho extends Actor {
var dest1: ActorRef = _
var dest2: ActorRef = _
def receive = {
case (d1: ActorRef, d2: ActorRef) ⇒
dest1 = d1
dest2 = d2
case x ⇒
dest1 ! x
dest2 ! x
}
}
//#my-double-echo
import akka.testkit.TestProbe
//#test-probe-forward-actors
class Source(target: ActorRef) extends Actor {
def receive = {
case "start" target ! "work"
}
}
class Destination extends Actor {
def receive = {
case x ⇒ // Do something..
}
}
//#test-probe-forward-actors
class LoggingActor extends Actor {
//#logging-receive
import akka.event.LoggingReceive
def receive = LoggingReceive {
case msg ⇒ // Do something...
}
//#logging-receive
}
}
class TestkitDocSpec extends AkkaSpec with DefaultTimeout with ImplicitSender {
import TestkitDocSpec._
"demonstrate usage of TestActorRef" in {
//#test-actor-ref
import akka.testkit.TestActorRef
val actorRef = TestActorRef[MyActor]
val actor = actorRef.underlyingActor
//#test-actor-ref
}
"demonstrate usage of TestFSMRef" in {
//#test-fsm-ref
import akka.testkit.TestFSMRef
import akka.actor.FSM
import scala.concurrent.util.duration._
val fsm = TestFSMRef(new Actor with FSM[Int, String] {
startWith(1, "")
when(1) {
case Event("go", _) goto(2) using "go"
}
when(2) {
case Event("back", _) goto(1) using "back"
}
})
assert(fsm.stateName == 1)
assert(fsm.stateData == "")
fsm ! "go" // being a TestActorRef, this runs also on the CallingThreadDispatcher
assert(fsm.stateName == 2)
assert(fsm.stateData == "go")
fsm.setState(stateName = 1)
assert(fsm.stateName == 1)
assert(fsm.timerActive_?("test") == false)
fsm.setTimer("test", 12, 10 millis, true)
assert(fsm.timerActive_?("test") == true)
fsm.cancelTimer("test")
assert(fsm.timerActive_?("test") == false)
//#test-fsm-ref
}
"demonstrate testing of behavior" in {
//#test-behavior
import akka.testkit.TestActorRef
import scala.concurrent.util.duration._
import scala.concurrent.Await
import akka.pattern.ask
val actorRef = TestActorRef(new MyActor)
// hypothetical message stimulating a '42' answer
val future = actorRef ? Say42
val Success(result: Int) = future.value.get
result must be(42)
//#test-behavior
}
"demonstrate unhandled message" in {
//#test-unhandled
import akka.testkit.TestActorRef
system.eventStream.subscribe(testActor, classOf[UnhandledMessage])
val ref = TestActorRef[MyActor]
ref.receive(Unknown)
expectMsg(1 second, UnhandledMessage(Unknown, system.deadLetters, ref))
//#test-unhandled
}
"demonstrate expecting exceptions" in {
//#test-expecting-exceptions
import akka.testkit.TestActorRef
val actorRef = TestActorRef(new Actor {
def receive = {
case "hello" throw new IllegalArgumentException("boom")
}
})
intercept[IllegalArgumentException] { actorRef.receive("hello") }
//#test-expecting-exceptions
}
"demonstrate within" in {
type Worker = MyActor
//#test-within
import akka.actor.Props
import scala.concurrent.util.duration._
val worker = system.actorOf(Props[Worker])
within(200 millis) {
worker ! "some work"
expectMsg("some result")
expectNoMsg // will block for the rest of the 200ms
Thread.sleep(300) // will NOT make this block fail
}
//#test-within
}
"demonstrate dilated duration" in {
//#duration-dilation
import scala.concurrent.util.duration._
import akka.testkit._
10.milliseconds.dilated
//#duration-dilation
}
"demonstrate usage of probe" in {
//#test-probe
val probe1 = TestProbe()
val probe2 = TestProbe()
val actor = system.actorOf(Props[MyDoubleEcho])
actor ! (probe1.ref, probe2.ref)
actor ! "hello"
probe1.expectMsg(500 millis, "hello")
probe2.expectMsg(500 millis, "hello")
//#test-probe
//#test-special-probe
case class Update(id: Int, value: String)
val probe = new TestProbe(system) {
def expectUpdate(x: Int) = {
expectMsgPF() {
case Update(id, _) if id == x ⇒ true
}
sender ! "ACK"
}
}
//#test-special-probe
}
"demonstrate probe reply" in {
import akka.testkit.TestProbe
import scala.concurrent.util.duration._
import akka.pattern.ask
//#test-probe-reply
val probe = TestProbe()
val future = probe.ref ? "hello"
probe.expectMsg(0 millis, "hello") // TestActor runs on CallingThreadDispatcher
probe.reply("world")
assert(future.isCompleted && future.value == Some(Success("world")))
//#test-probe-reply
}
"demonstrate probe forward" in {
import akka.testkit.TestProbe
import akka.actor.Props
//#test-probe-forward
val probe = TestProbe()
val source = system.actorOf(Props(new Source(probe.ref)))
val dest = system.actorOf(Props[Destination])
source ! "start"
probe.expectMsg("work")
probe.forward(dest)
//#test-probe-forward
}
"demonstrate " in {
//#calling-thread-dispatcher
import akka.testkit.CallingThreadDispatcher
val ref = system.actorOf(Props[MyActor].withDispatcher(CallingThreadDispatcher.Id))
//#calling-thread-dispatcher
}
"demonstrate EventFilter" in {
//#event-filter
import akka.testkit.EventFilter
import com.typesafe.config.ConfigFactory
implicit val system = ActorSystem("testsystem", ConfigFactory.parseString("""
akka.event-handlers = ["akka.testkit.TestEventListener"]
"""))
try {
val actor = system.actorOf(Props.empty)
EventFilter[ActorKilledException](occurrences = 1) intercept {
actor ! Kill
}
} finally {
system.shutdown()
}
//#event-filter
}
"demonstrate TestKitBase" in {
//#test-kit-base
import akka.testkit.TestKitBase
class MyTest extends TestKitBase {
implicit lazy val system = ActorSystem()
//#put-your-test-code-here
val probe = TestProbe()
probe.send(testActor, "hello")
try expectMsg("hello") catch { case NonFatal(e) ⇒ system.shutdown(); throw e }
//#put-your-test-code-here
system.shutdown()
}
//#test-kit-base
}
"demonstrate within() nesting" in {
intercept[AssertionError] {
//#test-within-probe
val probe = TestProbe()
within(1 second) {
probe.expectMsg("hello")
}
//#test-within-probe
}
}
}


@@ -0,0 +1,233 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.transactor
import language.postfixOps
import akka.actor._
import akka.transactor._
import scala.concurrent.util.duration._
import akka.util.Timeout
import akka.testkit._
import scala.concurrent.stm._
object CoordinatedExample {
//#coordinated-example
import akka.actor._
import akka.transactor._
import scala.concurrent.stm._
case class Increment(friend: Option[ActorRef] = None)
case object GetCount
class Counter extends Actor {
val count = Ref(0)
def receive = {
case coordinated @ Coordinated(Increment(friend)) ⇒ {
friend foreach (_ ! coordinated(Increment()))
coordinated atomic { implicit t ⇒
count transform (_ + 1)
}
}
case GetCount ⇒ sender ! count.single.get
}
}
//#coordinated-example
}
object CoordinatedApi {
case object Message
class Coordinator extends Actor {
//#receive-coordinated
def receive = {
case coordinated @ Coordinated(Message) ⇒ {
//#coordinated-atomic
coordinated atomic { implicit t ⇒
// do something in the coordinated transaction ...
}
//#coordinated-atomic
}
}
//#receive-coordinated
}
}
object CounterExample {
//#counter-example
import akka.transactor._
import scala.concurrent.stm._
case object Increment
class Counter extends Transactor {
val count = Ref(0)
def atomically = implicit txn ⇒ {
case Increment ⇒ count transform (_ + 1)
}
}
//#counter-example
}
object FriendlyCounterExample {
//#friendly-counter-example
import akka.actor._
import akka.transactor._
import scala.concurrent.stm._
case object Increment
class FriendlyCounter(friend: ActorRef) extends Transactor {
val count = Ref(0)
override def coordinate = {
case Increment ⇒ include(friend)
}
def atomically = implicit txn ⇒ {
case Increment ⇒ count transform (_ + 1)
}
}
//#friendly-counter-example
class Friend extends Transactor {
val count = Ref(0)
def atomically = implicit txn ⇒ {
case Increment ⇒ count transform (_ + 1)
}
}
}
// Only checked for compilation
object TransactorCoordinate {
case object Message
case object SomeMessage
case object SomeOtherMessage
case object OtherMessage
case object Message1
case object Message2
class TestCoordinateInclude(actor1: ActorRef, actor2: ActorRef, actor3: ActorRef) extends Transactor {
//#coordinate-include
override def coordinate = {
case Message ⇒ include(actor1, actor2, actor3)
}
//#coordinate-include
def atomically = txn ⇒ doNothing
}
class TestCoordinateSendTo(someActor: ActorRef, actor1: ActorRef, actor2: ActorRef) extends Transactor {
//#coordinate-sendto
override def coordinate = {
case SomeMessage ⇒ sendTo(someActor -> SomeOtherMessage)
case OtherMessage ⇒ sendTo(actor1 -> Message1, actor2 -> Message2)
}
//#coordinate-sendto
def atomically = txn ⇒ doNothing
}
}
class TransactorDocSpec extends AkkaSpec {
"coordinated example" in {
import CoordinatedExample._
//#run-coordinated-example
import scala.concurrent.Await
import scala.concurrent.util.duration._
import akka.util.Timeout
import akka.pattern.ask
val system = ActorSystem("app")
val counter1 = system.actorOf(Props[Counter], name = "counter1")
val counter2 = system.actorOf(Props[Counter], name = "counter2")
implicit val timeout = Timeout(5 seconds)
counter1 ! Coordinated(Increment(Some(counter2)))
val count = Await.result(counter1 ? GetCount, timeout.duration)
// count == 1
//#run-coordinated-example
count must be === 1
system.shutdown()
}
"coordinated api" in {
import CoordinatedApi._
//#implicit-timeout
import scala.concurrent.util.duration._
import akka.util.Timeout
implicit val timeout = Timeout(5 seconds)
//#implicit-timeout
//#create-coordinated
val coordinated = Coordinated()
//#create-coordinated
val system = ActorSystem("coordinated")
val actor = system.actorOf(Props[Coordinator], name = "coordinator")
//#send-coordinated
actor ! Coordinated(Message)
//#send-coordinated
//#include-coordinated
actor ! coordinated(Message)
//#include-coordinated
coordinated.await()
system.shutdown()
}
"counter transactor" in {
import CounterExample._
val system = ActorSystem("transactors")
lazy val underlyingCounter = new Counter
val counter = system.actorOf(Props(underlyingCounter), name = "counter")
val coordinated = Coordinated()(Timeout(5 seconds))
counter ! coordinated(Increment)
coordinated.await()
underlyingCounter.count.single.get must be === 1
system.shutdown()
}
"friendly counter transactor" in {
import FriendlyCounterExample._
val system = ActorSystem("transactors")
lazy val underlyingFriend = new Friend
val friend = system.actorOf(Props(underlyingFriend), name = "friend")
lazy val underlyingFriendlyCounter = new FriendlyCounter(friend)
val friendlyCounter = system.actorOf(Props(underlyingFriendlyCounter), name = "friendly")
val coordinated = Coordinated()(Timeout(5 seconds))
friendlyCounter ! coordinated(Increment)
coordinated.await()
underlyingFriendlyCounter.count.single.get must be === 1
underlyingFriend.count.single.get must be === 1
system.shutdown()
}
}


@@ -0,0 +1,184 @@
/**
* Copyright (C) 2009-2012 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.zeromq
import language.postfixOps
import akka.actor.{ Actor, Props }
import scala.concurrent.util.duration._
import akka.testkit._
import akka.zeromq.{ ZeroMQVersion, ZeroMQExtension }
import java.text.SimpleDateFormat
import java.util.Date
import akka.zeromq.{ SocketType, Bind }
object ZeromqDocSpec {
//#health
import akka.zeromq._
import akka.actor.Actor
import akka.actor.Props
import akka.actor.ActorLogging
import akka.serialization.SerializationExtension
import java.lang.management.ManagementFactory
case object Tick
case class Heap(timestamp: Long, used: Long, max: Long)
case class Load(timestamp: Long, loadAverage: Double)
class HealthProbe extends Actor {
val pubSocket = ZeroMQExtension(context.system).newSocket(SocketType.Pub, Bind("tcp://127.0.0.1:1235"))
val memory = ManagementFactory.getMemoryMXBean
val os = ManagementFactory.getOperatingSystemMXBean
val ser = SerializationExtension(context.system)
import context.dispatcher
override def preStart() {
context.system.scheduler.schedule(1 second, 1 second, self, Tick)
}
override def postRestart(reason: Throwable) {
// don't call preStart, only schedule once
}
def receive: Receive = {
case Tick ⇒
val currentHeap = memory.getHeapMemoryUsage
val timestamp = System.currentTimeMillis
// use akka SerializationExtension to convert to bytes
val heapPayload = ser.serialize(Heap(timestamp, currentHeap.getUsed, currentHeap.getMax)).get
// the first frame is the topic, second is the message
pubSocket ! ZMQMessage(Seq(Frame("health.heap"), Frame(heapPayload)))
// use akka SerializationExtension to convert to bytes
val loadPayload = ser.serialize(Load(timestamp, os.getSystemLoadAverage)).get
// the first frame is the topic, second is the message
pubSocket ! ZMQMessage(Seq(Frame("health.load"), Frame(loadPayload)))
}
}
//#health
//#logger
class Logger extends Actor with ActorLogging {
ZeroMQExtension(context.system).newSocket(SocketType.Sub, Listener(self), Connect("tcp://127.0.0.1:1235"), Subscribe("health"))
val ser = SerializationExtension(context.system)
val timestampFormat = new SimpleDateFormat("HH:mm:ss.SSS")
def receive = {
// the first frame is the topic, second is the message
case m: ZMQMessage if m.firstFrameAsString == "health.heap" ⇒
val Heap(timestamp, used, max) = ser.deserialize(m.payload(1), classOf[Heap]).get
log.info("Used heap {} bytes, at {}", used, timestampFormat.format(new Date(timestamp)))
case m: ZMQMessage if m.firstFrameAsString == "health.load" ⇒
val Load(timestamp, loadAverage) = ser.deserialize(m.payload(1), classOf[Load]).get
log.info("Load average {}, at {}", loadAverage, timestampFormat.format(new Date(timestamp)))
}
}
//#logger
//#alerter
class HeapAlerter extends Actor with ActorLogging {
ZeroMQExtension(context.system).newSocket(SocketType.Sub, Listener(self), Connect("tcp://127.0.0.1:1235"), Subscribe("health.heap"))
val ser = SerializationExtension(context.system)
var count = 0
def receive = {
// the first frame is the topic, second is the message
case m: ZMQMessage if m.firstFrameAsString == "health.heap" ⇒
val Heap(timestamp, used, max) = ser.deserialize(m.payload(1), classOf[Heap]).get
if ((used.toDouble / max) > 0.9) count += 1
else count = 0
if (count > 10) log.warning("Need more memory, using {} %", (100.0 * used / max))
}
}
//#alerter
}
class ZeromqDocSpec extends AkkaSpec("akka.loglevel=INFO") {
import ZeromqDocSpec._
"demonstrate how to create socket" in {
checkZeroMQInstallation()
//#pub-socket
import akka.zeromq.ZeroMQExtension
val pubSocket = ZeroMQExtension(system).newSocket(SocketType.Pub, Bind("tcp://127.0.0.1:21231"))
//#pub-socket
//#sub-socket
import akka.zeromq._
val listener = system.actorOf(Props(new Actor {
def receive: Receive = {
case Connecting ⇒ //...
case m: ZMQMessage ⇒ //...
case _ ⇒ //...
}
}))
val subSocket = ZeroMQExtension(system).newSocket(SocketType.Sub, Listener(listener), Connect("tcp://127.0.0.1:21231"), SubscribeAll)
//#sub-socket
//#sub-topic-socket
val subTopicSocket = ZeroMQExtension(system).newSocket(SocketType.Sub, Listener(listener), Connect("tcp://127.0.0.1:21231"), Subscribe("foo.bar"))
//#sub-topic-socket
//#unsub-topic-socket
subTopicSocket ! Unsubscribe("foo.bar")
//#unsub-topic-socket
val payload = Array.empty[Byte]
//#pub-topic
pubSocket ! ZMQMessage(Seq(Frame("foo.bar"), Frame(payload)))
//#pub-topic
system.stop(subSocket)
system.stop(subTopicSocket)
//#high-watermark
val highWatermarkSocket = ZeroMQExtension(system).newSocket(
SocketType.Router,
Listener(listener),
Bind("tcp://127.0.0.1:21233"),
HighWatermark(50000))
//#high-watermark
}
"demonstrate pub-sub" in {
checkZeroMQInstallation()
//#health
system.actorOf(Props[HealthProbe], name = "health")
//#health
//#logger
system.actorOf(Props[Logger], name = "logger")
//#logger
//#alerter
system.actorOf(Props[HeapAlerter], name = "alerter")
//#alerter
// Let it run for a while to see some output.
// Don't do like this in real tests, this is only doc demonstration.
Thread.sleep(3.seconds.toMillis)
}
def checkZeroMQInstallation() = try {
ZeroMQExtension(system).version match {
case ZeroMQVersion(2, 1, _) ⇒ Unit
case version ⇒ pending
}
} catch {
case e: LinkageError ⇒ pending
}
}


@@ -0,0 +1,111 @@
Dataflow Concurrency (Scala)
============================
Description
-----------
Akka implements `Oz-style dataflow concurrency <http://www.mozart-oz.org/documentation/tutorial/node8.html#chapter.concurrency>`_
by using a special API for :ref:`futures-scala` that enables a complementary way of writing synchronous-looking code that in reality is asynchronous.
The benefit of Dataflow concurrency is that it is deterministic; that means that it will always behave the same.
If you run it once and it yields output 5 then it will do that **every time**, run it 10 million times - same result.
If it on the other hand deadlocks the first time you run it, then it will deadlock **every single time** you run it.
Also, there is **no difference** between sequential code and concurrent code. These properties make it very easy to reason about concurrency.
The limitation is that the code needs to be side-effect free, i.e. deterministic.
You can't use exceptions, time, random etc., but need to treat the part of your program that uses dataflow concurrency as a pure function with input and output.
The best way to learn how to program with dataflow variables is to read the fantastic book `Concepts, Techniques, and Models of Computer Programming <http://www.info.ucl.ac.be/%7Epvr/book.html>`_ by Peter Van Roy and Seif Haridi.
Getting Started (SBT)
---------------------
Scala's Delimited Continuations plugin is required to use the Dataflow API. To enable the plugin when using sbt, your project must inherit the ``AutoCompilerPlugins`` trait and contain a bit of configuration as is seen in this example:
.. code-block:: scala
autoCompilerPlugins := true,
libraryDependencies <+= scalaVersion {
v => compilerPlugin("org.scala-lang.plugins" % "continuations" % @scalaVersion@)
},
scalacOptions += "-P:continuations:enable",
You will also need to include a dependency on ``akka-dataflow``:
.. code-block:: scala
"com.typesafe.akka" %% "akka-dataflow" % "@version@" @crossString@
Dataflow variables
------------------
A Dataflow variable can be read any number of times but only be written to once, which maps very well to the concept of Futures/Promises :ref:`futures-scala`.
Conversion from ``Future`` and ``Promise`` to Dataflow Variables is implicit and is invisible to the user (after importing ``akka.dataflow._``).
The mapping from ``Promise`` and ``Future`` is as follows:
- Futures are readable-many, using the ``apply`` method, inside ``flow`` blocks.
- Promises are readable-many, just like Futures.
- Promises are writable-once, using the ``<<`` operator, inside ``flow`` blocks.
Writing to an already written Promise throws a ``java.lang.IllegalStateException``;
this has the effect that races to write a promise are deterministic:
only one of the writers will succeed and the others will fail.
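As a rough sketch of these rules (assuming the imports below, an implicit
``ExecutionContext`` in scope and the continuations plugin enabled as described
above):

.. code-block:: scala

   import akka.dataflow._
   import scala.concurrent.{ Future, Promise }
   import scala.concurrent.ExecutionContext.Implicits.global

   val v = Promise[Int]()                    // a dataflow variable
   flow { v << 42 }                          // write-once, inside a flow block
   val read: Future[Int] = flow { v() + 1 }  // read via apply, completes with 43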
The flow
--------
The ``flow`` method acts as the delimiter of dataflow expressions (this also neatly aligns with the concept of delimited continuations),
and flow-expressions compose. At this point you might wonder what the ``flow``-construct brings to the table that for-comprehensions don't,
and the answer is the use of the CPS plugin, which makes the code *look like* it is synchronous while in reality it is asynchronous and non-blocking.
The result of a call to ``flow`` is a Future with the resulting value of the flow.
To be able to use the ``flow`` method, you need to import:
.. includecode:: code/docs/dataflow/DataflowDocSpec.scala
:include: import-akka-dataflow
The ``flow`` method will, just like Futures and Promises, require an implicit ``ExecutionContext`` in scope.
For the examples here we will use:
.. includecode:: code/docs/dataflow/DataflowDocSpec.scala
:include: import-global-implicit
Using flow
~~~~~~~~~~
First off we have the obligatory "Hello world!":
.. includecode:: code/docs/dataflow/DataflowDocSpec.scala
:include: simplest-hello-world
You can also refer to the results of other flows within flows:
.. includecode:: code/docs/dataflow/DataflowDocSpec.scala
:include: nested-hello-world-a
… or:
.. includecode:: code/docs/dataflow/DataflowDocSpec.scala
:include: nested-hello-world-b
Working with variables
~~~~~~~~~~~~~~~~~~~~~~
Inside the flow method you can use Promises as Dataflow variables:
.. includecode:: code/docs/dataflow/DataflowDocSpec.scala
:include: dataflow-variable-a
Flow compared to for
--------------------
Should I use Dataflow or for-comprehensions?
.. includecode:: code/docs/dataflow/DataflowDocSpec.scala
:include: for-vs-flow
Conclusions:
- Dataflow has a smaller code footprint and arguably is easier to reason about.
- For-comprehensions are more general than Dataflow, and can operate on a wide array of types.


@@ -0,0 +1,222 @@
.. _dispatchers-scala:
Dispatchers (Scala)
===================
An Akka ``MessageDispatcher`` is what makes Akka Actors "tick"; it is the engine of the machine, so to speak.
All ``MessageDispatcher`` implementations are also an ``ExecutionContext``, which means that they can be used
to execute arbitrary code, for instance :ref:`futures-scala`.
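For instance, a dispatcher can be looked up by its configuration id and used to
run a ``Future`` (a sketch; ``"my-dispatcher"`` stands for a dispatcher
configured as shown below):

.. code-block:: scala

   import scala.concurrent.Future

   // a MessageDispatcher is an ExecutionContext, so it can run Futures
   implicit val ec = system.dispatchers.lookup("my-dispatcher")
   Future { "some heavy lifting" }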
Default dispatcher
------------------
Every ``ActorSystem`` will have a default dispatcher that will be used in case nothing else is configured for an ``Actor``.
The default dispatcher can be configured, and is by default a ``Dispatcher`` with a "fork-join-executor", which gives excellent performance in most cases.
Setting the dispatcher for an Actor
-----------------------------------
So in case you want to give your ``Actor`` a different dispatcher than the default, you need to do two things, of which the first is:
.. includecode:: ../scala/code/docs/dispatcher/DispatcherDocSpec.scala#defining-dispatcher
.. note::
The "dispatcherId" you specify in withDispatcher is in fact a path into your configuration.
So in this example it's a top-level section, but you could for instance put it as a sub-section,
where you'd use periods to denote sub-sections, like this: ``"foo.bar.my-dispatcher"``
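Such a nested section could look like this (a hypothetical sketch):

.. code-block:: text

   foo {
     bar {
       my-dispatcher {
         type = Dispatcher
         executor = "fork-join-executor"
       }
     }
   }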
And then you just need to configure that dispatcher in your configuration:
.. includecode:: ../scala/code/docs/dispatcher/DispatcherDocSpec.scala#my-dispatcher-config
And here's another example that uses the "thread-pool-executor":
.. includecode:: ../scala/code/docs/dispatcher/DispatcherDocSpec.scala#my-thread-pool-dispatcher-config
For more options, see the default-dispatcher section of the :ref:`configuration`.
Types of dispatchers
--------------------
There are 4 different types of message dispatchers:
* Dispatcher
- This is an event-based dispatcher that binds a set of Actors to a thread pool. It is the default dispatcher
used if one is not specified.
- Sharability: Unlimited
- Mailboxes: Any, creates one per Actor
- Use cases: Default dispatcher, Bulkheading
- Driven by: ``java.util.concurrent.ExecutorService``
specify using "executor" using "fork-join-executor",
"thread-pool-executor" or the FQCN of
an ``akka.dispatcher.ExecutorServiceConfigurator``
* PinnedDispatcher
- This dispatcher dedicates a unique thread for each actor using it; i.e. each actor will have its own thread pool with only one thread in the pool.
- Sharability: None
- Mailboxes: Any, creates one per Actor
- Use cases: Bulkheading
- Driven by: Any ``akka.dispatch.ThreadPoolExecutorConfigurator``
by default a "thread-pool-executor"
* BalancingDispatcher
- This is an executor based event driven dispatcher that will try to redistribute work from busy actors to idle actors.
- All the actors share a single Mailbox that they get their messages from.
- It is assumed that all actors using the same instance of this dispatcher can process all messages that have been sent to one of the actors; i.e. the actors belong to a pool of actors, and to the client there is no guarantee about which actor instance actually processes a given message.
- Sharability: Actors of the same type only
- Mailboxes: Any, creates one for all Actors
- Use cases: Work-sharing
- Driven by: ``java.util.concurrent.ExecutorService``
specify using "executor" using "fork-join-executor",
"thread-pool-executor" or the FQCN of
an ``akka.dispatcher.ExecutorServiceConfigurator``
- Note that you can **not** use a ``BalancingDispatcher`` as a **Router Dispatcher**. (You can however use it for the **Routees**)
* CallingThreadDispatcher
- This dispatcher runs invocations on the current thread only. This dispatcher does not create any new threads,
but it can be used from different threads concurrently for the same actor. See :ref:`Scala-CallingThreadDispatcher`
for details and restrictions.
- Sharability: Unlimited
- Mailboxes: Any, creates one per Actor per Thread (on demand)
- Use cases: Testing
- Driven by: The calling thread (duh)
More dispatcher configuration examples
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Configuring a ``PinnedDispatcher``:
.. includecode:: ../scala/code/docs/dispatcher/DispatcherDocSpec.scala#my-pinned-dispatcher-config
And then using it:
.. includecode:: ../scala/code/docs/dispatcher/DispatcherDocSpec.scala#defining-pinned-dispatcher
Note that ``thread-pool-executor`` configuration as per the above ``my-thread-pool-dispatcher`` example is
NOT applicable. This is because every actor will have its own thread pool when using ``PinnedDispatcher``,
and that pool will have only one thread.
Mailboxes
---------
An Akka ``Mailbox`` holds the messages that are destined for an ``Actor``.
Normally each ``Actor`` has its own mailbox, but with, for example, a ``BalancingDispatcher`` all actors using the same ``BalancingDispatcher`` will share a single instance.
Builtin implementations
^^^^^^^^^^^^^^^^^^^^^^^
Akka comes shipped with a number of default mailbox implementations:
* UnboundedMailbox
- Backed by a ``java.util.concurrent.ConcurrentLinkedQueue``
- Blocking: No
- Bounded: No
* BoundedMailbox
- Backed by a ``java.util.concurrent.LinkedBlockingQueue``
- Blocking: Yes
- Bounded: Yes
* UnboundedPriorityMailbox
- Backed by a ``java.util.concurrent.PriorityBlockingQueue``
- Blocking: Yes
- Bounded: No
* BoundedPriorityMailbox
- Backed by a ``java.util.PriorityQueue`` wrapped in an ``akka.util.BoundedBlockingQueue``
- Blocking: Yes
- Bounded: Yes
* Durable mailboxes, see :ref:`durable-mailboxes`.
Mailbox configuration examples
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
How to create a PriorityMailbox:
.. includecode:: ../scala/code/docs/dispatcher/DispatcherDocSpec.scala#prio-mailbox
And then add it to the configuration:
.. includecode:: ../scala/code/docs/dispatcher/DispatcherDocSpec.scala#prio-dispatcher-config
And then an example on how you would use it:
.. includecode:: ../scala/code/docs/dispatcher/DispatcherDocSpec.scala#prio-dispatcher
Creating your own Mailbox type
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
An example is worth a thousand quacks:
.. includecode:: ../scala/code/docs/dispatcher/DispatcherDocSpec.scala#mailbox-implementation-example
And then you just specify the FQCN of your MailboxType as the value of the "mailbox-type" in the dispatcher configuration.
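For instance (a sketch; the FQCN shown here is hypothetical):

.. code-block:: text

   my-dispatcher {
     mailbox-type = "docs.dispatcher.MyUnboundedMailbox"
   }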
.. note::
Make sure to include a constructor which takes
``akka.actor.ActorSystem.Settings`` and ``com.typesafe.config.Config``
arguments, as this constructor is invoked reflectively to construct your
mailbox type. The config passed in as second argument is that section from
the configuration which describes the dispatcher using this mailbox type; the
mailbox type will be instantiated once for each dispatcher using it.
Special Semantics of ``system.actorOf``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In order to make ``system.actorOf`` both synchronous and non-blocking while
keeping the return type :class:`ActorRef` (and the semantics that the returned
ref is fully functional), special handling takes place for this case. Behind
the scenes, a hollow kind of actor reference is constructed, which is sent to
the systems guardian actor who actually creates the actor and its context and
puts those inside the reference. Until that has happened, messages sent to the
:class:`ActorRef` will be queued locally, and only once the real innards have been
swapped in will they be transferred into the real mailbox. Thus,
.. code-block:: scala
val props: Props = ...
// this actor uses MyCustomMailbox, which is assumed to be a singleton
system.actorOf(props.withDispatcher("myCustomMailbox")) ! "bang"
assert(MyCustomMailbox.instance.getLastEnqueuedMessage == "bang")
will probably fail; you will have to allow for some time to pass and retry the
check à la :meth:`TestKit.awaitCond`.


@@ -0,0 +1,202 @@
.. _event-bus-scala:
#################
Event Bus (Scala)
#################
Originally conceived as a way to send messages to groups of actors, the
:class:`EventBus` has been generalized into a set of composable traits
implementing a simple interface:
- :meth:`subscribe(subscriber: Subscriber, classifier: Classifier): Boolean`
subscribes the given subscriber to events with the given classifier
- :meth:`unsubscribe(subscriber: Subscriber, classifier: Classifier): Boolean`
undoes a specific subscription
- :meth:`unsubscribe(subscriber: Subscriber)` undoes all subscriptions for the
given subscriber
- :meth:`publish(event: Event)` publishes an event, which first is classified
according to the specific bus (see `Classifiers`_) and then published to all
subscribers for the obtained classifier
This mechanism is used in different places within Akka, e.g. the
:ref:`DeathWatch <deathwatch-scala>` and the `Event Stream`_. Implementations
can make use of the specific building blocks presented below.
An event bus must define the following three abstract types:
- :class:`Event` is the type of all events published on that bus
- :class:`Subscriber` is the type of subscribers allowed to register on that
event bus
- :class:`Classifier` defines the classifier to be used in selecting
subscribers for dispatching events
The traits below are still generic in these types, but they need to be defined
for any concrete implementation.
Classifiers
===========
The classifiers presented here are part of the Akka distribution, but rolling
your own in case you do not find a perfect match is not difficult; check the
implementation of the existing ones on `github`_.
.. _github: https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/event/EventBus.scala
Lookup Classification
---------------------
The simplest classification is just to extract an arbitrary classifier from
each event and to maintain a set of subscribers for each possible classifier.
This can be compared to tuning in on a radio station. The trait
:class:`LookupClassification` is still generic in that it abstracts over how to
compare subscribers and how exactly to classify. The necessary methods to be
implemented are the following:
- :meth:`classify(event: Event): Classifier` is used for extracting the
classifier from the incoming events.
- :meth:`compareSubscribers(a: Subscriber, b: Subscriber): Int` must define a
partial order over the subscribers, expressed as expected from
:meth:`java.lang.Comparable.compare`.
- :meth:`publish(event: Event, subscriber: Subscriber)` will be invoked for
each event for all subscribers which registered themselves for the event's
classifier.
- :meth:`mapSize: Int` determines the initial size of the index data structure
used internally (i.e. the expected number of different classifiers).
This classifier is efficient in case no subscribers exist for a particular event.
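As an illustrative sketch, a minimal lookup-classified bus might look like this
(the envelope type and class names are made up; only ``EventBus`` and
``LookupClassification`` are Akka API):

.. code-block:: scala

   import akka.actor.ActorRef
   import akka.event.{ EventBus, LookupClassification }

   case class MsgEnvelope(topic: String, payload: Any)

   // classifies by topic string; subscribers are plain ActorRefs
   class LookupBusImpl extends EventBus with LookupClassification {
     type Event = MsgEnvelope
     type Classifier = String
     type Subscriber = ActorRef

     protected def mapSize: Int = 128
     protected def compareSubscribers(a: ActorRef, b: ActorRef): Int = a compareTo b
     protected def classify(event: MsgEnvelope): String = event.topic
     protected def publish(event: MsgEnvelope, subscriber: ActorRef): Unit =
       subscriber ! event.payload
   }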
Subchannel Classification
-------------------------
If classifiers form a hierarchy and it is desired that subscription be possible
not only at the leaf nodes, this classification may be just the right one. It
can be compared to tuning in on (possibly multiple) radio channels by genre.
This classification has been developed for the case where the classifier is
just the JVM class of the event and subscribers may be interested in
subscribing to all subclasses of a certain class, but it may be used with any
classifier hierarchy. The abstract members needed by this classifier are
- :obj:`subclassification: Subclassification[Classifier]` is an object
providing :meth:`isEqual(a: Classifier, b: Classifier)` and
:meth:`isSubclass(a: Classifier, b: Classifier)` to be consumed by the other
methods of this classifier.
- :meth:`classify(event: Event): Classifier` is used for extracting the
classifier from the incoming events.
- :meth:`publish(event: Event, subscriber: Subscriber)` will be invoked for
each event for all subscribers which registered themselves for the event's
classifier.
This classifier is also efficient in case no subscribers are found for an
event, but it uses conventional locking to synchronize an internal classifier
cache, hence it is not well-suited to use cases in which subscriptions change
with very high frequency (keep in mind that “opening” a classifier by sending
the first message will also have to re-check all previous subscriptions).
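A sketch analogous to the previous one, treating "foo.bar" as a subchannel of
"foo" via prefix matching (again with made-up names; ``Subclassification``
lives in ``akka.util``):

.. code-block:: scala

   import akka.actor.ActorRef
   import akka.event.{ EventBus, SubchannelClassification }
   import akka.util.Subclassification

   case class TopicEnvelope(topic: String, payload: Any)

   class StartsWithSubclassification extends Subclassification[String] {
     def isEqual(x: String, y: String): Boolean = x == y
     def isSubclass(x: String, y: String): Boolean = x startsWith y
   }

   class SubchannelBusImpl extends EventBus with SubchannelClassification {
     type Event = TopicEnvelope
     type Classifier = String
     type Subscriber = ActorRef

     protected val subclassification: Subclassification[Classifier] =
       new StartsWithSubclassification

     protected def classify(event: TopicEnvelope): String = event.topic
     protected def publish(event: TopicEnvelope, subscriber: ActorRef): Unit =
       subscriber ! event.payload
   }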
Scanning Classification
-----------------------
The previous classifier was built for multi-classifier subscriptions which are
strictly hierarchical; this classifier is useful if there are overlapping
classifiers which cover various parts of the event space without forming a
hierarchy. It can be compared to tuning in on (possibly multiple) radio
stations by geographical reachability (for old-school radio-wave transmission).
The abstract members for this classifier are:
- :meth:`compareClassifiers(a: Classifier, b: Classifier): Int` is needed for
determining matching classifiers and storing them in an ordered collection.
- :meth:`compareSubscribers(a: Subscriber, b: Subscriber): Int` is needed for
storing subscribers in an ordered collection.
- :meth:`matches(classifier: Classifier, event: Event): Boolean` determines
whether a given classifier shall match a given event; it is invoked for each
subscription for all received events, hence the name of the classifier.
- :meth:`publish(event: Event, subscriber: Subscriber)` will be invoked for
each event for all subscribers which registered themselves for a classifier
matching this event.
This classifier always takes time proportional to the number of
subscriptions, independent of how many actually match.
Actor Classification
--------------------
This classification has been developed specifically for implementing
:ref:`DeathWatch <deathwatch-scala>`: subscribers as well as classifiers are of
type :class:`ActorRef`. The abstract members are
- :meth:`classify(event: Event): ActorRef` is used for extracting the
classifier from the incoming events.
- :meth:`mapSize: Int` determines the initial size of the index data structure
used internally (i.e. the expected number of different classifiers).
This classifier is still generic in the event type, and it is efficient for
all use cases.
.. _event-stream-scala:
Event Stream
============
The event stream is the main event bus of each actor system: it is used for
carrying :ref:`log messages <logging-scala>` and `Dead Letters`_ and may be
used by the user code for other purposes as well. It uses `Subchannel
Classification`_ which enables registering to related sets of channels (as is
used for :class:`RemoteLifeCycleMessage`). The following example demonstrates
how a simple subscription works:
.. includecode:: code/docs/event/LoggingDocSpec.scala#deadletters
Default Handlers
----------------
Upon start-up the actor system creates and subscribes actors to the event
stream for logging: these are the handlers which are configured for example in
``application.conf``:
.. code-block:: text
akka {
event-handlers = ["akka.event.Logging$DefaultLogger"]
}
The handlers listed here by fully-qualified class name will be subscribed to
all log event classes with priority higher than or equal to the configured
log-level and their subscriptions are kept in sync when changing the log-level
at runtime::
system.eventStream.setLogLevel(Logging.DebugLevel)
This means that log events for a level which will not be logged are
typically not dispatched at all (unless manual subscriptions to the respective
event class have been made).
Dead Letters
------------
As described at :ref:`stopping-actors-scala`, messages queued when an actor
terminates or sent after its death are re-routed to the dead letter mailbox,
which by default will publish the messages wrapped in :class:`DeadLetter`. This
wrapper holds the original sender, receiver and message of the envelope which
was redirected.
Other Uses
----------
The event stream is always there and ready to be used, just publish your own
events (it accepts ``AnyRef``) and subscribe listeners to the corresponding JVM
classes.
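A minimal sketch (``AppEvent`` is a made-up event type):

.. code-block:: scala

   import akka.actor.{ Actor, Props }

   case class AppEvent(message: String)

   val listener = system.actorOf(Props(new Actor {
     def receive = { case AppEvent(m) ⇒ println("got " + m) }
   }))

   system.eventStream.subscribe(listener, classOf[AppEvent])
   system.eventStream.publish(AppEvent("application started"))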


@@ -0,0 +1,89 @@
.. _extending-akka-scala:
#########################
Akka Extensions (Scala)
#########################
If you want to add features to Akka, there is a very elegant, but powerful mechanism for doing so.
It's called Akka Extensions and consists of 2 basic components: an ``Extension`` and an ``ExtensionId``.
Extensions will only be loaded once per ``ActorSystem``, which will be managed by Akka.
You can choose to have your Extension loaded on-demand or at ``ActorSystem`` creation time through the Akka configuration.
Details on how to make that happen are below, in the "Loading from Configuration" section.
.. warning::
Since an extension is a way to hook into Akka itself, the implementor of the extension needs to
ensure the thread safety of his/her extension.
Building an Extension
=====================
So let's create a sample extension that just lets us count the number of times something has happened.
First, we define what our ``Extension`` should do:
.. includecode:: code/docs/extension/ExtensionDocSpec.scala
:include: extension
Then we need to create an ``ExtensionId`` for our extension so we can grab ahold of it.
.. includecode:: code/docs/extension/ExtensionDocSpec.scala
:include: extensionid
Wicked! Now all we need to do is to actually use it:
.. includecode:: code/docs/extension/ExtensionDocSpec.scala
:include: extension-usage
Or from inside of an Akka Actor:
.. includecode:: code/docs/extension/ExtensionDocSpec.scala
:include: extension-usage-actor
You can also hide the extension behind traits:
.. includecode:: code/docs/extension/ExtensionDocSpec.scala
:include: extension-usage-actor-trait
That's all there is to it!
Loading from Configuration
==========================
To be able to load extensions from your Akka configuration you must add FQCNs of implementations of either ``ExtensionId`` or ``ExtensionIdProvider``
in the ``akka.extensions`` section of the config you provide to your ``ActorSystem``.
.. includecode:: code/docs/extension/ExtensionDocSpec.scala
:include: config
Applicability
=============
The sky is the limit!
By the way, did you know that Akka's ``Typed Actors``, ``Serialization`` and other features are implemented as Akka Extensions?
.. _extending-akka-scala.settings:
Application specific settings
-----------------------------
The :ref:`configuration` can be used for application specific settings. A good practice is to place those settings in an Extension.
Sample configuration:
.. includecode:: code/docs/extension/SettingsExtensionDocSpec.scala
:include: config
The ``Extension``:
.. includecode:: code/docs/extension/SettingsExtensionDocSpec.scala
:include: imports,extension,extensionid
Use it:
.. includecode:: code/docs/extension/SettingsExtensionDocSpec.scala
:include: extension-usage-actor


@@ -0,0 +1,55 @@
.. _fault-tolerance-sample-scala:
Diagrams of the Fault Tolerance Sample (Scala)
----------------------------------------------
.. image:: ../images/faulttolerancesample-normal-flow.png
*The above diagram illustrates the normal message flow.*
**Normal flow:**
======= ==================================================================================
Step Description
======= ==================================================================================
1 The progress ``Listener`` starts the work.
2 The ``Worker`` schedules work by sending ``Do`` messages periodically to itself
3, 4, 5 When receiving ``Do`` the ``Worker`` tells the ``CounterService``
to increment the counter, three times. The ``Increment`` message is forwarded
to the ``Counter``, which updates its counter variable and sends current value
to the ``Storage``.
6, 7 The ``Worker`` asks the ``CounterService`` for the current value of the counter and pipes
the result back to the ``Listener``.
======= ==================================================================================
.. image:: ../images/faulttolerancesample-failure-flow.png
*The above diagram illustrates what happens in case of storage failure.*
**Failure flow:**
=========== ==================================================================================
Step Description
=========== ==================================================================================
1 The ``Storage`` throws ``StorageException``.
2 The ``CounterService`` is supervisor of the ``Storage`` and restarts the
``Storage`` when ``StorageException`` is thrown.
3, 4, 5, 6 The ``Storage`` continues to fail and is restarted.
7 After 3 failures and restarts within 5 seconds the ``Storage`` is stopped by its
supervisor, i.e. the ``CounterService``.
8 The ``CounterService`` is also watching the ``Storage`` for termination and
receives the ``Terminated`` message when the ``Storage`` has been stopped ...
9, 10, 11 and tells the ``Counter`` that there is no ``Storage``.
12 The ``CounterService`` schedules a ``Reconnect`` message to itself.
13, 14 When it receives the ``Reconnect`` message it creates a new ``Storage`` ...
15, 16 and tells the ``Counter`` to use the new ``Storage``
=========== ==================================================================================
Full Source Code of the Fault Tolerance Sample (Scala)
------------------------------------------------------
.. includecode:: code/docs/actor/FaultHandlingDocSample.scala#all


@@ -0,0 +1,159 @@
.. _fault-tolerance-scala:
Fault Tolerance (Scala)
=======================
As explained in :ref:`actor-systems` each actor is the supervisor of its
children, and as such each actor defines a fault handling supervisor strategy.
This strategy cannot be changed afterwards as it is an integral part of the
actor system's structure.
Fault Handling in Practice
--------------------------
First, let us look at a sample that illustrates one way to handle data store errors,
which is a typical source of failure in real world applications. Of course it depends
on the actual application what is possible to do when the data store is unavailable,
but in this sample we use a best effort re-connect approach.
Read the following source code. The inlined comments explain the different pieces of
the fault handling and why they are added. It is also highly recommended to run this
sample as it is easy to follow the log output to understand what is happening at runtime.
.. toctree::
fault-tolerance-sample
.. includecode:: code/docs/actor/FaultHandlingDocSample.scala#all
:exclude: imports,messages,dummydb
Creating a Supervisor Strategy
------------------------------
The following sections explain the fault handling mechanism and alternatives
in more depth.
For the sake of demonstration let us consider the following strategy:
.. includecode:: code/docs/actor/FaultHandlingDocSpec.scala
:include: strategy
I have chosen a few well-known exception types in order to demonstrate the
application of the fault handling directives described in :ref:`supervision`.
First off, it is a one-for-one strategy, meaning that each child is treated
separately (an all-for-one strategy works very similarly, the only difference
is that any decision is applied to all children of the supervisor, not only the
failing one). There are limits set on the restart frequency, namely maximum 10
restarts per minute; each of these settings could be left out, which means
that the respective limit does not apply, leaving the possibility to specify an
absolute upper limit on the restarts or to make the restarts work infinitely.
The match statement which forms the bulk of the body is of type ``Decider``,
which is a ``PartialFunction[Throwable, Directive]``. This
is the piece which maps child failure types to their corresponding directives.
Default Supervisor Strategy
^^^^^^^^^^^^^^^^^^^^^^^^^^^
``Escalate`` is used if the defined strategy doesn't cover the exception that was thrown.
When the supervisor strategy is not defined for an actor the following
exceptions are handled by default:
* ``ActorInitializationException`` will stop the failing child actor
* ``ActorKilledException`` will stop the failing child actor
* ``Exception`` will restart the failing child actor
* Other types of ``Throwable`` will be escalated to parent actor
If the exception escalates all the way up to the root guardian, it will be handled
in the same way as the default strategy defined above.
Stopping Supervisor Strategy
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Closer to the Erlang way is the strategy to just stop children when they fail
and then take corrective action in the supervisor when DeathWatch signals the
loss of the child. This strategy is also provided pre-packaged as
:obj:`SupervisorStrategy.stoppingStrategy` with an accompanying
:class:`StoppingSupervisorStrategy` configurator to be used when you want the
``"/user"`` guardian to apply it.
Supervision of Top-Level Actors
-------------------------------
Top-level actors are those which are created using ``system.actorOf()``, and
they are children of the :ref:`User Guardian <user-guardian>`. There are no
special rules applied in this case, the guardian simply applies the configured
strategy.
Test Application
----------------
The following section shows the effects of the different directives in practice,
for which a test setup is needed. First off, we need a suitable supervisor:
.. includecode:: code/docs/actor/FaultHandlingDocSpec.scala
:include: supervisor
This supervisor will be used to create a child, with which we can experiment:
.. includecode:: code/docs/actor/FaultHandlingDocSpec.scala
:include: child
Testing is made easier by using the utilities described in :ref:`akka-testkit`,
where ``AkkaSpec`` is a convenient mixture of ``TestKit with WordSpec with
MustMatchers``
.. includecode:: code/docs/actor/FaultHandlingDocSpec.scala
:include: testkit
Let us create actors:
.. includecode:: code/docs/actor/FaultHandlingDocSpec.scala
:include: create
The first test shall demonstrate the ``Resume`` directive, so we try it out by
setting some non-initial state in the actor and have it fail:
.. includecode:: code/docs/actor/FaultHandlingDocSpec.scala
:include: resume
As you can see the value 42 survives the fault handling directive. Now, if we
change the failure to a more serious ``NullPointerException``, that will no
longer be the case:
.. includecode:: code/docs/actor/FaultHandlingDocSpec.scala
:include: restart
And finally in case of the fatal ``IllegalArgumentException`` the child will be
terminated by the supervisor:
.. includecode:: code/docs/actor/FaultHandlingDocSpec.scala
:include: stop
Up to now the supervisor was completely unaffected by the child's failure,
because the directives set handled it. In case of an ``Exception``, this is not
true anymore and the supervisor escalates the failure.
.. includecode:: code/docs/actor/FaultHandlingDocSpec.scala
:include: escalate-kill
The supervisor itself is supervised by the top-level actor provided by the
:class:`ActorSystem`, which has the default policy to restart in case of all
``Exception`` cases (with the notable exceptions of
``ActorInitializationException`` and ``ActorKilledException``). Since the
default directive in case of a restart is to kill all children, we expect our poor
child not to survive this failure.
In case this is not desired (which depends on the use case), we need to use a
different supervisor which overrides this behavior.
.. includecode:: code/docs/actor/FaultHandlingDocSpec.scala
:include: supervisor2
With this parent, the child survives the escalated restart, as demonstrated in
the last test:
.. includecode:: code/docs/actor/FaultHandlingDocSpec.scala
:include: escalate-restart

akka-docs/rst/scala/fsm.rst

@@ -0,0 +1,459 @@
.. _fsm-scala:
###
FSM
###
Overview
========
The FSM (Finite State Machine) is available as a mixin for the Akka Actor and
is best described in the `Erlang design principles
<http://www.erlang.org/documentation/doc-4.8.2/doc/design_principles/fsm.html>`_.
An FSM can be described as a set of relations of the form:
**State(S) x Event(E) -> Actions (A), State(S')**
These relations are interpreted as meaning:
*If we are in state S and the event E occurs, we should perform the actions A
and make a transition to the state S'.*
A Simple Example
================
To demonstrate most of the features of the :class:`FSM` trait, consider an
actor which shall receive and queue messages while they arrive in a burst and
send them on after the burst ended or a flush request is received.
First, consider all of the below to use these import statements:
.. includecode:: code/docs/actor/FSMDocSpec.scala#simple-imports
The contract of our “Buncher” actor is that it accepts or produces the following messages:
.. includecode:: code/docs/actor/FSMDocSpec.scala#simple-events
``SetTarget`` is needed for starting it up, setting the destination for the
``Batches`` to be passed on; ``Queue`` will add to the internal queue while
``Flush`` will mark the end of a burst.
.. includecode:: code/docs/actor/FSMDocSpec.scala#simple-state
The actor can be in two states: no message queued (aka ``Idle``) or some
message queued (aka ``Active``). It will stay in the active state as long as
messages keep arriving and no flush is requested. The internal state data of
the actor is made up of the target actor reference to send the batches to and
the actual queue of messages.
Now let's take a look at the skeleton for our FSM actor:
.. includecode:: code/docs/actor/FSMDocSpec.scala
:include: simple-fsm
:exclude: transition-elided,unhandled-elided
The basic strategy is to declare the actor, mixing in the :class:`FSM` trait
and specifying the possible states and data values as type parameters. Within
the body of the actor a DSL is used for declaring the state machine:
* :meth:`startWith` defines the initial state and initial data
* then there is one :meth:`when(<state>) { ... }` declaration per state to be
handled (could potentially be multiple ones, the passed
:class:`PartialFunction` will be concatenated using :meth:`orElse`)
* finally starting it up using :meth:`initialize`, which performs the
transition into the initial state and sets up timers (if required).
In this case, we start out in the ``Idle`` and ``Uninitialized`` state, where
only the ``SetTarget()`` message is handled; ``stay`` prepares to end this
event's processing for not leaving the current state, while the ``using``
modifier makes the FSM replace the internal state (which is ``Uninitialized``
at this point) with a fresh ``Todo()`` object containing the target actor
reference. The ``Active`` state has a state timeout declared, which means that
if no message is received for 1 second, a ``FSM.StateTimeout`` message will be
generated. This has the same effect as receiving the ``Flush`` command in this
case, namely to transition back into the ``Idle`` state and resetting the
internal queue to the empty vector. But how do messages get queued? Since this
shall work identically in both states, we make use of the fact that any event
which is not handled by the ``when()`` block is passed to the
``whenUnhandled()`` block:
.. includecode:: code/docs/actor/FSMDocSpec.scala#unhandled-elided
The first case handled here is adding ``Queue()`` requests to the internal
queue and going to the ``Active`` state (this does the obvious thing of staying
in the ``Active`` state if already there), but only if the FSM data are not
``Uninitialized`` when the ``Queue()`` event is received. Otherwise—and in all
other non-handled cases—the second case just logs a warning and does not change
the internal state.
The only missing piece is where the ``Batches`` are actually sent to the
target, for which we use the ``onTransition`` mechanism: you can declare
multiple such blocks and all of them will be tried for matching behavior in
case a state transition occurs (i.e. only when the state actually changes).
.. includecode:: code/docs/actor/FSMDocSpec.scala#transition-elided
The transition callback is a partial function which takes as input a pair of
states—the current and the next state. The FSM trait includes a convenience
extractor for these in form of an arrow operator, which conveniently reminds
you of the direction of the state change which is being matched. During the
state change, the old state data is available via ``stateData`` as shown, and
the new state data would be available as ``nextStateData``.
To verify that this buncher actually works, it is quite easy to write a test
using the :ref:`akka-testkit`, which is conveniently bundled with ScalaTest traits
into ``AkkaSpec``:
.. includecode:: code/docs/actor/FSMDocSpec.scala
:include: test-code
:exclude: fsm-code-elided
Reference
=========
The FSM Trait and Object
------------------------
The :class:`FSM` trait may only be mixed into an :class:`Actor`. Instead of
extending :class:`Actor`, the self type approach was chosen in order to make it
obvious that an actor is actually created:
.. includecode:: code/docs/actor/FSMDocSpec.scala
:include: simple-fsm
:exclude: fsm-body
The :class:`FSM` trait takes two type parameters:
#. the supertype of all state names, usually a sealed trait with case objects
extending it,
#. the type of the state data which are tracked by the :class:`FSM` module
itself.
.. _fsm-philosophy:
.. note::
The state data together with the state name describe the internal state of
the state machine; if you stick to this scheme and do not add mutable fields
to the FSM class you have the advantage of making all changes of the
internal state explicit in a few well-known places.
Defining States
---------------
A state is defined by one or more invocations of the method
:func:`when(<name>[, stateTimeout = <timeout>])(stateFunction)`.
The given name must be an object which is type-compatible with the first type
parameter given to the :class:`FSM` trait. This object is used as a hash key,
so you must ensure that it properly implements :meth:`equals` and
:meth:`hashCode`; in particular it must not be mutable. The easiest fit for
these requirements are case objects.
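For example (a sketch):

.. code-block:: scala

   sealed trait ExampleState
   case object Idle extends ExampleState
   case object Active extends ExampleState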
If the :meth:`stateTimeout` parameter is given, then all transitions into this
state, including staying, receive this timeout by default. Initiating the
transition with an explicit timeout may be used to override this default, see
`Initiating Transitions`_ for more information. The state timeout of any state
may be changed during action processing with
:func:`setStateTimeout(state, duration)`. This enables runtime configuration
e.g. via external message.
The :meth:`stateFunction` argument is a :class:`PartialFunction[Event, State]`,
which is conveniently given using the partial function literal syntax as
demonstrated below:
.. includecode:: code/docs/actor/FSMDocSpec.scala
:include: when-syntax
The :class:`Event(msg: Any, data: D)` case class is parameterized with the data
type held by the FSM for convenient pattern matching.
Defining the Initial State
--------------------------
Each FSM needs a starting point, which is declared using
:func:`startWith(state, data[, timeout])`
The optionally given timeout argument overrides any specification given for the
desired initial state. If you want to cancel a default timeout, use
:obj:`Duration.Inf`.
Unhandled Events
----------------
If a state doesn't handle a received event a warning is logged. If you want to
do something else in this case you can specify that with
:func:`whenUnhandled(stateFunction)`:
.. includecode:: code/docs/actor/FSMDocSpec.scala
:include: unhandled-syntax
**IMPORTANT**: This handler is not stacked, meaning that each invocation of
:func:`whenUnhandled` replaces the previously installed handler.
Initiating Transitions
----------------------
The result of any :obj:`stateFunction` must be a definition of the next state
unless terminating the FSM, which is described in `Termination from Inside`_.
The state definition can either be the current state, as described by the
:func:`stay` directive, or it is a different state as given by
:func:`goto(state)`. The resulting object allows further qualification by way
of the modifiers described in the following:
* :meth:`forMax(duration)`
This modifier sets a state timeout on the next state. This means that a timer
is started which upon expiry sends a :obj:`StateTimeout` message to the FSM.
This timer is canceled upon reception of any other message in the meantime;
you can rely on the fact that the :obj:`StateTimeout` message will not be
processed after an intervening message.
This modifier can also be used to override any default timeout which is
specified for the target state. If you want to cancel the default timeout,
use :obj:`Duration.Inf`.
* :meth:`using(data)`
This modifier replaces the old state data with the new data given. If you
follow the advice :ref:`above <fsm-philosophy>`, this is the only place where
internal state data are ever modified.
* :meth:`replying(msg)`
This modifier sends a reply to the currently processed message and otherwise
does not modify the state transition.
All modifiers can be chained to achieve a nice and concise description:
.. includecode:: code/docs/actor/FSMDocSpec.scala
:include: modifier-syntax
The parentheses are not actually needed in all cases, but they visually
distinguish between modifiers and their arguments and therefore make the code
even more pleasant to read for foreigners.
.. note::
Please note that the ``return`` statement may not be used in :meth:`when`
blocks or similar; this is a Scala restriction. Either refactor your code
using ``if () ... else ...`` or move it into a method definition.
Monitoring Transitions
----------------------
Transitions occur "between states" conceptually, which means after any actions
you have put into the event handling block; this is obvious since the next
state is only defined by the value returned by the event handling logic. You do
not need to worry about the exact order with respect to setting the internal
state variable, as everything within the FSM actor is running single-threaded
anyway.
Internal Monitoring
^^^^^^^^^^^^^^^^^^^
Up to this point, the FSM DSL has been centered on states and events. The dual
view is to describe it as a series of transitions. This is enabled by the
method
:func:`onTransition(handler)`
which associates actions with a transition instead of with a state and event.
The handler is a partial function which takes a pair of states as input; no
resulting state is needed as it is not possible to modify the transition in
progress.
.. includecode:: code/docs/actor/FSMDocSpec.scala
:include: transition-syntax
The convenience extractor :obj:`->` enables decomposition of the pair of states
with a clear visual reminder of the transition's direction. As usual in pattern
matches, an underscore may be used for irrelevant parts; alternatively you
could bind the unconstrained state to a variable, e.g. for logging as shown in
the last case.
It is also possible to pass a function object accepting two states to
:func:`onTransition`, in case your transition handling logic is implemented as
a method:
.. includecode:: code/docs/actor/FSMDocSpec.scala
:include: alt-transition-syntax
The handlers registered with this method are stacked, so you can intersperse
:func:`onTransition` blocks with :func:`when` blocks as suits your design. It
should be noted, however, that *all handlers will be invoked for each
transition*, not only the first matching one. This is designed specifically so
you can put all transition handling for a certain aspect into one place without
having to worry about earlier declarations shadowing later ones; the actions
are still executed in declaration order, though.
.. note::
This kind of internal monitoring may be used to structure your FSM according
to transitions, so that for example the cancellation of a timer upon leaving
a certain state cannot be forgotten when adding new target states.
External Monitoring
^^^^^^^^^^^^^^^^^^^
External actors may be registered to be notified of state transitions by
sending a message :class:`SubscribeTransitionCallBack(actorRef)`. The named
actor will be sent a :class:`CurrentState(self, stateName)` message immediately
and will receive :class:`Transition(actorRef, oldState, newState)` messages
whenever a new state is reached. External monitors may be unregistered by
sending :class:`UnsubscribeTransitionCallBack(actorRef)` to the FSM actor.
Registering a not-running listener generates a warning and fails gracefully.
Stopping a listener without unregistering will remove the listener from the
subscription list upon the next transition.
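As an illustration, a minimal monitor actor could look like the following
sketch (the ``fsmActor`` reference and the printed output are assumptions made
for this example):

.. code-block:: scala

  import akka.actor.{ Actor, ActorRef, FSM }

  // hypothetical monitor; fsmActor is some FSM-based ActorRef passed in
  class TransitionMonitor(fsmActor: ActorRef) extends Actor {
    override def preStart() {
      fsmActor ! FSM.SubscribeTransitionCallBack(self)
    }
    def receive = {
      case FSM.CurrentState(_, state) =>
        println("FSM is currently in state " + state)
      case FSM.Transition(_, from, to) =>
        println("FSM transitioned from " + from + " to " + to)
    }
  }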
Transforming State
------------------
The partial functions supplied as argument to the ``when()`` blocks can be
transformed using Scala's full complement of functional programming tools. In
order to retain type inference, there is a helper function which may be used in
case some common handling logic shall be applied to different clauses:
.. includecode:: code/docs/actor/FSMDocSpec.scala
:include: transform-syntax
It goes without saying that the arguments to this method may also be stored, to
be used several times, e.g. when applying the same transformation to several
``when()`` blocks:
.. includecode:: code/docs/actor/FSMDocSpec.scala
:include: alt-transform-syntax
Timers
------
Besides state timeouts, FSM manages timers identified by :class:`String` names.
You may set a timer using
:func:`setTimer(name, msg, interval, repeat)`
where :obj:`msg` is the message object which will be sent after the duration
:obj:`interval` has elapsed. If :obj:`repeat` is :obj:`true`, then the timer is
scheduled at a fixed rate given by the :obj:`interval` parameter. Timers may be
canceled using
:func:`cancelTimer(name)`
which is guaranteed to work immediately, meaning that the scheduled message
will not be processed after this call even if the timer already fired and
queued it. The status of any timer may be inquired with
:func:`timerActive_?(name)`
These named timers complement state timeouts because they are not affected by
intervening reception of other messages.
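As a quick illustration, a state that manages a heartbeat timer might look
like this sketch (the state names, messages and duration are made up for the
example, and a duration import such as ``scala.concurrent.duration._`` is
assumed):

.. code-block:: scala

  when(Active) {
    case Event(Connect, _) =>
      // re-send Tick every second until the timer is canceled
      setTimer("heartbeat", Tick, 1.second, repeat = true)
      stay()
    case Event(Disconnect, _) =>
      cancelTimer("heartbeat")
      goto(Idle)
  }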
Termination from Inside
-----------------------
The FSM is stopped by specifying the result state as
:func:`stop([reason[, data]])`
The reason must be one of :obj:`Normal` (which is the default), :obj:`Shutdown`
or :obj:`Failure(reason)`, and the second argument may be given to change the
state data which is available during termination handling.
.. note::
It should be noted that :func:`stop` does not abort the actions and stop the
FSM immediately. The stop action must be returned from the event handler in
the same way as a state transition (but note that the ``return`` statement
may not be used within a :meth:`when` block).
.. includecode:: code/docs/actor/FSMDocSpec.scala
:include: stop-syntax
You can use :func:`onTermination(handler)` to specify custom code that is
executed when the FSM is stopped. The handler is a partial function which takes
a :class:`StopEvent(reason, stateName, stateData)` as argument:
.. includecode:: code/docs/actor/FSMDocSpec.scala
:include: termination-syntax
As for the :func:`whenUnhandled` case, this handler is not stacked, so each
invocation of :func:`onTermination` replaces the previously installed handler.
Termination from Outside
------------------------
When an :class:`ActorRef` associated with an FSM is stopped using the
:meth:`stop()` method, its :meth:`postStop` hook will be executed. The default
implementation by the :class:`FSM` trait is to execute the
:meth:`onTermination` handler if that is prepared to handle a
:obj:`StopEvent(Shutdown, ...)`.
.. warning::
In case you override :meth:`postStop` and want to have your
:meth:`onTermination` handler called, do not forget to call
``super.postStop``.
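A minimal sketch of such an override could look like this (the cleanup action
``cleanUpResources`` is hypothetical):

.. code-block:: scala

  override def postStop() {
    // hypothetical custom cleanup, e.g. releasing external resources
    cleanUpResources()
    // without this call the onTermination handler would not run
    super.postStop()
  }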
Testing and Debugging Finite State Machines
===========================================
During development and troubleshooting, FSMs need care just like any other
actor. There are specialized tools available as described in :ref:`TestFSMRef`
and in the following.
Event Tracing
-------------
The setting ``akka.actor.debug.fsm`` in :ref:`configuration` enables logging of an
event trace by :class:`LoggingFSM` instances:
.. includecode:: code/docs/actor/FSMDocSpec.scala
:include: logging-fsm
:exclude: body-elided
This FSM will log at DEBUG level:
* all processed events, including :obj:`StateTimeout` and scheduled timer
messages
* every setting and cancellation of named timers
* all state transitions
Life cycle changes and special messages can be logged as described for
:ref:`Actors <actor.logging-scala>`.
Rolling Event Log
-----------------
The :class:`LoggingFSM` trait adds one more feature to the FSM: a rolling event
log which may be used during debugging (for tracing how the FSM entered a
certain failure state) or for other creative uses:
.. includecode:: code/docs/actor/FSMDocSpec.scala
:include: logging-fsm
The :meth:`logDepth` defaults to zero, which turns off the event log.
.. warning::
The log buffer is allocated during actor creation, which is why the
configuration is done using a virtual method call. If you want to override
with a ``val``, make sure that its initialization happens before the
initializer of :class:`LoggingFSM` runs, and do not change the value returned
by ``logDepth`` after the buffer has been allocated.
The contents of the event log are available using method :meth:`getLog`, which
returns an :class:`IndexedSeq[LogEntry]` where the oldest entry is at index
zero.
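For example, the log could be dumped when the FSM fails, roughly like in this
sketch (the state and data types are made up for the example):

.. code-block:: scala

  import akka.actor.{ Actor, FSM, LoggingFSM }

  sealed trait MyState
  case object Idle extends MyState
  case class Data(count: Int)

  // hypothetical FSM that dumps its rolling event log on failure
  class MyFSM extends Actor with LoggingFSM[MyState, Data] {
    override def logDepth = 12
    startWith(Idle, Data(0))
    when(Idle) { case Event(_, _) => stay() }
    onTermination {
      case StopEvent(FSM.Failure(_), state, data) =>
        val lastEvents = getLog.mkString("\n\t")
        log.warning("Failure in state " + state + " with data " + data +
          "\nEvents leading up to this point:\n\t" + lastEvents)
    }
    initialize
  }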
Examples
========
A bigger FSM example contrasted with Actor's :meth:`become`/:meth:`unbecome` can be found in the sources:
* `Dining Hakkers using FSM <https://github.com/akka/akka/blob/master/akka-samples/akka-sample-fsm/src/main/scala/DiningHakkersOnFsm.scala#L1>`_
* `Dining Hakkers using become <https://github.com/akka/akka/blob/master/akka-samples/akka-sample-fsm/src/main/scala/DiningHakkersOnBecome.scala#L1>`_

.. _futures-scala:
Futures (Scala)
===============
Introduction
------------
In the Scala Standard Library, a `Future <http://en.wikipedia.org/wiki/Futures_and_promises>`_ is a data structure
used to retrieve the result of some concurrent operation. This result can be accessed synchronously (blocking)
or asynchronously (non-blocking).
Execution Contexts
------------------
In order to execute callbacks and operations, Futures need something called an ``ExecutionContext``,
which is very similar to a ``java.util.concurrent.Executor``. If you have an ``ActorSystem`` in scope,
it will use its default dispatcher as the ``ExecutionContext``, or you can use the factory methods provided
by the ``ExecutionContext`` companion object to wrap ``Executors`` and ``ExecutorServices``, or even create your own.
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: diy-execution-context
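For instance, bringing an ``ActorSystem``'s default dispatcher into implicit
scope can look like this small sketch (the system name is made up):

.. code-block:: scala

  import akka.actor.ActorSystem
  import scala.concurrent.Future

  val system = ActorSystem("example") // hypothetical system name
  import system.dispatcher // the default dispatcher as implicit ExecutionContext

  val f: Future[Int] = Future { 21 * 2 }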
Use With Actors
---------------
There are generally two ways of getting a reply from an ``Actor``: the first is by a sent message (``actor ! msg``),
which only works if the original sender was an ``Actor``, and the second is through a ``Future``.
Using an ``Actor``\'s ``?`` method to send a message will return a ``Future``. To wait for and retrieve the actual result the simplest method is:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: ask-blocking
This will cause the current thread to block and wait for the ``Actor`` to 'complete' the ``Future`` with its reply.
Blocking is discouraged though as it will cause performance problems.
The blocking operations are located in ``Await.result`` and ``Await.ready`` to make it easy to spot where blocking occurs.
Alternatives to blocking are discussed further within this documentation. Also note that the ``Future`` returned by
an ``Actor`` is a ``Future[Any]`` since an ``Actor`` is dynamic. That is why the ``asInstanceOf`` is used in the above sample.
When using non-blocking calls it is better to use the ``mapTo`` method to safely try to cast a ``Future`` to an expected type:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: map-to
The ``mapTo`` method will return a new ``Future`` that contains the result if the cast was successful,
or a ``ClassCastException`` if not. Handling ``Exception``\s will be discussed further within this documentation.
Use Directly
------------
A common use case within Akka is to have some computation performed concurrently without needing the extra utility of an ``Actor``.
If you find yourself creating a pool of ``Actor``\s for the sole reason of performing a calculation in parallel,
there is an easier (and faster) way:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: future-eval
In the above code the block passed to ``Future`` will be executed by the default ``Dispatcher``,
with the return value of the block used to complete the ``Future`` (in this case, the result would be the string: "HelloWorld").
Unlike a ``Future`` that is returned from an ``Actor``, this ``Future`` is properly typed,
and we also avoid the overhead of managing an ``Actor``.
You can also create already completed Futures using the ``Future`` companion, which can be either successes:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: successful
Or failures:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: failed
Functional Futures
------------------
Scala's ``Future`` has several monadic methods that are very similar to the ones used by Scala's collections.
These allow you to create 'pipelines' or 'streams' that the result will travel through.
Future is a Monad
^^^^^^^^^^^^^^^^^
The first method for working with ``Future`` functionally is ``map``. This method takes a ``Function``
which performs some operation on the result of the ``Future``, and returns a new result.
The return value of the ``map`` method is another ``Future`` that will contain the new result:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: map
In this example we are joining two strings together within a ``Future``. Instead of waiting for this to complete,
we apply our function that calculates the length of the string using the ``map`` method.
Now we have a second ``Future`` that will eventually contain an ``Int``.
When our original ``Future`` completes, it will also apply our function and complete the second ``Future`` with its result.
When we finally get the result, it will contain the number 10. Our original ``Future`` still contains the
string "HelloWorld" and is unaffected by the ``map``.
The ``map`` method is fine if we are modifying a single ``Future``,
but if 2 or more ``Future``\s are involved ``map`` will not allow you to combine them together:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: wrong-nested-map
``f3`` is a ``Future[Future[Int]]`` instead of the desired ``Future[Int]``. Instead, the ``flatMap`` method should be used:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: flat-map
Composing futures using nested combinators can sometimes become quite complicated and hard to read; in these cases using Scala's
'for comprehensions' usually yields more readable code. See the next section for examples.
If you need to do conditional propagation, you can use ``filter``:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: filter
For Comprehensions
^^^^^^^^^^^^^^^^^^
Since ``Future`` has a ``map``, ``filter`` and ``flatMap`` method it can be easily used in a 'for comprehension':
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: for-comprehension
Something to keep in mind when doing this: even though it looks like parts of the above example can run in parallel,
each step of the for comprehension is run sequentially. Each step happens on a separate thread, but
there isn't much benefit over running the calculations all within a single ``Future``.
The real benefit comes when the ``Future``\s are created first, and then combined.
Composing Futures
^^^^^^^^^^^^^^^^^
The for comprehension above is an example of composing ``Future``\s.
A common use case for this is combining the replies of several ``Actor``\s into a single calculation
without resorting to calling ``Await.result`` or ``Await.ready`` to block for each result.
First an example of using ``Await.result``:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: composing-wrong
Here we wait for the results from the first 2 ``Actor``\s before sending that result to the third ``Actor``.
We called ``Await.result`` 3 times, which caused our little program to block 3 times before getting our final result.
Now compare that to this example:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: composing
Here we have 2 actors processing a single message each. Once the 2 results are available
(note that we don't block to get these results!), they are added together and sent to a third ``Actor``,
which replies with a string, which we assign to 'result'.
This is fine when dealing with a known number of Actors, but can grow unwieldy if we have more than a handful.
The ``sequence`` and ``traverse`` helper methods can make it easier to handle more complex use cases.
Both of these methods are ways of turning, for a subclass ``T`` of ``Traversable``, ``T[Future[A]]`` into a ``Future[T[A]]``.
For example:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: sequence-ask
To better explain what happened in the example, ``Future.sequence`` is taking the ``List[Future[Int]]``
and turning it into a ``Future[List[Int]]``. We can then use ``map`` to work with the ``List[Int]`` directly,
and we find the sum of the ``List``.
The ``traverse`` method is similar to ``sequence``, but it takes a ``T[A]`` and a function ``A => Future[B]`` to return a ``Future[T[B]]``,
where ``T`` is again a subclass of ``Traversable``. For example, to use ``traverse`` to sum the first 100 odd numbers:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: traverse
This is the same result as this example:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: sequence
But it may be faster to use ``traverse`` as it doesn't have to create an intermediate ``List[Future[Int]]``.
Then there's a method called ``fold`` that takes a start-value, a sequence of ``Future``\s and a function
from the type of the start-value and the type of the futures to the type of the start-value.
The function is applied to all elements in the sequence of futures, asynchronously;
the execution will start when the last of the Futures is completed.
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: fold
That's all it takes!
If the sequence passed to ``fold`` is empty, it will return the start-value; in the case above, that will be 0.
In some cases you don't have a start-value but are able to use the value of the first completed ``Future`` in the sequence
as the start-value; in that case you can use ``reduce``, which works like this:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: reduce
As with ``fold``, the execution is done asynchronously once the last of the ``Future``\s is completed;
you can also parallelize the work by chunking your futures into sub-sequences, reducing them, and then reducing the reduced results again.
Callbacks
---------
Sometimes you just want to listen to a ``Future`` being completed, and react to that not by creating a new ``Future``, but by side-effecting.
For this Scala supports ``onComplete``, ``onSuccess`` and ``onFailure``, of which the latter two are specializations of the first.
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: onSuccess
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: onFailure
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: onComplete
Define Ordering
---------------
Since callbacks are executed in any order and potentially in parallel,
it can be tricky at times when you need sequential ordering of operations.
But there's a solution and its name is ``andThen``. It creates a new ``Future`` with
the specified callback, a ``Future`` that will have the same result as the ``Future`` it's called on,
which allows for ordering like in the following sample:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: and-then
Auxiliary Methods
-----------------
The ``Future`` method ``fallbackTo`` combines 2 Futures into a new ``Future``, which will hold the successful value of the second ``Future``
if the first ``Future`` fails.
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: fallback-to
You can also combine two Futures into a new ``Future`` that will hold a tuple of the two Futures' successful results,
using the ``zip`` operation.
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: zip
Exceptions
----------
Since the result of a ``Future`` is created concurrently to the rest of the program, exceptions must be handled differently.
It doesn't matter if an ``Actor`` or the dispatcher is completing the ``Future``,
if an ``Exception`` is caught the ``Future`` will contain it instead of a valid result.
If a ``Future`` does contain an ``Exception``, calling ``Await.result`` will cause it to be thrown again so it can be handled properly.
It is also possible to handle an ``Exception`` by returning a different result.
This is done with the ``recover`` method. For example:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: recover
In this example, if the actor replied with an ``akka.actor.Status.Failure`` containing the ``ArithmeticException``,
our ``Future`` would have a result of 0. The ``recover`` method works very similarly to the standard try/catch blocks,
so multiple ``Exception``\s can be handled in this manner, and if an ``Exception`` is not handled this way
it will behave as if we hadn't used the ``recover`` method.
You can also use the ``recoverWith`` method, which has the same relationship to ``recover`` as ``flatMap`` has to ``map``,
and is used like this:
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: try-recover
After
-----
``akka.pattern.after`` makes it easy to complete a ``Future`` with a value or exception after a timeout.
.. includecode:: code/docs/future/FutureDocSpec.scala
:include: after

.. _howto-scala:
######################
HowTo: Common Patterns
######################
This section lists common actor patterns which have been found to be useful,
elegant or instructive. Anything is welcome, example topics being message
routing strategies, supervision patterns, restart handling, etc. As a special
bonus, additions to this section are marked with the contributor's name, and it
would be nice if every Akka user who finds a recurring pattern in his or her
code could share it for the benefit of all. Where applicable it might also make
sense to add to the ``akka.pattern`` package for creating an `OTP-like library
<http://www.erlang.org/doc/man_index.html>`_.
Throttling Messages
===================
Contributed by: Kaspar Fischer
"A message throttler that ensures that messages are not sent out at too high a rate."
The pattern is described in `Throttling Messages in Akka 2 <http://letitcrash.com/post/28901663062/throttling-messages-in-akka-2>`_.
Balancing Workload Across Nodes
===============================
Contributed by: Derek Wyatt
"Often times, people want the functionality of the BalancingDispatcher with the
stipulation that the Actors doing the work have distinct Mailboxes on remote
nodes. In this post we'll explore the implementation of such a concept."
The pattern is described in `Balancing Workload across Nodes with Akka 2 <http://letitcrash.com/post/29044669086/balancing-workload-across-nodes-with-akka-2>`_.
Ordered Termination
===================
Contributed by: Derek Wyatt
"When an Actor stops, its children stop in an undefined order. Child termination is
asynchronous and thus non-deterministic.
If an Actor has children that have order dependencies, then you might need to ensure
a particular shutdown order of those children so that their postStop() methods get
called in the right order."
The pattern is described in `An Akka 2 Terminator <http://letitcrash.com/post/29773618510/an-akka-2-terminator>`_.
Akka AMQP Proxies
=================
Contributed by: Fabrice Drouin
"“AMQP proxies” is a simple way of integrating AMQP with Akka to distribute jobs across a network of computing nodes.
You still write “local” code, have very little to configure, and end up with a distributed, elastic,
fault-tolerant grid where computing nodes can be written in nearly every programming language."
The pattern is described in `Akka AMQP Proxies <http://letitcrash.com/post/29988753572/akka-amqp-proxies>`_.
Shutdown Patterns in Akka 2
===========================
Contributed by: Derek Wyatt
“How do you tell Akka to shut down the ActorSystem when everything's finished?
It turns out that there's no magical flag for this, no configuration setting, no special callback you can register for,
and neither will the illustrious shutdown fairy grace your application with her glorious presence at that perfect moment.
She's just plain mean.
In this post, we'll discuss why this is the case and provide you with a simple option for shutting down “at the right time”,
as well as a not-so-simple option for doing the exact same thing."
The pattern is described in `Shutdown Patterns in Akka 2 <http://letitcrash.com/post/30165507578/shutdown-patterns-in-akka-2>`_.
Distributed (in-memory) graph processing with Akka
==================================================
Contributed by: Adelbert Chang
"Graphs have always been an interesting structure to study in both mathematics and computer science (among other fields),
and have become even more interesting in the context of online social networks such as Facebook and Twitter,
whose underlying network structures are nicely represented by graphs."
The pattern is described in `Distributed In-Memory Graph Processing with Akka <http://letitcrash.com/post/30257014291/distributed-in-memory-graph-processing-with-akka>`_.
Case Study: An Auto-Updating Cache Using Actors
===============================================
Contributed by: Eric Pederson
"We recently needed to build a caching system in front of a slow backend system with the following requirements:
The data in the backend system is constantly being updated so the caches need to be updated every N minutes.
Requests to the backend system need to be throttled.
The caching system we built used Akka actors and Scala's support for functions as first class objects."
The pattern is described in `Case Study: An Auto-Updating Cache using Actors <http://letitcrash.com/post/30509298968/case-study-an-auto-updating-cache-using-actors>`_.
Discovering message flows in actor systems with the Spider Pattern
==================================================================
Contributed by: Raymond Roestenburg
"Building actor systems is fun but debugging them can be difficult, you mostly end up browsing through many log files
on several machines to find out what's going on. I'm sure you have browsed through logs and thought,
“Hey, where did that message go?”, “Why did this message cause that effect” or “Why did this actor never get a message?”
This is where the Spider pattern comes in."
The pattern is described in `Discovering Message Flows in Actor System with the Spider Pattern <http://letitcrash.com/post/30585282971/discovering-message-flows-in-actor-systems-with-the>`_.
Template Pattern
================
*Contributed by: N. N.*
This is an especially nice pattern, since it even comes with some empty example code:
.. includecode:: code/docs/pattern/ScalaTemplate.scala
:include: all-of-it
:exclude: uninteresting-stuff
.. note::
Spread the word: this is the easiest way to get famous!
Please keep this pattern at the end of this file.

.. _scala-api:
Scala API
=========
.. toctree::
:maxdepth: 2
actors
typed-actors
logging
event-bus
scheduler
futures
dataflow
fault-tolerance
dispatchers
routing
remoting
serialization
fsm
stm
agents
transactors
io
testing
extending-akka
zeromq
microkernel
camel
howto

.. _io-scala:
IO (Scala)
==========
Introduction
------------
This documentation is in progress and some sections may be incomplete. More will be coming.
Components
----------
ByteString
^^^^^^^^^^
A primary goal of Akka's IO module is to only communicate between actors with immutable objects. When dealing with network IO on the JVM, ``Array[Byte]`` and ``ByteBuffer`` are commonly used to represent collections of ``Byte``\s, but they are mutable. Scala's collection library also lacks a suitably efficient immutable collection for ``Byte``\s. Being able to safely and efficiently move ``Byte``\s around is very important for this IO module, so ``ByteString`` was developed.
``ByteString`` is a `Rope-like <http://en.wikipedia.org/wiki/Rope_(computer_science)>`_ data structure that is immutable and efficient. When 2 ``ByteString``\s are concatenated together they are both stored within the resulting ``ByteString`` instead of copying both to a new ``Array``. Operations such as ``drop`` and ``take`` return ``ByteString``\s that still reference the original ``Array``, but just change the offset and length that is visible. Great care has also been taken to make sure that the internal ``Array`` cannot be modified. Whenever a potentially unsafe ``Array`` is used to create a new ``ByteString`` a defensive copy is created. If you require a ``ByteString`` that only holds as much memory as necessary for its content, use the ``compact`` method to get a ``CompactByteString`` instance. If the ``ByteString`` represents only a slice of the original array, this will result in copying all bytes in that slice.
``ByteString`` inherits all methods from ``IndexedSeq``, and it also has some new ones. For more information, look up the ``akka.util.ByteString`` class and its companion object in the ScalaDoc.
``ByteString`` also comes with its own optimized builder and iterator classes, ``ByteStringBuilder`` and ``ByteIterator``, which provide special features in addition to the standard builder / iterator methods:
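The following small sketch illustrates the non-copying behavior described
above (the values are made up for the example):

.. code-block:: scala

  import akka.util.ByteString

  val a = ByteString("Hello")
  val b = ByteString("World")
  val ab = a ++ b              // both parts are referenced, nothing is copied
  val tail = ab.drop(5)        // still backed by the original bytes
  val compacted = tail.compact // copies only the visible slice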
Compatibility with java.io
..........................
A ``ByteStringBuilder`` can be wrapped in a ``java.io.OutputStream`` via the ``asOutputStream`` method. Likewise, ``ByteIterator`` can be wrapped in a ``java.io.InputStream`` via ``asInputStream``. Using these, ``akka.io`` applications can integrate legacy code based on ``java.io`` streams.
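As a rough sketch of what this integration can look like (the serialized
payload is arbitrary):

.. code-block:: scala

  import java.io.ObjectOutputStream
  import akka.util.ByteString

  // build a ByteString by writing through a java.io.OutputStream facade
  val builder = ByteString.newBuilder
  val out = new ObjectOutputStream(builder.asOutputStream)
  out.writeObject("some data") // arbitrary serializable payload
  out.flush()
  val bytes: ByteString = builder.result()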
Encoding and decoding of binary data
....................................
``ByteStringBuilder`` and ``ByteIterator`` support encoding and decoding of binary data. As an example, consider a stream of binary data frames with the following format:
.. code-block:: text
frameLen: Int
n: Int
m: Int
n times {
a: Short
b: Long
}
data: m times Double
In this example, the data is to be stored in arrays of ``a``, ``b`` and ``data``.
Decoding of such frames can be efficiently implemented in the following fashion:
.. includecode:: code/docs/io/BinaryCoding.scala
:include: decoding
This implementation naturally follows the example data format. In a true Scala application, one might, of course, want to use specialized immutable Short/Long/Double containers instead of mutable Arrays.
After extracting data from a ``ByteIterator``, the remaining content can also be turned back into a ``ByteString`` using the ``toSeq`` method
.. includecode:: code/docs/io/BinaryCoding.scala
:include: rest-to-seq
with no copying from ``bytes`` to ``rest`` involved. In general, conversions from ``ByteString`` to ``ByteIterator`` and vice versa are O(1) for non-chunked ``ByteString``\s and (at worst) O(nChunks) for chunked ``ByteString``\s.
Encoding of data is also very natural, using ``ByteStringBuilder``
.. includecode:: code/docs/io/BinaryCoding.scala
:include: encoding
The encoded data can then be sent over a socket (see ``IOManager``):
.. includecode:: code/docs/io/BinaryCoding.scala
:include: sending
IO.Handle
^^^^^^^^^
``IO.Handle`` is an immutable reference to a Java NIO ``Channel``. Passing mutable ``Channel``\s between ``Actor``\s could lead to unsafe behavior, so instead subclasses of the ``IO.Handle`` trait are used. Currently there are 2 concrete subclasses: ``IO.SocketHandle`` (representing a ``SocketChannel``) and ``IO.ServerHandle`` (representing a ``ServerSocketChannel``).
IOManager
^^^^^^^^^
The ``IOManager`` takes care of the low level IO details. Each ``ActorSystem`` has its own ``IOManager``, which can be accessed by calling ``IOManager(system: ActorSystem)``. ``Actor``\s communicate with the ``IOManager`` with specific messages. The messages sent from an ``Actor`` to the ``IOManager`` are handled automatically when using certain methods and the messages sent from an ``IOManager`` are handled within an ``Actor``\'s ``receive`` method.
Connecting to a remote host:
.. code-block:: scala
val address = new InetSocketAddress("remotehost", 80)
val socket = IOManager(actorSystem).connect(address)
.. code-block:: scala
val socket = IOManager(actorSystem).connect("remotehost", 80)
Creating a server:
.. code-block:: scala
val address = new InetSocketAddress("localhost", 80)
val serverSocket = IOManager(actorSystem).listen(address)
.. code-block:: scala
val serverSocket = IOManager(actorSystem).listen("localhost", 80)
Receiving messages from the ``IOManager``:
.. code-block:: scala
def receive = {
case IO.Listening(server, address) =>
println("The server is listening on socket " + address)
case IO.Connected(socket, address) =>
println("Successfully connected to " + address)
case IO.NewClient(server) =>
println("New incoming connection on server")
val socket = server.accept()
println("Writing to new client socket")
socket.write(bytes)
println("Closing socket")
socket.close()
case IO.Read(socket, bytes) =>
println("Received incoming data from socket")
case IO.Closed(socket: IO.SocketHandle, cause) =>
println("Socket has closed, cause: " + cause)
case IO.Closed(server: IO.ServerHandle, cause) =>
println("Server socket has closed, cause: " + cause)
}
IO.Iteratee
^^^^^^^^^^^
Included with Akka's IO module is a basic implementation of ``Iteratee``\s. ``Iteratee``\s are an effective way of handling a stream of data without needing to wait for all the data to arrive. This is especially useful when dealing with non-blocking IO since we will usually receive data in chunks which may not include enough information to process, or it may contain much more data than we currently need.
This ``Iteratee`` implementation is much more basic than what is usually found. There is only support for ``ByteString`` input, and enumerators aren't used. The reason for this limited implementation is to reduce the amount of explicit type signatures needed and to keep things simple. It is important to note that Akka's ``Iteratee``\s are completely optional; incoming data can be handled in any way, including other ``Iteratee`` libraries.
``Iteratee``\s work by processing the data they are given and returning either the result (with any unused input) or a continuation if more input is needed. They are monadic, so methods like ``flatMap`` can be used to pass the result of one ``Iteratee`` to another.
The basic ``Iteratee``\s included in the IO module can all be found in the ScalaDoc under ``akka.actor.IO``, and some of them are covered in the example below.
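To give a flavor of this composition, here is a small sketch chaining two
``Iteratee``\s with a for comprehension (the line-based format is made up for
the example):

.. code-block:: scala

  import akka.actor.IO
  import akka.util.ByteString

  // consume input up to CRLF and decode it as a string
  val readLine: IO.Iteratee[String] =
    IO.takeUntil(ByteString("\r\n")) map (_.utf8String)

  // Iteratees are monadic, so they compose in for comprehensions
  val readTwoLines: IO.Iteratee[(String, String)] =
    for {
      first <- readLine
      second <- readLine
    } yield (first, second)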
Examples
--------
Http Server
^^^^^^^^^^^
This example will create a simple high performance HTTP server. We begin with our imports:
.. includecode:: code/docs/io/HTTPServer.scala
:include: imports
Some commonly used constants:
.. includecode:: code/docs/io/HTTPServer.scala
:include: constants
And case classes to hold the resulting request:
.. includecode:: code/docs/io/HTTPServer.scala
:include: request-class
Now for our first ``Iteratee``. There are 3 main sections of an HTTP request: the request line, the headers, and an optional body. The main request ``Iteratee`` handles each section separately:
.. includecode:: code/docs/io/HTTPServer.scala
:include: read-request
In the above code ``readRequest`` takes the results of 3 different ``Iteratee``\s (``readRequestLine``, ``readHeaders``, ``readBody``) and combines them into a single ``Request`` object. ``readRequestLine`` actually returns a tuple, so we extract its individual components. ``readBody`` depends on values contained within the header section, so we must pass those to the method.
The request line has 3 parts to it: the HTTP method, the requested URI, and the HTTP version. The parts are separated by a single space, and the entire request line ends with a ``CRLF``.
.. includecode:: code/docs/io/HTTPServer.scala
:include: read-request-line
Reading the request method is simple as it is a single string ending in a space. The simple ``Iteratee`` that performs this is ``IO.takeUntil(delimiter: ByteString): Iteratee[ByteString]``. It keeps consuming input until the specified delimiter is found. Reading the HTTP version is also a simple string that ends with a ``CRLF``.
The ``ascii`` method is a helper that takes a ``ByteString`` and parses it as a ``US-ASCII`` ``String``.
Reading the request URI is a bit more complicated because we want to parse the individual components of the URI instead of just returning a simple string:
.. includecode:: code/docs/io/HTTPServer.scala
:include: read-request-uri
For this example we are only interested in handling absolute paths. To detect if the URI is an absolute path we use ``IO.peek(length: Int): Iteratee[ByteString]``, which returns a ``ByteString`` of the requested length but doesn't actually consume the input. We peek at the next bit of input and see if it matches our ``PATH`` constant (defined above as ``ByteString("/")``). If it doesn't match we throw an error, but for a more robust solution we would want to handle other valid URIs.
Next we handle the path itself:
.. includecode:: code/docs/io/HTTPServer.scala
:include: read-path
The ``step`` method is a recursive method that takes a ``List`` of the accumulated path segments. It first checks if the remaining input starts with the ``PATH`` constant, and if it does, it drops that input, and returns the ``readUriPart`` ``Iteratee`` which has its result added to the path segment accumulator and the ``step`` method is run again.
If after reading in a path segment the next input does not start with a path, we reverse the accumulated segments and return it (dropping the last segment if it is blank).
Following the path we read in the query (if it exists):
.. includecode:: code/docs/io/HTTPServer.scala
:include: read-query
It is much simpler than reading the path since we don't parse the query, as there is no standard format for the query string.
Both the path and query used the ``readUriPart`` ``Iteratee``, which is next:
.. includecode:: code/docs/io/HTTPServer.scala
:include: read-uri-part
Here we have several ``Set``\s that contain valid characters pulled from the URI spec. The ``readUriPart`` method takes a ``Set`` of valid characters (already mapped to ``Byte``\s) and will continue to match characters until it reaches one that is not part of the ``Set``. If it is a percent encoded character then that is handled as a valid character and processing continues, or else we are done collecting this part of the URI.
Headers are next:
.. includecode:: code/docs/io/HTTPServer.scala
:include: read-headers
And if applicable, we read in the message body:
.. includecode:: code/docs/io/HTTPServer.scala
:include: read-body
Finally we get to the actual ``Actor``:
.. includecode:: code/docs/io/HTTPServer.scala
:include: actor
And its companion object:
.. includecode:: code/docs/io/HTTPServer.scala
:include: actor-companion
And the OKResponse:
.. includecode:: code/docs/io/HTTPServer.scala
:include: ok-response
A ``main`` method to start everything up:
.. includecode:: code/docs/io/HTTPServer.scala
:include: main

.. _logging-scala:
#################
Logging (Scala)
#################
How to Log
==========
Create a ``LoggingAdapter`` and use the ``error``, ``warning``, ``info``, or ``debug`` methods,
as illustrated in this example:
.. includecode:: code/docs/event/LoggingDocSpec.scala
:include: my-actor
For convenience you can mix in the ``log`` member into actors, instead of defining it as above.
.. code-block:: scala
class MyActor extends Actor with akka.actor.ActorLogging {
...
}
The second parameter to ``Logging`` is the source of this logging channel.
The source object is translated to a String according to the following rules:
* if it is an Actor or ActorRef, its path is used
* in case of a String it is used as is
* in case of a class an approximation of its simpleName
* and in all other cases a compile error occurs unless an implicit
:class:`LogSource[T]` is in scope for the type in question.
The log message may contain argument placeholders ``{}``, which will be
substituted if the log level is enabled. Giving more arguments than there are
placeholders results in a warning being appended to the log statement (i.e. on
the same line with the same severity). You may pass a Java array as the only
substitution argument to have its elements be treated individually:
.. includecode:: code/docs/event/LoggingDocSpec.scala#array
The Java :class:`Class` of the log source is also included in the generated
:class:`LogEvent`. In case of a simple string this is replaced with a “marker”
class :class:`akka.event.DummyClassForStringSources` in order to allow special
treatment of this case, e.g. in the SLF4J event listener which will then use
the string instead of the class name for looking up the logger instance to
use.
Auxiliary logging options
-------------------------
Akka has a couple of configuration options for very low level debugging that make most sense
for developers and not for operations.
You almost definitely need to have logging set to DEBUG to use any of the options below:
.. code-block:: ruby
akka {
loglevel = DEBUG
}
This config option is very good if you want to know what config settings are loaded by Akka:
.. code-block:: ruby
akka {
# Log the complete configuration at INFO level when the actor system is started.
# This is useful when you are uncertain of what configuration is used.
log-config-on-start = on
}
If you want very detailed logging of all user-level messages that are processed
by Actors that use akka.event.LoggingReceive:
.. code-block:: ruby
akka {
actor {
debug {
# enable function of LoggingReceive, which is to log any received message at
# DEBUG level
receive = on
}
}
}
If you want very detailed logging of all automatically received messages that are processed
by Actors:
.. code-block:: ruby
akka {
actor {
debug {
# enable DEBUG logging of all AutoReceiveMessages (Kill, PoisonPill and the like)
autoreceive = on
}
}
}
If you want very detailed logging of all lifecycle changes of Actors (restarts, deaths etc):
.. code-block:: ruby
akka {
actor {
debug {
# enable DEBUG logging of actor lifecycle changes
lifecycle = on
}
}
}
If you want very detailed logging of all events, transitions and timers of FSM Actors that extend LoggingFSM:
.. code-block:: ruby
akka {
actor {
debug {
# enable DEBUG logging of all LoggingFSMs for events, transitions and timers
fsm = on
}
}
}
If you want to monitor subscriptions (subscribe/unsubscribe) on the ActorSystem.eventStream:
.. code-block:: ruby
akka {
actor {
debug {
# enable DEBUG logging of subscription changes on the eventStream
event-stream = on
}
}
}
Auxiliary remote logging options
--------------------------------
If you want to see all messages that are sent through remoting at DEBUG log level:
(This is logged as they are sent by the transport layer, not by the Actor)
.. code-block:: ruby
akka {
remote {
# If this is "on", Akka will log all outbound messages at DEBUG level, if off then they are not logged
log-sent-messages = on
}
}
If you want to see all messages that are received through remoting at DEBUG log level:
(This is logged as they are received by the transport layer, not by any Actor)
.. code-block:: ruby
akka {
remote {
# If this is "on", Akka will log all inbound messages at DEBUG level, if off then they are not logged
log-received-messages = on
}
}
Also see the logging options for TestKit: :ref:`actor.logging-scala`.
Translating Log Source to String and Class
------------------------------------------
The rules for translating the source object to the source string and class
which are inserted into the :class:`LogEvent` during runtime are implemented
using implicit parameters and thus fully customizable: simply create your own
instance of :class:`LogSource[T]` and have it in scope when creating the
logger.
.. includecode:: code/docs/event/LoggingDocSpec.scala#my-source
This example creates a log source which mimics traditional usage of Java
loggers, which are based upon the originating object's class name as log
category. The override of :meth:`getClazz` is only included for demonstration
purposes as it contains exactly the default behavior.
.. note::
You may also create the string representation up front and pass that in as
the log source, but be aware that then the :class:`Class[_]` which will be
put in the :class:`LogEvent` is
:class:`akka.event.DummyClassForStringSources`.
The SLF4J event listener treats this case specially (using the actual string
to look up the logger instance to use instead of the class name), and you
might want to do this also in case you implement your own logging adapter.
Event Handler
=============
Logging is performed asynchronously through an event bus. You can configure
which event handlers should subscribe to the logging events. That is done
using the ``event-handlers`` element in the :ref:`configuration`. Here you can
also define the log level.
.. code-block:: ruby
akka {
# Event handlers to register at boot time (Logging$DefaultLogger logs to STDOUT)
event-handlers = ["akka.event.Logging$DefaultLogger"]
# Options: ERROR, WARNING, INFO, DEBUG
loglevel = "DEBUG"
}
The default one logs to STDOUT and is registered by default. It is not intended
to be used for production. There is also an :ref:`slf4j-scala`
event handler available in the 'akka-slf4j' module.
Example of creating a listener:
.. includecode:: code/docs/event/LoggingDocSpec.scala
:include: my-event-listener
.. _slf4j-scala:
SLF4J
=====
Akka provides an event handler for `SLF4J <http://www.slf4j.org/>`_. This module is available in the 'akka-slf4j.jar'.
It has a single dependency: the slf4j-api jar. At runtime you also need an SLF4J backend; we recommend `Logback <http://logback.qos.ch/>`_:
.. code-block:: scala
lazy val logback = "ch.qos.logback" % "logback-classic" % "1.0.4" % "runtime"
You need to enable the Slf4jEventHandler in the 'event-handlers' element in
the :ref:`configuration`. Here you can also define the log level of the event bus.
More fine grained log levels can be defined in the configuration of the SLF4J backend
(e.g. logback.xml).
.. code-block:: ruby
akka {
event-handlers = ["akka.event.slf4j.Slf4jEventHandler"]
loglevel = "DEBUG"
}
The SLF4J logger selected for each log event is chosen based on the
:class:`Class[_]` of the log source specified when creating the
:class:`LoggingAdapter`, unless that was given directly as a string in which
case that string is used (i.e. ``LoggerFactory.getLogger(c: Class[_])`` is used in
the first case and ``LoggerFactory.getLogger(s: String)`` in the second).
.. note::
Beware that the actor system's name is appended to a :class:`String` log
source if the LoggingAdapter was created giving an :class:`ActorSystem` to
the factory. If this is not intended, give a :class:`LoggingBus` instead as
shown below:
.. code-block:: scala
val log = Logging(system.eventStream, "my.nice.string")
Logging Thread and Akka Source in MDC
-------------------------------------
Since the logging is done asynchronously the thread in which the logging was performed is captured in
Mapped Diagnostic Context (MDC) with attribute name ``sourceThread``.
With Logback the thread name is available with ``%X{sourceThread}`` specifier within the pattern layout configuration::
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%date{ISO8601} %-5level %logger{36} %X{sourceThread} - %msg%n</pattern>
</encoder>
</appender>
.. note::
It will probably be a good idea to use the ``sourceThread`` MDC value also in
non-Akka parts of the application in order to have this property consistently
available in the logs.
Another helpful facility is that Akka captures the actor's address when
instantiating a logger within it, meaning that the full instance identification
is available for associating log messages e.g. with members of a router. This
information is available in the MDC with attribute name ``akkaSource``::
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
<encoder>
<pattern>%date{ISO8601} %-5level %logger{36} %X{akkaSource} - %msg%n</pattern>
</encoder>
</appender>
For more details on what this attribute contains—also for non-actors—please see
`How to Log`_.

.. _microkernel-scala:
Microkernel (Scala)
===================
The purpose of the Akka Microkernel is to offer a bundling mechanism so that you can distribute
an Akka application as a single payload, without the need to run in a Java Application Server or manually
having to create a launcher script.
The Akka Microkernel is included in the Akka download found at `downloads`_.
.. _downloads: http://akka.io/downloads
To run an application with the microkernel you need to create a Bootable class
that handles the startup and shutdown of the application. An example is included below.
Put your application jar in the ``deploy`` directory to have it automatically
loaded.
To start the kernel use the scripts in the ``bin`` directory, passing the boot
classes for your application.
There is a simple example of an application setup for running with the
microkernel included in the akka download. This can be run with the following
command (on a unix-based system):
.. code-block:: none
bin/akka sample.kernel.hello.HelloKernel
Use ``Ctrl-C`` to interrupt and exit the microkernel.
On a Windows machine you can also use the ``bin/akka.bat`` script.
The code for the Hello Kernel example (see the ``HelloKernel`` class for an example
of creating a Bootable):
.. includecode:: ../../../akka-samples/akka-sample-hello-kernel/src/main/scala/sample/kernel/hello/HelloKernel.scala
Distribution of microkernel application
---------------------------------------
To make a distribution package of the microkernel and your application the ``akka-sbt-plugin`` provides
``AkkaKernelPlugin``. It creates the directory structure, with jar files, configuration files and
start scripts.
To use the sbt plugin you define it in your ``project/plugins.sbt``:
.. includecode:: ../../../akka-sbt-plugin/sample/project/plugins.sbt
Then you add it to the settings of your ``project/Build.scala``. It is also important that you add the ``akka-kernel`` dependency.
This is an example of a complete sbt build file:
.. includecode:: ../../../akka-sbt-plugin/sample/project/Build.scala
Run the plugin with sbt::
> dist
> dist:clean
There are several settings that can be defined:
* ``outputDirectory`` - destination directory of the package, default ``target/dist``
* ``distJvmOptions`` - JVM parameters to be used in the start script
* ``configSourceDirs`` - Configuration files are copied from these directories, default ``src/config``, ``src/main/config``, ``src/main/resources``
* ``distMainClass`` - Kernel main class to use in start script
* ``libFilter`` - Filter of dependency jar files
* ``additionalLibs`` - Additional dependency jar files
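As an illustration, overriding a couple of these settings in
``project/Build.scala`` might look roughly like this sketch (key names and
scoping follow ``AkkaKernelPlugin``; the values are made up):

.. code-block:: scala

  import sbt._
  import akka.sbt.AkkaKernelPlugin
  import akka.sbt.AkkaKernelPlugin.{ Dist, distJvmOptions, outputDirectory }

  // hypothetical customization of the dist task
  lazy val customDistSettings = AkkaKernelPlugin.distSettings ++ Seq(
    distJvmOptions in Dist := "-Xms256M -Xmx1024M",
    outputDirectory in Dist := file("target/mydist")
  )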

.. _remoting-scala:
#################
Remoting (Scala)
#################
For an introduction of remoting capabilities of Akka please see :ref:`remoting`.
Preparing your ActorSystem for Remoting
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The Akka remoting is a separate jar file. Make sure that you have the following dependency in your project::
"com.typesafe.akka" %% "akka-remote" % "@version@" @crossString@
To enable remote capabilities in your Akka project you should, at a minimum, add the following changes
to your ``application.conf`` file::
akka {
actor {
provider = "akka.remote.RemoteActorRefProvider"
}
remote {
transport = "akka.remote.netty.NettyRemoteTransport"
netty {
hostname = "127.0.0.1"
port = 2552
}
}
}
As you can see in the example above there are four things you need to add to get started:
* Change provider from ``akka.actor.LocalActorRefProvider`` to ``akka.remote.RemoteActorRefProvider``
* Add the transport to use, in the example above the Netty-based ``akka.remote.netty.NettyRemoteTransport``
* Add host name - the machine you want to run the actor system on; this host
  name is exactly what is passed to remote systems in order to identify this
  system and consequently used for connecting back to this system if need be,
  hence set it to a reachable IP address or resolvable name in case you want to
  communicate across the network.
* Add port number - the port the actor system should listen on, set to 0 to have it chosen automatically
.. note::
The port number needs to be unique for each actor system on the same machine even if the actor
systems have different names. This is because each actor system has its own network subsystem
listening for connections and handling messages so as not to interfere with other actor systems.
.. _remoting-scala-configuration:
Remote Configuration
^^^^^^^^^^^^^^^^^^^^
The example above only illustrates the bare minimum of properties you have to add to enable remoting.
There are lots more properties related to remoting in Akka. We refer to the following
reference file for more information:
.. literalinclude:: ../../../akka-remote/src/main/resources/reference.conf
:language: none
Types of Remote Interaction
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Akka has two ways of using remoting:
* Lookup : used to look up an actor on a remote node with ``actorFor(path)``
* Creation : used to create an actor on a remote node with ``actorOf(Props(...), actorName)``
In the next sections the two alternatives are described in detail.
Looking up Remote Actors
^^^^^^^^^^^^^^^^^^^^^^^^
``actorFor(path)`` will obtain an ``ActorRef`` to an Actor on a remote node, e.g.::
val actor = context.actorFor("akka://actorSystemName@10.0.0.1:2552/user/actorName")
As you can see from the example above the following pattern is used to find an ``ActorRef`` on a remote node::
akka://<actor system>@<hostname>:<port>/<actor path>
Once you have obtained a reference to the actor you can interact with it the same way you would with a local actor, e.g.::
actor ! "Pretty awesome feature"
.. note::
For more details on how actor addresses and paths are formed and used, please refer to :ref:`addressing`.
Creating Actors Remotely
^^^^^^^^^^^^^^^^^^^^^^^^
If you want to use the creation functionality in Akka remoting you have to further amend the
``application.conf`` file in the following way (only showing deployment section)::
akka {
actor {
deployment {
/sampleActor {
remote = "akka://sampleActorSystem@127.0.0.1:2553"
}
}
}
}
The configuration above instructs Akka to react when an actor with path ``/sampleActor`` is created, i.e.
using ``system.actorOf(Props(...), "sampleActor")``. This specific actor will not be directly instantiated,
but instead the remote daemon of the remote system will be asked to create the actor,
which in this sample corresponds to ``sampleActorSystem@127.0.0.1:2553``.
Once you have configured the properties above you would do the following in code::
class SampleActor extends Actor { def receive = { case _ => println("Got something") } }
val actor = context.actorOf(Props[SampleActor], "sampleActor")
actor ! "Pretty slick"
``SampleActor`` has to be available to the runtimes using it, i.e. the classloader of the
actor systems has to have a JAR containing the class.
.. note::
In order to ensure serializability of ``Props`` when passing constructor
arguments to the actor being created, do not make the factory an inner class:
this will inherently capture a reference to its enclosing object, which in
most cases is not serializable. It is best to create a factory method in the
companion object of the actor's class.
.. warning::
*Caveat:* Remote deployment ties both systems together in a tight fashion,
where it may become impossible to shut down one system after the other has
become unreachable. This is due to a missing feature—which will be part of
the clustering support—that hooks up network failure detection with
DeathWatch. If you want to avoid this strong coupling, do not remote-deploy
but send ``Props`` to a remotely looked-up actor and have that create a
child, returning the resulting actor reference.
.. warning::
*Caveat:* Akka Remoting does not trigger Death Watch for lost connections.
Programmatic Remote Deployment
------------------------------
To allow dynamically deployed systems, it is also possible to include
deployment configuration in the :class:`Props` which are used to create an
actor: this information is the equivalent of a deployment section from the
configuration file, and if both are given, the external configuration takes
precedence.
With these imports:
.. includecode:: code/docs/remoting/RemoteDeploymentDocSpec.scala#import
and a remote address like this:
.. includecode:: code/docs/remoting/RemoteDeploymentDocSpec.scala#make-address
you can advise the system to create a child on that remote node like so:
.. includecode:: code/docs/remoting/RemoteDeploymentDocSpec.scala#deploy
Serialization
^^^^^^^^^^^^^
When using remoting for actors you must ensure that the ``props`` and ``messages`` used for
those actors are serializable. Failing to do so will cause the system to behave in an unintended way.
For more information please see :ref:`serialization-scala`
Routers with Remote Destinations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is absolutely feasible to combine remoting with :ref:`routing-scala`.
This is also done via configuration::
akka {
actor {
deployment {
/serviceA/aggregation {
router = "round-robin"
nr-of-instances = 10
target {
nodes = ["akka://app@10.0.0.2:2552", "akka://app@10.0.0.3:2552"]
}
}
}
}
}
This configuration setting will clone the actor “aggregation” 10 times and deploy it evenly distributed across
the two given target nodes.
Description of the Remoting Sample
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
There is a more extensive remote example that comes with the Akka distribution.
Please have a look here for more information: `Remote Sample
<https://github.com/akka/akka/tree/master/akka-samples/akka-sample-remote>`_
This sample demonstrates both remote deployment and look-up of remote actors.
First, let us have a look at the common setup for both scenarios (this is
``common.conf``):
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/resources/common.conf
This enables the remoting by installing the :class:`RemoteActorRefProvider` and
chooses the default remote transport. All other options will be set
specifically for each show case.
.. note::
Be sure to replace the default IP 127.0.0.1 with the real address the system
is reachable by if you deploy onto multiple machines!
.. _remote-lookup-sample-scala:
Remote Lookup
-------------
In order to look up a remote actor, it must be created first. For this
purpose, we configure an actor system to listen on port 2552 (this is a snippet
from ``application.conf``):
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/resources/application.conf
:include: calculator
Then the actor must be created. For all code which follows, assume these imports:
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/LookupApplication.scala
:include: imports
The actor doing the work will be this one:
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/CalculatorApplication.scala
:include: actor
and we start it within an actor system using the above configuration
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/CalculatorApplication.scala
:include: setup
With the service actor up and running, we may look it up from another actor
system, which will be configured to use port 2553 (this is a snippet from
``application.conf``).
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/resources/application.conf
:include: remotelookup
The actor which will query the calculator is quite simple, for demonstration purposes:
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/LookupApplication.scala
:include: actor
and it is created from an actor system using the aforementioned client’s configuration.
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/LookupApplication.scala
:include: setup
Requests which come in via ``doSomething`` will be sent to the client actor
along with the reference which was looked up earlier. Observe how the actor
system name used in ``actorFor`` matches the remote system’s name, as do the
IP and port number. Top-level actors are always created below the ``"/user"``
guardian, which supervises them.
Remote Deployment
-----------------
Creating remote actors instead of looking them up is not visible in the source
code, only in the configuration file. This section is used in this scenario
(this is a snippet from ``application.conf``):
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/resources/application.conf
:include: remotecreation
For all code which follows, assume these imports:
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/LookupApplication.scala
:include: imports
The client actor looks like the one in the previous example
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/CreationApplication.scala
:include: actor
but the setup uses only ``actorOf``:
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/CreationApplication.scala
:include: setup
Observe how the name of the server actor matches the deployment given in the
configuration file, which will transparently delegate the actor creation to the
remote node.
Remote Events
-------------
It is possible to listen to events that occur in Akka Remote and to subscribe/unsubscribe to these events;
you simply register as a listener to the types described below on the ``ActorSystem.eventStream``.
.. note::
To subscribe to any outbound-related events, subscribe to ``RemoteClientLifeCycleEvent``
To subscribe to any inbound-related events, subscribe to ``RemoteServerLifeCycleEvent``
To subscribe to any remote events, subscribe to ``RemoteLifeCycleEvent``
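For example, given a listener actor of your own which pattern-matches on these
event types, the subscription itself is a one-liner (a minimal sketch;
``listener`` is assumed to be an ``ActorRef`` you have created yourself):

.. code-block:: scala

   import akka.remote.RemoteLifeCycleEvent

   // `listener` is an ActorRef to your own event-handling actor.
   system.eventStream.subscribe(listener, classOf[RemoteLifeCycleEvent])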
By default an event listener is registered which logs all of the events
described below. This default was chosen to help setting up a system, but it is
quite common to switch this logging off once that phase of the project is
finished.
.. note::
In order to switch off the logging, set
``akka.remote.log-remote-lifecycle-events = off`` in your
``application.conf``.
To intercept when an outbound connection is disconnected, you listen to ``RemoteClientDisconnected`` which
holds the transport used (RemoteTransport) and the outbound address that was disconnected (Address).
To intercept when an outbound connection is connected, you listen to ``RemoteClientConnected`` which
holds the transport used (RemoteTransport) and the outbound address that was connected to (Address).
To intercept when an outbound client is started you listen to ``RemoteClientStarted``
which holds the transport used (RemoteTransport) and the outbound address that it is connected to (Address).
To intercept when an outbound client is shut down you listen to ``RemoteClientShutdown``
which holds the transport used (RemoteTransport) and the outbound address that it was connected to (Address).
For general outbound-related errors, that do not classify as any of the others, you can listen to ``RemoteClientError``,
which holds the cause (Throwable), the transport used (RemoteTransport) and the outbound address (Address).
To intercept when an inbound server is started (typically only once) you listen to ``RemoteServerStarted``
which holds the transport that it will use (RemoteTransport).
To intercept when an inbound server is shut down (typically only once) you listen to ``RemoteServerShutdown``
which holds the transport that it used (RemoteTransport).
To intercept when an inbound connection has been established you listen to ``RemoteServerClientConnected``
which holds the transport used (RemoteTransport) and optionally the address that connected (Option[Address]).
To intercept when an inbound connection has been disconnected you listen to ``RemoteServerClientDisconnected``
which holds the transport used (RemoteTransport) and optionally the address that disconnected (Option[Address]).
To intercept when an inbound remote client has been closed you listen to ``RemoteServerClientClosed``
which holds the transport used (RemoteTransport) and optionally the address of the remote client that was closed (Option[Address]).
Remote Security
^^^^^^^^^^^^^^^
Akka provides a couple of ways to enhance security between remote nodes (client/server):
* Untrusted Mode
* Secure Cookie Handshake
Untrusted Mode
--------------
You can enable untrusted mode to prevent clients from sending system messages to the server.
With untrusted mode turned on, the following messages are blocked:
* ``Create``
* ``Recreate``
* ``Suspend``
* ``Resume``
* ``Terminate``
* ``Supervise``
* ``ChildTerminated``
* ``Link``
* ``Unlink``
Here is how to turn it on in the config::
akka.remote.untrusted-mode = on
Secure Cookie Handshake
-----------------------
Akka remoting also allows you to specify a secure cookie that will be exchanged and ensured to be identical
in the connection handshake between the client and the server. If they are not identical then the client
will be refused connection to the server.
The secure cookie can be any kind of string, but the recommended approach is to generate a
cryptographically secure cookie using this script ``$AKKA_HOME/scripts/generate_config_with_secure_cookie.sh``
or from code using the ``akka.util.Crypt.generateSecureCookie()`` utility method.
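For example, generated from code it could look like this (a minimal sketch
using the utility mentioned above):

.. code-block:: scala

   import akka.util.Crypt

   // Print a cryptographically secure cookie string for pasting into the
   // configuration of both client and server.
   println(Crypt.generateSecureCookie)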
You have to ensure that both the connecting client and the server have the same secure cookie as well
as the ``require-cookie`` option turned on.
Here is an example config::
akka.remote.netty {
secure-cookie = "090A030E0F0A05010900000A0C0E0C0B03050D05"
require-cookie = on
}
SSL
---
SSL can be used for the remote transport by activating the ``akka.remote.netty.ssl``
configuration section. See description of the settings in the :ref:`remoting-scala-configuration`.
The SSL support is implemented with Java Secure Socket Extension; please consult the official
`Java Secure Socket Extension documentation <http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html>`_
and related resources for troubleshooting.
.. note::
   When using SHA1PRNG on Linux it is recommended to specify
   ``-Djava.security.egd=file:/dev/./urandom`` as an argument to the JVM to
   prevent blocking. It is NOT as secure because it reuses the seed. Use
   ``/dev/./urandom``, not ``/dev/urandom``, as the latter does not work
   according to `Bug ID: 6202721 <http://bugs.sun.com/view_bug.do?bug_id=6202721>`_.
.. _routing-scala:
Routing (Scala)
===============
A router is an actor that routes the messages sent to it to its underlying
actors, called 'routees'.
Akka comes with some defined routers out of the box, but as you will see in this chapter it
is really easy to create your own. The routers shipped with Akka are:
* ``akka.routing.RoundRobinRouter``
* ``akka.routing.RandomRouter``
* ``akka.routing.SmallestMailboxRouter``
* ``akka.routing.BroadcastRouter``
* ``akka.routing.ScatterGatherFirstCompletedRouter``
* ``akka.routing.ConsistentHashingRouter``
Routers In Action
^^^^^^^^^^^^^^^^^
This is an example of how to create a router that is defined in configuration:
.. includecode:: code/docs/routing/RouterViaConfigDocSpec.scala#config-round-robin
.. includecode:: code/docs/routing/RouterViaConfigDocSpec.scala#configurableRouting
This is an example of how to programmatically create a router and set the number of routees it should create:
.. includecode:: code/docs/routing/RouterViaProgramExample.scala#programmaticRoutingNrOfInstances
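For reference, such a programmatic setup boils down to something like this
(a sketch; ``ExampleActor`` stands in for your own actor class):

.. code-block:: scala

   import akka.actor.Props
   import akka.routing.RoundRobinRouter

   // The router creates and supervises 5 routees of type ExampleActor.
   val router = system.actorOf(
     Props[ExampleActor].withRouter(RoundRobinRouter(nrOfInstances = 5)))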
You can also give the router already created routees as in:
.. includecode:: code/docs/routing/RouterViaProgramExample.scala#programmaticRoutingRoutees
.. note::
No actor factory or class needs to be provided in this
case, as the ``Router`` will not create any children on its own (which is not
true anymore when using a resizer). The routees can also be specified by giving
their path strings.
When you create a router programmatically you define the number of routees *or* you pass already created routees to it.
If you send both parameters to the router *only* the latter will be used, i.e. ``nrOfInstances`` is disregarded.
*It is also worth pointing out that if you define the ``router`` in the
configuration file then this value will be used instead of any programmatically
sent parameters. The decision whether to create a router at all, on the other
hand, must be taken within the code, i.e. you cannot make something a router by
external configuration alone (see below for details).*
Once you have the router actor, you simply send messages to it as you would to any other actor:
.. code-block:: scala
router ! MyMsg
The router will forward the message to its routees according to its routing policy.
Remotely Deploying Routees
**************************
In addition to being able to supply looked-up remote actors as routees, you can
make the router deploy its created children on a set of remote hosts; this will
be done in round-robin fashion. In order to do that, wrap the router
configuration in a :class:`RemoteRouterConfig`, attaching the remote addresses of
the nodes to deploy to. Naturally, this requires you to include the
``akka-remote`` module on your classpath:
.. includecode:: code/docs/routing/RouterViaProgramExample.scala#remoteRoutees
How Routing is Designed within Akka
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Routers behave like single actors, but they should also not hinder scalability.
This apparent contradiction is solved by making routers be represented by a
special :class:`RoutedActorRef` (implementation detail, what the user gets is
an :class:`ActorRef` as usual) which dispatches incoming messages destined
for the routees without actually invoking the router actors behavior (and thus
avoiding its mailbox; the single router actors task is to manage all aspects
related to the lifecycle of the routees). This means that the code which decides
which route to take is invoked concurrently from all possible senders and hence
must be thread-safe; it cannot live the simple and happy life of code within an
actor.
There is one part in the above paragraph which warrants some more background
explanation: Why does a router need a “head” which is the actual parent of all the
routees? The initial design tried to side-step this issue, but location
transparency as well as mandatory parental supervision required a redesign.
Each of the actors which the router spawns must have its unique identity, which
translates into a unique actor path. Since the router has only one given name
in its parents context, another level in the name space is needed, which
according to the addressing semantics implies the existence of an actor with
the routers name. This is not only necessary for the internal messaging
involved in creating, restarting and terminating actors, it is also needed when
the pooled actors need to converse with other actors and receive replies in a
deterministic fashion. Since each actor knows its own external representation
as well as that of its parent, the routees decide where replies should be sent
when reacting to a message:
.. includecode:: code/docs/actor/ActorDocSpec.scala#reply-with-sender
.. includecode:: code/docs/actor/ActorDocSpec.scala#reply-without-sender
It is apparent now why routing needs to be enabled in code rather than being
possible to “bolt on” later: whether or not an actor is routed means a change
to the actor hierarchy, changing the actor paths of all children of the router.
The routees especially do need to know that they are routed to in order to
choose the sender reference for any messages they dispatch as shown above.
Routers vs. Supervision
^^^^^^^^^^^^^^^^^^^^^^^
As explained in the previous section, routers create new actor instances as
children of the “head” router, which is therefore also their supervisor. The
supervisor strategy of this actor can be configured by means of the
:meth:`RouterConfig.supervisorStrategy` property, which is supported for all
built-in router types. It defaults to “always escalate”, which leads to the
application of the routers parents supervision directive to all children of
the router uniformly (i.e. not only the one which failed). It should be
mentioned that the router overrides the default behavior of terminating all
children upon restart, which means that a restart—while re-creating them—does
not have an effect on the number of actors in the pool.
Setting the strategy is easily done:
.. includecode:: ../../../akka-actor-tests/src/test/scala/akka/routing/RoutingSpec.scala#supervision
:include: supervision
:exclude: custom-strategy
Another potentially useful approach is to give the router the same strategy as
its parent, which effectively treats all actors in the pool as if they were
direct children of their grand-parent instead.
.. note::
If the child of a router terminates, the router will not automatically spawn
a new child. In the event that all children of a router have terminated the
router will terminate itself.
Router usage
^^^^^^^^^^^^
In this section we will describe how to use the different router types.
First we need to create some actors that will be used in the examples:
.. includecode:: code/docs/routing/RouterTypeExample.scala#printlnActor
and
.. includecode:: code/docs/routing/RouterTypeExample.scala#fibonacciActor
RoundRobinRouter
****************
Routes in a `round-robin <http://en.wikipedia.org/wiki/Round-robin>`_ fashion to its routees.
Code example:
.. includecode:: code/docs/routing/RouterTypeExample.scala#roundRobinRouter
When run you should see a similar output to this:
.. code-block:: scala
Received message '1' in actor $b
Received message '2' in actor $c
Received message '3' in actor $d
Received message '6' in actor $b
Received message '4' in actor $e
Received message '8' in actor $d
Received message '5' in actor $f
Received message '9' in actor $e
Received message '10' in actor $f
Received message '7' in actor $c
If you look closely at the output you can see that each of the routees received two messages,
which is exactly what you would expect from a round-robin router.
(The name of an actor is automatically created in the format ``$letter`` unless you specify it -
hence the names printed above.)
This is an example of how to define a round-robin router in configuration:
.. includecode:: code/docs/routing/RouterViaConfigDocSpec.scala#config-round-robin
RandomRouter
************
As the name implies this router type selects one of its routees randomly and forwards
the message it receives to this routee.
This procedure will happen each time it receives a message.
Code example:
.. includecode:: code/docs/routing/RouterTypeExample.scala#randomRouter
When run you should see a similar output to this:
.. code-block:: scala
Received message '1' in actor $e
Received message '2' in actor $c
Received message '4' in actor $b
Received message '5' in actor $d
Received message '3' in actor $e
Received message '6' in actor $c
Received message '7' in actor $d
Received message '8' in actor $e
Received message '9' in actor $d
Received message '10' in actor $d
The result from running the random router should be different, or at least random, every time you run it.
Try to run it a couple of times to verify its behavior if you don't trust us.
This is an example of how to define a random router in configuration:
.. includecode:: code/docs/routing/RouterViaConfigDocSpec.scala#config-random
SmallestMailboxRouter
*********************
A router that tries to send to the non-suspended routee with the fewest messages in its mailbox.
The selection is done in this order:
* pick any idle routee (not processing message) with empty mailbox
* pick any routee with empty mailbox
* pick routee with fewest pending messages in mailbox
* pick any remote routee, remote actors are considered lowest priority,
  since their mailbox size is unknown
Code example:
.. includecode:: code/docs/routing/RouterTypeExample.scala#smallestMailboxRouter
This is an example of how to define a smallest-mailbox router in configuration:
.. includecode:: code/docs/routing/RouterViaConfigDocSpec.scala#config-smallest-mailbox
BroadcastRouter
***************
A broadcast router forwards the message it receives to *all* its routees.
Code example:
.. includecode:: code/docs/routing/RouterTypeExample.scala#broadcastRouter
When run you should see a similar output to this:
.. code-block:: scala
Received message 'this is a broadcast message' in actor $f
Received message 'this is a broadcast message' in actor $d
Received message 'this is a broadcast message' in actor $e
Received message 'this is a broadcast message' in actor $c
Received message 'this is a broadcast message' in actor $b
As you can see above, each of the routees, five in total, received the broadcast message.
This is an example of how to define a broadcast router in configuration:
.. includecode:: code/docs/routing/RouterViaConfigDocSpec.scala#config-broadcast
ScatterGatherFirstCompletedRouter
*********************************
The ScatterGatherFirstCompletedRouter will send the message on to all its routees as a future.
It then waits for the first result it gets back. This result will be sent back to the original sender.
Code example:
.. includecode:: code/docs/routing/RouterTypeExample.scala#scatterGatherFirstCompletedRouter
When run you should see this:
.. code-block:: scala
The result of calculating Fibonacci for 10 is 55
From the output above you can't really see that all the routees performed the calculation, but they did!
The result you see is from the first routee that returned its calculation to the router.
This is an example of how to define a scatter-gather router in configuration:
.. includecode:: code/docs/routing/RouterViaConfigDocSpec.scala#config-scatter-gather
ConsistentHashingRouter
***********************
The ConsistentHashingRouter uses `consistent hashing <http://en.wikipedia.org/wiki/Consistent_hashing>`_
to select a connection based on the sent message. This
`article <http://weblogs.java.net/blog/tomwhite/archive/2007/11/consistent_hash.html>`_ gives good
insight into how consistent hashing is implemented.
There are three ways to define what data to use for the consistent hash key.
* You can define ``hashMapping`` of the router to map incoming
messages to their consistent hash key. This makes the decision
transparent for the sender.
* The messages may implement ``akka.routing.ConsistentHashingRouter.ConsistentHashable``.
The key is part of the message and it's convenient to define it together
with the message definition.
* The messages can be wrapped in an ``akka.routing.ConsistentHashingRouter.ConsistentHashableEnvelope``
  to define what data to use for the consistent hash key. The sender knows
  the key to use.
These ways to define the consistent hash key can be used together and at
the same time for one router. The ``hashMapping`` is tried first.
Code example:
.. includecode:: code/docs/routing/ConsistentHashingRouterDocSpec.scala#cache-actor
.. includecode:: code/docs/routing/ConsistentHashingRouterDocSpec.scala#consistent-hashing-router
In the above example you see that the ``Get`` message implements ``ConsistentHashable`` itself,
while the ``Entry`` message is wrapped in a ``ConsistentHashableEnvelope``. The ``Evict``
message is handled by the ``hashMapping`` partial function.
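To illustrate the envelope variant, a send could look like this (a sketch
reusing the ``cache`` router and ``Entry`` message from the example above):

.. code-block:: scala

   import akka.routing.ConsistentHashingRouter.ConsistentHashableEnvelope

   // The sender picks the hash key; all messages with the same key are
   // routed to the same routee.
   cache ! ConsistentHashableEnvelope(
     message = Entry("hello", "HELLO"), hashKey = "hello")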
This is an example of how to define a consistent-hashing router in configuration:
.. includecode:: code/docs/routing/RouterViaConfigDocSpec.scala#config-consistent-hashing
Broadcast Messages
^^^^^^^^^^^^^^^^^^
There is a special type of message that will be sent to all routees regardless of the router’s routing policy.
This message is called ``Broadcast`` and is used in the following manner:
.. code-block:: scala
router ! Broadcast("Watch out for Davy Jones' locker")
Only the actual message is forwarded to the routees, i.e. "Watch out for Davy Jones' locker" in the example above.
It is up to the routee implementation whether to handle the broadcast message or not.
Dynamically Resizable Routers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
All routers can be used with a fixed number of routees or with a resize strategy to adjust the number
of routees dynamically.
This is an example of how to create a resizable router that is defined in configuration:
.. includecode:: code/docs/routing/RouterViaConfigDocSpec.scala#config-resize
.. includecode:: code/docs/routing/RouterViaConfigDocSpec.scala#configurableRoutingWithResizer
Several more configuration options are available and described in ``akka.actor.deployment.default.resizer``
section of the reference :ref:`configuration`.
This is an example of how to programmatically create a resizable router:
.. includecode:: code/docs/routing/RouterViaProgramExample.scala#programmaticRoutingWithResizer
*It is also worth pointing out that if you define the ``router`` in the configuration file then this value
will be used instead of any programmatically sent parameters.*
.. note::
Resizing is triggered by sending messages to the actor pool, but it is not
completed synchronously; instead a message is sent to the “head”
:class:`Router` to perform the size change. Thus you cannot rely on resizing
to instantaneously create new workers when all others are busy, because the
message just sent will be queued to the mailbox of a busy actor. To remedy
this, configure the pool to use a balancing dispatcher, see `Configuring
Dispatchers`_ for more information.
Custom Router
^^^^^^^^^^^^^
You can also create your own router should you not find any of the ones provided by Akka sufficient for your needs.
In order to roll your own router you have to fulfill certain criteria which are explained in this section.
The router created in this example is a simple vote counter. It will route the votes to specific vote counter actors.
In this case we only have two parties, the Republicans and the Democrats. We would like a router that forwards all
Democrat-related messages to the Democrat actor and all Republican-related messages to the Republican actor.
We begin with defining the class:
.. includecode:: ../../../akka-actor-tests/src/test/scala/akka/routing/RoutingSpec.scala#crRouter
:exclude: crRoute
The next step is to implement the ``createRoute`` method in the class just defined:
.. includecode:: ../../../akka-actor-tests/src/test/scala/akka/routing/RoutingSpec.scala#crRoute
As you can see above, we start off by creating the routees and putting them in a collection.
Make sure that you do not forget to implement the line below, as it is *really* important.
It registers the routees internally, and failing to call this method will
cause an ``ActorInitializationException`` to be thrown when the router is used.
Therefore always make sure to do the following in your custom router:
.. includecode:: ../../../akka-actor-tests/src/test/scala/akka/routing/RoutingSpec.scala#crRegisterRoutees
The routing logic is where your magic sauce is applied. In our example it inspects the message types
and forwards to the correct routee based on this:
.. includecode:: ../../../akka-actor-tests/src/test/scala/akka/routing/RoutingSpec.scala#crRoutingLogic
As you can see above what's returned in the partial function is a ``List`` of ``Destination(sender, routee)``.
The sender is the "parent" the routee should see; changing this could be useful if you, for example, want
another actor than the original sender to mediate the result of the routee (if there is a result).
For more information about how to alter the original sender we refer to the source code of
`ScatterGatherFirstCompletedRouter <https://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/routing/Routing.scala#L375>`_
All in all the custom router looks like this:
.. includecode:: ../../../akka-actor-tests/src/test/scala/akka/routing/RoutingSpec.scala#CustomRouter
If you are interested in how to use the VoteCountRouter you can have a look at the test class
`RoutingSpec <https://github.com/akka/akka/blob/master/akka-actor-tests/src/test/scala/akka/routing/RoutingSpec.scala>`_
.. caution::
   When creating a custom router the resulting RoutedActorRef optimizes the
   sending of the message so that it does NOT go through the router’s mailbox
   unless the route returns an empty recipient set.

   This means that the ``route`` function defined in the ``RouterConfig``
   or the function returned from ``CreateCustomRoute`` in
   ``CustomRouterConfig`` is evaluated concurrently without protection by
   the RoutedActorRef: either provide a reentrant (i.e. pure) implementation
   or do the locking yourself!
Configured Custom Router
************************
It is possible to define configuration properties for custom routers. In the ``router`` property of the deployment
configuration you define the fully qualified class name of the router class. The router class must extend
``akka.routing.RouterConfig`` and have a constructor with a ``com.typesafe.config.Config`` parameter.
The deployment section of the configuration is passed to the constructor.
Custom Resizer
**************
A router with a dynamically resizable number of routees is implemented by providing an ``akka.routing.Resizer``
in the ``resizer`` method of the ``RouterConfig``. See ``akka.routing.DefaultResizer`` for inspiration
on how to write your own resize strategy.
Configuring Dispatchers
^^^^^^^^^^^^^^^^^^^^^^^
The dispatcher for created children of the router will be taken from
:class:`Props` as described in :ref:`dispatchers-scala`. For a dynamic pool it
makes sense to configure the :class:`BalancingDispatcher` if the precise
routing is not so important (i.e. no consistent hashing or round-robin is
required); this enables newly created routees to pick up work immediately by
stealing it from their siblings.
.. note::
If you provide a collection of actors to route to, then they will still use the same dispatcher
that was configured for them in their ``Props``, it is not possible to change an actors dispatcher
after it has been created.
The “head” router cannot always run on the same dispatcher, because it
does not process the same type of messages, hence this special actor does
not use the dispatcher configured in :class:`Props`, but takes the
``routerDispatcher`` from the :class:`RouterConfig` instead, which defaults to
the actor systems default dispatcher. All standard routers allow setting this
property in their constructor or factory method, custom routers have to
implement the method in a suitable way.
.. includecode:: code/docs/routing/RouterDocSpec.scala#dispatchers
.. note::
It is not allowed to configure the ``routerDispatcher`` to be a
:class:`BalancingDispatcher` since the messages meant for the special
router actor cannot be processed by any other actor.
At first glance there seems to be an overlap between the
:class:`BalancingDispatcher` and Routers, but they complement each other.
The balancing dispatcher is in charge of running the actors while the routers
are in charge of deciding which message goes where. A router can also have
children that span multiple actor systems, even remote ones, but a dispatcher
lives inside a single actor system.
When using a :class:`RoundRobinRouter` with a :class:`BalancingDispatcher`
there are some configuration settings to take into account.
- There can only be ``nr-of-instances`` messages being processed at the same
time no matter how many threads are configured for the
:class:`BalancingDispatcher`.
- Having ``throughput`` set to a low number makes no sense since you will only
be handing off to another actor that processes the same :class:`MailBox`
as yourself, which can be costly. Either the message just got into the
mailbox and you can receive it as well as anybody else, or everybody else
is busy and you are the only one available to receive the message.
- Resizing the number of routees only introduces inertia, since resizing
  is performed at specified intervals, but work stealing is instantaneous.
.. _scheduler-scala:
###################
Scheduler (Scala)
###################
Sometimes the need for making things happen in the future arises, and where do you look then?
Look no further than ``ActorSystem``! There you find the :meth:`scheduler` method that returns an instance
of ``akka.actor.Scheduler``; this instance is unique per ``ActorSystem`` and is used internally for scheduling things
to happen at specific points in time. Please note that the scheduled tasks are executed by the default
``MessageDispatcher`` of the ``ActorSystem``.
You can schedule sending of messages to actors and execution of tasks (functions or Runnable).
You will get a ``Cancellable`` back that you can call :meth:`cancel` on to cancel the execution of the
scheduled operation.
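As a quick illustration, scheduling a one-off task could look like this (a
minimal sketch; it assumes the current API in which an implicit
``ExecutionContext`` is needed to run the task):

.. code-block:: scala

   import scala.concurrent.duration._
   import system.dispatcher // implicit ExecutionContext that runs the task

   // Execute the block once, 50 milliseconds from now.
   val cancellable = system.scheduler.scheduleOnce(50.milliseconds) {
     println("tick")
   }
   // The Cancellable can abort the execution as long as it has not run yet.
   cancellable.cancel()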
.. warning::
The default implementation of ``Scheduler`` used by Akka is based on the Netty ``HashedWheelTimer``.
It does not execute tasks at the exact time, but on every tick, it will run everything that is overdue.
The accuracy of the default Scheduler can be modified by the "ticks-per-wheel" and "tick-duration" configuration
properties. For more information, see: `HashedWheelTimers <http://www.cse.wustl.edu/~cdgill/courses/cs6874/TimingWheels.ppt>`_.
Some examples
-------------
.. includecode:: code/docs/actor/SchedulerDocSpec.scala
:include: imports1,schedule-one-off-message
.. includecode:: code/docs/actor/SchedulerDocSpec.scala
:include: schedule-one-off-thunk
.. includecode:: code/docs/actor/SchedulerDocSpec.scala
:include: schedule-recurring
From ``akka.actor.ActorSystem``
-------------------------------
.. includecode:: ../../../akka-actor/src/main/scala/akka/actor/ActorSystem.scala
:include: scheduler
The Scheduler interface
-----------------------
.. includecode:: ../../../akka-actor/src/main/scala/akka/actor/Scheduler.scala
:include: scheduler
The Cancellable interface
-------------------------
This allows you to ``cancel`` something that has been scheduled for execution.
.. warning::
   This does not abort the execution of the task if it has already been started.
.. includecode:: ../../../akka-actor/src/main/scala/akka/actor/Scheduler.scala
:include: cancellable
.. _serialization-scala:
######################
Serialization (Scala)
######################
Akka has a built-in Extension for serialization,
and it is both possible to use the built-in serializers and to write your own.
The serialization mechanism is both used by Akka internally to serialize messages,
and available for ad-hoc serialization of whatever you might need it for.
Usage
=====
Configuration
-------------
For Akka to know which ``Serializer`` to use for what, you need to edit your :ref:`configuration`;
in the "akka.actor.serializers"-section you bind names to implementations of the ``akka.serialization.Serializer``
you wish to use, like this:
.. includecode:: code/docs/serialization/SerializationDocSpec.scala#serialize-serializers-config
After you've bound names to different implementations of ``Serializer`` you need to wire which classes
should be serialized using which ``Serializer``; this is done in the "akka.actor.serialization-bindings"-section:
.. includecode:: code/docs/serialization/SerializationDocSpec.scala#serialization-bindings-config
You only need to specify the name of an interface or abstract base class of the
messages. In case of ambiguity, i.e. the message implements several of the
configured classes, the most specific configured class will be used, i.e. the
one of which all other candidates are superclasses. If this condition cannot be
met, because e.g. ``java.io.Serializable`` and ``MyOwnSerializable`` both apply
and neither is a subtype of the other, a warning will be issued.
Akka provides serializers for :class:`java.io.Serializable` and `protobuf
<http://code.google.com/p/protobuf/>`_
:class:`com.google.protobuf.GeneratedMessage` by default (the latter only if
depending on the akka-remote module), so normally you don't need to add
configuration for that; since :class:`com.google.protobuf.GeneratedMessage`
implements :class:`java.io.Serializable`, protobuf messages will always be
serialized using the protobuf protocol unless specifically overridden. In order
to disable a default serializer, map its marker type to “none”::
akka.actor.serialization-bindings {
"java.io.Serializable" = none
}
Verification
------------
If you want to verify that your messages are serializable you can enable the following config option:
.. includecode:: code/docs/serialization/SerializationDocSpec.scala#serialize-messages-config
.. warning::
   We only recommend having this config option turned on when you're running tests.
   It is completely pointless to have it turned on in other scenarios.
If you want to verify that your ``Props`` are serializable you can enable the following config option:
.. includecode:: code/docs/serialization/SerializationDocSpec.scala#serialize-creators-config
.. warning::
   We only recommend having this config option turned on when you're running tests.
   It is completely pointless to have it turned on in other scenarios.
Programmatic
------------
If you want to programmatically serialize/deserialize using Akka Serialization,
here are some examples:
.. includecode:: code/docs/serialization/SerializationDocSpec.scala
:include: imports,programmatic
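In essence, those examples perform a round trip along these lines (a minimal
sketch):

.. code-block:: scala

   import akka.actor.ActorSystem
   import akka.serialization.SerializationExtension

   val system = ActorSystem("example")
   val serialization = SerializationExtension(system)

   // Find the serializer configured for this object's class, then go
   // from object to bytes and back again.
   val original = "woohoo"
   val serializer = serialization.findSerializerFor(original)
   val bytes = serializer.toBinary(original)
   val back = serializer.fromBinary(bytes, manifest = None)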
For more information, have a look at the ``ScalaDoc`` for ``akka.serialization._``
Customization
=============
So, let's say that you want to create your own ``Serializer``; you saw the
``docs.serialization.MyOwnSerializer`` in the config example above.
Creating new Serializers
------------------------
First you need to create a class definition of your ``Serializer`` like so:
.. includecode:: code/docs/serialization/SerializationDocSpec.scala
:include: imports,my-own-serializer
:exclude: ...
Then you only need to fill in the blanks, bind it to a name in your :ref:`configuration` and then
list which classes should be serialized using it.
Serializing ActorRefs
---------------------
All ActorRefs are serializable using JavaSerializer, but in case you are writing your own serializer,
you might want to know how to serialize and deserialize them properly. Here's the magic incantation:
.. includecode:: code/docs/serialization/SerializationDocSpec.scala
:include: imports,actorref-serializer
.. note::
``ActorPath.toStringWithAddress`` only differs from ``toString`` if the
address does not already have ``host`` and ``port`` components, i.e. it only
inserts address information for local addresses.
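For orientation, the incantation inside your serializer's ``toBinary`` follows
this pattern (a sketch of the same logic as the linked snippet):

.. code-block:: scala

   import akka.actor.ActorRef
   import akka.serialization.Serialization

   // Render the ActorRef as a string, including the system's address when
   // serializing in the context of a remote transport.
   def serializeActorRef(ref: ActorRef): String =
     Serialization.currentTransportAddress.value match {
       case null    => ref.path.toString
       case address => ref.path.toStringWithAddress(address)
     }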
This assumes that serialization happens in the context of sending a message
through the remote transport. There are other uses of serialization, though,
e.g. storing actor references outside of an actor application (database,
durable mailbox, etc.). In this case, it is important to keep in mind that the
address part of an actors path determines how that actor is communicated with.
Storing a local actor path might be the right choice if the retrieval happens
in the same logical context, but it is not enough when deserializing it on a
different network host: for that it would need to include the systems remote
transport address. An actor system is not limited to having just one remote
transport per se, which makes this question a bit more interesting.
In the general case, the local address to be used depends on the type of remote
address which shall be the recipient of the serialized information. Use
:meth:`ActorRefProvider.getExternalAddressFor(remoteAddr)` to query the system
for the appropriate address to use when sending to ``remoteAddr``:
.. includecode:: code/docs/serialization/SerializationDocSpec.scala
:include: external-address
This requires that you know at least which type of address will be supported by
the system which will deserialize the resulting actor reference; if you have no
concrete address handy you can create a dummy one for the right protocol using
``Address(protocol, "", "", 0)`` (assuming that the actual transport used is as
lenient as Akkas RemoteActorRefProvider).
There is a possible simplification available if you are just using the default
:class:`NettyRemoteTransport` with the :class:`RemoteActorRefProvider`, which is
enabled by the fact that this combination has just a single remote address.
This approach relies on internal API, which means that it is not guaranteed to
be supported in future versions. To make this caveat more obvious, some bridge
code in the ``akka`` package is required to make it work:
.. includecode:: code/docs/serialization/SerializationDocSpec.scala
:include: extract-transport
And with this, the address extraction goes like this:
.. includecode:: code/docs/serialization/SerializationDocSpec.scala
:include: external-address-default
This solution has to be adapted once other providers are used (like the planned
extensions for clustering).
Deep serialization of Actors
----------------------------
The current recommended approach to do deep serialization of internal actor state is to use Event Sourcing;
for more reading on the topic, see these examples:
`Martin Krasser on EventSourcing Part1 <http://krasserm.blogspot.com/2011/11/building-event-sourced-web-application.html>`_
`Martin Krasser on EventSourcing Part2 <http://krasserm.blogspot.com/2012/01/building-event-sourced-web-application.html>`_
.. note::
Built-in API support for persisting Actors will come in a later release, see the roadmap for more info:
`Akka 2.0 roadmap <https://docs.google.com/a/typesafe.com/document/d/18W9-fKs55wiFNjXL9q50PYOnR7-nnsImzJqHOPPbM4E>`_
A Word About Java Serialization
===============================
When using Java serialization without employing the :class:`JavaSerializer` for
the task, you must make sure to supply a valid :class:`ExtendedActorSystem` in
the dynamic variable ``JavaSerializer.currentSystem``. This is used when
reading in the representation of an :class:`ActorRef` for turning the string
representation into a real reference. :class:`DynamicVariable` is a
thread-local variable, so be sure to have it set while deserializing anything
which might contain actor references.
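In code this could look like the following (a sketch; ``extendedSystem``
stands for your :class:`ExtendedActorSystem` instance):

.. code-block:: scala

   import akka.serialization.JavaSerializer

   // Make the system available to ActorRef deserialization for the
   // duration of the given block.
   JavaSerializer.currentSystem.withValue(extendedSystem) {
     // run your ObjectInputStream-based deserialization here
   }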
External Akka Serializers
=========================
`Akka-protostuff by Roman Levenstein <https://github.com/romix/akka-protostuff-serialization>`_
`Akka-quickser by Roman Levenstein <https://github.com/romix/akka-quickser-serialization>`_
`Akka-kryo by Roman Levenstein <https://github.com/romix/akka-kryo-serialization>`_
.. _stm-scala:
#######################################
Software Transactional Memory (Scala)
#######################################
Overview of STM
===============
An `STM <http://en.wikipedia.org/wiki/Software_transactional_memory>`_ turns the
Java heap into a transactional data set with begin/commit/rollback
semantics, very much like a regular database. It implements the first three
letters in `ACID`_; ACI:
* Atomic
* Consistent
* Isolated
.. _ACID: http://en.wikipedia.org/wiki/ACID
Generally, the STM is not needed very often when working with Akka. Some
use-cases (that we can think of) are:
- When you really need composable message flows across many actors updating
their **internal local** state but need them to do that atomically in one big
transaction. Might not be often, but when you do need this then you are
screwed without it.
- When you want to share a datastructure across actors.
The use of STM in Akka is inspired by the concepts and views in `Clojure`_\'s
STM. Please take the time to read `this excellent document`_ about state in
clojure and view `this presentation`_ by Rich Hickey (the genius behind
Clojure).
.. _Clojure: http://clojure.org/
.. _this excellent document: http://clojure.org/state
.. _this presentation: http://www.infoq.com/presentations/Value-Identity-State-Rich-Hickey
Scala STM
=========
The STM supported in Akka is `ScalaSTM`_, which will soon be included in the
Scala standard library.
.. _ScalaSTM: http://nbronson.github.com/scala-stm/
The STM is based on Transactional References (referred to as Refs). Refs are
memory cells, holding an (arbitrary) immutable value, that implement CAS
(Compare-And-Swap) semantics and are managed and enforced by the STM for
coordinated changes across many Refs.
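A minimal ScalaSTM example of such a coordinated change across two Refs:

.. code-block:: scala

   import scala.concurrent.stm._

   val from = Ref(100)
   val to   = Ref(0)

   // Both updates commit together or not at all.
   atomic { implicit txn =>
     from() = from() - 50
     to()   = to() + 50
   }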
Persistent Datastructures
=========================
Working with immutable collections can sometimes give bad performance due to
extensive copying. Scala provides so-called persistent datastructures which
make working with immutable collections fast. They are immutable but with
constant time access and modification. They use structural sharing, and an insert
or update does not ruin the old structure, hence "persistent". This makes working
with immutable composite types fast. The persistent datastructures currently
consist of a `Map`_ and `Vector`_.
.. _Map: http://www.scala-lang.org/api/current/index.html#scala.collection.immutable.Map
.. _Vector: http://www.scala-lang.org/api/current/index.html#scala.collection.immutable.Vector
Integration with Actors
=======================
In Akka we've also integrated Actors and STM in :ref:`agents-scala` and
:ref:`transactors-scala`.
.. _akka-testkit:
##############################
Testing Actor Systems (Scala)
##############################
.. toctree::
testkit-example
As with any piece of software, automated tests are a very important part of the
development cycle. The actor model presents a different view on how units of
code are delimited and how they interact, which has an influence on how to
perform tests.
Akka comes with a dedicated module :mod:`akka-testkit` for supporting tests at
different levels, which fall into two clearly distinct categories:
- Testing isolated pieces of code without involving the actor model, meaning
without multiple threads; this implies completely deterministic behavior
concerning the ordering of events and no concurrency concerns and will be
called **Unit Testing** in the following.
- Testing (multiple) encapsulated actors including multi-threaded scheduling;
this implies non-deterministic order of events but shielding from
concurrency concerns by the actor model and will be called **Integration
Testing** in the following.
There are of course variations on the granularity of tests in both categories,
where unit testing reaches down to white-box tests and integration testing can
encompass functional tests of complete actor networks. The important
distinction lies in whether concurrency concerns are part of the test or not.
The tools offered are described in detail in the following sections.
.. note::
Be sure to add the module :mod:`akka-testkit` to your dependencies.
Synchronous Unit Testing with :class:`TestActorRef`
===================================================
Testing the business logic inside :class:`Actor` classes can be divided into
two parts: first, each atomic operation must work in isolation, then sequences
of incoming events must be processed correctly, even in the presence of some
possible variability in the ordering of events. The former is the primary use
case for single-threaded unit testing, while the latter can only be verified in
integration tests.
Normally, the :class:`ActorRef` shields the underlying :class:`Actor` instance
from the outside, the only communications channel is the actor's mailbox. This
restriction is an impediment to unit testing, which led to the inception of the
:class:`TestActorRef`. This special type of reference is designed specifically
for test purposes and allows access to the actor in two ways: either by
obtaining a reference to the underlying actor instance, or by invoking or
querying the actor's behaviour (:meth:`receive`). Each one warrants its own
section below.
Obtaining a Reference to an :class:`Actor`
------------------------------------------
Having access to the actual :class:`Actor` object allows application of all
traditional unit testing techniques on the contained methods. Obtaining a
reference is done like this:
.. includecode:: code/docs/testkit/TestkitDocSpec.scala#test-actor-ref
Since :class:`TestActorRef` is generic in the actor type it returns the
underlying actor with its proper static type. From this point on you may bring
any unit testing tool to bear on your actor as usual.
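In its simplest form the pattern looks like this (a sketch; ``MyActor``
stands in for the actor under test):

.. code-block:: scala

   import akka.actor.ActorSystem
   import akka.testkit.TestActorRef

   implicit val system = ActorSystem("test")

   // Create the actor synchronously and reach inside to its instance.
   val actorRef = TestActorRef[MyActor]
   val actor: MyActor = actorRef.underlyingActor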
.. _TestFSMRef:
Testing Finite State Machines
-----------------------------
If your actor under test is a :class:`FSM`, you may use the special
:class:`TestFSMRef` which offers all features of a normal :class:`TestActorRef`
and in addition allows access to the internal state:
.. includecode:: code/docs/testkit/TestkitDocSpec.scala#test-fsm-ref
Due to a limitation in Scalas type inference, there is only the factory method
shown above, so you will probably write code like ``TestFSMRef(new MyFSM)``
instead of the hypothetical :class:`ActorRef`-inspired ``TestFSMRef[MyFSM]``.
All methods shown above directly access the FSM state without any
synchronization; this is perfectly alright if the
:class:`CallingThreadDispatcher` is used (which is the default for
:class:`TestFSMRef`) and no other threads are involved, but it may lead to
surprises if you were to actually exercise timer events, because those are
executed on the :obj:`Scheduler` thread.
Testing the Actor's Behavior
----------------------------
When the dispatcher invokes the processing behavior of an actor on a message,
it actually calls :meth:`apply` on the current behavior registered for the
actor. This starts out with the return value of the declared :meth:`receive`
method, but it may also be changed using :meth:`become` and :meth:`unbecome` in
response to external messages. All of this contributes to the overall actor
behavior and it does not lend itself to easy testing on the :class:`Actor`
itself. Therefore the :class:`TestActorRef` offers a different mode of
operation to complement the :class:`Actor` testing: it supports all operations
also valid on normal :class:`ActorRef`. Messages sent to the actor are
processed synchronously on the current thread and answers may be sent back as
usual. This trick is made possible by the :class:`CallingThreadDispatcher`
described below (see `CallingThreadDispatcher`_); this dispatcher is set
implicitly for any actor instantiated into a :class:`TestActorRef`.
.. includecode:: code/docs/testkit/TestkitDocSpec.scala#test-behavior
As the :class:`TestActorRef` is a subclass of :class:`LocalActorRef` with a few
special extras, also aspects like supervision and restarting work properly, but
beware that execution is only strictly synchronous as long as all actors
involved use the :class:`CallingThreadDispatcher`. As soon as you add elements
which include more sophisticated scheduling you leave the realm of unit testing
as you then need to think about asynchronicity again (in most cases the problem
will be to wait until the desired effect had a chance to happen).
One more special aspect which is overridden for single-threaded tests is the
:meth:`receiveTimeout`, as including that would entail asynchronous queuing of
:obj:`ReceiveTimeout` messages, violating the synchronous contract.
.. note::
To summarize: :class:`TestActorRef` overwrites two fields: it sets the
dispatcher to :obj:`CallingThreadDispatcher.global` and it sets the
:obj:`receiveTimeout` to None.
The Way In-Between: Expecting Exceptions
----------------------------------------
If you want to test the actor behavior, including hotswapping, but without
involving a dispatcher and without having the :class:`TestActorRef` swallow
any thrown exceptions, then there is another mode available for you: just use
the :meth:`receive` method :class:`TestActorRef`, which will be forwarded to the
underlying actor:
.. includecode:: code/docs/testkit/TestkitDocSpec.scala#test-expecting-exceptions
Use Cases
---------
You may of course mix and match both modi operandi of :class:`TestActorRef` as
suits your test needs:
- one common use case is setting up the actor into a specific internal state
before sending the test message
- another is to verify correct internal state transitions after having sent
the test message
Feel free to experiment with the possibilities, and if you find useful
patterns, don't hesitate to let the Akka forums know about them! Who knows,
common operations might even be worked into nice DSLs.
Asynchronous Integration Testing with :class:`TestKit`
======================================================
When you are reasonably sure that your actor's business logic is correct, the
next step is verifying that it works correctly within its intended environment
(if the individual actors are simple enough, possibly because they use the
:mod:`FSM` module, this might also be the first step). The definition of the
environment depends of course very much on the problem at hand and the level at
which you intend to test, ranging from functional/integration tests to full
system tests. The minimal setup consists of the test procedure, which provides
the desired stimuli, the actor under test, and an actor receiving replies.
Bigger systems replace the actor under test with a network of actors, apply
stimuli at varying injection points and arrange results to be sent from
different emission points, but the basic principle stays the same in that a
single procedure drives the test.
The :class:`TestKit` class contains a collection of tools which makes this
common task easy.
.. includecode:: code/docs/testkit/PlainWordSpec.scala#plain-spec
The :class:`TestKit` contains an actor named :obj:`testActor` which is the
entry point for messages to be examined with the various ``expectMsg...``
assertions detailed below. When mixing in the trait ``ImplicitSender`` this
test actor is implicitly used as sender reference when dispatching messages
from the test procedure. The :obj:`testActor` may also be passed to
other actors as usual, usually subscribing it as notification listener. There
is a whole set of examination methods, e.g. receiving all consecutive messages
matching certain criteria, receiving a whole sequence of fixed messages or
classes, receiving nothing for some time, etc.
The ActorSystem passed in to the constructor of TestKit is accessible via the
:obj:`system` member. Remember to shut down the actor system after the test is
finished (also in case of failure) so that all actors—including the test
actor—are stopped.
Built-In Assertions
-------------------
The above mentioned :meth:`expectMsg` is not the only method for formulating
assertions concerning received messages. Here is the full list:
* :meth:`expectMsg[T](d: Duration, msg: T): T`
The given message object must be received within the specified time; the
object will be returned.
* :meth:`expectMsgPF[T](d: Duration)(pf: PartialFunction[Any, T]): T`
Within the given time period, a message must be received and the given
partial function must be defined for that message; the result from applying
the partial function to the received message is returned. The duration may
be left unspecified (empty parentheses are required in this case) to use
the deadline from the innermost enclosing :ref:`within <TestKit.within>`
block instead.
* :meth:`expectMsgClass[T](d: Duration, c: Class[T]): T`
An object which is an instance of the given :class:`Class` must be received
within the allotted time frame; the object will be returned. Note that this
does a conformance check; if you need the class to be equal, have a look at
:meth:`expectMsgAllClassOf` with a single given class argument.
* :meth:`expectMsgType[T: Manifest](d: Duration)`
An object which is an instance of the given type (after erasure) must be
received within the allotted time frame; the object will be returned. This
method is approximately equivalent to
``expectMsgClass(implicitly[ClassTag[T]].runtimeClass)``.
* :meth:`expectMsgAnyOf[T](d: Duration, obj: T*): T`
An object must be received within the given time, and it must be equal (
compared with ``==``) to at least one of the passed reference objects; the
received object will be returned.
* :meth:`expectMsgAnyClassOf[T](d: Duration, obj: Class[_ <: T]*): T`
An object must be received within the given time, and it must be an
instance of at least one of the supplied :class:`Class` objects; the
received object will be returned.
* :meth:`expectMsgAllOf[T](d: Duration, obj: T*): Seq[T]`
A number of objects matching the size of the supplied object array must be
received within the given time, and for each of the given objects there
must exist at least one among the received ones which equals (compared with
``==``) it. The full sequence of received objects is returned.
* :meth:`expectMsgAllClassOf[T](d: Duration, c: Class[_ <: T]*): Seq[T]`
A number of objects matching the size of the supplied :class:`Class` array
must be received within the given time, and for each of the given classes
there must exist at least one among the received objects whose class equals
(compared with ``==``) it (this is *not* a conformance check). The full
sequence of received objects is returned.
* :meth:`expectMsgAllConformingOf[T](d: Duration, c: Class[_ <: T]*): Seq[T]`
A number of objects matching the size of the supplied :class:`Class` array
must be received within the given time, and for each of the given classes
there must exist at least one among the received objects which is an
instance of this class. The full sequence of received objects is returned.
* :meth:`expectNoMsg(d: Duration)`
No message must be received within the given time. This also fails if a
message has been received before calling this method which has not been
removed from the queue using one of the other methods.
* :meth:`receiveN(n: Int, d: Duration): Seq[AnyRef]`
``n`` messages must be received within the given time; the received
messages are returned.
* :meth:`fishForMessage(max: Duration, hint: String)(pf: PartialFunction[Any, Boolean]): Any`
Keep receiving messages as long as the time is not used up and the partial
function matches and returns ``false``. Returns the message received for
which it returned ``true`` or throws an exception, which will include the
provided hint for easier debugging.
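Used from within a :class:`TestKit`, a couple of these assertions could be
combined like so (a minimal sketch; ``echo`` stands for a hypothetical actor
that replies with whatever it receives):

.. code-block:: scala

   import scala.concurrent.duration._

   within(500.millis) {
     echo ! "hello"
     expectMsg("hello") // fails if nothing, or something else, arrives in time
     expectNoMsg()      // then nothing more may arrive for the remaining time
   }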
In addition to message reception assertions there are also methods which help
with message flows:
* :meth:`receiveOne(d: Duration): AnyRef`
Tries to receive one message for at most the given time interval and
returns ``null`` in case of failure. If the given Duration is zero, the
call is non-blocking (polling mode).
* :meth:`receiveWhile[T](max: Duration, idle: Duration, messages: Int)(pf: PartialFunction[Any, T]): Seq[T]`
Collect messages as long as
* they are matching the given partial function
* the given time interval is not used up
* the next message is received within the idle timeout
* the number of messages has not yet reached the maximum
All collected messages are returned. The maximum duration defaults to the
time remaining in the innermost enclosing :ref:`within <TestKit.within>`
block and the idle duration defaults to infinity (thereby disabling the
idle timeout feature). The number of expected messages defaults to
``Int.MaxValue``, which effectively disables this limit.
* :meth:`awaitCond(p: => Boolean, max: Duration, interval: Duration)`
Poll the given condition every :obj:`interval` until it returns ``true`` or
the :obj:`max` duration is used up. The interval defaults to 100 ms and the
maximum defaults to the time remaining in the innermost enclosing
:ref:`within <TestKit.within>` block.
* :meth:`ignoreMsg(pf: PartialFunction[AnyRef, Boolean])`
:meth:`ignoreNoMsg`
The internal :obj:`testActor` contains a partial function for ignoring
messages: it will only enqueue messages which do not match the function or
for which the function returns ``false``. This function can be set and
reset using the methods given above; each invocation replaces the previous
function, they are not composed.
This feature is useful e.g. when testing a logging system, where you want
to ignore regular messages and are only interested in your specific ones.
Expecting Log Messages
----------------------
Since an integration test does not allow access to the internal processing of the
participating actors, verifying expected exceptions cannot be done directly.
Instead, use the logging system for this purpose: replacing the normal event
handler with the :class:`TestEventListener` and using an :class:`EventFilter`
allows assertions on log messages, including those which are generated by
exceptions:
.. includecode:: code/docs/testkit/TestkitDocSpec.scala#event-filter
If a number of occurrences is specified—as demonstrated above—then ``intercept``
will block until that number of matching messages have been received or the
timeout configured in ``akka.test.filter-leeway`` is used up (time starts
counting after the passed-in block of code returns). In case of a timeout the
test fails.
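As an illustration, a minimal sketch of such an interception (``victim`` is a
hypothetical actor reference, and the :class:`TestEventListener` from the note
below must be configured):

.. code-block:: scala

   import akka.actor.{ ActorKilledException, Kill }
   import akka.testkit.EventFilter

   // expect exactly one ActorKilledException to be logged while the block runs
   EventFilter[ActorKilledException](occurrences = 1) intercept {
     victim ! Kill
   }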
.. note::
Be sure to exchange the default event handler with the
:class:`TestEventListener` in your ``application.conf`` to enable this
function::
akka.event-handlers = ["akka.testkit.TestEventListener"]
.. _TestKit.within:
Timing Assertions
-----------------
Another important part of functional testing concerns timing: certain events
must not happen immediately (like a timer), others need to happen before a
deadline. Therefore, all examination methods accept an upper time limit within
which the positive or negative result must be obtained. Lower time limits need
to be checked external to the examination, which is facilitated by a new
construct for managing time constraints:
.. code-block:: scala
within([min, ]max) {
...
}
The block given to :meth:`within` must complete after a :ref:`Duration` which
is between :obj:`min` and :obj:`max`, where the former defaults to zero. The
deadline calculated by adding the :obj:`max` parameter to the block's start
time is implicitly available within the block to all examination methods; if
you do not specify it, it is inherited from the innermost enclosing
:meth:`within` block.
It should be noted that if the last message-receiving assertion of the block is
:meth:`expectNoMsg` or :meth:`receiveWhile`, the final check of the
:meth:`within` is skipped in order to avoid false positives due to wake-up
latencies. This means that while individual contained assertions still use the
maximum time bound, the overall block may take arbitrarily longer in this case.
.. includecode:: code/docs/testkit/TestkitDocSpec.scala#test-within
.. note::
All times are measured using ``System.nanoTime``, meaning that they describe
wall time, not CPU time.
Ray Roestenburg has written a great article on using the TestKit:
`<http://roestenburg.agilesquad.com/2011/02/unit-testing-akka-actors-with-testkit_12.html>`_.
His full example is also available :ref:`here <testkit-example>`.
Accounting for Slow Test Systems
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The tight timeouts you use during testing on your lightning-fast notebook will
invariably lead to spurious test failures on the heavily loaded Jenkins server
(or similar). To account for this situation, all maximum durations are
internally scaled by a factor taken from the :ref:`configuration`,
``akka.test.timefactor``, which defaults to 1.
You can scale other durations by the same factor by using the implicit
conversion in the ``akka.testkit`` package object, which adds a ``dilated``
method to :class:`Duration`.
.. includecode:: code/docs/testkit/TestkitDocSpec.scala#duration-dilation
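For illustration, a minimal sketch (assuming an implicit ``ActorSystem`` in
scope, e.g. inside a :class:`TestKit`; the duration import may differ between
Akka versions):

.. code-block:: scala

   import akka.testkit._        // provides the implicit dilated conversion
   import akka.util.duration._  // duration DSL

   // 200 ms multiplied by the configured akka.test.timefactor
   val patience = 200.millis.dilated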
Resolving Conflicts with Implicit ActorRef
------------------------------------------
If you want the sender of messages inside your TestKit-based tests to be the
``testActor``, simply mix ``ImplicitSender`` into your test.
.. includecode:: code/docs/testkit/PlainWordSpec.scala#implicit-sender
Using Multiple Probe Actors
---------------------------
When the actors under test are supposed to send various messages to different
destinations, it may be difficult to distinguish the message streams arriving
at the :obj:`testActor` when using the :class:`TestKit` as a mixin. Another
approach is to use it for creation of simple probe actors to be inserted in the
message flows. To make this more powerful and convenient, there is a concrete
implementation called :class:`TestProbe`. The functionality is best explained
using a small example:
.. includecode:: code/docs/testkit/TestkitDocSpec.scala
:include: imports-test-probe,my-double-echo,test-probe
Here the system under test is simulated by :class:`MyDoubleEcho`, which is
supposed to mirror its input to two outputs. Attaching two test probes enables
verification of the (simplistic) behavior. Another example would be two actors
A and B which collaborate by A sending messages to B. In order to verify this
message flow, a :class:`TestProbe` could be inserted as target of A, using the
forwarding capabilities or auto-pilot described below to include a real B in
the test setup.
Probes may also be equipped with custom assertions to make your test code even
more concise and clear:
.. includecode:: code/docs/testkit/TestkitDocSpec.scala
:include: test-special-probe
You have complete flexibility here in mixing and matching the :class:`TestKit`
facilities with your own checks and choosing an intuitive name for it. In real
life your code will probably be a bit more complicated than the example given
above; just use the power!
Replying to Messages Received by Probes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The probes keep track of the communications channel for replies, if possible,
so they can also reply:
.. includecode:: code/docs/testkit/TestkitDocSpec.scala#test-probe-reply
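For example, a minimal sketch (assuming ``ImplicitSender`` is mixed in so that
the probe sees the ``testActor`` as sender):

.. code-block:: scala

   val probe = TestProbe()
   probe.ref ! "hello"      // implicitly sent from the testActor
   probe.expectMsg("hello")
   probe.reply("world")     // replies to the sender of the last message
   expectMsg("world")       // the reply arrives back at the testActor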
Forwarding Messages Received by Probes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Consider a destination actor ``dest`` which in the nominal actor network would
receive a message from actor ``source``. If you arrange for the message to be
sent to a :class:`TestProbe` ``probe`` instead, you can make assertions
concerning volume and timing of the message flow while still keeping the
network functioning:
.. includecode:: code/docs/testkit/TestkitDocSpec.scala
:include: test-probe-forward-actors,test-probe-forward
The ``dest`` actor will receive the same message invocation as if no test probe
had intervened.
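A minimal sketch of this pattern (``dest`` and the message are placeholders):

.. code-block:: scala

   val probe = TestProbe()
   // the actor under test was configured to send to probe.ref instead of dest
   probe.expectMsg("work")
   probe.forward(dest) // dest receives "work" with the original sender preserved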
Auto-Pilot
^^^^^^^^^^
Receiving messages in a queue for later inspection is nice, but in order to
keep a test running and verify traces later you can also install an
:class:`AutoPilot` in the participating test probes (actually in any
:class:`TestKit`) which is invoked before enqueueing to the inspection queue.
This code can be used to forward messages, e.g. in a chain ``A --> Probe -->
B``, as long as a certain protocol is obeyed.
.. includecode:: ../../../akka-testkit/src/test/scala/akka/testkit/TestProbeSpec.scala#autopilot
The :meth:`run` method must return the auto-pilot for the next message, which
may be :class:`KeepRunning` to retain the current one or :class:`NoAutoPilot`
to switch it off.
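A sketch of such an auto-pilot, forwarding every message to a hypothetical
``target`` actor until a stop command is seen:

.. code-block:: scala

   import akka.actor.ActorRef
   import akka.testkit.{ TestActor, TestProbe }

   val probe = TestProbe()
   probe.setAutoPilot(new TestActor.AutoPilot {
     def run(sender: ActorRef, msg: Any): TestActor.AutoPilot =
       msg match {
         case "stop" => TestActor.NoAutoPilot // switch the pilot off
         case x =>
           target.tell(x, sender)  // forward, keeping the original sender
           TestActor.KeepRunning   // retain this pilot for the next message
       }
   })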
Caution about Timing Assertions
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The behavior of :meth:`within` blocks when using test probes might be perceived
as counter-intuitive: you need to remember that the nicely scoped deadline as
described :ref:`above <TestKit.within>` is local to each probe. Hence, probes
do not react to each other's deadlines or to the deadline set in an enclosing
:class:`TestKit` instance:
.. includecode:: code/docs/testkit/TestkitDocSpec.scala#test-within-probe
Here, the ``expectMsg`` call will use the default timeout.
.. _Scala-CallingThreadDispatcher:
CallingThreadDispatcher
=======================
The :class:`CallingThreadDispatcher` serves good purposes in unit testing, as
described above, but originally it was conceived in order to allow contiguous
stack traces to be generated in case of an error. As this special dispatcher
runs everything which would normally be queued directly on the current thread,
the full history of a message's processing chain is recorded on the call stack,
so long as all intervening actors run on this dispatcher.
How to use it
-------------
Just set the dispatcher as you normally would:
.. includecode:: code/docs/testkit/TestkitDocSpec.scala#calling-thread-dispatcher
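For example, a sketch of assigning the dispatcher by its id (``MyActor`` is a
placeholder):

.. code-block:: scala

   import akka.actor.Props
   import akka.testkit.CallingThreadDispatcher

   val ref = system.actorOf(
     Props[MyActor].withDispatcher(CallingThreadDispatcher.Id))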
How it works
------------
When receiving an invocation, the :class:`CallingThreadDispatcher` checks
whether the receiving actor is already active on the current thread. The
simplest example for this situation is an actor which sends a message to
itself. In this case, processing cannot continue immediately as that would
violate the actor model, so the invocation is queued and will be processed when
the active invocation on that actor finishes its processing; thus, it will be
processed on the calling thread, but simply after the actor finishes its
previous work. In the other case, the invocation is simply processed
immediately on the current thread. Futures scheduled via this dispatcher are
also executed immediately.
This scheme makes the :class:`CallingThreadDispatcher` work like a general
purpose dispatcher for any actors which never block on external events.
In the presence of multiple threads it may happen that two invocations of an
actor running on this dispatcher happen on two different threads at the same
time. In this case, both will be processed directly on their respective
threads, where both compete for the actor's lock and the loser has to wait.
Thus, the actor model is left intact, but the price is loss of concurrency due
to limited scheduling. In a sense this is equivalent to traditional mutex style
concurrency.
The other remaining difficulty is correct handling of suspend and resume: when
an actor is suspended, subsequent invocations will be queued in thread-local
queues (the same ones used for queuing in the normal case). The call to
:meth:`resume`, however, is done by one specific thread, and all other threads
in the system will probably not be executing this specific actor, which leads
to the problem that the thread-local queues cannot be emptied by their native
threads. Hence, the thread calling :meth:`resume` will collect all currently
queued invocations from all threads into its own queue and process them.
Limitations
-----------
.. warning::
In case the CallingThreadDispatcher is used for top-level actors, but
without going through TestActorRef, then there is a time window during which
the actor is awaiting construction by the user guardian actor. Sending
messages to the actor during this time period will result in them being
enqueued and then executed on the guardian's thread instead of the caller's
thread. To avoid this, use TestActorRef.
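A sketch of the recommended approach (``MyActor`` is a placeholder; an
implicit ``ActorSystem`` must be in scope):

.. code-block:: scala

   import akka.testkit.TestActorRef

   // constructed synchronously on the CallingThreadDispatcher, so there is
   // no window during which messages end up on the guardian's thread
   val ref = TestActorRef[MyActor]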
If an actor's behavior blocks on something which would normally be triggered
by the calling thread after having sent the message, this will obviously
dead-lock when using this dispatcher. This is a common scenario in actor tests
based on :class:`CountDownLatch` for synchronization:
.. code-block:: scala
val latch = new CountDownLatch(1)
actor ! startWorkAfter(latch) // actor will call latch.await() before proceeding
doSomeSetupStuff()
latch.countDown()
The example would hang indefinitely within the message processing initiated on
the second line and never reach the fourth line, which would unblock it on a
normal dispatcher.
Thus, keep in mind that the :class:`CallingThreadDispatcher` is not a
general-purpose replacement for the normal dispatchers. On the other hand it
may be quite useful to run your actor network on it for testing, because if it
runs without dead-locking, chances are very high that it will not dead-lock in
production.
.. warning::
The above sentence is unfortunately not a strong guarantee, because your
code might directly or indirectly change its behavior when running on a
different dispatcher. If you are looking for a tool to help you debug
dead-locks, the :class:`CallingThreadDispatcher` may help with certain error
scenarios, but keep in mind that it may give false negatives as well as
false positives.
Benefits
--------
To summarize, these are the features which the :class:`CallingThreadDispatcher`
has to offer:
- Deterministic execution of single-threaded tests while retaining nearly full
actor semantics
- Full message processing history leading up to the point of failure in
exception stack traces
- Exclusion of certain classes of dead-lock scenarios
.. _actor.logging-scala:
Tracing Actor Invocations
=========================
The testing facilities described up to this point were aiming at formulating
assertions about a system's behavior. If a test fails, it is usually your job
to find the cause, fix it and verify the test again. This process is supported
by debuggers as well as logging, where the Akka toolkit offers the following
options:
* *Logging of exceptions thrown within Actor instances*
This is always on; in contrast to the other logging mechanisms, this logs at
``ERROR`` level.
* *Logging of message invocations on certain actors*
This is enabled by a setting in the :ref:`configuration` — namely
``akka.actor.debug.receive`` — which enables the :meth:`loggable`
statement to be applied to an actor's :meth:`receive` function:
.. includecode:: code/docs/testkit/TestkitDocSpec.scala#logging-receive
If the above-mentioned setting is not given in the :ref:`configuration`, this
method will pass through the given :class:`Receive` function unmodified, meaning
that there is no runtime cost unless actually enabled.
The logging feature is coupled to this specific local mark-up because
enabling it uniformly on all actors is not usually what you need, and it
would lead to endless loops if it were applied to :class:`EventHandler`
listeners.
* *Logging of special messages*
Actors handle certain special messages automatically, e.g. :obj:`Kill`,
:obj:`PoisonPill`, etc. Tracing of these message invocations is enabled by
the setting ``akka.actor.debug.autoreceive``, which enables this on all
actors.
* *Logging of the actor lifecycle*
Actor creation, start, restart, monitor start, monitor stop and stop may be traced by
enabling the setting ``akka.actor.debug.lifecycle``; this, too, is enabled
uniformly on all actors.
All these messages are logged at ``DEBUG`` level. To summarize, you can enable
full logging of actor activities using this configuration fragment::
akka {
loglevel = DEBUG
actor {
debug {
receive = on
autoreceive = on
lifecycle = on
}
}
}
Different Testing Frameworks
============================
Akka's own test suite is written using `ScalaTest <http://scalatest.org>`_,
which also shines through in documentation examples. However, the TestKit and
its facilities do not depend on that framework; you can essentially use
whichever suits your development style best.
This section contains a collection of known gotchas with some other frameworks,
which is by no means exhaustive and does not imply endorsement or special
support.
When you need it to be a trait
------------------------------
If for some reason it is a problem to inherit from :class:`TestKit` due to it
being a concrete class instead of a trait, there's :class:`TestKitBase`:
.. includecode:: code/docs/testkit/TestkitDocSpec.scala
:include: test-kit-base
:exclude: put-your-test-code-here
The ``implicit lazy val system`` must be declared exactly like that (you can of
course pass arguments to the actor system factory as needed) because trait
:class:`TestKitBase` needs the system during its construction.
.. warning::
Use of the trait is discouraged because of potential issues with binary
backwards compatibility in the future; use at your own risk.
Specs2
------
Some `Specs2 <http://specs2.org>`_ users have contributed examples of how to work around some clashes which may arise:
* Mixing TestKit into :class:`org.specs2.mutable.Specification` results in a
name clash involving the ``end`` method (which is a private variable in
TestKit and an abstract method in Specification); if mixing in TestKit first,
the code may compile but might then fail at runtime. The work-around—which is
actually beneficial also for the third point—is to apply the TestKit together
with :class:`org.specs2.specification.Scope`.
* The Specification traits provide a :class:`Duration` DSL which partly uses the
same method names as :class:`scala.concurrent.util.Duration`, resulting in ambiguous
implicits if ``akka.util.duration._`` is imported. There are two work-arounds:
* either use the Specification variant of Duration and supply an implicit
conversion to the Akka Duration. This conversion is not supplied with the
Akka distribution because that would mean that our JAR files would depend on
Specs2, which is not justified by this little feature.
* or mix :class:`org.specs2.time.NoTimeConversions` into the Specification.
* Specifications are by default executed concurrently, which requires some care
when writing the tests, or alternatively use of the ``sequential`` keyword.
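For the second work-around above, a minimal sketch (class name hypothetical,
assuming Specs2 1.x):

.. code-block:: scala

   import org.specs2.mutable.Specification
   import org.specs2.time.NoTimeConversions

   // NoTimeConversions removes Specs2's duration implicits so that the Akka
   // duration DSL is unambiguous; sequential addresses the last point above
   class MySpec extends Specification with NoTimeConversions {
     sequential
     // test examples go here
   }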
Testing Custom Router Logic
===========================
Given the following custom (dummy) router:
.. includecode:: ../../../akka-actor-tests/src/test/scala/akka/routing/CustomRouteSpec.scala#custom-router
This might be tested by dispatching messages and asserting their reception at
the right destinations, but that can be inconvenient. For this purpose the
:obj:`ExtractRoute` extractor exists, which can be used like so:
.. includecode:: ../../../akka-actor-tests/src/test/scala/akka/routing/CustomRouteSpec.scala#test-route

.. _testkit-example:
########################
TestKit Example (Scala)
########################
Ray Roestenburg's example code from `his blog <http://roestenburg.agilesquad.com/2011/02/unit-testing-akka-actors-with-testkit_12.html>`_ adapted to work with Akka 2.x.
.. includecode:: code/docs/testkit/TestkitUsageSpec.scala#testkit-usage

.. _transactors-scala:
#####################
Transactors (Scala)
#####################
Why Transactors?
================
Actors are excellent for solving problems where you have many independent
processes that can work in isolation and only interact with other Actors through
message passing. This model fits many problems. But the actor model is
unfortunately a terrible model for implementing truly shared state. E.g. when
you need to have consensus and a stable view of state across many
components. The classic example is the bank account where clients can deposit
and withdraw, in which each operation needs to be atomic. For detailed
discussion on the topic see `this JavaOne presentation
<http://www.slideshare.net/jboner/state-youre-doing-it-wrong-javaone-2009>`_.
STM on the other hand is excellent for problems where you need consensus and a
stable view of the state by providing compositional transactional shared
state. Some of the really nice traits of STM are that transactions compose, and
it raises the abstraction level from lock-based concurrency.
Akka's Transactors combine Actors and STM to provide the best of the Actor model
(concurrency and asynchronous event-based programming) and STM (compositional
transactional shared state) by providing transactional, compositional,
asynchronous, event-based message flows.
Generally, the STM is not needed very often when working with Akka. Some
use-cases (that we can think of) are:
- When you really need composable message flows across many actors updating
their **internal local** state but need them to do that atomically in one big
transaction. Might not be often but when you do need this then you are
screwed without it.
- When you want to share a datastructure across actors.
Actors and STM
==============
You can combine Actors and STM in several ways. An Actor may use STM internally
so that particular changes are guaranteed to be atomic. Actors may also share
transactional datastructures as the STM provides safe shared state across
threads.
It's also possible to coordinate transactions across Actors or threads so that
either the transactions in a set all commit successfully or they all fail. This
is the focus of Transactors and the explicit support for coordinated
transactions in this section.
Coordinated transactions
========================
Akka provides an explicit mechanism for coordinating transactions across
Actors. Under the hood it uses a ``CommitBarrier``, similar to a CountDownLatch.
Here is an example of coordinating two simple counter Actors so that they both
increment together in coordinated transactions. If one of them were to fail to
increment, the other would also fail.
.. includecode:: code/docs/transactor/TransactorDocSpec.scala#coordinated-example
.. includecode:: code/docs/transactor/TransactorDocSpec.scala#run-coordinated-example
Note that creating a ``Coordinated`` object requires a ``Timeout`` to be
specified for the coordinated transaction. This can be done implicitly, by
having an implicit ``Timeout`` in scope, or explicitly, by passing the timeout
when creating a ``Coordinated`` object. Here's an example of specifying an
implicit timeout:
.. includecode:: code/docs/transactor/TransactorDocSpec.scala#implicit-timeout
To start a new coordinated transaction that you will also participate in, just
create a ``Coordinated`` object (this assumes an implicit timeout):
.. includecode:: code/docs/transactor/TransactorDocSpec.scala#create-coordinated
To start a coordinated transaction that you won't participate in yourself you
can create a ``Coordinated`` object with a message and send it directly to an
actor. The recipient of the message will be the first member of the coordination
set:
.. includecode:: code/docs/transactor/TransactorDocSpec.scala#send-coordinated
To receive a coordinated message in an actor simply match it in a case
statement:
.. includecode:: code/docs/transactor/TransactorDocSpec.scala#receive-coordinated
:exclude: coordinated-atomic
To include another actor in the same coordinated transaction that you've created
or received, use the apply method on that object. This will increment the number
of parties involved by one and create a new ``Coordinated`` object to be sent.
.. includecode:: code/docs/transactor/TransactorDocSpec.scala#include-coordinated
To enter the coordinated transaction use the atomic method of the coordinated
object:
.. includecode:: code/docs/transactor/TransactorDocSpec.scala#coordinated-atomic
The coordinated transaction will wait for the other transactions before
committing. If any of the coordinated transactions fail then they all fail.
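As a sketch of what happens inside such a block (assuming a ScalaSTM
``Ref[Int]`` named ``count`` as in the counter example):

.. code-block:: scala

   // join the coordinated commit; all members commit or fail together
   coordinated atomic { implicit txn =>
     count transform (_ + 1)
   }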
.. note::
The same actor should not be added to a coordinated transaction more than
once. The transaction will not be able to complete as an actor only processes
a single message at a time. When processing the first message the coordinated
transaction will wait for the commit barrier, which in turn needs the second
message to be received to proceed.
Transactor
==========
Transactors are actors that provide a general pattern for coordinating
transactions, using the explicit coordination described above.
Here's an example of a simple transactor that will join a coordinated
transaction:
.. includecode:: code/docs/transactor/TransactorDocSpec.scala#counter-example
You could send this Counter transactor a ``Coordinated(Increment)`` message. If
you were to send it just an ``Increment`` message it will create its own
``Coordinated`` (but in this particular case wouldn't be coordinating
transactions with any other transactors).
To coordinate with other transactors override the ``coordinate`` method. The
``coordinate`` method maps a message to a set of ``SendTo`` objects, pairs of
``ActorRef`` and a message. You can use the ``include`` and ``sendTo`` methods
to easily coordinate with other transactors. The ``include`` method will send on
the same message that was received to other transactors. The ``sendTo`` method
allows you to specify both the actor to send to, and the message to send.
Example of coordinating an increment:
.. includecode:: code/docs/transactor/TransactorDocSpec.scala#friendly-counter-example
Using ``include`` to include more than one transactor:
.. includecode:: code/docs/transactor/TransactorDocSpec.scala#coordinate-include
Using ``sendTo`` to coordinate transactions but pass on a different message than
the one that was received:
.. includecode:: code/docs/transactor/TransactorDocSpec.scala#coordinate-sendto
To execute directly before or after the coordinated transaction, override the
``before`` and ``after`` methods. These methods also expect partial functions
like the receive method. They do not execute within the transaction.
To completely bypass coordinated transactions override the ``normally``
method. Any message matched by ``normally`` will not be matched by the other
methods, and will not be involved in coordinated transactions. In this method
you can implement normal actor behavior, or use the normal STM atomic for local
transactions.

Typed Actors (Scala)
====================
Akka Typed Actors is an implementation of the `Active Objects <http://en.wikipedia.org/wiki/Active_object>`_ pattern.
Essentially it turns method invocations into asynchronous dispatch, instead of the synchronous dispatch that has been the default since Smalltalk came out.
Typed Actors consist of 2 "parts", a public interface and an implementation, and if you've done any work in "enterprise" Java, this will be very familiar to you. As with normal Actors you have an external API (the public interface instance) that will delegate method calls asynchronously to
a private instance of the implementation.
The advantage of Typed Actors vs. Actors is that with TypedActors you have a static contract and don't need to define your own messages; the downside is that it places some limitations on what you can and cannot do, i.e. you can't use become/unbecome.
Typed Actors are implemented using `JDK Proxies <http://docs.oracle.com/javase/6/docs/api/java/lang/reflect/Proxy.html>`_ which provide a fairly straightforward API for intercepting method calls.
.. note::
Just as with regular Akka Actors, Typed Actors process one call at a time.
When to use Typed Actors
------------------------
Typed actors are nice for bridging between actor systems (the “inside”) and
non-actor code (the “outside”), because they allow you to write normal
OO-looking code on the outside. Think of them like doors: their practicality
lies in interfacing between private sphere and the public, but you dont want
that many doors inside your house, do you? For a longer discussion see `this
blog post <http://letitcrash.com/post/19074284309/when-to-use-typedactors>`_.
A bit more background: TypedActors can very easily be abused as RPC, and that
is an abstraction which is `well-known
<http://labs.oracle.com/techrep/1994/abstract-29.html>`_ to be leaky. Hence
TypedActors are not what we think of first when we talk about making highly
scalable concurrent software easier to write correctly. They have their niche,
use them sparingly.
The tools of the trade
----------------------
Before we create our first Typed Actor we should first go through the tools that we have at our disposal;
they are located in ``akka.actor.TypedActor``.
.. includecode:: code/docs/actor/TypedActorDocSpec.scala
:include: typed-actor-extension-tools
.. warning::
Just as you should not expose ``this`` of an Akka Actor, it's important not to expose ``this`` of a Typed Actor;
instead you should pass the external proxy reference, which is obtained from within your Typed Actor as
``TypedActor.self``. This is your external identity, just as the ``ActorRef`` is the external identity of
an Akka Actor.
Creating Typed Actors
---------------------
To create a Typed Actor you need to have one or more interfaces, and one implementation.
Our example interface:
.. includecode:: code/docs/actor/TypedActorDocSpec.scala
:include: imports,typed-actor-iface
:exclude: typed-actor-iface-methods
Our example implementation of that interface:
.. includecode:: code/docs/actor/TypedActorDocSpec.scala
:include: imports,typed-actor-impl
:exclude: typed-actor-impl-methods
The most trivial way of creating a Typed Actor instance
of our Squarer:
.. includecode:: code/docs/actor/TypedActorDocSpec.scala
:include: typed-actor-create1
The first type is the type of the proxy; the second is the type of the implementation.
If you need to call a specific constructor you do it like this:
.. includecode:: code/docs/actor/TypedActorDocSpec.scala
:include: typed-actor-create2
Since you supply a Props, you can specify which dispatcher to use, what the default timeout should be, and more.
Now, our Squarer doesn't have any methods, so we'd better add those.
.. includecode:: code/docs/actor/TypedActorDocSpec.scala
:include: imports,typed-actor-iface
Alright, now we've got some methods we can call, but we need to implement those in SquarerImpl.
.. includecode:: code/docs/actor/TypedActorDocSpec.scala
:include: imports,typed-actor-impl
Excellent, now we have an interface and an implementation of that interface,
and we know how to create a Typed Actor from that, so let's look at calling these methods.
Method dispatch semantics
-------------------------
Methods returning:
* ``Unit`` will be dispatched with ``fire-and-forget`` semantics, exactly like ``ActorRef.tell``
* ``scala.concurrent.Future[_]`` will use ``send-request-reply`` semantics, exactly like ``ActorRef.ask``
* ``scala.Option[_]`` or ``akka.japi.Option<?>`` will use ``send-request-reply`` semantics, but *will* block to wait for an answer,
and return None if no answer was produced within the timeout, or scala.Some/akka.japi.Some containing the result otherwise.
Any exception that was thrown during this call will be rethrown.
* Any other type of value will use ``send-request-reply`` semantics, but *will* block to wait for an answer,
throwing ``java.util.concurrent.TimeoutException`` if there was a timeout or rethrow any exception that was thrown during this call.
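For illustration, a hypothetical interface exercising each of these dispatch
modes (mirroring the ``Squarer`` example above):

.. code-block:: scala

   import scala.concurrent.Future

   trait Squarer {
     def squareDontCare(i: Int): Unit         // fire-and-forget
     def square(i: Int): Future[Int]          // non-blocking send-request-reply
     def squareNowPlease(i: Int): Option[Int] // blocking; None on timeout
     def squareNow(i: Int): Int               // blocking; exception on timeout
   }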
Messages and immutability
-------------------------
While Akka cannot enforce that the parameters to the methods of your Typed Actors are immutable,
we *strongly* recommend that parameters passed are immutable.
One-way message send
^^^^^^^^^^^^^^^^^^^^
.. includecode:: code/docs/actor/TypedActorDocSpec.scala
:include: typed-actor-call-oneway
As simple as that! The method will be executed on another thread, asynchronously.
Request-reply message send
^^^^^^^^^^^^^^^^^^^^^^^^^^
.. includecode:: code/docs/actor/TypedActorDocSpec.scala
:include: typed-actor-call-option
This will block for as long as the timeout that was set in the Props of the Typed Actor,
if needed. It will return ``None`` if a timeout occurs.
.. includecode:: code/docs/actor/TypedActorDocSpec.scala
:include: typed-actor-call-strict
This will block for as long as the timeout that was set in the Props of the Typed Actor,
if needed. It will throw a ``java.util.concurrent.TimeoutException`` if a timeout occurs.
Request-reply-with-future message send
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. includecode:: code/docs/actor/TypedActorDocSpec.scala
:include: typed-actor-call-future
This call is asynchronous, and the Future returned can be used for asynchronous composition.
Stopping Typed Actors
---------------------
Since Akka's Typed Actors are backed by Akka Actors they must be stopped when they aren't needed anymore.
.. includecode:: code/docs/actor/TypedActorDocSpec.scala
:include: typed-actor-stop
This asynchronously stops the Typed Actor associated with the specified proxy ASAP.
.. includecode:: code/docs/actor/TypedActorDocSpec.scala
:include: typed-actor-poisonpill
This asynchronously stops the Typed Actor associated with the specified proxy
after it's done with all calls that were made prior to this call.
Typed Actor Hierarchies
-----------------------
Since you can obtain a contextual Typed Actor Extension by passing in an ``ActorContext``
you can create child Typed Actors by invoking ``typedActorOf(..)`` on that:
.. includecode:: code/docs/actor/TypedActorDocSpec.scala
:include: typed-actor-hierarchy
You can also create a child Typed Actor in regular Akka Actors by giving the ``ActorContext``
as an input parameter to TypedActor.get(…).
Supervisor Strategy
-------------------
By having your Typed Actor implementation class implement ``TypedActor.Supervisor``
you can define the strategy to use for supervising child actors, as described in
:ref:`supervision` and :ref:`fault-tolerance-scala`.
Lifecycle callbacks
-------------------
By having your Typed Actor implementation class implement any and all of the following:
* ``TypedActor.PreStart``
* ``TypedActor.PostStop``
* ``TypedActor.PreRestart``
* ``TypedActor.PostRestart``
You can hook into the lifecycle of your Typed Actor.
Receive arbitrary messages
--------------------------
If your implementation class of your TypedActor extends ``akka.actor.TypedActor.Receiver``,
all messages that are not ``MethodCall``s will be passed into the ``onReceive``-method.
This allows you to react to DeathWatch ``Terminated``-messages and other types of messages,
e.g. when interfacing with untyped actors.
Proxying
--------
You can use the ``typedActorOf`` that takes a TypedProps and an ActorRef to proxy the given ActorRef as a TypedActor.
This is useful if you want to communicate remotely with TypedActors on other machines; just look them up with ``actorFor`` and pass the ``ActorRef`` to ``typedActorOf``.
.. note::
The ActorRef needs to accept ``MethodCall`` messages.
Lookup & Remoting
-----------------
Since ``TypedActors`` are backed by ``Akka Actors``, you can use ``actorFor`` together with ``typedActorOf`` to proxy ``ActorRefs`` potentially residing on remote nodes.
.. includecode:: code/docs/actor/TypedActorDocSpec.scala#typed-actor-remote
Supercharging
-------------
Here's an example of how you can use traits to mix in behavior in your Typed Actors.
.. includecode:: code/docs/actor/TypedActorDocSpec.scala#typed-actor-supercharge
.. includecode:: code/docs/actor/TypedActorDocSpec.scala#typed-actor-supercharge-usage

.. _zeromq-scala:
################
ZeroMQ (Scala)
################
Akka provides a ZeroMQ module which abstracts a ZeroMQ connection and therefore allows interaction between Akka actors to take place over ZeroMQ connections. The messages can be of a proprietary format or they can be defined using Protobuf. The socket actor is fault-tolerant by default and when you use the newSocket method to create new sockets it will properly reinitialize the socket.
ZeroMQ is very opinionated when it comes to multi-threading, so the configuration option ``akka.zeromq.socket-dispatcher`` always needs to be configured to a PinnedDispatcher, because the actual ZeroMQ socket can only be accessed by the thread that created it.
The ZeroMQ module for Akka is written against an API introduced in JZMQ, which uses JNI to interact with the native ZeroMQ library. Instead of using JZMQ, the module uses a ZeroMQ binding for Scala that uses the native ZeroMQ library through JNA. In other words, the only native library that this module requires is the native ZeroMQ library.
The benefit of the Scala library is that you don't need to compile and manage native dependencies, at the cost of some runtime performance. The Scala bindings are compatible with the JNI bindings, so they are a drop-in replacement in case you really need to get that extra bit of performance out.
Connection
==========
ZeroMQ supports multiple connectivity patterns, each aimed to meet a different set of requirements. Currently, this module supports publisher-subscriber connections and connections based on dealers and routers. For connecting or accepting connections, a socket must be created.
Sockets are always created using the ``akka.zeromq.ZeroMQExtension``, for example:
.. includecode:: code/docs/zeromq/ZeromqDocSpec.scala#pub-socket
The above example will create a ZeroMQ publisher socket that is bound to port 1233 on localhost.
Similarly you can create a subscription socket, with a listener, that subscribes to all messages from the publisher using:
.. includecode:: code/docs/zeromq/ZeromqDocSpec.scala#sub-socket
The following sub-sections describe the supported connection patterns and how they can be used in an Akka environment. However, for a comprehensive discussion of connection patterns, please refer to `ZeroMQ -- The Guide <http://zguide.zeromq.org/page:all>`_.
Publisher-Subscriber Connection
-------------------------------
In a publisher-subscriber (pub-sub) connection, the publisher accepts one or more subscribers. Each subscriber shall
subscribe to one or more topics, whereas the publisher publishes messages to a set of topics. Also, a subscriber can
subscribe to all available topics. In an Akka environment, pub-sub connections shall be used when an actor sends messages
to one or more actors that do not interact with the actor that sent the message.
When you're using ZeroMQ pub/sub you should be aware that it needs multicast to work properly (check your cloud environment), and that the filtering of events for topics happens client side, so all events are always broadcast to every subscriber.
An actor is subscribed to a topic as follows:
.. includecode:: code/docs/zeromq/ZeromqDocSpec.scala#sub-topic-socket
It is a prefix match so it is subscribed to all topics starting with ``foo.bar``. Note that if the given string is empty or
``SubscribeAll`` is used, the actor is subscribed to all topics.
To unsubscribe from a topic you do the following:
.. includecode:: code/docs/zeromq/ZeromqDocSpec.scala#unsub-topic-socket
To publish messages to a topic you must use two Frames with the topic in the first frame.
.. includecode:: code/docs/zeromq/ZeromqDocSpec.scala#pub-topic
Pub-Sub in Action
^^^^^^^^^^^^^^^^^
The following example illustrates one publisher with two subscribers.
The publisher monitors current heap usage and system load and periodically publishes ``Heap`` events on the ``"health.heap"`` topic
and ``Load`` events on the ``"health.load"`` topic.
.. includecode:: code/docs/zeromq/ZeromqDocSpec.scala#health
Let's add one subscriber that logs the information. It subscribes to all topics starting with ``"health"``, i.e. both ``Heap`` and
``Load`` events.
.. includecode:: code/docs/zeromq/ZeromqDocSpec.scala#logger
Another subscriber keeps track of used heap and warns if too much heap is used. It only subscribes to ``Heap`` events.
.. includecode:: code/docs/zeromq/ZeromqDocSpec.scala#alerter
Router-Dealer Connection
------------------------
While pub/sub is nice, the real advantage of ZeroMQ is that it is a "lego box" for reliable messaging, and because there are so many integrations, the multi-language support is fantastic.
When you're using ZeroMQ to integrate many systems you'll probably need to build your own ZeroMQ devices. This is where the router and dealer socket types come in handy.
With those socket types you can build your own reliable pub sub broker that uses TCP/IP and does publisher side filtering of events.
To create a Router socket that has a high watermark configured, you would do:
.. includecode:: code/docs/zeromq/ZeromqDocSpec.scala#high-watermark
The akka-zeromq module accepts most, if not all, of the available configuration options for a ZeroMQ socket.
Push-Pull Connection
--------------------
Akka ZeroMQ module supports ``Push-Pull`` connections.
You can create a ``Push`` connection using::
def newPushSocket(socketParameters: Array[SocketOption]): ActorRef
You can create a ``Pull`` connection using::
def newPullSocket(socketParameters: Array[SocketOption]): ActorRef
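A sketch of wiring the two together (the endpoint and the ``listener`` actor
are placeholders):

.. code-block:: scala

   import akka.zeromq._

   val pushSocket = ZeroMQExtension(system).newPushSocket(
     Array(Bind("tcp://127.0.0.1:21231")))
   val pullSocket = ZeroMQExtension(system).newPullSocket(
     Array(Connect("tcp://127.0.0.1:21231"), Listener(listener)))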
More documentation and examples will follow soon.
Rep-Req Connection
------------------
Akka ZeroMQ module supports ``Rep-Req`` connections.
You can create a ``Rep`` connection using::
def newRepSocket(socketParameters: Array[SocketOption]): ActorRef
You can create a ``Req`` connection using::
def newReqSocket(socketParameters: Array[SocketOption]): ActorRef
More documentation and examples will follow soon.