'git mv' rst resources to md

This is mainly intended to keep the git history as neat as possible.
No actual conversion yet, so now both the rst and the paradox docs
are broken, which will be fixed in the next commits.
Arnout Engelen 2017-05-10 15:44:43 +02:00
parent 5507147073
commit a4a0d308ad
553 changed files with 0 additions and 0 deletions

File diff suppressed because it is too large


@@ -1,115 +0,0 @@
.. _agents-java:
Agents
######
Agents in Akka are inspired by `agents in Clojure`_.
.. warning::
**Deprecation warning** - Agents have been deprecated and are scheduled for removal
in the next major version. We have found that their leaky abstraction (they do not
work over the network) makes them inferior to pure Actors, and in light of the
upcoming inclusion of Akka Typed we see little value in maintaining the current Agents.
.. _agents in Clojure: http://clojure.org/agents
Agents provide asynchronous change of individual locations. Agents are bound to
a single storage location for their lifetime, and only allow mutation of that
location (to a new state) to occur as a result of an action. Update actions are
functions that are asynchronously applied to the Agent's state and whose return
value becomes the Agent's new state. The state of an Agent should be immutable.
While updates to Agents are asynchronous, the state of an Agent is always
immediately available for reading by any thread (using ``get``) without any messages.
Agents are reactive. The update actions of all Agents get interleaved amongst
threads in an ``ExecutionContext``. At any point in time, at most one ``send`` action for
each Agent is being executed. Actions dispatched to an agent from another thread
will occur in the order they were sent, potentially interleaved with actions
dispatched to the same agent from other threads.
Creating Agents
===============
Agents are created by invoking ``new Agent<ValueType>(value, executionContext)`` passing in the Agent's initial
value and providing an ``ExecutionContext`` to be used for it:
.. includecode:: code/jdocs/agent/AgentDocTest.java
:include: import-agent,create
:language: java
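Since the referenced include is not shown here, a minimal sketch of what creating an Agent may look like (the ``ExecutionContexts.global()`` context is one possible choice; in some Akka versions the Java API exposes the ``Agent.create`` factory instead of the constructor)::

   import akka.agent.Agent;
   import akka.dispatch.ExecutionContexts;
   import scala.concurrent.ExecutionContext;

   ExecutionContext ec = ExecutionContexts.global();
   // Create an Agent holding an initial value of 5
   Agent<Integer> agent = Agent.create(5, ec);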
Reading an Agent's value
========================
Agents can be dereferenced (you can get an Agent's value) by invoking the Agent
with ``get()`` like this:
.. includecode:: code/jdocs/agent/AgentDocTest.java#read-get
:language: java
Reading an Agent's current value does not involve any message passing and
happens immediately. So while updates to an Agent are asynchronous, reading the
state of an Agent is synchronous.
You can also get a ``Future`` of the Agent's value, which will be completed after the
currently queued updates have completed:
.. includecode:: code/jdocs/agent/AgentDocTest.java
:include: import-future,read-future
:language: java
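A sketch of both read styles, continuing the ``agent`` from the sketch above::

   import scala.concurrent.Future;

   // Synchronous read of the current value; no message passing involved
   Integer current = agent.get();

   // Asynchronous read: a Future that completes after the currently
   // queued updates have been applied
   Future<Integer> future = agent.future();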
See :ref:`futures-java` for more information on ``Futures``.
Updating Agents (send & alter)
==============================
You update an Agent by sending a function (``akka.dispatch.Mapper``) that transforms the current value or
by sending just a new value. The Agent will apply the new value or function
atomically and asynchronously. The update is done in a fire-and-forget manner and
you are only guaranteed that it will be applied. There is no guarantee of when
the update will be applied but dispatches to an Agent from a single thread will
occur in order. You apply a value or a function by invoking the ``send``
function.
.. includecode:: code/jdocs/agent/AgentDocTest.java
:include: import-function,send
:language: java
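For illustration, a sketch of both variants, continuing the ``agent`` from the sketches above::

   import akka.dispatch.Mapper;

   // Send a new value; it will be applied asynchronously
   agent.send(7);

   // Send a function that transforms the current value
   agent.send(new Mapper<Integer, Integer>() {
     public Integer apply(Integer i) {
       return i * 2;
     }
   });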
You can also dispatch a function to update the internal state but on its own
thread. This does not use the reactive thread pool and can be used for
long-running or blocking operations. You do this with the ``sendOff``
method. Dispatches using either ``sendOff`` or ``send`` will still be executed
in order.
.. includecode:: code/jdocs/agent/AgentDocTest.java
:include: import-function,send-off
:language: java
All ``send`` methods also have a corresponding ``alter`` method that returns a ``Future``.
See :ref:`futures-java` for more information on ``Futures``.
.. includecode:: code/jdocs/agent/AgentDocTest.java
:include: import-future,import-function,alter
:language: java
.. includecode:: code/jdocs/agent/AgentDocTest.java
:include: import-future,import-function,alter-off
:language: java
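A sketch of ``alter``, which works like ``send`` but returns a ``Future`` of the new state::

   import akka.dispatch.Mapper;
   import scala.concurrent.Future;

   // Alter with a new value; the Future completes with the new state
   Future<Integer> f1 = agent.alter(7);

   // Alter with a transforming function
   Future<Integer> f2 = agent.alter(new Mapper<Integer, Integer>() {
     public Integer apply(Integer i) {
       return i + 1;
     }
   });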
Configuration
=============
There are several configuration properties for the agents module, please refer
to the :ref:`reference configuration <config-akka-agent>`.
Deprecated Transactional Agents
===============================
Agents participating in enclosing STM transactions is a feature deprecated since Akka 2.3.
If an Agent is used within an enclosing Scala STM transaction, then it will participate in
that transaction. If you send to an Agent within a transaction then the dispatch
to the Agent will be held until that transaction commits, and discarded if the
transaction is aborted.


@@ -1,515 +0,0 @@
.. _camel-java:
Camel
#####
.. warning::
Akka Camel is deprecated in favour of `Alpakka`_ , the Akka Streams based collection of integrations to various endpoints (including Camel).
.. _Alpakka: https://github.com/akka/alpakka
Introduction
============
The akka-camel module allows Untyped Actors to receive
and send messages over a great variety of protocols and APIs.
In addition to the native Scala and Java actor API, actors can now exchange messages with other systems over a large number
of protocols and APIs such as HTTP, SOAP, TCP, FTP, SMTP or JMS, to mention a
few. At the moment, approximately 80 protocols and APIs are supported.
Apache Camel
------------
The akka-camel module is based on `Apache Camel`_, a powerful and light-weight
integration framework for the JVM. For an introduction to Apache Camel you may
want to read this `Apache Camel article`_. Camel comes with a
large number of `components`_ that provide bindings to different protocols and
APIs. The `camel-extra`_ project provides further components.
.. _Apache Camel: http://camel.apache.org/
.. _Apache Camel article: http://architects.dzone.com/articles/apache-camel-integration
.. _components: http://camel.apache.org/components.html
.. _camel-extra: http://code.google.com/p/camel-extra/
Consumer
--------
Here's an example of using Camel's integration components in Akka.
.. includecode:: code/jdocs/camel/MyEndpoint.java#Consumer-mina
The above example exposes an actor over a TCP endpoint via Apache
Camel's `Mina component`_. The actor implements the `getEndpointUri` method to define
an endpoint from which it can receive messages. After starting the actor, TCP
clients can immediately send messages to and receive responses from that
actor. If the message exchange should go over HTTP (via Camel's `Jetty
component`_), the actor's `getEndpointUri` method should return a different URI, for instance "jetty:http://localhost:8877/example".
In the above case an extra constructor is added that sets the endpoint URI, so that
`getEndpointUri` returns the URI provided through that constructor.
.. _Mina component: http://camel.apache.org/mina2.html
.. _Jetty component: http://camel.apache.org/jetty.html
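Since the referenced include is not shown here, a sketch of such a consumer (the endpoint URI and greeting are illustrative)::

   import akka.camel.CamelMessage;
   import akka.camel.javaapi.UntypedConsumerActor;

   public class MyEndpoint extends UntypedConsumerActor {
     public String getEndpointUri() {
       return "mina2:tcp://localhost:6200?textline=true";
     }

     public void onReceive(Object message) {
       if (message instanceof CamelMessage) {
         CamelMessage camelMessage = (CamelMessage) message;
         String body = camelMessage.getBodyAs(String.class, getCamelContext());
         // reply to the TCP client
         getSender().tell(String.format("Hello %s", body), getSelf());
       } else {
         unhandled(message);
       }
     }
   }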
Producer
--------
Actors can also trigger message exchanges with external systems i.e. produce to
Camel endpoints.
.. includecode:: code/jdocs/camel/Orders.java#Producer
In the above example, any message sent to this actor will be sent to
the JMS queue ``Orders``. Producer actors may choose from the same set of Camel
components as Consumer actors do.
Below is an example of how to send a message to the Orders producer.
.. includecode:: code/jdocs/camel/ProducerTestBase.java#TellProducer
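As a sketch, such a producer only needs an endpoint URI, and a plain ``tell`` delivers a message to it (the XML body is illustrative)::

   import akka.actor.ActorRef;
   import akka.actor.Props;
   import akka.camel.javaapi.UntypedProducerActor;

   public class Orders extends UntypedProducerActor {
     public String getEndpointUri() {
       return "jms:queue:Orders";
     }
   }

   // Sending a message to the Orders producer
   ActorRef producer = system.actorOf(Props.create(Orders.class), "jmsproducer");
   producer.tell("<order amount=\"100\" currency=\"PLN\" itemId=\"12345\"/>",
     ActorRef.noSender());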
CamelMessage
------------
The number of Camel components is constantly increasing. The akka-camel module
can support these in a plug-and-play manner. Just add them to your application's
classpath, define a component-specific endpoint URI and use it to exchange
messages over the component-specific protocols or APIs. This is possible because
Camel components bind protocol-specific message formats to a Camel-specific
`normalized message format`__. The normalized message format hides
protocol-specific details from Akka and therefore makes it very easy to support
a large number of protocols through a uniform Camel component interface. The
akka-camel module further converts mutable Camel messages into immutable
representations which are used by Consumer and Producer actors for pattern
matching, transformation, serialization or storage. In the above example of the Orders Producer,
the XML message is put in the body of a newly created Camel Message with an empty set of headers.
You can also create a CamelMessage yourself with the appropriate body and headers as you see fit.
__ https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/Message.java
CamelExtension
--------------
The akka-camel module is implemented as an Akka Extension, the ``CamelExtension`` object.
Extensions will only be loaded once per ``ActorSystem``, which will be managed by Akka.
The ``CamelExtension`` object provides access to the `Camel`_ interface.
The `Camel`_ interface in turn provides access to two important Apache Camel objects, the `CamelContext`_ and the `ProducerTemplate`_.
Below you can see how to get access to these Apache Camel objects.
.. includecode:: code/jdocs/camel/CamelExtensionTest.java#CamelExtension
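A sketch of that access, assuming an ``ActorSystem`` named ``system``::

   import akka.camel.Camel;
   import akka.camel.CamelExtension;
   import org.apache.camel.CamelContext;
   import org.apache.camel.ProducerTemplate;

   Camel camel = CamelExtension.get(system);
   CamelContext camelContext = camel.context();
   ProducerTemplate producerTemplate = camel.template();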
The ``CamelExtension`` is loaded only once per ``ActorSystem``, which makes it safe to call the ``CamelExtension`` at any point in your code to get to the
Apache Camel objects associated with it. There is one `CamelContext`_ and one `ProducerTemplate`_ for every ``ActorSystem`` that uses a ``CamelExtension``.
By default, a new `CamelContext`_ is created when the ``CamelExtension`` starts. If you want to inject your own context instead,
you can implement the `ContextProvider`_ interface and add the FQCN of your implementation in the config, as the value of "akka.camel.context-provider".
This interface defines a single method ``getContext()`` used to load the `CamelContext`_.
.. _ContextProvider: @github@/akka-camel/src/main/scala/akka/camel/ContextProvider.scala
Below is an example of how to add the ActiveMQ component to the `CamelContext`_, which is required when you want to use the ActiveMQ component.
.. includecode:: code/jdocs/camel/CamelExtensionTest.java#CamelExtensionAddComponent
The `CamelContext`_ joins the lifecycle of the ``ActorSystem`` and ``CamelExtension`` it is associated with; the `CamelContext`_ is started when
the ``CamelExtension`` is created, and it is shut down when the associated ``ActorSystem`` is shut down. The same is true for the `ProducerTemplate`_.
The ``CamelExtension`` is used by both `Producer` and `Consumer` actors to interact with Apache Camel internally.
You can access the ``CamelExtension`` inside a `Producer` or a `Consumer` using the ``camel`` method, or get straight to the `CamelContext`
using the ``getCamelContext`` method or to the `ProducerTemplate` using the ``getProducerTemplate`` method.
Actors are created and started asynchronously. When a `Consumer` actor is created, the `Consumer` is published at its Camel endpoint
(more precisely, the route is added to the `CamelContext`_ from the `Endpoint`_ to the actor).
When a `Producer` actor is created, a `SendProcessor`_ and `Endpoint`_ are created so that the Producer can send messages to it.
Publication is done asynchronously; setting up an endpoint may still be in progress after you have
requested the actor to be created. Some Camel components can take a while to startup, and in some cases you might want to know when the endpoints are activated and ready to be used.
The `Camel`_ interface allows you to find out when the endpoint is activated or deactivated.
.. includecode:: code/jdocs/camel/ActivationTestBase.java#CamelActivation
The above code shows that you can get a ``Future`` to the activation of the route from the endpoint to the actor, or you can wait in a blocking fashion on the activation of the route.
An ``ActivationTimeoutException`` is thrown if the endpoint could not be activated within the specified timeout. Deactivation works in a similar fashion:
.. includecode:: code/jdocs/camel/ActivationTestBase.java#CamelDeactivation
Deactivation of a Consumer or a Producer actor happens when the actor is terminated. For a Consumer, the route to the actor is stopped. For a Producer, the `SendProcessor`_ is stopped.
A ``DeActivationTimeoutException`` is thrown if the associated Camel objects could not be deactivated within the specified timeout.
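A sketch of obtaining such a ``Future``, assuming the ``activationFutureFor`` method of the ``Camel`` interface and an actor reference ``actorRef`` (``deactivationFutureFor`` works analogously)::

   import akka.actor.ActorRef;
   import akka.camel.Camel;
   import akka.camel.CamelExtension;
   import akka.util.Timeout;
   import scala.concurrent.Future;
   import scala.concurrent.duration.Duration;
   import java.util.concurrent.TimeUnit;

   Camel camel = CamelExtension.get(system);
   Timeout timeout = new Timeout(Duration.create(10, TimeUnit.SECONDS));
   // Completed when the route from the endpoint to the actor is activated
   Future<ActorRef> activation =
     camel.activationFutureFor(actorRef, timeout, system.dispatcher());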
.. _Camel: @github@/akka-camel/src/main/scala/akka/camel/Camel.scala
.. _CamelContext: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java
.. _ProducerTemplate: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/ProducerTemplate.java
.. _SendProcessor: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/processor/SendProcessor.java
.. _Endpoint: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/Endpoint.java
Consumer Actors
================
For objects to receive messages, they must inherit from the `UntypedConsumerActor`_
class. For example, the following actor class (Consumer1) implements the
`getEndpointUri` method, which is declared in the `UntypedConsumerActor`_ class, in order to receive
messages from the ``file:data/input/actor`` Camel endpoint.
.. _UntypedConsumerActor: @github@/akka-camel/src/main/scala/akka/camel/javaapi/UntypedConsumer.scala
.. includecode:: code/jdocs/camel/Consumer1.java#Consumer1
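Since the include is not shown here, such a consumer can be sketched as follows (the body handling is illustrative)::

   import akka.camel.CamelMessage;
   import akka.camel.javaapi.UntypedConsumerActor;

   public class Consumer1 extends UntypedConsumerActor {
     public String getEndpointUri() {
       return "file:data/input/actor";
     }

     public void onReceive(Object message) {
       if (message instanceof CamelMessage) {
         CamelMessage camelMessage = (CamelMessage) message;
         String body = camelMessage.getBodyAs(String.class, getCamelContext());
         System.out.println(String.format("Received message: %s", body));
       } else {
         unhandled(message);
       }
     }
   }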
Whenever a file is put into the data/input/actor directory, its content is
picked up by the Camel `file component`_ and sent as a message to the
actor. Messages consumed by actors from Camel endpoints are of type
`CamelMessage`_. These are immutable representations of Camel messages.
.. _file component: http://camel.apache.org/file2.html
.. _CamelMessage: @github@/akka-camel/src/main/scala/akka/camel/CamelMessage.scala
Here's another example that sets the endpointUri to
``jetty:http://localhost:8877/camel/default``. It causes Camel's `Jetty
component`_ to start an embedded `Jetty`_ server, accepting HTTP connections
from localhost on port 8877.
.. _Jetty component: http://camel.apache.org/jetty.html
.. _Jetty: http://www.eclipse.org/jetty/
.. includecode:: code/jdocs/camel/Consumer2.java#Consumer2
After starting the actor, clients can send messages to that actor by POSTing to
``http://localhost:8877/camel/default``. The actor sends a response by using the
``getSender().tell`` method. For returning a message body and headers to the HTTP
client the response type should be `CamelMessage`_. For any other response type, a
new CamelMessage object is created by akka-camel with the actor response as message
body.
.. _camel-acknowledgements-java:
Delivery acknowledgements
-------------------------
With in-out message exchanges, clients usually know that a message exchange is
done when they receive a reply from a consumer actor. The reply message can be a
CamelMessage (or any object which is then internally converted to a CamelMessage) on
success, and a Failure message on failure.
With in-only message exchanges, by default, an exchange is done when a message
is added to the consumer actor's mailbox. Any failure or exception that occurs
during processing of that message by the consumer actor cannot be reported back
to the endpoint in this case. To allow consumer actors to positively or
negatively acknowledge the receipt of a message from an in-only message
exchange, they need to override the ``autoAck`` method to return false.
In this case, consumer actors must reply either with a
special ``akka.camel.Ack`` message (positive acknowledgement) or an ``akka.actor.Status.Failure`` (negative
acknowledgement).
.. includecode:: code/jdocs/camel/Consumer3.java#Consumer3
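A sketch of such a consumer, using the ``Ack`` object in ``akka.camel`` (the endpoint URI is illustrative)::

   import akka.camel.Ack;
   import akka.camel.CamelMessage;
   import akka.camel.javaapi.UntypedConsumerActor;

   public class Consumer3 extends UntypedConsumerActor {
     @Override
     public boolean autoAck() {
       return false;
     }

     public String getEndpointUri() {
       return "jms:queue:test";
     }

     public void onReceive(Object message) {
       if (message instanceof CamelMessage) {
         // positively acknowledge receipt of the in-only message
         getSender().tell(Ack.getInstance(), getSelf());
       } else {
         unhandled(message);
       }
     }
   }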
.. _camel-timeout-java:
Consumer timeout
----------------
Camel Exchanges (and their corresponding endpoints) that support two-way communications need to wait for a response from
an actor before returning it to the initiating client.
For some endpoint types, timeout values can be defined in an endpoint-specific
way which is described in the documentation of the individual `Camel
components`_. Another option is to configure timeouts on the level of consumer actors.
.. _Camel components: http://camel.apache.org/components.html
Two-way communications between a Camel endpoint and an actor are
initiated by sending the request message to the actor with the `ask`_ pattern
and the actor replies to the endpoint when the response is ready. The ask request to the actor can time out, which will
result in the `Exchange`_ failing with a ``TimeoutException`` set on the failure of the `Exchange`_.
The timeout on the consumer actor can be overridden with ``replyTimeout``, as shown below.
.. includecode:: code/jdocs/camel/Consumer4.java#Consumer4
.. _Exchange: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/Exchange.java
.. _ask: @github@/akka-actor/src/main/scala/akka/pattern/Patterns.scala
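A sketch of overriding ``replyTimeout`` (the 500 ms value is illustrative)::

   import akka.camel.CamelMessage;
   import akka.camel.javaapi.UntypedConsumerActor;
   import scala.concurrent.duration.Duration;
   import scala.concurrent.duration.FiniteDuration;
   import java.util.concurrent.TimeUnit;

   public class Consumer4 extends UntypedConsumerActor {
     @Override
     public FiniteDuration replyTimeout() {
       return Duration.create(500, TimeUnit.MILLISECONDS);
     }

     public String getEndpointUri() {
       return "jetty:http://localhost:8877/camel/default";
     }

     public void onReceive(Object message) {
       if (message instanceof CamelMessage) {
         getSender().tell("Hello", getSelf());
       } else {
         unhandled(message);
       }
     }
   }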
Producer Actors
===============
For sending messages to Camel endpoints, actors need to inherit from the `UntypedProducerActor`_ class and implement the getEndpointUri method.
.. includecode:: code/jdocs/camel/Producer1.java#Producer1
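As a sketch, such a producer is complete with just the endpoint URI::

   import akka.camel.javaapi.UntypedProducerActor;

   public class Producer1 extends UntypedProducerActor {
     public String getEndpointUri() {
       return "http://localhost:8080/news";
     }
   }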
Producer1 inherits a default implementation of the onReceive method from the
`UntypedProducerActor`_ class. To customize a producer actor's default behavior you must override the `UntypedProducerActor`_.onTransformResponse and
`UntypedProducerActor`_.onTransformOutgoingMessage methods. This is explained later in more detail.
Producer Actors cannot override the `UntypedProducerActor`_.onReceive method.
Any message sent to a Producer actor will be sent to
the associated Camel endpoint, in the above example to
``http://localhost:8080/news``. The `UntypedProducerActor`_ always sends messages asynchronously. Response messages (if supported by the
configured endpoint) will, by default, be returned to the original sender. The
following example uses the ask pattern to send a message to a
Producer actor and waits for a response.
.. includecode:: code/jdocs/camel/ProducerTestBase.java#AskProducer
The future contains the response ``CamelMessage``, or an ``AkkaCamelException``, which contains the headers of the response, when an error occurred.
.. _camel-custom-processing-java:
Custom Processing
-----------------
Instead of replying to the initial sender, producer actors can implement custom
response processing by overriding the onRouteResponse method. In the following example, the response
message is forwarded to a target actor instead of being replied to the original
sender.
.. includecode:: code/jdocs/camel/ResponseReceiver.java#RouteResponse
.. includecode:: code/jdocs/camel/Forwarder.java#RouteResponse
.. includecode:: code/jdocs/camel/OnRouteResponseTestBase.java#RouteResponse
Before producing messages to endpoints, producer actors can pre-process them by
overriding the `UntypedProducerActor`_.onTransformOutgoingMessage method.
.. includecode:: code/jdocs/camel/Transformer.java#TransformOutgoingMessage
Producer configuration options
------------------------------
The interaction of producer actors with Camel endpoints can be configured to be
one-way or two-way (by initiating in-only or in-out message exchanges,
respectively). By default, the producer initiates an in-out message exchange
with the endpoint. For initiating an in-only exchange, producer actors have to override the isOneway method to return true.
.. includecode:: code/jdocs/camel/OnewaySender.java#Oneway
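A sketch of such a one-way producer (the endpoint URI is illustrative)::

   import akka.camel.javaapi.UntypedProducerActor;

   public class OnewaySender extends UntypedProducerActor {
     public String getEndpointUri() {
       return "activemq:FOO.BAR";
     }

     @Override
     public boolean isOneway() {
       return true;
     }
   }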
Message correlation
-------------------
To correlate request with response messages, applications can set the
`Message.MessageExchangeId` message header.
.. includecode:: code/jdocs/camel/ProducerTestBase.java#Correlate
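A sketch of setting the header, assuming ``CamelMessage.MessageExchangeId()`` as the header name, a ``producer`` reference, and an illustrative correlation id::

   import java.util.HashMap;
   import java.util.Map;
   import akka.camel.CamelMessage;

   Map<String, Object> headers = new HashMap<String, Object>();
   // illustrative correlation id for matching request and response
   headers.put(CamelMessage.MessageExchangeId(), "123");
   producer.tell(new CamelMessage("bar", headers), getSelf());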
ProducerTemplate
----------------
The `UntypedProducerActor`_ class is a very convenient way for actors to produce messages to Camel endpoints.
Actors may also use a Camel `ProducerTemplate`_ for producing messages to endpoints.
.. includecode:: code/jdocs/camel/MyActor.java#ProducerTemplate
For initiating a two-way message exchange, one of the
``ProducerTemplate.request*`` methods must be used.
.. includecode:: code/jdocs/camel/RequestBodyActor.java#RequestProducerTemplate
.. _UntypedProducerActor: @github@/akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala
.. _ProducerTemplate: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/ProducerTemplate.java
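For illustration, a sketch of using a ``ProducerTemplate`` directly from inside an actor (the ``direct:news`` endpoint is illustrative)::

   import org.apache.camel.ProducerTemplate;
   import akka.camel.CamelExtension;

   ProducerTemplate template =
     CamelExtension.get(getContext().system()).template();

   // in-only: fire-and-forget to the endpoint
   template.sendBody("direct:news", "Hello");

   // in-out: request* methods return the response
   Object response = template.requestBody("direct:news", "Hello");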
.. _camel-asynchronous-routing-java:
Asynchronous routing
====================
In-out message exchanges between endpoints and actors are
designed to be asynchronous. This is the case for both consumer and producer
actors.
* A consumer endpoint sends request messages to its consumer actor using the ``tell``
method and the actor returns responses with ``getSender().tell`` once they are
ready.
* A producer actor sends request messages to its endpoint using Camel's
asynchronous routing engine. Asynchronous responses are wrapped and added to the
producer actor's mailbox for later processing. By default, response messages are
returned to the initial sender but this can be overridden by Producer
implementations (see also description of the ``onRouteResponse`` method
in :ref:`camel-custom-processing-java`).
However, asynchronous two-way message exchanges, without allocating a thread for
the full duration of exchange, cannot be generically supported by Camel's
asynchronous routing engine alone. This must be supported by the individual
`Camel components`_ (from which endpoints are created) as well. They must be
able to suspend any work started for request processing (thereby freeing threads
to do other work) and resume processing when the response is ready. This is
currently the case for a `subset of components`_ such as the `Jetty component`_.
All other Camel components can still be used, of course, but they will cause
allocation of a thread for the duration of an in-out message exchange. There's
also :ref:`camel-examples-java` that implements both an asynchronous
consumer and an asynchronous producer, with the jetty component.
If the used Camel component is blocking it might be necessary to use a separate
:ref:`dispatcher <dispatchers-java>` for the producer. The Camel processor is
invoked by a child actor of the producer and the dispatcher can be defined in
the deployment section of the configuration. For example, if your producer actor
has path ``/user/integration/output`` the dispatcher of the child actor can be
defined with::

   akka.actor.deployment {
     /integration/output/* {
       dispatcher = my-dispatcher
     }
   }
.. _Camel components: http://camel.apache.org/components.html
.. _subset of components: http://camel.apache.org/asynchronous-routing-engine.html
.. _Jetty component: http://camel.apache.org/jetty.html
Custom Camel routes
===================
In all the examples so far, routes to consumer actors have been automatically
constructed by akka-camel when the actor was started. Although the default
route construction templates, used by akka-camel internally, are sufficient for
most use cases, some applications may require more specialized routes to actors.
The akka-camel module provides two mechanisms for customizing routes to actors,
which will be explained in this section. These are:
* Usage of :ref:`camel-components-java` to access actors.
Any Camel route can use these components to access Akka actors.
* :ref:`camel-intercepting-route-construction-java` to actors.
This option gives you the ability to change routes that have already been added to Camel.
Consumer actors have a hook into the route definition process which can be used to change the route.
.. _camel-components-java:
Akka Camel components
---------------------
Akka actors can be accessed from Camel routes using the `actor`_ Camel component. This component can be used to
access any Akka actor (not only consumer actors) from Camel routes, as described in the following sections.
.. _actor: @github@/akka-camel/src/main/scala/akka/camel/internal/component/ActorComponent.scala
.. _access-to-actors-java:
Access to actors
----------------
To access actors from custom Camel routes, the `actor`_ Camel
component should be used. It fully supports Camel's `asynchronous routing
engine`_.
.. _actor: @github@/akka-camel/src/main/scala/akka/camel/internal/component/ActorComponent.scala
.. _asynchronous routing engine: http://camel.apache.org/asynchronous-routing-engine.html
This component accepts the following endpoint URI format:
* ``[<actor-path>]?<options>``
where ``<actor-path>`` is the ``ActorPath`` to the actor. The ``<options>`` are
name-value pairs separated by ``&`` (i.e. ``name1=value1&name2=value2&...``).
URI options
^^^^^^^^^^^
The following URI options are supported:
.. tabularcolumns:: |l|l|l|L|
+--------------+----------+---------+------------------------------------------------+
| Name | Type | Default | Description |
+==============+==========+=========+================================================+
| replyTimeout | Duration | false | The reply timeout, specified in the same |
| | | | way that you use the duration in akka, |
| | | | for instance ``10 seconds`` except that |
| | | | in the url it is handy to use a + |
| | | | between the amount and the unit, like |
| | | | for example ``200+millis`` |
| | | | |
| | | | See also :ref:`camel-timeout-java`. |
+--------------+----------+---------+------------------------------------------------+
| autoAck | Boolean | true | If set to true, in-only message exchanges |
| | | | are auto-acknowledged when the message is |
| | | | added to the actor's mailbox. If set to |
| | | | false, actors must acknowledge the |
| | | | receipt of the message. |
| | | | |
| | | | See also :ref:`camel-acknowledgements-java`. |
+--------------+----------+---------+------------------------------------------------+
Here's an actor endpoint URI example containing an actor path::

   akka://some-system/user/myconsumer?autoAck=false&replyTimeout=100+millis
In the following example, a custom route to an actor is created, using the
actor's path.
.. includecode:: code/jdocs/camel/Responder.java#CustomRoute
.. includecode:: code/jdocs/camel/CustomRouteBuilder.java#CustomRoute
.. includecode:: code/jdocs/camel/CustomRouteTestBase.java#CustomRoute
The `CamelPath.toCamelUri` converts the `ActorRef` to the Camel actor component URI format which points to the actor endpoint as described above.
When a message is received on the jetty endpoint, it is routed to the Responder actor, which in turn replies back to the client of
the HTTP request.
.. _camel-intercepting-route-construction-java:
Intercepting route construction
-------------------------------
The previous section, :ref:`camel-components-java`, explained how to set up a route to
an actor manually.
It was the application's responsibility to define the route and add it to the current CamelContext.
This section explains a more convenient way to define custom routes: akka-camel is still setting up the routes to consumer actors
(and adds these routes to the current CamelContext) but applications can define extensions to these routes.
Extensions can be defined with Camel's `Java DSL`_ or `Scala DSL`_. For example, an extension could be a custom error handler that redelivers messages from an endpoint to an actor's bounded mailbox when the mailbox was full.
.. _Java DSL: http://camel.apache.org/dsl.html
.. _Scala DSL: http://camel.apache.org/scala-dsl.html
The following examples demonstrate how to extend a route to a consumer actor for
handling exceptions thrown by that actor.
.. includecode:: code/jdocs/camel/ErrorThrowingConsumer.java#ErrorThrowingConsumer
The above ErrorThrowingConsumer sends the Failure back to the sender in preRestart
because the Exception that is thrown in the actor would
otherwise just crash the actor; by default the actor would be restarted, and the response would never reach the client of the Consumer.
The akka-camel module creates a RouteDefinition instance by calling
from(endpointUri) on a Camel RouteBuilder (where endpointUri is the endpoint URI
of the consumer actor) and passes that instance as argument to the route
definition handler \*). The route definition handler then extends the route and
returns a ProcessorDefinition (in the above example, the ProcessorDefinition
returned by the end method. See the `org.apache.camel.model`__ package for
details). After executing the route definition handler, akka-camel finally calls
to(targetActorUri) on the returned ProcessorDefinition to complete the
route to the consumer actor (where targetActorUri is the actor component URI as described in :ref:`access-to-actors-java`).
If the actor cannot be found, an `ActorNotRegisteredException` is thrown.
\*) Before passing the RouteDefinition instance to the route definition handler,
akka-camel may make some further modifications to it.
__ https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/model/
.. _camel-examples-java:
Examples
========
The sample named `Akka Camel Samples with Java <@exampleCodeService@/akka-samples-camel-java>`_ (`source code <@samples@/akka-sample-camel-java>`_)
contains 3 samples:
* Asynchronous routing and transformation - This example demonstrates how to implement consumer and
producer actors that support :ref:`camel-asynchronous-routing-java` with their Camel endpoints.
* Custom Camel route - Demonstrates the combined usage of a ``Producer`` and a
``Consumer`` actor as well as the inclusion of a custom Camel route.
* Quartz Scheduler Example - Shows how simple it is to implement a cron-style scheduler by
using the Camel Quartz component
Configuration
=============
There are several configuration properties for the Camel module, please refer
to the :ref:`reference configuration <config-akka-camel>`.
Additional Resources
====================
For an introduction to akka-camel 2, see also Peter Gabryanczyk's talk `Migrating akka-camel module to Akka 2.x`_.
For an introduction to akka-camel 1, see also the `Appendix E - Akka and Camel`_
(pdf) of the book `Camel in Action`_.
.. _Appendix E - Akka and Camel: http://www.manning.com/ibsen/appEsample.pdf
.. _Camel in Action: http://www.manning.com/ibsen/
.. _Migrating akka-camel module to Akka 2.x: http://skillsmatter.com/podcast/scala/akka-2-x
Other, more advanced external articles (for version 1) are:
* `Akka Consumer Actors: New Features and Best Practices <http://krasserm.blogspot.com/2011/02/akka-consumer-actors-new-features-and.html>`_
* `Akka Producer Actors: New Features and Best Practices <http://krasserm.blogspot.com/2011/02/akka-producer-actor-new-features-and.html>`_


@@ -1,192 +0,0 @@
.. _cluster-client-java:
Cluster Client
==============
An actor system that is not part of the cluster can communicate with actors
somewhere in the cluster via this ``ClusterClient``. The client can of course be part of
another cluster. It only needs to know the location of one (or more) nodes to use as initial
contact points. It will establish a connection to a ``ClusterReceptionist`` somewhere in
the cluster. It will monitor the connection to the receptionist and establish a new
connection if the link goes down. When looking for a new receptionist it uses fresh
contact points retrieved from previous establishment, or periodically refreshed contacts,
i.e. not necessarily the initial contact points.
.. note::
``ClusterClient`` should not be used when sending messages to actors that run
within the same cluster. Similar functionality as the ``ClusterClient`` is
provided in a more efficient way by :ref:`distributed-pub-sub-java` for actors that
belong to the same cluster.
Also, note it's necessary to change ``akka.actor.provider`` from ``local``
to ``remote`` or ``cluster`` when using
the cluster client.
The receptionist is supposed to be started on all nodes, or all nodes with specified role,
in the cluster. The receptionist can be started with the ``ClusterClientReceptionist`` extension
or as an ordinary actor.
You can send messages via the ``ClusterClient`` to any actor in the cluster that is registered
in the ``DistributedPubSubMediator`` used by the ``ClusterReceptionist``.
The ``ClusterClientReceptionist`` provides methods for registration of actors that
should be reachable from the client. Messages are wrapped in ``ClusterClient.Send``,
``ClusterClient.SendToAll`` or ``ClusterClient.Publish``.
Both the ``ClusterClient`` and the ``ClusterClientReceptionist`` emit events that can be subscribed to.
The ``ClusterClient`` sends out notifications in relation to having received a list of contact points
from the ``ClusterClientReceptionist``. One use of this list might be for the client to record its
contact points. A client that is restarted could then use this information to supersede any previously
configured contact points.
The ``ClusterClientReceptionist`` sends out notifications in relation to having received a contact
from a ``ClusterClient``. This notification enables the server containing the receptionist to become aware of
what clients are connected.
**1. ClusterClient.Send**
The message will be delivered to one recipient with a matching path, if any such
exists. If several entries match the path the message will be delivered
to one random destination. The ``sender`` of the message can specify that local
affinity is preferred, i.e. the message is sent to an actor in the same local actor
system as the used receptionist actor, if any such exists, otherwise random to any other
matching entry.
**2. ClusterClient.SendToAll**
The message will be delivered to all recipients with a matching path.
**3. ClusterClient.Publish**
The message will be delivered to all recipient actors that have been registered as subscribers
to the named topic.
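A sketch of the three message types, assuming an already started ``client`` actor (see the example below)::

   import akka.actor.ActorRef;
   import akka.cluster.client.ClusterClient;

   // deliver to one actor registered at /user/serviceA (local affinity preferred)
   client.tell(new ClusterClient.Send("/user/serviceA", "hello", true),
     ActorRef.noSender());

   // deliver to all actors registered at the path
   client.tell(new ClusterClient.SendToAll("/user/serviceB", "hi"),
     ActorRef.noSender());

   // deliver to all subscribers of the topic
   client.tell(new ClusterClient.Publish("my-topic", "hello topic"),
     ActorRef.noSender());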
Response messages from the destination actor are tunneled via the receptionist
to avoid inbound connections from other cluster nodes to the client, i.e.
the ``sender``, as seen by the destination actor, is not the client itself.
The ``sender`` of the response messages, as seen by the client, is ``deadLetters``
since the client should normally send subsequent messages via the ``ClusterClient``.
It is possible to pass the original sender inside the reply messages if
the client is supposed to communicate directly to the actor in the cluster.
While establishing a connection to a receptionist the ``ClusterClient`` will buffer
messages and send them when the connection is established. If the buffer is full
the ``ClusterClient`` will drop old messages when new messages are sent via the client.
The size of the buffer is configurable and it can be disabled by using a buffer size of 0.
It's worth noting that messages can always be lost because of the distributed nature
of these actors. As always, additional logic should be implemented in the destination
(acknowledgement) and in the client (retry) actors to ensure at-least-once message delivery.
An Example
----------
On the cluster nodes first start the receptionist. Note, it is recommended to load the extension
when the actor system is started by defining it in the ``akka.extensions`` configuration property::

   akka.extensions = ["akka.cluster.client.ClusterClientReceptionist"]
Next, register the actors that should be available for the client.
.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java#server
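As a sketch, registration amounts to one call per service actor (the ``Service`` class is an illustrative, application-specific actor)::

   import akka.actor.ActorRef;
   import akka.actor.Props;
   import akka.cluster.client.ClusterClientReceptionist;

   // Service is an application-specific actor class (illustrative)
   ActorRef serviceA = system.actorOf(Props.create(Service.class), "serviceA");
   ClusterClientReceptionist.get(system).registerService(serviceA);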
On the client you create the ``ClusterClient`` actor and use it as a gateway for sending
messages to the actors identified by their path (without address information) somewhere
in the cluster.
.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java#client
The ``initialContacts`` parameter is a ``Set<ActorPath>``, which can be created like this:
.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java#initialContacts
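Putting it together, a sketch of creating the contacts and the client (host names and ports are illustrative)::

   import java.util.Arrays;
   import java.util.HashSet;
   import java.util.Set;
   import akka.actor.ActorPath;
   import akka.actor.ActorPaths;
   import akka.actor.ActorRef;
   import akka.cluster.client.ClusterClient;
   import akka.cluster.client.ClusterClientSettings;

   Set<ActorPath> initialContacts = new HashSet<ActorPath>(Arrays.asList(
     ActorPaths.fromString("akka.tcp://OtherSys@host1:2552/system/receptionist"),
     ActorPaths.fromString("akka.tcp://OtherSys@host2:2552/system/receptionist")));

   ActorRef client = system.actorOf(
     ClusterClient.props(
       ClusterClientSettings.create(system).withInitialContacts(initialContacts)),
     "client");
   client.tell(new ClusterClient.Send("/user/serviceA", "hello", true),
     ActorRef.noSender());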
You will probably define the address information of the initial contact points in configuration or system property.
See also :ref:`cluster-client-config-java`.
A more comprehensive sample is available in the tutorial named `Distributed workers with Akka and Java! <https://github.com/typesafehub/activator-akka-distributed-workers-java>`_.
ClusterClientReceptionist Extension
-----------------------------------
In the example above the receptionist is started and accessed with the ``akka.cluster.client.ClusterClientReceptionist`` extension.
That is convenient and perfectly fine in most cases, but it can be good to know that it is possible to
start the ``akka.cluster.client.ClusterReceptionist`` actor as an ordinary actor and you can have several
different receptionists at the same time, serving different types of clients.
Note that the ``ClusterClientReceptionist`` uses the ``DistributedPubSub`` extension, which is described
in :ref:`distributed-pub-sub-java`.
It is recommended to load the extension when the actor system is started by defining it in the
``akka.extensions`` configuration property::

   akka.extensions = ["akka.cluster.client.ClusterClientReceptionist"]
Events
------
As mentioned earlier, both the ``ClusterClient`` and ``ClusterClientReceptionist`` emit events that can be subscribed to.
The following code snippet declares an actor that will receive notifications on contact points (addresses to the available
receptionists), as they become available. The code illustrates subscribing to the events and receiving the ``ClusterClient``
initial state.
.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java#clientEventsListener
Similarly we can have an actor that behaves in a similar fashion for learning what cluster clients contact a ``ClusterClientReceptionist``:
.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java#receptionistEventsListener
Dependencies
------------
To use the Cluster Client you must add the following dependency in your project.
sbt::

   "com.typesafe.akka" %% "akka-cluster-tools" % "@version@" @crossString@

maven::

   <dependency>
     <groupId>com.typesafe.akka</groupId>
     <artifactId>akka-cluster-tools_@binVersion@</artifactId>
     <version>@version@</version>
   </dependency>
.. _cluster-client-config-java:
Configuration
-------------
The ``ClusterClientReceptionist`` extension (or ``ClusterReceptionistSettings``) can be configured
with the following properties:
.. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#receptionist-ext-config
The following configuration properties are read by the ``ClusterClientSettings``
when created with an ``ActorSystem`` parameter. It is also possible to amend the ``ClusterClientSettings``
or create it from another config section with the same layout as below. ``ClusterClientSettings`` is
a parameter to the ``ClusterClient.props`` factory method, i.e. each client can be configured
with different settings if needed.
.. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#cluster-client-config
Failure handling
----------------
When the cluster client is started it must be provided with a list of initial contacts which are cluster
nodes where receptionists are running. It will then repeatedly (with an interval configurable
by ``establishing-get-contacts-interval``) try to contact those until it gets in contact with one of them.
While running, the list of contacts is continuously updated with data from the receptionists (again, with an
interval configurable with ``refresh-contacts-interval``), so that if there are more receptionists in the cluster
than the initial contacts provided to the client, the client will learn about them.
While the client is running it will detect failures in its connection to the receptionist by heartbeats;
if more than a configurable number of heartbeats are missed the client will try to reconnect to its known
set of contacts to find a receptionist it can access.
When the cluster cannot be reached at all
-----------------------------------------
It is possible to make the cluster client stop entirely if it cannot find a receptionist it can talk to
within a configurable interval. This is configured with the ``reconnect-timeout``, which defaults to ``off``.
This can be useful when initial contacts are provided from some kind of service registry, cluster node addresses
are entirely dynamic, and the entire cluster might shut down or crash and be restarted on new addresses. Since the
client will be stopped in that case, a monitoring actor can watch it and upon ``Terminated`` a new set of initial
contacts can be fetched and a new cluster client started.


@@ -1,197 +0,0 @@
.. _cluster_metrics_java:
Cluster Metrics Extension
=========================
Introduction
------------
The member nodes of the cluster can collect system health metrics and publish them to other cluster nodes
and to the registered subscribers on the system event bus with the help of the Cluster Metrics Extension.
Cluster metrics information is primarily used for load-balancing routers,
and can also be used to implement advanced metrics-based node life cycles,
such as "Node Let-it-crash" when CPU steal time becomes excessive.
Cluster Metrics Extension is a separate Akka module delivered in ``akka-cluster-metrics`` jar.
To enable usage of the extension you need to add the following dependency to your project::

   <dependency>
     <groupId>com.typesafe.akka</groupId>
     <artifactId>akka-cluster-metrics_@binVersion@</artifactId>
     <version>@version@</version>
   </dependency>

and add the following configuration stanza to your ``application.conf``::

   akka.extensions = [ "akka.cluster.metrics.ClusterMetricsExtension" ]
Cluster members with status :ref:`WeaklyUp <weakly_up_java>`
will participate in Cluster Metrics collection and dissemination.
Metrics Collector
-----------------
Metrics collection is delegated to an implementation of ``akka.cluster.metrics.MetricsCollector``.
Different collector implementations provide different subsets of metrics published to the cluster.
Certain message routing and let-it-crash functions may not work when Sigar is not provisioned.
Cluster metrics extension comes with two built-in collector implementations:
#. ``akka.cluster.metrics.SigarMetricsCollector``, which requires Sigar provisioning, and is more rich/precise
#. ``akka.cluster.metrics.JmxMetricsCollector``, which is used as fall back, and is less rich/precise
You can also plug-in your own metrics collector implementation.
By default, the metrics extension will use collector provider fall-back and will try to load them in this order:
#. configured user-provided collector
#. built-in ``akka.cluster.metrics.SigarMetricsCollector``
#. and finally ``akka.cluster.metrics.JmxMetricsCollector``
Metrics Events
--------------
The metrics extension periodically publishes a current snapshot of the cluster metrics to the node system event bus.
The publication interval is controlled by the ``akka.cluster.metrics.collector.sample-interval`` setting.
The payload of the ``akka.cluster.metrics.ClusterMetricsChanged`` event will contain
latest metrics of the node as well as other cluster member nodes metrics gossip
which was received during the collector sample interval.
You can subscribe your metrics listener actors to these events in order to implement custom node life cycle logic::

   ClusterMetricsExtension.get(system).subscribe(metricsListenerActor);
Hyperic Sigar Provisioning
--------------------------
Both user-provided and built-in metrics collectors can optionally use `Hyperic Sigar <http://www.hyperic.com/products/sigar>`_
for a wider and more accurate range of metrics compared to what can be retrieved from ordinary JMX MBeans.
Sigar uses a native OS library, and requires library provisioning, i.e.
deployment, extraction and loading of the OS native library into the JVM at runtime.
Users can provision Sigar classes and the native library in one of the following ways:
#. Use `Kamon sigar-loader <https://github.com/kamon-io/sigar-loader>`_ as a project dependency for the user project.
Metrics extension will extract and load sigar library on demand with help of Kamon sigar provisioner.
#. Use `Kamon sigar-loader <https://github.com/kamon-io/sigar-loader>`_ as java agent: ``java -javaagent:/path/to/sigar-loader.jar``.
Kamon sigar loader agent will extract and load sigar library during JVM start.
#. Place ``sigar.jar`` on the ``classpath`` and the Sigar native library for the OS on the ``java.library.path``.
The user is required to manage both the project dependency and the library deployment manually.
.. warning::
When using `Kamon sigar-loader <https://github.com/kamon-io/sigar-loader>`_ and running multiple
instances of the same application on the same host, you have to make sure that the Sigar library is extracted to a
unique per-instance directory. You can control the extract directory with the
``akka.cluster.metrics.native-library-extract-folder`` configuration setting.
To enable usage of Sigar you can add the following dependency to the user project::

   <dependency>
     <groupId>io.kamon</groupId>
     <artifactId>sigar-loader</artifactId>
     <version>@sigarLoaderVersion@</version>
   </dependency>

You can download Kamon sigar-loader from `Maven Central <http://search.maven.org/#search%7Cga%7C1%7Csigar-loader>`_.
Adaptive Load Balancing
-----------------------
The ``AdaptiveLoadBalancingPool`` / ``AdaptiveLoadBalancingGroup`` performs load balancing of messages to cluster nodes based on the cluster metrics data.
It uses random selection of routees with probabilities derived from the remaining capacity of the corresponding node.
It can be configured to use a specific MetricsSelector to produce the probabilities, a.k.a. weights:
* ``heap`` / ``HeapMetricsSelector`` - Used and max JVM heap memory. Weights based on remaining heap capacity; (max - used) / max
* ``load`` / ``SystemLoadAverageMetricsSelector`` - System load average for the past 1 minute, corresponding value can be found in ``top`` of Linux systems. The system is possibly nearing a bottleneck if the system load average is nearing number of cpus/cores. Weights based on remaining load capacity; 1 - (load / processors)
* ``cpu`` / ``CpuMetricsSelector`` - CPU utilization in percentage, sum of User + Sys + Nice + Wait. Weights based on remaining cpu capacity; 1 - utilization
* ``mix`` / ``MixMetricsSelector`` - Combines heap, cpu and load. Weights based on mean of remaining capacity of the combined selectors.
* Any custom implementation of ``akka.cluster.metrics.MetricsSelector``
The collected metrics values are smoothed with `exponential weighted moving average <http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average>`_. In the :ref:`cluster_configuration_java` you can adjust how quickly past data is decayed compared to new data.
Let's take a look at this router in action. What can be more demanding than calculating factorials?
The backend worker that performs the factorial calculation:
.. includecode:: code/jdocs/cluster/FactorialBackend.java#backend
The frontend that receives user jobs and delegates to the backends via the router:
.. includecode:: code/jdocs/cluster/FactorialFrontend.java#frontend
As you can see, the router is defined in the same way as other routers, and in this case it is configured as follows::

   akka.actor.deployment {
     /factorialFrontend/factorialBackendRouter = {
       # Router type provided by metrics extension.
       router = cluster-metrics-adaptive-group
       # Router parameter specific for metrics extension.
       # metrics-selector = heap
       # metrics-selector = load
       # metrics-selector = cpu
       metrics-selector = mix
       #
       routees.paths = ["/user/factorialBackend"]
       cluster {
         enabled = on
         use-role = backend
         allow-local-routees = off
       }
     }
   }
Only the ``router`` type and the ``metrics-selector`` parameter are specific to this router;
other things work in the same way as with other routers.
The same type of router could also have been defined in code:
.. includecode:: code/jdocs/cluster/FactorialFrontend.java#router-lookup-in-code
.. includecode:: code/jdocs/cluster/FactorialFrontend.java#router-deploy-in-code
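Since the includes are not shown here, a sketch of the lookup (group) variant, reusing the sample's ``/user/factorialBackend`` routees (written as it would appear inside an actor such as the frontend)::

   import java.util.Collections;
   import akka.actor.ActorRef;
   import akka.cluster.metrics.AdaptiveLoadBalancingGroup;
   import akka.cluster.metrics.HeapMetricsSelector;
   import akka.cluster.routing.ClusterRouterGroup;
   import akka.cluster.routing.ClusterRouterGroupSettings;

   int totalInstances = 100;
   Iterable<String> routeesPaths =
     Collections.singletonList("/user/factorialBackend");
   boolean allowLocalRoutees = false;
   String useRole = "backend";
   // creates a router that weights routees by remaining heap capacity
   ActorRef backend = getContext().actorOf(
     new ClusterRouterGroup(
       new AdaptiveLoadBalancingGroup(
         HeapMetricsSelector.getInstance(), Collections.<String>emptyList()),
       new ClusterRouterGroupSettings(
         totalInstances, routeesPaths, allowLocalRoutees, useRole)).props(),
     "factorialBackendRouter2");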
The easiest way to run the **Adaptive Load Balancing** example yourself is to download the ready-to-run
`Akka Cluster Sample with Java <@exampleCodeService@/akka-samples-cluster-java>`_
together with the tutorial. It contains instructions on how to run the **Adaptive Load Balancing** sample.
The source code of this sample can be found in the `Akka Samples Repository <@samples@/akka-sample-cluster-java>`_.
Subscribe to Metrics Events
---------------------------
It is possible to subscribe to the metrics events directly to implement other functionality.
.. includecode:: code/jdocs/cluster/MetricsListener.java#metrics-listener
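Since the include is not shown here, a sketch of such a listener (Akka 2.5 ``AbstractActor`` style)::

   import akka.actor.AbstractActor;
   import akka.cluster.Cluster;
   import akka.cluster.metrics.ClusterMetricsChanged;
   import akka.cluster.metrics.ClusterMetricsExtension;
   import akka.cluster.metrics.NodeMetrics;

   public class MetricsListener extends AbstractActor {
     Cluster cluster = Cluster.get(getContext().getSystem());
     ClusterMetricsExtension extension =
       ClusterMetricsExtension.get(getContext().getSystem());

     // Subscribe to ClusterMetricsChanged events
     @Override
     public void preStart() {
       extension.subscribe(getSelf());
     }

     // Unsubscribe when the actor stops
     @Override
     public void postStop() {
       extension.unsubscribe(getSelf());
     }

     @Override
     public Receive createReceive() {
       return receiveBuilder()
         .match(ClusterMetricsChanged.class, clusterMetrics -> {
           for (NodeMetrics nodeMetrics : clusterMetrics.getNodeMetrics()) {
             if (nodeMetrics.address().equals(cluster.selfAddress())) {
               // inspect this node's metrics, e.g. heap and system load
             }
           }
         })
         .build();
     }
   }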
Custom Metrics Collector
------------------------
Metrics collection is delegated to the implementation of ``akka.cluster.metrics.MetricsCollector``.
You can plug in your own metrics collector instead of the built-in
``akka.cluster.metrics.SigarMetricsCollector`` or ``akka.cluster.metrics.JmxMetricsCollector``.
Look at those two implementations for inspiration.
A custom metrics collector implementation class must be specified in the
``akka.cluster.metrics.collector.provider`` configuration property.
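A sketch of a custom collector that decorates the built-in JMX collector; the ``ActorSystem`` constructor parameter is assumed to match what the provider expects::

   import akka.actor.ActorSystem;
   import akka.cluster.metrics.JmxMetricsCollector;
   import akka.cluster.metrics.NodeMetrics;

   public class MyMetricsCollector extends JmxMetricsCollector {
     public MyMetricsCollector(ActorSystem system) {
       super(system);
     }

     @Override
     public NodeMetrics sample() {
       NodeMetrics metrics = super.sample();
       // enrich or filter the sampled metrics here
       return metrics;
     }
   }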
Configuration
-------------
The Cluster metrics extension can be configured with the following properties:
.. includecode:: ../../../akka-cluster-metrics/src/main/resources/reference.conf


@@ -1,422 +0,0 @@
.. _cluster_sharding_java:
Cluster Sharding
================
Cluster sharding is useful when you need to distribute actors across several nodes in the cluster and want to
be able to interact with them using their logical identifier, but without having to care about
their physical location in the cluster, which might also change over time.
It could for example be actors representing Aggregate Roots in Domain-Driven Design terminology.
Here we call these actors "entities". These actors typically have persistent (durable) state,
but this feature is not limited to actors with persistent state.
Cluster sharding is typically used when you have many stateful actors that together consume
more resources (e.g. memory) than fit on one machine. If you only have a few stateful actors
it might be easier to run them on a :ref:`cluster-singleton-java` node.
In this context sharding means that actors with an identifier, so-called entities,
can be automatically distributed across multiple nodes in the cluster. Each entity
actor runs only at one place, and messages can be sent to the entity without requiring
the sender to know the location of the destination actor. This is achieved by sending
the messages via a ``ShardRegion`` actor provided by this extension, which knows how
to route the message with the entity id to the final destination.
Cluster sharding will not be active on members with status :ref:`WeaklyUp <weakly_up_java>`
if that feature is enabled.
.. warning::
**Don't use Cluster Sharding together with Automatic Downing**,
since it allows the cluster to split up into two separate clusters, which in turn will result
in *multiple shards and entities* being started, one in each separate cluster!
See :ref:`automatic-vs-manual-downing-java`.
An Example
----------
This is how an entity actor may look:
.. includecode:: ../../../akka-cluster-sharding/src/test/java/akka/cluster/sharding/ClusterShardingTest.java#counter-actor
The above actor uses event sourcing and the support provided in ``AbstractPersistentActor`` to store its state.
It does not have to be a persistent actor, but in case of failure or migration of entities between nodes it must be able to recover
its state if it is valuable.
Note how the ``persistenceId`` is defined. The name of the actor is the entity identifier (utf-8 URL-encoded).
You may define it another way, but it must be unique.
When using the sharding extension you are supposed to register the supported entity types with
the ``ClusterSharding.start`` method, typically at system startup on each node in the cluster.
``ClusterSharding.start`` gives you the reference which you can pass along.
.. includecode:: ../../../akka-cluster-sharding/src/test/java/akka/cluster/sharding/ClusterShardingTest.java#counter-start
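A sketch of such a registration, assuming the ``Counter`` entity and the ``messageExtractor`` described below::

   import akka.actor.ActorRef;
   import akka.actor.Props;
   import akka.cluster.sharding.ClusterSharding;
   import akka.cluster.sharding.ClusterShardingSettings;

   ClusterShardingSettings settings = ClusterShardingSettings.create(system);
   ActorRef startedCounterRegion = ClusterSharding.get(system).start(
     "Counter",                    // type name of the entity
     Props.create(Counter.class),  // props used to create entity actors
     settings,
     messageExtractor);            // application specific extractor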
The ``messageExtractor`` defines application specific methods to extract the entity
identifier and the shard identifier from incoming messages.
.. includecode:: ../../../akka-cluster-sharding/src/test/java/akka/cluster/sharding/ClusterShardingTest.java#counter-extractor
This example illustrates two different ways to define the entity identifier in the messages:
* The ``Get`` message includes the identifier itself.
* The ``EntityEnvelope`` holds the identifier, and the actual message that is
sent to the entity actor is wrapped in the envelope.
Note how these two message types are handled in the ``entityId`` and ``entityMessage`` methods shown above.
The message sent to the entity actor is what ``entityMessage`` returns and that makes it possible to unwrap envelopes
if needed.
A shard is a group of entities that will be managed together. The grouping is defined by the
``extractShardId`` function shown above. For a specific entity identifier the shard identifier must always
be the same. Otherwise the entity actor might accidentally be started in several places at the same time.
Creating a good sharding algorithm is an interesting challenge in itself. Try to produce a uniform distribution,
i.e. same amount of entities in each shard. As a rule of thumb, the number of shards should be a factor ten greater
than the planned maximum number of cluster nodes. Fewer shards than the number of nodes will result in some
nodes not hosting any shards. Too many shards will result in less efficient management of the shards, e.g. rebalancing
overhead, and increased latency because the coordinator is involved in the routing of the first message for each
shard. The sharding algorithm must be the same on all nodes in a running cluster. It can be changed after stopping
all nodes in the cluster.
A simple sharding algorithm that works fine in most cases is to take the absolute value of the ``hashCode`` of
the entity identifier modulo number of shards. As a convenience this is provided by the
``ShardRegion.HashCodeMessageExtractor``.
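A sketch of such an extractor for the ``Counter`` messages above (the ``id``, ``counterId`` and ``payload`` fields are assumed from the sample)::

   import akka.cluster.sharding.ShardRegion;

   ShardRegion.MessageExtractor messageExtractor =
     new ShardRegion.HashCodeMessageExtractor(100) {
       @Override
       public String entityId(Object message) {
         if (message instanceof Counter.EntityEnvelope)
           return String.valueOf(((Counter.EntityEnvelope) message).id);
         else if (message instanceof Counter.Get)
           return String.valueOf(((Counter.Get) message).counterId);
         else
           return null;
       }

       @Override
       public Object entityMessage(Object message) {
         if (message instanceof Counter.EntityEnvelope)
           // unwrap the envelope before delivering to the entity
           return ((Counter.EntityEnvelope) message).payload;
         else
           return message;
       }
     };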
Messages to the entities are always sent via the local ``ShardRegion``. The ``ShardRegion`` actor reference for a
named entity type is returned by ``ClusterSharding.start`` and it can also be retrieved with ``ClusterSharding.shardRegion``.
The ``ShardRegion`` will look up the location of the shard for the entity if it does not already know its location. It will
delegate the message to the right node and it will create the entity actor on demand, i.e. when the
first message for a specific entity is delivered.
.. includecode:: ../../../akka-cluster-sharding/src/test/java/akka/cluster/sharding/ClusterShardingTest.java#counter-usage
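A sketch of sending messages via the region, assuming the ``Counter`` message types from the sample::

   import akka.actor.ActorRef;
   import akka.cluster.sharding.ClusterSharding;

   ActorRef counterRegion = ClusterSharding.get(system).shardRegion("Counter");
   counterRegion.tell(new Counter.Get(123), getSelf());
   counterRegion.tell(
     new Counter.EntityEnvelope(123, Counter.CounterOp.INCREMENT), getSelf());
   counterRegion.tell(new Counter.Get(123), getSelf());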
How it works
------------
The ``ShardRegion`` actor is started on each node in the cluster, or group of nodes
tagged with a specific role. The ``ShardRegion`` is created with two application specific
functions to extract the entity identifier and the shard identifier from incoming messages.
A shard is a group of entities that will be managed together. For the first message in a
specific shard the ``ShardRegion`` requests the location of the shard from a central coordinator,
the ``ShardCoordinator``.
The ``ShardCoordinator`` decides which ``ShardRegion`` shall own the ``Shard`` and informs
that ``ShardRegion``. The region will confirm this request and create the ``Shard`` supervisor
as a child actor. The individual ``Entities`` will then be created when needed by the ``Shard``
actor. Incoming messages thus travel via the ``ShardRegion`` and the ``Shard`` to the target
``Entity``.
If the shard home is another ``ShardRegion`` instance messages will be forwarded
to that ``ShardRegion`` instance instead. While resolving the location of a
shard incoming messages for that shard are buffered and later delivered when the
shard home is known. Subsequent messages to the resolved shard can be delivered
to the target destination immediately without involving the ``ShardCoordinator``.
Scenario 1:
#. Incoming message M1 to ``ShardRegion`` instance R1.
#. M1 is mapped to shard S1. R1 doesn't know about S1, so it asks the coordinator C for the location of S1.
#. C answers that the home of S1 is R1.
#. R1 creates a child actor for the entity E1 and sends buffered messages for S1 to the E1 child.
#. All incoming messages for S1 which arrive at R1 can be handled by R1 without C. It creates entity children as needed, and forwards messages to them.
Scenario 2:
#. Incoming message M2 to R1.
#. M2 is mapped to S2. R1 doesn't know about S2, so it asks C for the location of S2.
#. C answers that the home of S2 is R2.
#. R1 sends buffered messages for S2 to R2.
#. All incoming messages for S2 which arrive at R1 can be handled by R1 without C. It forwards messages to R2.
#. R2 receives a message for S2, asks C, which answers that the home of S2 is R2, and we are in Scenario 1 (but for R2).
To make sure that at most one instance of a specific entity actor is running somewhere
in the cluster it is important that all nodes have the same view of where the shards
are located. Therefore the shard allocation decisions are taken by the central
``ShardCoordinator``, which is running as a cluster singleton, i.e. one instance on
the oldest member among all cluster nodes or a group of nodes tagged with a specific
role.
The logic that decides where a shard is to be located is defined in a pluggable shard
allocation strategy. The default implementation ``ShardCoordinator.LeastShardAllocationStrategy``
allocates new shards to the ``ShardRegion`` with the least number of previously allocated shards.
This strategy can be replaced by an application specific implementation.
To be able to use newly added members in the cluster the coordinator facilitates rebalancing
of shards, i.e. migrating entities from one node to another. In the rebalance process the
coordinator first notifies all ``ShardRegion`` actors that a handoff for a shard has started.
That means they will start buffering incoming messages for that shard, in the same way as if the
shard location is unknown. During the rebalance process the coordinator will not answer any
requests for the location of shards that are being rebalanced, i.e. local buffering will
continue until the handoff is completed. The ``ShardRegion`` responsible for the rebalanced shard
will stop all entities in that shard by sending the specified ``handOffStopMessage``
(default ``PoisonPill``) to them. When all entities have been terminated the ``ShardRegion``
owning the entities will acknowledge the handoff as completed to the coordinator.
Thereafter the coordinator will reply to requests for the location of
the shard, thereby allocating a new home for the shard, and then the buffered messages in the
``ShardRegion`` actors are delivered to the new location. This means that the state of the entities
is not transferred or migrated. If the state of the entities is of importance it should be
persistent (durable), e.g. with :ref:`persistence-java`, so that it can be recovered at the new
location.
The logic that decides which shards to rebalance is defined in a pluggable shard
allocation strategy. The default implementation ``ShardCoordinator.LeastShardAllocationStrategy``
picks shards for handoff from the ``ShardRegion`` with the largest number of previously allocated shards.
They will then be allocated to the ``ShardRegion`` with the least number of previously allocated shards,
i.e. new members in the cluster. There is a configurable threshold of how large the difference
must be to begin the rebalancing. This strategy can be replaced by an application specific
implementation.
The state of shard locations in the ``ShardCoordinator`` is persistent (durable) with
:ref:`distributed_data_java` or :ref:`persistence-java` to survive failures. When a crashed or
unreachable coordinator node has been removed (via down) from the cluster a new ``ShardCoordinator`` singleton
actor will take over and the state is recovered. During such a failure period shards
with known location are still available, while messages for new (unknown) shards
are buffered until the new ``ShardCoordinator`` becomes available.
As long as a sender uses the same ``ShardRegion`` actor to deliver messages to an entity
actor the order of the messages is preserved. As long as the buffer limit is not reached,
messages are delivered on a best effort basis, with at-most-once delivery semantics,
in the same way as ordinary message sending. Reliable end-to-end messaging, with
at-least-once semantics, can be added by using ``AtLeastOnceDelivery`` in :ref:`persistence-java`.
Some additional latency is introduced for messages targeted to new or previously
unused shards due to the round-trip to the coordinator. Rebalancing of shards may
also add latency. This should be considered when designing the application specific
shard resolution, e.g. to avoid too fine grained shards.
.. _cluster_sharding_mode_java:
Distributed Data vs. Persistence Mode
-------------------------------------
The state of the coordinator and the state of :ref:`cluster_sharding_remembering_java` of the shards
are persistent (durable) to survive failures. :ref:`distributed_data_java` or :ref:`persistence-java`
can be used for the storage. Distributed Data is used by default.
The functionality when using the two modes is the same. If your sharded entities are not using Akka Persistence
themselves it is more convenient to use the Distributed Data mode, since then you don't have to
set up and operate a separate data store (e.g. Cassandra) for persistence. Aside from that, there are
no major reasons for using one mode over the other.
It's important to use the same mode on all nodes in the cluster, i.e. it's not possible to perform
a rolling upgrade to change this setting.
Distributed Data Mode
^^^^^^^^^^^^^^^^^^^^^
This mode is enabled with configuration (enabled by default)::
akka.cluster.sharding.state-store-mode = ddata
The state of the ``ShardCoordinator`` will be replicated inside a cluster by the
:ref:`distributed_data_java` module with ``WriteMajority``/``ReadMajority`` consistency.
The state of the coordinator is not durable; it is not stored to disk. When all nodes in
the cluster have been stopped the state is lost and is not needed any more.
The state of :ref:`cluster_sharding_remembering_java`, in contrast, is durable, i.e. it is stored to
disk. The stored entities are started again after a complete cluster restart.
Cluster Sharding is using its own Distributed Data ``Replicator`` per node role. In this way you can use a subset of
all nodes for some entity types and another subset for other entity types. Each such replicator has a name
that contains the node role and therefore the role configuration must be the same on all nodes in the
cluster, i.e. you can't change the roles when performing a rolling upgrade.
The settings for Distributed Data are configured in the section
``akka.cluster.sharding.distributed-data``. It's not possible to have different
``distributed-data`` settings for different sharding entity types.
Persistence Mode
^^^^^^^^^^^^^^^^
This mode is enabled with configuration::
akka.cluster.sharding.state-store-mode = persistence
Since it is running in a cluster :ref:`persistence-java` must be configured with a distributed journal.
Startup after minimum number of members
---------------------------------------
It's good to use Cluster Sharding with the Cluster setting ``akka.cluster.min-nr-of-members`` or
``akka.cluster.role.<role-name>.min-nr-of-members``. That will defer the allocation of the shards
until at least that number of regions have been started and registered to the coordinator. This
avoids the situation where many shards are allocated to the first region that registers, only to be
rebalanced to other nodes later.
See :ref:`min-members_java` for more information about ``min-nr-of-members``.
Proxy Only Mode
---------------
The ``ShardRegion`` actor can also be started in proxy only mode, i.e. it will not
host any entities itself, but knows how to delegate messages to the right location.
A ``ShardRegion`` is started in proxy only mode with the ``ClusterSharding.startProxy``
method.
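For illustration, a hedged sketch of starting such a proxy, reusing the entity type name and
message extractor from the earlier examples::

  import java.util.Optional;
  import akka.actor.ActorRef;
  import akka.cluster.sharding.ClusterSharding;

  // hosts no entities itself; only forwards messages to the nodes
  // (optionally restricted by role) that actually host the shards
  ActorRef counterProxy = ClusterSharding.get(system).startProxy(
    "Counter",               // same type name as used in ClusterSharding.start
    Optional.empty(),        // or Optional.of("some-role") to restrict lookup
    messageExtractor);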
Passivation
-----------
If the state of the entities is persistent you may stop entities that are not in use in order to
reduce memory consumption. This is done by the application specific implementation of
the entity actors, for example by defining a receive timeout (``context.setReceiveTimeout``).
If a message is already enqueued to the entity when it stops itself the enqueued message
in the mailbox will be dropped. To support graceful passivation without losing such
messages the entity actor can send ``ShardRegion.Passivate`` to its parent ``Shard``.
The specified wrapped message in ``Passivate`` will be sent back to the entity, which is
then supposed to stop itself. Incoming messages will be buffered by the ``Shard``
between reception of ``Passivate`` and termination of the entity. Such buffered messages
are thereafter delivered to a new incarnation of the entity.
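For illustration, a minimal sketch of an entity that passivates itself after an idle timeout
(the ``Counter`` name and the timeout value are illustrative)::

  import java.util.concurrent.TimeUnit;
  import akka.actor.AbstractActor;
  import akka.actor.PoisonPill;
  import akka.actor.ReceiveTimeout;
  import akka.cluster.sharding.ShardRegion;
  import scala.concurrent.duration.Duration;

  public class Counter extends AbstractActor {

    @Override
    public void preStart() {
      // consider the entity idle after 120 seconds without messages
      getContext().setReceiveTimeout(Duration.create(120, TimeUnit.SECONDS));
    }

    @Override
    public Receive createReceive() {
      return receiveBuilder()
        .match(ReceiveTimeout.class, timeout ->
          // ask the parent Shard to passivate; the Shard buffers incoming
          // messages until the PoisonPill has stopped this entity
          getContext().getParent().tell(
            new ShardRegion.Passivate(PoisonPill.getInstance()), getSelf()))
        .matchAny(msg -> {
          // ... handle entity messages here ...
        })
        .build();
    }
  }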
.. _cluster_sharding_remembering_java:
Remembering Entities
--------------------
The list of entities in each ``Shard`` can be made persistent (durable) by setting
the ``rememberEntities`` flag to true in ``ClusterShardingSettings`` when calling
``ClusterSharding.start``. When configured to remember entities, whenever a ``Shard``
is rebalanced onto another node or recovers after a crash it will recreate all the
entities which were previously running in that ``Shard``. To permanently stop entities,
a ``Passivate`` message must be sent to the parent of the entity actor, otherwise the
entity will be automatically restarted after the entity restart backoff specified in
the configuration.
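As a sketch, enabling the flag when starting sharding, with the entity ``Props`` and message
extractor assumed from the earlier examples, might look like this::

  import akka.actor.ActorRef;
  import akka.actor.Props;
  import akka.cluster.sharding.ClusterSharding;
  import akka.cluster.sharding.ClusterShardingSettings;

  ClusterShardingSettings settings =
    ClusterShardingSettings.create(system).withRememberEntities(true);

  ActorRef counterRegion = ClusterSharding.get(system).start(
    "Counter",                    // entity type name
    Props.create(Counter.class),  // entity actor Props
    settings,
    messageExtractor);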
When :ref:`Distributed Data mode <cluster_sharding_mode_java>` is used the identifiers of the entities are
stored in :ref:`ddata_durable_java` of Distributed Data. You may want to change the
configuration of ``akka.cluster.sharding.distributed-data.durable.lmdb.dir``, since
the default directory contains the remote port of the actor system. If using a dynamically
assigned port (0) it will be different each time and the previously stored data will not
be loaded.
When ``rememberEntities`` is set to false, a ``Shard`` will not automatically restart any entities
after a rebalance or recovering from a crash. Entities will only be started once the first message
for that entity has been received in the ``Shard``. Entities will not be restarted if they stop without
using a ``Passivate``.
Note that the state of the entities themselves will not be restored unless they have been made persistent,
e.g. with :ref:`persistence-java`.
The performance cost of ``rememberEntities`` is rather high when starting/stopping entities and when
shards are rebalanced. This cost increases with number of entities per shard and we currently don't
recommend using it with more than 10000 entities per shard.
Supervision
-----------
If you need to use another ``supervisorStrategy`` for the entity actors than the default (restarting) strategy,
you need to create an intermediate parent actor that defines the ``supervisorStrategy`` for the
child entity actor.
.. includecode:: ../../../akka-cluster-sharding/src/test/java/akka/cluster/sharding/ClusterShardingTest.java#supervisor
You start such a supervisor in the same way as if it was the entity actor.
.. includecode:: ../../../akka-cluster-sharding/src/test/java/akka/cluster/sharding/ClusterShardingTest.java#counter-supervisor-start
Note that stopped entities will be started again when a new message is targeted to the entity.
Graceful Shutdown
-----------------
You can send the ``ShardRegion.gracefulShutdownInstance`` message
to the ``ShardRegion`` actor to hand off all shards that are hosted by that ``ShardRegion``, after which the
``ShardRegion`` actor will be stopped. You can ``watch`` the ``ShardRegion`` actor to know when it is completed.
During this period other regions will buffer messages for those shards in the same way as when a rebalance is
triggered by the coordinator. When the shards have been stopped the coordinator will allocate these shards elsewhere.
This is performed automatically by the :ref:`coordinated-shutdown-java` and is therefore part of the
graceful leaving process of a cluster member.
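If you want to trigger the handoff manually instead, a minimal sketch of a watcher actor
(the ``ShutdownWatcher`` name is hypothetical) might look like this::

  import akka.actor.AbstractActor;
  import akka.actor.ActorRef;
  import akka.actor.Terminated;
  import akka.cluster.sharding.ShardRegion;

  public class ShutdownWatcher extends AbstractActor {
    private final ActorRef shardRegion;

    public ShutdownWatcher(ActorRef shardRegion) {
      this.shardRegion = shardRegion;
    }

    @Override
    public void preStart() {
      getContext().watch(shardRegion);
      // ask the region to hand off all of its shards and then stop itself
      shardRegion.tell(ShardRegion.gracefulShutdownInstance(), getSelf());
    }

    @Override
    public Receive createReceive() {
      return receiveBuilder()
        .match(Terminated.class, t -> {
          // the region has stopped, i.e. the handoff is complete
          getContext().stop(getSelf());
        })
        .build();
    }
  }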
.. _RemoveInternalClusterShardingData-java:
Removal of Internal Cluster Sharding Data
-----------------------------------------
The Cluster Sharding coordinator stores the locations of the shards using Akka Persistence.
This data can safely be removed when restarting the whole Akka Cluster.
Note that this is not application data.
There is a utility program ``akka.cluster.sharding.RemoveInternalClusterShardingData``
that removes this data.
.. warning::
Never use this program while there are running Akka Cluster nodes that are
using Cluster Sharding. Stop all Cluster nodes before using this program.
It can be needed to remove the data if the Cluster Sharding coordinator
cannot start up because of corrupt data, which may happen if two clusters were
accidentally running at the same time, e.g. caused by using auto-down
during a network partition.
.. warning::
**Don't use Cluster Sharding together with Automatic Downing**,
since it allows the cluster to split up into two separate clusters, which in turn will result
in *multiple shards and entities* being started, one in each separate cluster!
See :ref:`automatic-vs-manual-downing-java`.
Use this program as a standalone Java main program::
java -classpath <jar files, including akka-cluster-sharding>
akka.cluster.sharding.RemoveInternalClusterShardingData
-2.3 entityType1 entityType2 entityType3
The program is included in the ``akka-cluster-sharding`` jar file. It
is easiest to run it with the same classpath and configuration as your ordinary
application. It can be run from sbt or maven in a similar way.
Specify the entity type names (same as you use in the ``start`` method
of ``ClusterSharding``) as program arguments.
If you specify ``-2.3`` as the first program argument it will also try
to remove data that was stored by Cluster Sharding in Akka 2.3.x using
a different persistenceId.
Dependencies
------------
To use Cluster Sharding you must add the following dependency to your project.
sbt::
"com.typesafe.akka" %% "akka-cluster-sharding" % "@version@" @crossString@
maven::
<dependency>
<groupId>com.typesafe.akka</groupId>
<artifactId>akka-cluster-sharding_@binVersion@</artifactId>
<version>@version@</version>
</dependency>
Configuration
-------------
The ``ClusterSharding`` extension can be configured with the following properties. These configuration
properties are read by the ``ClusterShardingSettings`` when created with an ``ActorSystem`` parameter.
It is also possible to amend the ``ClusterShardingSettings`` or create it from another config section
with the same layout as below. ``ClusterShardingSettings`` is a parameter to the ``start`` method of
the ``ClusterSharding`` extension, i.e. each entity type can be configured with different settings
if needed.
.. includecode:: ../../../akka-cluster-sharding/src/main/resources/reference.conf#sharding-ext-config
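As an illustration, a settings instance could be created from a custom config section
(the ``my-sharding`` name is hypothetical) roughly like this::

  import akka.cluster.sharding.ClusterShardingSettings;

  // the custom section must have the same layout as
  // akka.cluster.sharding in the reference configuration
  ClusterShardingSettings settings =
    ClusterShardingSettings.create(
      system.settings().config().getConfig("my-sharding"));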
A custom shard allocation strategy can be defined in an optional parameter to
``ClusterSharding.start``. See the API documentation of ``AbstractShardAllocationStrategy`` for details
of how to implement a custom shard allocation strategy.
Inspecting cluster sharding state
---------------------------------
Two requests to inspect the cluster state are available:
``ShardRegion.getShardRegionStateInstance`` which will return a ``ShardRegion.ShardRegionState`` that contains
the identifiers of the shards running in a Region and what entities are alive for each of them.
``ShardRegion.GetClusterShardingStats`` which will query all the regions in the cluster and return
a ``ShardRegion.ClusterShardingStats`` containing the identifiers of the shards running in each region and a count
of entities that are alive in each shard.
The purpose of these messages is testing and monitoring; they are not provided as a way to
send messages directly to the individual entities.
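For illustration, a hedged sketch of querying the local region with the ``ask`` pattern,
assuming ``shardRegion`` as returned by ``ClusterSharding.start``::

  import java.util.concurrent.TimeUnit;
  import akka.cluster.sharding.ShardRegion;
  import akka.pattern.Patterns;
  import akka.util.Timeout;
  import scala.concurrent.Future;
  import scala.concurrent.duration.Duration;

  Timeout timeout = new Timeout(Duration.create(5, TimeUnit.SECONDS));
  // the reply contains the shard identifiers of this region and the
  // identifiers of the entities that are alive in each shard
  Future<Object> state =
    Patterns.ask(shardRegion, ShardRegion.getShardRegionStateInstance(), timeout);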
View file
@ -1,142 +0,0 @@
.. _cluster-singleton-java:
Cluster Singleton
=================
For some use cases it is convenient and sometimes also mandatory to ensure that
you have exactly one actor of a certain type running somewhere in the cluster.
Some examples:
* single point of responsibility for certain cluster-wide consistent decisions, or
coordination of actions across the cluster system
* single entry point to an external system
* single master, many workers
* centralized naming service, or routing logic
Using a singleton should not be the first design choice. It has several drawbacks,
such as being a single point of bottleneck. A single point of failure is also a relevant concern,
but for some cases this feature takes care of that by making sure that another singleton
instance will eventually be started.
The cluster singleton pattern is implemented by ``akka.cluster.singleton.ClusterSingletonManager``.
It manages one singleton actor instance among all cluster nodes or a group of nodes tagged with
a specific role. ``ClusterSingletonManager`` is an actor that is supposed to be started on
all nodes, or all nodes with specified role, in the cluster. The actual singleton actor is
started by the ``ClusterSingletonManager`` on the oldest node by creating a child actor from
supplied ``Props``. ``ClusterSingletonManager`` makes sure that at most one singleton instance
is running at any point in time.
The singleton actor is always running on the oldest member with specified role.
The oldest member is determined by ``akka.cluster.Member#isOlderThan``.
This can change when removing that member from the cluster. Be aware that there is a short time
period when there is no active singleton during the hand-over process.
The cluster failure detector will notice when the oldest node becomes unreachable due to
things like a JVM crash, hard shutdown, or network failure. Then a new oldest node will
take over and a new singleton actor is created. For these failure scenarios there will
not be a graceful hand-over, but more than one active singleton is prevented by all
reasonable means. Some corner cases are eventually resolved by configurable timeouts.
You can access the singleton actor by using the provided ``akka.cluster.singleton.ClusterSingletonProxy``,
which will route all messages to the current instance of the singleton. The proxy will keep track of
the oldest node in the cluster and resolve the singleton's ``ActorRef`` by explicitly sending the
singleton's ``actorSelection`` the ``akka.actor.Identify`` message and waiting for it to reply.
This is performed periodically if the singleton doesn't reply within a certain (configurable) time.
Given the implementation, there might be periods of time during which the ``ActorRef`` is unavailable,
e.g., when a node leaves the cluster. In these cases, the proxy will buffer the messages sent to the
singleton and then deliver them when the singleton is finally available. If the buffer is full
the ``ClusterSingletonProxy`` will drop old messages when new messages are sent via the proxy.
The size of the buffer is configurable and it can be disabled by using a buffer size of 0.
It's worth noting that messages can always be lost because of the distributed nature of these actors.
As always, additional logic should be implemented in the singleton (acknowledgement) and in the
client (retry) actors to ensure at-least-once message delivery.
The singleton instance will not run on members with status :ref:`WeaklyUp <weakly_up_java>`.
Potential problems to be aware of
---------------------------------
This pattern may seem to be very tempting to use at first, but it has several drawbacks, some of them are listed below:
* the cluster singleton may quickly become a *performance bottleneck*,
* you can not rely on the cluster singleton to be *non-stop* available — e.g. when the node on which the singleton has
been running dies, it will take a few seconds for this to be noticed and the singleton be migrated to another node,
* in the case of a *network partition* appearing in a Cluster that is using Automatic Downing (see docs for
:ref:`automatic-vs-manual-downing-java`),
it may happen that the isolated clusters each decide to spin up their own singleton, meaning that there might be multiple
singletons running in the system, yet the Clusters have no way of finding out about them (because of the partition).
Especially the last point is something you should be aware of — in general when using the Cluster Singleton pattern
you should take care of downing nodes yourself and not rely on the timing based auto-down feature.
.. warning::
**Don't use Cluster Singleton together with Automatic Downing**,
since it allows the cluster to split up into two separate clusters, which in turn will result
in *multiple Singletons* being started, one in each separate cluster!
An Example
----------
Assume that we need one single entry point to an external system: an actor that
receives messages from a JMS queue, with the strict requirement that only one
JMS consumer may exist, to make sure that the messages are processed in order.
That is perhaps not how one would like to design things, but it is a typical real-world
scenario when integrating with external systems.
On each node in the cluster you need to start the ``ClusterSingletonManager`` and
supply the ``Props`` of the singleton actor, in this case the JMS queue consumer.
.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/singleton/ClusterSingletonManagerTest.java#create-singleton-manager
Here we limit the singleton to nodes tagged with the ``"worker"`` role, but all nodes, independent of
role, can be used by not specifying ``withRole``.
Here we use an application specific ``terminationMessage`` to be able to close the
resources before actually stopping the singleton actor. Note that ``PoisonPill`` is a
perfectly fine ``terminationMessage`` if you only need to stop the actor.
With the names given above, access to the singleton can be obtained from any cluster node using a properly
configured proxy.
.. includecode:: ../../../akka-cluster-tools/src/test/java/akka/cluster/singleton/ClusterSingletonManagerTest.java#create-singleton-proxy
A more comprehensive sample is available in the tutorial named `Distributed workers with Akka and Java! <https://github.com/typesafehub/activator-akka-distributed-workers-java>`_.
Dependencies
------------
To use Cluster Singleton you must add the following dependency to your project.
sbt::
"com.typesafe.akka" %% "akka-cluster-tools" % "@version@" @crossString@
maven::
<dependency>
<groupId>com.typesafe.akka</groupId>
<artifactId>akka-cluster-tools_@binVersion@</artifactId>
<version>@version@</version>
</dependency>
Configuration
-------------
The following configuration properties are read by the ``ClusterSingletonManagerSettings``
when created with an ``ActorSystem`` parameter. It is also possible to amend the ``ClusterSingletonManagerSettings``
or create it from another config section with the same layout as below. ``ClusterSingletonManagerSettings`` is
a parameter to the ``ClusterSingletonManager.props`` factory method, i.e. each singleton can be configured
with different settings if needed.
.. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#singleton-config
The following configuration properties are read by the ``ClusterSingletonProxySettings``
when created with an ``ActorSystem`` parameter. It is also possible to amend the ``ClusterSingletonProxySettings``
or create it from another config section with the same layout as below. ``ClusterSingletonProxySettings`` is
a parameter to the ``ClusterSingletonProxy.props`` factory method, i.e. each singleton proxy can be configured
with different settings if needed.
.. includecode:: ../../../akka-cluster-tools/src/main/resources/reference.conf#singleton-proxy-config
View file
@ -1,823 +0,0 @@
.. _cluster_usage_java:
#############
Cluster Usage
#############
For introduction to the Akka Cluster concepts please see :ref:`cluster`.
Preparing Your Project for Clustering
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The Akka cluster is a separate jar file. Make sure that you have the following dependency in your project::
<dependency>
<groupId>com.typesafe.akka</groupId>
<artifactId>akka-cluster_@binVersion@</artifactId>
<version>@version@</version>
</dependency>
.. _cluster_simple_example_java:
A Simple Cluster Example
^^^^^^^^^^^^^^^^^^^^^^^^
The following configuration enables the ``Cluster`` extension to be used.
It joins the cluster and an actor subscribes to cluster membership events and logs them.
The ``application.conf`` configuration looks like this:
::
akka {
actor {
provider = "cluster"
}
remote {
log-remote-lifecycle-events = off
netty.tcp {
hostname = "127.0.0.1"
port = 0
}
}
cluster {
seed-nodes = [
"akka.tcp://ClusterSystem@127.0.0.1:2551",
"akka.tcp://ClusterSystem@127.0.0.1:2552"]
# auto downing is NOT safe for production deployments.
# you may want to use it during development, read more about it in the docs.
#
# auto-down-unreachable-after = 10s
}
}
# Disable legacy metrics in akka-cluster.
akka.cluster.metrics.enabled=off
# Enable metrics extension in akka-cluster-metrics.
akka.extensions=["akka.cluster.metrics.ClusterMetricsExtension"]
# Sigar native library extract location during tests.
# Note: use per-jvm-instance folder when running multiple jvm on one host.
akka.cluster.metrics.native-library-extract-folder=${user.dir}/target/native
To enable cluster capabilities in your Akka project you should, at a minimum, add the :ref:`remoting-java`
settings, but with ``cluster`` as the ``akka.actor.provider``.
The ``akka.cluster.seed-nodes`` should normally also be added to your ``application.conf`` file.
.. note::
If you are running Akka in a Docker container or the nodes for some other reason have separate internal and
external ip addresses you must configure remoting according to :ref:`remote-configuration-nat-java`
The seed nodes are configured contact points for the initial, automatic join of the cluster.
Note that if you are going to start the nodes on different machines you need to specify the
ip-addresses or host names of the machines in ``application.conf`` instead of ``127.0.0.1``.
An actor that uses the cluster extension may look like this:
.. literalinclude:: code/jdocs/cluster/SimpleClusterListener.java
:language: java
The actor registers itself as a subscriber of certain cluster events. It receives events corresponding to the current state
of the cluster when the subscription starts and then it receives events for changes that happen in the cluster.
The easiest way to run this example yourself is to download the ready to run
`Akka Cluster Sample with Java <@exampleCodeService@/akka-samples-cluster-java>`_
together with the tutorial. It contains instructions on how to run the ``SimpleClusterApp``.
The source code of this sample can be found in the `Akka Samples Repository <@samples@/akka-sample-cluster-java>`_.
Joining to Seed Nodes
^^^^^^^^^^^^^^^^^^^^^
You may decide whether joining the cluster should be done manually or automatically
to configured initial contact points, so-called seed nodes. When a new node is started
it sends a message to all seed nodes and then sends a join command to the one that
answers first. If none of the seed nodes replies (they might not be started yet)
it retries this procedure until successful or until shutdown.
You define the seed nodes in the :ref:`cluster_configuration_java` file (application.conf)::
akka.cluster.seed-nodes = [
"akka.tcp://ClusterSystem@host1:2552",
"akka.tcp://ClusterSystem@host2:2552"]
This can also be defined as Java system properties when starting the JVM using the following syntax::
-Dakka.cluster.seed-nodes.0=akka.tcp://ClusterSystem@host1:2552
-Dakka.cluster.seed-nodes.1=akka.tcp://ClusterSystem@host2:2552
The seed nodes can be started in any order and it is not necessary to have all
seed nodes running, but the node configured as the first element in the ``seed-nodes``
configuration list must be started when initially starting a cluster, otherwise the
other seed-nodes will not become initialized and no other node can join the cluster.
The reason for the special first seed node is to avoid forming separated islands when
starting from an empty cluster.
It is quickest to start all configured seed nodes at the same time (order doesn't matter),
otherwise it can take up to the configured ``seed-node-timeout`` until the nodes
can join.
Once more than two seed nodes have been started it is no problem to shut down the first
seed node. If the first seed node is restarted, it will first try to join the other
seed nodes in the existing cluster.
If you don't configure seed nodes you need to join the cluster programmatically or manually.
Manual joining can be performed by using :ref:`cluster_jmx_java` or :ref:`cluster_http_java`.
Joining programmatically can be performed with ``Cluster.get(system).join``. Unsuccessful join attempts are
automatically retried after the time period defined in configuration property ``retry-unsuccessful-join-after``.
Retries can be disabled by setting the property to ``off``.
You can join to any node in the cluster. It does not have to be configured as a seed node.
Note that you can only join an existing cluster member, which means that for bootstrapping some
node must join itself, and then the following nodes can join it to make up a cluster.
You may also use ``Cluster.get(system).joinSeedNodes`` to join programmatically,
which is attractive when dynamically discovering other nodes at startup by using some external tool or API.
When using ``joinSeedNodes`` you should not include the node itself, except for the node that is
supposed to be the first seed node, which should be placed first in the parameter to ``joinSeedNodes``.
Unsuccessful attempts to contact seed nodes are automatically retried after the time period defined in
configuration property ``seed-node-timeout``. Unsuccessful attempt to join a specific seed node is
automatically retried after the configured ``retry-unsuccessful-join-after``. Retrying means that it
tries to contact all seed nodes and then joins the node that answers first. The first node in the list
of seed nodes will join itself if it cannot contact any of the other seed nodes within the
configured ``seed-node-timeout``.
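For illustration, a minimal sketch of programmatic joining, with hypothetical addresses that
would typically come from an external discovery mechanism::

  import java.util.Arrays;
  import akka.actor.Address;
  import akka.cluster.Cluster;

  Address seed1 = new Address("akka.tcp", "ClusterSystem", "host1", 2552);
  Address seed2 = new Address("akka.tcp", "ClusterSystem", "host2", 2552);

  // the first address in the list is treated as the first seed node
  Cluster.get(system).joinSeedNodes(Arrays.asList(seed1, seed2));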
An actor system can only join a cluster once. Additional attempts will be ignored.
When it has successfully joined it must be restarted to be able to join another
cluster or to join the same cluster again. It can use the same host name and port
after the restart. When it comes up as a new incarnation of an existing member in the
cluster and tries to join, the existing one will be removed from the cluster and the
new incarnation will then be allowed to join.
.. note::
The name of the ``ActorSystem`` must be the same for all members of a cluster. The name is given
when you start the ``ActorSystem``.
.. _automatic-vs-manual-downing-java:
Downing
^^^^^^^
When a member is considered by the failure detector to be unreachable the
leader is not allowed to perform its duties, such as changing status of
new joining members to 'Up'. The node must first become reachable again, or the
status of the unreachable member must be changed to 'Down'. Changing status to 'Down'
can be performed automatically or manually. By default it must be done manually, using
:ref:`cluster_jmx_java` or :ref:`cluster_http_java`.
It can also be performed programmatically with ``Cluster.get(system).down(address)``.
A pre-packaged solution for the downing problem is provided by
`Split Brain Resolver <http://developer.lightbend.com/docs/akka-commercial-addons/current/split-brain-resolver.html>`_,
which is part of the `Lightbend Reactive Platform <http://www.lightbend.com/platform>`_.
If you don't use RP, you should nevertheless carefully read the `documentation <http://developer.lightbend.com/docs/akka-commercial-addons/current/split-brain-resolver.html>`_
of the Split Brain Resolver and make sure that the solution you are using handles the concerns
described there.
Auto-downing (DO NOT USE)
-------------------------
There is an automatic downing feature that you should not use in production. For testing purposes you can enable it with configuration::
akka.cluster.auto-down-unreachable-after = 120s
This means that the cluster leader member will change the ``unreachable`` node
status to ``down`` automatically after the configured time of unreachability.
This is a naïve approach to remove unreachable nodes from the cluster membership. It
works great for crashes and short transient network partitions, but not for long network
partitions. Both sides of the network partition will see the other side as unreachable
and after a while remove it from its cluster membership. Since this happens on both
sides the result is that two separate disconnected clusters have been created. This
can also happen because of long GC pauses or system overload.
.. warning::
We recommend against using the auto-down feature of Akka Cluster in production.
This is crucial for correct behavior if you use :ref:`cluster-singleton-java` or
:ref:`cluster_sharding_java`, especially together with Akka :ref:`persistence-java`.
For Akka Persistence with Cluster Sharding it can result in corrupt data in case
of network partitions.
Leaving
^^^^^^^
There are two ways to remove a member from the cluster.
You can just stop the actor system (or the JVM process). It will be detected
as unreachable and removed after the automatic or manual downing as described
above.
A more graceful exit can be performed if you tell the cluster that a node shall leave.
This can be performed using :ref:`cluster_jmx_java` or :ref:`cluster_http_java`.
It can also be performed programmatically with:
.. includecode:: code/jdocs/cluster/ClusterDocTest.java#leave
Note that this command can be issued to any member in the cluster, not necessarily the
one that is leaving.
The :ref:`coordinated-shutdown-java` will automatically run when the cluster node sees itself as
``Exiting``, i.e. leaving from another node will trigger the shutdown process on the leaving node.
Tasks for graceful leaving of cluster including graceful shutdown of Cluster Singletons and
Cluster Sharding are added automatically when Akka Cluster is used, i.e. running the shutdown
process will also trigger the graceful leaving if it's not already in progress.
Normally this is handled automatically, but in case of network failures during this process it might still
be necessary to set the node's status to ``Down`` in order to complete the removal.
.. _weakly_up_java:
WeaklyUp Members
^^^^^^^^^^^^^^^^
If a node is ``unreachable`` then gossip convergence is not possible and therefore any
``leader`` actions are also not possible. However, we still might want new nodes to join
the cluster in this scenario.
``Joining`` members will be promoted to ``WeaklyUp`` and become part of the cluster if
convergence can't be reached. Once gossip convergence is reached, the leader will move ``WeaklyUp``
members to ``Up``.
This feature is enabled by default, but it can be disabled with configuration option::
akka.cluster.allow-weakly-up-members = off
You can subscribe to the ``WeaklyUp`` membership event to make use of the members that are
in this state, but you should be aware that members on the other side of a network partition
have no knowledge about the existence of the new members. For example, you should not count
``WeaklyUp`` members in quorum decisions.
.. _cluster_subscriber_java:
Subscribe to Cluster Events
^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can subscribe to change notifications of the cluster membership by using
``Cluster.get(system).subscribe``.
.. includecode:: code/jdocs/cluster/SimpleClusterListener2.java#subscribe
A snapshot of the full state, ``akka.cluster.ClusterEvent.CurrentClusterState``, is sent to the subscriber
as the first message, followed by events for incremental updates.
Note that you may receive an empty ``CurrentClusterState``, containing no members,
if you start the subscription before the initial join procedure has completed.
This is expected behavior. When the node has been accepted in the cluster you will
receive ``MemberUp`` for that node, and other nodes.
If you find it inconvenient to handle the ``CurrentClusterState`` you can use
``ClusterEvent.initialStateAsEvents()`` as a parameter to ``subscribe``.
That means that instead of receiving ``CurrentClusterState`` as the first message you will receive
the events corresponding to the current state, to mimic what you would have seen if you were
listening to the events when they occurred in the past. Note that those initial events only correspond
to the current state and it is not the full history of all changes that have actually occurred in the cluster.
.. includecode:: code/jdocs/cluster/SimpleClusterListener.java#subscribe
The events to track the life-cycle of members are:
* ``ClusterEvent.MemberJoined`` - A new member has joined the cluster and its status has been changed to ``Joining``.
* ``ClusterEvent.MemberUp`` - A new member has joined the cluster and its status has been changed to ``Up``.
* ``ClusterEvent.MemberExited`` - A member is leaving the cluster and its status has been changed to ``Exiting``.
Note that the node might already have been shut down when this event is published on another node.
* ``ClusterEvent.MemberRemoved`` - Member completely removed from the cluster.
* ``ClusterEvent.UnreachableMember`` - A member is considered as unreachable, detected by the failure detector
of at least one other node.
* ``ClusterEvent.ReachableMember`` - A member is considered as reachable again, after having been unreachable.
All nodes that previously detected it as unreachable have detected it as reachable again.
There are more types of change events, consult the API documentation
of classes that extend ``akka.cluster.ClusterEvent.ClusterDomainEvent``
for details about the events.
Instead of subscribing to cluster events it can sometimes be convenient to only get the full membership state with
``Cluster.get(system).state()``. Note that this state is not necessarily in sync with the events published to a
cluster subscription.
Worker Dial-in Example
----------------------
Let's take a look at an example that illustrates how workers, here named *backend*,
can detect and register to new master nodes, here named *frontend*.
The example application provides a service to transform text. When some text
is sent to one of the frontend services, it will be delegated to one of the
backend workers, which performs the transformation job, and sends the result back to
the original client. New backend nodes, as well as new frontend nodes, can be
added to or removed from the cluster dynamically.
Messages:
.. includecode:: code/jdocs/cluster/TransformationMessages.java#messages
The backend worker that performs the transformation job:
.. includecode:: code/jdocs/cluster/TransformationBackend.java#backend
Note that the ``TransformationBackend`` actor subscribes to cluster events to detect new,
potential, frontend nodes, and sends them a registration message so that they know
that they can use the backend worker.
The frontend that receives user jobs and delegates to one of the registered backend workers:
.. includecode:: code/jdocs/cluster/TransformationFrontend.java#frontend
Note that the ``TransformationFrontend`` actor watches the registered backend workers
to be able to remove them from its list of available backend workers.
Death watch uses the cluster failure detector for nodes in the cluster, i.e. it detects
network failures and JVM crashes, in addition to graceful termination of watched
actor. Death watch generates the ``Terminated`` message to the watching actor when the
unreachable cluster node has been downed and removed.
The Akka sample named
`Akka Cluster Sample with Java <https://github.com/akka/akka-samples/tree/master/akka-sample-cluster-java>`_
contains the full source code and instructions of how to run the **Worker Dial-in Example**.
Node Roles
^^^^^^^^^^
Not all nodes of a cluster need to perform the same function: there might be one sub-set which runs the web front-end,
one which runs the data access layer and one for the number-crunching. Deployment of actors—for example by cluster-aware
routers—can take node roles into account to achieve this distribution of responsibilities.
The roles of a node are defined in the configuration property named ``akka.cluster.roles``
and they are typically set in the start script as a system property or environment variable.
The roles of the nodes are part of the membership information in ``MemberEvent`` that you can subscribe to.
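For example, the roles could be set in the start script as system properties
(the role names here are illustrative)::

  -Dakka.cluster.roles.0=backend
  -Dakka.cluster.roles.1=worker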
.. _min-members_java:
How To Startup when Cluster Size Reached
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A common use case is to start actors after the cluster has been initialized,
members have joined, and the cluster has reached a certain size.
With a configuration option you can define the required number of members
before the leader changes the member status of 'Joining' members to 'Up'::
akka.cluster.min-nr-of-members = 3
In a similar way you can define the required number of members of a certain role
before the leader changes the member status of 'Joining' members to 'Up'::
akka.cluster.role {
frontend.min-nr-of-members = 1
backend.min-nr-of-members = 2
}
You can start the actors in a ``registerOnMemberUp`` callback, which will
be invoked when the current member status is changed to 'Up', i.e. the cluster
has at least the defined number of members.
.. includecode:: code/jdocs/cluster/FactorialFrontendMain.java#registerOnUp
This callback can be used for other things than starting actors.
How To Cleanup when Member is Removed
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can do some cleanup in a ``registerOnMemberRemoved`` callback, which will
be invoked when the current member status is changed to 'Removed' or the cluster has been shut down.
An alternative is to register tasks to the :ref:`coordinated-shutdown-java`.
.. note::
  If you register an ``OnMemberRemoved`` callback on a cluster that has already been shut down,
  the callback will be invoked immediately on the caller thread; otherwise it will be invoked
  later, when the current member status changes to 'Removed'. You may want to install some cleanup
  handling after the cluster has started up, but the cluster might already be shutting down when
  you install it, and in that case relying on the outcome of that race is not healthy.
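For illustration, a minimal sketch that terminates the actor system as the cleanup action
(the action itself is just an example)::

  import akka.cluster.Cluster;

  Cluster.get(system).registerOnMemberRemoved(() -> {
    // invoked when this member's status has changed to 'Removed',
    // or immediately if the cluster is already shut down
    system.terminate();
  });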
Cluster Singleton
^^^^^^^^^^^^^^^^^
For some use cases it is convenient and sometimes also mandatory to ensure that
you have exactly one actor of a certain type running somewhere in the cluster.
This can be implemented by subscribing to member events, but there are several corner
cases to consider. Therefore, this specific use case is made easily accessible by the
:ref:`cluster-singleton-java`.
Cluster Sharding
^^^^^^^^^^^^^^^^
Distributes actors across several nodes in the cluster and supports interaction
with the actors using their logical identifier, but without having to care about
their physical location in the cluster.
See :ref:`cluster_sharding_java`.
Distributed Publish Subscribe
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Publish-subscribe messaging between actors in the cluster, and point-to-point messaging
using the logical path of the actors, i.e. the sender does not have to know on which
node the destination actor is running.
See :ref:`distributed-pub-sub-java`.
Cluster Client
^^^^^^^^^^^^^^
Communication from an actor system that is not part of the cluster to actors running
somewhere in the cluster. The client does not have to know on which node the destination
actor is running.
See :ref:`cluster-client-java`.
Distributed Data
^^^^^^^^^^^^^^^^
*Akka Distributed Data* is useful when you need to share data between nodes in an
Akka Cluster. The data is accessed with an actor providing a key-value store like API.
See :ref:`distributed_data_java`.
Failure Detector
^^^^^^^^^^^^^^^^
In a cluster each node is monitored by a few (default maximum 5) other nodes, and when
any of these detects the node as ``unreachable`` that information will spread to
the rest of the cluster through the gossip. In other words, only one node needs to
mark a node ``unreachable`` to have the rest of the cluster mark that node ``unreachable``.
The failure detector will also detect if the node becomes ``reachable`` again. When
all nodes that monitored the ``unreachable`` node detect it as ``reachable`` again
the cluster, after gossip dissemination, will consider it as ``reachable``.
If system messages cannot be delivered to a node it will be quarantined and then it
cannot come back from ``unreachable``. This can happen if there are too many
unacknowledged system messages (e.g. watch, Terminated, remote actor deployment,
failures of actors supervised by remote parent). Then the node needs to be moved
to the ``down`` or ``removed`` states and the actor system of the quarantined node
must be restarted before it can join the cluster again.
The nodes in the cluster monitor each other by sending heartbeats to detect if a node is
unreachable from the rest of the cluster. The heartbeat arrival times are interpreted
by an implementation of
`The Phi Accrual Failure Detector <http://www.jaist.ac.jp/~defago/files/pdf/IS_RR_2004_010.pdf>`_.
The suspicion level of failure is given by a value called *phi*.
The basic idea of the phi failure detector is to express the value of *phi* on a scale that
is dynamically adjusted to reflect current network conditions.
The value of *phi* is calculated as::
phi = -log10(1 - F(timeSinceLastHeartbeat))
where F is the cumulative distribution function of a normal distribution with mean
and standard deviation estimated from historical heartbeat inter-arrival times.
In the :ref:`cluster_configuration_java` you can adjust the ``akka.cluster.failure-detector.threshold``
to define when a *phi* value is considered to be a failure.
A low ``threshold`` is prone to generate many false positives but ensures
a quick detection in the event of a real crash. Conversely, a high ``threshold``
generates fewer mistakes but needs more time to detect actual crashes. The
default ``threshold`` is 8 and is appropriate for most situations. However in
cloud environments, such as Amazon EC2, the value could be increased to 12 in
order to account for network issues that sometimes occur on such platforms.
The following chart illustrates how *phi* increases with increasing time since the
previous heartbeat.
.. image:: ../images/phi1.png
Phi is calculated from the mean and standard deviation of historical
inter-arrival times. The previous chart is an example for a standard deviation
of 200 ms. If the heartbeats arrive with less deviation the curve becomes steeper,
i.e. it is possible to determine failure more quickly. The curve looks like this for
a standard deviation of 100 ms.
.. image:: ../images/phi2.png
To be able to survive sudden abnormalities, such as garbage collection pauses and
transient network failures, the failure detector is configured with a margin,
``akka.cluster.failure-detector.acceptable-heartbeat-pause``. You may want to
adjust the :ref:`cluster_configuration_java` of this depending on your environment.
This is how the curve looks for ``acceptable-heartbeat-pause`` configured to
3 seconds.
.. image:: ../images/phi3.png
Death watch uses the cluster failure detector for nodes in the cluster, i.e. it detects
network failures and JVM crashes, in addition to graceful termination of watched
actor. Death watch generates the ``Terminated`` message to the watching actor when the
unreachable cluster node has been downed and removed.
If you encounter suspicious false positives when the system is under load you should
define a separate dispatcher for the cluster actors as described in :ref:`cluster_dispatcher_java`.
Cluster Aware Routers
^^^^^^^^^^^^^^^^^^^^^
All :ref:`routers <routing-java>` can be made aware of member nodes in the cluster, i.e.
deploying new routees or looking up routees on nodes in the cluster.
When a node becomes unreachable or leaves the cluster the routees of that node are
automatically unregistered from the router. When new nodes join the cluster additional
routees are added to the router, according to the configuration. Routees are also added
when a node becomes reachable again, after having been unreachable.
Cluster aware routers make use of members with status :ref:`WeaklyUp <weakly_up_java>`.
There are two distinct types of routers.
* **Group - router that sends messages to the specified path using actor selection**
The routees can be shared between routers running on different nodes in the cluster.
One example of a use case for this type of router is a service running on some backend
nodes in the cluster and used by routers running on front-end nodes in the cluster.
* **Pool - router that creates routees as child actors and deploys them on remote nodes.**
Each router will have its own routee instances. For example, if you start a router
on 3 nodes in a 10 nodes cluster you will have 30 routee actors in total if the router is
configured to use one instance per node. The routees created by the different routers
will not be shared between the routers. One example of a use case for this type of router
is a single master that coordinates jobs and delegates the actual work to routees running
on other nodes in the cluster.
Router with Group of Routees
----------------------------
When using a ``Group`` you must start the routee actors on the cluster member nodes.
That is not done by the router. The configuration for a group looks like this::
akka.actor.deployment {
/statsService/workerRouter {
router = consistent-hashing-group
routees.paths = ["/user/statsWorker"]
cluster {
enabled = on
allow-local-routees = on
use-role = compute
}
}
}
.. note::
The routee actors should be started as early as possible when starting the actor system, because
the router will try to use them as soon as the member status is changed to 'Up'.
The actor paths without address information that are defined in ``routees.paths`` are used for selecting the
actors to which the messages will be forwarded by the router.
Messages will be forwarded to the routees using :ref:`ActorSelection <actorselection-java>`, so the same delivery semantics should be expected.
It is possible to limit the lookup of routees to member nodes tagged with a certain role by specifying ``use-role``.
``max-total-nr-of-instances`` defines the total number of routees in the cluster. By default ``max-total-nr-of-instances``
is set to a high value (10000) that will result in new routees being added to the router when nodes join the cluster.
Set it to a lower value if you want to limit the total number of routees.
The same type of router could also have been defined in code:
.. includecode:: code/jdocs/cluster/StatsService.java#router-lookup-in-code
See :ref:`cluster_configuration_java` section for further descriptions of the settings.
Router Example with Group of Routees
------------------------------------
Let's take a look at how to use a cluster aware router with a group of routees,
i.e. a router sending messages to the paths of the routees.
The example application provides a service to calculate statistics for a text.
When some text is sent to the service it splits it into words, and delegates the task
to count the number of characters in each word to a separate worker, a routee of a router.
The character count for each word is sent back to an aggregator that calculates
the average number of characters per word when all results have been collected.
Messages:
.. includecode:: code/jdocs/cluster/StatsMessages.java#messages
The worker that counts the number of characters in each word:
.. includecode:: code/jdocs/cluster/StatsWorker.java#worker
The service that receives text from users and splits it up into words, delegates to workers and aggregates:
.. includecode:: code/jdocs/cluster/StatsService.java#service
.. includecode:: code/jdocs/cluster/StatsAggregator.java#aggregator
Note, nothing cluster specific so far, just plain actors.
All nodes start ``StatsService`` and ``StatsWorker`` actors. Remember, routees are the workers in this case.
The router is configured with ``routees.paths``::
akka.actor.deployment {
/statsService/workerRouter {
router = consistent-hashing-group
routees.paths = ["/user/statsWorker"]
cluster {
enabled = on
allow-local-routees = on
use-role = compute
}
}
}
This means that user requests can be sent to ``StatsService`` on any node and it will use
``StatsWorker`` on all nodes.
The Akka sample named
`Akka Cluster Sample with Java <https://github.com/akka/akka-samples/tree/master/akka-sample-cluster-java>`_
contains the full source code and instructions of how to run the **Router Example with Group of Routees**.
Router with Pool of Remote Deployed Routees
-------------------------------------------
When using a ``Pool`` with routees created and deployed on the cluster member nodes,
the configuration for a router looks like this::
akka.actor.deployment {
/statsService/singleton/workerRouter {
router = consistent-hashing-pool
cluster {
enabled = on
max-nr-of-instances-per-node = 3
allow-local-routees = on
use-role = compute
}
}
}
It is possible to limit the deployment of routees to member nodes tagged with a certain role by
specifying ``use-role``.
``max-total-nr-of-instances`` defines the total number of routees in the cluster, but the number of routees
per node, ``max-nr-of-instances-per-node``, will not be exceeded. By default ``max-total-nr-of-instances``
is set to a high value (10000) that will result in new routees being added to the router when nodes join the cluster.
Set it to a lower value if you want to limit the total number of routees.
The same type of router could also have been defined in code:
.. includecode:: code/jdocs/cluster/StatsService.java#router-deploy-in-code
See :ref:`cluster_configuration_java` section for further descriptions of the settings.
Router Example with Pool of Remote Deployed Routees
---------------------------------------------------
Let's take a look at how to use a cluster aware router on a single master node that creates
and deploys workers. To keep track of a single master we use the :ref:`cluster-singleton-java`
in the cluster-tools module. The ``ClusterSingletonManager`` is started on each node.
.. includecode:: code/jdocs/cluster/StatsSampleOneMasterMain.java#create-singleton-manager
We also need an actor on each node that keeps track of where the current single master exists and
delegates jobs to the ``StatsService``. That is provided by the ``ClusterSingletonProxy``.
.. includecode:: code/jdocs/cluster/StatsSampleOneMasterMain.java#singleton-proxy
The ``ClusterSingletonProxy`` receives text from users and delegates to the current ``StatsService``, the single
master. It listens to cluster events to lookup the ``StatsService`` on the oldest node.
All nodes start the ``ClusterSingletonProxy`` and the ``ClusterSingletonManager``. The router is now configured like this::
akka.actor.deployment {
/statsService/singleton/workerRouter {
router = consistent-hashing-pool
cluster {
enabled = on
max-nr-of-instances-per-node = 3
allow-local-routees = on
use-role = compute
}
}
}
The Akka sample named
`Akka Cluster Sample with Java <https://github.com/akka/akka-samples/tree/master/akka-sample-cluster-java>`_
contains the full source code and instructions of how to run the **Router Example with Pool of Remote Deployed Routees**.
Cluster Metrics
^^^^^^^^^^^^^^^
The member nodes of the cluster can collect system health metrics and publish that to other cluster nodes
and to the registered subscribers on the system event bus with the help of :doc:`cluster-metrics`.
Management
^^^^^^^^^^
.. _cluster_http_java:
HTTP
----
Information and management of the cluster is available with an HTTP API.
See documentation of `akka/akka-cluster-management <https://github.com/akka/akka-cluster-management>`_.
.. _cluster_jmx_java:
JMX
---
Information and management of the cluster is available as JMX MBeans with the root name ``akka.Cluster``.
The JMX information can be displayed with an ordinary JMX console such as JConsole or JVisualVM.
From JMX you can:
* see what members are part of the cluster
* see status of this node
* see roles of each member
* join this node to another node in cluster
* mark any node in the cluster as down
* tell any node in the cluster to leave
Member nodes are identified by their address, in the format `akka.<protocol>://<actor-system-name>@<hostname>:<port>`.
.. _cluster_command_line_java:
Command Line
------------
.. warning::
**Deprecation warning** - The command line script has been deprecated and is scheduled for removal
in the next major version. Use the :ref:`cluster_http_java` API with `curl <https://curl.haxx.se/>`_
or similar instead.
The cluster can be managed with the script ``akka-cluster`` provided in the Akka github repository here: @github@/akka-cluster/jmx-client. Place the script and the ``jmxsh-R5.jar`` library in the same directory.
Run it without parameters to see instructions about how to use the script::
Usage: ./akka-cluster <node-hostname> <jmx-port> <command> ...
Supported commands are:
join <node-url> - Sends request a JOIN node with the specified URL
leave <node-url> - Sends a request for node with URL to LEAVE the cluster
down <node-url> - Sends a request for marking node with URL as DOWN
member-status - Asks the member node for its current status
members - Asks the cluster for addresses of current members
unreachable - Asks the cluster for addresses of unreachable members
cluster-status - Asks the cluster for its current status (member ring,
unavailable nodes, meta data etc.)
leader - Asks the cluster who the current leader is
is-singleton - Checks if the cluster is a singleton cluster (single
node cluster)
is-available - Checks if the member node is available
Where the <node-url> should be in the format
'akka.<protocol>://<actor-system-name>@<hostname>:<port>'
Examples: ./akka-cluster localhost 9999 is-available
./akka-cluster localhost 9999 join akka.tcp://MySystem@darkstar:2552
./akka-cluster localhost 9999 cluster-status
To be able to use the script you must enable remote monitoring and management when starting the JVMs of the cluster nodes,
as described in `Monitoring and Management Using JMX Technology <http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html>`_.
Make sure you understand the security implications of enabling remote monitoring and management.
.. _cluster_configuration_java:
Configuration
^^^^^^^^^^^^^
There are several configuration properties for the cluster. We refer to the
:ref:`reference configuration <config-akka-cluster>` for more information.
Cluster Info Logging
--------------------
You can silence the logging of cluster events at info level with the configuration property::
akka.cluster.log-info = off
.. _cluster_dispatcher_java:
Cluster Dispatcher
------------------
Under the hood the cluster extension is implemented with actors, and it can be necessary
to create a bulkhead for those actors to avoid disturbance from other actors. Especially
the heartbeat actors that are used for failure detection can generate false positives
if they are not given a chance to run at regular intervals.
For this purpose you can define a separate dispatcher to be used for the cluster actors::
akka.cluster.use-dispatcher = cluster-dispatcher
cluster-dispatcher {
type = "Dispatcher"
executor = "fork-join-executor"
fork-join-executor {
parallelism-min = 2
parallelism-max = 4
}
}
.. note::
Normally it should not be necessary to configure a separate dispatcher for the Cluster.
The default-dispatcher should be sufficient for performing the Cluster tasks, i.e. ``akka.cluster.use-dispatcher``
should not be changed. If you have Cluster-related problems when using the default-dispatcher, that is typically an
indication that you are running blocking or CPU-intensive actors/tasks on the default-dispatcher.
Use dedicated dispatchers for such actors/tasks instead of running them on the default-dispatcher,
because that may starve system internal tasks.
Related config property: ``akka.cluster.use-dispatcher = akka.cluster.cluster-dispatcher``.
Its default value is empty, meaning the cluster actors run on the default dispatcher.

@ -1,13 +0,0 @@
/*
* Copyright (C) 2016-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs
import org.scalatest.junit.JUnitSuite
/**
* Base class for all runnable example tests written in Java
*/
abstract class AbstractJavaTest extends JUnitSuite {
}

@ -1,837 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor;
import akka.actor.*;
import akka.testkit.javadsl.TestKit;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import jdocs.AbstractJavaTest;
import static jdocs.actor.Messages.Swap.Swap;
import static jdocs.actor.Messages.*;
import akka.actor.CoordinatedShutdown;
import akka.util.Timeout;
import akka.Done;
import java.util.Optional;
import java.util.concurrent.TimeUnit;
import akka.testkit.TestActors;
import scala.concurrent.Await;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import static org.junit.Assert.*;
//#import-props
import akka.actor.Props;
//#import-props
//#import-actorRef
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
//#import-actorRef
//#import-identify
import akka.actor.ActorIdentity;
import akka.actor.ActorSelection;
import akka.actor.Identify;
//#import-identify
//#import-ask
import static akka.pattern.PatternsCS.ask;
import static akka.pattern.PatternsCS.pipe;
import java.util.concurrent.CompletableFuture;
//#import-ask
//#import-gracefulStop
import static akka.pattern.PatternsCS.gracefulStop;
import akka.pattern.AskTimeoutException;
import scala.concurrent.duration.Duration;
import java.util.concurrent.CompletionStage;
//#import-gracefulStop
//#import-terminated
import akka.actor.Terminated;
//#import-terminated
public class ActorDocTest extends AbstractJavaTest {
public static Config config = ConfigFactory.parseString(
"akka {\n" +
" loggers = [\"akka.testkit.TestEventListener\"]\n" +
" loglevel = \"WARNING\"\n" +
" stdout-loglevel = \"WARNING\"\n" +
"}\n"
);
static ActorSystem system = null;
@BeforeClass
public static void beforeClass() {
system = ActorSystem.create("ActorDocTest", config);
}
@AfterClass
public static void afterClass() throws Exception {
Await.ready(system.terminate(), Duration.create(5, TimeUnit.SECONDS));
}
static
//#context-actorOf
public class FirstActor extends AbstractActor {
final ActorRef child = getContext().actorOf(Props.create(MyActor.class), "myChild");
//#plus-some-behavior
@Override
public Receive createReceive() {
return receiveBuilder()
.matchAny(x -> getSender().tell(x, getSelf()))
.build();
}
//#plus-some-behavior
}
//#context-actorOf
static public class SomeActor extends AbstractActor {
//#createReceive
@Override
public Receive createReceive() {
return receiveBuilder()
.match(String.class, s -> System.out.println(s.toLowerCase()))
.build();
}
//#createReceive
}
static
//#well-structured
public class WellStructuredActor extends AbstractActor {
public static class Msg1 {}
public static class Msg2 {}
public static class Msg3 {}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Msg1.class, this::receiveMsg1)
.match(Msg2.class, this::receiveMsg2)
.match(Msg3.class, this::receiveMsg3)
.build();
}
private void receiveMsg1(Msg1 msg) {
// actual work
}
private void receiveMsg2(Msg2 msg) {
// actual work
}
private void receiveMsg3(Msg3 msg) {
// actual work
}
}
//#well-structured
static
//#optimized
public class OptimizedActor extends UntypedAbstractActor {
public static class Msg1 {}
public static class Msg2 {}
public static class Msg3 {}
@Override
public void onReceive(Object msg) throws Exception {
if (msg instanceof Msg1)
receiveMsg1((Msg1) msg);
else if (msg instanceof Msg2)
receiveMsg2((Msg2) msg);
else if (msg instanceof Msg3)
receiveMsg3((Msg3) msg);
else
unhandled(msg);
}
private void receiveMsg1(Msg1 msg) {
// actual work
}
private void receiveMsg2(Msg2 msg) {
// actual work
}
private void receiveMsg3(Msg3 msg) {
// actual work
}
}
//#optimized
static public class ActorWithArgs extends AbstractActor {
private final String args;
public ActorWithArgs(String args) {
this.args = args;
}
@Override
public Receive createReceive() {
return receiveBuilder().matchAny(x -> { }).build();
}
}
static
//#props-factory
public class DemoActor extends AbstractActor {
/**
* Create Props for an actor of this type.
* @param magicNumber The magic number to be passed to this actor's constructor.
* @return a Props for creating this actor, which can then be further configured
* (e.g. calling `.withDispatcher()` on it)
*/
static Props props(Integer magicNumber) {
// You need to specify the actual type of the returned actor
// since Java 8 lambdas have some runtime type information erased
return Props.create(DemoActor.class, () -> new DemoActor(magicNumber));
}
private final Integer magicNumber;
public DemoActor(Integer magicNumber) {
this.magicNumber = magicNumber;
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Integer.class, i -> {
getSender().tell(i + magicNumber, getSelf());
})
.build();
}
}
//#props-factory
static
//#props-factory
public class SomeOtherActor extends AbstractActor {
// Props(new DemoActor(42)) would not be safe
ActorRef demoActor = getContext().actorOf(DemoActor.props(42), "demo");
// ...
//#props-factory
@Override
public Receive createReceive() {
return AbstractActor.emptyBehavior();
}
//#props-factory
}
//#props-factory
static
//#messages-in-companion
public class DemoMessagesActor extends AbstractLoggingActor {
static public class Greeting {
private final String from;
public Greeting(String from) {
this.from = from;
}
public String getGreeter() {
return from;
}
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Greeting.class, g -> {
log().info("I was greeted by {}", g.getGreeter());
})
.build();
}
}
//#messages-in-companion
public static class LifecycleMethods extends AbstractActor {
@Override
public Receive createReceive() {
return AbstractActor.emptyBehavior();
}
/*
* This section must be kept in sync with the actual Actor trait.
*
* BOYSCOUT RULE: whenever you read this, verify that!
*/
//#lifecycle-callbacks
public void preStart() {
}
public void preRestart(Throwable reason, Optional<Object> message) {
for (ActorRef each : getContext().getChildren()) {
getContext().unwatch(each);
getContext().stop(each);
}
postStop();
}
public void postRestart(Throwable reason) {
preStart();
}
public void postStop() {
}
//#lifecycle-callbacks
}
public static class Hook extends AbstractActor {
ActorRef target = null;
@Override
public Receive createReceive() {
return AbstractActor.emptyBehavior();
}
//#preStart
@Override
public void preStart() {
target = getContext().actorOf(Props.create(MyActor.class, "target"));
}
//#preStart
//#postStop
@Override
public void postStop() {
//#clean-up-some-resources
final String message = "stopped";
//#tell
// don't forget to think about who the sender is (2nd argument)
target.tell(message, getSelf());
//#tell
final Object result = "";
//#forward
target.forward(result, getContext());
//#forward
target = null;
//#clean-up-some-resources
}
//#postStop
// compilation test only
public void compileSelections() {
//#selection-local
// will look up this absolute path
getContext().actorSelection("/user/serviceA/actor");
// will look up sibling beneath same supervisor
getContext().actorSelection("../joe");
//#selection-local
//#selection-wildcard
// will look up all children of serviceB with names starting with worker
getContext().actorSelection("/user/serviceB/worker*");
// will look up all siblings beneath same supervisor
getContext().actorSelection("../*");
//#selection-wildcard
//#selection-remote
getContext().actorSelection("akka.tcp://app@otherhost:1234/user/serviceB");
//#selection-remote
}
}
public static class ReplyException extends AbstractActor {
@Override
public Receive createReceive() {
return receiveBuilder()
.matchAny(x -> {
//#reply-exception
try {
String result = operation();
getSender().tell(result, getSelf());
} catch (Exception e) {
getSender().tell(new akka.actor.Status.Failure(e), getSelf());
throw e;
}
//#reply-exception
})
.build();
}
private String operation() {
return "Hi";
}
}
static
//#gracefulStop-actor
public class Manager extends AbstractActor {
private static enum Shutdown {
Shutdown
}
public static final Shutdown SHUTDOWN = Shutdown.Shutdown;
private ActorRef worker =
getContext().watch(getContext().actorOf(Props.create(Cruncher.class), "worker"));
@Override
public Receive createReceive() {
return receiveBuilder()
.matchEquals("job", s -> {
worker.tell("crunch", getSelf());
})
.matchEquals(SHUTDOWN, x -> {
worker.tell(PoisonPill.getInstance(), getSelf());
getContext().become(shuttingDown());
})
.build();
}
private AbstractActor.Receive shuttingDown() {
return receiveBuilder()
.matchEquals("job", s ->
getSender().tell("service unavailable, shutting down", getSelf())
)
.match(Terminated.class, t -> t.actor().equals(worker), t ->
getContext().stop(getSelf())
)
.build();
}
}
//#gracefulStop-actor
@Test
public void usePatternsGracefulStop() throws Exception {
ActorRef actorRef = system.actorOf(Props.create(Manager.class));
//#gracefulStop
try {
CompletionStage<Boolean> stopped =
gracefulStop(actorRef, Duration.create(5, TimeUnit.SECONDS), Manager.SHUTDOWN);
stopped.toCompletableFuture().get(6, TimeUnit.SECONDS);
// the actor has been stopped
} catch (AskTimeoutException e) {
// the actor wasn't stopped within 5 seconds
}
//#gracefulStop
}
public static class Cruncher extends AbstractActor {
@Override
public Receive createReceive() {
return receiveBuilder().matchEquals("crunch", s -> { }).build();
}
}
static
//#swapper
public class Swapper extends AbstractLoggingActor {
@Override
public Receive createReceive() {
return receiveBuilder()
.matchEquals(Swap, s -> {
log().info("Hi");
getContext().become(receiveBuilder().
matchEquals(Swap, x -> {
log().info("Ho");
getContext().unbecome(); // resets the latest 'become' (just for fun)
}).build(), false); // push on top instead of replace
}).build();
}
}
//#swapper
static
//#swapper
public class SwapperApp {
public static void main(String[] args) {
ActorSystem system = ActorSystem.create("SwapperSystem");
ActorRef swapper = system.actorOf(Props.create(Swapper.class), "swapper");
swapper.tell(Swap, ActorRef.noSender()); // logs Hi
swapper.tell(Swap, ActorRef.noSender()); // logs Ho
swapper.tell(Swap, ActorRef.noSender()); // logs Hi
swapper.tell(Swap, ActorRef.noSender()); // logs Ho
swapper.tell(Swap, ActorRef.noSender()); // logs Hi
swapper.tell(Swap, ActorRef.noSender()); // logs Ho
system.terminate();
}
}
//#swapper
@Test
public void creatingActorWithSystemActorOf() {
//#system-actorOf
// ActorSystem is a heavy object: create only one per application
final ActorSystem system = ActorSystem.create("MySystem", config);
final ActorRef myActor = system.actorOf(Props.create(MyActor.class), "myactor");
//#system-actorOf
try {
new TestKit(system) {
{
myActor.tell("hello", getRef());
expectMsgEquals("hello");
}
};
} finally {
TestKit.shutdownActorSystem(system);
}
}
@Test
public void creatingGraduallyBuiltActorWithSystemActorOf() {
final ActorSystem system = ActorSystem.create("MySystem", config);
final ActorRef actor = system.actorOf(Props.create(GraduallyBuiltActor.class), "graduallyBuiltActor");
try {
new TestKit(system) {
{
actor.tell("hello", getRef());
expectMsgEquals("hello");
}
};
} finally {
TestKit.shutdownActorSystem(system);
}
}
@Test
public void creatingPropsConfig() {
//#creating-props
Props props1 = Props.create(MyActor.class);
Props props2 = Props.create(ActorWithArgs.class,
() -> new ActorWithArgs("arg")); // careful, see below
Props props3 = Props.create(ActorWithArgs.class, "arg");
//#creating-props
//#creating-props-deprecated
// NOT RECOMMENDED within another actor:
// encourages closing over the enclosing class
Props props7 = Props.create(ActorWithArgs.class,
() -> new ActorWithArgs("arg"));
//#creating-props-deprecated
}
@Test(expected=IllegalArgumentException.class)
public void creatingPropsIllegal() {
//#creating-props-illegal
// This will throw an IllegalArgumentException since some runtime
// type information of the lambda is erased.
// Use Props.create(actorClass, Creator) instead.
Props props = Props.create(() -> new ActorWithArgs("arg"));
//#creating-props-illegal
}
static
//#receive-timeout
public class ReceiveTimeoutActor extends AbstractActor {
//#receive-timeout
ActorRef target = getContext().getSystem().deadLetters();
//#receive-timeout
public ReceiveTimeoutActor() {
// To set an initial delay
getContext().setReceiveTimeout(Duration.create(10, TimeUnit.SECONDS));
}
@Override
public Receive createReceive() {
return receiveBuilder()
.matchEquals("Hello", s -> {
// To set in a response to a message
getContext().setReceiveTimeout(Duration.create(1, TimeUnit.SECONDS));
//#receive-timeout
target = getSender();
target.tell("Hello world", getSelf());
//#receive-timeout
})
.match(ReceiveTimeout.class, r -> {
// To turn it off
getContext().setReceiveTimeout(Duration.Undefined());
//#receive-timeout
target.tell("timeout", getSelf());
//#receive-timeout
})
.build();
}
}
//#receive-timeout
@Test
public void using_receiveTimeout() {
final ActorRef myActor = system.actorOf(Props.create(ReceiveTimeoutActor.class));
new TestKit(system) {
{
myActor.tell("Hello", getRef());
expectMsgEquals("Hello world");
expectMsgEquals("timeout");
}
};
}
static
//#hot-swap-actor
public class HotSwapActor extends AbstractActor {
private AbstractActor.Receive angry;
private AbstractActor.Receive happy;
public HotSwapActor() {
angry =
receiveBuilder()
.matchEquals("foo", s -> {
getSender().tell("I am already angry?", getSelf());
})
.matchEquals("bar", s -> {
getContext().become(happy);
})
.build();
happy = receiveBuilder()
.matchEquals("bar", s -> {
getSender().tell("I am already happy :-)", getSelf());
})
.matchEquals("foo", s -> {
getContext().become(angry);
})
.build();
}
@Override
public Receive createReceive() {
return receiveBuilder()
.matchEquals("foo", s ->
getContext().become(angry)
)
.matchEquals("bar", s ->
getContext().become(happy)
)
.build();
}
}
//#hot-swap-actor
@Test
public void using_hot_swap() {
final ActorRef actor = system.actorOf(Props.create(HotSwapActor.class), "hot");
new TestKit(system) {
{
actor.tell("foo", getRef());
actor.tell("foo", getRef());
expectMsgEquals("I am already angry?");
actor.tell("bar", getRef());
actor.tell("bar", getRef());
expectMsgEquals("I am already happy :-)");
actor.tell("foo", getRef());
actor.tell("foo", getRef());
expectMsgEquals("I am already angry?");
expectNoMsg(Duration.create(1, TimeUnit.SECONDS));
}
};
}
static
//#stash
public class ActorWithProtocol extends AbstractActorWithStash {
@Override
public Receive createReceive() {
return receiveBuilder()
.matchEquals("open", s -> {
getContext().become(receiveBuilder()
.matchEquals("write", ws -> { /* do writing */ })
.matchEquals("close", cs -> {
unstashAll();
getContext().unbecome();
})
.matchAny(msg -> stash())
.build(), false);
})
.matchAny(msg -> stash())
.build();
}
}
//#stash
@Test
public void using_Stash() {
final ActorRef actor = system.actorOf(Props.create(ActorWithProtocol.class), "stash");
}
static
//#watch
public class WatchActor extends AbstractActor {
private final ActorRef child = getContext().actorOf(Props.empty(), "target");
private ActorRef lastSender = system.deadLetters();
public WatchActor() {
getContext().watch(child); // <-- this is the only call needed for registration
}
@Override
public Receive createReceive() {
return receiveBuilder()
.matchEquals("kill", s -> {
getContext().stop(child);
lastSender = getSender();
})
.match(Terminated.class, t -> t.actor().equals(child), t -> {
lastSender.tell("finished", getSelf());
})
.build();
}
}
//#watch
@Test
public void using_watch() {
ActorRef actor = system.actorOf(Props.create(WatchActor.class));
new TestKit(system) {
{
actor.tell("kill", getRef());
expectMsgEquals("finished");
}
};
}
static
//#identify
public class Follower extends AbstractActor {
final Integer identifyId = 1;
public Follower(){
ActorSelection selection = getContext().actorSelection("/user/another");
selection.tell(new Identify(identifyId), getSelf());
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(ActorIdentity.class, id -> id.getActorRef().isPresent(), id -> {
ActorRef ref = id.getActorRef().get();
getContext().watch(ref);
getContext().become(active(ref));
})
.match(ActorIdentity.class, id -> !id.getActorRef().isPresent(), id -> {
getContext().stop(getSelf());
})
.build();
}
final AbstractActor.Receive active(final ActorRef another) {
return receiveBuilder()
.match(Terminated.class, t -> t.actor().equals(another), t ->
getContext().stop(getSelf())
)
.build();
}
}
//#identify
@Test
public void using_Identify() {
ActorRef a = system.actorOf(Props.empty());
ActorRef b = system.actorOf(Props.create(Follower.class));
new TestKit(system) {
{
watch(b);
system.stop(a);
assertEquals(expectMsgClass(Duration.create(2, TimeUnit.SECONDS), Terminated.class).actor(), b);
}
};
}
@Test
public void usePatternsAskPipe() {
new TestKit(system) {
{
ActorRef actorA = system.actorOf(TestActors.echoActorProps());
ActorRef actorB = system.actorOf(TestActors.echoActorProps());
ActorRef actorC = getRef();
//#ask-pipe
Timeout t = new Timeout(Duration.create(5, TimeUnit.SECONDS));
// using 1000ms timeout
CompletableFuture<Object> future1 =
ask(actorA, "request", 1000).toCompletableFuture();
// using timeout from above
CompletableFuture<Object> future2 =
ask(actorB, "another request", t).toCompletableFuture();
CompletableFuture<Result> transformed =
CompletableFuture.allOf(future1, future2)
.thenApply(v -> {
String x = (String) future1.join();
String s = (String) future2.join();
return new Result(x, s);
});
pipe(transformed, system.dispatcher()).to(actorC);
//#ask-pipe
expectMsgEquals(new Result("request", "another request"));
}
};
}
@Test
public void useKill() {
new TestKit(system) {
{
ActorRef victim = system.actorOf(TestActors.echoActorProps());
watch(victim);
//#kill
victim.tell(akka.actor.Kill.getInstance(), ActorRef.noSender());
//#kill
expectTerminated(Duration.create(3, TimeUnit.SECONDS), victim);
}
};
}
@Test
public void usePoisonPill() {
new TestKit(system) {
{
ActorRef victim = system.actorOf(TestActors.echoActorProps());
watch(victim);
//#poison-pill
victim.tell(akka.actor.PoisonPill.getInstance(), ActorRef.noSender());
//#poison-pill
expectTerminated(Duration.create(3, TimeUnit.SECONDS), victim);
}
};
}
@Test
public void coordinatedShutdown() {
final ActorRef someActor = system.actorOf(Props.create(FirstActor.class));
//#coordinated-shutdown-addTask
CoordinatedShutdown.get(system).addTask(
CoordinatedShutdown.PhaseBeforeServiceUnbind(), "someTaskName",
() -> {
return akka.pattern.PatternsCS.ask(someActor, "stop", new Timeout(5, TimeUnit.SECONDS))
.thenApply(reply -> Done.getInstance());
});
//#coordinated-shutdown-addTask
//#coordinated-shutdown-jvm-hook
CoordinatedShutdown.get(system).addJvmShutdownHook(() ->
System.out.println("custom JVM shutdown hook...")
);
//#coordinated-shutdown-jvm-hook
// don't run this
if (false) {
//#coordinated-shutdown-run
CompletionStage<Done> done = CoordinatedShutdown.get(system).runAll();
//#coordinated-shutdown-run
}
}
}

@ -1,79 +0,0 @@
/*
* Copyright (C) 2016-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor;
//#bytebufserializer-with-manifest
import akka.serialization.ByteBufferSerializer;
import akka.serialization.SerializerWithStringManifest;
//#bytebufserializer-with-manifest
import java.nio.ByteBuffer;
public class ByteBufferSerializerDocTest {
static //#bytebufserializer-with-manifest
class ExampleByteBufSerializer extends SerializerWithStringManifest
implements ByteBufferSerializer {
@Override
public int identifier() {
return 1337;
}
@Override
public String manifest(Object o) {
return "serialized-" + o.getClass().getSimpleName();
}
@Override
public byte[] toBinary(Object o) {
// in production code, acquire this from a BufferPool
final ByteBuffer buf = ByteBuffer.allocate(256);
toBinary(o, buf);
buf.flip();
final byte[] bytes = new byte[buf.remaining()];
buf.get(bytes);
return bytes;
}
@Override
public Object fromBinary(byte[] bytes, String manifest) {
return fromBinary(ByteBuffer.wrap(bytes), manifest);
}
@Override
public void toBinary(Object o, ByteBuffer buf) {
// Implement actual serialization here
}
@Override
public Object fromBinary(ByteBuffer buf, String manifest) {
// Implement actual deserialization here
return null;
}
}
//#bytebufserializer-with-manifest
static class OnlyForDocInclude {
static
//#ByteBufferSerializer-interface
interface ByteBufferSerializer {
/**
* Serializes the given object into the `ByteBuffer`.
*/
void toBinary(Object o, ByteBuffer buf);
/**
* Produces an object from a `ByteBuffer`, with an optional type-hint;
* the class should be loaded using ActorSystem.dynamicAccess.
*/
Object fromBinary(ByteBuffer buf, String manifest);
}
//#ByteBufferSerializer-interface
}
}

@ -1,105 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor;
import jdocs.AbstractJavaTest;
import akka.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import scala.concurrent.Await;
import scala.concurrent.duration.Duration;
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
//#import
import akka.actor.Actor;
import akka.actor.IndirectActorProducer;
//#import
public class DependencyInjectionDocTest extends AbstractJavaTest {
public static class TheActor extends AbstractActor {
final String s;
public TheActor(String s) {
this.s = s;
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(String.class, msg -> {
getSender().tell(s, getSelf());
})
.build();
}
}
static ActorSystem system = null;
@BeforeClass
public static void beforeClass() {
system = ActorSystem.create("DependencyInjectionDocTest");
}
@AfterClass
public static void afterClass() throws Exception {
Await.ready(system.terminate(), Duration.create("5 seconds"));
}
//this is just to make the test below a tiny fraction nicer
private ActorSystem getContext() {
return system;
}
static
//#creating-indirectly
class DependencyInjector implements IndirectActorProducer {
final Object applicationContext;
final String beanName;
public DependencyInjector(Object applicationContext, String beanName) {
this.applicationContext = applicationContext;
this.beanName = beanName;
}
@Override
public Class<? extends Actor> actorClass() {
return TheActor.class;
}
@Override
public TheActor produce() {
TheActor result;
//#obtain-fresh-Actor-instance-from-DI-framework
result = new TheActor((String) applicationContext);
//#obtain-fresh-Actor-instance-from-DI-framework
return result;
}
}
//#creating-indirectly
@Test
public void indirectActorOf() {
final String applicationContext = "...";
//#creating-indirectly
final ActorRef myActor = getContext().actorOf(
Props.create(DependencyInjector.class, applicationContext, "TheActor"),
"TheActor");
//#creating-indirectly
new TestKit(system) {
{
myActor.tell("hello", getRef());
expectMsgEquals("...");
}
};
}
}

@ -1,469 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor;
//#all
//#imports
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import akka.actor.*;
import akka.dispatch.Mapper;
import akka.event.LoggingReceive;
import akka.japi.pf.DeciderBuilder;
import akka.pattern.Patterns;
import akka.util.Timeout;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import scala.concurrent.duration.Duration;
import static akka.japi.Util.classTag;
import static akka.actor.SupervisorStrategy.restart;
import static akka.actor.SupervisorStrategy.stop;
import static akka.actor.SupervisorStrategy.escalate;
import static akka.pattern.Patterns.ask;
import static akka.pattern.Patterns.pipe;
import static jdocs.actor.FaultHandlingDocSample.WorkerApi.*;
import static jdocs.actor.FaultHandlingDocSample.CounterServiceApi.*;
import static jdocs.actor.FaultHandlingDocSample.CounterApi.*;
import static jdocs.actor.FaultHandlingDocSample.StorageApi.*;
//#imports
public class FaultHandlingDocSample {
/**
* Runs the sample
*/
public static void main(String[] args) {
Config config = ConfigFactory.parseString(
"akka.loglevel = \"DEBUG\"\n" +
"akka.actor.debug {\n" +
" receive = on\n" +
" lifecycle = on\n" +
"}\n");
ActorSystem system = ActorSystem.create("FaultToleranceSample", config);
ActorRef worker = system.actorOf(Props.create(Worker.class), "worker");
ActorRef listener = system.actorOf(Props.create(Listener.class), "listener");
// start the work and listen on progress
// note that the listener is used as sender of the tell,
// i.e. it will receive replies from the worker
worker.tell(Start, listener);
}
/**
* Listens for progress from the worker and shuts down the system when enough
* work has been done.
*/
public static class Listener extends AbstractLoggingActor {
@Override
public void preStart() {
// If we don't get any progress within 15 seconds then the service
// is unavailable
getContext().setReceiveTimeout(Duration.create("15 seconds"));
}
@Override
public Receive createReceive() {
return LoggingReceive.create(receiveBuilder().
match(Progress.class, progress -> {
log().info("Current progress: {} %", progress.percent);
if (progress.percent >= 100.0) {
log().info("That's all, shutting down");
getContext().getSystem().terminate();
}
}).
matchEquals(ReceiveTimeout.getInstance(), x -> {
// No progress within 15 seconds, ServiceUnavailable
log().error("Shutting down due to unavailable service");
getContext().getSystem().terminate();
}).build(), getContext());
}
}
//#messages
public interface WorkerApi {
public static final Object Start = "Start";
public static final Object Do = "Do";
public static class Progress {
public final double percent;
public Progress(double percent) {
this.percent = percent;
}
public String toString() {
return String.format("%s(%s)", getClass().getSimpleName(), percent);
}
}
}
//#messages
/**
* Worker performs some work when it receives the Start message. It will
* continuously notify the sender of the Start message of current Progress.
* The Worker supervises the CounterService.
*/
public static class Worker extends AbstractLoggingActor {
final Timeout askTimeout = new Timeout(Duration.create(5, "seconds"));
// The sender of the initial Start message will continuously be notified
// about progress
ActorRef progressListener;
final ActorRef counterService = getContext().actorOf(
Props.create(CounterService.class), "counter");
final int totalCount = 51;
// Stop the CounterService child if it throws ServiceUnavailable
private static final SupervisorStrategy strategy =
new OneForOneStrategy(DeciderBuilder.
match(ServiceUnavailable.class, e -> stop()).
matchAny(o -> escalate()).build());
@Override
public SupervisorStrategy supervisorStrategy() {
return strategy;
}
@Override
public Receive createReceive() {
return LoggingReceive.create(receiveBuilder().
matchEquals(Start, x -> progressListener == null, x -> {
progressListener = getSender();
getContext().getSystem().scheduler().schedule(
Duration.Zero(), Duration.create(1, "second"), getSelf(), Do,
getContext().dispatcher(), null
);
}).
matchEquals(Do, x -> {
counterService.tell(new Increment(1), getSelf());
counterService.tell(new Increment(1), getSelf());
counterService.tell(new Increment(1), getSelf());
// Send current progress to the initial sender
pipe(Patterns.ask(counterService, GetCurrentCount, askTimeout)
.mapTo(classTag(CurrentCount.class))
.map(new Mapper<CurrentCount, Progress>() {
public Progress apply(CurrentCount c) {
return new Progress(100.0 * c.count / totalCount);
}
}, getContext().dispatcher()), getContext().dispatcher())
.to(progressListener);
}).build(), getContext());
}
}
//#messages
public interface CounterServiceApi {
public static final Object GetCurrentCount = "GetCurrentCount";
public static class CurrentCount {
public final String key;
public final long count;
public CurrentCount(String key, long count) {
this.key = key;
this.count = count;
}
public String toString() {
return String.format("%s(%s, %s)", getClass().getSimpleName(), key, count);
}
}
public static class Increment {
public final long n;
public Increment(long n) {
this.n = n;
}
public String toString() {
return String.format("%s(%s)", getClass().getSimpleName(), n);
}
}
public static class ServiceUnavailable extends RuntimeException {
private static final long serialVersionUID = 1L;
public ServiceUnavailable(String msg) {
super(msg);
}
}
}
//#messages
/**
* Adds the value received in Increment message to a persistent counter.
* Replies with CurrentCount when it is asked for CurrentCount. CounterService
* supervises Storage and Counter.
*/
public static class CounterService extends AbstractLoggingActor {
// Reconnect message
static final Object Reconnect = "Reconnect";
private static class SenderMsgPair {
final ActorRef sender;
final Object msg;
SenderMsgPair(ActorRef sender, Object msg) {
this.msg = msg;
this.sender = sender;
}
}
final String key = getSelf().path().name();
ActorRef storage;
ActorRef counter;
final List<SenderMsgPair> backlog = new ArrayList<>();
final int MAX_BACKLOG = 10000;
// Restart the storage child when StorageException is thrown.
// After 3 restarts within 5 seconds it will be stopped.
private static final SupervisorStrategy strategy =
new OneForOneStrategy(3, Duration.create("5 seconds"), DeciderBuilder.
match(StorageException.class, e -> restart()).
matchAny(o -> escalate()).build());
@Override
public SupervisorStrategy supervisorStrategy() {
return strategy;
}
@Override
public void preStart() {
initStorage();
}
/**
* The child storage is restarted in case of failure, but if it is still
* failing after 3 restarts it will be stopped. Better to back off than to
* keep failing. When it has been stopped we will schedule a Reconnect after
* a delay. We watch the child so that we receive a Terminated message when
* it has been terminated.
*/
void initStorage() {
storage = getContext().watch(getContext().actorOf(
Props.create(Storage.class), "storage"));
// Tell the counter, if any, to use the new storage
if (counter != null)
counter.tell(new UseStorage(storage), getSelf());
// We need the initial value to be able to operate
storage.tell(new Get(key), getSelf());
}
@Override
public Receive createReceive() {
return LoggingReceive.create(receiveBuilder().
match(Entry.class, entry -> entry.key.equals(key) && counter == null, entry -> {
// Reply from Storage of the initial value, now we can create the Counter
final long value = entry.value;
counter = getContext().actorOf(Props.create(Counter.class, key, value));
// Tell the counter to use current storage
counter.tell(new UseStorage(storage), getSelf());
// and send the buffered backlog to the counter
for (SenderMsgPair each : backlog) {
counter.tell(each.msg, each.sender);
}
backlog.clear();
}).
match(Increment.class, increment -> {
forwardOrPlaceInBacklog(increment);
}).
matchEquals(GetCurrentCount, gcc -> {
forwardOrPlaceInBacklog(gcc);
}).
match(Terminated.class, o -> {
// After 3 restarts the storage child is stopped.
// We receive Terminated because we watch the child, see initStorage.
storage = null;
// Tell the counter that there is no storage for the moment
counter.tell(new UseStorage(null), getSelf());
// Try to re-establish storage after a while
getContext().getSystem().scheduler().scheduleOnce(
Duration.create(10, "seconds"), getSelf(), Reconnect,
getContext().dispatcher(), null);
}).
matchEquals(Reconnect, o -> {
// Re-establish storage after the scheduled delay
initStorage();
}).build(), getContext());
}
void forwardOrPlaceInBacklog(Object msg) {
// We need the initial value from storage before we can start delegating to
// the counter. Before that we place the messages in a backlog, to be sent
// to the counter when it is initialized.
if (counter == null) {
if (backlog.size() >= MAX_BACKLOG)
throw new ServiceUnavailable("CounterService not available," +
" lack of initial value");
backlog.add(new SenderMsgPair(getSender(), msg));
} else {
counter.forward(msg, getContext());
}
}
}
//#messages
public interface CounterApi {
public static class UseStorage {
public final ActorRef storage;
public UseStorage(ActorRef storage) {
this.storage = storage;
}
public String toString() {
return String.format("%s(%s)", getClass().getSimpleName(), storage);
}
}
}
//#messages
/**
* The in-memory count variable that will send its current value to the Storage,
* if there is any storage available at the moment.
*/
public static class Counter extends AbstractLoggingActor {
final String key;
long count;
ActorRef storage;
public Counter(String key, long initialValue) {
this.key = key;
this.count = initialValue;
}
@Override
public Receive createReceive() {
return LoggingReceive.create(receiveBuilder().
match(UseStorage.class, useStorage -> {
storage = useStorage.storage;
storeCount();
}).
match(Increment.class, increment -> {
count += increment.n;
storeCount();
}).
matchEquals(GetCurrentCount, gcc -> {
getSender().tell(new CurrentCount(key, count), getSelf());
}).build(), getContext());
}
void storeCount() {
// Delegate dangerous work, to protect our valuable state.
// We can continue without storage.
if (storage != null) {
storage.tell(new Store(new Entry(key, count)), getSelf());
}
}
}
//#messages
public interface StorageApi {
public static class Store {
public final Entry entry;
public Store(Entry entry) {
this.entry = entry;
}
public String toString() {
return String.format("%s(%s)", getClass().getSimpleName(), entry);
}
}
public static class Entry {
public final String key;
public final long value;
public Entry(String key, long value) {
this.key = key;
this.value = value;
}
public String toString() {
return String.format("%s(%s, %s)", getClass().getSimpleName(), key, value);
}
}
public static class Get {
public final String key;
public Get(String key) {
this.key = key;
}
public String toString() {
return String.format("%s(%s)", getClass().getSimpleName(), key);
}
}
public static class StorageException extends RuntimeException {
private static final long serialVersionUID = 1L;
public StorageException(String msg) {
super(msg);
}
}
}
//#messages
/**
* Saves key/value pairs to persistent storage when receiving Store message.
* Replies with the current value when receiving a Get message. Will throw
* StorageException if the underlying data store is malfunctioning.
*/
public static class Storage extends AbstractLoggingActor {
final DummyDB db = DummyDB.instance;
@Override
public Receive createReceive() {
return LoggingReceive.create(receiveBuilder().
match(Store.class, store -> {
db.save(store.entry.key, store.entry.value);
}).
match(Get.class, get -> {
Long value = db.load(get.key);
getSender().tell(new Entry(get.key, value == null ?
Long.valueOf(0L) : value), getSelf());
}).build(), getContext());
}
}
//#dummydb
public static class DummyDB {
public static final DummyDB instance = new DummyDB();
private final Map<String, Long> db = new HashMap<String, Long>();
private DummyDB() {
}
public synchronized void save(String key, Long value) throws StorageException {
if (11 <= value && value <= 14)
throw new StorageException("Simulated store failure " + value);
db.put(key, value);
}
public synchronized Long load(String key) throws StorageException {
return db.get(key);
}
}
//#dummydb
}
//#all

@ -1,212 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor;
import akka.actor.*;
import akka.testkit.javadsl.TestKit;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import jdocs.AbstractJavaTest;
import java.util.Optional;
import static akka.pattern.Patterns.ask;
//#testkit
import akka.testkit.TestProbe;
import akka.testkit.ErrorFilter;
import akka.testkit.EventFilter;
import akka.testkit.TestEvent;
import scala.concurrent.duration.Duration;
import static java.util.concurrent.TimeUnit.SECONDS;
import static akka.japi.Util.immutableSeq;
import scala.concurrent.Await;
//#testkit
//#supervisor
import akka.japi.pf.DeciderBuilder;
import static akka.actor.SupervisorStrategy.resume;
import static akka.actor.SupervisorStrategy.restart;
import static akka.actor.SupervisorStrategy.stop;
import static akka.actor.SupervisorStrategy.escalate;
//#supervisor
import org.junit.Test;
import org.junit.BeforeClass;
import org.junit.AfterClass;
//#testkit
public class FaultHandlingTest extends AbstractJavaTest {
//#testkit
public static Config config = ConfigFactory.parseString(
"akka {\n" +
" loggers = [\"akka.testkit.TestEventListener\"]\n" +
" loglevel = \"WARNING\"\n" +
" stdout-loglevel = \"WARNING\"\n" +
"}\n");
static
//#supervisor
public class Supervisor extends AbstractActor {
//#strategy
private static SupervisorStrategy strategy =
new OneForOneStrategy(10, Duration.create("1 minute"), DeciderBuilder.
match(ArithmeticException.class, e -> resume()).
match(NullPointerException.class, e -> restart()).
match(IllegalArgumentException.class, e -> stop()).
matchAny(o -> escalate()).build());
@Override
public SupervisorStrategy supervisorStrategy() {
return strategy;
}
//#strategy
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Props.class, props -> {
getSender().tell(getContext().actorOf(props), getSelf());
})
.build();
}
}
//#supervisor
static
//#supervisor2
public class Supervisor2 extends AbstractActor {
//#strategy2
private static SupervisorStrategy strategy =
new OneForOneStrategy(10, Duration.create("1 minute"), DeciderBuilder.
match(ArithmeticException.class, e -> resume()).
match(NullPointerException.class, e -> restart()).
match(IllegalArgumentException.class, e -> stop()).
matchAny(o -> escalate()).build());
@Override
public SupervisorStrategy supervisorStrategy() {
return strategy;
}
//#strategy2
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Props.class, props -> {
getSender().tell(getContext().actorOf(props), getSelf());
})
.build();
}
@Override
public void preRestart(Throwable cause, Optional<Object> msg) {
// do not stop all children here, which is what the default implementation does
}
}
//#supervisor2
static
//#child
public class Child extends AbstractActor {
int state = 0;
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Exception.class, exception -> { throw exception; })
.match(Integer.class, i -> state = i)
.matchEquals("get", s -> getSender().tell(state, getSelf()))
.build();
}
}
//#child
//#testkit
static ActorSystem system;
Duration timeout = Duration.create(5, SECONDS);
@BeforeClass
public static void start() {
system = ActorSystem.create("FaultHandlingTest", config);
}
@AfterClass
public static void cleanup() {
TestKit.shutdownActorSystem(system);
system = null;
}
@Test
public void mustEmploySupervisorStrategy() throws Exception {
// code here
//#testkit
EventFilter ex1 = new ErrorFilter(ArithmeticException.class);
EventFilter ex2 = new ErrorFilter(NullPointerException.class);
EventFilter ex3 = new ErrorFilter(IllegalArgumentException.class);
EventFilter ex4 = new ErrorFilter(Exception.class);
EventFilter[] ignoreExceptions = { ex1, ex2, ex3, ex4 };
system.eventStream().publish(new TestEvent.Mute(immutableSeq(ignoreExceptions)));
//#create
Props superprops = Props.create(Supervisor.class);
ActorRef supervisor = system.actorOf(superprops, "supervisor");
ActorRef child = (ActorRef) Await.result(ask(supervisor,
Props.create(Child.class), 5000), timeout);
//#create
//#resume
child.tell(42, ActorRef.noSender());
assert Await.result(ask(child, "get", 5000), timeout).equals(42);
child.tell(new ArithmeticException(), ActorRef.noSender());
assert Await.result(ask(child, "get", 5000), timeout).equals(42);
//#resume
//#restart
child.tell(new NullPointerException(), ActorRef.noSender());
assert Await.result(ask(child, "get", 5000), timeout).equals(0);
//#restart
//#stop
final TestProbe probe = new TestProbe(system);
probe.watch(child);
child.tell(new IllegalArgumentException(), ActorRef.noSender());
probe.expectMsgClass(Terminated.class);
//#stop
//#escalate-kill
child = (ActorRef) Await.result(ask(supervisor,
Props.create(Child.class), 5000), timeout);
probe.watch(child);
assert Await.result(ask(child, "get", 5000), timeout).equals(0);
child.tell(new Exception(), ActorRef.noSender());
probe.expectMsgClass(Terminated.class);
//#escalate-kill
//#escalate-restart
superprops = Props.create(Supervisor2.class);
supervisor = system.actorOf(superprops);
child = (ActorRef) Await.result(ask(supervisor,
Props.create(Child.class), 5000), timeout);
child.tell(23, ActorRef.noSender());
assert Await.result(ask(child, "get", 5000), timeout).equals(23);
child.tell(new Exception(), ActorRef.noSender());
assert Await.result(ask(child, "get", 5000), timeout).equals(0);
//#escalate-restart
//#testkit
}
}
//#testkit

View file

@ -1,39 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor;
//#imports
import akka.actor.AbstractActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import akka.japi.pf.ReceiveBuilder;
//#imports
//#actor
public class GraduallyBuiltActor extends AbstractActor {
private final LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
@Override
public Receive createReceive() {
ReceiveBuilder builder = ReceiveBuilder.create();
builder.match(String.class, s -> {
log.info("Received String message: {}", s);
//#actor
//#reply
getSender().tell(s, getSelf());
//#reply
//#actor
});
// do some other stuff in between
builder.matchAny(o -> log.info("received unknown message"));
return builder.build();
}
}
//#actor

@ -1,28 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
//#immutable-message
public class ImmutableMessage {
private final int sequenceNumber;
private final List<String> values;
public ImmutableMessage(int sequenceNumber, List<String> values) {
this.sequenceNumber = sequenceNumber;
this.values = Collections.unmodifiableList(new ArrayList<String>(values));
}
public int getSequenceNumber() {
return sequenceNumber;
}
public List<String> getValues() {
return values;
}
}
//#immutable-message

@ -1,66 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor;
import java.util.concurrent.TimeUnit;
import akka.testkit.AkkaJUnitActorSystemResource;
import jdocs.AbstractJavaTest;
import akka.testkit.javadsl.TestKit;
import org.junit.ClassRule;
import org.junit.Test;
import scala.concurrent.duration.Duration;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Inbox;
import akka.actor.PoisonPill;
import akka.actor.Terminated;
import akka.testkit.AkkaSpec;
public class InboxDocTest extends AbstractJavaTest {
@ClassRule
public static AkkaJUnitActorSystemResource actorSystemResource =
new AkkaJUnitActorSystemResource("InboxDocTest", AkkaSpec.testConf());
private final ActorSystem system = actorSystemResource.getSystem();
@Test
public void demonstrateInbox() {
final TestKit probe = new TestKit(system);
final ActorRef target = probe.getRef();
//#inbox
final Inbox inbox = Inbox.create(system);
inbox.send(target, "hello");
//#inbox
probe.expectMsgEquals("hello");
probe.send(probe.getLastSender(), "world");
//#inbox
try {
assert inbox.receive(Duration.create(1, TimeUnit.SECONDS)).equals("world");
} catch (java.util.concurrent.TimeoutException e) {
// timeout
}
//#inbox
}
@Test
public void demonstrateWatch() {
final TestKit probe = new TestKit(system);
final ActorRef target = probe.getRef();
//#watch
final Inbox inbox = Inbox.create(system);
inbox.watch(target);
target.tell(PoisonPill.getInstance(), ActorRef.noSender());
try {
assert inbox.receive(Duration.create(1, TimeUnit.SECONDS)) instanceof Terminated;
} catch (java.util.concurrent.TimeoutException e) {
// timeout
}
//#watch
}
}

@ -1,161 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor;
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.japi.pf.FI;
import jdocs.AbstractJavaTest;
import akka.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import scala.concurrent.Await;
import scala.concurrent.duration.Duration;
import java.util.Optional;
import java.util.concurrent.TimeUnit;
public class InitializationDocTest extends AbstractJavaTest {
static ActorSystem system = null;
@BeforeClass
public static void beforeClass() {
system = ActorSystem.create("InitializationDocTest");
}
@AfterClass
public static void afterClass() throws Exception {
Await.ready(system.terminate(), Duration.create("5 seconds"));
}
static public class PreStartInitExample extends AbstractActor {
@Override
public Receive createReceive() {
return AbstractActor.emptyBehavior();
}
//#preStartInit
@Override
public void preStart() {
// Initialize children here
}
// Overriding postRestart to disable the call to preStart()
// after restarts
@Override
public void postRestart(Throwable reason) {
}
// The default implementation of preRestart() stops all the children
// of the actor. To opt out of stopping the children, we
// have to override preRestart()
@Override
public void preRestart(Throwable reason, Optional<Object> message)
throws Exception {
// Keep the call to postStop(), but no stopping of children
postStop();
}
//#preStartInit
}
public static class MessageInitExample extends AbstractActor {
private String initializeMe = null;
//#messageInit
@Override
public Receive createReceive() {
return receiveBuilder()
.matchEquals("init", m1 -> {
initializeMe = "Up and running";
getContext().become(receiveBuilder()
.matchEquals("U OK?", m2 -> {
getSender().tell(initializeMe, getSelf());
})
.build());
})
.build();
}
//#messageInit
}
public class GenericMessage<T> {
T value;
public GenericMessage(T value) {
this.value = value;
}
}
public static class GenericActor extends AbstractActor {
@Override
public Receive createReceive() {
return receiveBuilder()
.matchUnchecked(GenericMessage.class, (GenericMessage<String> msg) -> {
GenericMessage<String> message = msg;
getSender().tell(message.value.toUpperCase(), getSelf());
})
.build();
}
}
static class GenericActorWithPredicate extends AbstractActor {
@Override
public Receive createReceive() {
FI.TypedPredicate<GenericMessage<String>> typedPredicate = s -> !s.value.isEmpty();
return receiveBuilder()
.matchUnchecked(GenericMessage.class, typedPredicate, (GenericMessage<String> msg) -> {
getSender().tell(msg.value.toUpperCase(), getSelf());
})
.build();
}
}
@Test
public void testIt() {
new TestKit(system) {{
ActorRef testactor = system.actorOf(Props.create(MessageInitExample.class), "testactor");
String msg = "U OK?";
testactor.tell(msg, getRef());
expectNoMsg(Duration.create(1, TimeUnit.SECONDS));
testactor.tell("init", getRef());
testactor.tell(msg, getRef());
expectMsgEquals("Up and running");
}};
}
@Test
public void testGenericActor() {
new TestKit(system) {{
ActorRef genericTestActor = system.actorOf(Props.create(GenericActor.class), "genericActor");
GenericMessage<String> genericMessage = new GenericMessage<String>("a");
genericTestActor.tell(genericMessage, getRef());
expectMsgEquals("A");
}};
}
@Test
public void actorShouldNotRespondForEmptyMessage() {
new TestKit(system) {{
ActorRef genericTestActor = system.actorOf(Props.create(GenericActorWithPredicate.class), "genericActorWithPredicate");
GenericMessage<String> emptyGenericMessage = new GenericMessage<String>("");
GenericMessage<String> nonEmptyGenericMessage = new GenericMessage<String>("a");
genericTestActor.tell(emptyGenericMessage, getRef());
expectNoMsg();
genericTestActor.tell(nonEmptyGenericMessage, getRef());
expectMsgEquals("A");
}};
}
}

@ -1,149 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
public class Messages {
static
//#immutable-message
public class ImmutableMessage {
private final int sequenceNumber;
private final List<String> values;
public ImmutableMessage(int sequenceNumber, List<String> values) {
this.sequenceNumber = sequenceNumber;
this.values = Collections.unmodifiableList(new ArrayList<String>(values));
}
public int getSequenceNumber() {
return sequenceNumber;
}
public List<String> getValues() {
return values;
}
}
//#immutable-message
public static class DoIt {
private final ImmutableMessage msg;
DoIt(ImmutableMessage msg) {
this.msg = msg;
}
public ImmutableMessage getMsg() {
return msg;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
DoIt doIt = (DoIt) o;
if (!msg.equals(doIt.msg)) return false;
return true;
}
@Override
public int hashCode() {
return msg.hashCode();
}
@Override
public String toString() {
return "DoIt{" +
"msg=" + msg +
'}';
}
}
public static class Message {
final String str;
Message(String str) {
this.str = str;
}
public String getStr() {
return str;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Message message = (Message) o;
if (!str.equals(message.str)) return false;
return true;
}
@Override
public int hashCode() {
return str.hashCode();
}
@Override
public String toString() {
return "Message{" +
"str='" + str + '\'' +
'}';
}
}
public static enum Swap {
Swap
}
public static class Result {
final String x;
final String s;
public Result(String x, String s) {
this.x = x;
this.s = s;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((s == null) ? 0 : s.hashCode());
result = prime * result + ((x == null) ? 0 : x.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
Result other = (Result) obj;
if (s == null) {
if (other.s != null)
return false;
} else if (!s.equals(other.s))
return false;
if (x == null) {
if (other.x != null)
return false;
} else if (!x.equals(other.x))
return false;
return true;
}
}
}

@ -1,33 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor;
//#imports
import akka.actor.AbstractActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;
//#imports
//#my-actor
public class MyActor extends AbstractActor {
private final LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
@Override
public Receive createReceive() {
return receiveBuilder()
.match(String.class, s -> {
log.info("Received String message: {}", s);
//#my-actor
//#reply
getSender().tell(s, getSelf());
//#reply
//#my-actor
})
.matchAny(o -> log.info("received unknown message"))
.build();
}
}
//#my-actor

@ -1,14 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor;
//#my-bounded-untyped-actor
import akka.dispatch.BoundedMessageQueueSemantics;
import akka.dispatch.RequiresMessageQueue;
public class MyBoundedActor extends MyActor
implements RequiresMessageQueue<BoundedMessageQueueSemantics> {
}
//#my-bounded-untyped-actor

@ -1,29 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor;
//#my-stopping-actor
import akka.actor.ActorRef;
import akka.actor.AbstractActor;
public class MyStoppingActor extends AbstractActor {
ActorRef child = null;
// ... creation of child ...
@Override
public Receive createReceive() {
return receiveBuilder()
.matchEquals("interrupt-child", m ->
getContext().stop(child)
)
.matchEquals("done", m ->
getContext().stop(getSelf())
)
.build();
}
}
//#my-stopping-actor

@ -1,35 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor;
//#sample-actor
import akka.actor.AbstractActor;
public class SampleActor extends AbstractActor {
private Receive guarded = receiveBuilder()
.match(String.class, s -> s.contains("guard"), s -> {
getSender().tell("contains(guard): " + s, getSelf());
getContext().unbecome();
})
.build();
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Double.class, d -> {
getSender().tell(d.isNaN() ? 0 : d, getSelf());
})
.match(Integer.class, i -> {
getSender().tell(i * 10, getSelf());
})
.match(String.class, s -> s.startsWith("guard"), s -> {
getSender().tell("startsWith(guard): " + s.toUpperCase(), getSelf());
getContext().become(guarded, false);
})
.build();
}
}
//#sample-actor

@ -1,56 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import jdocs.AbstractJavaTest;
import akka.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import static org.junit.Assert.*;
public class SampleActorTest extends AbstractJavaTest {
static ActorSystem system;
@BeforeClass
public static void setup() {
system = ActorSystem.create("SampleActorTest");
}
@AfterClass
public static void tearDown() {
TestKit.shutdownActorSystem(system);
system = null;
}
@Test
public void testSampleActor()
{
new TestKit(system) {{
final ActorRef subject = system.actorOf(Props.create(SampleActor.class), "sample-actor");
final ActorRef probeRef = getRef();
subject.tell(47.11, probeRef);
subject.tell("and no guard in the beginning", probeRef);
subject.tell("guard is a good thing", probeRef);
subject.tell(47.11, probeRef);
subject.tell(4711, probeRef);
subject.tell("and no guard in the beginning", probeRef);
subject.tell(4711, probeRef);
subject.tell("and an unmatched message", probeRef);
expectMsgEquals(47.11);
assertTrue(expectMsgClass(String.class).startsWith("startsWith(guard):"));
assertTrue(expectMsgClass(String.class).startsWith("contains(guard):"));
expectMsgEquals(47110);
expectNoMsg();
}};
}
}

View file

@ -1,78 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor;
//#imports1
import akka.actor.Props;
import jdocs.AbstractJavaTest;
import scala.concurrent.duration.Duration;
import java.util.concurrent.TimeUnit;
//#imports1
//#imports2
import akka.actor.Cancellable;
//#imports2
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.testkit.AkkaSpec;
import akka.testkit.AkkaJUnitActorSystemResource;
import org.junit.*;
public class SchedulerDocTest extends AbstractJavaTest {
@ClassRule
public static AkkaJUnitActorSystemResource actorSystemResource = new AkkaJUnitActorSystemResource("SchedulerDocTest",
AkkaSpec.testConf());
private final ActorSystem system = actorSystemResource.getSystem();
private ActorRef testActor = system.actorOf(Props.create(MyActor.class));
@Test
public void scheduleOneOffTask() {
//#schedule-one-off-message
system.scheduler().scheduleOnce(Duration.create(50, TimeUnit.MILLISECONDS),
testActor, "foo", system.dispatcher(), null);
//#schedule-one-off-message
//#schedule-one-off-thunk
system.scheduler().scheduleOnce(Duration.create(50, TimeUnit.MILLISECONDS),
new Runnable() {
@Override
public void run() {
testActor.tell(System.currentTimeMillis(), ActorRef.noSender());
}
}, system.dispatcher());
//#schedule-one-off-thunk
}
@Test
public void scheduleRecurringTask() {
//#schedule-recurring
class Ticker extends AbstractActor {
@Override
public Receive createReceive() {
return receiveBuilder()
.matchEquals("Tick", m -> {
// Do something
})
.build();
}
}
ActorRef tickActor = system.actorOf(Props.create(Ticker.class, this));
//This schedules the Tick message to be sent
//to the tickActor immediately and then once every 50ms
Cancellable cancellable = system.scheduler().schedule(Duration.Zero(),
Duration.create(50, TimeUnit.MILLISECONDS), tickActor, "Tick",
system.dispatcher(), null);
//This cancels any further Ticks from being sent
cancellable.cancel();
//#schedule-recurring
system.stop(tickActor);
}
}

View file

@ -1,252 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor;
//#imports
import akka.actor.TypedActor;
import akka.actor.*;
import akka.japi.*;
import akka.dispatch.Futures;
import jdocs.AbstractJavaTest;
import scala.concurrent.Await;
import scala.concurrent.Future;
import scala.concurrent.duration.Duration;
import java.util.concurrent.TimeUnit;
import java.util.List;
import java.util.ArrayList;
import java.util.Random;
import akka.routing.RoundRobinGroup;
//#imports
import org.junit.Test;
import static org.junit.Assert.assertEquals;
public class TypedActorDocTest extends AbstractJavaTest {
Object someReference = null;
ActorSystem system = null;
static
//#typed-actor-iface
public interface Squarer {
//#typed-actor-iface-methods
void squareDontCare(int i); //fire-forget
Future<Integer> square(int i); //non-blocking send-request-reply
Option<Integer> squareNowPlease(int i);//blocking send-request-reply
int squareNow(int i); //blocking send-request-reply
//#typed-actor-iface-methods
}
//#typed-actor-iface
static
//#typed-actor-impl
class SquarerImpl implements Squarer {
private String name;
public SquarerImpl() {
this.name = "default";
}
public SquarerImpl(String name) {
this.name = name;
}
//#typed-actor-impl-methods
public void squareDontCare(int i) {
int sq = i * i; //Nobody cares :(
}
public Future<Integer> square(int i) {
return Futures.successful(i * i);
}
public Option<Integer> squareNowPlease(int i) {
return Option.some(i * i);
}
public int squareNow(int i) {
return i * i;
}
//#typed-actor-impl-methods
}
//#typed-actor-impl
@Test public void mustGetTheTypedActorExtension() {
try {
//#typed-actor-extension-tools
//Returns the Typed Actor Extension
TypedActorExtension extension =
TypedActor.get(system); //system is an instance of ActorSystem
//Returns whether the reference is a Typed Actor Proxy or not
TypedActor.get(system).isTypedActor(someReference);
//Returns the backing Akka Actor behind an external Typed Actor Proxy
TypedActor.get(system).getActorRefFor(someReference);
//Returns the current ActorContext,
// method only valid within methods of a TypedActor implementation
ActorContext context = TypedActor.context();
//Returns the external proxy of the current Typed Actor,
// method only valid within methods of a TypedActor implementation
Squarer sq = TypedActor.<Squarer>self();
//Returns a contextual instance of the Typed Actor Extension
//this means that if you create other Typed Actors with this,
//they will become children to the current Typed Actor.
TypedActor.get(TypedActor.context());
//#typed-actor-extension-tools
} catch (Exception e) {
//ignore
}
}
@Test public void createATypedActor() {
try {
//#typed-actor-create1
Squarer mySquarer =
TypedActor.get(system).typedActorOf(
new TypedProps<SquarerImpl>(Squarer.class, SquarerImpl.class));
//#typed-actor-create1
//#typed-actor-create2
Squarer otherSquarer =
TypedActor.get(system).typedActorOf(
new TypedProps<SquarerImpl>(Squarer.class,
new Creator<SquarerImpl>() {
public SquarerImpl create() { return new SquarerImpl("foo"); }
}),
"name");
//#typed-actor-create2
//#typed-actor-calls
//#typed-actor-call-oneway
mySquarer.squareDontCare(10);
//#typed-actor-call-oneway
//#typed-actor-call-future
Future<Integer> fSquare = mySquarer.square(10); //A Future[Int]
//#typed-actor-call-future
//#typed-actor-call-option
Option<Integer> oSquare = mySquarer.squareNowPlease(10); //Option[Int]
//#typed-actor-call-option
//#typed-actor-call-strict
int iSquare = mySquarer.squareNow(10); //Int
//#typed-actor-call-strict
//#typed-actor-calls
assertEquals(100, Await.result(fSquare,
Duration.create(3, TimeUnit.SECONDS)).intValue());
assertEquals(100, oSquare.get().intValue());
assertEquals(100, iSquare);
//#typed-actor-stop
TypedActor.get(system).stop(mySquarer);
//#typed-actor-stop
//#typed-actor-poisonpill
TypedActor.get(system).poisonPill(otherSquarer);
//#typed-actor-poisonpill
} catch(Exception e) {
//Ignore
}
}
@Test public void createHierarchies() {
try {
//#typed-actor-hierarchy
Squarer childSquarer =
TypedActor.get(TypedActor.context()).
typedActorOf(
new TypedProps<SquarerImpl>(Squarer.class, SquarerImpl.class)
);
//Use "childSquarer" as a Squarer
//#typed-actor-hierarchy
} catch (Exception e) {
//ignore
}
}
@Test public void proxyAnyActorRef() {
try {
final ActorRef actorRefToRemoteActor = system.deadLetters();
//#typed-actor-remote
Squarer typedActor =
TypedActor.get(system).
typedActorOf(
new TypedProps<Squarer>(Squarer.class),
actorRefToRemoteActor
);
//Use "typedActor" as a FooBar
//#typed-actor-remote
} catch (Exception e) {
//ignore
}
}
//#typed-router-types
interface HasName {
String name();
}
class Named implements HasName {
private int id = new Random().nextInt(1024);
@Override public String name() { return "name-" + id; }
}
//#typed-router-types
@Test public void typedRouterPattern() {
try {
//#typed-router
// prepare routees
TypedActorExtension typed = TypedActor.get(system);
Named named1 =
typed.typedActorOf(new TypedProps<Named>(Named.class));
Named named2 =
typed.typedActorOf(new TypedProps<Named>(Named.class));
List<Named> routees = new ArrayList<Named>();
routees.add(named1);
routees.add(named2);
List<String> routeePaths = new ArrayList<String>();
routeePaths.add(typed.getActorRefFor(named1).path().toStringWithoutAddress());
routeePaths.add(typed.getActorRefFor(named2).path().toStringWithoutAddress());
// prepare untyped router
ActorRef router = system.actorOf(new RoundRobinGroup(routeePaths).props(), "router");
// prepare typed proxy, forwarding MethodCall messages to `router`
Named typedRouter = typed.typedActorOf(new TypedProps<Named>(Named.class), router);
System.out.println("actor was: " + typedRouter.name()); // name-243
System.out.println("actor was: " + typedRouter.name()); // name-614
System.out.println("actor was: " + typedRouter.name()); // name-243
System.out.println("actor was: " + typedRouter.name()); // name-614
//#typed-router
typed.poisonPill(named1);
typed.poisonPill(named2);
typed.poisonPill(typedRouter);
} catch (Exception e) {
//ignore
}
}
}
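One detail worth noting about the blocking calls above: squareNow and squareNowPlease wait for a reply using TypedActor's default timeout. A hedged sketch of a per-proxy override (the 10-second value is an assumption; needs akka.util.Timeout):

Squarer patientSquarer =
  TypedActor.get(system).typedActorOf(
    new TypedProps<SquarerImpl>(Squarer.class, SquarerImpl.class)
      .withTimeout(new Timeout(10, TimeUnit.SECONDS)));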

View file

@ -1,136 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor.fsm;
//#simple-imports
import akka.actor.AbstractFSM;
import akka.actor.ActorRef;
import akka.japi.pf.UnitMatch;
import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;
import scala.concurrent.duration.Duration;
//#simple-imports
import static jdocs.actor.fsm.Buncher.Data;
import static jdocs.actor.fsm.Buncher.State.*;
import static jdocs.actor.fsm.Buncher.State;
import static jdocs.actor.fsm.Buncher.Uninitialized.*;
import static jdocs.actor.fsm.Events.*;
//#simple-fsm
public class Buncher extends AbstractFSM<State, Data> {
{
//#fsm-body
startWith(Idle, Uninitialized);
//#when-syntax
when(Idle,
matchEvent(SetTarget.class, Uninitialized.class,
(setTarget, uninitialized) ->
stay().using(new Todo(setTarget.getRef(), new LinkedList<>()))));
//#when-syntax
//#transition-elided
onTransition(
matchState(Active, Idle, () -> {
// reuse this matcher
final UnitMatch<Data> m = UnitMatch.create(
matchData(Todo.class,
todo -> todo.getTarget().tell(new Batch(todo.getQueue()), getSelf())));
m.match(stateData());
}).
state(Idle, Active, () -> {/* Do something here */}));
//#transition-elided
when(Active, Duration.create(1, "second"),
matchEvent(Arrays.asList(Flush.class, StateTimeout()), Todo.class,
(event, todo) -> goTo(Idle).using(todo.copy(new LinkedList<>()))));
//#unhandled-elided
whenUnhandled(
matchEvent(Queue.class, Todo.class,
(queue, todo) -> goTo(Active).using(todo.addElement(queue.getObj()))).
anyEvent((event, state) -> {
log().warning("received unhandled request {} in state {}/{}",
event, stateName(), state);
return stay();
}));
//#unhandled-elided
initialize();
//#fsm-body
}
//#simple-fsm
static
//#simple-state
// states
enum State {
Idle, Active
}
//#simple-state
static
//#simple-state
// state data
interface Data {
}
//#simple-state
static
//#simple-state
enum Uninitialized implements Data {
Uninitialized
}
//#simple-state
static
//#simple-state
final class Todo implements Data {
private final ActorRef target;
private final List<Object> queue;
public Todo(ActorRef target, List<Object> queue) {
this.target = target;
this.queue = queue;
}
public ActorRef getTarget() {
return target;
}
public List<Object> getQueue() {
return queue;
}
//#boilerplate
@Override
public String toString() {
return "Todo{" +
"target=" + target +
", queue=" + queue +
'}';
}
public Todo addElement(Object element) {
List<Object> nQueue = new LinkedList<>(queue);
nQueue.add(element);
return new Todo(this.target, nQueue);
}
public Todo copy(List<Object> queue) {
return new Todo(this.target, queue);
}
public Todo copy(ActorRef target) {
return new Todo(target, this.queue);
}
//#boilerplate
}
//#simple-state
//#simple-fsm
}
//#simple-fsm

View file

@ -1,78 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor.fsm;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import jdocs.AbstractJavaTest;
import akka.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import java.util.LinkedList;
import static jdocs.actor.fsm.Events.Batch;
import static jdocs.actor.fsm.Events.Queue;
import static jdocs.actor.fsm.Events.SetTarget;
import static jdocs.actor.fsm.Events.Flush.Flush;
//#test-code
public class BuncherTest extends AbstractJavaTest {
static ActorSystem system;
@BeforeClass
public static void setup() {
system = ActorSystem.create("BuncherTest");
}
@AfterClass
public static void tearDown() {
TestKit.shutdownActorSystem(system);
system = null;
}
@Test
public void testBuncherActorBatchesCorrectly() {
new TestKit(system) {{
final ActorRef buncher =
system.actorOf(Props.create(Buncher.class));
final ActorRef probe = getRef();
buncher.tell(new SetTarget(probe), probe);
buncher.tell(new Queue(42), probe);
buncher.tell(new Queue(43), probe);
LinkedList<Object> list1 = new LinkedList<>();
list1.add(42);
list1.add(43);
expectMsgEquals(new Batch(list1));
buncher.tell(new Queue(44), probe);
buncher.tell(Flush, probe);
buncher.tell(new Queue(45), probe);
LinkedList<Object> list2 = new LinkedList<>();
list2.add(44);
expectMsgEquals(new Batch(list2));
LinkedList<Object> list3 = new LinkedList<>();
list3.add(45);
expectMsgEquals(new Batch(list3));
system.stop(buncher);
}};
}
@Test
public void testBuncherActorDoesntBatchUninitialized() {
new TestKit(system) {{
final ActorRef buncher =
system.actorOf(Props.create(Buncher.class));
final ActorRef probe = getRef();
buncher.tell(new Queue(42), probe);
expectNoMsg();
system.stop(buncher);
}};
}
}
//#test-code

View file

@ -1,108 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor.fsm;
import akka.actor.ActorRef;
import java.util.List;
public class Events {
static
//#simple-events
public final class SetTarget {
private final ActorRef ref;
public SetTarget(ActorRef ref) {
this.ref = ref;
}
public ActorRef getRef() {
return ref;
}
//#boilerplate
@Override
public String toString() {
return "SetTarget{" +
"ref=" + ref +
'}';
}
//#boilerplate
}
//#simple-events
static
//#simple-events
public final class Queue {
private final Object obj;
public Queue(Object obj) {
this.obj = obj;
}
public Object getObj() {
return obj;
}
//#boilerplate
@Override
public String toString() {
return "Queue{" +
"obj=" + obj +
'}';
}
//#boilerplate
}
//#simple-events
static
//#simple-events
public final class Batch {
private final List<Object> list;
public Batch(List<Object> list) {
this.list = list;
}
public List<Object> getList() {
return list;
}
//#boilerplate
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Batch batch = (Batch) o;
return list.equals(batch.list);
}
@Override
public int hashCode() {
return list.hashCode();
}
@Override
public String toString() {
final StringBuilder builder = new StringBuilder();
builder.append("Batch{list=");
list.stream().forEachOrdered(e -> { builder.append(e); builder.append(","); });
int len = builder.length();
// replace the trailing comma with the closing brace
// (an empty list has no comma, so just append)
if (builder.charAt(len - 1) == ',')
  builder.replace(len - 1, len, "}");
else
  builder.append("}");
return builder.toString();
}
//#boilerplate
}
//#simple-events
static
//#simple-events
public enum Flush {
Flush
}
//#simple-events
}

View file

@ -1,179 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.actor.fsm;
import akka.actor.*;
import jdocs.AbstractJavaTest;
import akka.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import scala.concurrent.duration.Duration;
import static org.junit.Assert.*;
import static jdocs.actor.fsm.FSMDocTest.StateType.*;
import static jdocs.actor.fsm.FSMDocTest.Messages.*;
import static java.util.concurrent.TimeUnit.*;
public class FSMDocTest extends AbstractJavaTest {
static ActorSystem system;
@BeforeClass
public static void setup() {
system = ActorSystem.create("FSMDocTest");
}
@AfterClass
public static void tearDown() {
TestKit.shutdownActorSystem(system);
system = null;
}
public static enum StateType {
SomeState,
Processing,
Idle,
Active,
Error
}
public static enum Messages {
WillDo,
Tick
}
public static enum Data {
Foo,
Bar
}
public static interface X {}
public static class DummyFSM extends AbstractFSM<StateType, Integer> {
Integer newData = 42;
//#alt-transition-syntax
public void handler(StateType from, StateType to) {
// handle transition here
}
//#alt-transition-syntax
{
//#modifier-syntax
when(SomeState, matchAnyEvent((msg, data) -> {
return goTo(Processing).using(newData).
forMax(Duration.create(5, SECONDS)).replying(WillDo);
}));
//#modifier-syntax
//#NullFunction
when(SomeState, AbstractFSM.NullFunction());
//#NullFunction
//#transition-syntax
onTransition(
matchState(Active, Idle, () -> setTimer("timeout",
Tick, Duration.create(1, SECONDS), true)).
state(Active, null, () -> cancelTimer("timeout")).
state(null, Idle, (f, t) -> log().info("entering Idle from " + f)));
//#transition-syntax
//#alt-transition-syntax
onTransition(this::handler);
//#alt-transition-syntax
//#stop-syntax
when(Error, matchEventEquals("stop", (event, data) -> {
// do cleanup ...
return stop();
}));
//#stop-syntax
//#termination-syntax
onTermination(
matchStop(Normal(),
(state, data) -> {/* Do something here */}).
stop(Shutdown(),
(state, data) -> {/* Do something here */}).
stop(Failure.class,
(reason, state, data) -> {/* Do something here */}));
//#termination-syntax
//#unhandled-syntax
whenUnhandled(
matchEvent(X.class, (x, data) -> {
log().info("Received unhandled event: " + x);
return stay();
}).
anyEvent((event, data) -> {
log().warning("Received unknown event: " + event);
return goTo(Error);
}));
}
//#unhandled-syntax
}
static
//#logging-fsm
public class MyFSM extends AbstractLoggingFSM<StateType, Data> {
//#body-elided
//#logging-fsm
ActorRef target = null;
//#logging-fsm
@Override
public int logDepth() { return 12; }
{
onTermination(
matchStop(Failure.class, (reason, state, data) -> {
String lastEvents = getLog().mkString("\n\t");
log().warning("Failure in state " + state + " with data " + data + "\n" +
"Events leading up to this point:\n\t" + lastEvents);
//#logging-fsm
target.tell(reason.cause(), getSelf());
target.tell(state, getSelf());
target.tell(data, getSelf());
target.tell(lastEvents, getSelf());
//#logging-fsm
})
);
//...
//#logging-fsm
startWith(SomeState, Data.Foo);
when(SomeState, matchEvent(ActorRef.class, Data.class, (ref, data) -> {
target = ref;
target.tell("going active", getSelf());
return goTo(Active);
}));
when(Active, matchEventEquals("stop", (event, data) -> {
target.tell("stopping", getSelf());
return stop(new Failure("This is not the error you're looking for"));
}));
initialize();
//#logging-fsm
}
//#body-elided
}
//#logging-fsm
@Test
public void testLoggingFSM()
{
new TestKit(system) {{
final ActorRef logger =
system.actorOf(Props.create(MyFSM.class));
final ActorRef probe = getRef();
logger.tell(probe, probe);
expectMsgEquals("going active");
logger.tell("stop", probe);
expectMsgEquals("stopping");
expectMsgEquals("This is not the error you're looking for");
expectMsgEquals(Active);
expectMsgEquals(Data.Foo);
String msg = expectMsgClass(String.class);
assertTrue(msg.startsWith("LogEntry(SomeState,Foo,Actor[akka://FSMDocTest/system/"));
}};
}
}

View file

@ -1,115 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.agent;
import static org.junit.Assert.*;
import org.junit.Test;
import scala.concurrent.Await;
import scala.concurrent.duration.Duration;
//#import-agent
import scala.concurrent.ExecutionContext;
import akka.agent.Agent;
import akka.dispatch.ExecutionContexts;
//#import-agent
//#import-function
import akka.dispatch.Mapper;
//#import-function
//#import-future
import scala.concurrent.Future;
//#import-future
public class AgentDocTest extends jdocs.AbstractJavaTest {
private static ExecutionContext ec = ExecutionContexts.global();
@Test
public void createAndRead() throws Exception {
//#create
ExecutionContext ec = ExecutionContexts.global();
Agent<Integer> agent = Agent.create(5, ec);
//#create
//#read-get
Integer result = agent.get();
//#read-get
//#read-future
Future<Integer> future = agent.future();
//#read-future
assertEquals(result, new Integer(5));
assertEquals(Await.result(future, Duration.create(5,"s")), new Integer(5));
}
@Test
public void sendAndSendOffAndReadAwait() throws Exception {
Agent<Integer> agent = Agent.create(5, ec);
//#send
// send a value, enqueues this change
// of the value of the Agent
agent.send(7);
// send a Mapper, enqueues this change
// to the value of the Agent
agent.send(new Mapper<Integer, Integer>() {
public Integer apply(Integer i) {
return i * 2;
}
});
//#send
Mapper<Integer, Integer> longRunningOrBlockingFunction = new Mapper<Integer, Integer>() {
public Integer apply(Integer i) {
return i * 1;
}
};
ExecutionContext theExecutionContextToExecuteItIn = ec;
//#send-off
// sendOff a function
agent.sendOff(longRunningOrBlockingFunction,
theExecutionContextToExecuteItIn);
//#send-off
assertEquals(Await.result(agent.future(), Duration.create(5,"s")), new Integer(14));
}
@Test
public void alterAndAlterOff() throws Exception {
Agent<Integer> agent = Agent.create(5, ec);
//#alter
// alter a value
Future<Integer> f1 = agent.alter(7);
// alter a function (Mapper)
Future<Integer> f2 = agent.alter(new Mapper<Integer, Integer>() {
public Integer apply(Integer i) {
return i * 2;
}
});
//#alter
Mapper<Integer, Integer> longRunningOrBlockingFunction = new Mapper<Integer, Integer>() {
public Integer apply(Integer i) {
return i * 1;
}
};
ExecutionContext theExecutionContextToExecuteItIn = ec;
//#alter-off
// alterOff a function (Mapper)
Future<Integer> f3 = agent.alterOff(longRunningOrBlockingFunction,
theExecutionContextToExecuteItIn);
//#alter-off
assertEquals(Await.result(f3, Duration.create(5,"s")), new Integer(14));
}
}

View file

@ -1,54 +0,0 @@
package jdocs.camel;
//#CamelActivation
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.camel.Camel;
import akka.camel.CamelExtension;
import akka.camel.javaapi.UntypedConsumerActor;
import akka.testkit.javadsl.TestKit;
import akka.util.Timeout;
import jdocs.AbstractJavaTest;
import scala.concurrent.Future;
import scala.concurrent.duration.Duration;
import static java.util.concurrent.TimeUnit.SECONDS;
//#CamelActivation
import org.junit.Test;
public class ActivationTestBase extends AbstractJavaTest {
@SuppressWarnings("unused")
@Test
public void testActivation() {
//#CamelActivation
// ..
ActorSystem system = ActorSystem.create("some-system");
Props props = Props.create(MyConsumer.class);
ActorRef producer = system.actorOf(props,"myproducer");
Camel camel = CamelExtension.get(system);
// get a future reference to the activation of the endpoint of the Consumer Actor
Timeout timeout = new Timeout(Duration.create(10, SECONDS));
Future<ActorRef> activationFuture = camel.activationFutureFor(producer,
timeout, system.dispatcher());
//#CamelActivation
//#CamelDeactivation
// ..
system.stop(producer);
// get a future reference to the deactivation of the endpoint of the Consumer Actor
Future<ActorRef> deactivationFuture = camel.deactivationFutureFor(producer,
timeout, system.dispatcher());
//#CamelDeactivation
TestKit.shutdownActorSystem(system);
}
public static class MyConsumer extends UntypedConsumerActor {
public String getEndpointUri() {
return "direct:test";
}
public void onReceive(Object message) {
}
}
}
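A hedged follow-up to the test above: to block until the endpoint is actually (de)activated, the returned futures can be awaited, e.g.:

// assumes: import scala.concurrent.Await;
// Await.result throws, so the enclosing method must declare or catch Exception
ActorRef activated = Await.result(activationFuture, timeout.duration());
ActorRef deactivated = Await.result(deactivationFuture, timeout.duration());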

View file

@ -1,34 +0,0 @@
package jdocs.camel;
import akka.actor.ActorSystem;
import akka.camel.Camel;
import akka.camel.CamelExtension;
import jdocs.AbstractJavaTest;
import akka.testkit.javadsl.TestKit;
import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.junit.Test;
public class CamelExtensionTest extends AbstractJavaTest {
@Test
public void getCamelExtension() {
//#CamelExtension
ActorSystem system = ActorSystem.create("some-system");
Camel camel = CamelExtension.get(system);
CamelContext camelContext = camel.context();
ProducerTemplate producerTemplate = camel.template();
//#CamelExtension
TestKit.shutdownActorSystem(system);
}
public void addActiveMQComponent() {
//#CamelExtensionAddComponent
ActorSystem system = ActorSystem.create("some-system");
Camel camel = CamelExtension.get(system);
CamelContext camelContext = camel.context();
// camelContext.addComponent("activemq", ActiveMQComponent.activeMQComponent(
// "vm://localhost?broker.persistent=false"));
//#CamelExtensionAddComponent
TestKit.shutdownActorSystem(system);
}
}

View file

@ -1,24 +0,0 @@
package jdocs.camel;
//#Consumer1
import akka.camel.CamelMessage;
import akka.camel.javaapi.UntypedConsumerActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;
public class Consumer1 extends UntypedConsumerActor {
LoggingAdapter log = Logging.getLogger(getContext().system(), this);
public String getEndpointUri() {
return "file:data/input/actor";
}
public void onReceive(Object message) {
if (message instanceof CamelMessage) {
CamelMessage camelMessage = (CamelMessage) message;
String body = camelMessage.getBodyAs(String.class, getCamelContext());
log.info("Received message: {}", body);
} else
unhandled(message);
}
}
//#Consumer1
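A usage sketch (illustrative; the system and actor names are assumptions). Starting the actor is all that is required: akka-camel activates the endpoint and delivers a CamelMessage for each file appearing under data/input/actor:

package jdocs.camel;
import akka.actor.ActorSystem;
import akka.actor.Props;
public class Consumer1Demo {
  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("camel-demo");
    // activation happens as a side effect of starting the consumer actor
    system.actorOf(Props.create(Consumer1.class), "file-consumer");
  }
}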

View file

@ -1,20 +0,0 @@
package jdocs.camel;
//#Consumer2
import akka.camel.CamelMessage;
import akka.camel.javaapi.UntypedConsumerActor;
public class Consumer2 extends UntypedConsumerActor {
public String getEndpointUri() {
return "jetty:http://localhost:8877/camel/default";
}
public void onReceive(Object message) {
if (message instanceof CamelMessage) {
CamelMessage camelMessage = (CamelMessage) message;
String body = camelMessage.getBodyAs(String.class, getCamelContext());
getSender().tell(String.format("Received message: %s",body), getSelf());
} else
unhandled(message);
}
}
//#Consumer2

View file

@ -1,31 +0,0 @@
package jdocs.camel;
//#Consumer3
import akka.actor.Status;
import akka.camel.Ack;
import akka.camel.CamelMessage;
import akka.camel.javaapi.UntypedConsumerActor;
public class Consumer3 extends UntypedConsumerActor{
@Override
public boolean autoAck() {
return false;
}
public String getEndpointUri() {
return "jms:queue:test";
}
public void onReceive(Object message) {
if (message instanceof CamelMessage) {
getSender().tell(Ack.getInstance(), getSelf());
// on success
// ..
Exception someException = new Exception("e1");
// on failure
getSender().tell(new Status.Failure(someException), getSelf());
} else
unhandled(message);
}
}
//#Consumer3

View file

@ -1,32 +0,0 @@
package jdocs.camel;
//#Consumer4
import akka.camel.CamelMessage;
import akka.camel.javaapi.UntypedConsumerActor;
import scala.concurrent.duration.Duration;
import scala.concurrent.duration.FiniteDuration;
import java.util.concurrent.TimeUnit;
public class Consumer4 extends UntypedConsumerActor {
private final static FiniteDuration timeout =
Duration.create(500, TimeUnit.MILLISECONDS);
@Override
public FiniteDuration replyTimeout() {
return timeout;
}
public String getEndpointUri() {
return "jetty:http://localhost:8877/camel/default";
}
public void onReceive(Object message) {
if (message instanceof CamelMessage) {
CamelMessage camelMessage = (CamelMessage) message;
String body = camelMessage.getBodyAs(String.class, getCamelContext());
getSender().tell(String.format("Hello %s",body), getSelf());
} else
unhandled(message);
}
}
//#Consumer4

View file

@ -1,18 +0,0 @@
package jdocs.camel;
//#CustomRoute
import akka.actor.ActorRef;
import akka.camel.internal.component.CamelPath;
import org.apache.camel.builder.RouteBuilder;
public class CustomRouteBuilder extends RouteBuilder{
private String uri;
public CustomRouteBuilder(ActorRef responder) {
uri = CamelPath.toUri(responder);
}
public void configure() throws Exception {
from("jetty:http://localhost:8877/camel/custom").to(uri);
}
}
//#CustomRoute

View file

@ -1,23 +0,0 @@
package jdocs.camel;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.camel.Camel;
import akka.camel.CamelExtension;
import akka.testkit.javadsl.TestKit;
public class CustomRouteTestBase {
public void customRoute() throws Exception{
//#CustomRoute
ActorSystem system = ActorSystem.create("some-system");
try {
Camel camel = CamelExtension.get(system);
ActorRef responder = system.actorOf(Props.create(Responder.class), "TestResponder");
camel.context().addRoutes(new CustomRouteBuilder(responder));
//#CustomRoute
} finally {
TestKit.shutdownActorSystem(system);
}
}
}

View file

@ -1,53 +0,0 @@
package jdocs.camel;
//#ErrorThrowingConsumer
import akka.actor.Status;
import akka.camel.CamelMessage;
import akka.camel.javaapi.UntypedConsumerActor;
import akka.dispatch.Mapper;
import org.apache.camel.builder.Builder;
import org.apache.camel.model.ProcessorDefinition;
import org.apache.camel.model.RouteDefinition;
import scala.Option;
public class ErrorThrowingConsumer extends UntypedConsumerActor{
private String uri;
private static Mapper<RouteDefinition, ProcessorDefinition<?>> mapper =
new Mapper<RouteDefinition, ProcessorDefinition<?>>() {
public ProcessorDefinition<?> apply(RouteDefinition rd) {
// Catch any exception and handle it by returning the exception message
// as response
return rd.onException(Exception.class).handled(true).
transform(Builder.exceptionMessage()).end();
}
};
public ErrorThrowingConsumer(String uri){
this.uri = uri;
}
public String getEndpointUri() {
return uri;
}
public void onReceive(Object message) throws Exception{
if (message instanceof CamelMessage) {
CamelMessage camelMessage = (CamelMessage) message;
String body = camelMessage.getBodyAs(String.class, getCamelContext());
throw new Exception(String.format("error: %s",body));
} else
unhandled(message);
}
@Override
public Mapper<RouteDefinition,
ProcessorDefinition<?>> getRouteDefinitionHandler() {
return mapper;
}
@Override
public void preRestart(Throwable reason, Option<Object> message) {
getSender().tell(new Status.Failure(reason), getSelf());
}
}
//#ErrorThrowingConsumer

View file

@ -1,10 +0,0 @@
package jdocs.camel;
//#Producer1
import akka.camel.javaapi.UntypedProducerActor;
public class FirstProducer extends UntypedProducerActor {
public String getEndpointUri() {
return "http://localhost:8080/news";
}
}
//#Producer1

View file

@ -1,24 +0,0 @@
package jdocs.camel;
//#RouteResponse
import akka.actor.ActorRef;
import akka.camel.javaapi.UntypedProducerActor;
public class Forwarder extends UntypedProducerActor {
private String uri;
private ActorRef target;
public Forwarder(String uri, ActorRef target) {
this.uri = uri;
this.target = target;
}
public String getEndpointUri() {
return uri;
}
@Override
public void onRouteResponse(Object message) {
target.forward(message, getContext());
}
}
//#RouteResponse

View file

@ -1,15 +0,0 @@
package jdocs.camel;
//#ProducerTemplate
import akka.actor.UntypedAbstractActor;
import akka.camel.Camel;
import akka.camel.CamelExtension;
import org.apache.camel.ProducerTemplate;
public class MyActor extends UntypedAbstractActor {
public void onReceive(Object message) {
Camel camel = CamelExtension.get(getContext().getSystem());
ProducerTemplate template = camel.template();
template.sendBody("direct:news", message);
}
}
//#ProducerTemplate

View file

@ -1,31 +0,0 @@
package jdocs.camel;
//#Consumer-mina
import akka.camel.CamelMessage;
import akka.camel.javaapi.UntypedConsumerActor;
public class MyEndpoint extends UntypedConsumerActor{
private String uri;
public String getEndpointUri() {
return uri;
}
public void onReceive(Object message) throws Exception {
if (message instanceof CamelMessage) {
/* ... */
} else
unhandled(message);
}
// Extra constructor to change the default uri,
// for instance to "jetty:http://localhost:8877/example"
public MyEndpoint(String uri) {
this.uri = uri;
}
public MyEndpoint() {
this.uri = "mina2:tcp://localhost:6200?textline=true";
}
}
//#Consumer-mina

View file

@ -1,23 +0,0 @@
package jdocs.camel;
import akka.actor.*;
import akka.testkit.javadsl.TestKit;
public class OnRouteResponseTestBase {
public void onRouteResponse(){
//#RouteResponse
ActorSystem system = ActorSystem.create("some-system");
Props receiverProps = Props.create(ResponseReceiver.class);
final ActorRef receiver = system.actorOf(receiverProps,"responseReceiver");
ActorRef forwardResponse = system.actorOf(Props.create(
Forwarder.class, "http://localhost:8080/news/akka", receiver));
// the Forwarder sends out a request to the web page and forwards the response to
// the ResponseReceiver
forwardResponse.tell("some request", ActorRef.noSender());
//#RouteResponse
system.stop(receiver);
system.stop(forwardResponse);
TestKit.shutdownActorSystem(system);
}
}

View file

@ -1,20 +0,0 @@
package jdocs.camel;
//#Oneway
import akka.camel.javaapi.UntypedProducerActor;
public class OnewaySender extends UntypedProducerActor{
private String uri;
public OnewaySender(String uri) {
this.uri = uri;
}
public String getEndpointUri() {
return uri;
}
@Override
public boolean isOneway() {
return true;
}
}
//#Oneway
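A usage sketch (the ActiveMQ endpoint URI is an assumption for illustration). Because isOneway() returns true, the producer is used with a plain tell() and no response is routed back to the sender:

package jdocs.camel;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
public class OnewaySenderDemo {
  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("camel-demo");
    ActorRef sender = system.actorOf(
      Props.create(OnewaySender.class, "activemq:FOO.BAR"), "oneway-sender");
    sender.tell("some message", ActorRef.noSender()); // fire-and-forget
  }
}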

View file

@ -1,10 +0,0 @@
package jdocs.camel;
//#Producer
import akka.camel.javaapi.UntypedProducerActor;
public class Orders extends UntypedProducerActor {
public String getEndpointUri() {
return "jms:queue:Orders";
}
}
//#Producer

View file

@ -1,10 +0,0 @@
package jdocs.camel;
//#Producer1
import akka.camel.javaapi.UntypedProducerActor;
public class Producer1 extends UntypedProducerActor {
public String getEndpointUri() {
return "http://localhost:8080/news";
}
}
//#Producer1

View file

@ -1,51 +0,0 @@
package jdocs.camel;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletionStage;
import akka.testkit.javadsl.TestKit;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.camel.CamelMessage;
import akka.pattern.PatternsCS;
public class ProducerTestBase {
public void tellJmsProducer() {
//#TellProducer
ActorSystem system = ActorSystem.create("some-system");
Props props = Props.create(Orders.class);
ActorRef producer = system.actorOf(props, "jmsproducer");
producer.tell("<order amount=\"100\" currency=\"PLN\" itemId=\"12345\"/>",
ActorRef.noSender());
//#TellProducer
TestKit.shutdownActorSystem(system);
}
@SuppressWarnings("unused")
public void askProducer() {
//#AskProducer
ActorSystem system = ActorSystem.create("some-system");
Props props = Props.create(FirstProducer.class);
ActorRef producer = system.actorOf(props,"myproducer");
CompletionStage<Object> future = PatternsCS.ask(producer, "some request", 1000);
//#AskProducer
system.stop(producer);
TestKit.shutdownActorSystem(system);
}
public void correlate(){
//#Correlate
ActorSystem system = ActorSystem.create("some-system");
Props props = Props.create(Orders.class);
ActorRef producer = system.actorOf(props,"jmsproducer");
Map<String,Object> headers = new HashMap<String, Object>();
headers.put(CamelMessage.MessageExchangeId(),"123");
producer.tell(new CamelMessage("<order amount=\"100\" currency=\"PLN\" " +
"itemId=\"12345\"/>",headers), ActorRef.noSender());
//#Correlate
system.stop(producer);
TestKit.shutdownActorSystem(system);
}
}

View file

@ -1,20 +0,0 @@
package jdocs.camel;
//#RequestProducerTemplate
import akka.actor.AbstractActor;
import akka.camel.Camel;
import akka.camel.CamelExtension;
import org.apache.camel.ProducerTemplate;
public class RequestBodyActor extends AbstractActor {
@Override
public Receive createReceive() {
return receiveBuilder()
.matchAny(message -> {
Camel camel = CamelExtension.get(getContext().getSystem());
ProducerTemplate template = camel.template();
getSender().tell(template.requestBody("direct:news", message), getSelf());
})
.build();
}
}
//#RequestProducerTemplate

View file

@ -1,26 +0,0 @@
package jdocs.camel;
//#CustomRoute
import akka.actor.UntypedAbstractActor;
import akka.camel.CamelMessage;
import akka.dispatch.Mapper;
public class Responder extends UntypedAbstractActor{
public void onReceive(Object message) {
if (message instanceof CamelMessage) {
CamelMessage camelMessage = (CamelMessage) message;
getSender().tell(createResponse(camelMessage), getSelf());
} else
unhandled(message);
}
private CamelMessage createResponse(CamelMessage msg) {
return msg.mapBody(new Mapper<String,String>() {
@Override
public String apply(String body) {
return String.format("received %s", body);
}
});
}
}
//#CustomRoute

View file

@ -1,13 +0,0 @@
package jdocs.camel;
//#RouteResponse
import akka.actor.UntypedAbstractActor;
import akka.camel.CamelMessage;
public class ResponseReceiver extends UntypedAbstractActor{
public void onReceive(Object message) {
if(message instanceof CamelMessage) {
// do something with the forwarded response
}
}
}
//#RouteResponse

View file

@ -1,37 +0,0 @@
package jdocs.camel;
//#TransformOutgoingMessage
import akka.camel.CamelMessage;
import akka.camel.javaapi.UntypedProducerActor;
import akka.dispatch.Mapper;
public class Transformer extends UntypedProducerActor{
private String uri;
public Transformer(String uri) {
this.uri = uri;
}
public String getEndpointUri() {
return uri;
}
private CamelMessage upperCase(CamelMessage msg) {
return msg.mapBody(new Mapper<String,String>() {
@Override
public String apply(String body) {
return body.toUpperCase();
}
});
}
@Override
public Object onTransformOutgoingMessage(Object message) {
if(message instanceof CamelMessage) {
CamelMessage camelMessage = (CamelMessage) message;
return upperCase(camelMessage);
} else {
return message;
}
}
}
//#TransformOutgoingMessage

View file

@ -1,42 +0,0 @@
/**
* Copyright (C) 2015-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.cluster;
import akka.testkit.javadsl.TestKit;
import com.typesafe.config.ConfigFactory;
import jdocs.AbstractJavaTest;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import akka.actor.ActorSystem;
import akka.cluster.Cluster;
public class ClusterDocTest extends AbstractJavaTest {
static ActorSystem system;
@BeforeClass
public static void setup() {
system = ActorSystem.create("ClusterDocTest",
ConfigFactory.parseString(scala.docs.cluster.ClusterDocSpec.config()));
}
@AfterClass
public static void tearDown() {
TestKit.shutdownActorSystem(system);
system = null;
}
@Test
public void demonstrateLeave() {
//#leave
final Cluster cluster = Cluster.get(system);
cluster.leave(cluster.selfAddress());
//#leave
}
}

View file

@ -1,36 +0,0 @@
package jdocs.cluster;
import java.math.BigInteger;
import java.util.concurrent.CompletableFuture;
import akka.actor.AbstractActor;
import static akka.pattern.PatternsCS.pipe;
//#backend
public class FactorialBackend extends AbstractActor {
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Integer.class, n -> {
CompletableFuture<FactorialResult> result =
CompletableFuture.supplyAsync(() -> factorial(n))
.thenApply((factorial) -> new FactorialResult(n, factorial));
pipe(result, getContext().dispatcher()).to(getSender());
})
.build();
}
BigInteger factorial(int n) {
BigInteger acc = BigInteger.ONE;
for (int i = 1; i <= n; ++i) {
acc = acc.multiply(BigInteger.valueOf(i));
}
return acc;
}
}
//#backend

View file

@ -1,103 +0,0 @@
package jdocs.cluster;
import java.util.Arrays;
import java.util.Collections;
import java.util.concurrent.TimeUnit;
import akka.actor.Props;
import akka.cluster.metrics.AdaptiveLoadBalancingGroup;
import akka.cluster.metrics.AdaptiveLoadBalancingPool;
import akka.cluster.metrics.HeapMetricsSelector;
import akka.cluster.metrics.SystemLoadAverageMetricsSelector;
import akka.cluster.routing.ClusterRouterGroup;
import akka.cluster.routing.ClusterRouterGroupSettings;
import akka.cluster.routing.ClusterRouterPool;
import akka.cluster.routing.ClusterRouterPoolSettings;
import scala.concurrent.duration.Duration;
import akka.actor.ActorRef;
import akka.actor.ReceiveTimeout;
import akka.actor.AbstractActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import akka.routing.FromConfig;
//#frontend
public class FactorialFrontend extends AbstractActor {
final int upToN;
final boolean repeat;
LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
ActorRef backend = getContext().actorOf(FromConfig.getInstance().props(),
"factorialBackendRouter");
public FactorialFrontend(int upToN, boolean repeat) {
this.upToN = upToN;
this.repeat = repeat;
}
@Override
public void preStart() {
sendJobs();
getContext().setReceiveTimeout(Duration.create(10, TimeUnit.SECONDS));
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(FactorialResult.class, result -> {
if (result.n == upToN) {
log.debug("{}! = {}", result.n, result.factorial);
if (repeat)
sendJobs();
else
getContext().stop(getSelf());
}
})
.match(ReceiveTimeout.class, x -> {
log.info("Timeout");
sendJobs();
})
.build();
}
void sendJobs() {
log.info("Starting batch of factorials up to [{}]", upToN);
for (int n = 1; n <= upToN; n++) {
backend.tell(n, getSelf());
}
}
}
//#frontend
//not used, only for documentation
abstract class FactorialFrontend2 extends AbstractActor {
//#router-lookup-in-code
int totalInstances = 100;
Iterable<String> routeesPaths = Arrays.asList("/user/factorialBackend", "");
boolean allowLocalRoutees = true;
String useRole = "backend";
ActorRef backend = getContext().actorOf(
new ClusterRouterGroup(new AdaptiveLoadBalancingGroup(
HeapMetricsSelector.getInstance(), Collections.<String> emptyList()),
new ClusterRouterGroupSettings(totalInstances, routeesPaths,
allowLocalRoutees, useRole)).props(), "factorialBackendRouter2");
//#router-lookup-in-code
}
//not used, only for documentation
abstract class FactorialFrontend3 extends AbstractActor {
//#router-deploy-in-code
int totalInstances = 100;
int maxInstancesPerNode = 3;
boolean allowLocalRoutees = false;
String useRole = "backend";
ActorRef backend = getContext().actorOf(
new ClusterRouterPool(new AdaptiveLoadBalancingPool(
SystemLoadAverageMetricsSelector.getInstance(), 0),
new ClusterRouterPoolSettings(totalInstances, maxInstancesPerNode,
allowLocalRoutees, useRole)).props(Props
.create(FactorialBackend.class)), "factorialBackendRouter3");
//#router-deploy-in-code
}

View file

@ -1,71 +0,0 @@
package jdocs.cluster;
import java.util.concurrent.TimeUnit;
import scala.concurrent.Await;
import scala.concurrent.duration.Duration;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.cluster.Cluster;
public class FactorialFrontendMain {
public static void main(String[] args) {
final int upToN = 200;
final Config config = ConfigFactory.parseString(
"akka.cluster.roles = [frontend]").withFallback(
ConfigFactory.load("factorial"));
final ActorSystem system = ActorSystem.create("ClusterSystem", config);
system.log().info(
"Factorials will start when 2 backend members in the cluster.");
//#registerOnUp
Cluster.get(system).registerOnMemberUp(new Runnable() {
@Override
public void run() {
system.actorOf(Props.create(FactorialFrontend.class, upToN, true),
"factorialFrontend");
}
});
//#registerOnUp
//#registerOnRemoved
Cluster.get(system).registerOnMemberRemoved(new Runnable() {
@Override
public void run() {
// exit JVM when ActorSystem has been terminated
final Runnable exit = new Runnable() {
@Override public void run() {
System.exit(0);
}
};
system.registerOnTermination(exit);
// shut down ActorSystem
system.terminate();
// In case ActorSystem shutdown takes longer than 10 seconds,
// exit the JVM forcefully anyway.
// We must spawn a separate thread so we do not block the current one,
// since blocking here would prevent the ActorSystem from shutting down.
new Thread() {
@Override public void run(){
try {
Await.ready(system.whenTerminated(), Duration.create(10, TimeUnit.SECONDS));
} catch (Exception e) {
System.exit(-1);
}
}
}.start();
}
});
//#registerOnRemoved
}
}

View file

@ -1,15 +0,0 @@
package jdocs.cluster;
import java.math.BigInteger;
import java.io.Serializable;
public class FactorialResult implements Serializable {
private static final long serialVersionUID = 1L;
public final int n;
public final BigInteger factorial;
FactorialResult(int n, BigInteger factorial) {
this.n = n;
this.factorial = factorial;
}
}

View file

@ -1,69 +0,0 @@
package jdocs.cluster;
//#metrics-listener
import akka.actor.AbstractActor;
import akka.cluster.Cluster;
import akka.cluster.ClusterEvent.CurrentClusterState;
import akka.cluster.metrics.ClusterMetricsChanged;
import akka.cluster.metrics.NodeMetrics;
import akka.cluster.metrics.StandardMetrics;
import akka.cluster.metrics.StandardMetrics.HeapMemory;
import akka.cluster.metrics.StandardMetrics.Cpu;
import akka.cluster.metrics.ClusterMetricsExtension;
import akka.event.Logging;
import akka.event.LoggingAdapter;
public class MetricsListener extends AbstractActor {
LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
Cluster cluster = Cluster.get(getContext().getSystem());
ClusterMetricsExtension extension = ClusterMetricsExtension.get(getContext().getSystem());
// Subscribe to ClusterMetricsEvent events.
@Override
public void preStart() {
extension.subscribe(getSelf());
}
// Unsubscribe from ClusterMetricsEvent events.
@Override
public void postStop() {
extension.unsubscribe(getSelf());
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(ClusterMetricsChanged.class, clusterMetrics -> {
for (NodeMetrics nodeMetrics : clusterMetrics.getNodeMetrics()) {
if (nodeMetrics.address().equals(cluster.selfAddress())) {
logHeap(nodeMetrics);
logCpu(nodeMetrics);
}
}
})
.match(CurrentClusterState.class, message -> {
// Ignore.
})
.build();
}
void logHeap(NodeMetrics nodeMetrics) {
HeapMemory heap = StandardMetrics.extractHeapMemory(nodeMetrics);
if (heap != null) {
log.info("Used heap: {} MB", ((double) heap.used()) / 1024 / 1024);
}
}
void logCpu(NodeMetrics nodeMetrics) {
Cpu cpu = StandardMetrics.extractCpu(nodeMetrics);
if (cpu != null && cpu.systemLoadAverage().isDefined()) {
log.info("Load: {} ({} processors)", cpu.systemLoadAverage().get(),
cpu.processors());
}
}
}
//#metrics-listener

View file

@ -1,49 +0,0 @@
package jdocs.cluster;
import akka.actor.AbstractActor;
import akka.cluster.Cluster;
import akka.cluster.ClusterEvent;
import akka.cluster.ClusterEvent.MemberEvent;
import akka.cluster.ClusterEvent.MemberUp;
import akka.cluster.ClusterEvent.MemberRemoved;
import akka.cluster.ClusterEvent.UnreachableMember;
import akka.event.Logging;
import akka.event.LoggingAdapter;
public class SimpleClusterListener extends AbstractActor {
LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
Cluster cluster = Cluster.get(getContext().getSystem());
//subscribe to cluster changes
@Override
public void preStart() {
//#subscribe
cluster.subscribe(getSelf(), ClusterEvent.initialStateAsEvents(),
MemberEvent.class, UnreachableMember.class);
//#subscribe
}
//unsubscribe on stop; preStart will re-subscribe after a restart
@Override
public void postStop() {
cluster.unsubscribe(getSelf());
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(MemberUp.class, mUp -> {
log.info("Member is Up: {}", mUp.member());
})
.match(UnreachableMember.class, mUnreachable -> {
log.info("Member detected as unreachable: {}", mUnreachable.member());
})
.match(MemberRemoved.class, mRemoved -> {
log.info("Member is Removed: {}", mRemoved.member());
})
.match(MemberEvent.class, message -> {
// ignore
})
.build();
}
}

View file

@ -1,51 +0,0 @@
package jdocs.cluster;
import akka.actor.AbstractActor;
import akka.cluster.Cluster;
import akka.cluster.ClusterEvent.CurrentClusterState;
import akka.cluster.ClusterEvent.MemberEvent;
import akka.cluster.ClusterEvent.MemberUp;
import akka.cluster.ClusterEvent.MemberRemoved;
import akka.cluster.ClusterEvent.UnreachableMember;
import akka.event.Logging;
import akka.event.LoggingAdapter;
public class SimpleClusterListener2 extends AbstractActor {
LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
Cluster cluster = Cluster.get(getContext().getSystem());
//subscribe to cluster changes
@Override
public void preStart() {
//#subscribe
cluster.subscribe(getSelf(), MemberEvent.class, UnreachableMember.class);
//#subscribe
}
//unsubscribe on stop; preStart will re-subscribe after a restart
@Override
public void postStop() {
cluster.unsubscribe(getSelf());
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(CurrentClusterState.class, state -> {
log.info("Current members: {}", state.members());
})
.match(MemberUp.class, mUp -> {
log.info("Member is Up: {}", mUp.member());
})
.match(UnreachableMember.class, mUnreachable -> {
log.info("Member detected as unreachable: {}", mUnreachable.member());
})
.match(MemberRemoved.class, mRemoved -> {
log.info("Member is Removed: {}", mRemoved.member());
})
.match(MemberEvent.class, event -> {
// ignore
})
.build();
}
}

View file

@ -1,55 +0,0 @@
package jdocs.cluster;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;
import jdocs.cluster.StatsMessages.JobFailed;
import jdocs.cluster.StatsMessages.StatsResult;
import scala.concurrent.duration.Duration;
import akka.actor.ActorRef;
import akka.actor.ReceiveTimeout;
import akka.actor.AbstractActor;
//#aggregator
public class StatsAggregator extends AbstractActor {
final int expectedResults;
final ActorRef replyTo;
final List<Integer> results = new ArrayList<Integer>();
public StatsAggregator(int expectedResults, ActorRef replyTo) {
this.expectedResults = expectedResults;
this.replyTo = replyTo;
}
@Override
public void preStart() {
getContext().setReceiveTimeout(Duration.create(3, TimeUnit.SECONDS));
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Integer.class, wordCount -> {
results.add(wordCount);
if (results.size() == expectedResults) {
int sum = 0;
for (int c : results) {
sum += c;
}
double meanWordLength = ((double) sum) / results.size();
replyTo.tell(new StatsResult(meanWordLength), getSelf());
getContext().stop(getSelf());
}
})
.match(ReceiveTimeout.class, x -> {
replyTo.tell(new JobFailed("Service unavailable, try again later"),
getSelf());
getContext().stop(getSelf());
})
.build();
}
}
//#aggregator
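A hedged aside on the timeout handling above: ReceiveTimeout keeps firing while the actor stays idle. Had the aggregator wanted to keep running rather than stop, it could disable the timeout again:

// inside the actor; Duration.Undefined() switches the receive timeout off
getContext().setReceiveTimeout(Duration.Undefined());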

View file

@ -1,55 +0,0 @@
package jdocs.cluster;
import java.io.Serializable;
//#messages
public interface StatsMessages {
public static class StatsJob implements Serializable {
private final String text;
public StatsJob(String text) {
this.text = text;
}
public String getText() {
return text;
}
}
public static class StatsResult implements Serializable {
private final double meanWordLength;
public StatsResult(double meanWordLength) {
this.meanWordLength = meanWordLength;
}
public double getMeanWordLength() {
return meanWordLength;
}
@Override
public String toString() {
return "meanWordLength: " + meanWordLength;
}
}
public static class JobFailed implements Serializable {
private final String reason;
public JobFailed(String reason) {
this.reason = reason;
}
public String getReason() {
return reason;
}
@Override
public String toString() {
return "JobFailed(" + reason + ")";
}
}
}
//#messages

View file

@ -1,100 +0,0 @@
package jdocs.cluster;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import jdocs.cluster.StatsMessages.JobFailed;
import jdocs.cluster.StatsMessages.StatsJob;
import jdocs.cluster.StatsMessages.StatsResult;
import java.util.concurrent.ThreadLocalRandom;
import scala.concurrent.duration.Duration;
import scala.concurrent.duration.FiniteDuration;
import akka.actor.ActorSelection;
import akka.actor.Address;
import akka.actor.Cancellable;
import akka.actor.AbstractActor;
import akka.cluster.Cluster;
import akka.cluster.ClusterEvent.UnreachableMember;
import akka.cluster.ClusterEvent.ReachableMember;
import akka.cluster.ClusterEvent.CurrentClusterState;
import akka.cluster.ClusterEvent.MemberEvent;
import akka.cluster.ClusterEvent.MemberUp;
import akka.cluster.ClusterEvent.ReachabilityEvent;
import akka.cluster.Member;
import akka.cluster.MemberStatus;
public class StatsSampleClient extends AbstractActor {
final String servicePath;
final Cancellable tickTask;
final Set<Address> nodes = new HashSet<Address>();
Cluster cluster = Cluster.get(getContext().getSystem());
public StatsSampleClient(String servicePath) {
this.servicePath = servicePath;
FiniteDuration interval = Duration.create(2, TimeUnit.SECONDS);
tickTask = getContext()
.getSystem()
.scheduler()
.schedule(interval, interval, getSelf(), "tick",
getContext().dispatcher(), null);
}
//subscribe to cluster changes, MemberEvent
@Override
public void preStart() {
cluster.subscribe(getSelf(), MemberEvent.class, ReachabilityEvent.class);
}
//unsubscribe on stop; preStart will re-subscribe after a restart
@Override
public void postStop() {
cluster.unsubscribe(getSelf());
tickTask.cancel();
}
@Override
public Receive createReceive() {
return receiveBuilder()
.matchEquals("tick", x -> !nodes.isEmpty(), x -> {
// just pick any one
List<Address> nodesList = new ArrayList<Address>(nodes);
Address address = nodesList.get(ThreadLocalRandom.current().nextInt(
nodesList.size()));
ActorSelection service = getContext().actorSelection(address + servicePath);
service.tell(new StatsJob("this is the text that will be analyzed"),
getSelf());
})
.match(StatsResult.class, System.out::println)
.match(JobFailed.class, System.out::println)
.match(CurrentClusterState.class, state -> {
nodes.clear();
for (Member member : state.getMembers()) {
if (member.hasRole("compute") && member.status().equals(MemberStatus.up())) {
nodes.add(member.address());
}
}
})
.match(MemberUp.class, mUp -> {
if (mUp.member().hasRole("compute"))
nodes.add(mUp.member().address());
})
.match(MemberEvent.class, event -> {
nodes.remove(event.member().address());
})
.match(UnreachableMember.class, unreachable -> {
nodes.remove(unreachable.member().address());
})
.match(ReachableMember.class, reachable -> {
if (reachable.member().hasRole("compute"))
nodes.add(reachable.member().address());
})
.build();
}
}

View file

@ -1,19 +0,0 @@
package jdocs.cluster;
import com.typesafe.config.ConfigFactory;
import akka.actor.ActorSystem;
import akka.actor.Props;
public class StatsSampleOneMasterClientMain {
public static void main(String[] args) {
// note that the client is not a compute node; no cluster role is defined
ActorSystem system = ActorSystem.create("ClusterSystem",
ConfigFactory.load("stats2"));
system.actorOf(Props.create(StatsSampleClient.class, "/user/statsServiceProxy"),
"client");
}
}

View file

@ -1,53 +0,0 @@
package jdocs.cluster;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import akka.actor.ActorSystem;
import akka.actor.PoisonPill;
import akka.actor.Props;
import akka.cluster.singleton.ClusterSingletonManager;
import akka.cluster.singleton.ClusterSingletonManagerSettings;
import akka.cluster.singleton.ClusterSingletonProxy;
import akka.cluster.singleton.ClusterSingletonProxySettings;
public class StatsSampleOneMasterMain {
public static void main(String[] args) {
if (args.length == 0) {
startup(new String[] { "2551", "2552", "0" });
StatsSampleOneMasterClientMain.main(new String[0]);
} else {
startup(args);
}
}
public static void startup(String[] ports) {
for (String port : ports) {
// Override the configuration of the port
Config config = ConfigFactory
.parseString("akka.remote.netty.tcp.port=" + port)
.withFallback(
ConfigFactory.parseString("akka.cluster.roles = [compute]"))
.withFallback(ConfigFactory.load("stats2"));
ActorSystem system = ActorSystem.create("ClusterSystem", config);
//#create-singleton-manager
ClusterSingletonManagerSettings settings = ClusterSingletonManagerSettings.create(system)
.withRole("compute");
system.actorOf(ClusterSingletonManager.props(
Props.create(StatsService.class), PoisonPill.getInstance(), settings),
"statsService");
//#create-singleton-manager
//#singleton-proxy
ClusterSingletonProxySettings proxySettings =
ClusterSingletonProxySettings.create(system).withRole("compute");
system.actorOf(ClusterSingletonProxy.props("/user/statsService",
proxySettings), "statsServiceProxy");
//#singleton-proxy
}
}
}

View file

@ -1,80 +0,0 @@
package jdocs.cluster;
import akka.cluster.routing.ClusterRouterGroup;
import akka.cluster.routing.ClusterRouterGroupSettings;
import akka.cluster.routing.ClusterRouterPool;
import akka.cluster.routing.ClusterRouterPoolSettings;
import akka.routing.ConsistentHashingGroup;
import akka.routing.ConsistentHashingPool;
import jdocs.cluster.StatsMessages.StatsJob;
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.actor.AbstractActor;
import akka.routing.ConsistentHashingRouter.ConsistentHashableEnvelope;
import akka.routing.FromConfig;
import java.util.Collections;
//#service
public class StatsService extends AbstractActor {
// This router is used both with lookup and deploy of routees. If you
// have a router with only lookup of routees you can use Props.empty()
// instead of Props.create(StatsWorker.class).
ActorRef workerRouter = getContext().actorOf(
FromConfig.getInstance().props(Props.create(StatsWorker.class)),
"workerRouter");
@Override
public Receive createReceive() {
return receiveBuilder()
.match(StatsJob.class, job -> !job.getText().isEmpty(), job -> {
String[] words = job.getText().split(" ");
ActorRef replyTo = getSender();
// create actor that collects replies from workers
ActorRef aggregator = getContext().actorOf(
Props.create(StatsAggregator.class, words.length, replyTo));
// send each word to a worker
for (String word : words) {
workerRouter.tell(new ConsistentHashableEnvelope(word, word),
aggregator);
}
})
.build();
}
}
//#service
//not used, only for documentation
abstract class StatsService2 extends AbstractActor {
//#router-lookup-in-code
int totalInstances = 100;
Iterable<String> routeesPaths = Collections
.singletonList("/user/statsWorker");
boolean allowLocalRoutees = true;
String useRole = "compute";
ActorRef workerRouter = getContext().actorOf(
new ClusterRouterGroup(new ConsistentHashingGroup(routeesPaths),
new ClusterRouterGroupSettings(totalInstances, routeesPaths,
allowLocalRoutees, useRole)).props(), "workerRouter2");
//#router-lookup-in-code
}
//not used, only for documentation
abstract class StatsService3 extends AbstractActor {
//#router-deploy-in-code
int totalInstances = 100;
int maxInstancesPerNode = 3;
boolean allowLocalRoutees = false;
String useRole = "compute";
ActorRef workerRouter = getContext().actorOf(
new ClusterRouterPool(new ConsistentHashingPool(0),
new ClusterRouterPoolSettings(totalInstances, maxInstancesPerNode,
allowLocalRoutees, useRole)).props(Props
.create(StatsWorker.class)), "workerRouter3");
//#router-deploy-in-code
}
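
The workerRouter above is created with FromConfig, so its cluster-aware group settings normally live in deployment configuration rather than in code. The sample's .conf files are not part of this hunk; the snippet below is only a representative sketch of such a deployment section, assuming the paths and role used in the code above and the com.typesafe.config imports seen in the sample mains.

// Sketch only: deployment config for the FromConfig router in StatsService;
// key values are assumptions, not the shipped sample file.
Config routerDeployment = ConfigFactory.parseString(
  "akka.actor.deployment {\n" +
  "  /statsService/workerRouter {\n" +
  "    router = consistent-hashing-group\n" +
  "    routees.paths = [\"/user/statsWorker\"]\n" +
  "    cluster {\n" +
  "      enabled = on\n" +
  "      allow-local-routees = on\n" +
  "      use-role = compute\n" +
  "    }\n" +
  "  }\n" +
  "}");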

View file

@ -1,27 +0,0 @@
package jdocs.cluster;
import java.util.HashMap;
import java.util.Map;
import akka.actor.AbstractActor;
//#worker
public class StatsWorker extends AbstractActor {
Map<String, Integer> cache = new HashMap<String, Integer>();
@Override
public Receive createReceive() {
return receiveBuilder()
.match(String.class, word -> {
Integer length = cache.get(word);
if (length == null) {
length = word.length();
cache.put(word, length);
}
getSender().tell(length, getSelf());
})
.build();
}
}
//#worker

View file

@ -1,56 +0,0 @@
package jdocs.cluster;
import static jdocs.cluster.TransformationMessages.BACKEND_REGISTRATION;
import jdocs.cluster.TransformationMessages.TransformationJob;
import jdocs.cluster.TransformationMessages.TransformationResult;
import akka.actor.AbstractActor;
import akka.cluster.Cluster;
import akka.cluster.ClusterEvent.CurrentClusterState;
import akka.cluster.ClusterEvent.MemberUp;
import akka.cluster.Member;
import akka.cluster.MemberStatus;
//#backend
public class TransformationBackend extends AbstractActor {
Cluster cluster = Cluster.get(getContext().getSystem());
//subscribe to cluster changes, MemberUp
@Override
public void preStart() {
cluster.subscribe(getSelf(), MemberUp.class);
}
// unsubscribe on stop; after a restart, preStart subscribes again
@Override
public void postStop() {
cluster.unsubscribe(getSelf());
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(TransformationJob.class, job -> {
getSender().tell(new TransformationResult(job.getText().toUpperCase()),
getSelf());
})
.match(CurrentClusterState.class, state -> {
for (Member member : state.getMembers()) {
if (member.status().equals(MemberStatus.up())) {
register(member);
}
}
})
.match(MemberUp.class, mUp -> {
register(mUp.member());
})
.build();
}
void register(Member member) {
if (member.hasRole("frontend"))
getContext().actorSelection(member.address() + "/user/frontend").tell(
BACKEND_REGISTRATION, getSelf());
}
}
//#backend

View file

@ -1,44 +0,0 @@
package jdocs.cluster;
import static jdocs.cluster.TransformationMessages.BACKEND_REGISTRATION;
import java.util.ArrayList;
import java.util.List;
import jdocs.cluster.TransformationMessages.JobFailed;
import jdocs.cluster.TransformationMessages.TransformationJob;
import akka.actor.ActorRef;
import akka.actor.Terminated;
import akka.actor.AbstractActor;
//#frontend
public class TransformationFrontend extends AbstractActor {
List<ActorRef> backends = new ArrayList<ActorRef>();
int jobCounter = 0;
@Override
public Receive createReceive() {
return receiveBuilder()
.match(TransformationJob.class, job -> backends.isEmpty(), job -> {
getSender().tell(
new JobFailed("Service unavailable, try again later", job),
getSelf());
})
.match(TransformationJob.class, job -> {
jobCounter++;
backends.get(jobCounter % backends.size())
.forward(job, getContext());
})
.matchEquals(BACKEND_REGISTRATION, x -> {
getContext().watch(getSender());
backends.add(getSender());
})
.match(Terminated.class, terminated -> {
backends.remove(terminated.getActor());
})
.build();
}
}
//#frontend
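
A frontend node would be started much like the Stats*Main classes earlier in this commit. The launcher below is a hypothetical sketch (the actual frontend main is not part of this hunk); it only assumes the "frontend" role and the actor name that TransformationBackend's register method looks up.

package jdocs.cluster;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import akka.actor.ActorSystem;
import akka.actor.Props;
// Hypothetical launcher sketch, mirroring StatsSampleOneMasterMain above.
public class TransformationFrontendMain {
  public static void main(String[] args) {
    String port = args.length > 0 ? args[0] : "0";
    Config config = ConfigFactory
        .parseString("akka.remote.netty.tcp.port=" + port)
        .withFallback(ConfigFactory.parseString("akka.cluster.roles = [frontend]"))
        .withFallback(ConfigFactory.load());
    ActorSystem system = ActorSystem.create("ClusterSystem", config);
    // the name "frontend" matches the actor selection in TransformationBackend
    system.actorOf(Props.create(TransformationFrontend.class), "frontend");
  }
}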

View file

@ -1,63 +0,0 @@
package jdocs.cluster;
import java.io.Serializable;
//#messages
public interface TransformationMessages {
public static class TransformationJob implements Serializable {
private final String text;
public TransformationJob(String text) {
this.text = text;
}
public String getText() {
return text;
}
}
public static class TransformationResult implements Serializable {
private final String text;
public TransformationResult(String text) {
this.text = text;
}
public String getText() {
return text;
}
@Override
public String toString() {
return "TransformationResult(" + text + ")";
}
}
public static class JobFailed implements Serializable {
private final String reason;
private final TransformationJob job;
public JobFailed(String reason, TransformationJob job) {
this.reason = reason;
this.job = job;
}
public String getReason() {
return reason;
}
public TransformationJob getJob() {
return job;
}
@Override
public String toString() {
return "JobFailed(" + reason + ")";
}
}
public static final String BACKEND_REGISTRATION = "BackendRegistration";
}
//#messages

View file

@ -1,101 +0,0 @@
/**
* Copyright (C) 2015-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.ddata;
//#data-bot
import static java.util.concurrent.TimeUnit.SECONDS;
import scala.concurrent.duration.Duration;
import java.util.concurrent.ThreadLocalRandom;
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Cancellable;
import akka.cluster.Cluster;
import akka.cluster.ddata.DistributedData;
import akka.cluster.ddata.Key;
import akka.cluster.ddata.ORSet;
import akka.cluster.ddata.ORSetKey;
import akka.cluster.ddata.Replicator;
import akka.cluster.ddata.Replicator.Changed;
import akka.cluster.ddata.Replicator.Subscribe;
import akka.cluster.ddata.Replicator.Update;
import akka.cluster.ddata.Replicator.UpdateResponse;
import akka.event.Logging;
import akka.event.LoggingAdapter;
public class DataBot extends AbstractActor {
private static final String TICK = "tick";
private final LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
private final ActorRef replicator =
DistributedData.get(getContext().getSystem()).replicator();
private final Cluster node = Cluster.get(getContext().getSystem());
private final Cancellable tickTask = getContext().getSystem().scheduler().schedule(
Duration.create(5, SECONDS), Duration.create(5, SECONDS), getSelf(), TICK,
getContext().dispatcher(), getSelf());
private final Key<ORSet<String>> dataKey = ORSetKey.create("key");
@SuppressWarnings("unchecked")
@Override
public Receive createReceive() {
return receiveBuilder()
.match(String.class, a -> a.equals(TICK), a -> receiveTick())
.match(Changed.class, c -> c.key().equals(dataKey), c -> receiveChanged((Changed<ORSet<String>>) c))
.match(UpdateResponse.class, r -> receiveUpdateResponse())
.build();
}
private void receiveTick() {
String s = String.valueOf((char) ThreadLocalRandom.current().nextInt(97, 123));
if (ThreadLocalRandom.current().nextBoolean()) {
// add
log.info("Adding: {}", s);
Update<ORSet<String>> update = new Update<>(
dataKey,
ORSet.create(),
Replicator.writeLocal(),
curr -> curr.add(node, s));
replicator.tell(update, getSelf());
} else {
// remove
log.info("Removing: {}", s);
Update<ORSet<String>> update = new Update<>(
dataKey,
ORSet.create(),
Replicator.writeLocal(),
curr -> curr.remove(node, s));
replicator.tell(update, getSelf());
}
}
private void receiveChanged(Changed<ORSet<String>> c) {
ORSet<String> data = c.dataValue();
log.info("Current elements: {}", data.getElements());
}
private void receiveUpdateResponse() {
// ignore
}
@Override
public void preStart() {
Subscribe<ORSet<String>> subscribe = new Subscribe<>(dataKey, getSelf());
replicator.tell(subscribe, ActorRef.noSender());
}
@Override
public void postStop(){
tickTask.cancel();
}
}
//#data-bot
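
DataBot only needs a cluster-enabled ActorSystem to obtain the Replicator. A minimal single-node launcher could look like the sketch below; the system name and inline config are illustrative assumptions, and the imports are the ones DataBot itself uses plus ConfigFactory.

// Minimal sketch: a single node joining itself so the bot has a cluster to gossip in.
ActorSystem system = ActorSystem.create("DataBotSystem",
    ConfigFactory.parseString("akka.actor.provider = \"cluster\""));
Cluster.get(system).join(Cluster.get(system).selfAddress());
system.actorOf(Props.create(DataBot.class), "dataBot");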

View file

@ -1,422 +0,0 @@
/**
* Copyright (C) 2015-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.ddata;
import java.util.HashSet;
import java.util.Arrays;
import java.util.Set;
import java.math.BigInteger;
import java.util.Optional;
import akka.actor.*;
import akka.testkit.javadsl.TestKit;
import com.typesafe.config.ConfigFactory;
import docs.ddata.DistributedDataDocSpec;
import jdocs.AbstractJavaTest;
import scala.concurrent.duration.Duration;
import static java.util.concurrent.TimeUnit.SECONDS;
import static org.junit.Assert.assertEquals;
import org.junit.Test;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import akka.cluster.Cluster;
import akka.cluster.ddata.*;
import akka.japi.pf.ReceiveBuilder;
import static akka.cluster.ddata.Replicator.*;
@SuppressWarnings({"unchecked", "unused"})
public class DistributedDataDocTest extends AbstractJavaTest {
static ActorSystem system;
@BeforeClass
public static void setup() {
system = ActorSystem.create("DistributedDataDocTest",
ConfigFactory.parseString(DistributedDataDocSpec.config()));
}
@AfterClass
public static void tearDown() {
TestKit.shutdownActorSystem(system);
system = null;
}
static
//#update
class DemonstrateUpdate extends AbstractActor {
final Cluster node = Cluster.get(getContext().getSystem());
final ActorRef replicator =
DistributedData.get(getContext().getSystem()).replicator();
final Key<PNCounter> counter1Key = PNCounterKey.create("counter1");
final Key<GSet<String>> set1Key = GSetKey.create("set1");
final Key<ORSet<String>> set2Key = ORSetKey.create("set2");
final Key<Flag> activeFlagKey = FlagKey.create("active");
@Override
public Receive createReceive() {
ReceiveBuilder b = receiveBuilder();
b.matchEquals("demonstrate update", msg -> {
replicator.tell(new Replicator.Update<PNCounter>(counter1Key, PNCounter.create(),
Replicator.writeLocal(), curr -> curr.increment(node, 1)), getSelf());
final WriteConsistency writeTo3 = new WriteTo(3, Duration.create(1, SECONDS));
replicator.tell(new Replicator.Update<GSet<String>>(set1Key, GSet.create(),
writeTo3, curr -> curr.add("hello")), getSelf());
final WriteConsistency writeMajority =
new WriteMajority(Duration.create(5, SECONDS));
replicator.tell(new Replicator.Update<ORSet<String>>(set2Key, ORSet.create(),
writeMajority, curr -> curr.add(node, "hello")), getSelf());
final WriteConsistency writeAll = new WriteAll(Duration.create(5, SECONDS));
replicator.tell(new Replicator.Update<Flag>(activeFlagKey, Flag.create(),
writeAll, curr -> curr.switchOn()), getSelf());
});
//#update
//#update-response1
b.match(UpdateSuccess.class, a -> a.key().equals(counter1Key), a -> {
// ok
});
//#update-response1
//#update-response2
b.match(UpdateSuccess.class, a -> a.key().equals(set1Key), a -> {
// ok
})
.match(UpdateTimeout.class, a -> a.key().equals(set1Key), a -> {
// write to 3 nodes failed within 1.second
});
//#update-response2
//#update
return b.build();
}
}
//#update
static
//#update-request-context
class DemonstrateUpdateWithRequestContext extends AbstractActor {
final Cluster node = Cluster.get(getContext().getSystem());
final ActorRef replicator =
DistributedData.get(getContext().getSystem()).replicator();
final WriteConsistency writeTwo = new WriteTo(2, Duration.create(3, SECONDS));
final Key<PNCounter> counter1Key = PNCounterKey.create("counter1");
@Override
public Receive createReceive() {
return receiveBuilder()
.match(String.class, a -> a.equals("increment"), a -> {
// incoming command to increase the counter
Optional<Object> reqContext = Optional.of(getSender());
Replicator.Update<PNCounter> upd = new Replicator.Update<PNCounter>(counter1Key,
PNCounter.create(), writeTwo, reqContext, curr -> curr.increment(node, 1));
replicator.tell(upd, getSelf());
})
.match(UpdateSuccess.class, a -> a.key().equals(counter1Key), a -> {
ActorRef replyTo = (ActorRef) a.getRequest().get();
replyTo.tell("ack", getSelf());
})
.match(UpdateTimeout.class, a -> a.key().equals(counter1Key), a -> {
ActorRef replyTo = (ActorRef) a.getRequest().get();
replyTo.tell("nack", getSelf());
})
.build();
}
}
//#update-request-context
static
//#get
class DemonstrateGet extends AbstractActor {
final ActorRef replicator =
DistributedData.get(getContext().getSystem()).replicator();
final Key<PNCounter> counter1Key = PNCounterKey.create("counter1");
final Key<GSet<String>> set1Key = GSetKey.create("set1");
final Key<ORSet<String>> set2Key = ORSetKey.create("set2");
final Key<Flag> activeFlagKey = FlagKey.create("active");
@Override
public Receive createReceive() {
ReceiveBuilder b = receiveBuilder();
b.matchEquals("demonstrate get", msg -> {
replicator.tell(new Replicator.Get<PNCounter>(counter1Key,
Replicator.readLocal()), getSelf());
final ReadConsistency readFrom3 = new ReadFrom(3, Duration.create(1, SECONDS));
replicator.tell(new Replicator.Get<GSet<String>>(set1Key,
readFrom3), getSelf());
final ReadConsistency readMajority = new ReadMajority(Duration.create(5, SECONDS));
replicator.tell(new Replicator.Get<ORSet<String>>(set2Key,
readMajority), getSelf());
final ReadConsistency readAll = new ReadAll(Duration.create(5, SECONDS));
replicator.tell(new Replicator.Get<Flag>(activeFlagKey,
readAll), getSelf());
});
//#get
//#get-response1
b.match(GetSuccess.class, a -> a.key().equals(counter1Key), a -> {
GetSuccess<PNCounter> g = a;
BigInteger value = g.dataValue().getValue();
}).
match(NotFound.class, a -> a.key().equals(counter1Key), a -> {
// key counter1 does not exist
});
//#get-response1
//#get-response2
b.match(GetSuccess.class, a -> a.key().equals(set1Key), a -> {
GetSuccess<GSet<String>> g = a;
Set<String> value = g.dataValue().getElements();
}).
match(GetFailure.class, a -> a.key().equals(set1Key), a -> {
// read from 3 nodes failed within 1.second
}).
match(NotFound.class, a -> a.key().equals(set1Key), a -> {
// key set1 does not exist
});
//#get-response2
//#get
return b.build();
}
}
//#get
static
//#get-request-context
class DemonstrateGetWithRequestContext extends AbstractActor {
final ActorRef replicator =
DistributedData.get(getContext().getSystem()).replicator();
final ReadConsistency readTwo = new ReadFrom(2, Duration.create(3, SECONDS));
final Key<PNCounter> counter1Key = PNCounterKey.create("counter1");
@Override
public Receive createReceive() {
return receiveBuilder()
.match(String.class, a -> a.equals("get-count"), a -> {
// incoming request to retrieve current value of the counter
Optional<Object> reqContext = Optional.of(getSender());
replicator.tell(new Replicator.Get<PNCounter>(counter1Key,
readTwo, reqContext), getSelf());
})
.match(GetSuccess.class, a -> a.key().equals(counter1Key), a -> {
ActorRef replyTo = (ActorRef) a.getRequest().get();
GetSuccess<PNCounter> g = a;
long value = g.dataValue().getValue().longValue();
replyTo.tell(value, getSelf());
})
.match(GetFailure.class, a -> a.key().equals(counter1Key), a -> {
ActorRef replyTo = (ActorRef) a.getRequest().get();
replyTo.tell(-1L, getSelf());
})
.match(NotFound.class, a -> a.key().equals(counter1Key), a -> {
ActorRef replyTo = (ActorRef) a.getRequest().get();
replyTo.tell(0L, getSelf());
})
.build();
}
}
//#get-request-context
static
//#subscribe
class DemonstrateSubscribe extends AbstractActor {
final ActorRef replicator =
DistributedData.get(getContext().getSystem()).replicator();
final Key<PNCounter> counter1Key = PNCounterKey.create("counter1");
BigInteger currentValue = BigInteger.valueOf(0);
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Changed.class, a -> a.key().equals(counter1Key), a -> {
Changed<PNCounter> g = a;
currentValue = g.dataValue().getValue();
})
.match(String.class, a -> a.equals("get-count"), a -> {
// incoming request to retrieve current value of the counter
getSender().tell(currentValue, getSelf());
})
.build();
}
@Override
public void preStart() {
// subscribe to changes of the counter1Key value
replicator.tell(new Subscribe<PNCounter>(counter1Key, getSelf()), ActorRef.noSender());
}
}
//#subscribe
static
//#delete
class DemonstrateDelete extends AbstractActor {
final ActorRef replicator =
DistributedData.get(getContext().getSystem()).replicator();
final Key<PNCounter> counter1Key = PNCounterKey.create("counter1");
final Key<ORSet<String>> set2Key = ORSetKey.create("set2");
@Override
public Receive createReceive() {
return receiveBuilder()
.matchEquals("demonstrate delete", msg -> {
replicator.tell(new Delete<PNCounter>(counter1Key,
Replicator.writeLocal()), getSelf());
final WriteConsistency writeMajority =
new WriteMajority(Duration.create(5, SECONDS));
replicator.tell(new Delete<PNCounter>(counter1Key,
writeMajority), getSelf());
})
.build();
}
}
//#delete
public void demonstratePNCounter() {
//#pncounter
final Cluster node = Cluster.get(system);
final PNCounter c0 = PNCounter.create();
final PNCounter c1 = c0.increment(node, 1);
final PNCounter c2 = c1.increment(node, 7);
final PNCounter c3 = c2.decrement(node, 2);
System.out.println(c3.value()); // 6
//#pncounter
}
public void demonstratePNCounterMap() {
//#pncountermap
final Cluster node = Cluster.get(system);
final PNCounterMap<String> m0 = PNCounterMap.create();
final PNCounterMap<String> m1 = m0.increment(node, "a", 7);
final PNCounterMap<String> m2 = m1.decrement(node, "a", 2);
final PNCounterMap<String> m3 = m2.increment(node, "b", 1);
System.out.println(m3.get("a")); // 5
System.out.println(m3.getEntries());
//#pncountermap
}
public void demonstrateGSet() {
//#gset
final GSet<String> s0 = GSet.create();
final GSet<String> s1 = s0.add("a");
final GSet<String> s2 = s1.add("b").add("c");
if (s2.contains("a"))
System.out.println(s2.getElements()); // a, b, c
//#gset
}
public void demonstrateORSet() {
//#orset
final Cluster node = Cluster.get(system);
final ORSet<String> s0 = ORSet.create();
final ORSet<String> s1 = s0.add(node, "a");
final ORSet<String> s2 = s1.add(node, "b");
final ORSet<String> s3 = s2.remove(node, "a");
System.out.println(s3.getElements()); // b
//#orset
}
public void demonstrateORMultiMap() {
//#ormultimap
final Cluster node = Cluster.get(system);
final ORMultiMap<String, Integer> m0 = ORMultiMap.create();
final ORMultiMap<String, Integer> m1 = m0.put(node, "a",
new HashSet<>(Arrays.asList(1, 2, 3)));
final ORMultiMap<String, Integer> m2 = m1.addBinding(node, "a", 4);
final ORMultiMap<String, Integer> m3 = m2.removeBinding(node, "a", 2);
final ORMultiMap<String, Integer> m4 = m3.addBinding(node, "b", 1);
System.out.println(m4.getEntries());
//#ormultimap
}
public void demonstrateFlag() {
//#flag
final Flag f0 = Flag.create();
final Flag f1 = f0.switchOn();
System.out.println(f1.enabled());
//#flag
}
@Test
public void demonstrateLWWRegister() {
//#lwwregister
final Cluster node = Cluster.get(system);
final LWWRegister<String> r1 = LWWRegister.create(node, "Hello");
final LWWRegister<String> r2 = r1.withValue(node, "Hi");
System.out.println(r1.value() + " by " + r1.updatedBy() + " at " + r1.timestamp());
//#lwwregister
assertEquals("Hi", r2.value());
}
static
//#lwwregister-custom-clock
class Record {
public final int version;
public final String name;
public final String address;
public Record(int version, String name, String address) {
this.version = version;
this.name = name;
this.address = address;
}
}
//#lwwregister-custom-clock
public void demonstrateLWWRegisterWithCustomClock() {
//#lwwregister-custom-clock
final Cluster node = Cluster.get(system);
final LWWRegister.Clock<Record> recordClock = new LWWRegister.Clock<Record>() {
@Override
public long apply(long currentTimestamp, Record value) {
return value.version;
}
};
final Record record1 = new Record(1, "Alice", "Union Square");
final LWWRegister<Record> r1 = LWWRegister.create(node, record1);
final Record record2 = new Record(2, "Alice", "Madison Square");
final LWWRegister<Record> r2 = LWWRegister.create(node, record2);
final LWWRegister<Record> r3 = r1.merge(r2);
System.out.println(r3.value());
//#lwwregister-custom-clock
assertEquals("Madison Square", r3.value().address);
}
}

View file

@ -1,273 +0,0 @@
package jdocs.ddata;
import static java.util.concurrent.TimeUnit.SECONDS;
import java.io.Serializable;
import java.util.HashSet;
import java.util.Optional;
import java.util.Set;
import scala.concurrent.duration.Duration;
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.cluster.Cluster;
import akka.cluster.ddata.DistributedData;
import akka.cluster.ddata.Key;
import akka.cluster.ddata.LWWMap;
import akka.cluster.ddata.LWWMapKey;
import akka.cluster.ddata.Replicator;
import akka.cluster.ddata.Replicator.GetFailure;
import akka.cluster.ddata.Replicator.GetResponse;
import akka.cluster.ddata.Replicator.GetSuccess;
import akka.cluster.ddata.Replicator.NotFound;
import akka.cluster.ddata.Replicator.ReadConsistency;
import akka.cluster.ddata.Replicator.ReadMajority;
import akka.cluster.ddata.Replicator.Update;
import akka.cluster.ddata.Replicator.UpdateFailure;
import akka.cluster.ddata.Replicator.UpdateSuccess;
import akka.cluster.ddata.Replicator.UpdateTimeout;
import akka.cluster.ddata.Replicator.WriteConsistency;
import akka.cluster.ddata.Replicator.WriteMajority;
@SuppressWarnings("unchecked")
public class ShoppingCart extends AbstractActor {
//#read-write-majority
private final WriteConsistency writeMajority =
new WriteMajority(Duration.create(3, SECONDS));
private final ReadConsistency readMajority =
new ReadMajority(Duration.create(3, SECONDS));
//#read-write-majority
public static final String GET_CART = "getCart";
public static class AddItem {
public final LineItem item;
public AddItem(LineItem item) {
this.item = item;
}
}
public static class RemoveItem {
public final String productId;
public RemoveItem(String productId) {
this.productId = productId;
}
}
public static class Cart {
public final Set<LineItem> items;
public Cart(Set<LineItem> items) {
this.items = items;
}
}
public static class LineItem implements Serializable {
private static final long serialVersionUID = 1L;
public final String productId;
public final String title;
public final int quantity;
public LineItem(String productId, String title, int quantity) {
this.productId = productId;
this.title = title;
this.quantity = quantity;
}
@Override
public int hashCode() {
final int prime = 31;
int result = 1;
result = prime * result + ((productId == null) ? 0 : productId.hashCode());
result = prime * result + quantity;
result = prime * result + ((title == null) ? 0 : title.hashCode());
return result;
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (getClass() != obj.getClass())
return false;
LineItem other = (LineItem) obj;
if (productId == null) {
if (other.productId != null)
return false;
} else if (!productId.equals(other.productId))
return false;
if (quantity != other.quantity)
return false;
if (title == null) {
if (other.title != null)
return false;
} else if (!title.equals(other.title))
return false;
return true;
}
@Override
public String toString() {
return "LineItem [productId=" + productId + ", title=" + title + ", quantity=" + quantity + "]";
}
}
public static Props props(String userId) {
return Props.create(ShoppingCart.class, userId);
}
private final ActorRef replicator = DistributedData.get(getContext().getSystem()).replicator();
private final Cluster node = Cluster.get(getContext().getSystem());
@SuppressWarnings("unused")
private final String userId;
private final Key<LWWMap<String, LineItem>> dataKey;
public ShoppingCart(String userId) {
this.userId = userId;
this.dataKey = LWWMapKey.create("cart-" + userId);
}
@Override
public Receive createReceive() {
return matchGetCart()
.orElse(matchAddItem())
.orElse(matchRemoveItem())
.orElse(matchOther());
}
//#get-cart
private Receive matchGetCart() {
return receiveBuilder()
.matchEquals(GET_CART, s -> receiveGetCart())
.match(GetSuccess.class, this::isResponseToGetCart,
g -> receiveGetSuccess((GetSuccess<LWWMap<String, LineItem>>) g))
.match(NotFound.class, this::isResponseToGetCart,
n -> receiveNotFound((NotFound<LWWMap<String, LineItem>>) n))
.match(GetFailure.class, this::isResponseToGetCart,
f -> receiveGetFailure((GetFailure<LWWMap<String, LineItem>>) f))
.build();
}
private void receiveGetCart() {
Optional<Object> ctx = Optional.of(getSender());
replicator.tell(new Replicator.Get<LWWMap<String, LineItem>>(dataKey, readMajority, ctx),
getSelf());
}
private boolean isResponseToGetCart(GetResponse<?> response) {
return response.key().equals(dataKey) &&
(response.getRequest().orElse(null) instanceof ActorRef);
}
private void receiveGetSuccess(GetSuccess<LWWMap<String, LineItem>> g) {
Set<LineItem> items = new HashSet<>(g.dataValue().getEntries().values());
ActorRef replyTo = (ActorRef) g.getRequest().get();
replyTo.tell(new Cart(items), getSelf());
}
private void receiveNotFound(NotFound<LWWMap<String, LineItem>> n) {
ActorRef replyTo = (ActorRef) n.getRequest().get();
replyTo.tell(new Cart(new HashSet<>()), getSelf());
}
private void receiveGetFailure(GetFailure<LWWMap<String, LineItem>> f) {
// ReadMajority failure, try again with local read
Optional<Object> ctx = Optional.of(getSender());
replicator.tell(new Replicator.Get<LWWMap<String, LineItem>>(dataKey, Replicator.readLocal(),
ctx), getSelf());
}
//#get-cart
//#add-item
private Receive matchAddItem() {
return receiveBuilder()
.match(AddItem.class, this::receiveAddItem)
.build();
}
private void receiveAddItem(AddItem add) {
Update<LWWMap<String, LineItem>> update = new Update<>(dataKey, LWWMap.create(), writeMajority,
cart -> updateCart(cart, add.item));
replicator.tell(update, getSelf());
}
//#add-item
private LWWMap<String, LineItem> updateCart(LWWMap<String, LineItem> data, LineItem item) {
if (data.contains(item.productId)) {
LineItem existingItem = data.get(item.productId).get();
int newQuantity = existingItem.quantity + item.quantity;
LineItem newItem = new LineItem(item.productId, item.title, newQuantity);
return data.put(node, item.productId, newItem);
} else {
return data.put(node, item.productId, item);
}
}
private Receive matchRemoveItem() {
return receiveBuilder()
.match(RemoveItem.class, this::receiveRemoveItem)
.match(GetSuccess.class, this::isResponseToRemoveItem,
g -> receiveRemoveItemGetSuccess((GetSuccess<LWWMap<String, LineItem>>) g))
.match(GetFailure.class, this::isResponseToRemoveItem,
f -> receiveRemoveItemGetFailure((GetFailure<LWWMap<String, LineItem>>) f))
.match(NotFound.class, this::isResponseToRemoveItem, n -> {/* nothing to remove */})
.build();
}
//#remove-item
private void receiveRemoveItem(RemoveItem rm) {
// Try to fetch latest from a majority of nodes first, since ORMap
// remove must have seen the item to be able to remove it.
Optional<Object> ctx = Optional.of(rm);
replicator.tell(new Replicator.Get<LWWMap<String, LineItem>>(dataKey, readMajority, ctx),
getSelf());
}
private void receiveRemoveItemGetSuccess(GetSuccess<LWWMap<String, LineItem>> g) {
RemoveItem rm = (RemoveItem) g.getRequest().get();
removeItem(rm.productId);
}
private void receiveRemoveItemGetFailure(GetFailure<LWWMap<String, LineItem>> f) {
// ReadMajority failed, fall back to best effort local value
RemoveItem rm = (RemoveItem) f.getRequest().get();
removeItem(rm.productId);
}
private void removeItem(String productId) {
Update<LWWMap<String, LineItem>> update = new Update<>(dataKey, LWWMap.create(), writeMajority,
cart -> cart.remove(node, productId));
replicator.tell(update, getSelf());
}
private boolean isResponseToRemoveItem(GetResponse<?> response) {
return response.key().equals(dataKey) &&
(response.getRequest().orElse(null) instanceof RemoveItem);
}
//#remove-item
private Receive matchOther() {
return receiveBuilder()
.match(UpdateSuccess.class, u -> {
// ok
})
.match(UpdateTimeout.class, t -> {
// will eventually be replicated
})
.match(UpdateFailure.class, f -> {
throw new IllegalStateException("Unexpected failure: " + f);
})
.build();
}
}
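
For context, a usage sketch of the cart protocol defined above; the client actor and its names are hypothetical, and the imports are the same akka.actor ones used by ShoppingCart.

// Hypothetical requesting actor demonstrating the AddItem / GET_CART protocol.
class CartClient extends AbstractActor {
  private final ActorRef cart =
      getContext().actorOf(ShoppingCart.props("user-42"), "cart");
  @Override
  public void preStart() {
    cart.tell(new ShoppingCart.AddItem(
        new ShoppingCart.LineItem("p1", "Akka in Action", 2)), getSelf());
    cart.tell(ShoppingCart.GET_CART, getSelf());
  }
  @Override
  public Receive createReceive() {
    return receiveBuilder()
        // the cart eventually replies with a Cart carrying the line items
        .match(ShoppingCart.Cart.class, c -> System.out.println(c.items))
        .build();
  }
}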

View file

@ -1,48 +0,0 @@
/**
* Copyright (C) 2015-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.ddata;
import java.util.HashSet;
import java.util.Set;
import akka.cluster.ddata.AbstractReplicatedData;
import akka.cluster.ddata.GSet;
//#twophaseset
public class TwoPhaseSet extends AbstractReplicatedData<TwoPhaseSet> {
public final GSet<String> adds;
public final GSet<String> removals;
public TwoPhaseSet(GSet<String> adds, GSet<String> removals) {
this.adds = adds;
this.removals = removals;
}
public static TwoPhaseSet create() {
return new TwoPhaseSet(GSet.create(), GSet.create());
}
public TwoPhaseSet add(String element) {
return new TwoPhaseSet(adds.add(element), removals);
}
public TwoPhaseSet remove(String element) {
return new TwoPhaseSet(adds, removals.add(element));
}
public Set<String> getElements() {
Set<String> result = new HashSet<>(adds.getElements());
result.removeAll(removals.getElements());
return result;
}
@Override
public TwoPhaseSet mergeData(TwoPhaseSet that) {
return new TwoPhaseSet(this.adds.merge(that.adds),
this.removals.merge(that.removals));
}
}
//#twophaseset
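
A short sketch of the two-phase semantics this type implements: because removals are recorded in their own grow-only set, a removed element can never reappear, no matter on which node it is re-added.

TwoPhaseSet s1 = TwoPhaseSet.create().add("a").add("b");
TwoPhaseSet s2 = s1.remove("a");
s2.getElements();          // [b]
s2.add("a").getElements(); // still [b]: "a" stays in the removals set forever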

View file

@ -1,92 +0,0 @@
/**
* Copyright (C) 2015-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.ddata.protobuf;
//#serializer
import jdocs.ddata.TwoPhaseSet;
import docs.ddata.protobuf.msg.TwoPhaseSetMessages;
import docs.ddata.protobuf.msg.TwoPhaseSetMessages.TwoPhaseSet.Builder;
import java.util.ArrayList;
import java.util.Collections;
import akka.actor.ExtendedActorSystem;
import akka.cluster.ddata.GSet;
import akka.cluster.ddata.protobuf.AbstractSerializationSupport;
public class TwoPhaseSetSerializer extends AbstractSerializationSupport {
private final ExtendedActorSystem system;
public TwoPhaseSetSerializer(ExtendedActorSystem system) {
this.system = system;
}
@Override
public ExtendedActorSystem system() {
return this.system;
}
@Override
public boolean includeManifest() {
return false;
}
@Override
public int identifier() {
return 99998;
}
@Override
public byte[] toBinary(Object obj) {
if (obj instanceof TwoPhaseSet) {
return twoPhaseSetToProto((TwoPhaseSet) obj).toByteArray();
} else {
throw new IllegalArgumentException(
"Can't serialize object of type " + obj.getClass());
}
}
@Override
public Object fromBinaryJava(byte[] bytes, Class<?> manifest) {
return twoPhaseSetFromBinary(bytes);
}
protected TwoPhaseSetMessages.TwoPhaseSet twoPhaseSetToProto(TwoPhaseSet twoPhaseSet) {
Builder b = TwoPhaseSetMessages.TwoPhaseSet.newBuilder();
ArrayList<String> adds = new ArrayList<>(twoPhaseSet.adds.getElements());
if (!adds.isEmpty()) {
Collections.sort(adds);
b.addAllAdds(adds);
}
ArrayList<String> removals = new ArrayList<>(twoPhaseSet.removals.getElements());
if (!removals.isEmpty()) {
Collections.sort(removals);
b.addAllRemovals(removals);
}
return b.build();
}
protected TwoPhaseSet twoPhaseSetFromBinary(byte[] bytes) {
try {
TwoPhaseSetMessages.TwoPhaseSet msg =
TwoPhaseSetMessages.TwoPhaseSet.parseFrom(bytes);
GSet<String> adds = GSet.create();
for (String elem : msg.getAddsList()) {
adds = adds.add(elem);
}
GSet<String> removals = GSet.create();
for (String elem : msg.getRemovalsList()) {
removals = removals.add(elem);
}
// GSet will accumulate deltas when adding elements,
// but those are not of interest in the result of the deserialization
return new TwoPhaseSet(adds.resetDelta(), removals.resetDelta());
} catch (Exception e) {
throw new RuntimeException(e.getMessage(), e);
}
}
}
//#serializer
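
To be picked up by Akka, the serializer has to be bound to the data type in configuration. The sample's .conf is not in this hunk; the snippet below is a sketch using the standard serializers/serialization-bindings sections with the class names from this commit.

Config serializationConfig = ConfigFactory.parseString(
  "akka.actor {\n" +
  "  serializers {\n" +
  "    twophaseset = \"jdocs.ddata.protobuf.TwoPhaseSetSerializer\"\n" +
  "  }\n" +
  "  serialization-bindings {\n" +
  "    \"jdocs.ddata.TwoPhaseSet\" = twophaseset\n" +
  "  }\n" +
  "}");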

View file

@ -1,87 +0,0 @@
/**
* Copyright (C) 2015-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.ddata.protobuf;
//#serializer
import jdocs.ddata.TwoPhaseSet;
import docs.ddata.protobuf.msg.TwoPhaseSetMessages;
import docs.ddata.protobuf.msg.TwoPhaseSetMessages.TwoPhaseSet2.Builder;
import akka.actor.ExtendedActorSystem;
import akka.cluster.ddata.GSet;
import akka.cluster.ddata.protobuf.AbstractSerializationSupport;
import akka.cluster.ddata.protobuf.ReplicatedDataSerializer;
public class TwoPhaseSetSerializer2 extends AbstractSerializationSupport {
private final ExtendedActorSystem system;
private final ReplicatedDataSerializer replicatedDataSerializer;
public TwoPhaseSetSerializer2(ExtendedActorSystem system) {
this.system = system;
this.replicatedDataSerializer = new ReplicatedDataSerializer(system);
}
@Override
public ExtendedActorSystem system() {
return this.system;
}
@Override
public boolean includeManifest() {
return false;
}
@Override
public int identifier() {
return 99998;
}
@Override
public byte[] toBinary(Object obj) {
if (obj instanceof TwoPhaseSet) {
return twoPhaseSetToProto((TwoPhaseSet) obj).toByteArray();
} else {
throw new IllegalArgumentException(
"Can't serialize object of type " + obj.getClass());
}
}
@Override
public Object fromBinaryJava(byte[] bytes, Class<?> manifest) {
return twoPhaseSetFromBinary(bytes);
}
protected TwoPhaseSetMessages.TwoPhaseSet2 twoPhaseSetToProto(TwoPhaseSet twoPhaseSet) {
Builder b = TwoPhaseSetMessages.TwoPhaseSet2.newBuilder();
if (!twoPhaseSet.adds.isEmpty())
b.setAdds(otherMessageToProto(twoPhaseSet.adds).toByteString());
if (!twoPhaseSet.removals.isEmpty())
b.setRemovals(otherMessageToProto(twoPhaseSet.removals).toByteString());
return b.build();
}
@SuppressWarnings("unchecked")
protected TwoPhaseSet twoPhaseSetFromBinary(byte[] bytes) {
try {
TwoPhaseSetMessages.TwoPhaseSet2 msg =
TwoPhaseSetMessages.TwoPhaseSet2.parseFrom(bytes);
GSet<String> adds = GSet.create();
if (msg.hasAdds())
adds = (GSet<String>) otherMessageFromBinary(msg.getAdds().toByteArray());
GSet<String> removals = GSet.create();
if (msg.hasRemovals())
removals = (GSet<String>) otherMessageFromBinary(msg.getRemovals().toByteArray());
return new TwoPhaseSet(adds, removals);
} catch (Exception e) {
throw new RuntimeException(e.getMessage(), e);
}
}
}
//#serializer

View file

@ -1,32 +0,0 @@
/**
* Copyright (C) 2015-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.ddata.protobuf;
import jdocs.ddata.TwoPhaseSet;
import akka.actor.ExtendedActorSystem;
public class TwoPhaseSetSerializerWithCompression extends TwoPhaseSetSerializer {
public TwoPhaseSetSerializerWithCompression(ExtendedActorSystem system) {
super(system);
}
//#compression
@Override
public byte[] toBinary(Object obj) {
if (obj instanceof TwoPhaseSet) {
return compress(twoPhaseSetToProto((TwoPhaseSet) obj));
} else {
throw new IllegalArgumentException(
"Can't serialize object of type " + obj.getClass());
}
}
@Override
public Object fromBinaryJava(byte[] bytes, Class<?> manifest) {
return twoPhaseSetFromBinary(decompress(bytes));
}
//#compression
}

View file

@ -1,258 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.dispatcher;
import akka.dispatch.ControlMessage;
import akka.dispatch.RequiresMessageQueue;
import akka.testkit.AkkaSpec;
import akka.testkit.javadsl.TestKit;
import com.typesafe.config.ConfigFactory;
import docs.dispatcher.DispatcherDocSpec;
import jdocs.AbstractJavaTest;
import jdocs.actor.MyBoundedActor;
import jdocs.actor.MyActor;
import org.junit.ClassRule;
import org.junit.Test;
import scala.concurrent.ExecutionContext;
//#imports
import akka.actor.*;
//#imports
//#imports-prio
import akka.event.Logging;
import akka.event.LoggingAdapter;
//#imports-prio
//#imports-prio-mailbox
import akka.dispatch.PriorityGenerator;
import akka.dispatch.UnboundedStablePriorityMailbox;
import akka.testkit.AkkaJUnitActorSystemResource;
import com.typesafe.config.Config;
//#imports-prio-mailbox
//#imports-required-mailbox
//#imports-required-mailbox
public class DispatcherDocTest extends AbstractJavaTest {
@ClassRule
public static AkkaJUnitActorSystemResource actorSystemResource =
new AkkaJUnitActorSystemResource("DispatcherDocTest", ConfigFactory.parseString(
DispatcherDocSpec.javaConfig()).withFallback(ConfigFactory.parseString(
DispatcherDocSpec.config())).withFallback(AkkaSpec.testConf()));
private final ActorSystem system = actorSystemResource.getSystem();
@SuppressWarnings("unused")
@Test
public void defineDispatcherInConfig() {
//#defining-dispatcher-in-config
ActorRef myActor =
system.actorOf(Props.create(MyActor.class),
"myactor");
//#defining-dispatcher-in-config
}
@SuppressWarnings("unused")
@Test
public void defineDispatcherInCode() {
//#defining-dispatcher-in-code
ActorRef myActor =
system.actorOf(Props.create(MyActor.class).withDispatcher("my-dispatcher"),
"myactor3");
//#defining-dispatcher-in-code
}
@SuppressWarnings("unused")
@Test
public void defineFixedPoolSizeDispatcher() {
//#defining-fixed-pool-size-dispatcher
ActorRef myActor = system.actorOf(Props.create(MyActor.class)
.withDispatcher("blocking-io-dispatcher"));
//#defining-fixed-pool-size-dispatcher
}
@SuppressWarnings("unused")
@Test
public void definePinnedDispatcher() {
//#defining-pinned-dispatcher
ActorRef myActor = system.actorOf(Props.create(MyActor.class)
.withDispatcher("my-pinned-dispatcher"));
//#defining-pinned-dispatcher
}
@SuppressWarnings("unused")
public void compileLookup() {
//#lookup
// this is scala.concurrent.ExecutionContext
// for use with Futures, Scheduler, etc.
final ExecutionContext ex = system.dispatchers().lookup("my-dispatcher");
//#lookup
}
@SuppressWarnings("unused")
@Test
public void defineMailboxInConfig() {
//#defining-mailbox-in-config
ActorRef myActor =
system.actorOf(Props.create(MyActor.class),
"priomailboxactor");
//#defining-mailbox-in-config
}
@SuppressWarnings("unused")
@Test
public void defineMailboxInCode() {
//#defining-mailbox-in-code
ActorRef myActor =
system.actorOf(Props.create(MyActor.class)
.withMailbox("prio-mailbox"));
//#defining-mailbox-in-code
}
@SuppressWarnings("unused")
@Test
public void usingARequiredMailbox() {
ActorRef myActor =
system.actorOf(Props.create(MyBoundedActor.class));
}
@Test
public void priorityDispatcher() throws Exception {
TestKit probe = new TestKit(system);
//#prio-dispatcher
class Demo extends AbstractActor {
LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
{
for (Object msg : new Object[] { "lowpriority", "lowpriority",
"highpriority", "pigdog", "pigdog2", "pigdog3", "highpriority",
PoisonPill.getInstance() }) {
getSelf().tell(msg, getSelf());
}
}
@Override
public Receive createReceive() {
return receiveBuilder().matchAny(message -> {
log.info(message.toString());
}).build();
}
}
// We create a new Actor that just prints out what it processes
ActorRef myActor = system.actorOf(Props.create(Demo.class, this)
.withDispatcher("prio-dispatcher"));
/*
Logs:
highpriority
highpriority
pigdog
pigdog2
pigdog3
lowpriority
lowpriority
*/
//#prio-dispatcher
probe.watch(myActor);
probe.expectMsgClass(Terminated.class);
}
@Test
public void controlAwareDispatcher() throws Exception {
TestKit probe = new TestKit(system);
//#control-aware-dispatcher
class Demo extends AbstractActor {
LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
{
for (Object msg : new Object[] { "foo", "bar", new MyControlMessage(),
PoisonPill.getInstance() }) {
getSelf().tell(msg, getSelf());
}
}
@Override
public Receive createReceive() {
return receiveBuilder().matchAny(message -> {
log.info(message.toString());
}).build();
}
}
// We create a new Actor that just prints out what it processes
ActorRef myActor = system.actorOf(Props.create(Demo.class, this)
.withDispatcher("control-aware-dispatcher"));
/*
Logs:
MyControlMessage
foo
bar
*/
//#control-aware-dispatcher
probe.watch(myActor);
probe.expectMsgClass(Terminated.class);
}
static
//#prio-mailbox
public class MyPrioMailbox extends UnboundedStablePriorityMailbox {
// needed for reflective instantiation
public MyPrioMailbox(ActorSystem.Settings settings, Config config) {
// Create a new PriorityGenerator, lower prio means more important
super(new PriorityGenerator() {
@Override
public int gen(Object message) {
if (message.equals("highpriority"))
return 0; // 'highpriority messages should be treated first if possible
else if (message.equals("lowpriority"))
return 2; // 'lowpriority messages should be treated last if possible
else if (message.equals(PoisonPill.getInstance()))
return 3; // PoisonPill when no other left
else
return 1; // By default they go between high and low prio
}
});
}
}
//#prio-mailbox
static
//#control-aware-mailbox-messages
public class MyControlMessage implements ControlMessage {}
//#control-aware-mailbox-messages
@Test
public void requiredMailboxDispatcher() throws Exception {
ActorRef myActor = system.actorOf(Props.create(MyActor.class)
.withDispatcher("custom-dispatcher"));
}
static
//#require-mailbox-on-actor
public class MySpecialActor extends AbstractActor implements
RequiresMessageQueue<MyUnboundedMessageQueueSemantics> {
//#require-mailbox-on-actor
@Override
public Receive createReceive() {
return AbstractActor.emptyBehavior();
}
//#require-mailbox-on-actor
// ...
}
//#require-mailbox-on-actor
@Test
public void requiredMailboxActor() throws Exception {
ActorRef myActor = system.actorOf(Props.create(MySpecialActor.class));
}
}
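
The dispatcher names used in these tests ("my-dispatcher", "prio-dispatcher", and so on) are defined in DispatcherDocSpec, which is not part of this hunk. Below is a representative sketch of two of those definitions; the executor values are illustrative, only the key structure follows the standard dispatcher configuration.

Config dispatcherConfig = ConfigFactory.parseString(
  "my-dispatcher {\n" +
  "  type = Dispatcher\n" +
  "  executor = \"fork-join-executor\"\n" +
  "  fork-join-executor {\n" +
  "    parallelism-min = 2\n" +
  "    parallelism-factor = 2.0\n" +
  "    parallelism-max = 10\n" +
  "  }\n" +
  "  throughput = 100\n" +
  "}\n" +
  "prio-dispatcher {\n" +
  "  mailbox-type = \"jdocs.dispatcher.DispatcherDocTest$MyPrioMailbox\"\n" +
  "}");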

View file

@ -1,51 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.dispatcher;
//#mailbox-implementation-example
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.dispatch.Envelope;
import akka.dispatch.MailboxType;
import akka.dispatch.MessageQueue;
import akka.dispatch.ProducesMessageQueue;
import com.typesafe.config.Config;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.Queue;
import scala.Option;
public class MyUnboundedMailbox implements MailboxType,
ProducesMessageQueue<MyUnboundedMailbox.MyMessageQueue> {
// This is the MessageQueue implementation
public static class MyMessageQueue implements MessageQueue,
MyUnboundedMessageQueueSemantics {
private final Queue<Envelope> queue =
new ConcurrentLinkedQueue<Envelope>();
// these must be implemented; queue used as example
public void enqueue(ActorRef receiver, Envelope handle) {
queue.offer(handle);
}
public Envelope dequeue() { return queue.poll(); }
public int numberOfMessages() { return queue.size(); }
public boolean hasMessages() { return !queue.isEmpty(); }
public void cleanUp(ActorRef owner, MessageQueue deadLetters) {
for (Envelope handle: queue) {
deadLetters.enqueue(owner, handle);
}
}
}
// This constructor signature must exist, it will be called by Akka
public MyUnboundedMailbox(ActorSystem.Settings settings, Config config) {
// put your initialization code here
}
// The create method is called to create the MessageQueue
public MessageQueue create(Option<ActorRef> owner, Option<ActorSystem> system) {
return new MyMessageQueue();
}
}
//#mailbox-implementation-example

View file

@ -1,11 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.dispatcher;
//#mailbox-implementation-example
// Marker interface used for mailbox requirements mapping
public interface MyUnboundedMessageQueueSemantics {
}
//#mailbox-implementation-example
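
The marker interface is tied to the mailbox implementation through configuration. As a sketch (the mailbox key name here is an illustrative assumption), the requirements mapping would look like:

Config mailboxConfig = ConfigFactory.parseString(
  "my-unbounded-mailbox {\n" +
  "  mailbox-type = \"jdocs.dispatcher.MyUnboundedMailbox\"\n" +
  "}\n" +
  "akka.actor.mailbox.requirements {\n" +
  "  \"jdocs.dispatcher.MyUnboundedMessageQueueSemantics\" = my-unbounded-mailbox\n" +
  "}");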

View file

@ -1,325 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.event;
import akka.event.japi.EventBus;
import java.util.concurrent.TimeUnit;
import jdocs.AbstractJavaTest;
import akka.testkit.javadsl.TestKit;
import org.junit.ClassRule;
import org.junit.Test;
import akka.actor.ActorSystem;
import akka.actor.ActorRef;
import akka.testkit.AkkaJUnitActorSystemResource;
import akka.util.Subclassification;
import scala.concurrent.duration.FiniteDuration;
//#lookup-bus
import akka.event.japi.LookupEventBus;
//#lookup-bus
//#subchannel-bus
import akka.event.japi.SubchannelEventBus;
//#subchannel-bus
//#scanning-bus
import akka.event.japi.ScanningEventBus;
//#scanning-bus
//#actor-bus
import akka.event.japi.ManagedActorEventBus;
//#actor-bus
public class EventBusDocTest extends AbstractJavaTest {
public static class Event {}
public static class Subscriber {}
public static class Classifier {}
static public interface EventBusApi extends EventBus<Event, Subscriber, Classifier> {
@Override
//#event-bus-api
/**
* Attempts to register the subscriber to the specified Classifier
* @return true if successful and false if not (because it was already
* subscribed to that Classifier, or otherwise)
*/
public boolean subscribe(Subscriber subscriber, Classifier to);
//#event-bus-api
@Override
//#event-bus-api
/**
* Attempts to deregister the subscriber from the specified Classifier
* @return true if successful and false if not (because it wasn't subscribed
* to that Classifier, or otherwise)
*/
public boolean unsubscribe(Subscriber subscriber, Classifier from);
//#event-bus-api
@Override
//#event-bus-api
/**
* Attempts to deregister the subscriber from all Classifiers it may be subscribed to
*/
public void unsubscribe(Subscriber subscriber);
//#event-bus-api
@Override
//#event-bus-api
/**
* Publishes the specified Event to this bus
*/
public void publish(Event event);
//#event-bus-api
}
static
//#lookup-bus
public class MsgEnvelope {
public final String topic;
public final Object payload;
public MsgEnvelope(String topic, Object payload) {
this.topic = topic;
this.payload = payload;
}
}
//#lookup-bus
static
//#lookup-bus
/**
* Publishes the payload of the MsgEnvelope when the topic of the
* MsgEnvelope equals the String specified when subscribing.
*/
public class LookupBusImpl extends LookupEventBus<MsgEnvelope, ActorRef, String> {
// is used for extracting the classifier from the incoming events
@Override public String classify(MsgEnvelope event) {
return event.topic;
}
// will be invoked for each event for all subscribers which registered themselves
// for the events classifier
@Override public void publish(MsgEnvelope event, ActorRef subscriber) {
subscriber.tell(event.payload, ActorRef.noSender());
}
// must define a full order over the subscribers, expressed as expected from
// `java.lang.Comparable.compare`
@Override public int compareSubscribers(ActorRef a, ActorRef b) {
return a.compareTo(b);
}
// determines the initial size of the index data structure
// used internally (i.e. the expected number of different classifiers)
@Override public int mapSize() {
return 128;
}
}
//#lookup-bus
static
//#subchannel-bus
public class StartsWithSubclassification implements Subclassification<String> {
@Override public boolean isEqual(String x, String y) {
return x.equals(y);
}
@Override public boolean isSubclass(String x, String y) {
return x.startsWith(y);
}
}
//#subchannel-bus
static
//#subchannel-bus
/**
* Publishes the payload of the MsgEnvelope when the topic of the
* MsgEnvelope starts with the String specified when subscribing.
*/
public class SubchannelBusImpl extends SubchannelEventBus<MsgEnvelope, ActorRef, String> {
// Subclassification is an object providing `isEqual` and `isSubclass`
// to be consumed by the other methods of this classifier
@Override public Subclassification<String> subclassification() {
return new StartsWithSubclassification();
}
// is used for extracting the classifier from the incoming events
@Override public String classify(MsgEnvelope event) {
return event.topic;
}
// will be invoked for each event for all subscribers which registered themselves
// for the events classifier
@Override public void publish(MsgEnvelope event, ActorRef subscriber) {
subscriber.tell(event.payload, ActorRef.noSender());
}
}
//#subchannel-bus
static
//#scanning-bus
/**
* Publishes String messages with length less than or equal to the length
* specified when subscribing.
*/
public class ScanningBusImpl extends ScanningEventBus<String, ActorRef, Integer> {
// is needed for determining matching classifiers and storing them in an
// ordered collection
@Override public int compareClassifiers(Integer a, Integer b) {
return a.compareTo(b);
}
// is needed for storing subscribers in an ordered collection
@Override public int compareSubscribers(ActorRef a, ActorRef b) {
return a.compareTo(b);
}
// determines whether a given classifier shall match a given event; it is invoked
// for each subscription for all received events, hence the name "scanning"
@Override public boolean matches(Integer classifier, String event) {
return event.length() <= classifier;
}
// will be invoked for each event for all subscribers which registered themselves
// for the events classifier
@Override public void publish(String event, ActorRef subscriber) {
subscriber.tell(event, ActorRef.noSender());
}
}
//#scanning-bus
static
//#actor-bus
public class Notification {
public final ActorRef ref;
public final int id;
public Notification(ActorRef ref, int id) {
this.ref = ref;
this.id = id;
}
}
//#actor-bus
static
//#actor-bus
public class ActorBusImpl extends ManagedActorEventBus<Notification> {
// the ActorSystem will be used for book-keeping operations, such as subscribers terminating
public ActorBusImpl(ActorSystem system) {
super(system);
}
// is used for extracting the classifier from the incoming events
@Override public ActorRef classify(Notification event) {
return event.ref;
}
// determines the initial size of the index data structure
// used internally (i.e. the expected number of different classifiers)
@Override public int mapSize() {
return 128;
}
}
//#actor-bus
@ClassRule
public static AkkaJUnitActorSystemResource actorSystemResource =
new AkkaJUnitActorSystemResource("EventBusDocTest");
private final ActorSystem system = actorSystemResource.getSystem();
@Test
public void demonstrateLookupClassification() {
new TestKit(system) {{
//#lookup-bus-test
LookupBusImpl lookupBus = new LookupBusImpl();
lookupBus.subscribe(getTestActor(), "greetings");
lookupBus.publish(new MsgEnvelope("time", System.currentTimeMillis()));
lookupBus.publish(new MsgEnvelope("greetings", "hello"));
expectMsgEquals("hello");
//#lookup-bus-test
}};
}
@Test
public void demonstrateSubchannelClassification() {
new TestKit(system) {{
//#subchannel-bus-test
SubchannelBusImpl subchannelBus = new SubchannelBusImpl();
subchannelBus.subscribe(getTestActor(), "abc");
subchannelBus.publish(new MsgEnvelope("xyzabc", "x"));
subchannelBus.publish(new MsgEnvelope("bcdef", "b"));
subchannelBus.publish(new MsgEnvelope("abc", "c"));
expectMsgEquals("c");
subchannelBus.publish(new MsgEnvelope("abcdef", "d"));
expectMsgEquals("d");
//#subchannel-bus-test
}};
}
@Test
public void demonstrateScanningClassification() {
new TestKit(system) {{
//#scanning-bus-test
ScanningBusImpl scanningBus = new ScanningBusImpl();
scanningBus.subscribe(getTestActor(), 3);
scanningBus.publish("xyzabc");
scanningBus.publish("ab");
expectMsgEquals("ab");
scanningBus.publish("abc");
expectMsgEquals("abc");
//#scanning-bus-test
}};
}
@Test
public void demonstrateManagedActorClassification() {
//#actor-bus-test
ActorRef observer1 = new TestKit(system).getRef();
ActorRef observer2 = new TestKit(system).getRef();
TestKit probe1 = new TestKit(system);
TestKit probe2 = new TestKit(system);
ActorRef subscriber1 = probe1.getRef();
ActorRef subscriber2 = probe2.getRef();
ActorBusImpl actorBus = new ActorBusImpl(system);
actorBus.subscribe(subscriber1, observer1);
actorBus.subscribe(subscriber2, observer1);
actorBus.subscribe(subscriber2, observer2);
Notification n1 = new Notification(observer1, 100);
actorBus.publish(n1);
probe1.expectMsgEquals(n1);
probe2.expectMsgEquals(n1);
Notification n2 = new Notification(observer2, 101);
actorBus.publish(n2);
probe2.expectMsgEquals(n2);
probe1.expectNoMsg(FiniteDuration.create(500, TimeUnit.MILLISECONDS));
//#actor-bus-test
}
}

View file

@ -1,245 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.event;
//#imports
import akka.actor.*;
import akka.event.Logging;
import akka.event.LoggingAdapter;
//#imports
//#imports-listener
import akka.event.Logging.InitializeLogger;
import akka.event.Logging.Error;
import akka.event.Logging.Warning;
import akka.event.Logging.Info;
import akka.event.Logging.Debug;
//#imports-listener
import jdocs.AbstractJavaTest;
import akka.testkit.javadsl.TestKit;
import org.junit.Test;
import java.util.Optional;
//#imports-mdc
import akka.event.DiagnosticLoggingAdapter;
import java.util.HashMap;
import java.util.Map;
//#imports-mdc
//#imports-deadletter
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
//#imports-deadletter
public class LoggingDocTest extends AbstractJavaTest {
@Test
public void useLoggingActor() {
ActorSystem system = ActorSystem.create("MySystem");
ActorRef myActor = system.actorOf(Props.create(MyActor.class, this));
myActor.tell("test", ActorRef.noSender());
TestKit.shutdownActorSystem(system);
}
@Test
public void useLoggingActorWithMDC() {
ActorSystem system = ActorSystem.create("MyDiagnosticSystem");
ActorRef mdcActor = system.actorOf(Props.create(MdcActor.class, this));
mdcActor.tell("some request", ActorRef.noSender());
TestKit.shutdownActorSystem(system);
}
@Test
public void subscribeToDeadLetters() {
//#deadletters
final ActorSystem system = ActorSystem.create("DeadLetters");
final ActorRef actor = system.actorOf(Props.create(DeadLetterActor.class));
system.eventStream().subscribe(actor, DeadLetter.class);
//#deadletters
TestKit.shutdownActorSystem(system);
}
//#superclass-subscription-eventstream
interface AllKindsOfMusic { }
class Jazz implements AllKindsOfMusic {
final public String artist;
public Jazz(String artist) {
this.artist = artist;
}
}
class Electronic implements AllKindsOfMusic {
final public String artist;
public Electronic(String artist) {
this.artist = artist;
}
}
static class Listener extends AbstractActor {
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Jazz.class, msg ->
System.out.printf("%s is listening to: %s%n", getSelf().path().name(), msg)
)
.match(Electronic.class, msg ->
System.out.printf("%s is listening to: %s%n", getSelf().path().name(), msg)
)
.build();
}
}
//#superclass-subscription-eventstream
@Test
public void subscribeBySubclassification() {
final ActorSystem system = ActorSystem.create("DeadLetters");
//#superclass-subscription-eventstream
final ActorRef actor = system.actorOf(Props.create(DeadLetterActor.class));
system.eventStream().subscribe(actor, DeadLetter.class);
final ActorRef jazzListener = system.actorOf(Props.create(Listener.class));
final ActorRef musicListener = system.actorOf(Props.create(Listener.class));
system.eventStream().subscribe(jazzListener, Jazz.class);
system.eventStream().subscribe(musicListener, AllKindsOfMusic.class);
// only musicListener gets this message, since it listens to *all* kinds of music:
system.eventStream().publish(new Electronic("Parov Stelar"));
// jazzListener and musicListener will be notified about Jazz:
system.eventStream().publish(new Jazz("Sonny Rollins"));
//#superclass-subscription-eventstream
TestKit.shutdownActorSystem(system);
}
@Test
public void subscribeToSuppressedDeadLetters() {
final ActorSystem system = ActorSystem.create("SuppressedDeadLetters");
final ActorRef actor = system.actorOf(Props.create(DeadLetterActor.class));
//#suppressed-deadletters
system.eventStream().subscribe(actor, SuppressedDeadLetter.class);
//#suppressed-deadletters
TestKit.shutdownActorSystem(system);
}
@Test
public void subscribeToAllDeadLetters() {
final ActorSystem system = ActorSystem.create("AllDeadLetters");
final ActorRef actor = system.actorOf(Props.create(DeadLetterActor.class));
//#all-deadletters
system.eventStream().subscribe(actor, AllDeadLetters.class);
//#all-deadletters
TestKit.shutdownActorSystem(system);
}
@Test
public void demonstrateMultipleArgs() {
final ActorSystem system = ActorSystem.create("multiArg");
//#array
final Object[] args = new Object[] { "The", "brown", "fox", "jumps", 42 };
system.log().debug("five parameters: {}, {}, {}, {}, {}", args);
//#array
TestKit.shutdownActorSystem(system);
}
//#my-actor
class MyActor extends AbstractActor {
LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
@Override
public void preStart() {
log.debug("Starting");
}
@Override
public void preRestart(Throwable reason, Optional<Object> message) {
log.error(reason, "Restarting due to [{}] when processing [{}]",
reason.getMessage(), message.isPresent() ? message.get() : "");
}
@Override
public Receive createReceive() {
return receiveBuilder()
.matchEquals("test", msg ->
log.info("Received test")
)
.matchAny(msg ->
log.warning("Received unknown message: {}", msg)
)
.build();
}
}
//#my-actor
//#mdc-actor
class MdcActor extends AbstractActor {
final DiagnosticLoggingAdapter log = Logging.getLogger(this);
@Override
public Receive createReceive() {
return receiveBuilder()
.matchAny(msg -> {
Map<String, Object> mdc;
mdc = new HashMap<String, Object>();
mdc.put("requestId", 1234);
mdc.put("visitorId", 5678);
log.setMDC(mdc);
log.info("Starting new request");
log.clearMDC();
})
.build();
}
}
//#mdc-actor
//#my-event-listener
class MyEventListener extends AbstractActor {
@Override
public Receive createReceive() {
return receiveBuilder()
.match(InitializeLogger.class, msg -> {
getSender().tell(Logging.loggerInitialized(), getSelf());
})
.match(Error.class, msg -> {
// ...
})
.match(Warning.class, msg -> {
// ...
})
.match(Info.class, msg -> {
// ...
})
.match(Debug.class, msg -> {
// ...
})
.build();
}
}
//#my-event-listener
static
//#deadletter-actor
public class DeadLetterActor extends AbstractActor {
@Override
public Receive createReceive() {
return receiveBuilder()
.match(DeadLetter.class, msg -> {
System.out.println(msg);
})
.build();
}
}
//#deadletter-actor
}

View file

@ -1,89 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.extension;
//#imports
import akka.actor.*;
import java.util.concurrent.atomic.AtomicLong;
//#imports
import jdocs.AbstractJavaTest;
import org.junit.Test;
public class ExtensionDocTest extends AbstractJavaTest {
static
//#extension
public class CountExtensionImpl implements Extension {
//Since this Extension is a shared instance
// per ActorSystem we need to be thread-safe
private final AtomicLong counter = new AtomicLong(0);
//This is the operation this Extension provides
public long increment() {
return counter.incrementAndGet();
}
}
//#extension
static
//#extensionid
public class CountExtension extends AbstractExtensionId<CountExtensionImpl>
implements ExtensionIdProvider {
//This will be the identifier of our CountExtension
public final static CountExtension CountExtensionProvider = new CountExtension();
private CountExtension() {}
//The lookup method is required by ExtensionIdProvider;
// we return ourselves here, which allows us
// to configure our extension to be loaded when
// the ActorSystem starts up
public CountExtension lookup() {
return CountExtension.CountExtensionProvider; //The public static final field above
}
//This method will be called by Akka
// to instantiate our Extension
public CountExtensionImpl createExtension(ExtendedActorSystem system) {
return new CountExtensionImpl();
}
}
//#extensionid
static
//#extension-usage-actor
public class MyActor extends AbstractActor {
@Override
public Receive createReceive() {
return receiveBuilder()
.matchAny(msg -> {
// typically you would use static import of the
// CountExtension.CountExtensionProvider field
CountExtension.CountExtensionProvider.get(getContext().getSystem()).increment();
})
.build();
}
}
//#extension-usage-actor
@Test
public void demonstrateHowToCreateAndUseAnAkkaExtensionInJava() {
final ActorSystem system = null;
try {
//#extension-usage
// typically you would use static import of the
// CountExtension.CountExtensionProvider field
CountExtension.CountExtensionProvider.get(system).increment();
//#extension-usage
} catch (Exception e) {
//do nothing
}
}
}

View file

@ -1,100 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.extension;
//#imports
import akka.actor.Extension;
import akka.actor.AbstractExtensionId;
import akka.actor.ExtensionIdProvider;
import akka.actor.ActorSystem;
import akka.actor.ExtendedActorSystem;
import scala.concurrent.duration.Duration;
import com.typesafe.config.Config;
import java.util.concurrent.TimeUnit;
//#imports
import jdocs.AbstractJavaTest;
import akka.actor.AbstractActor;
import org.junit.Test;
public class SettingsExtensionDocTest extends AbstractJavaTest {
static
//#extension
public class SettingsImpl implements Extension {
public final String DB_URI;
public final Duration CIRCUIT_BREAKER_TIMEOUT;
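// settings are read from the config once, at extension creation, and exposed as immutable fields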
public SettingsImpl(Config config) {
DB_URI = config.getString("myapp.db.uri");
CIRCUIT_BREAKER_TIMEOUT =
Duration.create(config.getDuration("myapp.circuit-breaker.timeout",
TimeUnit.MILLISECONDS), TimeUnit.MILLISECONDS);
}
}
//#extension
static
//#extensionid
public class Settings extends AbstractExtensionId<SettingsImpl>
implements ExtensionIdProvider {
public final static Settings SettingsProvider = new Settings();
private Settings() {}
public Settings lookup() {
return Settings.SettingsProvider;
}
public SettingsImpl createExtension(ExtendedActorSystem system) {
return new SettingsImpl(system.settings().config());
}
}
//#extensionid
static
//#extension-usage-actor
public class MyActor extends AbstractActor {
// typically you would use static import of the Settings.SettingsProvider field
final SettingsImpl settings =
Settings.SettingsProvider.get(getContext().getSystem());
Connection connection =
connect(settings.DB_URI, settings.CIRCUIT_BREAKER_TIMEOUT);
//#extension-usage-actor
public Connection connect(String dbUri, Duration circuitBreakerTimeout) {
return new Connection();
}
@Override
public Receive createReceive() {
return AbstractActor.emptyBehavior();
}
//#extension-usage-actor
}
//#extension-usage-actor
public static class Connection {
}
@Test
public void demonstrateHowToCreateAndUseAnAkkaExtensionInJava() {
final ActorSystem system = null;
try {
//#extension-usage
// typically you would use static import of the Settings.SettingsProvider field
String dbUri = Settings.SettingsProvider.get(system).DB_URI;
//#extension-usage
} catch (Exception e) {
//do nothing
}
}
}

View file

@ -1,671 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.future;
//#imports1
import akka.dispatch.*;
import jdocs.AbstractJavaTest;
import scala.concurrent.ExecutionContext;
import scala.concurrent.Future;
import scala.concurrent.Await;
import scala.concurrent.Promise;
import akka.util.Timeout;
//#imports1
//#imports2
import scala.concurrent.duration.Duration;
import akka.japi.Function;
import java.util.concurrent.Callable;
import static akka.dispatch.Futures.future;
import static java.util.concurrent.TimeUnit.SECONDS;
//#imports2
//#imports3
import static akka.dispatch.Futures.sequence;
//#imports3
//#imports4
import static akka.dispatch.Futures.traverse;
//#imports4
//#imports5
import akka.japi.Function2;
import static akka.dispatch.Futures.fold;
//#imports5
//#imports6
import static akka.dispatch.Futures.reduce;
//#imports6
//#imports7
import static akka.pattern.Patterns.after;
import java.util.Arrays;
//#imports7
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.CountDownLatch;
import scala.compat.java8.FutureConverters;
import akka.testkit.AkkaJUnitActorSystemResource;
import org.junit.ClassRule;
import org.junit.Test;
import akka.testkit.AkkaSpec;
import akka.actor.Status.Failure;
import akka.actor.ActorSystem;
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.pattern.Patterns;
import static org.hamcrest.CoreMatchers.is;
import static org.hamcrest.core.StringContains.containsString;
import static org.junit.Assert.*;
public class FutureDocTest extends AbstractJavaTest {
@ClassRule
public static AkkaJUnitActorSystemResource actorSystemResource =
new AkkaJUnitActorSystemResource("FutureDocTest", AkkaSpec.testConf());
private final ActorSystem system = actorSystemResource.getSystem();
public final static class PrintResult<T> extends OnSuccess<T> {
@Override public final void onSuccess(T t) {
// print t
}
}
public final static class Demo {
//#print-result
public final static class PrintResult<T> extends OnSuccess<T> {
@Override public final void onSuccess(T t) {
System.out.println(t);
}
}
//#print-result
}
@SuppressWarnings("unchecked")
@Test
public void useCustomExecutionContext() throws Exception {
ExecutorService yourExecutorServiceGoesHere = Executors.newSingleThreadExecutor();
//#diy-execution-context
ExecutionContext ec =
ExecutionContexts.fromExecutorService(yourExecutorServiceGoesHere);
//Use ec with your Futures
Future<String> f1 = Futures.successful("foo");
// Then you shut down the ExecutorService at the end of your application.
yourExecutorServiceGoesHere.shutdown();
//#diy-execution-context
}
@Test
public void useBlockingFromActor() throws Exception {
ActorRef actor = system.actorOf(Props.create(MyActor.class));
String msg = "hello";
//#ask-blocking
Timeout timeout = new Timeout(Duration.create(5, "seconds"));
Future<Object> future = Patterns.ask(actor, msg, timeout);
String result = (String) Await.result(future, timeout.duration());
//#ask-blocking
//#pipe-to
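// pipe forwards the future's result (or failure) to the actor as a message once it completes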
akka.pattern.Patterns.pipe(future, system.dispatcher()).to(actor);
//#pipe-to
assertEquals("HELLO", result);
}
@Test
public void useFutureEval() throws Exception {
//#future-eval
Future<String> f = future(new Callable<String>() {
public String call() {
return "Hello" + "World";
}
}, system.dispatcher());
f.onSuccess(new PrintResult<String>(), system.dispatcher());
//#future-eval
String result = (String) Await.result(f, Duration.create(5, SECONDS));
assertEquals("HelloWorld", result);
}
@Test
public void useMap() throws Exception {
//#map
final ExecutionContext ec = system.dispatcher();
Future<String> f1 = future(new Callable<String>() {
public String call() {
return "Hello" + "World";
}
}, ec);
Future<Integer> f2 = f1.map(new Mapper<String, Integer>() {
public Integer apply(String s) {
return s.length();
}
}, ec);
f2.onSuccess(new PrintResult<Integer>(), system.dispatcher());
//#map
int result = Await.result(f2, Duration.create(5, SECONDS));
assertEquals(10, result);
}
@Test
public void useFlatMap() throws Exception {
//#flat-map
final ExecutionContext ec = system.dispatcher();
Future<String> f1 = future(new Callable<String>() {
public String call() {
return "Hello" + "World";
}
}, ec);
Future<Integer> f2 = f1.flatMap(new Mapper<String, Future<Integer>>() {
public Future<Integer> apply(final String s) {
return future(new Callable<Integer>() {
public Integer call() {
return s.length();
}
}, ec);
}
}, ec);
f2.onSuccess(new PrintResult<Integer>(), system.dispatcher());
//#flat-map
int result = Await.result(f2, Duration.create(5, SECONDS));
assertEquals(10, result);
}
@Test
public void useSequence() throws Exception {
List<Future<Integer>> source = new ArrayList<Future<Integer>>();
source.add(Futures.successful(1));
source.add(Futures.successful(2));
//#sequence
final ExecutionContext ec = system.dispatcher();
//Some source generating a sequence of Future<Integer> instances
Iterable<Future<Integer>> listOfFutureInts = source;
// now we have a Future<Iterable<Integer>>
Future<Iterable<Integer>> futureListOfInts = sequence(listOfFutureInts, ec);
// Find the sum of the numbers
Future<Long> futureSum = futureListOfInts.map(
new Mapper<Iterable<Integer>, Long>() {
public Long apply(Iterable<Integer> ints) {
long sum = 0;
for (Integer i : ints)
sum += i;
return sum;
}
}, ec);
futureSum.onSuccess(new PrintResult<Long>(), system.dispatcher());
//#sequence
long result = Await.result(futureSum, Duration.create(5, SECONDS));
assertEquals(3L, result);
}
@Test
public void useTraverse() throws Exception {
//#traverse
final ExecutionContext ec = system.dispatcher();
//Just a sequence of Strings
Iterable<String> listStrings = Arrays.asList("a", "b", "c");
Future<Iterable<String>> futureResult = traverse(listStrings,
new Function<String, Future<String>>() {
public Future<String> apply(final String r) {
return future(new Callable<String>() {
public String call() {
return r.toUpperCase();
}
}, ec);
}
}, ec);
//Returns the sequence of strings as upper case
futureResult.onSuccess(new PrintResult<Iterable<String>>(), system.dispatcher());
//#traverse
Iterable<String> result = Await.result(futureResult, Duration.create(5, SECONDS));
assertEquals(Arrays.asList("A", "B", "C"), result);
}
@Test
public void useFold() throws Exception {
List<Future<String>> source = new ArrayList<Future<String>>();
source.add(Futures.successful("a"));
source.add(Futures.successful("b"));
//#fold
final ExecutionContext ec = system.dispatcher();
//A sequence of Futures, in this case Strings
Iterable<Future<String>> futures = source;
//Start value is the empty string
Future<String> resultFuture = fold("", futures,
new Function2<String, String, String>() {
public String apply(String r, String t) {
return r + t; //Just concatenate
}
}, ec);
resultFuture.onSuccess(new PrintResult<String>(), system.dispatcher());
//#fold
String result = Await.result(resultFuture, Duration.create(5, SECONDS));
assertEquals("ab", result);
}
@Test
public void useReduce() throws Exception {
List<Future<String>> source = new ArrayList<Future<String>>();
source.add(Futures.successful("a"));
source.add(Futures.successful("b"));
//#reduce
final ExecutionContext ec = system.dispatcher();
//A sequence of Futures, in this case Strings
Iterable<Future<String>> futures = source;
Future<Object> resultFuture = reduce(futures,
new Function2<Object, String, Object>() {
public Object apply(Object r, String t) {
return r + t; //Just concatenate
}
}, ec);
resultFuture.onSuccess(new PrintResult<Object>(), system.dispatcher());
//#reduce
Object result = Await.result(resultFuture, Duration.create(5, SECONDS));
assertEquals("ab", result);
}
@Test
public void useSuccessfulAndFailedAndPromise() throws Exception {
final ExecutionContext ec = system.dispatcher();
//#successful
Future<String> future = Futures.successful("Yay!");
//#successful
//#failed
Future<String> otherFuture = Futures.failed(
new IllegalArgumentException("Bang!"));
//#failed
//#promise
Promise<String> promise = Futures.promise();
Future<String> theFuture = promise.future();
promise.success("hello");
//#promise
Object result = Await.result(future, Duration.create(5, SECONDS));
assertEquals("Yay!", result);
Throwable result2 = Await.result(otherFuture.failed(),
Duration.create(5, SECONDS));
assertEquals("Bang!", result2.getMessage());
String out = Await.result(theFuture, Duration.create(5, SECONDS));
assertEquals("hello", out);
}
@Test
public void useFilter() throws Exception {
//#filter
final ExecutionContext ec = system.dispatcher();
Future<Integer> future1 = Futures.successful(4);
Future<Integer> successfulFilter = future1.filter(Filter.filterOf(
new Function<Integer, Boolean>() {
public Boolean apply(Integer i) {
return i % 2 == 0;
}
}), ec);
Future<Integer> failedFilter = future1.filter(Filter.filterOf(
new Function<Integer, Boolean>() {
public Boolean apply(Integer i) {
return i % 2 != 0;
}
}), ec);
//When filter fails, the returned Future will be failed with a scala.MatchError
//#filter
}
public void sendToTheInternetz(String s) {
}
public void sendToIssueTracker(Throwable t) {
}
@Test
public void useAndThen() {
//#and-then
final ExecutionContext ec = system.dispatcher();
Future<String> future1 = Futures.successful("value").andThen(
new OnComplete<String>() {
public void onComplete(Throwable failure, String result) {
if (failure != null)
sendToIssueTracker(failure);
}
}, ec).andThen(new OnComplete<String>() {
public void onComplete(Throwable failure, String result) {
if (result != null)
sendToTheInternetz(result);
}
}, ec);
//#and-then
}
@Test
public void useRecover() throws Exception {
//#recover
final ExecutionContext ec = system.dispatcher();
Future<Integer> future = future(new Callable<Integer>() {
public Integer call() {
return 1 / 0;
}
}, ec).recover(new Recover<Integer>() {
public Integer recover(Throwable problem) throws Throwable {
if (problem instanceof ArithmeticException)
return 0;
else
throw problem;
}
}, ec);
future.onSuccess(new PrintResult<Integer>(), system.dispatcher());
//#recover
int result = Await.result(future, Duration.create(5, SECONDS));
assertEquals(result, 0);
}
@Test
public void useTryRecover() throws Exception {
//#try-recover
final ExecutionContext ec = system.dispatcher();
Future<Integer> future = future(new Callable<Integer>() {
public Integer call() {
return 1 / 0;
}
}, ec).recoverWith(new Recover<Future<Integer>>() {
public Future<Integer> recover(Throwable problem) throws Throwable {
if (problem instanceof ArithmeticException) {
return future(new Callable<Integer>() {
public Integer call() {
return 0;
}
}, ec);
} else
throw problem;
}
}, ec);
future.onSuccess(new PrintResult<Integer>(), system.dispatcher());
//#try-recover
int result = Await.result(future, Duration.create(5, SECONDS));
assertEquals(result, 0);
}
@Test
public void useOnSuccessOnFailureAndOnComplete() throws Exception {
{
Future<String> future = Futures.successful("foo");
//#onSuccess
final ExecutionContext ec = system.dispatcher();
future.onSuccess(new OnSuccess<String>() {
public void onSuccess(String result) {
if ("bar" == result) {
//Do something if it resulted in "bar"
} else {
//Do something if it was some other String
}
}
}, ec);
//#onSuccess
}
{
Future<String> future = Futures.failed(new IllegalStateException("OHNOES"));
//#onFailure
final ExecutionContext ec = system.dispatcher();
future.onFailure(new OnFailure() {
public void onFailure(Throwable failure) {
if (failure instanceof IllegalStateException) {
//Do something if it was this particular failure
} else {
//Do something if it was some other failure
}
}
}, ec);
//#onFailure
}
{
Future<String> future = Futures.successful("foo");
//#onComplete
final ExecutionContext ec = system.dispatcher();
future.onComplete(new OnComplete<String>() {
public void onComplete(Throwable failure, String result) {
if (failure != null) {
//We got a failure, handle it here
} else {
// We got a result, do something with it
}
}
}, ec);
//#onComplete
}
}
@Test
public void useOrAndZip() throws Exception {
{
//#zip
final ExecutionContext ec = system.dispatcher();
Future<String> future1 = Futures.successful("foo");
Future<String> future2 = Futures.successful("bar");
Future<String> future3 = future1.zip(future2).map(
new Mapper<scala.Tuple2<String, String>, String>() {
public String apply(scala.Tuple2<String, String> zipped) {
return zipped._1() + " " + zipped._2();
}
}, ec);
future3.onSuccess(new PrintResult<String>(), system.dispatcher());
//#zip
String result = Await.result(future3, Duration.create(5, SECONDS));
assertEquals("foo bar", result);
}
{
//#fallback-to
Future<String> future1 = Futures.failed(new IllegalStateException("OHNOES1"));
Future<String> future2 = Futures.failed(new IllegalStateException("OHNOES2"));
Future<String> future3 = Futures.successful("bar");
// Will have "bar" in this case
Future<String> future4 = future1.fallbackTo(future2).fallbackTo(future3);
future4.onSuccess(new PrintResult<String>(), system.dispatcher());
//#fallback-to
String result = Await.result(future4, Duration.create(5, SECONDS));
assertEquals("bar", result);
}
}
@Test(expected = IllegalStateException.class)
@SuppressWarnings("unchecked")
public void useAfter() throws Exception {
//#after
final ExecutionContext ec = system.dispatcher();
Future<String> failExc = Futures.failed(new IllegalStateException("OHNOES1"));
Future<String> delayed = Patterns.after(Duration.create(200, "millis"),
system.scheduler(), ec, failExc);
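// 'after' completes with failExc once the 200 ms have elapsed, so firstCompletedOf
// below fails before the sleeping future finishes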
Future<String> future = future(new Callable<String>() {
public String call() throws InterruptedException {
Thread.sleep(1000);
return "foo";
}
}, ec);
Future<String> result = Futures.firstCompletedOf(
Arrays.<Future<String>>asList(future, delayed), ec);
//#after
Await.result(result, Duration.create(2, SECONDS));
}
@Test
public void thenApplyCompletionThread() throws Exception {
//#apply-completion-thread
final ExecutionContext ec = system.dispatcher();
final CountDownLatch countDownLatch = new CountDownLatch(1);
Future<String> scalaFuture = Futures.future(() -> {
assertThat(Thread.currentThread().getName(), containsString("akka.actor.default-dispatcher"));
countDownLatch.await(); // do not complete yet
return "hello";
}, ec);
CompletionStage<String> fromScalaFuture = FutureConverters.toJava(scalaFuture)
.thenApply(s -> { // 1
assertThat(Thread.currentThread().getName(), containsString("ForkJoinPool.commonPool"));
return s;
})
.thenApply(s -> { // 2
assertThat(Thread.currentThread().getName(), containsString("ForkJoinPool.commonPool"));
return s;
})
.thenApply(s -> { // 3
assertThat(Thread.currentThread().getName(), containsString("ForkJoinPool.commonPool"));
return s;
});
countDownLatch.countDown(); // complete scalaFuture
//#apply-completion-thread
fromScalaFuture.toCompletableFuture().get(2, SECONDS);
}
@Test
public void thenApplyMainThread() throws Exception {
final ExecutionContext ec = system.dispatcher();
//#apply-main-thread
Future<String> scalaFuture = Futures.future(() -> {
assertThat(Thread.currentThread().getName(), containsString("akka.actor.default-dispatcher"));
return "hello";
}, ec);
CompletionStage<String> completedStage = FutureConverters.toJava(scalaFuture)
.thenApply(s -> { // 1
assertThat(Thread.currentThread().getName(), containsString("ForkJoinPool.commonPool"));
return s;
});
completedStage.toCompletableFuture().get(2, SECONDS); // complete current CompletionStage
final String currentThread = Thread.currentThread().getName();
CompletionStage<String> stage2 = completedStage
.thenApply(s -> { // 2
assertThat(Thread.currentThread().getName(), is(currentThread));
return s;
})
.thenApply(s -> { // 3
assertThat(Thread.currentThread().getName(), is(currentThread));
return s;
});
//#apply-main-thread
stage2.toCompletableFuture().get(2, SECONDS);
}
@Test
public void thenApplyAsyncDefault() throws Exception {
final ExecutionContext ec = system.dispatcher();
Future<String> scalaFuture = Futures.future(() -> {
assertThat(Thread.currentThread().getName(), containsString("akka.actor.default-dispatcher"));
return "hello";
}, ec);
//#apply-async-default
CompletionStage<String> fromScalaFuture = FutureConverters.toJava(scalaFuture)
.thenApplyAsync(s -> { // 1
assertThat(Thread.currentThread().getName(), containsString("ForkJoinPool.commonPool"));
return s;
})
.thenApplyAsync(s -> { // 2
assertThat(Thread.currentThread().getName(), containsString("ForkJoinPool.commonPool"));
return s;
})
.thenApplyAsync(s -> { // 3
assertThat(Thread.currentThread().getName(), containsString("ForkJoinPool.commonPool"));
return s;
});
//#apply-async-default
fromScalaFuture.toCompletableFuture().get(2, SECONDS);
}
@Test
public void thenApplyAsyncExecutor() throws Exception {
final ExecutionContext ec = system.dispatcher();
Future<String> scalaFuture = Futures.future(() -> {
assertThat(Thread.currentThread().getName(), containsString("akka.actor.default-dispatcher"));
return "hello";
}, ec);
//#apply-async-executor
final Executor ex = system.dispatcher();
CompletionStage<String> fromScalaFuture = FutureConverters.toJava(scalaFuture)
.thenApplyAsync(s -> {
assertThat(Thread.currentThread().getName(), containsString("akka.actor.default-dispatcher"));
return s;
}, ex)
.thenApplyAsync(s -> {
assertThat(Thread.currentThread().getName(), containsString("akka.actor.default-dispatcher"));
return s;
}, ex)
.thenApplyAsync(s -> {
assertThat(Thread.currentThread().getName(), containsString("akka.actor.default-dispatcher"));
return s;
}, ex);
//#apply-async-executor
fromScalaFuture.toCompletableFuture().get(2, SECONDS);
}
public static class MyActor extends AbstractActor {
@Override
public Receive createReceive() {
return receiveBuilder()
.match(String.class, msg -> {
getSender().tell(msg.toUpperCase(), getSelf());
})
.match(Integer.class, i -> {
if (i < 0) {
getSender().tell(new Failure(new ArithmeticException("Negative values not supported")), getSelf());
} else {
getSender().tell(i, getSelf());
}
})
.build();
}
}
}

View file

@ -1,86 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.io;
import akka.actor.ActorSystem;
import akka.actor.AbstractActor;
//#imports
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.List;
import akka.actor.ActorRef;
import akka.io.Inet;
import akka.io.Tcp;
import akka.io.TcpMessage;
import akka.io.TcpSO;
import akka.util.ByteString;
//#imports
public class IODocTest {
static public class Demo extends AbstractActor {
ActorRef connectionActor = null;
ActorRef listener = getSelf();
@Override
public Receive createReceive() {
return receiveBuilder()
.matchEquals("connect", msg -> {
//#manager
final ActorRef tcp = Tcp.get(system).manager();
//#manager
//#connect
final InetSocketAddress remoteAddr = new InetSocketAddress("127.0.0.1",
12345);
tcp.tell(TcpMessage.connect(remoteAddr), getSelf());
//#connect
//#connect-with-options
final InetSocketAddress localAddr = new InetSocketAddress("127.0.0.1",
1234);
final List<Inet.SocketOption> options = new ArrayList<Inet.SocketOption>();
options.add(TcpSO.keepAlive(true));
tcp.tell(TcpMessage.connect(remoteAddr, localAddr, options, null, false), getSelf());
//#connect-with-options
})
//#connected
.match(Tcp.Connected.class, conn -> {
connectionActor = getSender();
connectionActor.tell(TcpMessage.register(listener), getSelf());
})
//#connected
//#received
.match(Tcp.Received.class, recv -> {
final ByteString data = recv.data();
// and do something with the received data ...
})
.match(Tcp.CommandFailed.class, failed -> {
final Tcp.Command command = failed.cmd();
// react to failed connect, bind, write, etc.
})
.match(Tcp.ConnectionClosed.class, closed -> {
if (closed.isAborted()) {
// handle close reasons like this
}
})
//#received
.matchEquals("bind", msg -> {
final ActorRef handler = getSelf();
//#bind
final ActorRef tcp = Tcp.get(system).manager();
final InetSocketAddress localAddr = new InetSocketAddress("127.0.0.1",
1234);
final List<Inet.SocketOption> options = new ArrayList<Inet.SocketOption>();
options.add(TcpSO.reuseAddress(true));
tcp.tell(TcpMessage.bind(handler, localAddr, 10, options, false), getSelf());
//#bind
})
.build();
}
}
static ActorSystem system;
// This is currently only a compilation test; nothing is run
}

View file

@ -1,97 +0,0 @@
package jdocs.io;
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.actor.AbstractActor;
import akka.io.Inet;
import akka.io.Tcp;
import akka.io.TcpMessage;
import akka.util.ByteString;
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.List;
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
public class JavaReadBackPressure {
static public class Listener extends AbstractActor {
ActorRef tcp;
ActorRef listener;
@Override
//#pull-accepting
public Receive createReceive() {
return receiveBuilder()
.match(Tcp.Bound.class, x -> {
listener = getSender();
// Accept connections one by one
listener.tell(TcpMessage.resumeAccepting(1), getSelf());
})
.match(Tcp.Connected.class, x -> {
ActorRef handler = getContext().actorOf(Props.create(PullEcho.class, getSender()));
getSender().tell(TcpMessage.register(handler), getSelf());
// Resume accepting connections
listener.tell(TcpMessage.resumeAccepting(1), getSelf());
})
.build();
}
//#pull-accepting
@Override
public void preStart() throws Exception {
//#pull-mode-bind
tcp = Tcp.get(getContext().getSystem()).manager();
final List<Inet.SocketOption> options = new ArrayList<Inet.SocketOption>();
tcp.tell(
TcpMessage.bind(getSelf(), new InetSocketAddress("localhost", 0), 100, options, true),
getSelf()
);
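// the trailing 'true' enables pull mode: no connection is accepted
// until resumeAccepting is sent (see createReceive above)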
//#pull-mode-bind
}
private void demonstrateConnect() {
//#pull-mode-connect
final List<Inet.SocketOption> options = new ArrayList<Inet.SocketOption>();
tcp.tell(
TcpMessage.connect(new InetSocketAddress("localhost", 3000), null, options, null, true),
getSelf()
);
//#pull-mode-connect
}
}
static public class Ack implements Tcp.Event {
}
static public class PullEcho extends AbstractActor {
final ActorRef connection;
public PullEcho(ActorRef connection) {
this.connection = connection;
}
//#pull-reading-echo
@Override
public void preStart() throws Exception {
connection.tell(TcpMessage.resumeReading(), getSelf());
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Tcp.Received.class, message -> {
ByteString data = message.data();
connection.tell(TcpMessage.write(data, new Ack()), getSelf());
})
.match(Ack.class, message -> {
connection.tell(TcpMessage.resumeReading(), getSelf());
})
.build();
}
//#pull-reading-echo
}
}

View file

@ -1,128 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.io;
//#imports
import akka.actor.ActorRef;
import akka.actor.AbstractActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import akka.io.Inet;
import akka.io.Udp;
import akka.io.UdpMessage;
import akka.util.ByteString;
import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.NetworkInterface;
import java.net.StandardProtocolFamily;
import java.net.DatagramSocket;
import java.nio.channels.DatagramChannel;
import java.util.ArrayList;
import java.util.List;
//#imports
public class JavaUdpMulticast {
//#inet6-protocol-family
public static class Inet6ProtocolFamily extends Inet.DatagramChannelCreator {
@Override
public DatagramChannel create() throws Exception {
return DatagramChannel.open(StandardProtocolFamily.INET6);
}
}
//#inet6-protocol-family
//#multicast-group
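// socket option that joins the given multicast group once the underlying socket has been bound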
public static class MulticastGroup extends Inet.AbstractSocketOptionV2 {
private String address;
private String interf;
public MulticastGroup(String address, String interf) {
this.address = address;
this.interf = interf;
}
@Override
public void afterBind(DatagramSocket s) {
try {
InetAddress group = InetAddress.getByName(address);
NetworkInterface networkInterface = NetworkInterface.getByName(interf);
s.getChannel().join(group, networkInterface);
} catch (Exception ex) {
System.out.println("Unable to join multicast group.");
}
}
}
//#multicast-group
public static class Listener extends AbstractActor {
LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
ActorRef sink;
public Listener(String iface, String group, Integer port, ActorRef sink) {
this.sink = sink;
//#bind
List<Inet.SocketOption> options = new ArrayList<>();
options.add(new Inet6ProtocolFamily());
options.add(new MulticastGroup(group, iface));
final ActorRef mgr = Udp.get(getContext().getSystem()).getManager();
// listen for datagrams on this address
InetSocketAddress endpoint = new InetSocketAddress(port);
mgr.tell(UdpMessage.bind(getSelf(), endpoint, options), getSelf());
//#bind
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Udp.Bound.class, bound -> {
log.info("Bound to {}", bound.localAddress());
sink.tell(bound, getSelf());
})
.match(Udp.Received.class, received -> {
final String txt = received.data().decodeString("utf-8");
log.info("Received '{}' from {}", txt, received.sender());
sink.tell(txt, getSelf());
})
.build();
}
}
public static class Sender extends AbstractActor {
LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
String iface;
String group;
Integer port;
String message;
public Sender(String iface, String group, Integer port, String msg) {
this.iface = iface;
this.group = group;
this.port = port;
this.message = msg;
List<Inet.SocketOption> options = new ArrayList<>();
options.add(new Inet6ProtocolFamily());
final ActorRef mgr = Udp.get(getContext().getSystem()).getManager();
mgr.tell(UdpMessage.simpleSender(options), getSelf());
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Udp.SimpleSenderReady.class, x -> {
InetSocketAddress remote = new InetSocketAddress(group + "%" + iface, port);
log.info("Sending message to " + remote);
getSender().tell(UdpMessage.send(ByteString.fromString(message), remote), getSelf());
})
.build();
}
}
}

View file

@ -1,100 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.io;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.io.Udp;
import akka.testkit.SocketUtil;
import jdocs.AbstractJavaTest;
import akka.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import java.net.Inet6Address;
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.*;
public class JavaUdpMulticastTest extends AbstractJavaTest {
static ActorSystem system;
@BeforeClass
public static void setup() {
system = ActorSystem.create("JavaUdpMulticastTest");
}
@Test
public void testUdpMulticast() throws Exception {
new TestKit(system) {{
List<NetworkInterface> ipv6Ifaces = new ArrayList<>();
for (Enumeration<NetworkInterface> interfaces = NetworkInterface.getNetworkInterfaces(); interfaces.hasMoreElements(); ) {
NetworkInterface interf = interfaces.nextElement();
if (interf.isUp() && interf.supportsMulticast()) {
for (Enumeration<InetAddress> addresses = interf.getInetAddresses(); addresses.hasMoreElements(); ) {
InetAddress address = addresses.nextElement();
if (address instanceof Inet6Address) {
ipv6Ifaces.add(interf);
}
}
}
}
if (ipv6Ifaces.isEmpty()) {
system.log().info("JavaUdpMulticastTest skipped since no ipv6 interface supporting multicast could be found");
} else {
// lots of problems with choosing the wrong interface for this test depending
// on the platform (awsdl0 can't be used on OSX, docker[0-9] can't be used in a docker machine, etc.)
// therefore: try hard to find an interface that _does_ work, and only fail if there
// were potentially working interfaces but all of them failed
for (Iterator<NetworkInterface> interfaceIterator = ipv6Ifaces.iterator(); interfaceIterator.hasNext(); ) {
NetworkInterface ipv6Iface = interfaceIterator.next();
// host assigned link local multicast address http://tools.ietf.org/html/rfc3307#section-4.3.2
// generate a random 32 bit multicast address with the high order bit set
final String randomAddress = Long.toHexString(((long) Math.abs(new Random().nextInt())) | (1L << 31)).toUpperCase();
final StringBuilder groupBuilder = new StringBuilder("FF02:");
for (int i = 0; i < 2; i += 1) {
groupBuilder.append(":");
groupBuilder.append(randomAddress.subSequence(i * 4, i * 4 + 4));
}
final String group = groupBuilder.toString();
final Integer port = SocketUtil.temporaryUdpIpv6Port(ipv6Iface);
final String msg = "ohi";
final ActorRef sink = getRef();
final String iface = ipv6Iface.getName();
final ActorRef listener = system.actorOf(Props.create(JavaUdpMulticast.Listener.class, iface, group, port, sink));
try {
expectMsgClass(Udp.Bound.class);
final ActorRef sender = system.actorOf(Props.create(JavaUdpMulticast.Sender.class, iface, group, port, msg));
expectMsgEquals(msg);
// success with one interface is enough
break;
} catch (AssertionError ex) {
if (!interfaceIterator.hasNext()) throw ex;
else {
system.log().info("Failed to run test on interface {}", ipv6Iface.getDisplayName());
}
} finally {
// unbind
system.stop(listener);
}
}
}
}};
}
@AfterClass
public static void tearDown() {
TestKit.shutdownActorSystem(system);
system = null;
}
}

View file

@ -1,85 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.io;
import akka.japi.pf.ReceiveBuilder;
import org.junit.Test;
import akka.actor.ActorSystem;
import akka.actor.AbstractActor;
//#imports
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.List;
import akka.actor.ActorRef;
import akka.io.Inet;
import akka.io.UdpConnected;
import akka.io.UdpConnectedMessage;
import akka.io.UdpSO;
import akka.util.ByteString;
//#imports
public class UdpConnectedDocTest {
static public class Demo extends AbstractActor {
ActorRef connectionActor = null;
ActorRef handler = getSelf();
ActorSystem system = getContext().getSystem();
@Override
public Receive createReceive() {
ReceiveBuilder builder = receiveBuilder();
builder.matchEquals("connect", message -> {
//#manager
final ActorRef udp = UdpConnected.get(system).manager();
//#manager
//#connect
final InetSocketAddress remoteAddr =
new InetSocketAddress("127.0.0.1", 12345);
udp.tell(UdpConnectedMessage.connect(handler, remoteAddr), getSelf());
//#connect
//#connect-with-options
final InetSocketAddress localAddr =
new InetSocketAddress("127.0.0.1", 1234);
final List<Inet.SocketOption> options =
new ArrayList<Inet.SocketOption>();
options.add(UdpSO.broadcast(true));
udp.tell(UdpConnectedMessage.connect(handler, remoteAddr, localAddr, options), getSelf());
//#connect-with-options
});
//#connected
builder.match(UdpConnected.Connected.class, conn -> {
connectionActor = getSender(); // Save the worker ref for later use
});
//#connected
//#received
builder
.match(UdpConnected.Received.class, recv -> {
final ByteString data = recv.data();
// and do something with the received data ...
})
.match(UdpConnected.CommandFailed.class, failed -> {
final UdpConnected.Command command = failed.cmd();
// react to failed connect, etc.
})
.match(UdpConnected.Disconnected.class, x -> {
// do something on disconnect
});
//#received
builder.matchEquals("send", x -> {
ByteString data = ByteString.empty();
//#send
connectionActor.tell(UdpConnectedMessage.send(data), getSelf());
//#send
});
return builder.build();
}
}
@Test
public void demonstrateConnect() {
}
}

View file

@ -1,164 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.io;
//#imports
import akka.actor.ActorRef;
import akka.actor.PoisonPill;
import akka.actor.AbstractActor;
import akka.io.Udp;
import akka.io.UdpConnected;
import akka.io.UdpConnectedMessage;
import akka.io.UdpMessage;
import akka.util.ByteString;
import java.net.InetSocketAddress;
//#imports
public class UdpDocTest {
//#sender
public static class SimpleSender extends AbstractActor {
final InetSocketAddress remote;
public SimpleSender(InetSocketAddress remote) {
this.remote = remote;
// request creation of a SimpleSender
final ActorRef mgr = Udp.get(getContext().getSystem()).getManager();
mgr.tell(UdpMessage.simpleSender(), getSelf());
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Udp.SimpleSenderReady.class, message -> {
getContext().become(ready(getSender()));
//#sender
getSender().tell(UdpMessage.send(ByteString.fromString("hello"), remote), getSelf());
//#sender
})
.build();
}
private Receive ready(final ActorRef send) {
return receiveBuilder()
.match(String.class, message -> {
send.tell(UdpMessage.send(ByteString.fromString(message), remote), getSelf());
//#sender
if (message.equals("world")) {
send.tell(PoisonPill.getInstance(), getSelf());
}
//#sender
})
.build();
}
}
//#sender
//#listener
public static class Listener extends AbstractActor {
final ActorRef nextActor;
public Listener(ActorRef nextActor) {
this.nextActor = nextActor;
// request creation of a bound listen socket
final ActorRef mgr = Udp.get(getContext().getSystem()).getManager();
mgr.tell(
UdpMessage.bind(getSelf(), new InetSocketAddress("localhost", 0)),
getSelf());
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Udp.Bound.class, bound -> {
//#listener
nextActor.tell(bound.localAddress(), getSender());
//#listener
getContext().become(ready(getSender()));
})
.build();
}
private Receive ready(final ActorRef socket) {
return receiveBuilder()
.match(Udp.Received.class, r -> {
// echo server example: send back the data
socket.tell(UdpMessage.send(r.data(), r.sender()), getSelf());
// or do some processing and forward it on
final Object processed = // parse data etc., e.g. using PipelineStage
// #listener
r.data().utf8String();
//#listener
nextActor.tell(processed, getSelf());
})
.matchEquals(UdpMessage.unbind(), message -> {
socket.tell(message, getSelf());
})
.match(Udp.Unbound.class, message -> {
getContext().stop(getSelf());
})
.build();
}
}
//#listener
//#connected
public static class Connected extends AbstractActor {
final InetSocketAddress remote;
public Connected(InetSocketAddress remote) {
this.remote = remote;
// create a restricted a.k.a. connected socket
final ActorRef mgr = UdpConnected.get(getContext().getSystem()).getManager();
mgr.tell(UdpConnectedMessage.connect(getSelf(), remote), getSelf());
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(UdpConnected.Connected.class, message -> {
getContext().become(ready(getSender()));
//#connected
getSender()
.tell(UdpConnectedMessage.send(ByteString.fromString("hello")),
getSelf());
//#connected
})
.build();
}
private Receive ready(final ActorRef connection) {
return receiveBuilder()
.match(UdpConnected.Received.class, r -> {
// process data, send it on, etc.
// #connected
if (r.data().utf8String().equals("hello")) {
connection.tell(
UdpConnectedMessage.send(ByteString.fromString("world")),
getSelf());
}
// #connected
})
.match(String.class, str -> {
connection
.tell(UdpConnectedMessage.send(ByteString.fromString(str)),
getSelf());
})
.matchEquals(UdpConnectedMessage.disconnect(), message -> {
connection.tell(message, getSelf());
})
.match(UdpConnected.Disconnected.class, x -> {
getContext().stop(getSelf());
})
.build();
}
}
//#connected
}

View file

@ -1,243 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.io.japi;
import java.net.InetSocketAddress;
import java.util.LinkedList;
import java.util.Queue;
import akka.actor.ActorRef;
import akka.actor.AbstractActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import akka.io.Tcp.CommandFailed;
import akka.io.Tcp.ConnectionClosed;
import akka.io.Tcp.Event;
import akka.io.Tcp.Received;
import akka.io.Tcp.Write;
import akka.io.Tcp.WritingResumed;
import akka.io.TcpMessage;
import akka.util.ByteString;
//#echo-handler
public class EchoHandler extends AbstractActor {
final LoggingAdapter log = Logging
.getLogger(getContext().getSystem(), getSelf());
final ActorRef connection;
final InetSocketAddress remote;
public static final long MAX_STORED = 100000000;
public static final long HIGH_WATERMARK = MAX_STORED * 5 / 10;
public static final long LOW_WATERMARK = MAX_STORED * 2 / 10;
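// above MAX_STORED the connection is dropped; reading is suspended
// above the high watermark and resumed below the low watermark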
private long transferred;
private int storageOffset = 0;
private long stored = 0;
private Queue<ByteString> storage = new LinkedList<ByteString>();
private boolean suspended = false;
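// each Ack carries the storage offset of the buffered write it confirms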
private static class Ack implements Event {
public final int ack;
public Ack(int ack) {
this.ack = ack;
}
}
public EchoHandler(ActorRef connection, InetSocketAddress remote) {
this.connection = connection;
this.remote = remote;
writing = writing();
// sign death pact: this actor stops when the connection is closed
getContext().watch(connection);
// start out in optimistic write-through mode
getContext().become(writing);
}
@Override
public Receive createReceive() {
return writing;
}
private final Receive writing;
private Receive writing() {
return receiveBuilder()
.match(Received.class, msg -> {
final ByteString data = msg.data();
connection.tell(TcpMessage.write(data, new Ack(currentOffset())), getSelf());
buffer(data);
})
.match(Integer.class, msg -> {
acknowledge(msg);
})
.match(CommandFailed.class, msg -> {
final Write w = (Write) msg.cmd();
connection.tell(TcpMessage.resumeWriting(), getSelf());
getContext().become(buffering((Ack) w.ack()));
})
.match(ConnectionClosed.class, msg -> {
if (msg.isPeerClosed()) {
if (storage.isEmpty()) {
getContext().stop(getSelf());
} else {
getContext().become(closing());
}
}
})
.build();
}
//#buffering
final static class BufferingState {
int toAck = 10;
boolean peerClosed = false;
}
protected Receive buffering(final Ack nack) {
final BufferingState state = new BufferingState();
return receiveBuilder()
.match(Received.class, msg -> {
buffer(msg.data());
})
.match(WritingResumed.class, msg -> {
writeFirst();
})
.match(ConnectionClosed.class, msg -> {
if (msg.isPeerClosed())
state.peerClosed = true;
else
getContext().stop(getSelf());
})
.match(Integer.class, ack -> {
acknowledge(ack);
if (ack >= nack.ack) {
// otherwise it was the ack of the last successful write
if (storage.isEmpty()) {
if (state.peerClosed)
getContext().stop(getSelf());
else
getContext().become(writing);
} else {
if (state.toAck > 0) {
// stay in ACK-based mode for a short while
writeFirst();
--state.toAck;
} else {
// then return to NACK-based again
writeAll();
if (state.peerClosed)
getContext().become(closing());
else
getContext().become(writing);
}
}
}
})
.build();
}
//#buffering
//#closing
protected Receive closing() {
return receiveBuilder()
.match(CommandFailed.class, msg -> {
// the command can only have been a Write
connection.tell(TcpMessage.resumeWriting(), getSelf());
getContext().become(closeResend(), false);
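// 'false' keeps the current behavior on the stack so unbecome() can return to it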
})
.match(Integer.class, msg -> {
acknowledge(msg);
if (storage.isEmpty())
getContext().stop(getSelf());
})
.build();
}
protected Receive closeResend() {
return receiveBuilder()
.match(WritingResumed.class, msg -> {
writeAll();
getContext().unbecome();
})
.match(Integer.class, msg -> {
acknowledge(msg);
})
.build();
}
//#closing
//#storage-omitted
@Override
public void postStop() {
log.info("transferred {} bytes from/to [{}]", transferred, remote);
}
//#helpers
protected void buffer(ByteString data) {
storage.add(data);
stored += data.size();
if (stored > MAX_STORED) {
log.warning("drop connection to [{}] (buffer overrun)", remote);
getContext().stop(getSelf());
} else if (stored > HIGH_WATERMARK) {
log.debug("suspending reading at {}", currentOffset());
connection.tell(TcpMessage.suspendReading(), getSelf());
suspended = true;
}
}
protected void acknowledge(int ack) {
assert ack == storageOffset;
assert !storage.isEmpty();
final ByteString acked = storage.remove();
stored -= acked.size();
transferred += acked.size();
storageOffset += 1;
if (suspended && stored < LOW_WATERMARK) {
log.debug("resuming reading");
connection.tell(TcpMessage.resumeReading(), getSelf());
suspended = false;
}
}
//#helpers
protected int currentOffset() {
return storageOffset + storage.size();
}
protected void writeAll() {
int i = 0;
for (ByteString data : storage) {
connection.tell(TcpMessage.write(data, new Ack(storageOffset + i++)), getSelf());
}
}
protected void writeFirst() {
connection.tell(TcpMessage.write(storage.peek(), new Ack(storageOffset)), getSelf());
}
//#storage-omitted
}
//#echo-handler

View file

@ -1,81 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.io.japi;
import java.net.InetSocketAddress;
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.actor.SupervisorStrategy;
import akka.actor.AbstractActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import akka.io.Tcp;
import akka.io.Tcp.Bind;
import akka.io.Tcp.Bound;
import akka.io.Tcp.Connected;
import akka.io.TcpMessage;
public class EchoManager extends AbstractActor {
final LoggingAdapter log = Logging
.getLogger(getContext().getSystem(), getSelf());
final Class<?> handlerClass;
public EchoManager(Class<?> handlerClass) {
this.handlerClass = handlerClass;
}
@Override
public SupervisorStrategy supervisorStrategy() {
return SupervisorStrategy.stoppingStrategy();
}
@Override
public void preStart() throws Exception {
//#manager
final ActorRef tcpManager = Tcp.get(getContext().getSystem()).manager();
//#manager
tcpManager.tell(
TcpMessage.bind(getSelf(), new InetSocketAddress("localhost", 0), 100),
getSelf());
}
@Override
public void postRestart(Throwable arg0) throws Exception {
// do not restart
getContext().stop(getSelf());
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Bound.class, msg -> {
log.info("listening on [{}]", msg.localAddress());
})
.match(Tcp.CommandFailed.class, failed -> {
if (failed.cmd() instanceof Bind) {
log.warning("cannot bind to [{}]", ((Bind) failed.cmd()).localAddress());
getContext().stop(getSelf());
} else {
log.warning("unknown command failed [{}]", failed.cmd());
}
})
.match(Connected.class, conn -> {
log.info("received connection from [{}]", conn.remoteAddress());
final ActorRef connection = getSender();
final ActorRef handler = getContext().actorOf(
Props.create(handlerClass, connection, conn.remoteAddress()));
//#echo-manager
connection.tell(TcpMessage.register(handler,
true, // <-- keepOpenOnPeerClosed flag
true), getSelf());
//#echo-manager
})
.build();
}
}

View file

@ -1,35 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.io.japi;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
public class EchoServer {
public static void main(String[] args) throws InterruptedException {
final Config config = ConfigFactory.parseString("akka.loglevel=DEBUG");
final ActorSystem system = ActorSystem.create("EchoServer", config);
try {
final CountDownLatch latch = new CountDownLatch(1);
final ActorRef watcher = system.actorOf(Props.create(Watcher.class, latch), "watcher");
final ActorRef nackServer = system.actorOf(Props.create(EchoManager.class, EchoHandler.class), "nack");
final ActorRef ackServer = system.actorOf(Props.create(EchoManager.class, SimpleEchoHandler.class), "ack");
// wrap the refs in Watch messages, otherwise the Watcher never starts watching them
watcher.tell(new Watcher.Watch(nackServer), ActorRef.noSender());
watcher.tell(new Watcher.Watch(ackServer), ActorRef.noSender());
latch.await(10, TimeUnit.MINUTES);
} finally {
system.terminate();
}
}
}

View file

@ -1,186 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.io.japi;
import akka.testkit.AkkaJUnitActorSystemResource;
import jdocs.AbstractJavaTest;
import akka.testkit.javadsl.TestKit;
import org.junit.ClassRule;
import org.junit.Test;
//#imports
import java.net.InetSocketAddress;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.AbstractActor;
import akka.io.Tcp;
import akka.io.Tcp.Bound;
import akka.io.Tcp.CommandFailed;
import akka.io.Tcp.Connected;
import akka.io.Tcp.ConnectionClosed;
import akka.io.Tcp.Received;
import akka.io.TcpMessage;
import akka.util.ByteString;
//#imports
import akka.testkit.AkkaSpec;
public class IODocTest extends AbstractJavaTest {
static
//#server
public class Server extends AbstractActor {
final ActorRef manager;
public Server(ActorRef manager) {
this.manager = manager;
}
public static Props props(ActorRef manager) {
return Props.create(Server.class, manager);
}
@Override
public void preStart() throws Exception {
final ActorRef tcp = Tcp.get(getContext().getSystem()).manager();
tcp.tell(TcpMessage.bind(getSelf(),
new InetSocketAddress("localhost", 0), 100), getSelf());
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Bound.class, msg -> {
manager.tell(msg, getSelf());
})
.match(CommandFailed.class, msg -> {
getContext().stop(getSelf());
})
.match(Connected.class, conn -> {
manager.tell(conn, getSelf());
final ActorRef handler = getContext().actorOf(
Props.create(SimplisticHandler.class));
getSender().tell(TcpMessage.register(handler), getSelf());
})
.build();
}
}
//#server
static
//#simplistic-handler
public class SimplisticHandler extends AbstractActor {
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Received.class, msg -> {
final ByteString data = msg.data();
System.out.println(data);
getSender().tell(TcpMessage.write(data), getSelf());
})
.match(ConnectionClosed.class, msg -> {
getContext().stop(getSelf());
})
.build();
}
}
//#simplistic-handler
static
//#client
public class Client extends AbstractActor {
final InetSocketAddress remote;
final ActorRef listener;
public static Props props(InetSocketAddress remote, ActorRef listener) {
return Props.create(Client.class, remote, listener);
}
public Client(InetSocketAddress remote, ActorRef listener) {
this.remote = remote;
this.listener = listener;
final ActorRef tcp = Tcp.get(getContext().getSystem()).manager();
tcp.tell(TcpMessage.connect(remote), getSelf());
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(CommandFailed.class, msg -> {
listener.tell("failed", getSelf());
getContext().stop(getSelf());
})
.match(Connected.class, msg -> {
listener.tell(msg, getSelf());
getSender().tell(TcpMessage.register(getSelf()), getSelf());
getContext().become(connected(getSender()));
})
.build();
}
private Receive connected(final ActorRef connection) {
return receiveBuilder()
.match(ByteString.class, msg -> {
connection.tell(TcpMessage.write((ByteString) msg), getSelf());
})
.match(CommandFailed.class, msg -> {
// OS kernel socket buffer was full
})
.match(Received.class, msg -> {
listener.tell(msg.data(), getSelf());
})
.matchEquals("close", msg -> {
connection.tell(TcpMessage.close(), getSelf());
})
.match(ConnectionClosed.class, msg -> {
getContext().stop(getSelf());
})
.build();
}
}
//#client
@ClassRule
public static AkkaJUnitActorSystemResource actorSystemResource =
new AkkaJUnitActorSystemResource("IODocTest", AkkaSpec.testConf());
private final ActorSystem system = actorSystemResource.getSystem();
@Test
public void testConnection() {
new TestKit(system) {
{
@SuppressWarnings("unused")
final ActorRef server = system.actorOf(Server.props(getRef()), "server1");
final InetSocketAddress listen = expectMsgClass(Bound.class).localAddress();
final ActorRef client = system.actorOf(Client.props(listen, getRef()), "client1");
final Connected c1 = expectMsgClass(Connected.class);
final Connected c2 = expectMsgClass(Connected.class);
assert c1.localAddress().equals(c2.remoteAddress());
assert c2.localAddress().equals(c1.remoteAddress());
client.tell(ByteString.fromString("hello"), getRef());
final ByteString reply = expectMsgClass(ByteString.class);
assert reply.utf8String().equals("hello");
watch(client);
client.tell("close", getRef());
expectTerminated(client);
}
};
}
}

View file

@ -1,45 +0,0 @@
/**
* Copyright (C) 2013-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.io.japi;
//#message
public class Message {
static public class Person {
private final String first;
private final String last;
public Person(String first, String last) {
this.first = first;
this.last = last;
}
public String getFirst() {
return first;
}
public String getLast() {
return last;
}
}
private final Person[] persons;
private final double[] happinessCurve;
public Message(Person[] persons, double[] happinessCurve) {
this.persons = persons;
this.happinessCurve = happinessCurve;
}
public Person[] getPersons() {
return persons;
}
public double[] getHappinessCurve() {
return happinessCurve;
}
}
//#message

View file

@ -1,134 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.io.japi;
import java.net.InetSocketAddress;
import java.util.LinkedList;
import java.util.Queue;
import akka.actor.ActorRef;
import akka.actor.AbstractActor;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import akka.io.Tcp.ConnectionClosed;
import akka.io.Tcp.Event;
import akka.io.Tcp.Received;
import akka.io.TcpMessage;
import akka.util.ByteString;
//#simple-echo-handler
public class SimpleEchoHandler extends AbstractActor {
final LoggingAdapter log = Logging
.getLogger(getContext().getSystem(), getSelf());
final ActorRef connection;
final InetSocketAddress remote;
public static final long maxStored = 100000000;
public static final long highWatermark = maxStored * 5 / 10;
public static final long lowWatermark = maxStored * 2 / 10;
public SimpleEchoHandler(ActorRef connection, InetSocketAddress remote) {
this.connection = connection;
this.remote = remote;
// sign death pact: this actor stops when the connection is closed
getContext().watch(connection);
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Received.class, msg -> {
final ByteString data = msg.data();
buffer(data);
connection.tell(TcpMessage.write(data, ACK), getSelf());
// now switch behavior to waiting for acknowledgement
getContext().become(buffering(), false);
})
.match(ConnectionClosed.class, msg -> {
getContext().stop(getSelf());
})
.build();
}
private final Receive buffering() {
return receiveBuilder()
.match(Received.class, msg -> {
buffer(msg.data());
})
.match(Event.class, msg -> msg == ACK, msg -> {
acknowledge();
})
.match(ConnectionClosed.class, msg -> {
if (msg.isPeerClosed()) {
closing = true;
} else {
// could also be ErrorClosed, in which case we just give up
getContext().stop(getSelf());
}
})
.build();
}
//#storage-omitted
public void postStop() {
log.info("transferred {} bytes from/to [{}]", transferred, remote);
}
private long transferred;
private long stored = 0;
private Queue<ByteString> storage = new LinkedList<ByteString>();
private boolean suspended = false;
private boolean closing = false;
private final Event ACK = new Event() {};
//#simple-helpers
protected void buffer(ByteString data) {
storage.add(data);
stored += data.size();
if (stored > maxStored) {
log.warning("drop connection to [{}] (buffer overrun)", remote);
getContext().stop(getSelf());
} else if (stored > highWatermark) {
log.debug("suspending reading");
connection.tell(TcpMessage.suspendReading(), getSelf());
suspended = true;
}
}
protected void acknowledge() {
final ByteString acked = storage.remove();
stored -= acked.size();
transferred += acked.size();
if (suspended && stored < lowWatermark) {
log.debug("resuming reading");
connection.tell(TcpMessage.resumeReading(), getSelf());
suspended = false;
}
if (storage.isEmpty()) {
if (closing) {
getContext().stop(getSelf());
} else {
getContext().unbecome();
}
} else {
connection.tell(TcpMessage.write(storage.peek(), ACK), getSelf());
}
}
//#simple-helpers
//#storage-omitted
}
//#simple-echo-handler

View file

@ -1,37 +0,0 @@
package jdocs.io.japi;
import java.util.concurrent.CountDownLatch;
import akka.actor.ActorRef;
import akka.actor.Terminated;
import akka.actor.AbstractActor;
public class Watcher extends AbstractActor {
static public class Watch {
final ActorRef target;
public Watch(ActorRef target) {
this.target = target;
}
}
final CountDownLatch latch;
public Watcher(CountDownLatch latch) {
this.latch = latch;
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(Watch.class, msg -> {
getContext().watch(msg.target);
})
.match(Terminated.class, msg -> {
latch.countDown();
if (latch.getCount() == 0) getContext().stop(getSelf());
})
.build();
}
}

View file

@ -1,49 +0,0 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.pattern;
import akka.actor.*;
import akka.pattern.Backoff;
import akka.pattern.BackoffSupervisor;
import akka.testkit.TestActors.EchoActor;
//#backoff-imports
import scala.concurrent.duration.Duration;
//#backoff-imports
import java.util.concurrent.TimeUnit;
public class BackoffSupervisorDocTest {
void exampleStop (ActorSystem system) {
//#backoff-stop
final Props childProps = Props.create(EchoActor.class);
final Props supervisorProps = BackoffSupervisor.props(
Backoff.onStop(
childProps,
"myEcho",
Duration.create(3, TimeUnit.SECONDS),
Duration.create(30, TimeUnit.SECONDS),
0.2)); // adds 20% "noise" to vary the intervals slightly
system.actorOf(supervisorProps, "echoSupervisor");
//#backoff-stop
}
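// Backoff.onStop above restarts the child with backoff after it stops;
// Backoff.onFailure below reacts to the child crashing with an exception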
void exampleFailure (ActorSystem system) {
//#backoff-fail
final Props childProps = Props.create(EchoActor.class);
final Props supervisorProps = BackoffSupervisor.props(
Backoff.onFailure(
childProps,
"myEcho",
Duration.create(3, TimeUnit.SECONDS),
Duration.create(30, TimeUnit.SECONDS),
0.2)); // adds 20% "noise" to vary the intervals slightly
system.actorOf(supervisorProps, "echoSupervisor");
//#backoff-fail
}
}

Some files were not shown because too many files have changed in this diff