Merge pull request #1841 from akka/wip-3689-activator-samples-patriknw

=doc #3689 Make activator templates
This commit is contained in:
Patrik Nordwall 2013-12-12 12:38:49 -08:00
commit cdfd3f07c1
311 changed files with 5514 additions and 3781 deletions

.gitignore vendored

@ -71,3 +71,5 @@ tm*.lck
tm*.log
tm.out
worker*.log
*-shim.sbt


@ -3,7 +3,7 @@ package docs.osgi
case object SomeMessage
class SomeActor extends akka.actor.Actor {
def receive = { case SomeMessage }
def receive = { case SomeMessage => }
}
//#Activator


@ -35,9 +35,9 @@ class DangerousActor extends Actor with ActorLogging {
def dangerousCall: String = "This really isn't that dangerous of a call after all"
def receive = {
case "is my middle name"
case "is my middle name" =>
breaker.withCircuitBreaker(Future(dangerousCall)) pipeTo sender
case "block for me"
case "block for me" =>
sender ! breaker.withSyncCircuitBreaker(dangerousCall)
}
//#circuit-breaker-usage
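The snippet above assumes a preconfigured ``breaker``. As a plain-Java illustration of the underlying idea (not Akka's ``CircuitBreaker`` API), a breaker counts consecutive failures and fails fast once a threshold is reached:

```java
import java.util.concurrent.Callable;

// Minimal, hypothetical sketch of the circuit-breaker idea (not Akka's API):
// after maxFailures consecutive failures the breaker opens and fails fast.
class SimpleBreaker {
    private final int maxFailures;
    private int failures = 0;
    private boolean open = false;

    SimpleBreaker(int maxFailures) { this.maxFailures = maxFailures; }

    String call(Callable<String> body) throws Exception {
        if (open) throw new IllegalStateException("circuit open");
        try {
            String result = body.call();
            failures = 0;                                // a success resets the count
            return result;
        } catch (Exception e) {
            if (++failures >= maxFailures) open = true;  // trip the breaker
            throw e;
        }
    }
}
```

Akka's real ``CircuitBreaker`` adds call timeouts, a half-open state, and a reset timeout on top of this basic mechanism.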


@ -65,7 +65,7 @@ epub_cover = ("../_sphinx/static/akka.png", "")
def setup(app):
from sphinx.util.texescape import tex_replacements
tex_replacements.append((u'⇒', ur'\(\Rightarrow\)'))
tex_replacements.append((u'=>', ur'\(\Rightarrow\)'))
latex_paper_size = 'a4'
latex_font_size = '10pt'


@ -312,7 +312,7 @@ to do other work) and resume processing when the response is ready. This is
currently the case for a `subset of components`_ such as the `Jetty component`_.
All other Camel components can still be used, of course, but they will cause
allocation of a thread for the duration of an in-out message exchange. There's
also a :ref:`camel-async-example-java` that implements both, an asynchronous
also :ref:`camel-examples-java` that implements both an asynchronous
consumer and an asynchronous producer, with the Jetty component.
If the used Camel component is blocking it might be necessary to use a separate
@ -469,116 +469,18 @@ __ https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/j
Examples
========
.. _camel-async-example-java:
The `Typesafe Activator <http://typesafe.com/platform/getstarted>`_
tutorial named `Akka Camel Samples with Java <http://typesafe.com/activator/template/akka-sample-camel-java>`_
contains 3 samples:
Asynchronous routing and transformation example
-----------------------------------------------
* Asynchronous routing and transformation - This example demonstrates how to implement consumer and
producer actors that support :ref:`camel-asynchronous-routing-java` with their Camel endpoints.
* Custom Camel route - Demonstrates the combined usage of a ``Producer`` and a
``Consumer`` actor as well as the inclusion of a custom Camel route.
This example demonstrates how to implement consumer and producer actors that
support :ref:`camel-asynchronous-routing-java` with their Camel endpoints. The sample
application transforms the content of the Akka homepage, http://akka.io, by
replacing every occurrence of *Akka* with *AKKA*. To run this example, add
a Boot class that starts the actors. After starting
the :ref:`microkernel-java`, direct the browser to http://localhost:8875 and the
transformed Akka homepage should be displayed. Please note that this example
will probably not work if you're behind an HTTP proxy.
The following figure gives an overview how the example actors interact with
external systems and with each other. A browser sends a GET request to
http://localhost:8875 which is the published endpoint of the ``HttpConsumer``
actor. The ``HttpConsumer`` actor forwards the requests to the ``HttpProducer``
actor which retrieves the Akka homepage from http://akka.io. The retrieved HTML
is then forwarded to the ``HttpTransformer`` actor which replaces all occurrences
of *Akka* with *AKKA*. The transformation result is sent back to the ``HttpConsumer``
actor, which finally returns it to the browser.
.. image:: ../images/camel-async-interact.png
Implementing the example actor classes and wiring them together is rather easy
as shown in the following snippet.
.. includecode:: code/docs/camel/sample/http/HttpConsumer.java#HttpExample
.. includecode:: code/docs/camel/sample/http/HttpProducer.java#HttpExample
.. includecode:: code/docs/camel/sample/http/HttpTransformer.java#HttpExample
.. includecode:: code/docs/camel/sample/http/HttpSample.java#HttpExample
The `jetty endpoints`_ of HttpConsumer and HttpProducer support asynchronous
in-out message exchanges and do not allocate threads for the full duration of
the exchange. This is achieved by using `Jetty continuations`_ on the
consumer-side and by using `Jetty's asynchronous HTTP client`_ on the producer
side. The following high-level sequence diagram illustrates that.
.. _jetty endpoints: http://camel.apache.org/jetty.html
.. _Jetty continuations: http://wiki.eclipse.org/Jetty/Feature/Continuations
.. _Jetty's asynchronous HTTP client: http://wiki.eclipse.org/Jetty/Tutorial/HttpClient
.. image:: ../images/camel-async-sequence.png
Custom Camel route example
--------------------------
This section also demonstrates the combined usage of a ``Producer`` and a
``Consumer`` actor as well as the inclusion of a custom Camel route. The
following figure gives an overview.
.. image:: ../images/camel-custom-route.png
* A consumer actor receives a message from an HTTP client
* It forwards the message to another actor that transforms the message (encloses
the original message into hyphens)
* The transformer actor forwards the transformed message to a producer actor
* The producer actor sends the message to a custom Camel route beginning at the
``direct:welcome`` endpoint
* A processor (transformer) in the custom Camel route prepends "Welcome" to the
original message and creates a result message
* The producer actor sends the result back to the consumer actor which returns
it to the HTTP client
The consumer, transformer and
producer actor implementations are as follows.
.. includecode:: code/docs/camel/sample/route/Consumer3.java#CustomRouteExample
.. includecode:: code/docs/camel/sample/route/Transformer.java#CustomRouteExample
.. includecode:: code/docs/camel/sample/route/Producer1.java#CustomRouteExample
.. includecode:: code/docs/camel/sample/route/CustomRouteSample.java#CustomRouteExample
The producer actor knows where to reply the message to because the consumer and
transformer actors have forwarded the original sender reference as well. The
application configuration and the route starting from direct:welcome are done in the code above.
To run the example, add the lines shown above to a Boot class, then start the :ref:`microkernel-java` and POST a message to
``http://localhost:8877/camel/welcome``.
.. code-block:: none
curl -H "Content-Type: text/plain" -d "Anke" http://localhost:8877/camel/welcome
The response should be:
.. code-block:: none
Welcome - Anke -
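The two transformation steps described above can be sketched as plain string functions (class and method names here are illustrative, not from the sample, which uses Camel actors and a route):

```java
// Plain-Java sketch of the message flow: the transformer actor encloses the
// body in hyphens, and the custom Camel route then prepends "Welcome".
class WelcomeFlow {
    // transformer actor: encloses the original message in hyphens
    static String transform(String body) {
        return String.format("- %s -", body);
    }

    // processor in the custom route starting at direct:welcome
    static String welcome(String body) {
        return String.format("Welcome %s", body);
    }

    public static void main(String[] args) {
        // mirrors: curl -d "Anke" http://localhost:8877/camel/welcome
        System.out.println(welcome(transform("Anke"))); // Welcome - Anke -
    }
}
```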
Quartz Scheduler Example
------------------------
Here is an example showing how simple it is to implement a cron-style scheduler
using the Camel Quartz component in Akka.
The following example creates a "timer" actor which fires a message every 2
seconds:
.. includecode:: code/docs/camel/sample/quartz/MyQuartzActor.java#QuartzExample
.. includecode:: code/docs/camel/sample/quartz/QuartzSample.java#QuartzExample
For more information about the Camel Quartz component, see here:
http://camel.apache.org/quartz.html
* Quartz Scheduler Example - Shows how simple it is to implement a cron-style scheduler
using the Camel Quartz component
Additional Resources
====================


@ -23,15 +23,12 @@ The Akka cluster is a separate jar file. Make sure that you have the following d
A Simple Cluster Example
^^^^^^^^^^^^^^^^^^^^^^^^
The following small program together with its configuration starts an ``ActorSystem``
with the Cluster enabled. It joins the cluster and logs some membership events.
The following configuration enables the ``Cluster`` extension to be used.
The node joins the cluster, and an actor subscribes to cluster membership events and logs them.
Try it out:
The ``application.conf`` configuration looks like this:
1. Add the following ``application.conf`` in your project, place it in ``src/main/resources``:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/resources/application.conf#cluster
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/resources/application.conf
To enable cluster capabilities in your Akka project you should, at a minimum, add the :ref:`remoting-java`
settings, but with ``akka.cluster.ClusterActorRefProvider``.
@ -42,49 +39,17 @@ The seed nodes are configured contact points for initial, automatic, join of the
Note that if you are going to start the nodes on different machines you need to specify the
ip-addresses or host names of the machines in ``application.conf`` instead of ``127.0.0.1``
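For instance, with two machines the seed-nodes section might look like this (the host names and the ``ClusterSystem`` name are placeholders to adapt to your setup):

```
akka.cluster.seed-nodes = [
  "akka.tcp://ClusterSystem@host1:2551",
  "akka.tcp://ClusterSystem@host2:2552"]
```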
2. Add the following main program to your project, place it in ``src/main/java``:
An actor that uses the cluster extension may look like this:
.. literalinclude:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/simple/japi/SimpleClusterApp.java
.. literalinclude:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/simple/SimpleClusterListener.java
:language: java
3. Start the first seed node. Open a terminal window and run (one line)::
The actor registers itself as subscriber of certain cluster events. It gets notified with a snapshot event, ``CurrentClusterState``
that holds full state information of the cluster. After that it receives events for changes that happen in the cluster.
mvn exec:java -Dexec.mainClass="sample.cluster.simple.japi.SimpleClusterApp" \
-Dexec.args="2551"
2551 corresponds to the port of the first seed-nodes element in the configuration.
In the log output you see that the cluster node has been started and changed status to 'Up'.
4. Start the second seed node. Open another terminal window and run::
mvn exec:java -Dexec.mainClass="sample.cluster.simple.japi.SimpleClusterApp" \
-Dexec.args="2552"
2552 corresponds to the port of the second seed-nodes element in the configuration.
In the log output you see that the cluster node has been started and joins the other seed node
and becomes a member of the cluster. Its status changed to 'Up'.
Switch over to the first terminal window and see in the log output that the member joined.
5. Start another node. Open a maven session in yet another terminal window and run::
mvn exec:java -Dexec.mainClass="sample.cluster.simple.japi.SimpleClusterApp"
Now you don't need to specify the port number, and it will use a random available port.
It joins one of the configured seed nodes. Look at the log output in the different terminal
windows.
Start even more nodes in the same way, if you like.
6. Shut down one of the nodes by pressing 'ctrl-c' in one of the terminal windows.
The other nodes will detect the failure after a while, which you can see in the log
output in the other terminals.
Look at the source code of the program again. What it does is to create an actor
and register it as subscriber of certain cluster events. It gets notified with
a snapshot event, ``CurrentClusterState``, that holds full state information of
the cluster. After that it receives events for changes that happen in the cluster.
The easiest way to run this example yourself is to download `Typesafe Activator <http://typesafe.com/platform/getstarted>`_
and open the tutorial named `Akka Cluster Samples with Java <http://typesafe.com/activator/template/akka-sample-cluster-java>`_.
It contains instructions on how to run the ``SimpleClusterApp``.
Joining to Seed Nodes
^^^^^^^^^^^^^^^^^^^^^
@ -237,17 +202,13 @@ backend workers, which performs the transformation job, and sends the result bac
the original client. New backend nodes, as well as new frontend nodes, can be
added or removed to the cluster dynamically.
In this example the following imports are used:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/transformation/japi/TransformationBackend.java#imports
Messages:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/transformation/japi/TransformationMessages.java#messages
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/transformation/TransformationMessages.java#messages
The backend worker that performs the transformation job:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/transformation/japi/TransformationBackend.java#backend
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/transformation/TransformationBackend.java#backend
Note that the ``TransformationBackend`` actor subscribes to cluster events to detect new,
potential, frontend nodes, and send them a registration message so that they know
@ -255,36 +216,17 @@ that they can use the backend worker.
The frontend that receives user jobs and delegates to one of the registered backend workers:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/transformation/japi/TransformationFrontend.java#frontend
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/transformation/TransformationFrontend.java#frontend
Note that the ``TransformationFrontend`` actor watches the registered backend
to be able to remove it from its list of availble backend workers.
to be able to remove it from its list of available backend workers.
Death watch uses the cluster failure detector for nodes in the cluster, i.e. it detects
network failures and JVM crashes, in addition to graceful termination of watched
actors.
This example is included in ``akka-samples/akka-sample-cluster`` and you can try it by copying the
`source <@github@/akka-samples/akka-sample-cluster>`_ to your
maven project, defined as in :ref:`cluster_simple_example_java`.
Run it by starting nodes in different terminal windows. For example, starting 2
frontend nodes and 3 backend nodes::
mvn exec:java \
-Dexec.mainClass="sample.cluster.transformation.japi.TransformationFrontendMain" \
-Dexec.args="2551"
mvn exec:java \
-Dexec.mainClass="sample.cluster.transformation.japi.TransformationBackendMain" \
-Dexec.args="2552"
mvn exec:java \
-Dexec.mainClass="sample.cluster.transformation.japi.TransformationBackendMain"
mvn exec:java \
-Dexec.mainClass="sample.cluster.transformation.japi.TransformationBackendMain"
mvn exec:java \
-Dexec.mainClass="sample.cluster.transformation.japi.TransformationFrontendMain"
The `Typesafe Activator <http://typesafe.com/platform/getstarted>`_ tutorial named
`Akka Cluster Samples with Java <http://typesafe.com/activator/template/akka-sample-cluster-java>`_
contains the full source code and instructions on how to run the **Worker Dial-in Example**.
Node Roles
^^^^^^^^^^
@ -307,18 +249,18 @@ members have joined, and the cluster has reached a certain size.
With a configuration option you can define required number of members
before the leader changes member status of 'Joining' members to 'Up'.
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/resources/factorial.conf#min-nr-of-members
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/resources/factorial.conf#min-nr-of-members
In a similar way you can define required number of members of a certain role
before the leader changes member status of 'Joining' members to 'Up'.
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/resources/factorial.conf#role-min-nr-of-members
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/resources/factorial.conf#role-min-nr-of-members
You can start the actors in a ``registerOnMemberUp`` callback, which will
be invoked when the current member status is changed to 'Up', i.e. the cluster
has at least the defined number of members.
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialFrontendMain.java#registerOnUp
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/factorial/FactorialFrontendMain.java#registerOnUp
This callback can be used for things other than starting actors.
@ -448,7 +390,7 @@ Router with Group of Routees
When using a ``Group`` you must start the routee actors on the cluster member nodes.
That is not done by the router. The configuration for a group looks like this:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala#router-lookup-config
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala#router-lookup-config
.. note::
@ -466,7 +408,7 @@ to a high value will result in new routees added to the router when nodes join t
The same type of router could also have been defined in code:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsService.java#router-lookup-in-code
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/stats/Extra.java#router-lookup-in-code
See :ref:`cluster_configuration_java` section for further descriptions of the settings.
@ -482,23 +424,19 @@ to count number of characters in each word to a separate worker, a routee of a r
The character count for each word is sent back to an aggregator that calculates
the average number of characters per word when all results have been collected.
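The aggregation itself is ordinary arithmetic; as a plain-Java sketch of what the workers and the aggregator compute together (names are illustrative, not from the sample):

```java
// Sketch of the stats computation: each worker counts the characters of one
// word; the aggregator averages the counts once all results are in.
class WordStats {
    static double averageWordLength(String text) {
        String[] words = text.split(" ");
        int total = 0;
        for (String word : words) {
            total += word.length();            // one worker's result per word
        }
        return (double) total / words.length;  // the aggregator's average
    }

    public static void main(String[] args) {
        System.out.println(averageWordLength("this is a test")); // 2.75
    }
}
```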
In this example we use the following imports:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsService.java#imports
Messages:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsMessages.java#messages
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/stats/StatsMessages.java#messages
The worker that counts number of characters in each word:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsWorker.java#worker
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/stats/StatsWorker.java#worker
The service that receives text from users and splits it up into words, delegates to workers and aggregates:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsService.java#service
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/stats/StatsService.java#service
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsAggregator.java#aggregator
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/stats/StatsAggregator.java#aggregator
Note that there is nothing cluster-specific so far, just plain actors.
@ -506,31 +444,14 @@ Note, nothing cluster specific so far, just plain actors.
All nodes start ``StatsService`` and ``StatsWorker`` actors. Remember, routees are the workers in this case.
The router is configured with ``routees.paths``:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/resources/application.conf#config-router-lookup
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/resources/stats1.conf#config-router-lookup
This means that user requests can be sent to ``StatsService`` on any node and it will use
``StatsWorker`` on all nodes. There can only be one worker per node, but that worker could easily
fan out to local children if more parallelism is needed.
``StatsWorker`` on all nodes.
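For reference, a cluster-aware group router of this kind is configured along these lines (the path, router type, and numbers follow the sample's pattern but are assumptions here):

```
akka.actor.deployment {
  /statsService/workerRouter {
    router = consistent-hashing-group
    nr-of-instances = 100
    routees.paths = ["/user/statsWorker"]
    cluster {
      enabled = on
      allow-local-routees = on
    }
  }
}
```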
This example is included in ``akka-samples/akka-sample-cluster`` and you can try it by copying the
`source <@github@/akka-samples/akka-sample-cluster>`_ to your
maven project, defined as in :ref:`cluster_simple_example_java`.
Run it by starting nodes in different terminal windows. For example, starting 3
service nodes and 1 client::
mvn exec:java \
-Dexec.mainClass="sample.cluster.stats.japi.StatsSampleMain" \
-Dexec.args="2551"
mvn exec:java \
-Dexec.mainClass="sample.cluster.stats.japi.StatsSampleMain" \
-Dexec.args="2552"
mvn exec:java \
-Dexec.mainClass="sample.cluster.stats.japi.StatsSampleMain"
mvn exec:java \
-Dexec.mainClass="sample.cluster.stats.japi.StatsSampleMain"
The `Typesafe Activator <http://typesafe.com/platform/getstarted>`_ tutorial named
`Akka Cluster Samples with Java <http://typesafe.com/activator/template/akka-sample-cluster-java>`_
contains the full source code and instructions on how to run the **Router Example with Group of Routees**.
Router with Pool of Remote Deployed Routees
-------------------------------------------
@ -538,7 +459,7 @@ Router with Pool of Remote Deployed Routees
When using a ``Pool`` with routees created and deployed on the cluster member nodes
the configuration for a router looks like this:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSingleMasterSpec.scala#router-deploy-config
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSingleMasterSpec.scala#router-deploy-config
It is possible to limit the deployment of routees to member nodes tagged with a certain role by
specifying ``use-role``.
@ -550,7 +471,7 @@ the cluster.
The same type of router could also have been defined in code:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsService.java#router-deploy-in-code
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/stats/Extra.java#router-deploy-in-code
See :ref:`cluster_configuration_java` section for further descriptions of the settings.
@ -561,44 +482,23 @@ Let's take a look at how to use a cluster aware router on single master node tha
and deploys workers. To keep track of a single master we use the :ref:`cluster-singleton`
in the contrib module. The ``ClusterSingletonManager`` is started on each node.
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsSampleOneMasterMain.java#create-singleton-manager
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/stats/StatsSampleOneMasterMain.java#create-singleton-manager
We also need an actor on each node that keeps track of where the current single master exists and
delegates jobs to the ``StatsService``.
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsFacade.java#facade
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/stats/StatsFacade.java#facade
The ``StatsFacade`` receives text from users and delegates to the current ``StatsService``, the single
master. It listens to cluster events to look up the ``StatsService`` on the oldest node.
All nodes start ``StatsFacade`` and the ``ClusterSingletonManager``. The router is now configured like this:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/resources/application.conf#config-router-deploy
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/resources/stats2.conf#config-router-deploy
This example is included in ``akka-samples/akka-sample-cluster`` and you can try it by copying the
`source <@github@/akka-samples/akka-sample-cluster>`_ to your
maven project, defined as in :ref:`cluster_simple_example_java`. Also add the `akka-contrib` dependency
to your pom.xml.
Run it by starting nodes in different terminal windows. For example, starting 3
service nodes and 1 client::
mvn exec:java \
-Dexec.mainClass="sample.cluster.stats.japi.StatsSampleOneMasterMain" \
-Dexec.args="2551"
mvn exec:java \
-Dexec.mainClass="sample.cluster.stats.japi.StatsSampleOneMasterMain" \
-Dexec.args="2552"
mvn exec:java \
-Dexec.mainClass="sample.cluster.stats.japi.StatsSampleOneMasterClientMain"
mvn exec:java \
-Dexec.mainClass="sample.cluster.stats.japi.StatsSampleOneMasterMain"
.. note:: The above example will be simplified when the cluster handles automatic actor partitioning.
The `Typesafe Activator <http://typesafe.com/platform/getstarted>`_ tutorial named
`Akka Cluster Samples with Java <http://typesafe.com/activator/template/akka-sample-cluster-java>`_
contains the full source code and instructions on how to run the **Router Example with Pool of Remote Deployed Routees**.
Cluster Metrics
^^^^^^^^^^^^^^^
@ -637,63 +537,40 @@ It can be configured to use a specific MetricsSelector to produce the probabilit
The collected metrics values are smoothed with `exponential weighted moving average <http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average>`_. In the :ref:`cluster_configuration_java` you can adjust how quickly past data is decayed compared to new data.
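The smoothing step itself is a one-liner; a sketch of the EWMA update (the ``alpha`` decay factor here is an assumed value, not Akka's configured half-life):

```java
// Exponential weighted moving average: alpha weights the new sample against
// the decayed history, so past data fades geometrically.
class Ewma {
    static double step(double previous, double sample, double alpha) {
        return alpha * sample + (1 - alpha) * previous;
    }

    public static void main(String[] args) {
        double smoothed = 100.0;              // initial smoothed metric value
        for (double sample : new double[] { 110, 120, 130 }) {
            smoothed = step(smoothed, sample, 0.5);
        }
        System.out.println(smoothed);         // 121.25
    }
}
```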
Let's take a look at this router in action.
In this example the following imports are used:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialBackend.java#imports
Let's take a look at this router in action. What can be more demanding than calculating factorials?
The backend worker that performs the factorial calculation:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialBackend.java#backend
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/factorial/FactorialBackend.java#backend
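The calculation the backend performs boils down to an arbitrary-precision factorial; a minimal sketch (the actor in the sample wraps this and replies asynchronously):

```java
import java.math.BigInteger;

// Arbitrary-precision factorial, the job performed by the backend worker.
class FactorialSketch {
    static BigInteger factorial(int n) {
        BigInteger acc = BigInteger.ONE;
        for (int i = 1; i <= n; i++) {
            acc = acc.multiply(BigInteger.valueOf(i));
        }
        return acc;
    }
}
```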
The frontend that receives user jobs and delegates to the backends via the router:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialFrontend.java#frontend
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/factorial/FactorialFrontend.java#frontend
As you can see, the router is defined in the same way as other routers, and in this case it is configured as follows:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/resources/application.conf#adaptive-router
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/resources/factorial.conf#adaptive-router
Only the router type ``adaptive`` and the ``metrics-selector`` are specific to this router; everything else works
in the same way as other routers.
The same type of router could also have been defined in code:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialFrontend.java#router-lookup-in-code
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/factorial/Extra.java#router-lookup-in-code
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialFrontend.java#router-deploy-in-code
This example is included in ``akka-samples/akka-sample-cluster`` and you can try it by copying the
`source <@github@/akka-samples/akka-sample-cluster>`_ to your
maven project, defined as in :ref:`cluster_simple_example_java`.
Run it by starting nodes in different terminal windows. For example, starting 3 backend nodes and
one frontend::
mvn exec:java \
-Dexec.mainClass="sample.cluster.factorial.japi.FactorialBackendMain" \
-Dexec.args="2551"
mvn exec:java \
-Dexec.mainClass="sample.cluster.factorial.japi.FactorialBackendMain" \
-Dexec.args="2552"
mvn exec:java \
-Dexec.mainClass="sample.cluster.factorial.japi.FactorialBackendMain"
mvn exec:java \
-Dexec.mainClass="sample.cluster.factorial.japi.FactorialFrontendMain"
Press ctrl-c in the terminal window of the frontend to stop the factorial calculations.
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/factorial/Extra.java#router-deploy-in-code
The `Typesafe Activator <http://typesafe.com/platform/getstarted>`_ tutorial named
`Akka Cluster Samples with Java <http://typesafe.com/activator/template/akka-sample-cluster-java>`_
contains the full source code and instructions on how to run the **Adaptive Load Balancing** sample.
Subscribe to Metrics Events
---------------------------
It is possible to subscribe to the metrics events directly to implement other functionality.
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/MetricsListener.java#metrics-listener
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/factorial/MetricsListener.java#metrics-listener
Custom Metrics Collector
------------------------


@ -1,23 +0,0 @@
package docs.camel.sample.http;
import akka.actor.*;
public class HttpSample {
public static void main(String[] args) {
//#HttpExample
// Create the actors. this can be done in a Boot class so you can
// run the example in the MicroKernel. Just add the three lines below
// to your boot class.
ActorSystem system = ActorSystem.create("some-system");
final ActorRef httpTransformer = system.actorOf(
Props.create(HttpTransformer.class));
final ActorRef httpProducer = system.actorOf(
Props.create(HttpProducer.class, httpTransformer));
final ActorRef httpConsumer = system.actorOf(
Props.create(HttpConsumer.class, httpProducer));
//#HttpExample
}
}


@ -1,18 +0,0 @@
package docs.camel.sample.route;
//#CustomRouteExample
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
public class CustomRouteBuilder extends RouteBuilder{
public void configure() throws Exception {
from("direct:welcome").process(new Processor(){
public void process(Exchange exchange) throws Exception {
exchange.getOut().setBody(String.format("Welcome %s",
exchange.getIn().getBody()));
}
});
}
}
//#CustomRouteExample


@ -1,23 +0,0 @@
package docs.camel.sample.route;
import akka.actor.*;
import akka.camel.CamelExtension;
public class CustomRouteSample {
@SuppressWarnings("unused")
public static void main(String[] args) {
try {
//#CustomRouteExample
// the below lines can be added to a Boot class, so that you can run the
// example from a MicroKernel
ActorSystem system = ActorSystem.create("some-system");
final ActorRef producer = system.actorOf(Props.create(Producer1.class));
final ActorRef mediator = system.actorOf(Props.create(Transformer.class, producer));
final ActorRef consumer = system.actorOf(Props.create(Consumer3.class, mediator));
CamelExtension.get(system).context().addRoutes(new CustomRouteBuilder());
//#CustomRouteExample
} catch (Exception e) {
e.printStackTrace();
}
}
}


@ -1,10 +0,0 @@
package docs.camel.sample.route;
//#CustomRouteExample
import akka.camel.javaapi.UntypedProducerActor;
public class Producer1 extends UntypedProducerActor{
public String getEndpointUri() {
return "direct:welcome";
}
}
//#CustomRouteExample


@ -2,42 +2,17 @@
The Obligatory Hello World
##########################
Since every programming paradigm needs to solve the tough problem of printing a
well-known greeting to the console, we'll introduce you to the actor-based
version.
The actor based version of the tough problem of printing a
well-known greeting to the console is introduced in a `Typesafe Activator <http://typesafe.com/platform/getstarted>`_
tutorial named `Akka Main in Java <http://typesafe.com/activator/template/akka-sample-main-java>`_.
.. includecode:: ../java/code/docs/actor/japi/HelloWorld.java#hello-world
The ``HelloWorld`` actor is the application's “main” class; when it terminates
the application will shut down—more on that later. The main business logic
happens in the :meth:`preStart` method, where a ``Greeter`` actor is created
and instructed to issue that greeting we crave for. When the greeter is done it
will tell us so by sending back a message, and when that message has been
received it will be passed into the :meth:`onReceive` method where we can
conclude the demonstration by stopping the ``HelloWorld`` actor. You will be
very curious to see how the ``Greeter`` actor performs the actual task:
.. includecode:: ../java/code/docs/actor/japi/Greeter.java#greeter
This is extremely simple now: after its creation this actor will not do
anything until someone sends it a message, and if that happens to be an
invitation to greet the world then the ``Greeter`` complies and informs the
requester that the deed has been done.
As a Java developer you will probably want to tell us that there is no
``static public void main(...)`` anywhere in these classes, so how do we run
this program? The answer is that the appropriate :meth:`main` method is
implemented in the generic launcher class :class:`akka.Main` which expects only
The tutorial illustrates the generic launcher class :class:`akka.Main` which expects only
one command line argument: the class name of the application's main actor. This
main method will then create the infrastructure needed for running the actors,
start the given main actor and arrange for the whole application to shut down
once the main actor terminates. Thus you will be able to run the above code
with a command similar to the following::
once the main actor terminates.
java -classpath <all those JARs> akka.Main com.example.HelloWorld
This conveniently assumes placement of the above class definitions in package
``com.example`` and it further assumes that you have the required JAR files for
``scala-library`` and ``akka-actor`` available. The easiest would be to manage
these dependencies with a build tool, see :ref:`build-tool`.
There is also another `Typesafe Activator <http://typesafe.com/platform/getstarted>`_
tutorial in the same problem domain that is named `Hello Akka! <http://typesafe.com/activator/template/hello-akka>`_.
It describes the basics of Akka in more depth.


@ -246,110 +246,14 @@ This is also done via configuration::
This configuration setting will clone the actor “aggregation” 10 times and deploy it evenly distributed across
the two given target nodes.
Description of the Remoting Sample
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. _remote-sample-java:
There is a more extensive remote example that comes with the Akka distribution.
Please have a look here for more information: `Remote Sample
<@github@/akka-samples/akka-sample-remote>`_
This sample demonstrates both remote deployment and look-up of remote actors.
First, let us have a look at the common setup for both scenarios (this is
``common.conf``):
Remoting Sample
^^^^^^^^^^^^^^^
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/resources/common.conf
This enables remoting by installing the :class:`RemoteActorRefProvider` and
chooses the default remote transport. All other options will be set
specifically for each showcase.
.. note::
Be sure to replace the default IP 127.0.0.1 with the real address the system
is reachable by if you deploy onto multiple machines!
.. _remote-lookup-sample-java:
Remote Lookup
-------------
In order to look up a remote actor, it must be created first. For this
purpose, we configure an actor system to listen on port 2552 (this is a snippet
from ``application.conf``):
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/resources/application.conf
:include: calculator
Then the actor must be created. For all code which follows, assume these imports:
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/java/sample/remote/calculator/java/JLookupApplication.java
:include: imports
The actor doing the work will be this one:
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/java/sample/remote/calculator/java/JSimpleCalculatorActor.java
:include: actor
and we start it within an actor system using the above configuration
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/java/sample/remote/calculator/java/JCalculatorApplication.java
:include: setup
With the service actor up and running, we may look it up from another actor
system, which will be configured to use port 2553 (this is a snippet from
``application.conf``).
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/resources/application.conf
:include: remotelookup
The actor which will query the calculator is a quite simple one for demonstration purposes
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/java/sample/remote/calculator/java/JLookupActor.java
:include: actor
and it is created from an actor system using the aforementioned client's config.
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/java/sample/remote/calculator/java/JLookupApplication.java
:include: setup
Requests which come in via ``doSomething`` will be sent to the client actor,
which will use the actor reference that was identified earlier. Observe how the
actor system name used in ``actorSelection`` matches the remote system's name,
as do the IP and port number. Top-level actors are always created below the
``"/user"`` guardian, which supervises them.
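Though not part of the sample itself, the shape of such a remote actor path can be sketched as a plain string. The system name, host, port and actor name below are hypothetical; in practice they must match the remote system's configuration.

```java
public class RemotePathExample {
    // Builds the kind of remote actor path passed to actorSelection:
    // protocol://systemName@host:port/user/actorName
    // (top-level actors live under the "/user" guardian).
    static String remotePath(String system, String host, int port, String actorName) {
        return String.format("akka.tcp://%s@%s:%d/user/%s", system, host, port, actorName);
    }

    public static void main(String[] args) {
        // Hypothetical names for illustration only.
        System.out.println(remotePath("CalculatorSystem", "127.0.0.1", 2552, "simpleCalculator"));
    }
}
```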
Remote Deployment
-----------------
Creating remote actors instead of looking them up is not visible in the source
code, only in the configuration file. This section is used in this scenario
(this is a snippet from ``application.conf``):
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/resources/application.conf
:include: remotecreation
For all code which follows, assume these imports:
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/java/sample/remote/calculator/java/JLookupApplication.java
:include: imports
The server actor can multiply or divide numbers:
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/java/sample/remote/calculator/java/JAdvancedCalculatorActor.java
:include: actor
The client actor looks like the one in the previous example
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/java/sample/remote/calculator/java/JCreationActor.java
:include: actor
but the setup uses only ``actorOf``:
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/java/sample/remote/calculator/java/JCreationApplication.java
:include: setup
Observe how the name of the server actor matches the deployment given in the
configuration file, which will transparently delegate the actor creation to the
remote node.
There is a more extensive remote example that comes with `Typesafe Activator <http://typesafe.com/platform/getstarted>`_.
The tutorial named `Akka Remote Samples with Java <http://typesafe.com/activator/template/akka-sample-remote-java>`_
demonstrates both remote deployment and look-up of remote actors.
Pluggable transport support
---------------------------


@ -445,7 +445,7 @@ Remote actor addresses may also be looked up, if :ref:`remoting <remoting-java>`
.. includecode:: code/docs/actor/UntypedActorDocTest.java#selection-remote
An example demonstrating remote actor look-up is given in :ref:`remote-lookup-sample-java`.
An example demonstrating remote actor look-up is given in :ref:`remote-sample-java`.
.. note::


@ -528,7 +528,7 @@ Remote actor addresses may also be looked up, if :ref:`remoting <remoting-scala>
.. includecode:: code/docs/actor/ActorDocSpec.scala#selection-remote
An example demonstrating actor look-up is given in :ref:`remote-lookup-sample-scala`.
An example demonstrating actor look-up is given in :ref:`remote-sample-scala`.
.. note::


@ -308,7 +308,7 @@ to do other work) and resume processing when the response is ready. This is
currently the case for a `subset of components`_ such as the `Jetty component`_.
All other Camel components can still be used, of course, but they will cause
allocation of a thread for the duration of an in-out message exchange. There's
also a :ref:`camel-async-example` that implements both, an asynchronous
also :ref:`camel-examples` that implements both an asynchronous
consumer and an asynchronous producer, with the Jetty component.
If the used Camel component is blocking it might be necessary to use a separate
@ -463,110 +463,19 @@ __ https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/j
Examples
========
.. _camel-async-example:
The `Typesafe Activator <http://typesafe.com/platform/getstarted>`_
tutorial named `Akka Camel Samples with Scala <http://typesafe.com/activator/template/akka-sample-camel-scala>`_
contains 3 samples:
Asynchronous routing and transformation example
-----------------------------------------------
* Asynchronous routing and transformation - This example demonstrates how to implement consumer and
producer actors that support :ref:`camel-asynchronous-routing` with their Camel endpoints.
* Custom Camel route - Demonstrates the combined usage of a ``Producer`` and a
``Consumer`` actor as well as the inclusion of a custom Camel route.
This example demonstrates how to implement consumer and producer actors that
support :ref:`camel-asynchronous-routing` with their Camel endpoints. The sample
application transforms the content of the Akka homepage, http://akka.io, by
replacing every occurrence of *Akka* with *AKKA*. To run this example, add
a Boot class that starts the actors. After starting
the :ref:`microkernel-scala`, direct the browser to http://localhost:8875 and the
transformed Akka homepage should be displayed. Please note that this example
will probably not work if you're behind an HTTP proxy.
* Quartz Scheduler Example - Shows how simple it is to implement a cron-style scheduler by
  using the Camel Quartz component
The following figure gives an overview of how the example actors interact with
external systems and with each other. A browser sends a GET request to
http://localhost:8875 which is the published endpoint of the ``HttpConsumer``
actor. The ``HttpConsumer`` actor forwards the requests to the ``HttpProducer``
actor which retrieves the Akka homepage from http://akka.io. The retrieved HTML
is then forwarded to the ``HttpTransformer`` actor which replaces all occurrences
of *Akka* with *AKKA*. The transformation result is sent back to the
``HttpConsumer``, which finally returns it to the browser.
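Stripped of the actor and Camel plumbing, the transformation step amounts to a plain string replacement. This sketch only illustrates that core; it is not the sample's actual ``HttpTransformer`` class.

```java
public class HttpTransformerSketch {
    // The core of the transformation step: replace every occurrence of
    // "Akka" with "AKKA" in the retrieved HTML body.
    static String transform(String body) {
        return body.replace("Akka", "AKKA");
    }

    public static void main(String[] args) {
        System.out.println(transform("<h1>Akka - Build powerful apps with Akka</h1>"));
    }
}
```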
.. image:: ../images/camel-async-interact.png
Implementing the example actor classes and wiring them together is rather easy
as shown in the following snippet.
.. includecode:: code/docs/camel/HttpExample.scala#HttpExample
The `jetty endpoints`_ of HttpConsumer and HttpProducer support asynchronous
in-out message exchanges and do not allocate threads for the full duration of
the exchange. This is achieved by using `Jetty continuations`_ on the
consumer-side and by using `Jetty's asynchronous HTTP client`_ on the producer
side. The following high-level sequence diagram illustrates that.
.. _jetty endpoints: http://camel.apache.org/jetty.html
.. _Jetty continuations: http://wiki.eclipse.org/Jetty/Feature/Continuations
.. _Jetty's asynchronous HTTP client: http://wiki.eclipse.org/Jetty/Tutorial/HttpClient
.. image:: ../images/camel-async-sequence.png
Custom Camel route example
--------------------------
This section also demonstrates the combined usage of a ``Producer`` and a
``Consumer`` actor as well as the inclusion of a custom Camel route. The
following figure gives an overview.
.. image:: ../images/camel-custom-route.png
* A consumer actor receives a message from an HTTP client
* It forwards the message to another actor that transforms the message (encloses
the original message into hyphens)
* The transformer actor forwards the transformed message to a producer actor
* The producer actor sends the message to a custom Camel route beginning at the
``direct:welcome`` endpoint
* A processor (transformer) in the custom Camel route prepends "Welcome" to the
original message and creates a result message
* The producer actor sends the result back to the consumer actor which returns
it to the HTTP client
The consumer, transformer and
producer actor implementations are as follows.
.. includecode:: code/docs/camel/CustomRouteExample.scala#CustomRouteExample
The producer actor knows where to send the reply because the consumer and
transformer actors have forwarded the original sender reference as well. The
application configuration and the route starting from ``direct:welcome`` are done
in the code above.
To run the example, add the lines shown in the example to a Boot class, then start
the :ref:`microkernel-scala` and POST a message to
``http://localhost:8877/camel/welcome``.
.. code-block:: none
curl -H "Content-Type: text/plain" -d "Anke" http://localhost:8877/camel/welcome
The response should be:
.. code-block:: none
Welcome - Anke -
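The two transformation steps described above compose into that response. The following plain-Java sketch (the method names are made up for illustration) shows the pipeline without any Camel or actor machinery:

```java
public class WelcomeRouteSketch {
    // Transformer actor step: enclose the original message in hyphens.
    static String enclose(String msg) {
        return "- " + msg + " -";
    }

    // Custom Camel route step: prepend "Welcome" to the transformed message.
    static String welcome(String msg) {
        return "Welcome " + msg;
    }

    public static void main(String[] args) {
        // Same input as the curl example in the docs.
        System.out.println(welcome(enclose("Anke"))); // prints "Welcome - Anke -"
    }
}
```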
Quartz Scheduler Example
------------------------
Here is an example showing how simple it is to implement a cron-style scheduler by
using the Camel Quartz component in Akka.
The following example creates a "timer" actor which fires a message every 2
seconds:
.. includecode:: code/docs/camel/QuartzExample.scala#Quartz
For more information about the Camel Quartz component, see here:
http://camel.apache.org/quartz.html
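As a rough plain-JDK analogue of such a periodic timer (not using Camel or Quartz at all), a ``ScheduledExecutorService`` can deliver ticks on a fixed schedule. The helper method, period, and tick count below are illustrative only.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TimerSketch {
    // Fire a tick on a fixed schedule (every `periodMs` instead of the
    // sample's 2 seconds) and wait until `count` ticks have been delivered.
    static boolean awaitTicks(int count, long periodMs, long timeoutMs) throws InterruptedException {
        final CountDownLatch ticks = new CountDownLatch(count);
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() { ticks.countDown(); }
        }, 0, periodMs, TimeUnit.MILLISECONDS);
        try {
            return ticks.await(timeoutMs, TimeUnit.MILLISECONDS);
        } finally {
            scheduler.shutdownNow();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(awaitTicks(3, 100, 2000) ? "received 3 ticks" : "timed out");
    }
}
```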
Additional Resources
====================


@ -17,15 +17,12 @@ The Akka cluster is a separate jar file. Make sure that you have the following dependency:
A Simple Cluster Example
^^^^^^^^^^^^^^^^^^^^^^^^
The following small program together with its configuration starts an ``ActorSystem``
with the Cluster enabled. It joins the cluster and logs some membership events.
The following configuration enables the ``Cluster`` extension. The node joins the
cluster, and an actor subscribes to cluster membership events and logs them.
Try it out:
The ``application.conf`` configuration looks like this:
1. Add the following ``application.conf`` in your project, place it in ``src/main/resources``:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/resources/application.conf#cluster
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/resources/application.conf
To enable cluster capabilities in your Akka project you should, at a minimum, add the :ref:`remoting-scala`
settings, but with ``akka.cluster.ClusterActorRefProvider``.
@ -36,48 +33,17 @@ The seed nodes are configured contact points for initial, automatic, join of the
Note that if you are going to start the nodes on different machines you need to specify the
ip-addresses or host names of the machines in ``application.conf`` instead of ``127.0.0.1``
2. Add the following main program to your project, place it in ``src/main/scala``:
An actor that uses the cluster extension may look like this:
.. literalinclude:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/simple/SimpleClusterApp.scala
.. literalinclude:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/simple/SimpleClusterListener.scala
:language: scala
The actor registers itself as subscriber of certain cluster events. It gets notified with a snapshot event, ``CurrentClusterState``
that holds full state information of the cluster. After that it receives events for changes that happen in the cluster.
3. Start the first seed node. Open a sbt session in one terminal window and run::
run-main sample.cluster.simple.SimpleClusterApp 2551
2551 corresponds to the port of the first seed-nodes element in the configuration.
In the log output you see that the cluster node has been started and changed status to 'Up'.
4. Start the second seed node. Open a sbt session in another terminal window and run::
run-main sample.cluster.simple.SimpleClusterApp 2552
2552 corresponds to the port of the second seed-nodes element in the configuration.
In the log output you see that the cluster node has been started and joins the other seed node
and becomes a member of the cluster. Its status changed to 'Up'.
Switch over to the first terminal window and see in the log output that the member joined.
5. Start another node. Open a sbt session in yet another terminal window and run::
run-main sample.cluster.simple.SimpleClusterApp
Now you don't need to specify the port number, and it will use a random available port.
It joins one of the configured seed nodes. Look at the log output in the different terminal
windows.
Start even more nodes in the same way, if you like.
6. Shut down one of the nodes by pressing 'ctrl-c' in one of the terminal windows.
The other nodes will detect the failure after a while, which you can see in the log
output in the other terminals.
Look at the source code of the program again. What it does is to create an actor
and register it as subscriber of certain cluster events. It gets notified with
a snapshot event, ``CurrentClusterState`` that holds full state information of
the cluster. After that it receives events for changes that happen in the cluster.
The easiest way to run this example yourself is to download `Typesafe Activator <http://typesafe.com/platform/getstarted>`_
and open the tutorial named `Akka Cluster Samples with Scala <http://typesafe.com/activator/template/akka-sample-cluster-scala>`_.
It contains instructions on how to run the ``SimpleClusterApp``.
Joining to Seed Nodes
^^^^^^^^^^^^^^^^^^^^^
@ -230,17 +196,13 @@ backend workers, which performs the transformation job, and sends the result back to
the original client. New backend nodes, as well as new frontend nodes, can be
added to or removed from the cluster dynamically.
In this example the following imports are used:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/transformation/TransformationSample.scala#imports
Messages:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/transformation/TransformationSample.scala#messages
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/transformation/TransformationMessages.scala#messages
The backend worker that performs the transformation job:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/transformation/TransformationSample.scala#backend
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/transformation/TransformationBackend.scala#backend
Note that the ``TransformationBackend`` actor subscribes to cluster events to detect new,
potential, frontend nodes, and sends them a registration message so that they know
@ -248,31 +210,17 @@ that they can use the backend worker.
The frontend that receives user jobs and delegates to one of the registered backend workers:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/transformation/TransformationSample.scala#frontend
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/transformation/TransformationFrontend.scala#frontend
Note that the ``TransformationFrontend`` actor watches the registered backend
to be able to remove it from its list of availble backend workers.
to be able to remove it from its list of available backend workers.
Death watch uses the cluster failure detector for nodes in the cluster, i.e. it detects
network failures and JVM crashes, in addition to graceful termination of watched
actors.
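The message protocol between frontend and backend can be sketched with plain serializable classes. The field names and the upper-casing transformation below are assumptions about the sample, not its exact code.

```java
import java.io.Serializable;

public class TransformationMessagesSketch {
    // Job sent from a frontend to a backend worker (assumed shape).
    public static final class TransformationJob implements Serializable {
        public final String text;
        public TransformationJob(String text) { this.text = text; }
    }

    // Result sent back from the backend to the original client (assumed shape).
    public static final class TransformationResult implements Serializable {
        public final String text;
        public TransformationResult(String text) { this.text = text; }
    }

    // The backend's transformation work, assumed here to be upper-casing.
    public static TransformationResult transform(TransformationJob job) {
        return new TransformationResult(job.text.toUpperCase());
    }
}
```

Messages exchanged between cluster nodes must be serializable, which is why both classes implement ``Serializable``.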
This example is included in ``akka-samples/akka-sample-cluster``
and you can try it by starting nodes in different terminal windows. For example, starting 2
frontend nodes and 3 backend nodes::
sbt
project akka-sample-cluster
run-main sample.cluster.transformation.TransformationFrontend 2551
run-main sample.cluster.transformation.TransformationBackend 2552
run-main sample.cluster.transformation.TransformationBackend
run-main sample.cluster.transformation.TransformationBackend
run-main sample.cluster.transformation.TransformationFrontend
The `Typesafe Activator <http://typesafe.com/platform/getstarted>`_ tutorial named
`Akka Cluster Samples with Scala <http://typesafe.com/activator/template/akka-sample-cluster-scala>`_
contains the full source code and instructions on how to run the **Worker Dial-in Example**.
Node Roles
^^^^^^^^^^
@ -295,18 +243,18 @@ members have joined, and the cluster has reached a certain size.
With a configuration option you can define the required number of members
before the leader changes member status of 'Joining' members to 'Up'.
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/resources/factorial.conf#min-nr-of-members
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/resources/factorial.conf#min-nr-of-members
In a similar way you can define the required number of members of a certain role
before the leader changes member status of 'Joining' members to 'Up'.
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/resources/factorial.conf#role-min-nr-of-members
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/resources/factorial.conf#role-min-nr-of-members
You can start the actors in a ``registerOnMemberUp`` callback, which will
be invoked when the current member status is changed to 'Up', i.e. the cluster
has at least the defined number of members.
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/factorial/FactorialSample.scala#registerOnUp
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/factorial/FactorialFrontend.scala#registerOnUp
This callback can be used for other things than starting actors.
@ -439,7 +387,7 @@ Router with Group of Routees
When using a ``Group`` you must start the routee actors on the cluster member nodes.
That is not done by the router. The configuration for a group looks like this:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala#router-lookup-config
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala#router-lookup-config
.. note::
@ -457,7 +405,7 @@to a high value will result in new routees added to the router when nodes join the cluster.
The same type of router could also have been defined in code:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/stats/StatsSample.scala#router-lookup-in-code
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/stats/Extra.scala#router-lookup-in-code
See :ref:`cluster_configuration_scala` section for further descriptions of the settings.
@ -473,21 +421,17 @@to count number of characters in each word to a separate worker, a routee of a router.
The character count for each word is sent back to an aggregator that calculates
the average number of characters per word when all results have been collected.
In this example we use the following imports:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/stats/StatsSample.scala#imports
Messages:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/stats/StatsSample.scala#messages
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/stats/StatsMessages.scala#messages
The worker that counts number of characters in each word:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/stats/StatsSample.scala#worker
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/stats/StatsWorker.scala#worker
The service that receives text from users and splits it up into words, delegates to workers and aggregates:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/stats/StatsSample.scala#service
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/stats/StatsService.scala#service
Note, nothing cluster specific so far, just plain actors.
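The underlying computation, separated from the actor plumbing, is just a mean over word lengths. A minimal sketch (the whitespace handling is simplified to single spaces):

```java
public class WordStatsSketch {
    // What the workers and the aggregator jointly compute: count the
    // characters in each word, then average the counts over all words.
    static double meanWordLength(String text) {
        String[] words = text.split(" ");
        int totalChars = 0;
        for (String word : words) {
            totalChars += word.length();  // one "worker" result per word
        }
        return (double) totalChars / words.length;  // the aggregation step
    }

    public static void main(String[] args) {
        System.out.println(meanWordLength("this is the text that will be analyzed"));
    }
}
```

In the cluster sample the per-word counts are computed by routees on different nodes and collected by an aggregator; here both steps run in one loop.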
@ -495,27 +439,14 @@ Note, nothing cluster specific so far, just plain actors.
All nodes start ``StatsService`` and ``StatsWorker`` actors. Remember, routees are the workers in this case.
The router is configured with ``routees.paths``:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/resources/application.conf#config-router-lookup
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/resources/stats1.conf#config-router-lookup
This means that user requests can be sent to ``StatsService`` on any node and it will use
``StatsWorker`` on all nodes. There can only be one worker per node, but that worker could easily
fan out to local children if more parallelism is needed.
``StatsWorker`` on all nodes.
This example is included in ``akka-samples/akka-sample-cluster``
and you can try it by starting nodes in different terminal windows. For example, starting 3
service nodes and 1 client::
sbt
project akka-sample-cluster
run-main sample.cluster.stats.StatsSample 2551
run-main sample.cluster.stats.StatsSample 2552
run-main sample.cluster.stats.StatsSampleClient
run-main sample.cluster.stats.StatsSample
The `Typesafe Activator <http://typesafe.com/platform/getstarted>`_ tutorial named
`Akka Cluster Samples with Scala <http://typesafe.com/activator/template/akka-sample-cluster-scala>`_
contains the full source code and instructions on how to run the **Router Example with Group of Routees**.
Router with Pool of Remote Deployed Routees
-------------------------------------------
@ -523,7 +454,7 @@ Router with Pool of Remote Deployed Routees
When using a ``Pool`` with routees created and deployed on the cluster member nodes
the configuration for a router looks like this:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSingleMasterSpec.scala#router-deploy-config
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSingleMasterSpec.scala#router-deploy-config
It is possible to limit the deployment of routees to member nodes tagged with a certain role by
specifying ``use-role``.
@ -535,7 +466,7 @@ the cluster.
The same type of router could also have been defined in code:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/stats/StatsSample.scala#router-deploy-in-code
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/stats/Extra.scala#router-deploy-in-code
See :ref:`cluster_configuration_scala` section for further descriptions of the settings.
@ -546,35 +477,23 @@Let's take a look at how to use a cluster aware router on a single master node that creates
and deploys workers. To keep track of a single master we use the :ref:`cluster-singleton`
in the contrib module. The ``ClusterSingletonManager`` is started on each node.
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/stats/StatsSample.scala#create-singleton-manager
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/stats/StatsSampleOneMaster.scala#create-singleton-manager
We also need an actor on each node that keeps track of where the current single master
exists and delegates jobs to the ``StatsService``.
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/stats/StatsSample.scala#facade
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/stats/StatsFacade.scala#facade
The ``StatsFacade`` receives text from users and delegates to the current ``StatsService``, the single
master. It listens to cluster events to look up the ``StatsService`` on the oldest node.
All nodes start ``StatsFacade`` and the ``ClusterSingletonManager``. The router is now configured like this:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/resources/application.conf#config-router-deploy
This example is included in ``akka-samples/akka-sample-cluster``
and you can try it by starting nodes in different terminal windows. For example, starting 3
service nodes and 1 client::
run-main sample.cluster.stats.StatsSampleOneMaster 2551
run-main sample.cluster.stats.StatsSampleOneMaster 2552
run-main sample.cluster.stats.StatsSampleOneMasterClient
run-main sample.cluster.stats.StatsSampleOneMaster
.. note:: The above example will be simplified when the cluster handles automatic actor partitioning.
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/resources/stats2.conf#config-router-deploy
The `Typesafe Activator <http://typesafe.com/platform/getstarted>`_ tutorial named
`Akka Cluster Samples with Scala <http://typesafe.com/activator/template/akka-sample-cluster-scala>`_
contains the full source code and instructions on how to run the **Router Example with Pool of Remote Deployed Routees**.
Cluster Metrics
^^^^^^^^^^^^^^^
@ -609,57 +528,40 @@It can be configured to use a specific MetricsSelector to produce the probabilities.
The collected metrics values are smoothed with `exponential weighted moving average <http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average>`_. In the :ref:`cluster_configuration_scala` you can adjust how quickly past data is decayed compared to new data.
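The smoothing formula itself is simple: ``ewma_n = alpha * x_n + (1 - alpha) * ewma_(n-1)``. The sketch below uses an arbitrary smoothing constant; in the real router, alpha is derived from the configured half-life of the decay.

```java
public class EwmaSketch {
    // One step of an exponentially weighted moving average:
    // blend the new sample with the previous smoothed value.
    static double ewma(double previous, double sample, double alpha) {
        return alpha * sample + (1 - alpha) * previous;
    }

    public static void main(String[] args) {
        double smoothed = 100.0;                    // smoothed metric so far
        double[] samples = {120.0, 80.0, 110.0};    // fresh metric samples
        for (double s : samples) {
            smoothed = ewma(smoothed, s, 0.18);     // 0.18 is illustrative
        }
        System.out.printf("smoothed value: %.2f%n", smoothed);
    }
}
```

A larger alpha decays past data faster, so the smoothed value tracks new samples more closely.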
Let's take a look at this router in action.
In this example the following imports are used:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/factorial/FactorialSample.scala#imports
Let's take a look at this router in action. What can be more demanding than calculating factorials?
The backend worker that performs the factorial calculation:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/factorial/FactorialSample.scala#backend
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/factorial/FactorialBackend.scala#backend
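The calculation the backend performs can be sketched without the actor wrapping and message protocol (which are omitted here):

```java
import java.math.BigInteger;

public class FactorialSketch {
    // Compute n! with BigInteger so large n does not overflow.
    static BigInteger factorial(int n) {
        BigInteger acc = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            acc = acc.multiply(BigInteger.valueOf(i));
        }
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(factorial(20)); // a CPU-bound job worth load balancing
    }
}
```

Because the work is CPU-bound and its cost grows with ``n``, it is a good demonstration workload for the adaptive load balancing router.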
The frontend that receives user jobs and delegates to the backends via the router:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/factorial/FactorialSample.scala#frontend
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/factorial/FactorialFrontend.scala#frontend
As you can see, the router is defined in the same way as other routers, and in this case it is configured as follows:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/resources/application.conf#adaptive-router
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/resources/factorial.conf#adaptive-router
Only the router type ``adaptive`` and the ``metrics-selector`` are specific to this router; everything
else works in the same way as other routers.
The same type of router could also have been defined in code:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/factorial/FactorialSample.scala#router-lookup-in-code
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/factorial/Extra.scala#router-lookup-in-code
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/factorial/FactorialSample.scala#router-deploy-in-code
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/factorial/Extra.scala#router-deploy-in-code
This example is included in ``akka-samples/akka-sample-cluster``
and you can try it by starting nodes in different terminal windows. For example, starting 3 backend nodes and one frontend::
sbt
project akka-sample-cluster
run-main sample.cluster.factorial.FactorialBackend 2551
run-main sample.cluster.factorial.FactorialBackend 2552
run-main sample.cluster.factorial.FactorialBackend
run-main sample.cluster.factorial.FactorialFrontend
Press ctrl-c in the terminal window of the frontend to stop the factorial calculations.
The `Typesafe Activator <http://typesafe.com/platform/getstarted>`_ tutorial named
`Akka Cluster Samples with Scala <http://typesafe.com/activator/template/akka-sample-cluster-scala>`_
contains the full source code and instructions on how to run the **Adaptive Load Balancing** sample.
Subscribe to Metrics Events
---------------------------
It is possible to subscribe to the metrics events directly to implement other functionality.
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/factorial/FactorialSample.scala#metrics-listener
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/factorial/MetricsListener.scala#metrics-listener
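As a rough inline sketch of what such a subscriber can look like (assuming the Akka 2.2-era ``ClusterMetricsChanged`` event and the ``StandardMetrics.HeapMemory`` extractor; the included ``MetricsListener.scala`` above is the authoritative example):

```scala
import akka.actor.{ Actor, ActorLogging }
import akka.cluster.Cluster
import akka.cluster.ClusterEvent.ClusterMetricsChanged
import akka.cluster.StandardMetrics.HeapMemory

class SimpleMetricsListener extends Actor with ActorLogging {
  val cluster = Cluster(context.system)

  // subscribe on start, unsubscribe on stop (also covers restarts)
  override def preStart(): Unit =
    cluster.subscribe(self, classOf[ClusterMetricsChanged])
  override def postStop(): Unit =
    cluster.unsubscribe(self)

  def receive = {
    case ClusterMetricsChanged(nodeMetrics) =>
      nodeMetrics.filter(_.address == cluster.selfAddress) foreach {
        case HeapMemory(address, timestamp, used, committed, max) =>
          log.info("Used heap: {} MB", used.doubleValue / 1024 / 1024)
        case _ => // this sample carries no heap metrics
      }
  }
}
```

This requires ``akka-cluster`` on the classpath, so it is a sketch rather than a standalone runnable example.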
Custom Metrics Collector
------------------------
@ -679,14 +581,14 @@ add the ``sbt-multi-jvm`` plugin and the dependency to ``akka-multi-node-testkit
First, as described in :ref:`multi-node-testing`, we need some scaffolding to configure the ``MultiNodeSpec``.
Define the participating roles and their :ref:`cluster_configuration_scala` in an object extending ``MultiNodeConfig``:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala
:include: MultiNodeConfig
:exclude: router-lookup-config
Define one concrete test class for each role/node. These will be instantiated on the different nodes (JVMs). They can be
implemented differently, but often they are the same and extend an abstract test class, as illustrated here.
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala#concrete-tests
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala#concrete-tests
Note the naming convention of these classes. The name of the classes must end with ``MultiJvmNode1``, ``MultiJvmNode2``
and so on. It is possible to define another suffix to be used by the ``sbt-multi-jvm``, but the default should be
@ -694,18 +596,18 @@ fine in most cases.
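Concretely, the naming convention amounts to one empty subclass per node, along the lines of this sketch (class names are illustrative):

```scala
// one concrete class per JVM/node; the shared behavior lives in StatsSampleSpec
class StatsSampleSpecMultiJvmNode1 extends StatsSampleSpec
class StatsSampleSpecMultiJvmNode2 extends StatsSampleSpec
class StatsSampleSpecMultiJvmNode3 extends StatsSampleSpec
```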
Then the abstract ``MultiNodeSpec``, which takes the ``MultiNodeConfig`` as constructor parameter.
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala#abstract-test
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala#abstract-test
Most of this can of course be extracted to a separate trait to avoid repeating this in all your tests.
Typically you begin your test by starting up the cluster, letting the members join, and creating some actors.
That can be done like this:
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala#startup-cluster
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala#startup-cluster
From the test you interact with the cluster using the ``Cluster`` extension, e.g. ``join``.
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala#join
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala#join
Notice how the ``testActor`` from :ref:`testkit <akka-testkit>` is added as a :ref:`subscriber <cluster_subscriber_scala>`
to cluster changes and is then used to wait for certain events, such as, in this case, all members becoming 'Up'.
@ -713,7 +615,7 @@ to cluster changes and then waiting for certain events, such as in this case all
The above code was running for all roles (JVMs). ``runOn`` is a convenient utility to declare that a certain block
of code should only run for a specific role.
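A minimal sketch of the shape this takes (the role and actor names are illustrative):

```scala
// Only the node with the "third" role creates the service;
// all nodes then meet at the barrier before the test continues.
runOn(third) {
  system.actorOf(Props[StatsService], "statsService")
}
enterBarrier("service-started")
```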
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala#test-statsService
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala#test-statsService
Once again we take advantage of the facilities in :ref:`testkit <akka-testkit>` to verify expected behavior.
Here using ``testActor`` as sender (via ``ImplicitSender``) and verifying the reply with ``expectMsgPF``.
@ -721,7 +623,7 @@ Here using ``testActor`` as sender (via ``ImplicitSender``) and verifing the rep
In the above code you can see ``node(third)``, which is a useful facility for getting the root actor reference of
the actor system for a specific role. This can also be used to grab the ``akka.actor.Address`` of that node.
.. includecode:: ../../../akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala#addresses
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala#addresses
.. _cluster_jmx_scala:

View file

@ -26,8 +26,8 @@ import scala.concurrent.Await
class MyActor extends Actor {
val log = Logging(context.system, this)
def receive = {
case "test" ⇒ log.info("received test")
case _ ⇒ log.info("received unknown message")
case "test" => log.info("received test")
case _ => log.info("received unknown message")
}
}
//#my-actor
@ -40,14 +40,14 @@ class FirstActor extends Actor {
val child = context.actorOf(Props[MyActor], name = "myChild")
//#plus-some-behavior
def receive = {
case x ⇒ sender ! x
case x => sender ! x
}
//#plus-some-behavior
}
//#context-actorOf
class ActorWithArgs(arg: String) extends Actor {
def receive = { case _ ⇒ () }
def receive = { case _ => () }
}
class DemoActorWrapper extends Actor {
@ -64,7 +64,7 @@ class DemoActorWrapper extends Actor {
class DemoActor(magicNumber: Int) extends Actor {
def receive = {
case x: Int ⇒ sender ! (x + magicNumber)
case x: Int => sender ! (x + magicNumber)
}
}
@ -79,10 +79,10 @@ class DemoActorWrapper extends Actor {
class AnonymousActor extends Actor {
//#anonymous-actor
def receive = {
case m: DoIt ⇒
case m: DoIt =>
context.actorOf(Props(new Actor {
def receive = {
case DoIt(msg) ⇒
case DoIt(msg) =>
val replyMsg = doSomeDangerousWork(msg)
sender ! replyMsg
context.stop(self)
@ -112,13 +112,13 @@ class Hook extends Actor {
class ReplyException extends Actor {
def receive = {
case _ ⇒
case _ =>
//#reply-exception
try {
val result = operation()
sender ! result
} catch {
case e: Exception ⇒
case e: Exception =>
sender ! akka.actor.Status.Failure(e)
throw e
}
@ -136,10 +136,10 @@ class Swapper extends Actor {
val log = Logging(system, this)
def receive = {
case Swap ⇒
case Swap =>
log.info("Hi")
become({
case Swap ⇒
case Swap =>
log.info("Ho")
unbecome() // resets the latest 'become' (just for fun)
}, discardOld = false) // push on top instead of replace
@ -166,7 +166,7 @@ abstract class GenericActor extends Actor {
// generic message handler
def genericMessageHandler: Receive = {
case event ⇒ printf("generic: %s\n", event)
case event => printf("generic: %s\n", event)
}
def receive = specificMessageHandler orElse genericMessageHandler
@ -174,7 +174,7 @@ abstract class GenericActor extends Actor {
class SpecificActor extends GenericActor {
def specificMessageHandler = {
case event: MyMsg ⇒ printf("specific: %s\n", event.subject)
case event: MyMsg => printf("specific: %s\n", event.subject)
}
}
@ -190,7 +190,7 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
import context._
val myActor = actorOf(Props[MyActor], name = "myactor")
def receive = {
case x ⇒ myActor ! x
case x => myActor ! x
}
}
//#import-context
@ -207,17 +207,17 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
// TODO: convert docs to AkkaSpec(Map(...))
val filter = EventFilter.custom {
case e: Logging.Info ⇒ true
case _ ⇒ false
case e: Logging.Info => true
case _ => false
}
system.eventStream.publish(TestEvent.Mute(filter))
system.eventStream.subscribe(testActor, classOf[Logging.Info])
myActor ! "test"
expectMsgPF(1 second) { case Logging.Info(_, _, "received test") ⇒ true }
expectMsgPF(1 second) { case Logging.Info(_, _, "received test") => true }
myActor ! "unknown"
expectMsgPF(1 second) { case Logging.Info(_, _, "received unknown message") ⇒ true }
expectMsgPF(1 second) { case Logging.Info(_, _, "received unknown message") => true }
system.eventStream.unsubscribe(testActor)
system.eventStream.publish(TestEvent.UnMute(filter))
@ -245,7 +245,7 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
//#creating-props-deprecated
// DEPRECATED: old case class signature
val props4 = Props(
creator = { () ⇒ new MyActor },
creator = { () => new MyActor },
dispatcher = "my-dispatcher")
// DEPRECATED due to duplicate functionality with Props.apply()
@ -273,8 +273,8 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
"creating actor with IndirectActorProducer" in {
class Echo(name: String) extends Actor {
def receive = {
case n: Int ⇒ sender ! name
case message ⇒
case n: Int => sender ! name
case message =>
val target = testActor
//#forward
target forward message
@ -348,10 +348,10 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
// To set an initial delay
context.setReceiveTimeout(30 milliseconds)
def receive = {
case "Hello" ⇒
case "Hello" =>
// To set in a response to a message
context.setReceiveTimeout(100 milliseconds)
case ReceiveTimeout ⇒
case ReceiveTimeout =>
// To turn it off
context.setReceiveTimeout(Duration.Undefined)
throw new RuntimeException("Receive timed out")
@ -364,18 +364,18 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
class HotSwapActor extends Actor {
import context._
def angry: Receive = {
case "foo" ⇒ sender ! "I am already angry?"
case "bar" ⇒ become(happy)
case "foo" => sender ! "I am already angry?"
case "bar" => become(happy)
}
def happy: Receive = {
case "bar" ⇒ sender ! "I am already happy :-)"
case "foo" ⇒ become(angry)
case "bar" => sender ! "I am already happy :-)"
case "foo" => become(angry)
}
def receive = {
case "foo" ⇒ become(angry)
case "bar" ⇒ become(happy)
case "foo" => become(angry)
case "bar" => become(happy)
}
}
//#hot-swap-actor
@ -389,16 +389,16 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
import akka.actor.Stash
class ActorWithProtocol extends Actor with Stash {
def receive = {
case "open" ⇒
case "open" =>
unstashAll()
context.become({
case "write" ⇒ // do writing...
case "close" ⇒
case "write" => // do writing...
case "close" =>
unstashAll()
context.unbecome()
case msg ⇒ stash()
case msg => stash()
}, discardOld = false) // stack on top instead of replacing
case msg ⇒ stash()
case msg => stash()
}
}
//#stash
@ -415,9 +415,9 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
var lastSender = system.deadLetters
def receive = {
case "kill" ⇒
case "kill" =>
context.stop(child); lastSender = sender
case Terminated(`child`) ⇒ lastSender ! "finished"
case Terminated(`child`) => lastSender ! "finished"
}
}
//#watch
@ -457,15 +457,15 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
context.actorSelection("/user/another") ! Identify(identifyId)
def receive = {
case ActorIdentity(`identifyId`, Some(ref)) ⇒
case ActorIdentity(`identifyId`, Some(ref)) =>
context.watch(ref)
context.become(active(ref))
case ActorIdentity(`identifyId`, None) ⇒ context.stop(self)
case ActorIdentity(`identifyId`, None) => context.stop(self)
}
def active(another: ActorRef): Actor.Receive = {
case Terminated(`another`) ⇒ context.stop(self)
case Terminated(`another`) => context.stop(self)
}
}
//#identify
@ -490,7 +490,7 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
// the actor has been stopped
} catch {
// the actor wasn't stopped within 5 seconds
case e: akka.pattern.AskTimeoutException ⇒
case e: akka.pattern.AskTimeoutException =>
}
//#gracefulStop
}
@ -507,9 +507,9 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
val f: Future[Result] =
for {
x ← ask(actorA, Request).mapTo[Int] // call pattern directly
s ← (actorB ask Request).mapTo[String] // call by implicit conversion
d ← (actorC ? Request).mapTo[Double] // call by symbolic name
x <- ask(actorA, Request).mapTo[Int] // call pattern directly
s <- (actorB ask Request).mapTo[String] // call by implicit conversion
d <- (actorC ? Request).mapTo[Double] // call by symbolic name
} yield Result(x, s, d)
f pipeTo actorD // .. or ..
@ -519,12 +519,12 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
class Replier extends Actor {
def receive = {
case ref: ActorRef ⇒
case ref: ActorRef =>
//#reply-with-sender
sender.tell("reply", context.parent) // replies will go back to parent
sender.!("reply")(context.parent) // alternative syntax (beware of the parens!)
//#reply-with-sender
case x ⇒
case x =>
//#reply-without-sender
sender ! x // replies will go to this actor
//#reply-without-sender
@ -547,8 +547,8 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
"using ActorDSL outside of akka.actor package" in {
import akka.actor.ActorDSL._
actor(new Act {
superviseWith(OneForOneStrategy() { case _ ⇒ Stop; Restart; Resume; Escalate })
superviseWith(AllForOneStrategy() { case _ ⇒ Stop; Restart; Resume; Escalate })
superviseWith(OneForOneStrategy() { case _ => Stop; Restart; Resume; Escalate })
superviseWith(AllForOneStrategy() { case _ => Stop; Restart; Resume; Escalate })
})
}
@ -561,9 +561,9 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
private var pfsOption: Option[Vector[PF]] = Some(Vector.empty)
private def mapPfs[C](f: Vector[PF] ⇒ (Option[Vector[PF]], C)): C = {
private def mapPfs[C](f: Vector[PF] => (Option[Vector[PF]], C)): C = {
pfsOption.fold(throw new IllegalStateException("Already built"))(f) match {
case (newPfsOption, result) ⇒ {
case (newPfsOption, result) => {
pfsOption = newPfsOption
result
}
@ -571,10 +571,10 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
}
def +=(pf: PF): Unit =
mapPfs { case pfs ⇒ (Some(pfs :+ pf), ()) }
mapPfs { case pfs => (Some(pfs :+ pf), ()) }
def result(): PF =
mapPfs { case pfs ⇒ (None, pfs.foldLeft[PF](Map.empty) { _ orElse _ }) }
mapPfs { case pfs => (None, pfs.foldLeft[PF](Map.empty) { _ orElse _ }) }
}
trait ComposableActor extends Actor {
@ -584,13 +584,13 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
trait TheirComposableActor extends ComposableActor {
receiveBuilder += {
case "foo" ⇒ sender ! "foo received"
case "foo" => sender ! "foo received"
}
}
class MyComposableActor extends TheirComposableActor {
receiveBuilder += {
case "bar" ⇒ sender ! "bar received"
case "bar" => sender ! "bar received"
}
}
//#receive-orElse2

View file

@ -5,7 +5,7 @@ package docs.actor
import language.postfixOps
import akka.testkit.{ AkkaSpec ⇒ MyFavoriteTestFrameWorkPlusAkkaTestKit }
import akka.testkit.{ AkkaSpec => MyFavoriteTestFrameWorkPlusAkkaTestKit }
import akka.util.ByteString
//#test-code
@ -46,23 +46,23 @@ class FSMDocSpec extends MyFavoriteTestFrameWorkPlusAkkaTestKit {
//#when-syntax
when(Idle) {
case Event(SetTarget(ref), Uninitialized) ⇒
case Event(SetTarget(ref), Uninitialized) =>
stay using Todo(ref, Vector.empty)
}
//#when-syntax
//#transition-elided
onTransition {
case Active -> Idle ⇒
case Active -> Idle =>
stateData match {
case Todo(ref, queue) ⇒ ref ! Batch(queue)
case Todo(ref, queue) => ref ! Batch(queue)
}
}
//#transition-elided
//#when-syntax
when(Active, stateTimeout = 1 second) {
case Event(Flush | StateTimeout, t: Todo) ⇒
case Event(Flush | StateTimeout, t: Todo) =>
goto(Idle) using t.copy(queue = Vector.empty)
}
//#when-syntax
@ -70,10 +70,10 @@ class FSMDocSpec extends MyFavoriteTestFrameWorkPlusAkkaTestKit {
//#unhandled-elided
whenUnhandled {
// common code for both states
case Event(Queue(obj), t @ Todo(_, v)) ⇒
case Event(Queue(obj), t @ Todo(_, v)) =>
goto(Active) using t.copy(queue = v :+ obj)
case Event(e, s) ⇒
case Event(e, s) =>
log.warning("received unhandled request {} in state {}/{}", e, stateName, s)
stay
}
@ -99,16 +99,16 @@ class FSMDocSpec extends MyFavoriteTestFrameWorkPlusAkkaTestKit {
//#modifier-syntax
when(SomeState) {
case Event(msg, _) ⇒
case Event(msg, _) =>
goto(Processing) using (newData) forMax (5 seconds) replying (WillDo)
}
//#modifier-syntax
//#transition-syntax
onTransition {
case Idle -> Active ⇒ setTimer("timeout", Tick, 1 second, true)
case Active -> _ ⇒ cancelTimer("timeout")
case x -> Idle ⇒ log.info("entering Idle from " + x)
case Idle -> Active => setTimer("timeout", Tick, 1 second, true)
case Active -> _ => cancelTimer("timeout")
case x -> Idle => log.info("entering Idle from " + x)
}
//#transition-syntax
@ -122,7 +122,7 @@ class FSMDocSpec extends MyFavoriteTestFrameWorkPlusAkkaTestKit {
//#stop-syntax
when(Error) {
case Event("stop", _) ⇒
case Event("stop", _) =>
// do cleanup ...
stop()
}
@ -130,38 +130,38 @@ class FSMDocSpec extends MyFavoriteTestFrameWorkPlusAkkaTestKit {
//#transform-syntax
when(SomeState)(transform {
case Event(bytes: ByteString, read) ⇒ stay using (read + bytes.length)
case Event(bytes: ByteString, read) => stay using (read + bytes.length)
} using {
case s @ FSM.State(state, read, timeout, stopReason, replies) if read > 1000 ⇒
case s @ FSM.State(state, read, timeout, stopReason, replies) if read > 1000 =>
goto(Processing)
})
//#transform-syntax
//#alt-transform-syntax
val processingTrigger: PartialFunction[State, State] = {
case s @ FSM.State(state, read, timeout, stopReason, replies) if read > 1000 ⇒
case s @ FSM.State(state, read, timeout, stopReason, replies) if read > 1000 =>
goto(Processing)
}
when(SomeState)(transform {
case Event(bytes: ByteString, read) ⇒ stay using (read + bytes.length)
case Event(bytes: ByteString, read) => stay using (read + bytes.length)
} using processingTrigger)
//#alt-transform-syntax
//#termination-syntax
onTermination {
case StopEvent(FSM.Normal, state, data) ⇒ // ...
case StopEvent(FSM.Shutdown, state, data) ⇒ // ...
case StopEvent(FSM.Failure(cause), state, data) ⇒ // ...
case StopEvent(FSM.Normal, state, data) => // ...
case StopEvent(FSM.Shutdown, state, data) => // ...
case StopEvent(FSM.Failure(cause), state, data) => // ...
}
//#termination-syntax
//#unhandled-syntax
whenUnhandled {
case Event(x: X, data) ⇒
case Event(x: X, data) =>
log.info("Received unhandled event: " + x)
stay
case Event(msg, _) ⇒
case Event(msg, _) =>
log.warning("Received unknown event: " + msg)
goto(Error)
}
@ -175,7 +175,7 @@ class FSMDocSpec extends MyFavoriteTestFrameWorkPlusAkkaTestKit {
//#body-elided
override def logDepth = 12
onTermination {
case StopEvent(FSM.Failure(_), state, data) ⇒
case StopEvent(FSM.Failure(_), state, data) =>
val lastEvents = getLog.mkString("\n\t")
log.warning("Failure in state " + state + " with data " + data + "\n" +
"Events leading up to this point:\n\t" + lastEvents)

View file

@ -49,14 +49,14 @@ class Listener extends Actor with ActorLogging {
context.setReceiveTimeout(15 seconds)
def receive = {
case Progress(percent) ⇒
case Progress(percent) =>
log.info("Current progress: {} %", percent)
if (percent >= 100.0) {
log.info("That's all, shutting down")
context.system.shutdown()
}
case ReceiveTimeout ⇒
case ReceiveTimeout =>
// No progress within 15 seconds, ServiceUnavailable
log.error("Shutting down due to unavailable service")
context.system.shutdown()
@ -83,7 +83,7 @@ class Worker extends Actor with ActorLogging {
// Stop the CounterService child if it throws ServiceUnavailable
override val supervisorStrategy = OneForOneStrategy() {
case _: CounterService.ServiceUnavailable ⇒ Stop
case _: CounterService.ServiceUnavailable => Stop
}
// The sender of the initial Start message will continuously be notified
@ -94,18 +94,18 @@ class Worker extends Actor with ActorLogging {
import context.dispatcher // Use this Actors' Dispatcher as ExecutionContext
def receive = LoggingReceive {
case Start if progressListener.isEmpty ⇒
case Start if progressListener.isEmpty =>
progressListener = Some(sender)
context.system.scheduler.schedule(Duration.Zero, 1 second, self, Do)
case Do ⇒
case Do =>
counterService ! Increment(1)
counterService ! Increment(1)
counterService ! Increment(1)
// Send current progress to the initial sender
counterService ? GetCurrentCount map {
case CurrentCount(_, count) ⇒ Progress(100.0 * count / totalCount)
case CurrentCount(_, count) => Progress(100.0 * count / totalCount)
} pipeTo progressListener.get
}
}
@ -135,7 +135,7 @@ class CounterService extends Actor {
// After 3 restarts within 5 seconds it will be stopped.
override val supervisorStrategy = OneForOneStrategy(maxNrOfRetries = 3,
withinTimeRange = 5 seconds) {
case _: Storage.StorageException ⇒ Restart
case _: Storage.StorageException => Restart
}
val key = self.path.name
@ -166,21 +166,21 @@ class CounterService extends Actor {
def receive = LoggingReceive {
case Entry(k, v) if k == key && counter == None ⇒
case Entry(k, v) if k == key && counter == None =>
// Reply from Storage of the initial value, now we can create the Counter
val c = context.actorOf(Props(classOf[Counter], key, v))
counter = Some(c)
// Tell the counter to use current storage
c ! UseStorage(storage)
// and send the buffered backlog to the counter
for ((replyTo, msg) ← backlog) c.tell(msg, sender = replyTo)
for ((replyTo, msg) <- backlog) c.tell(msg, sender = replyTo)
backlog = IndexedSeq.empty
case msg @ Increment(n) ⇒ forwardOrPlaceInBacklog(msg)
case msg @ Increment(n) => forwardOrPlaceInBacklog(msg)
case msg @ GetCurrentCount ⇒ forwardOrPlaceInBacklog(msg)
case msg @ GetCurrentCount => forwardOrPlaceInBacklog(msg)
case Terminated(actorRef) if Some(actorRef) == storage ⇒
case Terminated(actorRef) if Some(actorRef) == storage =>
// After 3 restarts the storage child is stopped.
// We receive Terminated because we watch the child, see initStorage.
storage = None
@ -189,7 +189,7 @@ class CounterService extends Actor {
// Try to re-establish storage after while
context.system.scheduler.scheduleOnce(10 seconds, self, Reconnect)
case Reconnect ⇒
case Reconnect =>
// Re-establish storage after the scheduled delay
initStorage()
}
@ -199,8 +199,8 @@ class CounterService extends Actor {
// the counter. Before that we place the messages in a backlog, to be sent
// to the counter when it is initialized.
counter match {
case Some(c) ⇒ c forward msg
case None ⇒
case Some(c) => c forward msg
case None =>
if (backlog.size >= MaxBacklog)
throw new ServiceUnavailable(
"CounterService not available, lack of initial value")
@ -230,15 +230,15 @@ class Counter(key: String, initialValue: Long) extends Actor {
var storage: Option[ActorRef] = None
def receive = LoggingReceive {
case UseStorage(s) ⇒
case UseStorage(s) =>
storage = s
storeCount()
case Increment(n) ⇒
case Increment(n) =>
count += n
storeCount()
case GetCurrentCount ⇒
case GetCurrentCount =>
sender ! CurrentCount(key, count)
}
@ -271,8 +271,8 @@ class Storage extends Actor {
val db = DummyDB
def receive = LoggingReceive {
case Store(Entry(key, count)) ⇒ db.save(key, count)
case Get(key) ⇒ sender ! Entry(key, db.load(key).getOrElse(0L))
case Store(Entry(key, count)) => db.save(key, count)
case Get(key) => sender ! Entry(key, db.load(key).getOrElse(0L))
}
}

View file

@ -26,15 +26,15 @@ object FaultHandlingDocSpec {
override val supervisorStrategy =
OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1 minute) {
case _: ArithmeticException ⇒ Resume
case _: NullPointerException ⇒ Restart
case _: IllegalArgumentException ⇒ Stop
case _: Exception ⇒ Escalate
case _: ArithmeticException => Resume
case _: NullPointerException => Restart
case _: IllegalArgumentException => Stop
case _: Exception => Escalate
}
//#strategy
def receive = {
case p: Props ⇒ sender ! context.actorOf(p)
case p: Props => sender ! context.actorOf(p)
}
}
//#supervisor
@ -48,15 +48,15 @@ object FaultHandlingDocSpec {
override val supervisorStrategy =
OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1 minute) {
case _: ArithmeticException ⇒ Resume
case _: NullPointerException ⇒ Restart
case _: IllegalArgumentException ⇒ Stop
case _: Exception ⇒ Escalate
case _: ArithmeticException => Resume
case _: NullPointerException => Restart
case _: IllegalArgumentException => Stop
case _: Exception => Escalate
}
//#strategy2
def receive = {
case p: Props ⇒ sender ! context.actorOf(p)
case p: Props => sender ! context.actorOf(p)
}
// override default to kill all children during restart
override def preRestart(cause: Throwable, msg: Option[Any]) {}
@ -71,9 +71,9 @@ object FaultHandlingDocSpec {
override val supervisorStrategy =
OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1 minute) {
case _: ArithmeticException ⇒ Resume
case t ⇒
super.supervisorStrategy.decider.applyOrElse(t, (_: Any) ⇒ Escalate)
case _: ArithmeticException => Resume
case t =>
super.supervisorStrategy.decider.applyOrElse(t, (_: Any) => Escalate)
}
//#default-strategy-fallback
@ -85,9 +85,9 @@ object FaultHandlingDocSpec {
class Child extends Actor {
var state = 0
def receive = {
case ex: Exception ⇒ throw ex
case x: Int ⇒ state = x
case "get" ⇒ sender ! state
case ex: Exception => throw ex
case x: Int => state = x
case "get" => sender ! state
}
}
//#child
@ -133,7 +133,7 @@ class FaultHandlingDocSpec extends AkkaSpec with ImplicitSender {
//#stop
watch(child) // have testActor watch child
child ! new IllegalArgumentException // break it
expectMsgPF() { case Terminated(`child`) ⇒ () }
expectMsgPF() { case Terminated(`child`) => () }
//#stop
}
EventFilter[Exception]("CRASH", occurrences = 2) intercept {
@ -147,7 +147,7 @@ class FaultHandlingDocSpec extends AkkaSpec with ImplicitSender {
child2 ! new Exception("CRASH") // escalate failure
expectMsgPF() {
case t @ Terminated(`child2`) if t.existenceConfirmed ⇒ ()
case t @ Terminated(`child2`) if t.existenceConfirmed => ()
}
//#escalate-kill
//#escalate-restart

View file

@ -10,7 +10,7 @@ object InitializationDocSpec {
class PreStartInitExample extends Actor {
override def receive = {
case _ ⇒ // Ignore
case _ => // Ignore
}
//#preStartInit
@ -37,14 +37,14 @@ object InitializationDocSpec {
var initializeMe: Option[String] = None
override def receive = {
case "init" ⇒
case "init" =>
initializeMe = Some("Up and running")
context.become(initialized, discardOld = true)
}
def initialized: Receive = {
case "U OK?" ⇒ initializeMe foreach { sender ! _ }
case "U OK?" => initializeMe foreach { sender ! _ }
}
//#messageInit

View file

@ -1,50 +0,0 @@
/**
* Copyright (C) 2009-2013 Typesafe Inc. <http://www.typesafe.com>
*/
package docs.actor
import akka.testkit.AkkaSpec
//#hello-world
import akka.actor.Actor
import akka.actor.Props
class HelloWorld extends Actor {
override def preStart(): Unit = {
// create the greeter actor
val greeter = context.actorOf(Props[Greeter], "greeter")
// tell it to perform the greeting
greeter ! Greeter.Greet
}
def receive = {
// when the greeter is done, stop this actor and with it the application
case Greeter.Done ⇒ context.stop(self)
}
}
//#hello-world
//#greeter
object Greeter {
case object Greet
case object Done
}
class Greeter extends Actor {
def receive = {
case Greeter.Greet ⇒
println("Hello World!")
sender ! Greeter.Done
}
}
//#greeter
class IntroDocSpec extends AkkaSpec {
"demonstrate HelloWorld" in {
expectTerminated(watch(system.actorOf(Props[HelloWorld])))
}
}

View file

@ -43,7 +43,7 @@ class SchedulerDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
val Tick = "tick"
class TickActor extends Actor {
def receive = {
case Tick ⇒ //Do something
case Tick => //Do something
}
}
val tickActor = system.actorOf(Props(classOf[TickActor], this))

View file

@ -12,7 +12,7 @@ import org.scalatest.matchers.MustMatchers
import akka.testkit._
//Mr funny man avoids printing to stdout AND keeping docs alright
import java.lang.String.{ valueOf ⇒ println }
import java.lang.String.{ valueOf => println }
import akka.actor.ActorRef
//#typed-actor-iface
@ -91,7 +91,7 @@ class TypedActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
//#typed-actor-extension-tools
} catch {
case e: Exception ⇒ //dun care
case e: Exception => //dun care
}
}
@ -160,7 +160,7 @@ class TypedActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) {
//Use "childSquarer" as a Squarer
//#typed-actor-hierarchy
} catch {
case e: Exception ⇒ //ignore
case e: Exception => //ignore
}
}

View file

@ -34,16 +34,16 @@ class UnnestedReceives extends Actor {
}
def receive = {
case 'Replay ⇒ //Our first message should be a 'Replay message, all others are invalid
case 'Replay => //Our first message should be a 'Replay message, all others are invalid
allOldMessages() foreach process //Process all old messages/events
become { //Switch behavior to look for the GoAhead signal
case 'GoAhead ⇒ //When we get the GoAhead signal we process all our buffered messages/events
case 'GoAhead => //When we get the GoAhead signal we process all our buffered messages/events
queue foreach process
queue.clear
become { //Then we change behaviour to process incoming messages/events as they arrive
case msg ⇒ process(msg)
case msg => process(msg)
}
case msg ⇒ //While we haven't gotten the GoAhead signal, buffer all incoming messages
case msg => //While we haven't gotten the GoAhead signal, buffer all incoming messages
queue += msg //Here you have full control, you can handle overflow etc
}
}

View file

@ -17,7 +17,7 @@ import akka.actor.{ Actor, ExtendedActorSystem }
class MyActor extends Actor {
def receive = {
case x ⇒
case x =>
}
}
@ -61,8 +61,8 @@ class MyMailboxType(systemSettings: ActorSystem.Settings, config: Config)
override def create(owner: Option[ActorRef],
system: Option[ActorSystem]): MessageQueue =
(owner zip system) headOption match {
case Some((o, s: ExtendedActorSystem)) ⇒ new MyMessageQueue(o, s)
case _ ⇒
case Some((o, s: ExtendedActorSystem)) => new MyMessageQueue(o, s)
case _ =>
throw new IllegalArgumentException("requires an owner " +
"(i.e. does not work with BalancingDispatcher)")
}

View file

@ -58,7 +58,7 @@ class AgentDocSpec extends AkkaSpec {
agent send (_ * 2)
//#send
def longRunningOrBlockingFunction = (i: Int) ⇒ i * 1 // Just for the example code
def longRunningOrBlockingFunction = (i: Int) => i * 1 // Just for the example code
def someExecutionContext() = scala.concurrent.ExecutionContext.Implicits.global // Just for the example code
//#send-off
// the ExecutionContext you want to run the function on
@ -81,7 +81,7 @@ class AgentDocSpec extends AkkaSpec {
val f3: Future[Int] = agent alter (_ * 2)
//#alter
def longRunningOrBlockingFunction = (i: Int) ⇒ i * 1 // Just for the example code
def longRunningOrBlockingFunction = (i: Int) => i * 1 // Just for the example code
def someExecutionContext() = ExecutionContext.global // Just for the example code
//#alter-off
@ -102,7 +102,7 @@ class AgentDocSpec extends AkkaSpec {
import scala.concurrent.stm._
def transfer(from: Agent[Int], to: Agent[Int], amount: Int): Boolean = {
atomic { txn ⇒
atomic { txn =>
if (from.get < amount) false
else {
from send (_ - amount)
@ -133,19 +133,19 @@ class AgentDocSpec extends AkkaSpec {
val agent2 = Agent(5)
// uses foreach
for (value ← agent1)
for (value <- agent1)
println(value)
// uses map
val agent3 = for (value ← agent1) yield value + 1
val agent3 = for (value <- agent1) yield value + 1
// or using map directly
val agent4 = agent1 map (_ + 1)
// uses flatMap
val agent5 = for {
value1 ← agent1
value2 ← agent2
value1 <- agent1
value2 <- agent2
} yield value1 + value2
//#monadic-example

View file

@ -15,7 +15,7 @@ object Consumers {
def endpointUri = "file:data/input/actor"
def receive = {
case msg: CamelMessage ⇒ println("received %s" format msg.bodyAs[String])
case msg: CamelMessage => println("received %s" format msg.bodyAs[String])
}
}
//#Consumer1
@ -28,7 +28,7 @@ object Consumers {
def endpointUri = "jetty:http://localhost:8877/camel/default"
def receive = {
case msg: CamelMessage ⇒ sender ! ("Hello %s" format msg.bodyAs[String])
case msg: CamelMessage => sender ! ("Hello %s" format msg.bodyAs[String])
}
}
//#Consumer2
@ -45,7 +45,7 @@ object Consumers {
def endpointUri = "jms:queue:test"
def receive = {
case msg: CamelMessage ⇒
case msg: CamelMessage =>
sender ! Ack
// on success
// ..
@ -65,7 +65,7 @@ object Consumers {
def endpointUri = "jetty:http://localhost:8877/camel/default"
override def replyTimeout = 500 millis
def receive = {
case msg: CamelMessage ⇒ sender ! ("Hello %s" format msg.bodyAs[String])
case msg: CamelMessage => sender ! ("Hello %s" format msg.bodyAs[String])
}
}
//#Consumer4

View file

@ -18,9 +18,9 @@ object CustomRoute {
import akka.camel._
class Responder extends Actor {
def receive = {
case msg: CamelMessage
case msg: CamelMessage =>
sender ! (msg.mapBody {
body: String "received %s" format body
body: String => "received %s" format body
})
}
}
@ -47,9 +47,9 @@ object CustomRoute {
class ErrorThrowingConsumer(override val endpointUri: String) extends Consumer {
def receive = {
case msg: CamelMessage throw new Exception("error: %s" format msg.body)
case msg: CamelMessage => throw new Exception("error: %s" format msg.body)
}
override def onRouteDefinition = (rd) rd.onException(classOf[Exception]).
override def onRouteDefinition = (rd) => rd.onException(classOf[Exception]).
handled(true).transform(Builder.exceptionMessage).end
final override def preRestart(reason: Throwable, message: Option[Any]) {

View file

@ -1,53 +0,0 @@
package docs.camel
object CustomRouteExample {
{
//#CustomRouteExample
import akka.actor.{ Actor, ActorRef, Props, ActorSystem }
import akka.camel.{ CamelMessage, Consumer, Producer, CamelExtension }
import org.apache.camel.builder.RouteBuilder
import org.apache.camel.{ Exchange, Processor }
class Consumer3(transformer: ActorRef) extends Actor with Consumer {
def endpointUri = "jetty:http://0.0.0.0:8877/camel/welcome"
def receive = {
// Forward a string representation of the message body to transformer
case msg: CamelMessage transformer.forward(msg.bodyAs[String])
}
}
class Transformer(producer: ActorRef) extends Actor {
def receive = {
// example: transform message body "foo" to "- foo -" and forward result
// to producer
case msg: CamelMessage
producer.forward(msg.mapBody((body: String) "- %s -" format body))
}
}
class Producer1 extends Actor with Producer {
def endpointUri = "direct:welcome"
}
class CustomRouteBuilder extends RouteBuilder {
def configure {
from("direct:welcome").process(new Processor() {
def process(exchange: Exchange) {
// Create a 'welcome' message from the input message
exchange.getOut.setBody("Welcome %s" format exchange.getIn.getBody)
}
})
}
}
// the below lines can be added to a Boot class, so that you can run the
// example from a MicroKernel
val system = ActorSystem("some-system")
val producer = system.actorOf(Props[Producer1])
val mediator = system.actorOf(Props(classOf[Transformer], producer))
val consumer = system.actorOf(Props(classOf[Consumer3], mediator))
CamelExtension(system).context.addRoutes(new CustomRouteBuilder)
//#CustomRouteExample
}
}

View file

@ -1,52 +0,0 @@
package docs.camel
object HttpExample {
{
//#HttpExample
import org.apache.camel.Exchange
import akka.actor.{ Actor, ActorRef, Props, ActorSystem }
import akka.camel.{ Producer, CamelMessage, Consumer }
import akka.actor.Status.Failure
class HttpConsumer(producer: ActorRef) extends Consumer {
def endpointUri = "jetty:http://0.0.0.0:8875/"
def receive = {
case msg producer forward msg
}
}
class HttpProducer(transformer: ActorRef) extends Actor with Producer {
def endpointUri = "jetty://http://akka.io/?bridgeEndpoint=true"
override def transformOutgoingMessage(msg: Any) = msg match {
case msg: CamelMessage msg.copy(headers = msg.headers ++
msg.headers(Set(Exchange.HTTP_PATH)))
}
override def routeResponse(msg: Any) { transformer forward msg }
}
class HttpTransformer extends Actor {
def receive = {
case msg: CamelMessage
sender ! (msg.mapBody { body: Array[Byte]
new String(body).replaceAll("Akka ", "AKKA ")
})
case msg: Failure sender ! msg
}
}
// Create the actors. this can be done in a Boot class so you can
// run the example in the MicroKernel. Just add the three lines below
// to your boot class.
val system = ActorSystem("some-system")
val httpTransformer = system.actorOf(Props[HttpTransformer])
val httpProducer = system.actorOf(Props(classOf[HttpProducer], httpTransformer))
val httpConsumer = system.actorOf(Props(classOf[HttpConsumer], httpProducer))
//#HttpExample
}
}

View file

@ -15,8 +15,8 @@ object Introduction {
def endpointUri = "mina2:tcp://localhost:6200?textline=true"
def receive = {
case msg: CamelMessage { /* ... */ }
case _ { /* ... */ }
case msg: CamelMessage => { /* ... */ }
case _ => { /* ... */ }
}
}
@ -35,8 +35,8 @@ object Introduction {
def endpointUri = "jetty:http://localhost:8877/example"
def receive = {
case msg: CamelMessage { /* ... */ }
case _ { /* ... */ }
case msg: CamelMessage => { /* ... */ }
case _ => { /* ... */ }
}
}
//#Consumer
@ -85,8 +85,8 @@ object Introduction {
def endpointUri = "mina2:tcp://localhost:6200?textline=true"
def receive = {
case msg: CamelMessage { /* ... */ }
case _ { /* ... */ }
case msg: CamelMessage => { /* ... */ }
case _ => { /* ... */ }
}
}
val system = ActorSystem("some-system")

View file

@ -33,7 +33,7 @@ object Producers {
class ResponseReceiver extends Actor {
def receive = {
case msg: CamelMessage
case msg: CamelMessage =>
// do something with the forwarded response
}
}
@ -61,11 +61,11 @@ object Producers {
def endpointUri = uri
def upperCase(msg: CamelMessage) = msg.mapBody {
body: String body.toUpperCase
body: String => body.toUpperCase
}
override def transformOutgoingMessage(msg: Any) = msg match {
case msg: CamelMessage upperCase(msg)
case msg: CamelMessage => upperCase(msg)
}
}
//#TransformOutgoingMessage
@ -106,7 +106,7 @@ object Producers {
import akka.actor.Actor
class MyActor extends Actor {
def receive = {
case msg
case msg =>
val template = CamelExtension(context.system).template
template.sendBody("direct:news", msg)
}
@ -118,7 +118,7 @@ object Producers {
import akka.actor.Actor
class MyActor extends Actor {
def receive = {
case msg
case msg =>
val template = CamelExtension(context.system).template
sender ! template.requestBody("direct:news", msg)
}

View file

@ -9,7 +9,7 @@ object PublishSubscribe {
def endpointUri = uri
def receive = {
case msg: CamelMessage println("%s received: %s" format (name, msg.body))
case msg: CamelMessage => println("%s received: %s" format (name, msg.body))
}
}
@ -25,7 +25,7 @@ object PublishSubscribe {
def endpointUri = uri
def receive = {
case msg: CamelMessage {
case msg: CamelMessage => {
publisher ! msg.bodyAs[String]
sender ! ("message published")
}

View file

@ -1,30 +0,0 @@
package docs.camel
object QuartzExample {
//#Quartz
import akka.actor.{ ActorSystem, Props }
import akka.camel.{ Consumer }
class MyQuartzActor extends Consumer {
def endpointUri = "quartz://example?cron=0/2+*+*+*+*+?"
def receive = {
case msg println("==============> received %s " format msg)
} // end receive
} // end MyQuartzActor
object MyQuartzActor {
def main(str: Array[String]) {
val system = ActorSystem("my-quartz-system")
system.actorOf(Props[MyQuartzActor])
} // end main
} // end MyQuartzActor
//#Quartz
}

View file

@ -32,7 +32,7 @@ object ChannelDocSpec {
class Child extends Actor
with Channels[(Stats, Nothing) :+: TNil, (Request, Reply) :+: TNil] {
channel[Request] { (x, snd)
channel[Request] { (x, snd) =>
parentChannel <-!- Stats(x)
snd <-!- CommandSuccess
}
@ -43,9 +43,9 @@ object ChannelDocSpec {
val child = createChild(new Child)
channel[GetChild.type] { (_, snd) ChildRef(child) -!-> snd }
channel[GetChild.type] { (_, snd) => ChildRef(child) -!-> snd }
channel[Stats] { (x, _)
channel[Stats] { (x, _) =>
// collect some stats
}
}
@ -89,10 +89,10 @@ class ChannelDocSpec extends AkkaSpec {
"demonstrate channels creation" ignore {
//#declaring-channels
class AC extends Actor with Channels[TNil, (Request, Reply) :+: TNil] {
channel[Request] { (req, snd)
channel[Request] { (req, snd) =>
req match {
case Command("ping") snd <-!- CommandSuccess
case _
case Command("ping") => snd <-!- CommandSuccess
case _ =>
}
}
}
@ -100,8 +100,8 @@ class ChannelDocSpec extends AkkaSpec {
//#declaring-subchannels
class ACSub extends Actor with Channels[TNil, (Request, Reply) :+: TNil] {
channel[Command] { (cmd, snd) snd <-!- CommandSuccess }
channel[Request] { (req, snd)
channel[Command] { (cmd, snd) => snd <-!- CommandSuccess }
channel[Request] { (req, snd) =>
if (ThreadLocalRandom.current.nextBoolean) snd <-!- CommandSuccess
else snd <-!- CommandFailure("no luck")
}
@ -159,17 +159,17 @@ class ChannelDocSpec extends AkkaSpec {
//#become
channel[Request] {
case (Command("close"), snd)
channel[T1] { (t, s) t -?-> target -!-> s }
case (Command("close"), snd) =>
channel[T1] { (t, s) => t -?-> target -!-> s }
snd <-!- CommandSuccess
case (Command("open"), snd)
channel[T1] { (_, _) }
case (Command("open"), snd) =>
channel[T1] { (_, _) => }
snd <-!- CommandSuccess
}
//#become
channel[T1] { (t, snd) t -?-> target -!-> snd }
channel[T1] { (t, snd) => t -?-> target -!-> snd }
}
//#forwarding

View file

@ -64,7 +64,7 @@ class DataflowDocSpec extends WordSpec with MustMatchers {
//#for-vs-flow
val f1, f2 = Future { 1 }
val usingFor = for { v1 f1; v2 f2 } yield v1 + v2
val usingFor = for { v1 <- f1; v2 <- f2 } yield v1 + v2
val usingFlow = flow { f1() + f2() }
usingFor onComplete println

View file

@ -200,22 +200,22 @@ object DispatcherDocSpec {
// Create a new PriorityGenerator, lower prio means more important
PriorityGenerator {
// 'highpriority messages should be treated first if possible
case 'highpriority 0
case 'highpriority => 0
// 'lowpriority messages should be treated last if possible
case 'lowpriority 2
case 'lowpriority => 2
// PoisonPill when no other left
case PoisonPill 3
case PoisonPill => 3
// We default to 1, which is in between high and low
case otherwise 1
case otherwise => 1
})
//#prio-mailbox
class MyActor extends Actor {
def receive = {
case x
case x =>
}
}
@ -232,7 +232,7 @@ object DispatcherDocSpec {
with RequiresMessageQueue[MyUnboundedMessageQueueSemantics] {
//#require-mailbox-on-actor
def receive = {
case _
case _ =>
}
//#require-mailbox-on-actor
// ...
@ -319,7 +319,7 @@ class DispatcherDocSpec extends AkkaSpec(DispatcherDocSpec.config) {
self ! PoisonPill
def receive = {
case x log.info(x.toString)
case x => log.info(x.toString)
}
}
val a = system.actorOf(Props(classOf[Logger], this).withDispatcher(
@ -338,7 +338,7 @@ class DispatcherDocSpec extends AkkaSpec(DispatcherDocSpec.config) {
//#prio-dispatcher
watch(a)
expectMsgPF() { case Terminated(`a`) () }
expectMsgPF() { case Terminated(`a`) => () }
}
}

View file

@ -22,8 +22,8 @@ object LoggingDocSpec {
reason.getMessage, message.getOrElse(""))
}
def receive = {
case "test" log.info("Received test")
case x log.warning("Received unknown message: {}", x)
case "test" => log.info("Received test")
case x => log.warning("Received unknown message: {}", x)
}
}
//#my-actor
@ -34,7 +34,7 @@ object LoggingDocSpec {
val log = Logging(this)
def receive = {
case _ {
case _ => {
//#mdc
val mdc = Map("requestId" -> 1234, "visitorId" -> 5678)
log.mdc(mdc)
@ -60,14 +60,14 @@ object LoggingDocSpec {
reqId += 1
val always = Map("requestId" -> reqId)
val perMessage = currentMessage match {
case r: Req Map("visitorId" -> r.visitorId)
case _ Map()
case r: Req => Map("visitorId" -> r.visitorId)
case _ => Map()
}
always ++ perMessage
}
def receive: Receive = {
case r: Req {
case r: Req => {
log.info(s"Starting new request: ${r.work}")
}
}
@ -85,11 +85,11 @@ object LoggingDocSpec {
class MyEventListener extends Actor {
def receive = {
case InitializeLogger(_) sender ! LoggerInitialized
case Error(cause, logSource, logClass, message) // ...
case Warning(logSource, logClass, message) // ...
case Info(logSource, logClass, message) // ...
case Debug(logSource, logClass, message) // ...
case InitializeLogger(_) => sender ! LoggerInitialized
case Error(cause, logSource, logClass, message) => // ...
case Warning(logSource, logClass, message) => // ...
case Info(logSource, logClass, message) => // ...
case Debug(logSource, logClass, message) => // ...
}
}
//#my-event-listener
@ -140,7 +140,7 @@ class LoggingDocSpec extends AkkaSpec {
class Listener extends Actor {
def receive = {
case d: DeadLetter println(d)
case d: DeadLetter => println(d)
}
}
val listener = system.actorOf(Props(classOf[Listener], this))

View file

@ -60,7 +60,7 @@ object ExtensionDocSpec {
class MyActor extends Actor {
def receive = {
case someMessage
case someMessage =>
CountExtension(context.system).increment()
}
}
@ -68,12 +68,12 @@ object ExtensionDocSpec {
//#extension-usage-actor-trait
trait Counting { self: Actor
trait Counting { self: Actor =>
def increment() = CountExtension(context.system).increment()
}
class MyCounterActor extends Actor with Counting {
def receive = {
case someMessage increment()
case someMessage => increment()
}
}
//#extension-usage-actor-trait

View file

@ -65,7 +65,7 @@ object SettingsExtensionDocSpec {
//#extension-usage-actor
def receive = {
case someMessage
case someMessage =>
}
def connect(dbUri: String, circuitBreakerTimeout: Duration) = {

View file

@ -18,9 +18,9 @@ object FutureDocSpec {
class MyActor extends Actor {
def receive = {
case x: String sender ! x.toUpperCase
case x: Int if x < 0 sender ! Status.Failure(new ArithmeticException("Negative values not supported"))
case x: Int sender ! x
case x: String => sender ! x.toUpperCase
case x: Int if x < 0 => sender ! Status.Failure(new ArithmeticException("Negative values not supported"))
case x: Int => sender ! x
}
}
@ -29,7 +29,7 @@ object FutureDocSpec {
class OddActor extends Actor {
var n = 1
def receive = {
case GetNext
case GetNext =>
sender ! n
n += 2
}
@ -40,7 +40,7 @@ class FutureDocSpec extends AkkaSpec {
import FutureDocSpec._
import system.dispatcher
val println: PartialFunction[Any, Unit] = { case _ }
val println: PartialFunction[Any, Unit] = { case _ => }
"demonstrate usage custom ExecutionContext" in {
val yourExecutorServiceGoesHere = java.util.concurrent.Executors.newSingleThreadExecutor()
@ -112,7 +112,7 @@ class FutureDocSpec extends AkkaSpec {
val f1 = Future {
"Hello" + "World"
}
val f2 = f1 map { x
val f2 = f1 map { x =>
x.length
}
f2 foreach println
@ -128,8 +128,8 @@ class FutureDocSpec extends AkkaSpec {
"Hello" + "World"
}
val f2 = Future.successful(3)
val f3 = f1 map { x
f2 map { y
val f3 = f1 map { x =>
f2 map { y =>
x.length * y
}
}
@ -144,8 +144,8 @@ class FutureDocSpec extends AkkaSpec {
"Hello" + "World"
}
val f2 = Future.successful(3)
val f3 = f1 flatMap { x
f2 map { y
val f3 = f1 flatMap { x =>
f2 map { y =>
x.length * y
}
}
@ -164,7 +164,7 @@ class FutureDocSpec extends AkkaSpec {
val failedFilter = future1.filter(_ % 2 == 1).recover {
// When filter fails, it will have a java.util.NoSuchElementException
case m: NoSuchElementException 0
case m: NoSuchElementException => 0
}
failedFilter foreach println
@ -178,9 +178,9 @@ class FutureDocSpec extends AkkaSpec {
"demonstrate usage of for comprehension" in {
//#for-comprehension
val f = for {
a Future(10 / 2) // 10 / 2 = 5
b Future(a + 1) // 5 + 1 = 6
c Future(a - 1) // 5 - 1 = 4
a <- Future(10 / 2) // 10 / 2 = 5
b <- Future(a + 1) // 5 + 1 = 6
c <- Future(a - 1) // 5 - 1 = 4
if c > 3 // Future.filter
} yield b * c // 6 * 4 = 24
@ -232,9 +232,9 @@ class FutureDocSpec extends AkkaSpec {
val f2 = ask(actor2, msg2)
val f3 = for {
a f1.mapTo[Int]
b f2.mapTo[Int]
c ask(actor3, (a + b)).mapTo[Int]
a <- f1.mapTo[Int]
b <- f2.mapTo[Int]
c <- ask(actor3, (a + b)).mapTo[Int]
} yield c
f3 foreach println
@ -262,7 +262,7 @@ class FutureDocSpec extends AkkaSpec {
"demonstrate usage of sequence" in {
//#sequence
val futureList = Future.sequence((1 to 100).toList.map(x Future(x * 2 - 1)))
val futureList = Future.sequence((1 to 100).toList.map(x => Future(x * 2 - 1)))
val oddSum = futureList.map(_.sum)
oddSum foreach println
//#sequence
@ -271,7 +271,7 @@ class FutureDocSpec extends AkkaSpec {
"demonstrate usage of traverse" in {
//#traverse
val futureList = Future.traverse((1 to 100).toList)(x Future(x * 2 - 1))
val futureList = Future.traverse((1 to 100).toList)(x => Future(x * 2 - 1))
val oddSum = futureList.map(_.sum)
oddSum foreach println
//#traverse
@ -281,7 +281,7 @@ class FutureDocSpec extends AkkaSpec {
"demonstrate usage of fold" in {
//#fold
// Create a sequence of Futures
val futures = for (i 1 to 1000) yield Future(i * 2)
val futures = for (i <- 1 to 1000) yield Future(i * 2)
val futureSum = Future.fold(futures)(0)(_ + _)
futureSum foreach println
//#fold
@ -291,7 +291,7 @@ class FutureDocSpec extends AkkaSpec {
"demonstrate usage of reduce" in {
//#reduce
// Create a sequence of Futures
val futures = for (i 1 to 1000) yield Future(i * 2)
val futures = for (i <- 1 to 1000) yield Future(i * 2)
val futureSum = Future.reduce(futures)(_ + _)
futureSum foreach println
//#reduce
@ -304,7 +304,7 @@ class FutureDocSpec extends AkkaSpec {
val msg1 = -1
//#recover
val future = akka.pattern.ask(actor, msg1) recover {
case e: ArithmeticException 0
case e: ArithmeticException => 0
}
future foreach println
//#recover
@ -317,8 +317,8 @@ class FutureDocSpec extends AkkaSpec {
val msg1 = -1
//#try-recover
val future = akka.pattern.ask(actor, msg1) recoverWith {
case e: ArithmeticException Future.successful(0)
case foo: IllegalArgumentException
case e: ArithmeticException => Future.successful(0)
case foo: IllegalArgumentException =>
Future.failed[Int](new IllegalStateException("All br0ken!"))
}
future foreach println
@ -330,7 +330,7 @@ class FutureDocSpec extends AkkaSpec {
val future1 = Future { "foo" }
val future2 = Future { "bar" }
//#zip
val future3 = future1 zip future2 map { case (a, b) a + " " + b }
val future3 = future1 zip future2 map { case (a, b) => a + " " + b }
future3 foreach println
//#zip
Await.result(future3, 3 seconds) must be("foo bar")
@ -343,9 +343,9 @@ class FutureDocSpec extends AkkaSpec {
def watchSomeTV(): Unit = ()
//#and-then
val result = Future { loadPage(url) } andThen {
case Failure(exception) log(exception)
case Failure(exception) => log(exception)
} andThen {
case _ watchSomeTV()
case _ => watchSomeTV()
}
result foreach println
//#and-then
@ -368,8 +368,8 @@ class FutureDocSpec extends AkkaSpec {
val future = Future { "foo" }
//#onSuccess
future onSuccess {
case "bar" println("Got my bar alright!")
case x: String println("Got some random string: " + x)
case "bar" => println("Got my bar alright!")
case x: String => println("Got some random string: " + x)
}
//#onSuccess
Await.result(future, 3 seconds) must be("foo")
@ -378,9 +378,9 @@ class FutureDocSpec extends AkkaSpec {
val future = Future.failed[String](new IllegalStateException("OHNOES"))
//#onFailure
future onFailure {
case ise: IllegalStateException if ise.getMessage == "OHNOES"
case ise: IllegalStateException if ise.getMessage == "OHNOES" =>
//OHNOES! We are in deep trouble, do something!
case e: Exception
case e: Exception =>
//Do something else
}
//#onFailure
@ -391,8 +391,8 @@ class FutureDocSpec extends AkkaSpec {
def doSomethingOnFailure(t: Throwable) = ()
//#onComplete
future onComplete {
case Success(result) doSomethingOnSuccess(result)
case Failure(failure) doSomethingOnFailure(failure)
case Success(result) => doSomethingOnSuccess(result)
case Failure(failure) => doSomethingOnFailure(failure)
}
//#onComplete
Await.result(future, 3 seconds) must be("foo")
@ -436,7 +436,7 @@ class FutureDocSpec extends AkkaSpec {
val f = Future("hello")
def receive = {
//#receive-omitted
case _
case _ =>
//#receive-omitted
}
}

View file

@ -53,15 +53,15 @@ class EchoManager(handlerClass: Class[_]) extends Actor with ActorLogging {
override def postRestart(thr: Throwable): Unit = context stop self
def receive = {
case Bound(localAddress)
case Bound(localAddress) =>
log.info("listening on port {}", localAddress.getPort)
case CommandFailed(Bind(_, local, _, _))
case CommandFailed(Bind(_, local, _, _)) =>
log.warning(s"cannot bind to [$local]")
context stop self
//#echo-manager
case Connected(remote, local)
case Connected(remote, local) =>
log.info("received connection from {}", remote)
val handler = context.actorOf(Props(handlerClass, sender, remote))
sender ! Register(handler, keepOpenOnPeerClosed = true)
@ -91,18 +91,18 @@ class EchoHandler(connection: ActorRef, remote: InetSocketAddress)
//#writing
def writing: Receive = {
case Received(data)
case Received(data) =>
connection ! Write(data, Ack(currentOffset))
buffer(data)
case Ack(ack)
case Ack(ack) =>
acknowledge(ack)
case CommandFailed(Write(_, Ack(ack)))
case CommandFailed(Write(_, Ack(ack))) =>
connection ! ResumeWriting
context become buffering(ack)
case PeerClosed
case PeerClosed =>
if (storage.isEmpty) context stop self
else context become closing
}
@ -114,11 +114,11 @@ class EchoHandler(connection: ActorRef, remote: InetSocketAddress)
var peerClosed = false
{
case Received(data) buffer(data)
case WritingResumed writeFirst()
case PeerClosed peerClosed = true
case Ack(ack) if ack < nack acknowledge(ack)
case Ack(ack)
case Received(data) => buffer(data)
case WritingResumed => writeFirst()
case PeerClosed => peerClosed = true
case Ack(ack) if ack < nack => acknowledge(ack)
case Ack(ack) =>
acknowledge(ack)
if (storage.nonEmpty) {
if (toAck > 0) {
@ -138,19 +138,19 @@ class EchoHandler(connection: ActorRef, remote: InetSocketAddress)
//#closing
def closing: Receive = {
case CommandFailed(_: Write)
case CommandFailed(_: Write) =>
connection ! ResumeWriting
context.become({
case WritingResumed
case WritingResumed =>
writeAll()
context.unbecome()
case ack: Int acknowledge(ack)
case ack: Int => acknowledge(ack)
}, discardOld = false)
case Ack(ack)
case Ack(ack) =>
acknowledge(ack)
if (storage.isEmpty) context stop self
}
@ -213,7 +213,7 @@ class EchoHandler(connection: ActorRef, remote: InetSocketAddress)
}
private def writeAll(): Unit = {
for ((data, i) storage.zipWithIndex) {
for ((data, i) <- storage.zipWithIndex) {
connection ! Write(data, Ack(storageOffset + i))
}
}
@ -234,17 +234,17 @@ class SimpleEchoHandler(connection: ActorRef, remote: InetSocketAddress)
case object Ack extends Event
def receive = {
case Received(data)
case Received(data) =>
buffer(data)
connection ! Write(data, Ack)
context.become({
case Received(data) buffer(data)
case Ack acknowledge()
case PeerClosed closing = true
case Received(data) => buffer(data)
case Ack => acknowledge()
case PeerClosed => closing = true
}, discardOld = false)
case PeerClosed context stop self
case PeerClosed => context stop self
}
//#storage-omitted

View file

@ -34,14 +34,14 @@ class Server extends Actor {
IO(Tcp) ! Bind(self, new InetSocketAddress("localhost", 0))
def receive = {
case b @ Bound(localAddress)
case b @ Bound(localAddress) =>
//#do-some-logging-or-setup
context.parent ! b
//#do-some-logging-or-setup
case CommandFailed(_: Bind) context stop self
case CommandFailed(_: Bind) => context stop self
case c @ Connected(remote, local)
case c @ Connected(remote, local) =>
//#server
context.parent ! c
//#server
@ -57,8 +57,8 @@ class Server extends Actor {
class SimplisticHandler extends Actor {
import Tcp._
def receive = {
case Received(data) sender ! Write(data)
case PeerClosed context stop self
case Received(data) => sender ! Write(data)
case PeerClosed => context stop self
}
}
//#simplistic-handler
@ -77,20 +77,20 @@ class Client(remote: InetSocketAddress, listener: ActorRef) extends Actor {
IO(Tcp) ! Connect(remote)
def receive = {
case CommandFailed(_: Connect)
case CommandFailed(_: Connect) =>
listener ! "failed"
context stop self
case c @ Connected(remote, local)
case c @ Connected(remote, local) =>
listener ! c
val connection = sender
connection ! Register(self)
context become {
case data: ByteString connection ! Write(data)
case CommandFailed(w: Write) // O/S buffer was full
case Received(data) listener ! data
case "close" connection ! Close
case _: ConnectionClosed context stop self
case data: ByteString => connection ! Write(data)
case CommandFailed(w: Write) => // O/S buffer was full
case Received(data) => listener ! data
case "close" => connection ! Close
case _: ConnectionClosed => context stop self
}
}
}
@ -101,7 +101,7 @@ class IODocSpec extends AkkaSpec {
class Parent extends Actor {
context.actorOf(Props[Server], "server")
def receive = {
case msg testActor forward msg
case msg => testActor forward msg
}
}

View file

@ -45,12 +45,12 @@ class PipelinesDocSpec extends AkkaSpec {
builder ++= bs
}
override val commandPipeline = { msg: Message
override val commandPipeline = { msg: Message =>
val bs = ByteString.newBuilder
// first store the persons
bs putInt msg.persons.size
msg.persons foreach { p
msg.persons foreach { p =>
putString(bs, p.first)
putString(bs, p.last)
}
@ -72,12 +72,12 @@ class PipelinesDocSpec extends AkkaSpec {
ByteString(bytes).utf8String
}
override val eventPipeline = { bs: ByteString
override val eventPipeline = { bs: ByteString =>
val iter = bs.iterator
val personLength = iter.getInt
val persons =
(1 to personLength) map (_ Person(getString(iter), getString(iter)))
(1 to personLength) map (_ => Person(getString(iter), getString(iter)))
val curveLength = iter.getInt
val curve = new Array[Double](curveLength)
@ -94,10 +94,10 @@ class PipelinesDocSpec extends AkkaSpec {
var lastTick = Duration.Zero
override val managementPort: Mgmt = {
case TickGenerator.Tick(timestamp)
case TickGenerator.Tick(timestamp) =>
//#omitted
testActor ! TickGenerator.Tick(timestamp)
import java.lang.String.{ valueOf println }
import java.lang.String.{ valueOf => println }
//#omitted
println(s"time since last tick: ${timestamp - lastTick}")
lastTick = timestamp
@ -207,20 +207,20 @@ class PipelinesDocSpec extends AkkaSpec {
new LengthFieldFrame(10000) //
)(
// failure in the pipeline will fail this actor
cmd cmds ! cmd.get,
evt evts ! evt.get)
cmd => cmds ! cmd.get,
evt => evts ! evt.get)
def receive = {
case m: Message pipeline.injectCommand(m)
case b: ByteString pipeline.injectEvent(b)
case t: TickGenerator.Trigger pipeline.managementCommand(t)
case m: Message => pipeline.injectCommand(m)
case b: ByteString => pipeline.injectEvent(b)
case t: TickGenerator.Trigger => pipeline.managementCommand(t)
}
}
//#actor
class P(cmds: ActorRef, evts: ActorRef) extends Processor(cmds, evts) {
override def receive = ({
case "fail!" throw new RuntimeException("FAIL!")
case "fail!" => throw new RuntimeException("FAIL!")
}: Receive) orElse super.receive
}

View file

@ -21,7 +21,7 @@ object ScalaUdpDocSpec {
IO(Udp) ! Udp.SimpleSender
def receive = {
case Udp.SimpleSenderReady
case Udp.SimpleSenderReady =>
context.become(ready(sender))
//#sender
sender ! Udp.Send(ByteString("hello"), remote)
@ -29,7 +29,7 @@ object ScalaUdpDocSpec {
}
def ready(send: ActorRef): Receive = {
case msg: String
case msg: String =>
send ! Udp.Send(ByteString(msg), remote)
//#sender
if (msg == "world") send ! PoisonPill
@ -44,7 +44,7 @@ object ScalaUdpDocSpec {
IO(Udp) ! Udp.Bind(self, new InetSocketAddress("localhost", 0))
def receive = {
case Udp.Bound(local)
case Udp.Bound(local) =>
//#listener
nextActor forward local
//#listener
@ -52,15 +52,15 @@ object ScalaUdpDocSpec {
}
def ready(socket: ActorRef): Receive = {
case Udp.Received(data, remote)
case Udp.Received(data, remote) =>
val processed = // parse data etc., e.g. using PipelineStage
//#listener
data.utf8String
//#listener
socket ! Udp.Send(data, remote) // example server echoes back
nextActor ! processed
case Udp.Unbind socket ! Udp.Unbind
case Udp.Unbound context.stop(self)
case Udp.Unbind => socket ! Udp.Unbind
case Udp.Unbound => context.stop(self)
}
}
//#listener
@ -71,7 +71,7 @@ object ScalaUdpDocSpec {
IO(UdpConnected) ! UdpConnected.Connect(self, remote)
def receive = {
case UdpConnected.Connected
case UdpConnected.Connected =>
context.become(ready(sender))
//#connected
sender ! UdpConnected.Send(ByteString("hello"))
@ -79,16 +79,16 @@ object ScalaUdpDocSpec {
}
def ready(connection: ActorRef): Receive = {
case UdpConnected.Received(data)
case UdpConnected.Received(data) =>
// process data, send it on, etc.
//#connected
if (data.utf8String == "hello")
connection ! UdpConnected.Send(ByteString("world"))
//#connected
case msg: String
case msg: String =>
connection ! UdpConnected.Send(ByteString(msg))
case d @ UdpConnected.Disconnect connection ! d
case UdpConnected.Disconnected context.stop(self)
case d @ UdpConnected.Disconnect => connection ! d
case UdpConnected.Disconnected => context.stop(self)
}
}
//#connected

View file

@ -26,11 +26,11 @@ object SchedulerPatternSpec {
override def postStop() = tick.cancel()
def receive = {
case "tick"
case "tick" =>
// do something useful here
//#schedule-constructor
target ! "tick"
case "restart"
case "restart" =>
throw new ArithmeticException
//#schedule-constructor
}
@ -53,13 +53,13 @@ object SchedulerPatternSpec {
override def postRestart(reason: Throwable) = {}
def receive = {
case "tick"
case "tick" =>
// send another periodic tick after the specified delay
system.scheduler.scheduleOnce(1000 millis, self, "tick")
// do something useful here
//#schedule-receive
target ! "tick"
case "restart"
case "restart" =>
throw new ArithmeticException
//#schedule-receive
}

View file

@ -21,11 +21,11 @@ trait PersistenceDocSpec {
class MyProcessor extends Processor {
def receive = {
case Persistent(payload, sequenceNr)
case Persistent(payload, sequenceNr) =>
// message successfully written to journal
case PersistenceFailure(payload, sequenceNr, cause)
case PersistenceFailure(payload, sequenceNr, cause) =>
// message failed to be written to journal
case other
case other =>
// message not written to journal
}
}
@ -67,8 +67,8 @@ trait PersistenceDocSpec {
//#deletion
override def preRestart(reason: Throwable, message: Option[Any]) {
message match {
case Some(p: Persistent) deleteMessage(p.sequenceNr)
case _
case Some(p: Persistent) => deleteMessage(p.sequenceNr)
case _ =>
}
super.preRestart(reason, message)
}
@ -94,7 +94,7 @@ trait PersistenceDocSpec {
override def processorId = "my-stable-processor-id"
//#processor-id-override
def receive = {
case _
case _ =>
}
}
}
@ -109,14 +109,14 @@ trait PersistenceDocSpec {
val channel = context.actorOf(Channel.props(), name = "myChannel")
def receive = {
case p @ Persistent(payload, _)
case p @ Persistent(payload, _) =>
channel ! Deliver(p.withPayload(s"processed ${payload}"), destination)
}
}
class MyDestination extends Actor {
def receive = {
case p @ ConfirmablePersistent(payload, sequenceNr, redeliveries)
case p @ ConfirmablePersistent(payload, sequenceNr, redeliveries) =>
// ...
p.confirm()
}
@ -139,7 +139,7 @@ trait PersistenceDocSpec {
//#channel-custom-settings
def receive = {
case p @ Persistent(payload, _)
case p @ Persistent(payload, _) =>
//#channel-example-reply
channel ! Deliver(p.withPayload(s"processed ${payload}"), sender)
//#channel-example-reply
@ -155,7 +155,7 @@ trait PersistenceDocSpec {
class MyProcessor3 extends Processor {
def receive = {
//#payload-pattern-matching
case Persistent(payload, _)
case Persistent(payload, _) =>
//#payload-pattern-matching
}
}
@ -163,7 +163,7 @@ trait PersistenceDocSpec {
class MyProcessor4 extends Processor {
def receive = {
//#sequence-nr-pattern-matching
case Persistent(_, sequenceNr)
case Persistent(_, sequenceNr) =>
//#sequence-nr-pattern-matching
}
}
@ -178,12 +178,12 @@ trait PersistenceDocSpec {
startWith("closed", 0)
when("closed") {
case Event(Persistent("open", _), counter)
case Event(Persistent("open", _), counter) =>
goto("open") using (counter + 1) replying (counter)
}
when("open") {
case Event(Persistent("close", _), counter)
case Event(Persistent("close", _), counter) =>
goto("closed") using (counter + 1) replying (counter)
}
}
@ -196,9 +196,9 @@ trait PersistenceDocSpec {
var state: Any = _
def receive = {
case "snap" saveSnapshot(state)
case SaveSnapshotSuccess(metadata) // ...
case SaveSnapshotFailure(metadata, reason) // ...
case "snap" => saveSnapshot(state)
case SaveSnapshotSuccess(metadata) => // ...
case SaveSnapshotFailure(metadata, reason) => // ...
}
}
//#save-snapshot
@ -210,8 +210,8 @@ trait PersistenceDocSpec {
var state: Any = _
def receive = {
case SnapshotOffer(metadata, offeredSnapshot) state = offeredSnapshot
case Persistent(payload, sequenceNr) // ...
case SnapshotOffer(metadata, offeredSnapshot) => state = offeredSnapshot
case Persistent(payload, sequenceNr) => // ...
}
}
//#snapshot-offer
@ -232,8 +232,8 @@ trait PersistenceDocSpec {
//#batch-write
class MyProcessor extends Processor {
def receive = {
case Persistent("a", _) ⇒ // ...
case Persistent("b", _) ⇒ // ...
case Persistent("a", _) => // ...
case Persistent("b", _) => // ...
}
}
@ -278,11 +278,11 @@ trait PersistenceDocSpec {
}
def receiveReplay: Receive = {
case event: String ⇒ handleEvent(event)
case event: String => handleEvent(event)
}
def receiveCommand: Receive = {
case "cmd" ⇒ {
case "cmd" => {
// ...
persist("evt")(handleEvent)
}

View file

@ -98,7 +98,7 @@ object SharedLeveldbPluginDocSpec {
}
def receive = {
case ActorIdentity(1, Some(store)) ⇒
case ActorIdentity(1, Some(store)) =>
SharedLeveldbJournal.setStore(store, context.system)
}
}
@ -122,7 +122,7 @@ class MyJournal extends AsyncWriteJournal {
def writeAsync(persistentBatch: Seq[PersistentRepr]): Future[Unit] = ???
def deleteAsync(processorId: String, fromSequenceNr: Long, toSequenceNr: Long, permanent: Boolean): Future[Unit] = ???
def confirmAsync(processorId: String, sequenceNr: Long, channelId: String): Future[Unit] = ???
def replayAsync(processorId: String, fromSequenceNr: Long, toSequenceNr: Long)(replayCallback: (PersistentRepr) ⇒ Unit): Future[Long] = ???
def replayAsync(processorId: String, fromSequenceNr: Long, toSequenceNr: Long)(replayCallback: (PersistentRepr) => Unit): Future[Long] = ???
}
class MySnapshotStore extends SnapshotStore {

View file

@ -13,7 +13,7 @@ import akka.remote.RemoteScope
object RemoteDeploymentDocSpec {
class SampleActor extends Actor {
def receive = { case _ ⇒ sender ! self }
def receive = { case _ => sender ! self }
}
}

View file

@ -18,9 +18,9 @@ object ConsistentHashingRouterDocSpec {
var cache = Map.empty[String, String]
def receive = {
case Entry(key, value) ⇒ cache += (key -> value)
case Get(key) ⇒ sender ! cache.get(key)
case Evict(key) ⇒ cache -= key
case Entry(key, value) => cache += (key -> value)
case Get(key) => sender ! cache.get(key)
case Evict(key) => cache -= key
}
}
@ -50,7 +50,7 @@ class ConsistentHashingRouterDocSpec extends AkkaSpec with ImplicitSender {
import akka.routing.ConsistentHashingRouter.ConsistentHashableEnvelope
def hashMapping: ConsistentHashMapping = {
case Evict(key) ⇒ key
case Evict(key) => key
}
val cache: ActorRef =

View file

@ -50,7 +50,7 @@ akka.actor.deployment {
class RedundancyRoutingLogic(nbrCopies: Int) extends RoutingLogic {
val roundRobin = RoundRobinRoutingLogic()
def select(message: Any, routees: immutable.IndexedSeq[Routee]): Routee = {
val targets = (1 to nbrCopies).map(_ ⇒ roundRobin.select(message, routees))
val targets = (1 to nbrCopies).map(_ => roundRobin.select(message, routees))
SeveralRoutees(targets)
}
}
@ -58,7 +58,7 @@ akka.actor.deployment {
class Storage extends Actor {
def receive = {
case x ⇒ sender ! x
case x => sender ! x
}
}
@ -99,7 +99,7 @@ class CustomRouterDocSpec extends AkkaSpec(CustomRouterDocSpec.config) with Impl
//#unit-test-logic
val logic = new RedundancyRoutingLogic(nbrCopies = 3)
val routees = for (n ← 1 to 7) yield TestRoutee(n)
val routees = for (n <- 1 to 7) yield TestRoutee(n)
val r1 = logic.select("msg", routees)
r1.asInstanceOf[SeveralRoutees].routees must be(
@ -118,16 +118,16 @@ class CustomRouterDocSpec extends AkkaSpec(CustomRouterDocSpec.config) with Impl
"demonstrate usage of custom router" in {
//#usage-1
for (n ← 1 to 10) system.actorOf(Props[Storage], "s" + n)
for (n <- 1 to 10) system.actorOf(Props[Storage], "s" + n)
val paths = for (n ← 1 to 10) yield ("/user/s" + n)
val paths = for (n <- 1 to 10) yield ("/user/s" + n)
val redundancy1: ActorRef =
system.actorOf(RedundancyGroup(paths, nbrCopies = 3).props(),
name = "redundancy1")
redundancy1 ! "important"
//#usage-1
for (_ ← 1 to 3) expectMsg("important")
for (_ <- 1 to 3) expectMsg("important")
//#usage-2
val redundancy2: ActorRef = system.actorOf(FromConfig.props(),
@ -135,7 +135,7 @@ class CustomRouterDocSpec extends AkkaSpec(CustomRouterDocSpec.config) with Impl
redundancy2 ! "very important"
//#usage-2
for (_ ← 1 to 5) expectMsg("very important")
for (_ <- 1 to 5) expectMsg("very important")
}

View file

@ -173,9 +173,9 @@ router-dispatcher {}
}
def receive = {
case w: Work ⇒
case w: Work =>
router.route(w, sender)
case Terminated(a) ⇒
case Terminated(a) =>
router = router.removeRoutee(a)
val r = context.actorOf(Props[Worker])
context watch r
@ -186,7 +186,7 @@ router-dispatcher {}
class Worker extends Actor {
def receive = {
case _ ⇒
case _ =>
}
}
@ -199,7 +199,7 @@ router-dispatcher {}
//#create-worker-actors
def receive = {
case _ ⇒
case _ =>
}
}
@ -335,14 +335,14 @@ router-dispatcher {}
//#resize-pool-2
def receive = {
case _ ⇒
case _ =>
}
}
class Echo extends Actor {
def receive = {
case m ⇒ sender ! m
case m => sender ! m
}
}
}

View file

@ -16,7 +16,7 @@ import akka.testkit.ImplicitSender
object MySpec {
class EchoActor extends Actor {
def receive = {
case x ⇒ sender ! x
case x => sender ! x
}
}
}

View file

@ -79,7 +79,7 @@ class TestKitUsageSpec
filterRef ! 1
receiveWhile(500 millis) {
case msg: String ⇒ messages = msg +: messages
case msg: String => messages = msg +: messages
}
}
messages.length should be(3)
@ -90,12 +90,12 @@ class TestKitUsageSpec
"receive an interesting message at some point " in {
within(500 millis) {
ignoreMsg {
case msg: String ⇒ msg != "something"
case msg: String => msg != "something"
}
seqRef ! "something"
expectMsg("something")
ignoreMsg {
case msg: String ⇒ msg == "1"
case msg: String => msg == "1"
}
expectNoMsg
ignoreNoMsg
@ -117,7 +117,7 @@ object TestKitUsageSpec {
*/
class EchoActor extends Actor {
def receive = {
case msg ⇒ sender ! msg
case msg => sender ! msg
}
}
@ -126,7 +126,7 @@ object TestKitUsageSpec {
*/
class ForwardingActor(next: ActorRef) extends Actor {
def receive = {
case msg ⇒ next ! msg
case msg => next ! msg
}
}
@ -135,8 +135,8 @@ object TestKitUsageSpec {
*/
class FilteringActor(next: ActorRef) extends Actor {
def receive = {
case msg: String ⇒ next ! msg
case _ ⇒ None
case msg: String => next ! msg
case _ => None
}
}
@ -149,7 +149,7 @@ object TestKitUsageSpec {
class SequencingActor(next: ActorRef, head: immutable.Seq[String],
tail: immutable.Seq[String]) extends Actor {
def receive = {
case msg ⇒ {
case msg => {
head foreach { next ! _ }
next ! msg
tail foreach { next ! _ }

View file

@ -22,18 +22,18 @@ object TestkitDocSpec {
class MyActor extends Actor {
def receive = {
case Say42 ⇒ sender ! 42
case "some work" ⇒ sender ! "some result"
case Say42 => sender ! 42
case "some work" => sender ! "some result"
}
}
class TestFsmActor extends Actor with FSM[Int, String] {
startWith(1, "")
when(1) {
case Event("go", _) ⇒ goto(2) using "go"
case Event("go", _) => goto(2) using "go"
}
when(2) {
case Event("back", _) ⇒ goto(1) using "back"
case Event("back", _) => goto(1) using "back"
}
}
@ -42,10 +42,10 @@ object TestkitDocSpec {
var dest1: ActorRef = _
var dest2: ActorRef = _
def receive = {
case (d1: ActorRef, d2: ActorRef) ⇒
case (d1: ActorRef, d2: ActorRef) =>
dest1 = d1
dest2 = d2
case x ⇒
case x =>
dest1 ! x
dest2 ! x
}
@ -58,13 +58,13 @@ object TestkitDocSpec {
//#test-probe-forward-actors
class Source(target: ActorRef) extends Actor {
def receive = {
case "start" ⇒ target ! "work"
case "start" => target ! "work"
}
}
class Destination extends Actor {
def receive = {
case x ⇒ // Do something..
case x => // Do something..
}
}
@ -74,7 +74,7 @@ object TestkitDocSpec {
//#logging-receive
import akka.event.LoggingReceive
def receive = LoggingReceive {
case msg ⇒ // Do something...
case msg => // Do something...
}
//#logging-receive
}
@ -151,7 +151,7 @@ class TestkitDocSpec extends AkkaSpec with DefaultTimeout with ImplicitSender {
val actorRef = TestActorRef(new Actor {
def receive = {
case "hello" ⇒ throw new IllegalArgumentException("boom")
case "hello" => throw new IllegalArgumentException("boom")
}
})
intercept[IllegalArgumentException] { actorRef.receive("hello") }
@ -199,7 +199,7 @@ class TestkitDocSpec extends AkkaSpec with DefaultTimeout with ImplicitSender {
val probe = new TestProbe(system) {
def expectUpdate(x: Int) = {
expectMsgPF() {
case Update(id, _) if id == x ⇒ true
case Update(id, _) if id == x => true
}
sender ! "ACK"
}
@ -280,7 +280,7 @@ class TestkitDocSpec extends AkkaSpec with DefaultTimeout with ImplicitSender {
//#put-your-test-code-here
val probe = TestProbe()
probe.send(testActor, "hello")
try expectMsg("hello") catch { case NonFatal(e) ⇒ system.shutdown(); throw e }
try expectMsg("hello") catch { case NonFatal(e) => system.shutdown(); throw e }
//#put-your-test-code-here
shutdown(system)

View file

@ -26,13 +26,13 @@ object CoordinatedExample {
val count = Ref(0)
def receive = {
case coordinated @ Coordinated(Increment(friend)) ⇒ {
case coordinated @ Coordinated(Increment(friend)) => {
friend foreach (_ ! coordinated(Increment()))
coordinated atomic { implicit t ⇒
coordinated atomic { implicit t =>
count transform (_ + 1)
}
}
case GetCount ⇒ sender ! count.single.get
case GetCount => sender ! count.single.get
}
}
//#coordinated-example
@ -44,9 +44,9 @@ object CoordinatedApi {
class Coordinator extends Actor {
//#receive-coordinated
def receive = {
case coordinated @ Coordinated(Message) ⇒ {
case coordinated @ Coordinated(Message) => {
//#coordinated-atomic
coordinated atomic { implicit t ⇒
coordinated atomic { implicit t =>
// do something in the coordinated transaction ...
}
//#coordinated-atomic
@ -66,8 +66,8 @@ object CounterExample {
class Counter extends Transactor {
val count = Ref(0)
def atomically = implicit txn ⇒ {
case Increment ⇒ count transform (_ + 1)
def atomically = implicit txn => {
case Increment => count transform (_ + 1)
}
}
//#counter-example
@ -85,11 +85,11 @@ object FriendlyCounterExample {
val count = Ref(0)
override def coordinate = {
case Increment ⇒ include(friend)
case Increment => include(friend)
}
def atomically = implicit txn ⇒ {
case Increment ⇒ count transform (_ + 1)
def atomically = implicit txn => {
case Increment => count transform (_ + 1)
}
}
//#friendly-counter-example
@ -97,8 +97,8 @@ object FriendlyCounterExample {
class Friend extends Transactor {
val count = Ref(0)
def atomically = implicit txn ⇒ {
case Increment ⇒ count transform (_ + 1)
def atomically = implicit txn => {
case Increment => count transform (_ + 1)
}
}
}
@ -115,22 +115,22 @@ object TransactorCoordinate {
class TestCoordinateInclude(actor1: ActorRef, actor2: ActorRef, actor3: ActorRef) extends Transactor {
//#coordinate-include
override def coordinate = {
case Message ⇒ include(actor1, actor2, actor3)
case Message => include(actor1, actor2, actor3)
}
//#coordinate-include
def atomically = txn ⇒ doNothing
def atomically = txn => doNothing
}
class TestCoordinateSendTo(someActor: ActorRef, actor1: ActorRef, actor2: ActorRef) extends Transactor {
//#coordinate-sendto
override def coordinate = {
case SomeMessage ⇒ sendTo(someActor -> SomeOtherMessage)
case OtherMessage ⇒ sendTo(actor1 -> Message1, actor2 -> Message2)
case SomeMessage => sendTo(someActor -> SomeOtherMessage)
case OtherMessage => sendTo(actor1 -> Message1, actor2 -> Message2)
}
//#coordinate-sendto
def atomically = txn ⇒ doNothing
def atomically = txn => doNothing
}
}

View file

@ -45,7 +45,7 @@ object ZeromqDocSpec {
}
def receive: Receive = {
case Tick ⇒
case Tick =>
val currentHeap = memory.getHeapMemoryUsage
val timestamp = System.currentTimeMillis
@ -73,13 +73,13 @@ object ZeromqDocSpec {
def receive = {
// the first frame is the topic, second is the message
case m: ZMQMessage if m.frames(0).utf8String == "health.heap" ⇒
case m: ZMQMessage if m.frames(0).utf8String == "health.heap" =>
val Heap(timestamp, used, max) = ser.deserialize(m.frames(1).toArray,
classOf[Heap]).get
log.info("Used heap {} bytes, at {}", used,
timestampFormat.format(new Date(timestamp)))
case m: ZMQMessage if m.frames(0).utf8String == "health.load" ⇒
case m: ZMQMessage if m.frames(0).utf8String == "health.load" =>
val Load(timestamp, loadAverage) = ser.deserialize(m.frames(1).toArray,
classOf[Load]).get
log.info("Load average {}, at {}", loadAverage,
@ -98,7 +98,7 @@ object ZeromqDocSpec {
def receive = {
// the first frame is the topic, second is the message
case m: ZMQMessage if m.frames(0).utf8String == "health.heap" ⇒
case m: ZMQMessage if m.frames(0).utf8String == "health.heap" =>
val Heap(timestamp, used, max) =
ser.deserialize(m.frames(1).toArray, classOf[Heap]).get
if ((used.toDouble / max) > 0.9) count += 1
@ -130,9 +130,9 @@ class ZeromqDocSpec extends AkkaSpec("akka.loglevel=INFO") {
class Listener extends Actor {
def receive: Receive = {
case Connecting ⇒ //...
case m: ZMQMessage ⇒ //...
case _ ⇒ //...
case Connecting => //...
case m: ZMQMessage => //...
case _ => //...
}
}
@ -195,11 +195,11 @@ class ZeromqDocSpec extends AkkaSpec("akka.loglevel=INFO") {
def checkZeroMQInstallation() = try {
ZeroMQExtension(system).version match {
case ZeroMQVersion(2, x, _) if x >= 1 ⇒ Unit
case ZeroMQVersion(y, _, _) if y >= 3 ⇒ Unit
case version ⇒ pending
case ZeroMQVersion(2, x, _) if x >= 1 => Unit
case ZeroMQVersion(y, _, _) if y >= 3 => Unit
case version => pending
}
} catch {
case e: LinkageError ⇒ pending
case e: LinkageError => pending
}
}

View file

@ -2,43 +2,17 @@
The Obligatory Hello World
##########################
Since every programming paradigm needs to solve the tough problem of printing a
well-known greeting to the console we’ll introduce you to the actor-based
version.
The actor-based version of the tough problem of printing a
well-known greeting to the console is introduced in a `Typesafe Activator <http://typesafe.com/platform/getstarted>`_
tutorial named `Akka Main in Scala <http://typesafe.com/activator/template/akka-sample-main-scala>`_.
.. includecode:: ../scala/code/docs/actor/IntroDocSpec.scala#hello-world
The tutorial illustrates the generic launcher class :class:`akka.Main` which expects only
one command line argument: the class name of the application’s main actor. This
main method will then create the infrastructure needed for running the actors,
start the given main actor and arrange for the whole application to shut down
once the main actor terminates.
The ``HelloWorld`` actor is the application’s “main” class; when it terminates
the application will shut down—more on that later. The main business logic
happens in the :meth:`preStart` method, where a ``Greeter`` actor is created
and instructed to issue that greeting we crave for. When the greeter is done it
will tell us so by sending back a message, and when that message has been
received it will be passed into the behavior described by the :meth:`receive`
method where we can conclude the demonstration by stopping the ``HelloWorld``
actor. You will be very curious to see how the ``Greeter`` actor performs the
actual task:
.. includecode:: ../scala/code/docs/actor/IntroDocSpec.scala#greeter
This is extremely simple now: after its creation this actor will not do
anything until someone sends it a message, and if that happens to be an
invitation to greet the world then the ``Greeter`` complies and informs the
requester that the deed has been done.
As a Scala developer you will probably want to tell us that there is no
``main(Array[String])`` method anywhere in these classes, so how do we run this
program? The answer is that the appropriate :meth:`main` method is implemented
in the generic launcher class :class:`akka.Main` which expects only one command
line argument: the class name of the application’s main actor. This main method
will then create the infrastructure needed for running the actors, start the
given main actor and arrange for the whole application to shut down once the
main actor terminates. Thus you will be able to run the above code with a
command similar to the following::
java -classpath <all those JARs> akka.Main com.example.HelloWorld
This conveniently assumes placement of the above class definitions in package
``com.example`` and it further assumes that you have the required JAR files for
``scala-library`` and ``akka-actor`` available. The easiest would be to manage
these dependencies with a build tool, see :ref:`build-tool`.
There is also another `Typesafe Activator <http://typesafe.com/platform/getstarted>`_
tutorial in the same problem domain that is named `Hello Akka! <http://typesafe.com/activator/template/hello-akka>`_.
It describes the basics of Akka in more depth.
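For orientation while reading this diff, the launcher pattern described above can be sketched as follows. This is only an illustration under assumed names (``Greet``, ``Done``, the package placement); the canonical code lives in the linked Activator template.

```scala
import akka.actor.{ Actor, Props }

case object Greet
case object Done

// The worker described above: greets on request, then reports back.
class Greeter extends Actor {
  def receive = {
    case Greet =>
      println("Hello World!")
      sender ! Done
  }
}

// The application's "main" actor, started by akka.Main: it creates the
// Greeter in preStart and stops itself (which shuts the application down)
// once the greeting has been confirmed.
class HelloWorld extends Actor {
  override def preStart(): Unit =
    context.actorOf(Props[Greeter], "greeter") ! Greet

  def receive = {
    case Done => context.stop(self)
  }
}
```

Assuming these classes live in package ``com.example``, the sketch would be launched with ``java -classpath <all those JARs> akka.Main com.example.HelloWorld``.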

View file

@ -253,105 +253,14 @@ This is also done via configuration::
This configuration setting will clone the actor “aggregation” 10 times and deploy it evenly distributed across
the two given target nodes.
Description of the Remoting Sample
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. _remote-sample-scala:
There is a more extensive remote example that comes with the Akka distribution.
Please have a look here for more information: `Remote Sample
<@github@/akka-samples/akka-sample-remote>`_
This sample demonstrates both remote deployment and look-up of remote actors.
First, let us have a look at the common setup for both scenarios (this is
``common.conf``):
Remoting Sample
^^^^^^^^^^^^^^^
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/resources/common.conf
This enables the remoting by installing the :class:`RemoteActorRefProvider` and
chooses the default remote transport. All other options will be set
specifically for each show case.
.. note::
Be sure to replace the default IP 127.0.0.1 with the real address the system
is reachable by if you deploy onto multiple machines!
.. _remote-lookup-sample-scala:
Remote Lookup
-------------
In order to look up a remote actor, that one must be created first. For this
purpose, we configure an actor system to listen on port 2552 (this is a snippet
from ``application.conf``):
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/resources/application.conf
:include: calculator
Then the actor must be created. For all code which follows, assume these imports:
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/LookupApplication.scala
:include: imports
The actor doing the work will be this one:
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/CalculatorApplication.scala
:include: actor
and we start it within an actor system using the above configuration
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/CalculatorApplication.scala
:include: setup
With the service actor up and running, we may look it up from another actor
system, which will be configured to use port 2553 (this is a snippet from
``application.conf``).
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/resources/application.conf
:include: remotelookup
The actor which will query the calculator is a quite simple one for demonstration purposes
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/LookupApplication.scala
:include: actor
and it is created from an actor system using the aforementioned client’s config.
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/LookupApplication.scala
:include: setup
Requests which come in via ``doSomething`` will be sent to the client actor,
which will use the actor reference that was identified earlier. Observe how the actor
system name used in ``actorSelection`` matches the remote system’s name, as do IP
and port number. Top-level actors are always created below the ``"/user"``
guardian, which supervises them.
Remote Deployment
-----------------
Creating remote actors instead of looking them up is not visible in the source
code, only in the configuration file. This section is used in this scenario
(this is a snippet from ``application.conf``):
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/resources/application.conf
:include: remotecreation
For all code which follows, assume these imports:
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/LookupApplication.scala
:include: imports
The client actor looks like in the previous example
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/CreationApplication.scala
:include: actor
but the setup uses only ``actorOf``:
.. includecode:: ../../../akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/CreationApplication.scala
:include: setup
Observe how the name of the server actor matches the deployment given in the
configuration file, which will transparently delegate the actor creation to the
remote node.
There is a more extensive remote example that comes with `Typesafe Activator <http://typesafe.com/platform/getstarted>`_.
The tutorial named `Akka Remote Samples with Scala <http://typesafe.com/activator/template/akka-sample-remote-scala>`_
demonstrates both remote deployment and look-up of remote actors.
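The look-up half of that sample boils down to resolving an ``actorSelection`` against the remote system’s address, as the removed section above described. A hedged sketch (system name, host, port, actor name, and the ``Add`` message are all illustrative placeholders, not the sample’s actual protocol):

```scala
import akka.actor.{ Actor, ActorIdentity, Identify }

// Illustrative request type; the real sample defines its own messages.
case class Add(a: Int, b: Int)

class LookupClient(path: String) extends Actor {
  // e.g. path = "akka.tcp://CalculatorSystem@127.0.0.1:2552/user/calculator"
  // Ask the remote system to identify the actor at this path.
  context.actorSelection(path) ! Identify(path)

  def receive = {
    case ActorIdentity(`path`, Some(calculator)) =>
      // The remote actor reference is now resolved; start sending work.
      calculator ! Add(1, 2)
    case ActorIdentity(`path`, None) =>
      println(s"Remote actor not available: $path")
  }
}
```

Note how the system name, host, and port in the path must match the remote system’s configuration, and that top-level actors are looked up below the ``/user`` guardian.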
Pluggable transport support
---------------------------

View file

@ -0,0 +1,17 @@
*#
*.iml
*.ipr
*.iws
*.pyc
*.tm.epoch
*.vim
*-shim.sbt
.idea/
/project/plugins/project
project/boot
target/
/logs
.cache
.classpath
.project
.settings

View file

@ -0,0 +1,13 @@
Copyright 2013 Typesafe, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View file

@ -0,0 +1,4 @@
name=akka-sample-camel-java
title=Akka Camel Samples with Java
description=Akka Camel Samples with Java
tags=akka,camel,java,sample

View file

@ -0,0 +1,14 @@
name := "akka-sample-camel-java"
version := "1.0"
scalaVersion := "2.10.3"
libraryDependencies ++= Seq(
"com.typesafe.akka" %% "akka-camel" % "2.3-SNAPSHOT",
"org.apache.camel" % "camel-jetty" % "2.10.3",
"org.apache.camel" % "camel-quartz" % "2.10.3",
"org.slf4j" % "slf4j-api" % "1.7.2",
"ch.qos.logback" % "logback-classic" % "1.0.7"
)

View file

@ -0,0 +1 @@
sbt.version=0.13.0

View file

@ -1,14 +1,13 @@
package docs.camel.sample.http;
package sample.camel.http;
import akka.actor.ActorRef;
import akka.camel.javaapi.UntypedConsumerActor;
//#HttpExample
public class HttpConsumer extends UntypedConsumerActor{
public class HttpConsumer extends UntypedConsumerActor {
private ActorRef producer;
public HttpConsumer(ActorRef producer){
public HttpConsumer(ActorRef producer) {
this.producer = producer;
}
@ -20,4 +19,3 @@ public class HttpConsumer extends UntypedConsumerActor{
producer.forward(message, getContext());
}
}
//#HttpExample

View file

@ -1,4 +1,4 @@
package docs.camel.sample.http;
package sample.camel.http;
import akka.actor.ActorRef;
import akka.camel.CamelMessage;
@ -8,8 +8,7 @@ import org.apache.camel.Exchange;
import java.util.HashSet;
import java.util.Set;
//#HttpExample
public class HttpProducer extends UntypedProducerActor{
public class HttpProducer extends UntypedProducerActor {
private ActorRef transformer;
public HttpProducer(ActorRef transformer) {
@ -17,9 +16,13 @@ public class HttpProducer extends UntypedProducerActor{
}
public String getEndpointUri() {
// bridgeEndpoint=true makes the producer ignore the Exchange.HTTP_URI header,
// and use the endpoint's URI for request
return "jetty://http://akka.io/?bridgeEndpoint=true";
}
// before producing messages to endpoints, producer actors can pre-process
// them by overriding the onTransformOutgoingMessage method
@Override
public Object onTransformOutgoingMessage(Object message) {
if (message instanceof CamelMessage) {
@ -27,12 +30,14 @@ public class HttpProducer extends UntypedProducerActor{
Set<String> httpPath = new HashSet<String>();
httpPath.add(Exchange.HTTP_PATH);
return camelMessage.withHeaders(camelMessage.getHeaders(httpPath));
} else return super.onTransformOutgoingMessage(message);
} else
return super.onTransformOutgoingMessage(message);
}
// instead of replying to the initial sender, producer actors can implement custom
// response processing by overriding the onRouteResponse method
@Override
public void onRouteResponse(Object message) {
transformer.forward(message, getContext());
}
}
//#HttpExample

View file

@ -0,0 +1,15 @@
package sample.camel.http;
import akka.actor.*;
public class HttpSample {
public static void main(String[] args) {
ActorSystem system = ActorSystem.create("some-system");
final ActorRef httpTransformer = system.actorOf(Props.create(HttpTransformer.class));
final ActorRef httpProducer = system.actorOf(Props.create(HttpProducer.class, httpTransformer));
final ActorRef httpConsumer = system.actorOf(Props.create(HttpConsumer.class, httpProducer));
}
}

View file

@ -1,21 +1,18 @@
package docs.camel.sample.http;
package sample.camel.http;
import akka.actor.Status;
import akka.actor.UntypedActor;
import akka.camel.CamelMessage;
import akka.dispatch.Mapper;
import akka.japi.Function;
//#HttpExample
public class HttpTransformer extends UntypedActor{
public class HttpTransformer extends UntypedActor {
public void onReceive(Object message) {
if (message instanceof CamelMessage) {
CamelMessage camelMessage = (CamelMessage) message;
CamelMessage replacedMessage =
camelMessage.mapBody(new Mapper<Object, String>(){
CamelMessage replacedMessage = camelMessage.mapBody(new Mapper<Object, String>() {
@Override
public String apply(Object body) {
String text = new String((byte[])body);
String text = new String((byte[]) body);
return text.replaceAll("Akka ", "AKKA ");
}
});
@ -26,4 +23,3 @@ public class HttpTransformer extends UntypedActor{
unhandled(message);
}
}
//#HttpExample

View file

@ -1,9 +1,9 @@
package docs.camel.sample.quartz;
//#QuartzExample
package sample.camel.quartz;
import akka.camel.CamelMessage;
import akka.camel.javaapi.UntypedConsumerActor;
public class MyQuartzActor extends UntypedConsumerActor{
public class MyQuartzActor extends UntypedConsumerActor {
public String getEndpointUri() {
return "quartz://example?cron=0/2+*+*+*+*+?";
}
@ -11,10 +11,8 @@ public class MyQuartzActor extends UntypedConsumerActor{
public void onReceive(Object message) {
if (message instanceof CamelMessage) {
CamelMessage camelMessage = (CamelMessage) message;
String body = camelMessage.getBodyAs(String.class, getCamelContext());
System.out.println(String.format("==============> received %s ", body));
System.out.println(String.format("==============> received %s ", camelMessage));
} else
unhandled(message);
}
}
//#QuartzExample

View file

@ -1,5 +1,5 @@
package docs.camel.sample.quartz;
//#QuartzExample
package sample.camel.quartz;
import akka.actor.ActorSystem;
import akka.actor.Props;
@ -9,4 +9,3 @@ public class QuartzSample {
system.actorOf(Props.create(MyQuartzActor.class));
}
}
//#QuartzExample

View file

@ -0,0 +1,15 @@
package sample.camel.route;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;
public class CustomRouteBuilder extends RouteBuilder {
public void configure() throws Exception {
from("direct:welcome").process(new Processor() {
public void process(Exchange exchange) throws Exception {
exchange.getOut().setBody(String.format("Welcome %s", exchange.getIn().getBody()));
}
});
}
}

View file

@ -0,0 +1,19 @@
package sample.camel.route;
import akka.actor.*;
import akka.camel.CamelExtension;
public class CustomRouteSample {
@SuppressWarnings("unused")
public static void main(String[] args) {
try {
ActorSystem system = ActorSystem.create("some-system");
final ActorRef producer = system.actorOf(Props.create(RouteProducer.class));
final ActorRef mediator = system.actorOf(Props.create(RouteTransformer.class, producer));
final ActorRef consumer = system.actorOf(Props.create(RouteConsumer.class, mediator));
CamelExtension.get(system).context().addRoutes(new CustomRouteBuilder());
} catch (Exception e) {
e.printStackTrace();
}
}
}

View file

@ -1,14 +1,13 @@
package docs.camel.sample.route;
package sample.camel.route;
//#CustomRouteExample
import akka.actor.ActorRef;
import akka.camel.CamelMessage;
import akka.camel.javaapi.UntypedConsumerActor;
public class Consumer3 extends UntypedConsumerActor{
public class RouteConsumer extends UntypedConsumerActor {
private ActorRef transformer;
public Consumer3(ActorRef transformer){
public RouteConsumer(ActorRef transformer) {
this.transformer = transformer;
}
@ -26,4 +25,3 @@ public class Consumer3 extends UntypedConsumerActor{
unhandled(message);
}
}
//#CustomRouteExample

View file

@ -0,0 +1,9 @@
package sample.camel.route;
import akka.camel.javaapi.UntypedProducerActor;
public class RouteProducer extends UntypedProducerActor {
public String getEndpointUri() {
return "direct:welcome";
}
}

View file

@ -1,15 +1,14 @@
package docs.camel.sample.route;
//#CustomRouteExample
package sample.camel.route;
import akka.actor.ActorRef;
import akka.actor.UntypedActor;
import akka.camel.CamelMessage;
import akka.dispatch.Mapper;
import akka.japi.Function;
public class Transformer extends UntypedActor {
public class RouteTransformer extends UntypedActor {
private ActorRef producer;
public Transformer(ActorRef producer) {
public RouteTransformer(ActorRef producer) {
this.producer = producer;
}
@ -18,16 +17,14 @@ public class Transformer extends UntypedActor {
// example: transform message body "foo" to "- foo -" and forward result
// to producer
CamelMessage camelMessage = (CamelMessage) message;
CamelMessage transformedMessage =
camelMessage.mapBody(new Mapper<String, String>(){
@Override
public String apply(String body) {
return String.format("- %s -",body);
}
});
CamelMessage transformedMessage = camelMessage.mapBody(new Mapper<String, String>() {
@Override
public String apply(String body) {
return String.format("- %s -", body);
}
});
producer.forward(transformedMessage, getContext());
} else
unhandled(message);
}
}
//#CustomRouteExample

View file

(binary image file changed; 21 KiB)

View file

(binary image file changed; 7.3 KiB)

View file

(binary image file changed; 21 KiB)

View file

@ -0,0 +1,163 @@
<!-- <html> -->
<head>
<title>Akka Camel Samples with Java</title>
</head>
<body>
<div>
<p>
This tutorial contains 3 samples of
<a href="http://doc.akka.io/docs/akka/2.3-SNAPSHOT/java/camel.html" target="_blank">Akka Camel</a>.
</p>
<ul>
<li>Asynchronous routing and transformation</li>
<li>Custom Camel route</li>
<li>Quartz scheduler</li>
</ul>
</div>
<div>
<h2>Asynchronous routing and transformation</h2>
<p>
This example demonstrates how to implement consumer and producer actors that
support
<a href="http://doc.akka.io/docs/akka/2.3-SNAPSHOT/java/camel.html#Asynchronous_routing" target="_blank">
Asynchronous routing</a> with their Camel endpoints. The sample
application transforms the content of the Akka homepage, <a href="http://akka.io" target="_blank">http://akka.io</a>,
by replacing every occurrence of *Akka* with *AKKA*.
</p>
<p>
To run this example, go to the <a href="#run" class="shortcut">Run</a>
tab, and start the application main class <code>sample.camel.http.HttpSample</code> if it's not already started.
Then direct the browser to <a href="http://localhost:8875" target="_blank">http://localhost:8875</a> and the
transformed Akka homepage should be displayed. Please note that this example will probably not work if you're
behind an HTTP proxy.
</p>
<p>
The following figure gives an overview how the example actors interact with
external systems and with each other. A browser sends a GET request to
http://localhost:8875 which is the published endpoint of the
<a href="#code/src/main/java/sample/camel/http/HttpConsumer.java" class="shortcut">HttpConsumer</a>
actor. The <code>HttpConsumer</code> actor forwards the requests to the
<a href="#code/src/main/java/sample/camel/http/HttpProducer.java" class="shortcut">HttpProducer.java</a>
actor which retrieves the Akka homepage from http://akka.io. The retrieved HTML
is then forwarded to the
<a href="#code/src/main/java/sample/camel/http/HttpTransformer.java" class="shortcut">HttpTransformer.java</a>
actor which replaces all occurrences of *Akka* with *AKKA*. The transformation result is sent back the HttpConsumer
which finally returns it to the browser.
</p>
<img src="tutorial/camel-async-interact.png" width="400" />
<p>
Implementing the example actor classes and wiring them together is rather easy
as shown in <a href="#code/src/main/java/sample/camel/http/HttpConsumer.java" class="shortcut">HttpConsumer.java</a>,
<a href="#code/src/main/java/sample/camel/http/HttpProducer.java" class="shortcut">HttpProducer.java</a> and
<a href="#code/src/main/java/sample/camel/http/HttpTransformer.java" class="shortcut">HttpTransformer.java</a>.
</p>
<p>
The <a href="http://camel.apache.org/jetty.html" target="_blank">jetty endpoints</a> of HttpConsumer and
HttpProducer support asynchronous in-out message exchanges and do not allocate threads for the full duration of
the exchange. This is achieved by using <a href="http://wiki.eclipse.org/Jetty/Feature/Continuations" target="_blank">Jetty continuations</a>
on the consumer side and by using <a href="http://wiki.eclipse.org/Jetty/Tutorial/HttpClient" target="_blank">Jetty's asynchronous HTTP client</a>
on the producer side. The following high-level sequence diagram illustrates that.
</p>
<img src="tutorial/camel-async-sequence.png" width="400" />
</div>
<div>
<h2>Custom Camel route example</h2>
<p>
This section also demonstrates the combined usage of a
<a href="#code/src/main/java/sample/camel/route/RouteProducer.java" class="shortcut">RouteProducer</a> and a
<a href="#code/src/main/java/sample/camel/route/RouteConsumer.java" class="shortcut">RouteConsumer</a>
actor as well as the inclusion of a
<a href="#code/src/main/java/sample/camel/route/CustomRouteSample.java" class="shortcut">custom Camel route</a>.
The following figure gives an overview.
</p>
<img src="tutorial/camel-custom-route.png" width="400" />
<ul>
<li>A consumer actor receives a message from an HTTP client</li>
<li>It forwards the message to another actor that transforms the message (encloses
the original message into hyphens)</li>
<li>The transformer actor forwards the transformed message to a producer actor</li>
<li>The producer actor sends the message to a custom Camel route beginning at the
<code>direct:welcome</code> endpoint</li>
<li>A processor (transformer) in the custom Camel route prepends "Welcome" to the
original message and creates a result message</li>
<li>The producer actor sends the result back to the consumer actor which returns
it to the HTTP client</li>
</ul>
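The message flow above boils down to two plain string transformations; here is a minimal Scala sketch of just that logic (the function names are illustrative and the actor plumbing is omitted):

```scala
// Transformer actor step: enclose the original body in hyphens.
val hyphenate: String => String = body => "- %s -".format(body)

// Custom route processor step: prepend "Welcome" to the message body.
val welcome: String => String = body => "Welcome %s".format(body)

// Applied in order, they yield the tutorial's expected response for input "Anke".
println(welcome(hyphenate("Anke"))) // prints: Welcome - Anke -
```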
<p>
The producer actor knows where to send the reply because the consumer and
transformer actors have forwarded the original sender reference as well. The
application configuration and the route starting from <code>direct:welcome</code> are shown in the code above.
</p>
<p>
To run this example, go to the <a href="#run" class="shortcut">Run</a>
tab, and start the application main class <code>sample.camel.route.CustomRouteExample</code>.
</p>
<p>
POST a message to <code>http://localhost:8877/camel/welcome</code>.
</p>
<pre><code>
curl -H "Content-Type: text/plain" -d "Anke" http://localhost:8877/camel/welcome
</code></pre>
<p>
The response should be:
</p>
<pre><code>
Welcome - Anke -
</code></pre>
</div>
<div>
<h2>Quartz Scheduler Example</h2>
<p>
Here is an example showing how simple it is to implement a cron-style scheduler by
using the Camel Quartz component in Akka.
</p>
<p>
Open <a href="#code/src/main/java/sample/camel/quartz/MyQuartzActor.java" class="shortcut">MyQuartzActor.java</a>.
</p>
<p>
The example creates a "timer" actor which fires a message every 2
seconds.
</p>
<p>
For more information about the Camel Quartz component, see here:
<a href="http://camel.apache.org/quartz.html" target="_blank">http://camel.apache.org/quartz.html</a>
</p>
</div>
</body>
</html>

View file

@ -0,0 +1,17 @@
*#
*.iml
*.ipr
*.iws
*.pyc
*.tm.epoch
*.vim
*-shim.sbt
.idea/
/project/plugins/project
project/boot
target/
/logs
.cache
.classpath
.project
.settings

View file

@ -0,0 +1,13 @@
Copyright 2013 Typesafe, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View file

@ -0,0 +1,4 @@
name=akka-sample-camel-scala
title=Akka Camel Samples with Scala
description=Akka Camel Samples with Scala
tags=akka,camel,scala,sample

View file

@ -0,0 +1,14 @@
name := "akka-sample-camel-scala"

version := "1.0"

scalaVersion := "2.10.3"

libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-camel" % "2.3-SNAPSHOT",
  "org.apache.camel" % "camel-jetty" % "2.10.3",
  "org.apache.camel" % "camel-quartz" % "2.10.3",
  "org.slf4j" % "slf4j-api" % "1.7.2",
  "ch.qos.logback" % "logback-classic" % "1.0.7"
)

View file

@ -0,0 +1 @@
sbt.version=0.13.0

View file

@ -0,0 +1,58 @@
package sample.camel
import org.apache.camel.Exchange
import org.apache.camel.Processor
import org.apache.camel.builder.RouteBuilder
import akka.actor.Actor
import akka.actor.ActorRef
import akka.actor.ActorSystem
import akka.actor.Props
import akka.camel.CamelExtension
import akka.camel.CamelMessage
import akka.camel.Consumer
import akka.camel.Producer
object CustomRouteExample {
def main(args: Array[String]): Unit = {
val system = ActorSystem("some-system")
val producer = system.actorOf(Props[RouteProducer])
val mediator = system.actorOf(Props(classOf[RouteTransformer], producer))
val consumer = system.actorOf(Props(classOf[RouteConsumer], mediator))
CamelExtension(system).context.addRoutes(new CustomRouteBuilder)
}
class RouteConsumer(transformer: ActorRef) extends Actor with Consumer {
def endpointUri = "jetty:http://0.0.0.0:8877/camel/welcome"
def receive = {
// Forward a string representation of the message body to transformer
case msg: CamelMessage => transformer.forward(msg.withBodyAs[String])
}
}
class RouteTransformer(producer: ActorRef) extends Actor {
def receive = {
// example: transform message body "foo" to "- foo -" and forward result
// to producer
case msg: CamelMessage =>
producer.forward(msg.mapBody((body: String) => "- %s -" format body))
}
}
class RouteProducer extends Actor with Producer {
def endpointUri = "direct:welcome"
}
class CustomRouteBuilder extends RouteBuilder {
def configure {
from("direct:welcome").process(new Processor() {
def process(exchange: Exchange) {
// Create a 'welcome' message from the input message
exchange.getOut.setBody("Welcome %s" format exchange.getIn.getBody)
}
})
}
}
}

View file

@ -0,0 +1,58 @@
package sample.camel
import org.apache.camel.Exchange
import akka.actor.Actor
import akka.actor.ActorRef
import akka.actor.ActorSystem
import akka.actor.Props
import akka.actor.Status.Failure
import akka.actor.actorRef2Scala
import akka.camel.CamelMessage
import akka.camel.Consumer
import akka.camel.Producer
object HttpExample {
def main(args: Array[String]): Unit = {
val system = ActorSystem("some-system")
val httpTransformer = system.actorOf(Props[HttpTransformer])
val httpProducer = system.actorOf(Props(classOf[HttpProducer], httpTransformer))
val httpConsumer = system.actorOf(Props(classOf[HttpConsumer], httpProducer))
}
class HttpConsumer(producer: ActorRef) extends Consumer {
def endpointUri = "jetty:http://0.0.0.0:8875/"
def receive = {
case msg => producer forward msg
}
}
class HttpProducer(transformer: ActorRef) extends Actor with Producer {
// bridgeEndpoint=true makes the producer ignore the Exchange.HTTP_URI header,
// and use the endpoint's URI for request
def endpointUri = "jetty://http://akka.io/?bridgeEndpoint=true"
// before producing messages to endpoints, producer actors can pre-process
// them by overriding the transformOutgoingMessage method
override def transformOutgoingMessage(msg: Any) = msg match {
case camelMsg: CamelMessage => camelMsg.copy(headers =
camelMsg.headers(Set(Exchange.HTTP_PATH)))
}
// instead of replying to the initial sender, producer actors can implement custom
// response processing by overriding the routeResponse method
override def routeResponse(msg: Any) { transformer forward msg }
}
class HttpTransformer extends Actor {
def receive = {
case msg: CamelMessage =>
sender ! (msg.mapBody { body: Array[Byte] =>
new String(body).replaceAll("Akka ", "AKKA ")
})
case msg: Failure => sender ! msg
}
}
}

View file

@ -0,0 +1,26 @@
package sample.camel
import akka.actor.ActorSystem
import akka.actor.Props
import akka.camel.Consumer
object QuartzExample {
def main(args: Array[String]): Unit = {
val system = ActorSystem("my-quartz-system")
system.actorOf(Props[MyQuartzActor])
}
class MyQuartzActor extends Consumer {
def endpointUri = "quartz://example?cron=0/2+*+*+*+*+?"
def receive = {
case msg => println("==============> received %s " format msg)
}
}
}

Binary file not shown.

After

Width:  |  Height:  |  Size: 21 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 7.3 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 21 KiB

View file

@ -0,0 +1,161 @@
<!-- <html> -->
<head>
<title>Akka Camel Samples with Scala</title>
</head>
<body>
<div>
<p>
This tutorial contains 3 samples of
<a href="http://doc.akka.io/docs/akka/2.3-SNAPSHOT/scala/camel.html" target="_blank">Akka Camel</a>.
</p>
<ul>
<li>Asynchronous routing and transformation</li>
<li>Custom Camel route</li>
<li>Quartz scheduler</li>
</ul>
</div>
<div>
<h2>Asynchronous routing and transformation</h2>
<p>
This example demonstrates how to implement consumer and producer actors that
support
<a href="http://doc.akka.io/docs/akka/2.3-SNAPSHOT/scala/camel.html#Asynchronous_routing" target="_blank">
Asynchronous routing</a> with their Camel endpoints. The sample
application transforms the content of the Akka homepage, <a href="http://akka.io" target="_blank">http://akka.io</a>,
by replacing every occurrence of *Akka* with *AKKA*.
</p>
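The core of this transformation is a plain function over the retrieved page body; a minimal sketch, mirroring the <code>mapBody</code> call in HttpExample.scala (the surrounding actor plumbing is omitted):

```scala
// The HttpTransformer maps the response body (a byte array),
// replacing "Akka " with "AKKA ", as in HttpExample.scala.
val transform: Array[Byte] => String =
  body => new String(body).replaceAll("Akka ", "AKKA ")

println(transform("Akka homepage about Akka actors".getBytes("UTF-8")))
// prints: AKKA homepage about AKKA actors
```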
<p>
To run this example, go to the <a href="#run" class="shortcut">Run</a>
tab, and start the application main class <code>sample.camel.HttpExample</code> if it's not already started.
Then direct the browser to <a href="http://localhost:8875" target="_blank">http://localhost:8875</a> and the
transformed Akka homepage should be displayed. Please note that this example will probably not work if you're
behind an HTTP proxy.
</p>
<p>
The following figure gives an overview of how the example actors interact with
external systems and with each other. A browser sends a GET request to
http://localhost:8875 which is the published endpoint of the
<a href="#code/src/main/scala/sample/camel/HttpExample.scala" class="shortcut">HttpConsumer</a>
actor. The <code>HttpConsumer</code> actor forwards the requests to the
<a href="#code/src/main/scala/sample/camel/HttpExample.scala" class="shortcut">HttpProducer</a>
actor which retrieves the Akka homepage from http://akka.io. The retrieved HTML
is then forwarded to the
<a href="#code/src/main/scala/sample/camel/HttpExample.scala" class="shortcut">HttpTransformer</a>
actor which replaces all occurrences of *Akka* with *AKKA*. The transformation result is sent back to the HttpConsumer,
which finally returns it to the browser.
</p>
<img src="tutorial/camel-async-interact.png" width="400" />
<p>
Implementing the example actor classes and wiring them together is rather easy
as shown in <a href="#code/src/main/scala/sample/camel/HttpExample.scala" class="shortcut">HttpExample.scala</a>.
</p>
<p>
The <a href="http://camel.apache.org/jetty.html" target="_blank">jetty endpoints</a> of HttpConsumer and
HttpProducer support asynchronous in-out message exchanges and do not allocate threads for the full duration of
the exchange. This is achieved by using <a href="http://wiki.eclipse.org/Jetty/Feature/Continuations" target="_blank">Jetty continuations</a>
on the consumer side and by using <a href="http://wiki.eclipse.org/Jetty/Tutorial/HttpClient" target="_blank">Jetty's asynchronous HTTP client</a>
on the producer side. The following high-level sequence diagram illustrates that.
</p>
<img src="tutorial/camel-async-sequence.png" width="400" />
</div>
<div>
<h2>Custom Camel route example</h2>
<p>
This section also demonstrates the combined usage of a
<a href="#code/src/main/scala/sample/camel/CustomRouteExample.scala" class="shortcut">RouteProducer</a>
and a <a href="#code/src/main/scala/sample/camel/CustomRouteExample.scala" class="shortcut">RouteConsumer</a>
actor as well as the inclusion of a
<a href="#code/src/main/scala/sample/camel/CustomRouteExample.scala" class="shortcut">custom Camel route</a>.
The following figure gives an overview.
</p>
<img src="tutorial/camel-custom-route.png" width="400" />
<ul>
<li>A consumer actor receives a message from an HTTP client</li>
<li>It forwards the message to another actor that transforms the message (encloses
the original message into hyphens)</li>
<li>The transformer actor forwards the transformed message to a producer actor</li>
<li>The producer actor sends the message to a custom Camel route beginning at the
<code>direct:welcome</code> endpoint</li>
<li>A processor (transformer) in the custom Camel route prepends "Welcome" to the
original message and creates a result message</li>
<li>The producer actor sends the result back to the consumer actor which returns
it to the HTTP client</li>
</ul>
<p>
The producer actor knows where to send the reply because the consumer and
transformer actors have forwarded the original sender reference as well. The
application configuration and the route starting from <code>direct:welcome</code> are shown in the code above.
</p>
<p>
To run this example, go to the <a href="#run" class="shortcut">Run</a>
tab, and start the application main class <code>sample.camel.CustomRouteExample</code>.
</p>
<p>
POST a message to <code>http://localhost:8877/camel/welcome</code>.
</p>
<pre><code>
curl -H "Content-Type: text/plain" -d "Anke" http://localhost:8877/camel/welcome
</code></pre>
<p>
The response should be:
</p>
<pre><code>
Welcome - Anke -
</code></pre>
</div>
<div>
<h2>Quartz Scheduler Example</h2>
<p>
Here is an example showing how simple it is to implement a cron-style scheduler by
using the Camel Quartz component in Akka.
</p>
<p>
Open <a href="#code/src/main/scala/sample/camel/QuartzExample.scala" class="shortcut">QuartzExample.scala</a>.
</p>
<p>
The example creates a "timer" actor which fires a message every 2
seconds.
</p>
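The endpoint URI in QuartzExample.scala encodes a Quartz cron expression with <code>+</code> as the field separator; a small plain-Scala sketch decoding it (no Camel required):

```scala
// Endpoint URI from QuartzExample.scala; '+' separates the Quartz cron fields.
val uri = "quartz://example?cron=0/2+*+*+*+*+?"
val fields = uri.split("cron=")(1).split('+').toList

// Quartz field order: seconds, minutes, hours, day-of-month, month, day-of-week.
// "0/2" in the seconds field means: fire every 2 seconds, starting at second 0.
println(fields) // prints: List(0/2, *, *, *, *, ?)
```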
<p>
For more information about the Camel Quartz component, see here:
<a href="http://camel.apache.org/quartz.html" target="_blank">http://camel.apache.org/quartz.html</a>
</p>
</div>
</body>
</html>

View file

@ -1,15 +0,0 @@
Camel Sample
============
This sample is meant to be used by studying the code; it does not perform any
astounding functions when running it. If you want to run it, check out the akka
sources on your local hard drive, follow the [instructions for setting up Akka
with SBT](http://doc.akka.io/docs/akka/current/intro/getting-started.html).
When you start SBT within the checked-out akka source directory, you can run
this sample by typing
akka-sample-camel/run
and then choose which of the demonstrations you would like to run.
You can read more in the [Akka docs](http://akka.io/docs).

View file

@ -1,49 +0,0 @@
import akka.actor.Status.Failure
import akka.actor.{ Actor, ActorRef, Props, ActorSystem }
import akka.camel.{ Producer, CamelMessage, Consumer }
import org.apache.camel.{ Exchange }
/**
* Asynchronous routing and transformation example
*/
object AsyncRouteAndTransform extends App {
val system = ActorSystem("rewriteAkkaToAKKA")
val httpTransformer = system.actorOf(Props[HttpTransformer], "transformer")
val httpProducer = system.actorOf(Props(classOf[HttpProducer], httpTransformer), "producer")
val httpConsumer = system.actorOf(Props(classOf[HttpConsumer], httpProducer), "consumer")
}
class HttpConsumer(producer: ActorRef) extends Consumer {
def endpointUri = "jetty:http://0.0.0.0:8875/"
def receive = {
case msg => producer forward msg
}
}
class HttpProducer(transformer: ActorRef) extends Actor with Producer {
def endpointUri = "jetty://http://akka.io/?bridgeEndpoint=true"
override def transformOutgoingMessage(msg: Any) = msg match {
case msg: CamelMessage => msg.copy(headers = msg.headers(Set(Exchange.HTTP_PATH)))
}
override def routeResponse(msg: Any) {
transformer forward msg
}
}
class HttpTransformer extends Actor {
def receive = {
case msg: CamelMessage =>
val transformedMsg = msg.mapBody {
(body: Array[Byte]) =>
new String(body).replaceAll("Akka", "<b>AKKA</b>")
// just to make the result look a bit better.
.replaceAll("href=\"/resources", "href=\"http://akka.io/resources")
.replaceAll("src=\"/resources", "src=\"http://akka.io/resources")
}
sender ! transformedMsg
case msg: Failure => sender ! msg
}
}

View file

@ -1,24 +0,0 @@
import akka.actor.{ Props, ActorSystem }
import akka.camel.{ CamelMessage, Consumer }
import java.io.File
import org.apache.camel.Exchange
object SimpleFileConsumer extends App {
val subDir = "consume-files"
val tmpDir = System.getProperty("java.io.tmpdir")
val consumeDir = new File(tmpDir, subDir)
consumeDir.mkdirs()
val tmpDirUri = "file://%s/%s" format (tmpDir, subDir)
val system = ActorSystem("consume-files")
val fileConsumer = system.actorOf(Props(classOf[FileConsumer], tmpDirUri), "fileConsumer")
println(String.format("Put a text file in '%s', the consumer will pick it up!", consumeDir))
}
class FileConsumer(uri: String) extends Consumer {
def endpointUri = uri
def receive = {
case msg: CamelMessage =>
println("Received file %s with content:\n%s".format(msg.headers(Exchange.FILE_NAME), msg.bodyAs[String]))
}
}

View file

@ -0,0 +1,17 @@
*#
*.iml
*.ipr
*.iws
*.pyc
*.tm.epoch
*.vim
*-shim.sbt
.idea/
/project/plugins/project
project/boot
target/
/logs
.cache
.classpath
.project
.settings

View file

@ -0,0 +1,13 @@
Copyright 2013 Typesafe, Inc.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View file

@ -0,0 +1,4 @@
name=akka-sample-cluster-java
title=Akka Cluster Samples with Java
description=Akka Cluster Samples with Java
tags=akka,cluster,java,sample

View file

@ -0,0 +1,32 @@
import sbt._
import sbt.Keys._
import com.typesafe.sbt.SbtMultiJvm
import com.typesafe.sbt.SbtMultiJvm.MultiJvmKeys.MultiJvm
object AkkaSampleClusterBuild extends Build {
val akkaVersion = "2.3-SNAPSHOT"
lazy val akkaSampleCluster = Project(
id = "akka-sample-cluster-java",
base = file("."),
settings = Project.defaultSettings ++ SbtMultiJvm.multiJvmSettings ++ Seq(
name := "akka-sample-cluster-java",
version := "1.0",
scalaVersion := "2.10.3",
scalacOptions in Compile ++= Seq("-encoding", "UTF-8", "-target:jvm-1.6", "-deprecation", "-feature", "-unchecked", "-Xlog-reflective-calls", "-Xlint"),
javacOptions in Compile ++= Seq("-source", "1.6", "-target", "1.6", "-Xlint:unchecked", "-Xlint:deprecation"),
libraryDependencies ++= Seq(
"com.typesafe.akka" %% "akka-cluster" % akkaVersion,
"com.typesafe.akka" %% "akka-contrib" % akkaVersion,
"com.typesafe.akka" %% "akka-multi-node-testkit" % akkaVersion,
"org.scalatest" %% "scalatest" % "2.0" % "test",
"org.fusesource" % "sigar" % "1.6.4"),
javaOptions in run ++= Seq(
"-Djava.library.path=./sigar",
"-Xms128m", "-Xmx1024m"),
Keys.fork in run := true,
mainClass in (Compile, run) := Some("sample.cluster.simple.SimpleClusterApp")
)
) configs (MultiJvm)
}

View file

@ -0,0 +1 @@
sbt.version=0.13.0

Some files were not shown because too many files have changed in this diff.