Cleaned up akka-docs, removed old and unused docs, fixed some errors etc.

This commit is contained in:
Jonas Bonér 2012-05-11 10:00:12 +02:00
parent ce105ca742
commit 41ba88e1f8
13 changed files with 2 additions and 1837 deletions

View file

@ -1,170 +0,0 @@
Companies and Open Source projects using Akka
=============================================
Production Users
****************
These are some of the production Akka users that are able to talk about their use publicly.
CSC
---
CSC is a global provider of information technology services. The Traffic Management business unit in the Netherlands is a systems integrator for the implementation of Traffic Information and Traffic Enforcement Systems, such as section control, weigh-in-motion, travel time and traffic jam detection, and a national data warehouse for traffic information. CSC Traffic Management is using Akka for their latest Traffic Information and Traffic Enforcement Systems.
`<http://www.csc.com/nl/ds/42449-traffic_management>`_
*"Akka has been in use for almost a year now (since 0.7) and has been used successfully for two projects so far. Akka has enabled us to deliver very flexible, scalable and high performing systems with as little friction as possible. The Actor model has simplified a lot of concerns in the type of systems that we build and is now part of our reference architecture. With Akka we deliver systems that meet the most strict performance requirements of our clients in a near-realtime environment. We have found the Akka framework and it's support team invaluable."*
Thatcham Motor Insurance Repair Research Centre
-----------------------------------------------
Thatcham is a EuroNCAP member. They research efficient, safe, cost-effective repair of vehicles, and work with manufacturers to influence the design of new vehicles. Thatcham are using Akka as the implementation for their distributed modules. All Scala-based research software now talks to an Akka-based publishing platform. Using Akka enables Thatcham to 'free their domain', ensures that the platform is cloud-enabled and scalable, and gives the team confidence that they can stay flexible. Akka has been in use, tested under load at Thatcham for almost a year, with no problems migrating up through the different versions. An old website currently under redesign on a new Scala-powered platform: `www.thatcham.org <http://www.thatcham.org>`_
*“We have been in production with Akka for over 18 months with zero downtime. The core is rock solid, never a problem, performance is great, integration capabilities are diverse and ever growing, and the toolkit is just a pleasure to work with. Combine that with the excellent response you get from the devs and users on this list and you have a winner. Absolutely no regrets on our part for choosing to work with Akka.”*
*"Scala and Akka are now enabling improvements in the standard of vehicle damage assessment, and in the safety of vehicle repair across the UK, with Europe, USA, Asia and Australasia to follow. Thatcham (Motor Insurance Repair Research Centre) are delivering crash specific information with linked detailed repair information for over 7000 methods.*
*For Thatcham, the technologies enable scalability and elegance when dealing with complicated design constraints. Because of the complexity of interlinked methods, caching is virtually impossible in most cases, so in steps the 'actors' paradigm. Where previously something like JMS would have provided a stable but heavyweight, rigid solution, Thatcham are now more flexible, and can expand into the cloud in a far simpler, more rewarding way.*
*Thatcham's customers, body shop repairers and insurers receive up to date repair information in the form of crash repair documents of the quality necessary to ensure that every vehicle is repaired back to the original safety standard. In a market as important as this, availability is key, as is performance. Scala and Akka have delivered consistently so far.*
*While recently introduced, growing numbers of UK repairers are receiving up to date repair information from this service, with the rest to follow shortly. Plans are already in motion to build new clusters to roll the service out across Europe, USA, Asia and Australasia.*
*The sheer opportunities opened up to teams by Scala and Akka, in terms of integration, concise expression of intent and scalability are of huge benefit."*
SVT (Swedish Television)
------------------------
`<http://svt.se>`_
*“I'm currently working in a project at the Swedish Television where we're developing a subtitling system with collaboration capabilities similar to Google Wave. It's a mission-critical system and the design and server implementation is all based on Akka and actors etc. We've been running in production for about 6 months and have been upgrading Akka whenever a new release comes out. We've never had a single bug due to Akka, and it's been a pure pleasure to work with. I would choose Akka any day of the week!*
*Our system is highly asynchronous so the actor style of doing things is a perfect fit. I don't know how you feel about concurrency in a big system, but rolling your own abstractions is not a very easy thing to do. When using Akka you can almost forget about all that: synchronizing between threads, locking and protecting access to state etc. Akka is not just about actors, but they are one of the most pleasurable things to work with. It's easy to add new ones and it's easy to design with actors. You can fire up worker actors tied to a specific dispatcher etc. I could make the list of benefits much longer, but I'm at work right now. I suggest you try it out and see how it fits your requirements.*
*We saw a perfect business reason for using Akka. It lets you concentrate on the business logic instead of the low-level things. It's easy to teach others and the business intent is clear just by reading the code. We didn't choose Akka just for fun. It's a business-critical application that's used in broadcasting, even live broadcasting. We wouldn't have been where we are today in such a short time without using Akka. We're two developers that have done great things in a short amount of time and part of this is due to Akka. As I said, it lets us focus on the business logic instead of low-level things such as concurrency, locking, performance etc."*
Tapad
-----
`<http://tapad.com>`_
*"Tapad is building a real-time ad exchange platform for advertising on mobile and connected devices. Real-time ad exchanges allows for advertisers (among other things) to target audiences instead of buying fixed set of ad slots that will be displayed “randomly” to users. To developers without experience in the ad space, this might seem boring, but real-time ad exchanges present some really interesting technical challenges.*
*Take for instance the process backing a page view with ads served by a real-time ad exchange auction (somewhat simplified):*
1. *A user opens a site (or app) which has ads in it.*
2. *As the page / app loads, the ad serving component fires off a request to the ad exchange (this might just be due to an image tag on the page).*
3. *The ad exchange enriches the request with any information about the current user (tracking cookies are often employed for this) and display context information (“news article about parenting”, “blog about food” etc).*
4. *The ad exchange forwards the enriched request to all bidders registered with the ad exchange.*
5. *The bidders consider the provided user information and respond with the price they are willing to pay for this particular ad slot.*
6. *The ad exchange picks the highest bidder and ensures that the winning bidder's ad is shown to the user.*
*Any latency in this process directly influences user experience latency, so this has to happen really fast. All-in-all, the total time should not exceed about 100ms and most ad exchanges allow bidders to spend about 60ms (including network time) to return their bids. That leaves the ad exchange with less than 40ms to facilitate the auction. At Tapad, this happens billions of times per month / tens of thousands of times per second.*
*Tapad is building bidders which will participate in auctions facilitated by other ad exchanges, but we're also building our own. We are using Akka in several ways in several parts of the system. Here are some examples:*
*Plain old parallelization*
*During an auction in the real-time exchange, it's obvious that all bidders must receive the bid requests in parallel. An auctioneer actor sends the bid requests to bidder actors which in turn handle throttling and eventually IO. We use futures in these requests and the auctioneer discards any responses which arrive too late.*
*Inside our bidders, we also rely heavily on parallel execution. In order to determine how much to pay for an ad slot, several data stores are queried for information pertinent to the current user. In a “traditional” system, we'd be doing this sequentially, but again, due to the extreme latency constraints, we're doing this concurrently. Again, this is done with futures, and data that is not available in time gets cut from the decision making (and logged :)).*
*Maintaining state under concurrent load*
*This is probably the de facto standard use case for the actor model. Bidders internal to our system are actors backed by an advertiser campaign. A campaign includes, among other things, budget and “pacing” information. The budget determines how much money to spend for the duration of the campaign, whereas pacing information might set constraints on how quickly or slowly the money should be spent. Ad traffic changes from day to day and from hour to hour and our spending algorithms consider past performance in order to spend the right amount of money at the right time. Needless to say, these algorithms use a lot of state and this state is in constant flux. A bidder with a high budget may see tens of thousands of bid requests per second. Luckily, due to round-robin load-balancing and the predictability of randomness under heavy traffic, the bidder actors do not share state across cluster nodes; they just share their instance count so they know which fraction of the campaign budget to try to spend.*
*Pacing is also done for external bidders. Each 3rd party bidder end-point has an actor coordinating requests and measuring latency and throughput. The actor never blocks itself, but when an incoming bid request is received, it considers the current performance of the 3rd party system and decides whether to pass on the request and respond negatively immediately, or forward the request to the 3rd party request executor component (which handles the IO).*
*Batch processing*
*We store a lot of data about every single ad request we serve and this is stored in a key-value data store. Due to the performance characteristics of the data store, it is not feasible to store every single data point one at a time - it must be batched up and performed in parallel. We don't need a durable messaging system for this (losing a couple of hundred data points is no biggie). All our data logging happens asynchronously and we have basic load-balanced actors which batch incoming messages and write on regular intervals (using Scheduler) or whenever the specified batch size has been reached.*
*Analytics*
*Needless to say, it's not feasible / useful to store our traffic information in a relational database. A lot of analytics and data analysis is done “offline” with map / reduce on top of the data store, but this doesn't work well for the real-time analytics which our customers love. We therefore have metrics actors that receive campaign bidding and click / impression information in real-time, aggregate this information over configurable periods of time and flush it to the database used for customer dashboards for “semi-real-time” display. Five-minute history is considered real-time in this business, but in theory, we could have queried the actors directly for really real-time data. :)*
*Our Akka journey started as a prototyping project, but Akka has now become a crucial part of our system. All of the above mentioned components, except the 3rd party bidder integration, have been running in production for a couple of weeks (on Akka 1.0RC3) and we have not seen any issues at all so far."*
Flowdock
--------
Flowdock delivers Google Wave for the corporate world.
*"Flowdock makes working together a breeze. Organize the flow of information, task things over and work together towards common goals seamlessly on the web - in real time."*
`<http://flowdock.com/>`_
Travel Budget
-------------
`<http://labs.inevo.pt/travel-budget>`_
Says.US
-------
*"says.us is a gathering place for people to connect in real time - whether an informal meeting of people who love Scala or a chance for people anywhere to speak out about the latest headlines."*
`<http://says.us/>`_
LShift
------
* *"Diffa is an open source data analysis tool that automatically establishes data differences between two or more real-time systems.*
* Diffa will help you compare local or distributed systems for data consistency, without having to stop them running or implement manual cross-system comparisons. The interface provides you with simple visual summary of any consistency breaks and tools to investigate the issues.*
* Diffa is the ideal tool to use to investigate where or when inconsistencies are occurring, or simply to provide confidence that your systems are running in perfect sync. It can be used operationally as an early warning system, in deployment for release verification, or in development with other enterprise diagnosis tools to help troubleshoot faults."*
`<http://diffa.lshift.net/>`_
Twimpact
--------
*"Real-time twitter trends and user impact"*
`<http://twimpact.com>`_
Rocket Pack Platform
--------------------
*"Rocket Pack Platform is the only fully integrated solution for plugin-free browser game development."*
`<http://rocketpack.fi/platform/>`_
Open Source Projects using Akka
*******************************
Redis client
------------
*A Redis client written in Scala, using Akka actors, HawtDispatch and non-blocking IO. Supports Redis 2.0+*
`<http://github.com/derekjw/fyrie-redis>`_
Narrator
--------
*"Narrator is a a library which can be used to create story driven clustered load-testing packages through a very readable and understandable api."*
`<http://github.com/shorrockin/narrator>`_
Kandash
-------
*"Kandash is a lightweight kanban web-based board and set of analytics tools."*
`<http://vasilrem.com/blog/software-development/kandash-project-v-0-3-is-now-available/>`_
`<http://code.google.com/p/kandash/>`_
Wicket Cassandra Datastore
--------------------------
This project provides an org.apache.wicket.pageStore.IDataStore implementation that writes pages to an Apache Cassandra cluster using Akka.
`<http://github.com/gseitz/wicket-cassandra-datastore/>`_
Spray
-----
*"spray is a lightweight Scala framework for building RESTful web services on top of Akka actors and Akka Mist. It sports the following main features:*
* *Completely asynchronous, non-blocking, actor-based request processing for efficiently handling very high numbers of concurrent connections*
* *Powerful, flexible and extensible internal Scala DSL for declaratively defining your web service behavior*
* *Immutable model of the HTTP protocol, decoupled from the underlying servlet container*
* *Full testability of your REST services, without the need to fire up containers or actors"*
`<https://github.com/spray/spray/wiki>`_

View file

@ -5,6 +5,4 @@ Additional Information
:maxdepth: 2
recipes
companies-using-akka
third-party-integrations
language-bindings

View file

@ -1,23 +0,0 @@
Third-party Integrations
========================
The Play! Framework
-------------------
Play 2.0 is built on Akka; all of its eventing and threading is implemented with Akka actors and futures.
Read more here: `<http://www.playframework.org/2.0>`_.
Scalatra
--------
Scalatra has Akka integration.
Read more here: `<https://github.com/scalatra/scalatra/blob/develop/akka/src/main/scala/org/scalatra/akka/AkkaSupport.scala>`_
Gatling
-------
Gatling is an Open Source Stress Tool.
Read more here: `<http://gatling-tool.org/>`_

View file

@ -1,133 +0,0 @@
Articles & Presentations
========================
Videos
------
`Functional Programming eXchange - March 2011 <http://skillsmatter.com/podcast/scala/simpler-scalability-fault-tolerance-concurrency-remoting-through-actors>`_
`NE Scala - Feb 2011 <http://vimeo.com/20297968>`_
`JFokus - Feb 2011 <http://79.136.112.58/ability/show/xaimkwdli/a2_20110216_1110/mainshow.asp?STREAMID=1>`_.
`London Scala User Group - Oct 2010 <http://skillsmatter.com/podcast/scala/akka-simpler-scalability-fault-tolerance-concurrency-remoting-through-actors>`_
`Akka LinkedIn Tech Talk - Sept 2010 <http://sna-projects.com/blog/2010/10/akka>`_
`Akka talk at Scala Days - March 2010 <http://days2010.scala-lang.org/node/138/162>`_
`Devoxx 2010 talk "Akka: Simpler Scalability, Fault-Tolerance, Concurrency" by Viktor Klang <http://parleys.com/d/2089>`_
Articles
--------
`Scatter-Gather with Akka Dataflow <http://vasilrem.com/blog/software-development/scatter-gather-with-akka-dataflow/>`_
`Actor-Based Continuations with Akka and Swarm <http://www.earldouglas.com/actor-based-continuations-with-akka-and-swarm>`_
`Mimicking Twitter Using an Akka-Based Event-Driven Architecture <http://www.earldouglas.com/mimicking-twitter-using-an-akka-based-event-driven-architecture>`_
`Remote Actor Class Loading with Akka <https://www.earldouglas.com/remote-actor-class-loading-with-akka>`_
`Akka Producer Actors: New Features and Best Practices <http://krasserm.blogspot.com/2011/02/akka-producer-actor-new-features-and.html>`_
`Akka Consumer Actors: New Features and Best Practices <http://krasserm.blogspot.com/2011/02/akka-consumer-actors-new-features-and.html>`_
`Compute Grid with Cloudy Akka <http://letitcrash.com/compute-grid-with-cloudy-akka>`_
`Clustered Actors with Cloudy Akka <http://letitcrash.com/clustered-actors-with-cloudy-akka>`_
`Unit testing Akka Actors with the TestKit <http://roestenburg.agilesquad.com/2011/02/unit-testing-akka-actors-with-testkit_12.html>`_
`Starting with Akka 1.0 <http://roestenburg.agilesquad.com/2011/02/starting-with-akka-10.html>`_
`Akka Does Async <http://altdevblogaday.com/akka-does-async>`_
`CQRS with Akka actors and functional domain models <http://debasishg.blogspot.com/2011/01/cqrs-with-akka-actors-and-functional.html>`_
`High Level Concurrency with JRuby and Akka Actors <http://metaphysicaldeveloper.wordpress.com/2010/12/16/high-level-concurrency-with-jruby-and-akka-actors/>`_
`Container-managed actor dispatchers <http://vasilrem.com/blog/software-development/container-managed-actor-dispatchers/>`_
`Even simpler scalability with Akka through RegistryActor <http://vasilrem.com/blog/software-development/even-simpler-scalability-with-akka-through-registryactor/>`_
`FSM in Akka (in Vietnamese) <http://cntt.tv/nodes/show/559>`_
`Repeater and Idempotent Receiver implementation in Akka <http://roestenburg.agilesquad.com/2010/09/repeater-and-idempotent-receiver.html>`_
`EDA Akka as EventBus <http://fornax-sculptor.blogspot.com/2010/08/eda-akka-as-eventbus.html>`_
`Upgrading examples to Akka master (0.10) and Scala 2.8.0 Final <http://roestenburg.agilesquad.com/2010/07/upgrading-to-akka-master-010-and-scala.html>`_
`Testing Akka Remote Actor using Serializable.Protobuf <http://roestenburg.agilesquad.com/2010/05/testing-akka-remote-actor-using.html>`_
`Flexible load balancing with Akka in Scala <http://vasilrem.com/blog/software-development/flexible-load-balancing-with-akka-in-scala/>`_
`Eventually everything, and actors <http://rossputo.blogspot.com/2010/05/eventually-everything-and-actors.html>`_
`Join messages with Akka <http://roestenburg.agilesquad.com/2010/05/join-messages-with-akka.html>`_
`Starting with Akka part 2, Intellij IDEA, Test Driven Development <http://roestenburg.agilesquad.com/2010/05/starting-with-akka-part-2-intellij-idea.htm>`_
`Starting with Akka and Scala <http://roestenburg.agilesquad.com/2010/04/starting-with-akka-and-scala.html>`_
`PubSub using Redis and Akka Actors <http://debasishg.blogspot.com/2010/04/pubsub-with-redis-and-akka-actors.html>`_
`Akka's grown-up hump <http://krasserm.blogspot.com/2010/08/akkas-grown-up-hump.html>`_
`Akka features for application integration <http://krasserm.blogspot.com/2010/04/akka-features-for-application.html>`_
`Load Balancing Actors with Work Stealing Techniques <http://janvanbesien.blogspot.com/2010/03/load-balancing-actors-with-work.html>`_
`Domain Services and Bounded Context using Akka - Part 2 <http://debasishg.blogspot.com/2010/03/domain-services-and-bounded-context.html>`_
`Thinking Asynchronous - Domain Modeling using Akka Transactors - Part 1 <http://debasishg.blogspot.com/2010/03/thinking-asynchronous-domain-modeling.html>`_
`Introducing Akka Simpler Scalability, Fault-Tolerance, Concurrency & Remoting through Actors <http://jonasboner.com/2010/01/04/introducing-akka.html>`_
`Using Cassandra with Scala and Akka <http://codemonkeyism.com/cassandra-scala-akka/>`_
`No Comet, Hacking with WebSocket and Akka <http://debasishg.blogspot.com/2009/12/no-comet-hacking-with-websocket-and.html>`_
`MongoDB for Akka Persistence <http://debasishg.blogspot.com/2009/08/mongodb-for-akka-persistence.html>`_
`Pluggable Persistent Transactors with Akka <http://debasishg.blogspot.com/2009/10/pluggable-persistent-transactors-with.html>`_
`Enterprise scala actors: introducing the Akka framework <http://blog.xebia.com/2009/10/22/scala-actors-for-the-enterprise-introducing-the-akka-framework/>`_
Books
-----
`Akka and Camel <http://www.manning.com/ibsen/appEsample.pdf>`_ (appendix E of `Camel in Action <http://www.manning.com/ibsen/>`_)
`Ett första steg i Scala <http://www.studentlitteratur.se/o.o.i.s?id=2474&artnr=33847-01&csid=66&mp=4918>`_ (Kapitel "Aktörer och Akka") (en. "A first step in Scala", chapter "Actors and Akka", book in Swedish)
Presentations
-------------
`Slides from Akka talk at Scala Days 2010, good short intro to Akka <http://www.slideshare.net/jboner/akka-scala-days-2010>`_
`Akka: Simpler Scalability, Fault-Tolerance, Concurrency & Remoting through Actors <http://www.slideshare.net/jboner/akka-simpler-scalability-faulttolerance-concurrency-remoting-through-actors>`_
`<http://ccombs.net/storage/presentations/Akka_High_Level_Abstractions.pdf>`_
`<https://github.com/deanwampler/Presentations/tree/master/akka-intro/>`_
Podcasts
--------
`Episode 16 Scala and Akka an Interview with Jonas Boner <http://basementcoders.com/?p=711>`_
`Jonas Boner on the Akka framework, Scala, and highly scalable applications <http://techcast.chariotsolutions.com/index.php?post_id=557314>`_
Interviews
----------
`JetBrains/DZone interview: Talking about Akka, Scala and life with Jonas Bonér <http://jetbrains.dzone.com/articles/talking-about-akka-scala-and>`_
`Artima interview of Jonas on Akka 1.0 <http://www.artima.com/scalazine/articles/akka_jonas_boner.html>`_
`InfoQ interview of Jonas on Akka 1.0 <http://www.infoq.com/news/2011/02/akka10>`_
`InfoQ interview of Jonas on Akka 0.7 <http://www.infoq.com/news/2010/03/akka-10>`_
`<http://jaxenter.com/we-ve-added-tons-of-new-features-since-0-10-33360.html>`_

View file

@ -1,682 +0,0 @@
Clustering
==========
Overview
--------
The clustering module provides services like group membership, clustering, and
failover of actors.
The clustering module is based on ZooKeeper.
Starting up the ZooKeeper ensemble
----------------------------------
Embedded ZooKeeper server
~~~~~~~~~~~~~~~~~~~~~~~~~
For testing purposes the simplest way is to start up a single embedded ZooKeeper
server. This can be done like this:
.. literalinclude:: examples/clustering.scala
:language: scala
:lines: 1-2,5,2-4,7
You can leave ``port`` and ``tickTime`` out which will then default to port 2181
and tick time 5000 ms.
ZooKeeper server ensemble
~~~~~~~~~~~~~~~~~~~~~~~~~
For production you should always run an ensemble of at least 3 servers. The
number of servers should be odd so that a majority quorum can be formed, e.g. 3, 5, 7 etc.
|more| Read more about this in the `ZooKeeper Installation and Admin Guide
<http://hadoop.apache.org/zookeeper/docs/r3.1.1/zookeeperAdmin.htm>`_.
In the future the Cloudy Akka Provisioning module will automate this.
Creating, starting and stopping a cluster node
----------------------------------------------
Once we have one or more ZooKeeper servers running we can create and start up a
cluster node.
Cluster configuration
~~~~~~~~~~~~~~~~~~~~~
Cluster is configured in the ``akka.cloud.cluster`` section in the :ref:`configuration`.
Here you specify the default addresses to the ZooKeeper
servers, timeouts, whether compression should be on or off, and so on.
.. code-block:: conf

    akka {
      cloud {
        cluster {
          zookeeper-server-addresses = "wallace:2181,gromit:2181"
          remote-server-port = 2552
          max-time-to-wait-until-connected = 5
          session-timeout = 60
          connection-timeout = 30
          use-compression = on
        }
      }
    }
Creating a node
~~~~~~~~~~~~~~~
The first thing you need to do on each node is to create a new cluster
node. That is done by invoking ``newNode`` on the ``Cluster`` object. Here is
the signature with its default values::
  def newNode(
    nodeAddress: NodeAddress,
    zkServerAddresses: String = Cluster.zooKeeperServers,
    serializer: ZkSerializer = Cluster.defaultSerializer,
    hostname: String = NetworkUtil.getLocalhostName,
    remoteServerPort: Int = Cluster.remoteServerPort): ClusterNode
The ``NodeAddress`` defines the address for a node and has the following
signature::
  final case class NodeAddress(
    clusterName: String,
    nodeName: String,
    hostname: String = Cluster.lookupLocalhostName,
    port: Int = Cluster.remoteServerPort)
You have to specify a cluster name and node name while the hostname and port for
the remote server can be left out to use default values.
Here is an example of creating a node in which we only specify the node address
and let the rest of the configuration options have their default values::

  import akka.cloud.cluster._

  val clusterNode = Cluster.newNode(NodeAddress("test-cluster", "node1"))
You can also use the ``apply`` method on the ``Cluster`` object to create a new
node in a more idiomatic Scala way::
  import akka.cloud.cluster._

  val clusterNode = Cluster(NodeAddress("test-cluster", "node1"))
The ``NodeAddress`` defines the name of the node and the name of the cluster. This
allows you to have multiple clusters running in parallel in isolation,
not aware of each other.
The other parameters to know are:
- ``zkServerAddresses`` -- a list of the ZooKeeper servers to connect to,
  default is "localhost:2181"
- ``serializer`` -- the serializer to use when serializing configuration data
  into the cluster. Default is ``Cluster.defaultSerializer``, which uses Java
  serialization
- ``hostname`` -- the hostname to use for the node
- ``remoteServerPort`` -- the remote server port, for the internal remote server
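Putting the parameters together, a node that overrides all the defaults might be created like this (the hostname and ports are illustrative and match the configuration sample above)::

  import akka.cloud.cluster._

  val clusterNode = Cluster.newNode(
    nodeAddress = NodeAddress("test-cluster", "node1"),
    zkServerAddresses = "wallace:2181,gromit:2181",
    serializer = Cluster.defaultSerializer,
    hostname = "wallace",
    remoteServerPort = 2552)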
Starting a node
~~~~~~~~~~~~~~~
Creating a node does not make it join the cluster. In order to do that you need
to invoke the ``start`` method::
  val clusterNode = Cluster.newNode(NodeAddress("test-cluster", "node1"))
  clusterNode.start

Or if you prefer to do it in one line of code::

  val clusterNode = Cluster.newNode(NodeAddress("test-cluster", "node1")).start

Stopping a node
~~~~~~~~~~~~~~~
To stop a node invoke ``stop``::

  clusterNode.stop
Querying which nodes are part of the cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can query the cluster for which nodes have joined the cluster. This is done using the ``membershipNodes`` method::

  val allNodesInCluster = clusterNode.membershipNodes

You can also query the ``Cluster`` object for which nodes are members of a specific cluster::

  val nodes = Cluster nodesInCluster "test-cluster"
Resetting the Cluster
~~~~~~~~~~~~~~~~~~~~~
You can reset the whole cluster using the ``reset`` method on the ``Cluster`` object::

  Cluster.reset

This shuts down all nodes and removes them from the cluster. It also removes all clustered actors and configuration data from the registry. Use this method with care.
You can reset all the nodes in a specific cluster using the ``resetNodesInCluster`` method on the ``Cluster`` object::

  Cluster resetNodesInCluster "test-cluster"
Cluster event subscription
--------------------------
The cluster module supports subscribing to events happening in the cluster. For
example, this can be very useful for knowing when new nodes come and go,
allowing you to dynamically resize the cluster. Here is an example::

  clusterNode.register(new ChangeListener {
    override def nodeConnected(node: String, thisNode: ClusterNode) {
      // ...
    }
    override def nodeDisconnected(node: String, thisNode: ClusterNode) {
      // ...
    }
  })
As parameters into these callbacks the cluster passes the name of the node that
joined or left the cluster as well as the local node itself.
Here is the full trait with all the callbacks you can implement::
  trait ChangeListener {
    def nodeConnected(node: String, client: ClusterNode) = {}
    def nodeDisconnected(node: String, client: ClusterNode) = {}
    def newLeader(name: String, client: ClusterNode) = {}
    def thisNodeNewSession(client: ClusterNode) = {}
    def thisNodeConnected(client: ClusterNode) = {}
    def thisNodeDisconnected(client: ClusterNode) = {}
    def thisNodeExpired(client: ClusterNode) = {}
  }
Here is when each callback will be invoked:
- ``nodeConnected`` -- when a node joins the cluster
- ``nodeDisconnected`` -- when a node leaves the cluster
- ``newLeader`` -- when there has been a leader election and the new leader is elected
- ``thisNodeNewSession`` -- when the local node has created a new session to the cluster
- ``thisNodeConnected`` -- when a local node has joined the cluster
- ``thisNodeDisconnected`` -- when a local node has left the cluster
- ``thisNodeExpired`` -- when the local node's session has expired
If you are using this from Java then you need to use the "Java-friendly"
``ChangeListenerAdapter`` abstract class instead of the ``ChangeListener``
trait.
Clustered Actor Registry
------------------------
You can cluster actors by storing them in the cluster by UUID. The actors will
be serialized deeply (with or without their mailbox and pending messages) and put
in highly available storage. An actor can then be checked out on any other
node, used there and then checked in again. The cluster will also take care of
transparently migrating actors residing on a failed node onto another node in
the cluster so that the application can continue working as if nothing happened.
Let's look at an example. First we create a simple Hello World actor. We also
create a ``Format`` type class for serialization. For simplicity we are using
plain Java serialization. ::
  import akka.serialization._
  import akka.actor._

  @serializable class HelloActor extends Actor {
    private var count = 0
    self.id = "service:hello"

    def receive = {
      case "hello" =>
        count = count + 1
        self reply ("world " + count)
    }
  }

  object BinaryFormats {
    @serializable implicit object HelloActorFormat
      extends SerializerBasedActorFormat[HelloActor] {
      val serializer = Serializer.Java
    }
  }
|more| Read more about actor serialization in the `Akka Serialization
Documentation <http://doc.akka.io/serialization-scala>`_.
.. todo:: add explanation on how to do this with the Java API
Once we can serialize and deserialize the actor we have what we need in
order to cluster the actor. We have four methods at our disposal:
- ``store``
- ``remove``
- ``use``
- ``release``
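As a quick orientation before the details, a typical lifecycle combines them like this (a condensed sketch of the examples in the sections below, assuming the ``HelloActor`` defined above and an ``actorAddress`` for it)::

  import BinaryFormats._ // brings the implicit HelloActorFormat into scope

  // cluster the actor in the registry (replication factor 0)
  clusterNode store hello

  // check it out on some node, use it, and check it back in
  val ref = clusterNode use actorAddress
  ref ! "hello"
  clusterNode release actorAddress

  // finally remove it from the registry
  clusterNode remove actorAddress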
ActorAddress
----------------
The ``ActorAddress`` is used to represent the address of a specific actor. All methods in the API that deal with actors work with ``ActorAddress``, which represents one of these identifiers:
- ``actorUuid`` -- the UUID for an actor; ``Actor.uuid``
- ``actorId`` -- the ID for an actor; ``Actor.id``
- ``actorClassName`` -- the class name of an actor; ``Actor.actorClassName``
To create an ``ActorAddress`` you can use named arguments like this::

  ActorAddress(actorUuid = uuid)
  ActorAddress(actorId = id)
  ActorAddress(actorClassName = className)
Or, if you are using the API from Java (or prefer this syntax in Scala), you can use the ``ActorAddress`` factory methods::

  ActorAddress.forUuid(uuid)
  ActorAddress.forId(id)
  ActorAddress.forClassName(className)
Store and Remove
----------------
The methods for storing an actor in the cluster and removing it from the cluster
are:
- ``store`` -- clusters the actor by adding it to the clustered actor registry, available to any node in the cluster
- ``remove`` -- removes the actor from the clustered actor registry
The ``store`` method also allows you to specify a replication factor. The
``nrOfInstances`` defines the number of (randomly picked) nodes in the cluster that
the stored actor should be automatically deployed to and instantiated locally on (using
``use``). If you leave this argument out then a replication factor of ``0`` will be used
which means that the actor will only be stored in the clustered actor registry and not
deployed anywhere.
The last argument to the ``store`` method is ``serializeMailbox``, which defines whether
the actor's mailbox should be serialized along with the actor, stored in the cluster and
deployed (if the replication factor is set to more than ``0``). Whether it should be or not depends
on your use-case. Default is ``false``.
These are the signatures for the ``store`` method (all different permutations of these methods are available for use from Java)::

  def store[T <: Actor]
    (actorRef: ActorRef, nrOfInstances: Int = 0, serializeMailbox: Boolean = false)
    (implicit format: Format[T]): ClusterNode

  def store[T <: Actor]
    (actorClass: Class[T], nrOfInstances: Int = 0, serializeMailbox: Boolean = false)
    (implicit format: Format[T]): ClusterNode
The ``implicit format: Format[T]`` might look scary but this argument is chosen for you and passed in automatically by the compiler as long as you have imported the serialization typeclass for the actor you are storing, e.g. the ``HelloActorFormat`` (defined above and imported in the sample below).
Here is an example of how to use ``store`` to cluster an already
created actor::
  import Actor._
  import ActorSerialization._
  import BinaryFormats._

  val clusterNode = Cluster.newNode(NodeAddress("test-cluster", "node1")).start

  val hello = actorOf(Props[HelloActor]).asInstanceOf[LocalActorRef]
  val nrOfInstances = 5
  val serializeMailbox = false
  clusterNode store (hello, nrOfInstances, serializeMailbox)
Here is an example of how to use ``store`` to cluster an actor by type::
  clusterNode store classOf[HelloActor]

The ``remove`` method allows you to pass in an ``ActorAddress``::

  cluster remove actorAddress

You can also remove an actor by type like this::

  cluster remove classOf[HelloActor]
Use and Release
---------------
The two methods for "checking out" an actor from the cluster for use and
"checking it in" after use are:
- ``use`` -- "checks out" for use on a specific node, this will deserialize
the actor and instantiated on the node it is being checked out on
- ``release`` -- "checks in" the actor after being done with it, important for
the cluster bookkeeping
The ``use`` and ``release`` methods allow you to pass an instance of ``ActorAddress``. Here is an example::
  val helloActor = cluster use actorAddress

  helloActor ! "hello"
  helloActor ! "hello"
  helloActor ! "hello"

  cluster release actorAddress
Ref and Router
--------------
The ``ref`` method is used to create an actor reference to a set of clustered
(remote) actors defined with a specific routing policy.
This is the signature for ``ref``::

  def ref(actorAddress: ActorAddress, router: Router.RouterType): ActorRef
The final argument ``router`` defines a routing policy in which you have the
following options:
- ``Router.Direct`` -- this policy means that the reference will only represent one single actor which it will use all the time when sending messages to the actor. If the query returns multiple actors then a single one is picked out randomly.
- ``Router.Random`` -- this policy will route the messages to a randomly picked actor in the set of actors in the cluster, returned by the query.
- ``Router.RoundRobin`` -- this policy will route the messages to the set of actors in the cluster returned by the query in a round-robin fashion, i.e. it circles around the set of actors in order.
Here is an example::
  // Store the PongActor in the cluster and deploy it to 5 nodes in the cluster
  localNode store (classOf[PongActor], 5)

  // Get a reference to all the pong actors through a round-robin router ActorRef
  val pong = localNode ref (actorAddress, Router.RoundRobin)

  // Send it messages
  pong ! Ping
Actor migration
---------------
The cluster has mechanisms to either manually or automatically fail over all
actors running on a node that has crashed to another node in the cluster. It
will also make sure that all remote clients that are communicating with these actors
will automatically and transparently reconnect to the new host node.
Automatic actor migration on fail-over
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All actors that are checked out with ``use`` are tracked by the cluster and will be
automatically failed over to a new node in the cluster if the node they are
running on crashes. Tracking stops when the actor is checked in using
``release``.
Manual actor migration
~~~~~~~~~~~~~~~~~~~~~~
You can move an actor from one node to another using the ``migrate`` method. Here is the parameter list:
- ``from`` -- the address of the node migrating from (default is the address for the node you are invoking it on)
- ``to`` -- the address of the node migrating to
- ``actorAddress`` -- the ``ActorAddress``
Here is an example::
  clusterNode migrate (
    NodeAddress("test-cluster", "node1"),
    NodeAddress("test-cluster", "node2"),
    actorAddress)
Here is an example specifying only ``to`` and ``actorAddress``, i.e. relying on the default value for ``from`` (this node)::

  clusterNode migrate (NodeAddress("test-cluster", "node2"), actorAddress)
Compute Grid
-----------------------------
Akka can work as a compute grid by allowing you to send functions to the nodes
in the cluster and collect the result back.
The workhorse for this is the ``send`` method (in different variations). The
``send`` methods take the following parameters:
- ``f`` -- the function you want to be invoked on the remote nodes in the cluster
- ``arg`` -- the argument to the function (not all of them have this parameter)
- ``nrOfInstances`` -- the replication factor defining the number of nodes you want the function to be sent and invoked on
You can currently send these function types to the cluster:
- ``Function0[Unit]`` -- takes no arguments and returns nothing
- ``Function0[Any]`` -- takes no arguments and returns a value of type ``Any``
- ``Function1[Any, Unit]`` -- takes an argument of type ``Any`` and returns nothing
- ``Function1[Any, Any]`` -- takes an argument of type ``Any`` and returns a value of type ``Any``
All ``send`` methods return immediately after the functions have been sent off
asynchronously to the remote nodes. The ``send`` methods that take a function
that yields a return value all return a ``scala.List`` of ``akka.dispatch.Future[Any]``.
This gives you the option of handling these futures the way you wish. Some helper
functions for working with ``Future`` are in the ``akka.dispatch.Futures`` object.
|more| Read more about futures in the `Akka documentation on Futures
<http://doc.akka.io/actors-scala#Actors%20(Scala)-Send%20messages-Send-And-Receive-Future>`_.
Here are some examples showing how you can use the different ``send`` methods.
Send a ``Function0[Unit]``::
  val node1 = Cluster newNode (NodeAddress("test", "node1", port = 9991)) start
  val node2 = Cluster newNode (NodeAddress("test", "node2", port = 9992)) start

  val fun = () => println(">>> AKKA ROCKS <<<")

  // send the function to two cluster nodes and invoke it there
  node1 send (fun, 2)
Send a ``Function0[Any]``::
  val node1 = Cluster newNode (NodeAddress("test", "node1", port = 9991)) start
  val node2 = Cluster newNode (NodeAddress("test", "node2", port = 9992)) start

  val fun = () => "AKKA ROCKS"

  // send the function to two cluster nodes, invoke it there and get the results
  val futures = node1 send (fun, 2)
  Futures awaitAll futures
  println("Cluster says [" + futures.map(_.result).mkString(" - ") + "]")
Send a ``Function1[Any, Unit]``::
  val node1 = Cluster newNode (NodeAddress("test", "node1", port = 9991)) start
  val node2 = Cluster newNode (NodeAddress("test", "node2", port = 9992)) start

  val fun = ((s: String) => println(">>> " + s + " <<<")).asInstanceOf[Function1[Any, Unit]]

  // send the function and its argument to two cluster nodes and invoke it there
  node1 send (fun, "AKKA ROCKS", 2)
Send a ``Function1[Any, Any]``::
  val node1 = Cluster newNode (NodeAddress("test", "node1", port = 9991)) start
  val node2 = Cluster newNode (NodeAddress("test", "node2", port = 9992)) start

  val fun = ((i: Int) => i * i).asInstanceOf[Function1[Any, Any]]

  // send the function to one cluster node, invoke it there and get the result
  val future1 = node1 send (fun, 2, 1) head
  val future2 = node1 send (fun, 2, 1) head

  // grab the result from the first one that returns
  val result = Futures awaitEither (future1, future2)
  println("Cluster says [" + result.get + "]")
Querying the Clustered Actor Registry
-------------------------------------
Here we have some other methods for querying the Clustered Actor Registry in different ways.
Check if an actor is clustered (stored and/or used in the cluster):
- ``def isClustered(actorUuid: UUID, actorId: String, actorClassName: String): Boolean``
- When using this method you should only specify one of the parameters using "named parameters" as in the examples above.
Check if an actor is used by a specific node (e.g. checked out locally using ``use``):
- ``def isInUseOnNode(actorUuid: UUID, actorId: String, actorClassName: String, node: NodeAddress): Boolean``
- When using this method you should only specify one of the parameters using "named parameters" as in the examples above. Default argument for ``node`` is "this" node.
Lookup the remote addresses for a specific actor (can reside on more than one node):
- ``def addressesForActor(actorUuid: UUID, actorId: String, actorClassName: String): Array[Tuple2[UUID, InetSocketAddress]]``
- When using this method you should only specify one of the parameters using "named parameters" as in the examples above.
Lookup all actors that are in use (e.g. "checked out") on this node:
- ``def uuidsForActorsInUse: Array[UUID]``
- ``def idsForActorsInUse: Array[String]``
- ``def classNamesForActorsInUse: Array[String]``
Lookup all actors that are available (e.g. "stored") in the Clustered Actor Registry:
- ``def uuidsForClusteredActors: Array[UUID]``
- ``def idsForClusteredActors: Array[String]``
- ``def classNamesForClusteredActors: Array[String]``
Lookup the ``Actor.id`` by ``Actor.uuid``:
- ``def actorIdForUuid(uuid: UUID): String``
- ``def actorIdsForUuids(uuids: Array[UUID]): Array[String]``
Lookup the ``Actor.actorClassName`` by ``Actor.uuid``:
- ``def actorClassNameForUuid(uuid: UUID): String``
- ``def actorClassNamesForUuids(uuids: Array[UUID]): Array[String]``
Lookup the ``Actor.uuid``'s by ``Actor.id``:
- ``def uuidsForActorId(actorId: String): Array[UUID]``
Lookup the ``Actor.uuid``'s by ``Actor.actorClassName``:
- ``def uuidsForActorClassName(actorClassName: String): Array[UUID]``
Lookup which nodes have checked out a specific actor:
- ``def nodesForActorsInUseWithUuid(uuid: UUID): Array[String]``
- ``def nodesForActorsInUseWithId(id: String): Array[String]``
- ``def nodesForActorsInUseWithClassName(className: String): Array[String]``
Lookup the ``Actor.uuid`` for the actors that have been checked out on a specific node:
- ``def uuidsForActorsInUseOnNode(nodeName: String): Array[UUID]``
- ``def idsForActorsInUseOnNode(nodeName: String): Array[String]``
- ``def classNamesForActorsInUseOnNode(nodeName: String): Array[String]``
Lookup the serialization ``Format`` instance for a specific actor:
- ``def formatForActor(actorUuid: UUID, actorId: String, actorClassName: String): Format[T]``
- When using this method you should only specify one of the parameters using "named parameters" as in the examples above.
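For example, using the ``HelloActor`` from above (which sets ``self.id = "service:hello"``), a few of these queries look like this::

  // is the actor stored and/or in use anywhere in the cluster?
  val clustered = clusterNode.isClustered(actorId = "service:hello")

  // which nodes have checked it out with 'use'?
  val nodes = clusterNode.nodesForActorsInUseWithId("service:hello")

  // the remote addresses where the actor can be reached
  val addresses = clusterNode.addressesForActor(actorId = "service:hello")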
Clustered configuration manager
-------------------------------
Custom configuration data
~~~~~~~~~~~~~~~~~~~~~~~~~
You can also store configuration data in the cluster. This is done using the
``setConfigElement`` and ``getConfigElement`` methods. The key is a ``String`` and the data an ``Array[Byte]``::

  clusterNode setConfigElement ("hello", "world".getBytes("UTF-8"))

  val valueAsBytes = clusterNode getConfigElement ("hello") // returns Array[Byte]
  val valueAsString = new String(valueAsBytes, "UTF-8")
You can also remove an entry using the ``removeConfigElement`` method and get an
``Array[String]`` with all the keys::
  clusterNode removeConfigElement ("hello")

  val allConfigElementKeys = clusterNode.getConfigElementKeys // returns Array[String]
Consolidation and management of the Akka configuration file
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Not implemented yet.
The actor :ref:`configuration` file will also be stored in the cluster,
making it possible to have one single configuration file, stored on the server and pushed out to all
the nodes that join the cluster. Each node then only needs to be configured with the ZooKeeper
server address, and the master configuration will reside in one single place,
simplifying administration of the cluster and alleviating the risk of having
different configuration files lying around in the cluster.
Leader election
---------------
The cluster supports leader election. There will always be exactly one
leader in the cluster. The first thing that happens when the cluster starts up is
a leader election. The leader that gets elected will stay the leader until it
crashes or is shut down; then an automatic re-election process takes place
and a new leader is elected. Having a single leader in the cluster can be very
useful for solving a wide range of problems. You can find out which node is the
leader by invoking the ``leader`` method. A node can also check if it is the
leader by invoking the ``isLeader`` method. A leader node can also explicitly
resign and issue a new leader election by invoking the ``resign`` method. Each node has an election number stating its ranking in the last election. You can query a node for its election number through the ``electionNumber`` method.
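Here is a short sketch showing these methods together::

  // which node is the current leader?
  val leaderNode = clusterNode.leader

  if (clusterNode.isLeader) {
    // perform leader-only work here, or hand over the leadership
    clusterNode.resign
  }

  // this node's ranking in the last election
  val rank = clusterNode.electionNumber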
JMX monitoring and management
-----------------------------
.. todo:: Add some docs to each method
The clustering module has a JMX MBean that you can use. Here is the interface
with all available operations::

  trait ClusterNodeMBean {
    def start: Unit
    def stop: Unit

    def disconnect: Unit
    def reconnect: Unit
    def resign: Unit

    def isConnected: Boolean

    def getRemoteServerHostname: String
    def getRemoteServerPort: Int
    def getNodeName: String
    def getClusterName: String
    def getZooKeeperServerAddresses: String
    def getMemberNodes: Array[String]
    def getLeader: String

    def getUuidsForClusteredActors: Array[String]
    def getIdsForClusteredActors: Array[String]
    def getClassNamesForClusteredActors: Array[String]

    def getUuidsForActorsInUse: Array[String]
    def getIdsForActorsInUse: Array[String]
    def getClassNamesForActorsInUse: Array[String]

    def getNodesForActorInUseWithUuid(uuid: String): Array[String]
    def getNodesForActorInUseWithId(id: String): Array[String]
    def getNodesForActorInUseWithClassName(className: String): Array[String]

    def getUuidsForActorsInUseOnNode(nodeName: String): Array[String]
    def getIdsForActorsInUseOnNode(nodeName: String): Array[String]
    def getClassNamesForActorsInUseOnNode(nodeName: String): Array[String]

    def setConfigElement(key: String, value: String): Unit
    def getConfigElement(key: String): AnyRef
    def removeConfigElement(key: String): Unit
    def getConfigElementKeys: Array[String]
  }
JMX support is turned on and off using the ``akka.enable-jmx``
configuration option:

.. code-block:: conf

    akka {
      enable-jmx = on
    }
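The MBean can be read with standard JMX tooling. Here is a sketch using the platform MBean server; note that the ``ObjectName`` used below is hypothetical, so check the name the node actually registers under (e.g. with JConsole)::

  import java.lang.management.ManagementFactory
  import javax.management.ObjectName

  val server = ManagementFactory.getPlatformMBeanServer
  val name = new ObjectName("akka:type=ClusterNode") // hypothetical name

  // attribute names follow the getter names in the trait above
  val leader = server.getAttribute(name, "Leader").asInstanceOf[String]
  val members = server.getAttribute(name, "MemberNodes").asInstanceOf[Array[String]]
  println("leader: " + leader + ", members: " + members.mkString(", "))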
.. |more| image:: more.png
:align: middle
:alt: More info

View file

@ -1,131 +0,0 @@
//#imports
package akka.tutorial.scala.first
import _root_.akka.routing.{RoutedProps, Routing, CyclicIterator}
import akka.actor.{Actor, PoisonPill}
import Actor._
import Routing._
import System.{currentTimeMillis => now}
import java.util.concurrent.CountDownLatch
//#imports
//#system
object Pi extends App {

  calculate(nrOfWorkers = 4, nrOfElements = 10000, nrOfMessages = 10000)

  //#actors-and-messages
  // ====================
  // ===== Messages =====
  // ====================
  //#messages
  sealed trait PiMessage
  case object Calculate extends PiMessage
  case class Work(start: Int, nrOfElements: Int) extends PiMessage
  case class Result(value: Double) extends PiMessage
  //#messages

  // ==================
  // ===== Worker =====
  // ==================
  //#worker
  class Worker extends Actor {
    //#calculate-pi
    def calculatePiFor(start: Int, nrOfElements: Int): Double = {
      var acc = 0.0
      for (i <- start until (start + nrOfElements))
        acc += 4.0 * math.pow(-1, i) / (2 * i + 1)
      acc
    }
    //#calculate-pi

    def receive = {
      case Work(start, nrOfElements) =>
        reply(Result(calculatePiFor(start, nrOfElements))) // perform the work
    }
  }
  //#worker

  // ==================
  // ===== Master =====
  // ==================
  //#master
  class Master(
    nrOfWorkers: Int, nrOfMessages: Int, nrOfElements: Int, latch: CountDownLatch)
    extends Actor {

    var pi: Double = _
    var nrOfResults: Int = _
    var start: Long = _

    //#create-workers
    // create the workers
    val workers = Vector.fill(nrOfWorkers)(actorOf(Props[Worker]))

    // wrap them with a load-balancing router
    val router = Routing.actorOf(
      RoutedProps().withRoundRobinRouter.withLocalConnections(workers), "pi")
    //#create-workers

    //#master-receive
    // message handler
    def receive = {
      //#message-handling
      case Calculate =>
        // schedule work
        for (i <- 0 until nrOfMessages) router ! Work(i * nrOfElements, nrOfElements)
        // send a PoisonPill to all workers telling them to shut down themselves
        router ! Broadcast(PoisonPill)
        // send a PoisonPill to the router, telling him to shut himself down
        router ! PoisonPill
      case Result(value) =>
        // handle result from the worker
        pi += value
        nrOfResults += 1
        if (nrOfResults == nrOfMessages) self.stop()
      //#message-handling
    }
    //#master-receive

    override def preStart() {
      start = now
    }

    override def postStop() {
      // tell the world that the calculation is complete
      println(
        "\n\tPi estimate: \t\t%s\n\tCalculation time: \t%s millis"
          .format(pi, (now - start)))
      latch.countDown()
    }
  }
  //#master
  //#actors-and-messages

  // ==================
  // ===== Run it =====
  // ==================
  def calculate(nrOfWorkers: Int, nrOfElements: Int, nrOfMessages: Int) {
    // this latch is only plumbing to know when the calculation is completed
    val latch = new CountDownLatch(1)

    // create the master
    val master = actorOf(Props(new Master(nrOfWorkers, nrOfMessages, nrOfElements, latch)))

    // start the calculation
    master ! Calculate

    // wait for master to shut down
    latch.await()
  }
}
//#system

View file

@ -1,226 +0,0 @@
External Sample Projects
========================
Here are some external sample projects created by Akka's users.
Camel in Action - Akka samples
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Akka samples for the upcoming Camel in Action book by Martin Krasser.
`<http://code.google.com/p/camelinaction/source/browse/trunk/appendixE/>`_
CQRS impl using Scalaz and Akka
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
An implementation of CQRS using scalaz for functional domain models and Akka for event sourcing.
`<https://github.com/debasishg/cqrs-akka>`_
Example of using Comet with Akka Mist
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<https://github.com/wrwills/AkkaMistComet>`_
Movie store
^^^^^^^^^^^
Code for a book on Scala/Akka.
Showcasing Remote Actors.
`<http://github.com/obcode/moviestore_akka>`_
Estimating Pi with Akka
^^^^^^^^^^^^^^^^^^^^^^^
`<http://www.earldouglas.com/estimating-pi-with-akka>`_
Running Akka on Android
^^^^^^^^^^^^^^^^^^^^^^^
Sample showing Dining Philosophers running in UI on Android.
`<https://github.com/gseitz/DiningAkkaDroids>`_
`<http://www.vimeo.com/20303656>`_
Remote chat application using Java API
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
A sample chat application using the Java API for Akka.
Port of the Scala API chat sample application in the Akka repository.
`<https://github.com/abramsm/akka_chat_java>`_
Sample parallel computing with Akka and Scala API
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<https://github.com/yannart/ParallelPolynomialIntegral>`_
Bank application
^^^^^^^^^^^^^^^^
Showcasing Transactors and STM.
`<http://github.com/weiglewilczek/demo-akka>`_
Ant simulation 1
^^^^^^^^^^^^^^^^
Traveling salesman problem. Inspired by Clojure's Ant demo. Uses SPDE for GUI. Idiomatic Scala/Akka code.
Good example on how to use Actors and STM
`<http://github.com/pvlugter/ants>`_
Ant simulation 2
^^^^^^^^^^^^^^^^
Traveling salesman problem. Close to a straight port of Clojure's Ant demo. Uses Swing for GUI.
Another nice example on how to use Actors and STM
`<http://github.com/azzoti/ScalaAkkaAnts>`_
The Santa Claus STM example by SPJ using Akka
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<http://github.com/arjanblokzijl/akka-santa>`_
Akka trading system
^^^^^^^^^^^^^^^^^^^
`<http://github.com/patriknw/akka-sample-trading>`_
Snowing version of Game of Life in Akka
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<https://github.com/mariogleichmann/AkkaSamples/tree/master/src/main/scala/com/mgi/akka/gameoflife>`_
Akka Web (REST/Comet) template project
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
An sbt-based Scala Akka project that sets up a web project with REST and comet support
`<http://github.com/mattbowen/akka-web-template>`_
Various samples on how to use Akka
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
From the May Chicago-Area Scala Enthusiasts Meeting
`<http://github.com/deanwampler/AkkaWebSampleExercise>`_
Absurd concept for a ticket sales & inventory system, using Akka framework
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<http://github.com/bmjames/ticketfaster>`_
Akka sports book sample: Java API
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<http://github.com/jrask/akka-activeobjects-application>`_
Sample of using the Finite State Machine (FSM) DSL
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<http://github.com/ngocdaothanh/lock-fsm-akka>`_
Akka REST, Jetty, SBT template project
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Great starting point for building an Akka application.
`<http://github.com/efleming969/akka-template-rest>`_
Samples of various Akka features (in Scala)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<http://github.com/efleming969/akka-samples>`_
Fork at `<http://github.com/flintobrien/akka-samples>`_
A sample sbt setup for running the akka-sample-chat
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<http://github.com/dwhitney/sbt-akka-sample-chat>`_
Akka Benchmark project
^^^^^^^^^^^^^^^^^^^^^^
Benchmarks Akka against various other actor and concurrency tools
`<http://github.com/akka/akka-bench>`_
Typed Actor (Java API) sample project
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<http://github.com/bobo/akka_sample_java>`_
Akka PI calculation sample project
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<http://github.com/bonnefoa/akkaPi/>`_
Akka Vaadin Ice sample
^^^^^^^^^^^^^^^^^^^^^^
`<https://github.com/tomhowe/vaadin-akka-ice-test>`_
Port of Jersey (JAX-RS) samples to Akka
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<http://github.com/akollegger/akka-jersey-samples>`_
Akka Expect Testing
^^^^^^^^^^^^^^^^^^^
`<https://github.com/joda/akka-expect>`_
Akka Java API playground
^^^^^^^^^^^^^^^^^^^^^^^^
`<https://github.com/koevet/akka-java-playground>`_
Family web page build with Scala, Lift, Akka, Redis, and Facebook Connect
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<http://github.com/derekjw/williamsfamily>`_
An example of queued computation tasks using Akka
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<http://github.com/derekjw/computation-queue-example>`_
The samples for the New York Scala Enthusiasts Meetup discussing Akka
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<http://www.meetup.com/New-York-Scala-Enthusiasts/calendar/12315985/>`_
`<http://github.com/dwhitney/akka_meetup>`_
Container managed thread pools for Akka Dispatchers
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<https://github.com/remeniuk/akka-cm-dispatcher>`_
"Lock" Finite State Machine demo with Akka
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<http://github.com/ngocdaothanh/lock-fsm-akka>`_
Template w/ Intellij stuff for random akka playing around (with Bivvy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<http://github.com/b3n00/akka10-template>`_
Akka chat using Akka Java API by Mario Fusco
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<https://github.com/mariofusco/akkachat>`_
Projects using the removed Akka Persistence modules
===================================================
Akka Terrastore sample
^^^^^^^^^^^^^^^^^^^^^^
`<https://github.com/dgreco/akka-terrastore-example>`_
Akka Persistence for Force.com
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<https://github.com/sclasen/akka-persistence-force>`_
Template for Akka and Redis
^^^^^^^^^^^^^^^^^^^^^^^^^^^
`<http://github.com/andrewmilkowski/template-akka-persistence-redis>`_
@ -1,101 +0,0 @@
Getting Started Tutorial: First Chapter
=======================================
Start writing the code
----------------------
Now it's about time to start hacking.
We start by creating a ``Pi.scala`` file and adding these import statements at the top of the file:
.. includecode:: examples/Pi.scala#imports
If you are using SBT in this tutorial then create the file in the ``src/main/scala`` directory.
If you are using the command line tools then create the file wherever you want. I will create it in a directory called ``tutorial`` at the root of the Akka distribution, e.g. in ``$AKKA_HOME/tutorial/Pi.scala``.
Creating the messages
---------------------
The design we are aiming for is to have one ``Master`` actor initiating the computation, creating a set of ``Worker`` actors. Then it splits up the work into discrete chunks, and sends these chunks to the different workers in a round-robin fashion. The master waits until all the workers have completed their work and sent back results for aggregation. When computation is completed the master prints out the result, shuts down all workers and then itself.
With this in mind, let's now create the messages that we want to have flowing in the system. We need three different messages:
- ``Calculate`` -- sent to the ``Master`` actor to start the calculation
- ``Work`` -- sent from the ``Master`` actor to the ``Worker`` actors containing the work assignment
- ``Result`` -- sent from the ``Worker`` actors to the ``Master`` actor containing the result from the worker's calculation
Messages sent to actors should always be immutable to avoid sharing mutable state. In Scala we have case classes, which make excellent messages. So let's start by creating three messages as case classes. We also create a common base trait for our messages (which we define as ``sealed`` in order to prevent messages from being created outside our control):
.. includecode:: examples/Pi.scala#messages
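For reference, these messages can be sketched like this (the included ``examples/Pi.scala`` is the authoritative version; the ``PiMessage`` trait name is just the convention used here):

.. code-block:: scala

   sealed trait PiMessage
   case object Calculate extends PiMessage
   case class Work(start: Int, nrOfElements: Int) extends PiMessage
   case class Result(value: Double) extends PiMessage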
Creating the worker
-------------------
Now we can create the worker actor. This is done by mixing in the ``Actor`` trait and defining the ``receive`` method. The ``receive`` method defines our message handler. We expect it to be able to handle the ``Work`` message so we need to add a handler for this message:
.. includecode:: examples/Pi.scala#worker
:exclude: calculate-pi
As you can see we have now created an ``Actor`` with a ``receive`` method as a handler for the ``Work`` message. In this handler we invoke the ``calculatePiFor(..)`` method, wrap the result in a ``Result`` message and send it back to the original sender using ``self.reply``. In Akka the sender reference is implicitly passed along with the message so that the receiver can always reply or store away the sender reference for future use.
The only thing missing in our ``Worker`` actor is the implementation of the ``calculatePiFor(..)`` method. While there are many ways we can implement this algorithm in Scala, in this introductory tutorial we have chosen an imperative style using a for comprehension and an accumulator:
.. includecode:: examples/Pi.scala#calculate-pi
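Putting the pieces together, the complete ``Worker`` can be sketched like this (assuming the message types above; ``examples/Pi.scala`` remains the authoritative version):

.. code-block:: scala

   class Worker extends Actor {

     // calculates a chunk of the Gregory-Leibniz series for Pi
     def calculatePiFor(start: Int, nrOfElements: Int): Double = {
       var acc = 0.0
       for (i <- start until (start + nrOfElements))
         acc += 4.0 * (1 - (i % 2) * 2) / (2 * i + 1)
       acc
     }

     def receive = {
       case Work(start, nrOfElements) =>
         // perform the work and reply to the implicitly known sender
         self reply Result(calculatePiFor(start, nrOfElements))
     }
   }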
Creating the master
-------------------
The master actor is a little bit more involved. In its constructor we need to create the workers (the ``Worker`` actors) and start them. We will also wrap them in a load-balancing router to make it easier to spread out the work evenly between the workers. Let's do that first:
.. includecode:: examples/Pi.scala#create-workers
As you can see we are using the ``actorOf`` factory method to create actors. This method returns an ``ActorRef``, which is a reference to our newly created actor. The method is available in the ``Actor`` object but is usually imported::
import akka.actor.Actor._
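The worker creation and router setup can be sketched as follows (Akka 1.x style routing; assumes ``nrOfWorkers`` is in scope):

.. code-block:: scala

   import akka.routing.{Routing, CyclicIterator}

   // create and start the workers
   val workers = Vector.fill(nrOfWorkers)(actorOf[Worker].start())

   // wrap them in a load-balancing router that hands out work round-robin
   val router = Routing.loadBalancerActor(CyclicIterator(workers)).start()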
Now we have a router that represents all our workers in a single abstraction. If you paid attention to the code above, you saw that we were using the ``nrOfWorkers`` variable. This variable, along with some others, has to be passed to the ``Master`` actor in its constructor. So now let's create the master actor. We have to pass in three integer variables:
- ``nrOfWorkers`` -- defining how many workers we should start up
- ``nrOfMessages`` -- defining how many number chunks to send out to the workers
- ``nrOfElements`` -- defining how big the number chunks sent to each worker should be
Here is the master actor:
.. includecode:: examples/Pi.scala#master
:exclude: message-handling
A couple of things are worth explaining further.
First, we are passing in a ``java.util.concurrent.CountDownLatch`` to the ``Master`` actor. This latch is only used for plumbing (in this specific tutorial), to have a simple way of letting the outside world know when the master can deliver the result and shut down. In more idiomatic Akka code, as we will see in part two of this tutorial series, we would not use a latch but other abstractions and functions like ``Channel``, ``Future`` and ``?`` to achieve the same thing in a non-blocking way. But for simplicity let's stick to a ``CountDownLatch`` for now.
Second, we are adding a couple of life-cycle callback methods: ``preStart`` and ``postStop``. In the ``preStart`` callback we record the time when the actor is started and in the ``postStop`` callback we print out the result (the approximation of Pi) and the time it took to calculate it. In this callback we also invoke ``latch.countDown`` to tell the outside world that we are done.
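A sketch of these two callbacks (assuming a ``start`` timestamp field in addition to the ``pi`` variable and the ``latch`` described above):

.. code-block:: scala

   var start: Long = _

   override def preStart() {
     start = System.currentTimeMillis // record when the actor was started
   }

   override def postStop() {
     // print the result and how long the calculation took
     println("\n\tPi estimate: \t\t%s\n\tCalculation time: \t%s millis".format(
       pi, System.currentTimeMillis - start))
     latch.countDown() // tell the outside world that we are done
   }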
But we are not done yet. We are missing the message handler for the ``Master`` actor. This message handler needs to be able to react to two different messages:
- ``Calculate`` -- which should start the calculation
- ``Result`` -- which should aggregate the different results
The ``Calculate`` handler sends out work to all the ``Worker`` actors and after doing that it also sends a ``Broadcast(PoisonPill)`` message to the router, which will send out the ``PoisonPill`` message to all the actors it is representing (in our case all the ``Worker`` actors). ``PoisonPill`` is a special kind of message that tells the receiver to shut itself down using the normal shutdown method, ``self.stop``. We also send a ``PoisonPill`` to the router itself (since it's also an actor that we want to shut down).
The ``Result`` handler is simpler: here we take the value from the ``Result`` message and add it to our ``pi`` member variable. We also keep track of how many results we have received, and if that matches the number of tasks sent out, the ``Master`` actor considers itself done and shuts down.
Let's capture this in code:
.. includecode:: examples/Pi.scala#master-receive
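For reference, the handler can be sketched like this (assuming the fields introduced above; ``Broadcast`` comes from ``akka.routing`` and ``PoisonPill`` from ``akka.actor``):

.. code-block:: scala

   def receive = {
     case Calculate =>
       // schedule the work in round-robin fashion via the router
       for (i <- 0 until nrOfMessages) router ! Work(i * nrOfElements, nrOfElements)
       // tell all workers, and then the router itself, to shut down
       router ! Broadcast(PoisonPill)
       router ! PoisonPill

     case Result(value) =>
       pi += value
       nrOfResults += 1
       // all results received: shut down, which triggers postStop
       if (nrOfResults == nrOfMessages) self.stop()
   }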
Bootstrap the calculation
-------------------------
Now the only thing left to implement is the runner that bootstraps and runs the calculation for us. We do that by creating an object that we call ``Pi``. Here we can extend the ``App`` trait in Scala, which means that we will be able to run this as an application directly from the command line.
The ``Pi`` object is a perfect container module for our actors and messages, so let's put them all there. We also create a method ``calculate`` in which we start up the ``Master`` actor and wait for it to finish:
.. includecode:: examples/Pi.scala#app
:exclude: actors-and-messages
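The runner can be sketched like this (the concrete parameter values are just examples):

.. code-block:: scala

   import java.util.concurrent.CountDownLatch

   object Pi extends App {

     calculate(nrOfWorkers = 4, nrOfElements = 10000, nrOfMessages = 10000)

     def calculate(nrOfWorkers: Int, nrOfElements: Int, nrOfMessages: Int) {
       // the latch is only plumbing: it lets us block until the master is done
       val latch = new CountDownLatch(1)

       // create and start the master
       val master = actorOf(new Master(nrOfWorkers, nrOfMessages, nrOfElements, latch)).start()

       // kick off the calculation and wait for the master to shut down
       master ! Calculate
       latch.await()
     }
   }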
That's it. Now we are done.
But before we package it up and run it, let's take a look at the full code now, with package declaration, imports and all:
.. includecode:: examples/Pi.scala
@ -1,329 +0,0 @@
.. _spring-module:
####################
Spring Integration
####################
Akka's integration with the `Spring Framework <http://www.springsource.org>`_ supplies the Spring way of using the Typed Actor Java API and of configuring a CamelService for :ref:`camel-spring-applications`. It uses Spring's custom namespaces to create Typed Actors, supervisor hierarchies and a CamelService in a Spring environment.
To use the custom namespace tags for Akka you have to add the XML schema definition to your Spring configuration. It is available at `http://repo.akka.io/akka-1.0.xsd <http://repo.akka.io/akka-1.0.xsd>`_. The namespace for Akka is:
.. code-block:: xml
xmlns:akka="http://repo.akka.io/schema/akka"
Example header for Akka Spring configuration:
.. code-block:: xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:akka="http://repo.akka.io/schema/akka"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://repo.akka.io/schema/akka
http://repo.akka.io/akka-1.0.xsd">
Actors
------
Actors in Java are created by extending the 'UntypedActor' class and implementing the 'onReceive' method.
Example of how to create Actors with the Spring framework:
.. code-block:: xml
<akka:untyped-actor id="myActor"
implementation="com.biz.MyActor"
scope="singleton"
autostart="false"
depends-on="someBean"> <!-- or a comma-separated list of beans -->
<property name="aProperty" value="somePropertyValue"/>
<property name="aDependency" ref="someBeanOrActorDependency"/>
</akka:untyped-actor>
Supported scopes are singleton and prototype. Dependencies and properties are set with Spring's ``<property/>`` element.
A dependency can be either a ``<akka:untyped-actor/>`` or a regular ``<bean/>``.
Get the Actor from the Spring context:
.. code-block:: java
ApplicationContext context = new ClassPathXmlApplicationContext("akka-spring-config.xml");
ActorRef actorRef = (ActorRef) context.getBean("myActor");
Typed Actors
------------
Here are some examples of how to create Typed Actors with the Spring framework:
Creating a Typed Actor:
^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: xml
<beans>
<akka:typed-actor id="myActor"
interface="com.biz.MyPOJO"
implementation="com.biz.MyPOJOImpl"
transactional="true"
timeout="1000"
scope="singleton"
depends-on="someBean"> <!-- or a comma-separated list of beans -->
<property name="aProperty" value="somePropertyValue"/>
<property name="aDependency" ref="someBeanOrActorDependency"/>
</akka:typed-actor>
</beans>
Supported scopes are singleton and prototype. Dependencies and properties are set with Spring's ``<property/>`` element.
A dependency can be either a ``<akka:typed-actor/>`` or a regular ``<bean/>``.
Get the Typed Actor from the Spring context:
.. code-block:: java
ApplicationContext context = new ClassPathXmlApplicationContext("akka-spring-config.xml");
MyPojo myPojo = (MyPojo) context.getBean("myActor");
Remote Actors
-------------
For details on server managed and client managed remote actors see the Remote Actor documentation.
Configuration for a client managed remote Actor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
::
<akka:untyped-actor id="remote-untyped-actor"
implementation="com.biz.MyActor"
timeout="2000">
<akka:remote host="localhost" port="9992" managed-by="client"/>
</akka:untyped-actor>
The default for 'managed-by' is "client", so in the above example it could be left out.
Configuration for a server managed remote Actor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Server side
***********
::
<akka:untyped-actor id="server-managed-remote-untyped-actor"
implementation="com.biz.MyActor">
<akka:remote host="localhost" port="9990" managed-by="server"/>
</akka:untyped-actor>
<!-- register with custom service name -->
<akka:untyped-actor id="server-managed-remote-untyped-actor-custom-id"
implementation="com.biz.MyActor">
<akka:remote host="localhost" port="9990" service-name="my-service"/>
</akka:untyped-actor>
If the server specified by 'host' and 'port' does not exist, the actor will not be registered.
Client side
***********
::
<!-- service-name could be custom name or class name -->
<akka:actor-for id="client-1" host="localhost" port="9990" service-name="my-service"/>
Configuration for a client managed remote Typed Actor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: xml
<akka:typed-actor id="remote-typed-actor"
interface="com.biz.MyPojo"
implementation="com.biz.MyPojoImpl"
timeout="2000">
<akka:remote host="localhost" port="9999" />
</akka:typed-actor>
Configuration for a server managed remote Typed Actor
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Server side setup
*****************
::
<akka:typed-actor id="server-managed-remote-typed-actor-custom-id"
interface="com.biz.IMyPojo"
implementation="com.biz.MyPojo"
timeout="2000">
<akka:remote host="localhost" port="9999" service-name="mypojo-service"/>
</akka:typed-actor>
Client side setup
*****************
::
<!-- always specify the interface for typed actor -->
<akka:actor-for id="typed-client"
interface="com.biz.MyPojo"
host="localhost"
port="9999"
service-name="mypojo-service"/>
Dispatchers
-----------
Configuration for a Typed Actor or Untyped Actor with a custom dispatcher
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you don't want to use the default dispatcher you can define your own dispatcher in the Spring configuration. For more information on dispatchers have a look at the Dispatchers documentation.
.. code-block:: xml
<akka:typed-actor id="remote-typed-actor"
interface="com.biz.MyPOJO"
implementation="com.biz.MyPOJOImpl"
timeout="2000">
<akka:dispatcher id="my-dispatcher" type="executor-based-event-driven" name="myDispatcher">
<akka:thread-pool queue="unbounded-linked-blocking-queue" capacity="100" />
</akka:dispatcher>
</akka:typed-actor>
<akka:untyped-actor id="untyped-actor-with-thread-based-dispatcher"
implementation="com.biz.MyActor">
<akka:dispatcher type="thread-based" name="threadBasedDispatcher"/>
</akka:untyped-actor>
If you want to or have to share the dispatcher between Actors you can define a dispatcher and reference it from the Typed Actor configuration:
.. code-block:: xml
<akka:dispatcher id="dispatcher-1"
type="executor-based-event-driven"
name="myDispatcher">
<akka:thread-pool queue="bounded-array-blocking-queue"
capacity="100"
fairness="true"
core-pool-size="1"
max-pool-size="20"
keep-alive="3000"
rejection-policy="caller-runs-policy"/>
</akka:dispatcher>
<akka:typed-actor id="typed-actor-with-dispatcher-ref"
interface="com.biz.MyPOJO"
implementation="com.biz.MyPOJOImpl"
timeout="1000">
<akka:dispatcher ref="dispatcher-1"/>
</akka:typed-actor>
The following dispatcher types are available in the Spring configuration:
* executor-based-event-driven
* executor-based-event-driven-work-stealing
* thread-based
The following queue types are configurable for dispatchers using thread pools:
* bounded-linked-blocking-queue
* unbounded-linked-blocking-queue
* synchronous-queue
* bounded-array-blocking-queue
If you have set up your IDE to be XSD-aware you can easily write your configuration through auto-completion.
Stopping Typed Actors and Untyped Actors
----------------------------------------
Actors with scope singleton are stopped when the application context is closed. Actors with scope prototype must be stopped by the application.
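For prototype-scoped actors this has to be done explicitly, for example (a minimal sketch in Scala, reusing the ``myActor`` bean from the example above):

.. code-block:: scala

   import akka.actor.ActorRef
   import org.springframework.context.support.ClassPathXmlApplicationContext

   val context = new ClassPathXmlApplicationContext("akka-spring-config.xml")
   val actorRef = context.getBean("myActor").asInstanceOf[ActorRef]

   // ... use the actor ...

   actorRef.stop() // required for scope="prototype"; singletons are stopped with the context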
Supervisor Hierarchies
----------------------
The supervisor configuration in Spring follows the declarative configuration for the Java API. Have a look at Akka's approach to fault tolerance.
Example spring supervisor configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: xml
<beans>
<akka:supervision id="my-supervisor">
<akka:restart-strategy failover="AllForOne"
retries="3"
timerange="1000">
<akka:trap-exits>
<akka:trap-exit>java.io.IOException</akka:trap-exit>
</akka:trap-exits>
</akka:restart-strategy>
<akka:typed-actors>
<akka:typed-actor interface="com.biz.MyPOJO"
implementation="com.biz.MyPOJOImpl"
lifecycle="permanent"
timeout="1000"/>
<akka:typed-actor interface="com.biz.AnotherPOJO"
implementation="com.biz.AnotherPOJOImpl"
lifecycle="temporary"
timeout="1000"/>
<akka:typed-actor interface ="com.biz.FooBar"
implementation ="com.biz.FooBarImpl"
lifecycle="permanent"
transactional="true"
timeout="1000" />
</akka:typed-actors>
</akka:supervision>
<akka:supervision id="supervision-untyped-actors">
<akka:restart-strategy failover="AllForOne" retries="3" timerange="1000">
<akka:trap-exits>
<akka:trap-exit>java.io.IOException</akka:trap-exit>
<akka:trap-exit>java.lang.NullPointerException</akka:trap-exit>
</akka:trap-exits>
</akka:restart-strategy>
<akka:untyped-actors>
<akka:untyped-actor implementation="com.biz.PingActor"
lifecycle="permanent"/>
<akka:untyped-actor implementation="com.biz.PongActor"
lifecycle="permanent"/>
</akka:untyped-actors>
</akka:supervision>
</beans>
Get the TypedActorConfigurator from the Spring context
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code-block:: java
TypedActorConfigurator myConfigurator = (TypedActorConfigurator) context.getBean("my-supervisor");
MyPojo myPojo = (MyPojo) myConfigurator.getInstance(MyPojo.class);
Property Placeholders
---------------------
The Akka configuration can be made available as property placeholders by using a custom property placeholder configurer for Configgy:
::
<akka:property-placeholder location="akka.conf"/>
<akka:untyped-actor id="actor-1" implementation="com.biz.MyActor" timeout="${akka.actor.timeout}">
<akka:remote host="${akka.remote.server.hostname}" port="${akka.remote.server.port}"/>
</akka:untyped-actor>
Camel configuration
-------------------
For details refer to the :ref:`camel-module` documentation:
* CamelService configuration for :ref:`camel-spring-applications`
* Access to Typed Actors :ref:`camel-typed-actors-using-spring`
@ -36,7 +36,7 @@ of the class path to form the fallback configuration, i.e. it internally uses
.. code-block:: scala
appConfig.withFallback(ConfigFactory.defaultReference(classLoader))
The philosophy is that code never contains default values, but instead relies
upon their presence in the ``reference.conf`` supplied with the library in
question.
@ -370,32 +370,8 @@ akka-zeromq
.. literalinclude:: ../../akka-zeromq/src/main/resources/reference.conf
:language: none
akka-beanstalk-mailbox
~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../akka-durable-mailboxes/akka-beanstalk-mailbox/src/main/resources/reference.conf
:language: none
akka-file-mailbox
~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../akka-durable-mailboxes/akka-file-mailbox/src/main/resources/reference.conf
:language: none
akka-mongo-mailbox
~~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../akka-durable-mailboxes/akka-mongo-mailbox/src/main/resources/reference.conf
:language: none
akka-redis-mailbox
~~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../akka-durable-mailboxes/akka-redis-mailbox/src/main/resources/reference.conf
:language: none
akka-zookeeper-mailbox
~~~~~~~~~~~~~~~~~~~~~~
.. literalinclude:: ../../akka-durable-mailboxes/akka-zookeeper-mailbox/src/main/resources/reference.conf
:language: none
@ -7,4 +7,3 @@ Modules
durable-mailbox
http
camel
spring
@ -1,13 +0,0 @@
.. _spring-module:
####################
Spring Integration
####################
.. note::
The Akka Spring module has not been migrated to Akka 2.1-SNAPSHOT yet.
It might not make it into Akka 2.0 final but will hopefully be
re-introduced in an upcoming release. It might also be backported to
2.0 final.
@ -4,7 +4,7 @@
TestKit Example (Scala)
########################
Ray Roestenburg's example code from `his blog <http://roestenburg.agilesquad.com/2011/02/unit-testing-akka-actors-with-testkit_12.html>`_ adapted to work with Akka 1.1.
Ray Roestenburg's example code from `his blog <http://roestenburg.agilesquad.com/2011/02/unit-testing-akka-actors-with-testkit_12.html>`_ adapted to work with Akka 2.x.
.. code-block:: scala