Documentation cleanup

Derek Williams 2011-04-10 13:07:57 -06:00
parent 2ad80c34da
commit b8097f3756
10 changed files with 219 additions and 225 deletions

View file

@@ -2,7 +2,7 @@ Articles & Presentations
========================
Videos
-======
+------
`Functional Programming eXchange - March 2011 <http://skillsmatter.com/podcast/scala/simpler-scalability-fault-tolerance-concurrency-remoting-through-actors>`_
@@ -17,7 +17,7 @@ Videos
`Akka talk at Scala Days - March 2010 <http://days2010.scala-lang.org/node/138/162>`_
Articles
-========
+--------
`Remote Actor Class Loading with Akka <https://www.earldouglas.com/remote-actor-class-loading-with-akka>`_
@@ -88,13 +88,13 @@ Articles
`Enterprise scala actors: introducing the Akka framework <http://blog.xebia.com/2009/10/22/scala-actors-for-the-enterprise-introducing-the-akka-framework/>`_
Books
-=====
+-----
`Akka and Camel <http://www.manning.com/ibsen/appEsample.pdf>`_ (appendix E of `Camel in Action <http://www.manning.com/ibsen/>`_)
`Ett första steg i Scala <http://www.studentlitteratur.se/o.o.i.s?id=2474&artnr=33847-01&csid=66&mp=4918>`_ (Kapitel "Aktörer och Akka") (en. "A first step in Scala", chapter "Actors and Akka", book in Swedish)
Presentations
-=============
+-------------
`Slides from Akka talk at Scala Days 2010, good short intro to Akka <http://www.slideshare.net/jboner/akka-scala-days-2010>`_
@@ -105,14 +105,14 @@ Presentations
`<https://github.com/deanwampler/Presentations/tree/master/akka-intro/>`_
Podcasts
-========
+--------
`Episode 16 Scala and Akka an Interview with Jonas Boner <http://basementcoders.com/?p=711>`_
`Jonas Boner on the Akka framework, Scala, and highly scalable applications <http://techcast.chariotsolutions.com/index.php?post_id=557314>`_
Interviews
-==========
+----------
`JetBrains/DZone interview: Talking about Akka, Scala and life with Jonas Bonér <http://jetbrains.dzone.com/articles/talking-about-akka-scala-and>`_

View file

@@ -4,7 +4,7 @@ Building Akka
This page describes how to build and run Akka from the latest source code.
Get the source code
-===================
+-------------------
Akka uses `Git <http://git-scm.com>`_ and is hosted at `Github <http://github.com>`_.
@@ -26,7 +26,7 @@ If you have already cloned the repositories previously then you can update the c
git pull origin master
SBT - Simple Build Tool
-=======================
+-----------------------
Akka is using the excellent `SBT <http://code.google.com/p/simple-build-tool>`_ build system. So the first thing you have to do is to download and install SBT. You can read more about how to do that `here <http://code.google.com/p/simple-build-tool/wiki/Setup>`_ .
@@ -37,7 +37,7 @@ The Akka SBT build file is ``project/build/AkkaProject.scala`` with some propert
----
Building Akka
-=============
+-------------
First make sure that you are in the akka code directory:
@@ -46,7 +46,7 @@ First make sure that you are in the akka code directory:
cd akka
Fetching dependencies
----------------------
+^^^^^^^^^^^^^^^^^^^^^
SBT does not fetch dependencies automatically. You need to manually do this with the ``update`` command:
@@ -59,7 +59,7 @@ Once finished, all the dependencies for Akka will be in the ``lib_managed`` dire
*Note: you only need to run {{update}} the first time you are building the code, or when the dependencies have changed.*
Building
---------
+^^^^^^^^
To compile all the Akka core modules use the ``compile`` command:
@@ -76,7 +76,7 @@ You can run all tests with the ``test`` command:
If compiling and testing are successful then you have everything working for the latest Akka development version.
Publish to local Ivy repository
--------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to deploy the artifacts to your local Ivy repository (for example, to use from an SBT project) use the ``publish-local`` command:
@@ -85,7 +85,7 @@ If you want to deploy the artifacts to your local Ivy repository (for example, t
sbt publish-local
Publish to local Maven repository
----------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you want to deploy the artifacts to your local Maven repository use:
@@ -94,7 +94,7 @@ If you want to deploy the artifacts to your local Maven repository use:
sbt publish-local publish
SBT interactive mode
---------------------
+^^^^^^^^^^^^^^^^^^^^
Note that in the examples above we are calling ``sbt compile`` and ``sbt test`` and so on. SBT also has an interactive mode. If you just run ``sbt`` you enter the interactive SBT prompt and can enter the commands directly. This saves starting up a new JVM instance for each command and can be much faster and more convenient.
@@ -118,7 +118,7 @@ For example, building Akka as above is more commonly done like this:
...
SBT batch mode
---------------
+^^^^^^^^^^^^^^
It's also possible to combine commands in a single call. For example, updating, testing, and publishing Akka to the local Ivy repository can be done with:
@@ -129,7 +129,7 @@ It's also possible to combine commands in a single call. For example, updating,
----
Building Akka Modules
-=====================
+---------------------
To build Akka Modules first build and publish Akka to your local Ivy repository as described above. Or using:
@@ -146,7 +146,7 @@ Then you can build Akka Modules using the same steps as building Akka. First upd
sbt update publish-local
Microkernel distribution
-------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^
To build the Akka Modules microkernel (the same as the Akka Modules distribution download) use the ``dist`` command:
@@ -170,10 +170,10 @@ The microkernel will boot up and install the sample applications that reside in
----
Scripts
-=======
+-------
Linux/Unix init script
-----------------------
+^^^^^^^^^^^^^^^^^^^^^^
Here is a Linux/Unix init script that can be very useful:
@@ -182,7 +182,7 @@ Here is a Linux/Unix init script that can be very useful:
Copy and modify as needed.
Simple startup shell script
----------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^
This little script might help a bit. Just make sure you have the Akka distribution in the '$AKKA_HOME/dist' directory and then invoke this script to start up the kernel. The distribution is created in the './dist' dir for you if you invoke 'sbt dist'.
@@ -193,7 +193,7 @@ Copy and modify as needed.
----
Dependencies
-============
+------------
If you are managing dependencies by hand you can find out what all the compile dependencies are for each module by looking in the ``lib_managed/compile`` directories. For example, you can run this to create a listing of dependencies (providing you have the source code and have run ``sbt update``):
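Once Akka is published locally with ``sbt publish-local``, a separate SBT 0.7-style project can depend on it through its own project definition. The following is only an illustrative sketch; the module name and version are assumptions and must match whatever was actually published:

.. code-block:: scala

   import sbt._

   // Hypothetical project definition (project/build/MyProject.scala) for an
   // application using the locally published Akka artifacts; the module name
   // "akka-actor" and the version are assumptions, not taken from this commit.
   class MyProject(info: ProjectInfo) extends DefaultProject(info) {
     val akkaActor = "se.scalablesolutions.akka" % "akka-actor" % "1.1-SNAPSHOT" % "compile"
   }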

View file

@@ -3,38 +3,39 @@ Using Akka in a Buildr project
This is an example on how to use Akka in a project based on Buildr
-`<code>`_
+.. code-block:: ruby
require 'buildr/scala'
VERSION_NUMBER = "0.6"
GROUP = "se.scalablesolutions.akka"
repositories.remote << "http://www.ibiblio.org/maven2/"
repositories.remote << "http://www.lag.net/repo"
repositories.remote << "http://multiverse.googlecode.com/svn/maven-repository/releases"
AKKA = group('akka-core', 'akka-comet', 'akka-util','akka-kernel', 'akka-rest', 'akka-util-java',
             'akka-security','akka-persistence-common', 'akka-persistence-redis',
             'akka-amqp',
             :under=> 'se.scalablesolutions.akka',
             :version => '0.6')
ASPECTJ = "org.codehaus.aspectwerkz:aspectwerkz-nodeps-jdk5:jar:2.1"
SBINARY = "sbinary:sbinary:jar:0.3"
COMMONS_IO = "commons-io:commons-io:jar:1.4"
CONFIGGY = "net.lag:configgy:jar:1.4.7"
JACKSON = group('jackson-core-asl', 'jackson-mapper-asl',
                :under=> 'org.codehaus.jackson',
                :version => '1.2.1')
MULTIVERSE = "org.multiverse:multiverse-alpha:jar:jar-with-dependencies:0.3"
NETTY = "org.jboss.netty:netty:jar:3.2.0.ALPHA2"
PROTOBUF = "com.google.protobuf:protobuf-java:jar:2.2.0"
REDIS = "com.redis:redisclient:jar:1.0.1"
SJSON = "sjson.json:sjson:jar:0.3"
Project.local_task "run"
desc "Akka Chat Sample Module"
define "akka-sample-chat" do
project.version = VERSION_NUMBER
project.group = GROUP
@@ -51,5 +52,4 @@ define "akka-sample-chat" do
SBINARY, SJSON],
:java_args => ["-server"]
end
end
-`<code>`_

View file

@@ -6,7 +6,7 @@ Module stability: **IN PROGRESS**
Akka supports a Cluster Membership through a `JGroups <http://www.jgroups.org/>`_ based implementation. JGroups is is a `P2P <http://en.wikipedia.org/wiki/Peer-to-peer>`_ clustering API
Configuration
-=============
+-------------
The cluster is configured in 'akka.conf' by adding the Fully Qualified Name (FQN) of the actor class and serializer:
@@ -21,12 +21,12 @@ The cluster is configured in 'akka.conf' by adding the Fully Qualified Name (FQN
}
How to join the cluster
-=======================
+-----------------------
The node joins the cluster when the 'RemoteNode' and/or 'RemoteServer' servers are started.
Cluster API
-===========
+-----------
Interaction with the cluster is done through the 'akka.remote.Cluster' object.
@@ -80,11 +80,10 @@ Here is an example:
Here is another example:
-`<code format="scala">`_
+.. code-block:: scala
Cluster.lookup({
case remoteAddress @ RemoteAddress(_,_) => remoteAddress
}) match {
case Some(remoteAddress) => spawnAllRemoteActors(remoteAddress)
case None => handleNoRemoteNodeFound
}
-`<code>`_

View file

@@ -2,10 +2,9 @@ Dataflow Concurrency (Java)
===========================
Introduction
-============
+------------
-IMPORTANT: As of Akka 1.1, Akka Future, CompletableFuture and DefaultCompletableFuture have all the functionality of DataFlowVariables, they also support non-blocking composition and advanced features like fold and reduce, Akka DataFlowVariable is therefor deprecated and will probably resurface in the following release as a DSL on top of Futures.
+**IMPORTANT: As of Akka 1.1, Akka Future, CompletableFuture and DefaultCompletableFuture have all the functionality of DataFlowVariables, they also support non-blocking composition and advanced features like fold and reduce, Akka DataFlowVariable is therefor deprecated and will probably resurface in the following release as a DSL on top of Futures.**
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Akka implements `Oz-style dataflow concurrency <http://www.mozart-oz.org/documentation/tutorial/node8.html#chapter.concurrency>`_ through dataflow (single assignment) variables and lightweight (event-based) processes/threads.
@@ -80,12 +79,12 @@ You can also set the thread to a reference to be able to control its life-cycle:
t.sendOneWay(new Exit()); // shut down the thread
Examples
-========
+--------
Most of these examples are taken from the `Oz wikipedia page <http://en.wikipedia.org/wiki/Oz_%28programming_language%29>`_
Simple DataFlowVariable example
--------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This example is from Oz wikipedia page: http://en.wikipedia.org/wiki/Oz_(programming_language).
Sort of the "Hello World" of dataflow concurrency.
@@ -132,23 +131,23 @@ Example in Akka:
});
Example on life-cycle management of DataFlowVariables
------------------------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Shows how to shutdown dataflow variables and bind threads to values to be able to interact with them (exit etc.).
Example in Akka:
-`<code format="java">`_
+.. code-block:: java
import static akka.dataflow.DataFlow.*;
import akka.japi.Effect;
// create four 'int' data flow variables
DataFlowVariable<int> x = new DataFlowVariable<int>();
DataFlowVariable<int> y = new DataFlowVariable<int>();
DataFlowVariable<int> z = new DataFlowVariable<int>();
DataFlowVariable<int> v = new DataFlowVariable<int>();
ActorRef main = thread(new Effect() {
public void apply() {
System.out.println("Thread 'main'")
if (x.get() > y.get()) {
@@ -165,27 +164,26 @@ ActorRef main = thread(new Effect() {
z.shutdown();
v.shutdown();
}
});
ActorRef setY = thread(new Effect() {
public void apply() {
System.out.println("Thread 'setY', sleeping...");
Thread.sleep(5000);
y.set(2);
System.out.println("'y' set to: " + y.get());
}
});
ActorRef setV = thread(new Effect() {
public void apply() {
System.out.println("Thread 'setV'");
y.set(2);
System.out.println("'v' set to y: " + v.get());
}
});
// shut down the threads
main.sendOneWay(new Exit());
setY.sendOneWay(new Exit());
setV.sendOneWay(new Exit());
-`<code>`_

View file

@@ -2,10 +2,9 @@ Dataflow Concurrency (Scala)
============================
Description
-===========
+-----------
-IMPORTANT: As of Akka 1.1, Akka Future, CompletableFuture and DefaultCompletableFuture have all the functionality of DataFlowVariables, they also support non-blocking composition and advanced features like fold and reduce, Akka DataFlowVariable is therefor deprecated and will probably resurface in the following release as a DSL on top of Futures.
+**IMPORTANT: As of Akka 1.1, Akka Future, CompletableFuture and DefaultCompletableFuture have all the functionality of DataFlowVariables, they also support non-blocking composition and advanced features like fold and reduce, Akka DataFlowVariable is therefor deprecated and will probably resurface in the following release as a DSL on top of Futures.**
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Akka implements `Oz-style dataflow concurrency <http://www.mozart-oz.org/documentation/tutorial/node8.html#chapter.concurrency>`_ through dataflow (single assignment) variables and lightweight (event-based) processes/threads.
@@ -14,6 +13,7 @@ Dataflow concurrency is deterministic. This means that it will always behave the
The best way to learn how to program with dataflow variables is to read the fantastic book `Concepts, Techniques, and Models of Computer Programming <http://www.info.ucl.ac.be/%7Epvr/book.html>`_. By Peter Van Roy and Seif Haridi.
The documentation is not as complete as it should be, something we will improve shortly. For now, besides above listed resources on dataflow concurrency, I recommend you to read the documentation for the GPars implementation, which is heavily influenced by the Akka implementation:
* `<http://gpars.codehaus.org/Dataflow>`_
* `<http://www.gpars.org/guide/guide/7.%20Dataflow%20Concurrency.html>`_
@@ -68,7 +68,7 @@ You can also set the thread to a reference to be able to control its life-cycle:
t ! 'exit // shut down the thread
Examples
-========
+--------
Most of these examples are taken from the `Oz wikipedia page <http://en.wikipedia.org/wiki/Oz_%28programming_language%29>`_
@@ -96,7 +96,7 @@ Note: Do not try to run the Oz version, it is only there for reference.
3. Have fun.
Simple DataFlowVariable example
--------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This example is from Oz wikipedia page: http://en.wikipedia.org/wiki/Oz_(programming_language).
Sort of the "Hello World" of dataflow concurrency.
@@ -128,7 +128,7 @@ Example in Akka:
thread { y << 2 }
Example of using DataFlowVariable with recursion
-------------------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Using DataFlowVariable and recursion to calculate sum.
@@ -178,19 +178,19 @@ Example in Akka:
thread { println("List of sums: " + y()) }
Example on life-cycle management of DataFlowVariables
------------------------------------------------------
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Shows how to shutdown dataflow variables and bind threads to values to be able to interact with them (exit etc.).
Example in Akka:
-`<code format="scala">`_
+.. code-block:: scala
import akka.dataflow.DataFlow._
// create four 'Int' data flow variables
val x, y, z, v = new DataFlowVariable[Int]
val main = thread {
println("Thread 'main'")
x << 1
@@ -211,23 +211,22 @@ val main = thread {
y.shutdown
z.shutdown
v.shutdown
}
val setY = thread {
println("Thread 'setY', sleeping...")
Thread.sleep(5000)
y << 2
println("'y' set to: " + y())
}
val setV = thread {
println("Thread 'setV'")
v << y
println("'v' set to 'y': " + v())
}
// shut down the threads
main ! 'exit
setY ! 'exit
setV ! 'exit
-`<code>`_
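To get a quick feel for the API used in the examples above, here is a minimal self-contained sketch along the same lines, using only the ``DataFlowVariable``, ``<<`` and ``thread`` constructs shown in this file:

.. code-block:: scala

   import akka.dataflow.DataFlow._

   // z is bound once both x and y have been set; reading z() blocks until then
   val x, y, z = new DataFlowVariable[Int]
   thread { z << x() + y() }
   thread { x << 40 }
   thread { y << 2 }
   println("z = " + z())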

View file

@@ -2,7 +2,7 @@ Developer Guidelines
====================
Code Style
-==========
+----------
The Akka code style follows `this document <http://davetron5000.github.com/scala-style/ScalaStyleGuide.pdf>`_ .
@@ -12,20 +12,22 @@ Here is a code style settings file for IntelliJ IDEA.
Please follow the code style. Look at the code around you and mimic.
Testing
-=======
+-------
All code that is checked in should have tests. All testing is done with ScalaTest and ScalaCheck.
* Name tests as *Test.scala if they do not depend on any external stuff. That keeps surefire happy.
* Name tests as *Spec.scala if they have external dependencies.
There is a testing standard that should be followed: `Ticket001Spec <@https://github.com/jboner/akka/blob/master/akka-actor/src/test/scala/akka/ticket/Ticket001Spec.scala>`_
Actor TestKit
--------------
+^^^^^^^^^^^^^
There is a useful test kit for testing actors: `akka.util.TestKit <@https://github.com/jboner/akka/tree/master/akka-actor/src/main/scala/akka/util/TestKit.scala>`_. It enables assertions concerning replies received and their timing, there is more documentation in the `<TestKit>`_ module.
NetworkFailureTest
-------------------
+^^^^^^^^^^^^^^^^^^
You can use the 'NetworkFailureTest' trait to test network failure. See the 'RemoteErrorHandlingNetworkTest' test. Your tests needs to end with 'NetworkTest'. They are disabled by default. To run them you need to enable a flag.
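As a concrete illustration of the naming convention above, a self-contained actor test with no external dependencies could look like the sketch below (class names are made up; the actor API follows the Akka 1.x documentation elsewhere in this commit):

.. code-block:: scala

   import org.scalatest.WordSpec
   import org.scalatest.matchers.MustMatchers
   import akka.actor.Actor

   // No external dependencies, so the file would be named EchoActorTest.scala
   class EchoActorTest extends WordSpec with MustMatchers {

     class EchoActor extends Actor {
       def receive = { case msg => self.reply(msg) }
     }

     "An EchoActor" must {
       "reply with the message it was sent" in {
         val echo = Actor.actorOf(new EchoActor).start()
         (echo !! "ping") must be (Some("ping"))
         echo.stop()
       }
     }
   }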

View file

@@ -6,7 +6,7 @@ Module stability: **SOLID**
The "let it crash" approach to fault/error handling, implemented by linking actors, is very different to what Java and most non-concurrency oriented languages/frameworks have adopted. Its a way of dealing with failure that is designed for concurrent and distributed systems.
Concurrency
-^^^^^^^^^^^
+-----------
Throwing an exception in concurrent code (lets assume we are using non-linked actors), will just simply blow up the thread that currently executes the actor.
@@ -24,14 +24,14 @@ This is very useful when you have thousands of concurrent actors. Some actors mi
It encourages non-defensive programming. Dont try to prevent things from go wrong, because they will, whether you want it or not. Instead; expect failure as a natural state in the life-cycle of your app, crash early and let someone else (that sees the whole picture), deal with it.
Distributed actors
-^^^^^^^^^^^^^^^^^^
+------------------
You cant build a fault-tolerant system with just one single box - you need at least two. Also, you (usually) need to know if one box is down and/or the service you are talking to on the other box is down. Here actor supervision/linking is a critical tool for not only monitoring the health of remote services, but to actually manage the service, do something about the problem if the actor or node is down. Such as restarting actors on the same node or on another node.
In short, it is a very different way of thinking, but a way that is very useful (if not critical) to building fault-tolerant highly concurrent and distributed applications, which is as valid if you are writing applications for the JVM or the Erlang VM (the origin of the idea of "let-it-crash" and actor supervision).
Supervision
-===========
+-----------
Supervisor hierarchies originate from `Erlangs OTP framework <http://www.erlang.org/doc/design_principles/sup_princ.html#5>`_.
@@ -45,20 +45,17 @@ OneForOne
The OneForOne fault handler will restart only the component that has crashed.
`<image:http://www.erlang.org/doc/design_principles/sup4.gif>`_
-^
AllForOne
^^^^^^^^^
The AllForOne fault handler will restart all the components that the supervisor is managing, including the one that have crashed. This strategy should be used when you have a certain set of components that are coupled in some way that if one is crashing they all need to be reset to a stable state before continuing.
`<image:http://www.erlang.org/doc/design_principles/sup5.gif>`_
-^
Restart callbacks
^^^^^^^^^^^^^^^^^
There are two different callbacks that an UntypedActor or TypedActor can hook in to:
* Pre restart
* Post restart
@@ -68,8 +65,10 @@ Defining a supervisor's restart strategy
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Both the Typed Actor supervisor configuration and the Actor supervisor configuration take a FaultHandlingStrategy instance which defines the fault management. The different strategies are:
* AllForOne
* OneForOne
These have the semantics outlined in the section above.
Here is an example of how to define a restart strategy:
@@ -86,6 +85,7 @@ Defining actor life-cycle
^^^^^^^^^^^^^^^^^^^^^^^^^
The other common configuration element is the LifeCycle which defines the life-cycle. The supervised actor can define one of two different life-cycle configurations:
* Permanent: which means that the actor will always be restarted.
* Temporary: which means that the actor will **not** be restarted, but it will be shut down through the regular shutdown process so the 'postStop' callback function will called.
@@ -223,10 +223,11 @@ If a linked Actor is failing and throws an exception then an new Exit(deadAct
The supervising Actor also needs to define a fault handler that defines the restart strategy the Actor should accommodate when it traps an Exit message. This is done by setting the setFaultHandler method.
The different options are:
* AllForOneStrategy(trapExit, maxNrOfRetries, withinTimeRange)
-** trapExit is an Array of classes inheriting from Throwable, they signal which types of exceptions this actor will handle
+* trapExit is an Array of classes inheriting from Throwable, they signal which types of exceptions this actor will handle
* OneForOneStrategy(trapExit, maxNrOfRetries, withinTimeRange)
-** trapExit is an Array of classes inheriting from Throwable, they signal which types of exceptions this actor will handle
+* trapExit is an Array of classes inheriting from Throwable, they signal which types of exceptions this actor will handle
Here is an example:
@@ -334,6 +335,7 @@ If you remember, when you define the 'RestartStrategy' you also defined maximum
Now, what happens if this limit is reached?
What will happen is that the failing actor will send a system message to its supervisor called 'MaximumNumberOfRestartsWithinTimeRangeReached' with the following these properties:
* victim: ActorRef
* maxNrOfRetries: int
* withinTimeRange: int
@@ -369,8 +371,6 @@ You will also get this log warning similar to this:
If you don't define a message handler for this message then you don't get an error but the message is simply not sent to the supervisor. Instead you will get a log warning.
-
Supervising Typed Actors
------------------------
@@ -409,8 +409,6 @@ Then you can retrieve the Typed Actor as follows:
Foo foo = (Foo) manager.getInstance(Foo.class);
-^
Restart callbacks
^^^^^^^^^^^^^^^^^
@@ -450,17 +448,16 @@ If the parent TypedActor (supervisor) wants to be able to do handle failing chil
For convenience there is an overloaded link that takes trapExit and faultHandler for the supervisor as arguments. Here is an example:
-`<code format="java5">`_
+.. code-block:: java
import static akka.actor.TypedActor.*;
import static akka.config.Supervision.*;
foo = newInstance(Foo.class, FooImpl.class, 1000);
bar = newInstance(Bar.class, BarImpl.class, 1000);
link(foo, bar, new AllForOneStrategy(new Class[]{IOException.class}, 3, 2000));
// alternative: chaining
bar = faultHandler(foo, new AllForOneStrategy(new Class[]{IOException.class}, 3, 2000)).newInstance(Bar.class, 1000);
link(foo, bar);
-`<code>`_

View file

@@ -6,7 +6,7 @@ Module stability: **SOLID**
The "let it crash" approach to fault/error handling, implemented by linking actors, is very different to what Java and most non-concurrency oriented languages/frameworks have adopted. It's a way of dealing with failure that is designed for concurrent and distributed systems.
Concurrency
-^^^^^^^^^^^
+-----------
Throwing an exception in concurrent code (let's assume we are using non-linked actors), will just simply blow up the thread that currently executes the actor.
@@ -16,6 +16,7 @@ Throwing an exception in concurrent code (let's assume we are using non-linked a
Here actors provide a clean way of getting notification of the error and do something about it.
Linking actors also allow you to create sets of actors where you can be sure that either:
# All are dead
# None are dead
@@ -24,14 +25,14 @@ This is very useful when you have thousands of concurrent actors. Some actors mi
It encourages non-defensive programming. Don't try to prevent things from go wrong, because they will, whether you want it or not. Instead; expect failure as a natural state in the life-cycle of your app, crash early and let someone else (that sees the whole picture), deal with it.
Distributed actors
-^^^^^^^^^^^^^^^^^^
+------------------
You can't build a fault-tolerant system with just one single box - you need at least two. Also, you (usually) need to know if one box is down and/or the service you are talking to on the other box is down. Here actor supervision/linking is a critical tool for not only monitoring the health of remote services, but to actually manage the service, do something about the problem if the actor or node is down. Such as restarting actors on the same node or on another node.
In short, it is a very different way of thinking, but a way that is very useful (if not critical) to building fault-tolerant highly concurrent and distributed applications, which is as valid if you are writing applications for the JVM or the Erlang VM (the origin of the idea of "let-it-crash" and actor supervision).
Supervision
-===========
+-----------
Supervisor hierarchies originate from `Erlang's OTP framework <http://www.erlang.org/doc/design_principles/sup_princ.html#5>`_.
@@ -45,20 +46,17 @@ OneForOne
The OneForOne fault handler will restart only the component that has crashed.
`<image:http://www.erlang.org/doc/design_principles/sup4.gif>`_
-^
AllForOne
^^^^^^^^^
The AllForOne fault handler will restart all the components that the supervisor is managing, including the one that have crashed. This strategy should be used when you have a certain set of components that are coupled in some way that if one is crashing they all need to be reset to a stable state before continuing.
`<image:http://www.erlang.org/doc/design_principles/sup5.gif>`_
-^
Restart callbacks
^^^^^^^^^^^^^^^^^
There are two different callbacks that the Typed Actor and Actor can hook in to:
* Pre restart
* Post restart
@@ -68,8 +66,10 @@ Defining a supervisor's restart strategy
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Both the Typed Actor supervisor configuration and the Actor supervisor configuration take a 'FaultHandlingStrategy' instance which defines the fault management. The different strategies are:
* AllForOne
* OneForOne
These have the semantics outlined in the section above.
Here is an example of how to define a restart strategy:
@@ -86,6 +86,7 @@ Defining actor life-cycle
^^^^^^^^^^^^^^^^^^^^^^^^^
The other common configuration element is the "LifeCycle' which defines the life-cycle. The supervised actor can define one of two different life-cycle configurations:
* Permanent: which means that the actor will always be restarted.
* Temporary: which means that the actor will **not** be restarted, but it will be shut down through the regular shutdown process so the 'postStop' callback function will called.
@@ -216,10 +217,11 @@ The supervising Actor also needs to define a fault handler that defines the rest
protected var faultHandler: FaultHandlingStrategy
The different options are:
* AllForOneStrategy(trapExit, maxNrOfRetries, withinTimeRange)
-** trapExit is a List or Array of classes inheriting from Throwable, they signal which types of exceptions this actor will handle
+* trapExit is a List or Array of classes inheriting from Throwable, they signal which types of exceptions this actor will handle
* OneForOneStrategy(trapExit, maxNrOfRetries, withinTimeRange)
-** trapExit is a List or Array of classes inheriting from Throwable, they signal which types of exceptions this actor will handle
+* trapExit is a List or Array of classes inheriting from Throwable, they signal which types of exceptions this actor will handle
Here is an example:
@@ -346,8 +348,6 @@ You will also get this log warning similar to this:
If you don't define a message handler for this message then you don't get an error but the message is simply not sent to the supervisor. Instead you will get a log warning.
-
Supervising Typed Actors
------------------------
@@ -407,17 +407,16 @@ If the parent TypedActor (supervisor) wants to be able to do handle failing chil
For convenience there is an overloaded link that takes trapExit and faultHandler for the supervisor as arguments. Here is an example:
-`<code format="scala">`_
+.. code-block:: scala
import akka.actor.TypedActor._
val foo = newInstance(classOf[Foo], 1000)
val bar = newInstance(classOf[Bar], 1000)
link(foo, bar, new AllForOneStrategy(Array(classOf[IOException]), 3, 2000))
// alternative: chaining
bar = faultHandler(foo, new AllForOneStrategy(Array(classOf[IOException]), 3, 2000))
.newInstance(Bar.class, 1000)
link(foo, bar
-`<code>`_
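For the untyped ``Actor`` case described earlier in this file, a hedged sketch of a supervising actor might look as follows; the ``Register`` message and class names are placeholders, and setting ``self.faultHandler`` plus calling ``self.link`` follows the Akka 1.x API:

.. code-block:: scala

   import java.io.IOException
   import akka.actor.{Actor, ActorRef}
   import akka.config.Supervision.AllForOneStrategy

   // Hypothetical supervisor: traps IOException and restarts all linked
   // children, at most 3 times within 2000 ms (the same strategy as above)
   case class Register(child: ActorRef)

   class Supervisor extends Actor {
     self.faultHandler = new AllForOneStrategy(Array(classOf[IOException]), 3, 2000)

     def receive = {
       case Register(child) => self.link(child)
     }
   }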

View file

@@ -4,14 +4,14 @@ Futures (Scala)
Introduction
------------
-In Akka, a `Future <http://en.wikipedia.org/wiki/Futures_and_promises>`_ is a data structure used to retrieve the result of some concurrent operation. This operation is usually performed by an `Actor <futures-scala#use-actor>`_ or by the Dispatcher `directly <futures-scala#use-direct>`_. This result can be accessed synchronously (blocking) or asynchronously (non-blocking).
+In Akka, a `Future <http://en.wikipedia.org/wiki/Futures_and_promises>`_ is a data structure used to retrieve the result of some concurrent operation. This operation is usually performed by an ``Actor`` or by the ``Dispatcher`` directly. This result can be accessed synchronously (blocking) or asynchronously (non-blocking).
Use with Actors
---------------
-There are generally two ways of getting a reply from an Actor: the first is by a sent message (`actor ! msg <actors-scala#fire-forget>`_), which only works if the original sender was an Actor) and the second is through a Future.
+There are generally two ways of getting a reply from an ``Actor``: the first is by a sent message (``actor ! msg``), which only works if the original sender was an ``Actor``) and the second is through a ``Future``.
-Using an Actor's '!!!' method to send a message will return a Future. To wait for and retreive the actual result the simplest method is:
+Using an ``Actor``\'s ``!!!`` method to send a message will return a Future. To wait for and retreive the actual result the simplest method is:
.. code-block:: scala
@@ -20,12 +20,12 @@ Using an Actor's '!!!' method to send a message will return a Future. To wait fo
// or more simply
val result: Any = future()
-This will cause the current thread to block and wait for the Actor to 'complete' the Future with it's reply. Due to the dynamic nature of Akka's Actors this result will be untyped and will default to 'Nothing'. The safest way to deal with this is to cast the result to an Any as is shown in the above example. You can also use the expected result type instead of Any, but if an unexpected type were to be returned you will get a ClassCastException. For more elegant ways to deal with this and to use the result without blocking refer to `Functional Futures <futures-scala#functional>`_.
+This will cause the current thread to block and wait for the ``Actor`` to 'complete' the ``Future`` with it's reply. Due to the dynamic nature of Akka's ``Actor``\s this result will be untyped and will default to ``Nothing``. The safest way to deal with this is to cast the result to an ``Any`` as is shown in the above example. You can also use the expected result type instead of ``Any``, but if an unexpected type were to be returned you will get a ``ClassCastException``. For more elegant ways to deal with this and to use the result without blocking refer to `Functional Futures`_.
Use Directly
------------
-A common use case within Akka is to have some computation performed concurrently without needing the extra utility of an Actor. If you find yourself creating a pool of Actors for the sole reason of performing a calculation in parallel, there is an easier (and faster) way:
+A common use case within Akka is to have some computation performed concurrently without needing the extra utility of an ``Actor``. If you find yourself creating a pool of ``Actor``\s for the sole reason of performing a calculation in parallel, there is an easier (and faster) way:
.. code-block:: scala
@@ -36,17 +36,17 @@ A common use case within Akka is to have some computation performed concurrently
}
val result = future()
-In the above code the block passed to Future will be executed by the default `Dispatcher <dispatchers-scala>`_, with the return value of the block used to complete the Future (in this case, the result would be the string: "HelloWorld"). Unlike a Future that is returned from an Actor, this Future is properly typed, and we also avoid the overhead of managing an Actor.
+In the above code the block passed to ``Future`` will be executed by the default ``Dispatcher``, with the return value of the block used to complete the ``Future`` (in this case, the result would be the string: "HelloWorld"). Unlike a ``Future`` that is returned from an ``Actor``, this ``Future`` is properly typed, and we also avoid the overhead of managing an ``Actor``.
Functional Futures
------------------
-A recent addition to Akka's Future is several monadic methods that are very similar to the ones used by Scala's collections. These allow you to create 'pipelines' or 'streams' that the result will travel through.
+A recent addition to Akka's ``Future`` is several monadic methods that are very similar to the ones used by Scala's collections. These allow you to create 'pipelines' or 'streams' that the result will travel through.
Future is a Monad
^^^^^^^^^^^^^^^^^
-The first method for working with Future functionally is 'map'. This method takes a Function which performs some operation on the result of the Future, and returning a new result. The return value of the 'map' method is another Future that will contain the new result:
+The first method for working with ``Future`` functionally is ``map``. This method takes a ``Function`` which performs some operation on the result of the ``Future``, and returning a new result. The return value of the ``map`` method is another ``Future`` that will contain the new result:
.. code-block:: scala
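   // Purely illustrative sketch, not the original snippet from this commit:
   // map the untyped reply of an actor into a typed value; `actor` is assumed
   // to be a started ActorRef that replies with a String
   val future1: Future[Any] = actor !!! "Hello"
   val future2: Future[Int] = future1 map {
     case s: String => s.length
   }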