DOC: Replace all akka.conf references. Fixes #1469

* Let us use :ref:`configuration` in all places to refer to the configuration.
Patrik Nordwall 2011-12-09 13:27:27 +01:00
parent 9fdf9a9c66
commit 884dc43a7d
22 changed files with 50 additions and 122 deletions

View file

@@ -74,8 +74,7 @@ storage.
 Read more about that in the :ref:`dispatchers-scala` documentation.
 You can also configure and tune the file-based durable mailbox. This is done in
-the ``akka.actor.mailbox.file-based`` section in the ``akka.conf`` configuration
-file.
+the ``akka.actor.mailbox.file-based`` section in the :ref:`configuration`.
 .. code-block:: none
@@ -125,8 +124,7 @@ or for a thread-based durable dispatcher::
 RedisDurableMailboxStorage)
 You also need to configure the IP and port for the Redis server. This is done in
-the ``akka.actor.mailbox.redis`` section in the ``akka.conf`` configuration
-file.
+the ``akka.actor.mailbox.redis`` section in the :ref:`configuration`.
 .. code-block:: none
@@ -169,8 +167,7 @@ or for a thread-based durable dispatcher::
 ZooKeeperDurableMailboxStorage)
 You also need to configure ZooKeeper server addresses, timeouts, etc. This is
-done in the ``akka.actor.mailbox.zookeeper`` section in the ``akka.conf``
-configuration file.
+done in the ``akka.actor.mailbox.zookeeper`` section in the :ref:`configuration`.
 .. code-block:: none
@@ -208,7 +205,7 @@ or for a thread-based durable dispatcher. ::
 You also need to configure the IP, and port, and so on, for the Beanstalk
 server. This is done in the ``akka.actor.mailbox.beanstalk`` section in the
-``akka.conf`` configuration file.
+:ref:`configuration`.
 .. code-block:: none
@@ -238,8 +235,7 @@ features cohesive to a fast, reliable & durable queueing mechanism which the Akk
 Akka's implementations of MongoDB mailboxes are built on top of the purely asynchronous MongoDB driver (often known as `Hammersmith <http://github.com/bwmcadams/hammersmith>`_ and ``com.mongodb.async``) and as such are purely callback based with a Netty network layer. This makes them extremely fast & lightweight versus building on other MongoDB implementations such as `mongo-java-driver <http://github.com/mongodb/mongo-java-driver>`_ and `Casbah <http://github.com/mongodb/casbah>`_.
 You will need to configure the URI for the MongoDB server, using the URI Format specified in the `MongoDB Documentation <http://www.mongodb.org/display/DOCS/Connections>`_. This is done in
-the ``akka.actor.mailbox.mongodb`` section in the ``akka.conf`` configuration
-file.
+the ``akka.actor.mailbox.mongodb`` section in the :ref:`configuration`.
 .. code-block:: none
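The hunks above only change where the durable-mailbox settings are documented to live. For illustration, a minimal sketch of reading such settings at runtime, assuming the Typesafe Config library behind the configuration; the section names come from the text above, but the concrete keys are made up:

.. code-block:: scala

    import com.typesafe.config.ConfigFactory

    object MailboxConfigCheck extends App {
      // Loads application.conf or the file given via -Dconfig.file.
      val config = ConfigFactory.load()

      // Section names come from the documentation above; the concrete keys
      // ("directory-path", "hostname", "port") are only illustrative guesses.
      val fileBased = config.getConfig("akka.actor.mailbox.file-based")
      println("file-based mailbox dir: " + fileBased.getString("directory-path"))

      val redis = config.getConfig("akka.actor.mailbox.redis")
      println("redis at " + redis.getString("hostname") + ":" + redis.getInt("port"))
    }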

View file

@@ -35,7 +35,7 @@ multi-JVM testing::
 base = file("akka-cluster"),
 settings = defaultSettings ++ MultiJvmPlugin.settings ++ Seq(
 extraOptions in MultiJvm <<= (sourceDirectory in MultiJvm) { src =>
-(name: String) => (src ** (name + ".conf")).get.headOption.map("-Dakka.config=" + _.absolutePath).toSeq
+(name: String) => (src ** (name + ".conf")).get.headOption.map("-Dconfig.file=" + _.absolutePath).toSeq
 },
 test in Test <<= (test in Test) dependsOn (test in MultiJvm)
 )
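A side note on the changed sbt setting: ``-Dconfig.file`` is the standard system property of the Typesafe Config library (replacing the old ``-Dakka.config`` property), and ``ConfigFactory.load()`` picks it up. A minimal, hedged sketch:

.. code-block:: scala

    import com.typesafe.config.ConfigFactory

    object ShowConfigOrigin extends App {
      // If the JVM was started with e.g. -Dconfig.file=/path/to/node1.conf,
      // load() reads that file; otherwise it falls back to application.conf
      // and the reference defaults.
      val config = ConfigFactory.load()
      println("configuration loaded from: " + config.origin().description())
    }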
@@ -176,10 +176,10 @@ and add the options to them.
 -Dakka.cluster.nodename=node3 -Dakka.remote.port=9993
-Overriding akka.conf options
-----------------------------
+Overriding configuration options
+--------------------------------
-You can also override the options in the ``akka.conf`` file with different options for each
+You can also override the options in the :ref:`configuration` file with different options for each
 spawned JVM. You do that by creating a file named after the node in the test with suffix
 ``.conf`` and put them in the same directory as the test .
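For illustration, one way such per-node overrides can be layered over the common configuration with the Typesafe Config API; the helper below and the override string are made up, not part of the test kit:

.. code-block:: scala

    import com.typesafe.config.ConfigFactory

    object NodeConfig {
      // The node-specific overrides are consulted first; anything missing
      // falls back to the regularly loaded configuration.
      def forNode(overrides: String) =
        ConfigFactory.parseString(overrides).withFallback(ConfigFactory.load())
    }

    // Usage, echoing the options shown above:
    // val config = NodeConfig.forNode("akka.remote.port = 9993")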

View file

@@ -48,8 +48,8 @@ cluster node.
 Cluster configuration
 ~~~~~~~~~~~~~~~~~~~~~
-Cluster is configured in the ``akka.cloud.cluster`` section in the ``akka.conf``
-configuration file. Here you specify the default addresses to the ZooKeeper
+Cluster is configured in the ``akka.cloud.cluster`` section in the :ref:`configuration`.
+Here you specify the default addresses to the ZooKeeper
 servers, timeouts, if compression should be on or off, and so on.
 .. code-block:: conf
@@ -594,7 +594,7 @@ Consolidation and management of the Akka configuration file
 Not implemented yet.
-The actor configuration file ``akka.conf`` will also be stored into the cluster
+The actor :ref:`configuration` file will also be stored into the cluster
 and it will be possible to have one single configuration file, stored on the server, and pushed out to all
 the nodes that joins the cluster. Each node only needs to be configured with the ZooKeeper
 server address and the master configuration will only reside in one single place

View file

@@ -1,3 +1,5 @@
+.. _configuration:
+
 Configuration
 =============

View file

@@ -9,7 +9,8 @@ There is an Event Handler which takes the place of a logging system in Akka:
 akka.event.EventHandler
-You can configure which event handlers should be registered at boot time. That is done using the 'event-handlers' element in akka.conf. Here you can also define the log level.
+You can configure which event handlers should be registered at boot time. That is done using the 'event-handlers' element in
+the :ref:`configuration`. Here you can also define the log level.
 .. code-block:: ruby

View file

@@ -14,7 +14,8 @@ also need a SLF4J backend, we recommend `Logback <http://logback.qos.ch/>`_:
 Event Handler
 -------------
-This module includes a SLF4J Event Handler that works with Akka's standard Event Handler. You enabled it in the 'event-handlers' element in akka.conf. Here you can also define the log level.
+This module includes a SLF4J Event Handler that works with Akka's standard Event Handler. You enabled it in the 'event-handlers' element in
+the :ref:`configuration`. Here you can also define the log level.
 .. code-block:: ruby
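A sketch only: the 'event-handlers' element mentioned above, expressed as a config fragment and parsed with the Typesafe Config library. Treating the handler class name as ``akka.event.slf4j.Slf4jEventHandler`` is an assumption, and the key for the log level is not shown in this commit, so it is left out:

.. code-block:: scala

    import com.typesafe.config.ConfigFactory

    object Slf4jHandlerConfig extends App {
      // The handler class name is assumed; only the 'event-handlers' element
      // itself is taken from the documentation above.
      val config = ConfigFactory.parseString("""
        akka {
          event-handlers = ["akka.event.slf4j.Slf4jEventHandler"]
        }
      """)
      println(config.getStringList("akka.event-handlers"))
    }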

View file

@@ -29,12 +29,12 @@ Actors as services
 The simplest way you can use Akka is to use the actors as services in your Web
 application. All thats needed to do that is to put the Akka charts as well as
-its dependency jars into ``WEB-INF/lib``. You also need to put the ``akka.conf``
-config file in the ``$AKKA_HOME/config`` directory. Now you can create your
+its dependency jars into ``WEB-INF/lib``. You also need to put the :ref:`configuration`
+file in the ``$AKKA_HOME/config`` directory. Now you can create your
 Actors as regular services referenced from your Web application. You should also
 be able to use the Remoting service, e.g. be able to make certain Actors remote
 on other hosts. Please note that remoting service does not speak HTTP over port
-80, but a custom protocol over the port is specified in ``akka.conf``.
+80, but a custom protocol over the port is specified in :ref:`configuration`.
 Using Akka as a stand alone microkernel

View file

@@ -729,18 +729,12 @@ we compiled ourselves::
 $ java \
 -cp lib/scala-library.jar:lib/akka/akka-actor-2.0-SNAPSHOT.jar:tutorial \
 akka.tutorial.java.first.Pi
-AKKA_HOME is defined as [/Users/jboner/tools/akka-actors-2.0-SNAPSHOT]
-loading config from [/Users/jboner/tools/akka-actors-2.0-SNAPSHOT/config/akka.conf].
 Pi estimate: 3.1435501812459323
 Calculation time: 822 millis
 Yippee! It is working.
-If you have not defined the ``AKKA_HOME`` environment variable then Akka can't
-find the ``akka.conf`` configuration file and will print out a ``Cant load
-akka.conf`` warning. This is ok since it will then just use the defaults.
 Run it inside Maven
 -------------------
@@ -758,8 +752,6 @@ When this in done we can run our application directly inside Maven::
 Yippee! It is working.
-If you have not defined an the ``AKKA_HOME`` environment variable then Akka can't find the ``akka.conf`` configuration file and will print out a ``Cant load akka.conf`` warning. This is ok since it will then just use the defaults.
 Conclusion
 ----------

View file

@@ -382,15 +382,10 @@ Run it from Eclipse
 Eclipse builds your project on every save when ``Project/Build Automatically`` is set. If not, bring you project up to date by clicking ``Project/Build Project``. If there are no compilation errors, you can right-click in the editor where ``Pi`` is defined, and choose ``Run as.. /Scala application``. If everything works fine, you should see::
-AKKA_HOME is defined as [/Users/jboner/tools/akka-actors-2.0-SNAPSHOT]
-loading config from [/Users/jboner/tools/akka-actors-2.0-SNAPSHOT/config/akka.conf].
 Pi estimate: 3.1435501812459323
 Calculation time: 858 millis
-If you have not defined an the ``AKKA_HOME`` environment variable then Akka can't find the ``akka.conf`` configuration file and will print out a ``Cant load akka.conf`` warning. This is ok since it will then just use the defaults.
-You can also define a new Run configuration, by going to ``Run/Run Configurations``. Create a new ``Scala application`` and choose the tutorial project and the main class to be ``akkatutorial.Pi``. You can pass additional command line arguments to the JVM on the ``Arguments`` page, for instance to define where ``akka.conf`` is:
+You can also define a new Run configuration, by going to ``Run/Run Configurations``. Create a new ``Scala application`` and choose the tutorial project and the main class to be ``akkatutorial.Pi``. You can pass additional command line arguments to the JVM on the ``Arguments`` page, for instance to define where :ref:`configuration` is:
 .. image:: ../images/run-config.png

View file

@@ -424,19 +424,12 @@ compiled ourselves::
 $ java \
 -cp lib/scala-library.jar:lib/akka/akka-actor-2.0-SNAPSHOT.jar:. \
 akka.tutorial.first.scala.Pi
-AKKA_HOME is defined as [/Users/jboner/tools/akka-actors-2.0-SNAPSHOT]
-loading config from [/Users/jboner/tools/akka-actors-2.0-SNAPSHOT/config/akka.conf].
 Pi estimate: 3.1435501812459323
 Calculation time: 858 millis
 Yippee! It is working.
-If you have not defined the ``AKKA_HOME`` environment variable then Akka can't
-find the ``akka.conf`` configuration file and will print out a ``Cant load
-akka.conf`` warning. This is ok since it will then just use the defaults.
 Run it inside SBT
 =================
@@ -456,11 +449,6 @@ When this in done we can run our application directly inside SBT::
 Yippee! It is working.
-If you have not defined an the ``AKKA_HOME`` environment variable then Akka
-can't find the ``akka.conf`` configuration file and will print out a ``Cant
-load akka.conf`` warning. This is ok since it will then just use the defaults.
 Conclusion
 ==========

View file

@@ -19,7 +19,7 @@ Default dispatcher
 ------------------
 For most scenarios the default settings are the best. Here we have one single event-based dispatcher for all Actors created. The default dispatcher used is "GlobalDispatcher" which also is retrievable in ``akka.dispatch.Dispatchers.globalDispatcher``.
-The Dispatcher specified in the akka.conf as "default-dispatcher" is as ``Dispatchers.defaultGlobalDispatcher``.
+The Dispatcher specified in the :ref:`configuration` as "default-dispatcher" is as ``Dispatchers.defaultGlobalDispatcher``.
 The "GlobalDispatcher" is not configurable but will use default parameters given by Akka itself.
@@ -124,16 +124,13 @@ Here is an example:
 ...
 }
-This 'Dispatcher' allows you to define the 'throughput' it should have. This defines the number of messages for a specific Actor the dispatcher should process in one single sweep.
-Setting this to a higher number will increase throughput but lower fairness, and vice versa. If you don't specify it explicitly then it uses the default value defined in the 'akka.conf' configuration file:
-.. code-block:: xml
-actor {
-throughput = 5
-}
-If you don't define a the 'throughput' option in the configuration file then the default value of '5' will be used.
+The standard :class:`Dispatcher` allows you to define the ``throughput`` it
+should have, as shown above. This defines the number of messages for a specific
+Actor the dispatcher should process in one single sweep; in other words, the
+dispatcher will bunch up to ``throughput`` message invocations together when
+having elected an actor to run. Setting this to a higher number will increase
+throughput but lower fairness, and vice versa. If you don't specify it explicitly
+then it uses the value (5) defined for ``default-dispatcher`` in the :ref:`configuration`.
 Browse the :ref:`scaladoc` or look at the code for all the options available.
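As a hedged illustration of the paragraph above: reading the throughput value with the Typesafe Config library. The path ``akka.actor.default-dispatcher.throughput`` is an assumption consistent with the text, and 5 is the documented default:

.. code-block:: scala

    import com.typesafe.config.ConfigFactory

    object ThroughputSetting extends App {
      val config = ConfigFactory.load()
      // Fall back to the documented default of 5 when the assumed path is absent.
      val throughput =
        if (config.hasPath("akka.actor.default-dispatcher.throughput"))
          config.getInt("akka.actor.default-dispatcher.throughput")
        else 5
      println("dispatcher throughput: " + throughput)
    }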

View file

@@ -42,7 +42,7 @@ A common use case within Akka is to have some computation performed concurrently
 return "Hello" + "World!";
 }
 });
-String result = f.get(); //Blocks until timeout, default timeout is set in akka.conf, otherwise 5 seconds
+String result = f.get(); //Blocks until timeout, default timeout is set in :ref:`configuration`, otherwise 5 seconds
 In the above code the block passed to ``future`` will be executed by the default ``Dispatcher``, with the return value of the block used to complete the ``Future`` (in this case, the result would be the string: "HelloWorld"). Unlike a ``Future`` that is returned from an ``UntypedActor``, this ``Future`` is properly typed, and we also avoid the overhead of managing an ``UntypedActor``.

View file

@@ -182,23 +182,7 @@ The following settings are possible on a TransactionFactory:
 - propagation - For controlling how nested transactions behave.
 - traceLevel - Transaction trace level.
-You can also specify the default values for some of these options in akka.conf. Here they are with their default values:
-::
-stm {
-fair = on # Should global transactions be fair or non-fair (non fair yield better performance)
-max-retries = 1000
-timeout = 5 # Default timeout for blocking transactions and transaction set (in unit defined by
-# the time-unit property)
-write-skew = true
-blocking-allowed = false
-interruptible = false
-speculative = true
-quick-release = true
-propagation = "requires"
-trace-level = "none"
-}
+You can also specify the default values for some of these options in :ref:`configuration`.
 Transaction lifecycle listeners
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
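Since the inline listing of the STM defaults is removed above in favour of the configuration reference, here is a sketch that keeps those defaults visible by parsing them with the Typesafe Config library. Whether the ``stm`` section sits at the top level or under ``akka`` is not shown by this commit, so that part is an assumption:

.. code-block:: scala

    import com.typesafe.config.ConfigFactory

    object StmDefaults extends App {
      // The values below are copied from the defaults shown in the removed block.
      val stm = ConfigFactory.parseString("""
        stm {
          fair = on
          max-retries = 1000
          timeout = 5
          write-skew = true
          blocking-allowed = false
          interruptible = false
          speculative = true
          quick-release = true
          propagation = "requires"
          trace-level = "none"
        }
      """).getConfig("stm")

      println("max-retries = " + stm.getInt("max-retries"))
      println("propagation = " + stm.getString("propagation"))
    }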

View file

@@ -185,7 +185,7 @@ Messages and immutability
 **IMPORTANT**: Messages can be any kind of object but have to be immutable (there is a workaround, see next section). Java or Scala cant enforce immutability (yet) so this has to be by convention. Primitives like String, int, Long are always immutable. Apart from these you have to create your own immutable objects to send as messages. If you pass on a reference to an instance that is mutable then this instance can be modified concurrently by two different Typed Actors and the Actor model is broken leaving you with NO guarantees and most likely corrupt data.
-Akka can help you in this regard. It allows you to turn on an option for serializing all messages, e.g. all parameters to the Typed Actor effectively making a deep clone/copy of the parameters. This will make sending mutable messages completely safe. This option is turned on in the $AKKA_HOME/config/akka.conf config file like this:
+Akka can help you in this regard. It allows you to turn on an option for serializing all messages, e.g. all parameters to the Typed Actor effectively making a deep clone/copy of the parameters. This will make sending mutable messages completely safe. This option is turned on in the :ref:`configuration` file like this:
 .. code-block:: ruby

View file

@@ -1522,7 +1522,7 @@ CamelService configuration
 For publishing consumer actors and typed actor methods
 (:ref:`camel-publishing`), applications must start a CamelService. When starting
 Akka in :ref:`microkernel` mode then a CamelService can be started automatically
-when camel is added to the enabled-modules list in akka.conf, for example:
+when camel is added to the enabled-modules list in :ref:`configuration`, for example:
 .. code-block:: none
@@ -1535,7 +1535,7 @@ when camel is added to the enabled-modules list in akka.conf, for example:
 Applications that do not use the Akka Kernel, such as standalone applications
 for example, need to start a CamelService manually, as explained in the
 following subsections.When starting a CamelService manually, settings in
-akka.conf are ignored.
+:ref:`configuration` are ignored.
 Standalone applications
@@ -1771,7 +1771,7 @@ CamelService can be omitted, as discussed in the previous section. Since these
 classes are loaded and instantiated before the CamelService is started (by
 Akka), applications can make modifications to a CamelContext here as well (and
 even provide their own CamelContext). Assuming there's a boot class
-sample.camel.Boot configured in akka.conf.
+sample.camel.Boot configured in :ref:`configuration`.
 .. code-block:: none
@@ -2439,8 +2439,7 @@ Examples
 For all features described so far, there's running sample code in
 `akka-sample-camel`_. The examples in `sample.camel.Boot`_ are started during
-Kernel startup because this class has been added to the boot configuration in
-akka-reference.conf.
+Kernel startup because this class has been added to the boot :ref:`configuration`.
 .. _akka-sample-camel: http://github.com/jboner/akka/tree/master/akka-samples/akka-sample-camel/
 .. _sample.camel.Boot: http://github.com/jboner/akka/blob/master/akka-samples/akka-sample-camel/src/main/scala/sample/camel/Boot.scala
@@ -2454,8 +2453,7 @@ akka-reference.conf.
 }
 If you don't want to have these examples started during Kernel startup, delete
-it from akka-reference.conf (or from akka.conf if you have a custom boot
-configuration). Other examples are standalone applications (i.e. classes with a
+it from the :ref:`configuration`. Other examples are standalone applications (i.e. classes with a
 main method) that can be started from `sbt`_.
 .. _sbt: http://code.google.com/p/simple-build-tool/

View file

@@ -11,10 +11,9 @@ Run the microkernel
 To start the kernel use the scripts in the ``bin`` directory.
-All services are configured in the ``config/akka.conf`` configuration file. See
-the Akka documentation on Configuration for more details. Services you want to
-be started up automatically should be listed in the list of ``boot`` classes in
-the configuration.
+All services are configured in the :ref:`configuration` file in the ``config`` directory.
+Services you want to be started up automatically should be listed in the list of ``boot`` classes in
+the :ref:`configuration`.
 Put your application in the ``deploy`` directory.

View file

@@ -120,17 +120,8 @@ should have, as shown above. This defines the number of messages for a specific
 Actor the dispatcher should process in one single sweep; in other words, the
 dispatcher will bunch up to ``throughput`` message invocations together when
 having elected an actor to run. Setting this to a higher number will increase
-throughput but lower fairness, and vice versa. If you don't specify it
-explicitly then it uses the default value defined in the 'akka.conf'
-configuration file:
-.. code-block:: ruby
-actor {
-throughput = 5
-}
-If you don't define a the 'throughput' option in the configuration file then the default value of '5' will be used.
+throughput but lower fairness, and vice versa. If you don't specify it explicitly
+then it uses the value (5) defined for ``default-dispatcher`` in the :ref:`configuration`.
 Browse the `ScalaDoc <scaladoc>`_ or look at the code for all the options available.

View file

@@ -498,7 +498,7 @@ and in the following.
 Event Tracing
 -------------
-The setting ``akka.actor.debug.fsm`` in ``akka.conf`` enables logging of an
+The setting ``akka.actor.debug.fsm`` in :ref:`configuration` enables logging of an
 event trace by :class:`LoggingFSM` instances::
 class MyFSM extends Actor with LoggingFSM[X, Z] {

View file

@@ -244,7 +244,7 @@ In this example, if an ``ArithmeticException`` was thrown while the ``Actor`` pr
 Timeouts
 --------
-Waiting forever for a ``Future`` to be completed can be dangerous. It could cause your program to block indefinitly or produce a memory leak. ``Future`` has support for a timeout already builtin with a default of 5 seconds (taken from 'akka.conf'). A timeout is an instance of ``akka.actor.Timeout`` which contains an ``akka.util.Duration``. A ``Duration`` can be finite, which needs a length and unit type, or infinite. An infinite ``Timeout`` can be dangerous since it will never actually expire.
+Waiting forever for a ``Future`` to be completed can be dangerous. It could cause your program to block indefinitly or produce a memory leak. ``Future`` has support for a timeout already builtin with a default of 5 seconds (taken from :ref:`configuration`). A timeout is an instance of ``akka.actor.Timeout`` which contains an ``akka.util.Duration``. A ``Duration`` can be finite, which needs a length and unit type, or infinite. An infinite ``Timeout`` can be dangerous since it will never actually expire.
 A different ``Timeout`` can be supplied either explicitly or implicitly when a ``Future`` is created. An implicit ``Timeout`` has the benefit of being usable by a for-comprehension as well as being picked up by any methods looking for an implicit ``Timeout``, while an explicit ``Timeout`` can be used in a more controlled manner.
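A small, hedged sketch of the types named in that paragraph (``akka.actor.Timeout`` wrapping an ``akka.util.Duration``); the ``5 seconds`` duration DSL import is an assumption about this snapshot's API:

.. code-block:: scala

    import akka.actor.Timeout
    import akka.util.duration._

    object ExplicitTimeoutExample {
      // Picked up by any method that looks for an implicit Timeout, e.g. when
      // creating a Future; an explicit Timeout can be passed instead for more control.
      implicit val timeout = Timeout(5 seconds)
    }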

View file

@@ -271,23 +271,7 @@ The following settings are possible on a TransactionFactory:
 - ``propagation`` - For controlling how nested transactions behave.
 - ``traceLevel`` - Transaction trace level.
-You can also specify the default values for some of these options in ``akka.conf``. Here they are with their default values:
-::
-stm {
-fair = on # Should global transactions be fair or non-fair (non fair yield better performance)
-max-retries = 1000
-timeout = 5 # Default timeout for blocking transactions and transaction set (in unit defined by
-# the time-unit property)
-write-skew = true
-blocking-allowed = false
-interruptible = false
-speculative = true
-quick-release = true
-propagation = "requires"
-trace-level = "none"
-}
+You can also specify the default values for some of these options in the :ref:`configuration`.
 You can also determine at which level a transaction factory is shared or not shared, which affects the way in which the STM can optimise transactions.

View file

@@ -457,7 +457,7 @@ Accounting for Slow Test Systems
 The tight timeouts you use during testing on your lightning-fast notebook will
 invariably lead to spurious test failures on the heavily loaded Jenkins server
 (or similar). To account for this situation, all maximum durations are
-internally scaled by a factor taken from ``akka.conf``,
+internally scaled by a factor taken from the :ref:`configuration`,
 ``akka.test.timefactor``, which defaults to 1.
 Resolving Conflicts with Implicit ActorRef
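For illustration, reading that factor directly with the Typesafe Config library and scaling a test limit by hand; the TestKit does this internally, so this sketch only shows the setting itself:

.. code-block:: scala

    import com.typesafe.config.ConfigFactory

    object ScaledTimeout extends App {
      val config = ConfigFactory.load()
      // akka.test.timefactor defaults to 1, as stated above.
      val factor =
        if (config.hasPath("akka.test.timefactor")) config.getDouble("akka.test.timefactor")
        else 1.0
      val maxMillis = (3000 * factor).toLong // a 3 second test limit, scaled
      println("scaled maximum: " + maxMillis + " ms")
    }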
@@ -716,7 +716,7 @@ options:
 * *Logging of message invocations on certain actors*
-This is enabled by a setting in ``akka.conf`` — namely
+This is enabled by a setting in the :ref:`configuration` — namely
 ``akka.actor.debug.receive`` — which enables the :meth:`loggable`
 statement to be applied to an actors :meth:`receive` function::
@@ -728,7 +728,7 @@ options:
 The first argument to :meth:`LoggingReceive` defines the source to be used in the
 logging events, which should be the current actor.
-If the abovementioned setting is not given in ``akka.conf``, this method will
+If the abovementioned setting is not given in the :ref:`configuration`, this method will
 pass through the given :class:`Receive` function unmodified, meaning that
 there is no runtime cost unless actually enabled.

View file

@@ -178,7 +178,7 @@ Messages and immutability
 **IMPORTANT**: Messages can be any kind of object but have to be immutable (there is a workaround, see next section). Java or Scala cant enforce immutability (yet) so this has to be by convention. Primitives like String, int, Long are always immutable. Apart from these you have to create your own immutable objects to send as messages. If you pass on a reference to an instance that is mutable then this instance can be modified concurrently by two different Typed Actors and the Actor model is broken leaving you with NO guarantees and most likely corrupt data.
-Akka can help you in this regard. It allows you to turn on an option for serializing all messages, e.g. all parameters to the Typed Actor effectively making a deep clone/copy of the parameters. This will make sending mutable messages completely safe. This option is turned on in the $AKKA_HOME/config/akka.conf config file like this:
+Akka can help you in this regard. It allows you to turn on an option for serializing all messages, e.g. all parameters to the Typed Actor effectively making a deep clone/copy of the parameters. This will make sending mutable messages completely safe. This option is turned on in the :ref:`configuration` file like this:
 .. code-block:: ruby