diff --git a/akka-docs/cluster/durable-mailbox.rst b/akka-docs/cluster/durable-mailbox.rst index 774008c6da..875d6ea9fb 100644 --- a/akka-docs/cluster/durable-mailbox.rst +++ b/akka-docs/cluster/durable-mailbox.rst @@ -74,8 +74,7 @@ storage. Read more about that in the :ref:`dispatchers-scala` documentation. You can also configure and tune the file-based durable mailbox. This is done in -the ``akka.actor.mailbox.file-based`` section in the ``akka.conf`` configuration -file. +the ``akka.actor.mailbox.file-based`` section in the :ref:`configuration`. .. code-block:: none @@ -125,8 +124,7 @@ or for a thread-based durable dispatcher:: RedisDurableMailboxStorage) You also need to configure the IP and port for the Redis server. This is done in -the ``akka.actor.mailbox.redis`` section in the ``akka.conf`` configuration -file. +the ``akka.actor.mailbox.redis`` section in the :ref:`configuration`. .. code-block:: none @@ -169,8 +167,7 @@ or for a thread-based durable dispatcher:: ZooKeeperDurableMailboxStorage) You also need to configure ZooKeeper server addresses, timeouts, etc. This is -done in the ``akka.actor.mailbox.zookeeper`` section in the ``akka.conf`` -configuration file. +done in the ``akka.actor.mailbox.zookeeper`` section in the :ref:`configuration`. .. code-block:: none @@ -208,7 +205,7 @@ or for a thread-based durable dispatcher. :: You also need to configure the IP, and port, and so on, for the Beanstalk server. This is done in the ``akka.actor.mailbox.beanstalk`` section in the -``akka.conf`` configuration file. +:ref:`configuration`. .. code-block:: none @@ -238,8 +235,7 @@ features cohesive to a fast, reliable & durable queueing mechanism which the Akk Akka's implementations of MongoDB mailboxes are built on top of the purely asynchronous MongoDB driver (often known as `Hammersmith `_ and ``com.mongodb.async``) and as such are purely callback based with a Netty network layer. This makes them extremely fast & lightweight versus building on other MongoDB implementations such as `mongo-java-driver `_ and `Casbah `_. You will need to configure the URI for the MongoDB server, using the URI Format specified in the `MongoDB Documentation `_. This is done in -the ``akka.actor.mailbox.mongodb`` section in the ``akka.conf`` configuration -file. +the ``akka.actor.mailbox.mongodb`` section in the :ref:`configuration`. .. code-block:: none diff --git a/akka-docs/dev/multi-jvm-testing.rst b/akka-docs/dev/multi-jvm-testing.rst index 7e79f65bfa..dade7c30c1 100644 --- a/akka-docs/dev/multi-jvm-testing.rst +++ b/akka-docs/dev/multi-jvm-testing.rst @@ -35,7 +35,7 @@ multi-JVM testing:: base = file("akka-cluster"), settings = defaultSettings ++ MultiJvmPlugin.settings ++ Seq( extraOptions in MultiJvm <<= (sourceDirectory in MultiJvm) { src => - (name: String) => (src ** (name + ".conf")).get.headOption.map("-Dakka.config=" + _.absolutePath).toSeq + (name: String) => (src ** (name + ".conf")).get.headOption.map("-Dconfig.file=" + _.absolutePath).toSeq }, test in Test <<= (test in Test) dependsOn (test in MultiJvm) ) @@ -176,10 +176,10 @@ and add the options to them. -Dakka.cluster.nodename=node3 -Dakka.remote.port=9993 -Overriding akka.conf options ----------------------------- +Overriding configuration options +-------------------------------- -You can also override the options in the ``akka.conf`` file with different options for each +You can also override the options in the :ref:`configuration` file with different options for each spawned JVM. 
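As a minimal sketch (the file name and the values below are illustrative, not taken from the patch), such a node-specific override file might contain just the options that differ per JVM, and it is picked up through the ``-Dconfig.file`` option shown above:

.. code-block:: none

  # hypothetical override file, e.g. MyClusterSpecNode1.conf
  akka.cluster.nodename = node1
  akka.remote.port = 9991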
You do that by creating a file named after the node in the test with suffix ``.conf`` and put them in the same directory as the test . diff --git a/akka-docs/disabled/clustering.rst b/akka-docs/disabled/clustering.rst index f384a37ca0..559233143d 100644 --- a/akka-docs/disabled/clustering.rst +++ b/akka-docs/disabled/clustering.rst @@ -48,8 +48,8 @@ cluster node. Cluster configuration ~~~~~~~~~~~~~~~~~~~~~ -Cluster is configured in the ``akka.cloud.cluster`` section in the ``akka.conf`` -configuration file. Here you specify the default addresses to the ZooKeeper +Cluster is configured in the ``akka.cloud.cluster`` section in the :ref:`configuration`. +Here you specify the default addresses of the ZooKeeper servers, timeouts, if compression should be on or off, and so on. .. code-block:: conf @@ -594,7 +594,7 @@ Consolidation and management of the Akka configuration file Not implemented yet. -The actor configuration file ``akka.conf`` will also be stored into the cluster +The actor :ref:`configuration` file will also be stored into the cluster and it will be possible to have one single configuration file, stored on the server, and pushed out to all the nodes that joins the cluster. Each node only needs to be configured with the ZooKeeper server address and the master configuration will only reside in one single place diff --git a/akka-docs/general/configuration.rst b/akka-docs/general/configuration.rst index 0e96f8165e..9328e19561 100644 --- a/akka-docs/general/configuration.rst +++ b/akka-docs/general/configuration.rst @@ -1,3 +1,5 @@ +.. _configuration: + Configuration ============= diff --git a/akka-docs/general/event-handler.rst b/akka-docs/general/event-handler.rst index c23911939e..e6aa422b37 100644 --- a/akka-docs/general/event-handler.rst +++ b/akka-docs/general/event-handler.rst @@ -9,7 +9,8 @@ There is an Event Handler which takes the place of a logging system in Akka: akka.event.EventHandler -You can configure which event handlers should be registered at boot time. That is done using the 'event-handlers' element in akka.conf. Here you can also define the log level. +You can configure which event handlers should be registered at boot time. That is done using the 'event-handlers' element in +the :ref:`configuration`. Here you can also define the log level. .. code-block:: ruby diff --git a/akka-docs/general/slf4j.rst b/akka-docs/general/slf4j.rst index 876b139d65..296cbb7b48 100644 --- a/akka-docs/general/slf4j.rst +++ b/akka-docs/general/slf4j.rst @@ -14,7 +14,8 @@ also need a SLF4J backend, we recommend `Logback `_: Event Handler ------------- -This module includes a SLF4J Event Handler that works with Akka's standard Event Handler. You enabled it in the 'event-handlers' element in akka.conf. Here you can also define the log level. +This module includes a SLF4J Event Handler that works with Akka's standard Event Handler. You enable it in the 'event-handlers' element in +the :ref:`configuration`. Here you can also define the log level. .. code-block:: ruby diff --git a/akka-docs/intro/deployment-scenarios.rst b/akka-docs/intro/deployment-scenarios.rst index 829d93829e..a5da196d24 100644 --- a/akka-docs/intro/deployment-scenarios.rst +++ b/akka-docs/intro/deployment-scenarios.rst @@ -29,12 +29,12 @@ Actors as services The simplest way you can use Akka is to use the actors as services in your Web application. All that’s needed to do that is to put the Akka charts as well as -its dependency jars into ``WEB-INF/lib``.
You also need to put the ``akka.conf`` -config file in the ``$AKKA_HOME/config`` directory. Now you can create your +its dependency jars into ``WEB-INF/lib``. You also need to put the :ref:`configuration` +file in the ``$AKKA_HOME/config`` directory. Now you can create your Actors as regular services referenced from your Web application. You should also be able to use the Remoting service, e.g. be able to make certain Actors remote on other hosts. Please note that remoting service does not speak HTTP over port -80, but a custom protocol over the port is specified in ``akka.conf``. +80, but a custom protocol over the port is specified in :ref:`configuration`. Using Akka as a stand alone microkernel diff --git a/akka-docs/intro/getting-started-first-java.rst b/akka-docs/intro/getting-started-first-java.rst index 9ae9e87441..bf694e5fe2 100644 --- a/akka-docs/intro/getting-started-first-java.rst +++ b/akka-docs/intro/getting-started-first-java.rst @@ -729,18 +729,12 @@ we compiled ourselves:: $ java \ -cp lib/scala-library.jar:lib/akka/akka-actor-2.0-SNAPSHOT.jar:tutorial \ akka.tutorial.java.first.Pi - AKKA_HOME is defined as [/Users/jboner/tools/akka-actors-2.0-SNAPSHOT] - loading config from [/Users/jboner/tools/akka-actors-2.0-SNAPSHOT/config/akka.conf]. Pi estimate: 3.1435501812459323 Calculation time: 822 millis Yippee! It is working. -If you have not defined the ``AKKA_HOME`` environment variable then Akka can't -find the ``akka.conf`` configuration file and will print out a ``Can’t load -akka.conf`` warning. This is ok since it will then just use the defaults. - Run it inside Maven ------------------- @@ -758,8 +752,6 @@ When this in done we can run our application directly inside Maven:: Yippee! It is working. -If you have not defined an the ``AKKA_HOME`` environment variable then Akka can't find the ``akka.conf`` configuration file and will print out a ``Can’t load akka.conf`` warning. This is ok since it will then just use the defaults. - Conclusion ---------- diff --git a/akka-docs/intro/getting-started-first-scala-eclipse.rst b/akka-docs/intro/getting-started-first-scala-eclipse.rst index 45fbfd24ce..487d8b4509 100644 --- a/akka-docs/intro/getting-started-first-scala-eclipse.rst +++ b/akka-docs/intro/getting-started-first-scala-eclipse.rst @@ -382,15 +382,10 @@ Run it from Eclipse Eclipse builds your project on every save when ``Project/Build Automatically`` is set. If not, bring you project up to date by clicking ``Project/Build Project``. If there are no compilation errors, you can right-click in the editor where ``Pi`` is defined, and choose ``Run as.. /Scala application``. If everything works fine, you should see:: - AKKA_HOME is defined as [/Users/jboner/tools/akka-actors-2.0-SNAPSHOT] - loading config from [/Users/jboner/tools/akka-actors-2.0-SNAPSHOT/config/akka.conf]. - Pi estimate: 3.1435501812459323 Calculation time: 858 millis -If you have not defined an the ``AKKA_HOME`` environment variable then Akka can't find the ``akka.conf`` configuration file and will print out a ``Can’t load akka.conf`` warning. This is ok since it will then just use the defaults. - -You can also define a new Run configuration, by going to ``Run/Run Configurations``. Create a new ``Scala application`` and choose the tutorial project and the main class to be ``akkatutorial.Pi``. You can pass additional command line arguments to the JVM on the ``Arguments`` page, for instance to define where ``akka.conf`` is: +You can also define a new Run configuration, by going to ``Run/Run Configurations``. 
Create a new ``Scala application`` and choose the tutorial project and the main class to be ``akkatutorial.Pi``. You can pass additional command line arguments to the JVM on the ``Arguments`` page, for instance to define where the :ref:`configuration` file is: .. image:: ../images/run-config.png diff --git a/akka-docs/intro/getting-started-first-scala.rst b/akka-docs/intro/getting-started-first-scala.rst index 73aace96bf..6a8720f843 100644 --- a/akka-docs/intro/getting-started-first-scala.rst +++ b/akka-docs/intro/getting-started-first-scala.rst @@ -424,19 +424,12 @@ compiled ourselves:: $ java \ -cp lib/scala-library.jar:lib/akka/akka-actor-2.0-SNAPSHOT.jar:. \ akka.tutorial.first.scala.Pi - AKKA_HOME is defined as [/Users/jboner/tools/akka-actors-2.0-SNAPSHOT] - loading config from [/Users/jboner/tools/akka-actors-2.0-SNAPSHOT/config/akka.conf]. Pi estimate: 3.1435501812459323 Calculation time: 858 millis Yippee! It is working. -If you have not defined the ``AKKA_HOME`` environment variable then Akka can't -find the ``akka.conf`` configuration file and will print out a ``Can’t load -akka.conf`` warning. This is ok since it will then just use the defaults. - - Run it inside SBT ================= @@ -456,11 +449,6 @@ When this in done we can run our application directly inside SBT:: Yippee! It is working. -If you have not defined an the ``AKKA_HOME`` environment variable then Akka -can't find the ``akka.conf`` configuration file and will print out a ``Can’t -load akka.conf`` warning. This is ok since it will then just use the defaults. - - Conclusion ========== diff --git a/akka-docs/java/dispatchers.rst b/akka-docs/java/dispatchers.rst index dc2684f9d8..883efe60ba 100644 --- a/akka-docs/java/dispatchers.rst +++ b/akka-docs/java/dispatchers.rst @@ -19,7 +19,7 @@ Default dispatcher ------------------ For most scenarios the default settings are the best. Here we have one single event-based dispatcher for all Actors created. The default dispatcher used is "GlobalDispatcher" which also is retrievable in ``akka.dispatch.Dispatchers.globalDispatcher``. -The Dispatcher specified in the akka.conf as "default-dispatcher" is as ``Dispatchers.defaultGlobalDispatcher``. +The Dispatcher specified in the :ref:`configuration` as "default-dispatcher" is available as ``Dispatchers.defaultGlobalDispatcher``. The "GlobalDispatcher" is not configurable but will use default parameters given by Akka itself. @@ -124,16 +124,13 @@ Here is an example: ... } -This 'Dispatcher' allows you to define the 'throughput' it should have. This defines the number of messages for a specific Actor the dispatcher should process in one single sweep. -Setting this to a higher number will increase throughput but lower fairness, and vice versa. If you don't specify it explicitly then it uses the default value defined in the 'akka.conf' configuration file: - -.. code-block:: xml - - actor { - throughput = 5 - } - -If you don't define a the 'throughput' option in the configuration file then the default value of '5' will be used. +The standard :class:`Dispatcher` allows you to define the ``throughput`` it +should have, as shown above. This defines the number of messages for a specific +Actor the dispatcher should process in one single sweep; in other words, the +dispatcher will bunch up to ``throughput`` message invocations together when +having elected an actor to run. Setting this to a higher number will increase +throughput but lower fairness, and vice versa.
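For illustration only (the nesting shown is an assumption about the configuration layout, and the value is made up), raising the batch size for the default dispatcher could look like this:

.. code-block:: none

  akka {
    actor {
      default-dispatcher {
        throughput = 20   # larger sweeps: higher throughput, lower fairness
      }
    }
  }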
If you don't specify it explicitly +then it uses the value (5) defined for ``default-dispatcher`` in the :ref:`configuration`. Browse the :ref:`scaladoc` or look at the code for all the options available. diff --git a/akka-docs/java/futures.rst b/akka-docs/java/futures.rst index 2715ff33d1..694a15c8b0 100644 --- a/akka-docs/java/futures.rst +++ b/akka-docs/java/futures.rst @@ -42,7 +42,7 @@ A common use case within Akka is to have some computation performed concurrently return "Hello" + "World!"; } }); - String result = f.get(); //Blocks until timeout, default timeout is set in akka.conf, otherwise 5 seconds + String result = f.get(); //Blocks until timeout, default timeout is set in :ref:`configuration`, otherwise 5 seconds In the above code the block passed to ``future`` will be executed by the default ``Dispatcher``, with the return value of the block used to complete the ``Future`` (in this case, the result would be the string: "HelloWorld"). Unlike a ``Future`` that is returned from an ``UntypedActor``, this ``Future`` is properly typed, and we also avoid the overhead of managing an ``UntypedActor``. diff --git a/akka-docs/java/stm.rst b/akka-docs/java/stm.rst index 67917e7e77..3cbf390bd1 100644 --- a/akka-docs/java/stm.rst +++ b/akka-docs/java/stm.rst @@ -182,23 +182,7 @@ The following settings are possible on a TransactionFactory: - propagation - For controlling how nested transactions behave. - traceLevel - Transaction trace level. -You can also specify the default values for some of these options in akka.conf. Here they are with their default values: - -:: - - stm { - fair = on # Should global transactions be fair or non-fair (non fair yield better performance) - max-retries = 1000 - timeout = 5 # Default timeout for blocking transactions and transaction set (in unit defined by - # the time-unit property) - write-skew = true - blocking-allowed = false - interruptible = false - speculative = true - quick-release = true - propagation = "requires" - trace-level = "none" - } +You can also specify the default values for some of these options in :ref:`configuration`. Transaction lifecycle listeners ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ diff --git a/akka-docs/java/typed-actors.rst b/akka-docs/java/typed-actors.rst index 2eb02f6ebc..8f39ecde76 100644 --- a/akka-docs/java/typed-actors.rst +++ b/akka-docs/java/typed-actors.rst @@ -185,7 +185,7 @@ Messages and immutability **IMPORTANT**: Messages can be any kind of object but have to be immutable (there is a workaround, see next section). Java or Scala can’t enforce immutability (yet) so this has to be by convention. Primitives like String, int, Long are always immutable. Apart from these you have to create your own immutable objects to send as messages. If you pass on a reference to an instance that is mutable then this instance can be modified concurrently by two different Typed Actors and the Actor model is broken leaving you with NO guarantees and most likely corrupt data. -Akka can help you in this regard. It allows you to turn on an option for serializing all messages, e.g. all parameters to the Typed Actor effectively making a deep clone/copy of the parameters. This will make sending mutable messages completely safe. This option is turned on in the ‘$AKKA_HOME/config/akka.conf’ config file like this: +Akka can help you in this regard. It allows you to turn on an option for serializing all messages, e.g. all parameters to the Typed Actor effectively making a deep clone/copy of the parameters. 
This will make sending mutable messages completely safe. This option is turned on in the :ref:`configuration` file like this: .. code-block:: ruby diff --git a/akka-docs/modules/camel.rst b/akka-docs/modules/camel.rst index b3c07e56dd..8b2b84c992 100644 --- a/akka-docs/modules/camel.rst +++ b/akka-docs/modules/camel.rst @@ -1522,7 +1522,7 @@ CamelService configuration For publishing consumer actors and typed actor methods (:ref:`camel-publishing`), applications must start a CamelService. When starting Akka in :ref:`microkernel` mode then a CamelService can be started automatically -when camel is added to the enabled-modules list in akka.conf, for example: +when camel is added to the enabled-modules list in :ref:`configuration`, for example: .. code-block:: none @@ -1535,7 +1535,7 @@ when camel is added to the enabled-modules list in akka.conf, for example: Applications that do not use the Akka Kernel, such as standalone applications for example, need to start a CamelService manually, as explained in the following subsections.When starting a CamelService manually, settings in -akka.conf are ignored. +:ref:`configuration` are ignored. Standalone applications @@ -1771,7 +1771,7 @@ CamelService can be omitted, as discussed in the previous section. Since these classes are loaded and instantiated before the CamelService is started (by Akka), applications can make modifications to a CamelContext here as well (and even provide their own CamelContext). Assuming there's a boot class -sample.camel.Boot configured in akka.conf. +sample.camel.Boot configured in :ref:`configuration`. .. code-block:: none @@ -2439,8 +2439,7 @@ Examples For all features described so far, there's running sample code in `akka-sample-camel`_. The examples in `sample.camel.Boot`_ are started during -Kernel startup because this class has been added to the boot configuration in -akka-reference.conf. +Kernel startup because this class has been added to the boot :ref:`configuration`. .. _akka-sample-camel: http://github.com/jboner/akka/tree/master/akka-samples/akka-sample-camel/ .. _sample.camel.Boot: http://github.com/jboner/akka/blob/master/akka-samples/akka-sample-camel/src/main/scala/sample/camel/Boot.scala @@ -2454,8 +2453,7 @@ akka-reference.conf. } If you don't want to have these examples started during Kernel startup, delete -it from akka-reference.conf (or from akka.conf if you have a custom boot -configuration). Other examples are standalone applications (i.e. classes with a +it from the :ref:`configuration`. Other examples are standalone applications (i.e. classes with a main method) that can be started from `sbt`_. .. _sbt: http://code.google.com/p/simple-build-tool/ diff --git a/akka-docs/modules/microkernel.rst b/akka-docs/modules/microkernel.rst index c7a9014e14..cbf9ba96ba 100644 --- a/akka-docs/modules/microkernel.rst +++ b/akka-docs/modules/microkernel.rst @@ -11,10 +11,9 @@ Run the microkernel To start the kernel use the scripts in the ``bin`` directory. -All services are configured in the ``config/akka.conf`` configuration file. See -the Akka documentation on Configuration for more details. Services you want to -be started up automatically should be listed in the list of ``boot`` classes in -the configuration. +All services are configured in the :ref:`configuration` file in the ``config`` directory. +Services you want to be started up automatically should be listed in the list of ``boot`` classes in +the :ref:`configuration`. Put your application in the ``deploy`` directory. 
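As a rough sketch of the kind of settings the microkernel reads (the block below is illustrative; ``camel`` and ``sample.camel.Boot`` are simply the module and boot class used as examples in the Camel section above):

.. code-block:: none

  akka {
    enabled-modules = ["camel"]    # modules to start together with the kernel
    boot = ["sample.camel.Boot"]   # boot classes started automatically at startup
  }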
diff --git a/akka-docs/scala/dispatchers.rst b/akka-docs/scala/dispatchers.rst index e16c336753..fb09c8e5ae 100644 --- a/akka-docs/scala/dispatchers.rst +++ b/akka-docs/scala/dispatchers.rst @@ -120,17 +120,8 @@ should have, as shown above. This defines the number of messages for a specific Actor the dispatcher should process in one single sweep; in other words, the dispatcher will bunch up to ``throughput`` message invocations together when having elected an actor to run. Setting this to a higher number will increase -throughput but lower fairness, and vice versa. If you don't specify it -explicitly then it uses the default value defined in the 'akka.conf' -configuration file: - -.. code-block:: ruby - - actor { - throughput = 5 - } - -If you don't define a the 'throughput' option in the configuration file then the default value of '5' will be used. +throughput but lower fairness, and vice versa. If you don't specify it explicitly +then it uses the value (5) defined for ``default-dispatcher`` in the :ref:`configuration`. Browse the `ScalaDoc `_ or look at the code for all the options available. diff --git a/akka-docs/scala/fsm.rst b/akka-docs/scala/fsm.rst index 48d716c53b..fb1f54ff26 100644 --- a/akka-docs/scala/fsm.rst +++ b/akka-docs/scala/fsm.rst @@ -498,7 +498,7 @@ and in the following. Event Tracing ------------- -The setting ``akka.actor.debug.fsm`` in ``akka.conf`` enables logging of an +The setting ``akka.actor.debug.fsm`` in the :ref:`configuration` enables logging of an event trace by :class:`LoggingFSM` instances:: class MyFSM extends Actor with LoggingFSM[X, Z] { diff --git a/akka-docs/scala/futures.rst b/akka-docs/scala/futures.rst index ba7b8bb73e..623a24730a 100644 --- a/akka-docs/scala/futures.rst +++ b/akka-docs/scala/futures.rst @@ -244,7 +244,7 @@ In this example, if an ``ArithmeticException`` was thrown while the ``Actor`` pr Timeouts -------- -Waiting forever for a ``Future`` to be completed can be dangerous. It could cause your program to block indefinitly or produce a memory leak. ``Future`` has support for a timeout already builtin with a default of 5 seconds (taken from 'akka.conf'). A timeout is an instance of ``akka.actor.Timeout`` which contains an ``akka.util.Duration``. A ``Duration`` can be finite, which needs a length and unit type, or infinite. An infinite ``Timeout`` can be dangerous since it will never actually expire. +Waiting forever for a ``Future`` to be completed can be dangerous. It could cause your program to block indefinitely or produce a memory leak. ``Future`` has support for a timeout already built in, with a default of 5 seconds (taken from the :ref:`configuration`). A timeout is an instance of ``akka.actor.Timeout`` which contains an ``akka.util.Duration``. A ``Duration`` can be finite, which needs a length and unit type, or infinite. An infinite ``Timeout`` can be dangerous since it will never actually expire. A different ``Timeout`` can be supplied either explicitly or implicitly when a ``Future`` is created. An implicit ``Timeout`` has the benefit of being usable by a for-comprehension as well as being picked up by any methods looking for an implicit ``Timeout``, while an explicit ``Timeout`` can be used in a more controlled manner. diff --git a/akka-docs/scala/stm.rst b/akka-docs/scala/stm.rst index a35fb94676..f21f988939 100644 --- a/akka-docs/scala/stm.rst +++ b/akka-docs/scala/stm.rst @@ -271,23 +271,7 @@ The following settings are possible on a TransactionFactory: - ``propagation`` - For controlling how nested transactions behave.
- ``traceLevel`` - Transaction trace level. -You can also specify the default values for some of these options in ``akka.conf``. Here they are with their default values: - -:: - - stm { - fair = on # Should global transactions be fair or non-fair (non fair yield better performance) - max-retries = 1000 - timeout = 5 # Default timeout for blocking transactions and transaction set (in unit defined by - # the time-unit property) - write-skew = true - blocking-allowed = false - interruptible = false - speculative = true - quick-release = true - propagation = "requires" - trace-level = "none" - } +You can also specify the default values for some of these options in the :ref:`configuration`. You can also determine at which level a transaction factory is shared or not shared, which affects the way in which the STM can optimise transactions. diff --git a/akka-docs/scala/testing.rst b/akka-docs/scala/testing.rst index 8b94f301d5..c9a2b5928e 100644 --- a/akka-docs/scala/testing.rst +++ b/akka-docs/scala/testing.rst @@ -457,7 +457,7 @@ Accounting for Slow Test Systems The tight timeouts you use during testing on your lightning-fast notebook will invariably lead to spurious test failures on the heavily loaded Jenkins server (or similar). To account for this situation, all maximum durations are -internally scaled by a factor taken from ``akka.conf``, +internally scaled by a factor taken from the :ref:`configuration`, ``akka.test.timefactor``, which defaults to 1. Resolving Conflicts with Implicit ActorRef @@ -716,7 +716,7 @@ options: * *Logging of message invocations on certain actors* - This is enabled by a setting in ``akka.conf`` — namely + This is enabled by a setting in the :ref:`configuration` — namely ``akka.actor.debug.receive`` — which enables the :meth:`loggable` statement to be applied to an actor’s :meth:`receive` function:: @@ -728,7 +728,7 @@ options: The first argument to :meth:`LoggingReceive` defines the source to be used in the logging events, which should be the current actor. - If the abovementioned setting is not given in ``akka.conf``, this method will + If the abovementioned setting is not given in the :ref:`configuration`, this method will pass through the given :class:`Receive` function unmodified, meaning that there is no runtime cost unless actually enabled. diff --git a/akka-docs/scala/typed-actors.rst b/akka-docs/scala/typed-actors.rst index bb3bb1e7b3..295fba6632 100644 --- a/akka-docs/scala/typed-actors.rst +++ b/akka-docs/scala/typed-actors.rst @@ -178,7 +178,7 @@ Messages and immutability **IMPORTANT**: Messages can be any kind of object but have to be immutable (there is a workaround, see next section). Java or Scala can’t enforce immutability (yet) so this has to be by convention. Primitives like String, int, Long are always immutable. Apart from these you have to create your own immutable objects to send as messages. If you pass on a reference to an instance that is mutable then this instance can be modified concurrently by two different Typed Actors and the Actor model is broken leaving you with NO guarantees and most likely corrupt data. -Akka can help you in this regard. It allows you to turn on an option for serializing all messages, e.g. all parameters to the Typed Actor effectively making a deep clone/copy of the parameters. This will make sending mutable messages completely safe. This option is turned on in the ‘$AKKA_HOME/config/akka.conf’ config file like this: +Akka can help you in this regard. 
It allows you to turn on an option for serializing all messages, e.g. all parameters to the Typed Actor effectively making a deep clone/copy of the parameters. This will make sending mutable messages completely safe. This option is turned on in the :ref:`configuration` file like this: .. code-block:: ruby