=act #19201 improve configuration of thread-pool-executor

* The old implementation would cap the pool size (both corePoolSize
  and maximumPoolSize) to max-pool-size, which is very confusing
  because maximumPoolSize is only used when the task queue is bounded.
* As a result, configuring core-pool-size-min and core-pool-size-max
  was not enough, because the values could still be capped by the
  default max-pool-size.
* The new behavior is simply that maximumPoolSize is adjusted to not be
  less than corePoolSize, but otherwise the config properties match the
  underlying ThreadPoolExecutor implementation.
* Added a convenience fixed-pool-size property.
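
As a minimal sketch, the new fixed-pool-size property can be used like this
(the dispatcher name matches the test below; the size value is illustrative):

    blocking-io-dispatcher {
      type = Dispatcher
      executor = "thread-pool-executor"
      thread-pool-executor {
        # sets corePoolSize and maximumPoolSize to the same fixed value
        fixed-pool-size = 32
      }
      throughput = 1
    }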
Patrik Nordwall 2015-12-17 09:40:03 +01:00
parent a99fee96df
commit a1c3dbe307
19 changed files with 119 additions and 66 deletions


@@ -349,11 +349,11 @@ Inspecting cluster sharding state
---------------------------------
Two requests to inspect the cluster state are available:
-`ClusterShard.getShardRegionStateInstance` which will return a `ClusterShard.ShardRegionState` that contains
-the `ShardId`s running in a Region and what `EntityId`s are alive for each of them.
+``ClusterShard.getShardRegionStateInstance`` which will return a ``ClusterShard.ShardRegionState`` that contains
+the identifiers of the shards running in a Region and what entities are alive for each of them.
-`ClusterShard.getClusterShardingStatsInstance` which will query all the regions in the cluster and return
-a `ClusterShard.ClusterShardingStats` containing the `ShardId`s running in each region and a count
+``ClusterShard.getClusterShardingStatsInstance`` which will query all the regions in the cluster and return
+a ``ClusterShard.ClusterShardingStats`` containing the identifiers of the shards running in each region and a count
of entities that are alive in each shard.
The purpose of these messages is testing and monitoring; they are not provided to give access to
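
A hypothetical usage sketch of the first query (the ``shardRegion`` reference
and the 3000 ms timeout are assumed, not part of this diff):

    import akka.actor.ActorRef;
    import akka.pattern.Patterns;
    import scala.concurrent.Future;

    // shardRegion: ActorRef of the region, assumed to be in scope;
    // the reply is the ShardRegionState described above.
    Future<Object> state =
      Patterns.ask(shardRegion, ClusterShard.getShardRegionStateInstance(), 3000);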


@@ -66,6 +66,15 @@ public class DispatcherDocTest {
    //#defining-dispatcher-in-code
  }

+  @SuppressWarnings("unused")
+  @Test
+  public void defineFixedPoolSizeDispatcher() {
+    //#defining-fixed-pool-size-dispatcher
+    ActorRef myActor = system.actorOf(Props.create(MyUntypedActor.class)
+      .withDispatcher("blocking-io-dispatcher"));
+    //#defining-fixed-pool-size-dispatcher
+  }
+
  @SuppressWarnings("unused")
  @Test
  public void definePinnedDispatcher() {


@@ -116,6 +116,15 @@ There are 3 different types of message dispatchers:
More dispatcher configuration examples
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Configuring a dispatcher with fixed thread pool size, e.g. for actors that perform blocking IO:
+
+.. includecode:: ../scala/code/docs/dispatcher/DispatcherDocSpec.scala#fixed-pool-size-dispatcher-config
+
+And then using it:
+
+.. includecode:: ../java/code/docs/dispatcher/DispatcherDocTest.java#defining-fixed-pool-size-dispatcher
Configuring a ``PinnedDispatcher``:
.. includecode:: ../scala/code/docs/dispatcher/DispatcherDocSpec.scala#my-pinned-dispatcher-config
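
For reference, a minimal sketch of what a ``PinnedDispatcher`` configuration
like the one referenced above typically looks like (shape assumed, not part
of this diff):

    my-pinned-dispatcher {
      # one dedicated thread per actor using this dispatcher
      executor = "thread-pool-executor"
      type = PinnedDispatcher
    }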


@@ -272,7 +272,7 @@ When sending two commands to this ``PersistentActor``, the persist handlers will
.. includecode:: code/docs/persistence/LambdaPersistenceDocTest.java#nested-persist-persist-caller
First the "outer layer" of persist calls is issued and their callbacks are applied. After these have successfully completed,
the inner callbacks will be invoked (once the events they are persisting have been confirmed to be persisted by the journal).
Only after all these handlers have been successfully invoked will the next command be delivered to the persistent Actor.
In other words, the stashing of incoming commands that is guaranteed by initially calling ``persist()`` on the outer layer
is extended until all nested ``persist`` callbacks have been handled.
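
A rough sketch of the nested persist pattern described above (the event
classes ``OuterEvt``/``InnerEvt`` and the handler bodies are assumed, not
from this diff):

    // Inside an AbstractPersistentActor command handler:
    persist(new OuterEvt(), (OuterEvt evt) -> {
      // outer handler: runs once OuterEvt is confirmed persisted
      sender().tell("outer handled", self());
      persist(new InnerEvt(), (InnerEvt inner) -> {
        // inner handler: runs only after the outer handlers completed
        // and InnerEvt has been confirmed persisted by the journal
        sender().tell("inner handled", self());
      });
    });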


@@ -275,7 +275,7 @@ When sending two commands to this ``PersistentActor``, the persist handlers will
.. includecode:: code/docs/persistence/PersistenceDocTest.java#nested-persist-persist-caller
First the "outer layer" of persist calls is issued and their callbacks are applied. After these have successfully completed,
the inner callbacks will be invoked (once the events they are persisting have been confirmed to be persisted by the journal).
Only after all these handlers have been successfully invoked will the next command be delivered to the persistent Actor.
In other words, the stashing of incoming commands that is guaranteed by initially calling ``persist()`` on the outer layer
is extended until all nested ``persist`` callbacks have been handled.


@@ -647,9 +647,9 @@ The memory usage is O(n) where n is the number of sizes you allow, i.e. upperBou
Pool with ``OptimalSizeExploringResizer`` defined in configuration:
-.. includecode:: code/docs/routing/RouterDocSpec.scala#config-optimal-size-exploring-resize-pool
+.. includecode:: ../scala/code/docs/routing/RouterDocSpec.scala#config-optimal-size-exploring-resize-pool
-.. includecode:: code/docs/routing/RouterDocTest.java#optimal-size-exploring-resize-pool
+.. includecode:: code/docs/jrouting/RouterDocTest.java#optimal-size-exploring-resize-pool
Several more configuration options are available and described in the ``akka.actor.deployment.default.optimal-size-exploring-resizer``
section of the reference :ref:`configuration`.
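
A rough sketch of such a deployment section (property names as in the
reference configuration; the values and router path are illustrative):

    akka.actor.deployment {
      /parent/router {
        router = round-robin-pool
        optimal-size-exploring-resizer {
          enabled = on
          # how often the resizer re-evaluates the pool size
          action-interval = 5s
          downsize-after-underutilized-for = 72h
        }
      }
    }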