* Failure detection heartbeating was not performed for joining
nodes, since it was expected that they would become Up first.
* If a joining node is downed before it has been changed to Up, failure
detection will not be performed for that node. As a result the
downed node would not be removed from membership, since the
unreachability signal is used as confirmation that the node has
actually stopped before it is removed.
* The old implementation would cap the pool size (both corePoolSize
and maximumPoolSize) to max-pool-size, which is very confusing
because maximumPoolSize is only used when the task queue is bounded.
* As a result, configuring core-pool-size-min and core-pool-size-max
was not enough, because the values could still be capped by the
default max-pool-size.
* The new behavior is simply that maximumPoolSize is adjusted to not be
less than corePoolSize, but otherwise the config properties match the
underlying ThreadPoolExecutor implementation.
* Added a convenience fixed-pool-size property.
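A dispatcher configuration sketch using the new property; the dispatcher
name and pool size are illustrative, not defaults:

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    val config = ConfigFactory.parseString("""
      my-dispatcher {
        type = Dispatcher
        executor = "thread-pool-executor"
        thread-pool-executor {
          # convenience property: a fixed pool of 16 threads
          fixed-pool-size = 16
        }
      }
      """)
    val system = ActorSystem("example", config.withFallback(ConfigFactory.load()))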
Unless the message class is in the akka.* namespace or the configuration
setting 'akka.actor.warn-about-java-serializer-usage' is disabled, a
warning is logged for each class that the Java serializer is chosen for.
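A one-line sketch of turning the warning off, using the setting named above:

    import com.typesafe.config.ConfigFactory

    // silences the per-class Java serializer warning
    val quiet = ConfigFactory.parseString(
      "akka.actor.warn-about-java-serializer-usage = off")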
* the reported issue is fixed by the immediate leaderActions
(moving to Up) when joining the first node to itself
* the other changes are precautions just in case
* use a handshake timeout instead of the transport failure detector
* add a new config property akka.remote.handshake-timeout; for
netty.tcp and netty.ssl the existing netty.tcp.connection-timeout
setting will be used (a config sketch follows this list)
* add test of the timeouts
* mima filter for internal ProtocolStateActor
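A sketch of the timeout settings named in this list; the values are
illustrative, not the reference.conf defaults:

    import com.typesafe.config.ConfigFactory

    val timeouts = ConfigFactory.parseString("""
      # handshake timeout for the new mechanism
      akka.remote.handshake-timeout = 15 s
      # used instead of the above for netty.tcp and netty.ssl
      akka.remote.netty.tcp.connection-timeout = 15 s
      """)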
For manual downing it is not needed. For auto-down it doesn't add any extra
safety, since auto-down does not handle network partitions anyway.
The setting is still useful if you implement downing strategies that handle network partitions,
e.g. by keeping the larger side of the partition and shutting down the smaller side.
- created new subproject akka-protobuf (and added COPYING and LICENSE)
- renamed com.google.protobuf -> akka.protobuf everywhere
- also added the same renaming step to the results of protoc compilation in
project/Protobuf.scala (a sketch follows below)
- had to include transcriptions of Netty’s ProtobufEncoder/Decoder to
make multi-node-testkit compile again
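A rough sketch of what the rename step over protoc output could look like;
the helper name is hypothetical and project/Protobuf.scala may differ:

    import sbt._

    // Hypothetical helper: rewrite the protobuf package in generated sources.
    def renameProtobufPackage(generated: Seq[File]): Unit =
      generated.foreach { file =>
        val source = IO.read(file)
        IO.write(file, source.replace("com.google.protobuf", "akka.protobuf"))
      }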
When using a dispatcher (default or a separate cluster dispatcher)
with fewer than 5 threads the Cluster extension initialization
could deadlock.
It was reproducible by adding a sleep before the Await of GetClusterCoreRef
in the Cluster extension constructor. The reason was that other cluster
actors were started too early and also tried to get the Cluster extension,
thereby blocking dispatcher threads.
Note that the Cluster extension is started via ClusterActorRefProvider before
ActorSystem.apply returns.
The improvement is to start the cluster child actors lazily, when
GetClusterCoreRef is received.
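A minimal sketch of the lazy-start pattern, with hypothetical actor and
message names; the real ClusterDaemon internals differ:

    import akka.actor.{ Actor, ActorRef, Props }

    case object GetCoreRef // stand-in for GetClusterCoreRef

    class Daemon(coreProps: Props) extends Actor {
      // No children are created in the constructor, so nothing here can
      // block a dispatcher thread on the Cluster extension while the
      // extension itself is still initializing.
      private var core: Option[ActorRef] = None

      def receive = {
        case GetCoreRef =>
          val ref = core.getOrElse {
            val c = context.actorOf(coreProps, "core") // started on demand
            core = Some(c)
            c
          }
          sender() ! ref
      }
    }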
* prevent Down and Exiting members from being used for joining
* delay shutdown of a Down member until the information has been
spread to all reachable members, e.g. when downing several nodes
via one node
* new akka.cluster.down-removal-margin setting (a config sketch
follows this list): the margin until shards or singletons that
belonged to a downed/removed partition are created in the
surviving partition. Used by singleton and sharding.
* remove the retry count parameters/settings for singleton in
favor of deriving those from the removal-margin
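A sketch of the new setting; the value is illustrative and should be tuned
to how quickly downing decisions spread in a given cluster:

    import com.typesafe.config.ConfigFactory

    val downRemoval = ConfigFactory.parseString(
      "akka.cluster.down-removal-margin = 20 s")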
cluster.shutdown
* must also be done when the listener actor stops before the
MemberRemoved event has been received
* add test for this
* clarify docs with an example that shuts down the actor system and
exits the JVM (a sketch follows below)
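A sketch of such an example, based on the registerOnMemberRemoved hook;
treat it as illustrative rather than the exact wording in the docs:

    import akka.actor.ActorSystem
    import akka.cluster.Cluster

    val system = ActorSystem("example")
    val cluster = Cluster(system)

    cluster.registerOnMemberRemoved {
      // this member has been removed: terminate the actor system
      // and exit the JVM once termination has completed
      system.registerOnTermination(System.exit(0))
      system.terminate()
    }

    cluster.leave(cluster.selfAddress) // eventually triggers the callback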
This improves the remote watching mechanism as follows: Watch requests
are intercepted by the RemoteWatcher and not sent on the wire,
except watches from the RemoteWatcher itself.
The RemoteWatcher is then in charge of forwarding DeathWatchNotification
messages to the watchers.
This reduces the number of watch messages to one per watchee, even if
there are several watchers on the same watchee (instead of n+1 before).
Reversed watch messages, and watches on refs with undefinedUid, are
excluded from interception by the RemoteWatcher and so are handled as
before this commit.
In addition, the following changes are made:
- Keep watchers in a map watchee -> watchers for more efficient retrieval
  (in a scala MultiMap; a sketch follows below)
- Keep watchees in a map address -> watchees for more efficient retrieval
  (in a scala MultiMap)
- Use InternalActorRef more thoroughly to avoid casts
- Rewatch uses a standard watch message, as the distinction is no longer
  needed
(cherry picked from commit 89af8bdb90)
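A minimal sketch of the multimap bookkeeping described above, using the
standard scala.collection.mutable.MultiMap mixin; the method names are
illustrative, not the actual RemoteWatcher internals:

    import akka.actor.ActorRef
    import scala.collection.mutable

    // watchee -> set of watchers, kept as a mutable multimap
    val watching: mutable.MultiMap[ActorRef, ActorRef] =
      new mutable.HashMap[ActorRef, mutable.Set[ActorRef]]
        with mutable.MultiMap[ActorRef, ActorRef]

    // Only the first binding for a watchee needs a watch on the wire;
    // later watchers just piggyback on the existing one.
    def addWatch(watchee: ActorRef, watcher: ActorRef): Unit =
      watching.addBinding(watchee, watcher)

    def removeWatch(watchee: ActorRef, watcher: ActorRef): Unit =
      watching.removeBinding(watchee, watcher)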
* remove final modifier in serializers
* revert/deprecate ProtobufSerializer.ARRAY_OF_BYTE_ARRAY
* adding back compatible empty constructor in serializers
* make FSM.State compatible
* add back ActorPath.ElementRegex
* revert SocketOption changes and add SocketOptionV2
see a6d3704ef6
* problem filter for ActorSystem and ActorPath
* problem filter for ByteString
* problem filter for deprecated Timeout methods
* BalancingPool companion
* ask
* problem filter for ActorDSL
* event bus
* exclude hasSubscriptions
* exclude some problems in testkit
* boundAddress and addressFromSocketAddress
* Pool nrOfInstances
* PromiseActorRef
* check with 2.3.9
* migration guide note
* explicit exclude of final class problems
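For reference, the general shape of such a problem filter as declared for
the MiMa plugin; the class and method below are placeholders, not one of
the actual entries:

    import com.typesafe.tools.mima.core._

    // placeholder names, shown only to illustrate the filter shape
    val filter = ProblemFilters.exclude[MissingMethodProblem](
      "akka.actor.SomeInternalClass.someMethod")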
+ enable parallel execution (a build sketch follows below)
+ exclude perf tests (TODO mark more as such)
+ use the sbt-dependency-graph plugin
+ implement dependency tracking for testing only those projects which
  could have been affected by a given PR
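A rough build.sbt sketch of the first two points; the tag name is
hypothetical and the actual build may differ:

    parallelExecution in Test := true

    // hypothetical tag: exclude tests tagged as perf tests (ScalaTest -l)
    testOptions in Test += Tests.Argument(
      TestFrameworks.ScalaTest, "-l", "performance")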