* instead of using the transport failure detector
* add a new config property akka.remote.handshake-timeout, but
for netty.tcp and netty.ssl the existing netty.tcp.connection-timeout
setting will be used
* add test of the timeouts
* mima filter for internal ProtocolStateActor
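A hedged usage sketch of the new property in plain Scala (the system name and the 20 s value are arbitrary; as noted above, with netty.tcp and netty.ssl the existing netty.tcp.connection-timeout applies instead):

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    object HandshakeTimeoutExample extends App {
      // override only the handshake timeout, keep everything else from the default config
      val config = ConfigFactory
        .parseString("akka.remote.handshake-timeout = 20 s")
        .withFallback(ConfigFactory.load())
      val system = ActorSystem("example", config)
    }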
* well, as long as they provide the parseFrom and toByteArray methods
* it uses reflection to find the `parseFrom` and `toByteArray` methods to avoid
a dependency on `com.google.protobuf` (see the sketch below).
* also special case com.google.protobuf when loading serialization binding
* migration guide
* mima filters for the serializers (all types changed)
* add real test for ProtobufSerializer
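A minimal sketch of the reflection approach from the bullets above (illustrative only, not the actual ProtobufSerializer source; a real implementation would also cache the looked-up methods). It assumes the message class exposes a static parseFrom(Array[Byte]) and an instance toByteArray:

    object ReflectiveProtobufAccess {
      // serialize: call the message's own toByteArray via reflection
      def toBinaryViaReflection(msg: AnyRef): Array[Byte] =
        msg.getClass.getMethod("toByteArray").invoke(msg).asInstanceOf[Array[Byte]]

      // deserialize: call the static parseFrom(Array[Byte]) on the target class
      def fromBinaryViaReflection(bytes: Array[Byte], clazz: Class[_]): AnyRef =
        clazz.getMethod("parseFrom", classOf[Array[Byte]]).invoke(null, bytes)
    }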
When using a dispatcher (default or separate cluster dispatcher)
with fewer than 5 threads the Cluster extension initialization
could deadlock.
It was reproducible by adding a sleep before the Await of GetClusterCoreRef
in the Cluster extension constructor. The reason was that other cluster actors were
started too early; they also tried to get the Cluster extension and thereby blocked
dispatcher threads.
Note that the Cluster extension is started via ClusterActorRefProvider before
ActorSystem.apply returns.
The improvement is to start the cluster child actors lazily when
GetClusterCoreRef is received.
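A minimal sketch of the lazy-start pattern under assumed names (this is not the actual ClusterDaemon code): the child is only created on the first GetClusterCoreRef, so the extension constructor's blocking Await no longer competes with eagerly started children for dispatcher threads:

    import akka.actor.{ Actor, ActorRef, Props }

    case object GetClusterCoreRef

    class LazyClusterDaemon(coreProps: Props) extends Actor {
      private var core: Option[ActorRef] = None

      def receive = {
        case GetClusterCoreRef =>
          // create the child only now, not in the constructor
          if (core.isEmpty) core = Some(context.actorOf(coreProps, "core"))
          core.foreach(c => sender() ! c)
      }
    }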
When watching many (5000) actors at the same time the
following problems were found:
* the first send of a sys msg happens without any flow control
=> limit the number of outstanding sys msgs by using
the buffer to send them later (ordinary resend), see the sketch below
* when a msg cannot be written the sys msg is dropped (relying on resend),
but that causes message re-ordering and negative acknowledgment,
which is very costly
=> buffer the sys msg on write failure
=> minor optimization of AckedReceiveBuffer
I also made the resend-limit configurable.
(cherry picked from commit ecfc271e9a9d7efcf76945632d89c78740291cc6)
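A rough sketch of the two buffering rules above, with assumed names (this is not the actual endpoint writer or AckedSendBuffer code): at most resendLimit sys msgs are written eagerly; anything beyond the limit, or anything whose write fails, simply stays in the buffer and is delivered by the ordinary resend mechanism:

    final case class SysMsg(seq: Long, payload: Any)

    // illustrative only; resendLimit stands for the now-configurable resend-limit
    class SysMsgFlowControl(resendLimit: Int, write: SysMsg => Boolean) {
      private var buffer = Vector.empty[SysMsg] // unacknowledged, in seq order

      def send(msg: SysMsg): Unit = {
        buffer :+= msg                  // always kept until acknowledged
        if (buffer.size <= resendLimit) // eager first write only below the limit
          write(msg)                    // if the write fails it just stays buffered
      }

      def acknowledged(upToSeq: Long): Unit =
        buffer = buffer.dropWhile(_.seq <= upToSeq)

      def resend(): Unit = buffer.foreach(write) // periodic retry of everything unacked
    }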
* add unidoc task via an AutoPlugin that depends on the PrValidation and Unidoc autoplugins
* separate CLI option logic into a case class
* remove autoplugin for root project
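A hedged sketch of what such an AutoPlugin wiring can look like (the plugin object and task names are assumptions, apart from PrValidation and Unidoc, which are the build's own autoplugins mentioned above):

    import sbt._

    object UnidocRootPlugin extends AutoPlugin {
      // only activates in projects that also have the two listed autoplugins enabled
      override def requires: Plugins = PrValidation && Unidoc
      override def trigger = noTrigger

      object autoImport {
        // hypothetical key standing in for the unidoc task referred to above
        val validateUnidoc = taskKey[Unit]("Run unidoc as part of PR validation")
      }
    }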
This improves the remote watching mechanism as follows: Watch requests
are intercepted by the RemoteWatcher and not sent on the wire,
except watches from the RemoteWatcher itself.
The RemoteWatcher is then in charge of forwarding DeathWatchNotification
messages to the watchers.
This reduces the number of watch messages to one per watchee, even if
there are several watchers on the same watchee (instead of n+1 before).
Reversed watch messages, and watches on refs with undefinedUid, are excluded from
interception by the RemoteWatcher and so are handled as before this commit.
In addition, the following changes are made:
- Keep watchers in a map watchee -> watchers for more efficient retrieval
(in a Scala MultiMap, see the sketch after this list)
- Keep watchees in a map address -> watchees for more efficient retrieval
(in a Scala MultiMap)
- Use InternalActorRef more thoroughly to avoid casts
- Rewatch uses a standard watch message, as the distinction is no longer needed
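A minimal sketch of the two multimaps and the one-watch-per-watchee rule (field and method names are illustrative, not the actual RemoteWatcher members):

    import akka.actor.{ ActorRef, Address }
    import scala.collection.mutable

    object RemoteWatchBookkeeping {
      // watchee -> watchers and address -> watchees, both as Scala MultiMaps
      val watching = new mutable.HashMap[ActorRef, mutable.Set[ActorRef]]
        with mutable.MultiMap[ActorRef, ActorRef]
      val watcheeByAddress = new mutable.HashMap[Address, mutable.Set[ActorRef]]
        with mutable.MultiMap[Address, ActorRef]

      def addWatch(watchee: ActorRef, watcher: ActorRef): Unit = {
        val firstWatcher = !watching.contains(watchee)
        watching.addBinding(watchee, watcher)
        watcheeByAddress.addBinding(watchee.path.address, watchee)
        if (firstWatcher) {
          // only the first watcher triggers an actual remote watch,
          // which keeps it at one watch message per watchee
        }
      }
    }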
It contained a difficult-to-hit race condition that was exploited with
the help of a custom same-thread execution context by Play (its Iteratee
trampoline). In short: don’t resubmit the current Batch if it contains
an unsynchronized variable.
(cherry picked from commit 6d6b9048ddaa72e7b7f1183dabf550b78de6d4e4)
(cherry picked from commit 89af8bdb90)
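For context, a minimal same-thread ExecutionContext of the kind referred to above (Play's Iteratee trampoline is more elaborate; this only shows what "same-thread" means here):

    import scala.concurrent.ExecutionContext

    object SameThreadEc extends ExecutionContext {
      // runs every Runnable on the calling thread, which is what made the batching race reachable
      def execute(runnable: Runnable): Unit = runnable.run()
      def reportFailure(cause: Throwable): Unit = cause.printStackTrace()
    }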
* remove final modifier in serializers
* revert/deprecate ProtobufSerializer.ARRAY_OF_BYTE_ARRAY
* add back compatible empty constructor in serializers
* make FSM.State compatible
* add back ActorPath.ElementRegex
* revert SocketOption changes and add SocketOptionV2
see a6d3704ef6
* problem filter for ActorSystem and ActorPath
* problem filter for ByteString
* problem filter for deprecated Timeout methods
* BalancingPool companion
* ask
* problem filter for ActorDSL
* event bus
* exclude hasSubscriptions
* exclude some problems in testkit
* boundAddress and addressFromSocketAddress
* Pool nrOfInstances
* PromiseActorRef
* check with 2.3.9
* migration guide note
* explicit exclude of final class problems (see the sketch below)
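As a hedged illustration of what the problem filters listed above look like in the build definition (the class and member names are placeholders, not the entries that were actually added):

    import com.typesafe.tools.mima.core._

    object SketchedMimaFilters {
      // placeholder targets; the real filters point at the members listed above
      val filters = Seq(
        ProblemFilters.exclude[FinalClassProblem]("akka.some.pkg.SomeFinalClass"),
        ProblemFilters.exclude[MissingMethodProblem]("akka.some.pkg.SomeClass.removedMethod")
      )
    }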