* Rewrite the pool gateway synchronization
Rewrite the pool gateway synchronization so that:
- The documented race condition in PoolInterfaceActor is gone. No
PoolInterfaceActor will receive new requests after the gateway
shutdown has been initiated (fix#20081).
- A gateway created using newHostConnectionPool will no longer
share its pool with others, even when it has been shut down
due to idle-timeout and recreated. Also, its original
materializer will be used to create all successive pool
incarnations (fix#20080).
- Collapsing chains of gateways no longer need to be created.
The gateways are now only an entry point to the pool master
actor, which is in charge of keeping a cache of currently
active pools and recreating them when needed from the
information given by the gateway (see the sketch below).
* Add copyright header
* Mark PoolMasterActor as INTERNAL API
* Larger outer timeout
* Define Props in PoolMasterActor object
* Comment INTERNAL API
* Remove unused import
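A minimal sketch of the resulting shape, with illustrative names
(PoolKey, GetPool and PoolMasterSketch are not the actual internal
API): one master actor owns the cache of active pools and recreates
an entry on demand from the information a gateway hands it.

    import akka.actor.{ Actor, ActorRef, Props }

    // Hypothetical key and message types, for illustration only.
    final case class PoolKey(host: String, port: Int)
    final case class GetPool(key: PoolKey, poolProps: Props)

    class PoolMasterSketch extends Actor {
      private var pools = Map.empty[PoolKey, ActorRef]

      def receive = {
        case GetPool(key, poolProps) =>
          val pool = pools.getOrElse(key, {
            // recreate the pool from the gateway-provided information
            val p = context.actorOf(poolProps)
            pools += key -> p
            p
          })
          sender() ! pool
      }
    }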
Previously a failure during e.g. MailboxType.create() would make the
user guardian fail, tearing down the whole system as a result. The cause
is a deep bug in the handling of ActorCell creation that we cannot
really fix anymore without changing semantics, hence this fix only
targets top-level actors (where the observable difference is an
unambiguous improvement).
fixes #15947
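A minimal illustration of the user-visible difference, assuming a
configured mailbox id "broken-mailbox" whose MailboxType.create()
throws:

    import akka.actor.{ ActorSystem, Props }

    object TopLevelFailureDemo extends App {
      val system = ActorSystem("demo")
      // Before the fix, this creation failure propagated to the user
      // guardian and tore down the whole system; with it, only the
      // returned top-level ref fails.
      val ref = system.actorOf(Props.empty.withMailbox("broken-mailbox"), "top")
    }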
This entails:
* adding akka.pattern.PatternCS.* to enable ask etc. with
CompletionStage (see the sketch after this list)
* changing RequestContext to offer an ExecutionContextExecutor for the
CompletionStage.*Async combinators
* splitting up akka.stream.Queue for JavaDSL consistency
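A minimal sketch of asking an actor with a CompletionStage result,
referenced here by the name the class carries in released Akka,
akka.pattern.PatternsCS; the message and timeout are arbitrary:

    import java.util.concurrent.CompletionStage
    import akka.actor.ActorRef
    import akka.pattern.PatternsCS
    import akka.util.Timeout
    import scala.concurrent.duration._

    object AskSketch {
      // Returns a CompletionStage, usable from Java without touching
      // scala.concurrent.Future.
      def askCS(ref: ActorRef): CompletionStage[AnyRef] =
        PatternsCS.ask(ref, "request", Timeout(3.seconds))
    }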
This new lightweight ActorRef supports running a non-blocking
side-effect upon message send, which is used to dispatch an async
callback to a GraphStageLogic, or it can be used to make the Akka Typed
adapters more efficient. The FunctionRef is registered with its parent,
and it is not user-level API (hence only accessible by downcasting the
ActorContext).
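The user-facing place where this surfaces is the async callback of a
GraphStageLogic; a minimal sketch (CallbackSource and its buffering
are illustrative, and a real stage would expose the callback, e.g. via
the materialized value):

    import akka.stream.{ Attributes, Outlet, SourceShape }
    import akka.stream.stage.{ AsyncCallback, GraphStage, GraphStageLogic, OutHandler }

    class CallbackSource extends GraphStage[SourceShape[Int]] {
      val out: Outlet[Int] = Outlet("CallbackSource.out")
      val shape: SourceShape[Int] = SourceShape(out)

      def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
        new GraphStageLogic(shape) {
          private var buffer = Vector.empty[Int]

          // Thread-safe handle: each invoke is dispatched into the
          // stage like a message send.
          val inject: AsyncCallback[Int] = getAsyncCallback[Int] { n =>
            if (isAvailable(out)) push(out, n) else buffer :+= n
          }

          setHandler(out, new OutHandler {
            def onPull(): Unit = buffer match {
              case n +: rest => buffer = rest; push(out, n)
              case _         => // wait for the next inject
            }
          })
        }
    }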
Publish appropriate events to the current ActorSystem event stream upon remote ActorSystem shutdown or when the current ActorSystem is quarantined by the remote ActorSystem.
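A minimal sketch of reacting to one of these events, assuming the
quarantine notification is published as
akka.remote.ThisActorSystemQuarantinedEvent:

    import akka.actor.{ Actor, ActorSystem, Props }
    import akka.remote.ThisActorSystemQuarantinedEvent

    class QuarantineListener extends Actor {
      def receive = {
        case e: ThisActorSystemQuarantinedEvent =>
          // react, e.g. by restarting this ActorSystem in a controlled way
          context.system.log.warning("This system was quarantined: {}", e)
      }
    }

    object QuarantineDemo extends App {
      val system = ActorSystem("demo")
      val listener = system.actorOf(Props[QuarantineListener], "listener")
      system.eventStream.subscribe(listener, classOf[ThisActorSystemQuarantinedEvent])
    }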
* DeathPactException could occur if the ReliableDeliverySupervisor
was gated but had not yet received Terminated, then got an Ungate message
from the EndpointManager and thereby entered the idle state, followed by
receiving the Terminated message, which is not handled in idle
* use the handshake timeout instead of the transport failure detector
* add a new config property akka.remote.handshake-timeout, but
for netty.tcp and netty.ssl the existing netty.tcp.connection-timeout
setting will be used (see the config sketch below)
* add test of the timeouts
* mima filter for internal ProtocolStateActor
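A minimal sketch of setting the new property (the values are
illustrative; for netty.tcp/netty.ssl the existing connection-timeout
keeps governing the handshake phase):

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    object HandshakeTimeoutDemo extends App {
      val config = ConfigFactory.parseString(
        """
        akka.remote.handshake-timeout = 10s
        akka.remote.netty.tcp.connection-timeout = 10s
        """).withFallback(ConfigFactory.load())

      val system = ActorSystem("remote-demo", config)
    }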
* well, as long as they provide the parseFrom and toByteArray methods
* it uses reflection to find the `parseFrom` and `toByteArray` methods
to avoid a dependency on `com.google.protobuf` (sketched below).
* also special-case com.google.protobuf when loading serialization bindings
* migration guide
* mima filters for the serializers (all types changed)
* add real test for ProtobufSerializer
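The reflective lookup boils down to something like this (simplified; a
real implementation would cache the resolved methods and validate the
class):

    object ProtobufReflectionSketch {
      def toBinary(msg: AnyRef): Array[Byte] =
        msg.getClass.getMethod("toByteArray")
          .invoke(msg).asInstanceOf[Array[Byte]]

      def fromBinary(bytes: Array[Byte], clazz: Class[_]): AnyRef =
        clazz.getMethod("parseFrom", classOf[Array[Byte]])
          .invoke(null, bytes) // parseFrom is static, hence null receiver
    }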
When using a dispatcher (default or a separate cluster dispatcher)
with fewer than 5 threads, the Cluster extension initialization
could deadlock.
It was reproducible by adding a sleep before the Await of GetClusterCoreRef
in the Cluster extension constructor. The reason was that other cluster actors
were started too early and also tried to get the Cluster extension, thereby
blocking dispatcher threads.
Note that the Cluster extension is started via ClusterActorRefProvider before
ActorSystem.apply returns.
The improvement is to start the cluster child actors lazily, when the
GetClusterCoreRef message is received.
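A sketch of that pattern, with illustrative names rather than the
actual ClusterDaemon internals:

    import akka.actor.{ Actor, ActorRef, Props }

    case object GetClusterCoreRef

    class ClusterDaemonSketch(coreProps: Props) extends Actor {
      private var core: Option[ActorRef] = None

      def receive = {
        case GetClusterCoreRef =>
          // the child is started lazily on first request instead of in
          // the constructor, so it cannot block dispatcher threads while
          // the extension constructor is still Awaiting
          val ref = core.getOrElse {
            val c = context.actorOf(coreProps, "core")
            core = Some(c)
            c
          }
          sender() ! ref
      }
    }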