* send terminationMessage to singleton when leaving the last node, #21592
* When leaving the last node, i.e. when there is no newOldestOption,
the manager was just stopped. The change is to send the
terminationMessage also in this case and to wait until the singleton
actor is terminated before stopping the manager (see the sketch after
this list).
* Also changed so that the singleton is stopped immediately when the
cluster has been terminated while the last node is leaving, i.e.
no newOldestOption. Previously it retried until maxTakeOverRetries
before stopping.
* More comprehensive test of this scenario in ClusterSingletonManagerLeaveSpec
* increase test timeout
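A minimal sketch of the shutdown sequence described above, with hypothetical
names (Manager, terminationMessage, the "leave-last-node" trigger); the real
ClusterSingletonManager is considerably more involved:

```scala
import akka.actor.{ Actor, ActorRef, Props, Terminated }

// Illustrative only: instead of stopping abruptly when the last node is
// leaving (no newOldestOption), send the terminationMessage to the singleton,
// wait for Terminated, and only then stop the manager itself.
class Manager(singletonProps: Props, terminationMessage: Any) extends Actor {
  private val singleton: ActorRef =
    context.watch(context.actorOf(singletonProps, "singleton"))

  def receive: Receive = {
    case "leave-last-node" => // hypothetical trigger for the scenario above
      singleton ! terminationMessage
      context.become(stopping)
  }

  def stopping: Receive = {
    case Terminated(`singleton`) =>
      // the singleton has completed its cleanup, now the manager can stop
      context.stop(self)
  }
}
```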
* ignore PubSub Status message from unknown node, #20846
Reproducer:
1. old cluster of node1, node2 and node3
2. shut down node3 and start it again with the same host:port, and
let it join itself rather than the old cluster
3. node1 and node2 will continue to gossip to the node3 address, and
the Status message is accepted and replied to (a Delta from an
unknown node is already ignored)
Solution:
* ignore Status message from an unknown node
* also added a reply flag in the Status message to break the
back-and-forth replies in case the deltas are not accepted. This is
not strictly needed to fix this bug, but it adds an extra level of
safety.
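A sketch of the solution, using simplified stand-ins for the internal
DistributedPubSubMediator gossip messages (the message and field names here
are illustrative, not the exact internal API):

```scala
import akka.actor.{ Actor, Address }

// Simplified stand-ins for the internal gossip messages.
final case class Status(versions: Map[Address, Long], isReplyToStatus: Boolean)
final case class Delta(buckets: Vector[String])

class MediatorSketch extends Actor {
  // nodes this mediator currently knows about (updated from membership events, elided)
  private var nodes: Set[Address] = Set.empty

  def receive: Receive = {
    case Status(_, isReplyToStatus) =>
      if (nodes.contains(sender().path.address)) {
        // ... compare versions and possibly send a Delta back ...
        if (!isReplyToStatus) {
          // reply at most once, breaking endless back-and-forth Status exchanges
          sender() ! Status(versions = Map.empty, isReplyToStatus = true)
        }
      }
      // else: ignore Status from an unknown (e.g. restarted) node, #20846
    case Delta(_) => // a Delta from an unknown node was already ignored
  }
}
```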
* Provide shorter aliases for the ActorRefProviders #20649
* Use the new ActorRefProvider aliases throughout code and docs (example below)
* Cleaner alias replacement logic
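For example, configuration can now use the short alias instead of the fully
qualified provider class name (a sketch; the system name and the use of a
programmatic config here are arbitrary):

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object ProviderAliasExample extends App {
  // "cluster" is the short alias for akka.cluster.ClusterActorRefProvider;
  // "local" and "remote" are the aliases for the other providers.
  val config = ConfigFactory
    .parseString("akka.actor.provider = cluster")
    .withFallback(ConfigFactory.load())

  val system = ActorSystem("example", config)
}
```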
* Allows the cluster client and its receptionist to be observable in
terms of contact points becoming available and client heartbeats.
Furthermore, a query API for requesting the current state has been
provided.
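A sketch of how the new observability could be used from the client side,
assuming the subscription and query messages (SubscribeContactPoints,
GetContactPoints, ContactPoints, ContactPointAdded, ContactPointRemoved)
live in the akka.cluster.client package as introduced by this change:

```scala
import akka.actor.{ Actor, ActorLogging, ActorRef }
import akka.cluster.client.{
  ContactPointAdded, ContactPointRemoved, ContactPoints,
  GetContactPoints, SubscribeContactPoints
}

// Monitors a ClusterClient's view of the receptionists it can talk to.
class ContactPointMonitor(clusterClient: ActorRef) extends Actor with ActorLogging {
  override def preStart(): Unit = {
    clusterClient ! SubscribeContactPoints // push-style change events from now on
    clusterClient ! GetContactPoints       // one-off query of the current state
  }

  def receive: Receive = {
    case ContactPoints(points)      => log.info("Current contact points: {}", points)
    case ContactPointAdded(point)   => log.info("Contact point added: {}", point)
    case ContactPointRemoved(point) => log.info("Contact point removed: {}", point)
  }
}
```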
* In 2.4 we derive the number of hand-over/take-over retries from
the removal margin, but we decided to set that to 0 by default, since
it is intended for network partition scenarios. As a result,
maxTakeOverRetries became 1. So there must also be a minimum number
of retries property (see the sketch after this list).
* The test failed for the leaving scenario because the singleton
instance was stopped hard, without sending the terminationMessage,
when maxTakeOverRetries was exceeded.
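A worked sketch of that derivation with illustrative values; the property
names and the exact formula are assumptions here, the real ones live in
ClusterSingletonManagerSettings and reference.conf:

```scala
import scala.concurrent.duration._

object RetrySettingsSketch {
  val removalMargin: FiniteDuration = Duration.Zero     // 2.4 default, meant for network partitions
  val handOverRetryInterval: FiniteDuration = 1.second
  val minNumberOfHandOverRetries: Int = 10               // the minimum-retries floor

  // Derived from the removal margin alone this collapses to 1 when the margin is 0 ...
  val derivedRetries: Int =
    (removalMargin.toMillis / handOverRetryInterval.toMillis).toInt + 1
  // ... so the configured floor keeps the retry counts at a sane minimum.
  val maxHandOverRetries: Int = math.max(derivedRetries, minNumberOfHandOverRetries)
  val maxTakeOverRetries: Int = math.max(1, maxHandOverRetries - 3)
}
```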
* number-of-contacts is by default 3, and in this test with 4 server
nodes we shut down all but one at the end. Sometimes the client has
every node except the remaining one in its list of contacts, so it
will never establish contact with the remaining node.
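One way to keep such a test deterministic is to let all 4 server nodes be
contact points (a sketch; the full config path is assumed from the default
mentioned above):

```scala
import com.typesafe.config.ConfigFactory

object ClusterClientTestConfig {
  // With 4 server nodes, allow 4 contact points so the client's contact list
  // can never consist solely of nodes that are shut down later in the test.
  val config = ConfigFactory.parseString(
    "akka.cluster.client.receptionist.number-of-contacts = 4")
}
```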
* because it would result in quarantine if failure detection
triggers, and that kind of coupling is exactly what is not desired
for a ClusterClient
* replace with simple heartbeat-based failure detection using
DeadlineFailureDetector (sketched below)
* DeadLetterSuppression
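A minimal sketch of the deadline-based heartbeat detection mentioned above,
using akka.remote.DeadlineFailureDetector (the constructor arguments, the
implicit clock import and the chosen values are assumptions for the example):

```scala
import scala.concurrent.duration._
import akka.remote.DeadlineFailureDetector
import akka.remote.FailureDetector.defaultClock // provides the implicit Clock

object HeartbeatSketch {
  // Considers the peer available as long as heartbeats keep arriving within
  // the acceptable pause; no remote watch and therefore no quarantining.
  val failureDetector = new DeadlineFailureDetector(
    10.seconds, // acceptable heartbeat pause
    2.seconds)  // expected heartbeat interval

  def onHeartbeat(): Unit = failureDetector.heartbeat()

  def contactAlive: Boolean = failureDetector.isAvailable
}
```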