* Otherwise some changes might never be published, since there doesn't
have to be convergence on all nodes between all transitions.
* Detected by a failure in ClusterSingletonManagerSpec.
* Added a test to simulate the failure scenario.
* It was an unlikely situation that was not covered: the new leader
didn't know the previous leader, because it transitioned
from Start -> BecomeLeader, while the old leader was removed and got
LeaderChanged(None), so neither of them could request the other
for hand-over or take-over.
* Taken care of with the retry timeouts, also when the leader
receives LeaderChanged(None); see the sketch below.
* The old leader should have received a proper LeaderChanged
earlier, which points to a flaw in the way we publish leader events.
That part will be fixed in a separate commit.
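A minimal sketch of the retry idea, with hypothetical names
(HandOverRetry, becomeLeader, retryInterval, maxRetries); the real
ClusterSingletonManager state machine differs:

    import scala.concurrent.duration._
    import akka.actor.Actor

    case class HandOverRetry(count: Int) // hypothetical retry tick

    class SingletonSketch extends Actor {
      import context.dispatcher
      val retryInterval = 1.second // assumed; a real value would be configurable
      val maxRetries = 10          // assumed

      def becomeLeader(): Unit = () // placeholder for taking over the singleton

      override def preStart(): Unit = self ! HandOverRetry(1)

      def receive = {
        case HandOverRetry(count) if count < maxRetries =>
          // The previous leader is unknown (e.g. we saw LeaderChanged(None)),
          // so keep retrying until hand-over data arrives or we give up.
          context.system.scheduler.scheduleOnce(
            retryInterval, self, HandOverRetry(count + 1))
        case HandOverRetry(_) =>
          becomeLeader() // assume no hand-over is coming
      }
    }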
* Rename config akka.event-handlers to akka.loggers
* Rename config akka.event-handler-startup-timeout to
akka.logger-startup-timeout
* Rename JulEventHandler to JavaLogger
* Rename Slf4jEventHandler to Slf4jLogger
* Change all places in tests and docs
* Deprecation: the old names still work, but with warnings
(see the example below)
* Migration guide
* Test for the deprecated event-handler config
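A minimal example of the renamed settings; the logger FQCN
akka.event.slf4j.Slf4jLogger is assumed here for illustration:

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    // New config names; the deprecated akka.event-handlers and
    // akka.event-handler-startup-timeout still work but log warnings.
    val config = ConfigFactory.parseString("""
      akka.loggers = ["akka.event.slf4j.Slf4jLogger"]
      akka.logger-startup-timeout = 5s
    """)
    val system = ActorSystem("example", config)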
* The failure detector was previously copied, with refactoring, to
akka-remote; this refactoring makes use of that and removes
the failure detector in akka-cluster.
* Adjustments to reference.conf
* Refactoring of FailureDetectorPuppet; see the sketch below.
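A minimal sketch of the puppet idea, with a simplified stand-in for the
failure detector interface (the real akka-remote trait differs):

    import akka.actor.Address
    import java.util.concurrent.ConcurrentHashMap

    trait SimpleFailureDetector { // simplified stand-in, not the real trait
      def isAvailable(node: Address): Boolean
      def heartbeat(node: Address): Unit
    }

    // Test puppet: availability is flipped explicitly by the test instead
    // of being derived from heartbeat timing.
    class PuppetSketch extends SimpleFailureDetector {
      private val unavailable = ConcurrentHashMap.newKeySet[Address]()
      def markNodeAsUnavailable(node: Address): Unit = unavailable.add(node)
      def markNodeAsAvailable(node: Address): Unit = unavailable.remove(node)
      override def isAvailable(node: Address): Boolean = !unavailable.contains(node)
      override def heartbeat(node: Address): Unit = () // ignored by the puppet
    }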
- the initial setting of the repeated task raced with its first execution;
when the latter won, the task would not repeat (see the sketch below)
- there was a race in task submission which could lead to enqueueing one
round too late
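A minimal sketch of the fix for the first race, using a hypothetical
self-rescheduling task (not the actual scheduler code): the task carries
everything it needs to re-arm itself, so its first run cannot race with a
handle that is only published afterwards.

    import java.util.concurrent.{Executors, ScheduledExecutorService, TimeUnit}
    import java.util.concurrent.atomic.AtomicBoolean

    class RepeatedTask(executor: ScheduledExecutorService,
                       periodMillis: Long)(body: => Unit) {
      private val cancelled = new AtomicBoolean(false)
      private val tick: Runnable = new Runnable {
        def run(): Unit = if (!cancelled.get) {
          body
          // Re-arm from inside the task itself, instead of relying on a
          // handle that might not have been stored yet.
          executor.schedule(this, periodMillis, TimeUnit.MILLISECONDS)
        }
      }
      def start(): Unit = executor.schedule(tick, periodMillis, TimeUnit.MILLISECONDS)
      def cancel(): Unit = cancelled.set(true)
    }

    // Usage:
    // new RepeatedTask(Executors.newSingleThreadScheduledExecutor(), 100)(println("tick")).start()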
* The previous work-around was introduced because Netty blocks when sending
to broken connections. This is supposed to be solved by the new
non-blocking remoting.
* Removed HeartbeatSender and CoreSender in cluster
* Added tests to verify that broken connections don't disturb live connections
* The problem is that we do remote deployment to a node that isn't alive, and with
ordinary remoting that is not detected at all. With cluster this was taken care of
by a later AddressTerminated and the ChildTerminated generated by
RemoteDeploymentWatcher. With the new RemoteDeadLetters the additional watch triggers
an immediate Terminated, which is captured by RemoteDeploymentWatcher but not acted
upon since it's not an addressTerminated. RemoteDeploymentWatcher then unwatches and
will therefore not act on a later AddressTerminated.
* The long-term solution is to have reliable system messages and remote supervision
without explicit watch, so that we know when the remote deployment fails.
* The short-term solution is to let RemoteDeploymentWatcher always generate
ChildTerminated, also for non-addressTerminated; see the sketch below.
* It's possibly racy since ChildTerminated is not idempotent.
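A minimal sketch of the short-term fix, with simplified stand-ins for the
internal messages (the real RemoteDeploymentWatcher differs):

    import akka.actor.{Actor, ActorRef, Terminated}

    case class WatchRemote(child: ActorRef, supervisor: ActorRef) // stand-in
    case class ChildTerminated(child: ActorRef)                   // stand-in

    class WatcherSketch extends Actor {
      var supervisors = Map.empty[ActorRef, ActorRef]

      def receive = {
        case WatchRemote(child, supervisor) =>
          supervisors += (child -> supervisor)
          context.watch(child)
        case t: Terminated =>
          // Previously ChildTerminated was only generated for
          // addressTerminated; now it is sent unconditionally so that a
          // failed remote deployment is noticed by the supervisor.
          supervisors.get(t.actor) foreach { _ ! ChildTerminated(t.actor) }
          supervisors -= t.actor
      }
    }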