* The problem was that we didn't wait for the testconductor.shutdown Future
to complete, and therefore barriers could be triggered in unexpected order.
The reason we didn't await it was that during shutdown the Future was
completed with a client-disconnected failure. I have fixed that and added
await to all shutdowns.
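
For illustration, a minimal sketch of the awaiting pattern (the timeout
value and the `second` role name are placeholders, not from the actual spec):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._

// Block until the conductor has completed the shutdown of the target node,
// so that subsequent barriers cannot fire in an unexpected order.
// `testConductor` is provided by the multi-node testkit; `second` is a
// placeholder RoleName.
Await.result(testConductor.shutdown(second), 10.seconds)
```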
* Previously heartbeat messages were sent to all other members, i.e.
each member was monitored by every other member in the cluster.
* This was the number one known scalability bottleneck, due to the
number of interconnections, which grows quadratically with cluster size.
* Limit sending of heartbeats to a few (5) members. Select and
re-balance the receivers with a consistent hashing algorithm when
members are added or removed (see the sketch after this list).
* Send a few EndHeartbeat messages when ending the sending of
Heartbeat messages to a member.
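
A sketch of the receiver selection with consistent hashing (illustrative
only; the hash function, the names, and the wrap-around walk are my
assumptions, not the actual Akka implementation):

```scala
import java.security.MessageDigest
import scala.collection.immutable.SortedMap

object HeartbeatReceivers {
  // Hash an address string to a point on the ring
  private def hash(s: String): BigInt =
    BigInt(1, MessageDigest.getInstance("MD5").digest(s.getBytes("UTF-8")))

  // Each member monitors the `count` members that follow its own position
  // on the ring; adding or removing a member only re-balances neighbours.
  def receiversFor(self: String, members: Set[String], count: Int = 5): Set[String] = {
    val ring = SortedMap(members.map(m => hash(m) -> m).toSeq: _*)
    val (upTo, after) = ring.partition { case (h, _) => h <= hash(self) }
    // walk the ring clockwise from our own position, wrapping around
    (after.values ++ upTo.values).filterNot(_ == self).take(count).toSet
  }
}
```

E.g. `HeartbeatReceivers.receiversFor(selfAddress, allAddresses)` yields the
five addresses this member should send heartbeats to.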
* We will write more tests that rely on the real Cluster(system) extension,
such as ClusterRoundRobinRoutedActorSpec
* When not using a FailureDetectorStrategy or overriding seed nodes,
MultiNodeClusterSpec will use the real Cluster(system) extension
instead of a new Cluster instance with additional test facilities
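
For reference, the real extension is just a lookup on the ActorSystem
(a minimal sketch; the system name and the self-join are arbitrary):

```scala
import akka.actor.ActorSystem
import akka.cluster.Cluster

val system = ActorSystem("ClusterSystem")
// The real Cluster extension, one instance per ActorSystem
val cluster = Cluster(system)
// e.g. join itself to form a single-node cluster
cluster.join(cluster.selfAddress)
```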
* Defined the domain events in the ClusterEvent.scala file
* Produce events from the diff and publish them to the event bus
from a separate actor, ClusterDomainEventPublisher
* Adjustments of tests
* Gossip is not exposed in the user API
* Better and more events
* Snapshot event sent to new subscriber
* Updated tests
* Periodic publishing only for internal stats
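
A minimal sketch of how a subscriber sees these events (the listener class
and its bodies are illustrative): the new subscriber first receives a
CurrentClusterState snapshot and then the incremental domain events.

```scala
import akka.actor.Actor
import akka.cluster.Cluster
import akka.cluster.ClusterEvent._

class ClusterListener extends Actor {
  val cluster = Cluster(context.system)

  // Subscribe to membership events; the snapshot arrives first
  override def preStart(): Unit = cluster.subscribe(self, classOf[MemberEvent])
  override def postStop(): Unit = cluster.unsubscribe(self)

  def receive = {
    case state: CurrentClusterState =>
      println(s"snapshot, members: ${state.members.size}")
    case MemberUp(member) =>
      println(s"member up: ${member.address}")
    case _: MemberEvent => // other membership events
  }
}
```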
* Trying to resolve conflicts simultaneously at several nodes creates new conflicts.
Therefore the leader resolves conflicts, to limit divergence. To avoid overload there
is also a configurable rate limit on how many conflicts are handled per second.
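
A sketch of the rate-limiting idea, assuming a simple fixed window of one
second (illustrative only, not the actual implementation):

```scala
// Allow at most `maxPerSecond` conflict resolutions per one-second window
class ConflictRateLimiter(maxPerSecond: Int) {
  private var windowStart = System.nanoTime()
  private var used = 0

  def tryAcquire(): Boolean = {
    val now = System.nanoTime()
    // reset the budget when a new one-second window starts
    if (now - windowStart >= 1000000000L) { windowStart = now; used = 0 }
    if (used < maxPerSecond) { used += 1; true }
    else false // over budget: skip this conflict until the next window
  }
}
```

The leader would then guard each resolution with
`if (limiter.tryAcquire()) resolve(conflict)`.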
* Netty blocks when sending to broken connections. The ClusterHeartbeatSender actor
isolates sending to different nodes by using a child worker for each target
address, thereby reducing the risk of irregular heartbeats to healthy
nodes due to broken connections to other nodes.
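
A sketch of the isolation pattern (simplified, hypothetical names and
receiver path; not the actual Akka source):

```scala
import akka.actor.{ Actor, ActorRef, Address, Props, RootActorPath }

final case class Heartbeat(from: Address)

// Parent: one child worker per target address, created on demand
class HeartbeatSenderSketch extends Actor {
  private var workers = Map.empty[Address, ActorRef]

  def receive = {
    case (target: Address, beat: Heartbeat) =>
      val worker = workers.getOrElse(target, {
        val w = context.actorOf(Props(classOf[HeartbeatWorkerSketch], target))
        workers += target -> w
        w
      })
      worker ! beat
  }
}

// Worker: the potentially blocking network send happens here, so a broken
// connection only delays heartbeats to this one target
class HeartbeatWorkerSketch(target: Address) extends Actor {
  // hypothetical receiver path, for illustration only
  private val receiver =
    context.actorSelection(RootActorPath(target) / "system" / "heartbeatReceiver")

  def receive = {
    case beat: Heartbeat => receiver ! beat
  }
}
```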