* Avoid the hand-over/take-over attempts when starting the
  ClusterSingletonManager in the normal case when the cluster is
  in good shape, i.e. there is no exiting member that might be
  running a previous singleton instance
* Declared sender with parens, i.e. sender(), because it is not
  referentially transparent; we normally reserve parens for
  side-effecting code, but given how thoughtlessly people close over
  it we revised that decision for sender
* Caller can still omit parens
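A sketch of the closure pitfall that motivated the parens; the Translate message and actor names are made up for illustration:

```scala
import akka.actor.{ Actor, ActorRef }
import scala.concurrent.Future

// Hypothetical message type, for illustration only.
final case class Translate(text: String)

class BrokenTranslator extends Actor {
  import context.dispatcher
  def receive = {
    case Translate(text) =>
      // BUG: the closure re-reads sender() when the Future completes, by
      // which time the actor may be processing a message from someone else.
      Future(text.toUpperCase).foreach(result => sender() ! result)
  }
}

class SafeTranslator extends Actor {
  import context.dispatcher
  def receive = {
    case Translate(text) =>
      // SAFE: capture the current value in a val before closing over it.
      val replyTo: ActorRef = sender()
      Future(text.toUpperCase).foreach(result => replyTo ! result)
  }
}
```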
* Getter for CurrentClusterState in Cluster extension, updated via
ClusterReadView
* Remove lazy init of readView. Otherwise cluster.state would be
  empty on first access, which is probably surprising
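A minimal sketch of the getter (system name arbitrary; assumes the node has joined a cluster):

```scala
import akka.actor.ActorSystem
import akka.cluster.Cluster

object ClusterStateDemo extends App {
  val system = ActorSystem("ClusterSystem")

  // The readView behind Cluster(system).state is initialized eagerly, so
  // the returned CurrentClusterState reflects membership from the start
  // instead of being empty on first access.
  val state = Cluster(system).state
  println(s"members=${state.members} unreachable=${state.unreachable}")
}
```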
* Subscribe to several cluster event types at once, to ensure *one*
  CurrentClusterState followed by the change events
* Deprecate publishCurrentClusterState; it was a bad idea, use
  sendCurrentClusterState instead
* Possibility to subscribe with InitialStateAsEvents to receive events corresponding
to CurrentClusterState
* CurrentClusterState not a ClusterDomainEvent, ticket #3614
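A sketch of the subscription styles described above, from inside an actor; handler bodies are elided:

```scala
import akka.actor.Actor
import akka.cluster.Cluster
import akka.cluster.ClusterEvent._

class ClusterListener extends Actor {
  val cluster = Cluster(context.system)

  override def preStart(): Unit = {
    // Subscribing to several event types in one call guarantees a single
    // CurrentClusterState snapshot followed by the change events.
    cluster.subscribe(self, classOf[MemberEvent], classOf[LeaderChanged])

    // Alternative: replay the initial state as events instead of one snapshot:
    // cluster.subscribe(self, InitialStateAsEvents, classOf[MemberEvent])

    // One-off snapshot without subscribing (replaces the deprecated
    // publishCurrentClusterState):
    // cluster.sendCurrentClusterState(self)
  }

  override def postStop(): Unit = cluster.unsubscribe(self)

  def receive = {
    case state: CurrentClusterState => // initial snapshot (not a ClusterDomainEvent)
    case MemberUp(member)           => // change events follow the snapshot
    case LeaderChanged(leader)      =>
    case _: MemberEvent             =>
  }
}
```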
* The problem was:
  - first is leaving, second is the new oldest
  - two actors subscribe to cluster events, OldestChangedBuffer and ClusterSingletonManager
  - ClusterSingletonManager receives MemberExited(first), and then also MemberRemoved(second),
    before OldestChangedBuffer receives MemberExited(first) and delivers
    OldestChanged(first->second)
  - MemberRemoved(second) is the result of the cluster extension shutdown
  - because ClusterSingletonManager gets the MemberRemoved(second) before the OldestChanged,
    it will not send the hand-over data to second
  - second becomes the new singleton after the retry period, as designed, but without
    hand-over data
* The solution is to check the selfExited flag in Oldest state, similar to what is done
in WasOldest
* I considered the alternative of tunneling all member events through the
  same subscriber, but that would involve more changes to the code
* Removed leader commands for Shutdown and Exit
* A member shuts itself down when it sees itself as Exiting
* A singleton (one-node) cluster with status Exiting will shut itself
  down, in case the Exiting gossip never arrives
* Exiting members are not part of the convergence check
* An exiting member is removed by the leader (on convergence) when the
  exiting member is in the unreachable set, i.e. it was successfully
  shut down
* Reverted the change made for #3266, i.e. Exiting is
detected as unreachable again.
* Adjust ClusterSingletonManager to new Exiting behaviour
* Fix bug in HeartbeatSender that caused it to continue sending
  heartbeats to removed nodes instead of rebalancing
* Refactoring of leaderActions method
* Leaving section in docs
* Assign internal upNumber when member is moved to Up
* Public API Member.isOlder
* Change cluster singleton to use oldest member instead of leader
* Update samples and docs
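For illustration, a sketch of picking the oldest member from a snapshot, assuming Member.ageOrdering (oldest first, backed by the internal upNumber assigned when a member is moved to Up):

```scala
import akka.cluster.Member
import akka.cluster.ClusterEvent.CurrentClusterState

object OldestMember {
  // Member.ageOrdering sorts oldest first, based on the internal upNumber;
  // this is the kind of ordering the oldest-based cluster singleton relies on.
  def apply(state: CurrentClusterState): Option[Member] =
    if (state.members.isEmpty) None
    else Some(state.members.min(Member.ageOrdering))
}
```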
* Deprecate all actorFor methods
* resolveActorRef in provider
* Identify auto receive message
* Support ActorPath in actorSelection
* Support remote actor selections
* Additional tests of actor selection
* Update tests (keep most actorFor tests)
* Update samples to use actorSelection
* Updates to documentation
* Migration guide, including motivation
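A short migration sketch replacing actorFor with actorSelection plus Identify/ActorIdentity to resolve an ActorRef; the path and correlation id are made up:

```scala
import akka.actor.{ Actor, ActorIdentity, Identify }

class LookupActor extends Actor {
  // actorSelection replaces the deprecated actorFor and works for local
  // and remote paths alike.
  val selection = context.actorSelection("akka.tcp://Sys@host:2552/user/service")

  // Identify is an auto-received message handled by every actor; the reply
  // is ActorIdentity with Some(ref) if the selection matched a live actor.
  override def preStart(): Unit = selection ! Identify("service-lookup")

  def receive = {
    case ActorIdentity("service-lookup", Some(ref)) =>
      ref ! "hello" // a resolved ActorRef, usable like any other
    case ActorIdentity("service-lookup", None) =>
      context.stop(self) // nothing lives at that path
  }
}
```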
* Config of node roles, akka.cluster.roles
* Cluster router configurable with use-role
* RoleLeaderChanged event
* Cluster singleton per role
* Cluster only starts once all required per-role node counts are
  reached, configured via role.<role-name>.min-nr-of-members
* Update documentation and make use of the roles in the examples
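A configuration sketch combining the new settings, parsed via Typesafe Config; the role name "backend" and the router layout are hypothetical:

```scala
import com.typesafe.config.ConfigFactory

object RoleConfig {
  // Hypothetical role name "backend", for illustration.
  val config = ConfigFactory.parseString("""
    akka.cluster.roles = ["backend"]

    # The leader moves joining members to Up only after at least two
    # nodes with the "backend" role have joined.
    akka.cluster.role.backend.min-nr-of-members = 2

    akka.actor.deployment {
      /workerRouter {
        router = consistent-hashing
        nr-of-instances = 10
        cluster {
          enabled = on
          use-role = backend   # route only to nodes carrying this role
          allow-local-routees = off
        }
      }
    }
  """)
}
```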
* The scenario was that the previous leader left.
* The problem was that the new leader got MemberRemoved
before it got the HandOverDone and therefore missed the
hand over data.
* Solved by not switching the singleton to the new leader when receiving
  MemberRemoved, and instead doing that on normal HandOverDone or,
  in failure cases, after the retry timeout.
* The reason for this bug was the new transition from Down to
  Removed, and that there is now no MemberDowned event. Previously
  this was only triggered by MemberDowned (not MemberRemoved), which
  was safe because that was "always" preceded by unreachable.
* The new solution means that it will take longer for the new singleton
  to start up when the previous leader is unreachable, but I don't
  want to trigger it on MemberUnreachable because it might in the
  future become possible to switch back to reachable.
* Changed TransportAdapterProvider to support Java implementations
* Verified Java impl of AbstractTransportAdapter and
  ActorTransportAdapter
* Privatized things that should not be public API
* Consistent usage of INTERNAL API marker in scaladoc
* Added some missing doc in conf
* Added missing SerialVersionUID
* It was an unlikely situation that was not covered: the new leader
  didn't know the previous leader, because it transitioned from
  Start -> BecomeLeader while the old leader was removed and got
  LeaderChanged(None), so neither of them could ask the other
  for hand-over or take-over.
* Taken care of by the retry timeouts, and also when the leader
  receives LeaderChanged(None)
* The old leader should have received a proper LeaderChanged
  earlier; that it didn't is a flaw in the way we publish leader
  events. That part will be fixed in a separate commit.
* ClusterSingletonManager
* ClusterSingletonManagerSpec multi-node test
* Use in cluster router with single master sample
* Extensive logging to be able to understand what is
going on
* Java API
* Add cluster dependency to contrib
* Add contrib dependency to sample
* Scaladoc
* reStructuredText docs in the contrib area, referenced from the cluster docs
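A usage sketch against the contrib API of this era; the Consumer actor and role name are illustrative, and the exact props signature (here assumed to be a factory receiving optional hand-over data) varied between releases:

```scala
import akka.actor.{ Actor, ActorSystem, PoisonPill, Props }
import akka.contrib.pattern.ClusterSingletonManager

// Illustrative singleton actor; handOverData carries state from the
// previous oldest node, if any.
class Consumer(handOverData: Option[Any]) extends Actor {
  def receive = Actor.emptyBehavior
}

object SingletonDemo extends App {
  val system = ActorSystem("ClusterSystem")

  // One manager per node; only the oldest member with role "worker"
  // actually hosts the singleton child named "consumer".
  system.actorOf(
    ClusterSingletonManager.props(
      singletonProps = handOverData => Props(classOf[Consumer], handOverData),
      singletonName = "consumer",
      terminationMessage = PoisonPill,
      role = Some("worker")),
    name = "singletonManager")
}
```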