* The problem was that the ShardRegion actor kept track of only one shard
id per region actor. Therefore the Terminated message removed only one of
the shards from its registry when there were multiple shards per region.
* Added a failing test and solved the problem by keeping track of all
shards per region (see the sketch after this list)
* Also, rebalance must not be done before any regions have been
registered
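* A minimal sketch of the idea, assuming a registry keyed by region ActorRef;
the ShardRegistry name and its fields are illustrative, not the actual code:

    import akka.actor.ActorRef

    // Each region maps to *all* of its shard ids, so a Terminated(region)
    // can remove every shard hosted by that region in one step.
    final case class ShardRegistry(
        regions: Map[ActorRef, Set[String]] = Map.empty,
        shardHomes: Map[String, ActorRef] = Map.empty) {

      def shardAllocated(region: ActorRef, shard: String): ShardRegistry =
        copy(
          regions = regions.updated(region, regions.getOrElse(region, Set.empty) + shard),
          shardHomes = shardHomes.updated(shard, region))

      def regionTerminated(region: ActorRef): ShardRegistry =
        copy(
          regions = regions - region,
          shardHomes = shardHomes -- regions.getOrElse(region, Set.empty))
    }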
* The race can happen if the MemberRemoved event is received, followed by a Delta update from
a node that has not yet seen the MemberRemoved. That causes the bucket for the removed
node to be added back to the registry (illustrated in the sketch below).
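* Hedged illustration of the fix (illustrative names, not the actual mediator code):
remember removed nodes so that a late Delta cannot resurrect their buckets

    import akka.actor.Address

    final case class BucketRegistry(
        buckets: Map[Address, Long] = Map.empty, // per-owner bucket versions
        removed: Set[Address] = Set.empty) {

      def memberRemoved(node: Address): BucketRegistry =
        copy(buckets = buckets - node, removed = removed + node)

      // A Delta from a node that has not yet seen the MemberRemoved may still
      // carry the removed node's bucket; drop such entries instead of re-adding them.
      def deltaReceived(delta: Map[Address, Long]): BucketRegistry =
        copy(buckets = buckets ++ delta.filter { case (owner, _) => !removed(owner) })
    }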
* The documentation was good, but some parts were "hidden" by splitting
it across two places. I understand the original reason for the separation, but
it might be easier for the user (as reported in the ticket) to have
everything in one place.
* because it is not referentially transparent; normally we reserve parens for
side-effecting code, but given how people thoughtlessly close over it we revised
that decision for sender (see the sketch below)
* caller can still omit parens
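* Sketch of the pitfall that motivated the parens (the Echo actor itself is just an example):

    import akka.actor.Actor
    import scala.concurrent.Future

    class Echo extends Actor {
      import context.dispatcher

      def receive = {
        case msg =>
          val replyTo = sender() // the parens signal: evaluate now, this is not a stable value
          Future {
            replyTo ! msg        // safe: replies to the captured sender
            // sender() ! msg    // unsafe: by the time this runs, sender() may point elsewhere
          }
      }
    }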
- removed retry-window and related settings
- removed gate-invalid-addresses-for
- gate is now mandatory
- remoting has a dedicated dispatcher by default
- updated tests to work with changed timings
- added doc section for association lifecycle
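- hedged configuration sketch for the dedicated dispatcher; the key and dispatcher
name are assumptions based on the remoting reference configuration and should be
checked against the shipped reference.conf

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    object RemoteDispatcherExample extends App {
      // Remoting runs on its own dispatcher by default; overriding the
      // (assumed) key below would move it onto another dispatcher.
      val config = ConfigFactory.parseString(
        """
        akka.remote.use-dispatcher = "akka.remote.default-remote-dispatcher"
        """).withFallback(ConfigFactory.load())

      val system = ActorSystem("RemoteSystem", config)
    }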
* Getter for CurrentClusterState in Cluster extension, updated via
ClusterReadView
* Remove lazy init of readView. Otherwise cluster.state would be
empty on first access, which is probably surprising
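* Usage sketch of the getter (standard Cluster extension API; assumes a system
configured with the cluster actor-ref provider):

    import akka.actor.ActorSystem
    import akka.cluster.Cluster

    object ClusterStateExample {
      def printState(system: ActorSystem): Unit = {
        val cluster = Cluster(system)
        // Backed by the eagerly initialized ClusterReadView, so it is
        // populated already on first access.
        val state = cluster.state
        println(s"members: ${state.members}, unreachable: ${state.unreachable}")
      }
    }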
* Subscribe to several cluster event types at once, to ensure *one*
CurrentClusterState followed by change events
* Deprecate publishCurrentClusterState; it was a bad idea, use sendCurrentClusterState
instead
* Possibility to subscribe with InitialStateAsEvents to receive events corresponding
to CurrentClusterState (see the sketch below)
* CurrentClusterState not a ClusterDomainEvent, ticket #3614
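* Subscriber sketch using the public cluster API (the listener actor is illustrative):

    import akka.actor.Actor
    import akka.cluster.Cluster
    import akka.cluster.ClusterEvent._

    class ClusterListener extends Actor {
      val cluster = Cluster(context.system)

      override def preStart(): Unit =
        // Subscribing to several event types at once yields *one*
        // CurrentClusterState snapshot followed by the change events.
        // Alternatively: cluster.subscribe(self, initialStateMode = InitialStateAsEvents,
        //   classOf[MemberEvent], classOf[UnreachableMember])
        cluster.subscribe(self, classOf[MemberEvent], classOf[UnreachableMember])

      override def postStop(): Unit = cluster.unsubscribe(self)

      def receive = {
        case state: CurrentClusterState => // snapshot delivered first
        case MemberUp(member)           => // followed by change events
        case UnreachableMember(member)  =>
        case _: MemberEvent             =>
      }
    }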
* Replace (deprecate) akka.cluster.auto-down config setting with
akka.cluster.auto-down-unreachable-after
* AutoDown actor that keeps track of unreachable members
and downs them from the leader node when they have been
unreachable for the specified duration
* Migration guide
* This can't go into 2.2.x since ScalaTest 1.9.2-SNAP2 has source-incompatible changes and the dependency in akka-multi-node-testkit would force people to upgrade.
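* For the auto-down change above, a hedged configuration sketch (the duration value
is arbitrary; `off` keeps auto-downing disabled):

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    object AutoDownExample extends App {
      // Replaces the deprecated akka.cluster.auto-down = on
      val config = ConfigFactory.parseString(
        "akka.cluster.auto-down-unreachable-after = 10s"
      ).withFallback(ConfigFactory.load())

      val system = ActorSystem("ClusterSystem", config)
    }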
- add “mailbox-requirement” key to dispatcher section
- split out mailbox section, add akka.actor.default-mailbox
- rewrite the findMarker method and use it for Props.create() and for getting
the required mailbox of an actor
- add ProducesMessageQueue trait for MailboxType so that requirements
can be checked before trying to create the actor for real
- verify actor as well as dispatcher requirements for message queue
before creation, even in remote-deployed case
- change MessageDispatcher constructor to take a Configurator, add that
to migration guide
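- sketch of the actor-side counterpart of these checks, using the standard marker
traits (the Logger actor is illustrative; the dispatcher-level "mailbox-requirement"
key is verified analogously)

    import akka.actor.{ Actor, ActorSystem, Props }
    import akka.dispatch.{ RequiresMessageQueue, UnboundedMessageQueueSemantics }

    // The marker trait declares which message-queue semantics the actor's
    // mailbox must provide; the requirement is verified before the actor is
    // created, also in the remote-deployed case.
    class Logger extends Actor with RequiresMessageQueue[UnboundedMessageQueueSemantics] {
      def receive = { case msg => println(msg) }
    }

    object MailboxRequirementExample extends App {
      val system = ActorSystem("example")
      // The default mailbox (akka.actor.default-mailbox) satisfies the requirement.
      system.actorOf(Props[Logger], "logger") ! "hello"
    }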
* The problem was:
- first is leaving, second is the new oldest
- two actors subscribe to cluster events, OldestChangedBuffer and ClusterSingletonManager
- ClusterSingletonManager receives MemberExited(first), and then also MemberRemoved(second)
before OldestChangedBuffer receives MemberExited(first) and delivers
OldestChanged(first->second)
- MemberRemoved(second) is the result of the cluster extension shutdown
- because ClusterSingletonManager gets the MemberRemoved(second) before the OldestChanged,
it will not send the hand-over data to second
- second becomes the new singleton after the retry period, as designed, but without the hand-over data
* The solution is to check the selfExited flag in the Oldest state, similar to what is done
in WasOldest (sketched below)
* I considered the alternative of tunneling all member events through the same subscriber,
but that would involve more changes to the code
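* Heavily simplified sketch of the idea behind the fix, not the actual
ClusterSingletonManager FSM code (names are illustrative):

    object OldestStateSketch {
      // Relevant slice of the Oldest state's data in this scenario
      final case class OldestData(selfExited: Boolean)

      // When MemberRemoved for another member arrives while in Oldest:
      // if this node has already exited itself, the removal is a by-product of
      // the local cluster extension shutting down, so the pending OldestChanged
      // should still be awaited and the hand-over performed.
      def awaitHandOver(data: OldestData): Boolean = data.selfExited
    }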