* Assign internal upNumber when member is moved to Up
* Public API Member.isOlder
* Change cluster singleton to use oldest member instead of leader
* Update samples and docs
* Message wrappers in ClusterClient, so that DistributedPubSubMediator
  does not leak into the client API
* Register methods in ClusterReceptionistExtension, for convenience and
clarity
* Deprecate all actorFor methods
* resolveActorRef in provider
* Identify auto receive message
* Support ActorPath in actorSelection
* Support remote actor selections
* Additional tests of actor selection
* Update tests (keep most actorFor tests)
* Update samples to use actorSelection
* Updates to documentation
* Migration guide, including motivation
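The move from actorFor to actorSelection can be sketched as follows; resolving a selection to a concrete ActorRef goes through the Identify auto-receive message. The remote path and correlation id below are illustrative, not taken from the codebase:

```scala
import akka.actor._

class LookupActor extends Actor {
  // actorSelection replaces the deprecated actorFor and also supports
  // remote paths (and wildcards in path elements)
  val selection: ActorSelection =
    context.actorSelection("akka.tcp://OtherSys@host:2552/user/service")

  // Identify is auto-received by the target, which replies with
  // ActorIdentity carrying the resolved ActorRef (or None)
  override def preStart(): Unit =
    selection ! Identify("service-lookup")

  def receive = {
    case ActorIdentity("service-lookup", Some(ref)) =>
      context.watch(ref) // the ref carries the uid, so watch is precise
    case ActorIdentity("service-lookup", None) =>
      context.stop(self) // nothing lives at that path
  }
}
```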
* Sending to a previous incarnation of an actor shall fail,
  to make remote actors work the same way as local ones (in
  the sense that after Terminated() the ref no longer works)
* Changed equality of ActorRef to take the uid into account
* Parse uid fragment in RelativeActorPath and ActorPathExtractor
* Handle uid in getChild and in RemoteSystemDaemon
* Use toSerializationFormat and toSerializationFormatWithAddress
in serialization
* Replaced var uid in ActorCell and ChildRestartStats with
constructor parameters (path)
* Create the uid in one single place, in makeChild in parent
* Handle ActorRef with and without uid in DeathWatch
* Optimize ActorPath.toString and friends
* Update documentation and migration guide
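The uid-fragment parsing mentioned above can be illustrated with a small standalone sketch (this is not the actual RelativeActorPath code; the '#' separator and 0 as the undefined-uid value follow Akka's serialization format):

```scala
// Standalone sketch: split a serialized path element such as
// "worker#12345" into (name, uid); 0 plays the role of "no uid given".
def splitUid(element: String): (String, Int) =
  element.lastIndexOf('#') match {
    case -1 => (element, 0)
    case i  => (element.take(i), element.drop(i + 1).toInt)
  }
```

With this, "worker#12345" splits into ("worker", 12345) and a plain "worker" keeps uid 0, matching the idea that ActorRef equality now takes the uid into account, so a new incarnation at the same path compares unequal to the old one.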
* Config of node roles (akka.cluster.roles)
* Cluster router configurable with use-role
* RoleLeaderChanged event
* Cluster singleton per role
* Cluster only starts once all required per-role node
  counts are reached, configured with
  akka.cluster.role.<role-name>.min-nr-of-members
* Update documentation and make use of the roles in the examples
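A sketch of the corresponding configuration; the role name, member count, and router deployment path are illustrative:

```hocon
akka.cluster {
  # roles of this node
  roles = ["backend"]
  # leader moves members to Up for role "backend" only after
  # at least 2 backend nodes have joined
  role {
    backend.min-nr-of-members = 2
  }
}

akka.actor.deployment {
  /workerRouter {
    router = "round-robin"
    nr-of-instances = 10
    cluster {
      enabled = on
      # restrict routees to nodes carrying this role
      use-role = "backend"
    }
  }
}
```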
* The scenario was that the previous leader left.
* The problem was that the new leader got MemberRemoved
  before it got HandOverDone and therefore missed the
  hand-over data.
* Solved by not switching the singleton to the new leader when
  receiving MemberRemoved; instead that is done on a normal
  HandOverDone or, in failure cases, after the retry timeout.
* The reason for this bug was the new transition from Down to
  Removed and that there is now no MemberDowned event. Previously
  this was only triggered by MemberDowned (not MemberRemoved), and
  that was safe because it was "always" preceded by unreachable.
* The new solution means that it will take longer for the new
  singleton to start up when the previous leader is unreachable,
  but I don't want to trigger it on MemberUnreachable because it
  might in the future be possible to switch back to reachable.
* Changed TransportAdapterProvider to support Java implementations
* Verified Java implementations of AbstractTransportAdapter and
  ActorTransportAdapter
* Made things private that should not be public API
* Consistent usage of the INTERNAL API marker in scaladoc
* Added some missing documentation in the config
* Added missing SerialVersionUID
* Otherwise some changes might never be published, since there does
  not have to be convergence on all nodes in between all transitions.
* Detected by a failure in ClusterSingletonManagerSpec.
* Added a test to simulate the failure scenario.
* It was an unlikely situation that was not covered:
  the new leader didn't know the previous one, because it transitioned
  from Start -> BecomeLeader; the old leader was removed and got
  LeaderChanged(None), so neither of them could ask the other
  for hand-over or take-over.
* Taken care of with the retry timeouts, also when the leader
  receives LeaderChanged(None)
* The old leader should have received a proper LeaderChanged
  earlier, which is a flaw in the way we publish leader events.
  That part will be fixed in a separate commit.
* Rename config akka.event-handlers to akka.loggers
* Rename config akka.event-handler-startup-timeout to
akka.logger-startup-timeout
* Rename JulEventHandler to JavaLogger
* Rename Slf4jEventHandler to Slf4jLogger
* Change all places in tests and docs
* Deprecation, old still works, but with warnings
* Migration guide
* Test for the deprecated event-handler config
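In configuration terms the rename looks as follows (the logger class shown assumes the SLF4J module is on the classpath):

```hocon
# old names, deprecated but still working with a warning:
#   akka.event-handlers, akka.event-handler-startup-timeout
akka.loggers = ["akka.event.slf4j.Slf4jLogger"]
akka.logger-startup-timeout = 5s
```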
* ClusterSingletonManager
* ClusterSingletonManagerSpec multi-node test
* Use in cluster router with single master sample
* Extensive logging to make it possible to follow what is
  going on
* Java API
* Add cluster dependency to contrib
* Add contrib dependency to sample
* Scaladoc
* rst docs in contrib area, ref from cluster docs
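Wiring it up follows the one-manager-per-node pattern. This is a hedged sketch, not the sample from the repo: the exact ClusterSingletonManager.props signature has varied between versions, and the Master actor and names are illustrative:

```scala
import akka.actor._
import akka.contrib.pattern.ClusterSingletonManager

// Illustrative singleton actor
class Master extends Actor {
  def receive = { case msg => sender ! msg }
}

object SingletonApp extends App {
  val system = ActorSystem("ClusterSystem")
  // Start one manager per node; the managers keep the actual singleton
  // running on the oldest member (of the given role, if any), handing
  // it over when that member leaves
  system.actorOf(
    ClusterSingletonManager.props(
      singletonProps = handOverData => Props[Master],
      singletonName = "master",
      terminationMessage = PoisonPill,
      role = None),
    name = "singletonManager")
}
```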
* The within margin was too small. On my machine the transition to Idle is done
  when 400 ms remain, which is too timing sensitive.
* Improved the "resend across a slow link" test to actually trigger resending,
which it didn't do before.
[RK: assembled commit log from original branch, squashed]
akka-contrib:
- an event handler which properly sets the ThreadID on log records
- a LoggingAdapter which does synchronous logging to j.u.l.Logger
- a small logging mixin for the aforementioned adapter