* used the wrong protocol by mistake and got weird errors,
and it was not obvious that the wrong protocol was the cause,
e.g. it created an association to itself
* and it also set the cachedAssociation
* First stab at separate large message channel for Artery (see the config sketch below)
* Full actor paths, no implicit "/user/" part
* Various small fixes after review
* Fixes to make it work after rebasing
* Use a separate EnvelopeBufferPool for the large message stream
* Docs for actorSelection not sending through large message stream
* UID exchange with handshake stages
* second iteration of reply side-channel, observable
* InboundContext and OutboundContext to facilitate testing
without real transport
* collapse ArterySubsystem and Transport into ArteryTransport
* incomplete HandshakeRestartReceiverSpec (the origin address is missing,
which is needed to implement that part)
* remove embedded aeron media driver directory on shutdown
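A minimal sketch of how the large message channel could be enabled, assuming the setting ends up under `akka.remote.artery.large-message-destinations` (the key name is an assumption at this stage); the entries are full actor paths with no implicit "/user/" part:

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object LargeMessageExample extends App {
  // Assumed key name for the large message channel; entries are full actor paths,
  // with no implicit "/user/" prefix.
  val config = ConfigFactory.parseString("""
    akka.remote.artery {
      enabled = on
      large-message-destinations = [
        "/user/large-uploads"   // hypothetical destination actor
      ]
    }
  """).withFallback(ConfigFactory.load())

  val system = ActorSystem("example", config)
}
```

As noted above, messages sent through an actorSelection would not go through the large message stream; only direct sends to the configured destinations would.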
* The pool of the SimpleDnsManager is configured
in the deployment section "/IO-DNS/inet-address" (see the config sketch below)
* We don't really support deployment configuration of system actors
but here it's used and I don't think we can change that.
* It didn't work when using RemoteActorRefProvider/ClusterActorRefProvider,
so I fixed it to make the behavior consistent with the
LocalActorRefProvider (verified by tests)
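A sketch of what that deployment section could look like, using the standard router deployment keys (the concrete values are illustrative, not necessarily the shipped defaults):

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object DnsDeploymentSketch extends App {
  // Deployment configuration for the SimpleDnsManager resolver pool;
  // the values below are illustrative.
  val config = ConfigFactory.parseString("""
    akka.actor.deployment {
      "/IO-DNS/inet-address" {
        mailbox = "unbounded"
        router = "consistent-hashing-pool"
        nr-of-instances = 4
      }
    }
  """).withFallback(ConfigFactory.load())

  val system = ActorSystem("dns-example", config)
}
```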
Remoting's shutdown uses the `ask` pattern, so it can produce a `Status.Failure`, which was not handled in RARP's `WaitTransportShutdown` state.
To fix this, added handling of `Status.Failure` and changed `RemoteTransport`'s shutdown signature to return `akka.Done`, which is more consistent with other shutdowns and more explicit than the previously used `Unit`.
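A simplified sketch of the two changes, with illustrative handling in the shutdown-waiting state (the real state machine has more context than shown here):

```scala
import akka.Done
import akka.actor.{Actor, Status}
import scala.concurrent.Future

// Simplified shape: shutdown now completes with akka.Done instead of Unit
trait RemoteTransportLike {
  def shutdown(): Future[Done]
}

// Sketch of the state that waits for transport shutdown; the ask used for
// shutdown can answer with Status.Failure, which previously went unhandled.
class WaitTransportShutdownSketch extends Actor {
  def receive = {
    case Done =>
      context.stop(self)   // transport stopped cleanly
    case Status.Failure(_) =>
      // previously unmatched; illustrative handling: give up and stop
      context.stop(self)
  }
}
```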
* until we have replaced all internal usages of it,
or until we decide that it is worth keeping as an
internal facility, in which case we can remove the
deprecation annotations
This improves the remote watching mechanism as follows: Watch requests
are intercepted by the RemoteWatcher and not sent on the wire,
except for watches from the RemoteWatcher itself.
The RemoteWatcher is then in charge of forwarding DeathWatchNotification
messages to the watchers.
This reduces the number of watch messages to one per watchee, even if
there are several watchers on the same watchee (instead of n+1 before).
Reversed watch messages, and watches on refs with undefinedUid, are excluded from
interception by the RemoteWatcher and so are handled as before this commit.
In addition, the following changes are made:
- Keep watchers in a map watchee -> watchers for more efficient retrieval
(in a Scala MultiMap; see the sketch after this list)
- Keep watchees in a map address -> watchees for more efficient retrieval
(in a Scala MultiMap)
- Use of InternalActorRef more thoroughly to avoid casts
- Rewatch uses a standard watch message, as the distinction is no longer needed
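A minimal sketch of the watchee -> watchers bookkeeping described in the list above, using a mutable Scala MultiMap; the object and method names are made up for illustration:

```scala
import akka.actor.ActorRef
import scala.collection.mutable

// Illustrative bookkeeping: watchee -> watchers, so that a DeathWatchNotification
// for a watchee can be forwarded to every registered watcher.
object WatchBookkeeping {
  val watching = new mutable.HashMap[ActorRef, mutable.Set[ActorRef]]
    with mutable.MultiMap[ActorRef, ActorRef]

  def addWatch(watchee: ActorRef, watcher: ActorRef): Unit =
    watching.addBinding(watchee, watcher)

  def removeWatch(watchee: ActorRef, watcher: ActorRef): Unit =
    watching.removeBinding(watchee, watcher)

  // all watchers to notify when this watchee terminates
  def watchersOf(watchee: ActorRef): Set[ActorRef] =
    watching.getOrElse(watchee, mutable.Set.empty[ActorRef]).toSet
}
```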
* The problem scenario was a remote watch followed by a re-watch triggered by
the first heartbeat, with an unwatch coming in before the extra re-watch message. That
caused the RemoteWatcher to still watch the subject even though it was intended to
be unwatched.
* I could reproduce it with sleeps at strategic points
* Solved by a separate re-watch message and a check that we are still watching
* Separate routing logic, to be usable stand alone, e.g. in actors
* Simplify RouterConfig, only a factory
* Move reading of config from Deployer to the RouterConfig
* Distinction between Pool and Group router types (see the sketch after this list)
* Remove usage of actorFor, use ActorSelection
* Management messages to add and remove routees
* Simplify the internals of RoutedActorCell & co
* Move resize specific code to separate RoutedActorCell subclass
* Change resizer api to only return capacity change
* Resizer only allowed together with Pool
* Re-implement all routers, and keep old api during deprecation phase
* Replace ClusterRouterConfig, deprecation
* Rewrite documentation
* Migration guide
* Also includes related ticket:
+act #3087 Create nicer Props factories for RouterConfig
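A small sketch of the Pool vs Group distinction with the new API, assuming round-robin variants named RoundRobinPool and RoundRobinGroup and a hypothetical Worker routee:

```scala
import akka.actor.{Actor, ActorSystem, Props}
import akka.routing.{RoundRobinGroup, RoundRobinPool}

class Worker extends Actor {                       // hypothetical routee
  def receive = { case msg => sender() ! msg }
}

object RouterKinds extends App {
  val system = ActorSystem("routers")

  // Pool: the router creates and supervises its own routees;
  // a Resizer is only allowed for this kind
  val pool = system.actorOf(RoundRobinPool(5).props(Props[Worker]), "workerPool")

  // Group: the router sends to existing actors, looked up by path (ActorSelection)
  system.actorOf(Props[Worker], "w1")
  system.actorOf(Props[Worker], "w2")
  val group = system.actorOf(RoundRobinGroup(List("/user/w1", "/user/w2")).props(), "workerGroup")
}
```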
- add “mailbox-requirement” key to dispatcher section
- split out mailbox section, add akka.actor.default-mailbox
- rewrite findMarker method and use it for Props.create() and getting
the required mailbox of an actor
- add ProducesMessageQueue trait for MailboxType so that requirements
  can be checked before trying to create the actor for real (see the sketch after this list)
- verify actor as well as dispatcher requirements for message queue
before creation, even in remote-deployed case
- change MessageDispatcher constructor to take a Configurator, add that
to migration guide
* The problem was a race where a HeartbeatReq was sent out and
the watchee terminated immediately. That left the RemoteWatcher
peers watching each other without any other watch registered.
It is racy.
* Instead of one-way heartbeats from the side being watched, I
changed to a ping-pong style. That makes the problem go away
and simplifies a lot of things in RemoteWatcher.
* RemoteWatcher that monitors node failures, with heartbeats
and a failure detector (see the config sketch below)
* Move RemoteDeploymentWatcher from CARP to RARP
* ClusterRemoteWatcher that handles cluster nodes
* Update documentation
* UID in the Heartbeat msg to be able to quarantine;
the actual quarantining will be implemented
in ticket 2594
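A sketch of tuning the failure detection used by RemoteWatcher, assuming the settings live under `akka.remote.watch-failure-detector` (section and key names are assumptions; values illustrative):

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object WatchFailureDetectorSketch extends App {
  // Assumed settings section for the heartbeat-based failure detection
  // performed by RemoteWatcher / ClusterRemoteWatcher.
  val config = ConfigFactory.parseString("""
    akka.remote.watch-failure-detector {
      heartbeat-interval = 1 s
      threshold = 10.0
      acceptable-heartbeat-pause = 10 s
    }
  """).withFallback(ConfigFactory.load())

  val system = ActorSystem("remote-watch", config)
}
```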
* Deprecate all actorFor methods
* resolveActorRef in provider
* Identify auto receive message (see the sketch below)
* Support ActorPath in actorSelection
* Support remote actor selections
* Additional tests of actor selection
* Update tests (keep most actorFor tests)
* Update samples to use actorSelection
* Updates to documentation
* Migration guide, including motivation
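A sketch of the actorFor replacement pattern: look up with actorSelection and resolve to a concrete ActorRef via the Identify auto receive message. The path and correlation id are made up:

```scala
import akka.actor.{Actor, ActorIdentity, ActorRef, Identify}

// Resolving an ActorSelection to a concrete ActorRef, the replacement pattern
// for actorFor. The path and correlation id below are made up.
class ServiceClient extends Actor {
  // a full remote path such as "akka.tcp://Sys@host:2552/user/service" works too
  context.actorSelection("/user/service") ! Identify("service")

  def receive = {
    case ActorIdentity("service", Some(ref: ActorRef)) =>
      context.watch(ref)      // we now have a real ActorRef that can be watched or cached
    case ActorIdentity("service", None) =>
      context.stop(self)      // nothing is running at that path
  }
}
```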