- the initial setting of the repeated task raced with the first execution;
when the latter won, the task would not repeat (see the sketch after this list)
- there was a race in task submission which could lead to enqueueing one
round too late
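A rough sketch of the ordering involved, using a hypothetical RepeatedTask rather than the actual LARS internals: the repeat state has to be published before the task can execute for the first time, otherwise that execution sees no interval and never re-enqueues itself.

    import java.util.concurrent.atomic.AtomicReference
    import scala.concurrent.duration._

    // Hypothetical holder for a recurring task, for illustration only.
    final class RepeatedTask(body: () => Unit) {
      private val interval = new AtomicReference[Option[FiniteDuration]](None)

      def startRepeating(every: FiniteDuration, enqueue: RepeatedTask => Unit): Unit = {
        interval.set(Some(every)) // publish the repeat interval first ...
        enqueue(this)             // ... then submit the task for its first run
      }

      def run(enqueue: RepeatedTask => Unit): Unit = {
        body()
        // repeat only if the interval was visible; enqueueing before the
        // interval is set is exactly the race described above
        if (interval.get().isDefined) enqueue(this)
      }
    }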
* The previous work-around was introduced because Netty blocks when sending
to broken connections. This is supposed to be solved by the new non-blocking
remoting.
* Removed HeartbeatSender and CoreSender in cluster
* Added tests to verify that broken connections don't disturb live connections
* The problem is that we do remote deployment to a node that isn't alive, and with ordinary
remoting that is not detected at all, as we know. With cluster this was taken care of by
a later AddressTerminated and the ChildTerminated generated by RemoteDeploymentWatcher. With
the new RemoteDeadLetters the additional watch triggers an immediate Terminated, which is
captured by RemoteDeploymentWatcher but not acted upon since it's not an addressTerminated.
RemoteDeploymentWatcher then unwatches and will therefore not act on a later AddressTerminated.
* The long term solution is to have reliable system messages and remote supervision without
explicit watch, so that we know when the remote deployment fails.
* The short term solution is to let RemoteDeploymentWatcher always generate ChildTerminated,
also for the non-addressTerminated case (see the sketch after this list).
* It's possibly racy since ChildTerminated is not idempotent.
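A minimal sketch of the short-term behaviour, not the actual akka.remote.RemoteDeploymentWatcher (the message types here are hypothetical): the watcher remembers the supervisor of each remotely deployed child and turns every Terminated into a ChildTerminated, regardless of whether it was address-terminated.

    import akka.actor.{ Actor, ActorRef, Terminated }

    // Hypothetical protocol messages for the sketch.
    case class WatchRemote(child: ActorRef, supervisor: ActorRef)
    case class ChildTerminated(child: ActorRef)

    class DeploymentWatcherSketch extends Actor {
      private var supervisors = Map.empty[ActorRef, ActorRef]

      def receive = {
        case WatchRemote(child, supervisor) =>
          supervisors += child -> supervisor
          context.watch(child)

        case t: Terminated =>
          // short-term fix: emit ChildTerminated for every Terminated, not
          // only for the address-terminated case, so the supervisor always
          // learns that its remotely deployed child is gone
          supervisors.get(t.actor).foreach(_ ! ChildTerminated(t.actor))
          supervisors -= t.actor
      }
    }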
LARS may occasionally execute recurring tasks a little too early, which
is a direct consequence of it trying to keep them running more regularly
and also allowing them to run at a 1/tick rate.
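For illustration only (the numbers are assumptions; the real tick length is the akka.scheduler.tick-duration setting): with a 100 ms tick and a 250 ms interval, runs are aligned to tick boundaries, so an individual run can start a little earlier than the nominal spacing suggests.

    import com.typesafe.config.ConfigFactory
    import scala.concurrent.duration._
    import akka.actor.ActorSystem

    object TickQuantization extends App {
      // 100 ms tick assumed for the example
      val system = ActorSystem("lars",
        ConfigFactory.parseString("akka.scheduler.tick-duration = 100ms"))
      import system.dispatcher

      // a 250 ms interval is not a multiple of the tick, so some runs may
      // come slightly early relative to the nominal spacing
      system.scheduler.schedule(0.millis, 250.millis) {
        println(s"run at ${System.nanoTime() / 1000000} ms")
      }
    }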
- New DeadLetter class for handling remoting-specific envelopes
- Fixed error handling of name lookups
- Name lookup is now handled via futures (future refactor opportunity)
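The change is in the remoting internals; as an illustration of the future-based style, here is a sketch using the public actorSelection/resolveOne API, with error handling composed on the returned Future (the wrapper object and its naming are made up).

    import scala.concurrent.Future
    import scala.concurrent.duration._
    import akka.actor.{ ActorNotFound, ActorRef, ActorSystem }
    import akka.util.Timeout

    object NameLookupSketch {
      def lookup(system: ActorSystem, path: String): Future[ActorRef] = {
        implicit val timeout: Timeout = Timeout(3.seconds)
        import system.dispatcher
        // the lookup itself yields a Future; failures are handled by
        // transforming that Future instead of throwing in the caller
        system.actorSelection(path).resolveOne().recoverWith {
          case _: ActorNotFound =>
            Future.failed(new IllegalArgumentException(s"no actor at $path"))
        }
      }
    }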
* The problem was that the breaker is asynchronous, so it may
take some extra time to open.
* Was able to reproduce with a sleep in onComplete in
CircuitBreaker L303
* Added an extra awaitCond in case the normal (quick)
failing calls don't open the breaker
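A sketch of the test-side workaround (the test class and thresholds are made up): since the breaker opens asynchronously, the test polls with awaitCond until the onOpen callback has fired instead of asserting right after the failing call.

    import java.util.concurrent.atomic.AtomicBoolean
    import scala.concurrent.duration._
    import akka.actor.ActorSystem
    import akka.pattern.CircuitBreaker
    import akka.testkit.TestKit

    class BreakerOpensSpec extends TestKit(ActorSystem("breaker-spec")) {
      import system.dispatcher

      private val opened = new AtomicBoolean(false)
      private val breaker =
        new CircuitBreaker(system.scheduler, maxFailures = 1,
          callTimeout = 100.millis, resetTimeout = 1.second)
          .onOpen(opened.set(true))

      // one failing call is enough to trip the breaker ...
      try breaker.withSyncCircuitBreaker(throw new RuntimeException("boom"))
      catch { case _: RuntimeException => () }

      // ... but the transition to open may lag, hence the extra awaitCond
      awaitCond(opened.get, max = 3.seconds, interval = 50.millis)
    }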
NB1: the EventHandler needs to be specified in the configuration
NB2: some log messages appear in the OSGi prompt when starting and stopping the Bundle
messages are stored in a Buffer
become is used in DefaultOSGiLogger (sketched below)
cleanups
thanks to patriknw
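A sketch of the buffer-then-become pattern, not the actual DefaultOSGiLogger (the ServiceAvailable message stands in for the OSGi LogService registration): events are buffered while no backend is available and flushed once the actor switches behaviour.

    import akka.actor.Actor
    import akka.event.Logging.LogEvent

    // hypothetical stand-in for "the log service is now available"
    case class ServiceAvailable(publish: LogEvent => Unit)

    class BufferingLoggerSketch extends Actor {
      def receive = buffering(Vector.empty)

      private def buffering(buffer: Vector[LogEvent]): Receive = {
        case e: LogEvent =>
          context.become(buffering(buffer :+ e))   // keep the event for later
        case ServiceAvailable(publish) =>
          buffer.foreach(publish)                  // flush everything gathered so far
          context.become(forwarding(publish))      // and forward directly from now on
      }

      private def forwarding(publish: LogEvent => Unit): Receive = {
        case e: LogEvent => publish(e)
      }
    }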
Conflicts:
project/AkkaBuild.scala
* Make SystemMessage extend Serializable to avoid ambiguity when
setting serialization-bindings.
* Set serialVersionUID in SystemMessages and create tests to
ensure binary formats remain unchanged.
* Add tests for reference.conf's serialization settings.
* Make some existing serialization tests more robust.
Removed boilerplate from serialization tests
Use actual reference.conf; tidy up
Make serialization compatible
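For illustration, a sketch of the configuration and class-level pieces involved (the Handshake class and the concrete bindings are made up): serialization-bindings maps message classes to serializers, which is where an ambiguous binding can show up, and a fixed serialVersionUID pins the Java serialization format that the binary-format tests check.

    import com.typesafe.config.ConfigFactory

    // pinning serialVersionUID keeps the serialized form stable across refactorings
    @SerialVersionUID(1L)
    final case class Handshake(name: String)

    object SerializationBindingsSketch extends App {
      // binding the most specific type avoids the ambiguity that arises when
      // a message matches several bindings, e.g. via java.io.Serializable
      val config = ConfigFactory.parseString("""
        akka.actor {
          serializers {
            java = "akka.serialization.JavaSerializer"
          }
          serialization-bindings {
            "Handshake" = java
          }
        }
      """)
      println(config.getString("akka.actor.serialization-bindings.Handshake"))
    }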