* track nodes by UniqueAddress in Cluster Singleton, #20942
* reply with HandOverDone from new incarnation, #20942
* confirm the old incarnation as terminated immediately when the new incarnation joins, #20942, instead of waiting for the failure detector to mark it as unreachable; this speeds up removal when restarting a cluster node with the same hostname:port
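A simplified illustration of why tracking by UniqueAddress helps (hypothetical case classes for this sketch, not the actual Cluster Singleton code): a restarted node keeps the same hostname:port but gets a new uid, so comparing full unique addresses reveals the new incarnation immediately, without waiting for the failure detector.

```scala
// Hypothetical, simplified types for illustration only.
final case class Address(host: String, port: Int)
final case class UniqueAddress(address: Address, uid: Long)

// Same hostname:port but a different uid means the node was restarted,
// so the old incarnation can be confirmed as terminated right away.
def isNewIncarnationOf(previous: UniqueAddress, joined: UniqueAddress): Boolean =
  previous.address == joined.address && previous.uid != joined.uid
```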
* minor fixes
* remove now superfluous buffer from MultipartUnmarshaller
* remove unused TokenSourceActor
* remove FIXME: add tests, see #16437
* remove unused param remoteAddress (comment: TODO: remove after #16168 is cleared)
* convert FIXME to TODO (#18709)
* re-enable tests in {Request|Response}RendererSpec, since #15981 is fixed
* remove logging workaround in StreamTestDefaultMailbox, since #15947 is fixed
* The problem: an ACK that was targeted at an old incarnation
was sent to the new, restarted system with the same host:port,
resulting in errors such as
"Error encountered while processing system message acknowledgement buffer: [-1 {}] ack: ACK[0, {}]"
when restarting the actor system
* The reason:
1. The endpoint reader was about to send OutgoingAck to its parent, the
endpoint writer, targeted at the old system.
2. At the same time an incoming connection from the new system
triggered TakeOver in the endpoint writer, i.e. replacing the handle
with the connection to the new system.
3. The OutgoingAck was received by the writer, which happily sent it
over the new handle, to the new system.
* The solution: Ignore OutgoingAck during the handoff (TakeOver) process.
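A minimal sketch of the fix (hypothetical names such as Handle, TakeOver and HandoffDone; not the actual remoting internals): while the handoff is in progress, OutgoingAck is dropped instead of being written to the new system's handle.

```scala
import akka.actor.Actor

final case class Ack(cumulativeAck: Long)
final case class OutgoingAck(ack: Ack)
final case class TakeOver(newHandle: Handle)
case object HandoffDone

trait Handle { def write(ack: Ack): Unit }

class WriterSketch(initial: Handle) extends Actor {
  def receive: Receive = writing(initial)

  def writing(handle: Handle): Receive = {
    case OutgoingAck(ack)    => handle.write(ack) // normal path: ack goes out on the current handle
    case TakeOver(newHandle) => context.become(handingOver(newHandle))
  }

  def handingOver(newHandle: Handle): Receive = {
    case OutgoingAck(_) => // dropped: the ack was targeted at the old incarnation
    case HandoffDone    => context.become(writing(newHandle))
  }
}
```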
* Automatic downing of the old node incarnation when a new one tries to rejoin the cluster is performed even if the old incarnation was left in Leaving or Exiting state.
* Added information to the clustering docs about automatic downing of old incarnations when a new one tries to rejoin the cluster.
GZIPInputStream uses Inflater internally (and thus native zlib). Inflater frees its memory only on an explicit call to end() or during finalization (finalize() only calls end()), so GZIPInputStream should always be closed explicitly.
Because native libraries are involved, a non-Scala-ish try-finally is used to avoid an off-heap memory leak for GZIPInputStream and GZIPOutputStream in case of exceptions.
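A minimal sketch of the pattern (a hypothetical helper, assuming nothing beyond the JDK): the stream is closed in a finally block so the Inflater's native memory is released even if reading throws.

```scala
import java.io.{ ByteArrayInputStream, ByteArrayOutputStream }
import java.util.zip.GZIPInputStream

def gunzip(bytes: Array[Byte]): Array[Byte] = {
  val in = new GZIPInputStream(new ByteArrayInputStream(bytes))
  try {
    val out = new ByteArrayOutputStream()
    val buffer = new Array[Byte](4096)
    var n = in.read(buffer)
    while (n != -1) {
      out.write(buffer, 0, n)
      n = in.read(buffer)
    }
    out.toByteArray
  } finally in.close() // close() calls end() on the Inflater, freeing off-heap memory
}
```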
My assumption is that the absence of the sealed modifier
was an oversight. Marking it as sealed will avoid exhaustivity
warnings from upcoming Scala compiler versions in `highestPriorityOf`.
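For illustration (a hypothetical stand-in hierarchy, not the actual class in question): with `sealed` the compiler knows all subtypes, so a match like the one in `highestPriorityOf` can be verified as exhaustive.

```scala
// Hypothetical stand-in hierarchy for illustration.
sealed abstract class Status
case object Reachable   extends Status
case object Unreachable extends Status

// Because Status is sealed, the compiler can verify that this match
// covers every combination of subtypes.
def highestPriorityOf(a: Status, b: Status): Status = (a, b) match {
  case (Unreachable, _) | (_, Unreachable) => Unreachable
  case (Reachable, Reachable)              => Reachable
}
```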
* Failure detection heartbeating was not performed to joining
nodes, since it was expected that they would become Up first.
* If a joining node was downed before it changed to Up, failure
detection was not performed for that node. As a result, the
downed node was never removed from membership, since the
unreachability signal is used as confirmation that the node has
actually stopped before it is removed.
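A hedged sketch of the resulting behavior (a simplified, hypothetical helper; the real member-selection logic is more involved): Joining members are monitored by the failure detector as well as Up members, so a node downed before reaching Up can still be confirmed as stopped and removed.

```scala
import akka.cluster.{ Member, MemberStatus }

// Heartbeat to Joining members in addition to Up members, so failure
// detection also covers nodes that are downed before they become Up.
def monitoredByFailureDetector(m: Member): Boolean =
  m.status == MemberStatus.Joining || m.status == MemberStatus.Up
```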
* The old implementation would cap the pool size (both corePoolSize
and maximumPoolSize) to max-pool-size, which is very confusing
because maximumPoolSize is only used when the task queue is bounded.
* As a result, configuring core-pool-size-min and core-pool-size-max
was not enough, because the values could still be capped by the default max-pool-size.
* The new behavior is simply that maximumPoolSize is adjusted to not be
less than corePoolSize, but otherwise the config properties match the
underlying ThreadPoolExecutor implementation.
* Added a convenience fixed-pool-size property.
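A sketch of using the convenience property (the dispatcher id `my-dispatcher` and the pool size are example values): fixed-pool-size pins the pool to an exact number of threads.

```scala
import com.typesafe.config.ConfigFactory

// Example dispatcher configuration using the convenience property.
val config = ConfigFactory.parseString("""
  my-dispatcher {
    type = Dispatcher
    executor = "thread-pool-executor"
    thread-pool-executor {
      # use exactly this many threads
      fixed-pool-size = 16
    }
  }
""")
```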
Unless the message class is in the akka.* package or the configuration setting 'akka.actor.warn-about-java-serializer-usage'
is disabled, a warning is logged for each class that the Java serializer is chosen for.
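For example, the warning can be switched off via the setting named above (shown here with `ConfigFactory` for illustration):

```scala
import com.typesafe.config.ConfigFactory

val config = ConfigFactory.parseString(
  "akka.actor.warn-about-java-serializer-usage = off"
)
```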
* the reported issue is fixed by the immediate leaderActions
(moving the member to Up) when the first node joins itself
* the other changes are additional precautions, just in case