GZIPInputStream uses Inflater internally (and therefore native zlib). Inflater frees its native memory only on an explicit call to end() or during finalization (finalize() contains only a call to end()), so GZIPInputStream should always be explicitly closed.
Since native libraries are involved, a non-Scala-ish try-finally is used for GZIPInputStream and GZIPOutputStream to avoid an off-heap memory leak in case of exceptions.
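A minimal sketch of that pattern, assuming simple byte-array helpers; the method names and buffer handling here are illustrative, not the actual code in this change:

```scala
import java.io.{ ByteArrayInputStream, ByteArrayOutputStream }
import java.util.zip.{ GZIPInputStream, GZIPOutputStream }

// Close the streams in a finally block so the native zlib memory held by
// the underlying Deflater/Inflater is released even if an exception is thrown.
def gzip(bytes: Array[Byte]): Array[Byte] = {
  val buf = new ByteArrayOutputStream(bytes.length)
  val out = new GZIPOutputStream(buf)
  try out.write(bytes) finally out.close()
  buf.toByteArray
}

def gunzip(bytes: Array[Byte]): Array[Byte] = {
  val in = new GZIPInputStream(new ByteArrayInputStream(bytes))
  try {
    val out = new ByteArrayOutputStream()
    val tmp = new Array[Byte](4096)
    var n = in.read(tmp)
    while (n != -1) {
      out.write(tmp, 0, n)
      n = in.read(tmp)
    }
    out.toByteArray
  } finally in.close()
}
```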
Two new message pairs:
`GetShardRegionState`/`CurrentShardRegionState` allows querying a region for its current shards and the current `EntityIds` of each shard.
`GetClusterShardingStats`/`ClusterShardingStats` allows querying the entire cluster for a summary of
the number of entities alive in each region and shard.
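A hedged sketch of how these could be used, assuming the messages live on `ShardRegion` (as in later Akka releases) and are sent to a local region actor via ask; the printing and field access are illustrative:

```scala
import akka.actor.ActorRef
import akka.cluster.sharding.ShardRegion
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.{ ExecutionContext, Future }
import scala.concurrent.duration._

// Ask a shard region actor for its local shards/entities and for
// cluster-wide per-shard entity counts.
def inspect(region: ActorRef)(implicit ec: ExecutionContext): Unit = {
  implicit val timeout: Timeout = Timeout(5.seconds)

  val local: Future[ShardRegion.CurrentShardRegionState] =
    (region ? ShardRegion.GetShardRegionState).mapTo[ShardRegion.CurrentShardRegionState]
  local.foreach(_.shards.foreach(s => println(s"shard ${s.shardId}: entities ${s.entityIds}")))

  val clusterWide: Future[ShardRegion.ClusterShardingStats] =
    (region ? ShardRegion.GetClusterShardingStats(3.seconds)).mapTo[ShardRegion.ClusterShardingStats]
  clusterWide.foreach(stats => println(stats.regions)) // per-region stats keyed by Address
}
```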
* the reported issue is fixed by the immediate leaderActions
(moving to Up) when joining the first node to itself
* the other changes are additional precautions, just in case
For manual downing the setting is not needed. For auto-down it does not add any extra safety, since
auto-down does not handle network partitions anyway.
The setting is still useful if you implement downing strategies that do handle network partitions,
e.g. by keeping the larger side of the partition and shutting down the smaller side (sketched below).
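For illustration only, a crude "keep the larger side" decision using the public Cluster API; the trigger, tie-breaking and stability handling are left out and would matter in a real strategy:

```scala
import akka.actor.ActorSystem
import akka.cluster.Cluster

// Illustrative only: down the smaller side of a partition.
def resolvePartition(system: ActorSystem): Unit = {
  val cluster = Cluster(system)
  val state = cluster.state
  val unreachable = state.unreachable
  val reachable = state.members.diff(unreachable)

  if (reachable.size > unreachable.size)
    // this side is larger: down the unreachable members
    unreachable.foreach(m => cluster.down(m.address))
  else
    // this side is smaller (or tied): shut ourselves down
    cluster.down(cluster.selfAddress)
}
```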
- created new subproject akka-protobuf (and added COPYING and LICENSE)
- renamed com.google.protobuf -> akka.protobuf everywhere
- also added such a renaming step for the results of protoc compilation in
project/Protobuf.scala (sketched below)
- had to include transcriptions of Netty’s ProtobufEncoder/Decoder to
make multi-node-testkit compile again
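A minimal sketch of what such a renaming pass can look like; the helper name and file handling are assumptions, not the actual contents of project/Protobuf.scala:

```scala
import java.io.File
import scala.io.Source

// Rewrite references to com.google.protobuf in the generated Java sources
// so they point at the shaded akka.protobuf package.
def renameGeneratedSources(dir: File): Unit = {
  def javaFiles(d: File): Seq[File] = {
    val children = Option(d.listFiles()).map(_.toSeq).getOrElse(Seq.empty)
    children.filter(f => f.isFile && f.getName.endsWith(".java")) ++
      children.filter(_.isDirectory).flatMap(javaFiles)
  }
  javaFiles(dir).foreach { f =>
    val src = Source.fromFile(f, "UTF-8")
    val rewritten =
      try src.mkString.replace("com.google.protobuf", "akka.protobuf")
      finally src.close()
    val out = new java.io.PrintWriter(f, "UTF-8")
    try out.write(rewritten) finally out.close()
  }
}
```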
Two improvements to the coordinator startup (state recovery) that
should make it operational sooner and reduce the number of messages lost
during startup.
* Let the quick Terminated messages (those not involving failure detection)
be processed before starting to reply to GetShardHome.
* Consider regions that don't belong to the current cluster
to be terminated.
This is where the new akka.cluster.down-removal-margin comes into play.
During that period messages are still routed to the old location, even though we have received the Terminated message.
We can reduce the message loss (best effort) by not replying to GetShardHome during that period.
* prevent Down and Exiting members from being used for joining
* delay shutdown of a Down member until the information has spread
to all reachable members, e.g. when downing several nodes via one node
* akka.cluster.down-removal-margin setting
Margin until shards or singletons that belonged to a
downed/removed partition are created in the surviving partition.
Used by singleton and sharding.
* remove the retry count parameters/settings for singleton in
favor of deriving them from the removal-margin (illustrated below)
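For illustration, a hedged sketch of configuring the margin and of the kind of derivation involved; the 20 s value, the retry-interval name, and the exact formula are assumptions, not the actual singleton implementation:

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory
import scala.concurrent.duration._

// Shards and singletons that lived on a downed/removed node are re-created
// on the surviving side only after this margin has passed.
// The 20 s value is an example, not a recommendation.
val config = ConfigFactory.parseString(
  "akka.cluster.down-removal-margin = 20 s"
).withFallback(ConfigFactory.load())

val system = ActorSystem("ClusterSystem", config)

// Instead of a configured retry count, derive the number of hand-over
// retries so that retrying spans the removal margin.
val removalMargin = 20.seconds
val retryInterval = 1.second
val handOverRetries = math.max(1, (removalMargin.toMillis / retryInterval.toMillis).toInt)
```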