GZIPInputStream uses Inflater internally (and therefore native zlib). Inflater frees its native memory only on an explicit call to end() or during finalization (finalize() just calls end()), so GZIPInputStream should always be closed explicitly.
Since native libraries are involved, a non-Scala-ish try-finally is used for GZIPInputStream and GZIPOutputStream to avoid off-heap memory leaks in case of exceptions, as sketched below.
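The pattern looks roughly like this (a sketch, not the actual Akka code; gzip/gunzip are just illustrative helper names):

```scala
import java.io.{ ByteArrayInputStream, ByteArrayOutputStream }
import java.util.zip.{ GZIPInputStream, GZIPOutputStream }

// Close the streams in a finally block so the native Deflater/Inflater
// memory is released even if the read or write throws.
def gzip(bytes: Array[Byte]): Array[Byte] = {
  val bos = new ByteArrayOutputStream(bytes.length)
  val out = new GZIPOutputStream(bos)
  try out.write(bytes)
  finally out.close() // triggers Deflater.end(), freeing the off-heap buffers
  bos.toByteArray
}

def gunzip(bytes: Array[Byte]): Array[Byte] = {
  val in = new GZIPInputStream(new ByteArrayInputStream(bytes))
  try {
    val out = new ByteArrayOutputStream()
    val buffer = new Array[Byte](4096)
    var n = in.read(buffer)
    while (n != -1) {
      out.write(buffer, 0, n)
      n = in.read(buffer)
    }
    out.toByteArray
  } finally in.close() // triggers Inflater.end(), freeing the off-heap buffers
}
```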
* Added reusable PerGroupingBuffer trait for pubsub implementation
* Moved mkKey methods to the Internal object
* Introduced passivate-like protocol between DistributedPubSubMediator/Topic and Topic/Group actors, contained in ChildActorTerminationProtocol messages.
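A rough sketch of such a passivate-like handshake; the message and actor names below are illustrative only, the real definitions are the ChildActorTerminationProtocol messages:

```scala
import akka.actor.{ Actor, ActorRef, Terminated }

// child -> parent: I have no subscribers left
case object NoMoreSubscribers
// parent -> child: then you may stop yourself
case object TerminateRequest
// child -> parent: abort, I am in use again
case object NewSubscriberArrived

class TopicLike extends Actor {
  private var subscribers = Set.empty[ActorRef]
  private var terminationPending = false

  def receive = {
    case ref: ActorRef => // stand-in for a Subscribe message
      context.watch(ref)
      subscribers += ref
      if (terminationPending) {
        terminationPending = false
        context.parent ! NewSubscriberArrived // cancel the pending passivation
      }
    case Terminated(ref) =>
      subscribers -= ref
      if (subscribers.isEmpty) {
        terminationPending = true
        context.parent ! NoMoreSubscribers
      }
    case TerminateRequest =>
      if (subscribers.isEmpty) context.stop(self) // safe to stop the child
      else context.parent ! NewSubscriberArrived  // raced with a new subscriber
  }
}
```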
* the reported issue is fixed by running leaderActions immediately
(moving the member to Up) when joining the first node to itself (join sketched below)
* the other changes are precautions, just in case
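For reference, joining a node to itself uses the ordinary join API (this assumes an ActorSystem configured with the cluster actor-ref provider):

```scala
import akka.actor.ActorSystem
import akka.cluster.Cluster

object JoinSelf extends App {
  val system = ActorSystem("ClusterSystem")
  val cluster = Cluster(system)
  // Joining the first node to itself: with the immediate leader actions the
  // node moves Joining -> Up right away instead of waiting for the next
  // periodic leader-actions tick.
  cluster.join(cluster.selfAddress)
}
```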
* In 2.4 we derive the number of hand-over/take-over retries from
the removal margin, but we decided to set that margin to 0 by default, since
it is intended for network partition scenarios. As a result maxTakeOverRetries
became 1, so there must also be a minimum number of retries property (see the sketch below).
* The test failed for the leaving scenario because the singleton
instance was stopped hard without sending the terminationMessage when
the maxTakeOverRetries was exceeded.
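A sketch of the intended derivation, with hypothetical names (not the actual settings class):

```scala
import scala.concurrent.duration._

// Derive the retry count from the removal margin, but never go below a
// configurable minimum so that the default margin of 0s still yields retries.
final case class HandOverSettings(
    removalMargin: FiniteDuration,
    retryInterval: FiniteDuration,
    minNumberOfRetries: Int) {

  def maxHandOverRetries: Int =
    math.max((removalMargin / retryInterval).toInt, minNumberOfRetries)
}

// removalMargin = 20s, retryInterval = 1s => 20 retries
// removalMargin = 0s (the 2.4 default)    => minNumberOfRetries
```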
For manual downing the setting is not needed. For auto-down it doesn't add any extra safety, since
auto-down does not handle network partitions anyway.
The setting is still useful if you implement downing strategies that handle network partitions,
e.g. by keeping the larger side of the partition and shutting down the smaller side.
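A minimal illustration of the "keep the larger side" idea (not a production strategy; it ignores stable-after delays, ties, and self-downing):

```scala
import akka.cluster.{ Cluster, Member, MemberStatus }

// Decide which members to down: keep the reachable majority, down the rest.
def membersToDown(cluster: Cluster): Set[Member] = {
  val state = cluster.state
  val up = state.members.filter(_.status == MemberStatus.Up)
  val unreachable = state.unreachable.filter(_.status == MemberStatus.Up)
  val reachable = up -- unreachable
  if (reachable.size > unreachable.size) unreachable // we are on the larger side
  else reachable                                     // we are on the smaller side
}
```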
- created new subproject akka-protobuf (and added COPYING and LICENSE)
- renamed com.google.protobuf -> akka.protobuf everywhere
- also added such a renaming step for the output of protoc compilation in
project/Protobuf.scala (sketched below)
- had to include transcriptions of Netty’s ProtobufEncoder/Decoder to
make multi-node-testkit compile again
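The renaming step could look roughly like this (illustrative only, not the actual project/Protobuf.scala; it assumes sbt's IO and PathFinder utilities are in scope in the build):

```scala
import sbt._

// After protoc has generated the Java sources, rewrite the package references
// so they use the bundled akka.protobuf classes instead of com.google.protobuf.
def rewriteProtobufPackage(generatedSourceDir: File): Unit =
  (generatedSourceDir ** "*.java").get.foreach { file =>
    val content = IO.read(file)
    IO.write(file, content.replace("com.google.protobuf", "akka.protobuf"))
  }
```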
* number-of-contacts is 3 by default, and in this test
with 4 server nodes we shut down all but one in the end;
sometimes the client has all nodes except the remaining
one in its list of contacts, so it will never establish
contact with the remaining node
* prevent Down and Exiting members from being used for joining
* delay shutdown of a Down member until the information has spread
to all reachable members, e.g. when downing several nodes via one node
* akka.cluster.down-removal-margin setting
Margin before shards or singletons that belonged to a
downed/removed partition are re-created in the surviving partition.
Used by singleton and sharding (configuration sketch below).
* remove the retry count parameters/settings for singleton in
favor of deriving those from the removal-margin
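A configuration sketch for the setting (the 20s value is only an example, not a recommendation):

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object DownRemovalMarginExample extends App {
  // Choose a margin that matches how your downing strategy handles partitions.
  val config = ConfigFactory
    .parseString("akka.cluster.down-removal-margin = 20s")
    .withFallback(ConfigFactory.load())

  val system = ActorSystem("ClusterSystem", config)
}
```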
* because it would result in quarantine if failure
detection triggers, and that kind of coupling is
exactly what is not desired for a ClusterClient
* replace it with simple heartbeat-based failure detection,
DeadlineFailureDetector (idea sketched below)
* DeadLetterSuppression
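The idea behind deadline-based heartbeat detection, sketched with a hypothetical class (not akka.remote.DeadlineFailureDetector itself): the peer counts as available as long as a heartbeat was observed within the acceptable pause, with no phi calculation and no heartbeat history.

```scala
import scala.concurrent.duration._

final class DeadlineHeartbeatDetector(acceptablePause: FiniteDuration) {
  // Available until the first heartbeat arrives, then governed by the deadline.
  @volatile private var deadline: Long = Long.MaxValue

  def heartbeat(): Unit =
    deadline = System.nanoTime() + acceptablePause.toNanos

  def isAvailable: Boolean =
    System.nanoTime() <= deadline
}
```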
* add unidoc task via an AutoPlugin that depends on the PrValidation and Unidoc AutoPlugins (shape sketched below)
* separate CLI option logic into a case class
* remove the AutoPlugin for the root project
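The shape of such an AutoPlugin, sketched with illustrative names (in the real build `requires` would reference the PrValidation and Unidoc plugins rather than JvmPlugin):

```scala
import sbt._
import sbt.Keys._

object UnidocValidationPlugin extends AutoPlugin {
  override def trigger = allRequirements
  override def requires = plugins.JvmPlugin // stand-in for PrValidation && Unidoc

  object autoImport {
    val validateUnidoc = taskKey[Unit]("Run unidoc as part of PR validation")
  }
  import autoImport._

  override def projectSettings = Seq(
    validateUnidoc := streams.value.log.info("unidoc would run here")
  )
}
```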