GZIPInputStream uses Inflater internally (and therefore native zlib). Inflater
frees its native memory only on an explicit call to end() or during
finalization (finalize() only calls end()), so GZIPInputStream should always be
closed explicitly.
Since native libraries are involved, a non-scalaish try-finally is used for
GZIPInputStream and GZIPOutputStream to avoid off-heap memory leaks in case of
exceptions.
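Below is a minimal sketch of that pattern, assuming plain byte-array
compress/uncompress helpers (the object and method names are illustrative, not
the actual Akka code):

```scala
import java.io.{ ByteArrayInputStream, ByteArrayOutputStream }
import java.util.zip.{ GZIPInputStream, GZIPOutputStream }

object GzipSketch {
  def compress(bytes: Array[Byte]): Array[Byte] = {
    val buffer = new ByteArrayOutputStream()
    val out = new GZIPOutputStream(buffer)
    // close() in finally ends the internal Deflater and frees native memory
    // even if write() throws
    try out.write(bytes) finally out.close()
    buffer.toByteArray
  }

  def uncompress(bytes: Array[Byte]): Array[Byte] = {
    val in = new GZIPInputStream(new ByteArrayInputStream(bytes))
    val buffer = new ByteArrayOutputStream()
    val chunk = new Array[Byte](4096)
    try {
      var n = in.read(chunk)
      while (n != -1) {
        buffer.write(chunk, 0, n)
        n = in.read(chunk)
      }
    } finally in.close() // explicitly releases the Inflater, also on exceptions
    buffer.toByteArray
  }
}
```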
My assumption is that the absence of the sealed modifier was an oversight.
Marking it as sealed will avoid exhaustivity warnings from the upcoming Scala
compiler version in `highestPriorityOf`.
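As a generic illustration (not the actual Akka definitions, and with a made-up
priority order): a sealed parent gives the compiler the complete set of
subtypes, which is what its exhaustivity analysis of a pair match in the style
of `highestPriorityOf` relies on:

```scala
// Hypothetical stand-in for the real status hierarchy
sealed trait Status
case object Joining extends Status
case object Up      extends Status
case object Removed extends Status

// Pair match in the style of highestPriorityOf; the ordering here is invented
def highestPriorityOf(s1: Status, s2: Status): Status = (s1, s2) match {
  case (Removed, _) => Removed
  case (_, Removed) => Removed
  case (Up, _)      => Up
  case (_, Up)      => Up
  case _            => Joining
}
```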
* Failure detection heartbeating was not performed towards joining
nodes, since it was expected that they would become Up first.
* If a joining node was downed before it changed to Up, failure
detection was not performed for that node. As a result the downed
node was never removed from membership, since the unreachability
signal is used as confirmation that the node has actually stopped
before it is removed.
* The old implementation would cap the pool size (both corePoolSize
and maximumPoolSize) at max-pool-size, which is very confusing
because maximumPoolSize is only used when the task queue is bounded.
* As a result, configuring core-pool-size-min and core-pool-size-max
was not enough, because the value could still be capped by the
default max-pool-size.
* The new behavior is simply that maximumPoolSize is adjusted to not be
less than corePoolSize, but otherwise the config properties match the
underlying ThreadPoolExecutor implementation.
* Added a convenience fixed-pool-size property (see the configuration
sketch below).
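A sketch of how the resulting configuration could be used, assuming the
standard thread-pool-executor dispatcher layout (dispatcher name and pool size
are arbitrary examples):

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object FixedPoolSizeExample extends App {
  val config = ConfigFactory.parseString("""
    my-dispatcher {
      type = Dispatcher
      executor = "thread-pool-executor"
      thread-pool-executor {
        # convenience property: core and maximum pool size are both set to 16
        fixed-pool-size = 16
      }
    }
    """).withFallback(ConfigFactory.load())

  val system = ActorSystem("example", config)
  // actors created with Props(...).withDispatcher("my-dispatcher") run on this pool
  system.terminate()
}
```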
Unless the message class is in akka.* or the configuration setting
'akka.actor.warn-about-java-serializer-usage' is disabled, a warning is logged
for each class that the Java serializer is chosen for.
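For completeness, a minimal sketch of turning the warning off for a system that
uses Java serialization intentionally (system name is arbitrary):

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object NoJavaSerializerWarning extends App {
  val config = ConfigFactory
    .parseString("akka.actor.warn-about-java-serializer-usage = off")
    .withFallback(ConfigFactory.load())

  // no per-class warnings will be logged for Java-serialized messages
  val system = ActorSystem("example", config)
  system.terminate()
}
```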
* the reported issue is fixed by performing the leaderActions
(moving to Up) immediately when joining the first node to itself
* the other changes are precautions, just in case
* used instead of the transport failure detector
* add a new config property akka.remote.handshake-timeout, but
for netty.tcp and netty.ssl the existing netty.tcp.connection-timeout
setting will be used (see the configuration sketch below)
* add tests of the timeouts
* MiMa filter for the internal ProtocolStateActor
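A configuration sketch showing the two settings involved (the values are
arbitrary examples, not the defaults):

```scala
import com.typesafe.config.ConfigFactory

object RemotingTimeoutSettings {
  val config = ConfigFactory.parseString("""
    # timeout for the remoting handshake
    akka.remote.handshake-timeout = 20 s
    # for netty.tcp and netty.ssl the existing connection-timeout setting is used
    akka.remote.netty.tcp.connection-timeout = 15 s
    """)
}
```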
For manual downing it is not needed. For auto-down it doesn't add any extra
safety, since auto-down does not handle network partitions anyway.
The setting is still useful if you implement downing strategies that handle network partitions,
e.g. by keeping the larger side of the partition and shutting down the smaller side.
- created new subproject akka-protobuf (and added COPYING and LICENSE)
- renamed com.google.protobuf -> akka.protobuf everywhere
- also added the same renaming step to the results of protoc compilation in
project/Protobuf.scala
- had to include transcriptions of Netty’s ProtobufEncoder/Decoder to
make multi-node-testkit compile again
When using a dispatcher (default or a separate cluster dispatcher)
with fewer than 5 threads the Cluster extension initialization
could deadlock.
It was reproducible by adding a sleep before the Await of GetClusterCoreRef
in the Cluster extension constructor. The reason was that other cluster actors
were started too early and they also tried to get the Cluster extension,
thereby blocking dispatcher threads.
Note that the Cluster extension is started via ClusterActorRefProvider before
ActorSystem.apply returns.
The improvement is to start the cluster child actors lazily, when
GetClusterCoreRef is received (see the sketch below).
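A generic sketch of that lazy-start pattern (actor and message names are
illustrative, not the actual ClusterDaemon internals):

```scala
import akka.actor.{ Actor, ActorRef, Props }

case object GetCoreRef

class LazySupervisor(coreProps: Props) extends Actor {
  // nothing is started from the constructor, so creating this actor does not
  // require other extensions to be initialized first
  private var core: Option[ActorRef] = None

  def receive = {
    case GetCoreRef =>
      // the child is created on the first request instead of at startup
      if (core.isEmpty)
        core = Some(context.actorOf(coreProps, "core"))
      core.foreach(sender() ! _)
  }
}
```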