* clarify how to enable the plugin
* added an empty `class` property in the fallback config in reference.conf
  to have a proper place to document it and to throw a more specific
  exception if it is not defined (see the sketch below)
* also some formatting of reference.conf
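
A minimal sketch of the validation this enables, assuming a hypothetical plugin
path and exception message (the actual property path and wording may differ):

```scala
import com.typesafe.config.Config

// reference.conf (fallback) now declares an empty placeholder, e.g.:
//   my-journal {
//     # FQCN of the plugin implementation; must be defined by the user
//     class = ""
//   }
// which gives the property a documented home and lets the loader fail
// with a specific message instead of a generic "missing setting" error.
def pluginClassName(config: Config, pluginPath: String): String = {
  val className = config.getConfig(pluginPath).getString("class")
  if (className.isEmpty)
    throw new IllegalArgumentException(
      s"Plugin class name must be defined in config property [$pluginPath.class]")
  className
}
```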
For manual downing it is not needed. For auto-down it doesn't add any extra safety,
since auto-down does not handle network partitions anyway.
The setting is still useful if you implement downing strategies that do handle network
partitions, e.g. by keeping the larger side of the partition and shutting down the smaller side.
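
To illustrate the "keep the larger side" strategy mentioned above, a hedged
sketch of the core decision (illustrative only, not an Akka API):

```scala
// On a partition, the side that still reaches more than half of the
// last-known members stays up; the smaller side downs itself.
// A real strategy needs a deterministic tie-breaker for even splits,
// e.g. keeping the side that contains the lowest node address.
def shouldDownSelf(knownMembers: Int, reachableMembers: Int): Boolean =
  reachableMembers * 2 <= knownMembers
```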
* well, as long as they provide the `parseFrom` and `toByteArray` methods
* it is using reflection to find the `parseFrom` and `toByteArray` methods to avoid
  a dependency on `com.google.protobuf` (see the sketch after this list)
* also special-case `com.google.protobuf` when loading serialization bindings
* migration guide
* MiMa filters for the serializers (all types changed)
* add real test for ProtobufSerializer
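
A minimal sketch of the reflection approach, assuming Akka's classic
Serializer SPI; this is illustrative, not the actual ProtobufSerializer
from the commit:

```scala
import akka.serialization.Serializer

class ReflectiveProtobufSerializer extends Serializer {
  override def identifier: Int = 9999 // hypothetical serializer id
  override def includeManifest: Boolean = true

  // Works for any message class that provides toByteArray/parseFrom,
  // without a compile-time dependency on com.google.protobuf.
  override def toBinary(obj: AnyRef): Array[Byte] =
    obj.getClass.getMethod("toByteArray").invoke(obj).asInstanceOf[Array[Byte]]

  override def fromBinary(bytes: Array[Byte], manifest: Option[Class[_]]): AnyRef =
    manifest match {
      case Some(clazz) =>
        // static parseFrom(byte[]) looked up reflectively
        clazz.getMethod("parseFrom", classOf[Array[Byte]]).invoke(null, bytes)
      case None =>
        throw new IllegalArgumentException("Need a manifest to deserialize")
    }
}
```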
- created new subproject akka-protobuf (and added COPYING and LICENSE)
- renamed com.google.protobuf -> akka.protobuf everywhere
- also added such a renaming step to the output of the protoc compilation
  in project/Protobuf.scala (sketched below)
- had to include transcriptions of Netty’s ProtobufEncoder/Decoder to
make multi-node-testkit compile again
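
A hedged sketch of that renaming step (the real build uses sbt's IO helpers
in project/Protobuf.scala; this shows just the idea):

```scala
import java.nio.charset.StandardCharsets.UTF_8
import java.nio.file.{ Files, Path }

// After protoc has generated the Java sources, rewrite every reference
// to com.google.protobuf so the generated code uses the embedded copy.
def renameGeneratedSources(dir: Path): Unit =
  Files.walk(dir).filter(_.toString.endsWith(".java")).forEach { file =>
    val content = new String(Files.readAllBytes(file), UTF_8)
    Files.write(file, content.replace("com.google.protobuf", "akka.protobuf").getBytes(UTF_8))
  }
```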
When using a dispatcher (default or a separate cluster dispatcher)
with fewer than 5 threads, the Cluster extension initialization
could deadlock.
It was reproducible by adding a sleep before the Await of GetClusterCoreRef
in the Cluster extension constructor. The reason was that other cluster actors were
started too early and also tried to get the Cluster extension, thereby blocking
dispatcher threads.
Note that the Cluster extension is started via ClusterActorRefProvider before
ActorSystem.apply returns.
The improvement is to start the cluster child actors lazily, when
GetClusterCoreRef is received.
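
A hedged sketch of the lazy-start fix with simplified names (the real code
lives in the cluster core supervisor):

```scala
import akka.actor.{ Actor, ActorRef, Props }

case object GetClusterCoreRef

// Children are no longer created in the constructor, only on the first
// GetClusterCoreRef, so extension initialization cannot block the few
// available dispatcher threads.
class ClusterCoreSupervisorSketch extends Actor {
  private var coreDaemon: Option[ActorRef] = None

  private def createChildren(): Unit =
    coreDaemon = Some(context.watch(context.actorOf(Props(new ClusterCoreDaemonSketch), "daemon")))

  def receive = {
    case GetClusterCoreRef =>
      if (coreDaemon.isEmpty) createChildren() // lazy start on first request
      coreDaemon.foreach(sender() ! _)
  }
}

class ClusterCoreDaemonSketch extends Actor {
  def receive = Actor.emptyBehavior
}
```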
For example, a new persistent actor (no snapshots, no events) should use
0L, so that it is consistent that the journal returns 0L as the highest
sequence number and the first persisted event gets 1L.
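
A small illustration of the convention (hypothetical names):

```scala
// For a fresh persistent actor (no snapshots, no events) the journal
// returns 0L as the highest stored sequence number ...
var lastSequenceNr: Long = 0L

def nextSequenceNr(): Long = {
  lastSequenceNr += 1 // ... so the first persisted event gets 1L
  lastSequenceNr
}
```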
Two improvements to the coordinator startup (state recovery) that
should make it operational faster and reduce the number of lost messages
during startup.
* Let the quick Terminated messages (those not involving failure detection)
  be processed before starting to reply to GetShardHome.
* Consider regions that don't belong to the current cluster
to be terminated.
This is where the new akka.cluster.down-removal-margin setting comes into play.
During that margin, messages are still routed to the old location, even though the
Terminated message has been received. We can reduce the message loss (best effort)
by not replying to GetShardHome during that period (sketched below).
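
A hedged sketch of that behavior, with illustrative names (not the real
ShardCoordinator internals):

```scala
import scala.concurrent.duration._

final case class GetShardHome(shard: String)

// Regions that terminated recently are kept with a deadline; while the
// down-removal-margin has not elapsed, the coordinator defers its
// GetShardHome replies, so senders buffer instead of sending to a
// location where messages would be lost.
class CoordinatorSketch(downRemovalMargin: FiniteDuration) {
  private var recentlyTerminated = Map.empty[String, Deadline]

  def regionTerminated(region: String): Unit =
    recentlyTerminated += region -> (Deadline.now + downRemovalMargin)

  /** True if a GetShardHome reply for this region should be deferred. */
  def withinRemovalMargin(region: String): Boolean =
    recentlyTerminated.get(region).exists(_.hasTimeLeft())
}
```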