+ per-plugin scoped adapters
+ can be swapped at runtime
+per EventAdapter now has a manifest and is configurable à la serializers
+ JSON examples in docs
+ including a "completely manual" example in case one wants to add
  metadata TO the persisted event (see the sketch below)
+ better error reporting for misconfigured bindings
+ manifest is handled by the in-memory plugin
- did not check if it works with the LevelDB plugin yet
> TODO: JSON example uses Gson, as that's simplest to do, can we use
+per allows 1:n adapters; multiple adapters can be bound to 1 class
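A minimal sketch of the "manual metadata" case, assuming Akka Persistence's `EventAdapter`/`EventSeq` API; the `MetadataAdapter` and `EventWithMeta` names are illustrative, not part of Akka:

```scala
import akka.persistence.journal.{ EventAdapter, EventSeq }

// Illustrative wrapper type (not part of Akka) that carries the metadata.
final case class EventWithMeta(event: Any, timestampMillis: Long)

class MetadataAdapter extends EventAdapter {
  // Persisted alongside the event, like a serializer manifest.
  override def manifest(event: Any): String = "v1"

  // Write side: manually add metadata TO the persisted event.
  override def toJournal(event: Any): Any =
    EventWithMeta(event, System.currentTimeMillis())

  // Read side: unwrap again. Returning an EventSeq allows one stored
  // event to be adapted into several domain events.
  override def fromJournal(event: Any, manifest: String): EventSeq =
    event match {
      case EventWithMeta(evt, _) => EventSeq.single(evt)
      case other                 => EventSeq.single(other)
    }
}
```

The adapter would then be registered in the journal plugin's own config section (hence "per-plugin scoped"), roughly along the lines of `event-adapters { meta = "docs.MetadataAdapter" }` plus `event-adapter-bindings { "docs.MyEvent" = meta }` under e.g. `akka.persistence.journal.inmem`; the `docs.*` names are made up for the example.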
Previously known as [patriknw/akka-data-replication](https://github.com/patriknw/akka-data-replication),
which was originally inspired by [jboner/akka-crdt](https://github.com/jboner/akka-crdt).
The functionality is very similar to akka-data-replication 0.11.
Here is a list of the most important changes:
* The package name changed to `akka.cluster.ddata`
* The extension was renamed to `DistributedData`
* The keys changed from strings to classes with unique identifiers and type information of the data values,
e.g. `ORSetKey[Int]("set2")`
* The optional read consistency parameter was removed from the `Update` message. If you need to read from
  other replicas before performing the update you have to first send a `Get` message and then continue with
  the `Update` when the `GetSuccess` is received, as shown in the sketch after this list.
* `BigInt` is used in `GCounter` and `PNCounter` instead of `Long`
* Improvements of the Java API
* Better documentation
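A minimal sketch of that read-then-update flow with the new typed keys, assuming the Akka 2.4 `akka.cluster.ddata` API; the actor name and the consistency and timeout values are illustrative only:

```scala
import akka.actor.Actor
import akka.cluster.Cluster
import akka.cluster.ddata._
import akka.cluster.ddata.Replicator._
import scala.concurrent.duration._

class SetUpdater extends Actor {
  implicit val cluster = Cluster(context.system) // required by ORSet updates
  val replicator = DistributedData(context.system).replicator
  val SetKey = ORSetKey[Int]("set2") // the key now carries the value type

  def receive = {
    case n: Int =>
      // Update no longer takes a read consistency, so read first ...
      replicator ! Get(SetKey, ReadMajority(3.seconds), request = Some(n))
    case g @ GetSuccess(SetKey, Some(n: Int)) if !g.get(SetKey).contains(n) =>
      // ... and continue with the Update when GetSuccess is received.
      replicator ! Update(SetKey, ORSet.empty[Int], WriteLocal)(_ + n)
    case NotFound(SetKey, Some(n: Int)) =>
      replicator ! Update(SetKey, ORSet.empty[Int], WriteLocal)(_ + n)
  }
}
```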
* prevent Down and Exiting members from being used for joining
* delay shutdown of a Down member until the information has spread
  to all reachable members, e.g. when downing several nodes via one node
* `akka.cluster.down-removal-margin` setting:
  the margin before shards or singletons that belonged to a
  downed/removed partition are re-created in the surviving partition.
  Used by Cluster Singleton and Cluster Sharding; see the config
  sketch after this list.
* remove the retry count parameters/settings for the singleton in
  favor of deriving those from the removal-margin
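A minimal sketch of setting that margin; the `ClusterApp` name is made up, and the 20s value is illustrative only (a sensible margin depends on cluster size and failure-detection settings):

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object ClusterApp extends App {
  // Shards/singletons that lived on a downed node are only re-created
  // in the surviving partition after this margin. 20s is illustrative,
  // not a recommendation.
  val config = ConfigFactory.parseString(
    "akka.cluster.down-removal-margin = 20s"
  ).withFallback(ConfigFactory.load())

  val system = ActorSystem("ClusterSystem", config)
}
```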
* I think it originated from channels, or some idea that
  the sender should be revived (as well as possible) during replay,
  but that is pretty useless
* It must still be in `PersistentRepr` for remote serialization
* I didn't want to change to the built-in sender when looping to the
  journal because keeping it together with the message makes it easier
  to do batching (queueing)
* adjust the TCK
When watching many (5000) actors at the same time, the
following problems were found:
* the first send of a sys msg goes out without any flow control
  => limit the number of outstanding sys msgs by using
     the buffer to send them later (ordinary resend)
* when a msg cannot be written, the sys msg is dropped (relying on resend),
  but that causes message re-ordering and negative acknowledgment,
  which is very costly
  => buffer the sys msg on write failure
  => minor optimization of AckedReceiveBuffer
I also made the resend-limit configurable (see the sketch below).
(cherry picked from commit ecfc271e9a9d7efcf76945632d89c78740291cc6)
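A sketch of tuning that limit; the setting name `akka.remote.resend-limit` and its value are assumptions inferred from this note, so verify against the akka-remote reference.conf of your version:

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object RemoteApp extends App {
  // ASSUMPTION: setting name and semantics inferred from the note
  // above; the value is illustrative. It is meant to cap resends of
  // unacknowledged system messages.
  val config = ConfigFactory.parseString(
    "akka.remote.resend-limit = 200"
  ).withFallback(ConfigFactory.load())

  val system = ActorSystem("RemoteSystem", config)
}
```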