* It was a timing race condition in the test that was exposed
by the change in PR #25315. Full state is now sent immediately
when receiving a DeltaNack, which makes the Update complete
much faster in that case than before.
* As a result, the delta propagations from previous updates were
still in the buffer, waiting to be sent out, when the incr(4) was
performed. Those deltas contained the NoDeltaPlaceholder, which
caused the incr(4) delta to also be folded into NoDeltaPlaceholder
and thereby not propagated (see the sketch below).
* Before the DeltaNack change the buffer had time to be flushed before
the incr(4), and therefore it contained no NoDeltaPlaceholder.
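A minimal sketch of the folding rule described above, with made-up types rather than the actual Replicator internals: once NoDeltaPlaceholder is buffered for a key, any later delta folded into it is swallowed.

```scala
// Hypothetical types, only to illustrate the folding behaviour described above.
sealed trait BufferedDelta
case object NoDeltaPlaceholder extends BufferedDelta
final case class ConcreteDelta(description: String) extends BufferedDelta

def fold(buffered: BufferedDelta, next: BufferedDelta): BufferedDelta =
  (buffered, next) match {
    // a placeholder already in the buffer swallows every later delta
    case (NoDeltaPlaceholder, _) => NoDeltaPlaceholder
    case (_, NoDeltaPlaceholder) => NoDeltaPlaceholder
    case (ConcreteDelta(a), ConcreteDelta(b)) => ConcreteDelta(s"$a, then $b")
  }

// fold(NoDeltaPlaceholder, ConcreteDelta("incr(4)")) == NoDeltaPlaceholder,
// so the incr(4) delta is never propagated.
```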
* Use a deterministic order of the target nodes for the writes when
the data type is RequiresCausalDeliveryOfDeltas, otherwise the random
pick of targets caused delta sequence numbers to be missing for
subsequent updates
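A small sketch of the idea, assuming a hypothetical helper rather than the Replicator's actual node selection code:

```scala
import akka.actor.Address

// Hypothetical helper: when causal delivery of deltas is required, pick the
// write targets in a stable sorted order instead of a random sample, so that
// consecutive updates address the same nodes and no target misses a delta
// sequence number.
def writeTargets(all: Set[Address], n: Int, causalDelivery: Boolean): Vector[Address] =
  if (causalDelivery)
    all.toVector.sortBy(a => (a.host.getOrElse(""), a.port.getOrElse(0))).take(n)
  else
    scala.util.Random.shuffle(all.toVector).take(n)
```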
* Resend immediately when receiving DeltaNack instead of waiting
for the timeout. DeltaNack can happen when there are multiple
concurrent updates from the same node, because each starts a
WriteAggregator and a later Update might bypass an earlier one
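A simplified, hypothetical aggregator to illustrate the immediate resend (the real WriteAggregator has rounds, timeouts and consistency handling):

```scala
import akka.actor.{ Actor, ActorRef }

// Illustration only; message types are stand-ins for the real protocol.
case object DeltaNack
final case class Write(fullState: Boolean)

class ResendOnNackSketch(replicas: Set[ActorRef]) extends Actor {
  override def preStart(): Unit =
    replicas.foreach(_ ! Write(fullState = false)) // first attempt with the delta

  def receive: Receive = {
    case DeltaNack =>
      // the replica could not apply the delta: resend full state right away
      // instead of waiting for the round timeout
      sender() ! Write(fullState = true)
  }
}
```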
* since the ordering can change based on the member's status
it's not possible to use the ordinary `-` for removal
* a similar issue existed at a few places where ageOrdering was used
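A sketch of the workaround, assuming a SortedSet of Members whose Ordering can change with the member's status:

```scala
import akka.cluster.Member
import scala.collection.immutable.SortedSet

// If the member's status has changed since it was inserted, its position under
// a status-dependent Ordering may have moved, so `members - m` can fail to
// locate it. Filtering by address does not depend on the ordering at all.
def removeMember(members: SortedSet[Member], m: Member): SortedSet[Member] =
  members.filterNot(_.address == m.address)
```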
* Sharding only within own team (coordinator is singleton)
* the ddata Replicator used by Sharding must also be only within own team
* added support for a Set of roles in the ddata Replicator so that it can
be used by Sharding to specify role + team (see the sketch below)
* Sharding proxy can route to sharding in another team
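The role/team idea, illustrated with a hypothetical predicate rather than the exact ReplicatorSettings API:

```scala
import akka.cluster.Member

// With a Set of roles the sharding Replicator can require both the sharding
// role and the team role, keeping replication within the own team.
def isReplicaTarget(m: Member, requiredRoles: Set[String]): Boolean =
  requiredRoles.subsetOf(m.roles)

// e.g. isReplicaTarget(member, Set("sharding-role", "team-a"))
```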
* to avoid OversizedPayloadException
* some complex deltas grow for each update operation, e.g.
when updating different keys in ORMap (PNCounterMap)
* such large deltas can safely be discarded and disseminated as full
state instead
* added a ReplicatedDeltaSize interface to be able to define the "size",
and when that size exceeds the configured threshold the delta is discarded
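A sketch of how such a size check could look; the exact shape of ReplicatedDeltaSize and the threshold parameter are assumptions, check the Akka sources for the real ones:

```scala
// Assumed shape of the interface: the data type reports a size for its delta.
trait ReplicatedDeltaSize {
  def deltaSize: Int
}

// Discard the delta and fall back to full state dissemination when it has
// grown beyond the configured threshold.
def shouldDiscardDelta(delta: Any, maxDeltaSize: Int): Boolean =
  delta match {
    case d: ReplicatedDeltaSize => d.deltaSize > maxDeltaSize
    case _                      => false
  }
```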
When a DeltaReplicatedData returns None from delta it must still be
treated as a delta that increases the version counter in the DeltaPropagationSelector.
Otherwise a later delta might be applied before the full state gossip is received,
thereby violating RequiresCausalDeliveryOfDeltas.
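A simplified sketch of that rule (not the real DeltaPropagationSelector):

```scala
// Even when the data type returns None from delta, the version counter is
// incremented and a placeholder is buffered for that version, so receivers
// cannot apply a later delta before the corresponding full state has arrived.
final case class SelectorState(currentVersion: Long, buffer: Vector[(Long, Option[String])])

def registerUpdate(state: SelectorState, delta: Option[String]): SelectorState = {
  val nextVersion = state.currentVersion + 1 // incremented also when delta is None
  SelectorState(nextVersion, state.buffer :+ (nextVersion -> delta))
}
```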
* Follow up on the causal delivery of deltas.
* The first implementation used full state for the direct
Write messages, i.e. updates with WriteConsistency != WriteLocal
* This is an optimization so that deltas are tried first and if
they can't be applied it falls back to full state.
* For simultaneous updates the messages may be reordered because we
create separate WriteAggregator actors, but normally they
will be sent in order so the deltas will typically be received in
order; otherwise we fall back to retrying with full state in the
second round in the WriteAggregator.
* keep track of delta interval versions and skip deltas
that are not consecutive, i.e. when some delta message was lost
* send the delta versions in the full state gossip to sync up the
expected versions after dropped deltas
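A receiver-side sketch of the version tracking, simplified to one key and plain sequence numbers:

```scala
// Apply a delta only if its version interval continues directly from what has
// already been seen from that node; otherwise drop it and let the delta
// versions carried in the full state gossip sync things up again.
final case class DeltaVersions(seen: Map[String, Long] = Map.empty) {

  /** Returns (applyIt, updated state) for a delta covering fromSeqNr..toSeqNr. */
  def receive(node: String, fromSeqNr: Long, toSeqNr: Long): (Boolean, DeltaVersions) = {
    val expected = seen.getOrElse(node, 0L) + 1
    if (toSeqNr < expected) (false, this)                    // old duplicate: ignore
    else if (fromSeqNr > expected) (false, this)             // gap: a delta was lost, skip
    else (true, DeltaVersions(seen.updated(node, toSeqNr)))  // consecutive: apply
  }
}
```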
* implementation of deltas for ORSet
* refactoring of the delta types to allow for a different type for the
delta and the full state (see the sketch below)
* extensive tests
* mima filter
* performance optimizations
* simple pruning of deltas
* Java API
* update documentation
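The rough shape of the refactoring mentioned above, written as simplified stand-in traits (see akka.cluster.ddata for the real DeltaReplicatedData):

```scala
// Simplified stand-ins, not the actual Akka traits.
trait ReplicatedDataSketch {
  type T <: ReplicatedDataSketch
  def merge(that: T): T
}

trait DeltaReplicatedDataSketch extends ReplicatedDataSketch {
  type D                          // the delta type, which may differ from the full state type T
  def delta: Option[D]            // the delta accumulated since the last propagation, if any
  def mergeDelta(thatDelta: D): T
  def resetDelta: T
}
```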
* KeyId type alias
* Use InternalApi annotation
* delta GCounter and PNCounter
* first stab at delta propagation protocol
* send delta in the direct write
* possibility to turn off delta propagation (see the configuration sketch below)
* tests
* protobuf serializer for DeltaPropagation
* documentation
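For the on/off switch, the setting is assumed to be the `delta-crdt.enabled` flag in the Distributed Data configuration; double-check the key against reference.conf:

```scala
import com.typesafe.config.ConfigFactory

// Assumed configuration key for disabling delta propagation entirely.
val noDeltaConfig = ConfigFactory.parseString(
  "akka.cluster.distributed-data.delta-crdt.enabled = off")
```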
* fix merge issues of DataEnvelope and its pruning
* simplify by removing the tombstones, which didn't work in all cases anyway
* keep the PruningPerformed markers in the DataEnvelope until configured
TTL has elapsed (wall clock)
* simplify PruningState structure
* also store the pruning markers in durable data
* collect removed nodes from the data; listening on MemberRemoved is not enough
* possibility to disable pruning altogether (see the configuration sketch below)
* documented caveat for durable data
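The pruning-related settings sketched below are assumptions based on the items above (marker TTL, durable marker TTL, disabling pruning); verify the keys and defaults against reference.conf:

```scala
import com.typesafe.config.ConfigFactory

// Assumed configuration keys for the pruning behaviour described above.
val pruningConfig = ConfigFactory.parseString(
  """
  akka.cluster.distributed-data {
    # set to off to disable pruning altogether
    pruning-interval = 120 s
    # wall clock TTL for keeping the PruningPerformed markers
    pruning-marker-time-to-live = 6 h
    # the markers are also stored in durable data, with their own TTL
    durable.pruning-marker-time-to-live = 10 d
  }
  """)
```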
The WriteAggregator and ReadAggregator typically send
the same message to several replicas, and by caching the serialized bytes
we avoid serializing the same message once for each replica.
Also adds a test for the WriteAggregator.
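An illustration of the caching idea with hypothetical names (the real optimization lives in the serialization layer used by the aggregators):

```scala
import akka.actor.ActorRef
import akka.util.ByteString

// Serialize the payload once and reuse the bytes for every replica, instead of
// serializing the same message again for each destination.
final class CachedWrite(payload: AnyRef, serialize: AnyRef => ByteString) {
  private lazy val bytes: ByteString = serialize(payload) // computed on first use only

  def sendTo(replicas: Iterable[ActorRef], sender: ActorRef): Unit =
    replicas.foreach(_.tell(bytes, sender))
}
```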
Previously known as [patriknw/akka-data-replication](https://github.com/patriknw/akka-data-replication),
which was originally inspired by [jboner/akka-crdt](https://github.com/jboner/akka-crdt).
The functionality is very similar to akka-data-replication 0.11.
Here is a list of the most important changes:
* The package name changed to `akka.cluster.ddata`
* The extension was renamed to `DistributedData`
* The keys changed from strings to classes with unique identifiers and type information of the data values,
e.g. `ORSetKey[Int]("set2")`
* The optional read consistency parameter was removed from the `Update` message. If you need to read from
other replicas before performing the update you have to first send a `Get` message and then continue with
the `Update` when the `GetSuccess` is received (see the sketch below).
* `BigInt` is used in `GCounter` and `PNCounter` instead of `Long`
* Improvements of the Java API
* Better documentation
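A sketch of the read-then-update pattern from the list above, using the classic Distributed Data actor API (the key name and consistency settings are just examples):

```scala
import scala.concurrent.duration._
import akka.actor.Actor
import akka.cluster.Cluster
import akka.cluster.ddata.{ DistributedData, ORSet, ORSetKey }
import akka.cluster.ddata.Replicator._

class ReadBeforeUpdate extends Actor {
  implicit val cluster: Cluster = Cluster(context.system)
  private val replicator = DistributedData(context.system).replicator
  private val SetKey = ORSetKey[Int]("set2")

  // first read from other replicas ...
  replicator ! Get(SetKey, ReadMajority(timeout = 5.seconds))

  def receive: Receive = {
    case g @ GetSuccess(SetKey, _) =>
      val current = g.get(SetKey) // the value read from a majority of replicas
      // ... and continue with the Update when the GetSuccess is received
      replicator ! Update(SetKey, ORSet.empty[Int], WriteLocal)(_ + (current.elements.size + 1))
    case NotFound(SetKey, _) =>
      replicator ! Update(SetKey, ORSet.empty[Int], WriteLocal)(_ + 1)
  }
}
```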