* Possibility to prefer oldest in ddata writes and reads
* enabled for Cluster Sharding
* New ReadMajorityPlus and WriteMajorityPlus (see the sketch below)
* used by Cluster Sharding, with configuration
* also possible to define ReadAll in config
(cherry picked from commit 4ba835d328)
Refs #28993
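A minimal sketch of the new consistency levels, assuming the Akka 2.6 classic Distributed Data API; the key name, counter increment and the `additional` counts are only illustrative:

```scala
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.cluster.ddata._
import akka.cluster.ddata.Replicator._

object MajorityPlusSketch extends App {
  implicit val system: ActorSystem = ActorSystem("sketch")
  implicit val node: SelfUniqueAddress = DistributedData(system).selfUniqueAddress

  val replicator = DistributedData(system).replicator
  val CounterKey = GCounterKey("hits")

  // WriteMajorityPlus(timeout, additional): ack from a majority of nodes plus
  // `additional` extra nodes (capped at all nodes), a bigger safety margin than
  // plain WriteMajority.
  replicator ! Update(CounterKey, GCounter.empty, WriteMajorityPlus(3.seconds, 2))(_ :+ 1)

  // ReadMajorityPlus(timeout, additional): read from a majority plus `additional`
  // nodes before replying.
  replicator ! Get(CounterKey, ReadMajorityPlus(3.seconds, 2))
}
```

In real code the Update/Get replies (UpdateSuccess, GetSuccess, etc.) go back to the sending actor; in this standalone sketch they would end up in deadLetters because the messages are sent from outside an actor.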
The previous `nextDeadline - time < 0` required that nanoTime resolution is
actually high enough to see that the deadline had already passed. If it
had not, the current keep-alive was missed and so were all future ones (until
a regular element triggered another push/pull cycle).
Now, with `>=`, it also works in that case and just fails noisily if our
assumptions do not hold.
It's not clear how it could have happened. On my machine, timers
trigger 1-2 tick-durations (but at least ~2ms) too late. How that could
line up exactly with the nanoTime resolution is hard to see.
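For illustration, a minimal sketch of the kind of deadline check described above, with the comparison on the elapsed side using `>=`; the class and method names are made up and not the actual operator internals:

```scala
// Hypothetical names, not the real stage code.
final class KeepAliveDeadline(intervalNanos: Long) {
  private var nextDeadline: Long = System.nanoTime() + intervalNanos

  // The old check `nextDeadline - now < 0` only fires once nanoTime has visibly
  // advanced past the deadline; with coarse nanoTime resolution the keep-alive
  // could be missed. With `>=` the check also fires when the timer wakes up
  // exactly "on" the deadline as seen through that resolution.
  def dueAndReset(now: Long = System.nanoTime()): Boolean =
    if (now - nextDeadline >= 0) { // overflow-safe comparison of nano timestamps
      nextDeadline = now + intervalNanos
      true
    } else false
}
```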
* encode failedFatally in the existing _failed field (sketch below)
* removed unused parameter of finishRecreate
* removed now unused parameter of clearActorFields
* Removed failed fatally with perpetrator state
* Remove actor_= and restrict places where _actor can be set
* test for non-null context on actor termination
* Remove Reflect.lookupAndSetField
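A hedged sketch of the encoding idea from the first item, assuming nothing about the real ActorCell internals: instead of a separate failedFatally boolean, the existing failure field can carry a small ADT (or sentinel value) so both pieces of information live in one field:

```scala
// Illustrative only; field and type names are not the actual ActorCell code.
object FaultStateSketch {
  sealed trait FailedInfo
  case object NotFailed extends FailedInfo
  final case class FailedBy(perpetrator: String) extends FailedInfo // who caused the failure
  case object FailedFatally extends FailedInfo

  private var _failed: FailedInfo = NotFailed

  def setFailed(perpetrator: String): Unit =
    if (_failed ne FailedFatally) _failed = FailedBy(perpetrator)
  def setFailedFatally(): Unit = _failed = FailedFatally

  def isFailed: Boolean = _failed.isInstanceOf[FailedBy]
  def isFailedFatally: Boolean = _failed eq FailedFatally
}
```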
* Refactor shard to have a state for each entity
Rather than inferring it from various maps and sets.
Unfortunately, we still have the by-actor-ref and by-id lookups, but they have
been moved into a class so they are always updated together (see the sketch
below).
* Avoid allocation on the message path
* Change Entities API to all OptionVals rather than a mixture
* Add spec for Entities
* Avoid multiple conversions of collection for handoff
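A hedged sketch of the per-entity state idea, with hypothetical names rather than the real Shard internals; plain Option stands in for Akka's internal, allocation-free OptionVal:

```scala
import akka.actor.ActorRef

final class Entities {
  sealed trait EntityState
  final case class Active(ref: ActorRef) extends EntityState
  final case class Passivating(ref: ActorRef) extends EntityState
  case object RememberedButNotStarted extends EntityState

  // Both lookups are private to this class, so they can only change together.
  private var byId: Map[String, EntityState] = Map.empty
  private var byRef: Map[ActorRef, String] = Map.empty

  def addActive(entityId: String, ref: ActorRef): Unit = {
    byId += entityId -> Active(ref)
    byRef += ref -> entityId
  }

  def entityIdOf(ref: ActorRef): Option[String] = byRef.get(ref)
  def stateOf(entityId: String): Option[EntityState] = byId.get(entityId)

  def remove(entityId: String): Unit = {
    byId.get(entityId).foreach {
      case Active(ref)      => byRef -= ref
      case Passivating(ref) => byRef -= ref
      case _                => // no actor ref to clean up
    }
    byId -= entityId
  }
}
```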
* Migration from persistent shard coordinator to ddata with eventsourced remembered entities (config sketch below)
* Fix bin compat in typed sharding
* Add log capturing
* Java API for nested case objects in typed sharding settings
* Starting some docs for the remember entities store
* Snapshot and marker to detect going back to persistence mode
* Review feedback
* Unused imports
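A hedged configuration sketch for the migration described above: coordinator state in Distributed Data while remembered entities go through the event-sourced store. The keys reflect my understanding of the Akka 2.6 settings and should be checked against the reference configuration of the version in use:

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object ShardingMigrationConfigSketch extends App {
  val config = ConfigFactory.parseString(
    """
    akka.cluster.sharding {
      # coordinator and shard state kept in Distributed Data instead of persistence
      state-store-mode = ddata
      # remembered entities written through the event-sourced store
      remember-entities = on
      remember-entities-store = eventsourced
    }
    """).withFallback(ConfigFactory.load())

  val system = ActorSystem("sharding", config)
}
```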
* if the FunctionRef is stopped first, which is probably the most common case,
the message will be redirected as-is to deadLetters
* otherwise the wrapped message is sent to deadLetters (see the sketch below)
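A minimal, hedged sketch of how such redirected messages can be observed, by subscribing to DeadLetter events on the event stream; the listener is illustrative and unrelated to the FunctionRef internals:

```scala
import akka.actor.{ Actor, ActorSystem, DeadLetter, Props }

object DeadLetterListenerSketch extends App {
  class Listener extends Actor {
    def receive: Receive = {
      case d: DeadLetter =>
        println(s"dead letter to ${d.recipient} from ${d.sender}: ${d.message}")
    }
  }

  val system = ActorSystem("sketch")
  val listener = system.actorOf(Props(new Listener), "deadLetterListener")
  system.eventStream.subscribe(listener, classOf[DeadLetter])
}
```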
There's only one read, so the test was relying on both the Data and the Failed
being in the shared queue when it takes place.
Remove the Data element so that the poll on the shared queue waits for the
Failed to be added (see the sketch below).
Ref #28829
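A hedged sketch of the race, using a plain LinkedBlockingQueue with made-up event types rather than the actual test fixtures:

```scala
import java.util.concurrent.{ LinkedBlockingQueue, TimeUnit }

object SharedQueueRaceSketch extends App {
  sealed trait Event
  final case class Data(payload: Int) extends Event
  final case class Failed(cause: String) extends Event

  val sharedQueue = new LinkedBlockingQueue[Event]()
  sharedQueue.put(Data(42))

  // Simulate the producer signalling failure slightly later.
  new Thread(() => { Thread.sleep(100); sharedQueue.put(Failed("boom")) }).start()

  // Drain the Data element first ...
  val first = sharedQueue.poll(3, TimeUnit.SECONDS)
  assert(first == Data(42))

  // ... so that this single poll really waits until Failed has been added,
  // instead of racing against it.
  val second = sharedQueue.poll(3, TimeUnit.SECONDS)
  assert(second == Failed("boom"))
}
```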
* Replace signature with apidoc in fromMaterializer operator docs
* (untyped) actorRefWithBackpressure replace signature with apidoc
* (typed) actorRefWithBackpressure replace signature with apidoc
* signature to apidoc of map
* (typed and untyped) actorRef signature to apidoc
* Reviews ask (no replacement)
* from/apply from signature to apidoc directive.