* addition of TimestampOffset
* marked as ApiMayChange
* small mention in the docs
`eventsBySlices` is intended to be a better way to retrieve all events for an entity type
than `eventsByTag`.
The usage of `eventsByTag` for Projections has the major drawback that the number of tags
must be decided up-front and can't easily be changed afterwards. Starting with too many
tags means a lot of overhead, since many projection instances would be running on each node
in a small Akka Cluster, with each projection instance polling the database periodically.
Starting with too few tags means that it can't be scaled out later to more Akka nodes.
Instead of tags we can store a slice number derived by hashing the persistence id,
e.g. `math.abs(persistenceId.hashCode % numberOfSlices)`.
The Projection query can then be a range query over the slices. For example, with 128
slices and 4 Projection instances the slice ranges would be 0-31, 32-63, 64-95 and
96-127. The ranges can easily be split further when more Projection instances are needed,
while still reusing the offsets from the previous range distribution.
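A minimal sketch of the slice scheme described above (the object and method names are illustrative, not the actual Akka API):

```scala
object SliceRangesSketch {
  val numberOfSlices = 128

  // Slice derived from the persistence id, using the formula above.
  def sliceForPersistenceId(persistenceId: String): Int =
    math.abs(persistenceId.hashCode % numberOfSlices)

  // Split the slice space 0-127 into contiguous ranges, one per Projection instance.
  def sliceRanges(numberOfRanges: Int): Seq[Range] = {
    val rangeSize = numberOfSlices / numberOfRanges
    (0 until numberOfRanges).map(i => (i * rangeSize) until ((i + 1) * rangeSize))
  }

  def main(args: Array[String]): Unit = {
    // With 4 Projection instances: 0-31, 32-63, 64-95, 96-127
    sliceRanges(4).foreach(r => println(s"${r.min}-${r.max}"))
    // Scaling out to 8 instances splits each range in two (0-15, 16-31, ...),
    // so offsets stored for the previous ranges can still be reused.
    sliceRanges(8).foreach(r => println(s"${r.min}-${r.max}"))
  }
}
```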
* allow `currentPersistenceIds` for event sourced journals
This allows `currentPersistenceIds(afterId, limit)`
to be mixed in with a `ReadJournal` as well as with a
`DurableStateStore`.
Instead of duplicating `CurrentDurableStatePersistenceIdsQuery` I
renamed it to `PagedPersistenceIdsQuery` and removed the restriction
that it must be a `DurableStateStore`. The downside is that it is less
obvious where it should or can be mixed in, which is now only communicated
through the java/scaladoc and the testkit example.
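A hedged sketch of paging through persistence ids with the renamed query trait, assuming it lives in `akka.persistence.query.scaladsl`, that the parameters are `Option[String]` and `Long`, and using a placeholder plugin id:

```scala
import akka.actor.ActorSystem
import akka.persistence.query.PersistenceQuery
import akka.persistence.query.scaladsl.{ PagedPersistenceIdsQuery, ReadJournal }
import akka.stream.scaladsl.Sink

import scala.concurrent.Future

object PagedPersistenceIdsExample {
  // Read one page of persistence ids, starting after the given id.
  def page(afterId: Option[String], limit: Long)(
      implicit system: ActorSystem): Future[Seq[String]] = {
    val query = PersistenceQuery(system)
      .readJournalFor[ReadJournal with PagedPersistenceIdsQuery]("some.plugin.query")
    query.currentPersistenceIds(afterId, limit).runWith(Sink.seq)
  }
}
```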
* Split Journal and DurableState traits
* Add slice utilities to Persistence
* These will be used by a persistence plugin when it supports
eventsBySlices (see the sketch after this list)
* Good to have these implementations in a single place in Akka rather
than duplicating it in different plugins
* The numberOfSlices is hardcoded to 128 with the motivation described in
the doc comment, but by placing it in the Persistence extension we keep the
possibility of making it configurable in the future if that becomes necessary
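A hedged sketch of how a plugin could use these utilities via the Persistence extension; the method names `sliceForPersistenceId` and `sliceRanges` are assumptions based on the description above:

```scala
import akka.actor.ActorSystem
import akka.persistence.Persistence

object SliceUtilitiesSketch {
  def demo(system: ActorSystem): Unit = {
    val persistence = Persistence(system)
    // Slice for a persistence id, always in the range 0 until 128.
    val slice: Int = persistence.sliceForPersistenceId("ShoppingCart|cart-1")
    // Split the 128 slices into 4 ranges, e.g. one per Projection instance.
    val ranges: Seq[Range] = persistence.sliceRanges(4)
    val formatted = ranges.map(r => s"${r.min}-${r.max}").mkString(", ")
    println(s"slice=$slice, ranges=$formatted")
  }
}
```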
* ClusterSingletonManagerSpec is failing with Artery aeron-udp because of
slowness caused by starvation
* we use 5 nodes with 4 vCPU each
* ClusterSingletonManagerSpec uses 8 pods
* my thinking is that with the previous cpu request of 1 it might
schedule too many pods on the same node (if it doesn't distribute
them evenly)
* with this new cpu request it should still be able to schedule 2 pods
per node, which covers all tests except the StressSpec, which is
disabled anyway
* also changed to n2 series and reduced idle-cpu-level
Perhaps we should add a page to the docs summarizing the status
of Scala 3 support, so we can point people to that rather than directly
at the GitHub issues?
* Clarify docs around cluster shutdown
Previous docs could give the impression that changing the number of cluster nodes required a full cluster shutdown. This clarifies that a full shutdown is only needed when changing the number of shards, and that changing the number of nodes does not require changing the number of shards.
PersistenceTestKitDurableStateStore.currentChanges was correctly only
returning the current changes, however it was not completing until an
additional change was made. This fixes that.
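A hedged sketch of the fixed behavior; the import path for the testkit store, the tag "my-tag" and how the store instance is obtained are assumptions:

```scala
import akka.actor.ActorSystem
import akka.persistence.query.{ DurableStateChange, NoOffset }
import akka.persistence.testkit.state.scaladsl.PersistenceTestKitDurableStateStore
import akka.stream.scaladsl.Sink

import scala.concurrent.Future

object CurrentChangesSketch {
  // Before the fix this stream only completed after an additional change was
  // stored; now it completes as soon as the current changes have been emitted.
  def currentChangesCompletes(store: PersistenceTestKitDurableStateStore[String])(
      implicit system: ActorSystem): Future[Seq[DurableStateChange[String]]] =
    store.currentChanges("my-tag", NoOffset).runWith(Sink.seq)
}
```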