* ClusterSingletonManagerSpec is failing with Artery aeron-udp because of
slowness caused by CPU starvation
* we use 5 nodes with 4 vCPU each
* ClusterSingletonManagerSpec uses 8 pods
* my thinking is that with the previous cpu request of 1 the scheduler might
place too many pods on the same node (if it doesn't distribute them evenly)
* with this new cpu request it should still be able to schedule 2 pods per
node, which covers all tests except the StressSpec, and that one is disabled
anyway (see the capacity check after this list)
* also changed to n2 series and reduced idle-cpu-level
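
For reference, the capacity check behind the 2-pods-per-node claim: fitting 2 pods on a 4 vCPU node means each pod can request at most 2 CPUs, and 5 nodes x 2 pods = 10 pods, which covers the 8 pods this spec needs.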
Perhaps we should add a page to the docs summarizing the status
of Scala 3 support, so we can point people to that rather than directly
at the GitHub issues?
* Clarify docs around cluster shutdown
Previous docs could give the impression that changing the number of cluster nodes required a full cluster shutdown. This clarifies that the shutdown is only needed when changing the number of shards, and that adjusting the number of shards is not required when changing the number of nodes.
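
For context, a minimal sketch of the setting involved (using the Typesafe Config API directly; the key is Akka Cluster Sharding's `akka.cluster.sharding.number-of-shards`, the value shown is illustrative):

```scala
import com.typesafe.config.{ Config, ConfigFactory }

object ShardCountSketch {
  // number-of-shards is fixed for the lifetime of the cluster: changing it
  // is what requires a full cluster shutdown. Adding or removing nodes does
  // NOT require touching this value; the existing shards are simply
  // rebalanced over whatever nodes are present.
  val config: Config = ConfigFactory.parseString("""
    akka.cluster.sharding.number-of-shards = 1000
  """)
}
```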
PersistenceTestKitDurableStateStore.currentChanges was correctly only
returning the current changes, however it was not completing until an
additional change was made. This fixes that.
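
A minimal sketch of the expected behavior (assuming the persistence test kit state plugin is wired in as the durable state store; the tag, system name and timeout are illustrative):

```scala
import scala.concurrent.Await
import scala.concurrent.duration._

import akka.actor.ActorSystem
import akka.persistence.query.Offset
import akka.persistence.state.DurableStateStoreRegistry
import akka.persistence.testkit.state.scaladsl.PersistenceTestKitDurableStateStore
import akka.stream.scaladsl.Sink

object CurrentChangesSketch extends App {
  // Assumes akka.persistence.testkit.state is configured as the durable state plugin.
  implicit val system: ActorSystem = ActorSystem("sketch")

  val store = DurableStateStoreRegistry(system)
    .durableStateStoreFor[PersistenceTestKitDurableStateStore[String]](
      PersistenceTestKitDurableStateStore.Identifier)

  // currentChanges should emit whatever changes exist for the tag and then
  // complete, without waiting for another upsert (which is what this fixes).
  val changes = store.currentChanges("my-tag", Offset.noOffset).runWith(Sink.seq)
  Await.result(changes, 3.seconds)
  system.terminate()
}
```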
* Include Scala 3 in cross-build
sbt cross-building sometimes behaves surprisingly, so this does not work
yet: when switching to 3.0.1-RC1 it still tries to build the modules
that do not support that version yet, even though they are 'excluded'.
This also currently breaks cross-publishing, so we cannot merge this.
Once this works we should add a note to the documentation clarifying
that the Scala 3 artifacts are experimental.
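
One common way to express this kind of per-module exclusion in sbt is per-project crossScalaVersions; a sketch, which may not be exactly how this change does it (module names and the Scala 2 versions are placeholders, 3.0.1-RC1 is from the text):

```scala
// build.sbt (sketch)
lazy val ported = project
  .settings(crossScalaVersions := Seq("2.12.14", "2.13.6", "3.0.1-RC1"))

// Modules that do not yet compile on Scala 3 drop that version here;
// cross commands such as `+Test/compile` or `+publishSigned` are then
// expected to skip them for 3.0.1-RC1, which is the part that is not
// behaving as expected yet.
lazy val notYetPorted = project
  .settings(crossScalaVersions := Seq("2.12.14", "2.13.6"))
```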
* Fix jackson test dependency
* Don't publish docs for scala3 artifacts for now
* Publish empty doc packages for Scala 3 artifacts
* Remove OptionVal workaround
* Revert "Avoid pattern-matching on OptionVal since Scala 2.13 allocates when checking the pattern"
This reverts commit f0194bbc1ad43ac2c79bf156bfe91adf7fd5e538.
* Revert "Optimizes retrieval of mandatoryAttributes by removing potential allocation of OptionVal"
This reverts commit 165b0e0d5c057965e37418299061bdf48c33fc44.
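
For context, a sketch of the two styles involved, using akka.util.OptionVal (the methods and fallback values are illustrative, not the actual call sites):

```scala
import akka.util.OptionVal

object OptionValStyles {
  // Style restored by the reverts: pattern matching on OptionVal.
  def viaMatch(v: OptionVal[String]): String =
    v match {
      case OptionVal.Some(s) => s
      case _                 => "fallback"
    }

  // Style the reverted workaround had introduced: explicit isDefined/get,
  // meant to avoid an allocation made by the Scala 2.13 pattern matcher.
  def viaCheck(v: OptionVal[String]): String =
    if (v.isDefined) v.get else "fallback"
}
```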
* I think it was a thread starvation problem because the next test step could
start before the previous pool had been terminated.
* Many (at least 5) threads are blocked in this test and AkkaSpec defines a max of 8.
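
A sketch of the kind of guard this implies (pool size, timeout and the wrapper method are hypothetical; the real spec's setup is not shown):

```scala
import java.util.concurrent.{ Executors, TimeUnit }

object AwaitPoolShutdown {
  def runStepThenShutdown(step: Runnable): Unit = {
    val pool = Executors.newFixedThreadPool(5)
    try pool.execute(step)
    finally {
      // Block until the pool's threads are really gone before the next test
      // step starts, so they don't count against AkkaSpec's thread budget.
      pool.shutdown()
      pool.awaitTermination(10, TimeUnit.SECONDS)
    }
  }
}
```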
It cannot be observed from the SubSink side whether the materialization failed or the
running stream was already cancelled, so just ignore the sub-sink cancellation in the
case where the sink is already cancelled.
Previously this included a coarse-grained Java reformat that seems never to have been
quite worked through (it reformatted generated sources etc.), so go back on that,
and make sure verifyCodeStyle checks exactly what we require for PR validation to pass
(and that the two do not diverge)
* "must send elements downstream as soon as time comes"
* it failed with perThrottleInterval:
Map(0 -> List((0,1), (499,2)), 2 -> List((1000,3), (1499,4)), 3 -> List((1999,5)))
* round the interval groups
* use dilated
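
A sketch of the two fixes (timestamps are from the failing run above; `received`, `interval` and the grouping code are illustrative, and `dilated` is the akka-testkit duration scaling, which needs an implicit ActorSystem):

```scala
import scala.concurrent.duration._

import akka.actor.ActorSystem
import akka.testkit._

object ThrottleGroupingSketch extends App {
  implicit val system: ActorSystem = ActorSystem("sketch")

  // Scale the expected interval with the test time factor.
  val interval: FiniteDuration = 500.millis.dilated

  // (timestampMillis, element) pairs as in the failing run above.
  val received = List((0L, 1), (499L, 2), (1000L, 3), (1499L, 4), (1999L, 5))

  // Round to the nearest interval instead of truncating, so an element that
  // shows up just before a boundary (499 ms here) is attributed to the
  // interval it was scheduled for rather than the previous one.
  val perThrottleInterval =
    received.groupBy { case (t, _) => math.round(t.toDouble / interval.toMillis) }

  println(perThrottleInterval) // with no dilation: keys 0, 1, 2, 3, 4

  system.terminate()
}
```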