* take it from testkit settings instead of hard-coding it (see the sketch after these bullets)
* dilate it
* it's still somewhat confusing since we have both
classic and typed testkits and they dilate the default
timeout differently, but don't want to change too much
* this saves at least 2 seconds where the coordinator is not able to respond when the oldest node is shut down
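A minimal sketch of what this amounts to in a classic testkit test, with an illustrative helper name (not code from these commits):

```scala
import akka.actor.ActorSystem
import akka.testkit._
import scala.concurrent.duration.FiniteDuration

// Illustrative helper: read the default timeout from the testkit settings and
// dilate it by akka.test.timefactor instead of hard-coding a duration.
object DilatedDefaultTimeout {
  def apply()(implicit system: ActorSystem): FiniteDuration =
    TestKitExtension(system).DefaultTimeout.duration.dilated
}
```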
* also reduce default write-majority-plus for sharding, introduced in #28856 (see the config sketch below)
The old logic allowed a race condition where the `StartEntity` from the
test arrived at the ShardRegion before the termination of the actor did,
causing it to ignore the `StartEntity`.
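Overriding that kind of setting would look roughly like the sketch below; the exact `coordinator-state` config paths and the values shown are assumptions based on the commit message, so check `reference.conf` for the authoritative keys and defaults.

```scala
import com.typesafe.config.ConfigFactory

object ShardingConsistencyTuning {
  // Assumed config paths and example values, not the actual defaults.
  val config = ConfigFactory.parseString("""
    akka.cluster.sharding.coordinator-state.write-majority-plus = 3
    akka.cluster.sharding.coordinator-state.read-majority-plus = 5
    """)
}
```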
* Set sbtopts inline
* Ignore flaky, obsolete test
* Adds concurrency limit (run only latest commit)
* Don't run scala3 workflows until ready to merge
* split publishLocal from pr validation
* fix: Defer coordinator stop until region graceful stop has completed #28917
* Added multi jvm test
* Formatting
* Also send GracefulShutdown to region if it hasn't started gracefully shutting down yet
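For context, a minimal sketch of triggering the graceful region stop that these fixes are about, using the classic API; the `"Counter"` entity type name is hypothetical.

```scala
import akka.actor.{ ActorRef, ActorSystem }
import akka.cluster.sharding.{ ClusterSharding, ShardRegion }

object GracefulRegionStop {
  // Ask the region to hand off its shards and stop; per the fix above, the
  // coordinator is not stopped until this graceful stop has completed.
  def stop(system: ActorSystem): Unit = {
    val region: ActorRef = ClusterSharding(system).shardRegion("Counter") // hypothetical type name
    region ! ShardRegion.GracefulShutdown
  }
}
```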
* Deprecate LevelDB
In preparation for moving it into the testing infra (or deleting it completely) at some distant future point in time
* Remove LevelDB tests where there is also an inmem one
* More details in deprecation text, recommend inmem + journal proxy for testing etc.
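A sketch of the recommended test setup, assuming the standard in-memory journal plugin id; the journal proxy can be layered on top when several actor systems need to share one journal.

```scala
import com.typesafe.config.ConfigFactory

object InmemJournalTestConfig {
  // Use the in-memory journal instead of LevelDB for tests.
  val config = ConfigFactory.parseString("""
    akka.persistence.journal.plugin = "akka.persistence.journal.inmem"
    """)
}
```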
* member information for full cluster shutdown
* Cluster singleton: don't hand over when in ready-for-shutdown state
* Noop everything in shard coordinator
* Set all members to preparing for shutdown
* Don't allow a node to join after prepare for shutdown
* Review feedback: singleton listens to all member changes
* Java API
* More better
* Keep sharding working while ready for shutdown
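A sketch of kicking off the prepare-for-shutdown flow described in the bullets above, assuming the typed Cluster manager accepts a `PrepareForFullClusterShutdown` message as these commits suggest; verify the exact message name against the released API.

```scala
import akka.actor.typed.ActorSystem
import akka.cluster.typed.{ Cluster, PrepareForFullClusterShutdown }

object FullClusterShutdownPrepare {
  // Moves members towards the preparing-for-shutdown / ready-for-shutdown
  // states: sharding keeps working, but no new nodes may join.
  def prepare(system: ActorSystem[_]): Unit =
    Cluster(system).manager ! PrepareForFullClusterShutdown
}
```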
* Mima
* Revert DEBUG logging
* gs
* Fix api doc link
* Missed review feedback
* Review feedback
* Remove @switch when it doesn't take effect
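For context, `@switch` only takes effect when the match compiles to a JVM tableswitch/lookupswitch, i.e. plain literal patterns without guards or type tests, e.g.:

```scala
import scala.annotation.switch

object SwitchExample {
  // Literal Char patterns compile to a JVM switch, so @switch is effective
  // here; with guards or non-literal patterns it would be a no-op.
  def kind(c: Char): String = (c: @switch) match {
    case '0' | '1' => "bit"
    case ' '       => "space"
    case _         => "other"
  }
}
```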
* Use ActorRef.noSender
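i.e. pass the explicit constant instead of `null` when there is no sending actor; a trivial sketch:

```scala
import akka.actor.ActorRef

object NoSenderTell {
  // ActorRef.noSender states the intent explicitly instead of passing null.
  def fireAndForget(target: ActorRef, message: Any): Unit =
    target.tell(message, ActorRef.noSender)
}
```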
* Minor tweaks to SchedulerSpec
* Disambiguate TypedActor for Scala 3
* Bump ScalaTest to a version compatible with Scala 3
* Bump ScalaCheck
* Disambiguate Event in SupervisorHierarchySpec
* Scala 3 compatible EventBusSpec
* Prevent private unused variables from being erased by Scala 3
* Bump mockito
* Explicit actorRef2Scala import
* restore original .scalafix.conf
* Scala 3 compatible tailrec
* Reminder to re-add @switch annotation in case
* Move to nowarn instead of silencer
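`scala.annotation.nowarn` is available from Scala 2.12.13 / 2.13.2, which is what makes the silencer plugin unnecessary; a minimal usage sketch:

```scala
import scala.annotation.nowarn

object NowarnExample {
  @deprecated("use something else", "1.0")
  def oldApi(): Int = 1

  // Suppresses the deprecation warning for this definition only.
  @nowarn("cat=deprecation")
  def stillUsesOldApi(): Int = oldApi()
}
```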
* Bump to Scala 2.12.13
* Cross compatible annotations
* fix docs generation
* adapt the build for Scala 3
* fix errors except for the event bus
* remove more SerialVersionUID from traits
* scalacheck only from scalatest
* cross-compile akka-actor-tests
* restore cross-compilation
* early initializers workaround
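Early initializers (`class C extends { val x = ... } with T`) are dropped in Scala 3; one cross-compatible workaround, sketched here with hypothetical names, is to move the eagerly needed val into its own trait that is initialized earlier in the linearization.

```scala
// Hypothetical example of the workaround pattern, not code from the Akka build.
trait NeedsName {
  def name: String
  val greeting: String = s"hello $name" // needs `name` at initialization time
}

// Scala 2 only: class WithEarlyInit extends { val name = "a" } with NeedsName
trait NameFirst { val name: String = "a" }

// NameFirst is initialized before NeedsName, so `greeting` sees "a".
class WorksInBothScalaVersions extends NameFirst with NeedsName
```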
* scalacheck switch
* cross compatible FSM.State class
* cross compatible LARS spec
* Change results to pass LineNumberSpec
* fix stackoverflow in AsyncDnsResolverIntegrationSpec
* FSM.State unapply
* fix Scala 2.13 mima
* SerialVersionRemover compiler plugin
* removed unused nowarns
* because it's likely that the first GetShardHome request will result in an
allocation update and then all are stashed again
* rename to unstashOneGetShardHomeRequest
* we already had it in the ShardRegion
* it's possible to see it from the actor path but that might not be
obvious and many forget to configure their logback to show the akkaSource
* fix test
Adds some level of cluster awareness to both LeastShardAllocationStrategy implementations:
* #27368 prefer shard allocations on new nodes during rolling updates
* #27367 don't rebalance during rolling update
* #29554 don't rebalance when there are joining nodes
* #29553 don't allocate to leaving, downed, exiting and unreachable nodes
* When allocating, regions on nodes that are joining, unreachable, or leaving are de-prioritized, to decrease the risk that a shard is allocated only to immediately need to be re-allocated on a different node.
* The rebalance in the LeastShardAllocationStrategy only compares the region
with the most shards to the one with the fewest, which makes the rebalance rather
slow. By default it only rebalances 1 shard at a time.
* This new strategy looks at all current allocations to find the optimal
number of shards per region and tries to adjust towards that value,
picking from all regions with more shards than the optimal.
* Absolute and relative limits on how many shards can be rebalanced
in one round (see the sketch after this list).
* It also doesn't start a new rebalance round until the previous one has
completed.
* unit tests
* second phase for fine-grained rebalance, since due to rounding the first phase will not be perfect
* randomized unit test
* configuration settings
* docs
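A small sketch of the arithmetic described above (illustrative only, not Akka's internal code): the even spread targeted per region, and the cap on how many shards one rebalance round may move given the new absolute and relative limits.

```scala
object RebalanceLimitSketch {
  // Target shards per region when spread evenly, rounding up for the remainder.
  def optimalPerRegion(numberOfShards: Int, numberOfRegions: Int): Int =
    (numberOfShards + numberOfRegions - 1) / numberOfRegions

  // At most min(absoluteLimit, relativeLimit * numberOfShards) shards are moved
  // per round, and a new round starts only after the previous one has completed.
  def maxShardsPerRound(numberOfShards: Int, absoluteLimit: Int, relativeLimit: Double): Int =
    math.min(absoluteLimit, (relativeLimit * numberOfShards).toInt)
}
```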
* Reduce sharding warnings when there are no buffered messages
If shard regions are started before the cluster is formed, warnings are
logged. The user can wait until SelfUp, but for the cases where they don't,
keep logging at debug level until there are buffered messages.
* Review feedback
* Review feedback