* Deprecate LevelDB
In preparation for moving it into the testing infra (or deleting it completely) at some point in the future
* Remove LevelDB tests where there is also an inmem one
* More details in the deprecation text; recommend inmem + journal proxy for testing etc. (see the sketch below)
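A minimal sketch of the kind of test setup the deprecation text can recommend, using the built-in inmem journal instead of LevelDB (the plugin ids below are the standard akka-persistence ones; the journal proxy is only needed when several test systems must share one journal):

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// Test setup using the in-memory journal instead of the deprecated
// LevelDB plugin. "akka.persistence.journal.inmem" and the local
// snapshot store are standard akka-persistence plugin ids.
val testConfig = ConfigFactory.parseString("""
  akka.persistence.journal.plugin = "akka.persistence.journal.inmem"
  akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot-store.local"
""")

val system = ActorSystem("PersistenceTest", testConfig)
```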
* member information for full cluster shutdown
* Cluster singleton: don't hand over when ready for shutdown
* Noop everything in shard coordinator
* Set all members to preparing for shutdown
* Don't allow a node to join after prepare for shutdown
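A hedged sketch of how the full-cluster-shutdown feature is triggered, assuming the typed cluster API's `PrepareForFullClusterShutdown` command (the name is taken from these commits):

```scala
import akka.actor.typed.ActorSystem
import akka.cluster.typed.{ Cluster, PrepareForFullClusterShutdown }

// Ask the cluster to move every member to "preparing for shutdown":
// singletons stop handing over, the shard coordinator no-ops, and no
// new nodes may join, so the whole cluster can be stopped safely.
def initiateFullClusterShutdown(system: ActorSystem[_]): Unit =
  Cluster(system).manager ! PrepareForFullClusterShutdown
```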
* Review feedback: singleton listens to all member changes
* Java API
* Further improvements
* Keep sharding working while ready for shutdown
* Mima
* Revert DEBUG logging
* gs
* Fix api doc link
* Missed review feedback
* Review feedback
* Remove @switch when it doesn't take effect
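For context, a small example of when `@switch` does and doesn't take effect (the annotation only matters when the match can compile to a JVM switch):

```scala
import scala.annotation.switch

// Takes effect: literal Char cases without guards can compile to a
// tableswitch, and @switch makes the compiler verify that it does.
def kind(c: Char): Int = (c: @switch) match {
  case 'a' => 0
  case 'b' => 1
  case _   => 2
}

// No effect: a guard prevents a tableswitch, so annotating this match
// with @switch would only produce a warning; such annotations were
// the ones removed.
def sign(i: Int): Int = i match {
  case n if n < 0 => -1
  case 0          => 0
  case _          => 1
}
```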
* Use ActorRef.noSender
* Minor tweaks to SchedulerSpec
* Disambiguate TypedActor for Scala 3
* Bump ScalaTest to a version compatible with Scala 3
* Bump ScalaCheck
* Disambiguate Event in SupervisorHierarchySpec
* Scala 3 compatible EventBusSpec
* Prevent private unused variables from being erased by Scala 3
* Bump mockito
* Explicit actorRef2Scala import
* restore original .scalafix.conf
* Scala 3 compatible tailrec
* Reminder to re-add the @switch annotation in case it takes effect again
* Move to nowarn instead of silencer
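The replacement in code looks roughly like this: the standard-library `@nowarn` annotation (available from Scala 2.12.13 / 2.13.2 and in Scala 3) takes over from the silencer plugin's `@silent`:

```scala
import scala.annotation.nowarn

@deprecated("use newApi instead", "2.6.0")
def oldApi(): Unit = ()

// Previously @silent from the silencer compiler plugin; @nowarn ships
// with the compiler and supports warning-category filters.
@nowarn("cat=deprecation")
def stillUsesOldApi(): Unit = oldApi()
```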
* Bump to Scala 2.12.13
* Cross compatible annotations
* fix docs generation
* adapt the build for Scala 3
* fix errors, except the event bus
* remove more SerialVersionUID annotations from traits
* scalacheck only from scalatest
* cross-compile akka-actor-tests
* restore cross-compilation
* early initializers workaround
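An illustrative sketch of the kind of rewrite that works around early initializers, which Scala 3 removed (the names here are made up):

```scala
trait Logging {
  def name: String
  // Runs during trait initialization, so `name` must already be set.
  val prefix: String = s"[$name]"
}

// Scala 2 only:  class Watcher extends { val name = "w" } with Logging
// Cross-compatible workaround: constructor parameters are initialized
// before the trait body, so the early initializer is unnecessary.
class Watcher(val name: String) extends Logging

assert(new Watcher("w").prefix == "[w]")
```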
* scalacheck switch
* cross compatible FSM.State class
* cross compatible LARS spec
* Change results to pass LineNumberSpec
* fix stackoverflow in AsyncDnsResolverIntegrationSpec
* FSM.State unapply
* fix Scala 2.13 mima
* SerialVersionRemover compiler plugin
* removed unused nowarns
* Unstash only one request at a time, because it's likely that the first GetShardHome request will result in an allocation update and then all the rest are stashed again (see the sketch below)
* rename to unstashOneGetShardHomeRequest
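An illustrative sketch of that unstash-one idea; the buffer and handler here are hypothetical stand-ins, not the coordinator's actual internals:

```scala
import scala.collection.mutable

final case class GetShardHome(shard: String)

// Requests that arrive while an allocation is in flight get buffered.
val stashedGetShardHomeRequests = mutable.Queue.empty[GetShardHome]

// When the allocation completes, replay only one request: handling it
// will most likely start the next allocation update, which would just
// stash all the others again anyway.
def unstashOneGetShardHomeRequest(handle: GetShardHome => Unit): Unit =
  if (stashedGetShardHomeRequests.nonEmpty)
    handle(stashedGetShardHomeRequests.dequeue())
```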
* we already had it in the ShardRegion
* it's possible to see it from the actor path, but that might not be obvious, and many forget to configure their Logback pattern to show the akkaSource MDC value (e.g. `%X{akkaSource}`)
* fix test
Adds some level of cluster awareness to both the LeastShardAllocationStrategy implementations:
* #27368 prefer shard allocations on new nodes during rolling updates
* #27367 don't rebalance during rolling update
* #29554 don't rebalance when there are joining nodes
* #29553 don't allocate to leaving, downed, exiting and unreachable nodes
* When allocating, regions on nodes that are joining, unreachable, or leaving are de-prioritized, to decrease the risk that a shard is allocated only to immediately need re-allocation on a different node.
* The rebalance in the old LeastShardAllocationStrategy only compares the region
with the most shards to the one with the fewest, which makes rebalancing rather
slow; by default it rebalances only 1 shard at a time.
* The new strategy looks at all current allocations to find the optimal
number of shards per region and tries to adjust towards that value,
picking from all regions with more shards than the optimal.
* Absolute and relative limits on how many shards can be rebalanced
in one round (see the sketch below).
* It also doesn't start a new rebalance round until the previous one has
completed.
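A hedged sketch of the arithmetic described above (illustrative names, not the exact Akka code):

```scala
// Optimal number of shards per region: total divided by region count,
// rounded up so the remainder is spread over some regions.
def optimalPerRegion(totalShards: Int, regions: Int): Int =
  totalShards / regions + (if (totalShards % regions == 0) 0 else 1)

// Cap on how many shards one rebalance round may move: the smaller of
// an absolute limit and a fraction of all shards, but at least 1.
def rebalanceLimit(totalShards: Int, absoluteLimit: Int, relativeLimit: Double): Int =
  math.max(1, math.min(absoluteLimit, (relativeLimit * totalShards).toInt))
```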
* unit tests
* second phase for fine-grained rebalance, since due to rounding the first phase will not be perfect
* randomized unit test
* configuration settings
* docs
* Reduce sharding warnings when there are no buffered messages
If shard regions are started before the cluster is formed, warnings are
logged. The user can wait until SelfUp, but for the cases where they
don't, log at debug level until there are buffered messages.
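A sketch of the resulting logging rule, as a hypothetical helper using the classic LoggingAdapter:

```scala
import akka.event.LoggingAdapter

// Warn only when messages are actually piling up; while the cluster is
// still forming and nothing is buffered, debug is enough.
def logNoCoordinator(log: LoggingAdapter, bufferedCount: Int): Unit =
  if (bufferedCount > 0)
    log.warning("No coordinator found to register. Buffered messages: {}.", bufferedCount)
  else
    log.debug("No coordinator found to register. No buffered messages yet.")
```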
* Review feedback
* Review feedback
* Forward Terminated from ShardCoordinator to RebalanceWorker
Avoiding the need for rebalance workers to watch shard regions, which is
expensive since there is one rebalance worker per shard.
* Review feedback
* Watch all regions as they may shut down after rebalance starts
* Send graceful shutdown to selection if no coordinator found
* mima
* Add missing new line
* Make log markers consistent for rebalance worker
* Allow entities to stop by terminating in sharding without remember entities #29383
An allowed transition from running/active to stopped/NoState in Shard was missed
when the logic was rewritten.
* Add a toggle to opt in to crashing the shard on illegal state transitions
The default is to log an error without crashing the shard and all its other entities; our tests have the toggle enabled (see the config sketch below).
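A sketch of enabling the toggle in tests; the setting key is assumed from the commit description and may differ from the shipped reference.conf:

```scala
import com.typesafe.config.ConfigFactory

// Assumed key (not verified against the shipped reference.conf): opt in
// to crashing the shard on an illegal entity state transition, as the
// test suites do; the default only logs an error.
val strictShardConfig = ConfigFactory.parseString(
  "akka.cluster.sharding.fail-on-invalid-entity-state-transition = on")
```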
* Fix passivation when not using remember entities, fixing #29359 and possibly #27549