* member information for full cluster shutdown
* Cluster singleton: don't hand over when in ready-for-shutdown state
* Noop everything in shard coordinator
* Set all members to preparing for shutdown
* Don't allow a node to join after prepare for shutdown
* Review feedback: singleton listens to all member changes
* Java API
* Further improvements
* Keep sharding working while ready for shutdown
* Mima
* Revert DEBUG logging
* gs
* Fix api doc link
* Missed review feedback
* Review feedback
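
A minimal sketch of asking a cluster to prepare for a full shutdown, which is what the changes above enable. It assumes the akka-cluster-typed `Cluster(system).manager` API and the `PrepareForFullClusterShutdown` command introduced by this work; treat the exact names as illustrative.

```scala
import akka.actor.typed.ActorSystem
import akka.actor.typed.scaladsl.Behaviors
import akka.cluster.typed.{ Cluster, PrepareForFullClusterShutdown }

object PrepareShutdownExample {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem[Nothing](Behaviors.empty, "example")
    // Moves all members towards the preparing/ready-for-shutdown states.
    // While ready for shutdown, sharding keeps working, singletons are not
    // handed over, and new nodes are not allowed to join (per the changes above).
    Cluster(system).manager ! PrepareForFullClusterShutdown
  }
}
```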
* Remove @switch when it doesn't take effect
* Use ActorRef.noSender
* Minor tweaks to SchedulerSpec
* Disambiguate TypedActor for Scala 3
* Bump ScalaTest to a version compatible with Scala 3
* Bump ScalaCheck
* Disambiguate Event in SupervisorHierarchySpec
* Scala 3 compatible EventBusSpec
* Prevent private unused variables to be erased by Scala 3
* Bump mockito
* Explicit actorRef2Scala import
* restore original .scalafix.conf
* Scala 3 compatible tailrec
* Reminder to re-add switch annotation in case
* Move to nowarn instead of silencer
* Bump to Scala 2.12.13
* Cross compatible annotations
* fix docs generation
* adapt the build for Scala 3
* fix errors except bus
* remove more SerialVersionUID from traits
* scalacheck only from scalatest
* cross-compile akka-actor-tests
* restore cross-compilation
* early initializers workaround
* scalacheck switch
* cross compatible FSM.State class
* cross compatible LARS spec
* Change results to pass LineNumberSpec
* fix stackoverflow in AsyncDnsResolverIntegrationSpec
* FSM.State unapply
* fix Scala 2.13 mima
* SerialVersionRemover compiler plugin
* removed unused nowarns
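
Regarding the move from the silencer plugin to `@nowarn` above: since Scala 2.12.13 / 2.13.2 the standard library ships `scala.annotation.nowarn`, so warning suppression no longer needs a compiler plugin. A small illustrative example (the filter string is just one possibility):

```scala
import scala.annotation.nowarn

object NowarnExample {
  @deprecated("use newApi instead", "2.6.0")
  def oldApi(): Int = 1

  def newApi(): Int = 2

  // Suppresses only the deprecation warning at this definition,
  // replacing the silencer plugin's @silent annotation.
  @nowarn("cat=deprecation")
  def stillUsingOldApi(): Int = oldApi()
}
```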
* SBR now downs a node when it notices that it has been quarantined from other nodes #29565
* Add MiMa excludes
* Review feedback mostly addressed
* One more stale comment removed
* More stress
* Ignore if remote quarantining node is not part of cluster
* Preliminary (untested) keepalive server support
* Completed reproducer of scenario discussed in PR
* Fix incorrect extends clauses in multi-jvm tests
* Put the test transport dropping after the control junction so that control messages are also dropped on blackhole.
* Test cleanup/review feedback addressed
* Ping from both nodes of side 1
Co-authored-by: Renato Cavalcanti <renato@cavalcanti.be>
* Add some debug logging to test to nail down failure cause
* Log when InboundTestStage lets messages through because no association yet
Co-authored-by: Renato Cavalcanti <renato@cavalcanti.be>
In a recent support case the 'manual cluster join required'
log message caused some confusion.
Turns out the configuration we used to detect whether Cluster
Bootstrap is available has been changed since
https://github.com/akka/akka-management/pull/476
Unfortunately I don't think we can detect whether Cluster
Bootstrap is actually enabled, since users may call
`ClusterBootstrap(system).start()` whenever they like.
Updated the logging to reflect that better.
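
For context, Cluster Bootstrap is typically started from user code rather than purely from configuration, which is why its presence can't be detected reliably. An illustrative sketch using the akka-management APIs mentioned above:

```scala
import akka.actor.ActorSystem
import akka.management.scaladsl.AkkaManagement
import akka.management.cluster.bootstrap.ClusterBootstrap

object BootstrapExample {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem("example")
    // Users may start these at any point after system startup, so the
    // 'manual cluster join required' hint can only guess whether Cluster
    // Bootstrap is in use.
    AkkaManagement(system).start()
    ClusterBootstrap(system).start()
  }
}
```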
* Ignore gossip deserialization failures
Only expected to happen during a rolling upgrade. Gives us the option to do
incompatible things in Gossip and have the old nodes ignore the
deserialization error.
* Review feedback
* when SBR downs the reachable side (minority) it's important
to quickly inform everybody to shut down
* send gossip directly to downed node, STONITH signal
* gossip to a few random nodes immediately when self is downed, which
is always the last one downed by the SBR
* enable gossip speedup when there are downed members
* adjust StressSpec to normal again
* adjust TransitionSpec to the new behavior
* Config for when to move to WeaklyUp
* noticed when I was testing with the StressSpec that nodes are often moved to WeaklyUp
in normal joining scenarios (also seen in Kubernetes testing)
* better to wait somewhat longer, since WeaklyUp will require a new convergence round,
making the full joining -> up transition take longer
* changed existing config property to be a duration
* default 7s, previously it was 3s
* on => 7s
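
An illustrative configuration for the duration-based setting described above, assuming it is `akka.cluster.allow-weakly-up-members` (which previously only took on/off):

```hocon
akka.cluster {
  # Was on/off; now a duration. `on` maps to the 7s default and `off`
  # disables moving joining members to WeaklyUp before convergence.
  allow-weakly-up-members = 7s
}
```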
* Since DeathWatchNotification is sent over the control channel it may overtake
other messages that have been sent from the same actor before it stopped.
* It can be confusing that Terminated can't be used as an end-of-conversation marker.
* In classic Remoting we didn't have this problem because all messages were sent over
the same connection.
* don't send DeathWatchNotification when system is terminating
* when using Cluster we can rely on the other side publishing AddressTerminated
when the member has been removed
* there is actually already a race condition that will often result in the DeathWatchNotification
from the terminating side never being sent
* in DeathWatch.scala it will remove the watchedBy when receiving AddressTerminated, and that
may (sometimes) happen before tellWatchersWeDied
* same for Unwatch
* to avoid sending many Unwatch messages when the watcher's ActorSystem is terminated
* the same race exists for Unwatch as for DeathWatchNotification, if RemoteWatcher publishes AddressTerminated
before the watcher is terminated
* config for the flush timeout, and the possibility to disable it
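
A small classic-actors sketch of the end-of-conversation caveat described above; over Artery the DeathWatchNotification travels on the control channel, so the watcher may see Terminated before the watched actor's last regular messages:

```scala
import akka.actor.{ Actor, ActorRef, Terminated }

// Replies with a final message and then stops.
class LastWords extends Actor {
  def receive = {
    case "stop" =>
      sender() ! "goodbye" // may still be in flight when Terminated is delivered
      context.stop(self)
  }
}

class Watcher(watched: ActorRef) extends Actor {
  context.watch(watched)
  watched ! "stop"

  def receive = {
    case Terminated(`watched`) =>
      // Not a safe signal that all messages from `watched` have been received.
      context.stop(self)
    case "goodbye" => // with Artery this can arrive after Terminated
  }
}
```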
* adjust default minimum for down-all-when-unstable
* when down-all-when-unstable=on it will be >= 4 seconds
* in case stable-after is tweaked to a low value such as 5 seconds
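
An illustrative configuration for the settings mentioned above; with `down-all-when-unstable = on` the effective duration is derived from `stable-after`, and per this change it will not go below 4 seconds:

```hocon
akka.cluster {
  downing-provider-class = "akka.cluster.sbr.SplitBrainResolverProvider"
  split-brain-resolver {
    # Deliberately low value; the derived down-all-when-unstable duration
    # is still kept >= 4 seconds.
    stable-after = 5s
    down-all-when-unstable = on
  }
}
```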
* will be used in rolling update features
* configured with akka.cluster.app-version
* reusing the same implementation as ManifestInfo.Version
by moving it to akka.util.Version
* additional version test
* support dynver format, + separator, and commit number
* improve version parser
* lazy parse
* make Member.appVersion internal
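
An illustrative configuration of the new setting; version strings are parsed by the `akka.util.Version` implementation mentioned above, which also accepts dynver-style values with a `+` separator and commit number:

```hocon
# Advertised to other cluster members and usable by rolling update features.
akka.cluster.app-version = "1.2.3"
```

Setting it explicitly, for example from the CI build number, lets rolling update logic compare the application versions running on old and new nodes.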
* to only exercise membership
* remote deployed routers and supervision of remote deployed actors
are not a priority, and that is what sometimes fails