* Fix singleton issue when leaving several nodes, #27487
* When several nodes were leaving at about the same time, the new singleton
could be started before the previous one had been completely stopped.
* Found two possible ways this could happen.
* Acting on the MemberRemoved that is emitted when the self
cluster node is shutting down.
* The HandOverDone confirmation when in Younger state, but that
node is also Leaving and could therefore be seen as Exiting
by a third node that is the next singleton.
* Keep track of all previous oldest nodes, not only the latest
(Option => List), as sketched below.
* Otherwise, in BecomingOldest it could transition to Oldest when the
previous oldest was removed even though the one before that had not
been removed yet.
* Fix failure in ClusterSingletonRestart2Spec
* OldestChanged was not emitted when an Exiting member was removed
* The initial membersByAge must also contain Leaving and Exiting members
(cherry picked from commit ee188565b9f3cf2257ebda218cec6af5a4777439)
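A minimal sketch of the Option => List change described above, using
hypothetical data-class names rather than the actual
ClusterSingletonManager internals:

```scala
import akka.actor.Address

// Before: only the most recent previous oldest node was remembered.
final case class BecomingOldestBefore(previousOldest: Option[Address])

// After: remember every previous oldest node that has not been removed yet
// and only transition to Oldest once all of them are gone.
final case class BecomingOldestAfter(previousOldest: List[Address]) {
  def memberRemoved(address: Address): BecomingOldestAfter =
    copy(previousOldest = previousOldest.filterNot(_ == address))

  def safeToBecomeOldest: Boolean = previousOldest.isEmpty
}
```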
* Support configuration for Jackson MapperFeatures in Jackson Serializer
* Add JsonParser.Feature configuration support
* Add JsonGenerator.Feature configuration support
* Fix formatting issues
* Add examples for each feature configuration
* Test coverage of the override methods
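For reference, the three feature groups mentioned above map to standard
Jackson settings. A minimal sketch using the plain Jackson API; the
concrete feature names here are only examples, not the defaults chosen
by the serializer:

```scala
import com.fasterxml.jackson.core.{JsonGenerator, JsonParser}
import com.fasterxml.jackson.databind.{MapperFeature, ObjectMapper}

object JacksonFeatureExample {
  val mapper = new ObjectMapper()

  // MapperFeature: mapper-wide behavior, e.g. deterministic property ordering
  mapper.configure(MapperFeature.SORT_PROPERTIES_ALPHABETICALLY, true)

  // JsonParser.Feature: how input is read, e.g. tolerating comments in JSON
  mapper.configure(JsonParser.Feature.ALLOW_COMMENTS, true)

  // JsonGenerator.Feature: how output is written, e.g. BigDecimal in plain form
  mapper.configure(JsonGenerator.Feature.WRITE_BIGDECIMAL_AS_PLAIN, true)
}
```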
* base functionality
* fix-restart-flow
* Fix subSource / subSink cancellation handling
* GraphStage-fix
* Fix ambiguity between complete and cancellation (for isAvailable / grab)
* rename lastCancellationCause
* add mima
* fix cancellation cause propagation in OutputBoundary
* Fix cancellation cause propagation in SubSink
* Add cancellation cause logging to Flow.log
* add more comments about GraphStage portState internals
* Add some assertions in onDownstreamFinish to prevent wrong usage
* Also deprecate onDownstreamFinish() so that no one calls the wrong one
accidentally
* add SubSinkInlet.cancel(cause)
* Propagate causes in two other places
* Suggest using `cancel(in, cause)` but don't deprecate the old one; see the sketch below.
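A minimal sketch of how a custom GraphStage can pick up the cancellation
cause and forward it upstream with `cancel(in, cause)`, assuming the
Akka 2.6 stage API; the pass-through stage itself is illustrative:

```scala
import akka.stream.{Attributes, FlowShape, Inlet, Outlet}
import akka.stream.stage.{GraphStage, GraphStageLogic, InHandler, OutHandler}

class PassThrough[T] extends GraphStage[FlowShape[T, T]] {
  val in: Inlet[T] = Inlet("PassThrough.in")
  val out: Outlet[T] = Outlet("PassThrough.out")
  override val shape: FlowShape[T, T] = FlowShape(in, out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) with InHandler with OutHandler {
      override def onPush(): Unit = push(out, grab(in))
      override def onPull(): Unit = pull(in)

      // Override the overload that carries the cause; the parameterless
      // onDownstreamFinish() is the one deprecated by this change.
      override def onDownstreamFinish(cause: Throwable): Unit =
        cancel(in, cause) // forward the cause upstream instead of dropping it

      setHandlers(in, out, this)
    }
}
```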
* RebalanceWorker should watch shard regions
Fixes #27259.
The RebalanceWorker actor needs to watch the shard regions that it
expects a BeginHandOffAck message from, in case a ShardRegion shuts
down before it can receive the BeginHandOff message, which would
otherwise prevent the hand-off from completing. This can be a problem
when two nodes are shut down at about the same time.
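A minimal sketch of the watching idea; the actor and the local
BeginHandOffAck stand-in are illustrative, not the actual
akka-cluster-sharding internals. A Terminated region is treated as
accounted for so the hand-off can still complete:

```scala
import akka.actor.{Actor, ActorRef, Terminated}

// Local stand-in for the coordinator protocol message.
final case class BeginHandOffAck(shard: String)

class RebalanceWorkerSketch(shard: String, regions: Set[ActorRef]) extends Actor {
  regions.foreach(context.watch)
  private var remaining: Set[ActorRef] = regions

  def receive: Receive = {
    case BeginHandOffAck(`shard`) => acked(sender())
    case Terminated(region)       => acked(region) // region stopped before it could ack
  }

  private def acked(region: ActorRef): Unit = {
    remaining -= region
    if (remaining.isEmpty) {
      // All regions have acknowledged or terminated, so the hand-off can proceed.
      context.stop(self)
    }
  }
}
```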
'Blocking Needs Careful Management' and 'CallingThreadDispatcher'
still need to be done, but those also need example changes, so
leaving that for another PR
* Remove extensions to protobuf config checker messages
AFAICT these are never serialized/deserialized. Removing them because
they use a deprecated feature of protobuf (required fields in extensions).
* Remove extensions from protobuf
The number of shards is configurable and is on the same order of magnitude
as the number of nodes in the cluster. Logging the ActorRef for each
allocated shard is useful for seeing on which node each shard is allocated
(see the sketch below).
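A minimal sketch of that kind of logging, assuming a hypothetical wrapper
around a classic ShardAllocationStrategy; the wrapper and where its
LoggingAdapter comes from are illustrative:

```scala
import scala.collection.immutable
import scala.concurrent.{ExecutionContext, Future}

import akka.actor.ActorRef
import akka.cluster.sharding.ShardCoordinator.ShardAllocationStrategy
import akka.event.LoggingAdapter

// Delegates allocation decisions and logs the region each shard ends up on.
class LoggingAllocationStrategy(delegate: ShardAllocationStrategy, log: LoggingAdapter)(
    implicit ec: ExecutionContext)
    extends ShardAllocationStrategy {

  override def allocateShard(
      requester: ActorRef,
      shardId: String,
      currentShardAllocations: Map[ActorRef, immutable.IndexedSeq[String]]): Future[ActorRef] =
    delegate.allocateShard(requester, shardId, currentShardAllocations).map { region =>
      log.debug("Shard [{}] allocated to region [{}]", shardId, region)
      region
    }

  override def rebalance(
      currentShardAllocations: Map[ActorRef, immutable.IndexedSeq[String]],
      rebalanceInProgress: Set[String]): Future[Set[String]] =
    delegate.rebalance(currentShardAllocations, rebalanceInProgress)
}
```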