Test cleanup:
* No need to use mockito, replaced with TestProbe (a side effect is that it
  also makes some test cases more explicit about what they expect; see the
  sketch after this list)
* Use matchers to get reasonable failure messages
* Use types where it makes sense
* Remove mockito dependency from akka-actor-tests
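A minimal sketch of the TestProbe style; the system setup and message are made up for illustration, not taken from the actual tests:

```scala
import akka.actor.ActorSystem
import akka.testkit.TestProbe

object ProbeExample extends App {
  implicit val system: ActorSystem = ActorSystem("example")

  // A TestProbe stands in for the collaborator instead of a mockito mock;
  // the test states exactly which message it expects to receive.
  val probe = TestProbe()

  // The actor under test would get `probe.ref` as its collaborator and reply
  // to it; here we send directly to keep the sketch runnable.
  probe.ref ! "work-done"

  probe.expectMsg("work-done") // assertion with a readable failure message
  system.terminate()
}
```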
The idea is that `.recover { xyz => throw newException }` is common enough
that we should not log an ERROR message just because the exception wasn't
caught in the Recover stage. On the other hand, `mapError` can be a better
choice if you only want to map the error (though there may be other cases
where a partial function is not enough to avoid throwing from recover).
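For illustration, a sketch of the two styles, assuming an Akka version where an implicit ActorSystem provides the materializer; the stream contents are made up:

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.Source

object RecoverVsMapError extends App {
  implicit val system: ActorSystem = ActorSystem("example")
  import system.dispatcher

  val failing = Source(1 to 3).map(n => if (n == 2) throw new RuntimeException("boom") else n)

  // Throwing from recover to translate the failure is the common pattern the
  // note above refers to; the rethrow should not be logged as an ERROR.
  val rethrown = failing.recover { case e: RuntimeException => throw new IllegalStateException(e) }

  // mapError expresses the same intent more directly when all you want is to
  // map the error to another exception.
  val mapped = failing.mapError { case e: RuntimeException => new IllegalStateException(e) }

  mapped.runForeach(println).onComplete { result =>
    println(result) // Failure(java.lang.IllegalStateException: ...) after printing 1
    system.terminate()
  }
}
```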
* from the logs it can be seen that the singleton actor on the third node is
  started after 3 seconds, while the expectMsg("preStart") timeout is also 3 seconds
* increase that timeout
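A sketch of the kind of change; the helper and the 10-second value are illustrative, not necessarily what the spec uses:

```scala
import scala.concurrent.duration._
import akka.testkit.TestProbe

object SingletonStartTimeout {
  // Give the expectation more headroom than the ~3 seconds the singleton
  // needs to start on the third node.
  def awaitSingletonStarted(probe: TestProbe): String =
    probe.expectMsg(10.seconds, "preStart")
}
```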
* Fix race when binding to port '0' in artery tcp
Splitting the 'binding' and 'starting inbound streams' seems to make
it a bit easier to follow as well
* Add monitoring section (#27223)
* Make it a little less sparse
* Update akka-docs/src/main/paradox/additional/monitoring.md
Co-Authored-By: Peter Vlugter <pvlugter@users.noreply.github.com>
* Further suggestions
* Update akka-docs/src/main/paradox/additional/observability.md
Co-Authored-By: Helena Edelson <helena@users.noreply.github.com>
Introduces a materializer started through an extension and then an implicit
conversion for Scala turning an implicitly available ActorSystem into a
materializer. The Java APIs have been amended with run methods accepting
an ActorSystem.
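A minimal sketch of the Scala side, assuming the extension-started materializer and implicit conversion described above:

```scala
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}

object RunWithSystem extends App {
  // The system-wide materializer is started by an extension; the implicit
  // conversion turns the implicitly available ActorSystem into a Materializer,
  // so no Materializer has to be created explicitly.
  implicit val system: ActorSystem = ActorSystem("example")

  Source(1 to 3).runWith(Sink.foreach(println))
}
```

On the Java side the amended run methods take the system as an argument, along the lines of `source.runWith(sink, system)`.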
* Removes sections that describe language features: Futures and Duration
* Keeps section for logging and circuit breaker
* Keep logging as is for now, to be updated with the Typed SLF4J logging
Creates issues for
* Documenting typed extensions https://github.com/akka/akka/issues/27448
* Future interactions in https://github.com/akka/akka/issues/27449
Refs #27223
Add redirects from removed pages to 2.5 docs
Make indexes complete and fix link
'Classic' in the title for docs for classic APIs
* Fix singleton issue when leaving several nodes, #27487
* When several nodes leave at about the same time, the new singleton
  could be started before the previous one had been completely stopped.
* Found two possible ways this could happen.
* Acting on MemberRemoved, which is emitted when the self
  cluster node is shutting down.
* Acting on the HandOverDone confirmation while in the Younger state,
  when that node is also Leaving and could therefore be seen as Exiting
  by a third node that is the next singleton.
* keep track of all previous oldest nodes, not only the latest
  (see the sketch after this commit entry)
* Option => List
* Otherwise, in BecomingOldest it could transition to Oldest when the previous
  oldest was removed, even though the previous-previous oldest hadn't been removed yet
* fix failure in ClusterSingletonRestart2Spec
* OldestChanged was not emitted when Exiting member was removed
* The initial membersByAge must also contain Leaving, Exiting members
(cherry picked from commit ee188565b9f3cf2257ebda218cec6af5a4777439)
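A heavily simplified sketch of the Option => List change; names and structure are illustrative, not the actual ClusterSingletonManager internals:

```scala
import akka.cluster.UniqueAddress

// Track every previous oldest node until each has been confirmed removed,
// instead of only the most recent one (Option => List).
final case class BecomingOldestData(previousOldest: List[UniqueAddress]) {
  def removed(node: UniqueAddress): BecomingOldestData =
    copy(previousOldest = previousOldest.filterNot(_ == node))

  // Only transition to Oldest once all previously-oldest nodes are gone,
  // not just the latest one.
  def allPreviousOldestRemoved: Boolean = previousOldest.isEmpty
}
```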
* Support configuration for Jackson MapperFeatures in Jackson Serializer
* Add JsonParser.Feature configuration support
* Add JsonGenerator.Feature configuration support
* Fix formatting issues
* Add examples for each feature configuration
* Test coverage of the override methods
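An illustrative configuration sketch; the section names below are assumptions about how the serializer groups these features, while the feature names themselves are standard Jackson enum constants:

```scala
import com.typesafe.config.ConfigFactory

object JacksonFeatureConfig {
  val config = ConfigFactory.parseString("""
    akka.serialization.jackson {
      # com.fasterxml.jackson.databind.MapperFeature
      mapper-features {
        SORT_PROPERTIES_ALPHABETICALLY = on
      }
      # com.fasterxml.jackson.core.JsonParser.Feature
      json-parser-features {
        ALLOW_COMMENTS = on
      }
      # com.fasterxml.jackson.core.JsonGenerator.Feature
      json-generator-features {
        WRITE_BIGDECIMAL_AS_PLAIN = on
      }
    }
  """)
}
```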
* base functionality
* fix-restart-flow
* Fix subSource / subSink cancellation handling
* GraphStage-fix
* Fix ambiguity between complete and cancellation (for isAvailable / grab)
* rename lastCancellationCause
* add mima
* fix cancellation cause propagation in OutputBoundary
* Fix cancellation cause propagation in SubSink
* Add cancellation cause logging to Flow.log
* add more comments about GraphStage portState internals
* Add some assertions in onDownstreamFinish to prevent wrong usage
* Also deprecate onDownstreamFinish() so that no one calls the wrong one
accidentally
* add SubSinkInlet.cancel(cause)
* Propagate causes in two other places
* Suggest using `cancel(in, cause)` but don't deprecate the old one
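A sketch of using the cause-aware callbacks in a custom stage; the PassThrough stage itself is made up for illustration:

```scala
import akka.stream.{Attributes, FlowShape, Inlet, Outlet}
import akka.stream.stage.{GraphStage, GraphStageLogic, InHandler, OutHandler}

// Override onDownstreamFinish(cause) rather than the deprecated no-arg
// variant, and forward the cause upstream with cancel(in, cause).
class PassThrough[A] extends GraphStage[FlowShape[A, A]] {
  val in: Inlet[A] = Inlet("PassThrough.in")
  val out: Outlet[A] = Outlet("PassThrough.out")
  override val shape: FlowShape[A, A] = FlowShape(in, out)

  override def createLogic(attrs: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) with InHandler with OutHandler {
      override def onPush(): Unit = push(out, grab(in))
      override def onPull(): Unit = pull(in)

      override def onDownstreamFinish(cause: Throwable): Unit =
        cancel(in, cause) // propagate the cancellation cause upstream

      setHandlers(in, out, this)
    }
}
```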
* RebalanceWorker should watch shard regions
Fixes #27259.
The RebalanceWorker actor needs to watch the shard regions that it is
expecting a BeginHandOffAck message from, in case the ShardRegion shuts
down before it can receive the BeginHandOff message, which would prevent
the hand-off from completing. This can be a problem when two nodes are
shut down at about the same time.
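A sketch of the general watch-for-ack pattern with hypothetical names and messages, not the actual RebalanceWorker code:

```scala
import akka.actor.{Actor, ActorRef, Terminated}

class HandOffAckCollector(regions: Set[ActorRef]) extends Actor {
  // Watch every region we expect an ack from, so a region that shuts down
  // before replying does not stall the hand-off forever.
  regions.foreach(context.watch)
  private var pending: Set[ActorRef] = regions

  def receive: Receive = {
    case "BeginHandOffAck"  => done(sender())
    case Terminated(region) => done(region) // region stopped before acking
  }

  private def done(region: ActorRef): Unit = {
    pending -= region
    if (pending.isEmpty) context.stop(self) // hand-off can proceed
  }
}
```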