(cherry picked from commit 89af8bdb90)
* remove final identifier in serializers
* revert/deprecate ProtobufSerializer.ARRAY_OF_BYTE_ARRAY
* adding back compatible empty constructor in serializers
* make FSM.State compatible
* add back ActorPath.ElementRegex
* revert SocketOption changes and add SocketOptionV2
see a6d3704ef6
* problem filter for ActorSystem and ActorPath
* problem filter for ByteString
* problem filter for deprecated Timeout methods
* BalancingPool companion
* ask
* problem filter for ActorDSL
* event bus
* exclude hasSubscriptions
* exclude some problems in testkit
* boundAddress and addressFromSocketAddress
* Pool nrOfInstances
* PromiseActorRef
* check with 2.3.9
* migration guide note
* explicit exclude of final class problems (a MiMa filter sketch follows below)
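Most of the bullets above record MiMa (Migration Manager) exclusions.
A minimal sketch of such filters, assuming sbt-mima-plugin; the
excluded members below are illustrative, not the actual filters:

    import com.typesafe.tools.mima.core._

    // Illustrative only: the real filters live in the Akka build definition.
    val binaryCompatFilters = Seq(
      // silence a removed method that is not part of the public contract
      ProblemFilters.exclude[MissingMethodProblem]("akka.actor.SomeInternalApi.someMethod"),
      // silence "class was made final" reports for internal classes
      ProblemFilters.exclude[FinalClassProblem]("akka.util.SomeInternalClass")
    )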
Needed by Akka Streams. Old functions placed in akka.japi will be deprecated
in 2.4
(cherry picked from commit 99628f408295070848af6c23b1d722057069e660)
+act #17392 Include generated japi.function from akka-stream
* add boilerplate plugin
* make them Serializable to be able to grab the line number of Java 8 lambdas (see the sketch below)
(cherry picked from commit d5950a13d2f123d2101d56f0a8a86a2097dda8e1)
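A rough sketch of the shape of the generated SAM interfaces (arity 1
only; the real ones are generated by the boilerplate plugin for all
arities):

    package akka.japi.function

    // Extending java.io.Serializable lets a Java 8 lambda implementing this
    // trait be introspected via its SerializedLambda form, e.g. to recover
    // the source line number for error reporting.
    trait Function[-T, +R] extends java.io.Serializable {
      @throws(classOf[Exception])
      def apply(param: T): R
    }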
+ enable parallel execution
+ exclude perf tests (TODO mark more as such)
+ uses sbt-dependency-graph plugin
+ implement dependency tracking so that only those projects which
+ could have been affected by a given PR are tested
* interim measure until a proper solution can be implemented as
  described in #17281 (see the sbt sketch below)
(cherry picked from commit e0edc45d9740069b90a19ebaaec7d53a64344263)
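A minimal sbt sketch of the first two points, assuming perf tests
carry a ScalaTest tag named "performance" (the tag name is
illustrative):

    // build.sbt (sketch)
    parallelExecution in Test := true
    // keep tests tagged as performance tests out of the normal run
    testOptions in Test += Tests.Argument(TestFrameworks.ScalaTest, "-l", "performance")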
* The problem is that when an extension partly fails the next
attempt will typically generate another failure, such as
"actor name [snapshot-store] is not unique"
* We have seen this problem for both persistence and cluster
extensions
* Extensions are now only given one chance to initialize and
  thereafter fail fast with the same exception as the first failure
  (see the sketch below)
* in the end TestKitBase eagerly initializes the ActorSystem; to
  avoid the need for lazy val tricks I changed the trait to an
  abstract class with a config constructor parameter
* sysmsg.Terminate, sysmsg.DeathWatchNotification, io.Tcp.Closed
were needed to silence normal usage of http client/server
* other things based on jenkins logs, but not a complete audit
(cherry picked from commit 270e3b2f49af3c34fd5ea4c3bcfd8257402b5cbe)
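A minimal sketch of the fail-fast idea (not the actual extension
internals): cache the first outcome, success or failure, and rethrow
the original exception on every later lookup. Retrying would
typically only produce a misleading secondary error such as "actor
name [snapshot-store] is not unique".

    import java.util.concurrent.ConcurrentHashMap

    // Sketch: maps an extension name to either the initialized instance
    // or the Throwable from the first failed attempt.
    final class FailFastRegistry {
      private val cache = new ConcurrentHashMap[String, AnyRef]

      def getOrInit(name: String)(init: () => AnyRef): AnyRef =
        cache.computeIfAbsent(name, _ => try init() catch { case t: Throwable => t }) match {
          case t: Throwable => throw t // same exception as the first failure
          case ext          => ext
        }
    }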
instead of spinning unboundedly, use ReentrantLock.tryLock; it is a
best-effort pool anyway and contention must not lock up the whole
application (see the sketch below)
(cherry picked from commit 518fedb33c22c69deae019090d4236c9c5175fb5)
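A sketch of the pattern, assuming a simple free-list pool (all names
illustrative):

    import java.util.concurrent.locks.ReentrantLock

    // Best-effort pool: under contention we skip the pool instead of
    // spinning or blocking, since dropping an entry is always safe here.
    final class BestEffortPool[A](capacity: Int) {
      private val lock    = new ReentrantLock
      private val entries = new java.util.ArrayDeque[A](capacity)

      def release(a: A): Unit =
        if (lock.tryLock()) {
          try { if (entries.size < capacity) entries.push(a) }
          finally lock.unlock()
        } // else: drop it, best effort

      def acquire(): Option[A] =
        if (lock.tryLock()) {
          try Option(entries.poll()) finally lock.unlock()
        } else None
    }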
Two issues:
1) The ShardRegion actor must stop itself when the node is shutting
down, i.e. when receiving MemberRemoved(selfAddress)
2) The ShardCoordinator must not persist anything when the node is
shutting down. MemberRemoved of other shard regions will trigger
Terminated, which must not be persisted, because then the next
coordinator would replay those events and end up in the wrong state.
This problem announced itself when using leaving, as illustrated in
the new test.
To solve the second issue I have added a new ClusterShuttingDown event
that is published before the MemberRemoved events. Note that Terminated
is triggered by MemberRemoved. A simplified sketch of both fixes
follows below.
(cherry picked from commit 1b272c72597beece9d93f0054f4b58e3d25f9ae2)
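A simplified sketch of the two fixes in one actor (not the actual
ShardRegion/ShardCoordinator code):

    import akka.actor.Actor
    import akka.cluster.Cluster
    import akka.cluster.ClusterEvent.{ ClusterShuttingDown, MemberRemoved }

    class ShardRegionLike extends Actor {
      private val cluster      = Cluster(context.system)
      private var shuttingDown = false

      override def preStart(): Unit =
        cluster.subscribe(self, ClusterShuttingDown.getClass, classOf[MemberRemoved])

      override def postStop(): Unit = cluster.unsubscribe(self)

      def receive = {
        case ClusterShuttingDown =>
          shuttingDown = true // 2) must not persist anything from here on
        case MemberRemoved(m, _) if m.address == cluster.selfAddress =>
          context.stop(self)  // 1) stop the region when this node is removed
        case _: MemberRemoved =>
          // removal of other nodes triggers Terminated for their regions,
          // which is not persisted while shuttingDown
      }
    }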