* I couldn't find anything wrong
* Increased the test timeout: the reconnects take 1.5 s,
  so the previous total of 3 s might not have been enough
  (for that run)
* Replace sleep with awaitAssert
* Use separate probes for awaitAssert checks to avoid spill-over
  to the testActor (see the sketch after this list)
* Some additional cleanup
* Deliver buffered messages when HostShard is received
Test failures showed that initial messages could be re-ordered otherwise
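A minimal sketch of the awaitAssert-with-dedicated-probe pattern, assuming an
akka-testkit based test; the GetState/CurrentState messages and the
assertEventuallyInState helper are made up for illustration:

    import scala.concurrent.duration._
    import akka.actor.{ ActorRef, ActorSystem }
    import akka.testkit.TestProbe

    // Hypothetical request/reply protocol of the actor under test.
    case object GetState
    final case class CurrentState(value: Int)

    // Poll with awaitAssert instead of sleeping; the dedicated probe is the
    // sender, so stray replies cannot spill over to the testActor.
    def assertEventuallyInState(region: ActorRef, expected: CurrentState)(
        implicit system: ActorSystem): Unit = {
      val probe = TestProbe()
      probe.awaitAssert({
        region.tell(GetState, probe.ref)
        probe.expectMsg(500.millis, expected)
      }, max = 5.seconds, interval = 500.millis)
    }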
* deprecates awaitTermination, shutdown and isTerminated
* introduces a terminate-method that returns a Future[Unit]
* introduces a whenTerminated-method that returns a Future[Unit]
* simplifies the implementation by removing blocking constructs
* adds tests for terminate() and whenTerminated
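A minimal usage sketch of the new shutdown API described above (the exact
element type carried by the Futures may differ from this description; Await is
used here only to keep the example short):

    import scala.concurrent.Await
    import scala.concurrent.duration._
    import akka.actor.ActorSystem

    object TerminateExample extends App {
      val system = ActorSystem("example")

      // Non-blocking request to shut the system down; the returned Future
      // completes when termination has finished.
      system.terminate()

      // whenTerminated gives the same completion signal without triggering
      // shutdown, e.g. for code that only wants to observe it.
      Await.ready(system.whenTerminated, 10.seconds)
    }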
- Move all entry related logic out of the ShardRegion and into a
new dedicated child `Shard` actor.
- Shard actor persists entry-started and entry-passivated messages.
- Non-passivated entries get restarted on termination.
- Shard Coordinator restarts shards on other regions upon region failure or handoff
- Ensures shard rebalance restarts shards.
- Shard buffers messages after an EntryStarted is received until the state change is persisted
- Shard (still) buffers messages after a Passivate is received until the state change is persisted (see the sketch after this list)
- Shard will retry persisting its state until it succeeds
- Shard will restart entries automatically (after a backoff) if they were not passivated and remembering of entries is enabled
- Added Entry path change to the migration docs
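A minimal sketch of an entry that passivates itself, assuming the akka-contrib
cluster sharding API of this branch; the Counter entry and its messages are
illustrative only:

    import scala.concurrent.duration._
    import akka.actor.{ Actor, PoisonPill, ReceiveTimeout }
    import akka.contrib.pattern.ShardRegion

    // When the entry has been idle it asks its parent Shard to passivate it.
    // The Shard buffers any messages addressed to the entry until the
    // passivation (and the persisted state change) has completed.
    class Counter extends Actor {
      context.setReceiveTimeout(2.minutes)

      var count = 0

      def receive = {
        case "increment"    => count += 1
        case "get"          => sender() ! count
        case ReceiveTimeout => context.parent ! ShardRegion.Passivate(stopMessage = PoisonPill)
      }
    }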
* the lower bound was rather racy; it depends on where in its tick
  the throttler currently was. In general the upper bound is also
  not exact, but "good enough", because the `.5` is an estimate of "the
  throttler must finish its previous tick, and then it sends the data"
* I suspect that issue #15440 happens because of replay of events
  in the wrong order (ShardHomeAllocated received before ShardRegionRegistered)
  by the hbase journal
* This does not fix that issue, but the additional invariant checks and
  debug statements would perhaps make it easier for us to diagnose such
  issues (see the simplified sketch below)
* These changes also ensure that the allocation strategy does not return
  the wrong thing.
* It also tightens the handling of a possible error if a region is terminated
  while a rebalance is in progress
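A much simplified model of the kind of invariant check meant here; the State,
ShardRegionRegistered and ShardHomeAllocated shapes below are illustrative, not
the coordinator's actual internals:

    import akka.actor.ActorRef

    final case class ShardRegionRegistered(region: ActorRef)
    final case class ShardHomeAllocated(shard: String, region: ActorRef)

    final case class State(
        shards: Map[String, ActorRef],
        regions: Map[ActorRef, Vector[String]]) {

      // Applying an event checks the expected ordering: a shard home may only
      // be allocated to a region that has already been registered, so an
      // out-of-order replay (as suspected in #15440) fails fast and loudly.
      def updated(event: Any): State = event match {
        case ShardRegionRegistered(region) =>
          copy(regions = regions.updated(region, Vector.empty))
        case ShardHomeAllocated(shard, region) =>
          require(regions.contains(region), s"Region $region not registered: $this")
          require(!shards.contains(shard), s"Shard [$shard] already allocated: $this")
          copy(
            shards = shards.updated(shard, region),
            regions = regions.updated(region, regions(region) :+ shard))
      }
    }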
(cherry picked from commit d07b9db4958236d580b8bfb8f92461969ff88cbc)
A PersistentView works the same way as View did previously, except:
* it requires a `persistenceId` (no default is provided)
* messages given to `receive` are NOT wrapped in Persistent() (see the sketch below)
akka-streams is not touched; it will be updated afterwards on a different branch
Also solves #15436 by making persistenceId in PersistentView abstract.
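A short sketch of a view under the new API, assuming the akka-persistence API
of this branch (which also requires a viewId); the AccountView names are
illustrative:

    import akka.persistence.PersistentView

    class AccountView extends PersistentView {
      // Must be given explicitly: the id of the persistent actor whose
      // journal this view reads.
      override def persistenceId: String = "account-1"
      override def viewId: String = "account-1-view"

      var deposited = 0L

      def receive = {
        // Replayed/streamed events arrive unwrapped, no Persistent() envelope.
        case amount: Long if isPersistent => deposited += amount
        // Ad-hoc queries are received like normal messages.
        case "state"                      => sender() ! deposited
      }
    }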
(cherry picked from commit dcafaf788236fe6d018388dd55d5bf9650ded696)
Conflicts:
akka-docs/rst/java/lambda-persistence.rst
akka-docs/rst/java/persistence.rst
akka-docs/rst/scala/persistence.rst
akka-persistence/src/main/scala/akka/persistence/Persistent.scala
akka-persistence/src/main/scala/akka/persistence/View.scala
PubSubMediator uses router which always unwraps RouterEnvelope messages.
However, unwrapping is undesirable if the user sends a message in a
ConsistentHashableEnvelope. Thus the PubSubMediator should always wrap user
messages in a RouterEnvelope, which will be unwrapped by the router, leaving
the user message unchanged (sketched below).
Also disallow consistent hashing routing logic in pub-sub mediator.
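A sketch of the wrapping approach; MediatorEnvelope is a made-up name, not the
mediator's actual internal class:

    import akka.routing.RouterEnvelope

    // The mediator wraps every user message in its own RouterEnvelope. The
    // router unwraps exactly this envelope and delivers the inner message
    // untouched, even if that message is itself a ConsistentHashableEnvelope.
    final case class MediatorEnvelope(message: Any) extends RouterEnvelope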
Breaks binary compatibility because new methods are added to the Eventsourced
trait. Since akka-persistence is experimental this is OK, yet
source-level compatibility has been preserved, thankfully :-)
Deprecates:
* Rename of EventsourcedProcessor -> PersistentActor
* Processor -> suggest using PersistentActor
* Migration guide for akka-persistence is separate, as we'll deprecate in minor versions (it's experimental)
* Persistent as well as ConfirmablePersistent - since Processor, their
  main user, will be removed soon.
Other changes:
* persistAsync works as expected when mixed with persist (see the sketch below)
* A counter must be kept for pending stashing invocations
* Uses only 1 shared list buffer for persist / persistAsync
* Includes small benchmark
* Docs also include info about not using Persistent() wrapper
* uses java.util.LinkedList, for best performance of append / head on
  persistInvocations; the get(0) is safe, because these messages only
  come in response to persist invocations
* Renamed internal *MessagesSuccess/Failure messages because we kept
  making small mistakes, reading the class names "with s" and "without s" as the same
* Updated everything that referred to EventsourcedProcessor to
PersistentActor, including samples
Refs #15227
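A minimal sketch of mixing persist and persistAsync in the renamed
PersistentActor; the Evt type and the persistenceId are illustrative:

    import akka.persistence.PersistentActor

    final case class Evt(data: String)

    class MyPersistentActor extends PersistentActor {
      override def persistenceId: String = "sample-id"

      var events: List[String] = Nil

      override def receiveRecover: Receive = {
        case Evt(data) => events ::= data
      }

      override def receiveCommand: Receive = {
        case cmd: String =>
          // persist: new commands are stashed until this handler has run
          persist(Evt(s"$cmd-1")) { e => events ::= e.data }
          // persistAsync: no stashing, handler runs when the journal confirms
          persistAsync(Evt(s"$cmd-2")) { e => events ::= e.data }
      }
    }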
Conflicts:
akka-docs/rst/project/migration-guides.rst
akka-persistence/src/main/scala/akka/persistence/JournalProtocol.scala
akka-persistence/src/main/scala/akka/persistence/Persistent.scala
akka-persistence/src/test/scala/akka/persistence/PersistentActorSpec.scala
project/AkkaBuild.scala
ActorRef #15209
* Changed ClusterSharding.start to return the ActorRef to the shardRegion (#15157)
* Fixed indentation, and removed unused import
* Test for new API
* removed unused import
- Moved barrier outside of the runOn
* Also, a watch left over from ticket #3882
(cherry picked from commit cbc9dc535c0692a7df00bfb7292e62de1bed7e3f)
Conflicts:
akka-contrib/src/main/scala/akka/contrib/pattern/DistributedPubSubMediator.scala
* The reason for the problem with NoSuchElementException in ClusterSharding was
  that actor references were not serialized with full address information. In
  certain fail-over scenarios the references could not be resolved, and therefore
  the ShardRegionTerminated did not match the corresponding ShardRegionRegistered.
* Wrap serialization with transport information from defaultAddress
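A sketch of the underlying idea, not the actual change; how the default address
is obtained (e.g. Cluster(system).selfAddress) is an assumption here:

    import akka.actor.{ ActorRef, Address }

    // Render the ActorRef with its full remote address so that the entry for
    // ShardRegionTerminated can be matched against ShardRegionRegistered even
    // after a fail-over to another node.
    def serializeWithAddress(ref: ActorRef, defaultAddress: Address): String =
      ref.path.toSerializationFormatWithAddress(defaultAddress)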
(cherry picked from commit 3e73ae5925cf1293a9a5d61e48919b1708e84df2)
* Problem when using PersistentChannel from Processor
* When the seq numbers of the sending processor and the seq numbers
  of the PersistentChannel were out of sync, the PersistentChannel
  did not de-duplicate confirmed deliveries that were resent by
  the processor.
* There is a hand-off in the RequestWriter that confirms the
Processor seq number, and therefore the seq number of the
RequestWriter must be used in the ConfirmablePersistent from
the RequestReader
* More tests, covering this scenario
* Add supervisor level that will start the ShardCoordinator again after
a configurable backoff duration
* Make the timeout of SharedLeveldbJournal configurable
* Include cause of PersistenceFailure in message of ActorKilledException
* The problem was that the ShardRegion actor only kept track of one shard
  id per region actor. Therefore the Terminated message only removed
  one of the shards from its registry when there were multiple shards
  per region.
* Added a failing test and solved the problem by keeping track of all
  shards per region (see the sketch after this list)
* Also, rebalance must not be done before any regions have been
registered
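A simplified sketch of the corrected bookkeeping; names and shapes are
illustrative, not the ShardRegion's actual state:

    import akka.actor.ActorRef

    object RegionBookkeeping {
      type ShardId = String

      final case class Registry(
          regions: Map[ActorRef, Set[ShardId]] = Map.empty,
          shards: Map[ShardId, ActorRef] = Map.empty) {

        // A region may host many shards, so each allocation is added to its set.
        def shardAllocated(shard: ShardId, region: ActorRef): Registry =
          Registry(
            regions.updated(region, regions.getOrElse(region, Set.empty) + shard),
            shards.updated(shard, region))

        // On Terminated, drop the region together with *all* of its shards.
        def regionTerminated(region: ActorRef): Registry =
          Registry(
            regions - region,
            shards -- regions.getOrElse(region, Set.empty))
      }
    }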
* The race can happen if the MemberRemoved event is received followed by a Delta update from
  a node that has not yet seen the MemberRemoved. That would cause the bucket for the removed
  node to be added back into the registry.
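A simplified sketch of one way to guard against that race (names are
illustrative, not the mediator's actual internals): a late Delta from a node
that has already been removed is dropped instead of being merged back in.

    import akka.actor.Address

    final case class Bucket(owner: Address, version: Long, content: Map[String, String])

    // Merge a Delta, ignoring buckets owned by nodes we have already removed,
    // so a late Delta cannot resurrect a removed node's bucket.
    def mergeDelta(
        registry: Map[Address, Bucket],
        removed: Set[Address],
        delta: Seq[Bucket]): Map[Address, Bucket] =
      delta.filterNot(b => removed.contains(b.owner)).foldLeft(registry) { (acc, b) =>
        acc.get(b.owner) match {
          case Some(old) if old.version >= b.version => acc
          case _                                     => acc.updated(b.owner, b)
        }
      }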
* The documentation was good, but some parts were "hidden" by splitting it
  across two places. I understand the original reason for the separation, but it
  might be easier for the user (as reported in the ticket) to have
  everything in one place.