* Add a new link to "Strong Eventual Consistency and Conflict-free Replicated Data Types" and remove "Eventually Consistent Data Structures". fix#25338
* Update distributed-data.md
* createLogLevels java dsl: remove ability to pass in nulls as default args since it's a bug
* fix scala doc for Java: [[Attributes#logLevelOff]] instead of [[LogLevels.Off]]
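For reference, a minimal sketch of the corresponding Scala DSL usage, where
Attributes.logLevelOff silences per-element logging (the stage name "numbers"
and the values are just illustrative):

```scala
import akka.event.Logging
import akka.stream.Attributes
import akka.stream.scaladsl.Source

object LogLevelsSketch {
  // Configure log levels on a logged stage; logLevelOff disables
  // per-element logging while keeping completion/failure logging.
  val quietNumbers = Source(1 to 5)
    .log("numbers")
    .withAttributes(Attributes.logLevels(
      onElement = Attributes.logLevelOff,
      onFinish = Logging.InfoLevel,
      onFailure = Logging.ErrorLevel))
}
```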
* It was a timing race condition in the test that was exposed
by the change in PR #25315. Full state is now sent immediately
when receiving the DeltaNack and that makes the Update complete
much faster for that case than before.
* That meant that the delta propagations from previous
updates were still in the buffer, waiting to be sent out, when the
incr(4) was performed. Those deltas contained the NoDeltaPlaceholder,
which caused the incr(4) delta to also be folded into NoDeltaPlaceholder
and thereby not propagated.
* Before the DeltaNack change, the buffer had time to be flushed before
the incr(4), and therefore there was no NoDeltaPlaceholder (the update
sequence is sketched below).
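A rough sketch of the kind of update sequence involved, using the current
Distributed Data Scala API and assuming an ActorSystem configured with the
cluster provider (key name and increments are illustrative):

```scala
import akka.actor.ActorSystem
import akka.cluster.ddata._
import akka.cluster.ddata.Replicator._

object DeltaBufferScenario extends App {
  implicit val system: ActorSystem = ActorSystem("ddata")
  implicit val node: SelfUniqueAddress = DistributedData(system).selfUniqueAddress

  val replicator = DistributedData(system).replicator
  val CounterKey = GCounterKey("counter")

  // Earlier updates whose deltas may still sit in the propagation buffer...
  replicator ! Update(CounterKey, GCounter.empty, WriteLocal)(_ :+ 1)
  replicator ! Update(CounterKey, GCounter.empty, WriteLocal)(_ :+ 2)
  // ...when the later incr(4) is performed; its delta must not be folded
  // into a NoDeltaPlaceholder left over from the earlier updates.
  replicator ! Update(CounterKey, GCounter.empty, WriteLocal)(_ :+ 4)
}
```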
* Optimize flatMapConcat for single element source, #25241
* Grab the SourceSingle via TraversalBuilder
* Also handle the case when there is no demand
* don't match when mapMaterializedValue and async
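The shape the optimization targets is roughly the following, where the inner
source of the flatMapConcat is a single-element source (values are made up):

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{Sink, Source}

object FlatMapConcatSingle extends App {
  implicit val system: ActorSystem = ActorSystem("streams")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  Source(1 to 3)
    .flatMapConcat(i => Source.single(i * 2)) // single-element inner source
    .runWith(Sink.foreach(println))           // prints 2, 4, 6
}
```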
* Use a deterministic order of the target nodes for the writes when the
type is RequiresCausalDeliveryOfDeltas, otherwise the random pick
of targets caused delta sequence numbers to be missing for
subsequent updates
* Resend immediately when receiving DeltaNack instead of waiting
for the timeout. DeltaNack can happen when there are multiple
concurrent updates from the same node, because each starts a WriteAggregator
and a later Update might bypass an earlier one (see the sketch below).
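A sketch of the situation that can trigger DeltaNack: several Updates for the
same key from one node with a remote write consistency, each starting its own
WriteAggregator. ORSet is used here because it requires causal delivery of
deltas; names, values and timeout are illustrative, and the API shown is the
current Scala one:

```scala
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.cluster.ddata._
import akka.cluster.ddata.Replicator._

object ConcurrentUpdates extends App {
  implicit val system: ActorSystem = ActorSystem("ddata")
  implicit val node: SelfUniqueAddress = DistributedData(system).selfUniqueAddress

  val replicator = DistributedData(system).replicator
  val SetKey = ORSetKey[String]("elements")
  val writeMajority = WriteMajority(5.seconds)

  // Each Update starts its own WriteAggregator; the second may bypass the
  // first, which can lead to DeltaNack and, before this fix, to missing
  // delta sequence numbers for subsequent updates.
  replicator ! Update(SetKey, ORSet.empty[String], writeMajority)(_ :+ "a")
  replicator ! Update(SetKey, ORSet.empty[String], writeMajority)(_ :+ "b")
}
```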
Not having it causes a compile error in 'sbt ++2.12.6 clean akka-actor/compile'
on jdk9, though I can't quite explain why: using the jdk8 classpath otherwise
seems to work, and I can't reproduce it in a minimal example yet.
This reverts commit c6735b630b75408b0c8bbdb22dd31f7d144346ef.
* The TimerMsg was wrapped in IncomingCommand and therefore stashed,
and when unstashed it caused the ClassCastException
* Solved by not using timers here but the plain scheduler (illustrated below)
* Also fixing journalPluginId and snapshotPluginId
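A minimal illustration of the workaround with a hypothetical actor (not the
actual persistence internals): the timeout message is scheduled with the plain
scheduler, so it is never wrapped in IncomingCommand and stashed:

```scala
import scala.concurrent.duration._
import akka.actor.{Actor, Cancellable}

class SchedulerInsteadOfTimers extends Actor {
  import context.dispatcher

  private case object Timeout
  private var timeoutTask: Option[Cancellable] = None

  override def preStart(): Unit =
    // plain scheduler instead of actor timers
    timeoutTask = Some(context.system.scheduler.scheduleOnce(30.seconds, self, Timeout))

  override def postStop(): Unit =
    timeoutTask.foreach(_.cancel())

  def receive: Receive = {
    case Timeout => context.stop(self) // give up after the timeout
    case _       => // handle other messages as usual
  }
}
```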
* Optimize LoadSnapshot if toSequenceNr == 0, i.e. SnapshotCriteria.none,
then no need to involve the snapshot store
* Optimize ReplayMessages if toSequenceNr == 0, i.e. Recovery.none,
then no need to do asyncReplayMessages, but asyncReadHighestSequenceNr
is still needed
* should still load snapshot if criteria != none and toSeqNr == 0,
a weird case for saving/loading snapshots with seqNr 0
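A sketch of the configurations these optimizations apply to (the class name
and persistenceId are made up):

```scala
import akka.persistence.{PersistentActor, Recovery, SnapshotSelectionCriteria}

class NoRecoveryExample extends PersistentActor {
  override def persistenceId: String = "no-recovery-example"

  // toSequenceNr == 0: replay is skipped entirely, only
  // asyncReadHighestSequenceNr is still performed.
  override def recovery: Recovery = Recovery.none

  // Alternatively, keep event replay but skip snapshot loading:
  // override def recovery: Recovery =
  //   Recovery(fromSnapshot = SnapshotSelectionCriteria.None)

  override def receiveRecover: Receive = { case _ => }
  override def receiveCommand: Receive = { case msg => sender() ! msg }
}
```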
* Ensure NPE is always thrown when VirtualProcessor.onError(null) is invoked
This fix is similar to #24749, fixing a spec violation bug that was
introduced in #24722.
* Check remembered entities before remembering entity
Messages that come through for an entity before StartEntity
has been processed for that entity caused redundant persistence
of the entity.
When a Success is received, call onCompleteThenStop instead of just
context.stop; that runs the completion logic rather than just stopping
the actor and leaving the stream running.
Add test to ensure the stream materializes on Source.actorRef receiving
Status.Success
Remove tests around stream completion behaviour in response to
PoisonPill: besides not correctly demonstrating that the completion was
passed on downstream, these tests describe behaviour which was
previously incidental and is no longer accurate.
Update the docs to reflect that PoisonPill should not be used on the
actor ref, since that necessarily results in bad behaviour: the actor is
unable to signal the completion downstream (see the sketch below).
Make a few grammar fixes and remove some trailing space while updating the
docs.
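A minimal sketch of the pattern the docs now recommend: complete a
Source.actorRef stream by sending Status.Success to the materialized actor
instead of PoisonPill (buffer size and values are illustrative):

```scala
import akka.actor.{ActorSystem, Status}
import akka.stream.{ActorMaterializer, OverflowStrategy}
import akka.stream.scaladsl.{Keep, Sink, Source}

object ActorRefSourceCompletion extends App {
  implicit val system: ActorSystem = ActorSystem("streams")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  val (ref, done) = Source
    .actorRef[String](bufferSize = 16, overflowStrategy = OverflowStrategy.dropHead)
    .toMat(Sink.foreach(println))(Keep.both)
    .run()

  ref ! "hello"
  ref ! "world"
  // Completes the stream downstream; a PoisonPill would only stop the actor
  // without signalling completion.
  ref ! Status.Success(akka.Done)

  done.foreach(_ => system.terminate())(system.dispatcher)
}
```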
* Rather than stop, so that users can add their own supervision, e.g.
restartWithBackoff (see the sketch below)
* Only allow backoff supervision for persistent behaviors
* Handle persist rejections
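A rough sketch of how a behavior can then be wrapped in backoff restart
supervision by the user, using the Akka Typed API (the behavior and backoff
values are illustrative; the exact persistence-specific hooks may differ):

```scala
import scala.concurrent.duration._
import akka.actor.typed.{Behavior, SupervisorStrategy}
import akka.actor.typed.scaladsl.Behaviors

object BackoffSupervisionSketch {
  // Wrap any behavior (e.g. a persistent behavior) in backoff restart
  // supervision, so failures lead to delayed restarts rather than a stop.
  def withBackoff[T](inner: Behavior[T]): Behavior[T] =
    Behaviors
      .supervise(inner)
      .onFailure[Exception](
        SupervisorStrategy.restartWithBackoff(
          minBackoff = 1.second,
          maxBackoff = 30.seconds,
          randomFactor = 0.2))
}
```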
* Composable javadsl CommandHandlerBuilder, #25226
* CommandHandlerBuilder with stateClass and statePredicate parameters
* CommandHandlerBuilder.orElse
* Remove ActorContext from handler function signatures, can be
passed in constructor
When a node has left the cluster, the existing nodes log
akka.remote.transport.Transport$InvalidAssociationException with the message `The
remote system terminated the association because it is shutting down`.
This normally happens when a node is leaving the cluster during a redeployment,
and it is not really an error, but it creates noise in monitoring/alert systems.
So I propose to log it as a Warning.