* The reason for this change is that `DataDeleted` didn't extend the
`UpdateResponse` and `GetResponse` types and could therefore cause problems
when `Update` and `Get` were used with `ask`. This was also a problem for
Akka Typed.
* Introduce new message types `UpdateDataDeleted` and `GetDataDeleted`
* Introduce `SubscribeResponse` because both `Changed` and `Deleted`
are responses to subscriptions. Important for Typed (see the sketch below).
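The typing problem in miniature (hypothetical, simplified types, not the actual Replicator protocol):

```scala
// Every reply to Update must share the UpdateResponse supertype,
// or an ask cannot be given a useful result type.
sealed trait UpdateResponse
case object UpdateSuccess extends UpdateResponse

// Before: DataDeleted was outside the hierarchy, so an ask expecting
// UpdateResponse failed when the data had been deleted.
case object DataDeleted

// After: a dedicated reply that is part of the response hierarchy.
case object UpdateDataDeleted extends UpdateResponse
```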
* the previous `schedule` method tries to maintain a fixed average frequency
over time, but that can result in undesired bursts of scheduled tasks after a long
GC pause or if the JVM process has been suspended; the same applies to all other
periodic scheduled message sending via the various Timer APIs
* most of the time "fixed delay" is more desirable
* we can't just change the behavior because it's too big a behavioral change
and some users might depend on the previous behavior
* deprecate the old `schedule` and introduce new `scheduleWithFixedDelay`
and `scheduleAtFixedRate`; when fixing the deprecation warning users should
make a conscious decision about which behavior to use (`scheduleWithFixedDelay`
in most cases), see the sketch after this list
* Streams
* SchedulerSpec
* test both fixed delay and fixed rate
* TimerSpec
* FSM and PersistentFSM
* mima
* runnable as second parameter list, also in typed.Scheduler
* IllegalStateException vs SchedulerException
* deprecated annotations
* api and reference docs, all places
* migration guide
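A minimal sketch of the two new methods, assuming the `Runnable` in the second parameter list as noted above:

```scala
import akka.actor.ActorSystem
import scala.concurrent.duration._

object SchedulerExample extends App {
  val system = ActorSystem("example")
  import system.dispatcher // implicit ExecutionContext for the scheduler

  // Fixed delay: the next task is scheduled 1 second after the previous
  // run completes, so a long GC pause cannot cause a burst of runs.
  system.scheduler.scheduleWithFixedDelay(1.second, 1.second) { () =>
    println("fixed delay tick")
  }

  // Fixed rate: maintains the frequency over time and will "catch up"
  // with a burst after a pause; only use when that is really intended.
  system.scheduler.scheduleAtFixedRate(1.second, 1.second) { () =>
    println("fixed rate tick")
  }
}
```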
* Allow for dispatcher aliases and define an internal dispatcher (see the config sketch after this list)
* Test checking dispatcher name
* MiMa for Dispatchers
* Migration guide entry
* No need to have custom dispatcher lookup logic in streams anymore
* Default dispatcher size and migration note about that
* Test checking exact config values...
* Typed receptionist on internal dispatcher
* All internal usages of system.dispatcher gone through
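A sketch of what an alias looks like (the dispatcher names here are made up for illustration): an alias is a config entry whose value points at another dispatcher id, and lookups of both resolve to the same dispatcher.

```scala
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object DispatcherAliasExample extends App {
  // "my-alias" is just a string pointing at a real dispatcher definition.
  val config = ConfigFactory.parseString(
    """
    my-dispatcher {
      type = Dispatcher
      executor = "fork-join-executor"
    }
    my-alias = my-dispatcher
    """).withFallback(ConfigFactory.load())

  val system = ActorSystem("example", config)
  val direct = system.dispatchers.lookup("my-dispatcher")
  val viaAlias = system.dispatchers.lookup("my-alias")
  println(direct eq viaAlias) // the alias resolves to the same instance
  system.terminate()
}
```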
* The scenario was (probably) that a node was restarted with the
same host:port and then didn't join the same cluster. The DData
Replicator in the original cluster would continue sending messages
to the new incarnation, resulting in false removals.
* The fix is that the DData Replicator includes the system uid of the sending
or target system in its messages; if the recipient gets a message that is from/to
an unknown system uid it discards it, thereby not spreading information across
different clusters (see the sketch after this list).
* Reproduced in ClusterReceptionistSpec
* Much hardening of other things in ClusterReceptionistSpec
* There are also some improvements to ClusterReceptionist so that it doesn't
leak `Listing`s with refs of removed nodes.
* use ClusterShuttingDown
* The reason for using the sender's system uid instead of the target uid in messages
like Read and Write is that the optimization of sending the same message
to many destinations can then remain.
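A hypothetical sketch of the discard logic (names made up; not the actual Replicator code):

```scala
// Messages carry the sending system's uid; a recipient that doesn't
// recognize that uid drops the message instead of acting on it, so a
// restarted incarnation on the same host:port can't cause false removals.
final case class Write(key: String, fromSystemUid: Option[Long] /*, data ... */)

def shouldAccept(msg: Write, knownSystemUids: Set[Long]): Boolean =
  msg.fromSystemUid match {
    case Some(uid) => knownSystemUids.contains(uid) // known incarnation
    case None      => true // older node that doesn't send a uid yet
  }
```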
So now we can compile akka-distributed-data with `-Xfatal-warnings`,
though I'm not yet sure about enabling the (other) undisciplineScalacOptions.
* Fix multi-node silencing
* Fix scaladoc warnings
* Introduce annotation to declare ccompat use
* Add explicit toString
* Fix deprecation on 2.13
* Move 'immutable' ccompat helpers to shared ccompat package
* Add MiMa for internal scala 2.13 compatibility class
* Internal API markers
* Fix scaladoc generation
Got bitten by https://github.com/scala/bug/issues/11021
* ⇒, →, ←
* because we don't want to show them in documentation snippets, and it's
complicated to avoid that when the snippets are located in src/test/scala
in the individual modules (the replacements are summarized below)
* don't replace object `→` in FSM.scala and PersistentFSM.scala
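The replacements in a nutshell (ASCII equivalents of the Unicode operators):

```scala
val list = List(1, 2, 3)
list.map { x => x + 1 }   // was: list.map { x ⇒ x + 1 }
for (x <- list) yield x   // was: for (x ← list) yield x
Map(1 -> "one")           // was: Map(1 → "one")
```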
fix akka-actor-tests compile errors
some tests still fail though
Fix test failures in akka-actor-tests
Manually work around missing implicit Factory[Nothing, Seq[Nothing]],
see https://github.com/scala/scala-collection-compat/issues/137
akka-remote scalafix changes
Fix shutdownAll compile error
test:akka-remote scalafix changes
akka-multi-node-testkit scalafix
Fix akka-remote-tests multi-jvm compile errors
akka-stream-tests/test:scalafix
Fix test:akka-stream-tests
Crude implementation of ByteString.map
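Such a crude implementation could look like this (a sketch assuming that copying out to an array is acceptable; `mapBytes` is a made-up name):

```scala
import akka.util.ByteString

// Copy out, map, rebuild: an extra O(n) copy, but simple and correct.
def mapBytes(bs: ByteString)(f: Byte => Byte): ByteString =
  ByteString(bs.toArray.map(f))
```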
scalafix akka-actor-typed, akka-actor-typed-tests
akka-actor-typed-tests compile and succeed
scalafix akka-camel
scalafix akka-cluster
akka-cluster compile & test
scalafix akka-cluster-metrics
Fix akka-cluster-metrics
scalafix akka-cluster-tools
akka-cluster-tools compile and test
scalafix akka-distributed-data
akka-distributed-data fixes
scalafix akka-persistence
scalafix akka-cluster-sharding
fix akka-cluster-sharding
scalafix akka-contrib
Fix akka-cluster-sharding-typed test
scalafix akka-docs
Use scala-stm 0.9 (released for M5)
akka-docs
Remove dependency on collections-compat
Cherry-pick the relevant constructs to our own
private utils
Shorten 'scala.collection.immutable' by importing it
Duplicate 'immutable' imports
Use 'foreach' on futures
Replace MapLike with regular Map
Internal API markers
Simplify ccompat by moving PackageShared into object
Since we don't currently need to differentiate between 2.11 and 2.12
Avoid relying on 'union' (and ++) being left-biased
Fix akka-actor/doc by removing -Ywarn-unused
Make more things more private
Copyright headers
Use 'unsorted' to go from SortedSet to Set
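For example (`unsorted` exists on 2.13 and is provided by scala-collection-compat on earlier versions):

```scala
import scala.collection.immutable.SortedSet

val sorted: SortedSet[Int] = SortedSet(3, 1, 2)
val plain: Set[Int] = sorted.unsorted // explicit, instead of an upcast
```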
Duplicate import
Use onComplete rather than failed.foreach
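E.g. for failure handling (a sketch of the pattern behind that commit):

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.Failure

val f: Future[Int] = Future(42)

// Before: f.failed.foreach(e => println(e)) materializes an extra
// Future[Throwable] just to observe failures.
// After: a single callback on the original future.
f.onComplete {
  case Failure(e) => println(s"failed: $e")
  case _          => // success: nothing to log
}
```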
Clarify why we partly duplicate scala-collection-compat
* Add CopyrightHeader support for sbt-boilerplate plugin.
* Add CopyrightHeader support for `*.proto` files.
* Add regex match for both `–` and `-` for CopyrightHeader.
* Add CopyrightHeader support for sbt build files.
* Update copyright from 2018 to 2019.
* It was a timing race condition in the test that was exposed
by the change in PR #25315. Full state is now sent immediately
when receiving the DeltaNack and that makes the Update complete
much faster for that case than before.
* The result was that the delta propagations from previous
updates were still in the buffer to be sent out when the
incr(4) was performed. Those deltas contained the NoDeltaPlaceholder,
which caused the incr(4) delta to also be folded into NoDeltaPlaceholder
and thereby not be propagated.
* Before the DeltaNack change, the buffer had time to be flushed before the incr(4),
and therefore there was no NoDeltaPlaceholder.
* Use a deterministic order of the target nodes for the writes when
the type RequiresCausalDeliveryOfDeltas; otherwise the random pick
of targets caused delta sequence numbers to be missing for
subsequent updates
* Resend immediately when receiving DeltaNack instead of waiting
for the timeout. DeltaNack can happen when there are multiple
concurrent updates from the same node, because each starts a WriteAggregator
and a later Update might bypass an earlier one
* since the ordering can change based on the member's status,
it's not possible to use ordinary `-` for removal (see the sketch after this list)
* similar issue at a few places where ageOrdering was used
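A sketch of the pitfall (simplified `Member`, hypothetical field names): removal from a `SortedSet` searches by the ordering, so it can miss the element when the ordering key has changed.

```scala
import scala.collection.immutable.SortedSet

final case class Member(address: String, upNumber: Int, status: String)

// An ordering that (problematically) depends on the member's status.
implicit val ageOrdering: Ordering[Member] =
  Ordering.by(m => (m.status, m.upNumber, m.address))

val m = Member("a", 1, "Up")
val members = SortedSet(m)

// Removing the *updated* incarnation compares with the new status and
// may not find the stored element:
val broken = members - m.copy(status = "Leaving") // still contains m

// Remove by identity/address instead:
val fixed = members.filterNot(_.address == m.address)
```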
* Sharding only within own team (coordinator is singleton)
* the ddata Replicator used by Sharding must also be only within own team
* added support for a Set of roles in the ddata Replicator so that it can be used
by sharding to specify role + team
* Sharding proxy can route to sharding in another team
* to avoid OversizedPayloadException
* some complex deltas grow for each update operation, e.g.
when updating different keys in ORMap (PNCounterMap)
* such large deltas can safely be discarded and disseminated as full
state instead
* added the ReplicatedDeltaSize interface to be able to define the "size",
and when that size exceeds the configured threshold the delta is discarded (see the sketch below)
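Roughly, this boils down to a size hint plus a threshold check (the threshold name here is illustrative):

```scala
// Deltas that can report their size implement this interface.
trait ReplicatedDeltaSize {
  def deltaSize: Int
}

// When a delta has grown past the configured threshold it is cheaper,
// and safe, to discard it and disseminate full state instead.
def useFullState(delta: Any, maxDeltaSize: Int): Boolean =
  delta match {
    case d: ReplicatedDeltaSize => d.deltaSize > maxDeltaSize
    case _                      => false
  }
```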
When a DeltaReplicatedData returns None from delta it must still be
treated as a delta that increases the version counter in DeltaPropagationSelector.
Otherwise a later delta might be applied before the full state gossip is received,
thereby violating RequiresCausalDeliveryOfDeltas.
* Follow up on the causal delivery of deltas.
* The first implementation used full state for the direct
Write messages, i.e. updates with WriteConsistency != LocalWrite
* This is an optimization so that deltas are tried first, falling back to
full state if they can't be applied.
* For simultaneous updates the messages may be reordered because we
create a separate WriteAggregator actor for each, but normally they
will be sent in order so the deltas will typically be received in
order; otherwise we fall back to retrying with full state in the
second round in the WriteAggregator.
* keep track of delta interval versions and skip deltas
that are not consecutive, i.e. when some delta message was lost (see the sketch after this list)
* send the delta versions in the full state gossip to sync up the
expected versions after dropped deltas
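A sketch of that bookkeeping (hypothetical names): each delta covers an interval of versions and is only applied when it lines up with what was seen before.

```scala
final case class DeltaInterval(fromSeqNr: Long, toSeqNr: Long /*, payload ... */)

// Returns the new "last seen" version; a gap means the delta is dropped
// and the expected version is resynced by the full state gossip.
def onDelta(lastSeenSeqNr: Long, d: DeltaInterval): Long =
  if (d.toSeqNr <= lastSeenSeqNr) lastSeenSeqNr        // duplicate, ignore
  else if (d.fromSeqNr <= lastSeenSeqNr + 1) d.toSeqNr // consecutive, apply
  else lastSeenSeqNr                                   // gap, drop the delta
```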
* implementation of deltas for ORSet
* refactoring of the delta types to allow for different type for the
delta and the full state
* extensive tests
* mima filter
* performance optimizations
* simple pruning of deltas
* Java API
* update documentation
* KeyId type alias
* Use InternalApi annotation
* delta GCounter and PNCounter
* first stab at delta propagation protocol
* send delta in the direct write
* possibility to turn off delta propagation
* tests
* protobuf serializer for DeltaPropagation
* documentation
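Conceptually, a delta GCounter carries the full per-node state plus the entries touched since the last propagation; merging a delta is the ordinary max-merge over just those entries. A simplified sketch (not the actual akka-distributed-data types):

```scala
final case class GCounterSketch(
    state: Map[String, Long],   // per-node counts (full state)
    delta: Map[String, Long]) { // entries changed since last propagation

  def increment(node: String, n: Long): GCounterSketch = {
    val updated = state.getOrElse(node, 0L) + n
    GCounterSketch(state.updated(node, updated), delta.updated(node, updated))
  }

  // Merging a (possibly stale) delta is the usual max per node.
  def mergeDelta(d: Map[String, Long]): GCounterSketch =
    copy(state = d.foldLeft(state) {
      case (s, (node, v)) => s.updated(node, s.getOrElse(node, 0L) max v)
    })

  def resetDelta: GCounterSketch = copy(delta = Map.empty)
  def value: Long = state.values.sum
}
```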
* fix merge issues of DataEnvelope and its pruning
* simplify by removing the tombstones, which didn't work in all cases anyway
* keep the PruningPerformed markers in the DataEnvelope until configured
TTL has elapsed (wall clock)
* simplify PruningState structure
* also store the pruning markers in durable data
* collect removed nodes from the data, listening on MemberRemoved is not enough
* possibility to disable pruning altogether
* documented caveat for durable data