* DData and Persistence based remember entities refactored
* Order methods in the order they are initialized in the shard.
* Fix bad isolation between test cases that was causing problems
* Test coverage for remember entities store failures
* WithLogCapturing where applicable
* MiMa filters
* Timeouts from config for persistent remember entities
* Single method for deliver, less UTF-8 encoding
* Include detail on write failure
* Don't send a message to dead letters if it is actually handled in BackoffSupervisor
* BackoffSupervisor log format, plus use warning level for hitting max restarts
* actor/message based SPI
* Missing assert that node had joined cluster
* Shard state.entities and idByRef properties inconsistency in case of HandOffStopper creation
* Additional sharding debugging logs
(cherry picked from commit 70c2b571b9759e0441529fe107b8e8bf42825415)
* The previous `schedule` method tries to maintain a fixed average frequency
over time, but that can result in undesired bursts of scheduled tasks after a long
GC pause or if the JVM process has been suspended; the same applies to all other
periodic scheduled message sending via the various Timer APIs
* most of the time "fixed delay" is more desirable
* we can't just change the existing behavior because it would be too big a
behavioral change and some users might depend on it
* deprecate the old `schedule` and introduce new `scheduleWithFixedDelay`
and `scheduleAtFixedRate`; when fixing the deprecation warning users should
make a conscious decision about which behavior to use (scheduleWithFixedDelay in
most cases), see the sketch after this list
* Streams
* SchedulerSpec
* test both fixed delay and fixed rate
* TimerSpec
* FSM and PersistentFSM
* MiMa
* runnable as second parameter list, also in typed.Scheduler
* IllegalStateException vs SchedulerException
* deprecated annotations
* api and reference docs, all places
* migration guide
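A minimal sketch of the two new variants, assuming the signatures introduced here (initial delay plus interval, with the `Runnable` in a second parameter list as noted above):

```scala
import scala.concurrent.duration._
import akka.actor.ActorSystem

object SchedulerDemo extends App {
  val system = ActorSystem("demo")
  import system.dispatcher // implicit ExecutionContext for the scheduler

  // Fixed delay: the next run is scheduled `delay` after the previous run
  // completed, so a long GC pause cannot cause a burst of catch-up runs.
  system.scheduler.scheduleWithFixedDelay(1.second, 1.second)(new Runnable {
    override def run(): Unit = println("tick (fixed delay)")
  })

  // Fixed rate: compensates for delays to keep the average frequency, which
  // may fire several times in quick succession after a pause.
  system.scheduler.scheduleAtFixedRate(1.second, 1.second)(new Runnable {
    override def run(): Unit = println("tick (fixed rate)")
  })
}
```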
* This problem was introduced in the optimization in PR #26878,
and that regression has not been released.
* While waiting for the ddata update response the shard buffers messages
for the entity that is being stopped/started, and in the case of passivation
those buffered messages were not delivered afterwards. Therefore
the test failed when waiting for the expected response.
* While waiting for the update to complete it will now deliver messages to other
already started entities immediately, instead of stashing (sketched after this list)
* Unstash one message at a time, instead of unstashAll
* Append to the messageBuffer for messages to the entity that we are waiting for,
instead of stashing
* Test to confirm the improvements
* Fixing a few other missing things
* receiveStartEntity should process the change before starting the entity
* lastMessageTimestamp should be touched from overridden deliverTo
* handle StoreFailure
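A minimal sketch of the delivery strategy described above, with illustrative names only (the real logic lives in Shard.scala and uses akka.util.MessageBuffer):

```scala
import scala.collection.immutable.Queue

// Illustrative stand-in for the Shard's buffering while a remember-entities
// ddata update is in flight; `deliver` abstracts actual message delivery.
final class UpdateBuffer[Msg](deliver: (String, Msg) => Unit) {
  private var waitingFor: Option[String] = None // entity with update in flight
  private var buffer = Queue.empty[Msg]         // messages for that entity only

  def updateStarted(entityId: String): Unit = waitingFor = Some(entityId)

  def onMessage(entityId: String, msg: Msg): Unit =
    if (waitingFor.contains(entityId)) buffer = buffer.enqueue(msg) // append, don't stash
    else deliver(entityId, msg) // other started entities are served immediately

  def updateCompleted(): Unit = waitingFor.foreach { id =>
    waitingFor = None
    // drained in order; the real fix delivers one message at a time rather
    // than unstashAll, to preserve interleaving with newly arriving messages
    buffer.foreach(deliver(id, _))
    buffer = Queue.empty[Msg]
  }
}
```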
* Lease API
* Cluster singleton manager with lease
* Refactor OldestData to use option for actor reference
* Sharding with lease
* Docs for singleton and sharding lease + config for sharding lease
* Have ddata shard wait until lease is acquired before getting state
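A sketch of how the new lease API can be used, assuming the akka-coordination module; the config path shown is only an example, since the lease implementation is pluggable:

```scala
import scala.concurrent.Future
import akka.actor.ActorSystem
import akka.coordination.lease.scaladsl.{ Lease, LeaseProvider }

object LeaseDemo extends App {
  val system = ActorSystem("demo")

  // getLease(leaseName, configPath, ownerName): the config path selects the
  // pluggable implementation; "akka.coordination.lease.kubernetes" here is
  // an illustrative value.
  val lease: Lease = LeaseProvider(system).getLease(
    "my-cluster-singleton", "akka.coordination.lease.kubernetes", "node-1")

  val granted: Future[Boolean] = lease.acquire() // singleton/shard proceeds only if true
  // lease.release() gives it back; lease.checkLease() verifies it is still held
}
```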
* Replace ⇒, →, ← with ASCII equivalents
* because we don't want to show them in documentation snippets, and it's
complicated to avoid that when the snippets are
located in src/test/scala in individual modules
* don't replace object `→` in FSM.scala and PersistentFSM.scala
* Add CopyrightHeader support for sbt-boilerplate plugin.
* Add CopyrightHeader support for `*.proto` files.
* Add regex match for both `–` and `-` for CopyrightHeader.
* Add CopyrightHeader support for sbt build files.
* Update copyright from 2018 to 2019.
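A small illustration of the dash handling, with a hypothetical pattern (not the plugin's exact regex):

```scala
object CopyrightDashCheck extends App {
  // Accept either an en dash or a plain hyphen in the year range.
  val headerPattern = """Copyright \(C\) (\d{4})[–-](\d{4}) Lightbend Inc\.""".r

  Seq(
    "Copyright (C) 2009–2019 Lightbend Inc.", // en dash
    "Copyright (C) 2009-2019 Lightbend Inc."  // hyphen
  ).foreach(line => assert(headerPattern.findFirstIn(line).isDefined))
}
```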
* Stop a shard's entities forcefully if they don't handle stopMessage #23751
* Print a warning log while stopping the entities
* Fix version of the backwards exclude file and check that the shard stopped
* Add documentation for the hand-off timeout
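A sketch of the setting this documents, assuming the standard sharding config key:

```scala
import com.typesafe.config.ConfigFactory

object HandOffTimeoutConfig extends App {
  // After this timeout the remaining entities of a shard being handed off are
  // stopped forcefully (with a warning) even if they ignore the stopMessage.
  val shardingConfig = ConfigFactory.parseString(
    "akka.cluster.sharding.handoff-timeout = 60 s")
  println(shardingConfig.getDuration("akka.cluster.sharding.handoff-timeout"))
}
```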
* Provide minSequenceNr for snapshot deletion
Journals can use this to make the bulk deletion more efficient
Use keepNrBatches to delete the last few snapshots in case previous
deletes failed.
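A hypothetical sketch of the bound this provides; the names and arithmetic are illustrative, not the exact internals:

```scala
object SnapshotRetention extends App {
  // Lowest sequence number that must survive when keeping the last
  // keepNrOfBatches snapshot batches of snapshotAfter events each; journals
  // can use it as a lower bound for efficient bulk deletion.
  def minSequenceNr(lastSequenceNr: Long, snapshotAfter: Int, keepNrOfBatches: Int): Long =
    math.max(0L, lastSequenceNr - keepNrOfBatches.toLong * snapshotAfter)

  println(minSequenceNr(lastSequenceNr = 1000, snapshotAfter = 100, keepNrOfBatches = 2)) // 800
}
```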
* Check remembered entities before remembering entity
Messages that come through for an entity before StartEntity
has been processed for that entity caused redundant persistence
of the entity.
* The previous solution didn't work because the untyped StartEntity
message is sent by untyped sharding itself without the typed envelope,
and null was a bit of a hack
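A minimal sketch of the guard described above; the names are illustrative, not Shard.scala internals:

```scala
final case class EntityStarted(entityId: String)

// Persist EntityStarted only for entities that are not already remembered,
// so that early messages or a duplicate StartEntity cannot cause redundant
// persistence of the same entity.
final class RememberGuard(persist: EntityStarted => Unit) {
  private var remembered = Set.empty[String]

  def startEntity(entityId: String): Unit =
    if (!remembered.contains(entityId)) {
      remembered += entityId
      persist(EntityStarted(entityId))
    } // already remembered: nothing to persist, just start/deliver as usual
}
```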
* Revert "fix entityPropsFactory id param, #21809"
This reverts commit cd7eae28f6.
* Revert "Merge pull request #24058 from talpr/talpr-24053-add-entity-id-to-sharding-props"
This reverts commit 8417e70460, reversing
changes made to 22e85f869d.
* Test case covering changing shard id extractor with remember-entities
* This should do the trick
* Feedback addressed
* Docs and migration guide mention
* Correct logic to persist that an entity has moved off the shard
* #21725 cluster-sharding doesn't delete snapshots and messages
Fixes #21725
Without deleting messages, persistence gets polluted with messages that are no longer needed. A naive but bulletproof flow is snapshot -> delete messages -> delete snapshots.
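A sketch of that flow with the standard Akka Persistence callbacks (the snapshot cadence here is illustrative):

```scala
import akka.persistence._

// 1. snapshot  2. delete messages  3. delete older snapshots
class CompactingActor extends PersistentActor {
  override def persistenceId: String = "compacting-actor"

  var count = 0

  override def receiveCommand: Receive = {
    case "event" =>
      persist("event") { _ =>
        count += 1
        if (count % 100 == 0) saveSnapshot(count) // 1. snapshot
      }
    case SaveSnapshotSuccess(metadata) =>
      deleteMessages(metadata.sequenceNr)         // 2. delete messages
    case DeleteMessagesSuccess(toSequenceNr) =>   // 3. delete older snapshots
      deleteSnapshots(SnapshotSelectionCriteria(maxSequenceNr = toSequenceNr - 1))
  }

  override def receiveRecover: Receive = {
    case SnapshotOffer(_, snapshot: Int) => count = snapshot
    case "event"                         => count += 1
  }
}
```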
* #21725 keep N batches (messages and snapshots), with N taken from configuration
* Adding debug message when passivate method cannot identify entity
* Include entity in log message
* Include debug logging for the Some case where the entity is already being processed
* one Replicator per configured role
* log LMDB directory at startup
* clarify the importance of the LMDB directory
* use more than one key to support many entities
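A sketch of spreading remembered entity ids over several keys, with an illustrative key count and hashing scheme (not the exact Shard internals):

```scala
import akka.cluster.ddata.ORSetKey

object RememberEntityKeys extends App {
  type EntityId = String
  val numberOfKeys = 5 // illustrative; more keys = smaller ORSets per update

  // Route each entity id to one of numberOfKeys ORSet keys for this shard,
  // so a single key doesn't have to hold every remembered entity.
  def keyFor(shardId: String, entityId: EntityId): ORSetKey[EntityId] = {
    val i = math.abs(entityId.hashCode % numberOfKeys)
    ORSetKey[EntityId](s"shard-$shardId-$i")
  }

  println(keyFor("7", "user-42"))
}
```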