Clarify concurrency of asyncWriteMessages

Patrik Nordwall 2015-10-09 16:00:06 +02:00
parent aad2c4ca35
commit 550aa10db1
2 changed files with 24 additions and 0 deletions


@@ -59,6 +59,18 @@ interface AsyncWritePlugin {
*
* Note that it is possible to reduce the number of allocations by caching some
* result `Iterable` for the happy path, i.e. when no messages are rejected.
*
* Calls to this method are serialized by the enclosing journal actor. If you spawn
* work in asynchronous tasks it is fine that they complete the futures in any order,
* but the actual writes for a specific persistenceId should be serialized to avoid
* issues such as events of a later write being visible to consumers (query side, or replay)
* before the events of an earlier write are visible. The serialization can also be done with
* consistent hashing if it is too fine-grained to do it at the persistenceId level.
* Normally a `PersistentActor` will only have one outstanding write request to the journal,
* but it may emit several write requests when `persistAsync` is used and the max batch size
* is reached.
*
* This call is protected with a circuit-breaker.
*/
Future<Iterable<Optional<Exception>>> doAsyncWriteMessages(Iterable<AtomicWrite> messages);
@@ -66,6 +78,8 @@ interface AsyncWritePlugin {
* Java API, Plugin API: asynchronously deletes all persistent messages up to
* `toSequenceNr`.
*
* This call is protected with a circuit-breaker.
*
* @see AsyncRecoveryPlugin
*/
Future<Void> doAsyncDeleteMessagesTo(String persistenceId, long toSequenceNr);
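
The ordering requirement described in the documentation above can be met by chaining each new write after the previous write for the same persistenceId. The following is a minimal sketch of that idea, not part of the commit: `WriteChains` and its `doWrite` parameter are illustrative names, and the mutable map is assumed to be touched only by the journal actor, which serializes the calls to doAsyncWriteMessages.

import scala.concurrent.{ ExecutionContext, Future }

// Illustrative helper, not part of the Akka plugin API: remembers the tail of the
// write chain per persistenceId so a new write starts only after the previous one
// for the same id has finished.
final class WriteChains(implicit ec: ExecutionContext) {

  // Mutated only from the journal actor, whose serialized calls make this safe.
  private var tails = Map.empty[String, Future[Unit]]

  def write(persistenceId: String)(doWrite: () => Future[Unit]): Future[Unit] = {
    val previous = tails.getOrElse(persistenceId, Future.successful(()))
    // Chain after the previous write for this id, whether it succeeded or failed,
    // so events of a later write cannot become visible before those of an earlier one.
    val next = previous.recover { case _ => () }.flatMap(_ => doWrite())
    tails = tails.updated(persistenceId, next)
    next // may complete in any order relative to writes for other persistenceIds
  }
}

A doAsyncWriteMessages implementation could call write(aw.persistenceId)(() => storeCall(aw)) for each AtomicWrite (storeCall being whatever the data store offers) and collect the resulting futures; they may complete in any order while the store still sees the writes for each persistenceId in order.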


@@ -206,6 +206,16 @@ trait AsyncWriteJournal extends Actor with WriteJournalBase with AsyncRecovery {
* It is possible but not mandatory to reduce the number of allocations by returning
* `Future.successful(Nil)` for the happy path, i.e. when no messages are rejected.
*
* Calls to this method are serialized by the enclosing journal actor. If you spawn
* work in asynchronous tasks it is fine that they complete the futures in any order,
* but the actual writes for a specific persistenceId should be serialized to avoid
* issues such as events of a later write being visible to consumers (query side, or replay)
* before the events of an earlier write are visible. The serialization can also be done with
* consistent hashing if it is too fine-grained to do it at the persistenceId level.
* Normally a `PersistentActor` will only have one outstanding write request to the journal,
* but it may emit several write requests when `persistAsync` is used and the max batch size
* is reached.
*
* This call is protected with a circuit-breaker.
*/
def asyncWriteMessages(messages: immutable.Seq[AtomicWrite]): Future[immutable.Seq[Try[Unit]]]
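
For the consistent-hashing alternative mentioned in the new documentation, a sketch of an asyncWriteMessages implementation is shown below. It is an assumption-laden illustration, not the commit's code: writeToStore is a hypothetical store call, the remaining plugin methods are trivial stubs, and a fixed number of lanes replaces per-persistenceId bookkeeping (all ids hashing to the same lane share one write chain).

import scala.collection.immutable
import scala.concurrent.{ ExecutionContext, Future }
import scala.util.Try

import akka.persistence.{ AtomicWrite, PersistentRepr }
import akka.persistence.journal.AsyncWriteJournal

class HashedLanesJournal extends AsyncWriteJournal {
  private implicit val ec: ExecutionContext = context.dispatcher

  // Fixed number of write lanes; coarser than per-persistenceId but still preserves
  // the order of writes for any single persistenceId.
  private val laneCount = 8
  private val lanes = Array.fill[Future[Unit]](laneCount)(Future.successful(()))

  override def asyncWriteMessages(messages: immutable.Seq[AtomicWrite]): Future[immutable.Seq[Try[Unit]]] = {
    val writes = messages.map { aw =>
      val lane = (aw.persistenceId.hashCode % laneCount + laneCount) % laneCount
      // Chain after the previous write in the same lane (completed or failed), so
      // events of a later write cannot become visible before those of an earlier one.
      val write = lanes(lane).recover { case _ => () }.flatMap(_ => writeToStore(aw))
      lanes(lane) = write
      write
    }
    // The returned futures may complete in any order; only the store writes are ordered.
    // A failed future fails the whole batch, which the circuit breaker then handles.
    Future.sequence(writes).map(_.map(_ => Try(())))
  }

  override def asyncDeleteMessagesTo(persistenceId: String, toSequenceNr: Long): Future[Unit] =
    Future.successful(()) // stub

  override def asyncReplayMessages(persistenceId: String, fromSequenceNr: Long, toSequenceNr: Long, max: Long)(
      recoveryCallback: PersistentRepr => Unit): Future[Unit] =
    Future.successful(()) // stub: replays nothing

  override def asyncReadHighestSequenceNr(persistenceId: String, fromSequenceNr: Long): Future[Long] =
    Future.successful(0L) // stub

  // Hypothetical call standing in for the actual data-store driver.
  private def writeToStore(aw: AtomicWrite): Future[Unit] = Future.successful(())
}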