# Persistence Query
## Dependency
To use Persistence Query, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
  bomGroup=org.apache.pekko bomArtifact=pekko-bom_$scala.binary.version$ bomVersionSymbols=PekkoVersion
  symbol1=PekkoVersion
  value1="$pekko.version$"
  group=org.apache.pekko
  artifact=pekko-persistence-query_$scala.binary.version$
  version=PekkoVersion
}
This will also add a dependency on the @ref:[Pekko Persistence](persistence.md) module.
## Introduction
Pekko persistence query complements @ref:[Event Sourcing](typed/persistence.md) by providing a universal asynchronous stream-based query interface that various journal plugins can implement in order to expose their query capabilities.
The most typical use case of persistence query is implementing the so-called query side (also known as the "read side") in the popular CQRS architecture pattern, in which the writing side of the application (e.g. implemented using Pekko persistence) is completely separated from the query side. Pekko Persistence Query itself is not directly the query side of an application; however, it can help to migrate data from the write side to the query-side database. In very simple scenarios Persistence Query may be powerful enough to fulfill the query needs of your app; however, we highly recommend (in the spirit of CQRS) splitting the write/read sides into separate datastores as the need arises.
For a similar implementation of the query interface for @ref:[Durable State Behaviors](typed/durable-state/persistence.md), please refer to @ref:[Persistence Query using Durable State](durable-state/persistence-query.md).
## Design overview
Pekko persistence query is purposely designed to be a very loosely specified API. This keeps the provided APIs general enough for each journal implementation to expose its best features; e.g. a SQL journal can use complex SQL queries, and a journal that is able to subscribe to a live event stream can expose that capability through the same API - a typed stream of events.
Each read journal must explicitly document which types of queries it supports. Refer to your journal plugin's documentation for details on the supported queries and their semantics.
While Pekko Persistence Query does not provide actual implementations of ReadJournals, it defines a number of pre-defined query types for the most common query scenarios that most journals are likely to implement (though they are not required to).
## Read Journals
In order to issue queries one has to first obtain an instance of a @apidoc[query.*.ReadJournal].
For example, given a library that provides a `pekko.persistence.query.my-read-journal`, obtaining the related journal is as simple as:
- Scala
- @@snip PersistenceQueryDocSpec.scala { #basic-usage }
- Java
- @@snip PersistenceQueryDocTest.java { #basic-usage }
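To make this concrete, here is a minimal hedged sketch of the lookup; `MyScaladslReadJournal` is a hypothetical placeholder for whatever scaladsl `ReadJournal` implementation the plugin registers under that config path:

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.persistence.query.PersistenceQuery

val system = ActorSystem("QuerySide")

// Look up the journal by its configuration identifier; the type parameter
// must match the scaladsl ReadJournal implementation the plugin registers.
// MyScaladslReadJournal is a hypothetical placeholder.
val readJournal: MyScaladslReadJournal =
  PersistenceQuery(system).readJournalFor[MyScaladslReadJournal](
    "pekko.persistence.query.my-read-journal")
```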
Journal implementers are encouraged to put this identifier in a variable known to the user, such that one can access it via @scala[`readJournalFor[NoopJournal](NoopJournal.identifier)`]@java[`getJournalFor(NoopJournal.class, NoopJournal.identifier)`], however this is not enforced.
### Predefined queries
Pekko persistence query comes with a number of query interfaces built in and encourages journal implementors to implement them according to the semantics described below. It is important to note that while these query types are very common, a journal is not obliged to implement all of them - for example because such a query would be significantly inefficient in a given journal.
@@@ note
Refer to the documentation of the @apidoc[query.*.ReadJournal] plugin you are using for a specific list of supported query types. For example, Journal plugins should document their stream completion strategies.
@@@
The predefined queries are:
#### PersistenceIdsQuery and CurrentPersistenceIdsQuery

@apidoc[persistenceIds](query.*.PersistenceIdsQuery) is designed to allow users to subscribe to a stream of all persistence ids in the system. By default this stream should be assumed to be a "live" stream, which means that the journal should keep emitting new persistence ids as they come into the system:
- Scala
- @@snip PersistenceQueryDocSpec.scala { #all-persistence-ids-live }
- Java
- @@snip PersistenceQueryDocTest.java { #all-persistence-ids-live }
If your usage does not require a live stream, you can use the @apidoc[currentPersistenceIds](query.*.CurrentPersistenceIdsQuery) query:
- Scala
- @@snip PersistenceQueryDocSpec.scala { #all-persistence-ids-snap }
- Java
- @@snip PersistenceQueryDocTest.java { #all-persistence-ids-snap }
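Both variants are consumed as ordinary streams. Here is a hedged sketch, typing the journal only by the predefined query traits it implements and assuming an implicit `ActorSystem` in scope for materialization:

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.persistence.query.scaladsl.{ CurrentPersistenceIdsQuery, PersistenceIdsQuery }

def watchPersistenceIds(journal: PersistenceIdsQuery with CurrentPersistenceIdsQuery)(
    implicit system: ActorSystem): Unit = {
  // Live stream: keeps emitting newly written persistence ids until cancelled.
  journal.persistenceIds().runForeach(id => println(s"live: $id"))

  // Snapshot: completes after emitting the ids known at query time.
  journal.currentPersistenceIds().runForeach(id => println(s"current: $id"))
}
```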
#### EventsByPersistenceIdQuery and CurrentEventsByPersistenceIdQuery

@apidoc[eventsByPersistenceId](query.*.EventsByPersistenceIdQuery) is a query equivalent to replaying an @ref:[event sourced actor](typed/persistence.md); however, since it is a stream, it is possible to keep it alive and watch for additional incoming events persisted by the persistent actor identified by the given `persistenceId`.
- Scala
- @@snip PersistenceQueryDocSpec.scala { #events-by-persistent-id }
- Java
- @@snip PersistenceQueryDocTest.java { #events-by-persistent-id }
Most journals will have to resort to polling in order to achieve this, which can typically be configured with a `refresh-interval` configuration property.

If your usage does not require a live stream, you can use the @apidoc[currentEventsByPersistenceId](query.*.CurrentEventsByPersistenceIdQuery) query.
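For illustration, here is a minimal sketch of both calls, again typed only by the predefined query traits; the persistence id `"user-1337"` is an arbitrary example:

```scala
import org.apache.pekko.NotUsed
import org.apache.pekko.persistence.query.EventEnvelope
import org.apache.pekko.persistence.query.scaladsl.{ CurrentEventsByPersistenceIdQuery, EventsByPersistenceIdQuery }
import org.apache.pekko.stream.scaladsl.Source

// Live replay: emits all stored events of the entity, then stays open and
// emits new events as the persistent actor writes them.
def liveEvents(journal: EventsByPersistenceIdQuery): Source[EventEnvelope, NotUsed] =
  journal.eventsByPersistenceId("user-1337", fromSequenceNr = 0L, toSequenceNr = Long.MaxValue)

// Finite variant: completes once the last currently stored event is emitted.
def storedEvents(journal: CurrentEventsByPersistenceIdQuery): Source[EventEnvelope, NotUsed] =
  journal.currentEventsByPersistenceId("user-1337", 0L, Long.MaxValue)
```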
#### EventsByTag and CurrentEventsByTag

@apidoc[eventsByTag](query.*.EventsByTagQuery) allows querying events regardless of which `persistenceId` they are associated with. This query is hard to implement in some journals, or may need some additional preparation of the used data store to be executed efficiently.
The goal of this query is to allow querying for all events which are "tagged" with a specific tag. This includes the use case of querying for all domain events of an Aggregate Root type.
Please refer to your read journal plugin's documentation to find out if and how it is supported.
Some journals may support @ref:[tagging of events](typed/persistence.md#tagging) or @ref:[Event Adapters](persistence.md#event-adapters) that wrap the events in a @apidoc[persistence.journal.Tagged] with the given tags. The journal may support other ways of doing tagging - again, how exactly this is implemented depends on the journal used. Here is an example of such tagging with an @apidoc[typed.*.EventSourcedBehavior]:
- Scala
- @@snip BasicPersistentActorCompileOnly.scala { #tagging-query }
- Java
- @@snip BasicPersistentBehaviorTest.java { #tagging-query }
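For reference, a minimal self-contained sketch of the same tagging approach; the command, event, and state types of this small shopping-cart entity are hypothetical:

```scala
import org.apache.pekko.persistence.typed.PersistenceId
import org.apache.pekko.persistence.typed.scaladsl.{ Effect, EventSourcedBehavior }

// Hypothetical protocol of a small shopping-cart entity.
sealed trait Command
final case class AddItem(name: String) extends Command
sealed trait Event
final case class ItemAdded(name: String) extends Event
final case class State(items: List[String])

def behavior(entityId: String): EventSourcedBehavior[Command, Event, State] =
  EventSourcedBehavior[Command, Event, State](
    persistenceId = PersistenceId("Cart", entityId),
    emptyState = State(Nil),
    commandHandler = (_, cmd) =>
      cmd match {
        case AddItem(name) => Effect.persist(ItemAdded(name))
      },
    eventHandler = (state, evt) =>
      evt match {
        case ItemAdded(name) => state.copy(items = name :: state.items)
      })
    // every event of this entity is tagged so that eventsByTag("cart") finds it
    .withTagger(_ => Set("cart"))
```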
@@@ note
A very important thing to keep in mind when using queries spanning multiple persistenceIds, such as @apidoc[EventsByTag](query.*.EventsByTagQuery), is that the order in which the events appear in the stream is rarely guaranteed (or stable between materializations).

Journals may choose to provide strict ordering of the events, and should then document explicitly what kind of ordering guarantee they provide - for example "ordered by timestamp ascending, independently of persistenceId", which is easy to achieve on relational databases, yet may be hard to implement efficiently on plain key-value datastores.
@@@
In the example below we query all events which have been tagged (we assume this was performed by the write side using @ref:[tagging of events](typed/persistence.md#tagging) or @ref:[Event Adapters](persistence.md#event-adapters), or that the journal is smart enough to figure out what we mean by this tag - for example, if the journal stored the events as JSON it may try to find those with the field `tag` set to this value, etc.).
- Scala
- @@snip PersistenceQueryDocSpec.scala { #events-by-tag }
- Java
- @@snip PersistenceQueryDocTest.java { #events-by-tag }
As you can see, we can use all the usual stream operators available from @ref:[Streams](stream/index.md) on the resulting query stream, including for example taking the first 10 elements and cancelling the stream. It is worth pointing out that the built-in EventsByTag query has an optionally supported `offset` parameter (of type `Long`) which the journals can use to implement resumable streams. For example, a journal may be able to use a WHERE clause to begin the read starting from a specific row, or in a datastore that is able to order events by insertion time it could treat the `Long` as a timestamp and select only newer events.
If your usage does not require a live stream, you can use the @apidoc[currentEventsByTag](query.*.CurrentEventsByTagQuery) query.
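A hedged sketch of both starting points, using only the predefined @apidoc[query.*.EventsByTagQuery] trait; the tag name and offset value are arbitrary examples:

```scala
import org.apache.pekko.NotUsed
import org.apache.pekko.persistence.query.{ EventEnvelope, Offset }
import org.apache.pekko.persistence.query.scaladsl.EventsByTagQuery
import org.apache.pekko.stream.scaladsl.Source

// Start from the very beginning of the tagged stream:
def allBids(journal: EventsByTagQuery): Source[EventEnvelope, NotUsed] =
  journal.eventsByTag("bid", Offset.noOffset)

// Resume from a previously stored position, here a sequence-number offset:
def bidsSince(journal: EventsByTagQuery, seqNr: Long): Source[EventEnvelope, NotUsed] =
  journal.eventsByTag("bid", Offset.sequence(seqNr))
```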
#### EventsBySlice and CurrentEventsBySlice

Query events for a given entity type and slices. A slice is deterministically defined based on the persistence id. The purpose is to evenly distribute all persistence ids over the slices.

See @apidoc[persistence.query.typed.*.EventsBySliceQuery] and @apidoc[persistence.query.typed.*.CurrentEventsBySliceQuery].
### Materialized values of queries

Journals are able to provide additional information related to a query by exposing @ref:[materialized values](stream/stream-quickstart.md#materialized-values), a feature of @ref:[Streams](stream/index.md) that allows exposing additional values at stream materialization time.

More advanced query journals may use this technique to expose information about the character of the materialized stream, for example whether it's finite or infinite, strictly ordered or not ordered at all. The materialized value type is defined as the second type parameter of the returned @apidoc[stream.*.Source], which allows journals to provide users with their specialised query object, as demonstrated in the sample below:
- Scala
- @@snip PersistenceQueryDocSpec.scala { #advanced-journal-query-types }
- Java
- @@snip PersistenceQueryDocTest.java { #advanced-journal-query-types }
- Scala
- @@snip PersistenceQueryDocSpec.scala { #advanced-journal-query-definition }
- Java
- @@snip PersistenceQueryDocTest.java { #advanced-journal-query-definition }
- Scala
- @@snip PersistenceQueryDocSpec.scala { #advanced-journal-query-usage }
- Java
- @@snip PersistenceQueryDocTest.java { #advanced-journal-query-usage }
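For instance, a journal could define a query whose materialized value describes the stream it returns. The `QueryMetadata` type and `AllEventsQuery` trait below are a hypothetical sketch, not a provided API:

```scala
import org.apache.pekko.persistence.query.EventEnvelope
import org.apache.pekko.stream.scaladsl.Source

// Hypothetical metadata a journal might expose at materialization time.
final case class QueryMetadata(deterministicOrder: Boolean, infinite: Boolean)

trait AllEventsQuery {
  // The second type parameter of Source is the materialized value type.
  def allEvents(): Source[EventEnvelope, QueryMetadata]
}
```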
## Performance and denormalization

When building systems using @ref:[Event Sourcing](typed/persistence.md) and CQRS (Command & Query Responsibility Segregation) techniques it is tremendously important to realise that the write-side has completely different needs from the read-side, and separating those concerns into datastores that are optimised for either side makes it possible to offer the best experience for the write and read sides independently.
For example, in a bidding system it is important to "take the write" and respond to the bidder that we have accepted the bid as soon as possible, which means that write-throughput is of highest importance for the write-side – often this means that data stores which are able to scale to accommodate these requirements have a less expressive query side.
On the other hand the same application may have some complex statistics view, or we may have analysts working with the data to figure out the best bidding strategies and trends - this often requires some kind of expressive query capabilities, such as SQL, or writing Spark jobs to analyse the data. Therefore the data stored on the write-side needs to be projected into the other, read-optimised datastore.
@@@ note
When referring to Materialized Views in Pekko Persistence, think of them as "some persistent storage of the result of a Query". In other words, the view is created once, in order to be afterwards queried multiple times, as in this format it may be more efficient or convenient to query it (instead of the source events directly).
@@@
### Materialize view to Reactive Streams compatible datastore

If the read datastore exposes a Reactive Streams interface, then implementing a simple projection is as simple as using the read journal and feeding it into the database's driver interface, for example like so:
- Scala
- @@snip PersistenceQueryDocSpec.scala { #projection-into-different-store-rs }
- Java
- @@snip PersistenceQueryDocTest.java { #projection-into-different-store-rs }
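Sketched with a hypothetical driver-provided `Subscriber` (only the Pekko Streams calls are real API; an implicit `ActorSystem` is assumed for materialization):

```scala
import org.reactivestreams.Subscriber
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.persistence.query.Offset
import org.apache.pekko.persistence.query.scaladsl.EventsByTagQuery
import org.apache.pekko.stream.scaladsl.Sink

// `writer` stands in for a Subscriber obtained from the datastore's driver.
def project(journal: EventsByTagQuery, writer: Subscriber[Any])(
    implicit system: ActorSystem): Unit =
  journal
    .eventsByTag("bid", Offset.noOffset)
    .map(_.event) // strip the envelope and hand the raw event to the driver
    .runWith(Sink.fromSubscriber(writer))
```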
### Materialize view using mapAsync

If the target database does not provide a Reactive Streams `Subscriber` that can perform writes, you may have to implement the write logic using plain functions or Actors instead.

If your write logic is stateless and you need to convert the events from one data type to another before writing into the alternative datastore, then the projection will look like this:
- Scala
- @@snip PersistenceQueryDocSpec.scala { #projection-into-different-store-simple-classes }
- Java
- @@snip PersistenceQueryDocTest.java { #projection-into-different-store-simple-classes }
- Scala
- @@snip PersistenceQueryDocSpec.scala { #projection-into-different-store-simple }
- Java
- @@snip PersistenceQueryDocTest.java { #projection-into-different-store-simple }
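A compact hedged sketch of the same pattern; `ExampleStore` is a hypothetical asynchronous store, and an implicit `ActorSystem` is assumed for materialization:

```scala
import scala.concurrent.Future
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.persistence.query.Offset
import org.apache.pekko.persistence.query.scaladsl.EventsByTagQuery
import org.apache.pekko.stream.scaladsl.Sink

// Hypothetical stateless store with an asynchronous write operation.
trait ExampleStore {
  def save(event: Any): Future[Unit]
}

def project(journal: EventsByTagQuery, store: ExampleStore)(
    implicit system: ActorSystem): Unit =
  journal
    .eventsByTag("bid", Offset.noOffset)
    // parallelism = 1 preserves event order while still writing asynchronously
    .mapAsync(parallelism = 1)(envelope => store.save(envelope.event))
    .runWith(Sink.ignore)
```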
### Resumable projections

Sometimes you may need to use "resumable" projections, which do not start from the beginning of time each time they are run. In this case, the sequence number (or offset) of the processed event is stored and used the next time the projection is started. This pattern is implemented in the Pekko Projections module.
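To illustrate the shape of the pattern (Pekko Projections provides production-ready implementations; the `OffsetStore` trait and projection id below are hypothetical stand-ins):

```scala
import scala.concurrent.{ ExecutionContext, Future }
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.persistence.query.Offset
import org.apache.pekko.persistence.query.scaladsl.EventsByTagQuery
import org.apache.pekko.stream.scaladsl.Sink

// Hypothetical durable storage for the last processed offset.
trait OffsetStore {
  def latestOffset(projectionId: String): Future[Offset]
  def save(projectionId: String, offset: Offset): Future[Unit]
}

def resumeProjection(journal: EventsByTagQuery, offsets: OffsetStore)(
    implicit system: ActorSystem, ec: ExecutionContext): Unit =
  offsets.latestOffset("bid-projection").foreach { offset =>
    journal
      .eventsByTag("bid", offset)
      .mapAsync(1) { envelope =>
        // handle the event, then record its offset so a restart resumes here
        offsets.save("bid-projection", envelope.offset)
      }
      .runWith(Sink.ignore)
  }
```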
## Query plugins

Query plugins are various (mostly community driven) @apidoc[query.*.ReadJournal] implementations for all kinds of available datastores.

This section aims to provide tips and guide plugin developers through implementing a custom query plugin. Most users will not need to implement journals themselves, except when targeting a not yet supported datastore.
@@@ note
Since different data stores provide different query capabilities, journal plugins must extensively document their exposed semantics as well as the handled query scenarios.
@@@
### ReadJournal plugin API

A read journal plugin must implement @apidoc[query.ReadJournalProvider], which creates instances of @scaladoc[pekko.persistence.query.scaladsl.ReadJournal](pekko.persistence.query.scaladsl.ReadJournal) and @javadoc[pekko.persistence.query.javadsl.ReadJournal](pekko.persistence.query.javadsl.ReadJournal). The plugin must implement both the scaladsl and the javadsl @scala[traits]@java[interfaces] because the @scaladoc[Source](pekko.stream.scaladsl.Source) and @javadoc[Source](pekko.stream.javadsl.Source) are different types, and even though those types can be converted to each other, it is most convenient for the end user to get access to the Java or Scala `Source` directly. As illustrated below, one of the implementations can delegate to the other.
Below is a simple journal implementation:
- Scala
- @@snip PersistenceQueryDocSpec.scala { #my-read-journal }
- Java
- @@snip PersistenceQueryDocTest.java { #my-read-journal }
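As a hedged outline of the provider class from the snippet above (`MyScaladslReadJournal` and `MyJavadslReadJournal` stand for the plugin's own implementations of the two `ReadJournal` @scala[traits]@java[interfaces]):

```scala
import com.typesafe.config.Config
import org.apache.pekko.actor.ExtendedActorSystem
import org.apache.pekko.persistence.query.{ javadsl, scaladsl, ReadJournalProvider }

class MyReadJournalProvider(system: ExtendedActorSystem, config: Config)
    extends ReadJournalProvider {

  private val journal = new MyScaladslReadJournal(system, config)

  override def scaladslReadJournal(): scaladsl.ReadJournal = journal

  // the Java API simply delegates to the Scala implementation
  override def javadslReadJournal(): javadsl.ReadJournal =
    new MyJavadslReadJournal(journal)
}
```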
And the @apidoc[eventsByTag](query.*.EventsByTagQuery) could be backed by a `GraphStage`, for example:
- Scala
- @@snip MyEventsByTagSource.scala { #events-by-tag-publisher }
- Java
- @@snip MyEventsByTagSource.java { #events-by-tag-publisher }
The @apidoc[query.ReadJournalProvider] class must have a constructor with one of these signatures:

- constructor with an @apidoc[actor.ExtendedActorSystem] parameter, a @javadoc[com.typesafe.config.Config](com.typesafe.config.Config) parameter, and a `String` parameter for the config path
- constructor with an `ExtendedActorSystem` parameter and a `com.typesafe.config.Config` parameter
- constructor with one `ExtendedActorSystem` parameter
- constructor without parameters

The plugin section of the actor system's config will be passed in the `Config` constructor parameter. The config path of the plugin is passed in the `String` parameter.
If the underlying datastore only supports queries that are completed when they reach the
end of the "result set", the journal has to submit new queries after a while in order
to support "infinite" event streams that include events stored after the initial query
has completed. It is recommended that the plugin use a configuration property named
`refresh-interval` for defining such a refresh interval.
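Put together, a plugin's configuration section might look like the following hypothetical sketch; the class name `com.example.MyReadJournalProvider` and the `3s` interval are placeholders:

```hocon
# the config path handed to the provider's String constructor parameter
pekko.persistence.query.my-read-journal {
  # fully qualified class name of the ReadJournalProvider implementation
  class = "com.example.MyReadJournalProvider"
  # how often "live" queries poll the datastore for newly stored events
  refresh-interval = 3s
}
```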
### Scaling out

In use cases where the number of events is very high, where the work needed for each event is significant, or where resilience is important (so that if a node crashes the persistent queries are quickly started on a new node and can resume operations), @ref:[Cluster Sharding](typed/cluster-sharding.md) together with event tagging is an excellent fit for sharding events over a cluster.