remove more akka refs in our docs (#269)

Author: PJ Fanning, 2023-03-24 22:19:14 +01:00 (committed by GitHub)
parent 3e1231c320
commit 69b5045f9e
13 changed files with 14 additions and 25 deletions


@@ -145,4 +145,4 @@ Rolling update is not supported when @ref:[changing the remoting transport](../r
 ### Migrating from Classic Sharding to Typed Sharding
 If you have been using classic sharding it is possible to do a rolling update to typed sharding using a 3 step procedure.
-The steps along with example commits are detailed in [this sample PR](https://github.com/akka/akka-samples/pull/110)
+The steps along with example commits are detailed in [this sample Akka PR](https://github.com/akka/akka-samples/pull/110)


@@ -105,8 +105,8 @@ There are two actors that could potentially be supervised. For the `consumer` si
 * The user actor e.g. `/user/consumer/singleton` which the manager starts on the oldest node
 The Cluster singleton manager actor should not have its supervision strategy changed as it should always be running.
-However it is sometimes useful to add supervision for the user actor.
-To accomplish this add a parent supervisor actor which will be used to create the 'real' singleton instance.
+However, it is sometimes useful to add supervision for the user actor.
+To accomplish this, add a parent supervisor actor which will be used to create the 'real' singleton instance.
 Below is an example implementation (credit to [this StackOverflow answer](https://stackoverflow.com/questions/36701898/how-to-supervise-cluster-singleton-in-akka/36716708#36716708))
 Scala
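
The example implementation itself is not included in this hunk. As an illustration only, a parent supervisor for a classic cluster singleton might look like the following minimal Scala sketch; the names `SingletonSupervisor`, `startSingleton` and `consumer` are hypothetical, not part of this commit.

```scala
import org.apache.pekko.actor.{ Actor, ActorRef, ActorSystem, OneForOneStrategy, PoisonPill, Props, SupervisorStrategy }
import org.apache.pekko.cluster.singleton.{ ClusterSingletonManager, ClusterSingletonManagerSettings }
import scala.concurrent.duration._

// Hypothetical parent that supervises the 'real' singleton instance.
class SingletonSupervisor(childProps: Props) extends Actor {
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 10.seconds) {
      case _: Exception => SupervisorStrategy.Restart
    }

  // The 'real' singleton child, created and restarted by this supervisor.
  private val singleton = context.actorOf(childProps, "singleton")

  def receive: Receive = { case msg => singleton.forward(msg) }
}

object SingletonSupervisor {
  // Start the singleton manager with the supervisor wrapped around the real singleton props.
  def startSingleton(system: ActorSystem, realSingletonProps: Props): ActorRef =
    system.actorOf(
      ClusterSingletonManager.props(
        singletonProps = Props(new SingletonSupervisor(realSingletonProps)),
        terminationMessage = PoisonPill,
        settings = ClusterSingletonManagerSettings(system)),
      name = "consumer")
}
```

The manager then runs the supervisor as the singleton on the oldest node, and the supervisor creates and restarts the actual singleton child with whatever strategy is appropriate.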


@@ -1,7 +1,6 @@
 # Persistence - Building a storage backend
 Storage backends for journals and snapshot stores are pluggable in the Pekko persistence extension.
-A directory of persistence journal and snapshot store plugins is available at the Pekko Community Projects page, see [Community plugins](https://akka.io/community/)
 This documentation described how to build a new storage backend.
 Applications can provide their own plugins by implementing a plugin API and activating them by configuration.
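
The plugin API mentioned above is not spelled out in this hunk. As a rough sketch, a custom journal plugin in Scala extends `AsyncWriteJournal` and implements four asynchronous methods; the class name `MyJournal` and the `my-journal` config id below are hypothetical.

```scala
import scala.collection.immutable
import scala.concurrent.Future
import scala.util.Try
import org.apache.pekko.persistence.{ AtomicWrite, PersistentRepr }
import org.apache.pekko.persistence.journal.AsyncWriteJournal

// Hypothetical journal plugin skeleton; activated by configuration such as
//   my-journal.class = "com.example.MyJournal"
//   pekko.persistence.journal.plugin = "my-journal"
class MyJournal extends AsyncWriteJournal {

  // Persist each AtomicWrite (a batch of events for one persistence id) atomically.
  override def asyncWriteMessages(messages: immutable.Seq[AtomicWrite]): Future[immutable.Seq[Try[Unit]]] = ???

  // Logically delete all events up to and including toSequenceNr.
  override def asyncDeleteMessagesTo(persistenceId: String, toSequenceNr: Long): Future[Unit] = ???

  // Replay stored events to the recovering actor via the callback.
  override def asyncReplayMessages(persistenceId: String, fromSequenceNr: Long, toSequenceNr: Long, max: Long)(
      recoveryCallback: PersistentRepr => Unit): Future[Unit] = ???

  // Return the highest sequence number stored for the given persistence id.
  override def asyncReadHighestSequenceNr(persistenceId: String, fromSequenceNr: Long): Future[Long] = ???
}
```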


@@ -2,8 +2,6 @@
 Storage backends for journals and snapshot stores are pluggable in the Pekko persistence extension.
-A directory of persistence journal and snapshot store plugins is available at the Pekko Community Projects page, see [Community plugins](https://akka.io/community/)
 Plugins maintained within the Pekko organization are:
 * [pekko-persistence-cassandra]($pekko.doc.dns$/docs/pekko-persistence-cassandra/current/) (no Durable State support)
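
For illustration, selecting one of these plugins as the default journal and snapshot store comes down to a couple of configuration settings. The plugin ids below follow the usual naming convention but are assumptions, so check the plugin's own documentation.

```scala
import com.typesafe.config.ConfigFactory
import org.apache.pekko.actor.ActorSystem

object CassandraPluginSetup extends App {
  // Assumed plugin ids; verify against the pekko-persistence-cassandra documentation.
  val config = ConfigFactory.parseString("""
    pekko.persistence.journal.plugin = "pekko.persistence.cassandra.journal"
    pekko.persistence.snapshot-store.plugin = "pekko.persistence.cassandra.snapshot"
  """)

  val system = ActorSystem("example", config.withFallback(ConfigFactory.load()))
}
```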


@@ -52,8 +52,7 @@ query types for the most common query scenarios, that most journals are likely t
 ## Read Journals
 In order to issue queries one has to first obtain an instance of a @apidoc[query.*.ReadJournal].
-Read journals are implemented as [Community plugins](https://akka.io/community/#plugins-to-akka-persistence-query), each targeting a specific datastore (for example Cassandra or JDBC
-databases). For example, given a library that provides a `pekko.persistence.query.my-read-journal` obtaining the related
+For example, given a library that provides a `pekko.persistence.query.my-read-journal` obtaining the related
 journal is as simple as:
 Scala
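
The referenced Scala snippet is not rendered in this diff. Obtaining the read journal for the `pekko.persistence.query.my-read-journal` plugin id mentioned above would look roughly like the following sketch; the `EventsByPersistenceIdQuery` capability is assumed purely for the example.

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.persistence.query.PersistenceQuery
import org.apache.pekko.persistence.query.scaladsl.EventsByPersistenceIdQuery

object ReadJournalExample extends App {
  val system = ActorSystem("example")

  // Look up the read journal by the plugin id that the (hypothetical) library documents.
  val readJournal =
    PersistenceQuery(system).readJournalFor[EventsByPersistenceIdQuery]("pekko.persistence.query.my-read-journal")

  // It can then be used to run queries, for example:
  // readJournal.eventsByPersistenceId("some-persistence-id", 0L, Long.MaxValue)
}
```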
@@ -65,8 +64,6 @@ Java
 Journal implementers are encouraged to put this identifier in a variable known to the user, such that one can access it via
 @scala[@scaladoc[readJournalFor[NoopJournal](NoopJournal.identifier)](pekko.persistence.query.PersistenceQuery#readJournalFor[T%3C:org.apache.pekko.persistence.query.scaladsl.ReadJournal](readJournalPluginId:String):T)]@java[@javadoc[getJournalFor(NoopJournal.class, NoopJournal.identifier)](pekko.persistence.query.PersistenceQuery#getReadJournalFor(java.lang.Class,java.lang.String))], however this is not enforced.
-Read journal implementations are available as [Community plugins](https://akka.io/community/#plugins-to-akka-persistence-query).
 ### Predefined queries
 Pekko persistence query comes with a number of query interfaces built in and suggests Journal implementors to implement
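
As a small sketch of the convention described above, a plugin might expose its id like this; the `NoopJournal` name mirrors the doc text and the id string is hypothetical.

```scala
import org.apache.pekko.persistence.query.scaladsl.ReadJournal

// Hypothetical plugin: expose the configured plugin id so users do not hard-code the string.
class NoopJournal extends ReadJournal

object NoopJournal {
  final val identifier = "pekko.persistence.query.noop-read-journal"
}

// Usage: PersistenceQuery(system).readJournalFor[NoopJournal](NoopJournal.identifier)
```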


@@ -468,8 +468,9 @@ Java
 This technique only applies if the Pekko Persistence plugin you are using provides this capability.
 Check the documentation of your favourite plugin to see if it supports this style of persistence.
-If it doesn't, you may want to skim the [list of existing journal plugins](https://akka.io/community/#journal-plugins), just in case some other plugin
-for your favourite datastore *does* provide this capability.
+Over time, we hope that some Community projects will extend the number of supported platforms.
+Notify us if you would like us to link to any that you know about. You may also find Akka Community
+plugins that could be adapted for Pekko usage.
 @@@


@@ -44,11 +44,10 @@ recover its state from these messages.
 case of sender and receiver JVM crashes.
 * @scala[@scaladoc[AsyncWriteJournal](pekko.persistence.journal.AsyncWriteJournal)]@java[@javadoc[AsyncWriteJournal](pekko.persistence.journal.japi.AsyncWriteJournal)]: A journal stores the sequence of messages sent to a persistent actor. An application can control which messages
 are journaled and which are received by the persistent actor without being journaled. Journal maintains `highestSequenceNr` that is increased on each message.
 The storage backend of a journal is pluggable.
-Replicated journals are available as [Community plugins](https://akka.io/community/).
 * *Snapshot store*: A snapshot store persists snapshots of a persistent actor's state. Snapshots are
 used for optimizing recovery times. The storage backend of a snapshot store is pluggable.
-The persistence extension comes with a "local" snapshot storage plugin, which writes to the local filesystem. Replicated snapshot stores are available as [Community plugins](https://akka.io/community/)
+The persistence extension comes with a "local" snapshot storage plugin, which writes to the local filesystem.
 * *Event Sourcing*. Based on the building blocks described above, Pekko persistence provides abstractions for the
 development of event sourced applications (see section @ref:[Event Sourcing](typed/persistence.md#event-sourcing-concepts)).
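
To make the Event Sourcing building block concrete, a minimal typed `EventSourcedBehavior` might look like the following sketch; the `Counter` example is hypothetical and not part of this commit.

```scala
import org.apache.pekko.actor.typed.Behavior
import org.apache.pekko.persistence.typed.PersistenceId
import org.apache.pekko.persistence.typed.scaladsl.{ Effect, EventSourcedBehavior }

// Hypothetical minimal event sourced actor: a counter that persists every increment.
object Counter {
  sealed trait Command
  case object Increment extends Command

  sealed trait Event
  case object Incremented extends Event

  final case class State(value: Int)

  def apply(id: String): Behavior[Command] =
    EventSourcedBehavior[Command, Event, State](
      persistenceId = PersistenceId.ofUniqueId(id),
      emptyState = State(0),
      // Every command is turned into a persisted event.
      commandHandler = (_, _) => Effect.persist(Incremented),
      // The event handler rebuilds state, both live and during recovery from the journal.
      eventHandler = (state, _) => state.copy(value = state.value + 1))
}
```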


@@ -7,7 +7,7 @@ Integration with Reactive Streams, materializes into a @javadoc[Subscriber](java
 ## Signature
 Scala
-: @@snip[JavaFlowSupport.scala](/stream/src/main/scala-jdk-9/akka/stream/scaladsl/JavaFlowSupport.scala) { #asSubscriber }
+: @@snip[JavaFlowSupport.scala](/stream/src/main/scala-jdk-9/org/apache/pekko/stream/scaladsl/JavaFlowSupport.scala) { #asSubscriber }
 Java
 : @@snip[JavaFlowSupport.java](/docs/src/test/java-jdk9-only/jdocs/stream/operators/source/AsSubscriber.java) { #api }
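
The referenced snippet is not shown in this diff. A rough usage sketch of `JavaFlowSupport.Source.asSubscriber` in Scala could look like this; the object name `AsSubscriberExample` is made up.

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ JavaFlowSupport, Keep, Sink }

object AsSubscriberExample extends App {
  implicit val system: ActorSystem = ActorSystem("example")

  // Materializes a JDK java.util.concurrent.Flow.Subscriber that feeds elements into the stream.
  val (subscriber, done) =
    JavaFlowSupport.Source
      .asSubscriber[String]
      .toMat(Sink.foreach(println))(Keep.both)
      .run()

  // Hand `subscriber` to any JDK Flow.Publisher: somePublisher.subscribe(subscriber)
}
```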


@@ -7,7 +7,7 @@ Integration with Reactive Streams, subscribes to a @javadoc[Publisher](java.util
 ## Signature
 Scala
-: @@snip[JavaFlowSupport.scala](/stream/src/main/scala-jdk-9/akka/stream/scaladsl/JavaFlowSupport.scala) { #fromPublisher }
+: @@snip[JavaFlowSupport.scala](/stream/src/main/scala-jdk-9/org/apache/pekko/stream/scaladsl/JavaFlowSupport.scala) { #fromPublisher }
 Java
 : @@snip[JavaFlowSupport.java](/docs/src/test/java-jdk9-only/jdocs/stream/operators/source/FromPublisher.java) { #api }
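
Similarly, a hedged usage sketch for `JavaFlowSupport.Source.fromPublisher`, here fed by a JDK `SubmissionPublisher` purely for illustration:

```scala
import java.util.concurrent.SubmissionPublisher
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ JavaFlowSupport, Sink }

object FromPublisherExample extends App {
  implicit val system: ActorSystem = ActorSystem("example")

  // Any java.util.concurrent.Flow.Publisher will do; SubmissionPublisher is a convenient stand-in.
  val publisher = new SubmissionPublisher[String]()

  // Wrap the JDK publisher as a Pekko Streams Source and run it.
  JavaFlowSupport.Source.fromPublisher(publisher).runWith(Sink.foreach(println))

  publisher.submit("hello")
  publisher.close()
}
```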


@@ -32,8 +32,8 @@ distributed processing framework or to introduce such capabilities in specific p
 Stream refs are trivial to use in existing clustered Pekko applications and require no additional configuration
 or setup. They automatically maintain flow-control / back-pressure over the network and employ Pekko's failure detection
 mechanisms to fail-fast ("let it crash!") in the case of failures of remote nodes. They can be seen as an implementation
-of the [Work Pulling Pattern](https://www.michaelpollmeier.com/akka-work-pulling-pattern), which one would otherwise
-implement manually.
+of the [Akka Work Pulling Pattern](https://www.michaelpollmeier.com/akka-work-pulling-pattern).
+It should be straightforward to adapt this to Pekko.
 @@@ note
 A useful way to think about stream refs is:
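
As a rough illustration of the mechanism described above, offering a `SourceRef` to another node might look like the following sketch; it runs locally here for brevity, whereas in practice the ref would be sent to a remote node inside an actor message.

```scala
import org.apache.pekko.NotUsed
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.SourceRef
import org.apache.pekko.stream.scaladsl.{ Sink, Source, StreamRefs }

object StreamRefSketch extends App {
  implicit val system: ActorSystem = ActorSystem("example")

  // "Offering" side: run a local source into StreamRefs.sourceRef() and obtain a SourceRef
  // that would normally be shipped to another node in an actor message.
  val ref: SourceRef[Int] = Source(1 to 100).runWith(StreamRefs.sourceRef[Int]())

  // Receiving side: the SourceRef behaves like an ordinary Source, with back-pressure
  // maintained across the network.
  val remote: Source[Int, NotUsed] = ref.source
  remote.runWith(Sink.foreach(println))
}
```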
@@ -165,11 +165,6 @@ Stream refs utilise normal actor messaging for their transport, and therefore pr
 ## Bulk Stream References
-@@@ warning
-Bulk stream references are not implemented yet.
-See ticket [Bulk Transfer Stream Refs #24276](https://github.com/akka/akka/issues/24276) to track progress or signal demand for this feature.
-@@@
 Bulk stream refs can be used to create simple side-channels to transfer humongous amounts
 of data such as huge log files, messages or even media, with as much ease as if it was a trivial local stream.


@@ -309,7 +309,7 @@ wall time, not CPU time or system time.
 @@@ div { .group-scala }
-Ray Roestenburg has written a great article on using the TestKit:
+Ray Roestenburg has written a great article on using the Akka TestKit (but can also be applied to the Pekko Testkit):
 [https://web.archive.org/web/20180114133958/http://roestenburg.agilesquad.com/2011/02/unit-testing-akka-actors-with-testkit_12.html](https://web.archive.org/web/20180114133958/http://roestenburg.agilesquad.com/2011/02/unit-testing-akka-actors-with-testkit_12.html).
 His full example is also available @ref:[here](testing.md#example).
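
For readers new to the TestKit, a minimal Pekko classic TestKit spec might look like the following sketch; it assumes ScalaTest is on the classpath, and the `EchoActor` is hypothetical.

```scala
import org.apache.pekko.actor.{ Actor, ActorSystem, Props }
import org.apache.pekko.testkit.{ ImplicitSender, TestKit }
import org.scalatest.BeforeAndAfterAll
import org.scalatest.wordspec.AnyWordSpecLike

// Hypothetical echo actor used only for this sketch.
class EchoActor extends Actor {
  def receive: Receive = { case msg => sender() ! msg }
}

class EchoSpec
    extends TestKit(ActorSystem("EchoSpec"))
    with ImplicitSender
    with AnyWordSpecLike
    with BeforeAndAfterAll {

  // Shut down the test actor system after all tests have run.
  override def afterAll(): Unit = TestKit.shutdownActorSystem(system)

  "An EchoActor" must {
    "send back the message it receives" in {
      val echo = system.actorOf(Props[EchoActor]())
      echo ! "hello"
      expectMsg("hello")
    }
  }
}
```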