fix doc grammar by removing "an Pekko" (#118)

PJ Fanning 2023-01-20 09:40:34 +00:00 committed by GitHub
parent 0903c6aa86
commit b6a8e2204b
27 changed files with 35 additions and 34 deletions

View file

@@ -596,7 +596,7 @@ existence of those docs.
### Reporting security issues
-If you have found an issue in an Pekko project that might have security implications, you can report it by following the process mentioned in the [Apache document](https://apache.org/security/#reporting-a-vulnerability). We will make sure those will get handled with priority. Thank you for your responsible disclosure!
+If you have found an issue in a Pekko project that might have security implications, you can report it by following the process mentioned in the [Apache document](https://apache.org/security/#reporting-a-vulnerability). We will make sure those will get handled with priority. Thank you for your responsible disclosure!
### Continuous integration

View file

@@ -1,5 +1,5 @@
---
-project.description: How to package an Pekko application for deployment.
+project.description: How to package a Pekko application for deployment.
---
# Packaging

View file

@@ -26,12 +26,12 @@ Pekko Projections let you process a stream of events or records from a source to
## [Cassandra Plugin for Pekko Persistence](https://doc.akka.io/docs/akka-persistence-cassandra/current/)
-An Pekko Persistence journal and snapshot store backed by Apache Cassandra.
+A Pekko Persistence journal and snapshot store backed by Apache Cassandra.
## [JDBC Plugin for Pekko Persistence](https://doc.akka.io/docs/akka-persistence-jdbc/current/)
-An Pekko Persistence journal and snapshot store for use with JDBC-compatible databases. This implementation relies on [Slick](https://scala-slick.org/).
+A Pekko Persistence journal and snapshot store for use with JDBC-compatible databases. This implementation relies on [Slick](https://scala-slick.org/).
## [R2DBC Plugin for Pekko Persistence](https://doc.akka.io/docs/akka-persistence-r2dbc/current/)
@@ -45,7 +45,7 @@ Use [Google Cloud Spanner](https://cloud.google.com/spanner/) as Pekko Persisten
## Pekko Management
* [Pekko Management](https://doc.akka.io/docs/akka-management/current/) provides a central HTTP endpoint for Pekko management extensions.
-* [Pekko Cluster Bootstrap](https://doc.akka.io/docs/akka-management/current/bootstrap/) helps bootstrapping an Pekko cluster using Pekko Discovery.
+* [Pekko Cluster Bootstrap](https://doc.akka.io/docs/akka-management/current/bootstrap/) helps bootstrapping a Pekko cluster using Pekko Discovery.
* [Pekko Management Cluster HTTP](https://doc.akka.io/docs/akka-management/current/cluster-http-management.html) provides HTTP endpoints for introspecting and managing Pekko clusters.
* [Pekko Discovery for Kubernetes, Consul, Marathon, and AWS](https://doc.akka.io/docs/akka-management/current/discovery/)
* [Kubernetes Lease](https://doc.akka.io/docs/akka-management/current/kubernetes-lease.html)

View file

@@ -61,7 +61,7 @@ Scala
Java
: @@snip [CompileOnlyTest.java](/discovery/src/test/java/jdoc/org/apache/pekko/discovery/CompileOnlyTest.java) { #full }
-Port can be used when a service opens multiple ports e.g. a HTTP port and an Pekko remoting port.
+Port can be used when a service opens multiple ports e.g. a HTTP port and a Pekko remoting port.
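To illustrate the named-port lookup described above, a minimal Scala sketch (the service name, port name, and configured discovery method are all hypothetical):

```scala
import scala.concurrent.duration._
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.discovery.{ Discovery, Lookup }

implicit val system: ActorSystem = ActorSystem("discovery-demo")

// assumes pekko.discovery.method is configured (e.g. a DNS or config backend)
val serviceDiscovery = Discovery(system).discovery

// resolve only the "pekko-remote" port of the (hypothetical) service,
// with a 1 second resolve timeout
val resolved = serviceDiscovery.lookup(
  Lookup("my-service").withPortName("pekko-remote").withProtocol("tcp"),
  1.second)
```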
## Discovery Method: DNS

View file

@@ -44,7 +44,7 @@ to `application`—may be overridden using the `config.resource` property
@@@ note
-If you are writing an Pekko application, keep your configuration in
+If you are writing a Pekko application, keep your configuration in
`application.conf` at the root of the class path. If you are writing an
Pekko-based library, keep its configuration in `reference.conf` at the root
of the JAR file. It's not supported to override a config property owned by
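A minimal sketch of that layering with Typesafe Config (the `my-app.answer` key is hypothetical):

```scala
import com.typesafe.config.ConfigFactory

// load() stacks reference.conf from all JARs first, then overlays
// application.conf from the class path (or -Dconfig.resource/-Dconfig.file)
val config = ConfigFactory.load()

// a hypothetical key that your application.conf would define
val answer = config.getInt("my-app.answer")
```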

View file

@@ -39,7 +39,7 @@ The process of materialization will often create specific objects that are usefu
## Interoperation with other Reactive Streams implementations
-Pekko Streams fully implement the Reactive Streams specification and interoperate with all other conformant implementations. We chose to completely separate the Reactive Streams interfaces from the user-level API because we regard them to be an SPI that is not targeted at endusers. In order to obtain a [Publisher](https://javadoc.io/doc/org.reactivestreams/reactive-streams/latest/org/reactivestreams/Publisher.html) or [Subscriber](https://javadoc.io/doc/org.reactivestreams/reactive-streams/latest/org/reactivestreams/Subscriber.html) from an Pekko Stream topology, a corresponding @apidoc[Sink.asPublisher](Sink$) {scala="#asPublisher[T](fanout:Boolean):org.apache.pekko.stream.scaladsl.Sink[T,org.reactivestreams.Publisher[T]]" java="#asPublisher(org.apache.pekko.stream.javadsl.AsPublisher)"} or @apidoc[Source.asSubscriber](Source$) {scala="#asSubscriber[T]:org.apache.pekko.stream.scaladsl.Source[T,org.reactivestreams.Subscriber[T]]" java="#asSubscriber()"} element must be used.
+Pekko Streams fully implement the Reactive Streams specification and interoperate with all other conformant implementations. We chose to completely separate the Reactive Streams interfaces from the user-level API because we regard them to be an SPI that is not targeted at endusers. In order to obtain a [Publisher](https://javadoc.io/doc/org.reactivestreams/reactive-streams/latest/org/reactivestreams/Publisher.html) or [Subscriber](https://javadoc.io/doc/org.reactivestreams/reactive-streams/latest/org/reactivestreams/Subscriber.html) from a Pekko Stream topology, a corresponding @apidoc[Sink.asPublisher](Sink$) {scala="#asPublisher[T](fanout:Boolean):org.apache.pekko.stream.scaladsl.Sink[T,org.reactivestreams.Publisher[T]]" java="#asPublisher(org.apache.pekko.stream.javadsl.AsPublisher)"} or @apidoc[Source.asSubscriber](Source$) {scala="#asSubscriber[T]:org.apache.pekko.stream.scaladsl.Source[T,org.reactivestreams.Subscriber[T]]" java="#asSubscriber()"} element must be used.
All stream Processors produced by the default materialization of Pekko Streams are restricted to having a single Subscriber, additional Subscribers will be rejected. The reason for this is that the stream topologies described using our DSL never require fan-out behavior from the Publisher sides of the elements, all fan-out is done using explicit elements like @apidoc[Broadcast[T]](stream.*.Broadcast).
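A sketch of obtaining both interface types from a stream (names are illustrative):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Sink, Source }
import org.reactivestreams.{ Publisher, Subscriber }

implicit val system: ActorSystem = ActorSystem("interop-demo")

// a Publisher fed by a Pekko Stream source; fanout = false because the
// default materialization supports a single Subscriber (see above)
val publisher: Publisher[Int] =
  Source(1 to 10).runWith(Sink.asPublisher(fanout = false))

// a Subscriber whose elements flow into a Pekko Stream sink
val subscriber: Subscriber[Int] =
  Source.asSubscriber[Int].to(Sink.foreach(println)).run()
```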

View file

@@ -31,7 +31,7 @@ nodes connect to it.
## The Test Conductor
-The basis for the multi node testing is the @apidoc[TestConductor$]. It is an Pekko Extension that plugs in to the
+The basis for the multi node testing is the @apidoc[TestConductor$]. It is a Pekko Extension that plugs in to the
network stack and it is used to coordinate the nodes participating in the test and provides several features
including:

View file

@@ -63,14 +63,14 @@ pekko {
## Pre-packaged plugins
The Pekko Persistence module comes with few built-in persistence plugins, but none of these are suitable
-for production usage in an Pekko Cluster.
+for production usage in a Pekko Cluster.
### Local LevelDB journal
This plugin writes events to a local LevelDB instance.
@@@ warning
-The LevelDB plugin cannot be used in an Pekko Cluster since the storage is in a local file system.
+The LevelDB plugin cannot be used in a Pekko Cluster since the storage is in a local file system.
@@@
The LevelDB journal is deprecated and it is not advised to build new applications with it.
@@ -147,7 +147,7 @@ i.e. only the first injection is used.
This plugin writes snapshot files to the local filesystem.
@@@ warning
-The local snapshot store plugin cannot be used in an Pekko Cluster since the storage is in a local file system.
+The local snapshot store plugin cannot be used in a Pekko Cluster since the storage is in a local file system.
@@@
The local snapshot store plugin config entry is `pekko.persistence.snapshot-store.local`.
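For local development only, these plugins can be enabled along the following lines (a sketch; directory paths are placeholders, and per the warnings above this must not be used in a Pekko Cluster):

```scala
import com.typesafe.config.ConfigFactory
import org.apache.pekko.actor.ActorSystem

// single-node/dev setup only: storage is a local file system
val config = ConfigFactory.parseString("""
  pekko.persistence.journal.plugin = "pekko.persistence.journal.leveldb"
  pekko.persistence.journal.leveldb.dir = "target/journal"
  pekko.persistence.snapshot-store.plugin = "pekko.persistence.snapshot-store.local"
  pekko.persistence.snapshot-store.local.dir = "target/snapshots"
""").withFallback(ConfigFactory.load())

val system = ActorSystem("dev-persistence", config)
```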

View file

@@ -76,7 +76,7 @@ Illustrates how to use Pekko Cluster with Docker compose.
@extref[Cluster with Kubernetes example project](samples:pekko-sample-cluster-kubernetes-java)
-This sample illustrates how to form an Pekko Cluster with Pekko Bootstrap when running in Kubernetes.
+This sample illustrates how to form a Pekko Cluster with Pekko Bootstrap when running in Kubernetes.
## Distributed workers

View file

@@ -8,7 +8,7 @@ project.description: Details about the underlying remoting module for Pekko Clus
Remoting is the mechanism by which Actors on different nodes talk to each
other internally.
-When building an Pekko application, you would usually not use the Remoting concepts
+When building a Pekko application, you would usually not use the Remoting concepts
directly, but instead use the more high-level
@ref[Pekko Cluster](index-cluster.md) utilities or technology-agnostic protocols
such as [HTTP](https://doc.akka.io/docs/akka-http/current/),
@@ -311,7 +311,7 @@ According to [RFC 7525](https://www.rfc-editor.org/rfc/rfc7525.html) the recomme
You should always check the latest information about security and algorithm recommendations though before you configure your system.
-Since an Pekko remoting is inherently @ref:[peer-to-peer](general/remoting.md#symmetric-communication) both the key-store as well as trust-store
+Since a Pekko remoting is inherently @ref:[peer-to-peer](general/remoting.md#symmetric-communication) both the key-store as well as trust-store
need to be configured on each remoting node participating in the cluster.
The official [Java Secure Socket Extension documentation](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html)
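Because both stores must be present on every node, each node's configuration ends up symmetric, roughly like this sketch (key names follow the Artery SSL section of the reference configuration; paths and passwords are placeholders):

```scala
import com.typesafe.config.ConfigFactory

// identical shape on every remoting node: its own key-store plus the
// trust-store used to verify its peers
val tlsConfig = ConfigFactory.parseString("""
  pekko.remote.artery {
    transport = tls-tcp
    ssl.config-ssl-engine {
      key-store = "/etc/pekko/node-keystore.p12"
      key-store-password = "changeme"
      key-password = "changeme"
      trust-store = "/etc/pekko/truststore.p12"
      trust-store-password = "changeme"
    }
  }
""")
```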

View file

@@ -11,7 +11,7 @@ Classic remoting has been deprecated. Please use @ref[Artery](remoting-artery.md
Remoting is the mechanism by which Actors on different nodes talk to each
other internally.
-When building an Pekko application, you would usually not use the Remoting concepts
+When building a Pekko application, you would usually not use the Remoting concepts
directly, but instead use the more high-level
@ref[Pekko Cluster](index-cluster.md) utilities or technology-agnostic protocols
such as [HTTP](https://doc.akka.io/docs/akka-http/current/),
@@ -488,7 +488,7 @@ According to [RFC 7525](https://www.rfc-editor.org/rfc/rfc7525.html) the recomme
You should always check the latest information about security and algorithm recommendations though before you configure your system.
-Since an Pekko remoting is inherently @ref:[peer-to-peer](general/remoting.md#symmetric-communication) both the key-store as well as trust-store
+Since a Pekko remoting is inherently @ref:[peer-to-peer](general/remoting.md#symmetric-communication) both the key-store as well as trust-store
need to be configured on each remoting node participating in the cluster.
The official [Java Secure Socket Extension documentation](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html)

View file

@@ -19,6 +19,7 @@ report with the Lightbend Akka team.
## Security Related Documentation
* [Akka security fixes](https://doc.akka.io/docs/akka/current/security/index.html)
* @ref:[Java Serialization](../serialization.md#java-serialization)
+* @ref:[Remote deployment allow list](../remoting.md#remote-deployment-allow-list)
* @ref:[Remote Security](../remoting-artery.md#remote-security)

View file

@@ -461,7 +461,7 @@ the binding name (for example `jackson-cbor`).
## Using Pekko Serialization for embedded types
-For types that already have an Pekko Serializer defined that are embedded in types serialized with Jackson the @apidoc[PekkoSerializationSerializer] and
+For types that already have a Pekko Serializer defined that are embedded in types serialized with Jackson the @apidoc[PekkoSerializationSerializer] and
@apidoc[PekkoSerializationDeserializer] can be used to Pekko Serialization for individual fields.
The serializer/deserializer are not enabled automatically. The @javadoc[@JsonSerialize](com.fasterxml.jackson.databind.annotation.JsonSerialize) and @javadoc[@JsonDeserialize](com.fasterxml.jackson.databind.annotation.JsonDeserialize) annotation needs to be added
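A sketch of the annotations in use (the event and field types are hypothetical; `ItemAdded` stands for a type that already has its own Pekko serializer binding configured):

```scala
import com.fasterxml.jackson.databind.annotation.{ JsonDeserialize, JsonSerialize }
import org.apache.pekko.serialization.jackson.{
  PekkoSerializationDeserializer,
  PekkoSerializationSerializer
}

// hypothetical type with its own Pekko serializer binding
final case class ItemAdded(productId: String)

// the enclosing type is serialized with Jackson, while the annotated
// field round-trips through Pekko Serialization
final case class CartEvent(
    @JsonSerialize(`using` = classOf[PekkoSerializationSerializer])
    @JsonDeserialize(`using` = classOf[PekkoSerializationDeserializer])
    item: ItemAdded)
```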

View file

@@ -1,6 +1,6 @@
# Split Brain Resolver
-When operating an Pekko cluster you must consider how to handle
+When operating a Pekko cluster you must consider how to handle
[network partitions](https://en.wikipedia.org/wiki/Network_partition) (a.k.a. split brain scenarios)
and machine crashes (including JVM and hardware failures). This is crucial for correct behavior if
you use @ref:[Cluster Singleton](typed/cluster-singleton.md) or @ref:[Cluster Sharding](typed/cluster-sharding.md),

View file

@@ -55,7 +55,7 @@ Scala
Java
: @@snip [ReactiveStreamsDocTest.java](/docs/src/test/java/jdocs/stream/ReactiveStreamsDocTest.java) { #author-storage-subscriber }
-Using an Pekko Streams `Flow` we can transform the stream and connect those:
+Using a Pekko Streams `Flow` we can transform the stream and connect those:
Scala
: @@snip [ReactiveStreamsDocSpec.scala](/docs/src/test/scala/docs/stream/ReactiveStreamsDocSpec.scala) { #authors #connect-all }
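In outline, the wiring looks like this (a compile-only sketch; the `tweets` Publisher and `storage` Subscriber stand in for the snippet's endpoints):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.{ Flow, Sink, Source }
import org.reactivestreams.{ Publisher, Subscriber }

implicit val system: ActorSystem = ActorSystem("interop-flow")

def tweets: Publisher[String] = ??? // e.g. from a third-party library
def storage: Subscriber[String] = ??? // e.g. a reactive database driver

// wrap both ends and transform the elements in between
Source.fromPublisher(tweets)
  .via(Flow[String].map(_.toUpperCase))
  .runWith(Sink.fromSubscriber(storage))
```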

View file

@@ -412,7 +412,7 @@ This is a very useful technique if the stream is closely related to the actor, e
You may also cause a `Materializer` to shut down by explicitly calling @apidoc[shutdown()](stream.Materializer) {scala="#shutdown():Unit" java="#shutdown()"} on it, resulting in abruptly terminating all of the streams it has been running then.
Sometimes, however, you may want to explicitly create a stream that will out-last the actor's life.
-For example, you are using an Pekko stream to push some large stream of data to an external service.
+For example, you are using a Pekko stream to push some large stream of data to an external service.
You may want to eagerly stop the Actor since it has performed all of its duties already:
Scala
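The idea, sketched: materialize with a system-bound materializer, so the stream's lifecycle follows the ActorSystem rather than the actor that started it (names are illustrative):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.Materializer
import org.apache.pekko.stream.scaladsl.{ Sink, Source }

implicit val system: ActorSystem = ActorSystem("uploads")

// bound to the ActorSystem, so the stream survives the initiating actor
val systemBound: Materializer = Materializer(system)

val done = Source(1 to 1000)
  .runWith(Sink.foreach(n => println(s"pushed $n")))(systemBound)
```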

View file

@@ -21,7 +21,7 @@ or viceversa. See @ref:[IDE Tips](../additional/ide.md).
## First steps
-A stream usually begins at a source, so this is also how we start an Pekko
+A stream usually begins at a source, so this is also how we start a Pekko
Stream. Before we create one, we import the full complement of streaming tools:
Scala
@@ -38,7 +38,7 @@ Scala
Java
: @@snip [QuickStartDocTest.java](/docs/src/test/java/jdocs/stream/QuickStartDocTest.java) { #other-imports }
-And @scala[an object]@java[a class] to start an Pekko @apidoc[actor.ActorSystem] and hold your code @scala[. Making the `ActorSystem`
+And @scala[an object]@java[a class] to start a Pekko @apidoc[actor.ActorSystem] and hold your code @scala[. Making the `ActorSystem`
implicit makes it available to the streams without manually passing it when running them]:
Scala
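Put together, a minimal runnable version looks roughly like this (object and system names are arbitrary):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.scaladsl.Source

object StreamQuickStart extends App {
  // implicit so that running the stream below finds its materializer
  implicit val system: ActorSystem = ActorSystem("QuickStart")

  val source = Source(1 to 100)
  source.runForeach(println)
}
```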

View file

@@ -23,7 +23,7 @@ To use Pekko Streams, add the module to your project:
@@@
Stream references, or "stream refs" for short, allow running Pekko Streams across multiple nodes within
-an Pekko Cluster.
+a Pekko Cluster.
Unlike heavier "streaming data processing" frameworks, Pekko Streams are neither "deployed" nor automatically distributed.
Pekko stream refs are, as the name implies, references to existing parts of a stream, and can be used to create a
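For a feel of the API, a sketch of offering a local stream to a remote node (the receiving side would attach to the `SourceRef` it is sent):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.stream.SourceRef
import org.apache.pekko.stream.scaladsl.{ Source, StreamRefs }

implicit val system: ActorSystem = ActorSystem("stream-refs")

// a serializable handle to this local stream; it can be sent in a message
// to another node of the cluster, which then runs the consuming side
val ref: SourceRef[String] =
  Source(List("a", "b", "c")).runWith(StreamRefs.sourceRef())
```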

View file

@@ -13,7 +13,7 @@ on top of the cluster membership service.
## Introduction
A cluster is made up of a set of member nodes. The identifier for each node is a
-`hostname:port:uid` tuple. An Pekko application can be distributed over a cluster with
+`hostname:port:uid` tuple. A Pekko application can be distributed over a cluster with
each node hosting some part of the application. Cluster membership and the actors running
on that node of the application are decoupled. A node could be a member of a
cluster without hosting any actors. Joining a cluster is initiated
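As a sketch of the smallest possible case (assuming `pekko.actor.provider = cluster` in the configuration), a node can join itself to form a single-node cluster:

```scala
import org.apache.pekko.actor.typed.ActorSystem
import org.apache.pekko.actor.typed.scaladsl.Behaviors
import org.apache.pekko.cluster.typed.{ Cluster, Join }

val system = ActorSystem(Behaviors.empty[Nothing], "ClusterSystem")
val cluster = Cluster(system)

// the member's identity is its hostname:port:uid; joining its own
// address bootstraps a single-node cluster
cluster.manager ! Join(cluster.selfMember.address)
```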

View file

@@ -21,7 +21,7 @@ page describes how to use dispatchers with `pekko-actor-typed`, which has depend
## Introduction
-An Pekko `MessageDispatcher` is what makes Pekko Actors "tick", it is the engine of the machine so to speak.
+A Pekko `MessageDispatcher` is what makes Pekko Actors "tick", it is the engine of the machine so to speak.
All `MessageDispatcher` implementations are also an @scala[`ExecutionContext`]@java[`Executor`], which means that they can be used
to execute arbitrary code, for instance @scala[`Future`s]@java[`CompletableFuture`s].
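A small sketch of that property, running a Future on the system's default dispatcher:

```scala
import scala.concurrent.{ ExecutionContext, Future }
import org.apache.pekko.actor.typed.ActorSystem
import org.apache.pekko.actor.typed.scaladsl.Behaviors

val system = ActorSystem(Behaviors.empty[Nothing], "dispatcher-demo")

// the default dispatcher is also an ExecutionContext
implicit val ec: ExecutionContext = system.executionContext
Future(println("runs on a Pekko dispatcher"))
```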

View file

@@ -1,5 +1,5 @@
---
-project.description: Share data between nodes and perform updates without coordination in an Pekko Cluster using Conflict Free Replicated Data Types CRDT.
+project.description: Share data between nodes and perform updates without coordination in a Pekko Cluster using Conflict Free Replicated Data Types CRDT.
---
# Distributed Data

View file

@@ -42,7 +42,7 @@ is ensured, have a look at the @ref:[Cluster Sharding and DurableStateBehavior](
## Example and core API
-Let's start with a simple example that models a counter using an Pekko persistent actor. The minimum required for a @apidoc[DurableStateBehavior] is:
+Let's start with a simple example that models a counter using a Pekko persistent actor. The minimum required for a @apidoc[DurableStateBehavior] is:
Scala
: @@snip [DurableStatePersistentBehaviorCompileOnly.scala](/persistence-typed/src/test/scala/docs/org/apache/pekko/persistence/typed/DurableStatePersistentBehaviorCompileOnly.scala) { #structure }
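In outline, such a counter could look like the following sketch (the snippet above is authoritative; names here are illustrative):

```scala
import org.apache.pekko.actor.typed.Behavior
import org.apache.pekko.persistence.typed.PersistenceId
import org.apache.pekko.persistence.typed.state.scaladsl.{ DurableStateBehavior, Effect }

object Counter {
  sealed trait Command
  case object Increment extends Command

  def apply(id: String): Behavior[Command] =
    DurableStateBehavior[Command, Int](
      persistenceId = PersistenceId.ofUniqueId(id),
      emptyState = 0,
      commandHandler = (state, command) =>
        command match {
          case Increment => Effect.persist(state + 1) // persist the new state
        })
}
```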

View file

@@ -111,7 +111,7 @@ Challenges the Cluster module solves include the following:
version=PekkoVersion
}
-Sharding helps to solve the problem of distributing a set of actors among members of an Pekko cluster.
+Sharding helps to solve the problem of distributing a set of actors among members of a Pekko cluster.
Sharding is a pattern that mostly used together with Persistence to balance a large set of persistent entities
(backed by actors) to members of a cluster and also migrate them to other nodes when members crash or leave.
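A sketch of what that distribution looks like in code, reusing the hypothetical `Counter` behavior sketched earlier:

```scala
import org.apache.pekko.actor.typed.ActorSystem
import org.apache.pekko.cluster.sharding.typed.scaladsl.{ ClusterSharding, Entity, EntityTypeKey }

// `system` must be a clustered ActorSystem
def initSharding(system: ActorSystem[_]): Unit = {
  val TypeKey = EntityTypeKey[Counter.Command]("Counter")
  val sharding = ClusterSharding(system)

  // spread Counter entities over the members of the cluster
  sharding.init(Entity(TypeKey)(ctx => Counter(ctx.entityId)))

  // messages are routed to whichever node currently hosts the entity
  sharding.entityRefFor(TypeKey, "counter-1") ! Counter.Increment
}
```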
@@ -203,7 +203,7 @@ Challenges Projections solve include the following:
}
In situations where eventual consistency is acceptable, it is possible to share data between nodes in
-an Pekko Cluster and accept both reads and writes even in the face of cluster partitions. This can be
+a Pekko Cluster and accept both reads and writes even in the face of cluster partitions. This can be
achieved using [Conflict Free Replicated Data Types](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type) (CRDTs), where writes on different nodes can
happen concurrently and are merged in a predictable way afterward. The Distributed Data module
provides infrastructure to share data and a number of useful data types.
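A small sketch of the CRDT idea with a grow-only counter (assumes an ActorSystem configured with the cluster provider):

```scala
import org.apache.pekko.actor.ActorSystem
import org.apache.pekko.cluster.ddata.{ DistributedData, GCounter, SelfUniqueAddress }

val system = ActorSystem("ddata-demo") // needs pekko.actor.provider = cluster
implicit val node: SelfUniqueAddress = DistributedData(system).selfUniqueAddress

val a = GCounter.empty :+ 1 // increment applied on one node
val b = GCounter.empty :+ 2 // concurrent increment on another node

// merging is commutative and idempotent: either merge order yields 3
println(a.merge(b).value)
```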

View file

@@ -1,7 +1,7 @@
# Introduction to the Example
When writing prose, the hardest part is often composing the first few sentences. There is a similar "blank canvas" feeling
-when starting to build an Pekko system. You might wonder: Which should be the first actor? Where should it live? What should it do?
+when starting to build a Pekko system. You might wonder: Which should be the first actor? Where should it live? What should it do?
Fortunately — unlike with prose — established best practices can guide us through these initial steps. In the remainder of this guide, we examine the core logic of a simple Pekko application to introduce you to actors and show you how to formulate solutions with them. The example demonstrates common patterns that will help you kickstart your Pekko projects.
## Prerequisites

View file

@@ -78,7 +78,7 @@ The factory takes in:
* `allReplicasAndQueryPlugins`: All Replicas and the query plugin used to read their events
* A factory function to create an instance of the @scala[`EventSourcedBehavior`]@java[`ReplicatedEventSourcedBehavior`]
-In this scenario each replica reads from each other's database effectively providing cross region replication for any database that has an Pekko Persistence plugin. Alternatively if all the replicas use the same journal, e.g. for testing or if it is a distributed database such as Cassandra, the `withSharedJournal` factory can be used.
+In this scenario each replica reads from each other's database effectively providing cross region replication for any database that has a Pekko Persistence plugin. Alternatively if all the replicas use the same journal, e.g. for testing or if it is a distributed database such as Cassandra, the `withSharedJournal` factory can be used.
Scala
: @@snip [ReplicatedEventSourcingCompileOnlySpec.scala](/persistence-typed-tests/src/test/scala/docs/org/apache/pekko/persistence/typed/ReplicatedEventSourcingCompileOnlySpec.scala) { #factory-shared}

View file

@@ -706,7 +706,7 @@ pekko {
# However, starting with Pekko 2.4.12, even with this setting "off", the active side (TLS client side)
# will use the given key-store to send over a certificate if asked. A rolling upgrade from versions of
# Pekko < 2.4.12 can therefore work like this:
-# - upgrade all nodes to an Pekko version >= 2.4.12, in the best case the latest version, but keep this setting at "off"
+# - upgrade all nodes to a Pekko version >= 2.4.12, in the best case the latest version, but keep this setting at "off"
# - then switch this flag to "on" and do again a rolling upgrade of all nodes
# The first step ensures that all nodes will send over a certificate when asked to. The second
# step will ensure that all nodes finally enforce the secure checking of client certificates.

View file

@@ -32,7 +32,7 @@ import pekko.util.Timeout
@nowarn
class PekkoSpecSpec extends AnyWordSpec with Matchers {
-  "An PekkoSpec" must {
+  "A PekkoSpec" must {
"warn about unhandled messages" in {
implicit val system = ActorSystem("PekkoSpec0", PekkoSpec.testConf)