Rename akka to pekko in configuration (#63)

Resolves https://github.com/apache/incubator-pekko/issues/54
Greg Methvin 2022-12-02 04:53:48 -08:00 committed by GitHub
parent 708da8caec
commit 3d93dbcb81
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
1047 changed files with 4472 additions and 4464 deletions


@@ -1,6 +1,6 @@
 Sources and sinks for integrating with `java.io.InputStream` and `java.io.OutputStream` can be found on
 `StreamConverters`. As they are blocking APIs the implementations of these operators are run on a separate
-dispatcher configured through the `akka.stream.blocking-io-dispatcher`.
+dispatcher configured through the `pekko.stream.blocking-io-dispatcher`.
 @@@ warning
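As context for the hunk above: the renamed setting is just a path to a dispatcher configuration block, so an override in `application.conf` could be sketched like this (the dispatcher name and pool size are illustrative, not defaults):

```
pekko.stream.blocking-io-dispatcher = "my-blocking-dispatcher"

my-blocking-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    # sized for blocking file/socket IO; tune for your workload
    fixed-pool-size = 16
  }
}
```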


@@ -9,7 +9,7 @@ To use Classic Actors, add the following dependency in your project:
 @@dependency[sbt,Maven,Gradle] {
 bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
 symbol1=AkkaVersion
-value1="$akka.version$"
+value1="$pekko.version$"
 group="com.typesafe.akka"
 artifact="akka-actor_$scala.binary.version$"
 version=AkkaVersion
@@ -311,7 +311,7 @@ If the current actor behavior does not match a received message,
 `unhandled` is called, which by default publishes an
 @apidoc[actor.UnhandledMessage(message, sender, recipient)](actor.UnhandledMessage) on the actor
 systems event stream (set configuration item
-`akka.actor.debug.unhandled` to `on` to have them converted into
+`pekko.actor.debug.unhandled` to `on` to have them converted into
 actual Debug messages).
 In addition, it offers:
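A note on the renamed debug flag: it only produces visible output when debug logging is enabled as well. A minimal sketch of the combined settings:

```
pekko {
  loglevel = DEBUG
  # publish unhandled messages as Debug log events
  actor.debug.unhandled = on
}
```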


@@ -18,7 +18,7 @@ Akka is also:
 * the name of the goose that Nils traveled across Sweden on in [The Wonderful Adventures of Nils](https://en.wikipedia.org/wiki/The_Wonderful_Adventures_of_Nils) by the Swedish writer Selma Lagerlöf.
 * the Finnish word for 'nasty elderly woman' and the word for 'elder sister' in the Indian languages Tamil, Telugu, Kannada and Marathi.
-* a [font](https://www.dafont.com/akka.font)
+* a [font](https://www.dafont.com/pekko.font)
 * a town in Morocco
 * a near-earth asteroid
@@ -67,7 +67,7 @@ Read more in @ref:[Message Delivery Reliability](../general/message-delivery-rel
 To turn on debug logging in your actor system add the following to your configuration:
 ```
-akka.loglevel = DEBUG
+pekko.loglevel = DEBUG
 ```
 Read more about it in the docs for @ref:[Logging](../typed/logging.md).


@@ -59,7 +59,7 @@ Member nodes are identified by their address, in format *`akka://actor-system-na
 ## Monitoring and Observability
-Aside from log monitoring and the monitoring provided by your APM or platform provider, [Lightbend Telemetry](https://developer.lightbend.com/docs/telemetry/current/instrumentations/akka/akka.html),
+Aside from log monitoring and the monitoring provided by your APM or platform provider, [Lightbend Telemetry](https://developer.lightbend.com/docs/telemetry/current/instrumentations/akka/pekko.html),
 available through a [Lightbend Subscription](https://www.lightbend.com/lightbend-subscription),
 can provide additional insights in the run-time characteristics of your application, including metrics, events,
 and distributed tracing for Akka Actors, Cluster, HTTP, and more.


@@ -7,7 +7,7 @@ To use Akka in OSGi, you must add the following dependency in your project:
 @@dependency[sbt,Maven,Gradle] {
 bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
 symbol1=AkkaVersion
-value1="$akka.version$"
+value1="$pekko.version$"
 group=com.typesafe.akka
 artifact=akka-osgi_$scala.binary.version$
 version=AkkaVersion


@@ -52,7 +52,7 @@ different configured `app-version`.
 To make use of this feature you need to define the `app-version` and increase it for each rolling update.
 ```
-akka.cluster.app-version = 1.2.3
+pekko.cluster.app-version = 1.2.3
 ```
 To understand which is old and new it compares the version numbers using normal conventions,
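In practice the renamed `app-version` setting is bumped once per rolling update, typically injected by the build or deployment environment rather than edited by hand; a sketch (the `APP_VERSION` variable is a hypothetical example):

```
# fallback used when no version is injected
pekko.cluster.app-version = "1.2.3"
# optionally override from the environment at startup
pekko.cluster.app-version = ${?APP_VERSION}
```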
@@ -104,9 +104,9 @@ During rolling updates the configuration from existing nodes should pass the Clu
 For example, it is possible to migrate Cluster Sharding from Classic to Typed Actors in a rolling update using a two step approach
 as of Akka version `2.5.23`:
-* Deploy with the new nodes set to `akka.cluster.configuration-compatibility-check.enforce-on-join = off`
+* Deploy with the new nodes set to `pekko.cluster.configuration-compatibility-check.enforce-on-join = off`
 and ensure all nodes are in this state
-* Deploy again and with the new nodes set to `akka.cluster.configuration-compatibility-check.enforce-on-join = on`.
+* Deploy again and with the new nodes set to `pekko.cluster.configuration-compatibility-check.enforce-on-join = on`.
 Full documentation about enforcing these checks on joining nodes and optionally adding custom checks can be found in
 @ref:[Akka Cluster configuration compatibility checks](../typed/cluster.md#configuration-compatibility-check).
@@ -122,7 +122,7 @@ without bringing down the entire cluster.
 The procedure for changing from Java serialization to Jackson would look like:
 1. Rolling update from 2.5.24 (or later) to 2.6.0
-* Use config `akka.actor.allow-java-serialization=on`.
+* Use config `pekko.actor.allow-java-serialization=on`.
 * Roll out the change.
 * Java serialization will be used as before.
 * This step is optional and you could combine it with next step if you like, but could be good to
@@ -131,15 +131,15 @@ The procedure for changing from Java serialization to Jackson would look like:
 * Change message classes by adding the marker interface and possibly needed annotations as
 described in @ref:[Serialization with Jackson](../serialization-jackson.md).
 * Test the system with the new serialization in a new test cluster (no rolling update).
-* Remove the binding for the marker interface in `akka.actor.serialization-bindings`, so that Jackson is not used for serialization (toBinary) yet.
-* Configure `akka.serialization.jackson.allowed-class-prefix=["com.myapp"]`
+* Remove the binding for the marker interface in `pekko.actor.serialization-bindings`, so that Jackson is not used for serialization (toBinary) yet.
+* Configure `pekko.serialization.jackson.allowed-class-prefix=["com.myapp"]`
 * This is needed for Jackson deserialization when the `serialization-bindings` isn't defined.
 * Replace `com.myapp` with the name of the root package of your application to trust all classes.
 * Roll out the change.
 * Java serialization is still used, but this version is prepared for next roll out.
 1. Rolling update to enable serialization with Jackson.
-* Add the binding to the marker interface in `akka.actor.serialization-bindings` to the Jackson serializer.
-* Remove `akka.serialization.jackson.allowed-class-prefix`.
+* Add the binding to the marker interface in `pekko.actor.serialization-bindings` to the Jackson serializer.
+* Remove `pekko.serialization.jackson.allowed-class-prefix`.
 * Roll out the change.
 * Old nodes will still send messages with Java serialization, and that can still be deserialized by new nodes.
 * New nodes will send messages with Jackson serialization, and old node can deserialize those because they were
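The intermediate step of that migration could be sketched in `application.conf` as follows, assuming a hypothetical marker interface `com.myapp.MySerializable`:

```
pekko.actor.serialization-bindings {
  # The Jackson binding is deliberately NOT added yet in this step;
  # it is added in the final rolling update:
  # "com.myapp.MySerializable" = jackson-json
}
# allow Jackson deserialization even without a serialization binding
pekko.serialization.jackson.allowed-class-prefix = ["com.myapp"]
```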


@@ -4,4 +4,4 @@ The akka-camel module was deprecated in 2.5 and has been removed in 2.6.
 As an alternative we recommend [Alpakka](https://doc.akka.io/docs/alpakka/current/). This is of course not a drop-in replacement.
-If anyone is interested in setting up akka-camel as a separate community-maintained repository then please get in touch.
+If anyone is interested in setting up akka-camel as a separate community-maintained repository then please get in touch.


@@ -16,7 +16,7 @@ To use Cluster Client, you must add the following dependency in your project:
 @@dependency[sbt,Maven,Gradle] {
 bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
 symbol1=AkkaVersion
-value1="$akka.version$"
+value1="$pekko.version$"
 group=com.typesafe.akka
 artifact=akka-cluster-tools_$scala.binary.version$
 version=AkkaVersion
@@ -114,10 +114,10 @@ of these actors. As always, additional logic should be implemented in the destin
 ## An Example
 On the cluster nodes, first start the receptionist. Note, it is recommended to load the extension
-when the actor system is started by defining it in the `akka.extensions` configuration property:
+when the actor system is started by defining it in the `pekko.extensions` configuration property:
 ```
-akka.extensions = ["org.apache.pekko.cluster.client.ClusterClientReceptionist"]
+pekko.extensions = ["org.apache.pekko.cluster.client.ClusterClientReceptionist"]
 ```
 Next, register the actors that should be available for the client.
@@ -164,10 +164,10 @@ Note that the @apidoc[ClusterClientReceptionist] uses the @apidoc[DistributedPub
 in @ref:[Distributed Publish Subscribe in Cluster](distributed-pub-sub.md).
 It is recommended to load the extension when the actor system is started by defining it in the
-`akka.extensions` configuration property:
+`pekko.extensions` configuration property:
 ```
-akka.extensions = ["akka.cluster.client.ClusterClientReceptionist"]
+pekko.extensions = ["pekko.cluster.client.ClusterClientReceptionist"]
 ```
 ## Events


@@ -7,7 +7,7 @@ To use Cluster Metrics Extension, you must add the following dependency in your
 @@dependency[sbt,Maven,Gradle] {
 bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
 symbol1=AkkaVersion
-value1="$akka.version$"
+value1="$pekko.version$"
 group=com.typesafe.akka
 artifact=akka-cluster-metrics_$scala.binary.version$
 version=AkkaVersion
@@ -17,7 +17,7 @@ and add the following configuration stanza to your `application.conf`
 :
 ```
-akka.extensions = [ "akka.cluster.metrics.ClusterMetricsExtension" ]
+pekko.extensions = [ "pekko.cluster.metrics.ClusterMetricsExtension" ]
 ```
 @@project-info{ projectId="akka-cluster-metrics" }
@@ -58,7 +58,7 @@ By default, metrics extension will use collector provider fall back and will try
 Metrics extension periodically publishes current snapshot of the cluster metrics to the node system event bus.
-The publication interval is controlled by the `akka.cluster.metrics.collector.sample-interval` setting.
+The publication interval is controlled by the `pekko.cluster.metrics.collector.sample-interval` setting.
 The payload of the `org.apache.pekko.cluster.metrics.ClusterMetricsChanged` event will contain
 latest metrics of the node as well as other cluster member nodes metrics gossip
@@ -102,7 +102,7 @@ User is required to manage both project dependency and library deployment manual
 When using [Kamon sigar-loader](https://github.com/kamon-io/sigar-loader) and running multiple
 instances of the same application on the same host, you have to make sure that sigar library is extracted to a
 unique per instance directory. You can control the extract directory with the
-`akka.cluster.metrics.native-library-extract-folder` configuration setting.
+`pekko.cluster.metrics.native-library-extract-folder` configuration setting.
 @@@
@@ -151,7 +151,7 @@ Java
 As you can see, the router is defined in the same way as other routers, and in this case it is configured as follows:
 ```
-akka.actor.deployment {
+pekko.actor.deployment {
 /factorialFrontend/factorialBackendRouter = {
 # Router type provided by metrics extension.
 router = cluster-metrics-adaptive-group
@@ -202,7 +202,7 @@ You can plug-in your own metrics collector instead of built-in
 Look at those two implementations for inspiration.
 Custom metrics collector implementation class must be specified in the
-`akka.cluster.metrics.collector.provider` configuration property.
+`pekko.cluster.metrics.collector.provider` configuration property.
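A sketch of that renamed provider property, with a hypothetical collector class:

```
pekko.cluster.metrics.collector.provider = "com.example.CustomMetricsCollector"
```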
## Configuration


@@ -34,7 +34,7 @@ To use Cluster aware routers, you must add the following dependency in your proj
 @@dependency[sbt,Maven,Gradle] {
 bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
 symbol1=AkkaVersion
-value1="$akka.version$"
+value1="$pekko.version$"
 group="com.typesafe.akka"
 artifact="akka-cluster_$scala.binary.version$"
 version=AkkaVersion
@@ -46,7 +46,7 @@ When using a `Group` you must start the routee actors on the cluster member node
 That is not done by the router. The configuration for a group looks like this::
 ```
-akka.actor.deployment {
+pekko.actor.deployment {
 /statsService/workerRouter {
 router = consistent-hashing-group
 routees.paths = ["/user/statsWorker"]
@@ -133,7 +133,7 @@ All nodes start `StatsService` and `StatsWorker` actors. Remember, routees are t
 The router is configured with `routees.paths`::
 ```
-akka.actor.deployment {
+pekko.actor.deployment {
 /statsService/workerRouter {
 router = consistent-hashing-group
 routees.paths = ["/user/statsWorker"]
@@ -155,7 +155,7 @@ When using a `Pool` with routees created and deployed on the cluster member node
 the configuration for a router looks like this::
 ```
-akka.actor.deployment {
+pekko.actor.deployment {
 /statsService/singleton/workerRouter {
 router = consistent-hashing-pool
 cluster {
@@ -233,7 +233,7 @@ master. It listens to cluster events to lookup the `StatsService` on the oldest
 All nodes start `ClusterSingletonProxy` and the `ClusterSingletonManager`. The router is now configured like this::
 ```
-akka.actor.deployment {
+pekko.actor.deployment {
 /statsService/singleton/workerRouter {
 router = consistent-hashing-pool
 cluster {


@@ -10,7 +10,7 @@ To use Cluster Sharding, you must add the following dependency in your project:
 @@dependency[sbt,Maven,Gradle] {
 bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
 symbol1=AkkaVersion
-value1="$akka.version$"
+value1="$pekko.version$"
 group=com.typesafe.akka
 artifact=akka-cluster-sharding_$scala.binary.version$
 version=AkkaVersion
@@ -229,7 +229,7 @@ the identifiers of the shards running in a Region and what entities are alive fo
 a `ShardRegion.ClusterShardingStats` containing the identifiers of the shards running in each region and a count
 of entities that are alive in each shard.
-If any shard queries failed, for example due to timeout if a shard was too busy to reply within the configured `akka.cluster.sharding.shard-region-query-timeout`,
+If any shard queries failed, for example due to timeout if a shard was too busy to reply within the configured `pekko.cluster.sharding.shard-region-query-timeout`,
 `ShardRegion.CurrentShardRegionState` and `ShardRegion.ClusterShardingStats` will also include the set of shard identifiers by region that failed.
 The type names of all started shards can be acquired via @scala[`ClusterSharding.shardTypeNames`] @java[`ClusterSharding.getShardTypeNames`].
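For reference, the renamed query timeout can be raised in `application.conf` when shards are frequently too busy to reply in time (the value shown is illustrative; check `reference.conf` for the default):

```
pekko.cluster.sharding.shard-region-query-timeout = 10s
```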


@@ -10,7 +10,7 @@ To use Cluster Singleton, you must add the following dependency in your project:
 @@dependency[sbt,Maven,Gradle] {
 bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
 symbol1=AkkaVersion
-value1="$akka.version$"
+value1="$pekko.version$"
 group=com.typesafe.akka
 artifact=akka-cluster-tools_$scala.binary.version$
 version=AkkaVersion


@@ -26,7 +26,7 @@ To use Akka Cluster add the following dependency in your project:
 @@dependency[sbt,Maven,Gradle] {
 bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
 symbol1=AkkaVersion
-value1="$akka.version$"
+value1="$pekko.version$"
 group="com.typesafe.akka"
 artifact="akka-cluster_$scala.binary.version$"
 version=AkkaVersion
@@ -51,7 +51,7 @@ Scala
 Java
 : @@snip [SimpleClusterListener.java](/docs/src/test/java/jdocs/cluster/SimpleClusterListener.java) { type=java }
-And the minimum configuration required is to set a host/port for remoting and the `akka.actor.provider = "cluster"`.
+And the minimum configuration required is to set a host/port for remoting and the `pekko.actor.provider = "cluster"`.
 @@snip [BasicClusterExampleSpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #config-seeds }
@@ -219,14 +219,14 @@ With a configuration option you can define required number of members
 before the leader changes member status of 'Joining' members to 'Up'.:
 ```
-akka.cluster.min-nr-of-members = 3
+pekko.cluster.min-nr-of-members = 3
 ```
 In a similar way you can define required number of members of a certain role
 before the leader changes member status of 'Joining' members to 'Up'.:
 ```
-akka.cluster.role {
+pekko.cluster.role {
 frontend.min-nr-of-members = 1
 backend.min-nr-of-members = 2
 }
@@ -234,7 +234,7 @@ akka.cluster.role {
 You can start actors or trigger any functions using the @apidoc[registerOnMemberUp](cluster.Cluster) {scala="#registerOnMemberUp[T](code:=%3ET):Unit" java="#registerOnMemberUp(java.lang.Runnable)"} callback, which will
 be invoked when the current member status is changed to 'Up'. This can additionally be used with
-`akka.cluster.min-nr-of-members` optional configuration to defer an action until the cluster has reached a certain size.
+`pekko.cluster.min-nr-of-members` optional configuration to defer an action until the cluster has reached a certain size.
 Scala
 : @@snip [FactorialFrontend.scala](/docs/src/test/scala/docs/cluster/FactorialFrontend.scala) { #registerOnUp }


@@ -8,7 +8,7 @@ The @apidoc[CoordinatedShutdown$] extension registers internal and user-defined
 Especially the phases `before-service-unbind`, `before-cluster-shutdown` and
 `before-actor-system-terminate` are intended for application specific phases or tasks.
-The order of the shutdown phases is defined in configuration `akka.coordinated-shutdown.phases`. See the default phases in the `reference.conf` tab:
+The order of the shutdown phases is defined in configuration `pekko.coordinated-shutdown.phases`. See the default phases in the `reference.conf` tab:
 Most relevant default phases
 : | Phase | Description |
@@ -83,7 +83,7 @@ JVM is not forcefully stopped (it will be stopped if all non-daemon threads have
 To enable a hard `System.exit` as a final action you can configure:
 ```
-akka.coordinated-shutdown.exit-jvm = on
+pekko.coordinated-shutdown.exit-jvm = on
 ```
 The coordinated shutdown process is also started once the actor system's root actor is stopped.
@@ -98,7 +98,7 @@ By default, the `CoordinatedShutdown` will be run when the JVM process exits, e.
 via `kill SIGTERM` signal (`SIGINT` ctrl-c doesn't work). This behavior can be disabled with:
 ```
-akka.coordinated-shutdown.run-by-jvm-shutdown-hook=off
+pekko.coordinated-shutdown.run-by-jvm-shutdown-hook=off
 ```
 If you have application specific JVM shutdown hooks it's recommended that you register them via the
@@ -117,8 +117,8 @@ used in the test:
 ```
 # Don't terminate ActorSystem via CoordinatedShutdown in tests
-akka.coordinated-shutdown.terminate-actor-system = off
-akka.coordinated-shutdown.run-by-actor-system-terminate = off
-akka.coordinated-shutdown.run-by-jvm-shutdown-hook = off
-akka.cluster.run-coordinated-shutdown-when-down = off
+pekko.coordinated-shutdown.terminate-actor-system = off
+pekko.coordinated-shutdown.run-by-actor-system-terminate = off
+pekko.coordinated-shutdown.run-by-jvm-shutdown-hook = off
+pekko.cluster.run-coordinated-shutdown-when-down = off
 ```


@@ -10,7 +10,7 @@ Akka Coordination is a set of tools for distributed coordination.
 @@dependency[sbt,Gradle,Maven] {
 bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
 symbol1=AkkaVersion
-value1="$akka.version$"
+value1="$pekko.version$"
 group="com.typesafe.akka"
 artifact="akka-coordination_$scala.binary.version$"
 version=AkkaVersion
@@ -105,7 +105,7 @@ If a user prefers to have outside intervention in this case for maximum safety t
 The configuration must define the `lease-class` property for the FQCN of the lease implementation.
-The lease implementation should have support for the following properties where the defaults come from `akka.coordination.lease`:
+The lease implementation should have support for the following properties where the defaults come from `pekko.coordination.lease`:
 @@snip [reference.conf](/akka-coordination/src/main/resources/reference.conf) { #defaults }
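Wiring a custom lease against those renamed defaults could be sketched like this; the class name is hypothetical and the timing values are illustrative overrides of the `pekko.coordination.lease` defaults:

```
my-lease {
  # FQCN of the lease implementation (hypothetical)
  lease-class = "com.example.MyLease"
  heartbeat-timeout = 120s
  heartbeat-interval = 12s
  lease-operation-timeout = 5s
}
```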


@@ -36,7 +36,7 @@ See @ref:[Migration hints](#migrating-from-akka-management-discovery-before-1-0-
 @@dependency[sbt,Gradle,Maven] {
 bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
 symbol1=AkkaVersion
-value1="$akka.version$"
+value1="$pekko.version$"
 group="com.typesafe.akka"
 artifact="akka-discovery_$scala.binary.version$"
 version=AkkaVersion
@@ -78,7 +78,7 @@ Port can be used when a service opens multiple ports e.g. a HTTP port and an Akk
 @@@ note { title="Async DNS" }
-Akka Discovery with DNS does always use the @ref[Akka-native "async-dns" implementation](../io-dns.md) (it is independent of the `akka.io.dns.resolver` setting).
+Akka Discovery with DNS does always use the @ref[Akka-native "async-dns" implementation](../io-dns.md) (it is independent of the `pekko.io.dns.resolver` setting).
 @@@
@@ -93,7 +93,7 @@ The mapping between Akka service discovery terminology and SRV terminology:
 * SRV name = serviceName
 * SRV protocol = protocol
-Configure `akka-dns` to be used as the discovery implementation in your `application.conf`:
+Configure `pekko-dns` to be used as the discovery implementation in your `application.conf`:
 @@snip[application.conf](/docs/src/test/scala/docs/discovery/DnsDiscoveryDocSpec.scala){ #configure-dns }
@@ -115,9 +115,9 @@ The advantage of SRV records is that they can include a port.
 Lookups with all the fields set become SRV queries. For example:
 ```
-dig srv _service._tcp.akka.test
+dig srv _service._tcp.pekko.test
-; <<>> DiG 9.11.3-RedHat-9.11.3-6.fc28 <<>> srv service.tcp.akka.test
+; <<>> DiG 9.11.3-RedHat-9.11.3-6.fc28 <<>> srv service.tcp.pekko.test
 ;; global options: +cmd
 ;; Got answer:
 ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 60023
@@ -127,25 +127,25 @@ dig srv _service._tcp.akka.test
 ; EDNS: version: 0, flags:; udp: 4096
 ; COOKIE: 5ab8dd4622e632f6190f54de5b28bb8fb1b930a5333c3862 (good)
 ;; QUESTION SECTION:
-;service.tcp.akka.test. IN SRV
+;service.tcp.pekko.test. IN SRV
 ;; ANSWER SECTION:
-_service._tcp.akka.test. 86400 IN SRV 10 60 5060 a-single.akka.test.
-_service._tcp.akka.test. 86400 IN SRV 10 40 5070 a-double.akka.test.
+_service._tcp.pekko.test. 86400 IN SRV 10 60 5060 a-single.pekko.test.
+_service._tcp.pekko.test. 86400 IN SRV 10 40 5070 a-double.pekko.test.
 ```
-In this case `service.tcp.akka.test` resolves to `a-single.akka.test` on port `5060`
-and `a-double.akka.test` on port `5070`. Currently discovery does not support the weightings.
+In this case `service.tcp.pekko.test` resolves to `a-single.pekko.test` on port `5060`
+and `a-double.pekko.test` on port `5070`. Currently discovery does not support the weightings.
 #### A/AAAA records
 Lookups with any fields missing become A/AAAA record queries. For example:
 ```
-dig a-double.akka.test
+dig a-double.pekko.test
-; <<>> DiG 9.11.3-RedHat-9.11.3-6.fc28 <<>> a-double.akka.test
+; <<>> DiG 9.11.3-RedHat-9.11.3-6.fc28 <<>> a-double.pekko.test
 ;; global options: +cmd
 ;; Got answer:
 ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11983
@@ -155,15 +155,15 @@ dig a-double.akka.test
 ; EDNS: version: 0, flags:; udp: 4096
 ; COOKIE: 16e9815d9ca2514d2f3879265b28bad05ff7b4a82721edd0 (good)
 ;; QUESTION SECTION:
-;a-double.akka.test. IN A
+;a-double.pekko.test. IN A
 ;; ANSWER SECTION:
-a-double.akka.test. 86400 IN A 192.168.1.21
-a-double.akka.test. 86400 IN A 192.168.1.22
+a-double.pekko.test. 86400 IN A 192.168.1.21
+a-double.pekko.test. 86400 IN A 192.168.1.22
 ```
-In this case `a-double.akka.test` would resolve to `192.168.1.21` and `192.168.1.22`.
+In this case `a-double.pekko.test` would resolve to `192.168.1.21` and `192.168.1.22`.
 ## Discovery Method: Configuration
@@ -177,15 +177,15 @@ sophisticated discovery method without any code changes.
 Configure it to be used as discovery method in your `application.conf`
 ```
-akka {
+pekko {
 discovery.method = config
 }
 ```
-By default the services discoverable are defined in `akka.discovery.config.services` and have the following format:
+By default the services discoverable are defined in `pekko.discovery.config.services` and have the following format:
 ```
-akka.discovery.config.services = {
+pekko.discovery.config.services = {
 service1 = {
 endpoints = [
 {
@@ -215,14 +215,14 @@ via DNS and fall back to configuration.
 To use aggregate discovery add its dependency as well as all of the discovery that you
 want to aggregate.
-Configure `aggregate` as `akka.discovery.method` and which discovery methods are tried and in which order.
+Configure `aggregate` as `pekko.discovery.method` and which discovery methods are tried and in which order.
 ```
-akka {
+pekko {
 discovery {
 method = aggregate
 aggregate {
-discovery-methods = ["akka-dns", "config"]
+discovery-methods = ["pekko-dns", "config"]
 }
 config {
 services {
@@ -245,7 +245,7 @@ akka {
 ```
-The above configuration will result in `akka-dns` first being checked and if it fails or returns no
+The above configuration will result in `pekko-dns` first being checked and if it fails or returns no
 targets for the given service name then `config` is queried, which is configured with one service called
 `service1` with two hosts, `host1` and `host2`.
@@ -258,8 +258,8 @@ At least version `1.0.0` of any Akka Management module should be used if also us
 Migration steps:
 * Any custom discovery method should now implement `org.apache.pekko.discovery.ServiceDiscovery`
-* `discovery-method` now has to be a configuration location under `akka.discovery` with at minimum a property `class` specifying the fully qualified name of the implementation of `org.apache.pekko.discovery.ServiceDiscovery`.
-Previous versions allowed this to be a class name or a fully qualified config location e.g. `akka.discovery.kubernetes-api` rather than just `kubernetes-api`
+* `discovery-method` now has to be a configuration location under `pekko.discovery` with at minimum a property `class` specifying the fully qualified name of the implementation of `org.apache.pekko.discovery.ServiceDiscovery`.
+Previous versions allowed this to be a class name or a fully qualified config location e.g. `pekko.discovery.kubernetes-api` rather than just `kubernetes-api`
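That migration rule implies a layout like the following, sketched with a hypothetical discovery method name and implementation class:

```
pekko.discovery {
  method = my-discovery
  my-discovery {
    # FQCN implementing org.apache.pekko.discovery.ServiceDiscovery (hypothetical)
    class = "com.example.MyServiceDiscovery"
  }
}
```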


@@ -10,7 +10,7 @@ Dispatchers are part of core Akka, which means that they are part of the akka-ac
 @@dependency[sbt,Maven,Gradle] {
 bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
 symbol1=AkkaVersion
-value1="$akka.version$"
+value1="$pekko.version$"
 group="com.typesafe.akka"
 artifact="akka-actor_$scala.binary.version$"
 version=AkkaVersion


@@ -10,7 +10,7 @@ To use Akka Distributed Data, you must add the following dependency in your proj
 @@dependency[sbt,Maven,Gradle] {
 bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
 symbol1=AkkaVersion
-value1="$akka.version$"
+value1="$pekko.version$"
 group="com.typesafe.akka"
 artifact="akka-distributed-data_$scala.binary.version$"
 version=AkkaVersion


@@ -10,7 +10,7 @@ To use Distributed Publish Subscribe you must add the following dependency in yo
 @@dependency[sbt,Maven,Gradle] {
 bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
 symbol1=AkkaVersion
-value1="$akka.version$"
+value1="$pekko.version$"
 group="com.typesafe.akka"
 artifact="akka-cluster-tools_$scala.binary.version$"
 version=AkkaVersion
@@ -224,11 +224,11 @@ The `DistributedPubSub` extension can be configured with the following propertie
 @@snip [reference.conf](/akka-cluster-tools/src/main/resources/reference.conf) { #pub-sub-ext-config }
 It is recommended to load the extension when the actor system is started by defining it in
-`akka.extensions` configuration property. Otherwise it will be activated when first used
+`pekko.extensions` configuration property. Otherwise it will be activated when first used
 and then it takes a while for it to be populated.
 ```
-akka.extensions = ["org.apache.pekko.cluster.pubsub.DistributedPubSub"]
+pekko.extensions = ["org.apache.pekko.cluster.pubsub.DistributedPubSub"]
 ```
 ## Delivery Guarantee


@@ -10,7 +10,7 @@ To use Persistence Query, you must add the following dependency in your project:
 @@dependency[sbt,Maven,Gradle] {
 bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
 symbol1=AkkaVersion
-value1="$akka.version$"
+value1="$pekko.version$"
 group=com.typesafe.akka
 artifact=akka-persistence-query_$scala.binary.version$
 version=AkkaVersion


@@ -206,7 +206,7 @@ stream for logging: these are the handlers which are configured for example in
 `application.conf`:
 ```text
-akka {
+pekko {
 loggers = ["org.apache.pekko.event.Logging$DefaultLogger"]
 }
 ```


@@ -67,7 +67,7 @@ That's all there is to it!
 ## Loading from Configuration
 To be able to load extensions from your Akka configuration you must add FQCNs of implementations of either @apidoc[ExtensionId](actor.ExtensionId) or @apidoc[ExtensionIdProvider](ExtensionIdProvider)
-in the `akka.extensions` section of the config you provide to your @apidoc[ActorSystem](actor.ActorSystem).
+in the `pekko.extensions` section of the config you provide to your @apidoc[ActorSystem](actor.ActorSystem).
 Scala
 : @@snip [ExtensionDocSpec.scala](/docs/src/test/scala/docs/extension/ExtensionDocSpec.scala) { #config }
@@ -75,7 +75,7 @@ Scala
 Java
 : @@@vars
 ```
-akka {
+pekko {
 extensions = ["docs.extension.ExtensionDocTest.CountExtension"]
 }
 ```
@@ -114,10 +114,10 @@ Java
 ## Library extensions
 A third-party library may register its extension for auto-loading on actor system startup by appending it to
-`akka.library-extensions` in its `reference.conf`.
+`pekko.library-extensions` in its `reference.conf`.
 ```
-akka.library-extensions += "docs.extension.ExampleExtension"
+pekko.library-extensions += "docs.extension.ExampleExtension"
 ```
 As there is no way to selectively remove such extensions, it should be used with care and only when there is no case
@@ -126,7 +126,7 @@ this could be important is in tests.
 @@@ warning
-The ``akka.library-extensions`` must never be assigned (`= ["Extension"]`) instead of appending, as this will break
+The ``pekko.library-extensions`` must never be assigned (`= ["Extension"]`) instead of appending, as this will break
 the library-extension mechanism and make behavior depend on class path ordering.
 @@@
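To illustrate that warning, append to the list, never assign it (the extension name is the one used in the docs above):

```
# correct: appends to whatever other libraries have registered
pekko.library-extensions += "docs.extension.ExampleExtension"

# wrong: replaces the accumulated list and silently drops
# extensions registered by other libraries
# pekko.library-extensions = ["docs.extension.ExampleExtension"]
```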


@@ -10,7 +10,7 @@ The concept of fault tolerance relates to actors, so in order to use these make
 @@dependency[sbt,Maven,Gradle] {
 bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
 symbol1=AkkaVersion
-value1="$akka.version$"
+value1="$pekko.version$"
 group="com.typesafe.akka"
 artifact="akka-actor_$scala.binary.version$"
 version=AkkaVersion
@@ -313,7 +313,7 @@ The `org.apache.pekko.pattern.BackoffOnFailureOptions` and `org.apache.pekko.pat
 Options are:
 * `withAutoReset`: The backoff is reset if no failure/stop occurs within the duration. This is the default behaviour with `minBackoff` as default value
 * `withManualReset`: The child must send `BackoffSupervisor.Reset` to its backoff supervisor (parent)
-* `withSupervisionStrategy`: Sets a custom `OneForOneStrategy` (as each backoff supervisor only has one child). The default strategy uses the `akka.actor.SupervisorStrategy.defaultDecider` which stops and starts the child on exceptions.
+* `withSupervisionStrategy`: Sets a custom `OneForOneStrategy` (as each backoff supervisor only has one child). The default strategy uses the `pekko.actor.SupervisorStrategy.defaultDecider` which stops and starts the child on exceptions.
 * `withMaxNrOfRetries`: Sets the maximum number of retries until the supervisor will give up (`-1` is default which means no limit of retries). Note: This is set on the supervision strategy, so setting a different strategy resets the `maxNrOfRetries`.
 * `withReplyWhileStopped`: By default all messages received while the child is stopped are forwarded to dead letters. With this set, the supervisor will reply to the sender instead.

View file

@ -10,7 +10,7 @@ To use Finite State Machine actors, you must add the following dependency in you
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary.version$"
version=AkkaVersion
@ -541,7 +541,7 @@ and in the following.
### Event Tracing
The setting `akka.actor.debug.fsm` in @ref:[configuration](general/configuration.md) enables logging of an
The setting `pekko.actor.debug.fsm` in @ref:[configuration](general/configuration.md) enables logging of an
event trace by `LoggingFSM` instances:
Scala

View file

@ -7,7 +7,7 @@ Akka offers tiny helpers for use with @scala[@scaladoc[Future](scala.concurrent.
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary.version$"
version=AkkaVersion

View file

@ -75,7 +75,7 @@ A custom `application.conf` might look like this:
# In this file you can override any option defined in the reference files.
# Copy in parts of the reference files and modify as you please.
akka {
pekko {
# Logger config for Akka internals and classic actors, the new API relies
# directly on SLF4J and your config for the logger backend.
@ -126,7 +126,7 @@ Specifying system property with `-Dconfig.resource=/dev.conf` will load the `dev
```
include "application"
akka {
pekko {
loglevel = "DEBUG"
}
```
@ -137,7 +137,7 @@ specification.
<a id="dakka-log-config-on-start"></a>
## Logging of Configuration
If the system or config property `akka.log-config-on-start` is set to `on`, then the
If the system or config property `pekko.log-config-on-start` is set to `on`, then the
complete configuration is logged at INFO level when the actor system is started. This is
useful when you are uncertain of what configuration is used.
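A minimal way to enable this in `application.conf` (equivalently, pass it as the system property `-Dpekko.log-config-on-start=on`):

```
pekko.log-config-on-start = on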
@ -198,7 +198,7 @@ This implies that putting Akka on the boot class path will yield
## Application specific settings
The configuration can also be used for application specific settings.
A good practice is to place those settings in an @ref:[Extension](../extending-akka.md#extending-akka-settings).
## Configuring multiple ActorSystem
@ -213,11 +213,11 @@ differentiate actor systems within the hierarchy of the configuration:
```
myapp1 {
akka.loglevel = "WARNING"
pekko.loglevel = "WARNING"
my.own.setting = 43
}
myapp2 {
akka.loglevel = "ERROR"
pekko.loglevel = "ERROR"
app2.setting = "appname"
}
my.own.setting = 42
@ -235,7 +235,7 @@ trick: in the first case, the configuration accessible from within the actor
system is this
```ruby
akka.loglevel = "WARNING"
pekko.loglevel = "WARNING"
my.own.setting = 43
my.other.setting = "hello"
// plus myapp1 and myapp2 subtrees
@ -245,7 +245,7 @@ while in the second one, only the “akka” subtree is lifted, with the followi
result
```ruby
akka.loglevel = "ERROR"
pekko.loglevel = "ERROR"
my.own.setting = 42
my.other.setting = "hello"
// plus myapp1 and myapp2 subtrees

View file

@ -9,7 +9,7 @@ To use Classic Akka Actors, you must add the following dependency in your projec
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary.version$"
version=AkkaVersion

View file

@ -7,7 +7,7 @@ To use Utilities, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary.version$"
version=AkkaVersion

View file

@ -36,7 +36,7 @@ block that specifies the implementation via `provider-object`.
@@@
To select which `DnsProvider` to use set `akka.io.dns.resolver ` to the location of the configuration.
To select which `DnsProvider` to use, set `pekko.io.dns.resolver` to the location of the configuration.
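For example, selecting the asynchronous resolver is a one-line configuration change:

```
pekko.io.dns.resolver = async-dns
```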
There are currently two implementations:
@ -83,7 +83,7 @@ The Async DNS provider has the following advantages:
## SRV Records
To get DNS SRV records `akka.io.dns.resolver` must be set to `async-dns` and `DnsProtocol.Resolve`'s requestType
To get DNS SRV records `pekko.io.dns.resolver` must be set to `async-dns` and `DnsProtocol.Resolve`'s requestType
must be set to `DnsProtocol.Srv`
Scala

View file

@ -10,7 +10,7 @@ To use TCP, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary.version$"
version=AkkaVersion

View file

@ -10,7 +10,7 @@ To use UDP, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary.version$"
version=AkkaVersion

View file

@ -7,7 +7,7 @@ To use I/O, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary.version$"
version=AkkaVersion
@ -15,7 +15,7 @@ To use I/O, you must add the following dependency in your project:
## Introduction
The `akka.io` package has been developed in collaboration between the Akka
The `pekko.io` package has been developed in collaboration between the Akka
and [spray.io](http://spray.io) teams. Its design combines experiences from the
`spray-io` module with improvements that were jointly developed for
more general consumption as an actor-based service.
@ -113,7 +113,7 @@ result in copying all bytes in that slice.
#### Compatibility with java.io
A @apidoc[ByteStringBuilder](util.ByteStringBuilder) can be wrapped in a @javadoc[java.io.OutputStream](java.io.OutputStream) via the @apidoc[asOutputStream](util.ByteStringBuilder) {scala="#asOutputStream:java.io.OutputStream" java="#asOutputStream()"} method. Likewise, @apidoc[ByteIterator](util.ByteIterator) can be wrapped in a @javadoc[java.io.InputStream](java.io.InputStream) via @apidoc[asInputStream](util.ByteIterator) {scala="#asInputStream:java.io.InputStream" java="#asInputStream()"}. Using these, `akka.io` applications can integrate legacy code based on `java.io` streams.
A @apidoc[ByteStringBuilder](util.ByteStringBuilder) can be wrapped in a @javadoc[java.io.OutputStream](java.io.OutputStream) via the @apidoc[asOutputStream](util.ByteStringBuilder) {scala="#asOutputStream:java.io.OutputStream" java="#asOutputStream()"} method. Likewise, @apidoc[ByteIterator](util.ByteIterator) can be wrapped in a @javadoc[java.io.InputStream](java.io.InputStream) via @apidoc[asInputStream](util.ByteIterator) {scala="#asInputStream:java.io.InputStream" java="#asInputStream()"}. Using these, `pekko.io` applications can integrate legacy code based on `java.io` streams.
## Architecture in-depth

View file

@ -10,7 +10,7 @@ To use Logging, you must at least use the Akka actors dependency in your project
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary.version$"
version=AkkaVersion
@ -98,7 +98,7 @@ messages in the actor mailboxes are sent to dead letters. You can also disable l
of dead letters during shutdown.
```ruby
akka {
pekko {
log-dead-letters = 10
log-dead-letters-during-shutdown = on
}
@ -114,7 +114,7 @@ Akka has a few configuration options for very low level debugging. These make mo
You almost definitely need to have logging set to `DEBUG` to use any of the options below:
```ruby
akka {
pekko {
loglevel = "DEBUG"
}
```
@ -122,7 +122,7 @@ akka {
This config option is very good if you want to know what config settings are loaded by Akka:
```ruby
akka {
pekko {
# Log the complete configuration at INFO level when the actor system is started.
# This is useful when you are uncertain of what configuration is used.
log-config-on-start = on
@ -135,7 +135,7 @@ If you want very detailed logging of user-level messages then wrap your actors'
@scaladoc[LoggingReceive](pekko.event.LoggingReceive) and enable the `receive` option:
```ruby
akka {
pekko {
actor {
debug {
# enable function of LoggingReceive, which is to log any received message at
@ -152,7 +152,7 @@ If you want very detailed logging of all automatically received messages that ar
by Actors:
```ruby
akka {
pekko {
actor {
debug {
# enable DEBUG logging of all AutoReceiveMessages (Kill, PoisonPill etc.)
@ -165,7 +165,7 @@ akka {
If you want very detailed logging of all lifecycle changes of Actors (restarts, deaths etc.):
```ruby
akka {
pekko {
actor {
debug {
# enable DEBUG logging of actor lifecycle changes
@ -178,7 +178,7 @@ akka {
If you want unhandled messages logged at `DEBUG`:
```ruby
akka {
pekko {
actor {
debug {
# enable DEBUG logging of unhandled messages
@ -191,7 +191,7 @@ akka {
If you want very detailed logging of all events, transitions and timers of FSM Actors that extend LoggingFSM:
```ruby
akka {
pekko {
actor {
debug {
# enable DEBUG logging of all LoggingFSMs for events, transitions and timers
@ -204,7 +204,7 @@ akka {
If you want to monitor subscriptions (subscribe/unsubscribe) on the ActorSystem.eventStream:
```ruby
akka {
pekko {
actor {
debug {
# enable DEBUG logging of subscription changes on the eventStream
@ -220,7 +220,7 @@ akka {
If you want to see all messages that are sent through remoting at `DEBUG` log level, use the following config option. Note that this logs the messages as they are sent by the transport layer, not by an actor.
```ruby
akka.remote.artery {
pekko.remote.artery {
# If this is "on", Akka will log all outbound messages at DEBUG level,
# if off then they are not logged
log-sent-messages = on
@ -230,7 +230,7 @@ akka.remote.artery {
If you want to see all messages that are received through remoting at `DEBUG` log level, use the following config option. Note that this logs the messages as they are received by the transport layer, not by an actor.
```ruby
akka.remote.artery {
pekko.remote.artery {
# If this is "on", Akka will log all inbound messages at DEBUG level,
# if off then they are not logged
log-received-messages = on
@ -277,7 +277,7 @@ might want to do this also in case you implement your own logging adapter.
To turn off logging you can configure the log levels to be `OFF` like this.
```ruby
akka {
pekko {
stdout-loglevel = "OFF"
loglevel = "OFF"
}
@ -306,7 +306,7 @@ can be implemented in a custom @apidoc[LoggingFilter], which can be defined in t
configuration property.
```ruby
akka {
pekko {
# Loggers to register at boot time (org.apache.pekko.event.Logging$DefaultLogger logs
# to STDOUT)
loggers = ["org.apache.pekko.event.Logging$DefaultLogger"]
@ -340,7 +340,7 @@ Java
When the actor system is starting up and shutting down the configured `loggers` are not used.
Instead log messages are printed to stdout (System.out). The default log level for this
stdout logger is `WARNING` and it can be silenced completely by setting
`akka.stdout-loglevel=OFF`.
`pekko.stdout-loglevel=OFF`.
## SLF4J
@ -350,7 +350,7 @@ It has a single dependency: the slf4j-api jar. In your runtime, you also need a
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-slf4j_$scala.binary.version$"
version=AkkaVersion
@ -372,13 +372,13 @@ If you set the `loglevel` to a higher level than `DEBUG`, any `DEBUG` events wil
out already at the source and will never reach the logging backend, regardless of how the backend
is configured.
You can enable `DEBUG` level for `akka.loglevel` and control the actual level in the SLF4J backend
You can enable `DEBUG` level for `pekko.loglevel` and control the actual level in the SLF4J backend
without any significant overhead, also for production.
@@@
```ruby
akka {
pekko {
loggers = ["org.apache.pekko.event.slf4j.Slf4jLogger"]
loglevel = "DEBUG"
logging-filter = "org.apache.pekko.event.slf4j.Slf4jLoggingFilter"

View file

@ -10,7 +10,7 @@ To use Mailboxes, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary.version$"
version=AkkaVersion
@ -38,7 +38,7 @@ it can be used as the default mailbox, but it cannot be used with a BalancingDis
Configuration of `SingleConsumerOnlyUnboundedMailbox` as default mailbox:
```
akka.actor.default-mailbox {
pekko.actor.default-mailbox {
mailbox-type = "org.apache.pekko.dispatch.SingleConsumerOnlyUnboundedMailbox"
}
```
@ -107,7 +107,7 @@ that fails then the dispatcher's requirement—if any—will be tried instead.
5. If the dispatcher requires a mailbox type as described above then the
mapping for that requirement will be used to determine the mailbox type to
be used.
6. The default mailbox `akka.actor.default-mailbox` will be used.
6. The default mailbox `pekko.actor.default-mailbox` will be used.
## Mailbox configuration examples

View file

@ -149,26 +149,26 @@ You can define specific JVM options for each of the spawned JVMs. You do that by
a file named after the node in the test with suffix `.opts` and put them in the same
directory as the test.
For example, to feed the JVM options `-Dakka.remote.port=9991` and `-Xmx256m` to the `SampleMultiJvmNode1`
For example, to feed the JVM options `-Dpekko.remote.port=9991` and `-Xmx256m` to the `SampleMultiJvmNode1` node,
let's create three `*.opts` files and add the options to them. Separate multiple options with a
space.
`SampleMultiJvmNode1.opts`:
```
-Dakka.remote.port=9991 -Xmx256m
-Dpekko.remote.port=9991 -Xmx256m
```
`SampleMultiJvmNode2.opts`:
```
-Dakka.remote.port=9992 -Xmx256m
-Dpekko.remote.port=9992 -Xmx256m
```
`SampleMultiJvmNode3.opts`:
```
-Dakka.remote.port=9993 -Xmx256m
-Dpekko.remote.port=9993 -Xmx256m
```
## ScalaTest

View file

@ -10,7 +10,7 @@ To use Multi Node Testing, you must add the following dependency in your project
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group=com.typesafe.akka
artifact=akka-multi-node-testkit_$scala.binary.version$
version=AkkaVersion

View file

@ -9,7 +9,7 @@ Persistent FSMs are part of Akka persistence, you must add the following depende
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-persistence_$scala.binary.version$"
version=AkkaVersion

View file

@ -59,7 +59,7 @@ The plugin section of the actor system's config will be passed in the config con
of the plugin is passed in the `String` parameter.
The `plugin-dispatcher` is the dispatcher used for the plugin actor. If not specified, it defaults to
`akka.persistence.dispatchers.default-plugin-dispatcher`.
`pekko.persistence.dispatchers.default-plugin-dispatcher`.
Don't run journal tasks/futures on the system default dispatcher, since that might starve other tasks.
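As a sketch, a custom journal plugin's configuration block might set this explicitly (the plugin id `my-journal` and the class name are placeholders):

```
pekko.persistence.journal.my-journal {
  class = "com.example.MyJournal"
  # Dispatcher for the plugin actor; keeps plugin work off the system default dispatcher
  plugin-dispatcher = "pekko.persistence.dispatchers.default-plugin-dispatcher"
}
```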
@ -91,7 +91,7 @@ The plugin section of the actor system's config will be passed in the config con
of the plugin is passed in the `String` parameter.
The `plugin-dispatcher` is the dispatcher used for the plugin actor. If not specified, it defaults to
`akka.persistence.dispatchers.default-plugin-dispatcher`.
`pekko.persistence.dispatchers.default-plugin-dispatcher`.
Don't run snapshot store tasks/futures on the system default dispatcher, since that might starve other tasks.
@ -104,7 +104,7 @@ The TCK is usable from Java as well as Scala projects. To test your implementati
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-persistence-tck_$scala.binary.version$"
version=AkkaVersion

View file

@ -18,9 +18,9 @@ When a persistent actor does NOT override the `journalPluginId` and `snapshotPlu
the persistence extension will use the "default" journal, snapshot-store and durable-state plugins configured in `reference.conf`:
```
akka.persistence.journal.plugin = ""
akka.persistence.snapshot-store.plugin = ""
akka.persistence.state.plugin = ""
pekko.persistence.journal.plugin = ""
pekko.persistence.snapshot-store.plugin = ""
pekko.persistence.state.plugin = ""
```
However, these entries are provided empty ("") and require explicit user configuration via an override in the user `application.conf`.
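For example, to select the LevelDB journal and the local snapshot store described later on this page, an `application.conf` override could look like:

```
pekko.persistence.journal.plugin = "pekko.persistence.journal.leveldb"
pekko.persistence.snapshot-store.plugin = "pekko.persistence.snapshot-store.local"
```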
@ -33,25 +33,25 @@ However, these entries are provided as empty "", and require explicit user confi
By default, persistence plugins are started on-demand, as they are used. In some cases, however, it might be beneficial
to start a certain plugin eagerly. In order to do that, you should first add `org.apache.pekko.persistence.Persistence`
under the `akka.extensions` key. Then, specify the IDs of plugins you wish to start automatically under
`akka.persistence.journal.auto-start-journals` and `akka.persistence.snapshot-store.auto-start-snapshot-stores`.
under the `pekko.extensions` key. Then, specify the IDs of plugins you wish to start automatically under
`pekko.persistence.journal.auto-start-journals` and `pekko.persistence.snapshot-store.auto-start-snapshot-stores`.
For example, if you want eager initialization for the leveldb journal plugin and the local snapshot store plugin, your configuration should look like this:
```
akka {
pekko {
extensions = [org.apache.pekko.persistence.Persistence]
persistence {
journal {
plugin = "akka.persistence.journal.leveldb"
plugin = "pekko.persistence.journal.leveldb"
auto-start-journals = ["org.apache.pekko.persistence.journal.leveldb"]
}
snapshot-store {
plugin = "akka.persistence.snapshot-store.local"
plugin = "pekko.persistence.snapshot-store.local"
auto-start-snapshot-stores = ["org.apache.pekko.persistence.snapshot-store.local"]
}
@ -76,7 +76,7 @@ The LevelDB plugin cannot be used in an Akka Cluster since the storage is in a l
The LevelDB journal is deprecated and it is not advised to build new applications with it.
As a replacement we recommend using [Akka Persistence JDBC](https://doc.akka.io/docs/akka-persistence-jdbc/current/index.html).
The LevelDB journal plugin config entry is `akka.persistence.journal.leveldb`. Enable this plugin by
The LevelDB journal plugin config entry is `pekko.persistence.journal.leveldb`. Enable this plugin by
defining config property:
@@snip [PersistencePluginDocSpec.scala](/docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #leveldb-plugin-config }
@ -125,7 +125,7 @@ working directory. The storage location can be changed by configuration:
@@snip [PersistencePluginDocSpec.scala](/docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #shared-store-config }
Actor systems that use a shared LevelDB store must activate the `akka.persistence.journal.leveldb-shared`
Actor systems that use a shared LevelDB store must activate the `pekko.persistence.journal.leveldb-shared`
plugin.
@@snip [PersistencePluginDocSpec.scala](/docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #shared-journal-config }
@ -150,7 +150,7 @@ This plugin writes snapshot files to the local filesystem.
The local snapshot store plugin cannot be used in an Akka Cluster since the storage is in a local file system.
@@@
The local snapshot store plugin config entry is `akka.persistence.snapshot-store.local`.
The local snapshot store plugin config entry is `pekko.persistence.snapshot-store.local`.
Enable this plugin by defining config property:
@@snip [PersistencePluginDocSpec.scala](/docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #leveldb-snapshot-plugin-config }
@ -176,10 +176,10 @@ A shared journal/snapshot store is a single point of failure and should only be
purposes.
@@@
The journal and snapshot store proxies are controlled via the `akka.persistence.journal.proxy` and
`akka.persistence.snapshot-store.proxy` configuration entries, respectively. Set the `target-journal-plugin` or
The journal and snapshot store proxies are controlled via the `pekko.persistence.journal.proxy` and
`pekko.persistence.snapshot-store.proxy` configuration entries, respectively. Set the `target-journal-plugin` or
`target-snapshot-store-plugin` keys to the underlying plugin you wish to use (for example:
`akka.persistence.journal.inmem`). The `start-target-journal` and `start-target-snapshot-store` keys should be
`pekko.persistence.journal.inmem`). The `start-target-journal` and `start-target-snapshot-store` keys should be
set to `on` in exactly one actor system - this is the system that will instantiate the shared persistence plugin.
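Putting those keys together, a sketch of the proxy configuration for the one actor system that hosts the shared store might look like this (the target plugin ids are illustrative):

```
pekko.persistence.journal.proxy {
  target-journal-plugin = "pekko.persistence.journal.leveldb"
  start-target-journal = on
}
pekko.persistence.snapshot-store.proxy {
  target-snapshot-store-plugin = "pekko.persistence.snapshot-store.local"
  start-target-snapshot-store = on
}
```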
Next, the proxy needs to be told how to find the shared plugin. This can be done by setting the `target-journal-address`
and `target-snapshot-store-address` configuration keys, or programmatically by calling the

View file

@ -10,7 +10,7 @@ To use Persistence Query, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group=com.typesafe.akka
artifact=akka-persistence-query_$scala.binary.version$
version=AkkaVersion
@ -154,7 +154,7 @@ backend journal.
## Configuration
Configuration settings can be defined in the configuration section with the
absolute path corresponding to the identifier, which is `"akka.persistence.query.journal.leveldb"`
absolute path corresponding to the identifier, which is `"pekko.persistence.query.journal.leveldb"`
for the default `LeveldbReadJournal.Identifier`.
It can be configured with the following properties:

View file

@ -10,7 +10,7 @@ To use Persistence Query, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group=com.typesafe.akka
artifact=akka-persistence-query_$scala.binary.version$
version=AkkaVersion
@ -53,7 +53,7 @@ query types for the most common query scenarios, that most journals are likely t
In order to issue queries one has to first obtain an instance of a @apidoc[query.*.ReadJournal].
Read journals are implemented as [Community plugins](https://akka.io/community/#plugins-to-akka-persistence-query), each targeting a specific datastore (for example Cassandra or JDBC
databases). For example, given a library that provides a `akka.persistence.query.my-read-journal` obtaining the related
databases). For example, given a library that provides a `pekko.persistence.query.my-read-journal` obtaining the related
journal is as simple as:
Scala

View file

@ -7,7 +7,7 @@ This documentation page touches upon @ref[Akka Persistence](persistence.md), so
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-persistence_$scala.binary.version$"
version=AkkaVersion

View file

@ -13,7 +13,7 @@ To use Akka Persistence, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-persistence_$scala.binary.version$"
version=AkkaVersion
@ -134,7 +134,7 @@ to not overload the system and the backend data store. When exceeding the limit
until other recoveries have been completed. This is configured by:
```
akka.persistence.max-concurrent-recoveries = 50
pekko.persistence.max-concurrent-recoveries = 50
```
@@@ note
@ -220,7 +220,7 @@ of stashed messages will grow without bounds. It can be wise to protect against
maximum stash capacity in the mailbox configuration:
```
akka.actor.default-mailbox.stash-capacity=10000
pekko.actor.default-mailbox.stash-capacity=10000
```
Note that the stash capacity is per actor. If you have many persistent actors, e.g. when using cluster sharding,
@ -234,7 +234,7 @@ for all persistent actors by providing FQCN, which must be a subclass of @apidoc
persistence configuration:
```
akka.persistence.internal-stash-overflow-strategy=
pekko.persistence.internal-stash-overflow-strategy=
"org.apache.pekko.persistence.ThrowExceptionConfigurator"
```
@ -604,7 +604,7 @@ saved snapshot matches the specified `SnapshotSelectionCriteria` will replay all
@@@ note
In order to use snapshots, a default snapshot-store (`akka.persistence.snapshot-store.plugin`) must be configured,
In order to use snapshots, a default snapshot-store (`pekko.persistence.snapshot-store.plugin`) must be configured,
or the @scala[`PersistentActor`]@java[persistent actor] can pick a snapshot store explicitly by overriding @scala[`def snapshotPluginId: String`]@java[`String snapshotPluginId()`].
Because some use cases may not benefit from or need snapshots, it is perfectly valid not to configure a snapshot store.
@ -742,28 +742,28 @@ serialization mechanism. It is easiest to include the bytes of the `AtLeastOnceD
as a blob in your custom snapshot.
The interval between redelivery attempts is defined by the @apidoc[redeliverInterval](persistence.AtLeastOnceDeliveryLike) {scala="#redeliverInterval:scala.concurrent.duration.FiniteDuration" java="#redeliverInterval()"} method.
The default value can be configured with the `akka.persistence.at-least-once-delivery.redeliver-interval`
The default value can be configured with the `pekko.persistence.at-least-once-delivery.redeliver-interval`
configuration key. The method can be overridden by implementation classes to return non-default values.
The maximum number of messages that will be sent at each redelivery burst is defined by the
@apidoc[redeliveryBurstLimit](persistence.AtLeastOnceDeliveryLike) {scala="#redeliveryBurstLimit:Int" java="#redeliveryBurstLimit()"} method (burst frequency is half of the redelivery interval). If there are many
unconfirmed messages (e.g. if the destination is not available for a long time), this helps to prevent an overwhelming
number of messages from being sent at once. The default value can be configured with the
`akka.persistence.at-least-once-delivery.redelivery-burst-limit` configuration key. The method can be overridden
`pekko.persistence.at-least-once-delivery.redelivery-burst-limit` configuration key. The method can be overridden
by implementation classes to return non-default values.
After a number of delivery attempts a @apidoc[persistence.AtLeastOnceDelivery.UnconfirmedWarning] message
will be sent to `self`. The re-sending will still continue, but you can choose to call
`confirmDelivery` to cancel the re-sending. The number of delivery attempts before emitting the
warning is defined by the @apidoc[warnAfterNumberOfUnconfirmedAttempts](persistence.AtLeastOnceDeliveryLike) {scala="#warnAfterNumberOfUnconfirmedAttempts:Int" java="#warnAfterNumberOfUnconfirmedAttempts()"} method. The default value can be
configured with the `akka.persistence.at-least-once-delivery.warn-after-number-of-unconfirmed-attempts`
configured with the `pekko.persistence.at-least-once-delivery.warn-after-number-of-unconfirmed-attempts`
configuration key. The method can be overridden by implementation classes to return non-default values.
The @scala[@scaladoc[AtLeastOnceDelivery](pekko.persistence.AtLeastOnceDelivery) trait]@java[@javadoc[AbstractPersistentActorWithAtLeastOnceDelivery](pekko.persistence.AbstractPersistentActorWithAtLeastOnceDelivery) class] holds messages in memory until their successful delivery has been confirmed.
The maximum number of unconfirmed messages that the actor is allowed to hold in memory
is defined by the @apidoc[maxUnconfirmedMessages](persistence.AtLeastOnceDeliveryLike) {scala="#maxUnconfirmedMessages:Int" java="#maxUnconfirmedMessages()"} method. If this limit is exceeded, the `deliver` method will
not accept more messages and will throw @apidoc[AtLeastOnceDelivery.MaxUnconfirmedMessagesExceededException].
The default value can be configured with the `akka.persistence.at-least-once-delivery.max-unconfirmed-messages`
The default value can be configured with the `pekko.persistence.at-least-once-delivery.max-unconfirmed-messages`
configuration key. The method can be overridden by implementation classes to return non-default values.
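The four settings described above live under a common configuration namespace; the values below are for illustration only (check the reference configuration for the actual defaults):

```
pekko.persistence.at-least-once-delivery {
  redeliver-interval = 5s
  redelivery-burst-limit = 10000
  warn-after-number-of-unconfirmed-attempts = 5
  max-unconfirmed-messages = 100000
}
```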
## Event Adapters

View file

@ -13,7 +13,7 @@ It also provides the Lightbend Reactive Platform, which is powered by an open so
## Akka Discuss Forums
[Akka Discuss Forums](https://discuss.akka.io)
[Akka Discuss Forums](https://discuss.pekko.io)
## Gitter

View file

@ -31,7 +31,7 @@ If you are still using Scala 2.11 then you must upgrade to 2.12 or 2.13
Auto-downing of unreachable Cluster members has been removed after warnings and recommendations against using it
for many years. It was disabled by default, but could be enabled with the configuration
`akka.cluster.auto-down-unreachable-after`.
`pekko.cluster.auto-down-unreachable-after`.
For alternatives see the @ref:[documentation about Downing](../typed/cluster.md#downing).
@ -122,7 +122,7 @@ After being deprecated since 2.2, the following have been removed in Akka 2.6.0.
### TypedActor
`org.apache.pekko.actor.TypedActor` has been deprecated as of 2.6.0 in favor of the
`akka.actor.typed` API which should be used instead.
`org.apache.pekko.actor.typed` API which should be used instead.
There are several reasons for phasing out the old `TypedActor`. The primary reason is they use transparent
remoting which is not our recommended way of implementing and interacting with actors. Transparent remoting
@ -225,7 +225,7 @@ misconfiguration. You can run Artery on 2552 if you prefer that (e.g. existing f
have to configure the port with:
```
akka.remote.artery.canonical.port = 2552
pekko.remote.artery.canonical.port = 2552
```
The configuration for Artery is different, so you might have to revisit any custom configuration. See the full
@ -234,8 +234,8 @@ The configuration for Artery is different, so you might have to revisit any cust
Configuration that is likely required to be ported:
* `akka.remote.netty.tcp.hostname` => `akka.remote.artery.canonical.hostname`
* `akka.remote.netty.tcp.port`=> `akka.remote.artery.canonical.port`
* `pekko.remote.netty.tcp.hostname` => `pekko.remote.artery.canonical.hostname`
* `pekko.remote.netty.tcp.port` => `pekko.remote.artery.canonical.port`
If using SSL then `tcp-tls` needs to be enabled and setup. See @ref[Artery docs for SSL](../remoting-artery.md#configuring-ssl-tls-for-akka-remoting)
for how to do this.
@@ -250,25 +250,25 @@ The following events that are published to the `eventStream` have changed:
The following defaults have changed:
* `akka.remote.artery.transport` default has changed from `aeron-udp` to `tcp`
* `pekko.remote.artery.transport` default has changed from `aeron-udp` to `tcp`
The following properties have moved. If you don't adjust these from their defaults no changes are required:
For Aeron-UDP:
* `akka.remote.artery.log-aeron-counters` to `akka.remote.artery.advanced.aeron.log-aeron-counters`
* `akka.remote.artery.advanced.embedded-media-driver` to `akka.remote.artery.advanced.aeron.embedded-media-driver`
* `akka.remote.artery.advanced.aeron-dir` to `akka.remote.artery.advanced.aeron.aeron-dir`
* `akka.remote.artery.advanced.delete-aeron-dir` to `akka.remote.artery.advanced.aeron.aeron-delete-dir`
* `akka.remote.artery.advanced.idle-cpu-level` to `akka.remote.artery.advanced.aeron.idle-cpu-level`
* `akka.remote.artery.advanced.give-up-message-after` to `akka.remote.artery.advanced.aeron.give-up-message-after`
* `akka.remote.artery.advanced.client-liveness-timeout` to `akka.remote.artery.advanced.aeron.client-liveness-timeout`
* `akka.remote.artery.advanced.image-liveless-timeout` to `akka.remote.artery.advanced.aeron.image-liveness-timeout`
* `akka.remote.artery.advanced.driver-timeout` to `akka.remote.artery.advanced.aeron.driver-timeout`
* `pekko.remote.artery.log-aeron-counters` to `pekko.remote.artery.advanced.aeron.log-aeron-counters`
* `pekko.remote.artery.advanced.embedded-media-driver` to `pekko.remote.artery.advanced.aeron.embedded-media-driver`
* `pekko.remote.artery.advanced.aeron-dir` to `pekko.remote.artery.advanced.aeron.aeron-dir`
* `pekko.remote.artery.advanced.delete-aeron-dir` to `pekko.remote.artery.advanced.aeron.aeron-delete-dir`
* `pekko.remote.artery.advanced.idle-cpu-level` to `pekko.remote.artery.advanced.aeron.idle-cpu-level`
* `pekko.remote.artery.advanced.give-up-message-after` to `pekko.remote.artery.advanced.aeron.give-up-message-after`
* `pekko.remote.artery.advanced.client-liveness-timeout` to `pekko.remote.artery.advanced.aeron.client-liveness-timeout`
* `pekko.remote.artery.advanced.image-liveless-timeout` to `pekko.remote.artery.advanced.aeron.image-liveness-timeout`
* `pekko.remote.artery.advanced.driver-timeout` to `pekko.remote.artery.advanced.aeron.driver-timeout`
For TCP:
* `akka.remote.artery.advanced.connection-timeout` to `akka.remote.artery.advanced.tcp.connection-timeout`
* `pekko.remote.artery.advanced.connection-timeout` to `pekko.remote.artery.advanced.tcp.connection-timeout`
#### Remaining with Classic remoting (not recommended)
@@ -278,8 +278,8 @@ not supported so if you want to update from Akka 2.5.x with Classic remoting to
down of the Cluster you have to enable Classic remoting. Later, you can plan for a full shutdown and
@ref:[migrate from classic remoting to Artery](#migrating-from-classic-remoting-to-artery) as a separate step.
Explicitly disable Artery by setting property `akka.remote.artery.enabled` to `false`. Further, any configuration under `akka.remote` that is
specific to classic remoting needs to be moved to `akka.remote.classic`. To see which configuration options
Explicitly disable Artery by setting property `pekko.remote.artery.enabled` to `false`. Further, any configuration under `pekko.remote` that is
specific to classic remoting needs to be moved to `pekko.remote.classic`. To see which configuration options
are specific to classic remoting, search for them in: @ref:[`akka-remote/reference.conf`](../general/configuration-reference.md#config-akka-remote).
If you have a [Lightbend Subscription](https://www.lightbend.com/lightbend-subscription) you can use our [Config Checker](https://doc.akka.io/docs/akka-enhancements/current/config-checker.html) enhancement to flag any settings that have not been properly migrated.
@@ -306,13 +306,13 @@ recommendation if you don't have other preferences or constraints.
For compatibility with older systems that rely on Java serialization it can be enabled with the following configuration:
```ruby
akka.actor.allow-java-serialization = on
pekko.actor.allow-java-serialization = on
```
Akka will still log a warning when Java serialization is used; to silence that you may add:
```ruby
akka.actor.warn-about-java-serializer-usage = off
pekko.actor.warn-about-java-serializer-usage = off
```
### Rolling update
@@ -355,7 +355,7 @@ will log a warning and be ignored, it must be done after the node has joined.
To optionally enable watch without Akka Cluster, or across a boundary between Cluster and non-Cluster nodes,
knowing the consequences, all watchers (cluster as well as remote) need to set:
```
akka.remote.use-unsafe-remote-features-outside-cluster = on
pekko.remote.use-unsafe-remote-features-outside-cluster = on
```
When enabled
@@ -363,7 +363,7 @@ When enabled
* An initial warning is logged on startup of `RemoteActorRefProvider`
* A warning will be logged on remote watch attempts, which you can suppress by setting
```
akka.remote.warn-unsafe-watch-outside-cluster = off
pekko.remote.warn-unsafe-watch-outside-cluster = off
```
### Schedule periodically with fixed-delay vs. fixed-rate
@@ -391,16 +391,16 @@ To protect the Akka internals against starvation when user code blocks the defau
use of blocking APIs from actors) a new internal dispatcher has been added. All of Akka's internal, non-blocking actors
now run on the internal dispatcher by default.
The dispatcher can be configured through `akka.actor.internal-dispatcher`.
The dispatcher can be configured through `pekko.actor.internal-dispatcher`.
For maximum performance, you might want to use a single shared dispatcher for all non-blocking,
asynchronous actors, user actors and Akka internal actors. In that case, you can configure the
`akka.actor.internal-dispatcher` with a string value of `akka.actor.default-dispatcher`.
`pekko.actor.internal-dispatcher` with a string value of `pekko.actor.default-dispatcher`.
This reinstates the behavior of previous Akka versions but also removes the isolation between
user and Akka internals. So, use at your own risk!
Several `use-dispatcher` configuration settings that previously accepted an empty value to fall back to the default
dispatcher have now been given an explicit value of `akka.actor.internal-dispatcher` and no longer accept an empty
dispatcher have now been given an explicit value of `pekko.actor.internal-dispatcher` and no longer accept an empty
string as a value. If such an empty value is used in your `application.conf`, the same result is achieved by removing
that entry completely and letting the default apply.
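Put together in `application.conf`, the shared-dispatcher setup described above amounts to a single override (a sketch, using the renamed `pekko.*` configuration tree):

```
# Run Akka's internal actors on the default dispatcher.
# This removes the isolation between user and internal actors -- use at your own risk.
pekko.actor.internal-dispatcher = pekko.actor.default-dispatcher
```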
@@ -411,7 +411,7 @@ For more details about configuring dispatchers, see the @ref[Dispatchers](../dis
Previously the factor for the default dispatcher was set a bit high (`3.0`) to give some extra threads in case of accidental
blocking and protect a bit against starving the internal actors. Since the internal actors are now on a separate dispatcher
the default dispatcher has been adjusted down to `1.0` which means the number of threads will be one per core, but at least
`8` and at most `64`. This can be tuned using the individual settings in `akka.actor.default-dispatcher.fork-join-executor`.
`8` and at most `64`. This can be tuned using the individual settings in `pekko.actor.default-dispatcher.fork-join-executor`.
### Mixed version
@@ -429,12 +429,12 @@ so it is more likely to timeout if there are nodes restarting, for example when
#### Passivate idle entity
The configuration `akka.cluster.sharding.passivate-idle-entity-after` is now enabled by default.
The configuration `pekko.cluster.sharding.passivate-idle-entity-after` is now enabled by default.
Sharding will passivate entities when they have not received any messages within this duration.
To disable passivation you can use configuration:
```
akka.cluster.sharding.passivate-idle-entity-after = off
pekko.cluster.sharding.passivate-idle-entity-after = off
```
It is always disabled if @ref:[Remembering Entities](../cluster-sharding.md#remembering-entities) is enabled.
@@ -442,7 +442,7 @@ It is always disabled if @ref:[Remembering Entities](../cluster-sharding.md#reme
#### Cluster Sharding stats
A new field has been added to the response of a `ShardRegion.GetClusterShardingStats` command
for any shards per region that may have failed or not responded within the new configurable `akka.cluster.sharding.shard-region-query-timeout`.
for any shards per region that may have failed or not responded within the new configurable `pekko.cluster.sharding.shard-region-query-timeout`.
This is described further in @ref:[inspecting sharding state](../cluster-sharding.md#inspecting-cluster-sharding-state).
### Distributed Data
@@ -456,8 +456,8 @@ actor messages.
The new configuration properties are:
```
akka.cluster.distributed-data.max-delta-elements = 500
akka.cluster.distributed-data.delta-crdt.max-delta-size = 50
pekko.cluster.distributed-data.max-delta-elements = 500
pekko.cluster.distributed-data.delta-crdt.max-delta-size = 50
```
#### DataDeleted
@@ -483,7 +483,7 @@ If this is not desired behavior, for example in tests, you can disable this feat
and then it will behave as in Akka 2.5.x:
```
akka.coordinated-shutdown.run-by-actor-system-terminate = off
pekko.coordinated-shutdown.run-by-actor-system-terminate = off
```
### Scheduler not running tasks when shutdown
@@ -511,10 +511,10 @@ keeping our own copy, so from Akka 2.6.0 on, the default FJP from the JDK will b
### Logging of dead letters
When the number of dead letters reached the configured `akka.log-dead-letters` value, Akka 2.5.x did not log
any more dead letters. In Akka 2.6.x the count is reset after the configured `akka.log-dead-letters-suspend-duration`.
When the number of dead letters reached the configured `pekko.log-dead-letters` value, Akka 2.5.x did not log
any more dead letters. In Akka 2.6.x the count is reset after the configured `pekko.log-dead-letters-suspend-duration`.
`akka.log-dead-letters-during-shutdown` default configuration changed from `on` to `off`.
`pekko.log-dead-letters-during-shutdown` default configuration changed from `on` to `off`.
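A hypothetical `application.conf` fragment combining these settings (the values are illustrative, not necessarily the defaults):

```
pekko.log-dead-letters = 10                          # stop logging after this many dead letters...
pekko.log-dead-letters-suspend-duration = 5 minutes  # ...and reset the count after this duration
pekko.log-dead-letters-during-shutdown = off         # new default in 2.6
```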
### Cluster failure detection
@@ -524,13 +524,13 @@ The reason is to have better coverage and unreachability information for downing
Configuration property:
```
akka.cluster.monitored-by-nr-of-members = 9
pekko.cluster.monitored-by-nr-of-members = 9
```
### TestKit
`expectNoMessage()` without a timeout parameter now uses the new configuration property
`akka.test.expect-no-message-default` (short timeout) instead of `remainingOrDefault` (long timeout).
`pekko.test.expect-no-message-default` (short timeout) instead of `remainingOrDefault` (long timeout).
### Config library resolution change
@@ -542,34 +542,34 @@ For example, the default config for Cluster Sharding, refers to the default conf
`reference.conf` like this:
```ruby
akka.cluster.sharding.distributed-data = ${akka.cluster.distributed-data}
pekko.cluster.sharding.distributed-data = ${pekko.cluster.distributed-data}
```
In Akka 2.5.x this meant that to override the default gossip interval for both direct use of Distributed Data and Cluster Sharding
in the same application you would have to change two settings:
```ruby
akka.cluster.distributed-data.gossip-interval = 3s
akka.cluster.sharding.distributed-data.gossip-interval = 3s
pekko.cluster.distributed-data.gossip-interval = 3s
pekko.cluster.sharding.distributed-data.gossip-interval = 3s
```
In Akka 2.6.0 and forward, changing the default in the `akka.cluster.distributed-data` config block will be done before
In Akka 2.6.0 and forward, changing the default in the `pekko.cluster.distributed-data` config block will be done before
the variable in `reference.conf` is resolved, so that the same change only needs to be done once:
```ruby
akka.cluster.distributed-data.gossip-interval = 3s
pekko.cluster.distributed-data.gossip-interval = 3s
```
The following default settings in Akka use such substitution and may be affected if you are changing the right-hand
config path in your `application.conf`:
```ruby
akka.cluster.sharding.coordinator-singleton = ${akka.cluster.singleton}
akka.cluster.sharding.distributed-data = ${akka.cluster.distributed-data}
akka.cluster.singleton-proxy.singleton-name = ${akka.cluster.singleton.singleton-name}
akka.cluster.typed.receptionist.distributed-data = ${akka.cluster.distributed-data}
akka.remote.classic.netty.ssl = ${akka.remote.classic.netty.tcp}
akka.remote.artery.advanced.materializer = ${akka.stream.materializer}
pekko.cluster.sharding.coordinator-singleton = ${pekko.cluster.singleton}
pekko.cluster.sharding.distributed-data = ${pekko.cluster.distributed-data}
pekko.cluster.singleton-proxy.singleton-name = ${pekko.cluster.singleton.singleton-name}
pekko.cluster.typed.receptionist.distributed-data = ${pekko.cluster.distributed-data}
pekko.remote.classic.netty.ssl = ${pekko.remote.classic.netty.tcp}
pekko.remote.artery.advanced.materializer = ${pekko.stream.materializer}
```
@@ -695,32 +695,32 @@ used for individual streams when they are materialized.
| MaterializerSettings | Corresponding attribute | Config |
-------------------------|---------------------------------------------------|---------|
| `initialInputBufferSize` | `Attributes.inputBuffer(initial, max)` | `akka.stream.materializer.initial-input-buffer-size` |
| `maxInputBufferSize` | `Attributes.inputBuffer(initial, max)` | `akka.stream.materializer.max-input-buffer-size` |
| `dispatcher` | `ActorAttributes.dispatcher(name)` | `akka.stream.materializer.dispatcher` |
| `initialInputBufferSize` | `Attributes.inputBuffer(initial, max)` | `pekko.stream.materializer.initial-input-buffer-size` |
| `maxInputBufferSize` | `Attributes.inputBuffer(initial, max)` | `pekko.stream.materializer.max-input-buffer-size` |
| `dispatcher` | `ActorAttributes.dispatcher(name)` | `pekko.stream.materializer.dispatcher` |
| `supervisionDecider` | `ActorAttributes.supervisionStrategy` | na |
| `debugLogging` | `ActorAttributes.debugLogging` | `akka.stream.materializer.debug-logging` |
| `outputBurstLimit` | `ActorAttributes.outputBurstLimit` | `akka.stream.materializer.output-burst-limit` |
| `fuzzingMode` | `ActorAttributes.fuzzingMode` | `akka.stream.materializer.debug.fuzzing-mode` |
| `debugLogging` | `ActorAttributes.debugLogging` | `pekko.stream.materializer.debug-logging` |
| `outputBurstLimit` | `ActorAttributes.outputBurstLimit` | `pekko.stream.materializer.output-burst-limit` |
| `fuzzingMode` | `ActorAttributes.fuzzingMode` | `pekko.stream.materializer.debug.fuzzing-mode` |
| `autoFusing` | no longer used (since 2.5.0) | na |
| `maxFixedBufferSize` | `ActorAttributes.maxFixedBufferSize` | `akka.stream.materializer.max-fixed-buffer-size` |
| `syncProcessingLimit` | `ActorAttributes.syncProcessingLimit` | `akka.stream.materializer.sync-processing-limit` |
| `IOSettings.tcpWriteBufferSize` | `Tcp.writeBufferSize` | `akka.stream.materializer.io.tcp.write-buffer-size` |
| `blockingIoDispatcher` | na | `akka.stream.materializer.blocking-io-dispatcher` |
| `maxFixedBufferSize` | `ActorAttributes.maxFixedBufferSize` | `pekko.stream.materializer.max-fixed-buffer-size` |
| `syncProcessingLimit` | `ActorAttributes.syncProcessingLimit` | `pekko.stream.materializer.sync-processing-limit` |
| `IOSettings.tcpWriteBufferSize` | `Tcp.writeBufferSize` | `pekko.stream.materializer.io.tcp.write-buffer-size` |
| `blockingIoDispatcher` | na | `pekko.stream.materializer.blocking-io-dispatcher` |
| StreamRefSettings | Corresponding StreamRefAttributes | Config |
-----------------------------------|-----------------------------------|---------|
| `bufferCapacity` | `bufferCapacity` | `akka.stream.materializer.stream-ref.buffer-capacity` |
| `demandRedeliveryInterval` | `demandRedeliveryInterval` | `akka.stream.materializer.stream-ref.demand-redelivery-interval` |
| `subscriptionTimeout` | `subscriptionTimeout` | `akka.stream.materializer.stream-ref.subscription-timeout` |
| `finalTerminationSignalDeadline` | `finalTerminationSignalDeadline` | `akka.stream.materializer.stream-ref.final-termination-signal-deadline` |
| `bufferCapacity` | `bufferCapacity` | `pekko.stream.materializer.stream-ref.buffer-capacity` |
| `demandRedeliveryInterval` | `demandRedeliveryInterval` | `pekko.stream.materializer.stream-ref.demand-redelivery-interval` |
| `subscriptionTimeout` | `subscriptionTimeout` | `pekko.stream.materializer.stream-ref.subscription-timeout` |
| `finalTerminationSignalDeadline` | `finalTerminationSignalDeadline` | `pekko.stream.materializer.stream-ref.final-termination-signal-deadline` |
| SubscriptionTimeoutSettings | Corresponding ActorAttributes | Config |
-----------------------------------|---------------------------------------------|---------|
| `subscriptionTimeoutSettings.mode` | `streamSubscriptionTimeoutMode` | `akka.stream.materializer.subscription-timeout.mode` |
| `subscriptionTimeoutSettings.timeout` | `streamSubscriptionTimeout` | `akka.stream.materializer.subscription-timeout.timeout` |
| `subscriptionTimeoutSettings.mode` | `streamSubscriptionTimeoutMode` | `pekko.stream.materializer.subscription-timeout.mode` |
| `subscriptionTimeoutSettings.timeout` | `streamSubscriptionTimeout` | `pekko.stream.materializer.subscription-timeout.timeout` |
Setting attributes on individual streams can be done like so:
@@ -104,7 +104,7 @@ You can start using CBOR format already with Akka 2.6.5 without waiting for the
a rolling update to Akka 2.6.5 using default configuration. Then change the configuration to:
```
akka.actor {
pekko.actor {
serializers {
jackson-cbor = "org.apache.pekko.serialization.jackson.JacksonCborSerializer"
}
@@ -25,7 +25,7 @@ To use Artery Remoting, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group=com.typesafe.akka
artifact=akka-remote_$scala.binary.version$
version=AkkaVersion
@@ -49,7 +49,7 @@ To enable remote capabilities in your Akka project you should, at a minimum, add
to your `application.conf` file:
```
akka {
pekko {
actor {
# provider=remote is possible, but prefer cluster
provider = cluster
@@ -117,7 +117,7 @@ acts as a "server" to which arbitrary systems on the same network can connect to
## Selecting a transport
There are three alternatives of which underlying transport to use. It is configured by property
`akka.remote.artery.transport` with the possible values:
`pekko.remote.artery.transport` with the possible values:
* `tcp` - Based on @ref:[Akka Streams TCP](stream/stream-io.md#streaming-tcp) (the default if nothing else is configured)
* `tls-tcp` - Same as `tcp` with encryption using @ref:[Akka Streams TLS](stream/stream-io.md#tls)
@@ -277,7 +277,7 @@ In addition to what is described here, read the blog post about [Securing Akka c
SSL can be used as the remote transport by using the `tls-tcp` transport:
```
akka.remote.artery {
pekko.remote.artery {
transport = tls-tcp
}
```
@@ -285,7 +285,7 @@ akka.remote.artery {
Next the actual SSL/TLS parameters have to be configured:
```
akka.remote.artery {
pekko.remote.artery {
transport = tls-tcp
ssl.config-ssl-engine {
@@ -336,7 +336,7 @@ Note that if TLS is enabled with mutual authentication there is still a risk tha
valid certificate by compromising any node with certificates issued by the same internal PKI tree.
It's recommended that you enable hostname verification with
`akka.remote.artery.ssl.config-ssl-engine.hostname-verification=on`.
`pekko.remote.artery.ssl.config-ssl-engine.hostname-verification=on`.
When enabled it will verify that the destination hostname matches the hostname in the peer's certificate.
In deployments where hostnames are dynamic and not known up front it can make sense to leave the hostname verification off.
@@ -390,7 +390,7 @@ that system down. This is not always desired, and it can be disabled with the
following setting:
```
akka.remote.artery.untrusted-mode = on
pekko.remote.artery.untrusted-mode = on
```
This disallows sending of system messages (actor life-cycle commands,
@@ -417,7 +417,7 @@ permission to receive actor selection messages can be granted to specific actors
defined in configuration:
```
akka.remote.artery.trusted-selection-paths = ["/user/receptionist", "/user/namingService"]
pekko.remote.artery.trusted-selection-paths = ["/user/receptionist", "/user/namingService"]
```
@@ -530,7 +530,7 @@ phi = -log10(1 - F(timeSinceLastHeartbeat))
where F is the cumulative distribution function of a normal distribution with mean
and standard deviation estimated from historical heartbeat inter-arrival times.
In the @ref:[Remote Configuration](#remote-configuration-artery) you can adjust the `akka.remote.watch-failure-detector.threshold`
In the @ref:[Remote Configuration](#remote-configuration-artery) you can adjust the `pekko.remote.watch-failure-detector.threshold`
to define when a *phi* value is considered to be a failure.
A low `threshold` is prone to generate many false positives but ensures
@@ -555,7 +555,7 @@ a standard deviation of 100 ms.
To be able to survive sudden abnormalities, such as garbage collection pauses and
transient network failures, the failure detector is configured with a margin,
`akka.remote.watch-failure-detector.acceptable-heartbeat-pause`. You may want to
`pekko.remote.watch-failure-detector.acceptable-heartbeat-pause`. You may want to
adjust this in the @ref:[Remote Configuration](#remote-configuration-artery) depending on your environment.
This is how the curve looks for `acceptable-heartbeat-pause` configured to
3 seconds.
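As a sketch of the phi formula quoted above (not the Akka implementation), the value can be computed from the normal CDF; the mean and standard deviation here are illustrative stand-ins for the values the failure detector estimates from heartbeat history:

```python
import math

def phi(time_since_last_heartbeat_ms, mean_ms, std_dev_ms):
    """phi = -log10(1 - F(t)), where F is the normal CDF of heartbeat inter-arrival times."""
    y = (time_since_last_heartbeat_ms - mean_ms) / std_dev_ms
    f = 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))  # normal CDF
    return -math.log10(max(1.0 - f, 1e-300))        # clamp to avoid log10(0)

# With a mean of 1000 ms and a standard deviation of 100 ms (as in the text),
# phi stays low around the expected arrival time and climbs fast once overdue:
for t in (1000, 1200, 1400):
    print(t, round(phi(t, 1000, 100), 2))
```

A `threshold` of, say, 8 or 12 then simply marks the point on this curve at which a missing heartbeat is declared a failure.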
@@ -679,7 +679,7 @@ arrive in send order. It is possible to assign actors on given paths to use this
path patterns that have to be specified in the actor system's configuration on both the sending and the receiving side:
```
akka.remote.artery.large-message-destinations = [
pekko.remote.artery.large-message-destinations = [
"/user/largeMessageActor",
"/user/largeMessagesGroup/*",
"/user/anotherGroup/*/largeMessages",
@@ -706,7 +706,7 @@ To notice large messages you can enable logging of message types with payload si
configured `log-frame-size-exceeding`.
```
akka.remote.artery {
pekko.remote.artery {
log-frame-size-exceeding = 10000b
}
```
@@ -780,7 +780,7 @@ aeron.threading.mode=SHARED_NETWORK
#aeron.sender.idle.strategy=org.agrona.concurrent.BusySpinIdleStrategy
#aeron.receiver.idle.strategy=org.agrona.concurrent.BusySpinIdleStrategy
# use the same directory as in the akka.remote.artery.advanced.aeron-dir config
# use the same directory as in the pekko.remote.artery.advanced.aeron-dir config
# of the Akka application
aeron.dir=/dev/shm/aeron
```
@@ -791,7 +791,7 @@ To use the external media driver from the Akka application you need to define th
configuration properties:
```
akka.remote.artery.advanced.aeron {
pekko.remote.artery.advanced.aeron {
embedded-media-driver = off
aeron-dir = /dev/shm/aeron
}
@@ -817,7 +817,7 @@ usage and latency with the following configuration:
```
# Values can be from 1 to 10, where 10 strongly prefers low latency
# and 1 strongly prefers less CPU usage
akka.remote.artery.advanced.aeron.idle-cpu-level = 1
pekko.remote.artery.advanced.aeron.idle-cpu-level = 1
```
By setting this value to a lower number, it tells Akka to do longer "sleeping" periods on its thread dedicated
@@ -851,7 +851,7 @@ host name and port pair that is used to connect to the system from the outside.
special configuration that sets both the logical and the bind pairs for remoting.
```
akka {
pekko {
remote {
artery {
canonical.hostname = my.domain.com # external (logical) hostname
@@ -902,7 +902,7 @@ Any space used in the mount will count towards your container's memory usage.
### Flight Recorder
When running on JDK 11 Artery specific flight recording is available through the [Java Flight Recorder (JFR)](https://openjdk.java.net/jeps/328).
The flight recorder is automatically enabled by detecting JDK 11 but can be disabled if needed by setting `akka.java-flight-recorder.enabled = false`.
The flight recorder is automatically enabled by detecting JDK 11 but can be disabled if needed by setting `pekko.java-flight-recorder.enabled = false`.
Low overhead Artery specific events are emitted by default when JFR is enabled; higher overhead events need a custom settings template and are not enabled automatically with the `profiling` JFR template.
To enable those, create a copy of the `profiling` template and enable all `Akka` sub category events, for example through the JMC GUI.
@@ -27,7 +27,7 @@ To use Akka Remoting, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group=com.typesafe.akka
artifact=akka-remote_$scala.binary.version$
version=AkkaVersion
@@ -50,14 +50,14 @@ To enable classic remoting in your Akka project you should, at a minimum, add th
to your `application.conf` file:
```
akka {
pekko {
actor {
# provider=remote is possible, but prefer cluster
provider = cluster
}
remote.artery.enabled = false
remote.classic {
enabled-transports = ["akka.remote.classic.netty.tcp"]
enabled-transports = ["pekko.remote.classic.netty.tcp"]
netty.tcp {
hostname = "127.0.0.1"
port = 2552
@@ -186,7 +186,7 @@ If you want to use the creation functionality in Akka remoting you have to furth
`application.conf` file in the following way (only showing deployment section):
```
akka {
pekko {
actor {
deployment {
/sampleActor {
@@ -226,7 +226,7 @@ companion object of the actors class]@java[make a static
inner class which implements `Creator<T extends Actor>`].
Serializability of all Props can be tested by setting the configuration item
`akka.actor.serialize-creators=on`. Only Props whose `deploy` has
`pekko.actor.serialize-creators=on`. Only Props whose `deploy` has
`LocalScope` are exempt from this check.
@@@
@@ -282,7 +282,7 @@ is *not* remote code loading, the Actors class to be deployed onto a remote syst
remote system. This still however may pose a security risk, and one may want to restrict remote deployment to
only a specific set of known actors by enabling the allow list feature.
To enable remote deployment allow list set the `akka.remote.deployment.enable-allow-list` value to `on`.
To enable remote deployment allow list set the `pekko.remote.deployment.enable-allow-list` value to `on`.
The list of allowed classes has to be configured on the "remote" system, in other words on the system onto which
others will be attempting to remote deploy Actors. That system, locally, knows best which Actors it should or
should not allow others to remote deploy onto it. The full settings section may for example look like this:
@@ -302,7 +302,7 @@ is attempted to be sent to the remote system or an inbound connection is accepte
When a communication failure happens and the connection is lost between the two systems the link becomes `Gated`.
In this state the system will not attempt to connect to the remote host and all outbound messages will be dropped. The time
while the link is in the `Gated` state is controlled by the setting `akka.remote.retry-gate-closed-for`:
while the link is in the `Gated` state is controlled by the setting `pekko.remote.retry-gate-closed-for`:
after this time elapses the link state transitions to `Idle` again. `Gate` is one-sided in the
sense that whenever a successful *inbound* connection is accepted from a remote system during `Gate` it automatically
transitions to `Active` and communication resumes immediately.
@@ -329,14 +329,14 @@ Remoting uses the `org.apache.pekko.remote.PhiAccrualFailureDetector` failure de
implementing the `org.apache.pekko.remote.FailureDetector` and configuring it:
```
akka.remote.watch-failure-detector.implementation-class = "com.example.CustomFailureDetector"
pekko.remote.watch-failure-detector.implementation-class = "com.example.CustomFailureDetector"
```
In the @ref:[Remote Configuration](#remote-configuration) you may want to adjust these
depending on your environment:
* When a *phi* value is considered to be a failure `akka.remote.watch-failure-detector.threshold`
* Margin of error for sudden abnormalities `akka.remote.watch-failure-detector.acceptable-heartbeat-pause`
* When a *phi* value is considered to be a failure `pekko.remote.watch-failure-detector.threshold`
* Margin of error for sudden abnormalities `pekko.remote.watch-failure-detector.acceptable-heartbeat-pause`
## Serialization
@@ -397,7 +397,7 @@ finished.
@@@ note
In order to disable the logging, set
`akka.remote.classic.log-remote-lifecycle-events = off` in your
`pekko.remote.classic.log-remote-lifecycle-events = off` in your
`application.conf`.
@@@
@@ -439,13 +439,13 @@ That is also security best-practice because of its multiple
<a id="remote-tls"></a>
### Configuring SSL/TLS for Akka Remoting
SSL can be used as the remote transport by adding `akka.remote.classic.netty.ssl` to the `enabled-transports` configuration section.
SSL can be used as the remote transport by adding `pekko.remote.classic.netty.ssl` to the `enabled-transports` configuration section.
An example of setting up the default Netty based SSL driver as default:
```
akka {
pekko {
remote.classic {
enabled-transports = [akka.remote.classic.netty.ssl]
enabled-transports = [pekko.remote.classic.netty.ssl]
}
}
```
@@ -453,7 +453,7 @@ akka {
Next the actual SSL/TLS parameters have to be configured:
```
akka {
pekko {
remote.classic {
netty.ssl {
hostname = "127.0.0.1"
@@ -528,7 +528,7 @@ that system down. This is not always desired, and it can be disabled with the
following setting:
```
akka.remote.classic.untrusted-mode = on
pekko.remote.classic.untrusted-mode = on
```
This disallows sending of system messages (actor life-cycle commands,
@@ -555,7 +555,7 @@ permission to receive actor selection messages can be granted to specific actors
defined in configuration:
```
akka.remote.classic.trusted-selection-paths = ["/user/receptionist", "/user/namingService"]
pekko.remote.classic.trusted-selection-paths = ["/user/receptionist", "/user/namingService"]
```
The actual message must still not be of type `PossiblyHarmful`.
@@ -608,7 +608,7 @@ host name and port pair that is used to connect to the system from the outside.
special configuration that sets both the logical and the bind pairs for remoting.
```
akka.remote.classic.netty.tcp {
pekko.remote.classic.netty.tcp {
hostname = my.domain.com # external (logical) hostname
port = 8000 # external (logical) port
@@ -10,7 +10,7 @@ To use Routing, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary.version$"
version=AkkaVersion
@@ -823,7 +823,7 @@ Scala
Java
: @@snip [RouterDocTest.java](/docs/src/test/java/jdocs/routing/RouterDocTest.java) { #resize-pool-1 }
Several more configuration options are available and described in `akka.actor.deployment.default.resizer`
Several more configuration options are available and described in `pekko.actor.deployment.default.resizer`
section of the reference @ref:[configuration](general/configuration.md).
Pool with resizer defined in code:
@@ -873,7 +873,7 @@ Scala
Java
: @@snip [RouterDocTest.java](/docs/src/test/java/jdocs/routing/RouterDocTest.java) { #optimal-size-exploring-resize-pool }
Several more configuration options are available and described in `akka.actor.deployment.default.optimal-size-exploring-resizer`
Several more configuration options are available and described in `pekko.actor.deployment.default.optimal-size-exploring-resizer`
section of the reference @ref:[configuration](general/configuration.md).
@@@ note
@@ -13,7 +13,7 @@ To use Scheduler, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary.version$"
version=AkkaVersion
@ -55,7 +55,7 @@ The default implementation of @apidoc[actor.Scheduler] used by Akka is based on
buckets which are emptied according to a fixed schedule. It does not
execute tasks at the exact time, but on every tick, it will run everything
that is (over)due. The accuracy of the default Scheduler can be modified
by the `akka.scheduler.tick-duration` configuration property.
by the `pekko.scheduler.tick-duration` configuration property.
@@@
@ -156,7 +156,7 @@ which may in worst case cause undesired load on the system. `scheduleWithFixedDe
The actual scheduler implementation is loaded reflectively upon
@apidoc[actor.ActorSystem] start-up, which means that it is possible to provide a
different one using the `akka.scheduler.implementation` configuration
different one using the `pekko.scheduler.implementation` configuration
property. The referenced class must implement the @scala[@apidoc[actor.Scheduler]]@java[@apidoc[actor.AbstractScheduler]]
interface.
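As a sketch, both scheduler-related settings can be tuned together in `application.conf` (the values are illustrative and the implementation class is a hypothetical placeholder):

```
pekko.scheduler {
  # smaller ticks improve timer accuracy at the cost of more wake-ups
  tick-duration = 33ms
  # hypothetical custom implementation; must implement the Scheduler interface
  implementation = "com.example.MyScheduler"
}
```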

View file

@ -10,7 +10,7 @@ An attacker that can connect to an `ActorSystem` exposed via Akka Remote over TC
capabilities in the context of the JVM process that runs the ActorSystem if:
* `JavaSerializer` is enabled (default in Akka 2.4.x)
* and TLS is disabled *or* TLS is enabled with `akka.remote.netty.ssl.security.require-mutual-authentication = false`
* and TLS is disabled *or* TLS is enabled with `pekko.remote.netty.ssl.security.require-mutual-authentication = false`
(which is still the default in Akka 2.4.x)
* or if TLS is enabled with mutual authentication and the authentication keys of a host that is allowed to connect have been compromised, an attacker gained access to a valid certificate (e.g. by compromising a node with certificates issued by the same internal PKI tree to get access of the certificate)
* regardless of whether `untrusted` mode is enabled or not

View file

@ -28,8 +28,8 @@ configuration of the TLS random number generator should be used:
```
# Set `SecureRandom` RNG explicitly (but it is also the default)
akka.remote.classic.netty.ssl.random-number-generator = "SecureRandom"
akka.remote.artery.ssl.config-ssl-engine.random-number-generator = "SecureRandom"
pekko.remote.classic.netty.ssl.random-number-generator = "SecureRandom"
pekko.remote.artery.ssl.config-ssl-engine.random-number-generator = "SecureRandom"
```
Please subscribe to the [akka-security](https://groups.google.com/forum/#!forum/akka-security) mailing list to be notified promptly about future security issues.
@ -53,10 +53,10 @@ Rationale for the score:
* Akka *2.5.0 - 2.5.15* with any of the following configuration properties defined:
```
akka.remote.netty.ssl.random-number-generator = "AES128CounterSecureRNG"
akka.remote.netty.ssl.random-number-generator = "AES256CounterSecureRNG"
akka.remote.artery.ssl.config-ssl-engine.random-number-generator = "AES128CounterSecureRNG"
akka.remote.artery.ssl.config-ssl-engine.random-number-generator = "AES256CounterSecureRNG"
pekko.remote.netty.ssl.random-number-generator = "AES128CounterSecureRNG"
pekko.remote.netty.ssl.random-number-generator = "AES256CounterSecureRNG"
pekko.remote.artery.ssl.config-ssl-engine.random-number-generator = "AES128CounterSecureRNG"
pekko.remote.artery.ssl.config-ssl-engine.random-number-generator = "AES256CounterSecureRNG"
```
Akka *2.4.x* versions are not affected by this particular bug. It has reached

View file

@ -11,7 +11,7 @@ The mailing list is very low traffic, and receives notifications only after secu
We strongly encourage people to report such problems to our private security mailing list first, before disclosing them in a public forum.
Following best practice, we strongly encourage anyone to report potential security
vulnerabilities to [security@akka.io](mailto:security@akka.io) before disclosing them in a public forum like the mailing list or as a GitHub issue.
vulnerabilities to [security@pekko.io](mailto:security@pekko.io) before disclosing them in a public forum like the mailing list or as a GitHub issue.
Reports to this email address will be handled by our security team, who will work together with you
to ensure that a fix can be provided without delay.

View file

@ -12,7 +12,7 @@ To use Serialization, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary.version$"
version=AkkaVersion

View file

@ -10,7 +10,7 @@ To use Jackson Serialization, you must add the following dependency in your proj
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-serialization-jackson_$scala.binary.version$"
version=AkkaVersion
@ -436,7 +436,7 @@ The following Jackson modules are enabled by default:
@@snip [reference.conf](/akka-serialization-jackson/src/main/resources/reference.conf) { #jackson-modules }
You can amend the configuration `akka.serialization.jackson.jackson-modules` to enable other modules.
You can amend the configuration `pekko.serialization.jackson.jackson-modules` to enable other modules.
The [ParameterNamesModule](https://github.com/FasterXML/jackson-modules-java8/tree/master/parameter-names) requires that the `-parameters`
Java compiler option is enabled.
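For example, an extra module can be appended to the default list (the module shown is from `jackson-modules-java8`; treat the exact class name as an assumption):

```
# append an additional Jackson module to the defaults
pekko.serialization.jackson.jackson-modules += "com.fasterxml.jackson.module.paramnames.ParameterNamesModule"
```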
@ -478,8 +478,8 @@ The type will be embedded as an object with the fields:
### Configuration per binding
By default the configuration for the Jackson serializers and their @javadoc[ObjectMapper](com.fasterxml.jackson.databind.ObjectMapper)s is defined in
the `akka.serialization.jackson` section. It is possible to override that configuration in a more
specific `akka.serialization.jackson.<binding name>` section.
the `pekko.serialization.jackson` section. It is possible to override that configuration in a more
specific `pekko.serialization.jackson.<binding name>` section.
@@snip [config](/akka-serialization-jackson/src/test/scala/doc/org/apache/pekko/serialization/jackson/SerializationDocSpec.scala) { #specific-config }

View file

@ -10,7 +10,7 @@ To use Serialization, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary.version$"
version=AkkaVersion
@ -36,13 +36,13 @@ Akka itself uses Protocol Buffers to serialize internal messages (for example cl
### Configuration
For Akka to know which `Serializer` to use for what, you need to edit your configuration:
in the `akka.actor.serializers`-section, you bind names to implementations of the @apidoc[serialization.Serializer](Serializer)
in the `pekko.actor.serializers`-section, you bind names to implementations of the @apidoc[serialization.Serializer](Serializer)
you wish to use, like this:
@@snip [SerializationDocSpec.scala](/docs/src/test/scala/docs/serialization/SerializationDocSpec.scala) { #serialize-serializers-config }
After you've bound names to different implementations of `Serializer` you need to wire which classes
should be serialized using which `Serializer`, this is done in the `akka.actor.serialization-bindings`-section:
should be serialized using which `Serializer`; this is done in the `pekko.actor.serialization-bindings` section:
@@snip [SerializationDocSpec.scala](/docs/src/test/scala/docs/serialization/SerializationDocSpec.scala) { #serialization-bindings-config }
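As a minimal sketch (the class and binding names are hypothetical), the two sections fit together like this:

```
pekko.actor {
  serializers {
    # bind a short name to a Serializer implementation
    myjson = "com.example.MyJsonSerializer"
  }
  serialization-bindings {
    # serialize this message class (and its subclasses) with that serializer
    "com.example.MyMessage" = myjson
  }
}
```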
@ -239,13 +239,13 @@ However, for early prototyping it is very convenient to use. For that reason and
older systems that rely on Java serialization it can be enabled with the following configuration:
```ruby
akka.actor.allow-java-serialization = on
pekko.actor.allow-java-serialization = on
```
Akka will still log a warning when Java serialization is used; to silence it you may add:
```ruby
akka.actor.warn-about-java-serializer-usage = off
pekko.actor.warn-about-java-serializer-usage = off
```
### Java serialization compatibility
@ -263,14 +263,14 @@ The message class (the bindings) is not used for deserialization. The manifest i
That means that it is possible to change serialization for a message by performing two rolling update steps to
switch to the new serializer.
1. Add the @scala[`Serializer`]@java[`JSerializer`] class and define it in `akka.actor.serializers` config section, but not in
`akka.actor.serialization-bindings`. Perform a rolling update for this change. This means that the
1. Add the @scala[`Serializer`]@java[`JSerializer`] class and define it in `pekko.actor.serializers` config section, but not in
`pekko.actor.serialization-bindings`. Perform a rolling update for this change. This means that the
serializer class exists on all nodes and is registered, but it is still not used for serializing any
messages. That is important because during the rolling update the old nodes still don't know about
the new serializer and would not be able to deserialize messages with that format.
1. The second change is to register that the serializer is to be used for certain classes by defining
those in the `akka.actor.serialization-bindings` config section. Perform a rolling update for this
those in the `pekko.actor.serialization-bindings` config section. Perform a rolling update for this
change. This means that new nodes will use the new serializer when sending messages and old nodes will
be able to deserialize the new format. Old nodes will continue to use the old serializer when sending
messages and new nodes will be able to deserialize the old format.
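The two rolling update steps above can be sketched as configuration changes (all names are hypothetical):

```
# Step 1: register the serializer only, then perform a rolling update
pekko.actor.serializers {
  mynew = "com.example.MyNewSerializer"
}

# Step 2: in a second rolling update, also bind the message classes to it
pekko.actor.serialization-bindings {
  "com.example.MyMessage" = mynew
}
```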
@ -286,7 +286,7 @@ Normally, messages sent between local actors (i.e. same JVM) do not undergo seri
Certain messages can be excluded from verification by extending the marker @scala[trait]@java[interface]
@apidoc[actor.NoSerializationVerificationNeeded](NoSerializationVerificationNeeded) or define a class name prefix in configuration
`akka.actor.no-serialization-verification-needed-class-prefix`.
`pekko.actor.no-serialization-verification-needed-class-prefix`.
If you want to verify that your @apidoc[actor.Props] are serializable you can enable the following config option:

View file

@ -18,7 +18,7 @@ dependency included. Otherwise, add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group=com.typesafe.akka
artifact=akka-cluster_$scala.binary.version$
version=AkkaVersion
@ -32,7 +32,7 @@ You need to enable the Split Brain Resolver by configuring it as downing provide
the `ActorSystem` (`application.conf`):
```
akka.cluster.downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
pekko.cluster.downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
```
You should also consider the different available @ref:[downing strategies](#strategies).
@ -118,7 +118,7 @@ When there is uncertainty it selects to down more nodes than necessary, or even
Therefore Split Brain Resolver should always be combined with a mechanism to automatically start up nodes that
have been shut down, and join them to the existing cluster or form a new cluster again.
You enable a strategy with the configuration property `akka.cluster.split-brain-resolver.active-strategy`.
You enable a strategy with the configuration property `pekko.cluster.split-brain-resolver.active-strategy`.
### Stable after
@ -129,7 +129,7 @@ while there are unreachable nodes. Joining nodes are not counted in the logic of
@@snip [reference.conf](/akka-cluster/src/main/resources/reference.conf) { #split-brain-resolver }
Set `akka.cluster.split-brain-resolver.stable-after` to a shorter duration to have quicker removal of crashed nodes,
Set `pekko.cluster.split-brain-resolver.stable-after` to a shorter duration to have quicker removal of crashed nodes,
at the price of risking too early action on transient network partitions that otherwise would have healed. Do not
set this to a shorter duration than the membership dissemination time in the cluster, which depends
on the cluster size. Recommended minimum duration for different cluster sizes:
@ -161,7 +161,7 @@ That is handled by @ref:[Coordinated Shutdown](coordinated-shutdown.md)
but to exit the JVM it's recommended that you enable:
```
akka.coordinated-shutdown.exit-jvm = on
pekko.coordinated-shutdown.exit-jvm = on
```
@@@ note
@ -207,7 +207,7 @@ it means shutting down more worker nodes.
Configuration:
```
akka.cluster.split-brain-resolver.active-strategy=keep-majority
pekko.cluster.split-brain-resolver.active-strategy=keep-majority
```
@@snip [reference.conf](/akka-cluster/src/main/resources/reference.conf) { #keep-majority }
@ -231,7 +231,7 @@ Therefore it is important that you join new nodes when old nodes have been remov
Another consequence of this is that if there are unreachable nodes when starting up the cluster,
before reaching this limit, the cluster may shut itself down immediately. This is not an issue
if you start all nodes at approximately the same time or use the `akka.cluster.min-nr-of-members`
if you start all nodes at approximately the same time or use the `pekko.cluster.min-nr-of-members`
to define the required number of members before the leader changes the status of 'Joining' members to 'Up'.
You can tune the timeout after which downing decisions are made using the `stable-after` setting.
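Taken together, a `keep-majority` setup with a tuned decision timeout might look like this (the values are illustrative, not recommendations):

```
pekko.cluster.split-brain-resolver {
  active-strategy = keep-majority
  # shorter = faster failover, but higher risk of acting on transient partitions
  stable-after = 20s
}
# avoid downing decisions before the cluster has fully formed
pekko.cluster.min-nr-of-members = 3
```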
@ -273,7 +273,7 @@ in the cluster, as described above.
Configuration:
```
akka.cluster.split-brain-resolver.active-strategy=static-quorum
pekko.cluster.split-brain-resolver.active-strategy=static-quorum
```
@@snip [reference.conf](/akka-cluster/src/main/resources/reference.conf) { #static-quorum }
@ -312,7 +312,7 @@ i.e. using the oldest member (singleton) within the nodes with that role.
Configuration:
```
akka.cluster.split-brain-resolver.active-strategy=keep-oldest
pekko.cluster.split-brain-resolver.active-strategy=keep-oldest
```
@@snip [reference.conf](/akka-cluster/src/main/resources/reference.conf) { #keep-oldest }
@ -361,13 +361,13 @@ on another side of a network partition, and then all nodes will be downed.
Configuration:
```
akka {
pekko {
cluster {
downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
split-brain-resolver {
active-strategy = "lease-majority"
lease-majority {
lease-implementation = "akka.coordination.lease.kubernetes"
lease-implementation = "pekko.coordination.lease.kubernetes"
}
}
}
@ -411,7 +411,7 @@ continue after the `stable-after` or it can be set to `off` to disable this feat
```
akka.cluster.split-brain-resolver {
pekko.cluster.split-brain-resolver {
down-all-when-unstable = 15s
stable-after = 20s
}
@ -453,9 +453,9 @@ You would like to configure this to a short duration to have quick failover, but
risk of having multiple singleton/sharded instances running at the same time and it may take a different
amount of time to act on the decision (dissemination of the down/removal). The duration is by default
the same as the `stable-after` property (see @ref:[Stable after](#stable-after) above). It is recommended to
leave this value as is, but it can also be separately overriden with the `akka.cluster.down-removal-margin` property.
leave this value as is, but it can also be separately overridden with the `pekko.cluster.down-removal-margin` property.
Another concern for setting this `stable-after`/`akka.cluster.down-removal-margin` is dealing with JVM pauses e.g.
Another concern for setting this `stable-after`/`pekko.cluster.down-removal-margin` is dealing with JVM pauses, e.g.
garbage collection. When a node is unresponsive it is not known whether it is due to a pause, overload, a crash or a
network partition. If it is a pause that lasts longer than `stable-after` * 2, it gives time for SBR to down the node
and for singletons and shards to be started on other nodes. When the node un-pauses there will be a short time before

View file

@ -7,7 +7,7 @@ To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion
@ -240,4 +240,4 @@ A sink that will publish emitted messages to a @apidoc[actor.typed.pubsub.Topic$
@@@ note
See also: @ref[ActorSink.actorRefWithBackpressure operator reference docs](operators/PubSub/sink.md)
@@@
@@@

View file

@ -7,7 +7,7 @@ To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion

View file

@ -10,7 +10,7 @@ To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion

View file

@ -11,7 +11,7 @@ This operator is included in:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream-typed_$scala.binary.version$"
version=AkkaVersion

View file

@ -11,7 +11,7 @@ This operator is included in:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream-typed_$scala.binary.version$"
version=AkkaVersion

View file

@ -10,7 +10,7 @@ This operator is included in:
@@dependency[sbt,Maven,Gradle] {
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream-typed_$scala.binary.version$"
version=AkkaVersion

View file

@ -11,7 +11,7 @@ This operator is included in:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream-typed_$scala.binary.version$"
version=AkkaVersion

View file

@ -11,7 +11,7 @@ This operator is included in:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream-typed_$scala.binary.version$"
version=AkkaVersion

View file

@ -11,7 +11,7 @@ This operator is included in:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream-typed_$scala.binary.version$"
version=AkkaVersion

View file

@ -11,7 +11,7 @@ This operator is included in:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream-typed_$scala.binary.version$"
version=AkkaVersion

View file

@ -11,7 +11,7 @@ This operator is included in:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream-typed_$scala.binary.version$"
version=AkkaVersion

View file

@ -43,7 +43,7 @@ Java
: @@snip [FromSinkAndSource.java](/docs/src/test/java/jdocs/stream/operators/flow/FromSinkAndSource.java) { #chat }
The same patterns can also be applied to @extref:[Akka HTTP WebSockets](akka.http:/server-side/websocket-support.html#server-api) which also have an API accepting a `Flow` of messages.
The same patterns can also be applied to @extref:[Akka HTTP WebSockets](akka.http:/server-side/websocket-support.html#server-api) which also have an API accepting a `Flow` of messages.
If we replaced `fromSinkAndSource` here with `fromSinkAndSourceCoupled`, it would allow the client to close the connection by closing its outgoing stream.

View file

@ -16,7 +16,7 @@ This operator is included in:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream-typed_$scala.binary.version$"
version=AkkaVersion
@ -34,4 +34,4 @@ version=AkkaVersion
**backpressures** never
@@@
@@@

View file

@ -19,7 +19,7 @@ This operator is included in:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream-typed_$scala.binary.version$"
version=AkkaVersion
@ -37,4 +37,4 @@ version=AkkaVersion
**completes** when the topic actor terminates
@@@
@@@

View file

@ -9,7 +9,7 @@ Emit each integer in a range, with an option to take bigger steps than 1.
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion

View file

@ -16,7 +16,7 @@ underlying `java.io.InputStream` returns on each read invocation. Such chunks wi
than `chunkSize` though.
You can configure the default dispatcher for this Source by changing
the `akka.stream.materializer.blocking-io-dispatcher` or set it for a given Source by
the `pekko.stream.materializer.blocking-io-dispatcher` or set it for a given Source by
using `org.apache.pekko.stream.ActorAttributes`.
It materializes a @java[`CompletionStage`]@scala[`Future`] of `IOResult` containing the number of bytes read from the source file

View file

@ -89,7 +89,7 @@ These built-in sinks are available from @scala[`org.apache.pekko.stream.scaladsl
Sources and sinks for integrating with `java.io.InputStream` and `java.io.OutputStream` can be found on
`StreamConverters`. As they are blocking APIs the implementations of these operators are run on a separate
dispatcher configured through the `akka.stream.blocking-io-dispatcher`.
dispatcher configured through the `pekko.stream.blocking-io-dispatcher`.
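A sketch of pointing that setting at a custom dispatcher (the dispatcher definition below is an assumed example, not a recommended sizing):

```
# route blocking stream IO onto a dedicated thread pool
pekko.stream.blocking-io-dispatcher = "my-blocking-dispatcher"

my-blocking-dispatcher {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor.fixed-pool-size = 16
}
```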
@@@ warning

View file

@ -7,7 +7,7 @@ To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion

View file

@ -7,7 +7,7 @@ To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion

View file

@ -7,7 +7,7 @@ To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion

View file

@ -7,7 +7,7 @@ To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion

View file

@ -7,7 +7,7 @@ To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion

View file

@ -7,7 +7,7 @@ To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion

View file

@ -7,7 +7,7 @@ To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion

View file

@ -7,7 +7,7 @@ To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion

View file

@ -7,7 +7,7 @@ To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion
@ -176,7 +176,7 @@ Java
Please note that these operators are backed by Actors and by default are configured to run on a pre-configured
threadpool-backed dispatcher dedicated for File IO. This is very important as it isolates the blocking file IO operations from the rest
of the ActorSystem allowing each dispatcher to be utilised in the most efficient way. If you want to configure a custom
dispatcher for file IO operations globally, you can do so by changing the `akka.stream.materializer.blocking-io-dispatcher`,
dispatcher for file IO operations globally, you can do so by changing the `pekko.stream.materializer.blocking-io-dispatcher`,
or for a specific operator by specifying a custom Dispatcher in code, like this:
Scala

View file

@ -7,7 +7,7 @@ To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion

View file

@ -7,7 +7,7 @@ To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion

View file

@ -7,7 +7,7 @@ To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion
@ -75,7 +75,7 @@ and increase them only to a level suitable for the throughput requirements of th
can be set through configuration:
```
akka.stream.materializer.max-input-buffer-size = 16
pekko.stream.materializer.max-input-buffer-size = 16
```
Alternatively they can be set per stream by adding an attribute to the complete `RunnableGraph` or on smaller segments

View file

@ -7,7 +7,7 @@ To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion
@ -198,7 +198,7 @@ timeout has triggered, materialization of the target side will fail, pointing ou
Since these timeouts are often very different based on the kind of stream offered, and there can be
many different kinds of them in the same application, it is possible to not only configure this setting
globally (`akka.stream.materializer.stream-ref.subscription-timeout`), but also via attributes:
globally (`pekko.stream.materializer.stream-ref.subscription-timeout`), but also via attributes:
Scala
: @@snip [FlowStreamRefsDocSpec.scala](/docs/src/test/scala/docs/stream/FlowStreamRefsDocSpec.scala) { #attr-sub-timeout }
@ -209,6 +209,6 @@ Java
### General configuration
Other settings can be set globally in your `application.conf`, by overriding any of the following values
in the `akka.stream.materializer.stream-ref.*` keyspace:
in the `pekko.stream.materializer.stream-ref.*` keyspace:
@@snip [reference.conf](/akka-stream/src/main/resources/reference.conf) { #stream-ref }

View file

@ -7,7 +7,7 @@ To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary.version$"
version=AkkaVersion

View file

@ -7,7 +7,7 @@ To use Akka Stream TestKit, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-stream-testkit_$scala.binary.version$"
version=AkkaVersion
@ -152,7 +152,7 @@ more aggressively (at the cost of reduced performance) and therefore helps expos
enable this setting add the following line to your configuration:
```
akka.stream.materializer.debug.fuzzing-mode = on
pekko.stream.materializer.debug.fuzzing-mode = on
```
@@@ warning

View file

@ -77,7 +77,7 @@ user-created actors, the guardian named `"/user"`. Actors created using
guardian terminates, all normal actors in the system will be shut down, too. It
also means that this guardian's supervisor strategy determines how the
top-level normal actors are supervised. Since Akka 2.1 it is possible to
configure this using the setting `akka.actor.guardian-supervisor-strategy`,
configure this using the setting `pekko.actor.guardian-supervisor-strategy`,
which takes the fully-qualified class-name of a
`SupervisorStrategyConfigurator`. When the guardian escalates a failure,
the root guardian's response will be to terminate the guardian, which in effect
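A sketch of wiring in a custom strategy via `application.conf` (the configurator class name is hypothetical):

```
pekko.actor.guardian-supervisor-strategy = "com.example.MyGuardianStrategyConfigurator"
```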

View file

@ -10,7 +10,7 @@ To use Akka Testkit, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group="com.typesafe.akka"
artifact="akka-testkit_$scala.binary.version$"
version=AkkaVersion
@ -87,7 +87,7 @@ Java
In these examples, the maximum durations you will find mentioned below are left
out, in which case they use the default value from the configuration item
`akka.test.single-expect-default` which itself defaults to 3 seconds (or they
`pekko.test.single-expect-default` which itself defaults to 3 seconds (or they
obey the innermost enclosing `Within` as detailed @ref:[below](#testkit-within)). The full signatures are:
* @scala[`expectMsg[T](d: Duration, msg: T): T`]@java[`public <T> T expectMsgEquals(Duration max, T msg)`]
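The default can be raised globally in the test configuration, for example (value illustrative):

```
pekko.test.single-expect-default = 5s
```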
@ -233,7 +233,7 @@ Java
If the number of occurrences is specific—as demonstrated above—then `intercept`
will block until that number of matching messages have been received or the
timeout configured in `akka.test.filter-leeway` is used up (time starts
timeout configured in `pekko.test.filter-leeway` is used up (time starts
counting after the passed-in block of code returns). In case of a timeout the
test fails.
@ -244,7 +244,7 @@ Be sure to exchange the default logger with the
function:
```
akka.loggers = [org.apache.pekko.testkit.TestEventListener]
pekko.loggers = [org.apache.pekko.testkit.TestEventListener]
```
@@@
@ -321,10 +321,10 @@ The tight timeouts you use during testing on your lightning-fast notebook will
invariably lead to spurious test failures on the heavily loaded Jenkins server
(or similar). To account for this situation, all maximum durations are
internally scaled by a factor taken from the @ref:[Configuration](general/configuration-reference.md#config-akka-testkit),
`akka.test.timefactor`, which defaults to 1.
`pekko.test.timefactor`, which defaults to 1.
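For example, to triple all maximum durations on a heavily loaded build machine (value illustrative):

```
pekko.test.timefactor = 3.0
```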
You can scale other durations with the same factor by using the @scala[implicit conversion
in `akka.testkit` package object to add dilated function to `Duration`]@java[`dilated` method in `TestKit`].
in the `pekko.testkit` package object, which adds a `dilated` method to `Duration`]@java[`dilated` method in `TestKit`].
Scala
: @@snip [TestkitDocSpec.scala](/docs/src/test/scala/docs/testkit/TestkitDocSpec.scala) { #duration-dilation }
@ -715,7 +715,7 @@ options:
@@@ div { .group-scala }
* *Logging of message invocations on certain actors*
This is enabled by a setting in the @ref:[Configuration](general/configuration-reference.md#config-akka-actor) — namely
`akka.actor.debug.receive` — which enables the `loggable`
`pekko.actor.debug.receive` — which enables the `loggable`
statement to be applied to an actor's `receive` function:
@@snip [TestkitDocSpec.scala](/docs/src/test/scala/docs/testkit/TestkitDocSpec.scala) { #logging-receive }
@ -733,18 +733,18 @@ would lead to endless loops if it were applied to event bus logger listeners.
* *Logging of special messages*
Actors handle certain special messages automatically, e.g. `Kill`,
`PoisonPill`, etc. Tracing of these message invocations is enabled by
the setting `akka.actor.debug.autoreceive`, which enables this on all
the setting `pekko.actor.debug.autoreceive`, which enables this on all
actors.
* *Logging of the actor lifecycle*
Actor creation, start, restart, monitor start, monitor stop and stop may be traced by
enabling the setting `akka.actor.debug.lifecycle`; this, too, is enabled
enabling the setting `pekko.actor.debug.lifecycle`; this, too, is enabled
uniformly on all actors.
Logging of these messages is at `DEBUG` level. To summarize, you can enable
full logging of actor activities using this configuration fragment:
```
akka {
pekko {
loglevel = "DEBUG"
actor {
debug {

View file

@ -9,7 +9,7 @@ To use Akka Actor Typed, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group=com.typesafe.akka
artifact=akka-actor-typed_$scala.binary.version$
version=AkkaVersion

View file

@ -12,7 +12,7 @@ To use Akka Actor Typed, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group=com.typesafe.akka
artifact=akka-actor-typed_$scala.binary.version$
version=AkkaVersion

View file

@ -12,7 +12,7 @@ To use Akka Actors, add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group=com.typesafe.akka
artifact=akka-actor-typed_$scala.binary.version$
version=AkkaVersion
@ -148,18 +148,18 @@ An application normally consists of a single @apidoc[typed.ActorSystem], running
The console output may look like this:
```
[INFO] [03/13/2018 15:50:05.814] [hello-akka.actor.default-dispatcher-4] [akka://hello/user/greeter] Hello World!
[INFO] [03/13/2018 15:50:05.815] [hello-akka.actor.default-dispatcher-4] [akka://hello/user/greeter] Hello Akka!
[INFO] [03/13/2018 15:50:05.815] [hello-akka.actor.default-dispatcher-2] [akka://hello/user/World] Greeting 1 for World
[INFO] [03/13/2018 15:50:05.815] [hello-akka.actor.default-dispatcher-4] [akka://hello/user/Akka] Greeting 1 for Akka
[INFO] [03/13/2018 15:50:05.815] [hello-akka.actor.default-dispatcher-5] [akka://hello/user/greeter] Hello World!
[INFO] [03/13/2018 15:50:05.815] [hello-akka.actor.default-dispatcher-5] [akka://hello/user/greeter] Hello Akka!
[INFO] [03/13/2018 15:50:05.815] [hello-akka.actor.default-dispatcher-4] [akka://hello/user/World] Greeting 2 for World
[INFO] [03/13/2018 15:50:05.815] [hello-akka.actor.default-dispatcher-5] [akka://hello/user/greeter] Hello World!
[INFO] [03/13/2018 15:50:05.815] [hello-akka.actor.default-dispatcher-4] [akka://hello/user/Akka] Greeting 2 for Akka
[INFO] [03/13/2018 15:50:05.816] [hello-akka.actor.default-dispatcher-5] [akka://hello/user/greeter] Hello Akka!
[INFO] [03/13/2018 15:50:05.816] [hello-akka.actor.default-dispatcher-4] [akka://hello/user/World] Greeting 3 for World
[INFO] [03/13/2018 15:50:05.816] [hello-akka.actor.default-dispatcher-6] [akka://hello/user/Akka] Greeting 3 for Akka
[INFO] [03/13/2018 15:50:05.814] [hello-pekko.actor.default-dispatcher-4] [akka://hello/user/greeter] Hello World!
[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-4] [akka://hello/user/greeter] Hello Akka!
[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-2] [akka://hello/user/World] Greeting 1 for World
[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-4] [akka://hello/user/Akka] Greeting 1 for Akka
[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-5] [akka://hello/user/greeter] Hello World!
[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-5] [akka://hello/user/greeter] Hello Akka!
[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-4] [akka://hello/user/World] Greeting 2 for World
[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-5] [akka://hello/user/greeter] Hello World!
[INFO] [03/13/2018 15:50:05.815] [hello-pekko.actor.default-dispatcher-4] [akka://hello/user/Akka] Greeting 2 for Akka
[INFO] [03/13/2018 15:50:05.816] [hello-pekko.actor.default-dispatcher-5] [akka://hello/user/greeter] Hello Akka!
[INFO] [03/13/2018 15:50:05.816] [hello-pekko.actor.default-dispatcher-4] [akka://hello/user/World] Greeting 3 for World
[INFO] [03/13/2018 15:50:05.816] [hello-pekko.actor.default-dispatcher-6] [akka://hello/user/Akka] Greeting 3 for Akka
```
You will also need to add a @ref:[logging dependency](logging.md) to see that output when running.

View file

@ -21,7 +21,7 @@ To use Akka Cluster add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group=com.typesafe.akka
artifact=akka-cluster-typed_$scala.binary.version$
version=AkkaVersion
@ -78,7 +78,7 @@ Cluster can span multiple data centers and still be tolerant to network partitio
## Defining the data centers
The features are based on the idea that nodes can be assigned to a group of nodes
by setting the `akka.cluster.multi-data-center.self-data-center` configuration property.
by setting the `pekko.cluster.multi-data-center.self-data-center` configuration property.
A node can only belong to one data center and if nothing is specified a node will belong
to the `default` data center.
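For example (data center name illustrative):

```
pekko.cluster.multi-data-center.self-data-center = "dc-east"
```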
@ -125,8 +125,8 @@ be interpreted as an indication of problem with the network link between the dat
Two different failure detectors can be configured for these two purposes:
* `akka.cluster.failure-detector` for failure detection within own data center
* `akka.cluster.multi-data-center.failure-detector` for failure detection across different data centers
* `pekko.cluster.failure-detector` for failure detection within own data center
* `pekko.cluster.multi-data-center.failure-detector` for failure detection across different data centers
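The cross data center detector can be given a larger tolerance than the local one, since inter-data-center links are typically slower and less reliable (value illustrative):

```
pekko.cluster.multi-data-center.failure-detector {
  acceptable-heartbeat-pause = 10s
}
```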
When @ref[subscribing to cluster events](cluster.md#cluster-subscriptions) the `UnreachableMember` and
`ReachableMember` events are for observations within the own data center. The same data center as where the
@ -136,7 +136,7 @@ For cross data center unreachability notifications you can subscribe to `Unreach
events.
Heartbeat messages for failure detection across data centers are only performed between a number of the
oldest nodes on each side. The number of nodes is configured with `akka.cluster.multi-data-center.cross-data-center-connections`.
oldest nodes on each side. The number of nodes is configured with `pekko.cluster.multi-data-center.cross-data-center-connections`.
The reason for only using a limited number of nodes is to keep the number of connections across data
centers low. The same nodes are also used for the gossip protocol when disseminating the membership
information across data centers. Within a data center all nodes are involved in gossip and failure detection.
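For example (value illustrative; a low number keeps the count of cross data center connections down):

```
pekko.cluster.multi-data-center.cross-data-center-connections = 5
```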

View file

@ -35,7 +35,7 @@ merged and converge to the same end result.
* **joining** - transient state when joining a cluster
* **weakly up** - transient state while network split (only if `akka.cluster.allow-weakly-up-members=on`)
* **weakly up** - transient state while network split (only if `pekko.cluster.allow-weakly-up-members=on`)
* **up** - normal operating state
@ -121,7 +121,7 @@ Another transition that is possible without convergence is marking members as `W
If a node is `unreachable` then gossip convergence is not
possible and therefore most `leader` actions are impossible. By enabling
`akka.cluster.allow-weakly-up-members` (which is enabled by default), joining nodes can be promoted to `WeaklyUp`
`pekko.cluster.allow-weakly-up-members` (which is enabled by default), joining nodes can be promoted to `WeaklyUp`
even while convergence is not yet reached. Once gossip convergence can be established again, the leader will move
`WeaklyUp` members to `Up`.

View file

@ -7,7 +7,7 @@ To use Akka Sharded Daemon Process, you must add the following dependency in you
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group=com.typesafe.akka
artifact=akka-cluster-sharding-typed_$scala.binary.version$
version=AkkaVersion

View file

@ -12,7 +12,7 @@ To use Akka Cluster Sharding, you must add the following dependency in your proj
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group=com.typesafe.akka
artifact=akka-cluster-sharding-typed_$scala.binary.version$
version=AkkaVersion
@ -195,12 +195,12 @@ The new algorithm is recommended and will become the default in future versions
You enable the new algorithm by setting `rebalance-absolute-limit` > 0, for example:
```
akka.cluster.sharding.least-shard-allocation-strategy.rebalance-absolute-limit = 20
pekko.cluster.sharding.least-shard-allocation-strategy.rebalance-absolute-limit = 20
```
The `rebalance-absolute-limit` is the maximum number of shards that will be rebalanced in one rebalance round.
You may also want to tune the `akka.cluster.sharding.least-shard-allocation-strategy.rebalance-relative-limit`.
You may also want to tune the `pekko.cluster.sharding.least-shard-allocation-strategy.rebalance-relative-limit`.
The `rebalance-relative-limit` is a fraction (< 1.0) of total number of (known) shards that will be rebalanced
in one rebalance round. The lower result of `rebalance-relative-limit` and `rebalance-absolute-limit` will be used.
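Both limits can be set together, for example (values illustrative):

```
pekko.cluster.sharding.least-shard-allocation-strategy {
  rebalance-absolute-limit = 20
  rebalance-relative-limit = 0.2
}
```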
@ -304,7 +304,7 @@ testing and feedback.
@@@
Automatic passivation can be disabled by setting `akka.cluster.sharding.passivation.strategy = none`. It is disabled
Automatic passivation can be disabled by setting `pekko.cluster.sharding.passivation.strategy = none`. It is disabled
automatically if @ref:[Remembering Entities](#remembering-entities) is enabled.
@@@ note
@ -362,7 +362,7 @@ and idle entity timeouts.
### Custom passivation strategies
To configure a custom passivation strategy, create a configuration section for the strategy under
`akka.cluster.sharding.passivation` and select this strategy using the `strategy` setting. The strategy needs a
`pekko.cluster.sharding.passivation` and select this strategy using the `strategy` setting. The strategy needs a
_replacement policy_ to be chosen, an _active entity limit_ to be set, and can optionally [passivate idle
entities](#idle-entity-passivation). For example, a custom strategy can be configured to use the [least recently used
policy](#least-recently-used-policy):
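A sketch of such a configuration (the strategy name and limit are illustrative):

```
pekko.cluster.sharding.passivation {
  strategy = custom-lru-strategy
  custom-lru-strategy {
    active-entity-limit = 1000
    replacement.policy = least-recently-used
  }
}
```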
@ -544,7 +544,7 @@ There are two options for the state store:
To enable distributed data store mode (the default):
```
akka.cluster.sharding.state-store-mode = ddata
pekko.cluster.sharding.state-store-mode = ddata
```
The state of the `ShardCoordinator` is replicated across the cluster but is not stored to disk.
@ -558,7 +558,7 @@ that contains the node role and therefore the role configuration must be the sam
cluster, for example you can't change the roles when performing a rolling update.
Changing roles requires @ref:[a full cluster restart](../additional/rolling-updates.md#cluster-sharding-configuration-change).
The `akka.cluster.sharding.distributed-data` config section configures the settings for Distributed Data.
The `pekko.cluster.sharding.distributed-data` config section configures the settings for Distributed Data.
It's not possible to have different `distributed-data` settings for different sharding entity types.
#### Persistence mode
@ -566,7 +566,7 @@ It's not possible to have different `distributed-data` settings for different sh
To enable persistence store mode:
```
akka.cluster.sharding.state-store-mode = persistence
pekko.cluster.sharding.state-store-mode = persistence
```
Since it is running in a cluster @ref:[Persistence](persistence.md) must be configured with a distributed journal.
@ -590,7 +590,7 @@ for example with @ref:[Event Sourcing](persistence.md).
To enable remember entities set `rememberEntities` flag to true in
@apidoc[typed.ClusterShardingSettings] when starting a shard region (or its proxy) for a given `entity` type or configure
`akka.cluster.sharding.remember-entities = on`.
`pekko.cluster.sharding.remember-entities = on`.
Starting and stopping entities has an overhead but this is limited by batching operations to the
underlying remember entities store.
@ -618,13 +618,13 @@ There are two options for the remember entities store:
Enable ddata mode with (enabled by default):
```
akka.cluster.sharding.remember-entities-store = ddata
pekko.cluster.sharding.remember-entities-store = ddata
```
To support restarting entities after a full cluster restart (non-rolling) the remember entities store is persisted to disk by distributed data.
This can be disabled if not needed:
```
akka.cluster.sharding.distributed-data.durable.keys = []
pekko.cluster.sharding.distributed-data.durable.keys = []
```
Reasons for disabling:
@ -639,15 +639,15 @@ For supporting remembered entities in an environment without disk storage use `e
Enable `eventsourced` mode with:
```
akka.cluster.sharding.remember-entities-store = eventsourced
pekko.cluster.sharding.remember-entities-store = eventsourced
```
This mode uses @ref:[Event Sourcing](./persistence.md) to store the active shards and active entities for each shard
so a persistence and snapshot plugin must be configured.
```
akka.cluster.sharding.journal-plugin-id = <plugin>
akka.cluster.sharding.snapshot-plugin-id = <plugin>
pekko.cluster.sharding.journal-plugin-id = <plugin>
pekko.cluster.sharding.snapshot-plugin-id = <plugin>
```
### Migrating from deprecated persistence mode
@ -664,7 +664,7 @@ For migrating existing remembered entities an event adapter needs to be configur
In this example `cassandra` is the used journal:
```
akka.persistence.cassandra.journal {
pekko.persistence.cassandra.journal {
event-adapters {
coordinator-migration = "org.apache.pekko.cluster.sharding.OldCoordinatorStateMigrationEventAdapter"
}
@ -679,7 +679,7 @@ Once you have migrated you cannot go back to the old persistence store, a rollin
When @ref:[Distributed Data mode](#distributed-data-mode) is used the identifiers of the entities are
stored in @ref:[Durable Storage](distributed-data.md#durable-storage) of Distributed Data. You may want to change the
configuration of the `akka.cluster.sharding.distributed-data.durable.lmdb.dir`, since
configuration of the `pekko.cluster.sharding.distributed-data.durable.lmdb.dir`, since
the default directory contains the remote port of the actor system. If using a dynamically
assigned port (0) it will be different each time and the previously stored data will not
be loaded.
@ -689,12 +689,12 @@ disk, is that the same entities should be started also after a complete cluster
you can disable durable storage and benefit from better performance by using the following configuration:
```
akka.cluster.sharding.distributed-data.durable.keys = []
pekko.cluster.sharding.distributed-data.durable.keys = []
```
## Startup after minimum number of members
It's recommended to use Cluster Sharding with the Cluster setting `akka.cluster.min-nr-of-members` or
`akka.cluster.role.<role-name>.min-nr-of-members`. `min-nr-of-members` will defer the allocation of the shards
It's recommended to use Cluster Sharding with the Cluster setting `pekko.cluster.min-nr-of-members` or
`pekko.cluster.role.<role-name>.min-nr-of-members`. `min-nr-of-members` will defer the allocation of the shards
until at least that number of regions have been started and registered to the coordinator. This
avoids the situation where many shards are allocated to the first region that registers, only
to be rebalanced to other nodes later.
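For example, to defer shard allocation until two nodes with a given role have joined (role name illustrative):

```
pekko.cluster.role.worker.min-nr-of-members = 2
```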
@ -713,7 +713,7 @@ The health check does not fail after an initial successful check. Once a shard r
Cluster sharding enables the health check automatically. To disable:
```ruby
akka.management.health-checks.readiness-checks {
pekko.management.health-checks.readiness-checks {
sharding = ""
}
```
@ -721,7 +721,7 @@ akka.management.health-checks.readiness-checks {
Monitoring of each shard region is off by default. Add them by defining the entity type names (`EntityTypeKey.name`):
```ruby
akka.cluster.sharding.healthcheck.names = ["counter-1", "HelloWorld"]
pekko.cluster.sharding.healthcheck.names = ["counter-1", "HelloWorld"]
```
See also additional information about how to make @ref:[smooth rolling updates](../additional/rolling-updates.md#cluster-sharding).
@ -750,7 +750,7 @@ Scala
Java
: @@snip [ShardingCompileOnlyTest.java](/akka-cluster-sharding-typed/src/test/java/jdocs/org/apache/pekko/cluster/sharding/typed/ShardingCompileOnlyTest.java) { #get-cluster-sharding-stats }
If any shard queries failed, for example due to timeout if a shard was too busy to reply within the configured `akka.cluster.sharding.shard-region-query-timeout`,
If any shard queries failed, for example due to timeout if a shard was too busy to reply within the configured `pekko.cluster.sharding.shard-region-query-timeout`,
`ShardRegion.CurrentShardRegionState` and `ShardRegion.ClusterShardingStats` will also include the set of shard identifiers by region that failed.
The purpose of these messages is testing and monitoring; they are not provided to give access to
@ -769,7 +769,7 @@ Reasons for how this can happen:
A lease can serve as a final backup, meaning each shard won't create child entity actors unless it has the lease.
To use a lease for sharding set `akka.cluster.sharding.use-lease` to the configuration location
To use a lease for sharding set `pekko.cluster.sharding.use-lease` to the configuration location
of the lease to use. Each shard will try to acquire a lease with the name `<actor system name>-shard-<type name>-<shard id>` and
the owner is set to the `Cluster(system).selfAddress.hostPort`.
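A sketch of pointing `use-lease` at the config section of a lease implementation (the section name is hypothetical):

```
pekko.cluster.sharding.use-lease = "com.example.custom-lease"
```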

View file

@ -9,7 +9,7 @@ To use Cluster Singleton, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group=com.typesafe.akka
artifact=akka-cluster-typed_$scala.binary.version$
version=AkkaVersion
@ -174,7 +174,7 @@ don't run at the same time. Reasons for how this can happen:
A lease can serve as a final backup, meaning the singleton actor won't be created unless
the lease can be acquired.
To use a lease for singleton set `akka.cluster.singleton.use-lease` to the configuration location
To use a lease for singleton set `pekko.cluster.singleton.use-lease` to the configuration location
of the lease to use. A lease with the name `<actor system name>-singleton-<singleton actor path>` is used and
the owner is set to the @scala[`Cluster(system).selfAddress.hostPort`]@java[`Cluster.get(system).selfAddress().hostPort()`].

View file

@ -28,7 +28,7 @@ To use Akka Cluster add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
bomGroup=com.typesafe.akka bomArtifact=akka-bom_$scala.binary.version$ bomVersionSymbols=AkkaVersion
symbol1=AkkaVersion
value1="$akka.version$"
value1="$pekko.version$"
group=com.typesafe.akka
artifact=akka-cluster-typed_$scala.binary.version$
version=AkkaVersion
@ -57,7 +57,7 @@ Java
: @@snip [BasicClusterExampleTest.java](/akka-cluster-typed/src/test/java/jdocs/org/apache/pekko/cluster/typed/BasicClusterExampleTest.java) { #cluster-imports }
<a id="basic-cluster-configuration"></a>
The minimum configuration required is to set a host/port for remoting and the `akka.actor.provider = "cluster"`.
The minimum configuration required is to set a host/port for remoting and `pekko.actor.provider = "cluster"`.
@@snip [BasicClusterExampleSpec.scala](/akka-cluster-typed/src/test/scala/docs/org/apache/pekko/cluster/typed/BasicClusterExampleSpec.scala) { #config-seeds }
@ -155,7 +155,7 @@ it retries this procedure until success or shutdown.
You can define the seed nodes in the @ref:[configuration](#configuration) file (application.conf):
```
akka.cluster.seed-nodes = [
pekko.cluster.seed-nodes = [
"akka://ClusterSystem@host1:2552",
"akka://ClusterSystem@host2:2552"]
```
@ -163,8 +163,8 @@ akka.cluster.seed-nodes = [
This can also be defined as Java system properties when starting the JVM using the following syntax:
```
-Dakka.cluster.seed-nodes.0=akka://ClusterSystem@host1:2552
-Dakka.cluster.seed-nodes.1=akka://ClusterSystem@host2:2552
-Dpekko.cluster.seed-nodes.0=akka://ClusterSystem@host1:2552
-Dpekko.cluster.seed-nodes.1=akka://ClusterSystem@host2:2552
```
@ -228,8 +228,8 @@ the JVM. If the `seed-nodes` are assembled dynamically, it is useful to define t
and a restart with new seed-nodes should be tried after unsuccessful attempts.
```
akka.cluster.shutdown-after-unsuccessful-join-seed-nodes = 20s
akka.coordinated-shutdown.exit-jvm = on
pekko.cluster.shutdown-after-unsuccessful-join-seed-nodes = 20s
pekko.coordinated-shutdown.exit-jvm = on
```
If you don't configure seed nodes or use one of the join seed node functions, you need to join the cluster manually
@ -284,7 +284,7 @@ We recommend that you enable the @ref:[Split Brain Resolver](../split-brain-reso
Akka Cluster module. You enable it with configuration:
```
akka.cluster.downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
pekko.cluster.downing-provider-class = "org.apache.pekko.cluster.sbr.SplitBrainResolverProvider"
```
You should also consider the different available @ref:[downing strategies](../split-brain-resolver.md#strategies).
@ -313,7 +313,7 @@ Not all nodes of a cluster need to perform the same function. For example, there
one which runs the data access layer and one for the number-crunching. Choosing which actors to start on each node,
for example cluster-aware routers, can take node roles into account to achieve this distribution of responsibilities.
The node roles are defined in the configuration property named `akka.cluster.roles`
The node roles are defined in the configuration property named `pekko.cluster.roles`
and typically defined in the start script as a system property or environment variable.
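For example, in `application.conf` (role names illustrative):

```
pekko.cluster.roles = ["frontend", "backend"]
```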
The roles are part of the membership information in @apidoc[MemberEvent](ClusterEvent.MemberEvent) that you can subscribe to. The roles
@ -341,14 +341,14 @@ Cluster uses the @apidoc[remote.PhiAccrualFailureDetector](PhiAccrualFailureDete
implementing the @apidoc[remote.FailureDetector](FailureDetector) and configuring it:
```
akka.cluster.implementation-class = "com.example.CustomFailureDetector"
pekko.cluster.implementation-class = "com.example.CustomFailureDetector"
```
In the @ref:[Cluster Configuration](#configuration) you may want to adjust these
depending on you environment:
* When a *phi* value is considered to be a failure `akka.cluster.failure-detector.threshold`
* Margin of error for sudden abnormalities `akka.cluster.failure-detector.acceptable-heartbeat-pause`
* When a *phi* value is considered to be a failure `pekko.cluster.failure-detector.threshold`
* Margin of error for sudden abnormalities `pekko.cluster.failure-detector.acceptable-heartbeat-pause`
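For example (values illustrative; larger values tolerate more network jitter at the cost of slower failure detection):

```
pekko.cluster.failure-detector {
  threshold = 12.0
  acceptable-heartbeat-pause = 5s
}
```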
## How to test
@ -373,14 +373,14 @@ With a configuration option you can define the required number of members
before the leader changes member status of 'Joining' members to 'Up':
```
akka.cluster.min-nr-of-members = 3
pekko.cluster.min-nr-of-members = 3
```
In a similar way you can define the required number of members of a certain role
before the leader changes member status of 'Joining' members to 'Up':
```
akka.cluster.role {
pekko.cluster.role {
frontend.min-nr-of-members = 1
backend.min-nr-of-members = 2
}
@ -391,21 +391,21 @@ akka.cluster.role {
You can silence the logging of cluster events at info level with configuration property:
```
akka.cluster.log-info = off
pekko.cluster.log-info = off
```
You can enable verbose logging of cluster events at info level, e.g. for temporary troubleshooting, with configuration property:
```
akka.cluster.log-info-verbose = on
pekko.cluster.log-info-verbose = on
```
### Cluster Dispatcher
The Cluster extension is implemented with actors. To protect them against
disturbance from user actors they are by default run on the internal dispatcher configured
under `akka.actor.internal-dispatcher`. The cluster actors can potentially be isolated even
further, onto their own dispatcher using the setting `akka.cluster.use-dispatcher`
under `pekko.actor.internal-dispatcher`. The cluster actors can potentially be isolated even
further, onto their own dispatcher using the setting `pekko.cluster.use-dispatcher`
or made to run on the same dispatcher to keep the number of threads down.
### Configuration Compatibility Check
@ -417,14 +417,14 @@ The Configuration Compatibility Check feature ensures that all nodes in a cluste
New custom checkers can be added by extending @apidoc[cluster.JoinConfigCompatChecker](JoinConfigCompatChecker) and including them in the configuration. Each checker must be associated with a unique key:
```
akka.cluster.configuration-compatibility-check.checkers {
pekko.cluster.configuration-compatibility-check.checkers {
my-custom-config = "com.company.MyCustomJoinConfigCompatChecker"
}
```
@@@ note
Configuration Compatibility Check is enabled by default, but can be disabled by setting `akka.cluster.configuration-compatibility-check.enforce-on-join = off`. This is specially useful when performing rolling updates. Obviously this should only be done if a complete cluster shutdown isn't an option. A cluster with nodes with different configuration settings may lead to data loss or data corruption.
Configuration Compatibility Check is enabled by default, but can be disabled by setting `pekko.cluster.configuration-compatibility-check.enforce-on-join = off`. This is especially useful when performing rolling updates. Obviously this should only be done if a complete cluster shutdown isn't an option. A cluster with nodes with different configuration settings may lead to data loss or data corruption.
This setting should only be disabled on the joining nodes. The checks are always performed on both sides, and warnings are logged. In case of incompatibilities, it is the responsibility of the joining node to decide if the process should be interrupted or not.

Some files were not shown because too many files have changed in this diff Show more