From 418d6d3ec0a8c16f3bc3f54b770ba60dfd0c5b65 Mon Sep 17 00:00:00 2001
From: Ignasi Marimon-Clos
Date: Tue, 25 Aug 2020 11:11:06 +0200
Subject: [PATCH] Prefer "update" over "upgrade" when rolling (#29523)

---
 .../main/paradox/additional/rolling-updates.md | 18 +++++++++---------
 .../src/main/paradox/project/rolling-update.md |  2 +-
 akka-docs/src/main/paradox/serialization.md    | 12 ++++++------
 akka-docs/src/main/paradox/typed/cluster-dc.md |  2 +-
 .../src/main/paradox/typed/cluster-sharding.md |  4 ++--
 5 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/akka-docs/src/main/paradox/additional/rolling-updates.md b/akka-docs/src/main/paradox/additional/rolling-updates.md
index 225cc47428..8c4b60f117 100644
--- a/akka-docs/src/main/paradox/additional/rolling-updates.md
+++ b/akka-docs/src/main/paradox/additional/rolling-updates.md
@@ -27,30 +27,30 @@ There are two parts of Akka that need careful consideration when performing an r
1. Serialization format of persisted events and snapshots. New nodes must be able to read old data, and
   during the update old nodes must be able to read data stored by new nodes.

-There are many more application specific aspects for serialization changes during rolling upgrades to consider.
+There are many more application-specific aspects to consider for serialization changes during rolling updates.
For example based on the use case and requirements, whether to allow dropped messages or tear down
the TCP connection when the manifest is unknown.
-When some message loss during a rolling upgrade is acceptable versus a full shutdown and restart, assuming the application recovers afterwards
+When some message loss during a rolling update is acceptable versus a full shutdown and restart, assuming the application recovers afterwards:

* If a `java.io.NotSerializableException` is thrown in `fromBinary` this is treated as a transient problem, the issue logged and the message is dropped
* If other exceptions are thrown it can be an indication of corrupt bytes from the underlying transport, and the connection is broken

-For more zero-impact rolling upgrades, it is important to consider a strategy for serialization that can be evolved.
-One approach to retiring a serializer without downtime is described in @ref:[two rolling upgrade steps to switch to the new serializer](../serialization.md#rolling-upgrades).
+For more zero-impact rolling updates, it is important to consider a strategy for serialization that can be evolved.
+One approach to retiring a serializer without downtime is described in @ref:[two rolling update steps to switch to the new serializer](../serialization.md#rolling-updates).

Additionally you can find advice on @ref:[Persistence - Schema Evolution](../persistence-schema-evolution.md)
which also applies to remote messages when deploying with rolling updates.

## Cluster Sharding

-During a rolling upgrade, sharded entities receiving traffic may be moved during @ref:[shard rebalancing](../typed/cluster-sharding-concepts.md#shard-rebalancing),
+During a rolling update, sharded entities receiving traffic may be moved during @ref:[shard rebalancing](../typed/cluster-sharding-concepts.md#shard-rebalancing),
to an old or new node in the cluster, based on the pluggable allocation strategy and settings.
When an old node is stopped the shards that were running on it are moved to one of the other old nodes remaining in the cluster.
The `ShardCoordinator` is itself a cluster singleton.
-To minimize downtime of the shard coordinator, see the strategies about @ref[ClusterSingleton](#cluster-singleton) rolling upgrades below.
+To minimize downtime of the shard coordinator, see the strategies about @ref[ClusterSingleton](#cluster-singleton) rolling updates below.
A few specific changes to sharding configuration require @ref:[a full cluster restart](#cluster-sharding-configuration-change).

## Cluster Singleton

-Cluster singletons are always running on the oldest node. To avoid moving cluster singletons more than necessary during a rolling upgrade,
-it is recommended to upgrade the oldest node last. This way cluster singletons are only moved once during a full rolling upgrade.
+Cluster singletons are always running on the oldest node. To avoid moving cluster singletons more than necessary during a rolling update,
+it is recommended to upgrade the oldest node last. This way cluster singletons are only moved once during a full rolling update.
Otherwise, in the worst case cluster singletons may be migrated from node to node which requires coordination and initialization overhead several times.

@@ -160,5 +160,5 @@ Rolling update is not supported when @ref:[changing the remoting transport](../r

### Migrating from Classic Sharding to Typed Sharding

-If you have been using classic sharding it is possible to do a rolling upgrade to typed sharding using a 3 step procedure.
+If you have been using classic sharding it is possible to do a rolling update to typed sharding using a 3-step procedure.
The steps along with example commits are detailed in [this sample PR](https://github.com/akka/akka-samples/pull/110)

diff --git a/akka-docs/src/main/paradox/project/rolling-update.md b/akka-docs/src/main/paradox/project/rolling-update.md
index 8c5cbbe5c8..361facd122 100644
--- a/akka-docs/src/main/paradox/project/rolling-update.md
+++ b/akka-docs/src/main/paradox/project/rolling-update.md
@@ -92,7 +92,7 @@ This means that a rolling update will have to go through at least one of 2.6.2,

Issue: [#28918](https://github.com/akka/akka/issues/28918). JacksonCborSerializer was using plain JSON format instead of CBOR.

-If you have `jackson-cbor` in your `serialization-bindings` a rolling upgrade will have to go through 2.6.5 when
+If you have `jackson-cbor` in your `serialization-bindings` a rolling update will have to go through 2.6.5 when
upgrading to 2.6.5 or higher. In Akka 2.6.5 the `jackson-cbor` binding will still serialize to JSON format to
support rolling update from 2.6.4.

diff --git a/akka-docs/src/main/paradox/serialization.md b/akka-docs/src/main/paradox/serialization.md
index 64cde0b0b8..ee0d91da07 100644
--- a/akka-docs/src/main/paradox/serialization.md
+++ b/akka-docs/src/main/paradox/serialization.md
@@ -180,7 +180,7 @@ should be serialized by it.

It's recommended to throw `IllegalArgumentException` or `java.io.NotSerializableException` in `fromBinary`
if the manifest is unknown. This makes it possible to introduce new message types and
send them to nodes that don't know about them. This is typically needed when performing
-rolling upgrades, i.e. running a cluster with mixed versions for a while.
+rolling updates, i.e. running a cluster with mixed versions for a while.
Those exceptions are treated as a transient problem in the classic remoting layer. The problem will be logged
and the message dropped.
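The recommended `fromBinary` contract can be sketched in plain Scala. This is a minimal illustration with a hypothetical message type and serializer-id, not Akka's actual `SerializerWithStringManifest` plumbing:

```scala
import java.io.NotSerializableException
import java.nio.charset.StandardCharsets

// Hypothetical message type used only for this sketch.
final case class Greeting(msg: String)

// Sketch of a manifest-based serializer: known manifests are dispatched,
// unknown manifests throw java.io.NotSerializableException so the
// receiving (old) node can treat the message as a transient problem
// (log and drop) instead of tearing down the connection.
class GreetingSerializer {
  val identifier: Int = 9001 // assumed unique serializer-id

  def manifest(o: AnyRef): String = o match {
    case _: Greeting => "G1"
  }

  def toBinary(o: AnyRef): Array[Byte] = o match {
    case Greeting(msg) => msg.getBytes(StandardCharsets.UTF_8)
  }

  def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
    manifest match {
      case "G1" => Greeting(new String(bytes, StandardCharsets.UTF_8))
      case other =>
        // A newer node may have introduced a message type this node
        // does not know about yet during the rolling update.
        throw new NotSerializableException(
          s"Unknown manifest [$other] for serializer [$identifier]")
    }
}
```

Dispatching on the manifest rather than the class name is what lets one serializer evolve to cover new message types across a rolling update.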
Other exceptions will tear down the TCP connection because it can be an indication of corrupt bytes from the underlying

@@ -252,24 +252,24 @@ akka.actor.warn-about-java-serializer-usage = off

It is not safe to mix major Scala versions when using the Java serialization as Scala does not guarantee compatibility
and this could lead to very surprising errors.

-## Rolling upgrades
+## Rolling updates

A serialized remote message (or persistent event) consists of serializer-id, the manifest, and the binary payload.
When deserializing it is only looking at the serializer-id to pick which `Serializer` to use for `fromBinary`.
The message class (the bindings) is not used for deserialization. The manifest is only used within the
`Serializer` to decide how to deserialize the payload, so one `Serializer` can handle many classes.

-That means that it is possible to change serialization for a message by performing two rolling upgrade steps to
+That means that it is possible to change serialization for a message by performing two rolling update steps to
switch to the new serializer.

1. Add the `Serializer` class and define it in `akka.actor.serializers` config section, but not in
-   `akka.actor.serialization-bindings`. Perform a rolling upgrade for this change. This means that the
+   `akka.actor.serialization-bindings`. Perform a rolling update for this change. This means that the
   serializer class exists on all nodes and is registered, but it is still not used for serializing any
-   messages. That is important because during the rolling upgrade the old nodes still don't know about
+   messages. That is important because during the rolling update the old nodes still don't know about
   the new serializer and would not be able to deserialize messages with that format.
1. The second change is to register that the serializer is to be used for certain classes by defining
-   those in the `akka.actor.serialization-bindings` config section. Perform a rolling upgrade for this
+   those in the `akka.actor.serialization-bindings` config section. Perform a rolling update for this
   change. This means that new nodes will use the new serializer when sending messages and old nodes
   will be able to deserialize the new format. Old nodes will continue to use the old serializer when
   sending messages and new nodes will be able to deserialize the old format.

diff --git a/akka-docs/src/main/paradox/typed/cluster-dc.md b/akka-docs/src/main/paradox/typed/cluster-dc.md
index 8d512cb7af..ebe05ed07e 100644
--- a/akka-docs/src/main/paradox/typed/cluster-dc.md
+++ b/akka-docs/src/main/paradox/typed/cluster-dc.md
@@ -140,7 +140,7 @@ The reason for only using a limited number of nodes is to keep the number of con
centers low. The same nodes are also used for the gossip protocol when disseminating the membership
information across data centers. Within a data center all nodes are involved in gossip and failure detection.

-This influences how rolling upgrades should be performed. Don't stop all of the oldest that are used for gossip
+This influences how rolling updates should be performed. Don't stop all of the oldest that are used for gossip
at the same time. Stop one or a few at a time so that new nodes can take over the responsibility.
It's best to leave the oldest nodes until last.

diff --git a/akka-docs/src/main/paradox/typed/cluster-sharding.md b/akka-docs/src/main/paradox/typed/cluster-sharding.md
index 83a007d8c7..5975e6fa94 100644
--- a/akka-docs/src/main/paradox/typed/cluster-sharding.md
+++ b/akka-docs/src/main/paradox/typed/cluster-sharding.md
@@ -293,7 +293,7 @@ Cluster Sharding uses its own Distributed Data `Replicator` per node.
If using roles with sharding there is one `Replicator` per role, which enables a subset
of all nodes for some entity types and another subset for other entity types.
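The per-role `Replicator` setup above can be illustrated with a config sketch (the role names are hypothetical):

```hocon
# Hypothetical role setup: must be identical on every node in the cluster,
# because the name of each sharding Replicator includes the node role.
# Changing it requires a full cluster restart rather than a rolling update.
akka.cluster.roles = ["order-entities", "analytics-entities"]
```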
Each replicator has a name that contains the node role and therefore the role configuration must be the same on all nodes in the
-cluster, for example you can't change the roles when performing a rolling upgrade.
+cluster, for example you can't change the roles when performing a rolling update.
Changing roles requires @ref:[a full cluster restart](../additional/rolling-updates.md#cluster-sharding-configuration-change).

The `akka.cluster.sharding.distributed-data` config section configures the settings for Distributed Data.

@@ -413,7 +413,7 @@ akka.persistence.cassandra.journal {
}
```

-Once you have migrated you cannot go back to the old persistence store, a rolling upgrade is therefore not possible.
+Once you have migrated you cannot go back to the old persistence store; a rolling update is therefore not possible.

When @ref:[Distributed Data mode](#distributed-data-mode) is used the identifiers of the entities are
stored in @ref:[Durable Storage](distributed-data.md#durable-storage) of Distributed Data. You may want to change the