Merge pull request #26299 from helena/naming-consistency-shard-coordinator

Use one naming convention for ShardCoordinator in doc and logging
Patrik Nordwall 2019-01-29 15:06:41 +01:00 committed by GitHub
commit db64a54c69
3 changed files with 4 additions and 4 deletions


@@ -23,14 +23,14 @@ import akka.persistence.journal.leveldb.SharedLeveldbStore
/**
* Utility program that removes the internal data stored with Akka Persistence
- * by the Cluster Sharding coordinator. The data contains the locations of the
+ * by the Cluster `ShardCoordinator`. The data contains the locations of the
* shards using Akka Persistence and it can safely be removed when restarting
* the whole Akka Cluster. Note that this is not application data.
*
* <b>Never use this program while there are running Akka Cluster nodes that are
* using Cluster Sharding. Stop all Cluster nodes before using this program.</b>
*
- * It can be needed to remove the data if the Cluster Sharding coordinator
+ * It can be needed to remove the data if the Cluster `ShardCoordinator`
* cannot startup because of corrupt data, which may happen if accidentally
* two clusters were running at the same time, e.g. caused by using auto-down
* and there was a network partition.
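
The Scaladoc above describes the removal utility only in prose. As a rough illustration of how it is typically run, here is a minimal Scala sketch calling the utility's programmatic entry point; the `remove` signature, the `akka.cluster.sharding.journal-plugin-id` setting, and the entity type names are assumptions recalled from the akka-cluster-sharding sources of that era, not something shown in this diff, so verify them against your Akka version.

// Minimal sketch (assumptions noted above): remove the ShardCoordinator's
// internal persistence data. Only run this while ALL Cluster nodes are stopped.
import scala.concurrent.Future
import scala.util.{ Failure, Success }

import akka.actor.ActorSystem
import akka.cluster.sharding.RemoveInternalClusterShardingData

object RemoveCoordinatorData extends App {
  val system = ActorSystem("RemoveInternalClusterShardingData")
  import system.dispatcher

  // Journal plugin that Cluster Sharding uses for its internal data
  // (assumed setting name; adjust to your configuration).
  val journalPluginId =
    system.settings.config.getString("akka.cluster.sharding.journal-plugin-id")

  // Entity type names, same values as used when starting ClusterSharding
  // (hypothetical examples).
  val typeNames = Set("Counter", "ShoppingCart")

  // Assumed API: RemoveInternalClusterShardingData.remove returns a Future
  // that completes when the internal data has been deleted.
  val removal: Future[Unit] =
    RemoveInternalClusterShardingData.remove(
      system, journalPluginId, typeNames, remove2dot3Data = false)

  removal.onComplete {
    case Success(_) =>
      system.log.info("Removal of ShardCoordinator data completed")
      system.terminate()
    case Failure(e) =>
      system.log.error(e, "Removal of ShardCoordinator data failed")
      system.terminate()
  }
}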


@@ -1087,7 +1087,7 @@ class DDataShardCoordinator(typeName: String, settings: ClusterShardingSettings,
def activate() = {
context.become(active)
log.info("Sharding Coordinator was moved to the active state {}", state)
log.info("ShardCoordinator was moved to the active state {}", state)
}
override def active: Receive =


@@ -235,7 +235,7 @@ A higher threshold means that more shards can be rebalanced at the same time instead of one-by-one.
That has the advantage that the rebalance process can be quicker but has the drawback that the
number of shards (and therefore load) between different nodes may be significantly different.
- ### Shard Coordinator State
+ ### ShardCoordinator State
The state of shard locations in the `ShardCoordinator` is persistent (durable) with
@ref:[Distributed Data](distributed-data.md) or @ref:[Persistence](persistence.md) to survive failures. When a crashed or
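
To ground the two settings this documentation excerpt discusses, the sketch below shows one way to override them when creating the ActorSystem. The configuration paths (`akka.cluster.sharding.state-store-mode` and `akka.cluster.sharding.least-shard-allocation-strategy.rebalance-threshold`) are recalled from the Akka reference configuration rather than quoted from this diff, and the values are illustrative only.

// Sketch: selecting how ShardCoordinator state is stored and tuning rebalancing.
// Setting names assumed from reference.conf; verify against your Akka version.
import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object ShardingSettingsExample extends App {
  val config = ConfigFactory.parseString("""
    akka.cluster.sharding {
      # "ddata" keeps ShardCoordinator state in Distributed Data,
      # "persistence" keeps it with Akka Persistence.
      state-store-mode = ddata

      least-shard-allocation-strategy {
        # Higher threshold: more shards rebalanced at the same time (quicker),
        # but the per-node shard count can temporarily differ more.
        rebalance-threshold = 3
        max-simultaneous-rebalance = 3
      }
    }
    """).withFallback(ConfigFactory.load())

  val system = ActorSystem("ClusterSystem", config)
}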