Fix some typos/grammar (#24082)
parent cbc1c9a4f0
commit 5ef03c4423

9 changed files with 13 additions and 13 deletions
@@ -12,7 +12,7 @@ Akka is also the name of a goddess in the Sámi (the native Swedish population)
 mythology. She is the goddess that stands for all the beauty and good in the
 world. The mountain can be seen as the symbol of this goddess.
 
-Also, the name AKKA is the a palindrome of letters A and K as in Actor Kernel.
+Also, the name AKKA is a palindrome of the letters A and K as in Actor Kernel.
 
 Akka is also:
 
@@ -34,7 +34,7 @@ FSM of a bundle in an OSGi container:
 1. INSTALLED: A bundle that is installed has been loaded from disk and a classloader instantiated with its capabilities.
 Bundles are iteratively installed manually or through container-specific descriptors. For those familiar with legacy packging
 such as EJB, the modular nature of OSGi means that bundles may be used by multiple applications with overlapping dependencies.
-By resolving them individually from repositories, these overlaps can be de-duplicated across multiple deployemnts to
+By resolving them individually from repositories, these overlaps can be de-duplicated across multiple deployments to
 the same container.
 2. RESOLVED: A bundle that has been resolved is one that has had its requirements (imports) satisfied. Resolution does
 mean that a bundle can be started.
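The lifecycle described above is what an Akka bundle activator hooks into. A minimal sketch, assuming the `akka-osgi` module's `ActorSystemActivator`; the `Ping` actor and the activator name are invented for illustration:

```scala
import akka.actor.{ Actor, ActorSystem, Props }
import akka.osgi.ActorSystemActivator
import org.osgi.framework.BundleContext

// Trivial stand-in for the bundle's real entry-point actor.
class Ping extends Actor {
  def receive = { case msg => sender() ! msg }
}

// The OSGi container invokes this activator when the bundle is started,
// i.e. after it has passed the INSTALLED and RESOLVED states.
class MyBundleActivator extends ActorSystemActivator {
  override def configure(context: BundleContext, system: ActorSystem): Unit = {
    system.actorOf(Props[Ping], name = "ping")
    registerService(context, system) // expose the ActorSystem to other bundles
  }
}
```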
@@ -169,7 +169,7 @@ and regions, isolated from other data centers. If you start an entity type with
 nodes and you have defined 3 different data centers and then send messages to the same entity id to
 sharding regions in all data centers you will end up with 3 active entity instances for that entity id,
 one in each data center. This is because the region/coordinator is only aware of its own data center
-and will activate the entity there. It's unaware of the existence of corresponding entitiy in the
+and will activate the entity there. It's unaware of the existence of corresponding entities in the
 other data centers.
 
 Especially when used together with Akka Persistence that is based on the single-writer principle
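To make "start an entity type" concrete, a minimal sketch with the classic Cluster Sharding Scala API; the `Counter` entity, the `Envelope` message, and the shard count of 100 are illustrative assumptions, not part of the docs page. In a multi-DC setup each data center runs its own region started like this, which is why one active instance per data center can exist:

```scala
import akka.actor.{ Actor, ActorSystem, Props }
import akka.cluster.sharding.{ ClusterSharding, ClusterShardingSettings, ShardRegion }

// Illustrative entity actor.
class Counter extends Actor {
  private var count = 0
  def receive = { case _ => count += 1; sender() ! count }
}

final case class Envelope(entityId: String, payload: Any)

object ShardingExample extends App {
  val system = ActorSystem("ClusterSystem")

  val extractEntityId: ShardRegion.ExtractEntityId = {
    case Envelope(id, payload) => (id, payload)
  }
  val extractShardId: ShardRegion.ExtractShardId = {
    case Envelope(id, _) => (math.abs(id.hashCode) % 100).toString
  }

  // The region/coordinator started here only knows about its own data center.
  val region = ClusterSharding(system).start(
    typeName = "Counter",
    entityProps = Props[Counter],
    settings = ClusterShardingSettings(system),
    extractEntityId = extractEntityId,
    extractShardId = extractShardId)

  region ! Envelope("counter-1", "increment")
}
```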
@@ -350,7 +350,7 @@ Note that stopped entities will be started again when a new message is targeted
 ## Graceful Shutdown
 
 You can send the @scala[`ShardRegion.GracefulShutdown`] @java[`ShardRegion.gracefulShutdownInstance`] message
-to the `ShardRegion` actor to handoff all shards that are hosted by that `ShardRegion` and then the
+to the `ShardRegion` actor to hand off all shards that are hosted by that `ShardRegion` and then the
 `ShardRegion` actor will be stopped. You can `watch` the `ShardRegion` actor to know when it is completed.
 During this period other regions will buffer messages for those shards in the same way as when a rebalance is
 triggered by the coordinator. When the shards have been stopped the coordinator will allocate these shards elsewhere.
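A minimal sketch of that handshake in Scala; `ShardRegion.GracefulShutdown` and `watch` come from the docs text above, while the `ShutdownGuard` actor is invented for illustration:

```scala
import akka.actor.{ Actor, ActorRef, Terminated }
import akka.cluster.sharding.ShardRegion

// Asks the region to hand off its shards, then reacts once it has stopped.
class ShutdownGuard(region: ActorRef) extends Actor {
  context.watch(region)
  region ! ShardRegion.GracefulShutdown

  def receive = {
    case Terminated(`region`) =>
      // All shards handed off and the region stopped; safe to continue shutdown.
      context.system.terminate()
  }
}
```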
@@ -464,7 +464,7 @@ the identifiers of the shards running in a Region and what entities are alive fo
 a `ShardRegion.ClusterShardingStats` containing the identifiers of the shards running in each region and a count
 of entities that are alive in each shard.
 
-The type names of all started shards can be aquired via @scala[`ClusterSharding.shardTypeNames`] @java[`ClusterSharding.getShardTypeNames`].
+The type names of all started shards can be acquired via @scala[`ClusterSharding.shardTypeNames`] @java[`ClusterSharding.getShardTypeNames`].
 
 The purpose of these messages is testing and monitoring, they are not provided to give access to
 directly sending messages to the individual entities.
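A sketch of querying those monitoring messages from Scala; the `ShardingMonitor` wrapper and the 5-second timeouts are assumptions:

```scala
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import akka.actor.ActorRef
import akka.cluster.sharding.ShardRegion
import akka.pattern.ask
import akka.util.Timeout

object ShardingMonitor {
  implicit val timeout: Timeout = 5.seconds

  // Shard identifiers and per-shard entity counts for every region of the type.
  def logStats(region: ActorRef): Unit =
    (region ? ShardRegion.GetClusterShardingStats(5.seconds))
      .mapTo[ShardRegion.ClusterShardingStats]
      .foreach(stats => println(stats.regions))
}
```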
@@ -82,7 +82,7 @@ Here's how a `CircuitBreaker` would be configured for:
 
 ### Future & Synchronous based API
 
-Once a circuit breaker actor has been intialized, interacting with that actor is done by either using the Future based API or the synchronous API. Both of these APIs are considered `Call Protection` because whether synchronously or asynchronously, the purpose of the circuit breaker is to protect your system from cascading failures while making a call to another service. In the future based API, we use the `withCircuitBreaker` which takes an asynchronous method (some method wrapped in a `Future`), for instance a call to retrieve data from a database, and we pipe the result back to the sender. If for some reason the database in this example isn't responding, or there is another issue, the circuit breaker will open and stop trying to hit the database again and again until the timeout is over.
+Once a circuit breaker actor has been initialized, interacting with that actor is done by either using the Future based API or the synchronous API. Both of these APIs are considered `Call Protection` because whether synchronously or asynchronously, the purpose of the circuit breaker is to protect your system from cascading failures while making a call to another service. In the future based API, we use the `withCircuitBreaker` which takes an asynchronous method (some method wrapped in a `Future`), for instance a call to retrieve data from a database, and we pipe the result back to the sender. If for some reason the database in this example isn't responding, or there is another issue, the circuit breaker will open and stop trying to hit the database again and again until the timeout is over.
 
 The Synchronous API would also wrap your call with the circuit breaker logic, however, it uses the `withSyncCircuitBreaker` and receives a method that is not wrapped in a `Future`.
 
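Both call-protection styles in one minimal sketch; the `DataAccess` actor, the fetch methods, and the breaker settings (5 failures, 10-second call timeout, 1-minute reset) are illustrative assumptions:

```scala
import scala.concurrent.Future
import scala.concurrent.duration._
import akka.actor.Actor
import akka.pattern.{ pipe, CircuitBreaker }

class DataAccess extends Actor {
  import context.dispatcher

  val breaker = new CircuitBreaker(
    context.system.scheduler,
    maxFailures = 5,
    callTimeout = 10.seconds,
    resetTimeout = 1.minute)

  def fetchFromDb(key: String): Future[String] = Future(s"value-for-$key") // stand-in DB call
  def fetchSync(key: String): String = s"value-for-$key" // stand-in blocking call

  def receive = {
    case ("sync", key: String) =>
      // Synchronous API: same protection around a call not wrapped in a Future.
      sender() ! breaker.withSyncCircuitBreaker(fetchSync(key))
    case key: String =>
      // Future-based API: protect the async call and pipe the result to the sender.
      breaker.withCircuitBreaker(fetchFromDb(key)).pipeTo(sender())
  }
}
```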
@@ -154,4 +154,4 @@ The below examples doesn't make a remote call when the state is *HalfOpen*. Usin
 
 #### Java
 
-@@snip [TellPatternJavaActor.java]($code$/java/jdocs/circuitbreaker/TellPatternJavaActor.java) { #circuit-breaker-tell-pattern }
+@@snip [TellPatternJavaActor.java]($code$/java/jdocs/circuitbreaker/TellPatternJavaActor.java) { #circuit-breaker-tell-pattern }
@@ -83,7 +83,7 @@ at least **N/2 + 1** replicas, where N is the number of nodes in the cluster
 When you specify to write to `n` out of `x` nodes, the update will first replicate to `n` nodes.
 If there are not enough Acks after 1/5th of the timeout, the update will be replicated to `n` other
 nodes. If there are less than n nodes left all of the remaining nodes are used. Reachable nodes
-are prefered over unreachable nodes.
+are preferred over unreachable nodes.
 
 Note that `WriteMajority` has a `minCap` parameter that is useful to specify to achieve better safety for small clusters.
 
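A sketch of a `WriteMajority` update with `minCap`, using the classic Distributed Data API of this Akka generation (implicit `Cluster`, `GCounter` with `+`); the key name and `minCap = 3` are assumptions:

```scala
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.cluster.Cluster
import akka.cluster.ddata.{ DistributedData, GCounter, GCounterKey }
import akka.cluster.ddata.Replicator.{ Update, WriteMajority }

object MajorityWriteExample extends App {
  val system = ActorSystem("ddata")
  implicit val cluster: Cluster = Cluster(system)
  val replicator = DistributedData(system).replicator

  // Ack from a majority of nodes, but never fewer than minCap nodes,
  // which guards small clusters where a bare majority is only 1 or 2.
  val writeMajority = WriteMajority(timeout = 5.seconds, minCap = 3)

  replicator ! Update(GCounterKey("hits"), GCounter.empty, writeMajority)(_ + 1)
}
```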
@@ -842,8 +842,8 @@ Event Adapters help in situations where:
 * **Version Migrations** – existing events stored in *Version 1* should be "upcasted" to a new *Version 2* representation,
   and the process of doing so involves actual code, not just changes on the serialization layer. For these scenarios
   the `toJournal` function is usually an identity function, however the `fromJournal` is implemented as
-  `v1.Event=>v2.Event`, performing the neccessary mapping inside the fromJournal method.
-  This technique is sometimes refered to as "upcasting" in other CQRS libraries.
+  `v1.Event=>v2.Event`, performing the necessary mapping inside the fromJournal method.
+  This technique is sometimes referred to as "upcasting" in other CQRS libraries.
 * **Separating Domain and Data models** – thanks to EventAdapters it is possible to completely separate the domain model
   from the model used to persist data in the Journals. For example one may want to use case classes in the
   domain model, however persist their protocol-buffer (or any other binary serialization format) counter-parts to the Journal.
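A minimal upcasting adapter matching that description: identity `toJournal`, `v1.Event => v2.Event` in `fromJournal`; the event types themselves are invented:

```scala
import akka.persistence.journal.{ EventAdapter, EventSeq }

// Illustrative event versions.
object v1 { final case class Event(data: String) }
object v2 { final case class Event(data: String, tag: String) }

class UpcastingAdapter extends EventAdapter {
  override def manifest(event: Any): String = "" // no manifest needed here

  // Writing: identity, events are stored as-is.
  override def toJournal(event: Any): Any = event

  // Reading: upcast stored v1 events to the current v2 representation.
  override def fromJournal(event: Any, manifest: String): EventSeq = event match {
    case v1.Event(data) => EventSeq.single(v2.Event(data, tag = "migrated"))
    case other          => EventSeq.single(other)
  }
}
```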
@@ -22,7 +22,7 @@ a known datastructure and algorithm for handling such use cases, refer to the [H
 whitepaper by Varghese and Lauck if you'd like to understand its inner workings.
 
 The Akka scheduler is **not** designed for long-term scheduling (see [akka-quartz-scheduler](https://github.com/enragedginger/akka-quartz-scheduler)
-instead for this use case) nor is it to be used for higly precise firing of the events.
+instead for this use case) nor is it to be used for highly precise firing of the events.
 The maximum amount of time into the future you can schedule an event to trigger is around 8 months,
 which in practice is too much to be useful since this would assume the system never went down during that period.
 If you need long-term scheduling we highly recommend looking into alternative schedulers, as this
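For contrast, the short-term use the scheduler is designed for, as a sketch; the delay and the printed message are arbitrary:

```scala
import scala.concurrent.duration._
import akka.actor.ActorSystem

object SchedulerDemo extends App {
  val system = ActorSystem("scheduler-demo")
  import system.dispatcher // ExecutionContext that runs the task

  // Fine for short delays; timer-wheel resolution, not exact-time firing,
  // and nothing here survives a restart of the ActorSystem.
  system.scheduler.scheduleOnce(30.seconds) {
    println("fired roughly 30 seconds later")
  }
}
```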
@@ -119,4 +119,4 @@ scheduled task was canceled or will (eventually) have run.
 
 @@@
 
-@@snip [Scheduler.scala]($akka$/akka-actor/src/main/scala/akka/actor/Scheduler.scala) { #cancellable }
+@@snip [Scheduler.scala]($akka$/akka-actor/src/main/scala/akka/actor/Scheduler.scala) { #cancellable }
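A sketch of the `Cancellable` contract shown in that snippet, using the classic `schedule` API of this Akka generation:

```scala
import scala.concurrent.duration._
import akka.actor.{ ActorSystem, Cancellable }

object CancelDemo extends App {
  val system = ActorSystem("cancel-demo")
  import system.dispatcher

  val task: Cancellable =
    system.scheduler.schedule(1.second, 1.second)(println("tick"))

  // cancel() returns true only for the call that actually canceled the task;
  // afterwards isCancelled is true and the task will not run again.
  val canceled = task.cancel()
  println(s"canceled=$canceled, isCancelled=${task.isCancelled}")
}
```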
@@ -127,7 +127,7 @@ Like the `concat` operation on `Flow`, it fully consumes one `Source` after the
 So, there is only one substream actively running at a given time.
 
 Then once the active substream is fully consumed, the next substream can start running.
-Elements from all the substreams are concatnated to the sink.
+Elements from all the substreams are concatenated to the sink.
 
 ![flatMapConcat2.png](../../images/flatMapConcat2.png)
 
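A runnable sketch of that one-substream-at-a-time behavior, assuming 2.5-era Akka Streams with `ActorMaterializer`:

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }

object FlatMapConcatDemo extends App {
  implicit val system: ActorSystem = ActorSystem("streams")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  // Each element becomes a substream; substreams run strictly one at a time,
  // and their elements reach the sink in order: 1, 1, 2, 2, 3, 3
  Source(1 to 3)
    .flatMapConcat(i => Source(List(i, i)))
    .runWith(Sink.foreach(println))
}
```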