Typo / grammatical fixes (#29715)

* Typo & grammatical fixes

* Grammatical fix

* Grammatical fix

* More grammatical fixes
Matt Kohl 2020-10-09 16:10:44 +01:00 committed by GitHub
parent efc13f989d
commit 49f721d84d
5 changed files with 21 additions and 21 deletions


@@ -69,7 +69,7 @@ Once a method has been deprecated then the guideline* is that it will be kept, a
Since the release of Akka `2.4.0` a new versioning scheme is in effect.
-Historically, Akka has been following the Java or Scala style of versioning where as the first number would mean "**epoch**",
+Historically, Akka has been following the Java or Scala style of versioning in which the first number would mean "**epoch**",
the second one would mean **major**, and third be the **minor**, thus: `epoch.major.minor` (versioning scheme followed until and during `2.3.x`).
**Currently**, since Akka `2.4.0`, the new versioning applies which is closer to semantic versioning many have come to expect,
@@ -134,7 +134,7 @@ No compatibility guarantees are given about these classes. They may change or ev
and user code is not supposed to call them.
Side-note on JVM representation details of the Scala `private[akka]` pattern that Akka is using extensively in
-it's internals: Such methods or classes, which act as "accessible only from the given package" in Scala, are compiled
+its internals: Such methods or classes, which act as "accessible only from the given package" in Scala, are compiled
down to `public` (!) in raw Java bytecode. The access restriction, that Scala understands is carried along
as metadata stored in the classfile. Thus, such methods are safely guarded from being accessed from Scala,
however Java users will not be warned about this fact by the `javac` compiler. Please be aware of this and do not call


@@ -4,7 +4,7 @@ project.description: How to extend Akka with Akka Extensions.
# Classic Akka Extensions
If you want to add features to Akka, there is a very elegant, but powerful mechanism for doing so.
-It's called Akka Extensions and is comprised of 2 basic components: an `Extension` and an `ExtensionId`.
+It's called Akka Extensions and comprises 2 basic components: an `Extension` and an `ExtensionId`.
Extensions will only be loaded once per `ActorSystem`, which will be managed by Akka.
You can choose to have your Extension loaded on-demand or at `ActorSystem` creation time through the Akka configuration.
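For illustration, a minimal sketch of the two components in Scala (the `CountExtension` name and its counter logic are made up for this sketch, not part of the surrounding docs):

```scala
import java.util.concurrent.atomic.AtomicLong
import akka.actor.{ ExtendedActorSystem, Extension, ExtensionId, ExtensionIdProvider }

// the Extension itself: plain state, instantiated once per ActorSystem
class CountExtensionImpl extends Extension {
  private val counter = new AtomicLong(0)
  def increment(): Long = counter.incrementAndGet()
}

// the ExtensionId: how the extension is looked up (and created exactly once)
object CountExtension extends ExtensionId[CountExtensionImpl] with ExtensionIdProvider {
  override def lookup = CountExtension
  override def createExtension(system: ExtendedActorSystem) = new CountExtensionImpl
}
```

Looking it up anywhere in the application is then just `CountExtension(system).increment()`.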
@@ -112,7 +112,7 @@ Java
## Library extensions
-A third part library may register it's extension for auto-loading on actor system startup by appending it to
+A third part library may register its extension for auto-loading on actor system startup by appending it to
`akka.library-extensions` in its `reference.conf`.
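For example, a library could append its (hypothetical) `ExtensionId` object like this in its `reference.conf`:

```
akka.library-extensions += "com.example.MyExtension"
```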


@@ -101,11 +101,11 @@ together they are both stored within the resulting `ByteString` instead of copyi
such as `drop` and `take` return `ByteString`s that still reference the original @scala[`Array`]@java[array], but just change the
offset and length that is visible. Great care has also been taken to make sure that the internal @scala[`Array`]@java[array] cannot be
modified. Whenever a potentially unsafe @scala[`Array`]@java[array] is used to create a new `ByteString` a defensive copy is created. If
-you require a `ByteString` that only blocks as much memory as necessary for it's content, use the `compact` method to
+you require a `ByteString` that only blocks as much memory as necessary for its content, use the `compact` method to
get a `CompactByteString` instance. If the `ByteString` represented only a slice of the original array, this will
result in copying all bytes in that slice.
-`ByteString` inherits all methods from `IndexedSeq`, and it also has some new ones. For more information, look up the `akka.util.ByteString` class and it's companion object in the ScalaDoc.
+`ByteString` inherits all methods from `IndexedSeq`, and it also has some new ones. For more information, look up the `akka.util.ByteString` class and its companion object in the ScalaDoc.
`ByteString` also comes with its own optimized builder and iterator classes `ByteStringBuilder` and
`ByteIterator` which provide extra features in addition to those of normal builders and iterators.
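As a quick sketch of these operations (standard `akka.util.ByteString` API, values made up):

```scala
import akka.util.ByteString

val full  = ByteString("hello-world")
val slice = full.drop(6).take(5) // still references the bytes of `full`; only offset/length change
val owned = slice.compact        // copies just the visible slice into its own array

// the optimized builder
val builder = ByteString.newBuilder
builder ++= ByteString("hello-")
builder ++= ByteString("world")
val rebuilt = builder.result()
```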


@@ -109,7 +109,7 @@ by Martin Kleppmann.
### Provided default serializers
Akka Persistence provides [Google Protocol Buffers](https://developers.google.com/protocol-buffers/) based serializers (using @ref:[Akka Serialization](serialization.md))
-for it's own message types such as `PersistentRepr`, `AtomicWrite` and snapshots. Journal plugin implementations
+for its own message types such as `PersistentRepr`, `AtomicWrite` and snapshots. Journal plugin implementations
*may* choose to use those provided serializers, or pick a serializer which suits the underlying database better.
@@@ note
@ -145,7 +145,7 @@ flexibility of the persisted vs. exposed types even more. However for now we wil
concerning only configuring the payload serializers.
By default the `payload` will be serialized using Java Serialization. This is fine for testing and initial phases
-of your development (while you're still figuring out things and the data will not need to stay persisted forever).
+of your development (while you're still figuring out things, and the data will not need to stay persisted forever).
However, once you move to production you should really *pick a different serializer for your payloads*.
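Doing so boils down to declaring a serializer and binding your payload types to it in configuration; a sketch with made-up class names:

```
akka.actor {
  serializers {
    flight-app = "com.example.FlightAppSerializer"
  }
  serialization-bindings {
    "com.example.FlightEvent" = flight-app
  }
}
```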
@@@ warning
@@ -222,7 +222,7 @@ needs to have an associated code which indicates if it is a window or aisle seat
**Solution:**
Adding fields is the most common change you'll need to apply to your messages so make sure the serialization format
-you picked for your payloads can handle it apropriately, i.e. such changes should be *binary compatible*.
+you picked for your payloads can handle it appropriately, i.e. such changes should be *binary compatible*.
This is achieved using the right serializer toolkit. In the following examples we will be using protobuf.
See also @ref:[how to add fields with Jackson](serialization-jackson.md#add-optional-field).
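The reading side then falls back to a default whenever the field is absent in old events; a sketch against a hypothetical generated `FlightAppModels.SeatReserved` class:

```scala
// tolerant read: events persisted before the field existed simply lack it
def seatType(seat: FlightAppModels.SeatReserved): String =
  if (seat.hasSeatType) seat.getSeatType
  else "Unknown" // default for old events
```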
@@ -236,8 +236,8 @@ Scala
Java
: @@snip [PersistenceSchemaEvolutionDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #protobuf-read-optional-model }
-Next we prepare an protocol definition using the protobuf Interface Description Language, which we'll use to generate
-the serializer code to be used on the Akka Serialization layer (notice that the schema aproach allows us to rename
+Next we prepare a protocol definition using the protobuf Interface Description Language, which we'll use to generate
+the serializer code to be used on the Akka Serialization layer (notice that the schema approach allows us to rename
fields, as long as the numeric identifiers of the fields do not change):
@@snip [FlightAppModels.proto](/akka-docs/src/test/../main/protobuf/FlightAppModels.proto) { #protobuf-read-optional-proto }
@@ -284,7 +284,7 @@ swiftly and refactor your models fearlessly as you go on with the project.
@@@ note
-Learn in-depth about the serialization engine you're using as it will impact how you can aproach schema evolution.
+Learn in-depth about the serialization engine you're using as it will impact how you can approach schema evolution.
Some operations are "free" in certain serialization formats (more often than not: removing/adding optional fields,
sometimes renaming fields etc.), while some other operations are strictly not possible.
@@ -328,7 +328,7 @@ changes in the message format.
**Situation:**
While investigating app performance you notice that insane amounts of `CustomerBlinked` events are being stored
-for every customer each time he/she blinks. Upon investigation you decide that the event does not add any value
+for every customer each time he/she blinks. Upon investigation, you decide that the event does not add any value
and should be deleted. You still have to be able to replay from a journal which contains those old CustomerBlinked events though.
**Naive solution - drop events in EventAdapter:**
@@ -353,7 +353,7 @@ In the just described technique we have saved the PersistentActor from receiving
out in the `EventAdapter`, however the event itself still was deserialized and loaded into memory.
This has two notable *downsides*:
-* first, that the deserialization was actually performed, so we spent some of out time budget on the
+* first, that the deserialization was actually performed, so we spent some of our time budget on the
deserialization, even though the event does not contribute anything to the persistent actors state.
* second, that we are *unable to remove the event class* from the system since the serializer still needs to create
the actual instance of it, as it does not know it will not be used.
@@ -361,7 +361,7 @@ the actual instance of it, as it does not know it will not be used.
The solution to these problems is to use a serializer that is aware of that event being no longer needed, and can notice
this before starting to deserialize the object.
-This aproach allows us to *remove the original class from our classpath*, which makes for less "old" classes lying around in the project.
+This approach allows us to *remove the original class from our classpath*, which makes for less "old" classes lying around in the project.
This can for example be implemented by using an `SerializerWithStringManifest`
(documented in depth in @ref:[Serializer with String Manifest](serialization.md#string-manifest-serializer)). By looking at the string manifest, the serializer can notice
that the type is no longer needed, and skip the deserialization all-together:
@@ -381,7 +381,7 @@ Java
: @@snip [PersistenceSchemaEvolutionDocTest.java](/akka-docs/src/test/java/jdocs/persistence/PersistenceSchemaEvolutionDocTest.java) { #string-serializer-skip-deleved-event-by-manifest }
The EventAdapter we implemented is aware of `EventDeserializationSkipped` events (our "Tombstones"),
-and emits and empty `EventSeq` whenever such object is encoutered:
+and emits and empty `EventSeq` whenever such object is encountered:
Scala
: @@snip [PersistenceSchemaEvolutionDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistenceSchemaEvolutionDocSpec.scala) { #string-serializer-skip-deleved-event-by-manifest-adapter }
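Sketched out, the two pieces could look as follows (the serializer `identifier` and the skipped event's class name are made up for this sketch):

```scala
import java.io.NotSerializableException
import akka.persistence.journal.{ EventAdapter, EventSeq }
import akka.serialization.SerializerWithStringManifest

case object EventDeserializationSkipped

class SkippingSerializer extends SerializerWithStringManifest {
  override def identifier: Int = 4242 // must be unique among the system's serializers
  override def manifest(o: AnyRef): String = o.getClass.getName

  override def toBinary(o: AnyRef): Array[Byte] =
    throw new IllegalArgumentException("tombstoned events are never written") // sketch only

  override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef = manifest match {
    // matching on the manifest string means the old class can be removed from the classpath
    case "com.example.CustomerBlinked" => EventDeserializationSkipped
    case other                         => throw new NotSerializableException(other)
  }
}

class SkippedEventsAwareAdapter extends EventAdapter {
  override def manifest(event: Any) = ""
  override def toJournal(event: Any) = event
  override def fromJournal(event: Any, manifest: String) = event match {
    case EventDeserializationSkipped => EventSeq.empty // contributes nothing to recovery
    case other                       => EventSeq.single(other)
  }
}
```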
@@ -448,7 +448,7 @@ from the Journal implementation to achieve this.
An example of a Journal which may implement this pattern is MongoDB, however other databases such as PostgreSQL
and Cassandra could also do it because of their built-in JSON capabilities.
-In this aproach, the `EventAdapter` is used as the marshalling layer: it serializes the events to/from JSON.
+In this approach, the `EventAdapter` is used as the marshalling layer: it serializes the events to/from JSON.
The journal plugin notices that the incoming event type is JSON (for example by performing a `match` on the incoming
event) and stores the incoming object directly.
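A sketch of such a marshalling adapter, here assuming spray-json and a made-up `SeatReserved` event (any JSON library with a tree model would do):

```scala
import akka.persistence.journal.{ EventAdapter, EventSeq }
import spray.json._

final case class SeatReserved(letter: String, row: Int)

class JsonDataModelAdapter extends EventAdapter with DefaultJsonProtocol {
  implicit val seatFormat: RootJsonFormat[SeatReserved] = jsonFormat2(SeatReserved)

  override def manifest(event: Any) = ""

  // hand the journal a JsValue; a JSON-capable plugin can store it natively
  override def toJournal(event: Any): JsValue = event match {
    case e: SeatReserved => e.toJson
  }

  override def fromJournal(event: Any, manifest: String): EventSeq = event match {
    case json: JsValue => EventSeq.single(json.convertTo[SeatReserved])
  }
}
```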
@@ -504,7 +504,7 @@ of our model).
![persistence-event-adapter-1-n.png](./images/persistence-event-adapter-1-n.png)
-The `EventAdapter` splits the incoming event into smaller more fine grained events during recovery.
+The `EventAdapter` splits the incoming event into smaller more fine-grained events during recovery.
During recovery however, we now need to convert the old `V1` model into the `V2` representation of the change.
Depending if the old event contains a name change, we either emit the `UserNameChanged` or we don't,
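In code, the splitting side of such an adapter might look like this sketch (event classes follow the naming used above; their fields are assumptions):

```scala
import akka.persistence.journal.{ EventAdapter, EventSeq }

// coarse V1 event and the fine-grained V2 events it is split into
final case class UserDetailsChanged(name: String, address: String)
final case class UserNameChanged(name: String)
final case class UserAddressChanged(address: String)

class UserEventsAdapter extends EventAdapter {
  override def manifest(event: Any) = ""
  override def toJournal(event: Any) = event

  // during recovery, emit only the fine-grained events the old event implies
  override def fromJournal(event: Any, manifest: String): EventSeq = event match {
    case UserDetailsChanged(null, address) => EventSeq(UserAddressChanged(address))
    case UserDetailsChanged(name, null)    => EventSeq(UserNameChanged(name))
    case UserDetailsChanged(name, address) =>
      EventSeq(UserNameChanged(name), UserAddressChanged(address))
    case other => EventSeq.single(other)
  }
}
```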


@@ -135,7 +135,7 @@ To summarize the fallacy of transparent remoting:
* Was used in CORBA, RMI, and DCOM, and all of them failed. Those problems were noted by [Waldo et al already in 1994](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.7628)
* Partial failure is a major problem. Remote calls introduce uncertainty whether the function was invoked or not.
Typically handled by using timeouts but the client can't always know the result of the call.
-* Latency of calls over a network are several order of magnitudes longer than latency of local calls,
+* Latency of calls over a network are several orders of magnitudes longer than latency of local calls,
which can be more than surprising if encoded as an innocent looking local method call.
* Remote invocations have much lower throughput due to the need of serializing the
data and you can't just pass huge datasets in the same way.
@@ -376,7 +376,7 @@ The @ref:[Scheduler](../scheduler.md#schedule-periodically) documentation descri
`startTimerWithFixedDelay`.
The deprecated `schedule` method had the same semantics as `scheduleAtFixedRate`, but since that can result in
-bursts of scheduled tasks or messages after long garbage collection pauses and in worst case cause undesired
+bursts of scheduled tasks or messages after long garbage collection pauses and in the worst case cause undesired
load on the system `scheduleWithFixedDelay` is often preferred.
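For illustration, the fixed-delay variant with Akka Typed timers (a minimal sketch; the `Tick` message is made up):

```scala
import akka.actor.typed.Behavior
import akka.actor.typed.scaladsl.Behaviors
import scala.concurrent.duration._

case object Tick

// each next Tick is scheduled only after the previous one fired,
// so a long GC pause cannot produce a burst of catch-up messages
val ticking: Behavior[Tick.type] =
  Behaviors.withTimers { timers =>
    timers.startTimerWithFixedDelay(Tick, 1.second)
    Behaviors.receiveMessage(_ => Behaviors.same)
  }
```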
For the same reason the following methods have also been deprecated:
@@ -604,7 +604,7 @@ In 2.5.x the Cluster Receptionist was using the shared Distributed Data extensio
undesired configuration changes if the application was also using that and changed for example the `role`
configuration.
-In 2.6.x the Cluster Receptionist is using it's own independent instance of Distributed Data.
+In 2.6.x the Cluster Receptionist is using its own independent instance of Distributed Data.
This means that the receptionist information will not be disseminated between 2.5.x and 2.6.x nodes during a
rolling update from 2.5.x to 2.6.x if you use Akka Typed. See @ref:[rolling updates with typed Cluster Receptionist](../additional/rolling-updates.md#akka-typed-with-receptionist-or-cluster-receptionist)