fix the the typo

Patrik Nordwall 2016-03-18 17:06:34 +01:00
parent 53a877d76d
commit 137c4c8b3d
12 changed files with 13 additions and 13 deletions


@@ -109,7 +109,7 @@ case object OptimalSizeExploringResizer {
  * The memory usage is O(n) where n is the number of sizes
  * you allow, i.e. upperBound - lowerBound.
  *
- * For documentation about the the parameters, see the reference.conf -
+ * For documentation about the parameters, see the reference.conf -
  * akka.actor.deployment.default.optimal-size-exploring-resizer
  *
  */
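The `reference.conf` section named in the scaladoc above can be overridden per deployment path in `application.conf`. A minimal sketch (the deployment path and the values are illustrative, not recommendations; the key names follow Akka's resizer configuration):

```hocon
# Illustrative: enable the exploring resizer for a hypothetical pool at /user/worker-pool.
akka.actor.deployment {
  /worker-pool {
    router = round-robin-pool
    optimal-size-exploring-resizer {
      enabled = on
      # These correspond to the lowerBound/upperBound mentioned above;
      # memory usage grows with upper-bound - lower-bound.
      lower-bound = 1
      upper-bound = 20
    }
  }
}
```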


@@ -592,7 +592,7 @@ object ClusterReceptionist {
  * the `sender()`, as seen by the destination actor, is not the client itself.
  * The `sender()` of the response messages, as seen by the client, is `deadLetters`
  * since the client should normally send subsequent messages via the `ClusterClient`.
- * It is possible to pass the the original sender inside the reply messages if
+ * It is possible to pass the original sender inside the reply messages if
  * the client is supposed to communicate directly to the actor in the cluster.
  *
  */
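One way to pass the original sender along, sketched here with hypothetical message types (not part of the Akka API): the destination embeds a reference to itself in the reply, so the client can message it directly afterwards instead of going through the `ClusterClient`.

```scala
import akka.actor.{ Actor, ActorRef }

// Hypothetical request/reply types: the reply carries an ActorRef so the
// client can talk to the service actor directly after the first exchange.
final case class Hello(name: String)
final case class HelloAck(greeting: String, serviceRef: ActorRef)

class Service extends Actor {
  def receive = {
    case Hello(name) =>
      // sender() here is a receptionist-side proxy, not the client itself,
      // so we embed self in the reply for direct follow-up communication.
      sender() ! HelloAck(s"hello $name", self)
  }
}
```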


@@ -56,7 +56,7 @@ to avoid inbound connections from other cluster nodes to the client, i.e.
 the ``sender()``, as seen by the destination actor, is not the client itself.
 The ``sender()`` of the response messages, as seen by the client, is ``deadLetters``
 since the client should normally send subsequent messages via the ``ClusterClient``.
-It is possible to pass the the original sender inside the reply messages if
+It is possible to pass the original sender inside the reply messages if
 the client is supposed to communicate directly to the actor in the cluster.
 While establishing a connection to a receptionist the ``ClusterClient`` will buffer


@@ -560,7 +560,7 @@ saved snapshot matches the specified ``SnapshotSelectionCriteria`` will replay a
 Since it is acceptable for some applications to not use any snapshotting, it is legal to not configure a snapshot store.
 However Akka will log a warning message when this situation is detected and then continue to operate until
-an actor tries to store a snapshot, at which point the the operation will fail (by replying with an ``SaveSnapshotFailure`` for example).
+an actor tries to store a snapshot, at which point the operation will fail (by replying with an ``SaveSnapshotFailure`` for example).
 Note that :ref:`cluster_sharding_java` is using snapshots, so if you use Cluster Sharding you need to define a snapshot store plugin.
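On the actor side, a ``SaveSnapshotFailure`` arrives as an ordinary message. A minimal sketch (assuming classic ``akka-persistence`` on the classpath; the persistence id and actor name are illustrative):

```scala
import akka.actor.ActorLogging
import akka.persistence.{ PersistentActor, SaveSnapshotFailure, SaveSnapshotSuccess }

class Counter extends PersistentActor with ActorLogging {
  override def persistenceId = "counter-1"

  private var count = 0

  override def receiveRecover = { case n: Int => count += n }

  override def receiveCommand = {
    case n: Int =>
      persist(n) { m => count += m; saveSnapshot(count) }
    case SaveSnapshotSuccess(meta) =>
      log.debug("snapshot saved at seqNr {}", meta.sequenceNr)
    case SaveSnapshotFailure(meta, cause) =>
      // Without a configured snapshot store, this is where the failure surfaces.
      log.warning("snapshot failed at seqNr {}: {}", meta.sequenceNr, cause.getMessage)
  }
}
```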


@@ -291,7 +291,7 @@ case: if the very first element is not yet available.
 We introduce a boolean variable ``waitingFirstValue`` to denote whether the first element has been provided or not
 (alternatively an :class:`Optional` can be used for ``currentValue`` or if the element type is a subclass of Object
 a null can be used with the same purpose). In the downstream ``onPull()`` handler the difference from the previous
-version is that we check if we have received the the first value and only emit if we have. This leads to that when the
+version is that we check if we have received the first value and only emit if we have. This leads to that when the
 first element comes in we must check if there possibly already was demand from downstream so that we in that case can
 push the element directly.


@@ -373,7 +373,7 @@ actor in cluster.
 In 2.4 the ``sender()`` of the response messages, as seen by the client, is ``deadLetters``
 since the client should normally send subsequent messages via the ``ClusterClient``.
-It is possible to pass the the original sender inside the reply messages if
+It is possible to pass the original sender inside the reply messages if
 the client is supposed to communicate directly to the actor in the cluster.
 Akka Persistence


@@ -56,7 +56,7 @@ to avoid inbound connections from other cluster nodes to the client, i.e.
 the ``sender()``, as seen by the destination actor, is not the client itself.
 The ``sender()`` of the response messages, as seen by the client, is ``deadLetters``
 since the client should normally send subsequent messages via the ``ClusterClient``.
-It is possible to pass the the original sender inside the reply messages if
+It is possible to pass the original sender inside the reply messages if
 the client is supposed to communicate directly to the actor in the cluster.
 While establishing a connection to a receptionist the ``ClusterClient`` will buffer


@@ -284,7 +284,7 @@ case: if the very first element is not yet available.
 We introduce a boolean variable ``waitingFirstValue`` to denote whether the first element has been provided or not
 (alternatively an :class:`Option` can be used for ``currentValue`` or if the element type is a subclass of AnyRef
 a null can be used with the same purpose). In the downstream ``onPull()`` handler the difference from the previous
-version is that we check if we have received the the first value and only emit if we have. This leads to that when the
+version is that we check if we have received the first value and only emit if we have. This leads to that when the
 first element comes in we must check if there possibly already was demand from downstream so that we in that case can
 push the element directly.
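The logic described in this doc passage can be sketched as a ``GraphStage`` along the lines of the documented "hold" example (stage and port names here are illustrative):

```scala
import akka.stream.{ Attributes, FlowShape, Inlet, Outlet }
import akka.stream.stage.{ GraphStage, GraphStageLogic, InHandler, OutHandler }

// Illustrative "hold" stage: repeats the latest seen element on every pull,
// but must not emit before the first element has arrived.
class HoldWithWait[T] extends GraphStage[FlowShape[T, T]] {
  val in = Inlet[T]("HoldWithWait.in")
  val out = Outlet[T]("HoldWithWait.out")
  override val shape = FlowShape(in, out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) {
      private var currentValue: T = _
      private var waitingFirstValue = true

      setHandler(in, new InHandler {
        override def onPush(): Unit = {
          currentValue = grab(in)
          if (waitingFirstValue) {
            waitingFirstValue = false
            // Downstream may already have signalled demand while we were
            // waiting, so push the first element directly in that case.
            if (isAvailable(out)) push(out, currentValue)
          }
          pull(in)
        }
      })

      setHandler(out, new OutHandler {
        // Only emit once the first value has been received.
        override def onPull(): Unit =
          if (!waitingFirstValue) push(out, currentValue)
      })

      override def preStart(): Unit = pull(in)
    }
}
```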


@@ -85,7 +85,7 @@ class PrepareRequestsSpec extends AkkaSpec {
 val entitySub = entityProbe.expectSubscription()
 // the bug happens when both the client has signalled demand
-// and the the streamed entity has
+// and the streamed entity has
 upstreamSub.request(1)
 entitySub.request(1)


@@ -35,12 +35,12 @@ abstract class TestResponse(_response: HttpResponse, awaitAtMost: FiniteDuration
 lazy val response: HttpResponse = _response.withEntity(entity)
 /**
- * Returns the media-type of the the response's content-type
+ * Returns the media-type of the response's content-type
  */
 def mediaType: MediaType = extractFromResponse(_.entity.contentType.mediaType)
 /**
- * Returns a string representation of the media-type of the the response's content-type
+ * Returns a string representation of the media-type of the response's content-type
  */
 def mediaTypeString: String = mediaType.toString


@@ -14,7 +14,7 @@ import scala.collection.mutable.LinkedHashSet
 /**
  * INTERNAL API
  *
- * Detect corrupt event stream during replay. It uses the the writerUuid and the
+ * Detect corrupt event stream during replay. It uses the writerUuid and the
  * sequenceNr in the replayed events to find events emitted by overlapping writers.
  */
 private[akka] object ReplayFilter {


@@ -111,7 +111,7 @@ object Source {
  * `ConcurrentModificationException` or other more subtle errors may occur.
  */
 def from[O](iterable: java.lang.Iterable[O]): javadsl.Source[O, NotUsed] = {
-  // this adapter is not immutable if the the underlying java.lang.Iterable is modified
+  // this adapter is not immutable if the underlying java.lang.Iterable is modified
   // but there is not anything we can do to prevent that from happening.
   // ConcurrentModificationException will be thrown in some cases.
   val scalaIterable = new immutable.Iterable[O] {
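The caveat in that comment can be seen with a plain wrapper, no Akka required (a standalone sketch; the names here are illustrative, not Akka's internals):

```scala
import scala.collection.immutable

val javaList = new java.util.ArrayList[Int]()
javaList.add(1)
javaList.add(2)

// An immutable.Iterable facade over a mutable java.lang.Iterable: the facade
// does no copying, so later mutation of javaList is visible through it
// (and mutating it during iteration can throw ConcurrentModificationException).
val scalaIterable: immutable.Iterable[Int] = new immutable.Iterable[Int] {
  override def iterator: Iterator[Int] = new Iterator[Int] {
    private val it = javaList.iterator()
    override def hasNext: Boolean = it.hasNext
    override def next(): Int = it.next()
  }
}

println(scalaIterable.toList) // List(1, 2)
javaList.add(3)
println(scalaIterable.toList) // List(1, 2, 3) -- the "immutable" view changed
```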