Fix paradox anchor references (#27226)

* Fix paradox anchor references
  Found by https://github.com/lightbend/paradox/pull/326
* Remove duplicate anchors in paradox docs
  Found by https://github.com/lightbend/paradox/pull/328

This commit is contained in:
parent c636058d0d
commit d758e746d1

27 changed files with 18 additions and 66 deletions
@@ -350,7 +350,6 @@ Java
 The implementations shown above are the defaults provided by the @scala[`Actor` trait.] @java[`AbstractActor` class.]

-<a id="actor-lifecycle"></a>
 ### Actor Lifecycle

@@ -442,7 +441,6 @@ using `context.unwatch(target)`. This works even if the `Terminated`
 message has already been enqueued in the mailbox; after calling `unwatch`
 no `Terminated` message for that actor will be processed anymore.

-<a id="start-hook"></a>
 ### Start Hook

 Right after starting the actor, its `preStart` method is invoked.
@@ -501,7 +499,6 @@ See @ref:[Discussion: Message Ordering](general/message-delivery-reliability.md#

 @@@

-<a id="stop-hook"></a>
 ### Stop Hook

 After stopping an actor, its `postStop` hook is called, which may be used
@@ -915,7 +912,6 @@ The timers are bound to the lifecycle of the actor that owns it, and thus are ca
 automatically when it is restarted or stopped. Note that the `TimerScheduler` is not thread-safe,
 i.e. it must only be used within the actor that owns it.

-<a id="stopping-actors"></a>
 ## Stopping actors

 Actors are stopped by invoking the `stop` method of an `ActorRefFactory`,
@@ -1038,7 +1034,6 @@ message, i.e. not for top-level actors.

 @@@

-<a id="coordinated-shutdown"></a>
 ### Coordinated Shutdown

 There is an extension named `CoordinatedShutdown` that will stop certain actors and
@@ -1183,7 +1178,6 @@ Java

 See this @extref[Unnested receive example](github:akka-docs/src/test/scala/docs/actor/UnnestedReceives.scala).

-<a id="stash"></a>
 ## Stash

 The @scala[`Stash` trait] @java[`AbstractActorWithStash` class] enables an actor to temporarily stash away messages
@@ -13,7 +13,7 @@ to see what this looks like in practice.

 For the JVM to run well in a Docker container, there are some general (not Akka specific) parameters that might need tuning:

-### Resource limits
+### Resource constraints

 Docker allows [constraining each container's resource usage](https://docs.docker.com/config/containers/resource_constraints/).
@@ -424,7 +424,7 @@ Java

 Note that stopped entities will be started again when a new message is targeted to the entity.

-If 'on stop' backoff supervision strategy is used, a final termination message must be set and used for passivation, see @ref:[Supervision](general/supervision.md#Sharding)
+If 'on stop' backoff supervision strategy is used, a final termination message must be set and used for passivation, see @ref:[Supervision](general/supervision.md#sharding)

 ## Graceful Shutdown

@@ -714,10 +714,10 @@ unreachable cluster node has been downed and removed.
 If you encounter suspicious false positives when the system is under load you should
 define a separate dispatcher for the cluster actors as described in [Cluster Dispatcher](#cluster-dispatcher).

-@@@ div { .group-scala }

 ## How to Test

+@@@ div { .group-scala }

 @ref:[Multi Node Testing](multi-node-testing.md) is useful for testing cluster applications.

 Set up your project according to the instructions in @ref:[Multi Node Testing](multi-node-testing.md) and @ref:[Multi JVM Testing](multi-jvm-testing.md), i.e.
@@ -772,8 +772,6 @@ the actor system for a specific role. This can also be used to grab the `akka.ac

 @@@ div { .group-java }

-## How to Test
-
 Currently testing with the `sbt-multi-jvm` plugin is only documented for Scala.
 Go to the corresponding Scala version of this page for details.

@@ -867,7 +865,6 @@ You can enable verbose logging of cluster events at info level, e.g. for tempora
 akka.cluster.log-info-verbose = on
 ```

-<a id="cluster-dispatcher"></a>
 ### Cluster Dispatcher

 Under the hood the cluster extension is implemented with actors. To protect them against
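For reference, the separate dispatcher suggested in this hunk is configured roughly like this (a sketch using the Akka 2.5-era configuration keys; `cluster-dispatcher` is an illustrative name, and the executor sizing is an assumption — verify against your Akka version's reference configuration):

```hocon
# Hedged sketch: run the cluster actors on their own dispatcher so they
# are not starved when the default dispatcher is under load.
akka.cluster.use-dispatcher = cluster-dispatcher

cluster-dispatcher {
  type = "Dispatcher"
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    parallelism-max = 4
  }
}
```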
@@ -92,7 +92,7 @@ Java
 Future<SimpleServiceDiscovery.Resolved> result = discovery.lookup("service-name", Duration.create("500 millis"));
 ```

-### How it works
+### DNS records used

 DNS discovery will use either A/AAAA records or SRV records depending on whether a `Simple` or `Full` lookup is issued.
 The advantage of SRV records is that they can include a port.
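Selecting the DNS implementation for the lookups shown above is a configuration switch; a minimal sketch (the `akka-dns` method name is the one used by Akka Discovery at the time of this commit — verify against your version):

```hocon
# Hedged sketch: select the DNS-based implementation for service discovery.
akka.discovery {
  method = akka-dns
}
```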
@@ -42,6 +42,7 @@ Scala

 Java
 : @@snip [DispatcherDocTest.java](/akka-docs/src/test/java/jdocs/dispatcher/DispatcherDocTest.java) { #lookup }

 ## Setting the dispatcher for an Actor

 So in case you want to give your `Actor` a different dispatcher than the default, you need to do two things, of which the first
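The first of those two things is a dispatcher definition in configuration; a sketch with a made-up name `my-dispatcher` (the executor settings mirror the examples in the Akka dispatcher docs and are illustrative):

```hocon
# Hedged sketch: a custom dispatcher that an actor can reference by name,
# e.g. via .withDispatcher("my-dispatcher") or a deployment config entry.
my-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    parallelism-factor = 2.0
    parallelism-max = 10
  }
  # Number of messages processed per actor before the thread moves on.
  throughput = 100
}
```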
@@ -355,7 +355,6 @@ types that support both updates and removals, for example `ORMap` or `ORSet`.

 @@@

-<a id="delta-crdt"></a>
 ### delta-CRDT

 [Delta State Replicated Data Types](http://arxiv.org/abs/1603.01529)
@@ -739,7 +738,6 @@ This would be possible if a node with durable data didn't participate in the pru
 be stopped for longer time than this duration and if it is joining again after this
 duration its data should first be manually removed (from the lmdb directory).

-<a id="crdt-garbage"></a>
 ### CRDT Garbage

 One thing that can be problematic with CRDTs is that some data types accumulate history (garbage).
@@ -122,7 +122,6 @@ Java
 This classifier always takes time proportional to the number of
 subscriptions, independent of how many actually match.

-<a id="actor-classification"></a>
 ### Actor Classification

 This classification was originally developed specifically for implementing
@@ -153,7 +152,6 @@ Java
 This classifier is still generic in the event type, and it is efficient for
 all use cases.

-<a id="event-stream"></a>
 ## Event Stream

 The event stream is the main event bus of each actor system: it is used for
@@ -31,7 +31,7 @@ One important consequence of offering only features that can be relied upon is t

 This means that sending JVM objects into a stream that need to be cleaned up will require the user to ensure that this happens outside of the Akka Streams facilities (e.g. by cleaning them up after a timeout or when their results are observed on the stream output, or by using other means like finalizers etc.).

-### Resulting Implementation Constraints
+### Resulting Implementation Considerations

 Compositionality entails reusability of partial stream topologies, which led us to the lifted approach of describing data flows as (partial) graphs that can act as composite sources, flows (a.k.a. pipes) and sinks of data. These building blocks shall then be freely shareable, with the ability to combine them freely to form larger graphs. The representation of these pieces must therefore be an immutable blueprint that is materialized in an explicit step in order to start the stream processing. The resulting stream processing engine is then also immutable in the sense of having a fixed topology that is prescribed by the blueprint. Dynamic networks need to be modeled by explicitly using the Reactive Streams interfaces for plugging different engines together.
@@ -207,7 +207,6 @@ to recover before the persistent actor is started.

 > <a id="1" href="#^1">[1]</a> A failure can be indicated in two different ways; by an actor stopping or crashing.

-<a id="supervision-strategies"></a>
 #### Supervision strategies

 There are two basic supervision strategies available for backoff:
@@ -87,7 +87,6 @@ not error handling. In other words, data may still be lost, even if every write

 @@@

-<a id="bytestring"></a>
 ### ByteString

 To maintain isolation, actors should communicate with immutable objects only. `ByteString` is an
@@ -204,6 +204,7 @@ akka {
 }
 ```

+<a id="logging-remote"></a>
 ### Auxiliary remote logging options

 If you want to see all messages that are sent through remoting at DEBUG log level, use the following config option. Note that this logs the messages as they are sent by the transport layer, not by an actor.
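The config option referred to in this hunk is, for classic remoting, along these lines (a sketch; the key names come from the classic Akka remoting reference configuration — verify for your version):

```hocon
# Hedged sketch: log remote messages at DEBUG as they pass the transport
# layer (not as they are sent by actors).
akka.remote {
  log-sent-messages = on
  log-received-messages = on
}
```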
@@ -324,7 +325,6 @@ Instead log messages are printed to stdout (System.out). The default log level f
 stdout logger is `WARNING` and it can be silenced completely by setting
 `akka.stdout-loglevel=OFF`.

-<a id="slf4j"></a>
 ## SLF4J

 Akka provides a logger for [SLF4J](http://www.slf4j.org/). This module is available in the 'akka-slf4j.jar'.
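Wiring Akka's logging to SLF4J, as this hunk's context describes, is done in configuration; a common sketch using the standard keys from the Akka logging docs (the `DEBUG` level is illustrative):

```hocon
# Hedged sketch: route Akka logging through SLF4J and silence the fallback
# stdout logger once the real logger has started.
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
  stdout-loglevel = "OFF"
}
```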
@@ -10,9 +10,6 @@ Persistent FSMs are part of Akka persistence, you must add the following depende
 version="$akka.version$"
 }

-<a id="persistent-fsm"></a>
 ## Persistent FSM

 @@@ warning

 Persistent FSM is no longer actively developed and will be replaced by @ref[Akka Typed Persistence](typed/persistence.md). It is not advised
@@ -20,7 +17,6 @@ to build new applications with Persistent FSM.

 @@@

 @scala[`PersistentFSM`]@java[`AbstractPersistentFSM`] handles the incoming messages in an FSM like fashion.
 Its internal state is persisted as a sequence of changes, later referred to as domain events.
 Relationship between incoming messages, FSM's states and transitions, persistence of domain events is defined by a DSL.
@@ -4,7 +4,6 @@ Storage backends for journals and snapshot stores are pluggable in the Akka pers
 A directory of persistence journal and snapshot store plugins is available at the Akka Community Projects page, see [Community plugins](http://akka.io/community/)
 This documentation describes how to build a new storage backend.

-<a id="journal-plugin-api"></a>
 ### Journal plugin API

 A journal plugin extends `AsyncWriteJournal`.
@@ -68,7 +68,6 @@ The persistence extension comes with a "local" snapshot storage plugin, which wr
 * *Event sourcing*. Based on the building blocks described above, Akka persistence provides abstractions for the
 development of event sourced applications (see section [Event sourcing](#event-sourcing)).

-<a id="event-sourcing"></a>
 ## Event sourcing

 See an [introduction to EventSourcing](https://msdn.microsoft.com/en-us/library/jj591559.aspx), what follows is
@@ -161,7 +160,6 @@ behavior is corrupted.

 @@@

-<a id="recovery"></a>
 ### Recovery

 By default, a persistent actor is automatically recovered on start and on restart by replaying journaled messages.
@@ -247,7 +245,6 @@ unused `persistenceId`.
 If there is a problem with recovering the state of the actor from the journal, `onRecoveryFailure`
 is called (logging the error by default) and the actor will be stopped.

-<a id="internal-stash"></a>
 ### Internal stash

 The persistent actor has a private @ref:[stash](actors.md#stash) for internally caching incoming messages during
@@ -384,7 +381,6 @@ The callback will not be invoked if the actor is restarted (or stopped) in betwe

 @@@

-<a id="nested-persist-calls"></a>
 ### Nested persist calls

 It is possible to call `persist` and `persistAsync` inside their respective callback blocks and they will properly
@@ -444,7 +440,6 @@ the Actor's receive block (or methods synchronously invoked from there).

 @@@

-<a id="failures"></a>
 ### Failures

 If persistence of an event fails, `onPersistFailure` will be invoked (logging the error by default),
@@ -485,7 +480,6 @@ The recovery of a persistent actor will therefore never be done partially with o
 Some journals may not support atomic writes of several events and they will then reject the `persistAll`
 command, i.e. `onPersistRejected` is called with an exception (typically `UnsupportedOperationException`).

-<a id="batch-writes"></a>
 ### Batch writes

 In order to optimize throughput when using `persistAsync`, a persistent actor
@@ -597,7 +591,6 @@ Scala

 Java
 : @@snip [LambdaPersistenceDocTest.java](/akka-docs/src/test/java/jdocs/persistence/LambdaPersistenceDocTest.java) { #safe-shutdown-example-good }

-<a id="replay-filter"></a>
 ### Replay Filter

 There could be cases where event streams are corrupted and multiple writers (i.e. multiple persistent actor instances)
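The replay filter mentioned in this hunk is configured under the journal's `replay-filter` section; a sketch with the documented keys (the values shown match the defaults described in the Akka persistence docs of this era — verify against your version):

```hocon
# Hedged sketch: detect/repair corrupted event streams caused by multiple
# writers. Other documented modes include off, fail and warn.
akka.persistence.journal.leveldb.replay-filter {
  mode = repair-by-discard-old
  window-size = 100
  max-old-writers = 10
}
```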
@@ -629,7 +622,6 @@ akka.persistence.journal.leveldb.replay-filter {
 }
 ```

-<a id="snapshots"></a>
 ## Snapshots

 As you model your domain using actors, you may notice that some actors may be prone to accumulating extremely long event logs and experiencing long recovery times. Sometimes, the right approach may be to split out into a set of shorter lived actors. However, when this is not an option, you can use snapshots to reduce recovery times drastically.
@@ -731,7 +723,6 @@ around this. For more details see @java[[Managing Data Persistence](https://www.
 @java[[Persistent Entity](https://www.lagomframework.com/documentation/current/java/PersistentEntity.html)]
 @scala[[Persistent Entity](https://www.lagomframework.com/documentation/current/scala/PersistentEntity.html)] in the Lagom documentation.

-<a id="at-least-once-delivery"></a>
 ## At-Least-Once Delivery

 To send messages with at-least-once delivery semantics to destinations you can @scala[mix-in `AtLeastOnceDelivery` trait to your `PersistentActor`]@java[extend the `AbstractPersistentActorWithAtLeastOnceDelivery` class instead of `AbstractPersistentActor`]
@@ -840,7 +831,6 @@ not accept more messages and it will throw `AtLeastOnceDelivery.MaxUnconfirmedMe
 The default value can be configured with the `akka.persistence.at-least-once-delivery.max-unconfirmed-messages`
 configuration key. The method can be overridden by implementation classes to return non-default values.

-<a id="event-adapters"></a>
 ## Event Adapters

 In long running projects using event sourcing sometimes the need arises to detach the data model from the domain model
@@ -885,9 +875,6 @@ For more advanced schema evolution techniques refer to the @ref:[Persistence - S

 @@@

-<a id="persistent-fsm"></a>
-
-<a id="storage-plugins"></a>
 ## Storage plugins

 Storage backends for journals and snapshot stores are pluggable in the Akka persistence extension.
@@ -949,10 +936,8 @@ akka {
 }
 ```

-<a id="pre-packaged-plugins"></a>
 ## Pre-packaged plugins

-<a id="local-leveldb-journal"></a>
 ### Local LevelDB journal

 The LevelDB journal plugin config entry is `akka.persistence.journal.leveldb`. It writes messages to a local LevelDB
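Selecting the LevelDB journal named in this hunk takes two config lines; a sketch (the `dir` value is an illustrative path):

```hocon
# Hedged sketch: use the local LevelDB journal plugin and point it at a
# directory of your choosing.
akka.persistence.journal.plugin = "akka.persistence.journal.leveldb"
akka.persistence.journal.leveldb.dir = "target/example/journal"
```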
@@ -982,7 +967,6 @@ this end, LevelDB offers a special journal compaction function that is exposed v

 @@snip [PersistencePluginDocSpec.scala](/akka-docs/src/test/scala/docs/persistence/PersistencePluginDocSpec.scala) { #compaction-intervals-config }

-<a id="shared-leveldb-journal"></a>
 ### Shared LevelDB journal

 A LevelDB instance can also be shared by multiple actor systems (on the same or on different nodes). This, for
@@ -1032,7 +1016,6 @@ Java
 Internal journal commands (sent by persistent actors) are buffered until injection completes. Injection is idempotent
 i.e. only the first injection is used.

-<a id="local-snapshot-store"></a>
 ### Local snapshot store

 The local snapshot store plugin config entry is `akka.persistence.snapshot-store.local`. It writes snapshot files to
@@ -1048,7 +1031,6 @@ directory. This can be changed by configuration where the specified path can be
 Note that it is not mandatory to specify a snapshot store plugin. If you don't use snapshots
 you don't have to configure it.

-<a id="persistence-plugin-proxy"></a>
 ### Persistence Plugin Proxy

 A persistence plugin proxy allows sharing of journals and snapshot stores across multiple actor systems (on the same or
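Enabling the local snapshot store and overriding its directory, as the hunk above describes, looks roughly like this (path illustrative):

```hocon
# Hedged sketch: enable the local snapshot store plugin and change the
# directory it writes snapshot files to.
akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot-store.local"
akka.persistence.snapshot-store.local.dir = "target/example/snapshots"
```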
@@ -1086,7 +1068,6 @@ The proxied persistence plugin can (and should) be configured using its original

 @@@

-<a id="custom-serialization"></a>
 ## Custom serialization

 Serialization of snapshots and payloads of `Persistent` messages is configurable with Akka's
@@ -120,7 +120,7 @@ Artery has the same functionality as classic remoting and you should normally on
 configuration to switch.
 To switch a full cluster restart is required and any overrides for classic remoting need to be ported to Artery configuration.

-Artery defaults to TCP (see @ref:[selected transport](#selecting-a-transport)) which is a good start
+Artery defaults to TCP (see @ref:[selected transport](../remoting-artery.md#selecting-a-transport)) which is a good start
 when migrating from classic remoting.

 The protocol part in the Akka `Address`, for example `"akka.tcp://actorSystemName@10.0.0.1:2552/user/actorName"`
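Switching to Artery over TCP, as this hunk's context recommends, is a small config change; a sketch with the keys from the Akka remoting docs of this era (hostname and port are illustrative — verify the keys against your version):

```hocon
# Hedged sketch: enable Artery remoting with the TCP transport.
akka.remote.artery {
  enabled = on
  transport = tcp
  canonical.hostname = "127.0.0.1"
  canonical.port = 25520
}
```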
@@ -20,7 +20,7 @@ is completely different. It will require a full cluster shutdown and new startup

 ### 2.5.0 Several changes in minor release

-See @ref:[migration guide](migration-guide-2.4.x-2.5.x.md#rolling-update) when updating from 2.4.x to 2.5.x.
+See [migration guide](https://doc.akka.io/docs/akka/2.5/project/migration-guide-2.4.x-2.5.x.html#rolling-update) when updating from 2.4.x to 2.5.x.

 ### 2.5.10 Joining regression

@@ -261,7 +261,6 @@ Scala

 Java
 : @@snip [RemoteDeploymentDocTest.java](/akka-docs/src/test/java/jdocs/remoting/RemoteDeploymentDocTest.java) { #deploy }

-<a id="remote-deployment-whitelist"></a>
 ### Remote deployment whitelist

 As remote deployment can potentially be abused by both users and even attackers a whitelist feature
@@ -445,7 +444,6 @@ To be notified when the remoting subsystem has been shut down, listen to `Remot

 To intercept generic remoting related errors, listen to `RemotingErrorEvent` which holds the `Throwable` cause.

-<a id="remote-security"></a>
 ## Remote Security

 An `ActorSystem` should not be exposed via Akka Remote over plain TCP to an untrusted network (e.g. Internet).
@@ -607,7 +605,6 @@ marking them `PossiblyHarmful` so that a client cannot forge them.

 @@@

-<a id="remote-configuration"></a>
 ## Remote Configuration

 There are lots of configuration properties that are related to remoting in Akka. We refer to the
@@ -683,7 +683,6 @@ Note that these special messages, except for the `Broadcast` message, are only h
 self contained router actors and not by the `akka.routing.Router` component described
 in [A Simple Router](#simple-router).

-<a id="broadcast-messages"></a>
 ### Broadcast Messages

 A `Broadcast` message can be used to send a message to *all* of a router's routees. When a router
@@ -909,7 +908,6 @@ routers were implemented with normal actors. Fortunately all of this complexity
 consumers of the routing API. However, it is something to be aware of when implementing your own
 routers.

-<a id="custom-router"></a>
 ## Custom Router

 You can create your own router should you not find any of the ones provided by Akka sufficient for your needs.
@@ -315,7 +315,7 @@ different settings for remote messages and persisted events.

 @@snip [config](/akka-serialization-jackson/src/test/scala/doc/akka/serialization/jackson/SerializationDocSpec.scala) { #several-config }

-## Additional configuration
+## Additional features

 Additional Jackson serialization features can be enabled/disabled in configuration. The default values from
 Jackson are used aside from the following that are changed in Akka's default configuration.
@@ -2,7 +2,7 @@

 Splits each element of input into multiple downstreams using a function

-@ref[Fan-out operators](../index.md#fan-out-operators)
+@ref[Fan-out operators](index.md#fan-out-operators)

 ## Signature

@@ -12,7 +12,6 @@ To use Akka Streams, add the module to your project:

 ## Introduction

-<a id="core-concepts"></a>
 ## Core concepts

 Akka Streams is a library to process and transfer a sequence of elements using bounded buffer space. This
@@ -57,7 +56,6 @@ This way they can slow down a fast producer without blocking its thread. This is
 design, since entities that need to wait (a fast producer waiting on a slow consumer) will not block the thread but
 can hand it back for further use to an underlying thread-pool.

-<a id="defining-and-running-streams"></a>
 ## Defining and running streams

 Linear processing pipelines can be expressed in Akka Streams using the following core abstractions:
@@ -256,7 +254,6 @@ it will have to abide to this back-pressure by applying one of the below strateg
 As we can see, this scenario effectively means that the `Subscriber` will *pull* the elements from the Publisher –
 this mode of operation is referred to as pull-based back-pressure.

-<a id="stream-materialization"></a>
 ## Stream Materialization

 When constructing flows and graphs in Akka Streams think of them as preparing a blueprint, an execution plan.
@@ -281,7 +278,6 @@ yet will materialize that operator multiple times.

 @@@

-<a id="operator-fusion"></a>
 ### Operator Fusion

 By default, Akka Streams will fuse the stream operators. This means that the processing steps of a flow or
@@ -245,7 +245,6 @@ Scala


-<a id="predefined-shapes"></a>
 ## Predefined shapes

 In general a custom `Shape` needs to be able to provide all its input and output ports, be able to copy itself, and also be
@@ -63,7 +63,7 @@ composition, therefore it may take some careful study of this subject until you
 feel familiar with the tools and techniques. The documentation is here to help
 and for best results we recommend the following approach:

-* Read the @ref:[Quick Start Guide](stream-quickstart.md#stream-quickstart) to get a feel for how streams
+* Read the @ref:[Quick Start Guide](stream-quickstart.md) to get a feel for how streams
 look like and what they can do.
 * The top-down learners may want to peruse the @ref:[Design Principles behind Akka Streams](../general/stream/stream-design.md) at this
 point.
@@ -72,4 +72,4 @@ point.
 * For a complete overview of the built-in processing operators you can look at the
 @ref:[operator index](operators/index.md)
 * The other sections can be read sequentially or as needed during the previous
-steps, each digging deeper into specific topics.
+steps, each digging deeper into specific topics.
@@ -20,6 +20,7 @@ perform tests.

 Akka comes with a dedicated module `akka-testkit` for supporting tests.

+<a id="async-integration-testing"></a>
 ## Asynchronous Testing: `TestKit`

 Testkit allows you to test your actors in a controlled but realistic
@@ -566,7 +567,6 @@ Which of these methods is the best depends on what is most important to test. Th
 most generic option is to create the parent actor by passing it a function that is
 responsible for the Actor creation, but @scala[the]@java[using `TestProbe` or having a] fabricated parent is often sufficient.

-<a id="callingthreaddispatcher"></a>
 ## CallingThreadDispatcher

 The `CallingThreadDispatcher` serves good purposes in unit testing, as
@@ -391,7 +391,7 @@ former simply speaks more languages than the latter. The opposite would be
 problematic, so passing an @scala[`ActorRef[PublishSessionMessage]`]@java[`ActorRef<PublishSessionMessage>`] where
 @scala[`ActorRef[RoomCommand]`]@java[`ActorRef<RoomCommand>`] is required will lead to a type error.

-#### Trying it out
+#### Try it out

 In order to see this chat room in action we need to write a client Actor that can use it
 @scala[, for this stateless actor it doesn't make much sense to use the `AbstractBehavior` so let's just reuse the functional style gabbler from the sample above]:
@@ -40,7 +40,7 @@ Java

 ## Group Router

-The group router is created with a `ServiceKey` and uses the receptionist (see @ref:[Receptionist](actor-discovery.md#Receptionist)) to discover
+The group router is created with a `ServiceKey` and uses the receptionist (see @ref:[Receptionist](actor-discovery.md#receptionist)) to discover
 available actors for that key and routes messages to one of the currently known registered actors for a key.

 Since the receptionist is used this means the group router is cluster aware out of the box and will pick up routees
@@ -88,4 +88,4 @@ it will not give better performance to create more routees than there are thread

 Since the router itself is an actor and has a mailbox this means that messages are routed sequentially to the routees
 where it can be processed in parallel (depending on the available threads in the dispatcher).
-In high throughput use cases the sequential routing could be a bottleneck. Akka Typed does not provide an optimized tool for this.
+In high throughput use cases the sequential routing could be a bottleneck. Akka Typed does not provide an optimized tool for this.