Add library dependency section to documentation (#25010)

This commit is contained in:
Richard Imaoka 2018-05-15 18:44:33 +09:00 committed by Arnout Engelen
parent 6fed2d78ad
commit ea84b8d469
66 changed files with 888 additions and 315 deletions

View file

@ -1,5 +1,17 @@
# Actors
## Dependency
To use Actors, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
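For reference, the `@@dependency` directive above renders equivalent snippets for sbt, Maven and Gradle. The sbt variant corresponds roughly to a build line like the following (the version string is only illustrative, standing in for `$akka.version$`):
```scala
// build.sbt (illustrative sketch; substitute the Akka version you actually use)
val AkkaVersion = "2.5.12"
libraryDependencies += "com.typesafe.akka" %% "akka-actor" % AkkaVersion
```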
## Introduction
The [Actor Model](http://en.wikipedia.org/wiki/Actor_model) provides a higher level of abstraction for writing concurrent
and distributed systems. It relieves the developer from having to deal with
explicit locking and thread management, making it easier to write correct

View file

@ -1,5 +1,15 @@
# Akka in OSGi
## Dependency
To use Akka in OSGi, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-osgi_$scala.binary_version$
version=$akka.version$
}
## Background
[OSGi](http://www.osgi.org/developer) is a mature packaging and deployment standard for component-based systems. It
@ -102,18 +112,6 @@ The goal here is to map the OSGi lifecycle more directly to the Akka lifecycle.
the actor system with a class loader that finds resources (`application.conf` and `reference.conf` files) and classes
from the application bundle and all transitive dependencies.
The `ActorSystemActivator` class is included in the `akka-osgi` artifact:
@@@vars
```
<dependency>
<groupId>com.typesafe.akka</groupId>
<artifactId>akka-osgi_$scala.binary_version$</artifactId>
<version>$akka.version$</version>
</dependency>
```
@@@
## Sample
A complete sample project is provided in @extref[akka-sample-osgi-dining-hakkers](samples:akka-sample-osgi-dining-hakkers)

View file

@ -1,5 +1,17 @@
# Agents
## Dependency
To use Agents, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-agent_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
Agents in Akka are inspired by [agents in Clojure](http://clojure.org/agents).
@@@ warning { title="Deprecation warning" }

View file

@ -1,5 +1,18 @@
# Camel
## Dependency
To use Camel, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-camel_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
@@@ warning
Akka Camel is deprecated in favour of [Alpakka](https://github.com/akka/alpakka), the Akka Streams based collection of integrations to various endpoints (including Camel).

View file

@ -1,5 +1,17 @@
# Cluster Client
## Dependency
To use Cluster Client, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-cluster-tools_$scala.binary_version$
version=$akka.version$
}
## Introduction
An actor system that is not part of the cluster can communicate with actors
somewhere in the cluster via the @unidoc[ClusterClient]. The client can run in an `ActorSystem` that is part of
another cluster. It only needs to know the location of one (or more) nodes to use as initial
@ -161,16 +173,6 @@ Scala
Java
: @@snip [ClusterClientTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/client/ClusterClientTest.java) { #receptionistEventsListener }
## Dependencies
To use the Cluster Client you must add the following dependency in your project.
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-cluster-tools_$scala.binary_version$"
version="$akka.version$"
}
<a id="cluster-client-config"></a>
## Configuration

View file

@ -1,16 +1,24 @@
# Cluster Metrics Extension
## Dependency
To use Cluster Metrics Extension, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-cluster-metrics_$scala.binary_version$
version=$akka.version$
}
and add the following configuration stanza to your `application.conf`:
```
akka.extensions = [ "akka.cluster.metrics.ClusterMetricsExtension" ]
```
## Introduction
The member nodes of the cluster can collect system health metrics and publish that to other cluster nodes
and to the registered subscribers on the system event bus with the help of Cluster Metrics Extension.
Cluster metrics information is primarily used for load-balancing routers,
and can also be used to implement advanced metrics-based node life cycles,
such as "Node Let-it-crash" when CPU steal time becomes excessive.
Cluster Metrics Extension is a separate Akka module delivered in `akka-cluster-metrics` jar.
To enable usage of the extension you need to add the following dependency to your project:
@@dependency[sbt,Maven,Gradle] {
@ -19,12 +27,12 @@ To enable usage of the extension you need to add the following dependency to you
version="$akka.version$"
}
and add the following configuration stanza to your `application.conf`:
The member nodes of the cluster can collect system health metrics and publish that to other cluster nodes
and to the registered subscribers on the system event bus with the help of Cluster Metrics Extension.
```
akka.extensions = [ "akka.cluster.metrics.ClusterMetricsExtension" ]
```
Cluster metrics information is primarily used for load-balancing routers,
and can also be used to implement advanced metrics-based node life cycles,
such as "Node Let-it-crash" when CPU steal time becomes excessive.
Cluster members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up), if that feature is enabled,
will participate in Cluster Metrics collection and dissemination.
@ -101,7 +109,7 @@ unique per instance directory. You can control the extract directory with the
@@@
To enable usage of Sigar you can add the following dependency to the user project
To enable usage of Sigar you can add the following dependency to the user project:
@@dependency[sbt,Maven,Gradle] {
group="io.kamon"

View file

@ -1,5 +1,17 @@
# Cluster Sharding
## Dependency
To use Cluster Sharding, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-cluster-sharding_$scala.binary_version$
version=$akka.version$
}
## Introduction
Cluster sharding is useful when you need to distribute actors across several nodes in the cluster and want to
be able to interact with them using their logical identifier, but without having to care about
their physical location in the cluster, which might also change over time.
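As a rough sketch of what this looks like in code (an illustration only: `Counter` is an entity actor assumed to be defined elsewhere, and the message extractors here are simplified placeholders for the ones explained later on this page):
```scala
import akka.actor.{ ActorRef, ActorSystem, Props }
import akka.cluster.sharding.{ ClusterSharding, ClusterShardingSettings, ShardRegion }

// Hypothetical envelope carrying an entity id and a payload, for illustration
final case class EntityEnvelope(id: Long, payload: Any)

val extractEntityId: ShardRegion.ExtractEntityId = {
  case EntityEnvelope(id, payload) => (id.toString, payload)
}
val extractShardId: ShardRegion.ExtractShardId = {
  case EntityEnvelope(id, _) => (id % 100).toString
}

val system = ActorSystem("ClusterSystem")

// Registers the entity type and returns the shard region ActorRef;
// all messages to entities are then sent via this region.
val counterRegion: ActorRef = ClusterSharding(system).start(
  typeName = "Counter",
  entityProps = Props[Counter], // Counter is the entity actor, defined elsewhere
  settings = ClusterShardingSettings(system),
  extractEntityId = extractEntityId,
  extractShardId = extractShardId)
```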
@ -31,16 +43,6 @@ See @ref:[Downing](cluster-usage.md#automatic-vs-manual-downing).
@@@
## Dependency
To use Akka Cluster Sharding, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-cluster-sharding_$scala.binary_version$"
version="$akka.version$"
}
## An Example
This is what an entity actor may look like:
@ -423,16 +425,6 @@ If you specify `-2.3` as the first program argument it will also try
to remove data that was stored by Cluster Sharding in Akka 2.3.x using
different persistenceId.
## Dependencies
To use the Cluster Sharding you must add the following dependency in your project.
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-cluster-sharding_$scala.binary_version$"
version="$akka.version$"
}
## Configuration
The `ClusterSharding` extension can be configured with the following properties. These configuration

View file

@ -1,5 +1,17 @@
# Cluster Singleton
## Dependency
To use Cluster Singleton, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-cluster-tools_$scala.binary_version$
version=$akka.version$
}
## Introduction
For some use cases it is convenient and sometimes also mandatory to ensure that
you have exactly one actor of a certain type running somewhere in the cluster.
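As a minimal sketch of the pattern (assuming a `Consumer` actor defined elsewhere), the singleton is started on every node through the `ClusterSingletonManager`, which makes sure only the oldest node hosts the actual instance:
```scala
import akka.actor.{ ActorSystem, PoisonPill, Props }
import akka.cluster.singleton.{ ClusterSingletonManager, ClusterSingletonManagerSettings }

val system = ActorSystem("ClusterSystem")

// The manager runs on every node, but only one Consumer instance is alive at a time
system.actorOf(
  ClusterSingletonManager.props(
    singletonProps = Props[Consumer],
    terminationMessage = PoisonPill,
    settings = ClusterSingletonManagerSettings(system)),
  name = "consumer")
```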
@ -128,16 +140,6 @@ Java
A more comprehensive sample is available in the tutorial named
@scala[[Distributed workers with Akka and Scala!](https://github.com/typesafehub/activator-akka-distributed-workers)]@java[[Distributed workers with Akka and Java!](https://github.com/typesafehub/activator-akka-distributed-workers-java)].
## Dependencies
To use the Cluster Singleton you must add the following dependency in your project.
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-cluster-tools_$scala.binary_version$"
version="$akka.version$"
}
## Configuration
The following configuration properties are read by the `ClusterSingletonManagerSettings`

View file

@ -4,7 +4,7 @@ For introduction to the Akka Cluster concepts please see @ref:[Cluster Specifica
## Dependency
To use Akka Cluster, add the module to your project:
To use Akka Cluster, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"

View file

@ -1,5 +1,17 @@
# Dispatchers
## Dependency
Dispatchers are part of core Akka, which means that they are included in the `akka-actor` dependency:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
An Akka `MessageDispatcher` is what makes Akka Actors "tick"; it is the engine of the machine, so to speak.
All `MessageDispatcher` implementations are also an `ExecutionContext`, which means that they can be used
to execute arbitrary code, for instance @ref:[Futures](futures.md).
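To make that last point concrete, here is a small sketch (names are illustrative) of using the system's default dispatcher as the `ExecutionContext` for a `Future`:
```scala
import akka.actor.ActorSystem
import scala.concurrent.Future

val system = ActorSystem("example")

// The default dispatcher is an ExecutionContext, so it can run Future callbacks
implicit val ec = system.dispatcher

val doubled: Future[Int] = Future(21 * 2)
doubled.foreach(result => println(s"result: $result"))
```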

View file

@ -1,5 +1,17 @@
# Distributed Data
## Dependency
To use Akka Distributed Data, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-distributed-data_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
*Akka Distributed Data* is useful when you need to share data between nodes in an
Akka Cluster. The data is accessed with an actor providing a key-value store like API.
The keys are unique identifiers with type information of the data values. The values
@ -19,16 +31,6 @@ It is eventually consistent and geared toward providing high read and write avai
(partition tolerance), with low latency. Note that in an eventually consistent system a read may return an
out-of-date value.
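For orientation before the details below, a minimal sketch of updating and reading a `GCounter` through the replicator (consistency levels and the full reply protocol are covered later on this page):
```scala
import akka.actor.ActorSystem
import akka.cluster.Cluster
import akka.cluster.ddata._
import akka.cluster.ddata.Replicator._

val system = ActorSystem("ClusterSystem")
implicit val cluster = Cluster(system)

val replicator = DistributedData(system).replicator
val CounterKey = GCounterKey("my-counter")

// Increment the counter locally; the change is replicated to other nodes
replicator ! Update(CounterKey, GCounter.empty, WriteLocal)(_ + 1)

// Ask for the current value; the reply is sent back to the requesting actor
replicator ! Get(CounterKey, ReadLocal)
```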
## Dependency
To use Akka Distributed Data, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-distributed-data_$scala.binary_version$"
version="$akka.version$"
}
## Using the Replicator
The `akka.cluster.ddata.Replicator` actor provides the API for interacting with the data.
@ -781,16 +783,6 @@ talk by Mark Shapiro
* [A comprehensive study of Convergent and Commutative Replicated Data Types](http://hal.upmc.fr/file/index/docid/555588/filename/techreport.pdf)
paper by Mark Shapiro et. al.
## Dependencies
To use Distributed Data you must add the following dependency in your project.
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-distributed-data_$scala.binary_version$"
version="$akka.version$"
}
## Configuration
The `DistributedData` extension can be configured with the following properties:

View file

@ -1,5 +1,17 @@
# Distributed Publish Subscribe in Cluster
## Dependency
To use Distributed Publish Subscribe you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-cluster-tools_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
How do I send a message to an actor without knowing which node it is running on?
How do I send messages to all actors in the cluster that have registered interest
@ -217,13 +229,3 @@ As in @ref:[Message Delivery Reliability](general/message-delivery-reliability.m
In other words, messages can be lost over the wire.
If you are looking for an at-least-once delivery guarantee, we recommend [Kafka Akka Streams integration](http://doc.akka.io/docs/akka-stream-kafka/current/home.html).
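For a first impression of the topic-based API described on this page, a rough sketch of subscribing and publishing via the mediator (the actor and topic name are illustrative):
```scala
import akka.actor.{ Actor, ActorLogging }
import akka.cluster.pubsub.DistributedPubSub
import akka.cluster.pubsub.DistributedPubSubMediator.{ Publish, Subscribe }

// A subscriber actor registering itself for a topic via the mediator
class Subscriber extends Actor with ActorLogging {
  private val mediator = DistributedPubSub(context.system).mediator
  mediator ! Subscribe("content", self)

  def receive = {
    case msg: String => log.info("Got {}", msg)
  }
}

// From anywhere in the cluster, any actor can publish to the topic:
//   mediator ! Publish("content", "hello subscribers")
```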
## Dependencies
To use Distributed Publish Subscribe you must add the following dependency in your project.
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-cluster-tools_$scala.binary_version$"
version="$akka.version$"
}

View file

@ -1,5 +1,17 @@
# Fault Tolerance
## Dependency
The concept of fault tolerance relates to actors, so in order to use it make sure to depend on the actor module:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
As explained in @ref:[Actor Systems](general/actor-systems.md) each actor is the supervisor of its
children, and as such each actor defines fault handling supervisor strategy.
This strategy cannot be changed afterwards as it is an integral part of the

View file

@ -1,5 +1,15 @@
# FSM
## Dependency
To use Finite State Machine actors, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
## Overview
The FSM (Finite State Machine) is available as @scala[a mixin for the] @java[an abstract base class that implements an] Akka Actor and

View file

@ -1,5 +1,15 @@
# Futures
## Dependency
This section explains using plain Scala Futures but focuses on their interop with Akka Actors, so to follow those examples you will want to depend on:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
In the Scala Standard Library, a [Future](http://en.wikipedia.org/wiki/Futures_and_promises) is a data structure

View file

@ -26,6 +26,12 @@ This page does not list all available modules, but overviews the main functional
### Actor library
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-actor_$scala.binary_version$
version=$akka.version$
}
The core Akka library is `akka-actor`, but actors are used across Akka libraries, providing a consistent, integrated model that relieves you from individually
solving the challenges that arise in concurrent or distributed system design. From a bird's-eye view,
actors are a programming paradigm that takes encapsulation, one of the pillars of OOP, to its extreme.
@ -45,6 +51,12 @@ Challenges that actors solve include the following:
### Remoting
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-remote_$scala.binary_version$
version=$akka.version$
}
Remoting enables actors that live on different computers to seamlessly exchange messages.
While distributed as a JAR artifact, Remoting resembles a module more than it does a library. You enable it mostly
with configuration and it has only a few APIs. Thanks to the actor model, a remote and local message send looks exactly the
@ -62,6 +74,12 @@ Challenges Remoting solves include the following:
### Cluster
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-cluster_$scala.binary_version$
version=$akka.version$
}
If you have a set of actor systems that cooperate to solve some business problem, then you likely want to manage this set of
systems in a disciplined way. While Remoting solves the problem of addressing and communicating with components of
remote systems, Clustering gives you the ability to organize these into a "meta-system" tied together by a membership
@ -79,6 +97,12 @@ Challenges the Cluster module solves include the following:
### Cluster Sharding
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-cluster-sharding_$scala.binary_version$
version=$akka.version$
}
Sharding helps to solve the problem of distributing a set of actors among members of an Akka cluster.
Sharding is a pattern that is mostly used together with Persistence to balance a large set of persistent entities
(backed by actors) to members of a cluster and also migrate them to other nodes when members crash or leave.
@ -92,6 +116,12 @@ Challenges that Sharding solves include the following:
### Cluster Singleton
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-cluster-tools_$scala.binary_version$
version=$akka.version$
}
A common (in fact, a bit too common) use case in distributed systems is to have a single entity responsible
for a given task which is shared among other members of the cluster and migrated if the host system fails.
While this undeniably introduces a common bottleneck for the whole cluster that limits scaling,
@ -107,6 +137,12 @@ The Singleton module can be used to solve these challenges:
### Cluster Publish-Subscribe
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-cluster-tools_$scala.binary_version$
version=$akka.version$
}
For coordination among systems, it is often necessary to distribute messages to all interested systems in a cluster, or to a single one of them.
This pattern is usually called publish-subscribe, and this module solves exactly that
problem. It is possible to broadcast messages to all subscribers of a topic or send a message to an arbitrary actor that has expressed interest.
@ -119,6 +155,12 @@ Cluster Publish-Subscribe is intended to solve the following challenges:
### Persistence
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-persistence_$scala.binary_version$
version=$akka.version$
}
Just like objects in OOP, actors keep their state in volatile memory. Once the system is shut down, gracefully or
because of a crash, all data that was in memory is lost. Persistence provides patterns to enable actors to persist
events that lead to their current state. Upon startup, events can be replayed to restore the state of the entity hosted
@ -135,6 +177,12 @@ Persistence tackles the following challenges:
### Distributed Data
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-distributed-data_$scala.binary_version$
version=$akka.version$
}
In situations where eventual consistency is acceptable, it is possible to share data between nodes in
an Akka Cluster and accept both reads and writes even in the face of cluster partitions. This can be
achieved using [Conflict Free Replicated Data Types](https://en.wikipedia.org/wiki/Conflict-free_replicated_data_type) (CRDTs), where writes on different nodes can
@ -148,6 +196,12 @@ Distributed Data is intended to solve the following challenges:
### Streams
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-stream_$scala.binary_version$
version=$akka.version$
}
Actors are a fundamental model for concurrency, but there are common patterns where their use requires the user
to implement the same pattern over and over. Very common is the scenario where a chain, or graph, of actors needs to
process a potentially large, or infinite, stream of sequential events and properly coordinate resource usage so that
@ -166,6 +220,8 @@ Streams solve the following challenges:
### HTTP
[Akka HTTP](https://doc.akka.io/docs/akka-http/current) is a separate module from Akka.
The de facto standard for providing APIs remotely, internal or external, is [HTTP](https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol). Akka provides a library to construct or consume such HTTP services by giving a set of tools to create HTTP services (and serve them) and a client that can be
used to consume other services. These tools are particularly suited to streaming in and out a large set of data or real-time events by leveraging the underlying model of Akka Streams.

View file

@ -1,5 +1,17 @@
# Part 1: Actor Architecture
## Dependency
Add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
Use of Akka relieves you from creating the infrastructure for an actor system and from writing the low-level code necessary to control basic behavior. To appreciate this, let's look at the relationships between actors you create in your code and those that Akka creates and manages for you internally, the actor lifecycle, and failure handling.
## The Akka actor hierarchy

View file

@ -1,5 +1,17 @@
# Part 2: Creating the First Actor
## Dependency
Add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
With an understanding of actor hierarchy and behavior, the remaining question is how to map the top-level components of our IoT system to actors. It might be tempting to make the actors that
represent devices and dashboards at the top level. Instead, we recommend creating an explicit component that represents the whole application. In other words, we will have a single top-level actor in our IoT system. The components that create and manage devices and dashboards will be children of this actor. This allows us to refactor the example use case architecture diagram into a tree of actors:

View file

@ -1,4 +1,17 @@
# Part 3: Working with Device Actors
## Dependency
Add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
In the previous topics we explained how to view actor systems _in the large_, that is, how components should be represented, how actors should be arranged in the hierarchy. In this part, we will look at actors _in the small_ by implementing the device actor.
If we were working with objects, we would typically design the API as _interfaces_, a collection of abstract methods to be filled out by the actual implementation. In the world of actors, protocols take the place of interfaces. While it is not possible to formalize general protocols in the programming language, we can compose their most basic element, messages. So, we will start by identifying the messages we will want to send to device actors.

View file

@ -1,5 +1,17 @@
# Part 4: Working with Device Groups
## Dependency
Add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
Let's take a closer look at the main functionality required by our use case. In a complete IoT system for monitoring home temperatures, the steps for connecting a device sensor to our system might look like this:
1. A sensor device in the home connects through some protocol.

View file

@ -1,5 +1,17 @@
# Part 5: Querying Device Groups
## Dependency
Add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
The conversational patterns that we have seen so far are simple in the sense that they require the actor to keep little or no state. Specifically:
* Device actors return a reading, which requires no state change

View file

@ -1,5 +1,15 @@
# Actors
## Dependency
To use Akka Actors, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
@@toc { depth=2 }
@@@ index

View file

@ -1,5 +1,15 @@
# Utilities
## Dependency
To use Utilities, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
@@toc { depth=2 }
@@@ index

View file

@ -1,5 +1,17 @@
# Using TCP
## Dependency
To use TCP, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
The code snippets throughout this section assume the following imports:
Scala

View file

@ -1,5 +1,17 @@
# Using UDP
## Dependency
To use UDP, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
UDP is a connectionless datagram protocol which offers two different ways of
communication on the JDK level:

View file

@ -1,5 +1,15 @@
# I/O
## Dependency
To use I/O, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
The `akka.io` package has been developed in collaboration between the Akka

View file

@ -1,5 +1,17 @@
# Logging
## Dependency
To use Logging, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
Logging in Akka is not tied to a specific logging backend. By default
log messages are printed to STDOUT, but you can plug in an SLF4J logger or
your own logger. Logging is performed asynchronously to ensure that logging

View file

@ -1,5 +1,17 @@
# Mailboxes
## Dependency
To use Mailboxes, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
An Akka `Mailbox` holds the messages that are destined for an `Actor`.
Normally each `Actor` has its own mailbox, but with, for example, a `BalancingPool`
all routees will share a single mailbox instance.

View file

@ -1,5 +1,19 @@
# Multi Node Testing
## Dependency
To use Multi Node Testing, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-multi-node-testkit_$scala.binary_version$
version=$akka.version$
}
If you are using the latest nightly build you should pick a timestamped Akka version from
[https://repo.akka.io/snapshots/com/typesafe/akka/akka-multi-node-testkit_2.11/](https://repo.akka.io/snapshots/com/typesafe/akka/akka-multi-node-testkit_2.11/).
We recommend against using `SNAPSHOT` in order to obtain stable builds.
## Multi Node Testing Concepts
When we talk about multi node testing in Akka we mean the process of running coordinated tests on multiple actor
@ -148,20 +162,6 @@ multi-jvm:testOnly your.MultiNodeTest
More than one test name can be listed to run multiple specific tests. Tab completion in sbt makes it easy to
complete the test names.
## Preparing Your Project for Multi Node Testing
The multi node testing kit is a separate jar file. Make sure that you have the following dependency in your project:
@@@vars
```
"com.typesafe.akka" %% "akka-multi-node-testkit" % "$akka.version$"
```
@@@
If you are using the latest nightly build you should pick a timestamped Akka version from
[https://repo.akka.io/snapshots/com/typesafe/akka/akka-multi-node-testkit_2.11/](https://repo.akka.io/snapshots/com/typesafe/akka/akka-multi-node-testkit_2.11/).
We recommend against using `SNAPSHOT` in order to obtain stable builds.
## A Multi Node Testing Example
First we need some scaffolding to hook up the `MultiNodeSpec` with your favorite test framework. Let's define a trait

View file

@ -1,19 +1,22 @@
# Persistence Query for LevelDB
## Dependency
To use Persistence Query, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-persistence-query_$scala.binary_version$
version=$akka.version$
}
This will also add a dependency on the @ref[akka-persistence](persistence.md) module.
## Introduction
This is documentation for the LevelDB implementation of the @ref:[Persistence Query](persistence-query.md) API.
Note that implementations for other journals may have different semantics.
## Dependencies
Akka persistence LevelDB query implementation is bundled in the `akka-persistence-query` artifact.
Make sure that you have the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-persistence-query_$scala.binary_version$"
version="$akka.version$"
}
## How to get the ReadJournal
The `ReadJournal` is retrieved via the `akka.persistence.query.PersistenceQuery`

View file

@ -1,5 +1,19 @@
# Persistence Query
## Dependency
To use Persistence Query, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-persistence-query_$scala.binary_version$
version=$akka.version$
}
This will also add a dependency on the @ref[Akka Persistence](persistence.md) module.
## Introduction
Akka persistence query complements @ref:[Persistence](persistence.md) by providing a universal asynchronous stream based
query interface that various journal plugins can implement in order to expose their query capabilities.
@ -10,16 +24,6 @@ side of an application, however it can help to migrate data from the write side
simple scenarios Persistence Query may be powerful enough to fulfill the query needs of your app, however we highly
recommend (in the spirit of CQRS) splitting up the write/read sides into separate datastores as the need arises.
## Dependencies
Akka persistence query is a separate jar file. Make sure that you have the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-persistence-query_$scala.binary_version$"
version="$akka.version$"
}
## Design overview
Akka persistence query is purposely designed to be a very loosely specified API.

View file

@ -1,5 +1,17 @@
# Persistence - Schema Evolution
## Dependency
This documentation page touches upon @ref[Akka Persistence](persistence.md), so to follow those examples you will want to depend on:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-persistence_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
When working on long running projects using @ref:[Persistence](persistence.md), or any kind of [Event Sourcing](http://martinfowler.com/eaaDev/EventSourcing.html) architectures,
schema evolution becomes one of the more important technical aspects of developing your application.
The requirements as well as our own understanding of the business domain may (and will) change in time.

View file

@ -1,21 +1,8 @@
# Persistence
Akka persistence enables stateful actors to persist their internal state so that it can be recovered when an actor
is started, restarted after a JVM crash or by a supervisor, or migrated in a cluster. The key concept behind Akka
persistence is that only changes to an actor's internal state are persisted but never its current state directly
(except for optional snapshots). These changes are only ever appended to storage, nothing is ever mutated, which
allows for very high transaction rates and efficient replication. Stateful actors are recovered by replaying stored
changes to these actors from which they can rebuild internal state. This can be either the full history of changes
or starting from a snapshot which can dramatically reduce recovery times. Akka persistence also provides point-to-point
communication with at-least-once message delivery semantics.
Akka persistence is inspired by and the official replacement of the [eventsourced](https://github.com/eligosource/eventsourced) library. It follows the same
concepts and architecture of [eventsourced](https://github.com/eligosource/eventsourced) but significantly differs on API and implementation level. See also
@ref:[migration-eventsourced-2.3](project/migration-guide-eventsourced-2.3.x.md)
## Dependency
To use Akka Persistence, add the module to your project:
To use Akka Persistence, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
@ -34,6 +21,21 @@ LevelDB-based plugins will require the following additional dependency:
version="1.8"
}
## Introduction
Akka persistence enables stateful actors to persist their internal state so that it can be recovered when an actor
is started, restarted after a JVM crash or by a supervisor, or migrated in a cluster. The key concept behind Akka
persistence is that only changes to an actor's internal state are persisted but never its current state directly
(except for optional snapshots). These changes are only ever appended to storage, nothing is ever mutated, which
allows for very high transaction rates and efficient replication. Stateful actors are recovered by replaying stored
changes to these actors from which they can rebuild internal state. This can be either the full history of changes
or starting from a snapshot which can dramatically reduce recovery times. Akka persistence also provides point-to-point
communication with at-least-once message delivery semantics.
Akka persistence is inspired by and the official replacement of the [eventsourced](https://github.com/eligosource/eventsourced) library. It follows the same
concepts and architecture of [eventsourced](https://github.com/eligosource/eventsourced) but significantly differs on API and implementation level. See also
@ref:[migration-eventsourced-2.3](project/migration-guide-eventsourced-2.3.x.md)
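To make the event-sourcing idea above concrete, here is a minimal sketch (a toy example, not taken from this page) of a persistent actor that persists increment events and rebuilds its counter on recovery:
```scala
import akka.persistence.PersistentActor

case object Increment
final case class Incremented(delta: Int)

class Counter extends PersistentActor {
  override def persistenceId: String = "counter-1"

  private var state = 0

  // Commands: persist an event, then apply it once it has been stored
  override def receiveCommand: Receive = {
    case Increment =>
      persist(Incremented(1)) { evt =>
        state += evt.delta
      }
  }

  // Recovery: replayed events rebuild the in-memory state
  override def receiveRecover: Receive = {
    case Incremented(delta) => state += delta
  }
}
```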
## Architecture
* @scala[`PersistentActor`]@java[`AbstractPersistentActor`]: Is a persistent, stateful actor. It is able to persist events to a journal and can react to

View file

@ -1,5 +1,59 @@
# Remoting (codename Artery)
## Dependency
To use Remoting (codename Artery), you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-remote_$scala.binary_version$
version=$akka.version$
}
## Configuration
To enable remote capabilities in your Akka project you should, at a minimum, add the following changes
to your `application.conf` file:
```
akka {
actor {
provider = remote
}
remote {
artery {
enabled = on
transport = aeron-udp
canonical.hostname = "127.0.0.1"
canonical.port = 25520
}
}
}
```
As you can see in the example above there are four things you need to add to get started:
* Change provider from `local` to `remote`
* Enable Artery to use it as the remoting implementation
* Add host name - the machine you want to run the actor system on; this host
name is exactly what is passed to remote systems in order to identify this
system and consequently used for connecting back to this system if need be,
hence set it to a reachable IP address or resolvable name in case you want to
communicate across the network.
* Add port number - the port the actor system should listen on, set to 0 to have it chosen automatically
@@@ note
The port number needs to be unique for each actor system on the same machine even if the actor
systems have different names. This is because each actor system has its own networking subsystem
listening for connections and handling messages so as not to interfere with other actor systems.
@@@
The example above only illustrates the bare minimum of properties you have to add to enable remoting.
All settings are described in @ref:[Remote Configuration](#remote-configuration-artery).
## Introduction
We recommend @ref:[Akka Cluster](cluster-usage.md) over using remoting directly. As remoting is the
underlying module that allows for Cluster, it is still useful to understand details about it though.
@ -48,57 +102,6 @@ The main incompatible change from the previous implementation that the protocol
`ActorRef` is always *akka* instead of the previously used *akka.tcp* or *akka.ssl.tcp*. Configuration properties
are also different.
## Preparing your ActorSystem for Remoting
The Akka remoting is a separate jar file. Make sure that you have the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-remote_$scala.binary_version$"
version="$akka.version$"
}
To enable remote capabilities in your Akka project you should, at a minimum, add the following changes
to your `application.conf` file:
```
akka {
actor {
provider = remote
}
remote {
artery {
enabled = on
transport = aeron-udp
canonical.hostname = "127.0.0.1"
canonical.port = 25520
}
}
}
```
As you can see in the example above there are four things you need to add to get started:
* Change provider from `local` to `remote`
* Enable Artery to use it as the remoting implementation
* Add host name - the machine you want to run the actor system on; this host
name is exactly what is passed to remote systems in order to identify this
system and consequently used for connecting back to this system if need be,
hence set it to a reachable IP address or resolvable name in case you want to
communicate across the network.
* Add port number - the port the actor system should listen on, set to 0 to have it chosen automatically
@@@ note
The port number needs to be unique for each actor system on the same machine even if the actor
systems have different names. This is because each actor system has its own networking subsystem
listening for connections and handling messages so as not to interfere with other actor systems.
@@@
The example above only illustrates the bare minimum of properties you have to add to enable remoting.
All settings are described in @ref:[Remote Configuration](#remote-configuration-artery).
### Selecting transport
There are three alternatives of which underlying transport to use. It is configured by property

View file

@ -1,31 +1,17 @@
# Remoting
We recommend @ref:[Akka Cluster](cluster-usage.md) over using remoting directly. As remoting is the
underlying module that allows for Cluster, it is still useful to understand details about it though.
## Dependency
For an introduction of remoting capabilities of Akka please see @ref:[Location Transparency](general/remoting.md).
@@@ note
As explained in that chapter Akka remoting is designed for communication in a
peer-to-peer fashion and it is not a good fit for client-server setups. In
particular Akka Remoting does not work transparently with Network Address Translation,
Load Balancers, or in Docker containers. For symmetric communication in these situations
network and/or Akka configuration will have to be changed as described in
[Akka behind NAT or in a Docker container](#remote-configuration-nat).
@@@
## Preparing your ActorSystem for Remoting
The Akka remoting is a separate jar file. Make sure that you have the following dependency in your project:
To use Akka Remoting, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-remote_$scala.binary_version$"
version="$akka.version$"
group=com.typesafe.akka
artifact=akka-remote_$scala.binary_version$
version=$akka.version$
}
## Configuration
To enable remote capabilities in your Akka project you should, at a minimum, add the following changes
to your `application.conf` file:
@ -65,6 +51,24 @@ listening for connections and handling messages as not to interfere with other a
The example above only illustrates the bare minimum of properties you have to add to enable remoting.
All settings are described in [Remote Configuration](#remote-configuration).
## Introduction
We recommend @ref:[Akka Cluster](cluster-usage.md) over using remoting directly. As remoting is the
underlying module that allows for Cluster, it is still useful to understand details about it though.
For an introduction of remoting capabilities of Akka please see @ref:[Location Transparency](general/remoting.md).
@@@ note
As explained in that chapter Akka remoting is designed for communication in a
peer-to-peer fashion and it is not a good fit for client-server setups. In
particular Akka Remoting does not work transparently with Network Address Translation,
Load Balancers, or in Docker containers. For symmetric communication in these situations
network and/or Akka configuration will have to be changed as described in
[Akka behind NAT or in a Docker container](#remote-configuration-nat).
@@@
## Types of Remote Interaction
Akka has two ways of using remoting:

View file

@ -1,5 +1,17 @@
# Routing
## Dependency
To use Routing, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
Messages can be sent via a router to efficiently route them to destination actors, known as
its *routees*. A `Router` can be used inside or outside of an actor, and you can manage the
routees yourself or use a self-contained router actor with configuration capabilities.
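As a minimal sketch of a pool router created from code (a rough illustration; `Worker` is an assumed routee actor defined elsewhere):
```scala
import akka.actor.{ Actor, ActorRef, Props }
import akka.routing.RoundRobinPool

class Master extends Actor {
  // A pool router that creates 5 Worker routees and round-robins messages over them
  val workerRouter: ActorRef =
    context.actorOf(RoundRobinPool(5).props(Props[Worker]), "workerRouter")

  def receive = {
    case work => workerRouter forward work
  }
}
```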

View file

@ -1,5 +1,17 @@
# Scheduler
## Dependency
To use Scheduler, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
Sometimes the need for making things happen in the future arises, and where do
you go look then? Look no further than `ActorSystem`! There you find the
`scheduler` method that returns an instance of

View file

@ -1,5 +1,17 @@
# Serialization
## Dependency
To use Serialization, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-actor_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
The messages that Akka actors send to each other are JVM objects (e.g. instances of Scala case classes). Message passing between actors that live on the same JVM is straightforward. It is done via reference passing. However, messages that have to escape the JVM to reach an actor running on a different host have to undergo some form of serialization (i.e. the objects have to be converted to and from byte arrays).
Akka itself uses Protocol Buffers to serialize internal messages (i.e. cluster gossip messages). However, the serialization mechanism in Akka allows you to write custom serializers and to define which serializer to use for what.
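For illustration, the programmatic entry point is the `SerializationExtension`, which picks the configured serializer for a given object (a rough sketch):
```scala
import akka.actor.ActorSystem
import akka.serialization.SerializationExtension

val system = ActorSystem("example")
val serialization = SerializationExtension(system)

val original = "hello"

// Serialize using whatever serializer is bound to String in the configuration
val bytes: Array[Byte] = serialization.serialize(original).get

// Deserialize back, specifying the expected class
val restored: String = serialization.deserialize(bytes, classOf[String]).get
```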

View file

@ -1,5 +1,15 @@
# Streams
## Dependency
To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary_version$"
version="$akka.version$"
}
@@toc { depth=2 }
@@@ index

View file

@ -1,5 +1,17 @@
# Modularity, Composition and Hierarchy
## Dependency
To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
Akka Streams provide a uniform model of stream processing graphs, which allows flexible composition of reusable
components. In this chapter we show how these look from the conceptual and API perspective, demonstrating
the modularity aspects of the library.

View file

@ -1,5 +1,15 @@
# Streams Cookbook
## Dependency
To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
This is a collection of patterns to demonstrate various usage of the Akka Streams API by solving small targeted

View file

@ -1,5 +1,17 @@
# Custom stream processing
## Dependency
To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
While the processing vocabulary of Akka Streams is quite rich (see the @ref:[Streams Cookbook](stream-cookbook.md) for examples) it
is sometimes necessary to define new transformation stages either because some functionality is missing from the
stock operations, or for performance reasons. In this part we show how to build custom processing stages and graph

View file

@ -1,5 +1,17 @@
# Dynamic stream handling
## Dependency
To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
<a id="kill-switch"></a>
## Controlling graph completion with KillSwitch

View file

@ -1,5 +1,17 @@
# Error Handling in Streams
## Dependency
To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
When a stage in a stream fails, this will normally lead to the entire stream being torn down.
Each of the stages downstream gets informed about the failure and each upstream stage sees a cancellation.
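A small sketch of recovering from a failure with a fallback element (illustrative only; the available strategies are discussed below):
```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }

implicit val system = ActorSystem("example")
implicit val materializer = ActorMaterializer()

Source(0 to 5)
  .map(n => 100 / n)                             // fails on the first element (division by zero)
  .recover { case _: ArithmeticException => -1 } // replace the failure with a final element
  .runWith(Sink.foreach(println))
```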

View file

@ -1,5 +1,17 @@
# Basics and working with Flows
## Dependency
To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
<a id="core-concepts"></a>
## Core concepts

View file

@ -1,5 +1,17 @@
# Working with Graphs
## Dependency
To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
In Akka Streams computation graphs are not expressed using a fluent DSL like linear computations are; instead they are
written in a more graph-resembling DSL which aims to make translating graph drawings (e.g. from notes taken
from design discussions, or illustrations in protocol specifications) to and from code simpler. In this section we'll

View file

@ -1,5 +1,15 @@
# Integration
## Dependency
To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary_version$"
version="$akka.version$"
}
## Integrating with Actors
For piping the elements of a stream as messages to an ordinary actor you can use

View file

@ -1,5 +1,17 @@
# Working with streaming IO
## Dependency
To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
Akka Streams provides a way of handling File IO and TCP connections with Streams.
While the general approach is very similar to the @ref:[Actor based TCP handling](../io-tcp.md) using Akka IO,
by using Akka Streams you are freed of having to manually react to back-pressure signals,

View file

@ -1,5 +1,17 @@
# Pipelining and Parallelism
## Dependency
To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
Akka Streams processing stages (be it simple operators on Flows and Sources or graph junctions) are "fused" together
and executed sequentially by default. This avoids the overhead of events crossing asynchronous boundaries but
limits the flow to execute at most one stage at any given time.

View file

@ -1,5 +1,17 @@
# Buffers and working with rate
## Dependency
To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
When upstream and downstream rates differ, especially when the throughput has spikes, it can be useful to introduce
buffers in a stream. In this chapter we cover how buffers are used in Akka Streams.
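As a quick illustration of an explicit buffer with an overflow strategy (a sketch; the strategies themselves are discussed below):
```scala
import akka.actor.ActorSystem
import akka.stream.{ ActorMaterializer, OverflowStrategy }
import akka.stream.scaladsl.{ Sink, Source }

implicit val system = ActorSystem("example")
implicit val materializer = ActorMaterializer()

Source(1 to 1000)
  .buffer(100, OverflowStrategy.dropHead) // keep at most 100 elements, dropping the oldest on overflow
  .map(_ * 2)                             // a processing step downstream of the buffer
  .runWith(Sink.ignore)
```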

View file

@ -1,5 +1,17 @@
# StreamRefs - Reactive Streams over the network
## Dependency
To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
@@@ warning
This module is currently marked as @ref:[may change](../common/may-change.md) in the sense

View file

@ -1,5 +1,17 @@
# Substreams
## Dependency
To use Akka Streams, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-stream_$scala.binary_version$"
version="$akka.version$"
}
## Introduction
Substreams are represented as `SubSource` or `SubFlow` instances, on which you can multiplex a single `Source` or `Flow`
into a stream of streams.
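A short sketch of creating substreams with `groupBy` and merging them back (illustrative):
```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ Sink, Source }

implicit val system = ActorSystem("example")
implicit val materializer = ActorMaterializer()

Source(1 to 10)
  .groupBy(maxSubstreams = 2, _ % 2) // split into a substream per key (even / odd)
  .map(_ * 10)                       // each substream is processed independently
  .mergeSubstreams                   // flatten the substreams back into a single stream
  .runWith(Sink.foreach(println))
```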

View file

@ -1,5 +1,18 @@
# Testing streams
## Dependency
To use Akka Stream TestKit, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-stream-testkit_$scala.binary_version$"
version="$akka.version$"
scope="test"
}
## Introduction
Verifying behavior of Akka Stream sources, flows and sinks can be done using
various code patterns and libraries. Here we will discuss testing these
elements using:
@ -93,17 +106,6 @@ provides tools specifically for writing stream tests. This module comes with
two main components that are `TestSource` and `TestSink` which
provide sources and sinks that materialize to probes that allow fluent API.
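For example, a minimal sketch of driving a source with `TestSink` (illustrative):
```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Source
import akka.stream.testkit.scaladsl.TestSink

implicit val system = ActorSystem("test")
implicit val materializer = ActorMaterializer()

Source(1 to 3)
  .runWith(TestSink.probe[Int]) // materializes to a probe controlling demand
  .request(3)                   // signal demand for three elements
  .expectNext(1, 2, 3)          // assert the elements arrive in order
  .expectComplete()             // and that the stream completes
```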
### Dependency
To use Akka Stream TestKit, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
artifact="akka-stream-testkit_$scala.binary_version$"
version="$akka.version$"
scope="test"
}
### Using the TestKit
A sink returned by `TestSink.probe` allows manual control over demand and

View file

@ -1,15 +1,8 @@
# Testing Actor Systems
As with any piece of software, automated tests are a very important part of the
development cycle. The actor model presents a different view on how units of
code are delimited and how they interact, which has an influence on how to
perform tests.
Akka comes with a dedicated module `akka-testkit` for supporting tests.
## Dependency
To use Akka Testkit, add the module to your project:
To use Akka Testkit, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group="com.typesafe.akka"
@ -18,6 +11,15 @@ To use Akka Testkit, add the module to your project:
scope="test"
}
## Introduction
As with any piece of software, automated tests are a very important part of the
development cycle. The actor model presents a different view on how units of
code are delimited and how they interact, which has an influence on how to
perform tests.
Akka comes with a dedicated module `akka-testkit` for supporting tests.
## Asynchronous Testing: `TestKit`
Testkit allows you to test your actors in a controlled but realistic

View file

@ -1,5 +1,17 @@
# Actor discovery
## Dependency
To use Akka Actor Typed, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-actor-typed_$scala.binary_version$
version=$akka.version$
}
## Introduction
With @ref:[untyped actors](../general/addressing.md) you would use `ActorSelection` to "look up" actors. Given an actor path with
address information you can get hold of an `ActorRef` to any actor. `ActorSelection` does not exist in Akka Typed,
so how do you get the actor references? You can send refs in messages but you need something to bootstrap the interaction.

View file

@ -1,5 +1,17 @@
# Actor lifecycle
## Dependency
To use Akka Actor Typed, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-actor-typed_$scala.binary_version$
version=$akka.version$
}
## Introduction
TODO intro
## Creating Actors

View file

@ -1,5 +1,17 @@
# Actors
## Dependency
To use Akka Actor Typed, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-actor-typed_$scala.binary_version$
version=$akka.version$
}
## Introduction
@@@ warning
This module is currently marked as @ref:[may change](../common/may-change.md) in the sense
@ -9,18 +21,6 @@ This module is currently marked as @ref:[may change](../common/may-change.md) in
@@@
## Dependency
To use Akka Actor Typed add the following dependency:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-actor-typed_2.12
version=$akka.version$
}
## Introduction
As discussed in @ref:[Actor Systems](../general/actor-systems.md) Actors are about
sending messages between independent units of computation, but how does that
look like?

View file

@ -1,5 +1,17 @@
# Cluster Sharding
## Dependency
To use Akka Cluster Sharding Typed, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-cluster-sharding-typed_$scala.binary_version$
version=$akka.version$
}
## Introduction
For an introduction to Sharding concepts see @ref:[Cluster Sharding](../cluster-sharding.md). This documentation shows how to use the typed
Cluster Sharding API.
@ -12,16 +24,6 @@ This module is currently marked as @ref:[may change](../common/may-change.md) in
@@@
## Dependency
To use Akka Cluster Sharding Typed, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-cluster-sharding-typed_2.12
version=$akka.version$
}
## Basic example
Sharding is accessed via the `ClusterSharding` extension

View file

@ -1,5 +1,17 @@
# Cluster Singleton
## Dependency
To use Cluster Singleton, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-cluster-typed_$scala.binary_version$
version=$akka.version$
}
## Introduction
@@@ warning
This module is currently marked as @ref:[may change](../common/may-change.md) in the sense
@ -25,17 +37,6 @@ such as single-point of bottleneck. Single-point of failure is also a relevant c
but for some cases this feature takes care of that by making sure that another singleton
instance will eventually be started.
## Dependency
Cluster singleton is part of the cluster module:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-cluster-typed_2.12
version=$akka.version$
}
## Example
Any `Behavior` can be run as a singleton. E.g. a basic counter:

View file

@ -1,5 +1,17 @@
# Cluster
## Dependency
To use Akka Cluster Typed, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-cluster-typed_$scala.binary_version$
version=$akka.version$
}
## Introduction
For an introduction to Akka Cluster concepts see @ref:[Cluster Specification](../common/cluster.md). This documentation shows how to use the typed
Cluster API.
@ -12,16 +24,6 @@ This module is currently marked as @ref:[may change](../common/may-change.md) in
@@@
## Dependency
To use Akka Cluster Typed, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-cluster-typed_2.12
version=$akka.version$
}
## Examples
All of the examples below assume the following imports:

View file

@ -1,5 +1,17 @@
# Coexistence
## Dependency
To use Akka Actor Typed, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-actor-typed_$scala.binary_version$
version=$akka.version$
}
## Introduction
We believe Akka Typed will be adopted in existing systems gradually and therefore it's important to be able to use typed
and untyped actors together, within the same `ActorSystem`. Also, we will not be able to integrate with all existing modules in one big bang release and that is another reason for why these two ways of writing actors must be able to coexist.

View file

@ -1,5 +1,17 @@
# Interaction Patterns
## Dependency
To use Akka Actor Typed, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-actor-typed_$scala.binary_version$
version=$akka.version$
}
## Introduction
Interacting with an Actor in Akka Typed is done through an @scala[`ActorRef[T]`]@java[`ActorRef<T>`] where `T` is the type of messages the actor accepts, also known as the "protocol". This ensures that only the right kind of messages can be sent to an actor and also that no one else but the Actor itself can access the Actor instance internals.
Message exchange with Actors follows a few common patterns; let's go through each one of them.

View file

@ -1,5 +1,17 @@
# Persistence
## Dependency
To use Akka Persistence Typed, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-persistence-typed_$scala.binary_version$
version=$akka.version$
}
## Introduction
Akka Persistence is a library for building event sourced actors. For background about how it works
see the @ref:[untyped Akka Persistence section](../persistence.md). This documentation shows how the typed API for persistence
works and assumes you know what is meant by `Command`, `Event` and `State`.
@ -13,16 +25,6 @@ This module is currently marked as @ref:[may change](../common/may-change.md) in
@@@
## Dependency
To use Akka Persistence Typed, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-persistence-typed_2.12
version=$akka.version$
}
## Example
Let's start with a simple example. The minimum required for a `PersistentBehavior` is:

View file

@ -1,5 +1,17 @@
# Stash
## Dependency
To use Akka Actor Typed, you must add the following dependency in your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-actor-typed_$scala.binary_version$
version=$akka.version$
}
## Introduction
Stashing enables an actor to temporarily buffer all or some messages that cannot or should not
be handled using the actor's current behavior.

View file

@ -1,6 +1,18 @@
# Streams
@ref:[Akka Streams](../stream/index.md) make it easy to model type-safe message processing pipelines. With typed actors it is possible to connect streams to actors without losing the type information.
## Dependency
To use Akka Streams Typed, add the module to your project:
@@dependency [sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-stream-typed_$scala.binary_version$
version=$akka.version$
}
## Introduction
@ref:[Akka Streams](../stream/index.md) make it easy to model type-safe message processing pipelines. With typed actors it is possible to connect streams to actors without losing the type information.
This module contains typed alternatives to the @ref:[already existing `ActorRef` sources and sinks](../stream/stream-integrations.md) together with factory methods for @scala[@scaladoc[`ActorMaterializer`](akka.stream.typed.ActorMaterializer)]@java[@javadoc[`ActorMaterializer`](akka.stream.typed.ActorMaterializer)] which take a typed `ActorSystem`.
@ -15,16 +27,6 @@ This module is currently marked as @ref:[may change](../common/may-change.md) in
@@@
## Dependency
To use Akka Streams Typed, add the module to your project:
@@dependency [sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-stream-typed_2.12
version=$akka.version$
}
## Actor Source
A stream that is driven by messages sent to a particular actor can be started with @scala[@scaladoc[`ActorSource.actorRef`](akka.stream.typed.scaladsl.ActorSource#actorRef)]@java[@javadoc[`ActorSource.actorRef`](akka.stream.typed.javadsl.ActorSource#actorRef)]. This source materializes to a typed `ActorRef` which only accepts messages that are of the same type as the stream.

View file

@ -1,5 +1,18 @@
# Testing
## Dependency
To use Akka TestKit Typed, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-testkit-typed_$scala.binary_version$
version=$akka.version$
scope=test
}
## Introduction
Testing can either be done asynchronously using a real `ActorSystem` or synchronously on the testing thread using the `BehaviorTestKit`.
For testing logic in a `Behavior` in isolation synchronous testing is preferred. For testing interactions between multiple
@ -18,17 +31,6 @@ This module is currently marked as @ref:[may change](../common/may-change.md) in
@@@
## Dependency
To use Akka TestKit Typed, add the module to your project:
@@dependency[sbt,Maven,Gradle] {
group=com.typesafe.akka
artifact=akka-testkit-typed_2.12
version=$akka.version$
scope=test
}
## Synchronous behavior testing
The following demonstrates how to test: