Getting Started tutorial improvements (#23210)

This commit is contained in:
Arnout Engelen 2017-07-13 01:24:53 -07:00 committed by GitHub
parent d87cf4aec4
commit f38b928e13
67 changed files with 1451 additions and 1507 deletions

View file

@ -1,179 +1,25 @@
# What problems does the actor model solve?
# How the Actor Model Meets the Needs of Modern, Distributed Systems
Akka uses the actor model to overcome the limitations of traditional object-oriented programming models and meet the
unique challenges of highly distributed systems. To fully understand why the actor model is necessary, it helps to
identify mismatches between traditional approaches to programming and the realities of concurrent and distributed
computing.
As described in the previous topic, common programming practices do not properly
address the needs of demanding modern systems. Thankfully, we
don't need to scrap everything we know. Instead, the actor model addresses these
shortcomings in a principled way, allowing systems to behave in a way that
better matches our mental model. The actor model abstraction
allows you to think about your code in terms of communication, not unlike the
exchanges that occur between people in a large organization.
### The illusion of encapsulation
Object-oriented programming (OOP) is a widely-accepted, familiar programming model. One of its core pillars is
_encapsulation_. Encapsulation dictates that the internal data of an object is not accessible directly from the outside;
it can only be modified by invoking a set of curated methods. The object is responsible for exposing safe operations
that protect the invariant nature of its encapsulated data.
For example, operations on an ordered binary tree implementation must not allow violation of the tree ordering
invariant. Callers expect the ordering to be intact and when querying the tree for a certain piece of
data, they need to be able to rely on this constraint.
When we analyze OOP runtime behavior, we sometimes draw a message sequence chart showing the interactions of
method calls. For example:
![sequence chart](diagrams/seq_chart.png)
Unfortunately, the above diagram does not accurately represent the _lifelines_ of the instances during execution.
In reality, a _thread_ executes all these calls, and the enforcement of invariants occurs on the same thread from
which the method was called. Updating the diagram with the thread of execution, it looks like this:
![sequence chart with thread](diagrams/seq_chart_thread.png)
The significance of this clarification becomes clear when you try to model what happens with _multiple threads_.
Suddenly, our neatly drawn diagram becomes inadequate. We can try to illustrate multiple threads accessing
the same instance:
![sequence chart with threads interacting](diagrams/seq_chart_multi_thread.png)
There is a section of execution where two threads enter the same method. Unfortunately, the encapsulation model
of objects does not guarantee anything about what happens in that section. Instructions of the two invocations
can be interleaved in arbitrary ways, which eliminates any hope of keeping the invariants intact without some
type of coordination between the two threads. Now, imagine this issue compounded by the existence of many threads.
The common approach to solving this problem is to add a lock around these methods. While this ensures that at most
one thread will enter the method at any given time, this is a very costly strategy:
* Locks _seriously limit_ concurrency. They are very costly on modern CPU architectures,
requiring heavy lifting from the operating system to suspend the thread and restore it later.
* The caller thread is now blocked, so it cannot do any other meaningful work. Even in desktop applications this is
unacceptable: we want to keep the user-facing parts of an application (its UI) responsive even when a
long background job is running. In the backend, blocking is outright wasteful.
One might think that this can be compensated for by launching new threads, but threads are also a costly abstraction.
* Locks introduce a new menace: deadlocks.
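To make the cost concrete, here is a minimal Scala sketch (not taken from Akka, the names are made up): a counter whose invariant is protected by a lock. The code is correct, but only one thread at a time can make progress, and every other caller is suspended and later resumed by the operating system.
```scala
// A counter whose invariant (the value only ever grows) is guarded by a lock.
class GuardedCounter {
  private var value: Long = 0L

  def increment(): Long = synchronized {
    value += 1 // safe, but only one thread at a time can get here
    value
  }

  def current: Long = synchronized(value)
}
```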
These realities result in a no-win situation:
* Without sufficient locks, the state gets corrupted.
* With many locks in place, performance suffers and deadlocks arise all too easily.
Additionally, locks only really work well locally. When it comes to coordinating across multiple machines,
the only alternative is distributed locks. Unfortunately, distributed locks are several orders of magnitude less efficient
than local locks and usually impose a hard limit on scaling out. Distributed lock protocols require several
communication round-trips over the network across multiple machines, so latency goes through the roof.
In object-oriented languages we rarely think about threads or linear execution paths in general.
We often envision a system as a network of object instances that react to method calls, modify their internal state,
then communicate with each other via method calls driving the whole application state forward:
![network of interacting objects](diagrams/object_graph.png)
However, in a multi-threaded distributed environment, what actually happens is that threads "traverse" this network of object instances by following method calls.
As a result, threads are what really drive execution:
![network of interactive objects traversed by threads](diagrams/object_graph_snakes.png)
**In summary**:
* **Objects can only guarantee encapsulation (protection of invariants) in the face of single-threaded access;
multi-threaded execution almost always leads to corrupted internal state. Every invariant can be violated by
having two contending threads in the same code segment.**
* **While locks seem to be the natural remedy to uphold encapsulation with multiple threads, in practice they
are inefficient and easily lead to deadlocks in any application of real-world scale.**
* **Locks only work locally; attempts to make them distributed exist, but they offer limited potential for scaling out.**
### The illusion of shared memory on modern computer architectures
Programming models of the '80s and '90s conceptualize that writing to a variable means writing to a memory location directly
(a notion somewhat muddied by the fact that local variables might exist only in registers). On modern architectures -
if we simplify things a bit - CPUs are writing to [cache lines](https://en.wikipedia.org/wiki/CPU_cache)
instead of writing to memory directly. Most of these caches are local to the CPU core, that is, writes by one core
are not visible to another. In order to make local changes visible to another core, and hence to another thread,
the cache line needs to be shipped to the other core's cache.
On the JVM, we have to explicitly denote memory locations to be shared across threads by using _volatile_ markers
or `Atomic` wrappers. Otherwise, we can access them only in a locked section. Why don't we just mark all variables as
volatile? Because shipping cache lines across cores is a very costly operation! Doing so would implicitly stall the cores
involved, keeping them from doing additional work, and result in bottlenecks on the cache coherence protocol (the protocol CPUs
use to transfer cache lines between main memory and other CPUs).
The result is a slowdown of several orders of magnitude.
Even for developers aware of this situation, figuring out which memory locations should be marked as volatile,
or which atomic structures to use is a dark art.
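As an illustration, the following Scala sketch (names are invented) shows the two JVM tools mentioned above; deciding which fields actually need them is exactly the dark art in question.
```scala
import java.util.concurrent.atomic.AtomicLong

object Visibility {
  // `@volatile` publishes every write to other cores, at the price of
  // shipping the cache line on each access.
  @volatile var shutdownRequested: Boolean = false

  // An Atomic wrapper additionally makes read-modify-write sequences safe.
  val requestCount = new AtomicLong(0L)

  def handleRequest(): Unit = {
    if (!shutdownRequested) requestCount.incrementAndGet()
  }
}
```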
**In summary**:
* **There is no real shared memory anymore; CPU cores pass chunks of data (cache lines) explicitly to each other,
just as computers on a network do. Inter-CPU communication and network communication have more in common than many realize. Passing messages is the norm now, be it across CPUs or networked computers.**
* **Instead of hiding the message passing aspect through variables marked as shared or using atomic data structures,
a more disciplined and principled approach is to keep state local to a concurrent entity and propagate data or events
between concurrent entities explicitly via messages.**
### The illusion of a call stack
Today, we often take call stacks for granted. But they were invented in an era where concurrent programming
was not as important because multi-CPU systems were not common. Call stacks do not cross threads and hence
do not model asynchronous call chains.
The problem arises when a thread intends to delegate a task to the "background". In practice, this really means
delegating to another thread. This cannot be a simple method/function call because calls are strictly local to the
thread. What usually happens is that the "caller" puts an object into a memory location shared by a worker thread
("callee"), which in turn, picks it up in some event loop. This allows the "caller" thread to move on and do other tasks.
The first issue is, how can the "caller" be notified of the completion of the task? But a more serious issue arises
when a task fails with an exception. Where does the exception propagate to? It will propagate to the exception handler
of the worker thread completely ignoring who the actual "caller" was:
![exceptions cannot propagate between different threads](diagrams/exception_prop.png)
This is a serious problem. How does the worker thread deal with the situation? It likely cannot fix the issue as it is
usually oblivious of the purpose of the failed task. The "caller" thread needs to be notified somehow,
but there is no call stack to unwind with an exception. Failure notification can only be done via a side-channel,
for example putting an error code where the "caller" thread otherwise expects the result once ready.
If this notification is not in place, the "caller" never gets notified of a failure and the task is lost!
**This is surprisingly similar to how networked systems work where messages/requests can get lost/fail without any
notification.**
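The following minimal Scala sketch (not from the Akka docs) shows the failure disappearing: the task is handed to a worker thread and its exception never reaches the caller.
```scala
import java.util.concurrent.Executors

object LostFailure extends App {
  val worker = Executors.newSingleThreadExecutor()

  worker.execute(new Runnable {
    // If the task throws, the exception ends up in the worker thread's
    // uncaught-exception handling, never on the caller's call stack.
    def run(): Unit = throw new IllegalStateException("task failed")
  })

  println("caller keeps going, unaware that the task above has failed")
  worker.shutdown()
}
```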
This bad situation gets worse when things go really wrong and a worker backed by a thread encounters a bug and ends
up in an unrecoverable situation. For example, an internal exception caused by a bug bubbles up to the root of
the thread and makes the thread shut down. This immediately raises the question, who should restart the normal operation
of the service hosted by the thread, and how should it be restored to a known-good state? At first glance,
this might seem manageable, but we are suddenly faced with a new, unexpected phenomenon: the actual task
that the thread was working on is no longer in the shared memory location where tasks are taken from
(usually a queue). In fact, due to the exception reaching the top and unwinding all of the call stack,
the task state is fully lost! **We have lost a message even though this is local communication with no networking
involved (where message losses are to be expected).**
**In summary:**
* **To achieve any meaningful concurrency and performance on current systems, threads must delegate tasks among each
other in an efficient way without blocking. With this style of task-delegating concurrency
(and even more so with networked/distributed computing) call stack-based error handling breaks down and new,
explicit error signaling mechanisms need to be introduced. Failures become part of the domain model.**
* **Concurrent systems with work delegation need to handle service faults and have principled means to recover from them.
Clients of such services need to be aware that tasks/messages might get lost during restarts.
Even if loss does not happen, a response might be delayed arbitrarily due to previously enqueued tasks
(a long queue), delays caused by garbage collection, etc. In the face of these, concurrent systems should handle response
deadlines in the form of timeouts, just like networked/distributed systems.**
## How the actor model meets the needs of concurrent, distributed systems
As described in the sections above, common programming practices cannot properly address the needs of modern concurrent
and distributed systems.
Thankfully, we don't need to scrap everything we know. Instead, the actor model addresses these shortcomings in a
principled way, allowing systems to behave in a way that better matches our mental model.
In particular, we would like to:
Use of actors allows us to:
* Enforce encapsulation without resorting to locks.
* Use the model of cooperative entities reacting to signals, changing state and sending signals to each other
* Use the model of cooperative entities reacting to signals, changing state, and sending signals to each other
to drive the whole application forward.
* Stop worrying about an execution mechanism that is a mismatch with our world view.
The actor model accomplishes all of these goals. The following topics describe how.
### Usage of message passing avoids locking and blocking
Instead of calling methods, actors send messages to each other. Sending a message does not transfer the thread
of execution from the sender to the destination. An actor can send a message and continue without blocking.
It can, therefore, do more work, send and receive messages.
Therefore, it can accomplish more in the same amount of time.
With objects, when a method returns, it releases control of its executing thread. In this respect, actors behave
much like objects: they react to messages and return execution when they finish processing the current message.
@ -181,8 +27,8 @@ In this way, actors actually achieve the execution we imagined for objects:
![actors interact with each other by sending messages](diagrams/actor_graph.png)
An important difference of passing messages instead of calling methods is that messages have no return value.
By sending a message, an actor delegates work to another actor. As we saw in @ref:[The illusion of a call stack](actors-intro.md#the-illusion-of-a-call-stack),
An important difference between passing messages and calling methods is that messages have no return value.
By sending a message, an actor delegates work to another actor. As we saw in @ref:[The illusion of a call stack](actors-motivation.md#the-illusion-of-a-call-stack),
if it expected a return value, the sending actor would either need to block or to execute the other actor's work on the same thread.
Instead, the receiving actor delivers the results in a reply message.
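As a small illustration, here is a sketch using Akka's classic actor API (the actor and message names are invented for this example): the sender fires a message and keeps running, and the result comes back later as a separate reply message.
```scala
import akka.actor.{Actor, ActorSystem, Props}

case class Translate(text: String)
case class Translated(text: String)

class Translator extends Actor {
  def receive: Receive = {
    case Translate(text) =>
      sender() ! Translated(text.toUpperCase) // the "return value" is a reply message
  }
}

class Client extends Actor {
  private val translator = context.actorOf(Props[Translator], "translator")
  translator ! Translate("hello") // does not block the sending actor

  def receive: Receive = {
    case Translated(text) => println(s"got reply: $text")
  }
}

object MessagePassingDemo extends App {
  val system = ActorSystem("demo")
  system.actorOf(Props[Client], "client")
}
```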
@ -190,8 +36,9 @@ The second key change we need in our model is to reinstate encapsulation. Actors
"react" to methods invoked on them. The difference is that instead of multiple threads "protruding" into our actor and
wreaking havoc on internal state and invariants, actors execute independently of the senders of a message, and they
react to incoming messages sequentially, one at a time. While each actor processes messages sent to it sequentially,
different actors work concurrently with each other so an actor system can process as many messages simultaneously
as many processor cores are available on the machine. Since there is always at most one message being processed per actor
different actors work concurrently with each other so that an actor system can process as many messages simultaneously as the hardware will support.
Since there is always at most one message being processed per actor,
the invariants of an actor can be kept without synchronization. This happens automatically without using locks:
![messages don't invalidate invariants as they are processed sequentially](diagrams/serialized_timeline_invariants.png)
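A minimal sketch of this point, with invented names: plain mutable state inside an actor needs no locks, because the actor only ever processes one message at a time.
```scala
import akka.actor.{Actor, ActorSystem, Props}

class Counter extends Actor {
  private var count = 0 // never touched by more than one thread at a time

  def receive: Receive = {
    case "increment" => count += 1
    case "report"    => println(s"count is $count")
  }
}

object CounterDemo extends App {
  val system = ActorSystem("counter-demo")
  val counter = system.actorOf(Props[Counter], "counter")
  (1 to 1000).foreach(_ => counter ! "increment") // enqueued, processed sequentially
  counter ! "report"                              // prints the count, with no lost updates
}
```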
@ -207,15 +54,15 @@ In summary, this is what happens when an actor receives a message:
To accomplish this behavior, actors have:
* A Mailbox (the queue where messages end up).
* A Behavior (the state of the actor, internal variables etc.).
* A mailbox (the queue where messages end up).
* A behavior (the state of the actor, internal variables etc.).
* Messages (pieces of data representing a signal, similar to method calls and their parameters).
* An Execution Environment (the machinery that takes actors that have messages to react to and invokes
* An execution environment (the machinery that takes actors that have messages to react to and invokes
their message handling code).
* An Address (more on this later).
* An address (more on this later).
Messages are put into so-called Mailboxes of Actors. The Behavior of the actor describes how the actor responds to
messages (like sending more messages and/or changing state). An Execution Environment orchestrates a pool of threads
Messages go into actor mailboxes. The behavior of the actor describes how the actor responds to
messages (like sending more messages and/or changing state). An execution environment orchestrates a pool of threads
to drive all these actions completely transparently.
This is a very simple model and it solves the issues enumerated previously:
@ -231,7 +78,7 @@ This is a very simple model and it solves the issues enumerated previously:
### Actors handle error situations gracefully
Since we have no longer a shared call stack between actors that send messages to each other, we need to handle
Since we no longer have a shared call stack between actors that send messages to each other, we need to handle
error situations differently. There are two kinds of errors we need to consider:
* The first case is when the delegated task on the target actor failed due to an error in the task (typically some
@ -250,3 +97,5 @@ others. Children never go silently dead (with the notable exception of entering
either failing and their parent can react to the fault, or they are stopped (in which case interested parties are
automatically notified). There is always a responsible entity for managing an actor: its parent. Restarts are not
visible from the outside: collaborating actors can continue sending messages while the target actor restarts.
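The sketch below uses the classic supervision API to illustrate this; the actor names and the choice of `ArithmeticException` are invented for the example. The parent declares what to do when its child fails, and collaborators keep using the same `ActorRef` across the restart.
```scala
import akka.actor.{Actor, ActorRef, OneForOneStrategy, Props, SupervisorStrategy}
import akka.actor.SupervisorStrategy.Restart

class Worker extends Actor {
  def receive: Receive = {
    case n: Int => sender() ! (100 / n) // n == 0 makes this actor fail
  }
}

class WorkerSupervisor extends Actor {
  override val supervisorStrategy: SupervisorStrategy =
    OneForOneStrategy() {
      case _: ArithmeticException => Restart // restart the failed child
    }

  private val worker: ActorRef = context.actorOf(Props[Worker], "worker")

  def receive: Receive = {
    case msg => worker forward msg // collaborators keep talking to the same child
  }
}
```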
Now, let's take a short tour of the functionality Akka provides.

View file

@ -0,0 +1,159 @@
# Why modern systems need a new programming model
The actor model was proposed decades ago by @extref[Carl Hewitt](wikipedia:Carl_Hewitt#Actor_model) as a way to handle parallel processing in a high performance network — an environment that was not available at the time. Today, hardware and infrastructure capabilities have caught up with and exceeded Hewitt's vision. Consequently, organizations building distributed systems with demanding requirements encounter challenges that cannot fully be solved with a traditional object-oriented programming (OOP) model, but that can benefit from the actor model.
Today, the actor model is not only recognized as a highly effective solution — it has been proven in production for some of the world's most demanding applications. To highlight issues that the actor model addresses, this topic discusses the following mismatches between traditional programming assumptions and the reality of modern multi-threaded, multi-CPU architectures:
* [The challenge of encapsulation](#the-challenge-of-encapsulation)
* [The illusion of shared memory on modern computer architectures](#the-illusion-of-shared-memory-on-modern-computer-architectures)
* [The illusion of a call stack](#the-illusion-of-a-call-stack)
## The challenge of encapsulation
A core pillar of OOP is _encapsulation_. Encapsulation dictates that the internal data of an object is not accessible directly from the outside;
it can only be modified by invoking a set of curated methods. The object is responsible for exposing safe operations
that protect the invariant nature of its encapsulated data.
For example, operations on an ordered binary tree implementation must not allow violation of the tree ordering
invariant. Callers expect the ordering to be intact and when querying the tree for a certain piece of
data, they need to be able to rely on this constraint.
When we analyze OOP runtime behavior, we sometimes draw a message sequence chart showing the interactions of
method calls. For example:
![sequence chart](diagrams/seq_chart.png)
Unfortunately, the above diagram does not accurately represent the _lifelines_ of the instances during execution.
In reality, a _thread_ executes all these calls, and the enforcement of invariants occurs on the same thread from
which the method was called. Updating the diagram with the thread of execution, it looks like this:
![sequence chart with thread](diagrams/seq_chart_thread.png)
The significance of this clarification becomes clear when you try to model what happens with _multiple threads_.
Suddenly, our neatly drawn diagram becomes inadequate. We can try to illustrate multiple threads accessing
the same instance:
![sequence chart with threads interacting](diagrams/seq_chart_multi_thread.png)
There is a section of execution where two threads enter the same method. Unfortunately, the encapsulation model
of objects does not guarantee anything about what happens in that section. Instructions of the two invocations
can be interleaved in arbitrary ways, which eliminates any hope of keeping the invariants intact without some
type of coordination between the two threads. Now, imagine this issue compounded by the existence of many threads.
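A minimal Scala sketch of this interleaving problem (not from the Akka docs): two threads updating the same unsynchronized field lose updates.
```scala
class Account {
  var balance: Int = 0

  def deposit(amount: Int): Unit = {
    balance += amount // not atomic: a read, an add and a write
  }
}

object RaceDemo extends App {
  val account = new Account
  val threads = (1 to 2).map { _ =>
    new Thread(new Runnable {
      def run(): Unit = (1 to 100000).foreach(_ => account.deposit(1))
    })
  }
  threads.foreach(_.start())
  threads.foreach(_.join())
  println(s"expected 200000, got ${account.balance}") // usually prints less than 200000
}
```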
The common approach to solving this problem is to add a lock around these methods. While this ensures that at most
one thread will enter the method at any given time, this is a very costly strategy:
* Locks _seriously limit_ concurrency. They are very costly on modern CPU architectures,
requiring heavy lifting from the operating system to suspend the thread and restore it later.
* The caller thread is now blocked, so it cannot do any other meaningful work. Even in desktop applications this is
unacceptable: we want to keep the user-facing parts of an application (its UI) responsive even when a
long background job is running. In the backend, blocking is outright wasteful.
One might think that this can be compensated for by launching new threads, but threads are also a costly abstraction.
* Locks introduce a new menace: deadlocks.
These realities result in a no-win situation:
* Without sufficient locks, the state gets corrupted.
* With many locks in place, performance suffers and deadlocks arise all too easily.
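As an illustration of the deadlock menace, here is a classic two-lock sketch (invented names, plain JVM threads): each thread takes the locks in the opposite order and, with the sleeps in place, the program usually hangs.
```scala
object DeadlockDemo extends App {
  val lockA = new Object
  val lockB = new Object

  val t1 = new Thread(new Runnable {
    def run(): Unit = lockA.synchronized {
      Thread.sleep(100)
      lockB.synchronized(println("t1 got both locks"))
    }
  })
  val t2 = new Thread(new Runnable {
    def run(): Unit = lockB.synchronized {
      Thread.sleep(100)
      lockA.synchronized(println("t2 got both locks"))
    }
  })
  t1.start(); t2.start() // each thread waits for the other's lock: usually hangs
}
```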
Additionally, locks only really work well locally. When it comes to coordinating across multiple machines,
the only alternative is distributed locks. Unfortunately, distributed locks are several orders of magnitude less efficient
than local locks and usually impose a hard limit on scaling out. Distributed lock protocols require several
communication round-trips over the network across multiple machines, so latency goes through the roof.
In object-oriented languages we rarely think about threads or linear execution paths in general.
We often envision a system as a network of object instances that react to method calls, modify their internal state,
then communicate with each other via method calls driving the whole application state forward:
![network of interacting objects](diagrams/object_graph.png)
However, in a multi-threaded distributed environment, what actually happens is that threads "traverse" this network of object instances by following method calls.
As a result, threads are what really drive execution:
![network of interactive objects traversed by threads](diagrams/object_graph_snakes.png)
**In summary**:
* **Objects can only guarantee encapsulation (protection of invariants) in the face of single-threaded access;
multi-threaded execution almost always leads to corrupted internal state. Every invariant can be violated by
having two contending threads in the same code segment.**
* **While locks seem to be the natural remedy to uphold encapsulation with multiple threads, in practice they
are inefficient and easily lead to deadlocks in any application of real-world scale.**
* **Locks only work locally; attempts to make them distributed exist, but they offer limited potential for scaling out.**
## The illusion of shared memory on modern computer architectures
Programming models of the '80s and '90s conceptualize that writing to a variable means writing to a memory location directly
(a notion somewhat muddied by the fact that local variables might exist only in registers). On modern architectures -
if we simplify things a bit - CPUs are writing to @extref[cache lines](wikipedia:CPU_cache)
instead of writing to memory directly. Most of these caches are local to the CPU core, that is, writes by one core
are not visible to another. In order to make local changes visible to another core, and hence to another thread,
the cache line needs to be shipped to the other core's cache.
On the JVM, we have to explicitly denote memory locations to be shared across threads by using _volatile_ markers
or `Atomic` wrappers. Otherwise, we can access them only in a locked section. Why don't we just mark all variables as
volatile? Because shipping cache lines across cores is a very costly operation! Doing so would implicitly stall the cores
involved, keeping them from doing additional work, and result in bottlenecks on the cache coherence protocol (the protocol CPUs
use to transfer cache lines between main memory and other CPUs).
The result is a slowdown of several orders of magnitude.
Even for developers aware of this situation, figuring out which memory locations should be marked as volatile,
or which atomic structures to use is a dark art.
**In summary**:
* **There is no real shared memory anymore; CPU cores pass chunks of data (cache lines) explicitly to each other,
just as computers on a network do. Inter-CPU communication and network communication have more in common than many realize. Passing messages is the norm now, be it across CPUs or networked computers.**
* **Instead of hiding the message passing aspect through variables marked as shared or using atomic data structures,
a more disciplined and principled approach is to keep state local to a concurrent entity and propagate data or events
between concurrent entities explicitly via messages.**
## The illusion of a call stack
Today, we often take call stacks for granted. But they were invented in an era where concurrent programming
was not as important because multi-CPU systems were not common. Call stacks do not cross threads and hence
do not model asynchronous call chains.
The problem arises when a thread intends to delegate a task to the "background". In practice, this really means
delegating to another thread. This cannot be a simple method/function call because calls are strictly local to the
thread. What usually happens is that the "caller" puts an object into a memory location shared by a worker thread
("callee"), which in turn, picks it up in some event loop. This allows the "caller" thread to move on and do other tasks.
The first issue is, how can the "caller" be notified of the completion of the task? But a more serious issue arises
when a task fails with an exception. Where does the exception propagate to? It will propagate to the exception handler
of the worker thread completely ignoring who the actual "caller" was:
![exceptions cannot propagate between different threads](diagrams/exception_prop.png)
This is a serious problem. How does the worker thread deal with the situation? It likely cannot fix the issue as it is
usually oblivious of the purpose of the failed task. The "caller" thread needs to be notified somehow,
but there is no call stack to unwind with an exception. Failure notification can only be done via a side-channel,
for example putting an error code where the "caller" thread otherwise expects the result once ready.
If this notification is not in place, the "caller" never gets notified of a failure and the task is lost!
**This is surprisingly similar to how networked systems work where messages/requests can get lost/fail without any
notification.**
This bad situation gets worse when things go really wrong and a worker backed by a thread encounters a bug and ends
up in an unrecoverable situation. For example, an internal exception caused by a bug bubbles up to the root of
the thread and makes the thread shut down. This immediately raises the question, who should restart the normal operation
of the service hosted by the thread, and how should it be restored to a known-good state? At first glance,
this might seem manageable, but we are suddenly faced with a new, unexpected phenomenon: the actual task
that the thread was working on is no longer in the shared memory location where tasks are taken from
(usually a queue). In fact, due to the exception reaching the top and unwinding all of the call stack,
the task state is fully lost! **We have lost a message even though this is local communication with no networking
involved (where message losses are to be expected).**
**In summary:**
* **To achieve any meaningful concurrency and performance on current systems, threads must delegate tasks among each
other in an efficient way without blocking. With this style of task-delegating concurrency
(and even more so with networked/distributed computing) call stack-based error handling breaks down and new,
explicit error signaling mechanisms need to be introduced. Failures become part of the domain model.**
* **Concurrent systems with work delegation need to handle service faults and have principled means to recover from them.
Clients of such services need to be aware that tasks/messages might get lost during restarts.
Even if loss does not happen, a response might be delayed arbitrarily due to previously enqueued tasks
(a long queue), delays caused by garbage collection, etc. In the face of these, concurrent systems should handle response
deadlines in the form of timeouts, just like networked/distributed systems.**
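As a plain Scala illustration of these last two points (not Akka-specific, names invented): the failure travels back to the caller as a value instead of unwinding a call stack, and the caller protects itself with a deadline instead of waiting forever.
```scala
import scala.concurrent.{Await, Future, TimeoutException}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import scala.util.{Failure, Success}

object ExplicitFailureDemo extends App {
  val work: Future[Int] = Future {
    if (scala.util.Random.nextBoolean()) throw new RuntimeException("task failed")
    42
  }

  try {
    Await.ready(work, 1.second) // response deadline: give up after the timeout
    work.value.get match {
      case Success(result) => println(s"got $result")
      case Failure(error)  => println(s"task signaled a failure: ${error.getMessage}")
    }
  } catch {
    case _: TimeoutException => println("no response before the deadline, giving up")
  }
}
```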
Next, let's see how use of the actor model can overcome these challenges.

View file

@ -5,12 +5,14 @@
@@@ index
* [introduction](introduction.md)
* [actors-motivation](actors-motivation.md)
* [actors-intro](actors-intro.md)
* [modules](modules.md)
* [quickstart](quickstart.md)
* [tutorial](tutorial.md)
* [part1](tutorial_1.md)
* [part2](tutorial_2.md)
* [part3](tutorial_3.md)
* [part4](tutorial_4.md)
* [part5](tutorial_5.md)
@@@

View file

@ -1,49 +1,42 @@
# Introduction to Akka
Welcome to Akka, a set of open-source libraries for designing scalable, resilient systems that
span processor cores and networks. Akka allows you to focus on meeting business needs instead
of writing low-level code to provide reliable behavior, fault tolerance, and high performance.
Welcome to Akka, a set of open-source libraries for designing scalable, resilient systems that span processor cores and networks. Akka allows you to focus on meeting business needs instead of writing low-level code to provide reliable behavior, fault tolerance, and high performance.
Common practices and programming models do not address important challenges inherent in designing systems
for modern computer architectures. To be successful, distributed systems must cope in an environment where components
crash without responding, messages get lost without a trace on the wire, and network latency fluctuates.
These problems occur regularly in carefully managed intra-datacenter environments - even more so in virtualized
architectures.
Many common practices and accepted programming models do not address important challenges
inherent in designing systems for modern computer architectures. To be
successful, distributed systems must cope in an environment where components
crash without responding, messages get lost without a trace on the wire, and
network latency fluctuates. These problems occur regularly in carefully managed
intra-datacenter environments - even more so in virtualized architectures.
To deal with these realities, Akka provides:
To help you deal with these realities, Akka provides:
* Multi-threaded behavior without the use of low-level concurrency constructs like
atomics or locks. You do not even need to think about memory visibility issues.
* Transparent remote communication between systems and their components. You do
not need to write or maintain difficult networking code.
* A clustered, high-availability architecture that is elastic, scales in or out, on demand.
atomics or locks — relieving you from even thinking about memory visibility issues.
* Transparent remote communication between systems and their components — relieving you from writing and maintaining difficult networking code.
* A clustered, high-availability architecture that is elastic, scales in or out, on demand — enabling you to deliver a truly reactive system.
All of these features are available through a uniform programming model: Akka exploits the actor model
to provide a level of abstraction that makes it easier to write correct concurrent, parallel and distributed systems.
The actor model spans the set of Akka libraries, providing you with a consistent way of understanding and using them.
Thus, Akka offers a depth of integration that you cannot achieve by picking libraries to solve individual problems and
trying to piece them together.
Akka's use of the actor model provides a level of abstraction that makes it
easier to write correct concurrent, parallel and distributed systems. The actor
model spans the full set of Akka libraries, providing you with a consistent way
of understanding and using them. Thus, Akka offers a depth of integration that
you cannot achieve by picking libraries to solve individual problems and trying
to piece them together.
By learning Akka and its actor model, you will gain access to a vast and deep set of tools that solve difficult
distributed/parallel systems problems in a uniform programming model where everything fits together tightly and
By learning Akka and how to use the actor model, you will gain access to a vast
and deep set of tools that solve difficult distributed/parallel systems problems
in a uniform programming model where everything fits together tightly and
efficiently.
## What is the Actor Model?
## How to get started
The characteristics of today's computing environments are vastly different from the ones in use when the programming
models of yesterday were conceived. Actors were invented decades ago by @extref[Carl Hewitt](wikipedia:Carl_Hewitt#Actor_model).
But relatively recently, their applicability to the challenges of modern computing systems has been recognized and
proved to be effective.
If this is your first experience with Akka, we recommend that you start by
running a simple Hello World project. See the @scala[[QuickStart Guide](http://developer.lightbend.com/guides/akka-quickstart-scala)] @java[[QuickStart Guide](http://developer.lightbend.com/guides/akka-quickstart-java)] for
instructions on downloading and running the Hello World example. The *QuickStart* guide walks you through example code that introduces how to define actor systems, actors, and messages as well as how to use the test module and logging. Within 30 minutes, you should be able to run the Hello World example and learn how it is constructed.
The actor model provides an abstraction that allows you to think about your code in terms of communication, not unlike
people in a large organization. The basic characteristic of actors is that they model the world as stateful entities
communicating with each other by explicit message passing.
This *Getting Started* guide provides the next level of information. It covers why the actor model fits the needs of modern distributed systems and includes a tutorial that will help further your knowledge of Akka. Topics include:
As computational entities, actors have these characteristics:
* They communicate with asynchronous messaging instead of method calls
* They manage their own state
* When responding to a message, they can:
* Create other (child) actors
* Send messages to other actors
* Stop (child) actors or themselves
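A minimal sketch of these characteristics using Akka's classic actor API (the names are invented): the actor keeps its own state and, in response to messages, creates a child, sends it a message, or stops itself.
```scala
import akka.actor.{Actor, Props}

class JobManager extends Actor {
  private var jobsSeen = 0 // state owned and managed by this actor alone

  def receive: Receive = {
    case "new-job" =>
      jobsSeen += 1
      val worker = context.actorOf(Props[JobWorker], s"worker-$jobsSeen") // create a child actor
      worker ! "start"                                                    // send it a message asynchronously
    case "shutdown" =>
      context.stop(self)                                                  // stop itself (children are stopped too)
  }
}

class JobWorker extends Actor {
  def receive: Receive = {
    case "start" => println(s"${self.path} started working")
  }
}
```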
* @ref[Why modern systems need a new programming model](actors-motivation.md)
* @ref[How the actor model meets the needs of concurrent, distributed systems](actors-intro.md)
* @ref[Overview of Akka libraries and modules](modules.md)
* A @ref[more complex example](tutorial.md) that builds on the Hello World example to illustrate common Akka patterns.

View file

@ -1,12 +1,31 @@
# Akka Libraries and Modules
# Overview of Akka libraries and modules
Before we delve further into writing our first actors, we should stop for a moment and look at the set of libraries
that come out-of-the-box. This will help you identify which modules and libraries provide the functionality you
want to use in your system.
Before delving into some best practices for writing actors, it will be helpful to preview the most commonly used Akka libraries. This will help you start thinking about the functionality you want to use in your system. All core Akka functionality is available as Open Source Software (OSS). Lightbend sponsors Akka development but can also help you with [commercial offerings](https://www.lightbend.com/platform/subscription) such as training, consulting, support, and [Enterprise Suite](https://www.lightbend.com/platform/production) — a comprehensive set of tools for managing Akka systems.
### Actors (`akka-actor` Library, the Core)
The following capabilities are included with Akka OSS and are introduced later on this page:
The use of actors across Akka libraries provides a consistent, integrated model that relieves you from individually
* [Actor library](#actor-library)
* [Remoting](#remoting)
* [Cluster](#cluster)
* [Cluster Sharding](#cluster-sharding)
* [Cluster Singleton](#cluster-singleton)
* [Cluster Publish-Subscribe](#cluster-publish-subscribe)
* [Persistence](#persistence)
* [Distributed Data](#distributed-data)
* [HTTP](#http)
With a Lightbend subscription, you can use [Enterprise Suite](https://www.lightbend.com/platform/production) in production. Enterprise Suite includes the following extensions to Akka core functionality:
* [Split Brain Resolver](https://developer.lightbend.com/docs/akka-commercial-addons/current/split-brain-resolver.html) — Detects and recovers from network partitions, eliminating data inconsistencies and possible downtime.
* [Configuration Checker](https://developer.lightbend.com/docs/akka-commercial-addons/current/config-checker.html) — Checks for potential configuration issues and logs suggestions.
* [Diagnostics Recorder](https://developer.lightbend.com/docs/akka-commercial-addons/current/diagnostics-recorder.html) — Captures configuration and system information in a format that makes it easy to troubleshoot issues during development and production.
* [Thread Starvation Detector](https://developer.lightbend.com/docs/akka-commercial-addons/current/starvation-detector.html) — Monitors an Akka system dispatcher and logs warnings if it becomes unresponsive.
This page does not list all available modules, but overviews the main functionality and gives you an idea of the level of sophistication you can reach when you start building systems on top of Akka.
### Actor library
The core Akka library is `akka-actor`. But actors are used across Akka libraries, providing a consistent, integrated model that relieves you from individually
solving the challenges that arise in concurrent or distributed system design. From a bird's-eye view,
actors are a programming paradigm that takes encapsulation, one of the pillars of OOP, to its extreme.
Unlike objects, actors encapsulate not only their
@ -17,7 +36,7 @@ yet, in the next chapter we will explain actors in detail. For now, the importan
handles concurrency and distribution at the fundamental level instead of ad hoc patched attempts to bring these
features to OOP.
Challenges that actors solve include:
Challenges that actors solve include the following:
* How to build and design high-performance, concurrent applications.
* How to handle errors in a multi-threaded environment.
@ -25,13 +44,13 @@ Challenges that actors solve include:
### Remoting
Remoting enables actors that are remote, living on different computers, to seamlessly exchange messages.
Remoting enables actors that live on different computers to seamlessly exchange messages.
While distributed as a JAR artifact, Remoting resembles a module more than it does a library. You enable it mostly
with configuration, it has only a few APIs. Thanks to the actor model, a remote and local message send looks exactly the
with configuration and it has only a few APIs. Thanks to the actor model, a remote and local message send looks exactly the
same. The patterns that you use on local systems translate directly to remote systems.
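As a sketch of what this looks like (the system name, host, port and actor name are made up, and classic remoting over TCP is assumed to be enabled in configuration, which is not shown here), sending to a remote actor is indistinguishable from a local send:
```scala
import akka.actor.{ActorSelection, ActorSystem}

object RemoteSendSketch extends App {
  val system = ActorSystem("frontend")

  // Look up an actor on another machine by its address.
  val remoteWorker: ActorSelection =
    system.actorSelection("akka.tcp://backend@10.0.0.7:2552/user/worker")

  remoteWorker ! "job-request" // looks identical to a local send
}
```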
You will rarely need to use Remoting directly, but it provides the foundation on which the Cluster subsystem is built.
Some of the challenges Remoting solves are:
Challenges Remoting solves include the following:
* How to address actor systems living on remote hosts.
* How to address individual actors on remote actor systems.
@ -48,7 +67,7 @@ remote systems, Clustering gives you the ability to organize these into a "meta-
protocol. **In most cases, you want to use the Cluster module instead of using Remoting directly.**
Clustering provides an additional set of services on top of Remoting that most real world applications need.
The challenges the Cluster module solves, among others, are:
Challenges the Cluster module solves include the following:
* How to maintain a set of actor systems (a cluster) that can communicate with each other and consider each other as part of the cluster.
* How to introduce a new system safely to the set of already existing members.
@ -63,7 +82,7 @@ Sharding helps to solve the problem of distributing a set of actors among member
Sharding is a pattern that is mostly used together with Persistence to balance a large set of persistent entities
(backed by actors) across the members of a cluster and to migrate them to other nodes when members crash or leave.
The challenge space that Sharding targets:
Challenges that Sharding solves include the following:
* How to model and scale out a large set of stateful entities on a set of systems.
* How to ensure that entities in the cluster are distributed properly so that load is properly balanced across the machines.
@ -155,17 +174,10 @@ Some of the challenges that HTTP tackles:
* How to stream large datasets in and out of a system using HTTP.
* How to stream live events in and out of a system using HTTP.
***
### Example of module use
The above is an incomplete list of all the available modules, but it gives a nice overview of the landscape of modules
and the level of sophistication you can reach when you start building systems on top of Akka. All these modules
integrate together seamlessly. For example, take a large set of stateful business objects
(a document, a shopping cart, etc) that is accessed by on-line users from your website. Model these as sharded
entities using Sharding and Persistence to keep them balanced across a cluster that you can scale out on-demand
(for example during an advertising campaign before holidays) and keep them available even if some systems crash.
Take the real-time stream of domain events of your business objects with Persistence Query and use Streams to pipe
it into a streaming BigData engine. Take the output of that engine as a Stream, manipulate it using Akka Streams
Akka modules integrate together seamlessly. For example, think of a large set of stateful business objects, such as documents or shopping carts, that website users access. If you model these as sharded entities, using Sharding and Persistence, they will be balanced across a cluster that you can scale out on-demand. They will remain available during spikes that come from advertising campaigns or before holidays, even if some systems crash. You can also easily take the real-time stream of domain events with Persistence Query and use Streams to pipe them into a streaming Fast Data engine. Then, take the output of that engine as a Stream, manipulate it using Akka Streams
operators and expose it as web socket connections served by a load balanced set of HTTP servers hosted by your cluster
to power your real-time business analytics tool.
Has this gotten you interested? Keep on reading to learn more.
We hope this preview caught your interest! The next topic introduces the example application we will build in the tutorial portion of this guide.

View file

@ -1,18 +0,0 @@
# Quickstart
After all this introduction, we are ready to build our first actor system. We will do so in five chapters.
This first chapter will help you to set up your project, tools and have a simple "Hello World" demo running.
We will keep this section to a bare minimum and then extend the sample application in the next chapter.
> Our goal in this chapter is to set up a working environment for you, create an application that starts up and stops
an ActorSystem and create an actor which we will run and test.
Akka requires that you have [Java 8](http://www.oracle.com/technetwork/java/javase/downloads/index.html) or
later installed on your machine.
As the very first thing, we need to make sure that we can compile our project and have a working IDE setup to be
able to edit code comfortably.
The easiest way is to use the @scala[[Akka Quickstart with Scala guide](http://developer.lightbend.com/guides/akka-quickstart-scala/)] @java[[Akka Quickstart with Java guide](http://developer.lightbend.com/guides/akka-quickstart-java/)]. It contains a Hello World example that illustrates Akka basics. Within 30 minutes, you should be able to download and run the example and use that guide to understand how the example is constructed.
After that you can go back here and you are ready to dive deeper.

View file

@ -0,0 +1,38 @@
# Introduction to the Example
When writing prose, the hardest part is often composing the first few sentences. There is a similar "blank canvas" feeling
when starting to build an Akka system. You might wonder: Which should be the first actor? Where should it live? What should it do?
Fortunately — unlike with prose — established best practices can guide us through these initial steps. In the remainder of this guide, we examine the core logic of a simple Akka application to introduce you to actors and show you how to formulate solutions with them. The example demonstrates common patterns that will help you kickstart your Akka projects.
## Prerequisites
You should have already followed the instructions in the @scala[[Akka Quickstart with Scala guide](http://developer.lightbend.com/guides/akka-quickstart-scala/)] @java[[Akka Quickstart with Java guide](http://developer.lightbend.com/guides/akka-quickstart-java/)] to download and run the Hello World example. You will use this as a seed project and add the functionality described in this tutorial.
## IoT example use case
In this tutorial, we'll use Akka to build out part of an Internet of Things (IoT) system that reports data from sensor devices installed in customers' homes. The example focuses on temperature readings. The target use case simply allows customers to log in and view the last reported temperature from different areas of their homes. You can imagine that such sensors could also collect relative humidity or other interesting data and an application would likely support reading and changing device configuration, maybe even alerting home owners when sensor state falls outside of a particular range.
In a real system, the application would be exposed to customers through a mobile app or browser. This guide concentrates only on the core logic for storing temperatures that would be called over a network protocol, such as HTTP. It also includes writing tests to help you get comfortable and proficient with testing actors.
The tutorial application consists of two main components:
* **Device data collection:** maintains a local representation of the
remote devices. Multiple sensor devices for a home are organized into one device group.
* **User dashboard:** periodically collects data from the devices for a
logged-in user's home and presents the results as a report.
The following diagram illustrates the example application architecture. Since we are interested in the state of each sensor device, we will model devices as actors. The running application will create as many instances of device actors and device groups as necessary.
![box diagram of the architecture](diagrams/arch_boxes_diagram.png)
## What you will learn in this tutorial
This tutorial introduces and illustrates:
* The actor hierarchy and how it influences actor behavior
* How to choose the right granularity for actors
* How to define protocols as messages
* Typical conversational styles
Let's get started by learning more about actors.

View file

@ -1,82 +1,31 @@
# Part 1: Top-level Architecture
# Part 1: Actor Architecture
In this and the following chapters, we will build a sample Akka application to introduce you to the language of
actors and how solutions can be formulated with them. It is a common hurdle for beginners to translate their project
into actors even though they don't understand what they do on the high-level. We will build the core logic of a small
application and this will serve as a guide for common patterns that will help to kickstart Akka projects.
Use of Akka relieves you from creating the infrastructure for an actor system and from writing the low-level code necessary to control basic behavior. To appreciate this, let's look at the relationships between actors you create in your code and those that Akka creates and manages for you internally, the actor lifecycle, and failure handling.
The application we aim to write will be a simplified IoT system where devices, installed at the homes of users, can report temperature data from sensors. Users will be able to query the current state of these sensors. To keep
things simple, we will not actually expose the application via HTTP or any other external API; instead, we will concentrate only on the
core logic. However, we will write tests for the pieces of the application to get comfortable and
proficient with testing actors early on.
## The Akka actor hierarchy
## Our Goals for the IoT System
We will build a simple IoT application with the bare essentials to demonstrate designing an Akka-based system. The application will consist of two main components:
* **Device data collection:** This component has the responsibility to maintain a local representation of the
otherwise remote devices. The devices will be organized into device groups, grouping together sensors belonging to a home.
* **User dashboards:** This component has the responsibility to periodically collect data from the devices for a
logged in user and present the results as a report.
For simplicity, we will only collect temperature data for the devices, but in a real application our local representation
of a remote device, which we will model as an actor, would have many more responsibilities. Among others: reading the
configuration of the device, changing the configuration, checking whether the device is unresponsive, etc. We leave
these complexities aside for now, as they can easily be added as an exercise.
We will also not address the means by which the remote devices communicate with the local representations (actors). Instead,
we just build an actor based API that such a network protocol could use. We will use tests for our API everywhere though.
The architecture of the application will look like this:
![box diagram of the architecture](diagrams/arch_boxes_diagram.png)
## Top Level Architecture
When writing prose, the hardest part is usually to write the first couple of sentences. There is a similar feeling
when trying to build an Akka system: What should be the first actor? Where should it live? What should it do?
Fortunately, unlike with prose, there are established best practices that can guide us through these initial steps.
When one creates an actor in Akka it always belongs to a certain parent. This means that actors are always organized
into a tree. In general, creating an actor can only happen from inside another actor. This creator actor becomes the
An actor in Akka always belongs to a parent. Typically, you create an actor by calling @java[`getContext().actorOf()`]@scala[`context.actorOf()`]. Rather than creating a "freestanding" actor, this injects the new actor as a child into an already existing tree: the creator actor becomes the
_parent_ of the newly created _child_ actor. You might ask then, who is the parent of the _first_ actor you create?
As we have seen in the previous chapters, to create a top-level actor one must call `system.actorOf()`. This does
not create a "freestanding" actor though, instead, it injects the corresponding actor as a child into an already
existing tree:
As illustrated below, all actors have a common parent, the user guardian. New actor instances can be created under this actor using `system.actorOf()`. As we covered in the @scala[[QuickStart Guide](https://developer.lightbend.com/guides/akka-quickstart-scala/)]@java[[QuickStart Guide](https://developer.lightbend.com/guides/akka-quickstart-java/)], creation of an actor returns a reference that is a valid URL. So, for example, if we create an actor named `someActor` with `system.actorOf(…, "someActor")`, its reference will include the path `/user/someActor`.
![box diagram of the architecture](diagrams/actor_top_tree.png)
As you see, creating actors from the "top" injects those actors under the path `/user/`, so for example creating
an actor named `myActor` will end up having the path `/user/myActor`. In fact, there are three already existing
actors in the system:
In fact, before you create an actor in your code, Akka has already created three actors in the system. The names of these built-in actors contain _guardian_ because they _supervise_ every child actor in their path. The guardian actors include:
- `/` the so-called _root guardian_. This is the parent of all actors in the system, and the last one to stop
when the system itself is terminated.
- `/user` the _guardian_. **This is the parent actor for all user created actors**. The name `user` should not confuse
you, it has nothing to do with the logged in user, nor user handling in general. This name really means _userspace_
as this is the place where actors that do not access Akka internals live, i.e. all the actors created by users of the Akka library. Every actor you will create will have the constant path `/user/` prepended to it.
- `/` the so-called _root guardian_. This is the parent of all actors in the system, and the last one to stop when the system itself is terminated.
- `/user` the _guardian_. **This is the parent actor for all user created actors**. Don't let the name `user` confuse
you; it has nothing to do with end users, nor with user handling. Every actor you create using the Akka library will have the constant path `/user/` prepended to it.
- `/system` the _system guardian_.
The names of these built-in actors contain _guardian_ because these are _supervising_ every actor living as a child
of them, i.e. under their path. We will explain supervision in more detail later; all you need to know now is that every
unhandled failure from actors bubbles up to their parent that, in turn, can decide how to handle this failure. These
predefined actors are guardians in the sense that they are the final lines of defense, where all unhandled failures
from user, or system, actors end up.
In the Hello World example, we have already seen how `system.actorOf()` creates an actor directly under `/user`. We call this a _top level_ actor, even though, in practice, it is only at the top of the
_user defined_ hierarchy. You typically have only one (or very few) top level actors in your `ActorSystem`.
We create child, or non-top-level, actors by invoking `context.actorOf()` from an existing actor. The `context.actorOf()` method has a signature identical to `system.actorOf()`, its top-level counterpart.
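The following sketch (with invented names) shows both creation paths side by side; the resulting actor paths are noted in the comments.
```scala
import akka.actor.{Actor, ActorSystem, Props}

class ParentActor extends Actor {
  // created from inside an actor: becomes a child of this actor
  val child = context.actorOf(Props[ChildActor], "childActor") // path: /user/topLevel/childActor
  def receive: Receive = Actor.emptyBehavior
}

class ChildActor extends Actor {
  def receive: Receive = Actor.emptyBehavior
}

object CreationSketch extends App {
  val system = ActorSystem("testSystem")
  // created from the system: a top-level actor under the user guardian
  val topLevel = system.actorOf(Props[ParentActor], "topLevel") // path: /user/topLevel
}
```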
> Does the root guardian (the root path `/`) have a parent? As it turns out, it does. This special entity is called
> the "Bubble-Walker". It is invisible to the user and only used internally.
The easiest way to see the actor hierarchy in action is to simply print `ActorRef` instances. In this small experiment, we create an actor, print its reference, create a child of this actor, and print the child's reference. We start with the Hello World project; if you have not downloaded it, download the Quickstart project from the @scala[[Lightbend Tech Hub](http://developer.lightbend.com/start/?group=akka&project=akka-quickstart-scala)]@java[[Lightbend Tech Hub](http://developer.lightbend.com/start/?group=akka&project=akka-quickstart-java)].
### Structure of an ActorRef and Paths of Actors
The easiest way to see this in action is to simply print `ActorRef` instances. In this small experiment, we print
the reference of the first actor we create and then we create a child of this actor, and print its reference. We have
already created actors with `system.actorOf()`, which creates an actor under `/user` directly. We call these
_top level_ actors, even though in practice they are not at the top of the hierarchy, only at the top of the
_user defined_ hierarchy. Since in practice we usually concern ourselves with actors under `/user`, this is still a
convenient terminology, and we will stick to it.
Creating a non-top-level actor is possible from any actor, by invoking `context.actorOf()` which has the exact same
signature as its top-level counterpart. This is how it looks like in practice:
In your Hello World project, navigate to the `com.lightbend.akka.sample` package and create a new @scala[Scala file called `ActorHierarchyExperiments.scala`]@java[Java file called `ActorHierarchyExperiments.java`] here. Copy and paste the code from the snippet below to this new source file. Save your file and run `sbt "runMain com.lightbend.akka.sample.ActorHierarchyExperiments"` to observe the output.
Scala
: @@snip [ActorHierarchyExperiments.scala]($code$/scala/tutorial_1/ActorHierarchyExperiments.scala) { #print-refs }
Java
: @@snip [ActorHierarchyExperiments.java]($code$/java/jdocs/tutorial_1/ActorHierarchyExperiments.java) { #print-refs }
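If you are reading this without the Quickstart sources at hand, the experiment looks roughly like the following Scala sketch; the snippet files referenced above are authoritative, and the class and message names here are just illustrative:

```scala
import akka.actor.{ Actor, ActorSystem, Props }

class PrintMyActorRefActor extends Actor {
  override def receive: Receive = {
    case "printit" =>
      // create a child of this actor and print its reference
      val secondRef = context.actorOf(Props.empty, "second-actor")
      println(s"Second: $secondRef")
  }
}

object ActorHierarchyExperiments extends App {
  val system = ActorSystem("testSystem")

  // create a top level actor, living directly under /user
  val firstRef = system.actorOf(Props[PrintMyActorRefActor], "first-actor")
  println(s"First: $firstRef")

  // ask the first actor to create and print its child
  firstRef ! "printit"
}
```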
Note the way we asked the first actor to do its work: we sent the message using the parent's reference, @scala[`firstRef ! "printit"`]@java[`firstRef.tell("printit", ActorRef.noSender())`]. When the code executes, the output includes the references for the first actor and the child it created as part of the `printit` case. Your output should look similar to the following:
```
First: Actor[akka://testSystem/user/first-actor#1053618476]
Second: Actor[akka://testSystem/user/first-actor/second-actor#-1544706041]
```
Notice the structure of the references:

* Both paths start with `akka://testSystem/`. Since all actor references are valid URLs, `akka://` is the value of the protocol field.
* Next, just like on the World Wide Web, the URL identifies the system. In this example, the system is named `testSystem`, but it could be any other name. If remote communication between multiple systems is enabled, this part of the URL is the hostname of the system so other systems can find it on the network.
* `akka://testSystem/user/first-actor` is the first actor we created. It lives directly under the user guardian, `/user`.
* Because the second actor's reference includes the path `/first-actor/second-actor`, it identifies it as a child of the first, created with `context.actorOf()`.
* The last part of the actor reference, `#1053618476` or `#-1544706041`, is a unique identifier of the actor living under the path. You can ignore it in most cases.
Now that you understand what the actor hierarchy looks like, you might be wondering: _Why do we need this hierarchy? What is it used for?_

An important role of the hierarchy is to safely manage actor lifecycles. Let's consider this next and see how that knowledge can help us write better code.
### The actor lifecycle
Actors pop into existence when created, then later, at user request, they are stopped. Whenever an actor is stopped, all of its children are _recursively stopped_ too. This behavior greatly simplifies resource cleanup and helps avoid resource leaks such as those caused by open sockets and files. In fact, a commonly overlooked difficulty when dealing with low-level multi-threaded code is the lifecycle management of various concurrent resources.
To stop an actor, the recommended pattern is to call @scala[`context.stop(self)`]@java[`getContext().stop(getSelf())`] inside the actor to stop itself, usually as a response to some user defined stop message or when the actor is done with its job. Stopping another actor is technically possible by calling @scala[`context.stop(actorRef)`]@java[`getContext().stop(actorRef)`], but **it is considered a bad practice to stop arbitrary actors this way**: try sending them a `PoisonPill` or a custom stop message instead.
The Akka actor API exposes many lifecycle hooks that you can override in an actor implementation. The most commonly used are `preStart()` and `postStop()`:
* `preStart()` is invoked after the actor has started but before it processes its first message.
* `postStop()` is invoked just before the actor stops. No messages are processed after this point.
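As a rough Scala sketch of where these hooks go (the class name and the printed texts below are ours, not taken from the Quickstart sources):

```scala
import akka.actor.{ Actor, ActorSystem, Props }

class StartStopExample extends Actor {
  // invoked after the actor has started, before it processes its first message
  override def preStart(): Unit = println(s"${self.path.name} started")

  // invoked just before the actor stops; no messages are processed after this point
  override def postStop(): Unit = println(s"${self.path.name} stopped")

  override def receive: Receive = {
    case "stop" => context.stop(self) // children, if any, are stopped recursively first
  }
}

object StartStopExperiment extends App {
  val system = ActorSystem("testSystem")
  val ref = system.actorOf(Props[StartStopExample], "start-stop")
  ref ! "stop"
}
```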
Let's use the `preStart()` and `postStop()` lifecycle hooks in a simple experiment to observe the behavior when we stop an actor. First, add the following two actor classes to your project:
Scala
: @@snip [ActorHierarchyExperiments.scala]($code$/scala/tutorial_1/ActorHierarchyExperiments.scala) { #start-stop }
Java
: @@snip [ActorHierarchyExperiments.java]($code$/java/jdocs/tutorial_1/ActorHierarchyExperiments.java) { #start-stop }
And create a 'main' class like above to start the actors and then send them a `"stop"` message:
Scala
: @@snip [ActorHierarchyExperiments.scala]($code$/scala/tutorial_1/ActorHierarchyExperiments.scala) { #start-stop-main }
Java
: @@snip [ActorHierarchyExperiments.java]($code$/java/jdocs/tutorial_1/ActorHierarchyExperiments.java) { #start-stop-main }
You can again use `sbt` to start this program. The output should look like this:
```
first started
second started
second stopped
first stopped
```
When we stopped actor `first`, it stopped its child actor, `second`, before stopping itself. This ordering is strict: _all_ `postStop()` hooks of the children are called before the `postStop()` hook of the parent is called.
The @ref[Actor Lifecycle](../actors.md#actor-lifecycle) section of the Akka reference manual provides details on the full set of lifecycle hooks.
### Failure handling
Parents and children are connected throughout their lifecycles. Whenever an actor fails (throws an exception or an unhandled exception bubbles out from `receive`) it is temporarily suspended. As mentioned earlier, the failure information is propagated to the parent, which then decides how to handle the exception caused by the child actor. In this way, parents act as supervisors for their children. The default _supervisor strategy_ is to stop and restart the child. If you don't change the default strategy, all failures result in a restart.

Let's observe the default strategy in a simple experiment. Add the following classes to your project, just as you did with the previous ones:
Scala
: @@snip [ActorHierarchyExperiments.scala]($code$/scala/tutorial_1/ActorHierarchyExperiments.scala) { #supervise }
Java
: @@snip [ActorHierarchyExperiments.java]($code$/java/jdocs/tutorial_1/ActorHierarchyExperiments.java) { #supervise }
And run it with:
Scala
: @@snip [ActorHierarchyExperiments.scala]($code$/scala/tutorial_1/ActorHierarchyExperiments.scala) { #supervise-main }
Java
: @@snip [ActorHierarchyExperiments.java]($code$/java/jdocs/tutorial_1/ActorHierarchyExperiments.java) { #supervise-main }
You should see output similar to the following:
```
supervised actor started
java.lang.Exception: I failed!
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
```
We see that after the failure the supervised actor is stopped and immediately restarted. We also see a log entry reporting the exception that was handled, in this case, our test exception. In this example we used the `preStart()` and `postStop()` hooks, which by default are also called after and before restarts respectively, so from inside the actor we cannot distinguish whether it was started for the first time or restarted. This is usually the right thing to do: the purpose of the restart is to set the actor into a known-good state, which usually means a clean starting stage. **What actually happens is that the `preRestart()` and `postRestart()` methods are called and, if not overridden, they delegate to `postStop()` and `preStart()` respectively by default.** You can experiment with overriding these additional methods and see how the output changes.
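For example, to make the restart visible you could override these hooks in the supervised actor. The sketch below is ours (the class name and printed texts are not taken from the Quickstart sources), assuming the actor fails by throwing an exception as in the experiment above:

```scala
import akka.actor.Actor

class SupervisedActor extends Actor {
  override def preStart(): Unit = println("supervised actor started")
  override def postStop(): Unit = println("supervised actor stopped")

  // called on the failing actor instance before it is replaced;
  // the default implementation (super) stops children and calls postStop()
  override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
    println(s"supervised actor restarting after: ${reason.getMessage}")
    super.preRestart(reason, message)
  }

  // called on the fresh actor instance after the restart;
  // the default implementation (super) calls preStart()
  override def postRestart(reason: Throwable): Unit = {
    println("supervised actor restarted")
    super.postRestart(reason)
  }

  override def receive: Receive = {
    case "fail" =>
      println("supervised actor fails now")
      throw new Exception("I failed!")
  }
}
```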
For the impatient, we also recommend looking into the @ref[supervision reference page](../general/supervision.md) for more in-depth details.
# Summary
We've learned about how Akka manages actors in hierarchies where parents supervise their children and handle exceptions. We saw how to create a very simple actor and child. Next, we'll apply this knowledge to our example use case by modeling the communication necessary to get information from device actors. Later, we'll deal with how to manage the actors in groups.

# Part 2: Creating the First Actor
With an understanding of actor hierarchy and behavior, the remaining question is how to map the top-level components of our IoT system to actors. It might be tempting to make the actors that represent devices and dashboards top-level actors. Instead, we recommend creating an explicit component that represents the whole application. In other words, we will have a single top-level actor in our IoT system, and the components that create and manage devices and dashboards will be children of this actor. In simple terms, every component manages the lifecycle of its subcomponents and no subcomponent can outlive its parent, so the "contained-in" relationship of components maps naturally onto the "children-of" relationship of actors. This allows us to refactor the example use case architecture diagram into a tree of actors:
![actor tree diagram of the architecture](diagrams/arch_tree_diagram.png)
We can define the first actor, the IotSupervisor, with a few simple lines of code. To start your tutorial application:
1. Create a new `IotSupervisor` source file in the `com.lightbend.akka.sample` package.
1. Paste the following code into the new file to define the IotSupervisor.
Scala
: @@snip [IotSupervisor.scala]($code$/scala/tutorial_2/IotSupervisor.scala) { #iot-supervisor }
Java
: @@snip [IotSupervisor.java]($code$/java/jdocs/tutorial_2/IotSupervisor.java) { #iot-supervisor }
The code is similar to the actor examples we used in the previous experiments, but notice:
* Instead of `println()` we use @scala[the `ActorLogging` helper trait] @java[`akka.event.Logging`], which directly invokes Akka's built-in logging facility.
* We use the recommended pattern for creating actors by defining a `props()` @scala[method in the [companion object](http://docs.scala-lang.org/tutorials/tour/singleton-objects.html#companions) of] @java[static method on] the actor, as sketched below.
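For readers following along without the snippet files, a Scala sketch of roughly what this supervisor looks like; the `IotSupervisor.scala` snippet referenced above is the authoritative version:

```scala
package com.lightbend.akka.sample

import akka.actor.{ Actor, ActorLogging, Props }

object IotSupervisor {
  // recommended creational pattern: a props() factory for this actor
  def props(): Props = Props(new IotSupervisor)
}

class IotSupervisor extends Actor with ActorLogging {
  override def preStart(): Unit = log.info("IoT Application started")
  override def postStop(): Unit = log.info("IoT Application stopped")

  // no need to handle any messages yet
  override def receive: Receive = Actor.emptyBehavior
}
```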
To provide the `main` entry point that creates the actor system, add the following code to the new @scala[`IotApp` object] @java[`IotMain` class].
Scala
: @@snip [IotApp.scala]($code$/scala/tutorial_2/IotApp.scala) { #iot-app }
Java
: @@snip [IotMain.java]($code$/java/jdocs/tutorial_2/IotMain.java) { #iot-app }
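Again as a rough Scala sketch of the entry point (the system name `"iot-system"` is our choice for illustration; the snippet above is authoritative):

```scala
package com.lightbend.akka.sample

import akka.actor.ActorSystem
import scala.io.StdIn

object IotApp {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem("iot-system")
    try {
      // create the single top level supervisor actor of the application
      val supervisor = system.actorOf(IotSupervisor.props(), "iot-supervisor")
      // keep the application running until the user presses return
      StdIn.readLine()
    } finally {
      system.terminate()
    }
  }
}
```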
The application does little, other than print out that it is started. But, we have the first actor in place and we are ready to add other actors.
## What's next?
In the following chapters we will grow the application gradually, by:
1. Creating the representation for a device.
2. Creating the device management component.
3. Adding query capabilities to device groups.

# Part 3: Working with Device Actors
In the previous topics we explained how to view actor systems _in the large_, that is, how components should be represented and how actors should be arranged in the hierarchy. In this part, we will look at actors _in the small_ by implementing the device actor.
If we were working with objects, we would typically design the API as _interfaces_, a collection of abstract methods to be filled out by the actual implementation. In the world of actors, protocols take the place of interfaces. While it is not possible to formalize general protocols in the programming language, we can compose their most basic element, messages. So, we will start by identifying the messages we will want to send to device actors.
Typically, messages fall into categories, or patterns. By identifying these patterns, you will find that it becomes easier to choose between them and to implement them. The first example demonstrates the _request-respond_ message pattern.
## Identifying messages for devices
The tasks of a device actor will be simple:
* Collect temperature measurements
* When asked, report the last measured temperature
However, a device might start without immediately having a temperature measurement. Hence, we need to account for the case where a temperature is not present. This also allows us to test the query part of the actor without the write part present, as the device actor can simply report an empty result.
The protocol for obtaining the current temperature from the device actor is simple. The actor:
1. Waits for a request for the current temperature.
2. Responds to the request with a reply that either:
* contains the current temperature or,
* indicates that a temperature is not yet available.
We need two messages, one for the request, and one for the reply. Our first attempt might look like the following:
Scala
: @@snip [DeviceInProgress.scala]($code$/scala/tutorial_3/DeviceInProgress.scala) { #read-protocol-1 }
Java
: @@snip [DeviceInProgress.java]($code$/java/jdocs/tutorial_3/DeviceInProgress.java) { #read-protocol-1 }
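As a plain Scala sketch, such a first attempt could be as small as the following; the names mirror the idea rather than the exact snippet contents:

```scala
// ask the device for its current temperature...
case object ReadTemperature

// ...and reply with the last known reading, if there is one
final case class RespondTemperature(value: Option[Double])
```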
These two messages seem to cover the required functionality. However, the approach we choose must take into account the distributed nature of the application. While the basic mechanism is the same for communicating with an actor on the local JVM as with a remote actor, we need to keep the following in mind:
* There will be observable differences in the latency of delivery between local and remote messages, because factors like network link bandwidth and the message size also come into play.
* Reliability is a concern because a remote message send involves more steps, which means that more can go wrong.
* A local send will just pass a reference to the message inside the same JVM, without any restrictions on the underlying object which is sent, whereas a remote transport will place a limit on the message size.
In addition, while sending inside the same JVM is significantly more reliable, if an actor fails due to a programmer error while processing a message, the effect is basically the same as if a remote network request fails due to the remote host crashing while processing the message. Even though in both cases the service recovers after a while (the actor is restarted by its supervisor, the host is restarted by an operator or by a monitoring system), individual requests are lost during the crash. **Therefore, writing your actors such that every message could possibly be lost is the safe, pessimistic bet.**
But to further understand the need for flexibility in the protocol, it will help to consider Akka message ordering and message delivery guarantees. Akka provides the following behavior for message sends:
* At-most-once delivery, that is, no guaranteed delivery.
* Message ordering is maintained per sender, receiver pair.
The following sections discuss this behavior in more detail:
* [Message delivery](#message-delivery)
* [Message ordering](#message-ordering)
### Message delivery
The delivery semantics provided by messaging subsystems typically fall into the following categories:
* **At-most-once delivery** — each message is delivered zero or one time; in more casual terms it means that messages can be lost, but are never duplicated.
* **At-least-once delivery** — potentially multiple attempts are made to deliver each message, until at least one succeeds; again, in more casual terms this means that messages can be duplicated but are never lost.
* **Exactly-once delivery** — each message is delivered exactly once to the recipient; the message can neither be lost nor be duplicated.
The first behavior, the one used by Akka, is the cheapest and results in the highest performance. It has the least implementation overhead because it can be done in a fire-and-forget fashion without keeping the state at the sending end or in the transport mechanism. The second, at-least-once, requires retries to counter transport losses. This adds the overhead of keeping the state at the sending end and having an acknowledgment mechanism at the receiving end. Exactly-once delivery is most expensive, and results in the worst performance: in addition to the overhead added by at-least-once delivery, it requires the state to be kept at the receiving end in order to filter out
duplicate deliveries.
In an actor system, we need to determine the exact meaning of the guarantee, that is, at which point the system considers the delivery to be accomplished:
1. When the message is sent out on the network?
2. When the message is received by the target actor's host?
3. When the message is put into the target actor's mailbox?
4. When the target actor starts to process the message?
5. When the target actor has successfully processed the message?
Most frameworks and protocols that claim guaranteed delivery actually provide something similar to points 4 and 5. While this sounds reasonable, **is it actually useful?** To understand the implications, consider a simple, practical example: a user attempts to place an order and we only want to claim that it has successfully processed once it is actually on disk in the orders database.
If we rely on the successful processing of the message, the actor will report success as soon as the order has been submitted to the internal API that has the responsibility to validate it, process it and put it into the database. Unfortunately,
immediately after the API has been invoked, any of the following can happen:
* The host can crash.
* Deserialization can fail.
* Validation can fail.
* The database might be unavailable.
* A programming error might occur.
This illustrates that the **guarantee of delivery** does not translate to the **domain level guarantee**. We only want to report success once the order has been actually fully processed and persisted. **The only entity that can report success is the application itself, since only it has any understanding of the domain guarantees required. No generalized framework can figure out the specifics of a particular domain and what is considered a success in that domain**.
In this particular example, we only want to signal success after a successful database write, where the database acknowledged that the order is now safely stored. **For these reasons Akka lifts the responsibilities of guarantees to the application
itself, i.e. you have to implement them yourself. This gives you full control of the guarantees that you want to provide**. Now, let's consider the message ordering that Akka provides to make it easy to reason about application logic.
### Message Ordering
In Akka, for a given pair of actors, messages sent directly from the first to the second will not be received out-of-order. The word directly emphasizes that this guarantee only applies when sending with the tell operator directly to the final destination, but not when employing mediators.
If:
* Actor `A1` sends messages `M1`, `M2`, `M3` to `A2`.
* Actor `A3` sends messages `M4`, `M5`, `M6` to `A2`.
This means that, for Akka messages:
* If `M1` is delivered it must be delivered before `M2` and `M3`.
* If `M2` is delivered it must be delivered before `M3`.
* If `M4` is delivered it must be delivered before `M5` and `M6`.
* If `M5` is delivered it must be delivered before `M6`.
* `A2` can see messages from `A1` interleaved with messages from `A3`.
* Since there is no guaranteed delivery, any of the messages may be dropped, i.e. not arrive at `A2`.
These guarantees strike a good balance: having messages from one actor arrive in-order is convenient for building systems that can be easily reasoned about, while on the other hand allowing messages from different actors to arrive interleaved provides sufficient freedom for an efficient implementation of the actor system.
For the full details on delivery guarantees please refer to the @ref[reference page](../general/message-delivery-reliability.md).
## Adding flexibility to device messages
Our first query protocol was correct, but did not take into account distributed application execution. If we want to implement resends in the actor that queries a device actor (because of timed out requests), or if we want to query multiple actors, we need to be able to correlate requests and responses. Hence, we add one more field to our messages, so that an ID can be provided by the requester (we will add this code to our app in a later step):
Scala
: @@snip [DeviceInProgress.scala]($code$/scala/tutorial_3/DeviceInProgress.scala) { #read-protocol-2 }
Java
: @@snip [DeviceInProgress2.java]($code$/java/jdocs/tutorial_3/inprogress2/DeviceInProgress2.java) { #read-protocol-2 }
## Defining the device actor and its read protocol
As we learned in the Hello World example, each actor defines the type of messages it will accept. Our device actor has the responsibility to use the same ID parameter for the response of a given query, which would make it look like the following.
Scala
: @@snip [DeviceInProgress.scala]($code$/scala/tutorial_3/DeviceInProgress.scala) { #device-with-read }
Java
: @@snip [DeviceInProgress2.java]($code$/java/jdocs/tutorial_3/inprogress2/DeviceInProgress2.java) { #device-with-read }
Note in the code that:
* The @scala[companion object]@java[static method] defines how to construct a `Device` actor. The `props` parameters include an ID for the device and the group to which it belongs, which we will use later.
* The @scala[companion object]@java[class] includes the definitions of the messages we reasoned about previously.
* In the `Device` class, the value of `lastTemperatureReading` is initially set to @scala[`None`]@java[`Optional.empty()`], and the actor will simply report it back if queried.
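To make that structure visible here, a Scala sketch of such a device actor follows; the snippet in the Quickstart sources is the authoritative version:

```scala
import akka.actor.{ Actor, ActorLogging, Props }

object Device {
  def props(groupId: String, deviceId: String): Props = Props(new Device(groupId, deviceId))

  final case class ReadTemperature(requestId: Long)
  final case class RespondTemperature(requestId: Long, value: Option[Double])
}

class Device(groupId: String, deviceId: String) extends Actor with ActorLogging {
  import Device._

  // no reading is available until the sensor reports one
  var lastTemperatureReading: Option[Double] = None

  override def preStart(): Unit = log.info("Device actor {}-{} started", groupId, deviceId)
  override def postStop(): Unit = log.info("Device actor {}-{} stopped", groupId, deviceId)

  override def receive: Receive = {
    case ReadTemperature(id) =>
      // echo the request ID back so the requester can correlate the response
      sender() ! RespondTemperature(id, lastTemperatureReading)
  }
}
```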
## Testing the actor
Based on the simple actor above, we could write a simple test. In the `com.lightbend.akka.sample` package in the test tree of your project, add the following code to a @scala[`DeviceSpec.scala`]@java[`DeviceTest.java`] file.
@scala[(We use ScalaTest but any other test framework can be used with the Akka Testkit)].
You can run this test @java[by running `mvn test` or] by running `test` at the sbt prompt.
Scala
: @@snip [DeviceSpec.scala]($code$/scala/tutorial_3/DeviceSpec.scala) { #device-read-test }
Java
: @@snip [DeviceTest.java]($code$/java/jdocs/tutorial_3/DeviceTest.java) { #device-read-test }
Now, the actor needs a way to change the state of the temperature when it receives a message from the sensor.
## Adding a write protocol
The purpose of the write protocol is to update the `currentTemperature` field when the actor receives a message that contains the temperature. Again, it is tempting to define the write protocol as a very simple message, something like this:
Scala
: @@snip [DeviceInProgress.scala]($code$/scala/tutorial_3/DeviceInProgress.scala) { #write-protocol-1 }
Java
: @@snip [DeviceInProgress3.java]($code$/java/jdocs/tutorial_3/DeviceInProgress3.java) { #write-protocol-1 }
However, this approach does not take into account that the sender of the record temperature message can never be sure if the message was processed or not. We have seen that Akka does not guarantee delivery of these messages and leaves it to the application to provide success notifications. In our case, we would like to send an acknowledgment to the sender once we have updated our last temperature recording, e.g. @scala[`final case class TemperatureRecorded(requestId: Long)`]@java[`TemperatureRecorded`].
Just like in the case of temperature queries and responses, it is a good idea to include an ID field to provide maximum flexibility.
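Sketched in Scala, the write side of the protocol could then look like this (the names are illustrative; the tutorial's own snippet may differ in detail):

```scala
// record a new temperature reading, tagged with a request ID...
final case class RecordTemperature(requestId: Long, value: Double)

// ...and acknowledge it by echoing the same ID back to the sender
final case class TemperatureRecorded(requestId: Long)
```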
## Actor with read and write messages
Putting the read and write protocol together, the device actor looks like the following example:
Scala
: @@snip [Device.scala]($code$/scala/tutorial_3/Device.scala) { #full-device }
Java
: @@snip [Device.java]($code$/java/jdocs/tutorial_3/Device.java) { #full-device }
We should also write a new test case now, exercising both the read/query and write/record functionality together:
Scala:
: @@snip [DeviceSpec.scala]($code$/scala/tutorial_3/DeviceSpec.scala) { #device-write-read-test }
Java:
: @@snip [DeviceTest.java]($code$/java/jdocs/tutorial_3/DeviceTest.java) { #device-write-read-test }
## What's Next?
* Stopping a device actor from our test case, from the outside: any actor can be stopped by simply sending a special
the built-in message, `PoisonPill`, which instructs the actor to stop.
* Be notified once the device actor is stopped: we can use the _Death Watch_ facility for this purpose, too. Thankfully
the @scala[`TestProbe`] @java[`TestKit`] has two messages that we can easily use, `watch()` to watch a specific actor, and `expectTerminated`
to assert that the watched actor has been terminated.
We add two more test cases now. In the first, we just test that we get back the list of proper IDs once we have added
a few devices. The second test case makes sure that the device ID is properly removed after the device actor has
been stopped:
Scala
: @@snip [DeviceGroupSpec.scala]($code$/scala/tutorial_3/DeviceGroupSpec.scala) { #device-group-list-terminate-test }
Java
: @@snip [DeviceGroupTest.java]($code$/java/jdocs/tutorial_3/DeviceGroupTest.java) { #device-group-list-terminate-test }
## Device Manager
The only part that remains now is the entry point for our device manager component. This actor is very similar to
the device group actor, with the only difference that it creates device group actors instead of device actors:
Scala
: @@snip [DeviceManager.scala]($code$/scala/tutorial_3/DeviceManager.scala) { #device-manager-full }
Java
: @@snip [DeviceManager.java]($code$/java/jdocs/tutorial_3/DeviceManager.java) { #device-manager-full }
We leave tests of the device manager as an exercise since they are very similar to the tests we have already written for the group
actor.
## What is Next?
We now have a hierarchical component for registering and tracking devices and recording measurements. We have seen
some conversation patterns like:
* Request-respond (for temperature recordings).
* Delegate-respond (for registration of devices).
* Create-watch-terminate (for creating the group and device actor as children).
In the next chapter, we will introduce group query capabilities, which will establish a new conversation pattern of
scatter-gather. In particular, we will implement the functionality that allows users to query the status of all
the devices belonging to a group.
So far, we have started designing our overall architecture, and we wrote the first actor that directly corresponds to the domain. We now have to create the component that is responsible for maintaining groups of devices and the device actors themselves.


@ -1,253 +1,218 @@
# Part 4: Querying a Group of Devices
# Part 4: Working with Device Groups
The conversational patterns we have seen so far were simple in the sense that they required little or no state to be kept in the
actor that is only relevant to the conversation. Our device actors either simply returned a reading, which required no
state change, recorded a temperature, which required an update of a single field, or, in the most complex case of
managing groups and devices, we had to add or remove simple entries from a map.
Let's take a closer look at the main functionality required by our use case. In a complete IoT system for monitoring home temperatures, the steps for connecting a device sensor to our system might look like this:
In this chapter, we will see a more complex example. Our goal is to add a new service to the group device actor, one which
allows querying the temperature from all running devices. Let us start by investigating how we want our query API to
behave.
1. A sensor device in the home connects through some protocol.
1. The component managing network connections accepts the connection.
1. The sensor provides its group and device ID to register with the device manager component of our system.
1. The device manager component handles registration by looking up or creating the actor responsible for keeping sensor state.
1. The actor responds with an acknowledgement, exposing its `ActorRef`.
1. The networking component now uses the `ActorRef` for communication between the sensor and device actor without going through the device manager.
The very first issue we face is that the set of devices is dynamic, and each device is represented by an actor that
can stop at any time. At the beginning of the query, we need to ask all of the device actors that we know about for the
current temperature. However, during the lifecycle of the query:
Steps 1 and 2 take place outside the boundaries of our tutorial system. In this chapter, we will start addressing steps 3-6 and create a way for sensors to register with our system and to communicate with actors. But first, we have another architectural decision &#8212; how many levels of actors should we use to represent device groups and device sensors?
* A device actor may stop and not respond back with a temperature reading.
* A new device actor might start up, but we missed asking it for the current temperature.
One of the main design challenges for Akka programmers is choosing the best granularity for actors. In practice, depending on the characteristics of the interactions between actors, there are usually several valid ways to organize a system. In our use case, for example, it would be possible to have a single actor maintain all the groups and devices &#8212; perhaps using hash maps. It would also be reasonable to have an actor for each group that tracks the state of all devices in the same home.
There are many approaches that can be taken to address these issues, but the important point is to settle on the
desired behavior. We will pick the following two guarantees:
The following guidelines help us choose the most appropriate actor hierarchy:
* When a query arrives at the group, the group actor takes a _snapshot_ of the existing device actors and will only
ask those for the temperature. Actors that are started _after_ the arrival of the query are simply ignored.
* When an actor stops during the query without answering (i.e. before all the actors we asked for the temperature
responded) we simply report back that fact to the sender of the query message.
* In general, prefer larger granularity. Introducing more fine-grained actors than needed causes more problems than it solves.
* Add finer granularity when the system requires:
* Higher concurrency.
* Complex conversations between actors that have many
states. We will see a very good example for this in the next chapter.
* Sufficient state that it makes sense to divide into smaller
actors.
* Multiple unrelated responsibilities. Using separate actors allows individuals to fail and be restored with little impact on others.
Apart from device actors coming and going dynamically, some actors might take a long time to answer, for example, because
they are stuck in an accidental infinite loop, or because they failed due to a bug and dropped our request. Ideally,
we would like to give a deadline to our query:
## Device manager hierarchy
* The query is considered completed if either all actors have responded (or confirmed being stopped), or we reach
the deadline.
Considering the principles outlined in the previous section, we will model the device manager component as an actor tree with three levels:

Given these decisions, and the fact that a device might not have a temperature to record, we can define four states
that each device can be in, with respect to the query:
* The top level supervisor actor represents the system component for devices. It is also the entry point to look up and create device group and device actors.
* At the next level, group actors each supervise the device actors for one group id (e.g. one home). They also provide services, such as querying temperature readings from all of the available devices in their group.
* Device actors manage all the interactions with the actual device sensors, such as storing temperature readings.
* It has a temperature available: @scala[`Temperature(value)`] @java[`Temperature`].
* It has responded, but has no temperature available yet: `TemperatureNotAvailable`.
* It has stopped before answering: `DeviceNotAvailable`.
* It did not respond before the deadline: `DeviceTimedOut`.
![device manager tree](diagrams/device_manager_tree.png)
Summarizing these in message types we can add the following to `DeviceGroup`:
We chose this three-layered architecture for these reasons:
* Having groups of individual actors:
* Isolates failures that occur in a group. If a single actor managed all device groups, an error in one group that causes a restart would wipe out the state of groups that are otherwise non-faulty.
* Simplifies the problem of querying all the devices belonging to a group. Each group actor only contains state related to its group.
* Increases parallelism in the system. Since each group has a dedicated actor, they run concurrently and we can query multiple groups concurrently.
* Having sensors modeled as individual device actors:
* Isolates failures of one device actor from the rest of the devices in the group.
* Increases the parallelism of collecting temperature readings. Network connections from different sensors communicate with their individual device actors directly, reducing contention points.
With the architecture defined, we can start working on the protocol for registering sensors.
## The Registration Protocol
As the first step, we need to design the protocol both for registering a device and for creating the group and device actors that will be responsible for it. This protocol will be provided by the `DeviceManager` component itself because that is the only actor that is known and available up front: device groups and device actors are created on-demand.
Looking at registration in more detail, we can outline the necessary functionality:
1. When a `DeviceManager` receives a request with a group and device id:
* If the manager already has an actor for the device group, it forwards the request to it.
* Otherwise, it creates a new device group actor and then forwards the request.
1. The `DeviceGroup` actor receives the request to register an actor for the given device:
* If the group already has an actor for the device, the group actor forwards the request to the device actor.
* Otherwise, the `DeviceGroup` actor first creates a device actor and then forwards the request.
1. The device actor receives the request and sends an acknowledgement to the original sender. Since the device actor acknowledges receipt (instead of the group actor), the sensor will now have the `ActorRef` to send messages directly to its actor.
The messages that we will use to communicate registration requests and
their acknowledgement have a simple definition:
Scala
: @@snip [DeviceGroup.scala]($code$/scala/tutorial_4/DeviceGroup.scala) { #query-protocol }
: @@snip [DeviceManager.scala]($code$/scala/tutorial_4/DeviceManager.scala) { #device-manager-msgs }
Java
: @@snip [DeviceGroup.java]($code$/java/jdocs/tutorial_4/DeviceGroup.java) { #query-protocol }
: @@snip [DeviceManager.java]($code$/java/jdocs/tutorial_4/DeviceManager.java) { #device-manager-msgs }
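For orientation, the essence of these messages in Scala is roughly the following sketch (the snippets above are the definitive definitions):

```scala
object DeviceManager {
  // Sent by a sensor (through the networking layer) to register itself with the system.
  final case class RequestTrackDevice(groupId: String, deviceId: String)

  // Sent back by the responsible device actor to acknowledge the registration.
  case object DeviceRegistered
}
```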
## Implementing the Query
In this case we have not included a request ID field in the messages. Since registration happens once, when the component connects the system to some network protocol, the ID is not important. However, it is usually a best practice to include a request ID.
One of the approaches for implementing the query could be to add more code to the group device actor. While this is
possible, in practice this can be very cumbersome and error prone. When we start a query, we need to take a snapshot
of the devices present at the start of the query and start a timer so that we can enforce the deadline. Unfortunately,
during the time we execute a query _another query_ might just arrive. For this other query, of course, we need to keep
track of the exact same information but isolated from the previous query. This complicates the code and also poses
some problems. For example, we would need a data structure that maps the `ActorRef`s of the devices to the queries
that use that device, so that they can be notified when such a device terminates, i.e. a `Terminated` message is
received.
Now, we'll start implementing the protocol from the bottom up. In practice, both a top-down and bottom-up approach can work, but in our case, we benefit from the bottom-up approach as it allows us to immediately write tests for the new features without mocking out parts that we will need to build later.
There is a much simpler approach that is superior in every way, and it is the one we will implement. We will create
an actor that represents a _single query_ and which performs the tasks needed to complete the query on behalf of the
group actor. So far we have created actors that belonged to classical domain objects, but now, we will create an
actor that represents a process or task rather than an entity. This move keeps our group device actor simple and gives
us better ways to test the query capability in isolation.
## Adding registration support to device actors
First, we need to design the lifecycle of our query actor. This consists of identifying its initial state, then
the first action to be taken by the actor, and then the cleanup, if necessary. There are a few things the query will
need in order to work:
At the bottom of our hierarchy are the `Device` actors. Their job in the registration process is simple: reply to the registration request with an acknowledgment to the sender. It is also prudent to add a safeguard against requests that come with a mismatched group or device ID.
* The snapshot of active device actors to query, and their IDs.
* The requestID of the request that started the query (so we can include it in the reply).
* The `ActorRef` of the actor who sent the group actor the query. We will send the reply to this actor directly.
* A timeout parameter, how long the query should wait for replies. Keeping this as a parameter will simplify testing.
*We will assume that the ID of the sender of the registration
message is preserved in the upper layers.* We will show you in the next section how this can be achieved.
Since we need to have a timeout for how long we are willing to wait for responses, it is time to introduce a new feature that we have
not used yet: timers. Akka has a built-in scheduler facility for this exact purpose. Using it is simple, the
@scala[`scheduler.scheduleOnce(time, actorRef, message)`] @java[`scheduler.scheduleOnce(time, actorRef, message, executor, sender)`] method will schedule the message `message` into the future by the
specified `time` and send it to the actor `actorRef`. To implement our query timeout we need to create a message
that represents the query timeout. We create a simple message `CollectionTimeout` without any parameters for
this purpose. The return value from `scheduleOnce` is a `Cancellable` which can be used to cancel the timer
if the query finishes successfully in time. Getting the scheduler is possible from the `ActorSystem`, which, in turn,
is accessible from the actor's context: @scala[`context.system.scheduler`] @java[`getContext().getSystem().scheduler()`]. This needs an @scala[implicit] `ExecutionContext` which
is basically the thread-pool that will execute the timer task itself. In our case, we use the same dispatcher
as the actor by @scala[importing `import context.dispatcher`] @java[passing in `getContext().dispatcher()`].
At the start of the query, we need to ask each of the device actors for the current temperature. To be able to quickly
detect devices that stopped before they got the `ReadTemperature` message we will also watch each of the actors. This
way, we get `Terminated` messages for those that stop during the lifetime of the query, so we don't need to wait
until the timeout to mark these as not available.
Putting together all these, the outline of our actor looks like this:
The device actor registration code looks like the following. Modify your example to match.
Scala
: @@snip [DeviceGroupQuery.scala]($code$/scala/tutorial_4/DeviceGroupQuery.scala) { #query-outline }
: @@snip [Device.scala]($code$/scala/tutorial_4/Device.scala) { #device-with-register }
Java
: @@snip [DeviceGroupQuery.java]($code$/java/jdocs/tutorial_4/DeviceGroupQuery.java) { #query-outline }
: @@snip [Device.java]($code$/java/jdocs/tutorial_4/Device.java) { #device-with-register }
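If you are reading this without the snippet sources at hand, the registration part of the device actor is roughly the following sketch (Scala, classic actors); the guard against mismatched IDs uses the backtick pattern explained in the Scala note further below:

```scala
import akka.actor.{ Actor, ActorLogging }

class DeviceRegistrationSketch(groupId: String, deviceId: String) extends Actor with ActorLogging {
  override def receive: Receive = {
    // The backticks make the pattern match only when the message carries
    // exactly this actor's groupId and deviceId.
    case DeviceManager.RequestTrackDevice(`groupId`, `deviceId`) =>
      sender() ! DeviceManager.DeviceRegistered

    case DeviceManager.RequestTrackDevice(otherGroupId, otherDeviceId) =>
      log.warning(
        "Ignoring TrackDevice request for {}-{}. This actor is responsible for {}-{}.",
        otherGroupId, otherDeviceId, groupId, deviceId)
  }
}
```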
The query actor, apart from the pending timer, has one stateful aspect: the set of actors that have not yet answered or,
put the other way around, the set of actors that have replied or stopped. One way to track this state is
to create a mutable field in the actor @scala[(a `var`)]. There is another approach. It is also possible to change how
the actor responds to messages. By default, the `receive` block defines the behavior of the actor, but it is possible
to change it, several times, during the life of the actor. This is possible by calling `context.become(newBehavior)`
where `newBehavior` is anything with type `Receive` @scala[(which is just a shorthand for `PartialFunction[Any, Unit]`)]. A
`Receive` is just a function (or an object, if you like) that can be returned from another function. We will leverage this
feature to track the state of our actor.
@@@ note { .group-scala }
As the first step, instead of defining `receive` directly, we delegate to another function to create the `Receive`, which
we will call `waitingForReplies`. This will keep track of two changing values, a `Map` of already received replies
and a `Set` of actors that we still wait on. We have three events that we should act on. We can receive a
`RespondTemperature` message from one of the devices. Second, we can receive a `Terminated` message for a device actor
that has been stopped in the meantime. Finally, we can reach the deadline and receive a `CollectionTimeout`. In the
first two cases, we need to keep track of the replies, which we now simply delegate to a method `receivedResponse` which
we will discuss later. In the case of timeout, we need to simply take all the actors that have not replied yet
(the members of the set `stillWaiting`) and put a `DeviceTimedOut` as the status in the final reply. Then we
reply to the submitter of the query with the collected results and stop the query actor:
We used a feature of Scala pattern matching where we can check whether a certain field equals an expected value. By bracketing a variable with backticks, like `` `variable` ``, the pattern will only match if it contains the value of `variable` in that position.
@@@
We can now write two new test cases, one exercising successful registration, the other testing the case when IDs don't match:
Scala
: @@snip [DeviceGroupQuery.scala]($code$/scala/tutorial_4/DeviceGroupQuery.scala) { #query-state }
: @@snip [DeviceSpec.scala]($code$/scala/tutorial_4/DeviceSpec.scala) { #device-registration-tests }
Java
: @@snip [DeviceGroupQuery.java]($code$/java/jdocs/tutorial_4/DeviceGroupQuery.java) { #query-state }
: @@snip [DeviceTest.java]($code$/java/jdocs/tutorial_4/DeviceTest.java) { #device-registration-tests }
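As a rough illustration of what these two tests check (a sketch that assumes a ScalaTest suite mixing in `TestKit`, so that `system` is available):

```scala
import scala.concurrent.duration._
import akka.testkit.TestProbe

// Successful registration: the device acknowledges and exposes its ActorRef as the sender.
val probe = TestProbe()
val deviceActor = system.actorOf(Device.props("group", "device"))

deviceActor.tell(DeviceManager.RequestTrackDevice("group", "device"), probe.ref)
probe.expectMsg(DeviceManager.DeviceRegistered)
assert(probe.lastSender == deviceActor)

// Mismatched group or device ID: the device stays silent.
deviceActor.tell(DeviceManager.RequestTrackDevice("wrongGroup", "device"), probe.ref)
probe.expectNoMsg(500.milliseconds)
deviceActor.tell(DeviceManager.RequestTrackDevice("group", "wrongDevice"), probe.ref)
probe.expectNoMsg(500.milliseconds)
```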
What is not yet clear is how we will "mutate" the `answersSoFar` and `stillWaiting` data structures. One important
thing to note is that the function `waitingForReplies` **does not handle the messages directly. It returns a `Receive`
function that will handle the messages**. This means that if we call `waitingForReplies` again, with different parameters,
then it returns a brand new `Receive` that will use those new parameters. We have seen how we
can install the initial `Receive` by simply returning it from `receive`. In order to install a new one, to record a
new reply, for example, we need some mechanism. This mechanism is the method `context.become(newReceive)` which will
_change_ the actor's message handling function to the provided `newReceive` function. You can imagine that before
starting, your actor automatically calls `context.become(receive)`, i.e. installing the `Receive` function that
is returned from `receive`. This is another important observation: **it is not `receive` that handles the messages,
it just returns a `Receive` function that will actually handle the messages**.
@@@ note
We now have to figure out what to do in `receivedResponse`. First, we need to record the new result in the map
`repliesSoFar` and remove the actor from `stillWaiting`. The next step is to check if there are any remaining actors
we are waiting for. If there is none, we send the result of the query to the original requester and stop
the query actor. Otherwise, we need to update the `repliesSoFar` and `stillWaiting` structures and wait for more
messages.
We used the `expectNoMsg()` helper method from @scala[`TestProbe`]@java[`TestKit`]. This assertion waits until the defined time-limit and fails if it receives any messages during this period. If no messages are received during the waiting period, the assertion passes. It is usually a good idea to keep these timeouts low (but not too low) because they add significant test execution time.
In the code before, we treated `Terminated` as the implicit response `DeviceNotAvailable`, so `receivedResponse` does
not need to do anything special. However, there is one small task we still need to do. It is possible that we receive a proper
response from a device actor, but then it stops during the lifetime of the query. We don't want this second event
to overwrite the already received reply. In other words, we don't want to receive `Terminated` after we recorded the
response. This is simple to achieve by calling `context.unwatch(ref)`. This method also ensures that we don't
receive `Terminated` events that are already in the mailbox of the actor. It is also safe to call this multiple times;
only the first call will have any effect, the rest are simply ignored.
@@@
With all this knowledge, we can create the `receivedResponse` method:
## Adding registration support to device group actors
We are done with registration support at the device level; now we have to implement it at the group level. A group actor has more work to do when it comes to registrations, including:
* Handling the registration request by either forwarding it to an existing device actor or by creating a new actor and forwarding the message.
* Tracking which device actors exist in the group and removing them from the group when they are stopped.
### Handling the registration request
A device group actor must either forward the request to an existing child, or it should create one. To look up child actors by their device IDs we will use a @scala[`Map[String, ActorRef]`]@java[`Map<String, ActorRef>`].
We also want to keep the ID of the original sender of the request so that our device actor can reply directly. This is possible by using `forward` instead of the @scala[`!`] @java[`tell`] operator. The only difference between the two is that `forward` keeps the original
sender, while @scala[`!`] @java[`tell`] sets the sender to be the current actor. Just like with our device actor, we ensure that we don't respond to wrong group IDs. Add the following to your source file:
Scala
: @@snip [DeviceGroupQuery.scala]($code$/scala/tutorial_4/DeviceGroupQuery.scala) { #query-collect-reply }
: @@snip [DeviceGroup.scala]($code$/scala/tutorial_4/DeviceGroup.scala) { #device-group-register }
Java
: @@snip [DeviceGroupQuery.java]($code$/java/jdocs/tutorial_4/DeviceGroupQuery.java) { #query-collect-reply }
: @@snip [DeviceGroup.java]($code$/java/jdocs/tutorial_4/DeviceGroup.java) { #device-group-register }
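The `forward` vs. `!`/`tell` distinction itself can be illustrated with a tiny, standalone actor (hypothetical names, not part of the tutorial sources):

```scala
import akka.actor.{ Actor, ActorRef }

// A toy middleman that relays a registration message to a device actor.
class RegistrationRelay(deviceActor: ActorRef) extends Actor {
  override def receive: Receive = {
    case msg @ "register-with-tell" =>
      // The device actor will see *this relay* as sender(),
      // so any acknowledgement would come back here.
      deviceActor ! msg

    case msg @ "register-with-forward" =>
      // The device actor will see the *original requester* as sender(),
      // so its acknowledgement goes straight back to whoever asked us.
      deviceActor forward msg
  }
}
```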
It is quite natural to ask at this point, what have we gained by using the `context.become()` trick instead of
just making the `repliesSoFar` and `stillWaiting` structures mutable fields of the actor (i.e. `var`s)? In this
simple example, not that much. The value of this style of state keeping becomes more evident when you suddenly have
_more kinds_ of states. Since each state
might have temporary data that is relevant only to that state, keeping these as fields would pollute the global state
of the actor, i.e. it would be unclear which fields are used in which state. Using parameterized `Receive` "factory"
methods we can keep data private that is only relevant to the state. It is still a good exercise to
rewrite the query using @scala[`var`s] @java[mutable fields] instead of `context.become()`. However, it is recommended to get comfortable
with the solution we have used here as it helps structure more complex actor code in a cleaner and more maintainable way.
Our query actor is now done:
Just as we did with the device, we test this new functionality. We also test that the actors returned for the two different IDs are actually different, and we also attempt to record a temperature reading for each of the devices to see if the actors are responding.
Scala
: @@snip [DeviceGroupQuery.scala]($code$/scala/tutorial_4/DeviceGroupQuery.scala) { #query-full }
: @@snip [DeviceGroupSpec.scala]($code$/scala/tutorial_4/DeviceGroupSpec.scala) { #device-group-test-registration }
Java
: @@snip [DeviceGroupQuery.java]($code$/java/jdocs/tutorial_4/DeviceGroupQuery.java) { #query-full }
: @@snip [DeviceGroupTest.java]($code$/java/jdocs/tutorial_4/DeviceGroupTest.java) { #device-group-test-registration }
## Testing
Now let's verify the correctness of the query actor implementation. There are various scenarios we need to test individually to make
sure everything works as expected. To be able to do this, we need to simulate the device actors somehow to exercise
various normal or failure scenarios. Thankfully we took the list of collaborators (actually a `Map`) as a parameter
to the query actor, so we can easily pass in @scala[`TestProbe`] @java[`TestKit`] references. In our first test, we try out the case when
there are two devices and both report a temperature:
If a device actor already exists for the registration request, we would like to use
the existing actor instead of a new one. We have not tested this behavior yet, so we add a test for it:
Scala
: @@snip [DeviceGroupQuerySpec.scala]($code$/scala/tutorial_4/DeviceGroupQuerySpec.scala) { #query-test-normal }
: @@snip [DeviceGroupSpec.scala]($code$/scala/tutorial_4/DeviceGroupSpec.scala) { #device-group-test3 }
Java
: @@snip [DeviceGroupQueryTest.java]($code$/java/jdocs/tutorial_4/DeviceGroupQueryTest.java) { #query-test-normal }
: @@snip [DeviceGroupTest.java]($code$/java/jdocs/tutorial_4/DeviceGroupTest.java) { #device-group-test3 }
That was the happy case, but we know that sometimes devices cannot provide a temperature measurement. This
scenario is just slightly different from the previous:
### Keeping track of the device actors in the group
So far, we have implemented logic for registering device actors in the group. Devices come and go, however, so we will need a way to remove device actors from the @scala[`Map[String, ActorRef]`] @java[`Map<String, ActorRef>`]. We will assume that when a device is removed, its corresponding device actor is simply stopped. Supervision, as we discussed earlier, only handles error scenarios &#8212; not graceful stopping. So we need to notify the parent when one of the device actors is stopped.
Akka provides a _Death Watch_ feature that allows an actor to _watch_ another actor and be notified if the other actor is stopped. Unlike supervision, watching is not limited to parent-child relationships: any actor can watch any other actor as long as it knows the `ActorRef`. After a watched actor stops, the watcher receives a `Terminated(actorRef)` message which also contains the reference to the watched actor. The watcher can either handle this message explicitly or it will fail with a `DeathPactException`. The latter is useful if the actor can no longer perform its own duties after the watched actor has been stopped. In our case, the group should still function after one device has been stopped, so we need to handle the `Terminated(actorRef)` message.
Our device group actor needs to include functionality that:
1. Starts watching new device actors when they are created.
2. Removes a device actor from the @scala[`Map[String, ActorRef]`] @java[`Map<String, ActorRef>`] &#8212; which maps devices to device actors &#8212; when the notification indicates it has stopped.
Unfortunately, the `Terminated` message only contains the `ActorRef` of the child actor. We need the actor's ID to remove it from the map of existing device to device actor mappings. To be able to do this removal, we need to introduce another placeholder, @scala[`Map[ActorRef, String]`] @java[`Map<ActorRef, String>`], that allows us to find out the device ID corresponding to a given `ActorRef`.
Adding the functionality to identify the actor results in this:
Scala
: @@snip [DeviceGroupQuerySpec.scala]($code$/scala/tutorial_4/DeviceGroupQuerySpec.scala) { #query-test-no-reading }
: @@snip [DeviceGroup.scala]($code$/scala/tutorial_4/DeviceGroup.scala) { #device-group-remove }
Java
: @@snip [DeviceGroupQueryTest.java]($code$/java/jdocs/tutorial_4/DeviceGroupQueryTest.java) { #query-test-no-reading }
: @@snip [DeviceGroup.java]($code$/java/jdocs/tutorial_4/DeviceGroup.java) { #device-group-remove }
We also know that sometimes device actors stop before answering:
So far we have no means to get which devices the group device actor keeps track of and, therefore, we cannot test our new functionality yet. To make it testable, we add a new query capability (message @scala[`RequestDeviceList(requestId: Long)`] @java[`RequestDeviceList`]) that simply lists the currently active
device IDs:
Scala
: @@snip [DeviceGroupQuerySpec.scala]($code$/scala/tutorial_4/DeviceGroupQuerySpec.scala) { #query-test-stopped }
: @@snip [DeviceGroup.scala]($code$/scala/tutorial_4/DeviceGroup.scala) { #device-group-full }
Java
: @@snip [DeviceGroupQueryTest.java]($code$/java/jdocs/tutorial_4/DeviceGroupQueryTest.java) { #query-test-stopped }
: @@snip [DeviceGroup.java]($code$/java/jdocs/tutorial_4/DeviceGroup.java) { #device-group-full }
If you remember, there is another case related to device actors stopping. It is possible that we get a normal reply
from a device actor, but then receive a `Terminated` for the same actor later. In this case, we would like to keep
the first reply and not mark the device as `DeviceNotAvailable`. We should test this, too:
We are almost ready to test the removal of devices. But we still need the following capabilities:

* To stop a device actor from our test case. From the outside, any actor can be stopped by simply sending the special
  built-in message, `PoisonPill`, which instructs the actor to stop.
* To be notified once the device actor is stopped. We can use the _Death Watch_ facility for this purpose, too. The @scala[`TestProbe`] @java[`TestKit`] has two methods that we can easily use: `watch()` to watch a specific actor, and `expectTerminated`
  to assert that the watched actor has been terminated.
We add two more test cases now. In the first, we just test that we get back the list of proper IDs once we have added a few devices. The second test case makes sure that the device ID is properly removed after the device actor has been stopped:
Scala
: @@snip [DeviceGroupQuerySpec.scala]($code$/scala/tutorial_4/DeviceGroupQuerySpec.scala) { #query-test-stopped-later }
: @@snip [DeviceGroupSpec.scala]($code$/scala/tutorial_4/DeviceGroupSpec.scala) { #device-group-list-terminate-test }
Java
: @@snip [DeviceGroupQueryTest.java]($code$/java/jdocs/tutorial_4/DeviceGroupQueryTest.java) { #query-test-stopped-later }
: @@snip [DeviceGroupTest.java]($code$/java/jdocs/tutorial_4/DeviceGroupTest.java) { #device-group-list-terminate-test }
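The stop-and-watch portion of the second test boils down to the following pattern (a sketch that assumes a `TestProbe` named `probe` and an already registered device actor `toShutDown`):

```scala
import akka.actor.PoisonPill

// Stop one of the device actors from the outside and wait for it to terminate.
probe.watch(toShutDown)
toShutDown ! PoisonPill
probe.expectTerminated(toShutDown)

// Once the group actor has processed the Terminated message, a new
// RequestDeviceList reply should no longer contain the stopped device's ID.
```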
The final case is when not all devices respond in time. To keep our test relatively fast, we will construct the
`DeviceGroupQuery` actor with a smaller timeout:
## Creating device manager actors
Going up to the next level in our hierarchy, we need to create the entry point for our device manager component in the `DeviceManager` source file. This actor is very similar to the device group actor, but creates device group actors instead of device actors:
Scala
: @@snip [DeviceGroupQuerySpec.scala]($code$/scala/tutorial_4/DeviceGroupQuerySpec.scala) { #query-test-timeout }
: @@snip [DeviceManager.scala]($code$/scala/tutorial_4/DeviceManager.scala) { #device-manager-full }
Java
: @@snip [DeviceGroupQueryTest.java]($code$/java/jdocs/tutorial_4/DeviceGroupQueryTest.java) { #query-test-timeout }
: @@snip [DeviceManager.java]($code$/java/jdocs/tutorial_4/DeviceManager.java) { #device-manager-full }
Our query works as expected; it is now time to include this new functionality in the `DeviceGroup` actor.
We leave the tests of the device manager as an exercise for you, since they are very similar to the tests we have already written for the group
actor.
## Adding the Query Capability to the Group
## What's next?
Including the query feature in the group actor is fairly simple now. We did all the heavy lifting in the query actor
itself; the group actor only needs to create it with the right initial parameters and nothing else.
We now have a hierarchical component for registering and tracking devices and recording measurements. We have seen how to implement different types of conversation patterns, such as:
Scala
: @@snip [DeviceGroup.scala]($code$/scala/tutorial_4/DeviceGroup.scala) { #query-added }
* Request-respond (for temperature recordings)
* Delegate-respond (for registration of devices)
* Create-watch-terminate (for creating the group and device actor as children)
Java
: @@snip [DeviceGroup.java]($code$/java/jdocs/tutorial_4/DeviceGroup.java) { #query-added }
It is probably worth reiterating what we said at the beginning of the chapter. By keeping the temporary state
that is only relevant to the query itself in a separate actor, we keep the group actor implementation very simple. It delegates
everything to child actors and therefore does not have to keep state that is not relevant to its core business. Also, multiple queries can
now run in parallel to each other, in fact, as many as needed. In our case querying an individual device actor is a fast operation, but
if this were not the case, for example, because the remote sensors need to be contacted over the network, this design
would significantly improve throughput.
We close this chapter by testing that everything works together. This test is just a variant of the previous ones,
now exercising the group query feature:
Scala
: @@snip [DeviceGroupSpec.scala]($code$/scala/tutorial_4/DeviceGroupSpec.scala) { #group-query-integration-test }
Java
: @@snip [DeviceGroupTest.java]($code$/java/jdocs/tutorial_4/DeviceGroupTest.java) { #group-query-integration-test }
In the next chapter, we will introduce group query capabilities, which will establish a new conversation pattern of scatter-gather. In particular, we will implement the functionality that allows users to query the status of all the devices belonging to a group.


@ -0,0 +1,253 @@
# Part 5: Querying Device Groups
The conversational patterns that we have seen so far are simple in the sense that they require the actor to keep little or no state. Specifically:
* Device actors return a reading, which requires no state change
* Device actors record a temperature, which updates a single field
* Device Group actors maintain group membership by simply adding or removing entries from a map
In this part, we will use a more complex example. Since homeowners will be interested in the temperatures throughout their home, our goal is to be able to query all of the device actors in a group. Let us start by investigating how such a query API should behave.
## Dealing with possible scenarios
The very first issue we face is that the membership of a group is dynamic. Each sensor device is represented by an actor that can stop at any time. At the beginning of the query, we can ask all of the existing device actors for the current temperature. However, during the lifecycle of the query:
* A device actor might stop and not be able to respond back with a temperature reading.
* A new device actor might start up and not be included in the query because we weren't aware of it.
These issues can be addressed in many different ways, but the important point is to settle on the desired behavior. The following works well for our use case:
* When a query arrives, the group actor takes a _snapshot_ of the existing device actors and will only ask those actors for the temperature.
* Actors that start up _after_ the query arrives are simply ignored.
* If an actor in the snapshot stops during the query without answering, we will simply report the fact that it stopped to the sender of the query message.
Apart from device actors coming and going dynamically, some actors might take a long time to answer. For example, they could be stuck in an accidental infinite loop, or fail due to a bug and drop our request. We don't want the query to continue indefinitely, so we will consider it complete in either of the following cases:
* All actors in the snapshot have either responded or have confirmed being stopped.
* We reach a pre-defined deadline.
Given these decisions, along with the fact that a device in the snapshot might have just started and not yet received a temperature to record, we can define four states
for each device actor, with respect to a temperature query:
* It has a temperature available: @scala[`Temperature(value)`] @java[`Temperature`].
* It has responded, but has no temperature available yet: `TemperatureNotAvailable`.
* It has stopped before answering: `DeviceNotAvailable`.
* It did not respond before the deadline: `DeviceTimedOut`.
Summarizing these in message types we can add the following to `DeviceGroup`:
Scala
: @@snip [DeviceGroup.scala]($code$/scala/tutorial_5/DeviceGroup.scala) { #query-protocol }
Java
: @@snip [DeviceGroup.java]($code$/java/jdocs/tutorial_5/DeviceGroup.java) { #query-protocol }
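In Scala these messages amount to, in essence, the following sketch (the snippets above are the definitive version):

```scala
object DeviceGroup {
  // Query request and its reply, keyed by device ID.
  final case class RequestAllTemperatures(requestId: Long)
  final case class RespondAllTemperatures(requestId: Long, temperatures: Map[String, TemperatureReading])

  // The four possible states of a device with respect to a query.
  sealed trait TemperatureReading
  final case class Temperature(value: Double) extends TemperatureReading
  case object TemperatureNotAvailable extends TemperatureReading
  case object DeviceNotAvailable extends TemperatureReading
  case object DeviceTimedOut extends TemperatureReading
}
```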
## Implementing the query
One approach for implementing the query involves adding code to the group device actor. However, in practice this can be very cumbersome and error prone. Remember that when we start a query, we need to take a snapshot of the devices present and start a timer so that we can enforce the deadline. In the meantime, _another query_ can arrive. For the second query, of course, we need to keep track of the exact same information but in isolation from the previous query. This would require us to maintain separate mappings between queries and device actors.
Instead, we will implement a simpler, and superior approach. We will create an actor that represents a _single query_ and that performs the tasks needed to complete the query on behalf of the group actor. So far we have created actors that belonged to classical domain objects, but now, we will create an
actor that represents a process or a task rather than an entity. We benefit by keeping our group device actor simple and being able to better test query capability in isolation.
### Defining the query actor
First, we need to design the lifecycle of our query actor. This consists of identifying its initial state, the first action it will take, and the cleanup &#8212; if necessary. The query actor will need the following information:
* The snapshot and IDs of active device actors to query.
* The ID of the request that started the query (so that we can include it in the reply).
* The reference of the actor who sent the query. We will send the reply to this actor directly.
* A deadline that indicates how long the query should wait for replies. Making this a parameter will simplify testing.
#### Scheduling the query timeout
Since we need a way to indicate how long we are willing to wait for responses, it is time to introduce a new Akka feature that we have
not used yet, the built-in scheduler facility. Using the scheduler is simple:
* We get the scheduler from the `ActorSystem`, which, in turn,
is accessible from the actor's context: @scala[`context.system.scheduler`]@java[`getContext().getSystem().scheduler()`]. This needs an @scala[implicit] `ExecutionContext` which
is basically the thread-pool that will execute the timer task itself. In our case, we use the same dispatcher
as the actor by @scala[importing `import context.dispatcher`] @java[passing in `getContext().dispatcher()`].
* The
@scala[`scheduler.scheduleOnce(time, actorRef, message)`] @java[`scheduler.scheduleOnce(time, actorRef, message, executor, sender)`] method will schedule the message `message` into the future by the
specified `time` and send it to the actor `actorRef`.
We need to create a message that represents the query timeout. We create a simple message `CollectionTimeout` without any parameters for this purpose. The return value from `scheduleOnce` is a `Cancellable` which can be used to cancel the timer if the query finishes successfully in time. At the start of the query, we need to ask each of the device actors for the current temperature. To be able to quickly
detect devices that stopped before they got the `ReadTemperature` message we will also watch each of the actors. This
way, we get `Terminated` messages for those that stop during the lifetime of the query, so we don't need to wait
until the timeout to mark these as not available.
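Taken in isolation, the scheduling step looks like the following sketch (Scala, from inside the query actor; the 3-second value is only an example, the real timeout is a constructor parameter):

```scala
import scala.concurrent.duration._
import akka.actor.Cancellable

// Assume `case object CollectionTimeout` is defined in the query actor's companion object.
import context.dispatcher // the ExecutionContext that will run the timer task

val queryTimeoutTimer: Cancellable =
  context.system.scheduler.scheduleOnce(3.seconds, self, CollectionTimeout)

// If all replies arrive before the deadline, the timer can be cancelled:
// queryTimeoutTimer.cancel()
```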
Putting this together, the outline of our `DeviceGroupQuery` actor looks like this:
Scala
: @@snip [DeviceGroupQuery.scala]($code$/scala/tutorial_5/DeviceGroupQuery.scala) { #query-outline }
Java
: @@snip [DeviceGroupQuery.java]($code$/java/jdocs/tutorial_5/DeviceGroupQuery.java) { #query-outline }
#### Tracking actor state
The query actor, apart from the pending timer, has one stateful aspect: tracking the set of actors that have replied, have stopped, or have not replied. One way to track this state is
to create a mutable field in the actor @scala[(a `var`)]. A different approach takes advantage of the ability to change how
an actor responds to messages. A
`Receive` is just a function (or an object, if you like) that can be returned from another function. By default, the `receive` block defines the behavior of the actor, but it is possible to change it multiple times during the life of the actor. We simply call `context.become(newBehavior)`
where `newBehavior` is anything with type `Receive` @scala[(which is just a shorthand for `PartialFunction[Any, Unit]`)]. We will leverage this
feature to track the state of our actor.
For our use case:
1. Instead of defining `receive` directly, we delegate to a `waitingForReplies` function to create the `Receive`.
1. The `waitingForReplies` function will keep track of two changing values:
* a `Map` of already received replies
* a `Set` of actors that we still wait on
1. We have three events to act on:
* We can receive a
`RespondTemperature` message from one of the devices.
* We can receive a `Terminated` message for a device actor
that has been stopped in the meantime.
* We can reach the deadline and receive a `CollectionTimeout`.
In the first two cases, we need to keep track of the replies, which we now simply delegate to a method `receivedResponse`, which we will discuss later. In the case of timeout, we need to simply take all the actors that have not replied yet (the members of the set `stillWaiting`) and put a `DeviceTimedOut` as the status in the final reply. Then we reply to the submitter of the query with the collected results and stop the query actor.
To accomplish this, add the following to your `DeviceGroupQuery` source file:
Scala
: @@snip [DeviceGroupQuery.scala]($code$/scala/tutorial_5/DeviceGroupQuery.scala) { #query-state }
Java
: @@snip [DeviceGroupQuery.java]($code$/java/jdocs/tutorial_5/DeviceGroupQuery.java) { #query-state }
It is not yet clear how we will "mutate" the `answersSoFar` and `stillWaiting` data structures. One important thing to note is that the function `waitingForReplies` **does not handle the messages directly. It returns a `Receive` function that will handle the messages**. This means that if we call `waitingForReplies` again, with different parameters,
then it returns a brand new `Receive` that will use those new parameters.
We have seen how we
can install the initial `Receive` by simply returning it from `receive`. In order to install a new one, to record a
new reply, for example, we need some mechanism. This mechanism is the method `context.become(newReceive)` which will
_change_ the actor's message handling function to the provided `newReceive` function. You can imagine that before
starting, your actor automatically calls `context.become(receive)`, i.e. installing the `Receive` function that
is returned from `receive`. This is another important observation: **it is not `receive` that handles the messages,
it just returns a `Receive` function that will actually handle the messages**.
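The mechanism itself is easier to see in a much smaller toy actor (not part of the tutorial), where the current count lives only in the parameter of the `Receive`-returning method:

```scala
import akka.actor.Actor

class CountingActor extends Actor {
  // `receive` only *returns* the initial behavior...
  override def receive: Receive = counting(0)

  // ...and every "increment" installs a new Receive built from a new parameter.
  private def counting(count: Int): Receive = {
    case "increment" => context.become(counting(count + 1))
    case "report"    => sender() ! count
  }
}
```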
We now have to figure out what to do in `receivedResponse`. First, we need to record the new result in the map `repliesSoFar` and remove the actor from `stillWaiting`. The next step is to check if there are any remaining actors we are waiting for. If there is none, we send the result of the query to the original requester and stop the query actor. Otherwise, we need to update the `repliesSoFar` and `stillWaiting` structures and wait for more
messages.
In the code before, we treated `Terminated` as the implicit response `DeviceNotAvailable`, so `receivedResponse` does
not need to do anything special. However, there is one small task we still need to do. It is possible that we receive a proper
response from a device actor, but then it stops during the lifetime of the query. We don't want this second event
to overwrite the already received reply. In other words, we don't want to receive `Terminated` after we recorded the
response. This is simple to achieve by calling `context.unwatch(ref)`. This method also ensures that we don't
receive `Terminated` events that are already in the mailbox of the actor. It is also safe to call this multiple times;
only the first call will have any effect, the rest are simply ignored.
With all this knowledge, we can create the `receivedResponse` method:
Scala
: @@snip [DeviceGroupQuery.scala]($code$/scala/tutorial_5/DeviceGroupQuery.scala) { #query-collect-reply }
Java
: @@snip [DeviceGroupQuery.java]($code$/java/jdocs/tutorial_5/DeviceGroupQuery.java) { #query-collect-reply }
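One possible shape of `receivedResponse`, sketched in Scala under the assumption that `actorToDeviceId`, `requestId` and `requester` are constructor parameters of the query actor and `waitingForReplies` is the factory described above:

```scala
def receivedResponse(
  deviceActor:  ActorRef,
  reading:      DeviceGroup.TemperatureReading,
  stillWaiting: Set[ActorRef],
  repliesSoFar: Map[String, DeviceGroup.TemperatureReading]): Unit = {

  // A late Terminated for this device must not overwrite the recorded reply.
  context.unwatch(deviceActor)

  val deviceId = actorToDeviceId(deviceActor)
  val newStillWaiting = stillWaiting - deviceActor
  val newRepliesSoFar = repliesSoFar + (deviceId -> reading)

  if (newStillWaiting.isEmpty) {
    requester ! DeviceGroup.RespondAllTemperatures(requestId, newRepliesSoFar)
    context.stop(self)
  } else {
    context.become(waitingForReplies(newRepliesSoFar, newStillWaiting))
  }
}
```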
It is quite natural to ask at this point, what have we gained by using the `context.become()` trick instead of
just making the `repliesSoFar` and `stillWaiting` structures mutable fields of the actor (i.e. `var`s)? In this
simple example, not that much. The value of this style of state keeping becomes more evident when you suddenly have
_more kinds_ of states. Since each state
might have temporary data that is relevant only to that state, keeping these as fields would pollute the global state
of the actor, i.e. it would be unclear which fields are used in which state. Using parameterized `Receive` "factory"
methods we can keep data private that is only relevant to the state. It is still a good exercise to
rewrite the query using @scala[`var`s] @java[mutable fields] instead of `context.become()`. However, it is recommended to get comfortable
with the solution we have used here as it helps structure more complex actor code in a cleaner and more maintainable way.
Our query actor is now done:
Scala
: @@snip [DeviceGroupQuery.scala]($code$/scala/tutorial_5/DeviceGroupQuery.scala) { #query-full }
Java
: @@snip [DeviceGroupQuery.java]($code$/java/jdocs/tutorial_5/DeviceGroupQuery.java) { #query-full }
### Testing the query actor
Now let's verify the correctness of the query actor implementation. There are various scenarios we need to test individually to make
sure everything works as expected. To be able to do this, we need to simulate the device actors somehow to exercise
various normal or failure scenarios. Thankfully we took the list of collaborators (actually a `Map`) as a parameter
to the query actor, so we can easily pass in @scala[`TestProbe`] @java[`TestKit`] references. In our first test, we try out the case when
there are two devices and both report a temperature:
Scala
: @@snip [DeviceGroupQuerySpec.scala]($code$/scala/tutorial_5/DeviceGroupQuerySpec.scala) { #query-test-normal }
Java
: @@snip [DeviceGroupQueryTest.java]($code$/java/jdocs/tutorial_5/DeviceGroupQueryTest.java) { #query-test-normal }
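The setup for this first test is short (sketch; the parameter names of `DeviceGroupQuery.props` follow the list of required information above and may differ from your code):

```scala
import scala.concurrent.duration._
import akka.testkit.TestProbe

val requester = TestProbe()
val device1 = TestProbe()
val device2 = TestProbe()

val queryActor = system.actorOf(DeviceGroupQuery.props(
  actorToDeviceId = Map(device1.ref -> "device1", device2.ref -> "device2"),
  requestId = 1,
  requester = requester.ref,
  timeout = 3.seconds))

// The query asks each device right away; the probes answer in their place.
val read1 = device1.expectMsgType[Device.ReadTemperature]
val read2 = device2.expectMsgType[Device.ReadTemperature]
queryActor.tell(Device.RespondTemperature(read1.requestId, Some(1.0)), device1.ref)
queryActor.tell(Device.RespondTemperature(read2.requestId, Some(2.0)), device2.ref)

requester.expectMsg(DeviceGroup.RespondAllTemperatures(
  requestId = 1,
  temperatures = Map(
    "device1" -> DeviceGroup.Temperature(1.0),
    "device2" -> DeviceGroup.Temperature(2.0))))
```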
That was the happy case, but we know that sometimes devices cannot provide a temperature measurement. This
scenario is just slightly different from the previous:
Scala
: @@snip [DeviceGroupQuerySpec.scala]($code$/scala/tutorial_5/DeviceGroupQuerySpec.scala) { #query-test-no-reading }
Java
: @@snip [DeviceGroupQueryTest.java]($code$/java/jdocs/tutorial_5/DeviceGroupQueryTest.java) { #query-test-no-reading }
We also know that sometimes device actors stop before answering:
Scala
: @@snip [DeviceGroupQuerySpec.scala]($code$/scala/tutorial_5/DeviceGroupQuerySpec.scala) { #query-test-stopped }
Java
: @@snip [DeviceGroupQueryTest.java]($code$/java/jdocs/tutorial_5/DeviceGroupQueryTest.java) { #query-test-stopped }
If you remember, there is another case related to device actors stopping. It is possible that we get a normal reply
from a device actor, but then receive a `Terminated` for the same actor later. In this case, we would like to keep
the first reply and not mark the device as `DeviceNotAvailable`. We should test this, too:
Scala
: @@snip [DeviceGroupQuerySpec.scala]($code$/scala/tutorial_5/DeviceGroupQuerySpec.scala) { #query-test-stopped-later }
Java
: @@snip [DeviceGroupQueryTest.java]($code$/java/jdocs/tutorial_5/DeviceGroupQueryTest.java) { #query-test-stopped-later }
The final case is when not all devices respond in time. To keep our test relatively fast, we will construct the
`DeviceGroupQuery` actor with a smaller timeout:
Scala
: @@snip [DeviceGroupQuerySpec.scala]($code$/scala/tutorial_5/DeviceGroupQuerySpec.scala) { #query-test-timeout }
Java
: @@snip [DeviceGroupQueryTest.java]($code$/java/jdocs/tutorial_5/DeviceGroupQueryTest.java) { #query-test-timeout }
Our query works as expected; it is now time to include this new functionality in the `DeviceGroup` actor.
## Adding query capability to the group
Including the query feature in the group actor is fairly simple now. We did all the heavy lifting in the query actor
itself; the group actor only needs to create it with the right initial parameters and nothing else.
Scala
: @@snip [DeviceGroup.scala]($code$/scala/tutorial_5/DeviceGroup.scala) { #query-added }
Java
: @@snip [DeviceGroup.java]($code$/java/jdocs/tutorial_5/DeviceGroup.java) { #query-added }
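In other words, handling the new request amounts to little more than spawning a query actor with a snapshot of the current devices (sketch; parameter names assumed to match the query actor's `props`):

```scala
import scala.concurrent.duration._

// Inside DeviceGroup's receive:
case RequestAllTemperatures(requestId) =>
  context.actorOf(DeviceGroupQuery.props(
    actorToDeviceId = actorToDeviceId, // snapshot taken at the moment the query arrives
    requestId = requestId,
    requester = sender(),              // the reply will go straight back to the asker
    timeout = 3.seconds))
```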
It is probably worth restating what we said at the beginning of the chapter. By keeping the temporary state that is only relevant to the query itself in a separate actor, we keep the group actor implementation very simple. It delegates
everything to child actors and therefore does not have to keep state that is not relevant to its core business. Also, multiple queries can now run in parallel to each other, in fact, as many as needed. In our case querying an individual device actor is a fast operation, but if this were not the case, for example, because the remote sensors need to be contacted over the network, this design would significantly improve throughput.
We close this chapter by testing that everything works together. This test is just a variant of the previous ones, now exercising the group query feature:
Scala
: @@snip [DeviceGroupSpec.scala]($code$/scala/tutorial_5/DeviceGroupSpec.scala) { #group-query-integration-test }
Java
: @@snip [DeviceGroupTest.java]($code$/java/jdocs/tutorial_5/DeviceGroupTest.java) { #group-query-integration-test }
## Summary
In the context of the IoT system, this guide introduced the following concepts, among others. You can follow the links to review them if necessary:
* @ref[The hierarchy of actors and their lifecycle](tutorial_1.md)
* @ref[The importance of designing messages for flexibility](tutorial_3.md)
* @ref[How to watch and stop actors, if necessary](tutorial_4.md#keeping-track-of-the-device-actors-in-the-group)
## What's Next?
To continue your journey with Akka, we recommend:
* Start building your own applications with Akka, and make sure you [get involved in our amazing community](http://akka.io/get-involved) for help if you get stuck.
* If you'd like some additional background, read the rest of the reference documentation and check out some of the @ref[books and videos](../additional/books.md) on Akka.


@ -3,7 +3,7 @@
# Quick Start Guide
Create a project and add the akka-streams dependency to the build tool of your
choice as described in @ref[Using a build tool](../guide/quickstart.md).
choice.
A stream usually begins at a source, so this is also how we start an Akka
Stream. Before we create one, we import the full complement of streaming tools:


@ -1,10 +1,8 @@
package jdocs.tutorial_1;
//#print-refs
package com.lightbend.akka.sample;
//#print-refs
import akka.actor.AbstractActor;
import akka.actor.AbstractActor.Receive;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
@ -12,6 +10,12 @@ import org.junit.Test;
import org.scalatest.junit.JUnitSuite;
//#print-refs
import akka.actor.AbstractActor;
import akka.actor.AbstractActor.Receive;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
class PrintMyActorRefActor extends AbstractActor {
@Override
public Receive createReceive() {
@ -106,6 +110,26 @@ class SupervisedActor extends AbstractActor {
}
//#supervise
//#print-refs
public class ActorHierarchyExperiments {
public static void main(String[] args) throws java.io.IOException {
ActorSystem system = ActorSystem.create("test");
ActorRef firstRef = system.actorOf(Props.create(PrintMyActorRefActor.class), "first-actor");
System.out.println("First: " + firstRef);
firstRef.tell("printit", ActorRef.noSender());
System.out.println(">>> Press ENTER to exit <<<");
try {
System.in.read();
} finally {
system.terminate();
}
}
}
//#print-refs
class ActorHierarchyExperimentsTest extends JUnitSuite {
static ActorSystem system;
@ -120,28 +144,19 @@ class ActorHierarchyExperimentsTest extends JUnitSuite {
system = null;
}
@Test
public void testCreateTopAndChildActor() {
//#print-refs
ActorRef firstRef = system.actorOf(Props.create(PrintMyActorRefActor.class), "first-actor");
System.out.println("First : " + firstRef);
firstRef.tell("printit", ActorRef.noSender());
//#print-refs
}
@Test
public void testStartAndStopActors() {
//#start-stop
//#start-stop-main
ActorRef first = system.actorOf(Props.create(StartStopActor1.class), "first");
first.tell("stop", ActorRef.noSender());
//#start-stop
//#start-stop-main
}
@Test
public void testSuperviseActors() {
//#supervise
//#supervise-main
ActorRef supervisingActor = system.actorOf(Props.create(SupervisingActor.class), "supervising-actor");
supervisingActor.tell("failChild", ActorRef.noSender());
//#supervise
//#supervise-main
}
}


@ -1,9 +1,9 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.tutorial_1;
//#iot-app
package com.lightbend.akka.sample;
import java.io.IOException;


@ -1,9 +1,9 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.tutorial_1;
//#iot-supervisor
package com.lightbend.akka.sample;
import akka.actor.AbstractActor;
import akka.actor.ActorLogging;


@ -3,18 +3,16 @@
*/
package jdocs.tutorial_3;
//#device-with-register
//#full-device
import java.util.Optional;
import akka.actor.AbstractActor;
import akka.actor.AbstractActor.Receive;
import akka.actor.Props;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import jdocs.tutorial_3.DeviceManager.DeviceRegistered;
import jdocs.tutorial_3.DeviceManager.RequestTrackDevice;
import java.util.Optional;
public class Device extends AbstractActor {
private final LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
@ -82,16 +80,6 @@ public class Device extends AbstractActor {
@Override
public Receive createReceive() {
return receiveBuilder()
.match(RequestTrackDevice.class, r -> {
if (this.groupId.equals(r.groupId) && this.deviceId.equals(r.deviceId)) {
getSender().tell(new DeviceRegistered(), getSelf());
} else {
log.warning(
"Ignoring TrackDevice request for {}-{}.This actor is responsible for {}-{}.",
r.groupId, r.deviceId, this.groupId, this.deviceId
);
}
})
.match(RecordTemperature.class, r -> {
log.info("Recorded temperature reading {} with {}", r.value, r.requestId);
lastTemperatureReading = Optional.of(r.value);
@ -103,4 +91,4 @@ public class Device extends AbstractActor {
.build();
}
}
//#device-with-register
//#full-device


@ -1,11 +1,11 @@
package jdocs.tutorial_2;
package jdocs.tutorial_3;
import java.util.Optional;
import jdocs.tutorial_2.Device.ReadTemperature;
import jdocs.tutorial_2.Device.RecordTemperature;
import jdocs.tutorial_2.Device.RespondTemperature;
import jdocs.tutorial_2.Device.TemperatureRecorded;
import jdocs.tutorial_3.Device.ReadTemperature;
import jdocs.tutorial_3.Device.RecordTemperature;
import jdocs.tutorial_3.Device.RespondTemperature;
import jdocs.tutorial_3.Device.TemperatureRecorded;
class DeviceInProgress1 {


@ -1,4 +1,4 @@
package jdocs.tutorial_2;
package jdocs.tutorial_3;
class DeviceInProgress3 {


@ -3,9 +3,7 @@
*/
package jdocs.tutorial_3;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.testkit.javadsl.TestKit;
import java.util.Optional;
import org.junit.AfterClass;
import org.junit.BeforeClass;
@ -14,7 +12,9 @@ import static org.junit.Assert.assertEquals;
import org.scalatest.junit.JUnitSuite;
import java.util.Optional;
import akka.actor.ActorSystem;
import akka.actor.ActorRef;
import akka.testkit.javadsl.TestKit;
public class DeviceTest extends JUnitSuite {
@ -31,30 +31,6 @@ public class DeviceTest extends JUnitSuite {
system = null;
}
//#device-registration-tests
@Test
public void testReplyToRegistrationRequests() {
TestKit probe = new TestKit(system);
ActorRef deviceActor = system.actorOf(Device.props("group", "device"));
deviceActor.tell(new DeviceManager.RequestTrackDevice("group", "device"), probe.getRef());
probe.expectMsgClass(DeviceManager.DeviceRegistered.class);
assertEquals(deviceActor, probe.getLastSender());
}
@Test
public void testIgnoreWrongRegistrationRequests() {
TestKit probe = new TestKit(system);
ActorRef deviceActor = system.actorOf(Device.props("group", "device"));
deviceActor.tell(new DeviceManager.RequestTrackDevice("wrongGroup", "device"), probe.getRef());
probe.expectNoMsg();
deviceActor.tell(new DeviceManager.RequestTrackDevice("group", "wrongDevice"), probe.getRef());
probe.expectNoMsg();
}
//#device-registration-tests
//#device-read-test
@Test
public void testReplyWithEmptyReadingIfNoTemperatureIsKnown() {


@ -1,4 +1,4 @@
package jdocs.tutorial_2.inprogress2;
package jdocs.tutorial_3.inprogress2;
//#device-with-read


@ -3,6 +3,8 @@
*/
package jdocs.tutorial_4;
//#device-with-register
import akka.actor.AbstractActor;
import akka.actor.Props;
import akka.event.Logging;
@ -101,3 +103,4 @@ public class Device extends AbstractActor {
.build();
}
}
//#device-with-register


@ -3,20 +3,21 @@
*/
package jdocs.tutorial_4;
import java.util.Set;
import java.util.Map;
import java.util.HashMap;
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Props;
import akka.actor.Terminated;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import scala.concurrent.duration.FiniteDuration;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import jdocs.tutorial_4.Device;
import jdocs.tutorial_4.DeviceManager;
//#query-added
//#device-group-full
public class DeviceGroup extends AbstractActor {
private final LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
@ -26,9 +27,11 @@ public class DeviceGroup extends AbstractActor {
this.groupId = groupId;
}
//#device-group-register
public static Props props(String groupId) {
return Props.create(DeviceGroup.class, groupId);
}
//#device-group-register
public static final class RequestDeviceList {
final long requestId;
@ -47,51 +50,15 @@ public class DeviceGroup extends AbstractActor {
this.ids = ids;
}
}
//#query-protocol
public static final class RequestAllTemperatures {
final long requestId;
public RequestAllTemperatures(long requestId) {
this.requestId = requestId;
}
}
public static final class RespondAllTemperatures {
final long requestId;
final Map<String, TemperatureReading> temperatures;
public RespondAllTemperatures(long requestId, Map<String, TemperatureReading> temperatures) {
this.requestId = requestId;
this.temperatures = temperatures;
}
}
public static interface TemperatureReading {
}
public static final class Temperature implements TemperatureReading {
public final double value;
public Temperature(double value) {
this.value = value;
}
}
public static final class TemperatureNotAvailable implements TemperatureReading {
}
public static final class DeviceNotAvailable implements TemperatureReading {
}
public static final class DeviceTimedOut implements TemperatureReading {
}
//#query-protocol
//#device-group-register
//#device-group-register
//#device-group-register
//#device-group-remove
final Map<String, ActorRef> deviceIdToActor = new HashMap<>();
//#device-group-register
final Map<ActorRef, String> actorToDeviceId = new HashMap<>();
final long nextCollectionId = 0L;
//#device-group-register
@Override
public void preStart() {
@ -103,19 +70,20 @@ public class DeviceGroup extends AbstractActor {
log.info("DeviceGroup {} stopped", groupId);
}
//#query-added
private void onTrackDevice(DeviceManager.RequestTrackDevice trackMsg) {
if (this.groupId.equals(trackMsg.groupId)) {
ActorRef ref = deviceIdToActor.get(trackMsg.deviceId);
if (ref != null) {
ref.forward(trackMsg, getContext());
ActorRef deviceActor = deviceIdToActor.get(trackMsg.deviceId);
if (deviceActor != null) {
deviceActor.forward(trackMsg, getContext());
} else {
log.info("Creating device actor for {}", trackMsg.deviceId);
ActorRef deviceActor = getContext().actorOf(Device.props(groupId, trackMsg.deviceId), "device-" + trackMsg.deviceId);
deviceActor = getContext().actorOf(Device.props(groupId, trackMsg.deviceId), "device-" + trackMsg.deviceId);
//#device-group-register
getContext().watch(deviceActor);
deviceActor.forward(trackMsg, getContext());
actorToDeviceId.put(deviceActor, trackMsg.deviceId);
//#device-group-register
deviceIdToActor.put(trackMsg.deviceId, deviceActor);
deviceActor.forward(trackMsg, getContext());
}
} else {
log.warning(
@ -136,24 +104,16 @@ public class DeviceGroup extends AbstractActor {
actorToDeviceId.remove(deviceActor);
deviceIdToActor.remove(deviceId);
}
//#query-added
private void onAllTemperatures(RequestAllTemperatures r) {
getContext().actorOf(DeviceGroupQuery.props(
actorToDeviceId, r.requestId, getSender(), new FiniteDuration(3, TimeUnit.SECONDS)));
}
@Override
public Receive createReceive() {
//#query-added
return receiveBuilder()
.match(DeviceManager.RequestTrackDevice.class, this::onTrackDevice)
.match(RequestDeviceList.class, this::onDeviceList)
.match(Terminated.class, this::onTerminated)
//#query-added
// ... other cases omitted
.match(RequestAllTemperatures.class, this::onAllTemperatures)
.build();
}
}
//#query-added
//#device-group-remove
//#device-group-register
//#device-group-full


@ -3,10 +3,8 @@
*/
package jdocs.tutorial_4;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import java.util.stream.Collectors;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
@ -21,8 +19,6 @@ import static org.junit.Assert.assertNotEquals;
import org.scalatest.junit.JUnitSuite;
import static jdocs.tutorial_4.DeviceGroupQueryTest.assertEqualTemperatures;
public class DeviceGroupTest extends JUnitSuite {
static ActorSystem system;
@ -38,6 +34,7 @@ public class DeviceGroupTest extends JUnitSuite {
system = null;
}
//#device-group-test-registration
@Test
public void testRegisterDeviceActor() {
TestKit probe = new TestKit(system);
@ -67,7 +64,9 @@ public class DeviceGroupTest extends JUnitSuite {
groupActor.tell(new DeviceManager.RequestTrackDevice("wrongGroup", "device1"), probe.getRef());
probe.expectNoMsg();
}
//#device-group-test-registration
//#device-group-test3
@Test
public void testReturnSameActorForSameDeviceId() {
TestKit probe = new TestKit(system);
@ -82,7 +81,9 @@ public class DeviceGroupTest extends JUnitSuite {
ActorRef deviceActor2 = probe.getLastSender();
assertEquals(deviceActor1, deviceActor2);
}
//#device-group-test3
//#device-group-list-terminate-test
@Test
public void testListActiveDevices() {
TestKit probe = new TestKit(system);
@ -132,42 +133,5 @@ public class DeviceGroupTest extends JUnitSuite {
return null;
});
}
//#group-query-integration-test
@Test
public void testCollectTemperaturesFromAllActiveDevices() {
TestKit probe = new TestKit(system);
ActorRef groupActor = system.actorOf(DeviceGroup.props("group"));
groupActor.tell(new DeviceManager.RequestTrackDevice("group", "device1"), probe.getRef());
probe.expectMsgClass(DeviceManager.DeviceRegistered.class);
ActorRef deviceActor1 = probe.getLastSender();
groupActor.tell(new DeviceManager.RequestTrackDevice("group", "device2"), probe.getRef());
probe.expectMsgClass(DeviceManager.DeviceRegistered.class);
ActorRef deviceActor2 = probe.getLastSender();
groupActor.tell(new DeviceManager.RequestTrackDevice("group", "device3"), probe.getRef());
probe.expectMsgClass(DeviceManager.DeviceRegistered.class);
ActorRef deviceActor3 = probe.getLastSender();
// Check that the device actors are working
deviceActor1.tell(new Device.RecordTemperature(0L, 1.0), probe.getRef());
assertEquals(0L, probe.expectMsgClass(Device.TemperatureRecorded.class).requestId);
deviceActor2.tell(new Device.RecordTemperature(1L, 2.0), probe.getRef());
assertEquals(1L, probe.expectMsgClass(Device.TemperatureRecorded.class).requestId);
// No temperature for device 3
groupActor.tell(new DeviceGroup.RequestAllTemperatures(0L), probe.getRef());
DeviceGroup.RespondAllTemperatures response = probe.expectMsgClass(DeviceGroup.RespondAllTemperatures.class);
assertEquals(0L, response.requestId);
Map<String, DeviceGroup.TemperatureReading> expectedTemperatures = new HashMap<>();
expectedTemperatures.put("device1", new DeviceGroup.Temperature(1.0));
expectedTemperatures.put("device2", new DeviceGroup.Temperature(2.0));
expectedTemperatures.put("device3", new DeviceGroup.TemperatureNotAvailable());
assertEqualTemperatures(expectedTemperatures, response.temperatures);
}
//#group-query-integration-test
//#device-group-list-terminate-test
}


@ -4,6 +4,9 @@
package jdocs.tutorial_4;
import java.util.Map;
import java.util.HashMap;
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Props;
@ -11,9 +14,7 @@ import akka.actor.Terminated;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import java.util.HashMap;
import java.util.Map;
//#device-manager-full
public class DeviceManager extends AbstractActor {
private final LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
@ -21,6 +22,7 @@ public class DeviceManager extends AbstractActor {
return Props.create(DeviceManager.class);
}
//#device-manager-msgs
public static final class RequestTrackDevice {
public final String groupId;
public final String deviceId;
@ -34,6 +36,7 @@ public class DeviceManager extends AbstractActor {
public static final class DeviceRegistered {
}
//#device-manager-msgs
final Map<String, ActorRef> groupIdToActor = new HashMap<>();
final Map<ActorRef, String> actorToGroupId = new HashMap<>();
@ -78,3 +81,4 @@ public class DeviceManager extends AbstractActor {
}
}
//#device-manager-full


@ -3,8 +3,6 @@
*/
package jdocs.tutorial_4;
import java.util.Optional;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.testkit.javadsl.TestKit;
@ -16,6 +14,8 @@ import static org.junit.Assert.assertEquals;
import org.scalatest.junit.JUnitSuite;
import java.util.Optional;
public class DeviceTest extends JUnitSuite {
static ActorSystem system;
@ -31,6 +31,7 @@ public class DeviceTest extends JUnitSuite {
system = null;
}
//#device-registration-tests
@Test
public void testReplyToRegistrationRequests() {
TestKit probe = new TestKit(system);
@ -52,7 +53,9 @@ public class DeviceTest extends JUnitSuite {
deviceActor.tell(new DeviceManager.RequestTrackDevice("group", "wrongDevice"), probe.getRef());
probe.expectNoMsg();
}
//#device-registration-tests
//#device-read-test
@Test
public void testReplyWithEmptyReadingIfNoTemperatureIsKnown() {
TestKit probe = new TestKit(system);
@ -62,7 +65,9 @@ public class DeviceTest extends JUnitSuite {
assertEquals(42L, response.requestId);
assertEquals(Optional.empty(), response.value);
}
//#device-read-test
//#device-write-read-test
@Test
public void testReplyWithLatestTemperatureReading() {
TestKit probe = new TestKit(system);
@ -84,5 +89,6 @@ public class DeviceTest extends JUnitSuite {
assertEquals(4L, response2.requestId);
assertEquals(Optional.of(55.0), response2.value);
}
//#device-write-read-test
}


@ -7,6 +7,7 @@ import akka.actor.AbstractActor;
import akka.actor.Props;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import jdocs.tutorial_5.DeviceManager.DeviceRegistered;
import jdocs.tutorial_5.DeviceManager.RequestTrackDevice;


@ -16,6 +16,7 @@ import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;
//#query-added
public class DeviceGroup extends AbstractActor {
private final LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
@ -47,6 +48,7 @@ public class DeviceGroup extends AbstractActor {
}
}
//#query-protocol
public static final class RequestAllTemperatures {
final long requestId;
@ -84,6 +86,8 @@ public class DeviceGroup extends AbstractActor {
public static final class DeviceTimedOut implements TemperatureReading {
}
//#query-protocol
final Map<String, ActorRef> deviceIdToActor = new HashMap<>();
final Map<ActorRef, String> actorToDeviceId = new HashMap<>();
@ -99,6 +103,7 @@ public class DeviceGroup extends AbstractActor {
log.info("DeviceGroup {} stopped", groupId);
}
//#query-added
private void onTrackDevice(DeviceManager.RequestTrackDevice trackMsg) {
if (this.groupId.equals(trackMsg.groupId)) {
ActorRef ref = deviceIdToActor.get(trackMsg.deviceId);
@ -131,6 +136,7 @@ public class DeviceGroup extends AbstractActor {
actorToDeviceId.remove(deviceActor);
deviceIdToActor.remove(deviceId);
}
//#query-added
private void onAllTemperatures(RequestAllTemperatures r) {
getContext().actorOf(DeviceGroupQuery.props(
@ -139,11 +145,15 @@ public class DeviceGroup extends AbstractActor {
@Override
public Receive createReceive() {
//#query-added
return receiveBuilder()
.match(DeviceManager.RequestTrackDevice.class, this::onTrackDevice)
.match(RequestDeviceList.class, this::onDeviceList)
.match(Terminated.class, this::onTerminated)
//#query-added
// ... other cases omitted
.match(RequestAllTemperatures.class, this::onAllTemperatures)
.build();
}
}
//#query-added


@ -3,16 +3,24 @@
*/
package jdocs.tutorial_5;
import akka.actor.*;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import scala.concurrent.duration.FiniteDuration;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import scala.concurrent.duration.FiniteDuration;
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Cancellable;
import akka.actor.Props;
import akka.actor.Terminated;
import akka.event.Logging;
import akka.event.LoggingAdapter;
//#query-full
//#query-outline
public class DeviceGroupQuery extends AbstractActor {
public static final class CollectionTimeout {
}
@ -52,6 +60,8 @@ public class DeviceGroupQuery extends AbstractActor {
queryTimeoutTimer.cancel();
}
//#query-outline
//#query-state
@Override
public Receive createReceive() {
return waitingForReplies(new HashMap<>(), actorToDeviceId.keySet());
@ -69,10 +79,7 @@ public class DeviceGroupQuery extends AbstractActor {
receivedResponse(deviceActor, reading, stillWaiting, repliesSoFar);
})
.match(Terminated.class, t -> {
if (stillWaiting.contains(t.getActor())) {
receivedResponse(t.getActor(), new DeviceGroup.DeviceNotAvailable(), stillWaiting, repliesSoFar);
}
// else ignore
})
.match(CollectionTimeout.class, t -> {
Map<String, DeviceGroup.TemperatureReading> replies = new HashMap<>(repliesSoFar);
@ -85,7 +92,9 @@ public class DeviceGroupQuery extends AbstractActor {
})
.build();
}
//#query-state
//#query-collect-reply
public void receivedResponse(ActorRef deviceActor,
DeviceGroup.TemperatureReading reading,
Set<ActorRef> stillWaiting,
@ -105,4 +114,8 @@ public class DeviceGroupQuery extends AbstractActor {
getContext().become(waitingForReplies(newRepliesSoFar, newStillWaiting));
}
}
//#query-collect-reply
//#query-outline
}
//#query-outline
//#query-full


@ -11,10 +11,10 @@ import akka.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import org.scalatest.junit.JUnitSuite;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import org.scalatest.junit.JUnitSuite;
import scala.concurrent.duration.FiniteDuration;
import java.util.HashMap;
@ -37,6 +37,7 @@ public class DeviceGroupQueryTest extends JUnitSuite {
system = null;
}
//#query-test-normal
@Test
public void testReturnTemperatureValueForWorkingDevices() {
TestKit requester = new TestKit(system);
@ -69,7 +70,9 @@ public class DeviceGroupQueryTest extends JUnitSuite {
assertEqualTemperatures(expectedTemperatures, response.temperatures);
}
//#query-test-normal
//#query-test-no-reading
@Test
public void testReturnTemperatureNotAvailableForDevicesWithNoReadings() {
TestKit requester = new TestKit(system);
@ -102,7 +105,9 @@ public class DeviceGroupQueryTest extends JUnitSuite {
assertEqualTemperatures(expectedTemperatures, response.temperatures);
}
//#query-test-no-reading
//#query-test-stopped
@Test
public void testReturnDeviceNotAvailableIfDeviceStopsBeforeAnswering() {
TestKit requester = new TestKit(system);
@ -135,7 +140,9 @@ public class DeviceGroupQueryTest extends JUnitSuite {
assertEqualTemperatures(expectedTemperatures, response.temperatures);
}
//#query-test-stopped
//#query-test-stopped-later
@Test
public void testReturnTemperatureReadingEvenIfDeviceStopsAfterAnswering() {
TestKit requester = new TestKit(system);
@ -169,7 +176,9 @@ public class DeviceGroupQueryTest extends JUnitSuite {
assertEqualTemperatures(expectedTemperatures, response.temperatures);
}
//#query-test-stopped-later
//#query-test-timeout
@Test
public void testReturnDeviceTimedOutIfDeviceDoesNotAnswerInTime() {
TestKit requester = new TestKit(system);
@ -203,6 +212,7 @@ public class DeviceGroupQueryTest extends JUnitSuite {
assertEqualTemperatures(expectedTemperatures, response.temperatures);
}
//#query-test-timeout
public static void assertEqualTemperatures(Map<String, DeviceGroup.TemperatureReading> expected, Map<String, DeviceGroup.TemperatureReading> actual) {
for (Map.Entry<String, DeviceGroup.TemperatureReading> entry : expected.entrySet()) {


@ -16,10 +16,11 @@ import akka.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import org.scalatest.junit.JUnitSuite;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotEquals;
import org.scalatest.junit.JUnitSuite;
import static jdocs.tutorial_5.DeviceGroupQueryTest.assertEqualTemperatures;
public class DeviceGroupTest extends JUnitSuite {
@ -132,6 +133,7 @@ public class DeviceGroupTest extends JUnitSuite {
});
}
//#group-query-integration-test
@Test
public void testCollectTemperaturesFromAllActiveDevices() {
TestKit probe = new TestKit(system);
@ -167,4 +169,5 @@ public class DeviceGroupTest extends JUnitSuite {
assertEqualTemperatures(expectedTemperatures, response.temperatures);
}
//#group-query-integration-test
}


@ -16,7 +16,6 @@ import static org.junit.Assert.assertEquals;
import org.scalatest.junit.JUnitSuite;
public class DeviceTest extends JUnitSuite {
static ActorSystem system;


@ -1,17 +1,16 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.tutorial_2;
//#full-device
import java.util.Optional;
package jdocs.tutorial_6;
import akka.actor.AbstractActor;
import akka.actor.AbstractActor.Receive;
import akka.actor.Props;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import jdocs.tutorial_6.DeviceManager.DeviceRegistered;
import jdocs.tutorial_6.DeviceManager.RequestTrackDevice;
import java.util.Optional;
public class Device extends AbstractActor {
private final LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
@ -80,6 +79,16 @@ public class Device extends AbstractActor {
@Override
public Receive createReceive() {
return receiveBuilder()
.match(RequestTrackDevice.class, r -> {
if (this.groupId.equals(r.groupId) && this.deviceId.equals(r.deviceId)) {
getSender().tell(new DeviceRegistered(), getSelf());
} else {
log.warning(
"Ignoring TrackDevice request for {}-{}.This actor is responsible for {}-{}.",
r.groupId, r.deviceId, this.groupId, this.deviceId
);
}
})
.match(RecordTemperature.class, r -> {
log.info("Recorded temperature reading {} with {}", r.value, r.requestId);
lastTemperatureReading = Optional.of(r.value);
@ -91,4 +100,3 @@ public class Device extends AbstractActor {
.build();
}
}
//#full-device


@ -1,11 +1,7 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.tutorial_3;
import java.util.Set;
import java.util.Map;
import java.util.HashMap;
package jdocs.tutorial_6;
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
@ -13,11 +9,13 @@ import akka.actor.Props;
import akka.actor.Terminated;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import scala.concurrent.duration.FiniteDuration;
import jdocs.tutorial_3.Device;
import jdocs.tutorial_3.DeviceManager;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;
//#device-group-full
public class DeviceGroup extends AbstractActor {
private final LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
@ -27,11 +25,9 @@ public class DeviceGroup extends AbstractActor {
this.groupId = groupId;
}
//#device-group-register
public static Props props(String groupId) {
return Props.create(DeviceGroup.class, groupId);
}
//#device-group-register
public static final class RequestDeviceList {
final long requestId;
@ -50,15 +46,48 @@ public class DeviceGroup extends AbstractActor {
this.ids = ids;
}
}
//#device-group-register
//#device-group-register
//#device-group-register
//#device-group-remove
public static final class RequestAllTemperatures {
final long requestId;
public RequestAllTemperatures(long requestId) {
this.requestId = requestId;
}
}
public static final class RespondAllTemperatures {
final long requestId;
final Map<String, TemperatureReading> temperatures;
public RespondAllTemperatures(long requestId, Map<String, TemperatureReading> temperatures) {
this.requestId = requestId;
this.temperatures = temperatures;
}
}
public static interface TemperatureReading {
}
public static final class Temperature implements TemperatureReading {
public final double value;
public Temperature(double value) {
this.value = value;
}
}
public static final class TemperatureNotAvailable implements TemperatureReading {
}
public static final class DeviceNotAvailable implements TemperatureReading {
}
public static final class DeviceTimedOut implements TemperatureReading {
}
final Map<String, ActorRef> deviceIdToActor = new HashMap<>();
//#device-group-register
final Map<ActorRef, String> actorToDeviceId = new HashMap<>();
//#device-group-register
final long nextCollectionId = 0L;
@Override
public void preStart() {
@ -72,18 +101,16 @@ public class DeviceGroup extends AbstractActor {
private void onTrackDevice(DeviceManager.RequestTrackDevice trackMsg) {
if (this.groupId.equals(trackMsg.groupId)) {
ActorRef deviceActor = deviceIdToActor.get(trackMsg.deviceId);
if (deviceActor != null) {
deviceActor.forward(trackMsg, getContext());
ActorRef ref = deviceIdToActor.get(trackMsg.deviceId);
if (ref != null) {
ref.forward(trackMsg, getContext());
} else {
log.info("Creating device actor for {}", trackMsg.deviceId);
deviceActor = getContext().actorOf(Device.props(groupId, trackMsg.deviceId), "device-" + trackMsg.deviceId);
//#device-group-register
ActorRef deviceActor = getContext().actorOf(Device.props(groupId, trackMsg.deviceId), "device-" + trackMsg.deviceId);
getContext().watch(deviceActor);
actorToDeviceId.put(deviceActor, trackMsg.deviceId);
//#device-group-register
deviceIdToActor.put(trackMsg.deviceId, deviceActor);
deviceActor.forward(trackMsg, getContext());
actorToDeviceId.put(deviceActor, trackMsg.deviceId);
deviceIdToActor.put(trackMsg.deviceId, deviceActor);
}
} else {
log.warning(
@ -105,15 +132,18 @@ public class DeviceGroup extends AbstractActor {
deviceIdToActor.remove(deviceId);
}
private void onAllTemperatures(RequestAllTemperatures r) {
getContext().actorOf(DeviceGroupQuery.props(
actorToDeviceId, r.requestId, getSender(), new FiniteDuration(3, TimeUnit.SECONDS)));
}
@Override
public Receive createReceive() {
return receiveBuilder()
.match(DeviceManager.RequestTrackDevice.class, this::onTrackDevice)
.match(RequestDeviceList.class, this::onDeviceList)
.match(Terminated.class, this::onTerminated)
.match(RequestAllTemperatures.class, this::onAllTemperatures)
.build();
}
}
//#device-group-remove
//#device-group-register
//#device-group-full


@ -1,26 +1,18 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.tutorial_4;
package jdocs.tutorial_6;
import akka.actor.*;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import scala.concurrent.duration.FiniteDuration;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import scala.concurrent.duration.FiniteDuration;
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.Cancellable;
import akka.actor.Props;
import akka.actor.Terminated;
import akka.event.Logging;
import akka.event.LoggingAdapter;
//#query-full
//#query-outline
public class DeviceGroupQuery extends AbstractActor {
public static final class CollectionTimeout {
}
@ -60,8 +52,6 @@ public class DeviceGroupQuery extends AbstractActor {
queryTimeoutTimer.cancel();
}
//#query-outline
//#query-state
@Override
public Receive createReceive() {
return waitingForReplies(new HashMap<>(), actorToDeviceId.keySet());
@ -79,7 +69,10 @@ public class DeviceGroupQuery extends AbstractActor {
receivedResponse(deviceActor, reading, stillWaiting, repliesSoFar);
})
.match(Terminated.class, t -> {
if (stillWaiting.contains(t.getActor())) {
receivedResponse(t.getActor(), new DeviceGroup.DeviceNotAvailable(), stillWaiting, repliesSoFar);
}
// else ignore
})
.match(CollectionTimeout.class, t -> {
Map<String, DeviceGroup.TemperatureReading> replies = new HashMap<>(repliesSoFar);
@ -92,9 +85,7 @@ public class DeviceGroupQuery extends AbstractActor {
})
.build();
}
//#query-state
//#query-collect-reply
public void receivedResponse(ActorRef deviceActor,
DeviceGroup.TemperatureReading reading,
Set<ActorRef> stillWaiting,
@ -114,8 +105,4 @@ public class DeviceGroupQuery extends AbstractActor {
getContext().become(waitingForReplies(newRepliesSoFar, newStillWaiting));
}
}
//#query-collect-reply
//#query-outline
}
//#query-outline
//#query-full


@ -1,7 +1,7 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.tutorial_4;
package jdocs.tutorial_6;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
@ -11,10 +11,10 @@ import akka.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import org.scalatest.junit.JUnitSuite;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import org.scalatest.junit.JUnitSuite;
import scala.concurrent.duration.FiniteDuration;
import java.util.HashMap;
@ -37,7 +37,6 @@ public class DeviceGroupQueryTest extends JUnitSuite {
system = null;
}
//#query-test-normal
@Test
public void testReturnTemperatureValueForWorkingDevices() {
TestKit requester = new TestKit(system);
@ -70,9 +69,7 @@ public class DeviceGroupQueryTest extends JUnitSuite {
assertEqualTemperatures(expectedTemperatures, response.temperatures);
}
//#query-test-normal
//#query-test-no-reading
@Test
public void testReturnTemperatureNotAvailableForDevicesWithNoReadings() {
TestKit requester = new TestKit(system);
@ -105,9 +102,7 @@ public class DeviceGroupQueryTest extends JUnitSuite {
assertEqualTemperatures(expectedTemperatures, response.temperatures);
}
//#query-test-no-reading
//#query-test-stopped
@Test
public void testReturnDeviceNotAvailableIfDeviceStopsBeforeAnswering() {
TestKit requester = new TestKit(system);
@ -140,9 +135,7 @@ public class DeviceGroupQueryTest extends JUnitSuite {
assertEqualTemperatures(expectedTemperatures, response.temperatures);
}
//#query-test-stopped
//#query-test-stopped-later
@Test
public void testReturnTemperatureReadingEvenIfDeviceStopsAfterAnswering() {
TestKit requester = new TestKit(system);
@ -176,9 +169,7 @@ public class DeviceGroupQueryTest extends JUnitSuite {
assertEqualTemperatures(expectedTemperatures, response.temperatures);
}
//#query-test-stopped-later
//#query-test-timeout
@Test
public void testReturnDeviceTimedOutIfDeviceDoesNotAnswerInTime() {
TestKit requester = new TestKit(system);
@ -212,7 +203,6 @@ public class DeviceGroupQueryTest extends JUnitSuite {
assertEqualTemperatures(expectedTemperatures, response.temperatures);
}
//#query-test-timeout
public static void assertEqualTemperatures(Map<String, DeviceGroup.TemperatureReading> expected, Map<String, DeviceGroup.TemperatureReading> actual) {
for (Map.Entry<String, DeviceGroup.TemperatureReading> entry : expected.entrySet()) {


@ -1,10 +1,12 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.tutorial_3;
package jdocs.tutorial_6;
import java.util.stream.Stream;
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
@ -14,10 +16,11 @@ import akka.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import org.scalatest.junit.JUnitSuite;
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotEquals;
import org.scalatest.junit.JUnitSuite;
import static jdocs.tutorial_6.DeviceGroupQueryTest.assertEqualTemperatures;
public class DeviceGroupTest extends JUnitSuite {
@ -34,7 +37,6 @@ public class DeviceGroupTest extends JUnitSuite {
system = null;
}
//#device-group-test-registration
@Test
public void testRegisterDeviceActor() {
TestKit probe = new TestKit(system);
@ -49,7 +51,7 @@ public class DeviceGroupTest extends JUnitSuite {
ActorRef deviceActor2 = probe.getLastSender();
assertNotEquals(deviceActor1, deviceActor2);
// Check that the device actors are workingl
// Check that the device actors are working
deviceActor1.tell(new Device.RecordTemperature(0L, 1.0), probe.getRef());
assertEquals(0L, probe.expectMsgClass(Device.TemperatureRecorded.class).requestId);
deviceActor2.tell(new Device.RecordTemperature(1L, 2.0), probe.getRef());
@ -64,9 +66,7 @@ public class DeviceGroupTest extends JUnitSuite {
groupActor.tell(new DeviceManager.RequestTrackDevice("wrongGroup", "device1"), probe.getRef());
probe.expectNoMsg();
}
//#device-group-test-registration
//#device-group-test3
@Test
public void testReturnSameActorForSameDeviceId() {
TestKit probe = new TestKit(system);
@ -81,9 +81,7 @@ public class DeviceGroupTest extends JUnitSuite {
ActorRef deviceActor2 = probe.getLastSender();
assertEquals(deviceActor1, deviceActor2);
}
//#device-group-test3
//#device-group-list-terminate-test
@Test
public void testListActiveDevices() {
TestKit probe = new TestKit(system);
@ -133,5 +131,40 @@ public class DeviceGroupTest extends JUnitSuite {
return null;
});
}
//#device-group-list-terminate-test
@Test
public void testCollectTemperaturesFromAllActiveDevices() {
TestKit probe = new TestKit(system);
ActorRef groupActor = system.actorOf(DeviceGroup.props("group"));
groupActor.tell(new DeviceManager.RequestTrackDevice("group", "device1"), probe.getRef());
probe.expectMsgClass(DeviceManager.DeviceRegistered.class);
ActorRef deviceActor1 = probe.getLastSender();
groupActor.tell(new DeviceManager.RequestTrackDevice("group", "device2"), probe.getRef());
probe.expectMsgClass(DeviceManager.DeviceRegistered.class);
ActorRef deviceActor2 = probe.getLastSender();
groupActor.tell(new DeviceManager.RequestTrackDevice("group", "device3"), probe.getRef());
probe.expectMsgClass(DeviceManager.DeviceRegistered.class);
ActorRef deviceActor3 = probe.getLastSender();
// Check that the device actors are working
deviceActor1.tell(new Device.RecordTemperature(0L, 1.0), probe.getRef());
assertEquals(0L, probe.expectMsgClass(Device.TemperatureRecorded.class).requestId);
deviceActor2.tell(new Device.RecordTemperature(1L, 2.0), probe.getRef());
assertEquals(1L, probe.expectMsgClass(Device.TemperatureRecorded.class).requestId);
// No temperature for device 3
groupActor.tell(new DeviceGroup.RequestAllTemperatures(0L), probe.getRef());
DeviceGroup.RespondAllTemperatures response = probe.expectMsgClass(DeviceGroup.RespondAllTemperatures.class);
assertEquals(0L, response.requestId);
Map<String, DeviceGroup.TemperatureReading> expectedTemperatures = new HashMap<>();
expectedTemperatures.put("device1", new DeviceGroup.Temperature(1.0));
expectedTemperatures.put("device2", new DeviceGroup.Temperature(2.0));
expectedTemperatures.put("device3", new DeviceGroup.TemperatureNotAvailable());
assertEqualTemperatures(expectedTemperatures, response.temperatures);
}
}


@ -2,10 +2,7 @@
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.tutorial_3;
import java.util.Map;
import java.util.HashMap;
package jdocs.tutorial_6;
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
@ -14,7 +11,9 @@ import akka.actor.Terminated;
import akka.event.Logging;
import akka.event.LoggingAdapter;
//#device-manager-full
import java.util.HashMap;
import java.util.Map;
public class DeviceManager extends AbstractActor {
private final LoggingAdapter log = Logging.getLogger(getContext().getSystem(), this);
@ -22,7 +21,6 @@ public class DeviceManager extends AbstractActor {
return Props.create(DeviceManager.class);
}
//#device-manager-msgs
public static final class RequestTrackDevice {
public final String groupId;
public final String deviceId;
@ -36,7 +34,6 @@ public class DeviceManager extends AbstractActor {
public static final class DeviceRegistered {
}
//#device-manager-msgs
final Map<String, ActorRef> groupIdToActor = new HashMap<>();
final Map<ActorRef, String> actorToGroupId = new HashMap<>();
@ -81,4 +78,3 @@ public class DeviceManager extends AbstractActor {
}
}
//#device-manager-full


@ -1,10 +1,14 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.tutorial_2;
package jdocs.tutorial_6;
import java.util.Optional;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.testkit.javadsl.TestKit;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
@ -12,9 +16,6 @@ import static org.junit.Assert.assertEquals;
import org.scalatest.junit.JUnitSuite;
import akka.actor.ActorSystem;
import akka.actor.ActorRef;
import akka.testkit.javadsl.TestKit;
public class DeviceTest extends JUnitSuite {
@ -31,7 +32,28 @@ public class DeviceTest extends JUnitSuite {
system = null;
}
//#device-read-test
@Test
public void testReplyToRegistrationRequests() {
TestKit probe = new TestKit(system);
ActorRef deviceActor = system.actorOf(Device.props("group", "device"));
deviceActor.tell(new DeviceManager.RequestTrackDevice("group", "device"), probe.getRef());
probe.expectMsgClass(DeviceManager.DeviceRegistered.class);
assertEquals(deviceActor, probe.getLastSender());
}
@Test
public void testIgnoreWrongRegistrationRequests() {
TestKit probe = new TestKit(system);
ActorRef deviceActor = system.actorOf(Device.props("group", "device"));
deviceActor.tell(new DeviceManager.RequestTrackDevice("wrongGroup", "device"), probe.getRef());
probe.expectNoMsg();
deviceActor.tell(new DeviceManager.RequestTrackDevice("group", "wrongDevice"), probe.getRef());
probe.expectNoMsg();
}
@Test
public void testReplyWithEmptyReadingIfNoTemperatureIsKnown() {
TestKit probe = new TestKit(system);
@ -41,9 +63,7 @@ public class DeviceTest extends JUnitSuite {
assertEquals(42L, response.requestId);
assertEquals(Optional.empty(), response.value);
}
//#device-read-test
//#device-write-read-test
@Test
public void testReplyWithLatestTemperatureReading() {
TestKit probe = new TestKit(system);
@ -65,6 +85,5 @@ public class DeviceTest extends JUnitSuite {
assertEquals(4L, response2.requestId);
assertEquals(Optional.of(55.0), response2.value);
}
//#device-write-read-test
}


@ -1,7 +1,7 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.tutorial_5;
package jdocs.tutorial_6;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;


@ -1,7 +1,7 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package jdocs.tutorial_5;
package jdocs.tutorial_6;
//#iot-supervisor


@ -1,9 +1,12 @@
package tutorial_1
import akka.actor.{ Actor, Props }
import akka.testkit.AkkaSpec
// Prevent package clashes with the Java examples:
package docs.tutorial_1
//#print-refs
package com.lightbend.akka.sample
import akka.actor.{ Actor, Props, ActorSystem }
import scala.io.StdIn
class PrintMyActorRefActor extends Actor {
override def receive: Receive = {
case "printit" =>
@ -13,6 +16,8 @@ class PrintMyActorRefActor extends Actor {
}
//#print-refs
import akka.testkit.AkkaSpec
//#start-stop
class StartStopActor1 extends Actor {
override def preStart(): Unit = {
@ -62,30 +67,38 @@ class ActorHierarchyExperiments extends AkkaSpec {
// format: OFF
//#print-refs
val firstRef = system.actorOf(Props[PrintMyActorRefActor], "first-actor")
println(s"First : $firstRef")
firstRef ! "printit"
object ActorHierarchyExperiments extends App {
val system = ActorSystem()
val firstRef = system.actorOf(Props[PrintMyActorRefActor], "first-actor")
println(s"First: $firstRef")
firstRef ! "printit"
println(">>> Press ENTER to exit <<<")
try StdIn.readLine()
finally system.terminate()
}
//#print-refs
// format: ON
}
"start and stop actors" in {
// format: OFF
//#start-stop
//#start-stop-main
val first = system.actorOf(Props[StartStopActor1], "first")
first ! "stop"
//#start-stop
//#start-stop-main
// format: ON
}
"supervise actors" in {
// format: OFF
//#supervise
//#supervise-main
val supervisingActor = system.actorOf(Props[SupervisingActor], "supervising-actor")
supervisingActor ! "failChild"
//#supervise
//#supervise-main
// format: ON
}
}


@ -1,9 +1,11 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package tutorial_1
package tutorial_2
//#iot-app
package com.lightbend.akka.sample
import akka.actor.ActorSystem
import scala.io.StdIn


@ -1,9 +1,11 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package tutorial_1
package tutorial_2
//#iot-supervisor
package com.lightbend.akka.sample
import akka.actor.{ Actor, ActorLogging, Props }
object IotSupervisor {


@ -3,11 +3,9 @@
*/
package tutorial_3
//#full-device
import akka.actor.{ Actor, ActorLogging, Props }
import tutorial_3.Device.{ ReadTemperature, RecordTemperature, RespondTemperature, TemperatureRecorded }
import tutorial_3.DeviceManager.{ DeviceRegistered, RequestTrackDevice }
//#device-with-register
object Device {
def props(groupId: String, deviceId: String): Props = Props(new Device(groupId, deviceId))
@ -19,22 +17,13 @@ object Device {
}
class Device(groupId: String, deviceId: String) extends Actor with ActorLogging {
import Device._
var lastTemperatureReading: Option[Double] = None
override def preStart(): Unit = log.info("Device actor {}-{} started", groupId, deviceId)
override def postStop(): Unit = log.info("Device actor {}-{} stopped", groupId, deviceId)
override def receive: Receive = {
case RequestTrackDevice(`groupId`, `deviceId`) =>
sender() ! DeviceRegistered
case RequestTrackDevice(groupId, deviceId) =>
log.warning(
"Ignoring TrackDevice request for {}-{}.This actor is responsible for {}-{}.",
groupId, deviceId, this.groupId, this.deviceId
)
case RecordTemperature(id, value) =>
log.info("Recorded temperature reading {} with {}", value, id)
lastTemperatureReading = Some(value)
@ -44,4 +33,4 @@ class Device(groupId: String, deviceId: String) extends Actor with ActorLogging
sender() ! RespondTemperature(id, lastTemperatureReading)
}
}
//#device-with-register
//#full-device


@ -1,6 +1,4 @@
package tutorial_2
import tutorial_5.Device.{ ReadTemperature, RecordTemperature, RespondTemperature, TemperatureRecorded }
package tutorial_3
object DeviceInProgress1 {
@ -28,6 +26,8 @@ object DeviceInProgress2 {
}
class Device(groupId: String, deviceId: String) extends Actor with ActorLogging {
import Device._
var lastTemperatureReading: Option[Double] = None
override def preStart(): Unit = log.info("Device actor {}-{} started", groupId, deviceId)


@ -11,28 +11,7 @@ class DeviceSpec extends AkkaSpec {
"Device actor" must {
//#device-registration-tests
"reply to registration requests" in {
val probe = TestProbe()
val deviceActor = system.actorOf(Device.props("group", "device"))
deviceActor.tell(DeviceManager.RequestTrackDevice("group", "device"), probe.ref)
probe.expectMsg(DeviceManager.DeviceRegistered)
probe.lastSender should ===(deviceActor)
}
"ignore wrong registration requests" in {
val probe = TestProbe()
val deviceActor = system.actorOf(Device.props("group", "device"))
deviceActor.tell(DeviceManager.RequestTrackDevice("wrongGroup", "device"), probe.ref)
probe.expectNoMsg(500.milliseconds)
deviceActor.tell(DeviceManager.RequestTrackDevice("group", "Wrongdevice"), probe.ref)
probe.expectNoMsg(500.milliseconds)
}
//#device-registration-tests
//#device-read-test
"reply with empty reading if no temperature is known" in {
val probe = TestProbe()
val deviceActor = system.actorOf(Device.props("group", "device"))
@ -42,7 +21,9 @@ class DeviceSpec extends AkkaSpec {
response.requestId should ===(42)
response.value should ===(None)
}
//#device-read-test
//#device-write-read-test
"reply with latest temperature reading" in {
val probe = TestProbe()
val deviceActor = system.actorOf(Device.props("group", "device"))
@ -63,6 +44,7 @@ class DeviceSpec extends AkkaSpec {
response2.requestId should ===(4)
response2.value should ===(Some(55.0))
}
//#device-write-read-test
}


@ -4,11 +4,9 @@
package tutorial_4
import akka.actor.{ Actor, ActorLogging, Props }
import tutorial_4.Device.{ ReadTemperature, RecordTemperature, RespondTemperature, TemperatureRecorded }
import tutorial_4.DeviceManager.{ DeviceRegistered, RequestTrackDevice }
//#device-with-register
object Device {
def props(groupId: String, deviceId: String): Props = Props(new Device(groupId, deviceId))
final case class RecordTemperature(requestId: Long, value: Double)
@ -19,6 +17,8 @@ object Device {
}
class Device(groupId: String, deviceId: String) extends Actor with ActorLogging {
import Device._
var lastTemperatureReading: Option[Double] = None
override def preStart(): Unit = log.info("Device actor {}-{} started", groupId, deviceId)
@ -26,10 +26,10 @@ class Device(groupId: String, deviceId: String) extends Actor with ActorLogging
override def postStop(): Unit = log.info("Device actor {}-{} stopped", groupId, deviceId)
override def receive: Receive = {
case RequestTrackDevice(`groupId`, `deviceId`) =>
sender() ! DeviceRegistered
case DeviceManager.RequestTrackDevice(`groupId`, `deviceId`) =>
sender() ! DeviceManager.DeviceRegistered
case RequestTrackDevice(groupId, deviceId) =>
case DeviceManager.RequestTrackDevice(groupId, deviceId) =>
log.warning(
"Ignoring TrackDevice request for {}-{}.This actor is responsible for {}-{}.",
groupId, deviceId, this.groupId, this.deviceId
@ -44,3 +44,4 @@ class Device(groupId: String, deviceId: String) extends Actor with ActorLogging
sender() ! RespondTemperature(id, lastTemperatureReading)
}
}
//#device-with-register


@ -4,52 +4,49 @@
package tutorial_4
import akka.actor.{ Actor, ActorLogging, ActorRef, Props, Terminated }
import tutorial_4.DeviceGroup._
import tutorial_4.DeviceManager.RequestTrackDevice
import DeviceGroup._
import DeviceManager.RequestTrackDevice
import scala.concurrent.duration._
//#device-group-full
//#device-group-register
object DeviceGroup {
def props(groupId: String): Props = Props(new DeviceGroup(groupId))
//#device-group-register
final case class RequestDeviceList(requestId: Long)
final case class ReplyDeviceList(requestId: Long, ids: Set[String])
//#query-protocol
final case class RequestAllTemperatures(requestId: Long)
final case class RespondAllTemperatures(requestId: Long, temperatures: Map[String, TemperatureReading])
sealed trait TemperatureReading
final case class Temperature(value: Double) extends TemperatureReading
case object TemperatureNotAvailable extends TemperatureReading
case object DeviceNotAvailable extends TemperatureReading
case object DeviceTimedOut extends TemperatureReading
//#query-protocol
//#device-group-register
}
//#device-group-register
//#device-group-register
//#device-group-remove
//#query-added
class DeviceGroup(groupId: String) extends Actor with ActorLogging {
var deviceIdToActor = Map.empty[String, ActorRef]
//#device-group-register
var actorToDeviceId = Map.empty[ActorRef, String]
var nextCollectionId = 0L
//#device-group-register
override def preStart(): Unit = log.info("DeviceGroup {} started", groupId)
override def postStop(): Unit = log.info("DeviceGroup {} stopped", groupId)
override def receive: Receive = {
//#query-added
case trackMsg @ RequestTrackDevice(`groupId`, _) =>
deviceIdToActor.get(trackMsg.deviceId) match {
case Some(ref) =>
ref forward trackMsg
case Some(deviceActor) =>
deviceActor forward trackMsg
case None =>
log.info("Creating device actor for {}", trackMsg.deviceId)
val deviceActor = context.actorOf(Device.props(groupId, trackMsg.deviceId), "device-" + trackMsg.deviceId)
val deviceActor = context.actorOf(Device.props(groupId, trackMsg.deviceId), s"device-${trackMsg.deviceId}")
//#device-group-register
context.watch(deviceActor)
deviceActor forward trackMsg
deviceIdToActor += trackMsg.deviceId -> deviceActor
actorToDeviceId += deviceActor -> trackMsg.deviceId
//#device-group-register
deviceIdToActor += trackMsg.deviceId -> deviceActor
deviceActor forward trackMsg
}
case RequestTrackDevice(groupId, deviceId) =>
@ -57,9 +54,12 @@ class DeviceGroup(groupId: String) extends Actor with ActorLogging {
"Ignoring TrackDevice request for {}. This actor is responsible for {}.",
groupId, this.groupId
)
//#device-group-register
//#device-group-remove
case RequestDeviceList(requestId) =>
sender() ! ReplyDeviceList(requestId, deviceIdToActor.keySet)
//#device-group-remove
case Terminated(deviceActor) =>
val deviceId = actorToDeviceId(deviceActor)
@ -67,17 +67,9 @@ class DeviceGroup(groupId: String) extends Actor with ActorLogging {
actorToDeviceId -= deviceActor
deviceIdToActor -= deviceId
//#query-added
// ... other cases omitted
case RequestAllTemperatures(requestId) =>
context.actorOf(DeviceGroupQuery.props(
actorToDeviceId = actorToDeviceId,
requestId = requestId,
requester = sender(),
3.seconds
))
//#device-group-register
}
}
//#query-added
//#device-group-remove
//#device-group-register
//#device-group-full


@ -13,6 +13,7 @@ class DeviceGroupSpec extends AkkaSpec {
"DeviceGroup actor" must {
//#device-group-test-registration
"be able to register a device actor" in {
val probe = TestProbe()
val groupActor = system.actorOf(DeviceGroup.props("group"))
@ -40,7 +41,9 @@ class DeviceGroupSpec extends AkkaSpec {
groupActor.tell(DeviceManager.RequestTrackDevice("wrongGroup", "device1"), probe.ref)
probe.expectNoMsg(500.milliseconds)
}
//#device-group-test-registration
//#device-group-test3
"return same actor for same deviceId" in {
val probe = TestProbe()
val groupActor = system.actorOf(DeviceGroup.props("group"))
@ -55,7 +58,9 @@ class DeviceGroupSpec extends AkkaSpec {
deviceActor1 should ===(deviceActor2)
}
//#device-group-test3
//#device-group-list-terminate-test
"be able to list active devices" in {
val probe = TestProbe()
val groupActor = system.actorOf(DeviceGroup.props("group"))
@ -95,41 +100,7 @@ class DeviceGroupSpec extends AkkaSpec {
probe.expectMsg(DeviceGroup.ReplyDeviceList(requestId = 1, Set("device2")))
}
}
//#group-query-integration-test
"be able to collect temperatures from all active devices" in {
val probe = TestProbe()
val groupActor = system.actorOf(DeviceGroup.props("group"))
groupActor.tell(DeviceManager.RequestTrackDevice("group", "device1"), probe.ref)
probe.expectMsg(DeviceManager.DeviceRegistered)
val deviceActor1 = probe.lastSender
groupActor.tell(DeviceManager.RequestTrackDevice("group", "device2"), probe.ref)
probe.expectMsg(DeviceManager.DeviceRegistered)
val deviceActor2 = probe.lastSender
groupActor.tell(DeviceManager.RequestTrackDevice("group", "device3"), probe.ref)
probe.expectMsg(DeviceManager.DeviceRegistered)
val deviceActor3 = probe.lastSender
// Check that the device actors are working
deviceActor1.tell(Device.RecordTemperature(requestId = 0, 1.0), probe.ref)
probe.expectMsg(Device.TemperatureRecorded(requestId = 0))
deviceActor2.tell(Device.RecordTemperature(requestId = 1, 2.0), probe.ref)
probe.expectMsg(Device.TemperatureRecorded(requestId = 1))
// No temperature for device3
groupActor.tell(DeviceGroup.RequestAllTemperatures(requestId = 0), probe.ref)
probe.expectMsg(
DeviceGroup.RespondAllTemperatures(
requestId = 0,
temperatures = Map(
"device1" -> DeviceGroup.Temperature(1.0),
"device2" -> DeviceGroup.Temperature(2.0),
"device3" -> DeviceGroup.TemperatureNotAvailable)))
}
//#group-query-integration-test
//#device-group-list-terminate-test
}


@ -5,13 +5,16 @@
package tutorial_4
import akka.actor.{ Actor, ActorLogging, ActorRef, Props, Terminated }
import tutorial_4.DeviceManager.RequestTrackDevice
import DeviceManager.RequestTrackDevice
//#device-manager-full
object DeviceManager {
def props(): Props = Props(new DeviceManager)
//#device-manager-msgs
final case class RequestTrackDevice(groupId: String, deviceId: String)
case object DeviceRegistered
//#device-manager-msgs
}
class DeviceManager extends Actor with ActorLogging {
@ -45,3 +48,4 @@ class DeviceManager extends Actor with ActorLogging {
}
}
//#device-manager-full


@ -1,7 +1,7 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package tutorial_2
package tutorial_4
import akka.testkit.{ AkkaSpec, TestProbe }
@ -11,7 +11,28 @@ class DeviceSpec extends AkkaSpec {
"Device actor" must {
//#device-read-test
//#device-registration-tests
"reply to registration requests" in {
val probe = TestProbe()
val deviceActor = system.actorOf(Device.props("group", "device"))
deviceActor.tell(DeviceManager.RequestTrackDevice("group", "device"), probe.ref)
probe.expectMsg(DeviceManager.DeviceRegistered)
probe.lastSender should ===(deviceActor)
}
"ignore wrong registration requests" in {
val probe = TestProbe()
val deviceActor = system.actorOf(Device.props("group", "device"))
deviceActor.tell(DeviceManager.RequestTrackDevice("wrongGroup", "device"), probe.ref)
probe.expectNoMsg(500.milliseconds)
deviceActor.tell(DeviceManager.RequestTrackDevice("group", "Wrongdevice"), probe.ref)
probe.expectNoMsg(500.milliseconds)
}
//#device-registration-tests
"reply with empty reading if no temperature is known" in {
val probe = TestProbe()
val deviceActor = system.actorOf(Device.props("group", "device"))
@ -21,9 +42,7 @@ class DeviceSpec extends AkkaSpec {
response.requestId should ===(42)
response.value should ===(None)
}
//#device-read-test
//#device-write-read-test
"reply with latest temperature reading" in {
val probe = TestProbe()
val deviceActor = system.actorOf(Device.props("group", "device"))
@ -44,7 +63,6 @@ class DeviceSpec extends AkkaSpec {
response2.requestId should ===(4)
response2.value should ===(Some(55.0))
}
//#device-write-read-test
}


@ -4,8 +4,6 @@
package tutorial_5
import akka.actor.{ Actor, ActorLogging, Props }
import tutorial_5.Device.{ ReadTemperature, RecordTemperature, RespondTemperature, TemperatureRecorded }
import tutorial_5.DeviceManager.{ DeviceRegistered, RequestTrackDevice }
object Device {
@ -19,6 +17,8 @@ object Device {
}
class Device(groupId: String, deviceId: String) extends Actor with ActorLogging {
import Device._
var lastTemperatureReading: Option[Double] = None
override def preStart(): Unit = log.info("Device actor {}-{} started", groupId, deviceId)
@ -26,10 +26,10 @@ class Device(groupId: String, deviceId: String) extends Actor with ActorLogging
override def postStop(): Unit = log.info("Device actor {}-{} stopped", groupId, deviceId)
override def receive: Receive = {
case RequestTrackDevice(`groupId`, `deviceId`) =>
sender() ! DeviceRegistered
case DeviceManager.RequestTrackDevice(`groupId`, `deviceId`) =>
sender() ! DeviceManager.DeviceRegistered
case RequestTrackDevice(groupId, deviceId) =>
case DeviceManager.RequestTrackDevice(groupId, deviceId) =>
log.warning(
"Ignoring TrackDevice request for {}-{}.This actor is responsible for {}-{}.",
groupId, deviceId, this.groupId, this.deviceId


@ -4,17 +4,18 @@
package tutorial_5
import akka.actor.{ Actor, ActorLogging, ActorRef, Props, Terminated }
import tutorial_5.DeviceGroup._
import tutorial_5.DeviceManager.RequestTrackDevice
import DeviceGroup._
import DeviceManager.RequestTrackDevice
import scala.concurrent.duration._
object DeviceGroup {
def props(groupId: String): Props = Props(new DeviceGroup(groupId))
final case class RequestDeviceList(requestId: Long)
final case class ReplyDeviceList(requestId: Long, ids: Set[String])
//#query-protocol
final case class RequestAllTemperatures(requestId: Long)
final case class RespondAllTemperatures(requestId: Long, temperatures: Map[String, TemperatureReading])
@ -23,8 +24,10 @@ object DeviceGroup {
case object TemperatureNotAvailable extends TemperatureReading
case object DeviceNotAvailable extends TemperatureReading
case object DeviceTimedOut extends TemperatureReading
//#query-protocol
}
//#query-added
class DeviceGroup(groupId: String) extends Actor with ActorLogging {
var deviceIdToActor = Map.empty[String, ActorRef]
var actorToDeviceId = Map.empty[ActorRef, String]
@ -35,7 +38,7 @@ class DeviceGroup(groupId: String) extends Actor with ActorLogging {
override def postStop(): Unit = log.info("DeviceGroup {} stopped", groupId)
override def receive: Receive = {
// Note the backticks
//#query-added
case trackMsg @ RequestTrackDevice(`groupId`, _) =>
deviceIdToActor.get(trackMsg.deviceId) match {
case Some(ref) =>
@ -64,6 +67,9 @@ class DeviceGroup(groupId: String) extends Actor with ActorLogging {
actorToDeviceId -= deviceActor
deviceIdToActor -= deviceId
//#query-added
// ... other cases omitted
case RequestAllTemperatures(requestId) =>
context.actorOf(DeviceGroupQuery.props(
actorToDeviceId = actorToDeviceId,
@ -74,3 +80,4 @@ class DeviceGroup(groupId: String) extends Actor with ActorLogging {
}
}
//#query-added


@ -3,13 +3,13 @@
*/
package tutorial_5
import akka.actor.Actor.Receive
import akka.actor.{ Actor, ActorLogging, ActorRef, Props, Terminated }
import scala.concurrent.duration._
//#query-full
//#query-outline
object DeviceGroupQuery {
case object CollectionTimeout
def props(
@ -37,13 +37,14 @@ class DeviceGroupQuery(
context.watch(deviceActor)
deviceActor ! Device.ReadTemperature(0)
}
}
override def postStop(): Unit = {
queryTimeoutTimer.cancel()
}
//#query-outline
//#query-state
override def receive: Receive =
waitingForReplies(
Map.empty,
@ -63,9 +64,7 @@ class DeviceGroupQuery(
receivedResponse(deviceActor, reading, stillWaiting, repliesSoFar)
case Terminated(deviceActor) =>
if (stillWaiting.contains(deviceActor))
receivedResponse(deviceActor, DeviceGroup.DeviceNotAvailable, stillWaiting, repliesSoFar)
// else ignore
case CollectionTimeout =>
val timedOutReplies =
@ -76,13 +75,16 @@ class DeviceGroupQuery(
requester ! DeviceGroup.RespondAllTemperatures(requestId, repliesSoFar ++ timedOutReplies)
context.stop(self)
}
//#query-state
//#query-collect-reply
def receivedResponse(
deviceActor: ActorRef,
reading: DeviceGroup.TemperatureReading,
stillWaiting: Set[ActorRef],
repliesSoFar: Map[String, DeviceGroup.TemperatureReading]
): Unit = {
context.unwatch(deviceActor)
val deviceId = actorToDeviceId(deviceActor)
val newStillWaiting = stillWaiting - deviceActor
@ -94,5 +96,9 @@ class DeviceGroupQuery(
context.become(waitingForReplies(newRepliesSoFar, newStillWaiting))
}
}
//#query-collect-reply
//#query-outline
}
//#query-outline
//#query-full


@ -12,6 +12,7 @@ class DeviceGroupQuerySpec extends AkkaSpec {
"DeviceGroupQuery" must {
//#query-test-normal
"return temperature value for working devices" in {
val requester = TestProbe()
@ -39,7 +40,9 @@ class DeviceGroupQuerySpec extends AkkaSpec {
)
))
}
//#query-test-normal
//#query-test-no-reading
"return TemperatureNotAvailable for devices with no readings" in {
val requester = TestProbe()
@ -67,7 +70,9 @@ class DeviceGroupQuerySpec extends AkkaSpec {
)
))
}
//#query-test-no-reading
//#query-test-stopped
"return DeviceNotAvailable if device stops before answering" in {
val requester = TestProbe()
@ -95,7 +100,9 @@ class DeviceGroupQuerySpec extends AkkaSpec {
)
))
}
//#query-test-stopped
//#query-test-stopped-later
"return temperature reading even if device stops after answering" in {
val requester = TestProbe()
@ -124,7 +131,9 @@ class DeviceGroupQuerySpec extends AkkaSpec {
)
))
}
//#query-test-stopped-later
//#query-test-timeout
"return DeviceTimedOut if device does not answer in time" in {
val requester = TestProbe()
@ -151,6 +160,7 @@ class DeviceGroupQuerySpec extends AkkaSpec {
)
))
}
//#query-test-timeout
}


@ -96,6 +96,7 @@ class DeviceGroupSpec extends AkkaSpec {
}
}
//#group-query-integration-test
"be able to collect temperatures from all active devices" in {
val probe = TestProbe()
val groupActor = system.actorOf(DeviceGroup.props("group"))
@ -128,6 +129,7 @@ class DeviceGroupSpec extends AkkaSpec {
"device2" -> DeviceGroup.Temperature(2.0),
"device3" -> DeviceGroup.TemperatureNotAvailable)))
}
//#group-query-integration-test
}


@ -5,7 +5,7 @@
package tutorial_5
import akka.actor.{ Actor, ActorLogging, ActorRef, Props, Terminated }
import tutorial_5.DeviceManager.RequestTrackDevice
import DeviceManager.RequestTrackDevice
object DeviceManager {
def props(): Props = Props(new DeviceManager)


@ -1,12 +1,12 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package tutorial_2
package tutorial_6
//#full-device
import akka.actor.{ Actor, ActorLogging, Props }
object Device {
def props(groupId: String, deviceId: String): Props = Props(new Device(groupId, deviceId))
final case class RecordTemperature(requestId: Long, value: Double)
@ -18,12 +18,23 @@ object Device {
class Device(groupId: String, deviceId: String) extends Actor with ActorLogging {
import Device._
var lastTemperatureReading: Option[Double] = None
override def preStart(): Unit = log.info("Device actor {}-{} started", groupId, deviceId)
override def postStop(): Unit = log.info("Device actor {}-{} stopped", groupId, deviceId)
override def receive: Receive = {
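// the backticked groupId and deviceId match only requests addressed to this exact device;
// mismatched requests fall through to the warning case below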
case DeviceManager.RequestTrackDevice(`groupId`, `deviceId`) =>
sender() ! DeviceManager.DeviceRegistered
case DeviceManager.RequestTrackDevice(groupId, deviceId) =>
log.warning(
"Ignoring TrackDevice request for {}-{}.This actor is responsible for {}-{}.",
groupId, deviceId, this.groupId, this.deviceId
)
case RecordTemperature(id, value) =>
log.info("Recorded temperature reading {} with {}", value, id)
lastTemperatureReading = Some(value)
@ -33,4 +44,3 @@ class Device(groupId: String, deviceId: String) extends Actor with ActorLogging
sender() ! RespondTemperature(id, lastTemperatureReading)
}
}
//#full-device


@ -1,52 +1,52 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package tutorial_3
package tutorial_6
import akka.actor.{ Actor, ActorLogging, ActorRef, Props, Terminated }
import tutorial_3.DeviceGroup._
import tutorial_3.DeviceManager.RequestTrackDevice
import DeviceGroup._
import DeviceManager.RequestTrackDevice
import scala.concurrent.duration._
//#device-group-full
//#device-group-register
object DeviceGroup {
def props(groupId: String): Props = Props(new DeviceGroup(groupId))
//#device-group-register
final case class RequestDeviceList(requestId: Long)
final case class ReplyDeviceList(requestId: Long, ids: Set[String])
//#device-group-register
final case class RequestAllTemperatures(requestId: Long)
final case class RespondAllTemperatures(requestId: Long, temperatures: Map[String, TemperatureReading])
sealed trait TemperatureReading
final case class Temperature(value: Double) extends TemperatureReading
case object TemperatureNotAvailable extends TemperatureReading
case object DeviceNotAvailable extends TemperatureReading
case object DeviceTimedOut extends TemperatureReading
}
//#device-group-register
//#device-group-register
//#device-group-remove
class DeviceGroup(groupId: String) extends Actor with ActorLogging {
var deviceIdToActor = Map.empty[String, ActorRef]
//#device-group-register
var actorToDeviceId = Map.empty[ActorRef, String]
//#device-group-register
var nextCollectionId = 0L
override def preStart(): Unit = log.info("DeviceGroup {} started", groupId)
override def postStop(): Unit = log.info("DeviceGroup {} stopped", groupId)
override def receive: Receive = {
// Note the backticks: they match the groupId value this group actor was created with, not a new variable
case trackMsg @ RequestTrackDevice(`groupId`, _) =>
deviceIdToActor.get(trackMsg.deviceId) match {
case Some(deviceActor) =>
deviceActor forward trackMsg
case Some(ref) =>
ref forward trackMsg
case None =>
log.info("Creating device actor for {}", trackMsg.deviceId)
val deviceActor = context.actorOf(Device.props(groupId, trackMsg.deviceId), s"device-${trackMsg.deviceId}")
//#device-group-register
val deviceActor = context.actorOf(Device.props(groupId, trackMsg.deviceId), "device-" + trackMsg.deviceId)
context.watch(deviceActor)
actorToDeviceId += deviceActor -> trackMsg.deviceId
//#device-group-register
deviceIdToActor += trackMsg.deviceId -> deviceActor
deviceActor forward trackMsg
deviceIdToActor += trackMsg.deviceId -> deviceActor
actorToDeviceId += deviceActor -> trackMsg.deviceId
}
case RequestTrackDevice(groupId, deviceId) =>
@ -54,12 +54,9 @@ class DeviceGroup(groupId: String) extends Actor with ActorLogging {
"Ignoring TrackDevice request for {}. This actor is responsible for {}.",
groupId, this.groupId
)
//#device-group-register
//#device-group-remove
case RequestDeviceList(requestId) =>
sender() ! ReplyDeviceList(requestId, deviceIdToActor.keySet)
//#device-group-remove
case Terminated(deviceActor) =>
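// a watched device actor stopped: remove it from both lookup maps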
val deviceId = actorToDeviceId(deviceActor)
@ -67,9 +64,13 @@ class DeviceGroup(groupId: String) extends Actor with ActorLogging {
actorToDeviceId -= deviceActor
deviceIdToActor -= deviceId
//#device-group-register
case RequestAllTemperatures(requestId) =>
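// each query gets its own short-lived DeviceGroupQuery actor, created with a snapshot
// of the currently known devices and a 3 second collection timeout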
context.actorOf(DeviceGroupQuery.props(
actorToDeviceId = actorToDeviceId,
requestId = requestId,
requester = sender(),
3.seconds
))
}
}
//#device-group-remove
//#device-group-register
//#device-group-full


@ -1,15 +1,15 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package tutorial_4
package tutorial_6
import akka.actor.Actor.Receive
import akka.actor.{ Actor, ActorLogging, ActorRef, Props, Terminated }
import scala.concurrent.duration._
//#query-full
//#query-outline
object DeviceGroupQuery {
case object CollectionTimeout
def props(
@ -37,14 +37,13 @@ class DeviceGroupQuery(
context.watch(deviceActor)
deviceActor ! Device.ReadTemperature(0)
}
}
override def postStop(): Unit = {
queryTimeoutTimer.cancel()
}
//#query-outline
//#query-state
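// the query keeps no mutable fields: the replies collected so far and the devices still
// waiting are carried as parameters of waitingForReplies and swapped in via context.become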
override def receive: Receive =
waitingForReplies(
Map.empty,
@ -64,7 +63,9 @@ class DeviceGroupQuery(
receivedResponse(deviceActor, reading, stillWaiting, repliesSoFar)
case Terminated(deviceActor) =>
if (stillWaiting.contains(deviceActor))
receivedResponse(deviceActor, DeviceGroup.DeviceNotAvailable, stillWaiting, repliesSoFar)
// else ignore
case CollectionTimeout =>
val timedOutReplies =
@ -75,16 +76,13 @@ class DeviceGroupQuery(
requester ! DeviceGroup.RespondAllTemperatures(requestId, repliesSoFar ++ timedOutReplies)
context.stop(self)
}
//#query-state
//#query-collect-reply
def receivedResponse(
deviceActor: ActorRef,
reading: DeviceGroup.TemperatureReading,
stillWaiting: Set[ActorRef],
repliesSoFar: Map[String, DeviceGroup.TemperatureReading]
): Unit = {
context.unwatch(deviceActor)
val deviceId = actorToDeviceId(deviceActor)
val newStillWaiting = stillWaiting - deviceActor
@ -96,9 +94,5 @@ class DeviceGroupQuery(
context.become(waitingForReplies(newRepliesSoFar, newStillWaiting))
}
}
//#query-collect-reply
//#query-outline
}
//#query-outline
//#query-full


@ -1,7 +1,7 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package tutorial_4
package tutorial_6
import akka.actor.PoisonPill
import akka.testkit.{ AkkaSpec, TestProbe }
@ -12,7 +12,6 @@ class DeviceGroupQuerySpec extends AkkaSpec {
"DeviceGroupQuery" must {
//#query-test-normal
"return temperature value for working devices" in {
val requester = TestProbe()
@ -40,9 +39,7 @@ class DeviceGroupQuerySpec extends AkkaSpec {
)
))
}
//#query-test-normal
//#query-test-no-reading
"return TemperatureNotAvailable for devices with no readings" in {
val requester = TestProbe()
@ -70,9 +67,7 @@ class DeviceGroupQuerySpec extends AkkaSpec {
)
))
}
//#query-test-no-reading
//#query-test-stopped
"return DeviceNotAvailable if device stops before answering" in {
val requester = TestProbe()
@ -100,9 +95,7 @@ class DeviceGroupQuerySpec extends AkkaSpec {
)
))
}
//#query-test-stopped
//#query-test-stopped-later
"return temperature reading even if device stops after answering" in {
val requester = TestProbe()
@ -131,9 +124,7 @@ class DeviceGroupQuerySpec extends AkkaSpec {
)
))
}
//#query-test-stopped-later
//#query-test-timeout
"return DeviceTimedOut if device does not answer in time" in {
val requester = TestProbe()
@ -160,7 +151,6 @@ class DeviceGroupQuerySpec extends AkkaSpec {
)
))
}
//#query-test-timeout
}


@ -2,7 +2,7 @@
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package tutorial_3
package tutorial_6
import akka.actor.PoisonPill
import akka.testkit.{ AkkaSpec, TestProbe }
@ -13,7 +13,6 @@ class DeviceGroupSpec extends AkkaSpec {
"DeviceGroup actor" must {
//#device-group-test-registration
"be able to register a device actor" in {
val probe = TestProbe()
val groupActor = system.actorOf(DeviceGroup.props("group"))
@ -41,9 +40,7 @@ class DeviceGroupSpec extends AkkaSpec {
groupActor.tell(DeviceManager.RequestTrackDevice("wrongGroup", "device1"), probe.ref)
probe.expectNoMsg(500.milliseconds)
}
//#device-group-test-registration
//#device-group-test3
"return same actor for same deviceId" in {
val probe = TestProbe()
val groupActor = system.actorOf(DeviceGroup.props("group"))
@ -58,9 +55,7 @@ class DeviceGroupSpec extends AkkaSpec {
deviceActor1 should ===(deviceActor2)
}
//#device-group-test3
//#device-group-list-terminate-test
"be able to list active devices" in {
val probe = TestProbe()
val groupActor = system.actorOf(DeviceGroup.props("group"))
@ -100,7 +95,39 @@ class DeviceGroupSpec extends AkkaSpec {
probe.expectMsg(DeviceGroup.ReplyDeviceList(requestId = 1, Set("device2")))
}
}
//#device-group-list-terminate-test
"be able to collect temperatures from all active devices" in {
val probe = TestProbe()
val groupActor = system.actorOf(DeviceGroup.props("group"))
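// register three devices; after each DeviceRegistered reply, probe.lastSender is the device actor that was created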
groupActor.tell(DeviceManager.RequestTrackDevice("group", "device1"), probe.ref)
probe.expectMsg(DeviceManager.DeviceRegistered)
val deviceActor1 = probe.lastSender
groupActor.tell(DeviceManager.RequestTrackDevice("group", "device2"), probe.ref)
probe.expectMsg(DeviceManager.DeviceRegistered)
val deviceActor2 = probe.lastSender
groupActor.tell(DeviceManager.RequestTrackDevice("group", "device3"), probe.ref)
probe.expectMsg(DeviceManager.DeviceRegistered)
val deviceActor3 = probe.lastSender
// Check that the device actors are working
deviceActor1.tell(Device.RecordTemperature(requestId = 0, 1.0), probe.ref)
probe.expectMsg(Device.TemperatureRecorded(requestId = 0))
deviceActor2.tell(Device.RecordTemperature(requestId = 1, 2.0), probe.ref)
probe.expectMsg(Device.TemperatureRecorded(requestId = 1))
// No temperature for device3
groupActor.tell(DeviceGroup.RequestAllTemperatures(requestId = 0), probe.ref)
probe.expectMsg(
DeviceGroup.RespondAllTemperatures(
requestId = 0,
temperatures = Map(
"device1" -> DeviceGroup.Temperature(1.0),
"device2" -> DeviceGroup.Temperature(2.0),
"device3" -> DeviceGroup.TemperatureNotAvailable)))
}
}


@ -2,19 +2,16 @@
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package tutorial_3
package tutorial_6
import akka.actor.{ Actor, ActorLogging, ActorRef, Props, Terminated }
import tutorial_3.DeviceManager.RequestTrackDevice
import DeviceManager.RequestTrackDevice
//#device-manager-full
object DeviceManager {
def props(): Props = Props(new DeviceManager)
//#device-manager-msgs
final case class RequestTrackDevice(groupId: String, deviceId: String)
case object DeviceRegistered
//#device-manager-msgs
}
class DeviceManager extends Actor with ActorLogging {
@ -48,4 +45,3 @@ class DeviceManager extends Actor with ActorLogging {
}
}
//#device-manager-full


@ -1,7 +1,7 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package tutorial_5
package tutorial_6
import akka.testkit.{ AkkaSpec, TestProbe }


@ -1,10 +1,10 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package tutorial_5
package tutorial_6
import akka.actor.ActorSystem
import tutorial_5.DeviceManager.RequestTrackDevice
import DeviceManager.RequestTrackDevice
import scala.io.StdIn


@ -1,7 +1,7 @@
/**
* Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package tutorial_5
package tutorial_6
import akka.actor.{ Actor, ActorLogging, ActorRef, Props }