Docs: sort out links (#29027)

Enno authored on 2020-05-06 15:02:12 +02:00, committed by GitHub
parent f04559de66
commit d82c834a70
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
52 changed files with 107 additions and 107 deletions

View file

@ -16,7 +16,7 @@ To use Classic Actors, add the following dependency in your project:
## Introduction
The [Actor Model](http://en.wikipedia.org/wiki/Actor_model) provides a higher level of abstraction for writing concurrent
The [Actor Model](https://en.wikipedia.org/wiki/Actor_model) provides a higher level of abstraction for writing concurrent
and distributed systems. It alleviates the developer from having to deal with
explicit locking and thread management, making it easier to write correct
concurrent and parallel systems. Actors were defined in the 1973 paper by Carl
@ -294,7 +294,7 @@ singleton scope.
Techniques for dependency injection and integration with dependency injection frameworks
are described in more depth in the
[Using Akka with Dependency Injection](http://letitcrash.com/post/55958814293/akka-dependency-injection)
[Using Akka with Dependency Injection](https://letitcrash.com/post/55958814293/akka-dependency-injection)
guideline and the [Akka Java Spring](https://github.com/typesafehub/activator-akka-java-spring) tutorial.
## Actor API
@ -832,7 +832,7 @@ That has benefits such as:
The `Receive` can be implemented in other ways than using the `ReceiveBuilder` since, in the
end, it is just a wrapper around a Scala `PartialFunction`. In Java, you can implement `PartialFunction` by
extending `AbstractPartialFunction`. For example, one could implement an adapter
to [Vavr Pattern Matching DSL](http://www.vavr.io/vavr-docs/#_pattern_matching). See the [Akka Vavr sample project](https://github.com/akka/akka-samples/tree/2.5/akka-sample-vavr) for more details.
to [Vavr Pattern Matching DSL](https://www.vavr.io/vavr-docs/#_pattern_matching). See the [Akka Vavr sample project](https://github.com/akka/akka-samples/tree/2.5/akka-sample-vavr) for more details.
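In Scala the same flexibility is visible directly, because the classic `Receive` type is just an alias for `PartialFunction[Any, Unit]`. A minimal sketch (the actor and messages below are made up for illustration, not taken from the docs samples):

```scala
import akka.actor.Actor

// Classic `Receive` is a PartialFunction[Any, Unit], so it can be composed
// from plain partial functions instead of being built with a ReceiveBuilder.
class PingActor extends Actor {
  private val pings: Receive = { case "ping" => sender() ! "pong" }
  private val everythingElse: Receive = { case other => unhandled(other) }

  def receive: Receive = pings.orElse(everythingElse)
}
```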
If the validation of the `ReceiveBuilder` match logic turns out to be a bottleneck for some of your
actors you can consider implementing it at a lower level by extending `UntypedAbstractActor` instead

View file

@ -4,16 +4,16 @@
### Recommended reads
* [Reactive Design Patterns](https://www.reactivedesignpatterns.com/), by Roland Kuhn with Jamie Allen and Brian Hanafee, Manning Publications Co., ISBN 9781617291807, Feb 2017
* [Akka in Action](http://www.lightbend.com/resources/e-book/akka-in-action), by Raymond Roestenburg and Rob Bakker, Manning Publications Co., ISBN: 9781617291012, September 2016
* [Akka in Action](https://www.lightbend.com/resources/e-book/akka-in-action), by Raymond Roestenburg and Rob Bakker, Manning Publications Co., ISBN: 9781617291012, September 2016
### Other reads about Akka and the Actor model
* [Akka Cookbook](https://www.packtpub.com/application-development/akka-cookbook), by Héctor Veiga Ortiz & Piyush Mishra, PACKT Publishing, ISBN: 9781785288180, May 2017
* [Mastering Akka](https://www.packtpub.com/application-development/mastering-akka), by Christian Baxter, PACKT Publishing, ISBN: 9781786465023, October 2016
* [Learning Akka](https://www.packtpub.com/application-development/learning-akka), by Jason Goodwin, PACKT Publishing, ISBN: 9781784393007, December 2015
* [Reactive Messaging Patterns with the Actor Model](http://www.informit.com/store/reactive-messaging-patterns-with-the-actor-model-applications-9780133846836), by Vaughn Vernon, Addison-Wesley Professional, ISBN: 0133846830, August 2015
* [Developing an Akka Edge](http://bleedingedgepress.com/our-books/developing-an-akka-edge/), by Thomas Lockney and Raymond Tay, Bleeding Edge Press, ISBN: 9781939902054, April 2014
* [Effective Akka](http://shop.oreilly.com/product/0636920028789.do), by Jamie Allen, O'Reilly Media, ISBN: 1449360076, August 2013
* [Akka Concurrency](http://www.artima.com/shop/akka_concurrency), by Derek Wyatt, artima developer, ISBN: 0981531660, May 2013
* [Reactive Messaging Patterns with the Actor Model](https://www.informit.com/store/reactive-messaging-patterns-with-the-actor-model-applications-9780133846836), by Vaughn Vernon, Addison-Wesley Professional, ISBN: 0133846830, August 2015
* [Developing an Akka Edge](https://bleedingedgepress.com/developing-an-akka-edge/), by Thomas Lockney and Raymond Tay, Bleeding Edge Press, ISBN: 9781939902054, April 2014
* [Effective Akka](https://shop.oreilly.com/product/0636920028789.do), by Jamie Allen, O'Reilly Media, ISBN: 1449360076, August 2013
* [Akka Concurrency](https://www.artima.com/shop/akka_concurrency), by Derek Wyatt, artima developer, ISBN: 0981531660, May 2013
* [Akka Essentials](https://www.packtpub.com/application-development/akka-essentials), by Munish K. Gupta, PACKT Publishing, ISBN: 1849518289, October 2012
* [Start Building RESTful Microservices using Akka HTTP with Scala](https://www.amazon.com/dp/1976762545/), by Ayush Kumar Mishra, Knoldus Software LLP, ISBN: 9781976762543, December 2017
@ -23,3 +23,7 @@
* [Zen of Akka](https://www.youtube.com/watch?v=vgFoKOxrTzg) - an overview of good and bad practices in Akka, by Konrad Malawski, ScalaDays New York, June 2016
* [Learning Akka Videos](https://www.packtpub.com/application-development/learning-akka-video), by Salma Khater, PACKT Publishing, ISBN: 9781784391836, January 2016
* [Building Microservice with AKKA HTTP (Video)](https://www.packtpub.com/application-development/building-microservice-akka-http-video), by Tomasz Lelek, PACKT Publishing, ISBN: 9781788298582, March 2017
## Blogs
A list of [blogs and presentations](https://akka.io/blog/external-archive.html) curated by the Akka team.

View file

@ -4,7 +4,7 @@
### Where does the name Akka come from?
It is the name of a beautiful Swedish [mountain](https://lh4.googleusercontent.com/-z28mTALX90E/UCOsd249TdI/AAAAAAAAAB0/zGyNNZla-zY/w442-h331/akka-beautiful-panorama.jpg)
It is the name of a beautiful Swedish [mountain](https://en.wikipedia.org/wiki/%C3%81hkk%C3%A1)
up in the northern part of Sweden called Laponia. The mountain is also sometimes
called 'The Queen of Laponia'.
@ -16,9 +16,9 @@ Also, the name AKKA is a palindrome of the letters A and K as in Actor Kernel.
Akka is also:
* the name of the goose that Nils traveled across Sweden on in [The Wonderful Adventures of Nils](http://en.wikipedia.org/wiki/The_Wonderful_Adventures_of_Nils) by the Swedish writer Selma Lagerlöf.
* the name of the goose that Nils traveled across Sweden on in [The Wonderful Adventures of Nils](https://en.wikipedia.org/wiki/The_Wonderful_Adventures_of_Nils) by the Swedish writer Selma Lagerlöf.
* the Finnish word for 'nasty elderly woman' and the word for 'elder sister' in the Indian languages Tamil, Telugu, Kannada and Marathi.
* a [font](http://www.dafont.com/akka.font)
* a [font](https://www.dafont.com/akka.font)
* a town in Morocco
* a near-earth asteroid

View file

@ -14,7 +14,7 @@ When starting clusters on cloud systems such as Kubernetes, AWS, Google Cloud, A
you may want to automate the discovery of nodes for the cluster joining process, using your cloud providers,
cluster orchestrator, or other form of service discovery (such as managed DNS).
The open source Akka Management library includes the [Cluster Bootstrap](https://doc.akka.io/docs/akka-management/current/bootstrap/index.html)
The open source Akka Management library includes the @extref:[Cluster Bootstrap](akka-management:bootstrap/index.html)
module which handles just that. Please refer to its documentation for more details.
@@@ note
@ -32,13 +32,13 @@ See @ref:[Rolling Updates, Cluster Shutdown and Coordinated Shutdown](../additio
There are several management tools for the cluster.
Complete information on running and managing Akka applications can be found in
the [Akka Management](https://doc.akka.io/docs/akka-management/current/) project documentation.
the @extref:[Akka Management](akka-management:) project documentation.
<a id="cluster-http"></a>
### HTTP
Information and management of the cluster are available via an HTTP API.
See documentation of [Akka Management](http://developer.lightbend.com/docs/akka-management/current/).
See documentation of @extref:[Akka Management](akka-management:).
<a id="cluster-jmx"></a>
### JMX
@ -60,6 +60,6 @@ Member nodes are identified by their address, in format *`akka://actor-system-na
## Monitoring and Observability
Aside from log monitoring and the monitoring provided by your APM or platform provider, [Lightbend Telemetry](https://developer.lightbend.com/docs/telemetry/current/instrumentations/akka/akka.html),
available through a [Lightbend Platform Subscription](https://www.lightbend.com/lightbend-platform-subscription),
available through a [Lightbend Subscription](https://www.lightbend.com/lightbend-subscription),
can provide additional insights into the run-time characteristics of your application, including metrics, events,
and distributed tracing for Akka Actors, Cluster, HTTP, and more.

View file

@ -12,7 +12,7 @@ To use Akka in OSGi, you must add the following dependency in your project:
## Background
[OSGi](http://www.osgi.org/developer) is a mature packaging and deployment standard for component-based systems. It
[OSGi](https://www.osgi.org/developer/where-to-start/) is a mature packaging and deployment standard for component-based systems. It
has capabilities similar to Project Jigsaw (originally scheduled for JDK 1.8), but has far stronger facilities to
support legacy Java code. This is to say that while Jigsaw-ready modules require significant changes to most source files
and on occasion to the structure of the overall application, OSGi can be used to modularize almost any Java code as far

View file

@ -33,12 +33,12 @@ Add [sbt-native-packager](https://github.com/sbt/sbt-native-packager) in `projec
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.1.5")
```
Follow the instructions for the `JavaAppPackaging` in the [sbt-native-packager plugin documentation](http://sbt-native-packager.readthedocs.io/en/latest/archetypes/java_app/index.html).
Follow the instructions for the `JavaAppPackaging` in the [sbt-native-packager plugin documentation](https://sbt-native-packager.readthedocs.io/en/latest/archetypes/java_app/index.html).
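For reference, enabling the archetype in `build.sbt` is typically a one-liner once the plugin is on the build classpath (a minimal sketch; exact settings depend on your project):

```scala
// build.sbt — minimal sketch: enable the JavaAppPackaging archetype
// provided by the sbt-native-packager plugin added in project/plugins.sbt.
enablePlugins(JavaAppPackaging)
```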
## Maven: jarjar, onejar or assembly
You can use the [Apache Maven Shade Plugin](http://maven.apache.org/plugins/maven-shade-plugin)
support for [Resource Transformers](http://maven.apache.org/plugins/maven-shade-plugin/examples/resource-transformers.html#AppendingTransformer)
You can use the [Apache Maven Shade Plugin](https://maven.apache.org/plugins/maven-shade-plugin/)
support for [Resource Transformers](https://maven.apache.org/plugins/maven-shade-plugin/examples/resource-transformers.html#AppendingTransformer)
to merge all the reference.confs on the build classpath into one.
The plugin configuration might look like this:

View file

@ -231,7 +231,7 @@ contacts can be fetched and a new cluster client started.
## Migration to Akka gRPC
Cluster Client is deprecated and it is not advised to build new applications with it.
As a replacement we recommend using [Akka gRPC](https://doc.akka.io/docs/akka-grpc/current/index.html)
As a replacement we recommend using [Akka gRPC](https://doc.akka.io/docs/akka-grpc/current/)
with an application-specific protocol. The benefits of this approach are:
* Improved security by using TLS for gRPC (HTTP/2) versus exposing Akka Remoting outside the Akka Cluster
@ -244,7 +244,7 @@ with an application-specific protocol. The benefits of this approach are:
### Migrating directly
Existing users of Cluster Client may migrate directly to Akka gRPC and use it
as documented in [its documentation](https://doc.akka.io/docs/akka-grpc/current).
as documented in [its documentation](https://doc.akka.io/docs/akka-grpc/current/).
### Migrating gradually

View file

@ -112,7 +112,7 @@ To enable usage of Sigar you can add the following dependency to the user projec
version="$sigar_loader.version$"
}
You can download Kamon sigar-loader from [Maven Central](http://search.maven.org/#search%7Cga%7C1%7Csigar-loader)
You can download Kamon sigar-loader from [Maven Central](https://search.maven.org/search?q=sigar-loader)
## Adaptive Load Balancing
@ -126,7 +126,7 @@ It can be configured to use a specific MetricsSelector to produce the probabilit
* `mix` / `MixMetricsSelector` - Combines heap, cpu and load. Weights based on mean of remaining capacity of the combined selectors.
* Any custom implementation of `akka.cluster.metrics.MetricsSelector`
The collected metrics values are smoothed with [exponential weighted moving average](http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average). In the @ref:[Cluster configuration](cluster-usage.md#cluster-configuration) you can adjust how quickly past data is decayed compared to new data.
The collected metrics values are smoothed with [exponential weighted moving average](https://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average). In the @ref:[Cluster configuration](cluster-usage.md#cluster-configuration) you can adjust how quickly past data is decayed compared to new data.
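For orientation, the smoothing follows the usual exponentially weighted recurrence (generic notation, not Akka-specific; the configured decay corresponds to the choice of the smoothing factor):

```latex
% Generic EWMA recurrence: a larger \alpha decays past data more quickly.
\hat{x}_t = \alpha \, x_t + (1 - \alpha) \, \hat{x}_{t-1}, \qquad 0 < \alpha \le 1
```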
Let's take a look at this router in action. What can be more demanding than calculating factorials?

View file

@ -107,7 +107,7 @@ There are two actors that could potentially be supervised. For the `consumer` si
The Cluster singleton manager actor should not have its supervision strategy changed as it should always be running.
However it is sometimes useful to add supervision for the user actor.
To accomplish this add a parent supervisor actor which will be used to create the 'real' singleton instance.
Below is an example implementation (credit to [this StackOverflow answer](https://stackoverflow.com/a/36716708/779513))
Below is an example implementation (credit to [this StackOverflow answer](https://stackoverflow.com/questions/36701898/how-to-supervise-cluster-singleton-in-akka/36716708#36716708))
Scala
: @@snip [ClusterSingletonSupervision.scala](/akka-docs/src/test/scala/docs/cluster/singleton/ClusterSingletonSupervision.scala) { #singleton-supervisor-actor }

View file

@ -414,7 +414,7 @@ Examples: ./akka-cluster localhost 9999 is-available
```
To be able to use the script you must enable remote monitoring and management when starting the JVMs of the cluster nodes,
as described in [Monitoring and Management Using JMX Technology](http://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html).
as described in [Monitoring and Management Using JMX Technology](https://docs.oracle.com/javase/8/docs/technotes/guides/management/agent.html).
Make sure you understand the security implications of enabling remote monitoring and management.
<a id="cluster-configuration"></a>

View file

@ -8,7 +8,7 @@ A full server- and client-side HTTP stack on top of akka-actor and akka-stream.
Alpakka is a Reactive Enterprise Integration library for Java and Scala, based on Reactive Streams and Akka.
## [Alpakka Kafka Connector](https://doc.akka.io/docs/akka-stream-kafka/current/)
## [Alpakka Kafka Connector](https://doc.akka.io/docs/alpakka-kafka/current/)
The Alpakka Kafka Connector connects Apache Kafka with Akka Streams.

View file

@ -268,11 +268,9 @@ For the full documentation of this feature and for new projects see @ref:[Limita
## Learn More about CRDTs
* [Eventually Consistent Data Structures](https://vimeo.com/43903960)
talk by Sean Cribbs
* [Strong Eventual Consistency and Conflict-free Replicated Data Types (video)](https://www.youtube.com/watch?v=oyUHd894w18&amp;feature=youtu.be)
talk by Mark Shapiro
* [A comprehensive study of Convergent and Commutative Replicated Data Types](http://hal.upmc.fr/file/index/docid/555588/filename/techreport.pdf)
* [A comprehensive study of Convergent and Commutative Replicated Data Types](https://hal.inria.fr/file/index/docid/555588/filename/techreport.pdf)
paper by Mark Shapiro et al.
## Configuration

View file

@ -16,7 +16,7 @@ To use Finite State Machine actors, you must add the following dependency in you
## Overview
The FSM (Finite State Machine) is available as @scala[a mixin for the] @java[an abstract base class that implements an] Akka Actor and
is best described in the [Erlang design principles](http://www.erlang.org/documentation/doc-4.8.2/doc/design_principles/fsm.html)
is best described in the [Erlang design principles](https://www.erlang.org/documentation/doc-4.8.2/doc/design_principles/fsm.html)
An FSM can be described as a set of relations of the form:

View file

@ -9,7 +9,7 @@ section looks at one such actor in isolation, explaining the concepts you
encounter while implementing it. For a more in depth reference with all the
details please refer to @ref:[Introduction to Actors](../typed/actors.md).
The [Actor Model](http://en.wikipedia.org/wiki/Actor_model) as defined by
The [Actor Model](https://en.wikipedia.org/wiki/Actor_model) as defined by
Hewitt, Bishop and Steiger in 1973 is a computational model that expresses
exactly what it means for computation to be distributed. The processing
units—Actors—can only communicate by exchanging messages and upon reception of a

View file

@ -87,7 +87,7 @@ mailbox would interact with the third point, or even what it would mean to
decide upon the “successfully” part of point five.
Along those same lines goes the reasoning in [Nobody Needs Reliable
Messaging](http://www.infoq.com/articles/no-reliable-messaging). The only meaningful way for a sender to know whether an
Messaging](https://www.infoq.com/articles/no-reliable-messaging/). The only meaningful way for a sender to know whether an
interaction was successful is by receiving a business-level acknowledgement
message, which is not something Akka could make up on its own (neither are we
writing a “do what I mean” framework nor would you want us to).
@ -96,7 +96,7 @@ Akka embraces distributed computing and makes the fallibility of communication
explicit through message passing, therefore it does not try to lie and emulate
a leaky abstraction. This is a model that has been used with great success in
Erlang and requires the users to design their applications around it. You can
read more about this approach in the [Erlang documentation](http://www.erlang.org/faq/academic.html) (section 10.9 and
read more about this approach in the [Erlang documentation](https://erlang.org/faq/academic.html) (section 10.9 and
10.10); Akka follows it closely.
Another angle on this issue is that by providing only basic guarantees those

View file

@ -104,7 +104,7 @@ A source that emits a stream of streams is still a normal Source, the kind of el
## The difference between Error and Failure
The starting point for this discussion is the [definition given by the Reactive Manifesto](http://www.reactivemanifesto.org/glossary#Failure). Translated to streams this means that an error is accessible within the stream as a normal data element, while a failure means that the stream itself has failed and is collapsing. In concrete terms, on the Reactive Streams interface level data elements (including errors) are signaled via `onNext` while failures raise the `onError` signal.
The starting point for this discussion is the [definition given by the Reactive Manifesto](https://www.reactivemanifesto.org/glossary#Failure). Translated to streams this means that an error is accessible within the stream as a normal data element, while a failure means that the stream itself has failed and is collapsing. In concrete terms, on the Reactive Streams interface level data elements (including errors) are signaled via `onNext` while failures raise the `onError` signal.
@@@ note

View file

@ -93,7 +93,7 @@ To maintain isolation, actors should communicate with immutable objects only. `B
immutable container for bytes. It is used by Akka's I/O system as an efficient, immutable alternative
to the traditional byte containers used for I/O on the JVM, such as @scala[`Array[Byte]`]@java[`byte[]`] and `ByteBuffer`.
`ByteString` is a [rope-like](http://en.wikipedia.org/wiki/Rope_\(computer_science\)) data structure that is immutable
`ByteString` is a [rope-like](https://en.wikipedia.org/wiki/Rope_\(computer_science\)) data structure that is immutable
and provides fast concatenation and slicing operations (perfect for I/O). When two `ByteString`s are concatenated
together they are both stored within the resulting `ByteString` instead of copying both to a new @scala[`Array`]@java[array]. Operations
such as `drop` and `take` return `ByteString`s that still reference the original @scala[`Array`]@java[array], but just change the

View file

@ -97,7 +97,7 @@ Don't run snapshot store tasks/futures on the system default dispatcher, since t
## Plugin TCK
In order to help developers build correct and high quality storage plugins, we provide a Technology Compatibility Kit ([TCK](http://en.wikipedia.org/wiki/Technology_Compatibility_Kit) for short).
In order to help developers build correct and high quality storage plugins, we provide a Technology Compatibility Kit ([TCK](https://en.wikipedia.org/wiki/Technology_Compatibility_Kit) for short).
The TCK is usable from Java as well as Scala projects. To test your implementation (independently of language) you need to include the akka-persistence-tck dependency:

View file

@ -197,7 +197,7 @@ Java
## Performance and denormalization
When building systems using @ref:[Event sourcing](typed/persistence.md#event-sourcing-concepts) and CQRS ([Command & Query Responsibility Segregation](https://msdn.microsoft.com/en-us/library/jj554200.aspx)) techniques
When building systems using @ref:[Event sourcing](typed/persistence.md#event-sourcing-concepts) and CQRS ([Command & Query Responsibility Segregation](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/jj554200(v=pandp.10)?redirectedfrom=MSDN)) techniques
it is tremendously important to realise that the write-side has completely different needs from the read-side,
and separating those concerns into datastores that are optimised for either side makes it possible to offer the best
experience for the write and read sides independently.
@ -221,7 +221,7 @@ it may be more efficient or interesting to query it (instead of the source event
### Materialize view to Reactive Streams compatible datastore
If the read datastore exposes a [Reactive Streams](http://reactive-streams.org) interface then implementing a simple projection
If the read datastore exposes a [Reactive Streams](https://www.reactive-streams.org) interface then implementing a simple projection
is as simple as using the read journal and feeding it into the database's driver interface, for example like so:
Scala

View file

@ -5,8 +5,8 @@ of how to run.
## Quickstart
@scala[[Quickstart Guide](https://developer.lightbend.com/guides/akka-quickstart-scala)]
@java[[Quickstart Guide](https://developer.lightbend.com/guides/akka-quickstart-java)]
@scala[[Quickstart Guide](https://developer.lightbend.com/guides/akka-quickstart-scala/)]
@java[[Quickstart Guide](https://developer.lightbend.com/guides/akka-quickstart-java/)]
The *Quickstart* guide walks you through example code that introduces how to define actor systems, actors, and
messages as well as how to use the test module and logging.

View file

@ -83,5 +83,5 @@ getter, toString, hashCode, equals.
### Integrating Lombok with an IDE
Lombok integrates with popular IDEs:
* To use Lombok in IntelliJ IDEA you'll need the [Lombok Plugin for IntelliJ IDEA](https://plugins.jetbrains.com/idea/plugin/6317-lombok-plugin) and you'll also need to enable Annotation Processing (`Settings / Build,Execution,Deployment / Compiler / Annotation Processors` and tick `Enable annotation processing`)
* To use Lombok in IntelliJ IDEA you'll need the [Lombok Plugin for IntelliJ IDEA](https://plugins.jetbrains.com/plugin/6317-lombok) and you'll also need to enable Annotation Processing (`Settings / Build,Execution,Deployment / Compiler / Annotation Processors` and tick `Enable annotation processing`)
* To use Lombok in Eclipse, run `java -jar lombok.jar` (see the video at [Project Lombok](https://projectlombok.org/)).

View file

@ -22,8 +22,8 @@ the License.
## Akka Committer License Agreement
All committers have signed this [CLA](http://www.lightbend.com/contribute/current-cla).
It can be [signed online](http://www.lightbend.com/contribute/cla).
All committers have signed this [CLA](https://www.lightbend.com/contribute/current-cla).
It can be [signed online](https://www.lightbend.com/contribute/cla).
## Licenses for Dependency Libraries

View file

@ -2,18 +2,18 @@
## Commercial Support
Commercial support is provided by [Lightbend](http://www.lightbend.com).
Akka is part of the [Lightbend Platform](http://www.lightbend.com/platform).
Commercial support is provided by [Lightbend](https://www.lightbend.com).
Akka is part of the [Akka Platform](https://www.lightbend.com/akka-platform).
## Sponsors
**Lightbend** is the company behind the Akka Project, Scala Programming Language,
Play Web Framework, Lagom, sbt and many other open source projects.
It also provides the Lightbend Reactive Platform, which is powered by an open source core and commercial Enterprise Suite for building scalable Reactive systems on the JVM. Learn more at [lightbend.com](http://www.lightbend.com).
It also provides the Lightbend Reactive Platform, which is powered by an open source core and commercial Enterprise Suite for building scalable Reactive systems on the JVM. Learn more at [lightbend.com](https://www.lightbend.com).
## Akka Discuss Forums
[Akka Discuss Forums](http://discuss.akka.io)
[Akka Discuss Forums](https://discuss.akka.io)
## Gitter
@ -28,7 +28,7 @@ Akka uses Git and is hosted at [Github akka/akka](https://github.com/akka/akka).
## Releases Repository
All Akka releases are published via Sonatype to Maven Central, see
[search.maven.org](http://search.maven.org/#search%7Cga%7C1%7Cg%3A%22com.typesafe.akka%22)
[search.maven.org](https://search.maven.org/search?q=g:com.typesafe.akka)
## Snapshots Repository
@ -36,7 +36,7 @@ Nightly builds are available in [https://repo.akka.io/snapshots](https://repo.ak
timestamped versions.
For timestamped versions, pick a timestamp from
[https://repo.akka.io/snapshots/com/typesafe/akka](https://repo.akka.io/snapshots/com/typesafe/akka).
[https://repo.akka.io/snapshots/com/typesafe/akka/](https://repo.akka.io/snapshots/com/typesafe/akka/).
All Akka modules that belong to the same build have the same timestamp.
@@@ warning

View file

@ -280,7 +280,7 @@ Explicitly disable Artery by setting property `akka.remote.artery.enabled` to `f
specific to classic remoting needs to be moved to `akka.remote.classic`. To see which configuration options
are specific to classic search for them in: @ref:[`akka-remote/reference.conf`](../general/configuration-reference.md#config-akka-remote).
If you have a [Lightbend Platform Subscription](https://www.lightbend.com/lightbend-platform-subscription) you can use our [Config Checker](https://doc.akka.io/docs/akka-enhancements/current/config-checker.html) enhancement to flag any settings that have not been properly migrated.
If you have a [Lightbend Subscription](https://www.lightbend.com/lightbend-subscription) you can use our [Config Checker](https://doc.akka.io/docs/akka-enhancements/current/config-checker.html) enhancement to flag any settings that have not been properly migrated.
### Persistent mode for Cluster Sharding

View file

@ -3,7 +3,7 @@
Migration from old versions:
* [2.3.x to 2.4.x](https://doc.akka.io/docs/akka/2.4/project/migration-guide-2.3.x-2.4.x.html)
* [2.2.x to 2.3.x](https://doc.akka.io/docs/akka/2.3.12/project/migration-guide-2.2.x-2.3.x.html)
* [2.1.x to 2.2.x](https://doc.akka.io/docs/akka/2.2.3/project/migration-guide-2.1.x-2.2.x.html)
* [2.0.x to 2.1.x](https://doc.akka.io/docs/akka/2.1.4/project/migration-guide-2.0.x-2.1.x.html)
* [2.2.x to 2.3.x](https://doc.akka.io/docs/akka/2.3/project/migration-guide-2.2.x-2.3.x.html)
* [2.1.x to 2.2.x](https://doc.akka.io/docs/akka/2.2/project/migration-guide-2.1.x-2.2.x.html)
* [2.0.x to 2.1.x](https://doc.akka.io/docs/akka/2.1/project/migration-guide-2.0.x-2.1.x.html)
* [1.3.x to 2.0.x](https://doc.akka.io/docs/akka/2.0.5/project/migration-guide-1.3.x-2.0.x.html).

View file

@ -311,13 +311,13 @@ According to [RFC 7525](https://tools.ietf.org/html/rfc7525) the recommended alg
You should always check the latest information about security and algorithm recommendations though before you configure your system.
Creating and working with keystores and certificates is well documented in the
[Generating X.509 Certificates](http://lightbend.github.io/ssl-config/CertificateGeneration.html#using-keytool)
[Generating X.509 Certificates](https://lightbend.github.io/ssl-config/CertificateGeneration.html#using-keytool)
section of Lightbend's SSL-Config library.
Since Akka remoting is inherently @ref:[peer-to-peer](general/remoting.md#symmetric-communication), both the key-store and the trust-store
need to be configured on each remoting node participating in the cluster.
The official [Java Secure Socket Extension documentation](http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html)
The official [Java Secure Socket Extension documentation](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html)
as well as the [Oracle documentation on creating KeyStore and TrustStores](https://docs.oracle.com/cd/E19509-01/820-3503/6nf1il6er/index.html)
are both great resources to research when setting up security on the JVM. Please consult those resources when troubleshooting
and configuring SSL.
@ -717,7 +717,7 @@ The needed classpath:
Agrona-0.5.4.jar:aeron-driver-1.0.1.jar:aeron-client-1.0.1.jar
```
You find those jar files on [Maven Central](http://search.maven.org/), or you can create a
You find those jar files on [Maven Central](https://search.maven.org/), or you can create a
package with your preferred build tool.
You can pass [Aeron properties](https://github.com/real-logic/Aeron/wiki/Configuration-Options) as

View file

@ -486,13 +486,13 @@ According to [RFC 7525](https://tools.ietf.org/html/rfc7525) the recommended alg
You should always check the latest information about security and algorithm recommendations though before you configure your system.
Creating and working with keystores and certificates is well documented in the
[Generating X.509 Certificates](http://lightbend.github.io/ssl-config/CertificateGeneration.html#using-keytool)
[Generating X.509 Certificates](https://lightbend.github.io/ssl-config/CertificateGeneration.html#using-keytool)
section of Lightbend's SSL-Config library.
Since Akka remoting is inherently @ref:[peer-to-peer](general/remoting.md#symmetric-communication), both the key-store and the trust-store
need to be configured on each remoting node participating in the cluster.
The official [Java Secure Socket Extension documentation](http://docs.oracle.com/javase/7/docs/technotes/guides/security/jsse/JSSERefGuide.html)
The official [Java Secure Socket Extension documentation](https://docs.oracle.com/javase/8/docs/technotes/guides/security/jsse/JSSERefGuide.html)
as well as the [Oracle documentation on creating KeyStore and TrustStores](https://docs.oracle.com/cd/E19509-01/820-3503/6nf1il6er/index.html)
are both great resources to research when setting up security on the JVM. Please consult those resources when troubleshooting
and configuring SSL.

View file

@ -257,7 +257,7 @@ Java
<a id="round-robin-router"></a>
### RoundRobinPool and RoundRobinGroup
Routes in a [round-robin](http://en.wikipedia.org/wiki/Round-robin) fashion to its routees.
Routes in a [round-robin](https://en.wikipedia.org/wiki/Round-robin) fashion to its routees.
RoundRobinPool defined in configuration:
@ -598,7 +598,7 @@ Java
### ConsistentHashingPool and ConsistentHashingGroup
The ConsistentHashingPool uses [consistent hashing](http://en.wikipedia.org/wiki/Consistent_hashing)
The ConsistentHashingPool uses [consistent hashing](https://en.wikipedia.org/wiki/Consistent_hashing)
to select a routee based on the sent message. This
[article](http://www.tom-e-white.com/2007/11/consistent-hashing.html) gives good
insight into how consistent hashing is implemented.
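As a rough illustration of the idea (this is not Akka's `ConsistentHashingPool` implementation), a consistent-hash ring places routees and message keys in the same hash space and routes each key to the first routee at or after its position:

```scala
import scala.collection.immutable.SortedMap
import scala.util.hashing.MurmurHash3

// Illustrative consistent-hash ring; routee names and virtual-node count are arbitrary.
final class HashRing(routees: Seq[String], virtualNodes: Int = 10) {
  // Place every routee on the ring several times ("virtual nodes") to even out the distribution.
  private val ring: SortedMap[Int, String] = SortedMap(
    (for {
      r <- routees
      i <- 0 until virtualNodes
    } yield MurmurHash3.stringHash(s"$r-$i") -> r): _*)

  // Route a message key to the first routee clockwise from the key's hash.
  def routeeFor(key: String): String = {
    val it = ring.iteratorFrom(MurmurHash3.stringHash(key))
    if (it.hasNext) it.next()._2 else ring.head._2 // wrap around the ring
  }
}

object HashRingExample extends App {
  val ring = new HashRing(Seq("routee-1", "routee-2", "routee-3"))
  println(ring.routeeFor("user-42")) // the same key always maps to the same routee
}
```

Because only keys hashed near an added or removed routee move, resizing the routee set relocates just a small fraction of the keys.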

View file

@ -27,7 +27,7 @@ Please subscribe to the [akka-security](https://groups.google.com/forum/#!forum/
### Severity
The [CVSS](https://en.wikipedia.org/wiki/CVSS) score of this vulnerability is 6.8 (Medium), based on vector [AV:A/AC:M/Au:N/C:C/I:C/A:C/E:F/RL:TF/RC:C](https://nvd.nist.gov/cvss.cfm?calculator&amp;version=2&amp;vector=\(AV:A/AC:M/Au:N/C:C/I:C/A:C/E:F/RL:TF/RC:C\)).
The [CVSS](https://en.wikipedia.org/wiki/CVSS) score of this vulnerability is 6.8 (Medium), based on vector [AV:A/AC:M/Au:N/C:C/I:C/A:C/E:F/RL:TF/RC:C](https://nvd.nist.gov/vuln-metrics/cvss/v2-calculator?calculator&amp;version=2&amp;vector=%5C(AV:A/AC:M/Au:N/C:C/I:C/A:C/E:F/RL:TF/RC:C%5C)).
Rationale for the score:

View file

@ -59,7 +59,7 @@ you would need to reference it as `Wrapper$Message` instead of `Wrapper.Message`
@@@
Akka provides serializers for several primitive types and [protobuf](http://code.google.com/p/protobuf/)
Akka provides serializers for several primitive types and [protobuf](https://github.com/protocolbuffers/protobuf)
`com.google.protobuf.GeneratedMessage` (protobuf2) and `com.google.protobuf.GeneratedMessageV3` (protobuf3) by default (the latter only if
depending on the akka-remote module), so normally you don't need to add
configuration for that if you send raw protobuf messages as actor messages.

View file

@ -34,7 +34,7 @@ Java
With this server running you could use `telnet 127.0.0.1 9999` to see a stream of timestamps being printed, one every second.
The following sample is a little bit more advanced and uses the @apidoc[MergeHub] to dynamically merge incoming messages to a single stream which is then fed into a @apidoc[BroadcastHub] which emits elements over a dynamic set of downstreams allowing us to create a simplistic little TCP chat server in which a text entered from one client is emitted to all connected clients.
The following sample is a little bit more advanced and uses the @apidoc[MergeHub$] to dynamically merge incoming messages to a single stream which is then fed into a @apidoc[BroadcastHub$] which emits elements over a dynamic set of downstreams allowing us to create a simplistic little TCP chat server in which a text entered from one client is emitted to all connected clients.
Scala
: @@snip [FromSinkAndSource.scala](/akka-docs/src/test/scala/docs/stream/operators/flow/FromSinkAndSource.scala) { #chat }

View file

@ -16,7 +16,7 @@ If any of the asks times out it will fail the stream with a @apidoc[AskTimeoutEx
The @java[`mapTo` class]@scala[`S` generic] parameter is used to cast the responses from the actor to the expected outgoing flow type.
Similar to the plain ask pattern, the target actor is allowed to reply with @apidoc[akka.actor.Status].
Similar to the plain ask pattern, the target actor is allowed to reply with @apidoc[akka.actor.Status$].
An @apidoc[akka.actor.Status.Failure] will cause the operator to fail with the cause carried in the `Failure` message.
Adheres to the @apidoc[ActorAttributes.SupervisionStrategy] attribute.

View file

@ -13,7 +13,7 @@ To use Akka Streams, add the module to your project:
<a id="reactive-streams-integration"></a>
## Overview
Akka Streams implements the [Reactive Streams](http://reactive-streams.org/) standard for asynchronous stream processing with non-blocking
Akka Streams implements the [Reactive Streams](https://www.reactive-streams.org/) standard for asynchronous stream processing with non-blocking
back pressure.
Since Java 9 the APIs of Reactive Streams have been included in the Java Standard library, under the `java.util.concurrent.Flow`
@ -133,5 +133,5 @@ An incomplete list of other implementations:
* [Reactor (1.1+)](https://github.com/reactor/reactor)
* [RxJava](https://github.com/ReactiveX/RxJavaReactiveStreams)
* [Ratpack](http://www.ratpack.io/manual/current/streams.html)
* [Slick](http://slick.lightbend.com)
* [Ratpack](https://www.ratpack.io/manual/current/streams.html)
* [Slick](https://scala-slick.org/)

View file

@ -516,7 +516,7 @@ allow nicer syntax. The short answer is that Scala 2 does not support this in a
that it is impossible to abstract over the kind of stream that is being extended because `Source`, `Flow`
and `SubFlow` differ in the number and kind of their type parameters. While it would be possible to write
an implicit class that enriches them generically, this class would require explicit instantiation with all type
parameters due to [SI-2712](https://issues.scala-lang.org/browse/SI-2712). For a partial workaround that unifies
parameters due to [SI-2712](https://github.com/scala/bug/issues/2712). For a partial workaround that unifies
extensions to `Source` and `Flow` see [this sketch by R. Kuhn](https://gist.github.com/rkuhn/2870fcee4937dda2cad5).
A lot simpler is the task of adding an extension method to `Source` as shown below:

View file

@ -190,7 +190,7 @@ of absence of a value we recommend using @scala[`scala.Option` or `scala.util.Ei
## Back-pressure explained
Akka Streams implement an asynchronous non-blocking back-pressure protocol standardised by the [Reactive Streams](http://reactive-streams.org/)
Akka Streams implement an asynchronous non-blocking back-pressure protocol standardised by the [Reactive Streams](https://www.reactive-streams.org/)
specification, which Akka is a founding member of.
The user of the library does not have to write any explicit back-pressure handling code — it is built in

View file

@ -28,7 +28,7 @@ efficiently and with bounded resource usage—no more OutOfMemoryErrors. In orde
to achieve this our streams need to be able to limit the buffering that they
employ, they need to be able to slow down producers if the consumers cannot
keep up. This feature is called back-pressure and is at the core of the
[Reactive Streams](http://reactive-streams.org/) initiative of which Akka is a
[Reactive Streams](https://www.reactive-streams.org/) initiative of which Akka is a
founding member. For you this means that the hard problem of propagating and
reacting to back-pressure has been incorporated in the design of Akka Streams
already, so you have one less thing to worry about; it also means that Akka

View file

@ -29,7 +29,7 @@ distributed processing framework or to introduce such capabilities in specific p
Stream refs are trivial to use in existing clustered Akka applications and require no additional configuration
or setup. They automatically maintain flow-control / back-pressure over the network and employ Akka's failure detection
mechanisms to fail-fast ("let it crash!") in the case of failures of remote nodes. They can be seen as an implementation
of the [Work Pulling Pattern](http://www.michaelpollmeier.com/akka-work-pulling-pattern), which one would otherwise
of the [Work Pulling Pattern](https://www.michaelpollmeier.com/akka-work-pulling-pattern), which one would otherwise
implement manually.
@@@ note

View file

@ -758,7 +758,7 @@ akka {
## Different Testing Frameworks
Akka's own test suite is written using [ScalaTest](http://scalatest.org),
Akka's own test suite is written using [ScalaTest](http://www.scalatest.org),
which also shines through in documentation examples. However, the TestKit and
its facilities do not depend on that framework, you can essentially use
whichever suits your development style best.
@ -783,7 +783,7 @@ backwards compatibility in the future, use at own risk.
### Specs2
Some [Specs2](http://specs2.org) users have contributed examples of how to work around some clashes which may arise:
Some [Specs2](https://etorreborre.github.io/specs2/) users have contributed examples of how to work around some clashes which may arise:
* Mixing TestKit into `org.specs2.mutable.Specification` results in a
name clash involving the `end` method (which is a private variable in

View file

@ -23,7 +23,7 @@ imports when working in Scala, or vice versa. See @ref:[IDE Tips](../additional/i
## Akka Actors
The [Actor Model](http://en.wikipedia.org/wiki/Actor_model) provides a higher level of abstraction for writing concurrent
The [Actor Model](https://en.wikipedia.org/wiki/Actor_model) provides a higher level of abstraction for writing concurrent
and distributed systems. It alleviates the developer from having to deal with
explicit locking and thread management, making it easier to write correct
concurrent and parallel systems. Actors were defined in the 1973 paper by Carl

View file

@ -10,7 +10,7 @@ Microservices has many attractive properties, such as the independent nature of
multiple smaller and more focused teams that can deliver new functionality more frequently and can
respond quicker to business opportunities. Reactive Microservices should be isolated, autonomous, and have
a single responsibility as identified by Jonas Bonér in the book
[Reactive Microsystems: The Evolution of Microservices at Scale](https://info.lightbend.com/ebook-reactive-microservices-the-evolution-of-microservices-at-scale-register.html).
[Reactive Microsystems: The Evolution of Microservices at Scale](https://www.lightbend.com/ebooks/reactive-microsystems-evolution-of-microservices-scalability-oreilly).
In a microservices architecture, you should consider communication within a service and between services.
@ -29,9 +29,9 @@ during a rolling deployment, but deployment of the entire set has a single point
intra-service communication can take advantage of Akka Cluster, failure management and actor messaging, which
is convenient to use and has great performance.
Between different services [Akka HTTP](https://doc.akka.io/docs/akka-http/current) or
Between different services [Akka HTTP](https://doc.akka.io/docs/akka-http/current/) or
[Akka gRPC](https://doc.akka.io/docs/akka-grpc/current/) can be used for synchronous (yet non-blocking)
communication and [Akka Streams Kafka](https://doc.akka.io/docs/akka-stream-kafka/current/home.html) or other
communication and [Akka Streams Kafka](https://doc.akka.io/docs/alpakka-kafka/current/) or other
[Alpakka](https://doc.akka.io/docs/alpakka/current/) connectors for asynchronous integration.
All those communication mechanisms work well with streaming of messages with end-to-end back-pressure, and the
synchronous communication tools can also be used for single request response interactions. It is also important

View file

@ -30,15 +30,15 @@ and membership state transitions.
### Gossip
The cluster membership used in Akka is based on Amazon's [Dynamo](http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf) system and
particularly the approach taken in Basho's [Riak](http://basho.com/technology/architecture/) distributed database.
Cluster membership is communicated using a [Gossip Protocol](http://en.wikipedia.org/wiki/Gossip_protocol), where the current
The cluster membership used in Akka is based on Amazon's [Dynamo](https://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf) system and
particularly the approach taken in Basho's [Riak](https://riak.com/technology/architecture/) distributed database.
Cluster membership is communicated using a [Gossip Protocol](https://en.wikipedia.org/wiki/Gossip_protocol), where the current
state of the cluster is gossiped randomly through the cluster, with preference to
members that have not seen the latest version.
#### Vector Clocks
[Vector clocks](http://en.wikipedia.org/wiki/Vector_clock) are a type of data structure and algorithm for generating a partial
[Vector clocks](https://en.wikipedia.org/wiki/Vector_clock) are a type of data structure and algorithm for generating a partial
ordering of events in a distributed system and detecting causality violations.
We use vector clocks to reconcile and merge differences in cluster state
@ -175,5 +175,5 @@ The periodic nature of the gossip has a nice batching effect of state changes,
e.g. joining several nodes quickly after each other to one node will result in only
one state change to be spread to other members in the cluster.
The gossip messages are serialized with [protobuf](https://code.google.com/p/protobuf/) and also gzipped to reduce payload
The gossip messages are serialized with [protobuf](https://github.com/protocolbuffers/protobuf) and also gzipped to reduce payload
size.
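To make the vector-clock discussion above concrete, here is a minimal, illustrative sketch (not Akka's internal `VectorClock`) of ticking, comparing and merging clocks:

```scala
// Minimal, illustrative vector clock — not Akka's internal implementation.
final case class VectorClock(versions: Map[String, Long] = Map.empty) {
  // Increment this node's counter.
  def tick(node: String): VectorClock =
    copy(versions = versions.updated(node, versions.getOrElse(node, 0L) + 1L))

  // Pointwise maximum merges two clocks into their least upper bound.
  def merge(other: VectorClock): VectorClock =
    VectorClock((versions.keySet ++ other.versions.keySet).map { n =>
      n -> math.max(versions.getOrElse(n, 0L), other.versions.getOrElse(n, 0L))
    }.toMap)

  // `this` happened before `other` if every counter is <= and the clocks differ.
  def isBefore(other: VectorClock): Boolean =
    this != other && versions.forall { case (n, v) => v <= other.versions.getOrElse(n, 0L) }
}

object VectorClockExample extends App {
  val a = VectorClock().tick("node-1")
  val b = VectorClock().tick("node-2")
  // Neither clock precedes the other: the updates are concurrent (a causality conflict).
  println(a.isBefore(b) || b.isBefore(a)) // false
  println(a.merge(b).versions)            // Map(node-1 -> 1, node-2 -> 1)
}
```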

View file

@ -29,7 +29,7 @@ UID.
## Member States
The cluster membership state is a specialized [CRDT](http://hal.upmc.fr/docs/00/55/55/88/PDF/techreport.pdf), which means that it has a monotonic
The cluster membership state is a specialized [CRDT](https://hal.inria.fr/file/index/docid/555588/filename/techreport.pdf), which means that it has a monotonic
merge function. When concurrent changes occur on different nodes the updates can always be
merged and converge to the same end result.

View file

@ -283,8 +283,8 @@ been removed from the Cluster. Removal of crashed (unreachable) nodes is perform
A production solution for downing is provided by
[Split Brain Resolver](https://doc.akka.io/docs/akka-enhancements/current/split-brain-resolver.html),
which is part of the [Lightbend Platform](http://www.lightbend.com/platform).
If you don't have a Lightbend Platform Subscription, you should still carefully read the
which is part of the [Akka Platform](https://www.lightbend.com/akka-platform).
If you don't have a Lightbend Subscription, you should still carefully read the
[documentation](https://doc.akka.io/docs/akka-enhancements/current/split-brain-resolver.html)
of the Split Brain Resolver and make sure that the solution you are using handles the concerns and scenarios
described there.

View file

@ -534,7 +534,7 @@ akka.cluster.distributed-data.prefer-oldest = on
### Delta-CRDT
[Delta State Replicated Data Types](http://arxiv.org/abs/1603.01529)
[Delta State Replicated Data Types](https://arxiv.org/abs/1603.01529)
are supported. A delta-CRDT is a way to reduce the need for sending the full state
for updates. For example, adding elements `'c'` and `'d'` to the set `{'a', 'b'}` would
result in sending the delta `{'c', 'd'}` and merging that with the state on the
@ -665,7 +665,7 @@ All entries can be made durable by specifying:
akka.cluster.distributed-data.durable.keys = ["*"]
```
@scala[[LMDB](https://symas.com/products/lightning-memory-mapped-database/)]@java[[LMDB](https://github.com/lmdbjava/lmdbjava/)] is the default storage implementation. It is
@scala[[LMDB](https://symas.com/lmdb/technical/)]@java[[LMDB](https://github.com/lmdbjava/lmdbjava/)] is the default storage implementation. It is
possible to replace that with another implementation by implementing the actor protocol described in
`akka.cluster.ddata.DurableStore` and defining the `akka.cluster.distributed-data.durable.store-actor-class`
property for the new implementation.
@ -761,11 +761,9 @@ API documentation of the `Replicator` for details.
## Learn More about CRDTs
* [Eventually Consistent Data Structures](https://vimeo.com/43903960)
talk by Sean Cribbs
* [Strong Eventual Consistency and Conflict-free Replicated Data Types (video)](https://www.youtube.com/watch?v=oyUHd894w18&amp;feature=youtu.be)
talk by Mark Shapiro
* [A comprehensive study of Convergent and Commutative Replicated Data Types](http://hal.upmc.fr/file/index/docid/555588/filename/techreport.pdf)
* [A comprehensive study of Convergent and Commutative Replicated Data Types](https://hal.inria.fr/file/index/docid/555588/filename/techreport.pdf)
paper by Mark Shapiro et al.
## Configuration

View file

@ -31,7 +31,7 @@ efficiently.
## How to get started
If this is your first experience with Akka, we recommend that you start by
running a simple Hello World project. See the @scala[[Quickstart Guide](https://developer.lightbend.com/guides/akka-quickstart-scala)] @java[[Quickstart Guide](https://developer.lightbend.com/guides/akka-quickstart-java)] for
running a simple Hello World project. See the @scala[[Quickstart Guide](https://developer.lightbend.com/guides/akka-quickstart-scala/)] @java[[Quickstart Guide](https://developer.lightbend.com/guides/akka-quickstart-java/)] for
instructions on downloading and running the Hello World example. The *Quickstart* guide walks you through example code that introduces how to define actor systems, actors, and messages as well as how to use the test module and logging. Within 30 minutes, you should be able to run the Hello World example and learn how it is constructed.
This *Getting Started* guide provides the next level of information. It covers why the actor model fits the needs of modern distributed systems and includes a tutorial that will help further your knowledge of Akka. Topics include:

View file

@ -1,6 +1,6 @@
# Overview of Akka libraries and modules
Before delving into some best practices for writing actors, it will be helpful to preview the most commonly used Akka libraries. This will help you start thinking about the functionality you want to use in your system. All core Akka functionality is available as Open Source Software (OSS). Lightbend sponsors Akka development but can also help you with [commercial offerings ](https://www.lightbend.com/platform/subscription) such as training, consulting, support, and [Enterprise Suite](https://www.lightbend.com/platform/production) &#8212; a comprehensive set of tools for managing Akka systems.
Before delving into some best practices for writing actors, it will be helpful to preview the most commonly used Akka libraries. This will help you start thinking about the functionality you want to use in your system. All core Akka functionality is available as Open Source Software (OSS). Lightbend sponsors Akka development but can also help you with [commercial offerings ](https://www.lightbend.com/lightbend-subscription) such as training, consulting, support, and [Enterprise capabilities](https://www.lightbend.com/why-lightbend#enterprise-capabilities) &#8212; a comprehensive set of tools for managing Akka systems.
The following capabilities are included with Akka OSS and are introduced later on this page:
@ -14,7 +14,7 @@ The following capabilities are included with Akka OSS and are introduced later o
* @ref:[Streams](#streams)
* @ref:[HTTP](#http)
With a [Lightbend Platform Subscription](https://www.lightbend.com/platform/subscription), you can use [Akka Enhancements](https://doc.akka.io/docs/akka-enhancements/current/) that includes:
With a [Lightbend Platform Subscription](https://www.lightbend.com/lightbend-subscription), you can use [Akka Enhancements](https://doc.akka.io/docs/akka-enhancements/current/) that includes:
[Akka Resilience Enhancements](https://doc.akka.io/docs/akka-enhancements/current/akka-resilience-enhancements.html):
@ -160,7 +160,7 @@ cluster for example) or alternate views (like reports).
Persistence tackles the following challenges:
* How to restore the state of an entity/actor when system restarts or crashes.
* How to implement a [CQRS system](https://msdn.microsoft.com/en-us/library/jj591573.aspx).
* How to implement a [CQRS system](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/jj591573(v=pandp.10)?redirectedfrom=MSDN).
* How to ensure reliable delivery of messages in face of network errors and system crashes.
* How to introspect domain events that have led an entity to its current state.
* How to leverage [Event Sourcing](https://martinfowler.com/eaaDev/EventSourcing.html) in your application to support long-running processes while the project continues to evolve.
@ -198,7 +198,7 @@ process a potentially large, or infinite, stream of sequential events and proper
faster processing stages do not overwhelm slower ones in the chain or graph. Streams provide a higher-level
abstraction on top of actors that simplifies writing such processing networks, handling all the fine details in the
background and providing a safe, typed, composable programming model. Streams is also an implementation
of the [Reactive Streams standard](http://www.reactive-streams.org) which enables integration with all third
of the [Reactive Streams standard](https://www.reactive-streams.org) which enables integration with all third
party implementations of that standard.
Streams solve the following challenges:
@ -210,7 +210,7 @@ Streams solve the following challenges:
### HTTP
[Akka HTTP](https://doc.akka.io/docs/akka-http/current) is a separate module from Akka.
[Akka HTTP](https://doc.akka.io/docs/akka-http/current/) is a separate module from Akka.
The de facto standard for providing APIs remotely, internal or external, is [HTTP](https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol). Akka provides a library to construct or consume such HTTP services by giving a set of tools to create HTTP services (and serve them) and a client that can be
used to consume other services. These tools are particularly suited to streaming in and out a large set of data or real-time events by leveraging the underlying model of Akka Streams.

View file

@ -228,7 +228,7 @@ In the context of the IoT system, this guide introduced the following concepts,
To continue your journey with Akka, we recommend:
* Start building your own applications with Akka, make sure you [get involved in our amazing community](https://akka.io/get-involved) for help if you get stuck.
* Start building your own applications with Akka, make sure you [get involved in our amazing community](https://akka.io/get-involved/) for help if you get stuck.
* If you'd like some additional background and detail, read the rest of the @ref:[reference documentation](../actors.md) and check out some of the @ref:[books and videos](../../additional/books.md) on Akka.
* If you are interested in functional programming, read how actors can be defined in a @ref:[functional style](../actors.md#functional-style). In this guide the object-oriented style was used, but you can mix both as you like.

View file

@ -47,7 +47,7 @@ provides tools to facilitate in building GDPR capable systems.
### Event sourcing concepts
See an [introduction to EventSourcing](https://msdn.microsoft.com/en-us/library/jj591559.aspx) at MSDN.
See an [introduction to EventSourcing](https://docs.microsoft.com/en-us/previous-versions/msp-n-p/jj591559(v=pandp.10)?redirectedfrom=MSDN) at MSDN.
Another excellent article about "thinking in Events" is [Events As First-Class Citizens](https://hackernoon.com/events-as-first-class-citizens-8633e8479493)
by Randy Shoup. It is a short and recommended read if you're starting to develop event-based applications.

View file

@ -53,7 +53,7 @@ There are 3 supported patterns, which are described in the following sections:
This pattern implements point-to-point reliable delivery between a single producer actor sending messages and a single consumer actor
receiving the messages.
Messages are sent from the producer to @apidoc[ProducerController] and via @apidoc[ConsumerController] actors, which
Messages are sent from the producer to @apidoc[ProducerController$] and via @apidoc[ConsumerController$] actors, which
handle the delivery and confirmation of the processing in the destination consumer actor.
![delivery-p2p-1.png](./images/delivery-p2p-1.png)
@ -156,7 +156,7 @@ One important property is that the order of the messages should not matter, beca
message is routed randomly to one of the workers with demand. In other words, two subsequent
messages may be routed to two different workers and processed independent of each other.
Messages are sent from the producer to @apidoc[WorkPullingProducerController] and via @apidoc[ConsumerController]
Messages are sent from the producer to @apidoc[WorkPullingProducerController$] and via @apidoc[ConsumerController$]
actors, which handle the delivery and confirmation of the processing in the destination worker (consumer) actor.
![delivery-work-pulling-1.png](./images/delivery-work-pulling-1.png)
@ -266,7 +266,7 @@ and sending from another producer (different node)
![delivery-work-sharding-3.png](./images/delivery-sharding-3.png)
The @apidoc[ShardingProducerController] should be used together with @apidoc[ShardingConsumerController].
The @apidoc[ShardingProducerController$] should be used together with @apidoc[ShardingConsumerController$].
A producer can send messages via a `ShardingProducerController` to any `ShardingConsumerController`
identified by an `entityId`. A single `ShardingProducerController` per `ActorSystem` (node) can be
@ -344,7 +344,7 @@ some of these may already have been processed by the previous consumer.
Until sent messages have been confirmed the producer side keeps them in memory to be able to
resend them. If the JVM of the producer side crashes those unconfirmed messages are lost.
To make sure the messages can be delivered also in that scenario a @apidoc[DurableProducerQueue] can be used.
To make sure the messages can be delivered also in that scenario a @apidoc[DurableProducerQueue$] can be used.
Then the unconfirmed messages are stored in a durable way so that they can be redelivered when the producer
is started again. An implementation of the `DurableProducerQueue` is provided by @apidoc[EventSourcedProducerQueue]
in `akka-persistence-typed`.

View file

@ -105,7 +105,7 @@ An optional parameter `preferLocalRoutees` can be used for this strategy. Router
### Consistent Hashing
Uses [consistent hashing](http://en.wikipedia.org/wiki/Consistent_hashing) to select a routee based
Uses [consistent hashing](https://en.wikipedia.org/wiki/Consistent_hashing) to select a routee based
on the sent message. This [article](http://www.tom-e-white.com/2007/11/consistent-hashing.html)
gives good insight into how consistent hashing is implemented.

View file

@ -446,8 +446,8 @@ be good to know that it's optional in case you would prefer a different approach
* direct processing because there is only one message type
* if or switch statements
* annotation processor
* [Vavr Pattern Matching DSL](http://www.vavr.io/vavr-docs/#_pattern_matching)
* future pattern matching in Java ([JEP 305](http://openjdk.java.net/jeps/305))
* [Vavr Pattern Matching DSL](https://www.vavr.io/vavr-docs/#_pattern_matching)
* pattern matching since JDK 14 ([JEP 305](https://openjdk.java.net/jeps/305))
In `Behaviors` there are `receive`, `receiveMessage` and `receiveSignal` factory methods that take functions
instead of using the `ReceiveBuilder`, which is the `receive` with the class parameter.
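For comparison, a minimal Scala sketch of the function-based factories mentioned above (the `Command` protocol is made up for illustration, not taken from the Akka docs samples):

```scala
import akka.actor.typed.Behavior
import akka.actor.typed.scaladsl.Behaviors

object GreeterExample {
  // Illustrative message protocol.
  sealed trait Command
  final case class Greet(whom: String) extends Command
  case object Stop extends Command

  // receiveMessage takes a plain Command => Behavior[Command] function;
  // no ReceiveBuilder and no class parameter are involved.
  val greeter: Behavior[Command] = Behaviors.receiveMessage[Command] {
    case Greet(whom) =>
      println(s"Hello, $whom!")
      Behaviors.same
    case Stop =>
      Behaviors.stopped
  }
}
```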

View file

@ -250,7 +250,7 @@ project-info {
text: "API (Scaladoc)"
}
{
url: ${project-info.javadoc}"coordination/package-summary.html"
url: ${project-info.javadoc}"coordination/lease/package-summary.html"
text: "API (Javadoc)"
}
]