Merge branch 'master' into wip-query-2.5

This commit is contained in:
Konrad `ktoso` Malawski 2017-01-03 17:04:48 +01:00 committed by GitHub
commit 067b569f85
78 changed files with 1762 additions and 1744 deletions

View file

@ -17,7 +17,7 @@ Depending on which version (or sometimes module) you want to work on, you should
Akka uses tags to categorise issues into groups or mark their phase in development.
Most notably many tags start `t:` prefix (as in `topic:`), which categorises issues in terms of which module they relate to. Examples are:
Most notably many tags start with a `t:` prefix (as in `topic:`), which categorises issues in terms of which module they relate to. Examples are:
- [t:core](https://github.com/akka/akka/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aopen%20label%3At%3Acore)
- [t:stream](https://github.com/akka/akka/issues?q=is%3Aissue+is%3Aopen+label%3At%3Astream)
@ -31,9 +31,9 @@ that will be accepted and likely is a nice one to get started you should check o
Another group of tickets are those which start with a number. They're used to signal what phase of development an issue is in:
- [0 - new](https://github.com/akka/akka/labels/0%20-%20new) - is assigned when a ticket is unclear on it's purpose or if it is valid or not. Sometimes the additional tag `discuss` is used to mark such tickets, if they propose large scale changed and need more discussion before moving into triaged (or being closed as invalid)
- [0 - new](https://github.com/akka/akka/labels/0%20-%20new) - is assigned when a ticket is unclear on its purpose or if it is valid or not. Sometimes the additional tag `discuss` is used to mark such tickets, if they propose large scale changes and need more discussion before moving into triaged (or being closed as invalid).
- [1 - triaged](https://github.com/akka/akka/labels/1%20-%20triaged) - roughly speaking means "this ticket makes sense". Triaged tickets are safe to pick up for contributing, in terms of the likelihood of a patch for them being accepted. It is not recommended to start working on a ticket that is not triaged.
- [2 - pick next](https://github.com/akka/akka/labels/2%20-%20pick%20next) - used to mark issues which are next up in the queue to be worked on. Sometimes it's also used to mark which PRs are expected to be reviewed/merged for the next release. The tag is non-binding, and mostly used as organisational helper.
- [2 - pick next](https://github.com/akka/akka/labels/2%20-%20pick%20next) - used to mark issues which are next up in the queue to be worked on. Sometimes it's also used to mark which PRs are expected to be reviewed/merged for the next release. The tag is non-binding, and mostly used as an organisational helper.
- [3 - in progress](https://github.com/akka/akka/labels/3%20-%20in%20progress) - means someone is working on this ticket. If you see a ticket that has this tag but seems inactive, the tag may simply not have been removed; feel free to ping the ticket to check whether it's still being worked on.
The last group of special tags indicate specific states a ticket is in:
@ -41,41 +41,41 @@ The last group of special tags indicate specific states a ticket is in:
- [bug](https://github.com/akka/akka/labels/bug) - bugs take priority in being fixed above features. The core team dedicates a number of days to working on bugs each sprint. Bugs which have reproducers are also great for community contributions as they're well isolated. Sometimes we're not as lucky and have no reproducer; in that case a bugfix should also include a test reproducing the original error along with the fix.
- [failed](https://github.com/akka/akka/labels/failed) - these tickets indicate a Jenkins failure (for example from a nightly build). They usually start with the `FAILED: ...` message, and include a stacktrace + link to the Jenkins failure. The tickets are collected and worked on with priority to keep the build stable and healthy. Often these are simple timeout issues (Jenkins boxes are slow), though sometimes real bugs are discovered this way.
Pull Request validation states:
Pull request validation states:
- `validating => [tested | needs-attention]` - signify pull request validation status
- `validating => [tested | needs-attention]` - signify pull request validation status.
# Akka contributing guidelines
These guidelines apply to all Akka projects, by which we mean both the `akka/akka` repository,
as well as any plugins or additional repos located under the Akka GitHub organisation.
as well as any plugins or additional repositories located under the Akka GitHub organisation.
These guidelines are meant to be a living document that should be changed and adapted as needed.
We encourage changes that make it easier to achieve our goals in an efficient way.
## General Workflow
## General workflow
The below steps are how to get a patch into a main development branch (e.g. `master`).
The steps below describe how to get a patch into a main development branch (e.g. `master`).
The steps are exactly the same for everyone involved in the project (be it core team, or first time contributor).
1. Make sure an issue exists in the [issue tracker](https://github.com/akka/akka/issues) for the work you want to contribute.
- If there is no ticket for it, [create one](https://github.com/akka/akka/issues/new) first.
1. [Fork the project](https://github.com/akka/akka#fork-destination-box) on GitHub. You'll need to create a feature-branch for your work on your fork, as this way you'll be able to submit a PullRequest against the mainline Akka.
1. [Fork the project](https://github.com/akka/akka#fork-destination-box) on GitHub. You'll need to create a feature-branch for your work on your fork, as this way you'll be able to submit a pull request against the mainline Akka.
1. Create a branch on your fork and work on the feature. For example: `git checkout -b wip-custom-headers-akka-http`
- Please make sure to follow the general quality guidelines (specified below) when developing your patch.
- Please write additional tests covering your feature and adjust existing ones if needed before submitting your Pull Request. The `validatePullRequest` sbt task ([explained below](#validatePullRequest)) may come in handy to verify your changes are correct.
- Please write additional tests covering your feature and adjust existing ones if needed before submitting your pull request. The `validatePullRequest` sbt task ([explained below](#validatePullRequest)) may come in handy to verify your changes are correct.
1. Once your feature is complete, prepare the commit following our [Creating Commits And Writing Commit Messages](#creating-commits-and-writing-commit-messages) guidelines. For example, a good commit message would be: `Adding compression support for Manifests #22222` (note the reference to the ticket it aimed to resolve).
1. If it's a new feature, or a change of behaviour, document it on the [akka-docs](https://github.com/akka/akka/tree/master/akka-docs), remember, a undocumented feature is not a feature. If the feature was touching Scala or Java DSL, make sure to document it in both the java and scala documentation (usually in a file of the same name, but under `/scala/` instead of `/java/` etc).
1. Now it's finally time to [submit the Pull Request](https://help.github.com/articles/using-pull-requests)!
1. If you have not already done so, you will be asked by our CLA bot to [sign the Lightbend CLA](http://www.lightbend.com/contribute/cla) online CLA stands for Contributor License Agreement and is a way of protecting intellectual property disputes from harming the project.
1. If it's a new feature, or a change of behaviour, document it on the [akka-docs](https://github.com/akka/akka/tree/master/akka-docs), remember, an undocumented feature is not a feature. If the feature was touching Scala or Java DSL, make sure to document it in both the Java and Scala documentation (usually in a file of the same name, but under `/scala/` instead of `/java/` etc).
1. Now it's finally time to [submit the pull request](https://help.github.com/articles/using-pull-requests)!
1. If you have not already done so, you will be asked by our CLA bot to [sign the Lightbend CLA](http://www.lightbend.com/contribute/cla) online. CLA stands for Contributor License Agreement and is a way of protecting intellectual property disputes from harming the project.
1. If you're not already on the contributors white-list, the @akka-ci bot will ask `Can one of the repo owners verify this patch?`, to which a core member will reply by commenting `OK TO TEST`. This is just a sanity check to prevent malicious code from being run on the Jenkins cluster.
1. Now both committers and interested people will review your code. This process is to ensure the code we merge is of the best possible quality, and that no silly mistakes slip through. You're expected to follow up on these comments by adding new commits to the same branch. The commit messages of those commits can be more relaxed, for example: `Removed debugging using printline`, as they will all be squashed into one commit before merging into the main branch.
- The community and team are really nice people, so don't be afraid to ask follow up questions if you didn't understand some comment, or would like to clarify how to continue with a given feature. We're here to help, so feel free to ask and discuss any kind of questions you might have during review!
- The community and team are really nice people, so don't be afraid to ask follow up questions if you didn't understand some comment, or would like clarification on how to continue with a given feature. We're here to help, so feel free to ask and discuss any kind of questions you might have during review!
1. After the review you should fix the issues as needed (pushing a new commit for new review etc.), iterating until the reviewers give their thumbs up, which is usually signalled by a comment saying `LGTM`, which means "Looks Good To Me".
- In general a PR is expected to get 2 LGTMs from the team before it is merged. If the PR is trivial, or under under special circumstances (such as most of the team being on vacation, a PR was very thoroughly reviewed/tested and surely is correct) one LGTM may be fine as well.
- In general a PR is expected to get 2 LGTMs from the team before it is merged. If the PR is trivial, or under special circumstances (such as most of the team being on vacation, a PR was very thoroughly reviewed/tested and surely is correct) one LGTM may be fine as well.
1. If the code change needs to be applied to other branches as well (for example a bugfix needing to be backported to a previous version), one of the team will either ask you to submit a PR with the same commit to the old branch, or do this for you.
- Backport pull requests such as these are marked using the phrase`for validation` in the title to make the purpose clear in the pull request list. They can be merged once validation passes without additional review (if no conflicts).
1. Once everything is said and done, your Pull Request gets merged :tada: Your feature will be available with the next “earliest” release milestone (i.e. if back-ported so that it will be in release x.y.z, find the relevant milestone for that release). And of course you will be given credit for the fix in the release stats during the release's announcement. You've made it!
- Backport pull requests such as these are marked using the phrase `for validation` in the title to make the purpose clear in the pull request list. They can be merged once validation passes without additional review (if there are no conflicts).
1. Once everything is said and done, your pull request gets merged :tada: Your feature will be available with the next “earliest” release milestone (i.e. if back-ported so that it will be in release x.y.z, find the relevant milestone for that release). And of course you will be given credit for the fix in the release stats during the release's announcement. You've made it!
The TL;DR of the above, very precise, workflow is (a rough command-level sketch follows the list):
@ -87,7 +87,7 @@ The TL;DR; of the above very precise workflow version is:
6. Keep polishing it until it has received enough LGTMs
7. Profit!
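Expressed as a rough command-level sketch (the fork URL placeholder and the branch name are just examples, following the `wip-...` naming used above), those steps boil down to:

```
# fork akka/akka on GitHub first, then:
git clone git@github.com:<your-username>/akka.git
cd akka
git checkout -b wip-custom-headers-akka-http    # feature branch on your fork
# hack, add tests, run `sbt validatePullRequest`, commit
git push origin wip-custom-headers-akka-http    # then open the pull request on GitHub
```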
Note that the Akka sbt project is large, so `sbt` needs to be run with lots of heap (1-2 Gb). This can be specified using a command line argument `sbt -mem 2048` or in the environment variable `SBT_OPTS` but then as a regular JVM memory flag, for example `SBT_OPTS=-Xmx2G`, on some platforms you can also edit the global defaults for sbt in `/usr/local/etc/sbtopts`.
Note that the Akka sbt project is large, so `sbt` needs to be run with lots of heap (1-2 GB). This can be specified using a command line argument `sbt -mem 2048` or in the environment variable `SBT_OPTS`, but then as a regular JVM memory flag, for example `SBT_OPTS=-Xmx2G`. On some platforms you can also edit the global defaults for sbt in `/usr/local/etc/sbtopts`.
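For concreteness, the two ways of raising the heap described above look like this on the command line (2 GB is the suggested starting point, not a hard requirement):

```
sbt -mem 2048            # pass the heap size directly to sbt
# or:
export SBT_OPTS=-Xmx2G   # set it as a regular JVM flag via the environment
sbt
```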
## The `validatePullRequest` task
@ -98,7 +98,7 @@ then running tests only on those projects.
For example changing something in `akka-http-core` would cause tests to be run in all projects which depend on it
(e.g. `akka-http-tests`, `akka-http-marshallers-*`, `akka-docs` etc.).
To use the task simply type, and the output should include entries like shown below:
To use the task simply type `validatePullRequest`, and the output should include entries like shown below:
```
> validatePullRequest
@ -108,7 +108,7 @@ To use the task simply type, and the output should include entries like shown be
```
By default changes are diffed with the `master` branch when working locally; if you want to validate against a different
target PR branch you can do so by setting the PR_TARGET_BRANCH environment variable for SBT:
target PR branch you can do so by setting the PR_TARGET_BRANCH environment variable for sbt:
```
PR_TARGET_BRANCH=origin/example sbt validatePullRequest
@ -119,7 +119,7 @@ Binary compatibility rules and guarantees are described in depth in the [Binary
](http://doc.akka.io/docs/akka/snapshot/common/binary-compatibility-rules.html) section of the documentation.
Akka uses MiMa (which is short for [Lightbend Migration Manager](https://github.com/typesafehub/migration-manager)) to
validate binary compatibility of incoming Pull Requests. If your PR fails due to binary compatibility issues, you may see
validate binary compatibility of incoming pull requests. If your PR fails due to binary compatibility issues, you may see
an error like this:
```
@ -137,10 +137,9 @@ Situations when it may be fine to ignore a MiMa issued warning include:
- if it is adding API to classes / traits which are only meant for extension by Akka itself, i.e. should not be extended by end-users
- other tricky situations
## Pull request requirements
## Pull Request Requirements
For a Pull Request to be considered at all it has to meet these requirements:
For a pull request to be considered at all it has to meet these requirements:
1. Regardless of whether the code introduces new features or fixes bugs or regressions, it must have comprehensive tests.
1. The code must be well documented in Lightbend's standard documentation format (see the Documentation section below).
@ -152,15 +151,14 @@ For a Pull Request to be considered at all it has to meet these requirements:
Akka uses the first choice, having copyright notices in every file header.
### Additional guidelines
Some additional guidelines regarding source code are:
- files should start with a ``Copyright (C) 2016 Lightbend Inc. <http://www.lightbend.com>`` copyright header
- keep the code [DRY](http://programmer.97things.oreilly.com/wiki/index.php/Don%27t_Repeat_Yourself)
- apply the [Boy Scout Rule](http://programmer.97things.oreilly.com/wiki/index.php/The_Boy_Scout_Rule) whenever you have the chance to
- Never delete or change existing copyright notices, just add additional info.
- Files should start with a ``Copyright (C) 2016 Lightbend Inc. <http://www.lightbend.com>`` copyright header.
- Keep the code [DRY](http://programmer.97things.oreilly.com/wiki/index.php/Don%27t_Repeat_Yourself).
- Apply the [Boy Scout Rule](http://programmer.97things.oreilly.com/wiki/index.php/The_Boy_Scout_Rule) whenever you have the chance to.
- Never delete or change existing copyright notices, just add additional info.
- Do not use ``@author`` tags since it does not encourage [Collective Code Ownership](http://www.extremeprogramming.org/rules/collective.html).
- Contributors: each project should make sure that contributors get the credit they deserve—in a text file or page on the project website, in the release notes, etc.
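For reference, the copyright header required by the first bullet above looks like this in practice (this is the exact form used by the new source files added later in this commit):

```
/*
 * Copyright (C) 2016 Lightbend Inc. <http://www.lightbend.com>
 */
```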
@ -188,15 +186,15 @@ For larger projects that have invested a lot of time and resources into their cu
Akka generates JavaDoc-style API documentation using the [genjavadoc](https://github.com/typesafehub/genjavadoc) sbt plugin, since the sources are written mostly in Scala.
Generating JavaDoc is not enabled by default, as it's not needed for day-to-day development and is expected to just work.
If you'd like to check if you links and formatting looks good in JavaDoc (and not only in ScalaDoc), you can generate it by running:
If you'd like to check if your links and formatting look good in JavaDoc (and not only in ScalaDoc), you can generate it by running:
```
sbt -Dakka.genjavadoc.enabled=true javaunidoc:doc
```
Which will generate JavaDoc style docs in `./target/javaunidoc/index.html`
Which will generate JavaDoc style docs in `./target/javaunidoc/index.html`.
## External Dependencies
## External dependencies
All the external runtime dependencies for the project, including transitive dependencies, must have an open source license that is equal to, or compatible with, [Apache 2](http://www.apache.org/licenses/LICENSE-2.0).
@ -206,19 +204,19 @@ This must be ensured by manually verifying the license for all the dependencies
2. Whenever a committer to the project adds a new dependency.
3. Whenever a new release is cut (public or private for a customer).
Which licenses are compatible with Apache 2 are defined in [this doc](http://www.apache.org/legal/3party.html#category-a), where you can see that the licenses that are listed under ``Category A`` automatically compatible with Apache 2, while the ones listed under ``Category B`` needs additional action:
Which licenses are compatible with Apache 2 are defined in [this doc](http://www.apache.org/legal/3party.html#category-a), where you can see that the licenses that are listed under ``Category A`` are automatically compatible with Apache 2, while the ones listed under ``Category B`` need additional action:
> Each license in this category requires some degree of [reciprocity](http://www.apache.org/legal/3party.html#define-reciprocal); therefore, additional action must be taken in order to minimize the chance that a user of an Apache product will create a derivative work of a reciprocally-licensed portion of an Apache product without being aware of the applicable requirements.
Each project must also create and maintain a list of all dependencies and their licenses, including all their transitive dependencies. This can be done either in the documentation or in the build file next to each dependency.
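A minimal sketch of the "in the build file next to each dependency" approach might look like the following; the dependencies and licenses shown are examples only, not a statement about the actual Akka build:

```
// Apache 2
libraryDependencies += "com.typesafe" % "config" % "1.3.0"
// CC0
libraryDependencies += "org.reactivestreams" % "reactive-streams" % "1.0.0"
```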
## Creating Commits And Writing Commit Messages
## Creating commits and writing commit messages
Follow these guidelines when creating public commits and writing commit messages.
1. If your work spans multiple local commits (for example, if you do safe point commits while working in a feature branch or work in a branch for a long time doing merges/rebases etc.) then please do not submit it all as-is, but rewrite the history by squashing the commits into a single big commit for which you write a good commit message (as discussed in the following sections). For more info read this article: [Git Workflow](http://sandofsky.com/blog/git-workflow.html). Every commit should be able to be used in isolation, cherry picked etc.
2. First line should be a descriptive sentence what the commit is doing, including the ticket number. It should be possible to fully understand what the commit does—but not necessarily how it does it—by just reading this single line. We follow the “imperative present tense” style for commit messages ([more info here](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html)).
2. The first line should be a descriptive sentence what the commit is doing, including the ticket number. It should be possible to fully understand what the commit does—but not necessarily how it does it—by just reading this single line. We follow the “imperative present tense” style for commit messages ([more info here](http://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html)).
It is **not ok** to only list the ticket number, type "minor fix" or similar.
If the commit is a small fix, then you are done. If not, go to 3.
@ -239,16 +237,16 @@ Example:
## Pull request validation workflow details
Akka uses [Jenkins GitHub pull request builder plugin](https://wiki.jenkins-ci.org/display/JENKINS/GitHub+pull+request+builder+plugin)
that automatically merges the code, builds it, runs the tests and comments on the Pull Request in GitHub.
that automatically merges the code, builds it, runs the tests and comments on the pull request in GitHub.
Upon a submission of a Pull Request the GitHub pull request builder plugin will post a following comment:
Upon submission of a pull request the GitHub pull request builder plugin will post the following comment:
Can one of the repo owners verify this patch?
This requires a member from a core team to start Pull Request validation process by posting comment consisting only of `OK TO TEST`.
From now on, whenever new commits are pushed to the Pull Request, a validation job will be automatically started and the results of the validation posted to the Pull Request.
This requires a member from a core team to start the pull request validation process by posting a comment consisting only of `OK TO TEST`.
From now on, whenever new commits are pushed to the pull request, a validation job will be automatically started and the results of the validation posted to the pull request.
A Pull Request validation job can be started manually by posting `PLS BUILD` comment on the Pull Request.
A pull request validation job can be started manually by posting a `PLS BUILD` comment on the pull request.
In order to speed up PR validation times, the Akka build contains a special sbt task called `validatePullRequest`,
which is smart enough to figure out which projects should be built if a PR only has changes in some parts of the project.
@ -263,7 +261,7 @@ sbt -Dakka.test.tags.exclude=performance,timing,long-running -Dakka.test.multi-i
```
In order to force the `validatePullRequest` task to build the entire project, regardless of dependency analysis of a PR's
changes one can use the special `PLS BUILD ALL` command (typed in a comment on Github, on the Pull Request), which will cause
changes one can use the special `PLS BUILD ALL` command (typed in a comment on GitHub, on the pull request), which will cause
the validator to test all projects.
## Source style
@ -283,20 +281,20 @@ Thus we ask Java contributions to follow these simple guidelines:
### Preferred ways to use timeouts in tests
Avoid short test timeouts, since Jenkins server may GC heavily causing spurious test failures. GC pause or other hiccup of 2 seconds is common in our CI environment. Please note that usually giving a larger timeout *does not slow down the tests*, as in an `expectMessage` call for example it usually will complete quickly.
Avoid short test timeouts, since the Jenkins server may GC heavily, causing spurious test failures. GC pauses or other hiccups of 2 seconds are common in our CI environment. Please note that usually giving a larger timeout *does not slow down the tests*; an `expectMessage` call, for example, will usually complete quickly anyway.
There is a number of ways timeouts can be defined in Akka tests. The following ways to use timeouts are recommended (in order of preference):
There are a number of ways timeouts can be defined in Akka tests. The following ways to use timeouts are recommended (in order of preference):
* `remaining` is first choice (requires `within` block)
* `remainingOrDefault` is second choice
* `3.seconds` is third choice if not using testkit
* lower timeouts must come with a very good reason (e.g. awaiting a `Future` that is known to be already completed)
Special care should be given `expectNoMsg` calls, which indeed will wait the entire timeout before continuing, therefore a shorter timeout should be used in those, for example `200` or `300.millis`.
Special care should be given to `expectNoMsg` calls, which indeed will wait the entire timeout before continuing, therefore a shorter timeout should be used in those, for example `200` or `300.millis`.
You can read up on remaining and friends in [TestKit.scala](https://github.com/akka/akka/blob/master/akka-testkit/src/main/scala/akka/testkit/TestKit.scala)
You can read up on `remaining` and friends in [TestKit.scala](https://github.com/akka/akka/blob/master/akka-testkit/src/main/scala/akka/testkit/TestKit.scala).
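As an illustration of that order of preference, a minimal sketch of a testkit-based spec might look like the following (it assumes the repository's `AkkaSpec` helper and the `TestActors.echoActorProps` test actor; the message values are made up):

```
import scala.concurrent.duration._
import akka.testkit.{ AkkaSpec, ImplicitSender, TestActors }

class TimeoutStyleSpec extends AkkaSpec with ImplicitSender {

  "an echo actor" must {
    "reply within the enclosing deadline" in {
      within(5.seconds) {                 // generous outer bound; finishing early costs nothing
        system.actorOf(TestActors.echoActorProps) ! "ping"
        expectMsg(remaining, "ping")      // first choice: derive the timeout from `within`
      }
      expectNoMsg(200.millis)             // expectNoMsg always waits the full timeout, so keep it short
    }
  }
}
```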
## Contributing Modules
## Contributing modules
For external contributions of entire features, the normal way is to establish it
as a stand-alone feature first, to show that there is a need for the feature. The
@ -308,14 +306,14 @@ tested it becomes an officially supported Akka feature.
# Supporting infrastructure
## Continuous Integration
## Continuous integration
Each project should be configured to use a continuous integration (CI) tool (i.e. a build server à la Jenkins).
Lightbend is sponsoring a [Jenkins server farm](https://jenkins.akka.io/), sometimes referred to as "the Lausanne cluster".
The cluster is made out of real bare-metal boxes, and maintained by the Akka team (and other very helpful people at Lightbend).
In addition to PR Validation the cluster is also used for nightly and performance test runs.
In addition to PR validation the cluster is also used for nightly and performance test runs.
## Related links

View file

@ -20,7 +20,6 @@ Reference Documentation
The reference documentation is available at [doc.akka.io](http://doc.akka.io),
for [Scala](http://doc.akka.io/docs/akka/current/scala.html) and [Java](http://doc.akka.io/docs/akka/current/java.html).
Community
---------
You can join these groups and chats to discuss and ask Akka related questions:
@ -41,16 +40,15 @@ Contributing
------------
Contributions are *very* welcome!
If you see an issue that you'd like to see fixed, the best way to make it happen is to help out by submitting a PullRequest implementing it.
If you see an issue that you'd like to see fixed, the best way to make it happen is to help out by submitting a pull request implementing it.
Refer to the [CONTRIBUTING.md](https://github.com/akka/akka/blob/master/CONTRIBUTING.md) file for more details about the workflow,
and general hints how to prepare your pull request. You can also chat ask for clarifications or guidance in GitHub issues directly,
and general hints on how to prepare your pull request. You can also ask for clarifications or guidance in GitHub issues directly,
or in the akka/dev chat if more real-time communication would be of benefit.
A chat room is available for all questions related to *developing and contributing* to Akka:
[![gitter: akka/dev](https://img.shields.io/badge/gitter%3A-akka%2Fdev-blue.svg?style=flat-square)](https://gitter.im/akka/dev)
License
-------

View file

@ -0,0 +1,36 @@
/*
* Copyright (C) 2016 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.actor.setup;
import akka.actor.setup.ActorSystemSetup;
import akka.actor.setup.Setup;
import org.junit.Test;
import org.scalatest.junit.JUnitSuite;
import java.util.Optional;
import static org.junit.Assert.*;
public class ActorSystemSetupTest extends JUnitSuite {
static class JavaSetup extends Setup {
public final String name;
public JavaSetup(String name) {
this.name = name;
}
}
@Test
public void apiMustBeUsableFromJava() {
final JavaSetup javaSetting = new JavaSetup("Jasmine Rice");
final Optional<JavaSetup> result = ActorSystemSetup.create()
.withSetup(javaSetting)
.get(JavaSetup.class);
assertTrue(result.isPresent());
assertEquals(result.get(), javaSetting);
}
}

View file

@ -6,9 +6,12 @@ package akka.actor
import language.postfixOps
import akka.testkit._
import com.typesafe.config.ConfigFactory
import scala.concurrent.{ ExecutionContext, Await, Future }
import scala.concurrent.{ Await, ExecutionContext, Future }
import scala.concurrent.duration._
import java.util.concurrent.{ RejectedExecutionException, ConcurrentLinkedQueue }
import java.util.concurrent.{ ConcurrentLinkedQueue, RejectedExecutionException }
import akka.actor.setup.ActorSystemSetup
import akka.util.Timeout
import akka.japi.Util.immutableSeq
import akka.pattern.ask
@ -353,7 +356,7 @@ class ActorSystemSpec extends AkkaSpec(ActorSystemSpec.config) with ImplicitSend
"not allow top-level actor creation with custom guardian" in {
val sys = new ActorSystemImpl("custom", ConfigFactory.defaultReference(),
getClass.getClassLoader, None, Some(Props.empty))
getClass.getClassLoader, None, Some(Props.empty), ActorSystemSetup.empty)
sys.start()
try {
intercept[UnsupportedOperationException] {

View file

@ -115,11 +115,16 @@ object ActorModelSpec {
val stops = new AtomicLong(0)
def getStats(actorRef: ActorRef) = {
val is = new InterceptorStats
stats.putIfAbsent(actorRef, is) match {
case null ⇒ is
case other ⇒ other
stats.get(actorRef) match {
case null ⇒
val is = new InterceptorStats
stats.putIfAbsent(actorRef, is) match {
case null ⇒ is
case other ⇒ other
}
case existing ⇒ existing
}
}
protected[akka] abstract override def suspend(actor: ActorCell) {
@ -414,7 +419,7 @@ abstract class ActorModelSpec(config: String) extends AkkaSpec(config) with Defa
}
}
for (run ← 1 to 3) {
flood(50000)
flood(10000)
assertDispatcher(dispatcher)(stops = run)
}
}

View file

@ -0,0 +1,74 @@
/*
* Copyright (C) 2016 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.actor.setup
import akka.actor.ActorSystem
import akka.testkit.TestKit
import org.scalatest.{ Matchers, WordSpec }
case class DummySetup(name: String) extends Setup
case class DummySetup2(name: String) extends Setup
case class DummySetup3(name: String) extends Setup
class ActorSystemSetupSpec extends WordSpec with Matchers {
"The ActorSystemSettings" should {
"store and retrieve a setup" in {
val setup = DummySetup("Al Dente")
val setups = ActorSystemSetup()
.withSetup(setup)
setups.get[DummySetup] should ===(Some(setup))
setups.get[DummySetup2] should ===(None)
}
"replace setup if already defined" in {
val setup1 = DummySetup("Al Dente")
val setup2 = DummySetup("Earl E. Bird")
val setups = ActorSystemSetup()
.withSetup(setup1)
.withSetup(setup2)
setups.get[DummySetup] should ===(Some(setup2))
}
"provide a fluent creation alternative" in {
val a = DummySetup("Al Dente")
val b = DummySetup("Earl E. Bird") // same type again
val c = DummySetup2("Amanda Reckonwith")
val setups = a and b and c
setups.get[DummySetup] should ===(Some(b))
setups.get[DummySetup2] should ===(Some(c))
}
"be created with a set of setups" in {
val setup1 = DummySetup("Manny Kin")
val setup2 = DummySetup2("Pepe Roni")
val setups = ActorSystemSetup(setup1, setup2)
setups.get[DummySetup].isDefined shouldBe true
setups.get[DummySetup2].isDefined shouldBe true
setups.get[DummySetup3].isDefined shouldBe false
}
"be available from the ExtendedActorSystem" in {
var system: ActorSystem = null
try {
val setup = DummySetup("Tad Moore")
system = ActorSystem("name", ActorSystemSetup(setup))
system
.settings
.setup
.get[DummySetup] should ===(Some(setup))
} finally {
TestKit.shutdownActorSystem(system)
}
}
}
}

View file

@ -0,0 +1,57 @@
/*
* Copyright (C) 2016 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.serialization
import akka.actor.setup.ActorSystemSetup
import akka.actor.{ ActorSystem, BootstrapSetup }
import akka.testkit.AkkaSpec
import com.typesafe.config.ConfigFactory
class ConfigurationDummy
class ProgrammaticDummy
object SerializationSetupSpec {
val testSerializer = new TestSerializer
val serializationSettings = SerializationSetup { _ ⇒
List(
SerializerDetails("test", testSerializer, List(classOf[ProgrammaticDummy]))
)
}
val bootstrapSettings = BootstrapSetup(None, Some(ConfigFactory.parseString("""
akka {
actor {
serialize-messages = off
serialization-bindings {
"akka.serialization.ConfigurationDummy" = test
}
}
}
""")), None)
val actorSystemSettings = ActorSystemSetup(bootstrapSettings, serializationSettings)
}
class SerializationSetupSpec extends AkkaSpec(
ActorSystem("SerializationSettingsSpec", SerializationSetupSpec.actorSystemSettings)
) {
import SerializationSetupSpec._
"The serialization settings" should {
"allow for programmatic configuration of serializers" in {
val serializer = SerializationExtension(system).findSerializerFor(new ProgrammaticDummy)
serializer shouldBe theSameInstanceAs(testSerializer)
}
"allow a configured binding to hook up to a programmatic serializer" in {
val serializer = SerializationExtension(system).findSerializerFor(new ConfigurationDummy)
serializer shouldBe theSameInstanceAs(testSerializer)
}
}
}

View file

@ -65,7 +65,10 @@ import akka.util.Helpers.ConfigOps
* <b>Note:</b> If you want to use an `Act with Stash`, you should use the
* `ActWithStash` trait in order to have the actor get the necessary deque-based
* mailbox setting.
*
* @deprecated Use the normal `actorOf` methods defined on `ActorSystem` and `ActorContext` to create Actors instead.
*/
@deprecated("deprecated Use the normal `actorOf` methods defined on `ActorSystem` and `ActorContext` to create Actors instead.", since = "2.5.0")
object ActorDSL extends dsl.Inbox with dsl.Creators {
protected object Extension extends ExtensionId[Extension] with ExtensionIdProvider {

View file

@ -5,8 +5,9 @@
package akka.actor
import java.io.Closeable
import java.util.concurrent.{ ConcurrentHashMap, ThreadFactory, CountDownLatch, RejectedExecutionException }
import java.util.concurrent.atomic.{ AtomicReference }
import java.util.concurrent.{ ConcurrentHashMap, CountDownLatch, RejectedExecutionException, ThreadFactory }
import java.util.concurrent.atomic.AtomicReference
import com.typesafe.config.{ Config, ConfigFactory }
import akka.event._
import akka.dispatch._
@ -14,13 +15,123 @@ import akka.japi.Util.immutableSeq
import akka.actor.dungeon.ChildrenContainer
import akka.util._
import akka.util.Helpers.toRootLowerCase
import scala.annotation.tailrec
import scala.collection.immutable
import scala.concurrent.duration.{ Duration }
import scala.concurrent.{ Await, Future, Promise, ExecutionContext, ExecutionContextExecutor }
import scala.concurrent.duration.Duration
import scala.concurrent.{ Await, ExecutionContext, ExecutionContextExecutor, Future, Promise }
import scala.util.{ Failure, Success, Try }
import scala.util.control.{ NonFatal, ControlThrowable }
import java.util.Locale
import scala.util.control.{ ControlThrowable, NonFatal }
import java.util.Optional
import akka.actor.setup.{ ActorSystemSetup, Setup }
import scala.compat.java8.OptionConverters._
object BootstrapSetup {
/**
* Scala API: Construct a bootstrap settings with default values. Note that passing that to the actor system is the
* same as not passing any [[BootstrapSetup]] at all. You can use the returned instance to derive
* one that has other values than defaults using the various `with`-methods.
*/
def apply(): BootstrapSetup = {
new BootstrapSetup()
}
/**
* Scala API: Create bootstrap settings needed for starting the actor system
*
* @see [[BootstrapSetup]] for description of the properties
*/
def apply(classLoader: Option[ClassLoader], config: Option[Config], defaultExecutionContext: Option[ExecutionContext]): BootstrapSetup =
new BootstrapSetup(classLoader, config, defaultExecutionContext)
/**
* Scala API: Short for using custom config but keeping default classloader and default execution context
*/
def apply(config: Config): BootstrapSetup = apply(None, Some(config), None)
/**
* Java API: Create bootstrap settings needed for starting the actor system
*
* @see [[BootstrapSetup]] for description of the properties
*/
def create(classLoader: Optional[ClassLoader], config: Optional[Config], defaultExecutionContext: Optional[ExecutionContext]): BootstrapSetup =
apply(classLoader.asScala, config.asScala, defaultExecutionContext.asScala)
/**
* Java API: Short for using custom config but keeping default classloader and default execution context
*/
def create(config: Config): BootstrapSetup = apply(config)
/**
* Java API: Construct a bootstrap settings with default values. Note that passing that to the actor system is the
* same as not passing any [[BootstrapSetup]] at all. You can use the returned instance to derive
* one that has other values than defaults using the various `with`-methods.
*/
def create(): BootstrapSetup = {
new BootstrapSetup()
}
}
abstract class ProviderSelection private (private[akka] val identifier: String)
object ProviderSelection {
case object Local extends ProviderSelection("local")
case object Remote extends ProviderSelection("remote")
case object Cluster extends ProviderSelection("cluster")
/**
* JAVA API
*/
def local(): ProviderSelection = Local
/**
* JAVA API
*/
def remote(): ProviderSelection = Remote
/**
* JAVA API
*/
def cluster(): ProviderSelection = Cluster
}
/**
* Core bootstrap settings of the actor system, create using one of the factories in [[BootstrapSetup]],
* constructor is *Internal API*.
*
* @param classLoader If no ClassLoader is given, it obtains the current ClassLoader by first inspecting the current
* threads' getContextClassLoader, then tries to walk the stack to find the callers class loader, then
* falls back to the ClassLoader associated with the ActorSystem class.
* @param config Configuration to use for the actor system. If no Config is given, the default reference config will be obtained from the ClassLoader.
* @param defaultExecutionContext If defined the ExecutionContext will be used as the default executor inside this ActorSystem.
* If no ExecutionContext is given, the system will fallback to the executor configured under
* "akka.actor.default-dispatcher.default-executor.fallback".
* @param actorRefProvider Overrides the `akka.actor.provider` setting in config, can be `local` (default), `remote` or
* `cluster`. It can also be a fully qualified class name of a provider.
*/
final class BootstrapSetup private (
val classLoader: Option[ClassLoader] = None,
val config: Option[Config] = None,
val defaultExecutionContext: Option[ExecutionContext] = None,
val actorRefProvider: Option[ProviderSelection] = None) extends Setup {
def withClassloader(classLoader: ClassLoader): BootstrapSetup =
new BootstrapSetup(Some(classLoader), config, defaultExecutionContext, actorRefProvider)
def withConfig(config: Config): BootstrapSetup =
new BootstrapSetup(classLoader, Some(config), defaultExecutionContext, actorRefProvider)
def withDefaultExecutionContext(executionContext: ExecutionContext): BootstrapSetup =
new BootstrapSetup(classLoader, config, Some(executionContext), actorRefProvider)
def withActorRefProvider(name: ProviderSelection): BootstrapSetup =
new BootstrapSetup(classLoader, config, defaultExecutionContext, Some(name))
}
object ActorSystem {
@ -56,6 +167,19 @@ object ActorSystem {
*/
def create(name: String): ActorSystem = apply(name)
/**
* Java API: Creates a new actor system with the specified name and settings
* The core actor system settings are defined in [[BootstrapSetup]]
*/
def create(name: String, setups: ActorSystemSetup): ActorSystem = apply(name, setups)
/**
* Java API: Shortcut for creating an actor system with custom bootstrap settings.
* Same behaviour as calling `ActorSystem.create(name, ActorSystemSetup.create(bootstrapSettings))`
*/
def create(name: String, bootstrapSetup: BootstrapSetup): ActorSystem =
create(name, ActorSystemSetup.create(bootstrapSetup))
/**
* Creates a new ActorSystem with the specified name, and the specified Config, then
* obtains the current ClassLoader by first inspecting the current threads' getContextClassLoader,
@ -108,6 +232,26 @@ object ActorSystem {
*/
def apply(name: String): ActorSystem = apply(name, None, None, None)
/**
* Scala API: Creates a new actor system with the specified name and settings
* The core actor system settings are defined in [[BootstrapSetup]]
*/
def apply(name: String, setup: ActorSystemSetup): ActorSystem = {
val bootstrapSettings = setup.get[BootstrapSetup]
val cl = bootstrapSettings.flatMap(_.classLoader).getOrElse(findClassLoader())
val appConfig = bootstrapSettings.flatMap(_.config).getOrElse(ConfigFactory.load(cl))
val defaultEC = bootstrapSettings.flatMap(_.defaultExecutionContext)
new ActorSystemImpl(name, appConfig, cl, defaultEC, None, setup).start()
}
/**
* Scala API: Shortcut for creating an actor system with custom bootstrap settings.
* Same behaviour as calling `ActorSystem(name, ActorSystemSetup(bootstrapSetup))`
*/
def apply(name: String, bootstrapSetup: BootstrapSetup): ActorSystem =
create(name, ActorSystemSetup.create(bootstrapSetup))
/**
* Creates a new ActorSystem with the specified name, and the specified Config, then
* obtains the current ClassLoader by first inspecting the current threads' getContextClassLoader,
@ -136,11 +280,12 @@ object ActorSystem {
*
* @see <a href="http://typesafehub.github.io/config/v1.3.0/" target="_blank">The Typesafe Config Library API Documentation</a>
*/
def apply(name: String, config: Option[Config] = None, classLoader: Option[ClassLoader] = None, defaultExecutionContext: Option[ExecutionContext] = None): ActorSystem = {
val cl = classLoader.getOrElse(findClassLoader())
val appConfig = config.getOrElse(ConfigFactory.load(cl))
new ActorSystemImpl(name, appConfig, cl, defaultExecutionContext, None).start()
}
def apply(
name: String,
config: Option[Config] = None,
classLoader: Option[ClassLoader] = None,
defaultExecutionContext: Option[ExecutionContext] = None): ActorSystem =
apply(name, ActorSystemSetup(BootstrapSetup(classLoader, config, defaultExecutionContext)))
/**
* Settings are the overall ActorSystem Settings which also provides a convenient access to the Config object.
@ -149,7 +294,9 @@ object ActorSystem {
*
* @see <a href="http://typesafehub.github.io/config/v1.3.0/" target="_blank">The Typesafe Config Library API Documentation</a>
*/
class Settings(classLoader: ClassLoader, cfg: Config, final val name: String) {
class Settings(classLoader: ClassLoader, cfg: Config, final val name: String, val setup: ActorSystemSetup) {
def this(classLoader: ClassLoader, cfg: Config, name: String) = this(classLoader, cfg, name, ActorSystemSetup())
/**
* The backing Config of this ActorSystem's Settings
@ -167,13 +314,15 @@ object ActorSystem {
final val ConfigVersion: String = getString("akka.version")
final val ProviderClass: String =
getString("akka.actor.provider") match {
case "local" classOf[LocalActorRefProvider].getName
// these two cannot be referenced by class as they may not be on the classpath
case "remote" "akka.remote.RemoteActorRefProvider"
case "cluster" "akka.cluster.ClusterActorRefProvider"
case fqcn fqcn
}
setup.get[BootstrapSetup]
.flatMap(_.actorRefProvider).map(_.identifier)
.getOrElse(getString("akka.actor.provider")) match {
case "local" classOf[LocalActorRefProvider].getName
// these two cannot be referenced by class as they may not be on the classpath
case "remote" "akka.remote.RemoteActorRefProvider"
case "cluster" "akka.cluster.ClusterActorRefProvider"
case fqcn fqcn
}
final val SupervisorStrategyClass: String = getString("akka.actor.guardian-supervisor-strategy")
final val CreationTimeout: Timeout = Timeout(config.getMillisDuration("akka.actor.creation-timeout"))
@ -517,7 +666,8 @@ private[akka] class ActorSystemImpl(
applicationConfig: Config,
classLoader: ClassLoader,
defaultExecutionContext: Option[ExecutionContext],
val guardianProps: Option[Props]) extends ExtendedActorSystem {
val guardianProps: Option[Props],
setup: ActorSystemSetup) extends ExtendedActorSystem {
if (!name.matches("""^[a-zA-Z0-9][a-zA-Z0-9-_]*$"""))
throw new IllegalArgumentException(
@ -527,7 +677,7 @@ private[akka] class ActorSystemImpl(
import ActorSystem._
@volatile private var logDeadLetterListener: Option[ActorRef] = None
final val settings: Settings = new Settings(classLoader, applicationConfig, name)
final val settings: Settings = new Settings(classLoader, applicationConfig, name, setup)
protected def uncaughtExceptionHandler: Thread.UncaughtExceptionHandler =
new Thread.UncaughtExceptionHandler() {

View file

@ -0,0 +1,84 @@
/*
* Copyright (C) 2016 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.actor.setup
import java.util.Optional
import scala.annotation.varargs
import scala.compat.java8.OptionConverters._
import scala.reflect.ClassTag
/**
* Marker supertype for a setup part that can be put inside [[ActorSystemSetup]], if a specific concrete setup
* is not specified in the actor system setup that means defaults are used (usually from the config file) - no concrete
* setup instance should be mandatory in the [[ActorSystemSetup]] that an actor system is created with.
*/
abstract class Setup {
/**
* Construct an [[ActorSystemSetup]] with this setup combined with another one. Allows for
* fluent creation of settings. If `other` is a setting of the same concrete [[Setup]] as this
* it will replace this.
*/
final def and(other: Setup): ActorSystemSetup = ActorSystemSetup(this, other)
}
object ActorSystemSetup {
val empty = new ActorSystemSetup(Map.empty)
/**
* Scala API: Create an [[ActorSystemSetup]] containing all the provided settings
*/
def apply(settings: Setup*): ActorSystemSetup =
new ActorSystemSetup(settings.map(s ⇒ s.getClass → s).toMap)
/**
* Java API: Create an [[ActorSystemSetup]] containing all the provided settings
*/
@varargs
def create(settings: Setup*): ActorSystemSetup = apply(settings: _*)
}
/**
* A set of setup settings for programmatic configuration of the actor system.
*
* Constructor is *Internal API*. Use the factory methods [[ActorSystemSetup#create]] and [[akka.actor.Actor#apply]] to create
* instances.
*/
final class ActorSystemSetup private[akka] (setups: Map[Class[_], AnyRef]) {
/**
* Java API: Extract a concrete [[Setup]] of type `T` if it is defined in the settings.
*/
def get[T <: Setup](clazz: Class[T]): Optional[T] = {
setups.get(clazz).map(_.asInstanceOf[T]).asJava
}
/**
* Scala API: Extract a concrete [[Setup]] of type `T` if it is defined in the settings.
*/
def get[T <: Setup: ClassTag]: Option[T] = {
val clazz = implicitly[ClassTag[T]].runtimeClass
setups.get(clazz).map(_.asInstanceOf[T])
}
/**
* Add a concrete [[Setup]]. If a setting of the same concrete [[Setup]] already is
* present it will be replaced.
*/
def withSetup[T <: Setup](t: T): ActorSystemSetup = {
new ActorSystemSetup(setups + (t.getClass → t))
}
/**
* alias for `withSetup` allowing for fluent combination of settings: `a and b and c`, where `a`, `b` and `c` are
* concrete [[Setup]] instances. If a setting of the same concrete [[Setup]] already is
* present it will be replaced.
*/
def and[T <: Setup](t: T): ActorSystemSetup = withSetup(t)
override def toString: String = s"""ActorSystemSettings(${setups.keys.map(_.getName).mkString(",")})"""
}

View file

@ -4,17 +4,19 @@
package akka.dispatch
import akka.actor.{ ActorCell }
import akka.actor.ActorCell
import akka.dispatch.sysmsg._
import scala.annotation.tailrec
import scala.concurrent.duration.Duration
import akka.util.Helpers
import java.util.{ Comparator, Iterator }
import java.util.concurrent.{ ConcurrentSkipListSet }
import java.util.concurrent.ConcurrentSkipListSet
import akka.actor.ActorSystemImpl
import scala.concurrent.duration.FiniteDuration
/**
* INTERNAL API: Use `BalancingPool` instead of this dispatcher directly.
*
* An executor based event driven dispatcher which will try to redistribute work from busy actors to idle actors. It is assumed
* that all actors using the same instance of this dispatcher can process all messages that have been sent to one of the actors. I.e. the
* actors belong to a pool of actors, and to the client there is no guarantee about which actor instance actually processes a given message.
@ -29,7 +31,7 @@ import scala.concurrent.duration.FiniteDuration
* @see akka.dispatch.Dispatchers
*/
@deprecated("Use BalancingPool instead of BalancingDispatcher", "2.3")
class BalancingDispatcher(
private[akka] class BalancingDispatcher(
_configurator: MessageDispatcherConfigurator,
_id: String,
throughput: Int,

View file

@ -253,20 +253,40 @@ class Serialization(val system: ExtendedActorSystem) extends Extension {
case _: NoSuchMethodException ⇒ system.dynamicAccess.createInstanceFor[Serializer](serializerFQN, Nil)
}
/**
* Programmatically defined serializers
*/
private val serializerDetails =
system.settings.setup.get[SerializationSetup] match {
case None ⇒ Vector.empty
case Some(setting) ⇒ setting.createSerializers(system)
}
/**
* A Map of serializer from alias to implementation (class implementing akka.serialization.Serializer)
* By default always contains the following mapping: "java" -> akka.serialization.JavaSerializer
*/
private val serializers: Map[String, Serializer] =
for ((k: String, v: String) ← settings.Serializers) yield k → serializerOf(v).get
private val serializers: Map[String, Serializer] = {
val fromConfig = for ((k: String, v: String) ← settings.Serializers) yield k → serializerOf(v).get
fromConfig ++ serializerDetails.map(d ⇒ d.alias → d.serializer)
}
/**
* bindings is a Seq of tuple representing the mapping from Class to Serializer.
* It is primarily ordered by the most specific classes first, and secondly in the configured order.
*/
private[akka] val bindings: immutable.Seq[ClassSerializer] =
sort(for ((k: String, v: String) ← settings.SerializationBindings if v != "none" && checkGoogleProtobuf(k))
yield (system.dynamicAccess.getClassFor[Any](k).get, serializers(v))).to[immutable.Seq]
private[akka] val bindings: immutable.Seq[ClassSerializer] = {
val fromConfig = for {
(className: String, alias: String) ← settings.SerializationBindings
if alias != "none" && checkGoogleProtobuf(className)
} yield (system.dynamicAccess.getClassFor[Any](className).get, serializers(alias))
val fromSettings = serializerDetails.flatMap { detail ⇒
detail.useFor.map(clazz ⇒ clazz → detail.serializer)
}
sort(fromConfig ++ fromSettings)
}
// com.google.protobuf serialization binding is only used if the class can be loaded,
// i.e. com.google.protobuf dependency has been added in the application project.

View file

@ -0,0 +1,69 @@
/*
* Copyright (C) 2016 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.serialization
import akka.actor.ExtendedActorSystem
import akka.actor.setup.Setup
import scala.collection.immutable
import scala.collection.JavaConverters._
object SerializationSetup {
/**
* Scala API: Programmatic definition of serializers
* @param createSerializers create pairs of serializer and the set of classes it should be used for
*/
def apply(createSerializers: ExtendedActorSystem ⇒ immutable.Seq[SerializerDetails]): SerializationSetup = {
new SerializationSetup(createSerializers)
}
/**
* Java API: Programmatic definition of serializers
* @param createSerializers create pairs of serializer and the set of classes it should be used for
*/
def create(
createSerializers: akka.japi.Function[ExtendedActorSystem, java.util.List[SerializerDetails]]): SerializationSetup =
apply(sys ⇒ createSerializers(sys).asScala.toVector)
}
/**
* Setup for the serialization subsystem, constructor is *Internal API*, use factories in [[SerializationSetup()]]
*/
final class SerializationSetup private (
val createSerializers: ExtendedActorSystem ⇒ immutable.Seq[SerializerDetails]
) extends Setup
object SerializerDetails {
/**
* Scala API: factory for details about one programmatically setup serializer
*
* @param alias Register the serializer under this alias (this allows it to be used by bindings in the config)
* @param useFor A set of classes or superclasses to bind to the serializer, selection works just as if
* the classes, the alias and the serializer had been in the config.
*/
def apply(alias: String, serializer: Serializer, useFor: immutable.Seq[Class[_]]): SerializerDetails =
new SerializerDetails(alias, serializer, useFor)
/**
* Java API: factory for details about one programmatically setup serializer
*
* @param alias Register the serializer under this alias (this allows it to be used by bindings in the config)
* @param useFor A set of classes or superclasses to bind to the serializer, selection works just as if
* the classes, the alias and the serializer had been in the config.
*/
def create(alias: String, serializer: Serializer, useFor: java.util.List[Class[_]]): SerializerDetails =
apply(alias, serializer, useFor.asScala.toVector)
}
/**
* Constructor is internal API: Use the factories [[SerializerDetails#create]] or [[SerializerDetails#apply]]
* to construct
*/
final class SerializerDetails private (
val alias: String,
val serializer: Serializer,
val useFor: immutable.Seq[Class[_]])

View file

@ -120,6 +120,15 @@ abstract class SerializerWithStringManifest extends Serializer {
/**
* Produces an object from an array of bytes, with an optional type-hint;
* the class should be loaded using ActorSystem.dynamicAccess.
*
* It's recommended to throw `java.io.NotSerializableException` in `fromBinary`
* if the manifest is unknown. This makes it possible to introduce new message
* types and send them to nodes that don't know about them. This is typically
* needed when performing rolling upgrades, i.e. running a cluster with mixed
* versions for a while. `NotSerializableException` is treated as a transient
* problem in the TCP based remoting layer. The problem will be logged
* and the message is dropped. Other exceptions will tear down the TCP connection
* because it can be an indication of corrupt bytes from the underlying transport.
*/
def fromBinary(bytes: Array[Byte], manifest: String): AnyRef

View file

@ -16,6 +16,7 @@ import akka.protobuf.{ ByteString, MessageLite }
import scala.annotation.tailrec
import scala.collection.JavaConverters.{ asJavaIterableConverter, asScalaBufferConverter, setAsJavaSetConverter }
import akka.serialization.SerializerWithStringManifest
import java.io.NotSerializableException
/**
* Protobuf serializer for [[akka.cluster.metrics.ClusterMetricsMessage]] types.
@ -66,7 +67,7 @@ class MessageSerializer(val system: ExtendedActorSystem) extends SerializerWithS
override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef = manifest match {
case MetricsGossipEnvelopeManifest ⇒ metricsGossipEnvelopeFromBinary(bytes)
case _ ⇒ throw new IllegalArgumentException(
case _ ⇒ throw new NotSerializableException(
s"Unimplemented deserialization of message with manifest [$manifest] in [${getClass.getName}")
}

View file

@ -21,6 +21,7 @@ import akka.serialization.Serialization
import akka.serialization.SerializationExtension
import akka.serialization.SerializerWithStringManifest
import akka.protobuf.MessageLite
import java.io.NotSerializableException
/**
* INTERNAL API: Protobuf serializer of ClusterSharding messages.
@ -159,7 +160,7 @@ private[akka] class ClusterShardingMessageSerializer(val system: ExtendedActorSy
override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
fromBinaryMap.get(manifest) match {
case Some(f) ⇒ f(bytes)
case None ⇒ throw new IllegalArgumentException(
case None ⇒ throw new NotSerializableException(
s"Unimplemented deserialization of message with manifest [$manifest] in [${getClass.getName}]")
}

View file

@ -10,6 +10,7 @@ import akka.serialization.SerializationExtension
import akka.serialization.SerializerWithStringManifest
import akka.cluster.client.ClusterReceptionist
import akka.cluster.client.protobuf.msg.{ ClusterClientMessages cm }
import java.io.NotSerializableException
/**
* INTERNAL API: Serializer of ClusterClient messages.
@ -54,7 +55,7 @@ private[akka] class ClusterClientMessageSerializer(val system: ExtendedActorSyst
override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
fromBinaryMap.get(manifest) match {
case Some(f) ⇒ f(bytes)
case None ⇒ throw new IllegalArgumentException(
case None ⇒ throw new NotSerializableException(
s"Unimplemented deserialization of message with manifest [$manifest] in [${getClass.getName}]")
}

View file

@ -21,6 +21,7 @@ import akka.actor.ActorRef
import akka.serialization.SerializationExtension
import scala.collection.immutable.TreeMap
import akka.serialization.SerializerWithStringManifest
import java.io.NotSerializableException
/**
* INTERNAL API: Protobuf serializer of DistributedPubSubMediator messages.
@ -72,7 +73,7 @@ private[akka] class DistributedPubSubMessageSerializer(val system: ExtendedActor
override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
fromBinaryMap.get(manifest) match {
case Some(f) ⇒ f(bytes)
case None ⇒ throw new IllegalArgumentException(
case None ⇒ throw new NotSerializableException(
s"Unimplemented deserialization of message with manifest [$manifest] in [${getClass.getName}]")
}

View file

@ -11,6 +11,7 @@ import akka.cluster.singleton.ClusterSingletonManager.Internal.TakeOverFromMe
import akka.serialization.BaseSerializer
import akka.serialization.SerializationExtension
import akka.serialization.SerializerWithStringManifest
import java.io.NotSerializableException
/**
* INTERNAL API: Serializer of ClusterSingleton messages.
@ -56,7 +57,7 @@ private[akka] class ClusterSingletonMessageSerializer(val system: ExtendedActorS
override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
fromBinaryMap.get(manifest) match {
case Some(f) ⇒ f(bytes)
case None ⇒ throw new IllegalArgumentException(
case None ⇒ throw new NotSerializableException(
s"Unimplemented deserialization of message with manifest [$manifest] in [${getClass.getName}]")
}

View file

@ -178,11 +178,8 @@ private[cluster] class Reachability private (
def remove(nodes: Iterable[UniqueAddress]): Reachability = {
val nodesSet = nodes.to[immutable.HashSet]
val newRecords = records.filterNot(r ⇒ nodesSet(r.observer) || nodesSet(r.subject))
if (newRecords.size == records.size) this
else {
val newVersions = versions -- nodes
Reachability(newRecords, newVersions)
}
val newVersions = versions -- nodes
Reachability(newRecords, newVersions)
}
def removeObservers(nodes: Set[UniqueAddress]): Reachability =
@ -190,11 +187,8 @@ private[cluster] class Reachability private (
this
else {
val newRecords = records.filterNot(r ⇒ nodes(r.observer))
if (newRecords.size == records.size) this
else {
val newVersions = versions -- nodes
Reachability(newRecords, newVersions)
}
val newVersions = versions -- nodes
Reachability(newRecords, newVersions)
}
def status(observer: UniqueAddress, subject: UniqueAddress): ReachabilityStatus =

View file

@ -18,6 +18,7 @@ import scala.annotation.tailrec
import scala.collection.JavaConverters._
import scala.collection.immutable
import scala.concurrent.duration.Deadline
import java.io.NotSerializableException
/**
* Protobuf serializer of cluster messages.
@ -107,7 +108,7 @@ class ClusterMessageSerializer(val system: ExtendedActorSystem) extends BaseSeri
def fromBinary(bytes: Array[Byte], clazz: Option[Class[_]]): AnyRef = clazz match {
case Some(c) ⇒ fromBinaryMap.get(c.asInstanceOf[Class[ClusterMessage]]) match {
case Some(f) ⇒ f(bytes)
case None ⇒ throw new IllegalArgumentException(s"Unimplemented deserialization of message class $c in ClusterSerializer")
case None ⇒ throw new NotSerializableException(s"Unimplemented deserialization of message class $c in ClusterSerializer")
}
case _ ⇒ throw new IllegalArgumentException("Need a cluster message class to be able to deserialize bytes in ClusterSerializer")
}
@ -175,8 +176,7 @@ class ClusterMessageSerializer(val system: ExtendedActorSystem) extends BaseSeri
} else {
// old remote node
uniqueAddress.getUid.toLong
}
)
})
}
private val memberStatusToInt = scala.collection.immutable.HashMap[MemberStatus, Int](

View file

@ -224,5 +224,16 @@ class ReachabilitySpec extends WordSpec with Matchers {
r.status(nodeB, nodeE) should ===(Reachable)
}
"remove correctly after pruning" in {
val r = Reachability.empty.
unreachable(nodeB, nodeA).unreachable(nodeB, nodeC).
unreachable(nodeD, nodeC).
reachable(nodeB, nodeA).reachable(nodeB, nodeC)
r.records should ===(Vector(Record(nodeD, nodeC, Unreachable, 1L)))
val r2 = r.remove(List(nodeB))
r2.allObservers should ===(Set(nodeD))
r2.versions.keySet should ===(Set(nodeD))
}
}
}

View file

@ -44,6 +44,7 @@ import akka.actor.ExtendedActorSystem
import akka.actor.SupervisorStrategy
import akka.actor.OneForOneStrategy
import akka.actor.ActorInitializationException
import java.util.concurrent.TimeUnit
object ReplicatorSettings {
@ -894,30 +895,39 @@ final class Replicator(settings: ReplicatorSettings) extends Actor with ActorLog
else normalReceive
val load: Receive = {
case LoadData(data) ⇒
data.foreach {
case (key, d) ⇒
val envelope = DataEnvelope(d)
write(key, envelope) match {
case Some(newEnvelope) ⇒
if (newEnvelope.data ne envelope.data)
durableStore ! Store(key, newEnvelope.data, None)
case None ⇒
}
}
case LoadAllCompleted ⇒
context.become(normalReceive)
self ! FlushChanges
val startTime = System.nanoTime()
var count = 0
case GetReplicaCount ⇒
// 0 until durable data has been loaded, used by test
sender() ! ReplicaCount(0)
{
case LoadData(data) ⇒
count += data.size
data.foreach {
case (key, d) ⇒
val envelope = DataEnvelope(d)
write(key, envelope) match {
case Some(newEnvelope) ⇒
if (newEnvelope.data ne envelope.data)
durableStore ! Store(key, newEnvelope.data, None)
case None ⇒
}
}
case LoadAllCompleted ⇒
log.debug(
"Loading {} entries from durable store took {} ms",
count, TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - startTime))
context.become(normalReceive)
self ! FlushChanges
case RemovedNodePruningTick | FlushChanges | GossipTick ⇒
// ignore scheduled ticks when loading durable data
case m @ (_: Read | _: Write | _: Status | _: Gossip) ⇒
// ignore gossip and replication when loading durable data
log.debug("ignoring message [{}] when loading durable data", m.getClass.getName)
case GetReplicaCount ⇒
// 0 until durable data has been loaded, used by test
sender() ! ReplicaCount(0)
case RemovedNodePruningTick | FlushChanges | GossipTick ⇒
// ignore scheduled ticks when loading durable data
case m @ (_: Read | _: Write | _: Status | _: Gossip) ⇒
// ignore gossip and replication when loading durable data
log.debug("ignoring message [{}] when loading durable data", m.getClass.getName)
}
}
val normalReceive: Receive = {

View file

@ -21,6 +21,7 @@ import akka.protobuf.ByteString
import akka.util.ByteString.UTF_8
import scala.collection.immutable.TreeMap
import akka.cluster.UniqueAddress
import java.io.NotSerializableException
/**
* Protobuf serializer of ReplicatedData.
@ -126,7 +127,7 @@ class ReplicatedDataSerializer(val system: ExtendedActorSystem)
override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
fromBinaryMap.get(manifest) match {
case Some(f) ⇒ f(bytes)
case None ⇒ throw new IllegalArgumentException(
case None ⇒ throw new NotSerializableException(
s"Unimplemented deserialization of message with manifest [$manifest] in [${getClass.getName}]")
}

View file

@ -27,6 +27,7 @@ import scala.annotation.tailrec
import scala.concurrent.duration.FiniteDuration
import akka.cluster.ddata.DurableStore.DurableDataEnvelope
import akka.cluster.ddata.DurableStore.DurableDataEnvelope
import java.io.NotSerializableException
/**
* INTERNAL API
@ -235,7 +236,7 @@ class ReplicatorMessageSerializer(val system: ExtendedActorSystem)
override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
fromBinaryMap.get(manifest) match {
case Some(f) ⇒ f(bytes)
case None ⇒ throw new IllegalArgumentException(
case None ⇒ throw new NotSerializableException(
s"Unimplemented deserialization of message with manifest [$manifest] in [${getClass.getName}]")
}

View file

@ -112,7 +112,7 @@ abstract class DurableDataSpec(multiNodeConfig: DurableDataSpecConfig)
runOn(first) {
val r = newReplicator()
within(5.seconds) {
within(10.seconds) {
awaitAssert {
r ! GetReplicaCount
expectMsg(ReplicaCount(1))
@ -158,7 +158,7 @@ abstract class DurableDataSpec(multiNodeConfig: DurableDataSpecConfig)
join(second, first)
val r = newReplicator()
within(5.seconds) {
within(10.seconds) {
awaitAssert {
r ! GetReplicaCount
expectMsg(ReplicaCount(2))
@ -247,7 +247,7 @@ abstract class DurableDataSpec(multiNodeConfig: DurableDataSpecConfig)
new TestKit(sys1) with ImplicitSender {
val r = newReplicator(sys1)
within(5.seconds) {
within(10.seconds) {
awaitAssert {
r ! GetReplicaCount
expectMsg(ReplicaCount(1))

View file

@ -3,7 +3,7 @@ Books
* `Mastering Akka <https://www.packtpub.com/application-development/mastering-akka>`_, by Christian Baxter, PACKT Publishing, ISBN: 9781786465023, October 2016
* `Learning Akka <https://www.packtpub.com/application-development/learning-akka>`_, by Jason Goodwin, PACKT Publishing, ISBN: 9781784393007, December 2015
* `Akka in Action <http://www.lightbend.com/resources/e-book/akka-in-action>`_, by Raymond Roestenburg and Rob Bakker, Manning Publications Co., ISBN: 9781617291012, estimated in 2016
* `Akka in Action <http://www.lightbend.com/resources/e-book/akka-in-action>`_, by Raymond Roestenburg and Rob Bakker, Manning Publications Co., ISBN: 9781617291012, September 2016
* `Reactive Messaging Patterns with the Actor Model <http://www.informit.com/store/reactive-messaging-patterns-with-the-actor-model-applications-9780133846836>`_, by Vaughn Vernon, Addison-Wesley Professional, ISBN: 0133846830, August 2015
* `Developing an Akka Edge <http://bleedingedgepress.com/our-books/developing-an-akka-edge/>`_, by Thomas Lockney and Raymond Tay, Bleeding Edge Press, ISBN: 9781939902054, April 2014
* `Effective Akka <http://shop.oreilly.com/product/0636920028789.do>`_, by Jamie Allen, O'Reilly Media, ISBN: 1449360076, August 2013

View file

@ -610,7 +610,7 @@ Router Example with Pool of Remote Deployed Routees
Let's take a look at how to use a cluster aware router on a single master node that creates
and deploys workers. To keep track of a single master we use the :ref:`cluster-singleton-java`
in the contrib module. The ``ClusterSingletonManager`` is started on each node.
in the cluster-tools module. The ``ClusterSingletonManager`` is started on each node.
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/java/sample/cluster/stats/StatsSampleOneMasterMain.java#create-singleton-manager
@ -754,4 +754,4 @@ For this purpose you can define a separate dispatcher to be used for the cluster
Use dedicated dispatchers for such actors/tasks instead of running them on the default-dispatcher,
because that may starve system internal tasks.
Related config properties: ``akka.cluster.use-dispatcher = akka.cluster.cluster-dispatcher``.
Corresponding default values: ``akka.cluster.use-dispatcher =``.
Corresponding default values: ``akka.cluster.use-dispatcher =``.

View file

@ -527,31 +527,6 @@ public class LambdaPersistenceDocTest {
}
};
static Object o12 = new Object() {
//#view
class MyView extends AbstractPersistentView {
@Override public String persistenceId() { return "some-persistence-id"; }
@Override public String viewId() { return "some-persistence-id-view"; }
public MyView() {
receive(ReceiveBuilder.
match(Object.class, p -> isPersistent(), persistent -> {
// ...
}).build()
);
}
}
//#view
public void usage() {
final ActorSystem system = ActorSystem.create("example");
//#view-update
final ActorRef view = system.actorOf(Props.create(MyView.class));
view.tell(Update.create(true), null);
//#view-update
}
};
static Object o14 = new Object() {
//#safe-shutdown
final class Shutdown {

View file

@ -512,37 +512,6 @@ public class PersistenceDocTest {
}
};
static Object o14 = new Object() {
//#view
class MyView extends UntypedPersistentView {
@Override
public String persistenceId() { return "some-persistence-id"; }
@Override
public String viewId() { return "my-stable-persistence-view-id"; }
@Override
public void onReceive(Object message) throws Exception {
if (isPersistent()) {
// handle message from Journal...
} else if (message instanceof String) {
// handle message from user...
} else {
unhandled(message);
}
}
}
//#view
public void usage() {
final ActorSystem system = ActorSystem.create("example");
//#view-update
final ActorRef view = system.actorOf(Props.create(MyView.class));
view.tell(Update.create(true), null);
//#view-update
}
};
static Object o13 = new Object() {
//#safe-shutdown
final class Shutdown {}

View file

@ -54,10 +54,6 @@ Architecture
When a persistent actor is started or restarted, journaled messages are replayed to that actor so that it can
recover internal state from these messages.
* *AbstractPersistentView*: A view is a persistent, stateful actor that receives journaled messages that have been written by another
persistent actor. A view itself does not journal new messages, instead, it updates internal state only from a persistent actor's
replicated message stream.
* *AbstractPersistentActorAtLeastOnceDelivery*: To send messages with at-least-once delivery semantics to destinations, also in
case of sender and receiver JVM crashes.
@ -457,90 +453,6 @@ mechanism when ``persist()`` is used. Notice the early stop behaviour that occur
.. includecode:: code/docs/persistence/LambdaPersistenceDocTest.java#safe-shutdown-example-bad
.. includecode:: code/docs/persistence/LambdaPersistenceDocTest.java#safe-shutdown-example-good
.. _persistent-views-java-lambda:
Persistent Views
================
.. warning::
``AbstractPersistentView`` is deprecated. Use :ref:`persistence-query-java` instead. The corresponding
query type is ``EventsByPersistenceId``. There are several alternatives for connecting the ``Source``
to an actor corresponding to a previous ``UntypedPersistentView`` actor:
* `Sink.actorRef`_ is simple, but has the disadvantage that there is no back-pressure signal from the
destination actor, i.e. if the actor is not consuming the messages fast enough the mailbox of the actor will grow
* `mapAsync`_ combined with :ref:`actors-ask-lambda` is almost as simple with the advantage of back-pressure
being propagated all the way
* `ActorSubscriber`_ in case you need more fine grained control
The consuming actor may be a plain ``AbstractActor`` or an ``AbstractPersistentActor`` if it needs to store its
own state (e.g. fromSequenceNr offset).
.. _Sink.actorRef: http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/java/stream-integrations.html#Sink_actorRef
.. _mapAsync: http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/stages-overview.html#Asynchronous_processing_stages
.. _ActorSubscriber: http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/java/stream-integrations.html#ActorSubscriber
Persistent views can be implemented by extending the ``AbstractPersistentView`` abstract class, implementing the ``persistenceId`` method
and setting the “initial behavior” in the constructor by calling the :meth:`receive` method.
.. includecode:: code/docs/persistence/LambdaPersistenceDocTest.java#view
The ``persistenceId`` identifies the persistent actor from which the view receives journaled messages. It is not necessary that
the referenced persistent actor is actually running. Views read messages from a persistent actor's journal directly. When a
persistent actor is started later and begins to write new messages, by default the corresponding view is updated automatically.
It is possible to determine if a message was sent from the Journal or from another actor in user-land by calling the ``isPersistent``
method. Having that said, very often you don't need this information at all and can simply apply the same logic to both cases
(skip the ``if isPersistent`` check).
Updates
-------
The default update interval of all persistent views of an actor system is configurable:
.. includecode:: ../scala/code/docs/persistence/PersistenceDocSpec.scala#auto-update-interval
``AbstractPersistentView`` implementation classes may also override the ``autoUpdateInterval`` method to return a custom update
interval for a specific view class or view instance. Applications may also trigger additional updates at
any time by sending a view an ``Update`` message.
.. includecode:: code/docs/persistence/LambdaPersistenceDocTest.java#view-update
If the ``await`` parameter is set to ``true``, messages that follow the ``Update`` request are processed when the
incremental message replay, triggered by that update request, completed. If set to ``false`` (default), messages
following the update request may interleave with the replayed message stream. Automated updates always run with
``await = false``.
Automated updates of all persistent views of an actor system can be turned off by configuration:
.. includecode:: ../scala/code/docs/persistence/PersistenceDocSpec.scala#auto-update
Implementation classes may override the configured default value by overriding the ``autoUpdate`` method. To
limit the number of replayed messages per update request, applications can configure a custom
``akka.persistence.view.auto-update-replay-max`` value or override the ``autoUpdateReplayMax`` method. The number
of replayed messages for manual updates can be limited with the ``replayMax`` parameter of the ``Update`` message.
Recovery
--------
Initial recovery of persistent views works the very same way as for persistent actors (i.e. by sending a ``Recover`` message
to self). The maximum number of replayed messages during initial recovery is determined by ``autoUpdateReplayMax``.
Further possibilities to customize initial recovery are explained in section :ref:`recovery-java-lambda`.
.. _persistence-identifiers-java-lambda:
Identifiers
-----------
A persistent view must have an identifier that doesn't change across different actor incarnations.
The identifier must be defined with the ``viewId`` method.
The ``viewId`` must differ from the referenced ``persistenceId``, unless :ref:`snapshots-java-lambda` of a view and its
persistent actor should be shared (which is what applications usually do not want).
.. _snapshots-java-lambda:
Snapshots
=========
@ -870,15 +782,21 @@ A journal plugin can be activated with the following minimal configuration:
.. includecode:: ../scala/code/docs/persistence/PersistencePluginDocSpec.scala#journal-plugin-config
The specified plugin ``class`` must have a no-arg constructor. The ``plugin-dispatcher`` is the dispatcher
used for the plugin actor. If not specified, it defaults to ``akka.persistence.dispatchers.default-plugin-dispatcher``.
The journal plugin instance is an actor so the methods corresponding to requests from persistent actors
are executed sequentially. It may delegate to asynchronous libraries, spawn futures, or delegate to other
actors to achieve parallelism.
The journal plugin class must have a constructor without parameters or a constructor with one ``com.typesafe.config.Config``
parameter. The plugin section of the actor system's config will be passed in the config constructor parameter.
The journal plugin class must have a constructor with one of these signatures:
* constructor with one ``com.typesafe.config.Config`` parameter and a ``String`` parameter for the config path
* constructor with one ``com.typesafe.config.Config`` parameter
* constructor without parameters
The plugin section of the actor system's config will be passed in the config constructor parameter. The config path
of the plugin is passed in the ``String`` parameter.
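As a rough sketch (written in Scala for brevity), a plugin exposing the recommended ``Config`` plus config-path constructor could look like this; the class name is made up and the method bodies are intentionally left unimplemented::
import scala.collection.immutable
import scala.concurrent.Future
import scala.util.Try
import com.typesafe.config.Config
import akka.persistence.{ AtomicWrite, PersistentRepr }
import akka.persistence.journal.AsyncWriteJournal
// hypothetical plugin; `config` is the plugin's own config section, `configPath` is its path in the actor system config
class MyJournal(config: Config, configPath: String) extends AsyncWriteJournal {
  def asyncWriteMessages(messages: immutable.Seq[AtomicWrite]): Future[immutable.Seq[Try[Unit]]] = ???
  def asyncDeleteMessagesTo(persistenceId: String, toSequenceNr: Long): Future[Unit] = ???
  def asyncReplayMessages(persistenceId: String, fromSequenceNr: Long, toSequenceNr: Long, max: Long)(
    recoveryCallback: PersistentRepr ⇒ Unit): Future[Unit] = ???
  def asyncReadHighestSequenceNr(persistenceId: String, fromSequenceNr: Long): Future[Long] = ???
}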
The ``plugin-dispatcher`` is the dispatcher used for the plugin actor. If not specified, it defaults to
``akka.persistence.dispatchers.default-plugin-dispatcher``.
Don't run journal tasks/futures on the system default dispatcher, since that might starve other tasks.
@ -893,15 +811,21 @@ A snapshot store plugin can be activated with the following minimal configuratio
.. includecode:: ../scala/code/docs/persistence/PersistencePluginDocSpec.scala#snapshot-store-plugin-config
The specified plugin ``class`` must have a no-arg constructor. The ``plugin-dispatcher`` is the dispatcher
used for the plugin actor. If not specified, it defaults to ``akka.persistence.dispatchers.default-plugin-dispatcher``.
The snapshot store instance is an actor so the methods corresponding to requests from persistent actors
are executed sequentially. It may delegate to asynchronous libraries, spawn futures, or delegate to other
actors to achive parallelism.
The snapshot store plugin class must have a constructor without parameters or constructor with one ``com.typesafe.config.Config``
parameter. The plugin section of the actor system's config will be passed in the config constructor parameter.
The snapshot store plugin class must have a constructor with one of these signatures:
* constructor with one ``com.typesafe.config.Config`` parameter and a ``String`` parameter for the config path
* constructor with one ``com.typesafe.config.Config`` parameter
* constructor without parameters
The plugin section of the actor system's config will be passed in the config constructor parameter. The config path
of the plugin is passed in the ``String`` parameter.
The ``plugin-dispatcher`` is the dispatcher used for the plugin actor. If not specified, it defaults to
``akka.persistence.dispatchers.default-plugin-dispatcher``.
Don't run snapshot store tasks/futures on the system default dispatcher, since that might starve other tasks.

View file

@ -248,6 +248,16 @@ And the ``EventsByTag`` could be backed by such an Actor for example:
.. includecode:: code/docs/persistence/query/MyEventsByTagJavaPublisher.java#events-by-tag-publisher
The ``ReadJournalProvider`` class must have a constructor with one of these signatures:
* constructor with a ``ExtendedActorSystem`` parameter, a ``com.typesafe.config.Config`` parameter, and a ``String`` parameter for the config path
* constructor with a ``ExtendedActorSystem`` parameter, and a ``com.typesafe.config.Config`` parameter
* constructor with one ``ExtendedActorSystem`` parameter
* constructor without parameters
The plugin section of the actor system's config will be passed in the config constructor parameter. The config path
of the plugin is passed in the ``String`` parameter.
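As a rough sketch, a provider using the fullest of these signatures might look like this; the class name is made up and the two factory methods are left unimplemented::
import com.typesafe.config.Config
import akka.actor.ExtendedActorSystem
import akka.persistence.query.ReadJournalProvider
// hypothetical provider; `config` is the plugin's own config section, `configPath` is its path in the actor system config
class MyReadJournalProvider(system: ExtendedActorSystem, config: Config, configPath: String)
  extends ReadJournalProvider {
  def scaladslReadJournal(): akka.persistence.query.scaladsl.ReadJournal = ???
  def javadslReadJournal(): akka.persistence.query.javadsl.ReadJournal = ???
}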
If the underlying datastore only supports queries that are completed when they reach the
end of the "result set", the journal has to submit new queries after a while in order
to support "infinite" event streams that include events stored after the initial query

View file

@ -58,10 +58,6 @@ Architecture
When a persistent actor is started or restarted, journaled messages are replayed to that actor so that it can
recover internal state from these messages.
* *UntypedPersistentView*: A view is a persistent, stateful actor that receives journaled messages that have been written by another
persistent actor. A view itself does not journal new messages, instead, it updates internal state only from a persistent actor's
replicated message stream.
* *UntypedPersistentActorAtLeastOnceDelivery*: To send messages with at-least-once delivery semantics to destinations, also in
case of sender and receiver JVM crashes.
@ -518,91 +514,6 @@ For example, if you configure the replay filter for leveldb plugin, it looks lik
}
.. _persistent-views-java:
Persistent Views
================
.. warning::
``UntypedPersistentView`` is deprecated. Use :ref:`persistence-query-java` instead. The corresponding
query type is ``EventsByPersistenceId``. There are several alternatives for connecting the ``Source``
to an actor corresponding to a previous ``UntypedPersistentView`` actor:
* `Sink.actorRef`_ is simple, but has the disadvantage that there is no back-pressure signal from the
destination actor, i.e. if the actor is not consuming the messages fast enough the mailbox of the actor will grow
* `mapAsync`_ combined with :ref:`actors-ask-lambda` is almost as simple with the advantage of back-pressure
being propagated all the way
* `ActorSubscriber`_ in case you need more fine grained control
The consuming actor may be a plain ``UntypedActor`` or an ``UntypedPersistentActor`` if it needs to store its
own state (e.g. fromSequenceNr offset).
.. _Sink.actorRef: http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/java/stream-integrations.html#Sink_actorRef
.. _mapAsync: http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/stages-overview.html#Asynchronous_processing_stages
.. _ActorSubscriber: http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/java/stream-integrations.html#ActorSubscriber
Persistent views can be implemented by extending the ``UntypedPersistentView`` trait and implementing the ``onReceive``
and the ``persistenceId`` methods.
.. includecode:: code/docs/persistence/PersistenceDocTest.java#view
The ``persistenceId`` identifies the persistent actor from which the view receives journaled messages. It is not necessary that
the referenced persistent actor is actually running. Views read messages from a persistent actor's journal directly. When a
persistent actor is started later and begins to write new messages, by
default the corresponding view is updated automatically.
It is possible to determine if a message was sent from the Journal or from another actor in user-land by calling the ``isPersistent``
method. Having that said, very often you don't need this information at all and can simply apply the same logic to both cases
(skip the ``if isPersistent`` check).
Updates
-------
The default update interval of all persistent views of an actor system is configurable:
.. includecode:: ../scala/code/docs/persistence/PersistenceDocSpec.scala#auto-update-interval
``UntypedPersistentView`` implementation classes may also override the ``autoUpdateInterval`` method to return a custom update
interval for a specific view class or view instance. Applications may also trigger additional updates at
any time by sending a view an ``Update`` message.
.. includecode:: code/docs/persistence/PersistenceDocTest.java#view-update
If the ``await`` parameter is set to ``true``, messages that follow the ``Update`` request are processed when the
incremental message replay, triggered by that update request, completed. If set to ``false`` (default), messages
following the update request may interleave with the replayed message stream. Automated updates always run with
``await = false``.
Automated updates of all persistent views of an actor system can be turned off by configuration:
.. includecode:: ../scala/code/docs/persistence/PersistenceDocSpec.scala#auto-update
Implementation classes may override the configured default value by overriding the ``autoUpdate`` method. To
limit the number of replayed messages per update request, applications can configure a custom
``akka.persistence.view.auto-update-replay-max`` value or override the ``autoUpdateReplayMax`` method. The number
of replayed messages for manual updates can be limited with the ``replayMax`` parameter of the ``Update`` message.
Recovery
--------
Initial recovery of persistent views works the very same way as for persistent actors (i.e. by sending a ``Recover`` message
to self). The maximum number of replayed messages during initial recovery is determined by ``autoUpdateReplayMax``.
Further possibilities to customize initial recovery are explained in section :ref:`recovery-java`.
.. _persistence-identifiers-java:
Identifiers
-----------
A persistent view must have an identifier that doesn't change across different actor incarnations.
The identifier must be defined with the ``viewId`` method.
The ``viewId`` must differ from the referenced ``persistenceId``, unless :ref:`snapshots-java` of a view and its
persistent actor should be shared (which is what applications usually do not want).
.. _snapshots-java:
Snapshots
=========
@ -874,8 +785,14 @@ The journal plugin instance is an actor so the methods corresponding to requests
are executed sequentially. It may delegate to asynchronous libraries, spawn futures, or delegate to other
actors to achieve parallelism.
The journal plugin class must have a constructor without parameters or a constructor with one ``com.typesafe.config.Config``
parameter. The plugin section of the actor system's config will be passed in the config constructor parameter.
The journal plugin class must have a constructor with one of these signatures:
* constructor with one ``com.typesafe.config.Config`` parameter and a ``String`` parameter for the config path
* constructor with one ``com.typesafe.config.Config`` parameter
* constructor without parameters
The plugin section of the actor system's config will be passed in the config constructor parameter. The config path
of the plugin is passed in the ``String`` parameter.
Don't run journal tasks/futures on the system default dispatcher, since that might starve other tasks.
@ -890,15 +807,21 @@ A snapshot store plugin can be activated with the following minimal configuratio
.. includecode:: ../scala/code/docs/persistence/PersistencePluginDocSpec.scala#snapshot-store-plugin-config
The specified plugin ``class`` must have a no-arg constructor. The ``plugin-dispatcher`` is the dispatcher
used for the plugin actor. If not specified, it defaults to ``akka.persistence.dispatchers.default-plugin-dispatcher``.
The snapshot store instance is an actor so the methods corresponding to requests from persistent actors
are executed sequentially. It may delegate to asynchronous libraries, spawn futures, or delegate to other
actors to achieve parallelism.
The snapshot store plugin class must have a constructor without parameters or constructor with one ``com.typesafe.config.Config``
parameter. The plugin section of the actor system's config will be passed in the config constructor parameter.
The snapshot store plugin class must have a constructor with one of these signatures:
* constructor with one ``com.typesafe.config.Config`` parameter and a ``String`` parameter for the config path
* constructor with one ``com.typesafe.config.Config`` parameter
* constructor without parameters
The plugin section of the actor system's config will be passed in the config constructor parameter. The config path
of the plugin is passed in the ``String`` parameter.
The ``plugin-dispatcher`` is the dispatcher used for the plugin actor. If not specified, it defaults to
``akka.persistence.dispatchers.default-plugin-dispatcher``.
Don't run snapshot store tasks/futures on the system default dispatcher, since that might starve other tasks.

View file

@ -145,6 +145,16 @@ This is how a ``SerializerWithStringManifest`` looks like:
You must also bind it to a name in your :ref:`configuration` and then list which classes
that should be serialized using it.
It's recommended to throw ``java.io.NotSerializableException`` in ``fromBinary``
if the manifest is unknown. This makes it possible to introduce new message types and
send them to nodes that don't know about them. This is typically needed when performing
rolling upgrades, i.e. running a cluster with mixed versions for a while.
``NotSerializableException`` is treated as a transient problem in the TCP based remoting
layer. The problem will be logged and the message is dropped. Other exceptions will tear down
the TCP connection because it can be an indication of corrupt bytes from the underlying
transport.
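A minimal sketch of such a serializer, using a made-up ``Greeting`` message type and an arbitrary identifier, could look like this::
import java.io.NotSerializableException
import java.nio.charset.StandardCharsets
import akka.serialization.SerializerWithStringManifest
final case class Greeting(name: String) // hypothetical message type
class GreetingSerializer extends SerializerWithStringManifest {
  private val GreetingManifest = "greeting"
  override def identifier: Int = 9001 // must be unique among the serializers of the system
  override def manifest(o: AnyRef): String = o match {
    case _: Greeting ⇒ GreetingManifest
  }
  override def toBinary(o: AnyRef): Array[Byte] = o match {
    case Greeting(name) ⇒ name.getBytes(StandardCharsets.UTF_8)
  }
  override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef = manifest match {
    case GreetingManifest ⇒ Greeting(new String(bytes, StandardCharsets.UTF_8))
    case _ ⇒
      // unknown manifest, e.g. sent from a newer node during a rolling upgrade:
      // treated as a transient problem by the TCP based remoting layer
      throw new NotSerializableException(
        s"Unimplemented deserialization of message with manifest [$manifest] in [${getClass.getName}]")
  }
}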
Serializing ActorRefs
---------------------

View file

@ -5,4 +5,677 @@ Migration Guide 2.3.x to 2.4.x
##############################
Migration from 2.3.x to 2.4.x is described in the
`documentation of 2.4 <http://doc.akka.io/docs/akka/2.4/project/migration-guide-2.3.x-2.4.x.html>`_.
`documentation of 2.4 <http://doc.akka.io/docs/akka/2.4/project/migration-guide-2.3.x-2.4.x.html>`_.
The 2.4 release contains some structural changes that require some
simple, mechanical source-level changes in client code.
When migrating from earlier versions you should first follow the instructions for
migrating :ref:`1.3.x to 2.0.x <migration-2.0>` and then :ref:`2.0.x to 2.1.x <migration-2.1>`
and then :ref:`2.1.x to 2.2.x <migration-2.2>` and then :ref:`2.2.x to 2.3.x <migration-2.3>`.
Binary Compatibility
====================
Akka 2.4.x is backwards binary compatible with previous 2.3.x versions apart from the following
exceptions. This means that the new JARs are a drop-in replacement for the old one
(but not the other way around) as long as your build does not enable the inliner (Scala-only restriction).
The following parts are not binary compatible with 2.3.x:
* akka-testkit and akka-remote-testkit
* experimental modules, such as akka-persistence and akka-contrib
* features, classes, methods that were deprecated in 2.3.x and removed in 2.4.x
The dependency to **Netty** has been updated from version 3.8.0.Final to 3.10.3.Final. The changes in
those versions might not be fully binary compatible, but we believe that it will not be a problem
in practice. No changes were needed to the Akka source code for this update. Users of libraries that
depend on 3.8.0.Final that break with 3.10.3.Final should be able to manually downgrade the dependency
to 3.8.0.Final and Akka will still work with that version.
Advanced Notice: TypedActors will go away
=========================================
While technically not yet deprecated, the current ``akka.actor.TypedActor`` support will be superseded by
the :ref:`typed-scala` project that is currently being developed in open preview mode. If you are using TypedActors
in your projects you are advised to look into this, as it is superior to the Active Object pattern expressed
in TypedActors. The generic ActorRefs in Akka Typed allow the same type-safety that is afforded by
TypedActors while retaining all the other benefits of an explicit actor model (including the ability to
change behaviors etc.).
It is likely that TypedActors will be officially deprecated in the next major update of Akka and subsequently removed.
Removed Deprecated Features
===========================
The following, previously deprecated, features have been removed:
* akka-dataflow
* akka-transactor
* durable mailboxes (akka-mailboxes-common, akka-file-mailbox)
* Cluster.publishCurrentClusterState
* akka.cluster.auto-down, replaced by akka.cluster.auto-down-unreachable-after in Akka 2.3
* Old routers and configuration.
Note that in router configuration you must now specify if it is a ``pool`` or a ``group``
in the way that was introduced in Akka 2.3.
* Timeout constructor without unit
* JavaLoggingEventHandler, replaced by JavaLogger
* UntypedActorFactory
* Java API TestKit.dilated, moved to JavaTestKit.dilated
Protobuf Dependency
===================
The transitive dependency to Protobuf has been removed to make it possible to use any version
of Protobuf for the application messages. If you use Protobuf in your application you need
to add the following dependency with desired version number::
"com.google.protobuf" % "protobuf-java" % "2.5.0"
Internally Akka is using an embedded version of protobuf that corresponds to ``com.google.protobuf/protobuf-java``
version 2.5.0. The package name of the embedded classes has been changed to ``akka.protobuf``.
Added parameter validation to RootActorPath
===========================================
Previously ``akka.actor.RootActorPath`` allowed passing in arbitrary strings into its name parameter,
which is meant to be the *name* of the root Actor. Subsequently, if constructed with an invalid name
such as a full path for example (``/user/Full/Path``) some features using this path may transparently fail -
such as using ``actorSelection`` on such an invalid path.
In Akka 2.4.x the ``RootActorPath`` validates the input and may throw an ``IllegalArgumentException`` if
the passed in name string is illegal (contains ``/`` elsewhere than in the beginning of the string or contains ``#``).
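For illustration (the address values are made up)::
import akka.actor.{ Address, RootActorPath }
val addr = Address("akka.tcp", "sys", "host", 2552)
RootActorPath(addr)                        // fine, default name is "/"
// RootActorPath(addr, "/user/Full/Path")  // now throws IllegalArgumentException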
TestKit.remaining throws AssertionError
=======================================
In earlier versions of Akka `TestKit.remaining` returned the default timeout configurable under
"akka.test.single-expect-default". This was a bit confusing and thus it has been changed to throw an
AssertionError if called outside of within. The old behavior however can still be achieved by
calling `TestKit.remainingOrDefault` instead.
EventStream and ManagedActorClassification EventBus now require an ActorSystem
==============================================================================
Both the ``EventStream`` (:ref:`Scala <event-stream-scala>`, :ref:`Java <event-stream-java>`) and the
``ManagedActorClassification``, ``ManagedActorEventBus`` (:ref:`Scala <actor-classification-scala>`, :ref:`Java <actor-classification-java>`) now
require an ``ActorSystem`` to properly operate. The reason for that is moving away from stateful internal lifecycle checks
to a fully reactive model for unsubscribing actors that have ``Terminated``. Therefore the ``ActorClassification``
and ``ActorEventBus`` were deprecated and replaced by ``ManagedActorClassification`` and ``ManagedActorEventBus``.
If you have implemented a custom event bus, you will need to pass in the actor system through the constructor now:
.. includecode:: ../scala/code/docs/event/EventBusDocSpec.scala#actor-bus
If you have been creating EventStreams manually, you now have to provide an actor system and *start the unsubscriber*:
.. includecode:: ../../../akka-actor-tests/src/test/scala/akka/event/EventStreamSpec.scala#event-bus-start-unsubscriber-scala
Please note that this change affects you only if you have implemented your own buses, Akka's own ``context.eventStream``
is still there and does not require any attention from you concerning this change.
FSM notifies on same state transitions
======================================
When changing states in an Finite-State-Machine Actor (``FSM``), state transition events are emitted and can be handled by the user
either by registering ``onTransition`` handlers or by subscribing to these events by sending it an ``SubscribeTransitionCallBack`` message.
Previously in ``2.3.x`` when an ``FSM`` was in state ``A`` and performed a ``goto(A)`` transition, no state transition notification would be sent.
This is because it would effectively stay in the same state, and was deemed to be semantically equivalent to calling ``stay()``.
In ``2.4.x`` when an ``FSM`` performs any ``goto(X)`` transition, it will always trigger state transition events,
which turns out to be useful in many systems where same-state transitions actually should have an effect.
In case you do *not* want to trigger a state transition event when effectively performing an ``X->X`` transition, use ``stay()`` instead.
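A minimal sketch of the new behaviour, with made-up state and message names::
import akka.actor.{ ActorLogging, FSM }
sealed trait DemoState
case object Active extends DemoState
class SameStateFsm extends FSM[DemoState, Unit] with ActorLogging {
  startWith(Active, ())
  when(Active) {
    case Event("refresh", _) ⇒ goto(Active) // 2.4: emits an Active to Active transition event
    case Event("noop", _)    ⇒ stay()       // still emits no transition event
  }
  onTransition {
    case (Active, Active) ⇒ log.info("re-entered Active") // same-state transitions are now observable
  }
  initialize()
}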
Circuit Breaker Timeout Change
==============================
In ``2.3.x`` calls protected by the ``CircuitBreaker`` were allowed to run indefinitely and the check to see if the timeout had been exceeded was done after the call had returned.
In ``2.4.x`` the failureCount of the Breaker will be increased as soon as the timeout is reached and a ``Failure[TimeoutException]`` will be returned immediately for asynchronous calls. Synchronous calls will now throw a ``TimeoutException`` after the call is finished.
Slf4j logging filter
====================
If you use ``Slf4jLogger`` you should add the following configuration::
akka.logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
It will filter the log events using the backend configuration (e.g. logback.xml) before
they are published to the event bus.
Inbox.receive Java API
======================
``Inbox.receive`` now throws a checked ``java.util.concurrent.TimeoutException`` exception if the receive timeout
is reached.
Pool routers nrOfInstances method now takes ActorSystem
=======================================================
In order to make cluster routers smarter about when they can start local routees,
``nrOfInstances`` defined on ``Pool`` now takes ``ActorSystem`` as an argument.
In case you have implemented a custom Pool you will have to update the method's signature,
however the implementation can remain the same if you don't need to rely on an ActorSystem in your logic.
Group routers paths method now takes ActorSystem
================================================
In order to make cluster routers smarter about when they can start local routees,
``paths`` defined on ``Group`` now takes ``ActorSystem`` as an argument.
In case you have implemented a custom Group you will have to update the method's signature,
however the implementation can remain the same if you don't need to rely on an ActorSystem in your logic.
Cluster aware router max-total-nr-of-instances
==============================================
In 2.3.x the deployment configuration property ``nr-of-instances`` was used for
cluster aware routers to specify total number of routees in the cluster.
This was confusing, especially since the default value is 1.
In 2.4.x there is a new deployment property ``cluster.max-total-nr-of-instances`` that
defines total number of routees in the cluster. By default ``max-total-nr-of-instances``
is set to a high value (10000) that will result in new routees added to the router when nodes join the cluster.
Set it to a lower value if you want to limit total number of routees.
For backwards compatibility reasons ``nr-of-instances`` is still used if defined by user,
i.e. if defined it takes precedence over ``max-total-nr-of-instances``.
Logger names use full class name
================================
Previously, a few places in Akka used "simple" logger names, such as ``Cluster`` or ``Remoting``.
Now they use full class names, such as ``akka.cluster.Cluster`` or ``akka.remote.Remoting``,
in order to allow package level log level definitions and ease source code lookup.
In case you used specific "simple" logger name based rules in your ``logback.xml`` configurations,
please change them to reflect appropriate package name, such as
``<logger name='akka.cluster' level='warn' />`` or ``<logger name='akka.remote' level='error' />``
Default interval for TestKit.awaitAssert changed to 100 ms
==========================================================
Default check interval changed from 800 ms to 100 ms. You can define the interval explicitly if you need a
longer interval.
Secure Cookies
==============
`Secure cookies` feature was deprecated.
AES128CounterInetRNG and AES256CounterInetRNG are Deprecated
============================================================
Use ``AES128CounterSecureRNG`` or ``AES256CounterSecureRNG`` as
``akka.remote.netty.ssl.security.random-number-generator``.
Microkernel is Deprecated
=========================
Akka Microkernel is deprecated and will be removed. It is replaced by using an ordinary
user defined main class and packaging with `sbt-native-packager <https://github.com/sbt/sbt-native-packager>`_
or `Lightbend ConductR <http://www.lightbend.com/products/conductr>`_.
Please see :ref:`deployment-scenarios` for more information.
New Cluster Metrics Extension
=============================
Previously, cluster metrics functionality was located in the ``akka-cluster`` jar.
Now it is split out and moved into a separate Akka module: ``akka-cluster-metrics`` jar.
The module comes with a few enhancements, such as use of Kamon sigar-loader
for native library provisioning as well as use of statistical averaging of metrics data.
Note that both old and new metrics configuration entries in the ``reference.conf``
are still in the same name space ``akka.cluster.metrics`` but are not compatible.
Make sure to disable legacy metrics in akka-cluster: ``akka.cluster.metrics.enabled=off``,
since it is still enabled in akka-cluster by default (for compatibility with past releases).
Router configuration entries have also changed for the module, they use prefix ``cluster-metrics-``:
``cluster-metrics-adaptive-pool`` and ``cluster-metrics-adaptive-group``
Metrics extension classes and objects are located in the new package ``akka.cluster.metrics``.
Please see :ref:`Scala <cluster_metrics_scala>`, :ref:`Java <cluster_metrics_java>` for more information.
Cluster tools moved to separate module
======================================
The Cluster Singleton, Distributed Pub-Sub, and Cluster Client, previously located in the ``akka-contrib``
jar, are now moved to a separate module named ``akka-cluster-tools``. You need to replace this dependency
if you use any of these tools.
The classes changed package name from ``akka.contrib.pattern`` to ``akka.cluster.singleton``, ``akka.cluster.pubsub``
and ``akka.cluster.client``.
The configuration properties changed name to ``akka.cluster.pub-sub`` and ``akka.cluster.client``.
Cluster sharding moved to separate module
=========================================
The Cluster Sharding previously located in the ``akka-contrib`` jar is now moved to a separate module
named ``akka-cluster-sharding``. You need to replace this dependency if you use Cluster Sharding.
The classes changed package name from ``akka.contrib.pattern`` to ``akka.cluster.sharding``.
The configuration properties changed name to ``akka.cluster.sharding``.
ClusterSharding construction
============================
Several parameters of the ``start`` method of the ``ClusterSharding`` extension are now defined
in a settings object ``ClusterShardingSettings``.
It can be created from system configuration properties and also amended with API.
These settings can be defined differently per entry type if needed.
Starting the ``ShardRegion`` in proxy mode is now done with the ``startProxy`` method
of the ``ClusterSharding`` extension instead of the optional ``entryProps`` parameter.
Entry was renamed to Entity, for example in the ``MessagesExtractor`` in the Java API
and the ``EntityId`` type in the Scala API.
``idExtractor`` function was renamed to ``extractEntityId``. ``shardResolver`` function
was renamed to ``extractShardId``.
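Put together, a 2.4 style start call could be sketched like this (``Counter``, ``Envelope``, the shard count and the ``system`` value are placeholders for this example)::
import akka.actor.{ Actor, Props }
import akka.cluster.sharding.{ ClusterSharding, ClusterShardingSettings, ShardRegion }
final case class Envelope(entityId: String, payload: Any) // hypothetical message wrapper
class Counter extends Actor { // hypothetical entity actor
  var count = 0
  def receive = { case _ ⇒ count += 1 }
}
val extractEntityId: ShardRegion.ExtractEntityId = {
  case Envelope(id, payload) ⇒ (id, payload)
}
val extractShardId: ShardRegion.ExtractShardId = {
  case Envelope(id, _) ⇒ (math.abs(id.hashCode) % 100).toString
}
val counterRegion = ClusterSharding(system).start(
  typeName = "Counter",
  entityProps = Props[Counter],
  settings = ClusterShardingSettings(system),
  extractEntityId = extractEntityId,
  extractShardId = extractShardId)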
Cluster Sharding Entry Path Change
==================================
Previously in ``2.3.x`` entries were direct children of the local ``ShardRegion``. In examples the ``persistenceId`` of entries
included ``self.path.parent.name`` to include the cluster type name.
In ``2.4.x`` entries are now children of a ``Shard``, which in turn is a child of the local ``ShardRegion``. To include the shard
type in the ``persistenceId`` it is now accessed by ``self.path.parent.parent.name`` from each entry.
Asynchronous ShardAllocationStrategy
====================================
The methods of the ``ShardAllocationStrategy`` and ``AbstractShardAllocationStrategy`` in Cluster Sharding
have changed return type to a ``Future`` to support asynchronous decisions. For example you can ask an
external actor how to allocate shards or rebalance shards.
For the synchronous case you can return the result via ``scala.concurrent.Future.successful`` in Scala or
``akka.dispatch.Futures.successful`` in Java.
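A synchronous decision can be wrapped roughly like this (the allocation logic is only a placeholder and the signatures are sketched from the 2.4 API)::
import scala.collection.immutable
import scala.concurrent.Future
import akka.actor.ActorRef
import akka.cluster.sharding.ShardCoordinator.ShardAllocationStrategy
class FirstRegionAllocationStrategy extends ShardAllocationStrategy {
  def allocateShard(requester: ActorRef, shardId: String,
    currentShardAllocations: Map[ActorRef, immutable.IndexedSeq[String]]): Future[ActorRef] =
    Future.successful(currentShardAllocations.keys.head) // synchronous choice wrapped in a Future
  def rebalance(currentShardAllocations: Map[ActorRef, immutable.IndexedSeq[String]],
    rebalanceInProgress: Set[String]): Future[Set[String]] =
    Future.successful(Set.empty) // never rebalance in this sketch
}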
Cluster Sharding internal data
==============================
The Cluster Sharding coordinator stores the locations of the shards using Akka Persistence.
This data can safely be removed when restarting the whole Akka Cluster.
The serialization format of the internal persistent events stored by the Cluster Sharding coordinator
has been changed and it cannot load old data from 2.3.x or some 2.4 milestone.
The ``persistenceId`` of the Cluster Sharding coordinator has been changed since 2.3.x so
it should not load such old data, but it can be a problem if you have used a 2.4
milestone release. In that case you should remove the persistent data that the
Cluster Sharding coordinator stored. Note that this is not application data.
You can use the :ref:`RemoveInternalClusterShardingData <RemoveInternalClusterShardingData-scala>`
utility program to remove this data.
The new ``persistenceId`` is ``s"/sharding/${typeName}Coordinator"``.
The old ``persistenceId`` is ``s"/user/sharding/${typeName}Coordinator/singleton/coordinator"``.
ClusterSingletonManager and ClusterSingletonProxy construction
==============================================================
Parameters to the ``Props`` factory methods have been moved to settings object ``ClusterSingletonManagerSettings``
and ``ClusterSingletonProxySettings``. These can be created from system configuration properties and also
amended with API as needed.
The buffer size of the ``ClusterSingletonProxy`` can be defined in the ``ClusterSingletonProxySettings``
instead of defining ``stash-capacity`` of the mailbox. Buffering can be disabled by using a
buffer size of 0.
The ``singletonPath`` parameter of ``ClusterSingletonProxy.props`` has changed. It is now named
``singletonManagerPath`` and is the logical path of the singleton manager, e.g. ``/user/singletonManager``,
which ends with the name you defined in ``actorOf`` when creating the ``ClusterSingletonManager``.
In 2.3.x it was the path to singleton instance, which was error-prone because one had to provide both
the name of the singleton manager and the singleton actor.
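As a rough sketch of the new construction style (``MySingleton`` and the ``system`` value are placeholders)::
import akka.actor.{ Actor, PoisonPill, Props }
import akka.cluster.singleton._
class MySingleton extends Actor { def receive = { case msg ⇒ } } // hypothetical singleton actor
system.actorOf(
  ClusterSingletonManager.props(
    singletonProps = Props[MySingleton],
    terminationMessage = PoisonPill,
    settings = ClusterSingletonManagerSettings(system)),
  name = "singletonManager")
val proxy = system.actorOf(
  ClusterSingletonProxy.props(
    singletonManagerPath = "/user/singletonManager", // path of the manager, not of the singleton itself
    settings = ClusterSingletonProxySettings(system).withBufferSize(0)), // 0 disables buffering
  name = "singletonProxy")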
DistributedPubSub construction
==============================
Normally, the ``DistributedPubSubMediator`` actor is started by the ``DistributedPubSubExtension``.
This extension has been renamed to ``DistributedPubSub``. It is also possible to start
it as an ordinary actor if you need multiple instances of it with different settings.
The parameters of the ``Props`` factory methods in the ``DistributedPubSubMediator`` companion
have been moved to the settings object ``DistributedPubSubSettings``. This can be created from
system configuration properties and also amended with API as needed.
ClusterClient construction
==========================
The parameters of the ``Props`` factory methods in the ``ClusterClient`` companion
have been moved to the settings object ``ClusterClientSettings``. This can be created from
system configuration properties and also amended with API as needed.
The buffer size of the ``ClusterClient`` can be defined in the ``ClusterClientSettings``
instead of defining ``stash-capacity`` of the mailbox. Buffering can be disabled by using a
buffer size of 0.
Normally, the ``ClusterReceptionist`` actor is started by the ``ClusterReceptionistExtension``.
This extension has been renamed to ``ClusterClientReceptionist``. It is also possible to start
it as an ordinary actor if you need multiple instances of it with different settings.
The parameters of the ``Props`` factory methods in the ``ClusterReceptionist`` companion
have been moved to the settings object ``ClusterReceptionistSettings``. This can be created from
system configuration properties and also amended with API as needed.
The ``ClusterReceptionist`` actor that is started by the ``ClusterReceptionistExtension``
is now started as a ``system`` actor instead of a ``user`` actor, i.e. the default path for
the ``ClusterClient`` initial contacts has changed to
``"akka.tcp://system@hostname:port/system/receptionist"``.
ClusterClient sender
====================
In 2.3 the ``sender()`` of the response messages, as seen by the client, was the
actor in the cluster.
In 2.4 the ``sender()`` of the response messages, as seen by the client, is ``deadLetters``
since the client should normally send subsequent messages via the ``ClusterClient``.
It is possible to pass the original sender inside the reply messages if
the client is supposed to communicate directly to the actor in the cluster.
Akka Persistence
================
Experimental removed
--------------------
The artifact name has changed from ``akka-persistence-experimental`` to ``akka-persistence``.
New sbt dependency::
"com.typesafe.akka" %% "akka-persistence" % "@version@" @crossString@
New Maven dependency::
<dependency>
<groupId>com.typesafe.akka</groupId>
<artifactId>akka-persistence_@binVersion@</artifactId>
<version>@version@</version>
</dependency>
The artifact name of the Persistence TCK has changed from ``akka-persistence-tck-experimental`` (``akka-persistence-experimental-tck``) to
``akka-persistence-tck``.
Mandatory persistenceId
-----------------------
It is now mandatory to define the ``persistenceId`` in subclasses of ``PersistentActor``, ``UntypedPersistentActor``
and ``AbstractPersistentActor``.
The rationale behind this change is a stricter de-coupling of your Actor hierarchy from the logical
question of "which persistent entity this actor represents".
In case you want to preserve the old behavior of providing the actor's path as the default ``persistenceId``, you can easily
implement it yourself either as a helper trait or simply by overriding ``persistenceId`` as follows::
override def persistenceId = self.path.toStringWithoutAddress
Failures
--------
Backend journal failures during recovery and persist are treated differently than in 2.3.x. The ``PersistenceFailure``
message is removed and the actor is unconditionally stopped. The new behavior and reasons for it are explained in
:ref:`failures-scala`.
Persist sequence of events
--------------------------
The ``persist`` method that takes a ``Seq`` (Scala) or ``Iterable`` (Java) of events parameter was deprecated and
renamed to ``persistAll`` to avoid mistakes of persisting other collection types as one single event by calling
the overloaded ``persist(event)`` method.
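For example, inside a ``PersistentActor`` (a minimal sketch with made-up events)::
import akka.persistence.PersistentActor
class MyPersistentActor extends PersistentActor {
  def persistenceId = "sample-id-1"
  var state = 0
  def updateState(evt: Int): Unit = state += evt
  def receiveCommand = {
    case "bump-twice" ⇒
      // previously: persist(List(1, 1)) { evt ⇒ updateState(evt) }
      persistAll(List(1, 1)) { evt ⇒ updateState(evt) }
  }
  def receiveRecover = {
    case evt: Int ⇒ updateState(evt)
  }
}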
Non-permanent deletion
----------------------
The ``permanent`` flag in ``deleteMessages`` was removed. Non-permanent deletes are not supported
any more. Events that were deleted with ``permanent=false`` with an older version will
still not be replayed in this version.
Recover message is gone, replaced by Recovery config
----------------------------------------------------
Previously the way to trigger recovery in PersistentActors was sending them a ``Recover()`` message.
Most of the time it was the actor itself sending such a message to ``self`` in its ``preStart`` method,
however it was possible to send this message from an external source to any ``PersistentActor`` or ``PersistentView``
to make it start recovering.
This style of starting recovery does not fit well with usual Actor best practices: an Actor should be independent
and know about its internal state, and also about its recovery or lack thereof. In order to guide users towards
more independent Actors, the ``Recovery()`` object is now not used as a message, but as a configuration option
used by the Actor when it starts. In order to migrate previous code which customised its recovery mode, use this example
as a reference::
// previously
class OldCookieMonster extends PersistentActor {
def preStart() = self ! Recover(toSequenceNr = 42L)
// ...
}
// now:
class NewCookieMonster extends PersistentActor {
override def recovery = Recovery(toSequenceNr = 42L)
// ...
}
Sender reference of replayed events is deadLetters
--------------------------------------------------
While undocumented, previously the ``sender()`` of the replayed messages would be the same sender that originally had
sent the message. Since sender is an ``ActorRef`` and those events are often replayed in different incarnations of
actor systems and during the entire lifetime of the app, relying on the existence of this reference is most likely
not going to succeed. In order to avoid bugs in the style of "it worked last week", the ``sender()`` reference is
no longer stored for replayed messages.
The previous behaviour was never documented explicitly (nor was it a design goal), so it is unlikely that applications
have explicitly relied on this behaviour, however if you find yourself with an application that did exploit this you
should rewrite it to explicitly store the ``ActorPath`` of where such replies during replay may have to be sent to,
instead of relying on the sender reference during replay.
max-message-batch-size config
-----------------------------
Configuration property ``akka.persistence.journal.max-message-batch-size`` has been moved into the plugin configuration
section, to allow different values for different journal plugins. See ``reference.conf``.
akka.persistence.snapshot-store.plugin config
---------------------------------------------
The configuration property ``akka.persistence.snapshot-store.plugin`` now by default is empty. To restore the previous
setting add ``akka.persistence.snapshot-store.plugin = "akka.persistence.snapshot-store.local"`` to your application.conf.
See ``reference.conf``.
PersistentView is deprecated
----------------------------
``PersistentView`` is deprecated. Use :ref:`persistence-query-scala` instead. The corresponding
query type is ``EventsByPersistenceId``. There are several alternatives for connecting the ``Source``
to an actor corresponding to a previous ``PersistentView`` actor which are documented in :ref:`stream-integrations-scala`
for Scala and :ref:`Java <stream-integrations-java>`.
The consuming actor may be a plain ``Actor`` or a ``PersistentActor`` if it needs to store its
own state (e.g. fromSequenceNr offset).
.. _Sink.actorRef: http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/scala/stream-integrations.html#Sink_actorRef
.. _mapAsync: http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/stages-overview.html#Asynchronous_processing_stages
.. _ActorSubscriber: http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/scala/stream-integrations.html#ActorSubscriber
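A rough sketch of such a replacement, using the bundled LevelDB read journal and a hypothetical ``consumer`` actor reference (``system`` is assumed to be in scope); note that ``Sink.actorRef`` provides no back-pressure::
import akka.persistence.query.PersistenceQuery
import akka.persistence.query.journal.leveldb.scaladsl.LeveldbReadJournal
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Sink
implicit val mat = ActorMaterializer()(system)
val queries = PersistenceQuery(system)
  .readJournalFor[LeveldbReadJournal](LeveldbReadJournal.Identifier)
queries.eventsByPersistenceId("some-persistence-id", 0L, Long.MaxValue)
  .map(_.event)                          // unwrap the EventEnvelope
  .runWith(Sink.actorRef(consumer, "done")) // consumer is a placeholder ActorRef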
Persistence Plugin APIs
=======================
SyncWriteJournal removed
------------------------
``SyncWriteJournal`` removed in favor of using ``AsyncWriteJournal``.
If the storage backend API only supports synchronous, blocking writes,
the methods can still be implemented in terms of the asynchronous API.
An example of how to do that is included in the
:ref:`Journal plugin API for Scala <journal-plugin-api>`
or the :ref:`Journal plugin API for Java <journal-plugin-api-java>`.
SnapshotStore: Snapshots can now be deleted asynchronously (and report failures)
--------------------------------------------------------------------------------
Previously the ``SnapshotStore`` plugin SPI did not allow for asynchronous deletion of snapshots,
and failures to delete a snapshot may even have been silently ignored.
Now ``SnapshotStore`` must return a ``Future`` representing the deletion of the snapshot.
If this future completes successfully the ``PersistentActor`` which initiated the snapshotting will
be notified via a ``DeleteSnapshotSuccess`` message. If the deletion fails for some reason a ``DeleteSnapshotFailure``
will be sent to the actor instead.
For ``criteria`` based deletion of snapshots (``def deleteSnapshots(criteria: SnapshotSelectionCriteria)``) equivalent
``DeleteSnapshotsSuccess`` and ``DeleteSnapshotsFailure`` messages are sent, which contain the specified criteria,
instead of ``SnapshotMetadata`` as is the case with the single snapshot deletion messages.
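A minimal sketch, assuming hypothetical ``backendDelete`` helpers backed by an asynchronous store, of the new
deletion SPI::

    import scala.concurrent.Future
    import akka.persistence.{ SnapshotMetadata, SnapshotSelectionCriteria }
    import akka.persistence.snapshot.SnapshotStore

    // abstract because loading and saving of snapshots are elided from this sketch
    abstract class MySnapshotStore extends SnapshotStore {
      // hypothetical asynchronous storage operations
      def backendDelete(persistenceId: String, sequenceNr: Long): Future[Unit]
      def backendDeleteMatching(persistenceId: String, criteria: SnapshotSelectionCriteria): Future[Unit]

      // success notifies the persistent actor with DeleteSnapshotSuccess, failure with DeleteSnapshotFailure
      override def deleteAsync(metadata: SnapshotMetadata): Future[Unit] =
        backendDelete(metadata.persistenceId, metadata.sequenceNr)

      // success notifies the persistent actor with DeleteSnapshotsSuccess carrying the criteria
      override def deleteAsync(persistenceId: String, criteria: SnapshotSelectionCriteria): Future[Unit] =
        backendDeleteMatching(persistenceId, criteria)
    }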
SnapshotStore: Removed 'saved' callback
---------------------------------------
Snapshot stores were previously required to implement a ``def saved(meta: SnapshotMetadata): Unit`` method which
would be called upon successful completion of a ``saveAsync`` (``doSaveAsync`` in the Java API) snapshot write.
Currently all journals and snapshot stores perform asynchronous writes and deletes, thus all could potentially benefit
from such callback methods. The only gain these callbacks give over composing an ``onComplete`` on the ``Future`` returned
by the journal or snapshot store is that they are executed in the Actor's context, thus they can safely (without additional
synchronization) modify the actor's internal state - for example a "pending writes" counter.
However, this feature was not used by many plugins, and expanding the API to accommodate all callbacks would have grown
the API a lot. Instead, Akka Persistence 2.4.x introduces an additional (optionally overrideable)
``receivePluginInternal: Actor.Receive`` method in the plugin API, which can be used for handling these internal messages as well as any custom messages
that are sent to the plugin Actor (imagine use cases like "wake up and continue reading" or custom protocols which your
specialised journal can implement).
Implementations using the previous feature should adjust their code as follows::

    class MySnapshots extends SnapshotStore {
      // old API:
      // def saved(meta: SnapshotMetadata): Unit = doThings()

      // new API (requires `import akka.pattern.pipe` and an implicit ExecutionContext in scope):
      def saveAsync(metadata: SnapshotMetadata, snapshot: Any): Future[Unit] = {
        // completion or failure of the returned future triggers internal messages in receivePluginInternal
        val f: Future[Unit] = ???

        // custom messages can be piped to self in order to be received in receivePluginInternal
        f.map(MyCustomMessage(_)) pipeTo self
        f
      }

      override def receivePluginInternal: Receive = {
        case SaveSnapshotSuccess(metadata) => doThings()
        case MyCustomMessage(data)         => doOtherThings()
      }

      // ...
    }
SnapshotStore: Java 8 Optional used in Java plugin APIs
-------------------------------------------------------
In places where previously ``akka.japi.Option`` was used in Java APIs, including the return type of ``doLoadAsync``,
the Java 8 provided ``Optional`` type is used now.
Please remember that when creating a ``java.util.Optional`` instance from a (possibly) ``null`` value you will want to
use the non-throwing ``Optional.ofNullable`` method, which converts a ``null`` into an empty ``Optional`` - this mirrors
its Scala counterpart, where ``Option.apply(null)`` returns ``None``, whereas ``Optional.of(null)`` would throw.
Atomic writes
-------------
``asyncWriteMessages`` takes an ``immutable.Seq[AtomicWrite]`` parameter instead of
``immutable.Seq[PersistentRepr]``.
Each ``AtomicWrite`` message contains the single ``PersistentRepr`` that corresponds to the event that was
passed to the ``persist`` method of the ``PersistentActor``, or it contains several ``PersistentRepr``
that correspond to the events that were passed to the ``persistAll`` method of the ``PersistentActor``.
All ``PersistentRepr`` of the ``AtomicWrite`` must be written to the data store atomically, i.e. all or
none must be stored.
If the journal (data store) cannot support atomic writes of multiple events it should
reject such writes with a ``Try`` ``Failure`` wrapping an ``UnsupportedOperationException``
describing the issue. This limitation should also be documented by the journal plugin.
Rejecting writes
----------------
``asyncWriteMessages`` returns a ``Future[immutable.Seq[Try[Unit]]]``.
The journal can signal that it rejects individual messages (``AtomicWrite``) via the returned
``immutable.Seq[Try[Unit]]``. The returned ``Seq`` must have as many elements as the input
``messages`` ``Seq``. Each ``Try`` element signals whether the corresponding ``AtomicWrite``
is rejected or not, with an exception describing the problem. Rejecting a message means it
was not stored, i.e. it must not be included in a later replay. Rejecting a message is
typically done before attempting to store it, e.g. because of a serialization error.
Read the :ref:`API documentation <journal-plugin-api>` of this method for more
information about the semantics of rejections and failures.
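A minimal sketch of this rejection pattern inside an ``AsyncWriteJournal`` implementation (imports as in the earlier
journal sketch; ``serializeAndStore`` is an assumed helper, and a failure from it rejects only that write)::

    override def asyncWriteMessages(messages: immutable.Seq[AtomicWrite]): Future[immutable.Seq[Try[Unit]]] =
      Future.successful(
        // one Try per incoming AtomicWrite, in the same order as the input
        messages.map { write =>
          if (write.payload.size > 1)
            Failure(new UnsupportedOperationException("atomic multi-event writes are not supported"))
          else
            Try(serializeAndStore(write.payload.head)) // e.g. a serialization error rejects only this write
        })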
asyncReplayMessages Java API
----------------------------
The signature of ``asyncReplayMessages`` in the Java API changed from ``akka.japi.Procedure``
to ``java.util.function.Consumer``.
asyncDeleteMessagesTo
---------------------
The ``permanent`` deletion flag was removed, i.e. support for non-permanent deletions was
removed. Events that were deleted with ``permanent=false`` in an older version will
still not be replayed in this version.
References to "replay" in names
-------------------------------
Previously a number of classes and methods used the word "replay" interchangeably with the word "recover".
This led to slight inconsistencies in APIs, where a method would be called ``recovery``, yet the
signal for a completed recovery was named ``ReplayMessagesSuccess``.
This is now fixed, and all methods use the same "recovery" wording consistently across the entire API.
The old ``ReplayMessagesSuccess`` is now called ``RecoverySuccess``, and an additional method called ``onRecoveryFailure``
has been introduced.
AtLeastOnceDelivery deliver signature
-------------------------------------
The signature of ``deliver`` changed slightly in order to allow both ``ActorSelection`` and ``ActorPath`` to be
used with it.
Previously::

    def deliver(destination: ActorPath, deliveryIdToMessage: Long ⇒ Any): Unit

Now::

    def deliver(destination: ActorSelection)(deliveryIdToMessage: Long ⇒ Any): Unit
    def deliver(destination: ActorPath)(deliveryIdToMessage: Long ⇒ Any): Unit
The Java API remains unchanged and has simply gained a second overload which allows an ``ActorSelection`` to be
passed in directly (without converting to ``ActorPath``).
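For example, a call site can be migrated as follows (``destinationPath``, ``MsgSent`` and ``payload`` are
application-specific assumptions)::

    // previously
    // deliver(destinationPath, deliveryId => MsgSent(deliveryId, payload))

    // now, passing an ActorSelection (an ActorPath overload exists as well)
    deliver(context.actorSelection(destinationPath)) { deliveryId =>
      MsgSent(deliveryId, payload)
    }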
Actor system shutdown
---------------------
``ActorSystem.shutdown``, ``ActorSystem.awaitTermination`` and ``ActorSystem.isTerminated`` have been
deprecated in favor of ``ActorSystem.terminate`` and ``ActorSystem.whenTerminated``. Both return a
``Future[Terminated]`` value that will complete when the actor system has terminated.
To get the same behavior as ``ActorSystem.awaitTermination``, block and wait for the ``Future[Terminated]`` value
with ``Await.result`` from the Scala standard library.
To trigger a termination and wait for it to complete::

    import scala.concurrent.Await
    import scala.concurrent.duration._

    Await.result(system.terminate(), 10.seconds)
Be careful not to perform any operations on the ``Future[Terminated]`` using the ``system.dispatcher``
as ``ExecutionContext``, because it will be shut down together with the ``ActorSystem``; instead use, for example,
the Scala standard library context from ``scala.concurrent.ExecutionContext.global``.
::

    // import system.dispatcher <- this would not work
    import scala.concurrent.ExecutionContext.Implicits.global

    system.terminate().foreach { _ =>
      println("Actor system was shut down")
    }

View file

@ -4,6 +4,15 @@
Migration Guide 2.4.x to 2.5.x
##############################
Akka Actor
==========
Actor DSL deprecation
---------------------
Actor DSL is a rarely used feature and will therefore be deprecated and removed.
If you have been using it, use plain ``system.actorOf`` instead of the DSL to create actors.
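A minimal sketch of such a migration (the ``Echo`` actor is an assumed example)::

    // previously, with the Actor DSL
    // import akka.actor.ActorDSL._
    // val echo = actor(new Act {
    //   become { case msg => sender() ! msg }
    // })

    // now, with a plain actor class
    import akka.actor.{ Actor, ActorSystem, Props }

    class Echo extends Actor {
      def receive = { case msg => sender() ! msg }
    }

    val system = ActorSystem("example")
    val echo = system.actorOf(Props[Echo], "echo")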
Akka Streams
============
@ -51,27 +60,61 @@ which explains using and implementing GraphStages in more practical terms than t
.. _Mastering GraphStages, part I: http://blog.akka.io/streams/2016/07/30/mastering-graph-stage-part-1
Agents
Remote
======
Agents are now deprecated
-------------------------
Mutual TLS authentication now required by default for netty-based SSL transport
-------------------------------------------------------------------------------
Akka Agents are a very simple way of containing mutable state and allowing it to be accessed safely from
multiple threads. The abstraction is leaky though, as Agents do not work over the network (unlike Akka Actors).
Mutual TLS authentication is now required by default for the netty-based SSL transport.
As users were often confused by "when to use an Actor vs. when to use an Agent?", a decision was made to deprecate
the Agents, as they are rarely sufficient and do not fit the Akka spirit of thinking about distribution.
We also anticipate replacing the uses of Agents with the upcoming Akka Typed, so in preparation the Agents have been deprecated in 2.5.
Nodes that have this setting switched ``on`` might not be able to receive messages from nodes that run on older
versions of akka-remote. This is because in versions of Akka < 2.4.12 the active side of the remoting
connection will not send over certificates even if asked to.
If you use Agents and would like to take over the maintenance thereof, please contact the team on gitter or github.
It is still possible to make a rolling upgrade from a version < 2.4.12 by doing the upgrade stepwise:
* first, upgrade Akka to the latest version but keep ``akka.remote.netty.ssl.require-mutual-authentication`` at ``off``
and do a first rolling upgrade
* second, turn the setting to ``on`` and do another rolling upgrade
For more information see the documentation for the ``akka.remote.netty.ssl.require-mutual-authentication`` configuration setting
in akka-remote's `reference.conf`_.
.. _reference.conf: https://github.com/akka/akka/blob/master/akka-remote/src/main/resources/reference.conf
Cluster
=======
Cluster Management Command Line Tool
------------------------------------
There is a new cluster management tool with an HTTP API that has the same functionality as the command line tool.
The HTTP API gives you access to cluster membership information as JSON including full reachability status between the nodes.
It supports the ordinary cluster operations such as join, leave, and down.
See documentation of `akka/akka-cluster-management <https://github.com/akka/akka-cluster-management>`_.
The command line script for cluster management has been deprecated and is scheduled for removal
in the next major version. Use the HTTP API with `curl <https://curl.haxx.se/>`_ or similar instead.
Akka Persistence
================
Removal of PersistentView
-------------------------
After being deprecated for a long time, and replaced by :ref:`Persistence Query Java <persistence-query-java>`
(:ref:`Persistence Query Scala <persistence-query-scala>`), ``PersistentView`` has now been removed.
The corresponding query type is ``EventsByPersistenceId``. There are several alternatives for connecting the ``Source``
to an actor corresponding to a previous ``PersistentView`` actor, which are documented in :ref:`stream-integrations-scala`
for Scala and :ref:`Java <stream-integrations-java>`.
The consuming actor may be a plain ``Actor`` or a ``PersistentActor`` if it needs to store its own state (e.g. a ``fromSequenceNr`` offset).
Please note that Persistence Query is not experimental anymore in Akka ``2.5.0``, so you can safely upgrade to it.
Persistence Plugin Proxy
------------------------
@ -112,18 +155,17 @@ Instead of the previous ``Long`` offset you can now use the provided ``Offset``
Journals are also free to provide their own specific ``Offset`` types. Consult your journal plugin's documentation for details.
Agents
======
Cluster
=======
Agents are now deprecated
-------------------------
Cluster Management Command Line Tool
------------------------------------
Akka Agents are a very simple way of containing mutable state and allowing it to be accessed safely from
multiple threads. The abstraction is leaky though, as Agents do not work over the network (unlike Akka Actors).
There is a new cluster management tool with an HTTP API that has the same functionality as the command line tool.
The HTTP API gives you access to cluster membership information as JSON including full reachability status between the nodes.
It supports the ordinary cluster operations such as join, leave, and down.
As users were often confused by "when to use an Actor vs. when to use an Agent?", a decision was made to deprecate
the Agents, as they are rarely sufficient and do not fit the Akka spirit of thinking about distribution.
We also anticipate replacing the uses of Agents with the upcoming Akka Typed, so in preparation the Agents have been deprecated in 2.5.
See documentation of `akka/akka-cluster-management <https://github.com/akka/akka-cluster-management>`_.
The command line script for cluster management has been deprecated and is scheduled for removal
in the next major version. Use the HTTP API with `curl <https://curl.haxx.se/>`_ or similar instead.
If you use Agents and would like to take over the maintenance thereof, please contact the team on gitter or github.

View file

@ -1,8 +1,11 @@
.. _actordsl-scala:
################
Actor DSL
################
Actor DSL
#########
.. warning::
Actor DSL is deprecated and will be removed in the near future.
Use plain ``system.actorOf`` or ``context.actorOf`` instead.
The Actor DSL
=============
@ -77,4 +80,4 @@ runtime erased type is just an anonymous subtype of ``Act``). The purpose is to
automatically use the appropriate deque-based mailbox type required by :class:`Stash`.
If you want to use this magic, simply extend :class:`ActWithStash`:
.. includecode:: ../../../akka-actor-tests/src/test/scala/akka/actor/ActorDSLSpec.scala#act-with-stash
.. includecode:: ../../../akka-actor-tests/src/test/scala/akka/actor/ActorDSLSpec.scala#act-with-stash

View file

@ -94,9 +94,7 @@ Dangerous Variants
This method is not recommended to be used within another actor because it
encourages to close over the enclosing scope, resulting in non-serializable
:class:`Props` and possibly race conditions (breaking the actor encapsulation).
We will provide a macro-based solution in a future release which allows similar
syntax without the headaches, at which point this variant will be properly
deprecated. On the other hand using this variant in a :class:`Props` factory in
On the other hand, using this variant in a :class:`Props` factory in
the actor's companion object as documented under “Recommended Practices” below
is completely fine.

View file

@ -607,7 +607,7 @@ Router Example with Pool of Remote Deployed Routees
Let's take a look at how to use a cluster aware router on single master node that creates
and deploys workers. To keep track of a single master we use the :ref:`cluster-singleton-scala`
in the contrib module. The ``ClusterSingletonManager`` is started on each node.
in the cluster-tools module. The ``ClusterSingletonManager`` is started on each node.
.. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/scala/sample/cluster/stats/StatsSampleOneMaster.scala#create-singleton-manager

View file

@ -423,29 +423,4 @@ object PersistenceDocSpec {
//#safe-shutdown-example-good
}
object View {
import akka.actor.Props
val system: ActorSystem = ???
//#view
class MyView extends PersistentView {
override def persistenceId: String = "some-persistence-id"
override def viewId: String = "some-persistence-id-view"
def receive: Receive = {
case payload if isPersistent =>
// handle message from journal...
case payload =>
// handle message from user-land...
}
}
//#view
//#view-update
val view = system.actorOf(Props[MyView])
view ! Update(await = true)
//#view-update
}
}

View file

@ -243,6 +243,16 @@ And the ``eventsByTag`` could be backed by such an Actor for example:
.. includecode:: code/docs/persistence/query/MyEventsByTagPublisher.scala#events-by-tag-publisher
The ``ReadJournalProvider`` class must have a constructor with one of these signatures:
* constructor with an ``ExtendedActorSystem`` parameter, a ``com.typesafe.config.Config`` parameter, and a ``String`` parameter for the config path
* constructor with an ``ExtendedActorSystem`` parameter and a ``com.typesafe.config.Config`` parameter
* constructor with one ``ExtendedActorSystem`` parameter
* constructor without parameters
The plugin section of the actor system's config will be passed in the config constructor parameter. The config path
of the plugin is passed in the ``String`` parameter.
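A minimal sketch of a provider using the most specific signature (``MyReadJournalProvider`` is an assumed name)::

    import akka.actor.ExtendedActorSystem
    import akka.persistence.query.{ javadsl, scaladsl, ReadJournalProvider }
    import com.typesafe.config.Config

    class MyReadJournalProvider(system: ExtendedActorSystem, config: Config, configPath: String)
      extends ReadJournalProvider {

      override def scaladslReadJournal: scaladsl.ReadJournal = ??? // create the Scala read journal here
      override def javadslReadJournal: javadsl.ReadJournal = ???   // and its Java counterpart here
    }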
If the underlying datastore only supports queries that are completed when they reach the
end of the "result set", the journal has to submit new queries after a while in order
to support "infinite" event streams that include events stored after the initial query

View file

@ -504,88 +504,6 @@ For example, if you configure the replay filter for leveldb plugin, it looks lik
mode = repair-by-discard-old
}
.. _persistent-views:
Persistent Views
================
.. warning::
``PersistentView`` is deprecated. Use :ref:`persistence-query-scala` instead. The corresponding
query type is ``EventsByPersistenceId``. There are several alternatives for connecting the ``Source``
to an actor corresponding to a previous ``PersistentView`` actor:
* `Sink.actorRef`_ is simple, but has the disadvantage that there is no back-pressure signal from the
destination actor, i.e. if the actor is not consuming the messages fast enough the mailbox of the actor will grow
* `mapAsync`_ combined with :ref:`actors-ask-lambda` is almost as simple with the advantage of back-pressure
being propagated all the way
* `ActorSubscriber`_ in case you need more fine grained control
The consuming actor may be a plain ``Actor`` or a ``PersistentActor`` if it needs to store its
own state (e.g. fromSequenceNr offset).
.. _Sink.actorRef: http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/scala/stream-integrations.html#Sink_actorRef
.. _mapAsync: http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/stages-overview.html#Asynchronous_processing_stages
.. _ActorSubscriber: http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0/scala/stream-integrations.html#ActorSubscriber
Persistent views can be implemented by extending the ``PersistentView`` trait and implementing the ``receive`` and the ``persistenceId``
methods.
.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#view
The ``persistenceId`` identifies the persistent actor from which the view receives journaled messages. It is not necessary that
the referenced persistent actor is actually running. Views read messages from a persistent actor's journal directly. When a
persistent actor is started later and begins to write new messages, by default the corresponding view is updated automatically.
It is possible to determine if a message was sent from the Journal or from another actor in user-land by calling the ``isPersistent``
method. That said, very often you don't need this information at all and can simply apply the same logic to both cases
(skip the ``if isPersistent`` check).
Updates
-------
The default update interval of all views of an actor system is configurable:
.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#auto-update-interval
``PersistentView`` implementation classes may also override the ``autoUpdateInterval`` method to return a custom update
interval for a specific view class or view instance. Applications may also trigger additional updates at
any time by sending a view an ``Update`` message.
.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#view-update
If the ``await`` parameter is set to ``true``, messages that follow the ``Update`` request are processed when the
incremental message replay, triggered by that update request, completed. If set to ``false`` (default), messages
following the update request may interleave with the replayed message stream. Automated updates always run with
``await = false``.
Automated updates of all persistent views of an actor system can be turned off by configuration:
.. includecode:: code/docs/persistence/PersistenceDocSpec.scala#auto-update
Implementation classes may override the configured default value by overriding the ``autoUpdate`` method. To
limit the number of replayed messages per update request, applications can configure a custom
``akka.persistence.view.auto-update-replay-max`` value or override the ``autoUpdateReplayMax`` method. The number
of replayed messages for manual updates can be limited with the ``replayMax`` parameter of the ``Update`` message.
Recovery
--------
Initial recovery of persistent views works the very same way as for persistent actors (i.e. by sending a ``Recover`` message
to self). The maximum number of replayed messages during initial recovery is determined by ``autoUpdateReplayMax``.
Further possibilities to customize initial recovery are explained in section :ref:`recovery-scala`.
.. _persistence-identifiers:
Identifiers
-----------
A persistent view must have an identifier that doesn't change across different actor incarnations.
The identifier must be defined with the ``viewId`` method.
The ``viewId`` must differ from the referenced ``persistenceId``, unless :ref:`snapshots` of a view and its
persistent actor should be shared (which is what applications usually do not want).
.. _snapshots:
Snapshots
@ -937,15 +855,21 @@ A journal plugin can be activated with the following minimal configuration:
.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#journal-plugin-config
The specified plugin ``class`` must have a no-arg constructor. The ``plugin-dispatcher`` is the dispatcher
used for the plugin actor. If not specified, it defaults to ``akka.persistence.dispatchers.default-plugin-dispatcher``.
The journal plugin instance is an actor, so the methods corresponding to requests from persistent actors
are executed sequentially. It may delegate to asynchronous libraries, spawn futures, or delegate to other
actors to achieve parallelism.
The journal plugin class must have a constructor without parameters or a constructor with one ``com.typesafe.config.Config``
parameter. The plugin section of the actor system's config will be passed in the config constructor parameter.
The journal plugin class must have a constructor with one of these signatures:
* constructor with one ``com.typesafe.config.Config`` parameter and a ``String`` parameter for the config path
* constructor with one ``com.typesafe.config.Config`` parameter
* constructor without parameters
The plugin section of the actor system's config will be passed in the config constructor parameter. The config path
of the plugin is passed in the ``String`` parameter.
The ``plugin-dispatcher`` is the dispatcher used for the plugin actor. If not specified, it defaults to
``akka.persistence.dispatchers.default-plugin-dispatcher``.
Don't run journal tasks/futures on the system default dispatcher, since that might starve other tasks.
@ -960,15 +884,21 @@ A snapshot store plugin can be activated with the following minimal configuratio
.. includecode:: code/docs/persistence/PersistencePluginDocSpec.scala#snapshot-store-plugin-config
The specified plugin ``class`` must have a no-arg constructor. The ``plugin-dispatcher`` is the dispatcher
used for the plugin actor. If not specified, it defaults to ``akka.persistence.dispatchers.default-plugin-dispatcher``.
The snapshot store instance is an actor, so the methods corresponding to requests from persistent actors
are executed sequentially. It may delegate to asynchronous libraries, spawn futures, or delegate to other
actors to achieve parallelism.
The snapshot store plugin class must have a constructor without parameters or a constructor with one ``com.typesafe.config.Config``
parameter. The plugin section of the actor system's config will be passed in the config constructor parameter.
The snapshot store plugin class must have a constructor with one of these signatures:
* constructor with one ``com.typesafe.config.Config`` parameter and a ``String`` parameter for the config path
* constructor with one ``com.typesafe.config.Config`` parameter
* constructor without parameters
The plugin section of the actor system's config will be passed in the config constructor parameter. The config path
of the plugin is passed in the ``String`` parameter.
The ``plugin-dispatcher`` is the dispatcher used for the plugin actor. If not specified, it defaults to
``akka.persistence.dispatchers.default-plugin-dispatcher``.
Don't run snapshot store tasks/futures on the system default dispatcher, since that might starve other tasks.

View file

@ -135,6 +135,15 @@ This is how a ``SerializerWithStringManifest`` looks like:
You must also bind it to a name in your :ref:`configuration` and then list which classes
that should be serialized using it.
It's recommended to throw ``java.io.NotSerializableException`` in ``fromBinary``
if the manifest is unknown. This makes it possible to introduce new message types and
send them to nodes that don't know about them. This is typically needed when performing
rolling upgrades, i.e. running a cluster with mixed versions for a while.
``NotSerializableException`` is treated as a transient problem in the TCP-based remoting
layer. The problem will be logged and the message is dropped. Other exceptions will tear down
the TCP connection because it can be an indication of corrupt bytes from the underlying
transport.
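A minimal sketch of this in a ``SerializerWithStringManifest`` (``MyEvent`` and the manifest string are assumed
application-specific names)::

    import java.io.NotSerializableException

    override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef = manifest match {
      case "my-event-v1" => MyEvent.fromBytes(bytes) // assumed application-specific decoding
      case other         => throw new NotSerializableException(s"Unknown manifest [$other]")
    }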
Serializing ActorRefs
---------------------

View file

@ -82,11 +82,18 @@ class PersistenceQuery(system: ExtendedActorSystem) extends Extension {
def instantiate(args: collection.immutable.Seq[(Class[_], AnyRef)]) =
system.dynamicAccess.createInstanceFor[ReadJournalProvider](pluginClass, args)
instantiate((classOf[ExtendedActorSystem], system) :: (classOf[Config], pluginConfig) :: Nil)
instantiate((classOf[ExtendedActorSystem], system) :: (classOf[Config], pluginConfig) ::
(classOf[String], configPath) :: Nil)
.recoverWith {
case x: NoSuchMethodException instantiate(
(classOf[ExtendedActorSystem], system) :: (classOf[Config], pluginConfig) :: Nil)
}
.recoverWith { case x: NoSuchMethodException instantiate((classOf[ExtendedActorSystem], system) :: Nil) }
.recoverWith { case x: NoSuchMethodException instantiate(Nil) }
.recoverWith {
case ex: Exception Failure.apply(new IllegalArgumentException(s"Unable to create read journal plugin instance for path [$configPath], class [$pluginClassName]!", ex))
case ex: Exception Failure.apply(
new IllegalArgumentException("Unable to create read journal plugin instance for path " +
s"[$configPath], class [$pluginClassName]!", ex))
}.get
}

View file

@ -7,6 +7,7 @@ package akka.persistence.query
import akka.NotUsed
import akka.stream.scaladsl.Source
import com.typesafe.config.{ Config, ConfigFactory }
import akka.actor.ExtendedActorSystem
/**
* Use for tests only!
@ -29,10 +30,19 @@ class DummyReadJournalForJava(readJournal: DummyReadJournal) extends javadsl.Rea
object DummyReadJournalProvider {
final val config: Config = ConfigFactory.parseString(
s"""
|${DummyReadJournal.Identifier} {
| class = "${classOf[DummyReadJournalProvider].getCanonicalName}"
|}
""".stripMargin)
${DummyReadJournal.Identifier} {
class = "${classOf[DummyReadJournalProvider].getCanonicalName}"
}
${DummyReadJournal.Identifier}2 {
class = "${classOf[DummyReadJournalProvider2].getCanonicalName}"
}
${DummyReadJournal.Identifier}3 {
class = "${classOf[DummyReadJournalProvider3].getCanonicalName}"
}
${DummyReadJournal.Identifier}4 {
class = "${classOf[DummyReadJournalProvider4].getCanonicalName}"
}
""")
}
class DummyReadJournalProvider extends ReadJournalProvider {
@ -43,3 +53,10 @@ class DummyReadJournalProvider extends ReadJournalProvider {
override val javadslReadJournal: DummyReadJournalForJava =
new DummyReadJournalForJava(scaladslReadJournal)
}
class DummyReadJournalProvider2(sys: ExtendedActorSystem) extends DummyReadJournalProvider
class DummyReadJournalProvider3(sys: ExtendedActorSystem, conf: Config) extends DummyReadJournalProvider
class DummyReadJournalProvider4(sys: ExtendedActorSystem, conf: Config, confPath: String) extends DummyReadJournalProvider

View file

@ -28,6 +28,10 @@ class PersistenceQuerySpec extends WordSpecLike with Matchers with BeforeAndAfte
"be found by full config key" in {
withActorSystem() { system
PersistenceQuery.get(system).readJournalFor[DummyReadJournal](DummyReadJournal.Identifier)
// other combinations of constructor parameters
PersistenceQuery.get(system).readJournalFor[DummyReadJournal](DummyReadJournal.Identifier + "2")
PersistenceQuery.get(system).readJournalFor[DummyReadJournal](DummyReadJournal.Identifier + "3")
PersistenceQuery.get(system).readJournalFor[DummyReadJournal](DummyReadJournal.Identifier + "4")
}
}

View file

@ -184,6 +184,8 @@ private[persistence] trait Eventsourced extends Snapshotter with PersistenceStas
/** INTERNAL API. */
override protected[akka] def aroundPreStart(): Unit = {
require(persistenceId ne null, s"persistenceId is [null] for PersistentActor [${self.path}]")
// Fail fast on missing plugins.
val j = journal; val s = snapshotStore
startRecovery(recovery)
@ -316,6 +318,7 @@ private[persistence] trait Eventsourced extends Snapshotter with PersistenceStas
* @param handler handler for each persisted `event`
*/
def persist[A](event: A)(handler: A Unit): Unit = {
if (recoveryRunning) throw new IllegalStateException("Cannot persist during replay. Events can be persisted when receiving RecoveryCompleted or later.")
pendingStashingPersistInvocations += 1
pendingInvocations addLast StashingHandlerInvocation(event, handler.asInstanceOf[Any Unit])
eventBatch ::= AtomicWrite(PersistentRepr(event, persistenceId = persistenceId,
@ -331,6 +334,7 @@ private[persistence] trait Eventsourced extends Snapshotter with PersistenceStas
* @param handler handler for each persisted `events`
*/
def persistAll[A](events: immutable.Seq[A])(handler: A Unit): Unit = {
if (recoveryRunning) throw new IllegalStateException("Cannot persist during replay. Events can be persisted when receiving RecoveryCompleted or later.")
if (events.nonEmpty) {
events.foreach { event
pendingStashingPersistInvocations += 1
@ -369,6 +373,7 @@ private[persistence] trait Eventsourced extends Snapshotter with PersistenceStas
* @param handler handler for each persisted `event`
*/
def persistAsync[A](event: A)(handler: A Unit): Unit = {
if (recoveryRunning) throw new IllegalStateException("Cannot persist during replay. Events can be persisted when receiving RecoveryCompleted or later.")
pendingInvocations addLast AsyncHandlerInvocation(event, handler.asInstanceOf[Any Unit])
eventBatch ::= AtomicWrite(PersistentRepr(event, persistenceId = persistenceId,
sequenceNr = nextSequenceNr(), writerUuid = writerUuid, sender = sender()))
@ -382,7 +387,8 @@ private[persistence] trait Eventsourced extends Snapshotter with PersistenceStas
* @param events events to be persisted
* @param handler handler for each persisted `events`
*/
def persistAllAsync[A](events: immutable.Seq[A])(handler: A Unit): Unit =
def persistAllAsync[A](events: immutable.Seq[A])(handler: A Unit): Unit = {
if (recoveryRunning) throw new IllegalStateException("Cannot persist during replay. Events can be persisted when receiving RecoveryCompleted or later.")
if (events.nonEmpty) {
events.foreach { event
pendingInvocations addLast AsyncHandlerInvocation(event, handler.asInstanceOf[Any Unit])
@ -390,6 +396,7 @@ private[persistence] trait Eventsourced extends Snapshotter with PersistenceStas
eventBatch ::= AtomicWrite(events.map(PersistentRepr(_, persistenceId = persistenceId,
sequenceNr = nextSequenceNr(), writerUuid = writerUuid, sender = sender())))
}
}
@deprecated("use persistAllAsync instead", "2.4")
def persistAsync[A](events: immutable.Seq[A])(handler: A Unit): Unit =
@ -413,6 +420,7 @@ private[persistence] trait Eventsourced extends Snapshotter with PersistenceStas
* @param handler handler for the given `event`
*/
def deferAsync[A](event: A)(handler: A Unit): Unit = {
if (recoveryRunning) throw new IllegalStateException("Cannot persist during replay. Events can be persisted when receiving RecoveryCompleted or later.")
if (pendingInvocations.isEmpty) {
handler(event)
} else {
@ -537,10 +545,11 @@ private[persistence] trait Eventsourced extends Snapshotter with PersistenceStas
context.system.scheduler.schedule(timeout, timeout, self, RecoveryTick(snapshot = false))
}
var eventSeenInInterval = false
var _recoveryRunning = true
override def toString: String = "replay started"
override def recoveryRunning: Boolean = true
override def recoveryRunning: Boolean = _recoveryRunning
override def stateReceive(receive: Receive, message: Any) = message match {
case ReplayedMessage(p)
@ -556,18 +565,18 @@ private[persistence] trait Eventsourced extends Snapshotter with PersistenceStas
case RecoverySuccess(highestSeqNr)
timeoutCancellable.cancel()
onReplaySuccess() // callback for subclass implementation
changeState(processingCommands)
sequenceNr = highestSeqNr
setLastSequenceNr(highestSeqNr)
internalStash.unstashAll()
Eventsourced.super.aroundReceive(recoveryBehavior, RecoveryCompleted)
_recoveryRunning = false
try Eventsourced.super.aroundReceive(recoveryBehavior, RecoveryCompleted)
finally transitToProcessingState()
case ReplayMessagesFailure(cause)
timeoutCancellable.cancel()
try onRecoveryFailure(cause, event = None) finally context.stop(self)
case RecoveryTick(false) if !eventSeenInInterval
timeoutCancellable.cancel()
try onRecoveryFailure(
new RecoveryTimedOut(s"Recovery timed out, didn't get event within $timeout, highest sequence number seen $sequenceNr"),
new RecoveryTimedOut(s"Recovery timed out, didn't get event within $timeout, highest sequence number seen $lastSequenceNr"),
event = None)
finally context.stop(self)
case RecoveryTick(false)
@ -577,6 +586,17 @@ private[persistence] trait Eventsourced extends Snapshotter with PersistenceStas
case other
stashInternally(other)
}
private def transitToProcessingState(): Unit = {
if (eventBatch.nonEmpty) flushBatch()
if (pendingStashingPersistInvocations > 0) changeState(persistingEvents)
else {
changeState(processingCommands)
internalStash.unstashAll()
}
}
}
private def flushBatch() {

View file

@ -289,9 +289,15 @@ class Persistence(val system: ExtendedActorSystem) extends Extension {
val pluginClass = system.dynamicAccess.getClassFor[Any](pluginClassName).get
val pluginDispatcherId = pluginConfig.getString("plugin-dispatcher")
val pluginActorArgs = try {
Reflect.findConstructor(pluginClass, List(pluginConfig)) // will throw if not found
List(pluginConfig)
} catch { case NonFatal(_) Nil } // otherwise use empty constructor
Reflect.findConstructor(pluginClass, List(pluginConfig, configPath)) // will throw if not found
List(pluginConfig, configPath)
} catch {
case NonFatal(_)
try {
Reflect.findConstructor(pluginClass, List(pluginConfig)) // will throw if not found
List(pluginConfig)
} catch { case NonFatal(_) Nil } // otherwise use empty constructor
}
val pluginActorProps = Props(Deploy(dispatcher = pluginDispatcherId), pluginClass, pluginActorArgs)
system.systemActorOf(pluginActorProps, pluginActorName)
}

View file

@ -1,396 +0,0 @@
/**
* Copyright (C) 2009-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.persistence
import scala.concurrent.duration._
import scala.util.control.NonFatal
import akka.actor.AbstractActor
import akka.actor.Actor
import akka.actor.Cancellable
import akka.actor.Stash
import akka.actor.StashFactory
import akka.actor.UntypedActor
import akka.actor.ActorLogging
/**
* Instructs a [[PersistentView]] to update itself. This will run a single incremental message replay with
* all messages from the corresponding persistent id's journal that have not yet been consumed by the view.
* To update a view with messages that have been written after handling this request, another `Update`
* request must be sent to the view.
*
* @param await if `true`, processing of further messages sent to the view will be delayed until the
* incremental message replay, triggered by this update request, completes. If `false`,
* any message sent to the view may interleave with replayed persistent event stream.
* @param replayMax maximum number of messages to replay when handling this update request. Defaults
* to `Long.MaxValue` (i.e. no limit).
*/
@SerialVersionUID(1L)
final case class Update(await: Boolean = false, replayMax: Long = Long.MaxValue)
object Update {
/**
* Java API.
*/
def create() =
Update()
/**
* Java API.
*/
def create(await: Boolean) =
Update(await)
/**
* Java API.
*/
def create(await: Boolean, replayMax: Long) =
Update(await, replayMax)
}
/**
* INTERNAL API
*/
private[akka] object PersistentView {
private final case class ScheduledUpdate(replayMax: Long)
}
/**
* A view replicates the persistent message stream of a [[PersistentActor]]. Implementation classes receive
* the message stream directly from the Journal. These messages can be processed to update internal state
* in order to maintain an (eventual consistent) view of the state of the corresponding persistent actor. A
* persistent view can also run on a different node, provided that a replicated journal is used.
*
* Implementation classes refer to a persistent actors' message stream by implementing `persistenceId`
* with the corresponding (shared) identifier value.
*
* Views can also store snapshots of internal state by calling [[PersistentView#autoUpdate]]. The snapshots of a view
* are independent of those of the referenced persistent actor. During recovery, a saved snapshot is offered
* to the view with a [[SnapshotOffer]] message, followed by replayed messages, if any, that are younger
* than the snapshot. Default is to offer the latest saved snapshot.
*
* By default, a view automatically updates itself with an interval returned by `autoUpdateInterval`.
* This method can be overridden by implementation classes to define a view instance-specific update
* interval. The default update interval for all views of an actor system can be configured with the
* `akka.persistence.view.auto-update-interval` configuration key. Applications may trigger additional
* view updates by sending the view [[Update]] requests. See also methods
*
* - [[PersistentView#autoUpdate]] for turning automated updates on or off
* - [[PersistentView#autoUpdateReplayMax]] for limiting the number of replayed messages per view update cycle
*
*/
@deprecated("use Persistence Query instead", "2.4")
trait PersistentView extends Actor with Snapshotter with Stash with StashFactory
with PersistenceIdentity with PersistenceRecovery
with ActorLogging {
import PersistentView._
import JournalProtocol._
import SnapshotProtocol.LoadSnapshotResult
import context.dispatcher
private val extension = Persistence(context.system)
private val viewSettings = extension.settings.view
private[persistence] lazy val journal = extension.journalFor(journalPluginId)
private[persistence] lazy val snapshotStore = extension.snapshotStoreFor(snapshotPluginId)
private var schedule: Option[Cancellable] = None
private var _lastSequenceNr: Long = 0L
private val internalStash = createStash()
private var currentState: State = recoveryStarted(Long.MaxValue)
/**
* View id is used as identifier for snapshots performed by this [[PersistentView]].
* This allows the View to keep separate snapshots of data than the [[PersistentActor]] originating the message stream.
*
*
* The usual case is to have a *different* id set as `viewId` than `persistenceId`,
* although it is possible to share the same id with an [[PersistentActor]] - for example to decide about snapshots
* based on some average or sum, calculated by this view.
*
* Example:
* {{{
* class SummingView extends PersistentView {
* override def persistenceId = "count-123"
* override def viewId = "count-123-sum" // this view is performing summing,
* // so this view's snapshots reside under the "-sum" suffixed id
*
* // ...
* }
* }}}
*/
def viewId: String
/**
* Returns `viewId`.
*/
def snapshotterId: String = viewId
/**
* If `true`, the currently processed message was persisted (is sent from the Journal).
* If `false`, the currently processed message comes from another actor (from "user-land").
*/
def isPersistent: Boolean = currentState.recoveryRunning
/**
* If `true`, this view automatically updates itself with an interval specified by `autoUpdateInterval`.
* If `false`, applications must explicitly update this view by sending [[Update]] requests. The default
* value can be configured with the `akka.persistence.view.auto-update` configuration key. This method
* can be overridden by implementation classes to return non-default values.
*/
def autoUpdate: Boolean =
viewSettings.autoUpdate
/**
* The interval for automated updates. The default value can be configured with the
* `akka.persistence.view.auto-update-interval` configuration key. This method can be
* overridden by implementation classes to return non-default values.
*/
def autoUpdateInterval: FiniteDuration =
viewSettings.autoUpdateInterval
/**
* The maximum number of messages to replay per update. The default value can be configured with the
* `akka.persistence.view.auto-update-replay-max` configuration key. This method can be overridden by
* implementation classes to return non-default values.
*/
def autoUpdateReplayMax: Long =
viewSettings.autoUpdateReplayMax match {
case -1 Long.MaxValue
case value value
}
/**
* Highest received sequence number so far or `0L` if this actor hasn't replayed
* any persistent events yet.
*/
def lastSequenceNr: Long = _lastSequenceNr
/**
* Returns `lastSequenceNr`.
*/
def snapshotSequenceNr: Long = lastSequenceNr
private def setLastSequenceNr(value: Long): Unit =
_lastSequenceNr = value
private def updateLastSequenceNr(persistent: PersistentRepr): Unit =
if (persistent.sequenceNr > _lastSequenceNr) _lastSequenceNr = persistent.sequenceNr
override def recovery = Recovery(replayMax = autoUpdateReplayMax)
/**
* Triggers an initial recovery, starting form a snapshot, if any, and replaying at most `autoUpdateReplayMax`
* messages (following that snapshot).
*/
override def preStart(): Unit = {
startRecovery(recovery)
if (autoUpdate)
schedule = Some(context.system.scheduler.schedule(autoUpdateInterval, autoUpdateInterval, self, ScheduledUpdate(autoUpdateReplayMax)))
}
private def startRecovery(recovery: Recovery): Unit = {
changeState(recoveryStarted(recovery.replayMax))
loadSnapshot(snapshotterId, recovery.fromSnapshot, recovery.toSequenceNr)
}
/** INTERNAL API. */
override protected[akka] def aroundReceive(receive: Receive, message: Any): Unit =
currentState.stateReceive(receive, message)
/** INTERNAL API. */
override protected[akka] def aroundPreStart(): Unit = {
// Fail fast on missing plugins.
val j = journal; val s = snapshotStore
super.aroundPreStart()
}
override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
try internalStash.unstashAll() finally super.preRestart(reason, message)
}
override def postStop(): Unit = {
schedule.foreach(_.cancel())
super.postStop()
}
/**
* Called whenever a message replay fails. By default it logs the error.
* Subclass may override to customize logging.
* The `PersistentView` will not stop or throw exception due to this.
* It will try again on next update.
*/
protected def onReplayError(cause: Throwable): Unit = {
log.error(cause, "Persistence view failure when replaying events for persistenceId [{}]. " +
"Last known sequence number [{}]", persistenceId, lastSequenceNr)
}
private def changeState(state: State): Unit = {
currentState = state
}
// TODO There are some duplication of the recovery state management here and in Eventsourced.scala,
// but the enhanced PersistentView will not be based on recovery infrastructure, and
// therefore this code will be replaced anyway
private trait State {
def stateReceive(receive: Receive, message: Any): Unit
def recoveryRunning: Boolean
}
/**
* Processes a loaded snapshot, if any. A loaded snapshot is offered with a `SnapshotOffer`
* message to the actor's `receive`. Then initiates a message replay, either starting
* from the loaded snapshot or from scratch, and switches to `replayStarted` state.
* All incoming messages are stashed.
*
* @param replayMax maximum number of messages to replay.
*/
private def recoveryStarted(replayMax: Long) = new State {
override def toString: String = s"recovery started (replayMax = [${replayMax}])"
override def recoveryRunning: Boolean = true
override def stateReceive(receive: Receive, message: Any) = message match {
case LoadSnapshotResult(sso, toSnr)
sso.foreach {
case SelectedSnapshot(metadata, snapshot)
setLastSequenceNr(metadata.sequenceNr)
PersistentView.super.aroundReceive(receive, SnapshotOffer(metadata, snapshot))
}
changeState(replayStarted(await = true))
journal ! ReplayMessages(lastSequenceNr + 1L, toSnr, replayMax, persistenceId, self)
case other internalStash.stash()
}
}
/**
* Processes replayed messages, if any. The actor's `receive` is invoked with the replayed
* events.
*
* If replay succeeds it switches to `initializing` state and requests the highest stored sequence
* number from the journal.
*
* If replay succeeds the `onReplaySuccess` callback method is called, otherwise `onReplayError` is called and
* remaining replay events are consumed (ignored).
*
* If processing of a replayed event fails, the exception is caught and
* stored for later and state is changed to `recoveryFailed`.
*
* All incoming messages are stashed when `await` is true.
*/
private def replayStarted(await: Boolean) = new State {
override def toString: String = s"replay started"
override def recoveryRunning: Boolean = true
override def stateReceive(receive: Receive, message: Any) = message match {
case ReplayedMessage(p)
try {
updateLastSequenceNr(p)
PersistentView.super.aroundReceive(receive, p.payload)
} catch {
case NonFatal(t)
changeState(ignoreRemainingReplay(t))
}
case _: RecoverySuccess
onReplayComplete()
case ReplayMessagesFailure(cause)
try onReplayError(cause) finally onReplayComplete()
case ScheduledUpdate(_) // ignore
case Update(a, _)
if (a)
internalStash.stash()
case other
if (await)
internalStash.stash()
else {
try {
PersistentView.super.aroundReceive(receive, other)
} catch {
case NonFatal(t)
changeState(ignoreRemainingReplay(t))
}
}
}
/**
* Switches to `idle`
*/
private def onReplayComplete(): Unit = {
changeState(idle)
internalStash.unstashAll()
}
}
/**
* Consumes remaining replayed messages and then throw the exception.
*/
private def ignoreRemainingReplay(cause: Throwable) = new State {
override def toString: String = "replay failed"
override def recoveryRunning: Boolean = true
override def stateReceive(receive: Receive, message: Any) = message match {
case ReplayedMessage(p)
case ReplayMessagesFailure(_)
replayCompleted(receive)
// journal couldn't tell the maximum stored sequence number, hence the next
// replay must be a full replay (up to the highest stored sequence number)
// Recover(lastSequenceNr) is sent by preRestart
setLastSequenceNr(Long.MaxValue)
case _: RecoverySuccess replayCompleted(receive)
case _ internalStash.stash()
}
def replayCompleted(receive: Receive): Unit = {
// in case the actor resumes the state must be `idle`
changeState(idle)
internalStash.unstashAll()
throw cause
}
}
/**
* When receiving an [[Update]] request, switches to `replayStarted` state and triggers
* an incremental message replay. Invokes the actor's current behavior for any other
* received message.
*/
private val idle: State = new State {
override def toString: String = "idle"
override def recoveryRunning: Boolean = false
override def stateReceive(receive: Receive, message: Any): Unit = message match {
case ReplayedMessage(p)
// we can get ReplayedMessage here if it was stashed by user during replay
// unwrap the payload
PersistentView.super.aroundReceive(receive, p.payload)
case ScheduledUpdate(replayMax) changeStateToReplayStarted(await = false, replayMax)
case Update(awaitUpdate, replayMax) changeStateToReplayStarted(awaitUpdate, replayMax)
case other PersistentView.super.aroundReceive(receive, other)
}
def changeStateToReplayStarted(await: Boolean, replayMax: Long): Unit = {
changeState(replayStarted(await))
journal ! ReplayMessages(lastSequenceNr + 1L, Long.MaxValue, replayMax, persistenceId, self)
}
}
}
/**
* Java API.
*
* @see [[PersistentView]]
*/
@deprecated("use Persistence Query instead", "2.4")
abstract class UntypedPersistentView extends UntypedActor with PersistentView
/**
* Java API: compatible with lambda expressions (to be used with [[akka.japi.pf.ReceiveBuilder]])
*
* @see [[PersistentView]]
*/
@deprecated("use Persistence Query instead", "2.4")
abstract class AbstractPersistentView extends AbstractActor with PersistentView

View file

@ -43,7 +43,7 @@ trait AsyncWriteJournal extends Actor with WriteJournalBase with AsyncRecovery {
case "fail" ReplayFilter.Fail
case "warn" ReplayFilter.Warn
case other throw new IllegalArgumentException(
s"invalid replay-filter.mode [$other], supported values [off, repair, fail, warn]")
s"invalid replay-filter.mode [$other], supported values [off, repair-by-discard-old, fail, warn]")
}
private def isReplayFilterEnabled: Boolean = replayFilterMode != ReplayFilter.Disabled
private val replayFilterWindowSize: Int = config.getInt("replay-filter.window-size")

View file

@ -20,9 +20,15 @@ import com.typesafe.config.Config
*
* Journal backed by a local LevelDB store. For production use.
*/
private[persistence] class LeveldbJournal(val config: Config) extends AsyncWriteJournal with LeveldbStore {
private[persistence] class LeveldbJournal(cfg: Config) extends AsyncWriteJournal with LeveldbStore {
import LeveldbJournal._
def this() = this(LeveldbStore.emptyConfig)
override def prepareConfig: Config =
if (cfg ne LeveldbStore.emptyConfig) cfg
else context.system.settings.config.getConfig("akka.persistence.journal.leveldb")
override def receivePluginInternal: Receive = {
case r @ ReplayTaggedMessages(fromSequenceNr, toSequenceNr, max, tag, replyTo)
import context.dispatcher

View file

@ -6,25 +6,33 @@
package akka.persistence.journal.leveldb
import java.io.File
import scala.collection.mutable
import akka.actor._
import akka.persistence._
import akka.persistence.journal.WriteJournalBase
import akka.serialization.SerializationExtension
import org.iq80.leveldb._
import scala.collection.immutable
import scala.util._
import scala.concurrent.Future
import scala.util.control.NonFatal
import akka.persistence.journal.Tagged
import com.typesafe.config.Config
import com.typesafe.config.{ Config, ConfigFactory }
private[persistence] object LeveldbStore {
val emptyConfig = ConfigFactory.empty()
}
/**
* INTERNAL API.
*/
private[persistence] trait LeveldbStore extends Actor with WriteJournalBase with LeveldbIdMapping with LeveldbRecovery {
val config: Config
def prepareConfig: Config
val config: Config = prepareConfig
val nativeLeveldb = config.getBoolean("native")
val leveldbOptions = new Options().createIfMissing(true)

View file

@ -18,10 +18,16 @@ import scala.concurrent.Future
* set for each actor system that uses the store via `SharedLeveldbJournal.setStore`. The
* shared LevelDB store is for testing only.
*/
class SharedLeveldbStore(cfg: Config) extends { override val config = cfg.getConfig("store") } with LeveldbStore {
class SharedLeveldbStore(cfg: Config) extends LeveldbStore {
import AsyncWriteTarget._
import context.dispatcher
def this() = this(LeveldbStore.emptyConfig)
override def prepareConfig: Config =
if (cfg ne LeveldbStore.emptyConfig) cfg.getConfig("store")
else context.system.settings.config.getConfig("akka.persistence.journal.leveldb-shared.store")
def receive = {
case WriteMessages(messages)
// TODO it would be nice to DRY this with AsyncWriteJournal, but this is using

View file

@ -16,6 +16,7 @@ import scala.concurrent.duration
import akka.actor.Actor
import scala.concurrent.duration.Duration
import scala.language.existentials
import java.io.NotSerializableException
/**
* Marker trait for all protobuf-serializable messages in `akka.persistence`.
@ -71,7 +72,7 @@ class MessageSerializer(val system: ExtendedActorSystem) extends BaseSerializer
case AtLeastOnceDeliverySnapshotClass atLeastOnceDeliverySnapshot(mf.AtLeastOnceDeliverySnapshot.parseFrom(bytes))
case PersistentStateChangeEventClass stateChange(mf.PersistentStateChangeEvent.parseFrom(bytes))
case PersistentFSMSnapshotClass persistentFSMSnapshot(mf.PersistentFSMSnapshot.parseFrom(bytes))
case _ throw new IllegalArgumentException(s"Can't deserialize object of type ${c}")
case _ throw new NotSerializableException(s"Can't deserialize object of type ${c}")
}
}

View file

@ -7,7 +7,7 @@ package akka.persistence
import java.util.concurrent.atomic.AtomicInteger
import akka.actor._
import akka.testkit.{ ImplicitSender, TestLatch, TestProbe }
import akka.testkit.{ EventFilter, ImplicitSender, TestLatch, TestProbe }
import com.typesafe.config.Config
import scala.collection.immutable.Seq
@ -646,6 +646,23 @@ object PersistentActorSpec {
}
}
class PersistInRecovery(name: String) extends ExamplePersistentActor(name) {
override def receiveRecover = {
case Evt("invalid")
persist(Evt("invalid-recovery"))(updateState)
case e: Evt updateState(e)
case RecoveryCompleted
persistAsync(Evt("rc-1"))(updateState)
persist(Evt("rc-2"))(updateState)
persistAsync(Evt("rc-3"))(updateState)
}
override def onRecoveryFailure(cause: scala.Throwable, event: Option[Any]): Unit = ()
def receiveCommand = commonBehavior orElse {
case Cmd(d) persist(Evt(d))(updateState)
}
}
}
abstract class PersistentActorSpec(config: Config) extends PersistenceSpec(config) with ImplicitSender {
@ -661,6 +678,20 @@ abstract class PersistentActorSpec(config: Config) extends PersistenceSpec(confi
}
"A persistent actor" must {
"fail fast if persistenceId is null" in {
import akka.testkit.filterEvents
filterEvents(EventFilter[ActorInitializationException]()) {
EventFilter.error(message = "requirement failed: persistenceId is [null] for PersistentActor") intercept {
val ref = system.actorOf(Props(new NamedPersistentActor(null) {
override def receiveRecover: Receive = Actor.emptyBehavior
override def receiveCommand: Receive = Actor.emptyBehavior
}))
watch(ref)
expectTerminated(ref)
}
}
}
"recover from persisted events" in {
val persistentActor = namedPersistentActor[Behavior1PersistentActor]
persistentActor ! GetState
@ -1119,6 +1150,20 @@ abstract class PersistentActorSpec(config: Config) extends PersistenceSpec(confi
persistentActor ! "Boom"
expectMsg("failed with TestException while processing Boom")
}
"be able to persist events that happen during recovery" in {
val persistentActor = namedPersistentActor[PersistInRecovery]
persistentActor ! GetState
expectMsg(List("a-1", "a-2", "rc-1", "rc-2"))
persistentActor ! GetState
expectMsg(List("a-1", "a-2", "rc-1", "rc-2", "rc-3"))
persistentActor ! Cmd("invalid")
persistentActor ! GetState
expectMsg(List("a-1", "a-2", "rc-1", "rc-2", "rc-3", "invalid"))
watch(persistentActor)
persistentActor ! "boom"
expectTerminated(persistentActor)
}
}
}

View file

@ -1,352 +0,0 @@
/**
* Copyright (C) 2014-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.persistence
import akka.actor._
import akka.persistence.JournalProtocol.ReplayMessages
import akka.testkit._
import com.typesafe.config.Config
import scala.concurrent.duration._
object PersistentViewSpec {
private class TestPersistentActor(name: String, probe: ActorRef) extends NamedPersistentActor(name) {
def receiveCommand = {
case msg persist(msg) { m probe ! s"${m}-${lastSequenceNr}" }
}
override def receiveRecover: Receive = {
case _ // do nothing...
}
}
private class TestPersistentView(name: String, probe: ActorRef, interval: FiniteDuration, var failAt: Option[String]) extends PersistentView {
def this(name: String, probe: ActorRef, interval: FiniteDuration) =
this(name, probe, interval, None)
def this(name: String, probe: ActorRef) =
this(name, probe, 100.milliseconds)
override def autoUpdateInterval: FiniteDuration = interval.dilated(context.system)
override val persistenceId: String = name
override val viewId: String = name + "-view"
var last: String = _
def receive = {
case "get"
probe ! last
case "boom"
throw new TestException("boom")
case payload if isPersistent && shouldFailOn(payload)
throw new TestException("boom")
case payload if isPersistent
last = s"replicated-${payload}-${lastSequenceNr}"
probe ! last
}
override def postRestart(reason: Throwable): Unit = {
super.postRestart(reason)
failAt = None
}
def shouldFailOn(m: Any): Boolean =
failAt.foldLeft(false) { (a, f) ⇒ a || (m == f) }
}
private class PassiveTestPersistentView(name: String, probe: ActorRef, var failAt: Option[String]) extends PersistentView {
override val persistenceId: String = name
override val viewId: String = name + "-view"
override def autoUpdate: Boolean = false
override def autoUpdateReplayMax: Long = 0L // no message replay during initial recovery
var last: String = _
def receive = {
case "get"
probe ! last
case payload if isPersistent && shouldFailOn(payload)
throw new TestException("boom")
case payload
last = s"replicated-${payload}-${lastSequenceNr}"
}
override def postRestart(reason: Throwable): Unit = {
super.postRestart(reason)
failAt = None
}
def shouldFailOn(m: Any): Boolean =
failAt.foldLeft(false) { (a, f) ⇒ a || (m == f) }
}
private class ActiveTestPersistentView(name: String, probe: ActorRef) extends PersistentView {
override val persistenceId: String = name
override val viewId: String = name + "-view"
override def autoUpdateInterval: FiniteDuration = 50.millis
override def autoUpdateReplayMax: Long = 2
def receive = {
case payload ⇒
probe ! s"replicated-${payload}-${lastSequenceNr}"
}
}
private class BecomingPersistentView(name: String, probe: ActorRef) extends PersistentView {
override def persistenceId = name
override def viewId = name + "-view"
def receive = Actor.emptyBehavior
context.become {
case payload ⇒ probe ! s"replicated-${payload}-${lastSequenceNr}"
}
}
private class StashingPersistentView(name: String, probe: ActorRef) extends PersistentView {
override def persistenceId = name
override def viewId = name + "-view"
def receive = {
case "other" stash()
case "unstash"
unstashAll()
context.become {
case msg probe ! s"$msg-${lastSequenceNr}"
}
case msg stash()
}
}
private class PersistentOrNotTestPersistentView(name: String, probe: ActorRef) extends PersistentView {
override val persistenceId: String = name
override val viewId: String = name + "-view"
def receive = {
case payload if isPersistent ⇒ probe ! s"replicated-${payload}-${lastSequenceNr}"
case payload ⇒ probe ! s"normal-${payload}-${lastSequenceNr}"
}
}
private class SnapshottingPersistentView(name: String, probe: ActorRef) extends PersistentView {
override val persistenceId: String = name
override val viewId: String = s"${name}-replicator"
override def autoUpdateInterval: FiniteDuration = 100.microseconds.dilated(context.system)
var last: String = _
def receive = {
case "get"
probe ! last
case "snap"
saveSnapshot(last)
case "restart"
throw new TestException("restart requested")
case SaveSnapshotSuccess(_)
probe ! "snapped"
case SnapshotOffer(metadata, snapshot: String)
last = snapshot
probe ! last
case payload
last = s"replicated-${payload}-${lastSequenceNr}"
probe ! last
}
}
}
abstract class PersistentViewSpec(config: Config) extends PersistenceSpec(config) with ImplicitSender {
import akka.persistence.PersistentViewSpec._
var persistentActor: ActorRef = _
var view: ActorRef = _
var persistentActorProbe: TestProbe = _
var viewProbe: TestProbe = _
override protected def beforeEach(): Unit = {
super.beforeEach()
persistentActorProbe = TestProbe()
viewProbe = TestProbe()
persistentActor = system.actorOf(Props(classOf[TestPersistentActor], name, persistentActorProbe.ref))
persistentActor ! "a"
persistentActor ! "b"
persistentActorProbe.expectMsg("a-1")
persistentActorProbe.expectMsg("b-2")
}
override protected def afterEach(): Unit = {
system.stop(persistentActor)
system.stop(view)
super.afterEach()
}
def subscribeToReplay(probe: TestProbe): Unit =
system.eventStream.subscribe(probe.ref, classOf[ReplayMessages])
"A persistent view" must {
"receive past updates from a persistent actor" in {
view = system.actorOf(Props(classOf[TestPersistentView], name, viewProbe.ref))
viewProbe.expectMsg("replicated-a-1")
viewProbe.expectMsg("replicated-b-2")
}
"receive live updates from a persistent actor" in {
view = system.actorOf(Props(classOf[TestPersistentView], name, viewProbe.ref))
viewProbe.expectMsg("replicated-a-1")
viewProbe.expectMsg("replicated-b-2")
persistentActor ! "c"
viewProbe.expectMsg("replicated-c-3")
}
"run updates at specified interval" in {
view = system.actorOf(Props(classOf[TestPersistentView], name, viewProbe.ref, 2.seconds))
// initial update is done on start
viewProbe.expectMsg("replicated-a-1")
viewProbe.expectMsg("replicated-b-2")
// a live update is only replicated on the next scheduled auto-update (2-second interval)
persistentActor ! "c"
viewProbe.expectNoMsg(1.second)
viewProbe.expectMsg("replicated-c-3")
}
"run updates on user request" in {
view = system.actorOf(Props(classOf[TestPersistentView], name, viewProbe.ref, 5.seconds))
viewProbe.expectMsg("replicated-a-1")
viewProbe.expectMsg("replicated-b-2")
persistentActor ! "c"
persistentActorProbe.expectMsg("c-3")
view ! Update(await = false)
viewProbe.expectMsg("replicated-c-3")
}
"run updates on user request and await update" in {
view = system.actorOf(Props(classOf[TestPersistentView], name, viewProbe.ref, 5.seconds))
viewProbe.expectMsg("replicated-a-1")
viewProbe.expectMsg("replicated-b-2")
persistentActor ! "c"
persistentActorProbe.expectMsg("c-3")
view ! Update(await = true)
view ! "get"
viewProbe.expectMsg("replicated-c-3")
}
"run updates again on failure outside an update cycle" in {
view = system.actorOf(Props(classOf[TestPersistentView], name, viewProbe.ref, 5.seconds))
viewProbe.expectMsg("replicated-a-1")
viewProbe.expectMsg("replicated-b-2")
view ! "boom"
viewProbe.expectMsg("replicated-a-1")
viewProbe.expectMsg("replicated-b-2")
}
"run updates again on failure during an update cycle" in {
persistentActor ! "c"
persistentActorProbe.expectMsg("c-3")
view = system.actorOf(Props(classOf[TestPersistentView], name, viewProbe.ref, 5.seconds, Some("b")))
viewProbe.expectMsg("replicated-a-1")
viewProbe.expectMsg("replicated-a-1")
viewProbe.expectMsg("replicated-b-2")
viewProbe.expectMsg("replicated-c-3")
}
"run size-limited updates on user request" in {
persistentActor ! "c"
persistentActor ! "d"
persistentActor ! "e"
persistentActor ! "f"
persistentActorProbe.expectMsg("c-3")
persistentActorProbe.expectMsg("d-4")
persistentActorProbe.expectMsg("e-5")
persistentActorProbe.expectMsg("f-6")
view = system.actorOf(Props(classOf[PassiveTestPersistentView], name, viewProbe.ref, None))
view ! Update(await = true, replayMax = 2)
view ! "get"
viewProbe.expectMsg("replicated-b-2")
view ! Update(await = true, replayMax = 1)
view ! "get"
viewProbe.expectMsg("replicated-c-3")
view ! Update(await = true, replayMax = 4)
view ! "get"
viewProbe.expectMsg("replicated-f-6")
}
"run size-limited updates automatically" in {
val replayProbe = TestProbe()
persistentActor ! "c"
persistentActor ! "d"
persistentActorProbe.expectMsg("c-3")
persistentActorProbe.expectMsg("d-4")
subscribeToReplay(replayProbe)
view = system.actorOf(Props(classOf[ActiveTestPersistentView], name, viewProbe.ref))
viewProbe.expectMsg("replicated-a-1")
viewProbe.expectMsg("replicated-b-2")
viewProbe.expectMsg("replicated-c-3")
viewProbe.expectMsg("replicated-d-4")
replayProbe.expectMsgPF() { case ReplayMessages(1L, _, 2L, _, _) ⇒ }
replayProbe.expectMsgPF() { case ReplayMessages(3L, _, 2L, _, _) ⇒ }
replayProbe.expectMsgPF() { case ReplayMessages(5L, _, 2L, _, _) ⇒ }
}
"support context.become" in {
view = system.actorOf(Props(classOf[BecomingPersistentView], name, viewProbe.ref))
viewProbe.expectMsg("replicated-a-1")
viewProbe.expectMsg("replicated-b-2")
}
"check if an incoming message is persistent" in {
persistentActor ! "c"
persistentActorProbe.expectMsg("c-3")
view = system.actorOf(Props(classOf[PersistentOrNotTestPersistentView], name, viewProbe.ref))
view ! "d"
view ! "e"
viewProbe.expectMsg("replicated-a-1")
viewProbe.expectMsg("replicated-b-2")
viewProbe.expectMsg("replicated-c-3")
viewProbe.expectMsg("normal-d-3")
viewProbe.expectMsg("normal-e-3")
persistentActor ! "f"
viewProbe.expectMsg("replicated-f-4")
}
"take snapshots" in {
view = system.actorOf(Props(classOf[SnapshottingPersistentView], name, viewProbe.ref))
viewProbe.expectMsg("replicated-a-1")
viewProbe.expectMsg("replicated-b-2")
view ! "snap"
viewProbe.expectMsg("snapped")
view ! "restart"
persistentActor ! "c"
viewProbe.expectMsg("replicated-b-2")
viewProbe.expectMsg("replicated-c-3")
}
"support stash" in {
view = system.actorOf(Props(classOf[StashingPersistentView], name, viewProbe.ref))
view ! "other"
view ! "unstash"
viewProbe.expectMsg("a-2") // note that the lastSequenceNumber is 2, since we have replayed b-2
viewProbe.expectMsg("b-2")
viewProbe.expectMsg("other-2")
}
}
}
class LeveldbPersistentViewSpec extends PersistentViewSpec(PersistenceSpec.config("leveldb", "LeveldbPersistentViewSpec"))
class InmemPersistentViewSpec extends PersistentViewSpec(PersistenceSpec.config("inmem", "InmemPersistentViewSpec"))

View file

@ -572,6 +572,9 @@ akka {
}
# DEPRECATED, since 2.5.0
# The netty.udp transport is deprecated, please use Artery instead.
# See: http://doc.akka.io/docs/akka/2.4/scala/remoting-artery.html
netty.udp = ${akka.remote.netty.tcp}
netty.udp {
transport-protocol = udp
@ -616,14 +619,9 @@ akka {
# "SHA1PRNG" => Can be slow because of blocking issues on Linux
# "AES128CounterSecureRNG" => fastest startup and based on AES encryption
# algorithm
# "AES256CounterSecureRNG"
#
# The following are deprecated in Akka 2.4. They use one of 3 possible
# seed sources, depending on availability: /dev/random, random.org and
# SecureRandom (provided by Java)
# "AES128CounterInetRNG"
# "AES256CounterInetRNG" (Install JCE Unlimited Strength Jurisdiction
# "AES256CounterSecureRNG" (Install JCE Unlimited Strength Jurisdiction
# Policy Files first)
#
# Setting a value here may require you to supply the appropriate cipher
# suite (see enabled-algorithms section above)
random-number-generator = ""
@ -634,21 +632,20 @@ akka {
# checks if the passive side (TLS server side) sends over a trusted certificate. With the flag turned on,
# the passive side will also request and verify a certificate from the connecting peer.
#
# To prevent man-in-the-middle attacks you should enable this setting. For compatibility reasons it is
# still set to 'off' per default.
# To prevent man-in-the-middle attacks this setting is enabled by default.
#
# Note: Nodes that are configured with this setting to 'on' might not be able to receive messages from nodes that
# run on older versions of akka-remote. This is because in older versions of Akka the active side of the remoting
# connection will not send over certificates.
# run on older versions of akka-remote. This is because in versions of Akka < 2.4.12 the active side of the remoting
# connection will not send over certificates even if asked.
#
# However, starting from the version this setting was added, even with this setting "off", the active side
# (TLS client side) will use the given key-store to send over a certificate if asked. A rolling upgrades from
# older versions of Akka can therefore work like this:
# - upgrade all nodes to an Akka version supporting this flag, keeping it off
# - then switch the flag on and do again a rolling upgrade of all nodes
# However, starting with Akka 2.4.12, even with this setting "off", the active side (TLS client side)
# will use the given key-store to send over a certificate if asked. A rolling upgrade from versions of
# Akka < 2.4.12 can therefore work like this:
# - upgrade all nodes to an Akka version >= 2.4.12, in the best case the latest version, but keep this setting at "off"
# - then switch this flag to "on" and do again a rolling upgrade of all nodes
# The first step ensures that all nodes will send over a certificate when asked to. The second
# step will ensure that all nodes finally enforce the secure checking of client certificates.
require-mutual-authentication = off
require-mutual-authentication = on
}
}
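Because the default now flips to "on", a rolling upgrade from Akka < 2.4.12 has to keep the flag at "off" during the first phase described above. The following is a minimal sketch of that phase-one override, assuming the classic netty.ssl transport and its akka.remote.netty.ssl.security config path; the system name and setup are illustrative only.

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object RollingUpgradePhaseOneSketch extends App {
  // Phase 1: every node already presents a certificate when asked, but mutual
  // authentication is not yet enforced; switch to "on" only in phase 2, once
  // all nodes run an Akka version >= 2.4.12.
  val phaseOne = ConfigFactory.parseString(
    "akka.remote.netty.ssl.security.require-mutual-authentication = off")
  val system = ActorSystem("upgrade-sample", phaseOne.withFallback(ConfigFactory.load()))
}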

View file

@ -984,7 +984,18 @@ private[remote] class EndpointReader(
if (msg.reliableDeliveryEnabled) {
ackedReceiveBuffer = ackedReceiveBuffer.receive(msg)
deliverAndAck()
} else msgDispatch.dispatch(msg.recipient, msg.recipientAddress, msg.serializedMessage, msg.senderOption)
} else try
msgDispatch.dispatch(msg.recipient, msg.recipientAddress, msg.serializedMessage, msg.senderOption)
catch {
case e: NotSerializableException ⇒
val sm = msg.serializedMessage
log.warning(
"Serializer not defined for message with serializer id [{}] and manifest [{}]. " +
"Transient association error (association remains live). {}",
sm.getSerializerId,
if (sm.hasMessageManifest) sm.getMessageManifest.toStringUtf8 else "",
e.getMessage)
}
case None ⇒
}

View file

@ -1,42 +0,0 @@
/**
* Copyright (C) 2009-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.remote.security.provider
import org.uncommons.maths.random.{ AESCounterRNG }
import SeedSize.Seed128
/**
* INTERNAL API
* This class is a wrapper around the 128-bit AESCounterRNG algorithm provided by http://maths.uncommons.org/
* It uses the default seed generator which uses one of the following 3 random seed sources:
* Depending on availability: random.org, /dev/random, and SecureRandom (provided by Java)
* The only method used by netty ssl is engineNextBytes(bytes)
*/
@deprecated("Use AES128CounterSecureRNG instead", "2.4")
class AES128CounterInetRNG extends java.security.SecureRandomSpi {
private val rng = new AESCounterRNG(engineGenerateSeed(Seed128))
/**
* This is managed internally by AESCounterRNG
*/
override protected def engineSetSeed(seed: Array[Byte]): Unit = ()
/**
* Generates a user-specified number of random bytes.
*
* @param bytes the array to be filled in with random bytes.
*/
override protected def engineNextBytes(bytes: Array[Byte]): Unit = rng.nextBytes(bytes)
/**
* Unused method
* Returns the given number of seed bytes. This call may be used to
* seed other random number generators.
*
* @param numBytes the number of seed bytes to generate.
* @return the seed bytes.
*/
override protected def engineGenerateSeed(numBytes: Int): Array[Byte] = InternetSeedGenerator.getInstance.generateSeed(numBytes)
}

View file

@ -1,42 +0,0 @@
/**
* Copyright (C) 2009-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.remote.security.provider
import org.uncommons.maths.random.{ AESCounterRNG }
import SeedSize.Seed256
/**
* INTERNAL API
* This class is a wrapper around the 256-bit AESCounterRNG algorithm provided by http://maths.uncommons.org/
* It uses the default seed generator which uses one of the following 3 random seed sources:
* Depending on availability: random.org, /dev/random, and SecureRandom (provided by Java)
* The only method used by netty ssl is engineNextBytes(bytes)
*/
@deprecated("Use AES256CounterSecureRNG instead", "2.4")
class AES256CounterInetRNG extends java.security.SecureRandomSpi {
private val rng = new AESCounterRNG(engineGenerateSeed(Seed256))
/**
* This is managed internally by AESCounterRNG
*/
override protected def engineSetSeed(seed: Array[Byte]): Unit = ()
/**
* Generates a user-specified number of random bytes.
*
* @param bytes the array to be filled in with random bytes.
*/
override protected def engineNextBytes(bytes: Array[Byte]): Unit = rng.nextBytes(bytes)
/**
* Unused method
* Returns the given number of seed bytes. This call may be used to
* seed other random number generators.
*
* @param numBytes the number of seed bytes to generate.
* @return the seed bytes.
*/
override protected def engineGenerateSeed(numBytes: Int): Array[Byte] = InternetSeedGenerator.getInstance.generateSeed(numBytes)
}

View file

@ -14,14 +14,10 @@ object AkkaProvider extends Provider("Akka", 1.0, "Akka provider 1.0 that implem
//SecureRandom
put("SecureRandom.AES128CounterSecureRNG", classOf[AES128CounterSecureRNG].getName)
put("SecureRandom.AES256CounterSecureRNG", classOf[AES256CounterSecureRNG].getName)
put("SecureRandom.AES128CounterInetRNG", classOf[AES128CounterInetRNG].getName)
put("SecureRandom.AES256CounterInetRNG", classOf[AES256CounterInetRNG].getName)
//Implementation type: software or hardware
put("SecureRandom.AES128CounterSecureRNG ImplementedIn", "Software")
put("SecureRandom.AES256CounterSecureRNG ImplementedIn", "Software")
put("SecureRandom.AES128CounterInetRNG ImplementedIn", "Software")
put("SecureRandom.AES256CounterInetRNG ImplementedIn", "Software")
null //Magic null is magic
}
})

View file

@ -1,56 +0,0 @@
// ============================================================================
// Copyright 2006-2010 Daniel W. Dyer
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// ============================================================================
package akka.remote.security.provider
import org.uncommons.maths.random.{ SeedGenerator, SeedException, SecureRandomSeedGenerator, RandomDotOrgSeedGenerator }
import scala.collection.immutable
/**
* INTERNAL API
* Seed generator that maintains multiple strategies for seed
* generation and will delegate to the best one available for the
* current operating environment.
* @author Daniel Dyer
*/
@deprecated("Use another seed generator instead", "2.4")
object InternetSeedGenerator {
/**
* @return The singleton instance of this class.
*/
def getInstance: InternetSeedGenerator = Instance
/**Singleton instance. */
private final val Instance: InternetSeedGenerator = new InternetSeedGenerator
/**Delegate generators. */
private final val Generators: immutable.Seq[SeedGenerator] =
List(
new RandomDotOrgSeedGenerator, // first try the Internet seed generator
new SecureRandomSeedGenerator) // this is last because it always works
}
final class InternetSeedGenerator extends SeedGenerator {
/**
* Generates a seed by trying each of the available strategies in
* turn until one succeeds. Tries the most suitable strategy first
* and eventually degrades to the least suitable (but guaranteed to
* work) strategy.
* @param length The length (in bytes) of the seed.
* @return A random seed of the requested length.
*/
def generateSeed(length: Int): Array[Byte] = InternetSeedGenerator.Generators.view.flatMap(
g ⇒ try Option(g.generateSeed(length)) catch { case _: SeedException ⇒ None }).headOption.getOrElse(throw new IllegalStateException("All available seed generation strategies failed."))
}

View file

@ -8,6 +8,7 @@ import akka.protobuf.ByteString
import akka.remote.{ ContainerFormats, RemoteWatcher }
import akka.serialization.{ BaseSerializer, Serialization, SerializationExtension, SerializerWithStringManifest }
import java.util.Optional
import java.io.NotSerializableException
class MiscMessageSerializer(val system: ExtendedActorSystem) extends SerializerWithStringManifest with BaseSerializer {
@ -152,7 +153,7 @@ class MiscMessageSerializer(val system: ExtendedActorSystem) extends SerializerW
override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
fromBinaryMap.get(manifest) match {
case Some(deserializer) ⇒ deserializer(bytes)
case None ⇒ throw new IllegalArgumentException(
case None ⇒ throw new NotSerializableException(
s"Unimplemented deserialization of message with manifest [$manifest] in [${getClass.getName}]")
}
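With this change an unknown manifest surfaces as NotSerializableException, which the EndpointReader change above logs as a transient association error instead of letting an IllegalArgumentException escalate through supervision. Below is a hedged sketch of a user-defined serializer following the same convention; the class name, identifier and manifest string are illustrative assumptions, not part of this commit.

import java.io.NotSerializableException
import akka.serialization.SerializerWithStringManifest

class MyEventSerializer extends SerializerWithStringManifest {
  // Assumed identifier; must be unique within the actor system's serializer config.
  override def identifier: Int = 9001

  private val EventManifest = "my-event"

  override def manifest(o: AnyRef): String = EventManifest

  override def toBinary(o: AnyRef): Array[Byte] = o.toString.getBytes("UTF-8")

  override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef =
    manifest match {
      case EventManifest ⇒ new String(bytes, "UTF-8")
      // Unknown manifests are reported as NotSerializableException so remoting
      // treats the failure as transient and keeps the association alive.
      case other ⇒ throw new NotSerializableException(
        s"Unimplemented deserialization of message with manifest [$other] in [${getClass.getName}]")
    }
}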

View file

@ -85,10 +85,6 @@ private[akka] class SSLSettings(config: Config) {
case r @ ("AES128CounterSecureRNG" | "AES256CounterSecureRNG")
log.debug("SSL random number generator set to: {}", r)
SecureRandom.getInstance(r, AkkaProvider)
case r @ ("AES128CounterInetRNG" | "AES256CounterInetRNG")
log.warning(LogMarker.Security, "SSL random number generator {} is deprecated, " +
"use AES128CounterSecureRNG or AES256CounterSecureRNG instead", r)
SecureRandom.getInstance(r, AkkaProvider)
case s @ ("SHA1PRNG" | "NativePRNG")
log.debug("SSL random number generator set to: {}", s)
// SHA1PRNG needs /dev/urandom to be the source on Linux to prevent problems with /dev/random blocking

View file

@ -16,6 +16,7 @@ import scala.concurrent.{ Future, Promise }
/**
* INTERNAL API
*/
@deprecated("Deprecated in favour of Artery (the new Aeron/UDP based remoting implementation).", since = "2.5.0")
private[remote] trait UdpHandlers extends CommonHandlers {
override def createHandle(channel: Channel, localAddress: Address, remoteAddress: Address): AssociationHandle =
@ -53,9 +54,12 @@ private[remote] trait UdpHandlers extends CommonHandlers {
/**
* INTERNAL API
*/
@deprecated("Deprecated in favour of Artery (the new Aeron/UDP based remoting implementation).", since = "2.5.0")
private[remote] class UdpServerHandler(_transport: NettyTransport, _associationListenerFuture: Future[AssociationEventListener])
extends ServerHandler(_transport, _associationListenerFuture) with UdpHandlers {
transport.system.log.warning("The netty.udp transport is deprecated, please use Artery instead. See: http://doc.akka.io/docs/akka/2.4/scala/remoting-artery.html")
override def initUdp(channel: Channel, remoteSocketAddress: SocketAddress, msg: ChannelBuffer): Unit =
initInbound(channel, remoteSocketAddress, msg)
}
@ -63,9 +67,12 @@ private[remote] class UdpServerHandler(_transport: NettyTransport, _associationL
/**
* INTERNAL API
*/
@deprecated("Deprecated in favour of Artery (the new Aeron/UDP based remoting implementation).", since = "2.5.0")
private[remote] class UdpClientHandler(_transport: NettyTransport, remoteAddress: Address)
extends ClientHandler(_transport, remoteAddress) with UdpHandlers {
transport.system.log.warning("The netty.udp transport is deprecated, please use Artery instead. See: http://doc.akka.io/docs/akka/2.4/scala/remoting-artery.html")
override def initUdp(channel: Channel, remoteSocketAddress: SocketAddress, msg: ChannelBuffer): Unit =
initOutbound(channel, remoteSocketAddress, msg)
}
@ -94,4 +101,4 @@ private[remote] class UdpAssociationHandle(
override def disassociate(): Unit = try channel.close()
finally transport.udpConnectionTable.remove(transport.addressToSocketAddress(remoteAddress))
}
}
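The deprecation warnings above point users at Artery as the replacement transport. Below is a minimal sketch of enabling Artery on a node, with provider alias, hostname and port as placeholder assumptions; adjust to the actual deployment.

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object ArteryMigrationSketch extends App {
  // Placeholder bind address; Artery replaces the deprecated netty.udp transport.
  val arteryConfig = ConfigFactory.parseString("""
    akka.actor.provider = remote
    akka.remote.artery {
      enabled = on
      canonical.hostname = "127.0.0.1"
      canonical.port = 25520
    }
    """)
  val system = ActorSystem("artery-sample", arteryConfig.withFallback(ConfigFactory.load()))
}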

View file

@ -91,31 +91,6 @@ class Ticket1978AES128CounterSecureRNGSpec extends Ticket1978CommunicationSpec(g
class Ticket1978AES256CounterSecureRNGSpec extends Ticket1978CommunicationSpec(getCipherConfig("AES256CounterSecureRNG", "TLS_RSA_WITH_AES_128_CBC_SHA", "TLS_RSA_WITH_AES_256_CBC_SHA"))
/**
* Both of the `Inet` variants require access to the Internet to access random.org.
*/
class Ticket1978AES128CounterInetRNGSpec extends Ticket1978CommunicationSpec(getCipherConfig("AES128CounterInetRNG", "TLS_RSA_WITH_AES_128_CBC_SHA"))
with InetRNGSpec
/**
* Both of the `Inet` variants require access to the Internet to access random.org.
*/
class Ticket1978AES256CounterInetRNGSpec extends Ticket1978CommunicationSpec(getCipherConfig("AES256CounterInetRNG", "TLS_RSA_WITH_AES_256_CBC_SHA"))
with InetRNGSpec
trait InetRNGSpec { this: Ticket1978CommunicationSpec ⇒
override def preCondition = try {
(new RandomDotOrgSeedGenerator).generateSeed(128)
true
} catch {
case NonFatal(e) ⇒
log.warning("random.org not available: {}", e.getMessage())
false
}
override implicit val timeout: Timeout = Timeout(90.seconds)
}
class Ticket1978DefaultRNGSecureSpec extends Ticket1978CommunicationSpec(getCipherConfig("", "TLS_RSA_WITH_AES_128_CBC_SHA"))
class Ticket1978CrappyRSAWithMD5OnlyHereToMakeSureThingsWorkSpec extends Ticket1978CommunicationSpec(getCipherConfig("", "SSL_RSA_WITH_NULL_MD5"))

View file

@ -12,6 +12,7 @@ import com.typesafe.config.ConfigFactory
import scala.util.control.NoStackTrace
import java.util.Optional
import java.io.NotSerializableException
object MiscMessageSerializerSpec {
val serializationTestOverrides =
@ -103,7 +104,7 @@ class MiscMessageSerializerSpec extends AkkaSpec(MiscMessageSerializerSpec.testC
}
"reject deserialization with invalid manifest" in {
intercept[IllegalArgumentException] {
intercept[NotSerializableException] {
val serializer = new MiscMessageSerializer(system.asInstanceOf[ExtendedActorSystem])
serializer.fromBinary(Array.empty[Byte], "INVALID")
}

View file

@ -1,87 +0,0 @@
/**
* Copyright (C) 2009-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package sample.persistence;
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.japi.pf.ReceiveBuilder;
import akka.persistence.*;
import scala.PartialFunction;
import scala.concurrent.duration.Duration;
import scala.runtime.BoxedUnit;
import java.util.concurrent.TimeUnit;
public class ViewExample {
public static class ExamplePersistentActor extends AbstractPersistentActor {
private int count = 1;
@Override
public String persistenceId() { return "sample-id-4"; }
@Override
public PartialFunction<Object, BoxedUnit> receiveCommand() {
return ReceiveBuilder.
match(String.class, s -> {
System.out.println(String.format("persistentActor received %s (nr = %d)", s, count));
persist(s + count, evt -> {
count += 1;
});
}).
build();
}
@Override
public PartialFunction<Object, BoxedUnit> receiveRecover() {
return ReceiveBuilder.
match(String.class, s -> count += 1).
build();
}
}
public static class ExampleView extends AbstractPersistentView {
private int numReplicated = 0;
@Override public String persistenceId() { return "sample-id-4"; }
@Override public String viewId() { return "sample-view-id-4"; }
public ExampleView() {
receive(ReceiveBuilder.
match(Object.class, m -> isPersistent(), msg -> {
numReplicated += 1;
System.out.println(String.format("view received %s (num replicated = %d)",
msg,
numReplicated));
}).
match(SnapshotOffer.class, so -> {
numReplicated = (Integer) so.snapshot();
System.out.println(String.format("view received snapshot offer %s (metadata = %s)",
numReplicated,
so.metadata()));
}).
match(String.class, s -> s.equals("snap"), s -> saveSnapshot(numReplicated)).build()
);
}
}
public static void main(String... args) throws Exception {
final ActorSystem system = ActorSystem.create("example");
final ActorRef persistentActor = system.actorOf(Props.create(ExamplePersistentActor.class));
final ActorRef view = system.actorOf(Props.create(ExampleView.class));
system.scheduler()
.schedule(Duration.Zero(),
Duration.create(2, TimeUnit.SECONDS),
persistentActor,
"scheduled",
system.dispatcher(),
null);
system.scheduler()
.schedule(Duration.Zero(), Duration.create(5, TimeUnit.SECONDS), view, "snap", system.dispatcher(), null);
}
}

View file

@ -1,78 +0,0 @@
package sample.persistence;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.japi.Procedure;
import akka.persistence.SnapshotOffer;
import akka.persistence.UntypedPersistentActor;
import akka.persistence.UntypedPersistentView;
import scala.concurrent.duration.Duration;
import java.util.concurrent.TimeUnit;
public class PersistentViewExample {
public static class ExamplePersistentActor extends UntypedPersistentActor {
@Override
public String persistenceId() { return "sample-id-4"; }
private int count = 1;
@Override
public void onReceiveRecover(Object message) {
if (message instanceof String) {
count += 1;
} else {
unhandled(message);
}
}
@Override
public void onReceiveCommand(Object message) {
if (message instanceof String) {
String s = (String) message;
System.out.println(String.format("persistentActor received %s (nr = %d)", s, count));
persist(s + count, new Procedure<String>() {
public void apply(String evt) {
count += 1;
}
});
} else {
unhandled(message);
}
}
}
public static class ExampleView extends UntypedPersistentView {
private int numReplicated = 0;
@Override public String persistenceId() { return "sample-id-4"; }
@Override public String viewId() { return "sample-view-id-4"; }
@Override
public void onReceive(Object message) throws Exception {
if (isPersistent()) {
numReplicated += 1;
System.out.println(String.format("view received %s (num replicated = %d)", message, numReplicated));
} else if (message instanceof SnapshotOffer) {
SnapshotOffer so = (SnapshotOffer)message;
numReplicated = (Integer)so.snapshot();
System.out.println(String.format("view received snapshot offer %s (metadata = %s)", numReplicated, so.metadata()));
} else if (message.equals("snap")) {
saveSnapshot(numReplicated);
} else {
unhandled(message);
}
}
}
public static void main(String... args) throws Exception {
final ActorSystem system = ActorSystem.create("example");
final ActorRef persistentActor = system.actorOf(Props.create(ExamplePersistentActor.class));
final ActorRef view = system.actorOf(Props.create(ExampleView.class));
system.scheduler().schedule(Duration.Zero(), Duration.create(2, TimeUnit.SECONDS), persistentActor, "scheduled", system.dispatcher(), null);
system.scheduler().schedule(Duration.Zero(), Duration.create(5, TimeUnit.SECONDS), view, "snap", system.dispatcher(), null);
}
}

View file

@ -1,62 +0,0 @@
package sample.persistence
import scala.concurrent.duration._
import akka.actor._
import akka.persistence._
object ViewExample extends App {
class ExamplePersistentActor extends PersistentActor {
override def persistenceId = "sample-id-4"
var count = 1
def receiveCommand: Receive = {
case payload: String =>
println(s"persistentActor received ${payload} (nr = ${count})")
persist(payload + count) { evt =>
count += 1
}
}
def receiveRecover: Receive = {
case _: String => count += 1
}
}
class ExampleView extends PersistentView {
private var numReplicated = 0
override def persistenceId: String = "sample-id-4"
override def viewId = "sample-view-id-4"
def receive = {
case "snap" =>
println(s"view saving snapshot")
saveSnapshot(numReplicated)
case SnapshotOffer(metadata, snapshot: Int) =>
numReplicated = snapshot
println(s"view received snapshot offer ${snapshot} (metadata = ${metadata})")
case payload if isPersistent =>
numReplicated += 1
println(s"view replayed event ${payload} (num replicated = ${numReplicated})")
case SaveSnapshotSuccess(metadata) =>
println(s"view saved snapshot (metadata = ${metadata})")
case SaveSnapshotFailure(metadata, reason) =>
println(s"view snapshot failure (metadata = ${metadata}), caused by ${reason}")
case payload =>
println(s"view received other message ${payload}")
}
}
val system = ActorSystem("example")
val persistentActor = system.actorOf(Props(classOf[ExamplePersistentActor]))
val view = system.actorOf(Props(classOf[ExampleView]))
import system.dispatcher
system.scheduler.schedule(Duration.Zero, 2.seconds, persistentActor, "scheduled")
system.scheduler.schedule(Duration.Zero, 5.seconds, view, "snap")
}

View file

@ -7,8 +7,7 @@ import akka.NotUsed
import akka.stream._
import akka.stream.testkit._
import scala.concurrent.Await
import scala.concurrent.Future
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._
class GraphMatValueSpec extends StreamSpec {
@ -227,5 +226,19 @@ class GraphMatValueSpec extends StreamSpec {
matValue should ===(NotUsed)
}
"not ignore materialized value of indentity flow which is optimized away" in {
implicit val mat = ActorMaterializer(ActorMaterializerSettings(system).withAutoFusing(false))
val (m1, m2) = Source.single(1).viaMat(Flow[Int])(Keep.both).to(Sink.ignore).run()
m1 should ===(NotUsed)
m2 should ===(NotUsed)
// Fails with ClassCastException if value is wrong
val m3: Promise[Option[Int]] = Source.maybe[Int].viaMat(Flow[Int])(Keep.left).to(Sink.ignore).run()
m3.success(None)
val m4 = Source.single(1).viaMat(Flow[Int])(Keep.right).to(Sink.ignore).run()
m4 should ===(NotUsed)
}
}
}

View file

@ -43,8 +43,14 @@ final class Source[+Out, +Mat](override val module: Module)
override def via[T, Mat2](flow: Graph[FlowShape[Out, T], Mat2]): Repr[T] = viaMat(flow)(Keep.left)
override def viaMat[T, Mat2, Mat3](flow: Graph[FlowShape[Out, T], Mat2])(combine: (Mat, Mat2) ⇒ Mat3): Source[T, Mat3] = {
if (flow.module eq GraphStages.Identity.module) this.asInstanceOf[Source[T, Mat3]]
else {
if (flow.module eq GraphStages.Identity.module) {
if (combine eq Keep.left)
this.asInstanceOf[Source[T, Mat3]]
else if (combine eq Keep.right)
this.mapMaterializedValue((_) ⇒ NotUsed).asInstanceOf[Source[T, Mat3]]
else
this.mapMaterializedValue(combine(_, NotUsed.asInstanceOf[Mat2])).asInstanceOf[Source[T, Mat3]]
} else {
val flowCopy = flow.module.carbonCopy
new Source(
module

View file

@ -455,7 +455,7 @@ abstract class GraphStageLogic private[stream] (val inCount: Int, val outCount:
// Detailed error information should not add overhead to the hot path
ReactiveStreamsCompliance.requireNonNullElement(elem)
require(!isClosed(out), s"Cannot pull closed port ($out)")
require(!isClosed(out), s"Cannot push closed port ($out)")
require(isAvailable(out), s"Cannot push port ($out) twice")
// No error, just InClosed caused the actual pull to be ignored, but the status flag still needs to be flipped

View file

@ -6,9 +6,12 @@ package akka.typed
import scala.concurrent.ExecutionContext
import akka.{ actor ⇒ a, event ⇒ e }
import java.util.concurrent.ThreadFactory
import akka.actor.setup.ActorSystemSetup
import com.typesafe.config.{ Config, ConfigFactory }
import scala.concurrent.{ ExecutionContextExecutor, Future }
import akka.typed.adapter.{ PropsAdapter, ActorSystemAdapter }
import akka.typed.adapter.{ ActorSystemAdapter, PropsAdapter }
import akka.util.Timeout
/**
@ -163,14 +166,15 @@ object ActorSystem {
* system typed and untyped actors can coexist.
*/
def adapter[T](name: String, guardianBehavior: Behavior[T],
guardianDeployment: DeploymentConfig = EmptyDeploymentConfig,
config: Option[Config] = None,
classLoader: Option[ClassLoader] = None,
executionContext: Option[ExecutionContext] = None): ActorSystem[T] = {
guardianDeployment: DeploymentConfig = EmptyDeploymentConfig,
config: Option[Config] = None,
classLoader: Option[ClassLoader] = None,
executionContext: Option[ExecutionContext] = None,
actorSystemSettings: ActorSystemSetup = ActorSystemSetup.empty): ActorSystem[T] = {
Behavior.validateAsInitial(guardianBehavior)
val cl = classLoader.getOrElse(akka.actor.ActorSystem.findClassLoader())
val appConfig = config.getOrElse(ConfigFactory.load(cl))
val untyped = new a.ActorSystemImpl(name, appConfig, cl, executionContext, Some(PropsAdapter(guardianBehavior, guardianDeployment)))
val untyped = new a.ActorSystemImpl(name, appConfig, cl, executionContext, Some(PropsAdapter(guardianBehavior, guardianDeployment)), actorSystemSettings)
untyped.start()
new ActorSystemAdapter(untyped)
}
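A minimal sketch of bootstrapping a typed system through this adapter while passing programmatic settings; the system name is an assumption and the guardian behavior is left as a caller-supplied parameter, since its definition is application specific.

import akka.actor.setup.ActorSystemSetup
import akka.typed.{ ActorSystem, Behavior }

object AdapterBootstrapSketch {
  // Starts an untyped ActorSystemImpl under the hood (see adapter above) and
  // wraps it in a typed ActorSystem; additional Setup instances can be added
  // to the ActorSystemSetup to configure the system programmatically.
  def start[T](guardian: Behavior[T]): ActorSystem[T] =
    ActorSystem.adapter("coexistence-sample", guardian,
      actorSystemSettings = ActorSystemSetup.empty)
}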

View file

@ -70,8 +70,8 @@ object Dependencies {
// For Java 8 Conversions
val java8Compat = Def.setting {"org.scala-lang.modules" %% "scala-java8-compat" % java8CompatVersion.value} // Scala License
val aeronDriver = "io.aeron" % "aeron-driver" % "1.0.4" // ApacheV2
val aeronClient = "io.aeron" % "aeron-client" % "1.0.4" // ApacheV2
val aeronDriver = "io.aeron" % "aeron-driver" % "1.0.5" // ApacheV2
val aeronClient = "io.aeron" % "aeron-client" % "1.0.5" // ApacheV2
object Docs {
val sprayJson = "io.spray" %% "spray-json" % "1.3.2" % "test"

View file

@ -132,7 +132,24 @@ object MiMa extends AutoPlugin {
// object akka.stream.stage.StatefulStage#Stay does not have a correspondent in current version
ProblemFilters.exclude[MissingClassProblem]("akka.stream.stage.StatefulStage$Stay$"),
// object akka.stream.stage.StatefulStage#Finish does not have a correspondent in current version
ProblemFilters.exclude[MissingClassProblem]("akka.stream.stage.StatefulStage$Finish$")
ProblemFilters.exclude[MissingClassProblem]("akka.stream.stage.StatefulStage$Finish$"),
// #21423 removal of deprecated `PersistentView` (in 2.5.x)
ProblemFilters.exclude[MissingClassProblem]("akka.persistence.Update"),
ProblemFilters.exclude[MissingClassProblem]("akka.persistence.Update$"),
ProblemFilters.exclude[MissingClassProblem]("akka.persistence.PersistentView"),
ProblemFilters.exclude[MissingClassProblem]("akka.persistence.PersistentView$"),
ProblemFilters.exclude[MissingClassProblem]("akka.persistence.PersistentView$ScheduledUpdate"),
ProblemFilters.exclude[MissingClassProblem]("akka.persistence.AbstractPersistentView"),
ProblemFilters.exclude[MissingClassProblem]("akka.persistence.UntypedPersistentView"),
ProblemFilters.exclude[MissingClassProblem]("akka.persistence.PersistentView$ScheduledUpdate$"),
ProblemFilters.exclude[MissingClassProblem]("akka.persistence.PersistentView$State"),
// #22015 removal of deprecated AESCounterSecureInetRNGs
ProblemFilters.exclude[MissingClassProblem]("akka.remote.security.provider.AES128CounterInetRNG"),
ProblemFilters.exclude[MissingClassProblem]("akka.remote.security.provider.AES256CounterInetRNG"),
ProblemFilters.exclude[MissingClassProblem]("akka.remote.security.provider.InternetSeedGenerator"),
ProblemFilters.exclude[MissingClassProblem]("akka.remote.security.provider.InternetSeedGenerator$")
)
Map(
@ -647,9 +664,8 @@ object MiMa extends AutoPlugin {
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.persistence.snapshot.local.LocalSnapshotStore.this"),
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.persistence.journal.leveldb.LeveldbStore.configPath"),
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.persistence.journal.leveldb.LeveldbJournal.configPath"),
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.persistence.journal.leveldb.LeveldbJournal.this"),
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.persistence.journal.leveldb.SharedLeveldbStore.configPath"),
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.persistence.journal.leveldb.SharedLeveldbStore.this"),
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.persistence.journal.leveldb.LeveldbStore.prepareConfig"),
// #20737 aligned test sink and test source stage factory methods types
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.stream.testkit.TestSinkStage.apply"),
@ -661,10 +677,14 @@ object MiMa extends AutoPlugin {
// https://github.com/akka/akka/pull/21688
ProblemFilters.exclude[MissingClassProblem]("akka.stream.Fusing$StructuralInfo$"),
ProblemFilters.exclude[MissingClassProblem]("akka.stream.Fusing$StructuralInfo"),
// https://github.com/akka/akka/pull/21989 - add more information in tcp connection shutdown logs (add mapError)
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.stream.scaladsl.FlowOps.mapError")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.stream.scaladsl.FlowOps.mapError"),
// #21894 Programmatic configuration of the ActorSystem
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.ActorSystemImpl.this")
) ++ bcIssuesBetween24and25)
// Entries should be added to a section keyed with the latest released version before the change
)
}
}

View file

@ -88,7 +88,7 @@ object OSGi {
exports(
packages = Seq("akka.stream.*",
"com.typesafe.sslconfig.akka.*"),
imports = Seq(scalaJava8CompatImport())) ++
imports = Seq(scalaJava8CompatImport(), scalaParsingCombinatorImport())) ++
Seq(OsgiKeys.requireBundle := Seq(s"""com.typesafe.sslconfig;bundle-version="${Dependencies.sslConfigVersion}""""))
val streamTestkit = exports(Seq("akka.stream.testkit.*"))
@ -123,6 +123,7 @@ object OSGi {
versionedImport(packageName, s"$epoch.$major", s"$epoch.${major.toInt+1}")
}
def scalaJava8CompatImport(packageName: String = "scala.compat.java8.*") = versionedImport(packageName, "0.7.0", "1.0.0")
def scalaParsingCombinatorImport(packageName: String = "scala.util.parsing.combinator.*") = versionedImport(packageName, "1.0.4", "1.1.0")
def kamonImport(packageName: String = "kamon.sigar.*") = optionalResolution(versionedImport(packageName, "1.6.5", "1.6.6"))
def sigarImport(packageName: String = "org.hyperic.*") = optionalResolution(versionedImport(packageName, "1.6.5", "1.6.6"))
def optionalResolution(packageName: String) = "%s;resolution:=optional".format(packageName)

View file

@ -275,7 +275,11 @@ try ssh -t ${release_server} echo "Successfully contacted release server."
echolog "Getting current project version from sbt..."
declare -r current_version=$(get_current_version)
echolog "Current version is ${current_version}"
echolog "Current version is ${current_version} on branch $initial_branch"
if [ "${current_version:0:3}" != "${version:0:3}" ]; then
fail "Releasing $version from wrong branch $initial_branch with version $current_version"
fi
# check out a release branch
try git checkout -b ${release_branch}