diff --git a/.gitignore b/.gitignore index f646a4c173..48632735ce 100755 --- a/.gitignore +++ b/.gitignore @@ -7,6 +7,7 @@ project/plugins/project project/boot/* */project/build/target */project/boot +*/project/project.target.config-classes lib_managed etags tags @@ -67,3 +68,4 @@ redis/ beanstalk/ .scalastyle bin/ +.worksheet diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index df9d399f9e..2796e67ed8 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,63 +1,97 @@ -#Contributing to Akka# +# Contributing to Akka -Greetings traveller! +## Infrastructure -##Infrastructure## - -* [Akka Contributor License Agreement](www.typesafe.com/contribute/cla) +* [Akka Contributor License Agreement](http://www.typesafe.com/contribute/cla) * [Akka Issue Tracker](http://doc.akka.io/docs/akka/current/project/issue-tracking.html) * [Scalariform](https://github.com/mdr/scalariform) -##Workflow## +# Typesafe Project & Developer Guidelines -0. Sign the Akka Contributor License Agreement, - we won't accept anything from anybody who has not signed it. -1. Find-or-create a ticket in the issue tracker -2. Assign that ticket to yourself -3. Create a local branch with the following name format: wip-X-Y-Z - where the X is the number of the ticket in the tracker, - and Y is some brief keywords of the ticket title and Z is your initials or similar. - Example: wip-2373-add-contributing-md-√ -4. Do what needs to be done (with tests and docs if applicable). - Your branch should pass all tests before going any further. -5. Push the branch to your clone of the Akka repository -6. Create a Pull Request onto the applicable Akka branch, - if the number of commits are more than a few, please squash the - commits first. -7. Change the status of your ticket to "Test" -8. The Pull Request will be reviewed by the Akka committers -9. Modify the Pull Request as agreed upon during the review, - then push the changes to your branch in your Akka repository, - the Pull Request should be automatically updated with the new - content. -10. Several cycles of review-then-change might occur. -11. Pull Request is either merged by the Akka committers, - or rejected, and the associated ticket will be updated to - reflect that. -12. Delete the local and remote wip-X-Y-Z +These guidelines are meant to be a living document that should be changed and adapted as needed. We encourage changes that make it easier to achieve our goals in an efficient way. -##Code Reviews## +These guidelines mainly apply to Typesafe’s “mature” projects - not necessarily to projects of the type ‘collection of scripts’ etc. -Akka utilizes peer code reviews to streamline the codebase, reduce the defect ratio, -increase maintainability and spread knowledge about how things are solved. +## General Workflow -Core review values: +This is the process for committing code into master. There are of course exceptions to these rules, for example minor changes to comments and documentation, or fixing a broken build. -* Rule: [The Boy Scout Rule](http://programmer.97things.oreilly.com/wiki/index.php/The_Boy_Scout_Rule) - - Why: Small improvements add up over time, keeping the codebase in shape. -* Rule: [Don't Repeat Yourself](http://programmer.97things.oreilly.com/wiki/index.php/Don't_Repeat_Yourself) - - Why: Repetitions are not maintainable, keeping things DRY makes it easier to fix bugs and refactor, - since you only need to apply the correction in one place, or perform the refactoring at one place.
-* Rule: Feature tests > Integration tests > Unit tests - - Why: Without proving that a feature works, the code is only liability. - Without proving that a feature works with other features, the code is of limited value. - Without proving the individual parts of a feature works, the code is harder to debug. +1. Make sure you have signed the [Typesafe CLA](http://www.typesafe.com/contribute/cla), if not, sign it online. +2. Before starting to work on a feature or a fix, you have to make sure that: + 1. There is a ticket for your work in the project's issue tracker. If not, create it first. + 2. The ticket has been scheduled for the current milestone. + 3. The ticket is estimated by the team. + 4. The ticket has been discussed and prioritized by the team. +3. You should always perform your work in a Git feature branch (see the sketch after this list). The branch should be given a descriptive name that explains its intent. Some teams also like adding the ticket number and/or the [GitHub](http://github.com) user ID to the branch name; these details are up to each individual team. +4. When the feature or fix is completed you should open a [Pull Request](https://help.github.com/articles/using-pull-requests) on GitHub. +5. The Pull Request should be reviewed by other maintainers (as many as feasible/practical). Note that the maintainers can consist of outside contributors, both within and outside Typesafe. Outside contributors (for example from EPFL or independent committers) are encouraged to participate in the review process; it is not a closed process. +6. After the review you should fix the issues as needed (pushing a new commit for new review etc.), iterating until the reviewers give their thumbs up. +7. Once the code has passed review, the Pull Request can be merged into the master branch. -##Source style##
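+For example, steps 3 and 4 might look like this on the command line (a sketch only; the ticket number and branch name are illustrative, not a fixed convention):
+
+    git checkout -b 1234-add-frobnitz master   # descriptive feature branch off master
+    # ...hack, commit, hack, commit...
+    git push origin 1234-add-frobnitz
+    # then open the Pull Request for that branch on GitHub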
+ +## Pull Request Requirements + +For a Pull Request to be considered at all it has to meet these requirements: + +1. Live up to the current code standard: + - Not violate [DRY](http://programmer.97things.oreilly.com/wiki/index.php/Don%27t_Repeat_Yourself). + - [Boy Scout Rule](http://programmer.97things.oreilly.com/wiki/index.php/The_Boy_Scout_Rule) needs to have been applied. +2. Regardless of whether the code introduces new features or fixes bugs or regressions, it must have comprehensive tests. +3. The code must be well documented in Typesafe's standard documentation format (see the ‘Documentation’ section below). + +If these requirements are not met then the code should **not** be merged into master, or even reviewed - regardless of how good or important it is. No exceptions. + +## Continuous Integration + +Each project should be configured to use a continuous integration (CI) tool (e.g. a build server à la Jenkins). Typesafe has a Jenkins server farm that can be used. The CI tool should, on each push to master, build the **full** distribution and run **all** tests, and if something fails it should email out a notification with the failure report to the committer and the core team. The CI tool should also be used in conjunction with Typesafe’s Pull Request Validator (discussed below). + +## Documentation + +All documentation should be generated using the sbt-site-plugin, *or* publish artifacts to a repository that can be consumed by the Typesafe Stack. + +All documentation must abide by the following maxims: + +- Example code should be run as part of an automated test suite. +- Version should be **programmatically** specifiable to the build. +- Generation should be **completely automated** and available for scripting. +- Artifacts that must be included in the Typesafe Stack should be published to a maven “documentation” repository as documentation artifacts. + +Documentation should preferably be written in Typesafe's standard documentation format, [reStructuredText](http://doc.akka.io/docs/akka/snapshot/dev/documentation.html), compiled using Typesafe's customized [Sphinx](http://sphinx.pocoo.org/)-based documentation generation system, which among other things allows all code in the documentation to be externalized into compiled files and imported into the documentation. + +For more info, or for a starting point for new projects, look at the [Typesafe Documentation Template project](https://github.com/typesafehub/doc-template). + +For larger projects that have invested a lot of time and resources into their current documentation and samples scheme (like for example Play), it is understandable that it will take some time to migrate to this new model. In these cases someone from the project needs to take responsibility for manual QA and verification of the documentation and samples. + +## Work In Progress + +It is ok to work on a public feature branch in the GitHub repository; this can sometimes be useful for early feedback etc. If so, it is preferable to name the branch accordingly. This can be done by either prefixing the name with ``wip-`` as in ‘Work In Progress’, or using hierarchical names like ``wip/..``, ``feature/..`` or ``topic/..``. Either way is fine as long as it is clear that it is work in progress and not ready for merge. This work can temporarily have a lower standard. However, to be merged into master it will have to go through the regular process outlined above, with Pull Request, review etc. + +Also, to facilitate both well-formed commits and working together, the ``wip`` and ``feature``/``topic`` identifiers also have special meaning. Any branch labelled with ``wip`` is considered “git-unstable” and may be rebased and have its history rewritten. Any branch with ``feature``/``topic`` in the name is considered “stable” enough for others to depend on when a group is working on a feature. + +## Creating Commits And Writing Commit Messages + +Follow these guidelines when creating public commits and writing commit messages. + +1. If your work spans multiple local commits (for example, if you do safe point commits while working in a feature branch, or work in a branch for a long time doing merges/rebases etc.) then please do not submit them all as-is; rewrite the history by squashing the commits into a single big commit for which you write a good commit message (as discussed in the following sections; a squashing sketch follows the example below). For more info read this article: [Git Workflow](http://sandofsky.com/blog/git-workflow.html). Every commit should be able to be used in isolation, cherry-picked etc. +2. The first line should be a descriptive sentence stating what the commit does. It should be possible to fully understand what the commit does by just reading this single line. It is **not ok** to only list the ticket number, type "minor fix" or similar. Include a reference to the ticket number, prefixed with #, at the end of the first line. If the commit is a small fix, then you are done. If not, go to 3. +3. Following the single line description should be a blank line followed by an enumerated list with the details of the commit. +4. Add keywords for your commit (depending on the degree of automation we reach, the list may change over time): + * ``Review by @gituser`` - if you want to notify someone on the team. The others can, and are encouraged to, participate. + * ``Fix/Fixing/Fixes/Close/Closing/Refs #ticket`` - if you want to mark the ticket as fixed in the issue tracker (Assembla understands this). + * ``backport to _branch name_`` - if the fix needs to be cherry-picked to another branch (like 2.9.x, 2.10.x, etc) + +Example: + + Adding monadic API to Future. Fixes #2731 + + * Details 1 + * Details 2 + * Details 3
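+To squash a series of local commits into one before opening the Pull Request, an interactive rebase is one way to do it (a sketch; assumes the branch was created from master):
+
+    git rebase -i master
+    # keep the first commit as "pick", mark the rest as "squash" (or "fixup"),
+    # then write the final commit message in the editor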
+ +## Source style Akka uses [Scalariform](https://github.com/mdr/scalariform) to enforce some of the code style rules. -##Contributing Modules## +## Contributing Modules For external contributions of entire features, the normal way is to establish it as a stand-alone feature first, to show that there is a need for the feature. The diff --git a/akka-actor-tests/src/test/java/akka/dispatch/JavaFutureTests.java b/akka-actor-tests/src/test/java/akka/dispatch/JavaFutureTests.java index 4053a2d7f2..ec494e542c 100644 --- a/akka-actor-tests/src/test/java/akka/dispatch/JavaFutureTests.java +++ b/akka-actor-tests/src/test/java/akka/dispatch/JavaFutureTests.java @@ -7,7 +7,7 @@ import akka.japi.*; import scala.concurrent.Await; import scala.concurrent.Future; import scala.concurrent.Promise; -import scala.concurrent.util.Duration; +import scala.concurrent.duration.Duration; import akka.testkit.TestKitExtension; import org.junit.AfterClass; import org.junit.BeforeClass; diff --git a/akka-actor-tests/src/test/java/akka/japi/JavaAPITestBase.java b/akka-actor-tests/src/test/java/akka/japi/JavaAPITestBase.java index c0361530da..b3a092b1f9 100644 --- a/akka-actor-tests/src/test/java/akka/japi/JavaAPITestBase.java +++ b/akka-actor-tests/src/test/java/akka/japi/JavaAPITestBase.java @@ -1,5 +1,7 @@ package akka.japi; +import akka.event.LoggingAdapter; +import akka.event.NoLogging; import org.junit.Test; import static org.junit.Assert.*; @@ -46,4 +48,10 @@ public class JavaAPITestBase { public void shouldBeSingleton() { assertSame(Option.none(), Option.none()); } + + @Test + public void mustBeAbleToGetNoLogging() { + LoggingAdapter a = NoLogging.getInstance(); + assertNotNull(a); + } } diff --git a/akka-actor-tests/src/test/java/akka/routing/CustomRouteTest.java b/akka-actor-tests/src/test/java/akka/routing/CustomRouteTest.java index d47c49e28d..c0ccd4de26 100644 --- a/akka-actor-tests/src/test/java/akka/routing/CustomRouteTest.java +++ b/akka-actor-tests/src/test/java/akka/routing/CustomRouteTest.java @@ -15,7 +15,8 @@ public class CustomRouteTest { // only to test compilability public void testRoute() { final ActorRef ref = system.actorOf(new Props().withRouter(new RoundRobinRouter(1))); - final scala.Function1<scala.Tuple2<ActorRef, Object>, scala.collection.Iterable<Destination>> route = ExtractRoute.apply(ref); + final scala.Function1<scala.Tuple2<ActorRef, Object>, + scala.collection.immutable.Iterable<Destination>> route = ExtractRoute.apply(ref); route.apply(null); } diff --git a/akka-actor-tests/src/test/java/akka/util/JavaDuration.java b/akka-actor-tests/src/test/java/akka/util/JavaDuration.java index 0cbcea80d4..326afb8543 100644 --- a/akka-actor-tests/src/test/java/akka/util/JavaDuration.java +++ b/akka-actor-tests/src/test/java/akka/util/JavaDuration.java @@ -4,14 +4,14 @@ package akka.util; import org.junit.Test; -import scala.concurrent.util.Duration; +import scala.concurrent.duration.Duration; public class JavaDuration { @Test public void testCreation() { final Duration fivesec =
Duration.create(5, "seconds"); - final Duration threemillis = Duration.parse("3 millis"); + final Duration threemillis = Duration.create("3 millis"); final Duration diff = fivesec.minus(threemillis); assert diff.lt(fivesec); assert Duration.Zero().lteq(Duration.Inf()); diff --git a/akka-actor-tests/src/test/scala/akka/actor/ActorConfigurationVerificationSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/ActorConfigurationVerificationSpec.scala index 6532b5e5cd..c130d23149 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/ActorConfigurationVerificationSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/ActorConfigurationVerificationSpec.scala @@ -8,7 +8,7 @@ import language.postfixOps import akka.testkit._ import akka.testkit.DefaultTimeout import akka.testkit.TestEvent._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.routing._ import org.scalatest.BeforeAndAfterEach import akka.ConfigurationException diff --git a/akka-actor-tests/src/test/scala/akka/actor/ActorDSLSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/ActorDSLSpec.scala index bb5ed0d4bd..2aba0e18d4 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/ActorDSLSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/ActorDSLSpec.scala @@ -12,7 +12,7 @@ import akka.actor.ActorDSL._ //#import import akka.event.Logging.Warning import scala.concurrent.{ Await, Future } -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import java.util.concurrent.TimeoutException class ActorDSLSpec extends AkkaSpec { @@ -103,6 +103,32 @@ class ActorDSLSpec extends AkkaSpec { i.receive() must be("hi") } + "support becomeStacked" in { + //#becomeStacked + val a = actor(new Act { + become { // this will replace the initial (empty) behavior + case "info" ⇒ sender ! "A" + case "switch" ⇒ + becomeStacked { // this will stack upon the "A" behavior + case "info" ⇒ sender ! "B" + case "switch" ⇒ unbecome() // return to the "A" behavior + } + case "lobotomize" ⇒ unbecome() // OH NOES: Actor.emptyBehavior + } + }) + //#becomeStacked + + implicit def sender = testActor + a ! "info" + expectMsg("A") + a ! "switch" + a ! "info" + expectMsg("B") + a ! "switch" + a ! "info" + expectMsg("A") + } + "support setup/teardown" in { //#simple-start-stop val a = actor(new Act { @@ -188,7 +214,7 @@ class ActorDSLSpec extends AkkaSpec { become { case 1 ⇒ stash() case 2 ⇒ - testActor ! 2; unstashAll(); become { + testActor ! 2; unstashAll(); becomeStacked { case 1 ⇒ testActor ! 
1; unbecome() } } diff --git a/akka-actor-tests/src/test/scala/akka/actor/ActorFireForgetRequestReplySpec.scala b/akka-actor-tests/src/test/scala/akka/actor/ActorFireForgetRequestReplySpec.scala index 42018823bc..93e17d3192 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/ActorFireForgetRequestReplySpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/ActorFireForgetRequestReplySpec.scala @@ -6,7 +6,7 @@ package akka.actor import akka.testkit._ import org.scalatest.BeforeAndAfterEach -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import scala.concurrent.Await import akka.pattern.ask diff --git a/akka-actor-tests/src/test/scala/akka/actor/ActorLifeCycleSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/ActorLifeCycleSpec.scala index 40907e74a0..430a64172a 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/ActorLifeCycleSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/ActorLifeCycleSpec.scala @@ -11,7 +11,7 @@ import org.scalatest.matchers.MustMatchers import akka.actor.Actor._ import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import java.util.concurrent.atomic._ import scala.concurrent.Await import akka.pattern.ask diff --git a/akka-actor-tests/src/test/scala/akka/actor/ActorLookupSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/ActorLookupSpec.scala index 2d49ba884d..4d19f5ea9e 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/ActorLookupSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/ActorLookupSpec.scala @@ -6,7 +6,7 @@ package akka.actor import language.postfixOps import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import scala.concurrent.Await import akka.pattern.ask import java.net.MalformedURLException diff --git a/akka-actor-tests/src/test/scala/akka/actor/ActorRefSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/ActorRefSpec.scala index ae956e968a..a1da055cf6 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/ActorRefSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/ActorRefSpec.scala @@ -11,7 +11,7 @@ import org.scalatest.matchers.MustMatchers import akka.testkit._ import akka.util.Timeout -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import scala.concurrent.Await import java.lang.IllegalStateException import scala.concurrent.Promise diff --git a/akka-actor-tests/src/test/scala/akka/actor/ActorSystemSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/ActorSystemSpec.scala index f7a9844c9d..781b8d4cab 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/ActorSystemSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/ActorSystemSpec.scala @@ -8,12 +8,16 @@ import akka.testkit._ import org.scalatest.junit.JUnitSuite import com.typesafe.config.ConfigFactory import scala.concurrent.Await -import scala.concurrent.util.duration._ -import scala.collection.JavaConverters -import java.util.concurrent.{ TimeUnit, RejectedExecutionException, CountDownLatch, ConcurrentLinkedQueue } +import scala.concurrent.duration._ +import java.util.concurrent.{ RejectedExecutionException, ConcurrentLinkedQueue } import akka.util.Timeout +import akka.japi.Util.immutableSeq import scala.concurrent.Future import akka.pattern.ask +import akka.dispatch._ +import com.typesafe.config.Config +import java.util.concurrent.{ LinkedBlockingQueue, BlockingQueue, TimeUnit } +import akka.util.Switch class JavaExtensionSpec extends JavaExtension with JUnitSuite @@ -67,10 
+71,57 @@ object ActorSystemSpec { } } + case class FastActor(latch: TestLatch, testActor: ActorRef) extends Actor { + val ref1 = context.actorOf(Props.empty) + val ref2 = context.actorFor(ref1.path.toString) + testActor ! ref2.getClass + latch.countDown() + + def receive = { + case _ ⇒ + } + } + + class SlowDispatcher(_config: Config, _prerequisites: DispatcherPrerequisites) extends MessageDispatcherConfigurator(_config, _prerequisites) { + private val instance = new Dispatcher( + prerequisites, + config.getString("id"), + config.getInt("throughput"), + Duration(config.getNanoseconds("throughput-deadline-time"), TimeUnit.NANOSECONDS), + mailboxType, + configureExecutor(), + Duration(config.getMilliseconds("shutdown-timeout"), TimeUnit.MILLISECONDS)) { + val doneIt = new Switch + override protected[akka] def registerForExecution(mbox: Mailbox, hasMessageHint: Boolean, hasSystemMessageHint: Boolean): Boolean = { + val ret = super.registerForExecution(mbox, hasMessageHint, hasSystemMessageHint) + doneIt.switchOn { + TestKit.awaitCond(mbox.actor.actor != null, 1.second) + mbox.actor.actor match { + case FastActor(latch, _) ⇒ Await.ready(latch, 1.second) + } + } + ret + } + } + + /** + * Returns the same dispatcher instance for each invocation + */ + override def dispatcher(): MessageDispatcher = instance + } + + val config = s""" + akka.extensions = ["akka.actor.TestExtension"] + slow { + type="${classOf[SlowDispatcher].getName}" + }""" + } @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) -class ActorSystemSpec extends AkkaSpec("""akka.extensions = ["akka.actor.TestExtension"]""") with ImplicitSender { +class ActorSystemSpec extends AkkaSpec(ActorSystemSpec.config) with ImplicitSender { + + import ActorSystemSpec.FastActor "An ActorSystem" must { @@ -102,8 +153,6 @@ class ActorSystemSpec extends AkkaSpec("""akka.extensions = ["akka.actor.TestExt } "run termination callbacks in order" in { - import scala.collection.JavaConverters._ - val system2 = ActorSystem("TerminationCallbacks", AkkaSpec.testConf) val result = new ConcurrentLinkedQueue[Int] val count = 10 @@ -121,13 +170,11 @@ class ActorSystemSpec extends AkkaSpec("""akka.extensions = ["akka.actor.TestExt Await.ready(latch, 5 seconds) val expected = (for (i ← 1 to count) yield i).reverse - result.asScala.toSeq must be(expected) + immutableSeq(result) must be(expected) } "awaitTermination after termination callbacks" in { - import scala.collection.JavaConverters._ - val system2 = ActorSystem("AwaitTermination", AkkaSpec.testConf) @volatile var callbackWasRun = false @@ -168,6 +215,11 @@ class ActorSystemSpec extends AkkaSpec("""akka.extensions = ["akka.actor.TestExt Await.result(Future.sequence(waves), timeout.duration + 5.seconds) must be === Seq("done", "done", "done") } + "find actors that just have been created" in { + system.actorOf(Props(new FastActor(TestLatch(), testActor)).withDispatcher("slow")) + expectMsgType[Class[_]] must be(classOf[LocalActorRef]) + } + "reliable deny creation of actors while shutting down" in { val system = ActorSystem() import system.dispatcher diff --git a/akka-actor-tests/src/test/scala/akka/actor/ActorTimeoutSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/ActorTimeoutSpec.scala index 4eed96b0c5..965a99319d 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/ActorTimeoutSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/ActorTimeoutSpec.scala @@ -3,7 +3,7 @@ */ package akka.actor -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import 
akka.testkit._ import akka.testkit.TestEvent._ import scala.concurrent.Await diff --git a/akka-actor-tests/src/test/scala/akka/actor/ActorWithBoundedStashSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/ActorWithBoundedStashSpec.scala index b3a7bf0686..4d95bf02f6 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/ActorWithBoundedStashSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/ActorWithBoundedStashSpec.scala @@ -11,7 +11,7 @@ import akka.testkit.TestEvent._ import akka.dispatch.BoundedDequeBasedMailbox import akka.pattern.ask import scala.concurrent.Await -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.ActorSystem.Settings import com.typesafe.config.{ Config, ConfigFactory } import org.scalatest.Assertions.intercept diff --git a/akka-actor-tests/src/test/scala/akka/actor/ActorWithStashSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/ActorWithStashSpec.scala index 5913000215..c4d9248d88 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/ActorWithStashSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/ActorWithStashSpec.scala @@ -10,7 +10,7 @@ import akka.testkit.DefaultTimeout import akka.testkit.TestEvent._ import scala.concurrent.Await import akka.pattern.ask -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import com.typesafe.config.{ Config, ConfigFactory } import org.scalatest.BeforeAndAfterEach import org.scalatest.junit.JUnitSuite diff --git a/akka-actor-tests/src/test/scala/akka/actor/ConsistencySpec.scala b/akka-actor-tests/src/test/scala/akka/actor/ConsistencySpec.scala index dbba376054..6f6fb7fe21 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/ConsistencySpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/ConsistencySpec.scala @@ -4,7 +4,7 @@ import language.postfixOps import akka.testkit.AkkaSpec import akka.dispatch.UnboundedMailbox -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ object ConsistencySpec { val config = """ diff --git a/akka-actor-tests/src/test/scala/akka/actor/DeathWatchSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/DeathWatchSpec.scala index ea491dcbd1..d01848943f 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/DeathWatchSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/DeathWatchSpec.scala @@ -6,7 +6,7 @@ package akka.actor import language.postfixOps import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import java.util.concurrent.atomic._ import scala.concurrent.Await import akka.pattern.ask diff --git a/akka-actor-tests/src/test/scala/akka/actor/DeployerSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/DeployerSpec.scala index 37aa133583..954337431c 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/DeployerSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/DeployerSpec.scala @@ -10,7 +10,7 @@ import akka.testkit.AkkaSpec import com.typesafe.config.ConfigFactory import com.typesafe.config.ConfigParseOptions import akka.routing._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ object DeployerSpec { val deployerConf = ConfigFactory.parseString(""" diff --git a/akka-actor-tests/src/test/scala/akka/actor/FSMActorSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/FSMActorSpec.scala index bae6d2f6fe..b4860154ea 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/FSMActorSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/FSMActorSpec.scala @@ -8,13 +8,14 @@ 
import language.postfixOps import org.scalatest.{ BeforeAndAfterAll, BeforeAndAfterEach } import akka.testkit._ import TestEvent.Mute -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.event._ import com.typesafe.config.ConfigFactory import scala.concurrent.Await import akka.util.Timeout -import scala.concurrent.util.Duration -import scala.concurrent.util.FiniteDuration +import org.scalatest.matchers.Matcher +import org.scalatest.matchers.HavePropertyMatcher +import org.scalatest.matchers.HavePropertyMatchResult object FSMActorSpec { val timeout = Timeout(2 seconds) @@ -201,6 +202,45 @@ class FSMActorSpec extends AkkaSpec(Map("akka.actor.debug.fsm" -> true)) with Im expectMsg(1 second, fsm.StopEvent(FSM.Shutdown, 1, null)) } + "cancel all timers when terminated" in { + val timerNames = List("timer-1", "timer-2", "timer-3") + + // Lazy so fsmref can refer to checkTimersActive + lazy val fsmref = TestFSMRef(new Actor with FSM[String, Null] { + startWith("not-started", null) + when("not-started") { + case Event("start", _) ⇒ goto("started") replying "starting" + } + when("started", stateTimeout = 10 seconds) { + case Event("stop", _) ⇒ stop() + } + onTransition { + case "not-started" -> "started" ⇒ + for (timerName ← timerNames) setTimer(timerName, (), 10 seconds, false) + } + onTermination { + case _ ⇒ { + checkTimersActive(false) + testActor ! "stopped" + } + } + }) + + def checkTimersActive(active: Boolean) { + for (timer ← timerNames) fsmref.isTimerActive(timer) must be(active) + fsmref.isStateTimerActive must be(active) + } + + checkTimersActive(false) + + fsmref ! "start" + expectMsg(1 second, "starting") + checkTimersActive(true) + + fsmref ! "stop" + expectMsg(1 second, "stopped") + } + "log events and transitions if asked to do so" in { import scala.collection.JavaConverters._ val config = ConfigFactory.parseMap(Map("akka.loglevel" -> "DEBUG", diff --git a/akka-actor-tests/src/test/scala/akka/actor/FSMTimingSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/FSMTimingSpec.scala index 3960f5a8ff..e5436d4e9c 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/FSMTimingSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/FSMTimingSpec.scala @@ -7,8 +7,7 @@ package akka.actor import language.postfixOps import akka.testkit._ -import scala.concurrent.util.Duration -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.event.Logging @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) diff --git a/akka-actor-tests/src/test/scala/akka/actor/FSMTransitionSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/FSMTransitionSpec.scala index 446f6fc9b3..04a0eea352 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/FSMTransitionSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/FSMTransitionSpec.scala @@ -6,8 +6,7 @@ package akka.actor import language.postfixOps import akka.testkit._ -import scala.concurrent.util.duration._ -import scala.concurrent.util.Duration +import scala.concurrent.duration._ object FSMTransitionSpec { diff --git a/akka-actor-tests/src/test/scala/akka/actor/ForwardActorSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/ForwardActorSpec.scala index 9e662b5535..40c652c3ec 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/ForwardActorSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/ForwardActorSpec.scala @@ -7,9 +7,8 @@ package akka.actor import language.postfixOps import akka.testkit._ -import scala.concurrent.util.duration._ +import 
scala.concurrent.duration._ import akka.actor.Actor._ -import scala.concurrent.util.Duration import scala.concurrent.Await import akka.pattern.{ ask, pipe } diff --git a/akka-actor-tests/src/test/scala/akka/actor/IOActor.scala b/akka-actor-tests/src/test/scala/akka/actor/IOActor.scala index 58ffb9c602..5cd9075e38 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/IOActor.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/IOActor.scala @@ -7,16 +7,14 @@ package akka.actor import language.postfixOps import akka.util.ByteString import scala.concurrent.{ ExecutionContext, Await, Future, Promise } -import scala.concurrent.util.{ Duration, Deadline } -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import scala.util.continuations._ import akka.testkit._ import akka.dispatch.MessageDispatcher import akka.pattern.ask import java.net.{ Socket, InetSocketAddress, InetAddress, SocketAddress } import scala.util.Failure -import annotation.tailrec -import scala.concurrent.util.FiniteDuration +import scala.annotation.tailrec object IOActorSpec { @@ -57,6 +55,8 @@ object IOActorSpec { def receive = { + case _: IO.Connected ⇒ //don't care + case bytes: ByteString ⇒ val source = sender socket write bytes @@ -67,9 +67,9 @@ object IOActorSpec { case IO.Closed(`socket`, cause) ⇒ state(cause) - throw cause match { - case IO.Error(e) ⇒ e - case _ ⇒ new RuntimeException("Socket closed") + cause match { + case IO.Error(e) ⇒ throw e + case _ ⇒ throw new RuntimeException("Socket closed") } } @@ -156,6 +156,8 @@ object IOActorSpec { case IO.Read(socket, bytes) ⇒ state(socket)(IO Chunk bytes) + case _: IO.Connected ⇒ //don't care + case IO.Closed(socket, cause) ⇒ state -= socket @@ -183,6 +185,8 @@ object IOActorSpec { readResult map (source !) } + case _: IO.Connected ⇒ //don't care + case IO.Read(`socket`, bytes) ⇒ state(IO Chunk bytes) @@ -278,7 +282,7 @@ class IOActorSpec extends AkkaSpec with DefaultTimeout { } "an IO Actor" must { - implicit val ec = system.dispatcher + import system.dispatcher "run echo server" in { filterException[java.net.ConnectException] { val addressPromise = Promise[SocketAddress]() diff --git a/akka-actor-tests/src/test/scala/akka/actor/LocalActorRefProviderSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/LocalActorRefProviderSpec.scala index a71a9a09f8..4cb432aa23 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/LocalActorRefProviderSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/LocalActorRefProviderSpec.scala @@ -7,7 +7,7 @@ package akka.actor import language.postfixOps import akka.testkit._ import scala.concurrent.Await -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.util.Timeout import scala.concurrent.Future import scala.util.Success @@ -39,6 +39,22 @@ class LocalActorRefProviderSpec extends AkkaSpec(LocalActorRefProviderSpec.confi a must be === b } + "find child actor with URL encoded name using actorFor" in { + val childName = "akka%3A%2F%2FClusterSystem%40127.0.0.1%3A2552" + val a = system.actorOf(Props(new Actor { + val child = context.actorOf(Props.empty, name = childName) + def receive = { + case "lookup" ⇒ + if (childName == child.path.name) sender ! context.actorFor(childName) + else sender ! s"$childName is not ${child.path.name}!" 
+ } + })) + a.tell("lookup", testActor) + val b = expectMsgType[ActorRef] + b.isTerminated must be(false) + b.path.name must be(childName) + } + } "An ActorRefFactory" must { diff --git a/akka-actor-tests/src/test/scala/akka/actor/ReceiveTimeoutSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/ReceiveTimeoutSpec.scala index a74cbc9839..f34dbda9e3 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/ReceiveTimeoutSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/ReceiveTimeoutSpec.scala @@ -6,11 +6,10 @@ package akka.actor import language.postfixOps import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import java.util.concurrent.atomic.AtomicInteger import scala.concurrent.Await import java.util.concurrent.TimeoutException -import scala.concurrent.util.Duration @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) class ReceiveTimeoutSpec extends AkkaSpec { diff --git a/akka-actor-tests/src/test/scala/akka/actor/RelativeActorPathSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/RelativeActorPathSpec.scala new file mode 100644 index 0000000000..6870a36125 --- /dev/null +++ b/akka-actor-tests/src/test/scala/akka/actor/RelativeActorPathSpec.scala @@ -0,0 +1,27 @@ +/** + * Copyright (C) 2009-2012 Typesafe Inc. + */ +package akka.actor + +import org.scalatest.WordSpec +import org.scalatest.matchers.MustMatchers +import java.net.URLEncoder +import scala.collection.immutable + +class RelativeActorPathSpec extends WordSpec with MustMatchers { + + def elements(path: String): immutable.Seq[String] = RelativeActorPath.unapply(path).getOrElse(Nil) + + "RelativeActorPath" must { + "match single name" in { + elements("foo") must be(List("foo")) + } + "match path separated names" in { + elements("foo/bar/baz") must be(List("foo", "bar", "baz")) + } + "match url encoded name" in { + val name = URLEncoder.encode("akka://ClusterSystem@127.0.0.1:2552", "UTF-8") + elements(name) must be(List(name)) + } + } +} diff --git a/akka-actor-tests/src/test/scala/akka/actor/RestartStrategySpec.scala b/akka-actor-tests/src/test/scala/akka/actor/RestartStrategySpec.scala index 55e87b75da..190c738f83 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/RestartStrategySpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/RestartStrategySpec.scala @@ -15,8 +15,7 @@ import java.util.concurrent.{ TimeUnit, CountDownLatch } import akka.testkit.AkkaSpec import akka.testkit.DefaultTimeout import akka.testkit.TestLatch -import scala.concurrent.util.duration._ -import scala.concurrent.util.Duration +import scala.concurrent.duration._ import akka.pattern.ask @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) diff --git a/akka-actor-tests/src/test/scala/akka/actor/SchedulerSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/SchedulerSpec.scala index 8d1d2fa965..3932df4ea3 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/SchedulerSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/SchedulerSpec.scala @@ -3,7 +3,7 @@ package akka.actor import language.postfixOps import org.scalatest.BeforeAndAfterEach -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import java.util.concurrent.{ CountDownLatch, ConcurrentLinkedQueue, TimeUnit } import akka.testkit._ import scala.concurrent.Await @@ -214,5 +214,30 @@ class SchedulerSpec extends AkkaSpec with BeforeAndAfterEach with DefaultTimeout assert(elapsedTimeMs < 2000) // the precision is not ms exact cancellable.cancel() } + + "adjust for 
scheduler inaccuracy" taggedAs TimingTest in { + val startTime = System.nanoTime + val n = 33 + val latch = new TestLatch(n) + system.scheduler.schedule(150.millis, 150.millis) { + latch.countDown() + } + Await.ready(latch, 6.seconds) + val rate = n * 1000.0 / (System.nanoTime - startTime).nanos.toMillis + rate must be(6.66 plusOrMinus (0.4)) + } + + "not be affected by long running task" taggedAs TimingTest in { + val startTime = System.nanoTime + val n = 22 + val latch = new TestLatch(n) + system.scheduler.schedule(225.millis, 225.millis) { + Thread.sleep(80) + latch.countDown() + } + Await.ready(latch, 6.seconds) + val rate = n * 1000.0 / (System.nanoTime - startTime).nanos.toMillis + rate must be(4.4 plusOrMinus (0.3)) + } } } diff --git a/akka-actor-tests/src/test/scala/akka/actor/SupervisorHierarchySpec.scala b/akka-actor-tests/src/test/scala/akka/actor/SupervisorHierarchySpec.scala index fe7e66a5fe..eb30bb182b 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/SupervisorHierarchySpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/SupervisorHierarchySpec.scala @@ -7,8 +7,7 @@ package akka.actor import language.postfixOps import java.util.concurrent.{ TimeUnit, CountDownLatch } import scala.concurrent.Await -import scala.concurrent.util.Duration -import scala.concurrent.util.duration.intToDurationInt +import scala.concurrent.duration._ import scala.math.BigInt.int2bigInt import scala.util.Random import scala.util.control.NoStackTrace @@ -195,7 +194,7 @@ object SupervisorHierarchySpec { case x ⇒ (x, x) } override val supervisorStrategy = OneForOneStrategy()(unwrap andThen { - case _: Failure if pongsToGo > 0 ⇒ + case (_: Failure, _) if pongsToGo > 0 ⇒ log :+= Event("pongOfDeath resuming " + sender, identityHashCode(this)) Resume case (f: Failure, orig) ⇒ @@ -392,10 +391,10 @@ object SupervisorHierarchySpec { // don’t escalate from this one! 
override val supervisorStrategy = OneForOneStrategy() { - case f: Failure ⇒ f.directive - case OriginalRestartException(f: Failure) ⇒ f.directive - case ActorInitializationException(f: Failure) ⇒ f.directive - case _ ⇒ Stop + case f: Failure ⇒ f.directive + case OriginalRestartException(f: Failure) ⇒ f.directive + case ActorInitializationException(_, _, f: Failure) ⇒ f.directive + case _ ⇒ Stop } var children = Vector.empty[ActorRef] diff --git a/akka-actor-tests/src/test/scala/akka/actor/SupervisorMiscSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/SupervisorMiscSpec.scala index b13457338c..070a5aba51 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/SupervisorMiscSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/SupervisorMiscSpec.scala @@ -12,7 +12,7 @@ import java.util.concurrent.{ TimeUnit, CountDownLatch } import akka.testkit.AkkaSpec import akka.testkit.DefaultTimeout import akka.pattern.ask -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import scala.util.control.NonFatal object SupervisorMiscSpec { diff --git a/akka-actor-tests/src/test/scala/akka/actor/SupervisorSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/SupervisorSpec.scala index 5362ad4153..eafb47c47d 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/SupervisorSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/SupervisorSpec.scala @@ -7,7 +7,7 @@ package akka.actor import language.postfixOps import org.scalatest.BeforeAndAfterEach -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.{ Die, Ping } import akka.testkit.TestEvent._ import akka.testkit._ diff --git a/akka-actor-tests/src/test/scala/akka/actor/SupervisorTreeSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/SupervisorTreeSpec.scala index 4213b548d9..96e063a383 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/SupervisorTreeSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/SupervisorTreeSpec.scala @@ -8,7 +8,7 @@ import language.postfixOps import org.scalatest.WordSpec import org.scalatest.matchers.MustMatchers import scala.concurrent.Await -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.Actor._ import akka.testkit.{ TestKit, EventFilter, filterEvents, filterException, AkkaSpec, ImplicitSender, DefaultTimeout } import akka.dispatch.Dispatchers diff --git a/akka-actor-tests/src/test/scala/akka/actor/Ticket669Spec.scala b/akka-actor-tests/src/test/scala/akka/actor/Ticket669Spec.scala index 6c96ae28a8..cca4652de9 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/Ticket669Spec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/Ticket669Spec.scala @@ -14,7 +14,7 @@ import akka.testkit.ImplicitSender import akka.testkit.DefaultTimeout import scala.concurrent.Await import akka.pattern.ask -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) class Ticket669Spec extends AkkaSpec with BeforeAndAfterAll with ImplicitSender with DefaultTimeout { diff --git a/akka-actor-tests/src/test/scala/akka/actor/TypedActorSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/TypedActorSpec.scala index fc7be182f7..201b6c6949 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/TypedActorSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/TypedActorSpec.scala @@ -5,22 +5,21 @@ package akka.actor import language.postfixOps import org.scalatest.{ BeforeAndAfterAll, BeforeAndAfterEach } -import 
akka.util.Timeout +import scala.annotation.tailrec +import scala.collection.immutable import scala.concurrent.{ Await, Future, Promise } -import scala.concurrent.util.Duration -import scala.concurrent.util.duration._ -import java.util.concurrent.atomic.AtomicReference -import annotation.tailrec +import scala.concurrent.duration._ import akka.testkit.{ EventFilter, filterEvents, AkkaSpec } +import akka.util.Timeout import akka.japi.{ Option ⇒ JOption } import akka.testkit.DefaultTimeout -import akka.dispatch.{ Dispatchers } +import akka.dispatch.Dispatchers import akka.pattern.ask import akka.serialization.JavaSerializer import akka.actor.TypedActor._ +import java.util.concurrent.atomic.AtomicReference import java.lang.IllegalStateException import java.util.concurrent.{ TimeoutException, TimeUnit, CountDownLatch } -import scala.concurrent.util.FiniteDuration object TypedActorSpec { @@ -37,9 +36,9 @@ object TypedActorSpec { } """ - class CyclicIterator[T](val items: Seq[T]) extends Iterator[T] { + class CyclicIterator[T](val items: immutable.Seq[T]) extends Iterator[T] { - private[this] val current: AtomicReference[Seq[T]] = new AtomicReference(items) + private[this] val current = new AtomicReference(items) def hasNext = items != Nil diff --git a/akka-actor-tests/src/test/scala/akka/actor/dispatch/ActorModelSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/dispatch/ActorModelSpec.scala index d67acd9ac1..a736003421 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/dispatch/ActorModelSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/dispatch/ActorModelSpec.scala @@ -21,8 +21,7 @@ import akka.event.Logging.Error import akka.pattern.ask import akka.testkit._ import akka.util.Switch -import scala.concurrent.util.duration._ -import scala.concurrent.util.Duration +import scala.concurrent.duration._ import scala.concurrent.{ Await, Future, Promise } import scala.annotation.tailrec diff --git a/akka-actor-tests/src/test/scala/akka/actor/dispatch/DispatcherActorSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/dispatch/DispatcherActorSpec.scala index a6b071d804..db3574a29d 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/dispatch/DispatcherActorSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/dispatch/DispatcherActorSpec.scala @@ -7,8 +7,7 @@ import java.util.concurrent.atomic.{ AtomicBoolean, AtomicInteger } import akka.testkit.{ filterEvents, EventFilter, AkkaSpec } import akka.actor.{ Props, Actor } import scala.concurrent.Await -import scala.concurrent.util.Duration -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.testkit.DefaultTimeout import akka.dispatch.{ PinnedDispatcher, Dispatchers, Dispatcher } import akka.pattern.ask diff --git a/akka-actor-tests/src/test/scala/akka/actor/dispatch/DispatchersSpec.scala b/akka-actor-tests/src/test/scala/akka/actor/dispatch/DispatchersSpec.scala index 5abcdc7a0d..39612fe409 100644 --- a/akka-actor-tests/src/test/scala/akka/actor/dispatch/DispatchersSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/actor/dispatch/DispatchersSpec.scala @@ -14,7 +14,7 @@ import scala.collection.JavaConverters._ import com.typesafe.config.ConfigFactory import akka.actor.Actor import akka.actor.Props -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ object DispatchersSpec { val config = """ diff --git a/akka-actor-tests/src/test/scala/akka/config/ConfigSpec.scala b/akka-actor-tests/src/test/scala/akka/config/ConfigSpec.scala index 56f8cd45fc..9a43631894 100644 --- 
a/akka-actor-tests/src/test/scala/akka/config/ConfigSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/config/ConfigSpec.scala @@ -9,8 +9,7 @@ import language.postfixOps import akka.testkit.AkkaSpec import com.typesafe.config.ConfigFactory import scala.collection.JavaConverters._ -import scala.concurrent.util.duration._ -import scala.concurrent.util.Duration +import scala.concurrent.duration._ import akka.actor.{ IOManager, ActorSystem } @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) @@ -25,8 +24,8 @@ class ConfigSpec extends AkkaSpec(ConfigFactory.defaultReference(ActorSystem.fin { import config._ - getString("akka.version") must equal("2.1-SNAPSHOT") - settings.ConfigVersion must equal("2.1-SNAPSHOT") + getString("akka.version") must equal("2.2-SNAPSHOT") + settings.ConfigVersion must equal("2.2-SNAPSHOT") getBoolean("akka.daemonic") must equal(false) getBoolean("akka.actor.serialize-messages") must equal(false) @@ -46,6 +45,9 @@ class ConfigSpec extends AkkaSpec(ConfigFactory.defaultReference(ActorSystem.fin getInt("akka.actor.deployment.default.virtual-nodes-factor") must be(10) settings.DefaultVirtualNodesFactor must be(10) + + getMilliseconds("akka.actor.unstarted-push-timeout") must be(10.seconds.toMillis) + settings.UnstartedPushTimeout.duration must be(10.seconds) } { diff --git a/akka-actor-tests/src/test/scala/akka/dataflow/Future2Actor.scala b/akka-actor-tests/src/test/scala/akka/dataflow/Future2Actor.scala index 0e3d358322..bc225933fe 100644 --- a/akka-actor-tests/src/test/scala/akka/dataflow/Future2Actor.scala +++ b/akka-actor-tests/src/test/scala/akka/dataflow/Future2Actor.scala @@ -8,7 +8,7 @@ import language.postfixOps import akka.actor.{ Actor, Props } import scala.concurrent.Future import scala.concurrent.Await -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.testkit.{ AkkaSpec, DefaultTimeout } import akka.pattern.{ ask, pipe } import scala.concurrent.ExecutionException diff --git a/akka-actor-tests/src/test/scala/akka/dispatch/FutureSpec.scala b/akka-actor-tests/src/test/scala/akka/dispatch/FutureSpec.scala index bc423998f0..9c732d7279 100644 --- a/akka-actor-tests/src/test/scala/akka/dispatch/FutureSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/dispatch/FutureSpec.scala @@ -12,8 +12,7 @@ import akka.actor._ import akka.testkit.{ EventFilter, filterEvents, filterException, AkkaSpec, DefaultTimeout, TestLatch } import scala.concurrent.{ Await, Awaitable, Future, Promise, ExecutionContext } import scala.util.control.NonFatal -import scala.concurrent.util.duration._ -import scala.concurrent.util.Duration +import scala.concurrent.duration._ import scala.concurrent.ExecutionContext import org.scalatest.junit.JUnitSuite import scala.runtime.NonLocalReturnControl @@ -522,268 +521,6 @@ class FutureSpec extends AkkaSpec with Checkers with BeforeAndAfterAll with Defa filterException[TimeoutException] { intercept[TimeoutException] { FutureSpec.ready(f3, 0 millis) } } } - //FIXME DATAFLOW - /*"futureComposingWithContinuations" in { - import Future.flow - - val actor = system.actorOf(Props[TestActor]) - - val x = Future("Hello") - val y = x flatMap (actor ? 
_) mapTo manifest[String] - - val r = flow(x() + " " + y() + "!") - - assert(Await.result(r, timeout.duration) === "Hello World!") - - system.stop(actor) - } - - "futureComposingWithContinuationsFailureDivideZero" in { - filterException[ArithmeticException] { - import Future.flow - - val x = Future("Hello") - val y = x map (_.length) - - val r = flow(x() + " " + y.map(_ / 0).map(_.toString).apply, 100) - - intercept[java.lang.ArithmeticException](Await.result(r, timeout.duration)) - } - } - - "futureComposingWithContinuationsFailureCastInt" in { - filterException[ClassCastException] { - import Future.flow - - val actor = system.actorOf(Props[TestActor]) - - val x = Future(3) - val y = (actor ? "Hello").mapTo[Int] - - val r = flow(x() + y(), 100) - - intercept[ClassCastException](Await.result(r, timeout.duration)) - } - } - - "futureComposingWithContinuationsFailureCastNothing" in { - filterException[ClassCastException] { - import Future.flow - - val actor = system.actorOf(Props[TestActor]) - - val x = Future("Hello") - val y = actor ? "Hello" mapTo manifest[Nothing] - - val r = flow(x() + y()) - - intercept[ClassCastException](Await.result(r, timeout.duration)) - } - } - - "futureCompletingWithContinuations" in { - import Future.flow - - val x, y, z = Promise[Int]() - val ly, lz = new TestLatch - - val result = flow { - y completeWith x - ly.open() // not within continuation - - z << x - lz.open() // within continuation, will wait for 'z' to complete - z() + y() - } - - FutureSpec.ready(ly, 100 milliseconds) - intercept[TimeoutException] { FutureSpec.ready(lz, 100 milliseconds) } - - flow { x << 5 } - - assert(Await.result(y, timeout.duration) === 5) - assert(Await.result(z, timeout.duration) === 5) - FutureSpec.ready(lz, timeout.duration) - assert(Await.result(result, timeout.duration) === 10) - - val a, b, c = Promise[Int]() - - val result2 = flow { - val n = (a << c).value.get.right.get + 10 - b << (c() - 2) - a() + n * b() - } - - c completeWith Future(5) - - assert(Await.result(a, timeout.duration) === 5) - assert(Await.result(b, timeout.duration) === 3) - assert(Await.result(result2, timeout.duration) === 50) - } - - "futureDataFlowShouldEmulateBlocking1" in { - import Future.flow - - val one, two = Promise[Int]() - val simpleResult = flow { - one() + two() - } - - assert(List(one, two, simpleResult).forall(_.isCompleted == false)) - - flow { one << 1 } - - FutureSpec.ready(one, 1 minute) - - assert(one.isCompleted) - assert(List(two, simpleResult).forall(_.isCompleted == false)) - - flow { two << 9 } - - FutureSpec.ready(two, 1 minute) - - assert(List(one, two).forall(_.isCompleted == true)) - assert(Await.result(simpleResult, timeout.duration) === 10) - - } - - "futureDataFlowShouldEmulateBlocking2" in { - import Future.flow - val x1, x2, y1, y2 = Promise[Int]() - val lx, ly, lz = new TestLatch - val result = flow { - lx.open() - x1 << y1 - ly.open() - x2 << y2 - lz.open() - x1() + x2() - } - FutureSpec.ready(lx, 2 seconds) - assert(!ly.isOpen) - assert(!lz.isOpen) - assert(List(x1, x2, y1, y2).forall(_.isCompleted == false)) - - flow { y1 << 1 } // When this is set, it should cascade down the line - - FutureSpec.ready(ly, 2 seconds) - assert(Await.result(x1, 1 minute) === 1) - assert(!lz.isOpen) - - flow { y2 << 9 } // When this is set, it should cascade down the line - - FutureSpec.ready(lz, 2 seconds) - assert(Await.result(x2, 1 minute) === 9) - - assert(List(x1, x2, y1, y2).forall(_.isCompleted)) - - assert(Await.result(result, 1 minute) === 10) - } - - 
"dataFlowAPIshouldbeSlick" in { - import Future.flow - - val i1, i2, s1, s2 = new TestLatch - - val callService1 = Future { i1.open(); FutureSpec.ready(s1, TestLatch.DefaultTimeout); 1 } - val callService2 = Future { i2.open(); FutureSpec.ready(s2, TestLatch.DefaultTimeout); 9 } - - val result = flow { callService1() + callService2() } - - assert(!s1.isOpen) - assert(!s2.isOpen) - assert(!result.isCompleted) - FutureSpec.ready(i1, 2 seconds) - FutureSpec.ready(i2, 2 seconds) - s1.open() - s2.open() - assert(Await.result(result, timeout.duration) === 10) - } - - "futureCompletingWithContinuationsFailure" in { - filterException[ArithmeticException] { - import Future.flow - - val x, y, z = Promise[Int]() - val ly, lz = new TestLatch - - val result = flow { - y << x - ly.open() - val oops = 1 / 0 - z << x - lz.open() - z() + y() + oops - } - intercept[TimeoutException] { FutureSpec.ready(ly, 100 milliseconds) } - intercept[TimeoutException] { FutureSpec.ready(lz, 100 milliseconds) } - flow { x << 5 } - - assert(Await.result(y, timeout.duration) === 5) - intercept[java.lang.ArithmeticException](Await.result(result, timeout.duration)) - assert(z.value === None) - assert(!lz.isOpen) - } - } - - "futureContinuationsShouldNotBlock" in { - import Future.flow - - val latch = new TestLatch - val future = Future { - FutureSpec.ready(latch, TestLatch.DefaultTimeout) - "Hello" - } - - val result = flow { - Some(future()).filter(_ == "Hello") - } - - assert(!result.isCompleted) - - latch.open() - - assert(Await.result(result, timeout.duration) === Some("Hello")) - } - - "futureFlowShouldBeTypeSafe" in { - import Future.flow - - val rString = flow { - val x = Future(5) - x().toString - } - - val rInt = flow { - val x = rString.apply - val y = Future(5) - x.length + y() - } - - assert(checkType(rString, manifest[String])) - assert(checkType(rInt, manifest[Int])) - assert(!checkType(rInt, manifest[String])) - assert(!checkType(rInt, manifest[Nothing])) - assert(!checkType(rInt, manifest[Any])) - - Await.result(rString, timeout.duration) - Await.result(rInt, timeout.duration) - } - - "futureFlowSimpleAssign" in { - import Future.flow - - val x, y, z = Promise[Int]() - - flow { - z << x() + y() - } - flow { x << 40 } - flow { y << 2 } - - assert(Await.result(z, timeout.duration) === 42) - }*/ - "run callbacks async" in { val latch = Vector.fill(10)(new TestLatch) @@ -873,13 +610,6 @@ class FutureSpec extends AkkaSpec with Checkers with BeforeAndAfterAll with Defa // failCount.get must be(0) } - //FIXME DATAFLOW - /*"should capture first exception with dataflow" in { - import Future.flow - val f1 = flow { 40 / 0 } - intercept[java.lang.ArithmeticException](Await result (f1, TestLatch.DefaultTimeout)) - }*/ - } } diff --git a/akka-actor-tests/src/test/scala/akka/dispatch/MailboxConfigSpec.scala b/akka-actor-tests/src/test/scala/akka/dispatch/MailboxConfigSpec.scala index ed93362b6f..94954ab4d8 100644 --- a/akka-actor-tests/src/test/scala/akka/dispatch/MailboxConfigSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/dispatch/MailboxConfigSpec.scala @@ -11,7 +11,7 @@ import com.typesafe.config.Config import akka.actor.{ RepointableRef, Props, DeadLetter, ActorSystem, ActorRefWithCell, ActorRef, ActorCell } import akka.testkit.AkkaSpec import scala.concurrent.{ Future, Promise, Await } -import scala.concurrent.util.duration.intToDurationInt +import scala.concurrent.duration._ @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) abstract class MailboxSpec extends AkkaSpec with BeforeAndAfterAll with 
BeforeAndAfterEach { diff --git a/akka-actor-tests/src/test/scala/akka/dispatch/PriorityDispatcherSpec.scala b/akka-actor-tests/src/test/scala/akka/dispatch/PriorityDispatcherSpec.scala index 58a785ccf3..4e76c5bea6 100644 --- a/akka-actor-tests/src/test/scala/akka/dispatch/PriorityDispatcherSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/dispatch/PriorityDispatcherSpec.scala @@ -6,11 +6,10 @@ import org.junit.runner.RunWith import org.scalatest.junit.JUnitRunner import com.typesafe.config.Config -import akka.actor.{ Props, InternalActorRef, ActorSystem, Actor } +import akka.actor.{ Props, ActorSystem, Actor } import akka.pattern.ask import akka.testkit.{ DefaultTimeout, AkkaSpec } -import scala.concurrent.Await -import scala.concurrent.util.duration.intToDurationInt +import scala.concurrent.duration._ object PriorityDispatcherSpec { val config = """ @@ -50,24 +49,32 @@ class PriorityDispatcherSpec extends AkkaSpec(PriorityDispatcherSpec.config) wit } def testOrdering(dispatcherKey: String) { + val msgs = (1 to 100) toList + // It's important that the actor under test is not a top level actor + // with RepointableActorRef, since messages might be queued in + // UnstartedCell and then sent to the PriorityQueue and consumed immediately + // without the ordering taking place. val actor = system.actorOf(Props(new Actor { - var acc: List[Int] = Nil + context.actorOf(Props(new Actor { - def receive = { - case i: Int ⇒ acc = i :: acc - case 'Result ⇒ sender ! acc - } - }).withDispatcher(dispatcherKey)).asInstanceOf[InternalActorRef] + val acc = scala.collection.mutable.ListBuffer[Int]() - actor.suspend //Make sure the actor isn't treating any messages, let it buffer the incoming messages + scala.util.Random.shuffle(msgs) foreach { m ⇒ self ! m } - val msgs = (1 to 100).toList - for (m ← msgs) actor ! m + self.tell('Result, testActor) - actor.resume(causedByFailure = null) //Signal the actor to start treating it's message backlog + def receive = { + case i: Int ⇒ acc += i + case 'Result ⇒ sender !
acc.toList + } + }).withDispatcher(dispatcherKey)) - Await.result(actor.?('Result).mapTo[List[Int]], timeout.duration) must be === msgs.reverse + def receive = Actor.emptyBehavior + + })) + + expectMsgType[List[_]] must be === msgs } } diff --git a/akka-actor-tests/src/test/scala/akka/event/EventBusSpec.scala b/akka-actor-tests/src/test/scala/akka/event/EventBusSpec.scala index 2703727f07..0f7799adc0 100644 --- a/akka-actor-tests/src/test/scala/akka/event/EventBusSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/event/EventBusSpec.scala @@ -8,7 +8,7 @@ import language.postfixOps import org.scalatest.BeforeAndAfterEach import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import java.util.concurrent.atomic._ import akka.actor.{ Props, Actor, ActorRef, ActorSystem } import java.util.Comparator diff --git a/akka-actor-tests/src/test/scala/akka/event/EventStreamSpec.scala b/akka-actor-tests/src/test/scala/akka/event/EventStreamSpec.scala index 745f4ca2b8..442d35f194 100644 --- a/akka-actor-tests/src/test/scala/akka/event/EventStreamSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/event/EventStreamSpec.scala @@ -5,7 +5,7 @@ package akka.event import language.postfixOps -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.{ Actor, ActorRef, ActorSystemImpl, ActorSystem, Props, UnhandledMessage } import com.typesafe.config.ConfigFactory import scala.collection.JavaConverters._ @@ -282,4 +282,4 @@ class EventStreamSpec extends AkkaSpec(EventStreamSpec.config) { msg foreach (expectMsg(_)) } -} \ No newline at end of file +} diff --git a/akka-actor-tests/src/test/scala/akka/event/LoggingReceiveSpec.scala b/akka-actor-tests/src/test/scala/akka/event/LoggingReceiveSpec.scala index 4bb99ec555..d7ce93e997 100644 --- a/akka-actor-tests/src/test/scala/akka/event/LoggingReceiveSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/event/LoggingReceiveSpec.scala @@ -6,10 +6,9 @@ package akka.event import language.postfixOps import org.scalatest.{ BeforeAndAfterAll, BeforeAndAfterEach } -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.testkit._ import org.scalatest.WordSpec -import scala.concurrent.util.Duration import com.typesafe.config.ConfigFactory import scala.collection.JavaConverters._ import java.util.Properties diff --git a/akka-actor-tests/src/test/scala/akka/pattern/AskSpec.scala b/akka-actor-tests/src/test/scala/akka/pattern/AskSpec.scala index 7104e2edb6..8f3f7f0510 100644 --- a/akka-actor-tests/src/test/scala/akka/pattern/AskSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/pattern/AskSpec.scala @@ -6,7 +6,7 @@ package akka.pattern import language.postfixOps import akka.testkit.AkkaSpec -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import scala.concurrent.Await import akka.testkit.DefaultTimeout import akka.util.Timeout diff --git a/akka-actor-tests/src/test/scala/akka/pattern/CircuitBreakerMTSpec.scala b/akka-actor-tests/src/test/scala/akka/pattern/CircuitBreakerMTSpec.scala index 72370d98a4..34cb3d4ef8 100644 --- a/akka-actor-tests/src/test/scala/akka/pattern/CircuitBreakerMTSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/pattern/CircuitBreakerMTSpec.scala @@ -4,8 +4,9 @@ package akka.pattern import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import scala.concurrent.{ Promise, Future, Await } +import scala.annotation.tailrec class CircuitBreakerMTSpec extends AkkaSpec { 
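The PriorityDispatcherSpec hunk above depends on two things worth spelling out: the ordering comes from a priority mailbox attached via the dispatcher configuration, and the actor under test must be a child actor, because a top-level RepointableActorRef may buffer messages in an UnstartedCell and bypass the priority queue. A minimal sketch of such a setup, assuming the 2.1-era mailbox API; the class name, dispatcher id and message values are illustrative, not part of this patch:

import akka.actor.{ ActorSystem, PoisonPill }
import akka.dispatch.{ PriorityGenerator, UnboundedPriorityMailbox }
import com.typesafe.config.Config

// Lower value = higher priority; equal priorities keep FIFO order.
class DemoPrioMailbox(settings: ActorSystem.Settings, config: Config)
  extends UnboundedPriorityMailbox(PriorityGenerator {
    case "urgent"   ⇒ 0
    case PoisonPill ⇒ 3
    case _          ⇒ 1
  })

// hypothetical dispatcher section in application.conf:
//   prio-dispatcher { mailbox-type = "demo.DemoPrioMailbox" }
// attached, as in the spec above, to a *child* actor:
//   context.actorOf(Props(new Worker).withDispatcher("prio-dispatcher"))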
implicit val ec = system.dispatcher @@ -14,8 +15,16 @@ class CircuitBreakerMTSpec extends AkkaSpec { val resetTimeout = 2.seconds.dilated val breaker = new CircuitBreaker(system.scheduler, 5, callTimeout, resetTimeout) - def openBreaker(): Unit = - Await.ready(Future.sequence((1 to 5).map(_ ⇒ breaker.withCircuitBreaker(Future(throw new RuntimeException("FAIL"))).failed)), 1.second.dilated) + def openBreaker(): Unit = { + @tailrec def call(attemptsLeft: Int): Unit = { + attemptsLeft must be > (0) + if (Await.result(breaker.withCircuitBreaker(Future(throw new RuntimeException("FAIL"))) recover { + case _: CircuitBreakerOpenException ⇒ false + case _ ⇒ true + }, remaining)) call(attemptsLeft - 1) + } + call(10) + } "allow many calls while in closed state with no errors" in { diff --git a/akka-actor-tests/src/test/scala/akka/pattern/CircuitBreakerSpec.scala b/akka-actor-tests/src/test/scala/akka/pattern/CircuitBreakerSpec.scala index 0e108d1a3b..954fefb58d 100644 --- a/akka-actor-tests/src/test/scala/akka/pattern/CircuitBreakerSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/pattern/CircuitBreakerSpec.scala @@ -6,7 +6,7 @@ package akka.pattern import language.postfixOps -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.testkit._ import org.scalatest.BeforeAndAfter import akka.actor.{ ActorSystem, Scheduler } diff --git a/akka-actor-tests/src/test/scala/akka/pattern/PatternSpec.scala b/akka-actor-tests/src/test/scala/akka/pattern/PatternSpec.scala index 1c41364d05..f1ef0564f6 100644 --- a/akka-actor-tests/src/test/scala/akka/pattern/PatternSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/pattern/PatternSpec.scala @@ -9,8 +9,7 @@ import language.postfixOps import akka.testkit.AkkaSpec import akka.actor.{ Props, Actor } import scala.concurrent.{ Future, Promise, Await } -import scala.concurrent.util.Duration -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ object PatternSpec { case class Work(duration: Duration) diff --git a/akka-actor-tests/src/test/scala/akka/performance/microbench/TellThroughputComputationPerformanceSpec.scala b/akka-actor-tests/src/test/scala/akka/performance/microbench/TellThroughputComputationPerformanceSpec.scala index dccd0b243a..87aa78f2c7 100644 --- a/akka-actor-tests/src/test/scala/akka/performance/microbench/TellThroughputComputationPerformanceSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/performance/microbench/TellThroughputComputationPerformanceSpec.scala @@ -4,8 +4,7 @@ import akka.performance.workbench.PerformanceSpec import akka.actor._ import java.util.concurrent.{ ThreadPoolExecutor, CountDownLatch, TimeUnit } import akka.dispatch._ -import scala.concurrent.util.Duration -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ // -server -Xms512M -Xmx1024M -XX:+UseParallelGC -Dbenchmark=true -Dbenchmark.repeatFactor=500 @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) diff --git a/akka-actor-tests/src/test/scala/akka/performance/microbench/TellThroughputPerformanceSpec.scala b/akka-actor-tests/src/test/scala/akka/performance/microbench/TellThroughputPerformanceSpec.scala index f9a2ae2df8..8cc54f8635 100644 --- a/akka-actor-tests/src/test/scala/akka/performance/microbench/TellThroughputPerformanceSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/performance/microbench/TellThroughputPerformanceSpec.scala @@ -4,8 +4,7 @@ import akka.performance.workbench.PerformanceSpec import akka.actor._ import java.util.concurrent.{ ThreadPoolExecutor, 
CountDownLatch, TimeUnit } import akka.dispatch._ -import scala.concurrent.util.Duration -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ // -server -Xms512M -Xmx1024M -XX:+UseParallelGC -Dbenchmark=true -Dbenchmark.repeatFactor=500 @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) diff --git a/akka-actor-tests/src/test/scala/akka/performance/workbench/BenchResultRepository.scala b/akka-actor-tests/src/test/scala/akka/performance/workbench/BenchResultRepository.scala index 1cccd19417..7bc3fec9d1 100644 --- a/akka-actor-tests/src/test/scala/akka/performance/workbench/BenchResultRepository.scala +++ b/akka-actor-tests/src/test/scala/akka/performance/workbench/BenchResultRepository.scala @@ -12,17 +12,18 @@ import java.io.PrintWriter import java.text.SimpleDateFormat import java.util.Date import scala.collection.mutable.{ Map ⇒ MutableMap } +import scala.collection.immutable import akka.actor.ActorSystem import akka.event.Logging trait BenchResultRepository { def add(stats: Stats) - def get(name: String): Seq[Stats] + def get(name: String): immutable.Seq[Stats] def get(name: String, load: Int): Option[Stats] - def getWithHistorical(name: String, load: Int): Seq[Stats] + def getWithHistorical(name: String, load: Int): immutable.Seq[Stats] def isBaseline(stats: Stats): Boolean @@ -38,9 +39,9 @@ object BenchResultRepository { } class FileBenchResultRepository extends BenchResultRepository { - private val statsByName = MutableMap[String, Seq[Stats]]() + private val statsByName = MutableMap[String, immutable.Seq[Stats]]() private val baselineStats = MutableMap[Key, Stats]() - private val historicalStats = MutableMap[Key, Seq[Stats]]() + private val historicalStats = MutableMap[Key, immutable.Seq[Stats]]() private def resultDir = BenchmarkConfig.config.getString("benchmark.resultDir") private val serDir = resultDir + "/ser" private def serDirExists: Boolean = new File(serDir).exists @@ -51,13 +52,13 @@ class FileBenchResultRepository extends BenchResultRepository { case class Key(name: String, load: Int) def add(stats: Stats): Unit = synchronized { - val values = statsByName.getOrElseUpdate(stats.name, IndexedSeq.empty) + val values = statsByName.getOrElseUpdate(stats.name, Vector.empty) statsByName(stats.name) = values :+ stats save(stats) } - def get(name: String): Seq[Stats] = synchronized { - statsByName.getOrElse(name, IndexedSeq.empty) + def get(name: String): immutable.Seq[Stats] = synchronized { + statsByName.getOrElse(name, Vector.empty) } def get(name: String, load: Int): Option[Stats] = synchronized { @@ -68,13 +69,13 @@ class FileBenchResultRepository extends BenchResultRepository { baselineStats.get(Key(stats.name, stats.load)) == Some(stats) } - def getWithHistorical(name: String, load: Int): Seq[Stats] = synchronized { + def getWithHistorical(name: String, load: Int): immutable.Seq[Stats] = synchronized { val key = Key(name, load) - val historical = historicalStats.getOrElse(key, IndexedSeq.empty) + val historical = historicalStats.getOrElse(key, Vector.empty) val baseline = baselineStats.get(key) val current = get(name, load) - val limited = (IndexedSeq.empty ++ historical ++ baseline ++ current).takeRight(maxHistorical) + val limited = (Vector.empty ++ historical ++ baseline ++ current).takeRight(maxHistorical) limited.sortBy(_.timestamp) } @@ -94,7 +95,7 @@ class FileBenchResultRepository extends BenchResultRepository { } val historical = load(historicalFiles) for (h ← historical) { - val values = 
historicalStats.getOrElseUpdate(Key(h.name, h.load), IndexedSeq.empty) + val values = historicalStats.getOrElseUpdate(Key(h.name, h.load), Vector.empty) historicalStats(Key(h.name, h.load)) = values :+ h } } @@ -120,7 +121,7 @@ class FileBenchResultRepository extends BenchResultRepository { } } - private def load(files: Iterable[File]): Seq[Stats] = { + private def load(files: Iterable[File]): immutable.Seq[Stats] = { val result = for (f ← files) yield { var in: ObjectInputStream = null @@ -132,11 +133,11 @@ class FileBenchResultRepository extends BenchResultRepository { case e: Throwable ⇒ None } finally { - if (in ne null) try { in.close() } catch { case ignore: Exception ⇒ } + if (in ne null) try in.close() catch { case ignore: Exception ⇒ } } } - result.flatten.toSeq.sortBy(_.timestamp) + result.flatten.toVector.sortBy(_.timestamp) } loadFiles() diff --git a/akka-actor-tests/src/test/scala/akka/performance/workbench/GoogleChartBuilder.scala b/akka-actor-tests/src/test/scala/akka/performance/workbench/GoogleChartBuilder.scala index 52b30ceee7..66b634d47f 100644 --- a/akka-actor-tests/src/test/scala/akka/performance/workbench/GoogleChartBuilder.scala +++ b/akka-actor-tests/src/test/scala/akka/performance/workbench/GoogleChartBuilder.scala @@ -3,7 +3,7 @@ package akka.performance.workbench import java.io.UnsupportedEncodingException import java.net.URLEncoder -import scala.collection.immutable.TreeMap +import scala.collection.immutable /** * Generates URLs to Google Chart API http://code.google.com/apis/chart/ @@ -16,7 +16,7 @@ object GoogleChartBuilder { /** * Builds a bar chart for tps in the statistics. */ - def tpsChartUrl(statsByTimestamp: TreeMap[Long, Seq[Stats]], title: String, legend: Stats ⇒ String): String = { + def tpsChartUrl(statsByTimestamp: immutable.TreeMap[Long, Seq[Stats]], title: String, legend: Stats ⇒ String): String = { if (statsByTimestamp.isEmpty) "" else { val loads = statsByTimestamp.values.head.map(_.load) @@ -46,7 +46,7 @@ object GoogleChartBuilder { //sb.append("&") // legend - val legendStats = statsByTimestamp.values.map(_.head).toSeq + val legendStats = statsByTimestamp.values.toVector.map(_.head) appendLegend(legendStats, sb, legend) sb.append("&") // bar spacing @@ -60,10 +60,7 @@ object GoogleChartBuilder { val loadStr = loads.mkString(",") sb.append("chd=t:") val maxValue = allStats.map(_.tps).max - val tpsSeries: Iterable[String] = - for (statsSeq ← statsByTimestamp.values) yield { - statsSeq.map(_.tps).mkString(",") - } + val tpsSeries: Iterable[String] = for (statsSeq ← statsByTimestamp.values) yield statsSeq.map(_.tps).mkString(",") sb.append(tpsSeries.mkString("|")) // y range @@ -83,7 +80,7 @@ object GoogleChartBuilder { /** * Builds a bar chart for all percentiles and the mean in the statistics. 
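The signature changes above (Seq to immutable.Seq, IndexedSeq.empty to Vector.empty) tighten a contract rather than restyle the code: in Scala 2.10 a bare Seq means scala.collection.Seq, which also admits mutable implementations. A small illustration; the function names are made up:

import scala.collection.immutable
import scala.collection.mutable.ArrayBuffer

def loose(xs: Seq[Int]): Int = xs.sum             // bare Seq = scala.collection.Seq
def strict(xs: immutable.Seq[Int]): Int = xs.sum  // mutable collections are rejected

loose(ArrayBuffer(1, 2, 3))      // compiles: a mutable buffer satisfies collection.Seq
strict(Vector(1, 2, 3))          // compiles
// strict(ArrayBuffer(1, 2, 3))  // would not compile: type mismatch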
*/ - def percentilesAndMeanChartUrl(statistics: Seq[Stats], title: String, legend: Stats ⇒ String): String = { + def percentilesAndMeanChartUrl(statistics: immutable.Seq[Stats], title: String, legend: Stats ⇒ String): String = { if (statistics.isEmpty) "" else { val current = statistics.last @@ -146,13 +143,13 @@ object GoogleChartBuilder { } } - private def percentileLabels(percentiles: TreeMap[Int, Long], sb: StringBuilder) { + private def percentileLabels(percentiles: immutable.TreeMap[Int, Long], sb: StringBuilder) { sb.append("chxl=1:|") val s = percentiles.keys.toList.map(_ + "%").mkString("|") sb.append(s) } - private def appendLegend(statistics: Seq[Stats], sb: StringBuilder, legend: Stats ⇒ String) { + private def appendLegend(statistics: immutable.Seq[Stats], sb: StringBuilder, legend: Stats ⇒ String) { val legends = statistics.map(legend(_)) sb.append("chdl=") val s = legends.map(urlEncode(_)).mkString("|") @@ -166,7 +163,7 @@ object GoogleChartBuilder { sb.append(s) } - private def dataSeries(allPercentiles: Seq[TreeMap[Int, Long]], meanValues: Seq[Double], sb: StringBuilder) { + private def dataSeries(allPercentiles: immutable.Seq[immutable.TreeMap[Int, Long]], meanValues: immutable.Seq[Double], sb: StringBuilder) { val percentileSeries = for { percentiles ← allPercentiles @@ -181,7 +178,7 @@ object GoogleChartBuilder { sb.append(series.mkString("|")) } - private def dataSeries(values: Seq[Double], sb: StringBuilder) { + private def dataSeries(values: immutable.Seq[Double], sb: StringBuilder) { val series = values.map(formatDouble(_)) sb.append(series.mkString("|")) } @@ -198,7 +195,7 @@ object GoogleChartBuilder { } } - def latencyAndThroughputChartUrl(statistics: Seq[Stats], title: String): String = { + def latencyAndThroughputChartUrl(statistics: immutable.Seq[Stats], title: String): String = { if (statistics.isEmpty) "" else { val sb = new StringBuilder diff --git a/akka-actor-tests/src/test/scala/akka/performance/workbench/PerformanceSpec.scala b/akka-actor-tests/src/test/scala/akka/performance/workbench/PerformanceSpec.scala index 796a9f5835..977c8ed41e 100644 --- a/akka-actor-tests/src/test/scala/akka/performance/workbench/PerformanceSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/performance/workbench/PerformanceSpec.scala @@ -4,7 +4,7 @@ import scala.collection.immutable.TreeMap import org.apache.commons.math.stat.descriptive.DescriptiveStatistics import org.scalatest.BeforeAndAfterEach import akka.testkit.AkkaSpec -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import com.typesafe.config.Config import java.util.concurrent.TimeUnit import akka.event.Logging diff --git a/akka-actor-tests/src/test/scala/akka/performance/workbench/Report.scala b/akka-actor-tests/src/test/scala/akka/performance/workbench/Report.scala index 18f87702f3..f7974e6784 100644 --- a/akka-actor-tests/src/test/scala/akka/performance/workbench/Report.scala +++ b/akka-actor-tests/src/test/scala/akka/performance/workbench/Report.scala @@ -5,7 +5,7 @@ import java.text.SimpleDateFormat import java.util.Date import akka.actor.ActorSystem import akka.event.Logging -import scala.collection.immutable.TreeMap +import scala.collection.immutable class Report( system: ActorSystem, @@ -19,7 +19,7 @@ class Report( val legendTimeFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm") val fileTimestampFormat = new SimpleDateFormat("yyyyMMddHHmmss") - def html(statistics: Seq[Stats]) { + def html(statistics: immutable.Seq[Stats]) { val current = statistics.last val sb = new 
StringBuilder @@ -80,13 +80,13 @@ class Report( chartUrl } - def comparePercentilesAndMeanChart(stats: Stats): Seq[String] = { + def comparePercentilesAndMeanChart(stats: Stats): immutable.Seq[String] = { for { - compareName ← compareResultWith.toSeq + compareName ← compareResultWith.to[immutable.Seq] compareStats ← resultRepository.get(compareName, stats.load) } yield { val chartTitle = stats.name + " vs. " + compareName + ", " + stats.load + " clients" + ", Percentiles and Mean (microseconds)" - val chartUrl = GoogleChartBuilder.percentilesAndMeanChartUrl(Seq(compareStats, stats), chartTitle, _.name) + val chartUrl = GoogleChartBuilder.percentilesAndMeanChartUrl(List(compareStats, stats), chartTitle, _.name) chartUrl } } @@ -102,17 +102,17 @@ class Report( } } - def compareWithHistoricalTpsChart(statistics: Seq[Stats]): Option[String] = { + def compareWithHistoricalTpsChart(statistics: immutable.Seq[Stats]): Option[String] = { if (statistics.isEmpty) { None } else { val histTimestamps = resultRepository.getWithHistorical(statistics.head.name, statistics.head.load).map(_.timestamp) - val statsByTimestamp = TreeMap[Long, Seq[Stats]]() ++ + val statsByTimestamp = immutable.TreeMap[Long, Seq[Stats]]() ++ (for (ts ← histTimestamps) yield { val seq = for (stats ← statistics) yield { - val withHistorical: Seq[Stats] = resultRepository.getWithHistorical(stats.name, stats.load) + val withHistorical: immutable.Seq[Stats] = resultRepository.getWithHistorical(stats.name, stats.load) val cell = withHistorical.find(_.timestamp == ts) cell.getOrElse(Stats(stats.name, stats.load, ts)) } @@ -131,7 +131,7 @@ class Report( chartUrl } - def formatResultsTable(statsSeq: Seq[Stats]): String = { + def formatResultsTable(statsSeq: immutable.Seq[Stats]): String = { val name = statsSeq.head.name diff --git a/akka-actor-tests/src/test/scala/akka/routing/ConfiguredLocalRoutingSpec.scala b/akka-actor-tests/src/test/scala/akka/routing/ConfiguredLocalRoutingSpec.scala index ab212f8901..9f3c121d86 100644 --- a/akka-actor-tests/src/test/scala/akka/routing/ConfiguredLocalRoutingSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/routing/ConfiguredLocalRoutingSpec.scala @@ -12,7 +12,7 @@ import akka.ConfigurationException import scala.concurrent.Await import akka.pattern.{ ask, gracefulStop } import akka.testkit.{ TestLatch, ImplicitSender, DefaultTimeout, AkkaSpec } -import scala.concurrent.util.duration.intToDurationInt +import scala.concurrent.duration._ import akka.actor.UnstartedCell object ConfiguredLocalRoutingSpec { diff --git a/akka-actor-tests/src/test/scala/akka/routing/CustomRouteSpec.scala b/akka-actor-tests/src/test/scala/akka/routing/CustomRouteSpec.scala index 5232338b9f..00bd46f430 100644 --- a/akka-actor-tests/src/test/scala/akka/routing/CustomRouteSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/routing/CustomRouteSpec.scala @@ -19,7 +19,7 @@ class CustomRouteSpec extends AkkaSpec { provider.createRoutees(1) { - case (sender, message: String) ⇒ Seq(Destination(sender, target)) + case (sender, message: String) ⇒ List(Destination(sender, target)) case (sender, message) ⇒ toAll(sender, provider.routees) } } @@ -35,7 +35,7 @@ class CustomRouteSpec extends AkkaSpec { import akka.pattern.ask import akka.testkit.ExtractRoute import scala.concurrent.Await - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ val target = system.actorOf(Props.empty) val router = system.actorOf(Props.empty.withRouter(new MyRouter(target))) @@ -43,8 +43,8 @@ class CustomRouteSpec extends AkkaSpec { 
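The import rewrite that recurs throughout these hunks tracks the duration DSL moving for Scala 2.10 final: scala.concurrent.util.Duration and the scala.concurrent.util.duration DSL were consolidated under scala.concurrent.duration, where a single wildcard import covers both the types and the Int/Long enrichments. For instance:

// import scala.concurrent.util.duration._   // old location (removed above)
// import scala.concurrent.util.Duration     // old location (removed above)
import scala.concurrent.duration._           // brings Duration, FiniteDuration and the DSL

val callTimeout: FiniteDuration = 100.millis
val resetTimeout: FiniteDuration = 2.seconds
val unbounded: Duration = Duration.Inf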
val r = Await.result(router.ask(CurrentRoutees)(1 second). mapTo[RouterRoutees], 1 second) r.routees.size must be(1) - route(testActor -> "hallo") must be(Seq(Destination(testActor, target))) - route(testActor -> 12) must be(Seq(Destination(testActor, r.routees.head))) + route(testActor -> "hallo") must be(List(Destination(testActor, target))) + route(testActor -> 12) must be(List(Destination(testActor, r.routees.head))) //#test-route } diff --git a/akka-actor-tests/src/test/scala/akka/routing/ResizerSpec.scala b/akka-actor-tests/src/test/scala/akka/routing/ResizerSpec.scala index bfb5b4bba7..b2eeccf3bf 100644 --- a/akka-actor-tests/src/test/scala/akka/routing/ResizerSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/routing/ResizerSpec.scala @@ -9,13 +9,10 @@ import akka.testkit._ import akka.testkit.TestEvent._ import akka.actor.Props import scala.concurrent.Await -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ +import scala.collection.immutable import akka.actor.ActorRef -import java.util.concurrent.atomic.AtomicInteger import akka.pattern.ask -import scala.concurrent.util.Duration -import java.util.concurrent.TimeoutException -import scala.concurrent.util.FiniteDuration import scala.util.Try object ResizerSpec { @@ -63,10 +60,10 @@ class ResizerSpec extends AkkaSpec(ResizerSpec.config) with DefaultTimeout with lowerBound = 2, upperBound = 3) - val c1 = resizer.capacity(IndexedSeq.empty[ActorRef]) + val c1 = resizer.capacity(immutable.IndexedSeq.empty[ActorRef]) c1 must be(2) - val current = IndexedSeq(system.actorOf(Props[TestActor]), system.actorOf(Props[TestActor])) + val current = immutable.IndexedSeq(system.actorOf(Props[TestActor]), system.actorOf(Props[TestActor])) val c2 = resizer.capacity(current) c2 must be(0) } @@ -162,7 +159,7 @@ class ResizerSpec extends AkkaSpec(ResizerSpec.config) with DefaultTimeout with // sending in too quickly will result in skipped resize due to many resizeInProgress conflicts Thread.sleep(20.millis.dilated.toMillis) } - within((((d * loops).asInstanceOf[FiniteDuration] / resizer.lowerBound) + 2.seconds.dilated).asInstanceOf[FiniteDuration]) { + within((d * loops / resizer.lowerBound) + 2.seconds.dilated) { for (m ← 0 until loops) expectMsg("done") } } @@ -176,20 +173,21 @@ class ResizerSpec extends AkkaSpec(ResizerSpec.config) with DefaultTimeout with routeeSize(router) must be(resizer.upperBound) } - "backoff" in { + "backoff" in within(10 seconds) { val resizer = DefaultResizer( lowerBound = 1, upperBound = 5, rampupRate = 1.0, backoffRate = 1.0, - backoffThreshold = 0.20, + backoffThreshold = 0.40, pressureThreshold = 1, messagesPerResize = 1) val router = system.actorOf(Props(new Actor { def receive = { - case n: Int ⇒ Thread.sleep((n millis).dilated.toMillis) + case n: Int if n <= 0 ⇒ // done + case n: Int ⇒ Thread.sleep((n millis).dilated.toMillis) } }).withRouter(RoundRobinRouter(resizer = Some(resizer)))) @@ -205,12 +203,11 @@ class ResizerSpec extends AkkaSpec(ResizerSpec.config) with DefaultTimeout with Thread.sleep((300 millis).dilated.toMillis) // let it cool down - for (m ← 0 to 5) { - router ! 1 - Thread.sleep((500 millis).dilated.toMillis) - } + awaitCond({ + router ! 
0 // trigger resize + routeeSize(router) < z + }, interval = 500.millis.dilated) - awaitCond(Try(routeeSize(router) < (z)).getOrElse(false)) } } diff --git a/akka-actor-tests/src/test/scala/akka/routing/RoutingSpec.scala b/akka-actor-tests/src/test/scala/akka/routing/RoutingSpec.scala index 283eac7463..9d7522f950 100644 --- a/akka-actor-tests/src/test/scala/akka/routing/RoutingSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/routing/RoutingSpec.scala @@ -5,20 +5,20 @@ package akka.routing import language.postfixOps -import java.util.concurrent.atomic.AtomicInteger import akka.actor._ -import scala.collection.mutable.LinkedList +import scala.collection.immutable import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import scala.concurrent.Await -import scala.concurrent.util.Duration import akka.ConfigurationException import com.typesafe.config.ConfigFactory import akka.pattern.{ ask, pipe } import java.util.concurrent.ConcurrentHashMap import com.typesafe.config.Config import akka.dispatch.Dispatchers +import akka.util.Collections.EmptyImmutableSeq import akka.util.Timeout +import java.util.concurrent.atomic.AtomicInteger object RoutingSpec { @@ -55,11 +55,10 @@ object RoutingSpec { class MyRouter(config: Config) extends RouterConfig { val foo = config.getString("foo") def createRoute(routeeProvider: RouteeProvider): Route = { - val routees = IndexedSeq(routeeProvider.context.actorOf(Props[Echo])) - routeeProvider.registerRoutees(routees) + routeeProvider.registerRoutees(List(routeeProvider.context.actorOf(Props[Echo]))) { - case (sender, message) ⇒ Nil + case (sender, message) ⇒ EmptyImmutableSeq } } def routerDispatcher: String = Dispatchers.DefaultDispatcherId @@ -102,33 +101,35 @@ class RoutingSpec extends AkkaSpec(RoutingSpec.config) with DefaultTimeout with } "be able to send their routees" in { - class TheActor extends Actor { - val routee1 = context.actorOf(Props[TestActor], "routee1") - val routee2 = context.actorOf(Props[TestActor], "routee2") - val routee3 = context.actorOf(Props[TestActor], "routee3") - val router = context.actorOf(Props[TestActor].withRouter( - ScatterGatherFirstCompletedRouter( - routees = List(routee1, routee2, routee3), - within = 5 seconds))) - + case class TestRun(id: String, names: immutable.Iterable[String], actors: Int) + val actor = system.actorOf(Props(new Actor { def receive = { - case "doIt" ⇒ router ! CurrentRoutees - case routees: RouterRoutees ⇒ testActor forward routees + case TestRun(id, names, actors) ⇒ + val routerProps = Props[TestActor].withRouter( + ScatterGatherFirstCompletedRouter( + routees = names map { context.actorOf(Props(new TestActor), _) }, + within = 5 seconds)) + + 1 to actors foreach { i ⇒ context.actorOf(routerProps, id + i).tell(CurrentRoutees, testActor) } } - } + })) - val theActor = system.actorOf(Props(new TheActor), "theActor") - theActor ! "doIt" - val routees = expectMsgPF() { - case RouterRoutees(routees) ⇒ routees.toSet - } + val actors = 15 + val names = 1 to 20 map { "routee" + _ } toList - routees.map(_.path.name) must be(Set("routee1", "routee2", "routee3")) + actor ! TestRun("test", names, actors) + + 1 to actors foreach { _ ⇒ + val routees = expectMsgType[RouterRoutees].routees + routees.map(_.path.name) must be === names + } + expectNoMsg(500.millis) } "use configured nr-of-instances when FromConfig" in { val router = system.actorOf(Props[TestActor].withRouter(FromConfig), "router1") - Await.result(router ? 
CurrentRoutees, remaining).asInstanceOf[RouterRoutees].routees.size must be(3) + router ! CurrentRoutees + expectMsgType[RouterRoutees].routees.size must be(3) watch(router) system.stop(router) expectMsgType[Terminated] @@ -136,7 +137,8 @@ class RoutingSpec extends AkkaSpec(RoutingSpec.config) with DefaultTimeout with "use configured nr-of-instances when router is specified" in { val router = system.actorOf(Props[TestActor].withRouter(RoundRobinRouter(nrOfInstances = 2)), "router2") - Await.result(router ? CurrentRoutees, remaining).asInstanceOf[RouterRoutees].routees.size must be(3) + router ! CurrentRoutees + expectMsgType[RouterRoutees].routees.size must be(3) system.stop(router) } @@ -151,7 +153,8 @@ class RoutingSpec extends AkkaSpec(RoutingSpec.config) with DefaultTimeout with } val router = system.actorOf(Props[TestActor].withRouter(RoundRobinRouter(resizer = Some(resizer))), "router3") Await.ready(latch, remaining) - Await.result(router ? CurrentRoutees, remaining).asInstanceOf[RouterRoutees].routees.size must be(3) + router ! CurrentRoutees + expectMsgType[RouterRoutees].routees.size must be(3) system.stop(router) } @@ -252,15 +255,15 @@ class RoutingSpec extends AkkaSpec(RoutingSpec.config) with DefaultTimeout with val doneLatch = new TestLatch(connectionCount) //lets create some connections. - var actors = new LinkedList[ActorRef] - var counters = new LinkedList[AtomicInteger] + @volatile var actors = immutable.IndexedSeq[ActorRef]() + @volatile var counters = immutable.IndexedSeq[AtomicInteger]() for (i ← 0 until connectionCount) { counters = counters :+ new AtomicInteger() val actor = system.actorOf(Props(new Actor { def receive = { case "end" ⇒ doneLatch.countDown() - case msg: Int ⇒ counters.get(i).get.addAndGet(msg) + case msg: Int ⇒ counters(i).addAndGet(msg) } })) actors = actors :+ actor @@ -279,10 +282,8 @@ class RoutingSpec extends AkkaSpec(RoutingSpec.config) with DefaultTimeout with //now wait some and do validations. Await.ready(doneLatch, remaining) - for (i ← 0 until connectionCount) { - val counter = counters.get(i).get - counter.get must be((iterationCount * (i + 1))) - } + for (i ← 0 until connectionCount) + counters(i).get must be((iterationCount * (i + 1))) } "deliver a broadcast message using the !" 
in { diff --git a/akka-actor-tests/src/test/scala/akka/serialization/SerializeSpec.scala b/akka-actor-tests/src/test/scala/akka/serialization/SerializeSpec.scala index 3125431fb5..c49dc8037f 100644 --- a/akka-actor-tests/src/test/scala/akka/serialization/SerializeSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/serialization/SerializeSpec.scala @@ -11,7 +11,7 @@ import akka.actor._ import java.io._ import scala.concurrent.Await import akka.util.Timeout -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import scala.reflect.BeanInfo import com.google.protobuf.Message import akka.pattern.ask diff --git a/akka-actor-tests/src/test/scala/akka/util/DurationSpec.scala b/akka-actor-tests/src/test/scala/akka/util/DurationSpec.scala index ef300afbe5..ca285274aa 100644 --- a/akka-actor-tests/src/test/scala/akka/util/DurationSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/util/DurationSpec.scala @@ -7,12 +7,32 @@ import language.postfixOps import org.scalatest.WordSpec import org.scalatest.matchers.MustMatchers -import scala.concurrent.util.Duration -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ +import scala.concurrent.Await import java.util.concurrent.TimeUnit._ +import akka.testkit.AkkaSpec +import akka.testkit.TestLatch +import java.util.concurrent.TimeoutException +import akka.testkit.LongRunningTest -class DurationSpec extends WordSpec with MustMatchers { +class DurationSpec extends AkkaSpec { + + "A HashedWheelTimer" must { + + "not mess up long timeouts" taggedAs LongRunningTest in { + val longish = Long.MaxValue.nanos + val barrier = TestLatch() + import system.dispatcher + val job = system.scheduler.scheduleOnce(longish)(barrier.countDown()) + intercept[TimeoutException] { + // this used to fire after 46 seconds due to wrap-around + Await.ready(barrier, 90 seconds) + } + job.cancel() + } + + } "Duration" must { @@ -34,11 +54,12 @@ class DurationSpec extends WordSpec with MustMatchers { val one = 1.second val inf = Duration.Inf val minf = Duration.MinusInf + val undefined = Duration.Undefined (-inf) must be(minf) - intercept[IllegalArgumentException] { minf + inf } - intercept[IllegalArgumentException] { inf - inf } - intercept[IllegalArgumentException] { inf + minf } - intercept[IllegalArgumentException] { minf - minf } + (minf + inf) must be(undefined) + (inf - inf) must be(undefined) + (inf + minf) must be(undefined) + (minf - minf) must be(undefined) (inf + inf) must be(inf) (inf - minf) must be(inf) (minf - inf) must be(minf) diff --git a/akka-actor/src/main/java/akka/actor/AbstractActorRef.java b/akka-actor/src/main/java/akka/actor/AbstractActorRef.java index 97ef09c501..650182a457 100644 --- a/akka-actor/src/main/java/akka/actor/AbstractActorRef.java +++ b/akka-actor/src/main/java/akka/actor/AbstractActorRef.java @@ -8,10 +8,12 @@ import akka.util.Unsafe; final class AbstractActorRef { final static long cellOffset; + final static long lookupOffset; static { try { cellOffset = Unsafe.instance.objectFieldOffset(RepointableActorRef.class.getDeclaredField("_cellDoNotCallMeDirectly")); + lookupOffset = Unsafe.instance.objectFieldOffset(RepointableActorRef.class.getDeclaredField("_lookupDoNotCallMeDirectly")); } catch(Throwable t){ throw new ExceptionInInitializerError(t); } diff --git a/akka-actor/src/main/java/akka/japi/JAPI.java b/akka-actor/src/main/java/akka/japi/JAPI.java index 4808b3e725..4c040220f3 100644 --- a/akka-actor/src/main/java/akka/japi/JAPI.java +++ b/akka-actor/src/main/java/akka/japi/JAPI.java @@ -5,7 
+5,7 @@ import scala.collection.Seq; public class JAPI { public static <T> Seq<T> seq(T... ts) { - return Util.arrayToSeq(ts); + return Util.immutableSeq(ts); } } diff --git a/akka-actor/src/main/java/akka/util/internal/HashedWheelTimer.java b/akka-actor/src/main/java/akka/util/internal/HashedWheelTimer.java index 1630f599ee..e95ff9ad95 100644 --- a/akka-actor/src/main/java/akka/util/internal/HashedWheelTimer.java +++ b/akka-actor/src/main/java/akka/util/internal/HashedWheelTimer.java @@ -24,8 +24,8 @@ import java.util.concurrent.ThreadFactory; import java.util.concurrent.locks.ReadWriteLock; import java.util.concurrent.locks.ReentrantReadWriteLock; -import scala.concurrent.util.Duration; -import scala.concurrent.util.FiniteDuration; +import scala.concurrent.duration.Duration; +import scala.concurrent.duration.FiniteDuration; import akka.event.LoggingAdapter; import akka.util.Unsafe; @@ -263,8 +263,11 @@ public class HashedWheelTimer implements Timer { void scheduleTimeout(HashedWheelTimeout timeout, long delay) { // Prepare the required parameters to schedule the timeout object. - final long relativeIndex = Math.max(1, (delay + tickDuration - 1) / tickDuration); // If relative index < 1 then it should be 1 - + long relativeIndex = (delay + tickDuration - 1) / tickDuration; + // if the previous line had an overflow going on, then we’ll just schedule this timeout + // one tick early; that shouldn’t matter since we’re talking 270 years here + if (relativeIndex < 0) relativeIndex = delay / tickDuration; + if (relativeIndex == 0) relativeIndex = 1; final long remainingRounds = relativeIndex / wheel.length; // Add the timeout to the wheel. @@ -304,7 +307,7 @@ public class HashedWheelTimer implements Timer { while (!shutdown()) { final long deadline = waitForNextTick(); - if (deadline > 0) + if (deadline > Long.MIN_VALUE) notifyExpiredTimeouts(fetchExpiredTimeouts(deadline)); } } @@ -332,7 +335,7 @@ public class HashedWheelTimer implements Timer { HashedWheelTimeout timeout = i.next(); if (timeout.remainingRounds <= 0) { i.remove(); - if (timeout.deadline <= deadline) { + if (timeout.deadline - deadline <= 0) { expiredTimeouts.add(timeout); } else { // Handle the case where the timeout is put into a wrong @@ -368,6 +371,12 @@ public class HashedWheelTimer implements Timer { expiredTimeouts.clear(); } + /** + * calculate goal nanoTime from startTime and current tick number, + * then wait until that goal has been reached.
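The switch from `timeout.deadline <= deadline` to `timeout.deadline - deadline <= 0` above is the standard idiom for nanoTime-style clocks that may wrap a signed long, the same class of bug the new DurationSpec test guards against (“used to fire after 46 seconds due to wrap-around”). Subtraction stays correct across the wrap, direct comparison does not:

val earlier = Long.MaxValue   // a deadline just before the signed counter wraps
val later   = earlier + 3L    // wraps around to a large negative number

earlier <= later              // false: plain comparison misorders the two instants
(earlier - later) <= 0        // true: the difference (-3) still puts `earlier` first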
+ * + * @return Long.MIN_VALUE if a shutdown request was received, otherwise the current time (a current time of Long.MIN_VALUE is shifted by +1 so it cannot be mistaken for the shutdown signal) private long waitForNextTick() { long deadline = startTime + tickDuration * tick; @@ -378,7 +387,8 @@ if (sleepTimeMs <= 0) { tick += 1; - return currentTime; + if (currentTime == Long.MIN_VALUE) return -Long.MAX_VALUE; + else return currentTime; } // Check if we run on windows, as if thats the case we will need @@ -394,7 +404,7 @@ Thread.sleep(sleepTimeMs); } catch (InterruptedException e) { if (shutdown()) { - return -1; + return Long.MIN_VALUE; } } } diff --git a/akka-actor/src/main/java/akka/util/internal/Timer.java b/akka-actor/src/main/java/akka/util/internal/Timer.java index be7656ec6c..7110b03091 100644 --- a/akka-actor/src/main/java/akka/util/internal/Timer.java +++ b/akka-actor/src/main/java/akka/util/internal/Timer.java @@ -17,7 +17,7 @@ package akka.util.internal; import java.util.Set; -import scala.concurrent.util.FiniteDuration; +import scala.concurrent.duration.FiniteDuration; /** * Schedules {@link TimerTask}s for one-time future execution in a background diff --git a/akka-actor/src/main/resources/reference.conf b/akka-actor/src/main/resources/reference.conf index 839a9b614d..aeee89a65f 100644 --- a/akka-actor/src/main/resources/reference.conf +++ b/akka-actor/src/main/resources/reference.conf @@ -7,7 +7,7 @@ akka { # Akka version, checked against the runtime version of Akka. - version = "2.1-SNAPSHOT" + version = "2.2-SNAPSHOT" # Home directory of Akka, modules in the deploy directory will be loaded home = "" diff --git a/akka-actor/src/main/scala/akka/actor/Actor.scala b/akka-actor/src/main/scala/akka/actor/Actor.scala index 1b616c55e8..d02f37cf7b 100644 --- a/akka-actor/src/main/scala/akka/actor/Actor.scala +++ b/akka-actor/src/main/scala/akka/actor/Actor.scala @@ -300,6 +300,11 @@ object Actor { def apply(x: Any) = throw new UnsupportedOperationException("Empty behavior apply()") } + /** + * Default placeholder (null) used for "!" to indicate that there is no sender of the message, + * which will be translated to the receiving system's deadLetters. + */ + final val noSender: ActorRef = null } /** diff --git a/akka-actor/src/main/scala/akka/actor/ActorCell.scala b/akka-actor/src/main/scala/akka/actor/ActorCell.scala index 5ec4545fd1..51f11c044c 100644 --- a/akka-actor/src/main/scala/akka/actor/ActorCell.scala +++ b/akka-actor/src/main/scala/akka/actor/ActorCell.scala @@ -6,8 +6,8 @@ package akka.actor import java.io.{ ObjectOutputStream, NotSerializableException } import scala.annotation.tailrec -import scala.collection.immutable.TreeSet -import scala.concurrent.util.Duration +import scala.collection.immutable +import scala.concurrent.duration.Duration import scala.util.control.NonFatal import akka.actor.dungeon.ChildrenContainer import akka.actor.dungeon.ChildrenContainer.WaitingForChildren @@ -76,8 +76,14 @@ trait ActorContext extends ActorRefFactory { /** * Changes the Actor's behavior to become the new 'Receive' (PartialFunction[Any, Unit]) handler. - * Puts the behavior on top of the hotswap stack. - * If "discardOld" is true, an unbecome will be issued prior to pushing the new behavior to the stack + * This method acts upon the behavior stack as follows: + * + * - if `discardOld = true` it will replace the top element (i.e.
the current behavior) + * - if `discardOld = false` it will keep the current behavior and push the given one atop + * + * The default of replacing the current behavior has been chosen to avoid memory leaks in + * case client code is written without consulting this documentation first (i.e. always pushing + * new closures and never issuing an `unbecome()`) */ def become(behavior: Actor.Receive, discardOld: Boolean = true): Unit @@ -102,7 +108,7 @@ trait ActorContext extends ActorRefFactory { * val goodLookup = context.actorFor("kid") * }}} */ - def children: Iterable[ActorRef] + def children: immutable.Iterable[ActorRef] /** * Get the child with the given name if it exists. @@ -167,14 +173,20 @@ trait UntypedActorContext extends ActorContext { /** * Changes the Actor's behavior to become the new 'Procedure' handler. - * Puts the behavior on top of the hotswap stack. + * Replaces the current behavior at the top of the hotswap stack. */ def become(behavior: Procedure[Any]): Unit /** * Changes the Actor's behavior to become the new 'Procedure' handler. - * Puts the behavior on top of the hotswap stack. - * If "discardOld" is true, an unbecome will be issued prior to pushing the new behavior to the stack + * This method acts upon the behavior stack as follows: + * + * - if `discardOld = true` it will replace the top element (i.e. the current behavior) + * - if `discardOld = false` it will keep the current behavior and push the given one atop + * + * The default of replacing the current behavior has been chosen to avoid memory leaks in + * case client code is written without consulting this documentation first (i.e. always pushing + * new closures and never issuing an `unbecome()`) */ def become(behavior: Procedure[Any], discardOld: Boolean): Unit @@ -196,6 +208,11 @@ private[akka] trait Cell { * The system internals where this Cell lives. */ def systemImpl: ActorSystemImpl + /** + * Start the cell: enqueued message must not be processed before this has + * been called. The usual action is to attach the mailbox to a dispatcher. + */ + def start(): this.type /** * Recursively suspend this actor and all its children. Must not throw exceptions. */ @@ -247,12 +264,12 @@ private[akka] trait Cell { */ def isLocal: Boolean /** - * If the actor isLocal, returns whether messages are currently queued, + * If the actor isLocal, returns whether "user messages" are currently queued, * “false” otherwise. */ def hasMessages: Boolean /** - * If the actor isLocal, returns the number of messages currently queued, + * If the actor isLocal, returns the number of "user messages" currently queued, * which may be a costly operation, 0 otherwise. 
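The rewritten become() documentation above is easiest to see as code; a minimal sketch using the public API:

import akka.actor.Actor

class Switcher extends Actor {
  def receive = {
    case "push"    ⇒ context.become(other, discardOld = false) // keep current behavior underneath
    case "replace" ⇒ context.become(other)                     // default discardOld = true: swap the top
    case msg       ⇒ sender ! ("base: " + msg)
  }
  def other: Receive = {
    case "pop" ⇒ context.unbecome() // returns to `receive` only if we pushed rather than replaced
    case msg   ⇒ sender ! ("other: " + msg)
  }
}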
*/ def numberOfMessages: Int @@ -275,7 +292,7 @@ private[akka] object ActorCell { final val emptyBehaviorStack: List[Actor.Receive] = Nil - final val emptyActorRefSet: Set[ActorRef] = TreeSet.empty + final val emptyActorRefSet: Set[ActorRef] = immutable.TreeSet.empty } //ACTORCELL IS 64bytes and should stay that way unless very good reason not to (machine sympathy, cache line fit) @@ -349,10 +366,10 @@ private[akka] class ActorCell( case null ⇒ faultResume(inRespToFailure) case w: WaitingForChildren ⇒ w.enqueue(message) } - case Terminate() ⇒ terminate() - case Supervise(child, uid) ⇒ supervise(child, uid) - case ChildTerminated(child) ⇒ todo = handleChildTerminated(child) - case NoMessage ⇒ // only here to suppress warning + case Terminate() ⇒ terminate() + case Supervise(child, async, uid) ⇒ supervise(child, async, uid) + case ChildTerminated(child) ⇒ todo = handleChildTerminated(child) + case NoMessage ⇒ // only here to suppress warning } } catch { case e @ (_: InterruptedException | NonFatal(_)) ⇒ handleInvokeFailure(Nil, e, "error while processing " + message) @@ -480,21 +497,21 @@ private[akka] class ActorCell( } } - private def supervise(child: ActorRef, uid: Int): Unit = if (!isTerminating) { + private def supervise(child: ActorRef, async: Boolean, uid: Int): Unit = if (!isTerminating) { // Supervise is the first thing we get from a new child, so store away the UID for later use in handleFailure() initChild(child) match { case Some(crs) ⇒ crs.uid = uid - handleSupervise(child) + handleSupervise(child, async) if (system.settings.DebugLifecycle) publish(Debug(self.path.toString, clazz(actor), "now supervising " + child)) case None ⇒ publish(Error(self.path.toString, clazz(actor), "received Supervise from unregistered child " + child + ", this will not end well")) } } // future extension point - protected def handleSupervise(child: ActorRef): Unit = child match { - case r: RepointableActorRef ⇒ r.activate() - case _ ⇒ + protected def handleSupervise(child: ActorRef, async: Boolean): Unit = child match { + case r: RepointableActorRef if async ⇒ r.point() + case _ ⇒ } final protected def clearActorFields(actorInstance: Actor): Unit = { diff --git a/akka-actor/src/main/scala/akka/actor/ActorDSL.scala b/akka-actor/src/main/scala/akka/actor/ActorDSL.scala index b1e36f7559..bee50ff78e 100644 --- a/akka-actor/src/main/scala/akka/actor/ActorDSL.scala +++ b/akka-actor/src/main/scala/akka/actor/ActorDSL.scala @@ -5,13 +5,11 @@ package akka.actor import scala.collection.mutable.Queue -import scala.concurrent.util.Duration -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.pattern.ask import scala.concurrent.Await import akka.util.Timeout import scala.collection.immutable.TreeSet -import scala.concurrent.util.Deadline import java.util.concurrent.TimeoutException import java.util.concurrent.atomic.AtomicInteger import java.util.concurrent.TimeUnit diff --git a/akka-actor/src/main/scala/akka/actor/ActorPath.scala b/akka-actor/src/main/scala/akka/actor/ActorPath.scala index cc21e0de16..4cb61d2212 100644 --- a/akka-actor/src/main/scala/akka/actor/ActorPath.scala +++ b/akka-actor/src/main/scala/akka/actor/ActorPath.scala @@ -3,6 +3,8 @@ */ package akka.actor import scala.annotation.tailrec +import scala.collection.immutable +import akka.japi.Util.immutableSeq import java.net.MalformedURLException object ActorPath { @@ -20,6 +22,8 @@ object ActorPath { * http://www.ietf.org/rfc/rfc2396.txt */ val ElementRegex = 
"""(?:[-\w:@&=+,.!~*'_;]|%\p{XDigit}{2})(?:[-\w:@&=+,.!~*'$_;]|%\p{XDigit}{2})*""".r + + private[akka] final val emptyActorPath: immutable.Iterable[String] = List("") } /** @@ -68,23 +72,18 @@ sealed trait ActorPath extends Comparable[ActorPath] with Serializable { /** * ''Java API'': Recursively create a descendant’s path by appending all child names. */ - def descendant(names: java.lang.Iterable[String]): ActorPath = { - import scala.collection.JavaConverters._ - /(names.asScala) - } + def descendant(names: java.lang.Iterable[String]): ActorPath = /(immutableSeq(names)) /** * Sequence of names for this path from root to this. Performance implication: has to allocate a list. */ - def elements: Iterable[String] + def elements: immutable.Iterable[String] /** * ''Java API'': Sequence of names for this path from root to this. Performance implication: has to allocate a list. */ - def getElements: java.lang.Iterable[String] = { - import scala.collection.JavaConverters._ - elements.asJava - } + def getElements: java.lang.Iterable[String] = + scala.collection.JavaConverters.asJavaIterableConverter(elements).asJava /** * Walk up the tree to obtain and return the RootActorPath. @@ -112,7 +111,7 @@ final case class RootActorPath(address: Address, name: String = "/") extends Act override def /(child: String): ActorPath = new ChildActorPath(this, child) - override val elements: Iterable[String] = List("") + override def elements: immutable.Iterable[String] = ActorPath.emptyActorPath override val toString: String = address + name @@ -121,7 +120,7 @@ final case class RootActorPath(address: Address, name: String = "/") extends Act else addr + name override def compareTo(other: ActorPath): Int = other match { - case r: RootActorPath ⇒ toString compareTo r.toString + case r: RootActorPath ⇒ toString compareTo r.toString // FIXME make this cheaper by comparing address and name in isolation case c: ChildActorPath ⇒ 1 } } @@ -134,9 +133,9 @@ final class ChildActorPath(val parent: ActorPath, val name: String) extends Acto override def /(child: String): ActorPath = new ChildActorPath(this, child) - override def elements: Iterable[String] = { + override def elements: immutable.Iterable[String] = { @tailrec - def rec(p: ActorPath, acc: List[String]): Iterable[String] = p match { + def rec(p: ActorPath, acc: List[String]): immutable.Iterable[String] = p match { case r: RootActorPath ⇒ acc case _ ⇒ rec(p.parent, p.name :: acc) } diff --git a/akka-actor/src/main/scala/akka/actor/ActorRef.scala b/akka-actor/src/main/scala/akka/actor/ActorRef.scala index 2cb2f984f2..a6685ae549 100644 --- a/akka-actor/src/main/scala/akka/actor/ActorRef.scala +++ b/akka-actor/src/main/scala/akka/actor/ActorRef.scala @@ -153,7 +153,7 @@ trait ScalaActorRef { ref: ActorRef ⇒ * *

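The hunks just below replace the literal null default of "!" with the Actor.noSender placeholder introduced above. In practice, assuming a trivial actor for illustration:

import akka.actor.{ Actor, ActorSystem, Props }

class Greeter extends Actor {
  def receive = { case name: String ⇒ sender ! ("hello " + name) }
}

val system = ActorSystem("demo")
val greeter = system.actorOf(Props[Greeter])

greeter ! "world"                      // no implicit sender outside an actor: Actor.noSender applies
greeter.tell("world", Actor.noSender)  // the explicit spelling of the same call
// the reply then has no live sender and is routed to the receiving system's deadLetters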
*/ - def !(message: Any)(implicit sender: ActorRef = null): Unit + def !(message: Any)(implicit sender: ActorRef = Actor.noSender): Unit } @@ -194,6 +194,7 @@ private[akka] abstract class InternalActorRef extends ActorRef with ScalaActorRe /* * Actor life-cycle management, invoked only internally (in response to user requests via ActorContext). */ + def start(): Unit def resume(causedByFailure: Throwable): Unit def suspend(): Unit def restart(cause: Throwable): Unit @@ -259,13 +260,16 @@ private[akka] class LocalActorRef private[akka] ( /* * Safe publication of this class’s fields is guaranteed by mailbox.setActor() - * which is called indirectly from actorCell.start() (if you’re wondering why + * which is called indirectly from actorCell.init() (if you’re wondering why * this is at all important, remember that under the JMM final fields are only * frozen at the _end_ of the constructor, but we are publishing “this” before * that is reached). + * This means that the result of newActorCell needs to be written to the val + * actorCell before we call init and start, since we can start using "this" + * object from another thread as soon as we run init. */ private val actorCell: ActorCell = newActorCell(_system, this, _props, _supervisor) - actorCell.start(sendSupervise = true, ThreadLocalRandom.current.nextInt()) + actorCell.init(ThreadLocalRandom.current.nextInt(), sendSupervise = true) protected def newActorCell(system: ActorSystemImpl, ref: InternalActorRef, props: Props, supervisor: InternalActorRef): ActorCell = new ActorCell(system, ref, props, supervisor) @@ -279,6 +283,11 @@ private[akka] class LocalActorRef private[akka] ( */ override def isTerminated: Boolean = actorCell.isTerminated + /** + * Starts the actor after initialization. + */ + override def start(): Unit = actorCell.start() + /** * Suspends the actor so that it will not process messages until resumed. 
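The safe-publication comment above leans on a subtle JMM rule: final fields are only guaranteed visible to other threads once the constructor has completed, so publishing `this` mid-construction forfeits that guarantee. A deliberately broken sketch of the hazard (names are hypothetical):

import java.util.concurrent.ConcurrentLinkedQueue

class Leaky(registry: ConcurrentLinkedQueue[Leaky]) {
  registry.add(this)    // “this” escapes before construction finishes...
  final val answer = 42 // ...so a thread polling the registry may still read answer as 0
}

This is why the change stores the new cell in the actorCell val before init and start can hand "this" to another thread.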
The * suspend request is processed asynchronously to the caller of this method @@ -341,7 +350,7 @@ private[akka] class LocalActorRef private[akka] ( override def sendSystemMessage(message: SystemMessage): Unit = actorCell.sendSystemMessage(message) - override def !(message: Any)(implicit sender: ActorRef = null): Unit = actorCell.tell(message, sender) + override def !(message: Any)(implicit sender: ActorRef = Actor.noSender): Unit = actorCell.tell(message, sender) override def restart(cause: Throwable): Unit = actorCell.restart(cause) @@ -390,12 +399,13 @@ private[akka] trait MinimalActorRef extends InternalActorRef with LocalRef { override def getParent: InternalActorRef = Nobody override def getChild(names: Iterator[String]): InternalActorRef = if (names.forall(_.isEmpty)) this else Nobody + override def start(): Unit = () override def suspend(): Unit = () override def resume(causedByFailure: Throwable): Unit = () override def stop(): Unit = () override def isTerminated = false - override def !(message: Any)(implicit sender: ActorRef = null): Unit = () + override def !(message: Any)(implicit sender: ActorRef = Actor.noSender): Unit = () override def sendSystemMessage(message: SystemMessage): Unit = () override def restart(cause: Throwable): Unit = () @@ -409,7 +419,10 @@ private[akka] trait MinimalActorRef extends InternalActorRef with LocalRef { * to the ActorSystem's EventStream */ @SerialVersionUID(1L) -case class DeadLetter(message: Any, sender: ActorRef, recipient: ActorRef) +case class DeadLetter(message: Any, sender: ActorRef, recipient: ActorRef) { + require(sender ne null, "DeadLetter sender may not be null") + require(recipient ne null, "DeadLetter recipient may not be null") +} private[akka] object DeadLetterActorRef { @SerialVersionUID(1L) @@ -435,9 +448,12 @@ private[akka] class EmptyLocalActorRef(override val provider: ActorRefProvider, override def sendSystemMessage(message: SystemMessage): Unit = specialHandle(message) - override def !(message: Any)(implicit sender: ActorRef = null): Unit = message match { - case d: DeadLetter ⇒ specialHandle(d.message) // do NOT form endless loops, since deadLetters will resend! - case _ ⇒ if (!specialHandle(message)) eventStream.publish(DeadLetter(message, sender, this)) + override def !(message: Any)(implicit sender: ActorRef = Actor.noSender): Unit = message match { + case d: DeadLetter ⇒ + specialHandle(d.message) // do NOT form endless loops, since deadLetters will resend! 
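With the new require checks on DeadLetter and the noSender-to-deadLetters substitution just above, subscribers may now rely on both fields being non-null. A sketch of such a subscriber on the event stream:

import akka.actor.{ Actor, ActorSystem, DeadLetter, Props }

class DeadLetterListener extends Actor {
  def receive = {
    case DeadLetter(msg, from, to) ⇒ // both refs are guaranteed non-null now
      println("dead letter from " + from + " to " + to + ": " + msg)
  }
}

val system = ActorSystem("demo")
val listener = system.actorOf(Props[DeadLetterListener])
system.eventStream.subscribe(listener, classOf[DeadLetter])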
+ case _ if !specialHandle(message) ⇒ + eventStream.publish(DeadLetter(message, if (sender eq Actor.noSender) provider.deadLetters else sender, this)) + case _ ⇒ } protected def specialHandle(msg: Any): Boolean = msg match { @@ -520,7 +536,7 @@ private[akka] class VirtualPathContainer( def hasChildren: Boolean = !children.isEmpty - def foreachChild(f: ActorRef ⇒ Unit) = { + def foreachChild(f: ActorRef ⇒ Unit): Unit = { val iter = children.values.iterator while (iter.hasNext) f(iter.next) } diff --git a/akka-actor/src/main/scala/akka/actor/ActorRefProvider.scala b/akka-actor/src/main/scala/akka/actor/ActorRefProvider.scala index d60a46d497..5a3bb7dac2 100644 --- a/akka-actor/src/main/scala/akka/actor/ActorRefProvider.scala +++ b/akka-actor/src/main/scala/akka/actor/ActorRefProvider.scala @@ -8,8 +8,9 @@ import akka.dispatch._ import akka.routing._ import akka.event._ import akka.util.{ Switch, Helpers } +import akka.japi.Util.immutableSeq +import akka.util.Collections.EmptyImmutableSeq import scala.util.{ Success, Failure } -import scala.util.control.NonFatal import scala.concurrent.{ Future, Promise } import java.util.concurrent.atomic.AtomicLong @@ -41,8 +42,7 @@ trait ActorRefProvider { def deadLetters: ActorRef /** - * The root path for all actors within this actor system, including remote - * address if enabled. + * The root path for all actors within this actor system, not including any remote address information. */ def rootPath: ActorPath @@ -145,6 +145,11 @@ trait ActorRefProvider { * attempt is made to verify actual reachability). */ def getExternalAddressFor(addr: Address): Option[Address] + + /** + * Obtain the external address of the default transport. + */ + def getDefaultAddress: Address } /** @@ -271,10 +276,7 @@ trait ActorRefFactory { * * For maximum performance use a collection with efficient head & tail operations. */ - def actorFor(path: java.lang.Iterable[String]): ActorRef = { - import scala.collection.JavaConverters._ - provider.actorFor(lookupRoot, path.asScala) - } + def actorFor(path: java.lang.Iterable[String]): ActorRef = provider.actorFor(lookupRoot, immutableSeq(path)) /** * Construct an [[akka.actor.ActorSelection]] from the given path, which is @@ -319,6 +321,10 @@ private[akka] object SystemGuardian { /** * Local ActorRef provider. + * + * INTERNAL API! + * + * Depending on this class is not supported; only the [[ActorRefProvider]] interface is supported.
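akka.japi.Util.immutableSeq, adopted in the actorFor rewrite above and in several hunks below, replaces per-call-site JavaConverters imports with one conversion at the API boundary that also yields an immutable result:

import akka.japi.Util.immutableSeq
import scala.collection.immutable

val javaPath: java.lang.Iterable[String] = java.util.Arrays.asList("user", "service", "worker1")
val elements: immutable.Seq[String] = immutableSeq(javaPath) // immutable Seq built from the Java iterable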
*/ class LocalActorRefProvider( _systemName: String, @@ -375,7 +381,7 @@ class LocalActorRefProvider( override def stop(): Unit = stopped switchOn { terminationPromise.complete(causeOfTermination.map(Failure(_)).getOrElse(Success(()))) } override def isTerminated: Boolean = stopped.isOn - override def !(message: Any)(implicit sender: ActorRef = null): Unit = stopped.ifOff(message match { + override def !(message: Any)(implicit sender: ActorRef = Actor.noSender): Unit = stopped.ifOff(message match { case Failed(ex, _) if sender ne null ⇒ causeOfTermination = Some(ex); sender.asInstanceOf[InternalActorRef].stop() case NullMessage ⇒ // do nothing case _ ⇒ log.error(this + " received unexpected message [" + message + "]") @@ -383,7 +389,7 @@ class LocalActorRefProvider( override def sendSystemMessage(message: SystemMessage): Unit = stopped ifOff { message match { - case Supervise(_, _) ⇒ // TODO register child in some map to keep track of it and enable shutdown after all dead + case Supervise(_, _, _) ⇒ // TODO register child in some map to keep track of it and enable shutdown after all dead case ChildTerminated(_) ⇒ stop() case _ ⇒ log.error(this + " received unexpected system message [" + message + "]") } @@ -480,7 +486,7 @@ class LocalActorRefProvider( def registerExtraNames(_extras: Map[String, InternalActorRef]): Unit = extraNames ++= _extras private def guardianSupervisorStrategyConfigurator = - dynamicAccess.createInstanceFor[SupervisorStrategyConfigurator](settings.SupervisorStrategyClass, Seq()).get + dynamicAccess.createInstanceFor[SupervisorStrategyConfigurator](settings.SupervisorStrategyClass, EmptyImmutableSeq).get /** * Overridable supervision strategy to be used by the “/user” guardian. @@ -516,6 +522,7 @@ class LocalActorRefProvider( cell.reserveChild("user") val ref = new LocalActorRef(system, Props(new Guardian(guardianStrategy)), rootGuardian, rootPath / "user") cell.initChild(ref) + ref.start() ref } @@ -524,6 +531,7 @@ class LocalActorRefProvider( cell.reserveChild("system") val ref = new LocalActorRef(system, Props(new SystemGuardian(systemGuardianStrategy)), rootGuardian, rootPath / "system") cell.initChild(ref) + ref.start() ref } @@ -585,16 +593,17 @@ class LocalActorRefProvider( if (settings.DebugRouterMisconfiguration && deployer.lookup(path).isDefined) log.warning("Configuration says that {} should be a router, but code disagrees. 
Remove the config or add a routerConfig to its Props.") - if (async) new RepointableActorRef(system, props, supervisor, path).initialize() + if (async) new RepointableActorRef(system, props, supervisor, path).initialize(async) else new LocalActorRef(system, props, supervisor, path) case router ⇒ val lookup = if (lookupDeploy) deployer.lookup(path) else None val fromProps = Iterator(props.deploy.copy(routerConfig = props.deploy.routerConfig withFallback router)) val d = fromProps ++ deploy.iterator ++ lookup.iterator reduce ((a, b) ⇒ b withFallback a) - val ref = new RoutedActorRef(system, props.withRouter(d.routerConfig), supervisor, path).initialize() - if (async) ref else ref.activate() + new RoutedActorRef(system, props.withRouter(d.routerConfig), supervisor, path).initialize(async) } } def getExternalAddressFor(addr: Address): Option[Address] = if (addr == rootPath.address) Some(addr) else None + + def getDefaultAddress: Address = rootPath.address } diff --git a/akka-actor/src/main/scala/akka/actor/ActorSelection.scala b/akka-actor/src/main/scala/akka/actor/ActorSelection.scala index 0740d8724e..e329af556b 100644 --- a/akka-actor/src/main/scala/akka/actor/ActorSelection.scala +++ b/akka-actor/src/main/scala/akka/actor/ActorSelection.scala @@ -23,7 +23,6 @@ abstract class ActorSelection { def tell(msg: Any, sender: ActorRef): Unit = target.tell(toMessage(msg, path), sender) - // FIXME make this so that "next" instead is the remaining path private def toMessage(msg: Any, path: Array[AnyRef]): Any = { var acc = msg var index = path.length - 1 @@ -70,5 +69,5 @@ object ActorSelection { trait ScalaActorSelection { this: ActorSelection ⇒ - def !(msg: Any)(implicit sender: ActorRef = null) = tell(msg, sender) + def !(msg: Any)(implicit sender: ActorRef = Actor.noSender) = tell(msg, sender) } \ No newline at end of file diff --git a/akka-actor/src/main/scala/akka/actor/ActorSystem.scala b/akka-actor/src/main/scala/akka/actor/ActorSystem.scala index 182239adbf..8bada6e0ba 100644 --- a/akka-actor/src/main/scala/akka/actor/ActorSystem.scala +++ b/akka-actor/src/main/scala/akka/actor/ActorSystem.scala @@ -6,24 +6,24 @@ package akka.actor import akka.event._ import akka.dispatch._ -import akka.pattern.ask +import akka.japi.Util.immutableSeq import com.typesafe.config.{ Config, ConfigFactory } import scala.annotation.tailrec -import scala.concurrent.util.Duration -import java.io.Closeable +import scala.collection.immutable +import scala.concurrent.duration.{ FiniteDuration, Duration } import scala.concurrent.{ Await, Awaitable, CanAwait, Future } +import scala.util.{ Failure, Success } import scala.util.control.NonFatal import akka.util._ +import java.io.Closeable import akka.util.internal.{ HashedWheelTimer, ConcurrentIdentityHashMap } import java.util.concurrent.{ ThreadFactory, CountDownLatch, TimeoutException, RejectedExecutionException } import java.util.concurrent.TimeUnit.MILLISECONDS import akka.actor.dungeon.ChildrenContainer -import scala.concurrent.util.FiniteDuration -import util.{ Failure, Success } object ActorSystem { - val Version: String = "2.1-SNAPSHOT" + val Version: String = "2.2-SNAPSHOT" val EnvHome: Option[String] = System.getenv("AKKA_HOME") match { case null | "" | "." 
⇒ None @@ -144,7 +144,7 @@ object ActorSystem { final val LogLevel: String = getString("akka.loglevel") final val StdoutLogLevel: String = getString("akka.stdout-loglevel") - final val EventHandlers: Seq[String] = getStringList("akka.event-handlers").asScala + final val EventHandlers: immutable.Seq[String] = immutableSeq(getStringList("akka.event-handlers")) final val EventHandlerStartTimeout: Timeout = Timeout(Duration(getMilliseconds("akka.event-handler-startup-timeout"), MILLISECONDS)) final val LogConfigOnStart: Boolean = config.getBoolean("akka.log-config-on-start") @@ -273,10 +273,7 @@ abstract class ActorSystem extends ActorRefFactory { /** * ''Java API'': Recursively create a descendant’s path by appending all child names. */ - def descendant(names: java.lang.Iterable[String]): ActorPath = { - import scala.collection.JavaConverters._ - /(names.asScala) - } + def descendant(names: java.lang.Iterable[String]): ActorPath = /(immutableSeq(names)) /** * Start-up time in milliseconds since the epoch. @@ -536,7 +533,7 @@ private[akka] class ActorSystemImpl(val name: String, applicationConfig: Config, val scheduler: Scheduler = createScheduler() val provider: ActorRefProvider = { - val arguments = Seq( + val arguments = Vector( classOf[String] -> name, classOf[Settings] -> settings, classOf[EventStream] -> eventStream, @@ -676,9 +673,8 @@ private[akka] class ActorSystemImpl(val name: String, applicationConfig: Config, def hasExtension(ext: ExtensionId[_ <: Extension]): Boolean = findExtension(ext) != null private def loadExtensions() { - import scala.collection.JavaConversions._ - settings.config.getStringList("akka.extensions") foreach { fqcn ⇒ - dynamicAccess.getObjectFor[AnyRef](fqcn) recoverWith { case _ ⇒ dynamicAccess.createInstanceFor[AnyRef](fqcn, Seq()) } match { + immutableSeq(settings.config.getStringList("akka.extensions")) foreach { fqcn ⇒ + dynamicAccess.getObjectFor[AnyRef](fqcn) recoverWith { case _ ⇒ dynamicAccess.createInstanceFor[AnyRef](fqcn, Nil) } match { case Success(p: ExtensionIdProvider) ⇒ registerExtension(p.lookup()) case Success(p: ExtensionId[_]) ⇒ registerExtension(p) case Success(other) ⇒ log.error("[{}] is not an 'ExtensionIdProvider' or 'ExtensionId', skipping...", fqcn) diff --git a/akka-actor/src/main/scala/akka/actor/Address.scala b/akka-actor/src/main/scala/akka/actor/Address.scala index 438c479176..d98bbcb208 100644 --- a/akka-actor/src/main/scala/akka/actor/Address.scala +++ b/akka-actor/src/main/scala/akka/actor/Address.scala @@ -5,7 +5,8 @@ package akka.actor import java.net.URI import java.net.URISyntaxException import java.net.MalformedURLException -import annotation.tailrec +import scala.annotation.tailrec +import scala.collection.immutable /** * The address specifies the physical location under which an Actor can be @@ -71,11 +72,11 @@ private[akka] trait PathUtils { } object RelativeActorPath extends PathUtils { - def unapply(addr: String): Option[Iterable[String]] = { + def unapply(addr: String): Option[immutable.Seq[String]] = { try { val uri = new URI(addr) if (uri.isAbsolute) None - else Some(split(uri.getPath)) + else Some(split(uri.getRawPath)) } catch { case _: URISyntaxException ⇒ None } @@ -119,13 +120,12 @@ object AddressFromURIString { * Given an ActorPath it returns the Address and the path elements if the path is well-formed */ object ActorPathExtractor extends PathUtils { - def unapply(addr: String): Option[(Address, Iterable[String])] = + def unapply(addr: String): Option[(Address, immutable.Iterable[String])] = try { val uri = 
new URI(addr) - if (uri.getPath == null) None - else AddressFromURIString.unapply(uri) match { - case None ⇒ None - case Some(addr) ⇒ Some((addr, split(uri.getPath).drop(1))) + uri.getRawPath match { + case null ⇒ None + case path ⇒ AddressFromURIString.unapply(uri).map((_, split(path).drop(1))) } } catch { case _: URISyntaxException ⇒ None diff --git a/akka-actor/src/main/scala/akka/actor/Deployer.scala b/akka-actor/src/main/scala/akka/actor/Deployer.scala index dd3e88dcda..8ed7dc754a 100644 --- a/akka-actor/src/main/scala/akka/actor/Deployer.scala +++ b/akka-actor/src/main/scala/akka/actor/Deployer.scala @@ -4,13 +4,14 @@ package akka.actor -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import com.typesafe.config._ import akka.routing._ +import akka.japi.Util.immutableSeq import java.util.concurrent.{ TimeUnit } import akka.util.WildcardTree import java.util.concurrent.atomic.AtomicReference -import annotation.tailrec +import scala.annotation.tailrec /** * This class represents deployment configuration for a given actor path. It is @@ -79,7 +80,11 @@ trait Scope { @SerialVersionUID(1L) abstract class LocalScope extends Scope -//FIXME docs +/** + * The Local Scope is the default one, which is assumed on all deployments + * which do not set a different scope. It is also the only scope handled by + * the LocalActorRefProvider. + */ case object LocalScope extends LocalScope { /** * Java API: get the singleton instance @@ -134,16 +139,24 @@ private[akka] class Deployer(val settings: ActorSystem.Settings, val dynamicAcce } def parseConfig(key: String, config: Config): Option[Deploy] = { - val deployment = config.withFallback(default) + val router = createRouterConfig(deployment.getString("router"), key, config, deployment) + Some(Deploy(key, deployment, router, NoScopeGiven)) + } - val routees = Vector() ++ deployment.getStringList("routees.paths").asScala - + /** + * Factory method for creating `RouterConfig` + * @param routerType the configured name of the router, or FQCN + * @param key the full configuration key of the deployment section + * @param config the user defined config of the deployment, without defaults + * @param deployment the deployment config, with defaults + */ + protected def createRouterConfig(routerType: String, key: String, config: Config, deployment: Config): RouterConfig = { + val routees = immutableSeq(deployment.getStringList("routees.paths")) val nrOfInstances = deployment.getInt("nr-of-instances") + val resizer = if (config.hasPath("resizer")) Some(DefaultResizer(deployment.getConfig("resizer"))) else None - val resizer: Option[Resizer] = if (config.hasPath("resizer")) Some(DefaultResizer(deployment.getConfig("resizer"))) else None - - val router: RouterConfig = deployment.getString("router") match { + routerType match { case "from-code" ⇒ NoRouter case "round-robin" ⇒ RoundRobinRouter(nrOfInstances, routees, resizer) case "random" ⇒ RandomRouter(nrOfInstances, routees, resizer) @@ -156,7 +169,7 @@ private[akka] class Deployer(val settings: ActorSystem.Settings, val dynamicAcce val vnodes = deployment.getInt("virtual-nodes-factor") ConsistentHashingRouter(nrOfInstances, routees, resizer, virtualNodesFactor = vnodes) case fqn ⇒ - val args = Seq(classOf[Config] -> deployment) + val args = List(classOf[Config] -> deployment) dynamicAccess.createInstanceFor[RouterConfig](fqn, args).recover({ case exception ⇒ throw new IllegalArgumentException( ("Cannot instantiate router [%s], defined in [%s], " + @@ -165,7 +178,6 @@ private[akka] 
class Deployer(val settings: ActorSystem.Settings, val dynamicAcce .format(fqn, key), exception) }).get } - - Some(Deploy(key, deployment, router, NoScopeGiven)) } + } diff --git a/akka-actor/src/main/scala/akka/actor/DynamicAccess.scala b/akka-actor/src/main/scala/akka/actor/DynamicAccess.scala index 7a73eb3b15..af891bc483 100644 --- a/akka-actor/src/main/scala/akka/actor/DynamicAccess.scala +++ b/akka-actor/src/main/scala/akka/actor/DynamicAccess.scala @@ -3,7 +3,7 @@ */ package akka.actor -import scala.util.control.NonFatal +import scala.collection.immutable import java.lang.reflect.InvocationTargetException import scala.reflect.ClassTag import scala.util.Try @@ -25,7 +25,7 @@ abstract class DynamicAccess { * val obj = DynamicAccess.createInstanceFor(clazz, Seq(classOf[Config] -> config, classOf[String] -> name)) * }}} */ - def createInstanceFor[T: ClassTag](clazz: Class[_], args: Seq[(Class[_], AnyRef)]): Try[T] + def createInstanceFor[T: ClassTag](clazz: Class[_], args: immutable.Seq[(Class[_], AnyRef)]): Try[T] /** * Obtain a `Class[_]` object loaded with the right class loader (i.e. the one @@ -40,7 +40,7 @@ abstract class DynamicAccess { * `args` argument. The exact usage of args depends on which type is requested, * see the relevant requesting code for details. */ - def createInstanceFor[T: ClassTag](fqcn: String, args: Seq[(Class[_], AnyRef)]): Try[T] + def createInstanceFor[T: ClassTag](fqcn: String, args: immutable.Seq[(Class[_], AnyRef)]): Try[T] /** * Obtain the Scala “object” instance for the given fully-qualified class name, if there is one. @@ -70,7 +70,7 @@ class ReflectiveDynamicAccess(val classLoader: ClassLoader) extends DynamicAcces if (t.isAssignableFrom(c)) c else throw new ClassCastException(t + " is not assignable from " + c) }) - override def createInstanceFor[T: ClassTag](clazz: Class[_], args: Seq[(Class[_], AnyRef)]): Try[T] = + override def createInstanceFor[T: ClassTag](clazz: Class[_], args: immutable.Seq[(Class[_], AnyRef)]): Try[T] = Try { val types = args.map(_._1).toArray val values = args.map(_._2).toArray @@ -81,7 +81,7 @@ class ReflectiveDynamicAccess(val classLoader: ClassLoader) extends DynamicAcces if (t.isInstance(obj)) obj.asInstanceOf[T] else throw new ClassCastException(clazz.getName + " is not a subtype of " + t) } recover { case i: InvocationTargetException if i.getTargetException ne null ⇒ throw i.getTargetException } - override def createInstanceFor[T: ClassTag](fqcn: String, args: Seq[(Class[_], AnyRef)]): Try[T] = + override def createInstanceFor[T: ClassTag](fqcn: String, args: immutable.Seq[(Class[_], AnyRef)]): Try[T] = getClassFor(fqcn) flatMap { c ⇒ createInstanceFor(c, args) } override def getObjectFor[T: ClassTag](fqcn: String): Try[T] = { diff --git a/akka-actor/src/main/scala/akka/actor/Extension.scala b/akka-actor/src/main/scala/akka/actor/Extension.scala index 6fab4ceb07..707c07982a 100644 --- a/akka-actor/src/main/scala/akka/actor/Extension.scala +++ b/akka-actor/src/main/scala/akka/actor/Extension.scala @@ -98,5 +98,5 @@ abstract class ExtensionKey[T <: Extension](implicit m: ClassTag[T]) extends Ext def this(clazz: Class[T]) = this()(ClassTag(clazz)) override def lookup(): ExtensionId[T] = this - def createExtension(system: ExtendedActorSystem): T = system.dynamicAccess.createInstanceFor[T](m.runtimeClass, Seq(classOf[ExtendedActorSystem] -> system)).get + def createExtension(system: ExtendedActorSystem): T = system.dynamicAccess.createInstanceFor[T](m.runtimeClass, List(classOf[ExtendedActorSystem] -> system)).get } diff 
--git a/akka-actor/src/main/scala/akka/actor/FSM.scala b/akka-actor/src/main/scala/akka/actor/FSM.scala index a58abb0ac3..9a9b56c470 100644 --- a/akka-actor/src/main/scala/akka/actor/FSM.scala +++ b/akka-actor/src/main/scala/akka/actor/FSM.scala @@ -5,10 +5,10 @@ package akka.actor import language.implicitConversions import akka.util._ -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import scala.collection.mutable import akka.routing.{ Deafen, Listen, Listeners } -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.FiniteDuration object FSM { @@ -238,7 +238,7 @@ object FSM { * setTimer("tock", TockMsg, 1 second, true) // repeating * setTimer("lifetime", TerminateMsg, 1 hour, false) // single-shot * cancelTimer("tock") - * timerActive_? ("tock") + * isTimerActive("tock") * */ trait FSM[S, D] extends Listeners with ActorLogging { @@ -372,7 +372,15 @@ trait FSM[S, D] extends Listeners with ActorLogging { * timer does not exist, has previously been canceled or if it was a * single-shot timer whose message was already received. */ - final def timerActive_?(name: String) = timers contains name + @deprecated("Use isTimerActive(name) instead.", "2.2") + final def timerActive_?(name: String) = isTimerActive(name) + + /** + * Inquire whether the named timer is still active. Returns true unless the + * timer does not exist, has previously been canceled or if it was a + * single-shot timer whose message was already received. + */ + final def isTimerActive(name: String) = timers contains name /** * Set state timeout explicitly. This method can safely be used from within a @@ -380,6 +388,11 @@ trait FSM[S, D] extends Listeners with ActorLogging { */ final def setStateTimeout(state: S, timeout: Timeout): Unit = stateTimeouts(state) = timeout + /** + * Internal API, used for testing. + */ + private[akka] final def isStateTimerActive = timeoutFuture.isDefined + /** * Set handler which is called upon each state transition, i.e. not when * staying in the same state. This may use the pair extractor defined in the @@ -427,6 +440,8 @@ trait FSM[S, D] extends Listeners with ActorLogging { /** * Set handler which is called upon reception of unhandled messages. Calling * this method again will overwrite the previous contents. + * + * The current state may be queried using ``stateName``. 
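 *
 * (Editor's illustration, not part of this commit. A typical catch-all
 * handler; ``Event``, ``stay`` and ``log`` come with FSM and ActorLogging:)
 * {{{
 * whenUnhandled {
 *   case Event(msg, _) ⇒
 *     log.warning("unhandled message " + msg + " in state " + stateName)
 *     stay()
 * }
 * }}}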
*/ final def whenUnhandled(stateFunction: StateFunction): Unit = handleEvent = stateFunction orElse handleEventDefault @@ -519,7 +534,7 @@ trait FSM[S, D] extends Listeners with ActorLogging { * Main actor receive() method * ******************************************* */ - override final def receive: Receive = { + override def receive: Receive = { case TimeoutMarker(gen) ⇒ if (generation == gen) { processMsg(StateTimeout, "state timeout") @@ -632,6 +647,8 @@ trait FSM[S, D] extends Listeners with ActorLogging { case Failure(msg: AnyRef) ⇒ log.error(msg.toString) case _ ⇒ } + for (timer ← timers.values) timer.cancel() + timers.clear() val stopEvent = StopEvent(reason, currentState.stateName, currentState.stateData) if (terminateEvent.isDefinedAt(stopEvent)) terminateEvent(stopEvent) diff --git a/akka-actor/src/main/scala/akka/actor/FaultHandling.scala b/akka-actor/src/main/scala/akka/actor/FaultHandling.scala index 3d1c9a01c3..7f65c84d02 100644 --- a/akka-actor/src/main/scala/akka/actor/FaultHandling.scala +++ b/akka-actor/src/main/scala/akka/actor/FaultHandling.scala @@ -5,11 +5,13 @@ package akka.actor import language.implicitConversions -import java.util.concurrent.TimeUnit -import scala.collection.mutable.ArrayBuffer -import scala.collection.JavaConversions._ import java.lang.{ Iterable ⇒ JIterable } -import scala.concurrent.util.Duration +import java.util.concurrent.TimeUnit +import akka.japi.Util.immutableSeq +import scala.collection.mutable.ArrayBuffer +import scala.collection.immutable +import scala.concurrent.duration.Duration + /** * INTERNAL API */ @@ -171,7 +173,7 @@ object SupervisorStrategy extends SupervisorStrategyLowPriorityImplicits { * Implicit conversion from `Seq` of Throwables to a `Decider`. * This maps the given Throwables to restarts, otherwise escalates. */ - implicit def seqThrowable2Decider(trapExit: Seq[Class[_ <: Throwable]]): Decider = makeDecider(trapExit) + implicit def seqThrowable2Decider(trapExit: immutable.Seq[Class[_ <: Throwable]]): Decider = makeDecider(trapExit) type Decider = PartialFunction[Throwable, Directive] type JDecider = akka.japi.Function[Throwable, Directive] @@ -181,21 +183,15 @@ object SupervisorStrategy extends SupervisorStrategyLowPriorityImplicits { * Decider builder which just checks whether one of * the given Throwables matches the cause and restarts, otherwise escalates. */ - def makeDecider(trapExit: Array[Class[_]]): Decider = - { case x ⇒ if (trapExit exists (_ isInstance x)) Restart else Escalate } + def makeDecider(trapExit: immutable.Seq[Class[_ <: Throwable]]): Decider = { + case x ⇒ if (trapExit exists (_ isInstance x)) Restart else Escalate + } /** * Decider builder which just checks whether one of * the given Throwables matches the cause and restarts, otherwise escalates. */ - def makeDecider(trapExit: Seq[Class[_ <: Throwable]]): Decider = - { case x ⇒ if (trapExit exists (_ isInstance x)) Restart else Escalate } - - /** - * Decider builder which just checks whether one of - * the given Throwables matches the cause and restarts, otherwise escalates. - */ - def makeDecider(trapExit: JIterable[Class[_ <: Throwable]]): Decider = makeDecider(trapExit.toSeq) + def makeDecider(trapExit: JIterable[Class[_ <: Throwable]]): Decider = makeDecider(immutableSeq(trapExit)) /** * Decider builder for Iterables of cause-directive pairs, e.g. 
a map obtained @@ -220,20 +216,22 @@ object SupervisorStrategy extends SupervisorStrategyLowPriorityImplicits { * * INTERNAL API */ - private[akka] def sort(in: Iterable[CauseDirective]): Seq[CauseDirective] = + private[akka] def sort(in: Iterable[CauseDirective]): immutable.Seq[CauseDirective] = (new ArrayBuffer[CauseDirective](in.size) /: in) { (buf, ca) ⇒ buf.indexWhere(_._1 isAssignableFrom ca._1) match { case -1 ⇒ buf append ca case x ⇒ buf insert (x, ca) } buf - } + }.to[immutable.IndexedSeq] private[akka] def withinTimeRangeOption(withinTimeRange: Duration): Option[Duration] = if (withinTimeRange.isFinite && withinTimeRange >= Duration.Zero) Some(withinTimeRange) else None private[akka] def maxNrOfRetriesOption(maxNrOfRetries: Int): Option[Int] = if (maxNrOfRetries < 0) None else Some(maxNrOfRetries) + + private[akka] val escalateDefault = (_: Any) ⇒ Escalate } /** @@ -280,7 +278,7 @@ abstract class SupervisorStrategy { * @param children is a lazy collection (a view) */ def handleFailure(context: ActorContext, child: ActorRef, cause: Throwable, stats: ChildRestartStats, children: Iterable[ChildRestartStats]): Boolean = { - val directive = if (decider.isDefinedAt(cause)) decider(cause) else Escalate //FIXME applyOrElse in Scala 2.10 + val directive = decider.applyOrElse(cause, escalateDefault) directive match { case Resume ⇒ resumeChild(child, cause); true case Restart ⇒ processFailure(context, true, child, cause, stats, children); true @@ -334,10 +332,6 @@ case class AllForOneStrategy(maxNrOfRetries: Int = -1, withinTimeRange: Duration def this(maxNrOfRetries: Int, withinTimeRange: Duration, trapExit: JIterable[Class[_ <: Throwable]]) = this(maxNrOfRetries, withinTimeRange)(SupervisorStrategy.makeDecider(trapExit)) - - def this(maxNrOfRetries: Int, withinTimeRange: Duration, trapExit: Array[Class[_]]) = - this(maxNrOfRetries, withinTimeRange)(SupervisorStrategy.makeDecider(trapExit)) - /* * this is a performance optimization to avoid re-allocating the pairs upon * every call to requestRestartPermission, assuming that strategies are shared @@ -376,9 +370,6 @@ case class OneForOneStrategy(maxNrOfRetries: Int = -1, withinTimeRange: Duration def this(maxNrOfRetries: Int, withinTimeRange: Duration, trapExit: JIterable[Class[_ <: Throwable]]) = this(maxNrOfRetries, withinTimeRange)(SupervisorStrategy.makeDecider(trapExit)) - def this(maxNrOfRetries: Int, withinTimeRange: Duration, trapExit: Array[Class[_]]) = - this(maxNrOfRetries, withinTimeRange)(SupervisorStrategy.makeDecider(trapExit)) - /* * this is a performance optimization to avoid re-allocating the pairs upon * every call to requestRestartPermission, assuming that strategies are shared diff --git a/akka-actor/src/main/scala/akka/actor/IO.scala b/akka-actor/src/main/scala/akka/actor/IO.scala index 8c104e9a18..e1dedb3ba2 100644 --- a/akka-actor/src/main/scala/akka/actor/IO.scala +++ b/akka-actor/src/main/scala/akka/actor/IO.scala @@ -6,8 +6,9 @@ package akka.actor import language.higherKinds import language.postfixOps +import scala.collection.immutable import scala.concurrent.{ ExecutionContext, Future } -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import scala.util.control.NonFatal import akka.util.ByteString import java.net.{ SocketAddress, InetSocketAddress } @@ -122,7 +123,7 @@ object IO { * @return a new SocketHandle that can be used to perform actions on the * new connection's SocketChannel. 
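 *
 * (Editor's illustration, not part of this commit. Typical use from within
 * the server handle's owner actor, whose implicit ``self`` supplies the
 * ``socketOwner``:)
 * {{{
 * def receive = {
 *   case IO.NewClient(server) ⇒
 *     val socket = server.accept() // the new connection is owned by self
 * }
 * }}}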
*/ - def accept(options: Seq[SocketOption] = Seq.empty)(implicit socketOwner: ActorRef): SocketHandle = { + def accept(options: immutable.Seq[SocketOption] = Nil)(implicit socketOwner: ActorRef): SocketHandle = { val socket = SocketHandle(socketOwner, ioManager) ioManager ! Accept(socket, this, options) socket @@ -250,7 +251,7 @@ object IO { * * Normally sent using IOManager.listen() */ - case class Listen(server: ServerHandle, address: SocketAddress, options: Seq[ServerSocketOption] = Seq.empty) extends IOMessage + case class Listen(server: ServerHandle, address: SocketAddress, options: immutable.Seq[ServerSocketOption] = Nil) extends IOMessage /** * Message from an [[akka.actor.IOManager]] that the ServerSocketChannel is @@ -272,7 +273,7 @@ object IO { * * Normally sent using [[akka.actor.IO.ServerHandle]].accept() */ - case class Accept(socket: SocketHandle, server: ServerHandle, options: Seq[SocketOption] = Seq.empty) extends IOMessage + case class Accept(socket: SocketHandle, server: ServerHandle, options: immutable.Seq[SocketOption] = Nil) extends IOMessage /** * Message to an [[akka.actor.IOManager]] to create a SocketChannel connected * @@ -280,7 +281,7 @@ object IO { * * Normally sent using IOManager.connect() */ - case class Connect(socket: SocketHandle, address: SocketAddress, options: Seq[SocketOption] = Seq.empty) extends IOMessage + case class Connect(socket: SocketHandle, address: SocketAddress, options: immutable.Seq[SocketOption] = Nil) extends IOMessage /** * Message from an [[akka.actor.IOManager]] that the SocketChannel has @@ -832,7 +833,7 @@ final class IOManager private (system: ExtendedActorSystem) extends Extension { * @param options Seq of [[akka.actor.IO.ServerSocketOption]] to set up on the socket * @return a [[akka.actor.IO.ServerHandle]] to uniquely identify the created socket */ - def listen(address: SocketAddress, options: Seq[IO.ServerSocketOption])(implicit owner: ActorRef): IO.ServerHandle = { + def listen(address: SocketAddress, options: immutable.Seq[IO.ServerSocketOption])(implicit owner: ActorRef): IO.ServerHandle = { val server = IO.ServerHandle(owner, actor) actor ! IO.Listen(server, address, options) server @@ -847,7 +848,7 @@ final class IOManager private (system: ExtendedActorSystem) extends Extension { * @param owner the ActorRef that will receive messages from the IOManagerActor * @return a [[akka.actor.IO.ServerHandle]] to uniquely identify the created socket */ - def listen(address: SocketAddress)(implicit owner: ActorRef): IO.ServerHandle = listen(address, Seq.empty) + def listen(address: SocketAddress)(implicit owner: ActorRef): IO.ServerHandle = listen(address, Nil) /** * Create a ServerSocketChannel listening on a host and port. 
Messages will @@ -860,7 +861,7 @@ final class IOManager private (system: ExtendedActorSystem) extends Extension { * @param owner the ActorRef that will receive messages from the IOManagerActor * @return a [[akka.actor.IO.ServerHandle]] to uniquely identify the created socket */ - def listen(host: String, port: Int, options: Seq[IO.ServerSocketOption] = Seq.empty)(implicit owner: ActorRef): IO.ServerHandle = + def listen(host: String, port: Int, options: immutable.Seq[IO.ServerSocketOption] = Nil)(implicit owner: ActorRef): IO.ServerHandle = listen(new InetSocketAddress(host, port), options)(owner) /** @@ -873,7 +874,7 @@ final class IOManager private (system: ExtendedActorSystem) extends Extension { * @param owner the ActorRef that will receive messages from the IOManagerActor * @return a [[akka.actor.IO.SocketHandle]] to uniquely identify the created socket */ - def connect(address: SocketAddress, options: Seq[IO.SocketOption] = Seq.empty)(implicit owner: ActorRef): IO.SocketHandle = { + def connect(address: SocketAddress, options: immutable.Seq[IO.SocketOption] = Nil)(implicit owner: ActorRef): IO.SocketHandle = { val socket = IO.SocketHandle(owner, actor) actor ! IO.Connect(socket, address, options) socket @@ -991,7 +992,7 @@ final class IOManagerActor(val settings: Settings) extends Actor with ActorLoggi private def forwardFailure(f: ⇒ Unit): Unit = try f catch { case NonFatal(e) ⇒ sender ! Status.Failure(e) } - private def setSocketOptions(socket: java.net.Socket, options: Seq[IO.SocketOption]) { + private def setSocketOptions(socket: java.net.Socket, options: immutable.Seq[IO.SocketOption]) { options foreach { case IO.KeepAlive(on) ⇒ forwardFailure(socket.setKeepAlive(on)) case IO.OOBInline(on) ⇒ forwardFailure(socket.setOOBInline(on)) diff --git a/akka-actor/src/main/scala/akka/actor/RepointableActorRef.scala b/akka-actor/src/main/scala/akka/actor/RepointableActorRef.scala index 7bbc5517d8..02aef18564 100644 --- a/akka-actor/src/main/scala/akka/actor/RepointableActorRef.scala +++ b/akka-actor/src/main/scala/akka/actor/RepointableActorRef.scala @@ -5,17 +5,18 @@ package akka.actor import java.io.ObjectStreamException +import java.util.{ LinkedList ⇒ JLinkedList, ListIterator ⇒ JListIterator } import java.util.concurrent.TimeUnit import java.util.concurrent.locks.ReentrantLock import scala.annotation.tailrec -import scala.collection.mutable.Queue import scala.concurrent.forkjoin.ThreadLocalRandom import akka.actor.dungeon.ChildrenContainer -import akka.dispatch.{ Envelope, Supervise, SystemMessage, Terminate } import akka.event.Logging.Warning import akka.util.Unsafe +import akka.dispatch._ +import util.Try /** * This actor ref starts out with some dummy cell (by default just enqueuing @@ -32,17 +33,34 @@ private[akka] class RepointableActorRef( val path: ActorPath) extends ActorRefWithCell with RepointableRef { - import AbstractActorRef.cellOffset + import AbstractActorRef.{ cellOffset, lookupOffset } + /* + * H E R E B E D R A G O N S ! + * + * There are two main functions of a Cell: message queueing and child lookup. + * When switching out the UnstartedCell for its real replacement, the former + * must be switched after all messages have been drained from the temporary + * queue into the real mailbox, while the latter must be switched before + * processing the very first message (i.e. before Cell.start()). Hence there + * are two refs here, one for each function, and they are switched just so. 
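 *
 * (Editor's sketch, not part of this commit: the lock-free swap idiom used
 * by swapCell/swapLookup below, restated with AtomicReference instead of
 * Unsafe for clarity.)
 * {{{
 * import java.util.concurrent.atomic.AtomicReference
 * import scala.annotation.tailrec
 *
 * final class Slot[A](init: A) {
 *   private[this] val ref = new AtomicReference[A](init)
 *   @tailrec final def swap(next: A): A = {
 *     val old = ref.get
 *     if (ref.compareAndSet(old, next)) old else swap(next)
 *   }
 * }
 * }}}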
+ */ @volatile private var _cellDoNotCallMeDirectly: Cell = _ + @volatile private var _lookupDoNotCallMeDirectly: Cell = _ def underlying: Cell = Unsafe.instance.getObjectVolatile(this, cellOffset).asInstanceOf[Cell] + def lookup = Unsafe.instance.getObjectVolatile(this, lookupOffset).asInstanceOf[Cell] @tailrec final def swapCell(next: Cell): Cell = { val old = underlying if (Unsafe.instance.compareAndSwapObject(this, cellOffset, old, next)) old else swapCell(next) } + @tailrec final def swapLookup(next: Cell): Cell = { + val old = lookup + if (Unsafe.instance.compareAndSwapObject(this, lookupOffset, old, next)) old else swapLookup(next) + } + /** * Initialize: make a dummy cell which holds just a mailbox, then tell our * supervisor that we exist so that he can create the real Cell in @@ -52,12 +70,17 @@ private[akka] class RepointableActorRef( * * This is protected so that others can have different initialization. */ - def initialize(): this.type = { - val uid = ThreadLocalRandom.current.nextInt() - swapCell(new UnstartedCell(system, this, props, supervisor, uid)) - supervisor.sendSystemMessage(Supervise(this, uid)) - this - } + def initialize(async: Boolean): this.type = + underlying match { + case null ⇒ + val uid = ThreadLocalRandom.current.nextInt() + swapCell(new UnstartedCell(system, this, props, supervisor, uid)) + swapLookup(underlying) + supervisor.sendSystemMessage(Supervise(this, async, uid)) + if (!async) point() + this + case other ⇒ throw new IllegalStateException("initialize called more than once!") + } /** * This method is supposed to be called by the supervisor in handleSupervise() @@ -65,21 +88,33 @@ private[akka] class RepointableActorRef( * modification of the `underlying` field, though it is safe to send messages * at any time. */ - def activate(): this.type = { + def point(): this.type = underlying match { - case u: UnstartedCell ⇒ u.replaceWith(newCell(u)) - case _ ⇒ // this happens routinely for things which were created async=false + case u: UnstartedCell ⇒ + /* + * The problem here was that if the real actor (which will start running + * at cell.start()) creates children in its constructor, then this may + * happen before the swapCell in u.replaceWith, meaning that those + * children cannot be looked up immediately, e.g. if they shall become + * routees. + */ + val cell = newCell(u) + swapLookup(cell) + cell.start() + u.replaceWith(cell) + this + case null ⇒ throw new IllegalStateException("underlying cell is null") + case _ ⇒ this // this happens routinely for things which were created async=false } - this - } /** * This is called by point() to obtain the cell which is to replace the * unstarted cell. The cell must be fully functional. 
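 *
 * (Editor's illustration, not part of this commit: the constructor-created
 * children that point() above must cater for. All names are invented.)
 * {{{
 * class Worker extends Actor { def receive = { case _ ⇒ } }
 *
 * class Parent extends Actor {
 *   // runs from within cell.start(); the child must be resolvable by
 *   // name right away, e.g. when it is configured as a routee
 *   val worker = context.actorOf(Props[Worker], "worker")
 *   def receive = { case msg ⇒ worker forward msg }
 * }
 * }}}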
*/ - def newCell(old: Cell): Cell = - new ActorCell(system, this, props, supervisor) - .start(sendSupervise = false, old.asInstanceOf[UnstartedCell].uid) + def newCell(old: UnstartedCell): Cell = + new ActorCell(system, this, props, supervisor).init(old.uid, sendSupervise = false) + + def start(): Unit = () def suspend(): Unit = underlying.suspend() @@ -89,7 +124,11 @@ private[akka] class RepointableActorRef( def restart(cause: Throwable): Unit = underlying.restart(cause) - def isStarted: Boolean = !underlying.isInstanceOf[UnstartedCell] + def isStarted: Boolean = underlying match { + case _: UnstartedCell ⇒ false + case null ⇒ throw new IllegalStateException("isStarted called before initialized") + case _ ⇒ true + } def isTerminated: Boolean = underlying.isTerminated @@ -105,14 +144,14 @@ private[akka] class RepointableActorRef( case ".." ⇒ getParent.getChild(name) case "" ⇒ getChild(name) case other ⇒ - underlying.getChildByName(other) match { + lookup.getChildByName(other) match { case Some(crs: ChildRestartStats) ⇒ crs.child.asInstanceOf[InternalActorRef].getChild(name) case _ ⇒ Nobody } } } else this - def !(message: Any)(implicit sender: ActorRef = null) = underlying.tell(message, sender) + def !(message: Any)(implicit sender: ActorRef = Actor.noSender) = underlying.tell(message, sender) def sendSystemMessage(message: SystemMessage) = underlying.sendSystemMessage(message) @@ -120,116 +159,116 @@ private[akka] class RepointableActorRef( protected def writeReplace(): AnyRef = SerializedActorRef(path) } -private[akka] class UnstartedCell(val systemImpl: ActorSystemImpl, val self: RepointableActorRef, val props: Props, val supervisor: InternalActorRef, val uid: Int) - extends Cell { +private[akka] class UnstartedCell(val systemImpl: ActorSystemImpl, + val self: RepointableActorRef, + val props: Props, + val supervisor: InternalActorRef, + val uid: Int) extends Cell { /* * This lock protects all accesses to this cell’s queues. It also ensures * safe switching to the started ActorCell. */ - val lock = new ReentrantLock + private[this] final val lock = new ReentrantLock - // use Envelope to keep on-send checks in the same place - val queue: Queue[Envelope] = Queue() - val systemQueue: Queue[SystemMessage] = Queue() - var suspendCount: Int = 0 + // use Envelope to keep on-send checks in the same place ACCESS MUST BE PROTECTED BY THE LOCK + private[this] final val queue = new JLinkedList[Any]() - private def timeout = system.settings.UnstartedPushTimeout.duration.toMillis + import systemImpl.settings.UnstartedPushTimeout.{ duration ⇒ timeout } - def replaceWith(cell: Cell): Unit = { - lock.lock() + def replaceWith(cell: Cell): Unit = locked { try { - /* - * The CallingThreadDispatcher nicely dives under the ReentrantLock and - * breaks things by enqueueing into stale queues from within the message - * processing which happens in-line for sendSystemMessage() and tell(). - * Since this is the only possible way to f*ck things up within this - * lock, double-tap (well, N-tap, really); concurrent modification is - * still not possible because we’re the only thread accessing the queues. 
- */ - while (systemQueue.nonEmpty || queue.nonEmpty) { - while (systemQueue.nonEmpty) { - val msg = systemQueue.dequeue() - cell.sendSystemMessage(msg) - } - if (queue.nonEmpty) { - val envelope = queue.dequeue() - cell.tell(envelope.message, envelope.sender) + while (!queue.isEmpty) { + queue.poll() match { + case s: SystemMessage ⇒ cell.sendSystemMessage(s) + case e: Envelope ⇒ cell.tell(e.message, e.sender) } } - } finally try + } finally { self.swapCell(cell) - finally try - for (_ ← 1 to suspendCount) cell.suspend() - finally - lock.unlock() + } } def system: ActorSystem = systemImpl - def suspend(): Unit = { - lock.lock() - try suspendCount += 1 - finally lock.unlock() - } - def resume(causedByFailure: Throwable): Unit = { - lock.lock() - try suspendCount -= 1 - finally lock.unlock() - } - def restart(cause: Throwable): Unit = { - lock.lock() - try suspendCount -= 1 - finally lock.unlock() - } + def start(): this.type = this + def suspend(): Unit = sendSystemMessage(Suspend()) + def resume(causedByFailure: Throwable): Unit = sendSystemMessage(Resume(causedByFailure)) + def restart(cause: Throwable): Unit = sendSystemMessage(Recreate(cause)) def stop(): Unit = sendSystemMessage(Terminate()) - def isTerminated: Boolean = false + def isTerminated: Boolean = locked { + val cell = self.underlying + if (cellIsReady(cell)) cell.isTerminated else false + } def parent: InternalActorRef = supervisor def childrenRefs: ChildrenContainer = ChildrenContainer.EmptyChildrenContainer def getChildByName(name: String): Option[ChildRestartStats] = None + def tell(message: Any, sender: ActorRef): Unit = { - if (lock.tryLock(timeout, TimeUnit.MILLISECONDS)) { + val useSender = if (sender eq Actor.noSender) system.deadLetters else sender + if (lock.tryLock(timeout.length, timeout.unit)) { try { - if (self.underlying eq this) queue enqueue Envelope(message, sender, system) - else self.underlying.tell(message, sender) - } finally { - lock.unlock() - } + val cell = self.underlying + if (cellIsReady(cell)) { + cell.tell(message, useSender) + } else if (!queue.offer(Envelope(message, useSender, system))) { + system.eventStream.publish(Warning(self.path.toString, getClass, "dropping message of type " + message.getClass + " due to enqueue failure")) + system.deadLetters ! DeadLetter(message, useSender, self) + } + } finally lock.unlock() } else { - system.deadLetters ! DeadLetter(message, sender, self) - } - } - def sendSystemMessage(msg: SystemMessage): Unit = { - if (lock.tryLock(timeout, TimeUnit.MILLISECONDS)) { - try { - if (self.underlying eq this) systemQueue enqueue msg - else self.underlying.sendSystemMessage(msg) - } finally { - lock.unlock() - } - } else { - // FIXME: once we have guaranteed delivery of system messages, hook this in! - system.eventStream.publish(Warning(self.path.toString, getClass, "dropping system message " + msg + " due to lock timeout")) - system.deadLetters ! DeadLetter(msg, self, self) - } - } - def isLocal = true - def hasMessages: Boolean = { - lock.lock() - try { - if (self.underlying eq this) !queue.isEmpty - else self.underlying.hasMessages - } finally { - lock.unlock() - } - } - def numberOfMessages: Int = { - lock.lock() - try { - if (self.underlying eq this) queue.size - else self.underlying.numberOfMessages - } finally { - lock.unlock() + system.eventStream.publish(Warning(self.path.toString, getClass, "dropping message of type " + message.getClass + " due to lock timeout")) + system.deadLetters ! 
DeadLetter(message, useSender, self) } } + // FIXME: once we have guaranteed delivery of system messages, hook this in! + def sendSystemMessage(msg: SystemMessage): Unit = + if (lock.tryLock(timeout.length, timeout.unit)) { + try { + val cell = self.underlying + if (cellIsReady(cell)) { + cell.sendSystemMessage(msg) + } else { + // systemMessages that are sent during replace need to jump to just after the last system message in the queue, so it's processed before other messages + val wasEnqueued = if ((self.lookup ne this) && (self.underlying eq this) && !queue.isEmpty()) { + @tailrec def tryEnqueue(i: JListIterator[Any] = queue.listIterator(), insertIntoIndex: Int = -1): Boolean = + if (i.hasNext()) + tryEnqueue(i, + if (i.next().isInstanceOf[SystemMessage]) i.nextIndex() // update last sysmsg seen so far + else insertIntoIndex) // or just keep the last seen one + else if (insertIntoIndex == -1) queue.offer(msg) + else Try(queue.add(insertIntoIndex, msg)).isSuccess + tryEnqueue() + } else queue.offer(msg) + + if (!wasEnqueued) { + system.eventStream.publish(Warning(self.path.toString, getClass, "dropping system message " + msg + " due to enqueue failure")) + system.deadLetters ! DeadLetter(msg, self, self) + } + } + } finally lock.unlock() + } else { + system.eventStream.publish(Warning(self.path.toString, getClass, "dropping system message " + msg + " due to lock timeout")) + system.deadLetters ! DeadLetter(msg, self, self) + } + + def isLocal = true + + private[this] final def cellIsReady(cell: Cell): Boolean = (cell ne this) && (cell ne null) + + def hasMessages: Boolean = locked { + val cell = self.underlying + if (cellIsReady(cell)) cell.hasMessages else !queue.isEmpty + } + + def numberOfMessages: Int = locked { + val cell = self.underlying + if (cellIsReady(cell)) cell.numberOfMessages else queue.size + } + + private[this] final def locked[T](body: ⇒ T): T = { + lock.lock() + try body finally lock.unlock() + } + } diff --git a/akka-actor/src/main/scala/akka/actor/Scheduler.scala b/akka-actor/src/main/scala/akka/actor/Scheduler.scala index 02c67c6423..4aa91f916f 100644 --- a/akka-actor/src/main/scala/akka/actor/Scheduler.scala +++ b/akka-actor/src/main/scala/akka/actor/Scheduler.scala @@ -4,17 +4,18 @@ package akka.actor -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import akka.util.internal.{ TimerTask, HashedWheelTimer, Timeout ⇒ HWTimeout, Timer } import akka.event.LoggingAdapter import akka.dispatch.MessageDispatcher import java.io.Closeable -import java.util.concurrent.atomic.AtomicReference +import java.util.concurrent.atomic.{ AtomicReference, AtomicLong } import scala.annotation.tailrec import akka.util.internal._ import concurrent.ExecutionContext -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.FiniteDuration +// The Scheduler trait is included in the documentation. KEEP THE LINES SHORT!!! //#scheduler /** * An Akka scheduler service. 
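 *
 * (Editor's illustration, not part of this commit. Typical use, assuming an
 * ActorSystem ``system`` whose dispatcher serves as the implicit
 * ExecutionContext:)
 * {{{
 * import scala.concurrent.duration._
 * import system.dispatcher
 *
 * val task = system.scheduler.schedule(0.seconds, 1.second) {
 *   println("tick")
 * }
 * // and eventually:
 * task.cancel()
 * }}}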
This one needs one special behavior: if @@ -50,7 +51,8 @@ trait Scheduler { */ def schedule( initialDelay: FiniteDuration, - interval: FiniteDuration)(f: ⇒ Unit)(implicit executor: ExecutionContext): Cancellable + interval: FiniteDuration)(f: ⇒ Unit)( + implicit executor: ExecutionContext): Cancellable /** * Schedules a function to be run repeatedly with an initial delay and @@ -93,7 +95,8 @@ trait Scheduler { * Scala API */ def scheduleOnce( - delay: FiniteDuration)(f: ⇒ Unit)(implicit executor: ExecutionContext): Cancellable + delay: FiniteDuration)(f: ⇒ Unit)( + implicit executor: ExecutionContext): Cancellable } //#scheduler @@ -137,14 +140,17 @@ class DefaultScheduler(hashedWheelTimer: HashedWheelTimer, log: LoggingAdapter) val continuousCancellable = new ContinuousCancellable continuousCancellable.init( hashedWheelTimer.newTimeout( - new TimerTask with ContinuousScheduling { + new AtomicLong(System.nanoTime + initialDelay.toNanos) with TimerTask with ContinuousScheduling { def run(timeout: HWTimeout) { executor execute new Runnable { override def run = { receiver ! message // Check if the receiver is still alive and kicking before reschedule the task if (receiver.isTerminated) log.debug("Could not reschedule message to be sent because receiving actor {} has been terminated.", receiver) - else scheduleNext(timeout, delay, continuousCancellable) + else { + val driftNanos = System.nanoTime - getAndAdd(delay.toNanos) + scheduleNext(timeout, Duration.fromNanos(Math.max(delay.toNanos - driftNanos, 1)), continuousCancellable) + } } } } @@ -162,11 +168,12 @@ class DefaultScheduler(hashedWheelTimer: HashedWheelTimer, log: LoggingAdapter) val continuousCancellable = new ContinuousCancellable continuousCancellable.init( hashedWheelTimer.newTimeout( - new TimerTask with ContinuousScheduling { + new AtomicLong(System.nanoTime + initialDelay.toNanos) with TimerTask with ContinuousScheduling { override def run(timeout: HWTimeout): Unit = executor.execute(new Runnable { override def run = { runnable.run() - scheduleNext(timeout, delay, continuousCancellable) + val driftNanos = System.nanoTime - getAndAdd(delay.toNanos) + scheduleNext(timeout, Duration.fromNanos(Math.max(delay.toNanos - driftNanos, 1)), continuousCancellable) } }) }, @@ -199,8 +206,8 @@ class DefaultScheduler(hashedWheelTimer: HashedWheelTimer, log: LoggingAdapter) } override def close(): Unit = { - import scala.collection.JavaConverters._ - hashedWheelTimer.stop().asScala foreach execDirectly + val i = hashedWheelTimer.stop().iterator() + while (i.hasNext) execDirectly(i.next()) } } diff --git a/akka-actor/src/main/scala/akka/actor/Stash.scala b/akka-actor/src/main/scala/akka/actor/Stash.scala index 05b618d03a..cdf4ef6d5b 100644 --- a/akka-actor/src/main/scala/akka/actor/Stash.scala +++ b/akka-actor/src/main/scala/akka/actor/Stash.scala @@ -16,13 +16,13 @@ import akka.AkkaException * def receive = { * case "open" ⇒ * unstashAll() - * context.become { + * context.become({ * case "write" ⇒ // do writing... 
* case "close" ⇒ * unstashAll() * context.unbecome() * case msg ⇒ stash() - * } + * }, discardOld = false) * case "done" ⇒ // done * case msg ⇒ stash() * } diff --git a/akka-actor/src/main/scala/akka/actor/TypedActor.scala b/akka-actor/src/main/scala/akka/actor/TypedActor.scala index 69a3707f48..cc12ed07a2 100644 --- a/akka-actor/src/main/scala/akka/actor/TypedActor.scala +++ b/akka-actor/src/main/scala/akka/actor/TypedActor.scala @@ -4,22 +4,25 @@ package akka.actor import language.existentials -import akka.japi.{ Creator, Option ⇒ JOption } -import java.lang.reflect.{ InvocationTargetException, Method, InvocationHandler, Proxy } -import akka.util.Timeout + import scala.util.control.NonFatal -import scala.concurrent.util.Duration +import scala.util.{ Try, Success, Failure } +import scala.collection.immutable +import scala.concurrent.duration.FiniteDuration +import scala.concurrent.duration.Duration +import scala.reflect.ClassTag import scala.concurrent.{ Await, Future } +import akka.japi.{ Creator, Option ⇒ JOption } +import akka.japi.Util.{ immutableSeq, immutableSingletonSeq } +import akka.util.Timeout import akka.util.Reflect.instantiator +import akka.serialization.{ JavaSerializer, SerializationExtension } import akka.dispatch._ import java.util.concurrent.atomic.{ AtomicReference ⇒ AtomVar } import java.util.concurrent.TimeoutException import java.util.concurrent.TimeUnit.MILLISECONDS -import scala.reflect.ClassTag -import akka.serialization.{ JavaSerializer, SerializationExtension } import java.io.ObjectStreamException -import scala.util.{ Try, Success, Failure } -import scala.concurrent.util.FiniteDuration +import java.lang.reflect.{ InvocationTargetException, Method, InvocationHandler, Proxy } /** * A TypedActorFactory is something that can created TypedActor instances. @@ -439,8 +442,8 @@ object TypedProps { * @return a sequence of interfaces that the specified class implements, * or a sequence containing only itself, if itself is an interface. 
*/ - def extractInterfaces(clazz: Class[_]): Seq[Class[_]] = - if (clazz.isInterface) Seq[Class[_]](clazz) else clazz.getInterfaces.toList + def extractInterfaces(clazz: Class[_]): immutable.Seq[Class[_]] = + if (clazz.isInterface) immutableSingletonSeq(clazz) else immutableSeq(clazz.getInterfaces) /** * Uses the supplied class as the factory for the TypedActor implementation, @@ -489,7 +492,7 @@ object TypedProps { */ @SerialVersionUID(1L) case class TypedProps[T <: AnyRef] protected[TypedProps] ( - interfaces: Seq[Class[_]], + interfaces: immutable.Seq[Class[_]], creator: () ⇒ T, dispatcher: String = TypedProps.defaultDispatcherId, deploy: Deploy = Props.defaultDeploy, @@ -607,8 +610,7 @@ class TypedActorExtension(system: ExtendedActorSystem) extends TypedActorFactory protected def actorFactory: ActorRefFactory = system protected def typedActor = this - val serialization = SerializationExtension(system) - val settings = system.settings + import system.settings /** * Default timeout for typed actor methods with non-void return type @@ -635,23 +637,18 @@ class TypedActorExtension(system: ExtendedActorSystem) extends TypedActorFactory private[akka] def createActorRefProxy[R <: AnyRef, T <: R](props: TypedProps[T], proxyVar: AtomVar[R], actorRef: ⇒ ActorRef): R = { //Warning, do not change order of the following statements, it's some elaborate chicken-n-egg handling val actorVar = new AtomVar[ActorRef](null) - val classLoader: ClassLoader = if (props.loader.nonEmpty) props.loader.get else props.interfaces.headOption.map(_.getClassLoader).orNull //If we have no loader, we arbitrarily take the loader of the first interface val proxy = Proxy.newProxyInstance( - classLoader, + (props.loader orElse props.interfaces.collectFirst { case any ⇒ any.getClassLoader }).orNull, //If we have no loader, we arbitrarily take the loader of the first interface props.interfaces.toArray, - new TypedActorInvocationHandler( - this, - actorVar, - if (props.timeout.isDefined) props.timeout.get else DefaultReturnTimeout)).asInstanceOf[R] + new TypedActorInvocationHandler(this, actorVar, props.timeout getOrElse DefaultReturnTimeout)).asInstanceOf[R] - proxyVar match { - case null ⇒ - actorVar.set(actorRef) - proxy - case _ ⇒ - proxyVar.set(proxy) // Chicken and egg situation we needed to solve, set the proxy so that we can set the self-reference inside each receive - actorVar.set(actorRef) //Make sure the InvocationHandler gets ahold of the actor reference, this is not a problem since the proxy hasn't escaped this method yet - proxyVar.get + if (proxyVar eq null) { + actorVar set actorRef + proxy + } else { + proxyVar set proxy // Chicken and egg situation we needed to solve, set the proxy so that we can set the self-reference inside each receive + actorVar set actorRef //Make sure the InvocationHandler gets ahold of the actor reference, this is not a problem since the proxy hasn't escaped this method yet + proxyVar.get } } diff --git a/akka-actor/src/main/scala/akka/actor/UntypedActor.scala b/akka-actor/src/main/scala/akka/actor/UntypedActor.scala index 015d8fb9e3..47ddf4fc43 100644 --- a/akka-actor/src/main/scala/akka/actor/UntypedActor.scala +++ b/akka-actor/src/main/scala/akka/actor/UntypedActor.scala @@ -36,7 +36,7 @@ import akka.japi.{ Creator } * } * } * - * private static SupervisorStrategy strategy = new OneForOneStrategy(10, Duration.parse("1 minute"), + * private static SupervisorStrategy strategy = new OneForOneStrategy(10, Duration.create("1 minute"), * new Function<Throwable, Directive>() { * @Override * public Directive 
apply(Throwable t) { diff --git a/akka-actor/src/main/scala/akka/actor/dsl/Creators.scala b/akka-actor/src/main/scala/akka/actor/dsl/Creators.scala index 29dda88300..a9515f3000 100644 --- a/akka-actor/src/main/scala/akka/actor/dsl/Creators.scala +++ b/akka-actor/src/main/scala/akka/actor/dsl/Creators.scala @@ -6,10 +6,8 @@ package akka.actor.dsl import scala.concurrent.Await import akka.actor.ActorLogging -import scala.concurrent.util.Deadline import scala.collection.immutable.TreeSet -import scala.concurrent.util.{ Duration, FiniteDuration } -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.Cancellable import akka.actor.{ Actor, Stash, SupervisorStrategy } import scala.collection.mutable.Queue @@ -31,7 +29,9 @@ trait Creators { this: ActorDSL.type ⇒ * for quickly trying things out in the REPL. It makes the following keywords * available: * - * - `become` mapped to `context.become(_, discardOld = false)` + * - `become` mapped to `context.become(_, discardOld = true)` + * + * - `becomeStacked` mapped to `context.become(_, discardOld = false)` * * - `unbecome` mapped to `context.unbecome` * @@ -89,7 +89,14 @@ trait Creators { this: ActorDSL.type ⇒ * stack is cleared upon restart. Use `unbecome()` to pop an element off * this stack. */ - def become(r: Receive) = context.become(r, discardOld = false) + def becomeStacked(r: Receive) = context.become(r, discardOld = false) + + /** + * Replace the behavior at the top of the behavior stack for this actor. The + * stack is cleared upon restart. Use `unbecome()` to pop an element off + * this stack or `becomeStacked()` to push a new element on top of it. + */ + def become(r: Receive) = context.become(r, discardOld = true) /** * Pop the active behavior from the behavior stack of this actor. This stack diff --git a/akka-actor/src/main/scala/akka/actor/dsl/Inbox.scala b/akka-actor/src/main/scala/akka/actor/dsl/Inbox.scala index 7b1a77bc71..418a035e53 100644 --- a/akka-actor/src/main/scala/akka/actor/dsl/Inbox.scala +++ b/akka-actor/src/main/scala/akka/actor/dsl/Inbox.scala @@ -6,10 +6,8 @@ package akka.actor.dsl import scala.concurrent.Await import akka.actor.ActorLogging -import scala.concurrent.util.Deadline import scala.collection.immutable.TreeSet -import scala.concurrent.util.{ Duration, FiniteDuration } -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.Cancellable import akka.actor.Actor import scala.collection.mutable.Queue @@ -129,10 +127,10 @@ trait Inbox { this: ActorDSL.type ⇒ val next = clientsByTimeout.head.deadline import context.dispatcher if (currentDeadline.isEmpty) { - currentDeadline = Some((next, context.system.scheduler.scheduleOnce(next.timeLeft.asInstanceOf[FiniteDuration], self, Kick))) + currentDeadline = Some((next, context.system.scheduler.scheduleOnce(next.timeLeft, self, Kick))) } else if (currentDeadline.get._1 != next) { currentDeadline.get._2.cancel() - currentDeadline = Some((next, context.system.scheduler.scheduleOnce(next.timeLeft.asInstanceOf[FiniteDuration], self, Kick))) + currentDeadline = Some((next, context.system.scheduler.scheduleOnce(next.timeLeft, self, Kick))) } } } @@ -169,7 +167,7 @@ trait Inbox { this: ActorDSL.type ⇒ * this method within an actor! */ def receive(timeout: FiniteDuration = defaultTimeout): Any = { - implicit val t = Timeout((timeout + extraTime).asInstanceOf[FiniteDuration]) + implicit val t = Timeout(timeout + extraTime) Await.result(receiver ? 
Get(Deadline.now + timeout), Duration.Inf) } @@ -186,7 +184,7 @@ trait Inbox { this: ActorDSL.type ⇒ * this method within an actor! */ def select[T](timeout: FiniteDuration = defaultTimeout)(predicate: PartialFunction[Any, T]): T = { - implicit val t = Timeout((timeout + extraTime).asInstanceOf[FiniteDuration]) + implicit val t = Timeout(timeout + extraTime) predicate(Await.result(receiver ? Select(Deadline.now + timeout, predicate), Duration.Inf)) } diff --git a/akka-actor/src/main/scala/akka/actor/dungeon/Children.scala b/akka-actor/src/main/scala/akka/actor/dungeon/Children.scala index 85dfd7095a..ba856206ea 100644 --- a/akka-actor/src/main/scala/akka/actor/dungeon/Children.scala +++ b/akka-actor/src/main/scala/akka/actor/dungeon/Children.scala @@ -5,14 +5,12 @@ package akka.actor.dungeon import scala.annotation.tailrec -import scala.collection.JavaConverters.asJavaIterableConverter import scala.util.control.NonFatal +import scala.collection.immutable import akka.actor._ -import akka.actor.ActorCell import akka.actor.ActorPath.ElementRegex import akka.serialization.SerializationExtension import akka.util.{ Unsafe, Helpers } -import akka.actor.ChildNameReserved private[akka] trait Children { this: ActorCell ⇒ @@ -24,8 +22,9 @@ private[akka] trait Children { this: ActorCell ⇒ def childrenRefs: ChildrenContainer = Unsafe.instance.getObjectVolatile(this, AbstractActorCell.childrenOffset).asInstanceOf[ChildrenContainer] - final def children: Iterable[ActorRef] = childrenRefs.children - final def getChildren(): java.lang.Iterable[ActorRef] = children.asJava + final def children: immutable.Iterable[ActorRef] = childrenRefs.children + final def getChildren(): java.lang.Iterable[ActorRef] = + scala.collection.JavaConverters.asJavaIterableConverter(children).asJava final def child(name: String): Option[ActorRef] = Option(getChild(name)) final def getChild(name: String): ActorRef = childrenRefs.getByName(name) match { @@ -53,19 +52,24 @@ private[akka] trait Children { this: ActorCell ⇒ } final def stop(actor: ActorRef): Unit = { - val started = actor match { - case r: RepointableRef ⇒ r.isStarted - case _ ⇒ true + if (childrenRefs.getByRef(actor).isDefined) { + @tailrec def shallDie(ref: ActorRef): Boolean = { + val c = childrenRefs + swapChildrenRefs(c, c.shallDie(ref)) || shallDie(ref) + } + + if (actor match { + case r: RepointableRef ⇒ r.isStarted + case _ ⇒ true + }) shallDie(actor) } - if (childrenRefs.getByRef(actor).isDefined && started) shallDie(actor) actor.asInstanceOf[InternalActorRef].stop() } /* * low level CAS helpers */ - - @inline private def swapChildrenRefs(oldChildren: ChildrenContainer, newChildren: ChildrenContainer): Boolean = + @inline private final def swapChildrenRefs(oldChildren: ChildrenContainer, newChildren: ChildrenContainer): Boolean = Unsafe.instance.compareAndSwapObject(this, AbstractActorCell.childrenOffset, oldChildren, newChildren) @tailrec final def reserveChild(name: String): Boolean = { @@ -90,18 +94,6 @@ private[akka] trait Children { this: ActorCell ⇒ } } - @tailrec final protected def shallDie(ref: ActorRef): Boolean = { - val c = childrenRefs - swapChildrenRefs(c, c.shallDie(ref)) || shallDie(ref) - } - - @tailrec final private def removeChild(ref: ActorRef): ChildrenContainer = { - val c = childrenRefs - val n = c.remove(ref) - if (swapChildrenRefs(c, n)) n - else removeChild(ref) - } - @tailrec final protected def setChildrenTerminationReason(reason: ChildrenContainer.SuspendReason): Boolean = { childrenRefs match { case c: 
ChildrenContainer.TerminatingChildrenContainer ⇒ @@ -141,13 +133,21 @@ private[akka] trait Children { this: ActorCell ⇒ protected def getChildByRef(ref: ActorRef): Option[ChildRestartStats] = childrenRefs.getByRef(ref) - protected def getAllChildStats: Iterable[ChildRestartStats] = childrenRefs.stats + protected def getAllChildStats: immutable.Iterable[ChildRestartStats] = childrenRefs.stats protected def removeChildAndGetStateChange(child: ActorRef): Option[SuspendReason] = { - childrenRefs match { + @tailrec def removeChild(ref: ActorRef): ChildrenContainer = { + val c = childrenRefs + val n = c.remove(ref) + if (swapChildrenRefs(c, n)) n else removeChild(ref) + } + + childrenRefs match { // The match must be performed BEFORE the removeChild case TerminatingChildrenContainer(_, _, reason) ⇒ - val newContainer = removeChild(child) - if (!newContainer.isInstanceOf[TerminatingChildrenContainer]) Some(reason) else None + removeChild(child) match { + case _: TerminatingChildrenContainer ⇒ None + case _ ⇒ Some(reason) + } case _ ⇒ removeChild(child) None @@ -192,6 +192,7 @@ private[akka] trait Children { this: ActorCell ⇒ // mailbox==null during RoutedActorCell constructor, where suspends are queued otherwise if (mailbox ne null) for (_ ← 1 to mailbox.suspendCount) actor.suspend() initChild(actor) + actor.start() actor } } diff --git a/akka-actor/src/main/scala/akka/actor/dungeon/ChildrenContainer.scala b/akka-actor/src/main/scala/akka/actor/dungeon/ChildrenContainer.scala index eeb28cf018..1fccbf8078 100644 --- a/akka-actor/src/main/scala/akka/actor/dungeon/ChildrenContainer.scala +++ b/akka-actor/src/main/scala/akka/actor/dungeon/ChildrenContainer.scala @@ -4,10 +4,11 @@ package akka.actor.dungeon -import scala.collection.immutable.TreeMap +import scala.collection.immutable import akka.actor.{ InvalidActorNameException, ChildStats, ChildRestartStats, ChildNameReserved, ActorRef } import akka.dispatch.SystemMessage +import akka.util.Collections.{ EmptyImmutableSeq, PartialImmutableValuesIterable } /** * INTERNAL API @@ -20,8 +21,8 @@ private[akka] trait ChildrenContainer { def getByName(name: String): Option[ChildStats] def getByRef(actor: ActorRef): Option[ChildRestartStats] - def children: Iterable[ActorRef] - def stats: Iterable[ChildRestartStats] + def children: immutable.Iterable[ActorRef] + def stats: immutable.Iterable[ChildRestartStats] def shallDie(actor: ActorRef): ChildrenContainer @@ -49,6 +50,18 @@ private[akka] object ChildrenContainer { case class Creation() extends SuspendReason with WaitingForChildren case object Termination extends SuspendReason + class ChildRestartsIterable(stats: immutable.MapLike[_, ChildStats, _]) extends PartialImmutableValuesIterable[ChildStats, ChildRestartStats] { + override final def apply(c: ChildStats) = c.asInstanceOf[ChildRestartStats] + override final def isDefinedAt(c: ChildStats) = c.isInstanceOf[ChildRestartStats] + override final def valuesIterator = stats.valuesIterator + } + + class ChildrenIterable(stats: immutable.MapLike[_, ChildStats, _]) extends PartialImmutableValuesIterable[ChildStats, ActorRef] { + override final def apply(c: ChildStats) = c.asInstanceOf[ChildRestartStats].child + override final def isDefinedAt(c: ChildStats) = c.isInstanceOf[ChildRestartStats] + override final def valuesIterator = stats.valuesIterator + } + trait WaitingForChildren { private var todo: SystemMessage = null def enqueue(message: SystemMessage) = { message.next = todo; todo = message } @@ -56,13 +69,13 @@ private[akka] object ChildrenContainer { } 
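The `stop` rewrite above inlines `shallDie` as a local `@tailrec` loop over `swapChildrenRefs`, the same compare-and-swap retry pattern used by `reserveChild` and `removeChild`. Here is a minimal, self-contained sketch of that pattern, using `AtomicReference` in place of the internal `Unsafe` field-offset machinery; the container type is a stand-in, not the real `ChildrenContainer`:

```scala
import java.util.concurrent.atomic.AtomicReference
import scala.annotation.tailrec

object CasRetrySketch {
  // Stand-in for ChildrenContainer: immutable, so a failed CAS simply
  // recomputes against the fresh snapshot and tries again.
  final case class Container(toDie: Set[String]) {
    def shallDie(ref: String): Container = copy(toDie = toDie + ref)
  }

  private val childrenRefs = new AtomicReference(Container(Set.empty))

  // Same shape as the loop inlined into stop() above: read the current
  // container, derive the new one, publish it with CAS, retry on contention.
  @tailrec def shallDie(ref: String): Boolean = {
    val c = childrenRefs.get()
    childrenRefs.compareAndSet(c, c.shallDie(ref)) || shallDie(ref)
  }
}
```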
trait EmptyChildrenContainer extends ChildrenContainer { - val emptyStats = TreeMap.empty[String, ChildStats] + val emptyStats = immutable.TreeMap.empty[String, ChildStats] override def add(name: String, stats: ChildRestartStats): ChildrenContainer = new NormalChildrenContainer(emptyStats.updated(name, stats)) override def remove(child: ActorRef): ChildrenContainer = this override def getByName(name: String): Option[ChildRestartStats] = None override def getByRef(actor: ActorRef): Option[ChildRestartStats] = None - override def children: Iterable[ActorRef] = Nil - override def stats: Iterable[ChildRestartStats] = Nil + override def children: immutable.Iterable[ActorRef] = EmptyImmutableSeq + override def stats: immutable.Iterable[ChildRestartStats] = EmptyImmutableSeq override def shallDie(actor: ActorRef): ChildrenContainer = this override def reserve(name: String): ChildrenContainer = new NormalChildrenContainer(emptyStats.updated(name, ChildNameReserved)) override def unreserve(name: String): ChildrenContainer = this @@ -95,7 +108,7 @@ private[akka] object ChildrenContainer { * calling context.stop(child) and processing the ChildTerminated() system * message). */ - class NormalChildrenContainer(val c: TreeMap[String, ChildStats]) extends ChildrenContainer { + class NormalChildrenContainer(val c: immutable.TreeMap[String, ChildStats]) extends ChildrenContainer { override def add(name: String, stats: ChildRestartStats): ChildrenContainer = new NormalChildrenContainer(c.updated(name, stats)) @@ -108,9 +121,11 @@ private[akka] object ChildrenContainer { case _ ⇒ None } - override def children: Iterable[ActorRef] = c.values.view.collect { case ChildRestartStats(child, _, _) ⇒ child } + override def children: immutable.Iterable[ActorRef] = + if (c.isEmpty) EmptyImmutableSeq else new ChildrenIterable(c) - override def stats: Iterable[ChildRestartStats] = c.values.view.collect { case c: ChildRestartStats ⇒ c } + override def stats: immutable.Iterable[ChildRestartStats] = + if (c.isEmpty) EmptyImmutableSeq else new ChildRestartsIterable(c) override def shallDie(actor: ActorRef): ChildrenContainer = TerminatingChildrenContainer(c, Set(actor), UserRequest) @@ -130,7 +145,7 @@ private[akka] object ChildrenContainer { } object NormalChildrenContainer { - def apply(c: TreeMap[String, ChildStats]): ChildrenContainer = + def apply(c: immutable.TreeMap[String, ChildStats]): ChildrenContainer = if (c.isEmpty) EmptyChildrenContainer else new NormalChildrenContainer(c) } @@ -145,7 +160,7 @@ private[akka] object ChildrenContainer { * type of container, depending on whether or not children are left and whether or not * the reason was “Terminating”. 
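`children` and `stats` above stop building `values.view.collect` results and instead hand out dedicated iterables over the map's values. `PartialImmutableValuesIterable` is internal API whose definition is not part of this diff, so the following is only a rough analogue of the idea: filter and map the values on each iteration, without materialising an intermediate collection per call:

```scala
import scala.collection.immutable

object ChildrenIterableSketch {
  sealed trait ChildStats
  case object ChildNameReserved extends ChildStats
  final case class ChildRestartStats(child: String) extends ChildStats

  // Rough analogue of ChildrenIterable above: reserved names carry no
  // ChildRestartStats yet, so they are skipped while iterating.
  final class ChildrenIterable(stats: immutable.Map[String, ChildStats])
    extends immutable.Iterable[String] {
    def iterator: Iterator[String] =
      stats.valuesIterator.collect { case ChildRestartStats(child) => child }
  }

  val m = immutable.TreeMap[String, ChildStats](
    "a" -> ChildRestartStats("actorA"),
    "b" -> ChildNameReserved)
  // Only real children survive the filter:
  assert(new ChildrenIterable(m).toList == List("actorA"))
}
```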
*/ - case class TerminatingChildrenContainer(c: TreeMap[String, ChildStats], toDie: Set[ActorRef], reason: SuspendReason) + case class TerminatingChildrenContainer(c: immutable.TreeMap[String, ChildStats], toDie: Set[ActorRef], reason: SuspendReason) extends ChildrenContainer { override def add(name: String, stats: ChildRestartStats): ChildrenContainer = copy(c.updated(name, stats)) @@ -166,9 +181,11 @@ private[akka] object ChildrenContainer { case _ ⇒ None } - override def children: Iterable[ActorRef] = c.values.view.collect { case ChildRestartStats(child, _, _) ⇒ child } + override def children: immutable.Iterable[ActorRef] = + if (c.isEmpty) EmptyImmutableSeq else new ChildrenIterable(c) - override def stats: Iterable[ChildRestartStats] = c.values.view.collect { case c: ChildRestartStats ⇒ c } + override def stats: immutable.Iterable[ChildRestartStats] = + if (c.isEmpty) EmptyImmutableSeq else new ChildRestartsIterable(c) override def shallDie(actor: ActorRef): ChildrenContainer = copy(toDie = toDie + actor) diff --git a/akka-actor/src/main/scala/akka/actor/dungeon/Dispatch.scala b/akka-actor/src/main/scala/akka/actor/dungeon/Dispatch.scala index 1b9476acb1..469aac78c2 100644 --- a/akka-actor/src/main/scala/akka/actor/dungeon/Dispatch.scala +++ b/akka-actor/src/main/scala/akka/actor/dungeon/Dispatch.scala @@ -38,12 +38,11 @@ private[akka] trait Dispatch { this: ActorCell ⇒ final def isTerminated: Boolean = mailbox.isClosed /** - * Start this cell, i.e. attach it to the dispatcher. The UID must reasonably - * be different from the previous UID of a possible actor with the same path, + * Initialize this cell, i.e. set up mailboxes and supervision. The UID must be + * reasonably different from the previous UID of a possible actor with the same path, * which can be achieved by using ThreadLocalRandom.current.nextInt(). */ - final def start(sendSupervise: Boolean, uid: Int): this.type = { - + final def init(uid: Int, sendSupervise: Boolean): this.type = { /* * Create the mailbox and enqueue the Create() message to ensure that * this is processed before anything else. @@ -56,13 +55,18 @@ private[akka] trait Dispatch { this: ActorCell ⇒ if (sendSupervise) { // ➡➡➡ NEVER SEND THE SAME SYSTEM MESSAGE OBJECT TO TWO ACTORS ⬅⬅⬅ - parent.sendSystemMessage(akka.dispatch.Supervise(self, uid)) + parent.sendSystemMessage(akka.dispatch.Supervise(self, async = false, uid)) parent ! NullMessage // read ScalaDoc of NullMessage to see why } + this + } + /** + * Start this cell, i.e. attach it to the dispatcher. + */ + final def start(): this.type = { // This call is expected to start off the actor by scheduling its mailbox. 
dispatcher.attach(this) - this } diff --git a/akka-actor/src/main/scala/akka/actor/dungeon/FaultHandling.scala b/akka-actor/src/main/scala/akka/actor/dungeon/FaultHandling.scala index a42d15c6f5..ac4f5b5c36 100644 --- a/akka-actor/src/main/scala/akka/actor/dungeon/FaultHandling.scala +++ b/akka-actor/src/main/scala/akka/actor/dungeon/FaultHandling.scala @@ -10,13 +10,13 @@ import akka.dispatch._ import akka.event.Logging.{ Warning, Error, Debug } import scala.util.control.NonFatal import akka.event.Logging -import scala.Some +import scala.collection.immutable import akka.dispatch.ChildTerminated import akka.actor.PreRestartException import akka.actor.Failed import akka.actor.PostRestartException import akka.event.Logging.Debug -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration private[akka] trait FaultHandling { this: ActorCell ⇒ @@ -160,7 +160,7 @@ private[akka] trait FaultHandling { this: ActorCell ⇒ } } - final def handleInvokeFailure(childrenNotToSuspend: Iterable[ActorRef], t: Throwable, message: String): Unit = { + final def handleInvokeFailure(childrenNotToSuspend: immutable.Iterable[ActorRef], t: Throwable, message: String): Unit = { publish(Error(t, self.path.toString, clazz(actor), message)) // prevent any further messages to be processed until the actor has been restarted if (!isFailed) try { diff --git a/akka-actor/src/main/scala/akka/actor/dungeon/ReceiveTimeout.scala b/akka-actor/src/main/scala/akka/actor/dungeon/ReceiveTimeout.scala index 0c3661b59a..5e1e4465eb 100644 --- a/akka-actor/src/main/scala/akka/actor/dungeon/ReceiveTimeout.scala +++ b/akka-actor/src/main/scala/akka/actor/dungeon/ReceiveTimeout.scala @@ -8,8 +8,8 @@ import ReceiveTimeout.emptyReceiveTimeoutData import akka.actor.ActorCell import akka.actor.ActorCell.emptyCancellable import akka.actor.Cancellable -import scala.concurrent.util.Duration -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.Duration +import scala.concurrent.duration.FiniteDuration private[akka] object ReceiveTimeout { final val emptyReceiveTimeoutData: (Duration, Cancellable) = (Duration.Undefined, ActorCell.emptyCancellable) diff --git a/akka-actor/src/main/scala/akka/dispatch/AbstractDispatcher.scala b/akka-actor/src/main/scala/akka/dispatch/AbstractDispatcher.scala index 23fa51bb76..8f13e5fa11 100644 --- a/akka-actor/src/main/scala/akka/dispatch/AbstractDispatcher.scala +++ b/akka-actor/src/main/scala/akka/dispatch/AbstractDispatcher.scala @@ -13,10 +13,10 @@ import akka.serialization.SerializationExtension import akka.util.{ Unsafe, Index } import scala.annotation.tailrec import scala.concurrent.forkjoin.{ ForkJoinTask, ForkJoinPool } -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import scala.concurrent.{ ExecutionContext, Await, Awaitable } import scala.util.control.NonFatal -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.FiniteDuration final case class Envelope private (val message: Any, val sender: ActorRef) @@ -108,7 +108,7 @@ private[akka] case class Terminate() extends SystemMessage // sent to self from /** * INTERNAL API */ -private[akka] case class Supervise(child: ActorRef, uid: Int) extends SystemMessage // sent to supervisor ActorRef from ActorCell.start +private[akka] case class Supervise(child: ActorRef, async: Boolean, uid: Int) extends SystemMessage // sent to supervisor ActorRef from ActorCell.start /** * INTERNAL API */ @@ -288,7 +288,7 @@ abstract class MessageDispatcher(val prerequisites: 
DispatcherPrerequisites) ext if (debug) actors.remove(this, actor.self) addInhabitants(-1) val mailBox = actor.swapMailbox(deadLetterMailbox) - mailBox.becomeClosed() // FIXME reschedule in tell if possible race with cleanUp is detected in order to properly clean up + mailBox.becomeClosed() mailBox.cleanUp() } @@ -420,7 +420,7 @@ abstract class MessageDispatcherConfigurator(val config: Config, val prerequisit case "unbounded" ⇒ UnboundedMailbox() case "bounded" ⇒ new BoundedMailbox(prerequisites.settings, config) case fqcn ⇒ - val args = Seq(classOf[ActorSystem.Settings] -> prerequisites.settings, classOf[Config] -> config) + val args = List(classOf[ActorSystem.Settings] -> prerequisites.settings, classOf[Config] -> config) prerequisites.dynamicAccess.createInstanceFor[MailboxType](fqcn, args).recover({ case exception ⇒ throw new IllegalArgumentException( @@ -436,7 +436,7 @@ abstract class MessageDispatcherConfigurator(val config: Config, val prerequisit case null | "" | "fork-join-executor" ⇒ new ForkJoinExecutorConfigurator(config.getConfig("fork-join-executor"), prerequisites) case "thread-pool-executor" ⇒ new ThreadPoolExecutorConfigurator(config.getConfig("thread-pool-executor"), prerequisites) case fqcn ⇒ - val args = Seq( + val args = List( classOf[Config] -> config, classOf[DispatcherPrerequisites] -> prerequisites) prerequisites.dynamicAccess.createInstanceFor[ExecutorServiceConfigurator](fqcn, args).recover({ diff --git a/akka-actor/src/main/scala/akka/dispatch/BalancingDispatcher.scala b/akka-actor/src/main/scala/akka/dispatch/BalancingDispatcher.scala index c90048c80b..6efb5771ef 100644 --- a/akka-actor/src/main/scala/akka/dispatch/BalancingDispatcher.scala +++ b/akka-actor/src/main/scala/akka/dispatch/BalancingDispatcher.scala @@ -6,12 +6,12 @@ package akka.dispatch import akka.actor.{ ActorCell, ActorRef } import scala.annotation.tailrec -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import akka.util.Helpers import java.util.{ Comparator, Iterator } import java.util.concurrent.{ Executor, LinkedBlockingQueue, ConcurrentLinkedQueue, ConcurrentSkipListSet } import akka.actor.ActorSystemImpl -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.FiniteDuration /** * An executor based event driven dispatcher which will try to redistribute work from busy actors to idle actors. 
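The configurator hunks above also tighten the constructor-argument pairs handed to `DynamicAccess.createInstanceFor` from `Seq` to `List`, in line with the PR-wide move to explicitly immutable collections. As a rough sketch of what that reflective lookup does (an assumption: the real `DynamicAccess` additionally deals with classloaders and wraps failures in more specific exceptions):

```scala
import scala.util.Try

object ReflectiveCreateSketch {
  // Resolve a class by fully qualified class name (fqcn) and invoke the
  // constructor whose parameter types match the supplied (type, value) pairs.
  def createInstanceFor[T](fqcn: String, args: List[(Class[_], AnyRef)]): Try[T] =
    Try {
      val clazz = Class.forName(fqcn)
      val constructor = clazz.getDeclaredConstructor(args.map(_._1): _*)
      constructor.setAccessible(true)
      constructor.newInstance(args.map(_._2): _*).asInstanceOf[T]
    }
}
```

A mailbox configurator would then invoke it with `List(classOf[ActorSystem.Settings] -> settings, classOf[Config] -> config)`, matching the hunk above.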
It is assumed diff --git a/akka-actor/src/main/scala/akka/dispatch/Dispatcher.scala b/akka-actor/src/main/scala/akka/dispatch/Dispatcher.scala index 96166022f8..6577e217a1 100644 --- a/akka-actor/src/main/scala/akka/dispatch/Dispatcher.scala +++ b/akka-actor/src/main/scala/akka/dispatch/Dispatcher.scala @@ -10,9 +10,9 @@ import akka.event.Logging import java.util.concurrent.atomic.AtomicReference import java.util.concurrent.{ ExecutorService, RejectedExecutionException } import scala.concurrent.forkjoin.ForkJoinPool -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import scala.concurrent.Awaitable -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.FiniteDuration /** * The event-based ``Dispatcher`` binds a set of Actors to a thread pool backed up by a @@ -93,7 +93,7 @@ class Dispatcher( */ protected[akka] def shutdown: Unit = { val newDelegate = executorServiceDelegate.copy() // Doesn't matter which one we copy - val es = synchronized { // FIXME getAndSet using ARFU or Unsafe + val es = synchronized { val service = executorServiceDelegate executorServiceDelegate = newDelegate // just a quick getAndSet service diff --git a/akka-actor/src/main/scala/akka/dispatch/Dispatchers.scala b/akka-actor/src/main/scala/akka/dispatch/Dispatchers.scala index 125c400bb6..910a5ceed5 100644 --- a/akka-actor/src/main/scala/akka/dispatch/Dispatchers.scala +++ b/akka-actor/src/main/scala/akka/dispatch/Dispatchers.scala @@ -9,7 +9,7 @@ import com.typesafe.config.{ ConfigFactory, Config } import akka.actor.{ Scheduler, DynamicAccess, ActorSystem } import akka.event.Logging.Warning import akka.event.EventStream -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration /** * DispatcherPrerequisites represents useful contextual pieces when constructing a MessageDispatcher @@ -147,7 +147,7 @@ class Dispatchers(val settings: ActorSystem.Settings, val prerequisites: Dispatc case "BalancingDispatcher" ⇒ new BalancingDispatcherConfigurator(cfg, prerequisites) case "PinnedDispatcher" ⇒ new PinnedDispatcherConfigurator(cfg, prerequisites) case fqn ⇒ - val args = Seq(classOf[Config] -> cfg, classOf[DispatcherPrerequisites] -> prerequisites) + val args = List(classOf[Config] -> cfg, classOf[DispatcherPrerequisites] -> prerequisites) prerequisites.dynamicAccess.createInstanceFor[MessageDispatcherConfigurator](fqn, args).recover({ case exception ⇒ throw new IllegalArgumentException( diff --git a/akka-actor/src/main/scala/akka/dispatch/Future.scala b/akka-actor/src/main/scala/akka/dispatch/Future.scala index 113215cd23..a7c964b750 100644 --- a/akka-actor/src/main/scala/akka/dispatch/Future.scala +++ b/akka-actor/src/main/scala/akka/dispatch/Future.scala @@ -68,7 +68,7 @@ object ExecutionContexts { * Futures is the Java API for Futures and Promises */ object Futures { - + import scala.collection.JavaConverters.iterableAsScalaIterableConverter /** * Java API, equivalent to Future.apply */ @@ -95,7 +95,7 @@ object Futures { */ def find[T <: AnyRef](futures: JIterable[Future[T]], predicate: JFunc[T, java.lang.Boolean], executor: ExecutionContext): Future[JOption[T]] = { implicit val ec = executor - Future.find[T]((scala.collection.JavaConversions.iterableAsScalaIterable(futures)))(predicate.apply(_))(executor).map(JOption.fromScalaOption(_)) + Future.find[T](futures.asScala)(predicate.apply(_))(executor) map JOption.fromScalaOption } /** @@ -103,7 +103,7 @@ object Futures { * Returns a Future to the result of the first future in the list that is 
completed */ def firstCompletedOf[T <: AnyRef](futures: JIterable[Future[T]], executor: ExecutionContext): Future[T] = - Future.firstCompletedOf(scala.collection.JavaConversions.iterableAsScalaIterable(futures))(executor) + Future.firstCompletedOf(futures.asScala)(executor) /** * Java API @@ -113,14 +113,14 @@ object Futures { * or the result of the fold. */ def fold[T <: AnyRef, R <: AnyRef](zero: R, futures: JIterable[Future[T]], fun: akka.japi.Function2[R, T, R], executor: ExecutionContext): Future[R] = - Future.fold(scala.collection.JavaConversions.iterableAsScalaIterable(futures))(zero)(fun.apply)(executor) + Future.fold(futures.asScala)(zero)(fun.apply)(executor) /** * Java API. * Reduces the results of the supplied futures and binary function. */ def reduce[T <: AnyRef, R >: T](futures: JIterable[Future[T]], fun: akka.japi.Function2[R, T, R], executor: ExecutionContext): Future[R] = - Future.reduce[T, R](scala.collection.JavaConversions.iterableAsScalaIterable(futures))(fun.apply)(executor) + Future.reduce[T, R](futures.asScala)(fun.apply)(executor) /** * Java API. @@ -129,9 +129,7 @@ object Futures { */ def sequence[A](in: JIterable[Future[A]], executor: ExecutionContext): Future[JIterable[A]] = { implicit val d = executor - scala.collection.JavaConversions.iterableAsScalaIterable(in).foldLeft(Future(new JLinkedList[A]())) { (fr, fa) ⇒ - for (r ← fr; a ← fa) yield { r add a; r } - } + in.asScala.foldLeft(Future(new JLinkedList[A]())) { (fr, fa) ⇒ for (r ← fr; a ← fa) yield { r add a; r } } } /** @@ -142,7 +140,7 @@ object Futures { */ def traverse[A, B](in: JIterable[A], fn: JFunc[A, Future[B]], executor: ExecutionContext): Future[JIterable[B]] = { implicit val d = executor - scala.collection.JavaConversions.iterableAsScalaIterable(in).foldLeft(Future(new JLinkedList[B]())) { (fr, a) ⇒ + in.asScala.foldLeft(Future(new JLinkedList[B]())) { (fr, a) ⇒ val fb = fn(a) for (r ← fr; b ← fb) yield { r add b; r } } diff --git a/akka-actor/src/main/scala/akka/dispatch/Mailbox.scala b/akka-actor/src/main/scala/akka/dispatch/Mailbox.scala index 8d9a553ffb..d17ad5b7b6 100644 --- a/akka-actor/src/main/scala/akka/dispatch/Mailbox.scala +++ b/akka-actor/src/main/scala/akka/dispatch/Mailbox.scala @@ -9,11 +9,11 @@ import akka.AkkaException import akka.actor.{ ActorCell, ActorRef, Cell, ActorSystem, InternalActorRef, DeadLetter } import akka.util.{ Unsafe, BoundedBlockingQueue } import akka.event.Logging.Error -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import scala.annotation.tailrec import scala.util.control.NonFatal import com.typesafe.config.Config -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.FiniteDuration /** * INTERNAL API diff --git a/akka-actor/src/main/scala/akka/dispatch/PinnedDispatcher.scala b/akka-actor/src/main/scala/akka/dispatch/PinnedDispatcher.scala index af421ddb96..52d5587597 100644 --- a/akka-actor/src/main/scala/akka/dispatch/PinnedDispatcher.scala +++ b/akka-actor/src/main/scala/akka/dispatch/PinnedDispatcher.scala @@ -5,8 +5,8 @@ package akka.dispatch import akka.actor.ActorCell -import scala.concurrent.util.Duration -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.Duration +import scala.concurrent.duration.FiniteDuration /** * Dedicates a unique thread for each actor passed in as reference. Served through its messageQueue. 
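The `Futures` hunks above replace implicit `JavaConversions` with explicit `JavaConverters` calls; the fold in `sequence` is worth seeing on its own. A compilable sketch of the same pattern, using the 2.10-era `JavaConverters` as the diff does:

```scala
import java.lang.{ Iterable => JIterable }
import java.util.{ LinkedList => JLinkedList }
import scala.collection.JavaConverters.iterableAsScalaIterableConverter
import scala.concurrent.{ ExecutionContext, Future }

object SequenceSketch {
  // Thread one mutable Java list through a chain of flatMaps, appending
  // each future's result as it completes; the final future holds them all.
  def sequence[A](in: JIterable[Future[A]])(implicit ec: ExecutionContext): Future[JIterable[A]] =
    in.asScala.foldLeft(Future(new JLinkedList[A]())) { (fr, fa) =>
      for (r <- fr; a <- fa) yield { r.add(a); r }
    }
}
```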
diff --git a/akka-actor/src/main/scala/akka/dispatch/ThreadPoolBuilder.scala b/akka-actor/src/main/scala/akka/dispatch/ThreadPoolBuilder.scala index 67b0aa33a5..9d06a7b74c 100644 --- a/akka-actor/src/main/scala/akka/dispatch/ThreadPoolBuilder.scala +++ b/akka-actor/src/main/scala/akka/dispatch/ThreadPoolBuilder.scala @@ -6,7 +6,7 @@ package akka.dispatch import java.util.Collection import scala.concurrent.{ Awaitable, BlockContext, CanAwait } -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import scala.concurrent.forkjoin._ import java.util.concurrent.{ ArrayBlockingQueue, diff --git a/akka-actor/src/main/scala/akka/event/EventBus.scala b/akka-actor/src/main/scala/akka/event/EventBus.scala index cb83fbe806..6e3e25e42c 100644 --- a/akka-actor/src/main/scala/akka/event/EventBus.scala +++ b/akka-actor/src/main/scala/akka/event/EventBus.scala @@ -10,6 +10,7 @@ import java.util.concurrent.ConcurrentSkipListSet import java.util.Comparator import akka.util.{ Subclassification, SubclassifiedIndex } import scala.collection.immutable.TreeSet +import scala.collection.immutable /** * Represents the base type for EventBuses @@ -167,12 +168,12 @@ trait SubchannelClassification { this: EventBus ⇒ recv foreach (publish(event, _)) } - private def removeFromCache(changes: Seq[(Classifier, Set[Subscriber])]): Unit = + private def removeFromCache(changes: immutable.Seq[(Classifier, Set[Subscriber])]): Unit = cache = (cache /: changes) { case (m, (c, cs)) ⇒ m.updated(c, m.getOrElse(c, Set.empty[Subscriber]) -- cs) } - private def addToCache(changes: Seq[(Classifier, Set[Subscriber])]): Unit = + private def addToCache(changes: immutable.Seq[(Classifier, Set[Subscriber])]): Unit = cache = (cache /: changes) { case (m, (c, cs)) ⇒ m.updated(c, m.getOrElse(c, Set.empty[Subscriber]) ++ cs) } @@ -265,9 +266,9 @@ trait ActorClassification { this: ActorEventBus with ActorClassifier ⇒ } } - protected final def dissociate(monitored: ActorRef): Iterable[ActorRef] = { + protected final def dissociate(monitored: ActorRef): immutable.Iterable[ActorRef] = { @tailrec - def dissociateAsMonitored(monitored: ActorRef): Iterable[ActorRef] = { + def dissociateAsMonitored(monitored: ActorRef): immutable.Iterable[ActorRef] = { val current = mappings get monitored current match { case null ⇒ empty diff --git a/akka-actor/src/main/scala/akka/event/Logging.scala b/akka-actor/src/main/scala/akka/event/Logging.scala index 48d7c44d5d..14ba99bcaa 100644 --- a/akka-actor/src/main/scala/akka/event/Logging.scala +++ b/akka-actor/src/main/scala/akka/event/Logging.scala @@ -9,12 +9,13 @@ import akka.actor._ import akka.{ ConfigurationException, AkkaException } import akka.actor.ActorSystem.Settings import akka.util.{ Timeout, ReentrantGuard } -import scala.concurrent.util.duration._ import java.util.concurrent.atomic.AtomicInteger -import scala.util.control.NoStackTrace import java.util.concurrent.TimeoutException +import scala.annotation.implicitNotFound +import scala.collection.immutable +import scala.concurrent.duration._ import scala.concurrent.Await -import annotation.implicitNotFound +import scala.util.control.NoStackTrace /** * This trait brings log level handling to the EventStream: it reads the log @@ -448,7 +449,7 @@ object Logging { } // these type ascriptions/casts are necessary to avoid CCEs during construction while retaining correct type - val AllLogLevels: Seq[LogLevel] = Seq(ErrorLevel, WarningLevel, InfoLevel, DebugLevel) + val AllLogLevels: immutable.Seq[LogLevel] = Vector(ErrorLevel, 
WarningLevel, InfoLevel, DebugLevel) /** * Obtain LoggingAdapter for the given actor system and source object. This @@ -708,7 +709,7 @@ object Logging { val path: ActorPath = new RootActorPath(Address("akka", "all-systems"), "/StandardOutLogger") def provider: ActorRefProvider = throw new UnsupportedOperationException("StandardOutLogger does not provide") override val toString = "StandardOutLogger" - override def !(message: Any)(implicit sender: ActorRef = null): Unit = print(message) + override def !(message: Any)(implicit sender: ActorRef = Actor.noSender): Unit = print(message) } val StandardOutLogger = new StandardOutLogger @@ -877,15 +878,25 @@ class BusLogging(val bus: LoggingBus, val logSource: String, val logClass: Class protected def notifyDebug(message: String): Unit = bus.publish(Debug(logSource, logClass, message)) } -private[akka] object NoLogging extends LoggingAdapter { - def isErrorEnabled = false - def isWarningEnabled = false - def isInfoEnabled = false - def isDebugEnabled = false +/** + * NoLogging is a LoggingAdapter that does absolutely nothing – no logging at all. + */ +object NoLogging extends LoggingAdapter { - protected def notifyError(message: String): Unit = () - protected def notifyError(cause: Throwable, message: String): Unit = () - protected def notifyWarning(message: String): Unit = () - protected def notifyInfo(message: String): Unit = () - protected def notifyDebug(message: String): Unit = () + /** + * Java API to return the reference to NoLogging + * @return The NoLogging instance + */ + def getInstance = this + + final override def isErrorEnabled = false + final override def isWarningEnabled = false + final override def isInfoEnabled = false + final override def isDebugEnabled = false + + final protected override def notifyError(message: String): Unit = () + final protected override def notifyError(cause: Throwable, message: String): Unit = () + final protected override def notifyWarning(message: String): Unit = () + final protected override def notifyInfo(message: String): Unit = () + final protected override def notifyDebug(message: String): Unit = () } diff --git a/akka-actor/src/main/scala/akka/japi/JavaAPI.scala b/akka-actor/src/main/scala/akka/japi/JavaAPI.scala index 642600f2bd..87bb338b0f 100644 --- a/akka-actor/src/main/scala/akka/japi/JavaAPI.scala +++ b/akka-actor/src/main/scala/akka/japi/JavaAPI.scala @@ -5,10 +5,13 @@ package akka.japi import language.implicitConversions -import scala.Some + +import scala.collection.immutable import scala.reflect.ClassTag import scala.util.control.NoStackTrace import scala.runtime.AbstractPartialFunction +import akka.util.Collections.EmptyImmutableSeq +import java.util.Collections.{ emptyList, singletonList } /** * A Function interface. Used to create first-class-functions in Java. */ @@ -114,13 +117,11 @@ abstract class JavaPartialFunction[A, B] extends AbstractPartialFunction[A, B] { * Java API */ sealed abstract class Option[A] extends java.lang.Iterable[A] { - import scala.collection.JavaConversions._ - def get: A def isEmpty: Boolean def isDefined: Boolean = !isEmpty def asScala: scala.Option[A] - def iterator: java.util.Iterator[A] = if (isEmpty) Iterator.empty else Iterator.single(get) + def iterator: java.util.Iterator[A] = if (isEmpty) emptyList[A].iterator else singletonList(get).iterator } object Option { @@ -175,9 +176,40 @@ object Option { * This class holds common utilities for Java */ object Util { + + /** + * Returns a ClassTag describing the provided Class. 
+ * + * Java API + */ def classTag[T](clazz: Class[T]): ClassTag[T] = ClassTag(clazz) - def arrayToSeq[T](arr: Array[T]): Seq[T] = arr.toSeq + /** + * Returns an immutable.Seq representing the provided array of Classes, + * an overloading of the generic immutableSeq in Util, to accommodate for erasure. + * + * Java API + */ + def immutableSeq(arr: Array[Class[_]]): immutable.Seq[Class[_]] = immutableSeq[Class[_]](arr) - def arrayToSeq(classes: Array[Class[_]]): Seq[Class[_]] = classes.toSeq + /** + * + */ + def immutableSeq[T](arr: Array[T]): immutable.Seq[T] = if ((arr ne null) && arr.length > 0) Vector(arr: _*) else Nil + + def immutableSeq[T](iterable: java.lang.Iterable[T]): immutable.Seq[T] = + iterable match { + case imm: immutable.Seq[_] ⇒ imm.asInstanceOf[immutable.Seq[T]] + case other ⇒ + val i = other.iterator() + if (i.hasNext) { + val builder = new immutable.VectorBuilder[T] + + do { builder += i.next() } while (i.hasNext) + + builder.result() + } else EmptyImmutableSeq + } + + def immutableSingletonSeq[T](value: T): immutable.Seq[T] = value :: Nil } diff --git a/akka-actor/src/main/scala/akka/pattern/AskSupport.scala b/akka-actor/src/main/scala/akka/pattern/AskSupport.scala index 704ce43d8d..2ff45b0290 100644 --- a/akka-actor/src/main/scala/akka/pattern/AskSupport.scala +++ b/akka-actor/src/main/scala/akka/pattern/AskSupport.scala @@ -247,7 +247,7 @@ private[akka] final class PromiseActorRef private (val provider: ActorRefProvide case Registering ⇒ path // spin until registration is completed } - override def !(message: Any)(implicit sender: ActorRef = null): Unit = state match { + override def !(message: Any)(implicit sender: ActorRef = Actor.noSender): Unit = state match { case Stopped | _: StoppedWithPath ⇒ provider.deadLetters ! message case _ ⇒ if (!(result.tryComplete( message match { @@ -281,7 +281,7 @@ private[akka] final class PromiseActorRef private (val provider: ActorRefProvide val watchers = clearWatchers() if (!watchers.isEmpty) { val termination = Terminated(this)(existenceConfirmed = true, addressTerminated = false) - watchers foreach { w ⇒ try w.tell(termination, this) catch { case NonFatal(t) ⇒ /* FIXME LOG THIS */ } } + watchers foreach { _.tell(termination, this) } } } state match { diff --git a/akka-actor/src/main/scala/akka/pattern/CircuitBreaker.scala b/akka-actor/src/main/scala/akka/pattern/CircuitBreaker.scala index df228f821d..8a423c12b3 100644 --- a/akka-actor/src/main/scala/akka/pattern/CircuitBreaker.scala +++ b/akka-actor/src/main/scala/akka/pattern/CircuitBreaker.scala @@ -10,8 +10,7 @@ import akka.util.Unsafe import scala.util.control.NoStackTrace import java.util.concurrent.{ Callable, CopyOnWriteArrayList } import scala.concurrent.{ ExecutionContext, Future, Promise, Await } -import scala.concurrent.util.{ FiniteDuration, Deadline } -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import scala.util.control.NonFatal import scala.util.Success @@ -35,8 +34,8 @@ object CircuitBreaker { * * @param scheduler Reference to Akka scheduler * @param maxFailures Maximum number of failures before opening the circuit - * @param callTimeout [[scala.concurrent.util.Duration]] of time after which to consider a call a failure - * @param resetTimeout [[scala.concurrent.util.Duration]] of time after which to attempt to close the circuit + * @param callTimeout [[scala.concurrent.duration.FiniteDuration]] of time after which to consider a call a failure + * @param resetTimeout [[scala.concurrent.duration.FiniteDuration]] of time after 
which to attempt to close the circuit */ def apply(scheduler: Scheduler, maxFailures: Int, callTimeout: FiniteDuration, resetTimeout: FiniteDuration): CircuitBreaker = new CircuitBreaker(scheduler, maxFailures, callTimeout, resetTimeout)(syncExecutionContext) @@ -49,8 +48,8 @@ object CircuitBreaker { * * @param scheduler Reference to Akka scheduler * @param maxFailures Maximum number of failures before opening the circuit - * @param callTimeout [[scala.concurrent.util.Duration]] of time after which to consider a call a failure - * @param resetTimeout [[scala.concurrent.util.Duration]] of time after which to attempt to close the circuit + * @param callTimeout [[scala.concurrent.duration.FiniteDuration]] of time after which to consider a call a failure + * @param resetTimeout [[scala.concurrent.duration.FiniteDuration]] of time after which to attempt to close the circuit */ def create(scheduler: Scheduler, maxFailures: Int, callTimeout: FiniteDuration, resetTimeout: FiniteDuration): CircuitBreaker = apply(scheduler, maxFailures, callTimeout, resetTimeout) @@ -72,8 +71,8 @@ object CircuitBreaker { * * @param scheduler Reference to Akka scheduler * @param maxFailures Maximum number of failures before opening the circuit - * @param callTimeout [[scala.concurrent.util.Duration]] of time after which to consider a call a failure - * @param resetTimeout [[scala.concurrent.util.Duration]] of time after which to attempt to close the circuit + * @param callTimeout [[scala.concurrent.duration.FiniteDuration]] of time after which to consider a call a failure + * @param resetTimeout [[scala.concurrent.duration.FiniteDuration]] of time after which to attempt to close the circuit * @param executor [[scala.concurrent.ExecutionContext]] used for execution of state transition listeners */ class CircuitBreaker(scheduler: Scheduler, maxFailures: Int, callTimeout: FiniteDuration, resetTimeout: FiniteDuration)(implicit executor: ExecutionContext) extends AbstractCircuitBreaker { @@ -156,22 +155,20 @@ class CircuitBreaker(scheduler: Scheduler, maxFailures: Int, callTimeout: Finite * The callback is run in the [[scala.concurrent.ExecutionContext]] supplied in the constructor. * * @param callback Handler to be invoked on state change - * @tparam T Type supplied to assist with type inference, otherwise ignored by implementation * @return CircuitBreaker for fluent usage */ - def onOpen[T](callback: ⇒ T): CircuitBreaker = { - Open.addListener(() ⇒ callback) - this - } + def onOpen(callback: ⇒ Unit): CircuitBreaker = onOpen(new Runnable { def run = callback }) /** * Java API for onOpen * * @param callback Handler to be invoked on state change - * @tparam T Type supplied to assist with type inference, otherwise ignored by implementation * @return CircuitBreaker for fluent usage */ - def onOpen[T](callback: Callable[T]): CircuitBreaker = onOpen(callback.call) + def onOpen(callback: Runnable): CircuitBreaker = { + Open addListener callback + this + } /** * Adds a callback to execute when circuit breaker transitions to half-open @@ -179,22 +176,20 @@ class CircuitBreaker(scheduler: Scheduler, maxFailures: Int, callTimeout: Finite * The callback is run in the [[scala.concurrent.ExecutionContext]] supplied in the constructor. 
* * @param callback Handler to be invoked on state change - * @tparam T Type supplied to assist with type inference, otherwise ignored by implementation * @return CircuitBreaker for fluent usage */ - def onHalfOpen[T](callback: ⇒ T): CircuitBreaker = { - HalfOpen.addListener(() ⇒ callback) - this - } + def onHalfOpen(callback: ⇒ Unit): CircuitBreaker = onHalfOpen(new Runnable { def run = callback }) /** * Java API for onHalfOpen * * @param callback Handler to be invoked on state change - * @tparam T Type supplied to assist with type inference, otherwise ignored by implementation * @return CircuitBreaker for fluent usage */ - def onHalfOpen[T](callback: Callable[T]): CircuitBreaker = onHalfOpen(callback.call) + def onHalfOpen(callback: Runnable): CircuitBreaker = { + HalfOpen addListener callback + this + } /** * Adds a callback to execute when circuit breaker state closes * * The callback is run in the [[scala.concurrent.ExecutionContext]] supplied in the constructor. * * @param callback Handler to be invoked on state change - * @tparam T Type supplied to assist with type inference, otherwise ignored by implementation * @return CircuitBreaker for fluent usage */ - def onClose[T](callback: ⇒ T): CircuitBreaker = { - Closed.addListener(() ⇒ callback) - this - } + def onClose(callback: ⇒ Unit): CircuitBreaker = onClose(new Runnable { def run = callback }) /** * Java API for onClose * * @param callback Handler to be invoked on state change - * @tparam T Type supplied to assist with type inference, otherwise ignored by implementation * @return CircuitBreaker for fluent usage */ - def onClose[T](callback: Callable[T]): CircuitBreaker = onClose(callback.call) + def onClose(callback: Runnable): CircuitBreaker = { + Closed addListener callback + this + } /** * Retrieves current failure count. 
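With the reworked signatures above, the Scala by-name overloads take `⇒ Unit` and delegate to the `Runnable` overloads, and each `onX` returns the breaker for fluent chaining. A usage sketch under those signatures; `withCircuitBreaker` is the breaker's usual guarded entry point but is not part of the hunks shown here, so treat that call as an assumption:

```scala
import scala.concurrent.duration._
import scala.concurrent.{ ExecutionContext, Future }
import akka.actor.ActorSystem
import akka.pattern.CircuitBreaker

object BreakerSketch extends App {
  val system = ActorSystem("breaker-demo")
  implicit val ec: ExecutionContext = system.dispatcher

  // Fluent registration: each onX installs a state-transition callback
  // and returns this breaker.
  val breaker = (new CircuitBreaker(system.scheduler, maxFailures = 5,
    callTimeout = 10.seconds, resetTimeout = 1.minute)
    .onOpen(println("circuit opened"))
    .onHalfOpen(println("circuit half-open"))
    .onClose(println("circuit closed")))

  // Calls are guarded by the breaker; failures and timeouts count
  // towards maxFailures.
  val guarded: Future[String] = breaker.withCircuitBreaker(Future("ok"))
}
```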
@@ -262,7 +255,7 @@ class CircuitBreaker(scheduler: Scheduler, maxFailures: Int, callTimeout: Finite * Internal state abstraction */ private sealed trait State { - private val listeners = new CopyOnWriteArrayList[() ⇒ _] + private val listeners = new CopyOnWriteArrayList[Runnable] /** * Add a listener function which is invoked on state entry @@ -270,7 +263,7 @@ class CircuitBreaker(scheduler: Scheduler, maxFailures: Int, callTimeout: Finite * @param listener listener implementation * @tparam T return type of listener, not used - but supplied for type inference purposes */ - def addListener[T](listener: () ⇒ T): Unit = listeners add listener + def addListener(listener: Runnable): Unit = listeners add listener /** * Test for whether listeners exist @@ -289,8 +282,7 @@ class CircuitBreaker(scheduler: Scheduler, maxFailures: Int, callTimeout: Finite val iterator = listeners.iterator while (iterator.hasNext) { val listener = iterator.next - //FIXME per @viktorklang: it's a bit wasteful to create Futures for one-offs, just use EC.execute instead - Future(listener())(executor) + executor.execute(listener) } } } @@ -453,12 +445,12 @@ class CircuitBreaker(scheduler: Scheduler, maxFailures: Int, callTimeout: Finite * @return Future containing result of protected call */ override def invoke[T](body: ⇒ Future[T]): Future[T] = - Promise.failed[T](new CircuitBreakerOpenException(remainingTimeout().timeLeft.asInstanceOf[FiniteDuration])).future + Promise.failed[T](new CircuitBreakerOpenException(remainingTimeout().timeLeft)).future /** * Calculate remaining timeout to inform the caller in case a backoff algorithm is useful * - * @return [[akka.util.Deadline]] to when the breaker will attempt a reset by transitioning to half-open + * @return [[scala.concurrent.duration.Deadline]] to when the breaker will attempt a reset by transitioning to half-open */ private def remainingTimeout(): Deadline = get match { case 0L ⇒ Deadline.now diff --git a/akka-actor/src/main/scala/akka/pattern/FutureTimeoutSupport.scala b/akka-actor/src/main/scala/akka/pattern/FutureTimeoutSupport.scala index dc398e7fa2..6820cf4bfa 100644 --- a/akka-actor/src/main/scala/akka/pattern/FutureTimeoutSupport.scala +++ b/akka-actor/src/main/scala/akka/pattern/FutureTimeoutSupport.scala @@ -4,11 +4,11 @@ package akka.pattern * Copyright (C) 2009-2012 Typesafe Inc. 
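The listener plumbing above switches from `CopyOnWriteArrayList[() ⇒ _]` to `CopyOnWriteArrayList[Runnable]` and, per the removed FIXME, from `Future(listener())` to `executor.execute(listener)`. A minimal standalone rendering of the resulting shape (class name invented for the sketch):

```scala
import java.util.concurrent.CopyOnWriteArrayList
import scala.concurrent.ExecutionContext

final class TransitionListeners(executor: ExecutionContext) {
  private val listeners = new CopyOnWriteArrayList[Runnable]

  def addListener(listener: Runnable): Unit = listeners add listener

  // Hand each Runnable straight to the ExecutionContext: no Future (and
  // no Promise behind it) is allocated for a fire-and-forget callback.
  def notifyTransitionListeners(): Unit = {
    val iterator = listeners.iterator
    while (iterator.hasNext) executor.execute(iterator.next())
  }
}
```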
*/ -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import scala.concurrent.{ ExecutionContext, Promise, Future } import akka.actor._ import scala.util.control.NonFatal -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.FiniteDuration trait FutureTimeoutSupport { /** diff --git a/akka-actor/src/main/scala/akka/pattern/GracefulStopSupport.scala b/akka-actor/src/main/scala/akka/pattern/GracefulStopSupport.scala index 37fcc532e6..9279707238 100644 --- a/akka-actor/src/main/scala/akka/pattern/GracefulStopSupport.scala +++ b/akka-actor/src/main/scala/akka/pattern/GracefulStopSupport.scala @@ -8,9 +8,9 @@ import akka.actor._ import akka.util.{ Timeout } import akka.dispatch.{ Unwatch, Watch } import scala.concurrent.Future -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import scala.util.Success -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.FiniteDuration trait GracefulStopSupport { /** diff --git a/akka-actor/src/main/scala/akka/pattern/Patterns.scala b/akka-actor/src/main/scala/akka/pattern/Patterns.scala index c4440f4723..66e391c285 100644 --- a/akka-actor/src/main/scala/akka/pattern/Patterns.scala +++ b/akka-actor/src/main/scala/akka/pattern/Patterns.scala @@ -6,14 +6,14 @@ package akka.pattern import akka.actor.Scheduler import scala.concurrent.ExecutionContext import java.util.concurrent.Callable -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.FiniteDuration object Patterns { import akka.actor.{ ActorRef, ActorSystem } import akka.pattern.{ ask ⇒ scalaAsk, pipe ⇒ scalaPipe, gracefulStop ⇒ scalaGracefulStop, after ⇒ scalaAfter } import akka.util.Timeout import scala.concurrent.Future - import scala.concurrent.util.Duration + import scala.concurrent.duration.Duration /** * Java API for `akka.pattern.ask`: diff --git a/akka-actor/src/main/scala/akka/pattern/PipeToSupport.scala b/akka-actor/src/main/scala/akka/pattern/PipeToSupport.scala index 5563a908de..a766954e54 100644 --- a/akka-actor/src/main/scala/akka/pattern/PipeToSupport.scala +++ b/akka-actor/src/main/scala/akka/pattern/PipeToSupport.scala @@ -7,12 +7,12 @@ import language.implicitConversions import scala.concurrent.{ Future, ExecutionContext } import scala.util.{ Failure, Success } -import akka.actor.{ Status, ActorRef } +import akka.actor.{ Status, ActorRef, Actor } trait PipeToSupport { final class PipeableFuture[T](val future: Future[T])(implicit executionContext: ExecutionContext) { - def pipeTo(recipient: ActorRef)(implicit sender: ActorRef = null): Future[T] = { + def pipeTo(recipient: ActorRef)(implicit sender: ActorRef = Actor.noSender): Future[T] = { future onComplete { case Success(r) ⇒ recipient ! r case Failure(f) ⇒ recipient ! Status.Failure(f) diff --git a/akka-actor/src/main/scala/akka/routing/ConsistentHash.scala b/akka-actor/src/main/scala/akka/routing/ConsistentHash.scala index 79c31cda33..84100f0f21 100644 --- a/akka-actor/src/main/scala/akka/routing/ConsistentHash.scala +++ b/akka-actor/src/main/scala/akka/routing/ConsistentHash.scala @@ -4,7 +4,7 @@ package akka.routing -import scala.collection.immutable.SortedMap +import scala.collection.immutable import scala.reflect.ClassTag import java.util.Arrays @@ -18,7 +18,7 @@ import java.util.Arrays * hash, i.e. make sure it is different for different nodes. 
* */ -class ConsistentHash[T: ClassTag] private (nodes: SortedMap[Int, T], virtualNodesFactor: Int) { +class ConsistentHash[T: ClassTag] private (nodes: immutable.SortedMap[Int, T], val virtualNodesFactor: Int) { import ConsistentHash._ @@ -106,7 +106,7 @@ class ConsistentHash[T: ClassTag] private (nodes: SortedMap[Int, T], virtualNode object ConsistentHash { def apply[T: ClassTag](nodes: Iterable[T], virtualNodesFactor: Int): ConsistentHash[T] = { - new ConsistentHash(SortedMap.empty[Int, T] ++ + new ConsistentHash(immutable.SortedMap.empty[Int, T] ++ (for (node ← nodes; vnode ← 1 to virtualNodesFactor) yield (nodeHashFor(node, vnode) -> node)), virtualNodesFactor) } @@ -120,8 +120,10 @@ object ConsistentHash { apply(nodes.asScala, virtualNodesFactor)(ClassTag(classOf[Any].asInstanceOf[Class[T]])) } - private def nodeHashFor(node: Any, vnode: Int): Int = - hashFor((node + ":" + vnode).getBytes("UTF-8")) + private def nodeHashFor(node: Any, vnode: Int): Int = { + val baseStr = node.toString + ":" + hashFor(baseStr + vnode) + } private def hashFor(bytes: Array[Byte]): Int = MurmurHash.arrayHash(bytes) diff --git a/akka-actor/src/main/scala/akka/routing/ConsistentHashingRouter.scala b/akka-actor/src/main/scala/akka/routing/ConsistentHashingRouter.scala index cdfd040ace..e88195f577 100644 --- a/akka-actor/src/main/scala/akka/routing/ConsistentHashingRouter.scala +++ b/akka-actor/src/main/scala/akka/routing/ConsistentHashingRouter.scala @@ -3,30 +3,29 @@ */ package akka.routing -import scala.collection.JavaConversions.iterableAsScalaIterable +import scala.collection.immutable +import akka.japi.Util.immutableSeq import scala.util.control.NonFatal import akka.actor.ActorRef import akka.actor.SupervisorStrategy -import akka.actor.Props import akka.dispatch.Dispatchers import akka.event.Logging import akka.serialization.SerializationExtension import java.util.concurrent.atomic.AtomicReference +import akka.actor.Address +import akka.actor.ExtendedActorSystem object ConsistentHashingRouter { /** * Creates a new ConsistentHashingRouter, routing to the specified routees */ - def apply(routees: Iterable[ActorRef]): ConsistentHashingRouter = + def apply(routees: immutable.Iterable[ActorRef]): ConsistentHashingRouter = new ConsistentHashingRouter(routees = routees map (_.path.toString)) /** * Java API to create router with the supplied 'routees' actors. 
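The ring above enters every node `virtualNodesFactor` times (via `nodeHashFor(node, vnode)`), which evens out how keys distribute across nodes. A small usage sketch of the public API; `:-` for node removal belongs to `ConsistentHash` but is not among the hunks shown, so it is an assumption here:

```scala
import akka.routing.ConsistentHash

object ConsistentHashSketch extends App {
  val ring = ConsistentHash(nodes = List("nodeA", "nodeB", "nodeC"), virtualNodesFactor = 10)

  // The same key maps to the same node while membership is stable:
  println(ring.nodeFor("user-42"))

  // Removing a node only remaps the keys that hashed onto it; the rest
  // keep their assignment, which is the point of consistent hashing.
  val smaller = ring :- "nodeB"
  println(smaller.nodeFor("user-42"))
}
```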
*/ - def create(routees: java.lang.Iterable[ActorRef]): ConsistentHashingRouter = { - import scala.collection.JavaConverters._ - apply(routees.asScala) - } + def create(routees: java.lang.Iterable[ActorRef]): ConsistentHashingRouter = apply(immutableSeq(routees)) /** * If you don't define the `hashMapping` when @@ -144,7 +143,7 @@ object ConsistentHashingRouter { */ @SerialVersionUID(1L) case class ConsistentHashingRouter( - nrOfInstances: Int = 0, routees: Iterable[String] = Nil, override val resizer: Option[Resizer] = None, + nrOfInstances: Int = 0, routees: immutable.Iterable[String] = Nil, override val resizer: Option[Resizer] = None, val routerDispatcher: String = Dispatchers.DefaultDispatcherId, val supervisorStrategy: SupervisorStrategy = Router.defaultSupervisorStrategy, val virtualNodesFactor: Int = 0, @@ -163,7 +162,7 @@ case class ConsistentHashingRouter( * @param routeePaths string representation of the actor paths of the routees that will be looked up * using `actorFor` in [[akka.actor.ActorRefProvider]] */ - def this(routeePaths: java.lang.Iterable[String]) = this(routees = iterableAsScalaIterable(routeePaths)) + def this(routeePaths: java.lang.Iterable[String]) = this(routees = immutableSeq(routeePaths)) /** * Constructor that sets the resizer to be used. @@ -225,7 +224,7 @@ trait ConsistentHashingLike { this: RouterConfig ⇒ def nrOfInstances: Int - def routees: Iterable[String] + def routees: immutable.Iterable[String] def virtualNodesFactor: Int @@ -238,20 +237,22 @@ trait ConsistentHashingLike { this: RouterConfig ⇒ } val log = Logging(routeeProvider.context.system, routeeProvider.context.self) + val selfAddress = routeeProvider.context.system.asInstanceOf[ExtendedActorSystem].provider.getDefaultAddress val vnodes = if (virtualNodesFactor == 0) routeeProvider.context.system.settings.DefaultVirtualNodesFactor else virtualNodesFactor // tuple of routees and the ConsistentHash, updated together in updateConsistentHash - val consistentHashRef = new AtomicReference[(IndexedSeq[ActorRef], ConsistentHash[ActorRef])]((null, null)) + val consistentHashRef = new AtomicReference[(IndexedSeq[ConsistentActorRef], ConsistentHash[ConsistentActorRef])]((null, null)) updateConsistentHash() // update consistentHash when routees has changed // changes to routees are rare and when no changes this is a quick operation - def updateConsistentHash(): ConsistentHash[ActorRef] = { + def updateConsistentHash(): ConsistentHash[ConsistentActorRef] = { val oldConsistentHashTuple = consistentHashRef.get val (oldConsistentHashRoutees, oldConsistentHash) = oldConsistentHashTuple - val currentRoutees = routeeProvider.routees + val currentRoutees = routeeProvider.routees map { ConsistentActorRef(_, selfAddress) } + if (currentRoutees ne oldConsistentHashRoutees) { // when other instance, same content, no need to re-hash, but try to set routees val consistentHash = @@ -267,9 +268,9 @@ trait ConsistentHashingLike { this: RouterConfig ⇒ val currentConsistenHash = updateConsistentHash() if (currentConsistenHash.isEmpty) routeeProvider.context.system.deadLetters else hashData match { - case bytes: Array[Byte] ⇒ currentConsistenHash.nodeFor(bytes) - case str: String ⇒ currentConsistenHash.nodeFor(str) - case x: AnyRef ⇒ currentConsistenHash.nodeFor(SerializationExtension(routeeProvider.context.system).serialize(x).get) + case bytes: Array[Byte] ⇒ currentConsistenHash.nodeFor(bytes).actorRef + case str: String ⇒ currentConsistenHash.nodeFor(str).actorRef + case x: AnyRef ⇒ 
currentConsistenHash.nodeFor(SerializationExtension(routeeProvider.context.system).serialize(x).get).actorRef } } catch { case NonFatal(e) ⇒ @@ -294,4 +295,21 @@ trait ConsistentHashingLike { this: RouterConfig ⇒ } } +} + +/** + * INTERNAL API + * Important to use ActorRef with full address, with host and port, in the hash ring, + * so that same ring is produced on different nodes. + * The ConsistentHash uses toString of the ring nodes, and the ActorRef itself + * isn't a good representation, because LocalActorRef doesn't include the + * host and port. + */ +private[akka] case class ConsistentActorRef(actorRef: ActorRef, selfAddress: Address) { + override def toString: String = { + actorRef.path.address match { + case Address(_, _, None, None) ⇒ actorRef.path.toStringWithAddress(selfAddress) + case a ⇒ actorRef.path.toString + } + } } \ No newline at end of file diff --git a/akka-actor/src/main/scala/akka/routing/Listeners.scala b/akka-actor/src/main/scala/akka/routing/Listeners.scala index 7cc48b05f8..346f994a2f 100644 --- a/akka-actor/src/main/scala/akka/routing/Listeners.scala +++ b/akka-actor/src/main/scala/akka/routing/Listeners.scala @@ -45,7 +45,7 @@ trait Listeners { self: Actor ⇒ * @param msg * @param sender */ - protected def gossip(msg: Any)(implicit sender: ActorRef = null): Unit = { + protected def gossip(msg: Any)(implicit sender: ActorRef = Actor.noSender): Unit = { val i = listeners.iterator while (i.hasNext) i.next ! msg } diff --git a/akka-actor/src/main/scala/akka/routing/Routing.scala b/akka-actor/src/main/scala/akka/routing/Routing.scala index 8fcff84831..8c2c81bac2 100644 --- a/akka-actor/src/main/scala/akka/routing/Routing.scala +++ b/akka-actor/src/main/scala/akka/routing/Routing.scala @@ -5,24 +5,25 @@ package akka.routing import language.implicitConversions import language.postfixOps + +import scala.collection.immutable +import scala.concurrent.duration._ import akka.actor._ -import scala.concurrent.util.Duration -import scala.concurrent.util.duration._ import akka.ConfigurationException +import akka.dispatch.Dispatchers import akka.pattern.pipe +import akka.japi.Util.immutableSeq import com.typesafe.config.Config -import scala.collection.JavaConversions.iterableAsScalaIterable import java.util.concurrent.atomic.{ AtomicLong, AtomicBoolean } import java.util.concurrent.TimeUnit +import akka.event.Logging.Warning import scala.concurrent.forkjoin.ThreadLocalRandom -import akka.dispatch.Dispatchers import scala.annotation.tailrec -import concurrent.ExecutionContext -import scala.concurrent.util.FiniteDuration +import akka.event.Logging.Warning /** * A RoutedActorRef is an ActorRef that has a set of connected ActorRef and it uses a Router to - * send a message to on (or more) of these actors. + * send a message to one (or more) of these actors. 
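The `ConsistentActorRef` wrapper above exists because a local `ActorRef` path omits host and port, so two cluster nodes would stringify the same routee differently and end up with different rings. Distilled into a standalone function (the name `ringKey` is invented, the logic is taken from the hunk above):

```scala
import akka.actor.{ ActorRef, Address }

object RingKeySketch {
  // A purely local path (no host, no port) is rendered with this node's
  // own address; a remote path already carries a full address.
  def ringKey(actorRef: ActorRef, selfAddress: Address): String =
    actorRef.path.address match {
      case Address(_, _, None, None) => actorRef.path.toStringWithAddress(selfAddress)
      case _                         => actorRef.path.toString
    }
}
```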
*/ private[akka] class RoutedActorRef(_system: ActorSystemImpl, _props: Props, _supervisor: InternalActorRef, _path: ActorPath) extends RepointableActorRef(_system, _props, _supervisor, _path) { @@ -36,11 +37,11 @@ private[akka] class RoutedActorRef(_system: ActorSystemImpl, _props: Props, _sup _props.routerConfig.verifyConfig() - override def newCell(old: Cell): Cell = new RoutedActorCell(system, this, props, supervisor, old.asInstanceOf[UnstartedCell].uid) + override def newCell(old: UnstartedCell): Cell = new RoutedActorCell(system, this, props, supervisor).init(old.uid, sendSupervise = false) } -private[akka] class RoutedActorCell(_system: ActorSystemImpl, _ref: InternalActorRef, _props: Props, _supervisor: InternalActorRef, _uid: Int) +private[akka] class RoutedActorCell(_system: ActorSystemImpl, _ref: InternalActorRef, _props: Props, _supervisor: InternalActorRef) extends ActorCell( _system, _ref, @@ -52,7 +53,7 @@ private[akka] class RoutedActorCell(_system: ActorSystemImpl, _ref: InternalActo private val resizeCounter = new AtomicLong @volatile - private var _routees: IndexedSeq[ActorRef] = IndexedSeq.empty[ActorRef] // this MUST be initialized during createRoute + private var _routees: immutable.IndexedSeq[ActorRef] = immutable.IndexedSeq.empty[ActorRef] // this MUST be initialized during createRoute def routees = _routees @volatile @@ -71,20 +72,15 @@ private[akka] class RoutedActorCell(_system: ActorSystemImpl, _ref: InternalActo r } - start(sendSupervise = false, _uid) - /* * end of construction */ - def applyRoute(sender: ActorRef, message: Any): Iterable[Destination] = message match { - case _: AutoReceivedMessage ⇒ Destination(self, self) :: Nil - case CurrentRoutees ⇒ - sender ! RouterRoutees(_routees) - Nil - case _ ⇒ - if (route.isDefinedAt(sender, message)) route(sender, message) - else Nil + def applyRoute(sender: ActorRef, message: Any): immutable.Iterable[Destination] = message match { + case _: AutoReceivedMessage ⇒ Destination(sender, self) :: Nil + case CurrentRoutees ⇒ sender ! RouterRoutees(_routees); Nil + case msg if route.isDefinedAt(sender, msg) ⇒ route(sender, message) + case _ ⇒ Nil } /** @@ -93,8 +89,8 @@ private[akka] class RoutedActorCell(_system: ActorSystemImpl, _ref: InternalActo * Not thread safe, but intended to be called from protected points, such as * `RouterConfig.createRoute` and `Resizer.resize` */ - private[akka] def addRoutees(newRoutees: Iterable[ActorRef]): Unit = { - _routees = _routees ++ newRoutees + private[akka] def addRoutees(newRoutees: immutable.Iterable[ActorRef]): Unit = { + _routees ++= newRoutees // subscribe to Terminated messages for all route destinations, to be handled by Router actor newRoutees foreach watch } @@ -105,35 +101,43 @@ private[akka] class RoutedActorCell(_system: ActorSystemImpl, _ref: InternalActo * Not thread safe, but intended to be called from protected points, such as * `Resizer.resize` */ - private[akka] def removeRoutees(abandonedRoutees: Iterable[ActorRef]): Unit = { + private[akka] def removeRoutees(abandonedRoutees: immutable.Iterable[ActorRef]): Unit = { _routees = abandonedRoutees.foldLeft(_routees) { (xs, x) ⇒ unwatch(x); xs.filterNot(_ == x) } } + /** + * Send the message to the destinations defined by the `route` function. + * + * If the message is a [[akka.routing.RouterEnvelope]] it will be + * unwrapped before sent to the destinations. + * + * When [[akka.routing.CurrentRoutees]] is sent to the RoutedActorRef it + * replies with [[akka.routing.RouterRoutees]]. 
+ * + * Resize is triggered when messages are sent to the routees, and the + * resizer is invoked asynchronously, i.e. not necessarily before the + * message has been sent. + */ override def tell(message: Any, sender: ActorRef): Unit = { - resize() - val s = if (sender eq null) system.deadLetters else sender - val msg = message match { case wrapped: RouterEnvelope ⇒ wrapped.message case m ⇒ m } - - applyRoute(s, message) match { - case Destination(_, x) :: Nil if x == self ⇒ super.tell(message, s) - case refs ⇒ - refs foreach (p ⇒ - if (p.recipient == self) super.tell(msg, p.sender) - else p.recipient.!(msg)(p.sender)) + applyRoute(s, message) foreach { + case Destination(snd, `self`) ⇒ + super.tell(msg, snd) + case Destination(snd, recipient) ⇒ + resize() // only resize when the message target is one of the routees + recipient.tell(msg, snd) } } - def resize(): Unit = { + def resize(): Unit = for (r ← routerConfig.resizer) { if (r.isTimeForResize(resizeCounter.getAndIncrement()) && resizeInProgress.compareAndSet(false, true)) super.tell(Router.Resize, self) } - } } /** @@ -197,19 +201,21 @@ trait RouterConfig { */ def withFallback(other: RouterConfig): RouterConfig = this - protected def toAll(sender: ActorRef, routees: Iterable[ActorRef]): Iterable[Destination] = + protected def toAll(sender: ActorRef, routees: immutable.Iterable[ActorRef]): immutable.Iterable[Destination] = routees.map(Destination(sender, _)) /** * Routers with dynamically resizable number of routees return the [[akka.routing.Resizer]] - * to use. + * to use. The resizer is invoked once when the router is created, before any messages can + * be sent to it. Resize is also triggered when messages are sent to the routees, and the + * resizer is invoked asynchronously, i.e. not necessarily before the message has been sent. */ def resizer: Option[Resizer] = None /** * Check that everything is there which is needed. Called in constructor of RoutedActorRef to fail early. */ - def verifyConfig(): Unit = {} + def verifyConfig(): Unit = () } @@ -228,7 +234,7 @@ class RouteeProvider(val context: ActorContext, val routeeProps: Props, val resi * Not thread safe, but intended to be called from protected points, such as * `RouterConfig.createRoute` and `Resizer.resize`. */ - def registerRoutees(routees: Iterable[ActorRef]): Unit = routedCell.addRoutees(routees) + def registerRoutees(routees: immutable.Iterable[ActorRef]): Unit = routedCell.addRoutees(routees) /** * Adds the routees to the router. @@ -237,7 +243,7 @@ class RouteeProvider(val context: ActorContext, val routeeProps: Props, val resi * `RouterConfig.createRoute` and `Resizer.resize`. * Java API. */ - def registerRoutees(routees: java.lang.Iterable[ActorRef]): Unit = registerRoutees(routees.asScala) + def registerRoutees(routees: java.lang.Iterable[ActorRef]): Unit = registerRoutees(immutableSeq(routees)) /** * Removes routees from the router. This method doesn't stop the routees. @@ -245,7 +251,7 @@ class RouteeProvider(val context: ActorContext, val routeeProps: Props, val resi * Not thread safe, but intended to be called from protected points, such as * `Resizer.resize`. */ - def unregisterRoutees(routees: Iterable[ActorRef]): Unit = routedCell.removeRoutees(routees) + def unregisterRoutees(routees: immutable.Iterable[ActorRef]): Unit = routedCell.removeRoutees(routees) /** * Removes routees from the router. This method doesn't stop the routees. @@ -254,28 +260,25 @@ class RouteeProvider(val context: ActorContext, val routeeProps: Props, val resi * `Resizer.resize`. 
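`applyRoute` above feeds a `Route`, a partial function from `(sender, message)` to the destinations the message should be copied to. Assuming the `Route` alias from the `akka.routing` package object, a broadcast route is little more than the `toAll` helper shown above:

```scala
import scala.collection.immutable
import akka.actor.ActorRef
import akka.routing.{ Destination, Route }

object BroadcastRouteSketch {
  // Every routee becomes a Destination that preserves the original sender.
  def broadcastRoute(currentRoutees: () => immutable.IndexedSeq[ActorRef]): Route = {
    case (sender, _) => currentRoutees().map(Destination(sender, _))
  }
}
```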
* JAVA API */ - def unregisterRoutees(routees: java.lang.Iterable[ActorRef]): Unit = unregisterRoutees(routees.asScala) + def unregisterRoutees(routees: java.lang.Iterable[ActorRef]): Unit = unregisterRoutees(immutableSeq(routees)) /** * Looks up routes with specified paths and registers them. */ - def registerRouteesFor(paths: Iterable[String]): Unit = registerRoutees(paths.map(context.actorFor(_))) + def registerRouteesFor(paths: immutable.Iterable[String]): Unit = registerRoutees(paths.map(context.actorFor(_))) /** * Looks up routes with specified paths and registers them. * JAVA API */ - def registerRouteesFor(paths: java.lang.Iterable[String]): Unit = registerRouteesFor(paths.asScala) + def registerRouteesFor(paths: java.lang.Iterable[String]): Unit = registerRouteesFor(immutableSeq(paths)) /** * Creates new routees from specified `Props` and registers them. */ - def createRoutees(nrOfInstances: Int): Unit = { - if (nrOfInstances <= 0) throw new IllegalArgumentException( - "Must specify nrOfInstances or routees for [%s]" format context.self.path.toString) - else - registerRoutees(IndexedSeq.fill(nrOfInstances)(context.actorOf(routeeProps))) - } + def createRoutees(nrOfInstances: Int): Unit = + if (nrOfInstances <= 0) throw new IllegalArgumentException("Must specify nrOfInstances or routees for [%s]" format context.self.path.toString) + else registerRoutees(immutable.IndexedSeq.fill(nrOfInstances)(context.actorOf(routeeProps))) /** * Remove specified number of routees by unregister them @@ -298,7 +301,7 @@ class RouteeProvider(val context: ActorContext, val routeeProps: Props, val resi * Give concurrent messages a chance to be placed in mailbox before * sending PoisonPill. */ - protected def delayedStop(scheduler: Scheduler, abandon: Iterable[ActorRef], stopDelay: FiniteDuration): Unit = { + protected def delayedStop(scheduler: Scheduler, abandon: immutable.Iterable[ActorRef], stopDelay: FiniteDuration): Unit = { if (abandon.nonEmpty) { if (stopDelay <= Duration.Zero) { abandon foreach (_ ! PoisonPill) @@ -316,7 +319,7 @@ class RouteeProvider(val context: ActorContext, val routeeProps: Props, val resi /** * All routees of the router */ - def routees: IndexedSeq[ActorRef] = routedCell.routees + def routees: immutable.IndexedSeq[ActorRef] = routedCell.routees /** * All routees of the router @@ -346,7 +349,13 @@ abstract class CustomRouterConfig extends RouterConfig { } trait CustomRoute { - def destinationsFor(sender: ActorRef, message: Any): java.lang.Iterable[Destination] + /** + * use akka.japi.Util.immutableSeq to convert a java.lang.Iterable to the return type needed for destinationsFor, + * or if you just want to return a single Destination, use akka.japi.Util.immutableSingletonSeq + * + * Java API + */ + def destinationsFor(sender: ActorRef, message: Any): immutable.Seq[Destination] } /** @@ -368,7 +377,7 @@ trait Router extends Actor { if (ab.get) try ref.routerConfig.resizer foreach (_.resize(ref.routeeProvider)) finally ab.set(false) case Terminated(child) ⇒ - ref.removeRoutees(IndexedSeq(child)) + ref.removeRoutees(child :: Nil) if (ref.routees.isEmpty) context.stop(self) }: Receive) orElse routerReceive @@ -428,7 +437,7 @@ case object CurrentRoutees extends CurrentRoutees { * Message used to carry information about what routees the router is currently using. 
*/ @SerialVersionUID(1L) -case class RouterRoutees(routees: Iterable[ActorRef]) +case class RouterRoutees(routees: immutable.Iterable[ActorRef]) /** * For every message sent to a router, its route determines a set of destinations, @@ -448,9 +457,9 @@ case class Destination(sender: ActorRef, recipient: ActorRef) @SerialVersionUID(1L) abstract class NoRouter extends RouterConfig case object NoRouter extends NoRouter { - def createRoute(routeeProvider: RouteeProvider): Route = null // FIXME, null, really?? - def routerDispatcher: String = "" - def supervisorStrategy = null // FIXME null, really?? + def createRoute(routeeProvider: RouteeProvider): Route = throw new UnsupportedOperationException("NoRouter does not createRoute") + def routerDispatcher: String = throw new UnsupportedOperationException("NoRouter has no dispatcher") + def supervisorStrategy = throw new UnsupportedOperationException("NoRouter has no strategy") override def withFallback(other: RouterConfig): RouterConfig = other /** @@ -496,16 +505,14 @@ object RoundRobinRouter { /** * Creates a new RoundRobinRouter, routing to the specified routees */ - def apply(routees: Iterable[ActorRef]): RoundRobinRouter = + def apply(routees: immutable.Iterable[ActorRef]): RoundRobinRouter = new RoundRobinRouter(routees = routees map (_.path.toString)) /** * Java API to create router with the supplied 'routees' actors. */ - def create(routees: java.lang.Iterable[ActorRef]): RoundRobinRouter = { - import scala.collection.JavaConverters._ - apply(routees.asScala) - } + def create(routees: java.lang.Iterable[ActorRef]): RoundRobinRouter = + apply(immutableSeq(routees)) } /** * A Router that uses round-robin to select a connection. For concurrent calls, round robin is just a best effort. @@ -549,7 +556,7 @@ object RoundRobinRouter { * using `actorFor` in [[akka.actor.ActorRefProvider]] */ @SerialVersionUID(1L) -case class RoundRobinRouter(nrOfInstances: Int = 0, routees: Iterable[String] = Nil, override val resizer: Option[Resizer] = None, +case class RoundRobinRouter(nrOfInstances: Int = 0, routees: immutable.Iterable[String] = Nil, override val resizer: Option[Resizer] = None, val routerDispatcher: String = Dispatchers.DefaultDispatcherId, val supervisorStrategy: SupervisorStrategy = Router.defaultSupervisorStrategy) extends RouterConfig with RoundRobinLike { @@ -566,7 +573,7 @@ case class RoundRobinRouter(nrOfInstances: Int = 0, routees: Iterable[String] = * @param routeePaths string representation of the actor paths of the routees that will be looked up * using `actorFor` in [[akka.actor.ActorRefProvider]] */ - def this(routeePaths: java.lang.Iterable[String]) = this(routees = iterableAsScalaIterable(routeePaths)) + def this(routeePaths: java.lang.Iterable[String]) = this(routees = immutableSeq(routeePaths)) /** * Constructor that sets the resizer to be used. 
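The Java-API constructors and factories above now funnel through `immutableSeq`, which copies a `java.lang.Iterable` into an immutable Scala `Seq` once, at the API boundary, so the Scala internals can require `immutable.Iterable`/`immutable.Seq` throughout. A minimal standalone sketch of that conversion pattern (a stand-in for `akka.japi.Util.immutableSeq`, which this patch adopts; Scala 2.10 collection syntax):

```scala
import scala.collection.immutable
import scala.collection.JavaConverters._

object ImmutableSeqSketch {
  // Copy a java.lang.Iterable into scala.collection.immutable.Seq once, at
  // the Java/Scala API boundary, so callers downstream can assume immutability.
  def immutableSeq[T](iterable: java.lang.Iterable[T]): immutable.Seq[T] =
    iterable.asScala.to[immutable.Seq]

  def main(args: Array[String]): Unit = {
    val routeePaths: java.lang.Iterable[String] =
      java.util.Arrays.asList("/user/a", "/user/b")
    println(immutableSeq(routeePaths)) // List(/user/a, /user/b)
  }
}
```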
@@ -604,7 +611,7 @@ trait RoundRobinLike { this: RouterConfig ⇒ def nrOfInstances: Int - def routees: Iterable[String] + def routees: immutable.Iterable[String] def createRoute(routeeProvider: RouteeProvider): Route = { if (resizer.isEmpty) { @@ -624,7 +631,7 @@ trait RoundRobinLike { this: RouterConfig ⇒ case (sender, message) ⇒ message match { case Broadcast(msg) ⇒ toAll(sender, routeeProvider.routees) - case msg ⇒ List(Destination(sender, getNext())) + case msg ⇒ Destination(sender, getNext()) :: Nil } } } @@ -634,15 +641,13 @@ object RandomRouter { /** * Creates a new RandomRouter, routing to the specified routees */ - def apply(routees: Iterable[ActorRef]): RandomRouter = new RandomRouter(routees = routees map (_.path.toString)) + def apply(routees: immutable.Iterable[ActorRef]): RandomRouter = new RandomRouter(routees = routees map (_.path.toString)) /** * Java API to create router with the supplied 'routees' actors. */ - def create(routees: java.lang.Iterable[ActorRef]): RandomRouter = { - import scala.collection.JavaConverters._ - apply(routees.asScala) - } + def create(routees: java.lang.Iterable[ActorRef]): RandomRouter = + apply(immutableSeq(routees)) } /** * A Router that randomly selects one of the target connections to send a message to. @@ -686,7 +691,7 @@ object RandomRouter { * using `actorFor` in [[akka.actor.ActorRefProvider]] */ @SerialVersionUID(1L) -case class RandomRouter(nrOfInstances: Int = 0, routees: Iterable[String] = Nil, override val resizer: Option[Resizer] = None, +case class RandomRouter(nrOfInstances: Int = 0, routees: immutable.Iterable[String] = Nil, override val resizer: Option[Resizer] = None, val routerDispatcher: String = Dispatchers.DefaultDispatcherId, val supervisorStrategy: SupervisorStrategy = Router.defaultSupervisorStrategy) extends RouterConfig with RandomLike { @@ -703,7 +708,7 @@ case class RandomRouter(nrOfInstances: Int = 0, routees: Iterable[String] = Nil, * @param routeePaths string representation of the actor paths of the routees that will be looked up * using `actorFor` in [[akka.actor.ActorRefProvider]] */ - def this(routeePaths: java.lang.Iterable[String]) = this(routees = iterableAsScalaIterable(routeePaths)) + def this(routeePaths: java.lang.Iterable[String]) = this(routees = immutableSeq(routeePaths)) /** * Constructor that sets the resizer to be used. @@ -740,7 +745,7 @@ case class RandomRouter(nrOfInstances: Int = 0, routees: Iterable[String] = Nil, trait RandomLike { this: RouterConfig ⇒ def nrOfInstances: Int - def routees: Iterable[String] + def routees: immutable.Iterable[String] def createRoute(routeeProvider: RouteeProvider): Route = { if (resizer.isEmpty) { @@ -758,7 +763,7 @@ trait RandomLike { this: RouterConfig ⇒ case (sender, message) ⇒ message match { case Broadcast(msg) ⇒ toAll(sender, routeeProvider.routees) - case msg ⇒ List(Destination(sender, getNext())) + case msg ⇒ Destination(sender, getNext()) :: Nil } } } @@ -768,16 +773,14 @@ object SmallestMailboxRouter { /** * Creates a new SmallestMailboxRouter, routing to the specified routees */ - def apply(routees: Iterable[ActorRef]): SmallestMailboxRouter = + def apply(routees: immutable.Iterable[ActorRef]): SmallestMailboxRouter = new SmallestMailboxRouter(routees = routees map (_.path.toString)) /** * Java API to create router with the supplied 'routees' actors. 
*/ - def create(routees: java.lang.Iterable[ActorRef]): SmallestMailboxRouter = { - import scala.collection.JavaConverters._ - apply(routees.asScala) - } + def create(routees: java.lang.Iterable[ActorRef]): SmallestMailboxRouter = + apply(immutableSeq(routees)) } /** * A Router that tries to send to the non-suspended routee with fewest messages in mailbox. @@ -830,7 +833,7 @@ object SmallestMailboxRouter { * using `actorFor` in [[akka.actor.ActorRefProvider]] */ @SerialVersionUID(1L) -case class SmallestMailboxRouter(nrOfInstances: Int = 0, routees: Iterable[String] = Nil, override val resizer: Option[Resizer] = None, +case class SmallestMailboxRouter(nrOfInstances: Int = 0, routees: immutable.Iterable[String] = Nil, override val resizer: Option[Resizer] = None, val routerDispatcher: String = Dispatchers.DefaultDispatcherId, val supervisorStrategy: SupervisorStrategy = Router.defaultSupervisorStrategy) extends RouterConfig with SmallestMailboxLike { @@ -847,7 +850,7 @@ case class SmallestMailboxRouter(nrOfInstances: Int = 0, routees: Iterable[Strin * @param routeePaths string representation of the actor paths of the routees that will be looked up * using `actorFor` in [[akka.actor.ActorRefProvider]] */ - def this(routeePaths: java.lang.Iterable[String]) = this(routees = iterableAsScalaIterable(routeePaths)) + def this(routeePaths: java.lang.Iterable[String]) = this(routees = immutableSeq(routeePaths)) /** * Constructor that sets the resizer to be used. @@ -884,7 +887,7 @@ case class SmallestMailboxRouter(nrOfInstances: Int = 0, routees: Iterable[Strin trait SmallestMailboxLike { this: RouterConfig ⇒ def nrOfInstances: Int - def routees: Iterable[String] + def routees: immutable.Iterable[String] /** * Returns true if the actor is currently processing a message. @@ -956,7 +959,7 @@ trait SmallestMailboxLike { this: RouterConfig ⇒ // 4. An ActorRef with unknown mailbox size that isn't processing anything // 5. An ActorRef with a known mailbox size // 6. An ActorRef without any messages - @tailrec def getNext(targets: IndexedSeq[ActorRef] = routeeProvider.routees, + @tailrec def getNext(targets: immutable.IndexedSeq[ActorRef] = routeeProvider.routees, proposedTarget: ActorRef = routeeProvider.context.system.deadLetters, currentScore: Long = Long.MaxValue, at: Int = 0, @@ -987,7 +990,7 @@ trait SmallestMailboxLike { this: RouterConfig ⇒ case (sender, message) ⇒ message match { case Broadcast(msg) ⇒ toAll(sender, routeeProvider.routees) - case msg ⇒ List(Destination(sender, getNext())) + case msg ⇒ Destination(sender, getNext()) :: Nil } } } @@ -997,15 +1000,13 @@ object BroadcastRouter { /** * Creates a new BroadcastRouter, routing to the specified routees */ - def apply(routees: Iterable[ActorRef]): BroadcastRouter = new BroadcastRouter(routees = routees map (_.path.toString)) + def apply(routees: immutable.Iterable[ActorRef]): BroadcastRouter = new BroadcastRouter(routees = routees map (_.path.toString)) /** * Java API to create router with the supplied 'routees' actors. */ - def create(routees: java.lang.Iterable[ActorRef]): BroadcastRouter = { - import scala.collection.JavaConverters._ - apply(routees.asScala) - } + def create(routees: java.lang.Iterable[ActorRef]): BroadcastRouter = + apply(immutableSeq(routees)) } /** * A Router that uses broadcasts a message to all its connections. 
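Each of the `*Like` traits builds the same shape of `Route`: a partial function from `(sender, message)` to `immutable.Iterable[Destination]`, in which a `Broadcast` fans out to all routees via `toAll` and any other message yields exactly one destination, now written `Destination(sender, getNext()) :: Nil`. A self-contained sketch of that shape, using plain stand-ins rather than the Akka types:

```scala
import java.util.concurrent.atomic.AtomicLong
import scala.collection.immutable

object RouteSketch {
  // Plain stand-ins for the Akka types, for illustration only.
  final case class Destination(sender: String, recipient: String)
  final case class Broadcast(message: Any)
  type Route = PartialFunction[(String, Any), immutable.Iterable[Destination]]

  def createRoute(routees: immutable.IndexedSeq[String]): Route = {
    val next = new AtomicLong
    // Round-robin selection: an atomic counter indexes into the routee list.
    def getNext(): String = routees((next.getAndIncrement % routees.size).toInt)

    {
      case (sender, message) ⇒ message match {
        case Broadcast(_) ⇒ routees.map(Destination(sender, _)) // toAll
        case _            ⇒ Destination(sender, getNext()) :: Nil
      }
    }
  }

  def main(args: Array[String]): Unit = {
    val route = createRoute(Vector("a", "b", "c"))
    println(route(("alice", "hi")))            // one destination, round-robin
    println(route(("alice", Broadcast("hi")))) // one destination per routee
  }
}
```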
@@ -1049,7 +1050,7 @@ object BroadcastRouter { * using `actorFor` in [[akka.actor.ActorRefProvider]] */ @SerialVersionUID(1L) -case class BroadcastRouter(nrOfInstances: Int = 0, routees: Iterable[String] = Nil, override val resizer: Option[Resizer] = None, +case class BroadcastRouter(nrOfInstances: Int = 0, routees: immutable.Iterable[String] = Nil, override val resizer: Option[Resizer] = None, val routerDispatcher: String = Dispatchers.DefaultDispatcherId, val supervisorStrategy: SupervisorStrategy = Router.defaultSupervisorStrategy) extends RouterConfig with BroadcastLike { @@ -1066,7 +1067,7 @@ case class BroadcastRouter(nrOfInstances: Int = 0, routees: Iterable[String] = N * @param routeePaths string representation of the actor paths of the routees that will be looked up * using `actorFor` in [[akka.actor.ActorRefProvider]] */ - def this(routeePaths: java.lang.Iterable[String]) = this(routees = iterableAsScalaIterable(routeePaths)) + def this(routeePaths: java.lang.Iterable[String]) = this(routees = immutableSeq(routeePaths)) /** * Constructor that sets the resizer to be used. @@ -1104,7 +1105,7 @@ trait BroadcastLike { this: RouterConfig ⇒ def nrOfInstances: Int - def routees: Iterable[String] + def routees: immutable.Iterable[String] def createRoute(routeeProvider: RouteeProvider): Route = { if (resizer.isEmpty) { @@ -1122,16 +1123,14 @@ object ScatterGatherFirstCompletedRouter { /** * Creates a new ScatterGatherFirstCompletedRouter, routing to the specified routees, timing out after the specified Duration */ - def apply(routees: Iterable[ActorRef], within: FiniteDuration): ScatterGatherFirstCompletedRouter = + def apply(routees: immutable.Iterable[ActorRef], within: FiniteDuration): ScatterGatherFirstCompletedRouter = new ScatterGatherFirstCompletedRouter(routees = routees map (_.path.toString), within = within) /** * Java API to create router with the supplied 'routees' actors. */ - def create(routees: java.lang.Iterable[ActorRef], within: FiniteDuration): ScatterGatherFirstCompletedRouter = { - import scala.collection.JavaConverters._ - apply(routees.asScala, within) - } + def create(routees: java.lang.Iterable[ActorRef], within: FiniteDuration): ScatterGatherFirstCompletedRouter = + apply(immutableSeq(routees), within) } /** * Simple router that broadcasts the message to all routees, and replies with the first response. @@ -1177,7 +1176,7 @@ object ScatterGatherFirstCompletedRouter { * using `actorFor` in [[akka.actor.ActorRefProvider]] */ @SerialVersionUID(1L) -case class ScatterGatherFirstCompletedRouter(nrOfInstances: Int = 0, routees: Iterable[String] = Nil, within: FiniteDuration, +case class ScatterGatherFirstCompletedRouter(nrOfInstances: Int = 0, routees: immutable.Iterable[String] = Nil, within: FiniteDuration, override val resizer: Option[Resizer] = None, val routerDispatcher: String = Dispatchers.DefaultDispatcherId, val supervisorStrategy: SupervisorStrategy = Router.defaultSupervisorStrategy) @@ -1198,8 +1197,7 @@ case class ScatterGatherFirstCompletedRouter(nrOfInstances: Int = 0, routees: It * @param routeePaths string representation of the actor paths of the routees that will be looked up * using `actorFor` in [[akka.actor.ActorRefProvider]] */ - def this(routeePaths: java.lang.Iterable[String], w: FiniteDuration) = - this(routees = iterableAsScalaIterable(routeePaths), within = w) + def this(routeePaths: java.lang.Iterable[String], w: FiniteDuration) = this(routees = immutableSeq(routeePaths), within = w) /** * Constructor that sets the resizer to be used. 
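The scatter-gather router broadcasts each message to all routees and replies with the first response to arrive inside the `within` timeout. The first-completed idea can be sketched with plain futures (illustrative only; the router's actual plumbing is ask-based):

```scala
import scala.concurrent.{ Await, Future }
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object ScatterGatherSketch extends App {
  // Two "routees" answering at different speeds; the gathered reply is the
  // first future to complete, bounded by a timeout playing the role of `within`.
  val replies = List(
    Future { Thread.sleep(200); "slow routee" },
    Future { "fast routee" })

  println(Await.result(Future.firstCompletedOf(replies), 1.second)) // fast routee
}
```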
@@ -1237,7 +1235,7 @@ trait ScatterGatherFirstCompletedLike { this: RouterConfig ⇒ def nrOfInstances: Int - def routees: Iterable[String] + def routees: immutable.Iterable[String] def within: FiniteDuration @@ -1397,7 +1395,7 @@ case class DefaultResizer( * @param routees The current actor in the resizer * @return the number of routees by which the resizer should be adjusted (positive, negative or zero) */ - def capacity(routees: IndexedSeq[ActorRef]): Int = { + def capacity(routees: immutable.IndexedSeq[ActorRef]): Int = { val currentSize = routees.size val press = pressure(routees) val delta = filter(press, currentSize) @@ -1425,7 +1423,7 @@ case class DefaultResizer( * @param routees the current resizer of routees * @return number of busy routees, between 0 and routees.size */ - def pressure(routees: IndexedSeq[ActorRef]): Int = { + def pressure(routees: immutable.IndexedSeq[ActorRef]): Int = { routees count { case a: ActorRefWithCell ⇒ a.underlying match { diff --git a/akka-actor/src/main/scala/akka/routing/package.scala b/akka-actor/src/main/scala/akka/routing/package.scala index 0b40793861..76dc2f3104 100644 --- a/akka-actor/src/main/scala/akka/routing/package.scala +++ b/akka-actor/src/main/scala/akka/routing/package.scala @@ -4,10 +4,12 @@ package akka +import scala.collection.immutable + package object routing { /** * Routing logic, partial function from (sender, message) to a * set of destinations. */ - type Route = PartialFunction[(akka.actor.ActorRef, Any), Iterable[Destination]] + type Route = PartialFunction[(akka.actor.ActorRef, Any), immutable.Iterable[Destination]] } diff --git a/akka-actor/src/main/scala/akka/serialization/Serialization.scala b/akka-actor/src/main/scala/akka/serialization/Serialization.scala index 003c9de2b1..2fb6a37469 100644 --- a/akka-actor/src/main/scala/akka/serialization/Serialization.scala +++ b/akka-actor/src/main/scala/akka/serialization/Serialization.scala @@ -4,14 +4,14 @@ package akka.serialization -import akka.AkkaException import com.typesafe.config.Config -import akka.actor.{ Extension, ExtendedActorSystem, Address, DynamicAccess } +import akka.actor.{ Extension, ExtendedActorSystem, Address } import akka.event.Logging import java.util.concurrent.ConcurrentHashMap import scala.collection.mutable.ArrayBuffer import java.io.NotSerializableException -import util.{ Try, DynamicVariable } +import scala.util.{ Try, DynamicVariable } +import scala.collection.immutable object Serialization { @@ -27,17 +27,13 @@ object Serialization { val currentTransportAddress = new DynamicVariable[Address](null) class Settings(val config: Config) { + val Serializers: Map[String, String] = configToMap("akka.actor.serializers") + val SerializationBindings: Map[String, String] = configToMap("akka.actor.serialization-bindings") - import scala.collection.JavaConverters._ - import config._ - - val Serializers: Map[String, String] = configToMap(getConfig("akka.actor.serializers")) - - val SerializationBindings: Map[String, String] = configToMap(getConfig("akka.actor.serialization-bindings")) - - private def configToMap(cfg: Config): Map[String, String] = - cfg.root.unwrapped.asScala.toMap.map { case (k, v) ⇒ (k, v.toString) } - + private final def configToMap(path: String): Map[String, String] = { + import scala.collection.JavaConverters._ + config.getConfig(path).root.unwrapped.asScala.toMap map { case (k, v) ⇒ (k -> v.toString) } + } } } @@ -62,16 +58,16 @@ class Serialization(val system: ExtendedActorSystem) extends Extension { * using the optional type hint to the 
Serializer and the optional ClassLoader ot load it into. * Returns either the resulting object or an Exception if one was thrown. */ - def deserialize(bytes: Array[Byte], - serializerId: Int, - clazz: Option[Class[_]]): Try[AnyRef] = Try(serializerByIdentity(serializerId).fromBinary(bytes, clazz)) + def deserialize(bytes: Array[Byte], serializerId: Int, clazz: Option[Class[_]]): Try[AnyRef] = + Try(serializerByIdentity(serializerId).fromBinary(bytes, clazz)) /** * Deserializes the given array of bytes using the specified type to look up what Serializer should be used. * You can specify an optional ClassLoader to load the object into. * Returns either the resulting object or an Exception if one was thrown. */ - def deserialize(bytes: Array[Byte], clazz: Class[_]): Try[AnyRef] = Try(serializerFor(clazz).fromBinary(bytes, Some(clazz))) + def deserialize(bytes: Array[Byte], clazz: Class[_]): Try[AnyRef] = + Try(serializerFor(clazz).fromBinary(bytes, Some(clazz))) /** * Returns the Serializer configured for the given object, returns the NullSerializer if it's null. @@ -95,9 +91,8 @@ class Serialization(val system: ExtendedActorSystem) extends Extension { */ def serializerFor(clazz: Class[_]): Serializer = serializerMap.get(clazz) match { - case null ⇒ - // bindings are ordered from most specific to least specific - def unique(possibilities: Seq[(Class[_], Serializer)]): Boolean = + case null ⇒ // bindings are ordered from most specific to least specific + def unique(possibilities: immutable.Seq[(Class[_], Serializer)]): Boolean = possibilities.size == 1 || (possibilities forall (_._1 isAssignableFrom possibilities(0)._1)) || (possibilities forall (_._2 == possibilities(0)._2)) @@ -122,8 +117,8 @@ class Serialization(val system: ExtendedActorSystem) extends Extension { * loading is performed by the system’s [[akka.actor.DynamicAccess]]. */ def serializerOf(serializerFQN: String): Try[Serializer] = - system.dynamicAccess.createInstanceFor[Serializer](serializerFQN, Seq(classOf[ExtendedActorSystem] -> system)) recoverWith { - case _ ⇒ system.dynamicAccess.createInstanceFor[Serializer](serializerFQN, Seq()) + system.dynamicAccess.createInstanceFor[Serializer](serializerFQN, List(classOf[ExtendedActorSystem] -> system)) recoverWith { + case _ ⇒ system.dynamicAccess.createInstanceFor[Serializer](serializerFQN, Nil) } /** @@ -137,21 +132,21 @@ class Serialization(val system: ExtendedActorSystem) extends Extension { * bindings is a Seq of tuple representing the mapping from Class to Serializer. * It is primarily ordered by the most specific classes first, and secondly in the configured order. */ - private[akka] val bindings: Seq[ClassSerializer] = - sort(for ((k: String, v: String) ← settings.SerializationBindings if v != "none") yield (system.dynamicAccess.getClassFor[Any](k).get, serializers(v))) + private[akka] val bindings: immutable.Seq[ClassSerializer] = + sort(for ((k: String, v: String) ← settings.SerializationBindings if v != "none") yield (system.dynamicAccess.getClassFor[Any](k).get, serializers(v))).to[immutable.Seq] /** * Sort so that subtypes always precede their supertypes, but without * obeying any order between unrelated subtypes (insert sort). 
*/ - private def sort(in: Iterable[ClassSerializer]): Seq[ClassSerializer] = - (new ArrayBuffer[ClassSerializer](in.size) /: in) { (buf, ca) ⇒ + private def sort(in: Iterable[ClassSerializer]): immutable.Seq[ClassSerializer] = + ((new ArrayBuffer[ClassSerializer](in.size) /: in) { (buf, ca) ⇒ buf.indexWhere(_._1 isAssignableFrom ca._1) match { case -1 ⇒ buf append ca case x ⇒ buf insert (x, ca) } buf - } + }).to[immutable.Seq] /** * serializerMap is a Map whose keys is the class that is serializable and values is the serializer diff --git a/akka-actor/src/main/scala/akka/util/Collections.scala b/akka-actor/src/main/scala/akka/util/Collections.scala new file mode 100644 index 0000000000..0ccbcd408c --- /dev/null +++ b/akka-actor/src/main/scala/akka/util/Collections.scala @@ -0,0 +1,54 @@ +/** + * Copyright (C) 2009-2012 Typesafe Inc. + */ + +package akka.util + +import scala.collection.immutable +import scala.annotation.tailrec + +/** + * INTERNAL API + */ +private[akka] object Collections { + + case object EmptyImmutableSeq extends immutable.Seq[Nothing] { + override final def iterator = Iterator.empty + override final def apply(idx: Int): Nothing = throw new java.lang.IndexOutOfBoundsException(idx.toString) + override final def length: Int = 0 + } + + abstract class PartialImmutableValuesIterable[From, To] extends immutable.Iterable[To] { + def isDefinedAt(from: From): Boolean + def apply(from: From): To + def valuesIterator: Iterator[From] + final def iterator: Iterator[To] = { + val superIterator = valuesIterator + new Iterator[To] { + private[this] var _next: To = _ + private[this] var _hasNext = false + + @tailrec override final def hasNext: Boolean = + if (!_hasNext && superIterator.hasNext) { // If we need and are able to look for the next value + val potentiallyNext = superIterator.next() + if (isDefinedAt(potentiallyNext)) { + _next = apply(potentiallyNext) + _hasNext = true + true + } else hasNext //Attempt to find the next + } else _hasNext // Return if we found one + + override final def next(): To = if (hasNext) { + val ret = _next + _next = null.asInstanceOf[To] // Mark as consumed (nice to the GC, don't leak the last returned value) + _hasNext = false // Mark as consumed (we need to look for the next value) + ret + } else throw new java.util.NoSuchElementException("next") + } + } + + override lazy val size: Int = iterator.size + override def foreach[C](f: To ⇒ C) = iterator foreach f + } + +} \ No newline at end of file diff --git a/akka-actor/src/main/scala/akka/util/Convert.scala b/akka-actor/src/main/scala/akka/util/Convert.scala deleted file mode 100644 index 3fead7aef7..0000000000 --- a/akka-actor/src/main/scala/akka/util/Convert.scala +++ /dev/null @@ -1,45 +0,0 @@ -/** - * Copyright (C) 2009-2012 Typesafe Inc. - */ - -package akka.util -//FIXME DOCS! 
-object Convert { - - def intToBytes(value: Int): Array[Byte] = { - val bytes = Array.fill[Byte](4)(0) - bytes(0) = (value >>> 24).asInstanceOf[Byte] - bytes(1) = (value >>> 16).asInstanceOf[Byte] - bytes(2) = (value >>> 8).asInstanceOf[Byte] - bytes(3) = value.asInstanceOf[Byte] - bytes - } - - def bytesToInt(bytes: Array[Byte], offset: Int): Int = { - (0 until 4).foldLeft(0)((value, index) ⇒ value + ((bytes(index + offset) & 0x000000FF) << ((4 - 1 - index) * 8))) - } - - def longToBytes(value: Long): Array[Byte] = { - val writeBuffer = Array.fill[Byte](8)(0) - writeBuffer(0) = (value >>> 56).asInstanceOf[Byte] - writeBuffer(1) = (value >>> 48).asInstanceOf[Byte] - writeBuffer(2) = (value >>> 40).asInstanceOf[Byte] - writeBuffer(3) = (value >>> 32).asInstanceOf[Byte] - writeBuffer(4) = (value >>> 24).asInstanceOf[Byte] - writeBuffer(5) = (value >>> 16).asInstanceOf[Byte] - writeBuffer(6) = (value >>> 8).asInstanceOf[Byte] - writeBuffer(7) = (value >>> 0).asInstanceOf[Byte] - writeBuffer - } - - def bytesToLong(buf: Array[Byte]): Long = { - ((buf(0) & 0xFFL) << 56) | - ((buf(1) & 0xFFL) << 48) | - ((buf(2) & 0xFFL) << 40) | - ((buf(3) & 0xFFL) << 32) | - ((buf(4) & 0xFFL) << 24) | - ((buf(5) & 0xFFL) << 16) | - ((buf(6) & 0xFFL) << 8) | - ((buf(7) & 0xFFL) << 0) - } -} diff --git a/akka-actor/src/main/scala/akka/util/Index.scala b/akka-actor/src/main/scala/akka/util/Index.scala index 3289ed8f13..83d8a40885 100644 --- a/akka-actor/src/main/scala/akka/util/Index.scala +++ b/akka-actor/src/main/scala/akka/util/Index.scala @@ -7,6 +7,7 @@ import annotation.tailrec import java.util.concurrent.{ ConcurrentSkipListSet, ConcurrentHashMap } import java.util.{ Comparator, Set ⇒ JSet } +import scala.collection.JavaConverters.{ asScalaIteratorConverter, collectionAsScalaIterableConverter } import scala.collection.mutable /** @@ -71,12 +72,11 @@ class Index[K, V](val mapSize: Int, val valueComparator: Comparator[V]) { * @return Some(value) for the first matching value where the supplied function returns true for the given key, * if no matches it returns None */ - def findValue(key: K)(f: (V) ⇒ Boolean): Option[V] = { - import scala.collection.JavaConversions._ - val set = container get key - if (set ne null) set.iterator.find(f) - else None - } + def findValue(key: K)(f: (V) ⇒ Boolean): Option[V] = + container get key match { + case null ⇒ None + case set ⇒ set.iterator.asScala find f + } /** * Returns an Iterator of V containing the values for the supplied key, or an empty iterator if the key doesn't exist @@ -84,27 +84,24 @@ class Index[K, V](val mapSize: Int, val valueComparator: Comparator[V]) { def valueIterator(key: K): scala.Iterator[V] = { container.get(key) match { case null ⇒ Iterator.empty - case some ⇒ scala.collection.JavaConversions.asScalaIterator(some.iterator()) + case some ⇒ some.iterator.asScala } } /** * Applies the supplied function to all keys and their values */ - def foreach(fun: (K, V) ⇒ Unit): Unit = { - import scala.collection.JavaConversions._ - container.entrySet foreach { e ⇒ e.getValue.foreach(fun(e.getKey, _)) } - } + def foreach(fun: (K, V) ⇒ Unit): Unit = + container.entrySet.iterator.asScala foreach { e ⇒ e.getValue.iterator.asScala.foreach(fun(e.getKey, _)) } /** * Returns the union of all value sets. 
*/ def values: Set[V] = { - import scala.collection.JavaConversions._ val builder = mutable.Set.empty[V] for { - entry ← container.entrySet - v ← entry.getValue + values ← container.values.iterator.asScala + v ← values.iterator.asScala } builder += v builder.toSet } @@ -112,7 +109,7 @@ class Index[K, V](val mapSize: Int, val valueComparator: Comparator[V]) { /** * Returns the key set. */ - def keys: Iterable[K] = scala.collection.JavaConversions.collectionAsScalaIterable(container.keySet) + def keys: Iterable[K] = container.keySet.asScala /** * Disassociates the value of type V from the key of type K diff --git a/akka-actor/src/main/scala/akka/util/SubclassifiedIndex.scala b/akka-actor/src/main/scala/akka/util/SubclassifiedIndex.scala index ae82da6407..236f645864 100644 --- a/akka-actor/src/main/scala/akka/util/SubclassifiedIndex.scala +++ b/akka-actor/src/main/scala/akka/util/SubclassifiedIndex.scala @@ -3,6 +3,8 @@ */ package akka.util +import scala.collection.immutable + /** * Typeclass which describes a classification hierarchy. Observe the contract between `isEqual` and `isSubclass`! */ @@ -30,7 +32,7 @@ private[akka] object SubclassifiedIndex { val kids = subkeys flatMap (_ addValue value) if (!(values contains value)) { values += value - kids :+ ((key, values)) + kids :+ ((key, Set(value))) } else kids } @@ -55,7 +57,7 @@ private[akka] object SubclassifiedIndex { } private[SubclassifiedIndex] def emptyMergeMap[K, V] = internalEmptyMergeMap.asInstanceOf[Map[K, Set[V]]] - private[this] val internalEmptyMergeMap = Map[AnyRef, Set[AnyRef]]().withDefault(_ ⇒ Set[AnyRef]()) + private[this] val internalEmptyMergeMap = Map[AnyRef, Set[AnyRef]]().withDefaultValue(Set[AnyRef]()) } /** @@ -74,7 +76,7 @@ private[akka] class SubclassifiedIndex[K, V] private (private var values: Set[V] import SubclassifiedIndex._ - type Changes = Seq[(K, Set[V])] + type Changes = immutable.Seq[(K, Set[V])] protected var subkeys = Vector.empty[Nonroot[K, V]] @@ -208,5 +210,5 @@ private[akka] class SubclassifiedIndex[K, V] private (private var values: Set[V] private def mergeChangesByKey(changes: Changes): Changes = (emptyMergeMap[K, V] /: changes) { case (m, (k, s)) ⇒ m.updated(k, m(k) ++ s) - }.toSeq + }.to[immutable.Seq] } diff --git a/akka-actor/src/main/scala/akka/util/Timeout.scala b/akka-actor/src/main/scala/akka/util/Timeout.scala index 62faa56f3d..7062eabd35 100644 --- a/akka-actor/src/main/scala/akka/util/Timeout.scala +++ b/akka-actor/src/main/scala/akka/util/Timeout.scala @@ -8,7 +8,7 @@ import language.implicitConversions import java.util.concurrent.TimeUnit import java.lang.{ Double ⇒ JDouble } -import scala.concurrent.util.{ Duration, FiniteDuration } +import scala.concurrent.duration.{ Duration, FiniteDuration } @SerialVersionUID(1L) case class Timeout(duration: FiniteDuration) { diff --git a/akka-actor/src/main/scala/akka/util/Unsafe.java b/akka-actor/src/main/scala/akka/util/Unsafe.java index ace3c1baac..005d1b3441 100644 --- a/akka-actor/src/main/scala/akka/util/Unsafe.java +++ b/akka-actor/src/main/scala/akka/util/Unsafe.java @@ -5,27 +5,9 @@ package akka.util; -import java.lang.reflect.Field; - /** * INTERNAL API */ public final class Unsafe { - public final static sun.misc.Unsafe instance; - static { - try { - sun.misc.Unsafe found = null; - for(Field field : sun.misc.Unsafe.class.getDeclaredFields()) { - if (field.getType() == sun.misc.Unsafe.class) { - field.setAccessible(true); - found = (sun.misc.Unsafe) field.get(null); - break; - } - } - if (found == null) throw new 
IllegalStateException("Can't find instance of sun.misc.Unsafe"); - else instance = found; - } catch(Throwable t) { - throw new ExceptionInInitializerError(t); - } - } + public final static sun.misc.Unsafe instance = scala.concurrent.util.Unsafe.instance; } diff --git a/akka-agent/src/main/scala/akka/agent/Agent.scala b/akka-agent/src/main/scala/akka/agent/Agent.scala index f85fdfc4ed..215de37c28 100644 --- a/akka-agent/src/main/scala/akka/agent/Agent.scala +++ b/akka-agent/src/main/scala/akka/agent/Agent.scala @@ -10,7 +10,7 @@ import akka.pattern.ask import akka.util.Timeout import scala.concurrent.stm._ import scala.concurrent.{ ExecutionContext, Future, Promise, Await } -import scala.concurrent.util.{ FiniteDuration, Duration } +import scala.concurrent.duration.{ FiniteDuration, Duration } /** * Used internally to send functions. diff --git a/akka-agent/src/test/scala/akka/agent/AgentSpec.scala b/akka-agent/src/test/scala/akka/agent/AgentSpec.scala index 746cc18fae..e6fb305151 100644 --- a/akka-agent/src/test/scala/akka/agent/AgentSpec.scala +++ b/akka-agent/src/test/scala/akka/agent/AgentSpec.scala @@ -3,8 +3,7 @@ package akka.agent import language.postfixOps import scala.concurrent.{ Await, Future } -import scala.concurrent.util.Duration -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.util.Timeout import akka.testkit._ import scala.concurrent.stm._ diff --git a/akka-camel/src/main/scala/akka/camel/Activation.scala b/akka-camel/src/main/scala/akka/camel/Activation.scala index a12abc7d0c..b035cbd267 100644 --- a/akka-camel/src/main/scala/akka/camel/Activation.scala +++ b/akka-camel/src/main/scala/akka/camel/Activation.scala @@ -8,8 +8,9 @@ import akka.camel.internal._ import akka.util.Timeout import akka.actor.{ ActorSystem, Props, ActorRef } import akka.pattern._ -import scala.concurrent.util.Duration import concurrent.{ ExecutionContext, Future } +import scala.concurrent.duration.Duration +import scala.concurrent.duration.FiniteDuration /** * Activation trait that can be used to wait on activation or de-activation of Camel endpoints. @@ -34,4 +35,4 @@ trait Activation { * @param timeout the timeout of the Future */ def deactivationFutureFor(endpoint: ActorRef)(implicit timeout: Timeout, executor: ExecutionContext): Future[ActorRef] -} \ No newline at end of file +} diff --git a/akka-camel/src/main/scala/akka/camel/ActorNotRegisteredException.scala b/akka-camel/src/main/scala/akka/camel/ActorNotRegisteredException.scala index 7a303e47b3..29889d8bf6 100644 --- a/akka-camel/src/main/scala/akka/camel/ActorNotRegisteredException.scala +++ b/akka-camel/src/main/scala/akka/camel/ActorNotRegisteredException.scala @@ -3,7 +3,6 @@ package akka.camel * Thrown to indicate that the actor referenced by an endpoint URI cannot be * found in the actor system. 
* - * @author Martin Krasser */ class ActorNotRegisteredException(uri: String) extends RuntimeException { override def getMessage: String = "Actor [%s] doesn't exist" format uri diff --git a/akka-camel/src/main/scala/akka/camel/ActorRouteDefinition.scala b/akka-camel/src/main/scala/akka/camel/ActorRouteDefinition.scala index 9cb84a2a2a..e8b1be8550 100644 --- a/akka-camel/src/main/scala/akka/camel/ActorRouteDefinition.scala +++ b/akka-camel/src/main/scala/akka/camel/ActorRouteDefinition.scala @@ -7,7 +7,7 @@ package akka.camel import akka.actor.ActorRef import akka.camel.internal.component.CamelPath import org.apache.camel.model.ProcessorDefinition -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration /** * Wraps a [[org.apache.camel.model.ProcessorDefinition]]. diff --git a/akka-camel/src/main/scala/akka/camel/Camel.scala b/akka-camel/src/main/scala/akka/camel/Camel.scala index de2e61fd0d..c72193becb 100644 --- a/akka-camel/src/main/scala/akka/camel/Camel.scala +++ b/akka-camel/src/main/scala/akka/camel/Camel.scala @@ -4,16 +4,15 @@ package akka.camel -import internal._ +import akka.camel.internal._ import akka.actor._ +import akka.ConfigurationException import org.apache.camel.ProducerTemplate import org.apache.camel.impl.DefaultCamelContext import org.apache.camel.model.RouteDefinition import com.typesafe.config.Config -import scala.concurrent.util.Duration -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.{ Duration, FiniteDuration } import java.util.concurrent.TimeUnit._ -import akka.ConfigurationException /** * Camel trait encapsulates the underlying camel machinery. @@ -88,8 +87,8 @@ class CamelSettings private[camel] (config: Config, dynamicAccess: DynamicAccess final val StreamingCache: Boolean = config.getBoolean("akka.camel.streamingCache") final val Conversions: (String, RouteDefinition) ⇒ RouteDefinition = { - import scala.collection.JavaConverters.asScalaSetConverter val specifiedConversions = { + import scala.collection.JavaConverters.asScalaSetConverter val section = config.getConfig("akka.camel.conversions") section.entrySet.asScala.map(e ⇒ (e.getKey, section.getString(e.getKey))) } diff --git a/akka-camel/src/main/scala/akka/camel/CamelMessage.scala b/akka-camel/src/main/scala/akka/camel/CamelMessage.scala index 8b0dbef50a..c9dc32e597 100644 --- a/akka-camel/src/main/scala/akka/camel/CamelMessage.scala +++ b/akka-camel/src/main/scala/akka/camel/CamelMessage.scala @@ -5,21 +5,17 @@ package akka.camel import java.util.{ Map ⇒ JMap, Set ⇒ JSet } - -import scala.collection.JavaConversions._ - -import org.apache.camel.{ CamelContext, Message ⇒ JCamelMessage } +import org.apache.camel.{ CamelContext, Message ⇒ JCamelMessage, StreamCache } import akka.AkkaException import scala.reflect.ClassTag +import scala.util.Try +import scala.collection.JavaConversions._ import akka.dispatch.Mapper -import util.{ Success, Failure, Try } /** * An immutable representation of a Camel message. - * @author Martin Krasser */ case class CamelMessage(body: Any, headers: Map[String, Any]) { - def this(body: Any, headers: JMap[String, Any]) = this(body, headers.toMap) //for Java override def toString: String = "CamelMessage(%s, %s)" format (body, headers) @@ -76,8 +72,7 @@ case class CamelMessage(body: Any, headers: Map[String, Any]) { *
* Java API */ - def getHeaderAs[T](name: String, clazz: Class[T], camelContext: CamelContext): T = - headerAs[T](name)(ClassTag(clazz), camelContext).get + def getHeaderAs[T](name: String, clazz: Class[T], camelContext: CamelContext): T = headerAs[T](name)(ClassTag(clazz), camelContext).get /** * Returns a new CamelMessage with a transformed body using a transformer function. @@ -112,7 +107,21 @@ case class CamelMessage(body: Any, headers: Map[String, Any]) { * Java API * */ - def getBodyAs[T](clazz: Class[T], camelContext: CamelContext): T = camelContext.getTypeConverter.mandatoryConvertTo[T](clazz, body) + def getBodyAs[T](clazz: Class[T], camelContext: CamelContext): T = { + val result = camelContext.getTypeConverter.mandatoryConvertTo[T](clazz, body) + // to be able to re-read a StreamCache we must "undo" the side effect by resetting the StreamCache + resetStreamCache() + result + } + + /** + * Reset StreamCache body. Nothing is done if the body is not a StreamCache. + * See http://camel.apache.org/stream-caching.html + */ + def resetStreamCache(): Unit = body match { + case stream: StreamCache ⇒ stream.reset + case _ ⇒ + } /** * Returns a new CamelMessage with a new body, while keeping the same headers. @@ -142,7 +151,6 @@ case class CamelMessage(body: Any, headers: Map[String, Any]) { /** * Companion object of CamelMessage class. * - * @author Martin Krasser */ object CamelMessage { @@ -186,7 +194,7 @@ object CamelMessage { /** * Positive acknowledgement message (used for application-acknowledged message receipts). * When `autoAck` is set to false in the [[akka.camel.Consumer]], you can send an `Ack` to the sender of the CamelMessage. - * @author Martin Krasser + * */ case object Ack { /** Java API to get the Ack singleton */ diff --git a/akka-camel/src/main/scala/akka/camel/CamelSupport.scala b/akka-camel/src/main/scala/akka/camel/CamelSupport.scala index 84cd23e339..cf4c49283d 100644 --- a/akka-camel/src/main/scala/akka/camel/CamelSupport.scala +++ b/akka-camel/src/main/scala/akka/camel/CamelSupport.scala @@ -2,7 +2,7 @@ package akka.camel import akka.actor.Actor import com.typesafe.config.Config -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import java.util.concurrent.TimeUnit._ private[camel] trait CamelSupport { this: Actor ⇒ diff --git a/akka-camel/src/main/scala/akka/camel/Consumer.scala b/akka-camel/src/main/scala/akka/camel/Consumer.scala index 506624dbd6..19ddc85b59 100644 --- a/akka-camel/src/main/scala/akka/camel/Consumer.scala +++ b/akka-camel/src/main/scala/akka/camel/Consumer.scala @@ -7,13 +7,13 @@ package akka.camel import akka.camel.internal.CamelSupervisor.Register import org.apache.camel.model.{ RouteDefinition, ProcessorDefinition } import akka.actor._ -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration._ import akka.dispatch.Mapper /** * Mixed in by Actor implementations that consume message from Camel endpoints. * - * @author Martin Krasser + * */ trait Consumer extends Actor with CamelSupport { import Consumer._ diff --git a/akka-camel/src/main/scala/akka/camel/Producer.scala b/akka-camel/src/main/scala/akka/camel/Producer.scala index 683ff4f20f..017304ea4d 100644 --- a/akka-camel/src/main/scala/akka/camel/Producer.scala +++ b/akka-camel/src/main/scala/akka/camel/Producer.scala @@ -13,8 +13,6 @@ import org.apache.camel.processor.SendProcessor /** * Support trait for producing messages to Camel endpoints. 
- * - * @author Martin Krasser */ trait ProducerSupport extends Actor with CamelSupport { private[this] var messages = Map[ActorRef, Any]() @@ -160,20 +158,20 @@ trait Producer extends ProducerSupport { this: Actor ⇒ /** * For internal use only. - * @author Martin Krasser + * */ private case class MessageResult(message: CamelMessage) extends NoSerializationVerificationNeeded /** * For internal use only. - * @author Martin Krasser + * */ private case class FailureResult(cause: Throwable, headers: Map[String, Any] = Map.empty) extends NoSerializationVerificationNeeded /** * A one-way producer. * - * @author Martin Krasser + * */ trait Oneway extends Producer { this: Actor ⇒ override def oneway: Boolean = true diff --git a/akka-camel/src/main/scala/akka/camel/internal/ActivationTracker.scala b/akka-camel/src/main/scala/akka/camel/internal/ActivationTracker.scala index 43ca2701c6..9beb6a8894 100644 --- a/akka-camel/src/main/scala/akka/camel/internal/ActivationTracker.scala +++ b/akka-camel/src/main/scala/akka/camel/internal/ActivationTracker.scala @@ -6,15 +6,14 @@ package akka.camel.internal import akka.actor._ import collection.mutable.WeakHashMap -import akka.camel._ -import internal.ActivationProtocol._ +import akka.camel.internal.ActivationProtocol._ /** * For internal use only. An actor that tracks activation and de-activation of endpoints. */ -private[akka] final class ActivationTracker extends Actor with ActorLogging { - val activations = new WeakHashMap[ActorRef, ActivationStateMachine] +private[camel] class ActivationTracker extends Actor with ActorLogging { + val activations = new WeakHashMap[ActorRef, ActivationStateMachine] /** * A state machine that keeps track of the endpoint activation status of an actor. */ @@ -22,7 +21,6 @@ private[akka] final class ActivationTracker extends Actor with ActorLogging { type State = PartialFunction[ActivationMessage, Unit] var receive: State = notActivated() - /** * Not activated state * @return a partial function that handles messages in the 'not activated' state @@ -68,8 +66,12 @@ private[akka] final class ActivationTracker extends Actor with ActorLogging { * @return a partial function that handles messages in the 'de-activated' state */ def deactivated: State = { + // deactivated means it was activated at some point, so tell sender it was activated case AwaitActivation(ref) ⇒ sender ! EndpointActivated(ref) case AwaitDeActivation(ref) ⇒ sender ! EndpointDeActivated(ref) + //resurrected at restart. + case msg @ EndpointActivated(ref) ⇒ + receive = activated(Nil) } /** @@ -80,6 +82,7 @@ private[akka] final class ActivationTracker extends Actor with ActorLogging { def failedToActivate(cause: Throwable): State = { case AwaitActivation(ref) ⇒ sender ! EndpointFailedToActivate(ref, cause) case AwaitDeActivation(ref) ⇒ sender ! EndpointFailedToActivate(ref, cause) + case EndpointDeActivated(_) ⇒ // the de-register at termination always sends a de-activated when the cleanup is done. ignoring. } /** @@ -90,6 +93,7 @@ private[akka] final class ActivationTracker extends Actor with ActorLogging { def failedToDeActivate(cause: Throwable): State = { case AwaitActivation(ref) ⇒ sender ! EndpointActivated(ref) case AwaitDeActivation(ref) ⇒ sender ! EndpointFailedToDeActivate(ref, cause) + case EndpointDeActivated(_) ⇒ // the de-register at termination always sends a de-activated when the cleanup is done. ignoring. } } @@ -114,4 +118,4 @@ private[camel] case class AwaitActivation(ref: ActorRef) extends ActivationMessa * For internal use only. 
* @param ref the actorRef */ -private[camel] case class AwaitDeActivation(ref: ActorRef) extends ActivationMessage(ref) \ No newline at end of file +private[camel] case class AwaitDeActivation(ref: ActorRef) extends ActivationMessage(ref) diff --git a/akka-camel/src/main/scala/akka/camel/internal/CamelExchangeAdapter.scala b/akka-camel/src/main/scala/akka/camel/internal/CamelExchangeAdapter.scala index 5750856b37..b6a991d4d5 100644 --- a/akka-camel/src/main/scala/akka/camel/internal/CamelExchangeAdapter.scala +++ b/akka-camel/src/main/scala/akka/camel/internal/CamelExchangeAdapter.scala @@ -1,9 +1,6 @@ package akka.camel.internal -import scala.collection.JavaConversions._ - import org.apache.camel.util.ExchangeHelper - import org.apache.camel.{ Exchange, Message ⇒ JCamelMessage } import akka.camel.{ FailureResult, AkkaCamelException, CamelMessage } @@ -14,7 +11,7 @@ import akka.camel.{ FailureResult, AkkaCamelException, CamelMessage } * This adapter is used to convert to immutable messages to be used with Actors, and convert the immutable messages back * to org.apache.camel.Message when using Camel. * - * @author Martin Krasser + * */ private[camel] class CamelExchangeAdapter(val exchange: Exchange) { /** @@ -83,8 +80,10 @@ private[camel] class CamelExchangeAdapter(val exchange: Exchange) { * * @see AkkaCamelException */ - def toAkkaCamelException(headers: Map[String, Any]): AkkaCamelException = + def toAkkaCamelException(headers: Map[String, Any]): AkkaCamelException = { + import scala.collection.JavaConversions._ new AkkaCamelException(exchange.getException, headers ++ response.getHeaders) + } /** * Creates an immutable Failure object from the adapted Exchange so it can be used internally between Actors. @@ -101,7 +100,10 @@ private[camel] class CamelExchangeAdapter(val exchange: Exchange) { * * @see Failure */ - def toFailureResult(headers: Map[String, Any]): FailureResult = FailureResult(exchange.getException, headers ++ response.getHeaders) + def toFailureResult(headers: Map[String, Any]): FailureResult = { + import scala.collection.JavaConversions._ + FailureResult(exchange.getException, headers ++ response.getHeaders) + } /** * Creates an immutable CamelMessage object from Exchange.getIn so it can be used with Actors. diff --git a/akka-camel/src/main/scala/akka/camel/internal/CamelSupervisor.scala b/akka-camel/src/main/scala/akka/camel/internal/CamelSupervisor.scala index b19bdbc0a2..bbad41e02f 100644 --- a/akka-camel/src/main/scala/akka/camel/internal/CamelSupervisor.scala +++ b/akka-camel/src/main/scala/akka/camel/internal/CamelSupervisor.scala @@ -115,9 +115,9 @@ private[camel] class Registry(activationTracker: ActorRef) extends Actor with Ca case msg @ Register(producer, _, None) ⇒ if (!producers(producer)) { producers += producer - producerRegistrar forward msg parent ! AddWatch(producer) } + producerRegistrar forward msg case DeRegister(actorRef) ⇒ producers.find(_ == actorRef).foreach { p ⇒ deRegisterProducer(p) @@ -155,6 +155,8 @@ private[camel] class ProducerRegistrar(activationTracker: ActorRef) extends Acto } catch { case NonFatal(e) ⇒ throw new ActorActivationException(producer, e) } + } else { + camelObjects.get(producer).foreach { case (endpoint, processor) ⇒ producer ! 
CamelProducerObjects(endpoint, processor) } } case DeRegister(producer) ⇒ camelObjects.get(producer).foreach { diff --git a/akka-camel/src/main/scala/akka/camel/internal/ConsumerActorRouteBuilder.scala b/akka-camel/src/main/scala/akka/camel/internal/ConsumerActorRouteBuilder.scala index 2caf952c6a..a27c23ec2f 100644 --- a/akka-camel/src/main/scala/akka/camel/internal/ConsumerActorRouteBuilder.scala +++ b/akka-camel/src/main/scala/akka/camel/internal/ConsumerActorRouteBuilder.scala @@ -16,7 +16,7 @@ import org.apache.camel.model.RouteDefinition * * @param endpointUri endpoint URI of the consumer actor. * - * @author Martin Krasser + * */ private[camel] class ConsumerActorRouteBuilder(endpointUri: String, consumer: ActorRef, config: ConsumerConfig, settings: CamelSettings) extends RouteBuilder { diff --git a/akka-camel/src/main/scala/akka/camel/internal/DefaultCamel.scala b/akka-camel/src/main/scala/akka/camel/internal/DefaultCamel.scala index 13d5fe73d1..e876a36e2a 100644 --- a/akka-camel/src/main/scala/akka/camel/internal/DefaultCamel.scala +++ b/akka-camel/src/main/scala/akka/camel/internal/DefaultCamel.scala @@ -7,8 +7,7 @@ import akka.event.Logging import akka.camel.{ CamelSettings, Camel } import akka.camel.internal.ActivationProtocol._ import scala.util.control.NonFatal -import scala.concurrent.util.Duration -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration._ import org.apache.camel.ProducerTemplate import concurrent.{ Future, ExecutionContext } import akka.util.Timeout @@ -99,4 +98,4 @@ private[camel] class DefaultCamel(val system: ExtendedActorSystem) extends Camel case EndpointDeActivated(`endpoint`) ⇒ endpoint case EndpointFailedToDeActivate(`endpoint`, cause) ⇒ throw cause }) -} \ No newline at end of file +} diff --git a/akka-camel/src/main/scala/akka/camel/internal/component/ActorComponent.scala b/akka-camel/src/main/scala/akka/camel/internal/component/ActorComponent.scala index 2a1be08354..2585b970c9 100644 --- a/akka-camel/src/main/scala/akka/camel/internal/component/ActorComponent.scala +++ b/akka-camel/src/main/scala/akka/camel/internal/component/ActorComponent.scala @@ -11,8 +11,7 @@ import org.apache.camel.impl.{ DefaultProducer, DefaultEndpoint, DefaultComponen import akka.actor._ import akka.pattern._ import scala.reflect.BeanProperty -import scala.concurrent.util.duration._ -import scala.concurrent.util.Duration +import scala.concurrent.duration._ import scala.concurrent.{ ExecutionContext, Future } import scala.util.control.NonFatal import java.util.concurrent.{ TimeUnit, TimeoutException, CountDownLatch } @@ -21,7 +20,6 @@ import akka.camel.internal.CamelExchangeAdapter import akka.camel.{ ActorNotRegisteredException, Camel, Ack, FailureResult, CamelMessage } import support.TypeConverterSupport import scala.util.{ Failure, Success, Try } -import scala.concurrent.util.FiniteDuration /** * For internal use only. @@ -33,7 +31,7 @@ import scala.concurrent.util.FiniteDuration * Messages are sent to [[akka.camel.Consumer]] actors through a [[akka.camel.internal.component.ActorEndpoint]] that * this component provides. * - * @author Martin Krasser + * */ private[camel] class ActorComponent(camel: Camel, system: ActorSystem) extends DefaultComponent { /** @@ -54,7 +52,7 @@ private[camel] class ActorComponent(camel: Camel, system: ActorSystem) extends D * [actorPath]?[options]%s, * where [actorPath] refers to the actor path to the actor. 
* - * @author Martin Krasser + * */ private[camel] class ActorEndpoint(uri: String, comp: ActorComponent, @@ -106,7 +104,7 @@ private[camel] trait ActorEndpointConfig { * @see akka.camel.component.ActorComponent * @see akka.camel.component.ActorEndpoint * - * @author Martin Krasser + * */ private[camel] class ActorProducer(val endpoint: ActorEndpoint, camel: Camel) extends DefaultProducer(endpoint) with AsyncProcessor { /** @@ -135,7 +133,7 @@ private[camel] class ActorProducer(val endpoint: ActorEndpoint, camel: Camel) ex private[camel] def processExchangeAdapter(exchange: CamelExchangeAdapter): Unit = { val isDone = new CountDownLatch(1) processExchangeAdapter(exchange, new AsyncCallback { def done(doneSync: Boolean) { isDone.countDown() } }) - isDone.await(camel.settings.ReplyTimeout.toMillis, TimeUnit.MILLISECONDS) + isDone.await(endpoint.replyTimeout.length, endpoint.replyTimeout.unit) } /** @@ -183,7 +181,7 @@ private[camel] class ActorProducer(val endpoint: ActorEndpoint, camel: Camel) ex } /** - * For internal use only. Converts Strings to [[scala.concurrent.util.Duration]] + * For internal use only. Converts Strings to [[scala.concurrent.duration.Duration]] */ private[camel] object DurationTypeConverter extends TypeConverterSupport { diff --git a/akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala b/akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala index cd353e04a0..7688df5130 100644 --- a/akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala +++ b/akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala @@ -12,7 +12,7 @@ import org.apache.camel.impl.DefaultCamelContext /** * Subclass this abstract class to create an untyped producer actor. This class is meant to be used from Java. 
* - * @author Martin Krasser + * */ abstract class UntypedProducerActor extends UntypedActor with ProducerSupport { /** diff --git a/akka-camel/src/test/java/akka/camel/ConsumerJavaTestBase.java b/akka-camel/src/test/java/akka/camel/ConsumerJavaTestBase.java index 0c9aad7e23..d8aec8a761 100644 --- a/akka-camel/src/test/java/akka/camel/ConsumerJavaTestBase.java +++ b/akka-camel/src/test/java/akka/camel/ConsumerJavaTestBase.java @@ -4,6 +4,8 @@ package akka.camel; +import scala.concurrent.duration.Duration; +import scala.concurrent.duration.FiniteDuration; import akka.actor.ActorRef; import akka.actor.ActorSystem; import akka.actor.Props; @@ -11,15 +13,13 @@ import akka.testkit.JavaTestKit; import akka.util.Timeout; import scala.concurrent.Await; import scala.concurrent.ExecutionContext; -import scala.concurrent.util.Duration; import org.junit.AfterClass; import org.junit.Test; import java.util.concurrent.TimeUnit; import akka.testkit.AkkaSpec; -import scala.concurrent.util.FiniteDuration; import static org.junit.Assert.*; /** - * @author Martin Krasser + * */ public class ConsumerJavaTestBase { diff --git a/akka-camel/src/test/java/akka/camel/CustomRouteTestBase.java b/akka-camel/src/test/java/akka/camel/CustomRouteTestBase.java index 77b0294f60..ae6d9c5531 100644 --- a/akka-camel/src/test/java/akka/camel/CustomRouteTestBase.java +++ b/akka-camel/src/test/java/akka/camel/CustomRouteTestBase.java @@ -7,7 +7,8 @@ import akka.camel.javaapi.UntypedProducerActor; import akka.util.Timeout; import scala.concurrent.Await; import scala.concurrent.ExecutionContext; -import scala.concurrent.util.Duration; +import scala.concurrent.duration.Duration; +import scala.concurrent.duration.FiniteDuration; import org.apache.camel.CamelExecutionException; import org.apache.camel.Exchange; import org.apache.camel.Predicate; @@ -16,7 +17,6 @@ import org.apache.camel.component.mock.MockEndpoint; import org.junit.Before; import org.junit.After; import org.junit.Test; -import scala.concurrent.util.FiniteDuration; import java.util.concurrent.TimeUnit; diff --git a/akka-camel/src/test/java/akka/camel/MessageJavaTestBase.java b/akka-camel/src/test/java/akka/camel/MessageJavaTestBase.java index 95cdc5007b..d805a8b2c1 100644 --- a/akka-camel/src/test/java/akka/camel/MessageJavaTestBase.java +++ b/akka-camel/src/test/java/akka/camel/MessageJavaTestBase.java @@ -8,6 +8,7 @@ import akka.actor.ActorSystem; import akka.dispatch.Mapper; import akka.japi.Function; import org.apache.camel.NoTypeConversionAvailableException; +import org.apache.camel.converter.stream.InputStreamCache; import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; @@ -18,7 +19,7 @@ import java.util.*; import static org.junit.Assert.assertEquals; /** - * @author Martin Krasser + * */ public class MessageJavaTestBase { static Camel camel; @@ -100,6 +101,14 @@ public class MessageJavaTestBase { message("test1" , createMap("A", "1")).withHeaders(createMap("C", "3"))); } + @Test + public void shouldBeAbleToReReadStreamCacheBody() throws Exception { + CamelMessage msg = new CamelMessage(new InputStreamCache("test1".getBytes("utf-8")), empty); + assertEquals("test1", msg.getBodyAs(String.class, camel.context())); + // re-read + assertEquals("test1", msg.getBodyAs(String.class, camel.context())); + } + private static Set createSet(String... 
entries) { HashSet set = new HashSet(); set.addAll(Arrays.asList(entries)); diff --git a/akka-camel/src/test/java/akka/camel/SampleErrorHandlingConsumer.java b/akka-camel/src/test/java/akka/camel/SampleErrorHandlingConsumer.java index c654e3958d..92fb124a11 100644 --- a/akka-camel/src/test/java/akka/camel/SampleErrorHandlingConsumer.java +++ b/akka-camel/src/test/java/akka/camel/SampleErrorHandlingConsumer.java @@ -7,15 +7,15 @@ package akka.camel; import akka.actor.Status; import akka.camel.javaapi.UntypedConsumerActor; import akka.dispatch.Mapper; -import scala.concurrent.util.Duration; +import scala.concurrent.duration.Duration; import org.apache.camel.builder.Builder; import org.apache.camel.model.ProcessorDefinition; import org.apache.camel.model.RouteDefinition; import scala.Option; -import scala.concurrent.util.FiniteDuration; +import scala.concurrent.duration.FiniteDuration; /** - * @author Martin Krasser + * */ public class SampleErrorHandlingConsumer extends UntypedConsumerActor { private static Mapper> mapper = new Mapper>() { diff --git a/akka-camel/src/test/java/akka/camel/SampleUntypedConsumer.java b/akka-camel/src/test/java/akka/camel/SampleUntypedConsumer.java index be293c21b9..030c951cc9 100644 --- a/akka-camel/src/test/java/akka/camel/SampleUntypedConsumer.java +++ b/akka-camel/src/test/java/akka/camel/SampleUntypedConsumer.java @@ -7,7 +7,7 @@ package akka.camel; import akka.camel.javaapi.UntypedConsumerActor; /** - * @author Martin Krasser + * */ public class SampleUntypedConsumer extends UntypedConsumerActor { diff --git a/akka-camel/src/test/java/akka/camel/SampleUntypedForwardingProducer.java b/akka-camel/src/test/java/akka/camel/SampleUntypedForwardingProducer.java index 375ef36835..b99a7ecc31 100644 --- a/akka-camel/src/test/java/akka/camel/SampleUntypedForwardingProducer.java +++ b/akka-camel/src/test/java/akka/camel/SampleUntypedForwardingProducer.java @@ -6,7 +6,7 @@ package akka.camel; import akka.camel.javaapi.UntypedProducerActor; /** - * @author Martin Krasser + * */ public class SampleUntypedForwardingProducer extends UntypedProducerActor { diff --git a/akka-camel/src/test/java/akka/camel/SampleUntypedReplyingProducer.java b/akka-camel/src/test/java/akka/camel/SampleUntypedReplyingProducer.java index 039494fd00..c47187d1da 100644 --- a/akka-camel/src/test/java/akka/camel/SampleUntypedReplyingProducer.java +++ b/akka-camel/src/test/java/akka/camel/SampleUntypedReplyingProducer.java @@ -7,7 +7,7 @@ package akka.camel; import akka.camel.javaapi.UntypedProducerActor; /** - * @author Martin Krasser + * */ public class SampleUntypedReplyingProducer extends UntypedProducerActor { diff --git a/akka-camel/src/test/scala/akka/camel/ActivationIntegrationTest.scala b/akka-camel/src/test/scala/akka/camel/ActivationIntegrationTest.scala index 54c671c3b5..a945e3a63e 100644 --- a/akka-camel/src/test/scala/akka/camel/ActivationIntegrationTest.scala +++ b/akka-camel/src/test/scala/akka/camel/ActivationIntegrationTest.scala @@ -7,7 +7,7 @@ package akka.camel import language.postfixOps import org.scalatest.matchers.MustMatchers -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import org.apache.camel.ProducerTemplate import akka.actor._ import TestSupport._ diff --git a/akka-camel/src/test/scala/akka/camel/CamelConfigSpec.scala b/akka-camel/src/test/scala/akka/camel/CamelConfigSpec.scala index 9f4b802081..ca7b4ba3cc 100644 --- a/akka-camel/src/test/scala/akka/camel/CamelConfigSpec.scala +++ b/akka-camel/src/test/scala/akka/camel/CamelConfigSpec.scala 
@@ -6,7 +6,7 @@ package akka.camel import org.scalatest.matchers.MustMatchers import org.scalatest.WordSpec import akka.actor.ActorSystem -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import java.util.concurrent.TimeUnit._ class CamelConfigSpec extends WordSpec with MustMatchers { diff --git a/akka-camel/src/test/scala/akka/camel/ConcurrentActivationTest.scala b/akka-camel/src/test/scala/akka/camel/ConcurrentActivationTest.scala index 988e4a78f1..ff5524ad6c 100644 --- a/akka-camel/src/test/scala/akka/camel/ConcurrentActivationTest.scala +++ b/akka-camel/src/test/scala/akka/camel/ConcurrentActivationTest.scala @@ -1,13 +1,18 @@ +/** + * Copyright (C) 2009-2012 Typesafe Inc. + */ package akka.camel +import language.postfixOps + import org.scalatest.WordSpec import org.scalatest.matchers.MustMatchers +import scala.concurrent.{ Promise, Await, Future } +import scala.collection.immutable import akka.camel.TestSupport.NonSharedCamelSystem import akka.actor.{ ActorRef, Props, Actor } import akka.routing.BroadcastRouter -import concurrent.{ Promise, Await, Future } -import scala.concurrent.util.duration._ -import language.postfixOps +import scala.concurrent.duration._ import akka.testkit._ import akka.util.Timeout import org.apache.camel.model.RouteDefinition @@ -58,7 +63,7 @@ class ConcurrentActivationTest extends WordSpec with MustMatchers with NonShared activations.size must be(2 * number * number) // must be the size of the activated activated producers and consumers deactivations.size must be(2 * number * number) - def partitionNames(refs: Seq[ActorRef]) = refs.map(_.path.name).partition(_.startsWith("concurrent-test-echo-consumer")) + def partitionNames(refs: immutable.Seq[ActorRef]) = refs.map(_.path.name).partition(_.startsWith("concurrent-test-echo-consumer")) def assertContainsSameElements(lists: (Seq[_], Seq[_])) { val (a, b) = lists a.intersect(b).size must be(a.size) diff --git a/akka-camel/src/test/scala/akka/camel/ConsumerIntegrationTest.scala b/akka-camel/src/test/scala/akka/camel/ConsumerIntegrationTest.scala index 0de66ae082..6462e0b191 100644 --- a/akka-camel/src/test/scala/akka/camel/ConsumerIntegrationTest.scala +++ b/akka-camel/src/test/scala/akka/camel/ConsumerIntegrationTest.scala @@ -17,7 +17,7 @@ import org.apache.camel.builder.Builder import org.apache.camel.{ FailedToCreateRouteException, CamelExecutionException } import java.util.concurrent.{ ExecutionException, TimeUnit, TimeoutException } import akka.actor.Status.Failure -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import concurrent.{ ExecutionContext, Await } import akka.testkit._ import akka.util.Timeout @@ -30,7 +30,7 @@ class ConsumerIntegrationTest extends WordSpec with MustMatchers with NonSharedC "Consumer must throw FailedToCreateRouteException, while awaiting activation, if endpoint is invalid" in { filterEvents(EventFilter[ActorActivationException](occurrences = 1)) { - val actorRef = system.actorOf(Props(new TestActor(uri = "some invalid uri"))) + val actorRef = system.actorOf(Props(new TestActor(uri = "some invalid uri")), "invalidActor") intercept[FailedToCreateRouteException] { Await.result(camel.activationFutureFor(actorRef), defaultTimeoutDuration) } diff --git a/akka-camel/src/test/scala/akka/camel/MessageScalaTest.scala b/akka-camel/src/test/scala/akka/camel/MessageScalaTest.scala index dd73027624..cbf0190e91 100644 --- a/akka-camel/src/test/scala/akka/camel/MessageScalaTest.scala +++ 
b/akka-camel/src/test/scala/akka/camel/MessageScalaTest.scala @@ -5,11 +5,11 @@ package akka.camel import java.io.InputStream - import org.apache.camel.NoTypeConversionAvailableException import akka.camel.TestSupport.{ SharedCamelSystem } import org.scalatest.FunSuite import org.scalatest.matchers.MustMatchers +import org.apache.camel.converter.stream.InputStreamCache class MessageScalaTest extends FunSuite with MustMatchers with SharedCamelSystem { implicit def camelContext = camel.context @@ -44,12 +44,17 @@ class MessageScalaTest extends FunSuite with MustMatchers with SharedCamelSystem test("mustSetBodyAndPreserveHeaders") { CamelMessage("test1", Map("A" -> "1")).copy(body = "test2") must be( CamelMessage("test2", Map("A" -> "1"))) - } test("mustSetHeadersAndPreserveBody") { CamelMessage("test1", Map("A" -> "1")).copy(headers = Map("C" -> "3")) must be( CamelMessage("test1", Map("C" -> "3"))) + } + test("mustBeAbleToReReadStreamCacheBody") { + val msg = CamelMessage(new InputStreamCache("test1".getBytes("utf-8")), Map.empty) + msg.bodyAs[String] must be("test1") + // re-read + msg.bodyAs[String] must be("test1") } } diff --git a/akka-camel/src/test/scala/akka/camel/ProducerFeatureTest.scala b/akka-camel/src/test/scala/akka/camel/ProducerFeatureTest.scala index 2ead3101bd..07a46781c8 100644 --- a/akka-camel/src/test/scala/akka/camel/ProducerFeatureTest.scala +++ b/akka-camel/src/test/scala/akka/camel/ProducerFeatureTest.scala @@ -15,18 +15,24 @@ import akka.actor.SupervisorStrategy.Stop import org.scalatest.{ BeforeAndAfterEach, BeforeAndAfterAll, WordSpec } import akka.actor._ import akka.pattern._ -import scala.concurrent.util.{ Deadline, FiniteDuration } -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.util.Timeout import org.scalatest.matchers.MustMatchers import akka.testkit._ +import akka.actor.Status.Failure /** * Tests the features of the Camel Producer. */ -class ProducerFeatureTest extends WordSpec with BeforeAndAfterAll with BeforeAndAfterEach with SharedCamelSystem with MustMatchers { +class ProducerFeatureTest extends TestKit(ActorSystem("test", AkkaSpec.testConf)) with WordSpec with BeforeAndAfterAll with BeforeAndAfterEach with MustMatchers { import ProducerFeatureTest._ + implicit def camel = CamelExtension(system) + + override protected def afterAll() { + super.afterAll() + system.shutdown() + } val camelContext = camel.context // to make testing equality of messages easier, otherwise the breadcrumb shows up in the result. 
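The rewrite of `ProducerFeatureTest` below replaces `ask` plus `Await.result` with `tell(message, testActor)` followed by `expectMsg`/`expectMsgPF`, so error replies arrive as plain `Status.Failure` messages instead of thrown exceptions. A condensed sketch of the pattern, with a trivial echo actor standing in for the producer under test:

```scala
import akka.actor.{ Actor, ActorSystem, Props }
import akka.testkit.TestKit

object TellExpectSketch extends App {
  val system = ActorSystem("sketch")
  val kit = new TestKit(system)
  import kit._

  // Trivial echo actor standing in for the Camel producer under test.
  val echo = system.actorOf(Props(new Actor {
    def receive = { case m ⇒ sender ! m }
  }), "echo")

  echo.tell("ping", testActor) // the reply goes to testActor, not a temporary ask-actor
  expectMsg("ping")            // fails within the TestKit default timeout if nothing arrives
  system.shutdown()
}
```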
@@ -41,9 +47,8 @@ class ProducerFeatureTest extends WordSpec with BeforeAndAfterAll with BeforeAnd "produce a message and receive normal response" in { val producer = system.actorOf(Props(new TestProducer("direct:producer-test-2", true)), name = "direct-producer-2") val message = CamelMessage("test", Map(CamelMessage.MessageExchangeId -> "123")) - val future = producer.ask(message)(timeoutDuration) - val expected = CamelMessage("received TEST", Map(CamelMessage.MessageExchangeId -> "123")) - Await.result(future, timeoutDuration) must be === expected + producer.tell(message, testActor) + expectMsg(CamelMessage("received TEST", Map(CamelMessage.MessageExchangeId -> "123"))) stopGracefully(producer) } @@ -66,12 +71,17 @@ class ProducerFeatureTest extends WordSpec with BeforeAndAfterAll with BeforeAnd case _: AkkaCamelException ⇒ Stop } }), name = "prod-anonymous-supervisor") - val producer = Await.result[ActorRef](supervisor.ask(Props(new TestProducer("direct:producer-test-2"))).mapTo[ActorRef], timeoutDuration) + + supervisor.tell(Props(new TestProducer("direct:producer-test-2")), testActor) + val producer = receiveOne(timeoutDuration).asInstanceOf[ActorRef] val message = CamelMessage("fail", Map(CamelMessage.MessageExchangeId -> "123")) filterEvents(EventFilter[AkkaCamelException](occurrences = 1)) { - val e = intercept[AkkaCamelException] { Await.result(producer.ask(message)(timeoutDuration), timeoutDuration) } - e.getMessage must be("failure") - e.headers must be(Map(CamelMessage.MessageExchangeId -> "123")) + producer.tell(message, testActor) + expectMsgPF(timeoutDuration) { + case Failure(e: AkkaCamelException) ⇒ + e.getMessage must be("failure") + e.headers must be(Map(CamelMessage.MessageExchangeId -> "123")) + } } Await.ready(latch, timeoutDuration) deadActor must be(Some(producer)) @@ -102,15 +112,8 @@ class ProducerFeatureTest extends WordSpec with BeforeAndAfterAll with BeforeAnd "produce message to direct:producer-test-3 and receive normal response" in { val producer = system.actorOf(Props(new TestProducer("direct:producer-test-3")), name = "direct-producer-test-3") val message = CamelMessage("test", Map(CamelMessage.MessageExchangeId -> "123")) - val future = producer.ask(message)(timeoutDuration) - - Await.result(future, timeoutDuration) match { - case result: CamelMessage ⇒ - // a normal response must have been returned by the producer - val expected = CamelMessage("received test", Map(CamelMessage.MessageExchangeId -> "123")) - result must be(expected) - case unexpected ⇒ fail("Actor responded with unexpected message:" + unexpected) - } + producer.tell(message, testActor) + expectMsg(CamelMessage("received test", Map(CamelMessage.MessageExchangeId -> "123"))) stopGracefully(producer) } @@ -119,9 +122,12 @@ class ProducerFeatureTest extends WordSpec with BeforeAndAfterAll with BeforeAnd val message = CamelMessage("fail", Map(CamelMessage.MessageExchangeId -> "123")) filterEvents(EventFilter[AkkaCamelException](occurrences = 1)) { - val e = intercept[AkkaCamelException] { Await.result(producer.ask(message)(timeoutDuration), timeoutDuration) } - e.getMessage must be("failure") - e.headers must be(Map(CamelMessage.MessageExchangeId -> "123")) + producer.tell(message, testActor) + expectMsgPF(timeoutDuration) { + case Failure(e: AkkaCamelException) ⇒ + e.getMessage must be("failure") + e.headers must be(Map(CamelMessage.MessageExchangeId -> "123")) + } } stopGracefully(producer) } @@ -130,15 +136,8 @@ class ProducerFeatureTest extends WordSpec with BeforeAndAfterAll with BeforeAnd 
val target = system.actorOf(Props[ReplyingForwardTarget], name = "reply-forwarding-target") val producer = system.actorOf(Props(new TestForwarder("direct:producer-test-2", target)), name = "direct-producer-test-2-forwarder") val message = CamelMessage("test", Map(CamelMessage.MessageExchangeId -> "123")) - val future = producer.ask(message)(timeoutDuration) - - Await.result(future, timeoutDuration) match { - case result: CamelMessage ⇒ - // a normal response must have been returned by the forward target - val expected = CamelMessage("received test", Map(CamelMessage.MessageExchangeId -> "123", "test" -> "result")) - result must be(expected) - case unexpected ⇒ fail("Actor responded with unexpected message:" + unexpected) - } + producer.tell(message, testActor) + expectMsg(CamelMessage("received test", Map(CamelMessage.MessageExchangeId -> "123", "test" -> "result"))) stopGracefully(target, producer) } @@ -148,9 +147,12 @@ class ProducerFeatureTest extends WordSpec with BeforeAndAfterAll with BeforeAnd val message = CamelMessage("fail", Map(CamelMessage.MessageExchangeId -> "123")) filterEvents(EventFilter[AkkaCamelException](occurrences = 1)) { - val e = intercept[AkkaCamelException] { Await.result(producer.ask(message)(timeoutDuration), timeoutDuration) } - e.getMessage must be("failure") - e.headers must be(Map(CamelMessage.MessageExchangeId -> "123", "test" -> "failure")) + producer.tell(message, testActor) + expectMsgPF(timeoutDuration) { + case Failure(e: AkkaCamelException) ⇒ + e.getMessage must be("failure") + e.headers must be(Map(CamelMessage.MessageExchangeId -> "123", "test" -> "failure")) + } } stopGracefully(target, producer) } @@ -181,13 +183,8 @@ class ProducerFeatureTest extends WordSpec with BeforeAndAfterAll with BeforeAnd val producer = system.actorOf(Props(new TestForwarder("direct:producer-test-3", target)), name = "direct-producer-test-3-to-replying-actor") val message = CamelMessage("test", Map(CamelMessage.MessageExchangeId -> "123")) - val future = producer.ask(message)(timeoutDuration) - Await.result(future, timeoutDuration) match { - case message: CamelMessage ⇒ - val expected = CamelMessage("received test", Map(CamelMessage.MessageExchangeId -> "123", "test" -> "result")) - message must be(expected) - case unexpected ⇒ fail("Actor responded with unexpected message:" + unexpected) - } + producer.tell(message, testActor) + expectMsg(CamelMessage("received test", Map(CamelMessage.MessageExchangeId -> "123", "test" -> "result"))) stopGracefully(target, producer) } @@ -197,9 +194,12 @@ class ProducerFeatureTest extends WordSpec with BeforeAndAfterAll with BeforeAnd val message = CamelMessage("fail", Map(CamelMessage.MessageExchangeId -> "123")) filterEvents(EventFilter[AkkaCamelException](occurrences = 1)) { - val e = intercept[AkkaCamelException] { Await.result(producer.ask(message)(timeoutDuration), timeoutDuration) } - e.getMessage must be("failure") - e.headers must be(Map(CamelMessage.MessageExchangeId -> "123", "test" -> "failure")) + producer.tell(message, testActor) + expectMsgPF(timeoutDuration) { + case Failure(e: AkkaCamelException) ⇒ + e.getMessage must be("failure") + e.headers must be(Map(CamelMessage.MessageExchangeId -> "123", "test" -> "failure")) + } } stopGracefully(target, producer) } @@ -225,6 +225,23 @@ class ProducerFeatureTest extends WordSpec with BeforeAndAfterAll with BeforeAnd } stopGracefully(target, producer) } + + "keep producing messages after error" in { + import TestSupport._ + val consumer = start(new 
IntermittentErrorConsumer("direct:intermittentTest-1"), "intermittentTest-error-consumer") + val producer = start(new SimpleProducer("direct:intermittentTest-1"), "intermittentTest-producer") + filterEvents(EventFilter[AkkaCamelException](occurrences = 1)) { + producer.tell("fail", testActor) + expectMsgPF(timeoutDuration) { + case Failure(e) ⇒ + e.getMessage must be("fail") + } + producer.tell("OK", testActor) + expectMsg("OK") + } + stop(consumer) + stop(producer) + } } private def mockEndpoint = camel.context.getEndpoint("mock:mock", classOf[MockEndpoint]) @@ -232,7 +249,7 @@ class ProducerFeatureTest extends WordSpec with BeforeAndAfterAll with BeforeAnd def stopGracefully(actors: ActorRef*)(implicit timeout: Timeout) { val deadline = timeout.duration.fromNow for (a ← actors) - Await.result(gracefulStop(a, deadline.timeLeft.asInstanceOf[FiniteDuration]), deadline.timeLeft) must be === true + Await.result(gracefulStop(a, deadline.timeLeft), deadline.timeLeft) must be === true } } @@ -240,6 +257,11 @@ object ProducerFeatureTest { class TestProducer(uri: String, upper: Boolean = false) extends Actor with Producer { def endpointUri = uri + override def preRestart(reason: Throwable, message: Option[Any]) { + // overridden on purpose so it doesn't try to deRegister and reRegister at restart, + // which would cause a dead-letter message in the test output. + } + override protected def transformOutgoingMessage(msg: Any) = msg match { case msg: CamelMessage ⇒ if (upper) msg.mapBody { body: String ⇒ body.toUpperCase @@ -304,4 +326,18 @@ object ProducerFeatureTest { } } + class SimpleProducer(override val endpointUri: String) extends Producer { + override protected def transformResponse(msg: Any) = msg match { + case m: CamelMessage ⇒ m.bodyAs[String] + case m: Any ⇒ m + } + } + + class IntermittentErrorConsumer(override val endpointUri: String) extends Consumer { + def receive = { + case msg: CamelMessage if msg.bodyAs[String] == "fail" ⇒ sender ! Failure(new Exception("fail")) + case msg: CamelMessage ⇒ sender !
msg + } + } + } diff --git a/akka-camel/src/test/scala/akka/camel/TestSupport.scala b/akka-camel/src/test/scala/akka/camel/TestSupport.scala index c25ccdab3c..4ff7155666 100644 --- a/akka-camel/src/test/scala/akka/camel/TestSupport.scala +++ b/akka-camel/src/test/scala/akka/camel/TestSupport.scala @@ -7,11 +7,10 @@ package akka.camel import language.postfixOps import language.implicitConversions -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import java.util.concurrent.{ TimeoutException, ExecutionException, TimeUnit } import org.scalatest.{ BeforeAndAfterEach, BeforeAndAfterAll, Suite } import org.scalatest.matchers.{ BePropertyMatcher, BePropertyMatchResult } -import scala.concurrent.util.{ FiniteDuration, Duration } import scala.reflect.ClassTag import akka.actor.{ ActorRef, Props, ActorSystem, Actor } import concurrent.Await @@ -75,10 +74,10 @@ private[camel] object TestSupport { } def time[A](block: ⇒ A): FiniteDuration = { - val start = System.currentTimeMillis() + val start = System.nanoTime() block - val duration = System.currentTimeMillis() - start - duration millis + val duration = System.nanoTime() - start + duration nanos } def anInstanceOf[T](implicit tag: ClassTag[T]) = { diff --git a/akka-camel/src/test/scala/akka/camel/UntypedProducerTest.scala b/akka-camel/src/test/scala/akka/camel/UntypedProducerTest.scala index a9d097aa10..e89a568b42 100644 --- a/akka-camel/src/test/scala/akka/camel/UntypedProducerTest.scala +++ b/akka-camel/src/test/scala/akka/camel/UntypedProducerTest.scala @@ -14,7 +14,7 @@ import akka.camel.TestSupport.SharedCamelSystem import akka.actor.Props import akka.pattern._ import scala.concurrent.Await -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import org.scalatest._ import akka.testkit._ import matchers.MustMatchers diff --git a/akka-camel/src/test/scala/akka/camel/internal/ActivationTrackerTest.scala b/akka-camel/src/test/scala/akka/camel/internal/ActivationTrackerTest.scala index 3b6c029fc0..783e7ab9a5 100644 --- a/akka-camel/src/test/scala/akka/camel/internal/ActivationTrackerTest.scala +++ b/akka-camel/src/test/scala/akka/camel/internal/ActivationTrackerTest.scala @@ -2,13 +2,12 @@ package akka.camel.internal import language.postfixOps import org.scalatest.matchers.MustMatchers -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import org.scalatest.{ GivenWhenThen, BeforeAndAfterEach, BeforeAndAfterAll, WordSpec } import akka.actor.{ Props, ActorSystem } import akka.camel._ import akka.testkit.{ TimingTest, TestProbe, TestKit } import akka.camel.internal.ActivationProtocol._ -import scala.concurrent.util.FiniteDuration class ActivationTrackerTest extends TestKit(ActorSystem("test")) with WordSpec with MustMatchers with BeforeAndAfterAll with BeforeAndAfterEach with GivenWhenThen { @@ -110,6 +109,14 @@ class ActivationTrackerTest extends TestKit(ActorSystem("test")) with WordSpec w awaiting.verifyActivated() } + + "send activation message when an actor is activated, deactivated and activated again" taggedAs TimingTest in { + publish(EndpointActivated(actor.ref)) + publish(EndpointDeActivated(actor.ref)) + publish(EndpointActivated(actor.ref)) + awaiting.awaitActivation() + awaiting.verifyActivated() + } } class Awaiting(actor: TestProbe) { diff --git a/akka-camel/src/test/scala/akka/camel/internal/component/ActorComponentConfigurationTest.scala b/akka-camel/src/test/scala/akka/camel/internal/component/ActorComponentConfigurationTest.scala index 
09f9c1aa62..1be5295225 100644 --- a/akka-camel/src/test/scala/akka/camel/internal/component/ActorComponentConfigurationTest.scala +++ b/akka-camel/src/test/scala/akka/camel/internal/component/ActorComponentConfigurationTest.scala @@ -7,7 +7,7 @@ package akka.camel.internal.component import language.postfixOps import org.scalatest.matchers.MustMatchers -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.camel.TestSupport.SharedCamelSystem import org.apache.camel.Component import org.scalatest.WordSpec diff --git a/akka-camel/src/test/scala/akka/camel/internal/component/ActorProducerTest.scala b/akka-camel/src/test/scala/akka/camel/internal/component/ActorProducerTest.scala index 8f7845d3ff..57d4ee02c6 100644 --- a/akka-camel/src/test/scala/akka/camel/internal/component/ActorProducerTest.scala +++ b/akka-camel/src/test/scala/akka/camel/internal/component/ActorProducerTest.scala @@ -10,8 +10,7 @@ import org.mockito.Matchers.any import org.mockito.Mockito._ import org.apache.camel.{ CamelContext, ProducerTemplate, AsyncCallback } import java.util.concurrent.atomic.AtomicBoolean -import scala.concurrent.util.duration._ -import concurrent.util.{ FiniteDuration, Duration } +import scala.concurrent.duration._ import java.lang.String import akka.camel._ import internal.{ DefaultCamel, CamelExchangeAdapter } @@ -24,11 +23,12 @@ import akka.actor.Status.{ Success, Failure } import com.typesafe.config.ConfigFactory import akka.actor.ActorSystem.Settings import akka.event.LoggingAdapter -import akka.testkit.{ TimingTest, TestKit, TestProbe } +import akka.testkit.{ TestLatch, TimingTest, TestKit, TestProbe } import org.apache.camel.impl.DefaultCamelContext import concurrent.{ Await, Promise, Future } import akka.util.Timeout import akka.actor._ +import akka.testkit._ class ActorProducerTest extends TestKit(ActorSystem("test")) with WordSpec with MustMatchers with ActorProducerFixture { implicit val timeout = Timeout(10 seconds) @@ -65,9 +65,7 @@ class ActorProducerTest extends TestKit(ActorSystem("test")) with WordSpec with "process the exchange" in { producer = given(outCapable = false, autoAck = false) import system.dispatcher - val future = Future { - producer.processExchangeAdapter(exchange) - } + val future = Future { producer.processExchangeAdapter(exchange) } within(1 second) { probe.expectMsgType[CamelMessage] info("message sent to consumer") @@ -111,10 +109,21 @@ class ActorProducerTest extends TestKit(ActorSystem("test")) with WordSpec with } "response is not sent by actor" must { - + val latch = TestLatch(1) + val callback = new AsyncCallback { + def done(doneSync: Boolean) { + latch.countDown() + } + } def process() = { producer = given(outCapable = true, replyTimeout = 100 millis) - time(producer.processExchangeAdapter(exchange)) + val duration = time { + producer.processExchangeAdapter(exchange, callback) + // wait for the actor to complete the callback + Await.ready(latch, 1.seconds.dilated) + } + latch.reset() + duration } "timeout after replyTimeout" taggedAs TimingTest in { @@ -159,16 +168,20 @@ class ActorProducerTest extends TestKit(ActorSystem("test")) with WordSpec with val doneSync = producer.processExchangeAdapter(exchange, asyncCallback) - asyncCallback.expectNoCallWithin(100 millis); info("no async callback before response") + asyncCallback.expectNoCallWithin(100 millis) + info("no async callback before response") within(1 second) { probe.expectMsgType[CamelMessage] probe.sender ! 
"some message" } - doneSync must be(false); info("done async") + doneSync must be(false) + info("done async") - asyncCallback.expectDoneAsyncWithin(1 second); info("async callback received") - verify(exchange).setResponse(msg("some message")); info("response as expected") + asyncCallback.expectDoneAsyncWithin(1 second) + info("async callback received") + verify(exchange).setResponse(msg("some message")) + info("response as expected") } } @@ -197,7 +210,10 @@ class ActorProducerTest extends TestKit(ActorSystem("test")) with WordSpec with producer.processExchangeAdapter(exchange, asyncCallback) asyncCallback.awaitCalled(100 millis) verify(exchange).setFailure(Matchers.argThat(new ArgumentMatcher[FailureResult] { - def matches(failure: AnyRef) = { failure.asInstanceOf[FailureResult].cause must be(anInstanceOf[TimeoutException]); true } + def matches(failure: AnyRef) = { + failure.asInstanceOf[FailureResult].cause must be(anInstanceOf[TimeoutException]) + true + } })) } @@ -221,9 +237,12 @@ class ActorProducerTest extends TestKit(ActorSystem("test")) with WordSpec with producer = given(outCapable = false, autoAck = true) val doneSync = producer.processExchangeAdapter(exchange, asyncCallback) - doneSync must be(true); info("done sync") - asyncCallback.expectDoneSyncWithin(1 second); info("async callback called") - verify(exchange, never()).setResponse(any[CamelMessage]); info("no response forwarded to exchange") + doneSync must be(true) + info("done sync") + asyncCallback.expectDoneSyncWithin(1 second) + info("async callback called") + verify(exchange, never()).setResponse(any[CamelMessage]) + info("no response forwarded to exchange") } } @@ -238,11 +257,14 @@ class ActorProducerTest extends TestKit(ActorSystem("test")) with WordSpec with doneSync must be(false) within(1 second) { - probe.expectMsgType[CamelMessage]; info("message sent to consumer") + probe.expectMsgType[CamelMessage] + info("message sent to consumer") probe.sender ! Ack - asyncCallback.expectDoneAsyncWithin(remaining); info("async callback called") + asyncCallback.expectDoneAsyncWithin(remaining) + info("async callback called") } - verify(exchange, never()).setResponse(any[CamelMessage]); info("no response forwarded to exchange") + verify(exchange, never()).setResponse(any[CamelMessage]) + info("no response forwarded to exchange") } } @@ -253,12 +275,16 @@ class ActorProducerTest extends TestKit(ActorSystem("test")) with WordSpec with producer.processExchangeAdapter(exchange, asyncCallback) within(1 second) { - probe.expectMsgType[CamelMessage]; info("message sent to consumer") + probe.expectMsgType[CamelMessage] + info("message sent to consumer") probe.sender ! "some neither Ack nor Failure response" - asyncCallback.expectDoneAsyncWithin(remaining); info("async callback called") + asyncCallback.expectDoneAsyncWithin(remaining) + info("async callback called") } - verify(exchange, never()).setResponse(any[CamelMessage]); info("no response forwarded to exchange") - verify(exchange).setFailure(any[FailureResult]); info("failure set") + verify(exchange, never()).setResponse(any[CamelMessage]) + info("no response forwarded to exchange") + verify(exchange).setFailure(any[FailureResult]) + info("failure set") } } @@ -282,12 +308,15 @@ class ActorProducerTest extends TestKit(ActorSystem("test")) with WordSpec with doneSync must be(false) within(1 second) { - probe.expectMsgType[CamelMessage]; info("message sent to consumer") + probe.expectMsgType[CamelMessage] + info("message sent to consumer") probe.sender ! 
Failure(new Exception) - asyncCallback.awaitCalled(remaining); + asyncCallback.awaitCalled(remaining) } - verify(exchange, never()).setResponse(any[CamelMessage]); info("no response forwarded to exchange") - verify(exchange).setFailure(any[FailureResult]); info("failure set") + verify(exchange, never()).setResponse(any[CamelMessage]) + info("no response forwarded to exchange") + verify(exchange).setFailure(any[FailureResult]) + info("failure set") } } } @@ -363,10 +392,8 @@ trait ActorProducerFixture extends MockitoSugar with BeforeAndAfterAll with Befo def createAsyncCallback = new TestAsyncCallback class TestAsyncCallback extends AsyncCallback { - def expectNoCallWithin(duration: Duration) { - if (callbackReceived.await(duration.toNanos, TimeUnit.NANOSECONDS)) fail("NOT expected callback, but received one!") - } - + def expectNoCallWithin(duration: Duration): Unit = + if (callbackReceived.await(duration.length, duration.unit)) fail("NOT expected callback, but received one!") def awaitCalled(timeout: Duration = 1 second) { valueWithin(1 second) } val callbackReceived = new CountDownLatch(1) @@ -378,7 +405,7 @@ trait ActorProducerFixture extends MockitoSugar with BeforeAndAfterAll with Befo } private[this] def valueWithin(implicit timeout: FiniteDuration) = - if (!callbackReceived.await(timeout.toNanos, TimeUnit.NANOSECONDS)) fail("Callback not received!") + if (!callbackReceived.await(timeout.length, timeout.unit)) fail("Callback not received!") else callbackValue.get def expectDoneSyncWithin(implicit timeout: FiniteDuration): Unit = if (!valueWithin(timeout)) fail("Expected to be done Synchronously") diff --git a/akka-camel/src/test/scala/akka/camel/internal/component/DurationConverterTest.scala b/akka-camel/src/test/scala/akka/camel/internal/component/DurationConverterTest.scala index 307b0d71d7..06c5d5aa5e 100644 --- a/akka-camel/src/test/scala/akka/camel/internal/component/DurationConverterTest.scala +++ b/akka-camel/src/test/scala/akka/camel/internal/component/DurationConverterTest.scala @@ -7,8 +7,7 @@ package akka.camel.internal.component import language.postfixOps import org.scalatest.matchers.MustMatchers -import scala.concurrent.util.duration._ -import scala.concurrent.util.Duration +import scala.concurrent.duration._ import org.scalatest.WordSpec import org.apache.camel.TypeConversionException diff --git a/akka-cluster/src/main/resources/reference.conf b/akka-cluster/src/main/resources/reference.conf index 4347f6c0b0..53b14e2842 100644 --- a/akka-cluster/src/main/resources/reference.conf +++ b/akka-cluster/src/main/resources/reference.conf @@ -70,7 +70,7 @@ akka { failure-detector { # FQCN of the failure detector implementation. - # It must implement akka.cluster.akka.cluster and + # It must implement akka.cluster.FailureDetector and # have constructor with akka.actor.ActorSystem and # akka.cluster.ClusterSettings parameters implementation-class = "akka.cluster.AccrualFailureDetector" @@ -78,6 +78,10 @@ akka { # how often should the node send out heartbeats? heartbeat-interval = 1s + # Number of member nodes that each member will send heartbeat messages to, + # i.e. each node will be monitored by this number of other nodes. + monitored-by-nr-of-members = 5 + # defines the failure detector threshold # A low threshold is prone to generate many wrong suspicions but ensures # a quick detection in the event of a real crash. Conversely, a high @@ -102,22 +106,32 @@ akka { max-sample-size = 1000 } - # Uses JMX and Hyperic SIGAR, if SIGAR is on the classpath. 
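For context on the failure-detector settings above (the metrics block continues right below): the configured `threshold` is compared against the phi value computed from the observed heartbeat-interval history. A schematic version of that calculation, assuming normally distributed intervals; the shipped `AccrualFailureDetector` adds details such as a minimum standard deviation and a bounded sample window:

```scala
object PhiSketch extends App {
  // phi = -log10(probability that a heartbeat arrives later than timeDiff),
  // using a logistic approximation of the normal distribution's tail.
  def phi(timeDiff: Double, mean: Double, stdDeviation: Double): Double = {
    val y = (timeDiff - mean) / stdDeviation
    val e = math.exp(-y * (1.5976 + 0.070566 * y * y))
    if (timeDiff > mean) -math.log10(e / (1.0 + e))
    else -math.log10(1.0 - 1.0 / (1.0 + e))
  }

  // With a 1s mean interval and 100ms deviation, 2s of silence is already
  // far above the default threshold of 8, so the node would be suspected.
  println(phi(timeDiff = 2000, mean = 1000, stdDeviation = 100))
}
```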
metrics { # Enable or disable metrics collector for load-balancing nodes. enabled = on - # How often metrics is sampled on a node. - metrics-interval = 3s + # FQCN of the metrics collector implementation. + # It must implement akka.cluster.MetricsCollector and + # have a constructor with an akka.actor.ActorSystem parameter. + # The default SigarMetricsCollector uses JMX and Hyperic SIGAR, if SIGAR + # is on the classpath, otherwise only JMX. + collector-class = "akka.cluster.SigarMetricsCollector" + + # How often metrics are sampled on a node. + # A shorter interval gives fresher values, at the cost of more frequent sampling. + collect-interval = 3s # How often a node publishes metrics information. gossip-interval = 3s # How quickly the exponential weighting of past data is decayed compared to - # new data. - # If set to 0 data streaming over time will be turned off. - # Set higher to increase the bias toward newer values - rate-of-decay = 10 + # new data. Set lower to increase the bias toward newer values. + # The relevance of each data sample is halved for every passing half-life duration, + # i.e. after 4 times the half-life, a data sample’s relevance is reduced to 6% of + # its original relevance. The initial relevance of a data sample is given by + # 1 - 0.5 ^ (collect-interval / half-life). + # See http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average + moving-average-half-life = 12s } # If the tick-duration of the default scheduler is longer than the @@ -139,6 +153,16 @@ akka { } # Default configuration for routers + actor.deployment.default { + # MetricsSelector to use + # - available: "mix", "heap", "cpu", "load" + # - or: Fully qualified class name of the MetricsSelector class. + # The class must extend akka.cluster.routing.MetricsSelector + # and have a constructor with com.typesafe.config.Config + # parameter.
+ # - default is "mix" + metrics-selector = mix + } actor.deployment.default.cluster { # enable cluster aware router that deploys to nodes in the cluster enabled = off @@ -165,4 +189,5 @@ akka { routees-path = "" } + } diff --git a/akka-cluster/src/main/scala/akka/cluster/AccrualFailureDetector.scala b/akka-cluster/src/main/scala/akka/cluster/AccrualFailureDetector.scala index 7efe1f0f1e..feb950a9a8 100644 --- a/akka-cluster/src/main/scala/akka/cluster/AccrualFailureDetector.scala +++ b/akka-cluster/src/main/scala/akka/cluster/AccrualFailureDetector.scala @@ -6,12 +6,11 @@ package akka.cluster import akka.actor.{ ActorSystem, Address, ExtendedActorSystem } import akka.event.Logging -import scala.collection.immutable.Map +import scala.collection.immutable import scala.annotation.tailrec import java.util.concurrent.atomic.AtomicReference import java.util.concurrent.TimeUnit.NANOSECONDS -import scala.concurrent.util.Duration -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ object AccrualFailureDetector { private def realClock: () ⇒ Long = () ⇒ NANOSECONDS.toMillis(System.nanoTime) @@ -234,7 +233,7 @@ private[cluster] object HeartbeatHistory { */ def apply(maxSampleSize: Int): HeartbeatHistory = HeartbeatHistory( maxSampleSize = maxSampleSize, - intervals = IndexedSeq.empty, + intervals = immutable.IndexedSeq.empty, intervalSum = 0L, squaredIntervalSum = 0L) @@ -249,7 +248,7 @@ private[cluster] object HeartbeatHistory { */ private[cluster] case class HeartbeatHistory private ( maxSampleSize: Int, - intervals: IndexedSeq[Long], + intervals: immutable.IndexedSeq[Long], intervalSum: Long, squaredIntervalSum: Long) { @@ -285,4 +284,4 @@ private[cluster] case class HeartbeatHistory private ( squaredIntervalSum = squaredIntervalSum - pow2(intervals.head)) private def pow2(x: Long) = x * x -} \ No newline at end of file +} diff --git a/akka-cluster/src/main/scala/akka/cluster/Cluster.scala b/akka-cluster/src/main/scala/akka/cluster/Cluster.scala index 25b1cd684b..e362b4ac34 100644 --- a/akka-cluster/src/main/scala/akka/cluster/Cluster.scala +++ b/akka-cluster/src/main/scala/akka/cluster/Cluster.scala @@ -14,17 +14,15 @@ import akka.pattern._ import akka.remote._ import akka.routing._ import akka.util._ -import scala.concurrent.util.duration._ -import scala.concurrent.util.{ Duration, Deadline } +import scala.concurrent.duration._ import scala.concurrent.forkjoin.ThreadLocalRandom import scala.annotation.tailrec -import scala.collection.immutable.SortedSet +import scala.collection.immutable import java.io.Closeable import java.util.concurrent.atomic.AtomicBoolean import java.util.concurrent.atomic.AtomicReference import akka.util.internal.HashedWheelTimer import concurrent.{ ExecutionContext, Await } -import scala.concurrent.util.FiniteDuration /** * Cluster Extension Id and factory for creating Cluster extension. @@ -62,22 +60,22 @@ class Cluster(val system: ExtendedActorSystem) extends Extension { val settings = new ClusterSettings(system.settings.config, system.name) import settings._ - val selfAddress = system.provider match { + val selfAddress: Address = system.provider match { case c: ClusterActorRefProvider ⇒ c.transport.address case other ⇒ throw new ConfigurationException( "ActorSystem [%s] needs to have a 'ClusterActorRefProvider' enabled in the configuration, currently uses [%s]". 
format(system, other.getClass.getName)) } - private val _isRunning = new AtomicBoolean(true) + private val _isTerminated = new AtomicBoolean(false) private val log = Logging(system, "Cluster") log.info("Cluster Node [{}] - is starting up...", selfAddress) - val failureDetector = { + val failureDetector: FailureDetector = { import settings.{ FailureDetectorImplementationClass ⇒ fqcn } system.dynamicAccess.createInstanceFor[FailureDetector]( - fqcn, Seq(classOf[ActorSystem] -> system, classOf[ClusterSettings] -> settings)).recover({ + fqcn, List(classOf[ActorSystem] -> system, classOf[ClusterSettings] -> settings)).recover({ case e ⇒ throw new ConfigurationException("Could not create custom failure detector [" + fqcn + "] due to: " + e.toString) }).get } @@ -171,9 +169,9 @@ class Cluster(val system: ExtendedActorSystem) extends Extension { // ====================================================== /** - * Returns true if the cluster node is up and running, false if it is shut down. + * Returns true if this cluster instance has been shut down. */ - def isRunning: Boolean = _isRunning.get + def isTerminated: Boolean = _isTerminated.get /** * Subscribe to cluster domain events. @@ -243,7 +241,7 @@ class Cluster(val system: ExtendedActorSystem) extends Extension { * in config. Especially useful from tests when Addresses are unknown * before startup time. */ - private[cluster] def joinSeedNodes(seedNodes: IndexedSeq[Address]): Unit = + private[cluster] def joinSeedNodes(seedNodes: immutable.IndexedSeq[Address]): Unit = clusterCore ! InternalClusterAction.JoinSeedNodes(seedNodes) /** @@ -255,7 +253,7 @@ class Cluster(val system: ExtendedActorSystem) extends Extension { * to go through graceful handoff process `LEAVE -> EXITING -> REMOVED -> SHUTDOWN`. */ private[cluster] def shutdown(): Unit = { - if (_isRunning.compareAndSet(true, false)) { + if (_isTerminated.compareAndSet(false, true)) { log.info("Cluster Node [{}] - Shutting down cluster Node and cluster daemons...", selfAddress) system.stop(clusterDaemons) diff --git a/akka-cluster/src/main/scala/akka/cluster/ClusterActorRefProvider.scala b/akka-cluster/src/main/scala/akka/cluster/ClusterActorRefProvider.scala index 024dfdc00c..5adb57615a 100644 --- a/akka-cluster/src/main/scala/akka/cluster/ClusterActorRefProvider.scala +++ b/akka-cluster/src/main/scala/akka/cluster/ClusterActorRefProvider.scala @@ -18,16 +18,29 @@ import akka.actor.Props import akka.actor.Scheduler import akka.actor.Scope import akka.actor.Terminated -import akka.cluster.routing.ClusterRouterConfig -import akka.cluster.routing.ClusterRouterSettings import akka.dispatch.ChildTerminated import akka.event.EventStream +import akka.japi.Util.immutableSeq import akka.remote.RemoteActorRefProvider import akka.remote.RemoteDeployer import akka.remote.routing.RemoteRouterConfig +import akka.routing.RouterConfig +import akka.routing.DefaultResizer +import akka.cluster.routing.ClusterRouterConfig +import akka.cluster.routing.ClusterRouterSettings +import akka.cluster.routing.AdaptiveLoadBalancingRouter +import akka.cluster.routing.MixMetricsSelector +import akka.cluster.routing.HeapMetricsSelector +import akka.cluster.routing.SystemLoadAverageMetricsSelector +import akka.cluster.routing.CpuMetricsSelector +import akka.cluster.routing.MetricsSelector /** * INTERNAL API + * + * The `ClusterActorRefProvider` will load the [[akka.cluster.Cluster]] + * extension, i.e. the cluster will automatically be started when + * the `ClusterActorRefProvider` is used.
*/ class ClusterActorRefProvider( _systemName: String, @@ -42,10 +55,17 @@ class ClusterActorRefProvider( override def init(system: ActorSystemImpl): Unit = { super.init(system) + // initialize/load the Cluster extension + Cluster(system) + remoteDeploymentWatcher = system.systemActorOf(Props[RemoteDeploymentWatcher], "RemoteDeploymentWatcher") } - override val deployer: ClusterDeployer = new ClusterDeployer(settings, dynamicAccess) + /** + * Factory method to make it possible to override the deployer in a subclass. + * Creates a new instance every time. + */ + override protected def createDeployer: ClusterDeployer = new ClusterDeployer(settings, dynamicAccess) /** * This method is overridden here to keep track of remote deployed actors to @@ -108,6 +128,36 @@ private[akka] class ClusterDeployer(_settings: ActorSystem.Settings, _pm: Dynami case None ⇒ None } } + + override protected def createRouterConfig(routerType: String, key: String, config: Config, deployment: Config): RouterConfig = { + val routees = immutableSeq(deployment.getStringList("routees.paths")) + val nrOfInstances = deployment.getInt("nr-of-instances") + val resizer = if (config.hasPath("resizer")) Some(DefaultResizer(deployment.getConfig("resizer"))) else None + + routerType match { + case "adaptive" ⇒ + val metricsSelector = deployment.getString("metrics-selector") match { + case "mix" ⇒ MixMetricsSelector + case "heap" ⇒ HeapMetricsSelector + case "cpu" ⇒ CpuMetricsSelector + case "load" ⇒ SystemLoadAverageMetricsSelector + case fqn ⇒ + val args = List(classOf[Config] -> deployment) + dynamicAccess.createInstanceFor[MetricsSelector](fqn, args).recover({ + case exception ⇒ throw new IllegalArgumentException( + ("Cannot instantiate metrics-selector [%s], defined in [%s], " + + "make sure it extends [akka.cluster.routing.MetricsSelector] and " + + "has a constructor with a [com.typesafe.config.Config] parameter") + .format(fqn, key), exception) + }).get + } + + AdaptiveLoadBalancingRouter(metricsSelector, nrOfInstances, routees, resizer) + + case _ ⇒ super.createRouterConfig(routerType, key, config, deployment) + } + + } } @SerialVersionUID(1L) diff --git a/akka-cluster/src/main/scala/akka/cluster/ClusterDaemon.scala b/akka-cluster/src/main/scala/akka/cluster/ClusterDaemon.scala index 6012c48f45..99fa7e2821 100644 --- a/akka-cluster/src/main/scala/akka/cluster/ClusterDaemon.scala +++ b/akka-cluster/src/main/scala/akka/cluster/ClusterDaemon.scala @@ -3,9 +3,8 @@ */ package akka.cluster -import scala.collection.immutable.SortedSet -import scala.concurrent.util.{ Deadline, Duration } -import scala.concurrent.util.duration._ +import scala.collection.immutable +import scala.concurrent.duration._ import scala.concurrent.forkjoin.ThreadLocalRandom import akka.actor.{ Actor, ActorLogging, ActorRef, Address, Cancellable, Props, ReceiveTimeout, RootActorPath, Scheduler } import akka.actor.Status.Failure @@ -16,7 +15,6 @@ import akka.cluster.MemberStatus._ import akka.cluster.ClusterEvent._ import language.existentials import language.postfixOps -import scala.concurrent.util.FiniteDuration /** * Base trait for all cluster messages. All ClusterMessage's are serializable. */ @@ -63,7 +61,7 @@ private[cluster] object InternalClusterAction { /** * Command to initiate the process to join the specified * seed nodes. */ - case class JoinSeedNodes(seedNodes: IndexedSeq[Address]) + case class JoinSeedNodes(seedNodes: immutable.IndexedSeq[Address]) /** * Start message of the process to join one of the seed nodes.
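`createRouterConfig` above branches on the `adaptive` router type and maps the `metrics-selector` value to a selector object, falling back to reflective instantiation for a fully qualified class name. A hypothetical deployment snippet that would exercise that path (keys mirror the reference.conf additions; the values are illustrative):

```scala
import com.typesafe.config.ConfigFactory

object AdaptiveDeploymentSketch extends App {
  // Illustrative deployment section for an adaptive cluster router.
  val deployment = ConfigFactory.parseString("""
    router = adaptive
    metrics-selector = heap   # or mix / cpu / load / a MetricsSelector FQCN
    nr-of-instances = 10
    cluster {
      enabled = on
      max-nr-of-instances-per-node = 2
    }
    """)

  // The deployer would branch on these two values as in createRouterConfig above.
  println(deployment.getString("router"))           // adaptive
  println(deployment.getString("metrics-selector")) // heap
}
```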
@@ -155,8 +153,8 @@ private[cluster] final class ClusterDaemon(settings: ClusterSettings) extends Ac withDispatcher(context.props.dispatcher), name = "publisher") val core = context.actorOf(Props(new ClusterCoreDaemon(publisher)). withDispatcher(context.props.dispatcher), name = "core") - context.actorOf(Props[ClusterHeartbeatDaemon]. - withDispatcher(context.props.dispatcher), name = "heartbeat") + context.actorOf(Props[ClusterHeartbeatReceiver]. + withDispatcher(context.props.dispatcher), name = "heartbeatReceiver") if (settings.MetricsEnabled) context.actorOf(Props(new ClusterMetricsCollector(publisher)). withDispatcher(context.props.dispatcher), name = "metrics") @@ -172,59 +170,44 @@ private[cluster] final class ClusterDaemon(settings: ClusterSettings) extends Ac private[cluster] final class ClusterCoreDaemon(publisher: ActorRef) extends Actor with ActorLogging { import ClusterLeaderAction._ import InternalClusterAction._ - import ClusterHeartbeatSender._ + import ClusterHeartbeatSender.JoinInProgress val cluster = Cluster(context.system) import cluster.{ selfAddress, scheduler, failureDetector } import cluster.settings._ val vclockNode = VectorClock.Node(selfAddress.toString) - val selfHeartbeat = Heartbeat(selfAddress) // note that self is not initially member, // and the Gossip is not versioned for this 'Node' yet var latestGossip: Gossip = Gossip() - var joinInProgress: Map[Address, Deadline] = Map.empty var stats = ClusterStats() - val heartbeatSender = context.actorOf(Props[ClusterHeartbeatSender]. - withDispatcher(UseDispatcher), name = "heartbeatSender") val coreSender = context.actorOf(Props[ClusterCoreSender]. withDispatcher(UseDispatcher), name = "coreSender") + val heartbeatSender = context.actorOf(Props[ClusterHeartbeatSender]. + withDispatcher(UseDispatcher), name = "heartbeatSender") import context.dispatcher // start periodic gossip to random nodes in cluster - val gossipTask = - FixedRateTask(scheduler, PeriodicTasksInitialDelay.max(GossipInterval).asInstanceOf[FiniteDuration], GossipInterval) { - self ! GossipTick - } - - // start periodic heartbeat to all nodes in cluster - val heartbeatTask = - FixedRateTask(scheduler, PeriodicTasksInitialDelay.max(HeartbeatInterval).asInstanceOf[FiniteDuration], HeartbeatInterval) { - self ! HeartbeatTick - } + val gossipTask = scheduler.schedule(PeriodicTasksInitialDelay.max(GossipInterval), + GossipInterval, self, GossipTick) // start periodic cluster failure detector reaping (moving nodes condemned by the failure detector to unreachable list) - val failureDetectorReaperTask = - FixedRateTask(scheduler, PeriodicTasksInitialDelay.max(UnreachableNodesReaperInterval).asInstanceOf[FiniteDuration], UnreachableNodesReaperInterval) { - self ! ReapUnreachableTick - } + val failureDetectorReaperTask = scheduler.schedule(PeriodicTasksInitialDelay.max(UnreachableNodesReaperInterval), + UnreachableNodesReaperInterval, self, ReapUnreachableTick) // start periodic leader action management (only applies for the current leader) - private val leaderActionsTask = - FixedRateTask(scheduler, PeriodicTasksInitialDelay.max(LeaderActionsInterval).asInstanceOf[FiniteDuration], LeaderActionsInterval) { - self ! 
LeaderActionsTick - } + val leaderActionsTask = scheduler.schedule(PeriodicTasksInitialDelay.max(LeaderActionsInterval), + LeaderActionsInterval, self, LeaderActionsTick) // start periodic publish of current stats - private val publishStatsTask: Option[Cancellable] = + val publishStatsTask: Option[Cancellable] = if (PublishStatsInterval == Duration.Zero) None - else Some(FixedRateTask(scheduler, PeriodicTasksInitialDelay.max(PublishStatsInterval).asInstanceOf[FiniteDuration], PublishStatsInterval) { - self ! PublishStatsTick - }) + else Some(scheduler.schedule(PeriodicTasksInitialDelay.max(PublishStatsInterval), + PublishStatsInterval, self, PublishStatsTick)) override def preStart(): Unit = { if (AutoJoin) self ! JoinSeedNodes(SeedNodes) @@ -232,7 +215,6 @@ private[cluster] final class ClusterCoreDaemon(publisher: ActorRef) extends Acto override def postStop(): Unit = { gossipTask.cancel() - heartbeatTask.cancel() failureDetectorReaperTask.cancel() leaderActionsTask.cancel() publishStatsTask foreach { _.cancel() } @@ -250,7 +232,6 @@ private[cluster] final class ClusterCoreDaemon(publisher: ActorRef) extends Acto case msg: GossipEnvelope ⇒ receiveGossip(msg) case msg: GossipMergeConflict ⇒ receiveGossipMerge(msg) case GossipTick ⇒ gossip() - case HeartbeatTick ⇒ heartbeat() case ReapUnreachableTick ⇒ reapUnreachableMembers() case LeaderActionsTick ⇒ leaderActions() case PublishStatsTick ⇒ publishInternalStats() @@ -275,7 +256,7 @@ private[cluster] final class ClusterCoreDaemon(publisher: ActorRef) extends Acto def initJoin(): Unit = sender ! InitJoinAck(selfAddress) - def joinSeedNodes(seedNodes: IndexedSeq[Address]): Unit = { + def joinSeedNodes(seedNodes: immutable.IndexedSeq[Address]): Unit = { // only the node which is named first in the list of seed nodes will join itself if (seedNodes.isEmpty || seedNodes.head == selfAddress) self ! JoinTo(selfAddress) @@ -293,12 +274,12 @@ private[cluster] final class ClusterCoreDaemon(publisher: ActorRef) extends Acto val localGossip = latestGossip // wipe our state since a node that joins a cluster must be empty latestGossip = Gossip() - joinInProgress = Map(address -> (Deadline.now + JoinTimeout)) // wipe the failure detector since we are starting fresh and shouldn't care about the past failureDetector.reset() publish(localGossip) + heartbeatSender ! JoinInProgress(address, Deadline.now + JoinTimeout) context.become(initialized) if (address == selfAddress) @@ -517,12 +498,7 @@ private[cluster] final class ClusterCoreDaemon(publisher: ActorRef) extends Acto else if (remoteGossip.version < localGossip.version) localGossip // local gossip is newer else remoteGossip // remote gossip is newer - val newJoinInProgress = - if (joinInProgress.isEmpty) joinInProgress - else joinInProgress -- winningGossip.members.map(_.address) -- winningGossip.overview.unreachable.map(_.address) - latestGossip = winningGossip seen selfAddress - joinInProgress = newJoinInProgress // for all new joining nodes we remove them from the failure detector (latestGossip.members -- localGossip.members).foreach { @@ -744,27 +720,10 @@ private[cluster] final class ClusterCoreDaemon(publisher: ActorRef) extends Acto } } - def heartbeat(): Unit = { - removeOverdueJoinInProgress() - - val beatTo = latestGossip.members.toSeq.map(_.address) ++ joinInProgress.keys - - val deadline = Deadline.now + HeartbeatInterval - beatTo.foreach { address ⇒ if (address != selfAddress) heartbeatSender ! 
SendHeartbeat(selfHeartbeat, address, deadline) } - } - - /** - * Removes overdue joinInProgress from State. - */ - def removeOverdueJoinInProgress(): Unit = { - joinInProgress --= joinInProgress collect { case (address, deadline) if deadline.isOverdue ⇒ address } - } - /** * Reaps the unreachable members (moves them to the 'unreachable' list in the cluster overview) according to the failure detector's verdict. */ def reapUnreachableMembers(): Unit = { - if (!isSingletonCluster && isAvailable) { // only scrutinize if we are a non-singleton cluster and available @@ -804,14 +763,14 @@ private[cluster] final class ClusterCoreDaemon(publisher: ActorRef) extends Acto def isSingletonCluster: Boolean = latestGossip.isSingletonCluster - def isAvailable: Boolean = latestGossip.isAvailable(selfAddress) + def isAvailable: Boolean = !latestGossip.isUnreachable(selfAddress) /** * Gossips latest gossip to a random member in the set of members passed in as argument. * * @return the used [[akka.actor.Address] if any */ - private def gossipToRandomNodeOf(addresses: IndexedSeq[Address]): Option[Address] = { + private def gossipToRandomNodeOf(addresses: immutable.IndexedSeq[Address]): Option[Address] = { log.debug("Cluster Node [{}] - Selecting random node to gossip to [{}]", selfAddress, addresses.mkString(", ")) // filter out myself val peer = selectRandomNode(addresses filterNot (_ == selfAddress)) @@ -864,7 +823,7 @@ private[cluster] final class ClusterCoreDaemon(publisher: ActorRef) extends Acto * 5. seed3 retries the join procedure and gets acks from seed2 first, and then joins to seed2 * */ -private[cluster] final class JoinSeedNodeProcess(seedNodes: IndexedSeq[Address]) extends Actor with ActorLogging { +private[cluster] final class JoinSeedNodeProcess(seedNodes: immutable.IndexedSeq[Address]) extends Actor with ActorLogging { import InternalClusterAction._ def selfAddress = Cluster(context.system).selfAddress @@ -938,4 +897,4 @@ private[cluster] case class ClusterStats( def incrementMergeDetectedCount(): ClusterStats = copy(mergeDetectedCount = mergeDetectedCount + 1) -} \ No newline at end of file +} diff --git a/akka-cluster/src/main/scala/akka/cluster/ClusterEvent.scala b/akka-cluster/src/main/scala/akka/cluster/ClusterEvent.scala index 8d87f3fe53..f82f5e8835 100644 --- a/akka-cluster/src/main/scala/akka/cluster/ClusterEvent.scala +++ b/akka-cluster/src/main/scala/akka/cluster/ClusterEvent.scala @@ -4,12 +4,15 @@ package akka.cluster import language.postfixOps -import scala.collection.immutable.SortedSet +import scala.collection.immutable import akka.actor.{ Actor, ActorLogging, ActorRef, Address } import akka.cluster.ClusterEvent._ import akka.cluster.MemberStatus._ import akka.event.EventStream import akka.actor.AddressTerminated +import java.lang.Iterable +import akka.japi.Util.immutableSeq +import akka.util.Collections.EmptyImmutableSeq /** * Domain events published to the event bus. @@ -28,7 +31,7 @@ object ClusterEvent { * Current snapshot state of the cluster. Sent to new subscriber. 
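`CurrentClusterState` below is the snapshot a new subscriber receives first; after that, changes arrive as the individual domain events produced by `diff`. A minimal subscriber sketch (class name and log messages are illustrative):

```scala
import akka.actor.{ Actor, ActorLogging }
import akka.cluster.Cluster
import akka.cluster.ClusterEvent._

class ClusterListener extends Actor with ActorLogging {
  val cluster = Cluster(context.system)

  // subscribing triggers an initial CurrentClusterState snapshot
  override def preStart(): Unit = cluster.subscribe(self, classOf[ClusterDomainEvent])
  override def postStop(): Unit = cluster.unsubscribe(self)

  def receive = {
    case state: CurrentClusterState ⇒ log.info("snapshot: {} members", state.members.size)
    case MemberUp(member)           ⇒ log.info("member up: {}", member.address)
    case _: ClusterDomainEvent      ⇒ // other events ignored in this sketch
  }
}
```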
*/ case class CurrentClusterState( - members: SortedSet[Member] = SortedSet.empty, + members: immutable.SortedSet[Member] = immutable.SortedSet.empty, unreachable: Set[Member] = Set.empty, convergence: Boolean = false, seenBy: Set[Address] = Set.empty, @@ -47,19 +50,15 @@ object ClusterEvent { * Java API * Read only */ - def getUnreachable: java.util.Set[Member] = { - import scala.collection.JavaConverters._ - unreachable.asJava - } + def getUnreachable: java.util.Set[Member] = + scala.collection.JavaConverters.setAsJavaSetConverter(unreachable).asJava /** * Java API * Read only */ - def getSeenBy: java.util.Set[Address] = { - import scala.collection.JavaConverters._ - seenBy.asJava - } + def getSeenBy: java.util.Set[Address] = + scala.collection.JavaConverters.setAsJavaSetConverter(seenBy).asJava /** * Java API @@ -139,11 +138,16 @@ object ClusterEvent { } /** - * INTERNAL API * - * Current snapshot of cluster member metrics. Published to subscribers. + * Current snapshot of cluster node metrics. Published to subscribers. */ - case class ClusterMetricsChanged(nodes: Set[NodeMetrics]) extends ClusterDomainEvent + case class ClusterMetricsChanged(nodeMetrics: Set[NodeMetrics]) extends ClusterDomainEvent { + /** + * Java API + */ + def getNodeMetrics: java.lang.Iterable[NodeMetrics] = + scala.collection.JavaConverters.asJavaIterableConverter(nodeMetrics).asJava + } /** * INTERNAL API @@ -159,7 +163,7 @@ object ClusterEvent { /** * INTERNAL API */ - private[cluster] def diff(oldGossip: Gossip, newGossip: Gossip): IndexedSeq[ClusterDomainEvent] = { + private[cluster] def diff(oldGossip: Gossip, newGossip: Gossip): immutable.IndexedSeq[ClusterDomainEvent] = { val newMembers = newGossip.members -- oldGossip.members val membersGroupedByAddress = (newGossip.members.toList ++ oldGossip.members.toList).groupBy(_.address) @@ -194,18 +198,18 @@ object ClusterEvent { val newConvergence = newGossip.convergence val convergenceChanged = newConvergence != oldGossip.convergence - val convergenceEvents = if (convergenceChanged) Seq(ConvergenceChanged(newConvergence)) else Seq.empty + val convergenceEvents = if (convergenceChanged) List(ConvergenceChanged(newConvergence)) else EmptyImmutableSeq val leaderEvents = - if (newGossip.leader != oldGossip.leader) Seq(LeaderChanged(newGossip.leader)) - else Seq.empty + if (newGossip.leader != oldGossip.leader) List(LeaderChanged(newGossip.leader)) + else EmptyImmutableSeq val newSeenBy = newGossip.seenBy val seenEvents = - if (convergenceChanged || newSeenBy != oldGossip.seenBy) Seq(SeenChanged(newConvergence, newSeenBy)) - else Seq.empty + if (convergenceChanged || newSeenBy != oldGossip.seenBy) List(SeenChanged(newConvergence, newSeenBy)) + else EmptyImmutableSeq - memberEvents.toIndexedSeq ++ unreachableEvents ++ downedEvents ++ unreachableDownedEvents ++ removedEvents ++ + memberEvents.toVector ++ unreachableEvents ++ downedEvents ++ unreachableDownedEvents ++ removedEvents ++ leaderEvents ++ convergenceEvents ++ seenEvents } diff --git a/akka-cluster/src/main/scala/akka/cluster/ClusterHeartbeat.scala b/akka-cluster/src/main/scala/akka/cluster/ClusterHeartbeat.scala index b48c9f066b..4ea4382b5a 100644 --- a/akka-cluster/src/main/scala/akka/cluster/ClusterHeartbeat.scala +++ b/akka-cluster/src/main/scala/akka/cluster/ClusterHeartbeat.scala @@ -5,30 +5,47 @@ package akka.cluster import language.postfixOps -import akka.actor.{ ReceiveTimeout, ActorLogging, ActorRef, Address, Actor, RootActorPath, Props } -import java.security.MessageDigest +import 
scala.collection.immutable +import scala.annotation.tailrec +import scala.concurrent.duration._ +import java.net.URLEncoder +import akka.actor.{ ActorLogging, ActorRef, Address, Actor, RootActorPath, PoisonPill, Props } import akka.pattern.{ CircuitBreaker, CircuitBreakerOpenException } -import scala.concurrent.util.duration._ -import scala.concurrent.util.Deadline +import akka.cluster.ClusterEvent._ +import akka.routing.ConsistentHash /** - * Sent at regular intervals for failure detection. + * INTERNAL API */ -case class Heartbeat(from: Address) extends ClusterMessage +private[akka] object ClusterHeartbeatReceiver { + /** + * Sent at regular intervals for failure detection. + */ + case class Heartbeat(from: Address) extends ClusterMessage + + /** + * Tell failure detector at receiving side that it should + * remove the monitoring, because heartbeats will end from + * this node. + */ + case class EndHeartbeat(from: Address) extends ClusterMessage +} /** * INTERNAL API. * - * Receives Heartbeat messages and delegates to Cluster. + * Receives Heartbeat messages and updates failure detector. * Instantiated as a single instance for each Cluster - e.g. heartbeats are serialized * to Cluster message after message, but concurrent with other types of messages. */ -private[cluster] final class ClusterHeartbeatDaemon extends Actor with ActorLogging { +private[cluster] final class ClusterHeartbeatReceiver extends Actor with ActorLogging { + import ClusterHeartbeatReceiver._ val failureDetector = Cluster(context.system).failureDetector def receive = { - case Heartbeat(from) ⇒ failureDetector heartbeat from + case Heartbeat(from) ⇒ failureDetector heartbeat from + case EndHeartbeat(from) ⇒ failureDetector remove from } } @@ -38,69 +55,271 @@ private[cluster] final class ClusterHeartbeatDaemon extends Actor with ActorLogg */ private[cluster] object ClusterHeartbeatSender { /** - * - * Command to [akka.cluster.ClusterHeartbeatSender]], which will send [[akka.cluster.Heartbeat]] - * to the other node. + * Tell [akka.cluster.ClusterHeartbeatSender]] that this node has started joining of + * another node and heartbeats should be sent unconditionally until it becomes + * member or deadline is overdue. This is done to be able to detect immediate death + * of the joining node. * Local only, no need to serialize. */ - case class SendHeartbeat(heartbeatMsg: Heartbeat, to: Address, deadline: Deadline) + case class JoinInProgress(address: Address, deadline: Deadline) } /* * INTERNAL API * * This actor is responsible for sending the heartbeat messages to - * other nodes. Netty blocks when sending to broken connections. This actor - * isolates sending to different nodes by using child workers for each target + * a few other nodes that will monitor this node. + * + * Netty blocks when sending to broken connections. This actor + * isolates sending to different nodes by using child actors for each target * address and thereby reduce the risk of irregular heartbeats to healty * nodes due to broken connections to other nodes. 
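A minimal sketch of the child-actor-per-target isolation described in the comment above, under assumed names (`HeartbeatFanout`, `HeartbeatConnection`) and using `context.child` from later Akka versions in place of the `actorFor`/`isTerminated` lookup in the diff.

```scala
import java.net.URLEncoder
import akka.actor.{ Actor, ActorRef, Props }

class HeartbeatConnection extends Actor {
  def receive = { case _ => () } // the potentially blocking send would happen here
}

class HeartbeatFanout extends Actor {
  // One child per target address: a blocking send to one broken connection
  // cannot delay heartbeats addressed to healthy nodes.
  def connection(to: String): ActorRef = {
    val name = URLEncoder.encode(to, "UTF-8") // child names must be URI-safe
    context.child(name).getOrElse(context.actorOf(Props(new HeartbeatConnection), name))
  }
  def receive = { case to: String => connection(to) ! "heartbeat" }
}
```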
*/ private[cluster] final class ClusterHeartbeatSender extends Actor with ActorLogging { import ClusterHeartbeatSender._ + import ClusterHeartbeatSenderConnection._ + import ClusterHeartbeatReceiver._ + import InternalClusterAction.HeartbeatTick + + val cluster = Cluster(context.system) + import cluster.{ selfAddress, scheduler } + import cluster.settings._ + import context.dispatcher + + val selfHeartbeat = Heartbeat(selfAddress) + val selfEndHeartbeat = EndHeartbeat(selfAddress) + + var state = ClusterHeartbeatSenderState.empty(ConsistentHash(Seq.empty[Address], HeartbeatConsistentHashingVirtualNodesFactor), + selfAddress.toString, MonitoredByNrOfMembers) + + // start periodic heartbeat to other nodes in cluster + val heartbeatTask = scheduler.schedule(PeriodicTasksInitialDelay max HeartbeatInterval, + HeartbeatInterval, self, HeartbeatTick) + + override def preStart(): Unit = cluster.subscribe(self, classOf[MemberEvent]) + + override def postStop(): Unit = { + heartbeatTask.cancel() + cluster.unsubscribe(self) + } /** * Looks up and returns the remote cluster heartbeat connection for the specific address. */ def clusterHeartbeatConnectionFor(address: Address): ActorRef = - context.actorFor(RootActorPath(address) / "system" / "cluster" / "heartbeat") - - val digester = MessageDigest.getInstance("MD5") - - /** - * Child name is MD5 hash of the address. - * FIXME Change to URLEncode when ticket #2123 has been fixed - */ - def encodeChildName(name: String): String = { - digester update name.getBytes("UTF-8") - digester.digest.map { h ⇒ "%02x".format(0xFF & h) }.mkString - } + context.actorFor(RootActorPath(address) / "system" / "cluster" / "heartbeatReceiver") def receive = { - case msg @ SendHeartbeat(from, to, deadline) ⇒ - val workerName = encodeChildName(to.toString) - val worker = context.actorFor(workerName) match { + case HeartbeatTick ⇒ heartbeat() + case s: CurrentClusterState ⇒ reset(s) + case MemberUnreachable(m) ⇒ removeMember(m) + case MemberRemoved(m) ⇒ removeMember(m) + case e: MemberEvent ⇒ addMember(e.member) + case JoinInProgress(a, d) ⇒ addJoinInProgress(a, d) + } + + def reset(snapshot: CurrentClusterState): Unit = + state = state.reset(snapshot.members.collect { case m if m.address != selfAddress ⇒ m.address }) + + def addMember(m: Member): Unit = if (m.address != selfAddress) + state = state addMember m.address + + def removeMember(m: Member): Unit = if (m.address != selfAddress) + state = state removeMember m.address + + def addJoinInProgress(address: Address, deadline: Deadline): Unit = if (address != selfAddress) + state = state.addJoinInProgress(address, deadline) + + def heartbeat(): Unit = { + state = state.removeOverdueJoinInProgress() + + def connection(to: Address): ActorRef = { + // URL encoded target address as child actor name + val connectionName = URLEncoder.encode(to.toString, "UTF-8") + context.actorFor(connectionName) match { case notFound if notFound.isTerminated ⇒ - context.actorOf(Props(new ClusterHeartbeatSenderWorker(clusterHeartbeatConnectionFor(to))), workerName) + context.actorOf(Props(new ClusterHeartbeatSenderConnection(clusterHeartbeatConnectionFor(to))), connectionName) case child ⇒ child } - worker ! msg + } + + val deadline = Deadline.now + HeartbeatInterval + state.active foreach { to ⇒ connection(to) ! SendHeartbeat(selfHeartbeat, to, deadline) } + + // When sending heartbeats to a node is stopped a few `EndHeartbeat` messages is + // sent to notify it that no more heartbeats will be sent. 
+ for ((to, count) ← state.ending) { + val c = connection(to) + c ! SendEndHeartbeat(selfEndHeartbeat, to) + if (count == NumberOfEndHeartbeats) { + state = state.removeEnding(to) + c ! PoisonPill + } else + state = state.increaseEndingCount(to) + } } } /** - * Responsible for sending [[akka.cluster.Heartbeat]] to one specific address. + * INTERNAL API + */ +private[cluster] object ClusterHeartbeatSenderState { + /** + * Initial, empty state + */ + def empty(consistentHash: ConsistentHash[Address], selfAddressStr: String, + monitoredByNrOfMembers: Int): ClusterHeartbeatSenderState = + ClusterHeartbeatSenderState(consistentHash, selfAddressStr, monitoredByNrOfMembers) + + /** + * Create a new state based on previous state, and + * keep track of which nodes to stop sending heartbeats to. + */ + private def apply( + old: ClusterHeartbeatSenderState, + consistentHash: ConsistentHash[Address], + all: Set[Address]): ClusterHeartbeatSenderState = { + + /** + * Select a few peers that heartbeats will be sent to, i.e. that will + * monitor this node. Try to send heartbeats to same nodes as much + * as possible, but re-balance with consistent hashing algorithm when + * new members are added or removed. + */ + def selectPeers: Set[Address] = { + val allSize = all.size + val nrOfPeers = math.min(allSize, old.monitoredByNrOfMembers) + // try more if consistentHash results in same node as already selected + val attemptLimit = nrOfPeers * 2 + @tailrec def select(acc: Set[Address], n: Int): Set[Address] = { + if (acc.size == nrOfPeers || n == attemptLimit) acc + else select(acc + consistentHash.nodeFor(old.selfAddressStr + n), n + 1) + } + if (nrOfPeers >= allSize) all + else select(Set.empty[Address], 0) + } + + val curr = selectPeers + // start ending process for nodes not selected any more + // abort ending process for nodes that have been selected again + val end = old.ending ++ (old.current -- curr).map(_ -> 0) -- curr + old.copy(consistentHash = consistentHash, all = all, current = curr, ending = end) + } + +} + +/** + * INTERNAL API * - * Netty blocks when sending to broken connections, and this actor uses - * a configurable circuit breaker to reduce connect attempts to broken + * State used by [akka.cluster.ClusterHeartbeatSender]. + * The initial state is created with `empty` in the of + * the companion object, thereafter the state is modified + * with the methods, such as `addMember`. It is immutable, + * i.e. the methods return new instances. 
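The copy-on-update style described above, reduced to a runnable sketch of just the `ending` bookkeeping (field and method names mirror the state class, the rest is illustrative); every "mutator" returns a new instance, so the actor can swap its state atomically.

```scala
final case class SenderState(ending: Map[String, Int] = Map.empty) {
  def addEnding(a: String): SenderState    = copy(ending = ending + (a -> 0))
  def removeEnding(a: String): SenderState = copy(ending = ending - a)
  def increaseEndingCount(a: String): SenderState =
    copy(ending = ending + (a -> (ending.getOrElse(a, 0) + 1)))
}

val s0 = SenderState()
val s1 = s0.addEnding("akka://sys@host:2552").increaseEndingCount("akka://sys@host:2552")
// s1.ending == Map("akka://sys@host:2552" -> 1), while s0 is unchanged
```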
+ */ +private[cluster] case class ClusterHeartbeatSenderState private ( + consistentHash: ConsistentHash[Address], + selfAddressStr: String, + monitoredByNrOfMembers: Int, + all: Set[Address] = Set.empty, + current: Set[Address] = Set.empty, + ending: Map[Address, Int] = Map.empty, + joinInProgress: Map[Address, Deadline] = Map.empty) { + + // FIXME can be disabled as optimization + assertInvariants + + private def assertInvariants: Unit = { + val currentAndEnding = current.intersect(ending.keySet) + require(currentAndEnding.isEmpty, + "Same nodes in current and ending not allowed, got [%s]" format currentAndEnding) + val joinInProgressAndAll = joinInProgress.keySet.intersect(all) + require(joinInProgressAndAll.isEmpty, + "Same nodes in joinInProgress and all not allowed, got [%s]" format joinInProgressAndAll) + val currentNotInAll = current -- all + require(currentNotInAll.isEmpty, + "Nodes in current but not in all not allowed, got [%s]" format currentNotInAll) + require(all.isEmpty == consistentHash.isEmpty, "ConsistentHash doesn't correspond to all nodes [%s]" + format all) + } + + val active: Set[Address] = current ++ joinInProgress.keySet + + def reset(nodes: Set[Address]): ClusterHeartbeatSenderState = + ClusterHeartbeatSenderState(nodes.foldLeft(this) { _ removeJoinInProgress _ }, + consistentHash = ConsistentHash(nodes, consistentHash.virtualNodesFactor), + all = nodes) + + def addMember(a: Address): ClusterHeartbeatSenderState = + ClusterHeartbeatSenderState(removeJoinInProgress(a), all = all + a, consistentHash = consistentHash :+ a) + + def removeMember(a: Address): ClusterHeartbeatSenderState = + ClusterHeartbeatSenderState(removeJoinInProgress(a), all = all - a, consistentHash = consistentHash :- a) + + private def removeJoinInProgress(address: Address): ClusterHeartbeatSenderState = { + if (joinInProgress contains address) + copy(joinInProgress = joinInProgress - address, ending = ending + (address -> 0)) + else this + } + + def addJoinInProgress(address: Address, deadline: Deadline): ClusterHeartbeatSenderState = { + if (all contains address) this + else copy(joinInProgress = joinInProgress + (address -> deadline), ending = ending - address) + } + + /** + * Cleanup overdue joinInProgress, in case a joining node never + * became member, for some reason. + */ + def removeOverdueJoinInProgress(): ClusterHeartbeatSenderState = { + val overdue = joinInProgress collect { case (address, deadline) if deadline.isOverdue ⇒ address } + if (overdue.isEmpty) this + else + copy(ending = ending ++ overdue.map(_ -> 0), joinInProgress = joinInProgress -- overdue) + } + + def removeEnding(a: Address): ClusterHeartbeatSenderState = copy(ending = ending - a) + + def increaseEndingCount(a: Address): ClusterHeartbeatSenderState = copy(ending = ending + (a -> (ending(a) + 1))) + +} + +/** + * INTERNAL API + */ +private[cluster] object ClusterHeartbeatSenderConnection { + import ClusterHeartbeatReceiver._ + + /** + * Command to [akka.cluster.ClusterHeartbeatSenderConnection]], which will send + * [[akka.cluster.ClusterHeartbeatReceiver.Heartbeat]] to the other node. + * Local only, no need to serialize. + */ + case class SendHeartbeat(heartbeatMsg: Heartbeat, to: Address, deadline: Deadline) + + /** + * Command to [akka.cluster.ClusterHeartbeatSenderConnection]], which will send + * [[akka.cluster.ClusterHeartbeatReceiver.EndHeartbeat]] to the other node. + * Local only, no need to serialize. 
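A sketch of the `selectPeers` rebalancing from the state factory above, with `String` standing in for `Address`: the consistent hash ring keeps most selections stable when members come and go, and the attempt limit guards against the hash repeatedly yielding an already chosen node.

```scala
import scala.annotation.tailrec
import akka.routing.ConsistentHash

def selectPeers(all: Set[String], selfKey: String, monitoredByNrOfMembers: Int): Set[String] = {
  val nrOfPeers = math.min(all.size, monitoredByNrOfMembers)
  if (nrOfPeers >= all.size) all
  else {
    val ring = ConsistentHash(all, virtualNodesFactor = 10)
    val attemptLimit = nrOfPeers * 2
    @tailrec def select(acc: Set[String], n: Int): Set[String] =
      if (acc.size == nrOfPeers || n == attemptLimit) acc
      else select(acc + ring.nodeFor(selfKey + n), n + 1)
    select(Set.empty, 0)
  }
}
```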
+ */ + case class SendEndHeartbeat(endHeartbeatMsg: EndHeartbeat, to: Address) +} + +/** + * Responsible for sending [[akka.cluster.ClusterHeartbeatReceiver.Heartbeat]] + * and [[akka.cluster.ClusterHeartbeatReceiver.EndHeartbeat]] to one specific address. + * + * This actor exists only because Netty blocks when sending to broken connections, + * and this actor uses a configurable circuit breaker to reduce connect attempts to broken * connections. * - * @see ClusterHeartbeatSender + * @see akka.cluster.ClusterHeartbeatSender */ -private[cluster] final class ClusterHeartbeatSenderWorker(toRef: ActorRef) +private[cluster] final class ClusterHeartbeatSenderConnection(toRef: ActorRef) extends Actor with ActorLogging { - import ClusterHeartbeatSender._ + import ClusterHeartbeatSenderConnection._ val breaker = { val cbSettings = Cluster(context.system).settings.SendCircuitBreakerSettings @@ -111,21 +330,19 @@ private[cluster] final class ClusterHeartbeatSenderWorker(toRef: ActorRef) onClose(log.debug("CircuitBreaker Closed for [{}]", toRef)) } - // make sure it will cleanup when not used any more - context.setReceiveTimeout(30 seconds) - def receive = { case SendHeartbeat(heartbeatMsg, _, deadline) ⇒ if (!deadline.isOverdue) { - // the CircuitBreaker will measure elapsed time and open if too many long calls + log.debug("Cluster Node [{}] - Heartbeat to [{}]", heartbeatMsg.from, toRef) + // Netty blocks when sending to broken connections, the CircuitBreaker will + // measure elapsed time and open if too many long calls try breaker.withSyncCircuitBreaker { - log.debug("Cluster Node [{}] - Heartbeat to [{}]", heartbeatMsg.from, toRef) toRef ! heartbeatMsg - if (deadline.isOverdue) log.debug("Sending heartbeat to [{}] took longer than expected", toRef) } catch { case e: CircuitBreakerOpenException ⇒ /* skip sending heartbeat to broken connection */ } } - - case ReceiveTimeout ⇒ context.stop(self) // cleanup when not used - + if (deadline.isOverdue) log.debug("Sending heartbeat to [{}] took longer than expected", toRef) + case SendEndHeartbeat(endHeartbeatMsg, _) ⇒ + log.debug("Cluster Node [{}] - EndHeartbeat to [{}]", endHeartbeatMsg.from, toRef) + toRef ! endHeartbeatMsg } -} \ No newline at end of file +} diff --git a/akka-cluster/src/main/scala/akka/cluster/ClusterJmx.scala b/akka-cluster/src/main/scala/akka/cluster/ClusterJmx.scala index 4eb27e836e..ae023263c8 100644 --- a/akka-cluster/src/main/scala/akka/cluster/ClusterJmx.scala +++ b/akka-cluster/src/main/scala/akka/cluster/ClusterJmx.scala @@ -16,17 +16,70 @@ import javax.management.InstanceNotFoundException * Interface for the cluster JMX MBean. */ trait ClusterNodeMBean { + + /** + * Member status for this node. + */ def getMemberStatus: String + + /** + * Comma separated addresses of member nodes, sorted in the cluster ring order. + * The address format is `akka://actor-system-name@hostname:port` + */ + def getMembers: String + + /** + * Comma separated addresses of unreachable member nodes. + * The address format is `akka://actor-system-name@hostname:port` + */ + def getUnreachable: String + + /* + * String that will list all nodes in the node ring as follows: + * {{{ + * Members: + * Member(address = akka://system0@localhost:5550, status = Up) + * Member(address = akka://system1@localhost:5551, status = Up) + * Unreachable: + * Member(address = akka://system2@localhost:5553, status = Down) + * }}} + */ def getClusterStatus: String + + /** + * Get the address of the current leader. 
+ * The address format is `akka://actor-system-name@hostname:port` + */ def getLeader: String + /** + * Does the cluster consist of only one member? + */ def isSingleton: Boolean - def isConvergence: Boolean - def isAvailable: Boolean - def isRunning: Boolean + /** + * Returns true if the node is not unreachable and not `Down` + * and not `Removed`. + */ + def isAvailable: Boolean + + /** + * Try to join this cluster node with the node specified by 'address'. + * The address format is `akka://actor-system-name@hostname:port`. + * A 'Join(thisNodeAddress)' command is sent to the node to join. + */ def join(address: String) + + /** + * Send command to issue state transition to LEAVING for the node specified by 'address'. + * The address format is `akka://actor-system-name@hostname:port` + */ def leave(address: String) + + /** + * Send command to DOWN the node specified by 'address'. + * The address format is `akka://actor-system-name@hostname:port` + */ def down(address: String) } @@ -47,34 +100,26 @@ private[akka] class ClusterJmx(cluster: Cluster, log: LoggingAdapter) { // JMX attributes (bean-style) - /* - * Sends a string to the JMX client that will list all nodes in the node ring as follows: - * {{{ - * Members: - * Member(address = akka://system0@localhost:5550, status = Up) - * Member(address = akka://system1@localhost:5551, status = Up) - * Unreachable: - * Member(address = akka://system2@localhost:5553, status = Down) - * }}} - */ def getClusterStatus: String = { val unreachable = clusterView.unreachableMembers "\nMembers:\n\t" + clusterView.members.mkString("\n\t") + { if (unreachable.nonEmpty) "\nUnreachable:\n\t" + unreachable.mkString("\n\t") else "" } } + def getMembers: String = + clusterView.members.toSeq.map(_.address).mkString(",") + + def getUnreachable: String = + clusterView.unreachableMembers.map(_.address).mkString(",") + def getMemberStatus: String = clusterView.status.toString - def getLeader: String = clusterView.leader.toString + def getLeader: String = clusterView.leader.fold("")(_.toString) def isSingleton: Boolean = clusterView.isSingletonCluster - def isConvergence: Boolean = clusterView.convergence - def isAvailable: Boolean = clusterView.isAvailable - def isRunning: Boolean = clusterView.isRunning - // JMX commands def join(address: String) = cluster.join(AddressFromURIString(address)) diff --git a/akka-cluster/src/main/scala/akka/cluster/ClusterMetricsCollector.scala b/akka-cluster/src/main/scala/akka/cluster/ClusterMetricsCollector.scala index 87bb15450b..271ad1d29a 100644 --- a/akka-cluster/src/main/scala/akka/cluster/ClusterMetricsCollector.scala +++ b/akka-cluster/src/main/scala/akka/cluster/ClusterMetricsCollector.scala @@ -4,35 +4,39 @@ package akka.cluster -import scala.language.postfixOps -import scala.concurrent.util.duration._ -import scala.concurrent.util.FiniteDuration -import scala.collection.immutable.{ SortedSet, Map } +import java.io.Closeable +import java.lang.System.{ currentTimeMillis ⇒ newTimestamp } +import java.lang.management.{ OperatingSystemMXBean, MemoryMXBean, ManagementFactory } +import java.lang.reflect.InvocationTargetException +import java.lang.reflect.Method +import scala.collection.immutable +import scala.concurrent.duration._ import scala.concurrent.forkjoin.ThreadLocalRandom import scala.util.{ Try, Success, Failure } -import scala.math.ScalaNumber -import scala.runtime.{ RichLong, RichDouble, RichInt } - -import akka.actor._ -import akka.event.LoggingAdapter +import akka.ConfigurationException +import akka.actor.Actor 
+import akka.actor.ActorLogging +import akka.actor.ActorRef +import akka.actor.ActorSystem +import akka.actor.Address +import akka.actor.DynamicAccess +import akka.actor.ExtendedActorSystem import akka.cluster.MemberStatus.Up - -import java.lang.management.{ OperatingSystemMXBean, MemoryMXBean, ManagementFactory } -import java.lang.reflect.Method -import java.lang.System.{ currentTimeMillis ⇒ newTimestamp } +import akka.event.Logging +import java.lang.management.MemoryUsage /** * INTERNAL API. * - * This strategy is primarily for load-balancing of nodes. It controls metrics sampling + * Cluster metrics is primarily for load-balancing of nodes. It controls metrics sampling * at a regular frequency, prepares highly variable data for further analysis by other entities, - * and publishes the latest cluster metrics data around the node ring to assist in determining - * the need to redirect traffic to the least-loaded nodes. + * and publishes the latest cluster metrics data around the node ring and local eventStream + * to assist in determining the need to redirect traffic to the least-loaded nodes. * * Metrics sampling is delegated to the [[akka.cluster.MetricsCollector]]. * - * Calculation of statistical data for each monitored process is delegated to the - * [[akka.cluster.DataStream]] for exponential smoothing, with additional decay factor. + * Smoothing of the data for each monitored process is delegated to the + * [[akka.cluster.EWMA]] for exponential weighted moving average. */ private[cluster] class ClusterMetricsCollector(publisher: ActorRef) extends Actor with ActorLogging { @@ -47,31 +51,29 @@ private[cluster] class ClusterMetricsCollector(publisher: ActorRef) extends Acto /** * The node ring gossipped that contains only members that are Up. */ - var nodes: SortedSet[Address] = SortedSet.empty + var nodes: immutable.SortedSet[Address] = immutable.SortedSet.empty /** * The latest metric values with their statistical data. */ - var latestGossip: MetricsGossip = MetricsGossip(MetricsRateOfDecay) + var latestGossip: MetricsGossip = MetricsGossip.empty /** * The metrics collector that samples data on the node. */ - val collector: MetricsCollector = MetricsCollector(selfAddress, log, context.system.asInstanceOf[ExtendedActorSystem].dynamicAccess) + val collector: MetricsCollector = MetricsCollector(context.system.asInstanceOf[ExtendedActorSystem], settings) /** * Start periodic gossip to random nodes in cluster */ - val gossipTask = FixedRateTask(scheduler, PeriodicTasksInitialDelay.max(MetricsGossipInterval).asInstanceOf[FiniteDuration], MetricsGossipInterval) { - self ! GossipTick - } + val gossipTask = scheduler.schedule(PeriodicTasksInitialDelay max MetricsGossipInterval, + MetricsGossipInterval, self, GossipTick) /** * Start periodic metrics collection */ - val metricsTask = FixedRateTask(scheduler, PeriodicTasksInitialDelay.max(MetricsInterval).asInstanceOf[FiniteDuration], MetricsInterval) { - self ! 
MetricsTick - } + val metricsTask = scheduler.schedule(PeriodicTasksInitialDelay max MetricsInterval, + MetricsInterval, self, MetricsTick) override def preStart(): Unit = { cluster.subscribe(self, classOf[MemberEvent]) @@ -82,7 +84,7 @@ private[cluster] class ClusterMetricsCollector(publisher: ActorRef) extends Acto case GossipTick ⇒ gossip() case MetricsTick ⇒ collect() case state: CurrentClusterState ⇒ receiveState(state) - case MemberUp(m) ⇒ receiveMember(m) + case MemberUp(m) ⇒ addMember(m) case e: MemberEvent ⇒ removeMember(e) case msg: MetricsGossipEnvelope ⇒ receiveGossip(msg) } @@ -97,7 +99,7 @@ private[cluster] class ClusterMetricsCollector(publisher: ActorRef) extends Acto /** * Adds a member to the node ring. */ - def receiveMember(member: Member): Unit = nodes += member.address + def addMember(member: Member): Unit = nodes += member.address /** * Removes a member from the member node ring. @@ -111,7 +113,8 @@ private[cluster] class ClusterMetricsCollector(publisher: ActorRef) extends Acto /** * Updates the initial node ring for those nodes that are [[akka.cluster.MemberStatus.Up]]. */ - def receiveState(state: CurrentClusterState): Unit = nodes = state.members collect { case m if m.status == Up ⇒ m.address } + def receiveState(state: CurrentClusterState): Unit = + nodes = state.members collect { case m if m.status == Up ⇒ m.address } /** * Samples the latest metrics for the node, updates metrics statistics in @@ -126,27 +129,33 @@ private[cluster] class ClusterMetricsCollector(publisher: ActorRef) extends Acto /** * Receives changes from peer nodes, merges remote with local gossip nodes, then publishes - * changes to the event stream for load balancing router consumption, and gossips to peers. + * changes to the event stream for load balancing router consumption, and gossip back. */ def receiveGossip(envelope: MetricsGossipEnvelope): Unit = { - val remoteGossip = envelope.gossip - - if (remoteGossip != latestGossip) { - latestGossip = latestGossip merge remoteGossip - publish() - gossipTo(envelope.from) - } + // remote node might not have same view of member nodes, this side should only care + // about nodes that are known here, otherwise removed nodes can come back + val otherGossip = envelope.gossip.filter(nodes) + latestGossip = latestGossip merge otherGossip + publish() + if (!envelope.reply) + replyGossipTo(envelope.from) } /** * Gossip to peer nodes. */ - def gossip(): Unit = selectRandomNode((nodes - selfAddress).toIndexedSeq) foreach gossipTo + def gossip(): Unit = selectRandomNode((nodes - selfAddress).toVector) foreach gossipTo def gossipTo(address: Address): Unit = - context.actorFor(self.path.toStringWithAddress(address)) ! MetricsGossipEnvelope(selfAddress, latestGossip) + sendGossip(address, MetricsGossipEnvelope(selfAddress, latestGossip, reply = false)) - def selectRandomNode(addresses: IndexedSeq[Address]): Option[Address] = + def replyGossipTo(address: Address): Unit = + sendGossip(address, MetricsGossipEnvelope(selfAddress, latestGossip, reply = true)) + + def sendGossip(address: Address, envelope: MetricsGossipEnvelope): Unit = + context.actorFor(self.path.toStringWithAddress(address)) ! 
envelope + + def selectRandomNode(addresses: immutable.IndexedSeq[Address]): Option[Address] = if (addresses.isEmpty) None else Some(addresses(ThreadLocalRandom.current nextInt addresses.size)) /** @@ -156,61 +165,50 @@ private[cluster] class ClusterMetricsCollector(publisher: ActorRef) extends Acto } +/** + * INTERNAL API + */ +private[cluster] object MetricsGossip { + val empty = MetricsGossip(Set.empty[NodeMetrics]) +} + /** * INTERNAL API * * @param nodes metrics per node */ -private[cluster] case class MetricsGossip(rateOfDecay: Int, nodes: Set[NodeMetrics] = Set.empty) { +private[cluster] case class MetricsGossip(nodes: Set[NodeMetrics]) { /** * Removes nodes if their correlating node ring members are not [[akka.cluster.MemberStatus.Up]] */ def remove(node: Address): MetricsGossip = copy(nodes = nodes filterNot (_.address == node)) + /** + * Only the nodes that are in the `includeNodes` Set. + */ + def filter(includeNodes: Set[Address]): MetricsGossip = + copy(nodes = nodes filter { includeNodes contains _.address }) + /** * Adds new remote [[akka.cluster.NodeMetrics]] and merges existing from a remote gossip. */ - def merge(remoteGossip: MetricsGossip): MetricsGossip = { - val remoteNodes = remoteGossip.nodes.map(n ⇒ n.address -> n).toMap - val toMerge = nodeKeys intersect remoteNodes.keySet - val onlyInRemote = remoteNodes.keySet -- nodeKeys - val onlyInLocal = nodeKeys -- remoteNodes.keySet + def merge(otherGossip: MetricsGossip): MetricsGossip = + otherGossip.nodes.foldLeft(this) { (gossip, nodeMetrics) ⇒ gossip :+ nodeMetrics } - val seen = nodes.collect { - case n if toMerge contains n.address ⇒ n merge remoteNodes(n.address) - case n if onlyInLocal contains n.address ⇒ n - } - - val unseen = remoteGossip.nodes.collect { case n if onlyInRemote contains n.address ⇒ n } - - copy(nodes = seen ++ unseen) + /** + * Adds new local [[akka.cluster.NodeMetrics]], or merges an existing. + */ + def :+(newNodeMetrics: NodeMetrics): MetricsGossip = nodeMetricsFor(newNodeMetrics.address) match { + case Some(existingNodeMetrics) ⇒ + copy(nodes = nodes - existingNodeMetrics + (existingNodeMetrics merge newNodeMetrics)) + case None ⇒ copy(nodes = nodes + newNodeMetrics) } /** - * Adds new local [[akka.cluster.NodeMetrics]] and initializes the data, or merges an existing. + * Returns [[akka.cluster.NodeMetrics]] for a node if exists. */ - def :+(data: NodeMetrics): MetricsGossip = { - val previous = metricsFor(data) - val names = previous map (_.name) - - val (toMerge: Set[Metric], unseen: Set[Metric]) = data.metrics partition (a ⇒ names contains a.name) - val initialized = unseen.map(_.initialize(rateOfDecay)) - val merged = toMerge flatMap (latest ⇒ previous.collect { case peer if latest same peer ⇒ peer :+ latest }) - - val refreshed = nodes filterNot (_.address == data.address) - copy(nodes = refreshed + data.copy(metrics = initialized ++ merged)) - } - - /** - * Returns a set of [[akka.actor.Address]] for a given node set. - */ - def nodeKeys: Set[Address] = nodes map (_.address) - - /** - * Returns metrics for a node if exists. - */ - def metricsFor(node: NodeMetrics): Set[Metric] = nodes flatMap (n ⇒ if (n same node) n.metrics else Set.empty[Metric]) + def nodeMetricsFor(address: Address): Option[NodeMetrics] = nodes find { n ⇒ n.address == address } } @@ -218,7 +216,31 @@ private[cluster] case class MetricsGossip(rateOfDecay: Int, nodes: Set[NodeMetri * INTERNAL API * Envelope adding a sender address to the gossip. 
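A simplified model of the gossip merge above: fold the other side's per-node entries into the local set, letting recency win per address. The real `MetricsGossip` additionally merges the metric sets themselves; this sketch keeps only timestamps.

```scala
final case class NodeSample(address: String, timestamp: Long)

final case class Gossip(nodes: Set[NodeSample]) {
  def :+(n: NodeSample): Gossip = nodes.find(_.address == n.address) match {
    case Some(existing) if existing.timestamp >= n.timestamp => this // ours is newer
    case Some(existing) => copy(nodes = nodes - existing + n)        // theirs is newer
    case None           => copy(nodes = nodes + n)                   // unknown node
  }
  def merge(other: Gossip): Gossip = other.nodes.foldLeft(this)(_ :+ _)
}
```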
*/ -private[cluster] case class MetricsGossipEnvelope(from: Address, gossip: MetricsGossip) extends ClusterMessage +private[cluster] case class MetricsGossipEnvelope(from: Address, gossip: MetricsGossip, reply: Boolean) + extends ClusterMessage + +object EWMA { + /** + * math.log(2) + */ + private val LogOf2 = 0.69315 + + /** + * Calculate the alpha (decay factor) used in [[akka.cluster.EWMA]] + * from specified half-life and interval between observations. + * Half-life is the interval over which the weights decrease by a factor of two. + * The relevance of each data sample is halved for every passing half-life duration, + * i.e. after 4 times the half-life, a data sample’s relevance is reduced to 6% of + * its original relevance. The initial relevance of a data sample is given by + * 1 – 0.5 ^ (collect-interval / half-life). + */ + def alpha(halfLife: FiniteDuration, collectInterval: FiniteDuration): Double = { + val halfLifeMillis = halfLife.toMillis + require(halfLife.toMillis > 0, "halfLife must be > 0 s") + val decayRate = LogOf2 / halfLifeMillis + 1 - math.exp(-decayRate * collectInterval.toMillis) + } +} /** * The exponentially weighted moving average (EWMA) approach captures short-term @@ -226,176 +248,282 @@ private[cluster] case class MetricsGossipEnvelope(from: Address, gossip: Metrics * of its alpha, or decay factor, this provides a statistical streaming data model * that is exponentially biased towards newer entries. * + * http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average + * * An EWMA only needs the most recent forecast value to be kept, as opposed to a standard * moving average model. * * INTERNAL API * - * @param decay sets how quickly the exponential weighting decays for past data compared to new data + * @param alpha decay factor, sets how quickly the exponential weighting decays for past data compared to new data, + * see http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average * - * @param ewma the current exponentially weighted moving average, e.g. Y(n - 1), or, + * @param value the current exponentially weighted moving average, e.g. Y(n - 1), or, * the sampled value resulting from the previous smoothing iteration. * This value is always used as the previous EWMA to calculate the new EWMA. * - * @param timestamp the most recent time of sampling - * - * @param startTime the time of initial sampling for this data stream */ -private[cluster] case class DataStream(decay: Int, ewma: ScalaNumber, startTime: Long, timestamp: Long) - extends ClusterMessage with MetricNumericConverter { +private[cluster] case class EWMA(value: Double, alpha: Double) extends ClusterMessage { - /** - * The rate at which the weights of past observations - * decay as they become more distant. - */ - private val α = 2 / decay + 1 + require(0.0 <= alpha && alpha <= 1.0, "alpha must be between 0.0 and 1.0") /** * Calculates the exponentially weighted moving average for a given monitored data set. - * The datam can be too large to fit into an int or long, thus we use ScalaNumber, - * and defer to BigInt or BigDecimal. 
* * @param xn the new data point - * @return an new [[akka.cluster.DataStream]] with the updated yn and timestamp + * @return a new [[akka.cluster.EWMA]] with the updated value */ - def :+(xn: ScalaNumber): DataStream = convert(xn) fold ( - nl ⇒ copy(ewma = BigInt(α * nl + 1 - α * ewma.longValue()), timestamp = newTimestamp), - nd ⇒ copy(ewma = BigDecimal(α * nd + 1 - α * ewma.doubleValue()), timestamp = newTimestamp)) - - /** - * The duration of observation for this data stream - */ - def duration: FiniteDuration = (timestamp - startTime) millis + def :+(xn: Double): EWMA = { + val newValue = (alpha * xn) + (1 - alpha) * value + if (newValue == value) this // no change + else copy(value = newValue) + } } /** - * INTERNAL API + * Metrics key/value. * - * Companion object of DataStream class. - */ -private[cluster] object DataStream { - - def apply(decay: Int, data: ScalaNumber): Option[DataStream] = if (decay > 0) - Some(DataStream(decay, data, newTimestamp, newTimestamp)) else None - -} - -/** - * INTERNAL API + * Equality of Metric is based on its name. * * @param name the metric name - * - * @param value the metric value, which may or may not be defined - * + * @param value the metric value, which may or may not be defined, it must be a valid numerical value, + * see [[akka.cluster.MetricNumericConverter.defined()]] * @param average the data stream of the metric value, for trending over time. Metrics that are already - * averages (e.g. system load average) or finite (e.g. as total cores), are not trended. + * averages (e.g. system load average) or finite (e.g. as number of processors), are not trended. */ -private[cluster] case class Metric(name: String, value: Option[ScalaNumber], average: Option[DataStream]) +case class Metric private (name: String, value: Number, private val average: Option[EWMA]) extends ClusterMessage with MetricNumericConverter { - /** - * Returns the metric with a new data stream for data trending if eligible, - * otherwise returns the unchanged metric. - */ - def initialize(decay: Int): Metric = if (initializable) copy(average = DataStream(decay, value.get)) else this + require(defined(value), s"Invalid Metric [$name] value [$value]") /** * If defined ( [[akka.cluster.MetricNumericConverter.defined()]] ), updates the new * data point, and if defined, updates the data stream. Returns the updated metric. 
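A standalone worked example of the smoothing math above, using `math.log(2)` in place of the `LogOf2` constant: alpha derived from a half-life, then the same update rule as `EWMA.:+`.

```scala
import scala.concurrent.duration._

def alpha(halfLife: FiniteDuration, collectInterval: FiniteDuration): Double = {
  val decayRate = math.log(2) / halfLife.toMillis
  1 - math.exp(-decayRate * collectInterval.toMillis)
}

final case class Ewma(value: Double, alpha: Double) {
  def :+(xn: Double): Ewma = copy(value = alpha * xn + (1 - alpha) * value)
}

// With a 12 s half-life sampled every 3 s, alpha = 1 - 0.5^(3/12) ≈ 0.159,
// so the stream is biased towards newer observations:
val a = alpha(12.seconds, 3.seconds)
val smoothed = List(100.0, 200.0, 400.0).foldLeft(Ewma(50.0, a))(_ :+ _)
```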
*/ - def :+(latest: Metric): Metric = latest.value match { - case Some(v) if this same latest ⇒ average match { - case Some(previous) ⇒ copy(value = Some(v), average = Some(previous :+ v)) - case None if latest.average.isDefined ⇒ copy(value = Some(v), average = latest.average) - case None if !latest.average.isDefined ⇒ copy(value = Some(v)) - } - case None ⇒ this + def :+(latest: Metric): Metric = if (this sameAs latest) average match { + case Some(avg) ⇒ copy(value = latest.value, average = Some(avg :+ latest.value.doubleValue)) + case None if latest.average.isDefined ⇒ copy(value = latest.value, average = latest.average) + case _ ⇒ copy(value = latest.value) + } + else this + + /** + * The numerical value of the average, if defined, otherwise the latest value + */ + def smoothValue: Double = average match { + case Some(avg) ⇒ avg.value + case None ⇒ value.doubleValue } /** - * @see [[akka.cluster.MetricNumericConverter.defined()]] + * @return true if this value is smoothed */ - def isDefined: Boolean = value match { - case Some(a) ⇒ defined(a) - case None ⇒ false - } + def isSmooth: Boolean = average.isDefined /** * Returns true if that is tracking the same metric as this. */ - def same(that: Metric): Boolean = name == that.name + def sameAs(that: Metric): Boolean = name == that.name - /** - * Returns true if the metric requires initialization. - */ - def initializable: Boolean = trendable && isDefined && average.isEmpty - - /** - * Returns true if the metric is a value applicable for trending. - */ - def trendable: Boolean = !(Metric.noStream contains name) - -} - -/** - * INTERNAL API - * - * Companion object of Metric class. - */ -private[cluster] object Metric extends MetricNumericConverter { - - /** - * The metrics that are already averages or finite are not trended over time. - */ - private val noStream = Set("system-load-average", "total-cores", "processors") - - /** - * Evaluates validity of value based on whether it is available (SIGAR on classpath) - * or defined for the OS (JMX). If undefined we set the value option to None and do not modify - * the latest sampled metric to avoid skewing the statistical trend. - */ - def apply(name: String, value: Option[ScalaNumber]): Metric = value match { - case Some(v) if defined(v) ⇒ Metric(name, value, None) - case _ ⇒ Metric(name, None, None) + override def hashCode = name.## + override def equals(obj: Any) = obj match { + case other: Metric ⇒ sameAs(other) + case _ ⇒ false + } + +} + +/** + * Factory for creating valid Metric instances. + */ +object Metric extends MetricNumericConverter { + + /** + * Creates a new Metric instance if the value is valid, otherwise None + * is returned. Invalid numeric values are negative and NaN/Infinite. + */ + def create(name: String, value: Number, decayFactor: Option[Double]): Option[Metric] = + if (defined(value)) Some(new Metric(name, value, ceateEWMA(value.doubleValue, decayFactor))) + else None + + /** + * Creates a new Metric instance if the Try is successful and the value is valid, + * otherwise None is returned. Invalid numeric values are negative and NaN/Infinite. 
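Hypothetical usage of the `Metric.create` factory above, assuming the `akka.cluster.Metric` definitions from this file are on the classpath; invalid samples become `None` rather than skewing the averages.

```scala
import akka.cluster.Metric

val heapUsed = Metric.create("heap-memory-used", 256L, decayFactor = Some(0.16))
// Some(Metric(...)) with EWMA smoothing seeded at 256.0

val loadAvg = Metric.create("system-load-average", -1, decayFactor = None)
// None, because JMX reports -1 when the load average is undefined
```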
+ */ + def create(name: String, value: Try[Number], decayFactor: Option[Double]): Option[Metric] = value match { + case Success(v) ⇒ create(name, v, decayFactor) + case Failure(_) ⇒ None + } + + private def ceateEWMA(value: Double, decayFactor: Option[Double]): Option[EWMA] = decayFactor match { + case Some(alpha) ⇒ Some(EWMA(value, alpha)) + case None ⇒ None } } /** - * INTERNAL API - * * The snapshot of current sampled health metrics for any monitored process. * Collected and gossipped at regular intervals for dynamic cluster management strategies. * - * For the JVM memory. The amount of used and committed memory will always be <= max if max is defined. - * A memory allocation may fail if it attempts to increase the used memory such that used > committed - * even if used <= max is true (e.g. when the system virtual memory is low). - * - * The system is possibly nearing a bottleneck if the system load average is nearing in cpus/cores. + * Equality of NodeMetrics is based on its address. * * @param address [[akka.actor.Address]] of the node the metrics are gathered at - * - * @param timestamp the time of sampling - * - * @param metrics the array of sampled [[akka.actor.Metric]] + * @param timestamp the time of sampling, in milliseconds since midnight, January 1, 1970 UTC + * @param metrics the set of sampled [[akka.actor.Metric]] */ -private[cluster] case class NodeMetrics(address: Address, timestamp: Long, metrics: Set[Metric] = Set.empty[Metric]) extends ClusterMessage { +case class NodeMetrics(address: Address, timestamp: Long, metrics: Set[Metric] = Set.empty[Metric]) extends ClusterMessage { /** * Returns the most recent data. */ - def merge(that: NodeMetrics): NodeMetrics = if (this updatable that) copy(metrics = that.metrics, timestamp = that.timestamp) else this + def merge(that: NodeMetrics): NodeMetrics = { + require(address == that.address, s"merge only allowed for same address, [$address] != [$that.address]") + if (timestamp >= that.timestamp) this // that is older + else { + // equality is based on the name of the Metric and Set doesn't replace existing element + copy(metrics = that.metrics ++ metrics, timestamp = that.timestamp) + } + } + + def metric(key: String): Option[Metric] = metrics.collectFirst { case m if m.name == key ⇒ m } /** - * Returns true if that address is the same as this and its metric set is more recent. + * Java API */ - def updatable(that: NodeMetrics): Boolean = (this same that) && (that.timestamp > timestamp) + def getMetrics: java.lang.Iterable[Metric] = + scala.collection.JavaConverters.asJavaIterableConverter(metrics).asJava /** * Returns true if that address is the same as this */ - def same(that: NodeMetrics): Boolean = address == that.address + def sameAs(that: NodeMetrics): Boolean = address == that.address + + override def hashCode = address.## + override def equals(obj: Any) = obj match { + case other: NodeMetrics ⇒ sameAs(other) + case _ ⇒ false + } + +} + +/** + * Definitions of the built-in standard metrics. + * + * The following extractors and data structures makes it easy to consume the + * [[akka.cluster.NodeMetrics]] in for example load balancers. 
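A hypothetical consumer of the extractors defined below (for example in a load-balancing router), assuming the `akka.cluster` definitions from this file are on the classpath:

```scala
import akka.cluster.NodeMetrics
import akka.cluster.StandardMetrics.{ Cpu, HeapMemory }

def describe(nodeMetrics: NodeMetrics): String = nodeMetrics match {
  case HeapMemory(address, _, used, committed, _) =>
    s"$address: ${used / 1024 / 1024} MB heap used of ${committed / 1024 / 1024} MB committed"
  case Cpu(address, _, Some(systemLoadAverage), _, processors) =>
    s"$address: load average $systemLoadAverage across $processors processors"
  case other =>
    s"${other.address}: no standard metrics available"
}
```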
+ */ +object StandardMetrics { + + // Constants for the heap related Metric names + final val HeapMemoryUsed = "heap-memory-used" + final val HeapMemoryCommitted = "heap-memory-committed" + final val HeapMemoryMax = "heap-memory-max" + + // Constants for the cpu related Metric names + final val SystemLoadAverage = "system-load-average" + final val Processors = "processors" + final val CpuCombined = "cpu-combined" + + object HeapMemory { + + /** + * Given a NodeMetrics it returns the HeapMemory data if the nodeMetrics contains + * necessary heap metrics. + * @return if possible a tuple matching the HeapMemory constructor parameters + */ + def unapply(nodeMetrics: NodeMetrics): Option[(Address, Long, Long, Long, Option[Long])] = { + for { + used ← nodeMetrics.metric(HeapMemoryUsed) + committed ← nodeMetrics.metric(HeapMemoryCommitted) + } yield (nodeMetrics.address, nodeMetrics.timestamp, + used.smoothValue.longValue, committed.smoothValue.longValue, + nodeMetrics.metric(HeapMemoryMax).map(_.smoothValue.longValue)) + } + + } + + /** + * Java API to extract HeapMemory data from nodeMetrics, if the nodeMetrics + * contains necessary heap metrics, otherwise it returns null. + */ + def extractHeapMemory(nodeMetrics: NodeMetrics): HeapMemory = nodeMetrics match { + case HeapMemory(address, timestamp, used, committed, max) ⇒ + // note that above extractor returns tuple + HeapMemory(address, timestamp, used, committed, max) + case _ ⇒ null + } + + /** + * The amount of used and committed memory will always be <= max if max is defined. + * A memory allocation may fail if it attempts to increase the used memory such that used > committed + * even if used <= max is true (e.g. when the system virtual memory is low). + * + * @param address [[akka.actor.Address]] of the node the metrics are gathered at + * @param timestamp the time of sampling, in milliseconds since midnight, January 1, 1970 UTC + * @param used the current sum of heap memory used from all heap memory pools (in bytes) + * @param committed the current sum of heap memory guaranteed to be available to the JVM + * from all heap memory pools (in bytes). Committed will always be greater than or equal to used. + * @param max the maximum amount of memory (in bytes) that can be used for JVM memory management. + * Can be undefined on some OS. + */ + case class HeapMemory(address: Address, timestamp: Long, used: Long, committed: Long, max: Option[Long]) { + require(committed > 0L, "committed heap expected to be > 0 bytes") + require(max.isEmpty || max.get > 0L, "max heap expected to be > 0 bytes") + } + + object Cpu { + + /** + * Given a NodeMetrics it returns the Cpu data if the nodeMetrics contains + * necessary cpu metrics. + * @return if possible a tuple matching the Cpu constructor parameters + */ + def unapply(nodeMetrics: NodeMetrics): Option[(Address, Long, Option[Double], Option[Double], Int)] = { + for { + processors ← nodeMetrics.metric(Processors) + } yield (nodeMetrics.address, nodeMetrics.timestamp, + nodeMetrics.metric(SystemLoadAverage).map(_.smoothValue), + nodeMetrics.metric(CpuCombined).map(_.smoothValue), processors.value.intValue) + } + + } + + /** + * Java API to extract Cpu data from nodeMetrics, if the nodeMetrics + * contains necessary cpu metrics, otherwise it returns null. 
+ */ + def extractCpu(nodeMetrics: NodeMetrics): Cpu = nodeMetrics match { + case Cpu(address, timestamp, systemLoadAverage, cpuCombined, processors) ⇒ + // note that above extractor returns tuple + Cpu(address, timestamp, systemLoadAverage, cpuCombined, processors) + case _ ⇒ null + } + + /** + * @param address [[akka.actor.Address]] of the node the metrics are gathered at + * @param timestamp the time of sampling, in milliseconds since midnight, January 1, 1970 UTC + * @param systemLoadAverage OS-specific average load on the CPUs in the system, for the past 1 minute, + * The system is possibly nearing a bottleneck if the system load average is nearing number of cpus/cores. + * @param cpuCombined combined CPU sum of User + Sys + Nice + Wait, in percentage ([0.0 - 1.0]. This + * metric can describe the amount of time the CPU spent executing code during n-interval and how + * much more it could theoretically. + * @param processors the number of available processors + */ + case class Cpu( + address: Address, + timestamp: Long, + systemLoadAverage: Option[Double], + cpuCombined: Option[Double], + processors: Int) { + + cpuCombined match { + case Some(x) ⇒ require(0.0 <= x && x <= 1.0, s"cpuCombined must be between [0.0 - 1.0], was [$x]") + case None ⇒ + } + + } } @@ -408,91 +536,199 @@ private[cluster] case class NodeMetrics(address: Address, timestamp: Long, metri private[cluster] trait MetricNumericConverter { /** - * A defined value is neither a -1 or NaN/Infinite: + * An defined value is neither negative nor NaN/Infinite: *

  • JMX system load average and max heap can be 'undefined' for certain OS, in which case a -1 is returned
  • SIGAR combined CPU can occasionally return a NaN or Infinite (known bug)
*/ - def defined(value: ScalaNumber): Boolean = convert(value) fold (a ⇒ value != -1, b ⇒ !(b.isNaN || b.isInfinite)) + def defined(value: Number): Boolean = convertNumber(value) match { + case Left(a) ⇒ a >= 0 + case Right(b) ⇒ !(b < 0.0 || b.isNaN || b.isInfinite) + } /** * May involve rounding or truncation. */ - def convert(from: ScalaNumber): Either[Long, Double] = from match { - case n: BigInt ⇒ Left(n.longValue()) - case n: BigDecimal ⇒ Right(n.doubleValue()) - case n: RichInt ⇒ Left(n.abs) - case n: RichLong ⇒ Left(n.self) - case n: RichDouble ⇒ Right(n.self) + def convertNumber(from: Any): Either[Long, Double] = from match { + case n: Int ⇒ Left(n) + case n: Long ⇒ Left(n) + case n: Double ⇒ Right(n) + case n: Float ⇒ Right(n) + case n: BigInt ⇒ Left(n.longValue) + case n: BigDecimal ⇒ Right(n.doubleValue) + case x ⇒ throw new IllegalArgumentException(s"Not a number [$x]") } } /** * INTERNAL API - * - * Loads JVM metrics through JMX monitoring beans. If Hyperic SIGAR is on the classpath, this - * loads wider and more accurate range of metrics in combination with SIGAR's native OS library. - * - * FIXME switch to Scala reflection - * - * @param sigar the optional org.hyperic.Sigar instance + */ +private[cluster] trait MetricsCollector extends Closeable { + /** + * Samples and collects new data points. + */ + def sample: NodeMetrics +} + +/** + * Loads JVM and system metrics through JMX monitoring beans. * * @param address The [[akka.actor.Address]] of the node being sampled + * @param decay how quickly the exponential weighting of past data is decayed */ -private[cluster] class MetricsCollector private (private val sigar: Option[AnyRef], address: Address) extends MetricNumericConverter { +class JmxMetricsCollector(address: Address, decayFactor: Double) extends MetricsCollector { + import StandardMetrics._ + + private def this(cluster: Cluster) = + this(cluster.selfAddress, + EWMA.alpha(cluster.settings.MetricsMovingAverageHalfLife, cluster.settings.MetricsInterval)) + + /** + * This constructor is used when creating an instance from configured FQCN + */ + def this(system: ActorSystem) = this(Cluster(system)) + + private val decayFactorOption = Some(decayFactor) private val memoryMBean: MemoryMXBean = ManagementFactory.getMemoryMXBean private val osMBean: OperatingSystemMXBean = ManagementFactory.getOperatingSystemMXBean - private val LoadAverage: Option[Method] = createMethodFrom(sigar, "getLoadAverage") - - private val CpuList: Option[Method] = createMethodFrom(sigar, "getCpuInfoList").map(m ⇒ m) - - private val NetInterfaces: Option[Method] = createMethodFrom(sigar, "getNetInterfaceList") - - private val Cpu: Option[Method] = createMethodFrom(sigar, "getCpuPerc") - - private val CombinedCpu: Option[Method] = Try(Cpu.get.getReturnType.getMethod("getCombined")).toOption - /** * Samples and collects new data points. - * - * @return [[akka.cluster.NodeMetrics]] + * Creates a new instance each time. */ - def sample: NodeMetrics = NodeMetrics(address, newTimestamp, Set(cpuCombined, totalCores, - systemLoadAverage, used, committed, max, processors, networkMaxRx, networkMaxTx)) + def sample: NodeMetrics = NodeMetrics(address, newTimestamp, metrics) + + def metrics: Set[Metric] = { + val heap = heapMemoryUsage + Set(systemLoadAverage, heapUsed(heap), heapCommitted(heap), heapMax(heap), processors).flatten + } /** - * (SIGAR / JMX) Returns the OS-specific average system load on the CPUs in the system, for the past 1 minute. 
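For reference, a dependency-free sketch of the JMX sampling that `JmxMetricsCollector` above builds on; every call is standard `java.lang.management` API.

```scala
import java.lang.management.ManagementFactory

val os   = ManagementFactory.getOperatingSystemMXBean
val mem  = ManagementFactory.getMemoryMXBean
val heap = mem.getHeapMemoryUsage // one snapshot, reused for used/committed/max

println(s"load=${os.getSystemLoadAverage}")         // -1 where the platform has no load average
println(s"processors=${os.getAvailableProcessors}")
println(s"heapUsed=${heap.getUsed} heapCommitted=${heap.getCommitted} heapMax=${heap.getMax}") // max is -1 if undefined
```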
- * On some systems the JMX OS system load average may not be available, in which case a -1 is returned. - * Hyperic SIGAR provides more precise values, thus, if the library is on the classpath, it is the default. + * JMX Returns the OS-specific average load on the CPUs in the system, for the past 1 minute. + * On some systems the JMX OS system load average may not be available, in which case a -1 is + * returned from JMX, and None is returned from this method. + * Creates a new instance each time. */ - def systemLoadAverage: Metric = Metric("system-load-average", Some(BigDecimal(Try( - LoadAverage.get.invoke(sigar.get).asInstanceOf[Array[Double]].toSeq.head) getOrElse osMBean.getSystemLoadAverage))) + def systemLoadAverage: Option[Metric] = Metric.create( + name = SystemLoadAverage, + value = osMBean.getSystemLoadAverage, + decayFactor = None) /** * (JMX) Returns the number of available processors + * Creates a new instance each time. */ - def processors: Metric = Metric("processors", Some(BigInt(osMBean.getAvailableProcessors))) + def processors: Option[Metric] = Metric.create( + name = Processors, + value = osMBean.getAvailableProcessors, + decayFactor = None) + + /** + * Current heap to be passed in to heapUsed, heapCommitted and heapMax + */ + def heapMemoryUsage: MemoryUsage = memoryMBean.getHeapMemoryUsage /** * (JMX) Returns the current sum of heap memory used from all heap memory pools (in bytes). + * Creates a new instance each time. */ - def used: Metric = Metric("heap-memory-used", Some(BigInt(memoryMBean.getHeapMemoryUsage.getUsed))) + def heapUsed(heap: MemoryUsage): Option[Metric] = Metric.create( + name = HeapMemoryUsed, + value = heap.getUsed, + decayFactor = decayFactorOption) /** * (JMX) Returns the current sum of heap memory guaranteed to be available to the JVM - * from all heap memory pools (in bytes). Committed will always be greater - * than or equal to used. + * from all heap memory pools (in bytes). + * Creates a new instance each time. */ - def committed: Metric = Metric("heap-memory-committed", Some(BigInt(memoryMBean.getHeapMemoryUsage.getCommitted))) + def heapCommitted(heap: MemoryUsage): Option[Metric] = Metric.create( + name = HeapMemoryCommitted, + value = heap.getCommitted, + decayFactor = decayFactorOption) /** * (JMX) Returns the maximum amount of memory (in bytes) that can be used - * for JVM memory management. If undefined, returns -1. + * for JVM memory management. If not defined the metrics value is None, i.e. + * never negative. + * Creates a new instance each time. */ - def max: Metric = Metric("heap-memory-max", Some(BigInt(memoryMBean.getHeapMemoryUsage.getMax))) + def heapMax(heap: MemoryUsage): Option[Metric] = Metric.create( + name = HeapMemoryMax, + value = heap.getMax, + decayFactor = None) + + override def close(): Unit = () + +} + +/** + * Loads metrics through Hyperic SIGAR and JMX monitoring beans. This + * loads wider and more accurate range of metrics compared to JmxMetricsCollector + * by using SIGAR's native OS library. + * + * The constructor will by design throw exception if org.hyperic.sigar.Sigar can't be loaded, due + * to missing classes or native libraries. 
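A sketch of the reflective-access pattern this collector uses for the optional SIGAR dependency: resolve methods by name and treat failures as "unavailable", so the collector degrades instead of crashing. `invokeNumber` is an illustrative helper, not part of the diff.

```scala
import java.lang.reflect.Method
import scala.util.Try

def createMethodFrom(ref: AnyRef, method: String, types: Class[_]*): Option[Method] =
  Try(ref.getClass.getMethod(method, types: _*)).toOption

def invokeNumber(ref: AnyRef, method: String): Option[Number] =
  for {
    m <- createMethodFrom(ref, method)
    n <- Try(m.invoke(ref).asInstanceOf[Number]).toOption
  } yield n
```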
+ * + * TODO switch to Scala reflection + * + * @param address The [[akka.actor.Address]] of the node being sampled + * @param decay how quickly the exponential weighting of past data is decayed + * @param sigar the org.hyperic.Sigar instance + */ +class SigarMetricsCollector(address: Address, decayFactor: Double, sigar: AnyRef) + extends JmxMetricsCollector(address, decayFactor) { + + import StandardMetrics._ + + private def this(cluster: Cluster) = + this(cluster.selfAddress, + EWMA.alpha(cluster.settings.MetricsMovingAverageHalfLife, cluster.settings.MetricsInterval), + cluster.system.dynamicAccess.createInstanceFor[AnyRef]("org.hyperic.sigar.Sigar", Nil).get) + + /** + * This constructor is used when creating an instance from configured FQCN + */ + def this(system: ActorSystem) = this(Cluster(system)) + + private val decayFactorOption = Some(decayFactor) + + private val EmptyClassArray: Array[(Class[_])] = Array.empty[(Class[_])] + private val LoadAverage: Option[Method] = createMethodFrom(sigar, "getLoadAverage") + private val Cpu: Option[Method] = createMethodFrom(sigar, "getCpuPerc") + private val CombinedCpu: Option[Method] = Try(Cpu.get.getReturnType.getMethod("getCombined")).toOption + + // Do something initially, in constructor, to make sure that the native library can be loaded. + // This will by design throw exception if sigar isn't usable + val pid: Long = createMethodFrom(sigar, "getPid") match { + case Some(method) ⇒ + try method.invoke(sigar).asInstanceOf[Long] catch { + case e: InvocationTargetException if e.getCause.isInstanceOf[LinkageError] ⇒ + // native libraries not in place + // don't throw fatal LinkageError, but something harmless + throw new IllegalArgumentException(e.getCause.toString) + case e: InvocationTargetException ⇒ throw e.getCause + } + case None ⇒ throw new IllegalArgumentException("Wrong version of Sigar, expected 'getPid' method") + } + + override def metrics: Set[Metric] = { + super.metrics.filterNot(_.name == SystemLoadAverage) ++ Set(systemLoadAverage, cpuCombined).flatten + } + + /** + * (SIGAR / JMX) Returns the OS-specific average load on the CPUs in the system, for the past 1 minute. + * On some systems the JMX OS system load average may not be available, in which case a -1 is returned + * from JMX, which means that None is returned from this method. + * Hyperic SIGAR provides more precise values, thus, if the library is on the classpath, it is the default. + * Creates a new instance each time. + */ + override def systemLoadAverage: Option[Metric] = Metric.create( + name = SystemLoadAverage, + value = Try(LoadAverage.get.invoke(sigar).asInstanceOf[Array[AnyRef]](0).asInstanceOf[Number]), + decayFactor = None) orElse super.systemLoadAverage /** * (SIGAR) Returns the combined CPU sum of User + Sys + Nice + Wait, in percentage. This metric can describe @@ -501,68 +737,51 @@ private[cluster] class MetricsCollector private (private val sigar: Option[AnyRe * * In the data stream, this will sometimes return with a valid metric value, and sometimes as a NaN or Infinite. * Documented bug https://bugzilla.redhat.com/show_bug.cgi?id=749121 and several others. + * + * Creates a new instance each time. */ - def cpuCombined: Metric = Metric("cpu-combined", Try(BigDecimal(CombinedCpu.get.invoke(Cpu.get.invoke(sigar.get)).asInstanceOf[Double])).toOption) - - /** - * (SIGAR) Returns the total number of cores. 
- */ - def totalCores: Metric = Metric("total-cores", Try(BigInt(CpuList.get.invoke(sigar.get).asInstanceOf[Array[AnyRef]].map(cpu ⇒ - createMethodFrom(Some(cpu), "getTotalCores").get.invoke(cpu).asInstanceOf[Int]).head)).toOption) - //Array[Int].head - if this would differ on some servers, expose all. In testing each int was always equal. - - /** - * (SIGAR) Returns the max network IO read/write value, in bytes, for network latency evaluation. - */ - def networkMaxRx: Metric = networkMaxFor("getRxBytes", "network-max-rx") - - /** - * (SIGAR) Returns the max network IO tx value, in bytes. - */ - def networkMaxTx: Metric = networkMaxFor("getTxBytes", "network-max-tx") - - /** - * Returns the network stats per interface. - */ - def networkStats: Map[String, AnyRef] = Try(NetInterfaces.get.invoke(sigar.get).asInstanceOf[Array[String]].map(arg ⇒ - arg -> (createMethodFrom(sigar, "getNetInterfaceStat", Array(classOf[String])).get.invoke(sigar.get, arg))).toMap) getOrElse Map.empty[String, AnyRef] - - /** - * Returns true if SIGAR is successfully installed on the classpath, otherwise false. - */ - def isSigar: Boolean = sigar.isDefined + def cpuCombined: Option[Metric] = Metric.create( + name = CpuCombined, + value = Try(CombinedCpu.get.invoke(Cpu.get.invoke(sigar)).asInstanceOf[Number]), + decayFactor = decayFactorOption) /** * Releases any native resources associated with this instance. */ - def close(): Unit = if (isSigar) Try(createMethodFrom(sigar, "close").get.invoke(sigar.get)) getOrElse Unit + override def close(): Unit = Try(createMethodFrom(sigar, "close").get.invoke(sigar)) - /** - * Returns the max bytes for the given method in metric for metric from the network interface stats. - */ - private def networkMaxFor(method: String, metric: String): Metric = Metric(metric, Try(Some(BigInt( - networkStats.collect { case (_, a) ⇒ createMethodFrom(Some(a), method).get.invoke(a).asInstanceOf[Long] }.max))) getOrElse None) - - private def createMethodFrom(ref: Option[AnyRef], method: String, types: Array[(Class[_])] = Array.empty[(Class[_])]): Option[Method] = - Try(ref.get.getClass.getMethod(method, types: _*)).toOption + private def createMethodFrom(ref: AnyRef, method: String, types: Array[(Class[_])] = EmptyClassArray): Option[Method] = + Try(ref.getClass.getMethod(method, types: _*)).toOption } /** * INTERNAL API - * Companion object of MetricsCollector class. + * Factory to create configured MetricsCollector. + * If instantiation of SigarMetricsCollector fails (missing class or native library) + * it falls back to use JmxMetricsCollector. */ private[cluster] object MetricsCollector { - def apply(address: Address, log: LoggingAdapter, dynamicAccess: DynamicAccess): MetricsCollector = - dynamicAccess.createInstanceFor[AnyRef]("org.hyperic.sigar.Sigar", Seq.empty) match { - case Success(identity) ⇒ new MetricsCollector(Some(identity), address) - case Failure(e) ⇒ - log.debug(e.toString) - log.info("Hyperic SIGAR was not found on the classpath or not installed properly. " + - "Metrics will be retreived from MBeans, and may be incorrect on some platforms. 
" + - "To increase metric accuracy add the 'sigar.jar' to the classpath and the appropriate" + - "platform-specific native libary to 'java.library.path'.") - new MetricsCollector(None, address) + def apply(system: ExtendedActorSystem, settings: ClusterSettings): MetricsCollector = { + import settings.{ MetricsCollectorClass ⇒ fqcn } + def log = Logging(system, "MetricsCollector") + if (fqcn == classOf[SigarMetricsCollector].getName) { + Try(new SigarMetricsCollector(system)) match { + case Success(sigarCollector) ⇒ sigarCollector + case Failure(e) ⇒ + log.info("Metrics will be retreived from MBeans, and may be incorrect on some platforms. " + + "To increase metric accuracy add the 'sigar.jar' to the classpath and the appropriate " + + "platform-specific native libary to 'java.library.path'. Reason: " + + e.toString) + new JmxMetricsCollector(system) + } + + } else { + system.dynamicAccess.createInstanceFor[MetricsCollector](fqcn, List(classOf[ActorSystem] -> system)). + recover { + case e ⇒ throw new ConfigurationException("Could not create custom metrics collector [" + fqcn + "] due to:" + e.toString) + }.get } + } } diff --git a/akka-cluster/src/main/scala/akka/cluster/ClusterReadView.scala b/akka-cluster/src/main/scala/akka/cluster/ClusterReadView.scala index 5920ac3dca..5f80cfd044 100644 --- a/akka-cluster/src/main/scala/akka/cluster/ClusterReadView.scala +++ b/akka-cluster/src/main/scala/akka/cluster/ClusterReadView.scala @@ -5,7 +5,7 @@ package akka.cluster import java.io.Closeable -import scala.collection.immutable.SortedSet +import scala.collection.immutable import akka.actor.{ Actor, ActorRef, ActorSystemImpl, Address, Props } import akka.cluster.ClusterEvent._ import akka.actor.PoisonPill @@ -74,14 +74,14 @@ private[akka] class ClusterReadView(cluster: Cluster) extends Closeable { } /** - * Returns true if the cluster node is up and running, false if it is shut down. + * Returns true if this cluster instance has be shutdown. */ - def isRunning: Boolean = cluster.isRunning + def isTerminated: Boolean = cluster.isTerminated /** * Current cluster members, sorted by address. */ - def members: SortedSet[Member] = state.members + def members: immutable.SortedSet[Member] = state.members /** * Members that has been detected as unreachable. @@ -108,7 +108,7 @@ private[akka] class ClusterReadView(cluster: Cluster) extends Closeable { def leader: Option[Address] = state.leader /** - * Is this node a singleton cluster? + * Does the cluster consist of only one member? */ def isSingletonCluster: Boolean = members.size == 1 @@ -118,11 +118,14 @@ private[akka] class ClusterReadView(cluster: Cluster) extends Closeable { def convergence: Boolean = state.convergence /** - * Returns true if the node is UP or JOINING. + * Returns true if the node is not unreachable and not `Down` + * and not `Removed`. 
*/ def isAvailable: Boolean = { val myself = self - !unreachableMembers.contains(myself) && !myself.status.isUnavailable + !unreachableMembers.contains(myself) && + myself.status != MemberStatus.Down && + myself.status != MemberStatus.Removed } /** diff --git a/akka-cluster/src/main/scala/akka/cluster/ClusterSettings.scala b/akka-cluster/src/main/scala/akka/cluster/ClusterSettings.scala index 6110df034a..6861459168 100644 --- a/akka-cluster/src/main/scala/akka/cluster/ClusterSettings.scala +++ b/akka-cluster/src/main/scala/akka/cluster/ClusterSettings.scala @@ -3,31 +3,51 @@ */ package akka.cluster +import scala.collection.immutable import com.typesafe.config.Config -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import java.util.concurrent.TimeUnit.MILLISECONDS import akka.ConfigurationException -import scala.collection.JavaConverters._ import akka.actor.Address import akka.actor.AddressFromURIString import akka.dispatch.Dispatchers -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.FiniteDuration +import akka.japi.Util.immutableSeq class ClusterSettings(val config: Config, val systemName: String) { import config._ - final val FailureDetectorThreshold = getDouble("akka.cluster.failure-detector.threshold") - final val FailureDetectorMaxSampleSize = getInt("akka.cluster.failure-detector.max-sample-size") - final val FailureDetectorImplementationClass = getString("akka.cluster.failure-detector.implementation-class") - final val FailureDetectorMinStdDeviation: FiniteDuration = - Duration(getMilliseconds("akka.cluster.failure-detector.min-std-deviation"), MILLISECONDS) - final val FailureDetectorAcceptableHeartbeatPause: FiniteDuration = - Duration(getMilliseconds("akka.cluster.failure-detector.acceptable-heartbeat-pause"), MILLISECONDS) - final val HeartbeatInterval: FiniteDuration = Duration(getMilliseconds("akka.cluster.failure-detector.heartbeat-interval"), MILLISECONDS) + final val FailureDetectorThreshold: Double = { + val x = getDouble("akka.cluster.failure-detector.threshold") + require(x > 0.0, "failure-detector.threshold must be > 0") + x + } + final val FailureDetectorMaxSampleSize: Int = { + val n = getInt("akka.cluster.failure-detector.max-sample-size") + require(n > 0, "failure-detector.max-sample-size must be > 0"); n + } + final val FailureDetectorImplementationClass: String = getString("akka.cluster.failure-detector.implementation-class") + final val FailureDetectorMinStdDeviation: FiniteDuration = { + val d = Duration(getMilliseconds("akka.cluster.failure-detector.min-std-deviation"), MILLISECONDS) + require(d > Duration.Zero, "failure-detector.min-std-deviation must be > 0"); d + } + final val FailureDetectorAcceptableHeartbeatPause: FiniteDuration = { + val d = Duration(getMilliseconds("akka.cluster.failure-detector.acceptable-heartbeat-pause"), MILLISECONDS) + require(d >= Duration.Zero, "failure-detector.acceptable-heartbeat-pause must be >= 0"); d + } + final val HeartbeatInterval: FiniteDuration = { + val d = Duration(getMilliseconds("akka.cluster.failure-detector.heartbeat-interval"), MILLISECONDS) + require(d > Duration.Zero, "failure-detector.heartbeat-interval must be > 0"); d + } + final val HeartbeatConsistentHashingVirtualNodesFactor = 10 // no need for configuration + final val NumberOfEndHeartbeats: Int = (FailureDetectorAcceptableHeartbeatPause / HeartbeatInterval + 1).toInt + final val MonitoredByNrOfMembers: Int = { + val n = getInt("akka.cluster.failure-detector.monitored-by-nr-of-members") + 
require(n > 0, "failure-detector.monitored-by-nr-of-members must be > 0"); n + } - final val SeedNodes: IndexedSeq[Address] = getStringList("akka.cluster.seed-nodes").asScala.map { - case AddressFromURIString(addr) ⇒ addr - }.toIndexedSeq + final val SeedNodes: immutable.IndexedSeq[Address] = + immutableSeq(getStringList("akka.cluster.seed-nodes")).map { case AddressFromURIString(addr) ⇒ addr }.toVector final val SeedNodeTimeout: FiniteDuration = Duration(getMilliseconds("akka.cluster.seed-node-timeout"), MILLISECONDS) final val PeriodicTasksInitialDelay: FiniteDuration = Duration(getMilliseconds("akka.cluster.periodic-tasks-initial-delay"), MILLISECONDS) final val GossipInterval: FiniteDuration = Duration(getMilliseconds("akka.cluster.gossip-interval"), MILLISECONDS) @@ -51,9 +71,16 @@ class ClusterSettings(val config: Config, val systemName: String) { callTimeout = Duration(getMilliseconds("akka.cluster.send-circuit-breaker.call-timeout"), MILLISECONDS), resetTimeout = Duration(getMilliseconds("akka.cluster.send-circuit-breaker.reset-timeout"), MILLISECONDS)) final val MetricsEnabled: Boolean = getBoolean("akka.cluster.metrics.enabled") - final val MetricsInterval: FiniteDuration = Duration(getMilliseconds("akka.cluster.metrics.metrics-interval"), MILLISECONDS) + final val MetricsCollectorClass: String = getString("akka.cluster.metrics.collector-class") + final val MetricsInterval: FiniteDuration = { + val d = Duration(getMilliseconds("akka.cluster.metrics.collect-interval"), MILLISECONDS) + require(d > Duration.Zero, "metrics.collect-interval must be > 0"); d + } final val MetricsGossipInterval: FiniteDuration = Duration(getMilliseconds("akka.cluster.metrics.gossip-interval"), MILLISECONDS) - final val MetricsRateOfDecay: Int = getInt("akka.cluster.metrics.rate-of-decay") + final val MetricsMovingAverageHalfLife: FiniteDuration = { + val d = Duration(getMilliseconds("akka.cluster.metrics.moving-average-half-life"), MILLISECONDS) + require(d > Duration.Zero, "metrics.moving-average-half-life must be > 0"); d + } } case class CircuitBreakerSettings(maxFailures: Int, callTimeout: FiniteDuration, resetTimeout: FiniteDuration) diff --git a/akka-cluster/src/main/scala/akka/cluster/FixedRateTask.scala b/akka-cluster/src/main/scala/akka/cluster/FixedRateTask.scala deleted file mode 100644 index 9e6eedf659..0000000000 --- a/akka-cluster/src/main/scala/akka/cluster/FixedRateTask.scala +++ /dev/null @@ -1,58 +0,0 @@ -/** - * Copyright (C) 2009-2012 Typesafe Inc. - */ - -package akka.cluster - -import java.util.concurrent.TimeUnit -import java.util.concurrent.atomic.{ AtomicBoolean, AtomicLong } -import akka.actor.{ Scheduler, Cancellable } -import scala.concurrent.util.Duration -import concurrent.ExecutionContext -import scala.concurrent.util.FiniteDuration - -/** - * INTERNAL API - */ -private[akka] object FixedRateTask { - def apply(scheduler: Scheduler, - initalDelay: FiniteDuration, - delay: FiniteDuration)(f: ⇒ Unit)(implicit executor: ExecutionContext): FixedRateTask = - new FixedRateTask(scheduler, initalDelay, delay, new Runnable { def run(): Unit = f }) -} - -/** - * INTERNAL API - * - * Task to be scheduled periodically at a fixed rate, compensating, on average, - * for inaccuracy in scheduler. It will start when constructed, using the - * initialDelay. 
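ClusterSettings above validates every value at construction time so that a bad configuration fails fast with a descriptive message instead of misbehaving later. A minimal sketch of the same read-then-require pattern; demo.collect-interval is an illustrative key.

import com.typesafe.config.ConfigFactory
import java.util.concurrent.TimeUnit.MILLISECONDS
import scala.concurrent.duration._

object DemoSettings {
  private val config = ConfigFactory.parseString("demo.collect-interval = 3s")

  val CollectInterval: FiniteDuration = {
    val d = Duration(config.getMilliseconds("demo.collect-interval"), MILLISECONDS)
    require(d > Duration.Zero, "demo.collect-interval must be > 0"); d
  }
}

The derived values follow the same style: NumberOfEndHeartbeats above is acceptable-heartbeat-pause / heartbeat-interval + 1, so a 5 s pause with a 1 s heartbeat interval yields 6.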
- */ -private[akka] class FixedRateTask(scheduler: Scheduler, - initalDelay: FiniteDuration, - delay: FiniteDuration, - task: Runnable)(implicit executor: ExecutionContext) - extends Runnable with Cancellable { - - private val delayNanos = delay.toNanos - private val cancelled = new AtomicBoolean(false) - private val counter = new AtomicLong(0L) - private val startTime = System.nanoTime + initalDelay.toNanos - scheduler.scheduleOnce(initalDelay, this) - - def cancel(): Unit = cancelled.set(true) - - def isCancelled: Boolean = cancelled.get - - override final def run(): Unit = if (!isCancelled) try { - task.run() - } finally if (!isCancelled) { - val nextTime = startTime + delayNanos * counter.incrementAndGet - // it's ok to schedule with negative duration, will run asap - val nextDelay = Duration(nextTime - System.nanoTime, TimeUnit.NANOSECONDS) - try { - scheduler.scheduleOnce(nextDelay, this) - } catch { case e: IllegalStateException ⇒ /* will happen when scheduler is closed, nothing wrong */ } - } - -} diff --git a/akka-cluster/src/main/scala/akka/cluster/Gossip.scala b/akka-cluster/src/main/scala/akka/cluster/Gossip.scala index be734703be..81618f4a68 100644 --- a/akka-cluster/src/main/scala/akka/cluster/Gossip.scala +++ b/akka-cluster/src/main/scala/akka/cluster/Gossip.scala @@ -5,14 +5,14 @@ package akka.cluster import akka.actor.Address -import scala.collection.immutable.SortedSet +import scala.collection.immutable import MemberStatus._ /** * Internal API */ private[cluster] object Gossip { - val emptyMembers: SortedSet[Member] = SortedSet.empty + val emptyMembers: immutable.SortedSet[Member] = immutable.SortedSet.empty } /** @@ -50,7 +50,7 @@ private[cluster] object Gossip { */ private[cluster] case class Gossip( overview: GossipOverview = GossipOverview(), - members: SortedSet[Member] = Gossip.emptyMembers, // sorted set of members with their status, sorted by address + members: immutable.SortedSet[Member] = Gossip.emptyMembers, // sorted set of members with their status, sorted by address version: VectorClock = VectorClock()) // vector clock version extends ClusterMessage // is a serializable cluster message with Versioned[Gossip] { @@ -168,15 +168,10 @@ private[cluster] case class Gossip( def isSingletonCluster: Boolean = members.size == 1 /** - * Returns true if the node is UP or JOINING. + * Returns true if the node is in the unreachable set */ - def isAvailable(address: Address): Boolean = !isUnavailable(address) - - def isUnavailable(address: Address): Boolean = { - val isUnreachable = overview.unreachable exists { _.address == address } - val hasUnavailableMemberStatus = members exists { m ⇒ m.status.isUnavailable && m.address == address } - isUnreachable || hasUnavailableMemberStatus - } + def isUnreachable(address: Address): Boolean = + overview.unreachable exists { _.address == address } def member(address: Address): Member = { members.find(_.address == address).orElse(overview.unreachable.find(_.address == address)). 
diff --git a/akka-cluster/src/main/scala/akka/cluster/Member.scala b/akka-cluster/src/main/scala/akka/cluster/Member.scala index f8a064977d..1ee4aae804 100644 --- a/akka-cluster/src/main/scala/akka/cluster/Member.scala +++ b/akka-cluster/src/main/scala/akka/cluster/Member.scala @@ -6,7 +6,7 @@ package akka.cluster import language.implicitConversions -import scala.collection.immutable.SortedSet +import scala.collection.immutable import scala.collection.GenTraversableOnce import akka.actor.Address import MemberStatus._ @@ -87,13 +87,7 @@ object Member { * * Can be one of: Joining, Up, Leaving, Exiting and Down. */ -abstract class MemberStatus extends ClusterMessage { - - /** - * Using the same notion for 'unavailable' as 'non-convergence': DOWN - */ - def isUnavailable: Boolean = this == Down -} +abstract class MemberStatus extends ClusterMessage object MemberStatus { case object Joining extends MemberStatus diff --git a/akka-cluster/src/main/scala/akka/cluster/routing/AdaptiveLoadBalancingRouter.scala b/akka-cluster/src/main/scala/akka/cluster/routing/AdaptiveLoadBalancingRouter.scala new file mode 100644 index 0000000000..60a9c5b6a7 --- /dev/null +++ b/akka-cluster/src/main/scala/akka/cluster/routing/AdaptiveLoadBalancingRouter.scala @@ -0,0 +1,434 @@ +/* + * Copyright (C) 2009-2012 Typesafe Inc. + */ + +package akka.cluster.routing + +import java.util.Arrays + +import scala.concurrent.forkjoin.ThreadLocalRandom +import scala.collection.immutable +import akka.actor.Actor +import akka.actor.ActorRef +import akka.actor.Address +import akka.actor.OneForOneStrategy +import akka.actor.Props +import akka.actor.SupervisorStrategy +import akka.dispatch.Dispatchers +import akka.cluster.Cluster +import akka.cluster.ClusterEvent.ClusterMetricsChanged +import akka.cluster.ClusterEvent.CurrentClusterState +import akka.cluster.NodeMetrics +import akka.cluster.StandardMetrics.Cpu +import akka.cluster.StandardMetrics.HeapMemory +import akka.event.Logging +import akka.japi.Util.immutableSeq +import akka.routing.Broadcast +import akka.routing.Destination +import akka.routing.FromConfig +import akka.routing.NoRouter +import akka.routing.Resizer +import akka.routing.Route +import akka.routing.RouteeProvider +import akka.routing.RouterConfig + +object AdaptiveLoadBalancingRouter { + private val escalateStrategy: SupervisorStrategy = OneForOneStrategy() { + case _ ⇒ SupervisorStrategy.Escalate + } +} + +/** + * A Router that performs load balancing of messages to cluster nodes based on + * cluster metric data. + * + * It uses random selection of routees based probabilities derived from + * the remaining capacity of corresponding node. + * + * Please note that providing both 'nrOfInstances' and 'routees' does not make logical + * sense as this means that the router should both create new actors and use the 'routees' + * actor(s). In this case the 'nrOfInstances' will be ignored and the 'routees' will be used. + *
+ * The configuration parameter trumps the constructor arguments. This means that
+ * if you provide either 'nrOfInstances' or 'routees' during instantiation they will
+ * be ignored if the router is defined in the configuration file for the actor being used.
+ *
+ * <h1>Supervision Setup</h1>
+ * + * The router creates a “head” actor which supervises and/or monitors the + * routees. Instances are created as children of this actor, hence the + * children are not supervised by the parent of the router. Common choices are + * to always escalate (meaning that fault handling is always applied to all + * children simultaneously; this is the default) or use the parent’s strategy, + * which will result in routed children being treated individually, but it is + * possible as well to use Routers to give different supervisor strategies to + * different groups of children. + * + * @param metricsSelector decides what probability to use for selecting a routee, based + * on remaining capacity as indicated by the node metrics + * @param routees string representation of the actor paths of the routees that will be looked up + * using `actorFor` in [[akka.actor.ActorRefProvider]] + */ +@SerialVersionUID(1L) +case class AdaptiveLoadBalancingRouter( + metricsSelector: MetricsSelector = MixMetricsSelector, + nrOfInstances: Int = 0, routees: immutable.Iterable[String] = Nil, + override val resizer: Option[Resizer] = None, + val routerDispatcher: String = Dispatchers.DefaultDispatcherId, + val supervisorStrategy: SupervisorStrategy = AdaptiveLoadBalancingRouter.escalateStrategy) + extends RouterConfig with AdaptiveLoadBalancingRouterLike { + + /** + * Constructor that sets nrOfInstances to be created. + * Java API + * @param selector the selector is responsible for producing weighted mix of routees from the node metrics + * @param nr number of routees to create + */ + def this(selector: MetricsSelector, nr: Int) = this(metricsSelector = selector, nrOfInstances = nr) + + /** + * Constructor that sets the routees to be used. + * Java API + * @param selector the selector is responsible for producing weighted mix of routees from the node metrics + * @param routeePaths string representation of the actor paths of the routees that will be looked up + * using `actorFor` in [[akka.actor.ActorRefProvider]] + */ + def this(selector: MetricsSelector, routeePaths: java.lang.Iterable[String]) = + this(metricsSelector = selector, routees = immutableSeq(routeePaths)) + + /** + * Constructor that sets the resizer to be used. + * Java API + * @param selector the selector is responsible for producing weighted mix of routees from the node metrics + */ + def this(selector: MetricsSelector, resizer: Resizer) = + this(metricsSelector = selector, resizer = Some(resizer)) + + /** + * Java API for setting routerDispatcher + */ + def withDispatcher(dispatcherId: String): AdaptiveLoadBalancingRouter = + copy(routerDispatcher = dispatcherId) + + /** + * Java API for setting the supervisor strategy to be used for the “head” + * Router actor. + */ + def withSupervisorStrategy(strategy: SupervisorStrategy): AdaptiveLoadBalancingRouter = + copy(supervisorStrategy = strategy) + + /** + * Uses the resizer of the given RouterConfig if this RouterConfig + * doesn't have one, i.e. the resizer defined in code is used if + * resizer was not defined in config. + */ + override def withFallback(other: RouterConfig): RouterConfig = other match { + case _: FromConfig | _: NoRouter ⇒ this + case otherRouter: AdaptiveLoadBalancingRouter ⇒ + val useResizer = + if (this.resizer.isEmpty && otherRouter.resizer.isDefined) otherRouter.resizer + else this.resizer + copy(resizer = useResizer) + case _ ⇒ throw new IllegalArgumentException("Expected AdaptiveLoadBalancingRouter, got [%s]".format(other)) + } + +} + +/** + * INTERNAL API. 
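A usage sketch for the router defined above, assuming an application-defined Worker actor. The selector and instance count map directly onto the case-class parameters; in an actual cluster deployment this would typically be combined with the ClusterRouterConfig that appears later in this diff.

import akka.actor.{ Actor, ActorSystem, Props }
import akka.cluster.routing.{ AdaptiveLoadBalancingRouter, HeapMetricsSelector }

class Worker extends Actor {
  def receive = { case job ⇒ sender ! job } // illustrative echo worker
}

object AdaptiveRouterDemo extends App {
  val system = ActorSystem("demo")
  // ten routees; nodes with more free heap receive proportionally more messages
  val backend = system.actorOf(
    Props[Worker].withRouter(AdaptiveLoadBalancingRouter(HeapMetricsSelector, nrOfInstances = 10)),
    name = "backend")
}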
+ * + * This strategy is a metrics-aware router which performs load balancing of messages to + * cluster nodes based on cluster metric data. It consumes [[akka.cluster.ClusterMetricsChanged]] + * events and the [[akka.cluster.routing.MetricsSelector]] creates an mix of + * weighted routees based on the node metrics. Messages are routed randomly to the + * weighted routees, i.e. nodes with lower load are more likely to be used than nodes with + * higher load + */ +trait AdaptiveLoadBalancingRouterLike { this: RouterConfig ⇒ + + def metricsSelector: MetricsSelector + + def nrOfInstances: Int + + def routees: immutable.Iterable[String] + + def routerDispatcher: String + + override def createRoute(routeeProvider: RouteeProvider): Route = { + if (resizer.isEmpty) { + if (routees.isEmpty) routeeProvider.createRoutees(nrOfInstances) + else routeeProvider.registerRouteesFor(routees) + } + + val log = Logging(routeeProvider.context.system, routeeProvider.context.self) + + // The current weighted routees, if any. Weights are produced by the metricsSelector + // via the metricsListener Actor. It's only updated by the actor, but accessed from + // the threads of the senders. + @volatile var weightedRoutees: Option[WeightedRoutees] = None + + // subscribe to ClusterMetricsChanged and update weightedRoutees + val metricsListener = routeeProvider.context.actorOf(Props(new Actor { + + val cluster = Cluster(context.system) + + override def preStart(): Unit = cluster.subscribe(self, classOf[ClusterMetricsChanged]) + override def postStop(): Unit = cluster.unsubscribe(self) + + def receive = { + case ClusterMetricsChanged(metrics) ⇒ receiveMetrics(metrics) + case _: CurrentClusterState ⇒ // ignore + } + + def receiveMetrics(metrics: Set[NodeMetrics]): Unit = { + // this is the only place from where weightedRoutees is updated + weightedRoutees = Some(new WeightedRoutees(routeeProvider.routees, cluster.selfAddress, + metricsSelector.weights(metrics))) + } + + }).withDispatcher(routerDispatcher), name = "metricsListener") + + def getNext(): ActorRef = weightedRoutees match { + case Some(weighted) ⇒ + if (weighted.isEmpty) routeeProvider.context.system.deadLetters + else weighted(ThreadLocalRandom.current.nextInt(weighted.total) + 1) + case None ⇒ + val currentRoutees = routeeProvider.routees + if (currentRoutees.isEmpty) routeeProvider.context.system.deadLetters + else currentRoutees(ThreadLocalRandom.current.nextInt(currentRoutees.size)) + } + + { + case (sender, message) ⇒ + message match { + case Broadcast(msg) ⇒ toAll(sender, routeeProvider.routees) + case msg ⇒ List(Destination(sender, getNext())) + } + } + } +} + +/** + * MetricsSelector that uses the heap metrics. + * Low heap capacity => small weight. + */ +@SerialVersionUID(1L) +case object HeapMetricsSelector extends CapacityMetricsSelector { + /** + * Java API: get the singleton instance + */ + def getInstance = this + + override def capacity(nodeMetrics: Set[NodeMetrics]): Map[Address, Double] = { + nodeMetrics.collect { + case HeapMemory(address, _, used, committed, max) ⇒ + val capacity = max match { + case None ⇒ (committed - used).toDouble / committed + case Some(m) ⇒ (m - used).toDouble / m + } + (address, capacity) + }.toMap + } +} + +/** + * MetricsSelector that uses the combined CPU metrics. + * Combined CPU is sum of User + Sys + Nice + Wait, in percentage. + * Low cpu capacity => small weight. 
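A worked example of the HeapMetricsSelector capacity formula above; the byte values are illustrative.

val used      = 256L  * 1024 * 1024            // 256 MB in use
val max       = 1024L * 1024 * 1024            // 1 GB max heap
val capacity  = (max - used).toDouble / max    // (1024 - 256) / 1024 = 0.75
// when no max is defined, committed is the denominator instead:
val committed = 512L * 1024 * 1024
val fallback  = (committed - used).toDouble / committed // 0.5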
+ */ +@SerialVersionUID(1L) +case object CpuMetricsSelector extends CapacityMetricsSelector { + /** + * Java API: get the singleton instance + */ + def getInstance = this + + override def capacity(nodeMetrics: Set[NodeMetrics]): Map[Address, Double] = { + nodeMetrics.collect { + case Cpu(address, _, _, Some(cpuCombined), _) ⇒ + val capacity = 1.0 - cpuCombined + (address, capacity) + }.toMap + } +} + +/** + * MetricsSelector that uses the system load average metrics. + * System load average is OS-specific average load on the CPUs in the system, + * for the past 1 minute. The system is possibly nearing a bottleneck if the + * system load average is nearing number of cpus/cores. + * Low load average capacity => small weight. + */ +@SerialVersionUID(1L) +case object SystemLoadAverageMetricsSelector extends CapacityMetricsSelector { + /** + * Java API: get the singleton instance + */ + def getInstance = this + + override def capacity(nodeMetrics: Set[NodeMetrics]): Map[Address, Double] = { + nodeMetrics.collect { + case Cpu(address, _, Some(systemLoadAverage), _, processors) ⇒ + val capacity = 1.0 - math.min(1.0, systemLoadAverage / processors) + (address, capacity) + }.toMap + } +} + +/** + * Singleton instance of the default MixMetricsSelector, which uses [akka.cluster.routing.HeapMetricsSelector], + * [akka.cluster.routing.CpuMetricsSelector], and [akka.cluster.routing.SystemLoadAverageMetricsSelector] + */ +@SerialVersionUID(1L) +object MixMetricsSelector extends MixMetricsSelectorBase( + Vector(HeapMetricsSelector, CpuMetricsSelector, SystemLoadAverageMetricsSelector)) { + + /** + * Java API: get the default singleton instance + */ + def getInstance = this +} + +/** + * MetricsSelector that combines other selectors and aggregates their capacity + * values. By default it uses [akka.cluster.routing.HeapMetricsSelector], + * [akka.cluster.routing.CpuMetricsSelector], and [akka.cluster.routing.SystemLoadAverageMetricsSelector] + */ +@SerialVersionUID(1L) +case class MixMetricsSelector( + selectors: immutable.IndexedSeq[CapacityMetricsSelector]) + extends MixMetricsSelectorBase(selectors) + +/** + * Base class for MetricsSelector that combines other selectors and aggregates their capacity. + */ +@SerialVersionUID(1L) +abstract class MixMetricsSelectorBase(selectors: immutable.IndexedSeq[CapacityMetricsSelector]) + extends CapacityMetricsSelector { + + /** + * Java API + */ + def this(selectors: java.lang.Iterable[CapacityMetricsSelector]) = this(immutableSeq(selectors).toVector) + + override def capacity(nodeMetrics: Set[NodeMetrics]): Map[Address, Double] = { + val combined: immutable.IndexedSeq[(Address, Double)] = selectors.flatMap(_.capacity(nodeMetrics).toSeq) + // aggregated average of the capacities by address + combined.foldLeft(Map.empty[Address, (Double, Int)].withDefaultValue((0.0, 0))) { + case (acc, (address, capacity)) ⇒ + val (sum, count) = acc(address) + acc + (address -> (sum + capacity, count + 1)) + }.map { + case (addr, (sum, count)) ⇒ (addr -> sum / count) + } + } + +} + +/** + * A MetricsSelector is responsible for producing weights from the node metrics. + */ +@SerialVersionUID(1L) +trait MetricsSelector extends Serializable { + /** + * The weights per address, based on the the nodeMetrics. + */ + def weights(nodeMetrics: Set[NodeMetrics]): Map[Address, Int] +} + +/** + * A MetricsSelector producing weights from remaining capacity. + * The weights are typically proportional to the remaining capacity. 
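A worked example of the per-address averaging in MixMetricsSelectorBase.capacity above: each selector contributes one capacity value per address and the mix is their arithmetic mean. The sample values are illustrative.

import akka.actor.Address

val node = Address("akka", "demo", "host1", 2551)
val combined = Vector(node -> 0.75, node -> 0.40, node -> 0.80) // heap, cpu, load average
val mixed: Map[Address, Double] =
  combined.foldLeft(Map.empty[Address, (Double, Int)].withDefaultValue((0.0, 0))) {
    case (acc, (address, capacity)) ⇒
      val (sum, count) = acc(address)
      acc + (address -> (sum + capacity, count + 1))
  }.map { case (addr, (sum, count)) ⇒ addr -> sum / count }
// mixed(node) == (0.75 + 0.40 + 0.80) / 3 = 0.65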
+ */ +abstract class CapacityMetricsSelector extends MetricsSelector { + + /** + * Remaining capacity for each node. The value is between + * 0.0 and 1.0, where 0.0 means no remaining capacity (full + * utilization) and 1.0 means full remaining capacity (zero + * utilization). + */ + def capacity(nodeMetrics: Set[NodeMetrics]): Map[Address, Double] + + /** + * Converts the capacity values to weights. The node with lowest + * capacity gets weight 1 (lowest usable capacity is 1%) and other + * nodes gets weights proportional to their capacity compared to + * the node with lowest capacity. + */ + def weights(capacity: Map[Address, Double]): Map[Address, Int] = { + if (capacity.isEmpty) Map.empty[Address, Int] + else { + val (_, min) = capacity.minBy { case (_, c) ⇒ c } + // lowest usable capacity is 1% (>= 0.5% will be rounded to weight 1), also avoids div by zero + val divisor = math.max(0.01, min) + capacity map { case (addr, c) ⇒ (addr -> math.round((c) / divisor).toInt) } + } + } + + /** + * The weights per address, based on the capacity produced by + * the nodeMetrics. + */ + override def weights(nodeMetrics: Set[NodeMetrics]): Map[Address, Int] = + weights(capacity(nodeMetrics)) + +} + +/** + * INTERNAL API + * + * Pick routee based on its weight. Higher weight, higher probability. + */ +private[cluster] class WeightedRoutees(refs: immutable.IndexedSeq[ActorRef], selfAddress: Address, weights: Map[Address, Int]) { + + // fill an array of same size as the refs with accumulated weights, + // binarySearch is used to pick the right bucket from a requested value + // from 1 to the total sum of the used weights. + private val buckets: Array[Int] = { + def fullAddress(actorRef: ActorRef): Address = actorRef.path.address match { + case Address(_, _, None, None) ⇒ selfAddress + case a ⇒ a + } + val buckets = Array.ofDim[Int](refs.size) + val meanWeight = if (weights.isEmpty) 1 else weights.values.sum / weights.size + val w = weights.withDefaultValue(meanWeight) // we don’t necessarily have metrics for all addresses + var i = 0 + var sum = 0 + refs foreach { ref ⇒ + sum += w(fullAddress(ref)) + buckets(i) = sum + i += 1 + } + buckets + } + + def isEmpty: Boolean = buckets.length == 0 + + def total: Int = { + require(!isEmpty, "WeightedRoutees must not be used when empty") + buckets(buckets.length - 1) + } + + /** + * Pick the routee matching a value, from 1 to total. 
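A worked example of the weight derivation above and of how WeightedRoutees then consumes the weights; string keys stand in for Address.

val capacity = Map("a" -> 0.10, "b" -> 0.30, "c" -> 0.60)
val divisor  = math.max(0.01, capacity.values.min)  // 0.10; also guards against division by zero
val weights  = capacity.map { case (k, c) ⇒ k -> math.round(c / divisor).toInt }
// weights == Map(a -> 1, b -> 3, c -> 6)
// cumulative buckets are 1, 4, 10; a uniformly random value in 1..10 matched with
// Arrays.binarySearch therefore selects "c" six times out of ten on average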
+ */ + def apply(value: Int): ActorRef = { + require(1 <= value && value <= total, "value must be between [1 - %s]" format total) + refs(idx(Arrays.binarySearch(buckets, value))) + } + + /** + * Converts the result of Arrays.binarySearch into a index in the buckets array + * see documentation of Arrays.binarySearch for what it returns + */ + private def idx(i: Int): Int = { + if (i >= 0) i // exact match + else { + val j = math.abs(i + 1) + if (j >= buckets.length) throw new IndexOutOfBoundsException( + "Requested index [%s] is > max index [%s]".format(i, buckets.length)) + else j + } + } +} \ No newline at end of file diff --git a/akka-cluster/src/main/scala/akka/cluster/routing/ClusterRouterConfig.scala b/akka-cluster/src/main/scala/akka/cluster/routing/ClusterRouterConfig.scala index 52a9a55e21..906b3d154f 100644 --- a/akka-cluster/src/main/scala/akka/cluster/routing/ClusterRouterConfig.scala +++ b/akka-cluster/src/main/scala/akka/cluster/routing/ClusterRouterConfig.scala @@ -5,16 +5,13 @@ package akka.cluster.routing import java.lang.IllegalStateException import java.util.concurrent.atomic.AtomicInteger -import scala.collection.immutable.SortedSet +import scala.collection.immutable import com.typesafe.config.ConfigFactory import akka.ConfigurationException -import akka.actor.Actor import akka.actor.ActorContext import akka.actor.ActorRef -import akka.actor.ActorSystemImpl import akka.actor.Address import akka.actor.Deploy -import akka.actor.InternalActorRef import akka.actor.Props import akka.actor.SupervisorStrategy import akka.cluster.Cluster @@ -51,7 +48,7 @@ final case class ClusterRouterConfig(local: RouterConfig, settings: ClusterRoute // Intercept ClusterDomainEvent and route them to the ClusterRouterActor ({ - case (sender, message: ClusterDomainEvent) ⇒ Seq(Destination(sender, routeeProvider.context.self)) + case (sender, message: ClusterDomainEvent) ⇒ List(Destination(sender, routeeProvider.context.self)) }: Route) orElse localRoute } @@ -130,7 +127,7 @@ case class ClusterRouterSettings private[akka] ( if (isRouteesPathDefined && maxInstancesPerNode != 1) throw new IllegalArgumentException("maxInstancesPerNode of cluster router must be 1 when routeesPath is defined") - val routeesPathElements: Iterable[String] = routeesPath match { + val routeesPathElements: immutable.Iterable[String] = routeesPath match { case RelativeActorPath(elements) ⇒ elements case _ ⇒ throw new IllegalArgumentException("routeesPath [%s] is not a valid relative actor path" format routeesPath) @@ -156,7 +153,7 @@ private[akka] class ClusterRouteeProvider( // need this counter as instance variable since Resizer may call createRoutees several times private val childNameCounter = new AtomicInteger - override def registerRouteesFor(paths: Iterable[String]): Unit = + override def registerRouteesFor(paths: immutable.Iterable[String]): Unit = throw new ConfigurationException("Cluster deployment can not be combined with routees for [%s]" format context.self.path.toString) @@ -183,7 +180,7 @@ private[akka] class ClusterRouteeProvider( context.asInstanceOf[ActorCell].attachChild(routeeProps.withDeploy(deploy), name, systemService = false) } // must register each one, since registered routees are used in selectDeploymentTarget - registerRoutees(Some(ref)) + registerRoutees(List(ref)) // recursion until all created doCreateRoutees() @@ -196,13 +193,13 @@ private[akka] class ClusterRouteeProvider( private def selectDeploymentTarget: Option[Address] = { val currentRoutees = routees - val currentNodes = availbleNodes + 
val currentNodes = availableNodes if (currentNodes.isEmpty || currentRoutees.size >= settings.totalInstances) { None } else { // find the node with least routees val numberOfRouteesPerNode: Map[Address, Int] = - currentRoutees.foldLeft(currentNodes.map(_ -> 0).toMap.withDefault(_ ⇒ 0)) { (acc, x) ⇒ + currentRoutees.foldLeft(currentNodes.map(_ -> 0).toMap.withDefaultValue(0)) { (acc, x) ⇒ val address = fullAddress(x) acc + (address -> (acc(address) + 1)) } @@ -222,27 +219,26 @@ private[akka] class ClusterRouteeProvider( case a ⇒ a } - private[routing] def availbleNodes: SortedSet[Address] = { + private[routing] def availableNodes: immutable.SortedSet[Address] = { import Member.addressOrdering val currentNodes = nodes if (currentNodes.isEmpty && settings.allowLocalRoutees) //use my own node, cluster information not updated yet - SortedSet(cluster.selfAddress) + immutable.SortedSet(cluster.selfAddress) else currentNodes } @volatile - private[routing] var nodes: SortedSet[Address] = { + private[routing] var nodes: immutable.SortedSet[Address] = { import Member.addressOrdering cluster.readView.members.collect { - case m if isAvailble(m) ⇒ m.address + case m if isAvailable(m) ⇒ m.address } } - private[routing] def isAvailble(m: Member): Boolean = { + private[routing] def isAvailable(m: Member): Boolean = m.status == MemberStatus.Up && (settings.allowLocalRoutees || m.address != cluster.selfAddress) - } } @@ -271,10 +267,10 @@ private[akka] class ClusterRouterActor extends Router { override def routerReceive: Receive = { case s: CurrentClusterState ⇒ import Member.addressOrdering - routeeProvider.nodes = s.members.collect { case m if routeeProvider.isAvailble(m) ⇒ m.address } + routeeProvider.nodes = s.members.collect { case m if routeeProvider.isAvailable(m) ⇒ m.address } routeeProvider.createRoutees() - case m: MemberEvent if routeeProvider.isAvailble(m.member) ⇒ + case m: MemberEvent if routeeProvider.isAvailable(m.member) ⇒ routeeProvider.nodes += m.member.address // createRoutees will create routees based on // totalInstances and maxInstancesPerNode diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/ClientDowningNodeThatIsUnreachableSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/ClientDowningNodeThatIsUnreachableSpec.scala index 49483d39ef..bf1009b472 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/ClientDowningNodeThatIsUnreachableSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/ClientDowningNodeThatIsUnreachableSpec.scala @@ -8,6 +8,7 @@ import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ import akka.actor.Address +import scala.collection.immutable case class ClientDowningNodeThatIsUnreachableMultiNodeConfig(failureDetectorPuppet: Boolean) extends MultiNodeConfig { val first = role("first") @@ -44,14 +45,14 @@ abstract class ClientDowningNodeThatIsUnreachableSpec(multiNodeConfig: ClientDow runOn(first) { // kill 'third' node - testConductor.shutdown(third, 0) + testConductor.shutdown(third, 0).await markNodeAsUnavailable(thirdAddress) // mark 'third' node as DOWN cluster.down(thirdAddress) enterBarrier("down-third-node") - awaitUpConvergence(numberOfMembers = 3, canNotBePartOfMemberRing = Seq(thirdAddress)) + awaitUpConvergence(numberOfMembers = 3, canNotBePartOfMemberRing = List(thirdAddress)) clusterView.members.exists(_.address == thirdAddress) must be(false) } @@ -62,7 +63,7 @@ abstract class ClientDowningNodeThatIsUnreachableSpec(multiNodeConfig: ClientDow runOn(second, fourth) { 
enterBarrier("down-third-node") - awaitUpConvergence(numberOfMembers = 3, canNotBePartOfMemberRing = Seq(thirdAddress)) + awaitUpConvergence(numberOfMembers = 3, canNotBePartOfMemberRing = List(thirdAddress)) } enterBarrier("await-completion") diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/ClientDowningNodeThatIsUpSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/ClientDowningNodeThatIsUpSpec.scala index 5a7308ec92..2a0af15997 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/ClientDowningNodeThatIsUpSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/ClientDowningNodeThatIsUpSpec.scala @@ -8,6 +8,7 @@ import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ import akka.actor.Address +import scala.collection.immutable case class ClientDowningNodeThatIsUpMultiNodeConfig(failureDetectorPuppet: Boolean) extends MultiNodeConfig { val first = role("first") @@ -49,7 +50,7 @@ abstract class ClientDowningNodeThatIsUpSpec(multiNodeConfig: ClientDowningNodeT markNodeAsUnavailable(thirdAddress) - awaitUpConvergence(numberOfMembers = 3, canNotBePartOfMemberRing = Seq(thirdAddress)) + awaitUpConvergence(numberOfMembers = 3, canNotBePartOfMemberRing = List(thirdAddress)) clusterView.members.exists(_.address == thirdAddress) must be(false) } @@ -60,7 +61,7 @@ abstract class ClientDowningNodeThatIsUpSpec(multiNodeConfig: ClientDowningNodeT runOn(second, fourth) { enterBarrier("down-third-node") - awaitUpConvergence(numberOfMembers = 3, canNotBePartOfMemberRing = Seq(thirdAddress)) + awaitUpConvergence(numberOfMembers = 3, canNotBePartOfMemberRing = List(thirdAddress)) } enterBarrier("await-completion") diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/ClusterAccrualFailureDetectorSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/ClusterAccrualFailureDetectorSpec.scala index 0d4f62c740..552f90bd49 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/ClusterAccrualFailureDetectorSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/ClusterAccrualFailureDetectorSpec.scala @@ -6,7 +6,7 @@ package akka.cluster import com.typesafe.config.ConfigFactory import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.testkit._ object ClusterAccrualFailureDetectorMultiJvmSpec extends MultiNodeConfig { @@ -46,7 +46,7 @@ abstract class ClusterAccrualFailureDetectorSpec "mark node as 'unavailable' if a node in the cluster is shut down (and its heartbeats stops)" taggedAs LongRunningTest in { runOn(first) { - testConductor.shutdown(third, 0) + testConductor.shutdown(third, 0).await } enterBarrier("third-shutdown") diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/ClusterMetricsDataStreamingOffSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/ClusterMetricsDataStreamingOffSpec.scala deleted file mode 100644 index 18f2bcf9ae..0000000000 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/ClusterMetricsDataStreamingOffSpec.scala +++ /dev/null @@ -1,31 +0,0 @@ -/* - * Copyright (C) 2009-2012 Typesafe Inc. 
- */ - -package akka.cluster - -import scala.language.postfixOps -import scala.concurrent.util.duration._ -import akka.remote.testkit.{ MultiNodeSpec, MultiNodeConfig } -import com.typesafe.config.ConfigFactory -import akka.testkit.LongRunningTest - -object ClusterMetricsDataStreamingOffMultiJvmSpec extends MultiNodeConfig { - val first = role("first") - val second = role("second") - commonConfig(ConfigFactory.parseString("akka.cluster.metrics.rate-of-decay = 0") - .withFallback(MultiNodeClusterSpec.clusterConfigWithFailureDetectorPuppet)) -} -class ClusterMetricsDataStreamingOffMultiJvmNode1 extends ClusterMetricsDataStreamingOffSpec -class ClusterMetricsDataStreamingOffMultiJvmNode2 extends ClusterMetricsDataStreamingOffSpec - -abstract class ClusterMetricsDataStreamingOffSpec extends MultiNodeSpec(ClusterMetricsDataStreamingOffMultiJvmSpec) with MultiNodeClusterSpec with MetricSpec { - "Cluster metrics" must { - "not collect stream metric data" taggedAs LongRunningTest in within(30 seconds) { - awaitClusterUp(roles: _*) - awaitCond(clusterView.clusterMetrics.size == roles.size) - awaitCond(clusterView.clusterMetrics.flatMap(_.metrics).filter(_.trendable).forall(_.average.isEmpty)) - enterBarrier("after") - } - } -} diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/ClusterMetricsSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/ClusterMetricsSpec.scala index c6ef98e660..6712502312 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/ClusterMetricsSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/ClusterMetricsSpec.scala @@ -5,7 +5,7 @@ package akka.cluster import scala.language.postfixOps -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import com.typesafe.config.ConfigFactory import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec @@ -28,9 +28,11 @@ class ClusterMetricsMultiJvmNode3 extends ClusterMetricsSpec class ClusterMetricsMultiJvmNode4 extends ClusterMetricsSpec class ClusterMetricsMultiJvmNode5 extends ClusterMetricsSpec -abstract class ClusterMetricsSpec extends MultiNodeSpec(ClusterMetricsMultiJvmSpec) with MultiNodeClusterSpec with MetricSpec { +abstract class ClusterMetricsSpec extends MultiNodeSpec(ClusterMetricsMultiJvmSpec) with MultiNodeClusterSpec { import ClusterMetricsMultiJvmSpec._ + def isSigar(collector: MetricsCollector): Boolean = collector.isInstanceOf[SigarMetricsCollector] + "Cluster metrics" must { "periodically collect metrics on each node, publish ClusterMetricsChanged to the event stream, " + "and gossip metrics around the node ring" taggedAs LongRunningTest in within(60 seconds) { @@ -38,9 +40,8 @@ abstract class ClusterMetricsSpec extends MultiNodeSpec(ClusterMetricsMultiJvmSp enterBarrier("cluster-started") awaitCond(clusterView.members.filter(_.status == MemberStatus.Up).size == roles.size) awaitCond(clusterView.clusterMetrics.size == roles.size) - assertInitialized(cluster.settings.MetricsRateOfDecay, collectNodeMetrics(clusterView.clusterMetrics).toSet) - val collector = MetricsCollector(cluster.selfAddress, log, system.asInstanceOf[ExtendedActorSystem].dynamicAccess) - clusterView.clusterMetrics.foreach(n ⇒ assertExpectedSampleSize(collector.isSigar, cluster.settings.MetricsRateOfDecay, n)) + val collector = MetricsCollector(cluster.system, cluster.settings) + collector.sample.metrics.size must be > (3) enterBarrier("after") } "reflect the correct number of node metrics in cluster view" taggedAs LongRunningTest in within(30 seconds) { diff --git 
a/akka-cluster/src/multi-jvm/scala/akka/cluster/ConvergenceSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/ConvergenceSpec.scala index 5cbcfaf6b6..b2a9453035 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/ConvergenceSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/ConvergenceSpec.scala @@ -9,7 +9,7 @@ import com.typesafe.config.ConfigFactory import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.Address case class ConvergenceMultiNodeConfig(failureDetectorPuppet: Boolean) extends MultiNodeConfig { @@ -61,7 +61,7 @@ abstract class ConvergenceSpec(multiNodeConfig: ConvergenceMultiNodeConfig) runOn(first) { // kill 'third' node - testConductor.shutdown(third, 0) + testConductor.shutdown(third, 0).await markNodeAsUnavailable(thirdAddress) } diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/JoinInProgressSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/JoinInProgressSpec.scala index e198694aab..f59db3f21e 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/JoinInProgressSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/JoinInProgressSpec.scala @@ -8,8 +8,7 @@ import org.scalatest.BeforeAndAfter import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ -import scala.concurrent.util.duration._ -import scala.concurrent.util.Deadline +import scala.concurrent.duration._ object JoinInProgressMultiJvmSpec extends MultiNodeConfig { val first = role("first") diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/JoinSeedNodeSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/JoinSeedNodeSpec.scala index 1391b80127..464b627944 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/JoinSeedNodeSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/JoinSeedNodeSpec.scala @@ -3,12 +3,13 @@ */ package akka.cluster +import scala.collection.immutable import com.typesafe.config.ConfigFactory import org.scalatest.BeforeAndAfter import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.Address object JoinSeedNodeMultiJvmSpec extends MultiNodeConfig { @@ -35,7 +36,7 @@ abstract class JoinSeedNodeSpec import JoinSeedNodeMultiJvmSpec._ - def seedNodes: IndexedSeq[Address] = IndexedSeq(seed1, seed2, seed3) + def seedNodes: immutable.IndexedSeq[Address] = Vector(seed1, seed2, seed3) "A cluster with seed nodes" must { "be able to start the seed nodes concurrently" taggedAs LongRunningTest in { diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/LargeClusterSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/LargeClusterSpec.scala index e5c72e642b..a5d2ceb58d 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/LargeClusterSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/LargeClusterSpec.scala @@ -8,19 +8,16 @@ import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ import akka.testkit.TestEvent._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.ActorSystem -import scala.concurrent.util.Deadline import java.util.concurrent.TimeoutException import scala.collection.immutable.SortedSet import scala.concurrent.Await -import scala.concurrent.util.Duration import 
java.util.concurrent.TimeUnit import akka.remote.testconductor.RoleName import akka.actor.Props import akka.actor.Actor import akka.cluster.MemberStatus._ -import scala.concurrent.util.FiniteDuration object LargeClusterMultiJvmSpec extends MultiNodeConfig { // each jvm simulates a datacenter with many nodes @@ -42,7 +39,7 @@ object LargeClusterMultiJvmSpec extends MultiNodeConfig { gossip-interval = 500 ms auto-join = off auto-down = on - failure-detector.acceptable-heartbeat-pause = 10s + failure-detector.acceptable-heartbeat-pause = 5s publish-stats-interval = 0 s # always, when it happens } akka.event-handlers = ["akka.testkit.TestEventListener"] @@ -57,7 +54,8 @@ object LargeClusterMultiJvmSpec extends MultiNodeConfig { akka.scheduler.tick-duration = 33 ms akka.remote.log-remote-lifecycle-events = off akka.remote.netty.execution-pool-size = 4 - #akka.remote.netty.reconnection-time-window = 1s + #akka.remote.netty.reconnection-time-window = 10s + akka.remote.netty.write-timeout = 5s akka.remote.netty.backoff-timeout = 500ms akka.remote.netty.connection-timeout = 500ms @@ -135,9 +133,7 @@ abstract class LargeClusterSpec systems foreach { Cluster(_) } } - def expectedMaxDuration(totalNodes: Int): FiniteDuration = - // this cast will always succeed, but the compiler does not know about it ... - (5.seconds + (2.seconds * totalNodes)).asInstanceOf[FiniteDuration] + def expectedMaxDuration(totalNodes: Int): FiniteDuration = 5.seconds + 2.seconds * totalNodes def joinAll(from: RoleName, to: RoleName, totalNodes: Int, runOnRoles: RoleName*): Unit = { val joiningClusters = systems.map(Cluster(_)).toSet @@ -151,7 +147,7 @@ abstract class LargeClusterSpec runOn(runOnRoles: _*) { systems.size must be(nodesPerDatacenter) // make sure it is initialized - val clusterNodes = ifNode(from)(joiningClusterNodes)(systems.map(Cluster(_)).toSet) + val clusterNodes = if(isNode(from)) joiningClusterNodes else systems.map(Cluster(_)).toSet val startGossipCounts = Map.empty[Cluster, Long] ++ clusterNodes.map(c ⇒ (c -> c.readView.latestStats.receivedGossipCount)) def gossipCount(c: Cluster): Long = { @@ -263,7 +259,7 @@ abstract class LargeClusterSpec if (bulk.nonEmpty) { val totalNodes = nodesPerDatacenter * 4 + bulk.size within(expectedMaxDuration(totalNodes)) { - val joiningClusters = ifNode(fifthDatacenter)(bulk.map(Cluster(_)).toSet)(Set.empty) + val joiningClusters = if(isNode(fifthDatacenter)) bulk.map(Cluster(_)).toSet else Set.empty[Cluster] join(joiningClusters, from = fifthDatacenter, to = firstDatacenter, totalNodes, runOnRoles = firstDatacenter, secondDatacenter, thirdDatacenter, fourthDatacenter, fifthDatacenter) enterBarrier("fifth-datacenter-joined-" + bulk.size) @@ -273,7 +269,7 @@ abstract class LargeClusterSpec for (i ← 0 until oneByOne.size) { val totalNodes = nodesPerDatacenter * 4 + bulk.size + i + 1 within(expectedMaxDuration(totalNodes)) { - val joiningClusters = ifNode(fifthDatacenter)(Set(Cluster(oneByOne(i))))(Set.empty) + val joiningClusters = if(isNode(fifthDatacenter)) Set(Cluster(oneByOne(i))) else Set.empty[Cluster] join(joiningClusters, from = fifthDatacenter, to = firstDatacenter, totalNodes, runOnRoles = firstDatacenter, secondDatacenter, thirdDatacenter, fourthDatacenter, fifthDatacenter) enterBarrier("fifth-datacenter-joined-" + (bulk.size + i)) @@ -285,7 +281,7 @@ abstract class LargeClusterSpec val unreachableNodes = nodesPerDatacenter val liveNodes = nodesPerDatacenter * 4 - within((30.seconds + (3.seconds * liveNodes)).asInstanceOf[FiniteDuration]) { + within(30.seconds + 
3.seconds * liveNodes) { val startGossipCounts = Map.empty[Cluster, Long] ++ systems.map(sys ⇒ (Cluster(sys) -> Cluster(sys).readView.latestStats.receivedGossipCount)) def gossipCount(c: Cluster): Long = { @@ -319,7 +315,7 @@ abstract class LargeClusterSpec } runOn(firstDatacenter) { - testConductor.shutdown(secondDatacenter, 0) + testConductor.shutdown(secondDatacenter, 0).await } enterBarrier("second-datacenter-shutdown") diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/LeaderDowningNodeThatIsUnreachableSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/LeaderDowningNodeThatIsUnreachableSpec.scala index 4299ffe839..279e32ab66 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/LeaderDowningNodeThatIsUnreachableSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/LeaderDowningNodeThatIsUnreachableSpec.scala @@ -10,7 +10,8 @@ import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ import akka.actor._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ +import scala.collection.immutable case class LeaderDowningNodeThatIsUnreachableMultiNodeConfig(failureDetectorPuppet: Boolean) extends MultiNodeConfig { val first = role("first") @@ -51,7 +52,7 @@ abstract class LeaderDowningNodeThatIsUnreachableSpec(multiNodeConfig: LeaderDow val fourthAddress = address(fourth) runOn(first) { // kill 'fourth' node - testConductor.shutdown(fourth, 0) + testConductor.shutdown(fourth, 0).await enterBarrier("down-fourth-node") // mark the node as unreachable in the failure detector @@ -59,7 +60,7 @@ abstract class LeaderDowningNodeThatIsUnreachableSpec(multiNodeConfig: LeaderDow // --- HERE THE LEADER SHOULD DETECT FAILURE AND AUTO-DOWN THE UNREACHABLE NODE --- - awaitUpConvergence(numberOfMembers = 3, canNotBePartOfMemberRing = Seq(fourthAddress), 30.seconds) + awaitUpConvergence(numberOfMembers = 3, canNotBePartOfMemberRing = List(fourthAddress), 30.seconds) } runOn(fourth) { @@ -69,7 +70,7 @@ abstract class LeaderDowningNodeThatIsUnreachableSpec(multiNodeConfig: LeaderDow runOn(second, third) { enterBarrier("down-fourth-node") - awaitUpConvergence(numberOfMembers = 3, canNotBePartOfMemberRing = Seq(fourthAddress), 30.seconds) + awaitUpConvergence(numberOfMembers = 3, canNotBePartOfMemberRing = List(fourthAddress), 30.seconds) } enterBarrier("await-completion-1") @@ -81,7 +82,7 @@ abstract class LeaderDowningNodeThatIsUnreachableSpec(multiNodeConfig: LeaderDow enterBarrier("before-down-second-node") runOn(first) { // kill 'second' node - testConductor.shutdown(second, 0) + testConductor.shutdown(second, 0).await enterBarrier("down-second-node") // mark the node as unreachable in the failure detector @@ -89,7 +90,7 @@ abstract class LeaderDowningNodeThatIsUnreachableSpec(multiNodeConfig: LeaderDow // --- HERE THE LEADER SHOULD DETECT FAILURE AND AUTO-DOWN THE UNREACHABLE NODE --- - awaitUpConvergence(numberOfMembers = 2, canNotBePartOfMemberRing = Seq(secondAddress), 30.seconds) + awaitUpConvergence(numberOfMembers = 2, canNotBePartOfMemberRing = List(secondAddress), 30.seconds) } runOn(second) { @@ -99,7 +100,7 @@ abstract class LeaderDowningNodeThatIsUnreachableSpec(multiNodeConfig: LeaderDow runOn(third) { enterBarrier("down-second-node") - awaitUpConvergence(numberOfMembers = 2, canNotBePartOfMemberRing = Seq(secondAddress), 30 seconds) + awaitUpConvergence(numberOfMembers = 2, canNotBePartOfMemberRing = List(secondAddress), 30 seconds) } enterBarrier("await-completion-2") diff --git 
a/akka-cluster/src/multi-jvm/scala/akka/cluster/LeaderElectionSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/LeaderElectionSpec.scala index 8c2198dd7b..dfe1553369 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/LeaderElectionSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/LeaderElectionSpec.scala @@ -4,10 +4,13 @@ package akka.cluster +import language.postfixOps import com.typesafe.config.ConfigFactory import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ +import scala.concurrent.duration._ +import scala.collection.immutable case class LeaderElectionMultiNodeConfig(failureDetectorPuppet: Boolean) extends MultiNodeConfig { val controller = role("controller") @@ -40,7 +43,7 @@ abstract class LeaderElectionSpec(multiNodeConfig: LeaderElectionMultiNodeConfig import multiNodeConfig._ // sorted in the order used by the cluster - lazy val sortedRoles = Seq(first, second, third, fourth).sorted + lazy val sortedRoles = List(first, second, third, fourth).sorted "A cluster of four nodes" must { @@ -61,48 +64,58 @@ abstract class LeaderElectionSpec(multiNodeConfig: LeaderElectionMultiNodeConfig val leader = currentRoles.head val aUser = currentRoles.last val remainingRoles = currentRoles.tail + val n = "-" + (alreadyShutdown + 1) myself match { case `controller` ⇒ val leaderAddress = address(leader) - enterBarrier("before-shutdown") - testConductor.shutdown(leader, 0) - enterBarrier("after-shutdown", "after-down", "completed") - markNodeAsUnavailable(leaderAddress) + enterBarrier("before-shutdown" + n) + testConductor.shutdown(leader, 0).await + enterBarrier("after-shutdown" + n, "after-unavailable" + n, "after-down" + n, "completed" + n) case `leader` ⇒ - enterBarrier("before-shutdown", "after-shutdown") + enterBarrier("before-shutdown" + n, "after-shutdown" + n) // this node will be shutdown by the controller and doesn't participate in more barriers case `aUser` ⇒ val leaderAddress = address(leader) - enterBarrier("before-shutdown", "after-shutdown") + enterBarrier("before-shutdown" + n, "after-shutdown" + n) + + // detect failure + markNodeAsUnavailable(leaderAddress) + awaitCond(clusterView.unreachableMembers.exists(m ⇒ m.address == leaderAddress)) + enterBarrier("after-unavailable" + n) + // user marks the shutdown leader as DOWN cluster.down(leaderAddress) - enterBarrier("after-down", "completed") - markNodeAsUnavailable(leaderAddress) + enterBarrier("after-down" + n, "completed" + n) case _ if remainingRoles.contains(myself) ⇒ // remaining cluster nodes, not shutdown - enterBarrier("before-shutdown", "after-shutdown", "after-down") + val leaderAddress = address(leader) + enterBarrier("before-shutdown" + n, "after-shutdown" + n) + awaitCond(clusterView.unreachableMembers.exists(m ⇒ m.address == leaderAddress)) + enterBarrier("after-unavailable" + n) + + enterBarrier("after-down" + n) awaitUpConvergence(currentRoles.size - 1) val nextExpectedLeader = remainingRoles.head clusterView.isLeader must be(myself == nextExpectedLeader) assertLeaderIn(remainingRoles) - enterBarrier("completed") + enterBarrier("completed" + n) } } - "be able to 're-elect' a single leader after leader has left" taggedAs LongRunningTest in { + "be able to 're-elect' a single leader after leader has left" taggedAs LongRunningTest in within(20 seconds) { shutdownLeaderAndVerifyNewLeader(alreadyShutdown = 0) enterBarrier("after-2") } - "be able to 're-elect' a single leader after leader has left (again)" taggedAs LongRunningTest in { + "be 
able to 're-elect' a single leader after leader has left (again)" taggedAs LongRunningTest in within(20 seconds) { shutdownLeaderAndVerifyNewLeader(alreadyShutdown = 1) enterBarrier("after-3") } diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/LeaderLeavingSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/LeaderLeavingSpec.scala index 394db2af77..acaf909d57 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/LeaderLeavingSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/LeaderLeavingSpec.scala @@ -8,7 +8,7 @@ import com.typesafe.config.ConfigFactory import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.Props import akka.actor.Actor import akka.cluster.MemberStatus._ @@ -57,7 +57,7 @@ abstract class LeaderLeavingSpec enterBarrier("leader-left") // verify that the LEADER is shut down - awaitCond(!cluster.isRunning) + awaitCond(cluster.isTerminated) // verify that the LEADER is REMOVED awaitCond(clusterView.status == Removed) diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/MBeanSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/MBeanSpec.scala new file mode 100644 index 0000000000..e6d83f881e --- /dev/null +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/MBeanSpec.scala @@ -0,0 +1,149 @@ +/** + * Copyright (C) 2009-2012 Typesafe Inc. + */ +package akka.cluster + +import language.postfixOps +import com.typesafe.config.ConfigFactory +import scala.concurrent.duration._ +import java.lang.management.ManagementFactory +import javax.management.InstanceNotFoundException +import javax.management.ObjectName +import akka.remote.testkit.MultiNodeConfig +import akka.remote.testkit.MultiNodeSpec +import akka.testkit._ +import scala.util.Try + +object MBeanMultiJvmSpec extends MultiNodeConfig { + val first = role("first") + val second = role("second") + val third = role("third") + val fourth = role("fourth") + + commonConfig(debugConfig(on = false).withFallback(ConfigFactory.parseString(""" + akka.cluster.jmx.enabled = on + """)).withFallback(MultiNodeClusterSpec.clusterConfig)) + +} + +class MBeanMultiJvmNode1 extends MBeanSpec +class MBeanMultiJvmNode2 extends MBeanSpec +class MBeanMultiJvmNode3 extends MBeanSpec +class MBeanMultiJvmNode4 extends MBeanSpec + +abstract class MBeanSpec + extends MultiNodeSpec(MBeanMultiJvmSpec) + with MultiNodeClusterSpec { + + import MBeanMultiJvmSpec._ + import ClusterEvent._ + + val mbeanName = new ObjectName("akka:type=Cluster") + lazy val mbeanServer = ManagementFactory.getPlatformMBeanServer + + "Cluster MBean" must { + "expose attributes" taggedAs LongRunningTest in { + val info = mbeanServer.getMBeanInfo(mbeanName) + info.getAttributes.map(_.getName).toSet must be(Set( + "ClusterStatus", "Members", "Unreachable", "MemberStatus", "Leader", "Singleton", "Available")) + enterBarrier("after-1") + } + + "expose operations" taggedAs LongRunningTest in { + val info = mbeanServer.getMBeanInfo(mbeanName) + info.getOperations.map(_.getName).toSet must be(Set( + "join", "leave", "down")) + enterBarrier("after-2") + } + + "change attributes after startup" taggedAs LongRunningTest in { + runOn(first) { + mbeanServer.getAttribute(mbeanName, "Available").asInstanceOf[Boolean] must be(false) + mbeanServer.getAttribute(mbeanName, "Singleton").asInstanceOf[Boolean] must be(false) + mbeanServer.getAttribute(mbeanName, "Leader") must be("") + mbeanServer.getAttribute(mbeanName, 
"Members") must be("") + mbeanServer.getAttribute(mbeanName, "Unreachable") must be("") + mbeanServer.getAttribute(mbeanName, "MemberStatus") must be("Removed") + } + awaitClusterUp(first) + runOn(first) { + awaitCond(mbeanServer.getAttribute(mbeanName, "MemberStatus") == "Up") + awaitCond(mbeanServer.getAttribute(mbeanName, "Leader") == address(first).toString) + mbeanServer.getAttribute(mbeanName, "Singleton").asInstanceOf[Boolean] must be(true) + mbeanServer.getAttribute(mbeanName, "Members") must be(address(first).toString) + mbeanServer.getAttribute(mbeanName, "Unreachable") must be("") + mbeanServer.getAttribute(mbeanName, "Available").asInstanceOf[Boolean] must be(true) + } + enterBarrier("after-3") + } + + "support join" taggedAs LongRunningTest in { + runOn(second, third, fourth) { + mbeanServer.invoke(mbeanName, "join", Array(address(first).toString), Array("java.lang.String")) + } + enterBarrier("joined") + + awaitUpConvergence(4) + assertMembers(clusterView.members, roles.map(address(_)): _*) + awaitCond(mbeanServer.getAttribute(mbeanName, "MemberStatus") == "Up") + val expectedMembers = roles.sorted.map(address(_)).mkString(",") + awaitCond(mbeanServer.getAttribute(mbeanName, "Members") == expectedMembers) + val expectedLeader = address(roleOfLeader()) + awaitCond(mbeanServer.getAttribute(mbeanName, "Leader") == expectedLeader.toString) + mbeanServer.getAttribute(mbeanName, "Singleton").asInstanceOf[Boolean] must be(false) + + enterBarrier("after-4") + } + + "support down" taggedAs LongRunningTest in { + val fourthAddress = address(fourth) + runOn(first) { + testConductor.shutdown(fourth, 0).await + } + enterBarrier("fourth-shutdown") + + runOn(first, second, third) { + awaitCond(mbeanServer.getAttribute(mbeanName, "Unreachable") == fourthAddress.toString) + val expectedMembers = Seq(first, second, third).sorted.map(address(_)).mkString(",") + awaitCond(mbeanServer.getAttribute(mbeanName, "Members") == expectedMembers) + } + enterBarrier("fourth-unreachable") + + runOn(second) { + mbeanServer.invoke(mbeanName, "down", Array(fourthAddress.toString), Array("java.lang.String")) + } + enterBarrier("fourth-down") + + runOn(first, second, third) { + awaitUpConvergence(3, canNotBePartOfMemberRing = List(fourthAddress)) + assertMembers(clusterView.members, first, second, third) + } + + enterBarrier("after-5") + } + + "support leave" taggedAs LongRunningTest in within(20 seconds) { + runOn(second) { + mbeanServer.invoke(mbeanName, "leave", Array(address(third).toString), Array("java.lang.String")) + } + enterBarrier("third-left") + runOn(first, second) { + awaitUpConvergence(2) + assertMembers(clusterView.members, first, second) + val expectedMembers = Seq(first, second).sorted.map(address(_)).mkString(",") + awaitCond(mbeanServer.getAttribute(mbeanName, "Members") == expectedMembers) + } + runOn(third) { + awaitCond(cluster.isTerminated) + // mbean should be unregistered, i.e. 
throw InstanceNotFoundException + awaitCond(Try { mbeanServer.getMBeanInfo(mbeanName); false } recover { + case e: InstanceNotFoundException ⇒ true + case _ ⇒ false + } get) + } + + enterBarrier("after-6") + } + + } +} diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/MembershipChangeListenerExitingSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/MembershipChangeListenerExitingSpec.scala index afeec13d9e..b36ffccf7c 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/MembershipChangeListenerExitingSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/MembershipChangeListenerExitingSpec.scala @@ -9,7 +9,7 @@ import com.typesafe.config.ConfigFactory import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.Props import akka.actor.Actor import akka.cluster.MemberStatus._ diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/MembershipChangeListenerJoinSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/MembershipChangeListenerJoinSpec.scala index 6454a87d45..effff75438 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/MembershipChangeListenerJoinSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/MembershipChangeListenerJoinSpec.scala @@ -9,7 +9,7 @@ import org.scalatest.BeforeAndAfter import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.Props import akka.actor.Actor diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/MultiNodeClusterSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/MultiNodeClusterSpec.scala index a5415e4aca..43af47b70f 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/MultiNodeClusterSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/MultiNodeClusterSpec.scala @@ -4,23 +4,21 @@ package akka.cluster import language.implicitConversions + +import org.scalatest.Suite +import org.scalatest.exceptions.TestFailedException + import com.typesafe.config.Config import com.typesafe.config.ConfigFactory -import akka.actor.{ Address, ExtendedActorSystem } import akka.remote.testconductor.RoleName import akka.remote.testkit.{ STMultiNodeSpec, MultiNodeSpec } import akka.testkit._ import akka.testkit.TestEvent._ -import scala.concurrent.util.duration._ -import scala.concurrent.util.Duration -import org.scalatest.Suite -import org.scalatest.exceptions.TestFailedException -import java.util.concurrent.ConcurrentHashMap -import akka.actor.ActorPath -import akka.actor.RootActorPath -import scala.concurrent.util.FiniteDuration +import akka.actor.{ ActorSystem, Address } import akka.event.Logging.ErrorLevel -import akka.actor.ActorSystem +import scala.concurrent.duration._ +import scala.collection.immutable +import java.util.concurrent.ConcurrentHashMap object MultiNodeClusterSpec { @@ -66,7 +64,7 @@ trait MultiNodeClusterSpec extends Suite with STMultiNodeSpec { self: MultiNodeS def muteLog(sys: ActorSystem = system): Unit = { if (!sys.log.isDebugEnabled) { Seq(".*Metrics collection has started successfully.*", - ".*Hyperic SIGAR was not found on the classpath.*", + ".*Metrics will be retrieved from MBeans.*", ".*Cluster Node.* - registered cluster JMX MBean.*", ".*Cluster Node.* - is starting up.*", ".*Shutting down cluster Node.*", @@ -160,7 +158,7 @@ trait MultiNodeClusterSpec extends Suite with 
STMultiNodeSpec { self: MultiNodeS * nodes (roles). First node will be started first * and others will join the first. */ - def startCluster(roles: RoleName*): Unit = awaitStartCluster(false, roles.toSeq) + def startCluster(roles: RoleName*): Unit = awaitStartCluster(false, roles.to[immutable.Seq]) /** * Initialize the cluster of the specified member @@ -168,11 +166,9 @@ trait MultiNodeClusterSpec extends Suite with STMultiNodeSpec { self: MultiNodeS * First node will be started first and others will join * the first. */ - def awaitClusterUp(roles: RoleName*): Unit = { - awaitStartCluster(true, roles.toSeq) - } + def awaitClusterUp(roles: RoleName*): Unit = awaitStartCluster(true, roles.to[immutable.Seq]) - private def awaitStartCluster(upConvergence: Boolean = true, roles: Seq[RoleName]): Unit = { + private def awaitStartCluster(upConvergence: Boolean = true, roles: immutable.Seq[RoleName]): Unit = { runOn(roles.head) { // make sure that the node-to-join is started before other join startClusterNode() @@ -198,19 +194,21 @@ trait MultiNodeClusterSpec extends Suite with STMultiNodeSpec { self: MultiNodeS expectedAddresses.sorted.zipWithIndex.foreach { case (a, i) ⇒ members(i).address must be(a) } } - def assertLeader(nodesInCluster: RoleName*): Unit = if (nodesInCluster.contains(myself)) { - assertLeaderIn(nodesInCluster) - } + def assertLeader(nodesInCluster: RoleName*): Unit = + if (nodesInCluster.contains(myself)) assertLeaderIn(nodesInCluster.to[immutable.Seq]) /** * Assert that the cluster has elected the correct leader * out of all nodes in the cluster. First * member in the cluster ring is expected leader. */ - def assertLeaderIn(nodesInCluster: Seq[RoleName]): Unit = if (nodesInCluster.contains(myself)) { + def assertLeaderIn(nodesInCluster: immutable.Seq[RoleName]): Unit = if (nodesInCluster.contains(myself)) { nodesInCluster.length must not be (0) val expectedLeader = roleOfLeader(nodesInCluster) - clusterView.isLeader must be(ifNode(expectedLeader)(true)(false)) + val leader = clusterView.leader + val isLeader = leader == Some(clusterView.selfAddress) + assert(isLeader == isNode(expectedLeader), + "expectedLeader [%s], got leader [%s], members [%s]".format(expectedLeader, leader, clusterView.members)) clusterView.status must (be(MemberStatus.Up) or be(MemberStatus.Leaving)) } @@ -220,12 +218,15 @@ trait MultiNodeClusterSpec extends Suite with STMultiNodeSpec { self: MultiNodeS */ def awaitUpConvergence( numberOfMembers: Int, - canNotBePartOfMemberRing: Seq[Address] = Seq.empty[Address], + canNotBePartOfMemberRing: immutable.Seq[Address] = Nil, timeout: FiniteDuration = 20.seconds): Unit = { within(timeout) { awaitCond(clusterView.members.size == numberOfMembers) awaitCond(clusterView.members.forall(_.status == MemberStatus.Up)) awaitCond(clusterView.convergence) + // clusterView.leader is updated by LeaderChanged, await that to be updated also + val expectedLeader = clusterView.members.headOption.map(_.address) + awaitCond(clusterView.leader == expectedLeader) if (!canNotBePartOfMemberRing.isEmpty) // don't run this on an empty set awaitCond( canNotBePartOfMemberRing forall (address ⇒ !(clusterView.members exists (_.address == address)))) @@ -238,7 +239,7 @@ trait MultiNodeClusterSpec extends Suite with STMultiNodeSpec { self: MultiNodeS def awaitSeenSameState(addresses: Address*): Unit = awaitCond((addresses.toSet -- clusterView.seenBy).isEmpty) - def roleOfLeader(nodesInCluster: Seq[RoleName] = roles): RoleName = { + def roleOfLeader(nodesInCluster: immutable.Seq[RoleName] = 
roles): RoleName = { nodesInCluster.length must not be (0) nodesInCluster.sorted.head } diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/NodeLeavingAndExitingAndBeingRemovedSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/NodeLeavingAndExitingAndBeingRemovedSpec.scala index 3fec2f22ad..2dfddc330f 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/NodeLeavingAndExitingAndBeingRemovedSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/NodeLeavingAndExitingAndBeingRemovedSpec.scala @@ -8,7 +8,7 @@ import com.typesafe.config.ConfigFactory import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ object NodeLeavingAndExitingAndBeingRemovedMultiJvmSpec extends MultiNodeConfig { val first = role("first") @@ -51,7 +51,7 @@ abstract class NodeLeavingAndExitingAndBeingRemovedSpec runOn(second) { // verify that the second node is shut down and has status REMOVED - awaitCond(!cluster.isRunning, reaperWaitingTime) + awaitCond(cluster.isTerminated, reaperWaitingTime) awaitCond(clusterView.status == MemberStatus.Removed, reaperWaitingTime) } diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/NodeLeavingAndExitingSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/NodeLeavingAndExitingSpec.scala index 2e25b5fc12..e1051e4161 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/NodeLeavingAndExitingSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/NodeLeavingAndExitingSpec.scala @@ -8,7 +8,7 @@ import com.typesafe.config.ConfigFactory import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.Props import akka.actor.Actor import akka.cluster.MemberStatus._ diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/NodeUpSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/NodeUpSpec.scala index 0b6cea8683..0a82b74563 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/NodeUpSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/NodeUpSpec.scala @@ -8,7 +8,7 @@ import org.scalatest.BeforeAndAfter import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import scala.collection.immutable.SortedSet import java.util.concurrent.atomic.AtomicReference import akka.actor.Props diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/SingletonClusterSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/SingletonClusterSpec.scala index 1bde3bfd3d..33ce67ecb5 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/SingletonClusterSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/SingletonClusterSpec.scala @@ -7,7 +7,8 @@ import com.typesafe.config.ConfigFactory import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ +import scala.collection.immutable case class SingletonClusterMultiNodeConfig(failureDetectorPuppet: Boolean) extends MultiNodeConfig { val first = role("first") @@ -61,13 +62,13 @@ abstract class SingletonClusterSpec(multiNodeConfig: SingletonClusterMultiNodeCo "become singleton cluster when one node is shutdown" taggedAs LongRunningTest in { runOn(first) { val secondAddress = 
address(second) - testConductor.shutdown(second, 0) + testConductor.shutdown(second, 0).await markNodeAsUnavailable(secondAddress) - awaitUpConvergence(numberOfMembers = 1, canNotBePartOfMemberRing = Seq(secondAddress), 30.seconds) + awaitUpConvergence(numberOfMembers = 1, canNotBePartOfMemberRing = List(secondAddress), 30.seconds) clusterView.isSingletonCluster must be(true) - assertLeader(first) + awaitCond(clusterView.isLeader) } enterBarrier("after-3") diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/SplitBrainSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/SplitBrainSpec.scala index 0c98b178a3..967e5adb52 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/SplitBrainSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/SplitBrainSpec.scala @@ -9,9 +9,10 @@ import com.typesafe.config.ConfigFactory import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ -import scala.concurrent.util.duration._ import akka.actor.Address import akka.remote.testconductor.Direction +import scala.concurrent.duration._ +import scala.collection.immutable case class SplitBrainMultiNodeConfig(failureDetectorPuppet: Boolean) extends MultiNodeConfig { val first = role("first") @@ -27,6 +28,8 @@ case class SplitBrainMultiNodeConfig(failureDetectorPuppet: Boolean) extends Mul failure-detector.threshold = 4 }""")). withFallback(MultiNodeClusterSpec.clusterConfig(failureDetectorPuppet))) + + testTransport(on = true) } class SplitBrainWithFailureDetectorPuppetMultiJvmNode1 extends SplitBrainSpec(failureDetectorPuppet = true) @@ -51,10 +54,10 @@ abstract class SplitBrainSpec(multiNodeConfig: SplitBrainMultiNodeConfig) muteMarkingAsUnreachable() - val side1 = IndexedSeq(first, second) - val side2 = IndexedSeq(third, fourth, fifth) + val side1 = Vector(first, second) + val side2 = Vector(third, fourth, fifth) - "A cluster of 5 members" must { + "A cluster of 5 members" ignore { "reach initial convergence" taggedAs LongRunningTest in { awaitClusterUp(first, second, third, fourth, fifth) diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/SunnyWeatherSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/SunnyWeatherSpec.scala index 2fa233bcf5..581eca3978 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/SunnyWeatherSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/SunnyWeatherSpec.scala @@ -8,7 +8,7 @@ import org.scalatest.BeforeAndAfter import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import java.util.concurrent.atomic.AtomicReference import scala.collection.immutable.SortedSet import akka.actor.Props diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/UnreachableNodeRejoinsClusterSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/UnreachableNodeRejoinsClusterSpec.scala index c95462c7d4..c78cbf904d 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/UnreachableNodeRejoinsClusterSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/UnreachableNodeRejoinsClusterSpec.scala @@ -6,13 +6,14 @@ package akka.cluster import language.postfixOps import org.scalatest.BeforeAndAfter +import com.typesafe.config.ConfigFactory import akka.remote.testkit.MultiNodeConfig import akka.remote.testkit.MultiNodeSpec import akka.testkit._ -import com.typesafe.config.ConfigFactory import akka.actor.Address import akka.remote.testconductor.{ RoleName, Direction } 
-import scala.concurrent.util.duration._ +import scala.concurrent.duration._ +import scala.collection.immutable case class UnreachableNodeRejoinsClusterMultiNodeConfig(failureDetectorPuppet: Boolean) extends MultiNodeConfig { val first = role("first") @@ -21,6 +22,8 @@ case class UnreachableNodeRejoinsClusterMultiNodeConfig(failureDetectorPuppet: B val fourth = role("fourth") commonConfig(debugConfig(on = false).withFallback(MultiNodeClusterSpec.clusterConfig)) + + testTransport(on = true) } class UnreachableNodeRejoinsClusterWithFailureDetectorPuppetMultiJvmNode1 extends UnreachableNodeRejoinsClusterSpec(failureDetectorPuppet = true) @@ -43,7 +46,7 @@ abstract class UnreachableNodeRejoinsClusterSpec(multiNodeConfig: UnreachableNod muteMarkingAsUnreachable() - def allBut(role: RoleName, roles: Seq[RoleName] = roles): Seq[RoleName] = { + def allBut(role: RoleName, roles: immutable.Seq[RoleName] = roles): immutable.Seq[RoleName] = { roles.filterNot(_ == role) } @@ -57,7 +60,7 @@ abstract class UnreachableNodeRejoinsClusterSpec(multiNodeConfig: UnreachableNod enterBarrier("after_" + endBarrierNumber) } - "A cluster of " + roles.size + " members" must { + "A cluster of " + roles.size + " members" ignore { "reach initial convergence" taggedAs LongRunningTest in { awaitClusterUp(roles: _*) @@ -123,7 +126,7 @@ abstract class UnreachableNodeRejoinsClusterSpec(multiNodeConfig: UnreachableNod } runOn(allBut(victim): _*) { - awaitUpConvergence(roles.size - 1, Seq(victim)) + awaitUpConvergence(roles.size - 1, List(victim)) } endBarrier diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/routing/AdaptiveLoadBalancingRouterSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/routing/AdaptiveLoadBalancingRouterSpec.scala new file mode 100644 index 0000000000..723ef6b8ec --- /dev/null +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/routing/AdaptiveLoadBalancingRouterSpec.scala @@ -0,0 +1,218 @@ +/* + * Copyright (C) 2009-2012 Typesafe Inc. + */ + +package akka.cluster.routing + +import language.postfixOps +import java.lang.management.ManagementFactory +import scala.concurrent.Await +import scala.concurrent.duration._ +import com.typesafe.config.Config +import com.typesafe.config.ConfigFactory + +import akka.actor._ +import akka.cluster.Cluster +import akka.cluster.MultiNodeClusterSpec +import akka.cluster.NodeMetrics +import akka.pattern.ask +import akka.remote.testkit.{ MultiNodeSpec, MultiNodeConfig } +import akka.routing.CurrentRoutees +import akka.routing.FromConfig +import akka.routing.RouterRoutees +import akka.testkit.{ LongRunningTest, DefaultTimeout, ImplicitSender } + +object AdaptiveLoadBalancingRouterMultiJvmSpec extends MultiNodeConfig { + + class Routee extends Actor { + def receive = { + case _ ⇒ sender ! Reply(Cluster(context.system).selfAddress) + } + } + + class Memory extends Actor with ActorLogging { + var usedMemory: Array[Array[Int]] = _ + def receive = { + case AllocateMemory ⇒ + val heap = ManagementFactory.getMemoryMXBean.getHeapMemoryUsage + // getMax can be undefined (-1) + val max = math.max(heap.getMax, heap.getCommitted) + val used = heap.getUsed + log.debug("used heap before: [{}] bytes, of max [{}]", used, heap.getMax) + // allocate 70% of free space + val allocateBytes = (0.7 * (max - used)).toInt + val numberOfArrays = allocateBytes / 1024 + usedMemory = Array.ofDim(numberOfArrays, 248) // each 248 element Int array will use ~ 1 kB + log.debug("used heap after: [{}] bytes", ManagementFactory.getMemoryMXBean.getHeapMemoryUsage.getUsed) + sender ! 
"done" + } + } + + case object AllocateMemory + case class Reply(address: Address) + + val first = role("first") + val second = role("second") + val third = role("third") + + commonConfig(debugConfig(on = false).withFallback(ConfigFactory.parseString(""" + akka.cluster.metrics.collect-interval = 1s + akka.cluster.metrics.gossip-interval = 1s + akka.cluster.metrics.moving-average-half-life = 2s + akka.actor.deployment { + /router3 = { + router = adaptive + metrics-selector = cpu + nr-of-instances = 9 + } + /router4 = { + router = adaptive + metrics-selector = "akka.cluster.routing.TestCustomMetricsSelector" + nr-of-instances = 10 + cluster { + enabled = on + max-nr-of-instances-per-node = 2 + } + } + } + """)).withFallback(MultiNodeClusterSpec.clusterConfig)) + +} + +class TestCustomMetricsSelector(config: Config) extends MetricsSelector { + override def weights(nodeMetrics: Set[NodeMetrics]): Map[Address, Int] = Map.empty +} + +class AdaptiveLoadBalancingRouterMultiJvmNode1 extends AdaptiveLoadBalancingRouterSpec +class AdaptiveLoadBalancingRouterMultiJvmNode2 extends AdaptiveLoadBalancingRouterSpec +class AdaptiveLoadBalancingRouterMultiJvmNode3 extends AdaptiveLoadBalancingRouterSpec + +abstract class AdaptiveLoadBalancingRouterSpec extends MultiNodeSpec(AdaptiveLoadBalancingRouterMultiJvmSpec) + with MultiNodeClusterSpec + with ImplicitSender with DefaultTimeout { + import AdaptiveLoadBalancingRouterMultiJvmSpec._ + + def currentRoutees(router: ActorRef) = + Await.result(router ? CurrentRoutees, remaining).asInstanceOf[RouterRoutees].routees + + def receiveReplies(expectedReplies: Int): Map[Address, Int] = { + val zero = Map.empty[Address, Int] ++ roles.map(address(_) -> 0) + (receiveWhile(5 seconds, messages = expectedReplies) { + case Reply(address) ⇒ address + }).foldLeft(zero) { + case (replyMap, address) ⇒ replyMap + (address -> (replyMap(address) + 1)) + } + } + + /** + * Fills in self address for local ActorRef + */ + def fullAddress(actorRef: ActorRef): Address = actorRef.path.address match { + case Address(_, _, None, None) ⇒ cluster.selfAddress + case a ⇒ a + } + + def startRouter(name: String): ActorRef = { + val router = system.actorOf(Props[Routee].withRouter(ClusterRouterConfig( + local = AdaptiveLoadBalancingRouter(HeapMetricsSelector), + settings = ClusterRouterSettings(totalInstances = 10, maxInstancesPerNode = 1))), name) + awaitCond { + // it may take some time until router receives cluster member events + currentRoutees(router).size == roles.size + } + currentRoutees(router).map(fullAddress).toSet must be(roles.map(address).toSet) + router + } + + "A cluster with a AdaptiveLoadBalancingRouter" must { + "start cluster nodes" taggedAs LongRunningTest in { + awaitClusterUp(roles: _*) + enterBarrier("after-1") + } + + "use all nodes in the cluster when not overloaded" taggedAs LongRunningTest in { + runOn(first) { + val router1 = startRouter("router1") + + // collect some metrics before we start + Thread.sleep(cluster.settings.MetricsInterval.toMillis * 10) + + val iterationCount = 100 + 1 to iterationCount foreach { _ ⇒ + router1 ! 
"hit" + // wait a while between each message, since metrics is collected periodically + Thread.sleep(10) + } + + val replies = receiveReplies(iterationCount) + + replies(first) must be > (0) + replies(second) must be > (0) + replies(third) must be > (0) + replies.values.sum must be(iterationCount) + + } + + enterBarrier("after-2") + } + + "prefer node with more free heap capacity" taggedAs LongRunningTest in { + System.gc() + enterBarrier("gc") + + runOn(second) { + within(20.seconds) { + system.actorOf(Props[Memory], "memory") ! AllocateMemory + expectMsg("done") + } + } + enterBarrier("heap-allocated") + + runOn(first) { + val router2 = startRouter("router2") + router2 + + // collect some metrics before we start + Thread.sleep(cluster.settings.MetricsInterval.toMillis * 10) + + val iterationCount = 3000 + 1 to iterationCount foreach { _ ⇒ + router2 ! "hit" + } + + val replies = receiveReplies(iterationCount) + + replies(third) must be > (replies(second)) + replies.values.sum must be(iterationCount) + + } + + enterBarrier("after-3") + } + + "create routees from configuration" taggedAs LongRunningTest in { + runOn(first) { + val router3 = system.actorOf(Props[Memory].withRouter(FromConfig()), "router3") + awaitCond { + // it may take some time until router receives cluster member events + currentRoutees(router3).size == 9 + } + currentRoutees(router3).map(fullAddress).toSet must be(Set(address(first))) + } + enterBarrier("after-4") + } + + "create routees from cluster.enabled configuration" taggedAs LongRunningTest in { + runOn(first) { + val router4 = system.actorOf(Props[Memory].withRouter(FromConfig()), "router4") + awaitCond { + // it may take some time until router receives cluster member events + currentRoutees(router4).size == 6 + } + currentRoutees(router4).map(fullAddress).toSet must be(Set( + address(first), address(second), address(third))) + } + enterBarrier("after-5") + } + } +} diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/routing/ClusterConsistentHashingRouterSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/routing/ClusterConsistentHashingRouterSpec.scala index c39edd8a13..daf4e81038 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/routing/ClusterConsistentHashingRouterSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/routing/ClusterConsistentHashingRouterSpec.scala @@ -4,7 +4,7 @@ package akka.cluster.routing import scala.concurrent.Await -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import com.typesafe.config.ConfigFactory diff --git a/akka-cluster/src/multi-jvm/scala/akka/cluster/routing/ClusterRoundRobinRoutedActorSpec.scala b/akka-cluster/src/multi-jvm/scala/akka/cluster/routing/ClusterRoundRobinRoutedActorSpec.scala index a78b179652..0098da695b 100644 --- a/akka-cluster/src/multi-jvm/scala/akka/cluster/routing/ClusterRoundRobinRoutedActorSpec.scala +++ b/akka-cluster/src/multi-jvm/scala/akka/cluster/routing/ClusterRoundRobinRoutedActorSpec.scala @@ -5,7 +5,7 @@ package akka.cluster.routing import language.postfixOps import scala.concurrent.Await -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import com.typesafe.config.ConfigFactory import akka.actor.Actor diff --git a/akka-cluster/src/test/scala/akka/cluster/AccrualFailureDetectorSpec.scala b/akka-cluster/src/test/scala/akka/cluster/AccrualFailureDetectorSpec.scala index 1cb0a9c164..8a9d6eb6fc 100644 --- a/akka-cluster/src/test/scala/akka/cluster/AccrualFailureDetectorSpec.scala +++ 
b/akka-cluster/src/test/scala/akka/cluster/AccrualFailureDetectorSpec.scala @@ -7,9 +7,8 @@ package akka.cluster import akka.actor.Address import akka.testkit._ import akka.testkit.TestEvent._ -import scala.collection.immutable.TreeMap -import scala.concurrent.util.duration._ -import scala.concurrent.util.Duration +import scala.collection.immutable +import scala.concurrent.duration._ @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) class AccrualFailureDetectorSpec extends AkkaSpec(""" @@ -28,7 +27,7 @@ class AccrualFailureDetectorSpec extends AkkaSpec(""" val conn = Address("akka", "", "localhost", 2552) val conn2 = Address("akka", "", "localhost", 2553) - def fakeTimeGenerator(timeIntervals: Seq[Long]): () ⇒ Long = { + def fakeTimeGenerator(timeIntervals: immutable.Seq[Long]): () ⇒ Long = { var times = timeIntervals.tail.foldLeft(List[Long](timeIntervals.head))((acc, c) ⇒ acc ::: List[Long](acc.last + c)) def timeGenerator(): Long = { val currentTime = times.head @@ -74,7 +73,7 @@ class AccrualFailureDetectorSpec extends AkkaSpec(""" "return realistic phi values" in { val fd = createFailureDetector() - val test = TreeMap(0 -> 0.0, 500 -> 0.1, 1000 -> 0.3, 1200 -> 1.6, 1400 -> 4.7, 1600 -> 10.8, 1700 -> 15.3) + val test = immutable.TreeMap(0 -> 0.0, 500 -> 0.1, 1000 -> 0.3, 1200 -> 1.6, 1400 -> 4.7, 1600 -> 10.8, 1700 -> 15.3) for ((timeDiff, expectedPhi) ← test) { fd.phi(timeDiff = timeDiff, mean = 1000.0, stdDeviation = 100.0) must be(expectedPhi plusOrMinus (0.1)) } diff --git a/akka-cluster/src/test/scala/akka/cluster/ClusterConfigSpec.scala b/akka-cluster/src/test/scala/akka/cluster/ClusterConfigSpec.scala index be5ae74e4d..a857a3363c 100644 --- a/akka-cluster/src/test/scala/akka/cluster/ClusterConfigSpec.scala +++ b/akka-cluster/src/test/scala/akka/cluster/ClusterConfigSpec.scala @@ -8,8 +8,7 @@ import language.postfixOps import akka.testkit.AkkaSpec import akka.dispatch.Dispatchers -import scala.concurrent.util.duration._ -import scala.concurrent.util.Duration +import scala.concurrent.duration._ @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) class ClusterConfigSpec extends AkkaSpec { @@ -29,6 +28,8 @@ class ClusterConfigSpec extends AkkaSpec { PeriodicTasksInitialDelay must be(1 seconds) GossipInterval must be(1 second) HeartbeatInterval must be(1 second) + NumberOfEndHeartbeats must be(4) + MonitoredByNrOfMembers must be(5) LeaderActionsInterval must be(1 second) UnreachableNodesReaperInterval must be(1 second) PublishStatsInterval must be(10 second) @@ -46,9 +47,10 @@ class ClusterConfigSpec extends AkkaSpec { callTimeout = 2 seconds, resetTimeout = 30 seconds)) MetricsEnabled must be(true) + MetricsCollectorClass must be(classOf[SigarMetricsCollector].getName) MetricsInterval must be(3 seconds) MetricsGossipInterval must be(3 seconds) - MetricsRateOfDecay must be(10) + MetricsMovingAverageHalfLife must be(12 seconds) } } } diff --git a/akka-cluster/src/test/scala/akka/cluster/ClusterDomainEventPublisherSpec.scala b/akka-cluster/src/test/scala/akka/cluster/ClusterDomainEventPublisherSpec.scala index 5b615a61af..59252ba599 100644 --- a/akka-cluster/src/test/scala/akka/cluster/ClusterDomainEventPublisherSpec.scala +++ b/akka-cluster/src/test/scala/akka/cluster/ClusterDomainEventPublisherSpec.scala @@ -6,7 +6,7 @@ package akka.cluster import language.postfixOps import scala.collection.immutable.SortedSet -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import org.scalatest.BeforeAndAfterEach import akka.actor.Address 
import akka.actor.Props @@ -16,22 +16,11 @@ import akka.cluster.ClusterEvent._ import akka.testkit.AkkaSpec import akka.testkit.ImplicitSender import akka.actor.ActorRef - -object ClusterDomainEventPublisherSpec { - val config = """ - akka.cluster.auto-join = off - akka.actor.provider = "akka.cluster.ClusterActorRefProvider" - akka.remote.log-remote-lifecycle-events = off - akka.remote.netty.port = 0 - """ - - case class GossipTo(address: Address) -} +import akka.testkit.TestProbe @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) -class ClusterDomainEventPublisherSpec extends AkkaSpec(ClusterDomainEventPublisherSpec.config) +class ClusterDomainEventPublisherSpec extends AkkaSpec with BeforeAndAfterEach with ImplicitSender { - import ClusterDomainEventPublisherSpec._ var publisher: ActorRef = _ val a1 = Member(Address("akka", "sys", "a", 2552), Up) @@ -47,14 +36,15 @@ class ClusterDomainEventPublisherSpec extends AkkaSpec(ClusterDomainEventPublish val g4 = Gossip(members = SortedSet(d1, a1, b1, c2)).seen(a1.address) val g5 = Gossip(members = SortedSet(d1, a1, b1, c2)).seen(a1.address).seen(b1.address).seen(c2.address).seen(d1.address) + override def atStartup(): Unit = { + system.eventStream.subscribe(testActor, classOf[ClusterDomainEvent]) + } + override def beforeEach(): Unit = { publisher = system.actorOf(Props[ClusterDomainEventPublisher]) - publisher ! Subscribe(testActor, classOf[ClusterDomainEvent]) - expectMsgType[CurrentClusterState] } override def afterEach(): Unit = { - publisher ! Unsubscribe(testActor, None) system.stop(publisher) } @@ -116,10 +106,23 @@ class ClusterDomainEventPublisherSpec extends AkkaSpec(ClusterDomainEventPublish expectMsgType[SeenChanged] } + "send CurrentClusterState when subscribe" in { + val subscriber = TestProbe() + publisher ! Subscribe(subscriber.ref, classOf[ClusterDomainEvent]) + subscriber.expectMsgType[CurrentClusterState] + // but only to the new subscriber + expectNoMsg(1 second) + } + "support unsubscribe" in { - publisher ! Unsubscribe(testActor, Some(classOf[ClusterDomainEvent])) - publisher ! PublishChanges(g1, g2) - expectNoMsg + val subscriber = TestProbe() + publisher ! Subscribe(subscriber.ref, classOf[ClusterDomainEvent]) + subscriber.expectMsgType[CurrentClusterState] + publisher ! Unsubscribe(subscriber.ref, Some(classOf[ClusterDomainEvent])) + publisher ! PublishChanges(Gossip(members = SortedSet(a1)), Gossip(members = SortedSet(a1, b1))) + subscriber.expectNoMsg(1 second) + // but testActor is still subscriber + expectMsg(MemberUp(b1)) } } diff --git a/akka-cluster/src/test/scala/akka/cluster/ClusterHeartbeatSenderStateSpec.scala b/akka-cluster/src/test/scala/akka/cluster/ClusterHeartbeatSenderStateSpec.scala new file mode 100644 index 0000000000..4eedee1df4 --- /dev/null +++ b/akka-cluster/src/test/scala/akka/cluster/ClusterHeartbeatSenderStateSpec.scala @@ -0,0 +1,107 @@ +/** + * Copyright (C) 2009-2012 Typesafe Inc. 
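+ * + * Note on the expected state transitions exercised below (derived from the assertions in this spec, not from any external description): active covers both the currently selected nodes and any joinInProgress nodes, removed members move to the ending map with a resend counter, and current is capped at monitoredByNrOfMembers (3 in this spec).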
+ */ + +package akka.cluster + +import org.scalatest.WordSpec +import org.scalatest.matchers.MustMatchers +import akka.actor.Address +import akka.routing.ConsistentHash +import scala.concurrent.duration._ + +@org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) +class ClusterHeartbeatSenderStateSpec extends WordSpec with MustMatchers { + + val selfAddress = Address("akka", "sys", "myself", 2552) + val aa = Address("akka", "sys", "aa", 2552) + val bb = Address("akka", "sys", "bb", 2552) + val cc = Address("akka", "sys", "cc", 2552) + val dd = Address("akka", "sys", "dd", 2552) + val ee = Address("akka", "sys", "ee", 2552) + + val emptyState = ClusterHeartbeatSenderState.empty(ConsistentHash(Seq.empty[Address], 10), + selfAddress.toString, 3) + + "A ClusterHeartbeatSenderState" must { + + "return empty active set when no nodes" in { + emptyState.active.isEmpty must be(true) + } + + "include joinInProgress in active set" in { + val s = emptyState.addJoinInProgress(aa, Deadline.now + 30.seconds) + s.joinInProgress.keySet must be(Set(aa)) + s.active must be(Set(aa)) + } + + "remove joinInProgress from active set after removeOverdueJoinInProgress" in { + val s = emptyState.addJoinInProgress(aa, Deadline.now - 30.seconds).removeOverdueJoinInProgress() + s.joinInProgress must be(Map.empty) + s.active must be(Set.empty) + s.ending must be(Map(aa -> 0)) + } + + "remove joinInProgress after reset" in { + val s = emptyState.addJoinInProgress(aa, Deadline.now + 30.seconds).reset(Set(aa, bb)) + s.joinInProgress must be(Map.empty) + } + + "remove joinInProgress after addMember" in { + val s = emptyState.addJoinInProgress(aa, Deadline.now + 30.seconds).addMember(aa) + s.joinInProgress must be(Map.empty) + } + + "remove joinInProgress after removeMember" in { + val s = emptyState.addJoinInProgress(aa, Deadline.now + 30.seconds).reset(Set(aa, bb)).removeMember(aa) + s.joinInProgress must be(Map.empty) + s.ending must be(Map(aa -> 0)) + } + + "remove from ending after addJoinInProgress" in { + val s = emptyState.reset(Set(aa, bb)).removeMember(aa) + s.ending must be(Map(aa -> 0)) + val s2 = s.addJoinInProgress(aa, Deadline.now + 30.seconds) + s2.joinInProgress.keySet must be(Set(aa)) + s2.ending must be(Map.empty) + } + + "include nodes from reset in active set" in { + val nodes = Set(aa, bb, cc) + val s = emptyState.reset(nodes) + s.all must be(nodes) + s.current must be(nodes) + s.ending must be(Map.empty) + s.active must be(nodes) + } + + "limit current nodes to monitoredByNrOfMembers when adding members" in { + val nodes = Set(aa, bb, cc, dd) + val s = nodes.foldLeft(emptyState) { _ addMember _ } + s.all must be(nodes) + s.current.size must be(3) + s.addMember(ee).current.size must be(3) + } + + "move member to ending set when removing member" in { + val nodes = Set(aa, bb, cc, dd, ee) + val s = emptyState.reset(nodes) + s.ending must be(Map.empty) + val included = s.current.head + val s2 = s.removeMember(included) + s2.ending must be(Map(included -> 0)) + s2.current must not contain (included) + val s3 = s2.addMember(included) + s3.current must contain(included) + s3.ending.keySet must not contain (included) + } + + "increase ending count correctly" in { + val s = emptyState.reset(Set(aa)).removeMember(aa) + s.ending must be(Map(aa -> 0)) + val s2 = s.increaseEndingCount(aa).increaseEndingCount(aa) + s2.ending must be(Map(aa -> 2)) + } + + } +} diff --git a/akka-cluster/src/test/scala/akka/cluster/ClusterSpec.scala b/akka-cluster/src/test/scala/akka/cluster/ClusterSpec.scala index 
8edbdd1669..a659abf313 100644 --- a/akka-cluster/src/test/scala/akka/cluster/ClusterSpec.scala +++ b/akka-cluster/src/test/scala/akka/cluster/ClusterSpec.scala @@ -6,8 +6,7 @@ package akka.cluster import language.postfixOps import language.reflectiveCalls -import scala.concurrent.util.duration._ -import scala.concurrent.util.Duration +import scala.concurrent.duration._ import akka.testkit.AkkaSpec import akka.testkit.ImplicitSender import akka.actor.ExtendedActorSystem diff --git a/akka-cluster/src/test/scala/akka/cluster/DataStreamSpec.scala b/akka-cluster/src/test/scala/akka/cluster/DataStreamSpec.scala deleted file mode 100644 index 2f2ccaa2ae..0000000000 --- a/akka-cluster/src/test/scala/akka/cluster/DataStreamSpec.scala +++ /dev/null @@ -1,62 +0,0 @@ -/* - * Copyright (C) 2009-2012 Typesafe Inc. - */ - -package akka.cluster - -import language.postfixOps -import scala.concurrent.util.duration._ - -import akka.testkit.{ LongRunningTest, AkkaSpec } - -@org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) -class DataStreamSpec extends AkkaSpec(MetricsEnabledSpec.config) with AbstractClusterMetricsSpec with MetricNumericConverter { - import system.dispatcher - - val collector = createMetricsCollector - val DefaultRateOfDecay = 10 - - "DataStream" must { - - "calculate the ewma for multiple, variable, data streams" taggedAs LongRunningTest in { - val firstDataSet = collector.sample.metrics.collect { case m if m.trendable && m.isDefined ⇒ m.initialize(DefaultRateOfDecay) } - var streamingDataSet = firstDataSet - - val cancellable = system.scheduler.schedule(0 seconds, 100 millis) { - streamingDataSet = collector.sample.metrics.flatMap(latest ⇒ streamingDataSet.collect { - case streaming if (latest.trendable && latest.isDefined) && (latest same streaming) - && (latest.value.get != streaming.value.get) ⇒ { - val updatedDataStream = streaming.average.get :+ latest.value.get - updatedDataStream.timestamp must be > (streaming.average.get.timestamp) - updatedDataStream.duration.length must be > (streaming.average.get.duration.length) - updatedDataStream.ewma must not be (streaming.average.get.ewma) - updatedDataStream.ewma must not be (latest.value.get) - streaming.copy(value = latest.value, average = Some(updatedDataStream)) - } - }) - } - awaitCond(firstDataSet.size == streamingDataSet.size, longDuration) - cancellable.cancel() - - val finalDataSet = streamingDataSet.map(m ⇒ m.name -> m).toMap - firstDataSet map { - first ⇒ - val newMetric = finalDataSet(first.name) - val e1 = first.average.get - val e2 = newMetric.average.get - - if (first.value.get != newMetric.value.get) { - e2.ewma must not be (first.value.get) - e2.ewma must not be (newMetric.value.get) - } - if (first.value.get.longValue > newMetric.value.get.longValue) e1.ewma.longValue must be > e2.ewma.longValue - else if (first.value.get.longValue < newMetric.value.get.longValue) e1.ewma.longValue must be < e2.ewma.longValue - } - } - - "data streaming is disabled if the decay is set to 0" in { - val data = collector.sample.metrics map (_.initialize(0)) - data foreach (_.average.isEmpty must be(true)) - } - } -} diff --git a/akka-cluster/src/test/scala/akka/cluster/EWMASpec.scala b/akka-cluster/src/test/scala/akka/cluster/EWMASpec.scala new file mode 100644 index 0000000000..ed954b7bb6 --- /dev/null +++ b/akka-cluster/src/test/scala/akka/cluster/EWMASpec.scala @@ -0,0 +1,101 @@ +/* + * Copyright (C) 2009-2012 Typesafe Inc. 
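+ * + * The expected values in the tests below follow the standard exponential weighted moving average recurrence ewma(n) = alpha * x(n) + (1 - alpha) * ewma(n - 1), for example with alpha = 2/11: 0.1818 * 10.0 + 0.8182 * 1000.0 = 820.0, and alpha = 2 / (N + 1) for an N-sample window as in the half-life test.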
+ */ + +package akka.cluster + +import language.postfixOps +import scala.concurrent.duration._ +import akka.testkit.{ LongRunningTest, AkkaSpec } +import scala.concurrent.forkjoin.ThreadLocalRandom + +@org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) +class EWMASpec extends AkkaSpec(MetricsEnabledSpec.config) with MetricsCollectorFactory { + import system.dispatcher + + val collector = createMetricsCollector + + "DataStream" must { + + "calculate same ewma for constant values" in { + val ds = EWMA(value = 100.0, alpha = 0.18) :+ + 100.0 :+ 100.0 :+ 100.0 + ds.value must be(100.0 plusOrMinus 0.001) + } + + "calculate correct ewma for normal decay" in { + val d0 = EWMA(value = 1000.0, alpha = 2.0 / (1 + 10)) + d0.value must be(1000.0 plusOrMinus 0.01) + val d1 = d0 :+ 10.0 + d1.value must be(820.0 plusOrMinus 0.01) + val d2 = d1 :+ 10.0 + d2.value must be(672.73 plusOrMinus 0.01) + val d3 = d2 :+ 10.0 + d3.value must be(552.23 plusOrMinus 0.01) + val d4 = d3 :+ 10.0 + d4.value must be(453.64 plusOrMinus 0.01) + + val dn = (1 to 100).foldLeft(d0)((d, _) ⇒ d :+ 10.0) + dn.value must be(10.0 plusOrMinus 0.1) + } + + "calculate ewma for alpha 1.0, max bias towards latest value" in { + val d0 = EWMA(value = 100.0, alpha = 1.0) + d0.value must be(100.0 plusOrMinus 0.01) + val d1 = d0 :+ 1.0 + d1.value must be(1.0 plusOrMinus 0.01) + val d2 = d1 :+ 57.0 + d2.value must be(57.0 plusOrMinus 0.01) + val d3 = d2 :+ 10.0 + d3.value must be(10.0 plusOrMinus 0.01) + } + + "calculate alpha from half-life and collect interval" in { + // according to http://en.wikipedia.org/wiki/Moving_average#Exponential_moving_average + val expectedAlpha = 0.1 + // alpha = 2.0 / (1 + N) + val n = 19 + val halfLife = n.toDouble / 2.8854 + val collectInterval = 1.second + val halfLifeDuration = (halfLife * 1000).millis + EWMA.alpha(halfLifeDuration, collectInterval) must be(expectedAlpha plusOrMinus 0.001) + } + + "calculate sane alpha from short half-life" in { + val alpha = EWMA.alpha(1.millis, 3.seconds) + alpha must be <= (1.0) + alpha must be >= (0.0) + alpha must be(1.0 plusOrMinus 0.001) + } + + "calculate sane alpha from long half-life" in { + val alpha = EWMA.alpha(1.day, 3.seconds) + alpha must be <= (1.0) + alpha must be >= (0.0) + alpha must be(0.0 plusOrMinus 0.001) + } + + "calculate the ewma for multiple, variable, data streams" taggedAs LongRunningTest in { + var streamingDataSet = Map.empty[String, Metric] + var usedMemory = Array.empty[Byte] + (1 to 50) foreach { _ ⇒ + // wait a while between each message to give the metrics a chance to change + Thread.sleep(100) + usedMemory = usedMemory ++ Array.fill(1024)(ThreadLocalRandom.current.nextInt(127).toByte) + val changes = collector.sample.metrics.flatMap { latest ⇒ + streamingDataSet.get(latest.name) match { + case None ⇒ Some(latest) + case Some(previous) ⇒ + if (latest.isSmooth && latest.value != previous.value) { + val updated = previous :+ latest + updated.isSmooth must be(true) + updated.smoothValue must not be (previous.smoothValue) + Some(updated) + } else None + } + } + streamingDataSet ++= changes.map(m ⇒ m.name -> m) + } + } + } +} diff --git a/akka-cluster/src/test/scala/akka/cluster/FixedRateTaskSpec.scala b/akka-cluster/src/test/scala/akka/cluster/FixedRateTaskSpec.scala deleted file mode 100644 index e6590cf9c3..0000000000 --- a/akka-cluster/src/test/scala/akka/cluster/FixedRateTaskSpec.scala +++ /dev/null @@ -1,43 +0,0 @@ -/** - * Copyright (C) 2009-2012 Typesafe Inc. 
- */ - -package akka.cluster - -import akka.testkit.AkkaSpec -import scala.concurrent.util.duration._ -import akka.testkit.TimingTest -import akka.testkit.TestLatch -import scala.concurrent.Await - -@org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) -class FixedRateTaskSpec extends AkkaSpec { - import system.dispatcher - "Task scheduled at fixed rate" must { - "adjust for scheduler inaccuracy" taggedAs TimingTest in { - val startTime = System.nanoTime - val n = 33 - val latch = new TestLatch(n) - FixedRateTask(system.scheduler, 150.millis, 150.millis) { - latch.countDown() - } - Await.ready(latch, 6.seconds) - val rate = n * 1000.0 / (System.nanoTime - startTime).nanos.toMillis - rate must be(6.66 plusOrMinus (0.4)) - } - - "compensate for long running task" taggedAs TimingTest in { - val startTime = System.nanoTime - val n = 22 - val latch = new TestLatch(n) - FixedRateTask(system.scheduler, 225.millis, 225.millis) { - Thread.sleep(80) - latch.countDown() - } - Await.ready(latch, 6.seconds) - val rate = n * 1000.0 / (System.nanoTime - startTime).nanos.toMillis - rate must be(4.4 plusOrMinus (0.3)) - } - } -} - diff --git a/akka-cluster/src/test/scala/akka/cluster/MetricNumericConverterSpec.scala b/akka-cluster/src/test/scala/akka/cluster/MetricNumericConverterSpec.scala index 1f23da769c..f572b13233 100644 --- a/akka-cluster/src/test/scala/akka/cluster/MetricNumericConverterSpec.scala +++ b/akka-cluster/src/test/scala/akka/cluster/MetricNumericConverterSpec.scala @@ -4,40 +4,35 @@ package akka.cluster -import akka.testkit.{ ImplicitSender, AkkaSpec } +import org.scalatest.WordSpec +import org.scalatest.matchers.MustMatchers +import akka.cluster.StandardMetrics._ +import scala.util.Failure @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) -class MetricNumericConverterSpec extends AkkaSpec(MetricsEnabledSpec.config) with MetricNumericConverter with ImplicitSender with AbstractClusterMetricsSpec { +class MetricNumericConverterSpec extends WordSpec with MustMatchers with MetricNumericConverter { "MetricNumericConverter" must { - val collector = createMetricsCollector - "convert " in { - convert(0).isLeft must be(true) - convert(1).left.get must be(1) - convert(1L).isLeft must be(true) - convert(0.0).isRight must be(true) + "convert" in { + convertNumber(0).isLeft must be(true) + convertNumber(1).left.get must be(1) + convertNumber(1L).isLeft must be(true) + convertNumber(0.0).isRight must be(true) } "define a new metric" in { - val metric = Metric("heap-memory-used", Some(0L)) - metric.initializable must be(true) - metric.name must not be (null) - metric.average.isEmpty must be(true) - metric.trendable must be(true) - - if (collector.isSigar) { - val cores = collector.totalCores - cores.isDefined must be(true) - cores.value.get.intValue must be > (0) - cores.initializable must be(false) - } + val Some(metric) = Metric.create(HeapMemoryUsed, 256L, decayFactor = Some(0.18)) + metric.name must be(HeapMemoryUsed) + metric.value must be(256L) + metric.isSmooth must be(true) + metric.smoothValue must be(256.0 plusOrMinus 0.0001) } "define an undefined value with a None " in { - Metric("x", Some(-1)).value.isDefined must be(false) - Metric("x", Some(java.lang.Double.NaN)).value.isDefined must be(false) - Metric("x", None).isDefined must be(false) + Metric.create("x", -1, None).isDefined must be(false) + Metric.create("x", java.lang.Double.NaN, None).isDefined must be(false) + Metric.create("x", Failure(new RuntimeException), None).isDefined must be(false) } 
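+ // the defined() checks below use the same sentinel conventions as Metric.create above: negative values and NaN denote a missing value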
"recognize whether a metric value is defined" in { @@ -47,6 +42,7 @@ class MetricNumericConverterSpec extends AkkaSpec(MetricsEnabledSpec.config) wit "recognize whether a metric value is not defined" in { defined(-1) must be(false) + defined(-1.0) must be(false) defined(Double.NaN) must be(false) } } diff --git a/akka-cluster/src/test/scala/akka/cluster/MetricValuesSpec.scala b/akka-cluster/src/test/scala/akka/cluster/MetricValuesSpec.scala new file mode 100644 index 0000000000..8a38b59da6 --- /dev/null +++ b/akka-cluster/src/test/scala/akka/cluster/MetricValuesSpec.scala @@ -0,0 +1,69 @@ +/* + * Copyright (C) 2009-2012 Typesafe Inc. + */ + +package akka.cluster + +import scala.util.Try +import akka.actor.Address +import akka.testkit.AkkaSpec +import akka.cluster.StandardMetrics._ + +@org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) +class MetricValuesSpec extends AkkaSpec(MetricsEnabledSpec.config) with MetricsCollectorFactory { + + val collector = createMetricsCollector + + val node1 = NodeMetrics(Address("akka", "sys", "a", 2554), 1, collector.sample.metrics) + val node2 = NodeMetrics(Address("akka", "sys", "a", 2555), 1, collector.sample.metrics) + + val nodes: Seq[NodeMetrics] = { + (1 to 100).foldLeft(List(node1, node2)) { (nodes, _) ⇒ + nodes map { n ⇒ + n.copy(metrics = collector.sample.metrics.flatMap(latest ⇒ n.metrics.collect { + case streaming if latest sameAs streaming ⇒ streaming :+ latest + })) + } + } + } + + "NodeMetrics.MetricValues" must { + "extract expected metrics for load balancing" in { + val stream1 = node2.metric(HeapMemoryCommitted).get.value.longValue + val stream2 = node1.metric(HeapMemoryUsed).get.value.longValue + stream1 must be >= (stream2) + } + + "extract expected MetricValue types for load balancing" in { + nodes foreach { node ⇒ + node match { + case HeapMemory(address, _, used, committed, Some(max)) ⇒ + committed must be >= (used) + used must be <= (max) + committed must be <= (max) + // extract is the java api + StandardMetrics.extractHeapMemory(node) must not be (null) + case HeapMemory(address, _, used, committed, None) ⇒ + used must be > (0L) + committed must be > (0L) + // extract is the java api + StandardMetrics.extractCpu(node) must not be (null) + } + + node match { + case Cpu(address, _, systemLoadAverageOption, cpuCombinedOption, processors) ⇒ + processors must be > (0) + if (systemLoadAverageOption.isDefined) + systemLoadAverageOption.get must be >= (0.0) + if (cpuCombinedOption.isDefined) { + cpuCombinedOption.get must be <= (1.0) + cpuCombinedOption.get must be >= (0.0) + } + // extract is the java api + StandardMetrics.extractCpu(node) must not be (null) + } + } + } + } + +} \ No newline at end of file diff --git a/akka-cluster/src/test/scala/akka/cluster/MetricsCollectorSpec.scala b/akka-cluster/src/test/scala/akka/cluster/MetricsCollectorSpec.scala index 2288279a03..2ce3892645 100644 --- a/akka-cluster/src/test/scala/akka/cluster/MetricsCollectorSpec.scala +++ b/akka-cluster/src/test/scala/akka/cluster/MetricsCollectorSpec.scala @@ -1,68 +1,61 @@ /* + * Copyright (C) 2009-2012 Typesafe Inc. 
*/ package akka.cluster import scala.language.postfixOps -import scala.concurrent.util.duration._ -import scala.concurrent.util.FiniteDuration + +import scala.collection.immutable +import scala.concurrent.duration._ import scala.concurrent.Await +import scala.util.{ Success, Try, Failure } import akka.actor._ import akka.testkit._ +import akka.cluster.StandardMetrics._ import org.scalatest.WordSpec import org.scalatest.matchers.MustMatchers -import util.{ Success, Try, Failure } object MetricsEnabledSpec { val config = """ akka.cluster.metrics.enabled = on - akka.cluster.metrics.metrics-interval = 1 s + akka.cluster.metrics.collect-interval = 1 s akka.cluster.metrics.gossip-interval = 1 s - akka.cluster.metrics.rate-of-decay = 10 akka.actor.provider = "akka.remote.RemoteActorRefProvider" """ } @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) -class MetricsCollectorSpec extends AkkaSpec(MetricsEnabledSpec.config) with ImplicitSender with AbstractClusterMetricsSpec with MetricSpec { +class MetricsCollectorSpec extends AkkaSpec(MetricsEnabledSpec.config) with ImplicitSender with MetricsCollectorFactory { import system.dispatcher val collector = createMetricsCollector "Metric must" must { - "create and initialize a new metric or merge an existing one" in { - for (i ← 0 to samples) { - val metrics = collector.sample.metrics - assertCreatedUninitialized(metrics) - assertInitialized(window, metrics map (_.initialize(window))) - } - } "merge 2 metrics that are tracking the same metric" in { - for (i ← 0 to samples) { + for (i ← 1 to 20) { val sample1 = collector.sample.metrics val sample2 = collector.sample.metrics - var merged = sample2 flatMap (latest ⇒ sample1 collect { - case peer if latest same peer ⇒ { + val merged12 = sample2 flatMap (latest ⇒ sample1 collect { + case peer if latest sameAs peer ⇒ val m = peer :+ latest - assertMerged(latest, peer, m) + m.value must be(latest.value) + m.isSmooth must be(peer.isSmooth || latest.isSmooth) m - } }) - val sample3 = collector.sample.metrics map (_.initialize(window)) - val sample4 = collector.sample.metrics map (_.initialize(window)) - merged = sample4 flatMap (latest ⇒ sample3 collect { - case peer if latest same peer ⇒ { + val sample3 = collector.sample.metrics + val sample4 = collector.sample.metrics + val merged34 = sample4 flatMap (latest ⇒ sample3 collect { + case peer if latest sameAs peer ⇒ val m = peer :+ latest - assertMerged(latest, peer, m) + m.value must be(latest.value) + m.isSmooth must be(peer.isSmooth || latest.isSmooth) m - } }) - merged.size must be(sample3.size) - merged.size must be(sample4.size) } } } @@ -75,160 +68,65 @@ class MetricsCollectorSpec extends AkkaSpec(MetricsEnabledSpec.config) with Impl "collect accurate metrics for a node" in { val sample = collector.sample - assertExpectedSampleSize(collector.isSigar, window, sample) - val metrics = sample.metrics.collect { case m if m.isDefined ⇒ (m.name, m.value.get) } - val used = metrics collectFirst { case ("heap-memory-used", b) ⇒ b } - val committed = metrics collectFirst { case ("heap-memory-committed", b) ⇒ b } + val metrics = sample.metrics.collect { case m ⇒ (m.name, m.value) } + val used = metrics collectFirst { case (HeapMemoryUsed, b) ⇒ b } + val committed = metrics collectFirst { case (HeapMemoryCommitted, b) ⇒ b } metrics foreach { - case ("total-cores", b) ⇒ b.intValue must be > (0) - case ("network-max-rx", b) ⇒ b.longValue must be > (0L) - case ("network-max-tx", b) ⇒ b.longValue must be > (0L) - case ("system-load-average", b) ⇒ 
b.doubleValue must be >= (0.0) - case ("processors", b) ⇒ b.intValue must be >= (0) - case ("heap-memory-used", b) ⇒ b.longValue must be >= (0L) - case ("heap-memory-committed", b) ⇒ b.longValue must be > (0L) - case ("cpu-combined", b) ⇒ - b.doubleValue must be <= (1.0) - b.doubleValue must be >= (0.0) - case ("heap-memory-max", b) ⇒ + case (SystemLoadAverage, b) ⇒ b.doubleValue must be >= (0.0) + case (Processors, b) ⇒ b.intValue must be >= (0) + case (HeapMemoryUsed, b) ⇒ b.longValue must be >= (0L) + case (HeapMemoryCommitted, b) ⇒ b.longValue must be > (0L) + case (HeapMemoryMax, b) ⇒ + b.longValue must be > (0L) used.get.longValue must be <= (b.longValue) committed.get.longValue must be <= (b.longValue) - } - } + case (CpuCombined, b) ⇒ + b.doubleValue must be <= (1.0) + b.doubleValue must be >= (0.0) - "collect SIGAR metrics if it is on the classpath" in { - if (collector.isSigar) { - // combined cpu may or may not be defined on a given sampling - // systemLoadAverage is SIGAR present - collector.systemLoadAverage.isDefined must be(true) - collector.networkStats.nonEmpty must be(true) - collector.networkMaxRx.isDefined must be(true) - collector.networkMaxTx.isDefined must be(true) - collector.totalCores.isDefined must be(true) } } "collect JMX metrics" in { // heap max may be undefined depending on the OS - // systemLoadAverage is JMX is SIGAR not present - collector.systemLoadAverage.isDefined must be(true) - collector.used.isDefined must be(true) - collector.committed.isDefined must be(true) - collector.processors.isDefined must be(true) + // systemLoadAverage is JMX when SIGAR not present, but + // it's not present on all platforms + val c = collector.asInstanceOf[JmxMetricsCollector] + val heap = c.heapMemoryUsage + c.heapUsed(heap).isDefined must be(true) + c.heapCommitted(heap).isDefined must be(true) + c.processors.isDefined must be(true) } - "collect [" + samples + "] node metrics samples in an acceptable duration" taggedAs LongRunningTest in { - val latch = TestLatch(samples) - val task = FixedRateTask(system.scheduler, 0 seconds, interval) { + "collect 50 node metrics samples in an acceptable duration" taggedAs LongRunningTest in within(7 seconds) { + (1 to 50) foreach { _ ⇒ val sample = collector.sample - assertCreatedUninitialized(sample.metrics) - assertExpectedSampleSize(collector.isSigar, window, sample) - latch.countDown() + sample.metrics.size must be >= (3) + Thread.sleep(100) } - Await.ready(latch, longDuration) - task.cancel() } } } -trait MetricSpec extends WordSpec with MustMatchers { +/** + * Used when testing metrics without full cluster + */ +trait MetricsCollectorFactory { this: AkkaSpec ⇒ - def assertMasterMetricsAgainstGossipMetrics(master: Set[NodeMetrics], gossip: MetricsGossip): Unit = { - val masterMetrics = collectNodeMetrics(master) - val gossipMetrics = collectNodeMetrics(gossip.nodes) - gossipMetrics.size must be(masterMetrics.size plusOrMinus 1) // combined cpu - } + private def extendedActorSystem = system.asInstanceOf[ExtendedActorSystem] - def assertExpectedNodeAddresses(gossip: MetricsGossip, nodes: Set[NodeMetrics]): Unit = - gossip.nodes.map(_.address) must be(nodes.map(_.address)) + def selfAddress = extendedActorSystem.provider.rootPath.address - def assertExpectedSampleSize(isSigar: Boolean, gossip: MetricsGossip): Unit = - gossip.nodes.foreach(n ⇒ assertExpectedSampleSize(isSigar, gossip.rateOfDecay, n)) + val defaultDecayFactor = 2.0 / (1 + 10) - def assertCreatedUninitialized(gossip: MetricsGossip): Unit = - gossip.nodes.foreach(n ⇒ 
assertCreatedUninitialized(n.metrics.filterNot(_.trendable))) + def createMetricsCollector: MetricsCollector = + Try(new SigarMetricsCollector(selfAddress, defaultDecayFactor, + extendedActorSystem.dynamicAccess.createInstanceFor[AnyRef]("org.hyperic.sigar.Sigar", Nil))). + recover { + case e ⇒ + log.debug("Metrics will be retrieved from MBeans, Sigar failed to load. Reason: " + e) + new JmxMetricsCollector(selfAddress, defaultDecayFactor) + }.get - def assertInitialized(gossip: MetricsGossip): Unit = - gossip.nodes.foreach(n ⇒ assertInitialized(gossip.rateOfDecay, n.metrics)) - - def assertCreatedUninitialized(metrics: Set[Metric]): Unit = { - metrics.size must be > (0) - metrics foreach { m ⇒ - m.average.isEmpty must be(true) - if (m.value.isDefined) m.isDefined must be(true) - if (m.initializable) (m.trendable && m.isDefined && m.average.isEmpty) must be(true) - } - } - - def assertInitialized(decay: Int, metrics: Set[Metric]): Unit = if (decay > 0) metrics.filter(_.trendable) foreach { m ⇒ - m.initializable must be(false) - if (m.isDefined) m.average.isDefined must be(true) - } - - def assertMerged(latest: Metric, peer: Metric, merged: Metric): Unit = if (latest same peer) { - if (latest.isDefined) { - if (peer.isDefined) { - merged.isDefined must be(true) - merged.value.get must be(latest.value.get) - if (latest.trendable) { - if (latest.initializable) merged.average.isEmpty must be(true) - else merged.average.isDefined must be(true) - } - } else { - merged.isDefined must be(true) - merged.value.get must be(latest.value.get) - if (latest.average.isDefined) merged.average.get must be(latest.average.get) - else merged.average.isEmpty must be(true) - } - } else { - if (peer.isDefined) { - merged.isDefined must be(true) - merged.value.get must be(peer.value.get) - if (peer.trendable) { - if (peer.initializable) merged.average.isEmpty must be(true) - else merged.average.isDefined must be(true) - } - } else { - merged.isDefined must be(false) - merged.average.isEmpty must be(true) - } - } - } - - def assertExpectedSampleSize(isSigar: Boolean, decay: Int, node: NodeMetrics): Unit = { - node.metrics.size must be(9) - val metrics = node.metrics.filter(_.isDefined) - if (isSigar) { // combined cpu + jmx max heap - metrics.size must be >= (7) - metrics.size must be <= (9) - } else { // jmx max heap - metrics.size must be >= (4) - metrics.size must be <= (5) - } - - if (decay > 0) metrics.collect { case m if m.trendable && (!m.initializable) ⇒ m }.foreach(_.average.isDefined must be(true)) - } - - def collectNodeMetrics(nodes: Set[NodeMetrics]): Seq[Metric] = { - var r: Seq[Metric] = Seq.empty - nodes.foreach(n ⇒ r ++= n.metrics.filter(_.isDefined)) - r - } + def isSigar(collector: MetricsCollector): Boolean = collector.isInstanceOf[SigarMetricsCollector] } - -trait AbstractClusterMetricsSpec extends DefaultTimeout { - this: AkkaSpec ⇒ - - val selfAddress = new Address("akka", "localhost") - - val window = 49 - - val interval: FiniteDuration = 100 millis - - val longDuration = 120 seconds // for long running tests - - val samples = 100 - - def createMetricsCollector: MetricsCollector = MetricsCollector(selfAddress, log, system.asInstanceOf[ExtendedActorSystem].dynamicAccess) - -} \ No newline at end of file diff --git a/akka-cluster/src/test/scala/akka/cluster/MetricsGossipSpec.scala b/akka-cluster/src/test/scala/akka/cluster/MetricsGossipSpec.scala index 3ff6db6de2..6d54a69bc2 100644 --- a/akka-cluster/src/test/scala/akka/cluster/MetricsGossipSpec.scala +++
b/akka-cluster/src/test/scala/akka/cluster/MetricsGossipSpec.scala @@ -4,7 +4,7 @@ package akka.cluster -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.testkit.{ ImplicitSender, AkkaSpec } import akka.actor.Address @@ -12,95 +12,95 @@ import akka.actor.Address import java.lang.System.{ currentTimeMillis ⇒ newTimestamp } @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) -class MetricsGossipSpec extends AkkaSpec(MetricsEnabledSpec.config) with ImplicitSender with AbstractClusterMetricsSpec with MetricSpec { +class MetricsGossipSpec extends AkkaSpec(MetricsEnabledSpec.config) with ImplicitSender with MetricsCollectorFactory { val collector = createMetricsCollector "A MetricsGossip" must { - "add and initialize new NodeMetrics" in { + "add new NodeMetrics" in { val m1 = NodeMetrics(Address("akka", "sys", "a", 2554), newTimestamp, collector.sample.metrics) val m2 = NodeMetrics(Address("akka", "sys", "a", 2555), newTimestamp, collector.sample.metrics) - var localGossip = MetricsGossip(window) - localGossip :+= m1 - localGossip.nodes.size must be(1) - localGossip.nodeKeys.size must be(localGossip.nodes.size) - assertMasterMetricsAgainstGossipMetrics(Set(m1), localGossip) - assertExpectedSampleSize(collector.isSigar, localGossip) - assertInitialized(localGossip.rateOfDecay, collectNodeMetrics(localGossip.nodes).toSet) + m1.metrics.size must be > (3) + m2.metrics.size must be > (3) - localGossip :+= m2 - localGossip.nodes.size must be(2) - localGossip.nodeKeys.size must be(localGossip.nodes.size) - assertMasterMetricsAgainstGossipMetrics(Set(m1, m2), localGossip) - assertExpectedSampleSize(collector.isSigar, localGossip) - assertInitialized(localGossip.rateOfDecay, collectNodeMetrics(localGossip.nodes).toSet) + val g1 = MetricsGossip.empty :+ m1 + g1.nodes.size must be(1) + g1.nodeMetricsFor(m1.address).map(_.metrics) must be(Some(m1.metrics)) + + val g2 = g1 :+ m2 + g2.nodes.size must be(2) + g2.nodeMetricsFor(m1.address).map(_.metrics) must be(Some(m1.metrics)) + g2.nodeMetricsFor(m2.address).map(_.metrics) must be(Some(m2.metrics)) } "merge peer metrics" in { val m1 = NodeMetrics(Address("akka", "sys", "a", 2554), newTimestamp, collector.sample.metrics) val m2 = NodeMetrics(Address("akka", "sys", "a", 2555), newTimestamp, collector.sample.metrics) - var remoteGossip = MetricsGossip(window) - remoteGossip :+= m1 - remoteGossip :+= m2 - remoteGossip.nodes.size must be(2) - val beforeMergeNodes = remoteGossip.nodes + val g1 = MetricsGossip.empty :+ m1 :+ m2 + g1.nodes.size must be(2) + val beforeMergeNodes = g1.nodes - val m2Updated = m2 copy (metrics = collector.sample.metrics, timestamp = newTimestamp) - remoteGossip :+= m2Updated // merge peers - remoteGossip.nodes.size must be(2) - assertMasterMetricsAgainstGossipMetrics(beforeMergeNodes, remoteGossip) - assertExpectedSampleSize(collector.isSigar, remoteGossip) - remoteGossip.nodes collect { case peer if peer.address == m2.address ⇒ peer.timestamp must be(m2Updated.timestamp) } + val m2Updated = m2 copy (metrics = collector.sample.metrics, timestamp = m2.timestamp + 1000) + val g2 = g1 :+ m2Updated // merge peers + g2.nodes.size must be(2) + g2.nodeMetricsFor(m1.address).map(_.metrics) must be(Some(m1.metrics)) + g2.nodeMetricsFor(m2.address).map(_.metrics) must be(Some(m2Updated.metrics)) + g2.nodes collect { case peer if peer.address == m2.address ⇒ peer.timestamp must be(m2Updated.timestamp) } } "merge an existing metric set for a node and update node ring" in { val m1 = 
NodeMetrics(Address("akka", "sys", "a", 2554), newTimestamp, collector.sample.metrics) val m2 = NodeMetrics(Address("akka", "sys", "a", 2555), newTimestamp, collector.sample.metrics) val m3 = NodeMetrics(Address("akka", "sys", "a", 2556), newTimestamp, collector.sample.metrics) - val m2Updated = m2 copy (metrics = collector.sample.metrics, timestamp = newTimestamp) + val m2Updated = m2 copy (metrics = collector.sample.metrics, timestamp = m2.timestamp + 1000) - var localGossip = MetricsGossip(window) - localGossip :+= m1 - localGossip :+= m2 + val g1 = MetricsGossip.empty :+ m1 :+ m2 + val g2 = MetricsGossip.empty :+ m3 :+ m2Updated - var remoteGossip = MetricsGossip(window) - remoteGossip :+= m3 - remoteGossip :+= m2Updated - - localGossip.nodeKeys.contains(m1.address) must be(true) - remoteGossip.nodeKeys.contains(m3.address) must be(true) + g1.nodes.map(_.address) must be(Set(m1.address, m2.address)) // must contain nodes 1,3, and the most recent version of 2 - val mergedGossip = localGossip merge remoteGossip - mergedGossip.nodes.size must be(3) - assertExpectedNodeAddresses(mergedGossip, Set(m1, m2, m3)) - assertExpectedSampleSize(collector.isSigar, mergedGossip) - assertCreatedUninitialized(mergedGossip) - assertInitialized(mergedGossip) - mergedGossip.nodes.find(_.address == m2.address).get.timestamp must be(m2Updated.timestamp) + val mergedGossip = g1 merge g2 + mergedGossip.nodes.map(_.address) must be(Set(m1.address, m2.address, m3.address)) + mergedGossip.nodeMetricsFor(m1.address).map(_.metrics) must be(Some(m1.metrics)) + mergedGossip.nodeMetricsFor(m2.address).map(_.metrics) must be(Some(m2Updated.metrics)) + mergedGossip.nodeMetricsFor(m3.address).map(_.metrics) must be(Some(m3.metrics)) + mergedGossip.nodes.foreach(_.metrics.size must be > (3)) + mergedGossip.nodeMetricsFor(m2.address).map(_.timestamp) must be(Some(m2Updated.timestamp)) } "get the current NodeMetrics if it exists in the local nodes" in { val m1 = NodeMetrics(Address("akka", "sys", "a", 2554), newTimestamp, collector.sample.metrics) - var localGossip = MetricsGossip(window) - localGossip :+= m1 - localGossip.metricsFor(m1).nonEmpty must be(true) + val g1 = MetricsGossip.empty :+ m1 + g1.nodeMetricsFor(m1.address).map(_.metrics) must be(Some(m1.metrics)) } "remove a node if it is no longer Up" in { val m1 = NodeMetrics(Address("akka", "sys", "a", 2554), newTimestamp, collector.sample.metrics) val m2 = NodeMetrics(Address("akka", "sys", "a", 2555), newTimestamp, collector.sample.metrics) - var localGossip = MetricsGossip(window) - localGossip :+= m1 - localGossip :+= m2 + val g1 = MetricsGossip.empty :+ m1 :+ m2 + g1.nodes.size must be(2) + val g2 = g1 remove m1.address + g2.nodes.size must be(1) + g2.nodes.exists(_.address == m1.address) must be(false) + g2.nodeMetricsFor(m1.address) must be(None) + g2.nodeMetricsFor(m2.address).map(_.metrics) must be(Some(m2.metrics)) + } - localGossip.nodes.size must be(2) - localGossip = localGossip remove m1.address - localGossip.nodes.size must be(1) - localGossip.nodes.exists(_.address == m1.address) must be(false) + "filter nodes" in { + val m1 = NodeMetrics(Address("akka", "sys", "a", 2554), newTimestamp, collector.sample.metrics) + val m2 = NodeMetrics(Address("akka", "sys", "a", 2555), newTimestamp, collector.sample.metrics) + + val g1 = MetricsGossip.empty :+ m1 :+ m2 + g1.nodes.size must be(2) + val g2 = g1 filter Set(m2.address) + g2.nodes.size must be(1) + g2.nodes.exists(_.address == m1.address) must be(false) + g2.nodeMetricsFor(m1.address) must be(None) + 
g2.nodeMetricsFor(m2.address).map(_.metrics) must be(Some(m2.metrics)) } } } diff --git a/akka-cluster/src/test/scala/akka/cluster/NodeMetricsSpec.scala b/akka-cluster/src/test/scala/akka/cluster/NodeMetricsSpec.scala index 5d58bc84e5..7e80a04d64 100644 --- a/akka-cluster/src/test/scala/akka/cluster/NodeMetricsSpec.scala +++ b/akka-cluster/src/test/scala/akka/cluster/NodeMetricsSpec.scala @@ -4,51 +4,44 @@ package akka.cluster -import akka.testkit.AkkaSpec +import org.scalatest.WordSpec +import org.scalatest.matchers.MustMatchers import akka.actor.Address @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) -class NodeMetricsSpec extends AkkaSpec with AbstractClusterMetricsSpec with MetricSpec { - - val collector = createMetricsCollector +class NodeMetricsSpec extends WordSpec with MustMatchers { val node1 = Address("akka", "sys", "a", 2554) - val node2 = Address("akka", "sys", "a", 2555) "NodeMetrics must" must { - "recognize updatable nodes" in { - (NodeMetrics(node1, 0) updatable NodeMetrics(node1, 1)) must be(true) - } - - "recognize non-updatable nodes" in { - (NodeMetrics(node1, 1) updatable NodeMetrics(node2, 0)) must be(false) - } "return correct result for 2 'same' nodes" in { - (NodeMetrics(node1, 0) same NodeMetrics(node1, 0)) must be(true) + (NodeMetrics(node1, 0) sameAs NodeMetrics(node1, 0)) must be(true) } "return correct result for 2 not 'same' nodes" in { - (NodeMetrics(node1, 0) same NodeMetrics(node2, 0)) must be(false) + (NodeMetrics(node1, 0) sameAs NodeMetrics(node2, 0)) must be(false) } "merge 2 NodeMetrics by most recent" in { - val sample1 = NodeMetrics(node1, 1, collector.sample.metrics) - val sample2 = NodeMetrics(node1, 2, collector.sample.metrics) + val sample1 = NodeMetrics(node1, 1, Set(Metric.create("a", 10, None), Metric.create("b", 20, None)).flatten) + val sample2 = NodeMetrics(node1, 2, Set(Metric.create("a", 11, None), Metric.create("c", 30, None)).flatten) val merged = sample1 merge sample2 merged.timestamp must be(sample2.timestamp) - merged.metrics must be(sample2.metrics) + merged.metric("a").map(_.value) must be(Some(11)) + merged.metric("b").map(_.value) must be(Some(20)) + merged.metric("c").map(_.value) must be(Some(30)) } "not merge 2 NodeMetrics if master is more recent" in { - val sample1 = NodeMetrics(node1, 1, collector.sample.metrics) - val sample2 = NodeMetrics(node2, 0, sample1.metrics) + val sample1 = NodeMetrics(node1, 1, Set(Metric.create("a", 10, None), Metric.create("b", 20, None)).flatten) + val sample2 = NodeMetrics(node1, 0, Set(Metric.create("a", 11, None), Metric.create("c", 30, None)).flatten) - val merged = sample2 merge sample2 // older and not same - merged.timestamp must be(sample2.timestamp) - merged.metrics must be(sample2.metrics) + val merged = sample1 merge sample2 // older and not same + merged.timestamp must be(sample1.timestamp) + merged.metrics must be(sample1.metrics) } } } diff --git a/akka-cluster/src/test/scala/akka/cluster/routing/MetricsSelectorSpec.scala b/akka-cluster/src/test/scala/akka/cluster/routing/MetricsSelectorSpec.scala new file mode 100644 index 0000000000..5b5b92d950 --- /dev/null +++ b/akka-cluster/src/test/scala/akka/cluster/routing/MetricsSelectorSpec.scala @@ -0,0 +1,118 @@ +/* + * Copyright (C) 2009-2012 Typesafe Inc. 
+ */ + +package akka.cluster.routing + +import org.scalatest.WordSpec +import org.scalatest.matchers.MustMatchers + +import akka.actor.Address +import akka.cluster.Metric +import akka.cluster.NodeMetrics +import akka.cluster.StandardMetrics._ + +@org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) +class MetricsSelectorSpec extends WordSpec with MustMatchers { + + val abstractSelector = new CapacityMetricsSelector { + override def capacity(nodeMetrics: Set[NodeMetrics]): Map[Address, Double] = Map.empty + } + + val a1 = Address("akka", "sys", "a1", 2551) + val b1 = Address("akka", "sys", "b1", 2551) + val c1 = Address("akka", "sys", "c1", 2551) + val d1 = Address("akka", "sys", "d1", 2551) + + val decayFactor = Some(0.18) + + val nodeMetricsA = NodeMetrics(a1, System.currentTimeMillis, Set( + Metric.create(HeapMemoryUsed, 128, decayFactor), + Metric.create(HeapMemoryCommitted, 256, decayFactor), + Metric.create(HeapMemoryMax, 512, None), + Metric.create(CpuCombined, 0.1, decayFactor), + Metric.create(SystemLoadAverage, 0.5, None), + Metric.create(Processors, 8, None)).flatten) + + val nodeMetricsB = NodeMetrics(b1, System.currentTimeMillis, Set( + Metric.create(HeapMemoryUsed, 256, decayFactor), + Metric.create(HeapMemoryCommitted, 512, decayFactor), + Metric.create(HeapMemoryMax, 1024, None), + Metric.create(CpuCombined, 0.5, decayFactor), + Metric.create(SystemLoadAverage, 1.0, None), + Metric.create(Processors, 16, None)).flatten) + + val nodeMetricsC = NodeMetrics(c1, System.currentTimeMillis, Set( + Metric.create(HeapMemoryUsed, 1024, decayFactor), + Metric.create(HeapMemoryCommitted, 1024, decayFactor), + Metric.create(HeapMemoryMax, 1024, None), + Metric.create(CpuCombined, 1.0, decayFactor), + Metric.create(SystemLoadAverage, 16.0, None), + Metric.create(Processors, 16, None)).flatten) + + val nodeMetricsD = NodeMetrics(d1, System.currentTimeMillis, Set( + Metric.create(HeapMemoryUsed, 511, decayFactor), + Metric.create(HeapMemoryCommitted, 512, decayFactor), + Metric.create(HeapMemoryMax, 512, None), + Metric.create(Processors, 2, decayFactor)).flatten) + + val nodeMetrics = Set(nodeMetricsA, nodeMetricsB, nodeMetricsC, nodeMetricsD) + + "CapacityMetricsSelector" must { + + "calculate weights from capacity" in { + val capacity = Map(a1 -> 0.6, b1 -> 0.3, c1 -> 0.1) + val weights = abstractSelector.weights(capacity) + weights must be(Map(c1 -> 1, b1 -> 3, a1 -> 6)) + } + + "handle low and zero capacity" in { + val capacity = Map(a1 -> 0.0, b1 -> 1.0, c1 -> 0.005, d1 -> 0.004) + val weights = abstractSelector.weights(capacity) + weights must be(Map(a1 -> 0, b1 -> 100, c1 -> 1, d1 -> 0)) + } + + } + + "HeapMetricsSelector" must { + "calculate capacity of heap metrics" in { + val capacity = HeapMetricsSelector.capacity(nodeMetrics) + capacity(a1) must be(0.75 plusOrMinus 0.0001) + capacity(b1) must be(0.75 plusOrMinus 0.0001) + capacity(c1) must be(0.0 plusOrMinus 0.0001) + capacity(d1) must be(0.001953125 plusOrMinus 0.0001) + } + } + + "CpuMetricsSelector" must { + "calculate capacity of cpuCombined metrics" in { + val capacity = CpuMetricsSelector.capacity(nodeMetrics) + capacity(a1) must be(0.9 plusOrMinus 0.0001) + capacity(b1) must be(0.5 plusOrMinus 0.0001) + capacity(c1) must be(0.0 plusOrMinus 0.0001) + capacity.contains(d1) must be(false) + } + } + + "SystemLoadAverageMetricsSelector" must { + "calculate capacity of systemLoadAverage metrics" in { + val capacity = SystemLoadAverageMetricsSelector.capacity(nodeMetrics) + capacity(a1) must be(0.9375 plusOrMinus 
0.0001) + capacity(b1) must be(0.9375 plusOrMinus 0.0001) + capacity(c1) must be(0.0 plusOrMinus 0.0001) + capacity.contains(d1) must be(false) + } + } + + "MixMetricsSelector" must { + "aggregate capacity of all metrics" in { + val capacity = MixMetricsSelector.capacity(nodeMetrics) + capacity(a1) must be((0.75 + 0.9 + 0.9375) / 3 plusOrMinus 0.0001) + capacity(b1) must be((0.75 + 0.5 + 0.9375) / 3 plusOrMinus 0.0001) + capacity(c1) must be((0.0 + 0.0 + 0.0) / 3 plusOrMinus 0.0001) + capacity(d1) must be((0.001953125) / 1 plusOrMinus 0.0001) + } + } + +} + diff --git a/akka-cluster/src/test/scala/akka/cluster/routing/WeightedRouteesSpec.scala b/akka-cluster/src/test/scala/akka/cluster/routing/WeightedRouteesSpec.scala new file mode 100644 index 0000000000..f34b81c5ec --- /dev/null +++ b/akka-cluster/src/test/scala/akka/cluster/routing/WeightedRouteesSpec.scala @@ -0,0 +1,87 @@ +/** + * Copyright (C) 2009-2012 Typesafe Inc. + */ + +package akka.cluster.routing + +import com.typesafe.config.ConfigFactory + +import akka.actor.Address +import akka.actor.RootActorPath +import akka.testkit.AkkaSpec + +@org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) +class WeightedRouteesSpec extends AkkaSpec(ConfigFactory.parseString(""" + akka.actor.provider = "akka.cluster.ClusterActorRefProvider" + akka.remote.netty.port = 0 + """)) { + + val a1 = Address("akka", "sys", "a1", 2551) + val b1 = Address("akka", "sys", "b1", 2551) + val c1 = Address("akka", "sys", "c1", 2551) + val d1 = Address("akka", "sys", "d1", 2551) + + val refA = system.actorFor(RootActorPath(a1) / "user" / "a") + val refB = system.actorFor(RootActorPath(b1) / "user" / "b") + val refC = system.actorFor(RootActorPath(c1) / "user" / "c") + + "WeightedRoutees" must { + + "allocate weighted refs" in { + val weights = Map(a1 -> 1, b1 -> 3, c1 -> 10) + val refs = Vector(refA, refB, refC) + val weighted = new WeightedRoutees(refs, a1, weights) + + weighted(1) must be(refA) + 2 to 4 foreach { weighted(_) must be(refB) } + 5 to 14 foreach { weighted(_) must be(refC) } + weighted.total must be(14) + } + + "check boundaries" in { + val empty = new WeightedRoutees(Vector(), a1, Map.empty) + empty.isEmpty must be(true) + intercept[IllegalArgumentException] { + empty.total + } + val weighted = new WeightedRoutees(Vector(refA, refB, refC), a1, Map.empty) + weighted.total must be(3) + intercept[IllegalArgumentException] { + weighted(0) + } + intercept[IllegalArgumentException] { + weighted(4) + } + } + + "allocate refs for undefined weight" in { + val weights = Map(a1 -> 1, b1 -> 7) + val refs = Vector(refA, refB, refC) + val weighted = new WeightedRoutees(refs, a1, weights) + + weighted(1) must be(refA) + 2 to 8 foreach { weighted(_) must be(refB) } + // undefined, uses the mean of the weights, i.e. 
4 + 9 to 12 foreach { weighted(_) must be(refC) } + weighted.total must be(12) + } + + "allocate weighted local refs" in { + val weights = Map(a1 -> 2, b1 -> 1, c1 -> 10) + val refs = Vector(testActor, refB, refC) + val weighted = new WeightedRoutees(refs, a1, weights) + + 1 to 2 foreach { weighted(_) must be(testActor) } + 3 to weighted.total foreach { weighted(_) must not be (testActor) } + } + + "not allocate ref with weight zero" in { + val weights = Map(a1 -> 0, b1 -> 2, c1 -> 10) + val refs = Vector(refA, refB, refC) + val weighted = new WeightedRoutees(refs, a1, weights) + + 1 to weighted.total foreach { weighted(_) must not be (refA) } + } + + } +} diff --git a/akka-contrib/docs/index.rst b/akka-contrib/docs/index.rst index 9f5b57c513..5303c21c6d 100644 --- a/akka-contrib/docs/index.rst +++ b/akka-contrib/docs/index.rst @@ -29,6 +29,7 @@ The Current List of Modules .. toctree:: reliable-proxy + throttle Suggested Way of Using these Contributions ------------------------------------------ diff --git a/akka-contrib/docs/throttle.rst b/akka-contrib/docs/throttle.rst new file mode 100644 index 0000000000..ab60fb6b96 --- /dev/null +++ b/akka-contrib/docs/throttle.rst @@ -0,0 +1,60 @@ +Throttling Actor Messages +========================= + +Introduction +------------ + +Suppose you are writing an application that makes HTTP requests to an external +web service and that this web service has a restriction in place: you may not +make more than 10 requests in 1 minute. You will get blocked or need to pay if +you don’t stay under this limit. In such a scenario you will want to employ +a *message throttler*. + +This extension module provides a simple implementation of a throttling actor, +the :class:`TimerBasedThrottler`. + + +How to use it +------------- + +You can use a :class:`TimerBasedThrottler` as follows: + +.. includecode:: @contribSrc@/src/test/scala/akka/contrib/throttle/TimerBasedThrottlerSpec.scala#demo-code + +Please refer to the ScalaDoc documentation for the details. + + +The guarantees +-------------- + +:class:`TimerBasedThrottler` uses a timer internally. When the throttler’s rate is 3 msg/s, +for example, the throttler will start a timer that triggers +every second and each time will give the throttler exactly three "vouchers"; +each voucher gives the throttler a right to deliver a message. In this way, +at most 3 messages will be sent out by the throttler in each interval. + +It should be noted that such timer-based throttlers provide relatively **weak guarantees**: + +* Only *start times* are taken into account. This may be a problem if, for example, the + throttler is used to throttle requests to an external web service. If a web request + takes very long on the server then the rate *observed on the server* may be higher. +* A timer-based throttler only makes guarantees for the intervals of its own timer. In + our example, no more than 3 messages are delivered within such intervals. Other + intervals on the timeline, however, may contain more calls. + +The two cases are illustrated in the two figures below, each showing a timeline and three +intervals of the timer. The message delivery times chosen by the throttler are indicated +by dots, and as you can see, each interval contains at most 3 points, so the throttler +works correctly. Still, there is in each example an interval (the red one) that is +problematic.
In the first scenario, this is because the delivery times are merely the +start times of longer requests (indicated by the four bars above the timeline that start +at the dots), so that the server observes four requests during the red interval. In the +second scenario, the messages are centered around one of the points in time where the +timer triggers, causing the red interval to contain too many messages. + +.. image:: throttler.png + +For some application scenarios, the guarantees provided by a timer-based throttler might +be too weak. Charles Cordingley’s `blog post `_ +discusses a throttler with stronger guarantees (it solves problem 2 from above). +Future versions of this module may feature throttlers with better guarantees. \ No newline at end of file diff --git a/akka-contrib/docs/throttler.png b/akka-contrib/docs/throttler.png new file mode 100644 index 0000000000..eab1a52a34 Binary files /dev/null and b/akka-contrib/docs/throttler.png differ diff --git a/akka-contrib/src/main/scala/akka/contrib/pattern/ReliableProxy.scala b/akka-contrib/src/main/scala/akka/contrib/pattern/ReliableProxy.scala index d46eff9f5f..9d4b9ecd7b 100644 --- a/akka-contrib/src/main/scala/akka/contrib/pattern/ReliableProxy.scala +++ b/akka-contrib/src/main/scala/akka/contrib/pattern/ReliableProxy.scala @@ -6,7 +6,7 @@ package akka.contrib.pattern import akka.actor._ import akka.remote.RemoteScope -import scala.concurrent.util._ +import scala.concurrent.duration._ object ReliableProxy { @@ -164,4 +164,4 @@ class ReliableProxy(target: ActorRef, retryAfter: FiniteDuration) extends Actor m } -} \ No newline at end of file +} diff --git a/akka-contrib/src/main/scala/akka/contrib/throttle/TimerBasedThrottler.scala b/akka-contrib/src/main/scala/akka/contrib/throttle/TimerBasedThrottler.scala new file mode 100644 index 0000000000..de614619f7 --- /dev/null +++ b/akka-contrib/src/main/scala/akka/contrib/throttle/TimerBasedThrottler.scala @@ -0,0 +1,296 @@ +/** + * Copyright (C) 2009-2012 Typesafe Inc. + */ + +package akka.contrib.throttle + +import scala.concurrent.duration.{ Duration, FiniteDuration } +import scala.util.control.NonFatal +import scala.collection.immutable.{ Queue ⇒ Q } +import akka.actor.{ ActorRef, Actor, FSM } +import Throttler._ +import TimerBasedThrottler._ +import java.util.concurrent.TimeUnit +import akka.AkkaException + +/** + * Marker trait for throttlers. + * + * == Throttling == + * A throttler is an actor that is defined through a target actor and a rate + * (of type [[akka.contrib.throttle.Throttler.Rate]]). You set or change the target and rate at any time through the `SetTarget(target)` + * and `SetRate(rate)` messages, respectively. When you send the throttler any other message `msg`, it will + * put the message `msg` into an internal queue and eventually send all queued messages to the target, at + * a speed that respects the given rate. If no target is currently defined then the messages will be queued + * and will be delivered as soon as a target gets set. + * + * A [[akka.contrib.throttle.Throttler]] understands actor messages of type + * [[akka.contrib.throttle.Throttler.SetTarget]], [[akka.contrib.throttle.Throttler.SetRate]], in + * addition to any other messages, which the throttler will consider as messages to be sent to + * the target. + * + * == Transparency == + * Notice that the throttler `forward`s messages, i.e., the target will see the original message sender (and not the throttler) as the sender of the message. 
+ * + * == Persistence == + * Throttlers usually use an internal queue to keep the messages that need to be sent to the target. + * You therefore cannot rely on the throttler's inbox size in order to learn how many messages are + * outstanding. + * + * It is left to the implementation whether the internal queue is persisted over application restarts or + * actor failure. + * + * == Processing messages == + * The target should process messages as fast as possible. If the target requires substantial time to + * process messages, it should distribute its work to other actors (using for example something like + * a `BalancingDispatcher`), otherwise the resulting system will always work below + * the threshold rate. + * + * Example: Suppose the throttler has a rate of 3msg/s and the target requires 1s to process a message. + * This system will only process messages at a rate of 1msg/s: the target will receive messages at up to 3msg/s + * but as it handles them synchronously and each of them takes 1s, its inbox will grow and grow. In such + * a situation, the target should distribute its messages to a set of worker actors so that individual messages + * can be handled in parallel. + * + * @see [[akka.contrib.throttle.TimerBasedThrottler]] + */ +trait Throttler { this: Actor ⇒ } + +/** + * Message types understood by [[akka.contrib.throttle.Throttler]]s. + * + * @see [[akka.contrib.throttle.Throttler]] + * @see [[akka.contrib.throttle.Throttler.Rate]] + */ +object Throttler { + /** + * A rate used for throttling. + * + * There are some shorthands available to construct rates: + * {{{ + * import java.util.concurrent.TimeUnit._ + * import scala.concurrent.duration.{ Duration, FiniteDuration } + * + * val rate1 = 1 msgsPer (1, SECONDS) + * val rate2 = 1 msgsPer Duration(1, SECONDS) + * val rate3 = 1 msgsPer (1 seconds) + * val rate4 = 1 msgsPerSecond + * val rate5 = 1 msgsPerMinute + * val rate6 = 1 msgsPerHour + * }}} + * + * @param numberOfCalls the number of calls that may take place in a period + * @param duration the length of the period + * @see [[akka.contrib.throttle.Throttler]] + */ + case class Rate(val numberOfCalls: Int, val duration: FiniteDuration) { + /** + * The duration in milliseconds. + */ + def durationInMillis(): Long = duration.toMillis + } + + /** + * Set the target of a [[akka.contrib.throttle.Throttler]]. + * + * You may change a throttler's target at any time. + * + * Notice that the messages sent by the throttler to the target will have the original sender (and + * not the throttler) as the sender. (In Akka terms, the throttler `forward`s the message.) + * + * @param target if `target` is `None`, the throttler will stop delivering messages and the messages already received + * but not yet delivered, as well as any messages received in the future will be queued + * and eventually be delivered when a new target is set. If `target` is not `None`, the currently queued messages + * as well as any messages received in the future will be delivered to the new target at a rate not exceeding the current throttler's rate. + */ + case class SetTarget(target: Option[ActorRef]) + + /** + * Set the rate of a [[akka.contrib.throttle.Throttler]]. + * + * You may change a throttler's rate at any time. + * + * @param rate the rate at which messages will be delivered to the target of the throttler + */ + case class SetRate(rate: Rate) + + import language.implicitConversions + + /** + * Helper for some syntactic sugar.
+ * + * @see [[akka.contrib.throttle.Throttler.Rate]] + */ + implicit class RateInt(val numberOfCalls: Int) extends AnyVal { + def msgsPer(duration: Int, timeUnit: TimeUnit) = Rate(numberOfCalls, Duration(duration, timeUnit)) + def msgsPer(duration: FiniteDuration) = Rate(numberOfCalls, duration) + def msgsPerSecond = Rate(numberOfCalls, Duration(1, TimeUnit.SECONDS)) + def msgsPerMinute = Rate(numberOfCalls, Duration(1, TimeUnit.MINUTES)) + def msgsPerHour = Rate(numberOfCalls, Duration(1, TimeUnit.HOURS)) + } + +} + +/** + * Implementation-specific internals. + */ +object TimerBasedThrottler { + private[throttle] case object Tick + + // States of the FSM: A `TimerBasedThrottler` is in state `Active` iff the timer is running. + private[throttle] sealed trait State + private[throttle] case object Idle extends State + private[throttle] case object Active extends State + + // Messages, as we queue them to be sent later + private[throttle] case class Message(message: Any, sender: ActorRef) + + // The data of the FSM + private[throttle] sealed case class Data(target: Option[ActorRef], + callsLeftInThisPeriod: Int, + queue: Q[Message]) +} + +/** + * A [[akka.contrib.throttle.Throttler]] that uses a timer to control the message delivery rate. + * + * ==Example== + * For example, if you set a rate like "3 messages in 1 second", the throttler + * will send the first three messages immediately to the target actor but will need to impose a delay before + * sending out further messages: + * {{{ + * // A simple actor that prints whatever it receives + * val printer = system.actorOf(Props(new Actor { + * def receive = { + * case x => println(x) + * } + * })) + * // The throttler for this example, setting the rate + * val throttler = system.actorOf(Props(new TimerBasedThrottler(3 msgsPer (1.second)))) + * // Set the target + * throttler ! SetTarget(Some(printer)) + * // These three messages will be sent to the printer immediately + * throttler ! "1" + * throttler ! "2" + * throttler ! "3" + * // These two will wait at least until 1 second has passed + * throttler ! "4" + * throttler ! "5" + * }}} + * + * ==Implementation notes== + * This throttler implementation internally installs a timer that repeats every `rate.durationInMillis` and enables `rate.numberOfCalls` + * additional calls to take place. A `TimerBasedThrottler` uses very few system resources, provided the rate's duration is not too + * fine-grained (which would cause a lot of timer invocations); for example, it does not store the calling history + * as other throttlers may need to do. + * + * However, a `TimerBasedThrottler` only provides ''weak guarantees'' on the rate (see also + * this blog post): + * + * - Only ''delivery'' times are taken into account: if, for example, the throttler is used to throttle + * requests to an external web service then only the start times of the web requests are considered. + * If a web request takes very long on the server then more than `rate.numberOfCalls`-many requests + * may be observed on the server in an interval of duration `rate.durationInMillis()`. + * - There may be intervals of duration `rate.durationInMillis()` that contain more than `rate.numberOfCalls` + * message deliveries: a `TimerBasedThrottler` only makes guarantees for the intervals + * of its ''own'' timer, namely that no more than `rate.numberOfCalls`-many messages are delivered within such intervals. Other intervals on the + * timeline may contain more calls. + * + * For some applications, these guarantees may not be sufficient. 
+ * + * ==Known issues== + * + * - If you change the rate using `SetRate(rate)`, the actual rate may in fact be higher for the + * overlapping period (i.e., `durationInMillis()`) of the new and old rate. Therefore, + * changing the rate frequently is not recommended with the current implementation. + * - The queue of messages to be delivered is not persisted in any way; actor or system failure will + * cause the queued messages to be lost. + * + * @see [[akka.contrib.throttle.Throttler]] + */ +class TimerBasedThrottler(var rate: Rate) extends Actor with Throttler with FSM[State, Data] { + startWith(Idle, Data(None, rate.numberOfCalls, Q())) + + // Idle: no messages, or target not set + when(Idle) { + // Set the rate + case Event(SetRate(rate), d) ⇒ + this.rate = rate + stay using d.copy(callsLeftInThisPeriod = rate.numberOfCalls) + + // Set the target + case Event(SetTarget(t @ Some(_)), d) if !d.queue.isEmpty ⇒ + goto(Active) using deliverMessages(d.copy(target = t)) + case Event(SetTarget(t), d) ⇒ + stay using d.copy(target = t) + + // Queuing + case Event(msg, d @ Data(None, _, queue)) ⇒ + stay using d.copy(queue = queue.enqueue(Message(msg, context.sender))) + case Event(msg, d @ Data(Some(_), _, Seq())) ⇒ + goto(Active) using deliverMessages(d.copy(queue = Q(Message(msg, context.sender)))) + // Note: The case Event(msg, t @ Data(Some(_), _, Seq(_*))) should never happen here. + } + + when(Active) { + // Set the rate + case Event(SetRate(rate), d) ⇒ + this.rate = rate + // Note: this should be improved (see "Known issues" in class comments) + stopTimer() + startTimer(rate) + stay using d.copy(callsLeftInThisPeriod = rate.numberOfCalls) + + // Set the target (when the new target is None) + case Event(SetTarget(None), d) ⇒ + // Note: We do not yet switch to state `Idle` because we need the timer to tick once more first + stay using d.copy(target = None) + + // Set the target (when the new target is not None) + case Event(SetTarget(t @ Some(_)), d) ⇒ + stay using d.copy(target = t) + + // Tick after a `SetTarget(None)`: take the additional permits and go to `Idle` + case Event(Tick, d @ Data(None, _, _)) ⇒ + goto(Idle) using d.copy(callsLeftInThisPeriod = rate.numberOfCalls) + + // Period ends and we have no more messages: take the additional permits and go to `Idle` + case Event(Tick, d @ Data(_, _, Seq())) ⇒ + goto(Idle) using d.copy(callsLeftInThisPeriod = rate.numberOfCalls) + + // Period ends and we get more occasions to send messages + case Event(Tick, d @ Data(_, _, _)) ⇒ + stay using deliverMessages(d.copy(callsLeftInThisPeriod = rate.numberOfCalls)) + + // Queue a message (when we cannot send messages in the current period anymore) + case Event(msg, d @ Data(_, 0, queue)) ⇒ + stay using d.copy(queue = queue.enqueue(Message(msg, context.sender))) + + // Queue a message (when we can send some more messages in the current period) + case Event(msg, d @ Data(_, _, queue)) ⇒ + stay using deliverMessages(d.copy(queue = queue.enqueue(Message(msg, context.sender)))) + } + + onTransition { + case Idle -> Active ⇒ startTimer(rate) + case Active -> Idle ⇒ stopTimer() + } + + initialize + + private def startTimer(rate: Rate) = setTimer("morePermits", Tick, rate.duration, true) + private def stopTimer() = cancelTimer("morePermits") + + /** + * Send as many messages as we can (while respecting the rate) to the target and + * return the state data (with the queue containing the remaining ones).
+ */ + private def deliverMessages(data: Data): Data = { + val queue = data.queue + val nrOfMsgToSend = scala.math.min(queue.length, data.callsLeftInThisPeriod) + + queue.take(nrOfMsgToSend).foreach(x ⇒ data.target.get.tell(x.message, x.sender)) + + data.copy(queue = queue.drop(nrOfMsgToSend), callsLeftInThisPeriod = data.callsLeftInThisPeriod - nrOfMsgToSend) + } +} \ No newline at end of file diff --git a/akka-contrib/src/multi-jvm/scala/akka/contrib/pattern/ReliableProxySpec.scala b/akka-contrib/src/multi-jvm/scala/akka/contrib/pattern/ReliableProxySpec.scala index 03fef8da54..f71bb0116b 100644 --- a/akka-contrib/src/multi-jvm/scala/akka/contrib/pattern/ReliableProxySpec.scala +++ b/akka-contrib/src/multi-jvm/scala/akka/contrib/pattern/ReliableProxySpec.scala @@ -14,7 +14,7 @@ import akka.remote.testconductor.Direction import akka.actor.Props import akka.actor.Actor import akka.testkit.ImplicitSender -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.FSM import akka.actor.ActorRef import akka.testkit.TestProbe @@ -22,6 +22,8 @@ import akka.testkit.TestProbe object ReliableProxySpec extends MultiNodeConfig { val local = role("local") val remote = role("remote") + + testTransport(on = true) } class ReliableProxyMultiJvmNode1 extends ReliableProxySpec @@ -120,8 +122,8 @@ class ReliableProxySpec extends MultiNodeSpec(ReliableProxySpec) with STMultiNod enterBarrier("test2b") runOn(local) { - testConductor.throttle(local, remote, Direction.Send, -1) - expectTransition(Active, Idle) + testConductor.throttle(local, remote, Direction.Send, -1).await + within(5 seconds) { expectTransition(Active, Idle) } } runOn(remote) { within(1 second) { @@ -150,8 +152,8 @@ class ReliableProxySpec extends MultiNodeSpec(ReliableProxySpec) with STMultiNod enterBarrier("test3a") runOn(local) { - testConductor.throttle(local, remote, Direction.Receive, -1) - expectTransition(Active, Idle) + testConductor.throttle(local, remote, Direction.Receive, -1).await + within(5 seconds) { expectTransition(Active, Idle) } } enterBarrier("test3b") @@ -193,4 +195,4 @@ class ReliableProxySpec extends MultiNodeSpec(ReliableProxySpec) with STMultiNod } } -} \ No newline at end of file +} diff --git a/akka-contrib/src/test/java/akka/contrib/pattern/ReliableProxyTest.java b/akka-contrib/src/test/java/akka/contrib/pattern/ReliableProxyTest.java index afb0c34378..4ae2c20b1f 100644 --- a/akka-contrib/src/test/java/akka/contrib/pattern/ReliableProxyTest.java +++ b/akka-contrib/src/test/java/akka/contrib/pattern/ReliableProxyTest.java @@ -8,8 +8,8 @@ import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; -import scala.concurrent.util.Duration; -import scala.concurrent.util.FiniteDuration; +import scala.concurrent.duration.Duration; +import scala.concurrent.duration.FiniteDuration; import akka.actor.Actor; import akka.actor.ActorRef; import akka.actor.ActorSystem; diff --git a/akka-contrib/src/test/scala/akka/contrib/pattern/ReliableProxyDocSpec.scala b/akka-contrib/src/test/scala/akka/contrib/pattern/ReliableProxyDocSpec.scala index 259c94010c..07c2d7af74 100644 --- a/akka-contrib/src/test/scala/akka/contrib/pattern/ReliableProxyDocSpec.scala +++ b/akka-contrib/src/test/scala/akka/contrib/pattern/ReliableProxyDocSpec.scala @@ -8,7 +8,7 @@ import akka.testkit.AkkaSpec import akka.actor.Props import akka.actor.Actor import akka.testkit.ImplicitSender -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.FSM import 
akka.actor.ActorRef @@ -39,4 +39,4 @@ class ReliableProxyDocSpec extends AkkaSpec with ImplicitSender { } -} \ No newline at end of file +} diff --git a/akka-contrib/src/test/scala/akka/contrib/throttle/TimerBasedThrottlerSpec.scala b/akka-contrib/src/test/scala/akka/contrib/throttle/TimerBasedThrottlerSpec.scala new file mode 100644 index 0000000000..7304df1448 --- /dev/null +++ b/akka-contrib/src/test/scala/akka/contrib/throttle/TimerBasedThrottlerSpec.scala @@ -0,0 +1,206 @@ +/** + * Copyright (C) 2009-2012 Typesafe Inc. + */ + +package akka.contrib.throttle + +import language.postfixOps +import scala.concurrent.duration._ +import akka.actor.ActorSystem +import akka.actor.Actor +import akka.actor.Props +import akka.testkit.TestKit +import akka.testkit.ImplicitSender +import akka.contrib.throttle.Throttler._ +import org.junit.runner.RunWith +import org.scalatest.junit.JUnitRunner +import org.scalatest.WordSpec +import org.scalatest.matchers.MustMatchers +import org.scalatest.BeforeAndAfterAll +import akka.testkit._ + +object TimerBasedThrottlerSpec { + class EchoActor extends Actor { + def receive = { + case x ⇒ sender ! x + } + } +} + +@RunWith(classOf[JUnitRunner]) +class TimerBasedThrottlerSpec extends TestKit(ActorSystem("TimerBasedThrottlerSpec")) with ImplicitSender + with WordSpec with MustMatchers with BeforeAndAfterAll { + + override def afterAll { + system.shutdown() + } + + "A throttler" must { + def println(a: Any) = () + "must pass the ScalaDoc class documentation example program" in { + //#demo-code + // A simple actor that prints whatever it receives + val printer = system.actorOf(Props(new Actor { + def receive = { + case x ⇒ println(x) + } + })) + // The throttler for this example, setting the rate + val throttler = system.actorOf(Props(new TimerBasedThrottler( + 3 msgsPer (1.second.dilated)))) + // Set the target + throttler ! SetTarget(Some(printer)) + // These three messages will be sent to the printer immediately + throttler ! "1" + throttler ! "2" + throttler ! "3" + // These two will wait until a second has passed + throttler ! "4" + throttler ! "5" + //#demo-code + } + + "keep messages until a target is set" in { + val echo = system.actorOf(Props[TimerBasedThrottlerSpec.EchoActor]) + val throttler = system.actorOf(Props(new TimerBasedThrottler(3 msgsPer (1.second.dilated)))) + throttler ! "1" + throttler ! "2" + throttler ! "3" + throttler ! "4" + throttler ! "5" + throttler ! "6" + expectNoMsg(1 second) + throttler ! SetTarget(Some(echo)) + within(2 seconds) { + expectMsg("1") + expectMsg("2") + expectMsg("3") + expectMsg("4") + expectMsg("5") + expectMsg("6") + } + } + + "send messages after a `SetTarget(None)` pause" in { + val echo = system.actorOf(Props[TimerBasedThrottlerSpec.EchoActor]) + val throttler = system.actorOf(Props(new TimerBasedThrottler(3 msgsPer (1.second.dilated)))) + throttler ! SetTarget(Some(echo)) + throttler ! "1" + throttler ! "2" + throttler ! "3" + throttler ! SetTarget(None) + within(1 second) { + expectMsg("1") + expectMsg("2") + expectMsg("3") + expectNoMsg() + } + expectNoMsg(1 second) + throttler ! SetTarget(Some(echo)) + throttler ! "4" + throttler ! "5" + throttler ! "6" + throttler !
"7" + within(1 seconds) { + expectMsg("4") + expectMsg("5") + expectMsg("6") + expectNoMsg() + } + within(1 second) { + expectMsg("7") + } + } + + "keep messages when the target is set to None" in { + val echo = system.actorOf(Props[TimerBasedThrottlerSpec.EchoActor]) + val throttler = system.actorOf(Props(new TimerBasedThrottler(3 msgsPer (1.second.dilated)))) + throttler ! SetTarget(Some(echo)) + throttler ! "1" + throttler ! "2" + throttler ! "3" + throttler ! "4" + throttler ! "5" + throttler ! "6" + throttler ! "7" + throttler ! SetTarget(None) + within(1 second) { + expectMsg("1") + expectMsg("2") + expectMsg("3") + expectNoMsg() + } + expectNoMsg(1 second) + throttler ! SetTarget(Some(echo)) + within(1 seconds) { + expectMsg("4") + expectMsg("5") + expectMsg("6") + expectNoMsg() + } + within(1 second) { + expectMsg("7") + } + } + + "respect the rate (3 msg/s)" in { + val echo = system.actorOf(Props[TimerBasedThrottlerSpec.EchoActor]) + val throttler = system.actorOf(Props(new TimerBasedThrottler(3 msgsPer (1.second.dilated)))) + throttler ! SetTarget(Some(echo)) + throttler ! "1" + throttler ! "2" + throttler ! "3" + throttler ! "4" + throttler ! "5" + throttler ! "6" + throttler ! "7" + within(1 second) { + expectMsg("1") + expectMsg("2") + expectMsg("3") + expectNoMsg() + } + within(1 second) { + expectMsg("4") + expectMsg("5") + expectMsg("6") + expectNoMsg() + } + within(1 second) { + expectMsg("7") + } + } + + "respect the rate (4 msg/s)" in { + val echo = system.actorOf(Props[TimerBasedThrottlerSpec.EchoActor]) + val throttler = system.actorOf(Props(new TimerBasedThrottler(4 msgsPer (1.second.dilated)))) + throttler ! SetTarget(Some(echo)) + throttler ! "1" + throttler ! "2" + throttler ! "3" + throttler ! "4" + throttler ! "5" + throttler ! "6" + throttler ! "7" + throttler ! "8" + throttler ! "9" + within(1 second) { + expectMsg("1") + expectMsg("2") + expectMsg("3") + expectMsg("4") + expectNoMsg() + } + within(1 second) { + expectMsg("5") + expectMsg("6") + expectMsg("7") + expectMsg("8") + expectNoMsg() + } + within(1 second) { + expectMsg("9") + } + } + } +} \ No newline at end of file diff --git a/akka-dataflow/src/main/scala/akka/dataflow/package.scala b/akka-dataflow/src/main/scala/akka/dataflow/package.scala index 9f4e6a0da2..31248958d1 100644 --- a/akka-dataflow/src/main/scala/akka/dataflow/package.scala +++ b/akka-dataflow/src/main/scala/akka/dataflow/package.scala @@ -46,7 +46,7 @@ package object dataflow { implicit class DataflowPromise[T](val promise: Promise[T]) extends AnyVal { /** - * Completes the Promise with the speicifed value or throws an exception if already + * Completes the Promise with the specified value or throws an exception if already * completed. See Promise.success(value) for semantics. * * @param value The value which denotes the successful value of the Promise @@ -59,7 +59,7 @@ package object dataflow { /** * Completes this Promise with the value of the specified Future when/if it completes. * - * @param other The Future whose value will be transfered to this Promise upon completion + * @param other The Future whose value will be transferred to this Promise upon completion * @param ec An ExecutionContext which will be used to execute callbacks registered in this method * @return A Future representing the result of this operation */ @@ -75,7 +75,7 @@ package object dataflow { /** * Completes this Promise with the value of the specified Promise when/if it completes. 
* - * @param other The Promise whose value will be transfered to this Promise upon completion + * @param other The Promise whose value will be transferred to this Promise upon completion * @param ec An ExecutionContext which will be used to execute callbacks registered in this method * @return A Future representing the result of this operation */ diff --git a/akka-dataflow/src/test/scala/akka/dataflow/DataflowSpec.scala b/akka-dataflow/src/test/scala/akka/dataflow/DataflowSpec.scala index 2bc616881b..0543b557c3 100644 --- a/akka-dataflow/src/test/scala/akka/dataflow/DataflowSpec.scala +++ b/akka-dataflow/src/test/scala/akka/dataflow/DataflowSpec.scala @@ -11,7 +11,7 @@ import akka.actor.Status._ import akka.pattern.ask import akka.testkit.{ EventFilter, filterEvents, filterException } import scala.concurrent.{ Await, Promise, Future } -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.testkit.{ DefaultTimeout, TestLatch, AkkaSpec } import java.util.concurrent.TimeoutException @@ -271,9 +271,7 @@ class DataflowSpec extends AkkaSpec with DefaultTimeout { assert(checkType(rString, classTag[String])) assert(checkType(rInt, classTag[Int])) assert(!checkType(rInt, classTag[String])) - assert(intercept[java.lang.Exception] { - assert(!checkType(rInt, classTag[Nothing])) - }.getMessage == "Nothing is a bottom type, therefore its erasure does not return a value") // When this fails, remove the intercept + assert(!checkType(rInt, classTag[Nothing])) assert(!checkType(rInt, classTag[Any])) Await.result(rString, timeout.duration) diff --git a/akka-docs/_sphinx/themes/akka/layout.html b/akka-docs/_sphinx/themes/akka/layout.html index be74cedd45..1e0f13bdc2 100644 --- a/akka-docs/_sphinx/themes/akka/layout.html +++ b/akka-docs/_sphinx/themes/akka/layout.html @@ -49,7 +49,7 @@ diff --git a/akka-docs/_sphinx/themes/akka/static/docs.css b/akka-docs/_sphinx/themes/akka/static/docs.css index 3d37718c68..7121bb66ae 100644 --- a/akka-docs/_sphinx/themes/akka/static/docs.css +++ b/akka-docs/_sphinx/themes/akka/static/docs.css @@ -6,7 +6,7 @@ a:hover { color: #73a600; text-decoration: none; } .main { position: relative; height: auto; margin-top: -18px; overflow: auto; } .page-title { position: relative; top: 24px; font-family: 'Exo', sans-serif; font-size: 24px; font-weight: 400; color: rgba(255, 255, 255, 1); text-shadow:0 2px 0 #000000; width: 900px;} .main-container { background: #f2f2eb; min-height: 600px; padding-top: 20px; margin-top: 28px; padding-bottom: 40px; } -.container h1:first-of-type { visibility: hidden; margin-top: -36px; } +.container h1:first-of-type { display: none; visibility: hidden; margin-top: -36px; } .pdf-link { float: right; height: 40px; margin-bottom: -15px; margin-top: -5px; } .breadcrumb { height: 18px; } .breadcrumb li { float: right; } @@ -172,4 +172,5 @@ strong {color: #1d3c52; } } .pre { padding: 1px 2px; color: #5d8700; background-color: #f3f7e9; border: 1px solid #dee1e2; font-family: Menlo, Monaco, "Courier New", monospace; font-size: 12px; -webkit-border-radius: 3px; -moz-border-radius: 3px; border-radius: 3px; } -.footer h5 { text-transform: none; } \ No newline at end of file +.footer h5 { text-transform: none; } + diff --git a/akka-docs/rst/cluster/cluster-usage-java.rst b/akka-docs/rst/cluster/cluster-usage-java.rst index c8e2d791b4..a799dae457 100644 --- a/akka-docs/rst/cluster/cluster-usage-java.rst +++ b/akka-docs/rst/cluster/cluster-usage-java.rst @@ -25,24 +25,24 @@ version from ``_. 
We recommend against using ``SNAPSHOT`` in order to obtain stable builds. +.. _cluster_simple_example_java: + A Simple Cluster Example ^^^^^^^^^^^^^^^^^^^^^^^^ The following small program together with its configuration starts an ``ActorSystem`` -with the Cluster extension enabled. It joins the cluster and logs some membership events. +with the Cluster enabled. It joins the cluster and logs some membership events. Try it out: 1. Add the following ``application.conf`` to your project, placing it in ``src/main/resources``: -.. literalinclude:: ../../../akka-samples/akka-sample-cluster/src/main/resources/application.conf - :language: none +.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/resources/application.conf#cluster To enable cluster capabilities in your Akka project you should, at a minimum, add the :ref:`remoting-java` settings, but with ``akka.cluster.ClusterActorRefProvider``. -The ``akka.cluster.seed-nodes`` and cluster extension should normally also be added to your -``application.conf`` file. +The ``akka.cluster.seed-nodes`` should normally also be added to your ``application.conf`` file. The seed nodes are configured contact points for the initial, automatic join of the cluster. @@ -54,17 +54,33 @@ ip-addresses or host names of the machines in ``application.conf`` instead of `` .. literalinclude:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/simple/japi/SimpleClusterApp.java :language: java +3. Add the `maven exec plugin `_ to your pom.xml:: -3. Start the first seed node. Open a sbt session in one terminal window and run:: + + <build> + <plugins> + <plugin> + <groupId>org.codehaus.mojo</groupId> + <artifactId>exec-maven-plugin</artifactId> + <version>1.2.1</version> + </plugin> + </plugins> + </build> + - run-main sample.cluster.simple.japi.SimpleClusterApp 2551 + +4. Start the first seed node. Open a terminal window and run (one line):: + + mvn exec:java -Dexec.mainClass="sample.cluster.simple.japi.SimpleClusterApp" \ + -Dexec.args="2551" 2551 corresponds to the port of the first seed-nodes element in the configuration. In the log output you see that the cluster node has been started and changed status to 'Up'. -4. Start the second seed node. Open a sbt session in another terminal window and run:: +5. Start the second seed node. Open another terminal window and run:: - run-main sample.cluster.simple.japi.SimpleClusterApp 2552 + mvn exec:java -Dexec.mainClass="sample.cluster.simple.japi.SimpleClusterApp" \ + -Dexec.args="2552" 2552 corresponds to the port of the second seed-nodes element in the configuration. @@ -73,9 +89,9 @@ and becomes a member of the cluster. Its status changed to 'Up'. Switch over to the first terminal window and see in the log output that the member joined. -5. Start another node. Open a sbt session in yet another terminal window and run:: +6. Start another node. Open yet another terminal window and run:: - run-main sample.cluster.simple.japi.SimpleClusterApp + mvn exec:java -Dexec.mainClass="sample.cluster.simple.japi.SimpleClusterApp" Now you don't need to specify the port number, and it will use a random available port. It joins one of the configured seed nodes. Look at the log output in the different terminal @@ -197,23 +213,28 @@ Death watch uses the cluster failure detector for nodes in the cluster, i.e. it detects network failures and JVM crashes, in addition to graceful termination of the watched actor. -This example is included in ``akka-samples/akka-sample-cluster`` -and you can try by starting nodes in different terminal windows. For example, starting 2
For example, starting 2 +This example is included in ``akka-samples/akka-sample-cluster`` and you can try it by copying the +`source <@github@/akka-samples/akka-sample-cluster>`_ to your +maven project, defined as in :ref:`cluster_simple_example_java`. +Run it by starting nodes in different terminal windows. For example, starting 2 frontend nodes and 3 backend nodes:: - sbt + mvn exec:java \ + -Dexec.mainClass="sample.cluster.transformation.japi.TransformationFrontendMain" \ + -Dexec.args="2551" - project akka-sample-cluster-experimental + mvn exec:java \ + -Dexec.mainClass="sample.cluster.transformation.japi.TransformationBackendMain" \ + -Dexec.args="2552" - run-main sample.cluster.transformation.japi.TransformationFrontendMain 2551 + mvn exec:java \ + -Dexec.mainClass="sample.cluster.transformation.japi.TransformationBackendMain" - run-main sample.cluster.transformation.japi.TransformationBackendMain 2552 + mvn exec:java \ + -Dexec.mainClass="sample.cluster.transformation.japi.TransformationBackendMain" - run-main sample.cluster.transformation.japi.TransformationBackendMain - - run-main sample.cluster.transformation.japi.TransformationBackendMain - - run-main sample.cluster.transformation.japi.TransformationFrontendMain + mvn exec:java \ + -Dexec.mainClass="sample.cluster.transformation.japi.TransformationFrontendMain" .. note:: The above example should probably be designed as two separate, frontend/backend, clusters, when there is a `cluster client for decoupling clusters `_. @@ -355,21 +376,26 @@ This means that user requests can be sent to ``StatsService`` on any node and it ``StatsWorker`` on all nodes. There can only be one worker per node, but that worker could easily fan out to local children if more parallelism is needed. -This example is included in ``akka-samples/akka-sample-cluster`` -and you can try by starting nodes in different terminal windows. For example, starting 3 +This example is included in ``akka-samples/akka-sample-cluster`` and you can try it by copying the +`source <@github@/akka-samples/akka-sample-cluster>`_ to your +maven project, defined as in :ref:`cluster_simple_example_java`. +Run it by starting nodes in different terminal windows. For example, starting 3 service nodes and 1 client:: - sbt + mvn exec:java \ + -Dexec.mainClass="run-main sample.cluster.stats.japi.StatsSampleMain" \ + -Dexec.args="2551" - project akka-sample-cluster-experimental + mvn exec:java \ + -Dexec.mainClass="run-main sample.cluster.stats.japi.StatsSampleMain" \ + -Dexec.args="2552" - run-main sample.cluster.stats.japi.StatsSampleMain 2551 + mvn exec:java \ + -Dexec.mainClass="run-main sample.cluster.stats.japi.StatsSampleMain" - run-main sample.cluster.stats.japi.StatsSampleMain 2552 + mvn exec:java \ + -Dexec.mainClass="run-main sample.cluster.stats.japi.StatsSampleMain" - run-main sample.cluster.stats.japi.StatsSampleClientMain - - run-main sample.cluster.stats.japi.StatsSampleMain The above setup is nice for this example, but we will also take a look at how to use a single master node that creates and deploys workers. To keep track of a single @@ -387,25 +413,130 @@ All nodes start ``StatsFacade`` and the router is now configured like this: .. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsSampleOneMasterMain.java#start-router-deploy - -This example is included in ``akka-samples/akka-sample-cluster`` -and you can try by starting nodes in different terminal windows. 
-
-This example is included in ``akka-samples/akka-sample-cluster``
-and you can try by starting nodes in different terminal windows. For example, starting 3
+This example is included in ``akka-samples/akka-sample-cluster`` and you can try it by copying the
+`source <@github@/akka-samples/akka-sample-cluster>`_ to your
+Maven project, defined as in :ref:`cluster_simple_example_java`.
+Run it by starting nodes in different terminal windows. For example, starting 3
 service nodes and 1 client::

-    sbt
+    mvn exec:java \
+      -Dexec.mainClass="sample.cluster.stats.japi.StatsSampleOneMasterMain" \
+      -Dexec.args="2551"

-    project akka-sample-cluster-experimental
+    mvn exec:java \
+      -Dexec.mainClass="sample.cluster.stats.japi.StatsSampleOneMasterMain" \
+      -Dexec.args="2552"

-    run-main sample.cluster.stats.japi.StatsSampleOneMasterMain 2551
+    mvn exec:java \
+      -Dexec.mainClass="sample.cluster.stats.japi.StatsSampleOneMasterClientMain"

-    run-main sample.cluster.stats.japi.StatsSampleOneMasterMain 2552
+    mvn exec:java \
+      -Dexec.mainClass="sample.cluster.stats.japi.StatsSampleOneMasterMain"

-    run-main sample.cluster.stats.japi.StatsSampleOneMasterClientMain
-
-    run-main sample.cluster.stats.japi.StatsSampleOneMasterMain

 .. note:: The above example, especially the last part, will be simplified when the cluster
    handles automatic actor partitioning.

+Cluster Metrics
+^^^^^^^^^^^^^^^
+
+The member nodes of the cluster collect system health metrics and publish them to other nodes and to
+registered subscribers. This information is primarily used for load-balancing routers.
+
+Hyperic Sigar
+-------------
+
+The built-in metrics are gathered from JMX MBeans, and optionally you can use `Hyperic Sigar `_
+for a wider and more accurate range of metrics compared to what can be retrieved from ordinary MBeans.
+Sigar uses a native OS library. To enable usage of Sigar you need to add the directory of the
+native library to ``-Djava.library.path=`` and add the following dependency::
+
+  <dependency>
+    <groupId>org.hyperic</groupId>
+    <artifactId>sigar</artifactId>
+    <version>@sigarVersion@</version>
+  </dependency>
+
+Adaptive Load Balancing
+-----------------------
+
+The ``AdaptiveLoadBalancingRouter`` performs load balancing of messages to cluster nodes based on the cluster metrics data.
+It uses random selection of routees with probabilities derived from the remaining capacity of the corresponding node.
+It can be configured to use a specific ``MetricsSelector`` to produce the probabilities, a.k.a. weights:
+
+* ``heap`` / ``HeapMetricsSelector`` - Used and max JVM heap memory. Weights based on remaining heap capacity; (max - used) / max
+* ``load`` / ``SystemLoadAverageMetricsSelector`` - System load average for the past 1 minute, corresponding value can be found in ``top`` of Linux systems. The system is possibly nearing a bottleneck if the system load average is nearing number of cpus/cores. Weights based on remaining load capacity; 1 - (load / processors)
+* ``cpu`` / ``CpuMetricsSelector`` - CPU utilization in percentage, sum of User + Sys + Nice + Wait. Weights based on remaining cpu capacity; 1 - utilization
+* ``mix`` / ``MixMetricsSelector`` - Combines heap, cpu and load. Weights based on mean of remaining capacity of the combined selectors.
+* Any custom implementation of ``akka.cluster.routing.MetricsSelector``
+
+The collected metrics values are smoothed with `exponential weighted moving average `_. In the
+:ref:`cluster_configuration_java` you can adjust how quickly past data is decayed compared to new data.
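+
+As a small numeric illustration of the formulas quoted above (the numbers are made
+up, and this is not the actual selector implementation)::
+
+    // a node with 512 MB used of a 1024 MB max heap, load average 2.0 on
+    // 8 processors, and 60 % CPU utilization
+    double heapWeight = (1024.0 - 512.0) / 1024.0;  // (max - used) / max = 0.5
+    double loadWeight = 1.0 - (2.0 / 8.0);          // 1 - (load / processors) = 0.75
+    double cpuWeight  = 1.0 - 0.60;                 // 1 - utilization = 0.4
+    // 'mix' takes the mean of the remaining capacities
+    double mixWeight  = (heapWeight + loadWeight + cpuWeight) / 3.0; // = 0.55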
+Let's take a look at this router in action.
+
+In this example the following imports are used:
+
+.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialBackend.java#imports
+
+The backend worker that performs the factorial calculation:
+
+.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialBackend.java#backend
+
+The frontend that receives user jobs and delegates to the backends via the router:
+
+.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialFrontend.java#frontend
+
+As you can see, the router is defined in the same way as other routers, and in this case it's configured as follows:
+
+.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/resources/application.conf#adaptive-router
+
+Only the router type ``adaptive`` and the ``metrics-selector`` are specific to this router; everything else works
+in the same way as other routers.
+
+The same type of router could also have been defined in code:
+
+.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialFrontend.java#router-lookup-in-code
+
+.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialFrontend.java#router-deploy-in-code
+
+This example is included in ``akka-samples/akka-sample-cluster`` and you can try it by copying the
+`source <@github@/akka-samples/akka-sample-cluster>`_ to your
+Maven project, defined as in :ref:`cluster_simple_example_java`.
+Run it by starting nodes in different terminal windows. For example, starting 3 backend nodes and
+one frontend::
+
+    mvn exec:java \
+      -Dexec.mainClass="sample.cluster.factorial.japi.FactorialBackendMain" \
+      -Dexec.args="2551"
+
+    mvn exec:java \
+      -Dexec.mainClass="sample.cluster.factorial.japi.FactorialBackendMain" \
+      -Dexec.args="2552"
+
+    mvn exec:java \
+      -Dexec.mainClass="sample.cluster.factorial.japi.FactorialBackendMain"
+
+    mvn exec:java \
+      -Dexec.mainClass="sample.cluster.factorial.japi.FactorialFrontendMain"
+
+Press ctrl-c in the terminal window of the frontend to stop the factorial calculations.
+
+
+Subscribe to Metrics Events
+---------------------------
+
+It's possible to subscribe to the metrics events directly to implement other functionality.
+
+.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/MetricsListener.java#metrics-listener
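+
+Since the included listing is not reproduced inline here, a rough sketch of such a
+subscriber follows; treat the event and accessor names (e.g. ``ClusterMetricsChanged``)
+as assumptions to be checked against the actual sample source::
+
+    public class MetricsListener extends UntypedActor {
+      Cluster cluster = Cluster.get(getContext().system());
+
+      @Override
+      public void preStart() {
+        // subscribe to the metrics events published by the cluster extension
+        cluster.subscribe(getSelf(), ClusterEvent.ClusterMetricsChanged.class);
+      }
+
+      @Override
+      public void postStop() {
+        cluster.unsubscribe(getSelf());
+      }
+
+      public void onReceive(Object message) {
+        if (message instanceof ClusterEvent.ClusterMetricsChanged) {
+          // inspect the node metrics carried by the event here
+        } else {
+          unhandled(message);
+        }
+      }
+    }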
+Custom Metrics Collector
+------------------------
+
+You can plug in your own metrics collector instead of
+``akka.cluster.SigarMetricsCollector`` or ``akka.cluster.JmxMetricsCollector``. Look at those two implementations
+for inspiration. The implementation class can be defined in the :ref:`cluster_configuration_java`.

 .. _cluster_jmx_java:

@@ -442,15 +573,16 @@ Run it without parameters to see instructions about how to use the script::

      leave <node-url> - Sends a request for node with URL to LEAVE the cluster
       down <node-url> - Sends a request for marking node with URL as DOWN
        member-status - Asks the member node for its current status
+             members - Asks the cluster for addresses of current members
+         unreachable - Asks the cluster for addresses of unreachable members
       cluster-status - Asks the cluster for its current status (member ring,
                        unavailable nodes, meta data etc.)
               leader - Asks the cluster who the current leader is
         is-singleton - Checks if the cluster is a singleton cluster (single node cluster)
         is-available - Checks if the member node is available
-         is-running - Checks if the member node is running
-    has-convergence - Checks if there is a cluster convergence
-    Where the <node-url> should be on the format of 'akka://actor-system-name@hostname:port'
+    Where the <node-url> should be on the format of
+    'akka://actor-system-name@hostname:port'

     Examples: bin/akka-cluster localhost:9999 is-available
               bin/akka-cluster localhost:9999 join akka://MySystem@darkstar:2552

@@ -490,7 +622,7 @@ introduce the extra overhead of another thread.

 ::

    # shorter tick-duration of default scheduler when using cluster
-   akka.scheduler.tick-duration.tick-duration = 33ms
+   akka.scheduler.tick-duration = 33ms

diff --git a/akka-docs/rst/cluster/cluster-usage-scala.rst b/akka-docs/rst/cluster/cluster-usage-scala.rst
index 31ce7e7191..49d1c3b547 100644
--- a/akka-docs/rst/cluster/cluster-usage-scala.rst
+++ b/akka-docs/rst/cluster/cluster-usage-scala.rst
@@ -25,20 +25,18 @@ A Simple Cluster Example
 ^^^^^^^^^^^^^^^^^^^^^^^^

 The following small program together with its configuration starts an ``ActorSystem``
-with the Cluster extension enabled. It joins the cluster and logs some membership events.
+with the Cluster enabled. It joins the cluster and logs some membership events.

 Try it out:

 1. Add the following ``application.conf`` in your project, place it in ``src/main/resources``:

-.. literalinclude:: ../../../akka-samples/akka-sample-cluster/src/main/resources/application.conf
-   :language: none
+.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/resources/application.conf#cluster

 To enable cluster capabilities in your Akka project you should, at a minimum, add the :ref:`remoting-scala`
 settings, but with ``akka.cluster.ClusterActorRefProvider``.
-The ``akka.cluster.seed-nodes`` and cluster extension should normally also be added to your
-``application.conf`` file.
+The ``akka.cluster.seed-nodes`` should normally also be added to your ``application.conf`` file.

 The seed nodes are configured contact points for initial, automatic, join of the cluster.

@@ -265,6 +263,8 @@ This is how the curve looks like for ``acceptable-heartbeat-pause`` configured to 3 seconds.

 .. image:: images/phi3.png

+.. _cluster_aware_routers_scala:
+
 Cluster Aware Routers
 ^^^^^^^^^^^^^^^^^^^^^

@@ -397,6 +397,97 @@ service nodes and 1 client::

 .. note:: The above example, especially the last part, will be simplified when the cluster
    handles automatic actor partitioning.

+Cluster Metrics
+^^^^^^^^^^^^^^^
+
+The member nodes of the cluster collect system health metrics and publish them to other nodes and to
+registered subscribers. This information is primarily used for load-balancing routers.
+
+Hyperic Sigar
+-------------
+
+The built-in metrics are gathered from JMX MBeans, and optionally you can use `Hyperic Sigar `_
+for a wider and more accurate range of metrics compared to what can be retrieved from ordinary MBeans.
+Sigar uses a native OS library. To enable usage of Sigar you need to add the directory of the
+native library to ``-Djava.library.path=`` and add the following dependency::
+
+  "org.hyperic" % "sigar" % "@sigarVersion@"
+
+
+Adaptive Load Balancing
+-----------------------
+
+The ``AdaptiveLoadBalancingRouter`` performs load balancing of messages to cluster nodes based on the cluster metrics data.
+It uses random selection of routees with probabilities derived from the remaining capacity of the corresponding node.
+It can be configured to use a specific ``MetricsSelector`` to produce the probabilities, a.k.a. weights:
+
+* ``heap`` / ``HeapMetricsSelector`` - Used and max JVM heap memory. Weights based on remaining heap capacity; (max - used) / max
+* ``load`` / ``SystemLoadAverageMetricsSelector`` - System load average for the past 1 minute, corresponding value can be found in ``top`` of Linux systems. The system is possibly nearing a bottleneck if the system load average is nearing number of cpus/cores. Weights based on remaining load capacity; 1 - (load / processors)
+* ``cpu`` / ``CpuMetricsSelector`` - CPU utilization in percentage, sum of User + Sys + Nice + Wait. Weights based on remaining cpu capacity; 1 - utilization
+* ``mix`` / ``MixMetricsSelector`` - Combines heap, cpu and load. Weights based on mean of remaining capacity of the combined selectors.
+* Any custom implementation of ``akka.cluster.routing.MetricsSelector``
+
+The collected metrics values are smoothed with `exponential weighted moving average `_. In the
+:ref:`cluster_configuration_scala` you can adjust how quickly past data is decayed compared to new data.
+
+Let's take a look at this router in action.
+
+In this example the following imports are used:
+
+.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/factorial/FactorialSample.scala#imports
+
+The backend worker that performs the factorial calculation:
+
+.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/factorial/FactorialSample.scala#backend
+
+The frontend that receives user jobs and delegates to the backends via the router:
+
+.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/factorial/FactorialSample.scala#frontend
+
+
+As you can see, the router is defined in the same way as other routers, and in this case it's configured as follows:
+
+.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/resources/application.conf#adaptive-router
+
+Only the router type ``adaptive`` and the ``metrics-selector`` are specific to this router; everything else works
+in the same way as other routers.
+
+The same type of router could also have been defined in code:
+
+.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/factorial/FactorialSample.scala#router-lookup-in-code
+
+.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/factorial/FactorialSample.scala#router-deploy-in-code
+
+This example is included in ``akka-samples/akka-sample-cluster``
+and you can try it by starting nodes in different terminal windows. For example, starting 3 backend nodes and one frontend::
+
+    sbt
+
+    project akka-sample-cluster-experimental
+
+    run-main sample.cluster.factorial.FactorialBackend 2551
+
+    run-main sample.cluster.factorial.FactorialBackend 2552
+
+    run-main sample.cluster.factorial.FactorialBackend
+
+    run-main sample.cluster.factorial.FactorialFrontend
+
+Press ctrl-c in the terminal window of the frontend to stop the factorial calculations.
+
+Subscribe to Metrics Events
+---------------------------
+
+It's possible to subscribe to the metrics events directly to implement other functionality.
+
+.. includecode:: ../../../akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/factorial/FactorialSample.scala#metrics-listener
+
+Custom Metrics Collector
+------------------------
+
+You can plug in your own metrics collector instead of
+``akka.cluster.SigarMetricsCollector`` or ``akka.cluster.JmxMetricsCollector``. Look at those two implementations
+for inspiration. The implementation class can be defined in the :ref:`cluster_configuration_scala`.
+

 How to Test
 ^^^^^^^^^^^

@@ -488,15 +579,16 @@ Run it without parameters to see instructions about how to use the script::

      leave <node-url> - Sends a request for node with URL to LEAVE the cluster
       down <node-url> - Sends a request for marking node with URL as DOWN
        member-status - Asks the member node for its current status
+             members - Asks the cluster for addresses of current members
+         unreachable - Asks the cluster for addresses of unreachable members
       cluster-status - Asks the cluster for its current status (member ring,
                        unavailable nodes, meta data etc.)
               leader - Asks the cluster who the current leader is
         is-singleton - Checks if the cluster is a singleton cluster (single node cluster)
         is-available - Checks if the member node is available
-         is-running - Checks if the member node is running
-    has-convergence - Checks if there is a cluster convergence
-    Where the <node-url> should be on the format of 'akka://actor-system-name@hostname:port'
+    Where the <node-url> should be on the format of
+    'akka://actor-system-name@hostname:port'

     Examples: bin/akka-cluster localhost:9999 is-available
               bin/akka-cluster localhost:9999 join akka://MySystem@darkstar:2552

@@ -536,7 +628,7 @@ introduce the extra overhead of another thread.

 ::

    # shorter tick-duration of default scheduler when using cluster
-   akka.scheduler.tick-duration.tick-duration = 33ms
+   akka.scheduler.tick-duration = 33ms

diff --git a/akka-docs/rst/cluster/cluster.rst b/akka-docs/rst/cluster/cluster.rst
index 1190da953a..dfcb4f0a42 100644
--- a/akka-docs/rst/cluster/cluster.rst
+++ b/akka-docs/rst/cluster/cluster.rst
@@ -84,9 +84,9 @@ Gossip

 The cluster membership used in Akka is based on Amazon's `Dynamo`_ system and
 particularly the approach taken in Basho's `Riak`_ distributed database. Cluster
 membership is communicated using a `Gossip Protocol`_, where the current
-state of the cluster is gossiped randomly through the cluster. Joining a cluster
-is initiated by issuing a ``Join`` command to one of the nodes in the cluster to
-join.
+state of the cluster is gossiped randomly through the cluster, with preference to
+members that have not seen the latest version. Joining a cluster is initiated
+by issuing a ``Join`` command to one of the nodes in the cluster to join.

 .. _Gossip Protocol: http://en.wikipedia.org/wiki/Gossip_protocol
 .. _Dynamo: http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf

@@ -209,8 +209,7 @@ node to initiate a round of gossip with. The choice of node is random but can
 also include extra gossiping nodes with either newer or older state versions.

 The gossip overview contains the current state version for all nodes and also a
-list of unreachable nodes. Whenever a node receives a gossip overview it updates
-the `Failure Detector`_ with the liveness information.
+list of unreachable nodes.

 The nodes defined as ``seed`` nodes are just regular member nodes whose only
 "special role" is to function as contact points in the cluster.
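 For illustration, a join can also be initiated programmatically rather than through
 configured seed nodes; a minimal Java sketch, assuming the ``Cluster`` extension's
 Java API (verify the exact signatures against the current API docs)::

     import akka.actor.ActorSystem;
     import akka.actor.Address;
     import akka.cluster.Cluster;

     ActorSystem system = ActorSystem.create("ClusterSystem");
     // address of a node already in (or forming) the cluster
     Address contactPoint = new Address("akka", "ClusterSystem", "darkstar", 2552);
     Cluster.get(system).join(contactPoint);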
diff --git a/akka-docs/rst/common/code/docs/circuitbreaker/CircuitBreakerDocSpec.scala b/akka-docs/rst/common/code/docs/circuitbreaker/CircuitBreakerDocSpec.scala
index 9d279f0df0..55d5cfb657 100644
--- a/akka-docs/rst/common/code/docs/circuitbreaker/CircuitBreakerDocSpec.scala
+++ b/akka-docs/rst/common/code/docs/circuitbreaker/CircuitBreakerDocSpec.scala
@@ -5,7 +5,7 @@ package docs.circuitbreaker

 //#imports1
-import scala.concurrent.util.duration._ // small d is important here
+import scala.concurrent.duration._
 import akka.pattern.CircuitBreaker
 import akka.pattern.pipe
 import akka.actor.Actor

diff --git a/akka-docs/rst/common/code/docs/circuitbreaker/DangerousJavaActor.java b/akka-docs/rst/common/code/docs/circuitbreaker/DangerousJavaActor.java
index dbaa9b4100..f3347937fc 100644
--- a/akka-docs/rst/common/code/docs/circuitbreaker/DangerousJavaActor.java
+++ b/akka-docs/rst/common/code/docs/circuitbreaker/DangerousJavaActor.java
@@ -8,7 +8,7 @@ package docs.circuitbreaker;

 import akka.actor.UntypedActor;
 import scala.concurrent.Future;
 import akka.event.LoggingAdapter;
-import scala.concurrent.util.Duration;
+import scala.concurrent.duration.Duration;
 import akka.pattern.CircuitBreaker;
 import akka.event.Logging;

@@ -29,10 +29,9 @@ public class DangerousJavaActor extends UntypedActor {
     this.breaker = new CircuitBreaker(
       getContext().dispatcher(), getContext().system().scheduler(),
       5, Duration.create(10, "s"), Duration.create(1, "m"))
-      .onOpen(new Callable<Object>() {
-        public Object call() throws Exception {
+      .onOpen(new Runnable() {
+        public void run() {
           notifyMeOnOpen();
-          return null;
         }
       });
   }

diff --git a/akka-docs/rst/common/code/docs/duration/Java.java b/akka-docs/rst/common/code/docs/duration/Java.java
index 06bea4d3e3..cd46c2c822 100644
--- a/akka-docs/rst/common/code/docs/duration/Java.java
+++ b/akka-docs/rst/common/code/docs/duration/Java.java
@@ -5,15 +5,15 @@ package docs.duration;

 //#import
-import scala.concurrent.util.Duration;
-import scala.concurrent.util.Deadline;
+import scala.concurrent.duration.Duration;
+import scala.concurrent.duration.Deadline;
 //#import

 class Java {
   public void demo() {
     //#dsl
     final Duration fivesec = Duration.create(5, "seconds");
-    final Duration threemillis = Duration.parse("3 millis");
+    final Duration threemillis = Duration.create("3 millis");
     final Duration diff = fivesec.minus(threemillis);
     assert diff.lt(fivesec);
     assert Duration.Zero().lt(Duration.Inf());

diff --git a/akka-docs/rst/common/code/docs/duration/Sample.scala b/akka-docs/rst/common/code/docs/duration/Sample.scala
index cd559ccfee..d374313900 100644
--- a/akka-docs/rst/common/code/docs/duration/Sample.scala
+++ b/akka-docs/rst/common/code/docs/duration/Sample.scala
@@ -4,9 +4,11 @@

 package docs.duration

+import language.postfixOps
+
 object Scala {
   //#dsl
-  import scala.concurrent.util.duration._ // notice the small d
+  import scala.concurrent.duration._

   val fivesec = 5.seconds
   val threemillis = 3.millis

diff --git a/akka-docs/rst/common/duration.rst b/akka-docs/rst/common/duration.rst
index c159c99a8c..97136d48b3 100644
--- a/akka-docs/rst/common/duration.rst
+++ b/akka-docs/rst/common/duration.rst
@@ -5,7 +5,7 @@ Duration
 ########

 Durations are used throughout the Akka library, wherefore this concept is
-represented by a special data type, :class:`scala.concurrent.util.Duration`.
+represented by a special data type, :class:`scala.concurrent.duration.Duration`.
Values of this type may represent infinite (:obj:`Duration.Inf`, :obj:`Duration.MinusInf`) or finite durations, or be :obj:`Duration.Undefined`. diff --git a/akka-docs/rst/dev/developer-guidelines.rst b/akka-docs/rst/dev/developer-guidelines.rst index 903f2d64d9..c8acfe33cc 100644 --- a/akka-docs/rst/dev/developer-guidelines.rst +++ b/akka-docs/rst/dev/developer-guidelines.rst @@ -3,6 +3,10 @@ Developer Guidelines ==================== +.. note:: + + First read `The Akka Contributor Guidelines `_ . + Code Style ---------- @@ -51,7 +55,7 @@ There is a testing standard that should be followed: `Ticket001Spec `_. It enables assertions concerning replies received and their timing, there is more documentation in the :ref:`akka-testkit` module. +There is a useful test kit for testing actors: `akka.util.TestKit <@github@/akka-testkit/src/main/scala/akka/testkit/TestKit.scala>`_. It enables assertions concerning replies received and their timing, there is more documentation in the :ref:`akka-testkit` module. Multi-JVM Testing ^^^^^^^^^^^^^^^^^ diff --git a/akka-docs/rst/dev/documentation.rst b/akka-docs/rst/dev/documentation.rst index ad8ff244ae..b990a6bbf3 100644 --- a/akka-docs/rst/dev/documentation.rst +++ b/akka-docs/rst/dev/documentation.rst @@ -17,13 +17,12 @@ built using `Sphinx`_. Sphinx ====== -More to come... - +For more details see `The Sphinx Documentation `_ reStructuredText ================ -More to come... +For more details see `The reST Quickref `_ Sections -------- @@ -75,16 +74,17 @@ First install `Sphinx`_. See below. Building -------- -:: +For the html version of the docs:: - cd akka-docs + sbt sphinx:generate-html - make html - open _build/html/index.html + open /akka-docs/target/sphinx/html/index.html - make pdf - open _build/latex/Akka.pdf +For the pdf version of the docs:: + sbt sphinx:generate-pdf + + open /akka-docs/target/sphinx/latex/Akka.pdf Installing Sphinx on OS X ------------------------- @@ -127,7 +127,7 @@ Add texlive bin to $PATH: :: - /usr/local/texlive/2010basic/bin/universal-darwin + /usr/local/texlive/2012basic/bin/universal-darwin Add missing tex packages: @@ -140,10 +140,3 @@ Add missing tex packages: sudo tlmgr install wrapfig sudo tlmgr install helvetic sudo tlmgr install courier - -Link the akka pygments style: - -:: - - cd /usr/local/Cellar/python/2.7.1/lib/python2.7/site-packages/pygments/styles - ln -s /path/to/akka/akka-docs/themes/akka/pygments/akka.py akka.py diff --git a/akka-docs/rst/dev/multi-jvm-testing.rst b/akka-docs/rst/dev/multi-jvm-testing.rst index 8157e6bd84..bee19cbe71 100644 --- a/akka-docs/rst/dev/multi-jvm-testing.rst +++ b/akka-docs/rst/dev/multi-jvm-testing.rst @@ -18,52 +18,65 @@ You can add it as a plugin by adding the following to your project/plugins.sbt: .. includecode:: ../../../project/plugins.sbt#sbt-multi-jvm You can then add multi-JVM testing to ``project/Build.scala`` by including the ``MultiJvm`` -settings and config. For example, here is an example of how the akka-remote-tests project adds -multi-JVM testing (Simplified for clarity): +settings and config. Please note that MultiJvm test sources are located in ``src/multi-jvm/...``, +and not in ``src/test/...``. + +Here is an example Build.scala file that uses the MultiJvm plugin: .. 
parsed-literal:: import sbt._ import Keys._ import com.typesafe.sbt.SbtMultiJvm - import com.typesafe.sbt.SbtMultiJvm.MultiJvmKeys.{ MultiJvm, extraOptions } + import com.typesafe.sbt.SbtMultiJvm.MultiJvmKeys.{ MultiJvm } - object AkkaBuild extends Build { + object ExampleBuild extends Build { - lazy val remoteTests = Project( - id = "akka-remote-tests", - base = file("akka-remote-tests"), - dependencies = Seq(remote, actorTests % "test->test", - testkit % "test->test"), - settings = defaultSettings ++ Seq( - // disable parallel tests - parallelExecution in Test := false, - extraOptions in MultiJvm <<= (sourceDirectory in MultiJvm) { src => - (name: String) => (src ** (name + ".conf")).get. - headOption.map("-Dakka.config=" + _.absolutePath).toSeq - }, - executeTests in Test <<= ((executeTests in Test), - (executeTests in MultiJvm)) map { - case ((_, testResults), (_, multiJvmResults)) => - val results = testResults ++ multiJvmResults - (Tests.overall(results.values), results) - } - ) - ) configs (MultiJvm) + lazy val buildSettings = Defaults.defaultSettings ++ multiJvmSettings ++ Seq( + organization := "example", + version := "1.0", + scalaVersion := "@scalaVersion@", + // make sure that the artifacts don't have the scala version in the name + crossPaths := false + ) - lazy val buildSettings = Defaults.defaultSettings ++ - SbtMultiJvm.multiJvmSettings ++ Seq( - organization := "com.typesafe.akka", - version := "@version@", - scalaVersion := "@scalaVersion@", - crossPaths := false - ) + lazy val example = Project( + id = "example", + base = file("."), + settings = buildSettings ++ + Seq(libraryDependencies ++= Dependencies.example) + ) configs(MultiJvm) - lazy val defaultSettings = buildSettings ++ Seq( - resolvers += "Typesafe Repo" at "http://repo.typesafe.com/typesafe/releases/" - ) + lazy val multiJvmSettings = SbtMultiJvm.multiJvmSettings ++ Seq( + // make sure that MultiJvm test are compiled by the default test compilation + compile in MultiJvm <<= (compile in MultiJvm) triggeredBy (compile in Test), + // disable parallel tests + parallelExecution in Test := false, + // make sure that MultiJvm tests are executed by the default test target + executeTests in Test <<= + ((executeTests in Test), (executeTests in MultiJvm)) map { + case ((_, testResults), (_, multiJvmResults)) => + val results = testResults ++ multiJvmResults + (Tests.overall(results.values), results) + } + ) - } + object Dependencies { + val example = Seq( + // ---- application dependencies ---- + "com.typesafe.akka" %% "akka-actor" % "@version@" @crossString@, + "com.typesafe.akka" %% "akka-remote" % "@version@" @crossString@, + + // ---- test dependencies ---- + "com.typesafe.akka" %% "akka-testkit" % "@version@" % + "test" cross CrossVersion.full, + "com.typesafe.akka" %% "akka-remote-tests-experimental" % "@version@" % + "test" cross CrossVersion.full, + "org.scalatest" %% "scalatest" % "1.8-B2" % "test" cross CrossVersion.full, + "junit" % "junit" % "4.5" % "test" + ) + } + } You can specify JVM options for the forked JVMs:: @@ -73,7 +86,7 @@ You can specify JVM options for the forked JVMs:: Running tests ============= -The multi-jvm tasks are similar to the normal tasks: ``test``, ``test-only``, +The multi-JVM tasks are similar to the normal tasks: ``test``, ``test-only``, and ``run``, but are under the ``multi-jvm`` configuration. So in Akka, to run all the multi-JVM tests in the akka-remote project use (at @@ -111,8 +124,8 @@ options after the test names and ``--``. 
For example:

Creating application tests
==========================

-The tests are discovered, and combined, through a naming convention. MultiJvm tests are
-located in ``src/multi-jvm/scala`` directory. A test is named with the following pattern:
+The tests are discovered, and combined, through a naming convention. MultiJvm test sources
+are located in ``src/multi-jvm/...``. A test is named with the following pattern:

 .. code-block:: none

@@ -162,14 +175,26 @@ spawned, one for each node. It will look like this:

     [success] Total time: ...

-Naming
-======
+Changing Defaults
+=================
+
+You can change the name of the multi-JVM test source directory by adding the following
+configuration to your project:
+
+.. code-block:: none
+
+    unmanagedSourceDirectories in MultiJvm <<=
+       Seq(baseDirectory(_ / "src/some_directory_here")).join
+

 You can change what the ``MultiJvm`` identifier is. For example, to change it to
-``ClusterTest`` use the ``multiJvmMarker`` setting::
+``ClusterTest`` use the ``multiJvmMarker`` setting:
+
+.. code-block:: none

     multiJvmMarker in MultiJvm := "ClusterTest"

+
 Your tests should now be named ``{TestName}ClusterTest{NodeName}``.

diff --git a/akka-docs/rst/dev/multi-node-testing.rst b/akka-docs/rst/dev/multi-node-testing.rst
index eca5139a9a..b098317054 100644
--- a/akka-docs/rst/dev/multi-node-testing.rst
+++ b/akka-docs/rst/dev/multi-node-testing.rst
@@ -207,6 +207,9 @@ surprising ways.

 * Don't issue a shutdown of the first node. The first node is the controller and if it shuts down your test will break.

+ * To be able to use ``blackhole``, ``passThrough``, and ``throttle`` you must activate the ``TestConductorTransport``
+   by specifying ``testTransport(on = true)`` in your MultiNodeConfig.
+
 * Throttling, shutdown and other failure injections can only be done from the first node, which again is the controller.

 * Don't ask for the address of a node using ``node(address)`` after the node has been shut down. Grab the address before

diff --git a/akka-docs/rst/general/actor-systems.rst b/akka-docs/rst/general/actor-systems.rst
index 1b7d6a7759..22768a7342 100644
--- a/akka-docs/rst/general/actor-systems.rst
+++ b/akka-docs/rst/general/actor-systems.rst
@@ -89,10 +89,9 @@ Actor Best Practices
    bothering everyone else needlessly and avoid hogging resources. Translated
    to programming this means to process events and generate responses (or more
    requests) in an event-driven manner. Actors should not block (i.e. passively
-   wait while occupying a Thread) on some external entity, which might be a
-   lock, a network socket, etc. The blocking operations should be done in some
-   special-cased thread which sends messages to the actors which shall act on
-   them.
+   wait while occupying a Thread) on some external entity—which might be a
+   lock, a network socket, etc.—unless it is unavoidable; in the latter case
+   see below.

 #. Do not pass mutable objects between actors. In order to ensure that, prefer
    immutable messages. If the encapsulation of actors is broken by exposing

@@ -109,8 +108,55 @@

 #. Top-level actors are the innermost part of your Error Kernel, so create
    them sparingly and prefer truly hierarchical systems. This has benefits
    wrt. fault-handling (both considering the granularity of configuration and the
+   performance) and it also reduces the strain on the guardian actor, which is
+   a single point of contention if over-used.
+
+Blocking Needs Careful Management
+---------------------------------
+
+In some cases it is unavoidable to do blocking operations, i.e. to put a thread
+to sleep for an indeterminate time, waiting for an external event to occur.
+Examples are legacy RDBMS drivers or messaging APIs, and the underlying reason
+is typically that (network) I/O occurs under the covers. When facing this, you
+may be tempted to just wrap the blocking call inside a :class:`Future` and work
+with that instead, but this strategy is too simple: you are quite likely to
+find bottlenecks or run out of memory or threads when the application runs
+under increased load.
+
+The non-exhaustive list of adequate solutions to the “blocking problem”
+includes the following suggestions:
+
+ - Do the blocking call within an actor (or a set of actors managed by a router
+   [:ref:`Java <routing-java>`, :ref:`Scala <routing-scala>`]), making sure to
+   configure a thread pool which is either dedicated for this purpose or
+   sufficiently sized.
+
+ - Do the blocking call within a :class:`Future`, ensuring an upper bound on
+   the number of such calls at any point in time (submitting an unbounded
+   number of tasks of this nature will exhaust your memory or thread limits).
+
+ - Do the blocking call within a :class:`Future`, providing a thread pool with
+   an upper limit on the number of threads which is appropriate for the
+   hardware on which the application runs.
+
+ - Dedicate a single thread to manage a set of blocking resources (e.g. a NIO
+   selector driving multiple channels) and dispatch events as they occur as
+   actor messages.
+
+The first possibility is especially well-suited for resources which are
+single-threaded in nature, like database handles which traditionally can only
+execute one outstanding query at a time and use internal synchronization to
+ensure this. A common pattern is to create a router for N actors, each of which
+wraps a single DB connection and handles queries as sent to the router. The
+number N must then be tuned for maximum throughput, which will vary depending
+on which DBMS is deployed on what hardware.
+
+.. note::
+
+   Configuring thread pools is a task best delegated to Akka, simply configure
+   it in the ``application.conf`` and instantiate through an :class:`ActorSystem`
+   [:ref:`Java <dispatchers-java>`, :ref:`Scala <dispatchers-scala>`]

 What you should not concern yourself with
 -----------------------------------------

diff --git a/akka-docs/rst/general/jmm.rst b/akka-docs/rst/general/jmm.rst
index 085a347451..dc0c87e2a4 100644
--- a/akka-docs/rst/general/jmm.rst
+++ b/akka-docs/rst/general/jmm.rst
@@ -13,9 +13,9 @@ Prior to Java 5, the Java Memory Model (JMM) was ill defined. It was possible to
 shared memory was accessed by multiple threads, such as:

 * a thread not seeing values written by other threads: a visibility problem
-* a thread observing 'impossible' behavior of other threads, caused by instructions not being executed in the order
-
-expected: an instruction reordering problem.
+* a thread observing 'impossible' behavior of other threads, caused by
+  instructions not being executed in the order expected: an instruction
+  reordering problem.

 With the implementation of JSR 133 in Java 5, a lot of these issues have been resolved.
The JMM is a set of rules based on the "happens-before" relation, which constrain when one memory access must happen before another, and conversely, @@ -120,4 +120,4 @@ Since Akka runs on the JVM there are still some rules to be followed. } } -* Messages **should** be immutable, this is to avoid the shared mutable state trap. \ No newline at end of file +* Messages **should** be immutable, this is to avoid the shared mutable state trap. diff --git a/akka-docs/rst/general/supervision.rst b/akka-docs/rst/general/supervision.rst index c28bbfc4f2..9659d3f5cd 100644 --- a/akka-docs/rst/general/supervision.rst +++ b/akka-docs/rst/general/supervision.rst @@ -189,6 +189,13 @@ external resource, which may also be one of its own children. If a third party terminates a child by way of the ``system.stop(child)`` method or sending a :class:`PoisonPill`, the supervisor might well be affected. +.. warning:: + + DeathWatch for Akka Remote does not (yet) get triggered by connection failures – + which means that if the parent node or the network goes down, nobody will get notified. + This feature may be added in a future release of Akka Remoting. + Akka Cluster, however, has such functionality. + One-For-One Strategy vs. All-For-One Strategy --------------------------------------------- diff --git a/akka-docs/rst/index.rst b/akka-docs/rst/index.rst index 05b57dc816..57f40438fa 100644 --- a/akka-docs/rst/index.rst +++ b/akka-docs/rst/index.rst @@ -20,7 +20,7 @@ Links * :ref:`migration` -* `Downloads `_ +* `Downloads `_ * `Source Code `_ diff --git a/akka-docs/rst/intro/getting-started.rst b/akka-docs/rst/intro/getting-started.rst index fdd6169abd..599268ad6b 100644 --- a/akka-docs/rst/intro/getting-started.rst +++ b/akka-docs/rst/intro/getting-started.rst @@ -51,7 +51,7 @@ How to see the JARs dependencies of each Akka module is described in the Using a release distribution ---------------------------- -Download the release you need from http://akka.io/downloads and unzip it. +Download the release you need from http://typesafe.com/stack/downloads/akka and unzip it. Using a snapshot version ------------------------ diff --git a/akka-docs/rst/intro/what-is-akka.rst b/akka-docs/rst/intro/what-is-akka.rst index dc351b0d22..4f393fda5d 100644 --- a/akka-docs/rst/intro/what-is-akka.rst +++ b/akka-docs/rst/intro/what-is-akka.rst @@ -18,7 +18,10 @@ fault-tolerant applications. Akka is Open Source and available under the Apache 2 License. -Download from http://akka.io/downloads/ +Download from http://typesafe.com/stack/downloads/akka/ + +Please note that all code samples compile, so if you want direct access to the sources, have a look +over at the `Akka Docs Project <@github@/akka-docs/rst>`_. Akka implements a unique hybrid diff --git a/akka-docs/rst/intro/why-akka.rst b/akka-docs/rst/intro/why-akka.rst index 85789fdf19..e11cfee187 100644 --- a/akka-docs/rst/intro/why-akka.rst +++ b/akka-docs/rst/intro/why-akka.rst @@ -24,7 +24,7 @@ and then there's the whole package, the Akka Microkernel, which is a standalone container to deploy your Akka application in. With CPUs growing more and more cores every cycle, Akka is the alternative that provides outstanding performance even if you're only running it on one machine. Akka also supplies a wide array -of concurrency-paradigms, allowing for users to choose the right tool for the +of concurrency-paradigms, allowing users to choose the right tool for the job. 
diff --git a/akka-docs/rst/java/camel.rst b/akka-docs/rst/java/camel.rst
index 429454f25d..4825e4e4a1 100644
--- a/akka-docs/rst/java/camel.rst
+++ b/akka-docs/rst/java/camel.rst
@@ -132,7 +132,7 @@ An ``ActivationTimeoutException`` is thrown if the endpoint could not be activat
 Deactivation of a Consumer or a Producer actor happens when the actor is terminated. For a Consumer, the route to
 the actor is stopped. For a Producer, the `SendProcessor`_ is stopped. A ``DeActivationTimeoutException`` is thrown
 if the associated camel objects could not be deactivated within the specified timeout.

-.. _Camel: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/Camel.scala
+.. _Camel: @github@/akka-camel/src/main/scala/akka/camel/Camel.scala
 .. _CamelContext: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java
 .. _ProducerTemplate: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/ProducerTemplate.java
 .. _SendProcessor: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/processor/SendProcessor.java

@@ -146,7 +146,7 @@ class. For example, the following actor class (Consumer1) implements the `getEndpointUri` method,
 which is declared in the `UntypedConsumerActor`_ class, in order to receive messages from the ``file:data/input/actor``
 Camel endpoint.

-.. _UntypedConsumerActor: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/javaapi/UntypedConsumer.scala
+.. _UntypedConsumerActor: @github@/akka-camel/src/main/scala/akka/camel/javaapi/UntypedConsumer.scala

 .. includecode:: code/docs/camel/Consumer1.java#Consumer1

@@ -156,7 +156,7 @@ actor. Messages consumed by actors from Camel endpoints are of type `CamelMessage`_. These are
 immutable representations of Camel messages.

 .. _file component: http://camel.apache.org/file2.html
-.. _Message: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/CamelMessage.scala
+.. _Message: @github@/akka-camel/src/main/scala/akka/camel/CamelMessage.scala

 Here's another example that sets the endpointUri to

@@ -176,7 +176,7 @@ client the response type should be `CamelMessage`_. For any other response type, new CamelMessage
 object is created by akka-camel with the actor response as message body.

-.. _Message: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/CamelMessage.scala
+.. _Message: @github@/akka-camel/src/main/scala/akka/camel/CamelMessage.scala

.. _camel-acknowledgements-java:

@@ -221,7 +221,7 @@ The timeout on the consumer actor can be overridden with the ``replyTimeout``, a

 .. includecode:: code/docs/camel/Consumer4.java#Consumer4

 .. _Exchange: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/Exchange.java
-.. _ask: http://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/pattern/Patterns.scala
+.. _ask: @github@/akka-actor/src/main/scala/akka/pattern/Patterns.scala

 Producer Actors
 ===============

@@ -296,7 +296,7 @@ For initiating a two-way message exchange, one of the

 .. includecode:: code/docs/camel/RequestBodyActor.java#RequestProducerTemplate

-.. _UntypedProducerActor: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala
+.. _UntypedProducerActor: @github@/akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala
 .. _ProducerTemplate: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/ProducerTemplate.java
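 For orientation, a minimal producer along the lines described above; treat the
 endpoint URI as a hypothetical example::

     import akka.camel.javaapi.UntypedProducerActor;

     public class HttpProducer extends UntypedProducerActor {
       // messages sent to this actor are forwarded to the endpoint,
       // and responses are routed back to the original sender
       public String getEndpointUri() {
         return "http://localhost:8080/hello"; // hypothetical endpoint
       }
     }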
.. _camel-asynchronous-routing-java:

@@ -361,7 +361,7 @@ Akka Camel components

 Akka actors can be accessed from Camel routes using the `actor`_ Camel component. This component can be used to
 access any Akka actor (not only consumer actors) from Camel routes, as described in the following sections.

-.. _actor: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/internal/component/ActorComponent.scala
+.. _actor: @github@/akka-camel/src/main/scala/akka/camel/internal/component/ActorComponent.scala

.. _access-to-actors-java:

@@ -372,7 +372,7 @@ To access actors from custom Camel routes, the `actor`_ Camel component should be used. It fully
 supports Camel's `asynchronous routing engine`_.

-.. _actor: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/internal/component/ActorComponent.scala
+.. _actor: @github@/akka-camel/src/main/scala/akka/camel/internal/component/ActorComponent.scala
 .. _asynchronous routing engine: http://camel.apache.org/asynchronous-routing-engine.html

 This component accepts the following endpoint URI format:

@@ -388,6 +388,8 @@ URI options

 The following URI options are supported:

+.. tabularcolumns:: |l|l|l|L|
+
 +--------------+----------+---------+------------------------------------------------+
 | Name         | Type     | Default | Description                                    |
 +==============+==========+=========+================================================+

diff --git a/akka-docs/rst/java/code/docs/actor/FaultHandlingTestBase.java b/akka-docs/rst/java/code/docs/actor/FaultHandlingTestBase.java
index 7db5715e31..4494bb0c51 100644
--- a/akka-docs/rst/java/code/docs/actor/FaultHandlingTestBase.java
+++ b/akka-docs/rst/java/code/docs/actor/FaultHandlingTestBase.java
@@ -7,14 +7,18 @@ package docs.actor;

 import akka.actor.ActorRef;
 import akka.actor.ActorSystem;
 import akka.actor.SupervisorStrategy;
-import static akka.actor.SupervisorStrategy.*;
+import static akka.actor.SupervisorStrategy.resume;
+import static akka.actor.SupervisorStrategy.restart;
+import static akka.actor.SupervisorStrategy.stop;
+import static akka.actor.SupervisorStrategy.escalate;
+import akka.actor.SupervisorStrategy.Directive;
 import akka.actor.OneForOneStrategy;
 import akka.actor.Props;
 import akka.actor.Terminated;
 import akka.actor.UntypedActor;
 import scala.concurrent.Await;
 import static akka.pattern.Patterns.ask;
-import scala.concurrent.util.Duration;
+import scala.concurrent.duration.Duration;
 import akka.testkit.AkkaSpec;
 import akka.testkit.TestProbe;

@@ -23,10 +27,11 @@ import akka.testkit.ErrorFilter;
 import akka.testkit.EventFilter;
 import akka.testkit.TestEvent;
 import static java.util.concurrent.TimeUnit.SECONDS;
+import static akka.japi.Util.immutableSeq;
 import akka.japi.Function;
 import scala.Option;
 import scala.collection.JavaConverters;
-import scala.collection.Seq;
+import scala.collection.immutable.Seq;

 import org.junit.Test;
 import org.junit.BeforeClass;

@@ -41,7 +46,7 @@ public class FaultHandlingTestBase {
   //#strategy
   private static SupervisorStrategy strategy =
-    new OneForOneStrategy(10, Duration.parse("1 minute"),
+    new OneForOneStrategy(10, Duration.create("1 minute"),
       new Function<Throwable, Directive>() {
         @Override
         public Directive apply(Throwable t) {

@@ -81,7 +86,7 @@
   //#strategy2
   private static SupervisorStrategy strategy = new OneForOneStrategy(10,
-    Duration.parse("1 minute"),
+    Duration.create("1 minute"),
minute"), new Function() { @Override public Directive apply(Throwable t) { @@ -215,8 +220,7 @@ public class FaultHandlingTestBase { //#testkit public Seq seq(A... args) { - return JavaConverters.collectionAsScalaIterableConverter( - java.util.Arrays.asList(args)).asScala().toSeq(); + return immutableSeq(args); } //#testkit } diff --git a/akka-docs/rst/java/code/docs/actor/MyReceivedTimeoutUntypedActor.java b/akka-docs/rst/java/code/docs/actor/MyReceivedTimeoutUntypedActor.java index b1fb899be7..1c09272582 100644 --- a/akka-docs/rst/java/code/docs/actor/MyReceivedTimeoutUntypedActor.java +++ b/akka-docs/rst/java/code/docs/actor/MyReceivedTimeoutUntypedActor.java @@ -6,18 +6,23 @@ package docs.actor; //#receive-timeout import akka.actor.ReceiveTimeout; import akka.actor.UntypedActor; -import scala.concurrent.util.Duration; +import scala.concurrent.duration.Duration; public class MyReceivedTimeoutUntypedActor extends UntypedActor { public MyReceivedTimeoutUntypedActor() { - getContext().setReceiveTimeout(Duration.parse("30 seconds")); + // To set an initial delay + getContext().setReceiveTimeout(Duration.create("30 seconds")); } public void onReceive(Object message) { if (message.equals("Hello")) { + // To set in a response to a message + getContext().setReceiveTimeout(Duration.create("10 seconds")); getSender().tell("Hello world", getSelf()); } else if (message == ReceiveTimeout.getInstance()) { + // To turn it off + getContext().setReceiveTimeout(Duration.Undefined()); throw new RuntimeException("received timeout"); } else { unhandled(message); diff --git a/akka-docs/rst/java/code/docs/actor/SchedulerDocTestBase.java b/akka-docs/rst/java/code/docs/actor/SchedulerDocTestBase.java index 34f56715d6..0b3d55f33f 100644 --- a/akka-docs/rst/java/code/docs/actor/SchedulerDocTestBase.java +++ b/akka-docs/rst/java/code/docs/actor/SchedulerDocTestBase.java @@ -5,7 +5,7 @@ package docs.actor; //#imports1 import akka.actor.Props; -import scala.concurrent.util.Duration; +import scala.concurrent.duration.Duration; import java.util.concurrent.TimeUnit; //#imports1 diff --git a/akka-docs/rst/java/code/docs/actor/TypedActorDocTestBase.java b/akka-docs/rst/java/code/docs/actor/TypedActorDocTestBase.java index 3f0e2bdb09..35c8441263 100644 --- a/akka-docs/rst/java/code/docs/actor/TypedActorDocTestBase.java +++ b/akka-docs/rst/java/code/docs/actor/TypedActorDocTestBase.java @@ -11,7 +11,7 @@ import akka.japi.*; import akka.dispatch.Futures; import scala.concurrent.Await; import scala.concurrent.Future; -import scala.concurrent.util.Duration; +import scala.concurrent.duration.Duration; import java.util.concurrent.TimeUnit; //#imports diff --git a/akka-docs/rst/java/code/docs/actor/UntypedActorDocTestBase.java b/akka-docs/rst/java/code/docs/actor/UntypedActorDocTestBase.java index 95da8a7cd1..fded2ddb3b 100644 --- a/akka-docs/rst/java/code/docs/actor/UntypedActorDocTestBase.java +++ b/akka-docs/rst/java/code/docs/actor/UntypedActorDocTestBase.java @@ -14,7 +14,7 @@ import scala.concurrent.Future; import akka.dispatch.Futures; import akka.dispatch.Mapper; import scala.concurrent.Await; -import scala.concurrent.util.Duration; +import scala.concurrent.duration.Duration; import akka.util.Timeout; //#import-future @@ -35,7 +35,7 @@ import akka.actor.Terminated; import static akka.pattern.Patterns.gracefulStop; import scala.concurrent.Future; import scala.concurrent.Await; -import scala.concurrent.util.Duration; +import scala.concurrent.duration.Duration; import akka.pattern.AskTimeoutException; //#import-gracefulStop 
@@ -44,7 +44,7 @@ import static akka.pattern.Patterns.ask;
 import static akka.pattern.Patterns.pipe;
 import scala.concurrent.Future;
 import akka.dispatch.Futures;
-import scala.concurrent.util.Duration;
+import scala.concurrent.duration.Duration;
 import akka.util.Timeout;
 import java.util.concurrent.TimeUnit;
 import java.util.ArrayList;

@@ -192,7 +192,7 @@ public class UntypedActorDocTestBase {
     ActorSystem system = ActorSystem.create("MySystem");
     ActorRef myActor = system.actorOf(new Props(WatchActor.class));
     Future<Object> future = Patterns.ask(myActor, "kill", 1000);
-    assert Await.result(future, Duration.parse("1 second")).equals("finished");
+    assert Await.result(future, Duration.create("1 second")).equals("finished");
     system.shutdown();
   }

@@ -351,24 +351,23 @@ public class UntypedActorDocTestBase {
   static
   //#stash
   public class ActorWithProtocol extends UntypedActorWithStash {
-    private Boolean isOpen = false;
     public void onReceive(Object msg) {
-      if (isOpen) {
-        if (msg.equals("write")) {
-          // do writing...
-        } else if (msg.equals("close")) {
-          unstashAll();
-          isOpen = false;
-        } else {
-          stash();
-        }
+      if (msg.equals("open")) {
+        unstashAll();
+        getContext().become(new Procedure<Object>() {
+          public void apply(Object msg) throws Exception {
+            if (msg.equals("write")) {
+              // do writing...
+            } else if (msg.equals("close")) {
+              unstashAll();
+              getContext().unbecome();
+            } else {
+              stash();
+            }
+          }
+        }, false); // add behavior on top instead of replacing
       } else {
-        if (msg.equals("open")) {
-          unstashAll();
-          isOpen = true;
-        } else {
-          stash();
-        }
+        stash();
       }
     }
   }

diff --git a/akka-docs/rst/java/code/docs/actor/UntypedActorSwapper.java b/akka-docs/rst/java/code/docs/actor/UntypedActorSwapper.java
index c882ac015a..5098278a38 100644
--- a/akka-docs/rst/java/code/docs/actor/UntypedActorSwapper.java
+++ b/akka-docs/rst/java/code/docs/actor/UntypedActorSwapper.java
@@ -32,9 +32,9 @@ public class UntypedActorSwapper {
         @Override
         public void apply(Object message) {
           log.info("Ho");
-          getContext().unbecome(); // resets the latest 'become' (just for fun)
+          getContext().unbecome(); // resets the latest 'become'
         }
-      });
+      }, false); // this signals stacking of the new behavior
     } else {
       unhandled(message);
     }

diff --git a/akka-docs/rst/java/code/docs/actor/japi/FaultHandlingDocSample.java b/akka-docs/rst/java/code/docs/actor/japi/FaultHandlingDocSample.java
index f724cbafbc..5b7a3073c3 100644
--- a/akka-docs/rst/java/code/docs/actor/japi/FaultHandlingDocSample.java
+++ b/akka-docs/rst/java/code/docs/actor/japi/FaultHandlingDocSample.java
@@ -13,7 +13,7 @@ import java.util.Map;

 import akka.actor.*;
 import akka.dispatch.Mapper;
 import akka.japi.Function;
-import scala.concurrent.util.Duration;
+import scala.concurrent.duration.Duration;
 import akka.util.Timeout;
 import akka.event.Logging;
 import akka.event.LoggingAdapter;

@@ -22,7 +22,11 @@ import com.typesafe.config.ConfigFactory;

 import static akka.japi.Util.classTag;

-import static akka.actor.SupervisorStrategy.*;
+import static akka.actor.SupervisorStrategy.resume;
+import static akka.actor.SupervisorStrategy.restart;
+import static akka.actor.SupervisorStrategy.stop;
+import static akka.actor.SupervisorStrategy.escalate;
+import akka.actor.SupervisorStrategy.Directive;

 import static akka.pattern.Patterns.ask;
 import static akka.pattern.Patterns.pipe;

@@ -62,7 +66,7 @@ public class FaultHandlingDocSample {
   public void preStart() {
     // If we don't get any progress within 15 seconds then the service
     // is unavailable
-    getContext().setReceiveTimeout(Duration.parse("15 seconds"));
+    getContext().setReceiveTimeout(Duration.create("15 seconds"));
   }

   public void onReceive(Object msg) {

@@ -237,7 +241,7 @@ public class FaultHandlingDocSample {
   // Restart the storage child when StorageException is thrown.
   // After 3 restarts within 5 seconds it will be stopped.
   private static SupervisorStrategy strategy = new OneForOneStrategy(3,
-    Duration.parse("5 seconds"), new Function<Throwable, Directive>() {
+    Duration.create("5 seconds"), new Function<Throwable, Directive>() {
       @Override
       public Directive apply(Throwable t) {
         if (t instanceof StorageException) {

diff --git a/akka-docs/rst/java/code/docs/camel/ActivationTestBase.java b/akka-docs/rst/java/code/docs/camel/ActivationTestBase.java
index 4347cfb66a..10e369baeb 100644
--- a/akka-docs/rst/java/code/docs/camel/ActivationTestBase.java
+++ b/akka-docs/rst/java/code/docs/camel/ActivationTestBase.java
@@ -8,8 +8,8 @@ package docs.camel;

 import akka.camel.javaapi.UntypedConsumerActor;
 import akka.util.Timeout;
 import scala.concurrent.Future;
-import scala.concurrent.util.Duration;
-import scala.concurrent.util.FiniteDuration;
+import scala.concurrent.duration.Duration;
+import scala.concurrent.duration.FiniteDuration;
 import static java.util.concurrent.TimeUnit.SECONDS;
 //#CamelActivation

diff --git a/akka-docs/rst/java/code/docs/camel/Consumer4.java b/akka-docs/rst/java/code/docs/camel/Consumer4.java
index 2074bc2c78..a41eba3869 100644
--- a/akka-docs/rst/java/code/docs/camel/Consumer4.java
+++ b/akka-docs/rst/java/code/docs/camel/Consumer4.java
@@ -2,8 +2,8 @@ package docs.camel;
 //#Consumer4
 import akka.camel.CamelMessage;
 import akka.camel.javaapi.UntypedConsumerActor;
-import scala.concurrent.util.Duration;
-import scala.concurrent.util.FiniteDuration;
+import scala.concurrent.duration.Duration;
+import scala.concurrent.duration.FiniteDuration;
 import java.util.concurrent.TimeUnit;

diff --git a/akka-docs/rst/java/code/docs/dispatcher/DispatcherDocTestBase.java b/akka-docs/rst/java/code/docs/dispatcher/DispatcherDocTestBase.java
index 04705d524c..5c0c2d1711 100644
--- a/akka-docs/rst/java/code/docs/dispatcher/DispatcherDocTestBase.java
+++ b/akka-docs/rst/java/code/docs/dispatcher/DispatcherDocTestBase.java
@@ -5,10 +5,6 @@ package docs.dispatcher;

 //#imports
 import akka.actor.*;
-import akka.actor.ActorRef;
-import akka.actor.Props;
-import akka.actor.UntypedActor;
-import akka.actor.UntypedActorFactory;
 //#imports

 //#imports-prio

@@ -37,6 +33,7 @@ import org.junit.After;
 import org.junit.Before;
 import org.junit.Test;
 import scala.Option;
+import scala.concurrent.ExecutionContext;

 import com.typesafe.config.ConfigFactory;

@@ -75,6 +72,14 @@ public class DispatcherDocTestBase {
         .withDispatcher("my-pinned-dispatcher"));
     //#defining-pinned-dispatcher
   }
+
+  public void compileLookup() {
+    //#lookup
+    // this is scala.concurrent.ExecutionContext
+    // for use with Futures, Scheduler, etc.
+    final ExecutionContext ex = system.dispatchers().lookup("my-dispatcher");
+    //#lookup
+  }

   @Test
   public void priorityDispatcher() throws Exception {

diff --git a/akka-docs/rst/java/code/docs/event/LoggingDocTestBase.java b/akka-docs/rst/java/code/docs/event/LoggingDocTestBase.java
index 54847c4f66..3e3fa46844 100644
--- a/akka-docs/rst/java/code/docs/event/LoggingDocTestBase.java
+++ b/akka-docs/rst/java/code/docs/event/LoggingDocTestBase.java
@@ -119,5 +119,4 @@ public class LoggingDocTestBase {
     }
   }
   //#deadletter-actor
-
 }

diff --git a/akka-docs/rst/java/code/docs/extension/SettingsExtensionDocTestBase.java b/akka-docs/rst/java/code/docs/extension/SettingsExtensionDocTestBase.java
index c4134413ac..72836b503d 100644
--- a/akka-docs/rst/java/code/docs/extension/SettingsExtensionDocTestBase.java
+++ b/akka-docs/rst/java/code/docs/extension/SettingsExtensionDocTestBase.java
@@ -9,7 +9,7 @@ import akka.actor.AbstractExtensionId;
 import akka.actor.ExtensionIdProvider;
 import akka.actor.ActorSystem;
 import akka.actor.ExtendedActorSystem;
-import scala.concurrent.util.Duration;
+import scala.concurrent.duration.Duration;
 import com.typesafe.config.Config;
 import java.util.concurrent.TimeUnit;

diff --git a/akka-docs/rst/java/code/docs/future/FutureDocTestBase.java b/akka-docs/rst/java/code/docs/future/FutureDocTestBase.java
index 7b1e1f2be5..975814ded2 100644
--- a/akka-docs/rst/java/code/docs/future/FutureDocTestBase.java
+++ b/akka-docs/rst/java/code/docs/future/FutureDocTestBase.java
@@ -12,7 +12,7 @@ import akka.util.Timeout;
 //#imports1

 //#imports2
-import scala.concurrent.util.Duration;
+import scala.concurrent.duration.Duration;
 import akka.japi.Function;
 import java.util.concurrent.Callable;
 import static akka.dispatch.Futures.future;

@@ -43,10 +43,10 @@ import scala.concurrent.ExecutionContext$;
 //#imports8

 import static akka.pattern.Patterns.after;
+import java.util.Arrays;
 //#imports8

 import java.util.ArrayList;
-import java.util.Arrays;
 import java.util.List;
 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;

@@ -79,6 +79,21 @@ public class FutureDocTestBase {
     system.shutdown();
   }

+  public final static class PrintResult<T> extends OnSuccess<T> {
+    @Override public final void onSuccess(T t) {
+      // print t
+    }
+  }
+
+  public final static class Demo {
+    //#print-result
+    public final static class PrintResult<T> extends OnSuccess<T> {
+      @Override public final void onSuccess(T t) {
+        System.out.println(t);
+      }
+    }
+    //#print-result
+  }

   @SuppressWarnings("unchecked")
   @Test
   public void useCustomExecutionContext() throws Exception {
     ExecutorService yourExecutorServiceGoesHere = Executors.newSingleThreadExecutor();
     //#diy-execution-context

@@ -102,6 +117,9 @@
     Future<Object> future = Patterns.ask(actor, msg, timeout);
     String result = (String) Await.result(future, timeout.duration());
     //#ask-blocking
+    //#pipe-to
+    akka.pattern.Patterns.pipe(future, system.dispatcher()).to(actor);
+    //#pipe-to
     assertEquals("HELLO", result);
   }

@@ -113,9 +131,11 @@
         return "Hello" + "World";
       }
     }, system.dispatcher());
-    String result = (String) Await.result(f, Duration.create(1, SECONDS));
+
+    f.onSuccess(new PrintResult<String>(), system.dispatcher());
     //#future-eval
-    assertEquals("HelloWorld", result);
+    String result = (String) Await.result(f, Duration.create(5, SECONDS));
+    assertEquals("HelloWorld", result);
   }

@@ -135,9 +155,10 @@
       }
     }, ec);
@@ -113,9 +131,11 @@ public class FutureDocTestBase { return "Hello" + "World"; } }, system.dispatcher()); - String result = (String) Await.result(f, Duration.create(1, SECONDS)); + + f.onSuccess(new PrintResult<String>(), system.dispatcher()); //#future-eval - assertEquals("HelloWorld", result); + String result = (String) Await.result(f, Duration.create(5, SECONDS)); + assertEquals("HelloWorld", result); }
@Test @@ -135,9 +155,10 @@ public class FutureDocTestBase { } }, ec); - int result = Await.result(f2, Duration.create(1, SECONDS)); - assertEquals(10, result); + f2.onSuccess(new PrintResult<Integer>(), system.dispatcher()); //#map + int result = Await.result(f2, Duration.create(5, SECONDS)); + assertEquals(10, result); }
@Test @@ -158,8 +179,9 @@ public class FutureDocTestBase { } }, ec); + f2.onSuccess(new PrintResult<Integer>(), system.dispatcher()); //#map2 - int result = Await.result(f2, Duration.create(1, SECONDS)); + int result = Await.result(f2, Duration.create(5, SECONDS)); assertEquals(10, result); }
@@ -174,7 +196,8 @@ public class FutureDocTestBase { } }, ec); - Thread.sleep(100); + // Thread.sleep is only here to prove a point + Thread.sleep(100); // Do not use this in your code
Future<Integer> f2 = f1.map(new Mapper<String, Integer>() { public Integer apply(String s) { } }, ec); + f2.onSuccess(new PrintResult<Integer>(), system.dispatcher()); //#map3 - int result = Await.result(f2, Duration.create(1, SECONDS)); + int result = Await.result(f2, Duration.create(5, SECONDS)); assertEquals(10, result); }
@@ -208,8 +232,9 @@ public class FutureDocTestBase { } }, ec); + f2.onSuccess(new PrintResult<Integer>(), system.dispatcher()); //#flat-map - int result = Await.result(f2, Duration.create(1, SECONDS)); + int result = Await.result(f2, Duration.create(5, SECONDS)); assertEquals(10, result); }
@@ -238,8 +263,9 @@ public class FutureDocTestBase { } }, ec); - long result = Await.result(futureSum, Duration.create(1, SECONDS)); + futureSum.onSuccess(new PrintResult<Long>(), system.dispatcher()); //#sequence + long result = Await.result(futureSum, Duration.create(5, SECONDS)); assertEquals(3L, result); }
@@ -262,9 +288,10 @@ public class FutureDocTestBase { }, ec); //Returns the sequence of strings as upper case - Iterable<String> result = Await.result(futureResult, Duration.create(1, SECONDS)); - assertEquals(Arrays.asList("A", "B", "C"), result); + futureResult.onSuccess(new PrintResult<Iterable<String>>(), system.dispatcher()); //#traverse + Iterable<String> result = Await.result(futureResult, Duration.create(5, SECONDS)); + assertEquals(Arrays.asList("A", "B", "C"), result); }
@Test @@ -286,9 +313,10 @@ public class FutureDocTestBase { return r + t; //Just concatenate } }, ec); - String result = Await.result(resultFuture, Duration.create(1, SECONDS)); - //#fold + resultFuture.onSuccess(new PrintResult<String>(), system.dispatcher()); + //#fold + String result = Await.result(resultFuture, Duration.create(5, SECONDS)); assertEquals("ab", result); }
@@ -310,8 +338,9 @@ public class FutureDocTestBase { } }, ec); - Object result = Await.result(resultFuture, Duration.create(1, SECONDS)); + resultFuture.onSuccess(new PrintResult<Object>(), system.dispatcher()); //#reduce + Object result = Await.result(resultFuture, Duration.create(5, SECONDS)); assertEquals("ab", result); }
@@ -326,10 +355,10 @@ public class FutureDocTestBase { Future<String> otherFuture = Futures.failed( new IllegalArgumentException("Bang!")); //#failed - Object result = Await.result(future, Duration.create(1, SECONDS)); + Object result = Await.result(future, Duration.create(5, SECONDS)); assertEquals("Yay!", result); Throwable result2 = Await.result(otherFuture.failed(), - Duration.create(1, SECONDS)); + Duration.create(5, SECONDS)); assertEquals("Bang!", result2.getMessage()); }
@@ -399,9 +428,11 @@ public class FutureDocTestBase { throw problem; } }, ec); - int result = Await.result(future, Duration.create(1, SECONDS)); - assertEquals(result, 0); + + future.onSuccess(new PrintResult<Integer>(), system.dispatcher()); //#recover + int result = Await.result(future, Duration.create(5, SECONDS)); + assertEquals(result, 0); }
@Test @@ -425,9 +456,11 @@ public class FutureDocTestBase { throw problem; } }, ec); - int result = Await.result(future, Duration.create(1, SECONDS)); - assertEquals(result, 0); + + future.onSuccess(new PrintResult<Integer>(), system.dispatcher()); //#try-recover + int result = Await.result(future, Duration.create(5, SECONDS)); + assertEquals(result, 0); }
@Test @@ -497,9 +530,10 @@ public class FutureDocTestBase { } }, ec); - String result = Await.result(future3, Duration.create(1, SECONDS)); - assertEquals("foo bar", result); + future3.onSuccess(new PrintResult<String>(), system.dispatcher()); //#zip + String result = Await.result(future3, Duration.create(5, SECONDS)); + assertEquals("foo bar", result); }
{ @@ -509,9 +543,10 @@ public class FutureDocTestBase { Future<String> future3 = Futures.successful("bar"); // Will have "bar" in this case Future<String> future4 = future1.fallbackTo(future2).fallbackTo(future3); - String result = Await.result(future4, Duration.create(1, SECONDS)); - assertEquals("bar", result); + future4.onSuccess(new PrintResult<String>(), system.dispatcher()); //#fallback-to + String result = Await.result(future4, Duration.create(5, SECONDS)); + assertEquals("bar", result); } }
@@ -529,7 +564,7 @@ public class FutureDocTestBase { return "foo"; } }, ec); - Future<String> result = future.either(delayed); + Future<String> result = Futures.firstCompletedOf(Arrays.asList(future, delayed), ec); //#after Await.result(result, Duration.create(2, SECONDS)); }
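For orientation between the two file diffs, a hedged sketch of the `recover` pattern documented in the hunks above (assuming only that an `ExecutionContext` `ec` is available; this mirrors, not extends, the doc sample):

```java
import java.util.concurrent.Callable;
import scala.concurrent.ExecutionContext;
import scala.concurrent.Future;
import akka.dispatch.Futures;
import akka.dispatch.Recover;

public class RecoverSketch {
  public static Future<Integer> run(ExecutionContext ec) {
    return Futures.future(new Callable<Integer>() {
      public Integer call() {
        return 1 / 0; // fails with ArithmeticException
      }
    }, ec).recover(new Recover<Integer>() {
      @Override public Integer recover(Throwable problem) throws Throwable {
        if (problem instanceof ArithmeticException) return 0; // fallback value
        else throw problem; // rethrow anything we cannot handle
      }
    }, ec);
  }
}
```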
diff --git a/akka-docs/rst/java/code/docs/jrouting/CustomRouterDocTestBase.java b/akka-docs/rst/java/code/docs/jrouting/CustomRouterDocTestBase.java index dc42707bfd..239a3c318d 100644 --- a/akka-docs/rst/java/code/docs/jrouting/CustomRouterDocTestBase.java +++ b/akka-docs/rst/java/code/docs/jrouting/CustomRouterDocTestBase.java @@ -11,6 +11,7 @@ import static docs.jrouting.CustomRouterDocTestBase.Message.RepublicanVote; import static org.junit.Assert.assertEquals; import java.util.Arrays; +import java.util.Collections; import java.util.List; import org.junit.After; @@ -19,7 +20,7 @@ import org.junit.Test; import scala.concurrent.Await; import scala.concurrent.Future; -import scala.concurrent.util.Duration; +import scala.concurrent.duration.Duration; import akka.actor.ActorRef; import akka.actor.ActorSystem; import akka.actor.OneForOneStrategy; @@ -68,8 +69,8 @@ public class CustomRouterDocTestBase { public void demonstrateSupervisor() { //#supervision final SupervisorStrategy strategy = - new OneForOneStrategy(5, Duration.parse("1 minute"), - new Class<?>[] { Exception.class }); + new OneForOneStrategy(5, Duration.create("1 minute"), + Collections.<Class<? extends Throwable>>singletonList(Exception.class)); final ActorRef router = system.actorOf(new Props(MyActor.class) .withRouter(new RoundRobinRouter(5).withSupervisorStrategy(strategy))); //#supervision
@@ -179,16 +180,14 @@ public class CustomRouterDocTestBase { //#crRoutingLogic return new CustomRoute() { @Override - public Iterable<Destination> destinationsFor(ActorRef sender, Object msg) { + public scala.collection.immutable.Seq<Destination> destinationsFor(ActorRef sender, Object msg) { switch ((Message) msg) { case DemocratVote: case DemocratCountResult: - return Arrays.asList( - new Destination[] { new Destination(sender, democratActor) }); + return akka.japi.Util.immutableSingletonSeq(new Destination(sender, democratActor)); case RepublicanVote: case RepublicanCountResult: - return Arrays.asList( - new Destination[] { new Destination(sender, republicanActor) }); + return akka.japi.Util.immutableSingletonSeq(new Destination(sender, republicanActor)); default: throw new
IllegalArgumentException("Unknown message: " + msg); } diff --git a/akka-docs/rst/java/code/docs/jrouting/ParentActor.java b/akka-docs/rst/java/code/docs/jrouting/ParentActor.java index c61e9d96f3..e3750bfd23 100644 --- a/akka-docs/rst/java/code/docs/jrouting/ParentActor.java +++ b/akka-docs/rst/java/code/docs/jrouting/ParentActor.java @@ -11,7 +11,7 @@ import akka.routing.SmallestMailboxRouter; import akka.actor.UntypedActor; import akka.actor.ActorRef; import akka.actor.Props; -import scala.concurrent.util.Duration; +import scala.concurrent.duration.Duration; import akka.util.Timeout; import scala.concurrent.Future; import scala.concurrent.Await; diff --git a/akka-docs/rst/java/code/docs/jrouting/RouterViaProgramExample.java b/akka-docs/rst/java/code/docs/jrouting/RouterViaProgramExample.java index 7065524e52..a3a48f300c 100644 --- a/akka-docs/rst/java/code/docs/jrouting/RouterViaProgramExample.java +++ b/akka-docs/rst/java/code/docs/jrouting/RouterViaProgramExample.java @@ -70,7 +70,7 @@ public class RouterViaProgramExample { int upperBound = 15; DefaultResizer resizer = new DefaultResizer(lowerBound, upperBound); ActorRef router3 = system.actorOf( - new Props(ExampleActor.class).withRouter(new RoundRobinRouter(nrOfInstances))); + new Props(ExampleActor.class).withRouter(new RoundRobinRouter(resizer))); //#programmaticRoutingWithResizer for (int i = 1; i <= 6; i++) { router3.tell(new ExampleActor.Message(i), null); diff --git a/akka-docs/rst/java/code/docs/pattern/SchedulerPatternTest.java b/akka-docs/rst/java/code/docs/pattern/SchedulerPatternTest.java new file mode 100644 index 0000000000..e712eee146 --- /dev/null +++ b/akka-docs/rst/java/code/docs/pattern/SchedulerPatternTest.java @@ -0,0 +1,191 @@ +/** + * Copyright (C) 2009-2012 Typesafe Inc. 
+ */ + +package docs.pattern; + +import akka.actor.*; +import akka.testkit.*; +import akka.testkit.TestEvent.Mute; +import akka.testkit.TestEvent.UnMute; +import org.junit.*; +import scala.concurrent.duration.Duration; +import scala.concurrent.duration.FiniteDuration; +import java.util.Arrays; +import java.util.concurrent.TimeUnit; + +public class SchedulerPatternTest { + + static ActorSystem system; + + @BeforeClass + public static void setUp() { + system = ActorSystem.create("SchedulerPatternTest", AkkaSpec.testConf()); + } + + @AfterClass + public static void tearDown() { + system.shutdown(); + } + + static + //#schedule-constructor + public class ScheduleInConstructor extends UntypedActor { + + private final Cancellable tick = getContext().system().scheduler().schedule( + Duration.create(500, TimeUnit.MILLISECONDS), + Duration.create(1000, TimeUnit.MILLISECONDS), + getSelf(), "tick", getContext().dispatcher()); + //#schedule-constructor + // this variable and constructor is declared here to not show up in the docs + final ActorRef target; + public ScheduleInConstructor(ActorRef target) { + this.target = target; + } + //#schedule-constructor + + @Override + public void postStop() { + tick.cancel(); + } + + @Override + public void onReceive(Object message) throws Exception { + if (message.equals("tick")) { + // do something useful here + //#schedule-constructor + target.tell(message, getSelf()); + //#schedule-constructor + } + //#schedule-constructor + else if (message.equals("restart")) { + throw new ArithmeticException(); + } + //#schedule-constructor + else { + unhandled(message); + } + } + } + //#schedule-constructor + + static + //#schedule-receive + public class ScheduleInReceive extends UntypedActor { + //#schedule-receive + // this variable and constructor is declared here to not show up in the docs + final ActorRef target; + public ScheduleInReceive(ActorRef target) { + this.target = target; + } + //#schedule-receive + + @Override + public void preStart() { + getContext().system().scheduler().scheduleOnce( + Duration.create(500, TimeUnit.MILLISECONDS), + getSelf(), "tick", getContext().dispatcher()); + } + + // override postRestart so we don't call preStart and schedule a new message + @Override + public void postRestart(Throwable reason) { + } + + @Override + public void onReceive(Object message) throws Exception { + if (message.equals("tick")) { + // send another periodic tick after the specified delay + getContext().system().scheduler().scheduleOnce( + Duration.create(1000, TimeUnit.MILLISECONDS), + getSelf(), "tick", getContext().dispatcher()); + // do something useful here + //#schedule-receive + target.tell(message, getSelf()); + //#schedule-receive + } + //#schedule-receive + else if (message.equals("restart")) { + throw new ArithmeticException(); + } + //#schedule-receive + else { + unhandled(message); + } + } + } + //#schedule-receive + + @Test + @Ignore // no way to tag this as timing sensitive + public void scheduleInConstructor() { + new TestSchedule(system) {{ + final JavaTestKit probe = new JavaTestKit(system); + + final Props props = new Props(new UntypedActorFactory() { + public UntypedActor create() { + return new ScheduleInConstructor(probe.getRef()); + } + }); + + testSchedule(probe, props, duration("3000 millis"), duration("2000 millis")); + }}; + } + + @Test + @Ignore // no way to tag this as timing sensitive + public void scheduleInReceive() { + + new TestSchedule(system) {{ + final JavaTestKit probe = new JavaTestKit(system); + + final Props props = new 
Props(new UntypedActorFactory() { + public UntypedActor create() { + return new ScheduleInReceive(probe.getRef()); + } + }); + + testSchedule(probe, props, duration("3000 millis"), duration("2500 millis")); + }}; + } + + public static class TestSchedule extends JavaTestKit { + private ActorSystem system; + + public TestSchedule(ActorSystem system) { + super(system); + this.system = system; + } + + public void testSchedule(final JavaTestKit probe, Props props, + FiniteDuration startDuration, + FiniteDuration afterRestartDuration) { + Iterable<akka.testkit.EventFilter> filter = + Arrays.asList(new akka.testkit.EventFilter[]{ + (akka.testkit.EventFilter) new ErrorFilter(ArithmeticException.class)}); + try { + system.eventStream().publish(new Mute(filter)); + + final ActorRef actor = system.actorOf(props); + new Within(startDuration) { + protected void run() { + probe.expectMsgEquals("tick"); + probe.expectMsgEquals("tick"); + probe.expectMsgEquals("tick"); + } + }; + actor.tell("restart", getRef()); + new Within(afterRestartDuration) { + protected void run() { + probe.expectMsgEquals("tick"); + probe.expectMsgEquals("tick"); + } + }; + system.stop(actor); + } + finally { + system.eventStream().publish(new UnMute(filter)); + } + } + } +}
diff --git a/akka-docs/rst/java/code/docs/remoting/RemoteDeploymentDocTestBase.java b/akka-docs/rst/java/code/docs/remoting/RemoteDeploymentDocTestBase.java index eaf5fbab79..49d0a631f6 100644 --- a/akka-docs/rst/java/code/docs/remoting/RemoteDeploymentDocTestBase.java +++ b/akka-docs/rst/java/code/docs/remoting/RemoteDeploymentDocTestBase.java @@ -7,6 +7,8 @@ import org.junit.AfterClass; import org.junit.BeforeClass; import org.junit.Test; +import com.typesafe.config.ConfigFactory; + //#import import akka.actor.ActorRef; import akka.actor.Address; @@ -60,6 +62,14 @@ public class RemoteDeploymentDocTestBase { actor.tell("Pretty slick", null); //#sample-actor } + + @Test + public void demonstrateProgrammaticConfig() { + //#programmatic + ConfigFactory.parseString("akka.remote.netty.hostname=\"1.2.3.4\"") + .withFallback(ConfigFactory.load()); + //#programmatic + } } \ No newline at end of file
diff --git a/akka-docs/rst/java/code/docs/serialization/SerializationDocTestBase.java b/akka-docs/rst/java/code/docs/serialization/SerializationDocTestBase.java index 7fdb6420f1..db46031584 100644 --- a/akka-docs/rst/java/code/docs/serialization/SerializationDocTestBase.java +++ b/akka-docs/rst/java/code/docs/serialization/SerializationDocTestBase.java @@ -138,12 +138,7 @@ public class SerializationDocTestBase { } public Address getAddress() { - final ActorRefProvider provider = system.provider(); - if (provider instanceof RemoteActorRefProvider) { - return ((RemoteActorRefProvider) provider).transport().address(); - } else { - throw new UnsupportedOperationException("need RemoteActorRefProvider"); - } + return system.provider().getDefaultAddress(); } }
diff --git a/akka-docs/rst/java/code/docs/testkit/TestKitDocTest.java b/akka-docs/rst/java/code/docs/testkit/TestKitDocTest.java index 14a51f9957..89253110ff 100644 --- a/akka-docs/rst/java/code/docs/testkit/TestKitDocTest.java +++ b/akka-docs/rst/java/code/docs/testkit/TestKitDocTest.java @@ -26,7 +26,7 @@ import akka.testkit.TestActor; import akka.testkit.TestActor.AutoPilot; import akka.testkit.TestActorRef; import akka.testkit.JavaTestKit; -import scala.concurrent.util.Duration; +import scala.concurrent.duration.Duration; public class TestKitDocTest {
diff --git a/akka-docs/rst/java/code/docs/testkit/TestKitSampleTest.java
b/akka-docs/rst/java/code/docs/testkit/TestKitSampleTest.java index b86cc366da..fc8178b7f2 100644 --- a/akka-docs/rst/java/code/docs/testkit/TestKitSampleTest.java +++ b/akka-docs/rst/java/code/docs/testkit/TestKitSampleTest.java @@ -14,7 +14,7 @@ import akka.actor.ActorSystem; import akka.actor.Props; import akka.actor.UntypedActor; import akka.testkit.JavaTestKit; -import scala.concurrent.util.Duration; +import scala.concurrent.duration.Duration; public class TestKitSampleTest { diff --git a/akka-docs/rst/java/code/docs/zeromq/ZeromqDocTestBase.java b/akka-docs/rst/java/code/docs/zeromq/ZeromqDocTestBase.java index 5a761c3cfe..9ec3bc49f9 100644 --- a/akka-docs/rst/java/code/docs/zeromq/ZeromqDocTestBase.java +++ b/akka-docs/rst/java/code/docs/zeromq/ZeromqDocTestBase.java @@ -30,7 +30,7 @@ import akka.actor.UntypedActor; import akka.actor.Props; import akka.event.Logging; import akka.event.LoggingAdapter; -import scala.concurrent.util.Duration; +import scala.concurrent.duration.Duration; import akka.serialization.SerializationExtension; import akka.serialization.Serialization; import java.io.Serializable; @@ -99,6 +99,9 @@ public class ZeromqDocTestBase { pubSocket.tell(new ZMQMessage(new Frame("foo.bar"), new Frame(payload)), null); //#pub-topic + system.stop(subSocket); + system.stop(subTopicSocket); + //#high-watermark ActorRef highWatermarkSocket = ZeroMQExtension.get(system).newRouterSocket( new SocketOption[] { new Listener(listener), diff --git a/akka-docs/rst/java/dispatchers.rst b/akka-docs/rst/java/dispatchers.rst index 7fa54e0529..b25a1c33b0 100644 --- a/akka-docs/rst/java/dispatchers.rst +++ b/akka-docs/rst/java/dispatchers.rst @@ -13,6 +13,15 @@ Default dispatcher Every ``ActorSystem`` will have a default dispatcher that will be used in case nothing else is configured for an ``Actor``. The default dispatcher can be configured, and is by default a ``Dispatcher`` with a "fork-join-executor", which gives excellent performance in most cases. +.. _dispatcher-lookup-java: + +Looking up a Dispatcher +----------------------- + +Dispatchers implement the :class:`ExecutionContext` interface and can thus be used to run :class:`Future` invocations etc. + +.. includecode:: code/docs/dispatcher/DispatcherDocTestBase.java#lookup + Setting the dispatcher for an Actor ----------------------------------- diff --git a/akka-docs/rst/java/event-bus.rst b/akka-docs/rst/java/event-bus.rst index faecd1d209..fa71e356d8 100644 --- a/akka-docs/rst/java/event-bus.rst +++ b/akka-docs/rst/java/event-bus.rst @@ -185,7 +185,7 @@ at runtime:: system.eventStream.setLogLevel(Logging.DebugLevel()); -This means that log events for a level which will not be logged are not +This means that log events for a level which will not be logged are typically not dispatched at all (unless manual subscriptions to the respective event class have been done) diff --git a/akka-docs/rst/java/fault-tolerance.rst b/akka-docs/rst/java/fault-tolerance.rst index 3794ebd3fe..9cb9d234fd 100644 --- a/akka-docs/rst/java/fault-tolerance.rst +++ b/akka-docs/rst/java/fault-tolerance.rst @@ -24,9 +24,6 @@ sample as it is easy to follow the log output to understand what is happening in fault-tolerance-sample -.. 
includecode:: code/docs/actor/japi/FaultHandlingDocSample.java#all - :exclude: imports,messages,dummydb - Creating a Supervisor Strategy ------------------------------
diff --git a/akka-docs/rst/java/futures.rst index 137f0badac..f643155bc3 100644 --- a/akka-docs/rst/java/futures.rst +++ b/akka-docs/rst/java/futures.rst @@ -47,6 +47,17 @@ Alternatives to blocking are discussed further within this documentation. Also note that the ``Future`` returned by an ``UntypedActor`` is a ``Future<Object>`` since an ``UntypedActor`` is dynamic. That is why the cast to ``String`` is used in the above sample. +.. warning:: + + ``Await.result`` and ``Await.ready`` are provided for exceptional situations where you **must** block; + a good rule of thumb is to only use them if you know why you **must** block. For all other cases, use + asynchronous composition as described below. + +To send the result of a ``Future`` to an ``Actor``, you can use the ``pipe`` construct: + +.. includecode:: code/docs/future/FutureDocTestBase.java + :include: pipe-to + Use Directly ------------ @@ -75,6 +86,11 @@ Or failures: .. includecode:: code/docs/future/FutureDocTestBase.java :include: failed +For these examples ``PrintResult`` is defined as follows: + +.. includecode:: code/docs/future/FutureDocTestBase.java + :include: print-result + Functional Futures ------------------
diff --git a/akka-docs/rst/java/howto.rst index a1f8e4a11f..b15a18a38c 100644 --- a/akka-docs/rst/java/howto.rst +++ b/akka-docs/rst/java/howto.rst @@ -16,6 +16,37 @@ sense to add to the ``akka.pattern`` package for creating an `OTP-like library You might find some of the patterns described in the Scala chapter of :ref:`howto-scala` useful even though the example code is written in Scala. +Scheduling Periodic Messages +============================ + +This pattern describes how to schedule periodic messages to yourself in two different +ways. + +The first way is to set up periodic message scheduling in the constructor of the actor, +and cancel that scheduled sending in ``postStop``, or else we might have multiple registered +message sends to the same actor. + +.. note:: + + With this approach the scheduled periodic message send will be restarted with the actor on restarts. + This also means that the time period that elapses between two tick messages during a restart may drift + off based on when you restart the scheduled message sends relative to the time that the last message was + sent, and how long the initial delay is. Worst case scenario is ``interval`` plus ``initialDelay``. + +.. includecode:: code/docs/pattern/SchedulerPatternTest.java#schedule-constructor + +The second variant sets up an initial one-shot message send in the ``preStart`` method +of the actor, and then the actor, when it receives this message, sets up a new one-shot +message send. You also have to override ``postRestart`` so we don't call ``preStart`` +and schedule the initial message send again. + +.. note:: + + With this approach we won't fill up the mailbox with tick messages if the actor is + under pressure, but only schedule a new tick message when we have seen the previous one. + +.. includecode:: code/docs/pattern/SchedulerPatternTest.java#schedule-receive +
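A minimal sketch of the scheduler calls these two patterns build on, lifted out of the test harness (the `ActorSystem` `system` and target `receiver` are hypothetical names, not from the diff):

```java
import java.util.concurrent.TimeUnit;
import scala.concurrent.duration.Duration;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Cancellable;

public class ScheduleSketch {
  public static Cancellable run(ActorSystem system, ActorRef receiver) {
    // variant 1: a repeating schedule, handed back as a Cancellable
    final Cancellable tick = system.scheduler().schedule(
        Duration.create(500, TimeUnit.MILLISECONDS),   // initial delay
        Duration.create(1000, TimeUnit.MILLISECONDS),  // interval
        receiver, "tick", system.dispatcher());

    // variant 2: a single shot; re-arming happens in the receiving actor
    system.scheduler().scheduleOnce(
        Duration.create(1000, TimeUnit.MILLISECONDS),
        receiver, "tick", system.dispatcher());

    // the caller should cancel() the repeating send, e.g. in postStop
    return tick;
  }
}
```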
Single-Use Actor Trees with High-Level Error Reporting ====================================================== @@ -69,4 +100,3 @@ This is an especially nice pattern, since it does even come with some empty exam Spread the word: this is the easiest way to get famous! Please keep this pattern at the end of this file. -
diff --git a/akka-docs/rst/java/logging.rst index eefce2b35d..0f857837c5 100644 --- a/akka-docs/rst/java/logging.rst +++ b/akka-docs/rst/java/logging.rst @@ -194,8 +194,7 @@ It has one single dependency; the slf4j-api jar. In runtime you also need a SLF4 <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> - <version>1.0.4</version> - <scope>runtime</scope> + <version>1.0.7</version> You need to enable the Slf4jEventHandler in the 'event-handlers' element in
diff --git a/akka-docs/rst/java/microkernel.rst index 3d89c5c7e6..832e02ef90 100644 --- a/akka-docs/rst/java/microkernel.rst +++ b/akka-docs/rst/java/microkernel.rst @@ -10,7 +10,7 @@ having to create a launcher script. The Akka Microkernel is included in the Akka download found at `downloads`_. -.. _downloads: http://akka.io/downloads +.. _downloads: http://typesafe.com/stack/downloads/akka To run an application with the microkernel you need to create a Bootable class that handles the startup and shutdown of the application. An example is included below. @@ -19,11 +19,7 @@ Put your application jar in the ``deploy`` directory to have it automatically loaded. To start the kernel use the scripts in the ``bin`` directory, passing the boot -classes for your application. - -There is a simple example of an application setup for running with the -microkernel included in the akka download. This can be run with the following -command (on a unix-based system): +classes for your application. Example command (on a unix-based system): .. code-block:: none
diff --git a/akka-docs/rst/java/remoting.rst index 826a5b7ba1..fae73cfde7 100644 --- a/akka-docs/rst/java/remoting.rst +++ b/akka-docs/rst/java/remoting.rst @@ -60,6 +60,13 @@ reference file for more information: .. literalinclude:: ../../../akka-remote/src/main/resources/reference.conf :language: none +.. note:: + + Setting properties like the listening IP and port number programmatically is + best done by using something like the following: + + .. includecode:: code/docs/remoting/RemoteDeploymentDocTestBase.java#programmatic +
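To make the referenced snippet concrete, a hedged sketch of how such a parsed config is typically used to boot a system (the system name `RemoteSystem` is made up for illustration):

```java
import akka.actor.ActorSystem;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class RemoteConfigSketch {
  public static ActorSystem boot() {
    // the programmatic override wins over application.conf via withFallback
    final Config config = ConfigFactory
        .parseString("akka.remote.netty.hostname=\"1.2.3.4\"")
        .withFallback(ConfigFactory.load());
    // pass the combined config when creating the actor system
    return ActorSystem.create("RemoteSystem", config);
  }
}
```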
Looking up Remote Actors ^^^^^^^^^^^^^^^^^^^^^^^^ @@ -115,6 +122,15 @@ actor systems has to have a JAR containing the class. object, which in most cases is not serializable. It is best to make a static inner class which implements :class:`UntypedActorFactory`. +.. note:: + + You can use asterisks as wildcard matches for the actor path sections, so you could specify: + ``/*/sampleActor`` and that would match all ``sampleActor`` on that level in the hierarchy. + You can also use a wildcard in the last position to match all actors at a certain level: + ``/someParent/*``. Non-wildcard matches always have higher priority than wildcards, so: + ``/foo/bar`` is considered **more specific** than ``/foo/*`` and only the highest priority match is used. + Please note that it **cannot** be used to partially match a section, like this: ``/foo*/bar``, ``/f*o/bar`` etc. + .. warning:: *Caveat:* Remote deployment ties both systems together in a tight fashion, @@ -186,7 +202,7 @@ Description of the Remoting Sample There is a more extensive remote example that comes with the Akka distribution. Please have a look here for more information: `Remote Sample -`_ +<@github@/akka-samples/akka-sample-remote>`_ This sample demonstrates both remote deployment and look-up of remote actors. First, let us have a look at the common setup for both scenarios (this is ``common.conf``):
diff --git a/akka-docs/rst/java/routing.rst index 4d21bdd187..9f74e6d902 100644 --- a/akka-docs/rst/java/routing.rst +++ b/akka-docs/rst/java/routing.rst @@ -66,7 +66,7 @@ In addition to being able to supply looked-up remote actors as routees, you can make the router deploy its created children on a set of remote hosts; this will be done in round-robin fashion. In order to do that, wrap the router configuration in a :class:`RemoteRouterConfig`, attaching the remote addresses of -the nodes to deploy to. Naturally, this requires your to include the +the nodes to deploy to. Naturally, this requires you to include the ``akka-remote`` module on your classpath: .. includecode:: code/docs/jrouting/RouterViaProgramExample.java#remoteRoutees @@ -114,7 +114,7 @@ Routers vs. Supervision ^^^^^^^^^^^^^^^^^^^^^^^ As explained in the previous section, routers create new actor instances as -children of the “head” router, who therefor also is their supervisor. The +children of the “head” router, who therefore also is their supervisor. The supervisor strategy of this actor can be configured by means of the :meth:`RouterConfig.supervisorStrategy` property, which is supported for all built-in router types. It defaults to “always escalate”, which leads to the @@ -434,7 +434,7 @@ Configured Custom Router It is possible to define configuration properties for custom routers. In the ``router`` property of the deployment configuration you define the fully qualified class name of the router class. The router class must extend -``akka.routing.CustomRouterConfig`` and and have constructor with ``com.typesafe.config.Config`` parameter. +``akka.routing.CustomRouterConfig`` and have a constructor with one ``com.typesafe.config.Config`` parameter. The deployment section of the configuration is passed to the constructor. Custom Resizer
diff --git a/akka-docs/rst/java/serialization.rst index caec1ba325..4668597c4f 100644 --- a/akka-docs/rst/java/serialization.rst +++ b/akka-docs/rst/java/serialization.rst @@ -149,16 +149,12 @@ concrete address handy you can create a dummy one for the right protocol using ``new Address(protocol, "", "", 0)`` (assuming that the actual transport used is as lenient as Akka’s RemoteActorRefProvider). -There is a possible simplification available if you are just using the default -:class:`NettyRemoteTransport` with the :meth:`RemoteActorRefProvider`, which is -enabled by the fact that this combination has just a single remote address: +There is also a default remote address which is the one used by cluster support +(and typical systems have just this one); you can get it like this: .. includecode:: code/docs/serialization/SerializationDocTestBase.java :include: external-address-default -This solution has to be adapted once other providers are used (like the planned -extensions for clustering).
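A hedged sketch of how the new one-liner above is typically consumed when serializing actor references; the class and method names are invented here, and `toStringWithAddress` is assumed available on `ActorPath` in this Akka version:

```java
import akka.actor.ActorRef;
import akka.actor.Address;
import akka.actor.ExtendedActorSystem;

public class RefSerializerSketch {
  private final ExtendedActorSystem system;

  public RefSerializerSketch(ExtendedActorSystem system) {
    this.system = system;
  }

  // embed a fully qualified, remotely reachable path when serializing a ref
  public String serializeRef(ActorRef ref) {
    // the system's single default remote address, via the new helper
    final Address selfAddress = system.provider().getDefaultAddress();
    return ref.path().toStringWithAddress(selfAddress);
  }
}
```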
- Deep serialization of Actors ----------------------------
diff --git a/akka-docs/rst/java/untyped-actors.rst index 685a0903d5..2acc8bbee1 100644 --- a/akka-docs/rst/java/untyped-actors.rst +++ b/akka-docs/rst/java/untyped-actors.rst @@ -127,10 +127,11 @@ UntypedActor API The :class:`UntypedActor` class defines only one abstract method, the above mentioned :meth:`onReceive(Object message)`, which implements the behavior of the actor. -If the current actor behavior does not match a received message, -:meth:`unhandled` is called, which by default publishes a ``new +If the current actor behavior does not match a received message, it's recommended that +you call the :meth:`unhandled` method, which by default publishes a ``new akka.actor.UnhandledMessage(message, sender, recipient)`` on the actor system’s -event stream. +event stream (set configuration item ``akka.actor.debug.unhandled`` to ``on`` +to have them converted into actual Debug messages). In addition, it offers: @@ -431,13 +432,20 @@ defaults to a 'dead-letter' actor ref. getSender().tell(result); // will have dead-letter actor as default } -Initial receive timeout -======================= +Receive timeout +=============== -A timeout mechanism can be used to receive a message when no initial message is -received within a certain time. To receive this timeout you have to set the -``receiveTimeout`` property and declare handing for the ReceiveTimeout -message. +The `UntypedActorContext` :meth:`setReceiveTimeout` defines the inactivity timeout after which +the sending of a `ReceiveTimeout` message is triggered. +When specified, the receive function should be able to handle an `akka.actor.ReceiveTimeout` message. +1 millisecond is the minimum supported timeout. + +Please note that the receive timeout might fire and enqueue the `ReceiveTimeout` message right after +another message was enqueued; hence it is **not guaranteed** that upon reception of the receive +timeout there must have been an idle period beforehand as configured via this method. + +Once set, the receive timeout stays in effect (i.e. continues firing repeatedly after inactivity +periods). Pass in `Duration.Undefined` to switch off this feature. .. includecode:: code/docs/actor/MyReceivedTimeoutUntypedActor.java#receive-timeout @@ -542,7 +550,8 @@ Upgrade Akka supports hotswapping the Actor’s message loop (e.g. its implementation) at runtime. Use the ``getContext().become`` method from within the Actor. -The hotswapped code is kept in a Stack which can be pushed and popped. +The hotswapped code is kept in a Stack which can be pushed (replacing or adding +at the top) and popped. .. warning:: @@ -556,26 +565,19 @@ To hotswap the Actor using ``getContext().become``: .. includecode:: code/docs/actor/UntypedActorDocTestBase.java :include: hot-swap-actor -The ``become`` method is useful for many different things, such as to implement -a Finite State Machine (FSM). +This variant of the :meth:`become` method is useful for many different things, +such as to implement a Finite State Machine (FSM). It will replace the current +behavior (i.e. the top of the behavior stack), which means that you do not use +:meth:`unbecome`; instead, the next behavior is always explicitly installed. -Here is another little cute example of ``become`` and ``unbecome`` in action: +The other way of using :meth:`become` does not replace but adds to the top of +the behavior stack. In this case care must be taken to ensure that the number +of “pop” operations (i.e.
:meth:`unbecome`) matches the number of “push” ones +in the long run, otherwise this amounts to a memory leak (which is why this +behavior is not the default). .. includecode:: code/docs/actor/UntypedActorSwapper.java#swapper -Downgrade ---------- - -Since the hotswapped code is pushed to a Stack you can downgrade the code as -well. Use the ``getContext().unbecome`` method from within the Actor. - -.. code-block:: java - - public void onReceive(Object message) { - if (message.equals("revert")) getContext().unbecome(); - } - - Stash ===== @@ -620,9 +622,11 @@ The stash is backed by a ``scala.collection.immutable.Vector``. As a result, even a very large number of messages may be stashed without a major impact on performance. -Note that the stash is not persisted across restarts of an actor, -unlike the actor's mailbox. Therefore, it should be managed like other -parts of the actor's state which have the same property. +Note that the stash is part of the ephemeral actor state, unlike the +mailbox. Therefore, it should be managed like other parts of the +actor's state which have the same property. The :class:`Stash` trait’s +implementation of :meth:`preRestart` will call ``unstashAll()``, which is +usually the desired behavior. Killing an Actor diff --git a/akka-docs/rst/modules/camel.rst b/akka-docs/rst/modules/camel.rst deleted file mode 100644 index 68686ce586..0000000000 --- a/akka-docs/rst/modules/camel.rst +++ /dev/null @@ -1,13 +0,0 @@ - -.. _camel-module: - -####### - Camel -####### - -.. note:: - The Akka Camel module has not been migrated to Akka 2.1-SNAPSHOT yet. - - It might not make it into Akka 2.0 final but will then hopefully be - re-introduce in an upcoming release. It might also be backported to - 2.0 final. diff --git a/akka-docs/rst/modules/code/docs/actor/mailbox/DurableMailboxDocSpec.scala b/akka-docs/rst/modules/code/docs/actor/mailbox/DurableMailboxDocSpec.scala index 9618f81ff9..4c2880f53d 100644 --- a/akka-docs/rst/modules/code/docs/actor/mailbox/DurableMailboxDocSpec.scala +++ b/akka-docs/rst/modules/code/docs/actor/mailbox/DurableMailboxDocSpec.scala @@ -53,7 +53,7 @@ import akka.dispatch.MessageQueue import akka.actor.mailbox.DurableMessageQueue import akka.actor.mailbox.DurableMessageSerialization import akka.pattern.CircuitBreaker -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ class MyMailboxType(systemSettings: ActorSystem.Settings, config: Config) extends MailboxType { diff --git a/akka-docs/rst/modules/durable-mailbox.rst b/akka-docs/rst/modules/durable-mailbox.rst index f76cee0dbd..7fa5aa2480 100644 --- a/akka-docs/rst/modules/durable-mailbox.rst +++ b/akka-docs/rst/modules/durable-mailbox.rst @@ -96,7 +96,8 @@ added in concrete subclass like this: To use ``DurableMailboxDocSpec`` add this dependency:: - "com.typesafe.akka" %% "akka-mailboxes-common" % "@version@" classifier "test" @crossString@ + "com.typesafe.akka" %% "akka-mailboxes-common" % + "@version@" classifier "test" @crossString@ For more inspiration you can look at the old implementations based on Redis, MongoDB, Beanstalk, and ZooKeeper, which can be found in Akka git repository tag diff --git a/akka-docs/rst/modules/index.rst b/akka-docs/rst/modules/index.rst index 2c6d78603e..8dbf7533b7 100644 --- a/akka-docs/rst/modules/index.rst +++ b/akka-docs/rst/modules/index.rst @@ -6,4 +6,3 @@ Modules durable-mailbox http - camel diff --git a/akka-docs/rst/project/links.rst b/akka-docs/rst/project/links.rst index 238971d1ba..842042cdec 100644 --- 
a/akka-docs/rst/project/links.rst +++ b/akka-docs/rst/project/links.rst @@ -1,13 +1,17 @@ .. _support: +######### + Project +######### + Commercial Support -================== +^^^^^^^^^^^^^^^^^^ Commercial support is provided by `Typesafe <http://www.typesafe.com>`_. Akka is now part of the `Typesafe Stack <http://www.typesafe.com/stack>`_. Mailing List -============ +^^^^^^^^^^^^ `Akka User Google Group <http://groups.google.com/group/akka-user>`_ @@ -15,13 +19,13 @@ Mailing List Downloads -========= +^^^^^^^^^ -`<http://akka.io/downloads>`_ +`<http://typesafe.com/stack/downloads/akka>`_ Source Code -=========== +^^^^^^^^^^^ Akka uses Git and is hosted at `Github <http://github.com/akka/akka>`_. @@ -29,7 +33,7 @@ Akka uses Git and is hosted at `Github <http://github.com/akka/akka>`_. Releases Repository -=================== +^^^^^^^^^^^^^^^^^^^ The Akka Maven repository can be found at http://repo.akka.io/releases/. @@ -50,7 +54,7 @@ underlying repositories directly. Snapshots Repository -==================== +^^^^^^^^^^^^^^^^^^^^ Nightly builds are available in http://repo.akka.io/snapshots/ and proxied through http://repo.typesafe.com/typesafe/snapshots/ as both ``SNAPSHOT`` and @@ -60,12 +64,41 @@ For timestamped versions, pick a timestamp from http://repo.typesafe.com/typesafe/snapshots/com/typesafe/akka/akka-actor_@binVersion@/. All Akka modules that belong to the same build have the same timestamp. -Make sure that you add the repository to the sbt resolvers or maven repositories:: +sbt definition of snapshot repository +------------------------------------- + +Make sure that you add the repository to the sbt resolvers:: resolvers += "Typesafe Snapshots" at "http://repo.typesafe.com/typesafe/snapshots/" Define the library dependencies with the timestamp as version. For example:: - libraryDependencies += "com.typesafe.akka" % "akka-actor_@binVersion@" % "2.1-20120913-000917" + libraryDependencies += "com.typesafe.akka" % "akka-remote_@binVersion@" % + "2.1-20121016-001042" + +maven definition of snapshot repository +--------------------------------------- + +Make sure that you add the repository to the maven repositories in pom.xml:: + + <repositories> + <repository> + <id>typesafe-snapshots</id> + <name>Typesafe Snapshots</name> + <url>http://repo.typesafe.com/typesafe/snapshots/</url> + <layout>default</layout> + </repository> + </repositories> + +Define the library dependencies with the timestamp as version. For example:: + + <dependencies> + <dependency> + <groupId>com.typesafe.akka</groupId> + <artifactId>akka-remote_@binVersion@</artifactId> + <version>2.1-20121016-001042</version> + </dependency> + </dependencies> + + - libraryDependencies += "com.typesafe.akka" % "akka-remote_@binVersion@" % "2.1-20120913-000917"
diff --git a/akka-docs/rst/project/migration-guide-2.0.x-2.1.x.rst index 449746ad02..dc160a7170 100644 --- a/akka-docs/rst/project/migration-guide-2.0.x-2.1.x.rst +++ b/akka-docs/rst/project/migration-guide-2.0.x-2.1.x.rst @@ -24,7 +24,7 @@ Config Dependency dependency of akka-actor and it is no longer embedded in ``akka-actor.jar``. If you are using a build tool with dependency resolution, such as sbt or maven you will not notice the difference, but if you have manually constructed classpaths -you need to add `config-0.5.0.jar <http://repo1.maven.org/maven2/com/typesafe/config/0.5.0/config-0.5.0.jar>`_. +you need to add `config-1.0.0.jar <http://repo1.maven.org/maven2/com/typesafe/config/1.0.0/config-1.0.0.jar>`_.
Pieces Moved to Scala Standard Library ====================================== @@ -38,9 +38,9 @@ Search Replace with ``akka.dispatch.Future`` ``scala.concurrent.Future`` ``akka.dispatch.Promise`` ``scala.concurrent.Promise`` ``akka.dispatch.ExecutionContext`` ``scala.concurrent.ExecutionContext`` -``akka.util.Duration`` ``scala.concurrent.util.Duration`` -``akka.util.duration`` ``scala.concurrent.util.duration`` -``akka.util.Deadline`` ``scala.concurrent.util.Deadline`` +``akka.util.Duration`` ``scala.concurrent.duration.Duration`` +``akka.util.duration`` ``scala.concurrent.duration`` +``akka.util.Deadline`` ``scala.concurrent.duration.Deadline`` ``akka.util.NonFatal`` ``scala.util.control.NonFatal`` ``akka.japi.Util.manifest`` ``akka.japi.Util.classTag`` ==================================== ==================================== @@ -66,8 +66,9 @@ Java: :: // Use this Actor's Dispatcher as ExecutionContext - getContext().system().scheduler().scheduleOnce(Duration.parse("10 seconds", - getSelf(), new Reconnect(), getContext().getDispatcher()); + getContext().system().scheduler().scheduleOnce(Duration.create( + 10, TimeUnit.SECONDS), getSelf(), new Reconnect(), + getContext().getDispatcher()); // Use ActorSystem's default Dispatcher as ExecutionContext system.scheduler().scheduleOnce(Duration.create(50, TimeUnit.MILLISECONDS), @@ -203,17 +204,17 @@ v2.0 Scala:: v2.1 Scala:: - val router2 = system.actorOf(Props[ExampleActor1].withRouter( - RoundRobinRouter(routees = routees))) + val router2 = system.actorOf(Props.empty.withRouter( + RoundRobinRouter(routees = routees))) v2.0 Java:: - ActorRef router2 = system.actorOf(new Props(ExampleActor.class).withRouter( + ActorRef router2 = system.actorOf(new Props().withRouter( RoundRobinRouter.create(routees))); v2.1 Java:: - ActorRef router2 = system.actorOf(new Props().withRouter( + ActorRef router2 = system.actorOf(Props.empty().withRouter( RoundRobinRouter.create(routees))); Props: Function-based creation @@ -383,7 +384,7 @@ v2.0:: v2.1:: - final FiniteDuration d = Duration.create("1 second"); + final FiniteDuration d = Duration.create(1, TimeUnit.SECONDS); final Timeout t = new Timeout(d); // always required finite duration, now enforced Package Name Changes in Remoting ================================ This has been done to enable OSGi bundles that don't have conflicting package names. Change the following import statements. Please note that the serializers are often referenced from configuration.
-================================================ ======================================================= -Search Replace with -================================================ ======================================================= -``akka.routing.RemoteRouterConfig`` ``akka.remote.routing.RemoteRouterConfig`` -``akka.serialization.ProtobufSerializer`` ``akka.remote.serialization.ProtobufSerializer`` -``akka.serialization.DaemonMsgCreateSerializer`` ``akka.remote.serialization.DaemonMsgCreateSerializer`` -================================================ ======================================================= +Search -> Replace with:: + + akka.routing.RemoteRouterConfig -> + akka.remote.routing.RemoteRouterConfig + + akka.serialization.ProtobufSerializer -> + akka.remote.serialization.ProtobufSerializer + + akka.serialization.DaemonMsgCreateSerializer -> + akka.remote.serialization.DaemonMsgCreateSerializer + Package Name Changes in Durable Mailboxes ========================================= @@ -410,14 +415,20 @@ This has been done to enable OSGi bundles that don't have conflicting package na Change the following import statements. Please note that the ``FileBasedMailboxType`` is often referenced from configuration. -================================================ ========================================================= -Search Replace with -================================================ ========================================================= -``akka.actor.mailbox.FileBasedMailboxType`` ``akka.actor.mailbox.filebased.FileBasedMailboxType`` -``akka.actor.mailbox.FileBasedMailboxSettings`` ``akka.actor.mailbox.filebased.FileBasedMailboxSettings`` -``akka.actor.mailbox.FileBasedMessageQueue`` ``akka.actor.mailbox.filebased.FileBasedMessageQueue`` -``akka.actor.mailbox.filequeue.*`` ``akka.actor.mailbox.filebased.filequeue.*`` -================================================ ========================================================= +Search -> Replace with:: + + akka.actor.mailbox.FileBasedMailboxType -> + akka.actor.mailbox.filebased.FileBasedMailboxType + + akka.actor.mailbox.FileBasedMailboxSettings -> + akka.actor.mailbox.filebased.FileBasedMailboxSettings + + akka.actor.mailbox.FileBasedMessageQueue -> + akka.actor.mailbox.filebased.FileBasedMessageQueue + + akka.actor.mailbox.filequeue.* -> + akka.actor.mailbox.filebased.filequeue.* + Actor Receive Timeout =====================
diff --git a/akka-docs/rst/project/migration-guide-2.1.x-2.2.x.rst new file mode 100644 index 0000000000..3002db8233 --- /dev/null +++ b/akka-docs/rst/project/migration-guide-2.1.x-2.2.x.rst @@ -0,0 +1,33 @@ +.. _migration-2.2: + +################################ + Migration Guide 2.1.x to 2.2.x +################################ + +The 2.2 release contains several structural changes that require some +simple, mechanical source-level changes in client code. + +When migrating from 1.3.x to 2.2.x you should first follow the instructions for +migrating `1.3.x to 2.0.x `_ and then :ref:`2.0.x to 2.1.x `. + +Immutable everywhere +==================== + +Akka has in 2.2 been refactored to require ``scala.collection.immutable`` data structures as much as possible; +this leads to fewer bugs and more opportunity for sharing data safely.
+ +==================================== ==================================== +Search Replace with +==================================== ==================================== +``akka.japi.Util.arrayToSeq`` ``akka.japi.Util.immutableSeq`` +==================================== ==================================== + +If you need to convert from Java to ``scala.collection.immutable.Seq`` or ``scala.collection.immutable.Iterable`` you should use ``akka.japi.Util.immutableSeq(…)``, +and if you need to convert from Scala you can simply switch to using immutable collections yourself or use the ``to[immutable.]`` method. + +API changes to FSM and TestFSMRef +================================= + +The ``timerActive_?`` method has been deprecated in both the ``FSM`` trait and the ``TestFSMRef`` +class. You should now use the ``isTimerActive`` method instead. The old method will remain +throughout 2.2.x. It will be removed in Akka 2.3. \ No newline at end of file diff --git a/akka-docs/rst/project/migration-guides.rst b/akka-docs/rst/project/migration-guides.rst index 79e2f7b8cc..5f464f3a08 100644 --- a/akka-docs/rst/project/migration-guides.rst +++ b/akka-docs/rst/project/migration-guides.rst @@ -8,3 +8,4 @@ Migration Guides migration-guide-1.3.x-2.0.x migration-guide-2.0.x-2.1.x + migration-guide-2.1.x-2.2.x diff --git a/akka-docs/rst/scala/actors.rst b/akka-docs/rst/scala/actors.rst index fea94dec0d..4f44497485 100644 --- a/akka-docs/rst/scala/actors.rst +++ b/akka-docs/rst/scala/actors.rst @@ -174,6 +174,17 @@ form of the ``implicit val context: ActorContext``. Outside of an actor, you have to either declare an implicit :class:`ActorSystem`, or you can give the factory explicitly (see further below). +The two possible ways of issuing a ``context.become`` (replacing or adding the +new behavior) are offered separately to enable a clutter-free notation of +nested receives: + +.. includecode:: ../../../akka-actor-tests/src/test/scala/akka/actor/ActorDSLSpec.scala#becomeStacked + +Please note that calling ``unbecome`` more often than ``becomeStacked`` results +in the original behavior being installed, which in case of the :class:`Act` +trait is the empty behavior (the outer ``become`` just replaces it during +construction). + Life-cycle hooks are also exposed as DSL elements (see `Start Hook`_ and `Stop Hook`_ below), where later invocations of the methods shown below will replace the contents of the respective hooks: @@ -223,8 +234,8 @@ If the current actor behavior does not match a received message, :meth:`unhandled` is called, which by default publishes an ``akka.actor.UnhandledMessage(message, sender, recipient)`` on the actor system’s event stream (set configuration item -``akka.event-handler-startup-timeout`` to ``true`` to have them converted into -actual Debug messages) +``akka.actor.debug.unhandled`` to ``on`` to have them converted into +actual Debug messages). In addition, it offers: @@ -549,13 +560,20 @@ defaults to a 'dead-letter' actor ref. val result = process(request) sender ! result // will have dead-letter actor as default -Initial receive timeout -======================= +Receive timeout +=============== -A timeout mechanism can be used to receive a message when no initial message is -received within a certain time. To receive this timeout you have to set the -``receiveTimeout`` property and declare a case handing the ReceiveTimeout -object. +The `ActorContext` :meth:`setReceiveTimeout` defines the inactivity timeout after which +the sending of a `ReceiveTimeout` message is triggered. 
+When specified, the receive function should be able to handle an `akka.actor.ReceiveTimeout` message. +1 millisecond is the minimum supported timeout. + +Please note that the receive timeout might fire and enqueue the `ReceiveTimeout` message right after +another message was enqueued; hence it is **not guaranteed** that upon reception of the receive +timeout there must have been an idle period beforehand as configured via this method. + +Once set, the receive timeout stays in effect (i.e. continues firing repeatedly after inactivity +periods). Pass in `Duration.Undefined` to switch off this feature. .. includecode:: code/docs/actor/ActorDocSpec.scala#receive-timeout @@ -646,11 +664,10 @@ Upgrade ------- Akka supports hotswapping the Actor’s message loop (e.g. its implementation) at -runtime: Invoke the ``context.become`` method from within the Actor. - -Become takes a ``PartialFunction[Any, Unit]`` that implements -the new message handler. The hotswapped code is kept in a Stack which can be -pushed and popped. +runtime: invoke the ``context.become`` method from within the Actor. +:meth:`become` takes a ``PartialFunction[Any, Unit]`` that implements the new +message handler. The hotswapped code is kept in a Stack which can be pushed and +popped. .. warning:: @@ -660,38 +677,26 @@ To hotswap the Actor behavior using ``become``: .. includecode:: code/docs/actor/ActorDocSpec.scala#hot-swap-actor -The ``become`` method is useful for many different things, but a particular nice -example of it is in example where it is used to implement a Finite State Machine -(FSM): `Dining Hakkers`_. +This variant of the :meth:`become` method is useful for many different things, +such as to implement a Finite State Machine (FSM, for an example see `Dining +Hakkers`_). It will replace the current behavior (i.e. the top of the behavior +stack), which means that you do not use :meth:`unbecome`; instead, the +next behavior is always explicitly installed. -.. _Dining Hakkers: http://github.com/akka/akka/blob/master/akka-samples/akka-sample-fsm/src/main/scala/DiningHakkersOnBecome.scala +.. _Dining Hakkers: @github@/akka-samples/akka-sample-fsm/src/main/scala/DiningHakkersOnBecome.scala -Here is another little cute example of ``become`` and ``unbecome`` in action: +The other way of using :meth:`become` does not replace but adds to the top of +the behavior stack. In this case care must be taken to ensure that the number +of “pop” operations (i.e. :meth:`unbecome`) matches the number of “push” ones +in the long run, otherwise this amounts to a memory leak (which is why this +behavior is not the default). .. includecode:: code/docs/actor/ActorDocSpec.scala#swapper Encoding Scala Actors nested receives without accidentally leaking memory ------------------------------------------------------------------------- -See this `Unnested receive example `_. - - -Downgrade ---------- - -Since the hotswapped code is pushed to a Stack you can downgrade the code as -well, all you need to do is to: Invoke the ``context.unbecome`` method from within the Actor. - -This will pop the Stack and replace the Actor's implementation with the -``PartialFunction[Any, Unit]`` that is at the top of the Stack. - -Here's how you use the ``unbecome`` method: - -.. code-block:: scala - - def receive = { - case "revert" => context.unbecome() - } +See this `Unnested receive example <@github@/akka-docs/rst/scala/code/docs/actor/UnnestedReceives.scala>`_. Stash ===== @@ -745,9 +750,11 @@ major impact on performance. callback.
This means it's not possible to write ``Actor with MyActor with Stash`` if ``MyActor`` overrides ``preRestart``. -Note that the stash is not persisted across restarts of an actor, -unlike the actor's mailbox. Therefore, it should be managed like other -parts of the actor's state which have the same property. +Note that the stash is part of the ephemeral actor state, unlike the +mailbox. Therefore, it should be managed like other parts of the +actor's state which have the same property. The :class:`Stash` trait’s +implementation of :meth:`preRestart` will call ``unstashAll()``, which is +usually the desired behavior. Killing an Actor
diff --git a/akka-docs/rst/scala/camel.rst index 292324e26d..c556827a69 100644 --- a/akka-docs/rst/scala/camel.rst +++ b/akka-docs/rst/scala/camel.rst @@ -129,7 +129,7 @@ An ``ActivationTimeoutException`` is thrown if the endpoint could not be activat Deactivation of a Consumer or a Producer actor happens when the actor is terminated. For a Consumer, the route to the actor is stopped. For a Producer, the `SendProcessor`_ is stopped. A ``DeActivationTimeoutException`` is thrown if the associated camel objects could not be deactivated within the specified timeout. -.. _Camel: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/Camel.scala +.. _Camel: @github@/akka-camel/src/main/scala/akka/camel/Camel.scala .. _CamelContext: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java .. _ProducerTemplate: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/ProducerTemplate.java .. _SendProcessor: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/processor/SendProcessor.java @@ -143,7 +143,7 @@ trait. For example, the following actor class (Consumer1) implements the endpointUri method, which is declared in the Consumer trait, in order to receive messages from the ``file:data/input/actor`` Camel endpoint. -.. _Consumer: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/Consumer.scala +.. _Consumer: @github@/akka-camel/src/main/scala/akka/camel/Consumer.scala .. includecode:: code/docs/camel/Consumers.scala#Consumer1 @@ -153,7 +153,7 @@ actor. Messages consumed by actors from Camel endpoints are of type `CamelMessage`_. These are immutable representations of Camel messages. .. _file component: http://camel.apache.org/file2.html -.. _Message: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/CamelMessage.scala +.. _Message: @github@/akka-camel/src/main/scala/akka/camel/CamelMessage.scala Here's another example that sets the endpointUri to @@ -173,7 +173,7 @@ client the response type should be `CamelMessage`_. For any other response type, a new CamelMessage object is created by akka-camel with the actor response as the message body. -.. _CamelMessage: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/CamelMessage.scala +.. _CamelMessage: @github@/akka-camel/src/main/scala/akka/camel/CamelMessage.scala .. _camel-acknowledgements: @@ -218,7 +218,7 @@ The timeout on the consumer actor can be overridden with the ``replyTimeout``, a .. includecode:: code/docs/camel/Consumers.scala#Consumer4
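Since the Java variant of this consumer only appears as import changes earlier in the diff, here is a hedged sketch of what such a consumer with an overridden ``replyTimeout`` roughly looks like with the Java API (class name and endpoint URI are illustrative, not from the diff):

```java
import java.util.concurrent.TimeUnit;
import scala.concurrent.duration.Duration;
import scala.concurrent.duration.FiniteDuration;
import akka.camel.CamelMessage;
import akka.camel.javaapi.UntypedConsumerActor;

public class Consumer4Sketch extends UntypedConsumerActor {
  private static final FiniteDuration TIMEOUT =
      Duration.create(500, TimeUnit.MILLISECONDS);

  // shorten the consumer's default reply timeout
  @Override
  public FiniteDuration replyTimeout() {
    return TIMEOUT;
  }

  public String getEndpointUri() {
    return "jetty:http://localhost:8877/camel/default";
  }

  public void onReceive(Object message) {
    if (message instanceof CamelMessage) {
      final CamelMessage msg = (CamelMessage) message;
      // reply to the Camel exchange via the sender reference
      getSender().tell("received: " + msg.body(), getSelf());
    } else {
      unhandled(message);
    }
  }
}
```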
.. _Exchange: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/Exchange.java -.. _ask: http://github.com/akka/akka/blob/master/akka-actor/src/main/scala/akka/pattern/AskSupport.scala +.. _ask: @github@/akka-actor/src/main/scala/akka/pattern/AskSupport.scala Producer Actors =============== @@ -292,7 +292,7 @@ For initiating a two-way message exchange, one of the .. includecode:: code/docs/camel/Producers.scala#RequestProducerTemplate -.. _Producer: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/Producer.scala +.. _Producer: @github@/akka-camel/src/main/scala/akka/camel/Producer.scala .. _ProducerTemplate: https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/ProducerTemplate.java .. _camel-asynchronous-routing: @@ -357,7 +357,7 @@ Akka Camel components Akka actors can be accessed from Camel routes using the `actor`_ Camel component. This component can be used to access any Akka actor (not only consumer actors) from Camel routes, as described in the following sections. -.. _actor: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/internal/component/ActorComponent.scala +.. _actor: @github@/akka-camel/src/main/scala/akka/camel/internal/component/ActorComponent.scala .. _access-to-actors: @@ -368,7 +368,7 @@ To access actors from custom Camel routes, the `actor`_ Camel component should be used. It fully supports Camel's `asynchronous routing engine`_. -.. _actor: http://github.com/akka/akka/blob/master/akka-camel/src/main/scala/akka/camel/internal/component/ActorComponent.scala +.. _actor: @github@/akka-camel/src/main/scala/akka/camel/internal/component/ActorComponent.scala .. _asynchronous routing engine: http://camel.apache.org/asynchronous-routing-engine.html This component accepts the following endpoint URI format: @@ -384,6 +384,8 @@ URI options The following URI options are supported: +..
tabularcolumns:: |l|l|l|L| + +--------------+----------+---------+-------------------------------------------+ | Name | Type | Default | Description | +==============+==========+=========+===========================================+ diff --git a/akka-docs/rst/scala/code/docs/actor/ActorDocSpec.scala b/akka-docs/rst/scala/code/docs/actor/ActorDocSpec.scala index 0cd43bdd7e..fc936ff13b 100644 --- a/akka-docs/rst/scala/code/docs/actor/ActorDocSpec.scala +++ b/akka-docs/rst/scala/code/docs/actor/ActorDocSpec.scala @@ -18,7 +18,7 @@ import org.scalatest.{ BeforeAndAfterAll, WordSpec } import org.scalatest.matchers.MustMatchers import akka.testkit._ import akka.util._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.Actor.Receive import scala.concurrent.Await @@ -96,11 +96,11 @@ class Swapper extends Actor { def receive = { case Swap ⇒ log.info("Hi") - become { + become({ case Swap ⇒ log.info("Ho") unbecome() // resets the latest 'become' (just for fun) - } + }, discardOld = false) // push on top instead of replace } } @@ -245,7 +245,7 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) { "using implicit timeout" in { val myActor = system.actorOf(Props(new FirstActor)) //#using-implicit-timeout - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ import akka.util.Timeout import akka.pattern.ask implicit val timeout = Timeout(5 seconds) @@ -258,7 +258,7 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) { "using explicit timeout" in { val myActor = system.actorOf(Props(new FirstActor)) //#using-explicit-timeout - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ import akka.pattern.ask val future = myActor.ask("hello")(5 seconds) //#using-explicit-timeout @@ -268,12 +268,18 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) { "using receiveTimeout" in { //#receive-timeout import akka.actor.ReceiveTimeout - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ class MyActor extends Actor { + // To set an initial delay context.setReceiveTimeout(30 milliseconds) def receive = { - case "Hello" ⇒ //... - case ReceiveTimeout ⇒ throw new RuntimeException("received timeout") + case "Hello" ⇒ + // To set in a response to a message + context.setReceiveTimeout(100 milliseconds) + case ReceiveTimeout ⇒ + // To turn it off + context.setReceiveTimeout(Duration.Undefined) + throw new RuntimeException("Receive timed out") } } //#receive-timeout @@ -310,13 +316,13 @@ class ActorDocSpec extends AkkaSpec(Map("akka.loglevel" -> "INFO")) { def receive = { case "open" ⇒ unstashAll() - context.become { + context.become({ case "write" ⇒ // do writing... 
case "close" ⇒ unstashAll() context.unbecome() case msg ⇒ stash() - } + }, discardOld = false) // stack on top instead of replacing case msg ⇒ stash() } } diff --git a/akka-docs/rst/scala/code/docs/actor/FSMDocSpec.scala b/akka-docs/rst/scala/code/docs/actor/FSMDocSpec.scala index 5bc1ea8d70..15821419d4 100644 --- a/akka-docs/rst/scala/code/docs/actor/FSMDocSpec.scala +++ b/akka-docs/rst/scala/code/docs/actor/FSMDocSpec.scala @@ -8,6 +8,7 @@ import language.postfixOps import akka.testkit.{ AkkaSpec ⇒ MyFavoriteTestFrameWorkPlusAkkaTestKit } //#test-code import akka.actor.Props +import scala.collection.immutable class FSMDocSpec extends MyFavoriteTestFrameWorkPlusAkkaTestKit { @@ -15,7 +16,7 @@ class FSMDocSpec extends MyFavoriteTestFrameWorkPlusAkkaTestKit { //#fsm-code-elided //#simple-imports import akka.actor.{ Actor, ActorRef, FSM } - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ //#simple-imports //#simple-events // received events @@ -24,7 +25,7 @@ class FSMDocSpec extends MyFavoriteTestFrameWorkPlusAkkaTestKit { case object Flush // sent events - case class Batch(obj: Seq[Any]) + case class Batch(obj: immutable.Seq[Any]) //#simple-events //#simple-state // states @@ -34,7 +35,7 @@ class FSMDocSpec extends MyFavoriteTestFrameWorkPlusAkkaTestKit { sealed trait Data case object Uninitialized extends Data - case class Todo(target: ActorRef, queue: Seq[Any]) extends Data + case class Todo(target: ActorRef, queue: immutable.Seq[Any]) extends Data //#simple-state //#simple-fsm class Buncher extends Actor with FSM[State, Data] { @@ -188,17 +189,26 @@ class FSMDocSpec extends MyFavoriteTestFrameWorkPlusAkkaTestKit { } //#fsm-code-elided + "demonstrate NullFunction" in { + class A extends Actor with FSM[Int, Null] { + val SomeState = 0 + //#NullFunction + when(SomeState)(FSM.NullFunction) + //#NullFunction + } + } + "batch correctly" in { val buncher = system.actorOf(Props(new Buncher)) buncher ! SetTarget(testActor) buncher ! Queue(42) buncher ! Queue(43) - expectMsg(Batch(Seq(42, 43))) + expectMsg(Batch(immutable.Seq(42, 43))) buncher ! Queue(44) buncher ! Flush buncher ! 
Queue(45) - expectMsg(Batch(Seq(44))) - expectMsg(Batch(Seq(45))) + expectMsg(Batch(immutable.Seq(44))) + expectMsg(Batch(immutable.Seq(45))) } "batch not if uninitialized" in { diff --git a/akka-docs/rst/scala/code/docs/actor/FaultHandlingDocSample.scala b/akka-docs/rst/scala/code/docs/actor/FaultHandlingDocSample.scala index ade871de77..cc1bd3053a 100644 --- a/akka-docs/rst/scala/code/docs/actor/FaultHandlingDocSample.scala +++ b/akka-docs/rst/scala/code/docs/actor/FaultHandlingDocSample.scala @@ -9,8 +9,7 @@ import language.postfixOps //#imports import akka.actor._ import akka.actor.SupervisorStrategy._ -import scala.concurrent.util.duration._ -import scala.concurrent.util.Duration +import scala.concurrent.duration._ import akka.util.Timeout import akka.event.LoggingReceive import akka.pattern.{ ask, pipe } diff --git a/akka-docs/rst/scala/code/docs/actor/FaultHandlingDocSpec.scala b/akka-docs/rst/scala/code/docs/actor/FaultHandlingDocSpec.scala index d96771a87a..5c35ae6e2a 100644 --- a/akka-docs/rst/scala/code/docs/actor/FaultHandlingDocSpec.scala +++ b/akka-docs/rst/scala/code/docs/actor/FaultHandlingDocSpec.scala @@ -22,7 +22,7 @@ object FaultHandlingDocSpec { //#strategy import akka.actor.OneForOneStrategy import akka.actor.SupervisorStrategy._ - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ override val supervisorStrategy = OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1 minute) { @@ -44,7 +44,7 @@ object FaultHandlingDocSpec { //#strategy2 import akka.actor.OneForOneStrategy import akka.actor.SupervisorStrategy._ - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ override val supervisorStrategy = OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = 1 minute) { diff --git a/akka-docs/rst/scala/code/docs/actor/SchedulerDocSpec.scala b/akka-docs/rst/scala/code/docs/actor/SchedulerDocSpec.scala index 2d76628089..5b46d94298 100644 --- a/akka-docs/rst/scala/code/docs/actor/SchedulerDocSpec.scala +++ b/akka-docs/rst/scala/code/docs/actor/SchedulerDocSpec.scala @@ -8,7 +8,7 @@ import language.postfixOps //#imports1 import akka.actor.Actor import akka.actor.Props -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ //#imports1 diff --git a/akka-docs/rst/scala/code/docs/actor/TypedActorDocSpec.scala b/akka-docs/rst/scala/code/docs/actor/TypedActorDocSpec.scala index 7ef1204a7d..487ff8f04c 100644 --- a/akka-docs/rst/scala/code/docs/actor/TypedActorDocSpec.scala +++ b/akka-docs/rst/scala/code/docs/actor/TypedActorDocSpec.scala @@ -7,7 +7,7 @@ import language.postfixOps //#imports import scala.concurrent.{ Promise, Future, Await } -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.{ ActorContext, TypedActor, TypedProps } //#imports diff --git a/akka-docs/rst/scala/code/docs/agent/AgentDocSpec.scala b/akka-docs/rst/scala/code/docs/agent/AgentDocSpec.scala index 1f855057e4..1eaf81f15d 100644 --- a/akka-docs/rst/scala/code/docs/agent/AgentDocSpec.scala +++ b/akka-docs/rst/scala/code/docs/agent/AgentDocSpec.scala @@ -6,7 +6,7 @@ package docs.agent import language.postfixOps import akka.agent.Agent -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.util.Timeout import akka.testkit._ @@ -99,7 +99,7 @@ class AgentDocSpec extends AkkaSpec { val agent = Agent(0) //#read-await - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ import akka.util.Timeout implicit val timeout = Timeout(5 seconds) @@ 
-126,7 +126,7 @@ class AgentDocSpec extends AkkaSpec { "transfer example" in { //#transfer-example import akka.agent.Agent - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ import akka.util.Timeout import scala.concurrent.stm._ diff --git a/akka-docs/rst/scala/code/docs/camel/Consumers.scala b/akka-docs/rst/scala/code/docs/camel/Consumers.scala index 1d500cf04c..f1d184ec66 100644 --- a/akka-docs/rst/scala/code/docs/camel/Consumers.scala +++ b/akka-docs/rst/scala/code/docs/camel/Consumers.scala @@ -59,7 +59,7 @@ object Consumers { object Sample4 { //#Consumer4 import akka.camel.{ CamelMessage, Consumer } - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ class Consumer4 extends Consumer { def endpointUri = "jetty:http://localhost:8877/camel/default" diff --git a/akka-docs/rst/scala/code/docs/camel/Introduction.scala b/akka-docs/rst/scala/code/docs/camel/Introduction.scala index 348e6ed914..e1b5f17a17 100644 --- a/akka-docs/rst/scala/code/docs/camel/Introduction.scala +++ b/akka-docs/rst/scala/code/docs/camel/Introduction.scala @@ -79,7 +79,7 @@ object Introduction { { //#CamelActivation import akka.camel.{ CamelMessage, Consumer } - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ class MyEndpoint extends Consumer { def endpointUri = "mina:tcp://localhost:6200?textline=true" diff --git a/akka-docs/rst/scala/code/docs/camel/Producers.scala b/akka-docs/rst/scala/code/docs/camel/Producers.scala index fe471eec89..8835ec7df3 100644 --- a/akka-docs/rst/scala/code/docs/camel/Producers.scala +++ b/akka-docs/rst/scala/code/docs/camel/Producers.scala @@ -17,7 +17,7 @@ object Producers { //#Producer1 //#AskProducer import akka.pattern.ask - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ implicit val timeout = Timeout(10 seconds) val system = ActorSystem("some-system") diff --git a/akka-docs/rst/scala/code/docs/dataflow/DataflowDocSpec.scala b/akka-docs/rst/scala/code/docs/dataflow/DataflowDocSpec.scala index 8cd02a56f3..345d23b4ac 100644 --- a/akka-docs/rst/scala/code/docs/dataflow/DataflowDocSpec.scala +++ b/akka-docs/rst/scala/code/docs/dataflow/DataflowDocSpec.scala @@ -5,7 +5,7 @@ package docs.dataflow import language.postfixOps -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import scala.concurrent.{ Await, Future, Promise } import org.scalatest.WordSpec import org.scalatest.matchers.MustMatchers @@ -44,21 +44,23 @@ class DataflowDocSpec extends WordSpec with MustMatchers { } "demonstrate the use of dataflow variables" in { - def println[T](any: Try[T]): Unit = any.get must be === 20 + val result = Promise[Int]() + def println(any: Try[Int]): Unit = result.complete(any) //#dataflow-variable-a + val v1, v2 = Promise[Int]() flow { - val v1, v2 = Promise[Int]() - // v1 will become the value of v2 + 10 when v2 gets a value v1 << v2() + 10 - v2 << flow { 5 } // As you can see, no blocking! v1() + v2() } onComplete println + flow { v2 << 5 } // As you can see, no blocking above! 
//#dataflow-variable-a + Await.result(result.future, 10.seconds) must be === 20 } "demonstrate the difference between for and flow" in { - def println[T](any: Try[T]): Unit = any.get must be === 2 + val result = Promise[Int]() + def println(any: Try[Int]): Unit = result.tryComplete(any) //#for-vs-flow val f1, f2 = Future { 1 } @@ -68,6 +70,7 @@ class DataflowDocSpec extends WordSpec with MustMatchers { usingFor onComplete println usingFlow onComplete println //#for-vs-flow + Await.result(result.future, 10.seconds) must be === 2 } } diff --git a/akka-docs/rst/scala/code/docs/dispatcher/DispatcherDocSpec.scala b/akka-docs/rst/scala/code/docs/dispatcher/DispatcherDocSpec.scala index 7d06bb43da..666e533c18 100644 --- a/akka-docs/rst/scala/code/docs/dispatcher/DispatcherDocSpec.scala +++ b/akka-docs/rst/scala/code/docs/dispatcher/DispatcherDocSpec.scala @@ -10,7 +10,7 @@ import org.scalatest.matchers.MustMatchers import akka.testkit.AkkaSpec import akka.event.Logging import akka.event.LoggingAdapter -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.{ Props, Actor, PoisonPill, ActorSystem } object DispatcherDocSpec { @@ -186,6 +186,13 @@ class DispatcherDocSpec extends AkkaSpec(DispatcherDocSpec.config) { //#defining-pinned-dispatcher } + "looking up a dispatcher" in { + //#lookup + // for use with Futures, Scheduler, etc. + implicit val executionContext = system.dispatchers.lookup("my-dispatcher") + //#lookup + } + "defining priority dispatcher" in { //#prio-dispatcher diff --git a/akka-docs/rst/scala/code/docs/extension/SettingsExtensionDocSpec.scala b/akka-docs/rst/scala/code/docs/extension/SettingsExtensionDocSpec.scala index 831ec28b21..502a214761 100644 --- a/akka-docs/rst/scala/code/docs/extension/SettingsExtensionDocSpec.scala +++ b/akka-docs/rst/scala/code/docs/extension/SettingsExtensionDocSpec.scala @@ -8,7 +8,7 @@ import akka.actor.Extension import akka.actor.ExtensionId import akka.actor.ExtensionIdProvider import akka.actor.ExtendedActorSystem -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import com.typesafe.config.Config import java.util.concurrent.TimeUnit diff --git a/akka-docs/rst/scala/code/docs/future/FutureDocSpec.scala b/akka-docs/rst/scala/code/docs/future/FutureDocSpec.scala index 86aa0ba382..a80f920a6b 100644 --- a/akka-docs/rst/scala/code/docs/future/FutureDocSpec.scala +++ b/akka-docs/rst/scala/code/docs/future/FutureDocSpec.scala @@ -9,7 +9,7 @@ import akka.testkit._ import akka.actor.{ Actor, Props } import akka.actor.Status import akka.util.Timeout -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import java.lang.IllegalStateException import scala.concurrent.{ Await, ExecutionContext, Future, Promise } import scala.util.{ Failure, Success } @@ -39,6 +39,9 @@ object FutureDocSpec { class FutureDocSpec extends AkkaSpec { import FutureDocSpec._ import system.dispatcher + + val println: PartialFunction[Any, Unit] = { case _ ⇒ } + "demonstrate usage custom ExecutionContext" in { val yourExecutorServiceGoesHere = java.util.concurrent.Executors.newSingleThreadExecutor() //#diy-execution-context @@ -62,12 +65,18 @@ class FutureDocSpec extends AkkaSpec { import scala.concurrent.Await import akka.pattern.ask import akka.util.Timeout - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ implicit val timeout = Timeout(5 seconds) val future = actor ? 
msg // enabled by the “ask” import val result = Await.result(future, timeout.duration).asInstanceOf[String] //#ask-blocking + + //#pipe-to + import akka.pattern.pipe + future pipeTo actor + //#pipe-to + result must be("HELLO") } @@ -88,14 +97,14 @@ class FutureDocSpec extends AkkaSpec { //#future-eval import scala.concurrent.Await import scala.concurrent.Future - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ val future = Future { "Hello" + "World" } - val result = Await.result(future, 1 second) + future foreach println //#future-eval - result must be("HelloWorld") + Await.result(future, 1 second) must be("HelloWorld") } "demonstrate usage of map" in { @@ -106,10 +115,11 @@ class FutureDocSpec extends AkkaSpec { val f2 = f1 map { x ⇒ x.length } + f2 foreach println + //#map val result = Await.result(f2, 1 second) result must be(10) f1.value must be(Some(Success("HelloWorld"))) - //#map } "demonstrate wrong usage of nested map" in { @@ -123,6 +133,7 @@ class FutureDocSpec extends AkkaSpec { x.length * y } } + f3 foreach println //#wrong-nested-map Await.ready(f3, 1 second) } @@ -138,25 +149,30 @@ class FutureDocSpec extends AkkaSpec { x.length * y } } + f3 foreach println + //#flat-map val result = Await.result(f3, 1 second) result must be(30) - //#flat-map } "demonstrate usage of filter" in { //#filter val future1 = Future.successful(4) val future2 = future1.filter(_ % 2 == 0) - val result = Await.result(future2, 1 second) - result must be(4) + + future2 foreach println val failedFilter = future1.filter(_ % 2 == 1).recover { // When filter fails, it will have a java.util.NoSuchElementException case m: NoSuchElementException ⇒ 0 } + + failedFilter foreach println + //#filter + val result = Await.result(future2, 1 second) + result must be(4) val result2 = Await.result(failedFilter, 1 second) result2 must be(0) //Can only be 0 when there was a MatchError - //#filter } "demonstrate usage of for comprehension" in { @@ -171,9 +187,10 @@ class FutureDocSpec extends AkkaSpec { // Note that the execution of futures a, b, and c // are not done in parallel. 
+ f foreach println + //#for-comprehension val result = Await.result(f, 1 second) result must be(24) - //#for-comprehension } "demonstrate wrong way of composing" in { @@ -220,8 +237,9 @@ class FutureDocSpec extends AkkaSpec { c ← ask(actor3, (a + b)).mapTo[Int] } yield c - val result = Await.result(f3, 1 second).asInstanceOf[Int] + f3 foreach println //#composing + val result = Await.result(f3, 1 second).asInstanceOf[Int] result must be(3) } @@ -236,25 +254,28 @@ class FutureDocSpec extends AkkaSpec { val futureList = Future.sequence(listOfFutures) // Find the sum of the odd numbers - val oddSum = Await.result(futureList.map(_.sum), 1 second).asInstanceOf[Int] - oddSum must be(10000) + val oddSum = futureList.map(_.sum) + oddSum foreach println //#sequence-ask + Await.result(oddSum, 1 second).asInstanceOf[Int] must be(10000) } "demonstrate usage of sequence" in { //#sequence val futureList = Future.sequence((1 to 100).toList.map(x ⇒ Future(x * 2 - 1))) - val oddSum = Await.result(futureList.map(_.sum), 1 second).asInstanceOf[Int] - oddSum must be(10000) + val oddSum = futureList.map(_.sum) + oddSum foreach println //#sequence + Await.result(oddSum, 1 second).asInstanceOf[Int] must be(10000) } "demonstrate usage of traverse" in { //#traverse val futureList = Future.traverse((1 to 100).toList)(x ⇒ Future(x * 2 - 1)) - val oddSum = Await.result(futureList.map(_.sum), 1 second).asInstanceOf[Int] - oddSum must be(10000) + val oddSum = futureList.map(_.sum) + oddSum foreach println //#traverse + Await.result(oddSum, 1 second).asInstanceOf[Int] must be(10000) } "demonstrate usage of fold" in { @@ -262,8 +283,9 @@ class FutureDocSpec extends AkkaSpec { // Create a sequence of Futures val futures = for (i ← 1 to 1000) yield Future(i * 2) val futureSum = Future.fold(futures)(0)(_ + _) - Await.result(futureSum, 1 second) must be(1001000) + futureSum foreach println //#fold + Await.result(futureSum, 1 second) must be(1001000) } "demonstrate usage of reduce" in { @@ -271,8 +293,9 @@ class FutureDocSpec extends AkkaSpec { // Create a sequence of Futures val futures = for (i ← 1 to 1000) yield Future(i * 2) val futureSum = Future.reduce(futures)(_ + _) - Await.result(futureSum, 1 second) must be(1001000) + futureSum foreach println //#reduce + Await.result(futureSum, 1 second) must be(1001000) } "demonstrate usage of recover" in { @@ -283,6 +306,7 @@ class FutureDocSpec extends AkkaSpec { val future = akka.pattern.ask(actor, msg1) recover { case e: ArithmeticException ⇒ 0 } + future foreach println //#recover Await.result(future, 1 second) must be(0) } @@ -297,6 +321,7 @@ class FutureDocSpec extends AkkaSpec { case foo: IllegalArgumentException ⇒ Future.failed[Int](new IllegalStateException("All br0ken!")) } + future foreach println //#try-recover Await.result(future, 1 second) must be(0) } @@ -306,6 +331,7 @@ class FutureDocSpec extends AkkaSpec { val future2 = Future { "bar" } //#zip val future3 = future1 zip future2 map { case (a, b) ⇒ a + " " + b } + future3 foreach println //#zip Await.result(future3, 1 second) must be("foo bar") } @@ -321,6 +347,7 @@ class FutureDocSpec extends AkkaSpec { } andThen { case _ ⇒ watchSomeTV } + result foreach println //#and-then Await.result(result, 1 second) must be("foo bar") } @@ -331,6 +358,7 @@ class FutureDocSpec extends AkkaSpec { val future3 = Future { "pigdog" } //#fallback-to val future4 = future1 fallbackTo future2 fallbackTo future3 + future4 foreach println //#fallback-to Await.result(future4, 1 second) must be("foo") } @@ -389,9 +417,23 @@ class 
FutureDocSpec extends AkkaSpec { val delayed = after(200 millis, using = system.scheduler)(Future.failed( new IllegalStateException("OHNOES"))) val future = Future { Thread.sleep(1000); "foo" } - val result = future either delayed + val result = Future firstCompletedOf Seq(future, delayed) //#after intercept[IllegalStateException] { Await.result(result, 2 second) } } + "demonstrate context.dispatcher" in { + //#context-dispatcher + class A extends Actor { + import context.dispatcher + val f = Future("hello") + def receive = { + //#receive-omitted + case _ ⇒ + //#receive-omitted + } + } + //#context-dispatcher + } + } diff --git a/akka-docs/rst/scala/code/docs/pattern/SchedulerPatternSpec.scala b/akka-docs/rst/scala/code/docs/pattern/SchedulerPatternSpec.scala new file mode 100644 index 0000000000..fba8ed9ff9 --- /dev/null +++ b/akka-docs/rst/scala/code/docs/pattern/SchedulerPatternSpec.scala @@ -0,0 +1,99 @@ +/** + * Copyright (C) 2009-2012 Typesafe Inc. + */ + +package docs.pattern + +import language.postfixOps + +import akka.actor.{ Props, ActorRef, Actor } +import scala.concurrent.duration._ +import akka.testkit.{ TimingTest, AkkaSpec, filterException } +import docs.pattern.SchedulerPatternSpec.ScheduleInConstructor + +object SchedulerPatternSpec { + //#schedule-constructor + class ScheduleInConstructor extends Actor { + import context.dispatcher + val tick = + context.system.scheduler.schedule(500 millis, 1000 millis, self, "tick") + //#schedule-constructor + // this var and constructor is declared here to not show up in the docs + var target: ActorRef = null + def this(target: ActorRef) = { this(); this.target = target } + //#schedule-constructor + + override def postStop() = tick.cancel() + + def receive = { + case "tick" ⇒ + // do something useful here + //#schedule-constructor + target ! "tick" + case "restart" ⇒ + throw new ArithmeticException + //#schedule-constructor + } + } + //#schedule-constructor + + //#schedule-receive + class ScheduleInReceive extends Actor { + import context._ + //#schedule-receive + // this var and constructor is declared here to not show up in the docs + var target: ActorRef = null + def this(target: ActorRef) = { this(); this.target = target } + //#schedule-receive + + override def preStart() = + system.scheduler.scheduleOnce(500 millis, self, "tick") + + // override postRestart so we don't call preStart and schedule a new message + override def postRestart(reason: Throwable) = {} + + def receive = { + case "tick" ⇒ + // send another periodic tick after the specified delay + system.scheduler.scheduleOnce(1000 millis, self, "tick") + // do something useful here + //#schedule-receive + target ! "tick" + case "restart" ⇒ + throw new ArithmeticException + //#schedule-receive + } + } + //#schedule-receive +} + +class SchedulerPatternSpec extends AkkaSpec { + + def testSchedule(actor: ActorRef, startDuration: FiniteDuration, + afterRestartDuration: FiniteDuration) = { + + filterException[ArithmeticException] { + within(startDuration) { + expectMsg("tick") + expectMsg("tick") + expectMsg("tick") + } + actor ! 
"restart" + within(afterRestartDuration) { + expectMsg("tick") + expectMsg("tick") + } + system.stop(actor) + } + } + + "send periodic ticks from the constructor" taggedAs TimingTest in { + testSchedule(system.actorOf(Props(new ScheduleInConstructor(testActor))), + 3000 millis, 2000 millis) + } + + "send ticks from the preStart and receive" taggedAs TimingTest in { + testSchedule(system.actorOf(Props(new ScheduleInConstructor(testActor))), + 3000 millis, 2500 millis) + } +} diff --git a/akka-docs/rst/scala/code/docs/routing/RouterTypeExample.scala b/akka-docs/rst/scala/code/docs/routing/RouterTypeExample.scala index 4f48116b18..6fc5920ec9 100644 --- a/akka-docs/rst/scala/code/docs/routing/RouterTypeExample.scala +++ b/akka-docs/rst/scala/code/docs/routing/RouterTypeExample.scala @@ -8,7 +8,7 @@ import language.postfixOps import akka.routing.{ ScatterGatherFirstCompletedRouter, BroadcastRouter, RandomRouter, RoundRobinRouter } import annotation.tailrec import akka.actor.{ Props, Actor } -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.util.Timeout import scala.concurrent.Await import akka.pattern.ask diff --git a/akka-docs/rst/scala/code/docs/serialization/SerializationDocSpec.scala b/akka-docs/rst/scala/code/docs/serialization/SerializationDocSpec.scala index d979952887..1607556ab2 100644 --- a/akka-docs/rst/scala/code/docs/serialization/SerializationDocSpec.scala +++ b/akka-docs/rst/scala/code/docs/serialization/SerializationDocSpec.scala @@ -2,19 +2,6 @@ * Copyright (C) 2009-2012 Typesafe Inc. */ -//#extract-transport -package object akka { - // needs to be inside the akka package because accessing unsupported API ! - def transportOf(system: actor.ExtendedActorSystem): remote.RemoteTransport = - system.provider match { - case r: remote.RemoteActorRefProvider ⇒ r.transport - case _ ⇒ - throw new UnsupportedOperationException( - "this method requires the RemoteActorRefProvider to be configured") - } -} -//#extract-transport - package docs.serialization { import org.scalatest.matchers.MustMatchers @@ -216,7 +203,7 @@ package docs.serialization { object ExternalAddress extends ExtensionKey[ExternalAddressExt] class ExternalAddressExt(system: ExtendedActorSystem) extends Extension { - def addressForAkka: Address = akka.transportOf(system).address + def addressForAkka: Address = system.provider.getDefaultAddress } def serializeAkkaDefault(ref: ActorRef): String = diff --git a/akka-docs/rst/scala/code/docs/testkit/TestKitUsageSpec.scala b/akka-docs/rst/scala/code/docs/testkit/TestKitUsageSpec.scala index d767879cc2..8b153c5944 100644 --- a/akka-docs/rst/scala/code/docs/testkit/TestKitUsageSpec.scala +++ b/akka-docs/rst/scala/code/docs/testkit/TestKitUsageSpec.scala @@ -21,7 +21,8 @@ import akka.actor.Props import akka.testkit.DefaultTimeout import akka.testkit.ImplicitSender import akka.testkit.TestKit -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ +import scala.collection.immutable /** * a Test to show some TestKit examples @@ -38,8 +39,8 @@ class TestKitUsageSpec val filterRef = system.actorOf(Props(new FilteringActor(testActor))) val randomHead = Random.nextInt(6) val randomTail = Random.nextInt(10) - val headList = Seq().padTo(randomHead, "0") - val tailList = Seq().padTo(randomTail, "1") + val headList = immutable.Seq().padTo(randomHead, "0") + val tailList = immutable.Seq().padTo(randomTail, "1") val seqRef = system.actorOf(Props(new SequencingActor(testActor, headList, tailList))) @@ -145,7 +146,7 @@ object 
TestKitUsageSpec { * like to test that the interesting value is received and that you cant * be bothered with the rest */ - class SequencingActor(next: ActorRef, head: Seq[String], tail: Seq[String]) + class SequencingActor(next: ActorRef, head: immutable.Seq[String], tail: immutable.Seq[String]) extends Actor { def receive = { case msg ⇒ { diff --git a/akka-docs/rst/scala/code/docs/testkit/TestkitDocSpec.scala b/akka-docs/rst/scala/code/docs/testkit/TestkitDocSpec.scala index 8a78fc8c7a..e50b6e8fdf 100644 --- a/akka-docs/rst/scala/code/docs/testkit/TestkitDocSpec.scala +++ b/akka-docs/rst/scala/code/docs/testkit/TestkitDocSpec.scala @@ -8,7 +8,7 @@ import scala.util.Success //#imports-test-probe import akka.testkit.TestProbe -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor._ import scala.concurrent.Future @@ -89,7 +89,7 @@ class TestkitDocSpec extends AkkaSpec with DefaultTimeout with ImplicitSender { //#test-fsm-ref import akka.testkit.TestFSMRef import akka.actor.FSM - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ val fsm = TestFSMRef(new Actor with FSM[Int, String] { startWith(1, "") @@ -110,11 +110,11 @@ class TestkitDocSpec extends AkkaSpec with DefaultTimeout with ImplicitSender { fsm.setState(stateName = 1) assert(fsm.stateName == 1) - assert(fsm.timerActive_?("test") == false) + assert(fsm.isTimerActive("test") == false) fsm.setTimer("test", 12, 10 millis, true) - assert(fsm.timerActive_?("test") == true) + assert(fsm.isTimerActive("test") == true) fsm.cancelTimer("test") - assert(fsm.timerActive_?("test") == false) + assert(fsm.isTimerActive("test") == false) //#test-fsm-ref } @@ -122,7 +122,7 @@ class TestkitDocSpec extends AkkaSpec with DefaultTimeout with ImplicitSender { //#test-behavior import akka.testkit.TestActorRef - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ import scala.concurrent.Await import akka.pattern.ask @@ -161,7 +161,7 @@ class TestkitDocSpec extends AkkaSpec with DefaultTimeout with ImplicitSender { type Worker = MyActor //#test-within import akka.actor.Props - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ val worker = system.actorOf(Props[Worker]) within(200 millis) { @@ -175,7 +175,7 @@ class TestkitDocSpec extends AkkaSpec with DefaultTimeout with ImplicitSender { "demonstrate dilated duration" in { //#duration-dilation - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ import akka.testkit._ 10.milliseconds.dilated //#duration-dilation @@ -208,7 +208,7 @@ class TestkitDocSpec extends AkkaSpec with DefaultTimeout with ImplicitSender { "demonstrate probe reply" in { import akka.testkit.TestProbe - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ import akka.pattern.ask //#test-probe-reply val probe = TestProbe() diff --git a/akka-docs/rst/scala/code/docs/transactor/TransactorDocSpec.scala b/akka-docs/rst/scala/code/docs/transactor/TransactorDocSpec.scala index 2faa1a9703..2b75a15b92 100644 --- a/akka-docs/rst/scala/code/docs/transactor/TransactorDocSpec.scala +++ b/akka-docs/rst/scala/code/docs/transactor/TransactorDocSpec.scala @@ -8,7 +8,7 @@ import language.postfixOps import akka.actor._ import akka.transactor._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.util.Timeout import akka.testkit._ import scala.concurrent.stm._ @@ -141,7 +141,7 @@ class TransactorDocSpec extends AkkaSpec { //#run-coordinated-example import 
scala.concurrent.Await - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ import akka.util.Timeout import akka.pattern.ask @@ -168,7 +168,7 @@ class TransactorDocSpec extends AkkaSpec { import CoordinatedApi._ //#implicit-timeout - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ import akka.util.Timeout implicit val timeout = Timeout(5 seconds) diff --git a/akka-docs/rst/scala/code/docs/zeromq/ZeromqDocSpec.scala b/akka-docs/rst/scala/code/docs/zeromq/ZeromqDocSpec.scala index b02055a1b8..ab2e4f4d27 100644 --- a/akka-docs/rst/scala/code/docs/zeromq/ZeromqDocSpec.scala +++ b/akka-docs/rst/scala/code/docs/zeromq/ZeromqDocSpec.scala @@ -5,13 +5,13 @@ package docs.zeromq import language.postfixOps +import scala.concurrent.duration._ +import scala.collection.immutable import akka.actor.{ Actor, Props } -import scala.concurrent.util.duration._ import akka.testkit._ -import akka.zeromq.{ ZeroMQVersion, ZeroMQExtension } +import akka.zeromq.{ ZeroMQVersion, ZeroMQExtension, SocketType, Bind } import java.text.SimpleDateFormat import java.util.Date -import akka.zeromq.{ SocketType, Bind } object ZeromqDocSpec { @@ -29,7 +29,8 @@ object ZeromqDocSpec { class HealthProbe extends Actor { - val pubSocket = ZeroMQExtension(context.system).newSocket(SocketType.Pub, Bind("tcp://127.0.0.1:1235")) + val pubSocket = ZeroMQExtension(context.system).newSocket(SocketType.Pub, + Bind("tcp://127.0.0.1:1235")) val memory = ManagementFactory.getMemoryMXBean val os = ManagementFactory.getOperatingSystemMXBean val ser = SerializationExtension(context.system) @@ -52,12 +53,12 @@ object ZeromqDocSpec { val heapPayload = ser.serialize(Heap(timestamp, currentHeap.getUsed, currentHeap.getMax)).get // the first frame is the topic, second is the message - pubSocket ! ZMQMessage(Seq(Frame("health.heap"), Frame(heapPayload))) + pubSocket ! ZMQMessage(immutable.Seq(Frame("health.heap"), Frame(heapPayload))) // use akka SerializationExtension to convert to bytes val loadPayload = ser.serialize(Load(timestamp, os.getSystemLoadAverage)).get // the first frame is the topic, second is the message - pubSocket ! ZMQMessage(Seq(Frame("health.load"), Frame(loadPayload))) + pubSocket ! ZMQMessage(immutable.Seq(Frame("health.load"), Frame(loadPayload))) } } //#health @@ -146,7 +147,7 @@ class ZeromqDocSpec extends AkkaSpec("akka.loglevel=INFO") { val payload = Array.empty[Byte] //#pub-topic - pubSocket ! ZMQMessage(Seq(Frame("foo.bar"), Frame(payload))) + pubSocket ! ZMQMessage(Frame("foo.bar"), Frame(payload)) //#pub-topic system.stop(subSocket) diff --git a/akka-docs/rst/scala/dispatchers.rst b/akka-docs/rst/scala/dispatchers.rst index ac95c9c42c..68182af86e 100644 --- a/akka-docs/rst/scala/dispatchers.rst +++ b/akka-docs/rst/scala/dispatchers.rst @@ -13,6 +13,15 @@ Default dispatcher Every ``ActorSystem`` will have a default dispatcher that will be used in case nothing else is configured for an ``Actor``. The default dispatcher can be configured, and is by default a ``Dispatcher`` with a "fork-join-executor", which gives excellent performance in most cases. +.. _dispatcher-lookup-scala: + +Looking up a Dispatcher +----------------------- + +Dispatchers implement the :class:`ExecutionContext` interface and can thus be used to run :class:`Future` invocations etc. + +.. 
includecode:: code/docs/dispatcher/DispatcherDocSpec.scala#lookup + Setting the dispatcher for an Actor ----------------------------------- diff --git a/akka-docs/rst/scala/fault-tolerance.rst b/akka-docs/rst/scala/fault-tolerance.rst index f0e3952c99..6b6559e647 100644 --- a/akka-docs/rst/scala/fault-tolerance.rst +++ b/akka-docs/rst/scala/fault-tolerance.rst @@ -24,9 +24,6 @@ sample as it is easy to follow the log output to understand what is happening in fault-tolerance-sample -.. includecode:: code/docs/actor/FaultHandlingDocSample.scala#all - :exclude: imports,messages,dummydb - Creating a Supervisor Strategy ------------------------------ diff --git a/akka-docs/rst/scala/fsm.rst b/akka-docs/rst/scala/fsm.rst index b8fac5a6e3..f30c4f36dc 100644 --- a/akka-docs/rst/scala/fsm.rst +++ b/akka-docs/rst/scala/fsm.rst @@ -124,6 +124,14 @@ obvious that an actor is actually created: :include: simple-fsm :exclude: fsm-body +.. note:: + + The FSM trait defines a ``receive`` method which handles internal messages + and passes everything else through to the FSM logic (according to the + current state). When overriding the ``receive`` method, keep in mind that + e.g. state timeout handling depends on actually passing the messages through + the FSM logic. + The :class:`FSM` trait takes two type parameters: #. the supertype of all state names, usually a sealed trait with case objects @@ -171,6 +179,18 @@ demonstrated below: The :class:`Event(msg: Any, data: D)` case class is parameterized with the data type held by the FSM for convenient pattern matching. +.. warning:: + + It is required that you define handlers for each of the possible FSM states, + otherwise there will be failures when trying to switch to undeclared states. + +It is recommended practice to declare the states as objects extending a +sealed trait and then verify that there is a ``when`` clause for each of the +states. If you want to leave the handling of a state “unhandled” (more below), +it still needs to be declared like this: + +.. includecode:: code/docs/actor/FSMDocSpec.scala#NullFunction + Defining the Initial State -------------------------- @@ -192,6 +212,9 @@ do something else in this case you can specify that with .. includecode:: code/docs/actor/FSMDocSpec.scala :include: unhandled-syntax +Within this handler the state of the FSM may be queried using the +:meth:`stateName` method. + **IMPORTANT**: This handler is not stacked, meaning that each invocation of :func:`whenUnhandled` replaces the previously installed handler. @@ -348,7 +371,7 @@ which is guaranteed to work immediately, meaning that the scheduled message will not be processed after this call even if the timer already fired and queued it. The status of any timer may be inquired with - :func:`timerActive_?(name)` + :func:`isTimerActive(name)` These named timers complement state timeouts because they are not affected by intervening reception of other messages. diff --git a/akka-docs/rst/scala/futures.rst b/akka-docs/rst/scala/futures.rst index 1b4df4154b..6cfb188a6d 100644 --- a/akka-docs/rst/scala/futures.rst +++ b/akka-docs/rst/scala/futures.rst @@ -22,13 +22,26 @@ by the ``ExecutionContext`` companion object to wrap ``Executors`` and ``Executo .. includecode:: code/docs/future/FutureDocSpec.scala :include: diy-execution-context +Within Actors +^^^^^^^^^^^^^ + +Each actor is configured to be run on a :class:`MessageDispatcher`, and that +dispatcher doubles as an :class:`ExecutionContext`. 
If the nature of the Future +calls invoked by the actor matches or is compatible with the activities of that +actor (e.g. all CPU bound and no latency requirements), then it may be easiest +to reuse the dispatcher for running the Futures by importing +``context.dispatcher``. + +.. includecode:: code/docs/future/FutureDocSpec.scala#context-dispatcher + :exclude: receive-omitted + Use With Actors --------------- There are generally two ways of getting a reply from an ``Actor``: the first is by a sent message (``actor ! msg``), which only works if the original sender was an ``Actor``) and the second is through a ``Future``. -Using an ``Actor``\'s ``?`` method to send a message will return a ``Future``. To wait for and retrieve the actual result the simplest method is: +Using an ``Actor``\'s ``?`` method to send a message will return a ``Future``: .. includecode:: code/docs/future/FutureDocSpec.scala :include: ask-blocking @@ -46,6 +59,11 @@ When using non-blocking it is better to use the ``mapTo`` method to safely try t The ``mapTo`` method will return a new ``Future`` that contains the result if the cast was successful, or a ``ClassCastException`` if not. Handling ``Exception``\s will be discussed further within this documentation. +To send the result of a ``Future`` to an ``Actor``, you can use the ``pipe`` construct: + +.. includecode:: code/docs/future/FutureDocSpec.scala + :include: pipe-to + Use Directly ------------ @@ -137,6 +155,12 @@ First an example of using ``Await.result``: .. includecode:: code/docs/future/FutureDocSpec.scala :include: composing-wrong +.. warning:: + + ``Await.result`` and ``Await.ready`` are provided for exceptional situations where you **must** block, + a good rule of thumb is to only use them if you know why you **must** block. For all other cases, use + asynchronous composition as described below. + Here we wait for the results from the first 2 ``Actor``\s before sending that result to the third ``Actor``. We called ``Await.result`` 3 times, which caused our little program to block 3 times before getting our final result. Now compare that to this example: diff --git a/akka-docs/rst/scala/howto.rst b/akka-docs/rst/scala/howto.rst index 7d064e2491..dcdebe06db 100644 --- a/akka-docs/rst/scala/howto.rst +++ b/akka-docs/rst/scala/howto.rst @@ -111,6 +111,37 @@ This is where the Spider pattern comes in." The pattern is described `Discovering Message Flows in Actor System with the Spider Pattern `_. +Scheduling Periodic Messages +============================ + +This pattern describes how to schedule periodic messages to yourself in two different +ways. + +The first way is to set up periodic message scheduling in the constructor of the actor, +and cancel that scheduled sending in ``postStop`` or else we might have multiple registered +message sends to the same actor. + +.. note:: + + With this approach the scheduled periodic message send will be restarted with the actor on restarts. + This also means that the time period that elapses between two tick messages during a restart may drift + off based on when you restart the scheduled message sends relative to the time that the last message was + sent, and how long the initial delay is. Worst case scenario is ``interval`` plus ``initialDelay``. + +.. 
includecode:: code/docs/pattern/SchedulerPatternSpec.scala#schedule-constructor + +The second variant sets up an initial one-shot message send in the ``preStart`` method +of the actor, and then the actor, when it receives this message, sets up a new one-shot +message send. You also have to override ``postRestart`` so we don't call ``preStart`` +and schedule the initial message send again. + +.. note:: + + With this approach we won't fill up the mailbox with tick messages if the actor is + under pressure, but only schedule a new tick message when we have seen the previous one. + +.. includecode:: code/docs/pattern/SchedulerPatternSpec.scala#schedule-receive + Template Pattern ================ @@ -127,4 +158,3 @@ This is an especially nice pattern, since it does even come with some empty exam Spread the word: this is the easiest way to get famous! Please keep this pattern at the end of this file. - diff --git a/akka-docs/rst/scala/io.rst b/akka-docs/rst/scala/io.rst index 866fa8bffc..abeb6b729c 100644 --- a/akka-docs/rst/scala/io.rst +++ b/akka-docs/rst/scala/io.rst @@ -138,9 +138,9 @@ Receiving messages from the ``IOManager``: IO.Iteratee ^^^^^^^^^^^ -Included with Akka's IO support is a basic implementation of ``Iteratee``\s. ``Iteratee``\s are an effective way of handling a stream of data without needing to wait for all the data to arrive. This is especially useful when dealing with non blocking IO since we will usually receive data in chunks which may not include enough information to process, or it may contain much more data then we currently need. +Included with Akka's IO support is a basic implementation of ``Iteratee``\s. ``Iteratee``\s are an effective way of handling a stream of data without needing to wait for all the data to arrive. This is especially useful when dealing with non blocking IO since we will usually receive data in chunks which may not include enough information to process, or it may contain much more data than we currently need. -This ``Iteratee`` implementation is much more basic then what is usually found. There is only support for ``ByteString`` input, and enumerators aren't used. The reason for this limited implementation is to reduce the amount of explicit type signatures needed and to keep things simple. It is important to note that Akka's ``Iteratee``\s are completely optional, incoming data can be handled in any way, including other ``Iteratee`` libraries. +This ``Iteratee`` implementation is much more basic than what is usually found. There is only support for ``ByteString`` input, and enumerators aren't used. The reason for this limited implementation is to reduce the amount of explicit type signatures needed and to keep things simple. It is important to note that Akka's ``Iteratee``\s are completely optional, incoming data can be handled in any way, including other ``Iteratee`` libraries. ``Iteratee``\s work by processing the data that it is given and returning either the result (with any unused input) or a continuation if more input is needed. They are monadic, so methods like ``flatMap`` can be used to pass the result of an ``Iteratee`` to another. @@ -204,7 +204,7 @@ Following the path we read in the query (if it exists): .. includecode:: code/docs/io/HTTPServer.scala :include: read-query -It is much simpler then reading the path since we aren't doing any parsing of the query since there is no standard format of the query string. 
+It is much simpler than reading the path since we aren't doing any parsing of the query since there is no standard format of the query string. Both the path and query used the ``readUriPart`` ``Iteratee``, which is next: diff --git a/akka-docs/rst/scala/logging.rst b/akka-docs/rst/scala/logging.rst index 60cd3f2a61..f8c3e11f27 100644 --- a/akka-docs/rst/scala/logging.rst +++ b/akka-docs/rst/scala/logging.rst @@ -232,7 +232,7 @@ It has one single dependency; the slf4j-api jar. In runtime you also need a SLF4 .. code-block:: scala - lazy val logback = "ch.qos.logback" % "logback-classic" % "1.0.4" % "runtime" + lazy val logback = "ch.qos.logback" % "logback-classic" % "1.0.7" You need to enable the Slf4jEventHandler in the 'event-handlers' element in diff --git a/akka-docs/rst/scala/microkernel.rst b/akka-docs/rst/scala/microkernel.rst index c223f9dd45..5a1908346a 100644 --- a/akka-docs/rst/scala/microkernel.rst +++ b/akka-docs/rst/scala/microkernel.rst @@ -10,7 +10,7 @@ having to create a launcher script. The Akka Microkernel is included in the Akka download found at `downloads`_. -.. _downloads: http://akka.io/downloads +.. _downloads: http://typesafe.com/stack/downloads/akka To run an application with the microkernel you need to create a Bootable class that handles the startup and shutdown of the application. An example is included below. @@ -19,11 +19,7 @@ Put your application jar in the ``deploy`` directory to have it automatically loaded. To start the kernel use the scripts in the ``bin`` directory, passing the boot -classes for your application. - -There is a simple example of an application setup for running with the -microkernel included in the akka download. This can be run with the following -command (on a unix-based system): +classes for your application. Example command (on a unix-based system): .. code-block:: none diff --git a/akka-docs/rst/scala/remoting.rst b/akka-docs/rst/scala/remoting.rst index ca7220a419..cf12c93c60 100644 --- a/akka-docs/rst/scala/remoting.rst +++ b/akka-docs/rst/scala/remoting.rst @@ -57,6 +57,13 @@ reference file for more information: .. literalinclude:: ../../../akka-remote/src/main/resources/reference.conf :language: none +.. note:: + + Setting properties like the listening IP and port number programmatically is + best done by using something like the following: + + .. includecode:: ../java/code/docs/remoting/RemoteDeploymentDocTestBase.java#programmatic + Types of Remote Interaction ^^^^^^^^^^^^^^^^^^^^^^^^^^^ @@ -122,6 +129,15 @@ actor systems has to have a JAR containing the class. most cases is not serializable. It is best to create a factory method in the companion object of the actor’s class. +.. note:: + + You can use asterisks as wildcard matches for the actor paths, so you could specify: + ``/*/sampleActor`` and that would match all ``sampleActor`` on that level in the hierarchy. + You can also use a wildcard in the last position to match all actors at a certain level: + ``/someParent/*``. Non-wildcard matches always have higher priority than wildcards, so: + ``/foo/bar`` is considered **more specific** than ``/foo/*`` and only the highest priority match is used. + Please note that it **cannot** be used to partially match a section, like this: ``/foo*/bar``, ``/f*o/bar`` etc. + .. warning:: *Caveat:* Remote deployment ties both systems together in a tight fashion, @@ -193,7 +209,7 @@ Description of the Remoting Sample There is a more extensive remote example that comes with the Akka distribution. 
Please have a look here for more information: `Remote Sample -`_ +<@github@/akka-samples/akka-sample-remote>`_ This sample demonstrates both remote deployment and look-up of remote actors. First, let us have a look at the common setup for both scenarios (this is ``common.conf``): diff --git a/akka-docs/rst/scala/routing.rst b/akka-docs/rst/scala/routing.rst index 9dc356c98c..f04223b5d3 100644 --- a/akka-docs/rst/scala/routing.rst +++ b/akka-docs/rst/scala/routing.rst @@ -66,7 +66,7 @@ In addition to being able to supply looked-up remote actors as routees, you can make the router deploy its created children on a set of remote hosts; this will be done in round-robin fashion. In order to do that, wrap the router configuration in a :class:`RemoteRouterConfig`, attaching the remote addresses of -the nodes to deploy to. Naturally, this requires your to include the +the nodes to deploy to. Naturally, this requires you to include the ``akka-remote`` module on your classpath: .. includecode:: code/docs/routing/RouterViaProgramExample.scala#remoteRoutees @@ -430,7 +430,7 @@ Configured Custom Router It is possible to define configuration properties for custom routers. In the ``router`` property of the deployment configuration you define the fully qualified class name of the router class. The router class must extend -``akka.routing.RouterConfig`` and and have constructor with ``com.typesafe.config.Config`` parameter. +``akka.routing.RouterConfig`` and have a constructor with one ``com.typesafe.config.Config`` parameter. The deployment section of the configuration is passed to the constructor. Custom Resizer diff --git a/akka-docs/rst/scala/serialization.rst b/akka-docs/rst/scala/serialization.rst index 10283b441f..70a02faecd 100644 --- a/akka-docs/rst/scala/serialization.rst +++ b/akka-docs/rst/scala/serialization.rst @@ -138,24 +138,12 @@ concrete address handy you can create a dummy one for the right protocol using ``Address(protocol, "", "", 0)`` (assuming that the actual transport used is as lenient as Akka’s RemoteActorRefProvider). -There is a possible simplification available if you are just using the default -:class:`NettyRemoteTransport` with the :meth:`RemoteActorRefProvider`, which is -enabled by the fact that this combination has just a single remote address. -This approach relies on internal API, which means that it is not guaranteed to -be supported in future versions. To make this caveat more obvious, some bridge -code in the ``akka`` package is required to make it work: - -.. includecode:: code/docs/serialization/SerializationDocSpec.scala - :include: extract-transport - -And with this, the address extraction goes like this: +There is also a default remote address which is the one used by cluster support +(and typical systems have just this one); you can get it like this: .. includecode:: code/docs/serialization/SerializationDocSpec.scala :include: external-address-default -This solution has to be adapted once other providers are used (like the planned -extensions for clustering). - Deep serialization of Actors ---------------------------- diff --git a/akka-docs/rst/scala/testing.rst b/akka-docs/rst/scala/testing.rst index 9a80ab0e59..a57305ce30 100644 --- a/akka-docs/rst/scala/testing.rst +++ b/akka-docs/rst/scala/testing.rst @@ -713,8 +713,8 @@ Some `Specs2 `_ users have contributed examples of how to wor actually beneficial also for the third point—is to apply the TestKit together with :class:`org.specs2.specification.Scope`. 
* The Specification traits provide a :class:`Duration` DSL which uses partly - the same method names as :class:`scala.concurrent.util.Duration`, resulting in ambiguous - implicits if ``akka.util.duration._`` is imported. There are two work-arounds: + the same method names as :class:`scala.concurrent.duration.Duration`, resulting in ambiguous + implicits if ``scala.concurrent.duration._`` is imported. There are two work-arounds: * either use the Specification variant of Duration and supply an implicit conversion to the Akka Duration. This conversion is not supplied with the diff --git a/akka-docs/rst/scala/typed-actors.rst b/akka-docs/rst/scala/typed-actors.rst index ce9c608e4e..0a0597cf0d 100644 --- a/akka-docs/rst/scala/typed-actors.rst +++ b/akka-docs/rst/scala/typed-actors.rst @@ -7,7 +7,10 @@ Essentially turning method invocations into asynchronous dispatch instead of syn Typed Actors consist of 2 "parts", a public interface and an implementation, and if you've done any work in "enterprise" Java, this will be very familiar to you. As with normal Actors you have an external API (the public interface instance) that will delegate method calls asynchronously to a private instance of the implementation. -The advantage of Typed Actors vs. Actors is that with TypedActors you have a static contract, and don't need to define your own messages, the downside is that it places some limitations on what you can do and what you can't, i.e. you can't use become/unbecome. +The advantage of Typed Actors vs. Actors is that with TypedActors you have a +static contract and don't need to define your own messages; the downside is +that it places some limitations on what you can do and what you can't, i.e. you +cannot use :meth:`become`/:meth:`unbecome`. Typed Actors are implemented using `JDK Proxies `_ which provide a pretty easy-worked API to intercept method calls. 
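To make the "static contract" point above concrete, here is a minimal sketch of a Typed Actor in the style of the API used by this patch; the ``Squarer``/``SquarerImpl`` names and the ``demo`` system are illustrative only and not part of the change set:

.. code-block:: scala

   import akka.actor.{ ActorSystem, TypedActor, TypedProps }
   import scala.concurrent.Future

   // the public interface: the static contract callers compile against
   trait Squarer {
     def square(i: Int): Future[Int] // non-blocking request-reply
   }

   // the implementation, which runs inside an actor
   class SquarerImpl extends Squarer {
     def square(i: Int): Future[Int] = Future.successful(i * i)
   }

   val system = ActorSystem("demo")
   // returns a JDK proxy implementing Squarer; method invocations on it are
   // dispatched asynchronously to the SquarerImpl instance
   val squarer: Squarer =
     TypedActor(system).typedActorOf(TypedProps[SquarerImpl]())

Calls on ``squarer`` are turned into messages behind the proxy, which is the asynchronous dispatch described in the paragraph above.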
diff --git a/akka-durable-mailboxes/akka-file-mailbox/src/main/scala/akka/actor/mailbox/filebased/FileBasedMailbox.scala b/akka-durable-mailboxes/akka-file-mailbox/src/main/scala/akka/actor/mailbox/filebased/FileBasedMailbox.scala index 59e5780849..47ad1483c3 100644 --- a/akka-durable-mailboxes/akka-file-mailbox/src/main/scala/akka/actor/mailbox/filebased/FileBasedMailbox.scala +++ b/akka-durable-mailboxes/akka-file-mailbox/src/main/scala/akka/actor/mailbox/filebased/FileBasedMailbox.scala @@ -12,7 +12,7 @@ import akka.ConfigurationException import akka.dispatch._ import scala.util.control.NonFatal import akka.pattern.{ CircuitBreakerOpenException, CircuitBreaker } -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration class FileBasedMailboxType(systemSettings: ActorSystem.Settings, config: Config) extends MailboxType { private val settings = new FileBasedMailboxSettings(systemSettings, config) diff --git a/akka-durable-mailboxes/akka-file-mailbox/src/main/scala/akka/actor/mailbox/filebased/FileBasedMailboxSettings.scala b/akka-durable-mailboxes/akka-file-mailbox/src/main/scala/akka/actor/mailbox/filebased/FileBasedMailboxSettings.scala index 7ac8d0a044..305a3d3a43 100644 --- a/akka-durable-mailboxes/akka-file-mailbox/src/main/scala/akka/actor/mailbox/filebased/FileBasedMailboxSettings.scala +++ b/akka-durable-mailboxes/akka-file-mailbox/src/main/scala/akka/actor/mailbox/filebased/FileBasedMailboxSettings.scala @@ -5,10 +5,9 @@ package akka.actor.mailbox.filebased import akka.actor.mailbox._ import com.typesafe.config.Config -import scala.concurrent.util.Duration +import scala.concurrent.duration._ import java.util.concurrent.TimeUnit.MILLISECONDS import akka.actor.ActorSystem -import scala.concurrent.util.FiniteDuration class FileBasedMailboxSettings(val systemSettings: ActorSystem.Settings, val userConfig: Config) extends DurableMailboxSettings { diff --git a/akka-durable-mailboxes/akka-file-mailbox/src/main/scala/akka/actor/mailbox/filebased/filequeue/PersistentQueue.scala b/akka-durable-mailboxes/akka-file-mailbox/src/main/scala/akka/actor/mailbox/filebased/filequeue/PersistentQueue.scala index 83d539361c..33aa49061b 100644 --- a/akka-durable-mailboxes/akka-file-mailbox/src/main/scala/akka/actor/mailbox/filebased/filequeue/PersistentQueue.scala +++ b/akka-durable-mailboxes/akka-file-mailbox/src/main/scala/akka/actor/mailbox/filebased/filequeue/PersistentQueue.scala @@ -20,7 +20,6 @@ package akka.actor.mailbox.filebased.filequeue import java.io._ import scala.collection.mutable import akka.event.LoggingAdapter -import scala.concurrent.util.Duration import java.util.concurrent.TimeUnit import akka.actor.mailbox.filebased.FileBasedMailboxSettings diff --git a/akka-durable-mailboxes/akka-file-mailbox/src/test/scala/akka/actor/mailbox/filebased/FileBasedMailboxSpec.scala b/akka-durable-mailboxes/akka-file-mailbox/src/test/scala/akka/actor/mailbox/filebased/FileBasedMailboxSpec.scala index 5b982523ee..e0271461e8 100644 --- a/akka-durable-mailboxes/akka-file-mailbox/src/test/scala/akka/actor/mailbox/filebased/FileBasedMailboxSpec.scala +++ b/akka-durable-mailboxes/akka-file-mailbox/src/test/scala/akka/actor/mailbox/filebased/FileBasedMailboxSpec.scala @@ -28,7 +28,7 @@ class FileBasedMailboxSpec extends DurableMailboxSpec("File", FileBasedMailboxSp settings.QueuePath must be("file-based") settings.CircuitBreakerMaxFailures must be(5) - import scala.concurrent.util.duration._ + import scala.concurrent.duration._ settings.CircuitBreakerCallTimeout must be(5 seconds) } 
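The recurring import rewrites in this hunk and throughout the patch are all the same Scala 2.10 migration: the ``Duration`` types and DSL moved from ``scala.concurrent.util`` to ``scala.concurrent.duration``. A minimal sketch of the new import, with illustrative value names:

.. code-block:: scala

   import scala.concurrent.duration._

   // the dotted DSL works after the single import, no postfixOps needed
   val callTimeout: FiniteDuration = 5.seconds
   val tick: FiniteDuration = 100.millis
   val unbounded: Duration = Duration.Inf

Only the package changed; code like ``settings.CircuitBreakerCallTimeout must be(5 seconds)`` above keeps working once the import is updated (the space-separated form additionally needs ``language.postfixOps``).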
diff --git a/akka-durable-mailboxes/akka-mailboxes-common/src/test/scala/akka/actor/mailbox/DurableMailboxSpec.scala b/akka-durable-mailboxes/akka-mailboxes-common/src/test/scala/akka/actor/mailbox/DurableMailboxSpec.scala index 0e156a5632..2c6706f693 100644 --- a/akka-durable-mailboxes/akka-mailboxes-common/src/test/scala/akka/actor/mailbox/DurableMailboxSpec.scala +++ b/akka-durable-mailboxes/akka-mailboxes-common/src/test/scala/akka/actor/mailbox/DurableMailboxSpec.scala @@ -19,7 +19,7 @@ import DurableMailboxSpecActorFactory.{ MailboxTestActor, AccumulatorActor } import akka.actor.{ RepointableRef, Props, ActorSystem, ActorRefWithCell, ActorRef, ActorCell, Actor } import akka.dispatch.Mailbox import akka.testkit.TestKit -import scala.concurrent.util.duration.intToDurationInt +import scala.concurrent.duration._ object DurableMailboxSpecActorFactory { diff --git a/akka-kernel/src/main/dist/README b/akka-kernel/src/main/dist/README index d15368f696..3b25f979a6 100644 --- a/akka-kernel/src/main/dist/README +++ b/akka-kernel/src/main/dist/README @@ -2,7 +2,7 @@ Akka ==== -This is the Akka 2.1-SNAPSHOT download. +This is the Akka 2.2-SNAPSHOT download. Included are all libraries, documentation, and sources for Akka. diff --git a/akka-kernel/src/main/dist/bin/akka-cluster b/akka-kernel/src/main/dist/bin/akka-cluster index 0cbff520dd..e544772a13 100755 --- a/akka-kernel/src/main/dist/bin/akka-cluster +++ b/akka-kernel/src/main/dist/bin/akka-cluster @@ -16,7 +16,7 @@ declare AKKA_HOME="$(cd "$(cd "$(dirname "$0")"; pwd -P)"/..; pwd)" -[ -n "$JMX_CLIENT_CLASSPATH" ] || JMX_CLIENT_CLASSPATH="$AKKA_HOME/lib/akka/akka-kernel-*" +[ -n "$JMX_CLIENT_CLASSPATH" ] || JMX_CLIENT_CLASSPATH="$AKKA_HOME/lib/akka/akka-kernel*" # NOTE: The 'cmdline-jmxclient' is available as part of the Akka distribution. 
JMX_CLIENT="java -cp $JMX_CLIENT_CLASSPATH akka.jmx.Client -" @@ -103,6 +103,32 @@ case "$2" in $JMX_CLIENT $HOST akka:type=Cluster ClusterStatus ;; + members) + if [ $# -ne 2 ]; then + echo "Usage: $SELF members" + exit 1 + fi + + ensureNodeIsRunningAndAvailable + shift + + echo "Querying members" + $JMX_CLIENT $HOST akka:type=Cluster Members + ;; + + unreachable) + if [ $# -ne 2 ]; then + echo "Usage: $SELF unreachable" + exit 1 + fi + + ensureNodeIsRunningAndAvailable + shift + + echo "Querying unreachable members" + $JMX_CLIENT $HOST akka:type=Cluster Unreachable + ;; + leader) if [ $# -ne 2 ]; then echo "Usage: $SELF leader" @@ -129,19 +155,6 @@ case "$2" in $JMX_CLIENT $HOST akka:type=Cluster Singleton ;; - has-convergence) - if [ $# -ne 2 ]; then - echo "Usage: $SELF is-convergence" - exit 1 - fi - - ensureNodeIsRunningAndAvailable - shift - - echo "Checking for cluster convergence" - $JMX_CLIENT $HOST akka:type=Cluster Convergence - ;; - is-available) if [ $# -ne 2 ]; then echo "Usage: $SELF is-available" @@ -155,19 +168,6 @@ case "$2" in $JMX_CLIENT $HOST akka:type=Cluster Available ;; - is-running) - if [ $# -ne 2 ]; then - echo "Usage: $SELF is-running" - exit 1 - fi - - ensureNodeIsRunningAndAvailable - shift - - echo "Checking if member node on $HOST is RUNNING" - $JMX_CLIENT $HOST akka:type=Cluster Running - ;; - *) printf "Usage: bin/$SELF ...\n" printf "\n" @@ -176,12 +176,12 @@ case "$2" in printf "%26s - %s\n" "leave " "Sends a request for node with URL to LEAVE the cluster" printf "%26s - %s\n" "down " "Sends a request for marking node with URL as DOWN" printf "%26s - %s\n" member-status "Asks the member node for its current status" + printf "%26s - %s\n" members "Asks the cluster for addresses of current members" + printf "%26s - %s\n" unreachable "Asks the cluster for addresses of unreachable members" printf "%26s - %s\n" cluster-status "Asks the cluster for its current status (member ring, unavailable nodes, meta data etc.)" printf "%26s - %s\n" leader "Asks the cluster who the current leader is" printf "%26s - %s\n" is-singleton "Checks if the cluster is a singleton cluster (single node cluster)" printf "%26s - %s\n" is-available "Checks if the member node is available" - printf "%26s - %s\n" is-running "Checks if the member node is running" - printf "%26s - %s\n" has-convergence "Checks if there is a cluster convergence" printf "Where the should be on the format of 'akka://actor-system-name@hostname:port'\n" printf "\n" printf "Examples: bin/$SELF localhost:9999 is-available\n" diff --git a/akka-kernel/src/main/dist/config/application.conf b/akka-kernel/src/main/dist/config/application.conf index 4abcd7e7f8..d3a3ea5725 100644 --- a/akka-kernel/src/main/dist/config/application.conf +++ b/akka-kernel/src/main/dist/config/application.conf @@ -1,3 +1,3 @@ # In this file you can override any option defined in the 'reference.conf' files. # Copy in all or parts of the 'reference.conf' files and modify as you please. 
-# For more info about config, please visit the Akka Documentation: http://akka.io/docs/akka/2.1-SNAPSHOT/ +# For more info about config, please visit the Akka Documentation: http://akka.io/docs/akka/2.2-SNAPSHOT/ diff --git a/akka-kernel/src/main/scala/akka/kernel/Main.scala b/akka-kernel/src/main/scala/akka/kernel/Main.scala index 97ff625ab8..3fe3cac403 100644 --- a/akka-kernel/src/main/scala/akka/kernel/Main.scala +++ b/akka-kernel/src/main/scala/akka/kernel/Main.scala @@ -9,6 +9,7 @@ import java.io.File import java.lang.Boolean.getBoolean import java.net.URLClassLoader import java.util.jar.JarFile +import scala.collection.immutable import scala.collection.JavaConverters._ /** @@ -77,8 +78,8 @@ object Main { Thread.currentThread.setContextClassLoader(classLoader) - val bootClasses: Seq[String] = args.toSeq - val bootables: Seq[Bootable] = bootClasses map { c ⇒ classLoader.loadClass(c).newInstance.asInstanceOf[Bootable] } + val bootClasses: immutable.Seq[String] = args.to[immutable.Seq] + val bootables: immutable.Seq[Bootable] = bootClasses map { c ⇒ classLoader.loadClass(c).newInstance.asInstanceOf[Bootable] } for (bootable ← bootables) { log("Starting up " + bootable.getClass.getName) @@ -122,7 +123,7 @@ object Main { new URLClassLoader(urls, Thread.currentThread.getContextClassLoader) } - private def addShutdownHook(bootables: Seq[Bootable]): Unit = { + private def addShutdownHook(bootables: immutable.Seq[Bootable]): Unit = { Runtime.getRuntime.addShutdownHook(new Thread(new Runnable { def run = { log("") diff --git a/akka-osgi-aries/src/main/scala/akka/osgi/aries/blueprint/BlueprintActorSystemFactory.scala b/akka-osgi-aries/src/main/scala/akka/osgi/aries/blueprint/BlueprintActorSystemFactory.scala index 30720a230c..ce759a4fa8 100644 --- a/akka-osgi-aries/src/main/scala/akka/osgi/aries/blueprint/BlueprintActorSystemFactory.scala +++ b/akka-osgi-aries/src/main/scala/akka/osgi/aries/blueprint/BlueprintActorSystemFactory.scala @@ -15,7 +15,9 @@ import com.typesafe.config.{ Config, ConfigFactory } * If you're looking for a way to set up Akka using Blueprint without the namespace handler, you should use * [[akka.osgi.OsgiActorSystemFactory]] instead. 
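 * A construction sketch (the bundle context and system name are illustrative);
 * the two-argument constructor added below defaults the fallback class loader
 * to the akka-actor bundle's loader:
 * {{{
 * new BlueprintActorSystemFactory(bundleContext, "mySystem")
 * }}}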
*/ -class BlueprintActorSystemFactory(context: BundleContext, name: String) extends OsgiActorSystemFactory(context) { +class BlueprintActorSystemFactory(context: BundleContext, name: String, fallbackClassLoader: Option[ClassLoader]) extends OsgiActorSystemFactory(context, fallbackClassLoader) { + + def this(context: BundleContext, name: String) = this(context, name, Some(OsgiActorSystemFactory.akkaActorClassLoader)) var config: Option[String] = None diff --git a/akka-osgi-aries/src/test/scala/akka/osgi/aries/blueprint/NamespaceHandlerTest.scala b/akka-osgi-aries/src/test/scala/akka/osgi/aries/blueprint/NamespaceHandlerTest.scala index 2728a80894..79d07c65a3 100644 --- a/akka-osgi-aries/src/test/scala/akka/osgi/aries/blueprint/NamespaceHandlerTest.scala +++ b/akka-osgi-aries/src/test/scala/akka/osgi/aries/blueprint/NamespaceHandlerTest.scala @@ -32,7 +32,7 @@ class SimpleNamespaceHandlerTest extends WordSpec with MustMatchers with PojoSRT import NamespaceHandlerTest._ - val testBundles: Seq[BundleDescriptor] = buildTestBundles(Seq( + val testBundles = buildTestBundles(List( AKKA_OSGI_BLUEPRINT, bundle(TEST_BUNDLE_NAME).withBlueprintFile(getClass.getResource("simple.xml")))) @@ -62,7 +62,7 @@ class ConfigNamespaceHandlerTest extends WordSpec with MustMatchers with PojoSRT import NamespaceHandlerTest._ - val testBundles: Seq[BundleDescriptor] = buildTestBundles(Seq( + val testBundles = buildTestBundles(List( AKKA_OSGI_BLUEPRINT, bundle(TEST_BUNDLE_NAME).withBlueprintFile(getClass.getResource("config.xml")))) @@ -94,7 +94,7 @@ class DependencyInjectionNamespaceHandlerTest extends WordSpec with MustMatchers import NamespaceHandlerTest._ - val testBundles: Seq[BundleDescriptor] = buildTestBundles(Seq( + val testBundles = buildTestBundles(List( AKKA_OSGI_BLUEPRINT, bundle(TEST_BUNDLE_NAME).withBlueprintFile(getClass.getResource("injection.xml")))) diff --git a/akka-osgi/src/main/scala/akka/osgi/OsgiActorSystemFactory.scala b/akka-osgi/src/main/scala/akka/osgi/OsgiActorSystemFactory.scala index 608b80403b..447719ef39 100644 --- a/akka-osgi/src/main/scala/akka/osgi/OsgiActorSystemFactory.scala +++ b/akka-osgi/src/main/scala/akka/osgi/OsgiActorSystemFactory.scala @@ -12,12 +12,12 @@ import org.osgi.framework.BundleContext * Factory class to create ActorSystem implementations in an OSGi environment. This mainly involves dealing with * bundle classloaders appropriately to ensure that configuration files and classes get loaded properly */ -class OsgiActorSystemFactory(val context: BundleContext) { +class OsgiActorSystemFactory(val context: BundleContext, val fallbackClassLoader: Option[ClassLoader]) { /* * Classloader that delegates to the bundle for which the factory is creating an ActorSystem */ - private val classloader = BundleDelegatingClassLoader(context) + private val classloader = new BundleDelegatingClassLoader(context.getBundle, fallbackClassLoader) /** * Creates the [[akka.actor.ActorSystem]], using the name specified @@ -37,7 +37,7 @@ class OsgiActorSystemFactory(val context: BundleContext) { * loaded from the akka-actor bundle. 
*/ def actorSystemConfig(context: BundleContext): Config = - ConfigFactory.load(classloader).withFallback(ConfigFactory.defaultReference(classOf[ActorSystem].getClassLoader)) + ConfigFactory.load(classloader).withFallback(ConfigFactory.defaultReference(OsgiActorSystemFactory.akkaActorClassLoader)) /** * Determine the name for the [[akka.actor.ActorSystem]] @@ -49,8 +49,13 @@ class OsgiActorSystemFactory(val context: BundleContext) { } object OsgiActorSystemFactory { + /** + * Class loader of akka-actor bundle. + */ + def akkaActorClassLoader = classOf[ActorSystem].getClassLoader + /* * Create an [[OsgiActorSystemFactory]] instance to set up Akka in an OSGi environment */ - def apply(context: BundleContext): OsgiActorSystemFactory = new OsgiActorSystemFactory(context) + def apply(context: BundleContext): OsgiActorSystemFactory = new OsgiActorSystemFactory(context, Some(akkaActorClassLoader)) } diff --git a/akka-osgi/src/test/scala/akka/osgi/ActorSystemActivatorTest.scala b/akka-osgi/src/test/scala/akka/osgi/ActorSystemActivatorTest.scala index 80bac1529f..27455be75e 100644 --- a/akka-osgi/src/test/scala/akka/osgi/ActorSystemActivatorTest.scala +++ b/akka-osgi/src/test/scala/akka/osgi/ActorSystemActivatorTest.scala @@ -9,7 +9,8 @@ import org.scalatest.WordSpec import akka.actor.ActorSystem import akka.pattern.ask import scala.concurrent.Await -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ +import scala.collection.immutable import akka.util.Timeout import de.kalpatec.pojosr.framework.launch.BundleDescriptor import test.{ RuntimeNameActorSystemActivator, TestActivators, PingPongActorSystemActivator } @@ -32,7 +33,7 @@ class PingPongActorSystemActivatorTest extends WordSpec with MustMatchers with P import ActorSystemActivatorTest._ - val testBundles: Seq[BundleDescriptor] = buildTestBundles(Seq( + val testBundles: immutable.Seq[BundleDescriptor] = buildTestBundles(List( bundle(TEST_BUNDLE_NAME).withActivator(classOf[PingPongActorSystemActivator]))) "PingPongActorSystemActivator" must { @@ -65,7 +66,8 @@ class RuntimeNameActorSystemActivatorTest extends WordSpec with MustMatchers wit import ActorSystemActivatorTest._ - val testBundles: Seq[BundleDescriptor] = buildTestBundles(Seq(bundle(TEST_BUNDLE_NAME).withActivator(classOf[RuntimeNameActorSystemActivator]))) + val testBundles: immutable.Seq[BundleDescriptor] = + buildTestBundles(List(bundle(TEST_BUNDLE_NAME).withActivator(classOf[RuntimeNameActorSystemActivator]))) "RuntimeNameActorSystemActivator" must { diff --git a/akka-osgi/src/test/scala/akka/osgi/PojoSRTestSupport.scala b/akka-osgi/src/test/scala/akka/osgi/PojoSRTestSupport.scala index e993d04f01..d1d77daf1e 100644 --- a/akka-osgi/src/test/scala/akka/osgi/PojoSRTestSupport.scala +++ b/akka-osgi/src/test/scala/akka/osgi/PojoSRTestSupport.scala @@ -6,32 +6,32 @@ package akka.osgi import de.kalpatec.pojosr.framework.launch.{ BundleDescriptor, PojoServiceRegistryFactory, ClasspathScanner } import scala.collection.JavaConversions.seqAsJavaList -import scala.collection.JavaConversions.collectionAsScalaIterable import org.apache.commons.io.IOUtils.copy import org.osgi.framework._ import java.net.URL - import java.util.jar.JarInputStream import java.io._ import org.scalatest.{ BeforeAndAfterAll, Suite } import java.util.{ UUID, Date, ServiceLoader, HashMap } import scala.reflect.ClassTag -import scala.Some +import scala.collection.immutable +import scala.concurrent.duration._ +import scala.annotation.tailrec /** * Trait that provides support for building akka-osgi 
tests using PojoSR */ trait PojoSRTestSupport extends Suite with BeforeAndAfterAll { - val MAX_WAIT_TIME = 12800 - val START_WAIT_TIME = 50 + val MaxWaitDuration = 12800.millis + val SleepyTime = 50.millis /** * All bundles being found on the test classpath are automatically installed and started in the PojoSR runtime. * Implement this to define the extra bundles that should be available for testing. */ - def testBundles: Seq[BundleDescriptor] + def testBundles: immutable.Seq[BundleDescriptor] val bufferedLoadingErrors = new ByteArrayOutputStream() @@ -70,27 +70,28 @@ trait PojoSRTestSupport extends Suite with BeforeAndAfterAll { def serviceForType[T](implicit t: ClassTag[T]): T = context.getService(awaitReference(t.runtimeClass)).asInstanceOf[T] - def awaitReference(serviceType: Class[_]): ServiceReference = awaitReference(serviceType, START_WAIT_TIME) + def awaitReference(serviceType: Class[_]): ServiceReference = awaitReference(serviceType, SleepyTime) - def awaitReference(serviceType: Class[_], wait: Long): ServiceReference = { - val option = Option(context.getServiceReference(serviceType.getName)) - Thread.sleep(wait) //FIXME No sleep please - option match { - case Some(reference) ⇒ reference - case None if (wait > MAX_WAIT_TIME) ⇒ fail("Gave up waiting for service of type %s".format(serviceType)) - case None ⇒ awaitReference(serviceType, wait * 2) + def awaitReference(serviceType: Class[_], wait: FiniteDuration): ServiceReference = { + + @tailrec def poll(step: Duration, deadline: Deadline): ServiceReference = context.getServiceReference(serviceType.getName) match { + case null ⇒ + if (deadline.isOverdue()) fail("Gave up waiting for service of type %s".format(serviceType)) + else { + Thread.sleep((step min deadline.timeLeft max Duration.Zero).toMillis) + poll(step, deadline) + } + case some ⇒ some } + + poll(wait, Deadline.now + MaxWaitDuration) } - protected def buildTestBundles(builders: Seq[BundleDescriptorBuilder]): Seq[BundleDescriptor] = builders map (_.build) + protected def buildTestBundles(builders: immutable.Seq[BundleDescriptorBuilder]): immutable.Seq[BundleDescriptor] = + builders map (_.build) - def filterErrors()(block: ⇒ Unit): Unit = { - try { - block - } catch { - case e: Throwable ⇒ System.err.write(bufferedLoadingErrors.toByteArray); throw e - } - } + def filterErrors()(block: ⇒ Unit): Unit = + try block catch { case e: Throwable ⇒ System.err.write(bufferedLoadingErrors.toByteArray); throw e } } object PojoSRTestSupport { @@ -142,12 +143,12 @@ class BundleDescriptorBuilder(name: String) { } def extractHeaders(file: File): HashMap[String, String] = { + import scala.collection.JavaConverters.iterableAsScalaIterableConverter val headers = new HashMap[String, String]() - val jis = new JarInputStream(new FileInputStream(file)) try { - for (entry ← jis.getManifest().getMainAttributes().entrySet()) - headers.put(entry.getKey().toString(), entry.getValue().toString()) + for (entry ← jis.getManifest.getMainAttributes.entrySet.asScala) + headers.put(entry.getKey.toString, entry.getValue.toString) } finally jis.close() headers diff --git a/akka-remote-tests/src/main/resources/reference.conf b/akka-remote-tests/src/main/resources/reference.conf index 40c16c4ccd..7ad9bf6e76 100644 --- a/akka-remote-tests/src/main/resources/reference.conf +++ b/akka-remote-tests/src/main/resources/reference.conf @@ -29,5 +29,37 @@ akka { # minimum time interval which is to be inserted between reconnect attempts reconnect-backoff = 1s + + netty { + # (I&O) Used to configure the number of I/O 
worker threads on server sockets + server-socket-worker-pool { + # Min number of threads to cap factor-based number to + pool-size-min = 1 + + # The pool size factor is used to determine thread pool size + # using the following formula: ceil(available processors * factor). + # Resulting size is then bounded by the pool-size-min and + # pool-size-max values. + pool-size-factor = 1.0 + + # Max number of threads to cap factor-based number to + pool-size-max = 2 + } + + # (I&O) Used to configure the number of I/O worker threads on client sockets + client-socket-worker-pool { + # Min number of threads to cap factor-based number to + pool-size-min = 1 + + # The pool size factor is used to determine thread pool size + # using the following formula: ceil(available processors * factor). + # Resulting size is then bounded by the pool-size-min and + # pool-size-max values. + pool-size-factor = 1.0 + + # Max number of threads to cap factor-based number to + pool-size-max = 2 + } + } } } \ No newline at end of file diff --git a/akka-remote-tests/src/main/scala/akka/remote/testconductor/Conductor.scala b/akka-remote-tests/src/main/scala/akka/remote/testconductor/Conductor.scala index 7aaa6d72b3..25837cbb71 100644 --- a/akka-remote-tests/src/main/scala/akka/remote/testconductor/Conductor.scala +++ b/akka-remote-tests/src/main/scala/akka/remote/testconductor/Conductor.scala @@ -9,7 +9,7 @@ import RemoteConnection.getAddrString import TestConductorProtocol._ import org.jboss.netty.channel.{ Channel, SimpleChannelUpstreamHandler, ChannelHandlerContext, ChannelStateEvent, MessageEvent } import com.typesafe.config.ConfigFactory -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.pattern.ask import scala.concurrent.Await import akka.event.{ LoggingAdapter, Logging } @@ -21,9 +21,9 @@ import akka.actor.{ OneForOneStrategy, SupervisorStrategy, Status, Address, Pois import java.util.concurrent.ConcurrentHashMap import java.util.concurrent.TimeUnit.MILLISECONDS import akka.util.{ Timeout } -import scala.concurrent.util.{ Deadline, Duration } import scala.reflect.classTag -import scala.concurrent.util.FiniteDuration +import akka.ConfigurationException +import akka.AkkaException sealed trait Direction { def includes(other: Direction): Boolean @@ -114,6 +114,10 @@ trait Conductor { this: TestConductorExt ⇒ * determining how much to send, leading to the correct output rate, but with * increased latency. * + * ====Note==== + * To use this feature you must activate the `TestConductorTransport` + * by specifying `testTransport(on = true)` in your MultiNodeConfig. + * + * @param node is the symbolic name of the node which is to be affected + * @param target is the symbolic name of the other node to which connectivity shall be throttled + * @param direction can be either `Direction.Send`, `Direction.Receive` or `Direction.Both` @@ -121,6 +125,7 @@ */ def throttle(node: RoleName, target: RoleName, direction: Direction, rateMBit: Double): Future[Done] = { import Settings.QueryTimeout + requireTestConductorTransport() controller ? Throttle(node, target, direction, rateMBit.toFloat) mapTo classTag[Done] } @@ -130,25 +135,40 @@ * submitting them to the Socket or right after receiving them from the * Socket. * + * ====Note==== + * To use this feature you must activate the `TestConductorTransport` + * by specifying `testTransport(on = true)` in your MultiNodeConfig. 
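+ * For example (sketch; assumes roles `first` and `second` declared in the
+ * MultiNodeConfig and `testTransport(on = true)` switched on):
+ * {{{
+ * testConductor.blackhole(first, second, Direction.Both) // returns Future[Done]
+ * }}}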
+ * * @param node is the symbolic name of the node which is to be affected * @param target is the symbolic name of the other node to which connectivity shall be impeded * @param direction can be either `Direction.Send`, `Direction.Receive` or `Direction.Both` */ def blackhole(node: RoleName, target: RoleName, direction: Direction): Future[Done] = { import Settings.QueryTimeout + requireTestConductorTransport() controller ? Throttle(node, target, direction, 0f) mapTo classTag[Done] } + private def requireTestConductorTransport(): Unit = + if (!transport.isInstanceOf[TestConductorTransport]) + throw new ConfigurationException("To use this feature you must activate the TestConductorTransport by " + + "specifying `testTransport(on = true)` in your MultiNodeConfig.") + /** * Switch the Netty pipeline of the remote support into pass through mode for * sending and/or receiving. * + * ====Note==== + * To use this feature you must activate the `TestConductorTransport` + * by specifying `testTransport(on = true)` in your MultiNodeConfig. + * * @param node is the symbolic name of the node which is to be affected * @param target is the symbolic name of the other node to which connectivity shall be impeded * @param direction can be either `Direction.Send`, `Direction.Receive` or `Direction.Both` */ def passThrough(node: RoleName, target: RoleName, direction: Direction): Future[Done] = { import Settings.QueryTimeout + requireTestConductorTransport() controller ? Throttle(node, target, direction, -1f) mapTo classTag[Done] } @@ -188,7 +208,10 @@ trait Conductor { this: TestConductorExt ⇒ */ def shutdown(node: RoleName, exitValue: Int): Future[Done] = { import Settings.QueryTimeout - controller ? Terminate(node, exitValue) mapTo classTag[Done] + import system.dispatcher + // the recover is needed to handle the ClientDisconnectedException, + // which is normal during shutdown + controller ? Terminate(node, exitValue) mapTo classTag[Done] recover { case _: ClientDisconnectedException ⇒ Done } } /** @@ -290,7 +313,7 @@ private[akka] class ServerFSM(val controller: ActorRef, val channel: Channel) ex whenUnhandled { case Event(ClientDisconnected, Some(s)) ⇒ - s ! Status.Failure(new RuntimeException("client disconnected in state " + stateName + ": " + channel)) + s ! 
Status.Failure(new ClientDisconnectedException("client disconnected in state " + stateName + ": " + channel)) stop() case Event(ClientDisconnected, None) ⇒ stop() } @@ -348,6 +371,7 @@ private[akka] class ServerFSM(val controller: ActorRef, val channel: Channel) ex */ private[akka] object Controller { case class ClientDisconnected(name: RoleName) + class ClientDisconnectedException(msg: String) extends AkkaException(msg) case object GetNodes case object GetSockAddr case class CreateServerFSM(channel: Channel) @@ -367,7 +391,7 @@ private[akka] class Controller(private var initialParticipants: Int, controllerP import BarrierCoordinator._ val settings = TestConductor().Settings - val connection = RemoteConnection(Server, controllerPort, + val connection = RemoteConnection(Server, controllerPort, settings.ServerSocketWorkerPoolSize, new ConductorHandler(settings.QueryTimeout, self, Logging(context.system, "ConductorHandler"))) /* @@ -545,7 +569,7 @@ private[akka] class BarrierCoordinator extends Actor with LoggingFSM[BarrierCoor } onTransition { - case Idle -> Waiting ⇒ setTimer("Timeout", StateTimeout, nextStateData.deadline.timeLeft.asInstanceOf[FiniteDuration], false) + case Idle -> Waiting ⇒ setTimer("Timeout", StateTimeout, nextStateData.deadline.timeLeft, false) case Waiting -> Idle ⇒ cancelTimer("Timeout") } @@ -556,7 +580,7 @@ private[akka] class BarrierCoordinator extends Actor with LoggingFSM[BarrierCoor val enterDeadline = getDeadline(timeout) // we only allow the deadlines to get shorter if (enterDeadline.timeLeft < deadline.timeLeft) { - setTimer("Timeout", StateTimeout, enterDeadline.timeLeft.asInstanceOf[FiniteDuration], false) + setTimer("Timeout", StateTimeout, enterDeadline.timeLeft, false) handleBarrier(d.copy(arrived = together, deadline = enterDeadline)) } else handleBarrier(d.copy(arrived = together)) @@ -587,7 +611,7 @@ private[akka] class BarrierCoordinator extends Actor with LoggingFSM[BarrierCoor } } - def getDeadline(timeout: Option[Duration]): Deadline = { + def getDeadline(timeout: Option[FiniteDuration]): Deadline = { Deadline.now + timeout.getOrElse(TestConductor().Settings.BarrierTimeout.duration) } diff --git a/akka-remote-tests/src/main/scala/akka/remote/testconductor/DataTypes.scala b/akka-remote-tests/src/main/scala/akka/remote/testconductor/DataTypes.scala index b0ebed3653..cbe0825f35 100644 --- a/akka-remote-tests/src/main/scala/akka/remote/testconductor/DataTypes.scala +++ b/akka-remote-tests/src/main/scala/akka/remote/testconductor/DataTypes.scala @@ -12,7 +12,7 @@ import akka.remote.testconductor.{ TestConductorProtocol ⇒ TCP } import com.google.protobuf.Message import akka.actor.Address import org.jboss.netty.handler.codec.oneone.OneToOneDecoder -import scala.concurrent.util.Duration +import scala.concurrent.duration._ import akka.remote.testconductor.TestConductorProtocol.BarrierOp case class RoleName(name: String) @@ -32,7 +32,7 @@ private[akka] sealed trait ConfirmedClientOp extends ClientOp */ private[akka] case class Hello(name: String, addr: Address) extends NetworkOp -private[akka] case class EnterBarrier(name: String, timeout: Option[Duration]) extends ServerOp with NetworkOp +private[akka] case class EnterBarrier(name: String, timeout: Option[FiniteDuration]) extends ServerOp with NetworkOp private[akka] case class FailBarrier(name: String) extends ServerOp with NetworkOp private[akka] case class BarrierResult(name: String, success: Boolean) extends UnconfirmedClientOp with NetworkOp diff --git 
a/akka-remote-tests/src/main/scala/akka/remote/testconductor/Extension.scala b/akka-remote-tests/src/main/scala/akka/remote/testconductor/Extension.scala index 4469ce308a..62eca3128d 100644 --- a/akka-remote-tests/src/main/scala/akka/remote/testconductor/Extension.scala +++ b/akka-remote-tests/src/main/scala/akka/remote/testconductor/Extension.scala @@ -5,7 +5,9 @@ import akka.remote.RemoteActorRefProvider import akka.util.Timeout import java.util.concurrent.TimeUnit.MILLISECONDS import java.util.concurrent.ConcurrentHashMap -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration +import com.typesafe.config.Config +import akka.dispatch.ThreadPoolConfig /** * Access to the [[akka.remote.testconductor.TestConductorExt]] extension: @@ -29,21 +31,37 @@ object TestConductor extends ExtensionKey[TestConductorExt] { * [[akka.actor.Extension]]. Please follow the aforementioned links for * more information. * - * This extension requires the `akka.actor.provider` - * to be a [[akka.remote.RemoteActorRefProvider]]. + * ====Note==== + * This extension requires the `akka.actor.provider` + * to be a [[akka.remote.RemoteActorRefProvider]]. + * + * To use ``blackhole``, ``passThrough``, and ``throttle`` you must activate the + * `TestConductorTransport` by specifying `testTransport(on = true)` in your + * MultiNodeConfig. + * + */ class TestConductorExt(val system: ExtendedActorSystem) extends Extension with Conductor with Player { object Settings { - val config = system.settings.config + val config = system.settings.config.getConfig("akka.testconductor") - val ConnectTimeout = Duration(config.getMilliseconds("akka.testconductor.connect-timeout"), MILLISECONDS) - val ClientReconnects = config.getInt("akka.testconductor.client-reconnects") - val ReconnectBackoff = Duration(config.getMilliseconds("akka.testconductor.reconnect-backoff"), MILLISECONDS) + val ConnectTimeout = Duration(config.getMilliseconds("connect-timeout"), MILLISECONDS) + val ClientReconnects = config.getInt("client-reconnects") + val ReconnectBackoff = Duration(config.getMilliseconds("reconnect-backoff"), MILLISECONDS) - implicit val BarrierTimeout = Timeout(Duration(config.getMilliseconds("akka.testconductor.barrier-timeout"), MILLISECONDS)) - implicit val QueryTimeout = Timeout(Duration(config.getMilliseconds("akka.testconductor.query-timeout"), MILLISECONDS)) - val PacketSplitThreshold = Duration(config.getMilliseconds("akka.testconductor.packet-split-threshold"), MILLISECONDS) + implicit val BarrierTimeout = Timeout(Duration(config.getMilliseconds("barrier-timeout"), MILLISECONDS)) + implicit val QueryTimeout = Timeout(Duration(config.getMilliseconds("query-timeout"), MILLISECONDS)) + val PacketSplitThreshold = Duration(config.getMilliseconds("packet-split-threshold"), MILLISECONDS) + + private def computeWPS(config: Config): Int = + ThreadPoolConfig.scaledPoolSize( + config.getInt("pool-size-min"), + config.getDouble("pool-size-factor"), + config.getInt("pool-size-max")) + + val ServerSocketWorkerPoolSize = computeWPS(config.getConfig("netty.server-socket-worker-pool")) + + val ClientSocketWorkerPoolSize = computeWPS(config.getConfig("netty.client-socket-worker-pool")) } /** diff --git a/akka-remote-tests/src/main/scala/akka/remote/testconductor/NetworkFailureInjector.scala b/akka-remote-tests/src/main/scala/akka/remote/testconductor/NetworkFailureInjector.scala index e1d5fb0854..1ac1bc4839 100644 --- a/akka-remote-tests/src/main/scala/akka/remote/testconductor/NetworkFailureInjector.scala +++ 
b/akka-remote-tests/src/main/scala/akka/remote/testconductor/NetworkFailureInjector.scala @@ -6,15 +6,13 @@ package akka.remote.testconductor import language.postfixOps import java.net.InetSocketAddress import scala.annotation.tailrec -import scala.collection.immutable.Queue +import scala.collection.immutable +import scala.concurrent.duration._ import org.jboss.netty.buffer.ChannelBuffer import org.jboss.netty.channel.{ SimpleChannelHandler, MessageEvent, Channels, ChannelStateEvent, ChannelHandlerContext, ChannelFutureListener, ChannelFuture } import akka.actor.{ Props, LoggingFSM, Address, ActorSystem, ActorRef, ActorLogging, Actor, FSM } import akka.event.Logging import akka.remote.netty.ChannelAddress -import scala.concurrent.util.Duration -import scala.concurrent.util.duration._ -import scala.concurrent.util.FiniteDuration /** * INTERNAL API. @@ -232,7 +230,7 @@ private[akka] object ThrottleActor { case object Throttle extends State case object Blackhole extends State - case class Data(lastSent: Long, rateMBit: Float, queue: Queue[Send]) + case class Data(lastSent: Long, rateMBit: Float, queue: immutable.Queue[Send]) case class Send(ctx: ChannelHandlerContext, direction: Direction, future: Option[ChannelFuture], msg: AnyRef) case class SetRate(rateMBit: Float) @@ -250,7 +248,7 @@ private[akka] class ThrottleActor(channelContext: ChannelHandlerContext) private val packetSplitThreshold = TestConductor(context.system).Settings.PacketSplitThreshold - startWith(PassThrough, Data(0, -1, Queue())) + startWith(PassThrough, Data(0, -1, immutable.Queue())) when(PassThrough) { case Event(s @ Send(_, _, _, msg), _) ⇒ @@ -260,8 +258,8 @@ private[akka] class ThrottleActor(channelContext: ChannelHandlerContext) } when(Throttle) { - case Event(s: Send, data @ Data(_, _, Queue())) ⇒ - stay using sendThrottled(data.copy(lastSent = System.nanoTime, queue = Queue(s))) + case Event(s: Send, data @ Data(_, _, immutable.Queue())) ⇒ + stay using sendThrottled(data.copy(lastSent = System.nanoTime, queue = immutable.Queue(s))) case Event(s: Send, data) ⇒ stay using sendThrottled(data.copy(queue = data.queue.enqueue(s))) case Event(Tick, data) ⇒ @@ -288,7 +286,7 @@ private[akka] class ThrottleActor(channelContext: ChannelHandlerContext) whenUnhandled { case Event(SetRate(rate), d) ⇒ if (rate > 0) { - goto(Throttle) using d.copy(lastSent = System.nanoTime, rateMBit = rate, queue = Queue()) + goto(Throttle) using d.copy(lastSent = System.nanoTime, rateMBit = rate, queue = immutable.Queue()) } else if (rate == 0) { goto(Blackhole) } else { @@ -304,7 +302,7 @@ private[akka] class ThrottleActor(channelContext: ChannelHandlerContext) log.debug("sending msg (Tick): {}", s.msg) send(s) } - if (!timerActive_?("send")) + if (!isTimerActive("send")) for (time ← toTick) { log.debug("scheduling next Tick in {}", time) setTimer("send", Tick, time, false) @@ -330,23 +328,23 @@ private[akka] class ThrottleActor(channelContext: ChannelHandlerContext) */ private def schedule(d: Data): (Data, Seq[Send], Option[FiniteDuration]) = { val now = System.nanoTime - @tailrec def rec(d: Data, toSend: Seq[Send]): (Data, Seq[Send], Option[FiniteDuration]) = { + @tailrec def rec(d: Data, toSend: immutable.Seq[Send]): (Data, immutable.Seq[Send], Option[FiniteDuration]) = { if (d.queue.isEmpty) (d, toSend, None) else { val timeForPacket = d.lastSent + (1000 * size(d.queue.head.msg) / d.rateMBit).toLong if (timeForPacket <= now) rec(Data(timeForPacket, d.rateMBit, d.queue.tail), toSend :+ d.queue.head) else { val splitThreshold = d.lastSent + 
packetSplitThreshold.toNanos - if (now < splitThreshold) (d, toSend, Some(((timeForPacket - now).nanos min (splitThreshold - now).nanos).asInstanceOf[FiniteDuration])) + if (now < splitThreshold) (d, toSend, Some((timeForPacket - now).nanos min (splitThreshold - now).nanos)) else { val microsToSend = (now - d.lastSent) / 1000 val (s1, s2) = split(d.queue.head, (microsToSend * d.rateMBit / 8).toInt) - (d.copy(queue = s2 +: d.queue.tail), toSend :+ s1, Some(((timeForPacket - now).nanos min packetSplitThreshold).asInstanceOf[FiniteDuration])) + (d.copy(queue = s2 +: d.queue.tail), toSend :+ s1, Some((timeForPacket - now).nanos min packetSplitThreshold)) } } } } - rec(d, Seq()) + rec(d, Nil) } /** diff --git a/akka-remote-tests/src/main/scala/akka/remote/testconductor/Player.scala b/akka-remote-tests/src/main/scala/akka/remote/testconductor/Player.scala index 03b07486f0..95bfab1ee5 100644 --- a/akka-remote-tests/src/main/scala/akka/remote/testconductor/Player.scala +++ b/akka-remote-tests/src/main/scala/akka/remote/testconductor/Player.scala @@ -4,22 +4,20 @@ package akka.remote.testconductor import language.postfixOps + +import java.util.concurrent.TimeoutException import akka.actor.{ Actor, ActorRef, ActorSystem, LoggingFSM, Props, PoisonPill, Status, Address, Scheduler } -import RemoteConnection.getAddrString -import scala.concurrent.util.{ Duration, Deadline } -import scala.concurrent.util.duration._ +import akka.remote.testconductor.RemoteConnection.getAddrString +import scala.collection.immutable +import scala.concurrent.{ ExecutionContext, Await, Future } +import scala.concurrent.duration._ +import scala.util.control.NoStackTrace +import scala.reflect.classTag import akka.util.Timeout import org.jboss.netty.channel.{ Channel, SimpleChannelUpstreamHandler, ChannelHandlerContext, ChannelStateEvent, MessageEvent, WriteCompletionEvent, ExceptionEvent } -import com.typesafe.config.ConfigFactory -import java.util.concurrent.TimeUnit.MILLISECONDS -import java.util.concurrent.TimeoutException import akka.pattern.{ ask, pipe, AskTimeoutException } -import scala.util.control.NoStackTrace import akka.event.{ LoggingAdapter, Logging } import java.net.{ InetSocketAddress, ConnectException } -import scala.reflect.classTag -import concurrent.{ ExecutionContext, Await, Future } -import scala.concurrent.util.FiniteDuration /** * The Player is the client component of the @@ -69,25 +67,23 @@ trait Player { this: TestConductorExt ⇒ * Enter the named barriers, one after the other, in the order given. Will * throw an exception in case of timeouts or other errors. */ - def enter(name: String*) { - enter(Settings.BarrierTimeout, name) - } + def enter(name: String*): Unit = enter(Settings.BarrierTimeout, name.to[immutable.Seq]) /** * Enter the named barriers, one after the other, in the order given. Will * throw an exception in case of timeouts or other errors. */ - def enter(timeout: Timeout, name: Seq[String]) { + def enter(timeout: Timeout, name: immutable.Seq[String]) { system.log.debug("entering barriers " + name.mkString("(", ", ", ")")) val stop = Deadline.now + timeout.duration name foreach { b ⇒ - val barrierTimeout = stop.timeLeft.asInstanceOf[FiniteDuration] + val barrierTimeout = stop.timeLeft if (barrierTimeout < Duration.Zero) { client ! 
ToServer(FailBarrier(b)) throw new TimeoutException("Server timed out while waiting for barrier " + b); } try { - implicit val timeout = Timeout((barrierTimeout + Settings.QueryTimeout.duration).asInstanceOf[FiniteDuration]) + implicit val timeout = Timeout(barrierTimeout + Settings.QueryTimeout.duration) Await.result(client ? ToServer(EnterBarrier(b, Option(barrierTimeout))), Duration.Inf) } catch { case e: AskTimeoutException ⇒ @@ -145,7 +141,8 @@ private[akka] class ClientFSM(name: RoleName, controllerAddr: InetSocketAddress) val settings = TestConductor().Settings val handler = new PlayerHandler(controllerAddr, settings.ClientReconnects, settings.ReconnectBackoff, - self, Logging(context.system, "PlayerHandler"), context.system.scheduler)(context.dispatcher) + settings.ClientSocketWorkerPoolSize, self, Logging(context.system, "PlayerHandler"), + context.system.scheduler)(context.dispatcher) startWith(Connecting, Data(None, None)) @@ -255,7 +252,8 @@ private[akka] class ClientFSM(name: RoleName, controllerAddr: InetSocketAddress) private[akka] class PlayerHandler( server: InetSocketAddress, private var reconnects: Int, - backoff: Duration, + backoff: FiniteDuration, + poolSize: Int, fsm: ActorRef, log: LoggingAdapter, scheduler: Scheduler)(implicit executor: ExecutionContext) @@ -278,14 +276,14 @@ private[akka] class PlayerHandler( event.getCause match { case c: ConnectException if reconnects > 0 ⇒ reconnects -= 1 - scheduler.scheduleOnce(nextAttempt.timeLeft.asInstanceOf[FiniteDuration])(reconnect()) + scheduler.scheduleOnce(nextAttempt.timeLeft)(reconnect()) case e ⇒ fsm ! ConnectionFailure(e.getMessage) } } private def reconnect(): Unit = { nextAttempt = Deadline.now + backoff - RemoteConnection(Client, server, this) + RemoteConnection(Client, server, poolSize, this) } override def channelConnected(ctx: ChannelHandlerContext, event: ChannelStateEvent) = { diff --git a/akka-remote-tests/src/main/scala/akka/remote/testconductor/RemoteConnection.scala b/akka-remote-tests/src/main/scala/akka/remote/testconductor/RemoteConnection.scala index 1979857bf0..db212e7cbf 100644 --- a/akka-remote-tests/src/main/scala/akka/remote/testconductor/RemoteConnection.scala +++ b/akka-remote-tests/src/main/scala/akka/remote/testconductor/RemoteConnection.scala @@ -45,16 +45,18 @@ private[akka] case object Server extends Role * INTERNAL API. 
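 * Sketch of a client-side call with the new poolSize parameter (the address,
 * handler and settings values are assumed to be in scope):
 * {{{
 * RemoteConnection(Client, serverAddr, settings.ClientSocketWorkerPoolSize, handler)
 * }}}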
*/ private[akka] object RemoteConnection { - def apply(role: Role, sockaddr: InetSocketAddress, handler: ChannelUpstreamHandler): Channel = { + def apply(role: Role, sockaddr: InetSocketAddress, poolSize: Int, handler: ChannelUpstreamHandler): Channel = { role match { case Client ⇒ - val socketfactory = new NioClientSocketChannelFactory(Executors.newCachedThreadPool, Executors.newCachedThreadPool) + val socketfactory = new NioClientSocketChannelFactory(Executors.newCachedThreadPool, Executors.newCachedThreadPool, + poolSize) val bootstrap = new ClientBootstrap(socketfactory) bootstrap.setPipelineFactory(new TestConductorPipelineFactory(handler)) bootstrap.setOption("tcpNoDelay", true) bootstrap.connect(sockaddr).getChannel case Server ⇒ - val socketfactory = new NioServerSocketChannelFactory(Executors.newCachedThreadPool, Executors.newCachedThreadPool) + val socketfactory = new NioServerSocketChannelFactory(Executors.newCachedThreadPool, Executors.newCachedThreadPool, + poolSize) val bootstrap = new ServerBootstrap(socketfactory) bootstrap.setPipelineFactory(new TestConductorPipelineFactory(handler)) bootstrap.setOption("reuseAddress", true) diff --git a/akka-remote-tests/src/main/scala/akka/remote/testkit/MultiNodeSpec.scala b/akka-remote-tests/src/main/scala/akka/remote/testkit/MultiNodeSpec.scala index a842a547a1..350c9a3171 100644 --- a/akka-remote-tests/src/main/scala/akka/remote/testkit/MultiNodeSpec.scala +++ b/akka-remote-tests/src/main/scala/akka/remote/testkit/MultiNodeSpec.scala @@ -7,18 +7,19 @@ import language.implicitConversions import language.postfixOps import java.net.InetSocketAddress +import java.util.concurrent.TimeoutException import com.typesafe.config.{ ConfigObject, ConfigFactory, Config } +import scala.concurrent.{ Await, Awaitable } +import scala.util.control.NonFatal +import scala.collection.immutable import akka.actor._ import akka.util.Timeout import akka.remote.testconductor.{ TestConductorExt, TestConductor, RoleName } import akka.remote.RemoteActorRefProvider import akka.testkit._ -import scala.concurrent.{ Await, Awaitable } -import scala.util.control.NonFatal -import scala.concurrent.util.Duration -import scala.concurrent.util.duration._ -import java.util.concurrent.TimeoutException +import scala.concurrent.duration._ import akka.remote.testconductor.RoleName +import akka.remote.testconductor.TestConductorTransport import akka.actor.RootActorPath import akka.event.{ Logging, LoggingAdapter } @@ -30,8 +31,9 @@ abstract class MultiNodeConfig { private var _commonConf: Option[Config] = None private var _nodeConf = Map[RoleName, Config]() private var _roles = Vector[RoleName]() - private var _deployments = Map[RoleName, Seq[String]]() + private var _deployments = Map[RoleName, immutable.Seq[String]]() private var _allDeploy = Vector[String]() + private var _testTransport = false /** * Register a common base config for all test participants, if so desired. @@ -41,7 +43,10 @@ abstract class MultiNodeConfig { /** * Register a config override for a specific participant. 
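 * With the new signature below, several configs may be passed; they are
 * merged with `reduceLeft(_ withFallback _)`, so earlier configs take
 * precedence. Sketch (role names illustrative):
 * {{{
 * nodeConfig(first, second)(ConfigFactory.parseString("akka.loglevel = DEBUG"))
 * }}}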
*/ - def nodeConfig(role: RoleName, config: Config): Unit = _nodeConf += role -> config + def nodeConfig(roles: RoleName*)(configs: Config*): Unit = { + val c = configs.reduceLeft(_ withFallback _) + _nodeConf ++= roles map { _ -> c } + } /** * Include for verbose debug logging @@ -81,19 +86,30 @@ abstract class MultiNodeConfig { def deployOnAll(deployment: String): Unit = _allDeploy :+= deployment + /** + * To be able to use `blackhole`, `passThrough`, and `throttle` you must + * activate the TestConductorTransport by specifying + * `testTransport(on = true)` in your MultiNodeConfig. + */ + def testTransport(on: Boolean): Unit = _testTransport = on + private[testkit] lazy val myself: RoleName = { require(_roles.size > MultiNodeSpec.selfIndex, "not enough roles declared for this test") _roles(MultiNodeSpec.selfIndex) } private[testkit] def config: Config = { - val configs = (_nodeConf get myself).toList ::: _commonConf.toList ::: MultiNodeSpec.nodeConfig :: MultiNodeSpec.baseConfig :: Nil - configs reduce (_ withFallback _) + val transportConfig = + if (_testTransport) ConfigFactory.parseString("akka.remote.transport=" + classOf[TestConductorTransport].getName) + else ConfigFactory.empty + + val configs = (_nodeConf get myself).toList ::: _commonConf.toList ::: transportConfig :: MultiNodeSpec.nodeConfig :: MultiNodeSpec.baseConfig :: Nil + configs reduceLeft (_ withFallback _) } - private[testkit] def deployments(node: RoleName): Seq[String] = (_deployments get node getOrElse Nil) ++ _allDeploy + private[testkit] def deployments(node: RoleName): immutable.Seq[String] = (_deployments get node getOrElse Nil) ++ _allDeploy - private[testkit] def roles: Seq[RoleName] = _roles + private[testkit] def roles: immutable.Seq[RoleName] = _roles } @@ -175,7 +191,6 @@ object MultiNodeSpec { private[testkit] val nodeConfig = mapToConfig(Map( "akka.actor.provider" -> "akka.remote.RemoteActorRefProvider", - "akka.remote.transport" -> "akka.remote.testconductor.TestConductorTransport", "akka.remote.netty.hostname" -> selfName, "akka.remote.netty.port" -> selfPort)) @@ -220,7 +235,7 @@ object MultiNodeSpec { * `AskTimeoutException: sending to terminated ref breaks promises`. Using lazy * val is fine. */ -abstract class MultiNodeSpec(val myself: RoleName, _system: ActorSystem, _roles: Seq[RoleName], deployments: RoleName ⇒ Seq[String]) +abstract class MultiNodeSpec(val myself: RoleName, _system: ActorSystem, _roles: immutable.Seq[RoleName], deployments: RoleName ⇒ Seq[String]) extends TestKit(_system) with MultiNodeSpecCallbacks { import MultiNodeSpec._ @@ -280,7 +295,7 @@ abstract class MultiNodeSpec(val myself: RoleName, _system: ActorSystem, _roles: /** * All registered roles */ - def roles: Seq[RoleName] = _roles + def roles: immutable.Seq[RoleName] = _roles /** * TO BE DEFINED BY USER: Defines the number of participants required for starting the test. This @@ -307,26 +322,24 @@ abstract class MultiNodeSpec(val myself: RoleName, _system: ActorSystem, _roles: * to the `roleMap`). */ def runOn(nodes: RoleName*)(thunk: ⇒ Unit): Unit = { - if (nodes exists (_ == myself)) { + if (isNode(nodes: _*)) { thunk } } /** - * Execute the `yes` block of code only on the given nodes (names according - * to the `roleMap`) else execute the `no` block of code. 
+ * Verify that the running node matches one of the given nodes */ - def ifNode[T](nodes: RoleName*)(yes: ⇒ T)(no: ⇒ T): T = { - if (nodes exists (_ == myself)) yes else no - } + def isNode(nodes: RoleName*): Boolean = nodes contains myself /** * Enter the named barriers in the order given. Use the remaining duration from * the innermost enclosing `within` block or the default `BarrierTimeout` */ - def enterBarrier(name: String*) { - testConductor.enter(Timeout.durationToTimeout(remainingOr(testConductor.Settings.BarrierTimeout.duration)), name) - } + def enterBarrier(name: String*): Unit = + testConductor.enter( + Timeout.durationToTimeout(remainingOr(testConductor.Settings.BarrierTimeout.duration)), + name.to[immutable.Seq]) /** * Query the controller for the transport address of the given node (by role name) and @@ -389,10 +402,8 @@ abstract class MultiNodeSpec(val myself: RoleName, _system: ActorSystem, _roles: } import scala.collection.JavaConverters._ ConfigFactory.parseString(deployString).root.asScala foreach { - case (key, value: ConfigObject) ⇒ - deployer.parseConfig(key, value.toConfig) foreach deployer.deploy - case (key, x) ⇒ - throw new IllegalArgumentException("key " + key + " must map to deployment section, not simple value " + x) + case (key, value: ConfigObject) ⇒ deployer.parseConfig(key, value.toConfig) foreach deployer.deploy + case (key, x) ⇒ throw new IllegalArgumentException(s"key $key must map to deployment section, not simple value $x") } } diff --git a/akka-remote-tests/src/multi-jvm/scala/akka/remote/NewRemoteActorSpec.scala b/akka-remote-tests/src/multi-jvm/scala/akka/remote/NewRemoteActorSpec.scala index f7820ae8d3..b315e5c5d0 100644 --- a/akka-remote-tests/src/multi-jvm/scala/akka/remote/NewRemoteActorSpec.scala +++ b/akka-remote-tests/src/multi-jvm/scala/akka/remote/NewRemoteActorSpec.scala @@ -12,7 +12,7 @@ import akka.pattern.ask import testkit.{ STMultiNodeSpec, MultiNodeConfig, MultiNodeSpec } import akka.testkit._ import akka.actor.Terminated -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import com.typesafe.config.ConfigFactory object NewRemoteActorMultiJvmSpec extends MultiNodeConfig { diff --git a/akka-remote-tests/src/multi-jvm/scala/akka/remote/router/RandomRoutedRemoteActorSpec.scala b/akka-remote-tests/src/multi-jvm/scala/akka/remote/router/RandomRoutedRemoteActorSpec.scala index 040d91ad57..90de3e7970 100644 --- a/akka-remote-tests/src/multi-jvm/scala/akka/remote/router/RandomRoutedRemoteActorSpec.scala +++ b/akka-remote-tests/src/multi-jvm/scala/akka/remote/router/RandomRoutedRemoteActorSpec.scala @@ -17,7 +17,7 @@ import akka.routing.Broadcast import akka.routing.RandomRouter import akka.routing.RoutedActorRef import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ object RandomRoutedRemoteActorMultiJvmSpec extends MultiNodeConfig { diff --git a/akka-remote-tests/src/multi-jvm/scala/akka/remote/router/RoundRobinRoutedRemoteActorSpec.scala b/akka-remote-tests/src/multi-jvm/scala/akka/remote/router/RoundRobinRoutedRemoteActorSpec.scala index 5a629abc37..a75b983f07 100644 --- a/akka-remote-tests/src/multi-jvm/scala/akka/remote/router/RoundRobinRoutedRemoteActorSpec.scala +++ b/akka-remote-tests/src/multi-jvm/scala/akka/remote/router/RoundRobinRoutedRemoteActorSpec.scala @@ -12,14 +12,16 @@ import akka.actor.PoisonPill import akka.actor.Address import scala.concurrent.Await import akka.pattern.ask -import akka.remote.testkit.{STMultiNodeSpec, MultiNodeConfig, MultiNodeSpec} 
+import akka.remote.testkit.{ STMultiNodeSpec, MultiNodeConfig, MultiNodeSpec } import akka.routing.Broadcast +import akka.routing.CurrentRoutees +import akka.routing.RouterRoutees import akka.routing.RoundRobinRouter import akka.routing.RoutedActorRef import akka.routing.Resizer import akka.routing.RouteeProvider import akka.testkit._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ object RoundRobinRoutedRemoteActorMultiJvmSpec extends MultiNodeConfig { @@ -59,7 +61,7 @@ class RoundRobinRoutedRemoteActorMultiJvmNode3 extends RoundRobinRoutedRemoteAct class RoundRobinRoutedRemoteActorMultiJvmNode4 extends RoundRobinRoutedRemoteActorSpec class RoundRobinRoutedRemoteActorSpec extends MultiNodeSpec(RoundRobinRoutedRemoteActorMultiJvmSpec) - with STMultiNodeSpec with ImplicitSender with DefaultTimeout { + with STMultiNodeSpec with ImplicitSender with DefaultTimeout { import RoundRobinRoutedRemoteActorMultiJvmSpec._ def initialParticipants = 4 @@ -105,7 +107,7 @@ class RoundRobinRoutedRemoteActorSpec extends MultiNodeSpec(RoundRobinRoutedRemo } "A new remote actor configured with a RoundRobin router and Resizer" must { - "be locally instantiated on a remote node after several resize rounds" taggedAs LongRunningTest in { + "be locally instantiated on a remote node after several resize rounds" taggedAs LongRunningTest in within(5 seconds) { runOn(first, second, third) { enterBarrier("start", "broadcast-end", "end", "done") @@ -117,22 +119,21 @@ class RoundRobinRoutedRemoteActorSpec extends MultiNodeSpec(RoundRobinRoutedRemo resizer = Some(new TestResizer))), "service-hello2") actor.isInstanceOf[RoutedActorRef] must be(true) - val iterationCount = 9 + actor ! CurrentRoutees + expectMsgType[RouterRoutees].routees.size must be(1) val repliesFrom: Set[ActorRef] = - (for { - i ← 0 until iterationCount - } yield { + (for (n ← 2 to 8) yield { actor ! "hit" - receiveOne(5 seconds) match { case ref: ActorRef ⇒ ref } + awaitCond(Await.result(actor ? CurrentRoutees, remaining).asInstanceOf[RouterRoutees].routees.size == n) + expectMsgType[ActorRef] }).toSet enterBarrier("broadcast-end") actor ! 
Broadcast(PoisonPill) enterBarrier("end") - // at least more than one actor per node - repliesFrom.size must be > (3) + repliesFrom.size must be(7) val repliesFromAddresses = repliesFrom.map(_.path.address) repliesFromAddresses must be === (Set(node(first), node(second), node(third)).map(_.address)) diff --git a/akka-remote-tests/src/multi-jvm/scala/akka/remote/router/ScatterGatherRoutedRemoteActorSpec.scala b/akka-remote-tests/src/multi-jvm/scala/akka/remote/router/ScatterGatherRoutedRemoteActorSpec.scala index d4d125e411..69a8ff02e5 100644 --- a/akka-remote-tests/src/multi-jvm/scala/akka/remote/router/ScatterGatherRoutedRemoteActorSpec.scala +++ b/akka-remote-tests/src/multi-jvm/scala/akka/remote/router/ScatterGatherRoutedRemoteActorSpec.scala @@ -16,7 +16,7 @@ import akka.routing.ScatterGatherFirstCompletedRouter import akka.routing.RoutedActorRef import akka.testkit._ import akka.testkit.TestEvent._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.PoisonPill import akka.actor.Address diff --git a/akka-remote-tests/src/multi-jvm/scala/akka/remote/testconductor/TestConductorSpec.scala b/akka-remote-tests/src/multi-jvm/scala/akka/remote/testconductor/TestConductorSpec.scala index 3a49490e1a..544ee03ead 100644 --- a/akka-remote-tests/src/multi-jvm/scala/akka/remote/testconductor/TestConductorSpec.scala +++ b/akka-remote-tests/src/multi-jvm/scala/akka/remote/testconductor/TestConductorSpec.scala @@ -10,8 +10,7 @@ import akka.actor.Props import akka.actor.Actor import scala.concurrent.Await import scala.concurrent.Awaitable -import scala.concurrent.util.Duration -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.testkit.ImplicitSender import akka.testkit.LongRunningTest import java.net.InetSocketAddress @@ -23,6 +22,8 @@ object TestConductorMultiJvmSpec extends MultiNodeConfig { val master = role("master") val slave = role("slave") + + testTransport(on = true) } class TestConductorMultiJvmNode1 extends TestConductorSpec @@ -88,11 +89,8 @@ class TestConductorSpec extends MultiNodeSpec(TestConductorMultiJvmSpec) with ST } val (min, max) = - ifNode(master) { - (0 seconds, 500 millis) - } { - (0.6 seconds, 2 seconds) - } + if(isNode(master))(0 seconds, 500 millis) + else (0.6 seconds, 2 seconds) within(min, max) { expectMsg(500 millis, 10) diff --git a/akka-remote-tests/src/test/scala/akka/remote/testconductor/BarrierSpec.scala b/akka-remote-tests/src/test/scala/akka/remote/testconductor/BarrierSpec.scala index f306477a28..103d16089d 100644 --- a/akka-remote-tests/src/test/scala/akka/remote/testconductor/BarrierSpec.scala +++ b/akka-remote-tests/src/test/scala/akka/remote/testconductor/BarrierSpec.scala @@ -7,8 +7,7 @@ import language.postfixOps import akka.actor.{ Props, AddressFromURIString, ActorRef, Actor, OneForOneStrategy, SupervisorStrategy } import akka.testkit.{ AkkaSpec, ImplicitSender, EventFilter, TestProbe, TimingTest } -import scala.concurrent.util.Duration -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.event.Logging import akka.util.Timeout import org.scalatest.BeforeAndAfterEach @@ -545,4 +544,4 @@ class BarrierSpec extends AkkaSpec(BarrierSpec.config) with ImplicitSender { private def data(clients: Set[Controller.NodeInfo], barrier: String, arrived: List[ActorRef], previous: Data): Data = { Data(clients, barrier, arrived, previous.deadline) } -} \ No newline at end of file +} diff --git 
a/akka-remote-tests/src/test/scala/akka/remote/testkit/LogRoleReplace.scala b/akka-remote-tests/src/test/scala/akka/remote/testkit/LogRoleReplace.scala index 51a189b7f9..6905b9b116 100644 --- a/akka-remote-tests/src/test/scala/akka/remote/testkit/LogRoleReplace.scala +++ b/akka-remote-tests/src/test/scala/akka/remote/testkit/LogRoleReplace.scala @@ -93,7 +93,7 @@ object LogRoleReplace extends ClipboardOwner { class LogRoleReplace { private val RoleStarted = """\[([\w\-]+)\].*Role \[([\w]+)\] started with address \[akka://.*@([\w\-\.]+):([0-9]+)\]""".r - private val ColorCode = """\[[0-9]+m""" + private val ColorCode = """\u001B?\[[0-9]+m""" private var replacements: Map[String, String] = Map.empty diff --git a/akka-remote/src/main/resources/reference.conf b/akka-remote/src/main/resources/reference.conf index a70106a8b2..bb91d1a34c 100644 --- a/akka-remote/src/main/resources/reference.conf +++ b/akka-remote/src/main/resources/reference.conf @@ -109,6 +109,11 @@ akka { # (I) EXPERIMENTAL If "<id>" then the specified dispatcher # will be used to accept inbound connections, and perform IO. If "" then # dedicated threads will be used. + # + # CAUTION: This might lead to the used dispatcher not shutting down properly! + # - may prevent the JVM from shutting down normally + # - may leak threads when shutting down an ActorSystem + # use-dispatcher-for-io = "" # (I) The hostname or ip to bind the remoting to, @@ -171,10 +176,12 @@ akka { # (O) Time between reconnect attempts for active clients reconnect-delay = 5s - # (O) Read inactivity period (lowest resolution is seconds) + # (O) Client read inactivity period (finest resolution is seconds) # after which active client connection is shutdown; - # will be re-established in case of new communication requests. + # Connection will be re-established in case of new communication requests. # A value of 0 will turn this feature off + # This value should be left at 0 when use-passive-connections is off, or if + # no traffic is expected from the server side (i.e. it is a sink). read-timeout = 0s # (O) Write inactivity period (lowest resolution is seconds) @@ -218,7 +225,8 @@ akka { # Example: ["TLS_RSA_WITH_AES_128_CBC_SHA", "TLS_RSA_WITH_AES_256_CBC_SHA"] # You need to install the JCE Unlimited Strength Jurisdiction Policy # Files to use AES 256. - # More info here: http://docs.oracle.com/javase/7/docs/technotes/guides/security/SunProviders.html#SunJCEProvider + # More info here: + # http://docs.oracle.com/javase/7/docs/technotes/guides/security/SunProviders.html#SunJCEProvider enabled-algorithms = ["TLS_RSA_WITH_AES_128_CBC_SHA"] # Using /dev/./urandom is only necessary when using SHA1PRNG on Linux to @@ -244,6 +252,36 @@ akka { # suite (see enabled-algorithms section above) random-number-generator = "" } + + # (I&O) Used to configure the number of I/O worker threads on server sockets + server-socket-worker-pool { + # Min number of threads to cap factor-based number to + pool-size-min = 2 + + # The pool size factor is used to determine thread pool size + # using the following formula: ceil(available processors * factor). + # Resulting size is then bounded by the pool-size-min and + # pool-size-max values. 
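+ # For example (illustrative): with 4 available processors and the values
+ # below, ceil(4 * 1.0) = 4, which already lies within the bounds [2, 8],
+ # so the pool gets 4 worker threads.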
+ pool-size-factor = 1.0 + + # Max number of threads to cap factor-based number to + pool-size-max = 8 + } + + # (I&O) Used to configure the number of I/O worker threads on client sockets + client-socket-worker-pool { + # Min number of threads to cap factor-based number to + pool-size-min = 2 + + # The pool size factor is used to determine thread pool size + # using the following formula: ceil(available processors * factor). + # Resulting size is then bounded by the pool-size-min and + # pool-size-max values. + pool-size-factor = 1.0 + + # Max number of threads to cap factor-based number to + pool-size-max = 8 + } } } } diff --git a/akka-remote/src/main/scala/akka/remote/RemoteActorRefProvider.scala b/akka-remote/src/main/scala/akka/remote/RemoteActorRefProvider.scala index 50531bfa91..d9a0c53229 100644 --- a/akka-remote/src/main/scala/akka/remote/RemoteActorRefProvider.scala +++ b/akka-remote/src/main/scala/akka/remote/RemoteActorRefProvider.scala @@ -14,6 +14,10 @@ import scala.util.control.NonFatal /** * Remote ActorRefProvider. Starts up actor on remote node and creates a RemoteActorRef representing it. + * + * INTERNAL API! + * + * Depending on this class is not supported, only the [[ActorRefProvider]] interface is supported. */ class RemoteActorRefProvider( val systemName: String, @@ -24,7 +28,13 @@ class RemoteActorRefProvider( val remoteSettings: RemoteSettings = new RemoteSettings(settings.config, systemName) - val deployer: RemoteDeployer = new RemoteDeployer(settings, dynamicAccess) + override val deployer: Deployer = createDeployer + + /** + * Factory method to make it possible to override deployer in subclass + * Creates a new instance every time + */ + protected def createDeployer: RemoteDeployer = new RemoteDeployer(settings, dynamicAccess) private val local = new LocalActorRefProvider(systemName, settings, eventStream, scheduler, dynamicAccess, deployer) @@ -68,7 +78,7 @@ class RemoteActorRefProvider( _transport = { val fqn = remoteSettings.RemoteTransport - val args = Seq( + val args = List( classOf[ExtendedActorSystem] -> system, classOf[RemoteActorRefProvider] -> this) @@ -154,12 +164,11 @@ class RemoteActorRefProvider( Iterator(props.deploy) ++ deployment.iterator reduce ((a, b) ⇒ b withFallback a) match { case d @ Deploy(_, _, _, RemoteScope(addr)) ⇒ - if (addr == rootPath.address || addr == transport.address) { + if (isSelfAddress(addr)) { local.actorOf(system, props, supervisor, path, false, deployment.headOption, false, async) } else { val rpath = RootActorPath(addr) / "remote" / transport.address.hostPort / path.elements - useActorOnNode(rpath, props, d, supervisor) - new RemoteActorRef(this, transport, rpath, supervisor) + new RemoteActorRef(this, transport, rpath, supervisor, Some(props), Some(d)) } case _ ⇒ local.actorOf(system, props, supervisor, path, systemService, deployment.headOption, false, async) @@ -168,13 +177,13 @@ class RemoteActorRefProvider( } def actorFor(path: ActorPath): InternalActorRef = - if (path.address == rootPath.address || path.address == transport.address) actorFor(rootGuardian, path.elements) - else new RemoteActorRef(this, transport, path, Nobody) + if (isSelfAddress(path.address)) actorFor(rootGuardian, path.elements) + else new RemoteActorRef(this, transport, path, Nobody, props = None, deploy = None) def actorFor(ref: InternalActorRef, path: String): InternalActorRef = path match { case ActorPathExtractor(address, elems) ⇒ - if (address == rootPath.address || address == transport.address) actorFor(rootGuardian, elems) - else new 
RemoteActorRef(this, transport, new RootActorPath(address) / elems, Nobody) + if (isSelfAddress(address)) actorFor(rootGuardian, elems) + else new RemoteActorRef(this, transport, new RootActorPath(address) / elems, Nobody, props = None, deploy = None) case _ ⇒ local.actorFor(ref, path) } @@ -191,14 +200,18 @@ class RemoteActorRefProvider( } def getExternalAddressFor(addr: Address): Option[Address] = { - val ta = transport.address - val ra = rootPath.address addr match { - case `ta` | `ra` ⇒ Some(rootPath.address) + case _ if isSelfAddress(addr) ⇒ Some(local.rootPath.address) case Address("akka", _, Some(_), Some(_)) ⇒ Some(transport.address) case _ ⇒ None } } + + def getDefaultAddress: Address = transport.address + + private def isSelfAddress(address: Address): Boolean = + address == rootPath.address || address == transport.address + } private[akka] trait RemoteRef extends ActorRefScope { @@ -213,7 +226,9 @@ private[akka] class RemoteActorRef private[akka] ( val provider: RemoteActorRefProvider, remote: RemoteTransport, val path: ActorPath, - val getParent: InternalActorRef) + val getParent: InternalActorRef, + props: Option[Props], + deploy: Option[Deploy]) extends InternalActorRef with RemoteRef { def getChild(name: Iterator[String]): InternalActorRef = { @@ -221,7 +236,7 @@ private[akka] class RemoteActorRef private[akka] ( s.headOption match { case None ⇒ this case Some("..") ⇒ getParent getChild name - case _ ⇒ new RemoteActorRef(provider, remote, path / s, Nobody) + case _ ⇒ new RemoteActorRef(provider, remote, path / s, Nobody, props = None, deploy = None) } } @@ -235,7 +250,7 @@ private[akka] class RemoteActorRef private[akka] ( provider.deadLetters ! message } - override def !(message: Any)(implicit sender: ActorRef = null): Unit = + override def !(message: Any)(implicit sender: ActorRef = Actor.noSender): Unit = try remote.send(message, Option(sender), this) catch { case e @ (_: InterruptedException | NonFatal(_)) ⇒ @@ -243,6 +258,8 @@ private[akka] class RemoteActorRef private[akka] ( provider.deadLetters ! 
message } + def start(): Unit = if (props.isDefined && deploy.isDefined) provider.useActorOnNode(path, props.get, deploy.get, getParent) + def suspend(): Unit = sendSystemMessage(Suspend()) def resume(causedByFailure: Throwable): Unit = sendSystemMessage(Resume(causedByFailure)) @@ -253,4 +270,4 @@ private[akka] class RemoteActorRef private[akka] ( @throws(classOf[java.io.ObjectStreamException]) private def writeReplace(): AnyRef = SerializedActorRef(path) -} \ No newline at end of file +} diff --git a/akka-remote/src/main/scala/akka/remote/RemoteDaemon.scala b/akka-remote/src/main/scala/akka/remote/RemoteDaemon.scala index ecfd544dcb..ee8a6c5698 100644 --- a/akka-remote/src/main/scala/akka/remote/RemoteDaemon.scala +++ b/akka-remote/src/main/scala/akka/remote/RemoteDaemon.scala @@ -63,7 +63,7 @@ private[akka] class RemoteSystemDaemon( } } - override def !(msg: Any)(implicit sender: ActorRef = null): Unit = msg match { + override def !(msg: Any)(implicit sender: ActorRef = Actor.noSender): Unit = msg match { case message: DaemonMsg ⇒ log.debug("Received command [{}] to RemoteSystemDaemon on [{}]", message, path.address) message match { @@ -102,7 +102,10 @@ private[akka] class RemoteSystemDaemon( } case AddressTerminated(address) ⇒ - foreachChild { case a: InternalActorRef if a.getParent.path.address == address ⇒ system.stop(a) } + foreachChild { + case a: InternalActorRef if a.getParent.path.address == address ⇒ system.stop(a) + case _ ⇒ // skip, this child doesn't belong to the terminated address + } case unknown ⇒ log.warning("Unknown message {} received by {}", unknown, this) } diff --git a/akka-remote/src/main/scala/akka/remote/RemoteDeployer.scala b/akka-remote/src/main/scala/akka/remote/RemoteDeployer.scala index fbc9c7b913..60c77fb4cc 100644 --- a/akka-remote/src/main/scala/akka/remote/RemoteDeployer.scala +++ b/akka-remote/src/main/scala/akka/remote/RemoteDeployer.scala @@ -6,8 +6,9 @@ package akka.remote import akka.actor._ import akka.routing._ import akka.remote.routing._ -import com.typesafe.config._ import akka.ConfigurationException +import akka.japi.Util.immutableSeq +import com.typesafe.config._ @SerialVersionUID(1L) case class RemoteScope(node: Address) extends Scope { @@ -22,9 +23,9 @@ private[akka] class RemoteDeployer(_settings: ActorSystem.Settings, _pm: Dynamic case d @ Some(deploy) ⇒ deploy.config.getString("remote") match { case AddressFromURIString(r) ⇒ Some(deploy.copy(scope = RemoteScope(r))) - case str ⇒ - if (!str.isEmpty) throw new ConfigurationException("unparseable remote node name " + str) - val nodes = deploy.config.getStringList("target.nodes").asScala.toIndexedSeq map (AddressFromURIString(_)) + case str if !str.isEmpty ⇒ throw new ConfigurationException("unparseable remote node name " + str) + case _ ⇒ + val nodes = immutableSeq(deploy.config.getStringList("target.nodes")).map(AddressFromURIString(_)) if (nodes.isEmpty || deploy.routerConfig == NoRouter) d else Some(deploy.copy(routerConfig = RemoteRouterConfig(deploy.routerConfig, nodes))) } diff --git a/akka-remote/src/main/scala/akka/remote/RemoteSettings.scala b/akka-remote/src/main/scala/akka/remote/RemoteSettings.scala index c18635f1ca..804ccf5525 100644 --- a/akka-remote/src/main/scala/akka/remote/RemoteSettings.scala +++ b/akka-remote/src/main/scala/akka/remote/RemoteSettings.scala @@ -4,7 +4,7 @@ package akka.remote import com.typesafe.config.Config -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import java.util.concurrent.TimeUnit.MILLISECONDS class 
RemoteSettings(val config: Config, val systemName: String) { diff --git a/akka-remote/src/main/scala/akka/remote/netty/Client.scala b/akka-remote/src/main/scala/akka/remote/netty/Client.scala index 4a391af5f0..517b568b76 100644 --- a/akka-remote/src/main/scala/akka/remote/netty/Client.scala +++ b/akka-remote/src/main/scala/akka/remote/netty/Client.scala @@ -20,7 +20,8 @@ import akka.actor.{ DeadLetter, Address, ActorRef } import akka.util.Switch import scala.util.control.NonFatal import org.jboss.netty.handler.ssl.SslHandler -import scala.concurrent.util.Deadline +import scala.concurrent.duration._ +import java.nio.channels.ClosedChannelException /** * This is the abstract baseclass for netty remote clients, currently there's only an @@ -43,8 +44,6 @@ private[akka] abstract class RemoteClient private[akka] (val netty: NettyRemoteT def shutdown(): Boolean - def isBoundTo(address: Address): Boolean = remoteAddress == address - /** * Converts the message to the wireprotocol and sends the message across the wire */ @@ -63,21 +62,23 @@ private[akka] abstract class RemoteClient private[akka] (val netty: NettyRemoteT private def send(request: (Any, Option[ActorRef], ActorRef)): Unit = { try { val channel = currentChannel - val f = channel.write(request) - f.addListener( - new ChannelFutureListener { - import netty.system.deadLetters - def operationComplete(future: ChannelFuture): Unit = - if (future.isCancelled || !future.isSuccess) request match { - case (msg, sender, recipient) ⇒ deadLetters ! DeadLetter(msg, sender.getOrElse(deadLetters), recipient) - // We don't call notifyListeners here since we don't think failed message deliveries are errors - /// If the connection goes down we'll get the error reporting done by the pipeline. - } - }) - // Check if we should back off - if (!channel.isWritable) { - val backoff = netty.settings.BackoffTimeout - if (backoff.length > 0 && !f.await(backoff.length, backoff.unit)) f.cancel() //Waited as long as we could, now back off + if (channel.isOpen) { + val f = channel.write(request) + f.addListener( + new ChannelFutureListener { + import netty.system.deadLetters + def operationComplete(future: ChannelFuture): Unit = + if (future.isCancelled || !future.isSuccess) request match { + case (msg, sender, recipient) ⇒ deadLetters ! DeadLetter(msg, sender.getOrElse(deadLetters), recipient) + // We don't call notifyListeners here since we don't think failed message deliveries are errors + /// If the connection goes down we'll get the error reporting done by the pipeline. 
+ } + }) + // Check if we should back off + if (!channel.isWritable) { + val backoff = netty.settings.BackoffTimeout + if (backoff.length > 0 && !f.await(backoff.length, backoff.unit)) f.cancel() //Waited as long as we could, now back off + } } } catch { case NonFatal(e) ⇒ netty.notifyListeners(RemoteClientError(e, netty, remoteAddress)) @@ -195,8 +196,11 @@ private[akka] class ActiveRemoteClient private[akka] ( notifyListeners(RemoteClientShutdown(netty, remoteAddress)) try { if ((connection ne null) && (connection.getChannel ne null)) { - ChannelAddress.remove(connection.getChannel) - connection.getChannel.close() + val channel = connection.getChannel + ChannelAddress.remove(channel) + // Try to disconnect first to reduce "connection reset by peer" events + if (channel.isConnected) channel.disconnect() + if (channel.isOpen) channel.close() } } finally { try { @@ -267,10 +271,8 @@ private[akka] class ActiveRemoteClientHandler( case CommandType.SHUTDOWN ⇒ runOnceNow { client.netty.shutdownClientConnection(remoteAddress) } case _ ⇒ //Ignore others } - case arp: AkkaRemoteProtocol if arp.hasMessage ⇒ client.netty.receiveMessage(new RemoteMessage(arp.getMessage, client.netty.system)) - case other ⇒ throw new RemoteClientException("Unknown message received in remote client handler: " + other, client.netty, client.remoteAddress) } @@ -307,9 +309,14 @@ private[akka] class ActiveRemoteClientHandler( } override def exceptionCaught(ctx: ChannelHandlerContext, event: ExceptionEvent) = { - val cause = if (event.getCause ne null) event.getCause else new Exception("Unknown cause") - client.notifyListeners(RemoteClientError(cause, client.netty, client.remoteAddress)) - event.getChannel.close() + val cause = if (event.getCause ne null) event.getCause else new AkkaException("Unknown cause") + cause match { + case _: ClosedChannelException ⇒ // Ignore + case NonFatal(e) ⇒ + client.notifyListeners(RemoteClientError(e, client.netty, client.remoteAddress)) + event.getChannel.close() + case e: Throwable ⇒ throw e // Rethrow fatals + } } } diff --git a/akka-remote/src/main/scala/akka/remote/netty/NettyRemoteSupport.scala b/akka-remote/src/main/scala/akka/remote/netty/NettyRemoteSupport.scala index 6e36c63024..814bdc0c07 100644 --- a/akka-remote/src/main/scala/akka/remote/netty/NettyRemoteSupport.scala +++ b/akka-remote/src/main/scala/akka/remote/netty/NettyRemoteSupport.scala @@ -8,7 +8,9 @@ import java.net.InetSocketAddress import java.util.concurrent.atomic.{ AtomicReference, AtomicBoolean } import java.util.concurrent.locks.ReentrantReadWriteLock import java.util.concurrent.Executors -import scala.collection.mutable.HashMap +import scala.collection.mutable +import scala.collection.immutable +import scala.util.control.NonFatal import org.jboss.netty.channel.group.{ DefaultChannelGroup, ChannelGroupFuture } import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory import org.jboss.netty.channel.{ ChannelHandlerContext, Channel, DefaultChannelPipeline, ChannelHandler, ChannelPipelineFactory, ChannelLocal } @@ -20,7 +22,6 @@ import org.jboss.netty.util.{ DefaultObjectSizeEstimator, HashedWheelTimer } import akka.event.Logging import akka.remote.RemoteProtocol.AkkaRemoteProtocol import akka.remote.{ RemoteTransportException, RemoteTransport, RemoteActorRefProvider, RemoteActorRef, RemoteServerStarted } -import scala.util.control.NonFatal import akka.actor.{ ExtendedActorSystem, Address, ActorRef } import com.google.protobuf.MessageLite @@ -40,12 +41,9 @@ private[akka] class 
NettyRemoteTransport(_system: ExtendedActorSystem, _provider // TODO replace by system.scheduler val timer: HashedWheelTimer = new HashedWheelTimer(system.threadFactory) - val clientChannelFactory = settings.UseDispatcherForIO match { - case Some(id) ⇒ - val d = system.dispatchers.lookup(id) - new NioClientSocketChannelFactory(d, d) - case None ⇒ - new NioClientSocketChannelFactory(Executors.newCachedThreadPool(), Executors.newCachedThreadPool()) + val clientChannelFactory = { + val boss, worker = settings.UseDispatcherForIO.map(system.dispatchers.lookup) getOrElse Executors.newCachedThreadPool() + new NioClientSocketChannelFactory(boss, worker, settings.ClientSocketWorkerPoolSize) } /** @@ -56,7 +54,7 @@ private[akka] class NettyRemoteTransport(_system: ExtendedActorSystem, _provider * Construct a DefaultChannelPipeline from a sequence of handlers; to be used * in implementations of ChannelPipelineFactory. */ - def apply(handlers: Seq[ChannelHandler]): DefaultChannelPipeline = + def apply(handlers: immutable.Seq[ChannelHandler]): DefaultChannelPipeline = (new DefaultChannelPipeline /: handlers) { (p, h) ⇒ p.addLast(Logging.simpleName(h.getClass), h); p } /** @@ -72,7 +70,7 @@ private[akka] class NettyRemoteTransport(_system: ExtendedActorSystem, _provider * Construct a default protocol stack, excluding the “head” handler (i.e. the one which * actually dispatches the received messages to the local target actors). */ - def defaultStack(withTimeout: Boolean, isClient: Boolean): Seq[ChannelHandler] = + def defaultStack(withTimeout: Boolean, isClient: Boolean): immutable.Seq[ChannelHandler] = (if (settings.EnableSSL) List(NettySSLSupport(settings, NettyRemoteTransport.this.log, isClient)) else Nil) ::: (if (withTimeout) List(timeout) else Nil) ::: msgFormat ::: @@ -141,7 +139,7 @@ private[akka] class NettyRemoteTransport(_system: ExtendedActorSystem, _provider def createPipeline(endpoint: ⇒ ChannelHandler, withTimeout: Boolean, isClient: Boolean): ChannelPipelineFactory = PipelineFactory(Seq(endpoint), withTimeout, isClient) - private val remoteClients = new HashMap[Address, RemoteClient] + private val remoteClients = new mutable.HashMap[Address, RemoteClient] private val clientsLock = new ReentrantReadWriteLock override protected def useUntrustedMode = remoteSettings.UntrustedMode @@ -245,13 +243,13 @@ private[akka] class NettyRemoteTransport(_system: ExtendedActorSystem, _provider } } - def bindClient(remoteAddress: Address, client: RemoteClient, putIfAbsent: Boolean = false): Boolean = { + def bindClient(remoteAddress: Address, client: RemoteClient): Boolean = { clientsLock.writeLock().lock() try { - if (putIfAbsent && remoteClients.contains(remoteAddress)) false + if (remoteClients.contains(remoteAddress)) false else { client.connect() - remoteClients.put(remoteAddress, client).foreach(_.shutdown()) + remoteClients.put(remoteAddress, client) true } } finally { @@ -259,17 +257,7 @@ private[akka] class NettyRemoteTransport(_system: ExtendedActorSystem, _provider } } - def unbindClient(remoteAddress: Address): Unit = { - clientsLock.writeLock().lock() - try { - remoteClients foreach { - case (k, v) ⇒ - if (v.isBoundTo(remoteAddress)) { v.shutdown(); remoteClients.remove(k) } - } - } finally { - clientsLock.writeLock().unlock() - } - } + def unbindClient(remoteAddress: Address): Unit = shutdownClientConnection(remoteAddress) def shutdownClientConnection(remoteAddress: Address): Boolean = { clientsLock.writeLock().lock() diff --git a/akka-remote/src/main/scala/akka/remote/netty/Server.scala 
b/akka-remote/src/main/scala/akka/remote/netty/Server.scala index 16269a43a2..15ca143bf8 100644 --- a/akka-remote/src/main/scala/akka/remote/netty/Server.scala +++ b/akka-remote/src/main/scala/akka/remote/netty/Server.scala @@ -3,21 +3,19 @@ */ package akka.remote.netty +import akka.actor.Address +import akka.remote.RemoteProtocol.{ RemoteControlProtocol, CommandType, AkkaRemoteProtocol } +import akka.remote._ +import java.net.InetAddress import java.net.InetSocketAddress +import java.nio.channels.ClosedChannelException import java.util.concurrent.Executors -import scala.Option.option2Iterable import org.jboss.netty.bootstrap.ServerBootstrap -import org.jboss.netty.channel.ChannelHandler.Sharable +import org.jboss.netty.channel._ import org.jboss.netty.channel.group.ChannelGroup import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory -import org.jboss.netty.handler.codec.frame.{ LengthFieldPrepender, LengthFieldBasedFrameDecoder } -import org.jboss.netty.handler.execution.ExecutionHandler -import akka.remote.RemoteProtocol.{ RemoteControlProtocol, CommandType, AkkaRemoteProtocol } -import akka.remote.{ RemoteServerShutdown, RemoteServerError, RemoteServerClientDisconnected, RemoteServerClientConnected, RemoteServerClientClosed, RemoteProtocol, RemoteMessage } -import akka.actor.Address -import java.net.InetAddress -import akka.actor.ActorSystemImpl -import org.jboss.netty.channel._ +import scala.util.control.NonFatal +import akka.AkkaException private[akka] class NettyRemoteServer(val netty: NettyRemoteTransport) { @@ -25,14 +23,10 @@ private[akka] class NettyRemoteServer(val netty: NettyRemoteTransport) { val ip = InetAddress.getByName(settings.Hostname) - private val factory = - settings.UseDispatcherForIO match { - case Some(id) ⇒ - val d = netty.system.dispatchers.lookup(id) - new NioServerSocketChannelFactory(d, d) - case None ⇒ - new NioServerSocketChannelFactory(Executors.newCachedThreadPool(), Executors.newCachedThreadPool()) - } + private val factory = { + val boss, worker = settings.UseDispatcherForIO.map(netty.system.dispatchers.lookup) getOrElse Executors.newCachedThreadPool() + new NioServerSocketChannelFactory(boss, worker, settings.ServerSocketWorkerPoolSize) + } // group of open channels, used for clean-up private val openChannels: ChannelGroup = new DefaultDisposableChannelGroup("akka-remote-server") @@ -154,7 +148,6 @@ private[akka] class RemoteServerHandler( event.getMessage match { case remote: AkkaRemoteProtocol if remote.hasMessage ⇒ netty.receiveMessage(new RemoteMessage(remote.getMessage, netty.system)) - case remote: AkkaRemoteProtocol if remote.hasInstruction ⇒ val instruction = remote.getInstruction instruction.getCommandType match { @@ -179,8 +172,14 @@ private[akka] class RemoteServerHandler( } override def exceptionCaught(ctx: ChannelHandlerContext, event: ExceptionEvent) = { - netty.notifyListeners(RemoteServerError(event.getCause, netty)) - event.getChannel.close() + val cause = if (event.getCause ne null) event.getCause else new AkkaException("Unknown cause") + cause match { + case _: ClosedChannelException ⇒ // Ignore + case NonFatal(e) ⇒ + netty.notifyListeners(RemoteServerError(e, netty)) + event.getChannel.close() + case e: Throwable ⇒ throw e // Rethrow fatals + } } } diff --git a/akka-remote/src/main/scala/akka/remote/netty/Settings.scala b/akka-remote/src/main/scala/akka/remote/netty/Settings.scala index 4a874c5283..5852f7a3ca 100644 --- a/akka-remote/src/main/scala/akka/remote/netty/Settings.scala +++ 
b/akka-remote/src/main/scala/akka/remote/netty/Settings.scala @@ -4,12 +4,13 @@ package akka.remote.netty import com.typesafe.config.Config -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import java.util.concurrent.TimeUnit._ import java.net.InetAddress import akka.ConfigurationException -import scala.collection.JavaConverters.iterableAsScalaIterableConverter -import scala.concurrent.util.FiniteDuration +import akka.japi.Util.immutableSeq +import scala.concurrent.duration.FiniteDuration +import akka.dispatch.ThreadPoolConfig private[akka] class NettySettings(config: Config, val systemName: String) { @@ -88,42 +89,19 @@ private[akka] class NettySettings(config: Config, val systemName: String) { case sz ⇒ sz } - val SSLKeyStore = getString("ssl.key-store") match { - case "" ⇒ None - case keyStore ⇒ Some(keyStore) - } + val SSLKeyStore = Option(getString("ssl.key-store")).filter(_.length > 0) + val SSLTrustStore = Option(getString("ssl.trust-store")).filter(_.length > 0) + val SSLKeyStorePassword = Option(getString("ssl.key-store-password")).filter(_.length > 0) - val SSLTrustStore = getString("ssl.trust-store") match { - case "" ⇒ None - case trustStore ⇒ Some(trustStore) - } + val SSLTrustStorePassword = Option(getString("ssl.trust-store-password")).filter(_.length > 0) - val SSLKeyStorePassword = getString("ssl.key-store-password") match { - case "" ⇒ None - case password ⇒ Some(password) - } + val SSLEnabledAlgorithms = immutableSeq(getStringList("ssl.enabled-algorithms")).to[Set] - val SSLTrustStorePassword = getString("ssl.trust-store-password") match { - case "" ⇒ None - case password ⇒ Some(password) - } + val SSLProtocol = Option(getString("ssl.protocol")).filter(_.length > 0) - val SSLEnabledAlgorithms = iterableAsScalaIterableConverter(getStringList("ssl.enabled-algorithms")).asScala.toSet[String] + val SSLRandomSource = Option(getString("ssl.sha1prng-random-source")).filter(_.length > 0) - val SSLProtocol = getString("ssl.protocol") match { - case "" ⇒ None - case protocol ⇒ Some(protocol) - } - - val SSLRandomSource = getString("ssl.sha1prng-random-source") match { - case "" ⇒ None - case path ⇒ Some(path) - } - - val SSLRandomNumberGenerator = getString("ssl.random-number-generator") match { - case "" ⇒ None - case rng ⇒ Some(rng) - } + val SSLRandomNumberGenerator = Option(getString("ssl.random-number-generator")).filter(_.length > 0) val EnableSSL = { val enableSSL = getBoolean("ssl.enable") @@ -139,4 +117,14 @@ private[akka] class NettySettings(config: Config, val systemName: String) { } enableSSL } + + private def computeWPS(config: Config): Int = + ThreadPoolConfig.scaledPoolSize( + config.getInt("pool-size-min"), + config.getDouble("pool-size-factor"), + config.getInt("pool-size-max")) + + val ServerSocketWorkerPoolSize = computeWPS(config.getConfig("server-socket-worker-pool")) + + val ClientSocketWorkerPoolSize = computeWPS(config.getConfig("client-socket-worker-pool")) } diff --git a/akka-remote/src/main/scala/akka/remote/routing/RemoteRouterConfig.scala b/akka-remote/src/main/scala/akka/remote/routing/RemoteRouterConfig.scala index 8a4e3bce7c..369d8b0c7b 100644 --- a/akka-remote/src/main/scala/akka/remote/routing/RemoteRouterConfig.scala +++ b/akka-remote/src/main/scala/akka/remote/routing/RemoteRouterConfig.scala @@ -6,19 +6,17 @@ package akka.remote.routing import akka.routing.{ Route, Router, RouterConfig, RouteeProvider, Resizer } import com.typesafe.config.ConfigFactory import akka.actor.ActorContext -import akka.actor.ActorRef 
import akka.actor.Deploy -import akka.actor.InternalActorRef import akka.actor.Props -import akka.ConfigurationException -import akka.remote.RemoteScope -import akka.actor.AddressFromURIString import akka.actor.SupervisorStrategy import akka.actor.Address -import scala.collection.JavaConverters._ +import akka.actor.ActorCell +import akka.ConfigurationException +import akka.remote.RemoteScope +import akka.japi.Util.immutableSeq +import scala.collection.immutable import java.util.concurrent.atomic.AtomicInteger import java.lang.IllegalStateException -import akka.actor.ActorCell /** * [[akka.routing.RouterConfig]] implementation for remote deployment on defined @@ -29,7 +27,7 @@ import akka.actor.ActorCell @SerialVersionUID(1L) final case class RemoteRouterConfig(local: RouterConfig, nodes: Iterable[Address]) extends RouterConfig { - def this(local: RouterConfig, nodes: java.lang.Iterable[Address]) = this(local, nodes.asScala) + def this(local: RouterConfig, nodes: java.lang.Iterable[Address]) = this(local, immutableSeq(nodes)) def this(local: RouterConfig, nodes: Array[Address]) = this(local, nodes: Iterable[Address]) override def createRouteeProvider(context: ActorContext, routeeProps: Props) = @@ -64,20 +62,20 @@ final case class RemoteRouterConfig(local: RouterConfig, nodes: Iterable[Address final class RemoteRouteeProvider(nodes: Iterable[Address], _context: ActorContext, _routeeProps: Props, _resizer: Option[Resizer]) extends RouteeProvider(_context, _routeeProps, _resizer) { - if (nodes.isEmpty) throw new ConfigurationException("Must specify list of remote target.nodes for [%s]" - format context.self.path.toString) + if (nodes.isEmpty) + throw new ConfigurationException("Must specify list of remote target.nodes for [%s]" format context.self.path.toString) // need this iterator as instance variable since Resizer may call createRoutees several times private val nodeAddressIter: Iterator[Address] = Stream.continually(nodes).flatten.iterator // need this counter as instance variable since Resizer may call createRoutees several times private val childNameCounter = new AtomicInteger - override def registerRouteesFor(paths: Iterable[String]): Unit = + override def registerRouteesFor(paths: immutable.Iterable[String]): Unit = throw new ConfigurationException("Remote target.nodes can not be combined with routees for [%s]" format context.self.path.toString) override def createRoutees(nrOfInstances: Int): Unit = { - val refs = IndexedSeq.fill(nrOfInstances) { + val refs = immutable.IndexedSeq.fill(nrOfInstances) { val name = "c" + childNameCounter.incrementAndGet val deploy = Deploy(config = ConfigFactory.empty(), routerConfig = routeeProps.routerConfig, scope = RemoteScope(nodeAddressIter.next)) diff --git a/akka-remote/src/main/scala/akka/remote/security/provider/InternetSeedGenerator.scala b/akka-remote/src/main/scala/akka/remote/security/provider/InternetSeedGenerator.scala index f049a4e678..b274c4c0b6 100644 --- a/akka-remote/src/main/scala/akka/remote/security/provider/InternetSeedGenerator.scala +++ b/akka-remote/src/main/scala/akka/remote/security/provider/InternetSeedGenerator.scala @@ -16,6 +16,7 @@ package akka.remote.security.provider import org.uncommons.maths.random.{ SeedGenerator, SeedException, SecureRandomSeedGenerator, RandomDotOrgSeedGenerator, DevRandomSeedGenerator } +import scala.collection.immutable /** * Internal API @@ -33,8 +34,8 @@ object InternetSeedGenerator { /**Singleton instance. 
*/ private final val Instance: InternetSeedGenerator = new InternetSeedGenerator /**Delegate generators. */ - private final val Generators: Seq[SeedGenerator] = - Seq(new RandomDotOrgSeedGenerator, // first try the Internet seed generator + private final val Generators: immutable.Seq[SeedGenerator] = + List(new RandomDotOrgSeedGenerator, // first try the Internet seed generator new SecureRandomSeedGenerator) // this is last because it always works } diff --git a/akka-remote/src/test/scala/akka/remote/NetworkFailureSpec.scala b/akka-remote/src/test/scala/akka/remote/NetworkFailureSpec.scala index 053c9a93b6..dad24b8d4b 100644 --- a/akka-remote/src/test/scala/akka/remote/NetworkFailureSpec.scala +++ b/akka-remote/src/test/scala/akka/remote/NetworkFailureSpec.scala @@ -17,7 +17,7 @@ import scala.concurrent.{ ExecutionContext, Future } trait NetworkFailureSpec extends DefaultTimeout { self: AkkaSpec ⇒ import Actor._ - import scala.concurrent.util.Duration + import scala.concurrent.duration.Duration import system.dispatcher diff --git a/akka-remote/src/test/scala/akka/remote/RemoteConfigSpec.scala b/akka-remote/src/test/scala/akka/remote/RemoteConfigSpec.scala index 3ca382b00e..45b6ad5610 100644 --- a/akka-remote/src/test/scala/akka/remote/RemoteConfigSpec.scala +++ b/akka-remote/src/test/scala/akka/remote/RemoteConfigSpec.scala @@ -7,8 +7,7 @@ import language.postfixOps import akka.testkit.AkkaSpec import akka.actor.ExtendedActorSystem -import scala.concurrent.util.duration._ -import scala.concurrent.util.Duration +import scala.concurrent.duration._ import akka.remote.netty.NettyRemoteTransport @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) @@ -63,7 +62,32 @@ class RemoteConfigSpec extends AkkaSpec( WriteBufferLowWaterMark must be(None) SendBufferSize must be(None) ReceiveBufferSize must be(None) + ServerSocketWorkerPoolSize must be >= (2) + ServerSocketWorkerPoolSize must be <= (8) + ClientSocketWorkerPoolSize must be >= (2) + ClientSocketWorkerPoolSize must be <= (8) } + "contain correct configuration values in reference.conf" in { + val c = system.asInstanceOf[ExtendedActorSystem]. + provider.asInstanceOf[RemoteActorRefProvider]. 
+ remoteSettings.config.getConfig("akka.remote.netty") + + // server-socket-worker-pool + { + val pool = c.getConfig("server-socket-worker-pool") + pool.getInt("pool-size-min") must equal(2) + pool.getDouble("pool-size-factor") must equal(1.0) + pool.getInt("pool-size-max") must equal(8) + } + + // client-socket-worker-pool + { + val pool = c.getConfig("client-socket-worker-pool") + pool.getInt("pool-size-min") must equal(2) + pool.getDouble("pool-size-factor") must equal(1.0) + pool.getInt("pool-size-max") must equal(8) + } + } } } diff --git a/akka-remote/src/test/scala/akka/remote/Ticket1978CommunicationSpec.scala b/akka-remote/src/test/scala/akka/remote/Ticket1978CommunicationSpec.scala index b6d2bed02a..c194fe1fa6 100644 --- a/akka-remote/src/test/scala/akka/remote/Ticket1978CommunicationSpec.scala +++ b/akka-remote/src/test/scala/akka/remote/Ticket1978CommunicationSpec.scala @@ -17,8 +17,7 @@ import akka.remote.netty.{ NettySettings, NettySSLSupport } import javax.net.ssl.SSLException import akka.util.Timeout import scala.concurrent.Await -import scala.concurrent.util.duration._ -import scala.concurrent.util.Duration +import scala.concurrent.duration._ import akka.event.{ Logging, NoLogging, LoggingAdapter } object Configuration { diff --git a/akka-remote/src/test/scala/akka/remote/Ticket1978ConfigSpec.scala b/akka-remote/src/test/scala/akka/remote/Ticket1978ConfigSpec.scala index be172a563b..e088ae3362 100644 --- a/akka-remote/src/test/scala/akka/remote/Ticket1978ConfigSpec.scala +++ b/akka-remote/src/test/scala/akka/remote/Ticket1978ConfigSpec.scala @@ -3,8 +3,7 @@ package akka.remote import akka.testkit._ import akka.actor._ import com.typesafe.config._ -import scala.concurrent.util.duration._ -import scala.concurrent.util.Duration +import scala.concurrent.duration._ import akka.remote.netty.NettyRemoteTransport import java.util.ArrayList diff --git a/akka-remote/src/test/scala/akka/remote/UntrustedSpec.scala b/akka-remote/src/test/scala/akka/remote/UntrustedSpec.scala index d3aa1a42e9..58ace1bb7c 100644 --- a/akka-remote/src/test/scala/akka/remote/UntrustedSpec.scala +++ b/akka-remote/src/test/scala/akka/remote/UntrustedSpec.scala @@ -18,7 +18,7 @@ import akka.event.Logging import org.scalatest.junit.JUnitRunner import org.junit.runner.RunWith import akka.actor.Terminated -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.PoisonPill @RunWith(classOf[JUnitRunner]) diff --git a/akka-remote/src/test/scala/akka/remote/serialization/DaemonMsgCreateSerializerSpec.scala b/akka-remote/src/test/scala/akka/remote/serialization/DaemonMsgCreateSerializerSpec.scala index 2c80c99615..776feb410c 100644 --- a/akka-remote/src/test/scala/akka/remote/serialization/DaemonMsgCreateSerializerSpec.scala +++ b/akka-remote/src/test/scala/akka/remote/serialization/DaemonMsgCreateSerializerSpec.scala @@ -12,7 +12,7 @@ import akka.testkit.AkkaSpec import akka.actor.{ Actor, Address, Props, Deploy, OneForOneStrategy, SupervisorStrategy, FromClassCreator } import akka.remote.{ DaemonMsgCreate, RemoteScope } import akka.routing.{ RoundRobinRouter, FromConfig } -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ object DaemonMsgCreateSerializerSpec { class MyActor extends Actor { diff --git a/akka-samples/akka-sample-camel/src/main/scala/AsyncRouteAndTransform.scala b/akka-samples/akka-sample-camel/src/main/scala/AsyncRouteAndTransform.scala index 5c6f52d595..d424d8486f 100644 --- 
a/akka-samples/akka-sample-camel/src/main/scala/AsyncRouteAndTransform.scala +++ b/akka-samples/akka-sample-camel/src/main/scala/AsyncRouteAndTransform.scala @@ -24,7 +24,7 @@ class HttpProducer(transformer: ActorRef) extends Actor with Producer { def endpointUri = "jetty://http://akka.io/?bridgeEndpoint=true" override def transformOutgoingMessage(msg: Any) = msg match { - case msg: CamelMessage ⇒ msg.withHeaders(msg.headers(Set(Exchange.HTTP_PATH))) + case msg: CamelMessage ⇒ msg.copy(headers = msg.headers(Set(Exchange.HTTP_PATH))) } override def routeResponse(msg: Any) { diff --git a/akka-samples/akka-sample-camel/src/main/scala/SimpleFileConsumer.scala b/akka-samples/akka-sample-camel/src/main/scala/SimpleFileConsumer.scala index 909de26813..94370d0529 100644 --- a/akka-samples/akka-sample-camel/src/main/scala/SimpleFileConsumer.scala +++ b/akka-samples/akka-sample-camel/src/main/scala/SimpleFileConsumer.scala @@ -19,6 +19,6 @@ class FileConsumer(uri: String) extends Consumer { def endpointUri = uri def receive = { case msg: CamelMessage ⇒ - println("Received file %s with content:\n%s".format(msg.getHeader(Exchange.FILE_NAME), msg.bodyAs[String])) + println("Received file %s with content:\n%s".format(msg.headers(Exchange.FILE_NAME), msg.bodyAs[String])) } } diff --git a/akka-samples/akka-sample-cluster/pom.xml b/akka-samples/akka-sample-cluster/pom.xml new file mode 100644 index 0000000000..78cdd9ca2b --- /dev/null +++ b/akka-samples/akka-sample-cluster/pom.xml @@ -0,0 +1,39 @@ +<project> + <modelVersion>4.0.0</modelVersion> + + <groupId>com.typesafe.akka</groupId> + <artifactId>akka-sample-cluster-experimental-japi</artifactId> + <version>2.2-SNAPSHOT</version> + <packaging>jar</packaging> + <properties> + <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> + </properties> + + <dependencies> + <dependency> + <groupId>com.typesafe.akka</groupId> + <artifactId>akka-cluster-experimental_2.10.0-RC1</artifactId> + <version>2.1-20121016-001042</version> + </dependency> + </dependencies> + + <repositories> + <repository> + <id>typesafe-snapshots</id> + <name>Typesafe Snapshots</name> + <url>http://repo.typesafe.com/typesafe/snapshots/</url> + <layout>default</layout> + </repository> + </repositories> + + <build> + <plugins> + <plugin> + <groupId>org.codehaus.mojo</groupId> + <artifactId>exec-maven-plugin</artifactId> + <version>1.2.1</version> + </plugin> + </plugins> + </build> +</project> diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-amd64-freebsd-6.so b/akka-samples/akka-sample-cluster/sigar/libsigar-amd64-freebsd-6.so new file mode 100644 index 0000000000..3e94f0d2bf Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/libsigar-amd64-freebsd-6.so differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-amd64-linux.so b/akka-samples/akka-sample-cluster/sigar/libsigar-amd64-linux.so new file mode 100644 index 0000000000..5a2e4c24fe Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/libsigar-amd64-linux.so differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-amd64-solaris.so b/akka-samples/akka-sample-cluster/sigar/libsigar-amd64-solaris.so new file mode 100644 index 0000000000..6396482a43 Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/libsigar-amd64-solaris.so differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-ia64-hpux-11.sl b/akka-samples/akka-sample-cluster/sigar/libsigar-ia64-hpux-11.sl new file mode 100644 index 0000000000..d92ea4a96a Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/libsigar-ia64-hpux-11.sl differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-ia64-linux.so b/akka-samples/akka-sample-cluster/sigar/libsigar-ia64-linux.so new file mode 100644 index 0000000000..2bd2fc8e32 Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/libsigar-ia64-linux.so differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-pa-hpux-11.sl b/akka-samples/akka-sample-cluster/sigar/libsigar-pa-hpux-11.sl new file mode 100644 index 0000000000..0dfd8a1122 Binary files /dev/null and 
b/akka-samples/akka-sample-cluster/sigar/libsigar-pa-hpux-11.sl differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-ppc-aix-5.so b/akka-samples/akka-sample-cluster/sigar/libsigar-ppc-aix-5.so new file mode 100644 index 0000000000..7d4b519921 Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/libsigar-ppc-aix-5.so differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-ppc-linux.so b/akka-samples/akka-sample-cluster/sigar/libsigar-ppc-linux.so new file mode 100644 index 0000000000..4394b1b00f Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/libsigar-ppc-linux.so differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-ppc64-aix-5.so b/akka-samples/akka-sample-cluster/sigar/libsigar-ppc64-aix-5.so new file mode 100644 index 0000000000..35fd828808 Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/libsigar-ppc64-aix-5.so differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-ppc64-linux.so b/akka-samples/akka-sample-cluster/sigar/libsigar-ppc64-linux.so new file mode 100644 index 0000000000..a1ba2529c9 Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/libsigar-ppc64-linux.so differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-s390x-linux.so b/akka-samples/akka-sample-cluster/sigar/libsigar-s390x-linux.so new file mode 100644 index 0000000000..c275f4ac69 Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/libsigar-s390x-linux.so differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-sparc-solaris.so b/akka-samples/akka-sample-cluster/sigar/libsigar-sparc-solaris.so new file mode 100644 index 0000000000..aa847d2b54 Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/libsigar-sparc-solaris.so differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-sparc64-solaris.so b/akka-samples/akka-sample-cluster/sigar/libsigar-sparc64-solaris.so new file mode 100644 index 0000000000..6c4fe809c5 Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/libsigar-sparc64-solaris.so differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-universal-macosx.dylib b/akka-samples/akka-sample-cluster/sigar/libsigar-universal-macosx.dylib new file mode 100644 index 0000000000..27ab107111 Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/libsigar-universal-macosx.dylib differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-universal64-macosx.dylib b/akka-samples/akka-sample-cluster/sigar/libsigar-universal64-macosx.dylib new file mode 100644 index 0000000000..0c721fecf3 Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/libsigar-universal64-macosx.dylib differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-x86-freebsd-5.so b/akka-samples/akka-sample-cluster/sigar/libsigar-x86-freebsd-5.so new file mode 100644 index 0000000000..8c50c6117a Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/libsigar-x86-freebsd-5.so differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-x86-freebsd-6.so b/akka-samples/akka-sample-cluster/sigar/libsigar-x86-freebsd-6.so new file mode 100644 index 0000000000..f0800274a6 Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/libsigar-x86-freebsd-6.so differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-x86-linux.so b/akka-samples/akka-sample-cluster/sigar/libsigar-x86-linux.so new file mode 100644 index 0000000000..a0b64eddb0 Binary files /dev/null and 
b/akka-samples/akka-sample-cluster/sigar/libsigar-x86-linux.so differ diff --git a/akka-samples/akka-sample-cluster/sigar/libsigar-x86-solaris.so b/akka-samples/akka-sample-cluster/sigar/libsigar-x86-solaris.so new file mode 100644 index 0000000000..c6452e5655 Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/libsigar-x86-solaris.so differ diff --git a/akka-samples/akka-sample-cluster/sigar/sigar-amd64-winnt.dll b/akka-samples/akka-sample-cluster/sigar/sigar-amd64-winnt.dll new file mode 100644 index 0000000000..1ec8a0353e Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/sigar-amd64-winnt.dll differ diff --git a/akka-samples/akka-sample-cluster/sigar/sigar-x86-winnt.dll b/akka-samples/akka-sample-cluster/sigar/sigar-x86-winnt.dll new file mode 100644 index 0000000000..6afdc0166c Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/sigar-x86-winnt.dll differ diff --git a/akka-samples/akka-sample-cluster/sigar/sigar-x86-winnt.lib b/akka-samples/akka-sample-cluster/sigar/sigar-x86-winnt.lib new file mode 100644 index 0000000000..04924a1fc1 Binary files /dev/null and b/akka-samples/akka-sample-cluster/sigar/sigar-x86-winnt.lib differ diff --git a/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialBackend.java b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialBackend.java new file mode 100644 index 0000000000..b1f813f684 --- /dev/null +++ b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialBackend.java @@ -0,0 +1,49 @@ +package sample.cluster.factorial.japi; + +//#imports +import java.math.BigInteger; +import java.util.concurrent.Callable; +import scala.concurrent.Future; +import akka.actor.UntypedActor; +import akka.dispatch.Mapper; +import static akka.dispatch.Futures.future; +import static akka.pattern.Patterns.pipe; +//#imports + +//#backend +public class FactorialBackend extends UntypedActor { + + @Override + public void onReceive(Object message) { + if (message instanceof Integer) { + final Integer n = (Integer) message; + Future<BigInteger> f = future(new Callable<BigInteger>() { + public BigInteger call() { + return factorial(n); + } + }, getContext().dispatcher()); + + Future<FactorialResult> result = f.map( + new Mapper<BigInteger, FactorialResult>() { + public FactorialResult apply(BigInteger factorial) { + return new FactorialResult(n, factorial); + } + }, getContext().dispatcher()); + + pipe(result, getContext().dispatcher()).to(getSender()); + + } else { + unhandled(message); + } + } + + BigInteger factorial(int n) { + BigInteger acc = BigInteger.ONE; + for (int i = 1; i <= n; ++i) { + acc = acc.multiply(BigInteger.valueOf(i)); + } + return acc; + } +} +//#backend + diff --git a/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialBackendMain.java b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialBackendMain.java new file mode 100644 index 0000000000..4bf907748d --- /dev/null +++ b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialBackendMain.java @@ -0,0 +1,22 @@ +package sample.cluster.factorial.japi; + +import akka.actor.ActorSystem; +import akka.actor.Props; + +public class FactorialBackendMain { + + public static void main(String[] args) throws Exception { + // Override the configuration of the port + // when specified as program argument + if (args.length > 0) + System.setProperty("akka.remote.netty.port", args[0]); + + ActorSystem system = ActorSystem.create("ClusterSystem"); + 
system.actorOf(new Props(FactorialBackend.class), "factorialBackend"); + + system.actorOf(new Props(MetricsListener.class), "metricsListener"); + + } + +} diff --git a/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialFrontend.java b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialFrontend.java new file mode 100644 index 0000000000..13af688739 --- /dev/null +++ b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialFrontend.java @@ -0,0 +1,90 @@ +package sample.cluster.factorial.japi; + +import akka.actor.UntypedActor; +import akka.actor.ActorRef; +import akka.actor.Props; +import akka.event.Logging; +import akka.event.LoggingAdapter; +import akka.routing.FromConfig; +import akka.cluster.routing.AdaptiveLoadBalancingRouter; +import akka.cluster.routing.ClusterRouterConfig; +import akka.cluster.routing.ClusterRouterSettings; +import akka.cluster.routing.HeapMetricsSelector; +import akka.cluster.routing.SystemLoadAverageMetricsSelector; + +//#frontend +public class FactorialFrontend extends UntypedActor { + final int upToN; + final boolean repeat; + + LoggingAdapter log = Logging.getLogger(getContext().system(), this); + + ActorRef backend = getContext().actorOf( + new Props(FactorialBackend.class).withRouter(FromConfig.getInstance()), + "factorialBackendRouter"); + + public FactorialFrontend(int upToN, boolean repeat) { + this.upToN = upToN; + this.repeat = repeat; + } + + @Override + public void preStart() { + sendJobs(); + } + + @Override + public void onReceive(Object message) { + if (message instanceof FactorialResult) { + FactorialResult result = (FactorialResult) message; + if (result.n == upToN) { + log.debug("{}! = {}", result.n, result.factorial); + if (repeat) sendJobs(); + } + + } else { + unhandled(message); + } + } + + void sendJobs() { + log.info("Starting batch of factorials up to [{}]", upToN); + for (int n = 1; n <= upToN; n++) { + backend.tell(n, getSelf()); + } + } + +} +//#frontend + + +//not used, only for documentation +abstract class FactorialFrontend2 extends UntypedActor { + //#router-lookup-in-code + int totalInstances = 100; + String routeesPath = "/user/statsWorker"; + boolean allowLocalRoutees = true; + ActorRef backend = getContext().actorOf( + new Props(FactorialBackend.class).withRouter(new ClusterRouterConfig( + new AdaptiveLoadBalancingRouter(HeapMetricsSelector.getInstance(), 0), + new ClusterRouterSettings( + totalInstances, routeesPath, allowLocalRoutees))), + "factorialBackendRouter2"); + //#router-lookup-in-code +} + +//not used, only for documentation +abstract class StatsService3 extends UntypedActor { + //#router-deploy-in-code + int totalInstances = 100; + int maxInstancesPerNode = 3; + boolean allowLocalRoutees = false; + ActorRef backend = getContext().actorOf( + new Props(FactorialBackend.class).withRouter(new ClusterRouterConfig( + new AdaptiveLoadBalancingRouter( + SystemLoadAverageMetricsSelector.getInstance(), 0), + new ClusterRouterSettings( + totalInstances, maxInstancesPerNode, allowLocalRoutees))), + "factorialBackendRouter3"); + //#router-deploy-in-code +} \ No newline at end of file diff --git a/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialFrontendMain.java b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialFrontendMain.java new file mode 100644 index 0000000000..8d52bdf54a --- /dev/null +++ 
b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialFrontendMain.java @@ -0,0 +1,27 @@ +package sample.cluster.factorial.japi; + +import akka.actor.ActorRef; +import akka.actor.ActorSystem; +import akka.actor.Props; +import akka.actor.UntypedActor; +import akka.actor.UntypedActorFactory; + + +public class FactorialFrontendMain { + + public static void main(String[] args) throws Exception { + final int upToN = (args.length == 0 ? 200 : Integer.valueOf(args[0])); + + ActorSystem system = ActorSystem.create("ClusterSystem"); + + // start the calculations when there are at least 2 other members + system.actorOf(new Props(new UntypedActorFactory() { + @Override + public UntypedActor create() { + return new StartupFrontend(upToN); + } + }), "startup"); + + } + +} diff --git a/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialResult.java b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialResult.java new file mode 100644 index 0000000000..0cb74b6b54 --- /dev/null +++ b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/FactorialResult.java @@ -0,0 +1,14 @@ +package sample.cluster.factorial.japi; + +import java.math.BigInteger; +import java.io.Serializable; + +public class FactorialResult implements Serializable { + public final int n; + public final BigInteger factorial; + + FactorialResult(int n, BigInteger factorial) { + this.n = n; + this.factorial = factorial; + } +} \ No newline at end of file diff --git a/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/MetricsListener.java b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/MetricsListener.java new file mode 100644 index 0000000000..3acbf3e4c0 --- /dev/null +++ b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/MetricsListener.java @@ -0,0 +1,68 @@ +package sample.cluster.factorial.japi; + +//#metrics-listener +import akka.actor.UntypedActor; +import akka.cluster.Cluster; +import akka.cluster.ClusterEvent.ClusterMetricsChanged; +import akka.cluster.ClusterEvent.CurrentClusterState; +import akka.cluster.NodeMetrics; +import akka.cluster.StandardMetrics; +import akka.cluster.StandardMetrics.HeapMemory; +import akka.cluster.StandardMetrics.Cpu; +import akka.event.Logging; +import akka.event.LoggingAdapter; + +public class MetricsListener extends UntypedActor { + LoggingAdapter log = Logging.getLogger(getContext().system(), this); + + Cluster cluster = Cluster.get(getContext().system()); + + //subscribe to ClusterMetricsChanged + @Override + public void preStart() { + cluster.subscribe(getSelf(), ClusterMetricsChanged.class); + } + + //re-subscribe when restart + @Override + public void postStop() { + cluster.unsubscribe(getSelf()); + } + + + @Override + public void onReceive(Object message) { + if (message instanceof ClusterMetricsChanged) { + ClusterMetricsChanged clusterMetrics = (ClusterMetricsChanged) message; + for (NodeMetrics nodeMetrics : clusterMetrics.getNodeMetrics()) { + if (nodeMetrics.address().equals(cluster.selfAddress())) { + logHeap(nodeMetrics); + logCpu(nodeMetrics); + } + } + + } else if (message instanceof CurrentClusterState) { + // ignore + + } else { + unhandled(message); + } + } + + void logHeap(NodeMetrics nodeMetrics) { + HeapMemory heap = StandardMetrics.extractHeapMemory(nodeMetrics); + if (heap != null) { + log.info("Used heap: {} MB", ((double) heap.used()) / 1024 / 1024); + } + } + + void 
logCpu(NodeMetrics nodeMetrics) { + Cpu cpu = StandardMetrics.extractCpu(nodeMetrics); + if (cpu != null && cpu.systemLoadAverage().isDefined()) { + log.info("Load: {} ({} processors)", cpu.systemLoadAverage().get(), + cpu.processors()); + } + } + +} +//#metrics-listener \ No newline at end of file diff --git a/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/StartupFrontend.java b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/StartupFrontend.java new file mode 100644 index 0000000000..54ca680988 --- /dev/null +++ b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/factorial/japi/StartupFrontend.java @@ -0,0 +1,56 @@ +package sample.cluster.factorial.japi; + +import akka.actor.Props; +import akka.actor.UntypedActor; +import akka.actor.UntypedActorFactory; +import akka.cluster.Cluster; +import akka.cluster.ClusterEvent.CurrentClusterState; +import akka.cluster.ClusterEvent.MemberUp; +import akka.event.Logging; +import akka.event.LoggingAdapter; + +public class StartupFrontend extends UntypedActor { + final int upToN; + LoggingAdapter log = Logging.getLogger(getContext().system(), this); + int memberCount = 0; + + public StartupFrontend(int upToN) { + this.upToN = upToN; + } + + //subscribe to MemberUp cluster events + @Override + public void preStart() { + log.info("Factorials will start when there are 3 members in the cluster."); + Cluster.get(getContext().system()).subscribe(getSelf(), MemberUp.class); + } + + @Override + public void onReceive(Object message) { + if (message instanceof CurrentClusterState) { + CurrentClusterState state = (CurrentClusterState) message; + memberCount = state.members().size(); + runWhenReady(); + + } else if (message instanceof MemberUp) { + memberCount++; + runWhenReady(); + + } else { + unhandled(message); + } + + } + + void runWhenReady() { + if (memberCount >= 3) { + getContext().system().actorOf(new Props(new UntypedActorFactory() { + @Override + public UntypedActor create() { + return new FactorialFrontend(upToN, true); + } + }), "factorialFrontend"); + getContext().stop(getSelf()); + } + } +} diff --git a/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsAggregator.java b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsAggregator.java index 469a443131..0716cc38ec 100644 --- a/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsAggregator.java +++ b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsAggregator.java @@ -6,7 +6,7 @@ import java.util.concurrent.TimeUnit; import sample.cluster.stats.japi.StatsMessages.JobFailed; import sample.cluster.stats.japi.StatsMessages.StatsResult; -import scala.concurrent.util.Duration; +import scala.concurrent.duration.Duration; import akka.actor.ActorRef; import akka.actor.ReceiveTimeout; import akka.actor.UntypedActor; @@ -25,7 +25,7 @@ public class StatsAggregator extends UntypedActor { @Override public void preStart() { - getContext().setReceiveTimeout(Duration.create(10, TimeUnit.SECONDS)); + getContext().setReceiveTimeout(Duration.create(3, TimeUnit.SECONDS)); } @Override diff --git a/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsFacade.java b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsFacade.java index 25366c8064..15a271027c 100644 --- a/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsFacade.java +++ 
b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsFacade.java @@ -1,17 +1,23 @@ package sample.cluster.stats.japi; +import scala.concurrent.Future; import sample.cluster.stats.japi.StatsMessages.JobFailed; import sample.cluster.stats.japi.StatsMessages.StatsJob; import akka.actor.ActorRef; import akka.actor.Address; import akka.actor.Props; import akka.actor.UntypedActor; +import akka.dispatch.Recover; import akka.cluster.Cluster; import akka.cluster.ClusterEvent.CurrentClusterState; import akka.cluster.ClusterEvent.LeaderChanged; import akka.cluster.ClusterEvent.MemberEvent; import akka.event.Logging; import akka.event.LoggingAdapter; +import akka.util.Timeout; +import static akka.pattern.Patterns.ask; +import static akka.pattern.Patterns.pipe; +import static java.util.concurrent.TimeUnit.SECONDS; //#facade public class StatsFacade extends UntypedActor { @@ -43,7 +49,13 @@ public class StatsFacade extends UntypedActor { } else if (message instanceof StatsJob) { StatsJob job = (StatsJob) message; - currentMaster.forward(job, getContext()); + Future<Object> f = ask(currentMaster, job, new Timeout(5, SECONDS)). + recover(new Recover<Object>() { + public Object recover(Throwable t) { + return new JobFailed("Service unavailable, try again later"); + } + }, getContext().dispatcher()); + pipe(f, getContext().dispatcher()).to(getSender()); } else if (message instanceof CurrentClusterState) { CurrentClusterState state = (CurrentClusterState) message; diff --git a/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsSampleClient.java b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsSampleClient.java index 3350fed61a..bb3f52e248 100644 --- a/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsSampleClient.java +++ b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/stats/japi/StatsSampleClient.java @@ -10,8 +10,8 @@ import sample.cluster.stats.japi.StatsMessages.JobFailed; import sample.cluster.stats.japi.StatsMessages.StatsJob; import sample.cluster.stats.japi.StatsMessages.StatsResult; import scala.concurrent.forkjoin.ThreadLocalRandom; -import scala.concurrent.util.Duration; -import scala.concurrent.util.FiniteDuration; +import scala.concurrent.duration.Duration; +import scala.concurrent.duration.FiniteDuration; import akka.actor.ActorRef; import akka.actor.Address; import akka.actor.Cancellable; diff --git a/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/transformation/japi/TransformationFrontendMain.java b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/transformation/japi/TransformationFrontendMain.java index 2793494b3d..741d9452be 100644 --- a/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/transformation/japi/TransformationFrontendMain.java +++ b/akka-samples/akka-sample-cluster/src/main/java/sample/cluster/transformation/japi/TransformationFrontendMain.java @@ -4,7 +4,7 @@ import java.util.concurrent.TimeUnit; import sample.cluster.transformation.japi.TransformationMessages.TransformationJob; import scala.concurrent.ExecutionContext; -import scala.concurrent.util.Duration; +import scala.concurrent.duration.Duration; import akka.actor.ActorRef; import akka.actor.ActorSystem; import akka.actor.Props; diff --git a/akka-samples/akka-sample-cluster/src/main/resources/application.conf b/akka-samples/akka-sample-cluster/src/main/resources/application.conf index 62554a65cf..507191b79c 100644 --- 
a/akka-samples/akka-sample-cluster/src/main/resources/application.conf +++ b/akka-samples/akka-sample-cluster/src/main/resources/application.conf @@ -1,3 +1,4 @@ +# //#cluster akka { actor { provider = "akka.cluster.ClusterActorRefProvider" @@ -11,8 +12,6 @@ akka { } } - extensions = ["akka.cluster.Cluster"] - cluster { seed-nodes = [ "akka://ClusterSystem@127.0.0.1:2551", @@ -20,4 +19,23 @@ akka { auto-down = on } -} \ No newline at end of file +} +# //#cluster + +# //#adaptive-router +akka.actor.deployment { + /factorialFrontend/factorialBackendRouter = { + router = adaptive + # metrics-selector = heap + # metrics-selector = load + # metrics-selector = cpu + metrics-selector = mix + nr-of-instances = 100 + cluster { + enabled = on + routees-path = "/user/factorialBackend" + allow-local-routees = off + } + } +} +# //#adaptive-router \ No newline at end of file diff --git a/akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/factorial/FactorialSample.scala b/akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/factorial/FactorialSample.scala new file mode 100644 index 0000000000..9e219a933a --- /dev/null +++ b/akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/factorial/FactorialSample.scala @@ -0,0 +1,184 @@ +package sample.cluster.factorial + +//#imports +import scala.annotation.tailrec +import scala.concurrent.Future +import akka.actor.Actor +import akka.actor.ActorLogging +import akka.actor.ActorRef +import akka.actor.ActorSystem +import akka.actor.Props +import akka.pattern.pipe +import akka.routing.FromConfig + +//#imports + +import akka.cluster.Cluster +import akka.cluster.ClusterEvent.CurrentClusterState +import akka.cluster.ClusterEvent.MemberUp + +object FactorialFrontend { + def main(args: Array[String]): Unit = { + val upToN = if (args.isEmpty) 200 else args(0).toInt + + val system = ActorSystem("ClusterSystem") + + // start the calculations when there are at least 2 other members + system.actorOf(Props(new Actor with ActorLogging { + var memberCount = 0 + + log.info("Factorials will start when there are 3 members in the cluster.") + Cluster(context.system).subscribe(self, classOf[MemberUp]) + + def receive = { + case state: CurrentClusterState ⇒ + memberCount = state.members.size + runWhenReady() + case MemberUp(member) ⇒ + memberCount += 1 + runWhenReady() + } + + def runWhenReady(): Unit = if (memberCount >= 3) { + context.system.actorOf(Props(new FactorialFrontend(upToN, repeat = true)), + name = "factorialFrontend") + context stop self + } + + }), name = "startup") + + } +} + +//#frontend +class FactorialFrontend(upToN: Int, repeat: Boolean) extends Actor with ActorLogging { + + val backend = context.actorOf(Props[FactorialBackend].withRouter(FromConfig), + name = "factorialBackendRouter") + + override def preStart(): Unit = sendJobs() + + def receive = { + case (n: Int, factorial: BigInt) ⇒ + if (n == upToN) { + log.debug("{}! = {}", n, factorial) + if (repeat) sendJobs() + } + } + + def sendJobs(): Unit = { + log.info("Starting batch of factorials up to [{}]", upToN) + 1 to upToN foreach { backend ! 
_ } + } +} +//#frontend + +object FactorialBackend { + def main(args: Array[String]): Unit = { + // Override the configuration of the port + // when specified as program argument + if (args.nonEmpty) System.setProperty("akka.remote.netty.port", args(0)) + + val system = ActorSystem("ClusterSystem") + system.actorOf(Props[FactorialBackend], name = "factorialBackend") + + system.actorOf(Props[MetricsListener], name = "metricsListener") + } +} + +//#backend +class FactorialBackend extends Actor with ActorLogging { + + import context.dispatcher + + def receive = { + case (n: Int) ⇒ + Future(factorial(n)) map { result ⇒ (n, result) } pipeTo sender + } + + def factorial(n: Int): BigInt = { + @tailrec def factorialAcc(acc: BigInt, n: Int): BigInt = { + if (n <= 1) acc + else factorialAcc(acc * n, n - 1) + } + factorialAcc(BigInt(1), n) + } + +} +//#backend + +//#metrics-listener +import akka.cluster.Cluster +import akka.cluster.ClusterEvent.ClusterMetricsChanged +import akka.cluster.ClusterEvent.CurrentClusterState +import akka.cluster.NodeMetrics +import akka.cluster.StandardMetrics.HeapMemory +import akka.cluster.StandardMetrics.Cpu + +class MetricsListener extends Actor with ActorLogging { + val selfAddress = Cluster(context.system).selfAddress + + // subscribe to ClusterMetricsChanged + // re-subscribe when restart + override def preStart(): Unit = + Cluster(context.system).subscribe(self, classOf[ClusterMetricsChanged]) + override def postStop(): Unit = + Cluster(context.system).unsubscribe(self) + + def receive = { + case ClusterMetricsChanged(clusterMetrics) ⇒ + clusterMetrics.filter(_.address == selfAddress) foreach { nodeMetrics ⇒ + logHeap(nodeMetrics) + logCpu(nodeMetrics) + } + case state: CurrentClusterState ⇒ // ignore + } + + def logHeap(nodeMetrics: NodeMetrics): Unit = nodeMetrics match { + case HeapMemory(address, timestamp, used, committed, max) ⇒ + log.info("Used heap: {} MB", used.doubleValue / 1024 / 1024) + case _ ⇒ // no heap info + } + + def logCpu(nodeMetrics: NodeMetrics): Unit = nodeMetrics match { + case Cpu(address, timestamp, Some(systemLoadAverage), cpuCombined, processors) ⇒ + log.info("Load: {} ({} processors)", systemLoadAverage, processors) + case _ ⇒ // no cpu info + } +} + +//#metrics-listener + +// not used, only for documentation +abstract class FactorialFrontend2 extends Actor { + //#router-lookup-in-code + import akka.cluster.routing.ClusterRouterConfig + import akka.cluster.routing.ClusterRouterSettings + import akka.cluster.routing.AdaptiveLoadBalancingRouter + import akka.cluster.routing.HeapMetricsSelector + + val backend = context.actorOf(Props[FactorialBackend].withRouter( + ClusterRouterConfig(AdaptiveLoadBalancingRouter(HeapMetricsSelector), + ClusterRouterSettings( + totalInstances = 100, routeesPath = "/user/statsWorker", + allowLocalRoutees = true))), + name = "factorialBackendRouter2") + //#router-lookup-in-code +} + +// not used, only for documentation +abstract class FactorialFrontend3 extends Actor { + //#router-deploy-in-code + import akka.cluster.routing.ClusterRouterConfig + import akka.cluster.routing.ClusterRouterSettings + import akka.cluster.routing.AdaptiveLoadBalancingRouter + import akka.cluster.routing.SystemLoadAverageMetricsSelector + + val backend = context.actorOf(Props[FactorialBackend].withRouter( + ClusterRouterConfig(AdaptiveLoadBalancingRouter( + SystemLoadAverageMetricsSelector), ClusterRouterSettings( + totalInstances = 100, maxInstancesPerNode = 3, + allowLocalRoutees = false))), + name = "factorialBackendRouter3") + 
//#router-deploy-in-code +} \ No newline at end of file diff --git a/akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/stats/StatsSample.scala b/akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/stats/StatsSample.scala index 549738541e..a1cab85069 100644 --- a/akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/stats/StatsSample.scala +++ b/akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/stats/StatsSample.scala @@ -3,7 +3,7 @@ package sample.cluster.stats //#imports import language.postfixOps import scala.concurrent.forkjoin.ThreadLocalRandom -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import com.typesafe.config.ConfigFactory import akka.actor.Actor import akka.actor.ActorLogging @@ -22,6 +22,9 @@ import akka.cluster.ClusterEvent.MemberUp import akka.cluster.MemberStatus import akka.routing.FromConfig import akka.routing.ConsistentHashingRouter.ConsistentHashableEnvelope +import akka.pattern.ask +import akka.pattern.pipe +import akka.util.Timeout //#imports //#messages @@ -51,7 +54,7 @@ class StatsService extends Actor { class StatsAggregator(expectedResults: Int, replyTo: ActorRef) extends Actor { var results = IndexedSeq.empty[Int] - context.setReceiveTimeout(10 seconds) + context.setReceiveTimeout(3 seconds) def receive = { case wordCount: Int ⇒ @@ -88,6 +91,7 @@ class StatsWorker extends Actor { //#facade class StatsFacade extends Actor with ActorLogging { + import context.dispatcher val cluster = Cluster(context.system) var currentMaster: Option[ActorRef] = None @@ -102,7 +106,12 @@ class StatsFacade extends Actor with ActorLogging { case job: StatsJob if currentMaster.isEmpty ⇒ sender ! JobFailed("Service unavailable, try again later") case job: StatsJob ⇒ - currentMaster foreach { _ forward job } + implicit val timeout = Timeout(5.seconds) + currentMaster foreach { + _ ? 
job recover { + case _ ⇒ JobFailed("Service unavailable, try again later") + } pipeTo sender + } case state: CurrentClusterState ⇒ state.leader foreach updateCurrentMaster case LeaderChanged(Some(leaderAddress)) ⇒ diff --git a/akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/transformation/TransformationSample.scala b/akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/transformation/TransformationSample.scala index 53ee7bcae5..159e17f94b 100644 --- a/akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/transformation/TransformationSample.scala +++ b/akka-samples/akka-sample-cluster/src/main/scala/sample/cluster/transformation/TransformationSample.scala @@ -2,7 +2,7 @@ package sample.cluster.transformation //#imports import language.postfixOps -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.Actor import akka.actor.ActorRef @@ -68,7 +68,8 @@ class TransformationFrontend extends Actor { context watch sender backends = backends :+ sender - case Terminated(a) ⇒ backends.filterNot(_ == a) + case Terminated(a) ⇒ + backends = backends.filterNot(_ == a) } } //#frontend diff --git a/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSingleMasterSpec.scala b/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSingleMasterSpec.scala index 9f7010f7cd..5f1c9728a3 100644 --- a/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSingleMasterSpec.scala +++ b/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSingleMasterSpec.scala @@ -1,7 +1,7 @@ package sample.cluster.stats import language.postfixOps -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import com.typesafe.config.ConfigFactory @@ -33,6 +33,8 @@ object StatsSampleSingleMasterSpecConfig extends MultiNodeConfig { akka.actor.provider = "akka.cluster.ClusterActorRefProvider" akka.remote.log-remote-lifecycle-events = off akka.cluster.auto-join = off + # don't use sigar for tests, native lib not in path + akka.cluster.metrics.collector-class = akka.cluster.JmxMetricsCollector #//#router-deploy-config akka.actor.deployment { /statsFacade/statsService/workerRouter { @@ -67,12 +69,11 @@ abstract class StatsSampleSingleMasterSpec extends MultiNodeSpec(StatsSampleSing override def afterAll() = multiNodeSpecAfterAll() "The stats sample with single master" must { - "illustrate how to startup cluster" in within(10 seconds) { + "illustrate how to startup cluster" in within(15 seconds) { Cluster(system).subscribe(testActor, classOf[MemberUp]) expectMsgClass(classOf[CurrentClusterState]) Cluster(system) join node(first).address - system.actorOf(Props[StatsFacade], "statsFacade") expectMsgAllOf( MemberUp(Member(node(first).address, MemberStatus.Up)), @@ -80,15 +81,17 @@ abstract class StatsSampleSingleMasterSpec extends MultiNodeSpec(StatsSampleSing MemberUp(Member(node(third).address, MemberStatus.Up))) Cluster(system).unsubscribe(testActor) + + system.actorOf(Props[StatsFacade], "statsFacade") testConductor.enter("all-up") } - "show usage of the statsFacade" in within(5 seconds) { + "show usage of the statsFacade" in within(20 seconds) { val facade = system.actorFor(RootActorPath(node(third).address) / "user" / "statsFacade") // eventually the service should be ok, - // worker nodes might not be up yet + // service and worker nodes might not be up yet awaitCond { facade ! 
StatsJob("this is the text that will be analyzed") expectMsgPF() { diff --git a/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala b/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala index b1141e587f..7d9fbda51b 100644 --- a/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala +++ b/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/StatsSampleSpec.scala @@ -1,7 +1,7 @@ package sample.cluster.stats import language.postfixOps -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.Props import akka.actor.RootActorPath @@ -27,6 +27,8 @@ object StatsSampleSpecConfig extends MultiNodeConfig { akka.actor.provider = "akka.cluster.ClusterActorRefProvider" akka.remote.log-remote-lifecycle-events = off akka.cluster.auto-join = off + # don't use sigar for tests, native lib not in path + akka.cluster.metrics.collector-class = akka.cluster.JmxMetricsCollector #//#router-lookup-config akka.actor.deployment { /statsService/workerRouter { @@ -71,16 +73,16 @@ abstract class StatsSampleSpec extends MultiNodeSpec(StatsSampleSpecConfig) override def afterAll() = multiNodeSpecAfterAll() -//#abstract-test + //#abstract-test "The stats sample" must { //#startup-cluster - "illustrate how to startup cluster" in within(10 seconds) { + "illustrate how to startup cluster" in within(15 seconds) { Cluster(system).subscribe(testActor, classOf[MemberUp]) expectMsgClass(classOf[CurrentClusterState]) - //#addresses + //#addresses val firstAddress = node(first).address val secondAddress = node(second).address val thirdAddress = node(third).address @@ -104,34 +106,37 @@ abstract class StatsSampleSpec extends MultiNodeSpec(StatsSampleSpecConfig) } //#startup-cluster - //#test-statsService - "show usage of the statsService from one node" in within(5 seconds) { + "show usage of the statsService from one node" in within(15 seconds) { runOn(second) { - val service = system.actorFor(node(third) / "user" / "statsService") - service ! StatsJob("this is the text that will be analyzed") - val meanWordLength = expectMsgPF() { - case StatsResult(meanWordLength) ⇒ meanWordLength - } - meanWordLength must be(3.875 plusOrMinus 0.001) + assertServiceOk } testConductor.enter("done-2") } + + def assertServiceOk: Unit = { + val service = system.actorFor(node(third) / "user" / "statsService") + // eventually the service should be ok, + // first attempts might fail because worker actors not started yet + awaitCond { + service ! StatsJob("this is the text that will be analyzed") + expectMsgPF() { + case unavailble: JobFailed ⇒ false + case StatsResult(meanWordLength) ⇒ + meanWordLength must be(3.875 plusOrMinus 0.001) + true + } + } + + } //#test-statsService - - "show usage of the statsService from all nodes" in within(5 seconds) { - val service = system.actorFor(node(third) / "user" / "statsService") - service ! 
StatsJob("this is the text that will be analyzed") - val meanWordLength = expectMsgPF() { - case StatsResult(meanWordLength) ⇒ meanWordLength - } - meanWordLength must be(3.875 plusOrMinus 0.001) - testConductor.enter("done-2") + "show usage of the statsService from all nodes" in within(15 seconds) { + assertServiceOk + testConductor.enter("done-3") } - } } \ No newline at end of file diff --git a/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/japi/StatsSampleJapiSpec.scala b/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/japi/StatsSampleJapiSpec.scala index 4c73b858cb..4583dac90e 100644 --- a/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/japi/StatsSampleJapiSpec.scala +++ b/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/japi/StatsSampleJapiSpec.scala @@ -1,7 +1,7 @@ package sample.cluster.stats.japi import language.postfixOps -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.Props import akka.actor.RootActorPath @@ -31,6 +31,8 @@ object StatsSampleJapiSpecConfig extends MultiNodeConfig { akka.actor.provider = "akka.cluster.ClusterActorRefProvider" akka.remote.log-remote-lifecycle-events = off akka.cluster.auto-join = off + # don't use sigar for tests, native lib not in path + akka.cluster.metrics.collector-class = akka.cluster.JmxMetricsCollector akka.actor.deployment { /statsService/workerRouter { router = consistent-hashing @@ -65,7 +67,7 @@ abstract class StatsSampleJapiSpec extends MultiNodeSpec(StatsSampleJapiSpecConf "The japi stats sample" must { - "illustrate how to startup cluster" in within(10 seconds) { + "illustrate how to startup cluster" in within(15 seconds) { Cluster(system).subscribe(testActor, classOf[MemberUp]) expectMsgClass(classOf[CurrentClusterState]) @@ -88,33 +90,36 @@ abstract class StatsSampleJapiSpec extends MultiNodeSpec(StatsSampleJapiSpecConf testConductor.enter("all-up") } - - "show usage of the statsService from one node" in within(5 seconds) { + "show usage of the statsService from one node" in within(15 seconds) { runOn(second) { - val service = system.actorFor(node(third) / "user" / "statsService") - service ! new StatsJob("this is the text that will be analyzed") - val meanWordLength = expectMsgPF() { - case r: StatsResult ⇒ r.getMeanWordLength - } - meanWordLength must be(3.875 plusOrMinus 0.001) + assertServiceOk } testConductor.enter("done-2") } + + def assertServiceOk: Unit = { + val service = system.actorFor(node(third) / "user" / "statsService") + // eventually the service should be ok, + // first attempts might fail because worker actors not started yet + awaitCond { + service ! new StatsJob("this is the text that will be analyzed") + expectMsgPF() { + case unavailble: JobFailed ⇒ false + case r: StatsResult ⇒ + r.getMeanWordLength must be(3.875 plusOrMinus 0.001) + true + } + } + } //#test-statsService - - "show usage of the statsService from all nodes" in within(5 seconds) { - val service = system.actorFor(node(third) / "user" / "statsService") - service ! 
new StatsJob("this is the text that will be analyzed") - val meanWordLength = expectMsgPF() { - case r: StatsResult ⇒ r.getMeanWordLength - } - meanWordLength must be(3.875 plusOrMinus 0.001) - testConductor.enter("done-2") + "show usage of the statsService from all nodes" in within(15 seconds) { + assertServiceOk + + testConductor.enter("done-3") } - } } \ No newline at end of file diff --git a/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/japi/StatsSampleSingleMasterJapiSpec.scala b/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/japi/StatsSampleSingleMasterJapiSpec.scala index e6ef3d333f..ca69c1ae6c 100644 --- a/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/japi/StatsSampleSingleMasterJapiSpec.scala +++ b/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/stats/japi/StatsSampleSingleMasterJapiSpec.scala @@ -1,7 +1,7 @@ package sample.cluster.stats.japi import language.postfixOps -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import com.typesafe.config.ConfigFactory @@ -34,6 +34,8 @@ object StatsSampleSingleMasterJapiSpecConfig extends MultiNodeConfig { akka.actor.provider = "akka.cluster.ClusterActorRefProvider" akka.remote.log-remote-lifecycle-events = off akka.cluster.auto-join = off + # don't use sigar for tests, native lib not in path + akka.cluster.metrics.collector-class = akka.cluster.JmxMetricsCollector akka.actor.deployment { /statsFacade/statsService/workerRouter { router = consistent-hashing @@ -66,12 +68,11 @@ abstract class StatsSampleSingleMasterJapiSpec extends MultiNodeSpec(StatsSample override def afterAll() = multiNodeSpecAfterAll() "The japi stats sample with single master" must { - "illustrate how to startup cluster" in within(10 seconds) { + "illustrate how to startup cluster" in within(15 seconds) { Cluster(system).subscribe(testActor, classOf[MemberUp]) expectMsgClass(classOf[CurrentClusterState]) Cluster(system) join node(first).address - system.actorOf(Props[StatsFacade], "statsFacade") expectMsgAllOf( MemberUp(Member(node(first).address, MemberStatus.Up)), @@ -80,14 +81,16 @@ abstract class StatsSampleSingleMasterJapiSpec extends MultiNodeSpec(StatsSample Cluster(system).unsubscribe(testActor) + system.actorOf(Props[StatsFacade], "statsFacade") + testConductor.enter("all-up") } - "show usage of the statsFacade" in within(5 seconds) { + "show usage of the statsFacade" in within(20 seconds) { val facade = system.actorFor(RootActorPath(node(third).address) / "user" / "statsFacade") // eventually the service should be ok, - // worker nodes might not be up yet + // service and worker nodes might not be up yet awaitCond { facade ! 
new StatsJob("this is the text that will be analyzed") expectMsgPF() { diff --git a/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/transformation/TransformationSampleSpec.scala b/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/transformation/TransformationSampleSpec.scala index 1c3176ee16..0e4403b285 100644 --- a/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/transformation/TransformationSampleSpec.scala +++ b/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/transformation/TransformationSampleSpec.scala @@ -1,7 +1,7 @@ package sample.cluster.transformation import language.postfixOps -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import com.typesafe.config.ConfigFactory @@ -29,6 +29,8 @@ object TransformationSampleSpecConfig extends MultiNodeConfig { akka.actor.provider = "akka.cluster.ClusterActorRefProvider" akka.remote.log-remote-lifecycle-events = off akka.cluster.auto-join = off + # don't use sigar for tests, native lib not in path + akka.cluster.metrics.collector-class = akka.cluster.JmxMetricsCollector """)) } @@ -52,7 +54,7 @@ abstract class TransformationSampleSpec extends MultiNodeSpec(TransformationSamp override def afterAll() = multiNodeSpecAfterAll() "The transformation sample" must { - "illustrate how to start first frontend" in { + "illustrate how to start first frontend" in within(15 seconds) { runOn(frontend1) { // this will only run on the 'first' node Cluster(system) join node(frontend1).address @@ -88,6 +90,8 @@ abstract class TransformationSampleSpec extends MultiNodeSpec(TransformationSamp Cluster(system) join node(frontend1).address system.actorOf(Props[TransformationFrontend], name = "frontend") } + testConductor.enter("frontend2-started") + runOn(backend2, backend3) { Cluster(system) join node(backend1).address system.actorOf(Props[TransformationBackend], name = "backend") diff --git a/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/transformation/japi/TransformationSampleJapiSpec.scala b/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/transformation/japi/TransformationSampleJapiSpec.scala index 4a8cd1c8c8..bf6fdaf19c 100644 --- a/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/transformation/japi/TransformationSampleJapiSpec.scala +++ b/akka-samples/akka-sample-cluster/src/multi-jvm/scala/sample/cluster/transformation/japi/TransformationSampleJapiSpec.scala @@ -1,7 +1,7 @@ package sample.cluster.transformation.japi import language.postfixOps -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import com.typesafe.config.ConfigFactory @@ -30,6 +30,8 @@ object TransformationSampleJapiSpecConfig extends MultiNodeConfig { akka.actor.provider = "akka.cluster.ClusterActorRefProvider" akka.remote.log-remote-lifecycle-events = off akka.cluster.auto-join = off + # don't use sigar for tests, native lib not in path + akka.cluster.metrics.collector-class = akka.cluster.JmxMetricsCollector """)) } @@ -53,7 +55,7 @@ abstract class TransformationSampleJapiSpec extends MultiNodeSpec(Transformation override def afterAll() = multiNodeSpecAfterAll() "The japi transformation sample" must { - "illustrate how to start first frontend" in { + "illustrate how to start first frontend" in within(15 seconds) { runOn(frontend1) { // this will only run on the 'first' node Cluster(system) join node(frontend1).address @@ -89,6 +91,7 @@ abstract class TransformationSampleJapiSpec extends 
MultiNodeSpec(Transformation Cluster(system) join node(frontend1).address system.actorOf(Props[TransformationFrontend], name = "frontend") } + testConductor.enter("frontend2-started") runOn(backend2, backend3) { Cluster(system) join node(backend1).address system.actorOf(Props[TransformationBackend], name = "backend") diff --git a/akka-samples/akka-sample-fsm/src/main/scala/Buncher.scala b/akka-samples/akka-sample-fsm/src/main/scala/Buncher.scala index 64dc611396..9fc9a371f6 100644 --- a/akka-samples/akka-sample-fsm/src/main/scala/Buncher.scala +++ b/akka-samples/akka-sample-fsm/src/main/scala/Buncher.scala @@ -5,9 +5,9 @@ package sample.fsm.buncher import akka.actor.ActorRefFactory import scala.reflect.ClassTag -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import akka.actor.{ FSM, Actor, ActorRef } -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.FiniteDuration /* * generic typed object buncher. diff --git a/akka-samples/akka-sample-fsm/src/main/scala/DiningHakkersOnBecome.scala b/akka-samples/akka-sample-fsm/src/main/scala/DiningHakkersOnBecome.scala index 9fc39ec2a6..a9740267ca 100644 --- a/akka-samples/akka-sample-fsm/src/main/scala/DiningHakkersOnBecome.scala +++ b/akka-samples/akka-sample-fsm/src/main/scala/DiningHakkersOnBecome.scala @@ -9,7 +9,7 @@ import language.postfixOps //http://www.dalnefre.com/wp/2010/08/dining-philosophers-in-humus/ import akka.actor._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ /* * First we define our messages, they basically speak for themselves diff --git a/akka-samples/akka-sample-fsm/src/main/scala/DiningHakkersOnFsm.scala b/akka-samples/akka-sample-fsm/src/main/scala/DiningHakkersOnFsm.scala index 902eb797d2..b1fec79f2b 100644 --- a/akka-samples/akka-sample-fsm/src/main/scala/DiningHakkersOnFsm.scala +++ b/akka-samples/akka-sample-fsm/src/main/scala/DiningHakkersOnFsm.scala @@ -6,9 +6,7 @@ package sample.fsm.dining.fsm import language.postfixOps import akka.actor._ import akka.actor.FSM._ -import scala.concurrent.util.Duration -import scala.concurrent.util.duration._ -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration._ /* * Some messages for the chopstick diff --git a/akka-samples/akka-sample-hello-kernel/src/main/java/sample/kernel/hello/java/HelloKernel.java b/akka-samples/akka-sample-hello-kernel/src/main/java/sample/kernel/hello/java/HelloKernel.java index 77e579aa1e..fa1cdf08d1 100644 --- a/akka-samples/akka-sample-hello-kernel/src/main/java/sample/kernel/hello/java/HelloKernel.java +++ b/akka-samples/akka-sample-hello-kernel/src/main/java/sample/kernel/hello/java/HelloKernel.java @@ -12,7 +12,7 @@ import akka.kernel.Bootable; public class HelloKernel implements Bootable { final ActorSystem system = ActorSystem.create("hellokernel"); - static class HelloActor extends UntypedActor { + public static class HelloActor extends UntypedActor { final ActorRef worldActor = getContext().actorOf( new Props(WorldActor.class)); @@ -26,7 +26,7 @@ public class HelloKernel implements Bootable { } } - static class WorldActor extends UntypedActor { + public static class WorldActor extends UntypedActor { public void onReceive(Object message) { if (message instanceof String) getSender().tell(((String) message).toUpperCase() + " world!", diff --git a/akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/CreationApplication.scala 
b/akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/CreationApplication.scala index c37e3e1ed8..a227611fdf 100644 --- a/akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/CreationApplication.scala +++ b/akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/CreationApplication.scala @@ -40,7 +40,7 @@ class CreationActor extends Actor { case result: MathResult ⇒ result match { case MultiplicationResult(n1, n2, r) ⇒ println("Mul result: %d * %d = %d".format(n1, n2, r)) - case DivisionResult(n1, n2, r) ⇒ + case DivisionResult(n1, n2, r) ⇒ println("Div result: %.0f / %d = %.2f".format(n1, n2, r)) } } diff --git a/akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/LookupApplication.scala b/akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/LookupApplication.scala index cee40de5e7..70f49eb29d 100644 --- a/akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/LookupApplication.scala +++ b/akka-samples/akka-sample-remote/src/main/scala/sample/remote/calculator/LookupApplication.scala @@ -40,7 +40,7 @@ class LookupActor extends Actor { def receive = { case (actor: ActorRef, op: MathOp) ⇒ actor ! op case result: MathResult ⇒ result match { - case AddResult(n1, n2, r) ⇒ + case AddResult(n1, n2, r) ⇒ println("Add result: %d + %d = %d".format(n1, n2, r)) case SubtractResult(n1, n2, r) ⇒ println("Sub result: %d - %d = %d".format(n1, n2, r)) diff --git a/akka-sbt-plugin/sample/project/Build.scala b/akka-sbt-plugin/sample/project/Build.scala index b212548f6b..5223e097bd 100644 --- a/akka-sbt-plugin/sample/project/Build.scala +++ b/akka-sbt-plugin/sample/project/Build.scala @@ -6,8 +6,8 @@ import akka.sbt.AkkaKernelPlugin.{ Dist, outputDirectory, distJvmOptions} object HelloKernelBuild extends Build { val Organization = "akka.sample" - val Version = "2.1-SNAPSHOT" - val ScalaVersion = "2.10.0-M6" + val Version = "2.2-SNAPSHOT" + val ScalaVersion = "2.10.0-RC3" lazy val HelloKernel = Project( id = "hello-kernel", @@ -49,10 +49,10 @@ object Dependencies { object Dependency { // Versions object V { - val Akka = "2.1-SNAPSHOT" + val Akka = "2.2-SNAPSHOT" } - val akkaKernel = "com.typesafe.akka" % "akka-kernel" % V.Akka - val akkaSlf4j = "com.typesafe.akka" % "akka-slf4j" % V.Akka - val logback = "ch.qos.logback" % "logback-classic" % "1.0.0" + val akkaKernel = "com.typesafe.akka" %% "akka-kernel" % V.Akka cross CrossVersion.full + val akkaSlf4j = "com.typesafe.akka" %% "akka-slf4j" % V.Akka cross CrossVersion.full + val logback = "ch.qos.logback" % "logback-classic" % "1.0.0" } diff --git a/akka-sbt-plugin/sample/project/build.properties b/akka-sbt-plugin/sample/project/build.properties index f4ff7a5afa..4474a03e1a 100644 --- a/akka-sbt-plugin/sample/project/build.properties +++ b/akka-sbt-plugin/sample/project/build.properties @@ -1 +1 @@ -sbt.version=0.11.2 +sbt.version=0.12.1 diff --git a/akka-sbt-plugin/sample/project/plugins.sbt b/akka-sbt-plugin/sample/project/plugins.sbt index 3f814e328e..6200abdd63 100644 --- a/akka-sbt-plugin/sample/project/plugins.sbt +++ b/akka-sbt-plugin/sample/project/plugins.sbt @@ -1,3 +1 @@ -resolvers += "Typesafe Repo" at "http://repo.typesafe.com/typesafe/releases/" - -addSbtPlugin("com.typesafe.akka" % "akka-sbt-plugin" % "2.1-SNAPSHOT") +addSbtPlugin("com.typesafe.akka" % "akka-sbt-plugin" % "2.2-SNAPSHOT") diff --git a/akka-sbt-plugin/src/main/scala/AkkaKernelPlugin.scala b/akka-sbt-plugin/src/main/scala/AkkaKernelPlugin.scala index 835a596a4a..0a22709b1c 
100644 --- a/akka-sbt-plugin/src/main/scala/AkkaKernelPlugin.scala +++ b/akka-sbt-plugin/src/main/scala/AkkaKernelPlugin.scala @@ -19,6 +19,7 @@ object AkkaKernelPlugin extends Plugin { configSourceDirs: Seq[File], distJvmOptions: String, distMainClass: String, + distBootClass: String, libFilter: File ⇒ Boolean, additionalLibs: Seq[File]) @@ -30,8 +31,12 @@ object AkkaKernelPlugin extends Plugin { val configSourceDirs = TaskKey[Seq[File]]("config-source-directories", "Configuration files are copied from these directories") - val distJvmOptions = SettingKey[String]("kernel-jvm-options", "JVM parameters to use in start script") - val distMainClass = SettingKey[String]("kernel-main-class", "Kernel main class to use in start script") + val distJvmOptions = SettingKey[String]("kernel-jvm-options", + "JVM parameters to use in start script") + val distMainClass = SettingKey[String]("kernel-main-class", + "main class to use in start script, defaults to akka.kernel.Main to load an akka.kernel.Bootable") + val distBootClass = SettingKey[String]("kernel-boot-class", + "class implementing akka.kernel.Bootable, which gets loaded by the default 'distMainClass'") val libFilter = SettingKey[File ⇒ Boolean]("lib-filter", "Filter of dependency jar files") val additionalLibs = TaskKey[Seq[File]]("additional-libs", "Additional dependency jar files") @@ -50,16 +55,17 @@ object AkkaKernelPlugin extends Plugin { configSourceDirs <<= defaultConfigSourceDirs, distJvmOptions := "-Xms1024M -Xmx1024M -Xss1M -XX:MaxPermSize=256M -XX:+UseParallelGC", distMainClass := "akka.kernel.Main", + distBootClass := "", libFilter := { f ⇒ true }, additionalLibs <<= defaultAdditionalLibs, - distConfig <<= (outputDirectory, configSourceDirs, distJvmOptions, distMainClass, libFilter, additionalLibs) map DistConfig)) ++ + distConfig <<= (outputDirectory, configSourceDirs, distJvmOptions, distMainClass, distBootClass, libFilter, additionalLibs) map DistConfig)) ++ Seq(dist <<= (dist in Dist), distNeedsPackageBin) private def distTask: Initialize[Task[File]] = (distConfig, sourceDirectory, crossTarget, dependencyClasspath, projectDependencies, allDependencies, buildStructure, state) map { (conf, src, tgt, cp, projDeps, allDeps, buildStruct, st) ⇒ if (isKernelProject(allDeps)) { - val log = logger(st) + val log = st.log val distBinPath = conf.outputDirectory / "bin" val distConfigPath = conf.outputDirectory / "config" val distDeployPath = conf.outputDirectory / "deploy" @@ -69,7 +75,7 @@ object AkkaKernelPlugin extends Plugin { log.info("Creating distribution %s ..." 
format conf.outputDirectory) IO.createDirectory(conf.outputDirectory) - Scripts(conf.distJvmOptions, conf.distMainClass).writeScripts(distBinPath) + Scripts(conf.distJvmOptions, conf.distMainClass, conf.distBootClass).writeScripts(distBinPath) copyDirectories(conf.configSourceDirs, distConfigPath) copyJars(tgt, distDeployPath) @@ -109,7 +115,7 @@ object AkkaKernelPlugin extends Plugin { Seq.empty[File] } - private case class Scripts(jvmOptions: String, mainClass: String) { + private case class Scripts(jvmOptions: String, mainClass: String, bootClass: String) { def writeScripts(to: File) = { scripts.map { script ⇒ @@ -131,8 +137,8 @@ object AkkaKernelPlugin extends Plugin { |AKKA_CLASSPATH="$AKKA_HOME/config:$AKKA_HOME/lib/*" |JAVA_OPTS="%s" | - |java $JAVA_OPTS -cp "$AKKA_CLASSPATH" -Dakka.home="$AKKA_HOME" %s "$@" - |""".stripMargin.format(jvmOptions, mainClass) + |java $JAVA_OPTS -cp "$AKKA_CLASSPATH" -Dakka.home="$AKKA_HOME" %s%s "$@" + |""".stripMargin.format(jvmOptions, mainClass, if (bootClass.nonEmpty) " " + bootClass else "") private def distBatScript = """|@echo off @@ -140,8 +146,8 @@ object AkkaKernelPlugin extends Plugin { |set AKKA_CLASSPATH=%%AKKA_HOME%%\config;%%AKKA_HOME%%\lib\* |set JAVA_OPTS=%s | - |java %%JAVA_OPTS%% -cp "%%AKKA_CLASSPATH%%" -Dakka.home="%%AKKA_HOME%%" %s %%* - |""".stripMargin.format(jvmOptions, mainClass) + |java %%JAVA_OPTS%% -cp "%%AKKA_CLASSPATH%%" -Dakka.home="%%AKKA_HOME%%" %s%s %%* + |""".stripMargin.format(jvmOptions, mainClass, if (bootClass.nonEmpty) " " + bootClass else "") private def setExecutable(target: File, executable: Boolean): Option[String] = { val success = target.setExecutable(executable, false) @@ -201,7 +207,7 @@ object AkkaKernelPlugin extends Plugin { def setting[A](key: SettingKey[A], errorMessage: ⇒ String) = { optionalSetting(key) getOrElse { - logger(state).error(errorMessage); + state.log.error(errorMessage); throw new IllegalArgumentException() } } diff --git a/akka-slf4j/src/test/scala/akka/event/slf4j/Slf4jEventHandlerSpec.scala b/akka-slf4j/src/test/scala/akka/event/slf4j/Slf4jEventHandlerSpec.scala index 77b10039ad..bdc00e6c17 100644 --- a/akka-slf4j/src/test/scala/akka/event/slf4j/Slf4jEventHandlerSpec.scala +++ b/akka-slf4j/src/test/scala/akka/event/slf4j/Slf4jEventHandlerSpec.scala @@ -8,7 +8,7 @@ import language.postfixOps import akka.testkit.AkkaSpec import akka.actor.Actor import akka.actor.ActorLogging -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.event.Logging import akka.actor.Props import ch.qos.logback.core.OutputStreamAppender diff --git a/akka-testkit/src/main/java/akka/testkit/JavaTestKit.java b/akka-testkit/src/main/java/akka/testkit/JavaTestKit.java index 88fe0d940e..835fd1939a 100644 --- a/akka-testkit/src/main/java/akka/testkit/JavaTestKit.java +++ b/akka-testkit/src/main/java/akka/testkit/JavaTestKit.java @@ -10,8 +10,8 @@ import akka.event.Logging; import akka.event.Logging.LogEvent; import akka.japi.JavaPartialFunction; import akka.japi.Util; -import scala.concurrent.util.Duration; -import scala.concurrent.util.FiniteDuration; +import scala.concurrent.duration.Duration; +import scala.concurrent.duration.FiniteDuration; /** * Java API for the TestProbe. Proper JavaDocs to come once JavaDoccing is implemented. @@ -184,31 +184,31 @@ public class JavaTestKit { } public Object expectMsgAnyOf(Object... 
msgs) { - return p.expectMsgAnyOf(Util.arrayToSeq(msgs)); + return p.expectMsgAnyOf(Util.immutableSeq(msgs)); } public Object expectMsgAnyOf(FiniteDuration max, Object... msgs) { - return p.expectMsgAnyOf(max, Util.arrayToSeq(msgs)); + return p.expectMsgAnyOf(max, Util.immutableSeq(msgs)); } public Object[] expectMsgAllOf(Object... msgs) { - return (Object[]) p.expectMsgAllOf(Util.arrayToSeq(msgs)).toArray( + return (Object[]) p.expectMsgAllOf(Util.immutableSeq(msgs)).toArray( Util.classTag(Object.class)); } public Object[] expectMsgAllOf(FiniteDuration max, Object... msgs) { - return (Object[]) p.expectMsgAllOf(max, Util.arrayToSeq(msgs)).toArray( + return (Object[]) p.expectMsgAllOf(max, Util.immutableSeq(msgs)).toArray( Util.classTag(Object.class)); } @SuppressWarnings("unchecked") public <T> T expectMsgAnyClassOf(Class<? extends T>... classes) { - final Object result = p.expectMsgAnyClassOf(Util.arrayToSeq(classes)); + final Object result = p.expectMsgAnyClassOf(Util.immutableSeq(classes)); return (T) result; } public Object expectMsgAnyClassOf(FiniteDuration max, Class<?>... classes) { - return p.expectMsgAnyClassOf(max, Util.arrayToSeq(classes)); + return p.expectMsgAnyClassOf(max, Util.immutableSeq(classes)); } public void expectNoMsg() { diff --git a/akka-testkit/src/main/resources/reference.conf b/akka-testkit/src/main/resources/reference.conf index 17da88c22e..7adeb68331 100644 --- a/akka-testkit/src/main/resources/reference.conf +++ b/akka-testkit/src/main/resources/reference.conf @@ -15,7 +15,8 @@ akka { # all required messages are received filter-leeway = 3s - # duration to wait in expectMsg and friends outside of within() block by default + # duration to wait in expectMsg and friends outside of within() block + # by default single-expect-default = 3s # The timeout that is added as an implicit by DefaultTimeout trait diff --git a/akka-testkit/src/main/scala/akka/testkit/CallingThreadDispatcher.scala b/akka-testkit/src/main/scala/akka/testkit/CallingThreadDispatcher.scala index 0fbe4d7c18..dad7f4643e 100644 --- a/akka-testkit/src/main/scala/akka/testkit/CallingThreadDispatcher.scala +++ b/akka-testkit/src/main/scala/akka/testkit/CallingThreadDispatcher.scala @@ -12,9 +12,9 @@ import scala.annotation.tailrec import com.typesafe.config.Config import akka.actor.{ ActorInitializationException, ExtensionIdProvider, ExtensionId, Extension, ExtendedActorSystem, ActorRef, ActorCell } import akka.dispatch.{ MessageQueue, MailboxType, TaskInvocation, SystemMessage, Suspend, Resume, MessageDispatcherConfigurator, MessageDispatcher, Mailbox, Envelope, DispatcherPrerequisites, DefaultSystemMessageQueue } -import scala.concurrent.util.duration.intToDurationInt +import scala.concurrent.duration._ import akka.util.Switch -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import scala.concurrent.Awaitable import akka.actor.ActorContext import scala.util.control.NonFatal diff --git a/akka-testkit/src/main/scala/akka/testkit/TestActorRef.scala b/akka-testkit/src/main/scala/akka/testkit/TestActorRef.scala index e113a8596c..132e3f5e78 100644 --- a/akka-testkit/src/main/scala/akka/testkit/TestActorRef.scala +++ b/akka-testkit/src/main/scala/akka/testkit/TestActorRef.scala @@ -42,6 +42,9 @@ class TestActorRef[T <: Actor]( _supervisor, _supervisor.path / name) { + // we need to start ourselves since the creation of an actor has been split into initialization and starting + underlying.start() + import TestActorRef.InternalGetActor override def newActorCell(system: ActorSystemImpl, ref: 
InternalActorRef, props: Props, supervisor: InternalActorRef): ActorCell = @@ -135,7 +138,7 @@ object TestActorRef { def apply[T <: Actor](implicit t: ClassTag[T], system: ActorSystem): TestActorRef[T] = apply[T](randomName) def apply[T <: Actor](name: String)(implicit t: ClassTag[T], system: ActorSystem): TestActorRef[T] = apply[T](Props({ - system.asInstanceOf[ExtendedActorSystem].dynamicAccess.createInstanceFor[T](t.runtimeClass, Seq()).recover({ + system.asInstanceOf[ExtendedActorSystem].dynamicAccess.createInstanceFor[T](t.runtimeClass, Nil).recover({ case exception ⇒ throw ActorInitializationException(null, "Could not instantiate Actor" + "\nMake sure Actor is NOT defined inside a class/trait," + diff --git a/akka-testkit/src/main/scala/akka/testkit/TestBarrier.scala b/akka-testkit/src/main/scala/akka/testkit/TestBarrier.scala index 929838a8b5..5d043f4b10 100644 --- a/akka-testkit/src/main/scala/akka/testkit/TestBarrier.scala +++ b/akka-testkit/src/main/scala/akka/testkit/TestBarrier.scala @@ -4,10 +4,10 @@ package akka.testkit -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import java.util.concurrent.{ CyclicBarrier, TimeUnit, TimeoutException } import akka.actor.ActorSystem -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.FiniteDuration class TestBarrierTimeoutException(message: String) extends RuntimeException(message) diff --git a/akka-testkit/src/main/scala/akka/testkit/TestEventListener.scala b/akka-testkit/src/main/scala/akka/testkit/TestEventListener.scala index 2987ede478..dfcd7b9dd2 100644 --- a/akka-testkit/src/main/scala/akka/testkit/TestEventListener.scala +++ b/akka-testkit/src/main/scala/akka/testkit/TestEventListener.scala @@ -6,15 +6,16 @@ package akka.testkit import language.existentials import scala.util.matching.Regex +import scala.collection.immutable +import scala.concurrent.duration.Duration +import scala.reflect.ClassTag import akka.actor.{ DeadLetter, ActorSystem, Terminated, UnhandledMessage } import akka.dispatch.{ SystemMessage, Terminate } import akka.event.Logging.{ Warning, LogEvent, InitializeLogger, Info, Error, Debug, LoggerInitialized } import akka.event.Logging -import java.lang.{ Iterable ⇒ JIterable } -import scala.collection.JavaConverters -import scala.concurrent.util.Duration -import scala.reflect.ClassTag import akka.actor.NoSerializationVerificationNeeded +import akka.japi.Util.immutableSeq +import java.lang.{ Iterable ⇒ JIterable } /** * Implementation helpers of the EventFilter facilities: send `Mute` @@ -38,22 +39,22 @@ sealed trait TestEvent */ object TestEvent { object Mute { - def apply(filter: EventFilter, filters: EventFilter*): Mute = new Mute(filter +: filters.toSeq) + def apply(filter: EventFilter, filters: EventFilter*): Mute = new Mute(filter +: filters.to[immutable.Seq]) } - case class Mute(filters: Seq[EventFilter]) extends TestEvent with NoSerializationVerificationNeeded { + case class Mute(filters: immutable.Seq[EventFilter]) extends TestEvent with NoSerializationVerificationNeeded { /** * Java API */ - def this(filters: JIterable[EventFilter]) = this(JavaConverters.iterableAsScalaIterableConverter(filters).asScala.toSeq) + def this(filters: JIterable[EventFilter]) = this(immutableSeq(filters)) } object UnMute { - def apply(filter: EventFilter, filters: EventFilter*): UnMute = new UnMute(filter +: filters.toSeq) + def apply(filter: EventFilter, filters: EventFilter*): UnMute = new UnMute(filter +: filters.to[immutable.Seq]) } - case class UnMute(filters: 
Seq[EventFilter]) extends TestEvent with NoSerializationVerificationNeeded { + case class UnMute(filters: immutable.Seq[EventFilter]) extends TestEvent with NoSerializationVerificationNeeded { /** * Java API */ - def this(filters: JIterable[EventFilter]) = this(JavaConverters.iterableAsScalaIterableConverter(filters).asScala.toSeq) + def this(filters: JIterable[EventFilter]) = this(immutableSeq(filters)) } } diff --git a/akka-testkit/src/main/scala/akka/testkit/TestFSMRef.scala b/akka-testkit/src/main/scala/akka/testkit/TestFSMRef.scala index 5d634de9ef..302942dc4c 100644 --- a/akka-testkit/src/main/scala/akka/testkit/TestFSMRef.scala +++ b/akka-testkit/src/main/scala/akka/testkit/TestFSMRef.scala @@ -5,9 +5,9 @@ package akka.testkit import akka.actor._ -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import akka.dispatch.DispatcherPrerequisites -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.FiniteDuration /** * This is a specialised form of the TestActorRef with support for querying and @@ -78,8 +78,18 @@ class TestFSMRef[S, D, T <: Actor]( /** * Proxy for FSM.timerActive_?. */ - def timerActive_?(name: String) = fsm.timerActive_?(name) + @deprecated("Use isTimerActive(name) instead.", "2.2") + def timerActive_?(name: String) = isTimerActive(name) + /** + * Proxy for FSM.isTimerActive. + */ + def isTimerActive(name: String) = fsm.isTimerActive(name) + + /** + * Proxy for FSM.isStateTimerActive. + */ + def isStateTimerActive = fsm.isStateTimerActive } object TestFSMRef { diff --git a/akka-testkit/src/main/scala/akka/testkit/TestKit.scala b/akka-testkit/src/main/scala/akka/testkit/TestKit.scala index 9838f62d2a..e81acb23a3 100644 --- a/akka-testkit/src/main/scala/akka/testkit/TestKit.scala +++ b/akka-testkit/src/main/scala/akka/testkit/TestKit.scala @@ -5,19 +5,18 @@ package akka.testkit import language.postfixOps +import scala.annotation.{ varargs, tailrec } +import scala.collection.immutable +import scala.concurrent.duration._ +import scala.reflect.ClassTag +import java.util.concurrent.{ BlockingDeque, LinkedBlockingDeque, TimeUnit, atomic } +import java.util.concurrent.atomic.AtomicInteger import akka.actor._ import akka.actor.Actor._ -import scala.concurrent.util.{ Duration, FiniteDuration } -import scala.concurrent.util.duration._ -import java.util.concurrent.{ BlockingDeque, LinkedBlockingDeque, TimeUnit, atomic } -import atomic.AtomicInteger -import scala.annotation.tailrec import akka.util.{ Timeout, BoxedType } -import scala.annotation.varargs -import scala.reflect.ClassTag object TestActor { - type Ignore = Option[PartialFunction[AnyRef, Boolean]] + type Ignore = Option[PartialFunction[Any, Boolean]] abstract class AutoPilot { def run(sender: ActorRef, msg: Any): AutoPilot @@ -139,7 +138,7 @@ trait TestKitBase { * Ignore all messages in the test actor for which the given partial * function returns true. */ - def ignoreMsg(f: PartialFunction[AnyRef, Boolean]) { testActor ! TestActor.SetIgnore(Some(f)) } + def ignoreMsg(f: PartialFunction[Any, Boolean]) { testActor ! TestActor.SetIgnore(Some(f)) } /** * Stop ignoring messages in the test actor. 
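[Editor's aside: illustrative sketch, not part of the commit. The hunk above widens Ignore and ignoreMsg from PartialFunction[AnyRef, Boolean] to PartialFunction[Any, Boolean], which is what lets the ignore filter match primitives such as a plain Int; the new TestProbeSpec case further down in this diff exercises exactly that. A hedged usage sketch, assuming Akka 2.1-era testkit APIs:]

    import akka.actor.ActorSystem
    import akka.testkit.TestKit

    new TestKit(ActorSystem("demo")) {
      ignoreMsg { case 42 ⇒ true } // the filter now accepts Any, so an Int can be ignored
      testActor ! 42               // swallowed by the ignore filter
      testActor ! "pigdog"         // still delivered
      expectMsg("pigdog")
      system.shutdown()
    }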
@@ -192,7 +191,7 @@ trait TestKitBase { def remainingOr(duration: FiniteDuration): FiniteDuration = end match { case x if x eq Duration.Undefined ⇒ duration case x if !x.isFinite ⇒ throw new IllegalArgumentException("`end` cannot be infinite") - case f: FiniteDuration ⇒ (end - now).asInstanceOf[FiniteDuration] // RK FIXME after next Scala milestone + case f: FiniteDuration ⇒ f - now } private def remainingOrDilated(max: Duration): FiniteDuration = max match { @@ -416,7 +415,7 @@ trait TestKitBase { /** * Same as `expectMsgAllOf(remaining, obj...)`, but correctly treating the timeFactor. */ - def expectMsgAllOf[T](obj: T*): Seq[T] = expectMsgAllOf_internal(remaining, obj: _*) + def expectMsgAllOf[T](obj: T*): immutable.Seq[T] = expectMsgAllOf_internal(remaining, obj: _*) /** * Receive a number of messages from the test actor matching the given @@ -431,19 +430,19 @@ trait TestKitBase { * expectMsgAllOf(1 second, Result1(), Result2()) * */ - def expectMsgAllOf[T](max: FiniteDuration, obj: T*): Seq[T] = expectMsgAllOf_internal(max.dilated, obj: _*) + def expectMsgAllOf[T](max: FiniteDuration, obj: T*): immutable.Seq[T] = expectMsgAllOf_internal(max.dilated, obj: _*) - private def expectMsgAllOf_internal[T](max: FiniteDuration, obj: T*): Seq[T] = { + private def expectMsgAllOf_internal[T](max: FiniteDuration, obj: T*): immutable.Seq[T] = { val recv = receiveN_internal(obj.size, max) obj foreach (x ⇒ assert(recv exists (x == _), "not found " + x)) recv foreach (x ⇒ assert(obj exists (x == _), "found unexpected " + x)) - recv.asInstanceOf[Seq[T]] + recv.asInstanceOf[immutable.Seq[T]] } /** * Same as `expectMsgAllClassOf(remaining, obj...)`, but correctly treating the timeFactor. */ - def expectMsgAllClassOf[T](obj: Class[_ <: T]*): Seq[T] = internalExpectMsgAllClassOf(remaining, obj: _*) + def expectMsgAllClassOf[T](obj: Class[_ <: T]*): immutable.Seq[T] = internalExpectMsgAllClassOf(remaining, obj: _*) /** * Receive a number of messages from the test actor matching the given @@ -453,19 +452,19 @@ trait TestKitBase { * Wait time is bounded by the given duration, with an AssertionFailure * being thrown in case of timeout. */ - def expectMsgAllClassOf[T](max: FiniteDuration, obj: Class[_ <: T]*): Seq[T] = internalExpectMsgAllClassOf(max.dilated, obj: _*) + def expectMsgAllClassOf[T](max: FiniteDuration, obj: Class[_ <: T]*): immutable.Seq[T] = internalExpectMsgAllClassOf(max.dilated, obj: _*) - private def internalExpectMsgAllClassOf[T](max: FiniteDuration, obj: Class[_ <: T]*): Seq[T] = { + private def internalExpectMsgAllClassOf[T](max: FiniteDuration, obj: Class[_ <: T]*): immutable.Seq[T] = { val recv = receiveN_internal(obj.size, max) obj foreach (x ⇒ assert(recv exists (_.getClass eq BoxedType(x)), "not found " + x)) recv foreach (x ⇒ assert(obj exists (c ⇒ BoxedType(c) eq x.getClass), "found non-matching object " + x)) - recv.asInstanceOf[Seq[T]] + recv.asInstanceOf[immutable.Seq[T]] } /** * Same as `expectMsgAllConformingOf(remaining, obj...)`, but correctly treating the timeFactor. */ - def expectMsgAllConformingOf[T](obj: Class[_ <: T]*): Seq[T] = internalExpectMsgAllConformingOf(remaining, obj: _*) + def expectMsgAllConformingOf[T](obj: Class[_ <: T]*): immutable.Seq[T] = internalExpectMsgAllConformingOf(remaining, obj: _*) /** * Receive a number of messages from the test actor matching the given @@ -478,13 +477,13 @@ trait TestKitBase { * Beware that one object may satisfy all given class constraints, which * may be counter-intuitive. 
*/ - def expectMsgAllConformingOf[T](max: FiniteDuration, obj: Class[_ <: T]*): Seq[T] = internalExpectMsgAllConformingOf(max.dilated, obj: _*) + def expectMsgAllConformingOf[T](max: FiniteDuration, obj: Class[_ <: T]*): immutable.Seq[T] = internalExpectMsgAllConformingOf(max.dilated, obj: _*) - private def internalExpectMsgAllConformingOf[T](max: FiniteDuration, obj: Class[_ <: T]*): Seq[T] = { + private def internalExpectMsgAllConformingOf[T](max: FiniteDuration, obj: Class[_ <: T]*): immutable.Seq[T] = { val recv = receiveN_internal(obj.size, max) obj foreach (x ⇒ assert(recv exists (BoxedType(x) isInstance _), "not found " + x)) recv foreach (x ⇒ assert(obj exists (c ⇒ BoxedType(c) isInstance x), "found non-matching object " + x)) - recv.asInstanceOf[Seq[T]] + recv.asInstanceOf[immutable.Seq[T]] } /** @@ -521,7 +520,7 @@ trait TestKitBase { * assert(series == (1 to 7).toList) * */ - def receiveWhile[T](max: Duration = Duration.Undefined, idle: Duration = Duration.Inf, messages: Int = Int.MaxValue)(f: PartialFunction[AnyRef, T]): Seq[T] = { + def receiveWhile[T](max: Duration = Duration.Undefined, idle: Duration = Duration.Inf, messages: Int = Int.MaxValue)(f: PartialFunction[AnyRef, T]): immutable.Seq[T] = { val stop = now + remainingOrDilated(max) var msg: Message = NullMessage @@ -554,14 +553,14 @@ trait TestKitBase { * Same as `receiveN(n, remaining)` but correctly taking into account * Duration.timeFactor. */ - def receiveN(n: Int): Seq[AnyRef] = receiveN_internal(n, remaining) + def receiveN(n: Int): immutable.Seq[AnyRef] = receiveN_internal(n, remaining) /** * Receive N messages in a row before the given deadline. */ - def receiveN(n: Int, max: FiniteDuration): Seq[AnyRef] = receiveN_internal(n, max.dilated) + def receiveN(n: Int, max: FiniteDuration): immutable.Seq[AnyRef] = receiveN_internal(n, max.dilated) - private def receiveN_internal(n: Int, max: Duration): Seq[AnyRef] = { + private def receiveN_internal(n: Int, max: Duration): immutable.Seq[AnyRef] = { val stop = max + now for { x ← 1 to n } yield { val timeout = stop - now diff --git a/akka-testkit/src/main/scala/akka/testkit/TestKitExtension.scala b/akka-testkit/src/main/scala/akka/testkit/TestKitExtension.scala index 50dc392a09..33102e09a6 100644 --- a/akka-testkit/src/main/scala/akka/testkit/TestKitExtension.scala +++ b/akka-testkit/src/main/scala/akka/testkit/TestKitExtension.scala @@ -4,11 +4,11 @@ package akka.testkit import com.typesafe.config.Config -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import akka.util.Timeout import java.util.concurrent.TimeUnit.MILLISECONDS import akka.actor.{ ExtensionId, ActorSystem, Extension, ExtendedActorSystem } -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.FiniteDuration object TestKitExtension extends ExtensionId[TestKitSettings] { override def get(system: ActorSystem): TestKitSettings = super.get(system) diff --git a/akka-testkit/src/main/scala/akka/testkit/TestLatch.scala b/akka-testkit/src/main/scala/akka/testkit/TestLatch.scala index cedf351551..2bb7a8f4b5 100644 --- a/akka-testkit/src/main/scala/akka/testkit/TestLatch.scala +++ b/akka-testkit/src/main/scala/akka/testkit/TestLatch.scala @@ -4,11 +4,11 @@ package akka.testkit -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import akka.actor.ActorSystem import scala.concurrent.{ Await, CanAwait, Awaitable } import java.util.concurrent.{ TimeoutException, CountDownLatch, TimeUnit } -import scala.concurrent.util.FiniteDuration 
+import scala.concurrent.duration.FiniteDuration /** * A count down latch wrapper for use in testing. diff --git a/akka-testkit/src/main/scala/akka/testkit/package.scala b/akka-testkit/src/main/scala/akka/testkit/package.scala index 247cf9e17f..ff8926154e 100644 --- a/akka-testkit/src/main/scala/akka/testkit/package.scala +++ b/akka-testkit/src/main/scala/akka/testkit/package.scala @@ -3,15 +3,17 @@ package akka import language.implicitConversions import akka.actor.ActorSystem -import scala.concurrent.util.{ Duration, FiniteDuration } -import java.util.concurrent.TimeUnit.MILLISECONDS +import scala.concurrent.duration.{ Duration, FiniteDuration } import scala.reflect.ClassTag +import scala.collection.immutable +import java.util.concurrent.TimeUnit.MILLISECONDS package object testkit { def filterEvents[T](eventFilters: Iterable[EventFilter])(block: ⇒ T)(implicit system: ActorSystem): T = { def now = System.currentTimeMillis - system.eventStream.publish(TestEvent.Mute(eventFilters.toSeq)) + system.eventStream.publish(TestEvent.Mute(eventFilters.to[immutable.Seq])) + try { val result = block @@ -23,7 +25,7 @@ package object testkit { result } finally { - system.eventStream.publish(TestEvent.UnMute(eventFilters.toSeq)) + system.eventStream.publish(TestEvent.UnMute(eventFilters.to[immutable.Seq])) } } @@ -35,7 +37,7 @@ package object testkit { * Scala API. Scale timeouts (durations) during tests with the configured * 'akka.test.timefactor'. * Implicit conversion to add dilated function to Duration. - * import scala.concurrent.util.duration._ + * import scala.concurrent.duration._ * import akka.testkit._ * 10.milliseconds.dilated * diff --git a/akka-testkit/src/test/scala/akka/testkit/AkkaSpec.scala b/akka-testkit/src/test/scala/akka/testkit/AkkaSpec.scala index bd4de8b906..d4844087b7 100644 --- a/akka-testkit/src/test/scala/akka/testkit/AkkaSpec.scala +++ b/akka-testkit/src/test/scala/akka/testkit/AkkaSpec.scala @@ -9,7 +9,7 @@ import org.scalatest.{ WordSpec, BeforeAndAfterAll, Tag } import org.scalatest.matchers.MustMatchers import akka.actor.{ Actor, Props, ActorSystem, PoisonPill, DeadLetter, ActorSystemImpl } import akka.event.{ Logging, LoggingAdapter } -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import scala.concurrent.{ Await, Future } import com.typesafe.config.{ Config, ConfigFactory } import java.util.concurrent.TimeoutException diff --git a/akka-testkit/src/test/scala/akka/testkit/AkkaSpecSpec.scala b/akka-testkit/src/test/scala/akka/testkit/AkkaSpecSpec.scala index c8eee623f7..baebc2e6d4 100644 --- a/akka-testkit/src/test/scala/akka/testkit/AkkaSpecSpec.scala +++ b/akka-testkit/src/test/scala/akka/testkit/AkkaSpecSpec.scala @@ -11,7 +11,7 @@ import org.scalatest.matchers.MustMatchers import akka.actor._ import com.typesafe.config.ConfigFactory import concurrent.Await -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.actor.DeadLetter import akka.pattern.ask diff --git a/akka-testkit/src/test/scala/akka/testkit/TestActorRefSpec.scala b/akka-testkit/src/test/scala/akka/testkit/TestActorRefSpec.scala index 0ee1923359..f847c2b48a 100644 --- a/akka-testkit/src/test/scala/akka/testkit/TestActorRefSpec.scala +++ b/akka-testkit/src/test/scala/akka/testkit/TestActorRefSpec.scala @@ -9,11 +9,10 @@ import org.scalatest.{ BeforeAndAfterEach, WordSpec } import akka.actor._ import akka.event.Logging.Warning import scala.concurrent.{ Future, Promise, Await } -import scala.concurrent.util.duration._ +import 
scala.concurrent.duration._ import akka.actor.ActorSystem import akka.pattern.ask import akka.dispatch.Dispatcher -import scala.concurrent.util.Duration /** * Test whether TestActorRef behaves as an ActorRef should, besides its own spec. diff --git a/akka-testkit/src/test/scala/akka/testkit/TestFSMRefSpec.scala b/akka-testkit/src/test/scala/akka/testkit/TestFSMRefSpec.scala index 256273bc1f..41efe55e6d 100644 --- a/akka-testkit/src/test/scala/akka/testkit/TestFSMRefSpec.scala +++ b/akka-testkit/src/test/scala/akka/testkit/TestFSMRefSpec.scala @@ -9,7 +9,7 @@ import language.postfixOps import org.scalatest.matchers.MustMatchers import org.scalatest.{ BeforeAndAfterEach, WordSpec } import akka.actor._ -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) class TestFSMRefSpec extends AkkaSpec { @@ -51,11 +51,11 @@ class TestFSMRefSpec extends AkkaSpec { case x ⇒ stay } }, "test-fsm-ref-2") - fsm.timerActive_?("test") must be(false) + fsm.isTimerActive("test") must be(false) fsm.setTimer("test", 12, 10 millis, true) - fsm.timerActive_?("test") must be(true) + fsm.isTimerActive("test") must be(true) fsm.cancelTimer("test") - fsm.timerActive_?("test") must be(false) + fsm.isTimerActive("test") must be(false) } } } diff --git a/akka-testkit/src/test/scala/akka/testkit/TestProbeSpec.scala b/akka-testkit/src/test/scala/akka/testkit/TestProbeSpec.scala index 10c39cdc05..72e5b3a8c0 100644 --- a/akka-testkit/src/test/scala/akka/testkit/TestProbeSpec.scala +++ b/akka-testkit/src/test/scala/akka/testkit/TestProbeSpec.scala @@ -7,7 +7,7 @@ import org.scalatest.matchers.MustMatchers import org.scalatest.{ BeforeAndAfterEach, WordSpec } import akka.actor._ import scala.concurrent.{ Future, Await } -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ import akka.pattern.ask @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) @@ -78,6 +78,13 @@ class TestProbeSpec extends AkkaSpec with DefaultTimeout { expectMsgAllClassOf(5 seconds, classOf[Int]) must be(Seq(42)) } + "be able to ignore primitive types" in { + ignoreMsg { case 42 ⇒ true } + testActor ! 42 + testActor ! 
"pigdog" + expectMsg("pigdog") + } + } } diff --git a/akka-testkit/src/test/scala/akka/testkit/TestTimeSpec.scala b/akka-testkit/src/test/scala/akka/testkit/TestTimeSpec.scala index aac0f490b0..4ca3969ab0 100644 --- a/akka-testkit/src/test/scala/akka/testkit/TestTimeSpec.scala +++ b/akka-testkit/src/test/scala/akka/testkit/TestTimeSpec.scala @@ -2,7 +2,7 @@ package akka.testkit import org.scalatest.matchers.MustMatchers import org.scalatest.{ BeforeAndAfterEach, WordSpec } -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import com.typesafe.config.Config @org.junit.runner.RunWith(classOf[org.scalatest.junit.JUnitRunner]) diff --git a/akka-transactor/src/main/scala/akka/transactor/TransactorExtension.scala b/akka-transactor/src/main/scala/akka/transactor/TransactorExtension.scala index 6b4a0157bc..2225010fd8 100644 --- a/akka-transactor/src/main/scala/akka/transactor/TransactorExtension.scala +++ b/akka-transactor/src/main/scala/akka/transactor/TransactorExtension.scala @@ -7,7 +7,7 @@ import akka.actor.{ ActorSystem, ExtensionId, ExtensionIdProvider, ExtendedActor import akka.actor.Extension import com.typesafe.config.Config import akka.util.Timeout -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import java.util.concurrent.TimeUnit.MILLISECONDS /** diff --git a/akka-transactor/src/main/scala/akka/transactor/UntypedTransactor.scala b/akka-transactor/src/main/scala/akka/transactor/UntypedTransactor.scala index 98899e0a13..69fcac66ad 100644 --- a/akka-transactor/src/main/scala/akka/transactor/UntypedTransactor.scala +++ b/akka-transactor/src/main/scala/akka/transactor/UntypedTransactor.scala @@ -5,13 +5,15 @@ package akka.transactor import akka.actor.{ UntypedActor, ActorRef } -import scala.collection.JavaConversions._ import java.util.{ Set ⇒ JSet } +import java.util.Collections.{ emptySet, singleton ⇒ singletonSet } /** * An UntypedActor version of transactor for using from Java. */ abstract class UntypedTransactor extends UntypedActor { + import scala.collection.JavaConverters.asScalaSetConverter + private val settings = TransactorExtension(context.system) /** @@ -21,8 +23,7 @@ abstract class UntypedTransactor extends UntypedActor { final def onReceive(message: Any) { message match { case coordinated @ Coordinated(message) ⇒ { - val others = coordinate(message) - for (sendTo ← others) { + for (sendTo ← coordinate(message).asScala) { sendTo.actor ! coordinated(sendTo.message.getOrElse(message)) } before(message) @@ -49,19 +50,19 @@ abstract class UntypedTransactor extends UntypedActor { /** * Empty set of transactors to send to. */ - def nobody: JSet[SendTo] = Set[SendTo]() + def nobody: JSet[SendTo] = emptySet() /** * For including one other actor in this coordinated transaction and sending * them the same message as received. Use as the result in `coordinated`. */ - def include(actor: ActorRef): JSet[SendTo] = Set(SendTo(actor)) + def include(actor: ActorRef): JSet[SendTo] = singletonSet(SendTo(actor)) /** * For including one other actor in this coordinated transaction and specifying the * message to send. Use as the result in `coordinated`. 
*/ - def include(actor: ActorRef, message: Any): JSet[SendTo] = Set(SendTo(actor, Some(message))) + def include(actor: ActorRef, message: Any): JSet[SendTo] = singletonSet(SendTo(actor, Some(message))) /** * For including another actor in this coordinated transaction and sending diff --git a/akka-transactor/src/test/java/akka/transactor/UntypedCoordinatedIncrementTest.java b/akka-transactor/src/test/java/akka/transactor/UntypedCoordinatedIncrementTest.java index 5aecd341e0..f73a659c46 100644 --- a/akka-transactor/src/test/java/akka/transactor/UntypedCoordinatedIncrementTest.java +++ b/akka-transactor/src/test/java/akka/transactor/UntypedCoordinatedIncrementTest.java @@ -25,14 +25,14 @@ import akka.testkit.ErrorFilter; import akka.testkit.TestEvent; import akka.util.Timeout; -import java.util.Arrays; +import static akka.japi.Util.immutableSeq; import java.util.ArrayList; import java.util.List; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import scala.collection.JavaConverters; -import scala.collection.Seq; +import scala.collection.immutable.Seq; public class UntypedCoordinatedIncrementTest { private static ActorSystem system; @@ -110,6 +110,6 @@ public class UntypedCoordinatedIncrementTest { } public Seq seq(A... args) { - return JavaConverters.collectionAsScalaIterableConverter(Arrays.asList(args)).asScala().toSeq(); + return immutableSeq(args); } } diff --git a/akka-transactor/src/test/java/akka/transactor/UntypedTransactorTest.java b/akka-transactor/src/test/java/akka/transactor/UntypedTransactorTest.java index 5aae61d9c1..ade645dfd8 100644 --- a/akka-transactor/src/test/java/akka/transactor/UntypedTransactorTest.java +++ b/akka-transactor/src/test/java/akka/transactor/UntypedTransactorTest.java @@ -25,14 +25,14 @@ import akka.testkit.ErrorFilter; import akka.testkit.TestEvent; import akka.util.Timeout; -import java.util.Arrays; +import static akka.japi.Util.immutableSeq; import java.util.ArrayList; import java.util.List; import java.util.concurrent.CountDownLatch; import java.util.concurrent.TimeUnit; import scala.collection.JavaConverters; -import scala.collection.Seq; +import scala.collection.immutable.Seq; public class UntypedTransactorTest { @@ -118,8 +118,6 @@ public class UntypedTransactorTest { } public Seq seq(A... 
args) { - return JavaConverters - .collectionAsScalaIterableConverter(Arrays.asList(args)).asScala() - .toSeq(); + return immutableSeq(args); } } diff --git a/akka-transactor/src/test/scala/akka/transactor/CoordinatedIncrementSpec.scala b/akka-transactor/src/test/scala/akka/transactor/CoordinatedIncrementSpec.scala index e4724cf8a3..6dad1079a3 100644 --- a/akka-transactor/src/test/scala/akka/transactor/CoordinatedIncrementSpec.scala +++ b/akka-transactor/src/test/scala/akka/transactor/CoordinatedIncrementSpec.scala @@ -6,12 +6,13 @@ package akka.transactor import org.scalatest.BeforeAndAfterAll -import akka.actor._ import scala.concurrent.Await -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ +import scala.concurrent.stm._ +import scala.collection.immutable +import akka.actor._ import akka.util.Timeout import akka.testkit._ -import scala.concurrent.stm._ import akka.pattern.{ AskTimeoutException, ask } object CoordinatedIncrement { @@ -30,7 +31,7 @@ object CoordinatedIncrement { } """ - case class Increment(friends: Seq[ActorRef]) + case class Increment(friends: immutable.Seq[ActorRef]) case object GetCount class Counter(name: String) extends Actor { diff --git a/akka-transactor/src/test/scala/akka/transactor/FickleFriendsSpec.scala b/akka-transactor/src/test/scala/akka/transactor/FickleFriendsSpec.scala index 4e1219324e..eb75247164 100644 --- a/akka-transactor/src/test/scala/akka/transactor/FickleFriendsSpec.scala +++ b/akka-transactor/src/test/scala/akka/transactor/FickleFriendsSpec.scala @@ -8,21 +8,22 @@ import language.postfixOps import org.scalatest.BeforeAndAfterAll -import akka.actor._ import scala.concurrent.Await -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ +import scala.concurrent.stm._ +import scala.collection.immutable +import scala.util.Random.{ nextInt ⇒ random } +import scala.util.control.NonFatal +import akka.actor._ import akka.testkit._ import akka.testkit.TestEvent.Mute -import scala.concurrent.stm._ -import scala.util.Random.{ nextInt ⇒ random } import java.util.concurrent.CountDownLatch import akka.pattern.{ AskTimeoutException, ask } import akka.util.Timeout -import scala.util.control.NonFatal object FickleFriends { - case class FriendlyIncrement(friends: Seq[ActorRef], timeout: Timeout, latch: CountDownLatch) - case class Increment(friends: Seq[ActorRef]) + case class FriendlyIncrement(friends: immutable.Seq[ActorRef], timeout: Timeout, latch: CountDownLatch) + case class Increment(friends: immutable.Seq[ActorRef]) case object GetCount /** @@ -120,7 +121,7 @@ class FickleFriendsSpec extends AkkaSpec with BeforeAndAfterAll { "Coordinated fickle friends" should { "eventually succeed to increment all counters by one" in { - val ignoreExceptions = Seq( + val ignoreExceptions = immutable.Seq( EventFilter[ExpectedFailureException](), EventFilter[CoordinatedTransactionException](), EventFilter[AskTimeoutException]()) diff --git a/akka-transactor/src/test/scala/akka/transactor/TransactorSpec.scala b/akka-transactor/src/test/scala/akka/transactor/TransactorSpec.scala index cb4d2d633b..0de8a13d97 100644 --- a/akka-transactor/src/test/scala/akka/transactor/TransactorSpec.scala +++ b/akka-transactor/src/test/scala/akka/transactor/TransactorSpec.scala @@ -7,15 +7,16 @@ package akka.transactor import language.postfixOps import akka.actor._ +import scala.collection.immutable import scala.concurrent.Await -import scala.concurrent.util.duration._ +import scala.concurrent.duration._ +import scala.concurrent.stm._ import 
akka.util.Timeout import akka.testkit._ -import scala.concurrent.stm._ import akka.pattern.{ AskTimeoutException, ask } object TransactorIncrement { - case class Increment(friends: Seq[ActorRef], latch: TestLatch) + case class Increment(friends: immutable.Seq[ActorRef], latch: TestLatch) case object GetCount class Counter(name: String) extends Transactor { diff --git a/akka-zeromq/src/main/scala/akka/zeromq/ConcurrentSocketActor.scala b/akka-zeromq/src/main/scala/akka/zeromq/ConcurrentSocketActor.scala index 6fc349b798..a9efa56c1e 100644 --- a/akka-zeromq/src/main/scala/akka/zeromq/ConcurrentSocketActor.scala +++ b/akka-zeromq/src/main/scala/akka/zeromq/ConcurrentSocketActor.scala @@ -6,13 +6,14 @@ package akka.zeromq import org.zeromq.ZMQ.{ Socket, Poller } import org.zeromq.{ ZMQ ⇒ JZMQ } import akka.actor._ -import scala.concurrent.{ Promise, Future } -import scala.concurrent.util.Duration +import scala.collection.immutable import scala.annotation.tailrec +import scala.concurrent.{ Promise, Future } +import scala.concurrent.duration.Duration import scala.collection.mutable.ListBuffer +import scala.util.control.NonFatal import akka.event.Logging import java.util.concurrent.TimeUnit -import scala.util.control.NonFatal private[zeromq] object ConcurrentSocketActor { private sealed trait PollMsg @@ -25,7 +26,7 @@ private[zeromq] object ConcurrentSocketActor { private val DefaultContext = Context() } -private[zeromq] class ConcurrentSocketActor(params: Seq[SocketOption]) extends Actor { +private[zeromq] class ConcurrentSocketActor(params: immutable.Seq[SocketOption]) extends Actor { import ConcurrentSocketActor._ private val noBytes = Array[Byte]() @@ -40,7 +41,7 @@ private[zeromq] class ConcurrentSocketActor(params: Seq[SocketOption]) extends A private val socket: Socket = zmqContext.socket(socketType) private val poller: Poller = zmqContext.poller - private val pendingSends = new ListBuffer[Seq[Frame]] + private val pendingSends = new ListBuffer[immutable.Seq[Frame]] def receive = { case m: PollMsg ⇒ doPoll(m) @@ -151,7 +152,7 @@ private[zeromq] class ConcurrentSocketActor(params: Seq[SocketOption]) extends A } } finally notifyListener(Closed) - @tailrec private def flushMessage(i: Seq[Frame]): Boolean = + @tailrec private def flushMessage(i: immutable.Seq[Frame]): Boolean = if (i.isEmpty) true else { @@ -198,7 +199,7 @@ private[zeromq] class ConcurrentSocketActor(params: Seq[SocketOption]) extends A case frames ⇒ notifyListener(deserializer(frames)); doPoll(mode, togo - 1) } - @tailrec private def receiveMessage(mode: PollMsg, currentFrames: Vector[Frame] = Vector.empty): Seq[Frame] = + @tailrec private def receiveMessage(mode: PollMsg, currentFrames: Vector[Frame] = Vector.empty): immutable.Seq[Frame] = if (mode == PollCareful && (poller.poll(0) <= 0)) { if (currentFrames.isEmpty) currentFrames else throw new IllegalStateException("Received partial transmission!") } else { diff --git a/akka-zeromq/src/main/scala/akka/zeromq/SocketOption.scala b/akka-zeromq/src/main/scala/akka/zeromq/SocketOption.scala index ea7fb82d07..b70c245327 100644 --- a/akka-zeromq/src/main/scala/akka/zeromq/SocketOption.scala +++ b/akka-zeromq/src/main/scala/akka/zeromq/SocketOption.scala @@ -4,12 +4,11 @@ package akka.zeromq import com.google.protobuf.Message -import org.zeromq.{ ZMQ ⇒ JZMQ } import akka.actor.ActorRef -import scala.concurrent.util.duration._ -import scala.concurrent.util.Duration +import scala.concurrent.duration._ +import scala.collection.immutable +import org.zeromq.{ ZMQ ⇒ JZMQ } import 
org.zeromq.ZMQ.{ Poller, Socket } -import scala.concurrent.util.FiniteDuration /** * Marker trait representing request messages for zeromq @@ -38,7 +37,7 @@ sealed trait SocketConnectOption extends SocketOption { * A base trait for pubsub options for the ZeroMQ socket */ sealed trait PubSubOption extends SocketOption { - def payload: Seq[Byte] + def payload: immutable.Seq[Byte] } /** @@ -81,7 +80,7 @@ class Context(numIoThreads: Int) extends SocketMeta { * A base trait for message deserializers */ trait Deserializer extends SocketOption { - def apply(frames: Seq[Frame]): Any + def apply(frames: immutable.Seq[Frame]): Any } /** @@ -174,12 +173,12 @@ case class Bind(endpoint: String) extends SocketConnectOption * * @param payload the topic to subscribe to */ -case class Subscribe(payload: Seq[Byte]) extends PubSubOption { - def this(topic: String) = this(topic.getBytes("UTF-8")) +case class Subscribe(payload: immutable.Seq[Byte]) extends PubSubOption { + def this(topic: String) = this(topic.getBytes("UTF-8").to[immutable.Seq]) } object Subscribe { def apply(topic: String): Subscribe = new Subscribe(topic) - val all = Subscribe(Seq.empty) + val all = Subscribe("") } /** @@ -191,8 +190,8 @@ object Subscribe { * * @param payload */ -case class Unsubscribe(payload: Seq[Byte]) extends PubSubOption { - def this(topic: String) = this(topic.getBytes("UTF-8")) +case class Unsubscribe(payload: immutable.Seq[Byte]) extends PubSubOption { + def this(topic: String) = this(topic.getBytes("UTF-8").to[immutable.Seq]) } object Unsubscribe { def apply(topic: String): Unsubscribe = new Unsubscribe(topic) @@ -202,17 +201,17 @@ object Unsubscribe { * Send a message over the zeromq socket * @param frames */ -case class Send(frames: Seq[Frame]) extends Request +case class Send(frames: immutable.Seq[Frame]) extends Request /** * A message received over the zeromq socket * @param frames */ -case class ZMQMessage(frames: Seq[Frame]) { +case class ZMQMessage(frames: immutable.Seq[Frame]) { - def this(frame: Frame) = this(Seq(frame)) - def this(frame1: Frame, frame2: Frame) = this(Seq(frame1, frame2)) - def this(frameArray: Array[Frame]) = this(frameArray.toSeq) + def this(frame: Frame) = this(List(frame)) + def this(frame1: Frame, frame2: Frame) = this(List(frame1, frame2)) + def this(frameArray: Array[Frame]) = this(frameArray.to[immutable.Seq]) /** * Convert the bytes in the first frame to a String, using specified charset. 
@@ -226,8 +225,9 @@ case class ZMQMessage(frames: Seq[Frame]) { def payload(frameIndex: Int): Array[Byte] = frames(frameIndex).payload.toArray } object ZMQMessage { - def apply(bytes: Array[Byte]): ZMQMessage = ZMQMessage(Seq(Frame(bytes))) - def apply(message: Message): ZMQMessage = ZMQMessage(message.toByteArray) + def apply(bytes: Array[Byte]): ZMQMessage = new ZMQMessage(List(Frame(bytes))) + def apply(frames: Frame*): ZMQMessage = new ZMQMessage(frames.to[immutable.Seq]) + def apply(message: Message): ZMQMessage = apply(message.toByteArray) } /** diff --git a/akka-zeromq/src/main/scala/akka/zeromq/ZMQMessageDeserializer.scala b/akka-zeromq/src/main/scala/akka/zeromq/ZMQMessageDeserializer.scala index 2d41424e88..d0141bf515 100644 --- a/akka-zeromq/src/main/scala/akka/zeromq/ZMQMessageDeserializer.scala +++ b/akka-zeromq/src/main/scala/akka/zeromq/ZMQMessageDeserializer.scala @@ -3,7 +3,10 @@ */ package akka.zeromq +import scala.collection.immutable + object Frame { + def apply(bytes: Array[Byte]): Frame = new Frame(bytes) def apply(text: String): Frame = new Frame(text) } @@ -11,8 +14,8 @@ object Frame { * A single message frame of a zeromq message * @param payload */ -case class Frame(payload: Seq[Byte]) { - def this(bytes: Array[Byte]) = this(bytes.toSeq) +case class Frame(payload: immutable.Seq[Byte]) { + def this(bytes: Array[Byte]) = this(bytes.to[immutable.Seq]) def this(text: String) = this(text.getBytes("UTF-8")) } @@ -20,5 +23,5 @@ case class Frame(payload: Seq[Byte]) { * Deserializes ZeroMQ messages into an immutable sequence of frames */ class ZMQMessageDeserializer extends Deserializer { - def apply(frames: Seq[Frame]): ZMQMessage = ZMQMessage(frames) + def apply(frames: immutable.Seq[Frame]): ZMQMessage = ZMQMessage(frames) } diff --git a/akka-zeromq/src/main/scala/akka/zeromq/ZeroMQExtension.scala b/akka-zeromq/src/main/scala/akka/zeromq/ZeroMQExtension.scala index 32a3326076..bc40ea580b 100644 --- a/akka-zeromq/src/main/scala/akka/zeromq/ZeroMQExtension.scala +++ b/akka-zeromq/src/main/scala/akka/zeromq/ZeroMQExtension.scala @@ -7,12 +7,13 @@ import org.zeromq.{ ZMQ ⇒ JZMQ } import org.zeromq.ZMQ.Poller import akka.actor._ import akka.pattern.ask +import scala.collection.immutable import scala.concurrent.Await -import scala.concurrent.util.Duration +import scala.concurrent.duration.Duration import java.util.concurrent.TimeUnit import akka.util.Timeout import org.zeromq.ZMQException -import scala.concurrent.util.FiniteDuration +import scala.concurrent.duration.FiniteDuration /** * A Model to represent a version of the zeromq library @@ -66,7 +67,8 @@ class ZeroMQExtension(system: ActorSystem) extends Extension { case s: SocketType.ZMQSocketType ⇒ true case _ ⇒ false }, "A socket type is required") - Props(new ConcurrentSocketActor(socketParameters)).withDispatcher("akka.zeromq.socket-dispatcher") + val params = socketParameters.to[immutable.Seq] + Props(new ConcurrentSocketActor(params)).withDispatcher("akka.zeromq.socket-dispatcher") } /** diff --git a/akka-zeromq/src/test/scala/akka/zeromq/ConcurrentSocketActorSpec.scala b/akka-zeromq/src/test/scala/akka/zeromq/ConcurrentSocketActorSpec.scala index 3226b874a1..6feaffd6d6 100644 --- a/akka-zeromq/src/test/scala/akka/zeromq/ConcurrentSocketActorSpec.scala +++ b/akka-zeromq/src/test/scala/akka/zeromq/ConcurrentSocketActorSpec.scala @@ -7,7 +7,7 @@ import language.postfixOps import org.scalatest.matchers.MustMatchers import akka.testkit.{ TestProbe, DefaultTimeout, AkkaSpec } -import scala.concurrent.util.duration._ 
+import scala.concurrent.duration._ import akka.actor.{ Cancellable, Actor, Props, ActorRef } import akka.util.Timeout @@ -51,7 +51,7 @@ class ConcurrentSocketActorSpec extends AkkaSpec { val msgGenerator = system.scheduler.schedule(100 millis, 10 millis, new Runnable { var number = 0 def run() { - publisher ! ZMQMessage(Seq(Frame(number.toString.getBytes), Frame(Seq()))) + publisher ! ZMQMessage(Frame(number.toString), Frame(Nil)) number += 1 } }) @@ -88,8 +88,8 @@ class ConcurrentSocketActorSpec extends AkkaSpec { try { replierProbe.expectMsg(Connecting) - val request = ZMQMessage(Seq(Frame("Request"))) - val reply = ZMQMessage(Seq(Frame("Reply"))) + val request = ZMQMessage(Frame("Request")) + val reply = ZMQMessage(Frame("Reply")) requester ! request replierProbe.expectMsg(request) @@ -112,7 +112,7 @@ class ConcurrentSocketActorSpec extends AkkaSpec { try { pullerProbe.expectMsg(Connecting) - val message = ZMQMessage(Seq(Frame("Pushed message"))) + val message = ZMQMessage(Frame("Pushed message")) pusher ! message pullerProbe.expectMsg(message) diff --git a/project/AkkaBuild.scala b/project/AkkaBuild.scala index adf1b3c9d6..d71bb7ee17 100644 --- a/project/AkkaBuild.scala +++ b/project/AkkaBuild.scala @@ -15,7 +15,7 @@ import com.typesafe.tools.mima.plugin.MimaPlugin.mimaDefaultSettings import com.typesafe.tools.mima.plugin.MimaKeys.previousArtifact import com.typesafe.sbt.SbtSite.site import com.typesafe.sbt.site.SphinxSupport -import com.typesafe.sbt.site.SphinxSupport.{ enableOutput, generatePdf, sphinxInputs, sphinxPackages, Sphinx } +import com.typesafe.sbt.site.SphinxSupport.{ enableOutput, generatePdf, generatedPdf, sphinxInputs, sphinxPackages, Sphinx } import com.typesafe.sbt.preprocess.Preprocess.{ preprocess, preprocessExts, preprocessVars, simplePreprocess } import ls.Plugin.{ lsSettings, LsKeys } import java.lang.Boolean.getBoolean @@ -28,8 +28,10 @@ object AkkaBuild extends Build { lazy val buildSettings = Seq( organization := "com.typesafe.akka", - version := "2.1-SNAPSHOT", - scalaVersion := System.getProperty("akka.scalaVersion", "2.10.0-M7") + version := "2.2-SNAPSHOT", + // FIXME: use 2.10.0 for final + // Also change ScalaVersion in akka-sbt-plugin/sample/project/Build.scala + scalaVersion := System.getProperty("akka.scalaVersion", "2.10.0-RC3") ) lazy val akka = Project( @@ -41,17 +43,17 @@ object AkkaBuild extends Build { parallelExecution in GlobalScope := System.getProperty("akka.parallelExecution", "false").toBoolean, Publish.defaultPublishTo in ThisBuild <<= crossTarget / "repository", Unidoc.unidocExclude := Seq(samples.id), - Dist.distExclude := Seq(actorTests.id, akkaSbtPlugin.id, docs.id, samples.id), + Dist.distExclude := Seq(actorTests.id, akkaSbtPlugin.id, docs.id, samples.id, osgi.id, osgiAries.id), initialCommands in ThisBuild := """|import language.postfixOps |import akka.actor._ |import ActorDSL._ |import scala.concurrent._ |import com.typesafe.config.ConfigFactory - |import scala.concurrent.util.duration._ + |import scala.concurrent.duration._ |import akka.util.Timeout |val config = ConfigFactory.parseString("akka.stdout-loglevel=INFO,akka.loglevel=DEBUG") - |val remoteConfig = ConfigFactory.parseString("akka.remote.netty{port=0,use-dispatcher-for-io=akka.actor.default-dispatcher,execution-pool-size=0},akka.actor.provider=RemoteActorRefProvider").withFallback(config) + |val remoteConfig = 
ConfigFactory.parseString("akka.remote.netty{port=0,use-dispatcher-for-io=akka.actor.default-dispatcher,execution-pool-size=0},akka.actor.provider=akka.remote.RemoteActorRefProvider").withFallback(config) |var system: ActorSystem = null |implicit def _system = system |def startSystem(remoting: Boolean = false) { system = ActorSystem("repl", if(remoting) remoteConfig else config); println("don’t forget to system.shutdown()!") } @@ -62,10 +64,10 @@ object AkkaBuild extends Build { // generate online version of docs sphinxInputs in Sphinx <<= sphinxInputs in Sphinx in LocalProject(docs.id) map { inputs => inputs.copy(tags = inputs.tags :+ "online") }, // don't regenerate the pdf, just reuse the akka-docs version - generatePdf in Sphinx <<= generatePdf in Sphinx in LocalProject(docs.id) map identity + generatedPdf in Sphinx <<= generatedPdf in Sphinx in LocalProject(docs.id) map identity ), - aggregate = Seq(actor, testkit, actorTests, dataflow, remote, remoteTests, camel, cluster, slf4j, agent, transactor, mailboxes, zeroMQ, kernel, akkaSbtPlugin, osgi, osgiAries, docs, contrib) + aggregate = Seq(actor, testkit, actorTests, dataflow, remote, remoteTests, camel, cluster, slf4j, agent, transactor, mailboxes, zeroMQ, kernel, akkaSbtPlugin, osgi, osgiAries, docs, contrib, samples) ) lazy val actor = Project( @@ -96,7 +98,7 @@ object AkkaBuild extends Build { id = "akka-testkit", base = file("akka-testkit"), dependencies = Seq(actor), - settings = defaultSettings ++ Seq( + settings = defaultSettings ++ OSGi.testkit ++ Seq( libraryDependencies ++= Dependencies.testkit, initialCommands += "import akka.testkit._", previousArtifact := akkaPreviousArtifact("akka-testkit") @@ -272,7 +274,7 @@ object AkkaBuild extends Build { publishMavenStyle := false, // SBT Plugins should be published as Ivy publishTo <<= Publish.akkaPluginPublishTo, scalacOptions in Compile := Seq("-encoding", "UTF-8", "-deprecation", "-unchecked"), - scalaVersion := "2.9.1", + scalaVersion := "2.9.2", scalaBinaryVersion <<= scalaVersion ) ) @@ -324,7 +326,13 @@ object AkkaBuild extends Build { base = file("akka-samples/akka-sample-cluster"), dependencies = Seq(cluster, remoteTests % "test", testkit % "test"), settings = sampleSettings ++ multiJvmSettings ++ experimentalSettings ++ Seq( + // sigar is in Typesafe repo + resolvers += "Typesafe Repo" at "http://repo.typesafe.com/typesafe/releases/", libraryDependencies ++= Dependencies.clusterSample, + javaOptions in run ++= Seq( + "-Djava.library.path=./sigar", + "-Xms128m", "-Xmx1024m"), + Keys.fork in run := true, // disable parallel tests parallelExecution in Test := false, extraOptions in MultiJvm <<= (sourceDirectory in MultiJvm) { src => @@ -401,13 +409,7 @@ object AkkaBuild extends Build { super.settings ++ buildSettings ++ Seq( - shellPrompt := { s => Project.extract(s).currentProject.id + " > " }, - resolvers <<= (resolvers, scalaVersion) apply { - case (res, "2.10.0-SNAPSHOT") => - res :+ ("Scala Community 2.10.0-SNAPSHOT" at "https://scala-webapps.epfl.ch/jenkins/job/community-nightly/ws/target/repositories/fc24ea43b17664f020e43379e800c34be09700bd") - case (res, _) => - res - } + shellPrompt := { s => Project.extract(s).currentProject.id + " > " } ) lazy val baseSettings = Defaults.defaultSettings ++ Publish.settings @@ -417,7 +419,7 @@ object AkkaBuild extends Build { ) lazy val sampleSettings = defaultSettings ++ Seq( - publishArtifact in Compile := false + publishArtifact in (Compile, packageBin) := false ) lazy val experimentalSettings = Seq( @@ -436,18 +438,24 @@ object 
AkkaBuild extends Build { val excludeTestNames = SettingKey[Seq[String]]("exclude-test-names") val excludeTestTags = SettingKey[Set[String]]("exclude-test-tags") - val includeTestTags = SettingKey[Set[String]]("include-test-tags") val onlyTestTags = SettingKey[Set[String]]("only-test-tags") - val defaultExcludedTags = Set("timing", "long-running") - lazy val defaultMultiJvmOptions: Seq[String] = { import scala.collection.JavaConverters._ + // multinode.D= and multinode.X= make it possible to pass arbitrary + // -D or -X arguments to the forked jvm, e.g. + // -Dmultinode.Djava.net.preferIPv4Stack=true -Dmultinode.Xmx512m -Dmultinode.XX:MaxPermSize=256M + val MultinodeJvmArgs = "multinode\\.(D|X)(.*)".r val akkaProperties = System.getProperties.propertyNames.asScala.toList.collect { + case MultinodeJvmArgs(a, b) => + val value = System.getProperty("multinode." + a + b) + "-" + a + b + (if (value == "") "" else "=" + value) case key: String if key.startsWith("multinode.") => "-D" + key + "=" + System.getProperty(key) case key: String if key.startsWith("akka.") => "-D" + key + "=" + System.getProperty(key) } - akkaProperties ::: (if (getBoolean("sbt.log.noformat")) List("-Dakka.test.nocolor=true") else Nil) + + "-Xmx256m" :: akkaProperties ::: + (if (getBoolean("sbt.log.noformat")) List("-Dakka.test.nocolor=true") else Nil) } // for excluding tests by name use system property: -Dakka.test.names.exclude=TimingSpec @@ -457,14 +465,7 @@ object AkkaBuild extends Build { // for excluding tests by tag use system property: -Dakka.test.tags.exclude= // note that it will not be used if you specify -Dakka.test.tags.only lazy val useExcludeTestTags: Set[String] = { - if (useOnlyTestTags.isEmpty) defaultExcludedTags ++ systemPropertyAsSeq("akka.test.tags.exclude").toSet - else Set.empty - } - - // for including tests by tag use system property: -Dakka.test.tags.include= - // note that it will not be used if you specify -Dakka.test.tags.only - lazy val useIncludeTestTags: Set[String] = { - if (useOnlyTestTags.isEmpty) systemPropertyAsSeq("akka.test.tags.include").toSet + if (useOnlyTestTags.isEmpty) systemPropertyAsSeq("akka.test.tags.exclude").toSet else Set.empty } @@ -472,8 +473,7 @@ object AkkaBuild extends Build { lazy val useOnlyTestTags: Set[String] = systemPropertyAsSeq("akka.test.tags.only").toSet def executeMultiJvmTests: Boolean = { - useOnlyTestTags.contains("long-running") || - !(useExcludeTestTags -- useIncludeTestTags).contains("long-running") + useOnlyTestTags.contains("long-running") || !useExcludeTestTags.contains("long-running") } def systemPropertyAsSeq(name: String): Seq[String] = { @@ -484,7 +484,7 @@ object AkkaBuild extends Build { val multiNodeEnabled = java.lang.Boolean.getBoolean("akka.test.multi-node") lazy val defaultMultiJvmScalatestOptions: Seq[String] = { - val excludeTags = (useExcludeTestTags -- useIncludeTestTags).toSeq + val excludeTags = useExcludeTestTags.toSeq Seq("-C", "org.scalatest.akka.QuietReporter") ++ (if (excludeTags.isEmpty) Seq.empty else Seq("-l", if (multiNodeEnabled) excludeTags.mkString("\"", " ", "\"") else excludeTags.mkString(" "))) ++ (if (useOnlyTestTags.isEmpty) Seq.empty else Seq("-n", if (multiNodeEnabled) useOnlyTestTags.mkString("\"", " ", "\"") else useOnlyTestTags.mkString(" "))) @@ -515,15 +515,13 @@ object AkkaBuild extends Build { excludeTestNames := useExcludeTestNames, excludeTestTags := useExcludeTestTags, - includeTestTags := useIncludeTestTags, onlyTestTags := useOnlyTestTags, // add filters for tests excluded by name testOptions in 
Test <++= excludeTestNames map { _.map(exclude => Tests.Filter(test => !test.contains(exclude))) }, - // add arguments for tests excluded by tag - includes override excludes (opposite to scalatest) - testOptions in Test <++= (excludeTestTags, includeTestTags) map { (excludes, includes) => - val tags = (excludes -- includes) + // add arguments for tests excluded by tag + testOptions in Test <++= excludeTestTags map { tags => if (tags.isEmpty) Seq.empty else Seq(Tests.Argument("-l", tags.mkString(" "))) }, @@ -543,6 +541,7 @@ object AkkaBuild extends Build { // customization of sphinx @@ replacements, add to all sphinx-using projects // add additional replacements here preprocessVars <<= (scalaVersion, version) { (s, v) => + val isSnapshot = v.endsWith("SNAPSHOT") val BinVer = """(\d+\.\d+)\.\d+""".r Map( "version" -> v, @@ -558,7 +557,9 @@ object AkkaBuild extends Build { "binVersion" -> (s match { case BinVer(bv) => bv case _ => s - }) + }), + "sigarVersion" -> Dependencies.Compile.sigar.revision, + "github" -> "http://github.com/akka/akka/tree/%s".format((if (isSnapshot) "master" else "v" + v)) ) }, preprocess <<= (sourceDirectory, target in preprocess, cacheDirectory, preprocessExts, preprocessVars, streams) map { @@ -630,13 +631,13 @@ object AkkaBuild extends Build { val fileMailbox = exports(Seq("akka.actor.mailbox.filebased.*")) - val mailboxesCommon = exports(Seq("akka.actor.mailbox.*")) + val mailboxesCommon = exports(Seq("akka.actor.mailbox.*"), imports = Seq(protobufImport())) val osgi = exports(Seq("akka.osgi")) ++ Seq(OsgiKeys.privatePackage := Seq("akka.osgi.impl")) val osgiAries = exports() ++ Seq(OsgiKeys.privatePackage := Seq("akka.osgi.aries.*")) - val remote = exports(Seq("akka.remote.*")) + val remote = exports(Seq("akka.remote.*"), imports = Seq(protobufImport())) val slf4j = exports(Seq("akka.event.slf4j.*")) @@ -644,16 +645,19 @@ object AkkaBuild extends Build { val transactor = exports(Seq("akka.transactor.*")) - val zeroMQ = exports(Seq("akka.zeromq.*")) + val testkit = exports(Seq("akka.testkit.*")) - def exports(packages: Seq[String] = Seq()) = osgiSettings ++ Seq( - OsgiKeys.importPackage := defaultImports, + val zeroMQ = exports(Seq("akka.zeromq.*"), imports = Seq(protobufImport()) ) + + def exports(packages: Seq[String] = Seq(), imports: Seq[String] = Nil) = osgiSettings ++ Seq( + OsgiKeys.importPackage := imports ++ defaultImports, OsgiKeys.exportPackage := packages ) def defaultImports = Seq("!sun.misc", akkaImport(), configImport(), scalaImport(), "*") def akkaImport(packageName: String = "akka.*") = "%s;version=\"[2.1,2.2)\"".format(packageName) - def configImport(packageName: String = "com.typesafe.config.*") = "%s;version=\"[0.4.1,0.5)\"".format(packageName) + def configImport(packageName: String = "com.typesafe.config.*") = "%s;version=\"[0.4.1,1.1.0)\"".format(packageName) + def protobufImport(packageName: String = "com.google.protobuf.*") = "%s;version=\"[2.4.0,2.5.0)\"".format(packageName) def scalaImport(packageName: String = "scala.*") = "%s;version=\"[2.10,2.11)\"".format(packageName) } } @@ -661,7 +665,48 @@ object AkkaBuild extends Build { // Dependencies object Dependencies { - import Dependency._ + + object Compile { + // Compile + val camelCore = "org.apache.camel" % "camel-core" % "2.10.0" exclude("org.slf4j", "slf4j-api") // ApacheV2 + + val config = "com.typesafe" % "config" % "1.0.0" // ApacheV2 + val netty = "io.netty" % "netty" % "3.5.8.Final" // ApacheV2 + val protobuf = "com.google.protobuf" % "protobuf-java" % "2.4.1" // New BSD + val 
scalaStm = "org.scala-stm" % "scala-stm" % "0.6" cross CrossVersion.full // Modified BSD (Scala) + + val slf4jApi = "org.slf4j" % "slf4j-api" % "1.7.2" // MIT + val zeroMQClient = "org.zeromq" % "zeromq-scala-binding" % "0.0.6" cross CrossVersion.full // ApacheV2 + val uncommonsMath = "org.uncommons.maths" % "uncommons-maths" % "1.2.2a" exclude("jfree", "jcommon") exclude("jfree", "jfreechart") // ApacheV2 + val ariesBlueprint = "org.apache.aries.blueprint" % "org.apache.aries.blueprint" % "0.3.2" // ApacheV2 + val osgiCore = "org.osgi" % "org.osgi.core" % "4.2.0" // ApacheV2 + + + // Camel Sample + val camelJetty = "org.apache.camel" % "camel-jetty" % camelCore.revision // ApacheV2 + + // Cluster Sample + val sigar = "org.hyperic" % "sigar" % "1.6.4" // ApacheV2 + + // Test + + object Test { + val commonsMath = "org.apache.commons" % "commons-math" % "2.1" % "test" // ApacheV2 + val commonsIo = "commons-io" % "commons-io" % "2.0.1" % "test" // ApacheV2 + val junit = "junit" % "junit" % "4.10" % "test" // Common Public License 1.0 + val logback = "ch.qos.logback" % "logback-classic" % "1.0.7" % "test" // EPL 1.0 / LGPL 2.1 + val mockito = "org.mockito" % "mockito-all" % "1.8.1" % "test" // MIT + val scalatest = "org.scalatest" % "scalatest" % "1.8-B1" % "test" cross CrossVersion.full // ApacheV2 + val scalacheck = "org.scalacheck" % "scalacheck" % "1.10.0" % "test" cross CrossVersion.full // New BSD + val ariesProxy = "org.apache.aries.proxy" % "org.apache.aries.proxy.impl" % "0.3" % "test" // ApacheV2 + val pojosr = "com.googlecode.pojosr" % "de.kalpatec.pojosr.framework" % "0.1.4" % "test" // ApacheV2 + val tinybundles = "org.ops4j.pax.tinybundles" % "tinybundles" % "1.0.0" % "test" // ApacheV2 + val log4j = "log4j" % "log4j" % "1.2.14" % "test" // ApacheV2 + val junitIntf = "com.novocode" % "junit-interface" % "0.8" % "test" // MIT + } + } + + import Compile._ val actor = Seq(config) @@ -699,45 +744,10 @@ object Dependencies { val zeroMQ = Seq(protobuf, zeroMQClient, Test.scalatest, Test.junit) - val clusterSample = Seq(Test.scalatest) + val clusterSample = Seq(Test.scalatest, sigar) val contrib = Seq(Test.junitIntf) val multiNodeSample = Seq(Test.scalatest) } -object Dependency { - // Compile - val camelCore = "org.apache.camel" % "camel-core" % "2.10.0" exclude("org.slf4j", "slf4j-api") // ApacheV2 - val config = "com.typesafe" % "config" % "0.5.2" // ApacheV2 - val netty = "io.netty" % "netty" % "3.5.4.Final" // ApacheV2 - val protobuf = "com.google.protobuf" % "protobuf-java" % "2.4.1" // New BSD - val scalaStm = "org.scala-tools" % "scala-stm" % "0.6" cross CrossVersion.full // Modified BSD (Scala) - - val slf4jApi = "org.slf4j" % "slf4j-api" % "1.6.4" // MIT - val zeroMQClient = "org.zeromq" % "zeromq-scala-binding" % "0.0.6" cross CrossVersion.full // ApacheV2 - val uncommonsMath = "org.uncommons.maths" % "uncommons-maths" % "1.2.2a" // ApacheV2 - val ariesBlueprint = "org.apache.aries.blueprint" % "org.apache.aries.blueprint" % "0.3.2" // ApacheV2 - val osgiCore = "org.osgi" % "org.osgi.core" % "4.2.0" // ApacheV2 - - - // Camel Sample - val camelJetty = "org.apache.camel" % "camel-jetty" % camelCore.revision // ApacheV2 - - // Test - - object Test { - val commonsMath = "org.apache.commons" % "commons-math" % "2.1" % "test" // ApacheV2 - val commonsIo = "commons-io" % "commons-io" % "2.0.1" % "test" // ApacheV2 - val junit = "junit" % "junit" % "4.10" % "test" // Common Public License 1.0 - val logback = "ch.qos.logback" % "logback-classic" % "1.0.4" % "test" // EPL 1.0 / LGPL 
2.1 - val mockito = "org.mockito" % "mockito-all" % "1.8.1" % "test" // MIT - val scalatest = "org.scalatest" % "scalatest" % "1.9-2.10.0-M7-B1" % "test" cross CrossVersion.full // ApacheV2 - val scalacheck = "org.scalacheck" % "scalacheck" % "1.10.0" % "test" cross CrossVersion.full // New BSD - val ariesProxy = "org.apache.aries.proxy" % "org.apache.aries.proxy.impl" % "0.3" % "test" // ApacheV2 - val pojosr = "com.googlecode.pojosr" % "de.kalpatec.pojosr.framework" % "0.1.4" % "test" // ApacheV2 - val tinybundles = "org.ops4j.pax.tinybundles" % "tinybundles" % "1.0.0" % "test" // ApacheV2 - val log4j = "log4j" % "log4j" % "1.2.14" % "test" // ApacheV2 - val junitIntf = "com.novocode" % "junit-interface" % "0.8" % "test" // MIT - } -} diff --git a/project/Dist.scala b/project/Dist.scala index 53fd40fed2..86d4263346 100644 --- a/project/Dist.scala +++ b/project/Dist.scala @@ -57,7 +57,6 @@ object Dist { val base = unzipped / ("akka-" + version) val distBase = projectBase / "akka-kernel" / "src" / "main" / "dist" val deploy = base / "deploy" - val deployReadme = deploy / "readme" val doc = base / "doc" / "akka" val api = doc / "api" val docs = doc / "docs" diff --git a/project/build.properties b/project/build.properties index a8c2f849be..4474a03e1a 100644 --- a/project/build.properties +++ b/project/build.properties @@ -1 +1 @@ -sbt.version=0.12.0 +sbt.version=0.12.1 diff --git a/project/plugins.sbt b/project/plugins.sbt index 6f68c66496..8e57ed4e2d 100644 --- a/project/plugins.sbt +++ b/project/plugins.sbt @@ -3,7 +3,7 @@ resolvers += Classpaths.typesafeResolver // these comment markers are for including code into the docs //#sbt-multi-jvm -addSbtPlugin("com.typesafe.sbt" % "sbt-multi-jvm" % "0.3.3") +addSbtPlugin("com.typesafe.sbt" % "sbt-multi-jvm" % "0.3.4") //#sbt-multi-jvm addSbtPlugin("com.typesafe.sbt" % "sbt-scalariform" % "1.0.0") diff --git a/project/scripts/release b/project/scripts/release index 1dd5d9d3ae..c660aaeeda 100755 --- a/project/scripts/release +++ b/project/scripts/release @@ -2,18 +2,92 @@ # # Release script for Akka. # -# To run this script you need a user account on repo.akka.io and contributor access -# to github.com/akka/akka. +# Prerequisites and Installation Instructions # -# If your username on repo.akka.io is different from your local username then you can -# configure ssh to always associate a particular username with repo.akka.io by adding -# the following to .ssh/config: -# Host repo.akka.io -# User <username> +# 1) You must be able to sign the artifacts with PGP +# +# 1.1) If you don't have PGP and a PGP key +# On OS X, from the command line: +# shell> brew install gnupg +# shell> gpg --gen-key +# +# The default values for the key type and 2048 bits are OK. +# Make sure to use the email address that you will use later to register +# with Sonatype. +# +# 1.2) Install the sbt-pgp plugin from http://www.scala-sbt.org/sbt-pgp/ +# The plugin will try to use the default key stored in ~/.gnupg/pubring.gpg +# and ~/.gnupg/secring.gpg. +# +# 1.3) Check that signing works +# From inside sbt do the following +# sbt> publish-local +# It should ask you for your passphrase, and create .asc files for +# all artifacts +# +# 1.4) Publish your key to a server that Sonatype uses +# From the command line: +# shell> gpg --keyserver hkp://pool.sks-keyservers.net/ --send-keys <key id> +# To find out your key id do this from the command line: +# shell> gpg --list-keys +# pub 2048/<key id> ... 
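+# For illustration only (hypothetical key id and uid), the relevant
+# lines of the gpg --list-keys output look something like:
+# pub 2048/ABCD1234 2012-11-15
+# uid John Doe <john.doe@example.com>
+# where ABCD1234 is the key id to pass to --send-keys above.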
+# +# 2) You must have publishing rights to oss.sonatype.org +# +# 2.1) Register with oss.sonatype.org by following only the instructions under +# sign up here https://docs.sonatype.org/display/Repository/Sonatype+OSS+Maven+Repository+Usage+Guide +# Use the same email address as you used for the pgp key. +# +# 2.2) Ask Jonas, who is the original creator of this ticket https://issues.sonatype.org/browse/OSSRH-3097, +# to add a comment that says that your username (not your full name) should +# have publish access to that project. There is manual administration of +# the ticket at Sonatype, so it could take a little while. +# +# 2.3) Add your credentials to sbt by adding a global.sbt file in your sbt home +# directory containing the following. +# credentials += Credentials("Sonatype Nexus Repository Manager", +# "oss.sonatype.org", +# "<username>", +# "<password>") +# +# 3) You must have publishing rights to scalasbt.artifactoryonline.com +# +# 3.1) Politely ask the Q-branch to create a user for you +# +# 3.2) Add your credentials to sbt by adding this to your global.sbt file +# credentials += Credentials("Artifactory Realm", +# "scalasbt.artifactoryonline.com", +# "<username>", +# "<encrypted password>") +# The encrypted password is available in your profile here +# http://scalasbt.artifactoryonline.com/scalasbt/webapp/profile.html +# +# 4) You must have access to repo.akka.io +# +# 4.1) Ask someone in the team for login information for the akkarepo user. +# +# 4.2) Install your public ssh key to avoid typing in your password. +# From the command line: +# shell> cat ~/.ssh/id_rsa.pub | ssh akkarepo@repo.akka.io "cat >> ~/.ssh/authorized_keys" +# +# 5) Have access to github.com/akka/akka. This should be a given. +# +# Now you should be all set to run the script. +# +# Run the script in two stages. +# First a dry run: +# shell> project/scripts/release --dry-run +# And if all goes well, a real run: +# shell> project/scripts/release +# +# The sbt plugin is published directly to scalasbt.artifactoryonline.com, but the +# artifacts published to oss.sonatype.org need to be released by following the +# instructions under release here +# https://docs.sonatype.org/display/Repository/Sonatype+OSS+Maven+Repository+Usage+Guide # defaults -declare -r default_server="repo.akka.io" -declare -r default_path="/akka/www" +declare -r default_server="akkarepo@repo.akka.io" +declare -r default_path="www" # settings declare -r release_dir="target/release" @@ -203,12 +277,6 @@ try git checkout -b ${release_branch} # find and replace the version try ${script_dir}/find-replace ${current_version} ${version} -#find and replace github links -try ${script_dir}/find-replace http://github.com/akka/akka/tree/master http://github.com/akka/akka/tree/v${version} -try ${script_dir}/find-replace https://github.com/akka/akka/tree/master http://github.com/akka/akka/tree/v${version} -try ${script_dir}/find-replace http://github.com/akka/akka/blob/master http://github.com/akka/akka/tree/v${version} -try ${script_dir}/find-replace https://github.com/akka/akka/blob/master http://github.com/akka/akka/tree/v${version} - # start clean try sbt clean diff --git a/scripts/multi-node-log-replace.sh b/scripts/multi-node-log-replace.sh index 8e8af7112a..3bee844c2d 100755 --- a/scripts/multi-node-log-replace.sh +++ b/scripts/multi-node-log-replace.sh @@ -22,4 +22,4 @@ # check for an sbt command type -P sbt &> /dev/null || fail "sbt command not found" -sbt "project akka-remote-tests" "test:run-main akka.remote.testkit.LogRoleReplace $1 $2" \ No newline at end of file +sbt "project 
akka-remote-tests-experimental" "test:run-main akka.remote.testkit.LogRoleReplace $1 $2"