Merge branch 'master' into wip-multi-dc-merge-master-patriknw

This commit is contained in:
Patrik Nordwall 2017-08-31 10:51:12 +02:00
commit 6ed3295acd
393 changed files with 11343 additions and 9108 deletions


@ -62,7 +62,7 @@ The steps are exactly the same for everyone involved in the project (be it core
1. [Fork the project](https://github.com/akka/akka#fork-destination-box) on GitHub. You'll need to create a feature-branch for your work on your fork, as this way you'll be able to submit a pull request against the mainline Akka.
1. Create a branch on your fork and work on the feature. For example: `git checkout -b wip-custom-headers-akka-http`
    - Please make sure to follow the general quality guidelines (specified below) when developing your patch.
    - Please write additional tests covering your feature and adjust existing ones if needed before submitting your pull request. The `validatePullRequest` sbt task ([explained below](#the-validatepullrequest-task)) may come in handy to verify your changes are correct.
1. Once your feature is complete, prepare the commit following our [Creating Commits And Writing Commit Messages](#creating-commits-and-writing-commit-messages). For example, a good commit message would be: `Adding compression support for Manifests #22222` (note the reference to the ticket it aimed to resolve).
1. If it's a new feature, or a change of behaviour, document it on the [akka-docs](https://github.com/akka/akka/tree/master/akka-docs), remember, an undocumented feature is not a feature. If the feature was touching Scala or Java DSL, make sure to document it in both the Java and Scala documentation (usually in a file of the same name, but under `/scala/` instead of `/java/` etc).
1. Now it's finally time to [submit the pull request](https://help.github.com/articles/using-pull-requests)!
@ -181,7 +181,7 @@ an error like this:
[error] filter with: ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.stream.scaladsl.FlowOps.foldAsync")
```

In such situations it's good to consult with a core team member if the violation can be safely ignored (by adding the above snippet to `<module>/src/main/mima-filters/<last-version>.backwards.excludes`), or if it would indeed break binary compatibility.
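Such a filter file is plain text with one `ProblemFilters` exclusion per line, usually preceded by a comment referencing the motivating issue. A hypothetical example of such a file (the ticket number is illustrative), reusing the exclusion from the error above:

```
# #12345 changed FlowOps (hypothetical ticket)
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.stream.scaladsl.FlowOps.foldAsync")
```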
Situations when it may be fine to ignore a MiMa issued warning include:
@ -233,6 +233,15 @@ akka-docs/paradox
The generated html documentation is in `akka-docs/target/paradox/site/main/index.html`.
### Java- or Scala-specific documentation
For new documentation chapters, we recommend adding a page to the `scala` tree documenting both Java and Scala, using [tabs](http://developer.lightbend.com/docs/paradox/latest/features/snippet-inclusion.html) for code snippets and [groups](http://developer.lightbend.com/docs/paradox/latest/features/groups.html) for other Java- or Scala-specific segments or sections.
An example of such a 'merged' page is `akka-docs/src/main/paradox/scala/actors.md`.
Add a symlink to the `java` tree to make the page available there as well.
Consolidation of existing pages is tracked in [issue #23052](https://github.com/akka/akka/issues/23052).
### Note for paradox on Windows

On Windows, you need special care to generate html documentation with paradox.


@ -8,7 +8,7 @@ Akka is here to change that.
Using the Actor Model we raise the abstraction level and provide a better platform to build correct concurrent and scalable applications. This model is a perfect match for the principles laid out in the [Reactive Manifesto](http://www.reactivemanifesto.org/).

For resilience, we adopt the "Let it crash" model which the telecom industry has used with great success to build applications that self-heal and systems that never stop.

Actors also provide the abstraction for transparent distribution and the basis for truly scalable and fault-tolerant applications.


@ -0,0 +1,266 @@
/**
 * Copyright (C) 2017 Lightbend Inc. <http://www.lightbend.com>
 */
package akka.actor

import java.util.concurrent.atomic.AtomicInteger

import scala.concurrent.duration._
import scala.util.control.NoStackTrace

import akka.testkit._

import scala.concurrent.Await

object TimerSpec {
  sealed trait Command
  case class Tick(n: Int) extends Command
  case object Bump extends Command
  case class SlowThenBump(latch: TestLatch) extends Command
    with NoSerializationVerificationNeeded
  case object End extends Command
  case class Throw(e: Throwable) extends Command
  case object Cancel extends Command
  case class SlowThenThrow(latch: TestLatch, e: Throwable) extends Command
    with NoSerializationVerificationNeeded

  sealed trait Event
  case class Tock(n: Int) extends Event
  case class GotPostStop(timerActive: Boolean) extends Event
  case class GotPreRestart(timerActive: Boolean) extends Event

  class Exc extends RuntimeException("simulated exc") with NoStackTrace

  def target(monitor: ActorRef, interval: FiniteDuration, repeat: Boolean, initial: () ⇒ Int): Props =
    Props(new Target(monitor, interval, repeat, initial))

  class Target(monitor: ActorRef, interval: FiniteDuration, repeat: Boolean, initial: () ⇒ Int) extends Actor with Timers {
    private var bumpCount = initial()

    if (repeat)
      timers.startPeriodicTimer("T", Tick(bumpCount), interval)
    else
      timers.startSingleTimer("T", Tick(bumpCount), interval)

    override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
      monitor ! GotPreRestart(timers.isTimerActive("T"))
      // don't call super.preRestart to avoid postStop
    }

    override def postStop(): Unit = {
      monitor ! GotPostStop(timers.isTimerActive("T"))
    }

    def bump(): Unit = {
      bumpCount += 1
      timers.startPeriodicTimer("T", Tick(bumpCount), interval)
    }

    override def receive = {
      case Tick(n) ⇒
        monitor ! Tock(n)
      case Bump ⇒
        bump()
      case SlowThenBump(latch) ⇒
        Await.ready(latch, 10.seconds)
        bump()
      case End ⇒
        context.stop(self)
      case Cancel ⇒
        timers.cancel("T")
      case Throw(e) ⇒
        throw e
      case SlowThenThrow(latch, e) ⇒
        Await.ready(latch, 10.seconds)
        throw e
    }
  }

  def fsmTarget(monitor: ActorRef, interval: FiniteDuration, repeat: Boolean, initial: () ⇒ Int): Props =
    Props(new FsmTarget(monitor, interval, repeat, initial))

  object TheState

  class FsmTarget(monitor: ActorRef, interval: FiniteDuration, repeat: Boolean, initial: () ⇒ Int) extends FSM[TheState.type, Int] {
    private var restarting = false

    override def preRestart(reason: Throwable, message: Option[Any]): Unit = {
      restarting = true
      super.preRestart(reason, message)
      monitor ! GotPreRestart(isTimerActive("T"))
    }

    override def postStop(): Unit = {
      super.postStop()
      if (!restarting)
        monitor ! GotPostStop(isTimerActive("T"))
    }

    def bump(bumpCount: Int): State = {
      setTimer("T", Tick(bumpCount + 1), interval, repeat)
      stay using (bumpCount + 1)
    }

    {
      val i = initial()
      startWith(TheState, i)
      setTimer("T", Tick(i), interval, repeat)
    }

    when(TheState) {
      case Event(Tick(n), _) ⇒
        monitor ! Tock(n)
        stay
      case Event(Bump, bumpCount) ⇒
        bump(bumpCount)
      case Event(SlowThenBump(latch), bumpCount) ⇒
        Await.ready(latch, 10.seconds)
        bump(bumpCount)
      case Event(End, _) ⇒
        stop()
      case Event(Cancel, _) ⇒
        cancelTimer("T")
        stay
      case Event(Throw(e), _) ⇒
        throw e
      case Event(SlowThenThrow(latch, e), _) ⇒
        Await.ready(latch, 10.seconds)
        throw e
    }

    initialize()
  }
}

class TimerSpec extends AbstractTimerSpec {
  override def testName: String = "Timers"
  override def target(monitor: ActorRef, interval: FiniteDuration, repeat: Boolean, initial: () ⇒ Int = () ⇒ 1): Props =
    TimerSpec.target(monitor, interval, repeat, initial)
}

class FsmTimerSpec extends AbstractTimerSpec {
  override def testName: String = "FSM Timers"
  override def target(monitor: ActorRef, interval: FiniteDuration, repeat: Boolean, initial: () ⇒ Int = () ⇒ 1): Props =
    TimerSpec.fsmTarget(monitor, interval, repeat, initial)
}

abstract class AbstractTimerSpec extends AkkaSpec {
  import TimerSpec._

  val interval = 1.second
  val dilatedInterval = interval.dilated

  def target(monitor: ActorRef, interval: FiniteDuration, repeat: Boolean, initial: () ⇒ Int = () ⇒ 1): Props

  def testName: String

  testName must {
    "schedule non-repeated ticks" taggedAs TimingTest in {
      val probe = TestProbe()
      val ref = system.actorOf(target(probe.ref, 10.millis, repeat = false))

      probe.expectMsg(Tock(1))
      probe.expectNoMsg(100.millis)

      ref ! End
      probe.expectMsg(GotPostStop(false))
    }

    "schedule repeated ticks" taggedAs TimingTest in {
      val probe = TestProbe()
      val ref = system.actorOf(target(probe.ref, dilatedInterval, repeat = true))
      probe.within((interval * 4) - 100.millis) {
        probe.expectMsg(Tock(1))
        probe.expectMsg(Tock(1))
        probe.expectMsg(Tock(1))
      }

      ref ! End
      probe.expectMsg(GotPostStop(false))
    }

    "replace timer" taggedAs TimingTest in {
      val probe = TestProbe()
      val ref = system.actorOf(target(probe.ref, dilatedInterval, repeat = true))
      probe.expectMsg(Tock(1))
      val latch = new TestLatch(1)
      // next Tock(1) is enqueued in mailbox, but should be discarded because of the new timer
      ref ! SlowThenBump(latch)
      probe.expectNoMsg(interval + 100.millis)
      latch.countDown()
      probe.expectMsg(Tock(2))

      ref ! End
      probe.expectMsg(GotPostStop(false))
    }

    "cancel timer" taggedAs TimingTest in {
      val probe = TestProbe()
      val ref = system.actorOf(target(probe.ref, dilatedInterval, repeat = true))
      probe.expectMsg(Tock(1))

      ref ! Cancel
      probe.expectNoMsg(dilatedInterval + 100.millis)

      ref ! End
      probe.expectMsg(GotPostStop(false))
    }

    "cancel timers when restarted" taggedAs TimingTest in {
      val probe = TestProbe()
      val ref = system.actorOf(target(probe.ref, dilatedInterval, repeat = true))
      ref ! Throw(new Exc)
      probe.expectMsg(GotPreRestart(false))

      ref ! End
      probe.expectMsg(GotPostStop(false))
    }

    "discard timers from old incarnation after restart, alt 1" taggedAs TimingTest in {
      val probe = TestProbe()
      val startCounter = new AtomicInteger(0)
      val ref = system.actorOf(target(probe.ref, dilatedInterval, repeat = true,
        initial = () ⇒ startCounter.incrementAndGet()))
      probe.expectMsg(Tock(1))

      val latch = new TestLatch(1)
      // next Tock(1) is enqueued in mailbox, but should be discarded by new incarnation
      ref ! SlowThenThrow(latch, new Exc)
      probe.expectNoMsg(interval + 100.millis)
      latch.countDown()
      probe.expectMsg(GotPreRestart(false))
      probe.expectNoMsg(interval / 2)
      probe.expectMsg(Tock(2)) // this is from the startCounter increment

      ref ! End
      probe.expectMsg(GotPostStop(false))
    }

    "discard timers from old incarnation after restart, alt 2" taggedAs TimingTest in {
      val probe = TestProbe()
      val ref = system.actorOf(target(probe.ref, dilatedInterval, repeat = true))
      probe.expectMsg(Tock(1))
      // change state so that we see that the restart starts over again
      ref ! Bump
      probe.expectMsg(Tock(2))

      val latch = new TestLatch(1)
      // next Tock(2) is enqueued in mailbox, but should be discarded by new incarnation
      ref ! SlowThenThrow(latch, new Exc)
      probe.expectNoMsg(interval + 100.millis)
      latch.countDown()
      probe.expectMsg(GotPreRestart(false))
      probe.expectMsg(Tock(1))

      ref ! End
      probe.expectMsg(GotPostStop(false))
    }

    "cancel timers when stopped" in {
      val probe = TestProbe()
      val ref = system.actorOf(target(probe.ref, dilatedInterval, repeat = true))
      ref ! End
      probe.expectMsg(GotPostStop(false))
    }
  }
}
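The two "discard timers from old incarnation" tests above depend on each scheduled tick carrying the identity of the timer (and actor incarnation) that scheduled it, so stale ticks already sitting in the mailbox can be dropped on delivery. A minimal, language-neutral Python sketch of that generation-counting idea (all names illustrative; this is not Akka's internal implementation):

```python
class TimersSketch:
    """Each (re)start of a timer bumps a generation counter; a tick is
    delivered only if its token matches the current generation, so ticks
    enqueued by an old timer (or an old actor incarnation) are discarded."""

    def __init__(self):
        self._gen = {}        # timer key -> current generation
        self._active = set()  # keys of currently active timers

    def start(self, key):
        """Start (or replace) a timer; returns the token a tick would carry."""
        self._gen[key] = self._gen.get(key, 0) + 1
        self._active.add(key)
        return (key, self._gen[key])

    def cancel(self, key):
        self._active.discard(key)

    def is_timer_active(self, key):
        return key in self._active

    def should_deliver(self, token):
        """Called when a tick is dequeued: drop it unless it is current."""
        key, gen = token
        return key in self._active and self._gen.get(key) == gen

# A tick scheduled before a timer is replaced carries the old generation:
t = TimersSketch()
old = t.start("T")
new = t.start("T")               # restarting the timer replaces the old one
assert not t.should_deliver(old)  # stale tick in the mailbox is dropped
assert t.should_deliver(new)
```

This mirrors the observable behavior the spec asserts: after `SlowThenBump` or a restart, the previously enqueued `Tock` never reaches the probe.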


@ -139,18 +139,18 @@ class SerializeSpec extends AkkaSpec(SerializationTests.serializeConf) {
  val ser = SerializationExtension(system)
  import ser._

  val address = Address("120", "Monroe Street", "Santa Clara", "95050")
  val person = Person("debasish ghosh", 25, Address("120", "Monroe Street", "Santa Clara", "95050"))

  "Serialization" must {

    "have correct bindings" in {
      ser.bindings.collectFirst { case (c, s) if c == address.getClass ⇒ s.getClass } should ===(Some(classOf[JavaSerializer]))
      ser.bindings.collectFirst { case (c, s) if c == classOf[PlainMessage] ⇒ s.getClass } should ===(Some(classOf[NoopSerializer]))
    }

    "serialize Address" in {
      assert(deserialize(serialize(address).get, classOf[Address]).get === address)
    }

    "serialize Person" in {


@ -0,0 +1,151 @@
/**
 * Copyright (C) 2016-2017 Lightbend Inc. <http://www.lightbend.com>
 */
package akka.util

import org.scalatest.Matchers
import org.scalatest.WordSpec

import scala.util.Random

class ImmutableIntMapSpec extends WordSpec with Matchers {

  "ImmutableIntMap" must {

    "have no entries when empty" in {
      val empty = ImmutableIntMap.empty
      empty.size should be(0)
      empty.keysIterator.toList should be(Nil)
    }

    "add and get entries" in {
      val m1 = ImmutableIntMap.empty.updated(10, 10)
      m1.keysIterator.toList should be(List(10))
      m1.keysIterator.map(m1.get).toList should be(List(10))

      val m2 = m1.updated(20, 20)
      m2.keysIterator.toList should be(List(10, 20))
      m2.keysIterator.map(m2.get).toList should be(List(10, 20))

      val m3 = m1.updated(5, 5)
      m3.keysIterator.toList should be(List(5, 10))
      m3.keysIterator.map(m3.get).toList should be(List(5, 10))

      val m4 = m2.updated(5, 5)
      m4.keysIterator.toList should be(List(5, 10, 20))
      m4.keysIterator.map(m4.get).toList should be(List(5, 10, 20))

      val m5 = m4.updated(15, 15)
      m5.keysIterator.toList should be(List(5, 10, 15, 20))
      m5.keysIterator.map(m5.get).toList should be(List(5, 10, 15, 20))
    }

    "replace entries" in {
      val m1 = ImmutableIntMap.empty.updated(10, 10).updated(10, 11)
      m1.keysIterator.map(m1.get).toList should be(List(11))

      val m2 = m1.updated(20, 20).updated(30, 30)
        .updated(20, 21).updated(30, 31)
      m2.keysIterator.map(m2.get).toList should be(List(11, 21, 31))
    }

    "update if absent" in {
      val m1 = ImmutableIntMap.empty.updated(10, 10).updated(20, 11)
      m1.updateIfAbsent(10, 15) should be(ImmutableIntMap.empty.updated(10, 10).updated(20, 11))
      m1.updateIfAbsent(30, 12) should be(ImmutableIntMap.empty.updated(10, 10).updated(20, 11).updated(30, 12))
    }

    "have toString" in {
      ImmutableIntMap.empty.toString should be("ImmutableIntMap()")
      ImmutableIntMap.empty.updated(10, 10).toString should be("ImmutableIntMap(10 -> 10)")
      ImmutableIntMap.empty.updated(10, 10).updated(20, 20).toString should be(
        "ImmutableIntMap(10 -> 10, 20 -> 20)")
    }

    "have equals and hashCode" in {
      ImmutableIntMap.empty.updated(10, 10) should be(ImmutableIntMap.empty.updated(10, 10))
      ImmutableIntMap.empty.updated(10, 10).hashCode should be(
        ImmutableIntMap.empty.updated(10, 10).hashCode)
      ImmutableIntMap.empty.updated(10, 10).updated(20, 20).updated(30, 30) should be(
        ImmutableIntMap.empty.updated(10, 10).updated(20, 20).updated(30, 30))
      ImmutableIntMap.empty.updated(10, 10).updated(20, 20).updated(30, 30).hashCode should be(
        ImmutableIntMap.empty.updated(10, 10).updated(20, 20).updated(30, 30).hashCode)
      ImmutableIntMap.empty.updated(10, 10).updated(20, 20) should not be ImmutableIntMap.empty.updated(10, 10)
      ImmutableIntMap.empty.updated(10, 10).updated(20, 20).updated(30, 30) should not be
        ImmutableIntMap.empty.updated(10, 10).updated(20, 20).updated(30, 31)
      ImmutableIntMap.empty.updated(10, 10).updated(20, 20).updated(30, 30) should not be
        ImmutableIntMap.empty.updated(10, 10).updated(20, 20).updated(31, 30)
      ImmutableIntMap.empty should be(ImmutableIntMap.empty)
      ImmutableIntMap.empty.hashCode should be(ImmutableIntMap.empty.hashCode)
    }

    "remove entries" in {
      val m1 = ImmutableIntMap.empty.updated(10, 10).updated(20, 20).updated(30, 30)

      val m2 = m1.remove(10)
      m2.keysIterator.map(m2.get).toList should be(List(20, 30))

      val m3 = m1.remove(20)
      m3.keysIterator.map(m3.get).toList should be(List(10, 30))

      val m4 = m1.remove(30)
      m4.keysIterator.map(m4.get).toList should be(List(10, 20))

      m1.remove(5) should be(m1)

      m1.remove(10).remove(20).remove(30) should be(ImmutableIntMap.empty)
    }

    "return Int.MinValue when entry doesn't exist" in {
      val m1 = ImmutableIntMap.empty.updated(10, 10).updated(20, 20).updated(30, 30)
      m1.get(5) should be(Int.MinValue)
      m1.get(15) should be(Int.MinValue)
      m1.get(25) should be(Int.MinValue)
      m1.get(35) should be(Int.MinValue)
    }

    "contain keys" in {
      val m1 = ImmutableIntMap.empty.updated(10, 10).updated(20, 20).updated(30, 30)
      m1.contains(10) should be(true)
      m1.contains(20) should be(true)
      m1.contains(30) should be(true)
      m1.contains(5) should be(false)
      m1.contains(25) should be(false)
    }

    "have correct behavior for random operations" in {
      val seed = System.nanoTime()
      val rnd = new Random(seed)

      var longMap = ImmutableIntMap.empty
      var reference = Map.empty[Int, Int]

      def verify(): Unit = {
        val m = longMap.keysIterator.map(key ⇒ key → longMap.get(key)).toMap
        m should be(reference)
      }

      (1 to 1000).foreach { i ⇒
        withClue(s"seed=$seed, iteration=$i") {
          val key = rnd.nextInt(100)
          val value = rnd.nextPrintableChar()
          rnd.nextInt(3) match {
            case 0 | 1 ⇒
              longMap = longMap.updated(key, value)
              reference = reference.updated(key, value)
            case 2 ⇒
              longMap = longMap.remove(key)
              reference = reference - key
          }
          verify()
        }
      }
    }
  }
}
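The spec above pins down the map's observable contract: keys iterate in ascending order, `get` returns `Int.MinValue` as an "absent" sentinel instead of allocating an `Option`, and every update yields a new map. A hedged Python sketch of that contract (binary-search based and illustrative only, not Akka's actual array layout):

```python
import bisect

INT_MIN = -2**31  # Scala's Int.MinValue, used as the "absent" sentinel


class ImmutableIntMapSketch:
    """Sketch of the behavior the spec checks: sorted keys, persistent updates."""

    def __init__(self, keys=(), values=()):
        self._keys, self._values = tuple(keys), tuple(values)

    def updated(self, key, value):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:  # replace existing entry
            return ImmutableIntMapSketch(
                self._keys, self._values[:i] + (value,) + self._values[i + 1:])
        return ImmutableIntMapSketch(                     # insert, keeping keys sorted
            self._keys[:i] + (key,) + self._keys[i:],
            self._values[:i] + (value,) + self._values[i:])

    def remove(self, key):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return ImmutableIntMapSketch(
                self._keys[:i] + self._keys[i + 1:],
                self._values[:i] + self._values[i + 1:])
        return self  # removing an absent key returns the same map

    def get(self, key):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            return self._values[i]
        return INT_MIN  # absent: sentinel instead of Option, avoids allocation

    def contains(self, key):
        return self.get(key) != INT_MIN

    def keys_iterator(self):
        return iter(self._keys)


m = ImmutableIntMapSketch().updated(20, 20).updated(5, 5).updated(10, 10)
assert list(m.keys_iterator()) == [5, 10, 20]  # sorted key order
assert m.get(15) == INT_MIN                    # absent key -> sentinel
assert m.remove(10).contains(10) is False
```

The sentinel design means the map cannot store `Int.MinValue` as a value, a trade-off the real structure accepts for allocation-free lookups.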


@ -0,0 +1,12 @@
# #19281 BackoffSupervisor updates
ProblemFilters.exclude[MissingMethodProblem]("akka.pattern.BackoffSupervisor.akka$pattern$BackoffSupervisor$$child_=")
ProblemFilters.exclude[MissingMethodProblem]("akka.pattern.BackoffSupervisor.akka$pattern$BackoffSupervisor$$restartCount")
ProblemFilters.exclude[MissingMethodProblem]("akka.pattern.BackoffSupervisor.akka$pattern$BackoffSupervisor$$restartCount_=")
ProblemFilters.exclude[MissingMethodProblem]("akka.pattern.BackoffSupervisor.akka$pattern$BackoffSupervisor$$child")
# #19487
ProblemFilters.exclude[Problem]("akka.actor.dungeon.Children*")
# #19440
ProblemFilters.exclude[MissingMethodProblem]("akka.pattern.PipeToSupport.pipeCompletionStage")
ProblemFilters.exclude[MissingMethodProblem]("akka.pattern.FutureTimeoutSupport.afterCompletionStage")


@ -0,0 +1,2 @@
# #21131 new implementation for Akka Typed
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.actor.dungeon.DeathWatch.isWatching")


@ -0,0 +1,4 @@
# MarkerLoggingAdapter introduced (all internal classes)
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.actor.LocalActorRefProvider.log")
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.actor.VirtualPathContainer.log")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.actor.VirtualPathContainer.this")


@ -0,0 +1,2 @@
# #21775 - overrode ByteString.stringPrefix and made it final
ProblemFilters.exclude[FinalMethodProblem]("akka.util.ByteString.stringPrefix")


@ -0,0 +1,2 @@
# #21894 Programmatic configuration of the ActorSystem
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.ActorSystemImpl.this")


@ -0,0 +1,3 @@
# #15947 catch mailbox creation failures
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.RepointableActorRef.point")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.actor.dungeon.Dispatch.initWithFailure")


@ -0,0 +1,7 @@
# #20994 adding new decode method, since we're on JDK7+ now
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.util.ByteString.decodeString")
# #19872 double wildcard for actor deployment config
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.actor.Deployer.lookup")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.util.WildcardTree.apply")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.util.WildcardTree.find")


@ -0,0 +1,2 @@
# #21273 minor cleanup of WildcardIndex
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.util.WildcardIndex.empty")


@ -0,0 +1,79 @@
# #18262 embed FJP, Mailbox extends ForkJoinTask
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.dispatch.ForkJoinExecutorConfigurator#ForkJoinExecutorServiceFactory.threadFactory")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.dispatch.ForkJoinExecutorConfigurator#ForkJoinExecutorServiceFactory.this")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.dispatch.ForkJoinExecutorConfigurator#ForkJoinExecutorServiceFactory.this")
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.dispatch.ForkJoinExecutorConfigurator.validate")
ProblemFilters.exclude[MissingTypesProblem]("akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask")
ProblemFilters.exclude[MissingTypesProblem]("akka.dispatch.MonitorableThreadFactory")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.dispatch.MonitorableThreadFactory.newThread")
ProblemFilters.exclude[MissingTypesProblem]("akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.dispatch.ForkJoinExecutorConfigurator#AkkaForkJoinPool.this")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.dispatch.ForkJoinExecutorConfigurator#AkkaForkJoinPool.this")
ProblemFilters.exclude[MissingTypesProblem]("akka.dispatch.Mailbox")
ProblemFilters.exclude[MissingTypesProblem]("akka.dispatch.BalancingDispatcher$SharingMailbox")
ProblemFilters.exclude[MissingTypesProblem]("akka.dispatch.MonitorableThreadFactory$AkkaForkJoinWorkerThread")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.dispatch.MonitorableThreadFactory#AkkaForkJoinWorkerThread.this")
# #22295 Improve Circuit breaker
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.pattern.CircuitBreaker#State.callThrough")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.pattern.CircuitBreaker#State.invoke")
# #21717 Improvements to AbstractActor API
ProblemFilters.exclude[Problem]("akka.japi.pf.ReceiveBuilder*")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.AbstractActor.receive")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.actor.AbstractActor.createReceive")
ProblemFilters.exclude[MissingClassProblem]("akka.actor.AbstractActorContext")
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.actor.AbstractActor.getContext")
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.actor.AbstractActor.emptyBehavior")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.actor.dungeon.Children.findChild")
ProblemFilters.exclude[MissingTypesProblem]("akka.actor.ActorCell")
ProblemFilters.exclude[MissingTypesProblem]("akka.routing.RoutedActorCell")
ProblemFilters.exclude[MissingTypesProblem]("akka.routing.ResizablePoolCell")
# #21423 remove deprecated ActorSystem termination methods (in 2.5.x)
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.ActorSystemImpl.shutdown")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.ActorSystemImpl.isTerminated")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.ActorSystemImpl.awaitTermination")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.ActorSystemImpl.awaitTermination")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.ActorSystem.shutdown")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.ActorSystem.isTerminated")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.ActorSystem.awaitTermination")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.ActorSystem.awaitTermination")
# #21423 remove deprecated ActorPath.ElementRegex
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.ActorPath.ElementRegex")
# #21423 remove some deprecated event bus classes
ProblemFilters.exclude[MissingClassProblem]("akka.event.ActorClassification")
ProblemFilters.exclude[MissingClassProblem]("akka.event.EventStream$")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.event.EventStream.this")
ProblemFilters.exclude[MissingClassProblem]("akka.event.japi.ActorEventBus")
# #21423 remove deprecated util.Crypt
ProblemFilters.exclude[MissingClassProblem]("akka.util.Crypt")
ProblemFilters.exclude[MissingClassProblem]("akka.util.Crypt$")
# #21423 removal of deprecated serializer constructors (in 2.5.x)
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.serialization.JavaSerializer.this")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.serialization.ByteArraySerializer.this")
# #21423 removal of deprecated constructor in PromiseActorRef
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.pattern.PromiseActorRef.this")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.pattern.PromiseActorRef.apply")
# #21423 remove deprecated methods in routing
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.routing.Pool.nrOfInstances")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.routing.Group.paths")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.routing.PoolBase.nrOfInstances")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.routing.GroupBase.paths")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.routing.GroupBase.getPaths")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.routing.FromConfig.nrOfInstances")
# #22105 Akka Typed process DSL
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.ActorCell.addFunctionRef")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.dungeon.Children.addFunctionRef")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.actor.dungeon.Children.addFunctionRef")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.actor.dungeon.Children.addFunctionRef$default$2")
# #22208 remove extension key
ProblemFilters.exclude[MissingClassProblem]("akka.event.Logging$Extension$")


@ -0,0 +1,19 @@
# #22794 watchWith
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.actor.ActorContext.watchWith")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.actor.dungeon.DeathWatch.watchWith")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.actor.dungeon.DeathWatch.akka$actor$dungeon$DeathWatch$$watching")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.actor.dungeon.DeathWatch.akka$actor$dungeon$DeathWatch$$watching_=")
# #22881 Make sure connections are aborted correctly on Windows
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.io.ChannelRegistration.cancel")
# #21213 Feature request: Let BackoffSupervisor reply to messages when its child is stopped
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.pattern.BackoffSupervisor.this")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.pattern.BackoffOptionsImpl.copy")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.pattern.BackoffOptionsImpl.this")
ProblemFilters.exclude[MissingTypesProblem]("akka.pattern.BackoffOptionsImpl$")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.pattern.BackoffOptionsImpl.apply")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.pattern.BackoffOnRestartSupervisor.this")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.pattern.HandleBackoff.replyWhileStopped")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.pattern.BackoffOptions.withReplyWhileStopped")


@ -0,0 +1,2 @@
# #22881 Make sure connections are aborted correctly on Windows
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.io.ChannelRegistration.cancel")


@ -0,0 +1,4 @@
# #15733 Timers
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.FSM#Timer.copy")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.FSM#Timer.this")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.actor.FSM#Timer.apply")


@ -335,6 +335,7 @@ akka {
# - "default-executor" requires a "default-executor" section
# - "fork-join-executor" requires a "fork-join-executor" section
# - "thread-pool-executor" requires a "thread-pool-executor" section
# - "affinity-pool-executor" requires an "affinity-pool-executor" section
# - A FQCN of a class extending ExecutorServiceConfigurator
executor = "default-executor"
@ -350,6 +351,78 @@ akka {
fallback = "fork-join-executor"
}
# This will be used if you have set "executor = "affinity-pool-executor""
# Underlying thread pool implementation is akka.dispatch.affinity.AffinityPool.
# This executor is classified as "ApiMayChange".
affinity-pool-executor {
# Min number of threads to cap factor-based parallelism number to
parallelism-min = 4
# The parallelism factor is used to determine thread pool size using the
# following formula: ceil(available processors * factor). Resulting size
# is then bounded by the parallelism-min and parallelism-max values.
parallelism-factor = 0.8
# Max number of threads to cap factor-based parallelism number to.
parallelism-max = 64
# Each worker in the pool uses a separate bounded MPSC queue. This value
# indicates the upper bound of the queue. Whenever an attempt to enqueue
# a task is made and the queue does not have capacity to accommodate
# the task, the rejection handler created by the factory specified
# in "rejection-handler" is invoked.
task-queue-size = 512
# FQCN of the Rejection handler used in the pool.
# Must have an empty public constructor and must
# implement akka.actor.affinity.RejectionHandlerFactory.
rejection-handler = "akka.dispatch.affinity.ThrowOnOverflowRejectionHandler"
# Level of CPU time used, on a scale between 1 and 10, during backoff/idle.
# The tradeoff is that to achieve low latency more CPU time must be used,
# so that the pool can react quickly to incoming messages, or send as fast
# as possible after backing off due to backpressure.
# Level 1 strongly prefers low CPU consumption over low latency.
# Level 10 strongly prefers low latency over low CPU consumption.
idle-cpu-level = 5
# FQCN of the akka.dispatch.affinity.QueueSelectorFactory.
# The Class of the FQCN must have a public constructor with a
# (com.typesafe.config.Config) parameter.
# A QueueSelectorFactory creates instances of akka.dispatch.affinity.QueueSelector,
# which is responsible for determining which task queue a Runnable should be enqueued in.
queue-selector = "akka.dispatch.affinity.FairDistributionHashCache"
# When using the "akka.dispatch.affinity.FairDistributionHashCache" queue selector
# internally the AffinityPool uses two methods to determine which task
# queue to allocate a Runnable to:
# - map based - maintains a round robin counter and a map of Runnable
# hashcodes to the queues they have been associated with. This ensures
# maximum fairness in terms of work distribution, meaning that each worker
# will get an approximately equal number of mailboxes to execute. This is suitable
# in cases where we have a small number of actors that will be scheduled on
# the pool and we want to ensure the maximum possible utilization of the
# available threads.
# - hash based - the task queue the runnable should go to is determined
# by using a uniformly distributed int-to-int hash function which uses the
# hash code of the Runnable as an input. This is preferred in situations where we
# have a large enough number of distinct actors to ensure a statistically uniform
# distribution of work across threads, or we are ready to sacrifice the
# former for the added benefit of avoiding map look-ups.
fair-work-distribution {
# The value serves as a threshold which determines the point at which the
# pool switches from the first to the second work distribution scheme.
# For example, if the value is set to 128, the pool can observe up to
# 128 unique actors and schedule their mailboxes using the map based
# approach. Once this number is reached the pool switches to hash based
# task distribution mode. If the value is set to 0, the map based
# work distribution approach is disabled and only the hash based is
# used irrespective of the number of unique actors. Valid range is
# 0 to 2048 (inclusive)
threshold = 128
}
}
# This will be used if you have set "executor = "fork-join-executor""
# Underlying thread pool implementation is akka.dispatch.forkjoin.ForkJoinPool
fork-join-executor {

View file

@ -272,6 +272,7 @@ class ActorInterruptedException private[akka] (cause: Throwable) extends AkkaExc
 */
@SerialVersionUID(1L)
final case class UnhandledMessage(@BeanProperty message: Any, @BeanProperty sender: ActorRef, @BeanProperty recipient: ActorRef)
  extends NoSerializationVerificationNeeded

/**
 * Classes for passing status back to the sender.

View file

@ -54,7 +54,7 @@ object ActorPath {
   * Parse string as actor path; throws java.net.MalformedURLException if unable to do so.
   */
  def fromString(s: String): ActorPath = s match {
    case ActorPathExtractor(addr, elems) ⇒ RootActorPath(addr) / elems case ActorPathExtractor(address, elems) ⇒ RootActorPath(address) / elems
    case _ ⇒ throw new MalformedURLException("cannot parse as ActorPath: " + s)
  }
@ -367,10 +367,10 @@ final class ChildActorPath private[akka] (val parent: ActorPath, val name: Strin
    appendUidFragment(sb).toString
  }

  private def addressStringLengthDiff(addr: Address): Int = { private def addressStringLengthDiff(address: Address): Int = {
    val r = root
    if (r.address.host.isDefined) 0
    else (addr.toString.length - r.address.toString.length) else (address.toString.length - r.address.toString.length)
  }

  /**

View file

@ -159,7 +159,7 @@ abstract class ActorRef extends java.lang.Comparable[ActorRef] with Serializable
/**
 * This trait represents the Scala Actor API
 * There are implicit conversions in ../actor/Implicits.scala * There are implicit conversions in package.scala
 * from ActorRef -> ScalaActorRef and back
 */
trait ScalaActorRef { ref: ActorRef ⇒

View file

@ -9,6 +9,7 @@ import scala.collection.mutable
import akka.routing.{ Deafen, Listen, Listeners }
import scala.concurrent.duration.FiniteDuration
import scala.concurrent.duration._
import akka.annotation.InternalApi

object FSM {

@ -87,8 +88,9 @@ object FSM {
  /**
   * INTERNAL API
   */
  // FIXME: what about the cancellable? @InternalApi
  private[akka] final case class Timer(name: String, msg: Any, repeat: Boolean, generation: Int)(context: ActorContext) private[akka] final case class Timer(name: String, msg: Any, repeat: Boolean, generation: Int,
    owner: AnyRef)(context: ActorContext)
    extends NoSerializationVerificationNeeded {
    private var ref: Option[Cancellable] = _
    private val scheduler = context.system.scheduler
@ -419,7 +421,7 @@ trait FSM[S, D] extends Actor with Listeners with ActorLogging {
    if (timers contains name) {
      timers(name).cancel
    }
    val timer = Timer(name, msg, repeat, timerGen.next)(context) val timer = Timer(name, msg, repeat, timerGen.next, this)(context)
    timer.schedule(self, timeout)
    timers(name) = timer
  }
@ -616,8 +618,8 @@ trait FSM[S, D] extends Actor with Listeners with ActorLogging {
      if (generation == gen) {
        processMsg(StateTimeout, "state timeout")
      }
    case t @ Timer(name, msg, repeat, gen) ⇒ case t @ Timer(name, msg, repeat, gen, owner) ⇒
      if ((timers contains name) && (timers(name).generation == gen)) { if ((owner eq this) && (timers contains name) && (timers(name).generation == gen)) {
        if (timeoutFuture.isDefined) {
          timeoutFuture.get.cancel()
          timeoutFuture = None
@ -782,7 +784,7 @@ trait LoggingFSM[S, D] extends FSM[S, D] { this: Actor ⇒
    if (debugEvent) {
      val srcstr = source match {
        case s: String ⇒ s
        case Timer(name, _, _, _) ⇒ "timer " + name case Timer(name, _, _, _, _) ⇒ "timer " + name
        case a: ActorRef ⇒ a.toString
        case _ ⇒ "unknown"
      }

View file

@ -0,0 +1,119 @@
/**
* Copyright (C) 2017 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.actor
import scala.concurrent.duration.FiniteDuration
import akka.annotation.DoNotInherit
import akka.util.OptionVal
/**
* Scala API: Mix in Timers into your Actor to get support for scheduled
* `self` messages via [[TimerScheduler]].
*
 * Timers are bound to the lifecycle of the actor that owns them,
* and thus are cancelled automatically when it is restarted or stopped.
*/
trait Timers extends Actor {
private val _timers = new TimerSchedulerImpl(context)
/**
* Start and cancel timers via the enclosed `TimerScheduler`.
*/
final def timers: TimerScheduler = _timers
override protected[akka] def aroundPreRestart(reason: Throwable, message: Option[Any]): Unit = {
timers.cancelAll()
super.aroundPreRestart(reason, message)
}
override protected[akka] def aroundPostStop(): Unit = {
timers.cancelAll()
super.aroundPostStop()
}
override protected[akka] def aroundReceive(receive: Actor.Receive, msg: Any): Unit = {
    msg match {
      case timerMsg: TimerSchedulerImpl.TimerMsg ⇒
        _timers.interceptTimerMsg(timerMsg) match {
          case OptionVal.Some(m) ⇒ super.aroundReceive(receive, m)
          case OptionVal.None    ⇒ // discard
        }
      case _ ⇒
        super.aroundReceive(receive, msg)
    }
}
}
/**
* Java API: Support for scheduled `self` messages via [[TimerScheduler]].
*
 * Timers are bound to the lifecycle of the actor that owns them,
* and thus are cancelled automatically when it is restarted or stopped.
*/
abstract class AbstractActorWithTimers extends AbstractActor with Timers {
/**
* Start and cancel timers via the enclosed `TimerScheduler`.
*/
final def getTimers: TimerScheduler = timers
}
/**
* Support for scheduled `self` messages in an actor.
* It is used by mixing in trait `Timers` in Scala or extending `AbstractActorWithTimers`
* in Java.
*
 * Timers are bound to the lifecycle of the actor that owns them,
* and thus are cancelled automatically when it is restarted or stopped.
*
* `TimerScheduler` is not thread-safe, i.e. it must only be used within
* the actor that owns it.
*/
@DoNotInherit abstract class TimerScheduler {
/**
* Start a periodic timer that will send `msg` to the `self` actor at
* a fixed `interval`.
*
   * Each timer has a key and if a new timer with the same key is started
   * the previous one is cancelled, and it is guaranteed that a message from the
   * previous timer is not received, even though it might already be enqueued
   * in the mailbox when the new timer is started.
*/
def startPeriodicTimer(key: Any, msg: Any, interval: FiniteDuration): Unit
/**
* Start a timer that will send `msg` once to the `self` actor after
* the given `timeout`.
*
   * Each timer has a key and if a new timer with the same key is started
   * the previous one is cancelled, and it is guaranteed that a message from the
   * previous timer is not received, even though it might already be enqueued
   * in the mailbox when the new timer is started.
*/
def startSingleTimer(key: Any, msg: Any, timeout: FiniteDuration): Unit
/**
* Check if a timer with a given `key` is active.
*/
def isTimerActive(key: Any): Boolean
/**
* Cancel a timer with a given `key`.
   * If canceling a timer that was already canceled, or if the key was never
   * used to start a timer, this operation will do nothing.
*
* It is guaranteed that a message from a canceled timer, including its previous incarnation
* for the same key, will not be received by the actor, even though the message might already
* be enqueued in the mailbox when cancel is called.
*/
def cancel(key: Any): Unit
/**
* Cancel all timers.
*/
def cancelAll(): Unit
}
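The key/generation semantics described above can be sketched independently of the actor machinery (the `TimerMsg` and `GenerationBook` names here are hypothetical; the real implementation is `TimerSchedulerImpl`, which additionally schedules and cancels the underlying `Cancellable` tasks):

```scala
// Minimal sketch of the key/generation bookkeeping behind the guarantees above.
final case class TimerMsg(key: Any, generation: Int)

final class GenerationBook {
  private var gens = Map.empty[Any, Int] // active timer generation per key
  private var counter = 0

  def start(key: Any): TimerMsg = {
    counter += 1                 // a fresh generation invalidates older messages
    gens += key -> counter
    TimerMsg(key, counter)
  }

  def cancel(key: Any): Unit = gens -= key

  // a delivered message is only acted upon if its generation is still current
  def isCurrent(m: TimerMsg): Boolean = gens.get(m.key).contains(m.generation)
}
```

Starting a second timer with the same key makes any message from the first one stale, which is how the "message from the previous timer is not received" guarantee can be enforced even for messages already sitting in the mailbox.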

View file

@ -0,0 +1,111 @@
/**
* Copyright (C) 2017 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.actor
import scala.concurrent.duration.FiniteDuration
import akka.annotation.InternalApi
import akka.event.Logging
import akka.util.OptionVal
/**
* INTERNAL API
*/
@InternalApi private[akka] object TimerSchedulerImpl {
final case class Timer(key: Any, msg: Any, repeat: Boolean, generation: Int, task: Cancellable)
final case class TimerMsg(key: Any, generation: Int, owner: TimerSchedulerImpl)
extends NoSerializationVerificationNeeded
}
/**
* INTERNAL API
*/
@InternalApi private[akka] class TimerSchedulerImpl(ctx: ActorContext) extends TimerScheduler {
import TimerSchedulerImpl._
private val log = Logging(ctx.system, classOf[TimerScheduler])
private var timers: Map[Any, Timer] = Map.empty
private var timerGen = 0
private def nextTimerGen(): Int = {
timerGen += 1
timerGen
}
override def startPeriodicTimer(key: Any, msg: Any, interval: FiniteDuration): Unit =
startTimer(key, msg, interval, repeat = true)
override def startSingleTimer(key: Any, msg: Any, timeout: FiniteDuration): Unit =
startTimer(key, msg, timeout, repeat = false)
private def startTimer(key: Any, msg: Any, timeout: FiniteDuration, repeat: Boolean): Unit = {
    timers.get(key) match {
      case Some(t) ⇒ cancelTimer(t)
      case None    ⇒
    }
val nextGen = nextTimerGen()
val timerMsg = TimerMsg(key, nextGen, this)
val task =
if (repeat)
ctx.system.scheduler.schedule(timeout, timeout, ctx.self, timerMsg)(ctx.dispatcher)
else
ctx.system.scheduler.scheduleOnce(timeout, ctx.self, timerMsg)(ctx.dispatcher)
val nextTimer = Timer(key, msg, repeat, nextGen, task)
log.debug("Start timer [{}] with generation [{}]", key, nextGen)
timers = timers.updated(key, nextTimer)
}
override def isTimerActive(key: Any): Boolean =
timers.contains(key)
  override def cancel(key: Any): Unit = {
    timers.get(key) match {
      case None    ⇒ // already removed/canceled
      case Some(t) ⇒ cancelTimer(t)
    }
  }
private def cancelTimer(timer: Timer): Unit = {
log.debug("Cancel timer [{}] with generation [{}]", timer.key, timer.generation)
timer.task.cancel()
timers -= timer.key
}
override def cancelAll(): Unit = {
log.debug("Cancel all timers")
    timers.valuesIterator.foreach { timer ⇒
timer.task.cancel()
}
timers = Map.empty
}
def interceptTimerMsg(timerMsg: TimerMsg): OptionVal[AnyRef] = {
    timers.get(timerMsg.key) match {
      case None ⇒
        // it was from a canceled timer that was already enqueued in the mailbox
        log.debug("Received timer [{}] that has been removed, discarding", timerMsg.key)
        OptionVal.None // message should be ignored
      case Some(t) ⇒
if (timerMsg.owner ne this) {
// after restart, it was from an old instance that was enqueued in mailbox before canceled
log.debug("Received timer [{}] from old restarted instance, discarding", timerMsg.key)
OptionVal.None // message should be ignored
} else if (timerMsg.generation == t.generation) {
// valid timer
log.debug("Received timer [{}]", timerMsg.key)
if (!t.repeat)
timers -= t.key
OptionVal.Some(t.msg.asInstanceOf[AnyRef])
} else {
// it was from an old timer that was enqueued in mailbox before canceled
          log.debug(
            "Received timer [{}] from old generation [{}], expected generation [{}], discarding",
            timerMsg.key, timerMsg.generation, t.generation)
OptionVal.None // message should be ignored
}
}
}
}

View file

@ -8,11 +8,13 @@ import java.util.concurrent._
import java.{ util ⇒ ju }

import akka.actor._
import akka.dispatch.affinity.AffinityPoolConfigurator
import akka.dispatch.sysmsg._
import akka.event.EventStream
import akka.event.Logging.{ Debug, Error, LogEventException }
import akka.util.{ Index, Unsafe }
import com.typesafe.config.Config

import scala.annotation.tailrec
import scala.concurrent.{ ExecutionContext, ExecutionContextExecutor }
import scala.concurrent.duration.{ Duration, FiniteDuration }

@ -327,6 +329,8 @@ abstract class MessageDispatcherConfigurator(_config: Config, val prerequisites:
  def configurator(executor: String): ExecutorServiceConfigurator = executor match {
    case null | "" | "fork-join-executor" ⇒ new ForkJoinExecutorConfigurator(config.getConfig("fork-join-executor"), prerequisites)
    case "thread-pool-executor" ⇒ new ThreadPoolExecutorConfigurator(config.getConfig("thread-pool-executor"), prerequisites)
    case "affinity-pool-executor" ⇒ new AffinityPoolConfigurator(config.getConfig("affinity-pool-executor"), prerequisites)
    case fqcn ⇒
      val args = List(
        classOf[Config] → config,

View file

@ -0,0 +1,418 @@
/**
* Copyright (C) 2016-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.dispatch.affinity
import java.lang.invoke.MethodHandles
import java.lang.invoke.MethodType.methodType
import java.util.Collections
import java.util.concurrent.TimeUnit.MICROSECONDS
import java.util.concurrent._
import java.util.concurrent.atomic.{ AtomicInteger, AtomicReference }
import java.util.concurrent.locks.LockSupport
import java.lang.Integer.reverseBytes
import akka.dispatch._
import akka.util.Helpers.Requiring
import com.typesafe.config.Config
import akka.annotation.{ InternalApi, ApiMayChange }
import akka.event.Logging
import akka.util.{ ImmutableIntMap, OptionVal, ReentrantGuard }
import scala.annotation.{ tailrec, switch }
import scala.collection.{ mutable, immutable }
import scala.util.control.NonFatal
@InternalApi
@ApiMayChange
private[affinity] object AffinityPool {
type PoolState = Int
// PoolState: waiting to be initialized
final val Uninitialized = 0
// PoolState: currently in the process of initializing
final val Initializing = 1
// PoolState: accepts new tasks and processes tasks that are enqueued
final val Running = 2
// PoolState: does not accept new tasks, processes tasks that are in the queue
final val ShuttingDown = 3
// PoolState: does not accept new tasks, does not process tasks in queue
final val ShutDown = 4
// PoolState: all threads have been stopped, does not process tasks and does not accept new ones
final val Terminated = 5
// Method handle to JDK9+ onSpinWait method
private val onSpinWaitMethodHandle =
try
OptionVal.Some(MethodHandles.lookup.findStatic(classOf[Thread], "onSpinWait", methodType(classOf[Unit])))
    catch {
      case NonFatal(_) ⇒ OptionVal.None
    }
type IdleState = Int
// IdleState: Initial state
final val Initial = 0
// IdleState: Spinning
final val Spinning = 1
// IdleState: Yielding
final val Yielding = 2
// IdleState: Parking
final val Parking = 3
// Following are auxiliary class and trait definitions
private final class IdleStrategy(idleCpuLevel: Int) {
private[this] val maxSpins = 1100 * idleCpuLevel - 1000
private[this] val maxYields = 5 * idleCpuLevel
private[this] val minParkPeriodNs = 1
private[this] val maxParkPeriodNs = MICROSECONDS.toNanos(250 - ((80 * (idleCpuLevel - 1)) / 3))
private[this] var state: IdleState = Initial
private[this] var turns = 0L
private[this] var parkPeriodNs = 0L
@volatile private[this] var idling = false
@inline private[this] final def transitionTo(newState: IdleState): Unit = {
state = newState
turns = 0
}
final def isIdling: Boolean = idling
final def idle(): Unit = {
      (state: @switch) match {
        case Initial ⇒
          idling = true
          transitionTo(Spinning)
        case Spinning ⇒
          onSpinWaitMethodHandle match {
            case OptionVal.Some(m) ⇒ m.invokeExact()
            case OptionVal.None    ⇒
          }
          turns += 1
          if (turns > maxSpins)
            transitionTo(Yielding)
        case Yielding ⇒
          turns += 1
          if (turns > maxYields) {
            parkPeriodNs = minParkPeriodNs
            transitionTo(Parking)
          } else Thread.`yield`()
        case Parking ⇒
          LockSupport.parkNanos(parkPeriodNs)
          parkPeriodNs = Math.min(parkPeriodNs << 1, maxParkPeriodNs)
      }
}
final def reset(): Unit = {
idling = false
transitionTo(Initial)
}
}
private final class BoundedAffinityTaskQueue(capacity: Int) extends AbstractBoundedNodeQueue[Runnable](capacity)
}
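The Parking arm of `idle()` above doubles the park period up to a cap derived from `idleCpuLevel`. A sketch of that arithmetic in isolation, for `idleCpuLevel = 5` (illustrative; the real strategy also spins and yields before it starts parking):

```scala
import java.util.concurrent.TimeUnit.MICROSECONDS

// Park-period arithmetic of IdleStrategy for idleCpuLevel = 5.
val idleCpuLevel = 5
val minParkPeriodNs = 1L
// 250 - ((80 * 4) / 3) = 144 microseconds = 144000 ns at level 5
val maxParkPeriodNs = MICROSECONDS.toNanos(250 - ((80 * (idleCpuLevel - 1)) / 3))

// each idle turn while parking doubles the period, capped at the maximum
def nextParkPeriod(current: Long): Long = math.min(current << 1, maxParkPeriodNs)
```

Higher `idleCpuLevel` values shrink the maximum park period, so an idle worker wakes up more often (lower latency) at the cost of more CPU.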
/**
 * An [[ExecutorService]] implementation which pins actors to particular threads
 * and guarantees that an actor's [[Mailbox]] will be run on the thread it
 * previously ran on. In situations where we see a lot of cache ping-pong, this
* might lead to significant performance improvements.
*
* INTERNAL API
*/
@InternalApi
@ApiMayChange
private[akka] class AffinityPool(
id: String,
parallelism: Int,
affinityGroupSize: Int,
threadFactory: ThreadFactory,
idleCpuLevel: Int,
final val queueSelector: QueueSelector,
rejectionHandler: RejectionHandler)
extends AbstractExecutorService {
if (parallelism <= 0)
throw new IllegalArgumentException("Size of pool cannot be less or equal to 0")
import AffinityPool._
// Held while starting/shutting down workers/pool in order to make
// the operations linear and enforce atomicity. An example of that would be
  // adding a worker. We want the creation of the worker, its addition
  // to the set, and the starting of the worker to be an atomic action. Using
  // a concurrent set would not give us that.
private val bookKeepingLock = new ReentrantGuard()
// condition used for awaiting termination
private val terminationCondition = bookKeepingLock.newCondition()
// indicates the current state of the pool
@volatile final private var poolState: PoolState = Uninitialized
private[this] final val workQueues = Array.fill(parallelism)(new BoundedAffinityTaskQueue(affinityGroupSize))
private[this] final val workers = mutable.Set[AffinityPoolWorker]()
def start(): this.type =
bookKeepingLock.withGuard {
if (poolState == Uninitialized) {
poolState = Initializing
        workQueues.foreach(q ⇒ addWorker(workers, q))
poolState = Running
}
this
}
// WARNING: Only call while holding the bookKeepingLock
private def addWorker(workers: mutable.Set[AffinityPoolWorker], q: BoundedAffinityTaskQueue): Unit = {
val worker = new AffinityPoolWorker(q, new IdleStrategy(idleCpuLevel))
workers.add(worker)
worker.start()
}
/**
   * Each worker should go through this method while terminating.
* In turn each worker is responsible for modifying the pool
* state accordingly. For example if this is the last worker
* and the queue is empty and we are in a ShuttingDown state
* the worker can transition the pool to ShutDown and attempt
* termination
*
* Furthermore, if this worker has experienced abrupt termination
* due to an exception being thrown in user code, the worker is
* responsible for adding one more worker to compensate for its
* own termination
*
*/
private def onWorkerExit(w: AffinityPoolWorker, abruptTermination: Boolean): Unit =
bookKeepingLock.withGuard {
workers.remove(w)
if (abruptTermination && poolState == Running)
addWorker(workers, w.q)
else if (workers.isEmpty && !abruptTermination && poolState >= ShuttingDown) {
poolState = ShutDown // transition to shutdown and try to transition to termination
attemptPoolTermination()
}
}
override def execute(command: Runnable): Unit = {
val queue = workQueues(queueSelector.getQueue(command, parallelism)) // Will throw NPE if command is null
if (poolState >= ShuttingDown || !queue.add(command))
rejectionHandler.reject(command, this)
}
override def awaitTermination(timeout: Long, unit: TimeUnit): Boolean = {
    // recurse until the pool is terminated or the timeout is reached
@tailrec
def awaitTermination(nanos: Long): Boolean = {
if (poolState == Terminated) true
else if (nanos <= 0) false
else awaitTermination(terminationCondition.awaitNanos(nanos))
}
bookKeepingLock.withGuard {
// need to hold the lock to avoid monitor exception
awaitTermination(unit.toNanos(timeout))
}
}
// WARNING: Only call while holding the bookKeepingLock
private def attemptPoolTermination(): Unit =
if (workers.isEmpty && poolState == ShutDown) {
poolState = Terminated
terminationCondition.signalAll()
}
override def shutdownNow(): java.util.List[Runnable] =
bookKeepingLock.withGuard {
poolState = ShutDown
workers.foreach(_.stop())
attemptPoolTermination()
// like in the FJ executor, we do not provide facility to obtain tasks that were in queue
Collections.emptyList[Runnable]()
}
override def shutdown(): Unit =
bookKeepingLock.withGuard {
poolState = ShuttingDown
      // interrupts only idle workers, so others can process their queues
workers.foreach(_.stopIfIdle())
attemptPoolTermination()
}
override def isShutdown: Boolean = poolState >= ShutDown
override def isTerminated: Boolean = poolState == Terminated
override def toString: String =
s"${Logging.simpleName(this)}(id = $id, parallelism = $parallelism, affinityGroupSize = $affinityGroupSize, threadFactory = $threadFactory, idleCpuLevel = $idleCpuLevel, queueSelector = $queueSelector, rejectionHandler = $rejectionHandler)"
private[this] final class AffinityPoolWorker( final val q: BoundedAffinityTaskQueue, final val idleStrategy: IdleStrategy) extends Runnable {
final val thread: Thread = threadFactory.newThread(this)
final def start(): Unit =
if (thread eq null) throw new IllegalStateException(s"Was not able to allocate worker thread for ${AffinityPool.this}")
else thread.start()
override final def run(): Unit = {
// Returns true if it executed something, false otherwise
def executeNext(): Boolean = {
val c = q.poll()
val next = c ne null
if (next) {
c.run()
idleStrategy.reset()
} else {
idleStrategy.idle() // if not wait for a bit
}
next
}
/**
* We keep running as long as we are Running
* or we're ShuttingDown but we still have tasks to execute,
* and we're not interrupted.
*/
@tailrec def runLoop(): Unit =
if (!Thread.interrupted()) {
          (poolState: @switch) match {
            case Uninitialized ⇒ ()
            case Initializing | Running ⇒
              executeNext()
              runLoop()
            case ShuttingDown ⇒
              if (executeNext()) runLoop()
              else ()
            case ShutDown | Terminated ⇒ ()
          }
}
var abruptTermination = true
try {
runLoop()
abruptTermination = false // if we have reached here, our termination is not due to an exception
} finally {
onWorkerExit(this, abruptTermination)
}
}
def stop(): Unit = if (!thread.isInterrupted) thread.interrupt()
def stopIfIdle(): Unit = if (idleStrategy.isIdling) stop()
}
}
/**
* INTERNAL API
*/
@InternalApi
@ApiMayChange
private[akka] final class AffinityPoolConfigurator(config: Config, prerequisites: DispatcherPrerequisites)
extends ExecutorServiceConfigurator(config, prerequisites) {
private val poolSize = ThreadPoolConfig.scaledPoolSize(
config.getInt("parallelism-min"),
config.getDouble("parallelism-factor"),
config.getInt("parallelism-max"))
private val taskQueueSize = config.getInt("task-queue-size")
  private val idleCpuLevel = config.getInt("idle-cpu-level").requiring(level ⇒
    1 <= level && level <= 10, "idle-cpu-level must be between 1 and 10")
private val queueSelectorFactoryFQCN = config.getString("queue-selector")
private val queueSelectorFactory: QueueSelectorFactory =
    prerequisites.dynamicAccess.createInstanceFor[QueueSelectorFactory](queueSelectorFactoryFQCN, immutable.Seq(classOf[Config] → config))
.recover({
        case exception ⇒ throw new IllegalArgumentException(
s"Cannot instantiate QueueSelectorFactory(queueSelector = $queueSelectorFactoryFQCN), make sure it has an accessible constructor which accepts a Config parameter")
}).get
private val rejectionHandlerFactoryFCQN = config.getString("rejection-handler")
private val rejectionHandlerFactory = prerequisites.dynamicAccess
.createInstanceFor[RejectionHandlerFactory](rejectionHandlerFactoryFCQN, Nil).recover({
      case exception ⇒ throw new IllegalArgumentException(
s"Cannot instantiate RejectionHandlerFactory(rejection-handler = $rejectionHandlerFactoryFCQN), make sure it has an accessible empty constructor",
exception)
}).get
override def createExecutorServiceFactory(id: String, threadFactory: ThreadFactory): ExecutorServiceFactory =
new ExecutorServiceFactory {
override def createExecutorService: ExecutorService =
new AffinityPool(id, poolSize, taskQueueSize, threadFactory, idleCpuLevel, queueSelectorFactory.create(), rejectionHandlerFactory.create()).start()
}
}
trait RejectionHandler {
def reject(command: Runnable, service: ExecutorService)
}
trait RejectionHandlerFactory {
def create(): RejectionHandler
}
trait QueueSelectorFactory {
def create(): QueueSelector
}
/**
* A `QueueSelector` is responsible for, given a `Runnable` and the number of available
* queues, return which of the queues that `Runnable` should be placed in.
*/
trait QueueSelector {
  /**
   * Must be deterministic: return the same value for the same input.
   * @return given a `Runnable`, a number between 0 .. `queues` (exclusive)
   * @throws NullPointerException when `command` is `null`
   */
  def getQueue(command: Runnable, queues: Int): Int
}
/**
* INTERNAL API
*/
@InternalApi
@ApiMayChange
private[akka] final class ThrowOnOverflowRejectionHandler extends RejectionHandlerFactory with RejectionHandler {
override final def reject(command: Runnable, service: ExecutorService): Unit =
throw new RejectedExecutionException(s"Task $command rejected from $service")
override final def create(): RejectionHandler = this
}
/**
* INTERNAL API
*/
@InternalApi
@ApiMayChange
private[akka] final class FairDistributionHashCache( final val config: Config) extends QueueSelectorFactory {
private final val MaxFairDistributionThreshold = 2048
  private[this] final val fairDistributionThreshold = config.getInt("fair-work-distribution.threshold").requiring(thr ⇒
    0 <= thr && thr <= MaxFairDistributionThreshold, s"fair-work-distribution.threshold must be between 0 and $MaxFairDistributionThreshold")
override final def create(): QueueSelector = new AtomicReference[ImmutableIntMap](ImmutableIntMap.empty) with QueueSelector {
override def toString: String = s"FairDistributionHashCache(fairDistributionThreshold = $fairDistributionThreshold)"
private[this] final def improve(h: Int): Int = Math.abs(reverseBytes(h * 0x9e3775cd) * 0x9e3775cd) // `sbhash`: In memory of Phil Bagwell.
override final def getQueue(command: Runnable, queues: Int): Int = {
val runnableHash = command.hashCode()
if (fairDistributionThreshold == 0)
improve(runnableHash) % queues
else {
@tailrec
def cacheLookup(prev: ImmutableIntMap, hash: Int): Int = {
val existingIndex = prev.get(runnableHash)
if (existingIndex >= 0) existingIndex
else if (prev.size > fairDistributionThreshold) improve(hash) % queues
else {
val index = prev.size % queues
if (compareAndSet(prev, prev.updated(runnableHash, index))) index
else cacheLookup(get(), hash)
}
}
cacheLookup(get(), runnableHash)
}
}
}
}
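The threshold-then-hash selection in `FairDistributionHashCache` can be sketched with a plain immutable `Map` in place of Akka's `ImmutableIntMap` (the `FairSelector` name and taking the hash directly as a parameter are simplifications made here for illustration):

```scala
import java.lang.Integer.reverseBytes
import java.util.concurrent.atomic.AtomicReference
import scala.annotation.tailrec

// Below the threshold: round-robin assignment remembered in a CAS-updated map.
// Above the threshold: stateless uniform hashing, avoiding map look-ups.
final class FairSelector(threshold: Int) {
  private val cache = new AtomicReference(Map.empty[Int, Int])

  private def improve(h: Int): Int =
    math.abs(reverseBytes(h * 0x9e3775cd) * 0x9e3775cd) // same mixer as above

  @tailrec def select(hash: Int, queues: Int): Int = {
    val prev = cache.get()
    prev.get(hash) match {
      case Some(idx)                     => idx                    // already assigned
      case None if prev.size > threshold => improve(hash) % queues // hash based
      case None =>
        val idx = prev.size % queues // map based: next round-robin slot
        if (cache.compareAndSet(prev, prev.updated(hash, idx))) idx
        else select(hash, queues)    // lost the race, retry against the new map
    }
  }
}
```

The retry loop mirrors `cacheLookup`: a failed `compareAndSet` simply means another thread registered an entry first, so the lookup is repeated against the fresh map.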

View file

@ -8,6 +8,7 @@ import java.util.concurrent.atomic.AtomicInteger
import akka.actor.ActorSystem.Settings
import akka.actor._
import akka.annotation.{ DoNotInherit, InternalApi }
import akka.dispatch.RequiresMessageQueue
import akka.event.Logging._
import akka.util.ReentrantGuard

@ -1403,7 +1404,9 @@ trait DiagnosticLoggingAdapter extends LoggingAdapter {
  def clearMDC(): Unit = mdc(emptyMDC)
}

final class LogMarker(val name: String) /** DO NOT INHERIT: Class is open only for use by akka-slf4j */
@DoNotInherit
class LogMarker(val name: String)

object LogMarker {
  /** The Marker is internally transferred via MDC using this key */
  private[akka] final val MDCKey = "marker"

View file

@ -30,7 +30,7 @@ object Dns extends ExtensionId[DnsExt] with ExtensionIdProvider {
  @throws[UnknownHostException]
  def addr: InetAddress = addrOption match {
    case Some(addr) ⇒ addr case Some(ipAddress) ⇒ ipAddress
    case None ⇒ throw new UnknownHostException(name)
  }
}

View file

@@ -68,6 +68,12 @@ private[io] trait ChannelRegistration extends NoSerializationVerificationNeeded
 }

 private[io] object SelectionHandler {

+  // Let select return every MaxSelectMillis, which will automatically clean up stale entries in the selection set.
+  // Otherwise, an idle Selector might block for a long time keeping a reference to the dead connection actor's ActorRef,
+  // which might keep other stuff in memory.
+  // See https://github.com/akka/akka/issues/23437
+  // As this is basic house-keeping functionality it doesn't seem useful to make the value configurable.
+  val MaxSelectMillis = 10000 // wake up once in 10 seconds

   trait HasFailureMessage {
     def failureMessage: Any

@@ -119,7 +125,7 @@ private[io] object SelectionHandler {
     private[this] val select = new Task {
       def tryRun(): Unit = {
-        if (selector.select() > 0) { // This assumes select return value == selectedKeys.size
+        if (selector.select(MaxSelectMillis) > 0) { // This assumes select return value == selectedKeys.size
           val keys = selector.selectedKeys
           val iterator = keys.iterator()
           while (iterator.hasNext) {
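The timed `select` is the heart of this change: with a timeout, the selector loop wakes up periodically even when no channel is ready. A minimal self-contained sketch of the timed overload's behaviour, using plain Java NIO from Scala (no Akka involved):

```scala
import java.nio.channels.Selector

object SelectorWakeupDemo {
  def main(args: Array[String]): Unit = {
    val selector = Selector.open()
    // selector.select() with no ready channels blocks indefinitely (until an
    // external wakeup); the timed overload returns after at most the given
    // millis, giving the calling loop a chance to do periodic house-keeping.
    val ready = selector.select(100L) // short timeout just for the demo
    assert(ready == 0) // nothing was ready; the call simply timed out
    selector.close()
  }
}
```

In `SelectionHandler` the timeout is 10 seconds: long enough to be negligible for throughput, short enough that references to dead connection actors are released promptly.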
@@ -3,18 +3,28 @@
  */
 package akka.pattern

-import akka.actor.{ ActorSelection, Scheduler }
-import java.util.concurrent.{ Callable, TimeUnit }
-import scala.concurrent.ExecutionContext
-import scala.concurrent.duration.FiniteDuration
-import java.util.concurrent.CompletionStage
-import scala.compat.java8.FutureConverters._
+import java.util.concurrent.{ Callable, CompletionStage, TimeUnit }
+
+import akka.actor.{ ActorSelection, Scheduler }
+
+import scala.compat.java8.FutureConverters._
+import scala.concurrent.ExecutionContext
+import scala.concurrent.duration.FiniteDuration

+/**
+ * "Pre Java 8" Java API for Akka patterns such as `ask`, `pipe` and others.
+ *
+ * These methods can be called from Java but work with the Scala [[scala.concurrent.Future]],
+ * since no non-blocking reactive Future implementation was available before Java 8.
+ *
+ * For Java applications developed with Java 8 and later, you might want to use [[akka.pattern.PatternsCS]] instead,
+ * which provides alternatives for these patterns that work with [[java.util.concurrent.CompletionStage]].
+ */
 object Patterns {
+  import akka.actor.ActorRef
   import akka.japi
-  import akka.actor.{ ActorRef }
-  import akka.pattern.{ ask ⇒ scalaAsk, pipe ⇒ scalaPipe, gracefulStop ⇒ scalaGracefulStop, after ⇒ scalaAfter }
+  import akka.pattern.{ after ⇒ scalaAfter, ask ⇒ scalaAsk, gracefulStop ⇒ scalaGracefulStop, pipe ⇒ scalaPipe }
   import akka.util.Timeout
   import scala.concurrent.Future
   import scala.concurrent.duration._

@@ -259,11 +269,17 @@ object Patterns {
     scalaAfter(duration, scheduler)(value)(context)
   }

+/**
+ * Java 8+ API for Akka patterns such as `ask`, `pipe` and others which work with [[java.util.concurrent.CompletionStage]].
+ *
+ * For working with the Scala [[scala.concurrent.Future]] from Java you may want to use [[akka.pattern.Patterns]] instead.
+ */
 object PatternsCS {
+  import akka.actor.ActorRef
   import akka.japi
-  import akka.actor.{ ActorRef }
   import akka.pattern.{ ask ⇒ scalaAsk, gracefulStop ⇒ scalaGracefulStop }
   import akka.util.Timeout
   import scala.concurrent.duration._

   /**
@@ -179,6 +179,7 @@ class Serialization(val system: ExtendedActorSystem) extends Extension {
    * using the optional type hint to the Serializer.
    * Returns either the resulting object or throws an exception if deserialization fails.
    */
+  @throws(classOf[NotSerializableException])
   def deserializeByteBuffer(buf: ByteBuffer, serializerId: Int, manifest: String): AnyRef = {
     val serializer = try getSerializerById(serializerId) catch {
       case _: NoSuchElementException ⇒ throw new NotSerializableException(

@@ -220,6 +221,7 @@ class Serialization(val system: ExtendedActorSystem) extends Extension {
    *
    * Throws java.io.NotSerializableException if no `serialization-bindings` is configured for the class.
    */
+  @throws(classOf[NotSerializableException])
   def serializerFor(clazz: Class[_]): Serializer =
     serializerMap.get(clazz) match {
       case null ⇒ // bindings are ordered from most specific to least specific
@@ -4,7 +4,7 @@ package akka.serialization
  * Copyright (C) 2009-2017 Lightbend Inc. <http://www.lightbend.com>
  */

-import java.io.{ ByteArrayInputStream, ByteArrayOutputStream, ObjectOutputStream }
+import java.io.{ ByteArrayInputStream, ByteArrayOutputStream, NotSerializableException, ObjectOutputStream }
 import java.nio.ByteBuffer
 import java.util.concurrent.Callable

@@ -57,6 +57,7 @@ trait Serializer {
    * Produces an object from an array of bytes, with an optional type-hint;
    * the class should be loaded using ActorSystem.dynamicAccess.
    */
+  @throws(classOf[NotSerializableException])
   def fromBinary(bytes: Array[Byte], manifest: Option[Class[_]]): AnyRef

@@ -67,6 +68,7 @@ trait Serializer {
   /**
    * Java API: deserialize with type hint
    */
+  @throws(classOf[NotSerializableException])
   final def fromBinary(bytes: Array[Byte], clazz: Class[_]): AnyRef = fromBinary(bytes, Option(clazz))
 }

@@ -135,6 +137,7 @@ abstract class SerializerWithStringManifest extends Serializer {
    * and message is dropped. Other exceptions will tear down the TCP connection
    * because it can be an indication of corrupt bytes from the underlying transport.
    */
+  @throws(classOf[NotSerializableException])
   def fromBinary(bytes: Array[Byte], manifest: String): AnyRef

   final def fromBinary(bytes: Array[Byte], manifest: Option[Class[_]]): AnyRef = {

@@ -194,6 +197,7 @@ trait ByteBufferSerializer {
    * Produces an object from a `ByteBuffer`, with an optional type-hint;
    * the class should be loaded using ActorSystem.dynamicAccess.
    */
+  @throws(classOf[NotSerializableException])
   def fromBinary(buf: ByteBuffer, manifest: String): AnyRef
 }

@@ -257,6 +261,8 @@ object BaseSerializer {
  * the JSerializer (also possible with empty constructor).
  */
 abstract class JSerializer extends Serializer {
+
+  @throws(classOf[NotSerializableException])
   final def fromBinary(bytes: Array[Byte], manifest: Option[Class[_]]): AnyRef =
     fromBinaryJava(bytes, manifest.orNull)

@@ -315,6 +321,7 @@ class JavaSerializer(val system: ExtendedActorSystem) extends BaseSerializer {
     bos.toByteArray
   }

+  @throws(classOf[NotSerializableException])
   def fromBinary(bytes: Array[Byte], clazz: Option[Class[_]]): AnyRef = {
     val in = new ClassLoaderObjectInputStream(system.dynamicAccess.classLoader, new ByteArrayInputStream(bytes))
     val obj = JavaSerializer.currentSystem.withValue(system) { in.readObject }

@@ -344,11 +351,13 @@ final case class DisabledJavaSerializer(system: ExtendedActorSystem) extends Ser
     throw IllegalSerialization
   }

+  @throws(classOf[NotSerializableException])
   override def fromBinary(bytes: Array[Byte], clazz: Option[Class[_]]): AnyRef = {
     log.warning(LogMarker.Security, "Incoming message attempted to use Java Serialization even though `akka.actor.allow-java-serialization = off` was set!")
     throw IllegalDeserialization
   }

+  @throws(classOf[NotSerializableException])
   override def fromBinary(buf: ByteBuffer, manifest: String): AnyRef = {
     // we don't capture the manifest or mention it in the log as the default setting for includeManifest is set to false.
     log.warning(LogMarker.Security, "Incoming message attempted to use Java Serialization even though `akka.actor.allow-java-serialization = off` was set!")

@@ -376,6 +385,7 @@ class NullSerializer extends Serializer {
   def includeManifest: Boolean = false
   def identifier = 0
   def toBinary(o: AnyRef): Array[Byte] = nullAsBytes
+  @throws(classOf[NotSerializableException])
   def fromBinary(bytes: Array[Byte], clazz: Option[Class[_]]): AnyRef = null
 }

@@ -392,6 +402,8 @@ class ByteArraySerializer(val system: ExtendedActorSystem) extends BaseSerialize
     case other ⇒ throw new IllegalArgumentException(
       s"${getClass.getName} only serializes byte arrays, not [${other.getClass.getName}]")
   }
+
+  @throws(classOf[NotSerializableException])
   def fromBinary(bytes: Array[Byte], clazz: Option[Class[_]]): AnyRef = bytes

   override def toBinary(o: AnyRef, buf: ByteBuffer): Unit =

@@ -402,6 +414,7 @@ class ByteArraySerializer(val system: ExtendedActorSystem) extends BaseSerialize
       s"${getClass.getName} only serializes byte arrays, not [${other.getClass.getName}]")
   }
+  @throws(classOf[NotSerializableException])
   override def fromBinary(buf: ByteBuffer, manifest: String): AnyRef = {
     val bytes = new Array[Byte](buf.remaining())
     buf.get(bytes)
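The `@throws(classOf[NotSerializableException])` annotations document the deserializer contract: unknown or un-decodable input is signalled with `NotSerializableException`, which the transport treats as "drop this message" rather than "tear down the connection". A self-contained sketch of that contract (`ManifestDeserializerDemo` is a hypothetical stand-in, not the real `SerializerWithStringManifest`):

```scala
import java.io.NotSerializableException

object ManifestDeserializerDemo {
  // A known manifest selects the decoder; an unknown one throws
  // NotSerializableException instead of some arbitrary exception.
  def fromBinary(bytes: Array[Byte], manifest: String): AnyRef = manifest match {
    case "Text"  => new String(bytes, "UTF-8")
    case unknown => throw new NotSerializableException(s"Unknown manifest [$unknown]")
  }

  def main(args: Array[String]): Unit = {
    assert(fromBinary("hi".getBytes("UTF-8"), "Text") == "hi")
    val signalled =
      try { fromBinary(Array.emptyByteArray, "Nope"); false }
      catch { case _: NotSerializableException => true }
    assert(signalled) // unknown manifest produced the documented exception
  }
}
```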
@@ -60,6 +60,24 @@ object ByteString {
    */
   def fromArray(array: Array[Byte]): ByteString = apply(array)

+  /**
+   * Unsafe API: Use only in situations you are completely confident that this is what
+   * you need, and that you understand the implications documented below.
+   *
+   * Creates a ByteString without copying the passed in byte array, unlike other factory
+   * methods defined on ByteString. This method of creating a ByteString saves one array
+   * copy and allocation and therefore can lead to better performance, however it also means
+   * that one MUST NOT modify the passed in array, or unexpected immutable data structure
+   * contract-breaking behaviour will manifest itself.
+   *
+   * This API is intended for users who have obtained a byte array from some other API, and
+   * want to wrap it into a ByteString, and from there on only use that reference (the ByteString)
+   * to operate on the wrapped data. For all other intents and purposes, please use the usual
+   * apply and create methods - which provide the immutability guarantees by copying the array.
+   */
+  def fromArrayUnsafe(array: Array[Byte]): ByteString = ByteString1C(array)

   /**
    * Creates a new ByteString by copying length bytes starting at offset from
    * an Array.
@@ -67,6 +85,24 @@ object ByteString {
   def fromArray(array: Array[Byte], offset: Int, length: Int): ByteString =
     CompactByteString.fromArray(array, offset, length)

+  /**
+   * Unsafe API: Use only in situations you are completely confident that this is what
+   * you need, and that you understand the implications documented below.
+   *
+   * Creates a ByteString without copying the passed in byte array, unlike other factory
+   * methods defined on ByteString. This method of creating a ByteString saves one array
+   * copy and allocation and therefore can lead to better performance, however it also means
+   * that one MUST NOT modify the passed in array, or unexpected immutable data structure
+   * contract-breaking behaviour will manifest itself.
+   *
+   * This API is intended for users who have obtained a byte array from some other API, and
+   * want to wrap it into a ByteString, and from there on only use that reference (the ByteString)
+   * to operate on the wrapped data. For all other intents and purposes, please use the usual
+   * apply and create methods - which provide the immutability guarantees by copying the array.
+   */
+  def fromArrayUnsafe(array: Array[Byte], offset: Int, length: Int): ByteString = ByteString1(array, offset, length)
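The hazard the documentation warns about is aliasing: the wrapper and the caller share one array. A self-contained sketch of the failure mode, using a plain array wrapper rather than the real `ByteString`:

```scala
object UnsafeWrapDemo {
  // Simplified stand-in for ByteString.fromArrayUnsafe: the wrapper keeps a
  // reference to the caller's array instead of making a defensive copy.
  final class Wrapped(bytes: Array[Byte]) {
    def apply(i: Int): Byte = bytes(i)
  }

  def main(args: Array[String]): Unit = {
    val arr = Array[Byte](1, 2, 3)
    val wrapped = new Wrapped(arr) // no copy, like fromArrayUnsafe
    arr(0) = 42                    // caller mutation is visible through the wrapper,
    assert(wrapped(0) == 42)       // so the supposedly immutable view silently changed
  }
}
```

This is exactly why the doc comment says the caller must hand over ownership of the array and never touch it again.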
 /**
  * JAVA API
  * Creates a new ByteString by copying an int array by converting from integral numbers to bytes.
@@ -0,0 +1,145 @@
/**
* Copyright (C) 2016-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.util
import java.util.Arrays
import akka.annotation.InternalApi
import scala.annotation.tailrec
/**
* INTERNAL API
*/
@InternalApi private[akka] object ImmutableIntMap {
final val empty: ImmutableIntMap = new ImmutableIntMap(Array.emptyIntArray, 0)
}
/**
 * INTERNAL API
 * Specialized Map for primitive `Int` keys and values to avoid allocations (boxing).
 * Keys and values are encoded consecutively in a single Int array and it does copy-on-write with no
 * structural sharing; it's intended for rather small maps (<1000 elements).
 */
@InternalApi private[akka] final class ImmutableIntMap private (private final val kvs: Array[Int], final val size: Int) {
private final def this(key: Int, value: Int) = {
this(new Array[Int](2), 1)
kvs(0) = key
kvs(1) = value
}
private[this] final def indexForKey(key: Int): Int = {
    // Custom implementation of binary search since we encode key + value in consecutive indices.
// We do the binary search on half the size of the array then project to the full size.
// >>> 1 for division by 2: https://research.googleblog.com/2006/06/extra-extra-read-all-about-it-nearly.html
@tailrec def find(lo: Int, hi: Int): Int =
if (lo <= hi) {
val lohi = lo + hi // Since we search in half the array we don't need to div by 2 to find the real index of key
val idx = lohi & ~1 // Since keys are in even slots, we get the key idx from lo+hi by removing the lowest bit if set (odd)
val k = kvs(idx)
if (k == key) idx
else if (k < key) find((lohi >>> 1) + 1, hi)
else /* if (k > key) */ find(lo, (lohi >>> 1) - 1)
} else ~(lo << 1) // same as -((lo*2)+1): Item should be placed, negated to indicate no match
find(0, size - 1)
}
  /**
   * Worst case `O(log n)`, allocation free.
   * Will return Int.MinValue if the key is not found, so beware of storing Int.MinValue as a value.
   */
final def get(key: Int): Int = {
    // same binary search as in `indexForKey`, replicated here for performance reasons.
@tailrec def find(lo: Int, hi: Int): Int =
if (lo <= hi) {
val lohi = lo + hi // Since we search in half the array we don't need to div by 2 to find the real index of key
val k = kvs(lohi & ~1) // Since keys are in even slots, we get the key idx from lo+hi by removing the lowest bit if set (odd)
if (k == key) kvs(lohi | 1) // lohi, if odd, already points to the value-index, if even, we set the lowest bit to add 1
else if (k < key) find((lohi >>> 1) + 1, hi)
else /* if (k > key) */ find(lo, (lohi >>> 1) - 1)
} else Int.MinValue
find(0, size - 1)
}
/**
* Worst case `O(log n)`, allocation free.
*/
final def contains(key: Int): Boolean = indexForKey(key) >= 0
/**
* Worst case `O(n)`, creates new `ImmutableIntMap`
* with the given key and value if that key is not yet present in the map.
*/
final def updateIfAbsent(key: Int, value: Int): ImmutableIntMap =
if (size > 0) {
val i = indexForKey(key)
if (i >= 0) this
else insert(key, value, i)
} else new ImmutableIntMap(key, value)
/**
* Worst case `O(n)`, creates new `ImmutableIntMap`
* with the given key with the given value.
*/
final def updated(key: Int, value: Int): ImmutableIntMap =
if (size > 0) {
val i = indexForKey(key)
if (i >= 0) {
val valueIndex = i + 1
if (kvs(valueIndex) != value) update(value, valueIndex)
else this // If no change no need to copy anything
} else insert(key, value, i)
} else new ImmutableIntMap(key, value)
private[this] final def update(value: Int, valueIndex: Int): ImmutableIntMap = {
val newKvs = kvs.clone() // clone() can in theory be faster since it could do a malloc + memcpy iso. calloc etc
newKvs(valueIndex) = value
new ImmutableIntMap(newKvs, size)
}
private[this] final def insert(key: Int, value: Int, index: Int): ImmutableIntMap = {
    val at = ~index // ~n == -(n + 1): insert the entry at the right position to keep the array sorted
val newKvs = new Array[Int](kvs.length + 2)
System.arraycopy(kvs, 0, newKvs, 0, at)
newKvs(at) = key
newKvs(at + 1) = value
System.arraycopy(kvs, at, newKvs, at + 2, kvs.length - at)
new ImmutableIntMap(newKvs, size + 1)
}
/**
* Worst case `O(n)`, creates new `ImmutableIntMap`
* without the given key.
*/
final def remove(key: Int): ImmutableIntMap = {
val i = indexForKey(key)
if (i >= 0) {
if (size > 1) {
val newSz = kvs.length - 2
val newKvs = new Array[Int](newSz)
System.arraycopy(kvs, 0, newKvs, 0, i)
System.arraycopy(kvs, i + 2, newKvs, i, newSz - i)
new ImmutableIntMap(newKvs, size - 1)
} else ImmutableIntMap.empty
} else this
}
/**
* All keys
*/
final def keysIterator: Iterator[Int] =
if (size < 1) Iterator.empty
else Iterator.range(0, kvs.length - 1, 2).map(kvs.apply)
override final def toString: String =
if (size < 1) "ImmutableIntMap()"
else Iterator.range(0, kvs.length - 1, 2).map(i s"${kvs(i)} -> ${kvs(i + 1)}").mkString("ImmutableIntMap(", ", ", ")")
override final def hashCode: Int = Arrays.hashCode(kvs)
override final def equals(obj: Any): Boolean = obj match {
case other: ImmutableIntMap Arrays.equals(kvs, other.kvs) // No need to test `this eq obj` since this is done for the kvs arrays anyway
case _ false
}
}
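The even/odd slot encoding that `ImmutableIntMap` searches can be illustrated with a stripped-down, self-contained lookup. This sketch simplifies the index arithmetic to a conventional binary search over the key slots (`PackedLookupDemo` and its fixture are illustrative, not Akka code):

```scala
object PackedLookupDemo {
  // Keys live in even slots, the matching value in the next odd slot;
  // keys are kept sorted so lookup is a binary search over the key slots.
  val kvs: Array[Int] = Array(1, 10, 3, 30, 7, 70) // {1 -> 10, 3 -> 30, 7 -> 70}

  def get(key: Int): Int = {
    var lo = 0
    var hi = kvs.length / 2 - 1
    while (lo <= hi) {
      val mid = (lo + hi) >>> 1
      val k = kvs(mid * 2)                  // key slot
      if (k == key) return kvs(mid * 2 + 1) // adjacent value slot
      else if (k < key) lo = mid + 1
      else hi = mid - 1
    }
    Int.MinValue // sentinel for "not found", as in ImmutableIntMap.get
  }

  def main(args: Array[String]): Unit = {
    assert(get(3) == 30)
    assert(get(5) == Int.MinValue)
  }
}
```

`ImmutableIntMap` avoids even the `mid * 2` multiplication by running the search directly on packed indices, but the observable behaviour is the same.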
@@ -28,7 +28,7 @@ private[akka] object OptionVal {
  *
  * Note that it can be used in pattern matching without allocations
  * because it has a name based extractor using methods `isEmpty` and `get`.
- * See http://hseeberger.github.io/blog/2013/10/04/name-based-extractors-in-scala-2-dot-11/
+ * See https://hseeberger.wordpress.com/2013/10/04/name-based-extractors-in-scala-2-11/
  */
 private[akka] final class OptionVal[+A >: Null](val x: A) extends AnyVal {
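The allocation-free matching mentioned in that comment relies on Scala's name-based extractors: `unapply` may return any type exposing `isEmpty` and `get` instead of an `Option`. A small self-contained sketch (a simplified `OptVal`, not the real `OptionVal`):

```scala
object NameBasedExtractorDemo {
  // A value class with isEmpty/get: unapply returns it directly and the
  // pattern matcher calls those methods, so no Option is allocated.
  final class OptVal[+A >: Null](val x: A) extends AnyVal {
    def isEmpty: Boolean = x == null
    def get: A = x
  }
  object OptVal {
    def unapply[A >: Null](v: OptVal[A]): OptVal[A] = v
  }

  def describe(v: OptVal[String]): String = v match {
    case OptVal(s) => s"some($s)"
    case _         => "none"
  }

  def main(args: Array[String]): Unit = {
    assert(describe(new OptVal("hi")) == "some(hi)")
    assert(describe(new OptVal(null)) == "none")
  }
}
```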

akka-bench-jmh/README.md (new file)
@@ -0,0 +1,10 @@
# Akka Microbenchmarks

This subproject contains some microbenchmarks exercising key parts of Akka.

You can run them like:

    project akka-bench-jmh
    jmh:run -i 3 -wi 3 -f 1 .*ActorCreationBenchmark

Use 'jmh:run -h' to get an overview of the available options.
@@ -0,0 +1,94 @@
/**
* Copyright (C) 2014-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.actor
import java.util.concurrent.TimeUnit
import akka.actor.BenchmarkActors._
import akka.actor.ForkJoinActorBenchmark.cores
import com.typesafe.config.ConfigFactory
import org.openjdk.jmh.annotations._
@State(Scope.Benchmark)
@BenchmarkMode(Array(Mode.Throughput))
@Fork(1)
@Threads(1)
@Warmup(iterations = 10, time = 5, timeUnit = TimeUnit.SECONDS, batchSize = 1)
@Measurement(iterations = 10, time = 15, timeUnit = TimeUnit.SECONDS, batchSize = 1)
class AffinityPoolComparativeBenchmark {
@Param(Array("1"))
var throughPut = 0
@Param(Array("affinity-dispatcher", "default-fj-dispatcher", "fixed-size-dispatcher"))
var dispatcher = ""
@Param(Array("SingleConsumerOnlyUnboundedMailbox")) //"default"
var mailbox = ""
final val numThreads, numActors = 8
final val numMessagesPerActorPair = 2000000
final val totalNumberOfMessages = numMessagesPerActorPair * (numActors / 2)
implicit var system: ActorSystem = _
@Setup(Level.Trial)
def setup(): Unit = {
requireRightNumberOfCores(cores)
val mailboxConf = mailbox match {
case "default" => ""
case "SingleConsumerOnlyUnboundedMailbox" =>
s"""default-mailbox.mailbox-type = "${classOf[akka.dispatch.SingleConsumerOnlyUnboundedMailbox].getName}""""
}
system = ActorSystem("AffinityPoolComparativeBenchmark", ConfigFactory.parseString(
s"""| akka {
| log-dead-letters = off
| actor {
| default-fj-dispatcher {
| executor = "fork-join-executor"
| fork-join-executor {
| parallelism-min = $numThreads
| parallelism-factor = 1.0
| parallelism-max = $numThreads
| }
| throughput = $throughPut
| }
|
| fixed-size-dispatcher {
| executor = "thread-pool-executor"
| thread-pool-executor {
| fixed-pool-size = $numThreads
| }
| throughput = $throughPut
| }
|
| affinity-dispatcher {
| executor = "affinity-pool-executor"
| affinity-pool-executor {
| parallelism-min = $numThreads
| parallelism-factor = 1.0
| parallelism-max = $numThreads
| task-queue-size = 512
| idle-cpu-level = 5
| fair-work-distribution.threshold = 2048
| }
| throughput = $throughPut
| }
| $mailboxConf
| }
| }
""".stripMargin
))
}
@TearDown(Level.Trial)
def shutdown(): Unit = tearDownSystem()
@Benchmark
@OperationsPerInvocation(totalNumberOfMessages)
def pingPong(): Unit = benchmarkPingPongActors(numMessagesPerActorPair, numActors, dispatcher, throughPut, timeout)
}
@@ -0,0 +1,68 @@
/**
* Copyright (C) 2014-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.actor
import java.util.concurrent.TimeUnit
import akka.actor.BenchmarkActors._
import com.typesafe.config.ConfigFactory
import org.openjdk.jmh.annotations._
@State(Scope.Benchmark)
@BenchmarkMode(Array(Mode.Throughput))
@Fork(1)
@Threads(1)
@Warmup(iterations = 10, time = 5, timeUnit = TimeUnit.SECONDS, batchSize = 1)
@Measurement(iterations = 10, time = 15, timeUnit = TimeUnit.SECONDS, batchSize = 1)
class AffinityPoolIdleCPULevelBenchmark {
final val numThreads, numActors = 8
final val numMessagesPerActorPair = 2000000
final val totalNumberOfMessages = numMessagesPerActorPair * (numActors / 2)
implicit var system: ActorSystem = _
@Param(Array("1", "3", "5", "7", "10"))
var idleCPULevel = ""
@Param(Array("25"))
var throughPut = 0
@Setup(Level.Trial)
def setup(): Unit = {
requireRightNumberOfCores(numThreads)
system = ActorSystem("AffinityPoolWaitingStrategyBenchmark", ConfigFactory.parseString(
s""" | akka {
| log-dead-letters = off
| actor {
| affinity-dispatcher {
| executor = "affinity-pool-executor"
| affinity-pool-executor {
| parallelism-min = $numThreads
| parallelism-factor = 1.0
| parallelism-max = $numThreads
| task-queue-size = 512
| idle-cpu-level = $idleCPULevel
| fair-work-distribution.threshold = 2048
| }
| throughput = $throughPut
| }
|
| }
| }
""".stripMargin
))
}
@TearDown(Level.Trial)
def shutdown(): Unit = tearDownSystem()
@Benchmark
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@OperationsPerInvocation(8000000)
def pingPong(): Unit = benchmarkPingPongActors(numMessagesPerActorPair, numActors, "affinity-dispatcher", throughPut, timeout)
}
@@ -0,0 +1,110 @@
/**
* Copyright (C) 2014-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.actor
import java.util.concurrent.{ CountDownLatch, TimeUnit }
import akka.actor.BenchmarkActors._
import akka.actor.ForkJoinActorBenchmark.cores
import com.typesafe.config.ConfigFactory
import org.openjdk.jmh.annotations._
@State(Scope.Benchmark)
@BenchmarkMode(Array(Mode.Throughput))
@Fork(1)
@Threads(1)
@Warmup(iterations = 10, time = 15, timeUnit = TimeUnit.SECONDS, batchSize = 1)
@Measurement(iterations = 10, time = 20, timeUnit = TimeUnit.SECONDS, batchSize = 1)
class AffinityPoolRequestResponseBenchmark {
@Param(Array("1", "5", "50"))
var throughPut = 0
@Param(Array("affinity-dispatcher", "default-fj-dispatcher", "fixed-size-dispatcher"))
var dispatcher = ""
@Param(Array("SingleConsumerOnlyUnboundedMailbox")) //"default"
var mailbox = ""
final val numThreads, numActors = 8
final val numQueriesPerActor = 400000
final val totalNumberOfMessages = numQueriesPerActor * numActors
final val numUsersInDB = 300000
implicit var system: ActorSystem = _
var actors: Vector[(ActorRef, ActorRef)] = null
var latch: CountDownLatch = null
@Setup(Level.Trial)
def setup(): Unit = {
requireRightNumberOfCores(cores)
val mailboxConf = mailbox match {
case "default" => ""
case "SingleConsumerOnlyUnboundedMailbox" =>
s"""default-mailbox.mailbox-type = "${classOf[akka.dispatch.SingleConsumerOnlyUnboundedMailbox].getName}""""
}
system = ActorSystem("AffinityPoolComparativeBenchmark", ConfigFactory.parseString(
s"""| akka {
| log-dead-letters = off
| actor {
| default-fj-dispatcher {
| executor = "fork-join-executor"
| fork-join-executor {
| parallelism-min = $numThreads
| parallelism-factor = 1.0
| parallelism-max = $numThreads
| }
| throughput = $throughPut
| }
|
| fixed-size-dispatcher {
| executor = "thread-pool-executor"
| thread-pool-executor {
| fixed-pool-size = $numThreads
| }
| throughput = $throughPut
| }
|
| affinity-dispatcher {
| executor = "affinity-pool-executor"
| affinity-pool-executor {
| parallelism-min = $numThreads
| parallelism-factor = 1.0
| parallelism-max = $numThreads
| task-queue-size = 512
| idle-cpu-level = 5
| fair-work-distribution.threshold = 2048
| }
| throughput = $throughPut
| }
| $mailboxConf
| }
| }
""".stripMargin
))
}
@TearDown(Level.Trial)
def shutdown(): Unit = tearDownSystem()
@Setup(Level.Invocation)
def setupActors(): Unit = {
val (_actors, _latch) = RequestResponseActors.startUserQueryActorPairs(numActors, numQueriesPerActor, numUsersInDB, dispatcher)
actors = _actors
latch = _latch
}
@Benchmark
@OperationsPerInvocation(totalNumberOfMessages)
def queryUserServiceActor(): Unit = {
val startNanoTime = System.nanoTime()
RequestResponseActors.initiateQuerySimulation(actors, throughPut * 2)
latch.await(BenchmarkActors.timeout.toSeconds, TimeUnit.SECONDS)
BenchmarkActors.printProgress(totalNumberOfMessages, numActors, startNanoTime)
}
}
@@ -0,0 +1,102 @@
/**
* Copyright (C) 2014-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.actor
import java.util.concurrent.{ CountDownLatch, TimeUnit }
import scala.concurrent.Await
import scala.concurrent.duration.Duration
import scala.concurrent.duration._
object BenchmarkActors {
val timeout = 30.seconds
case object Message
case object Stop
class PingPong(val messages: Int, latch: CountDownLatch) extends Actor {
var left = messages / 2
def receive = {
case Message =>
if (left == 0) {
latch.countDown()
context stop self
}
sender() ! Message
left -= 1
}
}
object PingPong {
def props(messages: Int, latch: CountDownLatch) = Props(new PingPong(messages, latch))
}
class Pipe(next: Option[ActorRef]) extends Actor {
def receive = {
case Message =>
if (next.isDefined) next.get forward Message
case Stop =>
context stop self
if (next.isDefined) next.get forward Stop
}
}
object Pipe {
def props(next: Option[ActorRef]) = Props(new Pipe(next))
}
private def startPingPongActorPairs(messagesPerPair: Int, numPairs: Int, dispatcher: String)(implicit system: ActorSystem) = {
val fullPathToDispatcher = "akka.actor." + dispatcher
val latch = new CountDownLatch(numPairs * 2)
val actors = for {
i <- (1 to numPairs).toVector
} yield {
val ping = system.actorOf(PingPong.props(messagesPerPair, latch).withDispatcher(fullPathToDispatcher))
val pong = system.actorOf(PingPong.props(messagesPerPair, latch).withDispatcher(fullPathToDispatcher))
(ping, pong)
}
(actors, latch)
}
private def initiatePingPongForPairs(refs: Vector[(ActorRef, ActorRef)], inFlight: Int) = {
for {
(ping, pong) <- refs
_ <- 1 to inFlight
} {
ping.tell(Message, pong)
}
}
def printProgress(totalMessages: Long, numActors: Int, startNanoTime: Long) = {
val durationMicros = (System.nanoTime() - startNanoTime) / 1000
println(f" $totalMessages messages by $numActors actors took ${durationMicros / 1000} ms, " +
f"${totalMessages.toDouble / durationMicros}%,.2f M msg/s")
}
def requireRightNumberOfCores(numCores: Int) =
require(
Runtime.getRuntime.availableProcessors == numCores,
s"Update the cores constant to ${Runtime.getRuntime.availableProcessors}"
)
def benchmarkPingPongActors(numMessagesPerActorPair: Int, numActors: Int, dispatcher: String, throughPut: Int, shutdownTimeout: Duration)(implicit system: ActorSystem): Unit = {
val numPairs = numActors / 2
val totalNumMessages = numPairs * numMessagesPerActorPair
val (actors, latch) = startPingPongActorPairs(numMessagesPerActorPair, numPairs, dispatcher)
val startNanoTime = System.nanoTime()
initiatePingPongForPairs(actors, inFlight = throughPut * 2)
latch.await(shutdownTimeout.toSeconds, TimeUnit.SECONDS)
printProgress(totalNumMessages, numActors, startNanoTime)
}
def tearDownSystem()(implicit system: ActorSystem): Unit = {
system.terminate()
Await.ready(system.whenTerminated, timeout)
}
}
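`printProgress` above derives throughput from elapsed nanoseconds: dividing by 1000 gives microseconds, and messages per microsecond is numerically millions of messages per second. A tiny self-contained check of that arithmetic (`mMsgPerSec` is an illustrative helper, not part of the benchmark):

```scala
object ThroughputDemo {
  // nanos / 1000 = micros; messages / micros == millions of messages per second
  def mMsgPerSec(totalMessages: Long, elapsedNanos: Long): Double = {
    val durationMicros = elapsedNanos / 1000
    totalMessages.toDouble / durationMicros
  }

  def main(args: Array[String]): Unit = {
    // 2,000,000 messages in one second is 2.0 M msg/s
    assert(mMsgPerSec(2000000L, 1000000000L) == 2.0)
  }
}
```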
@ -6,46 +6,61 @@ package akka.actor
import akka.testkit.TestProbe import akka.testkit.TestProbe
import com.typesafe.config.ConfigFactory import com.typesafe.config.ConfigFactory
import org.openjdk.jmh.annotations._ import org.openjdk.jmh.annotations._
import scala.concurrent.duration._
import java.util.concurrent.TimeUnit import java.util.concurrent.TimeUnit
import scala.concurrent.Await import scala.concurrent.Await
import scala.annotation.tailrec import scala.annotation.tailrec
import BenchmarkActors._
import scala.concurrent.duration._
 @State(Scope.Benchmark)
 @BenchmarkMode(Array(Mode.Throughput))
 @Fork(1)
 @Threads(1)
 @Warmup(iterations = 10, time = 5, timeUnit = TimeUnit.SECONDS, batchSize = 1)
-@Measurement(iterations = 20)
+@Measurement(iterations = 10, time = 15, timeUnit = TimeUnit.SECONDS, batchSize = 1)
 class ForkJoinActorBenchmark {
   import ForkJoinActorBenchmark._

-  @Param(Array("5"))
+  @Param(Array("5", "25", "50"))
   var tpt = 0

-  @Param(Array("1"))
+  @Param(Array(coresStr)) // coresStr, cores2xStr, cores4xStr
   var threads = ""

+  @Param(Array("SingleConsumerOnlyUnboundedMailbox")) // "default"
+  var mailbox = ""
+
   implicit var system: ActorSystem = _

   @Setup(Level.Trial)
   def setup(): Unit = {
+    requireRightNumberOfCores(cores)
+    val mailboxConf = mailbox match {
+      case "default" => ""
+      case "SingleConsumerOnlyUnboundedMailbox" =>
+        s"""default-mailbox.mailbox-type = "${classOf[akka.dispatch.SingleConsumerOnlyUnboundedMailbox].getName}""""
+    }
     system = ActorSystem("ForkJoinActorBenchmark", ConfigFactory.parseString(
-      s"""| akka {
-          |   log-dead-letters = off
-          |   actor {
-          |     default-dispatcher {
-          |       executor = "fork-join-executor"
-          |       fork-join-executor {
-          |         parallelism-min = 1
-          |         parallelism-factor = $threads
-          |         parallelism-max = 64
-          |       }
-          |       throughput = $tpt
-          |     }
-          |   }
-          | }
-      """.stripMargin
+      s"""
+        akka {
+          log-dead-letters = off
+          actor {
+            default-dispatcher {
+              executor = "fork-join-executor"
+              fork-join-executor {
+                parallelism-min = $threads
+                parallelism-factor = 1
+                parallelism-max = $threads
+              }
+              throughput = $tpt
+            }
+            $mailboxConf
+          }
+        }
+      """
     ))
   }
@@ -55,110 +70,31 @@ class ForkJoinActorBenchmark {
     Await.ready(system.whenTerminated, 15.seconds)
   }

-  var pingPongActors: Vector[(ActorRef, ActorRef)] = null
-  var pingPongLessActorsThanCoresActors: Vector[(ActorRef, ActorRef)] = null
-  var pingPongSameNumberOfActorsAsCoresActors: Vector[(ActorRef, ActorRef)] = null
-  var pingPongMoreActorsThanCoresActors: Vector[(ActorRef, ActorRef)] = null
-
-  @Setup(Level.Invocation)
-  def setupActors(): Unit = {
-    pingPongActors = startActors(1)
-    pingPongLessActorsThanCoresActors = startActors(lessThanCoresActorPairs)
-    pingPongSameNumberOfActorsAsCoresActors = startActors(cores / 2)
-    pingPongMoreActorsThanCoresActors = startActors(moreThanCoresActorPairs)
-  }
-
-  @TearDown(Level.Invocation)
-  def tearDownActors(): Unit = {
-    stopActors(pingPongActors)
-    stopActors(pingPongLessActorsThanCoresActors)
-    stopActors(pingPongSameNumberOfActorsAsCoresActors)
-    stopActors(pingPongMoreActorsThanCoresActors)
-  }
-
-  def startActors(n: Int): Vector[(ActorRef, ActorRef)] = {
-    for {
-      i <- (1 to n).toVector
-    } yield {
-      val ping = system.actorOf(Props[ForkJoinActorBenchmark.PingPong])
-      val pong = system.actorOf(Props[ForkJoinActorBenchmark.PingPong])
-      (ping, pong)
-    }
-  }
-
-  def stopActors(refs: Vector[(ActorRef, ActorRef)]): Unit = {
-    if (refs ne null) {
-      refs.foreach {
-        case (ping, pong) =>
-          system.stop(ping)
-          system.stop(pong)
-      }
-      awaitTerminated(refs)
-    }
-  }
-
-  def awaitTerminated(refs: Vector[(ActorRef, ActorRef)]): Unit = {
-    if (refs ne null) refs.foreach {
-      case (ping, pong) =>
-        val p = TestProbe()
-        p.watch(ping)
-        p.expectTerminated(ping, timeout)
-        p.watch(pong)
-        p.expectTerminated(pong, timeout)
-    }
-  }
-
-  def sendMessage(refs: Vector[(ActorRef, ActorRef)], inFlight: Int): Unit = {
-    for {
-      (ping, pong) <- refs
-      _ <- 1 to inFlight
-    } {
-      ping.tell(Message, pong)
-    }
-  }
-
   @Benchmark
-  @Measurement(timeUnit = TimeUnit.MILLISECONDS)
-  @OperationsPerInvocation(messages)
-  def pingPong(): Unit = {
-    // only one message in flight
-    sendMessage(pingPongActors, inFlight = 1)
-    awaitTerminated(pingPongActors)
-  }
+  @OperationsPerInvocation(totalMessagesTwoActors)
+  def pingPong(): Unit = benchmarkPingPongActors(messages, twoActors, "default-dispatcher", tpt, timeout)

   @Benchmark
-  @Measurement(timeUnit = TimeUnit.MILLISECONDS)
   @OperationsPerInvocation(totalMessagesLessThanCores)
-  def pingPongLessActorsThanCores(): Unit = {
-    sendMessage(pingPongLessActorsThanCoresActors, inFlight = 2 * tpt)
-    awaitTerminated(pingPongLessActorsThanCoresActors)
-  }
+  def pingPongLessActorsThanCores(): Unit = benchmarkPingPongActors(messages, lessThanCoresActors, "default-dispatcher", tpt, timeout)

   @Benchmark
-  @Measurement(timeUnit = TimeUnit.MILLISECONDS)
   @OperationsPerInvocation(totalMessagesSameAsCores)
-  def pingPongSameNumberOfActorsAsCores(): Unit = {
-    sendMessage(pingPongSameNumberOfActorsAsCoresActors, inFlight = 2 * tpt)
-    awaitTerminated(pingPongSameNumberOfActorsAsCoresActors)
-  }
+  def pingPongSameNumberOfActorsAsCores(): Unit = benchmarkPingPongActors(messages, sameAsCoresActors, "default-dispatcher", tpt, timeout)

   @Benchmark
-  @Measurement(timeUnit = TimeUnit.MILLISECONDS)
   @OperationsPerInvocation(totalMessagesMoreThanCores)
-  def pingPongMoreActorsThanCores(): Unit = {
-    sendMessage(pingPongMoreActorsThanCoresActors, inFlight = 2 * tpt)
-    awaitTerminated(pingPongMoreActorsThanCoresActors)
-  }
+  def pingPongMoreActorsThanCores(): Unit = benchmarkPingPongActors(messages, moreThanCoresActors, "default-dispatcher", tpt, timeout)

   // @Benchmark
   // @Measurement(timeUnit = TimeUnit.MILLISECONDS)
   // @OperationsPerInvocation(messages)
   def floodPipe(): Unit = {
-    val end = system.actorOf(Props(classOf[ForkJoinActorBenchmark.Pipe], None))
-    val middle = system.actorOf(Props(classOf[ForkJoinActorBenchmark.Pipe], Some(end)))
-    val penultimate = system.actorOf(Props(classOf[ForkJoinActorBenchmark.Pipe], Some(middle)))
-    val beginning = system.actorOf(Props(classOf[ForkJoinActorBenchmark.Pipe], Some(penultimate)))
+    val end = system.actorOf(Props(classOf[Pipe], None))
+    val middle = system.actorOf(Props(classOf[Pipe], Some(end)))
+    val penultimate = system.actorOf(Props(classOf[Pipe], Some(middle)))
+    val beginning = system.actorOf(Props(classOf[Pipe], Some(penultimate)))
     val p = TestProbe()
     p.watch(end)
@@ -178,39 +114,23 @@ class ForkJoinActorBenchmark {
   }
 }

 object ForkJoinActorBenchmark {
-  case object Stop
-  case object Message
-  final val timeout = 15.seconds
-  final val messages = 400000
+  final val messages = 2000000 // messages per actor pair

+  // Constants because they are used in annotations
   // update according to cpu
   final val cores = 8
-  // 2 actors per
-  final val moreThanCoresActorPairs = cores * 2
-  final val lessThanCoresActorPairs = (cores / 2) - 1
-  final val totalMessagesMoreThanCores = moreThanCoresActorPairs * messages
-  final val totalMessagesLessThanCores = lessThanCoresActorPairs * messages
-  final val totalMessagesSameAsCores = cores * messages
+  final val coresStr = "8"
+  final val cores2xStr = "16"
+  final val cores4xStr = "24"

-  class Pipe(next: Option[ActorRef]) extends Actor {
-    def receive = {
-      case Message =>
-        if (next.isDefined) next.get forward Message
-      case Stop =>
-        context stop self
-        if (next.isDefined) next.get forward Stop
-    }
-  }
+  final val twoActors = 2
+  final val moreThanCoresActors = cores * 2
+  final val lessThanCoresActors = cores / 2
+  final val sameAsCoresActors = cores

-  class PingPong extends Actor {
-    var left = messages / 2
-    def receive = {
-      case Message =>
-        if (left <= 1)
-          context stop self
-        sender() ! Message
-        left -= 1
-    }
-  }
+  final val totalMessagesTwoActors = messages
+  final val totalMessagesMoreThanCores = (moreThanCoresActors * messages) / 2
+  final val totalMessagesLessThanCores = (lessThanCoresActors * messages) / 2
+  final val totalMessagesSameAsCores = (sameAsCoresActors * messages) / 2
 }
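The per-invocation totals in the companion object divide by two because `messages` counts the traffic of a whole pair while the actor counts (`moreThanCoresActors` etc.) count individual actors. A quick sanity check of that arithmetic, using the constants above:

```scala
val cores = 8
val messages = 2000000 // messages per actor pair

val moreThanCoresActors = cores * 2 // 16 actors = 8 pairs
val lessThanCoresActors = cores / 2 //  4 actors = 2 pairs
val sameAsCoresActors = cores       //  8 actors = 4 pairs

// n actors form n / 2 pairs, each pair exchanging `messages` messages
val totalMessagesMoreThanCores = (moreThanCoresActors * messages) / 2
val totalMessagesLessThanCores = (lessThanCoresActors * messages) / 2
val totalMessagesSameAsCores = (sameAsCoresActors * messages) / 2
```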

View file

@ -0,0 +1,93 @@
/**
* Copyright (C) 2014-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.actor
import java.util.concurrent.CountDownLatch
import scala.collection.mutable
import scala.util.Random
object RequestResponseActors {
case class Request(userId: Int)
case class User(userId: Int, firstName: String, lastName: String, ssn: Int, friends: Seq[Int])
class UserQueryActor(latch: CountDownLatch, numQueries: Int, numUsersInDB: Int) extends Actor {
private var left = numQueries
private val receivedUsers: mutable.Map[Int, User] = mutable.Map()
private val randGenerator = new Random()
override def receive: Receive = {
case u: User => {
receivedUsers.put(u.userId, u)
if (left == 0) {
latch.countDown()
context stop self
} else {
sender() ! Request(randGenerator.nextInt(numUsersInDB))
}
left -= 1
}
}
}
object UserQueryActor {
def props(latch: CountDownLatch, numQueries: Int, numUsersInDB: Int) = {
Props(new UserQueryActor(latch, numQueries, numUsersInDB))
}
}
class UserServiceActor(userDb: Map[Int, User], latch: CountDownLatch, numQueries: Int) extends Actor {
private var left = numQueries
def receive = {
case Request(id) =>
userDb.get(id) match {
case Some(u) => sender() ! u
case None =>
}
if (left == 0) {
latch.countDown()
context stop self
}
left -= 1
}
}
object UserServiceActor {
def props(latch: CountDownLatch, numQueries: Int, numUsersInDB: Int) = {
val r = new Random()
val users = for {
id <- 0 until numUsersInDB
firstName = r.nextString(5)
lastName = r.nextString(7)
ssn = r.nextInt()
friendIds = for { _ <- 0 until 5 } yield r.nextInt(numUsersInDB)
} yield id -> User(id, firstName, lastName, ssn, friendIds)
Props(new UserServiceActor(users.toMap, latch, numQueries))
}
}
def startUserQueryActorPairs(numActors: Int, numQueriesPerActor: Int, numUsersInDBPerActor: Int, dispatcher: String)(implicit system: ActorSystem) = {
val fullPathToDispatcher = "akka.actor." + dispatcher
val latch = new CountDownLatch(numActors)
val actorsPairs = for {
i <- (1 to (numActors / 2)).toVector
userQueryActor = system.actorOf(UserQueryActor.props(latch, numQueriesPerActor, numUsersInDBPerActor).withDispatcher(fullPathToDispatcher))
userServiceActor = system.actorOf(UserServiceActor.props(latch, numQueriesPerActor, numUsersInDBPerActor).withDispatcher(fullPathToDispatcher))
} yield (userQueryActor, userServiceActor)
(actorsPairs, latch)
}
def initiateQuerySimulation(requestResponseActorPairs: Seq[(ActorRef, ActorRef)], inFlight: Int) = {
for {
(queryActor, serviceActor) <- requestResponseActorPairs
i <- 1 to inFlight
} {
serviceActor.tell(Request(i), queryActor)
}
}
}
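`UserServiceActor.props` builds its in-memory user table from random data. A stand-alone sketch of the same construction (a fixed seed is added here for reproducibility; the original generator is unseeded):

```scala
import scala.util.Random

final case class User(userId: Int, firstName: String, lastName: String, ssn: Int, friends: Seq[Int])

val r = new Random(42)
val numUsersInDB = 100
val userDb = (0 until numUsersInDB).map { id =>
  // same shape as the original DB: random names and ssn, 5 random friend ids
  id -> User(id, r.nextString(5), r.nextString(7), r.nextInt(), Seq.fill(5)(r.nextInt(numUsersInDB)))
}.toMap
```

Every id in `0 until numUsersInDB` is present, so `UserServiceActor`'s `userDb.get(id)` lookup only misses if a query actor requests an out-of-range id.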

View file

@ -24,6 +24,11 @@ class LatchSink(countDownAfter: Int, latch: CountDownLatch) extends GraphStage[S
       override def preStart(): Unit = pull(in)

+      override def onUpstreamFailure(ex: Throwable): Unit = {
+        println(ex.getMessage)
+        ex.printStackTrace()
+      }
+
       override def onPush(): Unit = {
         n += 1
         if (n == countDownAfter)

View file

@ -0,0 +1,128 @@
/**
* Copyright (C) 2014-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.stream
import java.util.concurrent.TimeUnit
import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.scaladsl._
import com.typesafe.config.ConfigFactory
import org.openjdk.jmh.annotations._
import java.util.concurrent.Semaphore
import scala.util.Success
import akka.stream.impl.fusing.GraphStages
import org.reactivestreams._
import scala.concurrent.Await
import scala.concurrent.duration._
import akka.remote.artery.BenchTestSource
import java.util.concurrent.CountDownLatch
import akka.remote.artery.LatchSink
import akka.stream.impl.PhasedFusingActorMaterializer
import akka.testkit.TestProbe
import akka.stream.impl.StreamSupervisor
import akka.stream.scaladsl.PartitionHub
import akka.remote.artery.FixedSizePartitionHub
object PartitionHubBenchmark {
final val OperationsPerInvocation = 100000
}
@State(Scope.Benchmark)
@OutputTimeUnit(TimeUnit.SECONDS)
@BenchmarkMode(Array(Mode.Throughput))
class PartitionHubBenchmark {
import PartitionHubBenchmark._
val config = ConfigFactory.parseString(
"""
akka.actor.default-dispatcher {
executor = "fork-join-executor"
fork-join-executor {
parallelism-factor = 1
}
}
"""
)
implicit val system = ActorSystem("PartitionHubBenchmark", config)
var materializer: ActorMaterializer = _
@Param(Array("2", "5", "10", "20", "30"))
var NumberOfStreams = 0
@Param(Array("256"))
var BufferSize = 0
var testSource: Source[java.lang.Integer, NotUsed] = _
@Setup
def setup(): Unit = {
val settings = ActorMaterializerSettings(system)
materializer = ActorMaterializer(settings)
testSource = Source.fromGraph(new BenchTestSource(OperationsPerInvocation))
}
@TearDown
def shutdown(): Unit = {
Await.result(system.terminate(), 5.seconds)
}
@Benchmark
@OperationsPerInvocation(OperationsPerInvocation)
def partition(): Unit = {
val N = OperationsPerInvocation
val latch = new CountDownLatch(NumberOfStreams)
val source = testSource
.runWith(PartitionHub.sink[java.lang.Integer](
(size, elem) => elem.intValue % NumberOfStreams,
startAfterNrOfConsumers = NumberOfStreams, bufferSize = BufferSize
))(materializer)
for (_ <- 0 until NumberOfStreams)
source.runWith(new LatchSink(N / NumberOfStreams, latch))(materializer)
if (!latch.await(30, TimeUnit.SECONDS)) {
dumpMaterializer()
throw new RuntimeException("Latch didn't complete in time")
}
}
// @Benchmark
// @OperationsPerInvocation(OperationsPerInvocation)
def arteryLanes(): Unit = {
val N = OperationsPerInvocation
val latch = new CountDownLatch(NumberOfStreams)
val source = testSource
.runWith(
Sink.fromGraph(new FixedSizePartitionHub(
_.intValue % NumberOfStreams,
lanes = NumberOfStreams, bufferSize = BufferSize
))
)(materializer)
for (_ <- 0 until NumberOfStreams)
source.runWith(new LatchSink(N / NumberOfStreams, latch))(materializer)
if (!latch.await(30, TimeUnit.SECONDS)) {
dumpMaterializer()
throw new RuntimeException("Latch didn't complete in time")
}
}
private def dumpMaterializer(): Unit = {
materializer match {
      case impl: PhasedFusingActorMaterializer ⇒
val probe = TestProbe()(system)
impl.supervisor.tell(StreamSupervisor.GetChildren, probe.ref)
val children = probe.expectMsgType[StreamSupervisor.Children].children
children.foreach(_ ! StreamSupervisor.PrintDebugDump)
}
}
}
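The partitioner passed to `PartitionHub.sink` is `elem.intValue % NumberOfStreams`, so the monotonically increasing `BenchTestSource` spreads elements perfectly evenly over the consumers — which is what lets each `LatchSink` expect exactly `N / NumberOfStreams` elements. Checking that distribution without any streams involved:

```scala
val numberOfStreams = 5
val operationsPerInvocation = 100000

// group a BenchTestSource-like sequence 0,1,2,... by the hub's partition key
val perStream = (0 until operationsPerInvocation)
  .groupBy(_ % numberOfStreams)
  .map { case (stream, elems) => stream -> elems.size }
```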

View file

@ -0,0 +1,112 @@
/**
* Copyright (C) 2014-2017 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.util
import org.openjdk.jmh.annotations._
import java.util.concurrent.TimeUnit
import scala.annotation.tailrec
@State(Scope.Benchmark)
@BenchmarkMode(Array(Mode.Throughput))
@Fork(1)
@Threads(1)
@Warmup(iterations = 10, time = 5, timeUnit = TimeUnit.MICROSECONDS, batchSize = 1)
@Measurement(iterations = 10, time = 15, timeUnit = TimeUnit.MICROSECONDS, batchSize = 1)
class ImmutableIntMapBench {
@tailrec private[this] final def add(n: Int, c: ImmutableIntMap = ImmutableIntMap.empty): ImmutableIntMap =
if (n >= 0) add(n - 1, c.updated(n, n))
else c
@tailrec private[this] final def contains(n: Int, by: Int, to: Int, in: ImmutableIntMap, b: Boolean): Boolean =
if (n <= to) {
val result = in.contains(n)
contains(n + by, by, to, in, result)
} else b
@tailrec private[this] final def get(n: Int, by: Int, to: Int, in: ImmutableIntMap, b: Int): Int =
if (n <= to) {
val result = in.get(n)
get(n + by, by, to, in, result)
} else b
@tailrec private[this] final def hashCode(n: Int, in: ImmutableIntMap, b: Int): Int =
if (n >= 0) {
val result = in.hashCode
hashCode(n - 1, in, result)
} else b
@tailrec private[this] final def updateIfAbsent(n: Int, by: Int, to: Int, in: ImmutableIntMap): ImmutableIntMap =
if (n <= to) updateIfAbsent(n + by, by, to, in.updateIfAbsent(n, n))
else in
@tailrec private[this] final def getKey(iterations: Int, key: Int, from: ImmutableIntMap): ImmutableIntMap = {
if (iterations > 0 && key != Int.MinValue) {
val k = from.get(key)
getKey(iterations - 1, k, from)
} else from
}
val odd1000 = (0 to 1000).iterator.filter(_ % 2 == 1).foldLeft(ImmutableIntMap.empty)((l, i) => l.updated(i, i))
@Benchmark
@OperationsPerInvocation(1)
def add1(): ImmutableIntMap = add(1)
@Benchmark
@OperationsPerInvocation(10)
def add10(): ImmutableIntMap = add(10)
@Benchmark
@OperationsPerInvocation(100)
def add100(): ImmutableIntMap = add(100)
@Benchmark
@OperationsPerInvocation(1000)
def add1000(): ImmutableIntMap = add(1000)
@Benchmark
@OperationsPerInvocation(10000)
def add10000(): ImmutableIntMap = add(10000)
@Benchmark
@OperationsPerInvocation(500)
def contains(): Boolean = contains(n = 1, by = 2, to = odd1000.size, in = odd1000, b = false)
@Benchmark
@OperationsPerInvocation(500)
def notcontains(): Boolean = contains(n = 0, by = 2, to = odd1000.size, in = odd1000, b = false)
@Benchmark
@OperationsPerInvocation(500)
def get(): Int = get(n = 1, by = 2, to = odd1000.size, in = odd1000, b = Int.MinValue)
@Benchmark
@OperationsPerInvocation(500)
def notget(): Int = get(n = 0, by = 2, to = odd1000.size, in = odd1000, b = Int.MinValue)
@Benchmark
@OperationsPerInvocation(500)
def updateNotAbsent(): ImmutableIntMap = updateIfAbsent(n = 1, by = 2, to = odd1000.size, in = odd1000)
@Benchmark
@OperationsPerInvocation(500)
def updateAbsent(): ImmutableIntMap = updateIfAbsent(n = 0, by = 2, to = odd1000.size, in = odd1000)
@Benchmark
@OperationsPerInvocation(10000)
def hashcode(): Int = hashCode(10000, odd1000, 0)
@Benchmark
@OperationsPerInvocation(1000)
def getMidElement(): ImmutableIntMap = getKey(iterations = 1000, key = 249, from = odd1000)
@Benchmark
@OperationsPerInvocation(1000)
def getLoElement(): ImmutableIntMap = getKey(iterations = 1000, key = 1, from = odd1000)
@Benchmark
@OperationsPerInvocation(1000)
def getHiElement(): ImmutableIntMap = getKey(iterations = 1000, key = 999, from = odd1000)
}
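The `odd1000` fixture above contains only the odd keys 1, 3, …, 999, which is what turns odd probes into hits (`contains`, `get`) and even probes into misses (`notcontains`, `notget`). A plain immutable `Map` stand-in shows the fixture's shape (`ImmutableIntMap` itself is an internal Akka structure):

```scala
// same construction as the benchmark fixture, using scala's Map
val odd1000 = (0 to 1000).iterator.filter(_ % 2 == 1)
  .foldLeft(Map.empty[Int, Int])((m, i) => m.updated(i, i))
```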

View file

@ -372,7 +372,7 @@ abstract class MixMetricsSelectorBase(selectors: immutable.IndexedSeq[CapacityMe
         val (sum, count) = acc(address)
         acc + (address → ((sum + capacity, count + 1)))
       }.map {
-        case (addr, (sum, count)) ⇒ addr → (sum / count)
+        case (address, (sum, count)) ⇒ address → (sum / count)
       }
   }
@ -434,7 +434,7 @@ abstract class CapacityMetricsSelector extends MetricsSelector {
     val (_, min) = capacity.minBy { case (_, c) ⇒ c }
     // lowest usable capacity is 1% (>= 0.5% will be rounded to weight 1), also avoids div by zero
     val divisor = math.max(0.01, min)
-    capacity map { case (addr, c) ⇒ (addr → math.round((c) / divisor).toInt) }
+    capacity map { case (address, c) ⇒ (address → math.round((c) / divisor).toInt) }
   }
 }

View file

@ -138,7 +138,7 @@ abstract class AdaptiveLoadBalancingRouterSpec extends MultiNodeSpec(AdaptiveLoa
       val router = system.actorOf(
         ClusterRouterPool(
           local = AdaptiveLoadBalancingPool(HeapMetricsSelector),
-          settings = ClusterRouterPoolSettings(totalInstances = 10, maxInstancesPerNode = 1, allowLocalRoutees = true, useRole = None)).
+          settings = ClusterRouterPoolSettings(totalInstances = 10, maxInstancesPerNode = 1, allowLocalRoutees = true)).
           props(Props[Echo]),
         name)
       // it may take some time until router receives cluster member events

View file

@ -45,7 +45,7 @@ object StatsSampleSpecConfig extends MultiNodeConfig {
       cluster {
         enabled = on
         allow-local-routees = on
-        use-role = compute
+        use-roles = ["compute"]
       }
     }
   }

View file

@ -57,7 +57,7 @@ abstract class StatsService2 extends Actor {
     val workerRouter = context.actorOf(
       ClusterRouterGroup(ConsistentHashingGroup(Nil), ClusterRouterGroupSettings(
         totalInstances = 100, routeesPaths = List("/user/statsWorker"),
-        allowLocalRoutees = true, useRole = Some("compute"))).props(),
+        allowLocalRoutees = true, useRoles = Set("compute"))).props(),
       name = "workerRouter2")
     //#router-lookup-in-code
   }
@ -71,7 +71,7 @@ abstract class StatsService3 extends Actor {
     val workerRouter = context.actorOf(
       ClusterRouterPool(ConsistentHashingPool(0), ClusterRouterPoolSettings(
         totalInstances = 100, maxInstancesPerNode = 3,
-        allowLocalRoutees = false, useRole = None)).props(Props[StatsWorker]),
+        allowLocalRoutees = false)).props(Props[StatsWorker]),
       name = "workerRouter3")
     //#router-deploy-in-code
   }
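These hunks track the settings change from a single optional role to a role set. For migration purposes, the removed `useRole: Option[String]` maps onto the new `useRoles: Set[String]` roughly like this (a hypothetical helper for illustration, not part of the Akka API):

```scala
// Some("compute") becomes Set("compute"); None becomes the empty set,
// which keeps the old "no role restriction" behaviour
def toUseRoles(useRole: Option[String]): Set[String] = useRole.toSet
```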

View file

@ -0,0 +1,7 @@
# #18722 internal changes to actor
ProblemFilters.exclude[Problem]("akka.cluster.sharding.DDataShardCoordinator*")
ProblemFilters.exclude[MissingTypesProblem]("akka.cluster.sharding.ShardRegion$GetCurrentRegions$")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.ShardCoordinator#Internal#State.apply")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.ShardCoordinator#Internal#State.copy")

View file

@ -0,0 +1,2 @@
# #21194 renamed internal actor method
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.ShardCoordinator.allocateShardHomes")

View file

@ -0,0 +1,7 @@
# Internal MessageBuffer for actors
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.Shard.totalBufferSize")
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.cluster.sharding.Shard.messageBuffers")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.Shard.messageBuffers_=")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.ShardRegion.totalBufferSize")
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.cluster.sharding.ShardRegion.shardBuffers")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.cluster.sharding.ShardRegion.shardBuffers_=")

View file

@ -0,0 +1,5 @@
# #20319 - remove not needed "no. of persists" counter in sharding
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.PersistentShard.persistCount")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.PersistentShard.persistCount_=")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.PersistentShardCoordinator.persistCount")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.PersistentShardCoordinator.persistCount_=")

View file

@ -0,0 +1,18 @@
# #22141 sharding minCap
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.DDataShardCoordinator.updatingStateTimeout")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.DDataShardCoordinator.waitingForStateTimeout")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.DDataShardCoordinator.this")
# #22154 Sharding remembering entities with ddata, internal actors
ProblemFilters.exclude[Problem]("akka.cluster.sharding.Shard*")
ProblemFilters.exclude[Problem]("akka.cluster.sharding.PersistentShard*")
ProblemFilters.exclude[Problem]("akka.cluster.sharding.ClusterShardingGuardian*")
ProblemFilters.exclude[Problem]("akka.cluster.sharding.ShardRegion*")
# #21423 remove deprecated persist method (persistAll)
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.cluster.sharding.PersistentShard.persist")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.cluster.sharding.PersistentShard.persistAsync")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.cluster.sharding.PersistentShardCoordinator.persist")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.cluster.sharding.PersistentShardCoordinator.persistAsync")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.cluster.sharding.RemoveInternalClusterShardingData#RemoveOnePersistenceId.persist")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.cluster.sharding.RemoveInternalClusterShardingData#RemoveOnePersistenceId.persistAsync")

View file

@ -0,0 +1,6 @@
# #22868 store shards
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.DDataShardCoordinator.sendUpdate")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.DDataShardCoordinator.waitingForUpdate")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.DDataShardCoordinator.getState")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.DDataShardCoordinator.waitingForState")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.sharding.DDataShardCoordinator.this")

View file

@ -0,0 +1,6 @@
# Internal MessageBuffer for actors
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.pubsub.PerGroupingBuffer.akka$cluster$pubsub$PerGroupingBuffer$$buffers")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.pubsub.PerGroupingBuffer.akka$cluster$pubsub$PerGroupingBuffer$_setter_$akka$cluster$pubsub$PerGroupingBuffer$$buffers_=")
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.cluster.singleton.ClusterSingletonProxy.buffer")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.cluster.singleton.ClusterSingletonProxy.buffer_=")
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.cluster.client.ClusterClient.buffer")

View file

@ -0,0 +1,4 @@
# #20462 - now uses a Set instead of a Seq within the private API of the cluster client
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.cluster.client.ClusterClient.contacts_=")
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.cluster.client.ClusterClient.contacts")
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.cluster.client.ClusterClient.initialContactsSel")

View file

@ -0,0 +1,2 @@
ProblemFilters.exclude[Problem]("akka.cluster.pubsub.DistributedPubSubMediator$Internal*")
ProblemFilters.exclude[Problem]("akka.cluster.pubsub.DistributedPubSubMediator#Internal*")

View file

@ -0,0 +1,7 @@
# #20846 change of internal Status message
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.pubsub.protobuf.msg.DistributedPubSubMessages#StatusOrBuilder.getReplyToStatus")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.pubsub.protobuf.msg.DistributedPubSubMessages#StatusOrBuilder.hasReplyToStatus")
# #20942 ClusterSingleton
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.cluster.singleton.ClusterSingletonManager.addRemoved")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.singleton.ClusterSingletonManager.selfAddressOption")

View file

@ -10,7 +10,9 @@ import scala.concurrent.duration._
 import java.util.concurrent.ThreadLocalRandom
 import java.net.URLEncoder
 import java.net.URLDecoder
+
 import akka.actor._
+import akka.annotation.DoNotInherit
 import akka.cluster.Cluster
 import akka.cluster.ClusterEvent._
 import akka.cluster.Member
@@ -24,6 +26,7 @@ import akka.routing.RouterEnvelope
 import akka.routing.RoundRobinRoutingLogic
 import akka.routing.ConsistentHashingRoutingLogic
 import akka.routing.BroadcastRoutingLogic
+
 import scala.collection.immutable.TreeMap
 import com.typesafe.config.Config
 import akka.dispatch.Dispatchers
@@ -399,6 +402,7 @@ object DistributedPubSubMediator {
    */
   def wrapIfNeeded: Any ⇒ Any = {
     case msg: RouterEnvelope ⇒ MediatorRouterEnvelope(msg)
+    case null                ⇒ throw InvalidMessageException("Message must not be null")
     case msg: Any            ⇒ msg
   }
 }
@@ -475,7 +479,10 @@ trait DistributedPubSubMessage extends Serializable
  * Successful `Subscribe` and `Unsubscribe` is acknowledged with
  * [[DistributedPubSubMediator.SubscribeAck]] and [[DistributedPubSubMediator.UnsubscribeAck]]
  * replies.
+ *
+ * Not intended for subclassing by user code.
  */
+@DoNotInherit
 class DistributedPubSubMediator(settings: DistributedPubSubSettings) extends Actor with ActorLogging with PerGroupingBuffer {
   import DistributedPubSubMediator._
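The new `case null` arm makes `wrapIfNeeded` fail fast instead of letting a `null` publish surface later inside the mediator. A reduced model of that total function — the types here are stand-ins; the real `RouterEnvelope` and `MediatorRouterEnvelope` live in `akka.routing` and the mediator's companion:

```scala
trait RouterEnvelope { def message: Any }
final case class SomeEnvelope(message: Any) extends RouterEnvelope
final case class MediatorRouterEnvelope(message: Any)

val wrapIfNeeded: Any => Any = {
  case msg: RouterEnvelope => MediatorRouterEnvelope(msg) // shield routers from the envelope
  case null                => throw new IllegalArgumentException("Message must not be null")
  case msg                 => msg // plain messages pass through untouched
}
```

Note the ordering is safe: a Scala type pattern like `case msg: RouterEnvelope` never matches `null`, so a `null` message always falls through to the throwing arm.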

View file

@ -5,6 +5,7 @@
 package akka.cluster.singleton

 import com.typesafe.config.Config
 import scala.concurrent.duration._
+
 import scala.collection.immutable
 import akka.actor.Actor
@@ -25,9 +26,11 @@ import akka.AkkaException
 import akka.actor.NoSerializationVerificationNeeded
 import akka.cluster.UniqueAddress
 import akka.cluster.ClusterEvent
+
 import scala.concurrent.Promise
 import akka.Done
 import akka.actor.CoordinatedShutdown
+import akka.annotation.DoNotInherit
 import akka.pattern.ask
 import akka.util.Timeout
 import akka.cluster.ClusterSettings
@@ -395,6 +398,8 @@ class ClusterSingletonManagerIsStuck(message: String) extends AkkaException(mess
  * Use factory method [[ClusterSingletonManager#props]] to create the
  * [[akka.actor.Props]] for the actor.
  *
+ * Not intended for subclassing by user code.
+ *
  * @param singletonProps [[akka.actor.Props]] of the singleton actor instance.
  *
@@ -408,6 +413,7 @@ class ClusterSingletonManagerIsStuck(message: String) extends AkkaException(mess
  *
  * @param settings see [[ClusterSingletonManagerSettings]]
  */
+@DoNotInherit
 class ClusterSingletonManager(
   singletonProps: Props,
   terminationMessage: Any,

View file

@ -82,7 +82,7 @@ public class DistributedPubSubMediatorTest extends JUnitSuite {
       .match(String.class, msg ->
         log.info("Got: {}", msg))
       .match(DistributedPubSubMediator.SubscribeAck.class, msg ->
-        log.info("subscribing"))
+        log.info("subscribed"))
       .build();
   }
 }
@@ -126,8 +126,6 @@ public class DistributedPubSubMediatorTest extends JUnitSuite {
     return receiveBuilder()
       .match(String.class, msg ->
         log.info("Got: {}", msg))
-      .match(DistributedPubSubMediator.SubscribeAck.class, msg ->
-        log.info("subscribing"))
       .build();
   }

View file

@ -14016,6 +14016,26 @@ public final class ClusterMessages {
     */
    akka.protobuf.ByteString
        getUseRoleBytes();
// repeated string useRoles = 5;
/**
* <code>repeated string useRoles = 5;</code>
*/
java.util.List<java.lang.String>
getUseRolesList();
/**
* <code>repeated string useRoles = 5;</code>
*/
int getUseRolesCount();
/**
* <code>repeated string useRoles = 5;</code>
*/
java.lang.String getUseRoles(int index);
/**
* <code>repeated string useRoles = 5;</code>
*/
akka.protobuf.ByteString
getUseRolesBytes(int index);
  }
  /**
   * Protobuf type {@code ClusterRouterPoolSettings}
@@ -14088,6 +14108,14 @@ public final class ClusterMessages {
              useRole_ = input.readBytes();
              break;
            }
case 42: {
if (!((mutable_bitField0_ & 0x00000010) == 0x00000010)) {
useRoles_ = new akka.protobuf.LazyStringArrayList();
mutable_bitField0_ |= 0x00000010;
}
useRoles_.add(input.readBytes());
break;
}
          }
        }
      } catch (akka.protobuf.InvalidProtocolBufferException e) {
@@ -14096,6 +14124,9 @@ public final class ClusterMessages {
        throw new akka.protobuf.InvalidProtocolBufferException(
            e.getMessage()).setUnfinishedMessage(this);
      } finally {
+        if (((mutable_bitField0_ & 0x00000010) == 0x00000010)) {
+          useRoles_ = new akka.protobuf.UnmodifiableLazyStringList(useRoles_);
+        }
        this.unknownFields = unknownFields.build();
        makeExtensionsImmutable();
      }
@@ -14219,11 +14250,42 @@ public final class ClusterMessages {
      }
    }
// repeated string useRoles = 5;
public static final int USEROLES_FIELD_NUMBER = 5;
private akka.protobuf.LazyStringList useRoles_;
/**
* <code>repeated string useRoles = 5;</code>
*/
public java.util.List<java.lang.String>
getUseRolesList() {
return useRoles_;
}
/**
* <code>repeated string useRoles = 5;</code>
*/
public int getUseRolesCount() {
return useRoles_.size();
}
/**
* <code>repeated string useRoles = 5;</code>
*/
public java.lang.String getUseRoles(int index) {
return useRoles_.get(index);
}
/**
* <code>repeated string useRoles = 5;</code>
*/
public akka.protobuf.ByteString
getUseRolesBytes(int index) {
return useRoles_.getByteString(index);
}
private void initFields() { private void initFields() {
totalInstances_ = 0; totalInstances_ = 0;
maxInstancesPerNode_ = 0; maxInstancesPerNode_ = 0;
allowLocalRoutees_ = false; allowLocalRoutees_ = false;
useRole_ = ""; useRole_ = "";
useRoles_ = akka.protobuf.LazyStringArrayList.EMPTY;
} }
private byte memoizedIsInitialized = -1; private byte memoizedIsInitialized = -1;
public final boolean isInitialized() { public final boolean isInitialized() {
@ -14261,6 +14323,9 @@ public final class ClusterMessages {
if (((bitField0_ & 0x00000008) == 0x00000008)) { if (((bitField0_ & 0x00000008) == 0x00000008)) {
output.writeBytes(4, getUseRoleBytes()); output.writeBytes(4, getUseRoleBytes());
} }
for (int i = 0; i < useRoles_.size(); i++) {
output.writeBytes(5, useRoles_.getByteString(i));
}
getUnknownFields().writeTo(output); getUnknownFields().writeTo(output);
} }
@ -14286,6 +14351,15 @@ public final class ClusterMessages {
size += akka.protobuf.CodedOutputStream size += akka.protobuf.CodedOutputStream
.computeBytesSize(4, getUseRoleBytes()); .computeBytesSize(4, getUseRoleBytes());
} }
{
int dataSize = 0;
for (int i = 0; i < useRoles_.size(); i++) {
dataSize += akka.protobuf.CodedOutputStream
.computeBytesSizeNoTag(useRoles_.getByteString(i));
}
size += dataSize;
size += 1 * getUseRolesList().size();
}
size += getUnknownFields().getSerializedSize(); size += getUnknownFields().getSerializedSize();
memoizedSerializedSize = size; memoizedSerializedSize = size;
return size; return size;
@ -14410,6 +14484,8 @@ public final class ClusterMessages {
bitField0_ = (bitField0_ & ~0x00000004); bitField0_ = (bitField0_ & ~0x00000004);
useRole_ = ""; useRole_ = "";
bitField0_ = (bitField0_ & ~0x00000008); bitField0_ = (bitField0_ & ~0x00000008);
useRoles_ = akka.protobuf.LazyStringArrayList.EMPTY;
bitField0_ = (bitField0_ & ~0x00000010);
return this; return this;
} }
@ -14454,6 +14530,12 @@ public final class ClusterMessages {
to_bitField0_ |= 0x00000008; to_bitField0_ |= 0x00000008;
} }
result.useRole_ = useRole_; result.useRole_ = useRole_;
if (((bitField0_ & 0x00000010) == 0x00000010)) {
useRoles_ = new akka.protobuf.UnmodifiableLazyStringList(
useRoles_);
bitField0_ = (bitField0_ & ~0x00000010);
}
result.useRoles_ = useRoles_;
result.bitField0_ = to_bitField0_; result.bitField0_ = to_bitField0_;
onBuilt(); onBuilt();
return result; return result;
@ -14484,6 +14566,16 @@ public final class ClusterMessages {
useRole_ = other.useRole_; useRole_ = other.useRole_;
onChanged(); onChanged();
} }
if (!other.useRoles_.isEmpty()) {
if (useRoles_.isEmpty()) {
useRoles_ = other.useRoles_;
bitField0_ = (bitField0_ & ~0x00000010);
} else {
ensureUseRolesIsMutable();
useRoles_.addAll(other.useRoles_);
}
onChanged();
}
this.mergeUnknownFields(other.getUnknownFields()); this.mergeUnknownFields(other.getUnknownFields());
return this; return this;
} }
@ -14696,6 +14788,99 @@ public final class ClusterMessages {
return this; return this;
} }
// repeated string useRoles = 5;
private akka.protobuf.LazyStringList useRoles_ = akka.protobuf.LazyStringArrayList.EMPTY;
private void ensureUseRolesIsMutable() {
if (!((bitField0_ & 0x00000010) == 0x00000010)) {
useRoles_ = new akka.protobuf.LazyStringArrayList(useRoles_);
bitField0_ |= 0x00000010;
}
}
/**
* <code>repeated string useRoles = 5;</code>
*/
public java.util.List<java.lang.String>
getUseRolesList() {
return java.util.Collections.unmodifiableList(useRoles_);
}
/**
* <code>repeated string useRoles = 5;</code>
*/
public int getUseRolesCount() {
return useRoles_.size();
}
/**
* <code>repeated string useRoles = 5;</code>
*/
public java.lang.String getUseRoles(int index) {
return useRoles_.get(index);
}
/**
* <code>repeated string useRoles = 5;</code>
*/
public akka.protobuf.ByteString
getUseRolesBytes(int index) {
return useRoles_.getByteString(index);
}
/**
* <code>repeated string useRoles = 5;</code>
*/
public Builder setUseRoles(
int index, java.lang.String value) {
if (value == null) {
throw new NullPointerException();
}
ensureUseRolesIsMutable();
useRoles_.set(index, value);
onChanged();
return this;
}
/**
* <code>repeated string useRoles = 5;</code>
*/
public Builder addUseRoles(
java.lang.String value) {
if (value == null) {
throw new NullPointerException();
}
ensureUseRolesIsMutable();
useRoles_.add(value);
onChanged();
return this;
}
/**
* <code>repeated string useRoles = 5;</code>
*/
public Builder addAllUseRoles(
java.lang.Iterable<java.lang.String> values) {
ensureUseRolesIsMutable();
super.addAll(values, useRoles_);
onChanged();
return this;
}
/**
* <code>repeated string useRoles = 5;</code>
*/
public Builder clearUseRoles() {
useRoles_ = akka.protobuf.LazyStringArrayList.EMPTY;
bitField0_ = (bitField0_ & ~0x00000010);
onChanged();
return this;
}
/**
* <code>repeated string useRoles = 5;</code>
*/
public Builder addUseRolesBytes(
akka.protobuf.ByteString value) {
if (value == null) {
throw new NullPointerException();
}
ensureUseRolesIsMutable();
useRoles_.add(value);
onChanged();
return this;
}
// @@protoc_insertion_point(builder_scope:ClusterRouterPoolSettings) // @@protoc_insertion_point(builder_scope:ClusterRouterPoolSettings)
} }
@ -14842,15 +15027,15 @@ public final class ClusterMessages {
" \002(\0132\005.Pool\022,\n\010settings\030\002 \002(\0132\032.ClusterR" + " \002(\0132\005.Pool\022,\n\010settings\030\002 \002(\0132\032.ClusterR" +
"outerPoolSettings\"<\n\004Pool\022\024\n\014serializerI" + "outerPoolSettings\"<\n\004Pool\022\024\n\014serializerI" +
"d\030\001 \002(\r\022\020\n\010manifest\030\002 \002(\t\022\014\n\004data\030\003 \002(\014\"" + "d\030\001 \002(\r\022\020\n\010manifest\030\002 \002(\t\022\014\n\004data\030\003 \002(\014\"" +
"|\n\031ClusterRouterPoolSettings\022\026\n\016totalIns" + "\216\001\n\031ClusterRouterPoolSettings\022\026\n\016totalIn" +
"tances\030\001 \002(\r\022\033\n\023maxInstancesPerNode\030\002 \002(" + "stances\030\001 \002(\r\022\033\n\023maxInstancesPerNode\030\002 \002" +
"\r\022\031\n\021allowLocalRoutees\030\003 \002(\010\022\017\n\007useRole\030" + "(\r\022\031\n\021allowLocalRoutees\030\003 \002(\010\022\017\n\007useRole" +
"\004 \001(\t*D\n\022ReachabilityStatus\022\r\n\tReachable", "\030\004 \001(\t\022\020\n\010useRoles\030\005 \003(\t*D\n\022Reachability",
"\020\000\022\017\n\013Unreachable\020\001\022\016\n\nTerminated\020\002*b\n\014M" + "Status\022\r\n\tReachable\020\000\022\017\n\013Unreachable\020\001\022\016" +
"emberStatus\022\013\n\007Joining\020\000\022\006\n\002Up\020\001\022\013\n\007Leav" + "\n\nTerminated\020\002*b\n\014MemberStatus\022\013\n\007Joinin" +
"ing\020\002\022\013\n\007Exiting\020\003\022\010\n\004Down\020\004\022\013\n\007Removed\020" + "g\020\000\022\006\n\002Up\020\001\022\013\n\007Leaving\020\002\022\013\n\007Exiting\020\003\022\010\n" +
"\005\022\014\n\010WeaklyUp\020\006B\035\n\031akka.cluster.protobuf" + "\004Down\020\004\022\013\n\007Removed\020\005\022\014\n\010WeaklyUp\020\006B\035\n\031ak" +
".msgH\001" "ka.cluster.protobuf.msgH\001"
}; };
akka.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner = akka.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner =
new akka.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() { new akka.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() {
@ -14964,7 +15149,7 @@ public final class ClusterMessages {
internal_static_ClusterRouterPoolSettings_fieldAccessorTable = new internal_static_ClusterRouterPoolSettings_fieldAccessorTable = new
akka.protobuf.GeneratedMessage.FieldAccessorTable( akka.protobuf.GeneratedMessage.FieldAccessorTable(
internal_static_ClusterRouterPoolSettings_descriptor, internal_static_ClusterRouterPoolSettings_descriptor,
new java.lang.String[] { "TotalInstances", "MaxInstancesPerNode", "AllowLocalRoutees", "UseRole", }); new java.lang.String[] { "TotalInstances", "MaxInstancesPerNode", "AllowLocalRoutees", "UseRole", "UseRoles", });
return null; return null;
} }
}; };

View file

@ -0,0 +1,3 @@
# #20644 long uids
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.protobuf.msg.ClusterMessages#UniqueAddressOrBuilder.hasUid2")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.protobuf.msg.ClusterMessages#UniqueAddressOrBuilder.getUid2")

View file

@ -0,0 +1 @@
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.ClusterEvent#ReachabilityEvent.member")

View file

@ -0,0 +1,87 @@
# #21423 Remove deprecated metrics
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ClusterReadView.clusterMetrics")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.InternalClusterAction$MetricsTick$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.MetricsCollector")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.Metric")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.MetricsCollector$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.Metric$")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ClusterSettings.MetricsMovingAverageHalfLife")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ClusterSettings.MetricsGossipInterval")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ClusterSettings.MetricsCollectorClass")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ClusterSettings.MetricsInterval")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ClusterSettings.MetricsEnabled")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.JmxMetricsCollector")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.SigarMetricsCollector")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.StandardMetrics$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.MetricNumericConverter")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.ClusterEvent$ClusterMetricsChanged")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.MetricsGossipEnvelope")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.StandardMetrics")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.NodeMetrics")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.StandardMetrics$Cpu$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.StandardMetrics$Cpu")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.InternalClusterAction$PublisherCreated")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.EWMA")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.MetricsGossip$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.InternalClusterAction$PublisherCreated$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.NodeMetrics$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.MetricsGossipEnvelope$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.ClusterMetricsCollector")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.EWMA$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.StandardMetrics$HeapMemory")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.MetricsGossip")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.ClusterEvent$ClusterMetricsChanged$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.StandardMetrics$HeapMemory$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.SystemLoadAverageMetricsSelector$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.AdaptiveLoadBalancingMetricsListener")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.WeightedRoutees")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.AdaptiveLoadBalancingPool")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.CpuMetricsSelector$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.MixMetricsSelector")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.CapacityMetricsSelector")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.SystemLoadAverageMetricsSelector")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.AdaptiveLoadBalancingRoutingLogic")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.HeapMetricsSelector")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.AdaptiveLoadBalancingPool$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.CpuMetricsSelector")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.AdaptiveLoadBalancingRoutingLogic$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.HeapMetricsSelector$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.MetricsSelector$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.AdaptiveLoadBalancingGroup$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.MixMetricsSelectorBase")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.AdaptiveLoadBalancingGroup")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.MixMetricsSelector$")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.routing.MetricsSelector")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$NodeMetrics")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$NodeMetrics$EWMA$Builder")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$NodeMetrics$MetricOrBuilder")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$NodeMetrics$Number")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$NodeMetrics$NumberType")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$MetricsGossipEnvelopeOrBuilder")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$NodeMetrics$Builder")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$NodeMetricsOrBuilder")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$NodeMetrics$NumberOrBuilder")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$NodeMetrics$EWMA")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$MetricsGossip$Builder")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$MetricsGossipOrBuilder")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$MetricsGossipEnvelope")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$MetricsGossip")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$MetricsGossipEnvelope$Builder")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$NodeMetrics$EWMAOrBuilder")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$NodeMetrics$Metric")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$NodeMetrics$Metric$Builder")
ProblemFilters.exclude[MissingClassProblem]("akka.cluster.protobuf.msg.ClusterMessages$NodeMetrics$Number$Builder")
# #21537 coordinated shutdown
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ClusterCoreDaemon.removed")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.Gossip.convergence")
# #21423 removal of deprecated serializer constructors (in 2.5.x)
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.protobuf.ClusterMessageSerializer.this")
# #21423 remove deprecated methods in routing
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.routing.ClusterRouterGroup.paths")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.routing.ClusterRouterPool.nrOfInstances")
# #21944
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.ClusterEvent#ReachabilityEvent.member")

View file

@ -0,0 +1,7 @@
# #23257 replace ClusterRouterGroup/Pool "use-role" with "use-roles"
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.protobuf.msg.ClusterMessages#ClusterRouterPoolSettingsOrBuilder.getUseRoles")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.protobuf.msg.ClusterMessages#ClusterRouterPoolSettingsOrBuilder.getUseRolesBytes")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.protobuf.msg.ClusterMessages#ClusterRouterPoolSettingsOrBuilder.getUseRolesCount")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.protobuf.msg.ClusterMessages#ClusterRouterPoolSettingsOrBuilder.getUseRolesList")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.routing.ClusterRouterSettingsBase.useRole")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.routing.ClusterRouterSettingsBase.useRoles")

View file

@ -229,4 +229,5 @@ message UniqueAddress {
required uint32 maxInstancesPerNode = 2; required uint32 maxInstancesPerNode = 2;
required bool allowLocalRoutees = 3; required bool allowLocalRoutees = 3;
optional string useRole = 4; optional string useRole = 4;
repeated string useRoles = 5;
} }

View file

@ -291,9 +291,12 @@ akka {
# Useful for master-worker scenario where all routees are remote. # Useful for master-worker scenario where all routees are remote.
allow-local-routees = on allow-local-routees = on
# Use members with all specified roles, or all members if undefined or empty.
use-roles = []
# Deprecated, since Akka 2.5.4, replaced by use-roles
# Use members with specified role, or all members if undefined or empty. # Use members with specified role, or all members if undefined or empty.
use-role = "" use-role = ""
} }
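The new `use-roles` list and the deprecated `use-role` string are read together for backwards compatibility. A hypothetical standalone sketch of that merge (plain Scala, not the Akka source; `useRoleOption` mirrors the helper of the same name in `ClusterRouterSettingsBase`):

```scala
// Hypothetical standalone sketch (not the Akka source): fold the deprecated
// single "use-role" value into the new "use-roles" set, the way fromConfig
// combines both keys in this commit.
object RoleConfigMerge {
  // An empty or null string counts as "undefined", like useRoleOption.
  def useRoleOption(role: String): Option[String] =
    if (role == null || role.isEmpty) None else Some(role)

  def mergedUseRoles(useRoles: Set[String], useRole: String): Set[String] =
    useRoles ++ useRoleOption(useRole)
}
```

Because the result is a `Set`, a role that appears under both keys is counted once.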
# Protobuf serializer for cluster messages # Protobuf serializer for cluster messages

View file

@ -73,7 +73,7 @@ final class ClusterSettings(val config: Config, val systemName: String) {
} }
val SeedNodes: immutable.IndexedSeq[Address] = val SeedNodes: immutable.IndexedSeq[Address] =
immutableSeq(cc.getStringList("seed-nodes")).map { case AddressFromURIString(addr) addr }.toVector immutableSeq(cc.getStringList("seed-nodes")).map { case AddressFromURIString(address) address }.toVector
val SeedNodeTimeout: FiniteDuration = cc.getMillisDuration("seed-node-timeout") val SeedNodeTimeout: FiniteDuration = cc.getMillisDuration("seed-node-timeout")
val RetryUnsuccessfulJoinAfter: Duration = { val RetryUnsuccessfulJoinAfter: Duration = {
val key = "retry-unsuccessful-join-after" val key = "retry-unsuccessful-join-after"

View file

@ -13,8 +13,8 @@ import akka.serialization.{ BaseSerializer, SerializationExtension, SerializerWi
import akka.protobuf.{ ByteString, MessageLite } import akka.protobuf.{ ByteString, MessageLite }
import scala.annotation.tailrec import scala.annotation.tailrec
import scala.collection.JavaConverters._
import scala.collection.immutable import scala.collection.immutable
import scala.collection.JavaConverters._
import scala.concurrent.duration.Deadline import scala.concurrent.duration.Deadline
import java.io.NotSerializableException import java.io.NotSerializableException
@ -166,8 +166,11 @@ class ClusterMessageSerializer(val system: ExtendedActorSystem) extends BaseSeri
builder.setAllowLocalRoutees(settings.allowLocalRoutees) builder.setAllowLocalRoutees(settings.allowLocalRoutees)
.setMaxInstancesPerNode(settings.maxInstancesPerNode) .setMaxInstancesPerNode(settings.maxInstancesPerNode)
.setTotalInstances(settings.totalInstances) .setTotalInstances(settings.totalInstances)
.addAllUseRoles(settings.useRoles.asJava)
// for backwards compatibility
settings.useRole.foreach(builder.setUseRole) settings.useRole.foreach(builder.setUseRole)
builder.build() builder.build()
} }
@ -408,11 +411,12 @@ class ClusterMessageSerializer(val system: ExtendedActorSystem) extends BaseSeri
} }
private def clusterRouterPoolSettingsFromProto(crps: cm.ClusterRouterPoolSettings): ClusterRouterPoolSettings = { private def clusterRouterPoolSettingsFromProto(crps: cm.ClusterRouterPoolSettings): ClusterRouterPoolSettings = {
// For backwards compatibility, useRoles is the combination of getUseRole and getUseRolesList
ClusterRouterPoolSettings( ClusterRouterPoolSettings(
totalInstances = crps.getTotalInstances, totalInstances = crps.getTotalInstances,
maxInstancesPerNode = crps.getMaxInstancesPerNode, maxInstancesPerNode = crps.getMaxInstancesPerNode,
allowLocalRoutees = crps.getAllowLocalRoutees, allowLocalRoutees = crps.getAllowLocalRoutees,
useRole = if (crps.hasUseRole) Some(crps.getUseRole) else None useRoles = if (crps.hasUseRole) { crps.getUseRolesList.asScala.toSet + crps.getUseRole } else { crps.getUseRolesList.asScala.toSet }
) )
} }
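The deserialization side applies the same compatibility rule on the wire: a message from an older node carries only the singular `useRole` field, so it is folded into whatever the repeated `useRoles` field contains. A standalone sketch of that rule (hypothetical names, not the Akka source):

```scala
// Hypothetical standalone sketch (not the Akka source) of the
// wire-compatibility rule in clusterRouterPoolSettingsFromProto:
// the repeated useRoles field plus the optional legacy useRole field.
object WireRoles {
  def rolesFromWire(useRolesList: List[String], useRole: Option[String]): Set[String] =
    useRolesList.toSet ++ useRole
}
```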

View file

@ -26,16 +26,26 @@ import akka.routing.RoutingLogic
import com.typesafe.config.Config import com.typesafe.config.Config
import com.typesafe.config.ConfigFactory import com.typesafe.config.ConfigFactory
import scala.annotation.tailrec import scala.annotation.{ tailrec, varargs }
import scala.collection.immutable import scala.collection.immutable
import scala.collection.JavaConverters._
object ClusterRouterGroupSettings { object ClusterRouterGroupSettings {
@deprecated("useRole has been replaced with useRoles", since = "2.5.4")
def apply(totalInstances: Int, routeesPaths: immutable.Seq[String], allowLocalRoutees: Boolean, useRole: Option[String]): ClusterRouterGroupSettings =
ClusterRouterGroupSettings(totalInstances, routeesPaths, allowLocalRoutees, useRole.toSet)
@varargs
def apply(totalInstances: Int, routeesPaths: immutable.Seq[String], allowLocalRoutees: Boolean, useRoles: String*): ClusterRouterGroupSettings =
ClusterRouterGroupSettings(totalInstances, routeesPaths, allowLocalRoutees, useRoles.toSet)
// For backwards compatibility, useRoles is the combination of use-roles and use-role
def fromConfig(config: Config): ClusterRouterGroupSettings = def fromConfig(config: Config): ClusterRouterGroupSettings =
ClusterRouterGroupSettings( ClusterRouterGroupSettings(
totalInstances = ClusterRouterSettingsBase.getMaxTotalNrOfInstances(config), totalInstances = ClusterRouterSettingsBase.getMaxTotalNrOfInstances(config),
routeesPaths = immutableSeq(config.getStringList("routees.paths")), routeesPaths = immutableSeq(config.getStringList("routees.paths")),
allowLocalRoutees = config.getBoolean("cluster.allow-local-routees"), allowLocalRoutees = config.getBoolean("cluster.allow-local-routees"),
useRole = ClusterRouterSettingsBase.useRoleOption(config.getString("cluster.use-role"))) useRoles = config.getStringList("cluster.use-roles").asScala.toSet ++ ClusterRouterSettingsBase.useRoleOption(config.getString("cluster.use-role")))
} }
/** /**
@ -46,33 +56,71 @@ final case class ClusterRouterGroupSettings(
totalInstances: Int, totalInstances: Int,
routeesPaths: immutable.Seq[String], routeesPaths: immutable.Seq[String],
allowLocalRoutees: Boolean, allowLocalRoutees: Boolean,
useRole: Option[String]) extends ClusterRouterSettingsBase { useRoles: Set[String]) extends ClusterRouterSettingsBase {
// For binary compatibility
@deprecated("useRole has been replaced with useRoles", since = "2.5.4")
def useRole: Option[String] = useRoles.headOption
@deprecated("useRole has been replaced with useRoles", since = "2.5.4")
def this(totalInstances: Int, routeesPaths: immutable.Seq[String], allowLocalRoutees: Boolean, useRole: Option[String]) =
this(totalInstances, routeesPaths, allowLocalRoutees, useRole.toSet)
/** /**
* Java API * Java API
*/ */
@deprecated("useRole has been replaced with useRoles", since = "2.5.4")
def this(totalInstances: Int, routeesPaths: java.lang.Iterable[String], allowLocalRoutees: Boolean, useRole: String) = def this(totalInstances: Int, routeesPaths: java.lang.Iterable[String], allowLocalRoutees: Boolean, useRole: String) =
this(totalInstances, immutableSeq(routeesPaths), allowLocalRoutees, ClusterRouterSettingsBase.useRoleOption(useRole)) this(totalInstances, immutableSeq(routeesPaths), allowLocalRoutees, Option(useRole).toSet)
/**
* Java API
*/
def this(totalInstances: Int, routeesPaths: java.lang.Iterable[String], allowLocalRoutees: Boolean, useRoles: java.util.Set[String]) =
this(totalInstances, immutableSeq(routeesPaths), allowLocalRoutees, useRoles.asScala.toSet)
// For binary compatibility
@deprecated("Use copy with useRoles instead", since = "2.5.4")
def copy(totalInstances: Int = totalInstances, routeesPaths: immutable.Seq[String] = routeesPaths, allowLocalRoutees: Boolean = allowLocalRoutees, useRole: Option[String] = useRole): ClusterRouterGroupSettings =
new ClusterRouterGroupSettings(totalInstances, routeesPaths, allowLocalRoutees, useRole)
if (totalInstances <= 0) throw new IllegalArgumentException("totalInstances of cluster router must be > 0") if (totalInstances <= 0) throw new IllegalArgumentException("totalInstances of cluster router must be > 0")
if ((routeesPaths eq null) || routeesPaths.isEmpty || routeesPaths.head == "") if ((routeesPaths eq null) || routeesPaths.isEmpty || routeesPaths.head == "")
throw new IllegalArgumentException("routeesPaths must be defined") throw new IllegalArgumentException("routeesPaths must be defined")
routeesPaths.foreach(p p match { routeesPaths.foreach {
case RelativeActorPath(elements) // good case RelativeActorPath(elements) // good
case _ case p
throw new IllegalArgumentException(s"routeesPaths [$p] is not a valid actor path without address information") throw new IllegalArgumentException(s"routeesPaths [$p] is not a valid actor path without address information")
}) }
def withUseRoles(useRoles: Set[String]): ClusterRouterGroupSettings = new ClusterRouterGroupSettings(totalInstances, routeesPaths, allowLocalRoutees, useRoles)
@varargs
def withUseRoles(useRoles: String*): ClusterRouterGroupSettings = new ClusterRouterGroupSettings(totalInstances, routeesPaths, allowLocalRoutees, useRoles.toSet)
/**
* Java API
*/
def withUseRoles(useRoles: java.util.Set[String]): ClusterRouterGroupSettings = new ClusterRouterGroupSettings(totalInstances, routeesPaths, allowLocalRoutees, useRoles.asScala.toSet)
} }
object ClusterRouterPoolSettings { object ClusterRouterPoolSettings {
@deprecated("useRole has been replaced with useRoles", since = "2.5.4")
def apply(totalInstances: Int, maxInstancesPerNode: Int, allowLocalRoutees: Boolean, useRole: Option[String]): ClusterRouterPoolSettings =
ClusterRouterPoolSettings(totalInstances, maxInstancesPerNode, allowLocalRoutees, useRole.toSet)
@varargs
def apply(totalInstances: Int, maxInstancesPerNode: Int, allowLocalRoutees: Boolean, useRoles: String*): ClusterRouterPoolSettings =
ClusterRouterPoolSettings(totalInstances, maxInstancesPerNode, allowLocalRoutees, useRoles.toSet)
// For backwards compatibility, useRoles is the combination of use-roles and use-role
def fromConfig(config: Config): ClusterRouterPoolSettings = def fromConfig(config: Config): ClusterRouterPoolSettings =
ClusterRouterPoolSettings( ClusterRouterPoolSettings(
totalInstances = ClusterRouterSettingsBase.getMaxTotalNrOfInstances(config), totalInstances = ClusterRouterSettingsBase.getMaxTotalNrOfInstances(config),
maxInstancesPerNode = config.getInt("cluster.max-nr-of-instances-per-node"), maxInstancesPerNode = config.getInt("cluster.max-nr-of-instances-per-node"),
allowLocalRoutees = config.getBoolean("cluster.allow-local-routees"), allowLocalRoutees = config.getBoolean("cluster.allow-local-routees"),
useRole = ClusterRouterSettingsBase.useRoleOption(config.getString("cluster.use-role"))) useRoles = config.getStringList("cluster.use-roles").asScala.toSet ++ ClusterRouterSettingsBase.useRoleOption(config.getString("cluster.use-role")))
} }
/** /**
@ -85,16 +133,45 @@ final case class ClusterRouterPoolSettings(
totalInstances: Int, totalInstances: Int,
maxInstancesPerNode: Int, maxInstancesPerNode: Int,
allowLocalRoutees: Boolean, allowLocalRoutees: Boolean,
useRole: Option[String]) extends ClusterRouterSettingsBase { useRoles: Set[String]) extends ClusterRouterSettingsBase {
// For binary compatibility
@deprecated("useRole has been replaced with useRoles", since = "2.5.4")
def useRole: Option[String] = useRoles.headOption
@deprecated("useRole has been replaced with useRoles", since = "2.5.4")
def this(totalInstances: Int, maxInstancesPerNode: Int, allowLocalRoutees: Boolean, useRole: Option[String]) =
this(totalInstances, maxInstancesPerNode, allowLocalRoutees, useRole.toSet)
/** /**
* Java API * Java API
*/ */
@deprecated("useRole has been replaced with useRoles", since = "2.5.4")
def this(totalInstances: Int, maxInstancesPerNode: Int, allowLocalRoutees: Boolean, useRole: String) = def this(totalInstances: Int, maxInstancesPerNode: Int, allowLocalRoutees: Boolean, useRole: String) =
this(totalInstances, maxInstancesPerNode, allowLocalRoutees, ClusterRouterSettingsBase.useRoleOption(useRole)) this(totalInstances, maxInstancesPerNode, allowLocalRoutees, Option(useRole).toSet)
/**
* Java API
*/
def this(totalInstances: Int, maxInstancesPerNode: Int, allowLocalRoutees: Boolean, useRoles: java.util.Set[String]) =
this(totalInstances, maxInstancesPerNode, allowLocalRoutees, useRoles.asScala.toSet)
// For binary compatibility
@deprecated("Use copy with useRoles instead", since = "2.5.4")
def copy(totalInstances: Int = totalInstances, maxInstancesPerNode: Int = maxInstancesPerNode, allowLocalRoutees: Boolean = allowLocalRoutees, useRole: Option[String] = useRole): ClusterRouterPoolSettings =
new ClusterRouterPoolSettings(totalInstances, maxInstancesPerNode, allowLocalRoutees, useRole)
if (maxInstancesPerNode <= 0) throw new IllegalArgumentException("maxInstancesPerNode of cluster pool router must be > 0") if (maxInstancesPerNode <= 0) throw new IllegalArgumentException("maxInstancesPerNode of cluster pool router must be > 0")
def withUseRoles(useRoles: Set[String]): ClusterRouterPoolSettings = new ClusterRouterPoolSettings(totalInstances, maxInstancesPerNode, allowLocalRoutees, useRoles)
@varargs
def withUseRoles(useRoles: String*): ClusterRouterPoolSettings = new ClusterRouterPoolSettings(totalInstances, maxInstancesPerNode, allowLocalRoutees, useRoles.toSet)
/**
* Java API
*/
def withUseRoles(useRoles: java.util.Set[String]): ClusterRouterPoolSettings = new ClusterRouterPoolSettings(totalInstances, maxInstancesPerNode, allowLocalRoutees, useRoles.asScala.toSet)
} }
/** /**
@@ -125,10 +202,11 @@ private[akka] object ClusterRouterSettingsBase {
 private[akka] trait ClusterRouterSettingsBase {
   def totalInstances: Int
   def allowLocalRoutees: Boolean
-  def useRole: Option[String]
+  def useRoles: Set[String]

-  require(useRole.isEmpty || useRole.get.nonEmpty, "useRole must be either None or non-empty Some wrapped role")
   require(totalInstances > 0, "totalInstances of cluster router must be > 0")
+  require(useRoles != null, "useRoles must be non-null")
+  require(!useRoles.exists(role ⇒ role == null || role.isEmpty), "All roles in useRoles must be non-empty")
 }

 /**
@@ -141,11 +219,11 @@ private[akka] trait ClusterRouterSettingsBase {
 final case class ClusterRouterGroup(local: Group, settings: ClusterRouterGroupSettings) extends Group with ClusterRouterConfigBase {

   override def paths(system: ActorSystem): immutable.Iterable[String] =
-    if (settings.allowLocalRoutees && settings.useRole.isDefined) {
-      if (Cluster(system).selfRoles.contains(settings.useRole.get)) {
+    if (settings.allowLocalRoutees && settings.useRoles.nonEmpty) {
+      if (settings.useRoles.subsetOf(Cluster(system).selfRoles)) {
         settings.routeesPaths
       } else Nil
-    } else if (settings.allowLocalRoutees && settings.useRole.isEmpty) {
+    } else if (settings.allowLocalRoutees && settings.useRoles.isEmpty) {
       settings.routeesPaths
     } else Nil
@@ -157,8 +235,8 @@ final case class ClusterRouterGroup(local: Group, settings: ClusterRouterGroupSe
   override def withFallback(other: RouterConfig): RouterConfig = other match {
     case ClusterRouterGroup(_: ClusterRouterGroup, _) ⇒ throw new IllegalStateException(
       "ClusterRouterGroup is not allowed to wrap a ClusterRouterGroup")
-    case ClusterRouterGroup(local, _) ⇒
-      copy(local = this.local.withFallback(local).asInstanceOf[Group])
+    case ClusterRouterGroup(otherLocal, _) ⇒
+      copy(local = this.local.withFallback(otherLocal).asInstanceOf[Group])
     case _ ⇒
       copy(local = this.local.withFallback(other).asInstanceOf[Group])
   }
@@ -192,11 +270,11 @@ final case class ClusterRouterPool(local: Pool, settings: ClusterRouterPoolSetti
    * Initial number of routee instances
    */
   override def nrOfInstances(sys: ActorSystem): Int =
-    if (settings.allowLocalRoutees && settings.useRole.isDefined) {
-      if (Cluster(sys).selfRoles.contains(settings.useRole.get)) {
+    if (settings.allowLocalRoutees && settings.useRoles.nonEmpty) {
+      if (settings.useRoles.subsetOf(Cluster(sys).selfRoles)) {
         settings.maxInstancesPerNode
       } else 0
-    } else if (settings.allowLocalRoutees && settings.useRole.isEmpty) {
+    } else if (settings.allowLocalRoutees && settings.useRoles.isEmpty) {
       settings.maxInstancesPerNode
     } else 0
@@ -234,7 +312,7 @@ private[akka] trait ClusterRouterConfigBase extends RouterConfig {
   // Intercept ClusterDomainEvent and route them to the ClusterRouterActor
   override def isManagementMessage(msg: Any): Boolean =
-    (msg.isInstanceOf[ClusterDomainEvent]) || msg.isInstanceOf[CurrentClusterState] || super.isManagementMessage(msg)
+    msg.isInstanceOf[ClusterDomainEvent] || msg.isInstanceOf[CurrentClusterState] || super.isManagementMessage(msg)
 }

 /**
@@ -383,17 +461,14 @@ private[akka] trait ClusterRouterActor { this: RouterActor ⇒
   def isAvailable(m: Member): Boolean =
     (m.status == MemberStatus.Up || m.status == MemberStatus.WeaklyUp) &&
-      satisfiesRole(m.roles) &&
+      satisfiesRoles(m.roles) &&
       (settings.allowLocalRoutees || m.address != cluster.selfAddress)

-  private def satisfiesRole(memberRoles: Set[String]): Boolean = settings.useRole match {
-    case None    ⇒ true
-    case Some(r) ⇒ memberRoles.contains(r)
-  }
+  private def satisfiesRoles(memberRoles: Set[String]): Boolean = settings.useRoles.subsetOf(memberRoles)

   def availableNodes: immutable.SortedSet[Address] = {
     import akka.cluster.Member.addressOrdering
-    if (nodes.isEmpty && settings.allowLocalRoutees && satisfiesRole(cluster.selfRoles))
+    if (nodes.isEmpty && settings.allowLocalRoutees && satisfiesRoles(cluster.selfRoles))
       // use my own node, cluster information not updated yet
       immutable.SortedSet(cluster.selfAddress)
     else
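The hunk above changes role matching from "member has the single configured role" to "the configured roles are a subset of the member's roles". A tiny pure-Scala sketch of that rule (the helper name mirrors the diff; the object wrapper is just for a runnable example):

```scala
// Sketch of the new multi-role matching rule introduced above: a member is
// eligible when every configured role is among the member's roles, and an
// empty configured set matches any member.
object RoleMatching extends App {
  def satisfiesRoles(useRoles: Set[String], memberRoles: Set[String]): Boolean =
    useRoles.subsetOf(memberRoles)

  assert(satisfiesRoles(Set.empty, Set("a")))           // no roles configured: matches any member
  assert(satisfiesRoles(Set("a"), Set("a", "b")))       // single role behaves as before
  assert(!satisfiesRoles(Set("a", "c"), Set("a", "b"))) // all configured roles must be present
  println("ok")
}
```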
@@ -404,11 +479,11 @@ private[akka] trait ClusterRouterActor { this: RouterActor ⇒
    * Fills in self address for local ActorRef
    */
   def fullAddress(routee: Routee): Address = {
-    val a = routee match {
+    val address = routee match {
       case ActorRefRoutee(ref)       ⇒ ref.path.address
       case ActorSelectionRoutee(sel) ⇒ sel.anchor.path.address
     }
-    a match {
+    address match {
       case Address(_, _, None, None) ⇒ cluster.selfAddress
       case a                         ⇒ a
     }
@@ -458,4 +533,3 @@ private[akka] trait ClusterRouterActor { this: RouterActor ⇒
       if (isAvailable(m)) addMember(m)
   }
 }


@@ -76,7 +76,7 @@ abstract class ClusterConsistentHashingGroupSpec extends MultiNodeSpec(ClusterCo
       val router = system.actorOf(
         ClusterRouterGroup(
           local = ConsistentHashingGroup(paths, hashMapping = hashMapping),
-          settings = ClusterRouterGroupSettings(totalInstances = 10, paths, allowLocalRoutees = true, useRole = None)).props(),
+          settings = ClusterRouterGroupSettings(totalInstances = 10, paths, allowLocalRoutees = true)).props(),
         "router")
       // it may take some time until router receives cluster member events
       awaitAssert { currentRoutees(router).size should ===(3) }


@@ -124,7 +124,7 @@ abstract class ClusterConsistentHashingRouterSpec extends MultiNodeSpec(ClusterC
       val router2 = system.actorOf(
         ClusterRouterPool(
           local = ConsistentHashingPool(nrOfInstances = 0),
-          settings = ClusterRouterPoolSettings(totalInstances = 10, maxInstancesPerNode = 2, allowLocalRoutees = true, useRole = None)).
+          settings = ClusterRouterPoolSettings(totalInstances = 10, maxInstancesPerNode = 2, allowLocalRoutees = true)).
           props(Props[Echo]),
         "router2")
       // it may take some time until router receives cluster member events
@@ -159,7 +159,7 @@ abstract class ClusterConsistentHashingRouterSpec extends MultiNodeSpec(ClusterC
       val router4 = system.actorOf(
         ClusterRouterPool(
           local = ConsistentHashingPool(nrOfInstances = 0, hashMapping = hashMapping),
-          settings = ClusterRouterPoolSettings(totalInstances = 10, maxInstancesPerNode = 1, allowLocalRoutees = true, useRole = None)).
+          settings = ClusterRouterPoolSettings(totalInstances = 10, maxInstancesPerNode = 1, allowLocalRoutees = true)).
           props(Props[Echo]),
         "router4")


@@ -85,7 +85,7 @@ object ClusterRoundRobinMultiJvmSpec extends MultiNodeConfig {
           router = round-robin-pool
           cluster {
             enabled = on
-            use-role = a
+            use-roles = ["a"]
             max-total-nr-of-instances = 10
           }
         }
@@ -115,7 +115,7 @@ abstract class ClusterRoundRobinSpec extends MultiNodeSpec(ClusterRoundRobinMult
   lazy val router2 = system.actorOf(
     ClusterRouterPool(
       RoundRobinPool(nrOfInstances = 0),
-      ClusterRouterPoolSettings(totalInstances = 3, maxInstancesPerNode = 1, allowLocalRoutees = true, useRole = None)).
+      ClusterRouterPoolSettings(totalInstances = 3, maxInstancesPerNode = 1, allowLocalRoutees = true)).
       props(Props[SomeActor]),
     "router2")
   lazy val router3 = system.actorOf(FromConfig.props(Props[SomeActor]), "router3")


@@ -99,12 +99,12 @@ abstract class UseRoleIgnoredSpec extends MultiNodeSpec(UseRoleIgnoredMultiJvmSp
     "pool local: off, roles: off, 6 => 0,2,2" taggedAs LongRunningTest in {
       runOn(first) {
-        val role = Some("b")
+        val roles = Set("b")

         val router = system.actorOf(
           ClusterRouterPool(
             RoundRobinPool(nrOfInstances = 6),
-            ClusterRouterPoolSettings(totalInstances = 6, maxInstancesPerNode = 2, allowLocalRoutees = false, useRole = role)).
+            ClusterRouterPoolSettings(totalInstances = 6, maxInstancesPerNode = 2, allowLocalRoutees = false, useRoles = roles)).
             props(Props[SomeActor]),
           "router-2")
@@ -129,13 +129,13 @@ abstract class UseRoleIgnoredSpec extends MultiNodeSpec(UseRoleIgnoredMultiJvmSp
     "group local: off, roles: off, 6 => 0,2,2" taggedAs LongRunningTest in {
       runOn(first) {
-        val role = Some("b")
+        val roles = Set("b")

         val router = system.actorOf(
           ClusterRouterGroup(
             RoundRobinGroup(paths = Nil),
             ClusterRouterGroupSettings(totalInstances = 6, routeesPaths = List("/user/foo", "/user/bar"),
-              allowLocalRoutees = false, useRole = role)).props,
+              allowLocalRoutees = false, useRoles = roles)).props,
           "router-2b")

         awaitAssert(currentRoutees(router).size should ===(4))
@@ -159,12 +159,12 @@ abstract class UseRoleIgnoredSpec extends MultiNodeSpec(UseRoleIgnoredMultiJvmSp
     "pool local: on, role: b, 6 => 0,2,2" taggedAs LongRunningTest in {
       runOn(first) {
-        val role = Some("b")
+        val roles = Set("b")

         val router = system.actorOf(
           ClusterRouterPool(
             RoundRobinPool(nrOfInstances = 6),
-            ClusterRouterPoolSettings(totalInstances = 6, maxInstancesPerNode = 2, allowLocalRoutees = true, useRole = role)).
+            ClusterRouterPoolSettings(totalInstances = 6, maxInstancesPerNode = 2, allowLocalRoutees = true, useRoles = roles)).
             props(Props[SomeActor]),
           "router-3")
@@ -189,13 +189,13 @@ abstract class UseRoleIgnoredSpec extends MultiNodeSpec(UseRoleIgnoredMultiJvmSp
     "group local: on, role: b, 6 => 0,2,2" taggedAs LongRunningTest in {
       runOn(first) {
-        val role = Some("b")
+        val roles = Set("b")

         val router = system.actorOf(
           ClusterRouterGroup(
             RoundRobinGroup(paths = Nil),
             ClusterRouterGroupSettings(totalInstances = 6, routeesPaths = List("/user/foo", "/user/bar"),
-              allowLocalRoutees = true, useRole = role)).props,
+              allowLocalRoutees = true, useRoles = roles)).props,
           "router-3b")

         awaitAssert(currentRoutees(router).size should ===(4))
@@ -219,12 +219,12 @@ abstract class UseRoleIgnoredSpec extends MultiNodeSpec(UseRoleIgnoredMultiJvmSp
     "pool local: on, role: a, 6 => 2,0,0" taggedAs LongRunningTest in {
       runOn(first) {
-        val role = Some("a")
+        val roles = Set("a")

         val router = system.actorOf(
           ClusterRouterPool(
             RoundRobinPool(nrOfInstances = 6),
-            ClusterRouterPoolSettings(totalInstances = 6, maxInstancesPerNode = 2, allowLocalRoutees = true, useRole = role)).
+            ClusterRouterPoolSettings(totalInstances = 6, maxInstancesPerNode = 2, allowLocalRoutees = true, useRoles = roles)).
             props(Props[SomeActor]),
           "router-4")
@@ -249,13 +249,13 @@ abstract class UseRoleIgnoredSpec extends MultiNodeSpec(UseRoleIgnoredMultiJvmSp
     "group local: on, role: a, 6 => 2,0,0" taggedAs LongRunningTest in {
       runOn(first) {
-        val role = Some("a")
+        val roles = Set("a")

         val router = system.actorOf(
           ClusterRouterGroup(
             RoundRobinGroup(paths = Nil),
             ClusterRouterGroupSettings(totalInstances = 6, routeesPaths = List("/user/foo", "/user/bar"),
-              allowLocalRoutees = true, useRole = role)).props,
+              allowLocalRoutees = true, useRoles = roles)).props,
           "router-4b")

         awaitAssert(currentRoutees(router).size should ===(2))
@@ -279,12 +279,12 @@ abstract class UseRoleIgnoredSpec extends MultiNodeSpec(UseRoleIgnoredMultiJvmSp
     "pool local: on, role: c, 6 => 2,2,2" taggedAs LongRunningTest in {
       runOn(first) {
-        val role = Some("c")
+        val roles = Set("c")

         val router = system.actorOf(
           ClusterRouterPool(
             RoundRobinPool(nrOfInstances = 6),
-            ClusterRouterPoolSettings(totalInstances = 6, maxInstancesPerNode = 2, allowLocalRoutees = true, useRole = role)).
+            ClusterRouterPoolSettings(totalInstances = 6, maxInstancesPerNode = 2, allowLocalRoutees = true, useRoles = roles)).
             props(Props[SomeActor]),
           "router-5")
@@ -309,13 +309,13 @@ abstract class UseRoleIgnoredSpec extends MultiNodeSpec(UseRoleIgnoredMultiJvmSp
     "group local: on, role: c, 6 => 2,2,2" taggedAs LongRunningTest in {
       runOn(first) {
-        val role = Some("c")
+        val roles = Set("c")

         val router = system.actorOf(
           ClusterRouterGroup(
             RoundRobinGroup(paths = Nil),
             ClusterRouterGroupSettings(totalInstances = 6, routeesPaths = List("/user/foo", "/user/bar"),
-              allowLocalRoutees = true, useRole = role)).props,
+              allowLocalRoutees = true, useRoles = roles)).props,
           "router-5b")

         awaitAssert(currentRoutees(router).size should ===(6))


@@ -57,7 +57,7 @@ class ClusterDeployerSpec extends AkkaSpec(ClusterDeployerSpec.deployerConf) {
           service,
           deployment.get.config,
           ClusterRouterPool(RoundRobinPool(20), ClusterRouterPoolSettings(
-            totalInstances = 20, maxInstancesPerNode = 3, allowLocalRoutees = false, useRole = None)),
+            totalInstances = 20, maxInstancesPerNode = 3, allowLocalRoutees = false)),
           ClusterScope,
           Deploy.NoDispatcherGiven,
           Deploy.NoMailboxGiven)))
@@ -73,7 +73,7 @@ class ClusterDeployerSpec extends AkkaSpec(ClusterDeployerSpec.deployerConf) {
           service,
           deployment.get.config,
           ClusterRouterGroup(RoundRobinGroup(List("/user/myservice")), ClusterRouterGroupSettings(
-            totalInstances = 20, routeesPaths = List("/user/myservice"), allowLocalRoutees = false, useRole = None)),
+            totalInstances = 20, routeesPaths = List("/user/myservice"), allowLocalRoutees = false)),
           ClusterScope,
           "mydispatcher",
           "mymailbox")))


@@ -4,12 +4,12 @@
 package akka.cluster.protobuf

 import akka.cluster._
-import akka.actor.{ Address, ExtendedActorSystem }
+import akka.actor.{ ActorSystem, Address, ExtendedActorSystem }
 import akka.cluster.routing.{ ClusterRouterPool, ClusterRouterPoolSettings }
-import akka.routing.{ DefaultOptimalSizeExploringResizer, RoundRobinPool }
+import akka.routing.RoundRobinPool
 import collection.immutable.SortedSet
-import akka.testkit.AkkaSpec
+import akka.testkit.{ AkkaSpec, TestKit }

 class ClusterMessageSerializerSpec extends AkkaSpec(
   "akka.actor.provider = cluster") {
@@ -80,6 +80,41 @@ class ClusterMessageSerializerSpec extends AkkaSpec(
       checkSerialization(InternalClusterAction.Welcome(uniqueAddress, g2))
     }

+    "be compatible with wire format of version 2.5.3 (using use-role instead of use-roles)" in {
+      val system = ActorSystem("ClusterMessageSerializer-old-wire-format")
+      try {
+        val serializer = new ClusterMessageSerializer(system.asInstanceOf[ExtendedActorSystem])
+
+        // the oldSnapshot was created with the version of ClusterRouterPoolSettings in Akka 2.5.3. See issue #23257.
+        // It was created with:
+        /*
+        import org.apache.commons.codec.binary.Hex.encodeHex
+        val bytes = serializer.toBinary(
+          ClusterRouterPool(RoundRobinPool(nrOfInstances = 4), ClusterRouterPoolSettings(123, 345, true, Some("role ABC"))))
+        println(String.valueOf(encodeHex(bytes)))
+        */
+        val oldBytesHex = "0a0f08101205524f5252501a04080418001211087b10d90218012208726f6c6520414243"
+
+        import org.apache.commons.codec.binary.Hex.decodeHex
+        val oldBytes = decodeHex(oldBytesHex.toCharArray)
+        val result = serializer.fromBinary(oldBytes, classOf[ClusterRouterPool])
+
+        result match {
+          case pool: ClusterRouterPool ⇒
+            pool.settings.totalInstances should ===(123)
+            pool.settings.maxInstancesPerNode should ===(345)
+            pool.settings.allowLocalRoutees should ===(true)
+            pool.settings.useRole should ===(Some("role ABC"))
+            pool.settings.useRoles should ===(Set("role ABC"))
+        }
+      } finally {
+        TestKit.shutdownActorSystem(system)
+      }
+    }
+
     "add a default data center role if none is present" in {
       val env = roundtrip(GossipEnvelope(a1.uniqueAddress, d1.uniqueAddress, Gossip(SortedSet(a1, d1))))
       env.gossip.members.head.roles should be(Set(ClusterSettings.DcRolePrefix + "default"))
@@ -87,7 +122,34 @@ class ClusterMessageSerializerSpec extends AkkaSpec(
     }
   }

   "Cluster router pool" must {
-    "be serializable" in {
+    "be serializable with no role" in {
+      checkSerialization(ClusterRouterPool(
+        RoundRobinPool(
+          nrOfInstances = 4
+        ),
+        ClusterRouterPoolSettings(
+          totalInstances = 2,
+          maxInstancesPerNode = 5,
+          allowLocalRoutees = true
+        )
+      ))
+    }
+
+    "be serializable with one role" in {
+      checkSerialization(ClusterRouterPool(
+        RoundRobinPool(
+          nrOfInstances = 4
+        ),
+        ClusterRouterPoolSettings(
+          totalInstances = 2,
+          maxInstancesPerNode = 5,
+          allowLocalRoutees = true,
+          useRoles = Set("Richard, Duke of Gloucester")
+        )
+      ))
+    }
+
+    "be serializable with many roles" in {
       checkSerialization(ClusterRouterPool(
         RoundRobinPool(
           nrOfInstances = 4),
@@ -95,7 +157,9 @@ class ClusterMessageSerializerSpec extends AkkaSpec(
           totalInstances = 2,
           maxInstancesPerNode = 5,
           allowLocalRoutees = true,
-          useRole = Some("Richard, Duke of Gloucester"))))
+          useRoles = Set("Richard, Duke of Gloucester", "Hongzhi Emperor", "Red Rackham")
+        )
+      ))
     }
   }
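The wire-compatibility test above pins the 2.5.3 serialized form as a hex snapshot that must still deserialize. The serializer itself is outside this excerpt, so here is only a plain-Scala sketch of the hex snapshot round-trip idea (the helper names `decodeHex`/`encodeHex` are local stand-ins for the commons-codec calls used in the test):

```scala
// Sketch: keep old wire bytes as a stable hex snapshot; the snapshot must
// survive a decode/encode round trip before being fed to the deserializer.
object HexSnapshot extends App {
  def decodeHex(s: String): Array[Byte] =
    s.grouped(2).map(Integer.parseInt(_, 16).toByte).toArray

  def encodeHex(bytes: Array[Byte]): String =
    bytes.map(b => f"${b & 0xff}%02x").mkString

  // the 2.5.3 snapshot from the test above
  val snapshot = "0a0f08101205524f5252501a04080418001211087b10d90218012208726f6c6520414243"
  val bytes = decodeHex(snapshot)
  assert(encodeHex(bytes) == snapshot) // snapshot is stable under the round trip
  println("ok")
}
```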


@@ -41,8 +41,7 @@ class ClusterRouterSupervisorSpec extends AkkaSpec("""
         }), ClusterRouterPoolSettings(
           totalInstances = 1,
           maxInstancesPerNode = 1,
-          allowLocalRoutees = true,
-          useRole = None)).
+          allowLocalRoutees = true)).
         props(Props(classOf[KillableActor], testActor)), name = "therouter")

       router ! "go away"


@@ -0,0 +1,2 @@
# #18328 optimize VersionVector for size 1
ProblemFilters.exclude[Problem]("akka.cluster.ddata.VersionVector*")


@@ -0,0 +1,3 @@
# #20644 long uids
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.ddata.protobuf.msg.ReplicatorMessages#UniqueAddressOrBuilder.hasUid2")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.ddata.protobuf.msg.ReplicatorMessages#UniqueAddressOrBuilder.getUid2")


@@ -0,0 +1,4 @@
# #21645 durable distributed data
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.WriteAggregator.props")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.WriteAggregator.this")
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.cluster.ddata.Replicator.write")


@@ -0,0 +1,67 @@
# #22269 GSet as delta-CRDT
# constructor supplied by companion object
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.GSet.this")
# #21875 delta-CRDT
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.GCounter.this")
# #22188 ORSet delta-CRDT
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.ORSet.this")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.ddata.protobuf.SerializationSupport.versionVectorToProto")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.ddata.protobuf.SerializationSupport.versionVectorFromProto")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.ddata.protobuf.SerializationSupport.versionVectorFromBinary")
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.cluster.ddata.protobuf.ReplicatedDataSerializer.versionVectorToProto")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.cluster.ddata.protobuf.ReplicatedDataSerializer.versionVectorFromProto")
# #21647 pruning
ProblemFilters.exclude[Problem]("akka.cluster.ddata.PruningState*")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.ddata.RemovedNodePruning.modifiedByNodes")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.ddata.RemovedNodePruning.usingNodes")
ProblemFilters.exclude[Problem]("akka.cluster.ddata.Replicator*")
ProblemFilters.exclude[Problem]("akka.cluster.ddata.protobuf.msg*")
# #21648 Prefer reachable nodes in consistency writes/reads
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.ddata.ReadWriteAggregator.unreachable")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.WriteAggregator.this")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.WriteAggregator.props")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.ReadAggregator.this")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.ReadAggregator.props")
# #22035 Make it possible to use anything as the key in a map
ProblemFilters.exclude[Problem]("akka.cluster.ddata.protobuf.msg.ReplicatedDataMessages*")
ProblemFilters.exclude[Problem]("akka.cluster.ddata.ORMap*")
ProblemFilters.exclude[Problem]("akka.cluster.ddata.LWWMap*")
ProblemFilters.exclude[Problem]("akka.cluster.ddata.PNCounterMap*")
ProblemFilters.exclude[Problem]("akka.cluster.ddata.ORMultiMap*")
# #20140 durable distributed data
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.Replicator#ReplicationDeleteFailure.apply")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.Replicator#DeleteSuccess.apply")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.ddata.Replicator#DeleteResponse.getRequest")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.ddata.Replicator#DeleteResponse.request")
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.ddata.Replicator#Command.request")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.Replicator.receiveDelete")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.Replicator#ReplicationDeleteFailure.copy")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.Replicator#ReplicationDeleteFailure.this")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.Replicator#DeleteSuccess.copy")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.Replicator#DeleteSuccess.this")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.Replicator#Delete.apply")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.Replicator#DataDeleted.apply")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.Replicator#DataDeleted.copy")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.Replicator#DataDeleted.this")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.Replicator#Delete.copy")
# #21618 distributed data
ProblemFilters.exclude[MissingTypesProblem]("akka.cluster.ddata.Replicator$ReadMajority$")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.Replicator#ReadMajority.copy")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.Replicator#ReadMajority.apply")
ProblemFilters.exclude[MissingTypesProblem]("akka.cluster.ddata.Replicator$WriteMajority$")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.Replicator#WriteMajority.copy")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.Replicator#WriteMajority.apply")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.cluster.ddata.DurableStore#Store.apply")
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.cluster.ddata.DurableStore#Store.copy$default$2")
ProblemFilters.exclude[IncompatibleResultTypeProblem]("akka.cluster.ddata.DurableStore#Store.data")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.cluster.ddata.DurableStore#Store.copy")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.cluster.ddata.DurableStore#Store.this")
ProblemFilters.exclude[IncompatibleMethTypeProblem]("akka.cluster.ddata.LmdbDurableStore.dbPut")


@@ -0,0 +1,6 @@
# #22759 LMDB files
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.LmdbDurableStore.env")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.LmdbDurableStore.db")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.LmdbDurableStore.keyBuffer")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.LmdbDurableStore.valueBuffer_=")
ProblemFilters.exclude[DirectMissingMethodProblem]("akka.cluster.ddata.LmdbDurableStore.valueBuffer")


@@ -0,0 +1,2 @@
# #23025 OversizedPayloadException DeltaPropagation
ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.cluster.ddata.DeltaPropagationSelector.maxDeltaSize")


@@ -523,7 +523,8 @@ object Replicator {

     /** Java API */
     def getRequest: Optional[Any] = Optional.ofNullable(request.orNull)
   }
-  final case class UpdateSuccess[A <: ReplicatedData](key: Key[A], request: Option[Any]) extends UpdateResponse[A]
+  final case class UpdateSuccess[A <: ReplicatedData](key: Key[A], request: Option[Any])
+    extends UpdateResponse[A] with DeadLetterSuppression
   sealed abstract class UpdateFailure[A <: ReplicatedData] extends UpdateResponse[A]

   /**
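The hunk above mixes `DeadLetterSuppression` into `UpdateSuccess` so that late replies to a stopped requester are not logged as dead letters. A simplified stand-in for the marker-trait idea (the trait and filtering below are illustrative, not the actor system's actual dead-letter machinery):

```scala
// Sketch: a marker trait lets the dead-letter logger skip messages that are
// expected to arrive after the recipient has stopped.
trait DeadLetterSuppression
final case class UpdateSuccess(key: String) extends DeadLetterSuppression
final case class UpdateFailure(key: String)

object DeadLetterDemo extends App {
  val deadLetters: List[Any] = List(UpdateSuccess("k1"), UpdateFailure("k2"))
  // only non-suppressed messages would be logged
  val logged = deadLetters.filterNot(_.isInstanceOf[DeadLetterSuppression])
  assert(logged == List(UpdateFailure("k2")))
  println("ok")
}
```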


@@ -81,7 +81,7 @@ abstract class DurableDataSpec(multiNodeConfig: DurableDataSpecConfig)

   implicit val cluster = Cluster(system)
-  val timeout = 5.seconds.dilated
+  val timeout = 14.seconds.dilated // initialization of lmdb can be very slow in CI environment
   val writeTwo = WriteTo(2, timeout)
   val readTwo = ReadFrom(2, timeout)
@@ -238,9 +238,9 @@ abstract class DurableDataSpec(multiNodeConfig: DurableDataSpecConfig)
       runOn(first) {
         val sys1 = ActorSystem("AdditionalSys", system.settings.config)
-        val addr = Cluster(sys1).selfAddress
+        val address = Cluster(sys1).selfAddress
         try {
-          Cluster(sys1).join(addr)
+          Cluster(sys1).join(address)
           new TestKit(sys1) with ImplicitSender {

             val r = newReplicator(sys1)
@@ -276,11 +276,11 @@ abstract class DurableDataSpec(multiNodeConfig: DurableDataSpecConfig)
           "AdditionalSys",
           // use the same port
           ConfigFactory.parseString(s"""
-            akka.remote.artery.canonical.port = ${addr.port.get}
-            akka.remote.netty.tcp.port = ${addr.port.get}
+            akka.remote.artery.canonical.port = ${address.port.get}
+            akka.remote.netty.tcp.port = ${address.port.get}
            """).withFallback(system.settings.config))
         try {
-          Cluster(sys2).join(addr)
+          Cluster(sys2).join(address)
           new TestKit(sys2) with ImplicitSender {
             val r2: ActorRef = newReplicator(sys2)


@@ -148,10 +148,10 @@ class DurablePruningSpec extends MultiNodeSpec(DurablePruningSpec) with STMultiN
       enterBarrier("pruned")

       runOn(first) {
-        val addr = cluster2.selfAddress
+        val address = cluster2.selfAddress
         val sys3 = ActorSystem(system.name, ConfigFactory.parseString(s"""
-          akka.remote.artery.canonical.port = ${addr.port.get}
-          akka.remote.netty.tcp.port = ${addr.port.get}
+          akka.remote.artery.canonical.port = ${address.port.get}
+          akka.remote.netty.tcp.port = ${address.port.get}
          """).withFallback(system.settings.config))
         val cluster3 = Cluster(sys3)
         val replicator3 = startReplicator(sys3)


@@ -16,10 +16,20 @@ enablePlugins(AkkaParadoxPlugin)

 name in (Compile, paradox) := "Akka"

+val paradoxBrowse = taskKey[Unit]("Open the docs in the default browser")
+paradoxBrowse := {
+  import java.awt.Desktop
+  val rootDocFile = (target in (Compile, paradox)).value / "index.html"
+  val log = streams.value.log
+  if (!rootDocFile.exists()) log.info("No generated docs found, generate with the 'paradox' task")
+  else if (Desktop.isDesktopSupported) Desktop.getDesktop.open(rootDocFile)
+  else log.info(s"Couldn't open default browser, but docs are at $rootDocFile")
+}
+
 paradoxProperties ++= Map(
   "akka.canonical.base_url" -> "http://doc.akka.io/docs/akka/current",
   "github.base_url" -> GitHub.url(version.value), // for links like this: @github[#1](#1) or @github[83986f9](83986f9)
-  "extref.akka.http.base_url" -> "http://doc.akka.io/docs/akka-http/current",
+  "extref.akka.http.base_url" -> "http://doc.akka.io/docs/akka-http/current/%s",
   "extref.wikipedia.base_url" -> "https://en.wikipedia.org/wiki/%s",
   "extref.github.base_url" -> (GitHub.url(version.value) + "/%s"), // for links to our sources
   "extref.samples.base_url" -> "https://github.com/akka/akka-samples/tree/2.5/%s",


@@ -1,4 +1,4 @@
# Contents # Contents
* @ref[Java Documentation](java/index.md) * @ref:[Java Documentation](java/index.md)
* @ref[Scala Documentation](scala/index.md) * @ref:[Scala Documentation](scala/index.md)


@@ -1,415 +0,0 @@
# Camel
@@@ warning
Akka Camel is deprecated in favour of [Alpakka](https://github.com/akka/alpakka), the Akka Streams based collection of integrations to various endpoints (including Camel).
@@@
## Introduction
The akka-camel module allows Untyped Actors to receive
and send messages over a great variety of protocols and APIs.
In addition to the native Scala and Java actor API, actors can now exchange messages with other systems over a large number
of protocols and APIs such as HTTP, SOAP, TCP, FTP, SMTP or JMS, to mention a
few. At the moment, approximately 80 protocols and APIs are supported.
### Apache Camel
The akka-camel module is based on [Apache Camel](http://camel.apache.org/), a powerful and light-weight
integration framework for the JVM. For an introduction to Apache Camel you may
want to read this [Apache Camel article](http://architects.dzone.com/articles/apache-camel-integration). Camel comes with a
large number of [components](http://camel.apache.org/components.html) that provide bindings to different protocols and
APIs. The [camel-extra](http://code.google.com/p/camel-extra/) project provides further components.
### Consumer
Here's an example of using Camel's integration components in Akka.
@@snip [MyEndpoint.java]($code$/java/jdocs/camel/MyEndpoint.java) { #Consumer-mina }
The above example exposes an actor over a TCP endpoint via Apache
Camel's [Mina component](http://camel.apache.org/mina2.html). The actor implements the *getEndpointUri* method to define
an endpoint from which it can receive messages. After starting the actor, TCP
clients can immediately send messages to and receive responses from that
actor. If the message exchange should go over HTTP (via Camel's Jetty
component), the actor's *getEndpointUri* method should return a different URI, for instance "jetty:[http://localhost:8877/example](http://localhost:8877/example)".
In the above case an extra constructor is added that can set the endpoint URI, which results in
*getEndpointUri* returning the URI that was set using this constructor.
### Producer
Actors can also trigger message exchanges with external systems i.e. produce to
Camel endpoints.
@@snip [Orders.java]($code$/java/jdocs/camel/Orders.java) { #Producer }
In the above example, any message sent to this actor will be sent to
the JMS queue `Orders`. Producer actors may choose from the same set of Camel
components as Consumer actors do.
Below is an example of how to send a message to the Orders producer.
@@snip [ProducerTestBase.java]($code$/java/jdocs/camel/ProducerTestBase.java) { #TellProducer }
### CamelMessage
The number of Camel components is constantly increasing. The akka-camel module
can support these in a plug-and-play manner. Just add them to your application's
classpath, define a component-specific endpoint URI and use it to exchange
messages over the component-specific protocols or APIs. This is possible because
Camel components bind protocol-specific message formats to a Camel-specific
[normalized message format](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/Message.java). The normalized message format hides
protocol-specific details from Akka and makes it therefore very easy to support
a large number of protocols through a uniform Camel component interface. The
akka-camel module further converts mutable Camel messages into immutable
representations which are used by Consumer and Producer actors for pattern
matching, transformation, serialization or storage. In the above example of the Orders Producer,
the XML message is put in the body of a newly created Camel Message with an empty set of headers.
You can also create a CamelMessage yourself with the appropriate body and headers as you see fit.
### CamelExtension
The akka-camel module is implemented as an Akka Extension, the `CamelExtension` object.
Extensions are loaded only once per `ActorSystem` and are managed by Akka.
The `CamelExtension` object provides access to the @extref[Camel](github:akka-camel/src/main/scala/akka/camel/Camel.scala) interface.
The @extref[Camel](github:akka-camel/src/main/scala/akka/camel/Camel.scala) interface in turn provides access to two important Apache Camel objects, the [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) and the `ProducerTemplate`.
Below you can see how you can get access to these Apache Camel objects.
@@snip [CamelExtensionTest.java]($code$/java/jdocs/camel/CamelExtensionTest.java) { #CamelExtension }
A `CamelExtension` is loaded only once per `ActorSystem`, which makes it safe to call the `CamelExtension` at any point in your code to get to the
Apache Camel objects associated with it. There is one [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) and one `ProducerTemplate` per `ActorSystem` that uses a `CamelExtension`.
By default, a new [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) is created when the `CamelExtension` starts. If you want to inject your own context instead,
you can implement the @extref[ContextProvider](github:akka-camel/src/main/scala/akka/camel/ContextProvider.scala) interface and add the FQCN of your implementation in the config, as the value of "akka.camel.context-provider".
This interface defines a single method `getContext()` used to load the [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java).
Below is an example of how to add the ActiveMQ component to the [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java), which is required when you would like to use the ActiveMQ component.
@@snip [CamelExtensionTest.java]($code$/java/jdocs/camel/CamelExtensionTest.java) { #CamelExtensionAddComponent }
The [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) joins the lifecycle of the `ActorSystem` and `CamelExtension` it is associated with; the [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) is started when
the `CamelExtension` is created, and it is shut down when the associated `ActorSystem` is shut down. The same is true for the `ProducerTemplate`.
The `CamelExtension` is used by both *Producer* and *Consumer* actors to interact with Apache Camel internally.
You can access the `CamelExtension` inside a *Producer* or a *Consumer* using the `camel` method, or access the *CamelContext*
directly using the `getCamelContext` method and the *ProducerTemplate* using the `getProducerTemplate` method.
Actors are created and started asynchronously. When a *Consumer* actor is created, the *Consumer* is published at its Camel endpoint
(more precisely, the route is added to the [CamelContext](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/CamelContext.java) from the [Endpoint](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/Endpoint.java) to the actor).
When a *Producer* actor is created, a [SendProcessor](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/processor/SendProcessor.java) and [Endpoint](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/Endpoint.java) are created so that the Producer can send messages to it.
Publication is done asynchronously; setting up an endpoint may still be in progress after you have
requested the actor to be created. Some Camel components can take a while to start up, and in some cases you might want to know when the endpoints are activated and ready to be used.
The @extref[Camel](github:akka-camel/src/main/scala/akka/camel/Camel.scala) interface allows you to find out when the endpoint is activated or deactivated.
@@snip [ActivationTestBase.java]($code$/java/jdocs/camel/ActivationTestBase.java) { #CamelActivation }
The above code shows that you can get a `Future` to the activation of the route from the endpoint to the actor, or you can wait in a blocking fashion on the activation of the route.
An `ActivationTimeoutException` is thrown if the endpoint could not be activated within the specified timeout. Deactivation works in a similar fashion:
@@snip [ActivationTestBase.java]($code$/java/jdocs/camel/ActivationTestBase.java) { #CamelDeactivation }
Deactivation of a Consumer or a Producer actor happens when the actor is terminated. For a Consumer, the route to the actor is stopped. For a Producer, the [SendProcessor](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/processor/SendProcessor.java) is stopped.
A `DeActivationTimeoutException` is thrown if the associated Camel objects could not be deactivated within the specified timeout.
## Consumer Actors
For objects to receive messages, they must inherit from the @extref[UntypedConsumerActor](github:akka-camel/src/main/scala/akka/camel/javaapi/UntypedConsumer.scala)
class. For example, the following actor class (Consumer1) implements the
*getEndpointUri* method, which is declared in the @extref[UntypedConsumerActor](github:akka-camel/src/main/scala/akka/camel/javaapi/UntypedConsumer.scala) class, in order to receive
messages from the `file:data/input/actor` Camel endpoint.
@@snip [Consumer1.java]($code$/java/jdocs/camel/Consumer1.java) { #Consumer1 }
Whenever a file is put into the data/input/actor directory, its content is
picked up by the Camel [file component](http://camel.apache.org/file2.html) and sent as message to the
actor. Messages consumed by actors from Camel endpoints are of type
[CamelMessage](#camelmessage). These are immutable representations of Camel messages.
Here's another example that sets the endpointUri to
`jetty:http://localhost:8877/camel/default`. It causes Camel's Jetty
component to start an embedded [Jetty](http://www.eclipse.org/jetty/) server, accepting HTTP connections
from localhost on port 8877.
@@snip [Consumer2.java]($code$/java/jdocs/camel/Consumer2.java) { #Consumer2 }
After starting the actor, clients can send messages to that actor by POSTing to
`http://localhost:8877/camel/default`. The actor sends a response by using the
`getSender().tell` method. For returning a message body and headers to the HTTP
client the response type should be [CamelMessage](#camelmessage). For any other response type, a
new CamelMessage object is created by akka-camel with the actor response as message
body.
<a id="camel-acknowledgements"></a>
### Delivery acknowledgements
With in-out message exchanges, clients usually know that a message exchange is
done when they receive a reply from a consumer actor. The reply message can be a
CamelMessage (or any object which is then internally converted to a CamelMessage) on
success, and a Failure message on failure.
With in-only message exchanges, by default, an exchange is done when a message
is added to the consumer actor's mailbox. Any failure or exception that occurs
during processing of that message by the consumer actor cannot be reported back
to the endpoint in this case. To allow consumer actors to positively or
negatively acknowledge the receipt of a message from an in-only message
exchange, they need to override the `autoAck` method to return false.
In this case, consumer actors must reply either with a
special akka.camel.Ack message (positive acknowledgement) or a akka.actor.Status.Failure (negative
acknowledgement).
@@snip [Consumer3.java]($code$/java/jdocs/camel/Consumer3.java) { #Consumer3 }
<a id="camel-timeout"></a>
### Consumer timeout
Camel Exchanges (and their corresponding endpoints) that support two-way communications need to wait for a response from
an actor before returning it to the initiating client.
For some endpoint types, timeout values can be defined in an endpoint-specific
way which is described in the documentation of the individual Camel
components. Another option is to configure timeouts on the level of consumer actors.
Two-way communications between a Camel endpoint and an actor are
initiated by sending the request message to the actor with the @extref[ask](github:akka-actor/src/main/scala/akka/pattern/Patterns.scala) pattern
and the actor replies to the endpoint when the response is ready. The ask request to the actor can timeout, which will
result in the [Exchange](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/Exchange.java) failing with a TimeoutException set on the failure of the [Exchange](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/Exchange.java).
The timeout on the consumer actor can be overridden with the `replyTimeout`, as shown below.
@@snip [Consumer4.java]($code$/java/jdocs/camel/Consumer4.java) { #Consumer4 }
## Producer Actors
For sending messages to Camel endpoints, actors need to inherit from the @extref[UntypedProducerActor](github:akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala) class and implement the getEndpointUri method.
@@snip [Producer1.java]($code$/java/jdocs/camel/Producer1.java) { #Producer1 }
Producer1 inherits a default implementation of the onReceive method from the
@extref[UntypedProducerActor](github:akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala) class. To customize a producer actor's default behavior you must override the @extref[UntypedProducerActor](github:akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala).onTransformResponse and
@extref[UntypedProducerActor](github:akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala).onTransformOutgoingMessage methods. This is explained later in more detail.
Producer Actors cannot override the @extref[UntypedProducerActor](github:akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala).onReceive method.
Any message sent to a Producer actor will be sent to
the associated Camel endpoint, in the above example to
`http://localhost:8080/news`. The @extref[UntypedProducerActor](github:akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala) always sends messages asynchronously. Response messages (if supported by the
configured endpoint) will, by default, be returned to the original sender. The
following example uses the ask pattern to send a message to a
Producer actor and waits for a response.
@@snip [ProducerTestBase.java]($code$/java/jdocs/camel/ProducerTestBase.java) { #AskProducer }
The future contains the response CamelMessage, or an `AkkaCamelException` when an error occurred, which contains the headers of the response.
<a id="camel-custom-processing"></a>
### Custom Processing
Instead of replying to the initial sender, producer actors can implement custom
response processing by overriding the onRouteResponse method. In the following example, the response
message is forwarded to a target actor instead of being replied to the original
sender.
@@snip [ResponseReceiver.java]($code$/java/jdocs/camel/ResponseReceiver.java) { #RouteResponse }
@@snip [Forwarder.java]($code$/java/jdocs/camel/Forwarder.java) { #RouteResponse }
@@snip [OnRouteResponseTestBase.java]($code$/java/jdocs/camel/OnRouteResponseTestBase.java) { #RouteResponse }
Before producing messages to endpoints, producer actors can pre-process them by
overriding the @extref[UntypedProducerActor](github:akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala).onTransformOutgoingMessage method.
@@snip [Transformer.java]($code$/java/jdocs/camel/Transformer.java) { #TransformOutgoingMessage }
### Producer configuration options
The interaction of producer actors with Camel endpoints can be configured to be
one-way or two-way (by initiating in-only or in-out message exchanges,
respectively). By default, the producer initiates an in-out message exchange
with the endpoint. For initiating an in-only exchange, producer actors have to override the isOneway method to return true.
@@snip [OnewaySender.java]($code$/java/jdocs/camel/OnewaySender.java) { #Oneway }
### Message correlation
To correlate request with response messages, applications can set the
*Message.MessageExchangeId* message header.
@@snip [ProducerTestBase.java]($code$/java/jdocs/camel/ProducerTestBase.java) { #Correlate }
### ProducerTemplate
The @extref[UntypedProducerActor](github:akka-camel/src/main/scala/akka/camel/javaapi/UntypedProducerActor.scala) class is a very convenient way for actors to produce messages to Camel endpoints.
Actors may also use a Camel `ProducerTemplate` for producing messages to endpoints.
@@snip [MyActor.java]($code$/java/jdocs/camel/MyActor.java) { #ProducerTemplate }
For initiating a two-way message exchange, one of the
`ProducerTemplate.request*` methods must be used.
@@snip [RequestBodyActor.java]($code$/java/jdocs/camel/RequestBodyActor.java) { #RequestProducerTemplate }
<a id="camel-asynchronous-routing"></a>
## Asynchronous routing
In-out message exchanges between endpoints and actors are
designed to be asynchronous. This is the case for both consumer and producer
actors.
* A consumer endpoint sends request messages to its consumer actor using the `tell`
method and the actor returns responses with `getSender().tell` once they are
ready.
* A producer actor sends request messages to its endpoint using Camel's
asynchronous routing engine. Asynchronous responses are wrapped and added to the
producer actor's mailbox for later processing. By default, response messages are
returned to the initial sender but this can be overridden by Producer
implementations (see also description of the `onRouteResponse` method
in [Custom Processing](#camel-custom-processing)).
However, asynchronous two-way message exchanges, without allocating a thread for
the full duration of the exchange, cannot be generically supported by Camel's
asynchronous routing engine alone. This must be supported by the individual
Camel components (from which endpoints are created) as well. They must be
able to suspend any work started for request processing (thereby freeing threads
to do other work) and resume processing when the response is ready. This is
currently the case for a [subset of components](http://camel.apache.org/asynchronous-routing-engine.html) such as the Jetty component.
All other Camel components can still be used, of course, but they will cause
allocation of a thread for the duration of an in-out message exchange. There are
also [Examples](#camel-examples) that implement both an asynchronous
consumer and an asynchronous producer, with the Jetty component.
If the used Camel component is blocking it might be necessary to use a separate
@ref:[dispatcher](dispatchers.md) for the producer. The Camel processor is
invoked by a child actor of the producer and the dispatcher can be defined in
the deployment section of the configuration. For example, if your producer actor
has path `/user/integration/output` the dispatcher of the child actor can be
defined with:
```
akka.actor.deployment {
/integration/output/* {
dispatcher = my-dispatcher
}
}
```
## Custom Camel routes
In all the examples so far, routes to consumer actors have been automatically
constructed by akka-camel, when the actor was started. Although the default
route construction templates, used by akka-camel internally, are sufficient for
most use cases, some applications may require more specialized routes to actors.
The akka-camel module provides two mechanisms for customizing routes to actors,
which will be explained in this section. These are:
* Usage of [Akka Camel components](#camel-components) to access actors.
Any Camel route can use these components to access Akka actors.
* [Intercepting route construction](#camel-intercepting-route-construction) to actors.
This option gives you the ability to change routes that have already been added to Camel.
Consumer actors have a hook into the route definition process which can be used to change the route.
<a id="camel-components"></a>
### Akka Camel components
Akka actors can be accessed from Camel routes using the actor Camel component. This component can be used to
access any Akka actor (not only consumer actors) from Camel routes, as described in the following sections.
<a id="access-to-actors"></a>
### Access to actors
To access actors from custom Camel routes, the actor Camel
component should be used. It fully supports Camel's [asynchronous routing
engine](http://camel.apache.org/asynchronous-routing-engine.html).
This component accepts the following endpoint URI format:
* `[<actor-path>]?<options>`
where `<actor-path>` is the `ActorPath` to the actor. The `<options>` are
name-value pairs separated by `&` (i.e. `name1=value1&name2=value2&...`).
#### URI options
The following URI options are supported:
|Name | Type | Default | Description |
|-------------|----------|---------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|replyTimeout | Duration | false | The reply timeout, specified in the same way as durations in Akka, for instance `10 seconds`, except that in the URL it is handy to use a `+` between the amount and the unit, for example `200+millis`. See also [Consumer timeout](#camel-timeout).|
|autoAck | Boolean | true | If set to true, in-only message exchanges are auto-acknowledged when the message is added to the actor's mailbox. If set to false, actors must acknowledge the receipt of the message. See also [Delivery acknowledgements](#camel-acknowledgements). |
Here's an actor endpoint URI example containing an actor path:
```
akka://some-system/user/myconsumer?autoAck=false&replyTimeout=100+millis
```
In the following example, a custom route to an actor is created, using the
actor's path.
@@snip [Responder.java]($code$/java/jdocs/camel/Responder.java) { #CustomRoute }
@@snip [CustomRouteBuilder.java]($code$/java/jdocs/camel/CustomRouteBuilder.java) { #CustomRoute }
@@snip [CustomRouteTestBase.java]($code$/java/jdocs/camel/CustomRouteTestBase.java) { #CustomRoute }
The *CamelPath.toCamelUri* converts the *ActorRef* to the Camel actor component URI format which points to the actor endpoint as described above.
When a message is received on the jetty endpoint, it is routed to the Responder actor, which in return replies back to the client of
the HTTP request.
<a id="camel-intercepting-route-construction"></a>
### Intercepting route construction
The previous section, [Akka Camel components](#camel-components), explained how to set up a route to
an actor manually.
It was the application's responsibility to define the route and add it to the current CamelContext.
This section explains a more convenient way to define custom routes: akka-camel is still setting up the routes to consumer actors
(and adds these routes to the current CamelContext) but applications can define extensions to these routes.
Extensions can be defined with Camel's [Java DSL](http://camel.apache.org/dsl.html) or [Scala DSL](http://camel.apache.org/scala-dsl.html). For example, an extension could be a custom error handler that redelivers messages from an endpoint to an actor's bounded mailbox when the mailbox was full.
The following examples demonstrate how to extend a route to a consumer actor for
handling exceptions thrown by that actor.
@@snip [ErrorThrowingConsumer.java]($code$/java/jdocs/camel/ErrorThrowingConsumer.java) { #ErrorThrowingConsumer }
The above ErrorThrowingConsumer sends the Failure back to the sender in preRestart
because the Exception that is thrown in the actor would
otherwise just crash the actor; by default the actor would be restarted, and the response would never reach the client of the Consumer.
The akka-camel module creates a RouteDefinition instance by calling
from(endpointUri) on a Camel RouteBuilder (where endpointUri is the endpoint URI
of the consumer actor) and passes that instance as argument to the route
definition handler *). The route definition handler then extends the route and
returns a ProcessorDefinition (in the above example, the ProcessorDefinition
returned by the end method. See the [org.apache.camel.model](https://svn.apache.org/repos/asf/camel/tags/camel-2.8.0/camel-core/src/main/java/org/apache/camel/model/) package for
details). After executing the route definition handler, akka-camel finally calls
a to(targetActorUri) on the returned ProcessorDefinition to complete the
route to the consumer actor (where targetActorUri is the actor component URI as described in [Access to actors](#access-to-actors)).
If the actor cannot be found, an *ActorNotRegisteredException* is thrown.
*) Before passing the RouteDefinition instance to the route definition handler,
akka-camel may make some further modifications to it.
<a id="camel-examples"></a>
## Examples
The sample named @extref[Akka Camel Samples with Java](ecs:akka-samples-camel-java) (@extref[source code](samples:akka-sample-camel-java))
contains 3 samples:
* Asynchronous routing and transformation - This example demonstrates how to implement consumer and
producer actors that support [Asynchronous routing](#camel-asynchronous-routing) with their Camel endpoints.
* Custom Camel route - Demonstrates the combined usage of a `Producer` and a
`Consumer` actor as well as the inclusion of a custom Camel route.
* Quartz Scheduler Example - Shows how simple it is to implement a cron-style scheduler by
using the Camel Quartz component.
## Configuration
There are several configuration properties for the Camel module, please refer
to the @ref:[reference configuration](general/configuration.md#config-akka-camel).
## Additional Resources
For an introduction to akka-camel 2, see also Peter Gabryanczyk's talk [Migrating akka-camel module to Akka 2.x](http://skillsmatter.com/podcast/scala/akka-2-x).
For an introduction to akka-camel 1, see also the [Appendix E - Akka and Camel](http://www.manning.com/ibsen/appEsample.pdf)
(pdf) of the book [Camel in Action](http://www.manning.com/ibsen/).
Other, more advanced external articles (for version 1) are:
* [Akka Consumer Actors: New Features and Best Practices](http://krasserm.blogspot.com/2011/02/akka-consumer-actors-new-features-and.html)
* [Akka Producer Actors: New Features and Best Practices](http://krasserm.blogspot.com/2011/02/akka-producer-actor-new-features-and.html)


@@ -0,0 +1 @@
../scala/camel.md


@@ -1,665 +0,0 @@
# Distributed Data
*Akka Distributed Data* is useful when you need to share data between nodes in an
Akka Cluster. The data is accessed with an actor providing a key-value store like API.
The keys are unique identifiers with type information of the data values. The values
are *Conflict Free Replicated Data Types* (CRDTs).
All data entries are spread to all nodes, or nodes with a certain role, in the cluster
via direct replication and gossip based dissemination. You have fine grained control
of the consistency level for reads and writes.
The nature of CRDTs makes it possible to perform updates from any node without coordination.
Concurrent updates from different nodes will automatically be resolved by the monotonic
merge function, which all data types must provide. The state changes always converge.
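Such a monotonic merge can be illustrated with a toy grow-only counter in plain Java (a simplified sketch, not Akka's `GCounter` implementation): each node increments only its own slot, and merge takes the per-node maximum, so replicas converge regardless of merge order.

```java
import java.util.HashMap;
import java.util.Map;

public class ToyGCounter {
  final Map<String, Long> perNode = new HashMap<>();

  ToyGCounter increment(String node) {
    perNode.merge(node, 1L, Long::sum);
    return this;
  }

  // Monotonic merge: take the maximum count observed for each node.
  // Commutative, associative and idempotent, so replicas converge.
  ToyGCounter merge(ToyGCounter other) {
    ToyGCounter result = new ToyGCounter();
    result.perNode.putAll(perNode);
    other.perNode.forEach((node, count) -> result.perNode.merge(node, count, Math::max));
    return result;
  }

  long value() {
    return perNode.values().stream().mapToLong(Long::longValue).sum();
  }

  public static void main(String[] args) {
    ToyGCounter a = new ToyGCounter().increment("nodeA").increment("nodeA");
    ToyGCounter b = new ToyGCounter().increment("nodeB");
    // Merge order does not matter: both directions yield the same value.
    System.out.println(a.merge(b).value()); // 3
    System.out.println(b.merge(a).value()); // 3
  }
}
```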
Several useful data types for counters, sets, maps and registers are provided and
you can also implement your own custom data types.
It is eventually consistent and geared toward providing high read and write availability
(partition tolerance), with low latency. Note that in an eventually consistent system a read may return an
out-of-date value.
## Using the Replicator
The `akka.cluster.ddata.Replicator` actor provides the API for interacting with the data.
The `Replicator` actor must be started on each node in the cluster, or group of nodes tagged
with a specific role. It communicates with other `Replicator` instances with the same path
(without address) that are running on other nodes. For convenience it can be used with the
`akka.cluster.ddata.DistributedData` extension but it can also be started as an ordinary
actor using `Replicator.props`. If it is started as an ordinary actor it is important
that it is given the same name, started on the same path, on all nodes.
Cluster members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up)
will participate in Distributed Data. This means that the data will be replicated to the
@ref:[WeaklyUp](cluster-usage.md#weakly-up) nodes with the background gossip protocol. Note that such nodes
will not participate in any actions where the consistency mode is to read/write from all
nodes or the majority of nodes. The @ref:[WeaklyUp](cluster-usage.md#weakly-up) node is not counted
as part of the cluster. So 3 nodes + 5 @ref:[WeaklyUp](cluster-usage.md#weakly-up) is essentially a
3 node cluster as far as consistent actions are concerned.
Below is an example of an actor that schedules tick messages to itself and for each tick
adds or removes elements from an `ORSet` (observed-remove set). It also subscribes to
changes of this key.
@@snip [DataBot.java]($code$/java/jdocs/ddata/DataBot.java) { #data-bot }
<a id="replicator-update"></a>
### Update
To modify and replicate a data value you send a `Replicator.Update` message to the local
`Replicator`.
The current data value for the `key` of the `Update` is passed as parameter to the `modify`
function of the `Update`. The function is supposed to return the new value of the data, which
will then be replicated according to the given consistency level.
The `modify` function is called by the `Replicator` actor and must therefore be a pure
function that only uses the data parameter and stable fields from enclosing scope. It must
for example not access the sender reference of an enclosing actor.
`Update` is intended to only be sent from an actor running in the same local `ActorSystem` as
the `Replicator`, because the `modify` function is typically not serializable.
You supply a write consistency level which has the following meaning:
* `writeLocal` the value will immediately be written only to the local replica,
and later disseminated with gossip
* `WriteTo(n)` the value will immediately be written to at least `n` replicas,
including the local replica
* `WriteMajority` the value will immediately be written to a majority of replicas, i.e.
at least **N/2 + 1** replicas, where N is the number of nodes in the cluster
(or cluster role group)
* `WriteAll` the value will immediately be written to all nodes in the cluster
(or all nodes in the cluster role group)
When you specify a write to `n` out of `x` nodes, the update will first be replicated to `n` nodes.
If there are not enough acknowledgements after 1/5th of the timeout, the update will be replicated to `n` other
nodes. If fewer than `n` nodes are left, all of the remaining nodes are used. Reachable nodes
are preferred over unreachable nodes.
Note that `WriteMajority` has a `minCap` parameter that is useful to specify to achieve better safety for small clusters.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #update }
As reply to the `Update`, a `Replicator.UpdateSuccess` is sent to the sender of the
`Update` if the value was successfully replicated according to the supplied consistency
level within the supplied timeout. Otherwise a `Replicator.UpdateFailure` subclass is
sent back. Note that a `Replicator.UpdateTimeout` reply does not mean that the update completely failed
or was rolled back. It may still have been replicated to some nodes, and will eventually
be replicated to all nodes with the gossip protocol.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #update-response1 }
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #update-response2 }
You will always see your own writes. For example, if you send two `Update` messages
changing the value of the same `key`, the `modify` function of the second message will
see the change that was performed by the first `Update` message.
In the `Update` message you can pass an optional request context, which the `Replicator`
does not care about, but is included in the reply messages. This is a convenient
way to pass contextual information (e.g. original sender) without having to use `ask`
or maintain local correlation data structures.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #update-request-context }
<a id="replicator-get"></a>
### Get
To retrieve the current value of a data entry you send a `Replicator.Get` message to the
`Replicator`. You supply a consistency level which has the following meaning:
* `readLocal` the value will only be read from the local replica
* `ReadFrom(n)` the value will be read and merged from `n` replicas,
including the local replica
* `ReadMajority` the value will be read and merged from a majority of replicas, i.e.
at least **N/2 + 1** replicas, where N is the number of nodes in the cluster
(or cluster role group)
* `ReadAll` the value will be read and merged from all nodes in the cluster
(or all nodes in the cluster role group)
Note that `ReadMajority` has a `minCap` parameter that is useful to specify to achieve better safety for small clusters.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #get }
As reply to the `Get`, a `Replicator.GetSuccess` is sent to the sender of the
`Get` if the value was successfully retrieved according to the supplied consistency
level within the supplied timeout. Otherwise a `Replicator.GetFailure` is sent.
If the key does not exist the reply will be `Replicator.NotFound`.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #get-response1 }
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #get-response2 }
You will always read your own writes. For example, if you send an `Update` message
followed by a `Get` of the same `key`, the `Get` will retrieve the change that was
performed by the preceding `Update` message. However, the order of the reply messages is
not defined, i.e. in the previous example you may receive the `GetSuccess` before
the `UpdateSuccess`.
In the `Get` message you can pass an optional request context in the same way as for the
`Update` message, described above. For example the original sender can be passed and replied
to after receiving and transforming `GetSuccess`.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #get-request-context }
### Consistency
The consistency level that is supplied in the [Update](#replicator-update) and [Get](#replicator-get)
specifies per request how many replicas must respond successfully to a write or read request.
For low latency reads you use `readLocal` with the risk of retrieving stale data, i.e. updates
from other nodes might not be visible yet.
When using `writeLocal` the update is only written to the local replica and then disseminated
in the background with the gossip protocol, which can take a few seconds to spread to all nodes.
`WriteAll` and `ReadAll` are the strongest consistency levels, but also the slowest and with the
lowest availability. For example, it is enough that one node is unavailable for a `Get` request
to fail, and you will not receive the value.
If consistency is important, you can ensure that a read always reflects the most recent
write by using the following formula:
```
(nodes_written + nodes_read) > N
```
where N is the total number of nodes in the cluster, or the number of nodes with the role that is
used for the `Replicator`.
For example, in a 7 node cluster these consistency properties are achieved by writing to 4 nodes
and reading from 4 nodes, or writing to 5 nodes and reading from 3 nodes.
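As a quick sanity check, the overlap rule can be expressed in a few lines of plain Java (a standalone sketch; `readSeesWrite` is a hypothetical helper, not part of the Akka API):

```java
// Sketch: why (nodes_written + nodes_read) > N guarantees that a read
// overlaps the preceding write. Hypothetical helper, not the Akka API.
public class QuorumSketch {
    // At least (w + r - n) replicas must be in both the write set and
    // the read set, so the read sees the write whenever w + r > n.
    static boolean readSeesWrite(int w, int r, int n) {
        return w + r > n;
    }

    public static void main(String[] args) {
        // 7 node cluster: write 4 / read 4, or write 5 / read 3 both work
        System.out.println(readSeesWrite(4, 4, 7)); // true
        System.out.println(readSeesWrite(5, 3, 7)); // true
        // write 3 / read 3 out of 7 may miss the latest write
        System.out.println(readSeesWrite(3, 3, 7)); // false
    }
}
```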
By combining `WriteMajority` and `ReadMajority` levels a read always reflects the most recent write.
The `Replicator` writes and reads to a majority of replicas, i.e. **N / 2 + 1**. For example,
in a 5 node cluster it writes to 3 nodes and reads from 3 nodes. In a 6 node cluster it writes
to 4 nodes and reads from 4 nodes.
You can define a minimum number of nodes for `WriteMajority` and `ReadMajority`;
this will minimize the risk of reading stale data. The minimum cap is
provided by the `minCap` property of `WriteMajority` and `ReadMajority` and defines the required majority.
If `minCap` is higher than **N / 2 + 1**, the `minCap` is used.
For example, if the `minCap` is 5, the `WriteMajority` and `ReadMajority` for a cluster of 3 nodes will be 3, for a
cluster of 6 nodes it will be 5, and for a cluster of 12 nodes it will be 7 ( **N / 2 + 1** ).
For small clusters (<7) the risk of membership changes between a `WriteMajority` and `ReadMajority`
is rather high, and then the nice properties of combining majority writes and reads are not
guaranteed. Therefore the `ReadMajority` and `WriteMajority` have a `minCap` parameter that
is useful to specify to achieve better safety for small clusters. It means that if the cluster
size is smaller than the majority size, the `minCap` number of nodes will be used, but at most
the total size of the cluster.
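The effective majority size described above can be sketched as follows (`effectiveMajority` is an illustrative helper mirroring the documented rule, not Akka source):

```java
// Sketch of how minCap raises the effective majority size for small
// clusters. Illustrative helper, not the Akka implementation.
public class MajoritySketch {
    static int effectiveMajority(int clusterSize, int minCap) {
        int majority = clusterSize / 2 + 1;
        // use minCap when it is higher, but never more nodes than exist
        return Math.min(Math.max(majority, minCap), clusterSize);
    }

    public static void main(String[] args) {
        // with minCap = 5: 3-node cluster -> 3, 6 nodes -> 5, 12 nodes -> 7
        System.out.println(effectiveMajority(3, 5));  // 3
        System.out.println(effectiveMajority(6, 5));  // 5
        System.out.println(effectiveMajority(12, 5)); // 7
    }
}
```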
Here is an example of using `writeMajority` and `readMajority`:
@@snip [ShoppingCart.java]($code$/java/jdocs/ddata/ShoppingCart.java) { #read-write-majority }
@@snip [ShoppingCart.java]($code$/java/jdocs/ddata/ShoppingCart.java) { #get-cart }
@@snip [ShoppingCart.java]($code$/java/jdocs/ddata/ShoppingCart.java) { #add-item }
In some rare cases, when performing an `Update`, you may need to first fetch the latest data from
other nodes. That can be done by first sending a `Get` with `ReadMajority` and then continuing with
the `Update` when the `GetSuccess`, `GetFailure` or `NotFound` reply is received. This might be
needed when you need to base a decision on the latest information or when removing entries from an `ORSet`
or `ORMap`. If an entry is added to an `ORSet` or `ORMap` from one node and removed from another
node, the entry will only be removed if the added entry is visible on the node where the removal is
performed (hence the name observed-remove set).
The following example illustrates how to do that:
@@snip [ShoppingCart.java]($code$/java/jdocs/ddata/ShoppingCart.java) { #remove-item }
@@@ warning
*Caveat:* Even if you use `writeMajority` and `readMajority` there is a small risk that you may
read stale data if the cluster membership has changed between the `Update` and the `Get`.
For example, in a cluster of 5 nodes, when you perform an `Update` the change is written to 3 nodes:
n1, n2, n3. Then 2 more nodes are added, and a `Get` request is reading from 4 nodes, which
happen to be n4, n5, n6, n7, i.e. the value on n1, n2, n3 is not seen in the response of the
`Get` request.
@@@
### Subscribe
You may also register interest in change notifications by sending a `Replicator.Subscribe`
message to the `Replicator`. It will send `Replicator.Changed` messages to the registered
subscriber when the data for the subscribed key is updated. Subscribers will be notified
periodically with the configured `notify-subscribers-interval`, and it is also possible to
send an explicit `Replicator.FlushChanges` message to the `Replicator` to notify the subscribers
immediately.
The subscriber is automatically removed if the subscriber is terminated. A subscriber can
also be deregistered with the `Replicator.Unsubscribe` message.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #subscribe }
### Delete
A data entry can be deleted by sending a `Replicator.Delete` message to the local
`Replicator`. As reply to the `Delete`, a `Replicator.DeleteSuccess` is sent to
the sender of the `Delete` if the value was successfully deleted according to the supplied
consistency level within the supplied timeout. Otherwise a `Replicator.ReplicationDeleteFailure`
is sent. Note that `ReplicationDeleteFailure` does not mean that the delete completely failed or
was rolled back. It may still have been replicated to some nodes, and may eventually be replicated
to all nodes.
A deleted key cannot be reused, but it is still recommended to delete unused
data entries because that reduces the replication overhead when new nodes join the cluster.
Subsequent `Delete`, `Update` and `Get` requests will be answered with `Replicator.DataDeleted`.
Subscribers will also receive `Replicator.DataDeleted`.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #delete }
@@@ warning
As deleted keys continue to be included in the stored data on each node as well as in gossip
messages, a continuous series of updates and deletes of top-level entities will result in
growing memory usage until an ActorSystem runs out of memory. To use Akka Distributed Data
where frequent adds and removes are required, you should use a fixed number of top-level data
types that support both updates and removals, for example `ORMap` or `ORSet`.
@@@
<a id="delta-crdt"></a>
### delta-CRDT
[Delta State Replicated Data Types](http://arxiv.org/abs/1603.01529)
are supported. A delta-CRDT is a way to reduce the need for sending the full state
for updates. For example, adding elements `'c'` and `'d'` to the set `{'a', 'b'}` would
result in sending the delta `{'c', 'd'}` and merging that with the state on the
receiving side, resulting in the set `{'a', 'b', 'c', 'd'}`.
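The idea can be sketched with plain Java sets (illustrative only, not the Akka delta-CRDT implementation):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of delta propagation for a grow-only set: only the delta
// {'c', 'd'} travels over the wire and is merged by set union.
// Illustrative only, not the Akka implementation.
public class DeltaSketch {
    static Set<Character> merge(Set<Character> state, Set<Character> delta) {
        Set<Character> merged = new HashSet<>(state);
        merged.addAll(delta); // merge is set union
        return merged;
    }

    public static void main(String[] args) {
        Set<Character> state = Set.of('a', 'b');
        Set<Character> delta = Set.of('c', 'd'); // sent instead of the full state
        System.out.println(merge(state, delta)); // {a, b, c, d} in some order
    }
}
```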
The protocol for replicating the deltas supports causal consistency if the data type
is marked with `RequiresCausalDeliveryOfDeltas`. Otherwise it is only eventually
consistent. Without causal consistency, if elements `'c'` and `'d'` are
added in two separate *Update* operations, these deltas may occasionally be propagated
to nodes in a different order than the causal order of the updates. In this example, the
set `{'a', 'b', 'd'}` may be seen before element `'c'` is seen. Eventually
it will be `{'a', 'b', 'c', 'd'}`.
Note that the full state is occasionally also replicated for delta-CRDTs, for example when
new nodes are added to the cluster or when deltas could not be propagated because
of network partitions or similar problems.
Delta propagation can be disabled with the configuration property:
```
akka.cluster.distributed-data.delta-crdt.enabled=off
```
## Data Types
The data types must be convergent (stateful) CRDTs and implement the `ReplicatedData` trait,
i.e. they provide a monotonic merge function and the state changes always converge.
You can use your own custom `AbstractReplicatedData` or `AbstractDeltaReplicatedData` types,
and several types are provided by this package, such as:
* Counters: `GCounter`, `PNCounter`
* Sets: `GSet`, `ORSet`
* Maps: `ORMap`, `ORMultiMap`, `LWWMap`, `PNCounterMap`
* Registers: `LWWRegister`, `Flag`
### Counters
`GCounter` is a "grow only counter". It only supports increments, no decrements.
It works in a similar way as a vector clock. It keeps track of one counter per node and the total
value is the sum of these counters. The `merge` is implemented by taking the maximum count for
each node.
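The per-node counters and the max-based merge can be sketched in plain Java (an illustrative model, not the actual `GCounter` implementation):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the GCounter idea: one counter per node, merge takes the
// per-node maximum, and the value is the sum of all per-node counters.
// Illustrative only, not the Akka implementation.
public class GCounterSketch {
    static Map<String, Long> merge(Map<String, Long> a, Map<String, Long> b) {
        Map<String, Long> result = new HashMap<>(a);
        // for each node keep the highest observed count
        b.forEach((node, count) -> result.merge(node, count, Math::max));
        return result;
    }

    static long value(Map<String, Long> counter) {
        return counter.values().stream().mapToLong(Long::longValue).sum();
    }

    public static void main(String[] args) {
        Map<String, Long> onNode1 = Map.of("n1", 3L, "n2", 1L);
        Map<String, Long> onNode2 = Map.of("n1", 2L, "n2", 4L);
        // merge is commutative, associative and idempotent
        System.out.println(value(merge(onNode1, onNode2))); // 3 + 4 = 7
    }
}
```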
If you need both increments and decrements you can use the `PNCounter` (positive/negative counter).
It tracks increments (P) separately from decrements (N). Both P and N are represented
as internal `GCounter` instances. Merge is handled by merging the internal P and N counters.
The value of the counter is the value of the P counter minus the value of the N counter.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #pncounter }
`GCounter` and `PNCounter` have support for [delta-CRDT](#delta-crdt) and don't need causal
delivery of deltas.
Several related counters can be managed in a map with the `PNCounterMap` data type.
When the counters are placed in a `PNCounterMap` as opposed to placing them as separate top level
values they are guaranteed to be replicated together as one unit, which is sometimes necessary for
related data.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #pncountermap }
### Sets
If you only need to add elements to a set and not remove elements the `GSet` (grow-only set) is
the data type to use. The elements can be any type of values that can be serialized.
Merge is simply the union of the two sets.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #gset }
`GSet` has support for [delta-CRDT](#delta-crdt) and it doesn't require causal delivery of deltas.
If you need add and remove operations you should use the `ORSet` (observed-remove set).
Elements can be added and removed any number of times. If an element is concurrently added and
removed, the add will win. You cannot remove an element that you have not seen.
The `ORSet` has a version vector that is incremented when an element is added to the set.
The version for the node that added the element is also tracked for each element in a so
called "birth dot". The version vector and the dots are used by the `merge` function to
track causality of the operations and resolve concurrent updates.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #orset }
`ORSet` has support for [delta-CRDT](#delta-crdt) and it requires causal delivery of deltas.
### Maps
`ORMap` (observed-remove map) is a map where the keys can be of any type and the values are `ReplicatedData`
types themselves. It supports add, update and remove any number of times for a map entry.
If an entry is concurrently added and removed, the add will win. You cannot remove an entry that
you have not seen. This is the same semantics as for the `ORSet`.
If an entry is concurrently updated to different values the values will be merged, hence the
requirement that the values must be `ReplicatedData` types.
It is rather inconvenient to use the `ORMap` directly since it does not expose specific types
of the values. The `ORMap` is intended as a low level tool for building more specific maps,
such as the following specialized maps.
`ORMultiMap` (observed-remove multi-map) is a multi-map implementation that wraps an
`ORMap` with an `ORSet` for the map's value.
`PNCounterMap` (positive negative counter map) is a map of named counters. It is a specialized
`ORMap` with `PNCounter` values.
`LWWMap` (last writer wins map) is a specialized `ORMap` with `LWWRegister` (last writer wins register)
values.
`ORMap`, `ORMultiMap`, `PNCounterMap` and `LWWMap` have support for [delta-CRDT](#delta-crdt) and they require causal
delivery of deltas. Support for deltas here means that the underlying `ORSet` holding the keys of all these maps
uses delta propagation to deliver updates. Effectively, an update for a map is then a pair, consisting of a delta for the key `ORSet`
and a full update for the respective value (`ORSet`, `PNCounter` or `LWWRegister`) kept in the map.
There is a special version of `ORMultiMap`, created with the separate constructor
`ORMultiMap.emptyWithValueDeltas[A, B]`, that also propagates the updates to its values (of `ORSet` type) as deltas.
This means that an `ORMultiMap` initiated with `ORMultiMap.emptyWithValueDeltas` propagates its updates as pairs
consisting of a delta of the key and a delta of the value. It is much more efficient in terms of network bandwidth consumed.
However, this behavior has not been made the default for `ORMultiMap`, because currently the merge process for
updates of `ORMultiMap.emptyWithValueDeltas` results in a tombstone (a form of [CRDT Garbage](#crdt-garbage))
in the form of an additional `ORSet` entry being created when a key has been added and then removed.
There is ongoing work aimed at removing the need for this tombstone. Please also note
that despite having the same Scala type, `ORMultiMap.emptyWithValueDeltas` is not compatible with 'vanilla' `ORMultiMap`,
because of the different replication mechanism.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #ormultimap }
When a data entry is changed the full state of that entry is replicated to other nodes, i.e.
when you update a map the whole map is replicated. Therefore, instead of using one `ORMap`
with 1000 elements it is more efficient to split that up in 10 top level `ORMap` entries
with 100 elements each. Top level entries are replicated individually, which has the
trade-off that different entries may not be replicated at the same time and you may see
inconsistencies between related entries. Separate top level entries cannot be updated atomically
together.
Note that `LWWRegister`, and therefore `LWWMap`, relies on synchronized clocks and should only be used
when the choice of value is not important for concurrent updates occurring within the clock skew. Read more
in the section about `LWWRegister` below.
### Flags and Registers
`Flag` is a data type for a boolean value that is initialized to `false` and can be switched
to `true`. Thereafter it cannot be changed. `true` wins over `false` in merge.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #flag }
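The `Flag` merge rule can be sketched in one line of plain Java (illustrative only, not the Akka implementation):

```java
// Sketch of Flag merge semantics: true wins, so once the flag has been
// switched to true it stays true. Illustrative only.
public class FlagSketch {
    static boolean merge(boolean a, boolean b) {
        return a || b; // true wins over false
    }

    public static void main(String[] args) {
        System.out.println(merge(false, true));  // true
        System.out.println(merge(false, false)); // false
    }
}
```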
`LWWRegister` (last writer wins register) can hold any (serializable) value.
Merge of a `LWWRegister` takes the register with the highest timestamp. Note that this
relies on synchronized clocks. `LWWRegister` should only be used when the choice of
value is not important for concurrent updates occurring within the clock skew.
If the timestamps are exactly the same, merge takes the register updated by the node
with the lowest address (`UniqueAddress` is ordered).
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #lwwregister }
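The merge rule, including the tie-break on node address, can be sketched in plain Java (field names are illustrative; this is not the Akka implementation):

```java
// Sketch of LWWRegister merge: the highest timestamp wins; on an exact
// tie the register from the node with the lowest address wins.
// Field names are illustrative, not the Akka implementation.
public class LwwSketch {
    final String value;
    final long timestamp;
    final int nodeAddress; // stands in for the ordered UniqueAddress

    LwwSketch(String value, long timestamp, int nodeAddress) {
        this.value = value;
        this.timestamp = timestamp;
        this.nodeAddress = nodeAddress;
    }

    static LwwSketch merge(LwwSketch a, LwwSketch b) {
        if (a.timestamp != b.timestamp)
            return a.timestamp > b.timestamp ? a : b; // highest timestamp wins
        return a.nodeAddress <= b.nodeAddress ? a : b; // lowest address wins ties
    }

    public static void main(String[] args) {
        LwwSketch x = new LwwSketch("x", 100, 2);
        LwwSketch y = new LwwSketch("y", 100, 1);
        // same timestamp, so the lower address (node 1) wins
        System.out.println(merge(x, y).value); // y
    }
}
```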
Instead of using timestamps based on `System.currentTimeMillis()` time it is possible to
use a timestamp value based on something else, for example an increasing version number
from a database record that is used for optimistic concurrency control.
@@snip [DistributedDataDocTest.java]($code$/java/jdocs/ddata/DistributedDataDocTest.java) { #lwwregister-custom-clock }
For first-write-wins semantics you can use the `LWWRegister#reverseClock` instead of the
`LWWRegister#defaultClock`.
The `defaultClock` uses the maximum of `System.currentTimeMillis()` and `currentTimestamp + 1`.
This means that the timestamp is increased for changes on the same node that occur within
the same millisecond. It also means that it is safe to use the `LWWRegister` without
synchronized clocks when there is only one active writer, e.g. a Cluster Singleton. Such a
single writer should then first read current value with `ReadMajority` (or more) before
changing and writing the value with `WriteMajority` (or more).
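The clock rule can be sketched as follows (an illustrative helper, not the actual `defaultClock`):

```java
// Sketch of the defaultClock rule: max(currentTimeMillis, previous + 1),
// so repeated writes on the same node within one millisecond still get
// strictly increasing timestamps. Illustrative only.
public class DefaultClockSketch {
    static long nextTimestamp(long wallClockMillis, long previousTimestamp) {
        return Math.max(wallClockMillis, previousTimestamp + 1);
    }

    public static void main(String[] args) {
        long t1 = nextTimestamp(1000, 0);  // 1000
        long t2 = nextTimestamp(1000, t1); // 1001: same millisecond, still increases
        System.out.println(t1 + " " + t2);
    }
}
```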
### Custom Data Type
You can rather easily implement your own data types. The only requirement is that your data type
implements the `mergeData` function of the `AbstractReplicatedData` class.
A nice property of stateful CRDTs is that they typically compose nicely, i.e. you can combine several
smaller data types to build richer data structures. For example, the `PNCounter` is composed of
two internal `GCounter` instances to keep track of increments and decrements separately.
Here is a simple implementation of a custom `TwoPhaseSet` that uses two internal `GSet` instances
to keep track of additions and removals. A `TwoPhaseSet` is a set where an element may be added and
removed, but never added again thereafter.
@@snip [TwoPhaseSet.java]($code$/java/jdocs/ddata/TwoPhaseSet.java) { #twophaseset }
Data types should be immutable, i.e. "modifying" methods should return a new instance.
Implement the additional methods of `AbstractDeltaReplicatedData` if it has support for delta-CRDT replication.
#### Serialization
The data types must be serializable with an @ref:[Akka Serializer](serialization.md).
It is highly recommended that you implement efficient serialization with Protobuf or similar
for your custom data types. The built in data types are marked with `ReplicatedDataSerialization`
and serialized with `akka.cluster.ddata.protobuf.ReplicatedDataSerializer`.
Serialization of the data types is used in remote messages and also for creating message
digests (SHA-1) to detect changes. Therefore it is important that the serialization is efficient
and produces the same bytes for the same content. For example, sets and maps should be sorted
deterministically in the serialization.
This is a protobuf representation of the above `TwoPhaseSet`:
@@snip [TwoPhaseSetMessages.proto]($code$/../main/protobuf/TwoPhaseSetMessages.proto) { #twophaseset }
The serializer for the `TwoPhaseSet`:
@@snip [TwoPhaseSetSerializer.java]($code$/java/jdocs/ddata/protobuf/TwoPhaseSetSerializer.java) { #serializer }
Note that the elements of the sets are sorted so the SHA-1 digests are the same
for the same elements.
You register the serializer in configuration:
@@snip [DistributedDataDocSpec.scala]($code$/scala/docs/ddata/DistributedDataDocSpec.scala) { #japi-serializer-config }
Using compression can sometimes be a good idea to reduce the data size. Gzip compression is
provided by the `akka.cluster.ddata.protobuf.SerializationSupport` trait:
@@snip [TwoPhaseSetSerializerWithCompression.java]($code$/java/jdocs/ddata/protobuf/TwoPhaseSetSerializerWithCompression.java) { #compression }
The two embedded `GSet` can be serialized as illustrated above, but in general when composing
new data types from the existing built in types it is better to make use of the existing
serializer for those types. This can be done by declaring those as bytes fields in protobuf:
@@snip [TwoPhaseSetMessages.proto]($code$/../main/protobuf/TwoPhaseSetMessages.proto) { #twophaseset2 }
and use the methods `otherMessageToProto` and `otherMessageFromBinary` that are provided
by the `SerializationSupport` trait to serialize and deserialize the `GSet` instances. This
works with any type that has a registered Akka serializer. This is how such a serializer would
look for the `TwoPhaseSet`:
@@snip [TwoPhaseSetSerializer2.java]($code$/java/jdocs/ddata/protobuf/TwoPhaseSetSerializer2.java) { #serializer }
<a id="ddata-durable"></a>
### Durable Storage
By default the data is only kept in memory. It is redundant since it is replicated to other nodes
in the cluster, but if you stop all nodes the data is lost, unless you have saved it
elsewhere.
Entries can be configured to be durable, i.e. stored on local disk on each node. The stored data will be loaded
the next time the replicator is started, i.e. when the actor system is restarted. This means data will survive as
long as at least one node from the old cluster takes part in a new cluster. The keys of the durable entries
are configured with:
```
akka.cluster.distributed-data.durable.keys = ["a", "b", "durable*"]
```
Prefix matching is supported by using `*` at the end of a key.
All entries can be made durable by specifying:
```
akka.cluster.distributed-data.durable.keys = ["*"]
```
[LMDB](https://github.com/lmdbjava/lmdbjava/) is the default storage implementation. It is
possible to replace that with another implementation by implementing the actor protocol described in
`akka.cluster.ddata.DurableStore` and defining the `akka.cluster.distributed-data.durable.store-actor-class`
property for the new implementation.
The location of the files for the data is configured with:
```
# Directory of LMDB file. There are two options:
# 1. A relative or absolute path to a directory that ends with 'ddata'
# the full name of the directory will contain name of the ActorSystem
# and its remote port.
# 2. Otherwise the path is used as is, as a relative or absolute path to
# a directory.
akka.cluster.distributed-data.lmdb.dir = "ddata"
```
When running in production you may want to configure the directory to a specific
path (alt 2), since the default directory contains the remote port of the
actor system to make the name unique. If using a dynamically assigned
port (0) it will be different each time and the previously stored data
will not be loaded.
Making the data durable of course has a performance cost. By default, each update is flushed
to disk before the `UpdateSuccess` reply is sent. For better performance, but with the risk of losing
the last writes if the JVM crashes, you can enable write behind mode. Changes are then accumulated during
a time period before they are written to LMDB and flushed to disk. Enabling write behind is especially
efficient when performing many writes to the same key, because it is only the last value for each key
that will be serialized and stored. The risk of losing writes if the JVM crashes is small since the
data is typically replicated to other nodes immediately according to the given `WriteConsistency`.
```
akka.cluster.distributed-data.lmdb.write-behind-interval = 200 ms
```
Note that you should be prepared to receive `WriteFailure` as reply to an `Update` of a
durable entry if the data could not be stored for some reason. When enabling `write-behind-interval`
such errors will only be logged and `UpdateSuccess` will still be the reply to the `Update`.
There is one important caveat when it comes to pruning of [CRDT Garbage](#crdt-garbage) for durable data.
If an old data entry that was never pruned is injected and merged with existing data after
the pruning markers have been removed, the value will not be correct. The time-to-live
of the markers is defined by the configuration property
`akka.cluster.distributed-data.durable.remove-pruning-marker-after` and is on the order of days.
This can happen if a node with durable data did not participate in the pruning
(e.g. it was shut down) and is started again after this time. A node with durable data should not
be stopped for a longer time than this duration, and if it joins again after this
duration its data should first be manually removed (from the lmdb directory).
<a id="crdt-garbage"></a>
### CRDT Garbage
One thing that can be problematic with CRDTs is that some data types accumulate history (garbage).
For example a `GCounter` keeps track of one counter per node. If a `GCounter` has been updated
from one node it will associate the identifier of that node forever. That can become a problem
for long running systems with many cluster nodes being added and removed. To solve this problem
the `Replicator` performs pruning of data associated with nodes that have been removed from the
cluster. Data types that need pruning have to implement the `RemovedNodePruning` trait. See the
API documentation of the `Replicator` for details.
## Samples
Several interesting samples are included and described in the
tutorial named @extref[Akka Distributed Data Samples with Java](ecs:akka-samples-distributed-data-java) (@extref[source code](samples:akka-sample-distributed-data-java))
* Low Latency Voting Service
* Highly Available Shopping Cart
* Distributed Service Registry
* Replicated Cache
* Replicated Metrics
## Limitations
There are some limitations that you should be aware of.
CRDTs cannot be used for all types of problems, and eventual consistency does not fit
all domains. Sometimes you need strong consistency.
It is not intended for *Big Data*. The current recommended limit is 100000 top level entries.
When a new node is added to the cluster all these entries are transferred (gossiped) to the
new node. The entries are split up in chunks and all existing nodes collaborate in the gossip,
but it will take a while (tens of seconds) to transfer all entries, which means that you
cannot have too many top level entries. We will
be able to improve this if needed, but the design is still not intended for billions of entries.
All data is held in memory, which is another reason why it is not intended for *Big Data*.
When a data entry is changed the full state of that entry may be replicated to other nodes
if it doesn't support [delta-CRDT](#delta-crdt). The full state is also replicated for delta-CRDTs,
for example when new nodes are added to the cluster or when deltas could not be propagated because
of network partitions or similar problems. This means that you cannot have too large
data entries, because then the remote message size will be too large.
## Learn More about CRDTs
* [The Final Causal Frontier](http://www.ustream.tv/recorded/61448875)
talk by Sean Cribbs
* [Eventually Consistent Data Structures](https://vimeo.com/43903960)
talk by Sean Cribbs
* [Strong Eventual Consistency and Conflict-free Replicated Data Types](http://research.microsoft.com/apps/video/default.aspx?id=153540&r=1)
talk by Mark Shapiro
* [A comprehensive study of Convergent and Commutative Replicated Data Types](http://hal.upmc.fr/file/index/docid/555588/filename/techreport.pdf)
paper by Mark Shapiro et al.
## Dependencies
To use Distributed Data you must add the following dependency in your project.
sbt
: @@@vars
```
"com.typesafe.akka" %% "akka-distributed-data" % "$akka.version$"
```
@@@
Maven
: @@@vars
```
<dependency>
<groupId>com.typesafe.akka</groupId>
<artifactId>akka-distributed-data_$scala.binary_version$</artifactId>
<version>$akka.version$</version>
</dependency>
```
@@@
## Configuration
The `DistributedData` extension can be configured with the following properties:
@@snip [reference.conf]($akka$/akka-distributed-data/src/main/resources/reference.conf) { #distributed-data }
# Distributed Publish Subscribe in Cluster
How do I send a message to an actor without knowing which node it is running on?
How do I send messages to all actors in the cluster that have registered interest
in a named topic?
This pattern provides a mediator actor, `akka.cluster.pubsub.DistributedPubSubMediator`,
that manages a registry of actor references and replicates the entries to peer
actors among all cluster nodes or a group of nodes tagged with a specific role.
The `DistributedPubSubMediator` actor is supposed to be started on all nodes,
or all nodes with a specified role, in the cluster. The mediator can be
started with the `DistributedPubSub` extension or as an ordinary actor.
The registry is eventually consistent, i.e. changes are not immediately visible at
other nodes, but typically they will be fully replicated to all other nodes after
a few seconds. Changes are only performed in the local part of the registry and those
changes are versioned. Deltas are disseminated in a scalable way to other nodes with
a gossip protocol.
Cluster members with status @ref:[WeaklyUp](cluster-usage.md#weakly-up)
will participate in Distributed Publish Subscribe, i.e. subscribers on nodes with
`WeaklyUp` status will receive published messages if the publisher and subscriber are on
the same side of a network partition.
You can send messages via the mediator on any node to registered actors on
any other node.
There are two different modes of message delivery, explained in the sections
[Publish](#distributed-pub-sub-publish) and [Send](#distributed-pub-sub-send) below.
<a id="distributed-pub-sub-publish"></a>
## Publish
This is the true pub/sub mode. A typical usage of this mode is a chat room in an instant
messaging application.
Actors are registered to a named topic. This enables many subscribers on each node.
The message will be delivered to all subscribers of the topic.
For efficiency the message is sent over the wire only once per node (that has a matching topic),
and then delivered to all subscribers of the local topic representation.
You register actors to the local mediator with `DistributedPubSubMediator.Subscribe`.
Successful `Subscribe` and `Unsubscribe` requests are acknowledged with
`DistributedPubSubMediator.SubscribeAck` and `DistributedPubSubMediator.UnsubscribeAck`
replies. The acknowledgment means that the subscription is registered, but it can still
take some time until it is replicated to other nodes.
You publish messages by sending a `DistributedPubSubMediator.Publish` message to the
local mediator.
Actors are automatically removed from the registry when they are terminated, or you
can explicitly remove entries with `DistributedPubSubMediator.Unsubscribe`.
An example of a subscriber actor:
@@snip [DistributedPubSubMediatorTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #subscriber }
Subscriber actors can be started on several nodes in the cluster, and all will receive
messages published to the "content" topic.
@@snip [DistributedPubSubMediatorTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #start-subscribers }
A simple actor that publishes to this "content" topic:
@@snip [DistributedPubSubMediatorTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #publisher }
It can publish messages to the topic from anywhere in the cluster:
@@snip [DistributedPubSubMediatorTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #publish-message }
### Topic Groups
Actors may also be subscribed to a named topic with a `group` id.
If subscribing with a group id, each message published to a topic with the
`sendOneMessageToEachGroup` flag set to `true` is delivered via the supplied `RoutingLogic`
(default random) to one actor within each subscribing group.
If all the subscribed actors have the same group id, then this works just like
`Send` and each message is only delivered to one subscriber.
If all the subscribed actors have different group names, then this works like a
normal `Publish` and each message is broadcast to all subscribers.
@@@ note
Note that if the group id is used it is part of the topic identifier.
Messages published with `sendOneMessageToEachGroup=false` will not be delivered
to subscribers that subscribed with a group id.
Messages published with `sendOneMessageToEachGroup=true` will not be delivered
to subscribers that subscribed without a group id.
@@@
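As a sketch of how group subscriptions look (assuming a mediator obtained from the `DistributedPubSub` extension inside an actor; the topic and group names are made up for illustration):

```java
import akka.actor.ActorRef;
import akka.cluster.pubsub.DistributedPubSub;
import akka.cluster.pubsub.DistributedPubSubMediator;

// inside an actor: subscribe self to the "chatroom" topic as a member of "group1"
ActorRef mediator = DistributedPubSub.get(getContext().getSystem()).mediator();
mediator.tell(
  new DistributedPubSubMediator.Subscribe("chatroom", "group1", getSelf()),
  getSelf());

// publish with sendOneMessageToEachGroup=true so that one actor per
// subscribing group receives the message
mediator.tell(
  new DistributedPubSubMediator.Publish("chatroom", "hello", true),
  getSelf());
```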
<a id="distributed-pub-sub-send"></a>
## Send
This is a point-to-point mode where each message is delivered to one destination,
but you still do not have to know where the destination is located.
A typical usage of this mode is private chat to one other user in an instant messaging
application. It can also be used for distributing tasks to registered workers, like a
cluster-aware router where the routees can register themselves dynamically.
The message will be delivered to one recipient with a matching path, if any such
exists in the registry. If several entries match the path because it has been registered
on several nodes the message will be sent via the supplied `RoutingLogic` (default random)
to one destination. The sender of the message can specify that local affinity is preferred,
i.e. the message is sent to an actor in the same local actor system as the used mediator actor,
if any such exists; otherwise it is routed to any other matching entry.
You register actors to the local mediator with `DistributedPubSubMediator.Put`.
The `ActorRef` in `Put` must belong to the same local actor system as the mediator.
The path without address information is the key to which you send messages.
On each node there can only be one actor for a given path, since the path is unique
within one local actor system.
You send messages by sending a `DistributedPubSubMediator.Send` message to the
local mediator with the path (without address information) of the destination
actors.
Actors are automatically removed from the registry when they are terminated, or you
can explicitly remove entries with `DistributedPubSubMediator.Remove`.
An example of a destination actor:
@@snip [DistributedPubSubMediatorTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #send-destination }
Destination actors can be started on several nodes in the cluster, and all will receive
messages sent to the path (without address information).
@@snip [DistributedPubSubMediatorTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #start-send-destinations }
A simple actor that sends to the path:
@@snip [DistributedPubSubMediatorTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #sender }
It can send messages to the path from anywhere in the cluster:
@@snip [DistributedPubSubMediatorTest.java]($akka$/akka-cluster-tools/src/test/java/akka/cluster/pubsub/DistributedPubSubMediatorTest.java) { #send-message }
It is also possible to broadcast messages to the actors that have been registered with
`Put`. Send a `DistributedPubSubMediator.SendToAll` message to the local mediator and the wrapped message
will then be delivered to all recipients with a matching path. Actors with
the same path, without address information, can be registered on different nodes.
On each node there can only be one such actor, since the path is unique within one
local actor system.
Typical usage of this mode is to broadcast messages to all replicas
with the same path, e.g. 3 actors on different nodes that all perform the same actions,
for redundancy. You can also optionally specify a property (`allButSelf`) deciding
if the message should be sent to a matching path on the self node or not.
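A broadcast to all registered destinations can be sketched like this (inside an actor; `/user/destination` is an assumed example path):

```java
import akka.actor.ActorRef;
import akka.cluster.pubsub.DistributedPubSub;
import akka.cluster.pubsub.DistributedPubSubMediator;

// inside an actor: deliver the wrapped message to every actor registered
// with Put under /user/destination, skipping this node (allButSelf = true)
ActorRef mediator = DistributedPubSub.get(getContext().getSystem()).mediator();
mediator.tell(
  new DistributedPubSubMediator.SendToAll("/user/destination", "hello", true),
  getSelf());
```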
## DistributedPubSub Extension
In the example above the mediator is started and accessed with the `akka.cluster.pubsub.DistributedPubSub` extension.
That is convenient and perfectly fine in most cases, but it can be good to know that it is possible to
start the mediator actor as an ordinary actor, and that you can run several different mediators at the same
time to divide a large number of actors/topics between them. For example, you might
want to use different cluster roles for different mediators.
The `DistributedPubSub` extension can be configured with the following properties:
@@snip [reference.conf]($akka$/akka-cluster-tools/src/main/resources/reference.conf) { #pub-sub-ext-config }
It is recommended to load the extension when the actor system is started by defining it in
the `akka.extensions` configuration property. Otherwise it will be activated on first use,
and it then takes a while for the registry to be populated.
```
akka.extensions = ["akka.cluster.pubsub.DistributedPubSub"]
```
## Delivery Guarantee
As with @ref:[Message Delivery Reliability](general/message-delivery-reliability.md) in Akka, the message delivery guarantee in distributed pub-sub modes is **at-most-once delivery**.
In other words, messages can be lost over the wire.
If you are looking for an at-least-once delivery guarantee, we recommend the [Kafka Akka Streams integration](http://doc.akka.io/docs/akka-stream-kafka/current/home.html).
## Dependencies
To use Distributed Publish Subscribe you must add the following dependency to your project.
sbt
: @@@vars
```
"com.typesafe.akka" %% "akka-cluster-tools" % "$akka.version$"
```
@@@
Maven
: @@@vars
```
<dependency>
<groupId>com.typesafe.akka</groupId>
<artifactId>akka-cluster-tools_$scala.binary_version$</artifactId>
<version>$akka.version$</version>
</dependency>
```
@@@

# Futures
## Introduction
In the Scala Standard Library, a [Future](http://en.wikipedia.org/wiki/Futures_and_promises) is a data structure
used to retrieve the result of some concurrent operation. This result can be accessed synchronously (blocking)
or asynchronously (non-blocking). To be able to use this from Java, Akka provides a Java-friendly interface
in `akka.dispatch.Futures`.
See also @ref:[Java 8 Compatibility](java8-compat.md) for Java compatibility.
## Execution Contexts
In order to execute callbacks and operations, Futures need something called an `ExecutionContext`,
which is very similar to a `java.util.concurrent.Executor`. If you have an `ActorSystem` in scope,
it will use its default dispatcher as the `ExecutionContext`, or you can use the factory methods provided
by the `ExecutionContexts` class to wrap `Executors` and `ExecutorServices`, or even create your own.
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports1 }
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #diy-execution-context }
## Use with Actors
There are generally two ways of getting a reply from an `AbstractActor`: the first is by a sent message (`actorRef.tell(msg, sender)`),
which only works if the original sender was an `AbstractActor`, and the second is through a `Future`.
Using the `ActorRef`'s `ask` method to send a message will return a `Future`.
To wait for and retrieve the actual result the simplest method is:
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports1 }
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #ask-blocking }
This will cause the current thread to block and wait for the `AbstractActor` to 'complete' the `Future` with its reply.
Blocking is discouraged though, as it can cause performance problems.
The blocking operations are located in `Await.result` and `Await.ready` to make it easy to spot where blocking occurs.
Alternatives to blocking are discussed further within this documentation.
Also note that the `Future` returned by an `AbstractActor` is a `Future<Object>` since an `AbstractActor` is dynamic.
That is why the cast to `String` is used in the above sample.
@@@ warning
`Await.result` and `Await.ready` are provided for exceptional situations where you **must** block,
a good rule of thumb is to only use them if you know why you **must** block. For all other cases, use
asynchronous composition as described below.
@@@
To send the result of a `Future` to an `Actor`, you can use the `pipe` construct:
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #pipe-to }
## Use Directly
A common use case within Akka is to have some computation performed concurrently without needing
the extra utility of an `AbstractActor`. If you find yourself creating a pool of `AbstractActor`s for the sole reason
of performing a calculation in parallel, there is an easier (and faster) way:
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports2 }
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #future-eval }
In the above code the block passed to `future` will be executed by the default `Dispatcher`,
with the return value of the block used to complete the `Future` (in this case, the result would be the string: "HelloWorld").
Unlike a `Future` that is returned from an `AbstractActor`, this `Future` is properly typed,
and we also avoid the overhead of managing an `AbstractActor`.
You can also create already completed Futures using the `Futures` class, which can be either successes:
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #successful }
Or failures:
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #failed }
It is also possible to create an empty `Promise`, to be filled later, and obtain the corresponding `Future`:
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #promise }
For these examples `PrintResult` is defined as follows:
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #print-result }
## Functional Futures
Scala's `Future` has several monadic methods that are very similar to the ones used by Scala's collections.
These allow you to create 'pipelines' or 'streams' that the result will travel through.
### Future is a Monad
The first method for working with `Future` functionally is `map`. This method takes a `Mapper` which performs
some operation on the result of the `Future`, returning a new result.
The return value of the `map` method is another `Future` that will contain the new result:
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports2 }
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #map }
In this example we are joining two strings together within a `Future`. Instead of waiting for f1 to complete,
we apply our function that calculates the length of the string using the `map` method.
Now we have a second `Future`, f2, that will eventually contain an `Integer`.
When our original `Future`, f1, completes, it will also apply our function and complete the second `Future`
with its result. When we finally `get` the result, it will contain the number 10.
Our original `Future` still contains the string "HelloWorld" and is unaffected by the `map`.
Something to note when using these methods: the passed work is always dispatched on the provided `ExecutionContext`,
even if the `Future` has already been completed when one of these methods is called.
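The same shape can be tried with plain Java 8 `CompletableFuture`, where `thenApply` plays the role of `map` (a JDK-only illustration, not the Scala `Future` API documented here):

```java
import java.util.concurrent.CompletableFuture;

public class MapExample {
  public static void main(String[] args) throws Exception {
    CompletableFuture<String> f1 =
        CompletableFuture.supplyAsync(() -> "Hello" + "World");
    // map the eventual String to its length; f1 itself is unaffected
    CompletableFuture<Integer> f2 = f1.thenApply(String::length);
    System.out.println(f2.get()); // prints 10
  }
}
```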
### Composing Futures
It is very often desirable to be able to combine different Futures with each other,
below are some examples on how that can be done in a non-blocking fashion.
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports3 }
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #sequence }
To better explain what happened in the example, `Future.sequence` is taking the `Iterable<Future<Integer>>`
and turning it into a `Future<Iterable<Integer>>`. We can then use `map` to work with the `Iterable<Integer>` directly,
and we aggregate the sum of the `Iterable`.
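A JDK-only analogue of this aggregation can be sketched with `CompletableFuture.allOf` (an illustration under the assumption of plain Java 8, not `Futures.sequence` itself):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class SequenceExample {
  public static void main(String[] args) throws Exception {
    List<CompletableFuture<Integer>> futures = Arrays.asList(
        CompletableFuture.supplyAsync(() -> 1),
        CompletableFuture.supplyAsync(() -> 2),
        CompletableFuture.supplyAsync(() -> 3));
    // "sequence": wait for all futures, then aggregate their results
    CompletableFuture<Integer> sum =
        CompletableFuture.allOf(futures.toArray(new CompletableFuture[0]))
            .thenApply(v -> futures.stream()
                .mapToInt(CompletableFuture::join)
                .sum());
    System.out.println(sum.get()); // prints 6
  }
}
```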
The `traverse` method is similar to `sequence`, but it takes a sequence of `A`s and applies a function from `A` to `Future<B>`,
returning a `Future<Iterable<B>>`. This enables a parallel `map` over the sequence, if you use `Futures.future` to create the `Future`.
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports4 }
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #traverse }
It's as simple as that!
Then there is a method called `fold` that takes a start-value, a sequence of `Future`s, a timeout,
and a function from the type of the start-value and the type of the futures to something with
the same type as the start-value. The function is applied, non-blockingly, to all elements
in the sequence of futures; the execution is started when the last of the Futures is completed.
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports5 }
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #fold }
That's all it takes!
If the sequence passed to `fold` is empty, it will return the start-value; in the case above, that would be the empty String.
In some cases you don't have a start-value but are able to use the value of the first completed `Future`
in the sequence as the start-value; then you can use `reduce`, which works like this:
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports6 }
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #reduce }
Same as with `fold`, the execution will be started when the last of the Futures is completed. You can also parallelize
it by chunking your futures into sub-sequences, reducing them, and then reducing the reduced results again.
This is just a sample of what can be done.
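In plain Java 8 the fold-with-start-value idea can be sketched by chaining `thenCombine` over a list of futures (a JDK-only illustration, not the `Futures.fold` API itself):

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class FoldExample {
  public static void main(String[] args) throws Exception {
    List<CompletableFuture<Integer>> futures = Arrays.asList(
        CompletableFuture.supplyAsync(() -> 1),
        CompletableFuture.supplyAsync(() -> 2),
        CompletableFuture.supplyAsync(() -> 3));
    // "fold" with start-value 0: combine each result into the accumulator
    CompletableFuture<Integer> folded = futures.stream()
        .reduce(CompletableFuture.completedFuture(0),
                (acc, f) -> acc.thenCombine(f, Integer::sum));
    System.out.println(folded.get()); // prints 6
  }
}
```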
## Callbacks
Sometimes you just want to listen to a `Future` being completed, and react to that not by creating a new Future, but by side-effecting.
For this Scala supports `onComplete`, `onSuccess` and `onFailure`, of which the last two are specializations of the first.
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #onSuccess }
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #onFailure }
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #onComplete }
## Ordering
Since callbacks are executed in any order and potentially in parallel,
it can be tricky when you need sequential ordering of operations.
But there is a solution, and its name is `andThen`. It creates a new `Future` with
the specified callback, a `Future` that will have the same result as the `Future` it is called on,
which allows for ordering like in the following sample:
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #and-then }
## Auxiliary methods
The `fallbackTo` method combines two Futures into a new `Future`, which will hold the successful value of the second `Future`
if the first `Future` fails.
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #fallback-to }
You can also combine two Futures into a new `Future` that will hold a tuple of the two Futures' successful results,
using the `zip` operation.
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #zip }
## Exceptions
Since the result of a `Future` is created concurrently with the rest of the program, exceptions must be handled differently.
It doesn't matter if an `AbstractActor` or the dispatcher is completing the `Future`, if an `Exception` is caught
the `Future` will contain it instead of a valid result. If a `Future` does contain an `Exception`,
calling `Await.result` will cause it to be thrown again so it can be handled properly.
It is also possible to handle an `Exception` by returning a different result.
This is done with the `recover` method. For example:
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #recover }
In this example, if the actor replied with a `akka.actor.Status.Failure` containing the `ArithmeticException`,
our `Future` would have a result of 0. The `recover` method works very similarly to the standard try/catch blocks,
so multiple `Exception`s can be handled in this manner, and if an `Exception` is not handled this way
it will behave as if we hadn't used the `recover` method.
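The JDK analogue of `recover` is `CompletableFuture.exceptionally`, sketched here with the same divide-by-zero idea (a JDK-only illustration, not the Akka `recover` API itself):

```java
import java.util.concurrent.CompletableFuture;

public class RecoverExample {
  public static void main(String[] args) throws Exception {
    CompletableFuture<Integer> f =
        CompletableFuture.<Integer>supplyAsync(() -> 1 / 0)
            // "recover": map the ArithmeticException to a fallback value
            .exceptionally(ex -> 0);
    System.out.println(f.get()); // prints 0
  }
}
```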
You can also use the `recoverWith` method, which has the same relationship to `recover` as `flatMap` has to `map`,
and is used like this:
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #try-recover }
## After
`akka.pattern.Patterns.after` makes it easy to complete a `Future` with a value or exception after a timeout.
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #imports7 }
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #after }
## Java 8, CompletionStage and CompletableFuture
Starting with Akka 2.4.2 we have begun to introduce Java 8 `java.util.concurrent.CompletionStage` in Java APIs.
It is the Java counterpart of `scala.concurrent.Future`; conversion from `scala.concurrent.Future` is done using
the `scala-java8-compat` library.
Unlike `scala.concurrent.Future` which has async methods only, `CompletionStage` has *async* and *non-async* methods.
The `scala-java8-compat` library returns its own implementation of `CompletionStage` which delegates all *non-async*
methods to their *async* counterparts. The implementation extends standard Java `CompletableFuture`.
Java 8 `CompletableFuture` creates a new instance of `CompletableFuture` for any new stage,
which means the `scala-java8-compat` implementation is not used after the first mapping method.
@@@ note
After adding any additional computation stage to `CompletionStage` returned by `scala-java8-compat`
(e.g. `CompletionStage` instances returned by Akka) it falls back to standard behaviour of Java `CompletableFuture`.
@@@
Actions supplied for dependent completions of *non-async* methods may be performed by the thread
that completes the current `CompletableFuture`, or by any other caller of a completion method.
All *async* methods without an explicit Executor are performed using the `ForkJoinPool.commonPool()` executor.
### Non-async methods
When non-async methods are applied on a not yet completed `CompletionStage`, they are completed by
the thread which completes the initial `CompletionStage`:
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #apply-completion-thread }
In this example a Scala `Future` is converted to a `CompletionStage` just like Akka does.
The completion is delayed: we call `thenApply` multiple times on a not yet completed `CompletionStage`, then
complete the `Future`.
The first `thenApply` is actually performed on the `scala-java8-compat` instance and the computational stage (lambda)
execution is delegated to the default Java `thenApplyAsync`, which is executed on `ForkJoinPool.commonPool()`.
The second and third `thenApply` methods are executed on a Java 8 `CompletableFuture` instance, which executes computational
stages on the thread which completed the first stage. They are never executed on a thread of the Scala `Future`, because the
default `thenApply` breaks the chain and executes on `ForkJoinPool.commonPool()`.
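The completion-thread behaviour can be observed with plain Java 8 `CompletableFuture` directly (a sketch; the thread name `completer` is our own choice, and the JDK spec only promises that *some* completing thread runs the stage, this is the behaviour observed on OpenJDK):

```java
import java.util.concurrent.CompletableFuture;

public class CompletionThreadExample {
  public static void main(String[] args) throws Exception {
    CompletableFuture<String> f = new CompletableFuture<>();
    // register a non-async stage while f is still incomplete
    CompletableFuture<String> stage =
        f.thenApply(v -> Thread.currentThread().getName());

    // complete the future from another thread; the stage runs on that thread
    Thread completer = new Thread(() -> f.complete("done"), "completer");
    completer.start();
    completer.join();

    System.out.println(stage.get());
  }
}
```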
In the next example `thenApply` methods are executed on an already completed `Future`/`CompletionStage`:
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #apply-main-thread }
The first `thenApply` is still executed on `ForkJoinPool.commonPool()` (because it is actually `thenApplyAsync`,
which is always executed on the global Java pool).
Then we wait for the stages to complete, so the second and third `thenApply` are executed on a completed `CompletionStage`,
and the stages are executed on the current thread, i.e. the thread which called the second and third `thenApply`.
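A minimal JDK-only sketch of the already-completed case:

```java
import java.util.concurrent.CompletableFuture;

public class CallerThreadExample {
  public static void main(String[] args) throws Exception {
    CompletableFuture<String> f = CompletableFuture.completedFuture("done");
    // a non-async stage on an already completed future runs synchronously
    // on the calling thread (observed OpenJDK behaviour)
    String stageThread =
        f.thenApply(v -> Thread.currentThread().getName()).get();
    System.out.println(stageThread.equals(Thread.currentThread().getName())); // prints true
  }
}
```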
### Async methods
As mentioned above, default *async* methods are always executed on `ForkJoinPool.commonPool()`:
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #apply-async-default }
`CompletionStage` also has *async* methods which take `Executor` as a second parameter, just like `Future`:
@@snip [FutureDocTest.java]($code$/java/jdocs/future/FutureDocTest.java) { #apply-async-executor }
This example behaves like `Future`: every stage is executed on an explicitly specified `Executor`.
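A JDK-only sketch of pinning a stage to an explicit executor (the pool and its thread name are our own choices for illustration):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ExplicitExecutorExample {
  public static void main(String[] args) throws Exception {
    // a single-threaded pool whose thread we can recognise by name
    ExecutorService pool =
        Executors.newSingleThreadExecutor(r -> new Thread(r, "my-pool-thread"));
    try {
      String name = CompletableFuture.completedFuture("done")
          // the async variant with an explicit Executor runs on that pool
          .thenApplyAsync(v -> Thread.currentThread().getName(), pool)
          .get();
      System.out.println(name); // prints "my-pool-thread"
    } finally {
      pool.shutdown();
    }
  }
}
```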
@@@ note
When in doubt, use async methods with an explicit executor. Always use async methods with a dedicated
executor/dispatcher for long-running or blocking computations, such as IO operations.
@@@
See also:
* [CompletionStage](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletionStage.html)
* [CompletableFuture](https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/CompletableFuture.html)
* [scala-java8-compat](https://github.com/scala/scala-java8-compat)

# Actors
@@toc { depth=2 }
@@@ index
* [actors](actors.md)
* [typed](typed.md)
* [fault-tolerance](fault-tolerance.md)
* [dispatchers](dispatchers.md)
* [mailboxes](mailboxes.md)
* [routing](routing.md)
* [fsm](fsm.md)
* [persistence](persistence.md)
* [persistence-schema-evolution](persistence-schema-evolution.md)
* [persistence-query](persistence-query.md)
* [persistence-query-leveldb](persistence-query-leveldb.md)
* [testing](testing.md)
* [typed-actors](typed-actors.md)
@@@
