Merge branch 'master' into wip-sync-artery-dev-2.4.9-patriknw

This commit is contained in: commit 8ab02738b7

483 changed files with 9535 additions and 2177 deletions

.gitignore (vendored): 1 addition
@@ -1,6 +1,7 @@
 *#
 *.log
 *.orig
+*.jfr
 *.iml
 *.ipr
 *.iws
@@ -85,6 +85,8 @@ The TL;DR; of the above very precise workflow version is:
 4. Keep polishing it until it has received enough LGTMs
 5. Profit!
 
+Note that the Akka sbt project is large, so `sbt` needs to be run with plenty of heap (1-2 GB). This can be specified with the command line argument `sbt -mem 2048`, or via the environment variable `SBT_OPTS` as a regular JVM memory flag, for example `SBT_OPTS=-Xmx2G`. On some platforms you can also edit the global defaults for sbt in `/usr/local/etc/sbtopts`.
+
 ## The `validatePullRequest` task
 
 The Akka build includes a special task called `validatePullRequest` which investigates the changes made as well as dirty
@@ -179,6 +181,19 @@ For more info, or for a starting point for new projects, look at the [Lightbend
 
 For larger projects that have invested a lot of time and resources into their current documentation and samples scheme (like for example Play), it is understandable that it will take some time to migrate to this new model. In these cases someone from the project needs to take on the responsibility of manual QA and act as verifier for the documentation and samples.
 
+### JavaDoc
+
+Akka generates JavaDoc-style API documentation using the [genjavadoc](https://github.com/typesafehub/genjavadoc) sbt plugin, since the sources are written mostly in Scala.
+
+Generating JavaDoc is not enabled by default, as it is not needed in day-to-day development and is expected to just work.
+If you'd like to check that your links and formatting look good in JavaDoc (and not only in ScalaDoc), you can generate it by running:
+
+```
+sbt -Dakka.genjavadoc.enabled=true javaunidoc:doc
+```
+
+This will generate JavaDoc-style docs in `./target/javaunidoc/index.html`.
+
 ## External Dependencies
 
 All the external runtime dependencies for the project, including transitive dependencies, must have an open source license that is equal to, or compatible with, [Apache 2](http://www.apache.org/licenses/LICENSE-2.0).
@@ -224,7 +239,7 @@ Example:
 Akka uses the [Jenkins GitHub pull request builder plugin](https://wiki.jenkins-ci.org/display/JENKINS/GitHub+pull+request+builder+plugin)
 that automatically merges the code, builds it, runs the tests and comments on the Pull Request in GitHub.
 
-Upon a submission of a Pull Request the Github pull request builder plugin will post the following comment:
+Upon a submission of a Pull Request the GitHub pull request builder plugin will post the following comment:
 
     Can one of the repo owners verify this patch?
 
@@ -258,6 +273,21 @@ Thus we ask Java contributions to follow these simple guidelines:
 - `{` on same line as method name
 - in all other aspects, follow the [Oracle Java Style Guide](http://www.oracle.com/technetwork/java/codeconvtoc-136057.html)
 
+### Preferred ways to use timeouts in tests
+
+Avoid short test timeouts, since the Jenkins server may GC heavily, causing spurious test failures. A GC pause or other hiccup of 2 seconds is common in our CI environment. Please note that a larger timeout usually *does not slow down the tests*; an `expectMessage` call, for example, will normally complete quickly anyway.
+
+There are a number of ways timeouts can be defined in Akka tests. The following ways to use timeouts are recommended (in order of preference):
+
+* `remaining` is the first choice (requires a `within` block)
+* `remainingOrDefault` is the second choice
+* `3.seconds` is the third choice if not using the testkit
+* lower timeouts must come with a very good reason (e.g. awaiting a known to be "already completed" `Future`)
+
+Special care should be given to `expectNoMsg` calls, which will indeed wait the entire timeout before continuing; a shorter timeout should therefore be used there, for example `200` or `300.millis`.
+
+You can read up on `remaining` and friends in [TestKit.scala](https://github.com/akka/akka/blob/master/akka-testkit/src/main/scala/akka/testkit/TestKit.scala).
+
 ## Contributing Modules
 
 For external contributions of entire features, the normal way is to establish it
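As a quick illustration of the timeout style recommended above (assuming a plain ScalaTest plus akka-testkit setup; the spec and actor names here are illustrative, not from the diff):

```scala
import akka.actor.ActorSystem
import akka.testkit.{ ImplicitSender, TestActors, TestKit }
import org.scalatest.WordSpecLike
import scala.concurrent.duration._

class TimeoutStyleSpec extends TestKit(ActorSystem("TimeoutStyleSpec"))
  with ImplicitSender with WordSpecLike {

  "an echo actor" must {
    "reply within the enclosing deadline" in {
      val echo = system.actorOf(TestActors.echoActorProps)
      within(10.seconds) {            // a generous deadline does not slow the happy path
        echo ! "ping"
        expectMsg(remaining, "ping")  // first choice: bounded by the within block
      }
      expectNoMsg(300.millis)         // expectNoMsg always waits the full timeout, keep it short
    }
  }
}
```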
@@ -18,7 +18,7 @@ Reference Documentation
 -----------------------
 
 The reference documentation is available at [doc.akka.io](http://doc.akka.io),
-for [Scala](http://doc.akka.io/docs/akka/current/scala.html) and [Java](http://doc.akka.io/docs/akka/current/scala.html).
+for [Scala](http://doc.akka.io/docs/akka/current/scala.html) and [Java](http://doc.akka.io/docs/akka/current/java.html).
 
 
 Community
@@ -31,9 +31,10 @@ You can join these groups and chats to discuss and ask Akka related questions:
 
 In addition to that, you may enjoy following:
 
 - The [news](http://akka.io/news) section of the page, which is updated whenever a new version is released
+- The [Akka Team Blog](http://blog.akka.io)
 - [@akkateam](https://twitter.com/akkateam) on Twitter
-- Questions tagged [#akka on StackOverflow](stackoverflow.com/questions/tagged/akka)
+- Questions tagged [#akka on StackOverflow](http://stackoverflow.com/questions/tagged/akka)
 
 Contributing
 ------------
@@ -42,7 +43,7 @@ Contributions are *very* welcome!
 If you see an issue that you'd like to see fixed, the best way to make it happen is to help out by submitting a Pull Request implementing it.
 
 Refer to the [CONTRIBUTING.md](https://github.com/akka/akka/blob/master/CONTRIBUTING.md) file for more details about the workflow,
-and general hints on how to prepare your pull request. You can also ask for clarifications or guidance in github issues directly,
+and general hints on how to prepare your pull request. You can also ask for clarifications or guidance in GitHub issues directly,
 or in the akka/dev chat if more real-time communication would be of benefit.
 
 A chat room is available for all questions related to *developing and contributing* to Akka:
@@ -11,6 +11,7 @@ import java.util.ArrayList;
 import java.util.stream.IntStream;
 
+import akka.testkit.TestActors;
 import org.junit.Assert;
 import org.junit.Test;
 
 import akka.japi.Creator;
@@ -209,7 +210,7 @@ public class ActorCreationTest extends JUnitSuite {
   public void testWrongAnonymousClassStaticCreator() {
     try {
       Props.create(new C() {}); // has implicit reference to outer class
-      fail("Should have detected this is not a real static class, and thrown");
+      org.junit.Assert.fail("Should have detected this is not a real static class, and thrown");
     } catch (IllegalArgumentException e) {
       assertEquals("cannot use non-static local Creator to create actors; make it static (e.g. local to a static method) or top-level", e.getMessage());
     }
@@ -278,7 +279,7 @@ public class ActorCreationTest extends JUnitSuite {
         // captures enclosing class
       };
       Props.create(anonymousCreatorFromStaticMethod);
-      fail("Should have detected this is not a real static class, and thrown");
+      org.junit.Assert.fail("Should have detected this is not a real static class, and thrown");
     } catch (IllegalArgumentException e) {
       assertEquals("cannot use non-static local Creator to create actors; make it static (e.g. local to a static method) or top-level", e.getMessage());
     }
@@ -296,7 +297,7 @@ public class ActorCreationTest extends JUnitSuite {
     assertEquals(TestActor.class, p.actorClass());
     try {
       TestActor.propsUsingLamdaWithoutClass(17);
-      fail("Should have detected lambda erasure, and thrown");
+      org.junit.Assert.fail("Should have detected lambda erasure, and thrown");
     } catch (IllegalArgumentException e) {
       assertEquals("erased Creator types are unsupported, use Props.create(actorClass, creator) instead",
         e.getMessage());
@@ -41,14 +41,14 @@ public class JavaAPITestBase extends JUnitSuite {
         String s : Option.some("abc")) {
       return;
     }
-    fail("for-loop not entered");
+    org.junit.Assert.fail("for-loop not entered");
   }
 
   @Test
   public void shouldNotEnterForLoop() {
     for (@SuppressWarnings("unused")
         Object o : Option.none()) {
-      fail("for-loop entered");
+      org.junit.Assert.fail("for-loop entered");
     }
   }
 
@@ -0,0 +1,42 @@
+/*
+ * Copyright (C) 2009-2016 Lightbend Inc. <http://www.lightbend.com>
+ */
+package akka.pattern;
+
+import akka.actor.*;
+import akka.testkit.AkkaJUnitActorSystemResource;
+import akka.testkit.AkkaSpec;
+import org.junit.ClassRule;
+import org.junit.Test;
+import org.scalatest.junit.JUnitSuite;
+import scala.compat.java8.FutureConverters;
+import scala.concurrent.Await;
+import scala.concurrent.duration.FiniteDuration;
+
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.CompletionStage;
+import java.util.concurrent.TimeUnit;
+
+import static org.junit.Assert.assertEquals;
+
+public class CircuitBreakerTest extends JUnitSuite {
+
+  @ClassRule
+  public static AkkaJUnitActorSystemResource actorSystemResource =
+    new AkkaJUnitActorSystemResource("JavaAPI", AkkaSpec.testConf());
+
+  private final ActorSystem system = actorSystemResource.getSystem();
+
+  @Test
+  public void useCircuitBreakerWithCompletableFuture() throws Exception {
+    final FiniteDuration fiveSeconds = FiniteDuration.create(5, TimeUnit.SECONDS);
+    final FiniteDuration fiveHundredMillis = FiniteDuration.create(500, TimeUnit.MILLISECONDS);
+    final CircuitBreaker breaker = new CircuitBreaker(system.dispatcher(), system.scheduler(), 1, fiveSeconds, fiveHundredMillis);
+
+    final CompletableFuture<String> f = new CompletableFuture<>();
+    f.complete("hello");
+    final CompletionStage<String> res = breaker.callWithCircuitBreakerCS(() -> f);
+    assertEquals("hello", Await.result(FutureConverters.toScala(res), fiveSeconds));
+  }
+
+}
@@ -60,6 +60,12 @@ object DeployerSpec {
       "/*/some" {
         router = scatter-gather-pool
       }
+      "/double/**" {
+        router = random-pool
+      }
+      "/double/more/**" {
+        router = round-robin-pool
+      }
     }
     """, ConfigParseOptions.defaults)
 
@@ -74,7 +80,7 @@ class DeployerSpec extends AkkaSpec(DeployerSpec.deployerConf) {
 
   "be able to parse 'akka.actor.deployment._' with all default values" in {
     val service = "/service1"
-    val deployment = system.asInstanceOf[ActorSystemImpl].provider.deployer.lookup(service.split("/").drop(1))
+    val deployment = system.asInstanceOf[ExtendedActorSystem].provider.deployer.lookup(service.split("/").drop(1))
 
     deployment should ===(Some(
       Deploy(
@@ -88,13 +94,13 @@ class DeployerSpec extends AkkaSpec(DeployerSpec.deployerConf) {
 
   "use None deployment for undefined service" in {
     val service = "/undefined"
-    val deployment = system.asInstanceOf[ActorSystemImpl].provider.deployer.lookup(service.split("/").drop(1))
+    val deployment = system.asInstanceOf[ExtendedActorSystem].provider.deployer.lookup(service.split("/").drop(1))
     deployment should ===(None)
   }
 
   "be able to parse 'akka.actor.deployment._' with dispatcher config" in {
     val service = "/service3"
-    val deployment = system.asInstanceOf[ActorSystemImpl].provider.deployer.lookup(service.split("/").drop(1))
+    val deployment = system.asInstanceOf[ExtendedActorSystem].provider.deployer.lookup(service.split("/").drop(1))
 
     deployment should ===(Some(
       Deploy(
@@ -108,7 +114,7 @@ class DeployerSpec extends AkkaSpec(DeployerSpec.deployerConf) {
 
   "be able to parse 'akka.actor.deployment._' with mailbox config" in {
     val service = "/service4"
-    val deployment = system.asInstanceOf[ActorSystemImpl].provider.deployer.lookup(service.split("/").drop(1))
+    val deployment = system.asInstanceOf[ExtendedActorSystem].provider.deployer.lookup(service.split("/").drop(1))
 
     deployment should ===(Some(
       Deploy(
@@ -186,8 +192,15 @@ class DeployerSpec extends AkkaSpec(DeployerSpec.deployerConf) {
     assertRouting("/somewildcardmatch/some", ScatterGatherFirstCompletedPool(nrOfInstances = 1, within = 2 seconds), "/*/some")
   }
 
+  "be able to use double wildcards" in {
+    assertRouting("/double/wildcardmatch", RandomPool(1), "/double/**")
+    assertRouting("/double/wildcardmatch/anothermatch", RandomPool(1), "/double/**")
+    assertRouting("/double/more/anothermatch", RoundRobinPool(1), "/double/more/**")
+    assertNoRouting("/double")
+  }
+
   "have correct router mappings" in {
-    val mapping = system.asInstanceOf[ActorSystemImpl].provider.deployer.routerTypeMapping
+    val mapping = system.asInstanceOf[ExtendedActorSystem].provider.deployer.routerTypeMapping
     mapping("from-code") should ===(classOf[akka.routing.NoRouter].getName)
     mapping("round-robin-pool") should ===(classOf[akka.routing.RoundRobinPool].getName)
     mapping("round-robin-group") should ===(classOf[akka.routing.RoundRobinGroup].getName)
@@ -203,8 +216,13 @@ class DeployerSpec extends AkkaSpec(DeployerSpec.deployerConf) {
     mapping("consistent-hashing-group") should ===(classOf[akka.routing.ConsistentHashingGroup].getName)
   }
 
+  def assertNoRouting(service: String): Unit = {
+    val deployment = system.asInstanceOf[ExtendedActorSystem].provider.deployer.lookup(service.split("/").drop(1))
+    deployment shouldNot be(defined)
+  }
+
   def assertRouting(service: String, expected: RouterConfig, expectPath: String): Unit = {
-    val deployment = system.asInstanceOf[ActorSystemImpl].provider.deployer.lookup(service.split("/").drop(1))
+    val deployment = system.asInstanceOf[ExtendedActorSystem].provider.deployer.lookup(service.split("/").drop(1))
     deployment.map(_.path).getOrElse("NOT FOUND") should ===(expectPath)
     deployment.get.routerConfig.getClass should ===(expected.getClass)
     deployment.get.scope should ===(NoScopeGiven)
@@ -102,7 +102,12 @@ class ReceiveTimeoutSpec extends AkkaSpec {
       }
     }))
 
-    val ticks = system.scheduler.schedule(100.millis, 100.millis, timeoutActor, TransperentTick)(system.dispatcher)
+    val ticks = system.scheduler.schedule(100.millis, 100.millis, new Runnable {
+      override def run() = {
+        timeoutActor ! TransperentTick
+        timeoutActor ! Identify(None)
+      }
+    })(system.dispatcher)
 
     Await.ready(timeoutLatch, TestLatch.DefaultTimeout)
     ticks.cancel()
@@ -719,7 +719,14 @@ class FutureSpec extends AkkaSpec with Checkers with BeforeAndAfterAll with Defa
         Await.result(p.future, timeout.duration) should ===(message)
       }
     }
-    "always cast successfully using mapTo" in { f((future, message) ⇒ (evaluating { Await.result(future.mapTo[java.lang.Thread], timeout.duration) } should produce[java.lang.Exception]).getMessage should ===(message)) }
+    "always cast successfully using mapTo" in {
+      f((future, message) ⇒ {
+        val exception = the[java.lang.Exception] thrownBy {
+          Await.result(future.mapTo[java.lang.Thread], timeout.duration)
+        }
+        exception.getMessage should ===(message)
+      })
+    }
   }
 
   implicit def arbFuture: Arbitrary[Future[Int]] = Arbitrary(for (n ← arbitrary[Int]) yield Future(n))
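The rewrite above moves off ScalaTest's deprecated `evaluating { ... } should produce[...]` syntax; `the[E] thrownBy { ... }` returns the caught exception so it can be inspected further. A minimal standalone sketch (spec name is illustrative):

```scala
import org.scalatest.{ Matchers, WordSpec }

class ThrownBySpec extends WordSpec with Matchers {
  "the[...] thrownBy" should {
    "capture the exception for further assertions" in {
      val thrown = the[IllegalArgumentException] thrownBy {
        require(false, "boom") // throws IllegalArgumentException("requirement failed: boom")
      }
      thrown.getMessage should include("boom")
    }
  }
}
```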
@@ -3,9 +3,9 @@ package akka.io
 import java.net.InetAddress
 import java.util.concurrent.atomic.AtomicLong
 
-import org.scalatest.{ ShouldMatchers, WordSpec }
+import org.scalatest.{ Matchers, WordSpec }
 
-class SimpleDnsCacheSpec extends WordSpec with ShouldMatchers {
+class SimpleDnsCacheSpec extends WordSpec with Matchers {
   "Cache" should {
     "not reply with expired but not yet swept out entries" in {
       val localClock = new AtomicLong(0)
@@ -15,11 +15,11 @@ class SimpleDnsCacheSpec extends WordSpec with Matchers {
       val cacheEntry = Dns.Resolved("test.local", Seq(InetAddress.getByName("127.0.0.1")))
       cache.put(cacheEntry, 5000)
 
-      cache.cached("test.local") should equal(Some(cacheEntry))
+      cache.cached("test.local") should ===(Some(cacheEntry))
       localClock.set(4999)
-      cache.cached("test.local") should equal(Some(cacheEntry))
+      cache.cached("test.local") should ===(Some(cacheEntry))
       localClock.set(5000)
-      cache.cached("test.local") should equal(None)
+      cache.cached("test.local") should ===(None)
     }
 
     "sweep out expired entries on cleanup()" in {
@@ -30,16 +30,16 @@ class SimpleDnsCacheSpec extends WordSpec with Matchers {
       val cacheEntry = Dns.Resolved("test.local", Seq(InetAddress.getByName("127.0.0.1")))
       cache.put(cacheEntry, 5000)
 
-      cache.cached("test.local") should equal(Some(cacheEntry))
+      cache.cached("test.local") should ===(Some(cacheEntry))
       localClock.set(5000)
-      cache.cached("test.local") should equal(None)
+      cache.cached("test.local") should ===(None)
       localClock.set(0)
-      cache.cached("test.local") should equal(Some(cacheEntry))
+      cache.cached("test.local") should ===(Some(cacheEntry))
       localClock.set(5000)
       cache.cleanup()
-      cache.cached("test.local") should equal(None)
+      cache.cached("test.local") should ===(None)
       localClock.set(0)
-      cache.cached("test.local") should equal(None)
+      cache.cached("test.local") should ===(None)
     }
   }
 }
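The `should equal` to `should ===` migration in this spec follows the convention used in Akka's own test base classes: `===` comes from ScalaTest's triple-equals support and, with `TypeCheckedTripleEquals` mixed in, additionally fails at compile time when the two sides have unrelated types. A minimal sketch (spec name is illustrative):

```scala
import org.scalactic.TypeCheckedTripleEquals
import org.scalatest.{ Matchers, WordSpec }

class TripleEqualsSpec extends WordSpec with Matchers with TypeCheckedTripleEquals {
  "should ===" should {
    "behave like equal for same-typed values" in {
      Some("127.0.0.1") should ===(Some("127.0.0.1"))
      // Some("127.0.0.1") should ===(42) would not even compile here
    }
  }
}
```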
@@ -449,10 +449,10 @@ class TcpConnectionSpec extends AkkaSpec("""
       assertThisConnectionActorTerminated()
 
       val buffer = ByteBuffer.allocate(1)
-      val thrown = evaluating {
+      val thrown = the[IOException] thrownBy {
         windowsWorkaroundToDetectAbort()
         serverSideChannel.read(buffer)
-      } should produce[IOException]
+      }
       thrown.getMessage should ===(ConnectionResetByPeerMessage)
     }
   }
@@ -133,10 +133,10 @@ class CircuitBreakerSpec extends AkkaSpec with BeforeAndAfter {
       val breaker = CircuitBreakerSpec.shortCallTimeoutCb()
       Future {
         breaker().withSyncCircuitBreaker {
-          Thread.sleep(500.millis.dilated.toMillis)
+          Thread.sleep(1.second.dilated.toMillis)
         }
       }
-      within(300.millis) {
+      within(900.millis) {
         awaitCond(breaker().currentFailureCount == 1, 100.millis.dilated)
       }
     }
@@ -219,7 +219,7 @@ class CircuitBreakerSpec extends AkkaSpec with BeforeAndAfter {
       val breaker = CircuitBreakerSpec.shortCallTimeoutCb()
 
       val fut = breaker().withCircuitBreaker(Future {
-        Thread.sleep(150.millis.dilated.toMillis);
+        Thread.sleep(150.millis.dilated.toMillis)
        throwException
       })
       checkLatch(breaker.openLatch)
@@ -10,6 +10,7 @@ import java.lang.Float.floatToRawIntBits
 import java.nio.{ ByteBuffer, ByteOrder }
 import java.nio.ByteOrder.{ BIG_ENDIAN, LITTLE_ENDIAN }
 
+import akka.util.ByteString.{ ByteString1, ByteString1C, ByteStrings }
 import org.apache.commons.codec.binary.Hex.encodeHex
 import org.scalacheck.Arbitrary.arbitrary
 import org.scalacheck.{ Arbitrary, Gen }
@@ -20,6 +21,12 @@ import scala.collection.mutable.Builder
 
 class ByteStringSpec extends WordSpec with Matchers with Checkers {
 
+  // // uncomment when developing locally to get better coverage
+  // implicit override val generatorDrivenConfig =
+  //   PropertyCheckConfig(
+  //     minSuccessful = 1000,
+  //     minSize = 0, maxSize = 100)
+
   def genSimpleByteString(min: Int, max: Int) = for {
     n ← Gen.choose(min, max)
     b ← Gen.containerOfN[Array, Byte](n, arbitrary[Byte])
@@ -56,14 +63,21 @@ class ByteStringSpec extends WordSpec with Matchers with Checkers {
     } yield (xs, from, until)
   }
 
-  def testSer(obj: AnyRef) = {
+  def serialize(obj: AnyRef): Array[Byte] = {
     val os = new ByteArrayOutputStream
     val bos = new ObjectOutputStream(os)
     bos.writeObject(obj)
-    val arr = os.toByteArray
-    val is = new ObjectInputStream(new ByteArrayInputStream(arr))
+    os.toByteArray
+  }
 
-    is.readObject == obj
+  def deserialize(bytes: Array[Byte]): AnyRef = {
+    val is = new ObjectInputStream(new ByteArrayInputStream(bytes))
+
+    is.readObject
+  }
+
+  def testSer(obj: AnyRef) = {
+    deserialize(serialize(obj)) == obj
   }
 
   def hexFromSer(obj: AnyRef) = {
@@ -281,10 +295,113 @@ class ByteStringSpec extends WordSpec with Matchers with Checkers {
       reference.toSeq == builder.result
     }
 
+    "ByteString1" must {
+      "drop(0)" in {
+        ByteString1.fromString("").drop(0) should ===(ByteString.empty)
+        ByteString1.fromString("a").drop(0) should ===(ByteString("a"))
+      }
+      "drop(1)" in {
+        ByteString1.fromString("").drop(1) should ===(ByteString(""))
+        ByteString1.fromString("a").drop(1) should ===(ByteString(""))
+        ByteString1.fromString("ab").drop(1) should ===(ByteString("b"))
+        ByteString1.fromString("xaaa").drop(1) should ===(ByteString("aaa"))
+        ByteString1.fromString("xaab").drop(1).take(2) should ===(ByteString("aa"))
+        ByteString1.fromString("0123456789").drop(5).take(4).drop(1).take(2) should ===(ByteString("67"))
+      }
+      "drop(n)" in {
+        ByteString1.fromString("ab").drop(2) should ===(ByteString(""))
+        ByteString1.fromString("ab").drop(3) should ===(ByteString(""))
+      }
+    }
+    "ByteString1C" must {
+      "drop(0)" in {
+        ByteString1C.fromString("").drop(0) should ===(ByteString.empty)
+        ByteString1C.fromString("a").drop(0) should ===(ByteString("a"))
+      }
+      "drop(1)" in {
+        ByteString1C.fromString("").drop(1) should ===(ByteString(""))
+        ByteString1C.fromString("a").drop(1) should ===(ByteString(""))
+        ByteString1C.fromString("ab").drop(1) should ===(ByteString("b"))
+      }
+      "drop(n)" in {
+        ByteString1C.fromString("ab").drop(2) should ===(ByteString(""))
+        ByteString1C.fromString("ab").drop(3) should ===(ByteString(""))
+      }
+      "take" in {
+        ByteString1.fromString("abcdefg").drop(1).take(0) should ===(ByteString(""))
+        ByteString1.fromString("abcdefg").drop(1).take(-1) should ===(ByteString(""))
+        ByteString1.fromString("abcdefg").drop(1).take(-2) should ===(ByteString(""))
+        ByteString1.fromString("abcdefg").drop(2) should ===(ByteString("cdefg"))
+        ByteString1.fromString("abcdefg").drop(2).take(1) should ===(ByteString("c"))
+      }
+    }
+    "ByteStrings" must {
+      "drop(0)" in {
+        ByteStrings(ByteString1.fromString(""), ByteString1.fromString("")).drop(0) should ===(ByteString.empty)
+        ByteStrings(ByteString1.fromString(""), ByteString1.fromString("a")).drop(0) should ===(ByteString("a"))
+        (ByteString1C.fromString("") ++ ByteString1.fromString("a")).drop(0) should ===(ByteString("a"))
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("")).drop(0) should ===(ByteString("a"))
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("a")).drop(0) should ===(ByteString("aa"))
+        ByteStrings(ByteString1.fromString(""), ByteString1.fromString("")).drop(0) should ===(ByteString(""))
+      }
+      "drop(1)" in {
+        ByteStrings(ByteString1.fromString(""), ByteString1.fromString("")).drop(1) should ===(ByteString(""))
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("")).drop(1) should ===(ByteString(""))
+        ByteStrings(ByteString1.fromString(""), ByteString1.fromString("a")).drop(1) should ===(ByteString(""))
+        ByteStrings(ByteString1.fromString("ab"), ByteString1.fromString("cd")).drop(1) should ===(ByteString("bcd"))
+        ByteStrings(Vector(ByteString1.fromString("xaaa"))).drop(1) should ===(ByteString("aaa"))
+      }
+      "drop(n)" in {
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("")).drop(1) should ===(ByteString(""))
+        ByteStrings(ByteString1.fromString(""), ByteString1.fromString("a")).drop(1) should ===(ByteString(""))
+        ByteStrings(ByteString1.fromString("ab"), ByteString1.fromString("cd")).drop(3) should ===(ByteString("d"))
+        ByteStrings(ByteString1.fromString("ab"), ByteString1.fromString("cd")).drop(4) should ===(ByteString(""))
+        ByteStrings(ByteString1.fromString("ab"), ByteString1.fromString("cd")).drop(5) should ===(ByteString(""))
+        ByteStrings(ByteString1.fromString("ab"), ByteString1.fromString("cd")).drop(10) should ===(ByteString(""))
+
+        ByteStrings(ByteString1.fromString("ab"), ByteString1.fromString("cd")).drop(-2) should ===(ByteString("abcd"))
+        ByteStrings(ByteString1.fromString(""), ByteString1.fromString("")).drop(-2) should ===(ByteString(""))
+        ByteStrings(ByteString1.fromString("ab"), ByteString1.fromString("")).drop(Int.MinValue) should ===(ByteString("ab"))
+        ByteStrings(ByteString1.fromString(""), ByteString1.fromString("ab")).dropRight(Int.MinValue) should ===(ByteString("ab"))
+      }
+      "slice" in {
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("")).slice(0, 1) should ===(ByteString("a"))
+        ByteStrings(ByteString1.fromString(""), ByteString1.fromString("a")).slice(1, 1) should ===(ByteString(""))
+        ByteStrings(ByteString1.fromString("ab"), ByteString1.fromString("cd")).slice(2, 2) should ===(ByteString(""))
+        ByteStrings(ByteString1.fromString("ab"), ByteString1.fromString("cd")).slice(2, 3) should ===(ByteString("c"))
+        ByteStrings(ByteString1.fromString("ab"), ByteString1.fromString("cd")).slice(2, 4) should ===(ByteString("cd"))
+        ByteStrings(ByteString1.fromString("ab"), ByteString1.fromString("cd")).slice(3, 4) should ===(ByteString("d"))
+        ByteStrings(ByteString1.fromString("ab"), ByteString1.fromString("cd")).slice(10, 100) should ===(ByteString(""))
+      }
+      "dropRight" in {
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("")).dropRight(0) should ===(ByteString("a"))
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("")).dropRight(-1) should ===(ByteString("a"))
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("")).dropRight(Int.MinValue) should ===(ByteString("a"))
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("")).dropRight(1) should ===(ByteString(""))
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("")).dropRight(Int.MaxValue) should ===(ByteString(""))
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("bc")).dropRight(1) should ===(ByteString("ab"))
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("bc")).dropRight(2) should ===(ByteString("a"))
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("bc")).dropRight(3) should ===(ByteString(""))
+      }
+      "take" in {
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("bc")).drop(1).take(0) should ===(ByteString(""))
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("bc")).drop(1).take(-1) should ===(ByteString(""))
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("bc")).drop(1).take(-2) should ===(ByteString(""))
+        (ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("bc")) ++ ByteString1.fromString("defg")).drop(2) should ===(ByteString("cdefg"))
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("bc")).drop(2).take(1) should ===(ByteString("c"))
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("bc")).take(100) should ===(ByteString("abc"))
+        ByteStrings(ByteString1.fromString("a"), ByteString1.fromString("bc")).drop(1).take(100) should ===(ByteString("bc"))
+      }
+    }
+
     "A ByteString" must {
       "have correct size" when {
         "concatenating" in { check((a: ByteString, b: ByteString) ⇒ (a ++ b).size == a.size + b.size) }
         "dropping" in { check((a: ByteString, b: ByteString) ⇒ (a ++ b).drop(b.size).size == a.size) }
         "taking" in { check((a: ByteString, b: ByteString) ⇒ (a ++ b).take(a.size) == a) }
         "takingRight" in { check((a: ByteString, b: ByteString) ⇒ (a ++ b).takeRight(b.size) == b) }
         "dropping then taking" in { check((a: ByteString, b: ByteString) ⇒ (b ++ a ++ b).drop(b.size).take(a.size) == a) }
         "droppingRight" in { check((a: ByteString, b: ByteString) ⇒ (b ++ a ++ b).drop(b.size).dropRight(b.size) == a) }
       }
 
       "be sequential" when {
@@ -301,6 +418,21 @@ class ByteStringSpec extends WordSpec with Matchers with Checkers {
           (a ++ b ++ c) == xs
         }
       }
+      def excerciseRecombining(xs: ByteString, from: Int, until: Int) = {
+        val (tmp, c) = xs.splitAt(until)
+        val (a, b) = tmp.splitAt(from)
+        (a ++ b ++ c) should ===(xs)
+      }
+      "recombining - edge cases" in {
+        excerciseRecombining(ByteStrings(Vector(ByteString1(Array[Byte](1)), ByteString1(Array[Byte](2)))), -2147483648, 112121212)
+        excerciseRecombining(ByteStrings(Vector(ByteString1(Array[Byte](100)))), 0, 2)
+        excerciseRecombining(ByteStrings(Vector(ByteString1(Array[Byte](100)))), -2147483648, 2)
+        excerciseRecombining(ByteStrings(Vector(ByteString1.fromString("ab"), ByteString1.fromString("cd"))), 0, 1)
+        excerciseRecombining(ByteString1.fromString("abc").drop(1).take(1), -324234, 234232)
+        excerciseRecombining(ByteString("a"), 0, 2147483647)
+        excerciseRecombining(ByteStrings(Vector(ByteString1.fromString("ab"), ByteString1.fromString("cd"))).drop(2), 2147483647, 1)
+        excerciseRecombining(ByteString1.fromString("ab").drop1(1), Int.MaxValue, Int.MaxValue)
+      }
     }
 
     "behave as expected" when {
|
@ -322,7 +454,7 @@ class ByteStringSpec extends WordSpec with Matchers with Checkers {
|
|||
check { (a: ByteString) ⇒ a.asByteBuffers.foldLeft(ByteString.empty) { (bs, bb) ⇒ bs ++ ByteString(bb) } == a }
|
||||
check { (a: ByteString) ⇒ a.asByteBuffers.forall(_.isReadOnly) }
|
||||
check { (a: ByteString) ⇒
|
||||
import scala.collection.JavaConverters.iterableAsScalaIterableConverter;
|
||||
import scala.collection.JavaConverters.iterableAsScalaIterableConverter
|
||||
a.asByteBuffers.zip(a.getByteBuffers().asScala).forall(x ⇒ x._1 == x._2)
|
||||
}
|
||||
}
|
||||
|
|
@@ -404,6 +536,13 @@ class ByteStringSpec extends WordSpec with Matchers with Checkers {
         testSer(bs)
       }
     }
+
+    "with a large concatenated bytestring" in {
+      // coverage for #20901
+      val original = ByteString(Array.fill[Byte](1000)(1)) ++ ByteString(Array.fill[Byte](1000)(2))
+
+      deserialize(serialize(original)) shouldEqual original
+    }
   }
 }
@@ -0,0 +1,67 @@
+/**
+ * Copyright (C) 2016 Lightbend Inc. <http://www.lightbend.com>
+ */
+
+package akka.util
+
+import org.scalatest.{ Matchers, WordSpec }
+
+class WildcardIndexSpec extends WordSpec with Matchers {
+
+  "wildcard index" must {
+    "allow to insert elements using Arrays of strings" in {
+      emptyIndex.insert(Array("a", "b"), 1) shouldBe a[WildcardIndex[_]]
+      emptyIndex.insert(Array("a"), 1) shouldBe a[WildcardIndex[_]]
+      emptyIndex.insert(Array.empty[String], 1) shouldBe a[WildcardIndex[_]]
+    }
+
+    "allow to find inserted elements" in {
+      val tree = emptyIndex.insert(Array("a"), 1).insert(Array("a", "b"), 2).insert(Array("a", "c"), 3)
+      tree.find(Array("a", "b")).get shouldBe 2
+      tree.find(Array("a")).get shouldBe 1
+      tree.find(Array("x")) shouldBe None
+      tree.find(Array.empty[String]) shouldBe None
+    }
+
+    "match all elements in the subArray when it contains a wildcard" in {
+      val tree1 = emptyIndex.insert(Array("a"), 1).insert(Array("a", "*"), 1)
+      tree1.find(Array("z")) shouldBe None
+      tree1.find(Array("a")).get shouldBe 1
+      tree1.find(Array("a", "b")).get shouldBe 1
+      tree1.find(Array("a", "x")).get shouldBe 1
+
+      val tree2 = emptyIndex.insert(Array("a", "*"), 1).insert(Array("a", "*", "c"), 2)
+      tree2.find(Array("z")) shouldBe None
+      tree2.find(Array("a", "b")).get shouldBe 1
+      tree2.find(Array("a", "x")).get shouldBe 1
+      tree2.find(Array("a", "x", "c")).get shouldBe 2
+      tree2.find(Array("a", "x", "y")) shouldBe None
+    }
+
+    "never find anything when emptyIndex" in {
+      emptyIndex.find(Array("a")) shouldBe None
+      emptyIndex.find(Array("a", "b")) shouldBe None
+      emptyIndex.find(Array.empty[String]) shouldBe None
+    }
+
+    "match all remaining elements when it contains a terminal double wildcard" in {
+      val tree1 = emptyIndex.insert(Array("a", "**"), 1)
+      tree1.find(Array("z")) shouldBe None
+      tree1.find(Array("a", "b")).get shouldBe 1
+      tree1.find(Array("a", "x")).get shouldBe 1
+      tree1.find(Array("a", "x", "y")).get shouldBe 1
+
+      val tree2 = emptyIndex.insert(Array("**"), 1)
+      tree2.find(Array("anything", "I", "want")).get shouldBe 1
+      tree2.find(Array("anything")).get shouldBe 1
+    }
+
+    "ignore non-terminal double wildcards" in {
+      val tree = emptyIndex.insert(Array("a", "**", "c"), 1)
+      tree.find(Array("a", "x", "y", "c")) shouldBe None
+      tree.find(Array("a", "x", "y")) shouldBe None
+    }
+  }
+
+  private val emptyIndex = WildcardIndex[Int]()
+}
@@ -61,7 +61,7 @@ case object Kill extends Kill {
  * is returned in the `ActorIdentity` message as `correlationId`.
  */
 @SerialVersionUID(1L)
-final case class Identify(messageId: Any) extends AutoReceivedMessage
+final case class Identify(messageId: Any) extends AutoReceivedMessage with NotInfluenceReceiveTimeout
 
 /**
  * Reply to [[akka.actor.Identify]]. Contains
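`NotInfluenceReceiveTimeout` is the marker trait that keeps a message from resetting an actor's receive-timeout timer, which is why the ReceiveTimeoutSpec change earlier can keep sending `Identify(None)` ticks and still expect `ReceiveTimeout` to fire. A minimal sketch of the mechanism (actor and message names are illustrative):

```scala
import akka.actor.{ Actor, NotInfluenceReceiveTimeout, ReceiveTimeout }
import scala.concurrent.duration._

// A probe message that deliberately does not reset the receive-timeout timer
case object KeepAlivePing extends NotInfluenceReceiveTimeout

class IdleWatcher extends Actor {
  context.setReceiveTimeout(3.seconds)
  def receive = {
    case KeepAlivePing  ⇒ // ignored by the timeout bookkeeping, the timer keeps running
    case ReceiveTimeout ⇒ context.stop(self) // fires even under a steady stream of pings
    case _              ⇒ // any ordinary message resets the 3 second timer
  }
}
```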
@@ -7,7 +7,7 @@ package akka.actor
 import java.util.concurrent.atomic.AtomicReference
 
 import akka.routing._
-import akka.util.WildcardTree
+import akka.util.WildcardIndex
 import com.typesafe.config._
 
 import scala.annotation.tailrec
@@ -132,7 +132,7 @@ private[akka] class Deployer(val settings: ActorSystem.Settings, val dynamicAcce
   import scala.collection.JavaConverters._
 
   private val resizerEnabled: Config = ConfigFactory.parseString("resizer.enabled=on")
-  private val deployments = new AtomicReference(WildcardTree[Deploy]())
+  private val deployments = new AtomicReference(WildcardIndex[Deploy]())
   private val config = settings.config.getConfig("akka.actor.deployment")
   protected val default = config.getConfig("default")
   val routerTypeMapping: Map[String, String] =
@@ -146,20 +146,18 @@ private[akka] class Deployer(val settings: ActorSystem.Settings, val dynamicAcce
     case _ ⇒ None
   } foreach deploy
 
-  def lookup(path: ActorPath): Option[Deploy] = lookup(path.elements.drop(1).iterator)
+  def lookup(path: ActorPath): Option[Deploy] = lookup(path.elements.drop(1))
 
-  def lookup(path: Iterable[String]): Option[Deploy] = lookup(path.iterator)
-
-  def lookup(path: Iterator[String]): Option[Deploy] = deployments.get().find(path).data
+  def lookup(path: Iterable[String]): Option[Deploy] = deployments.get().find(path)
 
   def deploy(d: Deploy): Unit = {
-    @tailrec def add(path: Array[String], d: Deploy, w: WildcardTree[Deploy] = deployments.get): Unit = {
-      for (i ← 0 until path.length) path(i) match {
+    @tailrec def add(path: Array[String], d: Deploy, w: WildcardIndex[Deploy] = deployments.get): Unit = {
+      for (i ← path.indices) path(i) match {
         case "" ⇒ throw new InvalidActorNameException(s"Actor name in deployment [${d.path}] must not be empty")
         case el ⇒ ActorPath.validatePathElement(el, fullPath = d.path)
       }
 
-      if (!deployments.compareAndSet(w, w.insert(path.iterator, d))) add(path, d)
+      if (!deployments.compareAndSet(w, w.insert(path, d))) add(path, d)
     }
 
     add(d.path.split("/").drop(1), d)
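The retrying `add` above is the standard compare-and-set loop over an `AtomicReference`: derive the new value from a fresh read, and retry if another thread won the race in between. A generic standalone sketch of that pattern (not Akka API, names are illustrative):

```scala
import java.util.concurrent.atomic.AtomicReference
import scala.annotation.tailrec

object AtomicUpdate {
  // Lock-free update: compute the next value from the current one and
  // publish it only if nobody else changed the reference meanwhile.
  @tailrec
  def atomicUpdate[A](ref: AtomicReference[A])(f: A ⇒ A): Unit = {
    val current = ref.get()
    if (!ref.compareAndSet(current, f(current))) atomicUpdate(ref)(f)
  }
}
```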
@@ -224,6 +224,7 @@ object FSM {
  * Finite State Machine actor trait. Use as follows:
  *
+ * <pre>
  *   object A {
  *     trait State
  *     case class One extends State
  *     case class Two extends State
@@ -785,7 +786,7 @@ trait LoggingFSM[S, D] extends FSM[S, D] { this: Actor ⇒
         case a: ActorRef ⇒ a.toString
         case _           ⇒ "unknown"
       }
-      log.debug("processing " + event + " from " + srcstr)
+      log.debug("processing {} from {} in state {}", event, srcstr, stateName)
     }
 
     if (logDepth > 0) {
@@ -12,7 +12,7 @@ import java.util.concurrent.atomic.AtomicInteger
  *
  * Watches all actors which subscribe on the given event stream, and unsubscribes them from it when they are Terminated.
  */
-private[akka] class ActorClassificationUnsubscriber(bus: ManagedActorClassification, debug: Boolean) extends Actor with Stash {
+protected[akka] class ActorClassificationUnsubscriber(bus: ManagedActorClassification, debug: Boolean) extends Actor with Stash {
 
   import ActorClassificationUnsubscriber._
 
@@ -19,7 +19,7 @@ import java.util.concurrent.atomic.AtomicInteger
  * subscribe calls * because of the need of linearizing the history message sequence and the possibility of sometimes
  * watching a few actors too much - we opt for the 2nd choice here.
  */
-private[akka] class EventStreamUnsubscriber(eventStream: EventStream, debug: Boolean = false) extends Actor {
+protected[akka] class EventStreamUnsubscriber(eventStream: EventStream, debug: Boolean = false) extends Actor {
 
   import EventStreamUnsubscriber._
 
@@ -3,18 +3,24 @@
  */
 package akka.pattern
 
-import java.util.concurrent.atomic.{ AtomicInteger, AtomicLong, AtomicBoolean }
+import java.util.concurrent.atomic.{ AtomicBoolean, AtomicInteger, AtomicLong }
 
 import akka.AkkaException
 import akka.actor.Scheduler
 import akka.util.Unsafe
 
 import scala.util.control.NoStackTrace
-import java.util.concurrent.{ Callable, CopyOnWriteArrayList }
-import scala.concurrent.{ ExecutionContext, Future, Promise, Await }
+import java.util.concurrent.{ Callable, CompletionStage, CopyOnWriteArrayList }
+
+import scala.concurrent.{ Await, ExecutionContext, Future, Promise }
 import scala.concurrent.duration._
 import scala.concurrent.TimeoutException
 import scala.util.control.NonFatal
 import scala.util.Success
 import akka.dispatch.ExecutionContexts.sameThreadExecutionContext
+import akka.japi.function.Creator
+
+import scala.compat.java8.FutureConverters
 
 /**
  * Companion object providing factory methods for Circuit Breaker which runs callbacks in caller's thread
@@ -123,6 +129,18 @@ class CircuitBreaker(scheduler: Scheduler, maxFailures: Int, callTimeout: Finite
    */
   def callWithCircuitBreaker[T](body: Callable[Future[T]]): Future[T] = withCircuitBreaker(body.call)
 
+  /**
+   * Java API (8) for [[#withCircuitBreaker]]
+   *
+   * @param body Call needing protection
+   * @return [[java.util.concurrent.CompletionStage]] containing the call result or a
+   *         `scala.concurrent.TimeoutException` if the call timed out
+   */
+  def callWithCircuitBreakerCS[T](body: Callable[CompletionStage[T]]): CompletionStage[T] =
+    FutureConverters.toJava[T](callWithCircuitBreaker(new Callable[Future[T]] {
+      override def call(): Future[T] = FutureConverters.toScala(body.call())
+    }))
+
   /**
    * Wraps invocations of synchronous calls that need to be protected
    *
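`callWithCircuitBreakerCS` leans on `scala-java8-compat`'s `FutureConverters` (imported above) to bridge `CompletionStage` and `Future` in both directions; the new `CircuitBreakerTest` earlier in this commit exercises it from Java. A minimal round-trip sketch of that bridging (object name is illustrative):

```scala
import java.util.concurrent.CompletableFuture
import scala.compat.java8.FutureConverters
import scala.concurrent.Await
import scala.concurrent.duration._

object ConvertersRoundTrip extends App {
  // The same conversion callWithCircuitBreakerCS performs around the user callback:
  val cs = CompletableFuture.completedFuture("hello")
  val asScala = FutureConverters.toScala(cs)        // CompletionStage to Future
  val backToJava = FutureConverters.toJava(asScala) // Future to CompletionStage
  assert(Await.result(asScala, 1.second) == "hello")
}
```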
@@ -91,7 +91,7 @@ object ConsistentHashingRouter {
    * INTERNAL API
    */
   private[akka] def hashMappingAdapter(mapper: ConsistentHashMapper): ConsistentHashMapping = {
-    case message if (mapper.hashKey(message).asInstanceOf[AnyRef] ne null) ⇒
+    case message if mapper.hashKey(message).asInstanceOf[AnyRef] ne null ⇒
       mapper.hashKey(message)
   }
 
@@ -273,7 +273,9 @@ abstract class CustomRouterConfig extends RouterConfig {
 }
 
 /**
- * Router configuration which has no default, i.e. external configuration is required.
+ * Wraps a [[akka.actor.Props]] to mark the actor as externally configurable to be used with a router.
+ * If a [[akka.actor.Props]] is not wrapped with [[FromConfig]] then the actor will ignore the router part of the deployment section
+ * in the configuration.
  */
 case object FromConfig extends FromConfig {
   /**
@@ -290,7 +292,9 @@ case object FromConfig extends FromConfig {
 }
 
 /**
- * Java API: Router configuration which has no default, i.e. external configuration is required.
+ * Java API: Wraps a [[akka.actor.Props]] to mark the actor as externally configurable to be used with a router.
+ * If a [[akka.actor.Props]] is not wrapped with [[FromConfig]] then the actor will ignore the router part of the deployment section
+ * in the configuration.
  *
  * This can be used when the dispatcher to be used for the head Router needs to be configured
  * (defaults to default-dispatcher).
@@ -234,6 +234,7 @@ object ByteIterator {
       new MultiByteArrayIterator(clonedIterators)
     }
 
+    /** For performance sensitive code, call take() directly on ByteString (it's optimised there) */
    final override def take(n: Int): this.type = {
       var rest = n
       val builder = new ListBuffer[ByteArrayIterator]
@@ -249,7 +250,8 @@ object ByteIterator {
       normalize()
     }
 
-    @tailrec final override def drop(n: Int): this.type =
+    /** For performance sensitive code, call drop() directly on ByteString (it's optimised there) */
+    final override def drop(n: Int): this.type =
       if ((n > 0) && !isEmpty) {
         val nCurrent = math.min(n, current.len)
         current.drop(n)
@@ -341,7 +343,9 @@ object ByteIterator {
     def getDoubles(xs: Array[Double], offset: Int, n: Int)(implicit byteOrder: ByteOrder): this.type =
       getToArray(xs, offset, n, 8) { getDouble(byteOrder) } { current.getDoubles(_, _, _)(byteOrder) }
 
-    def copyToBuffer(buffer: ByteBuffer): Int = {
+    /** For performance sensitive code, call copyToBuffer() directly on ByteString (it's optimised there) */
+    override def copyToBuffer(buffer: ByteBuffer): Int = {
       // the fold here is better than indexing into the LinearSeq
       val n = iterators.foldLeft(0) { _ + _.copyToBuffer(buffer) }
       normalize()
       n
@@ -635,6 +639,7 @@ abstract class ByteIterator extends BufferedIterator[Byte] {
    * @param buffer a ByteBuffer to copy bytes to
    * @return the number of bytes actually copied
    */
+  /** For performance sensitive code, call copyToBuffer() directly on ByteString (it's optimised there) */
   def copyToBuffer(buffer: ByteBuffer): Int
 
   /**
@@ -7,14 +7,15 @@ package akka.util
 import java.io.{ ObjectInputStream, ObjectOutputStream }
 import java.nio.{ ByteBuffer, ByteOrder }
 import java.lang.{ Iterable ⇒ JIterable }
-import scala.annotation.varargs
 
+import scala.annotation.{ tailrec, varargs }
 import scala.collection.IndexedSeqOptimized
 import scala.collection.mutable.{ Builder, WrappedArray }
 import scala.collection.immutable
-import scala.collection.immutable.{ IndexedSeq, VectorBuilder }
+import scala.collection.immutable.{ IndexedSeq, VectorBuilder, VectorIterator }
 import scala.collection.generic.CanBuildFrom
 import scala.reflect.ClassTag
-import java.nio.charset.StandardCharsets
+import java.nio.charset.{ Charset, StandardCharsets }
 
 object ByteString {
 
@@ -103,13 +104,14 @@ object ByteString {
   }
 
   private[akka] object ByteString1C extends Companion {
+    def fromString(s: String): ByteString1C = new ByteString1C(s.getBytes)
     def apply(bytes: Array[Byte]): ByteString1C = new ByteString1C(bytes)
     val SerializationIdentity = 1.toByte
 
     def readFromInputStream(is: ObjectInputStream): ByteString1C = {
       val length = is.readInt()
       val arr = new Array[Byte](length)
-      is.read(arr, 0, length)
+      is.readFully(arr, 0, length)
       ByteString1C(arr)
     }
   }
@@ -123,37 +125,74 @@ object ByteString {
 
     override def length: Int = bytes.length
 
+    // Avoid `iterator` in performance sensitive code, call ops directly on ByteString instead
     override def iterator: ByteIterator.ByteArrayIterator = ByteIterator.ByteArrayIterator(bytes, 0, bytes.length)
 
-    private[akka] def toByteString1: ByteString1 = ByteString1(bytes)
+    /** INTERNAL API */
+    private[akka] def toByteString1: ByteString1 = ByteString1(bytes, 0, bytes.length)
 
+    /** INTERNAL API */
     private[akka] def byteStringCompanion = ByteString1C
 
-    def asByteBuffer: ByteBuffer = toByteString1.asByteBuffer
+    override def asByteBuffer: ByteBuffer = toByteString1.asByteBuffer
 
-    def asByteBuffers: scala.collection.immutable.Iterable[ByteBuffer] = List(asByteBuffer)
+    override def asByteBuffers: scala.collection.immutable.Iterable[ByteBuffer] = List(asByteBuffer)
 
-    def decodeString(charset: String): String =
+    override def decodeString(charset: String): String =
       if (isEmpty) "" else new String(bytes, charset)
 
-    def ++(that: ByteString): ByteString =
+    override def decodeString(charset: Charset): String =
+      if (isEmpty) "" else new String(bytes, charset)
+
+    override def ++(that: ByteString): ByteString = {
       if (that.isEmpty) this
       else if (this.isEmpty) that
       else toByteString1 ++ that
+    }
+
+    override def take(n: Int): ByteString =
+      if (n <= 0) ByteString.empty
+      else toByteString1.take(n)
+
+    override def dropRight(n: Int): ByteString =
+      if (n <= 0) this
+      else toByteString1.dropRight(n)
+
+    override def drop(n: Int): ByteString =
+      if (n <= 0) this
+      else toByteString1.drop(n)
 
     override def slice(from: Int, until: Int): ByteString =
-      if ((from != 0) || (until != length)) toByteString1.slice(from, until)
-      else this
+      if ((from == 0) && (until == length)) this
+      else if (from > length) ByteString.empty
+      else toByteString1.slice(from, until)
 
-    private[akka] def writeToOutputStream(os: ObjectOutputStream): Unit =
+    private[akka] override def writeToOutputStream(os: ObjectOutputStream): Unit =
       toByteString1.writeToOutputStream(os)
 
+    override def copyToBuffer(buffer: ByteBuffer): Int =
+      writeToBuffer(buffer, offset = 0)
+
+    /** INTERNAL API: Specialized for internal use, writing multiple ByteString1C into the same ByteBuffer. */
+    private[akka] def writeToBuffer(buffer: ByteBuffer, offset: Int): Int = {
+      val copyLength = Math.min(buffer.remaining, offset + length)
+      if (copyLength > 0) {
+        buffer.put(bytes, offset, copyLength)
+        drop(copyLength)
+      }
+      copyLength
+    }
   }
 
+  /** INTERNAL API: ByteString backed by exactly one array, with start / end markers */
   private[akka] object ByteString1 extends Companion {
     val empty: ByteString1 = new ByteString1(Array.empty[Byte])
-    def apply(bytes: Array[Byte]): ByteString1 = ByteString1(bytes, 0, bytes.length)
     def fromString(s: String): ByteString1 = apply(s.getBytes)
+    def apply(bytes: Array[Byte]): ByteString1 = apply(bytes, 0, bytes.length)
     def apply(bytes: Array[Byte], startIndex: Int, length: Int): ByteString1 =
-      if (length == 0) empty else new ByteString1(bytes, startIndex, length)
+      if (length == 0) empty
+      else new ByteString1(bytes, Math.max(0, startIndex), Math.max(0, length))
 
     val SerializationIdentity = 0.toByte
 
@@ -170,6 +209,7 @@ object ByteString {
 
     def apply(idx: Int): Byte = bytes(checkRangeConvert(idx))
 
+    // Avoid `iterator` in performance sensitive code, call ops directly on ByteString instead
     override def iterator: ByteIterator.ByteArrayIterator =
       ByteIterator.ByteArrayIterator(bytes, startIndex, startIndex + length)
 
@@ -189,6 +229,48 @@ object ByteString {
 
     private[akka] def byteStringCompanion = ByteString1
 
+    override def dropRight(n: Int): ByteString =
+      dropRight1(n)
+
+    /** INTERNAL API */
+    private[akka] def dropRight1(n: Int): ByteString1 =
+      if (n <= 0) this
+      else if (length - n <= 0) ByteString1.empty
+      else new ByteString1(bytes, startIndex, length - n)
+
+    override def drop(n: Int): ByteString =
+      if (n <= 0) this else drop1(n)
+
+    /** INTERNAL API */
+    private[akka] def drop1(n: Int): ByteString1 = {
+      val nextStartIndex = startIndex + n
+      if (nextStartIndex >= bytes.length) ByteString1.empty
+      else ByteString1(bytes, nextStartIndex, length - n)
+    }
+
+    override def take(n: Int): ByteString =
+      if (n <= 0) ByteString.empty
+      else ByteString1(bytes, startIndex, Math.min(n, length))
+
+    override def slice(from: Int, until: Int): ByteString = {
+      if (from <= 0 && until >= length) this // we can do < / > since we're Compact
+      else if (until <= from) ByteString1.empty
+      else ByteString1(bytes, startIndex + from, until - from)
+    }
+
+    override def copyToBuffer(buffer: ByteBuffer): Int =
+      writeToBuffer(buffer)
+
+    /** INTERNAL API: Specialized for internal use, writing multiple ByteString1 into the same ByteBuffer. */
+    private[akka] def writeToBuffer(buffer: ByteBuffer): Int = {
+      val copyLength = Math.min(buffer.remaining, length)
+      if (copyLength > 0) {
+        buffer.put(bytes, startIndex, copyLength)
+        drop(copyLength)
+      }
+      copyLength
+    }
+
     def compact: CompactByteString =
       if (isCompact) ByteString1C(bytes) else ByteString1C(toArray)
 
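The `ByteString1` overrides above all reduce to moving a `(startIndex, length)` window over the shared backing array, so no bytes are copied. A simplified standalone model of that arithmetic (illustration only, not the real Akka class):

```scala
// Simplified model of a ByteString1-style array view: drop/take only
// move the window, they never copy the underlying bytes.
final case class ArrayView(bytes: Array[Byte], startIndex: Int, length: Int) {
  def drop(n: Int): ArrayView =
    if (n <= 0) this
    else if (n >= length) ArrayView(bytes, 0, 0)           // dropped everything
    else ArrayView(bytes, startIndex + n, length - n)      // slide the start forward
  def take(n: Int): ArrayView =
    if (n <= 0) ArrayView(bytes, 0, 0)
    else ArrayView(bytes, startIndex, math.min(n, length)) // shrink from the end
  def toSeq: Seq[Byte] = bytes.slice(startIndex, startIndex + length).toSeq
}

object ArrayViewDemo extends App {
  val v = ArrayView("0123456789".getBytes, 0, 10)
  // mirrors the ByteStringSpec case: drop(5).take(4).drop(1).take(2) selects "67"
  assert(v.drop(5).take(4).drop(1).take(2).toSeq == "67".getBytes.toSeq)
}
```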
@@ -200,7 +282,10 @@ object ByteString {
 
     def asByteBuffers: scala.collection.immutable.Iterable[ByteBuffer] = List(asByteBuffer)
 
-    def decodeString(charset: String): String =
+    override def decodeString(charset: String): String =
       new String(if (length == bytes.length) bytes else toArray, charset)
 
+    override def decodeString(charset: Charset): String = // avoids Charset.forName lookup in String internals
+      new String(if (length == bytes.length) bytes else toArray, charset)
+
     def ++(that: ByteString): ByteString = {
@@ -283,8 +368,9 @@ object ByteString {
    */
   final class ByteStrings private (private[akka] val bytestrings: Vector[ByteString1], val length: Int) extends ByteString with Serializable {
     if (bytestrings.isEmpty) throw new IllegalArgumentException("bytestrings must not be empty")
+    if (bytestrings.head.isEmpty) throw new IllegalArgumentException("bytestrings.head must not be empty")
 
-    def apply(idx: Int): Byte =
+    def apply(idx: Int): Byte = {
       if (0 <= idx && idx < length) {
         var pos = 0
         var seen = 0
@@ -294,7 +380,9 @@ object ByteString {
       }
       bytestrings(pos)(idx - seen)
     } else throw new IndexOutOfBoundsException(idx.toString)
+    }
 
+    /** Avoid `iterator` in performance sensitive code, call ops directly on ByteString instead */
     override def iterator: ByteIterator.MultiByteArrayIterator =
       ByteIterator.MultiByteArrayIterator(bytestrings.toStream map { _.iterator })
 
@@ -312,6 +400,14 @@ object ByteString {
 
     def isCompact: Boolean = if (bytestrings.length == 1) bytestrings.head.isCompact else false
 
+    override def copyToBuffer(buffer: ByteBuffer): Int = {
+      @tailrec def copyItToTheBuffer(buffer: ByteBuffer, i: Int, written: Int): Int =
+        if (i < bytestrings.length) copyItToTheBuffer(buffer, i + 1, written + bytestrings(i).writeToBuffer(buffer))
+        else written
+
+      copyItToTheBuffer(buffer, 0, 0)
+    }
+
     def compact: CompactByteString = {
       if (isCompact) bytestrings.head.compact
       else {
@ -331,11 +427,83 @@ object ByteString {
|
|||
|
||||
def decodeString(charset: String): String = compact.decodeString(charset)
|
||||
|
||||
def decodeString(charset: Charset): String =
|
||||
compact.decodeString(charset)
|
||||
|
||||
private[akka] def writeToOutputStream(os: ObjectOutputStream): Unit = {
|
||||
os.writeInt(bytestrings.length)
|
||||
bytestrings.foreach(_.writeToOutputStream(os))
|
||||
}
|
||||
|
||||
override def take(n: Int): ByteString = {
|
||||
@tailrec def take0(n: Int, b: ByteStringBuilder, bs: Vector[ByteString1]): ByteString =
|
||||
if (bs.isEmpty || n <= 0) b.result
|
||||
else {
|
||||
val head = bs.head
|
||||
if (n <= head.length) b.append(head.take(n)).result
|
||||
else take0(n - head.length, b.append(head), bs.tail)
|
||||
}
|
||||
|
||||
if (n <= 0) ByteString.empty
|
||||
else if (n >= length) this
|
||||
else take0(n, ByteString.newBuilder, bytestrings)
|
||||
}
|
||||
|
||||
override def dropRight(n: Int): ByteString =
|
||||
if (n <= 0) this
|
||||
else {
|
||||
val last = bytestrings.last
|
||||
if (n < last.length) new ByteStrings(bytestrings.init :+ last.dropRight1(n), length - n)
|
||||
else {
|
||||
val remaining = bytestrings.init
|
||||
if (remaining.isEmpty) ByteString.empty
|
||||
else {
|
||||
val s = new ByteStrings(remaining, length - last.length)
|
||||
val remainingToBeDropped = n - last.length
|
||||
s.dropRight(remainingToBeDropped)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
override def slice(from: Int, until: Int): ByteString =
|
||||
if ((from == 0) && (until == length)) this
|
||||
else if (from > length || until <= from) ByteString.empty
|
||||
else drop(from).dropRight(length - until)
|
||||
|
||||
override def drop(n: Int): ByteString =
|
||||
if (n <= 0) this
|
||||
else if (n > length) ByteString.empty
|
||||
else drop0(n)
|
||||
|
||||
private def drop0(n: Int): ByteString = {
|
||||
var continue = true
|
||||
var fullDrops = 0
|
||||
var remainingToDrop = n
|
||||
do {
|
||||
// impl note: could be optimised a bit by using VectorIterator instead,
|
||||
// however then we're forced to call .toVector which halfs performance
|
||||
// We can work around that, as there's a Scala private method "remainingVector" which is fast,
|
||||
// but let's not go into calling private APIs here just yet.
|
||||
val currentLength = bytestrings(fullDrops).length
|
||||
if (remainingToDrop >= currentLength) {
|
||||
fullDrops += 1
|
||||
remainingToDrop -= currentLength
|
||||
} else continue = false
|
||||
} while (remainingToDrop > 0 && continue)
|
||||
|
||||
val remainingByteStrings = bytestrings.drop(fullDrops)
|
||||
if (remainingByteStrings.isEmpty) ByteString.empty
|
||||
else if (remainingToDrop > 0) {
|
||||
val h: ByteString1 = remainingByteStrings.head.drop1(remainingToDrop)
|
||||
val bs = remainingByteStrings.tail
|
||||
|
||||
if (h.isEmpty)
|
||||
if (bs.isEmpty) ByteString.empty
|
||||
else new ByteStrings(bs, length - n)
|
||||
else new ByteStrings(h +: bs, length - n)
|
||||
} else ByteStrings(remainingByteStrings, length - n)
|
||||
}
|
||||
|
||||
protected def writeReplace(): AnyRef = new SerializationProxy(this)
|
||||
}
|
||||
|
||||
|
|
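The chunk-skipping logic above means `drop`, `take` and `slice` on a concatenated `ByteString` no longer allocate intermediate iterators or Streams. A minimal sketch of the observable behaviour (the values `a` and `b` are illustrative, not from the diff):

```scala
import akka.util.ByteString

object ByteStringsDropSketch extends App {
  // Concatenating two compact ByteStrings yields a ByteStrings composite
  val a = ByteString("hello, ") // 7 bytes
  val b = ByteString("world!")  // 6 bytes
  val combined = a ++ b         // internally Vector(a, b), length 13

  // drop(9) skips the whole first chunk (7 bytes) and drops 2 bytes
  // inside the second chunk, without copying any data
  println(combined.drop(9).utf8String)      // "rld!"
  println(combined.take(5).utf8String)      // "hello"
  println(combined.slice(5, 12).utf8String) // ", world"
}
```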
@@ -386,6 +554,8 @@ sealed abstract class ByteString extends IndexedSeq[Byte] with IndexedSeqOptimiz

// *must* be overridden by derived classes. This construction is necessary
// to specialize the return type, as the method is already implemented in
// a parent trait.
//
// Avoid `iterator` in performance sensitive code, call ops directly on ByteString instead
override def iterator: ByteIterator = throw new UnsupportedOperationException("Method iterator is not implemented in ByteString")

override def head: Byte = apply(0)

@@ -393,14 +563,19 @@ sealed abstract class ByteString extends IndexedSeq[Byte] with IndexedSeqOptimiz

override def last: Byte = apply(length - 1)
override def init: ByteString = dropRight(1)

override def slice(from: Int, until: Int): ByteString =
  if ((from == 0) && (until == length)) this
  else iterator.slice(from, until).toByteString

override def take(n: Int): ByteString = slice(0, n)
// *must* be overridden by derived classes.
override def take(n: Int): ByteString = throw new UnsupportedOperationException("Method take is not implemented in ByteString")
override def takeRight(n: Int): ByteString = slice(length - n, length)
override def drop(n: Int): ByteString = slice(n, length)
override def dropRight(n: Int): ByteString = slice(0, length - n)

// these methods are optimized in derived classes utilising the maximum knowledge about data layout available to them:
// *must* be overridden by derived classes.
override def slice(from: Int, until: Int): ByteString = throw new UnsupportedOperationException("Method slice is not implemented in ByteString")

// *must* be overridden by derived classes.
override def drop(n: Int): ByteString = throw new UnsupportedOperationException("Method drop is not implemented in ByteString")

// *must* be overridden by derived classes.
override def dropRight(n: Int): ByteString = throw new UnsupportedOperationException("Method dropRight is not implemented in ByteString")

override def takeWhile(p: Byte ⇒ Boolean): ByteString = iterator.takeWhile(p).toByteString
override def dropWhile(p: Byte ⇒ Boolean): ByteString = iterator.dropWhile(p).toByteString

@@ -425,7 +600,7 @@ sealed abstract class ByteString extends IndexedSeq[Byte] with IndexedSeqOptimiz
 *
 * @return this ByteString copied into a byte array
 */
protected[ByteString] def toArray: Array[Byte] = toArray[Byte] // protected[ByteString] == public to Java but hidden to Scala * fnizz *
protected[ByteString] def toArray: Array[Byte] = toArray[Byte]

override def toArray[B >: Byte](implicit arg0: ClassTag[B]): Array[B] = iterator.toArray
override def copyToArray[B >: Byte](xs: Array[B], start: Int, len: Int): Unit =

@@ -452,7 +627,8 @@ sealed abstract class ByteString extends IndexedSeq[Byte] with IndexedSeqOptimiz
 * @param buffer a ByteBuffer to copy bytes to
 * @return the number of bytes actually copied
 */
def copyToBuffer(buffer: ByteBuffer): Int = iterator.copyToBuffer(buffer)
// *must* be overridden by derived classes.
def copyToBuffer(buffer: ByteBuffer): Int = throw new UnsupportedOperationException("Method copyToBuffer is not implemented in ByteString")

/**
 * Create a new ByteString with all contents compacted into a single,

@@ -504,9 +680,16 @@ sealed abstract class ByteString extends IndexedSeq[Byte] with IndexedSeqOptimiz

/**
 * Decodes this ByteString using a charset to produce a String.
 * If you have a [[Charset]] instance available, use `decodeString(charset: java.nio.charset.Charset)` instead.
 */
def decodeString(charset: String): String

/**
 * Decodes this ByteString using a charset to produce a String.
 * Avoids Charset.forName lookup in String internals, thus is preferable to `decodeString(charset: String)`.
 */
def decodeString(charset: Charset): String

/**
 * map method that will automatically cast Int back into Byte.
 */

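Because the `String` overload has to resolve the charset by name on every call, callers that decode repeatedly benefit from holding a `Charset` instance once. A small sketch of the preferred usage, assuming the `Charset` overload introduced above:

```scala
import java.nio.charset.StandardCharsets
import akka.util.ByteString

object DecodeSketch extends App {
  val bytes = ByteString("caché", StandardCharsets.UTF_8.name)

  // Preferred: pass the Charset instance, skipping the Charset.forName lookup
  val s1 = bytes.decodeString(StandardCharsets.UTF_8)
  // Still supported: resolve the charset by name on every call
  val s2 = bytes.decodeString("UTF-8")

  assert(s1 == s2)
}
```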
@@ -568,8 +751,8 @@ object CompactByteString {
 * an Array.
 */
def fromArray(array: Array[Byte], offset: Int, length: Int): CompactByteString = {
  val copyOffset = math.max(offset, 0)
  val copyLength = math.max(math.min(array.length - copyOffset, length), 0)
  val copyOffset = Math.max(offset, 0)
  val copyLength = Math.max(Math.min(array.length - copyOffset, length), 0)
  if (copyLength == 0) empty
  else {
    val copyArray = new Array[Byte](copyLength)

@@ -666,6 +849,8 @@ final class ByteStringBuilder extends Builder[Byte, ByteString] {

override def ++=(xs: TraversableOnce[Byte]): this.type = {
  xs match {
    case b: ByteString if b.isEmpty ⇒
      // do nothing
    case b: ByteString1C ⇒
      clearTemp()
      _builder += b.toByteString1

@@ -708,7 +893,7 @@ final class ByteStringBuilder extends Builder[Byte, ByteString] {
/**
 * Java API: append a ByteString to this builder.
 */
def append(bs: ByteString): this.type = this ++= bs
def append(bs: ByteString): this.type = if (bs.isEmpty) this else this ++= bs

/**
 * Add a single Byte to this builder.

@@ -875,7 +1060,7 @@ final class ByteStringBuilder extends Builder[Byte, ByteString] {
  fillByteBuffer(len * 8, byteOrder) { _.asDoubleBuffer.put(array, start, len) }

def clear(): Unit = {
  _builder.clear
  _builder.clear()
  _length = 0
  _tempLength = 0
}
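The builder changes above (empty-append short-circuit, explicit `clear()`) are exercised through the usual builder pattern; a minimal sketch:

```scala
import java.nio.ByteOrder
import akka.util.ByteString

object BuilderSketch extends App {
  implicit val byteOrder: ByteOrder = ByteOrder.BIG_ENDIAN

  val builder = ByteString.newBuilder
  builder.putInt(42)                 // primitives are buffered in the temp array
  builder.append(ByteString.empty)   // with the change above, appending empty is now a no-op
  builder.append(ByteString("tail")) // non-empty ByteStrings are appended as whole chunks

  val result = builder.result()
  assert(result.length == 8)         // 4 bytes of int + 4 bytes of "tail"

  builder.clear()                    // resets both length counters and the chunk builder
}
```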
81 akka-actor/src/main/scala/akka/util/WildcardIndex.scala Normal file
@@ -0,0 +1,81 @@
/**
 * Copyright (C) 2009-2016 Lightbend Inc. <http://www.lightbend.com>
 */

package akka.util

import scala.annotation.tailrec
import scala.collection.immutable.HashMap

private[akka] final case class WildcardIndex[T](wildcardTree: WildcardTree[T] = WildcardTree[T](), doubleWildcardTree: WildcardTree[T] = WildcardTree[T]()) {

  val empty = WildcardTree[T]()

  def insert(elems: Array[String], d: T): WildcardIndex[T] = elems.lastOption match {
    case Some("**") ⇒ copy(doubleWildcardTree = doubleWildcardTree.insert(elems.iterator, d))
    case Some(_)    ⇒ copy(wildcardTree = wildcardTree.insert(elems.iterator, d))
    case _          ⇒ this
  }

  def find(elems: Iterable[String]): Option[T] =
    (if (wildcardTree.isEmpty) {
      if (doubleWildcardTree.isEmpty) {
        empty
      } else {
        doubleWildcardTree.findWithTerminalDoubleWildcard(elems.iterator)
      }
    } else {
      val withSingleWildcard = wildcardTree.findWithSingleWildcard(elems.iterator)
      if (withSingleWildcard.isEmpty) {
        doubleWildcardTree.findWithTerminalDoubleWildcard(elems.iterator)
      } else {
        withSingleWildcard
      }
    }).data

}

private[akka] object WildcardTree {
  private val empty = new WildcardTree[Nothing]()
  def apply[T](): WildcardTree[T] = empty.asInstanceOf[WildcardTree[T]]
}

private[akka] final case class WildcardTree[T](data: Option[T] = None, children: Map[String, WildcardTree[T]] = HashMap[String, WildcardTree[T]]()) {

  lazy val isEmpty: Boolean = data.isEmpty && children.isEmpty

  def insert(elems: Iterator[String], d: T): WildcardTree[T] =
    if (!elems.hasNext) {
      copy(data = Some(d))
    } else {
      val e = elems.next()
      copy(children = children.updated(e, children.getOrElse(e, WildcardTree[T]()).insert(elems, d)))
    }

  @tailrec def findWithSingleWildcard(elems: Iterator[String]): WildcardTree[T] =
    if (!elems.hasNext) this
    else {
      children.get(elems.next()) match {
        case Some(branch) ⇒ branch.findWithSingleWildcard(elems)
        case None ⇒ children.get("*") match {
          case Some(branch) ⇒ branch.findWithSingleWildcard(elems)
          case None         ⇒ WildcardTree[T]()
        }
      }
    }

  @tailrec def findWithTerminalDoubleWildcard(elems: Iterator[String], alt: WildcardTree[T] = WildcardTree[T]()): WildcardTree[T] = {
    if (!elems.hasNext) this
    else {
      val newAlt = children.getOrElse("**", alt)
      children.get(elems.next()) match {
        case Some(branch) ⇒ branch.findWithTerminalDoubleWildcard(elems, newAlt)
        case None ⇒ children.get("*") match {
          case Some(branch) ⇒ branch.findWithTerminalDoubleWildcard(elems, newAlt)
          case None         ⇒ newAlt
        }
      }
    }
  }
}
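The index keeps single-wildcard (`*`) and terminal double-wildcard (`**`) patterns in two separate trees, so lookups only pay for the pattern kinds actually registered. A hedged sketch of the semantics (note that `WildcardIndex` is `private[akka]`, so this only illustrates behaviour, it does not compile outside the `akka` package):

```scala
import akka.util.WildcardIndex

object WildcardSketch extends App {
  val index = WildcardIndex[Int]()
    .insert(Array("user", "*", "settings"), 1) // single wildcard segment
    .insert(Array("system", "**"), 2)          // terminal double wildcard

  assert(index.find(List("user", "alice", "settings")) == Some(1))
  assert(index.find(List("system", "a", "b", "c")) == Some(2)) // "**" matches any suffix
  assert(index.find(List("user", "alice", "profile")) == None)
}
```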
@@ -1,32 +0,0 @@
/**
 * Copyright (C) 2009-2016 Lightbend Inc. <http://www.lightbend.com>
 */

package akka.util

import annotation.tailrec
import collection.immutable.HashMap

private[akka] object WildcardTree {
  private val empty = new WildcardTree[Nothing]()
  def apply[T](): WildcardTree[T] = empty.asInstanceOf[WildcardTree[T]]
}
private[akka] final case class WildcardTree[T](data: Option[T] = None, children: Map[String, WildcardTree[T]] = HashMap[String, WildcardTree[T]]()) {

  def insert(elems: Iterator[String], d: T): WildcardTree[T] =
    if (!elems.hasNext) {
      copy(data = Some(d))
    } else {
      val e = elems.next()
      copy(children = children.updated(e, children.get(e).getOrElse(WildcardTree()).insert(elems, d)))
    }

  @tailrec final def find(elems: Iterator[String]): WildcardTree[T] =
    if (!elems.hasNext) this
    else {
      (children.get(elems.next()) orElse children.get("*")) match {
        case Some(branch) ⇒ branch.find(elems)
        case None         ⇒ WildcardTree()
      }
    }
}
@@ -0,0 +1,128 @@
/**
 * Copyright (C) 2015-2016 Lightbend Inc. <http://www.lightbend.com>
 */

package akka.http

import java.util.concurrent.{ CountDownLatch, TimeUnit }

import akka.NotUsed
import akka.actor.ActorSystem
import akka.http.impl.util.ByteStringRendering
import akka.http.scaladsl.{ Http, HttpExt }
import akka.http.scaladsl.Http.ServerBinding
import akka.http.scaladsl.model._
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.unmarshalling._
import akka.stream._
import akka.stream.TLSProtocol.{ SslTlsInbound, SslTlsOutbound }
import akka.stream.scaladsl._
import akka.stream.stage.{ GraphStage, GraphStageLogic }
import akka.util.ByteString
import com.typesafe.config.ConfigFactory
import org.openjdk.jmh.annotations._
import org.openjdk.jmh.infra.Blackhole

import scala.concurrent.{ Await, Future }
import scala.concurrent.duration._
import scala.util.Try

/*
 Baseline:

 [info] Benchmark                               Mode  Cnt       Score       Error  Units
 [info] HttpBlueprintBenchmark.run_10000_reqs  thrpt   20  197972.659 ± 14512.694  ops/s
 */
@State(Scope.Benchmark)
@OutputTimeUnit(TimeUnit.SECONDS)
@BenchmarkMode(Array(Mode.Throughput))
class HttpBlueprintBenchmark {

  val config = ConfigFactory.parseString(
    """
    akka {
      loglevel = "WARNING"

      stream.materializer {

        # default: sync-processing-limit = 1000
        sync-processing-limit = 1000

        # default: output-burst-limit = 10000
        output-burst-limit = 1000

        # default: initial-input-buffer-size = 4
        initial-input-buffer-size = 4

        # default: max-input-buffer-size = 16
        max-input-buffer-size = 16

      }

      http {
        # default: request-timeout = 20s
        request-timeout = infinite # disabled
        # request-timeout = 20s
      }
    }""".stripMargin
  ).withFallback(ConfigFactory.load())

  implicit val system: ActorSystem = ActorSystem("HttpBenchmark", config)

  val materializer: ActorMaterializer = ActorMaterializer()
  val notFusingMaterializer = ActorMaterializer(materializer.settings.withAutoFusing(false))

  val request: HttpRequest = HttpRequest()
  val requestRendered = ByteString(
    "GET / HTTP/1.1\r\n" +
      "Accept: */*\r\n" +
      "Accept-Encoding: gzip, deflate\r\n" +
      "Connection: keep-alive\r\n" +
      "Host: example.com\r\n" +
      "User-Agent: HTTPie/0.9.3\r\n" +
      "\r\n"
  )

  val response: HttpResponse = HttpResponse()
  val responseRendered: ByteString = ByteString(
    s"HTTP/1.1 200 OK\r\n" +
      s"Content-Length: 0\r\n" +
      s"\r\n"
  )

  def TCPPlacebo(requests: Int): Flow[ByteString, ByteString, NotUsed] =
    Flow.fromSinkAndSource(
      Flow[ByteString].takeWhile(it => !(it.utf8String contains "Connection: close")) to Sink.ignore,
      Source.repeat(requestRendered).take(requests)
    )

  def layer: BidiFlow[HttpResponse, SslTlsOutbound, SslTlsInbound, HttpRequest, NotUsed] = Http().serverLayer()(materializer)
  def server(requests: Int): Flow[HttpResponse, HttpRequest, _] = layer atop TLSPlacebo() join TCPPlacebo(requests)

  val reply = Flow[HttpRequest].map { _ => response }

  @TearDown
  def shutdown(): Unit = {
    Await.result(system.terminate(), 5.seconds)
  }

  val nothingHere: Flow[HttpRequest, HttpResponse, NotUsed] =
    Flow.fromSinkAndSource(Sink.cancelled, Source.empty)

  @Benchmark
  @OperationsPerInvocation(100000)
  def run_10000_reqs() = {
    val n = 100000
    val latch = new CountDownLatch(n)

    val replyCountdown = reply map { x =>
      latch.countDown()
      x
    }
    server(n).joinMat(replyCountdown)(Keep.right).run()(materializer)

    latch.await()
  }

}
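The JMH benchmarks added in this commit are run through the sbt-jmh plugin; assuming the standard Akka build layout where they live in the `akka-bench-jmh` module, an invocation looks like:

```
sbt "akka-bench-jmh/jmh:run -i 20 -wi 10 -f 1 .*HttpBlueprintBenchmark.*"
```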
@@ -0,0 +1,102 @@
/**
 * Copyright (C) 2015-2016 Lightbend Inc. <http://www.lightbend.com>
 */
package akka.http

import java.util.concurrent.TimeUnit
import javax.net.ssl.SSLContext

import akka.Done
import akka.actor.ActorSystem
import akka.event.NoLogging
import akka.http.impl.engine.parsing.{ HttpHeaderParser, HttpRequestParser }
import akka.http.scaladsl.settings.ParserSettings
import akka.stream.ActorMaterializer
import akka.stream.TLSProtocol.SessionBytes
import akka.stream.scaladsl._
import akka.util.ByteString
import org.openjdk.jmh.annotations.{ OperationsPerInvocation, _ }
import org.openjdk.jmh.infra.Blackhole

import scala.concurrent.duration._
import scala.concurrent.{ Await, Future }

@State(Scope.Benchmark)
@OutputTimeUnit(TimeUnit.SECONDS)
@BenchmarkMode(Array(Mode.Throughput))
class HttpRequestParsingBenchmark {

  implicit val system: ActorSystem = ActorSystem("HttpRequestParsingBenchmark")
  implicit val materializer = ActorMaterializer()(system)
  val parserSettings = ParserSettings(system)
  val parser = new HttpRequestParser(parserSettings, false, HttpHeaderParser(parserSettings, NoLogging)())
  val dummySession = SSLContext.getDefault.createSSLEngine.getSession

  @Param(Array("small", "large"))
  var req: String = ""

  def request = req match {
    case "small" => requestBytesSmall
    case "large" => requestBytesLarge
  }

  val requestBytesSmall: SessionBytes = SessionBytes(
    dummySession,
    ByteString(
      """|GET / HTTP/1.1
         |Accept: */*
         |Accept-Encoding: gzip, deflate
         |Connection: keep-alive
         |Host: example.com
         |User-Agent: HTTPie/0.9.3
         |
         |""".stripMargin.replaceAll("\n", "\r\n")
    )
  )

  val requestBytesLarge: SessionBytes = SessionBytes(
    dummySession,
    ByteString(
      """|GET /json HTTP/1.1
         |Host: server
         |User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/20130501 Firefox/30.0 AppleWebKit/600.00 Chrome/30.0.0000.0 Trident/10.0 Safari/600.00
         |Cookie: uid=12345678901234567890; __utma=1.1234567890.1234567890.1234567890.1234567890.12; wd=2560x1600
         |Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
         |Accept-Language: en-US,en;q=0.5
         |Connection: keep-alive
         |
         |""".stripMargin.replaceAll("\n", "\r\n")
    )
  )

  /*
  // before:
  [info] Benchmark                                              (req)   Mode  Cnt        Score       Error  Units
  [info] HttpRequestParsingBenchmark.parse_10000_requests       small  thrpt   20  358 982.157 ± 93745.863  ops/s
  [info] HttpRequestParsingBenchmark.parse_10000_requests       large  thrpt   20  388 335.666 ± 16990.715  ops/s

  // after:
  [info] HttpRequestParsingBenchmark.parse_10000_requests_val   small  thrpt   20  623 975.879 ±  6191.897  ops/s
  [info] HttpRequestParsingBenchmark.parse_10000_requests_val   large  thrpt   20  507 460.283 ±  4735.843  ops/s
  */

  val httpMessageParser = Flow.fromGraph(parser)

  def flow(bytes: SessionBytes, n: Int): RunnableGraph[Future[Done]] =
    Source.repeat(request).take(n)
      .via(httpMessageParser)
      .toMat(Sink.ignore)(Keep.right)

  @Benchmark
  @OperationsPerInvocation(10000)
  def parse_10000_requests_val(blackhole: Blackhole): Unit = {
    val done = flow(requestBytesSmall, 10000).run()
    Await.ready(done, 32.days)
  }

  @TearDown
  def shutdown(): Unit = {
    Await.result(system.terminate(), 5.seconds)
  }
}
@@ -0,0 +1,250 @@
/**
 * Copyright (C) 2015-2016 Lightbend Inc. <http://www.lightbend.com>
 */

package akka.http

import java.util.concurrent.{ CountDownLatch, TimeUnit }

import akka.NotUsed
import akka.actor.ActorSystem
import akka.event.NoLogging
import akka.http.impl.engine.rendering.ResponseRenderingOutput.HttpData
import akka.http.impl.engine.rendering.{ HttpResponseRendererFactory, ResponseRenderingContext, ResponseRenderingOutput }
import akka.http.scaladsl.Http
import akka.http.scaladsl.model._
import akka.http.scaladsl.model.headers.Server
import akka.http.scaladsl.unmarshalling.Unmarshal
import akka.stream._
import akka.stream.scaladsl._
import akka.stream.stage.{ GraphStageLogic, GraphStageWithMaterializedValue, InHandler }
import akka.util.ByteString
import com.typesafe.config.ConfigFactory
import org.openjdk.jmh.annotations._
import org.openjdk.jmh.infra.Blackhole

import scala.concurrent.duration._
import scala.concurrent.{ Await, Future }
import scala.util.Try

@State(Scope.Benchmark)
@OutputTimeUnit(TimeUnit.SECONDS)
@BenchmarkMode(Array(Mode.Throughput))
class HttpResponseRenderingBenchmark extends HttpResponseRendererFactory(
  serverHeader = Some(Server("Akka HTTP 2.4.x")),
  responseHeaderSizeHint = 64,
  log = NoLogging
) {

  val config = ConfigFactory.parseString(
    """
    akka {
      loglevel = "ERROR"
    }""".stripMargin
  ).withFallback(ConfigFactory.load())

  implicit val system = ActorSystem("HttpResponseRenderingBenchmark", config)
  implicit val materializer = ActorMaterializer()

  import system.dispatcher

  val requestRendered = ByteString(
    "GET / HTTP/1.1\r\n" +
      "Accept: */*\r\n" +
      "Accept-Encoding: gzip, deflate\r\n" +
      "Connection: keep-alive\r\n" +
      "Host: example.com\r\n" +
      "User-Agent: HTTPie/0.9.3\r\n" +
      "\r\n"
  )

  def TCPPlacebo(requests: Int): Flow[ByteString, ByteString, NotUsed] =
    Flow.fromSinkAndSource(
      Flow[ByteString].takeWhile(it => !(it.utf8String contains "Connection: close")) to Sink.ignore,
      Source.repeat(requestRendered).take(requests)
    )

  def TlsPlacebo = TLSPlacebo()

  val requestRendering: Flow[HttpRequest, String, NotUsed] =
    Http()
      .clientLayer(headers.Host("blah.com"))
      .atop(TlsPlacebo)
      .join {
        Flow[ByteString].map { x ⇒
          val response = s"HTTP/1.1 200 OK\r\nContent-Length: ${x.size}\r\n\r\n"
          ByteString(response) ++ x
        }
      }
      .mapAsync(1)(response => Unmarshal(response).to[String])

  def renderResponse: Future[String] = Source.single(HttpRequest(uri = "/foo"))
    .via(requestRendering)
    .runWith(Sink.head)

  var request: HttpRequest = _
  var pool: Flow[(HttpRequest, Int), (Try[HttpResponse], Int), _] = _

  @TearDown
  def shutdown(): Unit = {
    Await.ready(Http().shutdownAllConnectionPools(), 1.second)
    Await.result(system.terminate(), 5.seconds)
  }

  /*
  [info] Benchmark                                                 Mode  Cnt                  Score               Error  Units
  [info] HttpResponseRenderingBenchmark.header_date_val           thrpt   20  2 704 169 260 029.906 ± 234456086114.237  ops/s

  // def, normal time
  [info] HttpResponseRenderingBenchmark.header_date_def           thrpt   20    178 297 625 609.638 ±   7429280865.659  ops/s
  [info] HttpResponseRenderingBenchmark.response_ok_simple_val    thrpt   20          1 258 119.673 ±        58399.454  ops/s
  [info] HttpResponseRenderingBenchmark.response_ok_simple_def    thrpt   20            687 576.928 ±        94813.618  ops/s

  // clock nanos
  [info] HttpResponseRenderingBenchmark.response_ok_simple_clock  thrpt   20          1 676 438.649 ±        33976.590  ops/s
  [info] HttpResponseRenderingBenchmark.response_ok_simple_clock  thrpt   40          1 199 462.263 ±       222226.304  ops/s

  // ------

  // before optimising collectFirst
  [info] HttpResponseRenderingBenchmark.json_response             thrpt   20          1 782 572.845 ±        16572.625  ops/s
  [info] HttpResponseRenderingBenchmark.simple_response           thrpt   20          1 611 802.216 ±        19557.151  ops/s

  // after removing collectFirst and Option from renderHeaders
  // not much of a difference, but hey, less Option allocs
  [info] HttpResponseRenderingBenchmark.json_response             thrpt   20          1 785 152.896 ±        15210.299  ops/s
  [info] HttpResponseRenderingBenchmark.simple_response           thrpt   20          1 783 800.184 ±        14938.415  ops/s

  // -----

  // baseline for this optimisation is the above results (after collectFirst).

  // after introducing pre-rendered ContentType headers:

  normal clock
  [info] HttpResponseRenderingBenchmark.json_long_raw_response    thrpt   20  1738558.895 ± 159612.661  ops/s
  [info] HttpResponseRenderingBenchmark.json_response             thrpt   20  1714176.824 ± 100011.642  ops/s

  "fast clock"
  [info] HttpResponseRenderingBenchmark.json_long_raw_response    thrpt   20  1 528 632.480 ± 44934.827  ops/s
  [info] HttpResponseRenderingBenchmark.json_response             thrpt   20  1 517 383.792 ± 28256.716  ops/s

  */

  /**
   * HTTP/1.1 200 OK
   * Server: Akka HTTP 2.4.x
   * Date: Tue, 26 Jul 2016 15:26:53 GMT
   * Content-Type: text/plain; charset=UTF-8
   * Content-Length: 6
   *
   * ENTITY
   */
  val simpleResponse =
    ResponseRenderingContext(
      response = HttpResponse(
        200,
        headers = Nil,
        entity = HttpEntity("ENTITY")
      ),
      requestMethod = HttpMethods.GET
    )

  /**
   * HTTP/1.1 200 OK
   * Server: Akka HTTP 2.4.x
   * Date: Tue, 26 Jul 2016 15:26:53 GMT
   * Content-Type: application/json
   * Content-Length: 27
   *
   * {"message":"Hello, World!"}
   */
  val jsonResponse =
    ResponseRenderingContext(
      response = HttpResponse(
        200,
        headers = Nil,
        entity = HttpEntity(ContentTypes.`application/json`, """{"message":"Hello, World!"}""")
      ),
      requestMethod = HttpMethods.GET
    )

  /**
   * HTTP/1.1 200 OK
   * Server: Akka HTTP 2.4.x
   * Date: Tue, 26 Jul 2016 15:26:53 GMT
   * Content-Type: application/json
   * Content-Length: 315
   *
   * [{"id":4174,"randomNumber":331},{"id":51,"randomNumber":6544},{"id":4462,"randomNumber":952},{"id":2221,"randomNumber":532},{"id":9276,"randomNumber":3097},{"id":3056,"randomNumber":7293},{"id":6964,"randomNumber":620},{"id":675,"randomNumber":6601},{"id":8414,"randomNumber":6569},{"id":2753,"randomNumber":4065}]
   */
  val jsonLongRawResponse =
    ResponseRenderingContext(
      response = HttpResponse(
        200,
        headers = Nil,
        entity = HttpEntity(ContentTypes.`application/json`, """[{"id":4174,"randomNumber":331},{"id":51,"randomNumber":6544},{"id":4462,"randomNumber":952},{"id":2221,"randomNumber":532},{"id":9276,"randomNumber":3097},{"id":3056,"randomNumber":7293},{"id":6964,"randomNumber":620},{"id":675,"randomNumber":6601},{"id":8414,"randomNumber":6569},{"id":2753,"randomNumber":4065}]""")
      ),
      requestMethod = HttpMethods.GET
    )

  @Benchmark
  @Threads(8)
  @OperationsPerInvocation(100 * 1000)
  def simple_response(blackhole: Blackhole): Unit =
    renderToImpl(simpleResponse, blackhole, n = 100 * 1000).await()

  @Benchmark
  @OperationsPerInvocation(100 * 1000)
  def json_response(blackhole: Blackhole): Unit =
    renderToImpl(jsonResponse, blackhole, n = 100 * 1000).await()

  /*
  Difference between 27 and 315 bytes long JSON is:

  [info] Benchmark                                               Mode  Cnt          Score      Error  Units
  [info] HttpResponseRenderingBenchmark.json_long_raw_response  thrpt   20  1 932 331.049 ± 64125.621  ops/s
  [info] HttpResponseRenderingBenchmark.json_response           thrpt   20  1 973 232.941 ± 18568.314  ops/s
  */
  @Benchmark
  @OperationsPerInvocation(100 * 1000)
  def json_long_raw_response(blackhole: Blackhole): Unit =
    renderToImpl(jsonLongRawResponse, blackhole, n = 100 * 1000).await()

  class JitSafeLatch[A](blackhole: Blackhole, n: Int) extends GraphStageWithMaterializedValue[SinkShape[A], CountDownLatch] {
    val in = Inlet[A]("JitSafeLatch.in")
    override val shape = SinkShape(in)

    override def createLogicAndMaterializedValue(inheritedAttributes: Attributes): (GraphStageLogic, CountDownLatch) = {
      val latch = new CountDownLatch(n)
      val logic = new GraphStageLogic(shape) with InHandler {

        override def preStart(): Unit = pull(in)
        override def onPush(): Unit = {
          if (blackhole ne null) blackhole.consume(grab(in))
          latch.countDown()
          pull(in)
        }

        setHandler(in, this)
      }

      (logic, latch)
    }
  }

  def renderToImpl(ctx: ResponseRenderingContext, blackhole: Blackhole, n: Int)(implicit mat: Materializer): CountDownLatch = {
    val latch =
      (Source.repeat(ctx).take(n) ++ Source.maybe[ResponseRenderingContext]) // never send upstream completion
        .via(renderer.named("renderer"))
        .runWith(new JitSafeLatch[ResponseRenderingOutput](blackhole, n))

    latch
  }

  // TODO benchmark with stable override
  override def currentTimeMillis(): Long = System.currentTimeMillis()
  // override def currentTimeMillis(): Long = System.currentTimeMillis() // DateTime(2011, 8, 25, 9, 10, 29).clicks // provide a stable date for testing

}
@@ -0,0 +1,331 @@
/**
 * Copyright (C) 2014-2016 Lightbend Inc. <http://www.lightbend.com>
 */

package akka.stream

import java.util.concurrent.{ CountDownLatch, TimeUnit }

import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.impl.fusing.GraphStages
import akka.stream.scaladsl._
import akka.stream.stage._
import org.openjdk.jmh.annotations.{ OperationsPerInvocation, _ }

import scala.concurrent.Await
import scala.concurrent.duration._

object FusedGraphsBenchmark {
  val ElementCount = 100 * 1000

  @volatile var blackhole: org.openjdk.jmh.infra.Blackhole = _
}

// Just to avoid allocations and still have a way to do some work in stages. The value itself does not matter
// so no issues with sharing (the result does not make any sense, but hey)
class MutableElement(var value: Int)

class TestSource(elems: Array[MutableElement]) extends GraphStage[SourceShape[MutableElement]] {
  val out = Outlet[MutableElement]("TestSource.out")
  override val shape = SourceShape(out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic = new GraphStageLogic(shape) with OutHandler {
    private[this] var left = FusedGraphsBenchmark.ElementCount - 1

    override def onPull(): Unit = {
      if (left >= 0) {
        push(out, elems(left))
        left -= 1
      } else completeStage()
    }

    setHandler(out, this)
  }
}

class JitSafeCompletionLatch extends GraphStageWithMaterializedValue[SinkShape[MutableElement], CountDownLatch] {
  val in = Inlet[MutableElement]("JitSafeCompletionLatch.in")
  override val shape = SinkShape(in)

  override def createLogicAndMaterializedValue(inheritedAttributes: Attributes): (GraphStageLogic, CountDownLatch) = {
    val latch = new CountDownLatch(1)
    val logic = new GraphStageLogic(shape) with InHandler {
      private[this] var sum = 0

      override def preStart(): Unit = pull(in)
      override def onPush(): Unit = {
        sum += grab(in).value
        pull(in)
      }

      override def onUpstreamFinish(): Unit = {
        // Do not ignore work along the chain
        FusedGraphsBenchmark.blackhole.consume(sum)
        latch.countDown()
        completeStage()
      }

      setHandler(in, this)
    }

    (logic, latch)
  }
}

class IdentityStage extends GraphStage[FlowShape[MutableElement, MutableElement]] {
  val in = Inlet[MutableElement]("Identity.in")
  val out = Outlet[MutableElement]("Identity.out")
  override val shape = FlowShape(in, out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic = new GraphStageLogic(shape) with InHandler with OutHandler {
    override def onPush(): Unit = push(out, grab(in))
    override def onPull(): Unit = pull(in)

    setHandlers(in, out, this)
  }
}

@State(Scope.Benchmark)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@BenchmarkMode(Array(Mode.Throughput))
class FusedGraphsBenchmark {
  import FusedGraphsBenchmark._

  implicit val system = ActorSystem("test")
  var materializer: ActorMaterializer = _
  var testElements: Array[MutableElement] = _

  var singleIdentity: RunnableGraph[CountDownLatch] = _
  var chainOfIdentities: RunnableGraph[CountDownLatch] = _
  var singleMap: RunnableGraph[CountDownLatch] = _
  var chainOfMaps: RunnableGraph[CountDownLatch] = _
  var repeatTakeMapAndFold: RunnableGraph[CountDownLatch] = _
  var singleBuffer: RunnableGraph[CountDownLatch] = _
  var chainOfBuffers: RunnableGraph[CountDownLatch] = _
  var broadcastZip: RunnableGraph[CountDownLatch] = _
  var balanceMerge: RunnableGraph[CountDownLatch] = _
  var broadcastZipBalanceMerge: RunnableGraph[CountDownLatch] = _

  @Setup
  def setup(): Unit = {
    val settings = ActorMaterializerSettings(system)
      .withFuzzing(false)
      .withSyncProcessingLimit(Int.MaxValue)
      .withAutoFusing(false) // We fuse manually in this test in the setup

    materializer = ActorMaterializer(settings)
    testElements = Array.fill(ElementCount)(new MutableElement(0))
    val addFunc = (x: MutableElement) => { x.value += 1; x }

    val testSource = Source.fromGraph(new TestSource(testElements))
    val testSink = Sink.fromGraph(new JitSafeCompletionLatch)

    def fuse(r: RunnableGraph[CountDownLatch]): RunnableGraph[CountDownLatch] = {
      RunnableGraph.fromGraph(Fusing.aggressive(r))
    }

    val identityStage = new IdentityStage

    singleIdentity =
      fuse(
        testSource
          .via(identityStage)
          .toMat(testSink)(Keep.right)
      )

    chainOfIdentities =
      fuse(
        testSource
          .via(identityStage)
          .via(identityStage)
          .via(identityStage)
          .via(identityStage)
          .via(identityStage)
          .via(identityStage)
          .via(identityStage)
          .via(identityStage)
          .via(identityStage)
          .via(identityStage)
          .toMat(testSink)(Keep.right)
      )

    singleMap =
      fuse(
        testSource
          .map(addFunc)
          .toMat(testSink)(Keep.right)
      )

    chainOfMaps =
      fuse(
        testSource
          .map(addFunc)
          .map(addFunc)
          .map(addFunc)
          .map(addFunc)
          .map(addFunc)
          .map(addFunc)
          .map(addFunc)
          .map(addFunc)
          .map(addFunc)
          .map(addFunc)
          .toMat(testSink)(Keep.right)
      )

    repeatTakeMapAndFold =
      fuse(
        Source.repeat(new MutableElement(0))
          .take(ElementCount)
          .map(addFunc)
          .map(addFunc)
          .fold(new MutableElement(0))((acc, x) => { acc.value += x.value; acc })
          .toMat(testSink)(Keep.right)
      )

    singleBuffer =
      fuse(
        testSource
          .buffer(10, OverflowStrategy.backpressure)
          .toMat(testSink)(Keep.right)
      )

    chainOfBuffers =
      fuse(
        testSource
          .buffer(10, OverflowStrategy.backpressure)
          .buffer(10, OverflowStrategy.backpressure)
          .buffer(10, OverflowStrategy.backpressure)
          .buffer(10, OverflowStrategy.backpressure)
          .buffer(10, OverflowStrategy.backpressure)
          .buffer(10, OverflowStrategy.backpressure)
          .buffer(10, OverflowStrategy.backpressure)
          .buffer(10, OverflowStrategy.backpressure)
          .buffer(10, OverflowStrategy.backpressure)
          .buffer(10, OverflowStrategy.backpressure)
          .toMat(testSink)(Keep.right)
      )

    val broadcastZipFlow: Flow[MutableElement, MutableElement, NotUsed] = Flow.fromGraph(GraphDSL.create() { implicit b =>
      import GraphDSL.Implicits._

      val bcast = b.add(Broadcast[MutableElement](2))
      val zip = b.add(Zip[MutableElement, MutableElement]())

      bcast ~> zip.in0
      bcast ~> zip.in1

      FlowShape(bcast.in, zip.out.map(_._1).outlet)
    })

    val balanceMergeFlow: Flow[MutableElement, MutableElement, NotUsed] = Flow.fromGraph(GraphDSL.create() { implicit b =>
      import GraphDSL.Implicits._

      val balance = b.add(Balance[MutableElement](2))
      val merge = b.add(Merge[MutableElement](2))

      balance ~> merge
      balance ~> merge

      FlowShape(balance.in, merge.out)
    })

    broadcastZip =
      fuse(
        testSource
          .via(broadcastZipFlow)
          .toMat(testSink)(Keep.right)
      )

    balanceMerge =
      fuse(
        testSource
          .via(balanceMergeFlow)
          .toMat(testSink)(Keep.right)
      )

    broadcastZipBalanceMerge =
      fuse(
        testSource
          .via(broadcastZipFlow)
          .via(balanceMergeFlow)
          .toMat(testSink)(Keep.right)
      )
  }

  @Benchmark
  @OperationsPerInvocation(100 * 1000)
  def single_identity(blackhole: org.openjdk.jmh.infra.Blackhole): Unit = {
    FusedGraphsBenchmark.blackhole = blackhole
    singleIdentity.run()(materializer).await()
  }

  @Benchmark
  @OperationsPerInvocation(100 * 1000)
  def chain_of_identities(blackhole: org.openjdk.jmh.infra.Blackhole): Unit = {
    FusedGraphsBenchmark.blackhole = blackhole
    chainOfIdentities.run()(materializer).await()
  }

  @Benchmark
  @OperationsPerInvocation(100 * 1000)
  def single_map(blackhole: org.openjdk.jmh.infra.Blackhole): Unit = {
    FusedGraphsBenchmark.blackhole = blackhole
    singleMap.run()(materializer).await()
  }

  @Benchmark
  @OperationsPerInvocation(100 * 1000)
  def chain_of_maps(blackhole: org.openjdk.jmh.infra.Blackhole): Unit = {
    FusedGraphsBenchmark.blackhole = blackhole
    chainOfMaps.run()(materializer).await()
  }

  @Benchmark
  @OperationsPerInvocation(100 * 1000)
  def repeat_take_map_and_fold(blackhole: org.openjdk.jmh.infra.Blackhole): Unit = {
    FusedGraphsBenchmark.blackhole = blackhole
    repeatTakeMapAndFold.run()(materializer).await()
  }

  @Benchmark
  @OperationsPerInvocation(100 * 1000)
  def single_buffer(blackhole: org.openjdk.jmh.infra.Blackhole): Unit = {
    FusedGraphsBenchmark.blackhole = blackhole
    singleBuffer.run()(materializer).await()
  }

  @Benchmark
  @OperationsPerInvocation(100 * 1000)
  def chain_of_buffers(blackhole: org.openjdk.jmh.infra.Blackhole): Unit = {
    FusedGraphsBenchmark.blackhole = blackhole
    chainOfBuffers.run()(materializer).await()
  }

  @Benchmark
  @OperationsPerInvocation(100 * 1000)
  def broadcast_zip(blackhole: org.openjdk.jmh.infra.Blackhole): Unit = {
    FusedGraphsBenchmark.blackhole = blackhole
    broadcastZip.run()(materializer).await()
  }

  @Benchmark
  @OperationsPerInvocation(100 * 1000)
  def balance_merge(blackhole: org.openjdk.jmh.infra.Blackhole): Unit = {
    FusedGraphsBenchmark.blackhole = blackhole
    balanceMerge.run()(materializer).await()
  }

  @Benchmark
  @OperationsPerInvocation(100 * 1000)
  def broadcast_zip_balance_merge(blackhole: org.openjdk.jmh.infra.Blackhole): Unit = {
    FusedGraphsBenchmark.blackhole = blackhole
    broadcastZipBalanceMerge.run()(materializer).await()
  }

  @TearDown
  def shutdown(): Unit = {
    Await.result(system.terminate(), 5.seconds)
  }

}
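The benchmark disables auto-fusing and fuses each blueprint once in `@Setup`, so the measured runs pay only for execution. The same pre-fusing pattern applies outside benchmarks whenever one blueprint is materialized many times (API as in Akka Streams 2.4; a sketch, not a prescribed usage):

```scala
import akka.actor.ActorSystem
import akka.stream.{ ActorMaterializer, ActorMaterializerSettings, Fusing }
import akka.stream.scaladsl.{ Keep, RunnableGraph, Sink, Source }

object PreFusedSketch extends App {
  implicit val system = ActorSystem("prefused")
  // Turn off auto-fusing so the one-time manual fusing below is what actually runs
  implicit val mat = ActorMaterializer(ActorMaterializerSettings(system).withAutoFusing(false))

  val blueprint = Source(1 to 1000).map(_ + 1).toMat(Sink.fold(0)(_ + _))(Keep.right)
  // Fuse once; materialize many times without re-paying the fusing cost
  val fused = RunnableGraph.fromGraph(Fusing.aggressive(blueprint))

  val sum = fused.run() // Future[Int], can be run repeatedly
}
```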
@@ -0,0 +1,62 @@
/*
 * Copyright (C) 2009-2015 Typesafe Inc. <http://www.typesafe.com>
 */
package akka.stream

import java.util.concurrent.TimeUnit

import akka.stream.impl.JsonObjectParser
import akka.util.ByteString
import org.openjdk.jmh.annotations._

@State(Scope.Benchmark)
@OutputTimeUnit(TimeUnit.SECONDS)
@BenchmarkMode(Array(Mode.Throughput))
class JsonFramingBenchmark {

  /*
    Benchmark                                  Mode  Cnt      Score      Error  Units
    // old
    JsonFramingBenchmark.collecting_1         thrpt   20     81.476 ±   14.793  ops/s
    JsonFramingBenchmark.collecting_offer_5   thrpt   20     20.187 ±    2.291  ops/s

    // new
    JsonFramingBenchmark.counting_1           thrpt   20  10766.738 ± 1278.300  ops/s
    JsonFramingBenchmark.counting_offer_5     thrpt   20  28798.255 ± 2670.163  ops/s
   */

  val json =
    ByteString(
      """|{"fname":"Frank","name":"Smith","age":42,"id":1337,"boardMember":false},
         |{"fname":"Bob","name":"Smith","age":42,"id":1337,"boardMember":false},
         |{"fname":"Bob","name":"Smith","age":42,"id":1337,"boardMember":false},
         |{"fname":"Bob","name":"Smith","age":42,"id":1337,"boardMember":false},
         |{"fname":"Bob","name":"Smith","age":42,"id":1337,"boardMember":false},
         |{"fname":"Bob","name":"Smith","age":42,"id":1337,"boardMember":false},
         |{"fname":"Hank","name":"Smith","age":42,"id":1337,"boardMember":false}""".stripMargin
    )

  val bracket = new JsonObjectParser

  @Setup(Level.Invocation)
  def init(): Unit = {
    bracket.offer(json)
  }

  @Benchmark
  def counting_1: ByteString =
    bracket.poll().get

  @Benchmark
  @OperationsPerInvocation(5)
  def counting_offer_5: ByteString = {
    bracket.offer(json)
    bracket.poll().get
    bracket.poll().get
    bracket.poll().get
    bracket.poll().get
    bracket.poll().get
    bracket.poll().get
  }

}
@@ -0,0 +1,92 @@
/**
 * Copyright (C) 2014-2016 Lightbend Inc. <http://www.lightbend.com>
 */
package akka.util

import java.nio.ByteBuffer
import java.util.concurrent.TimeUnit

import akka.util.ByteString.{ ByteString1, ByteString1C, ByteStrings }
import org.openjdk.jmh.annotations._
import org.openjdk.jmh.infra.Blackhole

@State(Scope.Benchmark)
@Measurement(timeUnit = TimeUnit.MILLISECONDS)
class ByteString_copyToBuffer_Benchmark {

  val _bs_mini = ByteString(Array.ofDim[Byte](128 * 4))
  val _bs_small = ByteString(Array.ofDim[Byte](1024 * 1))
  val _bs_large = ByteString(Array.ofDim[Byte](1024 * 4))

  val bs_mini = ByteString(Array.ofDim[Byte](128 * 4 * 4))
  val bs_small = ByteString(Array.ofDim[Byte](1024 * 1 * 4))
  val bs_large = ByteString(Array.ofDim[Byte](1024 * 4 * 4))

  val bss_mini = ByteStrings(Vector.fill(4)(bs_mini.asInstanceOf[ByteString1C].toByteString1), 4 * bs_mini.length)
  val bss_small = ByteStrings(Vector.fill(4)(bs_small.asInstanceOf[ByteString1C].toByteString1), 4 * bs_small.length)
  val bss_large = ByteStrings(Vector.fill(4)(bs_large.asInstanceOf[ByteString1C].toByteString1), 4 * bs_large.length)
  val bss_pc_large = bss_large.compact

  val buf = ByteBuffer.allocate(1024 * 4 * 4)

  /*
  BEFORE

  [info] Benchmark                                       Mode  Cnt            Score          Error  Units
  [info] ByteStringBenchmark.bs_large_copyToBuffer      thrpt   40  142 163 289.866 ± 21751578.294  ops/s
  [info] ByteStringBenchmark.bss_large_copyToBuffer     thrpt   40    1 489 195.631 ±   209165.487  ops/s << that's the interesting case, we needlessly fold and allocate tons of Stream etc
  [info] ByteStringBenchmark.bss_large_pc_copyToBuffer  thrpt   40  184 466 756.364 ±  9169108.378  ops/s // "can't beat that"

  [info] ....[Thread state: RUNNABLE]........................................................................
  [info]  35.9%  35.9% scala.collection.Iterator$class.toStream
  [info]  20.2%  20.2% scala.collection.immutable.Stream.foldLeft
  [info]  11.6%  11.6% scala.collection.immutable.Stream$StreamBuilder.<init>
  [info]  10.9%  10.9% akka.util.ByteIterator.<init>
  [info]   6.1%   6.1% scala.collection.mutable.ListBuffer.<init>
  [info]   5.2%   5.2% akka.util.ByteString.copyToBuffer
  [info]   5.2%   5.2% scala.collection.AbstractTraversable.<init>
  [info]   2.2%   2.2% scala.collection.immutable.VectorIterator.initFrom
  [info]   1.2%   1.2% akka.util.generated.ByteStringBenchmark_bss_large_copyToBuffer.bss_large_copyToBuffer_thrpt_jmhStub
  [info]   0.3%   0.3% akka.util.ByteIterator$MultiByteArrayIterator.copyToBuffer
  [info]   1.2%   1.2% <other>

  AFTER specializing impls

  [info] ....[Thread state: RUNNABLE]........................................................................
  [info]  99.5%  99.6% akka.util.generated.ByteStringBenchmark_bss_large_copyToBuffer_jmhTest.bss_large_copyToBuffer_thrpt_jmhStub
  [info]   0.1%   0.1% java.util.concurrent.CountDownLatch.countDown
  [info]   0.1%   0.1% sun.reflect.NativeMethodAccessorImpl.invoke0
  [info]   0.1%   0.1% sun.misc.Unsafe.putObject
  [info]   0.1%   0.1% org.openjdk.jmh.infra.IterationParamsL2.getBatchSize
  [info]   0.1%   0.1% java.lang.Thread.currentThread
  [info]   0.1%   0.1% sun.misc.Unsafe.compareAndSwapInt
  [info]   0.1%   0.1% sun.reflect.AccessorGenerator.internalize

  [info] Benchmark                                       Mode  Cnt            Score         Error  Units
  [info] ByteStringBenchmark.bs_large_copyToBuffer      thrpt   40  177 328 585.473 ± 7742067.648  ops/s
  [info] ByteStringBenchmark.bss_large_copyToBuffer     thrpt   40  113 535 003.488 ± 3899763.124  ops/s // previous bad case now very good (was 2M/s)
  [info] ByteStringBenchmark.bss_large_pc_copyToBuffer  thrpt   40  203 590 896.493 ± 7582752.024  ops/s // "can't beat that"

  */

  @Benchmark
  def bs_large_copyToBuffer(): Int = {
    buf.flip()
    bs_large.copyToBuffer(buf)
  }

  @Benchmark
  def bss_large_copyToBuffer(): Int = {
    buf.flip()
    bss_large.copyToBuffer(buf)
  }

  /** Pre-compacted */
  @Benchmark
  def bss_large_pc_copyToBuffer(): Int = {
    buf.flip()
    bss_pc_large.copyToBuffer(buf)
  }
}
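`copyToBuffer` writes as many bytes as fit into the target buffer and reports how many were copied, so callers can detect partial writes; a minimal usage sketch:

```scala
import java.nio.ByteBuffer
import akka.util.ByteString

object CopyToBufferSketch extends App {
  val bytes = ByteString("0123456789")

  val big = ByteBuffer.allocate(32)
  assert(bytes.copyToBuffer(big) == 10)  // everything fit

  val small = ByteBuffer.allocate(4)
  assert(bytes.copyToBuffer(small) == 4) // only 4 bytes fit; the rest must be retried
}
```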
@@ -0,0 +1,64 @@
/**
 * Copyright (C) 2014-2016 Lightbend Inc. <http://www.lightbend.com>
 */
package akka.util

import java.nio.charset.Charset
import java.util.concurrent.TimeUnit

import akka.util.ByteString.{ ByteString1C, ByteStrings }
import org.openjdk.jmh.annotations._

@State(Scope.Benchmark)
@Measurement(timeUnit = TimeUnit.MILLISECONDS)
class ByteString_decode_Benchmark {

  val _bs_large = ByteString(Array.ofDim[Byte](1024 * 4))

  val bs_large = ByteString(Array.ofDim[Byte](1024 * 4 * 4))

  val bss_large = ByteStrings(Vector.fill(4)(bs_large.asInstanceOf[ByteString1C].toByteString1), 4 * bs_large.length)
  val bc_large = bss_large.compact // compacted

  val utf8String = "utf-8"
  val utf8 = Charset.forName(utf8String)

  /*
  Using Charset helps a bit, but nothing impressive:

  [info] ByteString_decode_Benchmark.bc_large_decodeString_stringCharset_utf8    thrpt  20  21 612.293 ±  825.099  ops/s
  =>
  [info] ByteString_decode_Benchmark.bc_large_decodeString_charsetCharset_utf8   thrpt  20  22 473.372 ±  851.597  ops/s

  [info] ByteString_decode_Benchmark.bs_large_decodeString_stringCharset_utf8    thrpt  20  84 443.674 ± 3723.987  ops/s
  =>
  [info] ByteString_decode_Benchmark.bs_large_decodeString_charsetCharset_utf8   thrpt  20  93 865.033 ± 2052.476  ops/s

  [info] ByteString_decode_Benchmark.bss_large_decodeString_stringCharset_utf8   thrpt  20  14 886.553 ±  326.752  ops/s
  =>
  [info] ByteString_decode_Benchmark.bss_large_decodeString_charsetCharset_utf8  thrpt  20  16 031.670 ±  474.565  ops/s
  */

  @Benchmark
  def bc_large_decodeString_stringCharset_utf8: String =
    bc_large.decodeString(utf8String)
  @Benchmark
  def bs_large_decodeString_stringCharset_utf8: String =
    bs_large.decodeString(utf8String)
  @Benchmark
  def bss_large_decodeString_stringCharset_utf8: String =
    bss_large.decodeString(utf8String)

  @Benchmark
  def bc_large_decodeString_charsetCharset_utf8: String =
    bc_large.decodeString(utf8)
  @Benchmark
  def bs_large_decodeString_charsetCharset_utf8: String =
    bs_large.decodeString(utf8)
  @Benchmark
  def bss_large_decodeString_charsetCharset_utf8: String =
    bss_large.decodeString(utf8)

}
@@ -0,0 +1,156 @@
/**
 * Copyright (C) 2014-2016 Lightbend Inc. <http://www.lightbend.com>
 */
package akka.util

import java.nio.ByteBuffer
import java.util.concurrent.TimeUnit

import akka.util.ByteString.{ ByteString1C, ByteStrings }
import org.openjdk.jmh.annotations._

@State(Scope.Benchmark)
@Measurement(timeUnit = TimeUnit.MILLISECONDS)
class ByteString_dropSliceTake_Benchmark {

  val _bs_mini = ByteString(Array.ofDim[Byte](128 * 4))
  val _bs_small = ByteString(Array.ofDim[Byte](1024 * 1))
  val _bs_large = ByteString(Array.ofDim[Byte](1024 * 4))

  val bs_mini = ByteString(Array.ofDim[Byte](128 * 4 * 4))
  val bs_small = ByteString(Array.ofDim[Byte](1024 * 1 * 4))
  val bs_large = ByteString(Array.ofDim[Byte](1024 * 4 * 4))

  val bss_mini = ByteStrings(Vector.fill(4)(bs_mini.asInstanceOf[ByteString1C].toByteString1), 4 * bs_mini.length)
  val bss_small = ByteStrings(Vector.fill(4)(bs_small.asInstanceOf[ByteString1C].toByteString1), 4 * bs_small.length)
  val bss_large = ByteStrings(Vector.fill(4)(bs_large.asInstanceOf[ByteString1C].toByteString1), 4 * bs_large.length)
  val bss_pc_large = bss_large.compact

  /*
  --------------------------------- BASELINE --------------------------------------------------------------------
  [info] Benchmark                                                    Mode  Cnt            Score         Error  Units
  [info] ByteString_dropSliceTake_Benchmark.bs_large_dropRight_100   thrpt   20  111 122 621.983 ± 6172679.160  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bs_large_dropRight_256   thrpt   20  110 238 003.870 ± 4042572.908  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bs_large_dropRight_2000  thrpt   20  106 435 449.123 ± 2972282.531  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_dropRight_100  thrpt   20    1 155 292.430 ±   23096.219  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_dropRight_256  thrpt   20    1 191 713.229 ±   15910.426  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_dropRight_2000 thrpt   20    1 201 342.579 ±   21119.392  ops/s

  [info] ByteString_dropSliceTake_Benchmark.bs_large_drop_100        thrpt   20  108 252 561.824 ± 3841392.346  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bs_large_drop_256        thrpt   20  112 515 936.237 ± 5651549.124  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bs_large_drop_2000       thrpt   20  110 851 553.706 ± 3327510.108  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_drop_18        thrpt   20      983 544.541 ±   46299.808  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_drop_100       thrpt   20      875 345.433 ±   44760.533  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_drop_256       thrpt   20      864 182.258 ±  111172.303  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_drop_2000      thrpt   20      997 459.151 ±   33627.993  ops/s

  [info] ByteString_dropSliceTake_Benchmark.bs_large_slice_80_80     thrpt   20  112 299 538.691 ± 7259114.294  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bs_large_slice_129_129   thrpt   20  105 640 836.625 ± 9112709.942  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_slice_80_80    thrpt   20   10 868 202.262 ±  526537.133  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_slice_129_129  thrpt   20    9 429 199.802 ± 1321542.453  ops/s

  --------------------------------- AFTER -----------------------------------------------------------------------

  ------ TODAY ------
  [info] Benchmark                                                    Mode  Cnt            Score         Error  Units
  [info] ByteString_dropSliceTake_Benchmark.bs_large_dropRight_100   thrpt   20  126 091 961.654 ± 2813125.268  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bs_large_dropRight_256   thrpt   20  118 393 394.350 ± 2934782.759  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bs_large_dropRight_2000  thrpt   20  119 183 386.004 ± 4445324.298  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_dropRight_100  thrpt   20    8 813 065.392 ±  234570.880  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_dropRight_256  thrpt   20    9 039 585.934 ±  297168.301  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_dropRight_2000 thrpt   20    9 629 458.168 ±  124846.904  ops/s

  [info] ByteString_dropSliceTake_Benchmark.bs_large_drop_100        thrpt   20  111 666 137.955 ± 4846727.674  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bs_large_drop_256        thrpt   20  114 405 514.622 ± 4985750.805  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bs_large_drop_2000       thrpt   20  114 364 716.297 ± 2512280.603  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_drop_18        thrpt   20   10 040 457.962 ±  527850.116  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_drop_100       thrpt   20    9 184 934.769 ±  549140.840  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_drop_256       thrpt   20   10 887 437.121 ±  195606.240  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_drop_2000      thrpt   20   10 725 300.292 ±  403470.413  ops/s

  [info] ByteString_dropSliceTake_Benchmark.bs_large_slice_80_80     thrpt   20  233 017 314.148 ± 7070246.826  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bs_large_slice_129_129   thrpt   20  275 245 086.247 ± 4969752.048  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_slice_80_80    thrpt   20  264 963 420.976 ± 4259289.143  ops/s
  [info] ByteString_dropSliceTake_Benchmark.bss_large_slice_129_129  thrpt   20  265 477 577.022 ± 4623974.283  ops/s

  */

  // 18 == "http://example.com", a typical url length

  @Benchmark
  def bs_large_drop_0: ByteString =
    bs_large.drop(0)
  @Benchmark
  def bss_large_drop_0: ByteString =
    bss_large.drop(0)

  @Benchmark
  def bs_large_drop_18: ByteString =
    bs_large.drop(18)
  @Benchmark
  def bss_large_drop_18: ByteString =
    bss_large.drop(18)

  @Benchmark
  def bs_large_drop_100: ByteString =
    bs_large.drop(100)
  @Benchmark
  def bss_large_drop_100: ByteString =
    bss_large.drop(100)

  @Benchmark
  def bs_large_drop_256: ByteString =
    bs_large.drop(256)
  @Benchmark
  def bss_large_drop_256: ByteString =
    bss_large.drop(256)

  @Benchmark
  def bs_large_drop_2000: ByteString =
    bs_large.drop(2000)
  @Benchmark
  def bss_large_drop_2000: ByteString =
    bss_large.drop(2000)

  /* these force 2 array drops, and 1 element drop inside the 2nd to first/last; can be considered as "bad case" */

  @Benchmark
  def bs_large_slice_129_129: ByteString =
    bs_large.slice(129, 129)
  @Benchmark
  def bss_large_slice_129_129: ByteString =
    bss_large.slice(129, 129)

  /* these only move the indexes, don't drop any arrays "happy case" */

  @Benchmark
  def bs_large_slice_80_80: ByteString =
    bs_large.slice(80, 80)
  @Benchmark
  def bss_large_slice_80_80: ByteString =
    bss_large.slice(80, 80)

  // drop right ---

  @Benchmark
  def bs_large_dropRight_100: ByteString =
    bs_large.dropRight(100)
  @Benchmark
  def bss_large_dropRight_100: ByteString =
    bss_large.dropRight(100)

  @Benchmark
  def bs_large_dropRight_256: ByteString =
    bs_large.dropRight(256)
  @Benchmark
  def bss_large_dropRight_256: ByteString =
    bss_large.dropRight(256)

  @Benchmark
  def bs_large_dropRight_2000: ByteString =
    bs_large.dropRight(2000)
  @Benchmark
  def bss_large_dropRight_2000: ByteString =
    bss_large.dropRight(2000)

}
@@ -123,13 +123,13 @@ class ConsumerIntegrationTest extends WordSpec with Matchers with NonSharedCamel
  }

  "Error passing consumer supports redelivery through route modification" in {
    val ref = start(new FailingOnceConsumer("direct:failing-once-concumer") {
    val ref = start(new FailingOnceConsumer("direct:failing-once-consumer") {
      override def onRouteDefinition = (rd: RouteDefinition) ⇒ {
        rd.onException(classOf[TestException]).maximumRedeliveries(1).end
        rd.onException(classOf[TestException]).redeliveryDelay(0L).maximumRedeliveries(1).end
      }
    }, name = "direct-failing-once-consumer")
    filterEvents(EventFilter[TestException](occurrences = 1)) {
      camel.sendTo("direct:failing-once-concumer", msg = "hello") should ===("accepted: hello")
      camel.sendTo("direct:failing-once-consumer", msg = "hello") should ===("accepted: hello")
    }
    stop(ref)
  }

@@ -94,6 +94,20 @@ akka.cluster.sharding {
   # works only for state-store-mode = "ddata"
   updating-state-timeout = 5 s
 
+  # The shard uses this strategy to determine how to recover the underlying entity actors. The strategy is only used
+  # by the persistent shard when rebalancing or restarting. The value can either be "all" or "constant". The "all"
+  # strategy starts all the underlying entity actors at the same time. The "constant" strategy will start the underlying
+  # entity actors at a fixed rate. The default strategy is "all".
+  entity-recovery-strategy = "all"
+
+  # Default settings for the constant rate entity recovery strategy
+  entity-recovery-constant-rate-strategy {
+    # Sets the frequency at which a batch of entity actors is started.
+    frequency = 100 ms
+    # Sets the number of entity actors to be restarted at a particular interval
+    number-of-entities = 5
+  }
+
   # Settings for the coordinator singleton. Same layout as akka.cluster.singleton.
   # The "role" of the singleton configuration is not used. The singleton role will
   # be the same as "akka.cluster.sharding.role".
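For illustration only (the values here are invented, not defaults): an application that wants the constant-rate behaviour would override just these keys on top of the reference configuration added above, for example in its `application.conf`:

```
akka.cluster.sharding {
  entity-recovery-strategy = "constant"
  entity-recovery-constant-rate-strategy {
    frequency = 250 ms
    number-of-entities = 10
  }
}
```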
@@ -258,7 +258,7 @@ class ClusterSharding(system: ExtendedActorSystem) extends Extension {
    * @param entityProps the `Props` of the entity actors that will be created by the `ShardRegion`
    * @param settings configuration settings, see [[ClusterShardingSettings]]
    * @param messageExtractor functions to extract the entity id, shard id, and the message to send to the
-   *   entity from the incoming message
+   *   entity from the incoming message, see [[ShardRegion.MessageExtractor]]
    * @param allocationStrategy possibility to use a custom shard allocation and
    *   rebalancing logic
    * @param handOffStopMessage the message that will be sent to entities when they are to be stopped

@@ -38,7 +38,10 @@ object ClusterShardingSettings {
       leastShardAllocationMaxSimultaneousRebalance =
         config.getInt("least-shard-allocation-strategy.max-simultaneous-rebalance"),
       waitingForStateTimeout = config.getDuration("waiting-for-state-timeout", MILLISECONDS).millis,
-      updatingStateTimeout = config.getDuration("updating-state-timeout", MILLISECONDS).millis)
+      updatingStateTimeout = config.getDuration("updating-state-timeout", MILLISECONDS).millis,
+      entityRecoveryStrategy = config.getString("entity-recovery-strategy"),
+      entityRecoveryConstantRateStrategyFrequency = config.getDuration("entity-recovery-constant-rate-strategy.frequency", MILLISECONDS).millis,
+      entityRecoveryConstantRateStrategyNumberOfEntities = config.getInt("entity-recovery-constant-rate-strategy.number-of-entities"))
 
     val coordinatorSingletonSettings = ClusterSingletonManagerSettings(config.getConfig("coordinator-singleton"))
@@ -71,19 +74,62 @@ object ClusterShardingSettings {
     if (role == "") None else Option(role)
 
   class TuningParameters(
-    val coordinatorFailureBackoff: FiniteDuration,
-    val retryInterval: FiniteDuration,
-    val bufferSize: Int,
-    val handOffTimeout: FiniteDuration,
-    val shardStartTimeout: FiniteDuration,
-    val shardFailureBackoff: FiniteDuration,
-    val entityRestartBackoff: FiniteDuration,
-    val rebalanceInterval: FiniteDuration,
-    val snapshotAfter: Int,
-    val leastShardAllocationRebalanceThreshold: Int,
-    val leastShardAllocationMaxSimultaneousRebalance: Int,
-    val waitingForStateTimeout: FiniteDuration,
-    val updatingStateTimeout: FiniteDuration)
+    val coordinatorFailureBackoff: FiniteDuration,
+    val retryInterval: FiniteDuration,
+    val bufferSize: Int,
+    val handOffTimeout: FiniteDuration,
+    val shardStartTimeout: FiniteDuration,
+    val shardFailureBackoff: FiniteDuration,
+    val entityRestartBackoff: FiniteDuration,
+    val rebalanceInterval: FiniteDuration,
+    val snapshotAfter: Int,
+    val leastShardAllocationRebalanceThreshold: Int,
+    val leastShardAllocationMaxSimultaneousRebalance: Int,
+    val waitingForStateTimeout: FiniteDuration,
+    val updatingStateTimeout: FiniteDuration,
+    val entityRecoveryStrategy: String,
+    val entityRecoveryConstantRateStrategyFrequency: FiniteDuration,
+    val entityRecoveryConstantRateStrategyNumberOfEntities: Int) {
+
+    require(
+      entityRecoveryStrategy == "all" || entityRecoveryStrategy == "constant",
+      s"Unknown 'entity-recovery-strategy' [$entityRecoveryStrategy], valid values are 'all' or 'constant'")
+
+    // included for binary compatibility
+    def this(
+      coordinatorFailureBackoff: FiniteDuration,
+      retryInterval: FiniteDuration,
+      bufferSize: Int,
+      handOffTimeout: FiniteDuration,
+      shardStartTimeout: FiniteDuration,
+      shardFailureBackoff: FiniteDuration,
+      entityRestartBackoff: FiniteDuration,
+      rebalanceInterval: FiniteDuration,
+      snapshotAfter: Int,
+      leastShardAllocationRebalanceThreshold: Int,
+      leastShardAllocationMaxSimultaneousRebalance: Int,
+      waitingForStateTimeout: FiniteDuration,
+      updatingStateTimeout: FiniteDuration) = {
+      this(
+        coordinatorFailureBackoff,
+        retryInterval,
+        bufferSize,
+        handOffTimeout,
+        shardStartTimeout,
+        shardFailureBackoff,
+        entityRestartBackoff,
+        rebalanceInterval,
+        snapshotAfter,
+        leastShardAllocationRebalanceThreshold,
+        leastShardAllocationMaxSimultaneousRebalance,
+        waitingForStateTimeout,
+        updatingStateTimeout,
+        "all",
+        100 milliseconds,
+        5
+      )
+    }
+  }
 }
 
 /**
@@ -6,16 +6,19 @@ package akka.cluster.sharding
 import java.net.URLEncoder
 import akka.actor.ActorLogging
 import akka.actor.ActorRef
+import akka.actor.ActorSystem
 import akka.actor.Deploy
 import akka.actor.Props
 import akka.actor.Terminated
-import akka.cluster.sharding.Shard.{ ShardCommand }
+import akka.cluster.sharding.Shard.ShardCommand
 import akka.persistence.PersistentActor
 import akka.persistence.SnapshotOffer
 import akka.actor.Actor
 import akka.persistence.RecoveryCompleted
 import akka.persistence.SaveSnapshotFailure
 import akka.persistence.SaveSnapshotSuccess
+import scala.concurrent.Future
+import scala.concurrent.duration.FiniteDuration
 
 /**
  * INTERNAL API

@@ -35,6 +38,12 @@ private[akka] object Shard {
    */
   final case class RestartEntity(entity: EntityId) extends ShardCommand
 
+  /**
+   * When initialising a shard with remember entities enabled the following message is used
+   * to restart batches of entity actors at a time.
+   */
+  final case class RestartEntities(entity: Set[EntityId]) extends ShardCommand
+
   /**
    * A case class which represents a state change for the Shard
    */

@@ -116,7 +125,7 @@ private[akka] class Shard(
 
   import ShardRegion.{ handOffStopperProps, EntityId, Msg, Passivate, ShardInitialized }
   import ShardCoordinator.Internal.{ HandOff, ShardStopped }
-  import Shard.{ State, RestartEntity, EntityStopped, EntityStarted }
+  import Shard.{ State, RestartEntity, RestartEntities, EntityStopped, EntityStarted }
   import Shard.{ ShardQuery, GetCurrentShardState, CurrentShardState, GetShardStats, ShardStats }
   import akka.cluster.sharding.ShardCoordinator.Internal.CoordinatorMessage
   import akka.cluster.sharding.ShardRegion.ShardRegionCommand

@@ -151,7 +160,8 @@ private[akka] class Shard(
   }
 
   def receiveShardCommand(msg: ShardCommand): Unit = msg match {
-    case RestartEntity(id) ⇒ getEntity(id)
+    case RestartEntity(id)    ⇒ getEntity(id)
+    case RestartEntities(ids) ⇒ ids foreach getEntity
   }
 
   def receiveShardRegionCommand(msg: ShardRegionCommand): Unit = msg match {

@@ -313,8 +323,19 @@ private[akka] class PersistentShard(
   with PersistentActor with ActorLogging {
 
   import ShardRegion.{ EntityId, Msg }
-  import Shard.{ State, RestartEntity, EntityStopped, EntityStarted }
+  import Shard.{ State, RestartEntity, RestartEntities, EntityStopped, EntityStarted }
   import settings.tuningParameters._
+  import akka.pattern.pipe
+
+  val rememberedEntitiesRecoveryStrategy: EntityRecoveryStrategy =
+    entityRecoveryStrategy match {
+      case "all" ⇒ EntityRecoveryStrategy.allStrategy()
+      case "constant" ⇒ EntityRecoveryStrategy.constantStrategy(
+        context.system,
+        entityRecoveryConstantRateStrategyFrequency,
+        entityRecoveryConstantRateStrategyNumberOfEntities
+      )
+    }
 
   override def persistenceId = s"/sharding/${typeName}Shard/${shardId}"

@@ -344,7 +365,7 @@ private[akka] class PersistentShard(
     case EntityStopped(id) ⇒ state = state.copy(state.entities - id)
     case SnapshotOffer(_, snapshot: State) ⇒ state = snapshot
     case RecoveryCompleted ⇒
-      state.entities foreach getEntity
+      restartRememberedEntities()
       super.initialized()
       log.debug("Shard recovery completed {}", shardId)
   }
@@ -388,4 +409,45 @@ private[akka] class PersistentShard(
     }
   }
+
+  private def restartRememberedEntities(): Unit = {
+    rememberedEntitiesRecoveryStrategy.recoverEntities(state.entities).foreach { scheduledRecovery ⇒
+      import context.dispatcher
+      scheduledRecovery.filter(_.nonEmpty).map(RestartEntities).pipeTo(self)
+    }
+  }
 }
+
+object EntityRecoveryStrategy {
+  def allStrategy(): EntityRecoveryStrategy = new AllAtOnceEntityRecoveryStrategy()
+
+  def constantStrategy(actorSystem: ActorSystem, frequency: FiniteDuration, numberOfEntities: Int): EntityRecoveryStrategy =
+    new ConstantRateEntityRecoveryStrategy(actorSystem, frequency, numberOfEntities)
+}
+
+trait EntityRecoveryStrategy {
+  import ShardRegion.EntityId
+  import scala.concurrent.Future
+
+  def recoverEntities(entities: Set[EntityId]): Set[Future[Set[EntityId]]]
+}
+
+final class AllAtOnceEntityRecoveryStrategy extends EntityRecoveryStrategy {
+  import ShardRegion.EntityId
+  override def recoverEntities(entities: Set[EntityId]): Set[Future[Set[EntityId]]] =
+    if (entities.isEmpty) Set.empty else Set(Future.successful(entities))
+}
+
+final class ConstantRateEntityRecoveryStrategy(actorSystem: ActorSystem, frequency: FiniteDuration, numberOfEntities: Int) extends EntityRecoveryStrategy {
+  import ShardRegion.EntityId
+  import akka.pattern.after
+  import actorSystem.dispatcher
+
+  override def recoverEntities(entities: Set[EntityId]): Set[Future[Set[EntityId]]] =
+    entities.grouped(numberOfEntities).foldLeft((frequency, Set[Future[Set[EntityId]]]())) {
+      case ((interval, scheduledEntityIds), entityIds) ⇒
+        (interval + frequency, scheduledEntityIds + scheduleEntities(interval, entityIds))
+    }._2
+
+  private def scheduleEntities(interval: FiniteDuration, entityIds: Set[EntityId]) =
+    after(interval, actorSystem.scheduler)(Future.successful[Set[EntityId]](entityIds))
+}
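To make the scheduling above concrete, here is a rough sketch of driving the constant-rate variant directly (the `system` value, the blocking `Await` and the literal ids are assumptions made for this illustration; the new specs further down exercise essentially the same thing):

```
import scala.concurrent.{ Await, Future }
import scala.concurrent.duration._

// batches of 2 entity ids, each batch scheduled one 500 ms step after the previous
val strategy = EntityRecoveryStrategy.constantStrategy(system, 500.millis, 2)
val scheduled: Set[Future[Set[String]]] = strategy.recoverEntities(Set("1", "2", "3", "4", "5"))
// three futures, completing roughly 500 ms, 1000 ms and 1500 ms after the call

import system.dispatcher
val allIds = Await.result(Future.sequence(scheduled), 3.seconds).flatten // all five ids
```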
@@ -153,7 +153,7 @@ object ShardCoordinator {
           case (_, v) ⇒ v.filterNot(s ⇒ rebalanceInProgress(s))
         }.maxBy(_.size)
         if (mostShards.size - leastShards.size >= rebalanceThreshold)
-          Future.successful(Set(mostShards.head))
+          Future.successful(mostShards.take(maxSimultaneousRebalance - rebalanceInProgress.size).toSet)
         else
           emptyRebalanceResult
       } else emptyRebalanceResult

@@ -367,8 +367,8 @@ object ShardCoordinator {
   }
 
   def stoppingShard: Receive = {
-    case ShardStopped(shard) ⇒ done(ok = true)
-    case ReceiveTimeout      ⇒ done(ok = false)
+    case ShardStopped(`shard`) ⇒ done(ok = true)
+    case ReceiveTimeout        ⇒ done(ok = false)
   }
 
   def done(ok: Boolean): Unit = {
@@ -413,8 +413,8 @@ class ShardRegion(
     case msg: CoordinatorMessage ⇒ receiveCoordinatorMessage(msg)
     case cmd: ShardRegionCommand ⇒ receiveCommand(cmd)
     case query: ShardRegionQuery ⇒ receiveQuery(query)
-    case msg if extractEntityId.isDefinedAt(msg) ⇒ deliverMessage(msg, sender())
     case msg: RestartShard ⇒ deliverMessage(msg, sender())
+    case msg if extractEntityId.isDefinedAt(msg) ⇒ deliverMessage(msg, sender())
   }
 
   def receiveClusterState(state: CurrentClusterState): Unit = {

@@ -454,7 +454,7 @@ class ShardRegion(
       regionByShard.get(shard) match {
         case Some(r) if r == self && ref != self ⇒
           // should not happen, inconsistency between ShardRegion and ShardCoordinator
-          throw new IllegalStateException(s"Unexpected change of shard [${shard}] from self to [${ref}]")
+          throw new IllegalStateException(s"Unexpected change of shard [$shard] from self to [$ref]")
         case _ ⇒
       }
       regionByShard = regionByShard.updated(shard, ref)

@@ -546,7 +546,7 @@ class ShardRegion(
   }
 
   def receiveTerminated(ref: ActorRef): Unit = {
-    if (coordinator.exists(_ == ref))
+    if (coordinator.contains(ref))
       coordinator = None
     else if (regions.contains(ref)) {
       val shards = regions(ref)

@@ -711,7 +711,7 @@ class ShardRegion(
       case Some(ref) ⇒
         log.debug("Forwarding request for shard [{}] to [{}]", shardId, ref)
         ref.tell(msg, snd)
-      case None if (shardId == null || shardId == "") ⇒
+      case None if shardId == null || shardId == "" ⇒
        log.warning("Shard must not be empty, dropping message [{}]", msg.getClass.getName)
        context.system.deadLetters ! msg
      case None ⇒
@@ -114,7 +114,11 @@ object ClusterShardingSpec {
 
 }
 
-abstract class ClusterShardingSpecConfig(val mode: String) extends MultiNodeConfig {
+abstract class ClusterShardingSpecConfig(
+  val mode: String,
+  val entityRecoveryStrategy: String = "all")
+  extends MultiNodeConfig {
 
   val controller = role("controller")
   val first = role("first")
   val second = role("second")

@@ -144,6 +148,11 @@ abstract class ClusterShardingSpecConfig(val mode: String) extends MultiNodeConf
       entity-restart-backoff = 1s
       rebalance-interval = 2 s
       state-store-mode = "$mode"
+      entity-recovery-strategy = "$entityRecoveryStrategy"
+      entity-recovery-constant-rate-strategy {
+        frequency = 1 ms
+        number-of-entities = 1
+      }
       least-shard-allocation-strategy {
         rebalance-threshold = 2
         max-simultaneous-rebalance = 1

@@ -177,9 +186,19 @@ object ClusterShardingDocCode {
 
 object PersistentClusterShardingSpecConfig extends ClusterShardingSpecConfig("persistence")
 object DDataClusterShardingSpecConfig extends ClusterShardingSpecConfig("ddata")
+object PersistentClusterShardingWithEntityRecoverySpecConfig extends ClusterShardingSpecConfig(
+  "persistence",
+  "all"
+)
+object DDataClusterShardingWithEntityRecoverySpecConfig extends ClusterShardingSpecConfig(
+  "ddata",
+  "constant"
+)
 
 class PersistentClusterShardingSpec extends ClusterShardingSpec(PersistentClusterShardingSpecConfig)
 class DDataClusterShardingSpec extends ClusterShardingSpec(DDataClusterShardingSpecConfig)
+class PersistentClusterShardingWithEntityRecoverySpec extends ClusterShardingSpec(PersistentClusterShardingWithEntityRecoverySpecConfig)
+class DDataClusterShardingWithEntityRecoverySpec extends ClusterShardingSpec(DDataClusterShardingWithEntityRecoverySpecConfig)
 
 class PersistentClusterShardingMultiJvmNode1 extends PersistentClusterShardingSpec
 class PersistentClusterShardingMultiJvmNode2 extends PersistentClusterShardingSpec

@@ -197,6 +216,22 @@ class DDataClusterShardingMultiJvmNode5 extends DDataClusterShardingSpec
 class DDataClusterShardingMultiJvmNode6 extends DDataClusterShardingSpec
 class DDataClusterShardingMultiJvmNode7 extends DDataClusterShardingSpec
 
+class PersistentClusterShardingWithEntityRecoveryMultiJvmNode1 extends PersistentClusterShardingSpec
+class PersistentClusterShardingWithEntityRecoveryMultiJvmNode2 extends PersistentClusterShardingSpec
+class PersistentClusterShardingWithEntityRecoveryMultiJvmNode3 extends PersistentClusterShardingSpec
+class PersistentClusterShardingWithEntityRecoveryMultiJvmNode4 extends PersistentClusterShardingSpec
+class PersistentClusterShardingWithEntityRecoveryMultiJvmNode5 extends PersistentClusterShardingSpec
+class PersistentClusterShardingWithEntityRecoveryMultiJvmNode6 extends PersistentClusterShardingSpec
+class PersistentClusterShardingWithEntityRecoveryMultiJvmNode7 extends PersistentClusterShardingSpec
+
+class DDataClusterShardingWithEntityRecoveryMultiJvmNode1 extends DDataClusterShardingSpec
+class DDataClusterShardingWithEntityRecoveryMultiJvmNode2 extends DDataClusterShardingSpec
+class DDataClusterShardingWithEntityRecoveryMultiJvmNode3 extends DDataClusterShardingSpec
+class DDataClusterShardingWithEntityRecoveryMultiJvmNode4 extends DDataClusterShardingSpec
+class DDataClusterShardingWithEntityRecoveryMultiJvmNode5 extends DDataClusterShardingSpec
+class DDataClusterShardingWithEntityRecoveryMultiJvmNode6 extends DDataClusterShardingSpec
+class DDataClusterShardingWithEntityRecoveryMultiJvmNode7 extends DDataClusterShardingSpec
+
 abstract class ClusterShardingSpec(config: ClusterShardingSpecConfig) extends MultiNodeSpec(config) with STMultiNodeSpec with ImplicitSender {
   import ClusterShardingSpec._
   import config._
@@ -0,0 +1,37 @@
package akka.cluster.sharding

import akka.cluster.sharding.ShardRegion.EntityId
import akka.testkit.AkkaSpec
import scala.concurrent.{ Await, Future }
import scala.concurrent.duration._
import scala.language.postfixOps

class AllAtOnceEntityRecoveryStrategySpec extends AkkaSpec {
  val strategy = EntityRecoveryStrategy.allStrategy()

  import system.dispatcher

  "AllAtOnceEntityRecoveryStrategy" must {
    "recover entities" in {
      val entities = Set[EntityId]("1", "2", "3", "4", "5")
      val startTime = System.currentTimeMillis()
      val resultWithTimes = strategy.recoverEntities(entities).map(
        _.map(entityIds ⇒ (entityIds, System.currentTimeMillis() - startTime))
      )

      val result = Await.result(Future.sequence(resultWithTimes), 4 seconds).toList.sortWith(_._2 < _._2)
      result.size should ===(1)

      val scheduledEntities = result.map(_._1)
      scheduledEntities.head should ===(entities)

      val times = result.map(_._2)
      times.head should ===(0L +- 20L)
    }

    "not recover when no entities to recover" in {
      val result = strategy.recoverEntities(Set[EntityId]())
      result.size should ===(0)
    }
  }
}
@@ -0,0 +1,45 @@
package akka.cluster.sharding

import akka.cluster.sharding.ShardRegion.EntityId
import akka.testkit.AkkaSpec

import scala.concurrent.{ Await, Future }
import scala.concurrent.duration._
import scala.language.postfixOps

class ConstantRateEntityRecoveryStrategySpec extends AkkaSpec {

  import system.dispatcher

  val strategy = EntityRecoveryStrategy.constantStrategy(system, 500 millis, 2)

  "ConstantRateEntityRecoveryStrategy" must {
    "recover entities" in {
      val entities = Set[EntityId]("1", "2", "3", "4", "5")
      val startTime = System.currentTimeMillis()
      val resultWithTimes = strategy.recoverEntities(entities).map(
        _.map(entityIds ⇒ (entityIds, System.currentTimeMillis() - startTime))
      )

      val result = Await.result(Future.sequence(resultWithTimes), 4 seconds).toList.sortWith(_._2 < _._2)
      result.size should ===(3)

      val scheduledEntities = result.map(_._1)
      scheduledEntities.head.size should ===(2)
      scheduledEntities(1).size should ===(2)
      scheduledEntities(2).size should ===(1)
      scheduledEntities.foldLeft(Set[EntityId]())(_ ++ _) should ===(entities)

      val times = result.map(_._2)

      times.head should ===(500L +- 30L)
      times(1) should ===(1000L +- 30L)
      times(2) should ===(1500L +- 30L)
    }

    "not recover when no entities to recover" in {
      val result = strategy.recoverEntities(Set[EntityId]())
      result.size should ===(0)
    }
  }
}
@@ -31,14 +31,23 @@ class LeastShardAllocationStrategySpec extends AkkaSpec {
       Await.result(allocationStrategy.rebalance(allocations, Set.empty), 3.seconds) should ===(Set.empty[String])
 
       val allocations2 = allocations.updated(regionB, Vector("shard2", "shard3", "shard4"))
-      Await.result(allocationStrategy.rebalance(allocations2, Set.empty), 3.seconds) should ===(Set("shard2"))
+      Await.result(allocationStrategy.rebalance(allocations2, Set.empty), 3.seconds) should ===(Set("shard2", "shard3"))
       Await.result(allocationStrategy.rebalance(allocations2, Set("shard4")), 3.seconds) should ===(Set.empty[String])
 
       val allocations3 = allocations2.updated(regionA, Vector("shard1", "shard5", "shard6"))
       Await.result(allocationStrategy.rebalance(allocations3, Set("shard1")), 3.seconds) should ===(Set("shard2"))
     }
 
-    "must limit number of simultanious rebalance" in {
+    "rebalance multiple shards if max simultaneous rebalances is not exceeded" in {
+      val allocations = Map(
+        regionA → Vector("shard1"),
+        regionB → Vector("shard2", "shard3", "shard4", "shard5", "shard6"),
+        regionC → Vector.empty)
+
+      Await.result(allocationStrategy.rebalance(allocations, Set.empty), 3.seconds) should ===(Set("shard2", "shard3"))
+      Await.result(allocationStrategy.rebalance(allocations, Set("shard2", "shard3")), 3.seconds) should ===(Set.empty[String])
+    }
+    "limit number of simultaneous rebalance" in {
       val allocations = Map(
         regionA → Vector("shard1"),
         regionB → Vector("shard2", "shard3", "shard4", "shard5", "shard6"), regionC → Vector.empty)
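Spelled out, the arithmetic behind the new expectations above (the threshold and max-simultaneous-rebalance values come from this spec's strategy setup, which is outside the hunk):

```
// difference = mostShards.size - leastShards.size = 5 - 0, which is >= rebalanceThreshold,
// so up to (maxSimultaneousRebalance - rebalanceInProgress.size) shards are picked; with max = 2:
Vector("shard2", "shard3", "shard4", "shard5", "shard6").take(2 - 0).toSet // Set(shard2, shard3)
```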
@@ -23,6 +23,7 @@ import akka.cluster.Member
 import akka.cluster.MemberStatus
 import akka.AkkaException
 import akka.actor.NoSerializationVerificationNeeded
+import akka.cluster.UniqueAddress
 
 object ClusterSingletonManagerSettings {
 

@@ -184,11 +185,11 @@ object ClusterSingletonManager {
   case object End extends State
 
   case object Uninitialized extends Data
-  final case class YoungerData(oldestOption: Option[Address]) extends Data
-  final case class BecomingOldestData(previousOldestOption: Option[Address]) extends Data
+  final case class YoungerData(oldestOption: Option[UniqueAddress]) extends Data
+  final case class BecomingOldestData(previousOldestOption: Option[UniqueAddress]) extends Data
   final case class OldestData(singleton: ActorRef, singletonTerminated: Boolean = false) extends Data
   final case class WasOldestData(singleton: ActorRef, singletonTerminated: Boolean,
-    newOldestOption: Option[Address]) extends Data
+    newOldestOption: Option[UniqueAddress]) extends Data
   final case class HandingOverData(singleton: ActorRef, handOverTo: Option[ActorRef]) extends Data
   case object EndData extends Data
   final case class DelayedMemberRemoved(member: Member)

@@ -205,9 +206,9 @@ object ClusterSingletonManager {
   /**
    * The first event, corresponding to CurrentClusterState.
    */
-  final case class InitialOldestState(oldest: Option[Address], safeToBeOldest: Boolean)
+  final case class InitialOldestState(oldest: Option[UniqueAddress], safeToBeOldest: Boolean)
 
-  final case class OldestChanged(oldest: Option[Address])
+  final case class OldestChanged(oldest: Option[UniqueAddress])
 }
 
 /**

@@ -245,14 +246,14 @@ object ClusterSingletonManager {
       block()
       val after = membersByAge.headOption
       if (before != after)
-        changes :+= OldestChanged(after.map(_.address))
+        changes :+= OldestChanged(after.map(_.uniqueAddress))
     }
 
     def handleInitial(state: CurrentClusterState): Unit = {
       membersByAge = immutable.SortedSet.empty(ageOrdering) union state.members.filter(m ⇒
         (m.status == MemberStatus.Up || m.status == MemberStatus.Leaving) && matchingRole(m))
       val safeToBeOldest = !state.members.exists { m ⇒ (m.status == MemberStatus.Down || m.status == MemberStatus.Exiting) }
-      val initial = InitialOldestState(membersByAge.headOption.map(_.address), safeToBeOldest)
+      val initial = InitialOldestState(membersByAge.headOption.map(_.uniqueAddress), safeToBeOldest)
       changes :+= initial
     }

@@ -376,7 +377,7 @@ class ClusterSingletonManager(
   import FSM.`→`
 
   val cluster = Cluster(context.system)
-  val selfAddressOption = Some(cluster.selfAddress)
+  val selfUniqueAddressOption = Some(cluster.selfUniqueAddress)
   import cluster.settings.LogInfo
 
   require(

@@ -406,13 +407,13 @@ class ClusterSingletonManager(
   var selfExited = false
 
   // keep track of previously removed members
-  var removed = Map.empty[Address, Deadline]
+  var removed = Map.empty[UniqueAddress, Deadline]
 
-  def addRemoved(address: Address): Unit =
-    removed += address → (Deadline.now + 15.minutes)
+  def addRemoved(node: UniqueAddress): Unit =
+    removed += node → (Deadline.now + 15.minutes)
 
   def cleanupOverdueNotMemberAnyMore(): Unit = {
-    removed = removed filter { case (address, deadline) ⇒ deadline.hasTimeLeft }
+    removed = removed filter { case (_, deadline) ⇒ deadline.hasTimeLeft }
   }
 
   def logInfo(message: String): Unit =

@@ -463,10 +464,10 @@ class ClusterSingletonManager(
 
     case Event(InitialOldestState(oldestOption, safeToBeOldest), _) ⇒
       oldestChangedReceived = true
-      if (oldestOption == selfAddressOption && safeToBeOldest)
+      if (oldestOption == selfUniqueAddressOption && safeToBeOldest)
         // oldest immediately
         gotoOldest()
-      else if (oldestOption == selfAddressOption)
+      else if (oldestOption == selfUniqueAddressOption)
         goto(BecomingOldest) using BecomingOldestData(None)
       else
         goto(Younger) using YoungerData(oldestOption)

@@ -475,22 +476,22 @@ class ClusterSingletonManager(
   when(Younger) {
     case Event(OldestChanged(oldestOption), YoungerData(previousOldestOption)) ⇒
       oldestChangedReceived = true
-      if (oldestOption == selfAddressOption) {
-        logInfo("Younger observed OldestChanged: [{} -> myself]", previousOldestOption)
+      if (oldestOption == selfUniqueAddressOption) {
+        logInfo("Younger observed OldestChanged: [{} -> myself]", previousOldestOption.map(_.address))
         previousOldestOption match {
           case None ⇒ gotoOldest()
           case Some(prev) if removed.contains(prev) ⇒ gotoOldest()
           case Some(prev) ⇒
-            peer(prev) ! HandOverToMe
+            peer(prev.address) ! HandOverToMe
             goto(BecomingOldest) using BecomingOldestData(previousOldestOption)
         }
       } else {
-        logInfo("Younger observed OldestChanged: [{} -> {}]", previousOldestOption, oldestOption)
+        logInfo("Younger observed OldestChanged: [{} -> {}]", previousOldestOption.map(_.address), oldestOption.map(_.address))
         getNextOldestChanged()
         stay using YoungerData(oldestOption)
       }
 
-    case Event(MemberRemoved(m, _), _) if m.address == cluster.selfAddress ⇒
+    case Event(MemberRemoved(m, _), _) if m.uniqueAddress == cluster.selfUniqueAddress ⇒
       logInfo("Self removed, stopping ClusterSingletonManager")
       stop()
@@ -498,11 +499,17 @@ class ClusterSingletonManager(
       scheduleDelayedMemberRemoved(m)
       stay
 
-    case Event(DelayedMemberRemoved(m), YoungerData(Some(previousOldest))) if m.address == previousOldest ⇒
+    case Event(DelayedMemberRemoved(m), YoungerData(Some(previousOldest))) if m.uniqueAddress == previousOldest ⇒
       logInfo("Previous oldest removed [{}]", m.address)
-      addRemoved(m.address)
+      addRemoved(m.uniqueAddress)
       // transition when OldestChanged
       stay using YoungerData(None)
+
+    case Event(HandOverToMe, _) ⇒
+      // this node was probably quickly restarted with same hostname:port,
+      // confirm that the old singleton instance has been stopped
+      sender() ! HandOverDone
+      stay
   }
 
   when(BecomingOldest) {

@@ -514,16 +521,16 @@ class ClusterSingletonManager(
       stay
 
     case Event(HandOverDone, BecomingOldestData(Some(previousOldest))) ⇒
-      if (sender().path.address == previousOldest)
+      if (sender().path.address == previousOldest.address)
         gotoOldest()
       else {
         logInfo(
           "Ignoring HandOverDone in BecomingOldest from [{}]. Expected previous oldest [{}]",
-          sender().path.address, previousOldest)
+          sender().path.address, previousOldest.address)
         stay
       }
 
-    case Event(MemberRemoved(m, _), _) if m.address == cluster.selfAddress ⇒
+    case Event(MemberRemoved(m, _), _) if m.uniqueAddress == cluster.selfUniqueAddress ⇒
       logInfo("Self removed, stopping ClusterSingletonManager")
       stop()

@@ -531,26 +538,39 @@ class ClusterSingletonManager(
       scheduleDelayedMemberRemoved(m)
       stay
 
-    case Event(DelayedMemberRemoved(m), BecomingOldestData(Some(previousOldest))) if m.address == previousOldest ⇒
-      logInfo("Previous oldest [{}] removed", previousOldest)
-      addRemoved(m.address)
+    case Event(DelayedMemberRemoved(m), BecomingOldestData(Some(previousOldest))) if m.uniqueAddress == previousOldest ⇒
+      logInfo("Previous oldest [{}] removed", previousOldest.address)
+      addRemoved(m.uniqueAddress)
       gotoOldest()
 
-    case Event(TakeOverFromMe, BecomingOldestData(None)) ⇒
-      sender() ! HandOverToMe
-      stay using BecomingOldestData(Some(sender().path.address))
-
-    case Event(TakeOverFromMe, BecomingOldestData(Some(previousOldest))) ⇒
-      if (previousOldest == sender().path.address) sender() ! HandOverToMe
-      else logInfo(
-        "Ignoring TakeOver request in BecomingOldest from [{}]. Expected previous oldest [{}]",
-        sender().path.address, previousOldest)
-      stay
+    case Event(TakeOverFromMe, BecomingOldestData(previousOldestOption)) ⇒
+      val senderAddress = sender().path.address
+      // it would have been better to include the UniqueAddress in the TakeOverFromMe message,
+      // but can't change due to backwards compatibility
+      cluster.state.members.collectFirst { case m if m.address == senderAddress ⇒ m.uniqueAddress } match {
+        case None ⇒
+          // from unknown node, ignore
+          logInfo(
+            "Ignoring TakeOver request from unknown node in BecomingOldest from [{}].", senderAddress)
+          stay
+        case Some(senderUniqueAddress) ⇒
+          previousOldestOption match {
+            case Some(previousOldest) ⇒
+              if (previousOldest == senderUniqueAddress) sender() ! HandOverToMe
+              else logInfo(
+                "Ignoring TakeOver request in BecomingOldest from [{}]. Expected previous oldest [{}]",
+                sender().path.address, previousOldest.address)
+              stay
+            case None ⇒
+              sender() ! HandOverToMe
+              stay using BecomingOldestData(Some(senderUniqueAddress))
+          }
+      }
 
     case Event(HandOverRetry(count), BecomingOldestData(previousOldestOption)) ⇒
       if (count <= maxHandOverRetries) {
-        logInfo("Retry [{}], sending HandOverToMe to [{}]", count, previousOldestOption)
-        previousOldestOption foreach { peer(_) ! HandOverToMe }
+        logInfo("Retry [{}], sending HandOverToMe to [{}]", count, previousOldestOption.map(_.address))
+        previousOldestOption.foreach(node ⇒ peer(node.address) ! HandOverToMe)
         setTimer(HandOverRetryTimer, HandOverRetry(count + 1), handOverRetryInterval, repeat = false)
         stay()
       } else if (previousOldestOption forall removed.contains) {
@@ -582,16 +602,19 @@ class ClusterSingletonManager(
   when(Oldest) {
     case Event(OldestChanged(oldestOption), OldestData(singleton, singletonTerminated)) ⇒
       oldestChangedReceived = true
-      logInfo("Oldest observed OldestChanged: [{} -> {}]", cluster.selfAddress, oldestOption)
+      logInfo("Oldest observed OldestChanged: [{} -> {}]", cluster.selfAddress, oldestOption.map(_.address))
       oldestOption match {
-        case Some(a) if a == cluster.selfAddress ⇒
+        case Some(a) if a == cluster.selfUniqueAddress ⇒
           // already oldest
           stay
+        case Some(a) if !selfExited && removed.contains(a) ⇒
+          // The member removal was not completed and the old removed node is considered
+          // oldest again. Safest is to terminate the singleton instance and goto Younger.
+          // This node will become oldest again when the other is removed again.
+          gotoHandingOver(singleton, singletonTerminated, None)
         case Some(a) ⇒
           // send TakeOver request in case the new oldest doesn't know previous oldest
-          peer(a) ! TakeOverFromMe
+          peer(a.address) ! TakeOverFromMe
           setTimer(TakeOverRetryTimer, TakeOverRetry(1), handOverRetryInterval, repeat = false)
           goto(WasOldest) using WasOldestData(singleton, singletonTerminated, newOldestOption = Some(a))
         case None ⇒

@@ -610,8 +633,8 @@ class ClusterSingletonManager(
   when(WasOldest) {
     case Event(TakeOverRetry(count), WasOldestData(_, _, newOldestOption)) ⇒
       if (count <= maxTakeOverRetries) {
-        logInfo("Retry [{}], sending TakeOverFromMe to [{}]", count, newOldestOption)
-        newOldestOption foreach { peer(_) ! TakeOverFromMe }
+        logInfo("Retry [{}], sending TakeOverFromMe to [{}]", count, newOldestOption.map(_.address))
+        newOldestOption.foreach(node ⇒ peer(node.address) ! TakeOverFromMe)
         setTimer(TakeOverRetryTimer, TakeOverRetry(count + 1), handOverRetryInterval, repeat = false)
         stay
       } else if (cluster.isTerminated)

@@ -622,12 +645,12 @@ class ClusterSingletonManager(
     case Event(HandOverToMe, WasOldestData(singleton, singletonTerminated, _)) ⇒
       gotoHandingOver(singleton, singletonTerminated, Some(sender()))
 
-    case Event(MemberRemoved(m, _), _) if m.address == cluster.selfAddress && !selfExited ⇒
+    case Event(MemberRemoved(m, _), _) if m.uniqueAddress == cluster.selfUniqueAddress && !selfExited ⇒
       logInfo("Self removed, stopping ClusterSingletonManager")
       stop()
 
-    case Event(MemberRemoved(m, _), WasOldestData(singleton, singletonTerminated, Some(newOldest))) if !selfExited && m.address == newOldest ⇒
-      addRemoved(m.address)
+    case Event(MemberRemoved(m, _), WasOldestData(singleton, singletonTerminated, Some(newOldest))) if !selfExited && m.uniqueAddress == newOldest ⇒
+      addRemoved(m.uniqueAddress)
       gotoHandingOver(singleton, singletonTerminated, None)
 
     case Event(Terminated(ref), d @ WasOldestData(singleton, _, _)) if ref == singleton ⇒

@@ -660,17 +683,17 @@ class ClusterSingletonManager(
     val newOldest = handOverTo.map(_.path.address)
     logInfo("Singleton terminated, hand-over done [{} -> {}]", cluster.selfAddress, newOldest)
     handOverTo foreach { _ ! HandOverDone }
-    if (removed.contains(cluster.selfAddress)) {
+    if (removed.contains(cluster.selfUniqueAddress)) {
       logInfo("Self removed, stopping ClusterSingletonManager")
       stop()
-    } else if (selfExited)
-      goto(End) using EndData
     } else if (handOverTo.isEmpty)
       goto(Younger) using YoungerData(None)
     else
-      goto(Younger) using YoungerData(newOldest)
+      goto(End) using EndData
   }
 
   when(End) {
-    case Event(MemberRemoved(m, _), _) if m.address == cluster.selfAddress ⇒
+    case Event(MemberRemoved(m, _), _) if m.uniqueAddress == cluster.selfUniqueAddress ⇒
       logInfo("Self removed, stopping ClusterSingletonManager")
       stop()
   }

@@ -678,21 +701,21 @@ class ClusterSingletonManager(
   whenUnhandled {
     case Event(_: CurrentClusterState, _) ⇒ stay
     case Event(MemberExited(m), _) ⇒
-      if (m.address == cluster.selfAddress) {
+      if (m.uniqueAddress == cluster.selfUniqueAddress) {
         selfExited = true
         logInfo("Exited [{}]", m.address)
       }
       stay
-    case Event(MemberRemoved(m, _), _) if m.address == cluster.selfAddress && !selfExited ⇒
+    case Event(MemberRemoved(m, _), _) if m.uniqueAddress == cluster.selfUniqueAddress && !selfExited ⇒
       logInfo("Self removed, stopping ClusterSingletonManager")
       stop()
     case Event(MemberRemoved(m, _), _) ⇒
       if (!selfExited) logInfo("Member removed [{}]", m.address)
-      addRemoved(m.address)
+      addRemoved(m.uniqueAddress)
       stay
     case Event(DelayedMemberRemoved(m), _) ⇒
       if (!selfExited) logInfo("Member removed [{}]", m.address)
-      addRemoved(m.address)
+      addRemoved(m.uniqueAddress)
       stay
     case Event(TakeOverFromMe, _) ⇒
       logInfo("Ignoring TakeOver request in [{}] from [{}].", stateName, sender().path.address)

@@ -720,7 +743,7 @@ class ClusterSingletonManager(
   }
 
   onTransition {
-    case _ → (Younger | End) if removed.contains(cluster.selfAddress) ⇒
+    case _ → (Younger | End) if removed.contains(cluster.selfUniqueAddress) ⇒
       logInfo("Self removed, stopping ClusterSingletonManager")
       // note that FSM.stop() can't be used in onTransition
       context.stop(self)
@@ -0,0 +1,109 @@
/**
 * Copyright (C) 2009-2016 Lightbend Inc. <http://www.lightbend.com>
 */
package akka.cluster.singleton

import scala.concurrent.duration._

import akka.actor.ActorSystem
import akka.actor.PoisonPill
import akka.cluster.Cluster
import akka.cluster.MemberStatus
import akka.testkit.AkkaSpec
import akka.testkit.TestActors
import akka.testkit.TestProbe
import com.typesafe.config.ConfigFactory

class ClusterSingletonRestartSpec extends AkkaSpec("""
  akka.loglevel = INFO
  akka.actor.provider = akka.cluster.ClusterActorRefProvider
  akka.remote {
    netty.tcp {
      hostname = "127.0.0.1"
      port = 0
    }
  }
  """) {

  val sys1 = ActorSystem(system.name, system.settings.config)
  val sys2 = ActorSystem(system.name, system.settings.config)
  var sys3: ActorSystem = null

  def join(from: ActorSystem, to: ActorSystem): Unit = {
    from.actorOf(
      ClusterSingletonManager.props(
        singletonProps = TestActors.echoActorProps,
        terminationMessage = PoisonPill,
        settings = ClusterSingletonManagerSettings(from)),
      name = "echo")

    within(10.seconds) {
      awaitAssert {
        Cluster(from) join Cluster(to).selfAddress
        Cluster(from).state.members.map(_.uniqueAddress) should contain(Cluster(from).selfUniqueAddress)
        Cluster(from).state.members.map(_.status) should ===(Set(MemberStatus.Up))
      }
    }
  }

  "Restarting cluster node with same hostname and port" must {
    "hand-over to next oldest" in {
      join(sys1, sys1)
      join(sys2, sys1)

      val proxy2 = sys2.actorOf(ClusterSingletonProxy.props("user/echo", ClusterSingletonProxySettings(sys2)), "proxy2")

      within(5.seconds) {
        awaitAssert {
          val probe = TestProbe()(sys2)
          proxy2.tell("hello", probe.ref)
          probe.expectMsg(1.second, "hello")
        }
      }

      shutdown(sys1)
      // it will be downed by the join attempts of the new incarnation

      sys3 = ActorSystem(
        system.name,
        ConfigFactory.parseString(s"akka.remote.netty.tcp.port=${Cluster(sys1).selfAddress.port.get}").withFallback(
          system.settings.config))
      join(sys3, sys2)

      within(5.seconds) {
        awaitAssert {
          val probe = TestProbe()(sys2)
          proxy2.tell("hello2", probe.ref)
          probe.expectMsg(1.second, "hello2")
        }
      }

      Cluster(sys2).leave(Cluster(sys2).selfAddress)

      within(10.seconds) {
        awaitAssert {
          Cluster(sys3).state.members.map(_.uniqueAddress) should ===(Set(Cluster(sys3).selfUniqueAddress))
        }
      }

      val proxy3 = sys3.actorOf(ClusterSingletonProxy.props("user/echo", ClusterSingletonProxySettings(sys3)), "proxy3")

      within(5.seconds) {
        awaitAssert {
          val probe = TestProbe()(sys3)
          proxy3.tell("hello3", probe.ref)
          probe.expectMsg(1.second, "hello3")
        }
      }
    }
  }

  override def afterTermination(): Unit = {
    shutdown(sys1)
    shutdown(sys2)
    if (sys3 != null)
      shutdown(sys3)
  }
}
@@ -85,25 +85,25 @@ private[cluster] object InternalClusterAction {
   * If a node is uninitialized it will reply to `InitJoin` with
   * `InitJoinNack`.
   */
-  case object JoinSeedNode
+  case object JoinSeedNode extends DeadLetterSuppression
 
  /**
   * see JoinSeedNode
   */
  @SerialVersionUID(1L)
-  case object InitJoin extends ClusterMessage
+  case object InitJoin extends ClusterMessage with DeadLetterSuppression
 
  /**
   * see JoinSeedNode
   */
  @SerialVersionUID(1L)
-  final case class InitJoinAck(address: Address) extends ClusterMessage
+  final case class InitJoinAck(address: Address) extends ClusterMessage with DeadLetterSuppression
 
  /**
   * see JoinSeedNode
   */
  @SerialVersionUID(1L)
-  final case class InitJoinNack(address: Address) extends ClusterMessage
+  final case class InitJoinNack(address: Address) extends ClusterMessage with DeadLetterSuppression
 
  /**
   * Marker interface for periodic tick messages
@@ -508,8 +508,15 @@ private[cluster] class ClusterCoreDaemon(publisher: ActorRef) extends Actor with
         // new node will retry join
         logInfo("New incarnation of existing member [{}] is trying to join. " +
           "Existing will be removed from the cluster and then new member will be allowed to join.", m)
-        if (m.status != Down)
+        if (m.status != Down) {
+          // we can confirm it as terminated/unreachable immediately
+          val newReachability = latestGossip.overview.reachability.terminated(selfUniqueAddress, m.uniqueAddress)
+          val newOverview = latestGossip.overview.copy(reachability = newReachability)
+          val newGossip = latestGossip.copy(overview = newOverview)
+          updateLatestGossip(newGossip)
+
           downing(m.address)
+        }
       case None ⇒
         // remove the node from the failure detector
         failureDetector.remove(node.address)

@@ -609,7 +616,7 @@ private[cluster] class ClusterCoreDaemon(publisher: ActorRef) extends Actor with
         publish(latestGossip)
       case Some(_) ⇒ // already down
       case None ⇒
-        logInfo("Ignoring down of unknown node [{}] as [{}]", address)
+        logInfo("Ignoring down of unknown node [{}]", address)
     }
 
   }
@@ -1259,10 +1266,10 @@ private[cluster] class OnMemberStatusChangedListener(callback: Runnable, status:
   import ClusterEvent._
   private val cluster = Cluster(context.system)
   private val to = status match {
-    case Up ⇒
-      classOf[MemberUp]
-    case Removed ⇒
-      classOf[MemberRemoved]
+    case Up      ⇒ classOf[MemberUp]
+    case Removed ⇒ classOf[MemberRemoved]
+    case other ⇒ throw new IllegalArgumentException(
+      s"Expected Up or Removed in OnMemberStatusChangedListener, got [$other]")
   }
 
   override def preStart(): Unit =
@@ -74,12 +74,15 @@ abstract class NodeChurnSpec
     }
   }
 
-  def awaitRemoved(additionaSystems: Vector[ActorSystem]): Unit = {
+  def awaitRemoved(additionaSystems: Vector[ActorSystem], round: Int): Unit = {
     awaitMembersUp(roles.size, timeout = 40.seconds)
-    within(20.seconds) {
+    enterBarrier("removed-" + round)
+    within(3.seconds) {
       awaitAssert {
         additionaSystems.foreach { s ⇒
-          Cluster(s).isTerminated should be(true)
+          withClue(s"Cluster(s).self:") {
+            Cluster(s).isTerminated should be(true)
+          }
         }
       }
     }

@@ -113,7 +116,7 @@ abstract class NodeChurnSpec
       else
         Cluster(node).leave(Cluster(node).selfAddress)
     }
-    awaitRemoved(systems)
+    awaitRemoved(systems, n)
     enterBarrier("members-removed-" + n)
     systems.foreach(_.terminate().await)
     log.info("end of round-" + n)
@@ -330,9 +330,19 @@ abstract class SurviveNetworkInstabilitySpec
       runOn(side1AfterJoin: _*) {
         // side2 removed
         val expected = (side1AfterJoin map address).toSet
-        awaitAssert(clusterView.members.map(_.address) should ===(expected))
-        awaitAssert(clusterView.members.collectFirst { case m if m.address == address(eighth) ⇒ m.status } should ===(
-          Some(MemberStatus.Up)))
+        awaitAssert {
+          // repeat the downing in case it was not successful, which may
+          // happen if the removal was reverted due to gossip merge, see issue #18767
+          runOn(fourth) {
+            for (role2 ← side2) {
+              cluster.down(role2)
+            }
+          }
+
+          clusterView.members.map(_.address) should ===(expected)
+          clusterView.members.collectFirst { case m if m.address == address(eighth) ⇒ m.status } should ===(
+            Some(MemberStatus.Up))
+        }
       }
 
       enterBarrier("side2-removed")
@@ -326,7 +326,7 @@ class PersistentReceivePipelineSpec(config: Config) extends AkkaSpec(config) wit
       totaller ! 6
       totaller ! "get"
       expectMsg(6)
-      probe.expectMsg(99)
+      probe.expectMsg(99L)
     }
   }
 }
@@ -276,14 +276,14 @@ object Replicator {
   final case class Subscribe[A <: ReplicatedData](key: Key[A], subscriber: ActorRef) extends ReplicatorMessage
   /**
    * Unregister a subscriber.
    *
    * @see [[Replicator.Subscribe]]
    */
   final case class Unsubscribe[A <: ReplicatedData](key: Key[A], subscriber: ActorRef) extends ReplicatorMessage
   /**
    * The data value is retrieved with [[#get]] using the typed key.
    *
    * @see [[Replicator.Subscribe]]
    */
   final case class Changed[A <: ReplicatedData](key: Key[A])(data: A) extends ReplicatorMessage {
     /**
@@ -42,7 +42,7 @@ class ReplicatorPruningSpec extends MultiNodeSpec(ReplicatorPruningSpec) with ST
   val replicator = system.actorOf(Replicator.props(
     ReplicatorSettings(system).withGossipInterval(1.second)
       .withPruning(pruningInterval = 1.second, maxPruningDissemination)), "replicator")
-  val timeout = 2.seconds.dilated
+  val timeout = 3.seconds.dilated
 
   val KeyA = GCounterKey("A")
   val KeyB = ORSetKey[String]("B")

@@ -41,7 +41,7 @@ class ReplicatorSpec extends MultiNodeSpec(ReplicatorSpec) with STMultiNodeSpec
   implicit val cluster = Cluster(system)
   val replicator = system.actorOf(Replicator.props(
     ReplicatorSettings(system).withGossipInterval(1.second).withMaxDeltaElements(10)), "replicator")
-  val timeout = 2.seconds.dilated
+  val timeout = 3.seconds.dilated
   val writeTwo = WriteTo(2, timeout)
   val writeMajority = WriteMajority(timeout)
   val writeAll = WriteAll(timeout)
@@ -112,7 +112,7 @@ class ReplicatorSpec extends MultiNodeSpec(ReplicatorSpec) with STMultiNodeSpec
       val c4 = c3 + 1
       // too strong consistency level
       replicator ! Update(KeyA, GCounter(), writeTwo)(_ + 1)
-      expectMsg(UpdateTimeout(KeyA, None))
+      expectMsg(timeout + 1.second, UpdateTimeout(KeyA, None))
       replicator ! Get(KeyA, ReadLocal)
       expectMsg(GetSuccess(KeyA, None)(c4)).dataValue should be(c4)
       changedProbe.expectMsg(Changed(KeyA)(c4)).dataValue should be(c4)

@@ -347,9 +347,9 @@ class ReplicatorSpec extends MultiNodeSpec(ReplicatorSpec) with STMultiNodeSpec
       val c40 = expectMsgPF() { case g @ GetSuccess(KeyD, _) ⇒ g.get(KeyD) }
       c40.value should be(40)
       replicator ! Update(KeyD, GCounter() + 1, writeTwo)(_ + 1)
-      expectMsg(UpdateTimeout(KeyD, None))
+      expectMsg(timeout + 1.second, UpdateTimeout(KeyD, None))
       replicator ! Update(KeyD, GCounter(), writeTwo)(_ + 1)
-      expectMsg(UpdateTimeout(KeyD, None))
+      expectMsg(timeout + 1.second, UpdateTimeout(KeyD, None))
     }
     runOn(first) {
       for (n ← 1 to 30) {

@@ -466,7 +466,7 @@ class ReplicatorSpec extends MultiNodeSpec(ReplicatorSpec) with STMultiNodeSpec
 
     runOn(first, second) {
       replicator ! Get(KeyE2, readAll, Some(998))
-      expectMsg(GetFailure(KeyE2, Some(998)))
+      expectMsg(timeout + 1.second, GetFailure(KeyE2, Some(998)))
       replicator ! Get(KeyE2, ReadLocal)
       expectMsg(NotFound(KeyE2, None))
     }
@@ -86,7 +86,7 @@
 </div>
 <ul class="nav">
   <li><a href="http://akka.io/docs">Documentation</a></li>
-  <li><a href="http://akka.io/faq">FAQ</a></li>
+  <li><a href="http://doc.akka.io/docs/akka/current/additional/faq.html">FAQ</a></li>
   <li><a href="http://akka.io/downloads">Download</a></li>
   <li><a href="http://groups.google.com/group/akka-user">Mailing List</a></li>
   <li><a href="http://github.com/akka/akka">Code</a></li>

@@ -158,7 +158,7 @@
 <ul>
   <li><h5>Akka</h5></li>
   <li><a href="http://akka.io/docs">Documentation</a></li>
-  <li><a href="http://akka.io/faq">FAQ</a></li>
+  <li><a href="http://doc.akka.io/docs/akka/current/additional/faq.html">FAQ</a></li>
   <li><a href="http://akka.io/downloads">Downloads</a></li>
   <li><a href="http://akka.io/news">News</a></li>
   <li><a href="http://letitcrash.com">Blog</a></li>
@@ -51,27 +51,16 @@ public class DangerousJavaActor extends UntypedActor {
     if (message instanceof String) {
       String m = (String) message;
       if ("is my middle name".equals(m)) {
-        pipe(breaker.callWithCircuitBreaker(
-          new Callable<Future<String>>() {
-            public Future<String> call() throws Exception {
-              return future(
-                new Callable<String>() {
-                  public String call() {
-                    return dangerousCall();
-                  }
-                }, getContext().dispatcher());
-            }
-          }), getContext().dispatcher()).to(getSender());
+        pipe(
+          breaker.callWithCircuitBreaker(() ->
+            future(() -> dangerousCall(), getContext().dispatcher())
+          ), getContext().dispatcher()
+        ).to(getSender());
       }
       if ("block for me".equals(m)) {
         getSender().tell(breaker
           .callWithSyncCircuitBreaker(
-            new Callable<String>() {
-              @Override
-              public String call() throws Exception {
-                return dangerousCall();
-              }
-            }), getSelf());
+            () -> dangerousCall()), getSelf());
       }
     }
   }
@@ -107,8 +107,8 @@ actors in the hierarchy from the root up. Examples are::
   "akka://my-sys/user/service-a/worker1"                   // purely local
   "akka.tcp://my-sys@host.example.com:5678/user/service-b" // remote
 
-Here, ``akka.tcp`` is the default remote transport for the 2.2 release; other transports
-are pluggable. A remote host using UDP would be accessible by using ``akka.udp``.
+Here, ``akka.tcp`` is the default remote transport for the 2.4 release; other transports
+are pluggable.
 The interpretation of the host and port part (i.e. ``host.example.com:5678`` in the example)
 depends on the transport mechanism used, but it must abide by the URI structural rules.
@@ -46,6 +46,12 @@ class ConfigDocSpec extends WordSpec with Matchers {
       "/actorC/*" {
         dispatcher = my-dispatcher
       }
 
+      # all descendants of '/user/actorC' (direct children, and their children recursively)
+      # have a dedicated dispatcher
+      "/actorC/**" {
+        dispatcher = my-dispatcher
+      }
+
       # '/user/actorD/actorE' has a special priority mailbox
       /actorD/actorE {
@@ -413,10 +413,18 @@ topics. An example may look like this:
 
 You can use asterisks as wildcard matches for the actor path sections, so you could specify:
 ``/*/sampleActor`` and that would match all ``sampleActor`` on that level in the hierarchy.
-You can also use wildcard in the last position to match all actors at a certain level:
-``/someParent/*``. Non-wildcard matches always have higher priority to match than wildcards, so:
-``/foo/bar`` is considered **more specific** than ``/foo/*`` and only the highest priority match is used.
-Please note that it **cannot** be used to partially match section, like this: ``/foo*/bar``, ``/f*o/bar`` etc.
+In addition, please note:
+
+- you can also use wildcards in the last position to match all actors at a certain level: ``/someParent/*``
+- you can use double-wildcards in the last position to match all child actors and their children
+  recursively: ``/someParent/**``
+- non-wildcard matches always have higher priority to match than wildcards, and single wildcard matches
+  have higher priority than double-wildcards, so: ``/foo/bar`` is considered **more specific** than
+  ``/foo/*``, which is considered **more specific** than ``/foo/**``. Only the highest priority match is used
+- wildcards **cannot** be used to partially match section, like this: ``/foo*/bar``, ``/f*o/bar`` etc.
 
 .. note::
   Double-wildcards can only be placed in the last position.
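For example, the precedence rules combine like this in a deployment section (the paths and
dispatcher names below are hypothetical, chosen only to illustrate the matching order)::

  akka.actor.deployment {
    /parent/child {       # exact path: highest priority
      dispatcher = exact-dispatcher
    }
    "/parent/*" {         # direct children of /user/parent
      dispatcher = single-wildcard-dispatcher
    }
    "/parent/**" {        # all descendants: lowest priority
      dispatcher = double-wildcard-dispatcher
    }
  }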

Listing of the Reference Configuration
--------------------------------------
@@ -37,7 +37,7 @@ Akka is very modular and consists of several JARs containing different features.
 
 - ``akka-cluster`` – Cluster membership management, elastic routers.
 
-- ``akka-osgi`` – utilities for using Akka in OSGi containers
+- ``akka-osgi`` – Utilities for using Akka in OSGi containers
 
 - ``akka-osgi-aries`` – Aries blueprint for provisioning actor systems
 

@@ -45,6 +45,8 @@ Akka is very modular and consists of several JARs containing different features.
 
 - ``akka-slf4j`` – SLF4J Logger (event bus listener)
 
+- ``akka-stream`` – Reactive stream processing
+
 - ``akka-testkit`` – Toolkit for testing Actor systems
 
 In addition to these stable modules there are several which are on their way

@@ -212,12 +214,12 @@ For snapshot versions, the snapshot repository needs to be added as well:
 Using Akka with Eclipse
 -----------------------
 
-Setup SBT project and then use `sbteclipse <https://github.com/typesafehub/sbteclipse>`_ to generate a Eclipse project.
+Setup SBT project and then use `sbteclipse <https://github.com/typesafehub/sbteclipse>`_ to generate an Eclipse project.
 
 Using Akka with IntelliJ IDEA
 -----------------------------
 
-Setup SBT project and then use `sbt-idea <https://github.com/mpeltonen/sbt-idea>`_ to generate a IntelliJ IDEA project.
+Setup SBT project and then use `sbt-idea <https://github.com/mpeltonen/sbt-idea>`_ to generate an IntelliJ IDEA project.
 
 Using Akka with NetBeans
 ------------------------
@ -60,7 +60,7 @@ and :ref:`Scala <cluster_usage_scala>` documentation chapters.
|
|||
Persistence
|
||||
-----------
|
||||
|
||||
State changes experience by an actor can optionally be persisted and replayed when the actor is started or
|
||||
State changes experienced by an actor can optionally be persisted and replayed when the actor is started or
|
||||
restarted. This allows actors to recover their state, even after JVM crashes or when being migrated
|
||||
to another node.
|
||||
|
||||
|
|
|
|||
|
|
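A minimal sketch of the idea (assuming the Java persistence API of this release; the persistence id,
event and command below are made up)::

    import akka.persistence.UntypedPersistentActor;

    public class CounterActor extends UntypedPersistentActor {
      private int count = 0;

      @Override
      public String persistenceId() { return "counter-1"; }

      @Override
      public void onReceiveRecover(Object event) {
        // Replayed events rebuild the state after a restart or migration.
        if (event instanceof Integer) count += (Integer) event;
      }

      @Override
      public void onReceiveCommand(Object cmd) {
        if ("increment".equals(cmd)) {
          // Persist the event first, then apply it to the in-memory state.
          persist(1, event -> count += event);
        }
      }
    }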
@@ -19,7 +19,7 @@ but also in the size of applications it is useful for. The core of Akka, akka-actor,

is very small and easily dropped into an existing project where you need
asynchronicity and lockless concurrency without hassle.

You can choose to include only the parts of akka you need in your application.
You can choose to include only the parts of Akka you need in your application.
With CPUs growing more and more cores every cycle, Akka is the alternative that
provides outstanding performance even if you're only running it on one machine.
Akka also supplies a wide array of concurrency-paradigms, allowing users to choose

@@ -281,7 +281,7 @@ Actors may also use a Camel `ProducerTemplate`_ for producing messages to endpoints.

.. includecode:: code/docs/camel/MyActor.java#ProducerTemplate

For initiating a a two-way message exchange, one of the
For initiating a two-way message exchange, one of the
``ProducerTemplate.request*`` methods must be used.

.. includecode:: code/docs/camel/RequestBodyActor.java#RequestProducerTemplate

@@ -14,7 +14,7 @@ Cluster metrics information is primarily used for load-balancing routers,

and can also be used to implement advanced metrics-based node life cycles,
such as "Node Let-it-crash" when CPU steal time becomes excessive.

Cluster Metrics Extension is a separate akka module delivered in ``akka-cluster-metrics`` jar.
Cluster Metrics Extension is a separate Akka module delivered in ``akka-cluster-metrics`` jar.

To enable usage of the extension you need to add the following dependency to your project:
::

@@ -270,8 +270,8 @@ Note that stopped entities will be started again when a new message is targeted

Graceful Shutdown
-----------------

You can send the message ``ClusterSharding.GracefulShutdown`` message (``ClusterSharding.gracefulShutdownInstance``
in Java) to the ``ShardRegion`` actor to handoff all shards that are hosted by that ``ShardRegion`` and then the
You can send the ``ShardRegion.gracefulShutdownInstance`` message
to the ``ShardRegion`` actor to handoff all shards that are hosted by that ``ShardRegion`` and then the
``ShardRegion`` actor will be stopped. You can ``watch`` the ``ShardRegion`` actor to know when it is completed.
During this period other regions will buffer messages for those shards in the same way as when a rebalance is
triggered by the coordinator. When the shards have been stopped the coordinator will allocate these shards elsewhere.
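A minimal sketch of this flow (assuming a shard region named ``"Counter"`` was already started via
cluster sharding, and that this code runs inside an actor so the region can be watched)::

    import akka.actor.ActorRef;
    import akka.cluster.sharding.ClusterSharding;
    import akka.cluster.sharding.ShardRegion;

    ActorRef region = ClusterSharding.get(system).shardRegion("Counter");
    // Ask the region to hand off all of its shards and then stop itself.
    region.tell(ShardRegion.gracefulShutdownInstance(), ActorRef.noSender());
    // A Terminated message for the region signals that hand-off completed.
    getContext().watch(region);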
@@ -733,3 +733,13 @@ For this purpose you can define a separate dispatcher to be used for the cluster:

      parallelism-max = 4
    }
  }

.. note::
  Normally it should not be necessary to configure a separate dispatcher for the Cluster.
  The default-dispatcher should be sufficient for performing the Cluster tasks, i.e. ``akka.cluster.use-dispatcher``
  should not be changed. If you have Cluster related problems when using the default-dispatcher that is typically an
  indication that you are running blocking or CPU intensive actors/tasks on the default-dispatcher.
  Use dedicated dispatchers for such actors/tasks instead of running them on the default-dispatcher,
  because that may starve system internal tasks.
  Related config properties: ``akka.cluster.use-dispatcher = akka.cluster.cluster-dispatcher``.
  Corresponding default values: ``akka.cluster.use-dispatcher =``.
@@ -10,9 +10,9 @@ import org.junit.Test;

import akka.http.javadsl.model.FormData;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.server.StringUnmarshallers;
import akka.http.javadsl.server.StringUnmarshaller;
import akka.http.javadsl.server.Unmarshaller;
import akka.http.javadsl.unmarshalling.StringUnmarshallers;
import akka.http.javadsl.unmarshalling.StringUnmarshaller;
import akka.http.javadsl.unmarshalling.Unmarshaller;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.japi.Pair;

@@ -72,4 +72,4 @@ public class FormFieldRequestValsExampleTest extends JUnitRouteTest {

  }


}
}

@@ -15,7 +15,7 @@ import akka.http.javadsl.marshallers.jackson.Jackson;

import akka.http.javadsl.model.*;
import akka.http.javadsl.model.headers.Connection;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.server.Unmarshaller;
import akka.http.javadsl.unmarshalling.Unmarshaller;
import akka.japi.function.Function;
import akka.stream.ActorMaterializer;
import akka.stream.IOResult;
@@ -0,0 +1,179 @@
/*
 * Copyright (C) 2016 Lightbend Inc. <http://www.lightbend.com>
 */

package docs.http.javadsl.server;

import akka.NotUsed;
import akka.http.javadsl.common.CsvEntityStreamingSupport;
import akka.http.javadsl.common.JsonEntityStreamingSupport;
import akka.http.javadsl.marshallers.jackson.Jackson;
import akka.http.javadsl.marshalling.Marshaller;
import akka.http.javadsl.model.*;
import akka.http.javadsl.model.headers.Accept;
import akka.http.javadsl.server.*;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.http.javadsl.testkit.TestRoute;
import akka.http.javadsl.unmarshalling.StringUnmarshallers;
import akka.http.javadsl.common.EntityStreamingSupport;
import akka.http.javadsl.unmarshalling.Unmarshaller;
import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Source;
import akka.util.ByteString;
import org.junit.Test;

import java.util.concurrent.CompletionStage;

public class JsonStreamingExamplesTest extends JUnitRouteTest {

  //#routes
  final Route tweets() {
    //#formats
    final Unmarshaller<ByteString, JavaTweet> JavaTweets = Jackson.byteStringUnmarshaller(JavaTweet.class);
    //#formats

    //#response-streaming

    // Step 1: Enable JSON streaming
    // we're not using this in the example, but it's the simplest way to start:
    // The default rendering is a JSON array: `[el, el, el, ...]`
    final JsonEntityStreamingSupport jsonStreaming = EntityStreamingSupport.json();

    // Step 1.1: Enable and customise how we'll render the JSON, as a compact array:
    final ByteString start = ByteString.fromString("[");
    final ByteString between = ByteString.fromString(",");
    final ByteString end = ByteString.fromString("]");
    final Flow<ByteString, ByteString, NotUsed> compactArrayRendering =
      Flow.of(ByteString.class).intersperse(start, between, end);

    final JsonEntityStreamingSupport compactJsonSupport = EntityStreamingSupport.json()
      .withFramingRendererFlow(compactArrayRendering);


    // Step 2: implement the route
    final Route responseStreaming = path("tweets", () ->
      get(() ->
        parameter(StringUnmarshallers.INTEGER, "n", n -> {
          final Source<JavaTweet, NotUsed> tws =
            Source.repeat(new JavaTweet(12, "Hello World!")).take(n);

          // Step 3: call complete* with your source, marshaller, and stream rendering mode
          return completeOKWithSource(tws, Jackson.marshaller(), compactJsonSupport);
        })
      )
    );
    //#response-streaming

    //#incoming-request-streaming
    final Route incomingStreaming = path("tweets", () ->
      post(() ->
        extractMaterializer(mat -> {
          final JsonEntityStreamingSupport jsonSupport = EntityStreamingSupport.json();

          return entityAsSourceOf(JavaTweets, jsonSupport, sourceOfTweets -> {
            final CompletionStage<Integer> tweetsCount = sourceOfTweets.runFold(0, (acc, tweet) -> acc + 1, mat);
            return onComplete(tweetsCount, c -> complete("Total number of tweets: " + c));
          });
        })
      )
    );
    //#incoming-request-streaming

    return responseStreaming.orElse(incomingStreaming);
  }

  final Route csvTweets() {
    //#csv-example
    final Marshaller<JavaTweet, ByteString> renderAsCsv =
      Marshaller.withFixedContentType(ContentTypes.TEXT_CSV_UTF8, t ->
        ByteString.fromString(t.getId() + "," + t.getMessage())
      );

    final CsvEntityStreamingSupport compactJsonSupport = EntityStreamingSupport.csv();

    final Route responseStreaming = path("tweets", () ->
      get(() ->
        parameter(StringUnmarshallers.INTEGER, "n", n -> {
          final Source<JavaTweet, NotUsed> tws =
            Source.repeat(new JavaTweet(12, "Hello World!")).take(n);
          return completeWithSource(tws, renderAsCsv, compactJsonSupport);
        })
      )
    );
    //#csv-example

    return responseStreaming;
  }
  //#routes

  @Test
  public void getTweetsTest() {
    //#response-streaming
    // tests:
    final TestRoute routes = testRoute(tweets());

    // test happy path
    final Accept acceptApplication = Accept.create(MediaRanges.create(MediaTypes.APPLICATION_JSON));
    routes.run(HttpRequest.GET("/tweets?n=2").addHeader(acceptApplication))
      .assertStatusCode(200)
      .assertEntity("[{\"id\":12,\"message\":\"Hello World!\"},{\"id\":12,\"message\":\"Hello World!\"}]");

    // test responses to potential errors
    final Accept acceptText = Accept.create(MediaRanges.ALL_TEXT);
    routes.run(HttpRequest.GET("/tweets?n=3").addHeader(acceptText))
      .assertStatusCode(StatusCodes.NOT_ACCEPTABLE) // 406
      .assertEntity("Resource representation is only available with these types:\napplication/json");
    //#response-streaming
  }

  @Test
  public void csvExampleTweetsTest() {
    //#response-streaming
    // tests --------------------------------------------
    final TestRoute routes = testRoute(csvTweets());

    // test happy path
    final Accept acceptCsv = Accept.create(MediaRanges.create(MediaTypes.TEXT_CSV));
    routes.run(HttpRequest.GET("/tweets?n=2").addHeader(acceptCsv))
      .assertStatusCode(200)
      .assertEntity("12,Hello World!\n" +
        "12,Hello World!");

    // test responses to potential errors
    final Accept acceptText = Accept.create(MediaRanges.ALL_APPLICATION);
    routes.run(HttpRequest.GET("/tweets?n=3").addHeader(acceptText))
      .assertStatusCode(StatusCodes.NOT_ACCEPTABLE) // 406
      .assertEntity("Resource representation is only available with these types:\ntext/csv; charset=UTF-8");
    //#response-streaming
  }

  //#models
  private static final class JavaTweet {
    private int id;
    private String message;

    public JavaTweet(int id, String message) {
      this.id = id;
      this.message = message;
    }

    public int getId() {
      return id;
    }

    public void setId(int id) {
      this.id = id;
    }

    public void setMessage(String message) {
      this.message = message;
    }

    public String getMessage() {
      return message;
    }

  }
  //#models
}
@@ -3,6 +3,7 @@

 */
package docs.http.javadsl.server.directives;

import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.dispatch.ExecutionContexts;
import akka.event.Logging;

@@ -31,14 +32,17 @@ import akka.util.ByteString;

import org.junit.Ignore;
import org.junit.Test;
import scala.concurrent.ExecutionContextExecutor;
import scala.concurrent.duration.FiniteDuration;

import java.nio.file.Paths;
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;
@@ -785,4 +789,105 @@ public class BasicDirectivesExamplesTest extends JUnitRouteTest {

    //#extractUnmatchedPath
  }

  @Test
  public void testExtractRequestEntity() {
    //#extractRequestEntity
    final Route route = extractRequestEntity(entity ->
      complete("Request entity content-type is " + entity.getContentType())
    );

    // tests:
    testRoute(route).run(
      HttpRequest.POST("/abc")
        .withEntity(HttpEntities.create(ContentTypes.TEXT_PLAIN_UTF8, "req"))
    ).assertEntity("Request entity content-type is text/plain; charset=UTF-8");
    //#extractRequestEntity
  }

  @Test
  public void testExtractDataBytes() {
    //#extractDataBytes
    final Route route = extractDataBytes(data -> {
      final CompletionStage<Integer> sum = data.runFold(0, (acc, i) ->
        acc + Integer.valueOf(i.utf8String()), materializer());
      return onSuccess(() -> sum, s ->
        complete(HttpResponse.create().withEntity(HttpEntities.create(s.toString()))));
    });

    // tests:
    final Iterator<ByteString> iterator = Arrays.asList(
      ByteString.fromString("1"),
      ByteString.fromString("2"),
      ByteString.fromString("3")).iterator();
    final Source<ByteString, NotUsed> dataBytes = Source.fromIterator(() -> iterator);

    testRoute(route).run(
      HttpRequest.POST("abc")
        .withEntity(HttpEntities.create(ContentTypes.TEXT_PLAIN_UTF8, dataBytes))
    ).assertEntity("6");
    //#extractDataBytes
  }

  @Test
  public void testExtractStrictEntity() {
    //#extractStrictEntity
    final FiniteDuration timeout = FiniteDuration.create(3, TimeUnit.SECONDS);
    final Route route = extractStrictEntity(timeout, strict ->
      complete(strict.getData().utf8String())
    );

    // tests:
    final Iterator<ByteString> iterator = Arrays.asList(
      ByteString.fromString("1"),
      ByteString.fromString("2"),
      ByteString.fromString("3")).iterator();
    final Source<ByteString, NotUsed> dataBytes = Source.fromIterator(() -> iterator);
    testRoute(route).run(
      HttpRequest.POST("/")
        .withEntity(HttpEntities.create(ContentTypes.TEXT_PLAIN_UTF8, dataBytes))
    ).assertEntity("123");
    //#extractStrictEntity
  }

  @Test
  public void testToStrictEntity() {
    //#toStrictEntity
    final FiniteDuration timeout = FiniteDuration.create(3, TimeUnit.SECONDS);
    final Route route = toStrictEntity(timeout, () ->
      extractRequest(req -> {
        if (req.entity() instanceof HttpEntity.Strict) {
          final HttpEntity.Strict strict = (HttpEntity.Strict) req.entity();
          return complete("Request entity is strict, data=" + strict.getData().utf8String());
        } else {
          return complete("Ooops, request entity is not strict!");
        }
      })
    );

    // tests:
    final Iterator<ByteString> iterator = Arrays.asList(
      ByteString.fromString("1"),
      ByteString.fromString("2"),
      ByteString.fromString("3")).iterator();
    final Source<ByteString, NotUsed> dataBytes = Source.fromIterator(() -> iterator);
    testRoute(route).run(
      HttpRequest.POST("/")
        .withEntity(HttpEntities.create(ContentTypes.TEXT_PLAIN_UTF8, dataBytes))
    ).assertEntity("Request entity is strict, data=123");
    //#toStrictEntity
  }

  @Test
  public void testExtractActorSystem() {
    //#extractActorSystem
    final Route route = extractActorSystem(actorSystem ->
      complete("Actor System extracted, hash=" + actorSystem.hashCode())
    );

    // tests:
    testRoute(route).run(HttpRequest.GET("/"))
      .assertEntity("Actor System extracted, hash=" + system().hashCode());
    //#extractActorSystem
  }

}
@@ -7,7 +7,7 @@ import akka.http.javadsl.model.HttpRequest;

import akka.http.javadsl.model.headers.AcceptEncoding;
import akka.http.javadsl.model.headers.ContentEncoding;
import akka.http.javadsl.model.headers.HttpEncodings;
import akka.http.javadsl.server.Coder;
import akka.http.javadsl.coding.Coder;
import akka.http.javadsl.server.Rejections;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.testkit.JUnitRouteTest;

@@ -16,7 +16,7 @@ import org.junit.Test;

import java.util.Collections;

import static akka.http.javadsl.server.Unmarshaller.entityToString;
import static akka.http.javadsl.unmarshalling.Unmarshaller.entityToString;

public class CodingDirectivesExamplesTest extends JUnitRouteTest {
@@ -0,0 +1,77 @@
/*
 * Copyright (C) 2015-2016 Lightbend Inc. <http://www.lightbend.com>
 */
package docs.http.javadsl.server.directives;

import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.event.LoggingAdapter;
import akka.event.NoLogging;
import akka.http.javadsl.ConnectHttp;
import akka.http.javadsl.Http;
import akka.http.javadsl.ServerBinding;
import akka.http.javadsl.model.*;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.settings.ParserSettings;
import akka.http.javadsl.settings.ServerSettings;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.stream.Materializer;
import akka.stream.javadsl.Flow;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutionException;

import static akka.http.javadsl.model.HttpProtocols.HTTP_1_0;
import static akka.http.javadsl.model.RequestEntityAcceptances.Expected;

public class CustomHttpMethodExamplesTest extends JUnitRouteTest {

  @Test
  public void testComposition() throws InterruptedException, ExecutionException {
    ActorSystem system = system();
    Materializer materializer = materializer();
    LoggingAdapter loggingAdapter = NoLogging.getInstance();

    int port = 9090;
    String host = "127.0.0.1";

    //#customHttpMethod
    HttpMethod BOLT =
      HttpMethods.createCustom("BOLT", false, true, Expected);
    final ParserSettings parserSettings =
      ParserSettings.create(system).withCustomMethods(BOLT);
    final ServerSettings serverSettings =
      ServerSettings.create(system).withParserSettings(parserSettings);

    final Route routes = route(
      extractMethod(method ->
        complete("This is a " + method.name() + " request.")
      )
    );
    final Flow<HttpRequest, HttpResponse, NotUsed> handler = routes.flow(system, materializer);
    final Http http = Http.get(system);
    final CompletionStage<ServerBinding> binding =
      http.bindAndHandle(
        handler,
        ConnectHttp.toHost(host, port),
        serverSettings,
        loggingAdapter,
        materializer);

    HttpRequest request = HttpRequest.create()
      .withUri("http://" + host + ":" + Integer.toString(port))
      .withMethod(BOLT)
      .withProtocol(HTTP_1_0);

    CompletionStage<HttpResponse> response = http.singleRequest(request, materializer);
    //#customHttpMethod

    assertEquals(StatusCodes.OK, response.toCompletableFuture().get().status());
    assertEquals(
      "This is a BOLT request.",
      response.toCompletableFuture().get().entity().toStrict(3000, materializer).toCompletableFuture().get().getData().utf8String()
    );
  }
}
@@ -16,9 +16,11 @@ import akka.http.javadsl.model.headers.Host;

import akka.http.javadsl.server.Route;
import akka.http.javadsl.server.RequestContext;
import akka.http.javadsl.testkit.JUnitRouteTest;

import java.util.function.Function;
import java.util.function.BiFunction;
import java.util.function.Consumer;

import akka.http.javadsl.model.Uri;
import akka.http.javadsl.model.headers.Location;
import akka.http.javadsl.server.directives.DebuggingDirectives;

@@ -26,10 +28,13 @@ import akka.http.javadsl.server.directives.RouteDirectives;

import akka.event.Logging;
import akka.event.Logging.LogLevel;
import akka.http.javadsl.server.directives.LogEntry;

import java.util.List;
import akka.http.scaladsl.server.Rejection;

import akka.http.javadsl.server.Rejection;

import static akka.event.Logging.InfoLevel;

import java.util.stream.Collectors;
import java.util.Optional;
@@ -39,18 +44,18 @@ public class DebuggingDirectivesExamplesTest extends JUnitRouteTest {

  public void testLogRequest() {
    //#logRequest
    // logs request with "get-user"
    final Route routeBasicLogRequest = get(() ->
      logRequest("get-user", () -> complete("logged")));

    // logs request with "get-user" as Info
    final Route routeBasicLogRequestAsInfo = get(() ->
      logRequest("get-user", InfoLevel(), () -> complete("logged")));

    // logs just the request method at info level
    Function<HttpRequest, LogEntry> requestMethodAsInfo = (request) ->
      LogEntry.create(request.method().toString(), InfoLevel());
    Function<HttpRequest, LogEntry> requestMethodAsInfo = (request) ->
      LogEntry.create(request.method().name(), InfoLevel());

    final Route routeUsingFunction = get(() ->
      logRequest(requestMethodAsInfo, () -> complete("logged")));

    // tests:

@@ -63,32 +68,31 @@ public class DebuggingDirectivesExamplesTest extends JUnitRouteTest {

  public void testLogRequestResult() {
    //#logRequestResult
    // using logRequestResult

    // handle request to optionally generate a log entry
    BiFunction<HttpRequest, HttpResponse, Optional<LogEntry>> requestMethodAsInfo =
      (request, response) ->
        (response.status().isSuccess()) ?
          Optional.of(
            LogEntry.create(
              request.method().toString() + ":" + response.status().intValue(),
              InfoLevel()))
        (response.status().isSuccess()) ?
          Optional.of(
            LogEntry.create(
              request.method().name() + ":" + response.status().intValue(),
              InfoLevel()))
          : Optional.empty(); // not a successful response

    // handle rejections to optionally generate a log entry
    BiFunction<HttpRequest, List<Rejection>, Optional<LogEntry>> rejectionsAsInfo =
      (request, rejections) ->
        (!rejections.isEmpty()) ?
          Optional.of(
            LogEntry.create(
              rejections
                .stream()
                .map(Rejection::toString)
                .collect(Collectors.joining(", ")),
              InfoLevel()))
          : Optional.empty(); // no rejections

    final Route route = get(() -> logRequestResultOptional(
      requestMethodAsInfo,
      rejectionsAsInfo,
      () -> complete("logged")));
    // tests:
@@ -109,16 +113,16 @@ public class DebuggingDirectivesExamplesTest extends JUnitRouteTest {

    // logs the result and the rejections as LogEntry
    Function<HttpResponse, LogEntry> showSuccessAsInfo = (response) ->
      LogEntry.create(String.format("Response code '%d'", response.status().intValue()),
        InfoLevel());

    Function<List<Rejection>, LogEntry> showRejectionAsInfo = (rejections) ->
      LogEntry.create(
        rejections
          .stream()
          .map(rejection -> rejection.toString())
          .collect(Collectors.joining(", ")),
        InfoLevel());

    final Route routeUsingFunction = get(() ->
      logResult(showSuccessAsInfo, showRejectionAsInfo, () -> complete("logged")));
@@ -128,4 +132,50 @@ public class DebuggingDirectivesExamplesTest extends JUnitRouteTest {

    //#logResult
  }

  @Test
  public void testLogRequestResultWithResponseTime() {
    //#logRequestResultWithResponseTime
    // using logRequestResultOptional for generating Response Time
    // handle request to optionally generate a log entry

    BiFunction<HttpRequest, HttpResponse, Optional<LogEntry>> requestMethodAsInfo =
      (request, response) -> {
        Long requestTime = System.nanoTime();
        return printResponseTime(request, response, requestTime);
      };

    // handle rejections to optionally generate a log entry
    BiFunction<HttpRequest, List<Rejection>, Optional<LogEntry>> rejectionsAsInfo =
      (request, rejections) ->
        (!rejections.isEmpty()) ?
          Optional.of(
            LogEntry.create(
              rejections
                .stream()
                .map(Rejection::toString)
                .collect(Collectors.joining(", ")),
              InfoLevel()))
          : Optional.empty(); // no rejections

    final Route route = get(() -> logRequestResultOptional(
      requestMethodAsInfo,
      rejectionsAsInfo,
      () -> complete("logged")));
    // tests:
    testRoute(route).run(HttpRequest.GET("/")).assertEntity("logged");
    //#logRequestResultWithResponseTime
  }

  // A function for the logging of Time
  public static Optional<LogEntry> printResponseTime(HttpRequest request, HttpResponse response, Long requestTime) {
    if (response.status().isSuccess()) {
      // elapsed time in milliseconds; nanoTime() only grows, so subtract the start time from "now"
      Long elapsedTime = (System.nanoTime() - requestTime) / 1000000;
      return Optional.of(
        LogEntry.create(
          "Logged Request:" + request.method().name() + ":" + request.getUri() + ":" + response.status() + ":" + elapsedTime,
          InfoLevel()));
    } else {
      return Optional.empty(); // not a successful response
    }
  }
}
@@ -6,7 +6,7 @@ package docs.http.javadsl.server.directives;

import akka.http.impl.engine.rendering.BodyPartRenderer;
import akka.http.javadsl.model.*;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.server.Unmarshaller;
import akka.http.javadsl.unmarshalling.Unmarshaller;
import akka.http.javadsl.server.directives.FileInfo;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.stream.javadsl.Framing;

@@ -14,8 +14,6 @@ import akka.stream.javadsl.Source;

import akka.util.ByteString;
import org.junit.Ignore;
import org.junit.Test;
import scala.concurrent.duration.Duration;
import scala.concurrent.duration.FiniteDuration;

import java.io.File;
import java.nio.charset.Charset;

@@ -24,7 +22,6 @@ import java.util.Arrays;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.TimeUnit;
import java.util.function.BiFunction;

public class FileUploadDirectivesExamplesTest extends JUnitRouteTest {

@@ -7,7 +7,7 @@ import akka.http.javadsl.model.FormData;

import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.server.StringUnmarshallers;
import akka.http.javadsl.unmarshalling.StringUnmarshallers;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.japi.Pair;
import org.junit.Test;
@@ -8,7 +8,7 @@ import java.util.concurrent.CompletableFuture;

import java.util.concurrent.TimeUnit;

import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.server.Marshaller;
import akka.http.javadsl.marshalling.Marshaller;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.http.scaladsl.model.StatusCodes;

@@ -6,7 +6,7 @@ package docs.http.javadsl.server.directives;

import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.server.Unmarshaller;
import akka.http.javadsl.unmarshalling.Unmarshaller;
import akka.http.javadsl.testkit.JUnitRouteTest;
import org.junit.Test;

@@ -11,7 +11,7 @@ import akka.http.javadsl.model.headers.ContentRange;

import akka.http.javadsl.model.headers.Range;
import akka.http.javadsl.model.headers.RangeUnits;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.server.Unmarshaller;
import akka.http.javadsl.unmarshalling.Unmarshaller;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.http.javadsl.testkit.TestRouteResult;
import akka.stream.ActorMaterializer;
@@ -19,6 +19,7 @@ import akka.util.ByteString;

import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

import java.util.ArrayList;
import java.util.List;

@@ -68,16 +69,16 @@ public class RangeDirectivesExamplesTest extends JUnitRouteTest {

    try {
      final List<Multipart.ByteRanges.BodyPart> bodyParts =
        completionStage.toCompletableFuture().get(3, TimeUnit.SECONDS);
      assertResult(2, bodyParts.toArray().length);
      assertEquals(2, bodyParts.toArray().length);

      final Multipart.ByteRanges.BodyPart part1 = bodyParts.get(0);
      assertResult(bytes028Range, part1.getContentRange());
      assertResult(ByteString.fromString("ABC"),
      assertEquals(bytes028Range, part1.getContentRange());
      assertEquals(ByteString.fromString("ABC"),
        part1.toStrict(1000, materializer).toCompletableFuture().get().getEntity().getData());

      final Multipart.ByteRanges.BodyPart part2 = bodyParts.get(1);
      assertResult(bytes678Range, part2.getContentRange());
      assertResult(ByteString.fromString("GH"),
      assertEquals(bytes678Range, part2.getContentRange());
      assertEquals(ByteString.fromString("GH"),
        part2.toStrict(1000, materializer).toCompletableFuture().get().getEntity().getData());

    } catch (Exception e) {
@@ -11,7 +11,7 @@ import akka.http.javadsl.ConnectHttp;

import akka.http.javadsl.Http;
import akka.http.javadsl.server.AllDirectives;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.server.StringUnmarshallers;
import akka.http.javadsl.unmarshalling.StringUnmarshallers;
import akka.http.javadsl.server.examples.simple.SimpleServerApp;
import akka.stream.ActorMaterializer;

@@ -98,7 +98,7 @@ public class GraphDSLDocTest extends AbstractJavaTest {

      );
      // unconnected zip.out (!) => "The inlets [] and outlets [] must correspond to the inlets [] and outlets [ZipWith2.out]"
      //#simple-graph
      fail("expected IllegalArgumentException");
      org.junit.Assert.fail("expected IllegalArgumentException");
    } catch (IllegalArgumentException e) {
      assertTrue(e != null && e.getMessage() != null && e.getMessage().contains("must correspond to"));
    }
@@ -95,6 +95,40 @@ public class GraphStageDocTest extends AbstractJavaTest {

  }
  //#simple-source

  //#simple-sink
  public class StdoutSink extends GraphStage<SinkShape<Integer>> {
    public final Inlet<Integer> in = Inlet.create("StdoutSink.in");

    private final SinkShape<Integer> shape = SinkShape.of(in);

    @Override
    public SinkShape<Integer> shape() {
      return shape;
    }

    @Override
    public GraphStageLogic createLogic(Attributes inheritedAttributes) {
      return new GraphStageLogic(shape()) {

        // This requests one element at the Sink startup.
        @Override
        public void preStart() {
          pull(in);
        }

        {
          setHandler(in, new AbstractInHandler() {
            @Override
            public void onPush() throws Exception {
              Integer element = grab(in);
              System.out.println(element);
              pull(in);
            }
          });
        }
      };
    }
  }
  //#simple-sink

  @Test
  public void demonstrateCustomSourceUsage() throws Exception {

@@ -116,6 +150,14 @@ public class GraphStageDocTest extends AbstractJavaTest {

    assertEquals(result2.toCompletableFuture().get(3, TimeUnit.SECONDS), (Integer) 5050);
  }

  @Test
  public void demonstrateCustomSinkUsage() throws Exception {
    Graph<SinkShape<Integer>, NotUsed> sinkGraph = new StdoutSink();

    Sink<Integer, NotUsed> mySink = Sink.fromGraph(sinkGraph);

    Source.from(Arrays.asList(1, 2, 3)).runWith(mySink, mat);
  }

  //#one-to-one
  public class Map<A, B> extends GraphStage<FlowShape<A, B>> {
@@ -65,7 +65,7 @@ public class StreamBuffersRateDocTest extends AbstractJavaTest {

    final Flow<Integer, Integer, NotUsed> flow1 =
      Flow.of(Integer.class)
        .map(elem -> elem * 2).async()
        .withAttributes(Attributes.inputBuffer(1, 1)); // the buffer size of this map is 1
        .addAttributes(Attributes.inputBuffer(1, 1)); // the buffer size of this map is 1
    final Flow<Integer, Integer, NotUsed> flow2 =
      flow1.via(
        Flow.of(Integer.class)

@@ -100,8 +100,15 @@ public class RecipeByteStrings extends RecipeTest {

        @Override
        public void onUpstreamFinish() throws Exception {
          if (buffer.isEmpty()) completeStage();
          // elements left in buffer, keep accepting downstream pulls
          // and push from buffer until buffer is emitted
          else {
            // There are elements left in buffer, so
            // we keep accepting downstream pulls and push from buffer until emptied.
            //
            // It might be though, that the upstream finished while it was pulled, in which
            // case we will not get an onPull from the downstream, because we already had one.
            // In that case we need to emit from the buffer.
            if (isAvailable(out)) emitChunk();
          }
        }
      });
    }
@@ -44,6 +44,10 @@ And here's another example that uses the "thread-pool-executor":

.. includecode:: ../scala/code/docs/dispatcher/DispatcherDocSpec.scala#my-thread-pool-dispatcher-config

.. note::
  The thread pool executor dispatcher is implemented using a ``java.util.concurrent.ThreadPoolExecutor``.
  You can read more about it in the JDK's `ThreadPoolExecutor documentation`_.

For more options, see the default-dispatcher section of the :ref:`configuration`.

Then you create the actor as usual and define the dispatcher in the deployment configuration.
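Alternatively, as a quick sketch (not part of this changeset), the dispatcher can also be selected
programmatically when constructing the ``Props``; ``MyActor`` and the dispatcher id are placeholders
that must match your configuration::

    import akka.actor.ActorRef;
    import akka.actor.Props;

    ActorRef myActor = system.actorOf(
        Props.create(MyActor.class).withDispatcher("my-thread-pool-dispatcher"),
        "myactor");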
@@ -65,6 +69,7 @@ of programmatically provided parameter.

where you'd use periods to denote sub-sections, like this: ``"foo.bar.my-dispatcher"``

.. _ForkJoinPool documentation: https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ForkJoinPool.html
.. _ThreadPoolExecutor documentation: https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html

Types of dispatchers
--------------------
@@ -28,7 +28,7 @@ HTTP result can be found in ``WebSocketUpgradeResponse.response``

Message
-------
Messages sent and received over a WebSocket can be either :class:`TextMessage` s or :class:`BinaryMessage` s and each
of those can be either strict (all data in one chunk) or streaming. In typical applications messages will be strict as
of those can be either strict (all data in one chunk) or streamed. In typical applications messages will be strict, as
WebSockets are usually deployed to communicate using small messages rather than to stream data; the protocol does however
allow this (by not marking the first fragment as final, as described in `rfc 6455 section 5.2`__).

@@ -37,7 +37,7 @@ __ https://tools.ietf.org/html/rfc6455#section-5.2

The strict text is available from ``TextMessage.getStrictText`` and strict binary data from
``BinaryMessage.getStrictData``.

For streaming messages ``BinaryMessage.getStreamedData`` and ``TextMessage.getStreamedText`` is used to access the data.
For streamed messages ``BinaryMessage.getStreamedData`` and ``TextMessage.getStreamedText`` are used to access the data.
In these cases the data is provided as a ``Source<ByteString, NotUsed>`` for binary and ``Source<String, NotUsed>``
for text messages.
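A small sketch of consuming both variants with the accessors described above (the surrounding
server setup is omitted)::

    import akka.http.javadsl.model.ws.Message;
    import akka.http.javadsl.model.ws.TextMessage;
    import akka.stream.Materializer;
    import akka.stream.javadsl.Sink;

    void handleTextMessage(Message message, Materializer mat) {
      TextMessage msg = message.asTextMessage();
      if (msg.isStrict()) {
        // Strict: all data is already in memory.
        System.out.println(msg.getStrictText());
      } else {
        // Streamed: consume the Source of text chunks.
        msg.getStreamedText().runWith(Sink.foreach(System.out::println), mat);
      }
    }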
@@ -10,6 +10,8 @@ are left to the streaming APIs and are easily implementable as patterns in user-land.

Common timeouts
---------------

.. _idle-timeouts-java:

Idle timeouts
^^^^^^^^^^^^^

@@ -22,8 +24,8 @@ independently for each of those using the following keys::

    akka.http.server.idle-timeout
    akka.http.client.idle-timeout
    akka.http.http-connection-pool.idle-timeout
    akka.http.http-connection-pool.client.idle-timeout
    akka.http.host-connection-pool.idle-timeout
    akka.http.host-connection-pool.client.idle-timeout

.. note::
  For the connection pooled client side the idle period is counted only when the pool has no pending requests waiting.
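Since these are ordinary configuration keys, they can for example be overridden when creating the
actor system (a sketch; the chosen values are arbitrary)::

    import akka.actor.ActorSystem;
    import com.typesafe.config.Config;
    import com.typesafe.config.ConfigFactory;

    Config config = ConfigFactory.parseString(
        "akka.http.server.idle-timeout = 60s\n" +
        "akka.http.client.idle-timeout = 30s")
        .withFallback(ConfigFactory.load());
    ActorSystem system = ActorSystem.create("http-timeouts", config);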
@@ -23,12 +23,12 @@ Client-Side handling of streaming HTTP Entities

Consuming the HTTP Response Entity (Client)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The most commong use-case of course is consuming the response entity, which can be done via
The most common use-case of course is consuming the response entity, which can be done via
running the underlying ``dataBytes`` Source. This is as simple as running the dataBytes source,
(or on the server-side using directives such as

It is encouraged to use various streaming techniques to utilise the underlying infrastructure to its fullest,
for example by framing the incoming chunks, parsing them line-by-line and the connecting the flow into another
for example by framing the incoming chunks, parsing them line-by-line and then connecting the flow into another
destination Sink, such as a File or other Akka Streams connector:

.. includecode:: ../code/docs/http/javadsl/HttpClientExampleDocTest.java#manual-entity-consume-example-1
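In its simplest form, running ``dataBytes`` can look like the following sketch (assuming an already
obtained ``HttpResponse`` and a ``Materializer`` in scope)::

    import akka.http.javadsl.model.HttpResponse;
    import akka.stream.Materializer;
    import akka.stream.javadsl.Sink;
    import akka.util.ByteString;

    void consume(HttpResponse response, Materializer mat) {
      // Draining the entity data frees the underlying connection for reuse.
      response.entity().getDataBytes()
          .runWith(Sink.foreach((ByteString chunk) -> System.out.print(chunk.utf8String())), mat);
    }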
@@ -108,7 +108,7 @@ Closing connections is also explained in depth in the :ref:`http-closing-connect

Pending: Automatic discarding of not used entities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Under certin conditions is is possible to detect an entity is very unlikely to be used by the user for a given request,
Under certain conditions it is possible to detect an entity is very unlikely to be used by the user for a given request,
and issue warnings or discard the entity automatically. This advanced feature has not been implemented yet, see the below
note and issues for further discussion and ideas.

@@ -40,5 +40,6 @@ akka-http-jackson

    implications-of-streaming-http-entity
    configuration
    server-side-https-support
    ../../scala/http/migration-guide-2.4.x-experimental

.. _jackson: https://github.com/FasterXML/jackson
@@ -0,0 +1,19 @@
.. _-extractActorSystem-java-:

extractActorSystem
==================

Description
-----------

Extracts the ``ActorSystem`` from the ``RequestContext``, which can be useful when the external API
in your route needs one.

.. warning::

  This is only supported when the available Materializer is an ActorMaterializer.

Example
-------

.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractActorSystem

@@ -0,0 +1,16 @@
.. _-extractDataBytes-java-:

extractDataBytes
================

Description
-----------

Extracts the entity's data bytes as ``Source[ByteString, Any]`` from the :class:`RequestContext`.

The directive returns a stream containing the request data bytes.

Example
-------

.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractDataBytes

@@ -0,0 +1,17 @@
.. _-extractRequestEntity-java-:

extractRequestEntity
====================

Description
-----------

Extracts the ``RequestEntity`` from the :class:`RequestContext`.

The directive returns a ``RequestEntity`` without unmarshalling the request. To extract the domain entity,
:ref:`-entity-java-` should be used.

Example
-------

.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractRequestEntity
@@ -0,0 +1,23 @@
.. _-extractStrictEntity-java-:

extractStrictEntity
===================

Description
-----------

Extracts the strict http entity as ``HttpEntity.Strict`` from the :class:`RequestContext`.

A timeout parameter is given and if the stream isn't completed after the timeout, the directive will be failed.

.. warning::

  The directive will read the request entity into memory within the size limit (8M by default) and effectively disable streaming.
  The size limit can be configured globally with ``akka.http.parsing.max-content-length`` or
  overridden by wrapping with the :ref:`-withSizeLimit-java-` or :ref:`-withoutSizeLimit-java-` directive.


Example
-------

.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractStrictEntity

@@ -17,11 +17,15 @@ on two axes: a) provide a constant value or extract a value from the ``RequestContext``

a single value or a tuple of values.

* :ref:`-extract-java-`
* :ref:`-extractActorSystem-java-`
* :ref:`-extractDataBytes-java-`
* :ref:`-extractExecutionContext-java-`
* :ref:`-extractMaterializer-java-`
* :ref:`-extractStrictEntity-java-`
* :ref:`-extractLog-java-`
* :ref:`-extractRequest-java-`
* :ref:`-extractRequestContext-java-`
* :ref:`-extractRequestEntity-java-`
* :ref:`-extractSettings-java-`
* :ref:`-extractUnmatchedPath-java-`
* :ref:`-extractUri-java-`

@@ -41,6 +45,7 @@ Transforming the Request(Context)

* :ref:`-withMaterializer-java-`
* :ref:`-withLog-java-`
* :ref:`-withSettings-java-`
* :ref:`-toStrictEntity-java-`


.. _Response Transforming Directives-java:
@@ -91,11 +96,15 @@ Alphabetically

    cancelRejection
    cancelRejections
    extract
    extractActorSystem
    extractDataBytes
    extractExecutionContext
    extractMaterializer
    extractStrictEntity
    extractLog
    extractRequest
    extractRequestContext
    extractRequestEntity
    extractSettings
    extractUnmatchedPath
    extractUri

@@ -117,6 +126,7 @@ Alphabetically

    provide
    recoverRejections
    recoverRejectionsWith
    toStrictEntity
    withExecutionContext
    withMaterializer
    withLog

@@ -0,0 +1,23 @@
.. _-toStrictEntity-java-:

toStrictEntity
==============

Description
-----------

Transforms the request entity to a strict entity before it is handled by the inner route.

A timeout parameter is given and if the stream isn't completed after the timeout, the directive will be failed.

.. warning::

  The directive will read the request entity into memory within the size limit (8M by default) and effectively disable streaming.
  The size limit can be configured globally with ``akka.http.parsing.max-content-length`` or
  overridden by wrapping with the :ref:`-withSizeLimit-java-` or :ref:`-withoutSizeLimit-java-` directive.


Example
-------

.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#toStrictEntity
@@ -14,3 +14,10 @@ See :ref:`-logRequest-java-` for the general description how these directives work.

Example
-------
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/DebuggingDirectivesExamplesTest.java#logRequestResult

Longer Example
--------------

This example shows how to log the response time of the request using the Debugging Directive:

.. includecode:: ../../../../code/docs/http/javadsl/server/directives/DebuggingDirectivesExamplesTest.java#logRequestResultWithResponseTime

@@ -17,3 +17,9 @@ print what type of request it was - independent of what actual HttpMethod it was.

.. includecode:: ../../../../code/docs/http/javadsl/server/directives/MethodDirectivesExamplesTest.java#extractMethod

Custom Http Method
------------------

When you define a custom HttpMethod, you can define a route using ``extractMethod``.

.. includecode:: ../../../../code/docs/http/javadsl/server/directives/CustomHttpMethodExamplesTest.java#customHttpMethod
@@ -18,6 +18,7 @@ To use the high-level API you need to add a dependency to the ``akka-http-experimental`` module.

    directives/index
    marshalling
    exception-handling
    source-streaming-support
    rejections
    testkit

@@ -51,7 +52,6 @@ in the :ref:`exception-handling-java` section of the documentation. You can use the

File uploads
^^^^^^^^^^^^
TODO not possible in Java DSL since there

For high level directives to handle uploads see the :ref:`FileUploadDirectives-java`.

@@ -13,7 +13,7 @@ of an HTTP request or response (depending on whether used on the client or server side).

Marshalling
-----------

On the server-side marshalling is used to convert a application-domain object to a response (entity). Requests can
On the server-side marshalling is used to convert an application-domain object to a response (entity). Requests can
contain an ``Accept`` header that lists acceptable content types for the client. A marshaller contains the logic to
negotiate the result content types based on the ``Accept`` and the ``AcceptCharset`` headers.
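As a quick sketch of what such a marshaller can look like for a custom type (the ``Person`` class
and its rendering are made up)::

    import akka.http.javadsl.marshalling.Marshaller;
    import akka.http.javadsl.model.ContentTypes;
    import akka.util.ByteString;

    // Renders a (hypothetical) Person as plain text with a fixed content type.
    final Marshaller<Person, ByteString> personMarshaller =
        Marshaller.withFixedContentType(ContentTypes.TEXT_PLAIN_UTF8, person ->
            ByteString.fromString(person.getName() + "," + person.getAge()));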
@@ -30,7 +30,7 @@ These marshallers are provided by akka-http:

Unmarshalling
-------------

On the server-side unmarshalling is used to convert a request (entity) to a application-domain object. This is done
On the server-side unmarshalling is used to convert a request (entity) to an application-domain object. This is done
in the ``MarshallingDirectives.request`` or ``MarshallingDirectives.entity`` directive. There are several unmarshallers
provided by akka-http:
@@ -0,0 +1,91 @@
.. _json-streaming-java:

Source Streaming
================

Akka HTTP supports completing a request with an Akka ``Source<T, _>``, which makes it possible to easily build
and consume streaming end-to-end APIs which apply back-pressure throughout the entire stack.

It is possible to complete requests with a raw ``Source<ByteString, _>``, however often it is more convenient to
stream on an element-by-element basis, and allow Akka HTTP to handle the rendering internally - for example as a JSON array,
or CSV stream (where each element is separated by a new-line).

In the following sections we investigate how to make use of the JSON Streaming infrastructure,
however the general hints apply to any kind of element-by-element streaming you could imagine.

JSON Streaming
==============

`JSON Streaming`_ is a term referring to streaming a (possibly infinite) stream of elements as independent JSON
objects as a continuous HTTP request or response. The elements are most often separated using newlines,
however they do not have to be. Concatenating elements side-by-side or emitting a "very long" JSON array is another
use case.

In the below examples, we'll be referring to the ``Tweet`` and ``Measurement`` case classes as our model, which are defined as:

.. includecode:: ../../code/docs/http/javadsl/server/JsonStreamingExamplesTest.java#models

.. _Json Streaming: https://en.wikipedia.org/wiki/JSON_Streaming

Responding with JSON Streams
----------------------------

In this example we implement an API representing an infinite stream of tweets, very much like Twitter's `Streaming API`_.

Firstly, we'll need to get some additional marshalling infrastructure set up, that is able to marshal to and from an
Akka Streams ``Source<T,_>``. Here we'll use the ``Jackson`` helper class from ``akka-http-jackson`` (a separate library
that you should add as a dependency if you want to use Jackson with Akka HTTP).

First we enable JSON Streaming by making an implicit ``EntityStreamingSupport`` instance available (Step 1).

The default mode of rendering a ``Source`` is to represent it as a JSON Array. If you want to change this representation,
for example to use Twitter style new-line separated JSON objects, you can do so by configuring the support trait accordingly.

In Step 1.1 we demonstrate how to configure the rendering to be new-line separated, and also how parallel marshalling
can be applied. We configure the Support object to render the JSON as a series of new-line separated JSON objects,
simply by providing the ``start``, ``sep`` and ``end`` ByteStrings, which will be emitted at the appropriate
places in the rendered stream. Although this format is *not* valid JSON, it is pretty popular since parsing it is relatively
simple - clients need only to find the new-lines and apply JSON unmarshalling for an entire line of JSON.

The final step is simply completing a request using a Source of tweets, as simple as that:

.. includecode:: ../../code/docs/http/javadsl/server/JsonStreamingExamplesTest.java#response-streaming

.. _Streaming API: https://dev.twitter.com/streaming/overview

Consuming JSON Streaming uploads
--------------------------------

Sometimes the client may be sending a streaming request, for example an embedded device initiated a connection with
the server and is feeding it with one line of measurement data.

In this example, we want to consume this data in a streaming fashion from the request entity, and also apply
back-pressure to the underlying TCP connection, if the server can not cope with the rate of incoming data (back-pressure
will be applied automatically thanks to using Akka HTTP/Streams).

.. includecode:: ../../code/docs/http/javadsl/server/JsonStreamingExamplesTest.java#formats

.. includecode:: ../../code/docs/http/javadsl/server/JsonStreamingExamplesTest.java#incoming-request-streaming


Simple CSV streaming example
----------------------------

Akka HTTP provides another ``EntityStreamingSupport`` out of the box, namely ``csv`` (comma-separated values).
For completeness, we demonstrate its usage in the below snippet. As you'll notice, switching between streaming
modes is fairly simple, one only has to make sure that an implicit ``Marshaller`` of the requested type is available,
and that the streaming support operates on the same ``Content-Type`` as the rendered values. Otherwise you'll see
an error during runtime that the marshaller did not expose the expected content type and thus we can not render
the streaming response.

.. includecode:: ../../code/docs/http/javadsl/server/JsonStreamingExamplesTest.java#csv-example

Implementing custom EntityStreamingSupport traits
-------------------------------------------------

The ``EntityStreamingSupport`` infrastructure is open for extension and not bound to any single format, content type
or marshalling library. The provided JSON support does not rely on Spray JSON directly, but uses ``Marshaller<T, ByteString>``
instances, which can be provided using any JSON marshalling library (such as Circe, Jawn or Play JSON).

When implementing a custom support trait, one should simply extend the ``EntityStreamingSupport`` abstract class
and implement all of its methods. It's best to use the existing implementations as a guideline.
@@ -21,6 +21,8 @@ For detailed documentation for client-side HTTPS support refer to :ref:`clientSideHTTPS`.

.. _akka.http.javadsl.Http: https://github.com/akka/akka/blob/master/akka-http-core/src/main/scala/akka/http/javadsl/Http.scala

.. _ssl-config-java:

SSL-Config
----------

@@ -57,6 +59,8 @@ keystores using the JDK keytool utility can be found `here <https://docs.oracle.

SSL-Config provides a more targeted guide on generating certificates, so we recommend you start with the guide
titled `Generating X.509 Certificates <http://typesafehub.github.io/ssl-config/CertificateGeneration.html>`_.

.. _using-https-java:

Using HTTPS
-----------
@@ -64,11 +68,34 @@ Once you have obtained the server certificate, using it is as simple as preparing

and either setting it as the default one to be used by all servers started by the given ``Http`` extension
or passing it in explicitly when binding the server.

The below example shows how setting up HTTPS works when using the ``akka.http.javadsl.server.HttpApp`` convenience class:
The below example shows how setting up HTTPS works when using the ``akka.http.javadsl.server.HttpApp`` convenience class.
Firstly you will create and configure an instance of ``akka.http.javadsl.HttpsConnectionContext``:

.. includecode2:: ../../../../akka-http-tests/src/main/java/akka/http/javadsl/server/examples/simple/SimpleServerApp.java
   :snippet: https-http-config

Then pass it to the ``akka.http.javadsl.Http`` class's ``setDefaultServerHttpContext`` method, like in the below ``main`` method.

.. includecode2:: ../../../../akka-http-tests/src/main/java/akka/http/javadsl/server/examples/simple/SimpleServerApp.java
   :snippet: https-http-app

Running both HTTP and HTTPS
---------------------------

If you want to run HTTP and HTTPS servers in a single application, you can call ``bind...`` methods twice,
one for HTTPS, and the other for HTTP.

When configuring HTTPS, you can set it up as explained in the above :ref:`using-https-java` section,

.. includecode2:: ../../../../akka-http-tests/src/main/java/akka/http/javadsl/server/examples/simple/SimpleServerApp.java
   :snippet: https-http-config

or via :ref:`ssl-config-java` (not explained here though).

Then, call ``bind...`` methods twice like below.
The below ``SimpleServerApp.useHttps(system)`` is calling the above defined ``public static HttpsConnectionContext useHttps(ActorSystem system)`` method.

.. includecode2:: ../../../../akka-http-tests/src/main/java/akka/http/javadsl/server/examples/simple/SimpleServerHttpHttpsApp.java
   :snippet: both-https-and-http

Further reading
---------------