Merge branch 'master' into wip-sync-artery-dev-2.4.9-patriknw

Patrik Nordwall 2016-08-23 20:14:15 +02:00
commit 8ab02738b7
483 changed files with 9535 additions and 2177 deletions

@@ -281,7 +281,7 @@ Actors may also use a Camel `ProducerTemplate`_ for producing messages to endpoi
.. includecode:: code/docs/camel/MyActor.java#ProducerTemplate
-For initiating a a two-way message exchange, one of the
+For initiating a two-way message exchange, one of the
``ProducerTemplate.request*`` methods must be used.
.. includecode:: code/docs/camel/RequestBodyActor.java#RequestProducerTemplate

@@ -14,7 +14,7 @@ Cluster metrics information is primarily used for load-balancing routers,
and can also be used to implement advanced metrics-based node life cycles,
such as "Node Let-it-crash" when CPU steal time becomes excessive.
-Cluster Metrics Extension is a separate akka module delivered in ``akka-cluster-metrics`` jar.
+Cluster Metrics Extension is a separate Akka module delivered in ``akka-cluster-metrics`` jar.
To enable usage of the extension you need to add the following dependency to your project:
::

@@ -270,8 +270,8 @@ Note that stopped entities will be started again when a new message is targeted
Graceful Shutdown
-----------------
-You can send the message ``ClusterSharding.GracefulShutdown`` message (``ClusterSharding.gracefulShutdownInstance``
-in Java) to the ``ShardRegion`` actor to handoff all shards that are hosted by that ``ShardRegion`` and then the
+You can send the ``ShardRegion.gracefulShutdownInstance`` message
+to the ``ShardRegion`` actor to handoff all shards that are hosted by that ``ShardRegion`` and then the
``ShardRegion`` actor will be stopped. You can ``watch`` the ``ShardRegion`` actor to know when it is completed.
During this period other regions will buffer messages for those shards in the same way as when a rebalance is
triggered by the coordinator. When the shards have been stopped the coordinator will allocate these shards elsewhere.

@@ -733,3 +733,13 @@ For this purpose you can define a separate dispatcher to be used for the cluster
parallelism-max = 4
}
}
+.. note::
+Normally it should not be necessary to configure a separate dispatcher for the Cluster.
+The default-dispatcher should be sufficient for performing the Cluster tasks, i.e. ``akka.cluster.use-dispatcher``
+should not be changed. If you have Cluster related problems when using the default-dispatcher that is typically an
+indication that you are running blocking or CPU intensive actors/tasks on the default-dispatcher.
+Use dedicated dispatchers for such actors/tasks instead of running them on the default-dispatcher,
+because that may starve system internal tasks.
+Related config properties: ``akka.cluster.use-dispatcher = akka.cluster.cluster-dispatcher``.
+Corresponding default values: ``akka.cluster.use-dispatcher =``.
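For reference, the opt-in configuration this note discusses would look roughly like the following sketch in ``application.conf`` (the ``cluster-dispatcher`` name and the fork-join settings mirror the example above; treat the values as illustrative, not a recommendation):

```
# sketch: a dedicated dispatcher for cluster tasks (normally unnecessary, per the note)
akka.cluster.use-dispatcher = akka.cluster.cluster-dispatcher

akka.cluster.cluster-dispatcher {
  type = "Dispatcher"
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    parallelism-max = 4
  }
}
```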

@@ -10,9 +10,9 @@ import org.junit.Test;
import akka.http.javadsl.model.FormData;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.server.Route;
-import akka.http.javadsl.server.StringUnmarshallers;
-import akka.http.javadsl.server.StringUnmarshaller;
-import akka.http.javadsl.server.Unmarshaller;
+import akka.http.javadsl.unmarshalling.StringUnmarshallers;
+import akka.http.javadsl.unmarshalling.StringUnmarshaller;
+import akka.http.javadsl.unmarshalling.Unmarshaller;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.japi.Pair;
@@ -72,4 +72,4 @@ public class FormFieldRequestValsExampleTest extends JUnitRouteTest {
}
}
}

@@ -15,7 +15,7 @@ import akka.http.javadsl.marshallers.jackson.Jackson;
import akka.http.javadsl.model.*;
import akka.http.javadsl.model.headers.Connection;
import akka.http.javadsl.server.Route;
-import akka.http.javadsl.server.Unmarshaller;
+import akka.http.javadsl.unmarshalling.Unmarshaller;
import akka.japi.function.Function;
import akka.stream.ActorMaterializer;
import akka.stream.IOResult;

@@ -0,0 +1,179 @@
/*
* Copyright (C) 2016 Lightbend Inc. <http://www.lightbend.com>
*/
package docs.http.javadsl.server;
import akka.NotUsed;
import akka.http.javadsl.common.CsvEntityStreamingSupport;
import akka.http.javadsl.common.JsonEntityStreamingSupport;
import akka.http.javadsl.marshallers.jackson.Jackson;
import akka.http.javadsl.marshalling.Marshaller;
import akka.http.javadsl.model.*;
import akka.http.javadsl.model.headers.Accept;
import akka.http.javadsl.server.*;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.http.javadsl.testkit.TestRoute;
import akka.http.javadsl.unmarshalling.StringUnmarshallers;
import akka.http.javadsl.common.EntityStreamingSupport;
import akka.http.javadsl.unmarshalling.Unmarshaller;
import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Source;
import akka.util.ByteString;
import org.junit.Test;
import java.util.concurrent.CompletionStage;
public class JsonStreamingExamplesTest extends JUnitRouteTest {
//#routes
final Route tweets() {
//#formats
final Unmarshaller<ByteString, JavaTweet> JavaTweets = Jackson.byteStringUnmarshaller(JavaTweet.class);
//#formats
//#response-streaming
// Step 1: Enable JSON streaming
// we're not using this in the example, but it's the simplest way to start:
// The default rendering is a JSON array: `[el, el, el , ...]`
final JsonEntityStreamingSupport jsonStreaming = EntityStreamingSupport.json();
// Step 1.1: Enable and customise how we'll render the JSON, as a compact array:
final ByteString start = ByteString.fromString("[");
final ByteString between = ByteString.fromString(",");
final ByteString end = ByteString.fromString("]");
final Flow<ByteString, ByteString, NotUsed> compactArrayRendering =
Flow.of(ByteString.class).intersperse(start, between, end);
final JsonEntityStreamingSupport compactJsonSupport = EntityStreamingSupport.json()
.withFramingRendererFlow(compactArrayRendering);
// Step 2: implement the route
final Route responseStreaming = path("tweets", () ->
get(() ->
parameter(StringUnmarshallers.INTEGER, "n", n -> {
final Source<JavaTweet, NotUsed> tws =
Source.repeat(new JavaTweet(12, "Hello World!")).take(n);
// Step 3: call complete* with your source, marshaller, and stream rendering mode
return completeOKWithSource(tws, Jackson.marshaller(), compactJsonSupport);
})
)
);
//#response-streaming
//#incoming-request-streaming
final Route incomingStreaming = path("tweets", () ->
post(() ->
extractMaterializer(mat -> {
final JsonEntityStreamingSupport jsonSupport = EntityStreamingSupport.json();
return entityAsSourceOf(JavaTweets, jsonSupport, sourceOfTweets -> {
final CompletionStage<Integer> tweetsCount = sourceOfTweets.runFold(0, (acc, tweet) -> acc + 1, mat);
return onComplete(tweetsCount, c -> complete("Total number of tweets: " + c));
});
}
)
)
);
//#incoming-request-streaming
return responseStreaming.orElse(incomingStreaming);
}
final Route csvTweets() {
//#csv-example
final Marshaller<JavaTweet, ByteString> renderAsCsv =
Marshaller.withFixedContentType(ContentTypes.TEXT_CSV_UTF8, t ->
ByteString.fromString(t.getId() + "," + t.getMessage())
);
final CsvEntityStreamingSupport csvStreamingSupport = EntityStreamingSupport.csv();
final Route responseStreaming = path("tweets", () ->
get(() ->
parameter(StringUnmarshallers.INTEGER, "n", n -> {
final Source<JavaTweet, NotUsed> tws =
Source.repeat(new JavaTweet(12, "Hello World!")).take(n);
return completeWithSource(tws, renderAsCsv, csvStreamingSupport);
})
)
);
//#csv-example
return responseStreaming;
}
//#routes
@Test
public void getTweetsTest() {
//#response-streaming
// tests:
final TestRoute routes = testRoute(tweets());
// test happy path
final Accept acceptApplication = Accept.create(MediaRanges.create(MediaTypes.APPLICATION_JSON));
routes.run(HttpRequest.GET("/tweets?n=2").addHeader(acceptApplication))
.assertStatusCode(200)
.assertEntity("[{\"id\":12,\"message\":\"Hello World!\"},{\"id\":12,\"message\":\"Hello World!\"}]");
// test responses to potential errors
final Accept acceptText = Accept.create(MediaRanges.ALL_TEXT);
routes.run(HttpRequest.GET("/tweets?n=3").addHeader(acceptText))
.assertStatusCode(StatusCodes.NOT_ACCEPTABLE) // 406
.assertEntity("Resource representation is only available with these types:\napplication/json");
//#response-streaming
}
@Test
public void csvExampleTweetsTest() {
//#response-streaming
// tests --------------------------------------------
final TestRoute routes = testRoute(csvTweets());
// test happy path
final Accept acceptCsv = Accept.create(MediaRanges.create(MediaTypes.TEXT_CSV));
routes.run(HttpRequest.GET("/tweets?n=2").addHeader(acceptCsv))
.assertStatusCode(200)
.assertEntity("12,Hello World!\n" +
"12,Hello World!");
// test responses to potential errors
final Accept acceptText = Accept.create(MediaRanges.ALL_APPLICATION);
routes.run(HttpRequest.GET("/tweets?n=3").addHeader(acceptText))
.assertStatusCode(StatusCodes.NOT_ACCEPTABLE) // 406
.assertEntity("Resource representation is only available with these types:\ntext/csv; charset=UTF-8");
//#response-streaming
}
//#models
private static final class JavaTweet {
private int id;
private String message;
public JavaTweet(int id, String message) {
this.id = id;
this.message = message;
}
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public void setMessage(String message) {
this.message = message;
}
public String getMessage() {
return message;
}
}
//#models
}
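The ``renderAsCsv`` marshaller in the test above boils down to plain string concatenation. A standalone sketch of that row rendering (class and method names are ours, not part of the commit); note that a production CSV encoder would also have to quote fields containing commas or quotes:

```java
public class CsvRowDemo {
    // joins id and message with a comma, exactly like the marshaller above;
    // no escaping is performed, matching the simple example
    static String toCsvRow(int id, String message) {
        return id + "," + message;
    }

    public static void main(String[] args) {
        System.out.println(toCsvRow(12, "Hello World!")); // prints 12,Hello World!
    }
}
```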

@@ -3,6 +3,7 @@
*/
package docs.http.javadsl.server.directives;
import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.dispatch.ExecutionContexts;
import akka.event.Logging;
@@ -31,14 +32,17 @@ import akka.util.ByteString;
import org.junit.Ignore;
import org.junit.Test;
import scala.concurrent.ExecutionContextExecutor;
import scala.concurrent.duration.FiniteDuration;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.Collections;
import java.util.Iterator;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;
@@ -785,4 +789,105 @@ public class BasicDirectivesExamplesTest extends JUnitRouteTest {
//#extractUnmatchedPath
}
@Test
public void testExtractRequestEntity() {
//#extractRequestEntity
final Route route = extractRequestEntity(entity ->
complete("Request entity content-type is " + entity.getContentType())
);
// tests:
testRoute(route).run(
HttpRequest.POST("/abc")
.withEntity(HttpEntities.create(ContentTypes.TEXT_PLAIN_UTF8, "req"))
).assertEntity("Request entity content-type is text/plain; charset=UTF-8");
//#extractRequestEntity
}
@Test
public void testExtractDataBytes() {
//#extractDataBytes
final Route route = extractDataBytes(data -> {
final CompletionStage<Integer> sum = data.runFold(0, (acc, i) ->
acc + Integer.valueOf(i.utf8String()), materializer());
return onSuccess(() -> sum, s ->
complete(HttpResponse.create().withEntity(HttpEntities.create(s.toString()))));
});
// tests:
final Iterator<ByteString> iterator = Arrays.asList(
ByteString.fromString("1"),
ByteString.fromString("2"),
ByteString.fromString("3")).iterator();
final Source<ByteString, NotUsed> dataBytes = Source.fromIterator(() -> iterator);
testRoute(route).run(
HttpRequest.POST("abc")
.withEntity(HttpEntities.create(ContentTypes.TEXT_PLAIN_UTF8, dataBytes))
).assertEntity("6");
//#extractDataBytes
}
@Test
public void testExtractStrictEntity() {
//#extractStrictEntity
final FiniteDuration timeout = FiniteDuration.create(3, TimeUnit.SECONDS);
final Route route = extractStrictEntity(timeout, strict ->
complete(strict.getData().utf8String())
);
// tests:
final Iterator<ByteString> iterator = Arrays.asList(
ByteString.fromString("1"),
ByteString.fromString("2"),
ByteString.fromString("3")).iterator();
final Source<ByteString, NotUsed> dataBytes = Source.fromIterator(() -> iterator);
testRoute(route).run(
HttpRequest.POST("/")
.withEntity(HttpEntities.create(ContentTypes.TEXT_PLAIN_UTF8, dataBytes))
).assertEntity("123");
//#extractStrictEntity
}
@Test
public void testToStrictEntity() {
//#toStrictEntity
final FiniteDuration timeout = FiniteDuration.create(3, TimeUnit.SECONDS);
final Route route = toStrictEntity(timeout, () ->
extractRequest(req -> {
if (req.entity() instanceof HttpEntity.Strict) {
final HttpEntity.Strict strict = (HttpEntity.Strict)req.entity();
return complete("Request entity is strict, data=" + strict.getData().utf8String());
} else {
return complete("Ooops, request entity is not strict!");
}
})
);
// tests:
final Iterator<ByteString> iterator = Arrays.asList(
ByteString.fromString("1"),
ByteString.fromString("2"),
ByteString.fromString("3")).iterator();
final Source<ByteString, NotUsed> dataBytes = Source.fromIterator(() -> iterator);
testRoute(route).run(
HttpRequest.POST("/")
.withEntity(HttpEntities.create(ContentTypes.TEXT_PLAIN_UTF8, dataBytes))
).assertEntity("Request entity is strict, data=123");
//#toStrictEntity
}
@Test
public void testExtractActorSystem() {
//#extractActorSystem
final Route route = extractActorSystem(actorSystem ->
complete("Actor System extracted, hash=" + actorSystem.hashCode())
);
// tests:
testRoute(route).run(HttpRequest.GET("/"))
.assertEntity("Actor System extracted, hash=" + system().hashCode());
//#extractActorSystem
}
}

@@ -7,7 +7,7 @@ import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.headers.AcceptEncoding;
import akka.http.javadsl.model.headers.ContentEncoding;
import akka.http.javadsl.model.headers.HttpEncodings;
-import akka.http.javadsl.server.Coder;
+import akka.http.javadsl.coding.Coder;
import akka.http.javadsl.server.Rejections;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.testkit.JUnitRouteTest;
@@ -16,7 +16,7 @@ import org.junit.Test;
import java.util.Collections;
-import static akka.http.javadsl.server.Unmarshaller.entityToString;
+import static akka.http.javadsl.unmarshalling.Unmarshaller.entityToString;
public class CodingDirectivesExamplesTest extends JUnitRouteTest {

@@ -0,0 +1,77 @@
/*
* Copyright (C) 2015-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package docs.http.javadsl.server.directives;
import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.event.LoggingAdapter;
import akka.event.NoLogging;
import akka.http.javadsl.ConnectHttp;
import akka.http.javadsl.Http;
import akka.http.javadsl.ServerBinding;
import akka.http.javadsl.model.*;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.settings.ParserSettings;
import akka.http.javadsl.settings.ServerSettings;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.stream.Materializer;
import akka.stream.javadsl.Flow;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutionException;
import static akka.http.javadsl.model.HttpProtocols.HTTP_1_0;
import static akka.http.javadsl.model.RequestEntityAcceptances.Expected;
public class CustomHttpMethodExamplesTest extends JUnitRouteTest {
@Test
public void testComposition() throws InterruptedException, ExecutionException {
ActorSystem system = system();
Materializer materializer = materializer();
LoggingAdapter loggingAdapter = NoLogging.getInstance();
int port = 9090;
String host = "127.0.0.1";
//#customHttpMethod
HttpMethod BOLT =
HttpMethods.createCustom("BOLT", false, true, Expected);
final ParserSettings parserSettings =
ParserSettings.create(system).withCustomMethods(BOLT);
final ServerSettings serverSettings =
ServerSettings.create(system).withParserSettings(parserSettings);
final Route routes = route(
extractMethod( method ->
complete( "This is a " + method.name() + " request.")
)
);
final Flow<HttpRequest, HttpResponse, NotUsed> handler = routes.flow(system, materializer);
final Http http = Http.get(system);
final CompletionStage<ServerBinding> binding =
http.bindAndHandle(
handler,
ConnectHttp.toHost(host, port),
serverSettings,
loggingAdapter,
materializer);
HttpRequest request = HttpRequest.create()
.withUri("http://" + host + ":" + Integer.toString(port))
.withMethod(BOLT)
.withProtocol(HTTP_1_0);
CompletionStage<HttpResponse> response = http.singleRequest(request, materializer);
//#customHttpMethod
assertEquals(StatusCodes.OK, response.toCompletableFuture().get().status());
assertEquals(
"This is a BOLT request.",
response.toCompletableFuture().get().entity().toStrict(3000, materializer).toCompletableFuture().get().getData().utf8String()
);
}
}

@@ -16,9 +16,11 @@ import akka.http.javadsl.model.headers.Host;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.server.RequestContext;
import akka.http.javadsl.testkit.JUnitRouteTest;
import java.util.function.Function;
import java.util.function.BiFunction;
import java.util.function.Consumer;
import akka.http.javadsl.model.Uri;
import akka.http.javadsl.model.headers.Location;
import akka.http.javadsl.server.directives.DebuggingDirectives;
@@ -26,10 +28,13 @@ import akka.http.javadsl.server.directives.RouteDirectives;
import akka.event.Logging;
import akka.event.Logging.LogLevel;
import akka.http.javadsl.server.directives.LogEntry;
import java.util.List;
-import akka.http.scaladsl.server.Rejection;
+import akka.http.javadsl.server.Rejection;
import static akka.event.Logging.InfoLevel;
import java.util.stream.Collectors;
import java.util.Optional;
@@ -39,18 +44,18 @@ public class DebuggingDirectivesExamplesTest extends JUnitRouteTest {
public void testLogRequest() {
//#logRequest
// logs request with "get-user"
-final Route routeBasicLogRequest = get(() ->
+final Route routeBasicLogRequest = get(() ->
logRequest("get-user", () -> complete("logged")));
// logs request with "get-user" as Info
-final Route routeBasicLogRequestAsInfo = get(() ->
+final Route routeBasicLogRequestAsInfo = get(() ->
logRequest("get-user", InfoLevel(), () -> complete("logged")));
// logs just the request method at info level
-Function<HttpRequest, LogEntry> requestMethodAsInfo = (request) ->
-LogEntry.create(request.method().toString(), InfoLevel());
+Function<HttpRequest, LogEntry> requestMethodAsInfo = (request) ->
+LogEntry.create(request.method().name(), InfoLevel());
-final Route routeUsingFunction = get(() ->
+final Route routeUsingFunction = get(() ->
logRequest(requestMethodAsInfo, () -> complete("logged")));
// tests:
@@ -63,32 +68,31 @@ public class DebuggingDirectivesExamplesTest extends JUnitRouteTest {
public void testLogRequestResult() {
//#logRequestResult
// using logRequestResult
// handle request to optionally generate a log entry
-BiFunction<HttpRequest, HttpResponse, Optional<LogEntry>> requestMethodAsInfo =
+BiFunction<HttpRequest, HttpResponse, Optional<LogEntry>> requestMethodAsInfo =
(request, response) ->
-(response.status().isSuccess()) ?
-Optional.of(
-LogEntry.create(
-request.method().toString() + ":" + response.status().intValue(),
-InfoLevel()))
+(response.status().isSuccess()) ?
+Optional.of(
+LogEntry.create(
+request.method().name() + ":" + response.status().intValue(),
+InfoLevel()))
: Optional.empty(); // not a successful response
// handle rejections to optionally generate a log entry
-BiFunction<HttpRequest, List<Rejection>, Optional<LogEntry>> rejectionsAsInfo =
+BiFunction<HttpRequest, List<Rejection>, Optional<LogEntry>> rejectionsAsInfo =
(request, rejections) ->
-(!rejections.isEmpty()) ?
+(!rejections.isEmpty()) ?
Optional.of(
LogEntry.create(
-rejections
+rejections
.stream()
.map(Rejection::toString)
-.collect(Collectors.joining(", ")),
+.collect(Collectors.joining(", ")),
InfoLevel()))
: Optional.empty(); // no rejections
final Route route = get(() -> logRequestResultOptional(
-requestMethodAsInfo,
+requestMethodAsInfo,
rejectionsAsInfo,
() -> complete("logged")));
// tests:
@@ -109,16 +113,16 @@ public class DebuggingDirectivesExamplesTest extends JUnitRouteTest {
// logs the result and the rejections as LogEntry
Function<HttpResponse, LogEntry> showSuccessAsInfo = (response) ->
-LogEntry.create(String.format("Response code '%d'", response.status().intValue()),
+LogEntry.create(String.format("Response code '%d'", response.status().intValue()),
InfoLevel());
Function<List<Rejection>, LogEntry> showRejectionAsInfo = (rejections) ->
LogEntry.create(
rejections
-.stream()
-.map(rejection->rejection.toString())
-.collect(Collectors.joining(", ")),
-InfoLevel());
+.stream()
+.map(rejection -> rejection.toString())
+.collect(Collectors.joining(", ")),
+InfoLevel());
final Route routeUsingFunction = get(() ->
logResult(showSuccessAsInfo, showRejectionAsInfo, () -> complete("logged")));
@@ -128,4 +132,50 @@ public class DebuggingDirectivesExamplesTest extends JUnitRouteTest {
//#logResult
}
}
@Test
public void testLogRequestResultWithResponseTime() {
//#logRequestResultWithResponseTime
// using logRequestResultOptional for generating Response Time
// handle request to optionally generate a log entry
BiFunction<HttpRequest, HttpResponse, Optional<LogEntry>> requestMethodAsInfo =
(request, response) -> {
Long requestTime = System.nanoTime();
return printResponseTime(request, response, requestTime);
};
// handle rejections to optionally generate a log entry
BiFunction<HttpRequest, List<Rejection>, Optional<LogEntry>> rejectionsAsInfo =
(request, rejections) ->
(!rejections.isEmpty()) ?
Optional.of(
LogEntry.create(
rejections
.stream()
.map(Rejection::toString)
.collect(Collectors.joining(", ")),
InfoLevel()))
: Optional.empty(); // no rejections
final Route route = get(() -> logRequestResultOptional(
requestMethodAsInfo,
rejectionsAsInfo,
() -> complete("logged")));
// tests:
testRoute(route).run(HttpRequest.GET("/")).assertEntity("logged");
//#logRequestResultWithResponseTime
}
// A function for logging the response time
public static Optional<LogEntry> printResponseTime(HttpRequest request, HttpResponse response, Long requestTime) {
if (response.status().isSuccess()) {
Long elapsedTime = (System.nanoTime() - requestTime) / 1000000;
return Optional.of(
LogEntry.create(
"Logged Request:" + request.method().name() + ":" + request.getUri() + ":" + response.status() + ":" + elapsedTime,
InfoLevel()));
} else {
return Optional.empty(); //not a successful response
}
}
}
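The nanosecond arithmetic in the response-time example is easy to get backwards. A minimal, Akka-free sketch of the elapsed-time calculation (class and method names are ours, not from the commit):

```java
public class ResponseTimeDemo {
    // elapsed milliseconds between two System.nanoTime() readings;
    // note the order: end minus start, otherwise the result is negative
    static long elapsedMillis(long startNanos, long endNanos) {
        return (endNanos - startNanos) / 1_000_000L;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        Thread.sleep(50); // stand-in for handling a request
        long elapsed = elapsedMillis(start, System.nanoTime());
        System.out.println("elapsed ms: " + elapsed);
    }
}
```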

@@ -6,7 +6,7 @@ package docs.http.javadsl.server.directives;
import akka.http.impl.engine.rendering.BodyPartRenderer;
import akka.http.javadsl.model.*;
import akka.http.javadsl.server.Route;
-import akka.http.javadsl.server.Unmarshaller;
+import akka.http.javadsl.unmarshalling.Unmarshaller;
import akka.http.javadsl.server.directives.FileInfo;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.stream.javadsl.Framing;
@@ -14,8 +14,6 @@ import akka.stream.javadsl.Source;
import akka.util.ByteString;
import org.junit.Ignore;
import org.junit.Test;
-import scala.concurrent.duration.Duration;
-import scala.concurrent.duration.FiniteDuration;
import java.io.File;
import java.nio.charset.Charset;
@@ -24,7 +22,6 @@ import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletionStage;
-import java.util.concurrent.TimeUnit;
import java.util.function.BiFunction;
public class FileUploadDirectivesExamplesTest extends JUnitRouteTest {

@@ -7,7 +7,7 @@ import akka.http.javadsl.model.FormData;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.server.Route;
-import akka.http.javadsl.server.StringUnmarshallers;
+import akka.http.javadsl.unmarshalling.StringUnmarshallers;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.japi.Pair;
import org.junit.Test;

@@ -8,7 +8,7 @@ import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import akka.http.javadsl.model.HttpRequest;
-import akka.http.javadsl.server.Marshaller;
+import akka.http.javadsl.marshalling.Marshaller;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.http.scaladsl.model.StatusCodes;

@@ -6,7 +6,7 @@ package docs.http.javadsl.server.directives;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.server.Route;
-import akka.http.javadsl.server.Unmarshaller;
+import akka.http.javadsl.unmarshalling.Unmarshaller;
import akka.http.javadsl.testkit.JUnitRouteTest;
import org.junit.Test;

@@ -11,7 +11,7 @@ import akka.http.javadsl.model.headers.ContentRange;
import akka.http.javadsl.model.headers.Range;
import akka.http.javadsl.model.headers.RangeUnits;
import akka.http.javadsl.server.Route;
-import akka.http.javadsl.server.Unmarshaller;
+import akka.http.javadsl.unmarshalling.Unmarshaller;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.http.javadsl.testkit.TestRouteResult;
import akka.stream.ActorMaterializer;
@@ -19,6 +19,7 @@ import akka.util.ByteString;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import org.junit.Test;
+import static org.junit.Assert.assertEquals;
import java.util.ArrayList;
import java.util.List;
@@ -68,16 +69,16 @@ public class RangeDirectivesExamplesTest extends JUnitRouteTest {
try {
final List<Multipart.ByteRanges.BodyPart> bodyParts =
completionStage.toCompletableFuture().get(3, TimeUnit.SECONDS);
-assertResult(2, bodyParts.toArray().length);
+assertEquals(2, bodyParts.toArray().length);
final Multipart.ByteRanges.BodyPart part1 = bodyParts.get(0);
-assertResult(bytes028Range, part1.getContentRange());
-assertResult(ByteString.fromString("ABC"),
+assertEquals(bytes028Range, part1.getContentRange());
+assertEquals(ByteString.fromString("ABC"),
part1.toStrict(1000, materializer).toCompletableFuture().get().getEntity().getData());
final Multipart.ByteRanges.BodyPart part2 = bodyParts.get(1);
-assertResult(bytes678Range, part2.getContentRange());
-assertResult(ByteString.fromString("GH"),
+assertEquals(bytes678Range, part2.getContentRange());
+assertEquals(ByteString.fromString("GH"),
part2.toStrict(1000, materializer).toCompletableFuture().get().getEntity().getData());
} catch (Exception e) {

@@ -11,7 +11,7 @@ import akka.http.javadsl.ConnectHttp;
import akka.http.javadsl.Http;
import akka.http.javadsl.server.AllDirectives;
import akka.http.javadsl.server.Route;
-import akka.http.javadsl.server.StringUnmarshallers;
+import akka.http.javadsl.unmarshalling.StringUnmarshallers;
import akka.http.javadsl.server.examples.simple.SimpleServerApp;
import akka.stream.ActorMaterializer;

@@ -98,7 +98,7 @@ public class GraphDSLDocTest extends AbstractJavaTest {
);
// unconnected zip.out (!) => "The inlets [] and outlets [] must correspond to the inlets [] and outlets [ZipWith2.out]"
//#simple-graph
-fail("expected IllegalArgumentException");
+org.junit.Assert.fail("expected IllegalArgumentException");
} catch (IllegalArgumentException e) {
assertTrue(e != null && e.getMessage() != null && e.getMessage().contains("must correspond to"));
}

@@ -95,6 +95,40 @@ public class GraphStageDocTest extends AbstractJavaTest {
}
//#simple-source
//#simple-sink
public class StdoutSink extends GraphStage<SinkShape<Integer>> {
public final Inlet<Integer> in = Inlet.create("StdoutSink.in");
private final SinkShape<Integer> shape = SinkShape.of(in);
@Override
public SinkShape<Integer> shape() {
return shape;
}
@Override
public GraphStageLogic createLogic(Attributes inheritedAttributes) {
return new GraphStageLogic(shape()) {
// This requests one element at the Sink startup.
@Override
public void preStart() {
pull(in);
}
{
setHandler(in, new AbstractInHandler() {
@Override
public void onPush() throws Exception {
Integer element = grab(in);
System.out.println(element);
pull(in);
}
});
}
};
}
}
//#simple-sink
@Test
public void demonstrateCustomSourceUsage() throws Exception {
@@ -116,6 +150,14 @@ public class GraphStageDocTest extends AbstractJavaTest {
assertEquals(result2.toCompletableFuture().get(3, TimeUnit.SECONDS), (Integer) 5050);
}
@Test
public void demonstrateCustomSinkUsage() throws Exception {
Graph<SinkShape<Integer>, NotUsed> sinkGraph = new StdoutSink();
Sink<Integer, NotUsed> mySink = Sink.fromGraph(sinkGraph);
Source.from(Arrays.asList(1, 2, 3)).runWith(mySink, mat);
}
//#one-to-one
public class Map<A, B> extends GraphStage<FlowShape<A, B>> {

@@ -65,7 +65,7 @@ public class StreamBuffersRateDocTest extends AbstractJavaTest {
final Flow<Integer, Integer, NotUsed> flow1 =
Flow.of(Integer.class)
.map(elem -> elem * 2).async()
-.withAttributes(Attributes.inputBuffer(1, 1)); // the buffer size of this map is 1
+.addAttributes(Attributes.inputBuffer(1, 1)); // the buffer size of this map is 1
final Flow<Integer, Integer, NotUsed> flow2 =
flow1.via(
Flow.of(Integer.class)

@@ -100,8 +100,15 @@ public class RecipeByteStrings extends RecipeTest {
@Override
public void onUpstreamFinish() throws Exception {
if (buffer.isEmpty()) completeStage();
-// elements left in buffer, keep accepting downstream pulls
-// and push from buffer until buffer is emitted
+else {
+// There are elements left in buffer, so
+// we keep accepting downstream pulls and push from buffer until emptied.
+//
+// It might be though, that the upstream finished while it was pulled, in which
+// case we will not get an onPull from the downstream, because we already had one.
+// In that case we need to emit from the buffer.
+if (isAvailable(out)) emitChunk();
+}
}
});
}

@@ -44,6 +44,10 @@ And here's another example that uses the "thread-pool-executor":
.. includecode:: ../scala/code/docs/dispatcher/DispatcherDocSpec.scala#my-thread-pool-dispatcher-config
.. note::
The thread pool executor dispatcher is implemented using a ``java.util.concurrent.ThreadPoolExecutor``.
You can read more about it in the JDK's `ThreadPoolExecutor documentation`_.
For more options, see the default-dispatcher section of the :ref:`configuration`.
Then you create the actor as usual and define the dispatcher in the deployment configuration.
@@ -65,6 +69,7 @@ of programmatically provided parameter.
where you'd use periods to denote sub-sections, like this: ``"foo.bar.my-dispatcher"``
.. _ForkJoinPool documentation: https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ForkJoinPool.html
.. _ThreadPoolExecutor documentation: https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html
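To illustrate what the note refers to, here is a plain-JDK sketch of a bounded ``ThreadPoolExecutor`` (the pool sizes, queue choice, and task count are arbitrary; this is not the dispatcher's actual construction code):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadPoolExecutorDemo {
    // runs `tasks` no-op jobs on a fixed-size pool and returns how many completed
    static int runTasks(int tasks) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            2, 2,                          // core and max pool size
            60L, TimeUnit.SECONDS,         // keep-alive for idle threads
            new LinkedBlockingQueue<>());  // task queue feeding the pool
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < tasks; i++) {
            pool.execute(completed::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(10)); // prints 10
    }
}
```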
Types of dispatchers
--------------------

@@ -28,7 +28,7 @@ HTTP result can be found in ``WebSocketUpgradeResponse.response``
Message
-------
Messages sent and received over a WebSocket can be either :class:`TextMessage` s or :class:`BinaryMessage` s and each
-of those can be either strict (all data in one chunk) or streaming. In typical applications messages will be strict as
+of those can be either strict (all data in one chunk) or streamed. In typical applications messages will be strict as
WebSockets are usually deployed to communicate using small messages not stream data, the protocol does however
allow this (by not marking the first fragment as final, as described in `rfc 6455 section 5.2`__).
@@ -37,7 +37,7 @@ __ https://tools.ietf.org/html/rfc6455#section-5.2
The strict text is available from ``TextMessage.getStrictText`` and strict binary data from
``BinaryMessage.getStrictData``.
-For streaming messages ``BinaryMessage.getStreamedData`` and ``TextMessage.getStreamedText`` is used to access the data.
+For streamed messages ``BinaryMessage.getStreamedData`` and ``TextMessage.getStreamedText`` is used to access the data.
In these cases the data is provided as a ``Source<ByteString, NotUsed>`` for binary and ``Source<String, NotUsed>``
for text messages.

@@ -10,6 +10,8 @@ are left to the streaming APIs and are easily implementable as patterns in user-
Common timeouts
---------------
.. _idle-timeouts-java:
Idle timeouts
^^^^^^^^^^^^^
@@ -22,8 +24,8 @@ independently for each of those using the following keys::
akka.http.server.idle-timeout
akka.http.client.idle-timeout
-akka.http.http-connection-pool.idle-timeout
-akka.http.http-connection-pool.client.idle-timeout
+akka.http.host-connection-pool.idle-timeout
+akka.http.host-connection-pool.client.idle-timeout
.. note::
For the connection pooled client side the idle period is counted only when the pool has no pending requests waiting.
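For illustration, overriding a couple of these settings in ``application.conf`` might look like the following sketch (the timeout values here are made up, not recommendations):

```hocon
akka.http {
  server.idle-timeout = 60 s
  client.idle-timeout = 60 s
  host-connection-pool.client.idle-timeout = 30 s
}
```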

View file

@ -23,12 +23,12 @@ Client-Side handling of streaming HTTP Entities
Consuming the HTTP Response Entity (Client)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The most commong use-case of course is consuming the response entity, which can be done via
The most common use-case of course is consuming the response entity, which can be done via
running the underlying ``dataBytes`` Source. This is as simple as running the dataBytes source,
(or on the server-side using directives such as
It is encouraged to use various streaming techniques to utilise the underlying infrastructure to its fullest,
for example by framing the incoming chunks, parsing them line-by-line and the connecting the flow into another
for example by framing the incoming chunks, parsing them line-by-line and then connecting the flow into another
destination Sink, such as a File or other Akka Streams connector:
.. includecode:: ../code/docs/http/javadsl/HttpClientExampleDocTest.java#manual-entity-consume-example-1
@ -108,7 +108,7 @@ Closing connections is also explained in depth in the :ref:`http-closing-connect
Pending: Automatic discarding of not used entities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Under certin conditions is is possible to detect an entity is very unlikely to be used by the user for a given request,
Under certain conditions it is possible to detect an entity is very unlikely to be used by the user for a given request,
and issue warnings or discard the entity automatically. This advanced feature has not been implemented yet, see the below
note and issues for further discussion and ideas.

View file

@ -40,5 +40,6 @@ akka-http-jackson
implications-of-streaming-http-entity
configuration
server-side-https-support
../../scala/http/migration-guide-2.4.x-experimental
.. _jackson: https://github.com/FasterXML/jackson

View file

@ -0,0 +1,19 @@
.. _-extractActorSystem-java-:
extractActorSystem
==================
Description
-----------
Extracts the ``ActorSystem`` from the ``RequestContext``, which can be useful when the external API
in your route needs one.
.. warning::
This is only supported when the available Materializer is an ActorMaterializer.
Example
-------
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractActorSystem

View file

@ -0,0 +1,16 @@
.. _-extractDataBytes-java-:
extractDataBytes
================
Description
-----------
Extracts the entity's data bytes as ``Source[ByteString, Any]`` from the :class:`RequestContext`.
The directive returns a stream containing the request data bytes.
Example
-------
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractDataBytes

View file

@ -0,0 +1,17 @@
.. _-extractRequestEntity-java-:
extractRequestEntity
====================
Description
-----------
Extracts the ``RequestEntity`` from the :class:`RequestContext`.
The directive returns a ``RequestEntity`` without unmarshalling the request. To extract a domain entity,
:ref:`-entity-java-` should be used.
Example
-------
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractRequestEntity

View file

@ -0,0 +1,23 @@
.. _-extractStrictEntity-java-:
extractStrictEntity
===================
Description
-----------
Extracts the strict HTTP entity as ``HttpEntity.Strict`` from the :class:`RequestContext`.
A timeout parameter is given and if the stream isn't completed after the timeout, the directive will be failed.
.. warning::
The directive will read the request entity into memory within the size limit (8M by default) and effectively disable streaming.
The size limit can be configured globally with ``akka.http.parsing.max-content-length`` or
overridden by wrapping with :ref:`-withSizeLimit-java-` or :ref:`-withoutSizeLimit-java-` directive.
Example
-------
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractStrictEntity

View file

@ -17,11 +17,15 @@ on two axes: a) provide a constant value or extract a value from the ``RequestCo
a single value or a tuple of values.
* :ref:`-extract-java-`
* :ref:`-extractActorSystem-java-`
* :ref:`-extractDataBytes-java-`
* :ref:`-extractExecutionContext-java-`
* :ref:`-extractMaterializer-java-`
* :ref:`-extractStrictEntity-java-`
* :ref:`-extractLog-java-`
* :ref:`-extractRequest-java-`
* :ref:`-extractRequestContext-java-`
* :ref:`-extractRequestEntity-java-`
* :ref:`-extractSettings-java-`
* :ref:`-extractUnmatchedPath-java-`
* :ref:`-extractUri-java-`
@ -41,6 +45,7 @@ Transforming the Request(Context)
* :ref:`-withMaterializer-java-`
* :ref:`-withLog-java-`
* :ref:`-withSettings-java-`
* :ref:`-toStrictEntity-java-`
.. _Response Transforming Directives-java:
@ -91,11 +96,15 @@ Alphabetically
cancelRejection
cancelRejections
extract
extractActorSystem
extractDataBytes
extractExecutionContext
extractMaterializer
extractStrictEntity
extractLog
extractRequest
extractRequestContext
extractRequestEntity
extractSettings
extractUnmatchedPath
extractUri
@ -117,6 +126,7 @@ Alphabetically
provide
recoverRejections
recoverRejectionsWith
toStrictEntity
withExecutionContext
withMaterializer
withLog

View file

@ -0,0 +1,23 @@
.. _-toStrictEntity-java-:
toStrictEntity
==============
Description
-----------
Transforms the request entity to a strict entity before it is handled by the inner route.
A timeout parameter is given and if the stream isn't completed after the timeout, the directive will be failed.
.. warning::
The directive will read the request entity into memory within the size limit (8M by default) and effectively disable streaming.
The size limit can be configured globally with ``akka.http.parsing.max-content-length`` or
overridden by wrapping with :ref:`-withSizeLimit-java-` or :ref:`-withoutSizeLimit-java-` directive.
Example
-------
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#toStrictEntity

View file

@ -14,3 +14,10 @@ See :ref:`-logRequest-java-` for the general description how these directives wo
Example
-------
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/DebuggingDirectivesExamplesTest.java#logRequestResult
Longer Example
--------------
This example shows how to log the response time of the request using debugging directives.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/DebuggingDirectivesExamplesTest.java#logRequestResultWithResponseTime

View file

@ -17,3 +17,9 @@ print what type of request it was - independent of what actual HttpMethod it was
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/MethodDirectivesExamplesTest.java#extractMethod
Custom Http Method
------------------
When you define a custom ``HttpMethod``, you can define a route using ``extractMethod``.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/CustomHttpMethodExamplesTest.java#customHttpMethod

View file

@ -18,6 +18,7 @@ To use the high-level API you need to add a dependency to the ``akka-http-experi
directives/index
marshalling
exception-handling
source-streaming-support
rejections
testkit
@ -51,7 +52,6 @@ in the :ref:`exception-handling-java` section of the documtnation. You can use t
File uploads
^^^^^^^^^^^^
TODO not possible in Java DSL since there
For high level directives to handle uploads see the :ref:`FileUploadDirectives-java`.

View file

@ -13,7 +13,7 @@ of an HTTP request or response (depending on whether used on the client or serve
Marshalling
-----------
On the server-side marshalling is used to convert a application-domain object to a response (entity). Requests can
On the server-side marshalling is used to convert an application-domain object to a response (entity). Requests can
contain an ``Accept`` header that lists acceptable content types for the client. A marshaller contains the logic to
negotiate the result content types based on the ``Accept`` and the ``AcceptCharset`` headers.
@ -30,7 +30,7 @@ These marshallers are provided by akka-http:
Unmarshalling
-------------
On the server-side unmarshalling is used to convert a request (entity) to a application-domain object. This is done
On the server-side unmarshalling is used to convert a request (entity) to an application-domain object. This is done
in the ``MarshallingDirectives.request`` or ``MarshallingDirectives.entity`` directive. There are several unmarshallers
provided by akka-http:

View file

@ -0,0 +1,91 @@
.. _json-streaming-java:
Source Streaming
================
Akka HTTP supports completing a request with an Akka ``Source<T, _>``, which makes it possible to easily build
and consume streaming end-to-end APIs which apply back-pressure throughout the entire stack.
It is possible to complete requests with a raw ``Source<ByteString, _>``, however it is often more convenient to
stream on an element-by-element basis, and allow Akka HTTP to handle the rendering internally - for example as a JSON array,
or CSV stream (where each element is separated by a new-line).
In the following sections we investigate how to make use of the JSON Streaming infrastructure,
however the general hints apply to any kind of element-by-element streaming you could imagine.
JSON Streaming
==============
`JSON Streaming`_ is a term referring to streaming a (possibly infinite) stream of elements as independent JSON
objects in a continuous HTTP request or response. The elements are most often separated using newlines,
however they do not have to be. Concatenating elements side-by-side or emitting a "very long" JSON array are other
use cases.
In the below examples, we'll be referring to the ``Tweet`` and ``Measurement`` classes as our model, which are defined as:
.. includecode:: ../../code/docs/http/javadsl/server/JsonStreamingExamplesTest.java#models
.. _Json Streaming: https://en.wikipedia.org/wiki/JSON_Streaming
Responding with JSON Streams
----------------------------
In this example we implement an API representing an infinite stream of tweets, very much like Twitter's `Streaming API`_.
Firstly, we'll need to get some additional marshalling infrastructure set up, that is able to marshal to and from an
Akka Streams ``Source<T,_>``. Here we'll use the ``Jackson`` helper class from ``akka-http-jackson`` (a separate library
that you should add as a dependency if you want to use Jackson with Akka HTTP).
First we enable JSON Streaming by making an implicit ``EntityStreamingSupport`` instance available (Step 1).
The default mode of rendering a ``Source`` is to represent it as a JSON Array. If you want to change this representation
for example to use Twitter style new-line separated JSON objects, you can do so by configuring the support trait accordingly.
In Step 1.1. we demonstrate how to configure the rendering to be new-line separated, and also how parallel marshalling
can be applied. We configure the support object to render the JSON as a series of new-line separated JSON objects,
simply by providing the ``start``, ``sep`` and ``end`` ByteStrings, which will be emitted at the appropriate
places in the rendered stream. Although this format is *not* valid JSON, it is pretty popular since parsing it is relatively
simple - clients need only to find the new-lines and apply JSON unmarshalling for an entire line of JSON.
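As a rough, stdlib-only sketch (not the actual Akka HTTP renderer), the ``start``/``sep``/``end`` rendering described above amounts to the following; the class and method names are hypothetical:

```java
import java.util.List;

public class FramingSketch {
    // Hypothetical helper mirroring the start/sep/end rendering described above.
    static String render(List<String> jsonObjects, String start, String sep, String end) {
        return start + String.join(sep, jsonObjects) + end;
    }

    public static void main(String[] args) {
        List<String> tweets = List.of("{\"id\":1}", "{\"id\":2}");
        // Default rendering: a JSON array.
        System.out.println(render(tweets, "[", ",", "]"));
        // Twitter-style rendering: new-line separated JSON objects.
        System.out.println(render(tweets, "", "\n", ""));
    }
}
```

The newline-separated output is not valid JSON as a whole, but each line is, which is what makes line-by-line client parsing so simple.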
The final step is simply completing a request using a Source of tweets:
.. includecode:: ../../code/docs/http/javadsl/server/JsonStreamingExamplesTest.java#response-streaming
.. _Streaming API: https://dev.twitter.com/streaming/overview
Consuming JSON Streaming uploads
--------------------------------
Sometimes the client may be sending a streaming request, for example an embedded device that initiated a connection with
the server and is feeding it measurement data one line at a time.
In this example, we want to consume this data in a streaming fashion from the request entity, and also apply
back-pressure to the underlying TCP connection, if the server cannot cope with the rate of incoming data (back-pressure
will be applied automatically thanks to using Akka HTTP/Streams).
.. includecode:: ../../code/docs/http/javadsl/server/JsonStreamingExamplesTest.java#formats
.. includecode:: ../../code/docs/http/javadsl/server/JsonStreamingExamplesTest.java#incoming-request-streaming
Simple CSV streaming example
----------------------------
Akka HTTP provides another ``EntityStreamingSupport`` out of the box, namely ``csv`` (comma-separated values).
For completeness, we demonstrate its usage in the below snippet. As you'll notice, switching between streaming
modes is fairly simple, one only has to make sure that an implicit ``Marshaller`` of the requested type is available,
and that the streaming support operates on the same ``Content-Type`` as the rendered values. Otherwise you'll see
an error at runtime saying that the marshaller did not expose the expected content type and thus we cannot render
the streaming response.
.. includecode:: ../../code/docs/http/javadsl/server/JsonStreamingExamplesTest.java#csv-example
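To illustrate the per-element rendering (a simplified sketch, not the actual Akka implementation — real CSV rendering also needs quoting/escaping of fields containing commas):

```java
public class CsvRowSketch {
    // Simplified per-element rendering: join the element's fields with commas
    // and terminate the row with a newline, as the csv streaming support does
    // for each element of the stream.
    static String toCsvRow(String... fields) {
        return String.join(",", fields) + "\n";
    }

    public static void main(String[] args) {
        System.out.print(toCsvRow("1", "23.5", "2016-08-23"));
    }
}
```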
Implementing custom EntityStreamingSupport traits
-------------------------------------------------
The ``EntityStreamingSupport`` infrastructure is open for extension and not bound to any single format, content type
or marshalling library. The provided JSON support does not rely on Spray JSON directly, but uses ``Marshaller<T, ByteString>``
instances, which can be provided using any JSON marshalling library (such as Circe, Jawn or Play JSON).
When implementing a custom support trait, one should simply extend the ``EntityStreamingSupport`` abstract class,
and implement all of its methods. It's best to use the existing implementations as a guideline.

View file

@ -21,6 +21,8 @@ For detailed documentation for client-side HTTPS support refer to :ref:`clientSi
.. _akka.http.javadsl.Http: https://github.com/akka/akka/blob/master/akka-http-core/src/main/scala/akka/http/javadsl/Http.scala
.. _ssl-config-java:
SSL-Config
----------
@ -57,6 +59,8 @@ keystores using the JDK keytool utility can be found `here <https://docs.oracle.
SSL-Config provides a more targeted guide on generating certificates, so we recommend you start with the guide
titled `Generating X.509 Certificates <http://typesafehub.github.io/ssl-config/CertificateGeneration.html>`_.
.. _using-https-java:
Using HTTPS
-----------
@ -64,11 +68,34 @@ Once you have obtained the server certificate, using it is as simple as preparin
and either setting it as the default one to be used by all servers started by the given ``Http`` extension
or passing it in explicitly when binding the server.
The below example shows how setting up HTTPS works when using the ``akka.http.javadsl.server.HttpApp`` convenience class:
The below example shows how setting up HTTPS works when using the ``akka.http.javadsl.server.HttpApp`` convenience class.
Firstly you will create and configure an instance of ``akka.http.javadsl.HttpsConnectionContext``:
.. includecode2:: ../../../../akka-http-tests/src/main/java/akka/http/javadsl/server/examples/simple/SimpleServerApp.java
:snippet: https-http-config
Then pass it to ``akka.http.javadsl.Http`` class's ``setDefaultServerHttpContext`` method, like in the below ``main`` method.
.. includecode2:: ../../../../akka-http-tests/src/main/java/akka/http/javadsl/server/examples/simple/SimpleServerApp.java
:snippet: https-http-app
Running both HTTP and HTTPS
---------------------------
If you want to run HTTP and HTTPS servers in a single application, you can call ``bind...`` methods twice,
once for HTTPS and once for HTTP.
When configuring HTTPS, you can set it up as explained in the above :ref:`using-https-java` section,
.. includecode2:: ../../../../akka-http-tests/src/main/java/akka/http/javadsl/server/examples/simple/SimpleServerApp.java
:snippet: https-http-config
or via :ref:`ssl-config-java` (not explained here though).
Then, call ``bind...`` methods twice like below.
The below ``SimpleServerApp.useHttps(system)`` call uses the ``public static HttpsConnectionContext useHttps(ActorSystem system)`` method defined above.
.. includecode2:: ../../../../akka-http-tests/src/main/java/akka/http/javadsl/server/examples/simple/SimpleServerHttpHttpsApp.java
:snippet: both-https-and-http
Further reading
---------------

View file

@ -40,7 +40,7 @@ the notion of a "strict" message to represent cases where a whole message was re
When receiving data from the network connection the WebSocket implementation tries to create a strict message whenever
possible, i.e. when the complete data was received in one chunk. However, the actual chunking of messages over a network
connection and through the various streaming abstraction layers is not deterministic from the perspective of the
application. Therefore, application code must be able to handle both streaming and strict messages and not expect
application. Therefore, application code must be able to handle both streamed and strict messages and not expect
certain messages to be strict. (Particularly, note that tests against ``localhost`` will behave differently than tests
against remote peers where data is received over a physical network connection.)
@ -104,6 +104,11 @@ and then responds with another text message that contains a greeting:
.. includecode:: ../../code/docs/http/javadsl/server/WebSocketCoreExample.java
:include: websocket-handler
.. note::
Inactive WebSocket connections will be dropped according to the :ref:`idle-timeout settings <idle-timeouts-java>`.
In case you need to keep inactive connections alive, you can either tweak your idle-timeout or inject
'keep-alive' messages regularly.
Routing support
---------------

View file

@ -189,7 +189,7 @@ For back-pressuring writes there are three modes of operation
These write models (with the exception of the second which is rather specialised) are
demonstrated in complete examples below. The full and contiguous source is
available `on github <@github@/akka-docs/rst/java/code/docs/io/japi>`_.
available `on GitHub <@github@/akka-docs/rst/java/code/docs/io/japi>`_.
For back-pressuring reads there are two modes of operation

View file

@ -211,7 +211,7 @@ The Inbox
---------
When writing code outside of actors which shall communicate with actors, the
``ask`` pattern can be a solution (see below), but there are two thing it
``ask`` pattern can be a solution (see below), but there are two things it
cannot do: receiving multiple replies (e.g. by subscribing an :class:`ActorRef`
to a notification service) and watching other actors' lifecycles. For these
purposes there is the :class:`Inbox` class:

View file

@ -9,7 +9,7 @@ Overview
========
The FSM (Finite State Machine) is available as an abstract base class that implements
an akka Actor and is best described in the `Erlang design principles
an Akka Actor and is best described in the `Erlang design principles
<http://www.erlang.org/documentation/doc-4.8.2/doc/design_principles/fsm.html>`_
A FSM can be described as a set of relations of the form:

View file

@ -33,16 +33,12 @@ and systems design much simpler you may want to read Pat Helland's excellent `Im
Since with `Event Sourcing`_ the **events are immutable** and usually never deleted the way schema evolution is handled
differs from how one would go about it in a mutable database setting (e.g. in typical CRUD database applications).
The system needs to be able to continue to work in the presence of "old" events which were stored under the "old" schema.
We also want to limit complexity in the business logic layer, exposing a consistent view over all of the events of a given
type to :class:`PersistentActor` s and :ref:`persistence queries <persistence-query-java>`. This allows the business logic layer to focus on solving business problems
instead of having to explicitly deal with different schemas.
The system needs to be able to continue to work in the presence of "old" events which were stored under the "old" schema,
and we want to limit the complexity to the data layer, exposing a consistent view over all of the events of a given type
to :class:`PersistentActor` s and persistence queries, which allows these layers to focus on the business problems instead
handling the different schemas explicitly in the business logic layers.
In summary, schema evolution in event sourced systems exposes the following characteristics:
- Allow the system to continue operating without large scale migrations to be applied,

View file

@ -490,7 +490,7 @@ Akka behind NAT or in a Docker container
----------------------------------------
In setups involving Network Address Translation (NAT), Load Balancers or Docker
containers the hostname and port pair that akka binds to will be different than the "logical"
containers the hostname and port pair that Akka binds to will be different than the "logical"
host name and port pair that is used to connect to the system from the outside. This requires
special configuration that sets both the logical and the bind pairs for remoting.

View file

@ -46,7 +46,9 @@ outside of actors.
.. note::
In general, any message sent to a router will be sent onwards to its routees, but there is one exception.
The special :ref:`broadcast-messages-java` will send to *all* of a router's routees
The special :ref:`broadcast-messages-java` will send to *all* of a router's routees.
However, do not use :ref:`broadcast-messages-java` when you use :ref:`balancing-pool-java` for routees
as described in :ref:`router-special-messages-java`.
A Router Actor
^^^^^^^^^^^^^^
@ -62,11 +64,11 @@ This type of router actor comes in two distinct flavors:
* Group - The routee actors are created externally to the router and the router sends
messages to the specified path using actor selection, without watching for termination.
The settings for a router actor can be defined in configuration or programmatically.
Although router actors can be defined in the configuration file, they must still be created
programmatically, i.e. you cannot make a router through external configuration alone.
If you define the router actor in the configuration file then these settings will be used
instead of any programmatically provided parameters.
The settings for a router actor can be defined in configuration or programmatically.
In order to make an actor make use of an externally configurable router, the ``FromConfig`` props wrapper must be used
to denote that the actor accepts routing settings from configuration.
This is in contrast with Remote Deployment where such marker props is not necessary.
If the props of an actor is NOT wrapped in ``FromConfig`` it will ignore the router section of the deployment configuration.
You send messages to the routees via the router actor in the same way as for ordinary actors,
i.e. via its ``ActorRef``. The router actor forwards messages onto its routees without changing
@ -276,6 +278,10 @@ All routees share the same mailbox.
replying to the original client. The other advantage is that it does not place
a restriction on the message queue implementation as BalancingPool does.
.. note::
Do not use :ref:`broadcast-messages-java` when you use :ref:`balancing-pool-java` for routees,
as described in :ref:`router-special-messages-java`.
BalancingPool defined in configuration:
.. includecode:: ../scala/code/docs/routing/RouterDocSpec.scala#config-balancing-pool
@ -305,6 +311,20 @@ with a ``thread-pool-executor`` hinting the number of allocated threads explicit
.. includecode:: ../scala/code/docs/routing/RouterDocSpec.scala#config-balancing-pool3
It is also possible to change the ``mailbox`` used by the balancing dispatcher for
scenarios where the default unbounded mailbox is not well suited. An example of such
a scenario could arise where there is a need to manage priority for each message.
You can then implement a priority mailbox and configure your dispatcher:
.. includecode:: ../scala/code/docs/routing/RouterDocSpec.scala#config-balancing-pool4
.. note::
Bear in mind that ``BalancingDispatcher`` requires a message queue that must be thread-safe for
multiple concurrent consumers. So it is mandatory for the message queue backing a custom mailbox
for this kind of dispatcher to implement ``akka.dispatch.MultipleConsumerSemantics``. See details
on how to implement your custom mailbox in :ref:`mailboxes-java`.
There is no Group variant of the BalancingPool.
SmallestMailboxPool
@ -522,6 +542,11 @@ In this example the router receives the ``Broadcast`` message, extracts its payl
(``"Watch out for Davy Jones' locker"``), and then sends the payload on to all of the router's
routees. It is up to each routee actor to handle the received payload message.
.. note::
Do not use :ref:`broadcast-messages-java` when you use :ref:`balancing-pool-java` for routees.
Routees on :ref:`balancing-pool-java` share the same mailbox instance, thus some routees may
get the broadcast message multiple times, while other routees get no broadcast message.
PoisonPill Messages
-------------------

View file

@ -35,6 +35,14 @@ one of which all other candidates are superclasses. If this condition cannot be
met, because e.g. ``java.io.Serializable`` and ``MyOwnSerializable`` both apply
and neither is a subtype of the other, a warning will be issued.
.. note::
If you are using Scala for your message protocol and your messages are contained
inside of a Scala object, then in order to reference those messages, you will need
to use the fully qualified Java class name. For a message named ``Message`` contained inside
the Scala object named ``Wrapper`` you would need to reference it as
``Wrapper$Message`` instead of ``Wrapper.Message``.
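The ``$`` form mirrors the JVM binary name of nested classes, which can be checked with plain Java (the ``Outer``/``Message`` names here are hypothetical, standing in for the Scala object and its message):

```java
public class Outer {
    public static class Message {}

    public static void main(String[] args) {
        // The JVM binary name of a nested class uses '$', not '.',
        // which is the form serialization bindings must reference.
        System.out.println(Message.class.getName()); // prints Outer$Message
    }
}
```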
Akka provides serializers for :class:`java.io.Serializable` and `protobuf
<http://code.google.com/p/protobuf/>`_
:class:`com.google.protobuf.GeneratedMessage` by default (the latter only if

View file

@ -42,7 +42,7 @@ completion but there is no actual value attached to the completion. It is used t
occurrences of ``Future<BoxedUnit>`` with ``Future<Done>`` in Java and ``Future[Unit]`` with
``Future[Done]`` in Scala.
All previous usage of ``Unit`` and ``BoxedUnit`` for these two cases in the akka streams APIs
All previous usage of ``Unit`` and ``BoxedUnit`` for these two cases in the Akka Streams APIs
has been updated.
This means that Java code like this::
@ -136,8 +136,8 @@ IO Sources / Sinks materialize IOResult
Materialized values of the following sources and sinks:
* ``FileIO.fromFile``
* ``FileIO.toFile``
* ``FileIO.fromPath``
* ``FileIO.toPath``
* ``StreamConverters.fromInputStream``
* ``StreamConverters.fromOutputStream``

View file

@ -323,6 +323,23 @@ Invoke a callback when the stream has completed or failed.
**backpressures** never
lazyInit
^^^^^^^^
Invoke the ``sinkFactory`` function to create a real sink upon receiving the first element. The internal ``Sink`` will not be created if there are no elements,
because of completion or error. The ``fallback`` will be invoked if no elements were received before upstream completed.
**cancels** never
**backpressures** when initialized and when created sink backpressures
queue
^^^^^
Materialize a ``SinkQueue`` that can be pulled to trigger demand through the sink. The queue contains
a buffer in case the stream emits elements faster than the queue is pulled.
**cancels** when ``SinkQueue.cancel`` is called
**backpressures** when buffer has some space
fold
^^^^
@ -570,6 +587,17 @@ it returns false the element is discarded.
**completes** when upstream completes
filterNot
^^^^^^^^
Filter the incoming elements using a predicate. If the predicate returns false the element is passed downstream, if
it returns true the element is discarded.
**emits** when the given predicate returns false for the element
**backpressures** when the given predicate returns false for the element and downstream backpressures
**completes** when upstream completes
collect
^^^^^^^
Apply a partial function to each incoming element, if the partial function is defined for a value the returned
@ -630,6 +658,17 @@ complete the current value is emitted downstream.
**completes** when upstream completes
reduce
^^^^^^
Start with the first element and then apply the current and next value to the given function; when upstream
completes the current value is emitted downstream. Similar to ``fold``.
**emits** when upstream completes
**backpressures** when downstream backpressures
**completes** when upstream completes
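As a rough, non-streaming analogy (using ``java.util.stream``, not Akka Streams), ``reduce`` behaves like a fold seeded with the first element, and its result only becomes available once the input completes:

```java
import java.util.List;

public class ReduceSketch {
    public static void main(String[] args) {
        // reduce: combine the running value with each next element;
        // with no seed, the first element initializes the accumulator.
        int sum = List.of(1, 2, 3, 4).stream()
            .reduce((acc, next) -> acc + next)
            .orElseThrow(IllegalStateException::new);
        System.out.println(sum); // 10
    }
}
```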
drop
^^^^
Drop ``n`` elements and then pass any subsequent element downstream.
@ -713,6 +752,59 @@ a function has to be provided to calculate the individual cost of each element.
**completes** when upstream completes
intersperse
^^^^^^^^^^^
Intersperse the stream with the provided element, similar to ``List.mkString``. It can also inject start and end marker elements into the stream.
**emits** when upstream emits an element, or before that with the ``start`` element if provided
**backpressures** when downstream backpressures
**completes** when upstream completes
limit
^^^^^
Limit the number of elements from upstream to the given ``max`` number.
**emits** when upstream emits and the number of emitted elements has not reached max
**backpressures** when downstream backpressures
**completes** when upstream completes and the number of emitted elements has not reached max
limitWeighted
^^^^^^^^^^^^^
Ensure stream boundedness by evaluating the cost of incoming elements using a cost function.
The evaluated cost of each element defines how many elements will be allowed to travel downstream.
**emits** when upstream emits and the number of emitted elements has not reached max
**backpressures** when downstream backpressures
**completes** when upstream completes and the number of emitted elements has not reached max
log
^^^
Log elements flowing through the stream as well as completion and failure. By default element and
completion signals are logged on debug level, and errors are logged on error level.
This can be changed by calling ``Attributes.createLogLevels(...)`` on the given Flow.
**emits** when upstream emits
**backpressures** when downstream backpressures
**completes** when upstream completes
recoverWithRetries
^^^^^^^^^^^^^^^^^^
Switch to alternative Source on flow failure. It stays in effect after a failure has been recovered up to ``attempts``
number of times. Each time a failure is fed into the partial function and a new Source may be materialized.
**emits** when element is available from the upstream or upstream is failed and element is available from alternative Source
**backpressures** when downstream backpressures
**completes** when upstream completes, or upstream failed with an exception the partial function can handle
Asynchronous processing stages
@ -1217,6 +1309,17 @@ Fan-out the stream to several streams. Each upstream element is emitted to the f
**completes** when upstream completes
partition
^^^^^^^^^
Fan-out the stream to several streams. Each upstream element is emitted to one downstream consumer according to the
partitioner function applied to the element.
**emits** when the chosen output stops backpressuring and there is an input element available
**backpressures** when the chosen output backpressures
**completes** when upstream completes and no output is pending
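As a rough, non-streaming analogy (plain ``java.util.stream``, not the actual Akka Streams ``Partition`` stage), the partitioner function maps each element to the index of the output it should be routed to:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class PartitionSketch {
    public static void main(String[] args) {
        // The partitioner (n -> n % 2) chooses an output index per element:
        // index 0 receives the even numbers, index 1 the odd ones.
        Map<Integer, List<Integer>> outputs = List.of(1, 2, 3, 4, 5).stream()
            .collect(Collectors.groupingBy(n -> n % 2));
        System.out.println(outputs);
    }
}
```

In the real stage each "output" is a separate downstream with its own back-pressure, which is why the stage only emits when the chosen output stops backpressuring.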
Watching status stages
----------------------

View file

@ -58,6 +58,14 @@ source as any other built-in one:
.. includecode:: ../code/docs/stream/GraphStageDocTest.java#simple-source-usage
Similarly, to create a custom :class:`Sink` one can register a subclass :class:`InHandler` with the stage :class:`Inlet`.
The ``onPush()`` callback is used to signal to the handler that a new element has been pushed to the stage,
and can hence be grabbed and used. ``onPush()`` can be overridden to provide custom behaviour.
Please note, most Sinks would need to request upstream elements as soon as they are created: this can be
done by calling ``pull(inlet)`` in the ``preStart()`` callback.
.. includecode:: ../code/docs/stream/GraphStageDocTest.java#simple-sink
Port states, AbstractInHandler and AbstractOutHandler
-----------------------------------------------------
@ -105,7 +113,7 @@ The following operations are available for *input* ports:
The events corresponding to an *input* port can be received in an :class:`AbstractInHandler` instance registered to the
input port using ``setHandler(in, handler)``. This handler has three callbacks:
* ``onPush()`` is called when the output port has now a new element. Now it is possible to acquire this element using
* ``onPush()`` is called when the input port has now a new element. Now it is possible to acquire this element using
``grab(in)`` and/or call ``pull(in)`` on the port to request the next element. It is not mandatory to grab the
element, but if the port is pulled while the element has not been grabbed, the buffered element will be dropped.
* ``onUpstreamFinish()`` is called once the upstream has completed and no longer can be pulled for new elements.
@ -138,7 +146,7 @@ Finally, there are two methods available for convenience to complete the stage a
In some cases it is inconvenient and error prone to react on the regular state machine events with the
signal based API described above. For those cases there is a API which allows for a more declarative sequencing
signal based API described above. For those cases there is an API which allows for a more declarative sequencing
of actions which will greatly simplify some use cases at the cost of some extra allocations. The difference
between the two APIs could be described as that the first one is signal driven from the outside, while this API
is more active and drives its surroundings.
View file
@ -1,10 +1,10 @@
.. _stream-dynamic-scala:
.. _stream-dynamic-java:
#######################
Dynamic stream handling
#######################
.. _kill-switch-scala:
.. _kill-switch-java:
Controlling graph completion with KillSwitch
--------------------------------------------
@ -24,7 +24,7 @@ Graph completion is performed by both
A ``KillSwitch`` can control the completion of one or multiple streams, and therefore comes in two different flavours.
.. _unique-kill-switch-scala:
.. _unique-kill-switch-java:
UniqueKillSwitch
^^^^^^^^^^^^^^^^
@ -40,7 +40,7 @@ below for usage examples.
.. includecode:: ../code/docs/stream/KillSwitchDocTest.java#unique-abort
.. _shared-kill-switch-scala:
.. _shared-kill-switch-java:
SharedKillSwitch
^^^^^^^^^^^^^^^^
View file
@ -15,7 +15,7 @@ For more advanced use cases the :class:`ActorPublisher` and :class:`ActorSubscri
provided to support implementing Reactive Streams :class:`Publisher` and :class:`Subscriber` with
an :class:`Actor`.
These can be consumed by other Reactive Stream libraries or used as a
These can be consumed by other Reactive Stream libraries or used as an
Akka Streams :class:`Source` or :class:`Sink`.
.. warning::
View file
@ -100,7 +100,7 @@ Akka Streams provide simple Sources and Sinks that can work with :class:`ByteStr
on files.
Streaming data from a file is as easy as creating a `FileIO.fromFile` given a target file, and an optional
Streaming data from a file is as easy as creating a `FileIO.fromPath` given a target path, and an optional
``chunkSize`` which determines the size of the buffer treated as one "element" in such a stream:
.. includecode:: ../code/docs/stream/io/StreamFileDocTest.java#file-source
View file
@ -90,7 +90,7 @@ accepts strings as its input and when materialized it will create auxiliary
information of type ``CompletionStage<IOResult>`` (when chaining operations on
a :class:`Source` or :class:`Flow` the type of the auxiliary information—called
the “materialized value”—is given by the leftmost starting point; since we want
to retain what the ``FileIO.toFile`` sink has to offer, we need to say
to retain what the ``FileIO.toPath`` sink has to offer, we need to say
``Keep.right()``).
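In condensed form, such a sink can be sketched like this (the file name is illustrative):

```java
// Sketch: a Sink that accepts Strings and materializes the IOResult of the
// underlying file sink, because we keep the right (sink-side) value.
final Sink<String, CompletionStage<IOResult>> lines =
  Flow.of(String.class)
    .map(s -> ByteString.fromString(s + "\n"))
    .toMat(FileIO.toPath(Paths.get("example.txt")), Keep.right());
```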
We can use the new and shiny :class:`Sink` we just created by
View file
lies in interfacing between the private sphere and the public, but you don't want
that many doors inside your house, do you? For a longer discussion see `this
blog post <http://letitcrash.com/post/19074284309/when-to-use-typedactors>`_.
A bit more background: TypedActors can very easily be abused as RPC, and that
A bit more background: TypedActors can easily be abused as RPC, and that
is an abstraction which is `well-known
<http://doc.akka.io/docs/misc/smli_tr-94-29.pdf>`_
to be leaky. Hence TypedActors are not what we think of first when we talk
View file
@ -165,7 +165,7 @@ The Inbox
---------
When writing code outside of actors which shall communicate with actors, the
``ask`` pattern can be a solution (see below), but there are two thing it
``ask`` pattern can be a solution (see below), but there are two things it
cannot do: receiving multiple replies (e.g. by subscribing an :class:`ActorRef`
to a notification service) and watching other actors' lifecycle. For these
purposes there is the :class:`Inbox` class:
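In sketch form (the ``target`` actor and the timeout are illustrative), the ``Inbox`` covers both cases: its ``ActorRef`` obtained via ``getRef()`` can be subscribed to a notification service, and other actors can be death-watched with ``watch``:

```java
// Sketch: communicate with actors from outside of any actor.
final Inbox inbox = Inbox.create(system);

inbox.send(target, "hello");   // replies are sent to inbox.getRef()
inbox.watch(target);           // a Terminated message arrives when target stops

// blocks for up to 3 seconds; throws TimeoutException if nothing arrives
final Object reply = inbox.receive(Duration.create(3, TimeUnit.SECONDS));
```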