diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index ab1d331da9..4d41c6a848 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -2,17 +2,7 @@ In case of questions about the contribution process or for discussion of specific issues please visit the [akka/dev gitter chat](https://gitter.im/akka/dev). -## Infrastructure - -* [Akka Contributor License Agreement](http://www.lightbend.com/contribute/cla) -* [Akka Issue Tracker](http://doc.akka.io/docs/akka/current/project/issue-tracking.html) -* [Scalariform](https://github.com/daniel-trinh/scalariform) - -# Lightbend Project & Developer Guidelines - -These guidelines are meant to be a living document that should be changed and adapted as needed. We encourage changes that make it easier to achieve our goals in an efficient way. - -These guidelines mainly apply to Lightbend’s “mature” projects - not necessarily to projects of the type ‘collection of scripts’ etc. +# Navigating around the project & codebase ## Branches summary @@ -20,37 +10,80 @@ Depending on which version (or sometimes module) you want to work on, you should * `master` – active development branch of Akka 2.4.x * `release-2.3` – maintenance branch of Akka 2.3.x +* `artery-dev` – work on the upcoming remoting implementation, codenamed "artery" * similarly `release-2.#` branches contain legacy versions of Akka +## Tags + +Akka uses tags to categorise issues into groups, or to mark their phase in development. + +Most notably, many tags start with the `t:` prefix (as in `topic:`), which categorises issues in terms of which module they relate to. Examples are: + +- [t:core](https://github.com/akka/akka/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aopen%20label%3At%3Acore) +- [t:stream](https://github.com/akka/akka/issues?q=is%3Aissue+is%3Aopen+label%3At%3Astream) +- see [all tags here](https://github.com/akka/akka/labels) + +In general *all issues are open for anyone to work on*; however, if you're new to the project and looking for an issue +that will likely be accepted and is a nice one to get started with, you should check out the following tags: + +- [community](https://github.com/akka/akka/labels/community) - identifies issues that the core team will likely not have time to work on, or that make nice entry-level tickets. If you're not sure how to solve a ticket but would like to work on it, feel free to ask in the issue for clarification or tips. +- [nice-to-have (low-priority)](https://github.com/akka/akka/labels/nice-to-have%20%28low-prio%29) - tasks which make sense, but are not a high priority (in the face of other, higher-priority issues). If you see something interesting in this list, a contribution would be really wonderful! + +Another group of tags consists of those starting with a number. They're used to signal what phase of development an issue is in: + +- [0 - new](https://github.com/akka/akka/labels/0%20-%20new) - is assigned when it is unclear what a ticket's purpose is, or whether it is valid at all. Sometimes the additional tag `discuss` is used to mark such tickets, if they propose large-scale changes and need more discussion before moving into triaged (or being closed as invalid). +- [1 - triaged](https://github.com/akka/akka/labels/1%20-%20triaged) - roughly speaking means "this ticket makes sense". Triaged tickets are safe to pick up for contributing, in the sense that a patch for one is likely to be accepted. It is not recommended to start working on a ticket that is not triaged. 
+- [2 - pick next](https://github.com/akka/akka/labels/2%20-%20pick%20next) - used to mark issues which are next up in the queue to be worked on. Sometimes it's also used to mark which PRs are expected to be reviewed/merged for the next release. The tag is non-binding, and mostly used as an organisational helper.
+- [3 - in progress](https://github.com/akka/akka/labels/3%20-%20in%20progress) - means someone is working on this ticket. If you see a ticket with this tag that nonetheless seems inactive, the tag may simply not have been removed; feel free to ping the ticket to ask whether it's still being worked on.
+
+The last group of special tags indicates specific states a ticket is in:
+
+- [bug](https://github.com/akka/akka/labels/bug) - bugs take priority over features in being fixed. The core team dedicates a number of days to working on bugs each sprint. Bugs which have reproducers are also great for community contributions, as they're well isolated. Sometimes we're not so lucky as to have a reproducer; in that case, the bugfix should include a test reproducing the original error along with the fix.
+- [failed](https://github.com/akka/akka/labels/failed) - these tickets indicate a Jenkins failure (for example from a nightly build). They usually start with the `FAILED: ...` message, and include a stacktrace + link to the Jenkins failure. The tickets are collected and worked on with priority to keep the build stable and healthy. Often these are simple timeout issues (Jenkins boxes are slow), though sometimes real bugs are discovered this way.
+
+Pull Request validation states:
+
+- `validating => [tested | needs-attention]` - signifies the pull request validation status.
+
+# Akka contributing guidelines
+
+These guidelines apply to all Akka projects, by which we mean both the `akka/akka` repository
+and any plugins or additional repos located under the Akka GitHub organisation.
+
+These guidelines are meant to be a living document that should be changed and adapted as needed.
+We encourage changes that make it easier to achieve our goals in an efficient way.
+
 ## General Workflow
 
-This is the process for committing code into master. There are of course exceptions to these rules, for example minor changes to comments and documentation, fixing a broken build etc.
+The steps below describe how to get a patch into a main development branch (e.g. `master`).
+The steps are exactly the same for everyone involved in the project (be it a core team member or a first-time contributor).
 
-1. Make sure you have signed the Lightbend CLA, if not, [sign it online](http://www.lightbend.com/contribute/cla).
-2. Before starting to work on a feature or a fix, make sure that:
    1. There is a ticket for your work in the project's issue tracker. If not, create it first.
    2. The ticket has been scheduled for the current milestone.
    3. The ticket is estimated by the team.
    4. The ticket have been discussed and prioritized by the team.
-3. You should always perform your work in a Git feature branch. The branch should be given a descriptive name that explains its intent. Some teams also like adding the ticket number and/or the [GitHub](http://github.com) user ID to the branch name, these details is up to each of the individual teams.
+1. Make sure an issue exists in the [issue tracker](https://github.com/akka/akka/issues) for the work you want to contribute.
   - If there is no ticket for it, [create one](https://github.com/akka/akka/issues/new) first.
+1. 
[Fork the project](https://github.com/akka/akka#fork-destination-box) on GitHub. You'll create feature branches for your work on your fork, which lets you submit Pull Requests against the mainline Akka repository.
+1. Create a branch on your fork and work on the feature. For example: `git checkout -b wip-custom-headers-akka-http` (a command-line sketch of this workflow is shown after the TL;DR below).
   - Please make sure to follow the general quality guidelines (specified below) when developing your patch.
   - Please write additional tests covering your feature and adjust existing ones if needed before submitting your Pull Request. The `validatePullRequest` sbt task ([explained below](#validatePullRequest)) may come in handy to verify your changes are correct.
+1. Once your feature is complete, prepare the commit following our [commit message guidelines](#commit-message-guidelines). For example, a good commit message would be: `Adding compression support for Manifests #22222` (note the reference to the ticket it aims to resolve).
+1. Now it's finally time to [submit the Pull Request](https://help.github.com/articles/using-pull-requests)!
+1. If you have not already done so, you will be asked by our CLA bot to [sign the Lightbend CLA](http://www.lightbend.com/contribute/cla) online. CLA stands for Contributor License Agreement, and signing it is a way of protecting the project from intellectual property disputes.
+1. If you're not already on the contributors whitelist, the @akka-ci bot will ask `Can one of the repo owners verify this patch?`, to which a core member will reply by commenting `OK TO TEST`. This is just a sanity check to prevent malicious code from being run on the Jenkins cluster.
+1. Now both committers and interested people will review your code. This process is to ensure the code we merge is of the best possible quality, and that no silly mistakes slip through. You're expected to follow up on these comments by adding new commits to the same branch. The messages of those commits can be looser, for example: `Removed debugging using printline`, as they will all be squashed into one commit before merging into the main branch.
   - The community and team are really nice people, so don't be afraid to ask follow-up questions if you didn't understand some comment, or would like to clarify how to continue with a given feature. We're here to help, so feel free to ask and discuss any kind of questions you might have during review!
+1. After the review you should fix the issues as needed (pushing a new commit for a new review etc.), iterating until the reviewers give their thumbs up, which is usually signalled by a comment saying `LGTM`, meaning "Looks Good To Me".
   - In general a PR is expected to get 2 LGTMs from the team before it is merged. For trivial PRs, or under special circumstances (such as most of the team being on vacation, or a PR having been very thoroughly reviewed/tested and surely being correct), one LGTM may be fine as well.
+1. If the code change needs to be applied to other branches as well (for example a bugfix needing to be backported to a previous version), one of the team will either ask you to submit a PR with the same commit to the old branch, or do this for you.
   - Backport pull requests such as these are marked using the phrase `for validation` in the title to make their purpose clear in the pull request list. Once validation passes they can be merged without additional review (if there are no conflicts).
+1. 
Once everything is said and done, your Pull Request gets merged :tada: Your feature will then be available with the next “earliest” release milestone (i.e. if it was back-ported so that it will be in release x.y.z, look for the milestone of that release). And of course you will be given credit for the fix in the release stats during the release's announcement. You've made it!
 
-   Akka prefers the committer name as part of the branch name, the ticket number is optional.
 
+The TL;DR of the above, rather detailed, workflow is:
 
-4. When the feature or fix is completed you should open a [Pull Request](https://help.github.com/articles/using-pull-requests) on GitHub.
-5. The Pull Request should be reviewed by other maintainers (as many as feasible/practical). Note that the maintainers can consist of outside contributors, both within and outside Lightbend. Outside contributors (for example from EPFL or independent committers) are encouraged to participate in the review process, it is not a closed process.
-6. After the review you should fix the issues as needed (pushing a new commit for new review etc.), iterating until the reviewers give their thumbs up.
-    When the branch conflicts with its merge target (either by way of git merge conflict or failing CI tests), do **not** merge the target branch into your feature branch. Instead rebase your branch onto the target branch. Merges complicate the git history, especially for the squashing which is necessary later (see below).
-7. Once the code has passed review the Pull Request can be merged into the master branch. For this purpose the commits which were added on the feature branch should be squashed into a single commit. This can be done using the command `git rebase -i master` (or the appropriate target branch), `pick`ing the first commit and `squash`ing all following ones.
-    Also make sure that the commit message conforms to the syntax specified below.
-8. If the code change needs to be applied to other branches as well, create pull requests against those branches which contain the change after rebasing it onto the respective branch and await successful verification by the continuous integration infrastructure; then merge those pull requests.
-    Please mark these pull requests with `(for validation)` in the title to make the purpose clear in the pull request list.
-9. Once everything is said and done, associate the ticket with the “earliest” release milestone (i.e. if back-ported so that it will be in release x.y.z, find the relevant milestone for that release) and close it.
 
+1. Fork Akka
+2. Hack and test on your feature (on a branch)
+3. Submit a PR
+4. Sign the CLA if necessary
+5. Keep polishing it until it has received enough LGTMs
+6. Profit!
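+
+As a rough command-line sketch of the fork/branch/PR steps above (`your-github-user` stands for your own GitHub account, and the branch name is just an example):
+
+```
+git clone git@github.com:your-github-user/akka.git
+cd akka
+git remote add upstream git@github.com:akka/akka.git   # to keep your fork up to date
+git checkout -b wip-custom-headers-akka-http           # feature branch for your work
+# ... hack, test, commit ...
+git push origin wip-custom-headers-akka-http
+# then open a Pull Request against akka/akka on GitHub
+```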
 
 ## The `validatePullRequest` task
 
@@ -77,41 +110,62 @@ target PR branch you can do so by setting the PR_TARGET_BRANCH environment variable:
 
 ```
 PR_TARGET_BRANCH=origin/example sbt validatePullRequest
 ```
 
+## Binary compatibility
+
+Binary compatibility rules and guarantees are described in depth in the [Binary Compatibility Rules](http://doc.akka.io/docs/akka/snapshot/common/binary-compatibility-rules.html) section of the documentation.
+
+Akka uses MiMa (which is short for [Lightbend Migration Manager](https://github.com/typesafehub/migration-manager)) to
+validate binary compatibility of incoming Pull Requests. If your PR fails due to binary compatibility issues, you may see
+an error like this:
+
+```
+[info] akka-stream: found 1 potential binary incompatibilities while checking against com.typesafe.akka:akka-stream_2.11:2.4.2 (filtered 222)
+[error] * method foldAsync(java.lang.Object,scala.Function2)akka.stream.scaladsl.FlowOps in trait akka.stream.scaladsl.FlowOps is present only in current version
+[error] filter with: ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.stream.scaladsl.FlowOps.foldAsync")
+```
+
+In such situations it's good to consult with a core team member about whether the violation can be safely ignored (by adding the above snippet to `project/MiMa.scala`), or whether it would indeed break binary compatibility.
+
+Situations in which it may be fine to ignore a MiMa-issued warning include:
+
+- if it touches a class marked as `private[akka]`, annotated with `/** INTERNAL API */`, or carrying similar markers
+- if it concerns internal classes (often recognisable by package names like `dungeon`, `impl`, `internal` etc.)
+- if it adds API to classes / traits which are only meant for extension by Akka itself, i.e. which should not be extended by end-users
+- other tricky situations
+
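+As an illustration, such a filter would end up in `project/MiMa.scala` roughly as below (a sketch only; the exact shape of the filter list in that file may differ, so follow the existing entries there):
+
+```
+import com.typesafe.tools.mima.core._
+
+// Suppresses the (verified as safe) `foldAsync` violation reported above;
+// FlowOps is only meant to be extended by Akka itself, so adding a method
+// to it does not break end-user code.
+ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.stream.scaladsl.FlowOps.foldAsync")
+```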
+
 ## Pull Request Requirements
 
 For a Pull Request to be considered at all it has to meet these requirements:
 
-1. Live up to the current code standard:
   - Not violate [DRY](http://programmer.97things.oreilly.com/wiki/index.php/Don%27t_Repeat_Yourself).
   - [Boy Scout Rule](http://programmer.97things.oreilly.com/wiki/index.php/The_Boy_Scout_Rule) needs to have been applied.
-2. Regardless if the code introduces new features or fixes bugs or regressions, it must have comprehensive tests.
-3. The code must be well documented in the Lightbend's standard documentation format (see the ‘Documentation’ section below).
-4. The commit messages must properly describe the changes, see further below.
-5. All Lightbend projects must include Lightbend copyright notices. Each project can choose between one of two approaches:
+1. Regardless of whether the code introduces new features or fixes bugs or regressions, it must have comprehensive tests.
+1. The code must be well documented in Lightbend's standard documentation format (see the ‘Documentation’ section below).
+1. The commit messages must properly describe the changes, see further below.
+1. All Lightbend projects must include Lightbend copyright notices. Each project can choose between one of two approaches:
    1. All source files in the project must have a Lightbend copyright notice in the file header.
-   2. The Notices file for the project includes the Lightbend copyright notice and no other files contain copyright notices. See http://www.apache.org/legal/src-headers.html for instructions for managing this approach for copyrights.
+   1. The Notices file for the project includes the Lightbend copyright notice and no other files contain copyright notices. See http://www.apache.org/legal/src-headers.html for instructions for managing this approach for copyrights.
 
    Akka uses the first choice, having copyright notices in every file header.
 
-   Other guidelines to follow for copyright notices:
-
-   - Use a form of ``Copyright (C) 2011-2016 Lightbend Inc. <http://www.lightbend.com>``, where the start year is when the project or file was first created and the end year is the last time the project or file was modified.
-   - Never delete or change existing copyright notices, just add additional info.
-   - Do not use ``@author`` tags since it does not encourage [Collective Code Ownership](http://www.extremeprogramming.org/rules/collective.html). However, each project should make sure that the contributors gets the credit they deserve—in a text file or page on the project website and in the release notes etc.
+### Additional guidelines
+
+Some additional guidelines regarding source code are:
+
+- files should start with a ``Copyright (C) 2016 Lightbend Inc. <http://www.lightbend.com>`` copyright header
+- keep the code [DRY](http://programmer.97things.oreilly.com/wiki/index.php/Don%27t_Repeat_Yourself)
+- apply the [Boy Scout Rule](http://programmer.97things.oreilly.com/wiki/index.php/The_Boy_Scout_Rule) whenever you have the chance to
+- never delete or change existing copyright notices, just add additional info
+- do not use ``@author`` tags, since they do not encourage [Collective Code Ownership](http://www.extremeprogramming.org/rules/collective.html)
  - however, each project should make sure that contributors get the credit they deserve—in a text file or page on the project website and in the release notes etc.
 
 If these requirements are not met then the code should **not** be merged into master, or even reviewed - regardless of how good or important it is. No exceptions.
 
 Whether or not a pull request (or parts of it) shall be back- or forward-ported will be discussed on the pull request discussion page, it shall therefore not be part of the commit messages. If desired the intent can be expressed in the pull request description.
 
-## Continuous Integration
-
-Each project should be configured to use a continuous integration (CI) tool (i.e. a build server à la Jenkins). Lightbend has a [Jenkins server farm](https://jenkins.akka.io/) that can be used. The CI tool should, on each push to master, build the **full** distribution and run **all** tests, and if something fails it should email out a notification with the failure report to the committer and the core team. The CI tool should also be used in conjunction with a Pull Request validator (discussed below).
-
 ## Documentation
 
-All documentation should be generated using the sbt-site-plugin, *or* publish artifacts to a repository that can be consumed by the Lightbend stack.
-
 All documentation must abide by the following maxims:
 
 - Example code should be run as part of an automated test suite.
 
@@ -141,12 +195,6 @@ Which licenses are compatible with Apache 2 are defined in [this doc](http://www.
 
 Each project must also create and maintain a list of all dependencies and their licenses, including all their transitive dependencies. This can be done either in the documentation or in the build file next to each dependency.
 
-## Work In Progress
-
-It is ok to work on a public feature branch in the GitHub repository. Something that can sometimes be useful for early feedback etc. If so, then it is preferable to name the branch accordingly. This can be done by either prefixing the name with ``wip-`` as in ‘Work In Progress’, or using hierarchical names like ``wip/..``, ``feature/..`` or ``topic/..``. Either way is fine as long as it is clear that it is work in progress and not ready for merge. This work can temporarily have a lower standard. However, to be merged into master it will have to go through the regular process outlined above, with Pull Request, review etc..
-
-Also, to facilitate both well-formed commits and working together, the ``wip`` and ``feature``/``topic`` identifiers also have special meaning. 
Any branch labelled with ``wip`` is considered “git-unstable” and may be rebased and have its history rewritten. Any branch with ``feature``/``topic`` in the name is considered “stable” enough for others to depend on when a group is working on a feature.
 
 ## Creating Commits And Writing Commit Messages
 
 Follow these guidelines when creating public commits and writing commit messages.
 
@@ -160,7 +208,7 @@ Follow these guidelines when creating public commits and writing commit messages
 
 3. Following the single line description should be a blank line followed by an enumerated list with the details of the commit.
 
-4. Add keywords for your commit (depending on the degree of automation we reach, the list may change over time):
+4. You can request a review of your commit by a specific team member (depending on the degree of automation we reach, the list may change over time):
 
   * ``Review by @gituser`` - if you want to notify someone on the team. The others can, and are encouraged to participate.
 
 Example:
 
@@ -171,9 +219,8 @@ Example:
   * Details 2
  * Details 3
 
-## How To Enforce These Guidelines?
+## Pull request validation workflow details
 
-### Make Use of Pull Request Validator
 Akka uses [Jenkins GitHub pull request builder plugin](https://wiki.jenkins-ci.org/display/JENKINS/GitHub+pull+request+builder+plugin)
 that automatically merges the code, builds it, runs the tests and comments on the Pull Request in GitHub.
 
@@ -198,8 +245,19 @@ the validator to test all projects.
 
 ## Source style
 
+### Scala style
+
 Akka uses [Scalariform](https://github.com/daniel-trinh/scalariform) to enforce some of the code style rules.
 
+### Java style
+
+Java code is currently not automatically reformatted by sbt (we expect to have a plugin to do this soon).
+Thus we ask Java contributors to follow these simple guidelines:
+
+- 2 spaces of indentation
+- `{` on the same line as the method name
+- in all other aspects, follow the [Oracle Java Style Guide](http://www.oracle.com/technetwork/java/codeconvtoc-136057.html)
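+
+For illustration, a tiny snippet following these rules (the class and method are made-up examples, not real Akka code):
+
+```
+public class SampleGreeter {
+  // 2-space indentation, braces on the same line as the declaration
+  public String greet(String name) {
+    if (name == null) {
+      return "Hello, stranger!";
+    }
+    return "Hello, " + name + "!";
+  }
+}
+```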
+
 ## Contributing Modules
 
 For external contributions of entire features, the normal way is to establish it
@@ -209,3 +267,20 @@ akka-contrib subproject), then when the feature is hardened, well documented
 and tested it becomes an officially supported Akka feature.
 
 [List of experimental Akka features](http://doc.akka.io/docs/akka/current/experimental/index.html)
+
+# Supporting infrastructure
+
+## Continuous Integration
+
+Each project should be configured to use a continuous integration (CI) tool (i.e. a build server à la Jenkins).
+
+Lightbend is sponsoring a [Jenkins server farm](https://jenkins.akka.io/), sometimes referred to as "the Lausanne cluster".
+The cluster is made of real bare-metal boxes, and is maintained by the Akka team (and other very helpful people at Lightbend).
+
+In addition to PR validation, the cluster is also used for nightly and performance test runs.
+
+## Related links
+
+* [Akka Contributor License Agreement](http://www.lightbend.com/contribute/cla)
+* [Akka Issue Tracker](http://doc.akka.io/docs/akka/current/project/issue-tracking.html)
+* [Scalariform](https://github.com/daniel-trinh/scalariform)
diff --git a/README.md b/README.md
index 91e67226c3..339887d464 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,8 @@
-# Akka
+Akka
+====
 
-We believe that writing correct concurrent & distributed, resilient and elastic applications is too hard. Most of the time it's because we are using the wrong tools and the wrong level of abstraction.
+We believe that writing correct concurrent & distributed, resilient and elastic applications is too hard.
+Most of the time it's because we are using the wrong tools and the wrong level of abstraction.
 
 Akka is here to change that.
 
@@ -10,10 +12,44 @@ For resilience we adopt the "Let it crash" model which the telecom industry has
 Actors also provide the abstraction for transparent distribution and the basis for truly scalable and fault-tolerant applications.
 
+Learn more at [akka.io](http://akka.io/).
+
+Reference Documentation
+-----------------------
+
+The reference documentation is available at [doc.akka.io](http://doc.akka.io),
+for [Scala](http://doc.akka.io/docs/akka/current/scala.html) and [Java](http://doc.akka.io/docs/akka/current/java.html).
+
+
+Community
+---------
+You can join these groups and chats to discuss and ask Akka-related questions:
+
+- Mailing list: [![google groups: akka-user](https://img.shields.io/badge/group%3A-akka--user-blue.svg?style=flat-square)](https://groups.google.com/forum/#!forum/akka-user)
+- Chat room about *using* Akka: [![gitter: akka/akka](https://img.shields.io/badge/gitter%3A-akka%2Fakka-blue.svg?style=flat-square)](https://gitter.im/akka/akka)
+- Issue tracker: [![github: akka/akka](https://img.shields.io/badge/github%3A-issues-blue.svg?style=flat-square)](https://github.com/akka/akka/issues)
+
+In addition to that, you may enjoy following:
+
+- The [Akka Team Blog](http://blog.akka.io)
+- [@akkateam](https://twitter.com/akkateam) on Twitter
+- Questions tagged [#akka on StackOverflow](http://stackoverflow.com/questions/tagged/akka)
+
+Contributing
+------------
+Contributions are *very* welcome!
+
+If you see an issue that you'd like to see fixed, the best way to make it happen is to help out by submitting a Pull Request implementing it.
+
+Refer to the [CONTRIBUTING.md](https://github.com/akka/akka/blob/master/CONTRIBUTING.md) file for more details about the workflow,
+and general hints on how to prepare your pull request. You can also ask for clarification or guidance in GitHub issues directly,
+or in the akka/dev chat if more real-time communication would be of benefit.
+
+A chat room is available for all questions related to *developing and contributing* to Akka:
+[![gitter: akka/dev](https://img.shields.io/badge/gitter%3A-akka%2Fdev-blue.svg?style=flat-square)](https://gitter.im/akka/dev)
+
+
+License
+-------
+
 Akka is Open Source and available under the Apache 2 License.
-
-Learn more at [akka.io](http://akka.io/). Join the [akka-user](https://groups.google.com/forum/#!forum/akka-user) mailing list. Follow [@akkateam](https://twitter.com/akkateam) on twitter.
-
-If you are looking to contribute back to Akka, the [CONTRIBUTING.md](CONTRIBUTING.md) file should provide you with all the information needed to get started.
-
-[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/akka/akka?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
diff --git a/akka-actor-tests/src/test/scala/akka/serialization/NoVerification.scala b/akka-actor-tests/src/test/scala/akka/serialization/NoVerification.scala
new file mode 100644
index 0000000000..3f174c54a9
--- /dev/null
+++ b/akka-actor-tests/src/test/scala/akka/serialization/NoVerification.scala
@@ -0,0 +1,14 @@
+/**
+ * Copyright (C) 2009-2016 Lightbend Inc. <http://www.lightbend.com>
+ */ + +package test.akka.serialization + +import akka.actor.NoSerializationVerificationNeeded + +/** + * This is currently used in NoSerializationVerificationNeeded test cases in SerializeSpec, + * as they needed a serializable class whose top package is not akka. + */ +class NoVerification extends NoSerializationVerificationNeeded with java.io.Serializable { +} diff --git a/akka-actor-tests/src/test/scala/akka/serialization/SerializeSpec.scala b/akka-actor-tests/src/test/scala/akka/serialization/SerializeSpec.scala index 7f1daa394d..7d14b28d6d 100644 --- a/akka-actor-tests/src/test/scala/akka/serialization/SerializeSpec.scala +++ b/akka-actor-tests/src/test/scala/akka/serialization/SerializeSpec.scala @@ -18,6 +18,8 @@ import akka.pattern.ask import org.apache.commons.codec.binary.Hex.encodeHex import java.nio.ByteOrder import java.nio.ByteBuffer +import akka.actor.NoSerializationVerificationNeeded +import test.akka.serialization.NoVerification object SerializationTests { @@ -443,18 +445,18 @@ class DefaultSerializationWarningSpec extends AkkaSpec( ConfigFactory.parseString("akka.actor.warn-about-java-serializer-usage = on")) { val ser = SerializationExtension(system) - val messagePrefix = "Using the default Java serializer for class.*" + val messagePrefix = "Using the default Java serializer for class" "Using the default Java serializer" must { "log a warning when serializing classes outside of java.lang package" in { - EventFilter.warning(message = messagePrefix) intercept { + EventFilter.warning(start = messagePrefix, occurrences = 1) intercept { ser.serializerFor(classOf[java.math.BigDecimal]) } } "not log warning when serializing classes from java.lang package" in { - EventFilter.warning(message = messagePrefix, occurrences = 0) intercept { + EventFilter.warning(start = messagePrefix, occurrences = 0) intercept { ser.serializerFor(classOf[java.lang.String]) } } @@ -463,6 +465,54 @@ class DefaultSerializationWarningSpec extends AkkaSpec( } +class NoVerificationWarningSpec extends AkkaSpec( + ConfigFactory.parseString( + "akka.actor.warn-about-java-serializer-usage = on\n" + + "akka.actor.warn-on-no-serialization-verification = on")) { + + val ser = SerializationExtension(system) + val messagePrefix = "Using the default Java serializer for class" + + "When warn-on-no-serialization-verification = on, using the default Java serializer" must { + + "log a warning on classes without extending NoSerializationVerificationNeeded" in { + EventFilter.warning(start = messagePrefix, occurrences = 1) intercept { + ser.serializerFor(classOf[java.math.BigDecimal]) + } + } + + "still log warning on classes extending NoSerializationVerificationNeeded" in { + EventFilter.warning(start = messagePrefix, occurrences = 1) intercept { + ser.serializerFor(classOf[NoVerification]) + } + } + } +} + +class NoVerificationWarningOffSpec extends AkkaSpec( + ConfigFactory.parseString( + "akka.actor.warn-about-java-serializer-usage = on\n" + + "akka.actor.warn-on-no-serialization-verification = off")) { + + val ser = SerializationExtension(system) + val messagePrefix = "Using the default Java serializer for class" + + "When warn-on-no-serialization-verification = off, using the default Java serializer" must { + + "log a warning on classes without extending NoSerializationVerificationNeeded" in { + EventFilter.warning(start = messagePrefix, occurrences = 1) intercept { + ser.serializerFor(classOf[java.math.BigDecimal]) + } + } + + "not log warning on classes extending NoSerializationVerificationNeeded" in { + 
EventFilter.warning(start = messagePrefix, occurrences = 0) intercept {
+        ser.serializerFor(classOf[NoVerification])
+      }
+    }
+  }
+}
+
 protected[akka] trait TestSerializable
 
 protected[akka] class TestSerializer extends Serializer {
diff --git a/akka-actor/src/main/resources/reference.conf b/akka-actor/src/main/resources/reference.conf
index 394025c4db..7f6a508f2b 100644
--- a/akka-actor/src/main/resources/reference.conf
+++ b/akka-actor/src/main/resources/reference.conf
@@ -592,6 +592,12 @@ akka {
   # you can turn this off.
   warn-about-java-serializer-usage = on
 
+  # To be used with the above warn-about-java-serializer-usage.
+  # When warn-about-java-serializer-usage = on and this warn-on-no-serialization-verification = off,
+  # warnings are suppressed for classes extending NoSerializationVerificationNeeded
+  # to reduce noise.
+  warn-on-no-serialization-verification = on
+
   # Configuration namespace of serialization identifiers.
   # Each serializer implementation must have an entry in the following format:
   # `akka.actor.serialization-identifiers."FQCN" = ID`
diff --git a/akka-actor/src/main/scala/akka/actor/UntypedActor.scala b/akka-actor/src/main/scala/akka/actor/UntypedActor.scala
index 608de1af44..54341a25ef 100644
--- a/akka-actor/src/main/scala/akka/actor/UntypedActor.scala
+++ b/akka-actor/src/main/scala/akka/actor/UntypedActor.scala
@@ -98,7 +98,7 @@ abstract class UntypedActor extends Actor {
    * To be implemented by concrete UntypedActor, this defines the behavior of the
    * UntypedActor.
    */
-  @throws(classOf[Exception])
+  @throws(classOf[Throwable])
   def onReceive(message: Any): Unit
 
   /**
diff --git a/akka-actor/src/main/scala/akka/pattern/CircuitBreaker.scala b/akka-actor/src/main/scala/akka/pattern/CircuitBreaker.scala
index 9e0a52c2ce..5f34ac2e5a 100644
--- a/akka-actor/src/main/scala/akka/pattern/CircuitBreaker.scala
+++ b/akka-actor/src/main/scala/akka/pattern/CircuitBreaker.scala
@@ -464,7 +464,8 @@ class CircuitBreaker(scheduler: Scheduler, maxFailures: Int, callTimeout: Finite
    * @return duration to when the breaker will attempt a reset by transitioning to half-open
    */
   private def remainingDuration(): FiniteDuration = {
-    val diff = System.nanoTime() - get
+    val fromOpened = System.nanoTime() - get
+    val diff = resetTimeout.toNanos - fromOpened
     if (diff <= 0L) Duration.Zero
     else diff.nanos
   }
diff --git a/akka-actor/src/main/scala/akka/routing/OptimalSizeExploringResizer.scala b/akka-actor/src/main/scala/akka/routing/OptimalSizeExploringResizer.scala
index bf2da2c760..6c20593038 100644
--- a/akka-actor/src/main/scala/akka/routing/OptimalSizeExploringResizer.scala
+++ b/akka-actor/src/main/scala/akka/routing/OptimalSizeExploringResizer.scala
@@ -6,7 +6,7 @@ package akka.routing
 import java.time.LocalDateTime
 
 import scala.collection.immutable
-import scala.concurrent.forkjoin.ThreadLocalRandom
+import java.util.concurrent.ThreadLocalRandom
 import scala.concurrent.duration._
 
 import com.typesafe.config.Config
diff --git a/akka-actor/src/main/scala/akka/serialization/Serialization.scala b/akka-actor/src/main/scala/akka/serialization/Serialization.scala
index 9889759455..4017ac6689 100644
--- a/akka-actor/src/main/scala/akka/serialization/Serialization.scala
+++ b/akka-actor/src/main/scala/akka/serialization/Serialization.scala
@@ -315,13 +315,20 @@ class Serialization(val system: ExtendedActorSystem) extends Extension {
   }
 
   private val isJavaSerializationWarningEnabled = settings.config.getBoolean("akka.actor.warn-about-java-serializer-usage")
+  private val isWarningOnNoVerificationEnabled = 
settings.config.getBoolean("akka.actor.warn-on-no-serialization-verification") private def shouldWarnAboutJavaSerializer(serializedClass: Class[_], serializer: Serializer) = { + + def suppressWarningOnNonSerializationVerification(serializedClass: Class[_]) = { + //suppressed, only when warn-on-no-serialization-verification = off, and extending NoSerializationVerificationNeeded + !isWarningOnNoVerificationEnabled && classOf[NoSerializationVerificationNeeded].isAssignableFrom(serializedClass) + } + isJavaSerializationWarningEnabled && serializer.isInstanceOf[JavaSerializer] && !serializedClass.getName.startsWith("akka.") && - !serializedClass.getName.startsWith("java.lang.") + !serializedClass.getName.startsWith("java.lang.") && + !suppressWarningOnNonSerializationVerification(serializedClass) } - } diff --git a/akka-cluster-tools/src/main/java/akka/cluster/pubsub/protobuf/msg/DistributedPubSubMessages.java b/akka-cluster-tools/src/main/java/akka/cluster/pubsub/protobuf/msg/DistributedPubSubMessages.java index f36a67ce98..81dd3be1b6 100644 --- a/akka-cluster-tools/src/main/java/akka/cluster/pubsub/protobuf/msg/DistributedPubSubMessages.java +++ b/akka-cluster-tools/src/main/java/akka/cluster/pubsub/protobuf/msg/DistributedPubSubMessages.java @@ -35,6 +35,16 @@ public final class DistributedPubSubMessages { */ akka.cluster.pubsub.protobuf.msg.DistributedPubSubMessages.Status.VersionOrBuilder getVersionsOrBuilder( int index); + + // optional bool replyToStatus = 2; + /** + * optional bool replyToStatus = 2; + */ + boolean hasReplyToStatus(); + /** + * optional bool replyToStatus = 2; + */ + boolean getReplyToStatus(); } /** * Protobuf type {@code Status} @@ -95,6 +105,11 @@ public final class DistributedPubSubMessages { versions_.add(input.readMessage(akka.cluster.pubsub.protobuf.msg.DistributedPubSubMessages.Status.Version.PARSER, extensionRegistry)); break; } + case 16: { + bitField0_ |= 0x00000001; + replyToStatus_ = input.readBool(); + break; + } } } } catch (akka.protobuf.InvalidProtocolBufferException e) { @@ -749,6 +764,7 @@ public final class DistributedPubSubMessages { // @@protoc_insertion_point(class_scope:Status.Version) } + private int bitField0_; // repeated .Status.Version versions = 1; public static final int VERSIONS_FIELD_NUMBER = 1; private java.util.List versions_; @@ -785,8 +801,25 @@ public final class DistributedPubSubMessages { return versions_.get(index); } + // optional bool replyToStatus = 2; + public static final int REPLYTOSTATUS_FIELD_NUMBER = 2; + private boolean replyToStatus_; + /** + * optional bool replyToStatus = 2; + */ + public boolean hasReplyToStatus() { + return ((bitField0_ & 0x00000001) == 0x00000001); + } + /** + * optional bool replyToStatus = 2; + */ + public boolean getReplyToStatus() { + return replyToStatus_; + } + private void initFields() { versions_ = java.util.Collections.emptyList(); + replyToStatus_ = false; } private byte memoizedIsInitialized = -1; public final boolean isInitialized() { @@ -809,6 +842,9 @@ public final class DistributedPubSubMessages { for (int i = 0; i < versions_.size(); i++) { output.writeMessage(1, versions_.get(i)); } + if (((bitField0_ & 0x00000001) == 0x00000001)) { + output.writeBool(2, replyToStatus_); + } getUnknownFields().writeTo(output); } @@ -822,6 +858,10 @@ public final class DistributedPubSubMessages { size += akka.protobuf.CodedOutputStream .computeMessageSize(1, versions_.get(i)); } + if (((bitField0_ & 0x00000001) == 0x00000001)) { + size += akka.protobuf.CodedOutputStream + .computeBoolSize(2, 
replyToStatus_); + } size += getUnknownFields().getSerializedSize(); memoizedSerializedSize = size; return size; @@ -945,6 +985,8 @@ public final class DistributedPubSubMessages { } else { versionsBuilder_.clear(); } + replyToStatus_ = false; + bitField0_ = (bitField0_ & ~0x00000002); return this; } @@ -972,6 +1014,7 @@ public final class DistributedPubSubMessages { public akka.cluster.pubsub.protobuf.msg.DistributedPubSubMessages.Status buildPartial() { akka.cluster.pubsub.protobuf.msg.DistributedPubSubMessages.Status result = new akka.cluster.pubsub.protobuf.msg.DistributedPubSubMessages.Status(this); int from_bitField0_ = bitField0_; + int to_bitField0_ = 0; if (versionsBuilder_ == null) { if (((bitField0_ & 0x00000001) == 0x00000001)) { versions_ = java.util.Collections.unmodifiableList(versions_); @@ -981,6 +1024,11 @@ public final class DistributedPubSubMessages { } else { result.versions_ = versionsBuilder_.build(); } + if (((from_bitField0_ & 0x00000002) == 0x00000002)) { + to_bitField0_ |= 0x00000001; + } + result.replyToStatus_ = replyToStatus_; + result.bitField0_ = to_bitField0_; onBuilt(); return result; } @@ -1022,6 +1070,9 @@ public final class DistributedPubSubMessages { } } } + if (other.hasReplyToStatus()) { + setReplyToStatus(other.getReplyToStatus()); + } this.mergeUnknownFields(other.getUnknownFields()); return this; } @@ -1295,6 +1346,39 @@ public final class DistributedPubSubMessages { return versionsBuilder_; } + // optional bool replyToStatus = 2; + private boolean replyToStatus_ ; + /** + * optional bool replyToStatus = 2; + */ + public boolean hasReplyToStatus() { + return ((bitField0_ & 0x00000002) == 0x00000002); + } + /** + * optional bool replyToStatus = 2; + */ + public boolean getReplyToStatus() { + return replyToStatus_; + } + /** + * optional bool replyToStatus = 2; + */ + public Builder setReplyToStatus(boolean value) { + bitField0_ |= 0x00000002; + replyToStatus_ = value; + onChanged(); + return this; + } + /** + * optional bool replyToStatus = 2; + */ + public Builder clearReplyToStatus() { + bitField0_ = (bitField0_ & ~0x00000002); + replyToStatus_ = false; + onChanged(); + return this; + } + // @@protoc_insertion_point(builder_scope:Status) } @@ -7508,24 +7592,25 @@ public final class DistributedPubSubMessages { descriptor; static { java.lang.String[] descriptorData = { - "\n\037DistributedPubSubMessages.proto\"d\n\006Sta" + - "tus\022!\n\010versions\030\001 \003(\0132\017.Status.Version\0327" + - "\n\007Version\022\031\n\007address\030\001 \002(\0132\010.Address\022\021\n\t" + - "timestamp\030\002 \002(\003\"\256\001\n\005Delta\022\036\n\007buckets\030\001 \003" + - "(\0132\r.Delta.Bucket\0322\n\005Entry\022\013\n\003key\030\001 \002(\t\022" + - "\017\n\007version\030\002 \002(\003\022\013\n\003ref\030\003 \001(\t\032Q\n\006Bucket\022" + - "\027\n\005owner\030\001 \002(\0132\010.Address\022\017\n\007version\030\002 \002(" + - "\003\022\035\n\007content\030\003 \003(\0132\014.Delta.Entry\"K\n\007Addr" + - "ess\022\016\n\006system\030\001 \002(\t\022\020\n\010hostname\030\002 \002(\t\022\014\n" + - "\004port\030\003 \002(\r\022\020\n\010protocol\030\004 \001(\t\"F\n\004Send\022\014\n", - "\004path\030\001 \002(\t\022\025\n\rlocalAffinity\030\002 \002(\010\022\031\n\007pa" + - "yload\030\003 \002(\0132\010.Payload\"H\n\tSendToAll\022\014\n\004pa" + - "th\030\001 \002(\t\022\022\n\nallButSelf\030\002 \002(\010\022\031\n\007payload\030" + - "\003 \002(\0132\010.Payload\"3\n\007Publish\022\r\n\005topic\030\001 \002(" + - "\t\022\031\n\007payload\030\003 
\002(\0132\010.Payload\"Q\n\007Payload\022" + - "\027\n\017enclosedMessage\030\001 \002(\014\022\024\n\014serializerId" + - "\030\002 \002(\005\022\027\n\017messageManifest\030\004 \001(\014B$\n akka." + - "cluster.pubsub.protobuf.msgH\001" + "\n\037DistributedPubSubMessages.proto\"{\n\006Sta" + + "tus\022!\n\010versions\030\001 \003(\0132\017.Status.Version\022\025" + + "\n\rreplyToStatus\030\002 \001(\010\0327\n\007Version\022\031\n\007addr" + + "ess\030\001 \002(\0132\010.Address\022\021\n\ttimestamp\030\002 \002(\003\"\256" + + "\001\n\005Delta\022\036\n\007buckets\030\001 \003(\0132\r.Delta.Bucket" + + "\0322\n\005Entry\022\013\n\003key\030\001 \002(\t\022\017\n\007version\030\002 \002(\003\022" + + "\013\n\003ref\030\003 \001(\t\032Q\n\006Bucket\022\027\n\005owner\030\001 \002(\0132\010." + + "Address\022\017\n\007version\030\002 \002(\003\022\035\n\007content\030\003 \003(" + + "\0132\014.Delta.Entry\"K\n\007Address\022\016\n\006system\030\001 \002" + + "(\t\022\020\n\010hostname\030\002 \002(\t\022\014\n\004port\030\003 \002(\r\022\020\n\010pr", + "otocol\030\004 \001(\t\"F\n\004Send\022\014\n\004path\030\001 \002(\t\022\025\n\rlo" + + "calAffinity\030\002 \002(\010\022\031\n\007payload\030\003 \002(\0132\010.Pay" + + "load\"H\n\tSendToAll\022\014\n\004path\030\001 \002(\t\022\022\n\nallBu" + + "tSelf\030\002 \002(\010\022\031\n\007payload\030\003 \002(\0132\010.Payload\"3" + + "\n\007Publish\022\r\n\005topic\030\001 \002(\t\022\031\n\007payload\030\003 \002(" + + "\0132\010.Payload\"Q\n\007Payload\022\027\n\017enclosedMessag" + + "e\030\001 \002(\014\022\024\n\014serializerId\030\002 \002(\005\022\027\n\017message" + + "Manifest\030\004 \001(\014B$\n akka.cluster.pubsub.pr" + + "otobuf.msgH\001" }; akka.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner = new akka.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() { @@ -7537,7 +7622,7 @@ public final class DistributedPubSubMessages { internal_static_Status_fieldAccessorTable = new akka.protobuf.GeneratedMessage.FieldAccessorTable( internal_static_Status_descriptor, - new java.lang.String[] { "Versions", }); + new java.lang.String[] { "Versions", "ReplyToStatus", }); internal_static_Status_Version_descriptor = internal_static_Status_descriptor.getNestedTypes().get(0); internal_static_Status_Version_fieldAccessorTable = new diff --git a/akka-cluster-tools/src/main/protobuf/DistributedPubSubMessages.proto b/akka-cluster-tools/src/main/protobuf/DistributedPubSubMessages.proto index 73b10a3e86..bd674865aa 100644 --- a/akka-cluster-tools/src/main/protobuf/DistributedPubSubMessages.proto +++ b/akka-cluster-tools/src/main/protobuf/DistributedPubSubMessages.proto @@ -11,6 +11,7 @@ message Status { required int64 timestamp = 2; } repeated Version versions = 1; + optional bool replyToStatus = 2; } message Delta { diff --git a/akka-cluster-tools/src/main/scala/akka/cluster/pubsub/DistributedPubSubMediator.scala b/akka-cluster-tools/src/main/scala/akka/cluster/pubsub/DistributedPubSubMediator.scala index 9689a4206d..15f0f39ea8 100644 --- a/akka-cluster-tools/src/main/scala/akka/cluster/pubsub/DistributedPubSubMediator.scala +++ b/akka-cluster-tools/src/main/scala/akka/cluster/pubsub/DistributedPubSubMediator.scala @@ -221,12 +221,15 @@ object DistributedPubSubMediator { } @SerialVersionUID(1L) - final case class Status(versions: Map[Address, Long]) extends DistributedPubSubMessage + final case class Status(versions: Map[Address, Long], isReplyToStatus: Boolean) extends DistributedPubSubMessage with 
DeadLetterSuppression @SerialVersionUID(1L) final case class Delta(buckets: immutable.Iterable[Bucket]) extends DistributedPubSubMessage with DeadLetterSuppression + // Only for testing purposes, to verify replication + case object DeltaCount + case object GossipTick @SerialVersionUID(1L) @@ -500,6 +503,7 @@ class DistributedPubSubMediator(settings: DistributedPubSubSettings) extends Act var registry: Map[Address, Bucket] = Map.empty.withDefault(a ⇒ Bucket(a, 0L, TreeMap.empty)) var nodes: Set[Address] = Set.empty + var deltaCount = 0L // the version is a timestamp because it is also used when pruning removed entries val nextVersion = { @@ -615,15 +619,21 @@ class DistributedPubSubMediator(settings: DistributedPubSubSettings) extends Act case msg @ Unsubscribed(ack, ref) ⇒ ref ! ack - case Status(otherVersions) ⇒ - // gossip chat starts with a Status message, containing the bucket versions of the other node - val delta = collectDelta(otherVersions) - if (delta.nonEmpty) - sender() ! Delta(delta) - if (otherHasNewerVersions(otherVersions)) - sender() ! Status(versions = myVersions) // it will reply with Delta + case Status(otherVersions, isReplyToStatus) ⇒ + // only accept status from known nodes, otherwise old cluster with same address may interact + // also accept from local for testing purposes + if (nodes(sender().path.address) || sender().path.address.hasLocalScope) { + // gossip chat starts with a Status message, containing the bucket versions of the other node + val delta = collectDelta(otherVersions) + if (delta.nonEmpty) + sender() ! Delta(delta) + if (!isReplyToStatus && otherHasNewerVersions(otherVersions)) + sender() ! Status(versions = myVersions, isReplyToStatus = true) // it will reply with Delta + } case Delta(buckets) ⇒ + deltaCount += 1 + // reply from Status message in the gossip chat // the Delta contains potential updates (newer versions) from the other node // only accept deltas/buckets from known nodes, otherwise there is a risk of @@ -666,6 +676,12 @@ class DistributedPubSubMediator(settings: DistributedPubSubSettings) extends Act if (matchingRole(m)) nodes += m.address + case MemberLeft(m) ⇒ + if (matchingRole(m)) { + nodes -= m.address + registry -= m.address + } + case MemberRemoved(m, _) ⇒ if (m.address == selfAddress) context stop self @@ -683,6 +699,9 @@ class DistributedPubSubMediator(settings: DistributedPubSubSettings) extends Act } }.sum sender() ! count + + case DeltaCount ⇒ + sender() ! deltaCount } private def sendToDeadLetters(msg: Any) = context.system.deadLetters ! DeadLetter(msg, sender(), context.self) @@ -783,7 +802,8 @@ class DistributedPubSubMediator(settings: DistributedPubSubSettings) extends Act def gossip(): Unit = selectRandomNode((nodes - selfAddress).toVector) foreach gossipTo def gossipTo(address: Address): Unit = { - context.actorSelection(self.path.toStringWithAddress(address)) ! Status(versions = myVersions) + val sel = context.actorSelection(self.path.toStringWithAddress(address)) + sel ! 
Status(versions = myVersions, isReplyToStatus = false) } def selectRandomNode(addresses: immutable.IndexedSeq[Address]): Option[Address] = diff --git a/akka-cluster-tools/src/main/scala/akka/cluster/pubsub/protobuf/DistributedPubSubMessageSerializer.scala b/akka-cluster-tools/src/main/scala/akka/cluster/pubsub/protobuf/DistributedPubSubMessageSerializer.scala index 6060746193..d78899d9c0 100644 --- a/akka-cluster-tools/src/main/scala/akka/cluster/pubsub/protobuf/DistributedPubSubMessageSerializer.scala +++ b/akka-cluster-tools/src/main/scala/akka/cluster/pubsub/protobuf/DistributedPubSubMessageSerializer.scala @@ -114,15 +114,20 @@ private[akka] class DistributedPubSubMessageSerializer(val system: ExtendedActor setTimestamp(v). build() }.toVector.asJava - dm.Status.newBuilder().addAllVersions(versions).build() + dm.Status.newBuilder() + .addAllVersions(versions) + .setReplyToStatus(status.isReplyToStatus) + .build() } private def statusFromBinary(bytes: Array[Byte]): Status = statusFromProto(dm.Status.parseFrom(decompress(bytes))) - private def statusFromProto(status: dm.Status): Status = + private def statusFromProto(status: dm.Status): Status = { + val isReplyToStatus = if (status.hasReplyToStatus) status.getReplyToStatus else false Status(status.getVersionsList.asScala.map(v ⇒ - addressFromProto(v.getAddress) → v.getTimestamp)(breakOut)) + addressFromProto(v.getAddress) → v.getTimestamp)(breakOut), isReplyToStatus) + } private def deltaToProto(delta: Delta): dm.Delta = { val buckets = delta.buckets.map { b ⇒ diff --git a/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala b/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala index 1483e8784c..05659b1abf 100644 --- a/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala +++ b/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubMediatorSpec.scala @@ -454,7 +454,7 @@ class DistributedPubSubMediatorSpec extends MultiNodeSpec(DistributedPubSubMedia val thirdAddress = node(third).address runOn(first) { - mediator ! Status(versions = Map.empty) + mediator ! Status(versions = Map.empty, isReplyToStatus = false) val deltaBuckets = expectMsgType[Delta].buckets deltaBuckets.size should ===(3) deltaBuckets.find(_.owner == firstAddress).get.content.size should ===(10) @@ -469,15 +469,15 @@ class DistributedPubSubMediatorSpec extends MultiNodeSpec(DistributedPubSubMedia for (i ← 0 until many) mediator ! Put(createChatUser("u" + (1000 + i))) - mediator ! Status(versions = Map.empty) + mediator ! Status(versions = Map.empty, isReplyToStatus = false) val deltaBuckets1 = expectMsgType[Delta].buckets deltaBuckets1.map(_.content.size).sum should ===(500) - mediator ! Status(versions = deltaBuckets1.map(b ⇒ b.owner → b.version).toMap) + mediator ! Status(versions = deltaBuckets1.map(b ⇒ b.owner → b.version).toMap, isReplyToStatus = false) val deltaBuckets2 = expectMsgType[Delta].buckets deltaBuckets1.map(_.content.size).sum should ===(500) - mediator ! Status(versions = deltaBuckets2.map(b ⇒ b.owner → b.version).toMap) + mediator ! 
Status(versions = deltaBuckets2.map(b ⇒ b.owner → b.version).toMap, isReplyToStatus = false) val deltaBuckets3 = expectMsgType[Delta].buckets deltaBuckets3.map(_.content.size).sum should ===(10 + 9 + 2 + many - 500 - 500) diff --git a/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubRestartSpec.scala b/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubRestartSpec.scala new file mode 100644 index 0000000000..67b12594b9 --- /dev/null +++ b/akka-cluster-tools/src/multi-jvm/scala/akka/cluster/pubsub/DistributedPubSubRestartSpec.scala @@ -0,0 +1,164 @@ +/** + * Copyright (C) 2009-2016 Lightbend Inc. + */ +package akka.cluster.pubsub + +import language.postfixOps +import scala.concurrent.duration._ +import com.typesafe.config.ConfigFactory +import akka.actor.Actor +import akka.actor.ActorRef +import akka.actor.PoisonPill +import akka.actor.Props +import akka.cluster.Cluster +import akka.cluster.ClusterEvent._ +import akka.remote.testconductor.RoleName +import akka.remote.testkit.MultiNodeConfig +import akka.remote.testkit.MultiNodeSpec +import akka.remote.testkit.STMultiNodeSpec +import akka.testkit._ +import akka.actor.ActorLogging +import akka.cluster.pubsub.DistributedPubSubMediator.Internal.Status +import akka.cluster.pubsub.DistributedPubSubMediator.Internal.Delta +import akka.actor.ActorSystem +import scala.concurrent.Await +import akka.actor.Identify +import akka.actor.RootActorPath +import akka.actor.ActorIdentity + +object DistributedPubSubRestartSpec extends MultiNodeConfig { + val first = role("first") + val second = role("second") + val third = role("third") + + commonConfig(ConfigFactory.parseString(""" + akka.loglevel = INFO + akka.cluster.pub-sub.gossip-interval = 500ms + akka.actor.provider = "akka.cluster.ClusterActorRefProvider" + akka.remote.log-remote-lifecycle-events = off + akka.cluster.auto-down-unreachable-after = off + """)) + + testTransport(on = true) + + class Shutdown extends Actor { + def receive = { + case "shutdown" ⇒ context.system.terminate() + } + } + +} + +class DistributedPubSubRestartMultiJvmNode1 extends DistributedPubSubRestartSpec +class DistributedPubSubRestartMultiJvmNode2 extends DistributedPubSubRestartSpec +class DistributedPubSubRestartMultiJvmNode3 extends DistributedPubSubRestartSpec + +class DistributedPubSubRestartSpec extends MultiNodeSpec(DistributedPubSubRestartSpec) with STMultiNodeSpec with ImplicitSender { + import DistributedPubSubRestartSpec._ + import DistributedPubSubMediator._ + + override def initialParticipants = roles.size + + def join(from: RoleName, to: RoleName): Unit = { + runOn(from) { + Cluster(system) join node(to).address + createMediator() + } + enterBarrier(from.name + "-joined") + } + + def createMediator(): ActorRef = DistributedPubSub(system).mediator + def mediator: ActorRef = DistributedPubSub(system).mediator + + def awaitCount(expected: Int): Unit = { + val probe = TestProbe() + awaitAssert { + mediator.tell(Count, probe.ref) + probe.expectMsgType[Int] should ===(expected) + } + } + + "A Cluster with DistributedPubSub" must { + + "startup 3 node cluster" in within(15 seconds) { + join(first, first) + join(second, first) + join(third, first) + enterBarrier("after-1") + } + + "handle restart of nodes with same address" in within(30 seconds) { + mediator ! Subscribe("topic1", testActor) + expectMsgType[SubscribeAck] + awaitCount(3) + + runOn(first) { + mediator ! 
Publish("topic1", "msg1") + } + enterBarrier("pub-msg1") + + expectMsg("msg1") + enterBarrier("got-msg1") + + runOn(second) { + mediator ! Internal.DeltaCount + val oldDeltaCount = expectMsgType[Long] + + enterBarrier("end") + + mediator ! Internal.DeltaCount + val deltaCount = expectMsgType[Long] + deltaCount should ===(oldDeltaCount) + } + + runOn(first) { + mediator ! Internal.DeltaCount + val oldDeltaCount = expectMsgType[Long] + + val thirdAddress = node(third).address + testConductor.shutdown(third).await + + within(20.seconds) { + awaitAssert { + system.actorSelection(RootActorPath(thirdAddress) / "user" / "shutdown") ! Identify(None) + expectMsgType[ActorIdentity](1.second).ref.get + } + } + + system.actorSelection(RootActorPath(thirdAddress) / "user" / "shutdown") ! "shutdown" + + enterBarrier("end") + + mediator ! Internal.DeltaCount + val deltaCount = expectMsgType[Long] + deltaCount should ===(oldDeltaCount) + } + + runOn(third) { + Await.result(system.whenTerminated, 10.seconds) + val newSystem = ActorSystem( + system.name, + ConfigFactory.parseString(s"akka.remote.netty.tcp.port=${Cluster(system).selfAddress.port.get}").withFallback( + system.settings.config)) + try { + // don't join the old cluster + Cluster(newSystem).join(Cluster(newSystem).selfAddress) + val newMediator = DistributedPubSub(newSystem).mediator + val probe = TestProbe()(newSystem) + newMediator.tell(Subscribe("topic2", probe.ref), probe.ref) + probe.expectMsgType[SubscribeAck] + + // let them gossip, but Delta should not be exchanged + probe.expectNoMsg(5.seconds) + newMediator.tell(Internal.DeltaCount, probe.ref) + probe.expectMsg(0L) + + newSystem.actorOf(Props[Shutdown], "shutdown") + Await.ready(newSystem.whenTerminated, 10.seconds) + } finally newSystem.terminate() + } + + } + } + +} diff --git a/akka-cluster-tools/src/test/scala/akka/cluster/pubsub/protobuf/DistributedPubSubMessageSerializerSpec.scala b/akka-cluster-tools/src/test/scala/akka/cluster/pubsub/protobuf/DistributedPubSubMessageSerializerSpec.scala index ad21d3779c..5dc8bf0c97 100644 --- a/akka-cluster-tools/src/test/scala/akka/cluster/pubsub/protobuf/DistributedPubSubMessageSerializerSpec.scala +++ b/akka-cluster-tools/src/test/scala/akka/cluster/pubsub/protobuf/DistributedPubSubMessageSerializerSpec.scala @@ -30,7 +30,7 @@ class DistributedPubSubMessageSerializerSpec extends AkkaSpec { val u2 = system.actorOf(Props.empty, "u2") val u3 = system.actorOf(Props.empty, "u3") val u4 = system.actorOf(Props.empty, "u4") - checkSerialization(Status(Map(address1 → 3, address2 → 17, address3 → 5))) + checkSerialization(Status(Map(address1 → 3, address2 → 17, address3 → 5), isReplyToStatus = true)) checkSerialization(Delta(List( Bucket(address1, 3, TreeMap("/user/u1" → ValueHolder(2, Some(u1)), "/user/u2" → ValueHolder(3, Some(u2)))), Bucket(address2, 17, TreeMap("/user/u3" → ValueHolder(17, Some(u3)))), diff --git a/akka-distributed-data/src/main/scala/akka/cluster/ddata/Replicator.scala b/akka-distributed-data/src/main/scala/akka/cluster/ddata/Replicator.scala index 5a067d0090..8ccb288c0e 100644 --- a/akka-distributed-data/src/main/scala/akka/cluster/ddata/Replicator.scala +++ b/akka-distributed-data/src/main/scala/akka/cluster/ddata/Replicator.scala @@ -276,12 +276,14 @@ object Replicator { final case class Subscribe[A <: ReplicatedData](key: Key[A], subscriber: ActorRef) extends ReplicatorMessage /** * Unregister a subscriber. 
- * @see [[Replicator.Subscribe]] + * + * @see [[Replicator.Subscribe]] */ final case class Unsubscribe[A <: ReplicatedData](key: Key[A], subscriber: ActorRef) extends ReplicatorMessage /** * The data value is retrieved with [[#get]] using the typed key. - * @see [[Replicator.Subscribe]] + * + * @see [[Replicator.Subscribe]] */ final case class Changed[A <: ReplicatedData](key: Key[A])(data: A) extends ReplicatorMessage { /** @@ -752,6 +754,9 @@ final class Replicator(settings: ReplicatorSettings) extends Actor with ActorLog // cluster nodes, doesn't contain selfAddress var nodes: Set[Address] = Set.empty + // cluster weaklyUp nodes, doesn't contain selfAddress + var weaklyUpNodes: Set[Address] = Set.empty + // nodes removed from cluster, to be pruned, and tombstoned var removedNodes: Map[UniqueAddress, Long] = Map.empty var pruningPerformed: Map[UniqueAddress, Long] = Map.empty @@ -810,6 +815,7 @@ final class Replicator(settings: ReplicatorSettings) extends Actor with ActorLog case Subscribe(key, subscriber) ⇒ receiveSubscribe(key, subscriber) case Unsubscribe(key, subscriber) ⇒ receiveUnsubscribe(key, subscriber) case Terminated(ref) ⇒ receiveTerminated(ref) + case MemberWeaklyUp(m) ⇒ receiveWeaklyUpMemberUp(m) case MemberUp(m) ⇒ receiveMemberUp(m) case MemberRemoved(m, _) ⇒ receiveMemberRemoved(m) case _: MemberEvent ⇒ // not of interest @@ -998,7 +1004,7 @@ final class Replicator(settings: ReplicatorSettings) extends Actor with ActorLog changed = Set.empty[String] } - def receiveGossipTick(): Unit = selectRandomNode(nodes.toVector) foreach gossipTo + def receiveGossipTick(): Unit = selectRandomNode(nodes.union(weaklyUpNodes).toVector) foreach gossipTo def gossipTo(address: Address): Unit = { val to = replica(address) @@ -1113,15 +1119,22 @@ final class Replicator(settings: ReplicatorSettings) extends Actor with ActorLog } } - def receiveMemberUp(m: Member): Unit = + def receiveWeaklyUpMemberUp(m: Member): Unit = if (matchingRole(m) && m.address != selfAddress) + weaklyUpNodes += m.address + + def receiveMemberUp(m: Member): Unit = + if (matchingRole(m) && m.address != selfAddress) { nodes += m.address + weaklyUpNodes -= m.address + } def receiveMemberRemoved(m: Member): Unit = { if (m.address == selfAddress) context stop self else if (matchingRole(m)) { nodes -= m.address + weaklyUpNodes -= m.address removedNodes = removedNodes.updated(m.uniqueAddress, allReachableClockTime) unreachable -= m.address }
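The Replicator hunks above let ddata gossip reach members that are still WeaklyUp, so their replicas converge before the members are promoted to Up. A hedged Scala sketch of the user-visible side (the `allow-weakly-up-members` flag is an assumption about the Akka 2.4 cluster settings, not part of this patch)::

    import akka.actor.ActorSystem
    import akka.cluster.ddata.DistributedData
    import com.typesafe.config.ConfigFactory

    // WeaklyUp members only exist while the (assumed) feature flag is enabled;
    // with the change above the Replicator also gossips to such members:
    val system = ActorSystem("sys", ConfigFactory.parseString(
      "akka.cluster.allow-weakly-up-members = on"))
    val replicator = DistributedData(system).replicator

diff --git a/akka-docs/rst/dev/documentation.rst b/akka-docs/rst/dev/documentation.rst index 4f8cd1df23..936da82f96 100644 --- a/akka-docs/rst/dev/documentation.rst +++ b/akka-docs/rst/dev/documentation.rst @@ -116,7 +116,7 @@ Add texlive bin to $PATH: :: - export TEXLIVE_PATH=/usr/local/texlive/2015basic/bin/universal-darwin + export TEXLIVE_PATH=/usr/local/texlive/2016basic/bin/universal-darwin export PATH=$TEXLIVE_PATH:$PATH Add missing tex packages: @@ -131,6 +131,11 @@ Add missing tex packages: sudo tlmgr install helvetic sudo tlmgr install courier sudo tlmgr install multirow + sudo tlmgr install capt-of + sudo tlmgr install needspace + sudo tlmgr install eqparbox + sudo tlmgr install environ + sudo tlmgr install trimspaces If you get the error "unknown locale: UTF-8" when generating the documentation the solution is to define the following environment variables: diff --git a/akka-docs/rst/general/addressing.rst b/akka-docs/rst/general/addressing.rst index 98e52b18d7..0768dc82b4 100644 --- a/akka-docs/rst/general/addressing.rst +++ b/akka-docs/rst/general/addressing.rst @@ -75,11 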
+75,7 @@ Since actors are created in a strictly hierarchical fashion, there exists a unique sequence of actor names given by recursively following the supervision links between child and parent down towards the root of the actor system. This sequence can be seen as enclosing folders in a file system, hence we adopted -the name “path” to refer to it. As in some real file-systems there also are -“symbolic links”, i.e. one actor may be reachable using more than one path, -where all but one involve some translation which decouples part of the path -from the actor’s actual supervision ancestor line; these specialities are -described in the sub-sections to follow. +the name “path” to refer to it, although the actor hierarchy has some fundamental differences from a file system hierarchy. An actor path consists of an anchor, which identifies the actor system, followed by the concatenation of the path elements, from root guardian to the @@ -143,6 +139,18 @@ systems or JVMs. This means that the logical path (supervision hierarchy) and the physical path (actor deployment) of an actor may diverge if one of its ancestors is remotely supervised. + +Actor path alias or symbolic link? +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +As in some real file-systems you might think of a “path alias” or “symbolic link” for an actor, +i.e. one actor being reachable via more than one path. +However, note that the actor hierarchy is different from a file system hierarchy: +you cannot freely create actor paths like symbolic links to refer to arbitrary actors. +As described in the sections on logical and physical actor paths above, +an actor path is either a logical path, which represents the supervision hierarchy, or +a physical path, which represents actor deployment. + + 
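To make the distinction concrete, here is a hedged Scala sketch (hypothetical system and actor names, assuming the Akka 2.4 API)::

    import akka.actor._

    val system = ActorSystem("my-sys")
    // the returned reference's (logical) path mirrors the supervision hierarchy:
    val ref = system.actorOf(Props.empty, "service-a")
    println(ref.path) // akka://my-sys/user/service-a
    // there is deliberately no API to register an additional "alias" path for ref

How are Actor References obtained?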
---------------------------------- diff --git a/akka-docs/rst/images/graph_stage_chain.png b/akka-docs/rst/images/graph_stage_chain.png index 9de3145b21..d0fd7797f0 100644 Binary files a/akka-docs/rst/images/graph_stage_chain.png and b/akka-docs/rst/images/graph_stage_chain.png differ diff --git a/akka-docs/rst/images/graph_stage_chain.svg b/akka-docs/rst/images/graph_stage_chain.svg index b3ad21644b..accbeaa911 100644 --- a/akka-docs/rst/images/graph_stage_chain.svg +++ b/akka-docs/rst/images/graph_stage_chain.svg @@ -1,3 +1,3 @@ - Produced by OmniGraffle 6.4.1 2015-12-10 12:11:46 +0000Canvas 1Layer 1SourceSinkFilteronPush()push(out, elem)if(p(elem))onPull()pull(in)demandif(!p(elem))DuplicateonPush()push(out, elem)onPull()pull(in)demandif(oneLeft)if(!oneLeft)MaponPush()push(out, elem)f(elem)onPull()pull(in)demand + Produced by OmniGraffle 6.5.3 2016-06-10 08:45:37 +0000Canvas 1Layer 1SourceSinkFilteronPush()push(out, elem)if(p(elem))onPull()pull(in)demandif(!p(elem))DuplicateonPush()push(out, elem)onPull()pull(in)demandif(oneLeft)if(!oneLeft)MaponPush()push(out, b)b = f(elem)onPull()pull(in)demand diff --git a/akka-docs/rst/images/graph_stage_diagrams.graffle b/akka-docs/rst/images/graph_stage_diagrams.graffle index 957e6e8478..2ee29b4428 100644 Binary files a/akka-docs/rst/images/graph_stage_diagrams.graffle and b/akka-docs/rst/images/graph_stage_diagrams.graffle differ diff --git a/akka-docs/rst/images/graph_stage_map.png b/akka-docs/rst/images/graph_stage_map.png index 96666ddfee..2115135a9b 100644 Binary files a/akka-docs/rst/images/graph_stage_map.png and b/akka-docs/rst/images/graph_stage_map.png differ diff --git a/akka-docs/rst/images/graph_stage_map.svg b/akka-docs/rst/images/graph_stage_map.svg index 2943f7248f..ab77c6d497 100644 --- a/akka-docs/rst/images/graph_stage_map.svg +++ b/akka-docs/rst/images/graph_stage_map.svg @@ -1,3 +1,3 @@ - Produced by OmniGraffle 6.4.1 2015-12-10 10:31:40 +0000Canvas 1Layer 1MaponPush()push(out, elem)f(elem)onPull()pull(in)demand + Produced by OmniGraffle 6.5.3 2016-06-10 08:45:37 +0000Canvas 1Layer 1MaponPush()push(out, b)b = f(elem)pull(in)demandonPull() diff --git a/akka-docs/rst/java/cluster-metrics.rst b/akka-docs/rst/java/cluster-metrics.rst index d0f2f6a0f4..68e70effbb 100644 --- a/akka-docs/rst/java/cluster-metrics.rst +++ b/akka-docs/rst/java/cluster-metrics.rst @@ -14,9 +14,9 @@ Cluster metrics information is primarily used for load-balancing routers, and can also be used to implement advanced metrics-based node life cycles, such as "Node Let-it-crash" when CPU steal time becomes excessive. -Cluster Metrics Extension is a separate akka module delivered in ``akka-cluster-metrics`` jar. +Cluster Metrics Extension is a separate akka module delivered in ``akka-cluster-metrics`` jar. -To enable usage of the extension you need to add the following dependency to your project: +To enable usage of the extension you need to add the following dependency to your project: :: @@ -29,13 +29,13 @@ and add the following configuration stanza to your ``application.conf`` :: akka.extensions = [ "akka.cluster.metrics.ClusterMetricsExtension" ] - + Make sure to disable legacy metrics in akka-cluster: ``akka.cluster.metrics.enabled=off``, since it is still enabled in akka-cluster by default (for compatibility with past releases). Cluster members with status :ref:`WeaklyUp `, if that feature is enabled, will participate in Cluster Metrics collection and dissemination. 
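The Metrics Events section below shows the Java one-liner for subscribing a listener; as a hedged Scala sketch of the same wiring (the listener actor itself is hypothetical)::

    import akka.actor._
    import akka.cluster.metrics.{ ClusterMetricsChanged, ClusterMetricsExtension }

    class MetricsListener extends Actor {
      def receive = {
        case ClusterMetricsChanged(nodeMetrics) =>
          // inspect the latest metrics of this node and the gossiped
          // metrics of other cluster members here
      }
    }

    val system = ActorSystem("ClusterSystem")
    val listener = system.actorOf(Props[MetricsListener], "metricsListener")
    ClusterMetricsExtension(system).subscribe(listener)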
- + Metrics Collector ----------------- @@ -46,15 +46,15 @@ Certain message routing and let-it-crash functions may not work when Sigar is no Cluster metrics extension comes with two built-in collector implementations: -#. ``akka.cluster.metrics.SigarMetricsCollector``, which requires Sigar provisioning, and is more rich/precise +#. ``akka.cluster.metrics.SigarMetricsCollector``, which requires Sigar provisioning, and is more rich/precise #. ``akka.cluster.metrics.JmxMetricsCollector``, which is used as fall back, and is less rich/precise You can also plug-in your own metrics collector implementation. -By default, metrics extension will use collector provider fall back and will try to load them in this order: +By default, metrics extension will use collector provider fall back and will try to load them in this order: #. configured user-provided collector -#. built-in ``akka.cluster.metrics.SigarMetricsCollector`` +#. built-in ``akka.cluster.metrics.SigarMetricsCollector`` #. and finally ``akka.cluster.metrics.JmxMetricsCollector`` Metrics Events -------------- @@ -71,7 +71,7 @@ which was received during the collector sample period. You can subscribe your metrics listener actors to these events in order to implement custom node lifecycle :: - ClusterMetricsExtension.get(system).subscribe(metricsListenerActor); + ClusterMetricsExtension.get(system).subscribe(metricsListenerActor); Hyperic Sigar Provisioning -------------------------- @@ -79,8 +79,8 @@ Hyperic Sigar Provisioning Both user-provided and built-in metrics collectors can optionally use `Hyperic Sigar `_ for a wider and more accurate range of metrics compared to what can be retrieved from ordinary JMX MBeans. -Sigar is using a native o/s library, and requires library provisioning, i.e. -deployment, extraction and loading of the o/s native library into JVM at runtime. +Sigar is using a native o/s library, and requires library provisioning, i.e. +deployment, extraction and loading of the o/s native library into JVM at runtime. User can provision Sigar classes and native library in one of the following ways: @@ -90,8 +90,15 @@ User can provision Sigar classes and native library in one of the following ways Kamon sigar loader agent will extract and load sigar library during JVM start. #. Place ``sigar.jar`` on the ``classpath`` and Sigar native library for the o/s on the ``java.library.path``. User is required to manage both project dependency and library deployment manually. - -To enable usage of Sigar you can add the following dependency to the user project + +.. warning:: + + When using `Kamon sigar-loader `_ and running multiple + instances of the same application on the same host, you have to make sure that the Sigar library is extracted to a + unique per-instance directory. You can control the extract directory with the + ``akka.cluster.metrics.native-library-extract-folder`` configuration setting. + +To enable usage of Sigar you can add the following dependency to the user project :: @@ -110,7 +117,7 @@ It uses random selection of routees with probabilities derived from the remainin It can be configured to use a specific MetricsSelector to produce the probabilities, a.k.a. weights: * ``heap`` / ``HeapMetricsSelector`` - Used and max JVM heap memory. Weights based on remaining heap capacity; (max - used) / max -* ``load`` / ``SystemLoadAverageMetricsSelector`` - System load average for the past 1 minute, corresponding value can be found in ``top`` of Linux systems. 
The system is possibly nearing a bottleneck if the system load average is nearing number of cpus/cores. Weights based on remaining load capacity; 1 - (load / processors) +* ``load`` / ``SystemLoadAverageMetricsSelector`` - System load average for the past 1 minute, corresponding value can be found in ``top`` of Linux systems. The system is possibly nearing a bottleneck if the system load average is nearing the number of cpus/cores. Weights based on remaining load capacity; 1 - (load / processors) * ``cpu`` / ``CpuMetricsSelector`` - CPU utilization in percentage, sum of User + Sys + Nice + Wait. Weights based on remaining cpu capacity; 1 - utilization * ``mix`` / ``MixMetricsSelector`` - Combines heap, cpu and load. Weights based on mean of remaining capacity of the combined selectors. * Any custom implementation of ``akka.cluster.metrics.MetricsSelector`` @@ -132,7 +139,7 @@ As you can see, the router is defined in the same way as other routers, and in t .. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/resources/factorial.conf#adaptive-router -It is only ``router`` type and the ``metrics-selector`` parameter that is specific to this router, +It is only the ``router`` type and the ``metrics-selector`` parameter that are specific to this router, other things work in the same way as other routers. The same type of router could also have been defined in code: @@ -158,11 +165,11 @@ Custom Metrics Collector Metrics collection is delegated to the implementation of ``akka.cluster.metrics.MetricsCollector`` You can plug-in your own metrics collector instead of built-in -``akka.cluster.metrics.SigarMetricsCollector`` or ``akka.cluster.metrics.JmxMetricsCollector``. +``akka.cluster.metrics.SigarMetricsCollector`` or ``akka.cluster.metrics.JmxMetricsCollector``. -Look at those two implementations for inspiration. +Look at those two implementations for inspiration. -Custom metrics collector implementation class must be specified in the +The custom metrics collector implementation class must be specified in the ``akka.cluster.metrics.collector.provider`` configuration property. Configuration 
diff --git a/akka-docs/rst/java/cluster-usage.rst b/akka-docs/rst/java/cluster-usage.rst index 169917f11d..bcc9ea092c 100644 --- a/akka-docs/rst/java/cluster-usage.rst +++ b/akka-docs/rst/java/cluster-usage.rst @@ -147,7 +147,7 @@ status to ``down`` automatically after the configured time of unreachability. This is a naïve approach to remove unreachable nodes from the cluster membership. It works great for crashes and short transient network partitions, but not for long network -partitions. Both sides of the network partition will see the other side as unreachable +partitions. Both sides of the network partition will see the other side as unreachable and after a while remove it from its cluster membership. Since this happens on both sides the result is that two separate disconnected clusters have been created. This can also happen because of long GC pauses or system overload. @@ -155,14 +155,14 @@ can also happen because of long GC pauses or system overload. .. warning:: We recommend against using the auto-down feature of Akka Cluster in production. - This is crucial for correct behavior if you use :ref:`cluster-singleton-java` or + This is crucial for correct behavior if you use :ref:`cluster-singleton-java` or :ref:`cluster_sharding_java`, especially together with Akka :ref:`persistence-java`. 
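As a hedged illustration of the recommendation above (the configuration key is the one discussed on this page; the system name is hypothetical)::

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    // keep automatic downing off in production and make downing decisions
    // deliberately, e.g. via a well-understood downing strategy:
    val system = ActorSystem("ClusterSystem", ConfigFactory.parseString(
      "akka.cluster.auto-down-unreachable-after = off"))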
- -A pre-packaged solution for the downing problem is provided by -`Split Brain Resolver `_, -which is part of the Lightbend Reactive Platform. If you don’t use RP, you should anyway carefully + +A pre-packaged solution for the downing problem is provided by +`Split Brain Resolver `_, +which is part of the Lightbend Reactive Platform. Even if you don’t use RP, you should carefully read the `documentation `_ -of the Split Brain Resolver and make sure that the solution you are using handles the concerns +of the Split Brain Resolver and make sure that the solution you are using handles the concerns described there. .. note:: If you have *auto-down* enabled and the failure detector triggers, you @@ -427,8 +427,8 @@ If system messages cannot be delivered to a node it will be quarantined and then cannot come back from ``unreachable``. This can happen if there are too many unacknowledged system messages (e.g. watch, Terminated, remote actor deployment, failures of actors supervised by remote parent). Then the node needs to be moved -to the ``down`` or ``removed`` states and the actor system must be restarted before -it can join the cluster again. +to the ``down`` or ``removed`` states and the actor system of the quarantined node +must be restarted before it can join the cluster again. The nodes in the cluster monitor each other by sending heartbeats to detect if a node is unreachable from the rest of the cluster. The heartbeat arrival times are interpreted 
diff --git a/akka-docs/rst/java/code/docs/http/javadsl/HttpClientExampleDocTest.java b/akka-docs/rst/java/code/docs/http/javadsl/HttpClientExampleDocTest.java index f34f7be433..5377fa75e8 100644 --- a/akka-docs/rst/java/code/docs/http/javadsl/HttpClientExampleDocTest.java +++ b/akka-docs/rst/java/code/docs/http/javadsl/HttpClientExampleDocTest.java @@ -4,31 +4,137 @@ package docs.http.javadsl; +import akka.Done; import akka.actor.AbstractActor; -import akka.actor.ActorSystem; import akka.http.javadsl.ConnectHttp; import akka.http.javadsl.HostConnectionPool; import akka.japi.Pair; import akka.japi.pf.ReceiveBuilder; import akka.stream.Materializer; +import akka.util.ByteString; +import scala.compat.java8.FutureConverters; import scala.concurrent.ExecutionContextExecutor; import scala.concurrent.Future; -import akka.stream.ActorMaterializer; import akka.stream.javadsl.*; import akka.http.javadsl.OutgoingConnection; -import akka.http.javadsl.model.*; import akka.http.javadsl.Http; -import scala.util.Try; import static akka.http.javadsl.ConnectHttp.toHost; import static akka.pattern.PatternsCS.*; import java.util.concurrent.CompletionStage; +//#manual-entity-consume-example-1 +import java.io.File; +import akka.actor.ActorSystem; + +import java.util.concurrent.TimeUnit; +import java.util.function.Function; +import akka.stream.ActorMaterializer; +import akka.stream.javadsl.Framing; +import akka.http.javadsl.model.*; +import scala.concurrent.duration.FiniteDuration; +import scala.util.Try; +//#manual-entity-consume-example-1 + @SuppressWarnings("unused") public class HttpClientExampleDocTest { + HttpResponse responseFromSomewhere() { + return null; + } + + void manualEntityConsumeExample() { + //#manual-entity-consume-example-1 + + final ActorSystem system = ActorSystem.create(); + final ExecutionContextExecutor dispatcher = system.dispatcher(); + final ActorMaterializer materializer = ActorMaterializer.create(system); + + final HttpResponse response = responseFromSomewhere(); + + final Function<ByteString, ByteString> transformEachLine = line -> line /* some transformation here */; + + final int maximumFrameLength = 256; + + response.entity().getDataBytes() + .via(Framing.delimiter(ByteString.fromString("\n"), maximumFrameLength, FramingTruncation.ALLOW)) + .map(transformEachLine::apply) + .runWith(FileIO.toPath(new File("/tmp/example.out").toPath()), materializer); + //#manual-entity-consume-example-1 + } + + private + //#manual-entity-consume-example-2 + final class ExamplePerson { + final String name; + public ExamplePerson(String name) { this.name = name; } + } + + public ExamplePerson parse(ByteString line) { + return new ExamplePerson(line.utf8String()); + } + //#manual-entity-consume-example-2 + + void manualEntityConsumeExample2() { + //#manual-entity-consume-example-2 + final ActorSystem system = ActorSystem.create(); + final ExecutionContextExecutor dispatcher = system.dispatcher(); + final ActorMaterializer materializer = ActorMaterializer.create(system); + + final HttpResponse response = responseFromSomewhere(); + + // toStrict to enforce all data be loaded into memory from the connection + final CompletionStage<HttpEntity.Strict> strictEntity = response.entity() + .toStrict(FiniteDuration.create(3, TimeUnit.SECONDS).toMillis(), materializer); + + // while API remains the same to consume dataBytes, now they're in memory already: + + final CompletionStage<ExamplePerson> person = + strictEntity + .thenCompose(strict -> + strict.getDataBytes() + .runFold(ByteString.empty(), (acc, b) -> acc.concat(b), materializer) + .thenApply(this::parse) + ); + + //#manual-entity-consume-example-2 + } + + void manualEntityDiscardExample1() { + //#manual-entity-discard-example-1 + final ActorSystem system = ActorSystem.create(); + final ExecutionContextExecutor dispatcher = system.dispatcher(); + final ActorMaterializer materializer = ActorMaterializer.create(system); + + final HttpResponse response = responseFromSomewhere(); + + final HttpMessage.DiscardedEntity discarded = response.discardEntityBytes(materializer); + + discarded.completionStage().whenComplete((done, ex) -> { + System.out.println("Entity discarded completely!"); + }); + //#manual-entity-discard-example-1 + } + + void manualEntityDiscardExample2() { + //#manual-entity-discard-example-2 + final ActorSystem system = ActorSystem.create(); + final ExecutionContextExecutor dispatcher = system.dispatcher(); + final ActorMaterializer materializer = ActorMaterializer.create(system); + + final HttpResponse response = responseFromSomewhere(); + + final CompletionStage<Done> discardingComplete = response.entity().getDataBytes().runWith(Sink.ignore(), materializer); + + discardingComplete.whenComplete((done, ex) -> { + System.out.println("Entity discarded completely!"); + }); + //#manual-entity-discard-example-2 + } + + // compile only test public void testConstructRequest() { //#outgoing-connection-example 
diff --git a/akka-docs/rst/java/code/docs/http/javadsl/WebSocketClientExampleTest.java b/akka-docs/rst/java/code/docs/http/javadsl/WebSocketClientExampleTest.java index 1348c65636..0fd90fddc4 100644 --- a/akka-docs/rst/java/code/docs/http/javadsl/WebSocketClientExampleTest.java +++ b/akka-docs/rst/java/code/docs/http/javadsl/WebSocketClientExampleTest.java @@ -14,7 +14,6 @@ import akka.http.javadsl.model.ws.TextMessage; import akka.http.javadsl.model.ws.WebSocketRequest; import akka.http.javadsl.model.ws.WebSocketUpgradeResponse; import akka.japi.Pair; -import akka.japi.function.Procedure; import akka.stream.ActorMaterializer; import akka.stream.Materializer; import akka.stream.javadsl.Flow; @@ -63,9 +62,9 @@ public class WebSocketClientExampleTest { // The first value in the pair is a CompletionStage that // completes when the WebSocket request has connected successfully (or failed) final CompletionStage<Done> connected = pair.first().thenApply(upgrade -> { - // just like a regular http request we can get 404 NotFound, - // with a response body, that will be available from upgrade.response - if (upgrade.response().status().equals(StatusCodes.OK)) { + // just like a regular http request we can access the response status, which is available via upgrade.response.status + // status code 101 (Switching Protocols) indicates that the server supports WebSockets + if (upgrade.response().status().equals(StatusCodes.SWITCHING_PROTOCOLS)) { return Done.getInstance(); } else { throw new RuntimeException("Connection failed: " + upgrade.response().status()); @@ -220,9 +219,9 @@ public class WebSocketClientExampleTest { CompletionStage<Done> connected = upgradeCompletion.thenApply(upgrade -> { - // just like a regular http request we can get 404 NotFound, - // with a response body, that will be available from upgrade.response - if (upgrade.response().status().equals(StatusCodes.OK)) { + // just like a regular http request we can access the response status, which is available via upgrade.response.status + // status code 101 (Switching Protocols) indicates that the server supports WebSockets + if (upgrade.response().status().equals(StatusCodes.SWITCHING_PROTOCOLS)) { return Done.getInstance(); } else { throw new RuntimeException(("Connection failed: " + upgrade.response().status())); 
diff --git a/akka-docs/rst/java/code/docs/http/javadsl/server/HttpServerExampleDocTest.java b/akka-docs/rst/java/code/docs/http/javadsl/server/HttpServerExampleDocTest.java index 37a773e5f6..d06236fd5b 100644 --- a/akka-docs/rst/java/code/docs/http/javadsl/server/HttpServerExampleDocTest.java +++ b/akka-docs/rst/java/code/docs/http/javadsl/server/HttpServerExampleDocTest.java @@ -4,26 +4,37 @@ package docs.http.javadsl.server; +import akka.Done; import akka.NotUsed; import akka.actor.ActorSystem; import akka.http.javadsl.ConnectHttp; import akka.http.javadsl.Http; import akka.http.javadsl.IncomingConnection; import akka.http.javadsl.ServerBinding; +import akka.http.javadsl.marshallers.jackson.Jackson; import akka.http.javadsl.model.*; +import akka.http.javadsl.model.headers.Connection; +import akka.http.javadsl.server.Route; +import akka.http.javadsl.server.Unmarshaller; import akka.japi.function.Function; import akka.stream.ActorMaterializer; +import akka.stream.IOResult; import akka.stream.Materializer; +import akka.stream.javadsl.FileIO; import akka.stream.javadsl.Flow; import akka.stream.javadsl.Sink; import akka.stream.javadsl.Source; import akka.util.ByteString; +import scala.concurrent.ExecutionContextExecutor; import java.io.BufferedReader; +import java.io.File; import java.io.InputStreamReader; import java.util.concurrent.CompletionStage; import java.util.concurrent.TimeUnit; +import static akka.http.javadsl.server.Directives.*; + @SuppressWarnings("unused") public class HttpServerExampleDocTest { @@ -205,4 +216,113 @@ public class HttpServerExampleDocTest { public static void main(String[] args) throws Exception { fullServerExample(); } + + + //#consume-entity-directive + class Bid { + final String userId; + final int bid; + + Bid(String userId, int bid) { + this.userId = userId; + this.bid = bid; + } + } + //#consume-entity-directive + + void consumeEntityUsingEntityDirective() { + //#consume-entity-directive + final ActorSystem system = ActorSystem.create(); + final ExecutionContextExecutor dispatcher = system.dispatcher(); + final ActorMaterializer materializer = ActorMaterializer.create(system); + + final Unmarshaller<HttpEntity, Bid> asBid = Jackson.unmarshaller(Bid.class); + + final Route s = path("bid", () -> + put(() -> + entity(asBid, bid -> + // incoming entity is fully consumed and converted into a Bid + complete("The bid was: " + bid) + ) + ) + ); + //#consume-entity-directive + } + + void consumeEntityUsingRawDataBytes() { + //#consume-raw-dataBytes + final ActorSystem system = ActorSystem.create(); + final ExecutionContextExecutor dispatcher = system.dispatcher(); + final ActorMaterializer materializer = ActorMaterializer.create(system); + + final Route s = + put(() -> + path("lines", () -> + withoutSizeLimit(() -> + extractDataBytes(bytes -> { + final CompletionStage<IOResult> res = bytes.runWith(FileIO.toPath(new File("/tmp/example.out").toPath()), materializer); + + return onComplete(() -> res, ioResult -> + // we only want to respond once the incoming data has been handled: + complete("Finished writing data: " + ioResult)); + }) + ) + ) + ); + + //#consume-raw-dataBytes + } + + void discardEntityUsingRawBytes() { + //#discard-discardEntityBytes + final ActorSystem system = ActorSystem.create(); + final ExecutionContextExecutor dispatcher = system.dispatcher(); + final ActorMaterializer materializer = ActorMaterializer.create(system); + + final Route s = + put(() -> + path("lines", () -> + withoutSizeLimit(() -> + extractRequest(r -> { + final CompletionStage<Done> res = r.discardEntityBytes(materializer).completionStage(); + + return onComplete(() -> res, done -> + // we only want to respond once the incoming data has been handled: + complete("Finished discarding data: " + done)); + }) + ) + ) + ); + //#discard-discardEntityBytes + } + + void discardEntityManuallyCloseConnections() { + //#discard-close-connections + final ActorSystem system = ActorSystem.create(); + final ExecutionContextExecutor dispatcher = system.dispatcher(); + final ActorMaterializer materializer = ActorMaterializer.create(system); + + final Route s = + put(() -> + path("lines", () -> + withoutSizeLimit(() -> + extractDataBytes(bytes -> { + // Closing connections, method 1 (eager): + // we deem this request as illegal, and close the connection right away: + bytes.runWith(Sink.cancelled(), materializer); // "brutally" closes the connection + + // Closing connections, method 2 (graceful): + // consider draining connection and replying with `Connection: Close` header + // if you want the client to close after this request/reply cycle instead: + return respondWithHeader(Connection.create("close"), () -> + complete(StatusCodes.FORBIDDEN, "Not allowed!") + ); + }) + ) + ) + ); + //#discard-close-connections + } + + } 
diff --git a/akka-docs/rst/java/code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java new file mode 100644 index 0000000000..21d7fcc471 --- /dev/null +++ b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java @@ -0,0 +1,788 @@ +/* + * Copyright (C) 2016-2016 Lightbend Inc. 
+ */ +package docs.http.javadsl.server.directives; + +import akka.actor.ActorSystem; +import akka.dispatch.ExecutionContexts; +import akka.event.Logging; +import akka.event.LoggingAdapter; +import akka.http.javadsl.model.ContentTypes; +import akka.http.javadsl.model.HttpEntities; +import akka.http.javadsl.model.HttpEntity; +import akka.http.javadsl.model.HttpMethods; +import akka.http.javadsl.model.HttpRequest; +import akka.http.javadsl.model.HttpResponse; +import akka.http.javadsl.model.ResponseEntity; +import akka.http.javadsl.model.StatusCodes; +import akka.http.javadsl.model.headers.RawHeader; +import akka.http.javadsl.model.headers.Server; +import akka.http.javadsl.model.headers.ProductVersion; +import akka.http.javadsl.settings.RoutingSettings; +import akka.http.javadsl.testkit.JUnitRouteTest; +import akka.http.javadsl.server.*; +import akka.japi.pf.PFBuilder; +import akka.stream.ActorMaterializer; +import akka.stream.ActorMaterializerSettings; +import akka.stream.javadsl.FileIO; +import akka.stream.javadsl.Sink; +import akka.stream.javadsl.Source; +import akka.util.ByteString; +import org.junit.Ignore; +import org.junit.Test; +import scala.concurrent.ExecutionContextExecutor; + +import java.nio.file.Paths; +import java.util.Arrays; +import java.util.Collections; +import java.util.Optional; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.Executors; +import java.util.function.Function; +import java.util.function.Predicate; +import java.util.function.Supplier; +import java.util.stream.StreamSupport; + +public class BasicDirectivesExamplesTest extends JUnitRouteTest { + + @Test + public void testExtract() { + //#extract + final Route route = extract( + ctx -> ctx.getRequest().getUri().toString().length(), + len -> complete("The length of the request URI is " + len) + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/abcdef")) + .assertEntity("The length of the request URI is 25"); + //#extract + } + + @Test + public void testExtractLog() { + //#extractLog + final Route route = extractLog(log -> { + log.debug("I'm logging things in much detail..!"); + return complete("It's amazing!"); + }); + + // tests: + testRoute(route).run(HttpRequest.GET("/abcdef")) + .assertEntity("It's amazing!"); + //#extractLog + } + + @Test + public void testWithMaterializer() { + //#withMaterializer + final ActorMaterializerSettings settings = ActorMaterializerSettings.create(system()); + final ActorMaterializer special = ActorMaterializer.create(settings, system(), "special"); + + final Route sample = path("sample", () -> + extractMaterializer(mat -> + onSuccess(() -> + // explicitly use the materializer: + Source.single("Materialized by " + mat.hashCode() + "!") + .runWith(Sink.head(), mat), this::complete + ) + ) + ); + + final Route route = route( + pathPrefix("special", () -> + withMaterializer(special, () -> sample) // `special` materializer will be used + ), + sample // default materializer will be used + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/sample")) + .assertEntity("Materialized by " + materializer().hashCode()+ "!"); + testRoute(route).run(HttpRequest.GET("/special/sample")) + .assertEntity("Materialized by " + special.hashCode()+ "!"); + //#withMaterializer + } + + @Test + public void testExtractMaterializer() { + //#extractMaterializer + final Route route = path("sample", () -> + extractMaterializer(mat -> + onSuccess(() -> + // explicitly use the materializer: + Source.single("Materialized by " 
+ mat.hashCode() + "!") + .runWith(Sink.head(), mat), this::complete + ) + ) + ); // default materializer will be used + + testRoute(route).run(HttpRequest.GET("/sample")) + .assertEntity("Materialized by " + materializer().hashCode() + "!"); + //#extractMaterializer + } + + @Test + public void testWithExecutionContext() { + //#withExecutionContext + + final ExecutionContextExecutor special = + ExecutionContexts.fromExecutor(Executors.newFixedThreadPool(1)); + + final Route sample = path("sample", () -> + extractExecutionContext(executor -> + onSuccess(() -> + CompletableFuture.supplyAsync(() -> + "Run on " + executor.hashCode() + "!", executor + ), this::complete + ) + ) + ); + + final Route route = route( + pathPrefix("special", () -> + // `special` execution context will be used + withExecutionContext(special, () -> sample) + ), + sample // default execution context will be used + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/sample")) + .assertEntity("Run on " + system().dispatcher().hashCode() + "!"); + testRoute(route).run(HttpRequest.GET("/special/sample")) + .assertEntity("Run on " + special.hashCode() + "!"); + //#withExecutionContext + } + + @Test + public void testExtractExecutionContext() { + //#extractExecutionContext + final Route route = path("sample", () -> + extractExecutionContext(executor -> + onSuccess(() -> + CompletableFuture.supplyAsync( + // uses the `executor` ExecutionContext + () -> "Run on " + executor.hashCode() + "!", executor + ), str -> complete(str) + ) + ) + ); + + //tests: + testRoute(route).run(HttpRequest.GET("/sample")) + .assertEntity("Run on " + system().dispatcher().hashCode() + "!"); + //#extractExecutionContext + } + + @Test + public void testWithLog() { + //#withLog + final LoggingAdapter special = Logging.getLogger(system(), "SpecialRoutes"); + + final Route sample = path("sample", () -> + extractLog(log -> { + final String msg = "Logging using " + log + "!"; + log.debug(msg); + return complete(msg); + } + ) + ); + + final Route route = route( + pathPrefix("special", () -> + withLog(special, () -> sample) + ), + sample + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/sample")) + .assertEntity("Logging using " + system().log() + "!"); + testRoute(route).run(HttpRequest.GET("/special/sample")) + .assertEntity("Logging using " + special + "!"); + //#withLog + } + + @Ignore("Ignore compile-only test") + @Test + public void testWithSettings() { + //#withSettings + final RoutingSettings special = + RoutingSettings + .create(system().settings().config()) + .withFileIODispatcher("special-io-dispatcher"); + + final Route sample = path("sample", () -> { + // internally uses the configured fileIODispatcher: + // ContentTypes.APPLICATION_JSON, source + final Source<ByteString, Object> source = + FileIO.fromPath(Paths.get("example.json")) + .mapMaterializedValue(completionStage -> (Object) completionStage); + return complete( + HttpResponse.create() + .withEntity(HttpEntities.create(ContentTypes.APPLICATION_JSON, source)) + ); + }); + + final Route route = get(() -> + route( + pathPrefix("special", () -> + // `special` file-io-dispatcher will be used to read the file + withSettings(special, () -> sample) + ), + sample // default file-io-dispatcher will be used to read the file + ) + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/special/sample")) + .assertEntity("{}"); + testRoute(route).run(HttpRequest.GET("/sample")) + .assertEntity("{}"); + //#withSettings + } + + @Test + public void testMapResponse() { + //#mapResponse + final Route route = mapResponse( + response -> response.withStatus(StatusCodes.BAD_GATEWAY), + () -> complete("abc") + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/abcdef?ghi=12")) + .assertStatusCode(StatusCodes.BAD_GATEWAY); + //#mapResponse + } + + @Test + public void testMapResponseAdvanced() { + //#mapResponse-advanced + class ApiRoute { + + private final ActorSystem system; + + private final LoggingAdapter log; + + private final HttpEntity nullJsonEntity = + HttpEntities.create(ContentTypes.APPLICATION_JSON, "{}"); + + public ApiRoute(ActorSystem system) { + this.system = system; + this.log = Logging.getLogger(system, "ApiRoutes"); + } + + private HttpResponse nonSuccessToEmptyJsonEntity(HttpResponse response) { + if (response.status().isSuccess()) { + return response; + } else { + log.warning( + "Dropping response entity since response status code was: " + response.status()); + return response.withEntity((ResponseEntity) nullJsonEntity); + } + } + + /** Wrapper for all of our JSON API routes */ + private Route apiRoute(Supplier<Route> innerRoutes) { + return mapResponse(this::nonSuccessToEmptyJsonEntity, innerRoutes); + } + } + + final ApiRoute api = new ApiRoute(system()); + + final Route route = api.apiRoute(() -> + get(() -> complete(StatusCodes.INTERNAL_SERVER_ERROR)) + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/")) + .assertEntity("{}"); + //#mapResponse-advanced + } + + @Test + public void testMapRouteResult() { + //#mapRouteResult + // this directive is a joke, don't do that :-) + final Route route = mapRouteResult(r -> { + if (r instanceof Complete) { + final HttpResponse response = ((Complete) r).getResponse(); + return RouteResults.complete(response.withStatus(200)); + } else { + return r; + } + }, () -> complete(StatusCodes.ACCEPTED)); + + // tests: + testRoute(route).run(HttpRequest.GET("/")) + .assertStatusCode(StatusCodes.OK); + //#mapRouteResult + } + + @Test + public void testMapRouteResultFuture() { + //#mapRouteResultFuture + final Route route = mapRouteResultFuture(cr -> + cr.exceptionally(t -> { + if (t instanceof IllegalArgumentException) { + return RouteResults.complete( + HttpResponse.create().withStatus(StatusCodes.INTERNAL_SERVER_ERROR)); + } else { + return null; + } + }).thenApply(rr -> { + if (rr instanceof Complete) { + final HttpResponse res = ((Complete) rr).getResponse(); + return RouteResults.complete( + res.addHeader(Server.create(ProductVersion.create("MyServer", "1.0")))); + } else { + return rr; + } + }), () -> complete("Hello world!")); + + // tests: + testRoute(route).run(HttpRequest.GET("/")) + .assertStatusCode(StatusCodes.OK) + .assertHeaderExists(Server.create(ProductVersion.create("MyServer", "1.0"))); + //#mapRouteResultFuture + } + + @Test + public void testMapResponseEntity() { + //#mapResponseEntity + final Function<ResponseEntity, ResponseEntity> prefixEntity = entity -> { + if (entity instanceof HttpEntity.Strict) { + final HttpEntity.Strict strict = (HttpEntity.Strict) entity; + return HttpEntities.create( + strict.getContentType(), + ByteString.fromString("test").concat(strict.getData())); + } else { + throw new IllegalStateException("Unexpected entity type"); + } + }; + + final Route route = mapResponseEntity(prefixEntity, () -> complete("abc")); + + testRoute(route).run(HttpRequest.GET("/")) + .assertEntity("testabc"); + //#mapResponseEntity + } + + @Test + public void testMapResponseHeaders() { + //#mapResponseHeaders + // adds all request headers to the response + final Route echoRequestHeaders = extract( + ctx -> ctx.getRequest().getHeaders(), + headers -> respondWithHeaders(headers, () -> complete("test")) + ); + + final Route route = mapResponseHeaders(headers -> { + headers.removeIf(header -> header.lowercaseName().equals("id")); + return headers; + }, () -> echoRequestHeaders); + + // tests: + testRoute(route).run(HttpRequest.GET("/").addHeaders( + Arrays.asList(RawHeader.create("id", "12345"), RawHeader.create("id2", "67890")))) + .assertHeaderKindNotExists("id") + .assertHeaderExists("id2", "67890"); + //#mapResponseHeaders + } + + @Ignore("Not implemented yet") + @Test + public void testMapInnerRoute() { + //#mapInnerRoute + // TODO: implement mapInnerRoute + //#mapInnerRoute + } + + @Test + public void testMapRejections() { + //#mapRejections + // ignore any rejections and replace them by AuthorizationFailedRejection + final Route route = mapRejections( + rejections -> Collections.singletonList((Rejection) Rejections.authorizationFailed()), + () -> path("abc", () -> complete("abc")) + ); + + // tests: + runRouteUnSealed(route, HttpRequest.GET("/")) + .assertRejections(Rejections.authorizationFailed()); + testRoute(route).run(HttpRequest.GET("/abc")) + .assertStatusCode(StatusCodes.OK); + //#mapRejections + } + + @Test + public void testRecoverRejections() { + //#recoverRejections + final Function<Optional<ProvidedCredentials>, Optional<String>> neverAuth = + creds -> Optional.empty(); + final Function<Optional<ProvidedCredentials>, Optional<String>> alwaysAuth = + creds -> Optional.of("id"); + + final Route originalRoute = pathPrefix("auth", () -> + route( + path("never", () -> + authenticateBasic("my-realm", neverAuth, obj -> complete("Welcome to the bat-cave!")) + ), + path("always", () -> + authenticateBasic("my-realm", alwaysAuth, obj -> complete("Welcome to the secret place!")) + ) + ) + ); + + final Function<Iterable<Rejection>, Boolean> existsAuthenticationFailedRejection = + rejections -> + StreamSupport.stream(rejections.spliterator(), false) + .anyMatch(r -> r instanceof AuthenticationFailedRejection); + + final Route route = recoverRejections(rejections -> { + if (existsAuthenticationFailedRejection.apply(rejections)) { + return RouteResults.complete( + HttpResponse.create().withEntity("Nothing to see here, move along.")); + } else if (!rejections.iterator().hasNext()) { // see "Empty Rejections" for more details + return RouteResults.complete( + HttpResponse.create().withStatus(StatusCodes.NOT_FOUND) + .withEntity("Literally nothing to see here.")); + } else { + return RouteResults.rejected(rejections); + } + }, () -> originalRoute); + + // tests: + testRoute(route).run(HttpRequest.GET("/auth/never")) + .assertStatusCode(StatusCodes.OK) + .assertEntity("Nothing to see here, move along."); + testRoute(route).run(HttpRequest.GET("/auth/always")) + .assertStatusCode(StatusCodes.OK) + .assertEntity("Welcome to the secret place!"); + testRoute(route).run(HttpRequest.GET("/auth/does_not_exist")) + .assertStatusCode(StatusCodes.NOT_FOUND) + .assertEntity("Literally nothing to see here."); + //#recoverRejections + } + + @Test + public void testRecoverRejectionsWith() { + //#recoverRejectionsWith + final Function<Optional<ProvidedCredentials>, Optional<String>> neverAuth = + creds -> Optional.empty(); + + final Route originalRoute = pathPrefix("auth", () -> + path("never", () -> + authenticateBasic("my-realm", neverAuth, obj -> complete("Welcome to the bat-cave!")) + ) + ); + + final Function<Iterable<Rejection>, Boolean> existsAuthenticationFailedRejection = + rejections -> + StreamSupport.stream(rejections.spliterator(), false) + .anyMatch(r -> r instanceof AuthenticationFailedRejection); + + final Route route = recoverRejectionsWith( + rejections -> CompletableFuture.supplyAsync(() -> { + if (existsAuthenticationFailedRejection.apply(rejections)) { + return RouteResults.complete( + HttpResponse.create().withEntity("Nothing to see here, move along.")); + } else { + return RouteResults.rejected(rejections); + } + }), () -> originalRoute); + + // tests: + testRoute(route).run(HttpRequest.GET("/auth/never")) + .assertStatusCode(StatusCodes.OK) + .assertEntity("Nothing to see here, move along."); + //#recoverRejectionsWith + } + + @Test + public void testMapRequest() { + //#mapRequest + final Route route = mapRequest(req -> + req.withMethod(HttpMethods.POST), () -> + extractRequest(req -> complete("The request method was " + req.method().name())) + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/")) + .assertEntity("The request method was POST"); + //#mapRequest + } + + @Test + public void testMapRequestContext() { + //#mapRequestContext + final Route route = mapRequestContext(ctx -> + ctx.withRequest(HttpRequest.create().withMethod(HttpMethods.POST)), () -> + extractRequest(req -> complete(req.method().value())) + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/abc/def/ghi")) + .assertEntity("POST"); + //#mapRequestContext + } + + @Test + public void testMapRouteResult0() { + //#mapRouteResult + final Route route = mapRouteResult(rr -> { + final Iterable<Rejection> rejections = Collections.singletonList(Rejections.authorizationFailed()); + return RouteResults.rejected(rejections); + }, () -> complete("abc")); + + // tests: + runRouteUnSealed(route, HttpRequest.GET("/")) + .assertRejections(Rejections.authorizationFailed()); + //#mapRouteResult + } + + public static final class MyCustomRejection implements CustomRejection {} + + @Test + public void testMapRouteResultPF() { + //#mapRouteResultPF + final Route route = mapRouteResultPF( + new PFBuilder<RouteResult, RouteResult>() + .match(Rejected.class, rejected -> { + final Iterable<Rejection> rejections = + Collections.singletonList(Rejections.authorizationFailed()); + return RouteResults.rejected(rejections); + }).build(), () -> reject(new MyCustomRejection())); + + // tests: + runRouteUnSealed(route, HttpRequest.GET("/")) + .assertRejections(Rejections.authorizationFailed()); + //#mapRouteResultPF + } + + @Test + public void testMapRouteResultWithPF() { + //#mapRouteResultWithPF + final Route route = mapRouteResultWithPF( + new PFBuilder<RouteResult, CompletionStage<RouteResult>>() + .match(Rejected.class, rejected -> CompletableFuture.supplyAsync(() -> { + final Iterable<Rejection> rejections = + Collections.singletonList(Rejections.authorizationFailed()); + return RouteResults.rejected(rejections); + }) + ).build(), () -> reject(new MyCustomRejection())); + + // tests: + runRouteUnSealed(route, HttpRequest.GET("/")) + .assertRejections(Rejections.authorizationFailed()); + //#mapRouteResultWithPF + } + + @Test + public void testMapRouteResultWith() { + //#mapRouteResultWith + final Route route = mapRouteResultWith(rr -> CompletableFuture.supplyAsync(() -> { + if (rr instanceof Rejected) { + final Iterable<Rejection> rejections = + Collections.singletonList(Rejections.authorizationFailed()); + return RouteResults.rejected(rejections); + } else { + return rr; + } + }), () -> reject(new MyCustomRejection())); + + // tests: + runRouteUnSealed(route, HttpRequest.GET("/")) + .assertRejections(Rejections.authorizationFailed()); + //#mapRouteResultWith + } + + @Test + public void testPass() { + //#pass + final Route route = pass(() -> complete("abc")); + + // tests: + testRoute(route).run(HttpRequest.GET("/")) + .assertEntity("abc"); + //#pass + } + + private Route providePrefixedStringRoute(String value) { + return provide("prefix:" + value, this::complete); + } + + @Test + public void testProvide() { + //#provide + final Route route = providePrefixedStringRoute("test"); + + // tests: + testRoute(route).run(HttpRequest.GET("/")) + .assertEntity("prefix:test"); + //#provide + } + + @Ignore("Test failed") + @Test + public void testCancelRejections() { + //#cancelRejections + final Predicate<Rejection> isMethodRejection = p -> p instanceof MethodRejection; + final Route route = cancelRejections( + isMethodRejection, () -> post(() -> complete("Result")) + ); + + // tests: + runRouteUnSealed(route, HttpRequest.GET("/")) + .assertRejections(); + //#cancelRejections + } + + @Ignore("Test failed") + @Test + public void testCancelRejection() { + //#cancelRejection + final Route route = cancelRejection(Rejections.method(HttpMethods.POST), () -> + post(() -> complete("Result")) + ); + + // tests: + runRouteUnSealed(route, HttpRequest.GET("/")) + .assertRejections(); + //#cancelRejection + } + + @Test + public void testExtractRequest() { + //#extractRequest + final Route route = extractRequest(request -> + complete("Request method is " + request.method().name() + + " and content-type is " + request.entity().getContentType()) + ); + + // tests: + testRoute(route).run(HttpRequest.POST("/").withEntity("text")) + .assertEntity("Request method is POST and content-type is text/plain; charset=UTF-8"); + testRoute(route).run(HttpRequest.GET("/")) + .assertEntity("Request method is GET and content-type is none/none"); + //#extractRequest + } + + @Test + public void testExtractSettings() { + //#extractSettings + final Route route = extractSettings(settings -> + complete("RoutingSettings.renderVanityFooter = " + settings.getRenderVanityFooter()) + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/")) + .assertEntity("RoutingSettings.renderVanityFooter = true"); + //#extractSettings + } + + @Test + public void testMapSettings() { + //#mapSettings + final Route route = mapSettings(settings -> + settings.withFileGetConditional(false), () -> + extractSettings(settings -> + complete("RoutingSettings.fileGetConditional = " + settings.getFileGetConditional()) + ) + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/")) + .assertEntity("RoutingSettings.fileGetConditional = false"); + //#mapSettings + } + + @Test + public void testExtractRequestContext() { + //#extractRequestContext + final Route route = extractRequestContext(ctx -> { + ctx.getLog().debug("Using access to additional context available, like the logger."); + final HttpRequest request = ctx.getRequest(); + return complete("Request method is " + request.method().name() + + " and content-type is " + request.entity().getContentType()); + }); + + // tests: + testRoute(route).run(HttpRequest.POST("/").withEntity("text")) + .assertEntity("Request method is POST and content-type is text/plain; charset=UTF-8"); + testRoute(route).run(HttpRequest.GET("/")) + .assertEntity("Request method is GET and content-type is none/none"); + //#extractRequestContext + } + + @Test + public void testExtractUri() { + //#extractUri + final Route route = extractUri(uri -> + complete("Full URI: " + uri) + ); + + // tests: + // tests are executed with the host assumed to be "example.com" + testRoute(route).run(HttpRequest.GET("/")) + .assertEntity("Full URI: http://example.com/"); + testRoute(route).run(HttpRequest.GET("/test")) + .assertEntity("Full URI: http://example.com/test"); + //#extractUri + } + + @Test + public void testMapUnmatchedPath() { + //#mapUnmatchedPath + final Function<String, String> ignore456 = path -> { + int slashPos = path.indexOf("/"); + if (slashPos != -1) { + String head = path.substring(0, slashPos); + String tail = path.substring(slashPos); + if (head.length() <= 3) { + return tail; + } else { + return path.substring(3); + } + } else { + return path; + } + }; + + final Route route = pathPrefix("123", () -> + mapUnmatchedPath(ignore456, () -> + path("abc", () -> + complete("Content") + ) + ) + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/123/abc")) + .assertEntity("Content"); + testRoute(route).run(HttpRequest.GET("/123456/abc")) + .assertEntity("Content"); + //#mapUnmatchedPath + } + + @Test + public void testExtractUnmatchedPath() { + //#extractUnmatchedPath + final Route route = pathPrefix("abc", () -> + extractUnmatchedPath(remaining -> + complete("Unmatched: '" + remaining + "'") + ) + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/abc")) + .assertEntity("Unmatched: ''"); + testRoute(route).run(HttpRequest.GET("/abc/456")) + .assertEntity("Unmatched: '/456'"); + //#extractUnmatchedPath + } + +} 
diff --git a/akka-docs/rst/java/code/docs/http/javadsl/server/directives/CodingDirectivesExamplesTest.java b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/CodingDirectivesExamplesTest.java new file mode 100644 index 0000000000..0fde3f7599 --- /dev/null +++ b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/CodingDirectivesExamplesTest.java @@ -0,0 +1,156 @@ +/* + * Copyright (C) 2016-2016 Lightbend Inc. + */ +package docs.http.javadsl.server.directives; + +import akka.http.javadsl.model.HttpRequest; +import akka.http.javadsl.model.headers.AcceptEncoding; +import akka.http.javadsl.model.headers.ContentEncoding; +import akka.http.javadsl.model.headers.HttpEncodings; +import akka.http.javadsl.server.Coder; +import akka.http.javadsl.server.Rejections; +import akka.http.javadsl.server.Route; +import akka.http.javadsl.testkit.JUnitRouteTest; +import akka.util.ByteString; +import org.junit.Test; + +import java.util.Collections; + +import static akka.http.javadsl.server.Unmarshaller.entityToString; + +public class CodingDirectivesExamplesTest extends JUnitRouteTest { + + @Test + public void testResponseEncodingAccepted() { + //#responseEncodingAccepted + final Route route = responseEncodingAccepted(HttpEncodings.GZIP, () -> + complete("content") + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/")) + .assertEntity("content"); + runRouteUnSealed(route, + HttpRequest.GET("/") + .addHeader(AcceptEncoding.create(HttpEncodings.DEFLATE))) + .assertRejections(Rejections.unacceptedResponseEncoding(HttpEncodings.GZIP)); + //#responseEncodingAccepted + } + + @Test + public void testEncodeResponse() { + //#encodeResponse + final Route route = encodeResponse(() -> complete("content")); + + // tests: + testRoute(route).run( + HttpRequest.GET("/") + .addHeader(AcceptEncoding.create(HttpEncodings.GZIP)) + .addHeader(AcceptEncoding.create(HttpEncodings.DEFLATE)) + ).assertHeaderExists(ContentEncoding.create(HttpEncodings.GZIP)); + + testRoute(route).run( + HttpRequest.GET("/") + .addHeader(AcceptEncoding.create(HttpEncodings.DEFLATE)) + ).assertHeaderExists(ContentEncoding.create(HttpEncodings.DEFLATE)); + + // This case failed! 
+// testRoute(route).run( +// HttpRequest.GET("/") +// .addHeader(AcceptEncoding.create(HttpEncodings.IDENTITY)) +// ).assertHeaderExists(ContentEncoding.create(HttpEncodings.IDENTITY)); + + //#encodeResponse + } + + @Test + public void testEncodeResponseWith() { + //#encodeResponseWith + final Route route = encodeResponseWith( + Collections.singletonList(Coder.Gzip), + () -> complete("content") + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/")) + .assertHeaderExists(ContentEncoding.create(HttpEncodings.GZIP)); + + testRoute(route).run( + HttpRequest.GET("/") + .addHeader(AcceptEncoding.create(HttpEncodings.GZIP)) + .addHeader(AcceptEncoding.create(HttpEncodings.DEFLATE)) + ).assertHeaderExists(ContentEncoding.create(HttpEncodings.GZIP)); + + runRouteUnSealed(route, + HttpRequest.GET("/") + .addHeader(AcceptEncoding.create(HttpEncodings.DEFLATE)) + ).assertRejections(Rejections.unacceptedResponseEncoding(HttpEncodings.GZIP)); + + runRouteUnSealed(route, + HttpRequest.GET("/") + .addHeader(AcceptEncoding.create(HttpEncodings.IDENTITY)) + ).assertRejections(Rejections.unacceptedResponseEncoding(HttpEncodings.GZIP)); + //#encodeResponseWith + } + + @Test + public void testDecodeRequest() { + //#decodeRequest + final ByteString helloGzipped = Coder.Gzip.encode(ByteString.fromString("Hello")); + final ByteString helloDeflated = Coder.Deflate.encode(ByteString.fromString("Hello")); + + final Route route = decodeRequest(() -> + entity(entityToString(), content -> + complete("Request content: '" + content + "'") + ) + ); + + // tests: + testRoute(route).run( + HttpRequest.POST("/").withEntity(helloGzipped) + .addHeader(ContentEncoding.create(HttpEncodings.GZIP))) + .assertEntity("Request content: 'Hello'"); + + testRoute(route).run( + HttpRequest.POST("/").withEntity(helloDeflated) + .addHeader(ContentEncoding.create(HttpEncodings.DEFLATE))) + .assertEntity("Request content: 'Hello'"); + + testRoute(route).run( + HttpRequest.POST("/").withEntity("hello uncompressed") + .addHeader(ContentEncoding.create(HttpEncodings.IDENTITY))) + .assertEntity( "Request content: 'hello uncompressed'"); + //#decodeRequest + } + + @Test + public void testDecodeRequestWith() { + //#decodeRequestWith + final ByteString helloGzipped = Coder.Gzip.encode(ByteString.fromString("Hello")); + final ByteString helloDeflated = Coder.Deflate.encode(ByteString.fromString("Hello")); + + final Route route = decodeRequestWith(Coder.Gzip, () -> + entity(entityToString(), content -> + complete("Request content: '" + content + "'") + ) + ); + + // tests: + testRoute(route).run( + HttpRequest.POST("/").withEntity(helloGzipped) + .addHeader(ContentEncoding.create(HttpEncodings.GZIP))) + .assertEntity("Request content: 'Hello'"); + + runRouteUnSealed(route, + HttpRequest.POST("/").withEntity(helloDeflated) + .addHeader(ContentEncoding.create(HttpEncodings.DEFLATE))) + .assertRejections(Rejections.unsupportedRequestEncoding(HttpEncodings.GZIP)); + + runRouteUnSealed(route, + HttpRequest.POST("/").withEntity("hello") + .addHeader(ContentEncoding.create(HttpEncodings.IDENTITY))) + .assertRejections(Rejections.unsupportedRequestEncoding(HttpEncodings.GZIP)); + //#decodeRequestWith + } + +} diff --git a/akka-docs/rst/java/code/docs/http/javadsl/server/directives/ExecutionDirectivesExamplesTest.java b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/ExecutionDirectivesExamplesTest.java new file mode 100644 index 0000000000..b8e1809732 --- /dev/null +++ 
b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/ExecutionDirectivesExamplesTest.java @@ -0,0 +1,75 @@ +/* + * Copyright (C) 2016-2016 Lightbend Inc. + */ +package docs.http.javadsl.server.directives; + +import akka.http.javadsl.model.HttpRequest; +import akka.http.javadsl.model.StatusCodes; +import akka.http.javadsl.server.ExceptionHandler; +import akka.http.javadsl.server.PathMatchers; +import akka.http.javadsl.server.RejectionHandler; +import akka.http.javadsl.server.Rejections; +import akka.http.javadsl.server.Route; +import akka.http.javadsl.server.ValidationRejection; +import akka.http.javadsl.testkit.JUnitRouteTest; +import org.junit.Test; + +import static akka.http.javadsl.server.PathMatchers.integerSegment; + +public class ExecutionDirectivesExamplesTest extends JUnitRouteTest { + + @Test + public void testHandleExceptions() { + //#handleExceptions + final ExceptionHandler divByZeroHandler = ExceptionHandler.newBuilder() + .match(ArithmeticException.class, x -> + complete(StatusCodes.BAD_REQUEST, "You've got your arithmetic wrong, fool!")) + .build(); + + final Route route = + path(PathMatchers.segment("divide").slash(integerSegment()).slash(integerSegment()), (a, b) -> + handleExceptions(divByZeroHandler, () -> complete("The result is " + (a / b))) + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/divide/10/5")) + .assertEntity("The result is 2"); + testRoute(route).run(HttpRequest.GET("/divide/10/0")) + .assertStatusCode(StatusCodes.BAD_REQUEST) + .assertEntity("You've got your arithmetic wrong, fool!"); + //#handleExceptions + } + + @Test + public void testHandleRejections() { + //#handleRejections + final RejectionHandler totallyMissingHandler = RejectionHandler.newBuilder() + .handleNotFound(complete(StatusCodes.NOT_FOUND, "Oh man, what you are looking for is long gone.")) + .handle(ValidationRejection.class, r -> complete(StatusCodes.INTERNAL_SERVER_ERROR, r.message())) + .build(); + + final Route route = pathPrefix("handled", () -> + handleRejections(totallyMissingHandler, () -> + route( + path("existing", () -> complete("This path exists")), + path("boom", () -> reject(Rejections.validationRejection("This didn't work."))) + ) + ) + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/handled/existing")) + .assertEntity("This path exists"); + // applies default handler + testRoute(route).run(HttpRequest.GET("/missing")) + .assertStatusCode(StatusCodes.NOT_FOUND) + .assertEntity("The requested resource could not be found."); + testRoute(route).run(HttpRequest.GET("/handled/missing")) + .assertStatusCode(StatusCodes.NOT_FOUND) + .assertEntity("Oh man, what you are looking for is long gone."); + testRoute(route).run(HttpRequest.GET("/handled/boom")) + .assertStatusCode(StatusCodes.INTERNAL_SERVER_ERROR) + .assertEntity("This didn't work."); + //#handleRejections + } +} diff --git a/akka-docs/rst/java/code/docs/http/javadsl/server/directives/FileAndResourceDirectivesExamplesTest.java b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/FileAndResourceDirectivesExamplesTest.java new file mode 100644 index 0000000000..b9b0da0ebc --- /dev/null +++ b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/FileAndResourceDirectivesExamplesTest.java @@ -0,0 +1,124 @@ +/* + * Copyright (C) 2016-2016 Lightbend Inc. 
+ */ +package docs.http.javadsl.server.directives; + +import akka.http.javadsl.model.HttpRequest; +import akka.http.javadsl.model.StatusCodes; +import akka.http.javadsl.server.PathMatchers; +import akka.http.javadsl.server.Route; +import akka.http.javadsl.server.directives.DirectoryRenderer; +import akka.http.javadsl.testkit.JUnitRouteTest; +import org.junit.Ignore; +import org.junit.Test; +import scala.NotImplementedError; + +import static akka.http.javadsl.server.PathMatchers.segment; + +public class FileAndResourceDirectivesExamplesTest extends JUnitRouteTest { + + @Ignore("Compile only test") + @Test + public void testGetFromFile() { + //#getFromFile + final Route route = path(PathMatchers.segment("logs").slash(segment()), name -> + getFromFile(name + ".log") + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/logs/example")) + .assertEntity("example file contents"); + //#getFromFile + } + + @Ignore("Compile only test") + @Test + public void testGetFromResource() { + //#getFromResource + final Route route = path(PathMatchers.segment("logs").slash(segment()), name -> + getFromResource(name + ".log") + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/logs/example")) + .assertEntity("example file contents"); + //#getFromResource + } + + @Ignore("Compile only test") + @Test + public void testListDirectoryContents() { + //#listDirectoryContents + final Route route = route( + path("tmp", () -> listDirectoryContents("/tmp")), + path("custom", () -> { + // implement your custom renderer here + final DirectoryRenderer renderer = renderVanityFooter -> { + throw new NotImplementedError(); + }; + return listDirectoryContents(renderer, "/tmp"); + }) + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/logs/example")) + .assertEntity("example file contents"); + //#listDirectoryContents + } + + @Ignore("Compile only test") + @Test + public void testGetFromBrowseableDirectory() { + //#getFromBrowseableDirectory + final Route route = path("tmp", () -> + getFromBrowseableDirectory("/tmp") + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/tmp")) + .assertStatusCode(StatusCodes.OK); + //#getFromBrowseableDirectory + } + + @Ignore("Compile only test") + @Test + public void testGetFromBrowseableDirectories() { + //#getFromBrowseableDirectories + final Route route = path("tmp", () -> + getFromBrowseableDirectories("/main", "/backups") + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/tmp")) + .assertStatusCode(StatusCodes.OK); + //#getFromBrowseableDirectories + } + + @Ignore("Compile only test") + @Test + public void testGetFromDirectory() { + //#getFromDirectory + final Route route = pathPrefix("tmp", () -> + getFromDirectory("/tmp") + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/tmp/example")) + .assertEntity("example file contents"); + //#getFromDirectory + } + + @Ignore("Compile only test") + @Test + public void testGetFromResourceDirectory() { + //#getFromResourceDirectory + final Route route = pathPrefix("examples", () -> + getFromResourceDirectory("/examples") + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/examples/example-1")) + .assertEntity("example file contents"); + //#getFromResourceDirectory + } +} diff --git a/akka-docs/rst/java/code/docs/http/javadsl/server/directives/FileUploadDirectivesExamplesTest.java b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/FileUploadDirectivesExamplesTest.java new file mode 100644 index 0000000000..61a65f06e9 --- /dev/null +++ 
b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/FileUploadDirectivesExamplesTest.java
@@ -0,0 +1,140 @@
+/**
+ * Copyright (C) 2016-2016 Lightbend Inc.
+ */
+package docs.http.javadsl.server.directives;
+
+import akka.http.impl.engine.rendering.BodyPartRenderer;
+import akka.http.javadsl.model.*;
+import akka.http.javadsl.server.Route;
+import akka.http.javadsl.server.Unmarshaller;
+import akka.http.javadsl.server.directives.FileInfo;
+import akka.http.javadsl.testkit.JUnitRouteTest;
+import akka.stream.javadsl.Framing;
+import akka.stream.javadsl.Source;
+import akka.util.ByteString;
+import org.junit.Ignore;
+import org.junit.Test;
+import scala.concurrent.duration.Duration;
+import scala.concurrent.duration.FiniteDuration;
+
+import java.io.File;
+import java.nio.charset.Charset;
+import java.nio.file.Files;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.CompletionStage;
+import java.util.concurrent.TimeUnit;
+import java.util.function.BiFunction;
+
+public class FileUploadDirectivesExamplesTest extends JUnitRouteTest {
+
+  @Test
+  public void testUploadedFile() {
+    //#uploadedFile
+    // function (FileInfo, File) => Route to process the file metadata and file itself
+    BiFunction<FileInfo, File, Route> infoFileRoute =
+      (info, file) -> {
+        // do something with the file and file metadata ...
+        file.delete();
+        return complete(StatusCodes.OK);
+      };
+
+    final Route route = uploadedFile("csv", infoFileRoute);
+
+    Map<String, String> filenameMapping = new HashMap<>();
+    filenameMapping.put("filename", "data.csv");
+
+    akka.http.javadsl.model.Multipart.FormData multipartForm =
+      Multiparts.createStrictFormDataFromParts(Multiparts.createFormDataBodyPartStrict("csv",
+        HttpEntities.create(ContentTypes.TEXT_PLAIN_UTF8,
+          "1,5,7\n11,13,17"), filenameMapping));
+
+    // test:
+    testRoute(route).run(HttpRequest.POST("/")
+      .withEntity(
+        multipartForm.toEntity(HttpCharsets.UTF_8,
+          BodyPartRenderer
+            .randomBoundaryWithDefaults())))
+      .assertStatusCode(StatusCodes.OK);
+    //#uploadedFile
+  }
+
+  @Test
+  public void testFileUpload() {
+    //#fileUpload
+    final Route route = extractRequestContext(ctx -> {
+      // function (FileInfo, Source) => Route to process the file contents
+      BiFunction<FileInfo, Source<ByteString, Object>, Route> processUploadedFile =
+        (metadata, byteSource) -> {
+          CompletionStage<Integer> sumF = byteSource.via(Framing.delimiter(
+              ByteString.fromString("\n"), 1024))
+            .mapConcat(bs -> Arrays.asList(bs.utf8String().split(",")))
+            .map(s -> Integer.parseInt(s))
+            .runFold(0, (acc, n) -> acc + n, ctx.getMaterializer());
+          return onSuccess(() -> sumF, sum -> complete("Sum: " + sum));
+        };
+      return fileUpload("csv", processUploadedFile);
+    });
+
+    Map<String, String> filenameMapping = new HashMap<>();
+    filenameMapping.put("filename", "primes.csv");
+
+    akka.http.javadsl.model.Multipart.FormData multipartForm =
+      Multiparts.createStrictFormDataFromParts(
+        Multiparts.createFormDataBodyPartStrict("csv",
+          HttpEntities.create(ContentTypes.TEXT_PLAIN_UTF8,
+            "2,3,5\n7,11,13,17,23\n29,31,37\n"), filenameMapping));
+
+    // test:
+    testRoute(route).run(HttpRequest.POST("/").withEntity(
+      multipartForm.toEntity(HttpCharsets.UTF_8, BodyPartRenderer.randomBoundaryWithDefaults())))
+      .assertStatusCode(StatusCodes.OK).assertEntityAs(Unmarshaller.entityToString(), "Sum: 178");
+    //#fileUpload
+  }
+
+  @Ignore("compileOnly")
+  @Test
+  public void testFileProcessing() {
+    //#fileProcessing
+    final Route route = extractRequestContext(ctx -> {
+      // function (FileInfo, Source) => Route to process the file contents
+      BiFunction<FileInfo, Source<ByteString, Object>, Route> processUploadedFile =
(metadata, byteSource) -> { + CompletionStage sumF = byteSource.via(Framing.delimiter( + ByteString.fromString("\n"), 1024)) + .mapConcat(bs -> Arrays.asList(bs.utf8String().split(","))) + .map(s -> Integer.parseInt(s)) + .runFold(0, (acc, n) -> acc + n, ctx.getMaterializer()); + return onSuccess(() -> sumF, sum -> complete("Sum: " + sum)); + }; + return fileUpload("csv", processUploadedFile); + }); + + Map filenameMapping = new HashMap<>(); + filenameMapping.put("filename", "primes.csv"); + + String prefix = "primes"; + String suffix = ".csv"; + + File tempFile = null; + try { + tempFile = File.createTempFile(prefix, suffix); + tempFile.deleteOnExit(); + Files.write(tempFile.toPath(), Arrays.asList("2,3,5", "7,11,13,17,23", "29,31,37"), Charset.forName("UTF-8")); + } catch (Exception e) { + // ignore + } + + + akka.http.javadsl.model.Multipart.FormData multipartForm = + Multiparts.createFormDataFromPath("csv", ContentTypes.TEXT_PLAIN_UTF8, tempFile.toPath()); + + // test: + testRoute(route).run(HttpRequest.POST("/").withEntity( + multipartForm.toEntity(HttpCharsets.UTF_8, BodyPartRenderer.randomBoundaryWithDefaults()))) + .assertStatusCode(StatusCodes.OK).assertEntityAs(Unmarshaller.entityToString(), "Sum: 178"); + //# + } +} diff --git a/akka-docs/rst/java/code/docs/http/javadsl/server/directives/FormFieldDirectivesExamplesTest.java b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/FormFieldDirectivesExamplesTest.java new file mode 100644 index 0000000000..69c0e11239 --- /dev/null +++ b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/FormFieldDirectivesExamplesTest.java @@ -0,0 +1,137 @@ +/* + * Copyright (C) 2016-2016 Lightbend Inc. + */ +package docs.http.javadsl.server.directives; + +import akka.http.javadsl.model.FormData; +import akka.http.javadsl.model.HttpRequest; +import akka.http.javadsl.model.StatusCodes; +import akka.http.javadsl.server.Route; +import akka.http.javadsl.server.StringUnmarshallers; +import akka.http.javadsl.testkit.JUnitRouteTest; +import akka.japi.Pair; +import org.junit.Test; + +import java.util.List; +import java.util.Map; +import java.util.Map.Entry; +import java.util.function.Function; +import java.util.stream.Collectors; + +public class FormFieldDirectivesExamplesTest extends JUnitRouteTest { + + @Test + public void testFormField() { + //#formField + final Route route = route( + formField("color", color -> + complete("The color is '" + color + "'") + ), + formField(StringUnmarshallers.INTEGER, "id", id -> + complete("The id is '" + id + "'") + ) + ); + + // tests: + final FormData formData = FormData.create(Pair.create("color", "blue")); + testRoute(route).run(HttpRequest.POST("/").withEntity(formData.toEntity())) + .assertEntity("The color is 'blue'"); + + testRoute(route).run(HttpRequest.GET("/")) + .assertStatusCode(StatusCodes.BAD_REQUEST) + .assertEntity("Request is missing required form field 'color'"); + //#formField + } + + @Test + public void testFormFieldMap() { + //#formFieldMap + final Function, String> mapToString = map -> + map.entrySet() + .stream() + .map(e -> e.getKey() + " = '" + e.getValue() +"'") + .collect(Collectors.joining(", ")); + + + final Route route = formFieldMap(fields -> + complete("The form fields are " + mapToString.apply(fields)) + ); + + // tests: + final FormData formDataDiffKey = + FormData.create( + Pair.create("color", "blue"), + Pair.create("count", "42")); + testRoute(route).run(HttpRequest.POST("/").withEntity(formDataDiffKey.toEntity())) + .assertEntity("The form fields are color = 'blue', 
count = '42'"); + + final FormData formDataSameKey = + FormData.create( + Pair.create("x", "1"), + Pair.create("x", "5")); + testRoute(route).run(HttpRequest.POST("/").withEntity(formDataSameKey.toEntity())) + .assertEntity( "The form fields are x = '5'"); + //#formFieldMap + } + + @Test + public void testFormFieldMultiMap() { + //#formFieldMultiMap + final Function>, String> mapToString = map -> + map.entrySet() + .stream() + .map(e -> e.getKey() + " -> " + e.getValue().size()) + .collect(Collectors.joining(", ")); + + final Route route = formFieldMultiMap(fields -> + complete("There are form fields " + mapToString.apply(fields)) + ); + + // test: + final FormData formDataDiffKey = + FormData.create( + Pair.create("color", "blue"), + Pair.create("count", "42")); + testRoute(route).run(HttpRequest.POST("/").withEntity(formDataDiffKey.toEntity())) + .assertEntity("There are form fields color -> 1, count -> 1"); + + final FormData formDataSameKey = + FormData.create( + Pair.create("x", "23"), + Pair.create("x", "4"), + Pair.create("x", "89")); + testRoute(route).run(HttpRequest.POST("/").withEntity(formDataSameKey.toEntity())) + .assertEntity("There are form fields x -> 3"); + //#formFieldMultiMap + } + + @Test + public void testFormFieldList() { + //#formFieldList + final Function>, String> listToString = list -> + list.stream() + .map(e -> e.getKey() + " = '" + e.getValue() +"'") + .collect(Collectors.joining(", ")); + + final Route route = formFieldList(fields -> + complete("The form fields are " + listToString.apply(fields)) + ); + + // tests: + final FormData formDataDiffKey = + FormData.create( + Pair.create("color", "blue"), + Pair.create("count", "42")); + testRoute(route).run(HttpRequest.POST("/").withEntity(formDataDiffKey.toEntity())) + .assertEntity("The form fields are color = 'blue', count = '42'"); + + final FormData formDataSameKey = + FormData.create( + Pair.create("x", "23"), + Pair.create("x", "4"), + Pair.create("x", "89")); + testRoute(route).run(HttpRequest.POST("/").withEntity(formDataSameKey.toEntity())) + .assertEntity("The form fields are x = '23', x = '4', x = '89'"); + //#formFieldList + } +} diff --git a/akka-docs/rst/java/code/docs/http/javadsl/server/directives/MiscDirectivesExamplesTest.java b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/MiscDirectivesExamplesTest.java new file mode 100644 index 0000000000..0f36352b76 --- /dev/null +++ b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/MiscDirectivesExamplesTest.java @@ -0,0 +1,64 @@ +/* + * Copyright (C) 2016-2016 Lightbend Inc. 
+ */
+package docs.http.javadsl.server.directives;
+
+import akka.http.javadsl.model.HttpRequest;
+import akka.http.javadsl.model.StatusCodes;
+import akka.http.javadsl.server.Route;
+import akka.http.javadsl.server.Unmarshaller;
+import akka.http.javadsl.testkit.JUnitRouteTest;
+import org.junit.Test;
+
+import java.util.Arrays;
+import java.util.function.Function;
+
+public class MiscDirectivesExamplesTest extends JUnitRouteTest {
+
+  @Test
+  public void testWithSizeLimit() {
+    //#withSizeLimitExample
+    final Route route = withSizeLimit(500, () ->
+      entity(Unmarshaller.entityToString(), (entity) ->
+        complete("ok")
+      )
+    );
+
+    Function<Integer, HttpRequest> withEntityOfSize = (sizeLimit) -> {
+      char[] charArray = new char[sizeLimit];
+      Arrays.fill(charArray, '0');
+      return HttpRequest.POST("/").withEntity(new String(charArray));
+    };
+
+    // tests:
+    testRoute(route).run(withEntityOfSize.apply(500))
+      .assertStatusCode(StatusCodes.OK);
+
+    testRoute(route).run(withEntityOfSize.apply(501))
+      .assertStatusCode(StatusCodes.BAD_REQUEST);
+    //#withSizeLimitExample
+  }
+
+  @Test
+  public void testWithoutSizeLimit() {
+    //#withoutSizeLimitExample
+    final Route route = withoutSizeLimit(() ->
+      entity(Unmarshaller.entityToString(), (entity) ->
+        complete("ok")
+      )
+    );
+
+    Function<Integer, HttpRequest> withEntityOfSize = (sizeLimit) -> {
+      char[] charArray = new char[sizeLimit];
+      Arrays.fill(charArray, '0');
+      return HttpRequest.POST("/").withEntity(new String(charArray));
+    };
+
+    // tests:
+    // will work even if you have configured akka.http.parsing.max-content-length = 500
+    testRoute(route).run(withEntityOfSize.apply(501))
+      .assertStatusCode(StatusCodes.OK);
+    //#withoutSizeLimitExample
+  }
+
+}
diff --git a/akka-docs/rst/java/code/docs/http/javadsl/server/directives/ParameterDirectivesExamplesTest.java b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/ParameterDirectivesExamplesTest.java
new file mode 100644
index 0000000000..203a4a595b
--- /dev/null
+++ b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/ParameterDirectivesExamplesTest.java
@@ -0,0 +1,121 @@
+/*
+ * Copyright (C) 2016-2016 Lightbend Inc.
+ */
+package docs.http.javadsl.server.directives;
+
+import akka.http.javadsl.model.HttpRequest;
+import akka.http.javadsl.model.StatusCodes;
+import akka.http.javadsl.server.Route;
+import akka.http.javadsl.testkit.JUnitRouteTest;
+import org.junit.Test;
+
+import java.util.Map.Entry;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+
+public class ParameterDirectivesExamplesTest extends JUnitRouteTest {
+
+  @Test
+  public void testParameter() {
+    //#parameter
+    final Route route = parameter("color", color ->
+      complete("The color is '" + color + "'")
+    );
+
+    // tests:
+    testRoute(route).run(HttpRequest.GET("/?color=blue"))
+      .assertEntity("The color is 'blue'");
+
+    testRoute(route).run(HttpRequest.GET("/"))
+      .assertStatusCode(StatusCodes.NOT_FOUND)
+      .assertEntity("Request is missing required query parameter 'color'");
+    //#parameter
+  }
+
+  @Test
+  public void testParameters() {
+    //#parameters
+    final Route route = parameter("color", color ->
+      parameter("backgroundColor", backgroundColor ->
+        complete("The color is '" + color
+          + "' and the background is '" + backgroundColor + "'")
+      )
+    );
+
+    // tests:
+    testRoute(route).run(HttpRequest.GET("/?color=blue&backgroundColor=red"))
+      .assertEntity("The color is 'blue' and the background is 'red'");
+
+    testRoute(route).run(HttpRequest.GET("/?color=blue"))
+      .assertStatusCode(StatusCodes.NOT_FOUND)
+      .assertEntity("Request is missing required query parameter 'backgroundColor'");
+    //#parameters
+  }
+
+  @Test
+  public void testParameterMap() {
+    //#parameterMap
+    final Function<Entry<String, String>, String> paramString =
+      entry -> entry.getKey() + " = '" + entry.getValue() + "'";
+
+    final Route route = parameterMap(params -> {
+      final String pString = params.entrySet()
+        .stream()
+        .map(paramString::apply)
+        .collect(Collectors.joining(", "));
+      return complete("The parameters are " + pString);
+    });
+
+    // tests:
+    testRoute(route).run(HttpRequest.GET("/?color=blue&count=42"))
+      .assertEntity("The parameters are color = 'blue', count = '42'");
+
+    testRoute(route).run(HttpRequest.GET("/?x=1&x=2"))
+      .assertEntity("The parameters are x = '2'");
+    //#parameterMap
+  }
+
+  @Test
+  public void testParameterMultiMap() {
+    //#parameterMultiMap
+    final Route route = parameterMultiMap(params -> {
+      final String pString = params.entrySet()
+        .stream()
+        .map(e -> e.getKey() + " -> " + e.getValue().size())
+        .collect(Collectors.joining(", "));
+      return complete("There are parameters " + pString);
+    });
+
+    // tests:
+    testRoute(route).run(HttpRequest.GET("/?color=blue&count=42"))
+      .assertEntity("There are parameters color -> 1, count -> 1");
+
+    testRoute(route).run(HttpRequest.GET("/?x=23&x=42"))
+      .assertEntity("There are parameters x -> 2");
+    //#parameterMultiMap
+  }
+
+  @Test
+  public void testParameterSeq() {
+    //#parameterSeq
+    final Function<Entry<String, String>, String> paramString =
+      entry -> entry.getKey() + " = '" + entry.getValue() + "'";
+
+    final Route route = parameterList(params -> {
+      final String pString = params.stream()
+        .map(paramString::apply)
+        .collect(Collectors.joining(", "));
+
+      return complete("The parameters are " + pString);
+    });
+
+    // tests:
+    testRoute(route).run(HttpRequest.GET("/?color=blue&count=42"))
+      .assertEntity("The parameters are color = 'blue', count = '42'");
+
+    testRoute(route).run(HttpRequest.GET("/?x=1&x=2"))
+      .assertEntity("The parameters are x = '1', x = '2'");
+    //#parameterSeq
+  }
+
+}
diff --git a/akka-docs/rst/java/code/docs/http/javadsl/server/directives/PathDirectivesExamplesTest.java b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/PathDirectivesExamplesTest.java
new file mode 100644
index 0000000000..b8187f6ef0
--- /dev/null
+++ b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/PathDirectivesExamplesTest.java
@@ -0,0 +1,322 @@
+/*
+ * Copyright (C) 2015-2016 Lightbend Inc.
+ */
+
+package docs.http.javadsl.server.directives;
+
+import java.util.Arrays;
+import java.util.regex.Pattern;
+
+import org.junit.Test;
+
+import akka.http.javadsl.model.HttpRequest;
+import akka.http.javadsl.model.StatusCodes;
+import akka.http.javadsl.server.Route;
+import akka.http.javadsl.testkit.JUnitRouteTest;
+import static akka.http.javadsl.server.PathMatchers.segment;
+import static akka.http.javadsl.server.PathMatchers.segments;
+import static akka.http.javadsl.server.PathMatchers.integerSegment;
+import static akka.http.javadsl.server.PathMatchers.neutral;
+import static akka.http.javadsl.server.PathMatchers.slash;
+import java.util.function.Supplier;
+import akka.http.javadsl.server.directives.RouteAdapter;
+import static java.util.regex.Pattern.compile;
+
+public class PathDirectivesExamplesTest extends JUnitRouteTest {
+
+  //# path-prefix-test, path-suffix, raw-path-prefix, raw-path-prefix-test
+  Supplier<Route> completeWithUnmatchedPath = () ->
+    extractUnmatchedPath((path) -> complete(path.toString()));
+  //#
+
+  @Test
+  public void testPathExamples() {
+    //# path-dsl
+    // matches /foo/
+    path(segment("foo").slash(), () -> complete(StatusCodes.OK));
+
+    // matches e.g. /foo/123 and extracts "123" as a String
+    path(segment("foo").slash(segment(compile("\\d+"))), (value) ->
+      complete(StatusCodes.OK));
+
+    // matches e.g. /foo/bar123 and extracts "123" as a String
+    path(segment("foo").slash(segment(compile("bar(\\d+)"))), (value) ->
+      complete(StatusCodes.OK));
+
+    // similar to `path(Segments)`
+    path(neutral().repeat(0, 10), () -> complete(StatusCodes.OK));
+
+    // identical to path("foo" ~ (PathEnd | Slash))
+    path(segment("foo").orElse(slash()), () -> complete(StatusCodes.OK));
+    //# path-dsl
+  }
+
+  @Test
+  public void testBasicExamples() {
+    path("test", () -> complete(StatusCodes.OK));
+
+    // matches "/test" as well
+    path(segment("test"), () -> complete(StatusCodes.OK));
+  }
+
+  @Test
+  public void testPathExample() {
+    //# pathPrefix
+    final Route route =
+      route(
+        path("foo", () -> complete("/foo")),
+        path(segment("foo").slash("bar"), () -> complete("/foo/bar")),
+        pathPrefix("ball", () ->
+          route(
+            pathEnd(() -> complete("/ball")),
+            path(integerSegment(), (i) ->
+              complete((i % 2 == 0) ? "even ball" : "odd ball"))
+          )
+        )
+      );
+
+    // tests:
+    testRoute(route).run(HttpRequest.GET("/")).assertStatusCode(StatusCodes.NOT_FOUND);
+    testRoute(route).run(HttpRequest.GET("/foo")).assertEntity("/foo");
+    testRoute(route).run(HttpRequest.GET("/foo/bar")).assertEntity("/foo/bar");
+    testRoute(route).run(HttpRequest.GET("/ball/1337")).assertEntity("odd ball");
+    //# pathPrefix
+  }
+
+  @Test
+  public void testPathEnd() {
+    //# path-end
+    final Route route =
+      route(
+        pathPrefix("foo", () ->
+          route(
+            pathEnd(() -> complete("/foo")),
+            path("bar", () -> complete("/foo/bar"))
+          )
+        )
+      );
+
+    // tests:
+    testRoute(route).run(HttpRequest.GET("/foo")).assertEntity("/foo");
+    testRoute(route).run(HttpRequest.GET("/foo/")).assertStatusCode(StatusCodes.NOT_FOUND);
+    testRoute(route).run(HttpRequest.GET("/foo/bar")).assertEntity("/foo/bar");
+    //# path-end
+  }
+
+  @Test
+  public void testPathEndOrSingleSlash() {
+    //# path-end-or-single-slash
+    final Route route =
+      route(
+        pathPrefix("foo", () ->
+          route(
+            pathEndOrSingleSlash(() -> complete("/foo")),
+            path("bar", () -> complete("/foo/bar"))
+          )
+        )
+      );
+    // tests:
+    testRoute(route).run(HttpRequest.GET("/foo")).assertEntity("/foo");
+    testRoute(route).run(HttpRequest.GET("/foo/")).assertEntity("/foo");
+    testRoute(route).run(HttpRequest.GET("/foo/bar")).assertEntity("/foo/bar");
+    //# path-end-or-single-slash
+  }
+
+  @Test
+  public void testPathPrefix() {
+    //# path-prefix
+    final Route route =
+      route(
+        pathPrefix("ball", () ->
+          route(
+            pathEnd(() -> complete("/ball")),
+            path(integerSegment(), (i) ->
+              complete((i % 2 == 0) ? "even ball" : "odd ball"))
+          )
+        )
+      );
+    // tests:
+    testRoute(route).run(HttpRequest.GET("/")).assertStatusCode(StatusCodes.NOT_FOUND);
+    testRoute(route).run(HttpRequest.GET("/ball")).assertEntity("/ball");
+    testRoute(route).run(HttpRequest.GET("/ball/1337")).assertEntity("odd ball");
+    //# path-prefix
+  }
+
+  @Test
+  public void testPathPrefixTest() {
+    //# path-prefix-test
+    final Route route =
+      route(
+        pathPrefixTest(segment("foo").orElse("bar"), () ->
+          route(
+            pathPrefix("foo", () -> completeWithUnmatchedPath.get()),
+            pathPrefix("bar", () -> completeWithUnmatchedPath.get())
+          )
+        )
+      );
+    // tests:
+    testRoute(route).run(HttpRequest.GET("/foo/doo")).assertEntity("/doo");
+    testRoute(route).run(HttpRequest.GET("/bar/yes")).assertEntity("/yes");
+    //# path-prefix-test
+  }
+
+  @Test
+  public void testPathSingleSlash() {
+    //# path-single-slash
+    final Route route =
+      route(
+        pathSingleSlash(() -> complete("root")),
+        pathPrefix("ball", () ->
+          route(
+            pathSingleSlash(() -> complete("/ball/")),
+            path(integerSegment(), (i) ->
+              complete((i % 2 == 0) ? "even ball" : "odd ball"))
+          )
+        )
+      );
+    // tests:
+    testRoute(route).run(HttpRequest.GET("/")).assertEntity("root");
+    testRoute(route).run(HttpRequest.GET("/ball")).assertStatusCode(StatusCodes.NOT_FOUND);
+    testRoute(route).run(HttpRequest.GET("/ball/")).assertEntity("/ball/");
+    testRoute(route).run(HttpRequest.GET("/ball/1337")).assertEntity("odd ball");
+    //# path-single-slash
+  }
+
+  @Test
+  public void testPathSuffix() {
+    //# path-suffix
+    final Route route =
+      route(
+        pathPrefix("start", () ->
+          route(
+            pathSuffix("end", () -> completeWithUnmatchedPath.get()),
+            pathSuffix(segment("foo").slash("bar").concat("baz"), () ->
+              completeWithUnmatchedPath.get())
+          )
+        )
+      );
+    // tests:
+    testRoute(route).run(HttpRequest.GET("/start/middle/end")).assertEntity("/middle/");
+    testRoute(route).run(HttpRequest.GET("/start/something/barbaz/foo")).assertEntity("/something/");
+    //# path-suffix
+  }
+
+  @Test
+  public void testPathSuffixTest() {
+    //# path-suffix-test
+    final Route route =
+      route(
+        pathSuffixTest(slash(), () -> complete("slashed")),
+        complete("unslashed")
+      );
+    // tests:
+    testRoute(route).run(HttpRequest.GET("/foo/")).assertEntity("slashed");
+    testRoute(route).run(HttpRequest.GET("/foo")).assertEntity("unslashed");
+    //# path-suffix-test
+  }
+
+  @Test
+  public void testRawPathPrefix() {
+    //# raw-path-prefix
+    final Route route =
+      route(
+        pathPrefix("foo", () ->
+          route(
+            rawPathPrefix("bar", () -> completeWithUnmatchedPath.get()),
+            rawPathPrefix("doo", () -> completeWithUnmatchedPath.get())
+          )
+        )
+      );
+    // tests:
+    testRoute(route).run(HttpRequest.GET("/foobar/baz")).assertEntity("/baz");
+    testRoute(route).run(HttpRequest.GET("/foodoo/baz")).assertEntity("/baz");
+    //# raw-path-prefix
+  }
+
+  @Test
+  public void testRawPathPrefixTest() {
+    //# raw-path-prefix-test
+    final Route route =
+      route(
+        pathPrefix("foo", () ->
+          rawPathPrefixTest("bar", () -> completeWithUnmatchedPath.get())
+        )
+      );
+    // tests:
+    testRoute(route).run(HttpRequest.GET("/foobar")).assertEntity("bar");
+    testRoute(route).run(HttpRequest.GET("/foobaz")).assertStatusCode(StatusCodes.NOT_FOUND);
+    //# raw-path-prefix-test
+  }
+
+  @Test
+  public void testRedirectToNoTrailingSlashIfMissing() {
+    //# redirect-notrailing-slash-missing
+    final Route route =
+      redirectToTrailingSlashIfMissing(
+        StatusCodes.MOVED_PERMANENTLY, () ->
+          route(
+            path(segment("foo").slash(), () -> complete("OK")),
+            path(segment("bad-1"), () ->
+              // MISTAKE!
+              // The missing trailing `/` in the path causes this route to never match,
+              // because it is inside a `redirectToTrailingSlashIfMissing`
+              complete(StatusCodes.NOT_IMPLEMENTED)
+            ),
+            path(segment("bad-2/"), () ->
+              // MISTAKE!
+              // / should be explicit as path element separator and not *in* the path
+              // element (a single segment never contains a slash, so this never matches)
+              // So it should be: segment("bad-2").slash()
+              complete(StatusCodes.NOT_IMPLEMENTED)
+            )
+          )
+      );
+    // tests:
+    testRoute(route).run(HttpRequest.GET("/foo"))
+      .assertStatusCode(StatusCodes.MOVED_PERMANENTLY)
+      .assertEntity("This and all future requests should be directed to "
+        + "this URI.");
+
+    testRoute(route).run(HttpRequest.GET("/foo/"))
+      .assertStatusCode(StatusCodes.OK)
+      .assertEntity("OK");
+
+    testRoute(route).run(HttpRequest.GET("/bad-1/"))
+      .assertStatusCode(StatusCodes.NOT_FOUND);
+    //# redirect-notrailing-slash-missing
+  }
+
+  @Test
+  public void testRedirectToNoTrailingSlashIfPresent() {
+    //# redirect-notrailing-slash-present
+    final Route route =
+      redirectToNoTrailingSlashIfPresent(
+        StatusCodes.MOVED_PERMANENTLY, () ->
+          route(
+            path("foo", () -> complete("OK")),
+            path(segment("bad").slash(), () ->
+              // MISTAKE!
+              // Since this is inside a `redirectToNoTrailingSlashIfPresent` directive,
+              // the matched path here never contains a trailing slash,
+              // so this route will never match.
+              //
+              // It should be `path("bad")` instead.
+              complete(StatusCodes.NOT_IMPLEMENTED)
+            )
+          )
+      );
+    // tests:
+    testRoute(route).run(HttpRequest.GET("/foo/"))
+      .assertStatusCode(StatusCodes.MOVED_PERMANENTLY)
+      .assertEntity("This and all future requests should be directed to "
+        + "this URI.");
+
+    testRoute(route).run(HttpRequest.GET("/foo"))
+      .assertStatusCode(StatusCodes.OK)
+      .assertEntity("OK");
+
+    testRoute(route).run(HttpRequest.GET("/bad"))
+      .assertStatusCode(StatusCodes.NOT_FOUND);
+    //# redirect-notrailing-slash-present
+  }
+
+}
diff --git a/akka-docs/rst/java/code/docs/http/javadsl/server/directives/RangeDirectivesExamplesTest.java b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/RangeDirectivesExamplesTest.java
new file mode 100644
index 0000000000..69c1358d9c
--- /dev/null
+++ b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/RangeDirectivesExamplesTest.java
@@ -0,0 +1,88 @@
+/**
+ * Copyright (C) 2016-2016 Lightbend Inc.
+ */
+package docs.http.javadsl.server.directives;
+
+import akka.http.javadsl.model.HttpRequest;
+import akka.http.javadsl.model.Multipart;
+import akka.http.javadsl.model.StatusCodes;
+import akka.http.javadsl.model.headers.ByteRange;
+import akka.http.javadsl.model.headers.ContentRange;
+import akka.http.javadsl.model.headers.Range;
+import akka.http.javadsl.model.headers.RangeUnits;
+import akka.http.javadsl.server.Route;
+import akka.http.javadsl.server.Unmarshaller;
+import akka.http.javadsl.testkit.JUnitRouteTest;
+import akka.http.javadsl.testkit.TestRouteResult;
+import akka.stream.ActorMaterializer;
+import akka.util.ByteString;
+import com.typesafe.config.Config;
+import com.typesafe.config.ConfigFactory;
+import org.junit.Test;
+
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.CompletionStage;
+import java.util.concurrent.TimeUnit;
+
+public class RangeDirectivesExamplesTest extends JUnitRouteTest {
+  @Override
+  public Config additionalConfig() {
+    return ConfigFactory.parseString("akka.http.routing.range-coalescing-threshold=2");
+  }
+
+  @Test
+  public void testWithRangeSupport() {
+    //#withRangeSupport
+    final Route route = withRangeSupport(() -> complete("ABCDEFGH"));
+
+    // test:
+    final String bytes348Range = ContentRange.create(RangeUnits.BYTES,
+      akka.http.javadsl.model.ContentRange.create(3, 4, 8)).value();
+    final akka.http.javadsl.model.ContentRange bytes028Range =
+      akka.http.javadsl.model.ContentRange.create(0, 2, 8);
+    final akka.http.javadsl.model.ContentRange bytes678Range =
+      akka.http.javadsl.model.ContentRange.create(6, 7, 8);
+    final ActorMaterializer materializer = systemResource().materializer();
+
+    testRoute(route).run(HttpRequest.GET("/")
+      .addHeader(Range.create(RangeUnits.BYTES, ByteRange.createSlice(3, 4))))
+      .assertHeaderKindExists("Content-Range")
+      .assertHeaderExists("Content-Range", bytes348Range)
+      .assertStatusCode(StatusCodes.PARTIAL_CONTENT)
+      .assertEntity("DE");
+
+    // we set "akka.http.routing.range-coalescing-threshold = 2"
+    // above to make sure we get two BodyParts
+    final TestRouteResult response = testRoute(route).run(HttpRequest.GET("/")
+      .addHeader(Range.create(RangeUnits.BYTES,
+        ByteRange.createSlice(0, 1), ByteRange.createSlice(1, 2), ByteRange.createSlice(6, 7))));
+    response.assertHeaderKindNotExists("Content-Range");
+
+    final CompletionStage<List<Multipart.ByteRanges.BodyPart>> completionStage =
+      response.entity(Unmarshaller.entityToMultipartByteRanges()).getParts()
+        .runFold(new ArrayList<Multipart.ByteRanges.BodyPart>(), (acc, n) -> {
+          acc.add(n);
+          return acc;
+        }, materializer);
+    try {
+      final List<Multipart.ByteRanges.BodyPart> bodyParts =
+        completionStage.toCompletableFuture().get(3, TimeUnit.SECONDS);
+      assertResult(2, bodyParts.toArray().length);
+
+      final Multipart.ByteRanges.BodyPart part1 = bodyParts.get(0);
+      assertResult(bytes028Range, part1.getContentRange());
+      assertResult(ByteString.fromString("ABC"),
+        part1.toStrict(1000, materializer).toCompletableFuture().get().getEntity().getData());
+
+      final Multipart.ByteRanges.BodyPart part2 = bodyParts.get(1);
+      assertResult(bytes678Range, part2.getContentRange());
+      assertResult(ByteString.fromString("GH"),
+        part2.toStrict(1000, materializer).toCompletableFuture().get().getEntity().getData());
+
+    } catch (Exception e) {
+      // please handle this in production code
+    }
+    //#withRangeSupport
+  }
+}
diff --git a/akka-docs/rst/java/code/docs/http/javadsl/server/directives/RouteDirectivesExamplesTest.java b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/RouteDirectivesExamplesTest.java
new file mode 100644 index
0000000000..7771245b79 --- /dev/null +++ b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/RouteDirectivesExamplesTest.java @@ -0,0 +1,125 @@ +/* + * Copyright (C) 2016-2016 Lightbend Inc. + */ +package docs.http.javadsl.server.directives; + +import akka.http.javadsl.model.HttpEntities; +import akka.http.javadsl.model.HttpRequest; +import akka.http.javadsl.model.Uri; +import akka.http.javadsl.model.headers.ContentType; +import akka.http.javadsl.model.ContentTypes; +import akka.http.javadsl.model.HttpResponse; +import akka.http.javadsl.model.StatusCodes; +import akka.http.javadsl.server.Rejections; +import akka.http.javadsl.server.Route; +import akka.http.javadsl.testkit.JUnitRouteTest; +import org.junit.Test; + +import java.util.Collections; + +public class RouteDirectivesExamplesTest extends JUnitRouteTest { + + @Test + public void testComplete() { + //#complete + final Route route = route( + path("a", () -> complete(HttpResponse.create().withEntity("foo"))), + path("b", () -> complete(StatusCodes.OK)), + path("c", () -> complete(StatusCodes.CREATED, "bar")), + path("d", () -> complete(StatusCodes.get(201), "bar")), + path("e", () -> + complete(StatusCodes.CREATED, + Collections.singletonList(ContentType.create(ContentTypes.TEXT_PLAIN_UTF8)), + HttpEntities.create("bar"))), + path("f", () -> + complete(StatusCodes.get(201), + Collections.singletonList(ContentType.create(ContentTypes.TEXT_PLAIN_UTF8)), + HttpEntities.create("bar"))), + path("g", () -> complete("baz")) + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/a")) + .assertStatusCode(StatusCodes.OK) + .assertEntity("foo"); + + testRoute(route).run(HttpRequest.GET("/b")) + .assertStatusCode(StatusCodes.OK) + .assertEntity("OK"); + + testRoute(route).run(HttpRequest.GET("/c")) + .assertStatusCode(StatusCodes.CREATED) + .assertEntity("bar"); + + testRoute(route).run(HttpRequest.GET("/d")) + .assertStatusCode(StatusCodes.CREATED) + .assertEntity("bar"); + + testRoute(route).run(HttpRequest.GET("/e")) + .assertStatusCode(StatusCodes.CREATED) + .assertHeaderExists(ContentType.create(ContentTypes.TEXT_PLAIN_UTF8)) + .assertEntity("bar"); + + testRoute(route).run(HttpRequest.GET("/f")) + .assertStatusCode(StatusCodes.CREATED) + .assertHeaderExists(ContentType.create(ContentTypes.TEXT_PLAIN_UTF8)) + .assertEntity("bar"); + + testRoute(route).run(HttpRequest.GET("/g")) + .assertStatusCode(StatusCodes.OK) + .assertEntity("baz"); + //#complete + } + + @Test + public void testReject() { + //#reject + final Route route = route( + path("a", this::reject), // don't handle here, continue on + path("a", () -> complete("foo")), + path("b", () -> reject(Rejections.validationRejection("Restricted!"))) + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/a")) + .assertEntity("foo"); + + runRouteUnSealed(route, HttpRequest.GET("/b")) + .assertRejections(Rejections.validationRejection("Restricted!")); + //#reject + } + + @Test + public void testRedirect() { + //#redirect + final Route route = pathPrefix("foo", () -> + route( + pathSingleSlash(() -> complete("yes")), + pathEnd(() -> redirect(Uri.create("/foo/"), StatusCodes.PERMANENT_REDIRECT)) + ) + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/foo/")) + .assertEntity("yes"); + + testRoute(route).run(HttpRequest.GET("/foo")) + .assertStatusCode(StatusCodes.PERMANENT_REDIRECT) + .assertEntity("The request, and all future requests should be repeated using this URI."); + //#redirect + } + + @Test + public void testFailWith() { + //#failWith + final Route route = 
path("foo", () -> + failWith(new RuntimeException("Oops.")) + ); + + // tests: + testRoute(route).run(HttpRequest.GET("/foo")) + .assertStatusCode(StatusCodes.INTERNAL_SERVER_ERROR) + .assertEntity("There was an internal server error."); + //#failWith + } +} diff --git a/akka-docs/rst/java/code/docs/http/javadsl/server/directives/SecurityDirectivesExamplesTest.java b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/SecurityDirectivesExamplesTest.java new file mode 100644 index 0000000000..68d7386bbe --- /dev/null +++ b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/SecurityDirectivesExamplesTest.java @@ -0,0 +1,364 @@ +/* + * Copyright (C) 2016-2016 Lightbend Inc. + */ +package docs.http.javadsl.server.directives; + +import akka.http.javadsl.model.HttpRequest; +import akka.http.javadsl.model.StatusCodes; +import akka.http.javadsl.model.headers.BasicHttpCredentials; +import akka.http.javadsl.model.headers.HttpChallenge; +import akka.http.javadsl.model.headers.HttpCredentials; +import akka.http.javadsl.server.Route; +import akka.http.javadsl.testkit.JUnitRouteTest; +import akka.japi.JavaPartialFunction; +import org.junit.Test; +import scala.PartialFunction; +import scala.util.Either; +import scala.util.Left; +import scala.util.Right; + +import java.util.Collections; +import java.util.HashSet; +import java.util.Set; +import java.util.concurrent.CompletableFuture; +import java.util.concurrent.CompletionStage; +import java.util.function.Function; +import java.util.Optional; + +public class SecurityDirectivesExamplesTest extends JUnitRouteTest { + + @Test + public void testAuthenticateBasic() { + //#authenticateBasic + final Function, Optional> myUserPassAuthenticator = + credentials -> + credentials.filter(c -> c.verify("p4ssw0rd")).map(ProvidedCredentials::identifier); + + final Route route = path("secured", () -> + authenticateBasic("secure site", myUserPassAuthenticator, userName -> + complete("The user is '" + userName + "'") + ) + ).seal(system(), materializer()); + + // tests: + testRoute(route).run(HttpRequest.GET("/secured")) + .assertStatusCode(StatusCodes.UNAUTHORIZED) + .assertEntity("The resource requires authentication, which was not supplied with the request") + .assertHeaderExists("WWW-Authenticate", "Basic realm=\"secure site\""); + + final HttpCredentials validCredentials = + BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd"); + testRoute(route).run(HttpRequest.GET("/secured").addCredentials(validCredentials)) + .assertEntity("The user is 'John'"); + + final HttpCredentials invalidCredentials = + BasicHttpCredentials.createBasicHttpCredentials("Peter", "pan"); + testRoute(route).run(HttpRequest.GET("/secured").addCredentials(invalidCredentials)) + .assertStatusCode(StatusCodes.UNAUTHORIZED) + .assertEntity("The supplied authentication is invalid") + .assertHeaderExists("WWW-Authenticate", "Basic realm=\"secure site\""); + //#authenticateBasic + } + + + @Test + public void testAuthenticateBasicPF() { + //#authenticateBasicPF + final PartialFunction, String> myUserPassAuthenticator = + new JavaPartialFunction, String>() { + @Override + public String apply(Optional opt, boolean isCheck) throws Exception { + if (opt.filter(c -> (c != null) && c.verify("p4ssw0rd")).isPresent()) { + if (isCheck) return null; + else return opt.get().identifier(); + } else if (opt.filter(c -> (c != null) && c.verify("p4ssw0rd-special")).isPresent()) { + if (isCheck) return null; + else return opt.get().identifier() + "-admin"; + } else { + throw noMatch(); + } + 
} + }; + + final Route route = path("secured", () -> + authenticateBasicPF("secure site", myUserPassAuthenticator, userName -> + complete("The user is '" + userName + "'") + ) + ).seal(system(), materializer()); + + // tests: + testRoute(route).run(HttpRequest.GET("/secured")) + .assertStatusCode(StatusCodes.UNAUTHORIZED) + .assertEntity("The resource requires authentication, which was not supplied with the request") + .assertHeaderExists("WWW-Authenticate", "Basic realm=\"secure site\""); + + final HttpCredentials validCredentials = + BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd"); + testRoute(route).run(HttpRequest.GET("/secured").addCredentials(validCredentials)) + .assertEntity("The user is 'John'"); + + final HttpCredentials validAdminCredentials = + BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd-special"); + testRoute(route).run(HttpRequest.GET("/secured").addCredentials(validAdminCredentials)) + .assertEntity("The user is 'John-admin'"); + + final HttpCredentials invalidCredentials = + BasicHttpCredentials.createBasicHttpCredentials("Peter", "pan"); + testRoute(route).run(HttpRequest.GET("/secured").addCredentials(invalidCredentials)) + .assertStatusCode(StatusCodes.UNAUTHORIZED) + .assertEntity("The supplied authentication is invalid") + .assertHeaderExists("WWW-Authenticate", "Basic realm=\"secure site\""); + //#authenticateBasicPF + } + + @Test + public void testAuthenticateBasicPFAsync() { + //#authenticateBasicPFAsync + class User { + private final String id; + public User(String id) { + this.id = id; + } + public String getId() { + return id; + } + } + + final PartialFunction, CompletionStage> myUserPassAuthenticator = + new JavaPartialFunction,CompletionStage>() { + @Override + public CompletionStage apply(Optional opt, boolean isCheck) throws Exception { + if (opt.filter(c -> (c != null) && c.verify("p4ssw0rd")).isPresent()) { + if (isCheck) return CompletableFuture.completedFuture(null); + else return CompletableFuture.completedFuture(new User(opt.get().identifier())); + } else { + throw noMatch(); + } + } + }; + + final Route route = path("secured", () -> + authenticateBasicPFAsync("secure site", myUserPassAuthenticator, user -> + complete("The user is '" + user.getId() + "'")) + ).seal(system(), materializer()); + + // tests: + testRoute(route).run(HttpRequest.GET("/secured")) + .assertStatusCode(StatusCodes.UNAUTHORIZED) + .assertEntity("The resource requires authentication, which was not supplied with the request") + .assertHeaderExists("WWW-Authenticate", "Basic realm=\"secure site\""); + + final HttpCredentials validCredentials = + BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd"); + testRoute(route).run(HttpRequest.GET("/secured").addCredentials(validCredentials)) + .assertEntity("The user is 'John'"); + + final HttpCredentials invalidCredentials = + BasicHttpCredentials.createBasicHttpCredentials("Peter", "pan"); + testRoute(route).run(HttpRequest.GET("/secured").addCredentials(invalidCredentials)) + .assertStatusCode(StatusCodes.UNAUTHORIZED) + .assertEntity("The supplied authentication is invalid") + .assertHeaderExists("WWW-Authenticate", "Basic realm=\"secure site\""); + //#authenticateBasicPFAsync + } + + @Test + public void testAuthenticateBasicAsync() { + //#authenticateBasicAsync + final Function, CompletionStage>> myUserPassAuthenticator = opt -> { + if (opt.filter(c -> (c != null) && c.verify("p4ssw0rd")).isPresent()) { + return CompletableFuture.completedFuture(Optional.of(opt.get().identifier())); 
+ } else { + return CompletableFuture.completedFuture(Optional.empty()); + } + }; + + final Route route = path("secured", () -> + authenticateBasicAsync("secure site", myUserPassAuthenticator, userName -> + complete("The user is '" + userName + "'") + ) + ).seal(system(), materializer()); + + // tests: + testRoute(route).run(HttpRequest.GET("/secured")) + .assertStatusCode(StatusCodes.UNAUTHORIZED) + .assertEntity("The resource requires authentication, which was not supplied with the request") + .assertHeaderExists("WWW-Authenticate", "Basic realm=\"secure site\""); + + final HttpCredentials validCredentials = + BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd"); + testRoute(route).run(HttpRequest.GET("/secured").addCredentials(validCredentials)) + .assertEntity("The user is 'John'"); + + final HttpCredentials invalidCredentials = + BasicHttpCredentials.createBasicHttpCredentials("Peter", "pan"); + testRoute(route).run(HttpRequest.GET("/secured").addCredentials(invalidCredentials)) + .assertStatusCode(StatusCodes.UNAUTHORIZED) + .assertEntity("The supplied authentication is invalid") + .assertHeaderExists("WWW-Authenticate", "Basic realm=\"secure site\""); + //#authenticateBasicAsync + } + + @Test + public void testAuthenticateOrRejectWithChallenge() { + //#authenticateOrRejectWithChallenge + final HttpChallenge challenge = HttpChallenge.create("MyAuth", "MyRealm"); + + // your custom authentication logic: + final Function auth = credentials -> true; + + final Function, CompletionStage>> myUserPassAuthenticator = + opt -> { + if (opt.isPresent() && auth.apply(opt.get())) { + return CompletableFuture.completedFuture(Right.apply("some-user-name-from-creds")); + } else { + return CompletableFuture.completedFuture(Left.apply(challenge)); + } + }; + + final Route route = path("secured", () -> + authenticateOrRejectWithChallenge(myUserPassAuthenticator, userName -> + complete("Authenticated!") + ) + ).seal(system(), materializer()); + + // tests: + testRoute(route).run(HttpRequest.GET("/secured")) + .assertStatusCode(StatusCodes.UNAUTHORIZED) + .assertEntity("The resource requires authentication, which was not supplied with the request") + .assertHeaderExists("WWW-Authenticate", "MyAuth realm=\"MyRealm\""); + + final HttpCredentials validCredentials = + BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd"); + testRoute(route).run(HttpRequest.GET("/secured").addCredentials(validCredentials)) + .assertStatusCode(StatusCodes.OK) + .assertEntity("Authenticated!"); + //#authenticateOrRejectWithChallenge + } + + @Test + public void testAuthorize() { + //#authorize + class User { + private final String name; + public User(String name) { + this.name = name; + } + public String getName() { + return name; + } + } + + // authenticate the user: + final Function, Optional> myUserPassAuthenticator = + opt -> { + if (opt.isPresent()) { + return Optional.of(new User(opt.get().identifier())); + } else { + return Optional.empty(); + } + }; + + // check if user is authorized to perform admin actions: + final Set admins = new HashSet<>(); + admins.add("Peter"); + final Function hasAdminPermissions = user -> admins.contains(user.getName()); + + final Route route = authenticateBasic("secure site", myUserPassAuthenticator, user -> + path("peters-lair", () -> + authorize(() -> hasAdminPermissions.apply(user), () -> + complete("'" + user.getName() +"' visited Peter's lair") + ) + ) + ).seal(system(), materializer()); + + // tests: + final HttpCredentials johnsCred = + 
BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd"); + testRoute(route).run(HttpRequest.GET("/peters-lair").addCredentials(johnsCred)) + .assertStatusCode(StatusCodes.FORBIDDEN) + .assertEntity("The supplied authentication is not authorized to access this resource"); + + final HttpCredentials petersCred = + BasicHttpCredentials.createBasicHttpCredentials("Peter", "pan"); + testRoute(route).run(HttpRequest.GET("/peters-lair").addCredentials(petersCred)) + .assertEntity("'Peter' visited Peter's lair"); + //#authorize + } + + @Test + public void testAuthorizeAsync() { + //#authorizeAsync + class User { + private final String name; + public User(String name) { + this.name = name; + } + public String getName() { + return name; + } + } + + // authenticate the user: + final Function, Optional> myUserPassAuthenticator = + opt -> { + if (opt.isPresent()) { + return Optional.of(new User(opt.get().identifier())); + } else { + return Optional.empty(); + } + }; + + // check if user is authorized to perform admin actions, + // this could potentially be a long operation so it would return a Future + final Set admins = new HashSet<>(); + admins.add("Peter"); + final Set synchronizedAdmins = Collections.synchronizedSet(admins); + + final Function> hasAdminPermissions = + user -> CompletableFuture.completedFuture(synchronizedAdmins.contains(user.getName())); + + final Route route = authenticateBasic("secure site", myUserPassAuthenticator, user -> + path("peters-lair", () -> + authorizeAsync(() -> hasAdminPermissions.apply(user), () -> + complete("'" + user.getName() +"' visited Peter's lair") + ) + ) + ).seal(system(), materializer()); + + // tests: + final HttpCredentials johnsCred = + BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd"); + testRoute(route).run(HttpRequest.GET("/peters-lair").addCredentials(johnsCred)) + .assertStatusCode(StatusCodes.FORBIDDEN) + .assertEntity("The supplied authentication is not authorized to access this resource"); + + final HttpCredentials petersCred = + BasicHttpCredentials.createBasicHttpCredentials("Peter", "pan"); + testRoute(route).run(HttpRequest.GET("/peters-lair").addCredentials(petersCred)) + .assertEntity("'Peter' visited Peter's lair"); + //#authorizeAsync + } + + @Test + public void testExtractCredentials() { + //#extractCredentials + final Route route = extractCredentials(optCreds -> { + if (optCreds.isPresent()) { + return complete("Credentials: " + optCreds.get()); + } else { + return complete("No credentials"); + } + }); + + // tests: + final HttpCredentials johnsCred = + BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd"); + testRoute(route).run(HttpRequest.GET("/").addCredentials(johnsCred)) + .assertEntity("Credentials: Basic Sm9objpwNHNzdzByZA=="); + + testRoute(route).run(HttpRequest.GET("/")) + .assertEntity("No credentials"); + //#extractCredentials + } +} diff --git a/akka-docs/rst/java/code/docs/http/javadsl/server/directives/TimeoutDirectivesExamplesTest.java b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/TimeoutDirectivesExamplesTest.java new file mode 100644 index 0000000000..b5aeb28d8f --- /dev/null +++ b/akka-docs/rst/java/code/docs/http/javadsl/server/directives/TimeoutDirectivesExamplesTest.java @@ -0,0 +1,180 @@ +/* + * Copyright (C) 2016-2016 Lightbend Inc. 
+ */ + +package docs.http.javadsl.server.directives; + +import akka.NotUsed; +import akka.actor.ActorSystem; +import akka.http.javadsl.ConnectHttp; +import akka.http.javadsl.Http; +import akka.http.javadsl.ServerBinding; +import akka.http.javadsl.model.HttpRequest; +import akka.http.javadsl.model.HttpResponse; +import akka.http.javadsl.model.StatusCode; +import akka.http.javadsl.model.StatusCodes; +import akka.http.javadsl.server.AllDirectives; +import akka.http.javadsl.server.Route; +import akka.http.scaladsl.TestUtils; +import akka.stream.ActorMaterializer; +import akka.stream.javadsl.Flow; +import akka.testkit.TestKit; +import com.typesafe.config.Config; +import com.typesafe.config.ConfigFactory; +import org.junit.After; +import org.junit.Ignore; +import org.junit.Test; +import scala.Tuple2; +import scala.Tuple3; +import scala.concurrent.duration.Duration; +import scala.runtime.BoxedUnit; + +import java.net.InetSocketAddress; +import java.util.Optional; +import java.util.concurrent.*; + +public class TimeoutDirectivesExamplesTest extends AllDirectives { + //#testSetup + private final Config testConf = ConfigFactory.parseString("akka.loggers = [\"akka.testkit.TestEventListener\"]\n" + + "akka.loglevel = ERROR\n" + + "akka.stdout-loglevel = ERROR\n" + + "windows-connection-abort-workaround-enabled = auto\n" + + "akka.log-dead-letters = OFF\n" + + "akka.http.server.request-timeout = 1000s"); + // large timeout - 1000s (please note - setting to infinite will disable Timeout-Access header + // and withRequestTimeout will not work) + + private final ActorSystem system = ActorSystem.create("TimeoutDirectivesExamplesTest", testConf); + + private final ActorMaterializer materializer = ActorMaterializer.create(system); + + private final Http http = Http.get(system); + + private CompletionStage shutdown(CompletionStage binding) { + return binding.thenAccept(b -> { + System.out.println(String.format("Unbinding from %s", b.localAddress())); + + final CompletionStage unbound = b.unbind(); + try { + unbound.toCompletableFuture().get(3, TimeUnit.SECONDS); // block... 
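+        // blocking here keeps this test helper deterministic; outside of test
+        // code you would rather compose the CompletionStage than block a thread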
+ } catch (TimeoutException | InterruptedException | ExecutionException e) { + throw new RuntimeException(e); + } + }); + } + + private Optional runRoute(ActorSystem system, ActorMaterializer materializer, Route route, String routePath) { + final Tuple3 inetaddrHostAndPort = TestUtils.temporaryServerHostnameAndPort("127.0.0.1"); + Tuple2 hostAndPort = new Tuple2<>( + inetaddrHostAndPort._2(), + (Integer) inetaddrHostAndPort._3() + ); + + final Flow routeFlow = route.flow(system, materializer); + final CompletionStage binding = http.bindAndHandle(routeFlow, ConnectHttp.toHost(hostAndPort._1(), hostAndPort._2()), materializer); + + final CompletionStage responseCompletionStage = http.singleRequest(HttpRequest.create("http://" + hostAndPort._1() + ":" + hostAndPort._2() + "/" + routePath), materializer); + + CompletableFuture responseFuture = responseCompletionStage.toCompletableFuture(); + + Optional responseOptional; + try { + responseOptional = Optional.of(responseFuture.get(3, TimeUnit.SECONDS)); // patienceConfig + } catch (Exception e) { + responseOptional = Optional.empty(); + } + + shutdown(binding); + + return responseOptional; + } + //# + + @After + public void shutDown() { + TestKit.shutdownActorSystem(system, Duration.create(1, TimeUnit.SECONDS), false); + } + + @Test + public void testRequestTimeoutIsConfigurable() { + //#withRequestTimeout-plain + final Duration timeout = Duration.create(1, TimeUnit.SECONDS); + CompletionStage slowFuture = new CompletableFuture<>(); + + final Route route = path("timeout", () -> + withRequestTimeout(timeout, () -> { + return completeOKWithFutureString(slowFuture); // very slow + }) + ); + + // test: + StatusCode statusCode = runRoute(system, materializer, route, "timeout").get().status(); + assert (StatusCodes.SERVICE_UNAVAILABLE.equals(statusCode)); + //# + } + + @Test + public void testRequestWithoutTimeoutCancelsTimeout() { + //#withoutRequestTimeout-1 + CompletionStage slowFuture = new CompletableFuture<>(); + + final Route route = path("timeout", () -> + withoutRequestTimeout(() -> { + return completeOKWithFutureString(slowFuture); // very slow + }) + ); + + // test: + Boolean receivedReply = runRoute(system, materializer, route, "timeout").isPresent(); + assert (!receivedReply); // timed-out + //# + } + + @Test + public void testRequestTimeoutAllowsCustomResponse() { + //#withRequestTimeout-with-handler + final Duration timeout = Duration.create(1, TimeUnit.MILLISECONDS); + CompletionStage slowFuture = new CompletableFuture<>(); + + HttpResponse enhanceYourCalmResponse = HttpResponse.create() + .withStatus(StatusCodes.ENHANCE_YOUR_CALM) + .withEntity("Unable to serve response within time limit, please enhance your calm."); + + final Route route = path("timeout", () -> + withRequestTimeout(timeout, (request) -> enhanceYourCalmResponse, () -> { + return completeOKWithFutureString(slowFuture); // very slow + }) + ); + + // test: + StatusCode statusCode = runRoute(system, materializer, route, "timeout").get().status(); + assert (StatusCodes.ENHANCE_YOUR_CALM.equals(statusCode)); + //# + } + + // make it compile only to avoid flaking in slow builds + @Ignore("Compile only test") + @Test + public void testRequestTimeoutCustomResponseCanBeAddedSeparately() { + //#withRequestTimeoutResponse + final Duration timeout = Duration.create(100, TimeUnit.MILLISECONDS); + CompletionStage slowFuture = new CompletableFuture<>(); + + HttpResponse enhanceYourCalmResponse = HttpResponse.create() + .withStatus(StatusCodes.ENHANCE_YOUR_CALM) + .withEntity("Unable 
to serve response within time limit, please enhance your calm."); + + final Route route = path("timeout", () -> + withRequestTimeout(timeout, () -> + // racy! for a very short timeout like 1.milli you can still get 503 + withRequestTimeoutResponse((request) -> enhanceYourCalmResponse, () -> { + return completeOKWithFutureString(slowFuture); // very slow + })) + ); + + // test: + StatusCode statusCode = runRoute(system, materializer, route, "timeout").get().status(); + assert (StatusCodes.ENHANCE_YOUR_CALM.equals(statusCode)); + //# + } +} diff --git a/akka-docs/rst/java/code/docs/stream/BidiFlowDocTest.java b/akka-docs/rst/java/code/docs/stream/BidiFlowDocTest.java index 3c266a6215..27eb549e8c 100644 --- a/akka-docs/rst/java/code/docs/stream/BidiFlowDocTest.java +++ b/akka-docs/rst/java/code/docs/stream/BidiFlowDocTest.java @@ -165,9 +165,12 @@ public class BidiFlowDocTest extends AbstractJavaTest { @Override public void onUpstreamFinish() throws Exception { + // either we are done if (stash.isEmpty()) completeStage(); + // or we still have bytes to emit // wait with completion and let run() complete when the // rest of the stash has been sent downstream + else if (isAvailable(out)) run(); } }); diff --git a/akka-docs/rst/java/code/docs/stream/GraphDSLDocTest.java b/akka-docs/rst/java/code/docs/stream/GraphDSLDocTest.java index ddb5149f14..51fe15ef40 100644 --- a/akka-docs/rst/java/code/docs/stream/GraphDSLDocTest.java +++ b/akka-docs/rst/java/code/docs/stream/GraphDSLDocTest.java @@ -50,25 +50,24 @@ public class GraphDSLDocTest extends AbstractJavaTest { //#simple-graph-dsl final Source<Integer, NotUsed> in = Source.from(Arrays.asList(1, 2, 3, 4, 5)); final Sink<List<String>, CompletionStage<List<String>>> sink = Sink.head(); - final Sink<List<String>, CompletionStage<List<String>>> sink2 = Sink.head(); final Flow<Integer, Integer, NotUsed> f1 = Flow.of(Integer.class).map(elem -> elem + 10); final Flow<Integer, Integer, NotUsed> f2 = Flow.of(Integer.class).map(elem -> elem + 20); final Flow<Integer, String, NotUsed> f3 = Flow.of(Integer.class).map(elem -> elem.toString()); final Flow<Integer, Integer, NotUsed> f4 = Flow.of(Integer.class).map(elem -> elem + 30); final RunnableGraph<CompletionStage<List<String>>> result = - RunnableGraph.<CompletionStage<List<String>>>fromGraph( - GraphDSL - .create( - sink, - (builder, out) -> { + RunnableGraph.fromGraph( + GraphDSL // create() function binds sink, out which is sink's out port and builder DSL + .create( // we need to reference out's shape in the builder DSL below (in to() function) + sink, // previously created sink (Sink) + (builder, out) -> { // variables: builder (GraphDSL.Builder) and out (SinkShape) final UniformFanOutShape<Integer, Integer> bcast = builder.add(Broadcast.<Integer>create(2)); final UniformFanInShape<Integer, Integer> merge = builder.add(Merge.<Integer>create(2)); final Outlet<Integer> source = builder.add(in).out(); builder.from(source).via(builder.add(f1)) .viaFanOut(bcast).via(builder.add(f2)).viaFanIn(merge) - .via(builder.add(f3.grouped(1000))).to(out); + .via(builder.add(f3.grouped(1000))).to(out); // to() expects a SinkShape builder.from(bcast).via(builder.add(f4)).toFanIn(merge); return ClosedShape.getInstance(); })); diff --git a/akka-docs/rst/java/code/docs/stream/KillSwitchDocTest.java b/akka-docs/rst/java/code/docs/stream/KillSwitchDocTest.java new file mode 100644 index 0000000000..e70e7e48d3 --- /dev/null +++ b/akka-docs/rst/java/code/docs/stream/KillSwitchDocTest.java @@ -0,0 +1,140 @@ +package docs.stream; + +import akka.NotUsed; +import akka.actor.ActorSystem; +import akka.japi.Pair; +import akka.stream.*; +import akka.stream.javadsl.Keep; +import akka.stream.javadsl.Sink; +import akka.stream.javadsl.Source; +import akka.testkit.JavaTestKit; +import docs.AbstractJavaTest; + +import 
org.junit.AfterClass; +import org.junit.BeforeClass; +import org.junit.Test; +import scala.concurrent.duration.FiniteDuration; + +import java.util.ArrayList; +import java.util.Arrays; +import java.util.concurrent.CompletionStage; +import java.util.concurrent.TimeUnit; + +import static org.junit.Assert.assertEquals; + +public class KillSwitchDocTest extends AbstractJavaTest { + + static ActorSystem system; + static Materializer mat; + + @BeforeClass + public static void setup() { + system = ActorSystem.create("KillSwitchDocTest"); + mat = ActorMaterializer.create(system); + } + + @AfterClass + public static void tearDown() { + JavaTestKit.shutdownActorSystem(system); + system = null; + mat = null; + } + + @Test + public void compileOnlyTest() { + } + + public void uniqueKillSwitchShutdownExample() throws Exception { + //#unique-shutdown + final Source<Integer, NotUsed> countingSrc = Source.from(new ArrayList<>(Arrays.asList(1, 2, 3, 4))) + .delay(FiniteDuration.apply(1, TimeUnit.SECONDS), DelayOverflowStrategy.backpressure()); + final Sink<Integer, CompletionStage<Integer>> lastSnk = Sink.last(); + + final Pair<UniqueKillSwitch, CompletionStage<Integer>> stream = countingSrc + .viaMat(KillSwitches.single(), Keep.right()) + .toMat(lastSnk, Keep.both()).run(mat); + + final UniqueKillSwitch killSwitch = stream.first(); + final CompletionStage<Integer> completionStage = stream.second(); + + doSomethingElse(); + killSwitch.shutdown(); + + final int finalCount = completionStage.toCompletableFuture().get(1, TimeUnit.SECONDS); + assertEquals(2, finalCount); + //#unique-shutdown + } + + public static void uniqueKillSwitchAbortExample() throws Exception { + //#unique-abort + final Source<Integer, NotUsed> countingSrc = Source.from(new ArrayList<>(Arrays.asList(1, 2, 3, 4))) + .delay(FiniteDuration.apply(1, TimeUnit.SECONDS), DelayOverflowStrategy.backpressure()); + final Sink<Integer, CompletionStage<Integer>> lastSnk = Sink.last(); + + final Pair<UniqueKillSwitch, CompletionStage<Integer>> stream = countingSrc + .viaMat(KillSwitches.single(), Keep.right()) + .toMat(lastSnk, Keep.both()).run(mat); + + final UniqueKillSwitch killSwitch = stream.first(); + final CompletionStage<Integer> completionStage = stream.second(); + + final Exception error = new Exception("boom!"); + killSwitch.abort(error); + + final int result = completionStage.toCompletableFuture().exceptionally(e -> -1).get(1, TimeUnit.SECONDS); + assertEquals(-1, result); + //#unique-abort + } + + public void sharedKillSwitchShutdownExample() throws Exception { + //#shared-shutdown + final Source<Integer, NotUsed> countingSrc = Source.from(new ArrayList<>(Arrays.asList(1, 2, 3, 4))) + .delay(FiniteDuration.apply(1, TimeUnit.SECONDS), DelayOverflowStrategy.backpressure()); + final Sink<Integer, CompletionStage<Integer>> lastSnk = Sink.last(); + final SharedKillSwitch killSwitch = KillSwitches.shared("my-kill-switch"); + + final CompletionStage<Integer> completionStage = countingSrc + .viaMat(killSwitch.flow(), Keep.right()) + .toMat(lastSnk, Keep.right()).run(mat); + final CompletionStage<Integer> completionStageDelayed = countingSrc + .delay(FiniteDuration.apply(1, TimeUnit.SECONDS), DelayOverflowStrategy.backpressure()) + .viaMat(killSwitch.flow(), Keep.right()) + .toMat(lastSnk, Keep.right()).run(mat); + + doSomethingElse(); + killSwitch.shutdown(); + + final int finalCount = completionStage.toCompletableFuture().get(1, TimeUnit.SECONDS); + final int finalCountDelayed = completionStageDelayed.toCompletableFuture().get(1, TimeUnit.SECONDS); + assertEquals(2, finalCount); + assertEquals(1, finalCountDelayed); + //#shared-shutdown + } + + public static void sharedKillSwitchAbortExample() throws Exception { + //#shared-abort + final Source<Integer, NotUsed> countingSrc = Source.from(new ArrayList<>(Arrays.asList(1, 2, 3, 4))) + 
.delay(FiniteDuration.apply(1, TimeUnit.SECONDS), DelayOverflowStrategy.backpressure()); + final Sink<Integer, CompletionStage<Integer>> lastSnk = Sink.last(); + final SharedKillSwitch killSwitch = KillSwitches.shared("my-kill-switch"); + + final CompletionStage<Integer> completionStage1 = countingSrc + .viaMat(killSwitch.flow(), Keep.right()) + .toMat(lastSnk, Keep.right()).run(mat); + final CompletionStage<Integer> completionStage2 = countingSrc + .viaMat(killSwitch.flow(), Keep.right()) + .toMat(lastSnk, Keep.right()).run(mat); + + final Exception error = new Exception("boom!"); + killSwitch.abort(error); + + final int result1 = completionStage1.toCompletableFuture().exceptionally(e -> -1).get(1, TimeUnit.SECONDS); + final int result2 = completionStage2.toCompletableFuture().exceptionally(e -> -1).get(1, TimeUnit.SECONDS); + assertEquals(-1, result1); + assertEquals(-1, result2); + //#shared-abort + } + + private static void doSomethingElse() { + } +} diff --git a/akka-docs/rst/java/code/docs/stream/QuickStartDocTest.java b/akka-docs/rst/java/code/docs/stream/QuickStartDocTest.java index c738a9439c..c3e0395742 100644 --- a/akka-docs/rst/java/code/docs/stream/QuickStartDocTest.java +++ b/akka-docs/rst/java/code/docs/stream/QuickStartDocTest.java @@ -3,23 +3,27 @@ */ package docs.stream; +//#stream-imports +import akka.stream.*; +import akka.stream.javadsl.*; +//#stream-imports + +//#other-imports +import akka.Done; +import akka.NotUsed; +import akka.actor.ActorSystem; +import akka.util.ByteString; + import java.nio.file.Paths; import java.math.BigInteger; import java.util.concurrent.CompletionStage; import java.util.concurrent.ExecutionException; import java.util.concurrent.TimeUnit; -import org.junit.*; - -import akka.Done; -import akka.NotUsed; -import akka.actor.ActorSystem; -//#imports -import akka.stream.*; -import akka.stream.javadsl.*; -//#imports -import akka.util.ByteString; import scala.concurrent.duration.Duration; +//#other-imports + +import org.junit.*; /** * This class is not meant to be run as a test in the test suite, but it diff --git a/akka-docs/rst/java/http/common/marshalling.rst b/akka-docs/rst/java/http/common/marshalling.rst index cf098e1b88..b3feb49fed 100644 --- a/akka-docs/rst/java/http/common/marshalling.rst +++ b/akka-docs/rst/java/http/common/marshalling.rst @@ -119,7 +119,7 @@ If, however, your marshaller also needs to set things like the response status c or any headers then a ``ToEntityMarshaller[T]`` won't work. You'll need to fall down to providing a ``ToResponseMarshaller[T]`` or a ``ToRequestMarshaller[T]`` directly. -For writing you own marshallers you won't have to "manually" implement the ``Marshaller`` trait directly. +For writing your own marshallers you won't have to "manually" implement the ``Marshaller`` trait directly. Rather, it should be possible to use one of the convenience construction helpers defined on the ``Marshaller`` companion: diff --git a/akka-docs/rst/java/http/common/unmarshalling.rst b/akka-docs/rst/java/http/common/unmarshalling.rst index 9aaea15c66..37979b5c91 100644 --- a/akka-docs/rst/java/http/common/unmarshalling.rst +++ b/akka-docs/rst/java/http/common/unmarshalling.rst @@ -77,7 +77,7 @@ Custom Unmarshallers Akka HTTP gives you a few convenience tools for constructing unmarshallers for your own types. Usually you won't have to "manually" implement the ``Unmarshaller`` trait directly. 
-Rather, it should be possible to use one of the convenience construction helpers defined on the ``Marshaller`` +Rather, it should be possible to use one of the convenience construction helpers defined on the ``Unmarshaller`` companion: TODO rewrite sample for Java diff --git a/akka-docs/rst/java/http/implications-of-streaming-http-entity.rst b/akka-docs/rst/java/http/implications-of-streaming-http-entity.rst new file mode 100644 index 0000000000..b66f66da53 --- /dev/null +++ b/akka-docs/rst/java/http/implications-of-streaming-http-entity.rst @@ -0,0 +1,141 @@ +.. _implications-of-streaming-http-entities-java: + +Implications of the streaming nature of Request/Response Entities +----------------------------------------------------------------- + +Akka HTTP is streaming *all the way through*, which means that the back-pressure mechanisms enabled by Akka Streams +are exposed through all layers – from the TCP layer, through the HTTP server, all the way up to the user-facing ``HttpRequest`` +and ``HttpResponse`` and their ``HttpEntity`` APIs. + +This has surprising implications if you are used to non-streaming / non-reactive HTTP clients. +Specifically it means that: "*lack of consumption of the HTTP Entity is signaled as back-pressure to the other +side of the connection*". This is a feature, as it allows one to consume the entity only when ready, and to back-pressure servers/clients +so they do not overwhelm our application, which could otherwise cause unnecessary buffering of the entity in memory. + +.. warning:: + Consuming (or discarding) the Entity of a request is mandatory! + If *accidentally* left neither consumed nor discarded, Akka HTTP will + assume the incoming data should remain back-pressured, and will stall the incoming data via TCP back-pressure mechanisms. + +Client-Side handling of streaming HTTP Entities +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Consuming the HTTP Response Entity (Client) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The most common use-case of course is consuming the response entity, which can be done by +running the underlying ``dataBytes`` Source into some destination Sink +(or, on the server-side, by using directives which consume the entity for you, as shown later in this document). + +It is encouraged to use various streaming techniques to utilise the underlying infrastructure to its fullest, +for example by framing the incoming chunks, parsing them line-by-line and then connecting the flow into another +destination Sink, such as a File or other Akka Streams connector: + +.. includecode:: ../code/docs/http/javadsl/HttpClientExampleDocTest.java#manual-entity-consume-example-1 + +However, sometimes the need may arise to consume the entire entity as a ``Strict`` entity (which means that it is +completely loaded into memory). Akka HTTP provides a special ``toStrict(timeout, materializer)`` method which can be used to +eagerly consume the entity and make it available in memory: + +.. includecode:: ../code/docs/http/javadsl/HttpClientExampleDocTest.java#manual-entity-consume-example-2 + + +Discarding the HTTP Response Entity (Client) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +Sometimes when calling HTTP services we do not care about their response payload (e.g. all we care about is the response code), +yet, as explained above, the entity still has to be consumed in some way, otherwise we'll be exerting back-pressure on the +underlying TCP connection. + +The ``discardEntityBytes`` convenience method serves the purpose of easily discarding the entity if it has no purpose for us. 
+It does so by piping the incoming bytes directly into a ``Sink.ignore``. + +The two snippets below are equivalent, and work the same way on the server-side for incoming HTTP Requests: + +.. includecode:: ../code/docs/http/javadsl/HttpClientExampleDocTest.java#manual-entity-discard-example-1 + +Or the equivalent low-level code achieving the same result: + +.. includecode:: ../code/docs/http/javadsl/HttpClientExampleDocTest.java#manual-entity-discard-example-2 + +Server-Side handling of streaming HTTP Entities +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Similarly to the client-side, HTTP Entities are directly linked to Streams which are fed by the underlying +TCP connection. Thus, if request entities are not consumed, the server will back-pressure the connection, expecting +that the user-code will eventually decide what to do with the incoming data. + +Note that some directives force an implicit ``toStrict`` operation, such as ``entity(exampleUnmarshaller, example -> {})`` and similar ones. + +Consuming the HTTP Request Entity (Server) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The simplest way of consuming the incoming request entity is to transform it into an actual domain object, +for example by using the :ref:`-entity-java-` directive: + +.. includecode:: ../code/docs/http/javadsl/server/HttpServerExampleDocTest.java#consume-entity-directive + +Of course you can access the raw dataBytes as well and run the underlying stream, for example piping it into a +``FileIO`` Sink that signals completion via a ``CompletionStage`` once all the data has been written into the file: + +.. includecode:: ../code/docs/http/javadsl/server/HttpServerExampleDocTest.java#consume-raw-dataBytes + +Discarding the HTTP Request Entity (Server) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Sometimes, depending on some validation (e.g. checking whether a given user is allowed to perform uploads) +you may want to discard the uploaded entity. + +Please note that discarding means that the entire upload will proceed, even though you are not interested in the data +being streamed to the server - this may be useful if you are simply not interested in the given entity, however +you don't want to abort the entire connection (which we'll demonstrate as well), since there may be more requests +still pending on the same connection. + +In order to discard the data bytes explicitly you can invoke the ``discardEntityBytes`` method of the incoming ``HttpRequest``: + +.. includecode:: ../code/docs/http/javadsl/server/HttpServerExampleDocTest.java#discard-discardEntityBytes + +A related concept is *cancelling* the incoming ``entity.getDataBytes()`` stream, which results in Akka HTTP +*abruptly closing the connection from the Client*. This may be useful when you detect that the given user should not be allowed to make any +uploads at all, and you want to drop the connection (instead of reading and ignoring the incoming data). +This can be done by attaching the incoming ``entity.getDataBytes()`` to a ``Sink.cancelled`` which will cancel +the entity stream, which in turn will cause the underlying connection to be shut down by the server – +effectively hard-aborting the incoming request: + +.. includecode:: ../code/docs/http/javadsl/server/HttpServerExampleDocTest.java#discard-close-connections + +Closing connections is also explained in depth in the :ref:`http-closing-connection-low-level-java` section of the docs. 
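+
+For a quick illustration, the following is a minimal inline sketch (not one of the referenced snippet files)
+of what explicit discarding boils down to; the ``request`` and ``materializer`` values are assumed
+to be provided by the surrounding server code:
+
+.. code-block:: java
+
+   import akka.http.javadsl.model.HttpRequest;
+   import akka.stream.Materializer;
+   import akka.stream.javadsl.Sink;
+
+   public class DiscardEntitySketch {
+     // Read and drop the incoming entity bytes, releasing the back-pressure
+     // on the connection without buffering the data in memory.
+     static void discard(HttpRequest request, Materializer materializer) {
+       request.discardEntityBytes(materializer); // the convenience method described above
+       // equivalent low-level variant:
+       // request.entity().getDataBytes().runWith(Sink.ignore(), materializer);
+     }
+   }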
+ +Pending: Automatic discarding of unused entities +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Under certain conditions it is possible to detect that an entity is very unlikely to be used by the user for a given request, +and issue warnings or discard the entity automatically. This advanced feature has not been implemented yet, see the +note and issues below for further discussion and ideas. + +.. note:: + An advanced feature codenamed "auto draining" has been discussed and proposed for Akka HTTP, and we're hoping + to implement or help the community implement it. + + You can read more about it in `issue #18716 <https://github.com/akka/akka/issues/18716>`_ + as well as `issue #18540 <https://github.com/akka/akka/issues/18540>`_; as always, contributions are very welcome! + diff --git a/akka-docs/rst/java/http/index.rst b/akka-docs/rst/java/http/index.rst index 1a086d69b9..49e63aba62 100644 --- a/akka-docs/rst/java/http/index.rst +++ b/akka-docs/rst/java/http/index.rst @@ -37,6 +37,7 @@ akka-http-jackson routing-dsl/index client-side/index common/index + implications-of-streaming-http-entity configuration server-side-https-support diff --git a/akka-docs/rst/java/http/routing-dsl/directives/alphabetically.rst b/akka-docs/rst/java/http/routing-dsl/directives/alphabetically.rst index 93a10fbd12..53188a52c6 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/alphabetically.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/alphabetically.rst @@ -139,6 +139,7 @@ Directive Description :ref:`-uploadedFile-java-` Streams one uploaded file from a multipart request to a file on disk :ref:`-validate-java-` Checks a given condition before running its inner route :ref:`-withoutRequestTimeout-java-` Disables :ref:`request timeouts ` for a given route. +:ref:`-withoutSizeLimit-java-` Skips request entity size check :ref:`-withExecutionContext-java-` Runs its inner route with the given alternative ``ExecutionContext`` :ref:`-withMaterializer-java-` Runs its inner route with the given alternative ``Materializer`` :ref:`-withLog-java-` Runs its inner route with the given alternative ``LoggingAdapter`` @@ -146,5 +147,6 @@ Directive Description :ref:`-withRequestTimeout-java-` Configures the :ref:`request timeouts ` for a given route. :ref:`-withRequestTimeoutResponse-java-` Prepares the ``HttpResponse`` that is emitted if a request timeout is triggered. ``RequestContext => RequestContext`` function :ref:`-withSettings-java-` Runs its inner route with the given alternative ``RoutingSettings`` +:ref:`-withSizeLimit-java-` Applies request entity size check ================================================ ============================================================================ diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/cancelRejection.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/cancelRejection.rst index 8651b87a71..f1912765e2 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/cancelRejection.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/cancelRejection.rst @@ -16,4 +16,5 @@ which provides a nicer DSL for building rejection handlers. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. 
includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#cancelRejection diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/cancelRejections.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/cancelRejections.rst index c91ae5649f..5204437de4 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/cancelRejections.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/cancelRejections.rst @@ -18,4 +18,5 @@ which provides a nicer DSL for building rejection handlers. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#cancelRejections diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extract.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extract.rst index 45bfebf4d0..4896f35e98 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extract.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extract.rst @@ -13,4 +13,5 @@ See :ref:`ProvideDirectives-java` for an overview of similar directives. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extract diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractExecutionContext.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractExecutionContext.rst index 878538ca6e..ad37d1975c 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractExecutionContext.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractExecutionContext.rst @@ -14,4 +14,5 @@ See :ref:`-extract-java-` to learn more about how extractions work. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractExecutionContext diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractLog.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractLog.rst index 02e3d7b825..939090ea95 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractLog.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractLog.rst @@ -15,4 +15,5 @@ See :ref:`-extract-java-` and :ref:`ProvideDirectives-java` for an overview of s Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. 
includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractLog diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractMaterializer.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractMaterializer.rst index 447a0698d6..f1ede20d2f 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractMaterializer.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractMaterializer.rst @@ -13,4 +13,5 @@ See also :ref:`-withMaterializer-java-` to see how to customise the used materia Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractMaterializer diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractRequest.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractRequest.rst index ac990e314a..91c532ea11 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractRequest.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractRequest.rst @@ -13,4 +13,5 @@ directives. See :ref:`Request Directives-java`. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractRequest diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractRequestContext.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractRequestContext.rst index 44d1efa7f3..3abec29650 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractRequestContext.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractRequestContext.rst @@ -16,4 +16,5 @@ See also :ref:`-extractRequest-java-` if only interested in the :class:`HttpRequ Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractRequestContext diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractSettings.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractSettings.rst index a694279c5b..3983ba7e79 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractSettings.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractSettings.rst @@ -13,4 +13,5 @@ It is possible to override the settings for specific sub-routes by using the :re Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. 
includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractSettings diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractUnmatchedPath.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractUnmatchedPath.rst index a0a07266c4..4cabc34f83 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractUnmatchedPath.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractUnmatchedPath.rst @@ -15,4 +15,5 @@ Use ``mapUnmatchedPath`` to change the value of the unmatched path. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractUnmatchedPath diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractUri.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractUri.rst index 875ab01f1e..38985f0d68 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractUri.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/extractUri.rst @@ -12,4 +12,5 @@ targeted access to parts of the URI. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractUri diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapInnerRoute.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapInnerRoute.rst index f2908a90e5..a88cb022bc 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapInnerRoute.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapInnerRoute.rst @@ -12,4 +12,5 @@ with any other route. Usually, the returned route wraps the original one with cu Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapInnerRoute diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRejections.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRejections.rst index 34fdf1d440..351e903cc5 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRejections.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRejections.rst @@ -16,4 +16,5 @@ See :ref:`Response Transforming Directives-java` for similar directives. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. 
includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapRejections diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRequest.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRequest.rst index 87c3a8fa3b..a11e8ef1b8 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRequest.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRequest.rst @@ -16,4 +16,5 @@ See :ref:`Request Transforming Directives-java` for an overview of similar direc Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapRequest diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRequestContext.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRequestContext.rst index 39cd8cc3c7..f5546fa409 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRequestContext.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRequestContext.rst @@ -15,4 +15,5 @@ See :ref:`Request Transforming Directives-java` for an overview of similar direc Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapRequestContext diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapResponse.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapResponse.rst index c4a53d4466..912556d536 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapResponse.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapResponse.rst @@ -14,8 +14,10 @@ See also :ref:`-mapResponseHeaders-java-` or :ref:`-mapResponseEntity-java-` for Example: Override status ------------------------ -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapResponse Example: Default to empty JSON response on errors ------------------------------------------------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapResponse-advanced diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapResponseEntity.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapResponseEntity.rst index 8994140991..799c9618c5 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapResponseEntity.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapResponseEntity.rst @@ -13,4 +13,5 @@ See :ref:`Response Transforming Directives-java` for similar directives. 
Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapResponseEntity diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapResponseHeaders.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapResponseHeaders.rst index eacf9bb0c1..fae2264127 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapResponseHeaders.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapResponseHeaders.rst @@ -14,4 +14,5 @@ See :ref:`Response Transforming Directives-java` for similar directives. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapResponseHeaders diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResult.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResult.rst index d440ba759d..764734e1f9 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResult.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResult.rst @@ -14,4 +14,5 @@ See :ref:`Result Transformation Directives-java` for similar directives. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapRouteResult diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResultFuture.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResultFuture.rst index 0a0e33b8c5..efc21b4515 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResultFuture.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResultFuture.rst @@ -17,4 +17,5 @@ See :ref:`Result Transformation Directives-java` for similar directives. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapRouteResultFuture diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResultPF.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResultPF.rst index 8ff60a8305..7ed461d4e3 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResultPF.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResultPF.rst @@ -17,4 +17,6 @@ See :ref:`Result Transformation Directives-java` for similar directives. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. 
includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapRouteResultPF + diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResultWith.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResultWith.rst index b58e4de9ee..7757074126 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResultWith.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResultWith.rst @@ -16,4 +16,5 @@ See :ref:`Result Transformation Directives-java` for similar directives. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapRouteResultWith diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResultWithPF.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResultWithPF.rst index bf13964fac..e9f1c5d6eb 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResultWithPF.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapRouteResultWithPF.rst @@ -17,4 +17,5 @@ See :ref:`Result Transformation Directives-java` for similar directives. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapRouteResultWithPF diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapSettings.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapSettings.rst index 763ca2fc73..b54127a8fc 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapSettings.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapSettings.rst @@ -12,4 +12,5 @@ See also :ref:`-withSettings-java-` or :ref:`-extractSettings-java-`. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapSettings diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapUnmatchedPath.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapUnmatchedPath.rst index 6cef0c4cc3..de38d61c31 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapUnmatchedPath.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/mapUnmatchedPath.rst @@ -14,4 +14,5 @@ Use ``extractUnmatchedPath`` for extracting the current value of the unmatched p Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. 
includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapUnmatchedPath diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/pass.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/pass.rst index 3547026189..06dc518837 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/pass.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/pass.rst @@ -11,4 +11,5 @@ It is usually used as a "neutral element" when combining directives generically. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#pass diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/provide.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/provide.rst index 290f0f07ef..305ea9319a 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/provide.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/provide.rst @@ -13,4 +13,5 @@ See :ref:`ProvideDirectives-java` for an overview of similar directives. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#provide diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/recoverRejections.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/recoverRejections.rst index e561f9c515..78994357c8 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/recoverRejections.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/recoverRejections.rst @@ -17,4 +17,5 @@ rejections. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#recoverRejections diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/recoverRejectionsWith.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/recoverRejectionsWith.rst index 7b010dbdbc..7220a2cfe7 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/recoverRejectionsWith.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/recoverRejectionsWith.rst @@ -20,4 +20,5 @@ See :ref:`-recoverRejections-java-` (the synchronous equivalent of this directiv Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. 
includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#recoverRejectionsWith diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/withExecutionContext.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/withExecutionContext.rst index 746cdbb2be..d8de735585 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/withExecutionContext.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/withExecutionContext.rst @@ -14,4 +14,5 @@ or used by directives which internally extract the materializer without sufracin Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#withExecutionContext diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/withLog.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/withLog.rst index e183d088b9..e98d6ef0c2 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/withLog.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/withLog.rst @@ -14,4 +14,5 @@ or used by directives which internally extract the materializer without surfacin Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#withLog diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/withMaterializer.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/withMaterializer.rst index 8037dd11ff..510b02058e 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/withMaterializer.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/withMaterializer.rst @@ -14,4 +14,5 @@ or used by directives which internally extract the materializer without sufracin Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#withMaterializer diff --git a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/withSettings.rst b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/withSettings.rst index 362e269ab1..b284726c08 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/withSettings.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/basic-directives/withSettings.rst @@ -13,4 +13,6 @@ or used by directives which internally extract the materializer without sufracin Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. 
includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#withSettings + diff --git a/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/decodeRequest.rst b/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/decodeRequest.rst index a8d38d2fda..75d3fe1f27 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/decodeRequest.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/decodeRequest.rst @@ -10,4 +10,5 @@ Decompresses the incoming request if it is ``gzip`` or ``deflate`` compressed. U Example ------- -..TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/CodingDirectivesExamplesTest.java#decodeRequest diff --git a/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/decodeRequestWith.rst b/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/decodeRequestWith.rst index 430bfcb8e2..d4c151a2ac 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/decodeRequestWith.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/decodeRequestWith.rst @@ -10,4 +10,5 @@ Decodes the incoming request if it is encoded with one of the given encoders. If Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/CodingDirectivesExamplesTest.java#decodeRequestWith diff --git a/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/encodeResponse.rst b/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/encodeResponse.rst index 5030bc6c18..8a6eb7cf17 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/encodeResponse.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/encodeResponse.rst @@ -14,6 +14,7 @@ If the ``Accept-Encoding`` header is missing or empty or specifies an encoding o Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/CodingDirectivesExamplesTest.java#encodeResponse .. _RFC7231: http://tools.ietf.org/html/rfc7231#section-5.3.4 diff --git a/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/encodeResponseWith.rst b/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/encodeResponseWith.rst index b3ffe1413a..f49ac53ef0 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/encodeResponseWith.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/encodeResponseWith.rst @@ -17,6 +17,7 @@ response encoding is used. Otherwise the request is rejected. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/CodingDirectivesExamplesTest.java#encodeResponseWith .. 
_RFC7231: http://tools.ietf.org/html/rfc7231#section-5.3.4 diff --git a/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/responseEncodingAccepted.rst b/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/responseEncodingAccepted.rst index 100019fcee..c3ca799b13 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/responseEncodingAccepted.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/coding-directives/responseEncodingAccepted.rst @@ -10,4 +10,5 @@ Passes the request to the inner route if the request accepts the argument encodi Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/CodingDirectivesExamplesTest.java#responseEncodingAccepted diff --git a/akka-docs/rst/java/http/routing-dsl/directives/execution-directives/handleExceptions.rst b/akka-docs/rst/java/http/routing-dsl/directives/execution-directives/handleExceptions.rst index 888fb4447d..56165e0340 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/execution-directives/handleExceptions.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/execution-directives/handleExceptions.rst @@ -14,4 +14,5 @@ See :ref:`exception-handling-java` for general information about options for han Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/ExecutionDirectivesExamplesTest.java#handleExceptions diff --git a/akka-docs/rst/java/http/routing-dsl/directives/execution-directives/handleRejections.rst b/akka-docs/rst/java/http/routing-dsl/directives/execution-directives/handleRejections.rst index 8c0ef2d868..619155c4e7 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/execution-directives/handleRejections.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/execution-directives/handleRejections.rst @@ -13,4 +13,5 @@ See :ref:`rejections-java` for general information about options for handling re Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/ExecutionDirectivesExamplesTest.java#handleRejections diff --git a/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromBrowseableDirectories.rst b/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromBrowseableDirectories.rst index 502e30a32d..0aed8331c3 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromBrowseableDirectories.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromBrowseableDirectories.rst @@ -19,4 +19,5 @@ For more details refer to :ref:`-getFromBrowseableDirectory-java-`. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. 
includecode:: ../../../../code/docs/http/javadsl/server/directives/FileAndResourceDirectivesExamplesTest.java#getFromBrowseableDirectories diff --git a/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromBrowseableDirectory.rst b/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromBrowseableDirectory.rst index 0523adb48e..72c4ae7d97 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromBrowseableDirectory.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromBrowseableDirectory.rst @@ -19,7 +19,8 @@ For more details refer to :ref:`-getFromBrowseableDirectory-java-`. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/FileAndResourceDirectivesExamplesTest.java#getFromBrowseableDirectory Default file listing page example diff --git a/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromDirectory.rst b/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromDirectory.rst index 1459b17392..7fe40d9675 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromDirectory.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromDirectory.rst @@ -27,4 +27,5 @@ Note that it's not required to wrap this directive with ``get`` as this directiv Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/FileAndResourceDirectivesExamplesTest.java#getFromDirectory diff --git a/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromFile.rst b/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromFile.rst index 81d26733be..5042cc5749 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromFile.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromFile.rst @@ -27,4 +27,5 @@ Note that it's not required to wrap this directive with ``get`` as this directiv Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/FileAndResourceDirectivesExamplesTest.java#getFromFile diff --git a/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromResource.rst b/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromResource.rst index 17754ec360..d7032776df 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromResource.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromResource.rst @@ -15,4 +15,5 @@ Note that it's not required to wrap this directive with ``get`` as this directiv Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! 
Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/FileAndResourceDirectivesExamplesTest.java#getFromResource diff --git a/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromResourceDirectory.rst b/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromResourceDirectory.rst index 32d8369cae..1e56be9cff 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromResourceDirectory.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/getFromResourceDirectory.rst @@ -15,4 +15,5 @@ Note that it's not required to wrap this directive with ``get`` as this directiv Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/FileAndResourceDirectivesExamplesTest.java#getFromResourceDirectory diff --git a/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/listDirectoryContents.rst b/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/listDirectoryContents.rst index b0b0de9455..d8e58f51e3 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/listDirectoryContents.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/file-and-resource-directives/listDirectoryContents.rst @@ -20,4 +20,5 @@ Note that it's not required to wrap this directive with ``get`` as this directiv Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/FileAndResourceDirectivesExamplesTest.java#listDirectoryContents diff --git a/akka-docs/rst/java/http/routing-dsl/directives/file-upload-directives/fileUpload.rst b/akka-docs/rst/java/http/routing-dsl/directives/file-upload-directives/fileUpload.rst index 7c3f703edf..01991357bf 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/file-upload-directives/fileUpload.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/file-upload-directives/fileUpload.rst @@ -14,7 +14,8 @@ with the same name, the first one will be used and the subsequent ones ignored. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. includecode2:: ../../../../code/docs/http/javadsl/server/directives/FileUploadDirectivesExamplesTest.java + :snippet: fileUpload :: diff --git a/akka-docs/rst/java/http/routing-dsl/directives/file-upload-directives/uploadedFile.rst b/akka-docs/rst/java/http/routing-dsl/directives/file-upload-directives/uploadedFile.rst index 7d66d3afa9..f6ffe06511 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/file-upload-directives/uploadedFile.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/file-upload-directives/uploadedFile.rst @@ -20,4 +20,5 @@ one will be used and the subsequent ones ignored. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! 
Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. includecode2:: ../../../../code/docs/http/javadsl/server/directives/FileUploadDirectivesExamplesTest.java + :snippet: uploadedFile \ No newline at end of file diff --git a/akka-docs/rst/java/http/routing-dsl/directives/form-field-directives/formField.rst b/akka-docs/rst/java/http/routing-dsl/directives/form-field-directives/formField.rst index 6711ae2b37..5b9265ea8d 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/form-field-directives/formField.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/form-field-directives/formField.rst @@ -8,4 +8,5 @@ Allows extracting a single Form field sent in the request. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/FormFieldDirectivesExamplesTest.java#formField diff --git a/akka-docs/rst/java/http/routing-dsl/directives/form-field-directives/formFieldList.rst b/akka-docs/rst/java/http/routing-dsl/directives/form-field-directives/formFieldList.rst index 260e57db43..7f6ba96934 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/form-field-directives/formFieldList.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/form-field-directives/formFieldList.rst @@ -17,4 +17,5 @@ can cause performance issues or even an ``OutOfMemoryError`` s. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/FormFieldDirectivesExamplesTest.java#formFieldList diff --git a/akka-docs/rst/java/http/routing-dsl/directives/form-field-directives/formFieldMap.rst b/akka-docs/rst/java/http/routing-dsl/directives/form-field-directives/formFieldMap.rst index f8c34c0f8e..5b678f0518 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/form-field-directives/formFieldMap.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/form-field-directives/formFieldMap.rst @@ -16,4 +16,5 @@ See :ref:`-formFieldList-java-` for details. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/FormFieldDirectivesExamplesTest.java#formFieldMap diff --git a/akka-docs/rst/java/http/routing-dsl/directives/form-field-directives/formFieldMultiMap.rst b/akka-docs/rst/java/http/routing-dsl/directives/form-field-directives/formFieldMultiMap.rst index 7e4322023c..a922975c5b 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/form-field-directives/formFieldMultiMap.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/form-field-directives/formFieldMultiMap.rst @@ -19,4 +19,5 @@ Use of this directive can result in performance degradation or even in ``OutOfMe Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. 
includecode:: ../../../../code/docs/http/javadsl/server/directives/FormFieldDirectivesExamplesTest.java#formFieldMultiMap diff --git a/akka-docs/rst/java/http/routing-dsl/directives/misc-directives/index.rst b/akka-docs/rst/java/http/routing-dsl/directives/misc-directives/index.rst index 861e527ef6..997731d05b 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/misc-directives/index.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/misc-directives/index.rst @@ -12,3 +12,5 @@ MiscDirectives requestEntityPresent selectPreferredLanguage validate + withoutSizeLimit + withSizeLimit diff --git a/akka-docs/rst/java/http/routing-dsl/directives/misc-directives/withSizeLimit.rst b/akka-docs/rst/java/http/routing-dsl/directives/misc-directives/withSizeLimit.rst new file mode 100644 index 0000000000..faef2f3ebd --- /dev/null +++ b/akka-docs/rst/java/http/routing-dsl/directives/misc-directives/withSizeLimit.rst @@ -0,0 +1,20 @@ +.. _-withSizeLimit-java-: + +withSizeLimit +============= + +Description +----------- +Fails the stream with ``EntityStreamSizeException`` if the request entity size exceeds the given limit. The limit given +as a parameter overrides the limit configured with ``akka.http.parsing.max-content-length``. + +The whole mechanism of entity size checking is intended to prevent certain Denial-of-Service attacks. +The suggested setup is therefore to keep ``akka.http.parsing.max-content-length`` relatively low and to use the ``withSizeLimit`` +directive for endpoints which expect larger entities. + +See also :ref:`-withoutSizeLimit-java-` for skipping the request entity size check. + +Example ------- + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/MiscDirectivesExamplesTest.java#withSizeLimitExample diff --git a/akka-docs/rst/java/http/routing-dsl/directives/misc-directives/withoutSizeLimit.rst b/akka-docs/rst/java/http/routing-dsl/directives/misc-directives/withoutSizeLimit.rst new file mode 100644 index 0000000000..de16f30131 --- /dev/null +++ b/akka-docs/rst/java/http/routing-dsl/directives/misc-directives/withoutSizeLimit.rst @@ -0,0 +1,19 @@ +.. _-withoutSizeLimit-java-: + +withoutSizeLimit +================ + +Description +----------- +Skips request entity size verification. + +The whole mechanism of entity size checking is intended to prevent certain Denial-of-Service attacks. +The suggested setup is therefore to keep ``akka.http.parsing.max-content-length`` relatively low and to use the ``withoutSizeLimit`` +directive only for endpoints on which size verification should not be performed. + +See also :ref:`-withSizeLimit-java-` for setting a request entity size limit. + +Example ------- + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/MiscDirectivesExamplesTest.java#withSizeLimitExample
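Since readers may not have the referenced snippet files at hand while the Java examples are being filled in, here is a rough, non-authoritative sketch of a size-limited endpoint. It assumes the ``withSizeLimit(long, Supplier<Route>)`` directive and ``Unmarshaller.entityToString()`` from ``akka.http.javadsl``; names and signatures may differ between versions::

    import static akka.http.javadsl.server.Directives.*;

    import akka.http.javadsl.server.Route;
    import akka.http.javadsl.unmarshalling.Unmarshaller;

    public class SizeLimitExample {
      // Accept entities up to 5 MiB on this endpoint only, overriding the
      // (deliberately low) global akka.http.parsing.max-content-length;
      // larger entities fail the stream with EntityStreamSizeException.
      public static Route uploadRoute() {
        return path("upload", () ->
          withSizeLimit(5 * 1024 * 1024, () ->
            entity(Unmarshaller.entityToString(), body ->
              complete("received " + body.length() + " characters"))));
      }
    }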
diff --git a/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/index.rst b/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/index.rst index 249c508f2f..abc97ca249 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/index.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/index.rst @@ -7,6 +7,7 @@ ParameterDirectives :maxdepth: 1 parameter + parameters parameterMap parameterMultiMap parameterSeq diff --git a/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameter.rst b/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameter.rst index 93491087c1..76ad0cf33d 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameter.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameter.rst @@ -12,4 +12,5 @@ See :ref:`which-parameter-directive-java` to understand when to use which direct Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/ParameterDirectivesExamplesTest.java#parameter diff --git a/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameterMap.rst b/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameterMap.rst index 7713310308..0001549794 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameterMap.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameterMap.rst @@ -12,4 +12,5 @@ See also :ref:`which-parameter-directive-java` to understand when to use which d Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/ParameterDirectivesExamplesTest.java#parameterMap diff --git a/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameterMultiMap.rst b/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameterMultiMap.rst index 75bceb91b6..124272a93f 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameterMultiMap.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameterMultiMap.rst @@ -17,4 +17,5 @@ See :ref:`which-parameter-directive-java` to understand when to use which direct Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/ParameterDirectivesExamplesTest.java#parameterMultiMap
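A minimal sketch of the simplest case, extracting one query parameter with ``parameter`` (assuming the ``parameter(String, Function<String, Route>)`` overload of the Java DSL; treat as an outline, not the exact API of this version)::

    import static akka.http.javadsl.server.Directives.*;

    import akka.http.javadsl.server.Route;

    public class ParameterExample {
      // GET /hello?name=Akka completes with "Hello Akka"; a request without
      // the `name` parameter is rejected (MissingQueryParamRejection).
      public static Route helloRoute() {
        return path("hello", () ->
          parameter("name", name ->
            complete("Hello " + name)));
      }
    }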
diff --git a/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameterSeq.rst b/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameterSeq.rst index 89c6808c91..c767389cf7 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameterSeq.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameterSeq.rst @@ -13,4 +13,5 @@ See :ref:`which-parameter-directive-java` to understand when to use which direct Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/ParameterDirectivesExamplesTest.java#parameterSeq diff --git a/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameters.rst b/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameters.rst new file mode 100644 index 0000000000..f451a63e91 --- /dev/null +++ b/akka-docs/rst/java/http/routing-dsl/directives/parameter-directives/parameters.rst @@ -0,0 +1,15 @@ +.. _-parameters-java-: + +parameters +========== +Extracts multiple *query* parameter values from the request. + +Description +----------- + +See :ref:`which-parameter-directive-java` to understand when to use which directive. + +Example ------- + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/ParameterDirectivesExamplesTest.java#parameters diff --git a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/path.rst b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/path.rst index afbf2475fb..67cb583dcd 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/path.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/path.rst @@ -24,7 +24,11 @@ a ``path`` directive will always be empty). Depending on the type of its ``PathMatcher`` argument the ``path`` directive extracts zero or more values from the URI. If the match fails the request is rejected with an :ref:`empty rejection set `. +.. note:: The empty string (also called the empty word or identity) is the **neutral element** of the string concatenation operation, + so it will match everything; remember, however, that ``path`` requires the whole remaining path to be matched, so (``/``) will succeed + and (``/whatever``) will fail. The :ref:`-pathPrefix-java-` directive provides more liberal behaviour. + Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/PathDirectivesExamplesTest.java#path-dsl
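To make the ``path`` versus ``pathPrefix`` distinction from the note above concrete, a rough sketch follows (assuming the ``Supplier<Route>`` overloads and the ``route`` alternatives combinator, called ``concat`` in later versions; the exact API may differ)::

    import static akka.http.javadsl.server.Directives.*;

    import akka.http.javadsl.server.Route;

    public class PathExample {
      public static Route routes() {
        return route(
          // `path` must consume the whole remaining path: matches /ping only.
          path("ping", () -> complete("PONG")),
          // `pathPrefix` consumes only a prefix: /api/users reaches the
          // inner route with "users" left over for the nested `path`.
          pathPrefix("api", () ->
            path("users", () -> complete("user listing"))));
      }
    }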
diff --git a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathEnd.rst b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathEnd.rst index fa7513832a..a1d6207393 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathEnd.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathEnd.rst @@ -15,4 +15,4 @@ inner-level to discriminate "path already fully matched" from other alternatives Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/PathDirectivesExamplesTest.java#path-end diff --git a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathEndOrSingleSlash.rst b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathEndOrSingleSlash.rst index c1512f73b7..3d2818ec21 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathEndOrSingleSlash.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathEndOrSingleSlash.rst @@ -16,4 +16,4 @@ It is equivalent to ``pathEnd | pathSingleSlash`` but slightly more efficient. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/PathDirectivesExamplesTest.java#path-end-or-single-slash diff --git a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathPrefix.rst b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathPrefix.rst index 4f8b5bae96..8fe5670d59 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathPrefix.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathPrefix.rst @@ -18,7 +18,10 @@ As opposed to its :ref:`-rawPathPrefix-java-` counterpart ``pathPrefix`` automat Depending on the type of its ``PathMatcher`` argument the ``pathPrefix`` directive extracts zero or more values from the URI. If the match fails the request is rejected with an :ref:`empty rejection set `. +.. note:: The empty string (also called the empty word or identity) is the **neutral element** of the string concatenation operation, + so it will match everything and consume nothing. The :ref:`-path-java-` directive provides stricter behaviour. + Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/PathDirectivesExamplesTest.java#path-prefix diff --git a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathPrefixTest.rst b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathPrefixTest.rst index 10e9374179..d1684857d4 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathPrefixTest.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathPrefixTest.rst @@ -24,4 +24,4 @@ the URI. If the match fails the request is rejected with an :ref:`empty rejectio Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. 
includecode:: ../../../../code/docs/http/javadsl/server/directives/PathDirectivesExamplesTest.java#path-prefix-test diff --git a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathSingleSlash.rst b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathSingleSlash.rst index a9c58d02f3..16a115282e 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathSingleSlash.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathSingleSlash.rst @@ -14,4 +14,4 @@ This directive is a simple alias for ``pathPrefix(PathEnd)`` and is mostly used Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/PathDirectivesExamplesTest.java#path-single-slash diff --git a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathSuffix.rst b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathSuffix.rst index 109603e960..3efcf9ef98 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathSuffix.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathSuffix.rst @@ -24,4 +24,4 @@ the URI. If the match fails the request is rejected with an :ref:`empty rejectio Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/PathDirectivesExamplesTest.java#path-suffix diff --git a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathSuffixTest.rst b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathSuffixTest.rst index 46738f3f5e..f0263e9bd9 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathSuffixTest.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/pathSuffixTest.rst @@ -25,4 +25,4 @@ the URI. If the match fails the request is rejected with an :ref:`empty rejectio Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/PathDirectivesExamplesTest.java#path-suffix-test diff --git a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/rawPathPrefix.rst b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/rawPathPrefix.rst index 73bd809966..78269a610b 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/rawPathPrefix.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/rawPathPrefix.rst @@ -21,4 +21,4 @@ the URI. If the match fails the request is rejected with an :ref:`empty rejectio Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. 
includecode:: ../../../../code/docs/http/javadsl/server/directives/PathDirectivesExamplesTest.java#raw-path-prefix-test diff --git a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/rawPathPrefixTest.rst b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/rawPathPrefixTest.rst index 8b31b86fcc..e37ec18813 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/rawPathPrefixTest.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/rawPathPrefixTest.rst @@ -24,4 +24,4 @@ from the URI. If the match fails the request is rejected with an :ref:`empty rej Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/PathDirectivesExamplesTest.java#raw-path-prefix-test diff --git a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/redirectToNoTrailingSlashIfPresent.rst b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/redirectToNoTrailingSlashIfPresent.rst index 6f15e151ba..8de32d9d8b 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/redirectToNoTrailingSlashIfPresent.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/redirectToNoTrailingSlashIfPresent.rst @@ -6,7 +6,7 @@ redirectToNoTrailingSlashIfPresent Description ----------- If the requested path does end with a trailing ``/`` character, -redirects to the same path without that trailing slash.. +redirects to the same path without that trailing slash. Redirects the HTTP Client to the same resource yet without the trailing ``/``, in case the request contained it. When redirecting an HttpResponse with the given redirect response code (i.e. ``MovedPermanently`` or ``TemporaryRedirect`` @@ -24,6 +24,6 @@ See also :ref:`-redirectToTrailingSlashIfMissing-java-` for the opposite behavio Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/PathDirectivesExamplesTest.java#redirect-notrailing-slash-present See also :ref:`-redirectToTrailingSlashIfMissing-java-` which achieves the opposite - redirecting paths in case they do *not* have a trailing slash. diff --git a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/redirectToTrailingSlashIfMissing.rst b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/redirectToTrailingSlashIfMissing.rst index 1552eb67f4..f8d81bc9c9 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/path-directives/redirectToTrailingSlashIfMissing.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/path-directives/redirectToTrailingSlashIfMissing.rst @@ -20,6 +20,6 @@ See also :ref:`-redirectToNoTrailingSlashIfPresent-java-` for the opposite behav Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/PathDirectivesExamplesTest.java#redirect-notrailing-slash-missing See also :ref:`-redirectToNoTrailingSlashIfPresent-java-` which achieves the opposite - redirecting paths in case they do have a trailing slash.
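As a rough illustration of the trailing-slash redirect pair (directive and status-code names assumed from ``akka.http.javadsl``; they may vary by version)::

    import static akka.http.javadsl.server.Directives.*;

    import akka.http.javadsl.model.StatusCodes;
    import akka.http.javadsl.server.Route;

    public class TrailingSlashExample {
      // GET /docs is answered with a 301 redirect to /docs/ so that
      // relative links in the served page resolve correctly.
      public static Route routes() {
        return redirectToTrailingSlashIfMissing(StatusCodes.MOVED_PERMANENTLY, () ->
          pathPrefix("docs", () ->
            pathSingleSlash(() -> complete("docs index"))));
      }
    }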
diff --git a/akka-docs/rst/java/http/routing-dsl/directives/range-directives/withRangeSupport.rst b/akka-docs/rst/java/http/routing-dsl/directives/range-directives/withRangeSupport.rst index 263d6325d6..4387e7afe6 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/range-directives/withRangeSupport.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/range-directives/withRangeSupport.rst @@ -27,4 +27,5 @@ See also: https://tools.ietf.org/html/rfc7233 Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. includecode2:: ../../../../code/docs/http/javadsl/server/directives/RangeDirectivesExamplesTest.java + :snippet: withRangeSupport \ No newline at end of file diff --git a/akka-docs/rst/java/http/routing-dsl/directives/route-directives/complete.rst b/akka-docs/rst/java/http/routing-dsl/directives/route-directives/complete.rst index 0380a70a9b..d5e8f70629 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/route-directives/complete.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/route-directives/complete.rst @@ -17,4 +17,5 @@ Please note that the ``complete`` directive has multiple variants, like Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/RouteDirectivesExamplesTest.java#complete diff --git a/akka-docs/rst/java/http/routing-dsl/directives/route-directives/failWith.rst b/akka-docs/rst/java/http/routing-dsl/directives/route-directives/failWith.rst index 5e5aae085c..66d6e1e8d6 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/route-directives/failWith.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/route-directives/failWith.rst @@ -24,4 +24,5 @@ exception. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/RouteDirectivesExamplesTest.java#failWith diff --git a/akka-docs/rst/java/http/routing-dsl/directives/route-directives/redirect.rst b/akka-docs/rst/java/http/routing-dsl/directives/route-directives/redirect.rst index bd98427222..07f9db4377 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/route-directives/redirect.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/route-directives/redirect.rst @@ -10,9 +10,8 @@ Completes the request with a redirection response to a given targer URI and of a ``redirect`` is a convenience helper for completing the request with a redirection response. It is equivalent to this snippet relying on the ``complete`` directive: -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. - Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. 
includecode:: ../../../../code/docs/http/javadsl/server/directives/RouteDirectivesExamplesTest.java#redirect diff --git a/akka-docs/rst/java/http/routing-dsl/directives/route-directives/reject.rst b/akka-docs/rst/java/http/routing-dsl/directives/route-directives/reject.rst index 2a21de1c68..e31ec91878 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/route-directives/reject.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/route-directives/reject.rst @@ -19,4 +19,5 @@ modifier for "filtering out" certain cases. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/RouteDirectivesExamplesTest.java#reject diff --git a/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateBasic.rst b/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateBasic.rst index fb3999f259..16fd9479c8 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateBasic.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateBasic.rst @@ -27,4 +27,5 @@ See :ref:`credentials-and-timing-attacks-java` for details about verifying the s Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/SecurityDirectivesExamplesTest.java#authenticateBasic diff --git a/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateBasicAsync.rst b/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateBasicAsync.rst index 4cd3f54777..2267737a5a 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateBasicAsync.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateBasicAsync.rst @@ -25,4 +25,5 @@ See :ref:`credentials-and-timing-attacks-java` for details about verifying the s Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/SecurityDirectivesExamplesTest.java#authenticateBasicAsync diff --git a/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateBasicPF.rst b/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateBasicPF.rst index f5731af93f..9617e2a3c1 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateBasicPF.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateBasicPF.rst @@ -25,4 +25,5 @@ See :ref:`credentials-and-timing-attacks-java` for details about verifying the s Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. 
includecode:: ../../../../code/docs/http/javadsl/server/directives/SecurityDirectivesExamplesTest.java#authenticateBasicPF diff --git a/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateBasicPFAsync.rst b/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateBasicPFAsync.rst index ff0e95174e..e0c5e5118d 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateBasicPFAsync.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateBasicPFAsync.rst @@ -22,4 +22,5 @@ See :ref:`credentials-and-timing-attacks-java` for details about verifying the s Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/SecurityDirectivesExamplesTest.java#authenticateBasicPFAsync diff --git a/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateOrRejectWithChallenge.rst b/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateOrRejectWithChallenge.rst index 76509bdb2d..4b96af6747 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateOrRejectWithChallenge.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authenticateOrRejectWithChallenge.rst @@ -16,4 +16,5 @@ More details about challenge-response authentication are available in the `RFC 2 Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/SecurityDirectivesExamplesTest.java#authenticateOrRejectWithChallenge diff --git a/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authorize.rst b/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authorize.rst index caa435d414..6a9306ba8a 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authorize.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authorize.rst @@ -24,4 +24,5 @@ See also :ref:`-authorize-java-` for the asynchronous version of this directive. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/SecurityDirectivesExamplesTest.java#authorize diff --git a/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authorizeAsync.rst b/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authorizeAsync.rst index c1920a79d8..32fa84a65a 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authorizeAsync.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/security-directives/authorizeAsync.rst @@ -25,4 +25,5 @@ See also :ref:`-authorize-java-` for the synchronous version of this directive. Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. 
includecode:: ../../../../code/docs/http/javadsl/server/directives/SecurityDirectivesExamplesTest.java#authorizeAsync diff --git a/akka-docs/rst/java/http/routing-dsl/directives/security-directives/extractCredentials.rst b/akka-docs/rst/java/http/routing-dsl/directives/security-directives/extractCredentials.rst index d8c61a5d64..d24acf4484 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/security-directives/extractCredentials.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/security-directives/extractCredentials.rst @@ -13,4 +13,5 @@ See :ref:`credentials-and-timing-attacks-java` for details about verifying the s Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. + +.. includecode:: ../../../../code/docs/http/javadsl/server/directives/SecurityDirectivesExamplesTest.java#extractCredentials diff --git a/akka-docs/rst/java/http/routing-dsl/directives/timeout-directives/withRequestTimeout.rst b/akka-docs/rst/java/http/routing-dsl/directives/timeout-directives/withRequestTimeout.rst index 43fe7c2376..985e433fc4 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/timeout-directives/withRequestTimeout.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/timeout-directives/withRequestTimeout.rst @@ -33,8 +33,10 @@ For more information about various timeouts in Akka HTTP see :ref:`http-timeouts Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. includecode2:: ../../../../code/docs/http/javadsl/server/directives/TimeoutDirectivesExamplesTest.java + :snippet: withRequestTimeout-plain With setting the handler at the same time: -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. includecode2:: ../../../../code/docs/http/javadsl/server/directives/TimeoutDirectivesExamplesTest.java + :snippet: withRequestTimeout-with-handler \ No newline at end of file diff --git a/akka-docs/rst/java/http/routing-dsl/directives/timeout-directives/withRequestTimeoutResponse.rst b/akka-docs/rst/java/http/routing-dsl/directives/timeout-directives/withRequestTimeoutResponse.rst index cff7040784..dfb27824a4 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/timeout-directives/withRequestTimeoutResponse.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/timeout-directives/withRequestTimeoutResponse.rst @@ -23,4 +23,5 @@ To learn more about various timeouts in Akka HTTP and how to configure them see Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. 
includecode2:: ../../../../code/docs/http/javadsl/server/directives/TimeoutDirectivesExamplesTest.java + :snippet: withRequestTimeoutResponse \ No newline at end of file diff --git a/akka-docs/rst/java/http/routing-dsl/directives/timeout-directives/withoutRequestTimeout.rst b/akka-docs/rst/java/http/routing-dsl/directives/timeout-directives/withoutRequestTimeout.rst index 271489b739..8533ec5b5f 100644 --- a/akka-docs/rst/java/http/routing-dsl/directives/timeout-directives/withoutRequestTimeout.rst +++ b/akka-docs/rst/java/http/routing-dsl/directives/timeout-directives/withoutRequestTimeout.rst @@ -20,4 +20,5 @@ For more information about various timeouts in Akka HTTP see :ref:`http-timeouts Example ------- -TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 `_. +.. includecode2:: ../../../../code/docs/http/javadsl/server/directives/TimeoutDirectivesExamplesTest.java + :snippet: withoutRequestTimeout \ No newline at end of file diff --git a/akka-docs/rst/java/http/routing-dsl/index.rst b/akka-docs/rst/java/http/routing-dsl/index.rst index d5e754af0b..f84f20759f 100644 --- a/akka-docs/rst/java/http/routing-dsl/index.rst +++ b/akka-docs/rst/java/http/routing-dsl/index.rst @@ -32,7 +32,7 @@ Bind failures ^^^^^^^^^^^^^ For example the server might be unable to bind to the given port. For example when the port is already taken by another application, or if the port is privileged (i.e. only usable by ``root``). -In this case the "binding future" will fail immediatly, and we can react to if by listening on the CompletionStage's completion: +In this case the "binding future" will fail immediately, and we can react to it by listening on the CompletionStage's completion: .. includecode:: ../../code/docs/http/javadsl/server/HighLevelServerBindFailureExample.java :include: binding-failure-high-level-example diff --git a/akka-docs/rst/java/http/routing-dsl/testkit.rst b/akka-docs/rst/java/http/routing-dsl/testkit.rst index 4eddb2a2f8..ad1bac69cf 100644 --- a/akka-docs/rst/java/http/routing-dsl/testkit.rst +++ b/akka-docs/rst/java/http/routing-dsl/testkit.rst @@ -9,7 +9,7 @@ response properties in a compact way. To use the testkit you need to take these steps: -* add a dependency to the ``akka-http-testkit-experimental`` module +* add a dependency to the ``akka-http-testkit`` module * derive the test class from ``JUnitRouteTest`` * wrap the route under test with ``RouteTest.testRoute`` to create a ``TestRoute`` * run requests against the route using ``TestRoute.run(request)`` which will return
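Put together, the testkit steps above might look like the following minimal JUnit test, a sketch assuming the ``testRoute`` helper and the ``TestRouteResult`` assertion names of the javadsl testkit::

    import static akka.http.javadsl.server.Directives.*;

    import akka.http.javadsl.model.HttpRequest;
    import akka.http.javadsl.testkit.JUnitRouteTest;
    import akka.http.javadsl.testkit.TestRoute;
    import org.junit.Test;

    public class PingRouteTest extends JUnitRouteTest {
      // Wrap the route under test once and reuse it across test methods.
      private final TestRoute route =
        testRoute(path("ping", () -> complete("PONG")));

      @Test
      public void pingShouldAnswerWithPong() {
        route.run(HttpRequest.GET("/ping"))
          .assertStatusCode(200)
          .assertEntity("PONG");
      }
    }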
diff --git a/akka-docs/rst/java/http/server-side/low-level-server-side-api.rst b/akka-docs/rst/java/http/server-side/low-level-server-side-api.rst index ff751fca87..4c865ed90f 100644 --- a/akka-docs/rst/java/http/server-side/low-level-server-side-api.rst +++ b/akka-docs/rst/java/http/server-side/low-level-server-side-api.rst @@ -121,6 +121,7 @@ Streaming of HTTP message entities is supported through subclasses of ``HttpEnti to deal with streamed entities when receiving a request as well as, in many cases, when constructing responses. See :ref:`HttpEntity-java` for a description of the alternatives. +.. _http-closing-connection-low-level-java: Closing a connection ~~~~~~~~~~~~~~~~~~~~ diff --git a/akka-docs/rst/java/stream/index.rst b/akka-docs/rst/java/stream/index.rst index e1deaf2748..3bc5c32c61 100644 --- a/akka-docs/rst/java/stream/index.rst +++ b/akka-docs/rst/java/stream/index.rst @@ -13,6 +13,7 @@ Streams stream-graphs stream-composition stream-rate + stream-dynamic stream-customize stream-integrations stream-error diff --git a/akka-docs/rst/java/stream/stream-cookbook.rst b/akka-docs/rst/java/stream/stream-cookbook.rst index e757ae910f..1a673e2e78 100644 --- a/akka-docs/rst/java/stream/stream-cookbook.rst +++ b/akka-docs/rst/java/stream/stream-cookbook.rst @@ -177,7 +177,7 @@ Triggering the flow of elements programmatically In other words, even if the stream would be able to flow (not being backpressured) we want to hold back elements until a trigger signal arrives. -This recipe solves the problem by simply zipping the stream of ``Message`` elments with the stream of ``Trigger`` +This recipe solves the problem by simply zipping the stream of ``Message`` elements with the stream of ``Trigger`` signals. Since ``Zip`` produces pairs, we simply map the output stream selecting the first element of the pair. .. includecode:: ../code/docs/stream/javadsl/cookbook/RecipeManualTrigger.java#manually-triggered-stream @@ -227,7 +227,7 @@ a special ``reduce`` operation that collapses multiple upstream elements into on the speed of the upstream unaffected by the downstream. When the upstream is faster, the reducing process of the ``conflate`` starts. Our reducer function simply takes -the freshest element. This cin a simple dropping operation. +the freshest element. This is a simple dropping operation. .. includecode:: ../code/docs/stream/javadsl/cookbook/RecipeSimpleDrop.java#simple-drop diff --git a/akka-docs/rst/java/stream/stream-dynamic.rst b/akka-docs/rst/java/stream/stream-dynamic.rst new file mode 100644 index 0000000000..f90cbdcacb --- /dev/null +++ b/akka-docs/rst/java/stream/stream-dynamic.rst @@ -0,0 +1,63 @@ +.. _stream-dynamic-java: + +####################### +Dynamic stream handling +####################### + +.. _kill-switch-java: + +Controlling graph completion with KillSwitch +-------------------------------------------- + +A ``KillSwitch`` allows the completion of graphs of ``FlowShape`` from the outside. It consists of a flow element that +can be linked to a graph of ``FlowShape`` needing completion control. +The ``KillSwitch`` interface allows you to: + +* complete the graph(s) via ``shutdown()`` +* fail the graph(s) via ``abort(Throwable error)`` + +After the first call to either ``shutdown`` or ``abort``, all subsequent calls to any of these methods will be ignored. +Graph completion is performed by both + +* completing its downstream +* cancelling (in case of ``shutdown``) or failing (in case of ``abort``) its upstream. + +A ``KillSwitch`` can control the completion of one or multiple streams, and therefore comes in two different flavours. + +.. _unique-kill-switch-java: + +UniqueKillSwitch +^^^^^^^^^^^^^^^^ + +``UniqueKillSwitch`` allows you to control the completion of **one** materialized ``Graph`` of ``FlowShape``. Refer to the +below for usage examples. + +* **Shutdown** + +.. includecode:: ../code/docs/stream/KillSwitchDocTest.java#unique-shutdown + +* **Abort** + +.. includecode:: ../code/docs/stream/KillSwitchDocTest.java#unique-abort +
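For readers without ``KillSwitchDocTest`` at hand, a self-contained sketch of the shutdown case follows (assuming ``KillSwitches.single()`` and the javadsl combinators shown; details may vary by version)::

    import akka.actor.ActorSystem;
    import akka.stream.ActorMaterializer;
    import akka.stream.KillSwitches;
    import akka.stream.Materializer;
    import akka.stream.UniqueKillSwitch;
    import akka.stream.javadsl.Keep;
    import akka.stream.javadsl.Sink;
    import akka.stream.javadsl.Source;

    public class KillSwitchExample {
      public static void main(String[] args) throws Exception {
        final ActorSystem system = ActorSystem.create("kill-switch-demo");
        final Materializer mat = ActorMaterializer.create(system);

        // Link an otherwise infinite source to a kill switch; Keep.right()
        // makes the switch the materialized value returned by run().
        final UniqueKillSwitch killSwitch =
          Source.repeat("tick")
            .viaMat(KillSwitches.<String>single(), Keep.right())
            .to(Sink.foreach(System.out::println))
            .run(mat);

        Thread.sleep(100);
        killSwitch.shutdown(); // completes the stream from the outside
        system.terminate();
      }
    }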
+.. _shared-kill-switch-java: + +SharedKillSwitch +^^^^^^^^^^^^^^^^ + +A ``SharedKillSwitch`` allows you to control the completion of an arbitrary number of graphs of ``FlowShape``. It can be +materialized multiple times via its ``flow`` method, and all materialized graphs linked to it are controlled by the switch. +Refer to the below for usage examples. + +* **Shutdown** + +.. includecode:: ../code/docs/stream/KillSwitchDocTest.java#shared-shutdown + +* **Abort** + +.. includecode:: ../code/docs/stream/KillSwitchDocTest.java#shared-abort + +.. note:: + A ``UniqueKillSwitch`` is always a result of a materialization, whilst ``SharedKillSwitch`` needs to be constructed + before any materialization takes place. + diff --git a/akka-docs/rst/java/stream/stream-introduction.rst b/akka-docs/rst/java/stream/stream-introduction.rst index 6605e2668a..f403ffee98 100644 --- a/akka-docs/rst/java/stream/stream-introduction.rst +++ b/akka-docs/rst/java/stream/stream-introduction.rst @@ -7,7 +7,7 @@ Introduction Motivation ========== -The way we consume services from the internet today includes many instances of +The way we consume services from the Internet today includes many instances of streaming data, both downloading from a service as well as uploading to it or peer-to-peer data transfers. Regarding data as a stream of elements instead of in its entirety is very useful because it matches the way computers send and diff --git a/akka-docs/rst/java/stream/stream-quickstart.rst b/akka-docs/rst/java/stream/stream-quickstart.rst index fef64e9c55..12d9016502 100644 --- a/akka-docs/rst/java/stream/stream-quickstart.rst +++ b/akka-docs/rst/java/stream/stream-quickstart.rst @@ -1,333 +1,337 @@ -.. _stream-quickstart-java: - -Quick Start Guide -================= - -A stream usually begins at a source, so this is also how we start an Akka -Stream. Before we create one, we import the full complement of streaming tools: - -.. includecode:: ../code/docs/stream/QuickStartDocTest.java#imports - -Now we will start with a rather simple source, emitting the integers 1 to 100: - -.. includecode:: ../code/docs/stream/QuickStartDocTest.java#create-source - -The :class:`Source` type is parameterized with two types: the first one is the -type of element that this source emits and the second one may signal that -running the source produces some auxiliary value (e.g. a network source may -provide information about the bound port or the peer’s address). Where no -auxiliary information is produced, the type ``akka.NotUsed`` is used—and a -simple range of integers surely falls into this category. - -Having created this source means that we have a description of how to emit the -first 100 natural numbers, but this source is not yet active. In order to get -those numbers out we have to run it: - -.. includecode:: ../code/docs/stream/QuickStartDocTest.java#run-source - -This line will complement the source with a consumer function—in this example -we simply print out the numbers to the console—and pass this little stream -setup to an Actor that runs it. This activation is signaled by having “run” be -part of the method name; there are other methods that run Akka Streams, and -they all follow this pattern. - -You may wonder where the Actor gets created that runs the stream, and you are -probably also asking yourself what this ``materializer`` means. In order to get -this value we first need to create an Actor system: - -.. 
includecode:: ../code/docs/stream/QuickStartDocTest.java#create-materializer - -There are other ways to create a materializer, e.g. from an -:class:`ActorContext` when using streams from within Actors. The -:class:`Materializer` is a factory for stream execution engines, it is the -thing that makes streams run—you don’t need to worry about any of the details -just now apart from that you need one for calling any of the ``run`` methods on -a :class:`Source`. - -The nice thing about Akka Streams is that the :class:`Source` is just a -description of what you want to run, and like an architect’s blueprint it can -be reused, incorporated into a larger design. We may choose to transform the -source of integers and write it to a file instead: - -.. includecode:: ../code/docs/stream/QuickStartDocTest.java#transform-source - -First we use the ``scan`` combinator to run a computation over the whole -stream: starting with the number 1 (``BigInteger.ONE``) we multiple by each of -the incoming numbers, one after the other; the scan operationemits the initial -value and then every calculation result. This yields the series of factorial -numbers which we stash away as a :class:`Source` for later reuse—it is -important to keep in mind that nothing is actually computed yet, this is just a -description of what we want to have computed once we run the stream. Then we -convert the resulting series of numbers into a stream of :class:`ByteString` -objects describing lines in a text file. This stream is then run by attaching a -file as the receiver of the data. In the terminology of Akka Streams this is -called a :class:`Sink`. :class:`IOResult` is a type that IO operations return -in Akka Streams in order to tell you how many bytes or elements were consumed -and whether the stream terminated normally or exceptionally. - -Reusable Pieces ---------------- - -One of the nice parts of Akka Streams—and something that other stream libraries -do not offer—is that not only sources can be reused like blueprints, all other -elements can be as well. We can take the file-writing :class:`Sink`, prepend -the processing steps necessary to get the :class:`ByteString` elements from -incoming strings and package that up as a reusable piece as well. Since the -language for writing these streams always flows from left to right (just like -plain English), we need a starting point that is like a source but with an -“open” input. In Akka Streams this is called a :class:`Flow`: - -.. includecode:: ../code/docs/stream/QuickStartDocTest.java#transform-sink - -Starting from a flow of strings we convert each to :class:`ByteString` and then -feed to the already known file-writing :class:`Sink`. The resulting blueprint -is a :class:`Sink>`, which means that it -accepts strings as its input and when materialized it will create auxiliary -information of type ``CompletionStage`` (when chaining operations on -a :class:`Source` or :class:`Flow` the type of the auxiliary information—called -the “materialized value”—is given by the leftmost starting point; since we want -to retain what the ``FileIO.toFile`` sink has to offer, we need to say -``Keep.right()``). - -We can use the new and shiny :class:`Sink` we just created by -attaching it to our ``factorials`` source—after a small adaptation to turn the -numbers into strings: - -.. 
includecode:: ../code/docs/stream/QuickStartDocTest.java#use-transformed-sink - -Time-Based Processing ---------------------- - -Before we start looking at a more involved example we explore the streaming -nature of what Akka Streams can do. Starting from the ``factorials`` source -we transform the stream by zipping it together with another stream, -represented by a :class:`Source` that emits the number 0 to 100: the first -number emitted by the ``factorials`` source is the factorial of zero, the -second is the factorial of one, and so on. We combine these two by forming -strings like ``"3! = 6"``. - -.. includecode:: ../code/docs/stream/QuickStartDocTest.java#add-streams - -All operations so far have been time-independent and could have been performed -in the same fashion on strict collections of elements. The next line -demonstrates that we are in fact dealing with streams that can flow at a -certain speed: we use the ``throttle`` combinator to slow down the stream to 1 -element per second (the second ``1`` in the argument list is the maximum size -of a burst that we want to allow—passing ``1`` means that the first element -gets through immediately and the second then has to wait for one second and so -on). - -If you run this program you will see one line printed per second. One aspect -that is not immediately visible deserves mention, though: if you try and set -the streams to produce a billion numbers each then you will notice that your -JVM does not crash with an OutOfMemoryError, even though you will also notice -that running the streams happens in the background, asynchronously (this is the -reason for the auxiliary information to be provided as a -:class:`CompletionStage`, in the future). The secret that makes this work is -that Akka Streams implicitly implement pervasive flow control, all combinators -respect back-pressure. This allows the throttle combinator to signal to all its -upstream sources of data that it can only accept elements at a certain -rate—when the incoming rate is higher than one per second the throttle -combinator will assert *back-pressure* upstream. - -This is basically all there is to Akka Streams in a nutshell—glossing over the -fact that there are dozens of sources and sinks and many more stream -transformation combinators to choose from, see also :ref:`stages-overview_java`. - -Reactive Tweets -=============== - -A typical use case for stream processing is consuming a live stream of data that we want to extract or aggregate some -other data from. In this example we'll consider consuming a stream of tweets and extracting information concerning Akka from them. - -We will also consider the problem inherent to all non-blocking streaming -solutions: *"What if the subscriber is too slow to consume the live stream of -data?"*. Traditionally the solution is often to buffer the elements, but this -can—and usually will—cause eventual buffer overflows and instability of such -systems. Instead Akka Streams depend on internal backpressure signals that -allow to control what should happen in such scenarios. - -Here's the data model we'll be working with throughout the quickstart examples: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#model - - -.. 
note:: - If you would like to get an overview of the used vocabulary first instead of diving head-first - into an actual example you can have a look at the :ref:`core-concepts-java` and :ref:`defining-and-running-streams-java` - sections of the docs, and then come back to this quickstart to see it all pieced together into a simple example application. - -Transforming and consuming simple streams ------------------------------------------ -The example application we will be looking at is a simple Twitter feed stream from which we'll want to extract certain information, -like for example finding all twitter handles of users who tweet about ``#akka``. - -In order to prepare our environment by creating an :class:`ActorSystem` and :class:`ActorMaterializer`, -which will be responsible for materializing and running the streams we are about to create: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#materializer-setup - -The :class:`ActorMaterializer` can optionally take :class:`ActorMaterializerSettings` which can be used to define -materialization properties, such as default buffer sizes (see also :ref:`async-stream-buffers-java`), the dispatcher to -be used by the pipeline etc. These can be overridden with ``withAttributes`` on :class:`Flow`, :class:`Source`, :class:`Sink` and :class:`Graph`. - -Let's assume we have a stream of tweets readily available. In Akka this is expressed as a :class:`Source`: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#tweet-source - -Streams always start flowing from a ``Source`` then can continue through ``Flow`` elements or -more advanced graph elements to finally be consumed by a ``Sink``. - -The first type parameter—:class:`Tweet` in this case—designates the kind of elements produced -by the source while the ``M`` type parameters describe the object that is created during -materialization (:ref:`see below `)—:class:`BoxedUnit` (from the ``scala.runtime`` -package) means that no value is produced, it is the generic equivalent of ``void``. - -The operations should look familiar to anyone who has used the Scala Collections library, -however they operate on streams and not collections of data (which is a very important distinction, as some operations -only make sense in streaming and vice versa): - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#authors-filter-map - -Finally in order to :ref:`materialize ` and run the stream computation we need to attach -the Flow to a ``Sink`` that will get the Flow running. The simplest way to do this is to call -``runWith(sink)`` on a ``Source``. For convenience a number of common Sinks are predefined and collected as static methods on -the `Sink class `_. -For now let's simply print each author: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#authors-foreachsink-println - -or by using the shorthand version (which are defined only for the most popular Sinks such as :class:`Sink.fold` and :class:`Sink.foreach`): - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#authors-foreach-println - -Materializing and running a stream always requires a :class:`Materializer` to be passed in explicitly, -like this: ``.run(mat)``. - -The complete snippet looks like this: - -.. 
includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#first-sample - -Flattening sequences in streams -------------------------------- -In the previous section we were working on 1:1 relationships of elements which is the most common case, but sometimes -we might want to map from one element to a number of elements and receive a "flattened" stream, similarly like ``flatMap`` -works on Scala Collections. In order to get a flattened stream of hashtags from our stream of tweets we can use the ``mapConcat`` -combinator: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#hashtags-mapConcat - -.. note:: - The name ``flatMap`` was consciously avoided due to its proximity with for-comprehensions and monadic composition. - It is problematic for two reasons: firstly, flattening by concatenation is often undesirable in bounded stream processing - due to the risk of deadlock (with merge being the preferred strategy), and secondly, the monad laws would not hold for - our implementation of flatMap (due to the liveness issues). - - Please note that the ``mapConcat`` requires the supplied function to return a strict collection (``Out f -> java.util.List``), - whereas ``flatMap`` would have to operate on streams all the way through. - - -Broadcasting a stream ---------------------- -Now let's say we want to persist all hashtags, as well as all author names from this one live stream. -For example we'd like to write all author handles into one file, and all hashtags into another file on disk. -This means we have to split the source stream into two streams which will handle the writing to these different files. - -Elements that can be used to form such "fan-out" (or "fan-in") structures are referred to as "junctions" in Akka Streams. -One of these that we'll be using in this example is called :class:`Broadcast`, and it simply emits elements from its -input port to all of its output ports. - -Akka Streams intentionally separate the linear stream structures (Flows) from the non-linear, branching ones (Graphs) -in order to offer the most convenient API for both of these cases. Graphs can express arbitrarily complex stream setups -at the expense of not reading as familiarly as collection transformations. - -Graphs are constructed using :class:`GraphDSL` like this: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#graph-dsl-broadcast - -As you can see, we use graph builder ``b`` to construct the graph using ``UniformFanOutShape`` and ``Flow`` s. - -``GraphDSL.create`` returns a :class:`Graph`, in this example a ``Graph`` where -:class:`ClosedShape` means that it is *a fully connected graph* or "closed" - there are no unconnected inputs or outputs. -Since it is closed it is possible to transform the graph into a :class:`RunnableGraph` using ``RunnableGraph.fromGraph``. -The runnable graph can then be ``run()`` to materialize a stream out of it. - -Both :class:`Graph` and :class:`RunnableGraph` are *immutable, thread-safe, and freely shareable*. - -A graph can also have one of several other shapes, with one or more unconnected ports. Having unconnected ports -expresses a graph that is a *partial graph*. Concepts around composing and nesting graphs in large structures are -explained in detail in :ref:`composition-java`. It is also possible to wrap complex computation graphs -as Flows, Sinks or Sources, which will be explained in detail in :ref:`partial-graph-dsl-java`. 
- - -Back-pressure in action ------------------------ - -One of the main advantages of Akka Streams is that they *always* propagate back-pressure information from stream Sinks -(Subscribers) to their Sources (Publishers). It is not an optional feature, and is enabled at all times. To learn more -about the back-pressure protocol used by Akka Streams and all other Reactive Streams compatible implementations read -:ref:`back-pressure-explained-java`. - -A typical problem applications (not using Akka Streams) like this often face is that they are unable to process the incoming data fast enough, -either temporarily or by design, and will start buffering incoming data until there's no more space to buffer, resulting -in either ``OutOfMemoryError`` s or other severe degradations of service responsiveness. With Akka Streams buffering can -and must be handled explicitly. For example, if we are only interested in the "*most recent tweets, with a buffer of 10 -elements*" this can be expressed using the ``buffer`` element: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#tweets-slow-consumption-dropHead - -The ``buffer`` element takes an explicit and required ``OverflowStrategy``, which defines how the buffer should react -when it receives another element while it is full. Strategies provided include dropping the oldest element (``dropHead``), -dropping the entire buffer, signalling failures etc. Be sure to pick and choose the strategy that fits your use case best. - -.. _materialized-values-quick-java: - -Materialized values -------------------- -So far we've been only processing data using Flows and consuming it into some kind of external Sink - be it by printing -values or storing them in some external system. However sometimes we may be interested in some value that can be -obtained from the materialized processing pipeline. For example, we want to know how many tweets we have processed. -While this question is not as obvious to give an answer to in case of an infinite stream of tweets (one way to answer -this question in a streaming setting would be to create a stream of counts described as "*up until now*, we've processed N tweets"), -but in general it is possible to deal with finite streams and come up with a nice result such as a total count of elements. - -First, let's write such an element counter using ``Flow.of(Class)`` and ``Sink.fold`` to see how the types look like: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#tweets-fold-count - -First we prepare a reusable ``Flow`` that will change each incoming tweet into an integer of value ``1``. We'll use this in -order to combine those with a ``Sink.fold`` that will sum all ``Integer`` elements of the stream and make its result available as -a ``CompletionStage``. Next we connect the ``tweets`` stream to ``count`` with ``via``. Finally we connect the Flow to the previously -prepared Sink using ``toMat``. - -Remember those mysterious ``Mat`` type parameters on ``Source``, ``Flow`` and ``Sink``? -They represent the type of values these processing parts return when materialized. When you chain these together, -you can explicitly combine their materialized values: in our example we used the ``Keep.right`` predefined function, -which tells the implementation to only care about the materialized type of the stage currently appended to the right. 
-The materialized type of ``sumSink`` is ``CompletionStage`` and because of using ``Keep.right``, the resulting :class:`RunnableGraph` -has also a type parameter of ``CompletionStage``. - -This step does *not* yet materialize the -processing pipeline, it merely prepares the description of the Flow, which is now connected to a Sink, and therefore can -be ``run()``, as indicated by its type: ``RunnableGraph>``. Next we call ``run()`` which uses the :class:`ActorMaterializer` -to materialize and run the Flow. The value returned by calling ``run()`` on a ``RunnableGraph`` is of type ``T``. -In our case this type is ``CompletionStage`` which, when completed, will contain the total length of our tweets stream. -In case of the stream failing, this future would complete with a Failure. - -A :class:`RunnableGraph` may be reused -and materialized multiple times, because it is just the "blueprint" of the stream. This means that if we materialize a stream, -for example one that consumes a live stream of tweets within a minute, the materialized values for those two materializations -will be different, as illustrated by this example: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#tweets-runnable-flow-materialized-twice - -Many elements in Akka Streams provide materialized values which can be used for obtaining either results of computation or -steering these elements which will be discussed in detail in :ref:`stream-materialization-java`. Summing up this section, now we know -what happens behind the scenes when we run this one-liner, which is equivalent to the multi line version above: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#tweets-fold-count-oneline - -.. note:: - ``runWith()`` is a convenience method that automatically ignores the materialized value of any other stages except - those appended by the ``runWith()`` itself. In the above example it translates to using ``Keep.right`` as the combiner - for materialized values. +.. _stream-quickstart-java: + +Quick Start Guide +================= + +A stream usually begins at a source, so this is also how we start an Akka +Stream. Before we create one, we import the full complement of streaming tools: + +.. includecode:: ../code/docs/stream/QuickStartDocTest.java#stream-imports + +If you want to execute the code samples while you read through the quick start guide, you will also need the following imports: + +.. includecode:: ../code/docs/stream/QuickStartDocTest.java#other-imports + +Now we will start with a rather simple source, emitting the integers 1 to 100: + +.. includecode:: ../code/docs/stream/QuickStartDocTest.java#create-source + +The :class:`Source` type is parameterized with two types: the first one is the +type of element that this source emits and the second one may signal that +running the source produces some auxiliary value (e.g. a network source may +provide information about the bound port or the peer’s address). Where no +auxiliary information is produced, the type ``akka.NotUsed`` is used—and a +simple range of integers surely falls into this category. + +Having created this source means that we have a description of how to emit the +first 100 natural numbers, but this source is not yet active. In order to get +those numbers out we have to run it: + +.. 
includecode:: ../code/docs/stream/QuickStartDocTest.java#run-source
+
+This line will complement the source with a consumer function—in this example
+we simply print out the numbers to the console—and pass this little stream
+setup to an Actor that runs it. This activation is signaled by having “run” be
+part of the method name; there are other methods that run Akka Streams, and
+they all follow this pattern.
+
+You may wonder where the Actor gets created that runs the stream, and you are
+probably also asking yourself what this ``materializer`` means. In order to get
+this value we first need to create an Actor system:
+
+.. includecode:: ../code/docs/stream/QuickStartDocTest.java#create-materializer
+
+There are other ways to create a materializer, e.g. from an
+:class:`ActorContext` when using streams from within Actors. The
+:class:`Materializer` is a factory for stream execution engines; it is the
+thing that makes streams run—you don’t need to worry about any of the details
+just now apart from the fact that you need one for calling any of the ``run``
+methods on a :class:`Source`.
+
+The nice thing about Akka Streams is that the :class:`Source` is just a
+description of what you want to run, and like an architect’s blueprint it can
+be reused, incorporated into a larger design. We may choose to transform the
+source of integers and write it to a file instead:
+
+.. includecode:: ../code/docs/stream/QuickStartDocTest.java#transform-source
+
+First we use the ``scan`` combinator to run a computation over the whole
+stream: starting with the number 1 (``BigInteger.ONE``) we multiply by each of
+the incoming numbers, one after the other; the scan operation emits the initial
+value and then every calculation result. This yields the series of factorial
+numbers which we stash away as a :class:`Source` for later reuse—it is
+important to keep in mind that nothing is actually computed yet, this is just a
+description of what we want to have computed once we run the stream. Then we
+convert the resulting series of numbers into a stream of :class:`ByteString`
+objects describing lines in a text file. This stream is then run by attaching a
+file as the receiver of the data. In the terminology of Akka Streams this is
+called a :class:`Sink`. :class:`IOResult` is a type that IO operations return
+in Akka Streams in order to tell you how many bytes or elements were consumed
+and whether the stream terminated normally or exceptionally.
+
+Reusable Pieces
+---------------
+
+One of the nice parts of Akka Streams—and something that other stream libraries
+do not offer—is that not only sources can be reused like blueprints, all other
+elements can be as well. We can take the file-writing :class:`Sink`, prepend
+the processing steps necessary to get the :class:`ByteString` elements from
+incoming strings and package that up as a reusable piece as well. Since the
+language for writing these streams always flows from left to right (just like
+plain English), we need a starting point that is like a source but with an
+“open” input. In Akka Streams this is called a :class:`Flow`:
+
+.. includecode:: ../code/docs/stream/QuickStartDocTest.java#transform-sink
+
+Starting from a flow of strings we convert each to :class:`ByteString` and then
+feed to the already known file-writing :class:`Sink`.
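+
+The snippet itself lives in the referenced test file; purely as an
+illustration, such a reusable piece might look roughly like the following
+sketch (the helper name ``lineSink`` is our own invention, and the usual
+``akka.stream.javadsl`` imports are assumed)::
+
+    // a hypothetical sketch, not the actual #transform-sink snippet:
+    // prepend a String-to-ByteString conversion to the file sink and keep
+    // the sink's materialized value (the CompletionStage<IOResult>)
+    public Sink<String, CompletionStage<IOResult>> lineSink(String filename) {
+      return Flow.of(String.class)
+        .map(s -> ByteString.fromString(s + "\n")) // one line of text per element
+        .toMat(FileIO.toFile(new File(filename)), Keep.right());
+    }
+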
+The resulting blueprint is a :class:`Sink<String, CompletionStage<IOResult>>`,
+which means that it
+accepts strings as its input and when materialized it will create auxiliary
+information of type ``CompletionStage<IOResult>`` (when chaining operations on
+a :class:`Source` or :class:`Flow` the type of the auxiliary information—called
+the “materialized value”—is given by the leftmost starting point; since we want
+to retain what the ``FileIO.toFile`` sink has to offer, we need to say
+``Keep.right()``).
+
+We can use the new and shiny :class:`Sink` we just created by
+attaching it to our ``factorials`` source—after a small adaptation to turn the
+numbers into strings:
+
+.. includecode:: ../code/docs/stream/QuickStartDocTest.java#use-transformed-sink
+
+Time-Based Processing
+---------------------
+
+Before we start looking at a more involved example we explore the streaming
+nature of what Akka Streams can do. Starting from the ``factorials`` source
+we transform the stream by zipping it together with another stream,
+represented by a :class:`Source` that emits the numbers 0 to 100: the first
+number emitted by the ``factorials`` source is the factorial of zero, the
+second is the factorial of one, and so on. We combine these two by forming
+strings like ``"3! = 6"``.
+
+.. includecode:: ../code/docs/stream/QuickStartDocTest.java#add-streams
+
+All operations so far have been time-independent and could have been performed
+in the same fashion on strict collections of elements. The next line
+demonstrates that we are in fact dealing with streams that can flow at a
+certain speed: we use the ``throttle`` combinator to slow down the stream to 1
+element per second (the second ``1`` in the argument list is the maximum size
+of a burst that we want to allow—passing ``1`` means that the first element
+gets through immediately and the second then has to wait for one second and so
+on).
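+
+The actual snippet is kept in the referenced test file; as an illustration
+only, the zip-and-throttle step described above might be written roughly like
+this (``factorials`` and ``materializer`` are assumed from the earlier steps,
+and imports are elided)::
+
+    // a hypothetical sketch, not the actual #add-streams snippet:
+    factorials
+      .zipWith(Source.range(0, 100),
+        (num, idx) -> String.format("%d! = %s", idx, num))
+      .throttle(1, FiniteDuration.create(1, TimeUnit.SECONDS), 1,
+        ThrottleMode.shaping())                    // at most 1 element per second
+      .runForeach(System.out::println, materializer);
+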
+If you run this program you will see one line printed per second. One aspect
+that is not immediately visible deserves mention, though: if you try and set
+the streams to produce a billion numbers each then you will notice that your
+JVM does not crash with an OutOfMemoryError, even though you will also notice
+that running the streams happens in the background, asynchronously (this is the
+reason for the auxiliary information to be provided as a
+:class:`CompletionStage`, in the future). The secret that makes this work is
+that Akka Streams implicitly implement pervasive flow control, all combinators
+respect back-pressure. This allows the throttle combinator to signal to all its
+upstream sources of data that it can only accept elements at a certain
+rate—when the incoming rate is higher than one per second the throttle
+combinator will assert *back-pressure* upstream.
+
+This is basically all there is to Akka Streams in a nutshell—glossing over the
+fact that there are dozens of sources and sinks and many more stream
+transformation combinators to choose from; see also :ref:`stages-overview_java`.
+
+Reactive Tweets
+===============
+
+A typical use case for stream processing is consuming a live stream of data that we want to extract or aggregate some
+other data from. In this example we'll consider consuming a stream of tweets and extracting information concerning Akka from them.
+
+We will also consider the problem inherent to all non-blocking streaming
+solutions: *"What if the subscriber is too slow to consume the live stream of
+data?"*. Traditionally the solution is often to buffer the elements, but this
+can—and usually will—cause eventual buffer overflows and instability of such
+systems. Instead Akka Streams depend on internal backpressure signals that
+allow control of what should happen in such scenarios.
+
+Here's the data model we'll be working with throughout the quickstart examples:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#model
+
+
+.. note::
+  If you would like to get an overview of the used vocabulary first instead of diving head-first
+  into an actual example you can have a look at the :ref:`core-concepts-java` and :ref:`defining-and-running-streams-java`
+  sections of the docs, and then come back to this quickstart to see it all pieced together into a simple example application.
+
+Transforming and consuming simple streams
+-----------------------------------------
+The example application we will be looking at is a simple Twitter feed stream from which we'll want to extract certain information,
+like for example finding all twitter handles of users who tweet about ``#akka``.
+
+We first prepare our environment by creating an :class:`ActorSystem` and :class:`ActorMaterializer`,
+which will be responsible for materializing and running the streams we are about to create:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#materializer-setup
+
+The :class:`ActorMaterializer` can optionally take :class:`ActorMaterializerSettings` which can be used to define
+materialization properties, such as default buffer sizes (see also :ref:`async-stream-buffers-java`), the dispatcher to
+be used by the pipeline etc. These can be overridden with ``withAttributes`` on :class:`Flow`, :class:`Source`, :class:`Sink` and :class:`Graph`.
+
+Let's assume we have a stream of tweets readily available. In Akka this is expressed as a :class:`Source<Tweet, BoxedUnit>`:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#tweet-source
+
+Streams always start flowing from a ``Source`` then can continue through ``Flow`` elements or
+more advanced graph elements to finally be consumed by a ``Sink``.
+
+The first type parameter—:class:`Tweet` in this case—designates the kind of elements produced
+by the source while the ``M`` type parameter describes the object that is created during
+materialization (:ref:`see below <materialized-values-quick-java>`)—:class:`BoxedUnit` (from the ``scala.runtime``
+package) means that no value is produced, it is the generic equivalent of ``void``.
+
+The operations should look familiar to anyone who has used the Scala Collections library,
+however they operate on streams and not collections of data (which is a very important distinction, as some operations
+only make sense in streaming and vice versa):
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#authors-filter-map
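+
+For illustration only, the step referenced above might look roughly like this
+(``tweets`` and the ``Tweet``/``Author`` model come from the snippets above;
+``AKKA`` is assumed to be a ``Hashtag`` constant)::
+
+    // a hypothetical sketch, not the actual #authors-filter-map snippet:
+    final Source<Author, BoxedUnit> authors =
+      tweets
+        .filter(t -> t.hashtags().contains(AKKA)) // keep tweets about #akka
+        .map(t -> t.author);                      // extract the author handle
+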
+Finally, in order to :ref:`materialize <stream-materialization-java>` and run the stream computation we need to attach
+the Flow to a ``Sink`` that will get the Flow running. The simplest way to do this is to call
+``runWith(sink)`` on a ``Source``. For convenience a number of common Sinks are predefined and collected as static methods on
+the `Sink class `_.
+For now let's simply print each author:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#authors-foreachsink-println
+
+or by using the shorthand version (which is defined only for the most popular Sinks such as :class:`Sink.fold` and :class:`Sink.foreach`):
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#authors-foreach-println
+
+Materializing and running a stream always requires a :class:`Materializer` to be passed in explicitly,
+like this: ``.run(mat)``.
+
+The complete snippet looks like this:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#first-sample
+
+Flattening sequences in streams
+-------------------------------
+In the previous section we were working on 1:1 relationships of elements which is the most common case, but sometimes
+we might want to map from one element to a number of elements and receive a "flattened" stream, similar to how ``flatMap``
+works on Scala Collections. In order to get a flattened stream of hashtags from our stream of tweets we can use the ``mapConcat``
+combinator:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#hashtags-mapConcat
+
+.. note::
+  The name ``flatMap`` was consciously avoided due to its proximity with for-comprehensions and monadic composition.
+  It is problematic for two reasons: firstly, flattening by concatenation is often undesirable in bounded stream processing
+  due to the risk of deadlock (with merge being the preferred strategy), and secondly, the monad laws would not hold for
+  our implementation of flatMap (due to the liveness issues).
+
+  Please note that ``mapConcat`` requires the supplied function to return a strict collection (``Out f -> java.util.List<T>``),
+  whereas ``flatMap`` would have to operate on streams all the way through.
+
+
+Broadcasting a stream
+---------------------
+Now let's say we want to persist all hashtags, as well as all author names from this one live stream.
+For example we'd like to write all author handles into one file, and all hashtags into another file on disk.
+This means we have to split the source stream into two streams which will handle the writing to these different files.
+
+Elements that can be used to form such "fan-out" (or "fan-in") structures are referred to as "junctions" in Akka Streams.
+One of these that we'll be using in this example is called :class:`Broadcast`, and it simply emits elements from its
+input port to all of its output ports.
+
+Akka Streams intentionally separate the linear stream structures (Flows) from the non-linear, branching ones (Graphs)
+in order to offer the most convenient API for both of these cases. Graphs can express arbitrarily complex stream setups
+at the expense of not reading as familiarly as collection transformations.
+
+Graphs are constructed using :class:`GraphDSL` like this:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#graph-dsl-broadcast
+
+As you can see, we use graph builder ``b`` to construct the graph using ``UniformFanOutShape`` and ``Flow`` s.
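+
+As an illustration only (not the actual #graph-dsl-broadcast snippet), such a
+graph could be put together roughly as follows; the method name
+``splitIntoFiles`` and the sink parameters are our own assumptions::
+
+    // a hypothetical sketch of a closed graph broadcasting tweets to two sinks
+    static RunnableGraph<NotUsed> splitIntoFiles(
+        Source<Tweet, ?> tweets,
+        Sink<Author, ?> writeAuthors,
+        Sink<Hashtag, ?> writeHashtags) {
+      return RunnableGraph.fromGraph(GraphDSL.create(b -> {
+        final UniformFanOutShape<Tweet, Tweet> bcast = b.add(Broadcast.create(2));
+        final FlowShape<Tweet, Author> toAuthor =
+          b.add(Flow.of(Tweet.class).map(t -> t.author));
+        final FlowShape<Tweet, Hashtag> toTags =
+          b.add(Flow.of(Tweet.class).mapConcat(t -> new ArrayList<Hashtag>(t.hashtags())));
+        b.from(b.add(tweets)).viaFanOut(bcast).via(toAuthor).to(b.add(writeAuthors));
+        b.from(bcast).via(toTags).to(b.add(writeHashtags));
+        return ClosedShape.getInstance();
+      }));
+    }
+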
+``GraphDSL.create`` returns a :class:`Graph`, in this example a ``Graph<ClosedShape, NotUsed>`` where
+:class:`ClosedShape` means that it is *a fully connected graph* or "closed" - there are no unconnected inputs or outputs.
+Since it is closed it is possible to transform the graph into a :class:`RunnableGraph` using ``RunnableGraph.fromGraph``.
+The runnable graph can then be ``run()`` to materialize a stream out of it.
+
+Both :class:`Graph` and :class:`RunnableGraph` are *immutable, thread-safe, and freely shareable*.
+
+A graph can also have one of several other shapes, with one or more unconnected ports. Having unconnected ports
+expresses a graph that is a *partial graph*. Concepts around composing and nesting graphs in large structures are
+explained in detail in :ref:`composition-java`. It is also possible to wrap complex computation graphs
+as Flows, Sinks or Sources, which will be explained in detail in :ref:`partial-graph-dsl-java`.
+
+
+Back-pressure in action
+-----------------------
+
+One of the main advantages of Akka Streams is that they *always* propagate back-pressure information from stream Sinks
+(Subscribers) to their Sources (Publishers). It is not an optional feature, and is enabled at all times. To learn more
+about the back-pressure protocol used by Akka Streams and all other Reactive Streams compatible implementations read
+:ref:`back-pressure-explained-java`.
+
+A typical problem that applications not using Akka Streams often face is that they are unable to process the incoming data fast enough,
+either temporarily or by design, and will start buffering incoming data until there's no more space to buffer, resulting
+in either ``OutOfMemoryError`` s or other severe degradations of service responsiveness. With Akka Streams buffering can
+and must be handled explicitly. For example, if we are only interested in the "*most recent tweets, with a buffer of 10
+elements*" this can be expressed using the ``buffer`` element:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#tweets-slow-consumption-dropHead
+
+The ``buffer`` element takes an explicit and required ``OverflowStrategy``, which defines how the buffer should react
+when it receives another element while it is full. Strategies provided include dropping the oldest element (``dropHead``),
+dropping the entire buffer, signalling failures etc. Be sure to pick the strategy that fits your use case best.
+
+.. _materialized-values-quick-java:
+
+Materialized values
+-------------------
+So far we've only been processing data using Flows and consuming it into some kind of external Sink - be it by printing
+values or storing them in some external system. However sometimes we may be interested in some value that can be
+obtained from the materialized processing pipeline. For example, we want to know how many tweets we have processed.
+This question is not as obvious to answer in the case of an infinite stream of tweets (one way to answer
+it in a streaming setting would be to create a stream of counts described as "*up until now*, we've processed N tweets"),
+but in general it is possible to deal with finite streams and come up with a nice result such as a total count of elements.
+
+First, let's write such an element counter using ``Flow.of(Class)`` and ``Sink.fold`` to see what the types look like:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#tweets-fold-count
+
+First we prepare a reusable ``Flow`` that will change each incoming tweet into an integer of value ``1``. We'll use this in
+order to combine those with a ``Sink.fold`` that will sum all ``Integer`` elements of the stream and make its result available as
+a ``CompletionStage<Integer>``. Next we connect the ``tweets`` stream to ``count`` with ``via``. Finally we connect the Flow to the previously
+prepared Sink using ``toMat``.
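+
+As a simplified illustration (not the actual #tweets-fold-count snippet, which
+additionally declares the counting step as a reusable ``Flow.of(Tweet.class)``),
+the essence of the pipeline could look like this, with ``tweets`` and ``mat``
+assumed from above::
+
+    // a hypothetical sketch of the counting pipeline described above:
+    final Sink<Integer, CompletionStage<Integer>> sumSink =
+      Sink.<Integer, Integer>fold(0, (acc, elem) -> acc + elem);
+    final RunnableGraph<CompletionStage<Integer>> counterGraph =
+      tweets
+        .map(t -> 1)                     // each tweet becomes the number 1
+        .toMat(sumSink, Keep.right());   // keep the sink's materialized value
+    final CompletionStage<Integer> sum = counterGraph.run(mat);
+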
+Remember those mysterious ``Mat`` type parameters on ``Source``, ``Flow`` and ``Sink``?
+They represent the type of values these processing parts return when materialized. When you chain these together,
+you can explicitly combine their materialized values: in our example we used the ``Keep.right`` predefined function,
+which tells the implementation to only care about the materialized type of the stage currently appended to the right.
+The materialized type of ``sumSink`` is ``CompletionStage<Integer>`` and because of using ``Keep.right``, the resulting :class:`RunnableGraph`
+also has a type parameter of ``CompletionStage<Integer>``.
+
+This step does *not* yet materialize the
+processing pipeline, it merely prepares the description of the Flow, which is now connected to a Sink, and therefore can
+be ``run()``, as indicated by its type: ``RunnableGraph<CompletionStage<Integer>>``. Next we call ``run()`` which uses the :class:`ActorMaterializer`
+to materialize and run the Flow. The value returned by calling ``run()`` on a ``RunnableGraph<T>`` is of type ``T``.
+In our case this type is ``CompletionStage<Integer>`` which, when completed, will contain the total length of our tweets stream.
+In case of the stream failing, this future would complete with a Failure.
+
+A :class:`RunnableGraph` may be reused
+and materialized multiple times, because it is just the "blueprint" of the stream. This means that if we materialize a stream twice,
+for example one that consumes a live stream of tweets within a minute, the materialized values of those two materializations
+will be different, as illustrated by this example:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#tweets-runnable-flow-materialized-twice
+
+Many elements in Akka Streams provide materialized values which can be used for obtaining either results of computation or
+for steering these elements; this will be discussed in detail in :ref:`stream-materialization-java`. Summing up this section, now we know
+what happens behind the scenes when we run this one-liner, which is equivalent to the multi-line version above:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocTest.java#tweets-fold-count-oneline
+
+.. note::
+  ``runWith()`` is a convenience method that automatically ignores the materialized value of any other stages except
+  those appended by the ``runWith()`` itself. In the above example it translates to using ``Keep.right`` as the combiner
+  for materialized values.
diff --git a/akka-docs/rst/project/issue-tracking.rst b/akka-docs/rst/project/issue-tracking.rst
index 169ba87d03..7a68465e63 100644
--- a/akka-docs/rst/project/issue-tracking.rst
+++ b/akka-docs/rst/project/issue-tracking.rst
@@ -19,11 +19,7 @@ have reproducible test cases that you can share.
 Roadmaps
 ^^^^^^^^
 
-Please refer to the `Akka roadmap
-`_
-in order to find out the general theme of work to be done for upcoming versions
-of Akka.
-
+Short and long-term plans are published in the `akka/akka-meta <https://github.com/akka/akka-meta/issues>`_ repository.
 
 Creating tickets
 ----------------
@@ -35,4 +31,14 @@ have registered a GitHub user account.
 
 Thanks a lot for reporting bugs and suggesting features!
 
+Submitting Pull Requests
+------------------------
+
+.. note:: *A pull request is worth a thousand +1's.* -- Old Klangian Proverb
+
+Pull Requests fixing issues or adding functionality are very welcome.
+Please read `CONTRIBUTING.md <https://github.com/akka/akka/blob/master/CONTRIBUTING.md>`_ for
+more information about contributing to Akka.
+
+
diff --git a/akka-docs/rst/project/migration-guide-2.4.x-2.5.x.rst b/akka-docs/rst/project/migration-guide-2.4.x-2.5.x.rst
index 5aaec0aea7..663dae3cbf 100644
--- a/akka-docs/rst/project/migration-guide-2.4.x-2.5.x.rst
+++ b/akka-docs/rst/project/migration-guide-2.4.x-2.5.x.rst
@@ -1,8 +1,8 @@
 .. _migration-guide-2.4.x-2.5.x:
 
-##############################
-Migration Guide 2.4.x to 2.5.x
-##############################
+#######################################
+Upcoming Migration Guide 2.4.x to 2.5.x
+#######################################
 
 Akka Persistence
 ================
diff --git a/akka-docs/rst/scala/actors.rst b/akka-docs/rst/scala/actors.rst
index 8417e10e5c..bc286254b6 100644
--- a/akka-docs/rst/scala/actors.rst
+++ b/akka-docs/rst/scala/actors.rst
@@ -81,6 +81,11 @@ verified during construction of the :class:`Props` object, resulting in an
 :class:`IllegalArgumentException` if no or multiple matching constructors
 are found.
 
+.. note::
+
+  The recommended approach to creating the actor :class:`Props` is not supported
+  when the actor constructor takes value classes as arguments.
+
 Dangerous Variants
 ^^^^^^^^^^^^^^^^^^
 
@@ -108,6 +113,25 @@ reference needs to be passed as the first argument).
 Declaring one actor within another is very dangerous and breaks actor
 encapsulation. Never pass an actor’s ``this`` reference into :class:`Props`!
 
+Edge cases
+^^^^^^^^^^
+There are two edge cases in actor creation with :class:`Props`:
+
+* An actor with :class:`AnyVal` arguments.
+
+.. includecode:: code/docs/actor/PropsEdgeCaseSpec.scala#props-edge-cases-value-class
+.. includecode:: code/docs/actor/PropsEdgeCaseSpec.scala#props-edge-cases-value-class-example
+
+* An actor with default constructor values.
+
+.. includecode:: code/docs/actor/PropsEdgeCaseSpec.scala#props-edge-cases-default-values
+
+In both cases an :class:`IllegalArgumentException` will be thrown, stating
+that no matching constructor could be found.
+
+The next section explains the recommended ways to create :class:`Actor` props
+in a way that simultaneously safeguards against these edge cases.
+
 Recommended Practices
 ^^^^^^^^^^^^^^^^^^^^^
 
@@ -162,6 +186,18 @@ another child to the same parent an :class:`InvalidActorNameException` is thrown
 
 Actors are automatically started asynchronously when created.
 
+Value classes as constructor arguments
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The recommended way to instantiate actor props uses reflection at runtime
+to determine the correct actor constructor to be invoked; due to technical
+limitations it is not supported when said constructor takes arguments that are
+value classes.
+In these cases you should either unpack the arguments or create the props by
+calling the constructor manually:
+
+.. includecode:: code/docs/actor/ActorDocSpec.scala#actor-with-value-class-argument
+
 Dependency Injection
 --------------------
 
diff --git a/akka-docs/rst/scala/cluster-metrics.rst b/akka-docs/rst/scala/cluster-metrics.rst
index 12cbeba202..e0e3d67a3d 100644
--- a/akka-docs/rst/scala/cluster-metrics.rst
+++ b/akka-docs/rst/scala/cluster-metrics.rst
@@ -14,9 +14,9 @@ Cluster metrics information is primarily used for load-balancing routers,
 and can also be used to implement advanced metrics-based node life cycles,
 such as "Node Let-it-crash" when CPU steal time becomes excessive.
 
-Cluster Metrics Extension is a separate akka module delivered in ``akka-cluster-metrics`` jar.
+Cluster Metrics Extension is a separate akka module delivered in the ``akka-cluster-metrics`` jar.
-To enable usage of the extension you need to add the following dependency to your project: +To enable usage of the extension you need to add the following dependency to your project: :: "com.typesafe.akka" % "akka-cluster-metrics_@binVersion@" % "@version@" @@ -42,15 +42,15 @@ Certain message routing and let-it-crash functions may not work when Sigar is no Cluster metrics extension comes with two built-in collector implementations: -#. ``akka.cluster.metrics.SigarMetricsCollector``, which requires Sigar provisioning, and is more rich/precise +#. ``akka.cluster.metrics.SigarMetricsCollector``, which requires Sigar provisioning, and is more rich/precise #. ``akka.cluster.metrics.JmxMetricsCollector``, which is used as fall back, and is less rich/precise You can also plug-in your own metrics collector implementation. -By default, metrics extension will use collector provider fall back and will try to load them in this order: +By default, metrics extension will use collector provider fall back and will try to load them in this order: #. configured user-provided collector -#. built-in ``akka.cluster.metrics.SigarMetricsCollector`` +#. built-in ``akka.cluster.metrics.SigarMetricsCollector`` #. and finally ``akka.cluster.metrics.JmxMetricsCollector`` Metrics Events @@ -67,7 +67,7 @@ which was received during the collector sample period. You can subscribe your metrics listener actors to these events in order to implement custom node lifecycle :: - ClusterMetricsExtension(system).subscribe(metricsListenerActor) + ClusterMetricsExtension(system).subscribe(metricsListenerActor) Hyperic Sigar Provisioning -------------------------- @@ -75,8 +75,8 @@ Hyperic Sigar Provisioning Both user-provided and built-in metrics collectors can optionally use `Hyperic Sigar `_ for a wider and more accurate range of metrics compared to what can be retrieved from ordinary JMX MBeans. -Sigar is using a native o/s library, and requires library provisioning, i.e. -deployment, extraction and loading of the o/s native library into JVM at runtime. +Sigar is using a native o/s library, and requires library provisioning, i.e. +deployment, extraction and loading of the o/s native library into JVM at runtime. User can provision Sigar classes and native library in one of the following ways: @@ -86,8 +86,15 @@ User can provision Sigar classes and native library in one of the following ways Kamon sigar loader agent will extract and load sigar library during JVM start. #. Place ``sigar.jar`` on the ``classpath`` and Sigar native library for the o/s on the ``java.library.path``. User is required to manage both project dependency and library deployment manually. - -To enable usage of Sigar you can add the following dependency to the user project + +.. warning:: + + When using `Kamon sigar-loader `_ and running multiple + instances of the same application on the same host, you have to make sure that sigar library is extracted to a + unique per instance directory. You can control the extract directory with the + ``akka.cluster.metrics.native-library-extract-folder`` configuration setting. + +To enable usage of Sigar you can add the following dependency to the user project :: "io.kamon" % "sigar-loader" % "@sigarLoaderVersion@" @@ -103,7 +110,7 @@ It uses random selection of routees with probabilities derived from the remainin It can be configured to use a specific MetricsSelector to produce the probabilities, a.k.a. weights: * ``heap`` / ``HeapMetricsSelector`` - Used and max JVM heap memory. 
Weights based on remaining heap capacity; (max - used) / max -* ``load`` / ``SystemLoadAverageMetricsSelector`` - System load average for the past 1 minute, corresponding value can be found in ``top`` of Linux systems. The system is possibly nearing a bottleneck if the system load average is nearing number of cpus/cores. Weights based on remaining load capacity; 1 - (load / processors) +* ``load`` / ``SystemLoadAverageMetricsSelector`` - System load average for the past 1 minute, corresponding value can be found in ``top`` of Linux systems. The system is possibly nearing a bottleneck if the system load average is nearing number of cpus/cores. Weights based on remaining load capacity; 1 - (load / processors) * ``cpu`` / ``CpuMetricsSelector`` - CPU utilization in percentage, sum of User + Sys + Nice + Wait. Weights based on remaining cpu capacity; 1 - utilization * ``mix`` / ``MixMetricsSelector`` - Combines heap, cpu and load. Weights based on mean of remaining capacity of the combined selectors. * Any custom implementation of ``akka.cluster.metrics.MetricsSelector`` @@ -125,7 +132,7 @@ As you can see, the router is defined in the same way as other routers, and in t .. includecode:: ../../../akka-samples/akka-sample-cluster-scala/src/main/resources/factorial.conf#adaptive-router -It is only ``router`` type and the ``metrics-selector`` parameter that is specific to this router, +It is only ``router`` type and the ``metrics-selector`` parameter that is specific to this router, other things work in the same way as other routers. The same type of router could also have been defined in code: @@ -151,11 +158,11 @@ Custom Metrics Collector Metrics collection is delegated to the implementation of ``akka.cluster.metrics.MetricsCollector`` You can plug-in your own metrics collector instead of built-in -``akka.cluster.metrics.SigarMetricsCollector`` or ``akka.cluster.metrics.JmxMetricsCollector``. +``akka.cluster.metrics.SigarMetricsCollector`` or ``akka.cluster.metrics.JmxMetricsCollector``. -Look at those two implementations for inspiration. +Look at those two implementations for inspiration. -Custom metrics collector implementation class must be specified in the +Custom metrics collector implementation class must be specified in the ``akka.cluster.metrics.collector.provider`` configuration property. Configuration diff --git a/akka-docs/rst/scala/cluster-usage.rst b/akka-docs/rst/scala/cluster-usage.rst index 058b17a472..4eb8177e3c 100644 --- a/akka-docs/rst/scala/cluster-usage.rst +++ b/akka-docs/rst/scala/cluster-usage.rst @@ -142,7 +142,7 @@ status to ``down`` automatically after the configured time of unreachability. This is a naïve approach to remove unreachable nodes from the cluster membership. It works great for crashes and short transient network partitions, but not for long network -partitions. Both sides of the network partition will see the other side as unreachable +partitions. Both sides of the network partition will see the other side as unreachable and after a while remove it from its cluster membership. Since this happens on both sides the result is that two separate disconnected clusters have been created. This can also happen because of long GC pauses or system overload. @@ -150,14 +150,14 @@ can also happen because of long GC pauses or system overload. .. warning:: We recommend against using the auto-down feature of Akka Cluster in production. 
- This is crucial for correct behavior if you use :ref:`cluster-singleton-scala` or + This is crucial for correct behavior if you use :ref:`cluster-singleton-scala` or :ref:`cluster_sharding_scala`, especially together with Akka :ref:`persistence-scala`. - -A pre-packaged solution for the downing problem is provided by -`Split Brain Resolver `_, -which is part of the Lightbend Reactive Platform. If you don’t use RP, you should anyway carefully + +A pre-packaged solution for the downing problem is provided by +`Split Brain Resolver `_, +which is part of the Lightbend Reactive Platform. If you don’t use RP, you should anyway carefully read the `documentation `_ -of the Split Brain Resolver and make sure that the solution you are using handles the concerns +of the Split Brain Resolver and make sure that the solution you are using handles the concerns described there. .. note:: If you have *auto-down* enabled and the failure detector triggers, you @@ -422,8 +422,8 @@ If system messages cannot be delivered to a node it will be quarantined and then cannot come back from ``unreachable``. This can happen if the there are too many unacknowledged system messages (e.g. watch, Terminated, remote actor deployment, failures of actors supervised by remote parent). Then the node needs to be moved -to the ``down`` or ``removed`` states and the actor system must be restarted before -it can join the cluster again. +to the ``down`` or ``removed`` states and the actor system of the quarantined node +must be restarted before it can join the cluster again. The nodes in the cluster monitor each other by sending heartbeats to detect if a node is unreachable from the rest of the cluster. The heartbeat arrival times is interpreted diff --git a/akka-docs/rst/scala/code/docs/actor/ActorDocSpec.scala b/akka-docs/rst/scala/code/docs/actor/ActorDocSpec.scala index 50c7adc846..e6b9912d18 100644 --- a/akka-docs/rst/scala/code/docs/actor/ActorDocSpec.scala +++ b/akka-docs/rst/scala/code/docs/actor/ActorDocSpec.scala @@ -50,6 +50,19 @@ class ActorWithArgs(arg: String) extends Actor { def receive = { case _ => () } } +//#actor-with-value-class-argument +class Argument(val value: String) extends AnyVal +class ValueClassActor(arg: Argument) extends Actor { + def receive = { case _ => () } +} + +object ValueClassActor { + def props1(arg: Argument) = Props(classOf[ValueClassActor], arg) // fails at runtime + def props2(arg: Argument) = Props(classOf[ValueClassActor], arg.value) // ok + def props3(arg: Argument) = Props(new ValueClassActor(arg)) // ok +} +//#actor-with-value-class-argument + class DemoActorWrapper extends Actor { //#props-factory object DemoActor { @@ -312,7 +325,7 @@ class ActorDocSpec extends AkkaSpec(""" val props1 = Props[MyActor] val props2 = Props(new ActorWithArgs("arg")) // careful, see below - val props3 = Props(classOf[ActorWithArgs], "arg") + val props3 = Props(classOf[ActorWithArgs], "arg") // no support for value class arguments //#creating-props //#creating-props-deprecated @@ -618,4 +631,4 @@ class ActorDocSpec extends AkkaSpec(""" }) } -} +} \ No newline at end of file diff --git a/akka-docs/rst/scala/code/docs/actor/PropsEdgeCaseSpec.scala b/akka-docs/rst/scala/code/docs/actor/PropsEdgeCaseSpec.scala new file mode 100644 index 0000000000..3a538a61fd --- /dev/null +++ b/akka-docs/rst/scala/code/docs/actor/PropsEdgeCaseSpec.scala @@ -0,0 +1,44 @@ +/** + * Copyright (C) 2009-2016 Lightbend Inc. 
+ */ +package docs.actor + +import akka.actor.{ Actor, Props } +import docs.CompileOnlySpec +import org.scalatest.WordSpec + +//#props-edge-cases-value-class +case class MyValueClass(v: Int) extends AnyVal + +//#props-edge-cases-value-class + +class PropsEdgeCaseSpec extends WordSpec with CompileOnlySpec { + "value-class-edge-case-example" in compileOnlySpec { + //#props-edge-cases-value-class-example + class ValueActor(value: MyValueClass) extends Actor { + def receive = { + case multiplier: Long => sender() ! (value.v * multiplier) + } + } + val valueClassProp = Props(classOf[ValueActor], MyValueClass(5)) // Unsupported + //#props-edge-cases-value-class-example + + //#props-edge-cases-default-values + class DefaultValueActor(a: Int, b: Int = 5) extends Actor { + def receive = { + case x: Int => sender() ! ((a + x) * b) + } + } + + val defaultValueProp1 = Props(classOf[DefaultValueActor], 2.0) // Unsupported + + class DefaultValueActor2(b: Int = 5) extends Actor { + def receive = { + case x: Int => sender() ! (x * b) + } + } + val defaultValueProp2 = Props[DefaultValueActor2] // Unsupported + val defaultValueProp3 = Props(classOf[DefaultValueActor2]) // Unsupported + //#props-edge-cases-default-values + } +} diff --git a/akka-docs/rst/scala/code/docs/http/scaladsl/HttpClientExampleSpec.scala b/akka-docs/rst/scala/code/docs/http/scaladsl/HttpClientExampleSpec.scala index 8f13d319cb..817edbe4d4 100644 --- a/akka-docs/rst/scala/code/docs/http/scaladsl/HttpClientExampleSpec.scala +++ b/akka-docs/rst/scala/code/docs/http/scaladsl/HttpClientExampleSpec.scala @@ -4,37 +4,146 @@ package docs.http.scaladsl +import akka.Done import akka.actor.{ ActorLogging, ActorSystem } -import akka.stream.{ ActorMaterializerSettings } +import akka.http.scaladsl.model.HttpEntity.Strict +import akka.http.scaladsl.model.HttpMessage.DiscardedEntity +import akka.stream.{ IOResult, Materializer } +import akka.stream.scaladsl.{ Framing, Sink } import akka.util.ByteString +import docs.CompileOnlySpec import org.scalatest.{ Matchers, WordSpec } -class HttpClientExampleSpec extends WordSpec with Matchers { +import scala.concurrent.{ ExecutionContextExecutor, Future } - "outgoing-connection-example" in { - pending // compile-time only test +class HttpClientExampleSpec extends WordSpec with Matchers with CompileOnlySpec { + + "manual-entity-consume-example-1" in compileOnlySpec { + //#manual-entity-consume-example-1 + import java.io.File + import akka.actor.ActorSystem + import akka.stream.ActorMaterializer + import akka.stream.scaladsl.Framing + import akka.stream.scaladsl.FileIO + import akka.http.scaladsl.model._ + + implicit val system = ActorSystem() + implicit val dispatcher = system.dispatcher + implicit val materializer = ActorMaterializer() + + val response: HttpResponse = ??? + + response.entity.dataBytes + .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256)) + .map(transformEachLine) + .runWith(FileIO.toPath(new File("/tmp/example.out").toPath)) + + def transformEachLine(line: ByteString): ByteString = ??? 
+ + //#manual-entity-consume-example-1 + } + + "manual-entity-consume-example-2" in compileOnlySpec { + //#manual-entity-consume-example-2 + import java.io.File + import akka.actor.ActorSystem + import akka.stream.ActorMaterializer + import akka.http.scaladsl.model._ + import scala.concurrent.duration._ + + implicit val system = ActorSystem() + implicit val dispatcher = system.dispatcher + implicit val materializer = ActorMaterializer() + + case class ExamplePerson(name: String) + def parse(line: ByteString): ExamplePerson = ??? + + val response: HttpResponse = ??? + + // toStrict to enforce all data be loaded into memory from the connection + val strictEntity: Future[HttpEntity.Strict] = response.entity.toStrict(3.seconds) + + // while API remains the same to consume dataBytes, now they're in memory already: + val transformedData: Future[ExamplePerson] = + strictEntity flatMap { e => + e.dataBytes + .runFold(ByteString.empty) { case (acc, b) => acc ++ b } + .map(parse) + } + + //#manual-entity-consume-example-2 + } + + "manual-entity-discard-example-1" in compileOnlySpec { + //#manual-entity-discard-example-1 + import akka.actor.ActorSystem + import akka.stream.ActorMaterializer + import akka.http.scaladsl.model._ + + implicit val system = ActorSystem() + implicit val dispatcher = system.dispatcher + implicit val materializer = ActorMaterializer() + + val response1: HttpResponse = ??? // obtained from an HTTP call (see examples below) + + val discarded: DiscardedEntity = response1.discardEntityBytes() + discarded.future.onComplete { case done => println("Entity discarded completely!") } + + //#manual-entity-discard-example-1 + } + "manual-entity-discard-example-2" in compileOnlySpec { + import akka.actor.ActorSystem + import akka.stream.ActorMaterializer + import akka.http.scaladsl.model._ + + implicit val system = ActorSystem() + implicit val dispatcher = system.dispatcher + implicit val materializer = ActorMaterializer() + + //#manual-entity-discard-example-2 + val response1: HttpResponse = ??? 
// obtained from an HTTP call (see examples below)
+
+    val discardingComplete: Future[Done] = response1.entity.dataBytes.runWith(Sink.ignore)
+    discardingComplete.onComplete { case done => println("Entity discarded completely!") }
+    //#manual-entity-discard-example-2
+  }
+
+  "outgoing-connection-example" in compileOnlySpec {
     //#outgoing-connection-example
+    import akka.actor.ActorSystem
     import akka.http.scaladsl.Http
     import akka.http.scaladsl.model._
     import akka.stream.ActorMaterializer
     import akka.stream.scaladsl._
 
     import scala.concurrent.Future
+    import scala.util.{ Failure, Success }
 
-    implicit val system = ActorSystem()
-    implicit val materializer = ActorMaterializer()
+    object WebClient {
+      def main(args: Array[String]): Unit = {
+        implicit val system = ActorSystem()
+        implicit val materializer = ActorMaterializer()
+        implicit val executionContext = system.dispatcher
 
-    val connectionFlow: Flow[HttpRequest, HttpResponse, Future[Http.OutgoingConnection]] =
-      Http().outgoingConnection("akka.io")
-    val responseFuture: Future[HttpResponse] =
-      Source.single(HttpRequest(uri = "/"))
-        .via(connectionFlow)
-        .runWith(Sink.head)
+        val connectionFlow: Flow[HttpRequest, HttpResponse, Future[Http.OutgoingConnection]] =
+          Http().outgoingConnection("akka.io")
+        val responseFuture: Future[HttpResponse] =
+          Source.single(HttpRequest(uri = "/"))
+            .via(connectionFlow)
+            .runWith(Sink.head)
+
+        responseFuture.andThen {
+          case Success(_) => println("request succeeded")
+          case Failure(_) => println("request failed")
+        }.andThen {
+          case _ => system.terminate()
+        }
+      }
+    }
     //#outgoing-connection-example
   }
 
-  "host-level-example" in {
-    pending // compile-time only test
+  "host-level-example" in compileOnlySpec {
     //#host-level-example
     import akka.http.scaladsl.Http
     import akka.http.scaladsl.model._
@@ -55,14 +164,14 @@ class HttpClientExampleSpec extends WordSpec with Matchers {
     //#host-level-example
   }
 
-  "single-request-example" in {
-    pending // compile-time only test
+  "single-request-example" in compileOnlySpec {
     //#single-request-example
     import akka.http.scaladsl.Http
     import akka.http.scaladsl.model._
     import akka.stream.ActorMaterializer
 
     import scala.concurrent.Future
+    import scala.util.{ Failure, Success }
 
     implicit val system = ActorSystem()
     implicit val materializer = ActorMaterializer()
@@ -72,8 +181,7 @@ class HttpClientExampleSpec extends WordSpec with Matchers {
     //#single-request-example
   }
 
-  "single-request-in-actor-example" in {
-    pending // compile-time only test
+  "single-request-in-actor-example" in compileOnlySpec {
     //#single-request-in-actor-example
     import akka.actor.Actor
     import akka.http.scaladsl.Http
diff --git a/akka-docs/rst/scala/code/docs/http/scaladsl/HttpServerExampleSpec.scala b/akka-docs/rst/scala/code/docs/http/scaladsl/HttpServerExampleSpec.scala
index 9514448a70..d91d819b59 100644
--- a/akka-docs/rst/scala/code/docs/http/scaladsl/HttpServerExampleSpec.scala
+++ b/akka-docs/rst/scala/code/docs/http/scaladsl/HttpServerExampleSpec.scala
@@ -5,15 +5,13 @@
 package docs.http.scaladsl
 
 import akka.event.LoggingAdapter
-import akka.http.scaladsl.Http.ServerBinding
-import akka.http.scaladsl.model._
-import akka.stream.ActorMaterializer
-import akka.stream.scaladsl.{ Flow, Sink }
+import akka.http.scaladsl.model.{ RequestEntity, StatusCodes }
+import akka.stream.scaladsl.Sink
 import akka.testkit.TestActors
 import docs.CompileOnlySpec
 import org.scalatest.{ Matchers, WordSpec }
 
-import scala.language.postfixOps
+import scala.language.postfixOps
 import scala.concurrent.{ ExecutionContext,
Future } class HttpServerExampleSpec extends WordSpec with Matchers @@ -44,37 +42,50 @@ class HttpServerExampleSpec extends WordSpec with Matchers "binding-failure-high-level-example" in compileOnlySpec { import akka.actor.ActorSystem import akka.http.scaladsl.Http + import akka.http.scaladsl.Http.ServerBinding import akka.http.scaladsl.server.Directives._ import akka.stream.ActorMaterializer - implicit val system = ActorSystem() - implicit val materializer = ActorMaterializer() - // needed for the future onFailure in the end - implicit val executionContext = system.dispatcher + import scala.concurrent.Future - val handler = get { - complete("Hello world!") + object WebServer { + def main(args: Array[String]) { + implicit val system = ActorSystem() + implicit val materializer = ActorMaterializer() + // needed for the future onFailure in the end + implicit val executionContext = system.dispatcher + + val handler = get { + complete("Hello world!") + } + + // let's say the OS won't allow us to bind to 80. + val (host, port) = ("localhost", 80) + val bindingFuture: Future[ServerBinding] = + Http().bindAndHandle(handler, host, port) + + bindingFuture.onFailure { + case ex: Exception => + log.error(ex, "Failed to bind to {}:{}!", host, port) + } + } } - - // let's say the OS won't allow us to bind to 80. - val (host, port) = ("localhost", 80) - val bindingFuture: Future[ServerBinding] = - Http().bindAndHandle(handler, host, port) - - bindingFuture.onFailure { - case ex: Exception => - log.error(ex, "Failed to bind to {}:{}!", host, port) - } - } // mock values: - import akka.http.scaladsl.Http - import akka.actor.ActorSystem - val handleConnections: Sink[Http.IncomingConnection, Future[Http.ServerBinding]] = + val handleConnections = { + import akka.stream.scaladsl.Sink Sink.ignore.mapMaterializedValue(_ => Future.failed(new Exception(""))) + } "binding-failure-handling" in compileOnlySpec { + import akka.actor.ActorSystem + import akka.http.scaladsl.Http + import akka.http.scaladsl.Http.ServerBinding + import akka.stream.ActorMaterializer + + import scala.concurrent.Future + implicit val system = ActorSystem() implicit val materializer = ActorMaterializer() // needed for the future onFailure in the end @@ -102,11 +113,8 @@ class HttpServerExampleSpec extends WordSpec with Matchers import akka.actor.ActorSystem import akka.actor.ActorRef import akka.http.scaladsl.Http - import akka.http.scaladsl.model.HttpEntity - import akka.http.scaladsl.model.ContentTypes - import akka.http.scaladsl.server.Directives._ import akka.stream.ActorMaterializer - import scala.io.StdIn + import akka.stream.scaladsl.Flow implicit val system = ActorSystem() implicit val materializer = ActorMaterializer() @@ -132,8 +140,9 @@ class HttpServerExampleSpec extends WordSpec with Matchers "connection-stream-failure-handling" in compileOnlySpec { import akka.actor.ActorSystem import akka.http.scaladsl.Http - import akka.http.scaladsl.model.{ ContentTypes, HttpEntity } + import akka.http.scaladsl.model._ import akka.stream.ActorMaterializer + import akka.stream.scaladsl.Flow implicit val system = ActorSystem() implicit val materializer = ActorMaterializer() @@ -152,7 +161,7 @@ class HttpServerExampleSpec extends WordSpec with Matchers val httpEcho = Flow[HttpRequest] .via(reactToConnectionFailure) .map { request => - // simple text "echo" response: + // simple streaming (!) 
"echo" response: HttpResponse(entity = HttpEntity(ContentTypes.`text/plain(UTF-8)`, request.entity.dataBytes)) } @@ -188,7 +197,8 @@ class HttpServerExampleSpec extends WordSpec with Matchers case HttpRequest(GET, Uri.Path("/crash"), _, _, _) => sys.error("BOOM!") - case _: HttpRequest => + case r: HttpRequest => + r.discardEntityBytes() // important to drain incoming HTTP Entity stream HttpResponse(404, entity = "Unknown resource!") } @@ -203,6 +213,7 @@ class HttpServerExampleSpec extends WordSpec with Matchers } "low-level-server-example" in compileOnlySpec { + import akka.actor.ActorSystem import akka.http.scaladsl.Http import akka.http.scaladsl.model.HttpMethods._ import akka.http.scaladsl.model._ @@ -229,7 +240,8 @@ class HttpServerExampleSpec extends WordSpec with Matchers case HttpRequest(GET, Uri.Path("/crash"), _, _, _) => sys.error("BOOM!") - case _: HttpRequest => + case r: HttpRequest => + r.discardEntityBytes() // important to drain incoming HTTP Entity stream HttpResponse(404, entity = "Unknown resource!") } @@ -286,7 +298,9 @@ class HttpServerExampleSpec extends WordSpec with Matchers } "minimal-routing-example" in compileOnlySpec { + import akka.actor.ActorSystem import akka.http.scaladsl.Http + import akka.http.scaladsl.model._ import akka.http.scaladsl.server.Directives._ import akka.stream.ActorMaterializer import scala.io.StdIn @@ -319,13 +333,14 @@ class HttpServerExampleSpec extends WordSpec with Matchers "long-routing-example" in compileOnlySpec { //#long-routing-example - import akka.actor.ActorRef + import akka.actor.{ActorRef, ActorSystem} import akka.http.scaladsl.coding.Deflate import akka.http.scaladsl.marshalling.ToResponseMarshaller import akka.http.scaladsl.model.StatusCodes.MovedPermanently import akka.http.scaladsl.server.Directives._ import akka.http.scaladsl.unmarshalling.FromRequestUnmarshaller import akka.pattern.ask + import akka.stream.ActorMaterializer import akka.util.Timeout // types used by the API routes @@ -427,6 +442,7 @@ class HttpServerExampleSpec extends WordSpec with Matchers "stream random numbers" in compileOnlySpec { //#stream-random-numbers + import akka.actor.ActorSystem import akka.stream.scaladsl._ import akka.util.ByteString import akka.http.scaladsl.Http @@ -483,15 +499,15 @@ class HttpServerExampleSpec extends WordSpec with Matchers "interact with an actor" in compileOnlySpec { //#actor-interaction import akka.actor.ActorSystem - import akka.actor.Props - import scala.concurrent.duration._ - import akka.util.Timeout - import akka.pattern.ask - import akka.stream.ActorMaterializer import akka.http.scaladsl.Http + import akka.http.scaladsl.model.StatusCodes import akka.http.scaladsl.server.Directives._ import akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport._ + import akka.pattern.ask + import akka.stream.ActorMaterializer + import akka.util.Timeout import spray.json.DefaultJsonProtocol._ + import scala.concurrent.duration._ import scala.io.StdIn object WebServer { @@ -541,6 +557,126 @@ class HttpServerExampleSpec extends WordSpec with Matchers } //#actor-interaction } + + "consume entity using entity directive" in compileOnlySpec { + //#consume-entity-directive + import akka.actor.ActorSystem + import akka.http.scaladsl.server.Directives._ + import akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport._ + import akka.stream.ActorMaterializer + import spray.json.DefaultJsonProtocol._ + + implicit val system = ActorSystem() + implicit val materializer = ActorMaterializer() + // needed for the future flatMap/onComplete 
in the end + implicit val executionContext = system.dispatcher + + final case class Bid(userId: String, bid: Int) + + // these are from spray-json + implicit val bidFormat = jsonFormat2(Bid) + + val route = + path("bid") { + put { + entity(as[Bid]) { bid => + // incoming entity is fully consumed and converted into a Bid + complete("The bid was: " + bid) + } + } + } + //#consume-entity-directive + } + + "consume entity using raw dataBytes to file" in compileOnlySpec { + //#consume-raw-dataBytes + import akka.actor.ActorSystem + import akka.stream.scaladsl.FileIO + import akka.http.scaladsl.server.Directives._ + import akka.stream.ActorMaterializer + import java.io.File + + implicit val system = ActorSystem() + implicit val materializer = ActorMaterializer() + // needed for the future flatMap/onComplete in the end + implicit val executionContext = system.dispatcher + + val route = + (put & path("lines")) { + withoutSizeLimit { + extractDataBytes { bytes => + val finishedWriting = bytes.runWith(FileIO.toPath(new File("/tmp/example.out").toPath)) + + // we only want to respond once the incoming data has been handled: + onComplete(finishedWriting) { ioResult => + complete("Finished writing data: " + ioResult) + } + } + } + } + //#consume-raw-dataBytes + } + + "drain entity using request#discardEntityBytes" in compileOnlySpec { + //#discard-discardEntityBytes + import akka.actor.ActorSystem + import akka.stream.scaladsl.FileIO + import akka.http.scaladsl.server.Directives._ + import akka.stream.ActorMaterializer + import akka.http.scaladsl.model.HttpRequest + + implicit val system = ActorSystem() + implicit val materializer = ActorMaterializer() + // needed for the future flatMap/onComplete in the end + implicit val executionContext = system.dispatcher + + val route = + (put & path("lines")) { + withoutSizeLimit { + extractRequest { r: HttpRequest => + val finishedWriting = r.discardEntityBytes().future + + // we only want to respond once the incoming data has been handled: + onComplete(finishedWriting) { done => + complete("Drained all data from connection... 
(" + done + ")") + } + } + } + } + //#discard-discardEntityBytes + } + + "discard entity manually" in compileOnlySpec { + //#discard-close-connections + import akka.actor.ActorSystem + import akka.stream.scaladsl.Sink + import akka.http.scaladsl.server.Directives._ + import akka.http.scaladsl.model.headers.Connection + import akka.stream.ActorMaterializer + + implicit val system = ActorSystem() + implicit val materializer = ActorMaterializer() + // needed for the future flatMap/onComplete in the end + implicit val executionContext = system.dispatcher + + val route = + (put & path("lines")) { + withoutSizeLimit { + extractDataBytes { data => + // Closing connections, method 1 (eager): + // we deem this request as illegal, and close the connection right away: + data.runWith(Sink.cancelled) // "brutally" closes the connection + + // Closing connections, method 2 (graceful): + // consider draining connection and replying with `Connection: Close` header + // if you want the client to close after this request/reply cycle instead: + respondWithHeader(Connection("close")) + complete(StatusCodes.Forbidden -> "Not allowed!") + } + } + } + //#discard-close-connections + } } diff --git a/akka-docs/rst/scala/code/docs/http/scaladsl/HttpsExamplesSpec.scala b/akka-docs/rst/scala/code/docs/http/scaladsl/HttpsExamplesSpec.scala index ff30a2479c..64aeafa5e2 100644 --- a/akka-docs/rst/scala/code/docs/http/scaladsl/HttpsExamplesSpec.scala +++ b/akka-docs/rst/scala/code/docs/http/scaladsl/HttpsExamplesSpec.scala @@ -9,13 +9,12 @@ import akka.http.scaladsl.Http import akka.stream.ActorMaterializer import akka.util.ByteString import com.typesafe.sslconfig.akka.AkkaSSLConfig +import docs.CompileOnlySpec import org.scalatest.{ Matchers, WordSpec } -class HttpsExamplesSpec extends WordSpec with Matchers { - - "disable SNI for connection" in { - pending // compile-time only test +class HttpsExamplesSpec extends WordSpec with Matchers with CompileOnlySpec { + "disable SNI for connection" in compileOnlySpec { val unsafeHost = "example.com" //#disable-sni-connection implicit val system = ActorSystem() diff --git a/akka-docs/rst/scala/code/docs/http/scaladsl/SprayJsonExampleSpec.scala b/akka-docs/rst/scala/code/docs/http/scaladsl/SprayJsonExampleSpec.scala index fe5208a5f2..7da0505134 100644 --- a/akka-docs/rst/scala/code/docs/http/scaladsl/SprayJsonExampleSpec.scala +++ b/akka-docs/rst/scala/code/docs/http/scaladsl/SprayJsonExampleSpec.scala @@ -8,8 +8,6 @@ import akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport import akka.http.scaladsl.server.Directives import org.scalatest.{ Matchers, WordSpec } -import scala.concurrent.Future - class SprayJsonExampleSpec extends WordSpec with Matchers { def compileOnlySpec(body: => Unit) = () @@ -53,6 +51,7 @@ class SprayJsonExampleSpec extends WordSpec with Matchers { "second-spray-json-example" in compileOnlySpec { //#second-spray-json-example import akka.actor.ActorSystem + import akka.http.scaladsl.Http import akka.stream.ActorMaterializer import akka.Done import akka.http.scaladsl.server.Route @@ -61,6 +60,10 @@ class SprayJsonExampleSpec extends WordSpec with Matchers { import akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport._ import spray.json.DefaultJsonProtocol._ + import scala.io.StdIn + + import scala.concurrent.Future + object WebServer { // domain model @@ -80,6 +83,8 @@ class SprayJsonExampleSpec extends WordSpec with Matchers { // needed to run the route implicit val system = ActorSystem() implicit val materializer = ActorMaterializer() + // needed 
for the future map/flatmap in the end + implicit val executionContext = system.dispatcher val route: Route = get { @@ -104,6 +109,13 @@ class SprayJsonExampleSpec extends WordSpec with Matchers { } } + val bindingFuture = Http().bindAndHandle(route, "localhost", 8080) + println(s"Server online at http://localhost:8080/\nPress RETURN to stop...") + StdIn.readLine() // let it run until user presses return + bindingFuture + .flatMap(_.unbind()) // trigger unbinding from the port + .onComplete(_ ⇒ system.terminate()) // and shutdown when done + } } //#second-spray-json-example diff --git a/akka-docs/rst/scala/code/docs/http/scaladsl/WebSocketClientExampleSpec.scala b/akka-docs/rst/scala/code/docs/http/scaladsl/WebSocketClientExampleSpec.scala index 67baf6628a..fdc88ff200 100644 --- a/akka-docs/rst/scala/code/docs/http/scaladsl/WebSocketClientExampleSpec.scala +++ b/akka-docs/rst/scala/code/docs/http/scaladsl/WebSocketClientExampleSpec.scala @@ -3,17 +3,14 @@ */ package docs.http.scaladsl -import akka.actor.ActorSystem -import akka.http.scaladsl.model.headers.{ Authorization, BasicHttpCredentials } import docs.CompileOnlySpec import org.scalatest.{ Matchers, WordSpec } -import scala.concurrent.Promise - class WebSocketClientExampleSpec extends WordSpec with Matchers with CompileOnlySpec { "singleWebSocket-request-example" in compileOnlySpec { //#single-WebSocket-request + import akka.actor.ActorSystem import akka.{ Done, NotUsed } import akka.http.scaladsl.Http import akka.stream.ActorMaterializer @@ -23,59 +20,60 @@ class WebSocketClientExampleSpec extends WordSpec with Matchers with CompileOnly import scala.concurrent.Future - implicit val system = ActorSystem() - implicit val materializer = ActorMaterializer() - import system.dispatcher + object SingleWebSocketRequest { + def main(args: Array[String]) = { + implicit val system = ActorSystem() + implicit val materializer = ActorMaterializer() + import system.dispatcher - // print each incoming strict text message - val printSink: Sink[Message, Future[Done]] = - Sink.foreach { - case message: TextMessage.Strict => - println(message.text) - } + // print each incoming strict text message + val printSink: Sink[Message, Future[Done]] = + Sink.foreach { + case message: TextMessage.Strict => + println(message.text) + } - val helloSource: Source[Message, NotUsed] = - Source.single(TextMessage("hello world!")) + val helloSource: Source[Message, NotUsed] = + Source.single(TextMessage("hello world!")) - // the Future[Done] is the materialized value of Sink.foreach - // and it is completed when the stream completes - val flow: Flow[Message, Message, Future[Done]] = - Flow.fromSinkAndSourceMat(printSink, helloSource)(Keep.left) + // the Future[Done] is the materialized value of Sink.foreach + // and it is completed when the stream completes + val flow: Flow[Message, Message, Future[Done]] = + Flow.fromSinkAndSourceMat(printSink, helloSource)(Keep.left) - // upgradeResponse is a Future[WebSocketUpgradeResponse] that - // completes or fails when the connection succeeds or fails - // and closed is a Future[Done] representing the stream completion from above - val (upgradeResponse, closed) = - Http().singleWebSocketRequest(WebSocketRequest("ws://echo.websocket.org"), flow) + // upgradeResponse is a Future[WebSocketUpgradeResponse] that + // completes or fails when the connection succeeds or fails + // and closed is a Future[Done] representing the stream completion from above + val (upgradeResponse, closed) = + 
Http().singleWebSocketRequest(WebSocketRequest("ws://echo.websocket.org"), flow) - val connected = upgradeResponse.map { upgrade => - // just like a regular http request we can get 404 NotFound, - // with a response body, that will be available from upgrade.response - if (upgrade.response.status == StatusCodes.OK) { - Done - } else { - throw new RuntimeException(s"Connection failed: ${upgrade.response.status}") + val connected = upgradeResponse.map { upgrade => + // just like a regular http request we can access response status which is available via upgrade.response.status + // status code 101 (Switching Protocols) indicates that server support WebSockets + if (upgrade.response.status == StatusCodes.SwitchingProtocols) { + Done + } else { + throw new RuntimeException(s"Connection failed: ${upgrade.response.status}") + } + } + + // in a real application you would not side effect here + // and handle errors more carefully + connected.onComplete(println) + closed.foreach(_ => println("closed")) } } - - // in a real application you would not side effect here - // and handle errors more carefully - connected.onComplete(println) - closed.foreach(_ => println("closed")) - //#single-WebSocket-request } "half-closed-WebSocket-closing-example" in compileOnlySpec { + import akka.actor.ActorSystem import akka.{ Done, NotUsed } import akka.http.scaladsl.Http import akka.stream.ActorMaterializer import akka.stream.scaladsl._ - import akka.http.scaladsl.model._ import akka.http.scaladsl.model.ws._ - import scala.concurrent.Future - implicit val system = ActorSystem() implicit val materializer = ActorMaterializer() import system.dispatcher @@ -97,14 +95,13 @@ class WebSocketClientExampleSpec extends WordSpec with Matchers with CompileOnly } "half-closed-WebSocket-working-example" in compileOnlySpec { - import akka.{ Done, NotUsed } + import akka.actor.ActorSystem import akka.http.scaladsl.Http import akka.stream.ActorMaterializer import akka.stream.scaladsl._ - import akka.http.scaladsl.model._ import akka.http.scaladsl.model.ws._ - import scala.concurrent.Future + import scala.concurrent.Promise implicit val system = ActorSystem() implicit val materializer = ActorMaterializer() @@ -130,14 +127,14 @@ class WebSocketClientExampleSpec extends WordSpec with Matchers with CompileOnly } "half-closed-WebSocket-finite-working-example" in compileOnlySpec { + import akka.actor.ActorSystem import akka.{ Done, NotUsed } import akka.http.scaladsl.Http import akka.stream.ActorMaterializer import akka.stream.scaladsl._ - import akka.http.scaladsl.model._ import akka.http.scaladsl.model.ws._ - import scala.concurrent.Future + import scala.concurrent.Promise implicit val system = ActorSystem() implicit val materializer = ActorMaterializer() @@ -163,11 +160,14 @@ class WebSocketClientExampleSpec extends WordSpec with Matchers with CompileOnly } "authorized-singleWebSocket-request-example" in compileOnlySpec { + import akka.actor.ActorSystem import akka.NotUsed import akka.http.scaladsl.Http import akka.stream.ActorMaterializer import akka.stream.scaladsl._ + import akka.http.scaladsl.model.headers.{ Authorization, BasicHttpCredentials } import akka.http.scaladsl.model.ws._ + implicit val system = ActorSystem() implicit val materializer = ActorMaterializer() import collection.immutable.Seq @@ -187,6 +187,7 @@ class WebSocketClientExampleSpec extends WordSpec with Matchers with CompileOnly "WebSocketClient-flow-example" in compileOnlySpec { //#WebSocket-client-flow + import akka.actor.ActorSystem import akka.Done import 
akka.http.scaladsl.Http import akka.stream.ActorMaterializer @@ -196,48 +197,51 @@ class WebSocketClientExampleSpec extends WordSpec with Matchers with CompileOnly import scala.concurrent.Future - implicit val system = ActorSystem() - implicit val materializer = ActorMaterializer() - import system.dispatcher + object WebSocketClientFlow { + def main(args: Array[String]) = { + implicit val system = ActorSystem() + implicit val materializer = ActorMaterializer() + import system.dispatcher - // Future[Done] is the materialized value of Sink.foreach, - // emitted when the stream completes - val incoming: Sink[Message, Future[Done]] = - Sink.foreach[Message] { - case message: TextMessage.Strict => - println(message.text) - } + // Future[Done] is the materialized value of Sink.foreach, + // emitted when the stream completes + val incoming: Sink[Message, Future[Done]] = + Sink.foreach[Message] { + case message: TextMessage.Strict => + println(message.text) + } - // send this as a message over the WebSocket - val outgoing = Source.single(TextMessage("hello world!")) + // send this as a message over the WebSocket + val outgoing = Source.single(TextMessage("hello world!")) - // flow to use (note: not re-usable!) - val webSocketFlow = Http().webSocketClientFlow(WebSocketRequest("ws://echo.websocket.org")) + // flow to use (note: not re-usable!) + val webSocketFlow = Http().webSocketClientFlow(WebSocketRequest("ws://echo.websocket.org")) - // the materialized value is a tuple with - // upgradeResponse is a Future[WebSocketUpgradeResponse] that - // completes or fails when the connection succeeds or fails - // and closed is a Future[Done] with the stream completion from the incoming sink - val (upgradeResponse, closed) = - outgoing - .viaMat(webSocketFlow)(Keep.right) // keep the materialized Future[WebSocketUpgradeResponse] - .toMat(incoming)(Keep.both) // also keep the Future[Done] - .run() + // the materialized value is a tuple with + // upgradeResponse is a Future[WebSocketUpgradeResponse] that + // completes or fails when the connection succeeds or fails + // and closed is a Future[Done] with the stream completion from the incoming sink + val (upgradeResponse, closed) = + outgoing + .viaMat(webSocketFlow)(Keep.right) // keep the materialized Future[WebSocketUpgradeResponse] + .toMat(incoming)(Keep.both) // also keep the Future[Done] + .run() - // just like a regular http request we can get 404 NotFound etc. 
- // that will be available from upgrade.response - val connected = upgradeResponse.flatMap { upgrade => - if (upgrade.response.status == StatusCodes.OK) { - Future.successful(Done) - } else { - throw new RuntimeException(s"Connection failed: ${upgrade.response.status}") + // just like a regular http request we can access response status which is available via upgrade.response.status + // status code 101 (Switching Protocols) indicates that server support WebSockets + val connected = upgradeResponse.flatMap { upgrade => + if (upgrade.response.status == StatusCodes.SwitchingProtocols) { + Future.successful(Done) + } else { + throw new RuntimeException(s"Connection failed: ${upgrade.response.status}") + } + } + + // in a real application you would not side effect here + connected.onComplete(println) + closed.foreach(_ => println("closed")) } } - - // in a real application you would not side effect here - connected.onComplete(println) - closed.foreach(_ => println("closed")) - //#WebSocket-client-flow } diff --git a/akka-docs/rst/scala/code/docs/http/scaladsl/server/BlockingInHttpExamplesSpec.scala b/akka-docs/rst/scala/code/docs/http/scaladsl/server/BlockingInHttpExamplesSpec.scala new file mode 100644 index 0000000000..7c06c9ede2 --- /dev/null +++ b/akka-docs/rst/scala/code/docs/http/scaladsl/server/BlockingInHttpExamplesSpec.scala @@ -0,0 +1,53 @@ +/** + * Copyright (C) 2009-2016 Lightbend Inc. + */ +package docs.http.scaladsl.server + +import akka.actor.ActorSystem +import akka.http.scaladsl.server.{ Directives, Route } +import docs.CompileOnlySpec +import org.scalatest.WordSpec + +import scala.concurrent.Future + +class BlockingInHttpExamplesSpec extends WordSpec with CompileOnlySpec + with Directives { + + compileOnlySpec { + val system: ActorSystem = ??? + + //#blocking-example-in-default-dispatcher + // BAD (due to blocking in Future, on default dispatcher) + implicit val defaultDispatcher = system.dispatcher + + val routes: Route = post { + complete { + Future { // uses defaultDispatcher + Thread.sleep(5000) // will block on default dispatcher, + System.currentTimeMillis().toString // Starving the routing infrastructure + } + } + } + //# + } + + compileOnlySpec { + val system: ActorSystem = ??? + + //#blocking-example-in-dedicated-dispatcher + // GOOD (the blocking is now isolated onto a dedicated dispatcher): + implicit val blockingDispatcher = system.dispatchers.lookup("my-blocking-dispatcher") + + val routes: Route = post { + complete { + Future { // uses the good "blocking dispatcher" that we configured, + // instead of the default dispatcher- the blocking is isolated. 
+ Thread.sleep(5000) + System.currentTimeMillis().toString + } + } + } + //# + } + +} diff --git a/akka-docs/rst/scala/code/docs/http/scaladsl/server/HttpsServerExampleSpec.scala b/akka-docs/rst/scala/code/docs/http/scaladsl/server/HttpsServerExampleSpec.scala index e91fcd0d5e..e3fba4191a 100644 --- a/akka-docs/rst/scala/code/docs/http/scaladsl/server/HttpsServerExampleSpec.scala +++ b/akka-docs/rst/scala/code/docs/http/scaladsl/server/HttpsServerExampleSpec.scala @@ -52,7 +52,7 @@ abstract class HttpsServerExampleSpec extends WordSpec with Matchers tmf.init(ks) val sslContext: SSLContext = SSLContext.getInstance("TLS") - sslContext.init(keyManagerFactory.getKeyManagers, tmf.getTrustManagers, SecureRandom.getInstanceStrong) + sslContext.init(keyManagerFactory.getKeyManagers, tmf.getTrustManagers, new SecureRandom) val https: HttpsConnectionContext = ConnectionContext.https(sslContext) // sets default context to HTTPS – all Http() bound servers for this ActorSystem will use HTTPS from now on diff --git a/akka-docs/rst/scala/code/docs/http/scaladsl/server/WebSocketExampleSpec.scala b/akka-docs/rst/scala/code/docs/http/scaladsl/server/WebSocketExampleSpec.scala index a1c4541ba0..6317359124 100644 --- a/akka-docs/rst/scala/code/docs/http/scaladsl/server/WebSocketExampleSpec.scala +++ b/akka-docs/rst/scala/code/docs/http/scaladsl/server/WebSocketExampleSpec.scala @@ -6,11 +6,11 @@ package docs.http.scaladsl.server import akka.http.scaladsl.model.ws.BinaryMessage import akka.stream.scaladsl.Sink +import docs.CompileOnlySpec import org.scalatest.{ Matchers, WordSpec } -class WebSocketExampleSpec extends WordSpec with Matchers { - "core-example" in { - pending // compile-time only test +class WebSocketExampleSpec extends WordSpec with Matchers with CompileOnlySpec { + "core-example" in compileOnlySpec { //#websocket-example-using-core import akka.actor.ActorSystem import akka.stream.ActorMaterializer @@ -49,7 +49,9 @@ class WebSocketExampleSpec extends WordSpec with Matchers { case Some(upgrade) => upgrade.handleMessages(greeterWebSocketService) case None => HttpResponse(400, entity = "Not a valid websocket request!") } - case _: HttpRequest => HttpResponse(404, entity = "Unknown resource!") + case r: HttpRequest => + r.discardEntityBytes() // important to drain incoming HTTP Entity stream + HttpResponse(404, entity = "Unknown resource!") } //#websocket-request-handling @@ -64,8 +66,7 @@ class WebSocketExampleSpec extends WordSpec with Matchers { .flatMap(_.unbind()) // trigger unbinding from the port .onComplete(_ => system.terminate()) // and shutdown when done } - "routing-example" in { - pending // compile-time only test + "routing-example" in compileOnlySpec { import akka.actor.ActorSystem import akka.stream.ActorMaterializer import akka.stream.scaladsl.{ Source, Flow } @@ -85,6 +86,7 @@ class WebSocketExampleSpec extends WordSpec with Matchers { .collect { case tm: TextMessage => TextMessage(Source.single("Hello ") ++ tm.textStream) // ignore binary messages + // TODO #20096 in case a Streamed message comes in, we should runWith(Sink.ignore) its data } //#websocket-routing diff --git a/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala b/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala index 64b292a3a5..0b5ff6247e 100644 --- a/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala +++ 
b/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala @@ -303,13 +303,11 @@ class BasicDirectivesExamplesSpec extends RoutingSpec { "mapRouteResult" in { //#mapRouteResult // this directive is a joke, don't do that :-) - val makeEverythingOk = mapRouteResult { r => - r match { - case Complete(response) => - // "Everything is OK!" - Complete(response.copy(status = 200)) - case _ => r - } + val makeEverythingOk = mapRouteResult { + case Complete(response) => + // "Everything is OK!" + Complete(response.copy(status = 200)) + case r => r } val route = @@ -591,11 +589,9 @@ class BasicDirectivesExamplesSpec extends RoutingSpec { //#mapRouteResultWith-0 case object MyCustomRejection extends Rejection val rejectRejections = // not particularly useful directive - mapRouteResultWith { res => - res match { - case Rejected(_) => Future(Rejected(List(AuthorizationFailedRejection))) - case _ => Future(res) - } + mapRouteResultWith { + case Rejected(_) => Future(Rejected(List(AuthorizationFailedRejection))) + case res => Future(res) } val route = rejectRejections { @@ -694,7 +690,7 @@ class BasicDirectivesExamplesSpec extends RoutingSpec { // tests: Get("/") ~> route ~> check { - responseAs[String] shouldEqual s"RoutingSettings.renderVanityFooter = true" + responseAs[String] shouldEqual "RoutingSettings.renderVanityFooter = true" } //# } @@ -767,7 +763,7 @@ class BasicDirectivesExamplesSpec extends RoutingSpec { pathPrefix("123") { ignoring456 { path("abc") { - complete(s"Content") + complete("Content") } } } @@ -799,5 +795,36 @@ class BasicDirectivesExamplesSpec extends RoutingSpec { } //# } + "extractRequestEntity-example" in { + //#extractRequestEntity-example + val route = + extractRequestEntity { entity => + complete(s"Request entity content-type is ${entity.contentType}") + } + + // tests: + val httpEntity = HttpEntity(ContentTypes.`text/plain(UTF-8)`, "req") + Post("/abc", httpEntity) ~> route ~> check { + responseAs[String] shouldEqual s"Request entity content-type is text/plain; charset=UTF-8" + } + //# + } + "extractDataBytes-example" in { + //#extractDataBytes-example + val route = + extractDataBytes { data ⇒ + val sum = data.runFold(0) { (acc, i) ⇒ acc + i.utf8String.toInt } + onSuccess(sum) { s ⇒ + complete(HttpResponse(entity = HttpEntity(s.toString))) + } + } + + // tests: + val dataBytes = Source.fromIterator(() ⇒ Iterator.range(1, 10).map(x ⇒ ByteString(x.toString))) + Post("/abc", HttpEntity(ContentTypes.`text/plain(UTF-8)`, data = dataBytes)) ~> route ~> check { + responseAs[String] shouldEqual "45" + } + //# + } } diff --git a/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/FileAndResourceDirectivesExamplesSpec.scala b/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/FileAndResourceDirectivesExamplesSpec.scala index f22a8e2bd4..53bee2bb06 100644 --- a/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/FileAndResourceDirectivesExamplesSpec.scala +++ b/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/FileAndResourceDirectivesExamplesSpec.scala @@ -18,7 +18,7 @@ class FileAndResourceDirectivesExamplesSpec extends RoutingSpec { val route = path("logs" / Segment) { name => - getFromFile(".log") // uses implicit ContentTypeResolver + getFromFile(s"$name.log") // uses implicit ContentTypeResolver } // tests: @@ -32,7 +32,7 @@ class FileAndResourceDirectivesExamplesSpec extends RoutingSpec { val route = path("logs" / Segment) { name => - getFromResource(".log") // uses implicit ContentTypeResolver + 
getFromResource(s"$name.log") // uses implicit ContentTypeResolver } // tests: @@ -46,6 +46,7 @@ class FileAndResourceDirectivesExamplesSpec extends RoutingSpec { listDirectoryContents("/tmp") } ~ path("custom") { + // implement your custom renderer here val renderer = new DirectoryRenderer { override def marshaller(renderVanityFooter: Boolean): ToEntityMarshaller[DirectoryListing] = ??? } diff --git a/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/MiscDirectivesExamplesSpec.scala b/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/MiscDirectivesExamplesSpec.scala index dfa22e6eca..49ca6be86b 100644 --- a/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/MiscDirectivesExamplesSpec.scala +++ b/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/MiscDirectivesExamplesSpec.scala @@ -22,6 +22,7 @@ class MiscDirectivesExamplesSpec extends RoutingSpec { responseAs[String] shouldEqual "Client's ip is 192.168.3.12" } } + "rejectEmptyResponse-example" in { val route = rejectEmptyResponse { path("even" / IntNumber) { i => @@ -42,6 +43,7 @@ class MiscDirectivesExamplesSpec extends RoutingSpec { responseAs[String] shouldEqual "Number 28 is even." } } + "requestEntityEmptyPresent-example" in { val route = requestEntityEmpty { @@ -59,6 +61,7 @@ class MiscDirectivesExamplesSpec extends RoutingSpec { responseAs[String] shouldEqual "request entity empty" } } + "selectPreferredLanguage-example" in { val request = Get() ~> `Accept-Language`( Language("en-US"), @@ -78,6 +81,7 @@ class MiscDirectivesExamplesSpec extends RoutingSpec { } } ~> check { responseAs[String] shouldEqual "de-DE" } } + "validate-example" in { val route = extractUri { uri => @@ -94,4 +98,84 @@ class MiscDirectivesExamplesSpec extends RoutingSpec { rejection shouldEqual ValidationRejection("Path too long: '/abcdefghijkl'", None) } } + + "withSizeLimit-example" in { + val route = withSizeLimit(500) { + entity(as[String]) { _ ⇒ + complete(HttpResponse()) + } + } + + // tests: + def entityOfSize(size: Int) = + HttpEntity(ContentTypes.`text/plain(UTF-8)`, "0" * size) + + Post("/abc", entityOfSize(500)) ~> route ~> check { + status shouldEqual StatusCodes.OK + } + + Post("/abc", entityOfSize(501)) ~> Route.seal(route) ~> check { + status shouldEqual StatusCodes.BadRequest + } + + } + + "withSizeLimit-execution-moment-example" in { + val route = withSizeLimit(500) { + complete(HttpResponse()) + } + + // tests: + def entityOfSize(size: Int) = + HttpEntity(ContentTypes.`text/plain(UTF-8)`, "0" * size) + + Post("/abc", entityOfSize(500)) ~> route ~> check { + status shouldEqual StatusCodes.OK + } + + Post("/abc", entityOfSize(501)) ~> route ~> check { + status shouldEqual StatusCodes.OK + } + } + + "withSizeLimit-nested-example" in { + val route = + withSizeLimit(500) { + withSizeLimit(800) { + entity(as[String]) { _ ⇒ + complete(HttpResponse()) + } + } + } + + // tests: + def entityOfSize(size: Int) = + HttpEntity(ContentTypes.`text/plain(UTF-8)`, "0" * size) + Post("/abc", entityOfSize(800)) ~> route ~> check { + status shouldEqual StatusCodes.OK + } + + Post("/abc", entityOfSize(801)) ~> Route.seal(route) ~> check { + status shouldEqual StatusCodes.BadRequest + } + } + + "withoutSizeLimit-example" in { + val route = + withoutSizeLimit { + entity(as[String]) { _ ⇒ + complete(HttpResponse()) + } + } + + // tests: + def entityOfSize(size: Int) = + HttpEntity(ContentTypes.`text/plain(UTF-8)`, "0" * size) + + // will work even if you have configured akka.http.parsing.max-content-length = 500 + 
Post("/abc", entityOfSize(501)) ~> route ~> check { + status shouldEqual StatusCodes.OK + } + } + } diff --git a/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/RouteDirectivesExamplesSpec.scala b/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/RouteDirectivesExamplesSpec.scala index be19949343..df10fd21c5 100644 --- a/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/RouteDirectivesExamplesSpec.scala +++ b/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/RouteDirectivesExamplesSpec.scala @@ -22,10 +22,10 @@ class RouteDirectivesExamplesSpec extends RoutingSpec { complete(StatusCodes.OK) } ~ path("c") { - complete(StatusCodes.Created, "bar") + complete(StatusCodes.Created -> "bar") } ~ path("d") { - complete(201, "bar") + complete(201 -> "bar") } ~ path("e") { complete(StatusCodes.Created, List(`Content-Type`(`text/plain(UTF-8)`)), "bar") diff --git a/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/TimeoutDirectivesExamplesSpec.scala b/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/TimeoutDirectivesExamplesSpec.scala index f0bda7181a..415219fb77 100644 --- a/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/TimeoutDirectivesExamplesSpec.scala +++ b/akka-docs/rst/scala/code/docs/http/scaladsl/server/directives/TimeoutDirectivesExamplesSpec.scala @@ -5,24 +5,65 @@ package docs.http.scaladsl.server.directives import akka.http.scaladsl.model.{ HttpResponse, StatusCodes } -import akka.http.scaladsl.server.RoutingSpec +import akka.http.scaladsl.server.Route import docs.CompileOnlySpec - +import akka.http.scaladsl.{ Http, TestUtils } +import akka.http.scaladsl.server.Directives._ +import akka.stream.ActorMaterializer +import akka.http.scaladsl.model.HttpEntity._ +import akka.http.scaladsl.model._ +import com.typesafe.config.{ Config, ConfigFactory } +import org.scalatest.concurrent.ScalaFutures import scala.concurrent.duration._ import scala.concurrent.{ Future, Promise } +import akka.testkit.AkkaSpec -class TimeoutDirectivesExamplesSpec extends RoutingSpec with CompileOnlySpec { +private[this] object TimeoutDirectivesTestConfig { + val testConf: Config = ConfigFactory.parseString(""" + akka.loggers = ["akka.testkit.TestEventListener"] + akka.loglevel = ERROR + akka.stdout-loglevel = ERROR + windows-connection-abort-workaround-enabled = auto + akka.log-dead-letters = OFF + akka.http.server.request-timeout = 1000s""") + // large timeout - 1000s (please note - setting to infinite will disable Timeout-Access header + // and withRequestTimeout will not work) +} + +class TimeoutDirectivesExamplesSpec extends AkkaSpec(TimeoutDirectivesTestConfig.testConf) + with ScalaFutures with CompileOnlySpec { + //#testSetup + import system.dispatcher + implicit val materializer = ActorMaterializer() + + def slowFuture(): Future[String] = Promise[String].future // move to Future.never in Scala 2.12 + + def runRoute(route: Route, routePath: String): HttpResponse = { + val (_, hostname, port) = TestUtils.temporaryServerHostnameAndPort() + val binding = Http().bindAndHandle(route, hostname, port) + + val response = Http().singleRequest(HttpRequest(uri = s"http://$hostname:$port/$routePath")).futureValue + + binding.flatMap(_.unbind()).futureValue + + response + } + + //# "Request Timeout" should { - "be configurable in routing layer" in compileOnlySpec { + "be configurable in routing layer" in { //#withRequestTimeout-plain val route = path("timeout") { - withRequestTimeout(3.seconds) { + withRequestTimeout(1.seconds) { // 
modifies the global akka.http.server.request-timeout for this request val response: Future[String] = slowFuture() // very slow complete(response) } } + + // check + runRoute(route, "timeout").status should ===(StatusCodes.ServiceUnavailable) // the timeout response //# } "without timeout" in compileOnlySpec { @@ -34,14 +75,16 @@ class TimeoutDirectivesExamplesSpec extends RoutingSpec with CompileOnlySpec { complete(response) } } + + // no check as there is no time-out, the future would time out failing the test //# } - "allow mapping the response while setting the timeout" in compileOnlySpec { + "allow mapping the response while setting the timeout" in { //#withRequestTimeout-with-handler val timeoutResponse = HttpResponse( StatusCodes.EnhanceYourCalm, - entity = "Unable to serve response within time limit, please enchance your calm.") + entity = "Unable to serve response within time limit, please enhance your calm.") val route = path("timeout") { @@ -51,30 +94,33 @@ class TimeoutDirectivesExamplesSpec extends RoutingSpec with CompileOnlySpec { complete(response) } } + + // check + runRoute(route, "timeout").status should ===(StatusCodes.EnhanceYourCalm) // the timeout response //# } + // make it compile only to avoid flaking in slow builds "allow mapping the response" in compileOnlySpec { - pending // compile only spec since requires actuall Http server to be run - //#withRequestTimeoutResponse val timeoutResponse = HttpResponse( StatusCodes.EnhanceYourCalm, - entity = "Unable to serve response within time limit, please enchance your calm.") + entity = "Unable to serve response within time limit, please enhance your calm.") val route = path("timeout") { - withRequestTimeout(1.milli) { + withRequestTimeout(100.milli) { // racy! for a very short timeout like 1.milli you can still get 503 withRequestTimeoutResponse(request => timeoutResponse) { val response: Future[String] = slowFuture() // very slow complete(response) } } } + + // check + runRoute(route, "timeout").status should ===(StatusCodes.EnhanceYourCalm) // the timeout response //# } } - def slowFuture(): Future[String] = Promise[String].future - } diff --git a/akka-docs/rst/scala/code/docs/stream/BidiFlowDocSpec.scala b/akka-docs/rst/scala/code/docs/stream/BidiFlowDocSpec.scala index d69e1ca39b..f8bd88e7ba 100644 --- a/akka-docs/rst/scala/code/docs/stream/BidiFlowDocSpec.scala +++ b/akka-docs/rst/scala/code/docs/stream/BidiFlowDocSpec.scala @@ -93,9 +93,12 @@ object BidiFlowDocSpec { } override def onUpstreamFinish(): Unit = { + // either we are done if (stash.isEmpty) completeStage() + // or we still have bytes to emit // wait with completion and let run() complete when the // rest of the stash has been sent downstream + else if (isAvailable(out)) run() } }) diff --git a/akka-docs/rst/scala/code/docs/stream/KillSwitchDocSpec.scala b/akka-docs/rst/scala/code/docs/stream/KillSwitchDocSpec.scala new file mode 100644 index 0000000000..b7a8b04201 --- /dev/null +++ b/akka-docs/rst/scala/code/docs/stream/KillSwitchDocSpec.scala @@ -0,0 +1,108 @@ +package docs.stream + +import akka.stream.scaladsl._ +import akka.stream.{ ActorMaterializer, DelayOverflowStrategy, KillSwitches } +import akka.testkit.AkkaSpec +import docs.CompileOnlySpec + +import scala.concurrent.Await +import scala.concurrent.duration._ + +class KillSwitchDocSpec extends AkkaSpec with CompileOnlySpec { + + implicit val materializer = ActorMaterializer() + + "Unique kill switch" must { + + "control graph completion with shutdown" in compileOnlySpec { + + // format: OFF + 
//#unique-shutdown + val countingSrc = Source(Stream.from(1)).delay(1.second, DelayOverflowStrategy.backpressure) + val lastSnk = Sink.last[Int] + + val (killSwitch, last) = countingSrc + .viaMat(KillSwitches.single)(Keep.right) + .toMat(lastSnk)(Keep.both) + .run() + + doSomethingElse() + + killSwitch.shutdown() + + Await.result(last, 1.second) shouldBe 2 + //#unique-shutdown + // format: ON + } + + "control graph completion with abort" in compileOnlySpec { + + // format: OFF + //#unique-abort + val countingSrc = Source(Stream.from(1)).delay(1.second, DelayOverflowStrategy.backpressure) + val lastSnk = Sink.last[Int] + + val (killSwitch, last) = countingSrc + .viaMat(KillSwitches.single)(Keep.right) + .toMat(lastSnk)(Keep.both).run() + + val error = new RuntimeException("boom!") + killSwitch.abort(error) + + Await.result(last.failed, 1.second) shouldBe error + //#unique-abort + // format: ON + } + } + + "Shared kill switch" must { + + "control graph completion with shutdown" in compileOnlySpec { + // format: OFF + //#shared-shutdown + val countingSrc = Source(Stream.from(1)).delay(1.second, DelayOverflowStrategy.backpressure) + val lastSnk = Sink.last[Int] + val sharedKillSwitch = KillSwitches.shared("my-kill-switch") + + val last = countingSrc + .via(sharedKillSwitch.flow) + .runWith(lastSnk) + + val delayedLast = countingSrc + .delay(1.second, DelayOverflowStrategy.backpressure) + .via(sharedKillSwitch.flow) + .runWith(lastSnk) + + doSomethingElse() + + sharedKillSwitch.shutdown() + + Await.result(last, 1.second) shouldBe 2 + Await.result(delayedLast, 1.second) shouldBe 1 + //#shared-shutdown + // format: ON + } + + "control graph completion with abort" in compileOnlySpec { + + // format: OFF + //#shared-abort + val countingSrc = Source(Stream.from(1)).delay(1.second) + val lastSnk = Sink.last[Int] + val sharedKillSwitch = KillSwitches.shared("my-kill-switch") + + val last1 = countingSrc.via(sharedKillSwitch.flow).runWith(lastSnk) + val last2 = countingSrc.via(sharedKillSwitch.flow).runWith(lastSnk) + + val error = new RuntimeException("boom!") + sharedKillSwitch.abort(error) + + Await.result(last1.failed, 1.second) shouldBe error + Await.result(last2.failed, 1.second) shouldBe error + //#shared-abort + // format: ON + } + } + + private def doSomethingElse() = ??? 
+} diff --git a/akka-docs/rst/scala/code/docs/stream/QuickStartDocSpec.scala b/akka-docs/rst/scala/code/docs/stream/QuickStartDocSpec.scala index bf25549841..3f8c68de04 100644 --- a/akka-docs/rst/scala/code/docs/stream/QuickStartDocSpec.scala +++ b/akka-docs/rst/scala/code/docs/stream/QuickStartDocSpec.scala @@ -3,19 +3,22 @@ */ package docs.stream -//#imports +//#stream-imports import akka.stream._ import akka.stream.scaladsl._ -//#imports +//#stream-imports + +//#other-imports import akka.{ NotUsed, Done } import akka.actor.ActorSystem import akka.util.ByteString - -import org.scalatest._ -import org.scalatest.concurrent._ import scala.concurrent._ import scala.concurrent.duration._ import java.nio.file.Paths +//#other-imports + +import org.scalatest._ +import org.scalatest.concurrent._ class QuickStartDocSpec extends WordSpec with BeforeAndAfterAll with ScalaFutures { implicit val patience = PatienceConfig(5.seconds) diff --git a/akka-docs/rst/scala/http/DispatcherBehaviourOnBadCode.png b/akka-docs/rst/scala/http/DispatcherBehaviourOnBadCode.png new file mode 100644 index 0000000000..8cfa3b8a8c Binary files /dev/null and b/akka-docs/rst/scala/http/DispatcherBehaviourOnBadCode.png differ diff --git a/akka-docs/rst/scala/http/DispatcherBehaviourOnGoodCode.png b/akka-docs/rst/scala/http/DispatcherBehaviourOnGoodCode.png new file mode 100644 index 0000000000..c764b2f737 Binary files /dev/null and b/akka-docs/rst/scala/http/DispatcherBehaviourOnGoodCode.png differ diff --git a/akka-docs/rst/scala/http/DispatcherBehaviourProperBlocking.png b/akka-docs/rst/scala/http/DispatcherBehaviourProperBlocking.png new file mode 100644 index 0000000000..cdcd1f8ad5 Binary files /dev/null and b/akka-docs/rst/scala/http/DispatcherBehaviourProperBlocking.png differ diff --git a/akka-docs/rst/scala/http/client-side/connection-level.rst b/akka-docs/rst/scala/http/client-side/connection-level.rst index ab775cd7fe..8f3e1d0468 100644 --- a/akka-docs/rst/scala/http/client-side/connection-level.rst +++ b/akka-docs/rst/scala/http/client-side/connection-level.rst @@ -7,6 +7,10 @@ The connection-level API is the lowest-level client-side API Akka HTTP provides. HTTP connections are opened and closed and how requests are to be send across which connection. As such it offers the highest flexibility at the cost of providing the least convenience. +.. note:: + It is recommended to first read the :ref:`implications-of-streaming-http-entities` section, + as it explains the underlying full-stack streaming concepts, which may be unexpected when coming + from a background with non-"streaming first" HTTP Clients. Opening HTTP Connections ------------------------ @@ -90,4 +94,4 @@ On the client-side the stand-alone HTTP layer forms a ``BidiStage`` that is defi :snippet: client-layer You create an instance of ``Http.ClientLayer`` by calling one of the two overloads of the ``Http().clientLayer`` method, -which also allows for varying degrees of configuration. \ No newline at end of file +which also allows for varying degrees of configuration. diff --git a/akka-docs/rst/scala/http/client-side/host-level.rst b/akka-docs/rst/scala/http/client-side/host-level.rst index 6a5b6ef84b..629af2b26f 100644 --- a/akka-docs/rst/scala/http/client-side/host-level.rst +++ b/akka-docs/rst/scala/http/client-side/host-level.rst @@ -7,6 +7,10 @@ As opposed to the :ref:`connection-level-api` the host-level API relieves you fr connections. It autonomously manages a configurable pool of connections to *one particular target endpoint* (i.e. 
host/port combination). +.. note:: + It is recommended to first read the :ref:`implications-of-streaming-http-entities` section, + as it explains the underlying full-stack streaming concepts, which may be unexpected when coming + from a background with non-"streaming first" HTTP Clients. Requesting a Host Connection Pool --------------------------------- @@ -153,4 +157,4 @@ Example ------- .. includecode:: ../../code/docs/http/scaladsl/HttpClientExampleSpec.scala - :include: host-level-example \ No newline at end of file + :include: host-level-example diff --git a/akka-docs/rst/scala/http/client-side/index.rst b/akka-docs/rst/scala/http/client-side/index.rst index c0b1f9376e..0a55263911 100644 --- a/akka-docs/rst/scala/http/client-side/index.rst +++ b/akka-docs/rst/scala/http/client-side/index.rst @@ -6,6 +6,10 @@ Consuming HTTP-based Services (Client-Side) All client-side functionality of Akka HTTP, for consuming HTTP-based services offered by other endpoints, is currently provided by the ``akka-http-core`` module. +It is recommended to first read the :ref:`implications-of-streaming-http-entities` section, +as it explains the underlying full-stack streaming concepts, which may be unexpected when coming +from a background with non-"streaming first" HTTP Clients. + Depending on your application's specific needs you can choose from three different API levels: :ref:`connection-level-api` @@ -28,4 +32,4 @@ Akka HTTP will happily handle many thousand concurrent connections to a single o host-level request-level client-https-support - websocket-support \ No newline at end of file + websocket-support diff --git a/akka-docs/rst/scala/http/client-side/request-level.rst b/akka-docs/rst/scala/http/client-side/request-level.rst index 5061b124bc..8f1759b2be 100644 --- a/akka-docs/rst/scala/http/client-side/request-level.rst +++ b/akka-docs/rst/scala/http/client-side/request-level.rst @@ -7,6 +7,11 @@ The request-level API is the most convenient way of using Akka HTTP's client-sid :ref:`host-level-api` to provide you with a simple and easy-to-use way of retrieving HTTP responses from remote servers. Depending on your preference you can pick the flow-based or the future-based variant. +.. note:: + It is recommended to first read the :ref:`implications-of-streaming-http-entities` section, + as it explains the underlying full-stack streaming concepts, which may be unexpected when coming + from a background with non-"streaming first" HTTP Clients. + .. note:: The request-level API is implemented on top of a connection pool that is shared inside the ActorSystem. A consequence of using a pool is that long-running requests block a connection while running and starve other requests. Make sure not to use diff --git a/akka-docs/rst/scala/http/common/marshalling.rst b/akka-docs/rst/scala/http/common/marshalling.rst index 487fe73e3b..c97867d637 100644 --- a/akka-docs/rst/scala/http/common/marshalling.rst +++ b/akka-docs/rst/scala/http/common/marshalling.rst @@ -118,7 +118,7 @@ If, however, your marshaller also needs to set things like the response status c or any headers then a ``ToEntityMarshaller[T]`` won't work. You'll need to fall down to providing a ``ToResponseMarshaller[T]`` or a ``ToRequestMarshaller[T]`` directly. -For writing you own marshallers you won't have to "manually" implement the ``Marshaller`` trait directly. +For writing your own marshallers you won't have to "manually" implement the ``Marshaller`` trait directly. 
Rather, it should be possible to use one of the convenience construction helpers defined on the ``Marshaller``
 companion:
diff --git a/akka-docs/rst/scala/http/common/unmarshalling.rst b/akka-docs/rst/scala/http/common/unmarshalling.rst
index 9f6426335c..be1e772e84 100644
--- a/akka-docs/rst/scala/http/common/unmarshalling.rst
+++ b/akka-docs/rst/scala/http/common/unmarshalling.rst
@@ -76,7 +76,7 @@ Custom Unmarshallers
 Akka HTTP gives you a few convenience tools for constructing unmarshallers for your own types.
 Usually you won't have to "manually" implement the ``Unmarshaller`` trait directly.
-Rather, it should be possible to use one of the convenience construction helpers defined on the ``Marshaller``
+Rather, it should be possible to use one of the convenience construction helpers defined on the ``Unmarshaller``
 companion:
 
 .. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/unmarshalling/Unmarshaller.scala
diff --git a/akka-docs/rst/scala/http/handling-blocking-operations-in-akka-http-routes.rst b/akka-docs/rst/scala/http/handling-blocking-operations-in-akka-http-routes.rst
new file mode 100644
index 0000000000..3dd622e2da
--- /dev/null
+++ b/akka-docs/rst/scala/http/handling-blocking-operations-in-akka-http-routes.rst
@@ -0,0 +1,120 @@
+.. _handling-blocking-in-http-routes-scala:
+
+Handling blocking operations in Akka HTTP
+=========================================
+Sometimes it is difficult to avoid blocking operations, and there is a good chance
+that the blocking happens inside a ``Future`` execution, which may
+lead to problems. It is important to handle blocking operations correctly.
+
+Problem
+-------
+Using ``context.dispatcher`` as the dispatcher on which the blocking ``Future``
+executes can be a problem. The same dispatcher is used by the routing
+infrastructure to actually handle the incoming requests.
+
+If all of the available threads are blocked, the routing infrastructure will end up *starving*.
+Therefore, the routing infrastructure should not be blocked. Instead, a dedicated dispatcher
+for blocking operations should be used.
+
+.. note::
+  Blocking APIs should also be avoided if possible. Try to find or build Reactive APIs,
+  such that blocking is minimised, or moved over to dedicated dispatchers.
+
+  Often when integrating with existing libraries or systems it is not possible to
+  avoid blocking APIs; the following solution explains how to handle blocking
+  operations properly.
+
+  Note that the same hints apply to managing blocking operations anywhere in Akka,
+  including in Actors etc.
+
+In the thread state diagrams below the colours have the following meaning:
+
+* Turquoise - Sleeping state
+* Orange - Waiting state
+* Green - Runnable state
+
+The thread information was recorded using the YourKit profiler, however any good JVM profiler
+has this feature (including the free VisualVM bundled with the Oracle JDK, as well as Oracle Flight Recorder).
+
+Problem example: blocking the default dispatcher
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. includecode2:: ../code/docs/http/scaladsl/server/BlockingInHttpExamplesSpec.scala
+   :snippet: blocking-example-in-default-dispatcher
+
+Here the app is exposed to a load of continuous GET requests, and a large number
+of ``akka.actor.default-dispatcher`` threads are handling the requests. The orange
+portions of the threads show that they are idle. Idle threads are fine:
+they're ready to accept new work. However, large numbers of turquoise (sleeping) threads are very bad!
+
+.. image:: DispatcherBehaviourOnBadCode.png
+
+After some time, the app is exposed to a load of POST requests,
+which will block these threads. For example, "``default-akka.default-dispatcher2,3,4``"
+go into the blocking state after having been idle before. It can be observed
+that the number of new threads increases, "``default-akka.actor.default-dispatcher 18,19,20,...``",
+however they go to the sleeping state immediately, thus wasting
+resources.
+
+The number of such new threads depends on the default dispatcher configuration,
+but will likely not exceed 50. Since many POST requests are being made, the entire
+thread pool is starved. The blocking operations dominate such that the routing
+infrastructure has no threads available to handle the other requests.
+
+In essence, the ``Thread.sleep`` has dominated all threads and caused anything
+executing on the default dispatcher to starve for resources (including any Actors
+for which you have not configured an explicit dispatcher).
+
+Solution: Dedicated dispatcher for blocking operations
+------------------------------------------------------
+
+In ``application.conf``, the dispatcher dedicated to blocking behaviour should
+be configured as follows::
+
+  my-blocking-dispatcher {
+    type = Dispatcher
+    executor = "thread-pool-executor"
+    thread-pool-executor {
+      // in Akka 2.4.2+ the pool size can be set with fixed-pool-size
+      fixed-pool-size = 16
+    }
+    throughput = 100
+  }
+
+There are many dispatcher options available, which can be found in :ref:`dispatchers-scala`.
+
+Here the ``thread-pool-executor`` is used, which puts a hard limit on the number of threads
+it keeps available for blocking operations. The size settings depend on the app's
+functionality and the number of cores the server has.
+
+Whenever blocking has to be done, use the above configured dispatcher
+instead of the default one:
+
+.. includecode2:: ../code/docs/http/scaladsl/server/BlockingInHttpExamplesSpec.scala
+   :snippet: blocking-example-in-dedicated-dispatcher
+
+The app is then exposed to the same load: initially normal requests, and then
+the blocking requests. The thread pool behaviour is shown in the figure below.
+
+.. image:: DispatcherBehaviourOnGoodCode.png
+
+Initially, the normal requests are easily handled by the default dispatcher; the
+green lines represent the actual execution.
+
+When blocking operations are issued, the ``my-blocking-dispatcher``
+starts up to the configured number of threads. It handles the sleeping calls. After
+a certain period of nothing happening on those threads, it shuts them down.
+
+If another batch of blocking operations has to be done, the pool will start new
+threads that will take care of putting them into the sleeping state, but the
+threads of the default dispatcher are not wasted.
+
+In this case, the throughput of the normal GET requests is not impacted;
+they are still served on the default dispatcher.
+
+This is the recommended way of dealing with any kind of blocking in reactive
+applications. It is referred to as "bulkheading" or "isolating" the badly behaving
+parts of an app; in this case, the blocking operations.
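+
+As the note above mentions, the same hints apply to blocking done from within Actors.
+A minimal, illustrative sketch of the same isolation technique there (the ``BlockingWorker``
+actor is hypothetical and not part of the included snippets; it assumes the
+``my-blocking-dispatcher`` configured above)::
+
+  import akka.actor.{ Actor, ActorSystem, Props }
+
+  class BlockingWorker extends Actor {
+    def receive = {
+      case work: String =>
+        Thread.sleep(5000) // blocking call, isolated on the dedicated dispatcher
+        sender() ! s"done: $work"
+    }
+  }
+
+  val system = ActorSystem()
+  // run the actor on the dedicated dispatcher instead of the default one
+  val worker = system.actorOf(
+    Props[BlockingWorker].withDispatcher("my-blocking-dispatcher"))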
+
+There is good documentation available in the Akka docs section,
+`Blocking needs careful management `_.
diff --git a/akka-docs/rst/scala/http/implications-of-streaming-http-entity.rst b/akka-docs/rst/scala/http/implications-of-streaming-http-entity.rst
new file mode 100644
index 0000000000..d6a0403eef
--- /dev/null
+++ b/akka-docs/rst/scala/http/implications-of-streaming-http-entity.rst
@@ -0,0 +1,129 @@
+.. _implications-of-streaming-http-entities:
+
+Implications of the streaming nature of Request/Response Entities
+-----------------------------------------------------------------
+
+Akka HTTP is streaming *all the way through*, which means that the back-pressure mechanisms enabled by Akka Streams
+are exposed through all layers–from the TCP layer, through the HTTP server, all the way up to the user-facing ``HttpRequest``
+and ``HttpResponse`` and their ``HttpEntity`` APIs.
+
+This has surprising implications if you are used to non-streaming / not-reactive HTTP clients.
+Specifically it means that: "*lack of consumption of the HTTP Entity is signaled as back-pressure to the other
+side of the connection*". This is a feature, as it allows one to consume the entity at one's own pace, and to
+back-pressure servers/clients so they do not overwhelm our application, which could otherwise cause unnecessary
+buffering of the entity in memory.
+
+.. warning::
+  Consuming (or discarding) the Entity of a request is mandatory!
+  If *accidentally* left neither consumed nor discarded, Akka HTTP will
+  assume the incoming data should remain back-pressured, and will stall the incoming data via TCP back-pressure mechanisms.
+
+Client-Side handling of streaming HTTP Entities
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Consuming the HTTP Response Entity (Client)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The most common use-case of course is consuming the response entity, which can be done via
+running the underlying ``dataBytes`` Source (or, on the server-side, using directives such as ``extractDataBytes``).
+
+It is encouraged to use various streaming techniques to utilise the underlying infrastructure to its fullest,
+for example by framing the incoming chunks, parsing them line-by-line and then connecting the flow into another
+destination Sink, such as a File or other Akka Streams connector:
+
+.. includecode:: ../code/docs/http/scaladsl/HttpClientExampleSpec.scala
+   :include: manual-entity-consume-example-1
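+
+For illustration only, such a pipeline could look roughly like the following sketch
+(assuming a ``response: HttpResponse`` and an implicit materializer in scope; the
+per-line transformation and target path are made up)::
+
+  import java.nio.file.Paths
+  import akka.stream.scaladsl.{ FileIO, Framing }
+  import akka.util.ByteString
+
+  response.entity.dataBytes
+    .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256)) // frame by lines
+    .map(line => ByteString(line.utf8String.toUpperCase + "\n"))        // transform each line
+    .runWith(FileIO.toPath(Paths.get("/tmp/example.out")))              // connect into a file Sink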
+
+However, sometimes the need may arise to consume the entire entity as a ``Strict`` entity (which means that it is
+completely loaded into memory). Akka HTTP provides a special ``toStrict(timeout)`` method which can be used to
+eagerly consume the entity and make it available in memory:
+
+.. includecode:: ../code/docs/http/scaladsl/HttpClientExampleSpec.scala
+   :include: manual-entity-consume-example-2
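+
+In spirit this boils down to something like the following sketch (assuming a
+``response: HttpResponse`` plus an implicit materializer and execution context in scope)::
+
+  import scala.concurrent.Future
+  import scala.concurrent.duration._
+  import akka.http.scaladsl.model.HttpEntity
+
+  // load the entire entity into memory, failing if it takes longer than 3 seconds
+  val strictEntity: Future[HttpEntity.Strict] = response.entity.toStrict(3.seconds)
+  val body: Future[String] = strictEntity.map(_.data.utf8String)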
+
+Pending: Automatic discarding of not used entities
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Under certain conditions it is possible to detect that an entity is very unlikely to be used by the user for a given request,
+and issue warnings or discard the entity automatically. This advanced feature has not been implemented yet, see the below
+note and issues for further discussion and ideas.
+
+.. note::
+  An advanced feature codenamed "auto draining" has been discussed and proposed for Akka HTTP, and we're hoping
+  to implement or help the community implement it.
+
+  You can read more about it in `issue #18716 `_
+  as well as `issue #18540 `_ ; as always, contributions are very welcome!
+
diff --git a/akka-docs/rst/scala/http/index.rst b/akka-docs/rst/scala/http/index.rst
index 91b4858454..570f59eb3e 100644
--- a/akka-docs/rst/scala/http/index.rst
+++ b/akka-docs/rst/scala/http/index.rst
@@ -9,9 +9,12 @@ Akka HTTP
    introduction
    configuration
    common/index
+   implications-of-streaming-http-entity
    low-level-server-side-api
    routing-dsl/index
    client-side/index
    server-side-https-support
+   handling-blocking-operations-in-akka-http-routes
    migration-from-spray
    migration-from-old-http-javadsl
+   migration-guide-2.4.x-experimental
diff --git a/akka-docs/rst/scala/http/introduction.rst b/akka-docs/rst/scala/http/introduction.rst
index 1c46f92c08..453720a88f 100644
--- a/akka-docs/rst/scala/http/introduction.rst
+++ b/akka-docs/rst/scala/http/introduction.rst
@@ -25,11 +25,14 @@ Akka HTTP was designed specifically as “not-a-framework”, not because we don
 Using Akka HTTP
 ---------------
 
-Akka HTTP is provided in separate jar files, to use it make sure to include the following dependencies::
+Akka HTTP is provided in a separate jar file, to use it make sure to include the following dependency::
 
-  "com.typesafe.akka" %% "akka-http-core" % "@version@" @crossString@
   "com.typesafe.akka" %% "akka-http-experimental" % "@version@" @crossString@
 
+Mind that ``akka-http`` comes in two modules: ``akka-http-experimental`` and ``akka-http-core``. Because ``akka-http-experimental``
+depends on ``akka-http-core`` you don't need to bring the latter in explicitly. Still, you may need to do this in case you rely
+solely on the low-level API.
+
 Routing DSL for HTTP servers
 ----------------------------
diff --git a/akka-docs/rst/scala/http/low-level-server-side-api.rst b/akka-docs/rst/scala/http/low-level-server-side-api.rst
index c3b6bf0aed..30bf2b1620 100644
--- a/akka-docs/rst/scala/http/low-level-server-side-api.rst
+++ b/akka-docs/rst/scala/http/low-level-server-side-api.rst
@@ -40,6 +40,10 @@ Depending on your needs you can either use the low-level API directly or rely on
 :ref:`Routing DSL ` which can make the definition of more complex service logic much
 easier.
 
+.. note::
+  It is recommended to read the :ref:`implications-of-streaming-http-entities` section,
+  as it explains the underlying full-stack streaming concepts, which may be unexpected when coming
+  from a background with non-"streaming first" HTTP Servers.
 
 Streams and HTTP
 ----------------
@@ -123,6 +127,7 @@ See :ref:`HttpEntity-scala` for a description of the alternatives.
 If you rely on the :ref:`http-marshalling-scala` and/or :ref:`http-unmarshalling-scala` facilities provided by
 Akka HTTP then the conversion of custom types to and from streamed entities can be quite convenient.
 
+.. 
_http-closing-connection-low-level:
 
 Closing a connection
 ~~~~~~~~~~~~~~~~~~~~
diff --git a/akka-docs/rst/scala/http/migration-guide-2.4.x-experimental.rst b/akka-docs/rst/scala/http/migration-guide-2.4.x-experimental.rst
new file mode 100644
index 0000000000..23f9e5eeae
--- /dev/null
+++ b/akka-docs/rst/scala/http/migration-guide-2.4.x-experimental.rst
@@ -0,0 +1,26 @@
+Migration Guide between experimental builds of Akka HTTP (2.4.x)
+================================================================
+
+General notes
+-------------
+Please note that Akka HTTP consists of a number of modules, most notably `akka-http-core`
+which is **stable** and won't be breaking compatibility without a proper deprecation cycle,
+and `akka-http` which contains the routing DSLs and is still **experimental**.
+
+The following migration guide explains migration steps to be made between breaking
+versions of the **experimental** part of Akka HTTP.
+
+.. note::
+  Please note that experimental modules are allowed (and are expected to) break compatibility
+  in search of the best API we can offer, before the API is frozen in a stable release.
+
+  Please read :ref:`BinCompatRules` to understand in depth what bin-compat rules are, and where they are applied.
+
+Akka HTTP 2.4.7 -> 2.4.8
+------------------------
+
+`SecurityDirectives#challengeFor` has moved
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+The `challengeFor` directive was actually more like a factory for `HttpChallenge`,
+thus it was moved to become such. It is now available as `akka.http.javadsl.model.headers.HttpChallenge#create[Basic|OAuth2]`
+for JavaDSL and `akka.http.scaladsl.model.headers.HttpChallenges#[basic|oAuth2]` for ScalaDSL.
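+
+As a quick sketch of the migration (the realm string is a placeholder)::
+
+  import akka.http.scaladsl.model.headers.HttpChallenges
+
+  // before 2.4.8: challengeFor("my-realm")
+  // since 2.4.8:
+  val challenge = HttpChallenges.basic("my-realm")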
diff --git a/akka-docs/rst/scala/http/routing-dsl/directives/alphabetically.rst b/akka-docs/rst/scala/http/routing-dsl/directives/alphabetically.rst
index 7e718bff58..abe414440f 100644
--- a/akka-docs/rst/scala/http/routing-dsl/directives/alphabetically.rst
+++ b/akka-docs/rst/scala/http/routing-dsl/directives/alphabetically.rst
@@ -47,6 +47,7 @@ Directive                                   Description
                                             via the ``Accept-Encoding`` header (from a user-defined set)
 :ref:`-entity-`                             Extracts the request entity unmarshalled to a given type
 :ref:`-extract-`                            Extracts a single value using a ``RequestContext ⇒ T`` function
+:ref:`-extractDataBytes-`                   Extracts the entity's data bytes as a stream ``Source[ByteString, Any]``
 :ref:`-extractClientIP-`                    Extracts the client's IP from either the ``X-Forwarded-``, ``Remote-Address`` or
                                             ``X-Real-IP`` header
 :ref:`-extractCredentials-`                 Extracts the potentially present ``HttpCredentials`` provided with the
@@ -58,6 +59,7 @@ Directive                                   Description
 :ref:`-extractMethod-`                      Extracts the request method
 :ref:`-extractRequest-`                     Extracts the current ``HttpRequest`` instance
 :ref:`-extractRequestContext-`              Extracts the ``RequestContext`` itself
+:ref:`-extractRequestEntity-`               Extracts the ``RequestEntity`` from the ``RequestContext``
 :ref:`-extractScheme-`                      Extracts the URI scheme from the request
 :ref:`-extractSettings-`                    Extracts the ``RoutingSettings`` from the ``RequestContext``
 :ref:`-extractUnmatchedPath-`               Extracts the yet unmatched path from the ``RequestContext``
@@ -216,6 +218,7 @@ Directive                                   Description
 :ref:`-uploadedFile-`                       Streams one uploaded file from a multipart request to a file on disk
 :ref:`-validate-`                           Checks a given condition before running its inner route
 :ref:`-withoutRequestTimeout-`              Disables :ref:`request timeouts ` for a given route.
+:ref:`-withoutSizeLimit-`                   Skips the request entity size check
 :ref:`-withExecutionContext-`               Runs its inner route with the given alternative ``ExecutionContext``
 :ref:`-withMaterializer-`                   Runs its inner route with the given alternative ``Materializer``
 :ref:`-withLog-`                            Runs its inner route with the given alternative ``LoggingAdapter``
@@ -225,4 +228,5 @@ Directive                                   Description
 :ref:`-withRequestTimeoutResponse-`         Prepares the ``HttpResponse`` that is emitted if a request timeout is triggered.
                                             ``RequestContext => RequestContext`` function
 :ref:`-withSettings-`                       Runs its inner route with the given alternative ``RoutingSettings``
+:ref:`-withSizeLimit-`                      Applies a request entity size check
 =========================================== ============================================================================
diff --git a/akka-docs/rst/scala/http/routing-dsl/directives/basic-directives/extractDataBytes.rst b/akka-docs/rst/scala/http/routing-dsl/directives/basic-directives/extractDataBytes.rst
new file mode 100644
index 0000000000..5b962b2de5
--- /dev/null
+++ b/akka-docs/rst/scala/http/routing-dsl/directives/basic-directives/extractDataBytes.rst
@@ -0,0 +1,24 @@
+.. _-extractDataBytes-:
+
+extractDataBytes
+================
+
+Signature
+---------
+
+.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
+   :snippet: extractDataBytes
+
+Description
+-----------
+
+Extracts the entity's data bytes as a ``Source[ByteString, Any]`` from the :class:`RequestContext`.
+
+The directive returns a stream containing the request data bytes.
+
+Example
+-------
+
+.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
+   :snippet: extractDataBytes-example
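+
+For a rough sense of its shape, a minimal hypothetical route (not the referenced snippet) could look like::
+
+  import akka.http.scaladsl.server.Directives._
+  import akka.stream.scaladsl.Sink
+
+  val route = extractMaterializer { implicit mat =>
+    extractDataBytes { data =>
+      // data is a Source[ByteString, Any]; here we simply count the bytes
+      val bytesRead = data.runWith(Sink.fold(0)(_ + _.length))
+      onSuccess(bytesRead) { count => complete(s"Read $count bytes") }
+    }
+  }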
diff --git a/akka-docs/rst/scala/http/routing-dsl/directives/basic-directives/extractRequestEntity.rst b/akka-docs/rst/scala/http/routing-dsl/directives/basic-directives/extractRequestEntity.rst
new file mode 100644
index 0000000000..53e142c5ca
--- /dev/null
+++ b/akka-docs/rst/scala/http/routing-dsl/directives/basic-directives/extractRequestEntity.rst
@@ -0,0 +1,25 @@
+.. _-extractRequestEntity-:
+
+extractRequestEntity
+====================
+
+Signature
+---------
+
+.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
+   :snippet: extractRequestEntity
+
+Description
+-----------
+
+Extracts the ``RequestEntity`` from the :class:`RequestContext`.
+
+The directive returns a ``RequestEntity`` without unmarshalling the request. To extract a domain entity,
+:ref:`-entity-` should be used.
+
+Example
+-------
+
+.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
+   :snippet: extractRequestEntity-example
diff --git a/akka-docs/rst/scala/http/routing-dsl/directives/basic-directives/index.rst b/akka-docs/rst/scala/http/routing-dsl/directives/basic-directives/index.rst
index 2b5c0bd4cd..709f7d7b29 100644
--- a/akka-docs/rst/scala/http/routing-dsl/directives/basic-directives/index.rst
+++ b/akka-docs/rst/scala/http/routing-dsl/directives/basic-directives/index.rst
@@ -17,11 +17,13 @@ on two axes: a) provide a constant value or extract a value from the ``RequestCo
 a single value or a tuple of values.
 
   * :ref:`-extract-`
+  * :ref:`-extractDataBytes-`
   * :ref:`-extractExecutionContext-`
   * :ref:`-extractMaterializer-`
   * :ref:`-extractLog-`
   * :ref:`-extractRequest-`
   * :ref:`-extractRequestContext-`
+  * :ref:`-extractRequestEntity-`
   * :ref:`-extractSettings-`
   * :ref:`-extractUnmatchedPath-`
   * :ref:`-extractUri-`
@@ -94,10 +96,12 @@ Alphabetically
    cancelRejections
    extract
    extractExecutionContext
+   extractDataBytes
    extractMaterializer
    extractLog
    extractRequest
    extractRequestContext
+   extractRequestEntity
    extractSettings
    extractUnmatchedPath
    extractUri
diff --git a/akka-docs/rst/scala/http/routing-dsl/directives/misc-directives/index.rst b/akka-docs/rst/scala/http/routing-dsl/directives/misc-directives/index.rst
index 73837a2350..9857ed1923 100644
--- a/akka-docs/rst/scala/http/routing-dsl/directives/misc-directives/index.rst
+++ b/akka-docs/rst/scala/http/routing-dsl/directives/misc-directives/index.rst
@@ -11,4 +11,6 @@ MiscDirectives
    requestEntityEmpty
    requestEntityPresent
    selectPreferredLanguage
-   validate
\ No newline at end of file
+   validate
+   withoutSizeLimit
+   withSizeLimit
\ No newline at end of file
diff --git a/akka-docs/rst/scala/http/routing-dsl/directives/misc-directives/withSizeLimit.rst b/akka-docs/rst/scala/http/routing-dsl/directives/misc-directives/withSizeLimit.rst
new file mode 100644
index 0000000000..7f944e5fab
--- /dev/null
+++ b/akka-docs/rst/scala/http/routing-dsl/directives/misc-directives/withSizeLimit.rst
@@ -0,0 +1,40 @@
+.. _-withSizeLimit-:
+
+withSizeLimit
+=============
+
+Signature
+---------
+
+.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/MiscDirectives.scala
+   :snippet: withSizeLimit
+
+Description
+-----------
+Fails the stream with ``EntityStreamSizeException`` if the request entity size exceeds the given limit. The limit
+given as a parameter overrides the limit configured with ``akka.http.parsing.max-content-length``.
+
+The whole mechanism of entity size checking is intended to prevent certain Denial-of-Service attacks.
+The suggested setup is therefore to keep ``akka.http.parsing.max-content-length`` relatively low and use the
+``withSizeLimit`` directive for endpoints which expect bigger entities.
+
+See also :ref:`-withoutSizeLimit-` for skipping the request entity size check.
+
+Examples
+--------
+
+.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/MiscDirectivesExamplesSpec.scala
+   :snippet: withSizeLimit-example
+
+Beware that the request entity size check is executed when the entity is consumed. Therefore, in the following example
+even a request with an entity greater than the argument to ``withSizeLimit`` will succeed (because this route
+does not consume the entity):
+
+.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/MiscDirectivesExamplesSpec.scala
+   :snippet: withSizeLimit-execution-moment-example
+
+The ``withSizeLimit`` directive is implemented in terms of ``HttpEntity.withSizeLimit``, which means that in case of
+nested ``withSizeLimit`` directives the innermost is applied:
+
+.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/MiscDirectivesExamplesSpec.scala
+   :snippet: withSizeLimit-nested-example
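+
+As a compact, hypothetical illustration of the intended setup (the limit is given in bytes; ``entity(as[String])``
+consumes the entity and thereby triggers the check)::
+
+  import akka.http.scaladsl.server.Directives._
+
+  // accept up to 5 MB on this endpoint, overriding a (lower) global
+  // akka.http.parsing.max-content-length setting
+  val route =
+    path("upload") {
+      withSizeLimit(5 * 1024 * 1024) {
+        entity(as[String]) { text =>
+          complete(s"Received ${text.length} characters")
+        }
+      }
+    }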
diff --git a/akka-docs/rst/scala/http/routing-dsl/directives/misc-directives/withoutSizeLimit.rst b/akka-docs/rst/scala/http/routing-dsl/directives/misc-directives/withoutSizeLimit.rst
new file mode 100644
index 0000000000..8833e0e90c
--- /dev/null
+++ b/akka-docs/rst/scala/http/routing-dsl/directives/misc-directives/withoutSizeLimit.rst
@@ -0,0 +1,26 @@
+.. _-withoutSizeLimit-:
+
+withoutSizeLimit
+================
+
+Signature
+---------
+
+.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/MiscDirectives.scala
+   :snippet: withoutSizeLimit
+
+Description
+-----------
+Skips the request entity size verification.
+
+The whole mechanism of entity size checking is intended to prevent certain Denial-of-Service attacks.
+The suggested setup is therefore to keep ``akka.http.parsing.max-content-length`` relatively low and use the
+``withoutSizeLimit`` directive only for endpoints for which size verification should not be performed.
+
+See also :ref:`-withSizeLimit-` for setting a request entity size limit.
+
+Example
+-------
+
+.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/MiscDirectivesExamplesSpec.scala
+   :snippet: withoutSizeLimit-example
diff --git a/akka-docs/rst/scala/http/routing-dsl/directives/path-directives/path.rst b/akka-docs/rst/scala/http/routing-dsl/directives/path-directives/path.rst
index 5507c24199..0beac3c264 100644
--- a/akka-docs/rst/scala/http/routing-dsl/directives/path-directives/path.rst
+++ b/akka-docs/rst/scala/http/routing-dsl/directives/path-directives/path.rst
@@ -31,6 +31,10 @@ a ``path`` directive will always be empty).
 Depending on the type of its ``PathMatcher`` argument the ``path`` directive extracts zero or more values from
 the URI. If the match fails the request is rejected with an :ref:`empty rejection set `.
 
+.. note:: The empty string (also called the empty word or identity) is a **neutral element** of the string concatenation
+   operation, so it will match everything; but remember that ``path`` requires the whole remaining path to be matched,
+   so (``/``) will succeed and (``/whatever``) will fail. The :ref:`-pathPrefix-` directive provides a more liberal behaviour.
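+
+   For instance, a hypothetical sketch (assuming ``Directives._`` is in scope)::
+
+     // `path("")` only matches when the whole remaining path is empty, i.e. "/",
+     // while `pathPrefix("")` matches any path and consumes nothing of it
+     val onlyRoot = path("") { complete("just the root") }
+     val anyPath  = pathPrefix("") { complete("any path at all") }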
+
 Example
 -------
diff --git a/akka-docs/rst/scala/http/routing-dsl/directives/path-directives/pathPrefix.rst b/akka-docs/rst/scala/http/routing-dsl/directives/path-directives/pathPrefix.rst
index b17476dd75..579ab99d55 100644
--- a/akka-docs/rst/scala/http/routing-dsl/directives/path-directives/pathPrefix.rst
+++ b/akka-docs/rst/scala/http/routing-dsl/directives/path-directives/pathPrefix.rst
@@ -25,6 +25,9 @@ As opposed to its :ref:`-rawPathPrefix-` counterpart ``pathPrefix`` automaticall
 Depending on the type of its ``PathMatcher`` argument the ``pathPrefix`` directive extracts zero or more values from
 the URI. If the match fails the request is rejected with an :ref:`empty rejection set `.
 
+.. note:: The empty string (also called the empty word or identity) is a **neutral element** of the string concatenation
+   operation, so it will match everything and consume nothing. The :ref:`-path-` directive provides a stricter behaviour.
+
 Example
 -------
diff --git a/akka-docs/rst/scala/http/routing-dsl/index.rst b/akka-docs/rst/scala/http/routing-dsl/index.rst
index 3795d5acbe..a4e1ee5121 100644
--- a/akka-docs/rst/scala/http/routing-dsl/index.rst
+++ b/akka-docs/rst/scala/http/routing-dsl/index.rst
@@ -8,6 +8,11 @@ defining RESTful web services. It picks up where the low-level API leaves off an
 functionality of typical web servers or frameworks, like deconstruction of URIs, content negotiation or static
 content serving.
 
+.. note::
+  It is recommended to read the :ref:`implications-of-streaming-http-entities` section,
+  as it explains the underlying full-stack streaming concepts, which may be unexpected when coming
+  from a background with non-"streaming first" HTTP Servers.
+
 .. toctree::
    :maxdepth: 1
 
@@ -58,7 +63,7 @@ Bind failures
 ^^^^^^^^^^^^^
 For example the server might be unable to bind to the given port. For example when the port is already taken
 by another application, or if the port is privileged (i.e. only usable by ``root``).
-In this case the "binding future" will fail immediatly, and we can react to if by listening on the Future's completion:
+In this case the "binding future" will fail immediately, and we can react to it by listening on the Future's completion:
 
 .. includecode2:: ../../code/docs/http/scaladsl/HttpServerExampleSpec.scala
   :snippet: binding-failure-high-level-example
@@ -100,4 +105,4 @@ and split each line before we send it to an actor for further processing:
 Configuring Server-side HTTPS
 -----------------------------
 
-For detailed documentation about configuring and using HTTPS on the server-side refer to :ref:`serverSideHTTPS-scala`.
\ No newline at end of file
+For detailed documentation about configuring and using HTTPS on the server-side refer to :ref:`serverSideHTTPS-scala`.
diff --git a/akka-docs/rst/scala/http/routing-dsl/testkit.rst b/akka-docs/rst/scala/http/routing-dsl/testkit.rst
index af52e695d1..35b90a1f80 100644
--- a/akka-docs/rst/scala/http/routing-dsl/testkit.rst
+++ b/akka-docs/rst/scala/http/routing-dsl/testkit.rst
@@ -4,6 +4,9 @@ Route TestKit
 One of Akka HTTP's design goals is good testability of the created services.
 For services built with the Routing DSL Akka HTTP provides a dedicated testkit that makes efficient testing of
 route logic easy and convenient. This "route test DSL" is made available with the *akka-http-testkit* module.
+To use it include the following dependency::
+
+  "com.typesafe.akka" %% "akka-http-testkit" % "@version@"
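+
+A first test could then look like this (a minimal sketch using ScalaTest; all names are illustrative)::
+
+  import akka.http.scaladsl.server.Directives._
+  import akka.http.scaladsl.testkit.ScalatestRouteTest
+  import org.scalatest.{ Matchers, WordSpec }
+
+  class HelloRouteSpec extends WordSpec with Matchers with ScalatestRouteTest {
+    val route = path("hello") { complete("world") }
+
+    "the route" should {
+      "answer GET requests to /hello" in {
+        Get("/hello") ~> route ~> check {
+          responseAs[String] shouldEqual "world"
+        }
+      }
+    }
+  }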
 
 Usage
diff --git a/akka-docs/rst/scala/stream/index.rst b/akka-docs/rst/scala/stream/index.rst
index 485f4d00a3..a337529ac0 100644
--- a/akka-docs/rst/scala/stream/index.rst
+++ b/akka-docs/rst/scala/stream/index.rst
@@ -13,6 +13,7 @@ Streams
    stream-graphs
    stream-composition
    stream-rate
+   stream-dynamic
    stream-customize
    stream-integrations
    stream-error
diff --git a/akka-docs/rst/scala/stream/stream-cookbook.rst b/akka-docs/rst/scala/stream/stream-cookbook.rst
index 94ba97ad3a..b2a599371d 100644
--- a/akka-docs/rst/scala/stream/stream-cookbook.rst
+++ b/akka-docs/rst/scala/stream/stream-cookbook.rst
@@ -173,7 +173,7 @@ Triggering the flow of elements programmatically
 In other words, even if the stream would be able to flow (not being backpressured) we want to hold back elements
 until a trigger signal arrives.
 
-This recipe solves the problem by simply zipping the stream of ``Message`` elments with the stream of ``Trigger``
+This recipe solves the problem by simply zipping the stream of ``Message`` elements with the stream of ``Trigger``
 signals. Since ``Zip`` produces pairs, we simply map the output stream selecting the first element of the pair.
 
 .. includecode:: ../code/docs/stream/cookbook/RecipeManualTrigger.scala#manually-triggered-stream
@@ -222,7 +222,7 @@ a special ``reduce`` operation that collapses multiple upstream elements into on
 the speed of the upstream unaffected by the downstream.
 
 When the upstream is faster, the reducing process of the ``conflate`` starts. Our reducer function simply takes
-the freshest element. This cin a simple dropping operation.
+the freshest element. This is in effect a simple dropping operation.
 
 .. includecode:: ../code/docs/stream/cookbook/RecipeSimpleDrop.scala#simple-drop
diff --git a/akka-docs/rst/scala/stream/stream-dynamic.rst b/akka-docs/rst/scala/stream/stream-dynamic.rst
new file mode 100644
index 0000000000..cd4f5d6690
--- /dev/null
+++ b/akka-docs/rst/scala/stream/stream-dynamic.rst
@@ -0,0 +1,63 @@
+.. _stream-dynamic-scala:
+
+#######################
+Dynamic stream handling
+#######################
+
+.. _kill-switch-scala:
+
+Controlling graph completion with KillSwitch
+--------------------------------------------
+
+A ``KillSwitch`` allows the completion of graphs of ``FlowShape`` to be controlled from the outside. It consists of a
+flow element that can be linked to a graph of ``FlowShape`` needing completion control.
+The ``KillSwitch`` trait allows completing or failing the graph(s).
+
+.. includecode:: ../../../../akka-stream/src/main/scala/akka/stream/KillSwitch.scala
+   :include: kill-switch
+
+After the first call to either ``shutdown`` or ``abort``, all subsequent calls to any of these methods will be ignored.
+Graph completion is performed by both
+
+* completing its downstream
+* cancelling (in case of ``shutdown``) or failing (in case of ``abort``) its upstream.
+
+A ``KillSwitch`` can control the completion of one or multiple streams, and therefore comes in two different flavours.
+
+.. _unique-kill-switch-scala:
+
+UniqueKillSwitch
+^^^^^^^^^^^^^^^^
+
+``UniqueKillSwitch`` allows controlling the completion of **one** materialized ``Graph`` of ``FlowShape``. Refer to the
+examples below for usage.
+
+* **Shutdown**
+
+.. includecode:: ../code/docs/stream/KillSwitchDocSpec.scala#unique-shutdown
+
+* **Abort**
+
+.. includecode:: ../code/docs/stream/KillSwitchDocSpec.scala#unique-abort
+
+.. _shared-kill-switch-scala:
+
+SharedKillSwitch
+^^^^^^^^^^^^^^^^
+
+A ``SharedKillSwitch`` allows controlling the completion of an arbitrary number of graphs of ``FlowShape``. It can be
+materialized multiple times via its ``flow`` method, and all materialized graphs linked to it are controlled by the switch.
+Refer to the examples below for usage.
+
+* **Shutdown**
+
+.. includecode:: ../code/docs/stream/KillSwitchDocSpec.scala#shared-shutdown
+
+* **Abort**
+
+.. includecode:: ../code/docs/stream/KillSwitchDocSpec.scala#shared-abort
+
+.. note::
+  A ``UniqueKillSwitch`` is always a result of a materialization, whilst ``SharedKillSwitch`` needs to be constructed
+  before any materialization takes place.
+
diff --git a/akka-docs/rst/scala/stream/stream-introduction.rst b/akka-docs/rst/scala/stream/stream-introduction.rst
index 064cd46890..177cc294aa 100644
--- a/akka-docs/rst/scala/stream/stream-introduction.rst
+++ b/akka-docs/rst/scala/stream/stream-introduction.rst
@@ -7,7 +7,7 @@ Introduction
 Motivation
 ==========
 
-The way we consume services from the internet today includes many instances of
+The way we consume services from the Internet today includes many instances of
 streaming data, both downloading from a service as well as uploading to it
 or peer-to-peer data transfers. Regarding data as a stream of elements instead
 of in its entirety is very useful because it matches the way computers send and
diff --git a/akka-docs/rst/scala/stream/stream-quickstart.rst b/akka-docs/rst/scala/stream/stream-quickstart.rst
index aa2171c539..cdc37c1da2 100644
--- a/akka-docs/rst/scala/stream/stream-quickstart.rst
+++ b/akka-docs/rst/scala/stream/stream-quickstart.rst
@@ -1,329 +1,333 @@
-.. 
_stream-quickstart-scala: - -Quick Start Guide -================= - -A stream usually begins at a source, so this is also how we start an Akka -Stream. Before we create one, we import the full complement of streaming tools: - -.. includecode:: ../code/docs/stream/QuickStartDocSpec.scala#imports - -Now we will start with a rather simple source, emitting the integers 1 to 100: - -.. includecode:: ../code/docs/stream/QuickStartDocSpec.scala#create-source - -The :class:`Source` type is parameterized with two types: the first one is the -type of element that this source emits and the second one may signal that -running the source produces some auxiliary value (e.g. a network source may -provide information about the bound port or the peer’s address). Where no -auxiliary information is produced, the type ``akka.NotUsed`` is used—and a -simple range of integers surely falls into this category. - -Having created this source means that we have a description of how to emit the -first 100 natural numbers, but this source is not yet active. In order to get -those numbers out we have to run it: - -.. includecode:: ../code/docs/stream/QuickStartDocSpec.scala#run-source - -This line will complement the source with a consumer function—in this example -we simply print out the numbers to the console—and pass this little stream -setup to an Actor that runs it. This activation is signaled by having “run” be -part of the method name; there are other methods that run Akka Streams, and -they all follow this pattern. - -You may wonder where the Actor gets created that runs the stream, and you are -probably also asking yourself what this ``materializer`` means. In order to get -this value we first need to create an Actor system: - -.. includecode:: ../code/docs/stream/QuickStartDocSpec.scala#create-materializer - -There are other ways to create a materializer, e.g. from an -:class:`ActorContext` when using streams from within Actors. The -:class:`Materializer` is a factory for stream execution engines, it is the -thing that makes streams run—you don’t need to worry about any of the details -just now apart from that you need one for calling any of the ``run`` methods on -a :class:`Source`. The materializer is picked up implicitly if it is omitted -from the ``run`` method call arguments, which we will do in the following. - -The nice thing about Akka Streams is that the :class:`Source` is just a -description of what you want to run, and like an architect’s blueprint it can -be reused, incorporated into a larger design. We may choose to transform the -source of integers and write it to a file instead: - -.. includecode:: ../code/docs/stream/QuickStartDocSpec.scala#transform-source - -First we use the ``scan`` combinator to run a computation over the whole -stream: starting with the number 1 (``BigInt(1)``) we multiple by each of -the incoming numbers, one after the other; the scan operation emits the initial -value and then every calculation result. This yields the series of factorial -numbers which we stash away as a :class:`Source` for later reuse—it is -important to keep in mind that nothing is actually computed yet, this is just a -description of what we want to have computed once we run the stream. Then we -convert the resulting series of numbers into a stream of :class:`ByteString` -objects describing lines in a text file. This stream is then run by attaching a -file as the receiver of the data. In the terminology of Akka Streams this is -called a :class:`Sink`. 
:class:`IOResult` is a type that IO operations return in -Akka Streams in order to tell you how many bytes or elements were consumed and -whether the stream terminated normally or exceptionally. - -Reusable Pieces ---------------- - -One of the nice parts of Akka Streams—and something that other stream libraries -do not offer—is that not only sources can be reused like blueprints, all other -elements can be as well. We can take the file-writing :class:`Sink`, prepend -the processing steps necessary to get the :class:`ByteString` elements from -incoming strings and package that up as a reusable piece as well. Since the -language for writing these streams always flows from left to right (just like -plain English), we need a starting point that is like a source but with an -“open” input. In Akka Streams this is called a :class:`Flow`: - -.. includecode:: ../code/docs/stream/QuickStartDocSpec.scala#transform-sink - -Starting from a flow of strings we convert each to :class:`ByteString` and then -feed to the already known file-writing :class:`Sink`. The resulting blueprint -is a :class:`Sink[String, Future[IOResult]]`, which means that it -accepts strings as its input and when materialized it will create auxiliary -information of type ``Future[IOResult]`` (when chaining operations on -a :class:`Source` or :class:`Flow` the type of the auxiliary information—called -the “materialized value”—is given by the leftmost starting point; since we want -to retain what the ``FileIO.toFile`` sink has to offer, we need to say -``Keep.right``). - -We can use the new and shiny :class:`Sink` we just created by -attaching it to our ``factorials`` source—after a small adaptation to turn the -numbers into strings: - -.. includecode:: ../code/docs/stream/QuickStartDocSpec.scala#use-transformed-sink - -Time-Based Processing ---------------------- - -Before we start looking at a more involved example we explore the streaming -nature of what Akka Streams can do. Starting from the ``factorials`` source -we transform the stream by zipping it together with another stream, -represented by a :class:`Source` that emits the number 0 to 100: the first -number emitted by the ``factorials`` source is the factorial of zero, the -second is the factorial of one, and so on. We combine these two by forming -strings like ``"3! = 6"``. - -.. includecode:: ../code/docs/stream/QuickStartDocSpec.scala#add-streams - -All operations so far have been time-independent and could have been performed -in the same fashion on strict collections of elements. The next line -demonstrates that we are in fact dealing with streams that can flow at a -certain speed: we use the ``throttle`` combinator to slow down the stream to 1 -element per second (the second ``1`` in the argument list is the maximum size -of a burst that we want to allow—passing ``1`` means that the first element -gets through immediately and the second then has to wait for one second and so -on). - -If you run this program you will see one line printed per second. One aspect -that is not immediately visible deserves mention, though: if you try and set -the streams to produce a billion numbers each then you will notice that your -JVM does not crash with an OutOfMemoryError, even though you will also notice -that running the streams happens in the background, asynchronously (this is the -reason for the auxiliary information to be provided as a :class:`Future`). 
The -secret that makes this work is that Akka Streams implicitly implement pervasive -flow control, all combinators respect back-pressure. This allows the throttle -combinator to signal to all its upstream sources of data that it can only -accept elements at a certain rate—when the incoming rate is higher than one per -second the throttle combinator will assert *back-pressure* upstream. - -This is basically all there is to Akka Streams in a nutshell—glossing over the -fact that there are dozens of sources and sinks and many more stream -transformation combinators to choose from, see also :ref:`stages-overview_scala`. - -Reactive Tweets -=============== - -A typical use case for stream processing is consuming a live stream of data that we want to extract or aggregate some -other data from. In this example we'll consider consuming a stream of tweets and extracting information concerning Akka from them. - -We will also consider the problem inherent to all non-blocking streaming -solutions: *"What if the subscriber is too slow to consume the live stream of -data?"*. Traditionally the solution is often to buffer the elements, but this -can—and usually will—cause eventual buffer overflows and instability of such -systems. Instead Akka Streams depend on internal backpressure signals that -allow to control what should happen in such scenarios. - -Here's the data model we'll be working with throughout the quickstart examples: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#model - -.. note:: - If you would like to get an overview of the used vocabulary first instead of diving head-first - into an actual example you can have a look at the :ref:`core-concepts-scala` and :ref:`defining-and-running-streams-scala` - sections of the docs, and then come back to this quickstart to see it all pieced together into a simple example application. - -Transforming and consuming simple streams ------------------------------------------ -The example application we will be looking at is a simple Twitter feed stream from which we'll want to extract certain information, -like for example finding all twitter handles of users who tweet about ``#akka``. - -In order to prepare our environment by creating an :class:`ActorSystem` and :class:`ActorMaterializer`, -which will be responsible for materializing and running the streams we are about to create: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#materializer-setup - -The :class:`ActorMaterializer` can optionally take :class:`ActorMaterializerSettings` which can be used to define -materialization properties, such as default buffer sizes (see also :ref:`async-stream-buffers-scala`), the dispatcher to -be used by the pipeline etc. These can be overridden with ``withAttributes`` on :class:`Flow`, :class:`Source`, :class:`Sink` and :class:`Graph`. - -Let's assume we have a stream of tweets readily available. In Akka this is expressed as a :class:`Source[Out, M]`: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#tweet-source - -Streams always start flowing from a :class:`Source[Out,M1]` then can continue through :class:`Flow[In,Out,M2]` elements or -more advanced graph elements to finally be consumed by a :class:`Sink[In,M3]` (ignore the type parameters ``M1``, ``M2`` -and ``M3`` for now, they are not relevant to the types of the elements produced/consumed by these classes – they are -"materialized types", which we'll talk about :ref:`below `). 
- -The operations should look familiar to anyone who has used the Scala Collections library, -however they operate on streams and not collections of data (which is a very important distinction, as some operations -only make sense in streaming and vice versa): - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#authors-filter-map - -Finally in order to :ref:`materialize ` and run the stream computation we need to attach -the Flow to a :class:`Sink` that will get the Flow running. The simplest way to do this is to call -``runWith(sink)`` on a ``Source``. For convenience a number of common Sinks are predefined and collected as methods on -the :class:`Sink` `companion object `_. -For now let's simply print each author: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#authors-foreachsink-println - -or by using the shorthand version (which are defined only for the most popular Sinks such as ``Sink.fold`` and ``Sink.foreach``): - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#authors-foreach-println - -Materializing and running a stream always requires a :class:`Materializer` to be in implicit scope (or passed in explicitly, -like this: ``.run(materializer)``). - -The complete snippet looks like this: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#first-sample - -Flattening sequences in streams -------------------------------- -In the previous section we were working on 1:1 relationships of elements which is the most common case, but sometimes -we might want to map from one element to a number of elements and receive a "flattened" stream, similarly like ``flatMap`` -works on Scala Collections. In order to get a flattened stream of hashtags from our stream of tweets we can use the ``mapConcat`` -combinator: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#hashtags-mapConcat - -.. note:: - The name ``flatMap`` was consciously avoided due to its proximity with for-comprehensions and monadic composition. - It is problematic for two reasons: first, flattening by concatenation is often undesirable in bounded stream processing - due to the risk of deadlock (with merge being the preferred strategy), and second, the monad laws would not hold for - our implementation of flatMap (due to the liveness issues). - - Please note that the ``mapConcat`` requires the supplied function to return a strict collection (``f:Out=>immutable.Seq[T]``), - whereas ``flatMap`` would have to operate on streams all the way through. - -Broadcasting a stream ---------------------- -Now let's say we want to persist all hashtags, as well as all author names from this one live stream. -For example we'd like to write all author handles into one file, and all hashtags into another file on disk. -This means we have to split the source stream into two streams which will handle the writing to these different files. - -Elements that can be used to form such "fan-out" (or "fan-in") structures are referred to as "junctions" in Akka Streams. -One of these that we'll be using in this example is called :class:`Broadcast`, and it simply emits elements from its -input port to all of its output ports. - -Akka Streams intentionally separate the linear stream structures (Flows) from the non-linear, branching ones (Graphs) -in order to offer the most convenient API for both of these cases. 
Graphs can express arbitrarily complex stream setups -at the expense of not reading as familiarly as collection transformations. - -Graphs are constructed using :class:`GraphDSL` like this: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#graph-dsl-broadcast - -As you can see, inside the :class:`GraphDSL` we use an implicit graph builder ``b`` to mutably construct the graph -using the ``~>`` "edge operator" (also read as "connect" or "via" or "to"). The operator is provided implicitly -by importing ``GraphDSL.Implicits._``. - -``GraphDSL.create`` returns a :class:`Graph`, in this example a :class:`Graph[ClosedShape, Unit]` where -:class:`ClosedShape` means that it is *a fully connected graph* or "closed" - there are no unconnected inputs or outputs. -Since it is closed it is possible to transform the graph into a :class:`RunnableGraph` using ``RunnableGraph.fromGraph``. -The runnable graph can then be ``run()`` to materialize a stream out of it. - -Both :class:`Graph` and :class:`RunnableGraph` are *immutable, thread-safe, and freely shareable*. - -A graph can also have one of several other shapes, with one or more unconnected ports. Having unconnected ports -expresses a graph that is a *partial graph*. Concepts around composing and nesting graphs in large structures are -explained in detail in :ref:`composition-scala`. It is also possible to wrap complex computation graphs -as Flows, Sinks or Sources, which will be explained in detail in -:ref:`constructing-sources-sinks-flows-from-partial-graphs-scala`. - -Back-pressure in action ------------------------ -One of the main advantages of Akka Streams is that they *always* propagate back-pressure information from stream Sinks -(Subscribers) to their Sources (Publishers). It is not an optional feature, and is enabled at all times. To learn more -about the back-pressure protocol used by Akka Streams and all other Reactive Streams compatible implementations read -:ref:`back-pressure-explained-scala`. - -A typical problem applications (not using Akka Streams) like this often face is that they are unable to process the incoming data fast enough, -either temporarily or by design, and will start buffering incoming data until there's no more space to buffer, resulting -in either ``OutOfMemoryError`` s or other severe degradations of service responsiveness. With Akka Streams buffering can -and must be handled explicitly. For example, if we are only interested in the "*most recent tweets, with a buffer of 10 -elements*" this can be expressed using the ``buffer`` element: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#tweets-slow-consumption-dropHead - -The ``buffer`` element takes an explicit and required ``OverflowStrategy``, which defines how the buffer should react -when it receives another element while it is full. Strategies provided include dropping the oldest element (``dropHead``), -dropping the entire buffer, signalling errors etc. Be sure to pick and choose the strategy that fits your use case best. - -.. _materialized-values-quick-scala: - -Materialized values -------------------- -So far we've been only processing data using Flows and consuming it into some kind of external Sink - be it by printing -values or storing them in some external system. However sometimes we may be interested in some value that can be -obtained from the materialized processing pipeline. For example, we want to know how many tweets we have processed. 
-While this question is not as obvious to give an answer to in case of an infinite stream of tweets (one way to answer -this question in a streaming setting would be to create a stream of counts described as "*up until now*, we've processed N tweets"), -but in general it is possible to deal with finite streams and come up with a nice result such as a total count of elements. - -First, let's write such an element counter using ``Sink.fold`` and see how the types look like: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#tweets-fold-count - -First we prepare a reusable ``Flow`` that will change each incoming tweet into an integer of value ``1``. We'll use this in -order to combine those with a ``Sink.fold`` that will sum all ``Int`` elements of the stream and make its result available as -a ``Future[Int]``. Next we connect the ``tweets`` stream to ``count`` with ``via``. Finally we connect the Flow to the previously -prepared Sink using ``toMat``. - -Remember those mysterious ``Mat`` type parameters on ``Source[+Out, +Mat]``, ``Flow[-In, +Out, +Mat]`` and ``Sink[-In, +Mat]``? -They represent the type of values these processing parts return when materialized. When you chain these together, -you can explicitly combine their materialized values. In our example we used the ``Keep.right`` predefined function, -which tells the implementation to only care about the materialized type of the stage currently appended to the right. -The materialized type of ``sumSink`` is ``Future[Int]`` and because of using ``Keep.right``, the resulting :class:`RunnableGraph` -has also a type parameter of ``Future[Int]``. - -This step does *not* yet materialize the -processing pipeline, it merely prepares the description of the Flow, which is now connected to a Sink, and therefore can -be ``run()``, as indicated by its type: ``RunnableGraph[Future[Int]]``. Next we call ``run()`` which uses the implicit :class:`ActorMaterializer` -to materialize and run the Flow. The value returned by calling ``run()`` on a ``RunnableGraph[T]`` is of type ``T``. -In our case this type is ``Future[Int]`` which, when completed, will contain the total length of our ``tweets`` stream. -In case of the stream failing, this future would complete with a Failure. - -A :class:`RunnableGraph` may be reused -and materialized multiple times, because it is just the "blueprint" of the stream. This means that if we materialize a stream, -for example one that consumes a live stream of tweets within a minute, the materialized values for those two materializations -will be different, as illustrated by this example: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#tweets-runnable-flow-materialized-twice - -Many elements in Akka Streams provide materialized values which can be used for obtaining either results of computation or -steering these elements which will be discussed in detail in :ref:`stream-materialization-scala`. Summing up this section, now we know -what happens behind the scenes when we run this one-liner, which is equivalent to the multi line version above: - -.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#tweets-fold-count-oneline - -.. note:: - ``runWith()`` is a convenience method that automatically ignores the materialized value of any other stages except - those appended by the ``runWith()`` itself. In the above example it translates to using ``Keep.right`` as the combiner - for materialized values. +.. 
_stream-quickstart-scala:
+
+Quick Start Guide
+=================
+
+A stream usually begins at a source, so this is also how we start an Akka
+Stream. Before we create one, we import the full complement of streaming tools:
+
+.. includecode:: ../code/docs/stream/QuickStartDocSpec.scala#stream-imports
+
+If you want to execute the code samples while you read through the quick start guide, you will also need the following imports:
+
+.. includecode:: ../code/docs/stream/QuickStartDocSpec.scala#other-imports
+
+Now we will start with a rather simple source, emitting the integers 1 to 100:
+
+.. includecode:: ../code/docs/stream/QuickStartDocSpec.scala#create-source
+
+The :class:`Source` type is parameterized with two types: the first one is the
+type of element that this source emits and the second one may signal that
+running the source produces some auxiliary value (e.g. a network source may
+provide information about the bound port or the peer’s address). Where no
+auxiliary information is produced, the type ``akka.NotUsed`` is used—and a
+simple range of integers surely falls into this category.
+
+Having created this source means that we have a description of how to emit the
+first 100 natural numbers, but this source is not yet active. In order to get
+those numbers out we have to run it:
+
+.. includecode:: ../code/docs/stream/QuickStartDocSpec.scala#run-source
+
+This line will complement the source with a consumer function—in this example
+we simply print out the numbers to the console—and pass this little stream
+setup to an Actor that runs it. This activation is signaled by having “run” be
+part of the method name; there are other methods that run Akka Streams, and
+they all follow this pattern.
+
+You may wonder where the Actor gets created that runs the stream, and you are
+probably also asking yourself what this ``materializer`` means. In order to get
+this value we first need to create an Actor system:
+
+.. includecode:: ../code/docs/stream/QuickStartDocSpec.scala#create-materializer
+
+There are other ways to create a materializer, e.g. from an
+:class:`ActorContext` when using streams from within Actors. The
+:class:`Materializer` is a factory for stream execution engines, it is the
+thing that makes streams run—you don’t need to worry about any of the details
+just now apart from that you need one for calling any of the ``run`` methods on
+a :class:`Source`. The materializer is picked up implicitly if it is omitted
+from the ``run`` method call arguments, which we will do in the following.
+
+The nice thing about Akka Streams is that the :class:`Source` is just a
+description of what you want to run, and like an architect’s blueprint it can
+be reused, incorporated into a larger design. We may choose to transform the
+source of integers and write it to a file instead:
+
+.. includecode:: ../code/docs/stream/QuickStartDocSpec.scala#transform-source
+
+First we use the ``scan`` combinator to run a computation over the whole
+stream: starting with the number 1 (``BigInt(1)``) we multiply by each of
+the incoming numbers, one after the other; the scan operation emits the initial
+value and then every calculation result. This yields the series of factorial
+numbers which we stash away as a :class:`Source` for later reuse—it is
+important to keep in mind that nothing is actually computed yet, this is just a
+description of what we want to have computed once we run the stream. Then we
+convert the resulting series of numbers into a stream of :class:`ByteString`
+objects describing lines in a text file. This stream is then run by attaching a
+file as the receiver of the data. In the terminology of Akka Streams this is
+called a :class:`Sink`. :class:`IOResult` is a type that IO operations return in
+Akka Streams in order to tell you how many bytes or elements were consumed and
+whether the stream terminated normally or exceptionally.
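+
+In rough outline, the referenced snippet amounts to something like this (a sketch, assuming the ``source`` defined
+earlier and an implicit materializer in scope)::
+
+  import java.io.File
+  import scala.concurrent.Future
+  import akka.stream.IOResult
+  import akka.stream.scaladsl.FileIO
+  import akka.util.ByteString
+
+  val factorials = source.scan(BigInt(1))((acc, next) => acc * next)
+
+  val result: Future[IOResult] =
+    factorials
+      .map(num => ByteString(s"$num\n"))
+      .runWith(FileIO.toFile(new File("factorials.txt")))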
+
+Reusable Pieces
+---------------
+
+One of the nice parts of Akka Streams—and something that other stream libraries
+do not offer—is that not only sources can be reused like blueprints, all other
+elements can be as well. We can take the file-writing :class:`Sink`, prepend
+the processing steps necessary to get the :class:`ByteString` elements from
+incoming strings and package that up as a reusable piece as well. Since the
+language for writing these streams always flows from left to right (just like
+plain English), we need a starting point that is like a source but with an
+“open” input. In Akka Streams this is called a :class:`Flow`:
+
+.. includecode:: ../code/docs/stream/QuickStartDocSpec.scala#transform-sink
+
+Starting from a flow of strings we convert each to :class:`ByteString` and then
+feed to the already known file-writing :class:`Sink`. The resulting blueprint
+is a :class:`Sink[String, Future[IOResult]]`, which means that it
+accepts strings as its input and when materialized it will create auxiliary
+information of type ``Future[IOResult]`` (when chaining operations on
+a :class:`Source` or :class:`Flow` the type of the auxiliary information—called
+the “materialized value”—is given by the leftmost starting point; since we want
+to retain what the ``FileIO.toFile`` sink has to offer, we need to say
+``Keep.right``).
+
+We can use the new and shiny :class:`Sink` we just created by
+attaching it to our ``factorials`` source—after a small adaptation to turn the
+numbers into strings:
+
+.. includecode:: ../code/docs/stream/QuickStartDocSpec.scala#use-transformed-sink
+
+Time-Based Processing
+---------------------
+
+Before we start looking at a more involved example we explore the streaming
+nature of what Akka Streams can do. Starting from the ``factorials`` source
+we transform the stream by zipping it together with another stream,
+represented by a :class:`Source` that emits the numbers 0 to 100: the first
+number emitted by the ``factorials`` source is the factorial of zero, the
+second is the factorial of one, and so on. We combine these two by forming
+strings like ``"3! = 6"``.
+
+.. includecode:: ../code/docs/stream/QuickStartDocSpec.scala#add-streams
+
+All operations so far have been time-independent and could have been performed
+in the same fashion on strict collections of elements. The next line
+demonstrates that we are in fact dealing with streams that can flow at a
+certain speed: we use the ``throttle`` combinator to slow down the stream to 1
+element per second (the second ``1`` in the argument list is the maximum size
+of a burst that we want to allow—passing ``1`` means that the first element
+gets through immediately and the second then has to wait for one second and so
+on).
+
+If you run this program you will see one line printed per second. 
One aspect
+that is not immediately visible deserves mention, though: if you try and set
+the streams to produce a billion numbers each then you will notice that your
+JVM does not crash with an OutOfMemoryError, even though you will also notice
+that running the streams happens in the background, asynchronously (this is the
+reason for the auxiliary information to be provided as a :class:`Future`). The
+secret that makes this work is that Akka Streams implicitly implement pervasive
+flow control, all combinators respect back-pressure. This allows the throttle
+combinator to signal to all its upstream sources of data that it can only
+accept elements at a certain rate—when the incoming rate is higher than one per
+second the throttle combinator will assert *back-pressure* upstream.
+
+This is basically all there is to Akka Streams in a nutshell—glossing over the
+fact that there are dozens of sources and sinks and many more stream
+transformation combinators to choose from, see also :ref:`stages-overview_scala`.
+
+Reactive Tweets
+===============
+
+A typical use case for stream processing is consuming a live stream of data that we want to extract or aggregate some
+other data from. In this example we'll consider consuming a stream of tweets and extracting information concerning Akka from them.
+
+We will also consider the problem inherent to all non-blocking streaming
+solutions: *"What if the subscriber is too slow to consume the live stream of
+data?"*. Traditionally the solution is often to buffer the elements, but this
+can—and usually will—cause eventual buffer overflows and instability of such
+systems. Instead Akka Streams depend on internal backpressure signals that
+allow controlling what should happen in such scenarios.
+
+Here's the data model we'll be working with throughout the quickstart examples:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#model
+
+.. note::
+  If you would like to get an overview of the used vocabulary first instead of diving head-first
+  into an actual example you can have a look at the :ref:`core-concepts-scala` and :ref:`defining-and-running-streams-scala`
+  sections of the docs, and then come back to this quickstart to see it all pieced together into a simple example application.
+
+Transforming and consuming simple streams
+-----------------------------------------
+The example application we will be looking at is a simple Twitter feed stream from which we'll want to extract certain information,
+like for example finding all twitter handles of users who tweet about ``#akka``.
+
+We first prepare our environment by creating an :class:`ActorSystem` and :class:`ActorMaterializer`,
+which will be responsible for materializing and running the streams we are about to create:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#materializer-setup
+
+The :class:`ActorMaterializer` can optionally take :class:`ActorMaterializerSettings` which can be used to define
+materialization properties, such as default buffer sizes (see also :ref:`async-stream-buffers-scala`), the dispatcher to
+be used by the pipeline etc. These can be overridden with ``withAttributes`` on :class:`Flow`, :class:`Source`, :class:`Sink` and :class:`Graph`.
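+
+In its minimal form this setup is just (a sketch of the referenced snippet)::
+
+  import akka.actor.ActorSystem
+  import akka.stream.ActorMaterializer
+
+  implicit val system = ActorSystem("reactive-tweets")
+  implicit val materializer = ActorMaterializer()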
+
+Let's assume we have a stream of tweets readily available. In Akka this is expressed as a :class:`Source[Out, M]`:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#tweet-source
+
+Streams always start flowing from a :class:`Source[Out,M1]` then can continue through :class:`Flow[In,Out,M2]` elements or
+more advanced graph elements to finally be consumed by a :class:`Sink[In,M3]` (ignore the type parameters ``M1``, ``M2``
+and ``M3`` for now, they are not relevant to the types of the elements produced/consumed by these classes – they are
+"materialized types", which we'll talk about :ref:`below `).
+
+The operations should look familiar to anyone who has used the Scala Collections library,
+however they operate on streams and not collections of data (which is a very important distinction, as some operations
+only make sense in streaming and vice versa):
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#authors-filter-map
+
+Finally in order to :ref:`materialize ` and run the stream computation we need to attach
+the Flow to a :class:`Sink` that will get the Flow running. The simplest way to do this is to call
+``runWith(sink)`` on a ``Source``. For convenience a number of common Sinks are predefined and collected as methods on
+the :class:`Sink` `companion object `_.
+For now let's simply print each author:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#authors-foreachsink-println
+
+or by using the shorthand versions (which are defined only for the most popular Sinks such as ``Sink.fold`` and ``Sink.foreach``):
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#authors-foreach-println
+
+Materializing and running a stream always requires a :class:`Materializer` to be in implicit scope (or passed in explicitly,
+like this: ``.run(materializer)``).
+
+The complete snippet looks like this:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#first-sample
+
+Flattening sequences in streams
+-------------------------------
+In the previous section we were working on 1:1 relationships of elements which is the most common case, but sometimes
+we might want to map from one element to a number of elements and receive a "flattened" stream, similarly to how ``flatMap``
+works on Scala Collections. In order to get a flattened stream of hashtags from our stream of tweets we can use the ``mapConcat``
+combinator:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#hashtags-mapConcat
+
+.. note::
+  The name ``flatMap`` was consciously avoided due to its proximity with for-comprehensions and monadic composition.
+  It is problematic for two reasons: first, flattening by concatenation is often undesirable in bounded stream processing
+  due to the risk of deadlock (with merge being the preferred strategy), and second, the monad laws would not hold for
+  our implementation of flatMap (due to the liveness issues).
+
+  Please note that the ``mapConcat`` requires the supplied function to return a strict collection (``f:Out=>immutable.Seq[T]``),
+  whereas ``flatMap`` would have to operate on streams all the way through.
+
+Broadcasting a stream
+---------------------
+Now let's say we want to persist all hashtags, as well as all author names from this one live stream.
+For example we'd like to write all author handles into one file, and all hashtags into another file on disk.
+This means we have to split the source stream into two streams which will handle the writing to these different files.
+
+Elements that can be used to form such "fan-out" (or "fan-in") structures are referred to as "junctions" in Akka Streams.
+One of these that we'll be using in this example is called :class:`Broadcast`, and it simply emits elements from its
+input port to all of its output ports.
+
+Akka Streams intentionally separate the linear stream structures (Flows) from the non-linear, branching ones (Graphs)
+in order to offer the most convenient API for both of these cases. Graphs can express arbitrarily complex stream setups
+at the expense of not reading as familiarly as collection transformations.
+
+Graphs are constructed using :class:`GraphDSL` like this:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#graph-dsl-broadcast
+
+As you can see, inside the :class:`GraphDSL` we use an implicit graph builder ``b`` to mutably construct the graph
+using the ``~>`` "edge operator" (also read as "connect" or "via" or "to"). The operator is provided implicitly
+by importing ``GraphDSL.Implicits._``.
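+
+In outline, the referenced graph amounts roughly to the following (a sketch; ``writeAuthors`` and ``writeHashtags``
+stand for the two file-writing Sinks and are left undefined here, and the implicit materializer from above is assumed)::
+
+  import akka.NotUsed
+  import akka.stream.ClosedShape
+  import akka.stream.scaladsl.{ Broadcast, Flow, GraphDSL, RunnableGraph, Sink }
+
+  val writeAuthors: Sink[Author, NotUsed] = ???
+  val writeHashtags: Sink[Hashtag, NotUsed] = ???
+
+  val g = RunnableGraph.fromGraph(GraphDSL.create() { implicit b =>
+    import GraphDSL.Implicits._
+
+    // one input, two outputs: every tweet goes to both branches
+    val bcast = b.add(Broadcast[Tweet](2))
+    tweets ~> bcast.in
+    bcast.out(0) ~> Flow[Tweet].map(_.author) ~> writeAuthors
+    bcast.out(1) ~> Flow[Tweet].mapConcat(_.hashtags.toList) ~> writeHashtags
+    ClosedShape
+  })
+  g.run()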
+
+Broadcasting a stream
+---------------------
+Now let's say we want to persist all hashtags, as well as all author names from this one live stream.
+For example we'd like to write all author handles into one file, and all hashtags into another file on disk.
+This means we have to split the source stream into two streams which will handle the writing to these different files.
+
+Elements that can be used to form such "fan-out" (or "fan-in") structures are referred to as "junctions" in Akka Streams.
+One of these that we'll be using in this example is called :class:`Broadcast`, and it simply emits elements from its
+input port to all of its output ports.
+
+Akka Streams intentionally separate the linear stream structures (Flows) from the non-linear, branching ones (Graphs)
+in order to offer the most convenient API for both of these cases. Graphs can express arbitrarily complex stream setups
+at the expense of not reading as familiarly as collection transformations.
+
+Graphs are constructed using :class:`GraphDSL` like this:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#graph-dsl-broadcast
+
+As you can see, inside the :class:`GraphDSL` we use an implicit graph builder ``b`` to mutably construct the graph
+using the ``~>`` "edge operator" (also read as "connect", "via" or "to"). The operator is provided implicitly
+by importing ``GraphDSL.Implicits._``.
+
+``GraphDSL.create`` returns a :class:`Graph`, in this example a :class:`Graph[ClosedShape, Unit]` where
+:class:`ClosedShape` means that it is *a fully connected graph* or "closed" - there are no unconnected inputs or outputs.
+Since it is closed it is possible to transform the graph into a :class:`RunnableGraph` using ``RunnableGraph.fromGraph``.
+The runnable graph can then be ``run()`` to materialize a stream out of it.
+
+Both :class:`Graph` and :class:`RunnableGraph` are *immutable, thread-safe, and freely shareable*.
+
+A graph can also have one of several other shapes, with one or more unconnected ports. Having unconnected ports
+expresses a graph that is a *partial graph*. Concepts around composing and nesting graphs in large structures are
+explained in detail in :ref:`composition-scala`. It is also possible to wrap complex computation graphs
+as Flows, Sinks or Sources, which will be explained in detail in
+:ref:`constructing-sources-sinks-flows-from-partial-graphs-scala`.
+
+Back-pressure in action
+-----------------------
+One of the main advantages of Akka Streams is that they *always* propagate back-pressure information from stream Sinks
+(Subscribers) to their Sources (Publishers). It is not an optional feature, and is enabled at all times. To learn more
+about the back-pressure protocol used by Akka Streams and all other Reactive Streams compatible implementations read
+:ref:`back-pressure-explained-scala`.
+
+A typical problem for applications that do *not* use Akka Streams is that they are unable to process the incoming data fast enough,
+either temporarily or by design, and will start buffering incoming data until there's no more space to buffer, resulting
+in either ``OutOfMemoryError``\ s or other severe degradations of service responsiveness. With Akka Streams buffering can
+and must be handled explicitly. For example, if we are only interested in the "*most recent tweets, with a buffer of 10
+elements*" this can be expressed using the ``buffer`` element:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#tweets-slow-consumption-dropHead
+
+The ``buffer`` element takes an explicit and required ``OverflowStrategy``, which defines how the buffer should react
+when it receives another element while it is full. The provided strategies include dropping the oldest element (``dropHead``),
+dropping the entire buffer, signalling errors etc. Be sure to pick the strategy that fits your use case best.
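+The elided ``#tweets-slow-consumption-dropHead`` snippet boils down to something like the following sketch
+(``slowComputation`` is a hypothetical stand-in for whatever makes the consumer slow):
+
+.. code-block:: scala
+
+   import akka.stream.OverflowStrategy
+   import akka.stream.scaladsl.Sink
+
+   tweets
+     .buffer(10, OverflowStrategy.dropHead) // keep only the 10 most recent tweets
+     .map(slowComputation)
+     .runWith(Sink.ignore)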
+.. _materialized-values-quick-scala:
+
+Materialized values
+-------------------
+So far we've only been processing data using Flows and consuming it into some kind of external Sink - be it by printing
+values or storing them in some external system. However sometimes we may be interested in some value that can be
+obtained from the materialized processing pipeline. For example, we may want to know how many tweets we have processed.
+This question is not as obvious to answer in case of an infinite stream of tweets (one way to answer
+it in a streaming setting would be to create a stream of counts described as "*up until now*, we've processed N tweets"),
+but in general it is possible to deal with finite streams and come up with a nice result such as a total count of elements.
+
+First, let's write such an element counter using ``Sink.fold`` and see what the types look like:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#tweets-fold-count
+
+First we prepare a reusable ``Flow`` that will change each incoming tweet into an integer of value ``1``. We'll combine
+it with a ``Sink.fold`` that will sum all ``Int`` elements of the stream and make its result available as
+a ``Future[Int]``. Next we connect the ``tweets`` stream to ``count`` with ``via``. Finally we connect the Flow to the previously
+prepared Sink using ``toMat``.
+
+Remember those mysterious ``Mat`` type parameters on ``Source[+Out, +Mat]``, ``Flow[-In, +Out, +Mat]`` and ``Sink[-In, +Mat]``?
+They represent the type of values these processing parts return when materialized. When you chain these together,
+you can explicitly combine their materialized values. In our example we used the ``Keep.right`` predefined function,
+which tells the implementation to only care about the materialized type of the stage currently appended to the right.
+The materialized type of ``sumSink`` is ``Future[Int]``, and because of using ``Keep.right`` the resulting :class:`RunnableGraph`
+also has a type parameter of ``Future[Int]``.
+
+This step does *not* yet materialize the
+processing pipeline; it merely prepares the description of the Flow, which is now connected to a Sink, and therefore can
+be ``run()``, as indicated by its type: ``RunnableGraph[Future[Int]]``. Next we call ``run()`` which uses the implicit :class:`ActorMaterializer`
+to materialize and run the Flow. The value returned by calling ``run()`` on a ``RunnableGraph[T]`` is of type ``T``.
+In our case this type is ``Future[Int]`` which, when completed, will contain the total length of our ``tweets`` stream.
+In case of the stream failing, this future would complete with a Failure.
+
+A :class:`RunnableGraph` may be reused
+and materialized multiple times, because it is just the "blueprint" of the stream. This means that if we materialize a stream
+twice, for example one that consumes a live stream of tweets for a minute each time, the materialized values of those two
+materializations will be different, as illustrated by this example:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#tweets-runnable-flow-materialized-twice
+
+Many elements in Akka Streams provide materialized values which can be used for obtaining either results of computation or
+handles for steering these elements, which will be discussed in detail in :ref:`stream-materialization-scala`. Summing up this section, now we know
+what happens behind the scenes when we run this one-liner, which is equivalent to the multi-line version above:
+
+.. includecode:: ../code/docs/stream/TwitterStreamQuickstartDocSpec.scala#tweets-fold-count-oneline
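+Since the fold snippets above are elided in this rendering, the whole section boils down to roughly the following
+sketch (the names ``count``, ``sumSink`` and ``counterGraph`` follow the prose above; the exact bodies are assumptions):
+
+.. code-block:: scala
+
+   import scala.concurrent.Future
+   import akka.stream.scaladsl.{ Flow, Keep, RunnableGraph, Sink }
+
+   val count: Flow[Tweet, Int, NotUsed] = Flow[Tweet].map(_ => 1)      // turn each tweet into a 1
+   val sumSink: Sink[Int, Future[Int]] = Sink.fold[Int, Int](0)(_ + _) // sum them up
+
+   // Keep.right keeps the Sink's materialized value, a Future[Int]
+   val counterGraph: RunnableGraph[Future[Int]] = tweets.via(count).toMat(sumSink)(Keep.right)
+   val sum: Future[Int] = counterGraph.run()
+
+   // ...and the equivalent one-liner:
+   val sumOneLiner: Future[Int] = tweets.map(_ => 1).runWith(sumSink)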
+
+.. note::
+  ``runWith()`` is a convenience method that automatically ignores the materialized value of all other stages except
+  those appended by the ``runWith()`` itself. In the above example it translates to using ``Keep.right`` as the combiner
+  for materialized values.
diff --git a/akka-http-core/src/main/java/akka/http/javadsl/model/HttpEntities.java b/akka-http-core/src/main/java/akka/http/javadsl/model/HttpEntities.java
index 02d34db036..9a236faad5 100644
--- a/akka-http-core/src/main/java/akka/http/javadsl/model/HttpEntities.java
+++ b/akka-http-core/src/main/java/akka/http/javadsl/model/HttpEntities.java
@@ -87,7 +87,7 @@ public final class HttpEntities {
                 (akka.http.scaladsl.model.ContentType) contentType, toScala(data));
     }
-    
+
     private static akka.stream.scaladsl.Source toScala(Source javaSource) {
         return (akka.stream.scaladsl.Source)javaSource.asScala();
     }
diff --git a/akka-http-core/src/main/java/akka/http/javadsl/model/HttpMessage.java b/akka-http-core/src/main/java/akka/http/javadsl/model/HttpMessage.java
index 71b873ce81..5f1f1ae812 100644
--- a/akka-http-core/src/main/java/akka/http/javadsl/model/HttpMessage.java
+++ b/akka-http-core/src/main/java/akka/http/javadsl/model/HttpMessage.java
@@ -4,10 +4,16 @@
 package akka.http.javadsl.model;
 
+import akka.Done;
+import akka.stream.Materializer;
+import akka.http.javadsl.model.headers.HttpCredentials;
 import akka.util.ByteString;
+import scala.concurrent.Future;
+
 import java.io.File;
 import java.nio.file.Path;
 import java.util.Optional;
+import java.util.concurrent.CompletionStage;
 
 /**
  * The base type for an Http message (request or response).
@@ -55,7 +61,44 @@ public interface HttpMessage {
      */
     ResponseEntity entity();
 
-    public static interface MessageTransformations {
+    /**
+     * Discards the entity's data bytes by running the {@code dataBytes} Source contained in the {@code entity}
+     * of this HTTP message.
+     *
+     * Note: It is crucial that entities are either discarded, or consumed by running the underlying [[Source]],
+     * as otherwise not consuming the data will trigger back-pressure on the underlying TCP connection
+     * (as designed), however possibly leading to an idle-timeout that will close the connection instead of
+     * just having ignored the data.
+     *
+     * Warning: It is not allowed to discard and/or consume the {@code entity.dataBytes} more than once,
+     * as the stream is directly attached to the "live" incoming data source from the underlying TCP connection.
+     * Allowing it to be consumable twice would require buffering the incoming data, thus defeating the purpose
+     * of its streaming nature. If the dataBytes source is materialized a second time, it will fail with a
+     * "stream cannot be materialized more than once" exception.
+     *
+     * In future versions, more automatic ways to warn or resolve these situations may be introduced, see issue #18716.
+     */
+    DiscardedEntity discardEntityBytes(Materializer materializer);
+
+    /**
+     * Represents the currently being-drained HTTP Entity, which triggers completion of the contained
+     * Future once the entity of the given HttpMessage has been drained completely.
+     */
+    interface DiscardedEntity {
+        /**
+         * This future completes successfully once the underlying entity stream has been
+         * successfully drained (and fails otherwise).
+ */ + Future future(); + + /** + * This future completes successfully once the underlying entity stream has been + * successfully drained (and fails otherwise). + */ + CompletionStage completionStage(); + } + + interface MessageTransformations { /** * Returns a copy of this message with a new protocol. */ @@ -71,6 +114,11 @@ public interface HttpMessage { */ Self addHeaders(Iterable headers); + /** + * Returns a copy of this message with the given http credential header added to the list of headers. + */ + Self addCredentials(HttpCredentials credentials); + /** * Returns a copy of this message with all headers of the given name (case-insensitively) removed. */ diff --git a/akka-http-core/src/main/java/akka/http/javadsl/model/HttpRequest.java b/akka-http-core/src/main/java/akka/http/javadsl/model/HttpRequest.java index 99288cae1f..b22acb29ca 100644 --- a/akka-http-core/src/main/java/akka/http/javadsl/model/HttpRequest.java +++ b/akka-http-core/src/main/java/akka/http/javadsl/model/HttpRequest.java @@ -4,7 +4,12 @@ package akka.http.javadsl.model; +import akka.Done; import akka.http.impl.util.JavaAccessors; +import akka.stream.Materializer; +import akka.stream.javadsl.Sink; + +import java.util.concurrent.CompletionStage; /** * Represents an Http request. diff --git a/akka-http-core/src/main/java/akka/http/javadsl/model/HttpResponse.java b/akka-http-core/src/main/java/akka/http/javadsl/model/HttpResponse.java index d5ffeb6aca..11d5dd45fb 100644 --- a/akka-http-core/src/main/java/akka/http/javadsl/model/HttpResponse.java +++ b/akka-http-core/src/main/java/akka/http/javadsl/model/HttpResponse.java @@ -4,7 +4,12 @@ package akka.http.javadsl.model; +import akka.Done; import akka.http.impl.util.JavaAccessors; +import akka.stream.Materializer; +import akka.stream.javadsl.Sink; + +import java.util.concurrent.CompletionStage; /** * Represents an Http response. @@ -16,7 +21,7 @@ public abstract class HttpResponse implements HttpMessage, HttpMessage.MessageTr public abstract StatusCode status(); /** - * Returns the entity of this request. + * Returns the entity of this response. */ public abstract ResponseEntity entity(); diff --git a/akka-http-core/src/main/java/akka/http/javadsl/model/Multiparts.java b/akka-http-core/src/main/java/akka/http/javadsl/model/Multiparts.java new file mode 100644 index 0000000000..8f9dd0ea14 --- /dev/null +++ b/akka-http-core/src/main/java/akka/http/javadsl/model/Multiparts.java @@ -0,0 +1,140 @@ +/* + * Copyright (C) 2016-2016 Lightbend Inc. + */ +package akka.http.javadsl.model; + +import scala.collection.immutable.List; +import scala.collection.immutable.Nil$; + +import java.nio.file.Path; +import java.util.Collections; +import java.util.Map; + +import static akka.http.impl.util.Util.convertArray; +import static akka.http.impl.util.Util.convertMapToScala; +import static akka.http.impl.util.Util.emptyMap; + +/** + * Constructors for Multipart instances + */ +public final class Multiparts { + /** + * Constructor for `multipart/form-data` content as defined in http://tools.ietf.org/html/rfc2388. + * All parts must have distinct names. (This is not verified!) + */ + public static Multipart.FormData createFormDataFromParts(Multipart.FormData.BodyPart... parts) { + return akka.http.scaladsl.model.Multipart.FormData$.MODULE$.createNonStrict(convertArray(parts)); + } + + /** + * Constructor for `multipart/form-data` content as defined in http://tools.ietf.org/html/rfc2388. + * All parts must have distinct names. (This is not verified!) 
+ */ + public static Multipart.FormData.Strict createStrictFormDataFromParts(Multipart.FormData.BodyPart.Strict... parts) { + return akka.http.scaladsl.model.Multipart.FormData$.MODULE$.createStrict(convertArray(parts)); + } + + /** + * Constructor for `multipart/form-data` content as defined in http://tools.ietf.org/html/rfc2388. + * All parts must have distinct names. (This is not verified!) + */ + public static Multipart.FormData.Strict createFormDataFromFields(Map fields) { + return akka.http.scaladsl.model.Multipart.FormData$.MODULE$.createStrict(toScalaMap(fields)); + } + + /** + * Creates a FormData instance that contains a single part backed by the given file. + * + * To create an instance with several parts or for multiple files, use + * `Multiparts.createFormDataFromParts(Multiparts.createFormDataPartFromPath("field1", ...), Multiparts.createFormDataPartFromPath("field2", ...)` + */ + public static Multipart.FormData createFormDataFromPath(String name, ContentType contentType, Path path, int chunkSize) { + return akka.http.scaladsl.model.Multipart.FormData$.MODULE$.fromPath(name, (akka.http.scaladsl.model.ContentType) contentType, path, chunkSize); + } + + /** + * Creates a FormData instance that contains a single part backed by the given file. + * + * To create an instance with several parts or for multiple files, use + * `Multiparts.createFormDataFromParts(Multiparts.createFormDataPartFromPath("field1", ...), Multiparts.createFormDataPartFromPath("field2", ...)` + */ + public static Multipart.FormData createFormDataFromPath(String name, ContentType contentType, Path path) { + return akka.http.scaladsl.model.Multipart.FormData$.MODULE$.fromPath(name, (akka.http.scaladsl.model.ContentType) contentType, path, -1); + } + + /** + * Creates a BodyPart backed by a file that will be streamed using a FileSource. + */ + public static Multipart.FormData.BodyPart createFormDataPartFromPath(String name, ContentType contentType, Path path, int chunkSize) { + return akka.http.scaladsl.model.Multipart$FormData$BodyPart$.MODULE$.fromPath(name, (akka.http.scaladsl.model.ContentType) contentType, path, chunkSize); + } + + /** + * Creates a BodyPart backed by a file that will be streamed using a FileSource. + */ + public static Multipart.FormData.BodyPart createFormDataPartFromPath(String name, ContentType contentType, Path path) { + return akka.http.scaladsl.model.Multipart$FormData$BodyPart$.MODULE$.fromPath(name, (akka.http.scaladsl.model.ContentType) contentType, path, -1); + } + + /** + * Creates a BodyPart. + */ + public static Multipart.FormData.BodyPart createFormDataBodyPart(String name, BodyPartEntity entity) { + List nil = Nil$.MODULE$; + Map additionalDispositionParams = Collections.emptyMap(); + return akka.http.scaladsl.model.Multipart$FormData$BodyPart$Builder$.MODULE$.create(name, (akka.http.scaladsl.model.BodyPartEntity) entity, + convertMapToScala(additionalDispositionParams), nil); + } + + /** + * Creates a BodyPart. + */ + public static Multipart.FormData.BodyPart createFormDataBodyPart(String name, BodyPartEntity entity, Map additionalDispositionParams) { + List nil = Nil$.MODULE$; + return akka.http.scaladsl.model.Multipart$FormData$BodyPart$Builder$.MODULE$.create(name, (akka.http.scaladsl.model.BodyPartEntity) entity, + convertMapToScala(additionalDispositionParams), nil); + } + + /** + * Creates a BodyPart. 
+ */ + public static Multipart.FormData.BodyPart createFormDataBodyPart(String name, BodyPartEntity entity, Map additionalDispositionParams, java.util.List headers) { + return akka.http.scaladsl.model.Multipart$FormData$BodyPart$Builder$.MODULE$.create(name, (akka.http.scaladsl.model.BodyPartEntity) entity, + convertMapToScala(additionalDispositionParams), toScalaSeq(headers)); + } + + /** + * Creates a BodyPart.Strict. + */ + public static Multipart.FormData.BodyPart.Strict createFormDataBodyPartStrict(String name, HttpEntity.Strict entity) { + List nil = Nil$.MODULE$; + Map additionalDispositionParams = Collections.emptyMap(); + return akka.http.scaladsl.model.Multipart$FormData$BodyPart$StrictBuilder$.MODULE$.createStrict(name, (akka.http.scaladsl.model.HttpEntity.Strict) entity, + convertMapToScala(additionalDispositionParams), nil); + } + + /** + * Creates a BodyPart.Strict. + */ + public static Multipart.FormData.BodyPart.Strict createFormDataBodyPartStrict(String name, HttpEntity.Strict entity, Map additionalDispositionParams) { + List nil = Nil$.MODULE$; + return akka.http.scaladsl.model.Multipart$FormData$BodyPart$StrictBuilder$.MODULE$.createStrict(name, (akka.http.scaladsl.model.HttpEntity.Strict) entity, + convertMapToScala(additionalDispositionParams), nil); + } + + /** + * Creates a BodyPart.Strict. + */ + public static Multipart.FormData.BodyPart.Strict createFormDataBodyPartStrict(String name, HttpEntity.Strict entity, Map additionalDispositionParams, java.util.List headers) { + return akka.http.scaladsl.model.Multipart$FormData$BodyPart$StrictBuilder$.MODULE$.createStrict(name, (akka.http.scaladsl.model.HttpEntity.Strict) entity, + convertMapToScala(additionalDispositionParams), toScalaSeq(headers)); + } + + private static scala.collection.immutable.Map toScalaMap(Map map) { + return emptyMap.$plus$plus(scala.collection.JavaConverters.mapAsScalaMapConverter(map).asScala()); + } + + private static scala.collection.Iterable toScalaSeq(java.util.List _headers) { + return scala.collection.JavaConverters.collectionAsScalaIterableConverter(_headers).asScala(); + } +} diff --git a/akka-http-core/src/main/java/akka/http/javadsl/model/headers/Connection.java b/akka-http-core/src/main/java/akka/http/javadsl/model/headers/Connection.java new file mode 100644 index 0000000000..429c1856fc --- /dev/null +++ b/akka-http-core/src/main/java/akka/http/javadsl/model/headers/Connection.java @@ -0,0 +1,17 @@ +/** + * Copyright (C) 2009-2016 Lightbend Inc. + */ + +package akka.http.javadsl.model.headers; + +/** + * Model for the `Connection` header. + * Specification: https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.10 + */ +public abstract class Connection extends akka.http.scaladsl.model.HttpHeader { + public abstract Iterable getTokens(); + + public static Connection create(String... 
directives) { + return new akka.http.scaladsl.model.headers.Connection(akka.http.impl.util.Util.convertArray(directives)); + } +} diff --git a/akka-http-core/src/main/java/akka/http/javadsl/model/headers/HttpChallenge.java b/akka-http-core/src/main/java/akka/http/javadsl/model/headers/HttpChallenge.java index ad753869ff..c018313d53 100644 --- a/akka-http-core/src/main/java/akka/http/javadsl/model/headers/HttpChallenge.java +++ b/akka-http-core/src/main/java/akka/http/javadsl/model/headers/HttpChallenge.java @@ -20,4 +20,12 @@ public abstract class HttpChallenge { public static HttpChallenge create(String scheme, String realm, Map params) { return new akka.http.scaladsl.model.headers.HttpChallenge(scheme, realm, Util.convertMapToScala(params)); } -} \ No newline at end of file + + public static HttpChallenge createBasic(String realm) { + return create("Basic", realm); + } + + public static HttpChallenge createOAuth2(String realm) { + return create("Bearer", realm); + } +} diff --git a/akka-http-core/src/main/resources/reference.conf b/akka-http-core/src/main/resources/reference.conf index 5dfe6e340b..28012b6ba1 100644 --- a/akka-http-core/src/main/resources/reference.conf +++ b/akka-http-core/src/main/resources/reference.conf @@ -177,6 +177,13 @@ akka.http { # single host endpoint is allowed to establish. Must be greater than zero. max-connections = 4 + # The minimum number of parallel connections that a pool should keep alive ("hot"). + # If the number of connections is falling below the given threshold, new ones are being spawned. + # You can use this setting to build a hot pool of "always on" connections. + # Default is 0, meaning there might be no active connection at given moment. + # Keep in mind that `min-connections` should be smaller than `max-connections` or equal + min-connections = 0 + # The maximum number of times failed requests are attempted again, # (if the request can be safely retried) before giving up and returning an error. # Set to zero to completely disable request retries. 
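As a usage sketch for the new ``min-connections`` setting documented above (an illustration, not part of this patch;
the values, system name and the ``akka.http.host-connection-pool`` path are assumptions of this sketch), a client
could keep a warm pool like this:

.. code-block:: scala

   import akka.actor.ActorSystem
   import com.typesafe.config.ConfigFactory

   // Override the pool settings from code; the same entries could
   // equally live in an application.conf file.
   val config = ConfigFactory.parseString(
     """
     akka.http.host-connection-pool {
       min-connections = 2  # keep two "hot" connections alive
       max-connections = 4  # must be >= min-connections
     }
     """).withFallback(ConfigFactory.load())

   val system = ActorSystem("pool-example", config)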
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolConductor.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolConductor.scala index 98f7ab171c..567edb151d 100644 --- a/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolConductor.scala +++ b/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolConductor.scala @@ -23,7 +23,7 @@ private object PoolConductor { case class Ports( requestIn: Inlet[RequestContext], slotEventIn: Inlet[RawSlotEvent], - slotOuts: immutable.Seq[Outlet[RequestContext]]) extends Shape { + slotOuts: immutable.Seq[Outlet[SlotCommand]]) extends Shape { override val inlets = requestIn :: slotEventIn :: Nil override def outlets = slotOuts @@ -38,14 +38,18 @@ private object PoolConductor { Ports( inlets.head.asInstanceOf[Inlet[RequestContext]], inlets.last.asInstanceOf[Inlet[RawSlotEvent]], - outlets.asInstanceOf[immutable.Seq[Outlet[RequestContext]]]) + outlets.asInstanceOf[immutable.Seq[Outlet[SlotCommand]]]) + } + + final case class PoolSlotsSetting(minSlots: Int, maxSlots: Int) { + require(minSlots <= maxSlots, "min-connections must be <= max-connections") } /* Stream Setup ============ - Request- - Request- +-----------+ +-----------+ Switch- +-------------+ +-----------+ Context + Slot- + Request- +-----------+ +-----------+ Switch- +-------------+ +-----------+ Command Context | retry | | slot- | Command | doubler | | route +--------------> +--------->| Merge +---->| Selector +-------------->| (MapConcat) +---->| (Flexi +--------------> | | | | | | | Route) +--------------> @@ -63,17 +67,18 @@ private object PoolConductor { +---------+ */ - def apply(slotCount: Int, pipeliningLimit: Int, log: LoggingAdapter): Graph[Ports, Any] = + def apply(slotSettings: PoolSlotsSetting, pipeliningLimit: Int, log: LoggingAdapter): Graph[Ports, Any] = GraphDSL.create() { implicit b ⇒ import GraphDSL.Implicits._ val retryMerge = b.add(MergePreferred[RequestContext](1, eagerComplete = true)) - val slotSelector = b.add(new SlotSelector(slotCount, pipeliningLimit, log)) - val route = b.add(new Route(slotCount)) + val slotSelector = b.add(new SlotSelector(slotSettings, pipeliningLimit, log)) + val route = b.add(new Route(slotSettings.maxSlots)) val retrySplit = b.add(Broadcast[RawSlotEvent](2)) - val flatten = Flow[RawSlotEvent].mapAsyncUnordered(slotCount) { + val flatten = Flow[RawSlotEvent].mapAsyncUnordered(slotSettings.maxSlots) { case x: SlotEvent.Disconnected ⇒ FastFuture.successful(x) case SlotEvent.RequestCompletedFuture(future) ⇒ future + case x: SlotEvent.ConnectedEagerly ⇒ FastFuture.successful(x) case x ⇒ throw new IllegalStateException("Unexpected " + x) } @@ -85,7 +90,11 @@ private object PoolConductor { Ports(retryMerge.in(0), retrySplit.in, route.outArray.toList) } - private case class SwitchCommand(rc: RequestContext, slotIx: Int) + sealed trait SlotCommand + final case class DispatchCommand(rc: RequestContext) extends SlotCommand + final case object ConnectEagerlyCommand extends SlotCommand + + final case class SwitchSlotCommand(cmd: SlotCommand, slotIx: Int) // the SlotSelector keeps the state of all slots as instances of this ADT private sealed trait SlotState @@ -105,19 +114,19 @@ private object PoolConductor { private case class Busy(openRequests: Int) extends SlotState { require(openRequests > 0) } private object Busy extends Busy(1) - private class SlotSelector(slotCount: Int, pipeliningLimit: Int, log: LoggingAdapter) - extends GraphStage[FanInShape2[RequestContext, SlotEvent, SwitchCommand]] { + 
private class SlotSelector(slotSettings: PoolSlotsSetting, pipeliningLimit: Int, log: LoggingAdapter) + extends GraphStage[FanInShape2[RequestContext, SlotEvent, SwitchSlotCommand]] { private val ctxIn = Inlet[RequestContext]("requestContext") private val slotIn = Inlet[SlotEvent]("slotEvents") - private val out = Outlet[SwitchCommand]("switchCommand") + private val out = Outlet[SwitchSlotCommand]("slotCommand") override def initialAttributes = Attributes.name("SlotSelector") override val shape = new FanInShape2(ctxIn, slotIn, out) override def createLogic(effectiveAttributes: Attributes) = new GraphStageLogic(shape) { - val slotStates = Array.fill[SlotState](slotCount)(Unconnected) + val slotStates = Array.fill[SlotState](slotSettings.maxSlots)(Unconnected) var nextSlot = 0 setHandler(ctxIn, new InHandler { @@ -126,7 +135,7 @@ private object PoolConductor { val slot = nextSlot slotStates(slot) = slotStateAfterDispatch(slotStates(slot), ctx.request.method) nextSlot = bestSlot() - emit(out, SwitchCommand(ctx, slot), tryPullCtx) + emit(out, SwitchSlotCommand(DispatchCommand(ctx), slot), tryPullCtx) } }) @@ -137,6 +146,9 @@ private object PoolConductor { slotStates(slotIx) = slotStateAfterRequestCompleted(slotStates(slotIx)) case SlotEvent.Disconnected(slotIx, failed) ⇒ slotStates(slotIx) = slotStateAfterDisconnect(slotStates(slotIx), failed) + reconnectIfNeeded() + case SlotEvent.ConnectedEagerly(slotIx) ⇒ + // do nothing ... } pull(slotIn) val wasBlocked = nextSlot == -1 @@ -153,8 +165,21 @@ private object PoolConductor { override def preStart(): Unit = { pull(ctxIn) pull(slotIn) + + // eagerly start at least slotSettings.minSlots connections + (0 until slotSettings.minSlots).foreach { connect } } + def connect(slotIx: Int): Unit = { + emit(out, SwitchSlotCommand(ConnectEagerlyCommand, slotIx)) + slotStates(slotIx) = Idle + } + + private def reconnectIfNeeded(): Unit = + if (slotStates.count(_ != Unconnected) < slotSettings.minSlots) { + connect(slotStates.indexWhere(_ == Unconnected)) + } + def slotStateAfterDispatch(slotState: SlotState, method: HttpMethod): SlotState = slotState match { case Unconnected | Idle ⇒ if (method.isIdempotent) Loaded(1) else Busy(1) @@ -205,11 +230,11 @@ private object PoolConductor { } } - private class Route(slotCount: Int) extends GraphStage[UniformFanOutShape[SwitchCommand, RequestContext]] { + private class Route(slotCount: Int) extends GraphStage[UniformFanOutShape[SwitchSlotCommand, SlotCommand]] { override def initialAttributes = Attributes.name("PoolConductor.Route") - override val shape = new UniformFanOutShape[SwitchCommand, RequestContext](slotCount) + override val shape = new UniformFanOutShape[SwitchSlotCommand, SlotCommand](slotCount) override def createLogic(effectiveAttributes: Attributes) = new GraphStageLogic(shape) { shape.outArray foreach { setHandler(_, ignoreTerminateOutput) } @@ -217,8 +242,8 @@ private object PoolConductor { val in = shape.in setHandler(in, new InHandler { override def onPush(): Unit = { - val cmd = grab(in) - emit(shape.outArray(cmd.slotIx), cmd.rc, pullIn) + val switchCommand = grab(in) + emit(shape.outArray(switchCommand.slotIx), switchCommand.cmd, pullIn) } }) val pullIn = () ⇒ pull(in) diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolFlow.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolFlow.scala index 56799a1a98..6f404700f3 100644 --- a/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolFlow.scala +++ 
b/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolFlow.scala @@ -5,6 +5,7 @@ package akka.http.impl.engine.client import akka.NotUsed +import akka.http.impl.engine.client.PoolConductor.PoolSlotsSetting import akka.http.scaladsl.settings.ConnectionPoolSettings import scala.concurrent.{ Promise, Future } @@ -76,10 +77,14 @@ private object PoolFlow { import settings._ import GraphDSL.Implicits._ - val conductor = b.add(PoolConductor(maxConnections, pipeliningLimit, log)) + val conductor = b.add( + PoolConductor(PoolSlotsSetting(maxSlots = maxConnections, minSlots = minConnections), pipeliningLimit, log) + ) + val slots = Vector - .tabulate(maxConnections)(PoolSlot(_, connectionFlow, settings)) + .tabulate(maxConnections)(PoolSlot(_, connectionFlow)) .map(b.add) + val responseMerge = b.add(Merge[ResponseContext](maxConnections)) val slotEventMerge = b.add(Merge[PoolSlot.RawSlotEvent](maxConnections)) diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolInterfaceActor.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolInterfaceActor.scala index 8deb9c687b..1d0f7432ff 100644 --- a/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolInterfaceActor.scala +++ b/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolInterfaceActor.scala @@ -25,6 +25,8 @@ private object PoolInterfaceActor { case object Shutdown extends DeadLetterSuppression val name = SeqActorName("PoolInterfaceActor") + + def props(gateway: PoolGateway)(implicit fm: Materializer) = Props(new PoolInterfaceActor(gateway)).withDeploy(Deploy.local) } /** @@ -122,7 +124,7 @@ private class PoolInterfaceActor(gateway: PoolGateway)(implicit fm: Materializer case Shutdown ⇒ // signal coming in from gateway log.debug("Shutting down host connection pool to {}:{}", hcps.host, hcps.port) - onComplete() + onCompleteThenStop() while (!inputBuffer.isEmpty) { val PoolRequest(request, responsePromise) = inputBuffer.dequeue() responsePromise.completeWith(gateway(request)) @@ -147,9 +149,12 @@ private class PoolInterfaceActor(gateway: PoolGateway)(implicit fm: Materializer } def activateIdleTimeoutIfNecessary(): Unit = - if (remainingRequested == 0 && hcps.setup.settings.idleTimeout.isFinite) { + if (shouldStopOnIdle()) { import context.dispatcher val timeout = hcps.setup.settings.idleTimeout.asInstanceOf[FiniteDuration] activeIdleTimeout = Some(context.system.scheduler.scheduleOnce(timeout)(gateway.shutdown())) } + + private def shouldStopOnIdle(): Boolean = + remainingRequested == 0 && hcps.setup.settings.idleTimeout.isFinite && hcps.setup.settings.minConnections == 0 } diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolMasterActor.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolMasterActor.scala index 44b4de5859..48a5ce52be 100644 --- a/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolMasterActor.scala +++ b/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolMasterActor.scala @@ -51,8 +51,7 @@ private[http] final class PoolMasterActor extends Actor with ActorLogging { if (poolStatus.contains(gateway)) { throw new IllegalStateException(s"pool interface actor for $gateway already exists") } - val props = Props(new PoolInterfaceActor(gateway)).withDeploy(Deploy.local) - val ref = context.actorOf(props, PoolInterfaceActor.name.next()) + val ref = context.actorOf(PoolInterfaceActor.props(gateway), PoolInterfaceActor.name.next()) poolStatus += gateway → PoolInterfaceRunning(ref) poolInterfaces += ref → 
gateway context.watch(ref) @@ -133,7 +132,6 @@ private[http] final class PoolMasterActor extends Actor with ActorLogging { // Testing only. case PoolSize(sizePromise) ⇒ sizePromise.success(poolStatus.size) - } } diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolSlot.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolSlot.scala index 191ee07112..b73819f407 100644 --- a/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolSlot.scala +++ b/akka-http-core/src/main/scala/akka/http/impl/engine/client/PoolSlot.scala @@ -5,7 +5,7 @@ package akka.http.impl.engine.client import akka.actor._ -import akka.http.scaladsl.settings.ConnectionPoolSettings +import akka.http.impl.engine.client.PoolConductor.{ ConnectEagerlyCommand, DispatchCommand, SlotCommand } import akka.http.scaladsl.model.{ HttpEntity, HttpRequest, HttpResponse } import akka.stream._ import akka.stream.actor._ @@ -29,6 +29,11 @@ private object PoolSlot { final case class RetryRequest(rc: RequestContext) extends RawSlotEvent final case class RequestCompleted(slotIx: Int) extends SlotEvent final case class Disconnected(slotIx: Int, failedRequests: Int) extends SlotEvent + /** + * Slot with id "slotIx" has responded to request from PoolConductor and connected immediately + * Ordinary connections from slots don't produce this event + */ + final case class ConnectedEagerly(slotIx: Int) extends SlotEvent } private val slotProcessorActorName = SeqActorName("SlotProcessor") @@ -47,21 +52,19 @@ private object PoolSlot { | via slotEventMerge) v */ - def apply(slotIx: Int, connectionFlow: Flow[HttpRequest, HttpResponse, Any], - settings: ConnectionPoolSettings)(implicit - system: ActorSystem, - fm: Materializer): Graph[FanOutShape2[RequestContext, ResponseContext, RawSlotEvent], Any] = + def apply(slotIx: Int, connectionFlow: Flow[HttpRequest, HttpResponse, Any])(implicit system: ActorSystem, fm: Materializer): Graph[FanOutShape2[SlotCommand, ResponseContext, RawSlotEvent], Any] = GraphDSL.create() { implicit b ⇒ import GraphDSL.Implicits._ // TODO wouldn't be better to have them under a known parent? /user/SlotProcessor-0 seems weird val name = slotProcessorActorName.next() + val slotProcessor = b.add { Flow.fromProcessor { () ⇒ val actor = system.actorOf( - Props(new SlotProcessor(slotIx, connectionFlow, settings)).withDeploy(Deploy.local), + Props(new SlotProcessor(slotIx, connectionFlow)).withDeploy(Deploy.local), name) - ActorProcessor[RequestContext, List[ProcessorOut]](actor) + ActorProcessor[SlotCommand, List[ProcessorOut]](actor) }.mapConcat(ConstantFun.scalaIdentityFunction) } val split = b.add(Broadcast[ProcessorOut](2)) @@ -78,25 +81,36 @@ private object PoolSlot { import ActorSubscriberMessage._ /** - * An actor mananging a series of materializations of the given `connectionFlow`. - * To the outside it provides a stable flow stage, consuming `RequestContext` instances on its + * An actor managing a series of materializations of the given `connectionFlow`. + * To the outside it provides a stable flow stage, consuming `SlotCommand` instances on its * input (ActorSubscriber) side and producing `List[ProcessorOut]` instances on its output * (ActorPublisher) side. * The given `connectionFlow` is materialized into a running flow whenever required. * Completion and errors from the connection are not surfaced to the outside (unless we are * shutting down completely). 
*/
-  private class SlotProcessor(slotIx: Int, connectionFlow: Flow[HttpRequest, HttpResponse, Any],
-                              settings: ConnectionPoolSettings)(implicit fm: Materializer)
+  private class SlotProcessor(slotIx: Int, connectionFlow: Flow[HttpRequest, HttpResponse, Any])(implicit fm: Materializer)
     extends ActorSubscriber with ActorPublisher[List[ProcessorOut]] with ActorLogging {
 
     var exposedPublisher: akka.stream.impl.ActorPublisher[Any] = _
     var inflightRequests = immutable.Queue.empty[RequestContext]
-    val runnableGraph = Source.actorPublisher[HttpRequest](Props(new FlowInportActor(self)).withDeploy(Deploy.local))
+
+    val runnableGraph = Source.actorPublisher[HttpRequest](flowInportProps(self))
       .via(connectionFlow)
-      .toMat(Sink.actorSubscriber[HttpResponse](Props(new FlowOutportActor(self)).withDeploy(Deploy.local)))(Keep.both)
+      .toMat(Sink.actorSubscriber[HttpResponse](flowOutportProps(self)))(Keep.both)
       .named("SlotProcessorInternalConnectionFlow")
 
     override def requestStrategy = ZeroRequestStrategy
+
+    /**
+     * How the SlotProcessor changes its `receive`:
+     * waitingExposedPublisher -> waitingForSubscribePending -> unconnected ->
+     * waitingForDemandFromConnection OR waitingEagerlyConnected -> running
+     * A given slot can get to the 'running' state either via 'waitingForDemandFromConnection' or via 'waitingEagerlyConnected'.
+     * The difference between those two paths is that the first one is lazy - it reacts to a DispatchCommand and then uses
+     * the inport and outport actors to obtain more items,
+     * whereas the second one is eager - it reacts to the ConnectEagerlyCommand from the PoolConductor, sends SlotEvent.ConnectedEagerly
+     * back to the conductor and then waits for the first DispatchCommand.
+     */
     override def receive = waitingExposedPublisher
 
     def waitingExposedPublisher: Receive = {
@@ -114,10 +128,16 @@ private object PoolSlot {
     }
 
     val unconnected: Receive = {
-      case OnNext(rc: RequestContext) ⇒
+      case OnNext(DispatchCommand(rc: RequestContext)) ⇒
         val (connInport, connOutport) = runnableGraph.run()
         connOutport ! Request(totalDemand)
-        context.become(waitingForDemandFromConnection(connInport, connOutport, rc))
+        context.become(waitingForDemandFromConnection(connInport = connInport, connOutport = connOutport, rc))
+
+      case OnNext(ConnectEagerlyCommand) ⇒
+        val (in, out) = runnableGraph.run()
+        onNext(SlotEvent.ConnectedEagerly(slotIx) :: Nil)
+        out ! Request(totalDemand)
+        context.become(waitingEagerlyConnected(connInport = in, connOutport = out))
 
       case Request(_) ⇒ if (remainingRequested == 0) request(1) // ask for first request if necessary
@@ -130,6 +150,17 @@ private object PoolSlot {
       case c @ FromConnection(msg) ⇒ // ignore ...
     }
 
+    def waitingEagerlyConnected(connInport: ActorRef, connOutport: ActorRef): Receive = {
+      case FromConnection(Request(n)) ⇒
+        request(n)
+
+      case OnNext(DispatchCommand(rc: RequestContext)) ⇒
+        inflightRequests = inflightRequests.enqueue(rc)
+        request(1)
+        connInport ! OnNext(rc.request)
+        context.become(running(connInport, connOutport))
+    }
+
     def waitingForDemandFromConnection(connInport: ActorRef, connOutport: ActorRef, firstRequest: RequestContext): Receive = {
       case ev @ (Request(_) | Cancel) ⇒ connOutport ! ev
@@ -151,7 +182,7 @@ private object PoolSlot {
     def running(connInport: ActorRef, connOutport: ActorRef): Receive = {
       case ev @ (Request(_) | Cancel) ⇒ connOutport ! ev
       case ev @ (OnComplete | OnError(_)) ⇒ connInport ! ev
-      case OnNext(rc: RequestContext) ⇒
+      case OnNext(DispatchCommand(rc: RequestContext)) ⇒
         inflightRequests = inflightRequests.enqueue(rc)
         connInport !
OnNext(rc.request) @@ -225,6 +256,7 @@ private object PoolSlot { context.stop(self) } } + def flowInportProps(s: ActorRef) = Props(new FlowInportActor(s)).withDeploy(Deploy.local) private class FlowOutportActor(slotProcessor: ActorRef) extends ActorSubscriber with ActorLogging { def requestStrategy = ZeroRequestStrategy @@ -237,6 +269,7 @@ private object PoolSlot { context.stop(self) } } + def flowOutportProps(s: ActorRef) = Props(new FlowOutportActor(s)).withDeploy(Deploy.local) final class UnexpectedDisconnectException(msg: String, cause: Throwable) extends RuntimeException(msg, cause) { def this(msg: String) = this(msg, null) diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpMessageParser.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpMessageParser.scala index 781873cdfb..46c997d238 100644 --- a/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpMessageParser.scala +++ b/akka-http-core/src/main/scala/akka/http/impl/engine/parsing/HttpMessageParser.scala @@ -84,6 +84,8 @@ private[http] abstract class HttpMessageParser[Output >: MessageOutput <: Parser case NotEnoughDataException ⇒ // we are missing a try/catch{continue} wrapper somewhere throw new IllegalStateException("unexpected NotEnoughDataException", NotEnoughDataException) + case IllegalHeaderException(error) ⇒ + failMessageStart(StatusCodes.BadRequest, error) }) match { case Trampoline(x) ⇒ run(x) case x ⇒ x diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/BodyPartRenderer.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/BodyPartRenderer.scala index 4cfae5850d..6ba19d1913 100644 --- a/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/BodyPartRenderer.scala +++ b/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/BodyPartRenderer.scala @@ -17,8 +17,9 @@ import akka.stream.scaladsl.Source import akka.stream.stage._ import akka.util.ByteString import HttpEntity._ +import akka.stream.{ Attributes, FlowShape, Inlet, Outlet } -import scala.concurrent.forkjoin.ThreadLocalRandom +import java.util.concurrent.ThreadLocalRandom /** * INTERNAL API @@ -29,46 +30,60 @@ private[http] object BodyPartRenderer { boundary: String, nioCharset: Charset, partHeadersSizeHint: Int, - log: LoggingAdapter): PushPullStage[Multipart.BodyPart, Source[ChunkStreamPart, Any]] = - new PushPullStage[Multipart.BodyPart, Source[ChunkStreamPart, Any]] { + log: LoggingAdapter): GraphStage[FlowShape[Multipart.BodyPart, Source[ChunkStreamPart, Any]]] = + new GraphStage[FlowShape[Multipart.BodyPart, Source[ChunkStreamPart, Any]]] { var firstBoundaryRendered = false - override def onPush(bodyPart: Multipart.BodyPart, ctx: Context[Source[ChunkStreamPart, Any]]): SyncDirective = { - val r = new CustomCharsetByteStringRendering(nioCharset, partHeadersSizeHint) + val in: Inlet[Multipart.BodyPart] = Inlet("BodyPartRenderer.in") + val out: Outlet[Source[ChunkStreamPart, Any]] = Outlet("BodyPartRenderer.out") + override val shape: FlowShape[Multipart.BodyPart, Source[ChunkStreamPart, Any]] = FlowShape(in, out) - def bodyPartChunks(data: Source[ByteString, Any]): Source[ChunkStreamPart, Any] = { - val entityChunks = data.map[ChunkStreamPart](Chunk(_)) - (chunkStream(r.get) ++ entityChunks).mapMaterializedValue((_) ⇒ ()) - } + override def createLogic(inheritedAttributes: Attributes): GraphStageLogic = + new GraphStageLogic(shape) with InHandler with OutHandler { + override def onPush(): Unit = { + val r = new 
CustomCharsetByteStringRendering(nioCharset, partHeadersSizeHint) - def completePartRendering(): Source[ChunkStreamPart, Any] = - bodyPart.entity match { - case x if x.isKnownEmpty ⇒ chunkStream(r.get) - case Strict(_, data) ⇒ chunkStream((r ~~ data).get) - case Default(_, _, data) ⇒ bodyPartChunks(data) - case IndefiniteLength(_, data) ⇒ bodyPartChunks(data) + def bodyPartChunks(data: Source[ByteString, Any]): Source[ChunkStreamPart, Any] = { + val entityChunks = data.map[ChunkStreamPart](Chunk(_)) + (chunkStream(r.get) ++ entityChunks).mapMaterializedValue((_) ⇒ ()) + } + + def completePartRendering(entity: HttpEntity): Source[ChunkStreamPart, Any] = + entity match { + case x if x.isKnownEmpty ⇒ chunkStream(r.get) + case Strict(_, data) ⇒ chunkStream((r ~~ data).get) + case Default(_, _, data) ⇒ bodyPartChunks(data) + case IndefiniteLength(_, data) ⇒ bodyPartChunks(data) + } + + renderBoundary(r, boundary, suppressInitialCrLf = !firstBoundaryRendered) + firstBoundaryRendered = true + + val bodyPart = grab(in) + renderEntityContentType(r, bodyPart.entity) + renderHeaders(r, bodyPart.headers, log) + + push(out, completePartRendering(bodyPart.entity)) } - renderBoundary(r, boundary, suppressInitialCrLf = !firstBoundaryRendered) - firstBoundaryRendered = true - renderEntityContentType(r, bodyPart.entity) - renderHeaders(r, bodyPart.headers, log) - ctx.push(completePartRendering()) - } + override def onPull(): Unit = + if (isClosed(in) && firstBoundaryRendered) + completeRendering() + else if (isClosed(in)) completeStage() + else pull(in) - override def onPull(ctx: Context[Source[ChunkStreamPart, Any]]): SyncDirective = { - val finishing = ctx.isFinishing - if (finishing && firstBoundaryRendered) { - val r = new ByteStringRendering(boundary.length + 4) - renderFinalBoundary(r, boundary) - ctx.pushAndFinish(chunkStream(r.get)) - } else if (finishing) - ctx.finish() - else - ctx.pull() - } + override def onUpstreamFinish(): Unit = + if (isAvailable(out) && firstBoundaryRendered) completeRendering() - override def onUpstreamFinish(ctx: Context[Source[ChunkStreamPart, Any]]): TerminationDirective = ctx.absorbTermination() + private def completeRendering(): Unit = { + val r = new ByteStringRendering(boundary.length + 4) + renderFinalBoundary(r, boundary) + push(out, chunkStream(r.get)) + completeStage() + } + + setHandlers(in, out, this) + } private def chunkStream(byteString: ByteString): Source[ChunkStreamPart, Any] = Source.single(Chunk(byteString)) @@ -124,4 +139,14 @@ private[http] object BodyPartRenderer { random.nextBytes(array) Base64.custom.encodeToString(array, false) } + + /** + * Creates a new random number of default length and base64 encodes it (using a custom "safe" alphabet). + */ + def randomBoundaryWithDefaults(): String = randomBoundary() + + /** + * Creates a new random number of the given length and base64 encodes it (using a custom "safe" alphabet). 
+ */ + def randomBoundaryWithDefaultRandom(length: Int): String = randomBoundary(length) } diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/RenderSupport.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/RenderSupport.scala index 23895ada3a..ca4a3ef0a2 100644 --- a/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/RenderSupport.scala +++ b/akka-http-core/src/main/scala/akka/http/impl/engine/rendering/RenderSupport.scala @@ -15,6 +15,9 @@ import akka.http.scaladsl.model._ import akka.http.impl.util._ import akka.http.scaladsl.model.HttpEntity.ChunkStreamPart +import akka.stream.stage.{ Context, GraphStage, SyncDirective, TerminationDirective } +import akka.stream._ +import akka.stream.scaladsl.{ Sink, Source, Flow, Keep } /** * INTERNAL API */ @@ -53,19 +56,31 @@ private object RenderSupport { } object ChunkTransformer { - val flow = Flow[ChunkStreamPart].transform(() ⇒ new ChunkTransformer).named("renderChunks") + val flow = Flow.fromGraph(new ChunkTransformer).named("renderChunks") } - class ChunkTransformer extends StatefulStage[HttpEntity.ChunkStreamPart, ByteString] { - override def initial = new State { - override def onPush(chunk: HttpEntity.ChunkStreamPart, ctx: Context[ByteString]): SyncDirective = { - val bytes = renderChunk(chunk) - if (chunk.isLastChunk) ctx.pushAndFinish(bytes) - else ctx.push(bytes) + class ChunkTransformer extends GraphStage[FlowShape[HttpEntity.ChunkStreamPart, ByteString]] { + val out: Outlet[ByteString] = Outlet("ChunkTransformer.out") + val in: Inlet[HttpEntity.ChunkStreamPart] = Inlet("ChunkTransformer.in") + val shape: FlowShape[HttpEntity.ChunkStreamPart, ByteString] = FlowShape.of(in, out) + + override def createLogic(inheritedAttributes: Attributes): GraphStageLogic = + new GraphStageLogic(shape) with InHandler with OutHandler { + override def onPush(): Unit = { + val chunk = grab(in) + val bytes = renderChunk(chunk) + push(out, bytes) + if (chunk.isLastChunk) completeStage() + } + + override def onPull(): Unit = pull(in) + + override def onUpstreamFinish(): Unit = { + emit(out, defaultLastChunkBytes) + completeStage() + } + setHandlers(in, out, this) } - } - override def onUpstreamFinish(ctx: Context[ByteString]): TerminationDirective = - terminationEmit(Iterator.single(defaultLastChunkBytes), ctx) } object CheckContentLengthTransformer { diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/server/HttpServerBluePrint.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/server/HttpServerBluePrint.scala index dc8f453c56..9e6f78fd4a 100644 --- a/akka-http-core/src/main/scala/akka/http/impl/engine/server/HttpServerBluePrint.scala +++ b/akka-http-core/src/main/scala/akka/http/impl/engine/server/HttpServerBluePrint.scala @@ -392,7 +392,8 @@ private[http] object HttpServerBluePrint { case x: EntityStreamError if messageEndPending && openRequests.isEmpty ⇒ // client terminated the connection after receiving an early response to 100-continue completeStage() - case x ⇒ push(requestPrepOut, x) + case x ⇒ + push(requestPrepOut, x) } override def onUpstreamFinish() = if (openRequests.isEmpty) completeStage() @@ -414,19 +415,17 @@ private[http] object HttpServerBluePrint { val isEarlyResponse = messageEndPending && openRequests.isEmpty if (isEarlyResponse && response.status.isSuccess) log.warning( - """Sending 2xx response before end of request was received... - |Note that the connection will be closed after this response. Also, many clients will not read early responses! 
-        |Consider waiting for the request end before dispatching this response!""".stripMargin)
+      "Sending a 2xx 'early' response before end of request was received... " +
+        "Note that the connection will be closed after this response. Also, many clients will not read early responses! " +
+        "Consider only issuing this response after the request data has been completely read!")
 
       val close = requestStart.closeRequested ||
-        requestStart.expect100Continue && oneHundredContinueResponsePending ||
-        isClosed(requestParsingIn) && openRequests.isEmpty ||
+        (requestStart.expect100Continue && oneHundredContinueResponsePending) ||
+        (isClosed(requestParsingIn) && openRequests.isEmpty) ||
         isEarlyResponse
+
       emit(responseCtxOut, ResponseRenderingContext(response, requestStart.method, requestStart.protocol, close), pullHttpResponseIn)
-      if (close) complete(responseCtxOut)
-      // when the client closes the connection, we need to pull onc more time to get the
-      // request parser to complete
-      if (close && isEarlyResponse) pull(requestParsingIn)
+      if (close && requestStart.expect100Continue) pull(requestParsingIn)
     }
 
     override def onUpstreamFinish() = if (openRequests.isEmpty && isClosed(requestParsingIn)) completeStage()
@@ -609,7 +608,7 @@ private[http] object HttpServerBluePrint {
       })
 
     private var activeTimers = 0
-    private def timeout = ActorMaterializer.downcast(materializer).settings.subscriptionTimeoutSettings.timeout
+    private def timeout = ActorMaterializerHelper.downcast(materializer).settings.subscriptionTimeoutSettings.timeout
     private def addTimeout(s: SubscriptionTimeout): Unit = {
       if (activeTimers == 0) setKeepGoing(true)
       activeTimers += 1
diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameHandler.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameHandler.scala
index 549d3be663..7803b21545 100644
--- a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameHandler.scala
+++ b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/FrameHandler.scala
@@ -6,9 +6,11 @@ package akka.http.impl.engine.ws
 
 import akka.NotUsed
 import akka.stream.scaladsl.Flow
-import akka.stream.stage.{ SyncDirective, Context, StatefulStage }
 import akka.util.ByteString
 import Protocol.Opcode
+import akka.event.Logging
+import akka.stream.stage.{ GraphStage, GraphStageLogic, InHandler, OutHandler }
+import akka.stream.{ Attributes, FlowShape, Inlet, Outlet }
 
 import scala.util.control.NonFatal
@@ -21,158 +23,163 @@ import scala.util.control.NonFatal
 private[http] object FrameHandler {
 
   def create(server: Boolean): Flow[FrameEventOrError, Output, NotUsed] =
-    Flow[FrameEventOrError].transform(() ⇒ new HandlerStage(server))
+    Flow[FrameEventOrError].via(new HandlerStage(server))
 
-  private class HandlerStage(server: Boolean) extends StatefulStage[FrameEventOrError, Output] {
-    type Ctx = Context[Output]
-    def initial: State = Idle
+  private class HandlerStage(server: Boolean) extends GraphStage[FlowShape[FrameEventOrError, Output]] {
+    val in = Inlet[FrameEventOrError](Logging.simpleName(this) + ".in")
+    val out = Outlet[Output](Logging.simpleName(this) + ".out")
+    override val shape = FlowShape(in, out)
 
     override def toString: String = s"HandlerStage(server=$server)"
 
-    private object Idle extends StateWithControlFrameHandling {
-      def handleRegularFrameStart(start: FrameStart)(implicit ctx: Ctx): SyncDirective =
-        (start.header.opcode, start.isFullMessage) match {
-          case (Opcode.Binary, true) ⇒ publishMessagePart(BinaryMessagePart(start.data, last = true))
-          case (Opcode.Binary, false) ⇒
becomeAndHandleWith(new CollectingBinaryMessage, start) - case (Opcode.Text, _) ⇒ becomeAndHandleWith(new CollectingTextMessage, start) - case x ⇒ protocolError() + override def createLogic(attributes: Attributes): GraphStageLogic = + new GraphStageLogic(shape) with OutHandler { + setHandler(out, this) + setHandler(in, IdleHandler) + + override def onPull(): Unit = pull(in) + + private object IdleHandler extends ControlFrameStartHandler { + def setAndHandleFrameStartWith(newHandler: ControlFrameStartHandler, start: FrameStart): Unit = { + setHandler(in, newHandler) + newHandler.handleFrameStart(start) + } + + override def handleRegularFrameStart(start: FrameStart): Unit = + (start.header.opcode, start.isFullMessage) match { + case (Opcode.Binary, true) ⇒ publishMessagePart(BinaryMessagePart(start.data, last = true)) + case (Opcode.Binary, false) ⇒ setAndHandleFrameStartWith(new BinaryMessagehandler, start) + case (Opcode.Text, _) ⇒ setAndHandleFrameStartWith(new TextMessageHandler, start) + case x ⇒ pushProtocolError() + } } - } - private class CollectingBinaryMessage extends CollectingMessageFrame(Opcode.Binary) { - def createMessagePart(data: ByteString, last: Boolean): MessageDataPart = BinaryMessagePart(data, last) - } - private class CollectingTextMessage extends CollectingMessageFrame(Opcode.Text) { - val decoder = Utf8Decoder.create() + private class BinaryMessagehandler extends MessageHandler(Opcode.Binary) { + override def createMessagePart(data: ByteString, last: Boolean): MessageDataPart = + BinaryMessagePart(data, last) + } - def createMessagePart(data: ByteString, last: Boolean): MessageDataPart = - TextMessagePart(decoder.decode(data, endOfInput = last).get, last) - } + private class TextMessageHandler extends MessageHandler(Opcode.Text) { + val decoder = Utf8Decoder.create() - private abstract class CollectingMessageFrame(expectedOpcode: Opcode) extends StateWithControlFrameHandling { - var expectFirstHeader = true - var finSeen = false - def createMessagePart(data: ByteString, last: Boolean): MessageDataPart + override def createMessagePart(data: ByteString, last: Boolean): MessageDataPart = + TextMessagePart(decoder.decode(data, endOfInput = last).get, last) + } - def handleRegularFrameStart(start: FrameStart)(implicit ctx: Ctx): SyncDirective = { - if ((expectFirstHeader && start.header.opcode == expectedOpcode) // first opcode must be the expected - || start.header.opcode == Opcode.Continuation) { // further ones continuations - expectFirstHeader = false + private abstract class MessageHandler(expectedOpcode: Opcode) extends ControlFrameStartHandler { + var expectFirstHeader = true + var finSeen = false + def createMessagePart(data: ByteString, last: Boolean): MessageDataPart - if (start.header.fin) finSeen = true - publish(start) - } else protocolError() + override def handleRegularFrameStart(start: FrameStart): Unit = { + if ((expectFirstHeader && start.header.opcode == expectedOpcode) // first opcode must be the expected + || start.header.opcode == Opcode.Continuation) { // further ones continuations + expectFirstHeader = false + + if (start.header.fin) finSeen = true + publish(start) + } else pushProtocolError() + } + + override def handleFrameData(data: FrameData): Unit = publish(data) + + def publish(part: FrameEvent): Unit = try { + publishMessagePart(createMessagePart(part.data, last = finSeen && part.lastPart)) + } catch { + case NonFatal(e) ⇒ closeWithCode(Protocol.CloseCodes.InconsistentData) + } + } + + private trait ControlFrameStartHandler extends 
FrameHandler { + def handleRegularFrameStart(start: FrameStart): Unit + + override def handleFrameStart(start: FrameStart): Unit = start.header match { + case h: FrameHeader if h.mask.isDefined && !server ⇒ pushProtocolError() + case h: FrameHeader if h.rsv1 || h.rsv2 || h.rsv3 ⇒ pushProtocolError() + case FrameHeader(op, _, length, fin, _, _, _) if op.isControl && (length > 125 || !fin) ⇒ pushProtocolError() + case h: FrameHeader if h.opcode.isControl ⇒ + if (start.isFullMessage) handleControlFrame(h.opcode, start.data, this) + else collectControlFrame(start, this) + case _ ⇒ handleRegularFrameStart(start) + } + + override def handleFrameData(data: FrameData): Unit = + throw new IllegalStateException("Expected FrameStart") + } + + private class ControlFrameDataHandler(opcode: Opcode, _data: ByteString, nextHandler: InHandler) extends FrameHandler { + var data = _data + + override def handleFrameData(data: FrameData): Unit = { + this.data ++= data.data + if (data.lastPart) handleControlFrame(opcode, this.data, nextHandler) + else pull(in) + } + + override def handleFrameStart(start: FrameStart): Unit = + throw new IllegalStateException("Expected FrameData") + } + + private trait FrameHandler extends InHandler { + def handleFrameData(data: FrameData): Unit + def handleFrameStart(start: FrameStart): Unit + + def handleControlFrame(opcode: Opcode, data: ByteString, nextHandler: InHandler): Unit = { + setHandler(in, nextHandler) + opcode match { + case Opcode.Ping ⇒ publishDirectResponse(FrameEvent.fullFrame(Opcode.Pong, None, data, fin = true)) + case Opcode.Pong ⇒ + // ignore unsolicited Pong frame + pull(in) + case Opcode.Close ⇒ + setHandler(in, WaitForPeerTcpClose) + push(out, PeerClosed.parse(data)) + case Opcode.Other(o) ⇒ closeWithCode(Protocol.CloseCodes.ProtocolError, "Unsupported opcode") + case other ⇒ failStage( + new IllegalStateException(s"unexpected message of type [${other.getClass.getName}] when expecting ControlFrame") + ) + } + } + + def pushProtocolError(): Unit = closeWithCode(Protocol.CloseCodes.ProtocolError) + + def closeWithCode(closeCode: Int, reason: String = ""): Unit = { + setHandler(in, CloseAfterPeerClosed) + push(out, ActivelyCloseWithCode(Some(closeCode), reason)) + } + + def collectControlFrame(start: FrameStart, nextHandler: InHandler): Unit = { + require(!start.isFullMessage) + setHandler(in, new ControlFrameDataHandler(start.header.opcode, start.data, nextHandler)) + pull(in) + } + + def publishMessagePart(part: MessageDataPart): Unit = + if (part.last) emitMultiple(out, Iterator(part, MessageEnd), () ⇒ setHandler(in, IdleHandler)) + else push(out, part) + + def publishDirectResponse(frame: FrameStart): Unit = push(out, DirectAnswer(frame)) + + override def onPush(): Unit = grab(in) match { + case data: FrameData ⇒ handleFrameData(data) + case start: FrameStart ⇒ handleFrameStart(start) + case FrameError(ex) ⇒ failStage(ex) + } + } + + private object CloseAfterPeerClosed extends InHandler { + override def onPush(): Unit = grab(in) match { + case FrameStart(FrameHeader(Opcode.Close, _, length, _, _, _, _), data) ⇒ + setHandler(in, WaitForPeerTcpClose) + push(out, PeerClosed.parse(data)) + case _ ⇒ pull(in) // ignore all other data + } + } + + private object WaitForPeerTcpClose extends InHandler { + override def onPush(): Unit = pull(in) // ignore + } } - override def handleFrameData(data: FrameData)(implicit ctx: Ctx): SyncDirective = publish(data) - - private def publish(part: FrameEvent)(implicit ctx: Ctx): SyncDirective = - try 
publishMessagePart(createMessagePart(part.data, last = finSeen && part.lastPart)) - catch { - case NonFatal(e) ⇒ closeWithCode(Protocol.CloseCodes.InconsistentData) - } - } - - private class CollectingControlFrame(opcode: Opcode, _data: ByteString, nextState: State) extends InFrameState { - var data = _data - - def handleFrameData(data: FrameData)(implicit ctx: Ctx): SyncDirective = { - this.data ++= data.data - if (data.lastPart) handleControlFrame(opcode, this.data, nextState) - else ctx.pull() - } - } - - private def becomeAndHandleWith(newState: State, part: FrameEvent)(implicit ctx: Ctx): SyncDirective = { - become(newState) - current.onPush(part, ctx) - } - - /** Returns a SyncDirective if it handled the message */ - private def validateHeader(header: FrameHeader)(implicit ctx: Ctx): Option[SyncDirective] = header match { - case h: FrameHeader if h.mask.isDefined && !server ⇒ Some(protocolError()) - case h: FrameHeader if h.rsv1 || h.rsv2 || h.rsv3 ⇒ Some(protocolError()) - case FrameHeader(op, _, length, fin, _, _, _) if op.isControl && (length > 125 || !fin) ⇒ Some(protocolError()) - case _ ⇒ None - } - - private def handleControlFrame(opcode: Opcode, data: ByteString, nextState: State)(implicit ctx: Ctx): SyncDirective = { - become(nextState) - opcode match { - case Opcode.Ping ⇒ publishDirectResponse(FrameEvent.fullFrame(Opcode.Pong, None, data, fin = true)) - case Opcode.Pong ⇒ - // ignore unsolicited Pong frame - ctx.pull() - case Opcode.Close ⇒ - become(WaitForPeerTcpClose) - ctx.push(PeerClosed.parse(data)) - case Opcode.Other(o) ⇒ closeWithCode(Protocol.CloseCodes.ProtocolError, "Unsupported opcode") - case other ⇒ ctx.fail(new IllegalStateException(s"unexpected message of type [${other.getClass.getName}] when expecting ControlFrame")) - } - } - private def collectControlFrame(start: FrameStart, nextState: State)(implicit ctx: Ctx): SyncDirective = { - require(!start.isFullMessage) - become(new CollectingControlFrame(start.header.opcode, start.data, nextState)) - ctx.pull() - } - - private def publishMessagePart(part: MessageDataPart)(implicit ctx: Ctx): SyncDirective = - if (part.last) emit(Iterator(part, MessageEnd), ctx, Idle) - else ctx.push(part) - private def publishDirectResponse(frame: FrameStart)(implicit ctx: Ctx): SyncDirective = - ctx.push(DirectAnswer(frame)) - - private def protocolError(reason: String = "")(implicit ctx: Ctx): SyncDirective = - closeWithCode(Protocol.CloseCodes.ProtocolError, reason) - - private def closeWithCode(closeCode: Int, reason: String = "", cause: Throwable = null)(implicit ctx: Ctx): SyncDirective = { - become(CloseAfterPeerClosed) - ctx.push(ActivelyCloseWithCode(Some(closeCode), reason)) - } - - private object CloseAfterPeerClosed extends State { - def onPush(elem: FrameEventOrError, ctx: Context[Output]): SyncDirective = - elem match { - case FrameStart(FrameHeader(Opcode.Close, _, length, _, _, _, _), data) ⇒ - become(WaitForPeerTcpClose) - ctx.push(PeerClosed.parse(data)) - case _ ⇒ ctx.pull() // ignore all other data - } - } - private object WaitForPeerTcpClose extends State { - def onPush(elem: FrameEventOrError, ctx: Context[Output]): SyncDirective = - ctx.pull() // ignore - } - - private abstract class StateWithControlFrameHandling extends BetweenFrameState { - def handleRegularFrameStart(start: FrameStart)(implicit ctx: Ctx): SyncDirective - - def handleFrameStart(start: FrameStart)(implicit ctx: Ctx): SyncDirective = - validateHeader(start.header).getOrElse { - if (start.header.opcode.isControl) - if (start.isFullMessage) 
handleControlFrame(start.header.opcode, start.data, this) - else collectControlFrame(start, this) - else handleRegularFrameStart(start) - } - } - private abstract class BetweenFrameState extends ImplicitContextState { - def handleFrameData(data: FrameData)(implicit ctx: Ctx): SyncDirective = - throw new IllegalStateException("Expected FrameStart") - } - private abstract class InFrameState extends ImplicitContextState { - def handleFrameStart(start: FrameStart)(implicit ctx: Ctx): SyncDirective = - throw new IllegalStateException("Expected FrameData") - } - private abstract class ImplicitContextState extends State { - def handleFrameData(data: FrameData)(implicit ctx: Ctx): SyncDirective - def handleFrameStart(start: FrameStart)(implicit ctx: Ctx): SyncDirective - - def onPush(part: FrameEventOrError, ctx: Ctx): SyncDirective = - part match { - case data: FrameData ⇒ handleFrameData(data)(ctx) - case start: FrameStart ⇒ handleFrameStart(start)(ctx) - case FrameError(ex) ⇒ ctx.fail(ex) - } - } } sealed trait Output diff --git a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/WebSocketClientBlueprint.scala b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/WebSocketClientBlueprint.scala index 001932af61..e193877496 100644 --- a/akka-http-core/src/main/scala/akka/http/impl/engine/ws/WebSocketClientBlueprint.scala +++ b/akka-http-core/src/main/scala/akka/http/impl/engine/ws/WebSocketClientBlueprint.scala @@ -8,26 +8,23 @@ import akka.NotUsed import akka.http.scaladsl.model.ws._ import scala.concurrent.{ Future, Promise } - import akka.util.ByteString import akka.event.LoggingAdapter - import akka.stream.stage._ import akka.stream._ import akka.stream.TLSProtocol._ import akka.stream.scaladsl._ - import akka.http.scaladsl.settings.ClientConnectionSettings import akka.http.scaladsl.Http -import akka.http.scaladsl.model.{ HttpResponse, HttpMethods } +import akka.http.scaladsl.model.{ HttpMethods, HttpResponse } import akka.http.scaladsl.model.headers.Host - import akka.http.impl.engine.parsing.HttpMessageParser.StateResult -import akka.http.impl.engine.parsing.ParserOutput.{ RemainingBytes, ResponseStart, NeedMoreData } -import akka.http.impl.engine.parsing.{ ParserOutput, HttpHeaderParser, HttpResponseParser } +import akka.http.impl.engine.parsing.ParserOutput.{ NeedMoreData, RemainingBytes, ResponseStart } +import akka.http.impl.engine.parsing.{ HttpHeaderParser, HttpResponseParser, ParserOutput } import akka.http.impl.engine.rendering.{ HttpRequestRendererFactory, RequestRenderingContext } import akka.http.impl.engine.ws.Handshake.Client.NegotiatedWebSocketSettings import akka.http.impl.util.StreamUtils +import akka.stream.impl.fusing.GraphStages.SimpleLinearGraphStage object WebSocketClientBlueprint { /** @@ -59,68 +56,70 @@ object WebSocketClientBlueprint { val renderedInitialRequest = HttpRequestRendererFactory.renderStrict(RequestRenderingContext(initialRequest, hostHeader), settings, log) - class UpgradeStage extends StatefulStage[ByteString, ByteString] { - type State = StageState[ByteString, ByteString] + class UpgradeStage extends SimpleLinearGraphStage[ByteString] { - def initial: State = parsingResponse - - def parsingResponse: State = new State { - // a special version of the parser which only parses one message and then reports the remaining data - // if some is available - val parser = new HttpResponseParser(settings.parserSettings, HttpHeaderParser(settings.parserSettings)()) { - var first = true - override def handleInformationalResponses = false - override protected 
def parseMessage(input: ByteString, offset: Int): StateResult = { - if (first) { - first = false - super.parseMessage(input, offset) - } else { - emit(RemainingBytes(input.drop(offset))) - terminate() + override def createLogic(attributes: Attributes): GraphStageLogic = + new GraphStageLogic(shape) with InHandler with OutHandler { + // a special version of the parser which only parses one message and then reports the remaining data + // if some is available + val parser = new HttpResponseParser(settings.parserSettings, HttpHeaderParser(settings.parserSettings)()) { + var first = true + override def handleInformationalResponses = false + override protected def parseMessage(input: ByteString, offset: Int): StateResult = { + if (first) { + first = false + super.parseMessage(input, offset) + } else { + emit(RemainingBytes(input.drop(offset))) + terminate() + } } } - } - parser.setContextForNextResponse(HttpResponseParser.ResponseContext(HttpMethods.GET, None)) + parser.setContextForNextResponse(HttpResponseParser.ResponseContext(HttpMethods.GET, None)) - def onPush(elem: ByteString, ctx: Context[ByteString]): SyncDirective = { - parser.parseBytes(elem) match { - case NeedMoreData ⇒ ctx.pull() - case ResponseStart(status, protocol, headers, entity, close) ⇒ - val response = HttpResponse(status, headers, protocol = protocol) - Handshake.Client.validateResponse(response, subprotocol.toList, key) match { - case Right(NegotiatedWebSocketSettings(protocol)) ⇒ - result.success(ValidUpgrade(response, protocol)) + override def onPush(): Unit = { + parser.parseBytes(grab(in)) match { + case NeedMoreData ⇒ pull(in) + case ResponseStart(status, protocol, headers, entity, close) ⇒ + val response = HttpResponse(status, headers, protocol = protocol) + Handshake.Client.validateResponse(response, subprotocol.toList, key) match { + case Right(NegotiatedWebSocketSettings(protocol)) ⇒ + result.success(ValidUpgrade(response, protocol)) - become(transparent) - valve.open() + setHandler(in, new InHandler { + override def onPush(): Unit = push(out, grab(in)) + }) + valve.open() - val parseResult = parser.onPull() - require(parseResult == ParserOutput.MessageEnd, s"parseResult should be MessageEnd but was $parseResult") - parser.onPull() match { - case NeedMoreData ⇒ ctx.pull() - case RemainingBytes(bytes) ⇒ ctx.push(bytes) - case other ⇒ - throw new IllegalStateException(s"unexpected element of type ${other.getClass}") - } - case Left(problem) ⇒ - result.success(InvalidUpgradeResponse(response, s"WebSocket server at $uri returned $problem")) - ctx.fail(new IllegalArgumentException(s"WebSocket upgrade did not finish because of '$problem'")) - } - case other ⇒ - throw new IllegalStateException(s"unexpected element of type ${other.getClass}") + val parseResult = parser.onPull() + require(parseResult == ParserOutput.MessageEnd, s"parseResult should be MessageEnd but was $parseResult") + parser.onPull() match { + case NeedMoreData ⇒ pull(in) + case RemainingBytes(bytes) ⇒ push(out, bytes) + case other ⇒ + throw new IllegalStateException(s"unexpected element of type ${other.getClass}") + } + case Left(problem) ⇒ + result.success(InvalidUpgradeResponse(response, s"WebSocket server at $uri returned $problem")) + failStage(new IllegalArgumentException(s"WebSocket upgrade did not finish because of '$problem'")) + } + case other ⇒ + throw new IllegalStateException(s"unexpected element of type ${other.getClass}") + } } - } - } - def transparent: State = new State { - def onPush(elem: ByteString, ctx: Context[ByteString]): 
SyncDirective = ctx.push(elem) - } + override def onPull(): Unit = pull(in) + + setHandlers(in, out, this) + } + + override def toString = "UpgradeStage" } BidiFlow.fromGraph(GraphDSL.create() { implicit b ⇒ import GraphDSL.Implicits._ - val networkIn = b.add(Flow[ByteString].transform(() ⇒ new UpgradeStage)) + val networkIn = b.add(Flow[ByteString].via(new UpgradeStage)) val wsIn = b.add(Flow[ByteString]) val handshakeRequestSource = b.add(Source.single(renderedInitialRequest) ++ valve.source) diff --git a/akka-http-core/src/main/scala/akka/http/impl/model/parser/UriParser.scala b/akka-http-core/src/main/scala/akka/http/impl/model/parser/UriParser.scala index 4d2d675b45..687292a8a2 100644 --- a/akka-http-core/src/main/scala/akka/http/impl/model/parser/UriParser.scala +++ b/akka-http-core/src/main/scala/akka/http/impl/model/parser/UriParser.scala @@ -16,11 +16,17 @@ import Uri._ // http://tools.ietf.org/html/rfc3986 private[http] class UriParser(val input: ParserInput, - val uriParsingCharset: Charset = UTF8, - val uriParsingMode: Uri.ParsingMode = Uri.ParsingMode.Relaxed) extends Parser + val uriParsingCharset: Charset, + val uriParsingMode: Uri.ParsingMode, + val maxValueStackSize: Int) extends Parser(maxValueStackSize) with IpAddressParsing with StringBuilding { import CharacterClasses._ + def this(input: ParserInput, + uriParsingCharset: Charset = UTF8, + uriParsingMode: Uri.ParsingMode = Uri.ParsingMode.Relaxed) = + this(input, uriParsingCharset, uriParsingMode, 1024) + def parseAbsoluteUri(): Uri = rule(`absolute-URI` ~ EOI).run() match { case Right(_) => create(_scheme, _userinfo, _host, _port, collapseDotSegments(_path), _rawQueryString, _fragment) @@ -170,16 +176,33 @@ private[http] class UriParser(val input: ParserInput, clearSBForDecoding() ~ oneOrMore('+' ~ appendSB(' ') | `query-char` ~ appendSB() | `pct-encoded`) ~ push(getDecodedString()) | push("")) + def keyValuePair: Rule2[String, String] = rule { + part ~ ('=' ~ part | push(Query.EmptyValue)) + } + + // has a max value-stack depth of 3 + def keyValuePairsWithLimitedStackUse: Rule1[Query] = rule { + keyValuePair ~> { (key, value) => Query.Cons(key, value, Query.Empty) } ~ { + zeroOrMore('&' ~ keyValuePair ~> { (prefix: Query, key, value) => Query.Cons(key, value, prefix) }) ~> + (_.reverse) + } + } + // non-tail recursion, which we accept because it allows us to directly build the query // without having to reverse it at the end. - // Also: request queries usually do not have hundreds of elements, so we should get away with - // putting some pressure onto the JVM and value stack - def keyValuePairs: Rule1[Query] = rule { - part ~ ('=' ~ part | push(Query.EmptyValue)) ~ ('&' ~ keyValuePairs | push(Query.Empty)) ~> { (key, value, tail) => + // Adds 2 values to the value stack for the first pair, then parses the remaining pairs. + def keyValuePairsWithReversalAvoidance: Rule1[Query] = rule { + keyValuePair ~ ('&' ~ keyValuePairs | push(Query.Empty)) ~> { (key, value, tail) => Query.Cons(key, value, tail) } } + // Uses a reversal-free parsing approach as long as there is enough space on the value stack, + // switching to a limited-stack approach when necessary. 
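+  // (Illustrative note on the trade-off above, not part of the original change: each still-pending pair keeps
+  //  its key and value on the value stack, so the reversal-avoiding rule's stack use grows with the number of
+  //  query parameters, while the limited-stack rule stays at a small constant depth; the "+ 5" headroom check
+  //  below decides which of the two can still safely be used for the remaining input.)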
+ def keyValuePairs: Rule1[Query] = + if (valueStack.size + 5 <= maxValueStackSize) keyValuePairsWithReversalAvoidance + else keyValuePairsWithLimitedStackUse + rule { keyValuePairs } } diff --git a/akka-http-core/src/main/scala/akka/http/impl/settings/ConnectionPoolSettingsImpl.scala b/akka-http-core/src/main/scala/akka/http/impl/settings/ConnectionPoolSettingsImpl.scala index dc4411093f..fadfb63bf0 100644 --- a/akka-http-core/src/main/scala/akka/http/impl/settings/ConnectionPoolSettingsImpl.scala +++ b/akka-http-core/src/main/scala/akka/http/impl/settings/ConnectionPoolSettingsImpl.scala @@ -13,6 +13,7 @@ import scala.concurrent.duration.Duration /** INTERNAL API */ private[akka] final case class ConnectionPoolSettingsImpl( val maxConnections: Int, + val minConnections: Int, val maxRetries: Int, val maxOpenRequests: Int, val pipeliningLimit: Int, @@ -21,6 +22,8 @@ private[akka] final case class ConnectionPoolSettingsImpl( extends ConnectionPoolSettings { require(maxConnections > 0, "max-connections must be > 0") + require(minConnections >= 0, "min-connections must be >= 0") + require(minConnections <= maxConnections, "min-connections must be <= max-connections") require(maxRetries >= 0, "max-retries must be >= 0") require(maxOpenRequests > 0 && (maxOpenRequests & (maxOpenRequests - 1)) == 0, "max-open-requests must be a power of 2 > 0") require(pipeliningLimit > 0, "pipelining-limit must be > 0") @@ -33,6 +36,7 @@ object ConnectionPoolSettingsImpl extends SettingsCompanion[ConnectionPoolSettin def fromSubConfig(root: Config, c: Config) = { ConnectionPoolSettingsImpl( c getInt "max-connections", + c getInt "min-connections", c getInt "max-retries", c getInt "max-open-requests", c getInt "pipelining-limit", diff --git a/akka-http-core/src/main/scala/akka/http/impl/util/StreamUtils.scala b/akka-http-core/src/main/scala/akka/http/impl/util/StreamUtils.scala index b2cea22a33..1881ebab24 100644 --- a/akka-http-core/src/main/scala/akka/http/impl/util/StreamUtils.scala +++ b/akka-http-core/src/main/scala/akka/http/impl/util/StreamUtils.scala @@ -59,7 +59,7 @@ private[http] object StreamUtils { override def onPull(): Unit = pull(in) override def onUpstreamFailure(ex: Throwable): Unit = { - promise.failure(ex) + promise.tryFailure(ex) failStage(ex) } diff --git a/akka-http-core/src/main/scala/akka/http/javadsl/settings/ClientConnectionSettings.scala b/akka-http-core/src/main/scala/akka/http/javadsl/settings/ClientConnectionSettings.scala index fb93b39d04..e435eab660 100644 --- a/akka-http-core/src/main/scala/akka/http/javadsl/settings/ClientConnectionSettings.scala +++ b/akka-http-core/src/main/scala/akka/http/javadsl/settings/ClientConnectionSettings.scala @@ -5,6 +5,7 @@ package akka.http.javadsl.settings import java.util.{ Optional, Random } +import akka.actor.ActorSystem import akka.http.impl.settings.ClientConnectionSettingsImpl import akka.http.javadsl.model.headers.UserAgent import akka.io.Inet.SocketOption @@ -42,4 +43,5 @@ abstract class ClientConnectionSettings private[akka] () { self: ClientConnectio object ClientConnectionSettings extends SettingsCompanion[ClientConnectionSettings] { def create(config: Config): ClientConnectionSettings = ClientConnectionSettingsImpl(config) def create(configOverrides: String): ClientConnectionSettings = ClientConnectionSettingsImpl(configOverrides) + override def create(system: ActorSystem): ClientConnectionSettings = create(system.settings.config) } diff --git a/akka-http-core/src/main/scala/akka/http/javadsl/settings/ConnectionPoolSettings.scala 
b/akka-http-core/src/main/scala/akka/http/javadsl/settings/ConnectionPoolSettings.scala index 744138d333..f1398120ac 100644 --- a/akka-http-core/src/main/scala/akka/http/javadsl/settings/ConnectionPoolSettings.scala +++ b/akka-http-core/src/main/scala/akka/http/javadsl/settings/ConnectionPoolSettings.scala @@ -3,6 +3,7 @@ */ package akka.http.javadsl.settings +import akka.actor.ActorSystem import akka.http.impl.settings.ConnectionPoolSettingsImpl import com.typesafe.config.Config @@ -14,6 +15,7 @@ import akka.http.impl.util.JavaMapping.Implicits._ */ abstract class ConnectionPoolSettings private[akka] () { self: ConnectionPoolSettingsImpl ⇒ def getMaxConnections: Int + def getMinConnections: Int def getMaxRetries: Int def getMaxOpenRequests: Int def getPipeliningLimit: Int @@ -33,4 +35,5 @@ abstract class ConnectionPoolSettings private[akka] () { self: ConnectionPoolSet object ConnectionPoolSettings extends SettingsCompanion[ConnectionPoolSettings] { override def create(config: Config): ConnectionPoolSettings = ConnectionPoolSettingsImpl(config) override def create(configOverrides: String): ConnectionPoolSettings = ConnectionPoolSettingsImpl(configOverrides) + override def create(system: ActorSystem): ConnectionPoolSettings = create(system.settings.config) } \ No newline at end of file diff --git a/akka-http-core/src/main/scala/akka/http/javadsl/settings/ParserSettings.scala b/akka-http-core/src/main/scala/akka/http/javadsl/settings/ParserSettings.scala index 8eec14babf..c977b4ead7 100644 --- a/akka-http-core/src/main/scala/akka/http/javadsl/settings/ParserSettings.scala +++ b/akka-http-core/src/main/scala/akka/http/javadsl/settings/ParserSettings.scala @@ -5,6 +5,7 @@ package akka.http.javadsl.settings import java.util.Optional +import akka.actor.ActorSystem import akka.http.impl.engine.parsing.BodyPartParser import akka.http.impl.settings.ParserSettingsImpl import java.{ util ⇒ ju } @@ -83,4 +84,5 @@ object ParserSettings extends SettingsCompanion[ParserSettings] { override def create(config: Config): ParserSettings = ParserSettingsImpl(config) override def create(configOverrides: String): ParserSettings = ParserSettingsImpl(configOverrides) + override def create(system: ActorSystem): ParserSettings = create(system.settings.config) } diff --git a/akka-http-core/src/main/scala/akka/http/javadsl/settings/RoutingSettings.scala b/akka-http-core/src/main/scala/akka/http/javadsl/settings/RoutingSettings.scala index 5896b5f8a0..7983aabc05 100644 --- a/akka-http-core/src/main/scala/akka/http/javadsl/settings/RoutingSettings.scala +++ b/akka-http-core/src/main/scala/akka/http/javadsl/settings/RoutingSettings.scala @@ -3,6 +3,7 @@ */ package akka.http.javadsl.settings +import akka.actor.ActorSystem import akka.http.impl.settings.RoutingSettingsImpl import com.typesafe.config.Config @@ -30,4 +31,5 @@ abstract class RoutingSettings private[akka] () { self: RoutingSettingsImpl ⇒ object RoutingSettings extends SettingsCompanion[RoutingSettings] { override def create(config: Config): RoutingSettings = RoutingSettingsImpl(config) override def create(configOverrides: String): RoutingSettings = RoutingSettingsImpl(configOverrides) + override def create(system: ActorSystem): RoutingSettings = create(system.settings.config) } diff --git a/akka-http-core/src/main/scala/akka/http/javadsl/settings/ServerSettings.scala b/akka-http-core/src/main/scala/akka/http/javadsl/settings/ServerSettings.scala index ec5105968c..b0bed97109 100644 --- 
a/akka-http-core/src/main/scala/akka/http/javadsl/settings/ServerSettings.scala
+++ b/akka-http-core/src/main/scala/akka/http/javadsl/settings/ServerSettings.scala
@@ -5,6 +5,7 @@ package akka.http.javadsl.settings

 import java.util.{ Optional, Random }

+import akka.actor.ActorSystem
 import akka.http.impl.settings.ServerSettingsImpl
 import akka.http.javadsl.model.headers.Host
 import akka.http.javadsl.model.headers.Server
@@ -71,4 +72,5 @@ object ServerSettings extends SettingsCompanion[ServerSettings] {
   override def create(config: Config): ServerSettings = ServerSettingsImpl(config)
   override def create(configOverrides: String): ServerSettings = ServerSettingsImpl(configOverrides)
+  override def create(system: ActorSystem): ServerSettings = create(system.settings.config)
 }
\ No newline at end of file
diff --git a/akka-http-core/src/main/scala/akka/http/javadsl/settings/SettingsCompanion.scala b/akka-http-core/src/main/scala/akka/http/javadsl/settings/SettingsCompanion.scala
index c9f7414d43..e805fe0e35 100644
--- a/akka-http-core/src/main/scala/akka/http/javadsl/settings/SettingsCompanion.scala
+++ b/akka-http-core/src/main/scala/akka/http/javadsl/settings/SettingsCompanion.scala
@@ -3,14 +3,16 @@ package akka.http.javadsl.settings

 import akka.actor.ActorSystem
 import com.typesafe.config.Config

+/** INTERNAL API */
 trait SettingsCompanion[T] {

   /**
+   * WARNING: This MUST be overridden in sub-classes, as otherwise it won't be usable (return type) from Java.
    * Creates an instance of settings using the configuration provided by the given ActorSystem.
    *
    * Java API
    */
-  final def create(system: ActorSystem): T = create(system.settings.config)
+  def create(system: ActorSystem): T = create(system.settings.config)

   /**
    * Creates an instance of settings using the given Config.
diff --git a/akka-http-core/src/main/scala/akka/http/scaladsl/Http.scala b/akka-http-core/src/main/scala/akka/http/scaladsl/Http.scala
index 4be23b9f8a..248f70eb2d 100644
--- a/akka-http-core/src/main/scala/akka/http/scaladsl/Http.scala
+++ b/akka-http-core/src/main/scala/akka/http/scaladsl/Http.scala
@@ -244,7 +244,11 @@ class HttpExt(private val config: Config)(implicit val system: ActorSystem) exte
                          settings: ClientConnectionSettings, connectionContext: ConnectionContext, log: LoggingAdapter): Flow[SslTlsOutbound, SslTlsInbound, Future[OutgoingConnection]] = {
     val tlsStage = sslTlsStage(connectionContext, Client, Some(host → port))
-    val transportFlow = Tcp().outgoingConnection(new InetSocketAddress(host, port), localAddress,
+    // The InetSocketAddress representing the remote address must be created unresolved because akka.io.TcpOutgoingConnection will
+    // not attempt DNS resolution if the InetSocketAddress is already resolved. That behavior is problematic when it comes to
+    // connection pools since it means that new connections opened by the pool in the future can end up using a stale IP address.
+    // By passing an unresolved InetSocketAddress instead, we ensure that DNS resolution is performed for every new connection.
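+    // (For illustration of the JDK behaviour relied on here: `new InetSocketAddress(host, port)` resolves the
+    //  host name eagerly at construction time, whereas `InetSocketAddress.createUnresolved(host, port)` merely
+    //  stores the host name and port, leaving resolution to the actual connection attempt.)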
+    val transportFlow = Tcp().outgoingConnection(InetSocketAddress.createUnresolved(host, port), localAddress,
       settings.socketOptions, halfClose = true, settings.connectingTimeout, settings.idleTimeout)

     tlsStage.joinMat(transportFlow) { (_, tcpConnFuture) ⇒
diff --git a/akka-http-core/src/main/scala/akka/http/scaladsl/model/HttpMessage.scala b/akka-http-core/src/main/scala/akka/http/scaladsl/model/HttpMessage.scala
index 65347d068d..9cf985f448 100644
--- a/akka-http-core/src/main/scala/akka/http/scaladsl/model/HttpMessage.scala
+++ b/akka-http-core/src/main/scala/akka/http/scaladsl/model/HttpMessage.scala
@@ -8,18 +8,22 @@ import java.io.File
 import java.nio.file.Path
 import java.lang.{ Iterable ⇒ JIterable }
 import java.util.Optional
+import java.util.concurrent.CompletionStage

+import scala.compat.java8.FutureConverters
 import scala.concurrent.duration.FiniteDuration
-import scala.concurrent.{ Future, ExecutionContext }
+import scala.concurrent.{ ExecutionContext, Future }
 import scala.collection.immutable
 import scala.compat.java8.OptionConverters._
-import scala.reflect.{ classTag, ClassTag }
+import scala.reflect.{ ClassTag, classTag }

+import akka.Done
 import akka.parboiled2.CharUtils
 import akka.stream.Materializer
-import akka.util.{ HashCode, ByteString }
+import akka.util.{ ByteString, HashCode }
 import akka.http.impl.util._
 import akka.http.javadsl.{ model ⇒ jm }
 import akka.http.scaladsl.util.FastFuture._
+import akka.stream.scaladsl.Sink

 import headers._
 import akka.http.impl.util.JavaMapping.Implicits._
@@ -37,6 +41,10 @@ sealed trait HttpMessage extends jm.HttpMessage {
   def entity: ResponseEntity
   def protocol: HttpProtocol

+  /** Drains the entity stream */
+  def discardEntityBytes(mat: Materializer): HttpMessage.DiscardedEntity =
+    new HttpMessage.DiscardedEntity(entity.dataBytes.runWith(Sink.ignore)(mat))
+
   /** Returns a copy of this message with the list of headers set to the given ones. */
   def withHeaders(headers: HttpHeader*): Self = withHeaders(headers.toList)

@@ -101,6 +109,8 @@ sealed trait HttpMessage extends jm.HttpMessage {
   def addHeader(header: jm.HttpHeader): Self = mapHeaders(_ :+ header.asInstanceOf[HttpHeader])

+  def addCredentials(credentials: jm.headers.HttpCredentials): Self = addHeader(jm.headers.Authorization.create(credentials))
+
   /** Removes the header with the given name (case-insensitive) */
   def removeHeader(headerName: String): Self = {
     val lowerHeaderName = headerName.toRootLowerCase
@@ -139,6 +149,46 @@ object HttpMessage {
     case HttpProtocols.`HTTP/1.1` ⇒ connectionHeader.isDefined && connectionHeader.get.hasClose
     case HttpProtocols.`HTTP/1.0` ⇒ connectionHeader.isEmpty || !connectionHeader.get.hasKeepAlive
   }
+
+  /**
+   * Represents the HTTP entity that is currently being drained, triggering completion of the contained
+   * Future once the entity of the given HttpMessage has been drained completely.
+   */
+  final class DiscardedEntity(f: Future[Done]) extends akka.http.javadsl.model.HttpMessage.DiscardedEntity {
+    /**
+     * This future completes successfully once the underlying entity stream has been
+     * successfully drained (and fails otherwise).
+     */
+    def future: Future[Done] = f
+
+    /**
+     * This future completes successfully once the underlying entity stream has been
+     * successfully drained (and fails otherwise).
+     */
+    def completionStage: CompletionStage[Done] = FutureConverters.toJava(f)
+  }
+
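+  // A usage sketch (hypothetical snippet, assuming an implicit Materializer and ExecutionContext in scope):
+  //   val discarded: HttpMessage.DiscardedEntity = response.discardEntityBytes()
+  //   discarded.future.foreach(_ ⇒ println("entity fully drained"))
+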
+  /** Adds Scala DSL idiomatic methods to [[HttpMessage]], e.g. versions of methods with an implicit [[Materializer]]. */
+  implicit final class HttpMessageScalaDSLSugar(val httpMessage: HttpMessage) extends AnyVal {
+    /**
+     * Discards the entity's data bytes by running the `dataBytes` Source contained by the `entity` of this HTTP message.
+     *
+     * Note: It is crucial that entities are either discarded or consumed by running the underlying [[akka.stream.scaladsl.Source]],
+     * as otherwise the unconsumed data will trigger back-pressure on the underlying TCP connection
+     * (as designed), possibly leading to an idle-timeout that will close the connection instead of
+     * the data simply being ignored.
+     *
+     * Warning: It is not allowed to discard and/or consume the `entity.dataBytes` more than once,
+     * as the stream is directly attached to the "live" incoming data source from the underlying TCP connection.
+     * Allowing it to be consumed twice would require buffering the incoming data, thus defeating the purpose
+     * of its streaming nature. If the dataBytes source is materialized a second time, it will fail with a
+     * "stream cannot be materialized more than once" exception.
+     *
+     * In future versions, more automatic ways to warn about or resolve these situations may be introduced, see issue #18716.
+     */
+    def discardEntityBytes()(implicit mat: Materializer): HttpMessage.DiscardedEntity =
+      httpMessage.discardEntityBytes(mat)
+  }
 }

 /**
diff --git a/akka-http-core/src/main/scala/akka/http/scaladsl/model/Multipart.scala b/akka-http-core/src/main/scala/akka/http/scaladsl/model/Multipart.scala
index 7275ff8dab..e57b1578ba 100644
--- a/akka-http-core/src/main/scala/akka/http/scaladsl/model/Multipart.scala
+++ b/akka-http-core/src/main/scala/akka/http/scaladsl/model/Multipart.scala
@@ -61,7 +61,7 @@ sealed trait Multipart extends jm.Multipart {
                  boundary: String = BodyPartRenderer.randomBoundary())(implicit log: LoggingAdapter = NoLogging): MessageEntity = {
     val chunks = parts
-      .transform(() ⇒ BodyPartRenderer.streamed(boundary, charset.nioCharset, partHeadersSizeHint = 128, log))
+      .via(BodyPartRenderer.streamed(boundary, charset.nioCharset, partHeadersSizeHint = 128, log))
       .flatMapConcat(ConstantFun.scalaIdentityFunction)
     HttpEntity.Chunked(mediaType withBoundary boundary withCharset charset, chunks)
   }
@@ -212,7 +212,7 @@ object Multipart {

     /** Java API */
     override def toStrict(timeoutMillis: Long, materializer: Materializer): CompletionStage[jm.Multipart.General.Strict] =
-      super.toStrict(timeoutMillis, materializer).asInstanceOf[Future[jm.Multipart.General.Strict]].toJava
+      super.toStrict(timeoutMillis, materializer).toScala.asInstanceOf[Future[jm.Multipart.General.Strict]].toJava
   }
   object General {
     def apply(mediaType: MediaType.Multipart, parts: BodyPart.Strict*): Strict = Strict(mediaType, parts.toVector)
@@ -258,7 +258,7 @@ object Multipart {

     /** Java API */
     override def toStrict(timeoutMillis: Long, materializer: Materializer): CompletionStage[jm.Multipart.General.BodyPart.Strict] =
-      super.toStrict(timeoutMillis, materializer).asInstanceOf[Future[jm.Multipart.General.BodyPart.Strict]].toJava
+      super.toStrict(timeoutMillis, materializer).toScala.asInstanceOf[Future[jm.Multipart.General.BodyPart.Strict]].toJava

     private[BodyPart] def tryCreateFormDataBodyPart[T](f: (String, Map[String, String], immutable.Seq[HttpHeader]) ⇒ T): Try[T] = {
       val params = dispositionParams
@@ -323,12 +323,22 @@ object Multipart {

     /** Java API */
     override def toStrict(timeoutMillis: Long, materializer: Materializer): CompletionStage[jm.Multipart.FormData.Strict] =
-      super.toStrict(timeoutMillis,
materializer).asInstanceOf[Future[jm.Multipart.FormData.Strict]].toJava + super.toStrict(timeoutMillis, materializer).toScala.asInstanceOf[Future[jm.Multipart.FormData.Strict]].toJava } object FormData { def apply(parts: Multipart.FormData.BodyPart.Strict*): Multipart.FormData.Strict = Strict(parts.toVector) def apply(parts: Multipart.FormData.BodyPart*): Multipart.FormData = Multipart.FormData(Source(parts.toVector)) + // FIXME: SI-2991 workaround - two functions below. Remove when (hopefully) this issue is fixed + /** INTERNAL API */ + private[akka] def createStrict(parts: Multipart.FormData.BodyPart.Strict*): Multipart.FormData.Strict = Strict(parts.toVector) + /** INTERNAL API */ + private[akka] def createNonStrict(parts: Multipart.FormData.BodyPart*): Multipart.FormData = Multipart.FormData(Source(parts.toVector)) + /** INTERNAL API */ + private[akka] def createStrict(fields: Map[String, akka.http.javadsl.model.HttpEntity.Strict]): Multipart.FormData.Strict = Multipart.FormData.Strict { + fields.map { case (name, entity: akka.http.scaladsl.model.HttpEntity.Strict) ⇒ Multipart.FormData.BodyPart.Strict(name, entity) }(collection.breakOut) + } + def apply(fields: Map[String, HttpEntity.Strict]): Multipart.FormData.Strict = Multipart.FormData.Strict { fields.map { case (name, entity) ⇒ Multipart.FormData.BodyPart.Strict(name, entity) }(collection.breakOut) } @@ -426,7 +436,7 @@ object Multipart { /** Java API */ override def toStrict(timeoutMillis: Long, materializer: Materializer): CompletionStage[jm.Multipart.FormData.BodyPart.Strict] = - super.toStrict(timeoutMillis, materializer).asInstanceOf[Future[jm.Multipart.FormData.BodyPart.Strict]].toJava + super.toStrict(timeoutMillis, materializer).toScala.asInstanceOf[Future[jm.Multipart.FormData.BodyPart.Strict]].toJava } object BodyPart { def apply(_name: String, _entity: BodyPartEntity, @@ -467,6 +477,26 @@ object Multipart { FastFuture.successful(this) override def productPrefix = "FormData.BodyPart.Strict" } + + /** INTERNAL API */ + private[akka] object Builder { + def create(_name: String, _entity: BodyPartEntity, + _additionalDispositionParams: Map[String, String], + _additionalHeaders: Iterable[akka.http.javadsl.model.HttpHeader]): Multipart.FormData.BodyPart = { + val _headers = _additionalHeaders.to[immutable.Seq] map { case h: akka.http.scaladsl.model.HttpHeader ⇒ h } + apply(_name, _entity, _additionalDispositionParams, _headers) + } + } + + /** INTERNAL API */ + private[akka] object StrictBuilder { + def createStrict(_name: String, _entity: HttpEntity.Strict, + _additionalDispositionParams: Map[String, String], + _additionalHeaders: Iterable[akka.http.javadsl.model.HttpHeader]): Multipart.FormData.BodyPart.Strict = { + val _headers = _additionalHeaders.to[immutable.Seq] map { case h: akka.http.scaladsl.model.HttpHeader ⇒ h } + Strict(_name, _entity, _additionalDispositionParams, _headers) + } + } } } @@ -488,7 +518,7 @@ object Multipart { /** Java API */ override def toStrict(timeoutMillis: Long, materializer: Materializer): CompletionStage[jm.Multipart.ByteRanges.Strict] = - super.toStrict(timeoutMillis, materializer).asInstanceOf[Future[jm.Multipart.ByteRanges.Strict]].toJava + super.toStrict(timeoutMillis, materializer).toScala.asInstanceOf[Future[jm.Multipart.ByteRanges.Strict]].toJava } object ByteRanges { def apply(parts: Multipart.ByteRanges.BodyPart.Strict*): Strict = Strict(parts.toVector) @@ -563,7 +593,7 @@ object Multipart { /** Java API */ override def toStrict(timeoutMillis: Long, materializer: Materializer): 
CompletionStage[jm.Multipart.ByteRanges.BodyPart.Strict] = - super.toStrict(timeoutMillis, materializer).asInstanceOf[Future[jm.Multipart.ByteRanges.BodyPart.Strict]].toJava + super.toStrict(timeoutMillis, materializer).toScala.asInstanceOf[Future[jm.Multipart.ByteRanges.BodyPart.Strict]].toJava } object BodyPart { def apply(_contentRange: ContentRange, _entity: BodyPartEntity, _rangeUnit: RangeUnit = RangeUnits.Bytes, diff --git a/akka-http-core/src/main/scala/akka/http/scaladsl/model/headers/HttpChallenge.scala b/akka-http-core/src/main/scala/akka/http/scaladsl/model/headers/HttpChallenge.scala index ec79cc1e26..633d0d221f 100644 --- a/akka-http-core/src/main/scala/akka/http/scaladsl/model/headers/HttpChallenge.scala +++ b/akka-http-core/src/main/scala/akka/http/scaladsl/model/headers/HttpChallenge.scala @@ -21,3 +21,10 @@ final case class HttpChallenge(scheme: String, realm: String, /** Java API */ def getParams: util.Map[String, String] = params.asJava } + +object HttpChallenges { + + def basic(realm: String): HttpChallenge = HttpChallenge("Basic", realm) + + def oAuth2(realm: String): HttpChallenge = HttpChallenge("Bearer", realm) +} diff --git a/akka-http-core/src/main/scala/akka/http/scaladsl/model/headers/headers.scala b/akka-http-core/src/main/scala/akka/http/scaladsl/model/headers/headers.scala index 15e3844a1e..bf13d2a98e 100644 --- a/akka-http-core/src/main/scala/akka/http/scaladsl/model/headers/headers.scala +++ b/akka-http-core/src/main/scala/akka/http/scaladsl/model/headers/headers.scala @@ -336,7 +336,8 @@ object Connection extends ModeledCompanion[Connection] { def apply(first: String, more: String*): Connection = apply(immutable.Seq(first +: more: _*)) implicit val tokensRenderer = Renderer.defaultSeqRenderer[String] // cache } -final case class Connection(tokens: immutable.Seq[String]) extends RequestResponseHeader { +final case class Connection(tokens: immutable.Seq[String]) extends jm.headers.Connection + with RequestResponseHeader { require(tokens.nonEmpty, "tokens must not be empty") import Connection.tokensRenderer def renderValue[R <: Rendering](r: R): r.type = r ~~ tokens diff --git a/akka-http-core/src/main/scala/akka/http/scaladsl/settings/ConnectionPoolSettings.scala b/akka-http-core/src/main/scala/akka/http/scaladsl/settings/ConnectionPoolSettings.scala index 296f4e45ef..9607594699 100644 --- a/akka-http-core/src/main/scala/akka/http/scaladsl/settings/ConnectionPoolSettings.scala +++ b/akka-http-core/src/main/scala/akka/http/scaladsl/settings/ConnectionPoolSettings.scala @@ -14,6 +14,7 @@ import scala.concurrent.duration.Duration */ abstract class ConnectionPoolSettings extends js.ConnectionPoolSettings { self: ConnectionPoolSettingsImpl ⇒ def maxConnections: Int + def minConnections: Int def maxRetries: Int def maxOpenRequests: Int def pipeliningLimit: Int @@ -26,6 +27,7 @@ abstract class ConnectionPoolSettings extends js.ConnectionPoolSettings { self: final override def getPipeliningLimit: Int = pipeliningLimit final override def getIdleTimeout: Duration = idleTimeout final override def getMaxConnections: Int = maxConnections + final override def getMinConnections: Int = minConnections final override def getMaxOpenRequests: Int = maxOpenRequests final override def getMaxRetries: Int = maxRetries diff --git a/akka-http-core/src/test/java/akka/http/javadsl/model/EntityDiscardingTest.java b/akka-http-core/src/test/java/akka/http/javadsl/model/EntityDiscardingTest.java new file mode 100644 index 0000000000..fbdf6add6a --- /dev/null +++ 
b/akka-http-core/src/test/java/akka/http/javadsl/model/EntityDiscardingTest.java
@@ -0,0 +1,71 @@
+/**
+ * Copyright (C) 2009-2016 Lightbend Inc.
+ */
+
+package akka.http.javadsl.model;
+
+import akka.Done;
+import akka.actor.ActorSystem;
+import akka.japi.function.Procedure;
+import akka.stream.ActorMaterializer;
+import akka.stream.javadsl.Sink;
+import akka.stream.javadsl.Source;
+import akka.util.ByteString;
+import org.junit.Test;
+import org.scalatest.junit.JUnitSuite;
+
+import scala.util.Try;
+
+import java.util.Arrays;
+import java.util.concurrent.CompletableFuture;
+
+import static org.junit.Assert.assertEquals;
+
+public class EntityDiscardingTest extends JUnitSuite {
+
+  private ActorSystem sys = ActorSystem.create("test");
+  private ActorMaterializer mat = ActorMaterializer.create(sys);
+  private Iterable<ByteString> testData = Arrays.asList(ByteString.fromString("abc"), ByteString.fromString("def"));
+
+  @Test
+  public void testHttpRequestDiscardEntity() {
+
+    CompletableFuture<Done> f = new CompletableFuture<>();
+    Source<ByteString, ?> s = Source.from(testData).alsoTo(Sink.onComplete(completeDone(f)));
+
+    RequestEntity reqEntity = HttpEntities.create(ContentTypes.TEXT_PLAIN_UTF8, s);
+    HttpRequest req = HttpRequest.create().withEntity(reqEntity);
+
+    HttpMessage.DiscardedEntity de = req.discardEntityBytes(mat);
+
+    assertEquals(Done.getInstance(), f.join());
+    assertEquals(Done.getInstance(), de.completionStage().toCompletableFuture().join());
+  }
+
+  @Test
+  public void testHttpResponseDiscardEntity() {
+
+    CompletableFuture<Done> f = new CompletableFuture<>();
+    Source<ByteString, ?> s = Source.from(testData).alsoTo(Sink.onComplete(completeDone(f)));
+
+    ResponseEntity respEntity = HttpEntities.create(ContentTypes.TEXT_PLAIN_UTF8, s);
+    HttpResponse resp = HttpResponse.create().withEntity(respEntity);
+
+    HttpMessage.DiscardedEntity de = resp.discardEntityBytes(mat);
+
+    assertEquals(Done.getInstance(), f.join());
+    assertEquals(Done.getInstance(), de.completionStage().toCompletableFuture().join());
+  }
+
+  private Procedure<Try<Done>> completeDone(CompletableFuture<Done> p) {
+    return new Procedure<Try<Done>>() {
+      @Override
+      public void apply(Try<Done> t) throws Exception {
+        if (t.isSuccess())
+          p.complete(Done.getInstance());
+        else
+          p.completeExceptionally(t.failed().get());
+      }
+    };
+  }
+}
diff --git a/akka-http-core/src/test/java/akka/http/javadsl/settings/ClientConnectionSettingsTest.java b/akka-http-core/src/test/java/akka/http/javadsl/settings/ClientConnectionSettingsTest.java
new file mode 100644
index 0000000000..cb99227476
--- /dev/null
+++ b/akka-http-core/src/test/java/akka/http/javadsl/settings/ClientConnectionSettingsTest.java
@@ -0,0 +1,18 @@
+/*
+ * Copyright (C) 2009-2016 Lightbend Inc.
+ */
+
+package akka.http.javadsl.settings;
+
+import akka.actor.ActorSystem;
+import org.junit.Test;
+import org.scalatest.junit.JUnitSuite;
+
+public class ClientConnectionSettingsTest extends JUnitSuite {
+
+  @Test
+  public void testCreateWithActorSystem() {
+    ActorSystem sys = ActorSystem.create("test");
+    ClientConnectionSettings settings = ClientConnectionSettings.create(sys);
+  }
+}
diff --git a/akka-http-core/src/test/java/akka/http/javadsl/settings/ConnectionPoolSettingsTest.java b/akka-http-core/src/test/java/akka/http/javadsl/settings/ConnectionPoolSettingsTest.java
new file mode 100644
index 0000000000..62f0da1b07
--- /dev/null
+++ b/akka-http-core/src/test/java/akka/http/javadsl/settings/ConnectionPoolSettingsTest.java
@@ -0,0 +1,18 @@
+/*
+ * Copyright (C) 2009-2016 Lightbend Inc.
+ */ + +package akka.http.javadsl.settings; + +import akka.actor.ActorSystem; +import org.junit.Test; +import org.scalatest.junit.JUnitSuite; + +public class ConnectionPoolSettingsTest extends JUnitSuite { + + @Test + public void testCreateWithActorSystem() { + ActorSystem sys = ActorSystem.create("test"); + ConnectionPoolSettings settings = ConnectionPoolSettings.create(sys); + } +} diff --git a/akka-http-core/src/test/java/akka/http/javadsl/settings/ParserSettingsTest.java b/akka-http-core/src/test/java/akka/http/javadsl/settings/ParserSettingsTest.java new file mode 100644 index 0000000000..1ade595117 --- /dev/null +++ b/akka-http-core/src/test/java/akka/http/javadsl/settings/ParserSettingsTest.java @@ -0,0 +1,18 @@ +/* + * Copyright (C) 2009-2016 Lightbend Inc. + */ + +package akka.http.javadsl.settings; + +import akka.actor.ActorSystem; +import org.junit.Test; +import org.scalatest.junit.JUnitSuite; + +public class ParserSettingsTest extends JUnitSuite { + + @Test + public void testCreateWithActorSystem() { + ActorSystem sys = ActorSystem.create("test"); + ParserSettings settings = ParserSettings.create(sys); + } +} diff --git a/akka-http-core/src/test/java/akka/http/javadsl/settings/RoutingSettingsTest.java b/akka-http-core/src/test/java/akka/http/javadsl/settings/RoutingSettingsTest.java new file mode 100644 index 0000000000..902a321cb4 --- /dev/null +++ b/akka-http-core/src/test/java/akka/http/javadsl/settings/RoutingSettingsTest.java @@ -0,0 +1,31 @@ +/* + * Copyright (C) 2009-2016 Lightbend Inc. + */ + +package akka.http.javadsl.settings; + +import akka.actor.ActorSystem; +import com.typesafe.config.Config; +import com.typesafe.config.ConfigFactory; +import org.junit.Test; +import org.scalatest.junit.JUnitSuite; + +public class RoutingSettingsTest extends JUnitSuite { + + @Test + public void testCreateWithActorSystem() { + String testConfig = + "akka.http.routing {\n" + + " verbose-error-messages = off\n" + + " file-get-conditional = on\n" + + " render-vanity-footer = yes\n" + + " range-coalescing-threshold = 80\n" + + " range-count-limit = 16\n" + + " decode-max-bytes-per-chunk = 1m\n" + + " file-io-dispatcher = \"test-only\"\n" + + "}"; + Config config = ConfigFactory.parseString(testConfig); + ActorSystem sys = ActorSystem.create("test", config); + RoutingSettings settings = RoutingSettings.create(sys); + } +} diff --git a/akka-http-core/src/test/java/akka/http/javadsl/settings/ServerSettingsTest.java b/akka-http-core/src/test/java/akka/http/javadsl/settings/ServerSettingsTest.java new file mode 100644 index 0000000000..b4ae596f06 --- /dev/null +++ b/akka-http-core/src/test/java/akka/http/javadsl/settings/ServerSettingsTest.java @@ -0,0 +1,18 @@ +/* + * Copyright (C) 2009-2016 Lightbend Inc. 
+ */ + +package akka.http.javadsl.settings; + +import akka.actor.ActorSystem; +import org.junit.Test; +import org.scalatest.junit.JUnitSuite; + +public class ServerSettingsTest extends JUnitSuite { + + @Test + public void testCreateWithActorSystem() { + ActorSystem sys = ActorSystem.create("test"); + ServerSettings settings = ServerSettings.create(sys); + } +} diff --git a/akka-http-core/src/test/scala/akka/http/impl/engine/client/ConnectionPoolSpec.scala b/akka-http-core/src/test/scala/akka/http/impl/engine/client/ConnectionPoolSpec.scala index 4fde9f5469..09f62b348a 100644 --- a/akka-http-core/src/test/scala/akka/http/impl/engine/client/ConnectionPoolSpec.scala +++ b/akka-http-core/src/test/scala/akka/http/impl/engine/client/ConnectionPoolSpec.scala @@ -23,6 +23,7 @@ import akka.stream.testkit.{ TestPublisher, TestSubscriber } import akka.testkit.AkkaSpec import akka.util.ByteString +import scala.collection.immutable import scala.concurrent.Await import scala.concurrent.duration._ import scala.util.control.NonFatal @@ -228,6 +229,56 @@ class ConnectionPoolSpec extends AkkaSpec(""" acceptIncomingConnection() val (Success(_), 42) = responseOut.expectNext() } + + "never close hot connections when minConnections key is given and >0 (minConnections = 1)" in new TestSetup() { + val close: HttpHeader = Connection("close") + + // for lower bound of one connection + val minConnection = 1 + val (requestIn, requestOut, responseOutSub, hcpMinConnection) = + cachedHostConnectionPool[Int](idleTimeout = 100.millis, minConnections = minConnection) + val gatewayConnection = hcpMinConnection.gateway + + acceptIncomingConnection() + requestIn.sendNext(HttpRequest(uri = "/minimumslots/1", headers = immutable.Seq(close)) → 42) + responseOutSub.request(1) + requestOut.expectNextN(1) + + condHolds(500.millis) { () ⇒ + Await.result(gatewayConnection.poolStatus(), 100.millis).get shouldBe a[PoolInterfaceRunning] + } + } + + "never close hot connections when minConnections key is given and >0 (minConnections = 5)" in new TestSetup() { + val close: HttpHeader = Connection("close") + + // for lower bound of five connections + val minConnections = 5 + val (requestIn, requestOut, responseOutSub, hcpMinConnection) = cachedHostConnectionPool[Int]( + idleTimeout = 100.millis, + minConnections = minConnections, + maxConnections = minConnections + 10) + + (0 until minConnections) foreach { _ ⇒ acceptIncomingConnection() } + (0 until minConnections) foreach { i ⇒ + requestIn.sendNext(HttpRequest(uri = s"/minimumslots/5/$i", headers = immutable.Seq(close)) → 42) + } + responseOutSub.request(minConnections) + requestOut.expectNextN(minConnections) + + val gatewayConnections = hcpMinConnection.gateway + condHolds(1000.millis) { () ⇒ + val status = gatewayConnections.poolStatus() + Await.result(status, 100.millis).get shouldBe a[PoolInterfaceRunning] + } + } + + "shutdown if idle and min connection has been set to 0" in new TestSetup() { + val (_, _, _, hcp) = cachedHostConnectionPool[Int](idleTimeout = 1.second, minConnections = 0) + val gateway = hcp.gateway + Await.result(gateway.poolStatus(), 1500.millis).get shouldBe a[PoolInterfaceRunning] + awaitCond({ Await.result(gateway.poolStatus(), 1500.millis).isEmpty }, 2000.millis) + } } "The single-request client infrastructure" should { @@ -325,24 +376,30 @@ class ConnectionPoolSpec extends AkkaSpec(""" def cachedHostConnectionPool[T]( maxConnections: Int = 2, + minConnections: Int = 0, maxRetries: Int = 2, maxOpenRequests: Int = 8, pipeliningLimit: Int = 1, idleTimeout: 
Duration = 5.seconds,
       ccSettings: ClientConnectionSettings = ClientConnectionSettings(system)) = {
-      val settings = new ConnectionPoolSettingsImpl(maxConnections, maxRetries, maxOpenRequests, pipeliningLimit,
-        idleTimeout, ClientConnectionSettings(system))
-      flowTestBench(Http().cachedHostConnectionPool[T](serverHostName, serverPort, settings))
+
+      val settings =
+        new ConnectionPoolSettingsImpl(maxConnections, minConnections,
+          maxRetries, maxOpenRequests, pipeliningLimit,
+          idleTimeout, ccSettings)
+      flowTestBench(
+        Http().cachedHostConnectionPool[T](serverHostName, serverPort, settings))
     }

     def superPool[T](
       maxConnections: Int = 2,
+      minConnections: Int = 0,
       maxRetries: Int = 2,
       maxOpenRequests: Int = 8,
       pipeliningLimit: Int = 1,
       idleTimeout: Duration = 5.seconds,
       ccSettings: ClientConnectionSettings = ClientConnectionSettings(system)) = {
-      val settings = new ConnectionPoolSettingsImpl(maxConnections, maxRetries, maxOpenRequests, pipeliningLimit,
+      val settings = new ConnectionPoolSettingsImpl(maxConnections, minConnections, maxRetries, maxOpenRequests, pipeliningLimit,
         idleTimeout, ClientConnectionSettings(system))
       flowTestBench(Http().superPool[T](settings = settings))
     }
@@ -357,6 +414,22 @@ class ConnectionPoolSpec extends AkkaSpec("""
     def connNr(r: HttpResponse): Int = r.headers.find(_ is "conn-nr").get.value.toInt
     def requestUri(r: HttpResponse): String = r.headers.find(_ is "req-uri").get.value
+
+    /**
+     * Makes sure the given condition "f" holds during the time period of "in".
+     * The given condition function should throw if not met.
+     * Note: Execution of "condHolds" will take at least "in" time, so for a big "in" it might drain the time budget for tests.
+     */
+    def condHolds[T](in: FiniteDuration)(f: () ⇒ T): T = {
+      val end = System.nanoTime.nanos + in
+
+      var lastR = f()
+      while (System.nanoTime.nanos < end) {
+        lastR = f()
+        Thread.sleep(50)
+      }
+      lastR
+    }
   }

 case class ConnNrHeader(nr: Int) extends CustomHeader {
diff --git a/akka-http-core/src/test/scala/akka/http/impl/engine/client/HttpConfigurationSpec.scala b/akka-http-core/src/test/scala/akka/http/impl/engine/client/HttpConfigurationSpec.scala
index 8371a4c8ab..599ce124a3 100644
--- a/akka-http-core/src/test/scala/akka/http/impl/engine/client/HttpConfigurationSpec.scala
+++ b/akka-http-core/src/test/scala/akka/http/impl/engine/client/HttpConfigurationSpec.scala
@@ -129,6 +129,34 @@ class HttpConfigurationSpec extends AkkaSpec {
         server.parserSettings.illegalHeaderWarnings should ===(On)
       }
     }
+
+    "set `akka.http.host-connection-pool.min-connections` only" in {
+      configuredSystem(
+        """
+          akka.http.host-connection-pool.min-connections = 42
+          akka.http.host-connection-pool.max-connections = 43
+        """.stripMargin) { sys ⇒
+
+        val pool = ConnectionPoolSettings(sys)
+        pool.getMinConnections should ===(42)
+        pool.getMaxConnections should ===(43)
+      }
+
+      configuredSystem(""" """) { sys ⇒
+
+        val pool = ConnectionPoolSettings(sys)
+        pool.minConnections should ===(0)
+      }
+
+      configuredSystem(
+        """
+          akka.http.host-connection-pool.min-connections = 101
+          akka.http.host-connection-pool.max-connections = 1
+        """.stripMargin) { sys ⇒
+
+        intercept[IllegalArgumentException] { ConnectionPoolSettings(sys) }
+      }
+    }
   }

   def configuredSystem(overrides: String)(block: ActorSystem ⇒ Unit) = {
diff --git a/akka-http-core/src/test/scala/akka/http/impl/engine/parsing/RequestParserSpec.scala b/akka-http-core/src/test/scala/akka/http/impl/engine/parsing/RequestParserSpec.scala
index a20c8994b2..17dc6ae6bc 100644
---
a/akka-http-core/src/test/scala/akka/http/impl/engine/parsing/RequestParserSpec.scala +++ b/akka-http-core/src/test/scala/akka/http/impl/engine/parsing/RequestParserSpec.scala @@ -175,6 +175,28 @@ class RequestParserSpec extends FreeSpec with Matchers with BeforeAndAfterAll { |""" should parseTo(HttpRequest(GET, Uri("http://x//foo").toHttpRequestTargetOriginForm, protocol = `HTTP/1.0`)) closeAfterResponseCompletion shouldEqual Seq(true) } + + "with additional fields in Strict-Transport-Security header" in new Test { + """GET /hsts HTTP/1.1 + |Host: x + |Strict-Transport-Security: max-age=1; preload; dummy + | + |""" should parseTo(HttpRequest( + GET, + "/hsts", + headers = List(Host("x"), `Strict-Transport-Security`(1, None)), + protocol = `HTTP/1.1`)) + + """GET /hsts HTTP/1.1 + |Host: x + |Strict-Transport-Security: max-age=1; dummy; preload + | + |""" should parseTo(HttpRequest( + GET, + "/hsts", + headers = List(Host("x"), `Strict-Transport-Security`(1, None)), + protocol = `HTTP/1.1`)) + } } "properly parse a chunked request" - { @@ -453,6 +475,36 @@ class RequestParserSpec extends FreeSpec with Matchers with BeforeAndAfterAll { | |""" should parseToError(422: StatusCode, ErrorInfo("TRACE requests must not have an entity")) } + + "with additional fields in headers" in new Test { + """GET / HTTP/1.1 + |Host: x; dummy + | + |""" should parseToError( + BadRequest, + ErrorInfo("Illegal 'host' header: Invalid input ' ', expected 'EOI', ':', UPPER_ALPHA, lower-reg-name-char or pct-encoded (line 1, column 3)", "x; dummy\n ^")) + + """GET / HTTP/1.1 + |Content-length: 3; dummy + | + |""" should parseToError( + BadRequest, + ErrorInfo("Illegal `Content-Length` header value")) + + """GET / HTTP/1.1 + |Connection:keep-alive; dummy + | + |""" should parseToError( + BadRequest, + ErrorInfo("Illegal 'connection' header: Invalid input ';', expected tchar, OWS, listSep or 'EOI' (line 1, column 11)", "keep-alive; dummy\n ^")) + + """GET / HTTP/1.1 + |Transfer-Encoding: chunked; dummy + | + |""" should parseToError( + BadRequest, + ErrorInfo("Illegal 'transfer-encoding' header: Invalid input ';', expected OWS, listSep or 'EOI' (line 1, column 8)", "chunked; dummy\n ^")) + } } } diff --git a/akka-http-core/src/test/scala/akka/http/impl/engine/server/HttpServerSpec.scala b/akka-http-core/src/test/scala/akka/http/impl/engine/server/HttpServerSpec.scala index b322a88dbb..8a09968155 100644 --- a/akka-http-core/src/test/scala/akka/http/impl/engine/server/HttpServerSpec.scala +++ b/akka-http-core/src/test/scala/akka/http/impl/engine/server/HttpServerSpec.scala @@ -684,7 +684,7 @@ class HttpServerSpec extends AkkaSpec( // client then closes the connection netIn.sendComplete() - requests.expectComplete() // this should happen, but never does + requests.expectComplete() netOut.expectComplete() }) diff --git a/akka-http-core/src/test/scala/akka/http/javadsl/model/MultipartsSpec.scala b/akka-http-core/src/test/scala/akka/http/javadsl/model/MultipartsSpec.scala new file mode 100644 index 0000000000..227822b570 --- /dev/null +++ b/akka-http-core/src/test/scala/akka/http/javadsl/model/MultipartsSpec.scala @@ -0,0 +1,62 @@ +/* + * Copyright (C) 2016-2016 Lightbend Inc. 
+ */
+
+package akka.http.javadsl.model
+
+import java.util
+import com.typesafe.config.{ Config, ConfigFactory }
+import scala.concurrent.Await
+import scala.concurrent.duration._
+import org.scalatest.{ BeforeAndAfterAll, Inside, Matchers, WordSpec }
+import akka.stream.ActorMaterializer
+import akka.actor.ActorSystem
+import scala.compat.java8.FutureConverters
+
+class MultipartsSpec extends WordSpec with Matchers with Inside with BeforeAndAfterAll {
+
+  val testConf: Config = ConfigFactory.parseString("""
+    akka.event-handlers = ["akka.testkit.TestEventListener"]
+    akka.loglevel = WARNING""")
+  implicit val system = ActorSystem(getClass.getSimpleName, testConf)
+  implicit val materializer = ActorMaterializer()
+  override def afterAll() = system.terminate()
+
+  "Multiparts.createFormDataFromParts" should {
+    "create a model from Multiparts.createFormDataBodyPart parts" in {
+      val streamed = Multiparts.createFormDataFromParts(
+        Multiparts.createFormDataBodyPart("foo", HttpEntities.create("FOO")),
+        Multiparts.createFormDataBodyPart("bar", HttpEntities.create("BAR")))
+      val strictCS = streamed.toStrict(1000, materializer)
+      val strict = Await.result(FutureConverters.toScala(strictCS), 1.second)
+
+      strict shouldEqual akka.http.scaladsl.model.Multipart.FormData(
+        Map("foo" → akka.http.scaladsl.model.HttpEntity("FOO"), "bar" → akka.http.scaladsl.model.HttpEntity("BAR")))
+    }
+  }
+
+  "Multiparts.createFormDataFromFields" should {
+    "create a model from a map of fields" in {
+      val fields = new util.HashMap[String, HttpEntity.Strict]
+      fields.put("foo", HttpEntities.create("FOO"))
+      val streamed = Multiparts.createFormDataFromFields(fields)
+      val strictCS = streamed.toStrict(1000, materializer)
+      val strict = Await.result(FutureConverters.toScala(strictCS), 1.second)
+
+      strict shouldEqual akka.http.scaladsl.model.Multipart.FormData(
+        Map("foo" → akka.http.scaladsl.model.HttpEntity("FOO")))
+    }
+  }
+
+  "Multiparts.createStrictFormDataFromParts" should {
+    "create a strict model from Multiparts.createFormDataBodyPartStrict parts" in {
+      val streamed = Multiparts.createStrictFormDataFromParts(
+        Multiparts.createFormDataBodyPartStrict("foo", HttpEntities.create("FOO")),
+        Multiparts.createFormDataBodyPartStrict("bar", HttpEntities.create("BAR")))
+      val strict = streamed
+
+      strict shouldEqual akka.http.scaladsl.model.Multipart.FormData(
+        Map("foo" → akka.http.scaladsl.model.HttpEntity("FOO"), "bar" → akka.http.scaladsl.model.HttpEntity("BAR")))
+    }
+  }
+}
diff --git a/akka-http-core/src/test/scala/akka/http/scaladsl/model/EntityDiscardingSpec.scala b/akka-http-core/src/test/scala/akka/http/scaladsl/model/EntityDiscardingSpec.scala
new file mode 100644
index 0000000000..404da1e424
--- /dev/null
+++ b/akka-http-core/src/test/scala/akka/http/scaladsl/model/EntityDiscardingSpec.scala
@@ -0,0 +1,81 @@
+/**
+ * Copyright (C) 2009-2016 Lightbend Inc.
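The javadsl `Multiparts` factories tested above are also convenient to call from Scala; a minimal sketch mirroring the strict case in the spec:

```scala
import akka.http.javadsl.model.{ HttpEntities, Multiparts }

// strict parts need no materialization step, so the result is a strict FormData directly
val formData = Multiparts.createStrictFormDataFromParts(
  Multiparts.createFormDataBodyPartStrict("foo", HttpEntities.create("FOO")),
  Multiparts.createFormDataBodyPartStrict("bar", HttpEntities.create("BAR")))
```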
+ */
+
+package akka.http.scaladsl.model
+
+import akka.Done
+import akka.http.scaladsl.model.HttpEntity.Chunked
+import akka.http.scaladsl.{ Http, TestUtils }
+import akka.stream.ActorMaterializer
+import akka.stream.scaladsl._
+import akka.testkit.AkkaSpec
+import scala.concurrent.duration._
+import akka.util.ByteString
+
+import scala.concurrent.{ Await, Promise }
+
+class EntityDiscardingSpec extends AkkaSpec {
+
+  implicit val mat = ActorMaterializer()
+
+  val testData = Vector.tabulate(200)(i ⇒ ByteString(s"row-$i"))
+
+  "HttpRequest" should {
+
+    "discard entity stream after .discardEntityBytes() call" in {
+
+      val p = Promise[Done]()
+      val s = Source
+        .fromIterator[ByteString](() ⇒ testData.iterator)
+        .alsoTo(Sink.onComplete(t ⇒ p.complete(t)))
+
+      val req = HttpRequest(entity = HttpEntity(ContentTypes.`text/csv(UTF-8)`, s))
+      val de = req.discardEntityBytes()
+
+      p.future.futureValue should ===(Done)
+      de.future.futureValue should ===(Done)
+    }
+  }
+
+  "HttpResponse" should {
+
+    "discard entity stream after .discardEntityBytes() call" in {
+
+      val p = Promise[Done]()
+      val s = Source
+        .fromIterator[ByteString](() ⇒ testData.iterator)
+        .alsoTo(Sink.onComplete(t ⇒ p.complete(t)))
+
+      val resp = HttpResponse(entity = HttpEntity(ContentTypes.`text/csv(UTF-8)`, s))
+      val de = resp.discardEntityBytes()
+
+      p.future.futureValue should ===(Done)
+      de.future.futureValue should ===(Done)
+    }
+
+    // TODO consider improving this by storing a mutable "already materialized" flag somewhere
+    // TODO likely this is going to inter-op with the auto-draining as described in #18716
+    "not allow draining a second time" in {
+      val (_, host, port) = TestUtils.temporaryServerHostnameAndPort()
+      val bound = Http().bindAndHandleSync(
+        req ⇒
+          HttpResponse(entity = HttpEntity(
+            ContentTypes.`text/csv(UTF-8)`, Source.fromIterator[ByteString](() ⇒ testData.iterator))),
+        host, port).futureValue
+
+      try {
+
+        val response = Http().singleRequest(HttpRequest(uri = s"http://$host:$port/")).futureValue
+
+        val de = response.discardEntityBytes()
+        de.future.futureValue should ===(Done)
+
+        val de2 = response.discardEntityBytes()
+        val secondRunException = intercept[IllegalStateException] { Await.result(de2.future, 3.seconds) }
+        secondRunException.getMessage should include("Source cannot be materialized more than once")
+      } finally bound.unbind().futureValue
+    }
+  }
+
+}
diff --git a/akka-http-core/src/test/scala/akka/http/scaladsl/model/HttpMessageSpec.scala b/akka-http-core/src/test/scala/akka/http/scaladsl/model/HttpMessageSpec.scala
index 53eab46770..28c2c5a2f6 100644
--- a/akka-http-core/src/test/scala/akka/http/scaladsl/model/HttpMessageSpec.scala
+++ b/akka-http-core/src/test/scala/akka/http/scaladsl/model/HttpMessageSpec.scala
@@ -1,3 +1,7 @@
+/**
+ * Copyright (C) 2009-2016 Lightbend Inc.
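Outside the test, the drain-once semantics exercised by `EntityDiscardingSpec` look roughly like this (a sketch; the system name and logging are illustrative):

```scala
import akka.actor.ActorSystem
import akka.http.scaladsl.model.HttpResponse
import akka.stream.ActorMaterializer

implicit val system = ActorSystem("discard-sketch")
implicit val mat = ActorMaterializer()
import system.dispatcher

def handle(response: HttpResponse): Unit = {
  // not interested in the body: drain it so the connection can be reused
  val drained = response.discardEntityBytes()
  drained.future.foreach(_ => system.log.info("entity drained"))
  // calling discardEntityBytes() again on a streamed entity re-materializes
  // its source and fails with "Source cannot be materialized more than once"
}
```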
+ */ + package akka.http.scaladsl.model import headers.Host diff --git a/akka-http-core/src/test/scala/akka/http/scaladsl/model/MultipartSpec.scala b/akka-http-core/src/test/scala/akka/http/scaladsl/model/MultipartSpec.scala index a37f21fcc7..998a3ffe2b 100644 --- a/akka-http-core/src/test/scala/akka/http/scaladsl/model/MultipartSpec.scala +++ b/akka-http-core/src/test/scala/akka/http/scaladsl/model/MultipartSpec.scala @@ -5,11 +5,12 @@ package akka.http.scaladsl.model import com.typesafe.config.{ Config, ConfigFactory } + import scala.concurrent.Await import scala.concurrent.duration._ import org.scalatest.{ BeforeAndAfterAll, Inside, Matchers, WordSpec } import akka.stream.ActorMaterializer -import akka.stream.scaladsl.Source +import akka.stream.scaladsl.{ Sink, Source } import akka.util.ByteString import akka.actor.ActorSystem import headers._ @@ -34,6 +35,16 @@ class MultipartSpec extends WordSpec with Matchers with Inside with BeforeAndAft MediaTypes.`multipart/mixed`, Multipart.General.BodyPart.Strict(HttpEntity("data"), List(ETag("xzy")))) } + + "support `toEntity`" in { + val streamed = Multipart.General( + MediaTypes.`multipart/mixed`, + Source(Multipart.General.BodyPart(defaultEntity("data"), List(ETag("xzy"))) :: Nil)) + val result = streamed.toEntity(boundary = "boundary") + result.contentType shouldBe MediaTypes.`multipart/mixed`.withBoundary("boundary").withCharset(HttpCharsets.`UTF-8`) + val encoding = Await.result(result.dataBytes.runWith(Sink.seq), 1.second) + encoding.map(_.utf8String).mkString shouldBe "--boundary\r\nContent-Type: text/plain; charset=UTF-8\r\nETag: \"xzy\"\r\n\r\ndata\r\n--boundary--" + } } "Multipart.FormData" should { diff --git a/akka-http-core/src/test/scala/akka/http/scaladsl/model/UriSpec.scala b/akka-http-core/src/test/scala/akka/http/scaladsl/model/UriSpec.scala index bac88cdd02..a2c9113490 100644 --- a/akka-http-core/src/test/scala/akka/http/scaladsl/model/UriSpec.scala +++ b/akka-http-core/src/test/scala/akka/http/scaladsl/model/UriSpec.scala @@ -639,5 +639,14 @@ class UriSpec extends WordSpec with Matchers { val uri = Uri(s"http://foo.bar/$slashes") uri.toString // was reported to throw StackOverflowException in Spray's URI } + + "survive parsing a URI with thousands of query string values" in { + val uriString = (1 to 2000).map("a=" + _).mkString("http://foo.bar/?", "&", "") + val uri = Uri(uriString) + val query = uri.query() + query.size shouldEqual 2000 + query.head._2 shouldEqual "1" + query.last._2 shouldEqual "2000" + } } } diff --git a/akka-http-core/src/test/scala/akka/http/scaladsl/model/headers/HeaderSpec.scala b/akka-http-core/src/test/scala/akka/http/scaladsl/model/headers/HeaderSpec.scala index 4d3f62733d..b84d0d104b 100644 --- a/akka-http-core/src/test/scala/akka/http/scaladsl/model/headers/HeaderSpec.scala +++ b/akka-http-core/src/test/scala/akka/http/scaladsl/model/headers/HeaderSpec.scala @@ -68,6 +68,12 @@ class HeaderSpec extends FreeSpec with Matchers { headers.`Strict-Transport-Security`.parseFromValueString("max-age=30; includeSubDomains") shouldEqual Right(headers.`Strict-Transport-Security`(30, true)) headers.`Strict-Transport-Security`.parseFromValueString("max-age=30; includeSubDomains; preload") shouldEqual Right(headers.`Strict-Transport-Security`(30, true)) } + "successful parse run with additional values" in { + headers.`Strict-Transport-Security`.parseFromValueString("max-age=30; includeSubDomains; preload; dummy") shouldEqual + Right(headers.`Strict-Transport-Security`(30, true)) + 
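The lenient parsing exercised above can be reproduced in isolation; a small sketch of parsing an HSTS value that carries an unknown directive:

```scala
import akka.http.scaladsl.model.headers.`Strict-Transport-Security`

// an unknown directive such as "dummy" is ignored instead of failing the parse
val parsed = `Strict-Transport-Security`.parseFromValueString("max-age=30; includeSubDomains; dummy")
assert(parsed == Right(`Strict-Transport-Security`(30, true)))
```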
headers.`Strict-Transport-Security`.parseFromValueString("max-age=30; includeSubDomains; dummy; preload") shouldEqual + Right(headers.`Strict-Transport-Security`(30, true)) + } "failing parse run" in { val Left(List(ErrorInfo(summary, detail))) = `Strict-Transport-Security`.parseFromValueString("max-age=30; includeSubDomains; preload;") summary shouldEqual "Illegal HTTP header 'Strict-Transport-Security': Invalid input 'EOI', expected OWS or token0 (line 1, column 40)" diff --git a/akka-http-core/src/test/scala/io/akka/integrationtest/http/HttpModelIntegrationSpec.scala b/akka-http-core/src/test/scala/io/akka/integrationtest/http/HttpModelIntegrationSpec.scala index 1312acde54..d1af34f168 100644 --- a/akka-http-core/src/test/scala/io/akka/integrationtest/http/HttpModelIntegrationSpec.scala +++ b/akka-http-core/src/test/scala/io/akka/integrationtest/http/HttpModelIntegrationSpec.scala @@ -150,6 +150,7 @@ class HttpModelIntegrationSpec extends WordSpec with Matchers with BeforeAndAfte "be able to wrap HttpHeaders with custom typed headers" in { + // TODO potentially use the integration for Play / Lagom APIs? // This HTTP model is typed. It uses Akka HTTP types internally, but // no Akka HTTP types are visible to users. This typed model is a // model that Play Framework may eventually move to. diff --git a/akka-http-testkit/src/main/scala/akka/http/javadsl/testkit/RouteTest.scala b/akka-http-testkit/src/main/scala/akka/http/javadsl/testkit/RouteTest.scala index 327dd75f47..e41ebed1b4 100644 --- a/akka-http-testkit/src/main/scala/akka/http/javadsl/testkit/RouteTest.scala +++ b/akka-http-testkit/src/main/scala/akka/http/javadsl/testkit/RouteTest.scala @@ -34,7 +34,7 @@ abstract class RouteTest extends AllDirectives { implicit def materializer: Materializer implicit def executionContext: ExecutionContextExecutor = system.dispatcher - protected def awaitDuration: FiniteDuration = 500.millis + protected def awaitDuration: FiniteDuration = 3.seconds protected def defaultHostInfo: DefaultHostInfo = DefaultHostInfo(Host.create("example.com"), false) diff --git a/akka-http-testkit/src/main/scala/akka/http/javadsl/testkit/TestRouteResult.scala b/akka-http-testkit/src/main/scala/akka/http/javadsl/testkit/TestRouteResult.scala index aee99e08b9..91edde58ac 100644 --- a/akka-http-testkit/src/main/scala/akka/http/javadsl/testkit/TestRouteResult.scala +++ b/akka-http-testkit/src/main/scala/akka/http/javadsl/testkit/TestRouteResult.scala @@ -194,6 +194,15 @@ abstract class TestRouteResult(_result: RouteResult, awaitAtMost: FiniteDuration this } + /** + * Assert that a header of the given type does not exist. + */ + def assertHeaderKindNotExists(name: String): TestRouteResult = { + val lowercased = name.toRootLowerCase + assertTrue(response.headers.forall(!_.is(lowercased)), s"`$name` header was not expected to appear.") + this + } + /** * Assert that a header of the given name and value exists. 
 */
@@ -235,4 +244,4 @@ abstract class TestRouteResult(_result: RouteResult, awaitAtMost: FiniteDuration
   protected def assertEquals(expected: AnyRef, actual: AnyRef, message: String): Unit
   protected def assertEquals(expected: Int, actual: Int, message: String): Unit
   protected def assertTrue(predicate: Boolean, message: String): Unit
-} \ No newline at end of file
+}
diff --git a/akka-http-tests/src/main/java/akka/http/javadsl/server/examples/simple/SimpleServerApp.java b/akka-http-tests/src/main/java/akka/http/javadsl/server/examples/simple/SimpleServerApp.java
index 7a50ca72c6..dd5a60aa9c 100644
--- a/akka-http-tests/src/main/java/akka/http/javadsl/server/examples/simple/SimpleServerApp.java
+++ b/akka-http-tests/src/main/java/akka/http/javadsl/server/examples/simple/SimpleServerApp.java
@@ -139,7 +139,7 @@ public class SimpleServerApp extends AllDirectives { // or import Directives.*
     tmf.init(ks);
     final SSLContext sslContext = SSLContext.getInstance("TLS");
-    sslContext.init(keyManagerFactory.getKeyManagers(), tmf.getTrustManagers(), SecureRandom.getInstanceStrong());
+    sslContext.init(keyManagerFactory.getKeyManagers(), tmf.getTrustManagers(), new SecureRandom());
     https = ConnectionContext.https(sslContext);
diff --git a/akka-http-tests/src/test/java/akka/http/javadsl/server/JavaTestServer.java b/akka-http-tests/src/test/java/akka/http/javadsl/server/JavaTestServer.java
index 4ccb14c1e5..33f2e98664 100644
--- a/akka-http-tests/src/test/java/akka/http/javadsl/server/JavaTestServer.java
+++ b/akka-http-tests/src/test/java/akka/http/javadsl/server/JavaTestServer.java
@@ -26,11 +26,18 @@ public class JavaTestServer extends AllDirectives { // or import static Directiv
     final Route index = path("", () ->
       withRequestTimeout(timeout, this::mkTimeoutResponse, () -> {
-        silentSleep(5000); // too long, trigger failure
+        silentSleep(5000); // too long, but note that this will NOT activate withRequestTimeout, see below
         return complete(index());
       })
     );

+    final Route requestTimeout = path("timeout", () ->
+      withRequestTimeout(timeout, this::mkTimeoutResponse, () -> {
+        // here timeout will work
+        return completeOKWithFutureString(neverEndingFuture(index()));
+      })
+    );
+
     final Function<Optional<ProvidedCredentials>, Optional<String>> handleAuth = (maybeCreds) -> {
       if (maybeCreds.isPresent() && maybeCreds.get().verify("pa$$word")) // some secure hash + check
         return Optional.of(maybeCreds.get().identifier());
@@ -58,7 +65,7 @@ public class JavaTestServer extends AllDirectives { // or import static Directiv

     return get(() ->
-      index.orElse(secure).orElse(ping).orElse(crash).orElse(inner)
+      index.orElse(secure).orElse(ping).orElse(crash).orElse(inner).orElse(requestTimeout)
     );
   }
@@ -70,10 +77,14 @@ public class JavaTestServer extends AllDirectives { // or import static Directiv
     }
   }

+  private CompletableFuture<String> neverEndingFuture(String futureContent) {
+    return new CompletableFuture<>().thenApply((string) -> futureContent);
+  }
+
   private HttpResponse mkTimeoutResponse(HttpRequest request) {
     return HttpResponse.create()
       .withStatus(StatusCodes.ENHANCE_YOUR_CALM)
-      .withEntity("Unable to serve response within time limit, please enchance your calm.");
+      .withEntity("Unable to serve response within time limit, please enhance your calm.");
   }

   private String index() {
@@ -85,6 +96,7 @@ public class JavaTestServer extends AllDirectives { // or import static Directiv
     "
<li><a href=\"/ping\">/ping</a></li>\n" +
     "<li><a href=\"/secure\">/secure</a> Use any username and '<username>-password' as credentials</li>\n" +
     "<li><a href=\"/crash\">/crash</a></li>\n" +
+    "<li><a href=\"/timeout\">/timeout</a> Demonstrates timeout</li>\n" +
     "</ul>\n" +
     "</body>\n" +
     "</html>\n";
diff --git a/akka-http-tests/src/test/java/akka/http/javadsl/server/directives/MiscDirectivesTest.java b/akka-http-tests/src/test/java/akka/http/javadsl/server/directives/MiscDirectivesTest.java
index aded1cbad1..12002d791c 100644
--- a/akka-http-tests/src/test/java/akka/http/javadsl/server/directives/MiscDirectivesTest.java
+++ b/akka-http-tests/src/test/java/akka/http/javadsl/server/directives/MiscDirectivesTest.java
@@ -11,12 +11,14 @@ import akka.http.javadsl.model.Uri;
 import akka.http.javadsl.model.headers.RawHeader;
 import akka.http.javadsl.model.headers.XForwardedFor;
 import akka.http.javadsl.model.headers.XRealIp;
+import akka.http.javadsl.server.Unmarshaller;
 import akka.http.javadsl.testkit.JUnitRouteTest;
 import akka.http.javadsl.testkit.TestRoute;
 import org.junit.Test;

 import java.net.InetAddress;
 import java.net.UnknownHostException;
+import java.util.Arrays;

 public class MiscDirectivesTest extends JUnitRouteTest {
@@ -73,4 +75,26 @@ public class MiscDirectivesTest extends JUnitRouteTest {
       .assertStatusCode(StatusCodes.NOT_FOUND);
   }

+  @Test
+  public void testWithSizeLimit() {
+    TestRoute route = testRoute(withSizeLimit(500, () ->
+      entity(Unmarshaller.entityToString(), (entity) -> complete("ok"))
+    ));
+
+    route
+      .run(withEntityOfSize(500))
+      .assertStatusCode(StatusCodes.OK);
+
+    route
+      .run(withEntityOfSize(501))
+      .assertStatusCode(StatusCodes.BAD_REQUEST);
+
+  }
+
+  private HttpRequest withEntityOfSize(int sizeLimit) {
+    char[] charArray = new char[sizeLimit];
+    Arrays.fill(charArray, '0');
+    return HttpRequest.POST("/").withEntity(new String(charArray));
+  }
+
 }
diff --git a/akka-http-tests/src/test/scala/akka/http/scaladsl/coding/CoderSpec.scala b/akka-http-tests/src/test/scala/akka/http/scaladsl/coding/CoderSpec.scala
index db5d42b1b8..8142d69f7d 100644
--- a/akka-http-tests/src/test/scala/akka/http/scaladsl/coding/CoderSpec.scala
+++ b/akka-http-tests/src/test/scala/akka/http/scaladsl/coding/CoderSpec.scala
@@ -13,7 +13,7 @@ import scala.annotation.tailrec
 import scala.concurrent.duration._
 import scala.concurrent.Await
 import scala.concurrent.ExecutionContext.Implicits.global
-import scala.concurrent.forkjoin.ThreadLocalRandom
+import java.util.concurrent.ThreadLocalRandom
 import scala.util.Random
 import scala.util.control.NoStackTrace
 import org.scalatest.{ Inspectors, WordSpec }
diff --git a/akka-http-tests/src/test/scala/akka/http/scaladsl/server/WithoutSizeLimitSpec.scala b/akka-http-tests/src/test/scala/akka/http/scaladsl/server/WithoutSizeLimitSpec.scala
new file mode 100644
index 0000000000..5cea8ab9a5
--- /dev/null
+++ b/akka-http-tests/src/test/scala/akka/http/scaladsl/server/WithoutSizeLimitSpec.scala
@@ -0,0 +1,72 @@
+/**
+ * Copyright (C) 2009-2016 Lightbend Inc.
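For comparison with the Java `withSizeLimit` test above, a Scala sketch of the same behaviour (route bodies are illustrative):

```scala
import akka.http.scaladsl.model.{ ContentTypes, HttpEntity }
import akka.http.scaladsl.server.Directives._

// the limit only takes effect once the entity is actually consumed
val limited = withSizeLimit(500) {
  entity(as[String]) { _ => complete("ok") }
}

// and the inverse: opt out of the configured max-content-length for large uploads
val unlimited = withoutSizeLimit {
  entity(as[String]) { _ => complete("ok") }
}

def entityOfSize(size: Int) = HttpEntity(ContentTypes.`text/plain(UTF-8)`, "0" * size)
```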
+ */ + +package akka.http.scaladsl.server + +import akka.actor.ActorSystem +import akka.http.scaladsl.client.RequestBuilding +import akka.http.scaladsl.model._ +import akka.http.scaladsl.server.Directives._ +import akka.http.scaladsl.{ Http, TestUtils } +import akka.stream.ActorMaterializer +import com.typesafe.config.{ Config, ConfigFactory } +import org.scalatest.{ BeforeAndAfterAll, Matchers, WordSpec } + +import scala.concurrent.Await +import scala.concurrent.duration._ + +class WithoutSizeLimitSpec extends WordSpec with Matchers with RequestBuilding with BeforeAndAfterAll { + val testConf: Config = ConfigFactory.parseString(""" + akka.loggers = ["akka.testkit.TestEventListener"] + akka.loglevel = ERROR + akka.stdout-loglevel = ERROR + akka.http.parsing.max-content-length = 800""") + implicit val system = ActorSystem(getClass.getSimpleName, testConf) + import system.dispatcher + implicit val materializer = ActorMaterializer() + + "the withoutSizeLimit directive" should { + "accept entities bigger than configured with akka.http.parsing.max-content-length" in { + val route = + path("noDirective") { + post { + entity(as[String]) { _ ⇒ + complete(HttpEntity(ContentTypes.`text/html(UTF-8)`, "
<html><body>Say hello to akka-http</body></html>
    ")) + } + } + } ~ + path("withoutSizeLimit") { + post { + withoutSizeLimit { + entity(as[String]) { _ ⇒ + complete(HttpEntity(ContentTypes.`text/html(UTF-8)`, "
<html><body>Say hello to akka-http</body></html>
    ")) + } + } + } + } + + val (_, hostName, port) = TestUtils.temporaryServerHostnameAndPort() + + val future = for { + _ ← Http().bindAndHandle(route, hostName, port) + + requestToNoDirective = Post(s"http://$hostName:$port/noDirective", entityOfSize(801)) + responseWithoutDirective ← Http().singleRequest(requestToNoDirective) + _ = responseWithoutDirective.status shouldEqual StatusCodes.BadRequest + + requestToDirective = Post(s"http://$hostName:$port/withoutSizeLimit", entityOfSize(801)) + responseWithDirective ← Http().singleRequest(requestToDirective) + } yield responseWithDirective + + val response = Await.result(future, 5 seconds) + response.status shouldEqual StatusCodes.OK + } + } + + override def afterAll() = { + system.terminate + } + + private def entityOfSize(size: Int) = HttpEntity(ContentTypes.`text/plain(UTF-8)`, "0" * size) +} diff --git a/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/BasicDirectivesSpec.scala b/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/BasicDirectivesSpec.scala index 1bee6147a0..c1e619bb63 100644 --- a/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/BasicDirectivesSpec.scala +++ b/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/BasicDirectivesSpec.scala @@ -5,6 +5,10 @@ package akka.http.scaladsl.server package directives +import akka.http.scaladsl.model._ +import akka.stream.scaladsl.Source +import akka.util.ByteString + class BasicDirectivesSpec extends RoutingSpec { "The `mapUnmatchedPath` directive" should { @@ -26,4 +30,27 @@ class BasicDirectivesSpec extends RoutingSpec { } ~> check { responseAs[String] shouldEqual "GET" } } } + + "The `extractDataBytes` directive" should { + "extract stream of ByteString from the RequestContext" in { + val dataBytes = Source.fromIterator(() ⇒ Iterator.range(1, 10).map(x ⇒ ByteString(x.toString))) + Post("/abc", HttpEntity(ContentTypes.`text/plain(UTF-8)`, data = dataBytes)) ~> { + extractDataBytes { data ⇒ + val sum = data.runFold(0) { (acc, i) ⇒ acc + i.utf8String.toInt } + onSuccess(sum) { s ⇒ + complete(HttpResponse(entity = HttpEntity(s.toString))) + } + } + } ~> check { responseAs[String] shouldEqual "45" } + } + } + + "The `extractRequestEntity` directive" should { + "extract entity from the RequestContext" in { + val httpEntity = HttpEntity(ContentTypes.`text/plain(UTF-8)`, "req") + Post("/abc", httpEntity) ~> { + extractRequestEntity { complete(_) } + } ~> check { responseEntity shouldEqual httpEntity } + } + } } \ No newline at end of file diff --git a/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/MethodDirectivesSpec.scala b/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/MethodDirectivesSpec.scala index f60347e77d..5fdb50bf99 100644 --- a/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/MethodDirectivesSpec.scala +++ b/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/MethodDirectivesSpec.scala @@ -4,8 +4,12 @@ package akka.http.scaladsl.server.directives -import akka.http.scaladsl.model.{ StatusCodes, HttpMethods } +import akka.http.scaladsl.model.{ ContentTypes, HttpEntity, StatusCodes, HttpMethods } import akka.http.scaladsl.server._ +import akka.stream.scaladsl.Source + +import scala.concurrent.Await +import scala.concurrent.duration.Duration class MethodDirectivesSpec extends RoutingSpec { @@ -23,6 +27,26 @@ class MethodDirectivesSpec extends RoutingSpec { } } + "head" should { + val headRoute = head { + complete(HttpEntity.Default( 
+ ContentTypes.`application/octet-stream`, + 12345L, + Source.empty + )) + } + + "allow manual complete" in { + Head() ~> headRoute ~> check { + status shouldEqual StatusCodes.OK + + val lengthF = response._3.dataBytes.runFold(0)((c, _) ⇒ c + 1) + val length = Await.result(lengthF, Duration(100, "millis")) + length shouldEqual 0 + } + } + } + "two failed `get` directives" should { "only result in a single Rejection" in { Put() ~> { diff --git a/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/MiscDirectivesSpec.scala b/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/MiscDirectivesSpec.scala index 58a6c10b44..ed22f1feaf 100644 --- a/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/MiscDirectivesSpec.scala +++ b/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/MiscDirectivesSpec.scala @@ -65,6 +65,89 @@ class MiscDirectivesSpec extends RoutingSpec { } } + "the withSizeLimit directive" should { + "not apply if entity is not consumed" in { + val route = withSizeLimit(500) { completeOk } + + Post("/abc", entityOfSize(500)) ~> route ~> check { + status shouldEqual StatusCodes.OK + } + + Post("/abc", entityOfSize(501)) ~> route ~> check { + status shouldEqual StatusCodes.OK + } + } + + "apply if entity is consumed" in { + val route = withSizeLimit(500) { + entity(as[String]) { _ ⇒ + completeOk + } + } + + Post("/abc", entityOfSize(500)) ~> route ~> check { + status shouldEqual StatusCodes.OK + } + + Post("/abc", entityOfSize(501)) ~> Route.seal(route) ~> check { + status shouldEqual StatusCodes.BadRequest + } + } + + "properly handle nested directives by applying innermost `withSizeLimit` directive" in { + val route = + withSizeLimit(500) { + withSizeLimit(800) { + entity(as[String]) { _ ⇒ + completeOk + } + } + } + + Post("/abc", entityOfSize(800)) ~> route ~> check { + status shouldEqual StatusCodes.OK + } + + Post("/abc", entityOfSize(801)) ~> Route.seal(route) ~> check { + status shouldEqual StatusCodes.BadRequest + } + + val route2 = + withSizeLimit(500) { + withSizeLimit(400) { + entity(as[String]) { _ ⇒ + completeOk + } + } + } + + Post("/abc", entityOfSize(400)) ~> route2 ~> check { + status shouldEqual StatusCodes.OK + } + + Post("/abc", entityOfSize(401)) ~> Route.seal(route2) ~> check { + status shouldEqual StatusCodes.BadRequest + } + } + } + + "the withoutSizeLimit directive" should { + "skip request entity size verification" in { + val route = + withSizeLimit(500) { + withoutSizeLimit { + entity(as[String]) { _ ⇒ + completeOk + } + } + } + + Post("/abc", entityOfSize(501)) ~> route ~> check { + status shouldEqual StatusCodes.OK + } + } + } + implicit class AddStringToIn(acceptLanguageHeaderString: String) { def test(body: ((String*) ⇒ String) ⇒ Unit): Unit = s"properly handle `$acceptLanguageHeaderString`" in { @@ -88,4 +171,6 @@ class MiscDirectivesSpec extends RoutingSpec { } def remoteAddress(ip: String) = RemoteAddress(InetAddress.getByName(ip)) + + private def entityOfSize(size: Int) = HttpEntity(ContentTypes.`text/plain(UTF-8)`, "0" * size) } diff --git a/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/PathDirectivesSpec.scala b/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/PathDirectivesSpec.scala index 9a5f199e7a..608f8bf37a 100644 --- a/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/PathDirectivesSpec.scala +++ b/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/PathDirectivesSpec.scala @@ -22,6 +22,34 @@ class 
PathDirectivesSpec extends RoutingSpec with Inside {
     "reject [/foo/]" in test()
   }

+  """pathPrefix("")""" should {
+    val test = testFor(pathPrefix("") { echoUnmatchedPath })
+
+    // Should match everything because pathPrefix is used and "" is a neutral element.
+    "accept [/] and clear the unmatchedPath" in test("")
+    "accept [/foo] and clear the unmatchedPath" in test("foo")
+    "accept [/foo/] and clear the unmatchedPath" in test("foo/")
+    "accept [/bar/] and clear the unmatchedPath" in test("bar/")
+  }
+
+  """path("" | "foo")""" should {
+    val test = testFor(path("" | "foo") { echoUnmatchedPath })
+
+    // Should not match anything apart from "/", because path requires the whole path to be matched.
+    "accept [/] and clear the unmatchedPath" in test("")
+    "reject [/foo]" in test()
+    "reject [/foo/]" in test()
+    "reject [/bar/]" in test()
+  }
+
+  """path("") ~ path("foo")""" should {
+    val test = testFor(path("")(echoUnmatchedPath) ~ path("foo")(echoUnmatchedPath))
+
+    // Should match both because the ~ operator combines the two exclusive routes.
+    "accept [/] and clear the unmatchedPath" in test("")
+    "accept [/foo] and clear the unmatchedPath" in test("")
+  }
+
   """path("foo" /)""" should {
     val test = testFor(path("foo" /) { echoUnmatchedPath })
     "reject [/foo]" in test()
diff --git a/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/SecurityDirectivesSpec.scala b/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/SecurityDirectivesSpec.scala
index bec592cf40..9cb6b52811 100644
--- a/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/SecurityDirectivesSpec.scala
+++ b/akka-http-tests/src/test/scala/akka/http/scaladsl/server/directives/SecurityDirectivesSpec.scala
@@ -18,18 +18,19 @@ class SecurityDirectivesSpec extends RoutingSpec {
   val doOAuth2Auth = authenticateOAuth2PF("MyRealm", { case Credentials.Provided(identifier) ⇒ identifier })
   val authWithAnonymous = doBasicAuth.withAnonymousUser("We are Legion")

-  val challenge = HttpChallenge("Basic", "MyRealm")
+  val basicChallenge = HttpChallenges.basic("MyRealm")
+  val oAuth2Challenge = HttpChallenges.oAuth2("MyRealm")

   "basic authentication" should {
     "reject requests without Authorization header with an AuthenticationFailedRejection" in {
       Get() ~> {
         dontBasicAuth { echoComplete }
-      } ~> check { rejection shouldEqual AuthenticationFailedRejection(CredentialsMissing, challenge) }
+      } ~> check { rejection shouldEqual AuthenticationFailedRejection(CredentialsMissing, basicChallenge) }
     }
     "reject unauthenticated requests with Authorization header with an AuthenticationFailedRejection" in {
       Get() ~> Authorization(BasicHttpCredentials("Bob", "")) ~> {
         dontBasicAuth { echoComplete }
-      } ~> check { rejection shouldEqual AuthenticationFailedRejection(CredentialsRejected, challenge) }
+      } ~> check { rejection shouldEqual AuthenticationFailedRejection(CredentialsRejected, basicChallenge) }
     }
     "reject requests with an OAuth2 Bearer Token Authorization header with 401" in {
       Get() ~> Authorization(OAuth2BearerToken("myToken")) ~> Route.seal {
@@ -37,7 +38,7 @@ class SecurityDirectivesSpec extends RoutingSpec {
       } ~> check {
         status shouldEqual StatusCodes.Unauthorized
         responseAs[String] shouldEqual "The supplied authentication is invalid"
-        header[`WWW-Authenticate`] shouldEqual Some(`WWW-Authenticate`(challenge))
+        header[`WWW-Authenticate`] shouldEqual Some(`WWW-Authenticate`(oAuth2Challenge))
       }
     }
     "reject requests with illegal Authorization header with 401" in {
@@ -46,7 +47,7 @@ class SecurityDirectivesSpec extends
RoutingSpec { } ~> check { status shouldEqual StatusCodes.Unauthorized responseAs[String] shouldEqual "The resource requires authentication, which was not supplied with the request" - header[`WWW-Authenticate`] shouldEqual Some(`WWW-Authenticate`(challenge)) + header[`WWW-Authenticate`] shouldEqual Some(`WWW-Authenticate`(basicChallenge)) } } "extract the object representing the user identity created by successful authentication" in { @@ -74,12 +75,12 @@ class SecurityDirectivesSpec extends RoutingSpec { "reject requests without Authorization header with an AuthenticationFailedRejection" in { Get() ~> { dontOAuth2Auth { echoComplete } - } ~> check { rejection shouldEqual AuthenticationFailedRejection(CredentialsMissing, challenge) } + } ~> check { rejection shouldEqual AuthenticationFailedRejection(CredentialsMissing, oAuth2Challenge) } } "reject unauthenticated requests with Authorization header with an AuthenticationFailedRejection" in { Get() ~> Authorization(OAuth2BearerToken("myToken")) ~> { dontOAuth2Auth { echoComplete } - } ~> check { rejection shouldEqual AuthenticationFailedRejection(CredentialsRejected, challenge) } + } ~> check { rejection shouldEqual AuthenticationFailedRejection(CredentialsRejected, oAuth2Challenge) } } "reject requests with a Basic Authorization header with 401" in { Get() ~> Authorization(BasicHttpCredentials("Alice", "")) ~> Route.seal { @@ -87,7 +88,7 @@ class SecurityDirectivesSpec extends RoutingSpec { } ~> check { status shouldEqual StatusCodes.Unauthorized responseAs[String] shouldEqual "The supplied authentication is invalid" - header[`WWW-Authenticate`] shouldEqual Some(`WWW-Authenticate`(challenge)) + header[`WWW-Authenticate`] shouldEqual Some(`WWW-Authenticate`(basicChallenge)) } } "reject requests with illegal Authorization header with 401" in { @@ -96,7 +97,7 @@ class SecurityDirectivesSpec extends RoutingSpec { } ~> check { status shouldEqual StatusCodes.Unauthorized responseAs[String] shouldEqual "The resource requires authentication, which was not supplied with the request" - header[`WWW-Authenticate`] shouldEqual Some(`WWW-Authenticate`(challenge)) + header[`WWW-Authenticate`] shouldEqual Some(`WWW-Authenticate`(oAuth2Challenge)) } } "extract the object representing the user identity created by successful authentication" in { @@ -132,7 +133,7 @@ class SecurityDirectivesSpec extends RoutingSpec { status shouldEqual StatusCodes.Unauthorized headers.collect { case `WWW-Authenticate`(challenge +: Nil) ⇒ challenge - } shouldEqual Seq(challenge, otherChallenge) + } shouldEqual Seq(basicChallenge, otherChallenge) } } } diff --git a/akka-http/src/main/scala/akka/http/javadsl/server/Rejections.scala b/akka-http/src/main/scala/akka/http/javadsl/server/Rejections.scala index a9cbe31da9..95d2e3b011 100644 --- a/akka-http/src/main/scala/akka/http/javadsl/server/Rejections.scala +++ b/akka-http/src/main/scala/akka/http/javadsl/server/Rejections.scala @@ -27,9 +27,14 @@ import scala.collection.JavaConverters._ * A rejection encapsulates a specific reason why a Route was not able to handle a request. Rejections are gathered * up over the course of a Route evaluation and finally converted to [[akka.http.scaladsl.model.HttpResponse]]s by the * `handleRejections` directive, if there was no way for the request to be completed. + * + * If providing custom rejections, extend [[CustomRejection]] instead. */ trait Rejection +/** To be extended by user-provided custom rejections, such that they may be consumed in either Java or Scala DSLs. 
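A sketch of what a user-defined rejection based on the new `CustomRejection` trait could look like (the rejection type and the route are illustrative, not part of this change):

```scala
import akka.http.javadsl.server.CustomRejection
import akka.http.scaladsl.server.Directives._

// hypothetical rejection, visible to both the Scala and the Java DSL
final case class MissingApiKeyRejection(realm: String) extends CustomRejection

val route =
  headerValueByName("X-Api-Key") { _ =>
    complete("authorised")
  } ~ reject(MissingApiKeyRejection("example"))
```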
*/ +trait CustomRejection extends akka.http.scaladsl.server.Rejection + /** * Rejection created by method filters. * Signals that the request was rejected because the HTTP method is unsupported. diff --git a/akka-http/src/main/scala/akka/http/javadsl/server/RouteResult.scala b/akka-http/src/main/scala/akka/http/javadsl/server/RouteResult.scala index 7a3a5642ec..0985375a2e 100644 --- a/akka-http/src/main/scala/akka/http/javadsl/server/RouteResult.scala +++ b/akka-http/src/main/scala/akka/http/javadsl/server/RouteResult.scala @@ -11,3 +11,21 @@ trait Complete extends RouteResult { trait Rejected extends RouteResult { def getRejections: java.lang.Iterable[Rejection] } + +object RouteResults { + import akka.http.scaladsl.{ server ⇒ s } + import akka.japi.Util + import scala.language.implicitConversions + import akka.http.impl.util.JavaMapping + import JavaMapping.Implicits._ + import RoutingJavaMapping._ + + def complete(response: HttpResponse): Complete = { + s.RouteResult.Complete(JavaMapping.toScala(response)) + } + + def rejected(rejections: java.lang.Iterable[Rejection]): Rejected = { + s.RouteResult.Rejected(Util.immutableSeq(rejections).map(_.asScala)) + } + +} diff --git a/akka-http/src/main/scala/akka/http/javadsl/server/Unmarshaller.scala b/akka-http/src/main/scala/akka/http/javadsl/server/Unmarshaller.scala index 7d06ea7ff9..82dcc5ab7b 100644 --- a/akka-http/src/main/scala/akka/http/javadsl/server/Unmarshaller.scala +++ b/akka-http/src/main/scala/akka/http/javadsl/server/Unmarshaller.scala @@ -13,10 +13,11 @@ import akka.http.scaladsl.{ marshalling, model, unmarshalling } import akka.util.ByteString import akka.http.scaladsl.util.FastFuture import akka.http.scaladsl.util.FastFuture._ + import scala.concurrent.ExecutionContext import scala.annotation.varargs import akka.http.javadsl.model.HttpEntity -import akka.http.scaladsl.model.{ ContentTypeRange, ContentTypes, FormData } +import akka.http.scaladsl.model.{ ContentTypeRange, ContentTypes, FormData, Multipart } import akka.http.scaladsl import akka.http.javadsl.model.ContentType import akka.http.javadsl.model.HttpRequest @@ -57,6 +58,7 @@ object Unmarshaller { def entityToCharArray: Unmarshaller[HttpEntity, Array[Char]] = unmarshalling.Unmarshaller.charArrayUnmarshaller def entityToString: Unmarshaller[HttpEntity, String] = unmarshalling.Unmarshaller.stringUnmarshaller def entityToUrlEncodedFormData: Unmarshaller[HttpEntity, FormData] = unmarshalling.Unmarshaller.defaultUrlEncodedFormDataUnmarshaller + def entityToMultipartByteRanges: Unmarshaller[HttpEntity, Multipart.ByteRanges] = unmarshalling.MultipartUnmarshallers.defaultMultipartByteRangesUnmarshaller // format: ON val requestToEntity: Unmarshaller[HttpRequest, RequestEntity] = diff --git a/akka-http/src/main/scala/akka/http/javadsl/server/directives/BasicDirectives.scala b/akka-http/src/main/scala/akka/http/javadsl/server/directives/BasicDirectives.scala index 2fc58edd0d..886223d662 100644 --- a/akka-http/src/main/scala/akka/http/javadsl/server/directives/BasicDirectives.scala +++ b/akka-http/src/main/scala/akka/http/javadsl/server/directives/BasicDirectives.scala @@ -8,8 +8,10 @@ import java.util.function.{ Function ⇒ JFunction } import akka.http.impl.util.JavaMapping import akka.http.javadsl.settings.ParserSettings -import akka.http.scaladsl.settings.RoutingSettings +import akka.http.javadsl.settings.RoutingSettings import akka.japi.Util +import akka.stream.javadsl.Source +import akka.util.ByteString import scala.concurrent.ExecutionContextExecutor import 
akka.http.impl.model.JavaUri @@ -73,6 +75,10 @@ abstract class BasicDirectives { D.mapRouteResult(route ⇒ f(route.asJava).asScala) { inner.get.delegate } } + def mapRouteResultPF(f: PartialFunction[RouteResult, RouteResult], inner: Supplier[Route]): Route = RouteAdapter { + D.mapRouteResult(route ⇒ f(route.asJava).asScala) { inner.get.delegate } + } + def mapRouteResultFuture(f: JFunction[CompletionStage[RouteResult], CompletionStage[RouteResult]], inner: Supplier[Route]): Route = RouteAdapter { D.mapRouteResultFuture(stage ⇒ f(toJava(stage.fast.map(_.asJava)(ExecutionContexts.sameThreadExecutionContext))).toScala.fast.map(_.asScala)(ExecutionContexts.sameThreadExecutionContext)) { @@ -84,11 +90,15 @@ abstract class BasicDirectives { D.mapRouteResultWith(r ⇒ f(r.asJava).toScala.fast.map(_.asScala)(ExecutionContexts.sameThreadExecutionContext)) { inner.get.delegate } } + def mapRouteResultWithPF(f: PartialFunction[RouteResult, CompletionStage[RouteResult]], inner: Supplier[Route]): Route = RouteAdapter { + D.mapRouteResultWith(r ⇒ f(r.asJava).toScala.fast.map(_.asScala)(ExecutionContexts.sameThreadExecutionContext)) { inner.get.delegate } + } + /** * Runs the inner route with settings mapped by the given function. */ def mapSettings(f: JFunction[RoutingSettings, RoutingSettings], inner: Supplier[Route]): Route = RouteAdapter { - D.mapSettings(rs ⇒ f(rs)) { inner.get.delegate } + D.mapSettings(rs ⇒ f(rs.asJava).asScala) { inner.get.delegate } } /** @@ -176,7 +186,7 @@ abstract class BasicDirectives { * Extracts the current http request entity. */ @CorrespondsTo("extract") - def extractEntity(inner: java.util.function.Function[RequestEntity, Route]): Route = RouteAdapter { + def extractEntity(inner: JFunction[RequestEntity, Route]): Route = RouteAdapter { D.extractRequest { rq ⇒ inner.apply(rq.entity).delegate } @@ -215,11 +225,18 @@ abstract class BasicDirectives { D.withExecutionContext(ec) { inner.get.delegate } } + /** + * Runs its inner route with the given alternative [[akka.stream.Materializer]]. + */ + def withMaterializer(mat: Materializer, inner: Supplier[Route]): Route = RouteAdapter { + D.withMaterializer(mat) { inner.get.delegate } + } + /** * Runs its inner route with the given alternative [[RoutingSettings]]. */ def withSettings(s: RoutingSettings, inner: Supplier[Route]): Route = RouteAdapter { - D.withSettings(s) { inner.get.delegate } + D.withSettings(s.asScala) { inner.get.delegate } } /** @@ -254,4 +271,16 @@ abstract class BasicDirectives { D.extractRequestContext { ctx ⇒ inner.apply(JavaMapping.toJava(ctx)(server.RoutingJavaMapping.RequestContext)).delegate } } + /** + * Extracts the entities `dataBytes` [[akka.stream.javadsl.Source]] from the [[akka.http.javadsl.server.RequestContext]]. + */ + def extractDataBytes(inner: JFunction[Source[ByteString, Any], Route]) = RouteAdapter { + D.extractRequest { ctx ⇒ inner.apply(ctx.entity.dataBytes.asJava).delegate } + } + + /** + * Extracts the [[akka.http.javadsl.model.RequestEntity]] from the [[akka.http.javadsl.server.RequestContext]]. 
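The new PF-based wrappers delegate to the scaladsl directives; a sketch of the same result mapping done directly with scaladsl `mapRouteResult` (the filtering logic is illustrative):

```scala
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.{ MethodRejection, RouteResult }

// drop MethodRejections from the result, keeping all other outcomes untouched
val route = mapRouteResult {
  case RouteResult.Rejected(rejections) =>
    RouteResult.Rejected(rejections.filterNot(_.isInstanceOf[MethodRejection]))
  case other => other
} { complete("ok") }
```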
+ */
+  def extractRequestEntity(inner: JFunction[RequestEntity, Route]): Route = extractEntity(inner)
+
 }
diff --git a/akka-http/src/main/scala/akka/http/javadsl/server/directives/MiscDirectives.scala b/akka-http/src/main/scala/akka/http/javadsl/server/directives/MiscDirectives.scala
index f7a35637fc..338f738ee6 100644
--- a/akka-http/src/main/scala/akka/http/javadsl/server/directives/MiscDirectives.scala
+++ b/akka-http/src/main/scala/akka/http/javadsl/server/directives/MiscDirectives.scala
@@ -61,6 +61,25 @@ abstract class MiscDirectives extends MethodDirectives {
     D.rejectEmptyResponse { inner.get.delegate }
   }

+  /**
+   * Fails the stream with [[akka.http.scaladsl.model.EntityStreamSizeException]] if the request entity size exceeds
+   * the given limit. The limit given as a parameter overrides the limit configured with ``akka.http.parsing.max-content-length``.
+   *
+   * Beware that the request entity size check is only executed when the entity is consumed.
+   */
+  def withSizeLimit(maxBytes: Long, inner: Supplier[Route]): Route = RouteAdapter {
+    D.withSizeLimit(maxBytes) { inner.get.delegate }
+  }
+
+  /**
+   * Disables the size limit (configured by `akka.http.parsing.max-content-length` by default) checking on the incoming
+   * [[akka.http.javadsl.model.HttpRequest]] entity.
+   * Can be useful when handling arbitrarily large data uploads in specific parts of your routes.
+   */
+  def withoutSizeLimit(inner: Supplier[Route]): Route = RouteAdapter {
+    D.withoutSizeLimit { inner.get.delegate }
+  }
+
   /**
    * Inspects the request's `Accept-Language` header and determines,
    * which of the given language alternatives is preferred by the client.
diff --git a/akka-http/src/main/scala/akka/http/javadsl/server/directives/SecurityDirectives.scala b/akka-http/src/main/scala/akka/http/javadsl/server/directives/SecurityDirectives.scala
index f25d08aa42..9ed7f9a2fb 100644
--- a/akka-http/src/main/scala/akka/http/javadsl/server/directives/SecurityDirectives.scala
+++ b/akka-http/src/main/scala/akka/http/javadsl/server/directives/SecurityDirectives.scala
@@ -12,9 +12,11 @@ import scala.compat.java8.FutureConverters._
 import scala.compat.java8.OptionConverters._
 import akka.http.javadsl.model.headers.HttpChallenge
 import akka.http.javadsl.model.headers.HttpCredentials
-import akka.http.javadsl.server.{ RequestContext, Route }
+import akka.http.javadsl.server.{ Route, RequestContext }
 import akka.http.scaladsl
-import akka.http.scaladsl.server.{ AuthorizationFailedRejection, Directives ⇒ D }
+import akka.http.scaladsl.server.{ Directives ⇒ D }
+
+import scala.concurrent.{ ExecutionContextExecutor, Future }

 object SecurityDirectives {
   /**
@@ -68,6 +70,50 @@ abstract class SecurityDirectives extends SchemeDirectives {
     }
   }

+  /**
+   * Wraps the inner route with Http Basic authentication support.
+   * The given authenticator determines whether the credentials in the request are valid
+   * and, if so, which user object to supply to the inner route.
+   *
+   * Authentication is required in this variant, i.e. the request is rejected if [authenticator] returns Optional.empty.
+   */
+  def authenticateBasicPF[T](realm: String, authenticator: PartialFunction[Optional[ProvidedCredentials], T],
+                             inner: JFunction[T, Route]): Route = RouteAdapter {
+    def pf: PartialFunction[scaladsl.server.directives.Credentials, Option[T]] = {
+      case c ⇒ Option(authenticator.applyOrElse(toJava(c), (_: Any) ⇒ null.asInstanceOf[T]))
+    }
+
+    D.authenticateBasic(realm, pf) { t ⇒
+      inner.apply(t).delegate
+    }
+  }
+
+  /**
+   * Wraps the inner route with Http Basic authentication support.
+ * The given authenticator determines whether the credentials in the request are valid + * and, if so, which user object to supply to the inner route. + * + * Authentication is required in this variant, i.e. the request is rejected if [authenticator] returns Optional.empty. + */ + def authenticateBasicPFAsync[T](realm: String, authenticator: PartialFunction[Optional[ProvidedCredentials], CompletionStage[T]], + inner: JFunction[T, Route]): Route = RouteAdapter { + def pf(implicit ec: ExecutionContextExecutor): PartialFunction[scaladsl.server.directives.Credentials, Future[Option[T]]] = { + case credentials ⇒ + val jCredentials = toJava(credentials) + if (authenticator isDefinedAt jCredentials) { + authenticator(jCredentials).toScala.map(Some(_)) + } else { + Future.successful(None) + } + } + + D.extractExecutionContext { implicit ec ⇒ + D.authenticateBasicAsync(realm, pf) { t ⇒ + inner.apply(t).delegate + } + } + } + /** * Wraps the inner route with Http Basic authentication support using a given `Authenticator[T]`. * The given authenticator determines whether the credentials in the request are valid @@ -255,10 +301,4 @@ abstract class SecurityDirectives extends SchemeDirectives { def authorizeAsyncWithRequestContext(check: akka.japi.function.Function[RequestContext, CompletionStage[Boolean]], inner: Supplier[Route]): Route = RouteAdapter { D.authorizeAsync(rc ⇒ check(RequestContext.wrap(rc)).toScala)(inner.get().delegate) } - - /** - * Creates a `Basic` [[HttpChallenge]] for the given realm. - */ - def challengeFor(realm: String): HttpChallenge = HttpChallenge.create("Basic", realm) - -} \ No newline at end of file +} diff --git a/akka-http/src/main/scala/akka/http/javadsl/server/directives/TimeoutDirectives.scala b/akka-http/src/main/scala/akka/http/javadsl/server/directives/TimeoutDirectives.scala index 68ca4b9f52..b558630b30 100644 --- a/akka-http/src/main/scala/akka/http/javadsl/server/directives/TimeoutDirectives.scala +++ b/akka-http/src/main/scala/akka/http/javadsl/server/directives/TimeoutDirectives.scala @@ -43,4 +43,15 @@ abstract class TimeoutDirectives extends WebSocketDirectives { D.withoutRequestTimeout { inner.get.delegate } } + /** + * Tries to set a new request timeout handler, which produces the timeout response for a + * given request. Note that the handler must produce the response synchronously and shouldn't block! + * + * Due to the inherent raciness it is not guaranteed that the update will be applied before + * the previously set timeout has expired! 
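On the Scala side the PF-style basic authentication reads like this (a sketch with illustrative credentials):

```scala
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.directives.Credentials

// rejects the request unless the partial function yields a user id
val route = authenticateBasicPF(realm = "MyRealm", {
  case p @ Credentials.Provided(id) if p.verify("pa$$word") => id
}) { userName =>
  complete(s"hello, $userName")
}
```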
+ */ + def withRequestTimeoutResponse(timeoutHandler: JFunction[HttpRequest, HttpResponse], inner: Supplier[Route]): RouteAdapter = RouteAdapter { + D.withRequestTimeoutResponse(in ⇒ timeoutHandler(in.asJava).asScala) { inner.get.delegate } + } + } diff --git a/akka-http/src/main/scala/akka/http/scaladsl/marshalling/MultipartMarshallers.scala b/akka-http/src/main/scala/akka/http/scaladsl/marshalling/MultipartMarshallers.scala index ca70189ae9..5ccfde904e 100644 --- a/akka-http/src/main/scala/akka/http/scaladsl/marshalling/MultipartMarshallers.scala +++ b/akka-http/src/main/scala/akka/http/scaladsl/marshalling/MultipartMarshallers.scala @@ -4,7 +4,7 @@ package akka.http.scaladsl.marshalling -import scala.concurrent.forkjoin.ThreadLocalRandom +import java.util.concurrent.ThreadLocalRandom import akka.event.{ NoLogging, LoggingAdapter } import akka.http.impl.engine.rendering.BodyPartRenderer import akka.http.scaladsl.model._ diff --git a/akka-http/src/main/scala/akka/http/scaladsl/server/RequestContextImpl.scala b/akka-http/src/main/scala/akka/http/scaladsl/server/RequestContextImpl.scala index 2e17c3cbd0..c1c008647f 100644 --- a/akka-http/src/main/scala/akka/http/scaladsl/server/RequestContextImpl.scala +++ b/akka-http/src/main/scala/akka/http/scaladsl/server/RequestContextImpl.scala @@ -4,10 +4,10 @@ package akka.http.scaladsl.server -import scala.concurrent.{ Future, ExecutionContextExecutor } -import akka.stream.{ ActorMaterializer, Materializer } +import scala.concurrent.{ ExecutionContextExecutor, Future } +import akka.stream.{ ActorMaterializer, ActorMaterializerHelper, Materializer } import akka.event.LoggingAdapter -import akka.http.scaladsl.settings.{ RoutingSettings, ParserSettings } +import akka.http.scaladsl.settings.{ ParserSettings, RoutingSettings } import akka.http.scaladsl.marshalling.{ Marshal, ToResponseMarshallable } import akka.http.scaladsl.model._ import akka.http.scaladsl.util.FastFuture @@ -29,7 +29,7 @@ private[http] class RequestContextImpl( this(request, request.uri.path, ec, materializer, log, settings, parserSettings) def this(request: HttpRequest, log: LoggingAdapter, settings: RoutingSettings)(implicit ec: ExecutionContextExecutor, materializer: Materializer) = - this(request, request.uri.path, ec, materializer, log, settings, ParserSettings(ActorMaterializer.downcast(materializer).system)) + this(request, request.uri.path, ec, materializer, log, settings, ParserSettings(ActorMaterializerHelper.downcast(materializer).system)) def reconfigure(executionContext: ExecutionContextExecutor, materializer: Materializer, log: LoggingAdapter, settings: RoutingSettings): RequestContext = copy(executionContext = executionContext, materializer = materializer, log = log, routingSettings = settings) diff --git a/akka-http/src/main/scala/akka/http/scaladsl/server/Route.scala b/akka-http/src/main/scala/akka/http/scaladsl/server/Route.scala index 41f6c8bf49..e60a109578 100644 --- a/akka-http/src/main/scala/akka/http/scaladsl/server/Route.scala +++ b/akka-http/src/main/scala/akka/http/scaladsl/server/Route.scala @@ -5,8 +5,8 @@ package akka.http.scaladsl.server import akka.NotUsed -import akka.http.scaladsl.settings.{ RoutingSettings, ParserSettings } -import akka.stream.{ ActorMaterializer, Materializer } +import akka.http.scaladsl.settings.{ ParserSettings, RoutingSettings } +import akka.stream.{ ActorMaterializer, ActorMaterializerHelper, Materializer } import scala.concurrent.{ ExecutionContextExecutor, Future } import akka.stream.scaladsl.Flow @@ -66,7 +66,7 @@ object Route 
{
 {
   implicit val executionContext = effectiveEC // overrides parameter
-  val effectiveParserSettings = if (parserSettings ne null) parserSettings else ParserSettings(ActorMaterializer.downcast(materializer).system)
+  val effectiveParserSettings = if (parserSettings ne null) parserSettings else ParserSettings(ActorMaterializerHelper.downcast(materializer).system)
   val sealedRoute = seal(route)
   request ⇒
diff --git a/akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala b/akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
index 93d4952163..1b5d4fcecc 100644
--- a/akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
+++ b/akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
@@ -5,6 +5,9 @@ package akka.http.scaladsl.server
 package directives

+import akka.stream.scaladsl.Source
+import akka.util.ByteString
+
 import scala.concurrent.{ Future, ExecutionContextExecutor }
 import scala.collection.immutable
 import akka.event.LoggingAdapter
@@ -284,6 +287,20 @@ trait BasicDirectives {
    * @group basic
    */
   def extractRequestContext: Directive1[RequestContext] = BasicDirectives._extractRequestContext
+
+  /**
+   * Extracts the [[akka.http.scaladsl.model.RequestEntity]] from the [[akka.http.scaladsl.server.RequestContext]].
+   *
+   * @group basic
+   */
+  def extractRequestEntity: Directive1[RequestEntity] = BasicDirectives._extractRequestEntity
+
+  /**
+   * Extracts the entities `dataBytes` [[akka.stream.scaladsl.Source]] from the [[akka.http.scaladsl.server.RequestContext]].
+   *
+   * @group basic
+   */
+  def extractDataBytes: Directive1[Source[ByteString, Any]] = BasicDirectives._extractDataBytes
 }

 object BasicDirectives extends BasicDirectives {
@@ -296,4 +313,6 @@ object BasicDirectives extends BasicDirectives {
   private val _extractSettings: Directive1[RoutingSettings] = extract(_.settings)
   private val _extractParserSettings: Directive1[ParserSettings] = extract(_.parserSettings)
   private val _extractRequestContext: Directive1[RequestContext] = extract(conforms)
+  private val _extractRequestEntity: Directive1[RequestEntity] = extract(_.request.entity)
+  private val _extractDataBytes: Directive1[Source[ByteString, Any]] = extract(_.request.entity.dataBytes)
 }
diff --git a/akka-http/src/main/scala/akka/http/scaladsl/server/directives/MiscDirectives.scala b/akka-http/src/main/scala/akka/http/scaladsl/server/directives/MiscDirectives.scala
index 8d33d10b0e..9f70c1f116 100644
--- a/akka-http/src/main/scala/akka/http/scaladsl/server/directives/MiscDirectives.scala
+++ b/akka-http/src/main/scala/akka/http/scaladsl/server/directives/MiscDirectives.scala
@@ -6,6 +6,7 @@ package akka.http.scaladsl.server
 package directives

 import akka.http.scaladsl.model._
+import akka.http.scaladsl.server.directives.BasicDirectives._
 import headers._

 /**
@@ -71,6 +72,27 @@ trait MiscDirectives {
     BasicDirectives.extractRequest.map { request ⇒
       LanguageNegotiator(request.headers).pickLanguage(first :: List(more: _*)) getOrElse first
     }
+
+  /**
+   * Fails the stream with [[akka.http.scaladsl.model.EntityStreamSizeException]] if the request entity size exceeds
+   * the given limit. The limit given as a parameter overrides the limit configured with `akka.http.parsing.max-content-length`.
+   *
+   * Beware that the request entity size check is only executed when the entity is consumed.
+ * + * @group misc + */ + def withSizeLimit(maxBytes: Long): Directive0 = + mapRequestContext(_.mapRequest(_.mapEntity(_.withSizeLimit(maxBytes)))) + + /** + * + * Disables the size limit (configured by `akka.http.parsing.max-content-length` by default) checking on the incoming + * [[HttpRequest]] entity. + * Can be useful when handling arbitrarily large data uploads in specific parts of your routes. + * + * @group misc + */ + def withoutSizeLimit: Directive0 = MiscDirectives._withoutSizeLimit } object MiscDirectives extends MiscDirectives { @@ -95,4 +117,7 @@ object MiscDirectives extends MiscDirectives { case Complete(response) if response.entity.isKnownEmpty ⇒ Rejected(Nil) case x ⇒ x } + + private val _withoutSizeLimit: Directive0 = + mapRequestContext(_.mapRequest(_.mapEntity(_.withoutSizeLimit))) } diff --git a/akka-http/src/main/scala/akka/http/scaladsl/server/directives/SecurityDirectives.scala b/akka-http/src/main/scala/akka/http/scaladsl/server/directives/SecurityDirectives.scala index 7b845e3e15..5a6b8d6bac 100644 --- a/akka-http/src/main/scala/akka/http/scaladsl/server/directives/SecurityDirectives.scala +++ b/akka-http/src/main/scala/akka/http/scaladsl/server/directives/SecurityDirectives.scala @@ -13,13 +13,15 @@ import akka.http.scaladsl.util.FastFuture._ import akka.http.scaladsl.model.headers._ import akka.http.scaladsl.server.AuthenticationFailedRejection.{ CredentialsRejected, CredentialsMissing } -import scala.util.{ Try, Success } +import scala.util.Success /** * Provides directives for securing an inner route using the standard Http authentication headers [[`WWW-Authenticate`]] - * and [[Authorization]]. Most prominently, HTTP Basic authentication as defined in RFC 2617. + * and [[Authorization]]. Most prominently, HTTP Basic authentication and OAuth 2.0 Authorization Framework + * as defined in RFC 2617 and RFC 6750 respectively. * * See: RFC 2617. + * See: RFC 6750. * * @groupname security Security directives * @groupprio security 220 @@ -95,7 +97,7 @@ trait SecurityDirectives { authenticateOrRejectWithChallenge[BasicHttpCredentials, T] { cred ⇒ authenticator(Credentials(cred)).fast.map { case Some(t) ⇒ AuthenticationResult.success(t) - case None ⇒ AuthenticationResult.failWithChallenge(challengeFor(realm)) + case None ⇒ AuthenticationResult.failWithChallenge(HttpChallenges.basic(realm)) } } } @@ -146,7 +148,7 @@ trait SecurityDirectives { authenticateOrRejectWithChallenge[OAuth2BearerToken, T] { cred ⇒ authenticator(Credentials(cred)).fast.map { case Some(t) ⇒ AuthenticationResult.success(t) - case None ⇒ AuthenticationResult.failWithChallenge(challengeFor(realm)) + case None ⇒ AuthenticationResult.failWithChallenge(HttpChallenges.oAuth2(realm)) } } } @@ -248,13 +250,6 @@ trait SecurityDirectives { } } } - - /** - * Creates a `Basic` [[HttpChallenge]] for the given realm. - * - * @group security - */ - def challengeFor(realm: String) = HttpChallenge(scheme = "Basic", realm = realm, params = Map.empty) } object SecurityDirectives extends SecurityDirectives @@ -268,24 +263,35 @@ sealed trait Credentials object Credentials { case object Missing extends Credentials abstract case class Provided(identifier: String) extends Credentials { + + /** + * First applies the passed in `hasher` function to the received secret part of the Credentials + * and then safely compares the passed in `secret` with the hashed received secret. + * This method can be used if the secret is not stored in plain text. 
+ * Use of this method instead of manual String equality testing is recommended in order to guard against timing attacks. + * + * See also [[EnhancedString#secure_==]], for more information. + */ + def verify(secret: String, hasher: String ⇒ String): Boolean + /** * Safely compares the passed in `secret` with the received secret part of the Credentials. * Use of this method instead of manual String equality testing is recommended in order to guard against timing attacks. * * See also [[EnhancedString#secure_==]], for more information. */ - def verify(secret: String): Boolean + def verify(secret: String): Boolean = verify(secret, x ⇒ x) } def apply(cred: Option[HttpCredentials]): Credentials = { cred match { case Some(BasicHttpCredentials(username, receivedSecret)) ⇒ new Credentials.Provided(username) { - def verify(secret: String): Boolean = secret secure_== receivedSecret + def verify(secret: String, hasher: String ⇒ String): Boolean = secret secure_== hasher(receivedSecret) } case Some(OAuth2BearerToken(token)) ⇒ new Credentials.Provided(token) { - def verify(secret: String): Boolean = secret secure_== token + def verify(secret: String, hasher: String ⇒ String): Boolean = secret secure_== hasher(token) } case Some(GenericHttpCredentials(scheme, token, params)) ⇒ throw new UnsupportedOperationException("cannot verify generic HTTP credentials") diff --git a/akka-http/src/main/scala/akka/http/scaladsl/unmarshalling/MultipartUnmarshallers.scala b/akka-http/src/main/scala/akka/http/scaladsl/unmarshalling/MultipartUnmarshallers.scala index 38aa5efe3b..380f50fdd6 100644 --- a/akka-http/src/main/scala/akka/http/scaladsl/unmarshalling/MultipartUnmarshallers.scala +++ b/akka-http/src/main/scala/akka/http/scaladsl/unmarshalling/MultipartUnmarshallers.scala @@ -9,8 +9,8 @@ import akka.http.scaladsl.settings.ParserSettings import scala.collection.immutable import scala.collection.immutable.VectorBuilder import akka.util.ByteString -import akka.event.{ NoLogging, LoggingAdapter } -import akka.stream.ActorMaterializer +import akka.event.{ LoggingAdapter, NoLogging } +import akka.stream.{ ActorMaterializer, ActorMaterializerHelper } import akka.stream.impl.fusing.IteratorInterpreter import akka.stream.scaladsl._ import akka.http.impl.engine.parsing.BodyPartParser @@ -75,7 +75,7 @@ trait MultipartUnmarshallers { FastFuture.failed(new RuntimeException("Content-Type with a multipart media type must have a 'boundary' parameter")) case Some(boundary) ⇒ import BodyPartParser._ - val effectiveParserSettings = Option(parserSettings).getOrElse(ParserSettings(ActorMaterializer.downcast(mat).system)) + val effectiveParserSettings = Option(parserSettings).getOrElse(ParserSettings(ActorMaterializerHelper.downcast(mat).system)) val parser = new BodyPartParser(defaultContentType, boundary, log, effectiveParserSettings) FastFuture.successful { entity match { diff --git a/akka-persistence/src/main/resources/reference.conf b/akka-persistence/src/main/resources/reference.conf index c246b5c89c..4619f42756 100644 --- a/akka-persistence/src/main/resources/reference.conf +++ b/akka-persistence/src/main/resources/reference.conf @@ -92,11 +92,11 @@ akka.persistence { } } } - + # Fallback settings for journal plugin configurations. # These settings are used if they are not defined in plugin config section. journal-plugin-fallback { - + # Fully qualified class name providing journal plugin api implementation. # It is mandatory to specify this property. 
# The class must have a constructor without parameters or constructor with
@@ -105,40 +105,46 @@ akka.persistence {
    # Dispatcher for the plugin actor.
    plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
-
+
    # Dispatcher for message replay.
    replay-dispatcher = "akka.persistence.dispatchers.default-replay-dispatcher"
-
+
    # Removed: used to be the Maximum size of a persistent message batch written to the journal.
    # Now this setting is without function, PersistentActor will write as many messages
    # as it has accumulated since the last write.
    max-message-batch-size = 200
-
+
+   # If there is a longer pause than this between individual events replayed
+   # from the journal during recovery, the recovery will fail.
+   # Note that it also affects reading the snapshot before replaying events on
+   # top of it, even though it is configured for the journal.
+   recovery-event-timeout = 30s
+
    circuit-breaker {
      max-failures = 10
      call-timeout = 10s
      reset-timeout = 30s
    }
-
-   # The replay filter can detect a corrupt event stream by inspecting
-   # sequence numbers and writerUuid when replaying events.
+
+   # The replay filter can detect a corrupt event stream by inspecting
+   # sequence numbers and writerUuid when replaying events.
    replay-filter {
      # What the filter should do when detecting invalid events.
      # Supported values:
-     # `repair-by-discard-old` : discard events from old writers,
+     # `repair-by-discard-old` : discard events from old writers,
      #                           warning is logged
      # `fail` : fail the replay, error is logged
      # `warn` : log warning but emit events untouched
      # `off` : disable this feature completely
      mode = repair-by-discard-old
-
+
      # It uses a look ahead buffer for analyzing the events.
      # This defines the size (in number of events) of the buffer.
      window-size = 100
-
+
      # How many old writerUuid to remember
      max-old-writers = 10
-
+
      # Set this to `on` to enable detailed debug logging of each
      # replayed event.
      debug = off
@@ -148,8 +154,8 @@ akka.persistence {
  # Fallback settings for snapshot store plugin configurations
  # These settings are used if they are not defined in plugin config section.
  snapshot-store-plugin-fallback {
-
-   # Fully qualified class name providing snapshot store plugin api
+
+   # Fully qualified class name providing snapshot store plugin api
    # implementation. It is mandatory to specify this property if
    # snapshot store is enabled.
    # The class must have a constructor without parameters or constructor with
@@ -158,7 +164,7 @@
    # Dispatcher for the plugin actor.
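   # For illustration only (not part of this reference.conf): the new recovery-event-timeout above
   # can be overridden from an application.conf, either through the journal fallback, e.g.
   #   akka.persistence.journal-plugin-fallback.recovery-event-timeout = 60s
   # or per journal plugin, as the test config in this patch does for its stepping journal:
   #   akka.persistence.journal.stepping-inmem.recovery-event-timeout = 100ms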
    plugin-dispatcher = "akka.persistence.dispatchers.default-plugin-dispatcher"
-
+
    circuit-breaker {
      max-failures = 5
      call-timeout = 20s
diff --git a/akka-persistence/src/main/scala/akka/persistence/Eventsourced.scala b/akka-persistence/src/main/scala/akka/persistence/Eventsourced.scala
index 6b8c1d1b20..4f01d2739c 100644
--- a/akka-persistence/src/main/scala/akka/persistence/Eventsourced.scala
+++ b/akka-persistence/src/main/scala/akka/persistence/Eventsourced.scala
@@ -9,11 +9,13 @@ import java.util.UUID
 import scala.collection.immutable
 import scala.util.control.NonFatal

-import akka.actor.DeadLetter
-import akka.actor.StashOverflowException
+import akka.actor.{ DeadLetter, ReceiveTimeout, StashOverflowException }
+import akka.util.Helpers.ConfigOps
 import akka.event.Logging
 import akka.event.LoggingAdapter

+import scala.concurrent.duration.{ Duration, FiniteDuration }
+
 /**
  * INTERNAL API
  */
@@ -29,6 +31,9 @@ private[persistence] object Eventsourced {
  private final case class StashingHandlerInvocation(evt: Any, handler: Any ⇒ Unit) extends PendingHandlerInvocation
  /** does not force the actor to stash commands; Originates from either `persistAsync` or `defer` calls */
  private final case class AsyncHandlerInvocation(evt: Any, handler: Any ⇒ Unit) extends PendingHandlerInvocation
+
+ /** message used to detect that recovery timed out */
+ private final case class RecoveryTick(snapshot: Boolean)
}

/**
@@ -461,6 +466,13 @@ private[persistence] trait Eventsourced extends Snapshotter with PersistenceStas
   */
  private def recoveryStarted(replayMax: Long) = new State {

+   // protect against the snapshot load stalling forever, e.g. because the journal is overloaded
+   val timeout = extension.journalConfigFor(journalPluginId).getMillisDuration("recovery-event-timeout")
+   val timeoutCancellable = {
+    import context.dispatcher
+    context.system.scheduler.scheduleOnce(timeout, self, RecoveryTick(snapshot = true))
+   }
+
   private val recoveryBehavior: Receive = {
    val _receiveRecover = receiveRecover
@@ -471,6 +483,7 @@ private[persistence] trait Eventsourced extends Snapshotter with PersistenceStas
      _receiveRecover(s)
     case RecoveryCompleted if _receiveRecover.isDefinedAt(RecoveryCompleted) ⇒
      _receiveRecover(RecoveryCompleted)
+
    }
   }
@@ -479,14 +492,22 @@ private[persistence] trait Eventsourced extends Snapshotter with PersistenceStas
   override def stateReceive(receive: Receive, message: Any) = message match {
    case LoadSnapshotResult(sso, toSnr) ⇒
+     timeoutCancellable.cancel()
     sso.foreach {
      case SelectedSnapshot(metadata, snapshot) ⇒
       setLastSequenceNr(metadata.sequenceNr)
       // Since we are recovering we can ignore the receive behavior from the stack
       Eventsourced.super.aroundReceive(recoveryBehavior, SnapshotOffer(metadata, snapshot))
     }
-    changeState(recovering(recoveryBehavior))
+    changeState(recovering(recoveryBehavior, timeout))
     journal ! ReplayMessages(lastSequenceNr + 1L, toSnr, replayMax, persistenceId, self)
+
+   case RecoveryTick(true) ⇒
+    try onRecoveryFailure(
+     new RecoveryTimedOut(s"Recovery timed out, didn't get snapshot within $timeout"),
+     event = None)
+    finally context.stop(self)
+
    case other ⇒ stashInternally(other)
   }
@@ -502,32 +523,56 @@ private[persistence] trait Eventsourced extends Snapshotter with PersistenceStas
  *
  * All incoming messages are stashed.
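  *
  * For illustration only (not part of this patch): a user-level actor can observe the new
  * replay timeout through the `onRecoveryFailure` callback; the actor below is a made-up example.
  * {{{
  * class MyActor extends PersistentActor {
  *   override def persistenceId = "my-actor-1"
  *   override def receiveRecover: Receive = { case _ ⇒ }
  *   override def receiveCommand: Receive = { case msg ⇒ persist(msg) { m ⇒ sender() ! m } }
  *   override protected def onRecoveryFailure(cause: Throwable, event: Option[Any]): Unit =
  *     cause match {
  *       case _: RecoveryTimedOut ⇒ // e.g. raise an alert; the actor is stopped afterwards
  *       case _                   ⇒ // any other replay failure
  *     }
  * }
  * }}}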
  */
- private def recovering(recoveryBehavior: Receive) = new State {
-  override def toString: String = "replay started"
-  override def recoveryRunning: Boolean = true
+ private def recovering(recoveryBehavior: Receive, timeout: FiniteDuration) =
+  new State {

-  override def stateReceive(receive: Receive, message: Any) = message match {
-   case ReplayedMessage(p) ⇒
-    try {
-     updateLastSequenceNr(p)
-     Eventsourced.super.aroundReceive(recoveryBehavior, p)
-    } catch {
-     case NonFatal(t) ⇒
-      try onRecoveryFailure(t, Some(p.payload)) finally context.stop(self)
-    }
-   case RecoverySuccess(highestSeqNr) ⇒
-    onReplaySuccess() // callback for subclass implementation
-    changeState(processingCommands)
-    sequenceNr = highestSeqNr
-    setLastSequenceNr(highestSeqNr)
-    internalStash.unstashAll()
-    Eventsourced.super.aroundReceive(recoveryBehavior, RecoveryCompleted)
-   case ReplayMessagesFailure(cause) ⇒
-    try onRecoveryFailure(cause, event = None) finally context.stop(self)
-   case other ⇒
-    stashInternally(other)
+   // protect against the event replay stalling forever, e.g. because the journal is overloaded
+   val timeoutCancellable = {
+    import context.dispatcher
+    context.system.scheduler.schedule(timeout, timeout, self, RecoveryTick(snapshot = false))
+   }
+   var eventSeenInInterval = false
+
+   override def toString: String = "replay started"
+
+   override def recoveryRunning: Boolean = true
+
+   override def stateReceive(receive: Receive, message: Any) = message match {
+    case ReplayedMessage(p) ⇒
+     try {
+      eventSeenInInterval = true
+      updateLastSequenceNr(p)
+      Eventsourced.super.aroundReceive(recoveryBehavior, p)
+     } catch {
+      case NonFatal(t) ⇒
+       timeoutCancellable.cancel()
+       try onRecoveryFailure(t, Some(p.payload)) finally context.stop(self)
+     }
+    case RecoverySuccess(highestSeqNr) ⇒
+     timeoutCancellable.cancel()
+     onReplaySuccess() // callback for subclass implementation
+     changeState(processingCommands)
+     sequenceNr = highestSeqNr
+     setLastSequenceNr(highestSeqNr)
+     internalStash.unstashAll()
+     Eventsourced.super.aroundReceive(recoveryBehavior, RecoveryCompleted)
+    case ReplayMessagesFailure(cause) ⇒
+     timeoutCancellable.cancel()
+     try onRecoveryFailure(cause, event = None) finally context.stop(self)
+    case RecoveryTick(false) if !eventSeenInInterval ⇒
+     timeoutCancellable.cancel()
+     try onRecoveryFailure(
+      new RecoveryTimedOut(s"Recovery timed out, didn't get event within $timeout, highest sequence number seen $sequenceNr"),
+      event = None)
+     finally context.stop(self)
+    case RecoveryTick(false) ⇒
+     eventSeenInInterval = false
+    case RecoveryTick(true) ⇒
+     // snapshot tick, ignore
+    case other ⇒
+     stashInternally(other)
+   }
  }
- }

 private def flushBatch() {
  if (eventBatch.nonEmpty) {
@@ -590,6 +635,10 @@ private[persistence] trait Eventsourced extends Snapshotter with PersistenceStas
    case WriteMessagesFailed(_) ⇒
     writeInProgress = false
     () // it will be stopped by the first WriteMessageFailure message
+
+   case _: RecoveryTick ⇒
+    // we may have one of these in the mailbox before the scheduled timeout
+    // is cancelled when recovery has completed, just consume it so the concrete actor never sees it
 }

 def onWriteMessageComplete(err: Boolean): Unit
diff --git a/akka-persistence/src/main/scala/akka/persistence/Persistence.scala b/akka-persistence/src/main/scala/akka/persistence/Persistence.scala
index c8dd4b7e49..f76893c117 100644
--- a/akka-persistence/src/main/scala/akka/persistence/Persistence.scala
+++ b/akka-persistence/src/main/scala/akka/persistence/Persistence.scala
@@ -6,15 +6,18 @@ package
akka.persistence import java.util.concurrent.atomic.AtomicReference import java.util.function.Consumer + import akka.actor._ import akka.event.{ Logging, LoggingAdapter } import akka.persistence.journal.{ EventAdapters, IdentityEventAdapters } import akka.util.Collections.EmptyImmutableSeq import akka.util.Helpers.ConfigOps import com.typesafe.config.Config + import scala.annotation.tailrec import scala.concurrent.duration._ import akka.util.Reflect + import scala.util.control.NonFatal /** diff --git a/akka-persistence/src/main/scala/akka/persistence/PersistentActor.scala b/akka-persistence/src/main/scala/akka/persistence/PersistentActor.scala index b95689458b..9019b2420e 100644 --- a/akka-persistence/src/main/scala/akka/persistence/PersistentActor.scala +++ b/akka-persistence/src/main/scala/akka/persistence/PersistentActor.scala @@ -4,11 +4,14 @@ package akka.persistence import java.lang.{ Iterable ⇒ JIterable } + import akka.actor._ import akka.japi.Procedure import akka.japi.Util import com.typesafe.config.Config +import scala.util.control.NoStackTrace + abstract class RecoveryCompleted /** * Sent to a [[PersistentActor]] when the journal replay has been finished. @@ -98,6 +101,8 @@ object Recovery { val none: Recovery = Recovery(toSequenceNr = 0L) } +final class RecoveryTimedOut(message: String) extends RuntimeException(message) with NoStackTrace + /** * This defines how to handle the current received message which failed to stash, when the size of * Stash exceeding the capacity of Stash. @@ -286,7 +291,7 @@ abstract class UntypedPersistentActor extends UntypedActor with Eventsourced wit * * @see [[Recovery]] */ - @throws(classOf[Exception]) + @throws(classOf[Throwable]) def onReceiveRecover(msg: Any): Unit /** @@ -294,7 +299,7 @@ abstract class UntypedPersistentActor extends UntypedActor with Eventsourced wit * communication with other actors). On successful validation, one or more events are * derived from a command and these events are then persisted by calling `persist`. */ - @throws(classOf[Exception]) + @throws(classOf[Throwable]) def onReceiveCommand(msg: Any): Unit } diff --git a/akka-persistence/src/test/scala/akka/persistence/PersistentActorRecoveryTimeoutSpec.scala b/akka-persistence/src/test/scala/akka/persistence/PersistentActorRecoveryTimeoutSpec.scala new file mode 100644 index 0000000000..c4443831ad --- /dev/null +++ b/akka-persistence/src/test/scala/akka/persistence/PersistentActorRecoveryTimeoutSpec.scala @@ -0,0 +1,136 @@ +package akka.persistence + +import akka.actor.Status.Failure +import akka.actor.{ Actor, ActorRef, Props } +import akka.persistence.journal.SteppingInmemJournal +import akka.testkit.{ AkkaSpec, ImplicitSender, TestProbe } +import com.typesafe.config.ConfigFactory + +import scala.concurrent.duration._ + +object PersistentActorRecoveryTimeoutSpec { + val journalId = "persistent-actor-recovery-timeout-spec" + + def config = + SteppingInmemJournal.config(PersistentActorRecoveryTimeoutSpec.journalId).withFallback( + ConfigFactory.parseString( + """ + |akka.persistence.journal.stepping-inmem.recovery-event-timeout=100ms + """.stripMargin)).withFallback(PersistenceSpec.config("stepping-inmem", "PersistentActorRecoveryTimeoutSpec")) + + class TestActor(probe: ActorRef) extends NamedPersistentActor("recovery-timeout-actor") { + override def receiveRecover: Receive = Actor.emptyBehavior + + override def receiveCommand: Receive = { + case x ⇒ persist(x) { _ ⇒ + sender() ! 
x
   }
  }

  override protected def onRecoveryFailure(cause: Throwable, event: Option[Any]): Unit = {
   probe ! Failure(cause)
  }
 }

 class TestReceiveTimeoutActor(receiveTimeout: FiniteDuration, probe: ActorRef) extends NamedPersistentActor("recovery-timeout-actor-2") {

  override def preStart(): Unit = {
   context.setReceiveTimeout(receiveTimeout)
  }

  override def receiveRecover: Receive = {
   case RecoveryCompleted ⇒ probe ! context.receiveTimeout
   case _ ⇒ // we don't care
  }

  override def receiveCommand: Receive = {
   case x ⇒ persist(x) { _ ⇒
    sender() ! x
   }
  }

  override protected def onRecoveryFailure(cause: Throwable, event: Option[Any]): Unit = {
   probe ! Failure(cause)
  }
 }

}

class PersistentActorRecoveryTimeoutSpec extends AkkaSpec(PersistentActorRecoveryTimeoutSpec.config) with ImplicitSender {

 import PersistentActorRecoveryTimeoutSpec.journalId

 "The recovery timeout" should {

  "fail recovery if no event is replayed within the timeout" in {
   val probe = TestProbe()
   val persisting = system.actorOf(Props(classOf[PersistentActorRecoveryTimeoutSpec.TestActor], probe.ref))

   awaitAssert(SteppingInmemJournal.getRef(journalId), 3.seconds)
   val journal = SteppingInmemJournal.getRef(journalId)

   // initial read highest
   SteppingInmemJournal.step(journal)

   persisting ! "A"
   SteppingInmemJournal.step(journal)
   expectMsg("A")

   watch(persisting)
   system.stop(persisting)
   expectTerminated(persisting)

   // now replay, but don't give the journal any tokens to replay events
   // so that we cause the timeout to trigger
   val replaying = system.actorOf(Props(classOf[PersistentActorRecoveryTimeoutSpec.TestActor], probe.ref))
   watch(replaying)

   // initial read highest
   SteppingInmemJournal.step(journal)

   probe.expectMsgType[Failure].cause shouldBe a[RecoveryTimedOut]
   expectTerminated(replaying)

   // avoid having it stuck in the next test from the
   // last read request above
   SteppingInmemJournal.step(journal)
  }

  "not interfere with receive timeouts" in {
   val timeout = 42.days

   val probe = TestProbe()
   val persisting = system.actorOf(Props(classOf[PersistentActorRecoveryTimeoutSpec.TestReceiveTimeoutActor], timeout, probe.ref))

   awaitAssert(SteppingInmemJournal.getRef(journalId), 3.seconds)
   val journal = SteppingInmemJournal.getRef(journalId)

   // initial read highest
   SteppingInmemJournal.step(journal)

   persisting !
"A" + SteppingInmemJournal.step(journal) + expectMsg("A") + + watch(persisting) + system.stop(persisting) + expectTerminated(persisting) + + // now replay, but don't give the journal any tokens to replay events + // so that we cause the timeout to trigger + val replaying = system.actorOf(Props(classOf[PersistentActorRecoveryTimeoutSpec.TestReceiveTimeoutActor], timeout, probe.ref)) + + // initial read highest + SteppingInmemJournal.step(journal) + + // read journal + SteppingInmemJournal.step(journal) + + // we should get initial receive timeout back from actor when replay completes + probe.expectMsg(timeout) + + } + + } + +} diff --git a/akka-remote/src/main/java/akka/remote/ContainerFormats.java b/akka-remote/src/main/java/akka/remote/ContainerFormats.java index cdd93b7d85..1e63dd0f4b 100644 --- a/akka-remote/src/main/java/akka/remote/ContainerFormats.java +++ b/akka-remote/src/main/java/akka/remote/ContainerFormats.java @@ -3464,6 +3464,525 @@ public final class ContainerFormats { // @@protoc_insertion_point(class_scope:ActorRef) } + public interface OptionOrBuilder + extends akka.protobuf.MessageOrBuilder { + + // optional .Payload value = 1; + /** + * optional .Payload value = 1; + */ + boolean hasValue(); + /** + * optional .Payload value = 1; + */ + akka.remote.ContainerFormats.Payload getValue(); + /** + * optional .Payload value = 1; + */ + akka.remote.ContainerFormats.PayloadOrBuilder getValueOrBuilder(); + } + /** + * Protobuf type {@code Option} + */ + public static final class Option extends + akka.protobuf.GeneratedMessage + implements OptionOrBuilder { + // Use Option.newBuilder() to construct. + private Option(akka.protobuf.GeneratedMessage.Builder builder) { + super(builder); + this.unknownFields = builder.getUnknownFields(); + } + private Option(boolean noInit) { this.unknownFields = akka.protobuf.UnknownFieldSet.getDefaultInstance(); } + + private static final Option defaultInstance; + public static Option getDefaultInstance() { + return defaultInstance; + } + + public Option getDefaultInstanceForType() { + return defaultInstance; + } + + private final akka.protobuf.UnknownFieldSet unknownFields; + @java.lang.Override + public final akka.protobuf.UnknownFieldSet + getUnknownFields() { + return this.unknownFields; + } + private Option( + akka.protobuf.CodedInputStream input, + akka.protobuf.ExtensionRegistryLite extensionRegistry) + throws akka.protobuf.InvalidProtocolBufferException { + initFields(); + int mutable_bitField0_ = 0; + akka.protobuf.UnknownFieldSet.Builder unknownFields = + akka.protobuf.UnknownFieldSet.newBuilder(); + try { + boolean done = false; + while (!done) { + int tag = input.readTag(); + switch (tag) { + case 0: + done = true; + break; + default: { + if (!parseUnknownField(input, unknownFields, + extensionRegistry, tag)) { + done = true; + } + break; + } + case 10: { + akka.remote.ContainerFormats.Payload.Builder subBuilder = null; + if (((bitField0_ & 0x00000001) == 0x00000001)) { + subBuilder = value_.toBuilder(); + } + value_ = input.readMessage(akka.remote.ContainerFormats.Payload.PARSER, extensionRegistry); + if (subBuilder != null) { + subBuilder.mergeFrom(value_); + value_ = subBuilder.buildPartial(); + } + bitField0_ |= 0x00000001; + break; + } + } + } + } catch (akka.protobuf.InvalidProtocolBufferException e) { + throw e.setUnfinishedMessage(this); + } catch (java.io.IOException e) { + throw new akka.protobuf.InvalidProtocolBufferException( + e.getMessage()).setUnfinishedMessage(this); + } finally { + this.unknownFields = 
unknownFields.build(); + makeExtensionsImmutable(); + } + } + public static final akka.protobuf.Descriptors.Descriptor + getDescriptor() { + return akka.remote.ContainerFormats.internal_static_Option_descriptor; + } + + protected akka.protobuf.GeneratedMessage.FieldAccessorTable + internalGetFieldAccessorTable() { + return akka.remote.ContainerFormats.internal_static_Option_fieldAccessorTable + .ensureFieldAccessorsInitialized( + akka.remote.ContainerFormats.Option.class, akka.remote.ContainerFormats.Option.Builder.class); + } + + public static akka.protobuf.Parser