Merge branch 'master' into wip-sync-2.4.8-artery-patriknw

This commit is contained in:
Patrik Nordwall 2016-07-08 15:38:33 +02:00
commit ccb5d1ba04
358 changed files with 9913 additions and 2030 deletions

View file

@ -2,17 +2,7 @@
In case of questions about the contribution process or for discussion of specific issues please visit the [akka/dev gitter chat](https://gitter.im/akka/dev).
## Infrastructure
* [Akka Contributor License Agreement](http://www.lightbend.com/contribute/cla)
* [Akka Issue Tracker](http://doc.akka.io/docs/akka/current/project/issue-tracking.html)
* [Scalariform](https://github.com/daniel-trinh/scalariform)
# Lightbend Project & Developer Guidelines
These guidelines are meant to be a living document that should be changed and adapted as needed. We encourage changes that make it easier to achieve our goals in an efficient way.
These guidelines mainly apply to Lightbend's “mature” projects - not necessarily to projects that are mere collections of scripts and the like.
# Navigating around the project & codebase
## Branches summary
@ -20,37 +10,80 @@ Depending on which version (or sometimes module) you want to work on, you should
* `master` active development branch of Akka 2.4.x
* `release-2.3` maintenance branch of Akka 2.3.x
* `artery-dev` work on the upcoming remoting implementation, codenamed "artery"
* similarly `release-2.#` branches contain legacy versions of Akka
## Tags
Akka uses tags to categorise issues into groups or mark their phase in development.
Most notably, many tags start with the `t:` prefix (as in `topic:`), which categorises issues by the module they relate to. Examples are:
- [t:core](https://github.com/akka/akka/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aopen%20label%3At%3Acore)
- [t:stream](https://github.com/akka/akka/issues?q=is%3Aissue+is%3Aopen+label%3At%3Astream)
- see [all tags here](https://github.com/akka/akka/labels)
In general *all issues are open for anyone to work on*; however, if you're new to the project and looking for an issue
that is likely to be accepted and is a nice one to get started with, you should check out the following tags:
- [community](https://github.com/akka/akka/labels/community) - which identifies issues that the core team will likely not have time to work on, or that are nice entry-level tickets. If you're not sure how to solve a ticket but would like to work on it, feel free to ask in the issue for clarification or tips.
- [nice-to-have (low-priority)](https://github.com/akka/akka/labels/nice-to-have%20%28low-prio%29) - tasks which make sense but are not high priority (compared to other, more pressing issues). If you see something interesting in this list, a contribution would be really wonderful!
Another group of tickets are those which start with a number. They are used to signal in what phase of development an issue is:
- [0 - new](https://github.com/akka/akka/labels/0%20-%20new) - assigned when a ticket's purpose or validity is still unclear. Sometimes the additional tag `discuss` is used to mark such tickets if they propose large-scale changes and need more discussion before moving into triaged (or being closed as invalid).
- [1 - triaged](https://github.com/akka/akka/labels/1%20-%20triaged) - roughly speaking means "this ticket makes sense". Triaged tickets are safe to pick up for contributing, in the sense that a patch for them is likely to be accepted. It is not recommended to start working on a ticket that is not triaged.
- [2 - pick next](https://github.com/akka/akka/labels/2%20-%20pick%20next) - used to mark issues which are next up in the queue to be worked on. Sometimes it's also used to mark which PRs are expected to be reviewed/merged for the next release. The tag is non-binding, and mostly used as an organisational helper.
- [3 - in progress](https://github.com/akka/akka/labels/3%20-%20in%20progress) - means someone is working on this ticket. If you see a ticket that has this tag but seems inactive, the tag may simply not have been removed; feel free to ping the ticket to ask whether it's still being worked on.
The last group of special tags indicate specific states a ticket is in:
- [bug](https://github.com/akka/akka/labels/bug) - bugs take priority over features when it comes to being fixed. The core team dedicates a number of days to working on bugs each sprint. Bugs which have reproducers are also great for community contributions as they're well isolated. Sometimes we're not lucky enough to have a reproducer, in which case the bugfix should also include a test reproducing the original error along with the fix.
- [failed](https://github.com/akka/akka/labels/failed) - these tickets indicate a Jenkins failure (for example from a nightly build). They usually start with a `FAILED: ...` message and include a stacktrace plus a link to the Jenkins failure. The tickets are collected and worked on with priority to keep the build stable and healthy. Often it is a simple timeout issue (Jenkins boxes are slow), though sometimes real bugs are discovered this way.
Pull Request validation states:
- `validating => [tested | needs-attention]` - signify pull request validation status
# Akka contributing guidelines
These guidelines apply to all Akka projects, by which we mean both the `akka/akka` repository
and any plugins or additional repositories located under the Akka GitHub organisation.
These guidelines are meant to be a living document that should be changed and adapted as needed.
We encourage changes that make it easier to achieve our goals in an efficient way.
## General Workflow
This is the process for committing code into master. There are of course exceptions to these rules, for example minor changes to comments and documentation, fixing a broken build etc.
The below steps are how to get a patch into a main development branch (e.g. `master`).
The steps are exactly the same for everyone involved in the project (be it core team, or first time contributor).
1. Make sure you have signed the Lightbend CLA, if not, [sign it online](http://www.lightbend.com/contribute/cla).
2. Before starting to work on a feature or a fix, make sure that:
1. There is a ticket for your work in the project's issue tracker. If not, create it first.
2. The ticket has been scheduled for the current milestone.
3. The ticket is estimated by the team.
4. The ticket has been discussed and prioritized by the team.
3. You should always perform your work in a Git feature branch. The branch should be given a descriptive name that explains its intent. Some teams also like adding the ticket number and/or the [GitHub](http://github.com) user ID to the branch name; these details are up to each individual team.
1. Make sure an issue exists in the [issue tracker](https://github.com/akka/akka/issues) for the work you want to contribute.
- If there is no ticket for it, [create one](https://github.com/akka/akka/issues/new) first.
1. [Fork the project](https://github.com/akka/akka#fork-destination-box) on GitHub. You'll need to create a feature branch for your work on your fork, as this way you'll be able to submit a Pull Request against mainline Akka.
1. Create a branch on your fork and work on the feature. For example: `git checkout -b wip-custom-headers-akka-http`
- Please make sure to follow the general quality guidelines (specified below) when developing your patch.
- Please write additional tests covering your feature and adjust existing ones if needed before submitting your Pull Request. The `validatePullRequest` sbt task ([explained below](#validatePullRequest)) may come in handy to verify your changes are correct.
1. Once your feature is complete, prepare the commit following our [commit message guidelines](#commit-message-guidelines). For example, a good commit message would be: `Adding compression support for Manifests #22222` (note the reference to the ticket it aimed to resolve).
1. Now it's finally time to [submit the Pull Request](https://help.github.com/articles/using-pull-requests)!
1. If you have not already done so, you will be asked by our CLA bot to [sign the Lightbend CLA](http://www.lightbend.com/contribute/cla) online. CLA stands for Contributor License Agreement and is a way of protecting the project from intellectual property disputes.
1. If you're not already on the contributors white-list, the @akka-ci bot will ask `Can one of the repo owners verify this patch?`, to which a core member will reply by commenting `OK TO TEST`. This is just a sanity check to prevent malicious code from being run on the Jenkins cluster.
1. Now both committers and interested people will review your code. This process is to ensure the code we merge is of the best possible quality, and that no silly mistakes slip through. You're expected to follow up on these comments by adding new commits to the same branch. The commit messages of those commits can be looser, for example: `Removed debugging using printline`, as they all will be squashed into one commit before merging into the main branch.
- The community and team are really nice people, so don't be afraid to ask follow up questions if you didn't understand some comment, or would like to clarify how to continue with a given feature. We're here to help, so feel free to ask and discuss any kind of questions you might have during review!
1. After the review you should fix the issues as needed (pushing a new commit for a new review etc.), iterating until the reviewers give their thumbs up, which is usually signalled by a comment saying `LGTM`, meaning "Looks Good To Me".
- In general a PR is expected to get 2 LGTMs from the team before it is merged. If the PR is trivial, or under special circumstances (such as most of the team being on vacation, or a PR that was very thoroughly reviewed/tested and surely is correct), one LGTM may be fine as well.
1. If the code change needs to be applied to other branches as well (for example a bugfix needing to be backported to a previous version), one of the team will either ask you to submit a PR with the same commit to the old branch, or do this for you.
- Backport pull requests such as these are marked using the phrase `for validation` in the title to make the purpose clear in the pull request list. They can be merged once validation passes without additional review (if there are no conflicts).
1. Once everything is said and done, your Pull Request gets merged :tada: Your feature will be available with the next “earliest” release milestone (i.e. if back-ported so that it will be in release x.y.z, find the relevant milestone for that release). And of course you will be given credit for the fix in the release stats during the release's announcement. You've made it!
Akka prefers the committer name as part of the branch name; the ticket number is optional.
The TL;DR version of the above very precise workflow is:
4. When the feature or fix is completed you should open a [Pull Request](https://help.github.com/articles/using-pull-requests) on GitHub.
5. The Pull Request should be reviewed by other maintainers (as many as feasible/practical). Note that the maintainers can consist of outside contributors, both within and outside Lightbend. Outside contributors (for example from EPFL or independent committers) are encouraged to participate in the review process, it is not a closed process.
6. After the review you should fix the issues as needed (pushing a new commit for new review etc.), iterating until the reviewers give their thumbs up.
When the branch conflicts with its merge target (either by way of git merge conflict or failing CI tests), do **not** merge the target branch into your feature branch. Instead rebase your branch onto the target branch. Merges complicate the git history, especially for the squashing which is necessary later (see below).
7. Once the code has passed review the Pull Request can be merged into the master branch. For this purpose the commits which were added on the feature branch should be squashed into a single commit. This can be done using the command `git rebase -i master` (or the appropriate target branch), `pick`ing the first commit and `squash`ing all following ones.
Also make sure that the commit message conforms to the syntax specified below.
8. If the code change needs to be applied to other branches as well, create pull requests against those branches which contain the change after rebasing it onto the respective branch and await successful verification by the continuous integration infrastructure; then merge those pull requests.
Please mark these pull requests with `(for validation)` in the title to make the purpose clear in the pull request list.
9. Once everything is said and done, associate the ticket with the “earliest” release milestone (i.e. if back-ported so that it will be in release x.y.z, find the relevant milestone for that release) and close it.
1. Fork Akka
2. Hack and test on your feature (on a branch)
3. Submit a PR
4. Sign the CLA if necessary
5. Keep polishing it until it has received enough LGTMs
6. Profit!
## The `validatePullRequest` task
@ -77,41 +110,62 @@ target PR branch you can do so by setting the PR_TARGET_BRANCH environment varia
PR_TARGET_BRANCH=origin/example sbt validatePullRequest
```
## Binary compatibility
Binary compatibility rules and guarantees are described in depth in the [Binary Compatibility Rules](http://doc.akka.io/docs/akka/snapshot/common/binary-compatibility-rules.html) section of the documentation.
Akka uses MiMa (which is short for [Lightbend Migration Manager](https://github.com/typesafehub/migration-manager)) to
validate binary compatibility of incoming Pull Requests. If your PR fails due to binary compatibility issues, you may see
an error like this:
```
[info] akka-stream: found 1 potential binary incompatibilities while checking against com.typesafe.akka:akka-stream_2.11:2.4.2 (filtered 222)
[error] * method foldAsync(java.lang.Object,scala.Function2)akka.stream.scaladsl.FlowOps in trait akka.stream.scaladsl.FlowOps is present only in current version
[error] filter with: ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.stream.scaladsl.FlowOps.foldAsync")
```
In such situations it's good to consult a core team member about whether the violation can be safely ignored (by adding the reported filter snippet to `project/MiMa.scala`, as sketched after the list below), or whether it would indeed break binary compatibility.
Situations when it may be fine to ignore a MiMa-issued warning include:
- if it is touching any class marked as `private[akka]`, `/** INTERNAL API */` or with similar markers
- if it is concerning internal classes (often recognisable by package names like `dungeon`, `impl`, `internal` etc.)
- if it is adding API to classes / traits which are only meant for extension by Akka itself, i.e. should not be extended by end-users
- other tricky situations
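If the team agrees that a violation is safe to ignore, the reported filter line is added to `project/MiMa.scala`. A minimal sketch of what such an entry might look like (the actual layout of that file differs and changes over time, so treat the object name and version key below as illustrative only):

```scala
import com.typesafe.tools.mima.core._

object MiMaFilters {
  // Exclusions are typically grouped by the released version the check runs against;
  // the filter itself is copied verbatim from the MiMa error output shown above.
  val filters: Map[String, Seq[ProblemFilter]] = Map(
    "2.4.2" -> Seq(
      ProblemFilters.exclude[ReversedMissingMethodProblem]("akka.stream.scaladsl.FlowOps.foldAsync")))
}
```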
## Pull Request Requirements
For a Pull Request to be considered at all it has to meet these requirements:
1. Live up to the current code standard:
- Not violate [DRY](http://programmer.97things.oreilly.com/wiki/index.php/Don%27t_Repeat_Yourself).
- [Boy Scout Rule](http://programmer.97things.oreilly.com/wiki/index.php/The_Boy_Scout_Rule) needs to have been applied.
2. Regardless of whether the code introduces new features or fixes bugs or regressions, it must have comprehensive tests.
3. The code must be well documented in Lightbend's standard documentation format (see the Documentation section below).
4. The commit messages must properly describe the changes, see further below.
5. All Lightbend projects must include Lightbend copyright notices. Each project can choose between one of two approaches:
1. Regardless of whether the code introduces new features or fixes bugs or regressions, it must have comprehensive tests.
1. The code must be well documented in Lightbend's standard documentation format (see the Documentation section below).
1. The commit messages must properly describe the changes, see further below.
1. All Lightbend projects must include Lightbend copyright notices. Each project can choose between one of two approaches:
1. All source files in the project must have a Lightbend copyright notice in the file header.
2. The Notices file for the project includes the Lightbend copyright notice and no other files contain copyright notices. See http://www.apache.org/legal/src-headers.html for instructions for managing this approach for copyrights.
1. The Notices file for the project includes the Lightbend copyright notice and no other files contain copyright notices. See http://www.apache.org/legal/src-headers.html for instructions for managing this approach for copyrights.
Akka uses the first choice, having copyright notices in every file header.
Other guidelines to follow for copyright notices:
- Use a form of ``Copyright (C) 2011-2016 Lightbend Inc. <http://www.lightbend.com>``, where the start year is when the project or file was first created and the end year is the last time the project or file was modified.
- Never delete or change existing copyright notices, just add additional info.
- Do not use ``@author`` tags since it does not encourage [Collective Code Ownership](http://www.extremeprogramming.org/rules/collective.html). However, each project should make sure that the contributors get the credit they deserve—in a text file or page on the project website and in the release notes etc.
### Additional guidelines
Some additional guidelines regarding source code are:
- files should start with a ``Copyright (C) 2016 Lightbend Inc. <http://www.lightbend.com>`` copyright header
- keep the code [DRY](http://programmer.97things.oreilly.com/wiki/index.php/Don%27t_Repeat_Yourself)
- apply the [Boy Scout Rule](http://programmer.97things.oreilly.com/wiki/index.php/The_Boy_Scout_Rule) whenever you have the chance to
- Never delete or change existing copyright notices, just add additional info.
- Do not use ``@author`` tags since it does not encourage [Collective Code Ownership](http://www.extremeprogramming.org/rules/collective.html).
- Contributors should get the credit they deserve; each project should make sure of it—in a text file or page on the project website and in the release notes etc.
If these requirements are not met then the code should **not** be merged into master, or even reviewed - regardless of how good or important it is. No exceptions.
Whether or not a pull request (or parts of it) shall be back- or forward-ported will be discussed on the pull request discussion page; it shall therefore not be part of the commit messages. If desired, the intent can be expressed in the pull request description.
## Continuous Integration
Each project should be configured to use a continuous integration (CI) tool (i.e. a build server à la Jenkins). Lightbend has a [Jenkins server farm](https://jenkins.akka.io/) that can be used. The CI tool should, on each push to master, build the **full** distribution and run **all** tests, and if something fails it should email out a notification with the failure report to the committer and the core team. The CI tool should also be used in conjunction with a Pull Request validator (discussed below).
## Documentation
All documentation should be generated using the sbt-site-plugin, *or* publish artifacts to a repository that can be consumed by the Lightbend stack.
All documentation must abide by the following maxims:
- Example code should be run as part of an automated test suite.
@ -141,12 +195,6 @@ Which licenses are compatible with Apache 2 are defined in [this doc](http://www
Each project must also create and maintain a list of all dependencies and their licenses, including all their transitive dependencies. This can be done either in the documentation or in the build file next to each dependency.
## Work In Progress
It is OK to work on a public feature branch in the GitHub repository; this can sometimes be useful for early feedback etc. If so, it is preferable to name the branch accordingly. This can be done by either prefixing the name with ``wip-`` as in Work In Progress, or using hierarchical names like ``wip/..``, ``feature/..`` or ``topic/..``. Either way is fine as long as it is clear that it is work in progress and not ready for merge. This work can temporarily have a lower standard. However, to be merged into master it will have to go through the regular process outlined above, with Pull Request, review etc.
Also, to facilitate both well-formed commits and working together, the ``wip`` and ``feature``/``topic`` identifiers also have special meaning. Any branch labelled with ``wip`` is considered “git-unstable” and may be rebased and have its history rewritten. Any branch with ``feature``/``topic`` in the name is considered “stable” enough for others to depend on when a group is working on a feature.
## Creating Commits And Writing Commit Messages
Follow these guidelines when creating public commits and writing commit messages.
@ -160,7 +208,7 @@ Follow these guidelines when creating public commits and writing commit messages
3. Following the single line description should be a blank line followed by an enumerated list with the details of the commit.
4. Add keywords for your commit (depending on the degree of automation we reach, the list may change over time):
4. You can request review by a specific team member for your commit (depending on the degree of automation we reach, the list may change over time):
* ``Review by @gituser`` - if you want to notify someone on the team. The others can, and are encouraged to participate.
Example:
@ -171,9 +219,8 @@ Example:
* Details 2
* Details 3
## How To Enforce These Guidelines?
## Pull request validation workflow details
### Make Use of Pull Request Validator
Akka uses [Jenkins GitHub pull request builder plugin](https://wiki.jenkins-ci.org/display/JENKINS/GitHub+pull+request+builder+plugin)
that automatically merges the code, builds it, runs the tests and comments on the Pull Request in GitHub.
@ -198,8 +245,19 @@ the validator to test all projects.
## Source style
### Scala style
Akka uses [Scalariform](https://github.com/daniel-trinh/scalariform) to enforce some of the code style rules.
### Java style
Java code is currently not automatically reformatted by sbt (expecting to have a plugin to do this soon).
Thus we ask Java contributors to follow these simple guidelines:
- 2 spaces of indentation
- `{` on the same line as the method name
- in all other aspects, follow the [Oracle Java Style Guide](http://www.oracle.com/technetwork/java/codeconvtoc-136057.html)
## Contributing Modules
For external contributions of entire features, the normal way is to establish it
@ -209,3 +267,20 @@ akka-contrib subproject), then when the feature is hardened, well documented and
tested it becomes an officially supported Akka feature.
[List of experimental Akka features](http://doc.akka.io/docs/akka/current/experimental/index.html)
# Supporting infrastructure
## Continuous Integration
Each project should be configured to use a continuous integration (CI) tool (i.e. a build server à la Jenkins).
Lightbend is sponsoring a [Jenkins server farm](https://jenkins.akka.io/), sometimes referred to as "the Lausanne cluster".
The cluster is made out of real bare-metal boxes, and maintained by the Akka team (and other very helpful people at Lightbend).
In addition to PR Validation the cluster is also used for nightly and performance test runs.
## Related links
* [Akka Contributor License Agreement](http://www.lightbend.com/contribute/cla)
* [Akka Issue Tracker](http://doc.akka.io/docs/akka/current/project/issue-tracking.html)
* [Scalariform](https://github.com/daniel-trinh/scalariform)

View file

@ -1,6 +1,8 @@
# Akka
Akka
====
We believe that writing correct concurrent & distributed, resilient and elastic applications is too hard.
Most of the time it's because we are using the wrong tools and the wrong level of abstraction.
Akka is here to change that.
@ -10,10 +12,44 @@ For resilience we adopt the "Let it crash" model which the telecom industry has
Actors also provide the abstraction for transparent distribution and the basis for truly scalable and fault-tolerant applications.
Learn more at [akka.io](http://akka.io/).
Reference Documentation
-----------------------
The reference documentation is available at [doc.akka.io](http://doc.akka.io),
for [Scala](http://doc.akka.io/docs/akka/current/scala.html) and [Java](http://doc.akka.io/docs/akka/current/java.html).
Community
---------
You can join these groups and chats to discuss and ask Akka related questions:
- Mailing list: [![google groups: akka-user](https://img.shields.io/badge/group%3A-akka--user-blue.svg?style=flat-square)](https://groups.google.com/forum/#!forum/akka-user)
- Chat room about *using* Akka: [![gitter: akka/akka](https://img.shields.io/badge/gitter%3A-akka%2Fakka-blue.svg?style=flat-square)](https://gitter.im/akka/akka)
- Issue tracker: [![github: akka/akka](https://img.shields.io/badge/github%3A-issues-blue.svg?style=flat-square)](https://github.com/akka/akka/issues)
In addition to that, you may enjoy following:
- The [Akka Team Blog](http://blog.akka.io)
- [@akkateam](https://twitter.com/akkateam) on Twitter
- Questions tagged [#akka on StackOverflow](https://stackoverflow.com/questions/tagged/akka)
Contributing
------------
Contributions are *very* welcome!
If you see an issue that you'd like to see fixed, the best way to make it happen is to help out by submitting a Pull Request implementing it.
Refer to the [CONTRIBUTING.md](https://github.com/akka/akka/blob/master/CONTRIBUTING.md) file for more details about the workflow,
and general hints on how to prepare your pull request. You can also ask for clarification or guidance in GitHub issues directly,
or in the akka/dev chat if more real-time communication would be of benefit.
A chat room is available for all questions related to *developing and contributing* to Akka:
[![gitter: akka/dev](https://img.shields.io/badge/gitter%3A-akka%2Fdev-blue.svg?style=flat-square)](https://gitter.im/akka/dev)
License
-------
Akka is Open Source and available under the Apache 2 License.
Learn more at [akka.io](http://akka.io/). Join the [akka-user](https://groups.google.com/forum/#!forum/akka-user) mailing list. Follow [@akkateam](https://twitter.com/akkateam) on twitter.
If you are looking to contribute back to Akka, the [CONTRIBUTING.md](CONTRIBUTING.md) file should provide you with all the information needed to get started.
[![Gitter](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/akka/akka?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

View file

@ -0,0 +1,14 @@
/**
* Copyright (C) 2009-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package test.akka.serialization
import akka.actor.NoSerializationVerificationNeeded
/**
* This is currently used in NoSerializationVerificationNeeded test cases in SerializeSpec,
* as they needed a serializable class whose top package is not akka.
*/
class NoVerification extends NoSerializationVerificationNeeded with java.io.Serializable {
}

View file

@ -18,6 +18,8 @@ import akka.pattern.ask
import org.apache.commons.codec.binary.Hex.encodeHex
import java.nio.ByteOrder
import java.nio.ByteBuffer
import akka.actor.NoSerializationVerificationNeeded
import test.akka.serialization.NoVerification
object SerializationTests {
@ -443,18 +445,18 @@ class DefaultSerializationWarningSpec extends AkkaSpec(
ConfigFactory.parseString("akka.actor.warn-about-java-serializer-usage = on")) {
val ser = SerializationExtension(system)
val messagePrefix = "Using the default Java serializer for class.*"
val messagePrefix = "Using the default Java serializer for class"
"Using the default Java serializer" must {
"log a warning when serializing classes outside of java.lang package" in {
EventFilter.warning(message = messagePrefix) intercept {
EventFilter.warning(start = messagePrefix, occurrences = 1) intercept {
ser.serializerFor(classOf[java.math.BigDecimal])
}
}
"not log warning when serializing classes from java.lang package" in {
EventFilter.warning(message = messagePrefix, occurrences = 0) intercept {
EventFilter.warning(start = messagePrefix, occurrences = 0) intercept {
ser.serializerFor(classOf[java.lang.String])
}
}
@ -463,6 +465,54 @@ class DefaultSerializationWarningSpec extends AkkaSpec(
}
class NoVerificationWarningSpec extends AkkaSpec(
ConfigFactory.parseString(
"akka.actor.warn-about-java-serializer-usage = on\n" +
"akka.actor.warn-on-no-serialization-verification = on")) {
val ser = SerializationExtension(system)
val messagePrefix = "Using the default Java serializer for class"
"When warn-on-no-serialization-verification = on, using the default Java serializer" must {
"log a warning on classes without extending NoSerializationVerificationNeeded" in {
EventFilter.warning(start = messagePrefix, occurrences = 1) intercept {
ser.serializerFor(classOf[java.math.BigDecimal])
}
}
"still log warning on classes extending NoSerializationVerificationNeeded" in {
EventFilter.warning(start = messagePrefix, occurrences = 1) intercept {
ser.serializerFor(classOf[NoVerification])
}
}
}
}
class NoVerificationWarningOffSpec extends AkkaSpec(
ConfigFactory.parseString(
"akka.actor.warn-about-java-serializer-usage = on\n" +
"akka.actor.warn-on-no-serialization-verification = off")) {
val ser = SerializationExtension(system)
val messagePrefix = "Using the default Java serializer for class"
"When warn-on-no-serialization-verification = off, using the default Java serializer" must {
"log a warning on classes without extending NoSerializationVerificationNeeded" in {
EventFilter.warning(start = messagePrefix, occurrences = 1) intercept {
ser.serializerFor(classOf[java.math.BigDecimal])
}
}
"not log warning on classes extending NoSerializationVerificationNeeded" in {
EventFilter.warning(start = messagePrefix, occurrences = 0) intercept {
ser.serializerFor(classOf[NoVerification])
}
}
}
}
protected[akka] trait TestSerializable
protected[akka] class TestSerializer extends Serializer {

View file

@ -592,6 +592,12 @@ akka {
# you can turn this off.
warn-about-java-serializer-usage = on
# To be used with the above warn-about-java-serializer-usage
# When warn-about-java-serializer-usage = on and warn-on-no-serialization-verification = off,
# warnings are suppressed for classes extending NoSerializationVerificationNeeded
# to reduce noise.
warn-on-no-serialization-verification = on
# Configuration namespace of serialization identifiers.
# Each serializer implementation must have an entry in the following format:
# `akka.actor.serialization-identifiers."FQCN" = ID`
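As a side note on how the new setting is meant to be used, here is a minimal sketch (the class, object and system names are illustrative only): a message type that is only ever sent locally can extend `NoSerializationVerificationNeeded`, and with `warn-on-no-serialization-verification = off` the Java serializer warning is suppressed for it.

```scala
import akka.actor.{ ActorSystem, NoSerializationVerificationNeeded }
import akka.serialization.SerializationExtension
import com.typesafe.config.ConfigFactory

// Hypothetical local-only message that opts out of serialization verification
final case class LocalOnlyCommand(payload: String) extends NoSerializationVerificationNeeded

object WarnOnNoVerificationSketch extends App {
  val system = ActorSystem("sketch", ConfigFactory.parseString("""
    akka.actor.warn-about-java-serializer-usage = on
    akka.actor.warn-on-no-serialization-verification = off
  """))

  // With the configuration above this lookup does not log the
  // "Using the default Java serializer" warning; with the setting
  // turned back on (the default) it still would.
  SerializationExtension(system).serializerFor(classOf[LocalOnlyCommand])

  system.terminate()
}
```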

View file

@ -98,7 +98,7 @@ abstract class UntypedActor extends Actor {
* To be implemented by concrete UntypedActor, this defines the behavior of the
* UntypedActor.
*/
@throws(classOf[Exception])
@throws(classOf[Throwable])
def onReceive(message: Any): Unit
/**

View file

@ -464,7 +464,8 @@ class CircuitBreaker(scheduler: Scheduler, maxFailures: Int, callTimeout: Finite
* @return duration to when the breaker will attempt a reset by transitioning to half-open
*/
private def remainingDuration(): FiniteDuration = {
val diff = System.nanoTime() - get
val fromOpened = System.nanoTime() - get
val diff = resetTimeout.toNanos - fromOpened
if (diff <= 0L) Duration.Zero
else diff.nanos
}

View file

@ -6,7 +6,7 @@ package akka.routing
import java.time.LocalDateTime
import scala.collection.immutable
import scala.concurrent.forkjoin.ThreadLocalRandom
import java.util.concurrent.ThreadLocalRandom
import scala.concurrent.duration._
import com.typesafe.config.Config

View file

@ -315,13 +315,20 @@ class Serialization(val system: ExtendedActorSystem) extends Extension {
}
private val isJavaSerializationWarningEnabled = settings.config.getBoolean("akka.actor.warn-about-java-serializer-usage")
private val isWarningOnNoVerificationEnabled = settings.config.getBoolean("akka.actor.warn-on-no-serialization-verification")
private def shouldWarnAboutJavaSerializer(serializedClass: Class[_], serializer: Serializer) = {
def suppressWarningOnNonSerializationVerification(serializedClass: Class[_]) = {
//suppressed, only when warn-on-no-serialization-verification = off, and extending NoSerializationVerificationNeeded
!isWarningOnNoVerificationEnabled && classOf[NoSerializationVerificationNeeded].isAssignableFrom(serializedClass)
}
isJavaSerializationWarningEnabled &&
serializer.isInstanceOf[JavaSerializer] &&
!serializedClass.getName.startsWith("akka.") &&
!serializedClass.getName.startsWith("java.lang.")
!serializedClass.getName.startsWith("java.lang.") &&
!suppressWarningOnNonSerializationVerification(serializedClass)
}
}

View file

@ -35,6 +35,16 @@ public final class DistributedPubSubMessages {
*/
akka.cluster.pubsub.protobuf.msg.DistributedPubSubMessages.Status.VersionOrBuilder getVersionsOrBuilder(
int index);
// optional bool replyToStatus = 2;
/**
* <code>optional bool replyToStatus = 2;</code>
*/
boolean hasReplyToStatus();
/**
* <code>optional bool replyToStatus = 2;</code>
*/
boolean getReplyToStatus();
}
/**
* Protobuf type {@code Status}
@ -95,6 +105,11 @@ public final class DistributedPubSubMessages {
versions_.add(input.readMessage(akka.cluster.pubsub.protobuf.msg.DistributedPubSubMessages.Status.Version.PARSER, extensionRegistry));
break;
}
case 16: {
bitField0_ |= 0x00000001;
replyToStatus_ = input.readBool();
break;
}
}
}
} catch (akka.protobuf.InvalidProtocolBufferException e) {
@ -749,6 +764,7 @@ public final class DistributedPubSubMessages {
// @@protoc_insertion_point(class_scope:Status.Version)
}
private int bitField0_;
// repeated .Status.Version versions = 1;
public static final int VERSIONS_FIELD_NUMBER = 1;
private java.util.List<akka.cluster.pubsub.protobuf.msg.DistributedPubSubMessages.Status.Version> versions_;
@ -785,8 +801,25 @@ public final class DistributedPubSubMessages {
return versions_.get(index);
}
// optional bool replyToStatus = 2;
public static final int REPLYTOSTATUS_FIELD_NUMBER = 2;
private boolean replyToStatus_;
/**
* <code>optional bool replyToStatus = 2;</code>
*/
public boolean hasReplyToStatus() {
return ((bitField0_ & 0x00000001) == 0x00000001);
}
/**
* <code>optional bool replyToStatus = 2;</code>
*/
public boolean getReplyToStatus() {
return replyToStatus_;
}
private void initFields() {
versions_ = java.util.Collections.emptyList();
replyToStatus_ = false;
}
private byte memoizedIsInitialized = -1;
public final boolean isInitialized() {
@ -809,6 +842,9 @@ public final class DistributedPubSubMessages {
for (int i = 0; i < versions_.size(); i++) {
output.writeMessage(1, versions_.get(i));
}
if (((bitField0_ & 0x00000001) == 0x00000001)) {
output.writeBool(2, replyToStatus_);
}
getUnknownFields().writeTo(output);
}
@ -822,6 +858,10 @@ public final class DistributedPubSubMessages {
size += akka.protobuf.CodedOutputStream
.computeMessageSize(1, versions_.get(i));
}
if (((bitField0_ & 0x00000001) == 0x00000001)) {
size += akka.protobuf.CodedOutputStream
.computeBoolSize(2, replyToStatus_);
}
size += getUnknownFields().getSerializedSize();
memoizedSerializedSize = size;
return size;
@ -945,6 +985,8 @@ public final class DistributedPubSubMessages {
} else {
versionsBuilder_.clear();
}
replyToStatus_ = false;
bitField0_ = (bitField0_ & ~0x00000002);
return this;
}
@ -972,6 +1014,7 @@ public final class DistributedPubSubMessages {
public akka.cluster.pubsub.protobuf.msg.DistributedPubSubMessages.Status buildPartial() {
akka.cluster.pubsub.protobuf.msg.DistributedPubSubMessages.Status result = new akka.cluster.pubsub.protobuf.msg.DistributedPubSubMessages.Status(this);
int from_bitField0_ = bitField0_;
int to_bitField0_ = 0;
if (versionsBuilder_ == null) {
if (((bitField0_ & 0x00000001) == 0x00000001)) {
versions_ = java.util.Collections.unmodifiableList(versions_);
@ -981,6 +1024,11 @@ public final class DistributedPubSubMessages {
} else {
result.versions_ = versionsBuilder_.build();
}
if (((from_bitField0_ & 0x00000002) == 0x00000002)) {
to_bitField0_ |= 0x00000001;
}
result.replyToStatus_ = replyToStatus_;
result.bitField0_ = to_bitField0_;
onBuilt();
return result;
}
@ -1022,6 +1070,9 @@ public final class DistributedPubSubMessages {
}
}
}
if (other.hasReplyToStatus()) {
setReplyToStatus(other.getReplyToStatus());
}
this.mergeUnknownFields(other.getUnknownFields());
return this;
}
@ -1295,6 +1346,39 @@ public final class DistributedPubSubMessages {
return versionsBuilder_;
}
// optional bool replyToStatus = 2;
private boolean replyToStatus_ ;
/**
* <code>optional bool replyToStatus = 2;</code>
*/
public boolean hasReplyToStatus() {
return ((bitField0_ & 0x00000002) == 0x00000002);
}
/**
* <code>optional bool replyToStatus = 2;</code>
*/
public boolean getReplyToStatus() {
return replyToStatus_;
}
/**
* <code>optional bool replyToStatus = 2;</code>
*/
public Builder setReplyToStatus(boolean value) {
bitField0_ |= 0x00000002;
replyToStatus_ = value;
onChanged();
return this;
}
/**
* <code>optional bool replyToStatus = 2;</code>
*/
public Builder clearReplyToStatus() {
bitField0_ = (bitField0_ & ~0x00000002);
replyToStatus_ = false;
onChanged();
return this;
}
// @@protoc_insertion_point(builder_scope:Status)
}
@ -7508,24 +7592,25 @@ public final class DistributedPubSubMessages {
descriptor;
static {
java.lang.String[] descriptorData = {
"\n\037DistributedPubSubMessages.proto\"d\n\006Sta" +
"tus\022!\n\010versions\030\001 \003(\0132\017.Status.Version\0327" +
"\n\007Version\022\031\n\007address\030\001 \002(\0132\010.Address\022\021\n\t" +
"timestamp\030\002 \002(\003\"\256\001\n\005Delta\022\036\n\007buckets\030\001 \003" +
"(\0132\r.Delta.Bucket\0322\n\005Entry\022\013\n\003key\030\001 \002(\t\022" +
"\017\n\007version\030\002 \002(\003\022\013\n\003ref\030\003 \001(\t\032Q\n\006Bucket\022" +
"\027\n\005owner\030\001 \002(\0132\010.Address\022\017\n\007version\030\002 \002(" +
"\003\022\035\n\007content\030\003 \003(\0132\014.Delta.Entry\"K\n\007Addr" +
"ess\022\016\n\006system\030\001 \002(\t\022\020\n\010hostname\030\002 \002(\t\022\014\n" +
"\004port\030\003 \002(\r\022\020\n\010protocol\030\004 \001(\t\"F\n\004Send\022\014\n",
"\004path\030\001 \002(\t\022\025\n\rlocalAffinity\030\002 \002(\010\022\031\n\007pa" +
"yload\030\003 \002(\0132\010.Payload\"H\n\tSendToAll\022\014\n\004pa" +
"th\030\001 \002(\t\022\022\n\nallButSelf\030\002 \002(\010\022\031\n\007payload\030" +
"\003 \002(\0132\010.Payload\"3\n\007Publish\022\r\n\005topic\030\001 \002(" +
"\t\022\031\n\007payload\030\003 \002(\0132\010.Payload\"Q\n\007Payload\022" +
"\027\n\017enclosedMessage\030\001 \002(\014\022\024\n\014serializerId" +
"\030\002 \002(\005\022\027\n\017messageManifest\030\004 \001(\014B$\n akka." +
"cluster.pubsub.protobuf.msgH\001"
"\n\037DistributedPubSubMessages.proto\"{\n\006Sta" +
"tus\022!\n\010versions\030\001 \003(\0132\017.Status.Version\022\025" +
"\n\rreplyToStatus\030\002 \001(\010\0327\n\007Version\022\031\n\007addr" +
"ess\030\001 \002(\0132\010.Address\022\021\n\ttimestamp\030\002 \002(\003\"\256" +
"\001\n\005Delta\022\036\n\007buckets\030\001 \003(\0132\r.Delta.Bucket" +
"\0322\n\005Entry\022\013\n\003key\030\001 \002(\t\022\017\n\007version\030\002 \002(\003\022" +
"\013\n\003ref\030\003 \001(\t\032Q\n\006Bucket\022\027\n\005owner\030\001 \002(\0132\010." +
"Address\022\017\n\007version\030\002 \002(\003\022\035\n\007content\030\003 \003(" +
"\0132\014.Delta.Entry\"K\n\007Address\022\016\n\006system\030\001 \002" +
"(\t\022\020\n\010hostname\030\002 \002(\t\022\014\n\004port\030\003 \002(\r\022\020\n\010pr",
"otocol\030\004 \001(\t\"F\n\004Send\022\014\n\004path\030\001 \002(\t\022\025\n\rlo" +
"calAffinity\030\002 \002(\010\022\031\n\007payload\030\003 \002(\0132\010.Pay" +
"load\"H\n\tSendToAll\022\014\n\004path\030\001 \002(\t\022\022\n\nallBu" +
"tSelf\030\002 \002(\010\022\031\n\007payload\030\003 \002(\0132\010.Payload\"3" +
"\n\007Publish\022\r\n\005topic\030\001 \002(\t\022\031\n\007payload\030\003 \002(" +
"\0132\010.Payload\"Q\n\007Payload\022\027\n\017enclosedMessag" +
"e\030\001 \002(\014\022\024\n\014serializerId\030\002 \002(\005\022\027\n\017message" +
"Manifest\030\004 \001(\014B$\n akka.cluster.pubsub.pr" +
"otobuf.msgH\001"
};
akka.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner assigner =
new akka.protobuf.Descriptors.FileDescriptor.InternalDescriptorAssigner() {
@ -7537,7 +7622,7 @@ public final class DistributedPubSubMessages {
internal_static_Status_fieldAccessorTable = new
akka.protobuf.GeneratedMessage.FieldAccessorTable(
internal_static_Status_descriptor,
new java.lang.String[] { "Versions", });
new java.lang.String[] { "Versions", "ReplyToStatus", });
internal_static_Status_Version_descriptor =
internal_static_Status_descriptor.getNestedTypes().get(0);
internal_static_Status_Version_fieldAccessorTable = new

View file

@ -11,6 +11,7 @@ message Status {
required int64 timestamp = 2;
}
repeated Version versions = 1;
optional bool replyToStatus = 2;
}
message Delta {

View file

@ -221,12 +221,15 @@ object DistributedPubSubMediator {
}
@SerialVersionUID(1L)
final case class Status(versions: Map[Address, Long]) extends DistributedPubSubMessage
final case class Status(versions: Map[Address, Long], isReplyToStatus: Boolean) extends DistributedPubSubMessage
with DeadLetterSuppression
@SerialVersionUID(1L)
final case class Delta(buckets: immutable.Iterable[Bucket]) extends DistributedPubSubMessage
with DeadLetterSuppression
// Only for testing purposes, to verify replication
case object DeltaCount
case object GossipTick
@SerialVersionUID(1L)
@ -500,6 +503,7 @@ class DistributedPubSubMediator(settings: DistributedPubSubSettings) extends Act
var registry: Map[Address, Bucket] = Map.empty.withDefault(a ⇒ Bucket(a, 0L, TreeMap.empty))
var nodes: Set[Address] = Set.empty
var deltaCount = 0L
// the version is a timestamp because it is also used when pruning removed entries
val nextVersion = {
@ -615,15 +619,21 @@ class DistributedPubSubMediator(settings: DistributedPubSubSettings) extends Act
case msg @ Unsubscribed(ack, ref) ⇒
ref ! ack
case Status(otherVersions) ⇒
// gossip chat starts with a Status message, containing the bucket versions of the other node
val delta = collectDelta(otherVersions)
if (delta.nonEmpty)
sender() ! Delta(delta)
if (otherHasNewerVersions(otherVersions))
sender() ! Status(versions = myVersions) // it will reply with Delta
case Status(otherVersions, isReplyToStatus) ⇒
// only accept status from known nodes, otherwise old cluster with same address may interact
// also accept from local for testing purposes
if (nodes(sender().path.address) || sender().path.address.hasLocalScope) {
// gossip chat starts with a Status message, containing the bucket versions of the other node
val delta = collectDelta(otherVersions)
if (delta.nonEmpty)
sender() ! Delta(delta)
if (!isReplyToStatus && otherHasNewerVersions(otherVersions))
sender() ! Status(versions = myVersions, isReplyToStatus = true) // it will reply with Delta
}
case Delta(buckets) ⇒
deltaCount += 1
// reply from Status message in the gossip chat
// the Delta contains potential updates (newer versions) from the other node
// only accept deltas/buckets from known nodes, otherwise there is a risk of
@ -666,6 +676,12 @@ class DistributedPubSubMediator(settings: DistributedPubSubSettings) extends Act
if (matchingRole(m))
nodes += m.address
case MemberLeft(m) ⇒
if (matchingRole(m)) {
nodes -= m.address
registry -= m.address
}
case MemberRemoved(m, _) ⇒
if (m.address == selfAddress)
context stop self
@ -683,6 +699,9 @@ class DistributedPubSubMediator(settings: DistributedPubSubSettings) extends Act
}
}.sum
sender() ! count
case DeltaCount ⇒
sender() ! deltaCount
}
private def sendToDeadLetters(msg: Any) = context.system.deadLetters ! DeadLetter(msg, sender(), context.self)
@ -783,7 +802,8 @@ class DistributedPubSubMediator(settings: DistributedPubSubSettings) extends Act
def gossip(): Unit = selectRandomNode((nodes - selfAddress).toVector) foreach gossipTo
def gossipTo(address: Address): Unit = {
context.actorSelection(self.path.toStringWithAddress(address)) ! Status(versions = myVersions)
val sel = context.actorSelection(self.path.toStringWithAddress(address))
sel ! Status(versions = myVersions, isReplyToStatus = false)
}
def selectRandomNode(addresses: immutable.IndexedSeq[Address]): Option[Address] =

View file

@ -114,15 +114,20 @@ private[akka] class DistributedPubSubMessageSerializer(val system: ExtendedActor
setTimestamp(v).
build()
}.toVector.asJava
dm.Status.newBuilder().addAllVersions(versions).build()
dm.Status.newBuilder()
.addAllVersions(versions)
.setReplyToStatus(status.isReplyToStatus)
.build()
}
private def statusFromBinary(bytes: Array[Byte]): Status =
statusFromProto(dm.Status.parseFrom(decompress(bytes)))
private def statusFromProto(status: dm.Status): Status =
private def statusFromProto(status: dm.Status): Status = {
val isReplyToStatus = if (status.hasReplyToStatus) status.getReplyToStatus else false
Status(status.getVersionsList.asScala.map(v ⇒
addressFromProto(v.getAddress) → v.getTimestamp)(breakOut))
addressFromProto(v.getAddress) → v.getTimestamp)(breakOut), isReplyToStatus)
}
private def deltaToProto(delta: Delta): dm.Delta = {
val buckets = delta.buckets.map { b ⇒

View file

@ -454,7 +454,7 @@ class DistributedPubSubMediatorSpec extends MultiNodeSpec(DistributedPubSubMedia
val thirdAddress = node(third).address
runOn(first) {
mediator ! Status(versions = Map.empty)
mediator ! Status(versions = Map.empty, isReplyToStatus = false)
val deltaBuckets = expectMsgType[Delta].buckets
deltaBuckets.size should ===(3)
deltaBuckets.find(_.owner == firstAddress).get.content.size should ===(10)
@ -469,15 +469,15 @@ class DistributedPubSubMediatorSpec extends MultiNodeSpec(DistributedPubSubMedia
for (i ← 0 until many)
mediator ! Put(createChatUser("u" + (1000 + i)))
mediator ! Status(versions = Map.empty)
mediator ! Status(versions = Map.empty, isReplyToStatus = false)
val deltaBuckets1 = expectMsgType[Delta].buckets
deltaBuckets1.map(_.content.size).sum should ===(500)
mediator ! Status(versions = deltaBuckets1.map(b ⇒ b.owner → b.version).toMap)
mediator ! Status(versions = deltaBuckets1.map(b ⇒ b.owner → b.version).toMap, isReplyToStatus = false)
val deltaBuckets2 = expectMsgType[Delta].buckets
deltaBuckets1.map(_.content.size).sum should ===(500)
mediator ! Status(versions = deltaBuckets2.map(b ⇒ b.owner → b.version).toMap)
mediator ! Status(versions = deltaBuckets2.map(b ⇒ b.owner → b.version).toMap, isReplyToStatus = false)
val deltaBuckets3 = expectMsgType[Delta].buckets
deltaBuckets3.map(_.content.size).sum should ===(10 + 9 + 2 + many - 500 - 500)

View file

@ -0,0 +1,164 @@
/**
* Copyright (C) 2009-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package akka.cluster.pubsub
import language.postfixOps
import scala.concurrent.duration._
import com.typesafe.config.ConfigFactory
import akka.actor.Actor
import akka.actor.ActorRef
import akka.actor.PoisonPill
import akka.actor.Props
import akka.cluster.Cluster
import akka.cluster.ClusterEvent._
import akka.remote.testconductor.RoleName
import akka.remote.testkit.MultiNodeConfig
import akka.remote.testkit.MultiNodeSpec
import akka.remote.testkit.STMultiNodeSpec
import akka.testkit._
import akka.actor.ActorLogging
import akka.cluster.pubsub.DistributedPubSubMediator.Internal.Status
import akka.cluster.pubsub.DistributedPubSubMediator.Internal.Delta
import akka.actor.ActorSystem
import scala.concurrent.Await
import akka.actor.Identify
import akka.actor.RootActorPath
import akka.actor.ActorIdentity
object DistributedPubSubRestartSpec extends MultiNodeConfig {
val first = role("first")
val second = role("second")
val third = role("third")
commonConfig(ConfigFactory.parseString("""
akka.loglevel = INFO
akka.cluster.pub-sub.gossip-interval = 500ms
akka.actor.provider = "akka.cluster.ClusterActorRefProvider"
akka.remote.log-remote-lifecycle-events = off
akka.cluster.auto-down-unreachable-after = off
"""))
testTransport(on = true)
class Shutdown extends Actor {
def receive = {
case "shutdown" context.system.terminate()
}
}
}
class DistributedPubSubRestartMultiJvmNode1 extends DistributedPubSubRestartSpec
class DistributedPubSubRestartMultiJvmNode2 extends DistributedPubSubRestartSpec
class DistributedPubSubRestartMultiJvmNode3 extends DistributedPubSubRestartSpec
class DistributedPubSubRestartSpec extends MultiNodeSpec(DistributedPubSubRestartSpec) with STMultiNodeSpec with ImplicitSender {
import DistributedPubSubRestartSpec._
import DistributedPubSubMediator._
override def initialParticipants = roles.size
def join(from: RoleName, to: RoleName): Unit = {
runOn(from) {
Cluster(system) join node(to).address
createMediator()
}
enterBarrier(from.name + "-joined")
}
def createMediator(): ActorRef = DistributedPubSub(system).mediator
def mediator: ActorRef = DistributedPubSub(system).mediator
def awaitCount(expected: Int): Unit = {
val probe = TestProbe()
awaitAssert {
mediator.tell(Count, probe.ref)
probe.expectMsgType[Int] should ===(expected)
}
}
"A Cluster with DistributedPubSub" must {
"startup 3 node cluster" in within(15 seconds) {
join(first, first)
join(second, first)
join(third, first)
enterBarrier("after-1")
}
"handle restart of nodes with same address" in within(30 seconds) {
mediator ! Subscribe("topic1", testActor)
expectMsgType[SubscribeAck]
awaitCount(3)
runOn(first) {
mediator ! Publish("topic1", "msg1")
}
enterBarrier("pub-msg1")
expectMsg("msg1")
enterBarrier("got-msg1")
runOn(second) {
mediator ! Internal.DeltaCount
val oldDeltaCount = expectMsgType[Long]
enterBarrier("end")
mediator ! Internal.DeltaCount
val deltaCount = expectMsgType[Long]
deltaCount should ===(oldDeltaCount)
}
runOn(first) {
mediator ! Internal.DeltaCount
val oldDeltaCount = expectMsgType[Long]
val thirdAddress = node(third).address
testConductor.shutdown(third).await
within(20.seconds) {
awaitAssert {
system.actorSelection(RootActorPath(thirdAddress) / "user" / "shutdown") ! Identify(None)
expectMsgType[ActorIdentity](1.second).ref.get
}
}
system.actorSelection(RootActorPath(thirdAddress) / "user" / "shutdown") ! "shutdown"
enterBarrier("end")
mediator ! Internal.DeltaCount
val deltaCount = expectMsgType[Long]
deltaCount should ===(oldDeltaCount)
}
runOn(third) {
Await.result(system.whenTerminated, 10.seconds)
val newSystem = ActorSystem(
system.name,
ConfigFactory.parseString(s"akka.remote.netty.tcp.port=${Cluster(system).selfAddress.port.get}").withFallback(
system.settings.config))
try {
// don't join the old cluster
Cluster(newSystem).join(Cluster(newSystem).selfAddress)
val newMediator = DistributedPubSub(newSystem).mediator
val probe = TestProbe()(newSystem)
newMediator.tell(Subscribe("topic2", probe.ref), probe.ref)
probe.expectMsgType[SubscribeAck]
// let them gossip, but Delta should not be exchanged
probe.expectNoMsg(5.seconds)
newMediator.tell(Internal.DeltaCount, probe.ref)
probe.expectMsg(0L)
newSystem.actorOf(Props[Shutdown], "shutdown")
Await.ready(newSystem.whenTerminated, 10.seconds)
} finally newSystem.terminate()
}
}
}
}

View file

@ -30,7 +30,7 @@ class DistributedPubSubMessageSerializerSpec extends AkkaSpec {
val u2 = system.actorOf(Props.empty, "u2")
val u3 = system.actorOf(Props.empty, "u3")
val u4 = system.actorOf(Props.empty, "u4")
checkSerialization(Status(Map(address1 → 3, address2 → 17, address3 → 5)))
checkSerialization(Status(Map(address1 → 3, address2 → 17, address3 → 5), isReplyToStatus = true))
checkSerialization(Delta(List(
Bucket(address1, 3, TreeMap("/user/u1" → ValueHolder(2, Some(u1)), "/user/u2" → ValueHolder(3, Some(u2)))),
Bucket(address2, 17, TreeMap("/user/u3" → ValueHolder(17, Some(u3)))),

View file

@ -276,12 +276,14 @@ object Replicator {
final case class Subscribe[A <: ReplicatedData](key: Key[A], subscriber: ActorRef) extends ReplicatorMessage
/**
* Unregister a subscriber.
* @see [[Replicator.Subscribe]]
*
* @see [[Replicator.Subscribe]]
*/
final case class Unsubscribe[A <: ReplicatedData](key: Key[A], subscriber: ActorRef) extends ReplicatorMessage
/**
* The data value is retrieved with [[#get]] using the typed key.
* @see [[Replicator.Subscribe]]
*
* @see [[Replicator.Subscribe]]
*/
final case class Changed[A <: ReplicatedData](key: Key[A])(data: A) extends ReplicatorMessage {
/**
@ -752,6 +754,9 @@ final class Replicator(settings: ReplicatorSettings) extends Actor with ActorLog
// cluster nodes, doesn't contain selfAddress
var nodes: Set[Address] = Set.empty
// cluster weaklyUp nodes, doesn't contain selfAddress
var weaklyUpNodes: Set[Address] = Set.empty
// nodes removed from cluster, to be pruned, and tombstoned
var removedNodes: Map[UniqueAddress, Long] = Map.empty
var pruningPerformed: Map[UniqueAddress, Long] = Map.empty
@ -810,6 +815,7 @@ final class Replicator(settings: ReplicatorSettings) extends Actor with ActorLog
case Subscribe(key, subscriber) ⇒ receiveSubscribe(key, subscriber)
case Unsubscribe(key, subscriber) ⇒ receiveUnsubscribe(key, subscriber)
case Terminated(ref) ⇒ receiveTerminated(ref)
case MemberWeaklyUp(m) ⇒ receiveWeaklyUpMemberUp(m)
case MemberUp(m) ⇒ receiveMemberUp(m)
case MemberRemoved(m, _) ⇒ receiveMemberRemoved(m)
case _: MemberEvent ⇒ // not of interest
@ -998,7 +1004,7 @@ final class Replicator(settings: ReplicatorSettings) extends Actor with ActorLog
changed = Set.empty[String]
}
def receiveGossipTick(): Unit = selectRandomNode(nodes.toVector) foreach gossipTo
def receiveGossipTick(): Unit = selectRandomNode(nodes.union(weaklyUpNodes).toVector) foreach gossipTo
def gossipTo(address: Address): Unit = {
val to = replica(address)
@ -1113,15 +1119,22 @@ final class Replicator(settings: ReplicatorSettings) extends Actor with ActorLog
}
}
def receiveMemberUp(m: Member): Unit =
def receiveWeaklyUpMemberUp(m: Member): Unit =
if (matchingRole(m) && m.address != selfAddress)
weaklyUpNodes += m.address
def receiveMemberUp(m: Member): Unit =
if (matchingRole(m) && m.address != selfAddress) {
nodes += m.address
weaklyUpNodes -= m.address
}
def receiveMemberRemoved(m: Member): Unit = {
if (m.address == selfAddress)
context stop self
else if (matchingRole(m)) {
nodes -= m.address
weaklyUpNodes -= m.address
removedNodes = removedNodes.updated(m.uniqueAddress, allReachableClockTime)
unreachable -= m.address
}

View file

@ -116,7 +116,7 @@ Add texlive bin to $PATH:
::
export TEXLIVE_PATH=/usr/local/texlive/2015basic/bin/universal-darwin
export TEXLIVE_PATH=/usr/local/texlive/2016basic/bin/universal-darwin
export PATH=$TEXLIVE_PATH:$PATH
Add missing tex packages:
@ -131,6 +131,11 @@ Add missing tex packages:
sudo tlmgr install helvetic
sudo tlmgr install courier
sudo tlmgr install multirow
sudo tlmgr install capt-of
sudo tlmgr install needspace
sudo tlmgr install eqparbox
sudo tlmgr install environ
sudo tlmgr install trimspaces
If you get the error "unknown locale: UTF-8" when generating the documentation the solution is to define the following environment variables:

View file

@ -75,11 +75,7 @@ Since actors are created in a strictly hierarchical fashion, there exists a
unique sequence of actor names given by recursively following the supervision
links between child and parent down towards the root of the actor system. This
sequence can be seen as enclosing folders in a file system, hence we adopted
the name “path” to refer to it. As in some real file-systems there also are
“symbolic links”, i.e. one actor may be reachable using more than one path,
where all but one involve some translation which decouples part of the path
from the actors actual supervision ancestor line; these specialities are
described in the sub-sections to follow.
the name “path” to refer to it, although the actor hierarchy has some fundamental differences from a file system hierarchy.
An actor path consists of an anchor, which identifies the actor system,
followed by the concatenation of the path elements, from root guardian to the
@ -143,6 +139,18 @@ systems or JVMs. This means that the logical path (supervision hierarchy) and
the physical path (actor deployment) of an actor may diverge if one of its
ancestors is remotely supervised.
Actor path alias or symbolic link?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
As in some real file-systems you might think of a “path alias” or “symbolic link” for an actor,
i.e. one actor may be reachable using more than one path.
However, you should note that the actor hierarchy is different from a file system hierarchy.
You cannot freely create actor paths like symbolic links to refer to arbitrary actors.
As described in the logical and physical actor path sections above,
an actor path must be either a logical path, which represents the supervision hierarchy, or a
physical path, which represents actor deployment.
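As a rough illustration (a sketch in Scala; the actor and system names here are made up
for the example), the logical path of an actor is fully determined by where in the
supervision hierarchy it was created:

::

    import akka.actor.{ Actor, ActorSystem, Props }

    class Worker extends Actor {
      // this actor's logical path will be akka://example/user/parent/worker
      def receive = { case msg ⇒ sender() ! self.path }
    }

    class Parent extends Actor {
      val worker = context.actorOf(Props[Worker], "worker")
      def receive = { case msg ⇒ worker forward msg }
    }

    object PathSketch extends App {
      val system = ActorSystem("example")
      val parent = system.actorOf(Props[Parent], "parent")
      println(parent.path) // akka://example/user/parent
      system.terminate()
    }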
How are Actor References obtained?
----------------------------------

Binary image file changed (204 KiB before, 208 KiB after); diff not shown.
Image diff suppressed because one or more lines are too long (14 KiB before and after).
Binary image file changed (57 KiB before and after); diff not shown.
Image diff suppressed because one or more lines are too long (5.2 KiB before and after).

View file

@ -14,9 +14,9 @@ Cluster metrics information is primarily used for load-balancing routers,
and can also be used to implement advanced metrics-based node life cycles,
such as "Node Let-it-crash" when CPU steal time becomes excessive.
Cluster Metrics Extension is a separate akka module delivered in ``akka-cluster-metrics`` jar.
To enable usage of the extension you need to add the following dependency to your project:
::
<dependency>
@ -29,13 +29,13 @@ and add the following configuration stanza to your ``application.conf``
::
akka.extensions = [ "akka.cluster.metrics.ClusterMetricsExtension" ]
Make sure to disable legacy metrics in akka-cluster: ``akka.cluster.metrics.enabled=off``,
since it is still enabled in akka-cluster by default (for compatibility with past releases).
Cluster members with status :ref:`WeaklyUp <weakly_up_java>`, if that feature is enabled,
will participate in Cluster Metrics collection and dissemination.
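The ``WeaklyUp`` feature itself is opt-in in Akka 2.4; as a sketch, it can be switched on in ``application.conf`` with::

  akka.cluster.allow-weakly-up-members = on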
Metrics Collector
-----------------
@ -46,15 +46,15 @@ Certain message routing and let-it-crash functions may not work when Sigar is no
Cluster metrics extension comes with two built-in collector implementations:
#. ``akka.cluster.metrics.SigarMetricsCollector``, which requires Sigar provisioning, and is more rich/precise
#. ``akka.cluster.metrics.JmxMetricsCollector``, which is used as fall back, and is less rich/precise
You can also plug-in your own metrics collector implementation.
By default, the metrics extension will use collector provider fall back and will try to load them in this order (a configuration sketch for the user-provided collector follows the list):
#. configured user-provided collector
#. built-in ``akka.cluster.metrics.SigarMetricsCollector``
#. and finally ``akka.cluster.metrics.JmxMetricsCollector``
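The user-provided collector in that list is selected through the ``akka.cluster.metrics.collector.provider`` property described further below; as a sketch, with a made-up class name::

  akka.cluster.metrics.collector.provider = "com.example.MyMetricsCollector"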
Metrics Events
@ -71,7 +71,7 @@ which was received during the collector sample period.
You can subscribe your metrics listener actors to these events in order to implement custom node lifecycle
::
ClusterMetricsExtension.get(system).subscribe(metricsListenerActor);
Hyperic Sigar Provisioning
--------------------------
@ -79,8 +79,8 @@ Hyperic Sigar Provisioning
Both user-provided and built-in metrics collectors can optionally use `Hyperic Sigar <http://www.hyperic.com/products/sigar>`_
for a wider and more accurate range of metrics compared to what can be retrieved from ordinary JMX MBeans.
Sigar uses a native o/s library, and requires library provisioning, i.e.
deployment, extraction and loading of the o/s native library into the JVM at runtime.
User can provision Sigar classes and native library in one of the following ways:
@ -90,8 +90,15 @@ User can provision Sigar classes and native library in one of the following ways
Kamon sigar loader agent will extract and load sigar library during JVM start.
#. Place ``sigar.jar`` on the ``classpath`` and Sigar native library for the o/s on the ``java.library.path``.
User is required to manage both project dependency and library deployment manually.
.. warning::
When using `Kamon sigar-loader <https://github.com/kamon-io/sigar-loader>`_ and running multiple
instances of the same application on the same host, you have to make sure that the Sigar library is
extracted to a unique per-instance directory. You can control the extract directory with the
``akka.cluster.metrics.native-library-extract-folder`` configuration setting.
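As a sketch of such a setup (the folder value below is only an example), each instance can be started with its own extract directory::

  akka.cluster.metrics.native-library-extract-folder = ${user.dir}/native/instance-1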
To enable usage of Sigar you can add the following dependency to the user project
::
<dependency>
@ -110,7 +117,7 @@ It uses random selection of routees with probabilities derived from the remainin
It can be configured to use a specific MetricsSelector to produce the probabilities, a.k.a. weights (a small worked example follows this list):
* ``heap`` / ``HeapMetricsSelector`` - Used and max JVM heap memory. Weights based on remaining heap capacity; (max - used) / max
* ``load`` / ``SystemLoadAverageMetricsSelector`` - System load average for the past 1 minute, corresponding value can be found in ``top`` of Linux systems. The system is possibly nearing a bottleneck if the system load average is nearing number of cpus/cores. Weights based on remaining load capacity; 1 - (load / processors)
* ``cpu`` / ``CpuMetricsSelector`` - CPU utilization in percentage, sum of User + Sys + Nice + Wait. Weights based on remaining cpu capacity; 1 - utilization
* ``mix`` / ``MixMetricsSelector`` - Combines heap, cpu and load. Weights based on mean of remaining capacity of the combined selectors.
* Any custom implementation of ``akka.cluster.metrics.MetricsSelector``
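For example, with the ``heap`` selector and a 1024 MB max heap, a node using 256 MB gets weight (1024 - 256) / 1024 = 0.75 while a node using 768 MB gets weight 0.25, so the first node is three times as likely to receive the next message (the numbers are, of course, only illustrative).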
@ -132,7 +139,7 @@ As you can see, the router is defined in the same way as other routers, and in t
.. includecode:: ../../../akka-samples/akka-sample-cluster-java/src/main/resources/factorial.conf#adaptive-router
It is only the ``router`` type and the ``metrics-selector`` parameter that are specific to this router,
other things work in the same way as other routers.
The same type of router could also have been defined in code:
@ -158,11 +165,11 @@ Custom Metrics Collector
Metrics collection is delegated to the implementation of ``akka.cluster.metrics.MetricsCollector``
You can plug-in your own metrics collector instead of built-in
``akka.cluster.metrics.SigarMetricsCollector`` or ``akka.cluster.metrics.JmxMetricsCollector``.
Look at those two implementations for inspiration.
The custom metrics collector implementation class must be specified in the
``akka.cluster.metrics.collector.provider`` configuration property.
Configuration

View file

@ -147,7 +147,7 @@ status to ``down`` automatically after the configured time of unreachability.
This is a naïve approach to remove unreachable nodes from the cluster membership. It
works great for crashes and short transient network partitions, but not for long network
partitions. Both sides of the network partition will see the other side as unreachable
and after a while remove it from its cluster membership. Since this happens on both
sides the result is that two separate disconnected clusters have been created. This
can also happen because of long GC pauses or system overload.
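For reference, that automatic downing is governed by the ``akka.cluster.auto-down-unreachable-after`` setting (shown here only as a sketch; the warning below recommends against using it in production)::

  akka.cluster.auto-down-unreachable-after = 120s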
@ -155,14 +155,14 @@ can also happen because of long GC pauses or system overload.
.. warning::
We recommend against using the auto-down feature of Akka Cluster in production.
This is crucial for correct behavior if you use :ref:`cluster-singleton-java` or
:ref:`cluster_sharding_java`, especially together with Akka :ref:`persistence-java`.
A pre-packaged solution for the downing problem is provided by
`Split Brain Resolver <http://doc.akka.io/docs/akka/rp-16s01p03/java/split-brain-resolver.html>`_,
which is part of the Lightbend Reactive Platform. If you don't use RP, you should still carefully
read the `documentation <http://doc.akka.io/docs/akka/rp-16s01p03/java/split-brain-resolver.html>`_
of the Split Brain Resolver and make sure that the solution you are using handles the concerns
described there.
.. note:: If you have *auto-down* enabled and the failure detector triggers, you
@ -427,8 +427,8 @@ If system messages cannot be delivered to a node it will be quarantined and then
cannot come back from ``unreachable``. This can happen if there are too many
unacknowledged system messages (e.g. watch, Terminated, remote actor deployment,
failures of actors supervised by remote parent). Then the node needs to be moved
to the ``down`` or ``removed`` states and the actor system must be restarted before
it can join the cluster again.
to the ``down`` or ``removed`` states and the actor system of the quarantined node
must be restarted before it can join the cluster again.
The nodes in the cluster monitor each other by sending heartbeats to detect if a node is
unreachable from the rest of the cluster. The heartbeat arrival times are interpreted

View file

@ -4,31 +4,137 @@
package docs.http.javadsl;
import akka.Done;
import akka.actor.AbstractActor;
import akka.actor.ActorSystem;
import akka.http.javadsl.ConnectHttp;
import akka.http.javadsl.HostConnectionPool;
import akka.japi.Pair;
import akka.japi.pf.ReceiveBuilder;
import akka.stream.Materializer;
import akka.util.ByteString;
import scala.compat.java8.FutureConverters;
import scala.concurrent.ExecutionContextExecutor;
import scala.concurrent.Future;
import akka.stream.ActorMaterializer;
import akka.stream.javadsl.*;
import akka.http.javadsl.OutgoingConnection;
import akka.http.javadsl.model.*;
import akka.http.javadsl.Http;
import scala.util.Try;
import static akka.http.javadsl.ConnectHttp.toHost;
import static akka.pattern.PatternsCS.*;
import java.util.concurrent.CompletionStage;
//#manual-entity-consume-example-1
import java.io.File;
import akka.actor.ActorSystem;
import java.util.concurrent.TimeUnit;
import java.util.function.Function;
import akka.stream.ActorMaterializer;
import akka.stream.javadsl.Framing;
import akka.http.javadsl.model.*;
import scala.concurrent.duration.FiniteDuration;
import scala.util.Try;
//#manual-entity-consume-example-1
@SuppressWarnings("unused")
public class HttpClientExampleDocTest {
HttpResponse responseFromSomewhere() {
return null;
}
void manualEntityConsumeExample() {
//#manual-entity-consume-example-1
final ActorSystem system = ActorSystem.create();
final ExecutionContextExecutor dispatcher = system.dispatcher();
final ActorMaterializer materializer = ActorMaterializer.create(system);
final HttpResponse response = responseFromSomewhere();
final Function<ByteString, ByteString> transformEachLine = line -> line /* some transformation here */;
final int maximumFrameLength = 256;
response.entity().getDataBytes()
.via(Framing.delimiter(ByteString.fromString("\n"), maximumFrameLength, FramingTruncation.ALLOW))
.map(transformEachLine::apply)
.runWith(FileIO.toPath(new File("/tmp/example.out").toPath()), materializer);
//#manual-entity-consume-example-1
}
private
//#manual-entity-consume-example-2
final class ExamplePerson {
final String name;
public ExamplePerson(String name) { this.name = name; }
}
public ExamplePerson parse(ByteString line) {
return new ExamplePerson(line.utf8String());
}
//#manual-entity-consume-example-2
void manualEntityConsumeExample2() {
//#manual-entity-consume-example-2
final ActorSystem system = ActorSystem.create();
final ExecutionContextExecutor dispatcher = system.dispatcher();
final ActorMaterializer materializer = ActorMaterializer.create(system);
final HttpResponse response = responseFromSomewhere();
// toStrict to enforce that all data is loaded into memory from the connection
final CompletionStage<HttpEntity.Strict> strictEntity = response.entity()
.toStrict(FiniteDuration.create(3, TimeUnit.SECONDS).toMillis(), materializer);
// while the API to consume dataBytes remains the same, the data is now already in memory:
final CompletionStage<ExamplePerson> person =
strictEntity
.thenCompose(strict ->
strict.getDataBytes()
.runFold(ByteString.empty(), (acc, b) -> acc.concat(b), materializer)
.thenApply(this::parse)
);
//#manual-entity-consume-example-2
}
void manualEntityDiscardExample1() {
//#manual-entity-discard-example-1
final ActorSystem system = ActorSystem.create();
final ExecutionContextExecutor dispatcher = system.dispatcher();
final ActorMaterializer materializer = ActorMaterializer.create(system);
final HttpResponse response = responseFromSomewhere();
final HttpMessage.DiscardedEntity discarded = response.discardEntityBytes(materializer);
discarded.completionStage().whenComplete((done, ex) -> {
System.out.println("Entity discarded completely!");
});
//#manual-entity-discard-example-1
}
void manualEntityDiscardExample2() {
//#manual-entity-discard-example-2
final ActorSystem system = ActorSystem.create();
final ExecutionContextExecutor dispatcher = system.dispatcher();
final ActorMaterializer materializer = ActorMaterializer.create(system);
final HttpResponse response = responseFromSomewhere();
final CompletionStage<Done> discardingComplete = response.entity().getDataBytes().runWith(Sink.ignore(), materializer);
discardingComplete.whenComplete((done, ex) -> {
System.out.println("Entity discarded completely!");
});
//#manual-entity-discard-example-2
}
// compile only test
public void testConstructRequest() {
//#outgoing-connection-example

View file

@ -14,7 +14,6 @@ import akka.http.javadsl.model.ws.TextMessage;
import akka.http.javadsl.model.ws.WebSocketRequest;
import akka.http.javadsl.model.ws.WebSocketUpgradeResponse;
import akka.japi.Pair;
import akka.japi.function.Procedure;
import akka.stream.ActorMaterializer;
import akka.stream.Materializer;
import akka.stream.javadsl.Flow;
@ -63,9 +62,9 @@ public class WebSocketClientExampleTest {
// The first value in the pair is a CompletionStage<WebSocketUpgradeResponse> that
// completes when the WebSocket request has connected successfully (or failed)
final CompletionStage<Done> connected = pair.first().thenApply(upgrade -> {
// just like a regular http request we can get 404 NotFound,
// with a response body, that will be available from upgrade.response
if (upgrade.response().status().equals(StatusCodes.OK)) {
// just like a regular http request we can access response status which is available via upgrade.response.status
// status code 101 (Switching Protocols) indicates that the server supports WebSockets
if (upgrade.response().status().equals(StatusCodes.SWITCHING_PROTOCOLS)) {
return Done.getInstance();
} else {
throw new RuntimeException("Connection failed: " + upgrade.response().status());
@ -220,9 +219,9 @@ public class WebSocketClientExampleTest {
CompletionStage<Done> connected = upgradeCompletion.thenApply(upgrade->
{
// just like a regular http request we can get 404 NotFound,
// with a response body, that will be available from upgrade.response
if (upgrade.response().status().equals(StatusCodes.OK)) {
// just like a regular http request we can access response status which is available via upgrade.response.status
// status code 101 (Switching Protocols) indicates that the server supports WebSockets
if (upgrade.response().status().equals(StatusCodes.SWITCHING_PROTOCOLS)) {
return Done.getInstance();
} else {
throw new RuntimeException(("Connection failed: " + upgrade.response().status()));

View file

@ -4,26 +4,37 @@
package docs.http.javadsl.server;
import akka.Done;
import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.http.javadsl.ConnectHttp;
import akka.http.javadsl.Http;
import akka.http.javadsl.IncomingConnection;
import akka.http.javadsl.ServerBinding;
import akka.http.javadsl.marshallers.jackson.Jackson;
import akka.http.javadsl.model.*;
import akka.http.javadsl.model.headers.Connection;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.server.Unmarshaller;
import akka.japi.function.Function;
import akka.stream.ActorMaterializer;
import akka.stream.IOResult;
import akka.stream.Materializer;
import akka.stream.javadsl.FileIO;
import akka.stream.javadsl.Flow;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import akka.util.ByteString;
import scala.concurrent.ExecutionContextExecutor;
import java.io.BufferedReader;
import java.io.File;
import java.io.InputStreamReader;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.TimeUnit;
import static akka.http.javadsl.server.Directives.*;
@SuppressWarnings("unused")
public class HttpServerExampleDocTest {
@ -205,4 +216,113 @@ public class HttpServerExampleDocTest {
public static void main(String[] args) throws Exception {
fullServerExample();
}
//#consume-entity-directive
class Bid {
final String userId;
final int bid;
Bid(String userId, int bid) {
this.userId = userId;
this.bid = bid;
}
}
//#consume-entity-directive
void consumeEntityUsingEntityDirective() {
//#consume-entity-directive
final ActorSystem system = ActorSystem.create();
final ExecutionContextExecutor dispatcher = system.dispatcher();
final ActorMaterializer materializer = ActorMaterializer.create(system);
final Unmarshaller<HttpEntity, Bid> asBid = Jackson.unmarshaller(Bid.class);
final Route s = path("bid", () ->
put(() ->
entity(asBid, bid ->
// incoming entity is fully consumed and converted into a Bid
complete("The bid was: " + bid)
)
)
);
//#consume-entity-directive
}
void consumeEntityUsingRawDataBytes() {
//#consume-raw-dataBytes
final ActorSystem system = ActorSystem.create();
final ExecutionContextExecutor dispatcher = system.dispatcher();
final ActorMaterializer materializer = ActorMaterializer.create(system);
final Route s =
put(() ->
path("lines", () ->
withoutSizeLimit(() ->
extractDataBytes(bytes -> {
final CompletionStage<IOResult> res = bytes.runWith(FileIO.toPath(new File("/tmp/example.out").toPath()), materializer);
return onComplete(() -> res, ioResult ->
// we only want to respond once the incoming data has been handled:
complete("Finished writing data :" + ioResult));
})
)
)
);
//#consume-raw-dataBytes
}
void discardEntityUsingRawBytes() {
//#discard-discardEntityBytes
final ActorSystem system = ActorSystem.create();
final ExecutionContextExecutor dispatcher = system.dispatcher();
final ActorMaterializer materializer = ActorMaterializer.create(system);
final Route s =
put(() ->
path("lines", () ->
withoutSizeLimit(() ->
extractRequest(r -> {
final CompletionStage<Done> res = r.discardEntityBytes(materializer).completionStage();
return onComplete(() -> res, done ->
// we only want to respond once the incoming data has been handled:
complete("Finished writing data :" + done));
})
)
)
);
//#discard-discardEntityBytes
}
void discardEntityManuallyCloseConnections() {
//#discard-close-connections
final ActorSystem system = ActorSystem.create();
final ExecutionContextExecutor dispatcher = system.dispatcher();
final ActorMaterializer materializer = ActorMaterializer.create(system);
final Route s =
put(() ->
path("lines", () ->
withoutSizeLimit(() ->
extractDataBytes(bytes -> {
// Closing connections, method 1 (eager):
// we deem this request as illegal, and close the connection right away:
bytes.runWith(Sink.cancelled(), materializer); // "brutally" closes the connection
// Closing connections, method 2 (graceful):
// consider draining connection and replying with `Connection: Close` header
// if you want the client to close after this request/reply cycle instead:
return respondWithHeader(Connection.create("close"), () ->
complete(StatusCodes.FORBIDDEN, "Not allowed!")
);
})
)
)
);
//#discard-close-connections
}
}

View file

@ -0,0 +1,788 @@
/*
* Copyright (C) 2016-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package docs.http.javadsl.server.directives;
import akka.actor.ActorSystem;
import akka.dispatch.ExecutionContexts;
import akka.event.Logging;
import akka.event.LoggingAdapter;
import akka.http.javadsl.model.ContentTypes;
import akka.http.javadsl.model.HttpEntities;
import akka.http.javadsl.model.HttpEntity;
import akka.http.javadsl.model.HttpMethods;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.HttpResponse;
import akka.http.javadsl.model.ResponseEntity;
import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.model.headers.RawHeader;
import akka.http.javadsl.model.headers.Server;
import akka.http.javadsl.model.headers.ProductVersion;
import akka.http.javadsl.settings.RoutingSettings;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.http.javadsl.server.*;
import akka.japi.pf.PFBuilder;
import akka.stream.ActorMaterializer;
import akka.stream.ActorMaterializerSettings;
import akka.stream.javadsl.FileIO;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import akka.util.ByteString;
import org.junit.Ignore;
import org.junit.Test;
import scala.concurrent.ExecutionContextExecutor;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.Collections;
import java.util.Optional;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.Executors;
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.Supplier;
import java.util.stream.StreamSupport;
public class BasicDirectivesExamplesTest extends JUnitRouteTest {
@Test
public void testExtract() {
//#extract
final Route route = extract(
ctx -> ctx.getRequest().getUri().toString().length(),
len -> complete("The length of the request URI is " + len)
);
// tests:
testRoute(route).run(HttpRequest.GET("/abcdef"))
.assertEntity("The length of the request URI is 25");
//#extract
}
@Test
public void testExtractLog() {
//#extractLog
final Route route = extractLog(log -> {
log.debug("I'm logging things in much detail..!");
return complete("It's amazing!");
});
// tests:
testRoute(route).run(HttpRequest.GET("/abcdef"))
.assertEntity("It's amazing!");
//#extractLog
}
@Test
public void testWithMaterializer() {
//#withMaterializer
final ActorMaterializerSettings settings = ActorMaterializerSettings.create(system());
final ActorMaterializer special = ActorMaterializer.create(settings, system(), "special");
final Route sample = path("sample", () ->
extractMaterializer(mat ->
onSuccess(() ->
// explicitly use the materializer:
Source.single("Materialized by " + mat.hashCode() + "!")
.runWith(Sink.head(), mat), this::complete
)
)
);
final Route route = route(
pathPrefix("special", () ->
withMaterializer(special, () -> sample) // `special` materializer will be used
),
sample // default materializer will be used
);
// tests:
testRoute(route).run(HttpRequest.GET("/sample"))
.assertEntity("Materialized by " + materializer().hashCode()+ "!");
testRoute(route).run(HttpRequest.GET("/special/sample"))
.assertEntity("Materialized by " + special.hashCode()+ "!");
//#withMaterializer
}
@Test
public void testExtractMaterializer() {
//#extractMaterializer
final Route route = path("sample", () ->
extractMaterializer(mat ->
onSuccess(() ->
// explicitly use the materializer:
Source.single("Materialized by " + mat.hashCode() + "!")
.runWith(Sink.head(), mat), this::complete
)
)
); // default materializer will be used
testRoute(route).run(HttpRequest.GET("/sample"))
.assertEntity("Materialized by " + materializer().hashCode()+ "!");
//#extractMaterializer
}
@Test
public void testWithExecutionContext() {
//#withExecutionContext
final ExecutionContextExecutor special =
ExecutionContexts.fromExecutor(Executors.newFixedThreadPool(1));
final Route sample = path("sample", () ->
extractExecutionContext(executor ->
onSuccess(() ->
CompletableFuture.supplyAsync(() ->
"Run on " + executor.hashCode() + "!", executor
), this::complete
)
)
);
final Route route = route(
pathPrefix("special", () ->
// `special` execution context will be used
withExecutionContext(special, () -> sample)
),
sample // default execution context will be used
);
// tests:
testRoute(route).run(HttpRequest.GET("/sample"))
.assertEntity("Run on " + system().dispatcher().hashCode() + "!");
testRoute(route).run(HttpRequest.GET("/special/sample"))
.assertEntity("Run on " + special.hashCode() + "!");
//#withExecutionContext
}
@Test
public void testExtractExecutionContext() {
//#extractExecutionContext
final Route route = path("sample", () ->
extractExecutionContext(executor ->
onSuccess(() ->
CompletableFuture.supplyAsync(
// uses the `executor` ExecutionContext
() -> "Run on " + executor.hashCode() + "!", executor
), str -> complete(str)
)
)
);
//tests:
testRoute(route).run(HttpRequest.GET("/sample"))
.assertEntity("Run on " + system().dispatcher().hashCode() + "!");
//#extractExecutionContext
}
@Test
public void testWithLog() {
//#withLog
final LoggingAdapter special = Logging.getLogger(system(), "SpecialRoutes");
final Route sample = path("sample", () ->
extractLog(log -> {
final String msg = "Logging using " + log + "!";
log.debug(msg);
return complete(msg);
}
)
);
final Route route = route(
pathPrefix("special", () ->
withLog(special, () -> sample)
),
sample
);
// tests:
testRoute(route).run(HttpRequest.GET("/sample"))
.assertEntity("Logging using " + system().log() + "!");
testRoute(route).run(HttpRequest.GET("/special/sample"))
.assertEntity("Logging using " + special + "!");
//#withLog
}
@Ignore("Ignore compile-only test")
@Test
public void testWithSettings() {
//#withSettings
final RoutingSettings special =
RoutingSettings
.create(system().settings().config())
.withFileIODispatcher("special-io-dispatcher");
final Route sample = path("sample", () -> {
// internally uses the configured fileIODispatcher:
// ContentTypes.APPLICATION_JSON, source
final Source<ByteString, Object> source =
FileIO.fromPath(Paths.get("example.json"))
.mapMaterializedValue(completionStage -> (Object) completionStage);
return complete(
HttpResponse.create()
.withEntity(HttpEntities.create(ContentTypes.APPLICATION_JSON, source))
);
});
final Route route = get(() ->
route(
pathPrefix("special", () ->
// `special` file-io-dispatcher will be used to read the file
withSettings(special, () -> sample)
),
sample // default file-io-dispatcher will be used to read the file
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/special/sample"))
.assertEntity("{}");
testRoute(route).run(HttpRequest.GET("/sample"))
.assertEntity("{}");
//#withSettings
}
@Test
public void testMapResponse() {
//#mapResponse
final Route route = mapResponse(
response -> response.withStatus(StatusCodes.BAD_GATEWAY),
() -> complete("abc")
);
// tests:
testRoute(route).run(HttpRequest.GET("/abcdef?ghi=12"))
.assertStatusCode(StatusCodes.BAD_GATEWAY);
//#mapResponse
}
@Test
public void testMapResponseAdvanced() {
//#mapResponse-advanced
class ApiRoute {
private final ActorSystem system;
private final LoggingAdapter log;
private final HttpEntity nullJsonEntity =
HttpEntities.create(ContentTypes.APPLICATION_JSON, "{}");
public ApiRoute(ActorSystem system) {
this.system = system;
this.log = Logging.getLogger(system, "ApiRoutes");
}
private HttpResponse nonSuccessToEmptyJsonEntity(HttpResponse response) {
if (response.status().isSuccess()) {
return response;
} else {
log.warning(
"Dropping response entity since response status code was: " + response.status());
return response.withEntity((ResponseEntity) nullJsonEntity);
}
}
/** Wrapper for all of our JSON API routes */
private Route apiRoute(Supplier<Route> innerRoutes) {
return mapResponse(this::nonSuccessToEmptyJsonEntity, innerRoutes);
}
}
final ApiRoute api = new ApiRoute(system());
final Route route = api.apiRoute(() ->
get(() -> complete(StatusCodes.INTERNAL_SERVER_ERROR))
);
// tests:
testRoute(route).run(HttpRequest.GET("/"))
.assertEntity("{}");
//#mapResponse-advanced
}
@Test
public void testMapRouteResult() {
//#mapRouteResult
// this directive is a joke, don't do that :-)
final Route route = mapRouteResult(r -> {
if (r instanceof Complete) {
final HttpResponse response = ((Complete) r).getResponse();
return RouteResults.complete(response.withStatus(200));
} else {
return r;
}
}, () -> complete(StatusCodes.ACCEPTED));
// tests:
testRoute(route).run(HttpRequest.GET("/"))
.assertStatusCode(StatusCodes.OK);
//#mapRouteResult
}
@Test
public void testMapRouteResultFuture() {
//#mapRouteResultFuture
final Route route = mapRouteResultFuture(cr ->
cr.exceptionally(t -> {
if (t instanceof IllegalArgumentException) {
return RouteResults.complete(
HttpResponse.create().withStatus(StatusCodes.INTERNAL_SERVER_ERROR));
} else {
return null;
}
}).thenApply(rr -> {
if (rr instanceof Complete) {
final HttpResponse res = ((Complete) rr).getResponse();
return RouteResults.complete(
res.addHeader(Server.create(ProductVersion.create("MyServer", "1.0"))));
} else {
return rr;
}
}), () -> complete("Hello world!"));
// tests:
testRoute(route).run(HttpRequest.GET("/"))
.assertStatusCode(StatusCodes.OK)
.assertHeaderExists(Server.create(ProductVersion.create("MyServer", "1.0")));
//#mapRouteResultFuture
}
@Test
public void testMapResponseEntity() {
//#mapResponseEntity
final Function<ResponseEntity, ResponseEntity> prefixEntity = entity -> {
if (entity instanceof HttpEntity.Strict) {
final HttpEntity.Strict strict = (HttpEntity.Strict) entity;
return HttpEntities.create(
strict.getContentType(),
ByteString.fromString("test").concat(strict.getData()));
} else {
throw new IllegalStateException("Unexpected entity type");
}
};
final Route route = mapResponseEntity(prefixEntity, () -> complete("abc"));
testRoute(route).run(HttpRequest.GET("/"))
.assertEntity("testabc");
//#mapResponseEntity
}
@Test
public void testMapResponseHeaders() {
//#mapResponseHeaders
// adds all request headers to the response
final Route echoRequestHeaders = extract(
ctx -> ctx.getRequest().getHeaders(),
headers -> respondWithHeaders(headers, () -> complete("test"))
);
final Route route = mapResponseHeaders(headers -> {
headers.removeIf(header -> header.lowercaseName().equals("id"));
return headers;
}, () -> echoRequestHeaders);
// tests:
testRoute(route).run(HttpRequest.GET("/").addHeaders(
Arrays.asList(RawHeader.create("id", "12345"),RawHeader.create("id2", "67890"))))
.assertHeaderKindNotExists("id")
.assertHeaderExists("id2", "67890");
//#mapResponseHeaders
}
@Ignore("Not implemented yet")
@Test
public void testMapInnerRoute() {
//#mapInnerRoute
// TODO: implement mapInnerRoute
//#mapInnerRoute
}
@Test
public void testMapRejections() {
//#mapRejections
// ignore any rejections and replace them by AuthorizationFailedRejection
final Route route = mapRejections(
rejections -> Collections.singletonList((Rejection) Rejections.authorizationFailed()),
() -> path("abc", () -> complete("abc"))
);
// tests:
runRouteUnSealed(route, HttpRequest.GET("/"))
.assertRejections(Rejections.authorizationFailed());
testRoute(route).run(HttpRequest.GET("/abc"))
.assertStatusCode(StatusCodes.OK);
//#mapRejections
}
@Test
public void testRecoverRejections() {
//#recoverRejections
final Function<Optional<ProvidedCredentials>, Optional<Object>> neverAuth =
creds -> Optional.empty();
final Function<Optional<ProvidedCredentials>, Optional<Object>> alwaysAuth =
creds -> Optional.of("id");
final Route originalRoute = pathPrefix("auth", () ->
route(
path("never", () ->
authenticateBasic("my-realm", neverAuth, obj -> complete("Welcome to the bat-cave!"))
),
path("always", () ->
authenticateBasic("my-realm", alwaysAuth, obj -> complete("Welcome to the secret place!"))
)
)
);
final Function<Iterable<Rejection>, Boolean> existsAuthenticationFailedRejection =
rejections ->
StreamSupport.stream(rejections.spliterator(), false)
.anyMatch(r -> r instanceof AuthenticationFailedRejection);
final Route route = recoverRejections(rejections -> {
if (existsAuthenticationFailedRejection.apply(rejections)) {
return RouteResults.complete(
HttpResponse.create().withEntity("Nothing to see here, move along."));
} else if (!rejections.iterator().hasNext()) { // see "Empty Rejections" for more details
return RouteResults.complete(
HttpResponse.create().withStatus(StatusCodes.NOT_FOUND)
.withEntity("Literally nothing to see here."));
} else {
return RouteResults.rejected(rejections);
}
}, () -> originalRoute);
// tests:
testRoute(route).run(HttpRequest.GET("/auth/never"))
.assertStatusCode(StatusCodes.OK)
.assertEntity("Nothing to see here, move along.");
testRoute(route).run(HttpRequest.GET("/auth/always"))
.assertStatusCode(StatusCodes.OK)
.assertEntity("Welcome to the secret place!");
testRoute(route).run(HttpRequest.GET("/auth/does_not_exist"))
.assertStatusCode(StatusCodes.NOT_FOUND)
.assertEntity("Literally nothing to see here.");
//#recoverRejections
}
@Test
public void testRecoverRejectionsWith() {
//#recoverRejectionsWith
final Function<Optional<ProvidedCredentials>, Optional<Object>> neverAuth =
creds -> Optional.empty();
final Route originalRoute = pathPrefix("auth", () ->
path("never", () ->
authenticateBasic("my-realm", neverAuth, obj -> complete("Welcome to the bat-cave!"))
)
);
final Function<Iterable<Rejection>, Boolean> existsAuthenticationFailedRejection =
rejections ->
StreamSupport.stream(rejections.spliterator(), false)
.anyMatch(r -> r instanceof AuthenticationFailedRejection);
final Route route = recoverRejectionsWith(
rejections -> CompletableFuture.supplyAsync(() -> {
if (existsAuthenticationFailedRejection.apply(rejections)) {
return RouteResults.complete(
HttpResponse.create().withEntity("Nothing to see here, move along."));
} else {
return RouteResults.rejected(rejections);
}
}), () -> originalRoute);
// tests:
testRoute(route).run(HttpRequest.GET("/auth/never"))
.assertStatusCode(StatusCodes.OK)
.assertEntity("Nothing to see here, move along.");
//#recoverRejectionsWith
}
@Test
public void testMapRequest() {
//#mapRequest
final Route route = mapRequest(req ->
req.withMethod(HttpMethods.POST), () ->
extractRequest(req -> complete("The request method was " + req.method().name()))
);
// tests:
testRoute(route).run(HttpRequest.GET("/"))
.assertEntity("The request method was POST");
//#mapRequest
}
@Test
public void testMapRequestContext() {
//#mapRequestContext
final Route route = mapRequestContext(ctx ->
ctx.withRequest(HttpRequest.create().withMethod(HttpMethods.POST)), () ->
extractRequest(req -> complete(req.method().value()))
);
// tests:
testRoute(route).run(HttpRequest.GET("/abc/def/ghi"))
.assertEntity("POST");
//#mapRequestContext
}
@Test
public void testMapRouteResult0() {
//#mapRouteResult
final Route route = mapRouteResult(rr -> {
final Iterable<Rejection> rejections = Collections.singletonList(Rejections.authorizationFailed());
return RouteResults.rejected(rejections);
}, () -> complete("abc"));
// tests:
runRouteUnSealed(route, HttpRequest.GET("/"))
.assertRejections(Rejections.authorizationFailed());
//#mapRouteResult
}
public static final class MyCustomRejection implements CustomRejection {}
@Test
public void testMapRouteResultPF() {
//#mapRouteResultPF
final Route route = mapRouteResultPF(
new PFBuilder<RouteResult, RouteResult>()
.match(Rejected.class, rejected -> {
final Iterable<Rejection> rejections =
Collections.singletonList(Rejections.authorizationFailed());
return RouteResults.rejected(rejections);
}).build(), () -> reject(new MyCustomRejection()));
// tests:
runRouteUnSealed(route, HttpRequest.GET("/"))
.assertRejections(Rejections.authorizationFailed());
//#mapRouteResultPF
}
@Test
public void testMapRouteResultWithPF() {
//#mapRouteResultWithPF
final Route route = mapRouteResultWithPF(
new PFBuilder<RouteResult, CompletionStage<RouteResult>>()
.match(Rejected.class, rejected -> CompletableFuture.supplyAsync(() -> {
final Iterable<Rejection> rejections =
Collections.singletonList(Rejections.authorizationFailed());
return RouteResults.rejected(rejections);
})
).build(), () -> reject(new MyCustomRejection()));
// tests:
runRouteUnSealed(route, HttpRequest.GET("/"))
.assertRejections(Rejections.authorizationFailed());
//#mapRouteResultWithPF
}
@Test
public void testMapRouteResultWith() {
//#mapRouteResultWith
final Route route = mapRouteResultWith(rr -> CompletableFuture.supplyAsync(() -> {
if (rr instanceof Rejected) {
final Iterable<Rejection> rejections =
Collections.singletonList(Rejections.authorizationFailed());
return RouteResults.rejected(rejections);
} else {
return rr;
}
}), () -> reject(new MyCustomRejection()));
// tests:
runRouteUnSealed(route, HttpRequest.GET("/"))
.assertRejections(Rejections.authorizationFailed());
//#mapRouteResultWith
}
@Test
public void testPass() {
//#pass
final Route route = pass(() -> complete("abc"));
// tests:
testRoute(route).run(HttpRequest.GET("/"))
.assertEntity("abc");
//#pass
}
private Route providePrefixedStringRoute(String value) {
return provide("prefix:" + value, this::complete);
}
@Test
public void testProvide() {
//#provide
final Route route = providePrefixedStringRoute("test");
// tests:
testRoute(route).run(HttpRequest.GET("/"))
.assertEntity("prefix:test");
//#provide
}
@Ignore("Test failed")
@Test
public void testCancelRejections() {
//#cancelRejections
final Predicate<Rejection> isMethodRejection = p -> p instanceof MethodRejection;
final Route route = cancelRejections(
isMethodRejection, () -> post(() -> complete("Result"))
);
// tests:
runRouteUnSealed(route, HttpRequest.GET("/"))
.assertRejections();
//#cancelRejections
}
@Ignore("Test failed")
@Test
public void testCancelRejection() {
//#cancelRejection
final Route route = cancelRejection(Rejections.method(HttpMethods.POST), () ->
post(() -> complete("Result"))
);
// tests:
runRouteUnSealed(route, HttpRequest.GET("/"))
.assertRejections();
//#cancelRejection
}
@Test
public void testExtractRequest() {
//#extractRequest
final Route route = extractRequest(request ->
complete("Request method is " + request.method().name() +
" and content-type is " + request.entity().getContentType())
);
// tests:
testRoute(route).run(HttpRequest.POST("/").withEntity("text"))
.assertEntity("Request method is POST and content-type is text/plain; charset=UTF-8");
testRoute(route).run(HttpRequest.GET("/"))
.assertEntity("Request method is GET and content-type is none/none");
//#extractRequest
}
@Test
public void testExtractSettings() {
//#extractSettings
final Route route = extractSettings(settings ->
complete("RoutingSettings.renderVanityFooter = " + settings.getRenderVanityFooter())
);
// tests:
testRoute(route).run(HttpRequest.GET("/"))
.assertEntity("RoutingSettings.renderVanityFooter = true");
//#extractSettings
}
@Test
public void testMapSettings() {
//#mapSettings
final Route route = mapSettings(settings ->
settings.withFileGetConditional(false), () ->
extractSettings(settings ->
complete("RoutingSettings.fileGetConditional = " + settings.getFileGetConditional())
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/"))
.assertEntity("RoutingSettings.fileGetConditional = false");
//#mapSettings
}
@Test
public void testExtractRequestContext() {
//#extractRequestContext
final Route route = extractRequestContext(ctx -> {
ctx.getLog().debug("Using access to additional context availablethings, like the logger.");
final HttpRequest request = ctx.getRequest();
return complete("Request method is " + request.method().name() +
" and content-type is " + request.entity().getContentType());
});
// tests:
testRoute(route).run(HttpRequest.POST("/").withEntity("text"))
.assertEntity("Request method is POST and content-type is text/plain; charset=UTF-8");
testRoute(route).run(HttpRequest.GET("/"))
.assertEntity("Request method is GET and content-type is none/none");
//#extractRequestContext
}
@Test
public void testExtractUri() {
//#extractUri
final Route route = extractUri(uri ->
complete("Full URI: " + uri)
);
// tests:
// tests are executed with the host assumed to be "example.com"
testRoute(route).run(HttpRequest.GET("/"))
.assertEntity("Full URI: http://example.com/");
testRoute(route).run(HttpRequest.GET("/test"))
.assertEntity("Full URI: http://example.com/test");
//#extractUri
}
@Test
public void testMapUnmatchedPath() {
//#mapUnmatchedPath
final Function<String, String> ignore456 = path -> {
int slashPos = path.indexOf("/");
if (slashPos != -1) {
String head = path.substring(0, slashPos);
String tail = path.substring(slashPos);
if (head.length() <= 3) {
return tail;
} else {
return path.substring(3);
}
} else {
return path;
}
};
final Route route = pathPrefix("123", () ->
mapUnmatchedPath(ignore456, () ->
path("abc", () ->
complete("Content")
)
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/123/abc"))
.assertEntity("Content");
testRoute(route).run(HttpRequest.GET("/123456/abc"))
.assertEntity("Content");
//#mapUnmatchedPath
}
@Test
public void testExtractUnmatchedPath() {
//#extractUnmatchedPath
final Route route = pathPrefix("abc", () ->
extractUnmatchedPath(remaining ->
complete("Unmatched: '" + remaining + "'")
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/abc"))
.assertEntity("Unmatched: ''");
testRoute(route).run(HttpRequest.GET("/abc/456"))
.assertEntity("Unmatched: '/456'");
//#extractUnmatchedPath
}
}

View file

@ -0,0 +1,156 @@
/*
* Copyright (C) 2016-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package docs.http.javadsl.server.directives;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.headers.AcceptEncoding;
import akka.http.javadsl.model.headers.ContentEncoding;
import akka.http.javadsl.model.headers.HttpEncodings;
import akka.http.javadsl.server.Coder;
import akka.http.javadsl.server.Rejections;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.util.ByteString;
import org.junit.Test;
import java.util.Collections;
import static akka.http.javadsl.server.Unmarshaller.entityToString;
public class CodingDirectivesExamplesTest extends JUnitRouteTest {
@Test
public void testResponseEncodingAccepted() {
//#responseEncodingAccepted
final Route route = responseEncodingAccepted(HttpEncodings.GZIP, () ->
complete("content")
);
// tests:
testRoute(route).run(HttpRequest.GET("/"))
.assertEntity("content");
runRouteUnSealed(route,
HttpRequest.GET("/")
.addHeader(AcceptEncoding.create(HttpEncodings.DEFLATE)))
.assertRejections(Rejections.unacceptedResponseEncoding(HttpEncodings.GZIP));
//#responseEncodingAccepted
}
@Test
public void testEncodeResponse() {
//#encodeResponse
final Route route = encodeResponse(() -> complete("content"));
// tests:
testRoute(route).run(
HttpRequest.GET("/")
.addHeader(AcceptEncoding.create(HttpEncodings.GZIP))
.addHeader(AcceptEncoding.create(HttpEncodings.DEFLATE))
).assertHeaderExists(ContentEncoding.create(HttpEncodings.GZIP));
testRoute(route).run(
HttpRequest.GET("/")
.addHeader(AcceptEncoding.create(HttpEncodings.DEFLATE))
).assertHeaderExists(ContentEncoding.create(HttpEncodings.DEFLATE));
// This case failed!
// testRoute(route).run(
// HttpRequest.GET("/")
// .addHeader(AcceptEncoding.create(HttpEncodings.IDENTITY))
// ).assertHeaderExists(ContentEncoding.create(HttpEncodings.IDENTITY));
//#encodeResponse
}
@Test
public void testEncodeResponseWith() {
//#encodeResponseWith
final Route route = encodeResponseWith(
Collections.singletonList(Coder.Gzip),
() -> complete("content")
);
// tests:
testRoute(route).run(HttpRequest.GET("/"))
.assertHeaderExists(ContentEncoding.create(HttpEncodings.GZIP));
testRoute(route).run(
HttpRequest.GET("/")
.addHeader(AcceptEncoding.create(HttpEncodings.GZIP))
.addHeader(AcceptEncoding.create(HttpEncodings.DEFLATE))
).assertHeaderExists(ContentEncoding.create(HttpEncodings.GZIP));
runRouteUnSealed(route,
HttpRequest.GET("/")
.addHeader(AcceptEncoding.create(HttpEncodings.DEFLATE))
).assertRejections(Rejections.unacceptedResponseEncoding(HttpEncodings.GZIP));
runRouteUnSealed(route,
HttpRequest.GET("/")
.addHeader(AcceptEncoding.create(HttpEncodings.IDENTITY))
).assertRejections(Rejections.unacceptedResponseEncoding(HttpEncodings.GZIP));
//#encodeResponseWith
}
@Test
public void testDecodeRequest() {
//#decodeRequest
final ByteString helloGzipped = Coder.Gzip.encode(ByteString.fromString("Hello"));
final ByteString helloDeflated = Coder.Deflate.encode(ByteString.fromString("Hello"));
final Route route = decodeRequest(() ->
entity(entityToString(), content ->
complete("Request content: '" + content + "'")
)
);
// tests:
testRoute(route).run(
HttpRequest.POST("/").withEntity(helloGzipped)
.addHeader(ContentEncoding.create(HttpEncodings.GZIP)))
.assertEntity("Request content: 'Hello'");
testRoute(route).run(
HttpRequest.POST("/").withEntity(helloDeflated)
.addHeader(ContentEncoding.create(HttpEncodings.DEFLATE)))
.assertEntity("Request content: 'Hello'");
testRoute(route).run(
HttpRequest.POST("/").withEntity("hello uncompressed")
.addHeader(ContentEncoding.create(HttpEncodings.IDENTITY)))
.assertEntity( "Request content: 'hello uncompressed'");
//#decodeRequest
}
@Test
public void testDecodeRequestWith() {
//#decodeRequestWith
final ByteString helloGzipped = Coder.Gzip.encode(ByteString.fromString("Hello"));
final ByteString helloDeflated = Coder.Deflate.encode(ByteString.fromString("Hello"));
final Route route = decodeRequestWith(Coder.Gzip, () ->
entity(entityToString(), content ->
complete("Request content: '" + content + "'")
)
);
// tests:
testRoute(route).run(
HttpRequest.POST("/").withEntity(helloGzipped)
.addHeader(ContentEncoding.create(HttpEncodings.GZIP)))
.assertEntity("Request content: 'Hello'");
runRouteUnSealed(route,
HttpRequest.POST("/").withEntity(helloDeflated)
.addHeader(ContentEncoding.create(HttpEncodings.DEFLATE)))
.assertRejections(Rejections.unsupportedRequestEncoding(HttpEncodings.GZIP));
runRouteUnSealed(route,
HttpRequest.POST("/").withEntity("hello")
.addHeader(ContentEncoding.create(HttpEncodings.IDENTITY)))
.assertRejections(Rejections.unsupportedRequestEncoding(HttpEncodings.GZIP));
//#decodeRequestWith
}
}

View file

@ -0,0 +1,75 @@
/*
* Copyright (C) 2016-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package docs.http.javadsl.server.directives;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.server.ExceptionHandler;
import akka.http.javadsl.server.PathMatchers;
import akka.http.javadsl.server.RejectionHandler;
import akka.http.javadsl.server.Rejections;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.server.ValidationRejection;
import akka.http.javadsl.testkit.JUnitRouteTest;
import org.junit.Test;
import static akka.http.javadsl.server.PathMatchers.integerSegment;
public class ExecutionDirectivesExamplesTest extends JUnitRouteTest {
@Test
public void testHandleExceptions() {
//#handleExceptions
final ExceptionHandler divByZeroHandler = ExceptionHandler.newBuilder()
.match(ArithmeticException.class, x ->
complete(StatusCodes.BAD_REQUEST, "You've got your arithmetic wrong, fool!"))
.build();
final Route route =
path(PathMatchers.segment("divide").slash(integerSegment()).slash(integerSegment()), (a, b) ->
handleExceptions(divByZeroHandler, () -> complete("The result is " + (a / b)))
);
// tests:
testRoute(route).run(HttpRequest.GET("/divide/10/5"))
.assertEntity("The result is 2");
testRoute(route).run(HttpRequest.GET("/divide/10/0"))
.assertStatusCode(StatusCodes.BAD_REQUEST)
.assertEntity("You've got your arithmetic wrong, fool!");
//#handleExceptions
}
@Test
public void testHandleRejections() {
//#handleRejections
final RejectionHandler totallyMissingHandler = RejectionHandler.newBuilder()
.handleNotFound(complete(StatusCodes.NOT_FOUND, "Oh man, what you are looking for is long gone."))
.handle(ValidationRejection.class, r -> complete(StatusCodes.INTERNAL_SERVER_ERROR, r.message()))
.build();
final Route route = pathPrefix("handled", () ->
handleRejections(totallyMissingHandler, () ->
route(
path("existing", () -> complete("This path exists")),
path("boom", () -> reject(Rejections.validationRejection("This didn't work.")))
)
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/handled/existing"))
.assertEntity("This path exists");
// applies default handler
testRoute(route).run(HttpRequest.GET("/missing"))
.assertStatusCode(StatusCodes.NOT_FOUND)
.assertEntity("The requested resource could not be found.");
testRoute(route).run(HttpRequest.GET("/handled/missing"))
.assertStatusCode(StatusCodes.NOT_FOUND)
.assertEntity("Oh man, what you are looking for is long gone.");
testRoute(route).run(HttpRequest.GET("/handled/boom"))
.assertStatusCode(StatusCodes.INTERNAL_SERVER_ERROR)
.assertEntity("This didn't work.");
//#handleRejections
}
}

View file

@ -0,0 +1,124 @@
/*
* Copyright (C) 2016-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package docs.http.javadsl.server.directives;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.server.PathMatchers;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.server.directives.DirectoryRenderer;
import akka.http.javadsl.testkit.JUnitRouteTest;
import org.junit.Ignore;
import org.junit.Test;
import scala.NotImplementedError;
import static akka.http.javadsl.server.PathMatchers.segment;
public class FileAndResourceDirectivesExamplesTest extends JUnitRouteTest {
@Ignore("Compile only test")
@Test
public void testGetFromFile() {
//#getFromFile
final Route route = path(PathMatchers.segment("logs").slash(segment()), name ->
getFromFile(name + ".log")
);
// tests:
testRoute(route).run(HttpRequest.GET("/logs/example"))
.assertEntity("example file contents");
//#getFromFile
}
@Ignore("Compile only test")
@Test
public void testGetFromResource() {
//#getFromResource
final Route route = path(PathMatchers.segment("logs").slash(segment()), name ->
getFromResource(name + ".log")
);
// tests:
testRoute(route).run(HttpRequest.GET("/logs/example"))
.assertEntity("example file contents");
//#getFromResource
}
@Ignore("Compile only test")
@Test
public void testListDirectoryContents() {
//#listDirectoryContents
final Route route = route(
path("tmp", () -> listDirectoryContents("/tmp")),
path("custom", () -> {
// implement your custom renderer here
final DirectoryRenderer renderer = renderVanityFooter -> {
throw new NotImplementedError();
};
return listDirectoryContents(renderer, "/tmp");
})
);
// tests:
testRoute(route).run(HttpRequest.GET("/logs/example"))
.assertEntity("example file contents");
//#listDirectoryContents
}
@Ignore("Compile only test")
@Test
public void testGetFromBrowseableDirectory() {
//#getFromBrowseableDirectory
final Route route = path("tmp", () ->
getFromBrowseableDirectory("/tmp")
);
// tests:
testRoute(route).run(HttpRequest.GET("/tmp"))
.assertStatusCode(StatusCodes.OK);
//#getFromBrowseableDirectory
}
@Ignore("Compile only test")
@Test
public void testGetFromBrowseableDirectories() {
//#getFromBrowseableDirectories
final Route route = path("tmp", () ->
getFromBrowseableDirectories("/main", "/backups")
);
// tests:
testRoute(route).run(HttpRequest.GET("/tmp"))
.assertStatusCode(StatusCodes.OK);
//#getFromBrowseableDirectories
}
@Ignore("Compile only test")
@Test
public void testGetFromDirectory() {
//#getFromDirectory
final Route route = pathPrefix("tmp", () ->
getFromDirectory("/tmp")
);
// tests:
testRoute(route).run(HttpRequest.GET("/tmp/example"))
.assertEntity("example file contents");
//#getFromDirectory
}
@Ignore("Compile only test")
@Test
public void testGetFromResourceDirectory() {
//#getFromResourceDirectory
final Route route = pathPrefix("examples", () ->
getFromResourceDirectory("/examples")
);
// tests:
testRoute(route).run(HttpRequest.GET("/examples/example-1"))
.assertEntity("example file contents");
//#getFromResourceDirectory
}
}

View file

@ -0,0 +1,140 @@
/**
* Copyright (C) 2016-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package docs.http.javadsl.server.directives;
import akka.http.impl.engine.rendering.BodyPartRenderer;
import akka.http.javadsl.model.*;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.server.Unmarshaller;
import akka.http.javadsl.server.directives.FileInfo;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.stream.javadsl.Framing;
import akka.stream.javadsl.Source;
import akka.util.ByteString;
import org.junit.Ignore;
import org.junit.Test;
import scala.concurrent.duration.Duration;
import scala.concurrent.duration.FiniteDuration;
import java.io.File;
import java.nio.charset.Charset;
import java.nio.file.Files;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.TimeUnit;
import java.util.function.BiFunction;
public class FileUploadDirectivesExamplesTest extends JUnitRouteTest {
@Test
public void testUploadedFile() {
//#uploadedFile
// function (FileInfo, File) => Route to process the file metadata and file itself
BiFunction<FileInfo, File, Route> infoFileRoute =
(info, file) -> {
// do something with the file and file metadata ...
file.delete();
return complete(StatusCodes.OK);
};
final Route route = uploadedFile("csv", infoFileRoute);
Map<String, String> filenameMapping = new HashMap<>();
filenameMapping.put("filename", "data.csv");
akka.http.javadsl.model.Multipart.FormData multipartForm =
Multiparts.createStrictFormDataFromParts(Multiparts.createFormDataBodyPartStrict("csv",
HttpEntities.create(ContentTypes.TEXT_PLAIN_UTF8,
"1,5,7\n11,13,17"), filenameMapping));
// test:
testRoute(route).run(HttpRequest.POST("/")
.withEntity(
multipartForm.toEntity(HttpCharsets.UTF_8,
BodyPartRenderer
.randomBoundaryWithDefaults())))
.assertStatusCode(StatusCodes.OK);
//#
}
@Test
public void testFileUpload() {
//#fileUpload
final Route route = extractRequestContext(ctx -> {
// function (FileInfo, Source<ByteString,Object>) => Route to process the file contents
BiFunction<FileInfo, Source<ByteString, Object>, Route> processUploadedFile =
(metadata, byteSource) -> {
CompletionStage<Integer> sumF = byteSource.via(Framing.delimiter(
ByteString.fromString("\n"), 1024))
.mapConcat(bs -> Arrays.asList(bs.utf8String().split(",")))
.map(s -> Integer.parseInt(s))
.runFold(0, (acc, n) -> acc + n, ctx.getMaterializer());
return onSuccess(() -> sumF, sum -> complete("Sum: " + sum));
};
return fileUpload("csv", processUploadedFile);
});
Map<String, String> filenameMapping = new HashMap<>();
filenameMapping.put("filename", "primes.csv");
akka.http.javadsl.model.Multipart.FormData multipartForm =
Multiparts.createStrictFormDataFromParts(
Multiparts.createFormDataBodyPartStrict("csv",
HttpEntities.create(ContentTypes.TEXT_PLAIN_UTF8,
"2,3,5\n7,11,13,17,23\n29,31,37\n"), filenameMapping));
// test:
testRoute(route).run(HttpRequest.POST("/").withEntity(
multipartForm.toEntity(HttpCharsets.UTF_8, BodyPartRenderer.randomBoundaryWithDefaults())))
.assertStatusCode(StatusCodes.OK).assertEntityAs(Unmarshaller.entityToString(), "Sum: 178");
//#
}
@Ignore("compileOnly")
@Test
public void testFileProcessing() {
//#fileProcessing
final Route route = extractRequestContext(ctx -> {
// function (FileInfo, Source<ByteString,Object>) => Route to process the file contents
BiFunction<FileInfo, Source<ByteString, Object>, Route> processUploadedFile =
(metadata, byteSource) -> {
CompletionStage<Integer> sumF = byteSource.via(Framing.delimiter(
ByteString.fromString("\n"), 1024))
.mapConcat(bs -> Arrays.asList(bs.utf8String().split(",")))
.map(s -> Integer.parseInt(s))
.runFold(0, (acc, n) -> acc + n, ctx.getMaterializer());
return onSuccess(() -> sumF, sum -> complete("Sum: " + sum));
};
return fileUpload("csv", processUploadedFile);
});
Map<String, String> filenameMapping = new HashMap<>();
filenameMapping.put("filename", "primes.csv");
String prefix = "primes";
String suffix = ".csv";
File tempFile = null;
try {
tempFile = File.createTempFile(prefix, suffix);
tempFile.deleteOnExit();
Files.write(tempFile.toPath(), Arrays.asList("2,3,5", "7,11,13,17,23", "29,31,37"), Charset.forName("UTF-8"));
} catch (Exception e) {
// ignore
}
akka.http.javadsl.model.Multipart.FormData multipartForm =
Multiparts.createFormDataFromPath("csv", ContentTypes.TEXT_PLAIN_UTF8, tempFile.toPath());
// test:
testRoute(route).run(HttpRequest.POST("/").withEntity(
multipartForm.toEntity(HttpCharsets.UTF_8, BodyPartRenderer.randomBoundaryWithDefaults())))
.assertStatusCode(StatusCodes.OK).assertEntityAs(Unmarshaller.entityToString(), "Sum: 178");
//#
}
}

View file

@ -0,0 +1,137 @@
/*
* Copyright (C) 2016-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package docs.http.javadsl.server.directives;
import akka.http.javadsl.model.FormData;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.server.StringUnmarshallers;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.japi.Pair;
import org.junit.Test;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import java.util.function.Function;
import java.util.stream.Collectors;
public class FormFieldDirectivesExamplesTest extends JUnitRouteTest {
@Test
public void testFormField() {
//#formField
final Route route = route(
formField("color", color ->
complete("The color is '" + color + "'")
),
formField(StringUnmarshallers.INTEGER, "id", id ->
complete("The id is '" + id + "'")
)
);
// tests:
final FormData formData = FormData.create(Pair.create("color", "blue"));
testRoute(route).run(HttpRequest.POST("/").withEntity(formData.toEntity()))
.assertEntity("The color is 'blue'");
testRoute(route).run(HttpRequest.GET("/"))
.assertStatusCode(StatusCodes.BAD_REQUEST)
.assertEntity("Request is missing required form field 'color'");
//#formField
}
@Test
public void testFormFieldMap() {
//#formFieldMap
final Function<Map<String, String>, String> mapToString = map ->
map.entrySet()
.stream()
.map(e -> e.getKey() + " = '" + e.getValue() +"'")
.collect(Collectors.joining(", "));
final Route route = formFieldMap(fields ->
complete("The form fields are " + mapToString.apply(fields))
);
// tests:
final FormData formDataDiffKey =
FormData.create(
Pair.create("color", "blue"),
Pair.create("count", "42"));
testRoute(route).run(HttpRequest.POST("/").withEntity(formDataDiffKey.toEntity()))
.assertEntity("The form fields are color = 'blue', count = '42'");
final FormData formDataSameKey =
FormData.create(
Pair.create("x", "1"),
Pair.create("x", "5"));
testRoute(route).run(HttpRequest.POST("/").withEntity(formDataSameKey.toEntity()))
.assertEntity( "The form fields are x = '5'");
//#formFieldMap
}
@Test
public void testFormFieldMultiMap() {
//#formFieldMultiMap
final Function<Map<String, List<String>>, String> mapToString = map ->
map.entrySet()
.stream()
.map(e -> e.getKey() + " -> " + e.getValue().size())
.collect(Collectors.joining(", "));
final Route route = formFieldMultiMap(fields ->
complete("There are form fields " + mapToString.apply(fields))
);
// test:
final FormData formDataDiffKey =
FormData.create(
Pair.create("color", "blue"),
Pair.create("count", "42"));
testRoute(route).run(HttpRequest.POST("/").withEntity(formDataDiffKey.toEntity()))
.assertEntity("There are form fields color -> 1, count -> 1");
final FormData formDataSameKey =
FormData.create(
Pair.create("x", "23"),
Pair.create("x", "4"),
Pair.create("x", "89"));
testRoute(route).run(HttpRequest.POST("/").withEntity(formDataSameKey.toEntity()))
.assertEntity("There are form fields x -> 3");
//#formFieldMultiMap
}
@Test
public void testFormFieldList() {
//#formFieldList
final Function<List<Entry<String, String>>, String> listToString = list ->
list.stream()
.map(e -> e.getKey() + " = '" + e.getValue() +"'")
.collect(Collectors.joining(", "));
final Route route = formFieldList(fields ->
complete("The form fields are " + listToString.apply(fields))
);
// tests:
final FormData formDataDiffKey =
FormData.create(
Pair.create("color", "blue"),
Pair.create("count", "42"));
testRoute(route).run(HttpRequest.POST("/").withEntity(formDataDiffKey.toEntity()))
.assertEntity("The form fields are color = 'blue', count = '42'");
final FormData formDataSameKey =
FormData.create(
Pair.create("x", "23"),
Pair.create("x", "4"),
Pair.create("x", "89"));
testRoute(route).run(HttpRequest.POST("/").withEntity(formDataSameKey.toEntity()))
.assertEntity("The form fields are x = '23', x = '4', x = '89'");
//#formFieldList
}
}

View file

@ -0,0 +1,64 @@
/*
* Copyright (C) 2016-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package docs.http.javadsl.server.directives;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.server.Unmarshaller;
import akka.http.javadsl.testkit.JUnitRouteTest;
import org.junit.Test;
import java.util.Arrays;
import java.util.function.Function;
public class MiscDirectivesExamplesTest extends JUnitRouteTest {
@Test
public void testWithSizeLimit() {
//#withSizeLimitExample
final Route route = withSizeLimit(500, () ->
entity(Unmarshaller.entityToString(), (entity) ->
complete("ok")
)
);
Function<Integer, HttpRequest> withEntityOfSize = (sizeLimit) -> {
char[] charArray = new char[sizeLimit];
Arrays.fill(charArray, '0');
return HttpRequest.POST("/").withEntity(new String(charArray));
};
// tests:
testRoute(route).run(withEntityOfSize.apply(500))
.assertStatusCode(StatusCodes.OK);
testRoute(route).run(withEntityOfSize.apply(501))
.assertStatusCode(StatusCodes.BAD_REQUEST);
//#withSizeLimitExample
}
@Test
public void testWithoutSizeLimit() {
//#withoutSizeLimitExample
final Route route = withoutSizeLimit(() ->
entity(Unmarshaller.entityToString(), (entity) ->
complete("ok")
)
);
Function<Integer, HttpRequest> withEntityOfSize = (sizeLimit) -> {
char[] charArray = new char[sizeLimit];
Arrays.fill(charArray, '0');
return HttpRequest.POST("/").withEntity(new String(charArray));
};
// tests:
// will work even if you have configured akka.http.parsing.max-content-length = 500
testRoute(route).run(withEntityOfSize.apply(501))
.assertStatusCode(StatusCodes.OK);
//#withoutSizeLimitExample
}
}

View file

@ -0,0 +1,121 @@
/*
* Copyright (C) 2016-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package docs.http.javadsl.server.directives;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.testkit.JUnitRouteTest;
import org.junit.Test;
import java.util.Map.Entry;
import java.util.function.Function;
import java.util.stream.Collectors;
public class ParameterDirectivesExamplesTest extends JUnitRouteTest {
@Test
public void testParameter() {
//#parameter
final Route route = parameter("color", color ->
complete("The color is '" + color + "'")
);
// tests:
testRoute(route).run(HttpRequest.GET("/?color=blue"))
.assertEntity("The color is 'blue'");
testRoute(route).run(HttpRequest.GET("/"))
.assertStatusCode(StatusCodes.NOT_FOUND)
.assertEntity("Request is missing required query parameter 'color'");
//#parameter
}
@Test
public void testParameters() {
//#parameters
final Route route = parameter("color", color ->
parameter("backgroundColor", backgroundColor ->
complete("The color is '" + color
+ "' and the background is '" + backgroundColor + "'")
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/?color=blue&backgroundColor=red"))
.assertEntity("The color is 'blue' and the background is 'red'");
testRoute(route).run(HttpRequest.GET("/?color=blue"))
.assertStatusCode(StatusCodes.NOT_FOUND)
.assertEntity("Request is missing required query parameter 'backgroundColor'");
//#parameters
}
@Test
public void testParameterMap() {
//#parameterMap
final Function<Entry<String, String>, String> paramString =
entry -> entry.getKey() + " = '" + entry.getValue() + "'";
final Route route = parameterMap(params -> {
final String pString = params.entrySet()
.stream()
.map(paramString::apply)
.collect(Collectors.joining(", "));
return complete("The parameters are " + pString);
});
// tests:
testRoute(route).run(HttpRequest.GET("/?color=blue&count=42"))
.assertEntity("The parameters are color = 'blue', count = '42'");
testRoute(route).run(HttpRequest.GET("/?x=1&x=2"))
.assertEntity("The parameters are x = '2'");
//#parameterMap
}
@Test
public void testParameterMultiMap() {
//#parameterMultiMap
final Route route = parameterMultiMap(params -> {
final String pString = params.entrySet()
.stream()
.map(e -> e.getKey() + " -> " + e.getValue().size())
.collect(Collectors.joining(", "));
return complete("There are parameters " + pString);
});
// tests:
testRoute(route).run(HttpRequest.GET("/?color=blue&count=42"))
.assertEntity("There are parameters color -> 1, count -> 1");
testRoute(route).run(HttpRequest.GET("/?x=23&x=42"))
.assertEntity("There are parameters x -> 2");
//#parameterMultiMap
}
@Test
public void testParameterSeq() {
//#parameterSeq
final Function<Entry<String, String>, String> paramString =
entry -> entry.getKey() + " = '" + entry.getValue() + "'";
final Route route = parameterList(params -> {
final String pString = params.stream()
.map(paramString::apply)
.collect(Collectors.joining(", "));
return complete("The parameters are " + pString);
});
// tests:
testRoute(route).run(HttpRequest.GET("/?color=blue&count=42"))
.assertEntity("The parameters are color = 'blue', count = '42'");
testRoute(route).run(HttpRequest.GET("/?x=1&x=2"))
.assertEntity("The parameters are x = '1', x = '2'");
//#parameterSeq
}
}

View file

@ -0,0 +1,322 @@
/*
* Copyright (C) 2015-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package docs.http.javadsl.server.directives;
import java.util.Arrays;
import java.util.regex.Pattern;
import org.junit.Test;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.testkit.JUnitRouteTest;
import static akka.http.javadsl.server.PathMatchers.segment;
import static akka.http.javadsl.server.PathMatchers.segments;
import static akka.http.javadsl.server.PathMatchers.integerSegment;
import static akka.http.javadsl.server.PathMatchers.neutral;
import static akka.http.javadsl.server.PathMatchers.slash;
import java.util.function.Supplier;
import akka.http.javadsl.server.directives.RouteAdapter;
import static java.util.regex.Pattern.compile;
public class PathDirectivesExamplesTest extends JUnitRouteTest {
//# path-prefix-test, path-suffix, raw-path-prefix, raw-path-prefix-test
Supplier<RouteAdapter> completeWithUnmatchedPath = () ->
extractUnmatchedPath((path) -> complete(path.toString()));
//#
@Test
public void testPathExamples() {
//# path-dsl
// matches /foo/
path(segment("foo").slash(), () -> complete(StatusCodes.OK));
// matches e.g. /foo/123 and extracts "123" as a String
path(segment("foo").slash(segment(compile("\\d+"))), (value) ->
complete(StatusCodes.OK));
// matches e.g. /foo/bar123 and extracts "123" as a String
path(segment("foo").slash(segment(compile("bar(\\d+)"))), (value) ->
complete(StatusCodes.OK));
// similar to `path(Segments)`
path(neutral().repeat(0, 10), () -> complete(StatusCodes.OK));
// identical to path("foo" ~ (PathEnd | Slash))
path(segment("foo").orElse(slash()), () -> complete(StatusCodes.OK));
//# path-dsl
}
@Test
public void testBasicExamples() {
path("test", () -> complete(StatusCodes.OK));
// matches "/test", as well
path(segment("test"), () -> complete(StatusCodes.OK));
}
@Test
public void testPathExample() {
//# pathPrefix
final Route route =
route(
path("foo", () -> complete("/foo")),
path(segment("foo").slash("bar"), () -> complete("/foo/bar")),
pathPrefix("ball", () ->
route(
pathEnd(() -> complete("/ball")),
path(integerSegment(), (i) ->
complete((i % 2 == 0) ? "even ball" : "odd ball"))
)
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/")).assertStatusCode(StatusCodes.NOT_FOUND);
testRoute(route).run(HttpRequest.GET("/foo")).assertEntity("/foo");
testRoute(route).run(HttpRequest.GET("/foo/bar")).assertEntity("/foo/bar");
testRoute(route).run(HttpRequest.GET("/ball/1337")).assertEntity("odd ball");
//# pathPrefix
}
@Test
public void testPathEnd() {
//# path-end
final Route route =
route(
pathPrefix("foo", () ->
route(
pathEnd(() -> complete("/foo")),
path("bar", () -> complete("/foo/bar"))
)
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/foo")).assertEntity("/foo");
testRoute(route).run(HttpRequest.GET("/foo/")).assertStatusCode(StatusCodes.NOT_FOUND);
testRoute(route).run(HttpRequest.GET("/foo/bar")).assertEntity("/foo/bar");
//# path-end
}
@Test
public void testPathEndOrSingleSlash() {
//# path-end-or-single-slash
final Route route =
route(
pathPrefix("foo", () ->
route(
pathEndOrSingleSlash(() -> complete("/foo")),
path("bar", () -> complete("/foo/bar"))
)
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/foo")).assertEntity("/foo");
testRoute(route).run(HttpRequest.GET("/foo/")).assertEntity("/foo");
testRoute(route).run(HttpRequest.GET("/foo/bar")).assertEntity("/foo/bar");
//# path-end-or-single-slash
}
@Test
public void testPathPrefix() {
//# path-prefix
final Route route =
route(
pathPrefix("ball", () ->
route(
pathEnd(() -> complete("/ball")),
path(integerSegment(), (i) ->
complete((i % 2 == 0) ? "even ball" : "odd ball"))
)
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/")).assertStatusCode(StatusCodes.NOT_FOUND);
testRoute(route).run(HttpRequest.GET("/ball")).assertEntity("/ball");
testRoute(route).run(HttpRequest.GET("/ball/1337")).assertEntity("odd ball");
//# path-prefix
}
@Test
public void testPathPrefixTest() {
//# path-prefix-test
final Route route =
route(
pathPrefixTest(segment("foo").orElse("bar"), () ->
route(
pathPrefix("foo", () -> completeWithUnmatchedPath.get()),
pathPrefix("bar", () -> completeWithUnmatchedPath.get())
)
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/foo/doo")).assertEntity("/doo");
testRoute(route).run(HttpRequest.GET("/bar/yes")).assertEntity("/yes");
//# path-prefix-test
}
@Test
public void testPathSingleSlash() {
//# path-single-slash
final Route route =
route(
pathSingleSlash(() -> complete("root")),
pathPrefix("ball", () ->
route(
pathSingleSlash(() -> complete("/ball/")),
path(integerSegment(), (i) -> complete((i % 2 == 0) ? "even ball" : "odd ball"))
)
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/")).assertEntity("root");
testRoute(route).run(HttpRequest.GET("/ball")).assertStatusCode(StatusCodes.NOT_FOUND);
testRoute(route).run(HttpRequest.GET("/ball/")).assertEntity("/ball/");
testRoute(route).run(HttpRequest.GET("/ball/1337")).assertEntity("odd ball");
//# path-single-slash
}
@Test
public void testPathSuffix() {
//# path-suffix
final Route route =
route(
pathPrefix("start", () ->
route(
pathSuffix("end", () -> completeWithUnmatchedPath.get()),
pathSuffix(segment("foo").slash("bar").concat("baz"), () ->
completeWithUnmatchedPath.get())
)
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/start/middle/end")).assertEntity("/middle/");
testRoute(route).run(HttpRequest.GET("/start/something/barbaz/foo")).assertEntity("/something/");
//# path-suffix
}
@Test
public void testPathSuffixTest() {
//# path-suffix-test
final Route route =
route(
pathSuffixTest(slash(), () -> complete("slashed")),
complete("unslashed")
);
// tests:
testRoute(route).run(HttpRequest.GET("/foo/")).assertEntity("slashed");
testRoute(route).run(HttpRequest.GET("/foo")).assertEntity("unslashed");
//# path-suffix-test
}
@Test
public void testRawPathPrefix() {
//# raw-path-prefix
final Route route =
route(
pathPrefix("foo", () ->
route(
rawPathPrefix("bar", () -> completeWithUnmatchedPath.get()),
rawPathPrefix("doo", () -> completeWithUnmatchedPath.get())
)
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/foobar/baz")).assertEntity("/baz");
testRoute(route).run(HttpRequest.GET("/foodoo/baz")).assertEntity("/baz");
//# raw-path-prefix
}
@Test
public void testRawPathPrefixTest() {
//# raw-path-prefix-test
final Route route =
route(
pathPrefix("foo", () ->
rawPathPrefixTest("bar", () -> completeWithUnmatchedPath.get())
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/foobar")).assertEntity("bar");
testRoute(route).run(HttpRequest.GET("/foobaz")).assertStatusCode(StatusCodes.NOT_FOUND);
//# raw-path-prefix-test
}
@Test
public void testRedirectToNoTrailingSlashIfMissing() {
//# redirect-notrailing-slash-missing
final Route route =
redirectToTrailingSlashIfMissing(
StatusCodes.MOVED_PERMANENTLY, () ->
route(
path(segment("foo").slash(), () -> complete("OK")),
path(segment("bad-1"), () ->
// MISTAKE!
// Missing `/` in path, causes this path to never match,
// because it is inside a `redirectToTrailingSlashIfMissing`
complete(StatusCodes.NOT_IMPLEMENTED)
),
path(segment("bad-2").slash(), () ->
// MISTAKE!
// / should be explicit as path element separator and not *in* the path element
// So it should be: "bad-2" /
complete(StatusCodes.NOT_IMPLEMENTED)
)
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/foo"))
.assertStatusCode(StatusCodes.MOVED_PERMANENTLY)
.assertEntity("This and all future requests should be directed to " +
"<a href=\"http://example.com/foo/\">this URI</a>.");
testRoute(route).run(HttpRequest.GET("/foo/"))
.assertStatusCode(StatusCodes.OK)
.assertEntity("OK");
testRoute(route).run(HttpRequest.GET("/bad-1/"))
.assertStatusCode(StatusCodes.NOT_FOUND);
//# redirect-notrailing-slash-missing
}
@Test
public void testRedirectToNoTrailingSlashIfPresent() {
//# redirect-notrailing-slash-present
final Route route =
redirectToNoTrailingSlashIfPresent(
StatusCodes.MOVED_PERMANENTLY, () ->
route(
path("foo", () -> complete("OK")),
path(segment("bad").slash(), () ->
// MISTAKE!
// Since inside a `redirectToNoTrailingSlashIfPresent` directive
// the matched path here will never contain a trailing slash,
// thus this path will never match.
//
// It should be `path("bad")` instead.
complete(StatusCodes.NOT_IMPLEMENTED)
)
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/foo/"))
.assertStatusCode(StatusCodes.MOVED_PERMANENTLY)
.assertEntity("This and all future requests should be directed to " +
"<a href=\"http://example.com/foo\">this URI</a>.");
testRoute(route).run(HttpRequest.GET("/foo"))
.assertStatusCode(StatusCodes.OK)
.assertEntity("OK");
testRoute(route).run(HttpRequest.GET("/bad"))
.assertStatusCode(StatusCodes.NOT_FOUND);
//# redirect-notrailing-slash-present
}
}

View file

@ -0,0 +1,88 @@
/**
* Copyright (C) 2016-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package docs.http.javadsl.server.directives;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.Multipart;
import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.model.headers.ByteRange;
import akka.http.javadsl.model.headers.ContentRange;
import akka.http.javadsl.model.headers.Range;
import akka.http.javadsl.model.headers.RangeUnits;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.server.Unmarshaller;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.http.javadsl.testkit.TestRouteResult;
import akka.stream.ActorMaterializer;
import akka.util.ByteString;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import org.junit.Test;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.TimeUnit;
public class RangeDirectivesExamplesTest extends JUnitRouteTest {
@Override
public Config additionalConfig() {
return ConfigFactory.parseString("akka.http.routing.range-coalescing-threshold=2");
}
@Test
public void testWithRangeSupport() {
//#withRangeSupport
final Route route = withRangeSupport(() -> complete("ABCDEFGH"));
// test:
final String bytes348Range = ContentRange.create(RangeUnits.BYTES,
akka.http.javadsl.model.ContentRange.create(3, 4, 8)).value();
final akka.http.javadsl.model.ContentRange bytes028Range =
akka.http.javadsl.model.ContentRange.create(0, 2, 8);
final akka.http.javadsl.model.ContentRange bytes678Range =
akka.http.javadsl.model.ContentRange.create(6, 7, 8);
final ActorMaterializer materializer = systemResource().materializer();
testRoute(route).run(HttpRequest.GET("/")
.addHeader(Range.create(RangeUnits.BYTES, ByteRange.createSlice(3, 4))))
.assertHeaderKindExists("Content-Range")
.assertHeaderExists("Content-Range", bytes348Range)
.assertStatusCode(StatusCodes.PARTIAL_CONTENT)
.assertEntity("DE");
// we set "akka.http.routing.range-coalescing-threshold = 2"
// above to make sure we get two BodyParts
final TestRouteResult response = testRoute(route).run(HttpRequest.GET("/")
.addHeader(Range.create(RangeUnits.BYTES,
ByteRange.createSlice(0, 1), ByteRange.createSlice(1, 2), ByteRange.createSlice(6, 7))));
response.assertHeaderKindNotExists("Content-Range");
final CompletionStage<List<Multipart.ByteRanges.BodyPart>> completionStage =
response.entity(Unmarshaller.entityToMultipartByteRanges()).getParts()
.runFold(new ArrayList<>(), (acc, n) -> {
acc.add(n);
return acc;
}, materializer);
try {
final List<Multipart.ByteRanges.BodyPart> bodyParts =
completionStage.toCompletableFuture().get(3, TimeUnit.SECONDS);
assertResult(2, bodyParts.toArray().length);
final Multipart.ByteRanges.BodyPart part1 = bodyParts.get(0);
assertResult(bytes028Range, part1.getContentRange());
assertResult(ByteString.fromString("ABC"),
part1.toStrict(1000, materializer).toCompletableFuture().get().getEntity().getData());
final Multipart.ByteRanges.BodyPart part2 = bodyParts.get(1);
assertResult(bytes678Range, part2.getContentRange());
assertResult(ByteString.fromString("GH"),
part2.toStrict(1000, materializer).toCompletableFuture().get().getEntity().getData());
} catch (Exception e) {
// fail the test instead of swallowing the exception
throw new RuntimeException(e);
}
//#
}
}

View file

@ -0,0 +1,125 @@
/*
* Copyright (C) 2016-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package docs.http.javadsl.server.directives;
import akka.http.javadsl.model.HttpEntities;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.Uri;
import akka.http.javadsl.model.headers.ContentType;
import akka.http.javadsl.model.ContentTypes;
import akka.http.javadsl.model.HttpResponse;
import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.server.Rejections;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.testkit.JUnitRouteTest;
import org.junit.Test;
import java.util.Collections;
public class RouteDirectivesExamplesTest extends JUnitRouteTest {
@Test
public void testComplete() {
//#complete
final Route route = route(
path("a", () -> complete(HttpResponse.create().withEntity("foo"))),
path("b", () -> complete(StatusCodes.OK)),
path("c", () -> complete(StatusCodes.CREATED, "bar")),
path("d", () -> complete(StatusCodes.get(201), "bar")),
path("e", () ->
complete(StatusCodes.CREATED,
Collections.singletonList(ContentType.create(ContentTypes.TEXT_PLAIN_UTF8)),
HttpEntities.create("bar"))),
path("f", () ->
complete(StatusCodes.get(201),
Collections.singletonList(ContentType.create(ContentTypes.TEXT_PLAIN_UTF8)),
HttpEntities.create("bar"))),
path("g", () -> complete("baz"))
);
// tests:
testRoute(route).run(HttpRequest.GET("/a"))
.assertStatusCode(StatusCodes.OK)
.assertEntity("foo");
testRoute(route).run(HttpRequest.GET("/b"))
.assertStatusCode(StatusCodes.OK)
.assertEntity("OK");
testRoute(route).run(HttpRequest.GET("/c"))
.assertStatusCode(StatusCodes.CREATED)
.assertEntity("bar");
testRoute(route).run(HttpRequest.GET("/d"))
.assertStatusCode(StatusCodes.CREATED)
.assertEntity("bar");
testRoute(route).run(HttpRequest.GET("/e"))
.assertStatusCode(StatusCodes.CREATED)
.assertHeaderExists(ContentType.create(ContentTypes.TEXT_PLAIN_UTF8))
.assertEntity("bar");
testRoute(route).run(HttpRequest.GET("/f"))
.assertStatusCode(StatusCodes.CREATED)
.assertHeaderExists(ContentType.create(ContentTypes.TEXT_PLAIN_UTF8))
.assertEntity("bar");
testRoute(route).run(HttpRequest.GET("/g"))
.assertStatusCode(StatusCodes.OK)
.assertEntity("baz");
//#complete
}
@Test
public void testReject() {
//#reject
final Route route = route(
path("a", this::reject), // don't handle here, continue on
path("a", () -> complete("foo")),
path("b", () -> reject(Rejections.validationRejection("Restricted!")))
);
// tests:
testRoute(route).run(HttpRequest.GET("/a"))
.assertEntity("foo");
runRouteUnSealed(route, HttpRequest.GET("/b"))
.assertRejections(Rejections.validationRejection("Restricted!"));
//#reject
}
@Test
public void testRedirect() {
//#redirect
final Route route = pathPrefix("foo", () ->
route(
pathSingleSlash(() -> complete("yes")),
pathEnd(() -> redirect(Uri.create("/foo/"), StatusCodes.PERMANENT_REDIRECT))
)
);
// tests:
testRoute(route).run(HttpRequest.GET("/foo/"))
.assertEntity("yes");
testRoute(route).run(HttpRequest.GET("/foo"))
.assertStatusCode(StatusCodes.PERMANENT_REDIRECT)
.assertEntity("The request, and all future requests should be repeated using <a href=\"/foo/\">this URI</a>.");
//#redirect
}
@Test
public void testFailWith() {
//#failWith
final Route route = path("foo", () ->
failWith(new RuntimeException("Oops."))
);
// tests:
testRoute(route).run(HttpRequest.GET("/foo"))
.assertStatusCode(StatusCodes.INTERNAL_SERVER_ERROR)
.assertEntity("There was an internal server error.");
//#failWith
}
}

View file

@ -0,0 +1,364 @@
/*
* Copyright (C) 2016-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package docs.http.javadsl.server.directives;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.model.headers.BasicHttpCredentials;
import akka.http.javadsl.model.headers.HttpChallenge;
import akka.http.javadsl.model.headers.HttpCredentials;
import akka.http.javadsl.server.Route;
import akka.http.javadsl.testkit.JUnitRouteTest;
import akka.japi.JavaPartialFunction;
import org.junit.Test;
import scala.PartialFunction;
import scala.util.Either;
import scala.util.Left;
import scala.util.Right;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.function.Function;
import java.util.Optional;
public class SecurityDirectivesExamplesTest extends JUnitRouteTest {
@Test
public void testAuthenticateBasic() {
//#authenticateBasic
final Function<Optional<ProvidedCredentials>, Optional<String>> myUserPassAuthenticator =
credentials ->
credentials.filter(c -> c.verify("p4ssw0rd")).map(ProvidedCredentials::identifier);
final Route route = path("secured", () ->
authenticateBasic("secure site", myUserPassAuthenticator, userName ->
complete("The user is '" + userName + "'")
)
).seal(system(), materializer());
// tests:
testRoute(route).run(HttpRequest.GET("/secured"))
.assertStatusCode(StatusCodes.UNAUTHORIZED)
.assertEntity("The resource requires authentication, which was not supplied with the request")
.assertHeaderExists("WWW-Authenticate", "Basic realm=\"secure site\"");
final HttpCredentials validCredentials =
BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd");
testRoute(route).run(HttpRequest.GET("/secured").addCredentials(validCredentials))
.assertEntity("The user is 'John'");
final HttpCredentials invalidCredentials =
BasicHttpCredentials.createBasicHttpCredentials("Peter", "pan");
testRoute(route).run(HttpRequest.GET("/secured").addCredentials(invalidCredentials))
.assertStatusCode(StatusCodes.UNAUTHORIZED)
.assertEntity("The supplied authentication is invalid")
.assertHeaderExists("WWW-Authenticate", "Basic realm=\"secure site\"");
//#authenticateBasic
}
@Test
public void testAuthenticateBasicPF() {
//#authenticateBasicPF
final PartialFunction<Optional<ProvidedCredentials>, String> myUserPassAuthenticator =
new JavaPartialFunction<Optional<ProvidedCredentials>, String>() {
@Override
public String apply(Optional<ProvidedCredentials> opt, boolean isCheck) throws Exception {
if (opt.filter(c -> (c != null) && c.verify("p4ssw0rd")).isPresent()) {
if (isCheck) return null;
else return opt.get().identifier();
} else if (opt.filter(c -> (c != null) && c.verify("p4ssw0rd-special")).isPresent()) {
if (isCheck) return null;
else return opt.get().identifier() + "-admin";
} else {
throw noMatch();
}
}
};
final Route route = path("secured", () ->
authenticateBasicPF("secure site", myUserPassAuthenticator, userName ->
complete("The user is '" + userName + "'")
)
).seal(system(), materializer());
// tests:
testRoute(route).run(HttpRequest.GET("/secured"))
.assertStatusCode(StatusCodes.UNAUTHORIZED)
.assertEntity("The resource requires authentication, which was not supplied with the request")
.assertHeaderExists("WWW-Authenticate", "Basic realm=\"secure site\"");
final HttpCredentials validCredentials =
BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd");
testRoute(route).run(HttpRequest.GET("/secured").addCredentials(validCredentials))
.assertEntity("The user is 'John'");
final HttpCredentials validAdminCredentials =
BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd-special");
testRoute(route).run(HttpRequest.GET("/secured").addCredentials(validAdminCredentials))
.assertEntity("The user is 'John-admin'");
final HttpCredentials invalidCredentials =
BasicHttpCredentials.createBasicHttpCredentials("Peter", "pan");
testRoute(route).run(HttpRequest.GET("/secured").addCredentials(invalidCredentials))
.assertStatusCode(StatusCodes.UNAUTHORIZED)
.assertEntity("The supplied authentication is invalid")
.assertHeaderExists("WWW-Authenticate", "Basic realm=\"secure site\"");
//#authenticateBasicPF
}
@Test
public void testAuthenticateBasicPFAsync() {
//#authenticateBasicPFAsync
class User {
private final String id;
public User(String id) {
this.id = id;
}
public String getId() {
return id;
}
}
final PartialFunction<Optional<ProvidedCredentials>, CompletionStage<User>> myUserPassAuthenticator =
new JavaPartialFunction<Optional<ProvidedCredentials>,CompletionStage<User>>() {
@Override
public CompletionStage<User> apply(Optional<ProvidedCredentials> opt, boolean isCheck) throws Exception {
if (opt.filter(c -> (c != null) && c.verify("p4ssw0rd")).isPresent()) {
if (isCheck) return CompletableFuture.completedFuture(null);
else return CompletableFuture.completedFuture(new User(opt.get().identifier()));
} else {
throw noMatch();
}
}
};
final Route route = path("secured", () ->
authenticateBasicPFAsync("secure site", myUserPassAuthenticator, user ->
complete("The user is '" + user.getId() + "'"))
).seal(system(), materializer());
// tests:
testRoute(route).run(HttpRequest.GET("/secured"))
.assertStatusCode(StatusCodes.UNAUTHORIZED)
.assertEntity("The resource requires authentication, which was not supplied with the request")
.assertHeaderExists("WWW-Authenticate", "Basic realm=\"secure site\"");
final HttpCredentials validCredentials =
BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd");
testRoute(route).run(HttpRequest.GET("/secured").addCredentials(validCredentials))
.assertEntity("The user is 'John'");
final HttpCredentials invalidCredentials =
BasicHttpCredentials.createBasicHttpCredentials("Peter", "pan");
testRoute(route).run(HttpRequest.GET("/secured").addCredentials(invalidCredentials))
.assertStatusCode(StatusCodes.UNAUTHORIZED)
.assertEntity("The supplied authentication is invalid")
.assertHeaderExists("WWW-Authenticate", "Basic realm=\"secure site\"");
//#authenticateBasicPFAsync
}
@Test
public void testAuthenticateBasicAsync() {
//#authenticateBasicAsync
final Function<Optional<ProvidedCredentials>, CompletionStage<Optional<String>>> myUserPassAuthenticator = opt -> {
if (opt.filter(c -> (c != null) && c.verify("p4ssw0rd")).isPresent()) {
return CompletableFuture.completedFuture(Optional.of(opt.get().identifier()));
} else {
return CompletableFuture.completedFuture(Optional.empty());
}
};
final Route route = path("secured", () ->
authenticateBasicAsync("secure site", myUserPassAuthenticator, userName ->
complete("The user is '" + userName + "'")
)
).seal(system(), materializer());
// tests:
testRoute(route).run(HttpRequest.GET("/secured"))
.assertStatusCode(StatusCodes.UNAUTHORIZED)
.assertEntity("The resource requires authentication, which was not supplied with the request")
.assertHeaderExists("WWW-Authenticate", "Basic realm=\"secure site\"");
final HttpCredentials validCredentials =
BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd");
testRoute(route).run(HttpRequest.GET("/secured").addCredentials(validCredentials))
.assertEntity("The user is 'John'");
final HttpCredentials invalidCredentials =
BasicHttpCredentials.createBasicHttpCredentials("Peter", "pan");
testRoute(route).run(HttpRequest.GET("/secured").addCredentials(invalidCredentials))
.assertStatusCode(StatusCodes.UNAUTHORIZED)
.assertEntity("The supplied authentication is invalid")
.assertHeaderExists("WWW-Authenticate", "Basic realm=\"secure site\"");
//#authenticateBasicAsync
}
@Test
public void testAuthenticateOrRejectWithChallenge() {
//#authenticateOrRejectWithChallenge
final HttpChallenge challenge = HttpChallenge.create("MyAuth", "MyRealm");
// your custom authentication logic:
final Function<HttpCredentials, Boolean> auth = credentials -> true;
final Function<Optional<HttpCredentials>, CompletionStage<Either<HttpChallenge, String>>> myUserPassAuthenticator =
opt -> {
if (opt.isPresent() && auth.apply(opt.get())) {
return CompletableFuture.completedFuture(Right.apply("some-user-name-from-creds"));
} else {
return CompletableFuture.completedFuture(Left.apply(challenge));
}
};
final Route route = path("secured", () ->
authenticateOrRejectWithChallenge(myUserPassAuthenticator, userName ->
complete("Authenticated!")
)
).seal(system(), materializer());
// tests:
testRoute(route).run(HttpRequest.GET("/secured"))
.assertStatusCode(StatusCodes.UNAUTHORIZED)
.assertEntity("The resource requires authentication, which was not supplied with the request")
.assertHeaderExists("WWW-Authenticate", "MyAuth realm=\"MyRealm\"");
final HttpCredentials validCredentials =
BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd");
testRoute(route).run(HttpRequest.GET("/secured").addCredentials(validCredentials))
.assertStatusCode(StatusCodes.OK)
.assertEntity("Authenticated!");
//#authenticateOrRejectWithChallenge
}
@Test
public void testAuthorize() {
//#authorize
class User {
private final String name;
public User(String name) {
this.name = name;
}
public String getName() {
return name;
}
}
// authenticate the user:
final Function<Optional<ProvidedCredentials>, Optional<User>> myUserPassAuthenticator =
opt -> {
if (opt.isPresent()) {
return Optional.of(new User(opt.get().identifier()));
} else {
return Optional.empty();
}
};
// check if user is authorized to perform admin actions:
final Set<String> admins = new HashSet<>();
admins.add("Peter");
final Function<User, Boolean> hasAdminPermissions = user -> admins.contains(user.getName());
final Route route = authenticateBasic("secure site", myUserPassAuthenticator, user ->
path("peters-lair", () ->
authorize(() -> hasAdminPermissions.apply(user), () ->
complete("'" + user.getName() +"' visited Peter's lair")
)
)
).seal(system(), materializer());
// tests:
final HttpCredentials johnsCred =
BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd");
testRoute(route).run(HttpRequest.GET("/peters-lair").addCredentials(johnsCred))
.assertStatusCode(StatusCodes.FORBIDDEN)
.assertEntity("The supplied authentication is not authorized to access this resource");
final HttpCredentials petersCred =
BasicHttpCredentials.createBasicHttpCredentials("Peter", "pan");
testRoute(route).run(HttpRequest.GET("/peters-lair").addCredentials(petersCred))
.assertEntity("'Peter' visited Peter's lair");
//#authorize
}
@Test
public void testAuthorizeAsync() {
//#authorizeAsync
class User {
private final String name;
public User(String name) {
this.name = name;
}
public String getName() {
return name;
}
}
// authenticate the user:
final Function<Optional<ProvidedCredentials>, Optional<User>> myUserPassAuthenticator =
opt -> {
if (opt.isPresent()) {
return Optional.of(new User(opt.get().identifier()));
} else {
return Optional.empty();
}
};
// check if user is authorized to perform admin actions;
// this could potentially be a long operation so it returns a CompletionStage
final Set<String> admins = new HashSet<>();
admins.add("Peter");
final Set<String> synchronizedAdmins = Collections.synchronizedSet(admins);
final Function<User, CompletionStage<Object>> hasAdminPermissions =
user -> CompletableFuture.completedFuture(synchronizedAdmins.contains(user.getName()));
final Route route = authenticateBasic("secure site", myUserPassAuthenticator, user ->
path("peters-lair", () ->
authorizeAsync(() -> hasAdminPermissions.apply(user), () ->
complete("'" + user.getName() +"' visited Peter's lair")
)
)
).seal(system(), materializer());
// tests:
final HttpCredentials johnsCred =
BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd");
testRoute(route).run(HttpRequest.GET("/peters-lair").addCredentials(johnsCred))
.assertStatusCode(StatusCodes.FORBIDDEN)
.assertEntity("The supplied authentication is not authorized to access this resource");
final HttpCredentials petersCred =
BasicHttpCredentials.createBasicHttpCredentials("Peter", "pan");
testRoute(route).run(HttpRequest.GET("/peters-lair").addCredentials(petersCred))
.assertEntity("'Peter' visited Peter's lair");
//#authorizeAsync
}
@Test
public void testExtractCredentials() {
//#extractCredentials
final Route route = extractCredentials(optCreds -> {
if (optCreds.isPresent()) {
return complete("Credentials: " + optCreds.get());
} else {
return complete("No credentials");
}
});
// tests:
final HttpCredentials johnsCred =
BasicHttpCredentials.createBasicHttpCredentials("John", "p4ssw0rd");
testRoute(route).run(HttpRequest.GET("/").addCredentials(johnsCred))
.assertEntity("Credentials: Basic Sm9objpwNHNzdzByZA==");
testRoute(route).run(HttpRequest.GET("/"))
.assertEntity("No credentials");
//#extractCredentials
}
}

View file

@ -0,0 +1,180 @@
/*
* Copyright (C) 2016-2016 Lightbend Inc. <http://www.lightbend.com>
*/
package docs.http.javadsl.server.directives;
import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.http.javadsl.ConnectHttp;
import akka.http.javadsl.Http;
import akka.http.javadsl.ServerBinding;
import akka.http.javadsl.model.HttpRequest;
import akka.http.javadsl.model.HttpResponse;
import akka.http.javadsl.model.StatusCode;
import akka.http.javadsl.model.StatusCodes;
import akka.http.javadsl.server.AllDirectives;
import akka.http.javadsl.server.Route;
import akka.http.scaladsl.TestUtils;
import akka.stream.ActorMaterializer;
import akka.stream.javadsl.Flow;
import akka.testkit.TestKit;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;
import org.junit.After;
import org.junit.Ignore;
import org.junit.Test;
import scala.Tuple2;
import scala.Tuple3;
import scala.concurrent.duration.Duration;
import scala.runtime.BoxedUnit;
import java.net.InetSocketAddress;
import java.util.Optional;
import java.util.concurrent.*;
public class TimeoutDirectivesExamplesTest extends AllDirectives {
//#testSetup
private final Config testConf = ConfigFactory.parseString("akka.loggers = [\"akka.testkit.TestEventListener\"]\n"
+ "akka.loglevel = ERROR\n"
+ "akka.stdout-loglevel = ERROR\n"
+ "windows-connection-abort-workaround-enabled = auto\n"
+ "akka.log-dead-letters = OFF\n"
+ "akka.http.server.request-timeout = 1000s");
// large timeout - 1000s (please note - setting to infinite will disable Timeout-Access header
// and withRequestTimeout will not work)
private final ActorSystem system = ActorSystem.create("TimeoutDirectivesExamplesTest", testConf);
private final ActorMaterializer materializer = ActorMaterializer.create(system);
private final Http http = Http.get(system);
private CompletionStage<Void> shutdown(CompletionStage<ServerBinding> binding) {
return binding.thenAccept(b -> {
System.out.println(String.format("Unbinding from %s", b.localAddress()));
final CompletionStage<BoxedUnit> unbound = b.unbind();
try {
unbound.toCompletableFuture().get(3, TimeUnit.SECONDS); // block...
} catch (TimeoutException | InterruptedException | ExecutionException e) {
throw new RuntimeException(e);
}
});
}
private Optional<HttpResponse> runRoute(ActorSystem system, ActorMaterializer materializer, Route route, String routePath) {
final Tuple3<InetSocketAddress, String, Object> inetaddrHostAndPort = TestUtils.temporaryServerHostnameAndPort("127.0.0.1");
Tuple2<String, Integer> hostAndPort = new Tuple2<>(
inetaddrHostAndPort._2(),
(Integer) inetaddrHostAndPort._3()
);
final Flow<HttpRequest, HttpResponse, NotUsed> routeFlow = route.flow(system, materializer);
final CompletionStage<ServerBinding> binding = http.bindAndHandle(routeFlow, ConnectHttp.toHost(hostAndPort._1(), hostAndPort._2()), materializer);
final CompletionStage<HttpResponse> responseCompletionStage = http.singleRequest(HttpRequest.create("http://" + hostAndPort._1() + ":" + hostAndPort._2() + "/" + routePath), materializer);
CompletableFuture<HttpResponse> responseFuture = responseCompletionStage.toCompletableFuture();
Optional<HttpResponse> responseOptional;
try {
responseOptional = Optional.of(responseFuture.get(3, TimeUnit.SECONDS)); // patienceConfig
} catch (Exception e) {
responseOptional = Optional.empty();
}
shutdown(binding);
return responseOptional;
}
//#
@After
public void shutDown() {
TestKit.shutdownActorSystem(system, Duration.create(1, TimeUnit.SECONDS), false);
}
@Test
public void testRequestTimeoutIsConfigurable() {
//#withRequestTimeout-plain
final Duration timeout = Duration.create(1, TimeUnit.SECONDS);
CompletionStage<String> slowFuture = new CompletableFuture<>();
final Route route = path("timeout", () ->
withRequestTimeout(timeout, () -> {
return completeOKWithFutureString(slowFuture); // very slow
})
);
// test:
StatusCode statusCode = runRoute(system, materializer, route, "timeout").get().status();
assert (StatusCodes.SERVICE_UNAVAILABLE.equals(statusCode));
//#
}
@Test
public void testRequestWithoutTimeoutCancelsTimeout() {
//#withoutRequestTimeout-1
CompletionStage<String> slowFuture = new CompletableFuture<>();
final Route route = path("timeout", () ->
withoutRequestTimeout(() -> {
return completeOKWithFutureString(slowFuture); // very slow
})
);
// test:
Boolean receivedReply = runRoute(system, materializer, route, "timeout").isPresent();
assert (!receivedReply); // timed-out
//#
}
@Test
public void testRequestTimeoutAllowsCustomResponse() {
//#withRequestTimeout-with-handler
final Duration timeout = Duration.create(1, TimeUnit.MILLISECONDS);
CompletionStage<String> slowFuture = new CompletableFuture<>();
HttpResponse enhanceYourCalmResponse = HttpResponse.create()
.withStatus(StatusCodes.ENHANCE_YOUR_CALM)
.withEntity("Unable to serve response within time limit, please enhance your calm.");
final Route route = path("timeout", () ->
withRequestTimeout(timeout, (request) -> enhanceYourCalmResponse, () -> {
return completeOKWithFutureString(slowFuture); // very slow
})
);
// test:
StatusCode statusCode = runRoute(system, materializer, route, "timeout").get().status();
assert (StatusCodes.ENHANCE_YOUR_CALM.equals(statusCode));
//#
}
// make it compile only to avoid flaking in slow builds
@Ignore("Compile only test")
@Test
public void testRequestTimeoutCustomResponseCanBeAddedSeparately() {
//#withRequestTimeoutResponse
final Duration timeout = Duration.create(100, TimeUnit.MILLISECONDS);
CompletionStage<String> slowFuture = new CompletableFuture<>();
HttpResponse enhanceYourCalmResponse = HttpResponse.create()
.withStatus(StatusCodes.ENHANCE_YOUR_CALM)
.withEntity("Unable to serve response within time limit, please enhance your calm.");
final Route route = path("timeout", () ->
withRequestTimeout(timeout, () ->
// racy! for a very short timeout like 1.milli you can still get 503
withRequestTimeoutResponse((request) -> enhanceYourCalmResponse, () -> {
return completeOKWithFutureString(slowFuture); // very slow
}))
);
// test:
StatusCode statusCode = runRoute(system, materializer, route, "timeout").get().status();
assert (StatusCodes.ENHANCE_YOUR_CALM.equals(statusCode));
//#
}
}

View file

@ -165,9 +165,12 @@ public class BidiFlowDocTest extends AbstractJavaTest {
@Override
public void onUpstreamFinish() throws Exception {
// either we are done
if (stash.isEmpty()) completeStage();
// or we still have bytes to emit
// wait with completion and let run() complete when the
// rest of the stash has been sent downstream
else if (isAvailable(out)) run();
}
});

View file

@ -50,25 +50,24 @@ public class GraphDSLDocTest extends AbstractJavaTest {
//#simple-graph-dsl
final Source<Integer, NotUsed> in = Source.from(Arrays.asList(1, 2, 3, 4, 5));
final Sink<List<String>, CompletionStage<List<String>>> sink = Sink.head();
final Sink<List<Integer>, CompletionStage<List<Integer>>> sink2 = Sink.head();
final Flow<Integer, Integer, NotUsed> f1 = Flow.of(Integer.class).map(elem -> elem + 10);
final Flow<Integer, Integer, NotUsed> f2 = Flow.of(Integer.class).map(elem -> elem + 20);
final Flow<Integer, String, NotUsed> f3 = Flow.of(Integer.class).map(elem -> elem.toString());
final Flow<Integer, Integer, NotUsed> f4 = Flow.of(Integer.class).map(elem -> elem + 30);
final RunnableGraph<CompletionStage<List<String>>> result =
RunnableGraph.<CompletionStage<List<String>>>fromGraph(
GraphDSL
.create(
sink,
(builder, out) -> {
RunnableGraph.fromGraph(
GraphDSL // create() function binds sink, out which is sink's out port and builder DSL
.create( // we need to reference out's shape in the builder DSL below (in to() function)
sink, // previously created sink (Sink)
(builder, out) -> { // variables: builder (GraphDSL.Builder) and out (SinkShape)
final UniformFanOutShape<Integer, Integer> bcast = builder.add(Broadcast.create(2));
final UniformFanInShape<Integer, Integer> merge = builder.add(Merge.create(2));
final Outlet<Integer> source = builder.add(in).out();
builder.from(source).via(builder.add(f1))
.viaFanOut(bcast).via(builder.add(f2)).viaFanIn(merge)
.via(builder.add(f3.grouped(1000))).to(out);
.via(builder.add(f3.grouped(1000))).to(out); // to() expects a SinkShape
builder.from(bcast).via(builder.add(f4)).toFanIn(merge);
return ClosedShape.getInstance();
}));

View file

@ -0,0 +1,140 @@
package docs.stream;
import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.japi.Pair;
import akka.stream.*;
import akka.stream.javadsl.Keep;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import akka.testkit.JavaTestKit;
import docs.AbstractJavaTest;
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import scala.concurrent.duration.FiniteDuration;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.TimeUnit;
import static org.junit.Assert.assertEquals;
public class KillSwitchDocTest extends AbstractJavaTest {
static ActorSystem system;
static Materializer mat;
@BeforeClass
public static void setup() {
system = ActorSystem.create("KillSwitchDocTest");
mat = ActorMaterializer.create(system);
}
@AfterClass
public static void tearDown() {
JavaTestKit.shutdownActorSystem(system);
system = null;
mat = null;
}
@Test
public void compileOnlyTest() {
}
public void uniqueKillSwitchShutdownExample() throws Exception {
//#unique-shutdown
final Source<Integer, NotUsed> countingSrc = Source.from(new ArrayList<>(Arrays.asList(1, 2, 3, 4)))
.delay(FiniteDuration.apply(1, TimeUnit.SECONDS), DelayOverflowStrategy.backpressure());
final Sink<Integer, CompletionStage<Integer>> lastSnk = Sink.last();
final Pair<UniqueKillSwitch, CompletionStage<Integer>> stream = countingSrc
.viaMat(KillSwitches.single(), Keep.right())
.toMat(lastSnk, Keep.both()).run(mat);
final UniqueKillSwitch killSwitch = stream.first();
final CompletionStage<Integer> completionStage = stream.second();
doSomethingElse();
killSwitch.shutdown();
final int finalCount = completionStage.toCompletableFuture().get(1, TimeUnit.SECONDS);
assertEquals(2, finalCount);
//#unique-shutdown
}
public static void uniqueKillSwitchAbortExample() throws Exception {
//#unique-abort
final Source<Integer, NotUsed> countingSrc = Source.from(new ArrayList<>(Arrays.asList(1, 2, 3, 4)))
.delay(FiniteDuration.apply(1, TimeUnit.SECONDS), DelayOverflowStrategy.backpressure());
final Sink<Integer, CompletionStage<Integer>> lastSnk = Sink.last();
final Pair<UniqueKillSwitch, CompletionStage<Integer>> stream = countingSrc
.viaMat(KillSwitches.single(), Keep.right())
.toMat(lastSnk, Keep.both()).run(mat);
final UniqueKillSwitch killSwitch = stream.first();
final CompletionStage<Integer> completionStage = stream.second();
final Exception error = new Exception("boom!");
killSwitch.abort(error);
final int result = completionStage.toCompletableFuture().exceptionally(e -> -1).get(1, TimeUnit.SECONDS);
assertEquals(-1, result);
//#unique-abort
}
public void sharedKillSwitchShutdownExample() throws Exception {
//#shared-shutdown
final Source<Integer, NotUsed> countingSrc = Source.from(new ArrayList<>(Arrays.asList(1, 2, 3, 4)))
.delay(FiniteDuration.apply(1, TimeUnit.SECONDS), DelayOverflowStrategy.backpressure());
final Sink<Integer, CompletionStage<Integer>> lastSnk = Sink.last();
final SharedKillSwitch killSwitch = KillSwitches.shared("my-kill-switch");
final CompletionStage<Integer> completionStage = countingSrc
.viaMat(killSwitch.flow(), Keep.right())
.toMat(lastSnk, Keep.right()).run(mat);
final CompletionStage<Integer> completionStageDelayed = countingSrc
.delay(FiniteDuration.apply(1, TimeUnit.SECONDS), DelayOverflowStrategy.backpressure())
.viaMat(killSwitch.flow(), Keep.right())
.toMat(lastSnk, Keep.right()).run(mat);
doSomethingElse();
killSwitch.shutdown();
final int finalCount = completionStage.toCompletableFuture().get(1, TimeUnit.SECONDS);
final int finalCountDelayed = completionStageDelayed.toCompletableFuture().get(1, TimeUnit.SECONDS);
assertEquals(2, finalCount);
assertEquals(1, finalCountDelayed);
//#shared-shutdown
}
public static void sharedKillSwitchAbortExample() throws Exception {
//#shared-abort
final Source<Integer, NotUsed> countingSrc = Source.from(new ArrayList<>(Arrays.asList(1, 2, 3, 4)))
.delay(FiniteDuration.apply(1, TimeUnit.SECONDS), DelayOverflowStrategy.backpressure());
final Sink<Integer, CompletionStage<Integer>> lastSnk = Sink.last();
final SharedKillSwitch killSwitch = KillSwitches.shared("my-kill-switch");
final CompletionStage<Integer> completionStage1 = countingSrc
.viaMat(killSwitch.flow(), Keep.right())
.toMat(lastSnk, Keep.right()).run(mat);
final CompletionStage<Integer> completionStage2 = countingSrc
.viaMat(killSwitch.flow(), Keep.right())
.toMat(lastSnk, Keep.right()).run(mat);
final Exception error = new Exception("boom!");
killSwitch.abort(error);
final int result1 = completionStage1.toCompletableFuture().exceptionally(e -> -1).get(1, TimeUnit.SECONDS);
final int result2 = completionStage2.toCompletableFuture().exceptionally(e -> -1).get(1, TimeUnit.SECONDS);
assertEquals(-1, result1);
assertEquals(-1, result2);
//#shared-abort
}
private static void doSomethingElse(){
}
}

View file

@ -3,23 +3,27 @@
*/
package docs.stream;
//#stream-imports
import akka.stream.*;
import akka.stream.javadsl.*;
//#stream-imports
//#other-imports
import akka.Done;
import akka.NotUsed;
import akka.actor.ActorSystem;
import akka.util.ByteString;
import java.nio.file.Paths;
import java.math.BigInteger;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import org.junit.*;
import akka.Done;
import akka.NotUsed;
import akka.actor.ActorSystem;
//#imports
import akka.stream.*;
import akka.stream.javadsl.*;
//#imports
import akka.util.ByteString;
import scala.concurrent.duration.Duration;
//#other-imports
import org.junit.*;
/**
* This class is not meant to be run as a test in the test suite, but it

View file

@ -119,7 +119,7 @@ If, however, your marshaller also needs to set things like the response status c
or any headers then a ``ToEntityMarshaller[T]`` won't work. You'll need to fall down to providing a
``ToResponseMarshaller[T]`` or a ``ToRequestMarshaller[T]`` directly.
For writing you own marshallers you won't have to "manually" implement the ``Marshaller`` trait directly.
For writing your own marshallers you won't have to "manually" implement the ``Marshaller`` trait directly.
Rather, it should be possible to use one of the convenience construction helpers defined on the ``Marshaller``
companion:

View file

@ -77,7 +77,7 @@ Custom Unmarshallers
Akka HTTP gives you a few convenience tools for constructing unmarshallers for your own types.
Usually you won't have to "manually" implement the ``Unmarshaller`` trait directly.
Rather, it should be possible to use one of the convenience construction helpers defined on the ``Marshaller``
Rather, it should be possible to use one of the convenience construction helpers defined on the ``Unmarshaller``
companion:
TODO rewrite sample for Java

View file

@ -0,0 +1,121 @@
.. _implications-of-streaming-http-entities-java:
Implications of the streaming nature of Request/Response Entities
-----------------------------------------------------------------
Akka HTTP is streaming *all the way through*, which means that the back-pressure mechanisms enabled by Akka Streams
are exposed through all layers: from the TCP layer, through the HTTP server, all the way up to the user-facing ``HttpRequest``
and ``HttpResponse`` and their ``HttpEntity`` APIs.
This has surprising implications if you are used to non-streaming / non-reactive HTTP clients.
Specifically it means that: "*lack of consumption of the HTTP Entity is signaled as back-pressure to the other
side of the connection*". This is a feature, as it allows one to consume the entity only as it is needed and to
back-pressure servers/clients instead of letting them overwhelm our application, possibly causing unnecessary buffering of the entity in memory.
.. warning::
Consuming (or discarding) the Entity of a request is mandatory!
If it is *accidentally* left neither consumed nor discarded, Akka HTTP will
assume the incoming data should remain back-pressured, and will stall the incoming data via TCP back-pressure mechanisms.
Client-Side handling of streaming HTTP Entities
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Consuming the HTTP Response Entity (Client)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The most common use-case, of course, is consuming the response entity, which can be done by
running the underlying ``dataBytes`` Source (or, on the server-side, by using directives such as ``entity`` which consume it for you).
It is encouraged to use various streaming techniques to utilise the underlying infrastructure to its fullest,
for example by framing the incoming chunks, parsing them line-by-line and then connecting the flow into another
destination Sink, such as a file or another Akka Streams connector:
.. includecode:: ../code/docs/http/javadsl/HttpClientExampleDocTest.java#manual-entity-consume-example-1
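For illustration, a minimal sketch of this pattern is shown below. It is not taken from the bundled snippet above but is an
illustrative example only: it assumes an already obtained ``HttpResponse`` named ``response``, a ``Materializer`` named
``materializer`` in scope, and a made-up target file name.

.. code-block:: java

  import akka.stream.IOResult;
  import akka.stream.javadsl.FileIO;
  import akka.stream.javadsl.Framing;
  import akka.util.ByteString;
  import java.nio.file.Paths;
  import java.util.concurrent.CompletionStage;

  // frame the streamed bytes into lines and pipe them into a file sink;
  // the entity is consumed only as fast as the file sink can write (back-pressure)
  final CompletionStage<IOResult> written =
    response.entity().getDataBytes()
      .via(Framing.delimiter(ByteString.fromString("\n"), 1024))          // one ByteString per line
      .map(line -> ByteString.fromString(line.utf8String() + "\n"))       // re-append the delimiter
      .runWith(FileIO.toPath(Paths.get("response-lines.txt")), materializer);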
However, sometimes the need may arise to consume the entire entity as a ``Strict`` entity (which means that it is
completely loaded into memory). Akka HTTP provides a special ``toStrict(timeout, materializer)`` method which can be used to
eagerly consume the entity and make it available in memory:
.. includecode:: ../code/docs/http/javadsl/HttpClientExampleDocTest.java#manual-entity-consume-example-2
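As a rough sketch (again assuming the same ``response`` and ``materializer`` as above, and with an arbitrarily chosen
timeout value), eagerly loading the entity could look like this:

.. code-block:: java

  import akka.http.javadsl.model.HttpEntity;
  import java.util.concurrent.CompletionStage;

  // load the complete entity into memory within the given timeout (in milliseconds)
  final CompletionStage<HttpEntity.Strict> strictEntity =
    response.entity().toStrict(3000, materializer);
  // once completed, the bytes are available without any further streaming
  strictEntity.thenAccept(strict -> System.out.println(strict.getData().utf8String()));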
Discarding the HTTP Response Entity (Client)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sometimes when calling HTTP services we do not care about their response payload (e.g. all we care about is the response code),
yet as explained above the entity still has to be consumed in some way, otherwise we'll be exerting back-pressure on the
underlying TCP connection.
The ``discardEntityBytes`` convenience method serves the purpose of easily discarding the entity if it has no purpose for us.
It does so by piping the incoming bytes directly into a ``Sink.ignore``.
The two snippets below are equivalent, and work the same way on the server-side for incoming HTTP Requests:
.. includecode:: ../code/docs/http/javadsl/HttpClientExampleDocTest.java#manual-entity-discard-example-1
Or the equivalent low-level code achieving the same result:
.. includecode:: ../code/docs/http/javadsl/HttpClientExampleDocTest.java#manual-entity-discard-example-2
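A bare-bones sketch of the convenience method (assuming the same ``response`` and ``materializer`` as above) might
simply be:

.. code-block:: java

  // pipe the remaining entity bytes into Sink.ignore so the connection
  // is neither back-pressured nor buffering data we will never read
  response.discardEntityBytes(materializer);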
Server-Side handling of streaming HTTP Entities
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Similarly to the client-side, HTTP Entities are directly linked to Streams which are fed by the underlying
TCP connection. Thus, if request entities remain unconsumed, the server will back-pressure the connection, expecting
that the user-code will eventually decide what to do with the incoming data.
Note that some directives force an implicit ``toStrict`` operation, such as ``entity(exampleUnmarshaller, example -> {})`` and similar ones.
Consuming the HTTP Request Entity (Server)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The simplest way of consuming the incoming request entity is to simply transform it into an actual domain object,
for example by using the :ref:`-entity-java-` directive:
.. includecode:: ../code/docs/http/javadsl/server/HttpServerExampleDocTest.java#consume-entity-directive
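A tiny sketch of this directive, borrowing the ``Unmarshaller.entityToString()`` unmarshaller used elsewhere in these
examples (to be placed inside a class extending ``AllDirectives`` or a similar route-building context), could be:

.. code-block:: java

  import akka.http.javadsl.server.Route;
  import akka.http.javadsl.server.Unmarshaller;

  // the entity directive consumes the request entity and hands the
  // unmarshalled value to the inner route
  final Route route =
    entity(Unmarshaller.entityToString(), body ->
      complete("Received " + body.length() + " characters")
    );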
Of course you can access the raw ``dataBytes`` as well and run the underlying stream, for example piping it into a
FileIO Sink that signals completion via a ``CompletionStage<IOResult>`` once all the data has been written into the file:
.. includecode:: ../code/docs/http/javadsl/server/HttpServerExampleDocTest.java#consume-raw-dataBytes
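As an illustrative sketch, assuming an ``HttpRequest request`` and a ``Materializer materializer`` are already in scope (for example inside a low-level request handler), and with a made-up target path:
.. code-block:: java

   import akka.stream.IOResult;
   import akka.stream.javadsl.FileIO;
   import java.nio.file.Paths;
   import java.util.concurrent.CompletionStage;

   // stream the raw request entity bytes directly into a file;
   // the CompletionStage completes once all data has been written
   final CompletionStage<IOResult> ioResult =
       request.entity()
           .getDataBytes()
           .runWith(FileIO.toPath(Paths.get("/tmp/upload.bin")), materializer);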
Discarding the HTTP Request Entity (Server)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sometimes, depending on some validation (e.g. checking whether a given user is allowed to perform uploads or not),
you may want to decide to discard the uploaded entity.
Please note that discarding means that the entire upload will still proceed, even though you are not interested in the data
being streamed to the server. This may be useful if you are simply not interested in the given entity, but
do not want to abort the entire connection (which we will demonstrate as well), since there may be more requests
still pending on the same connection.
In order to discard the data bytes explicitly you can invoke the ``discardEntityBytes`` method of the incoming ``HttpRequest``:
.. includecode:: ../code/docs/http/javadsl/server/HttpServerExampleDocTest.java#discard-discardEntityBytes
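In its simplest form, and again assuming ``request`` and ``materializer`` are in scope, this is roughly:
.. code-block:: java

   // drain the incoming entity bytes, keeping the connection usable for further requests
   request.discardEntityBytes(materializer);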
A related concept is *cancelling* the incoming ``entity.getDataBytes()`` stream, which results in Akka HTTP
*abruptly closing the connection from the client*. This may be useful when you detect that the given user should not be allowed to make any
uploads at all, and you want to drop the connection (instead of reading and ignoring the incoming data).
This can be done by attaching the incoming ``entity.getDataBytes()`` to a ``Sink.cancelled``, which will cancel
the entity stream; this in turn will cause the underlying connection to be shut down by the server,
effectively hard-aborting the incoming request:
.. includecode:: ../code/docs/http/javadsl/server/HttpServerExampleDocTest.java#discard-close-connections
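A minimal sketch of this, once more assuming ``request`` and ``materializer`` are in scope:
.. code-block:: java

   import akka.stream.javadsl.Sink;

   // cancelling the entity stream causes the server to abruptly close the connection
   request.entity().getDataBytes().runWith(Sink.cancelled(), materializer);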
Closing connections is also explained in depth in the :ref:`http-closing-connection-low-level-java` section of the docs.
Pending: Automatic discarding of unused entities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Under certain conditions it is possible to detect that an entity is very unlikely to be used by the user for a given request,
and to issue warnings or discard the entity automatically. This advanced feature has not been implemented yet; see the
note and issues below for further discussion and ideas.
.. note::
   An advanced feature codenamed "auto draining" has been discussed and proposed for Akka HTTP, and we're hoping
   to implement it or help the community implement it.
   You can read more about it in `issue #18716 <https://github.com/akka/akka/issues/18716>`_
   as well as `issue #18540 <https://github.com/akka/akka/issues/18540>`_; as always, contributions are very welcome!

View file

@ -37,6 +37,7 @@ akka-http-jackson
routing-dsl/index
client-side/index
common/index
implications-of-streaming-http-entity
configuration
server-side-https-support

View file

@ -139,6 +139,7 @@ Directive Description
:ref:`-uploadedFile-java-` Streams one uploaded file from a multipart request to a file on disk
:ref:`-validate-java-` Checks a given condition before running its inner route
:ref:`-withoutRequestTimeout-java-` Disables :ref:`request timeouts <request-timeout-java>` for a given route.
:ref:`-withoutSizeLimit-java-` Skips request entity size check
:ref:`-withExecutionContext-java-` Runs its inner route with the given alternative ``ExecutionContext``
:ref:`-withMaterializer-java-` Runs its inner route with the given alternative ``Materializer``
:ref:`-withLog-java-` Runs its inner route with the given alternative ``LoggingAdapter``
@ -146,5 +147,6 @@ Directive Description
:ref:`-withRequestTimeout-java-` Configures the :ref:`request timeouts <request-timeout-java>` for a given route.
:ref:`-withRequestTimeoutResponse-java-` Prepares the ``HttpResponse`` that is emitted if a request timeout is triggered. ``RequestContext => RequestContext`` function
:ref:`-withSettings-java-` Runs its inner route with the given alternative ``RoutingSettings``
:ref:`-withSizeLimit-java-` Applies request entity size check
================================================ ============================================================================

View file

@ -16,4 +16,5 @@ which provides a nicer DSL for building rejection handlers.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#cancelRejection

View file

@ -18,4 +18,5 @@ which provides a nicer DSL for building rejection handlers.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#cancelRejections

View file

@ -13,4 +13,5 @@ See :ref:`ProvideDirectives-java` for an overview of similar directives.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extract

View file

@ -14,4 +14,5 @@ See :ref:`-extract-java-` to learn more about how extractions work.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractExecutionContext

View file

@ -15,4 +15,5 @@ See :ref:`-extract-java-` and :ref:`ProvideDirectives-java` for an overview of s
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractLog

View file

@ -13,4 +13,5 @@ See also :ref:`-withMaterializer-java-` to see how to customise the used materia
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractMaterializer

View file

@ -13,4 +13,5 @@ directives. See :ref:`Request Directives-java`.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractRequest

View file

@ -16,4 +16,5 @@ See also :ref:`-extractRequest-java-` if only interested in the :class:`HttpRequ
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractRequestContext

View file

@ -13,4 +13,5 @@ It is possible to override the settings for specific sub-routes by using the :re
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractRequestContext

View file

@ -15,4 +15,5 @@ Use ``mapUnmatchedPath`` to change the value of the unmatched path.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractUnmatchedPath

View file

@ -12,4 +12,5 @@ targeted access to parts of the URI.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#extractUri

View file

@ -12,4 +12,5 @@ with any other route. Usually, the returned route wraps the original one with cu
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapInnerRoute

View file

@ -16,4 +16,5 @@ See :ref:`Response Transforming Directives-java` for similar directives.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapRejections

View file

@ -16,4 +16,5 @@ See :ref:`Request Transforming Directives-java` for an overview of similar direc
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapRequest

View file

@ -15,4 +15,5 @@ See :ref:`Request Transforming Directives-java` for an overview of similar direc
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapRequestContext

View file

@ -14,8 +14,10 @@ See also :ref:`-mapResponseHeaders-java-` or :ref:`-mapResponseEntity-java-` for
Example: Override status
------------------------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapResponse
Example: Default to empty JSON response on errors
-------------------------------------------------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapResponse-advanced

View file

@ -13,4 +13,5 @@ See :ref:`Response Transforming Directives-java` for similar directives.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapResponseEntity

View file

@ -14,4 +14,5 @@ See :ref:`Response Transforming Directives-java` for similar directives.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapResponseHeaders

View file

@ -14,4 +14,5 @@ See :ref:`Result Transformation Directives-java` for similar directives.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapRouteResult

View file

@ -17,4 +17,5 @@ See :ref:`Result Transformation Directives-java` for similar directives.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapRouteResultFuture

View file

@ -17,4 +17,6 @@ See :ref:`Result Transformation Directives-java` for similar directives.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapRouteResultPF

View file

@ -16,4 +16,5 @@ See :ref:`Result Transformation Directives-java` for similar directives.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapRouteResultWith

View file

@ -17,4 +17,5 @@ See :ref:`Result Transformation Directives-java` for similar directives.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapRouteResultWithPF

View file

@ -12,4 +12,5 @@ See also :ref:`-withSettings-java-` or :ref:`-extractSettings-java-`.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapSettings

View file

@ -14,4 +14,5 @@ Use ``extractUnmatchedPath`` for extracting the current value of the unmatched p
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#mapUnmatchedPath

View file

@ -11,4 +11,5 @@ It is usually used as a "neutral element" when combining directives generically.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#pass

View file

@ -13,4 +13,5 @@ See :ref:`ProvideDirectives-java` for an overview of similar directives.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#provide

View file

@ -17,4 +17,5 @@ rejections.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#recoverRejections

View file

@ -20,4 +20,5 @@ See :ref:`-recoverRejections-java-` (the synchronous equivalent of this directiv
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#recoverRejectionsWith

View file

@ -14,4 +14,5 @@ or used by directives which internally extract the materializer without sufracin
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#withExecutionContext

View file

@ -14,4 +14,5 @@ or used by directives which internally extract the materializer without surfacin
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#withLog

View file

@ -14,4 +14,5 @@ or used by directives which internally extract the materializer without sufracin
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#withMaterializer

View file

@ -13,4 +13,6 @@ or used by directives which internally extract the materializer without sufracin
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/BasicDirectivesExamplesTest.java#withSettings

View file

@ -10,4 +10,5 @@ Decompresses the incoming request if it is ``gzip`` or ``deflate`` compressed. U
Example
-------
..TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/CodingDirectivesExamplesTest.java#decodeRequest

View file

@ -10,4 +10,5 @@ Decodes the incoming request if it is encoded with one of the given encoders. If
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/CodingDirectivesExamplesTest.java#decodeRequestWith

View file

@ -14,6 +14,7 @@ If the ``Accept-Encoding`` header is missing or empty or specifies an encoding o
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/CodingDirectivesExamplesTest.java#encodeResponse
.. _RFC7231: http://tools.ietf.org/html/rfc7231#section-5.3.4

View file

@ -17,6 +17,7 @@ response encoding is used. Otherwise the request is rejected.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/CodingDirectivesExamplesTest.java#encodeResponseWith
.. _RFC7231: http://tools.ietf.org/html/rfc7231#section-5.3.4

View file

@ -10,4 +10,5 @@ Passes the request to the inner route if the request accepts the argument encodi
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/CodingDirectivesExamplesTest.java#responseEncodingAccepted

View file

@ -14,4 +14,5 @@ See :ref:`exception-handling-java` for general information about options for han
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/ExecutionDirectivesExamplesTest.java#handleExceptions

View file

@ -13,4 +13,5 @@ See :ref:`rejections-java` for general information about options for handling re
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/ExecutionDirectivesExamplesTest.java#handleRejections

View file

@ -19,4 +19,5 @@ For more details refer to :ref:`-getFromBrowseableDirectory-java-`.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/FileAndResourceDirectivesExamplesTest.java#getFromBrowseableDirectories

View file

@ -19,7 +19,8 @@ For more details refer to :ref:`-getFromBrowseableDirectory-java-`.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/FileAndResourceDirectivesExamplesTest.java#getFromBrowseableDirectory
Default file listing page example

View file

@ -27,4 +27,5 @@ Note that it's not required to wrap this directive with ``get`` as this directiv
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/FileAndResourceDirectivesExamplesTest.java#getFromDirectory

View file

@ -27,4 +27,5 @@ Note that it's not required to wrap this directive with ``get`` as this directiv
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/FileAndResourceDirectivesExamplesTest.java#getFromFile

View file

@ -15,4 +15,5 @@ Note that it's not required to wrap this directive with ``get`` as this directiv
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/FileAndResourceDirectivesExamplesTest.java#getFromResource

View file

@ -15,4 +15,5 @@ Note that it's not required to wrap this directive with ``get`` as this directiv
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/FileAndResourceDirectivesExamplesTest.java#getFromResourceDirectory

View file

@ -20,4 +20,5 @@ Note that it's not required to wrap this directive with ``get`` as this directiv
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode:: ../../../../code/docs/http/javadsl/server/directives/FileAndResourceDirectivesExamplesTest.java#listDirectoryContents

View file

@ -14,7 +14,8 @@ with the same name, the first one will be used and the subsequent ones ignored.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode2:: ../../../../code/docs/http/javadsl/server/directives/FileUploadDirectivesExamplesTest.java
:snippet: fileUpload
::

View file

@ -20,4 +20,5 @@ one will be used and the subsequent ones ignored.
Example
-------
TODO: Example snippets for JavaDSL are subject to community contributions! Help us complete the docs, read more about it here: `write example snippets for Akka HTTP Java DSL #20466 <https://github.com/akka/akka/issues/20466>`_.
.. includecode2:: ../../../../code/docs/http/javadsl/server/directives/FileUploadDirectivesExamplesTest.java
:snippet: uploadedFile

Some files were not shown because too many files have changed in this diff.