Remove Akka-HTTP sources from akka/akka, moving to akka/akka-http! (#21690)

This commit is contained in:
Konrad Malawski 2016-10-18 15:17:17 +02:00 committed by GitHub
parent 09a6d2ede1
commit a6a5556a8f
1155 changed files with 20 additions and 96517 deletions



@ -1,134 +0,0 @@
.. _clientSideHTTPS:
Client-Side HTTPS Support
=========================
Akka HTTP supports TLS encryption on the client-side as well as on the :ref:`server-side <serverSideHTTPS-scala>`.
.. warning::
   Akka HTTP 1.0 does not completely validate certificates when using HTTPS. Please do not treat HTTPS connections
   made with this version as secure. Requests are vulnerable to a Man-In-The-Middle attack via certificate substitution.
The central vehicle for configuring encryption is the ``HttpsConnectionContext``, which can be created using
the static method ``ConnectionContext.https`` which is defined like this:
.. includecode:: /../../akka-http-core/src/main/scala/akka/http/scaladsl/ConnectionContext.scala
:include: https-context-creation
In addition to the ``outgoingConnection``, ``newHostConnectionPool`` and ``cachedHostConnectionPool`` methods the
`akka.http.scaladsl.Http`_ extension also defines ``outgoingConnectionTls``, ``newHostConnectionPoolTls`` and
``cachedHostConnectionPoolTls``. These methods work identically to their counterparts without the ``-Tls`` suffix,
with the exception that all connections will always be encrypted.
The ``singleRequest`` and ``superPool`` methods determine the encryption state via the scheme of the incoming request,
i.e. requests to an "https" URI will be encrypted, while requests to an "http" URI won't.
The encryption configuration for all HTTPS connections, i.e. the ``HttpsContext`` is determined according to the
following logic:
1. If the optional ``httpsContext`` method parameter is defined it contains the configuration to be used (and thus
takes precedence over any potentially set default client-side ``HttpsContext``).
2. If the optional ``httpsContext`` method parameter is undefined (which is the default) the default client-side
``HttpsContext`` is used, which can be set via the ``setDefaultClientHttpsContext`` on the ``Http`` extension.
3. If no default client-side ``HttpsContext`` has been set via the ``setDefaultClientHttpsContext`` on the ``Http``
extension the default system configuration is used.
Usually, if the default system TLS configuration is not good enough for your application's needs, the process is
to configure a custom ``HttpsContext`` instance and set it via ``Http().setDefaultClientHttpsContext``.
Afterwards you simply use ``outgoingConnectionTls``, ``newHostConnectionPoolTls``, ``cachedHostConnectionPoolTls``,
``superPool`` or ``singleRequest`` without a specific ``httpsContext`` argument, which causes encrypted connections
to rely on the configured default client-side ``HttpsContext``.
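The three-step lookup above amounts to a simple precedence chain. A minimal illustration in plain Scala (using a stand-in ``Ctx`` type rather than the real ``HttpsContext``, so this is a model of the logic, not Akka HTTP's actual code):

```scala
// Illustrative model of the HttpsContext resolution order described above.
// `Ctx` is a hypothetical stand-in for the real HttpsConnectionContext type.
case class Ctx(name: String)

def effectiveHttpsContext(
    methodParameter: Option[Ctx],      // 1. explicit `httpsContext` argument
    defaultClientContext: Option[Ctx], // 2. set via setDefaultClientHttpsContext
    systemDefault: Ctx                 // 3. default system configuration
): Ctx =
  methodParameter.orElse(defaultClientContext).getOrElse(systemDefault)
```

The explicit method parameter always wins, the configured client-side default is consulted next, and the system default is the fallback.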
If no custom ``HttpsContext`` is defined the default context uses Java's default TLS settings. Customizing the
``HttpsContext`` can make the HTTPS client less secure. Understand what you are doing!
SSL-Config
----------
Akka HTTP heavily relies on `Lightbend SSL-Config`_ and delegates most configuration of SSL/TLS related options
to it; it is a library specialized in providing a secure-by-default ``SSLContext``
and related options.
Please refer to the `Lightbend SSL-Config`_ documentation for details on all available settings.
SSL-Config settings used by Akka HTTP (as well as Streaming TCP) are located under the ``akka.ssl-config`` namespace.
.. _Lightbend SSL-Config: http://typesafehub.github.io/ssl-config/
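As a quick orientation, adding a custom trust store via SSL-Config might look like the following ``application.conf`` fragment. The keys follow the SSL-Config quick start documentation; treat this as a sketch and verify the exact key names against the version you are using:

```
akka.ssl-config {
  # Trust an additional (e.g. self-signed) certificate besides the JDK defaults.
  trustManager = {
    stores = [
      { type = "PEM", path = "/path/to/exampleca.crt" }  # path is an example
    ]
  }
}
```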
Detailed configuration and workarounds
--------------------------------------
Akka HTTP relies on `Typesafe SSL-Config`_, a library maintained by Lightbend that makes configuring
things related to SSL/TLS much simpler than using the raw SSL APIs provided by the JDK. Please refer to its
documentation to learn more about it.
All configuration options available to this library may be set under the ``akka.ssl-config`` configuration for Akka HTTP applications.
.. note::
   When encountering problems connecting to HTTPS hosts we highly encourage reading up on the excellent ssl-config
   documentation. Especially the quick start sections about `adding certificates to the trust store`_ should prove
   very useful, for example to easily trust a self-signed certificate that applications might use in development mode.
.. warning::
   While it is possible to disable certain checks using the so called "loose" settings in SSL-Config, we **strongly recommend**
   instead attempting to solve these issues by properly configuring TLS, for example by adding trusted keys to the keystore.

   If however certain checks really need to be disabled because of misconfigured (or legacy) servers that your
   application has to speak to, instead of disabling the checks globally (i.e. in ``application.conf``) we suggest
   configuring the loose settings for *specific connections* that are known to need them disabled (and trusted for some other reason).
   The pattern of doing so is documented in the following sub-sections.
.. _adding certificates to the trust store: http://typesafehub.github.io/ssl-config/WSQuickStart.html#connecting-to-a-remote-server-over-https
Hostname verification
^^^^^^^^^^^^^^^^^^^^^
Hostname verification proves that the Akka HTTP client is actually communicating with the server it intended to
communicate with. Without this check a man-in-the-middle attack is possible. In the attack scenario, an alternative
certificate would be presented which was issued for another host name. Checking the host name in the certificate
against the host name the connection was opened against is therefore vital.
The default ``HttpsContext`` enables hostname verification. Akka HTTP relies on the `Typesafe SSL-Config`_ library
to implement this and other security options for SSL/TLS. Since Java 7 hostname verification is provided by the JDK
and used by Akka HTTP directly; on Java 6 the verification is implemented manually by ssl-config.
For further recommended reading we would like to highlight the `fixing hostname verification blog post`_ by Will Sargent.
.. _Typesafe SSL-Config: http://typesafehub.github.io/ssl-config
.. _fixing hostname verification blog post: https://tersesystems.com/2014/03/23/fixing-hostname-verification/
.. _akka.http.scaladsl.Http: @github@/akka-http-core/src/main/scala/akka/http/scaladsl/Http.scala
Server Name Indication (SNI)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
SNI is a TLS extension which aims to guard against man-in-the-middle attacks. It does so by having the client send the
name of the virtual domain it is expecting to talk to as part of the TLS handshake.
It is specified as part of `RFC 6066`_.
Disabling TLS security features, at your own risk
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. warning::
   It is highly discouraged to disable any of the security features of TLS, however we do acknowledge that workarounds
   may sometimes be needed.

   Before disabling any of the features one should consider if they may be solvable *within* the TLS world,
   for example by `trusting a certificate`_, or `configuring the trusted cipher suites`_ etc.
   If disabling features is indeed desired, we recommend doing so for *specific connections*,
   instead of globally configuring it via ``application.conf``.
The following shows an example of disabling SNI for a given connection:
.. includecode:: ../../code/docs/http/scaladsl/HttpsExamplesSpec.scala
:include: disable-sni-connection
The ``badSslConfig`` is a copy of the default ``AkkaSSLConfig`` with a slightly changed configuration to disable SNI.
This value can be cached and used for connections which should indeed not use this feature.
.. _RFC 6066: https://tools.ietf.org/html/rfc6066#page-6
.. _trusting a certificate: http://typesafehub.github.io/ssl-config/WSQuickStart.html
.. _configuring the trusted cipher suites: http://typesafehub.github.io/ssl-config/CipherSuites.html


@ -1,97 +0,0 @@
.. _connection-level-api:
Connection-Level Client-Side API
================================
The connection-level API is the lowest-level client-side API Akka HTTP provides. It gives you full control over when
HTTP connections are opened and closed and how requests are to be sent across which connection. As such it offers the
highest flexibility at the cost of providing the least convenience.
.. note::
   It is recommended to first read the :ref:`implications-of-streaming-http-entities` section,
   as it explains the underlying full-stack streaming concepts, which may be unexpected when coming
   from a background with non-"streaming first" HTTP Clients.
Opening HTTP Connections
------------------------
With the connection-level API you open a new HTTP connection to a target endpoint by materializing a ``Flow``
returned by the ``Http().outgoingConnection(...)`` method. Here is an example:
.. includecode:: ../../code/docs/http/scaladsl/HttpClientExampleSpec.scala
:include: outgoing-connection-example
Apart from the host name and port the ``Http().outgoingConnection(...)`` method also allows you to specify socket options
and a number of configuration settings for the connection.
Note that no connection is attempted until the returned flow is actually materialized! If the flow is materialized
several times then several independent connections will be opened (one per materialization).
If the connection attempt fails, for whatever reason, the materialized flow will be immediately terminated with a
respective exception.
Request-Response Cycle
----------------------
Once the connection flow has been materialized it is ready to consume ``HttpRequest`` instances from the source it is
attached to. Each request is sent across the connection and incoming responses dispatched to the downstream pipeline.
As always, back-pressure is adequately maintained across all parts of the
connection. This means that, if the downstream pipeline consuming the HTTP responses is slow, the request source will
eventually be slowed down in sending requests.
Any errors occurring on the underlying connection are surfaced as exceptions terminating the response stream (and
canceling the request source).
Note that, if the source produces subsequent requests before the prior responses have arrived, these requests will be
pipelined__ across the connection, which is something that is not supported by all HTTP servers.
Also, if the server closes the connection before responses to all requests have been received this will result in the
response stream being terminated with a truncation error.
__ http://en.wikipedia.org/wiki/HTTP_pipelining
Closing Connections
-------------------
Akka HTTP actively closes an established connection upon reception of a response containing a ``Connection: close`` header.
The connection can also be closed by the server.
An application can actively trigger the closing of the connection by completing the request stream. In this case the
underlying TCP connection will be closed when the last pending response has been received.
The connection will also be closed if the response entity is cancelled (e.g. by attaching it to ``Sink.cancelled``)
or consumed only partially (e.g. by using the ``take`` combinator). In order to prevent this behaviour the entity should be
explicitly drained by attaching it to ``Sink.ignore``.
Timeouts
--------
Currently Akka HTTP doesn't implement client-side request timeout checking itself as this functionality can be regarded
as a more general purpose streaming infrastructure feature.
It should be noted that Akka Streams provide various timeout functionality so any API that uses streams can benefit
from the stream stages such as ``idleTimeout``, ``backpressureTimeout``, ``completionTimeout``, ``initialTimeout``
and ``throttle``. To learn more about these refer to their documentation in Akka Streams (and Scala Doc).
For more details about timeout support in Akka HTTP in general refer to :ref:`http-timeouts-scala`.
.. _http-client-layer:
Stand-Alone HTTP Layer Usage
----------------------------
Due to its Reactive-Streams-based nature the Akka HTTP layer is fully detachable from the underlying TCP
interface. While in most applications this "feature" will not be crucial it can be useful in certain cases to be able
to "run" the HTTP layer (and, potentially, higher-layers) against data that do not come from the network but rather
some other source. Potential scenarios where this might be useful include tests, debugging or low-level event-sourcing
(e.g. by replaying network traffic).
On the client-side the stand-alone HTTP layer forms a ``BidiStage`` that is defined like this:
.. includecode2:: /../../akka-http-core/src/main/scala/akka/http/scaladsl/Http.scala
:snippet: client-layer
You create an instance of ``Http.ClientLayer`` by calling one of the two overloads of the ``Http().clientLayer`` method,
which also allows for varying degrees of configuration.


@ -1,160 +0,0 @@
.. _host-level-api:
Host-Level Client-Side API
==========================
As opposed to the :ref:`connection-level-api` the host-level API relieves you from manually managing individual HTTP
connections. It autonomously manages a configurable pool of connections to *one particular target endpoint* (i.e.
host/port combination).
.. note::
   It is recommended to first read the :ref:`implications-of-streaming-http-entities` section,
   as it explains the underlying full-stack streaming concepts, which may be unexpected when coming
   from a background with non-"streaming first" HTTP Clients.
Requesting a Host Connection Pool
---------------------------------
The best way to get a hold of a connection pool to a given target endpoint is the ``Http().cachedHostConnectionPool(...)``
method, which returns a ``Flow`` that can be "baked" into an application-level stream setup. This flow is also called
a "pool client flow".
The connection pool underlying a pool client flow is cached. For every ``ActorSystem``, target endpoint and pool
configuration there will never be more than a single pool live at any time.
Also, the HTTP layer transparently manages idle shutdown and restarting of connection pools as configured.
The client flow instances therefore remain valid throughout the lifetime of the application, i.e. they can be
materialized as often as required and the time between individual materializations is of no importance.
When you request a pool client flow with ``Http().cachedHostConnectionPool(...)`` Akka HTTP will immediately start
the pool, even before the first client flow materialization. However, this running pool will not actually open the
first connection to the target endpoint until the first request has arrived.
Configuring a Host Connection Pool
----------------------------------
Apart from the connection-level config settings and socket options there are a number of settings that allow you to
influence the behavior of the connection pool logic itself.
Check out the ``akka.http.host-connection-pool`` section of the Akka HTTP :ref:`akka-http-configuration` for
more information about which settings are available and what they mean.
Note that, if you request pools with different configurations for the same target host you will get *independent* pools.
This means that, in total, your application might open more concurrent HTTP connections to the target endpoint than any
of the individual pool's ``max-connections`` settings allow!
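For orientation, overriding a few of these pool settings could look like this ``application.conf`` fragment (values are illustrative only; consult the configuration reference for the defaults and the full list of keys):

```
akka.http.host-connection-pool {
  max-connections = 8        # maximum parallel connections per pool
  max-open-requests = 64     # hard limit on requests in flight at any time
  pipelining-limit = 1       # requests per connection before requiring a response
  idle-timeout = 30 seconds  # terminate the pool after this long without requests
}
```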
There is one setting that likely deserves a bit deeper explanation: ``max-open-requests``.
This setting limits the maximum number of requests that can be in-flight at any time for a single connection pool.
If an application calls ``Http().cachedHostConnectionPool(...)`` 3 times (with the same endpoint and settings) it will get
back ``3`` different client flow instances for the same pool. If each of these client flows is then materialized ``4`` times
(concurrently) the application will have 12 concurrently running client flow materializations.
All of these share the resources of the single pool.
This means that, if the pool's ``pipelining-limit`` is left at ``1`` (effectively disabling pipelining), no more than 12 requests can be open at any time.
With a ``pipelining-limit`` of ``8`` and 12 concurrent client flow materializations the theoretical open requests
maximum is ``96``.
The ``max-open-requests`` config setting allows for applying a hard limit which serves mainly as a protection against
erroneous connection pool use, e.g. because the application is materializing too many client flows that all compete for
the same pooled connections.
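The arithmetic above can be stated as a one-liner. The following plain-Scala sketch is just a model of the numbers discussed, not Akka HTTP code:

```scala
// Theoretical maximum of open requests for a single pool:
// (client flow instances) x (materializations per flow) x pipelining-limit.
def theoreticalMaxOpenRequests(
    clientFlows: Int,
    materializationsPerFlow: Int,
    pipeliningLimit: Int
): Int = clientFlows * materializationsPerFlow * pipeliningLimit
```

With the numbers from the example: 3 client flows materialized 4 times each give 12 running materializations, so a ``pipelining-limit`` of ``1`` allows at most 12 open requests, and a limit of ``8`` allows 96.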
.. _using-a-host-connection-pool:
Using a Host Connection Pool
----------------------------
The "pool client flow" returned by ``Http().cachedHostConnectionPool(...)`` has the following type::
Flow[(HttpRequest, T), (Try[HttpResponse], T), HostConnectionPool]
This means it consumes tuples of type ``(HttpRequest, T)`` and produces tuples of type ``(Try[HttpResponse], T)``
which might appear more complicated than necessary at first sight.
The reason why the pool API includes objects of custom type ``T`` on both ends lies in the fact that the underlying
transport usually comprises more than a single connection and as such the pool client flow often generates responses in
an order that doesn't directly match the consumed requests.
We could have built the pool logic in a way that reorders responses according to their requests before dispatching them
to the application, but this would have meant that a single slow response could block the delivery of potentially many
responses that would otherwise be ready for consumption by the application.
In order to prevent unnecessary head-of-line blocking the pool client-flow is allowed to dispatch responses as soon as
they arrive, independently of the request order. Of course this means that there needs to be another way to associate a
response with its respective request. The way that this is done is by allowing the application to pass along a custom
"context" object with the request, which is then passed back to the application with the respective response.
This context object of type ``T`` is completely opaque to Akka HTTP, i.e. you can pick whatever works best for your
particular application scenario.
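To make the role of the context concrete, here is a small plain-Scala simulation (hypothetical ``Request``/``Response`` types, no Akka involved) of responses arriving out of order and being re-associated via their context values:

```scala
// The pool pairs each response with the context its request was submitted with,
// so the application can correlate them even when responses arrive out of order.
case class Request(id: Int)
case class Response(body: String)

// Requests submitted with an Int context (here: a simple request id).
val submitted = List((Request(1), 1), (Request(2), 2), (Request(3), 3))
// Responses arrive in arbitrary order, each still carrying its context.
val arrived   = List((Response("c"), 3), (Response("a"), 1), (Response("b"), 2))

// Restore request order purely via the context values:
val byContext = arrived.map { case (resp, ctx) => ctx -> resp }.toMap
val inOrder   = submitted.map { case (_, ctx) => byContext(ctx) }
```

In real code the context can be any type that is useful for the application, for example the original request, a promise to fulfill, or a correlation id.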
.. note::
   A consequence of using a pool is that long-running requests block a connection while running and may starve other
   requests. Make sure not to use a connection pool for long-running requests like long-polling GET requests.
   Use the :ref:`connection-level-api` instead.
Connection Allocation Logic
---------------------------
This is how Akka HTTP allocates incoming requests to the available connection "slots":
1. If there is a connection alive and currently idle then schedule the request across this connection.
2. If no connection is idle and there is still an unconnected slot then establish a new connection.
3. If all connections are already established and "loaded" with other requests then pick the connection with the least
open requests (< the configured ``pipelining-limit``) that only has requests with idempotent methods scheduled to it,
if there is one.
4. Otherwise apply back-pressure to the request source, i.e. stop accepting new requests.
For more information about scheduling more than one request at a time across a single connection see
`this wikipedia entry on HTTP pipelining`__.
__ http://en.wikipedia.org/wiki/HTTP_pipelining
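The four allocation steps can be modelled in a few lines of plain Scala. This is an illustrative reconstruction of the rules as stated above, not Akka HTTP's actual pool implementation; ``Slot`` is a hypothetical type:

```scala
// A slot is either unconnected, or connected with some number of open requests
// and a flag saying whether only idempotent requests are scheduled to it.
case class Slot(connected: Boolean, openRequests: Int, onlyIdempotent: Boolean)

// Returns the index of the slot to use, or None to signal back-pressure (step 4).
def allocate(slots: Vector[Slot], pipeliningLimit: Int): Option[Int] = {
  // 1. a connection that is alive and currently idle
  val idle = slots.indexWhere(s => s.connected && s.openRequests == 0)
  if (idle >= 0) return Some(idle)
  // 2. a still-unconnected slot
  val fresh = slots.indexWhere(!_.connected)
  if (fresh >= 0) return Some(fresh)
  // 3. the loaded connection with the fewest open requests (below the
  //    pipelining limit) that only has idempotent requests scheduled to it
  val candidates = slots.zipWithIndex.filter { case (s, _) =>
    s.onlyIdempotent && s.openRequests < pipeliningLimit
  }
  if (candidates.nonEmpty) Some(candidates.minBy(_._1.openRequests)._2)
  else None // 4. apply back-pressure to the request source
}
```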
Retrying a Request
------------------
If the ``max-retries`` pool config setting is greater than zero the pool retries idempotent requests for which
a response could not be successfully retrieved. Idempotent requests are those whose HTTP method is defined to be
idempotent by the HTTP spec, which are all the ones currently modelled by Akka HTTP except for the ``POST``, ``PATCH``
and ``CONNECT`` methods.
When a response could not be received for a certain request there are essentially three possible error scenarios:
1. The request got lost on the way to the server.
2. The server experiences a problem while processing the request.
3. The response from the server got lost on the way back.
Since the host connector cannot know which one of these possible reasons caused the problem, ``PATCH`` and
``POST`` requests could have already triggered a non-idempotent action on the server, so these requests cannot be retried.
In these cases, as well as when all retries have not yielded a proper response, the pool produces a failed ``Try``
(i.e. a ``scala.util.Failure``) together with the custom request context.
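The resulting retry rule can be summarized in a tiny plain-Scala sketch (illustrative only; the real pool works on ``HttpMethod`` instances, not strings):

```scala
// Of the methods modelled by Akka HTTP, all but POST, PATCH and CONNECT
// are defined as idempotent by the HTTP spec and are eligible for retries.
val nonIdempotent = Set("POST", "PATCH", "CONNECT")

def canRetry(method: String, maxRetries: Int): Boolean =
  maxRetries > 0 && !nonIdempotent(method)
```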
Pool Shutdown
-------------
Completing a pool client flow will simply detach the flow from the pool. The connection pool itself will continue to run
as it may be serving other client flows concurrently or in the future. Only after the configured ``idle-timeout`` for
the pool has expired will Akka HTTP automatically terminate the pool and free all its resources.
If a new client flow is requested with ``Http().cachedHostConnectionPool(...)`` or if an already existing client flow is
re-materialized the respective pool is automatically and transparently restarted.
In addition to the automatic shutdown via the configured idle timeouts it's also possible to trigger the immediate
shutdown of a specific pool by calling ``shutdown()`` on the :class:`HostConnectionPool` instance that the pool client
flow materializes into. This ``shutdown()`` call produces a ``Future[Unit]`` which is fulfilled when the pool
termination has been completed.
It's also possible to trigger the immediate termination of *all* connection pools in the ``ActorSystem`` at the same
time by calling ``Http().shutdownAllConnectionPools()``. This call too produces a ``Future[Unit]`` which is fulfilled when
all pools have terminated.
.. note::
   When encountering unexpected ``akka.stream.AbruptTerminationException`` exceptions during ``ActorSystem`` **shutdown**
   please make sure that active connections are shut down before shutting down the entire system. This can be done by
   calling the ``Http().shutdownAllConnectionPools()`` method and, only once its ``Future`` completes, shutting down the actor system.
Example
-------
.. includecode:: ../../code/docs/http/scaladsl/HttpClientExampleSpec.scala
:include: host-level-example


@ -1,35 +0,0 @@
.. _http-client-side:
Consuming HTTP-based Services (Client-Side)
===========================================
All client-side functionality of Akka HTTP, for consuming HTTP-based services offered by other endpoints, is currently
provided by the ``akka-http-core`` module.
It is recommended to first read the :ref:`implications-of-streaming-http-entities` section,
as it explains the underlying full-stack streaming concepts, which may be unexpected when coming
from a background with non-"streaming first" HTTP Clients.
Depending on your application's specific needs you can choose from three different API levels:
:ref:`connection-level-api`
for full-control over when HTTP connections are opened/closed and how requests are scheduled across them
:ref:`host-level-api`
for letting Akka HTTP manage a connection-pool to *one specific* host/port endpoint
:ref:`request-level-api`
for letting Akka HTTP perform all connection management
You can interact with different API levels at the same time and, independently of which API level you choose,
Akka HTTP will happily handle many thousand concurrent connections to a single or many different hosts.
.. toctree::
:maxdepth: 2
connection-level
host-level
request-level
client-https-support
websocket-support


@ -1,83 +0,0 @@
.. _request-level-api:
Request-Level Client-Side API
=============================
The request-level API is the most convenient way of using Akka HTTP's client-side functionality. It internally builds upon the
:ref:`host-level-api` to provide you with a simple and easy-to-use way of retrieving HTTP responses from remote servers.
Depending on your preference you can pick the flow-based or the future-based variant.
.. note::
   It is recommended to first read the :ref:`implications-of-streaming-http-entities` section,
   as it explains the underlying full-stack streaming concepts, which may be unexpected when coming
   from a background with non-"streaming first" HTTP Clients.

.. note::
   The request-level API is implemented on top of a connection pool that is shared inside the ActorSystem. A consequence of
   using a pool is that long-running requests block a connection while running and starve other requests. Make sure not to use
   the request-level API for long-running requests like long-polling GET requests. Use the :ref:`connection-level-api` instead.
Flow-Based Variant
------------------
The flow-based variant of the request-level client-side API is presented by the ``Http().superPool(...)`` method.
It creates a new "super connection pool flow", which routes incoming requests to a (cached) host connection pool
depending on their respective effective URIs.
The ``Flow`` returned by ``Http().superPool(...)`` is very similar to the one from the :ref:`host-level-api`, so the
:ref:`using-a-host-connection-pool` section also applies here.
However, there is one notable difference between a "host connection pool client flow" for the host-level API and a
"super-pool flow":
Since in the former case the flow has an implicit target host context the requests it takes don't need to have absolute
URIs or a valid ``Host`` header. The host connection pool will automatically add a ``Host`` header if required.
For a super-pool flow this is not the case. All requests to a super-pool must either have an absolute URI or a valid
``Host`` header, because otherwise it'd be impossible to find out which target endpoint to direct the request to.
Future-Based Variant
--------------------
Sometimes your HTTP client needs are very basic. You simply need the HTTP response for a certain request and don't
want to bother with setting up a full-blown streaming infrastructure.
For these cases Akka HTTP offers the ``Http().singleRequest(...)`` method, which simply turns an ``HttpRequest`` instance
into ``Future[HttpResponse]``. Internally the request is dispatched across the (cached) host connection pool for the
request's effective URI.
Just like in the case of the super-pool flow described above the request must have either an absolute URI or a valid
``Host`` header, otherwise the returned future will be completed with an error.
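A hedged sketch of this routability rule in plain Scala (``SimpleRequest`` is a hypothetical stand-in for ``HttpRequest``; the real implementation inspects the parsed URI and headers):

```scala
// A request handed to singleRequest or a super-pool flow must carry either
// an absolute URI or a valid Host header; otherwise the target endpoint
// cannot be determined and the request is rejected.
case class SimpleRequest(uri: String, hostHeader: Option[String])

def hasKnownTarget(req: SimpleRequest): Boolean =
  req.uri.matches("https?://.+") || req.hostHeader.exists(_.nonEmpty)
```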
Using the Future-Based API in Actors
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When using the ``Future``-based API from inside an ``Actor``, all the usual caveats apply to how one should deal
with the future's completion. For example you should not access the Actor's state from within the Future's callbacks
(such as ``map``, ``onComplete``, ...) and instead you should use the ``pipeTo`` pattern to pipe the result back
to the Actor as a message.
.. includecode:: ../../code/docs/http/scaladsl/HttpClientExampleSpec.scala
:include: single-request-in-actor-example
Example
-------
.. includecode:: ../../code/docs/http/scaladsl/HttpClientExampleSpec.scala
:include: single-request-example
.. warning::
   Be sure to consume the response entity's ``dataBytes: Source[ByteString, Unit]``, for example by connecting it
   to a ``Sink`` (or by calling ``response.discardEntityBytes()`` if you don't care about the
   response entity), since otherwise Akka HTTP (and the underlying Streams infrastructure) will interpret the
   lack of entity consumption as a back-pressure signal and stop reading from the underlying TCP connection!

   This is a feature of Akka HTTP that allows consuming entities (and pulling them through the network) in
   a streaming fashion, and only *on demand* when the client is ready to consume the bytes -
   it may be a bit surprising at first though.

   There are tickets open about automatically dropping entities if not consumed (`#18716`_ and `#18540`_),
   so these may be implemented in the near future.
.. _#18540: https://github.com/akka/akka/issues/18540
.. _#18716: https://github.com/akka/akka/issues/18716


@ -1,123 +0,0 @@
.. _client-side-websocket-support:
Client-Side WebSocket Support
=============================
Client-side WebSocket support is available through ``Http.singleWebSocketRequest``,
``Http.webSocketClientFlow`` and ``Http.webSocketClientLayer``.
A WebSocket consists of two streams of messages, incoming messages (a :class:`Sink`) and outgoing messages
(a :class:`Source`) where either may be signalled first; or even be the only direction in which messages flow during
the lifetime of the connection. Therefore a WebSocket connection is modelled as either something you connect a
``Flow[Message, Message, Mat]`` to or a ``Flow[Message, Message, Mat]`` that you connect a ``Source[Message, Mat]`` and
a ``Sink[Message, Mat]`` to.
A WebSocket request starts with a regular HTTP request which contains an ``Upgrade`` header (and possibly
other regular HTTP request properties), so in addition to the flow of messages there also is an initial response
from the server, this is modelled with :class:`WebSocketUpgradeResponse`.
The methods of the WebSocket client API handle the upgrade to WebSocket on connection success and materialize
the connected WebSocket stream. If the connection fails, for example with a ``404 NotFound`` error, the regular
HTTP result can be found in ``WebSocketUpgradeResponse.response``.
.. note::
   Make sure to read and understand the section about :ref:`half-closed-client-websockets` as the behavior
   when using WebSockets for one-way communication may not be what you would expect.
Message
-------
Messages sent and received over a WebSocket can be either :class:`TextMessage` s or :class:`BinaryMessage` s and each
of those has two subtypes: :class:`Strict` or :class:`Streamed`. In typical applications messages will be ``Strict`` as
WebSockets are usually deployed to communicate using small messages, not to stream data. The protocol does however
allow streaming (by not marking the first fragment as final, as described in `RFC 6455 section 5.2`__).
__ https://tools.ietf.org/html/rfc6455#section-5.2
For such streamed messages :class:`BinaryMessage.Streamed` and :class:`TextMessage.Streamed` will be used. In these cases
the data is provided as a ``Source[ByteString, NotUsed]`` for binary and ``Source[String, NotUsed]`` for text messages.
singleWebSocketRequest
----------------------
``singleWebSocketRequest`` takes a :class:`WebSocketRequest` and a flow that it will connect to the source and
sink of the WebSocket connection. It triggers the request right away and returns a tuple containing the
``Future[WebSocketUpgradeResponse]`` and the materialized value from the flow passed to the method.
The future will succeed when the WebSocket connection has been established or the server returned a regular
HTTP response, or fail if the connection fails with an exception.
Simple example sending a message and printing any incoming message:
.. includecode:: ../../code/docs/http/scaladsl/WebSocketClientExampleSpec.scala
:include: single-WebSocket-request
The WebSocket request may also include additional headers, like in this example, HTTP Basic Auth:
.. includecode:: ../../code/docs/http/scaladsl/WebSocketClientExampleSpec.scala
:include: authorized-single-WebSocket-request
webSocketClientFlow
-------------------
``webSocketClientFlow`` takes a request, and returns a ``Flow[Message, Message, Future[WebSocketUpgradeResponse]]``.
The future that is materialized from the flow will succeed when the WebSocket connection has been established or
the server returned a regular HTTP response, or fail if the connection fails with an exception.
.. note::
   The :class:`Flow` that is returned by this method can only be materialized once. For each request a new
   flow must be acquired by calling the method again.
Simple example sending a message and printing any incoming message:
.. includecode:: ../../code/docs/http/scaladsl/WebSocketClientExampleSpec.scala
:include: WebSocket-client-flow
webSocketClientLayer
--------------------
Just like the :ref:`http-client-layer` for regular HTTP requests, the WebSocket layer can be used fully detached from the
underlying TCP interface. The same scenarios as described for regular HTTP requests apply here.
The returned layer forms a ``BidiFlow[Message, SslTlsOutbound, SslTlsInbound, Message, Future[WebSocketUpgradeResponse]]``.
.. _half-closed-client-websockets:
Half-Closed WebSockets
----------------------
The Akka HTTP WebSocket API does not support half-closed connections, which means that if either stream completes the
entire connection is closed (after a "Closing Handshake" has been exchanged or a timeout of 3 seconds has passed).
This may lead to unexpected behavior, for example if we are trying to only consume messages coming from the server,
like this:
.. includecode:: ../../code/docs/http/scaladsl/WebSocketClientExampleSpec.scala
:include: half-closed-WebSocket-closing-example
This will in fact quickly close the connection, because ``Source.empty`` completes immediately when the
stream is materialized. To solve this, make sure the outgoing source does not complete, for example by using
``Source.maybe`` like this:
.. includecode:: ../../code/docs/http/scaladsl/WebSocketClientExampleSpec.scala
:include: half-closed-WebSocket-working-example
This keeps the outgoing source from completing, without emitting any elements, until the ``Promise`` is manually
completed, which completes the ``Source`` and closes the connection.
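The ``Promise`` behind ``Source.maybe`` can be completed with ``Some(element)`` to emit one last element, or with ``None`` to complete without emitting anything. A minimal sketch of just the ``Promise`` side in plain Scala (no Akka dependency; ``String`` stands in for ``Message`` here):

```scala
import scala.concurrent.{Await, Promise}
import scala.concurrent.duration._

// Source.maybe[Message] materializes to a Promise[Option[Message]].
// Completing it with None completes the source without emitting anything,
// which in turn closes the WebSocket connection.
val promise = Promise[Option[String]]()

// ... keep the connection open for as long as we like, then shut down:
promise.success(None)

// The future behind the promise now reports completion without an element:
assert(Await.result(promise.future, 1.second).isEmpty)
```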
The same problem holds true when emitting a finite number of elements: as soon as the last element has been emitted the
``Source`` completes and causes the connection to close. To avoid that you can concatenate ``Source.maybe`` to the finite stream:
.. includecode:: ../../code/docs/http/scaladsl/WebSocketClientExampleSpec.scala
:include: half-closed-WebSocket-finite-working-example
Scenarios that exist with the two streams in a WebSocket, and possible ways to deal with them:
=========================================== ================================================================================
Scenario Possible solution
=========================================== ================================================================================
Two-way communication ``Flow.fromSinkAndSource``, or ``Flow.map`` for a request-response protocol
Infinite incoming stream, no outgoing ``Flow.fromSinkAndSource(someSink, Source.maybe)``
Infinite outgoing stream, no incoming ``Flow.fromSinkAndSource(Sink.ignore, yourSource)``
=========================================== ================================================================================
Encoding / Decoding
===================
The `HTTP spec`_ defines a ``Content-Encoding`` header, which signifies whether the entity body of an HTTP message is
"encoded" and, if so, by which algorithm. The only commonly used content encodings are compression algorithms.
Currently Akka HTTP supports the compression and decompression of HTTP requests and responses with the ``gzip`` or
``deflate`` encodings.
The core logic for this lives in the `akka.http.scaladsl.coding`_ package.
The support is not enabled automatically, but must be explicitly requested.
For enabling message encoding/decoding with :ref:`Routing DSL <http-high-level-server-side-api>` see the :ref:`CodingDirectives`.
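To illustrate what the ``gzip`` content coding actually does to an entity body, here is a round-trip written against the plain JDK streams; this is not Akka HTTP's coder API, just the underlying algorithm the coders wrap:

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream}
import java.util.zip.{GZIPInputStream, GZIPOutputStream}

// Compress a byte array with gzip.
def gzip(bytes: Array[Byte]): Array[Byte] = {
  val bos = new ByteArrayOutputStream()
  val out = new GZIPOutputStream(bos)
  out.write(bytes)
  out.close() // closing flushes the gzip trailer
  bos.toByteArray
}

// Decompress a gzip-encoded byte array.
def gunzip(bytes: Array[Byte]): Array[Byte] = {
  val in = new GZIPInputStream(new ByteArrayInputStream(bytes))
  Iterator.continually(in.read()).takeWhile(_ != -1).map(_.toByte).toArray
}

val body = "Hello Akka HTTP! " * 20
assert(new String(gunzip(gzip(body.getBytes("UTF-8"))), "UTF-8") == body)
```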
.. _HTTP spec: http://tools.ietf.org/html/rfc7231#section-3.1.2.1
.. _akka.http.scaladsl.coding: @github@/akka-http/src/main/scala/akka/http/scaladsl/coding
.. _http-model-scala:
HTTP Model
==========
The Akka HTTP model contains a deeply structured, fully immutable, case-class based model of all the major HTTP data
structures, like HTTP requests, responses and common headers.
It lives in the *akka-http-core* module and forms the basis for most of Akka HTTP's APIs.
Overview
--------
Since akka-http-core provides the central HTTP data structures you will find the following import in quite a
few places around the code base (and probably your own code as well):
.. includecode:: ../../code/docs/http/scaladsl/ModelSpec.scala
:include: import-model
This brings all of the most relevant types in scope, mainly:
- ``HttpRequest`` and ``HttpResponse``, the central message model
- ``headers``, the package containing all the predefined HTTP header models and supporting types
- Supporting types like ``Uri``, ``HttpMethods``, ``MediaTypes``, ``StatusCodes``, etc.
A common pattern is that the model of a certain entity is represented by an immutable type (class or trait),
while the actual instances of the entity defined by the HTTP spec live in an accompanying object carrying the name of
the type plus a trailing plural 's'.
For example:
- Defined ``HttpMethod`` instances live in the ``HttpMethods`` object.
- Defined ``HttpCharset`` instances live in the ``HttpCharsets`` object.
- Defined ``HttpEncoding`` instances live in the ``HttpEncodings`` object.
- Defined ``HttpProtocol`` instances live in the ``HttpProtocols`` object.
- Defined ``MediaType`` instances live in the ``MediaTypes`` object.
- Defined ``StatusCode`` instances live in the ``StatusCodes`` object.
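The naming pattern can be sketched in a few lines of self-contained Scala (the fields shown are illustrative, not Akka HTTP's actual definitions):

```scala
// An immutable model type ...
final case class HttpMethod(name: String, isIdempotent: Boolean)

// ... and an accompanying object carrying the plural name, holding the
// predefined instances from the HTTP spec.
object HttpMethods {
  val GET  = HttpMethod("GET",  isIdempotent = true)
  val POST = HttpMethod("POST", isIdempotent = false)
  val values: Seq[HttpMethod] = Seq(GET, POST)
}

// Usage: predefined instances are looked up through the plural object.
println(HttpMethods.GET.name) // GET
```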
HttpRequest
-----------
``HttpRequest`` and ``HttpResponse`` are the basic case classes representing HTTP messages.
An ``HttpRequest`` consists of
- a method (GET, POST, etc.)
- a URI
- a seq of headers
- an entity (body data)
- a protocol
Here are some examples of how to construct an ``HttpRequest``:
.. includecode:: ../../code/docs/http/scaladsl/ModelSpec.scala
:include: construct-request
All parameters of ``HttpRequest.apply`` have default values set, so ``headers``, for example, don't need to be specified
if there are none. Many of the parameter types (like ``HttpEntity`` and ``Uri``) define implicit conversions
for common use cases to simplify the creation of request and response instances.
HttpResponse
------------
An ``HttpResponse`` consists of
- a status code
- a seq of headers
- an entity (body data)
- a protocol
Here are some examples of how to construct an ``HttpResponse``:
.. includecode:: ../../code/docs/http/scaladsl/ModelSpec.scala
:include: construct-response
In addition to the simple ``HttpEntity`` constructors which create an entity from a fixed ``String`` or ``ByteString``
as shown here the Akka HTTP model defines a number of subclasses of ``HttpEntity`` which allow body data to be specified as a
stream of bytes.
.. _HttpEntity-scala:
HttpEntity
----------
An ``HttpEntity`` carries the data bytes of a message together with its Content-Type and, if known, its Content-Length.
In Akka HTTP there are five different kinds of entities which model the various ways that message content can be
received or sent:
HttpEntity.Strict
The simplest entity, which is used when all the entity data is already available in memory.
It wraps a plain ``ByteString`` and represents a standard, unchunked entity with a known ``Content-Length``.
HttpEntity.Default
The general, unchunked HTTP/1.1 message entity.
It has a known length and presents its data as a ``Source[ByteString]`` which can only be materialized once.
It is an error if the provided source doesn't produce exactly as many bytes as specified.
The distinction between ``Strict`` and ``Default`` is an API-only one. On the wire, both kinds of entities look the same.
HttpEntity.Chunked
The model for HTTP/1.1 `chunked content`__ (i.e. sent with ``Transfer-Encoding: chunked``).
The content length is unknown and the individual chunks are presented as a ``Source[HttpEntity.ChunkStreamPart]``.
A ``ChunkStreamPart`` is either a non-empty ``Chunk`` or a ``LastChunk`` containing optional trailer headers.
The stream consists of zero or more ``Chunk`` parts and can be terminated by an optional ``LastChunk`` part.
HttpEntity.CloseDelimited
An unchunked entity of unknown length that is implicitly delimited by closing the connection (``Connection: close``).
The content data are presented as a ``Source[ByteString]``.
Since the connection must be closed after sending an entity of this type it can only be used on the server-side for
sending a response.
Also, the main purpose of ``CloseDelimited`` entities is compatibility with HTTP/1.0 peers, which do not support
chunked transfer encoding. If you are building a new application and are not constrained by legacy requirements you
shouldn't rely on ``CloseDelimited`` entities, since implicit terminate-by-connection-close is not a robust way of
signaling response end, especially in the presence of proxies. Additionally this type of entity prevents connection
reuse which can seriously degrade performance. Use ``HttpEntity.Chunked`` instead!
HttpEntity.IndefiniteLength
A streaming entity of unspecified length for use in a ``Multipart.BodyPart``.
__ http://tools.ietf.org/html/rfc7230#section-4.1
The entity types ``Strict``, ``Default``, and ``Chunked`` are subtypes of ``HttpEntity.Regular``, which allows them to be used
for requests and responses. In contrast, ``HttpEntity.CloseDelimited`` can only be used for responses.
Streaming entity types (i.e. all but ``Strict``) cannot be shared or serialized. To create a strict, sharable copy of an
entity or message use ``HttpEntity.toStrict`` or ``HttpMessage.toStrict`` which returns a ``Future`` of the object with
the body data collected into a ``ByteString``.
The ``HttpEntity`` companion object contains several helper constructors to create entities from common types easily.
You can pattern match over the subtypes of ``HttpEntity`` if you want to provide special handling for each of the
subtypes. However, in many cases a recipient of an ``HttpEntity`` doesn't care which subtype an entity is of
(and how exactly data is transported on the HTTP layer). Therefore, the general method ``HttpEntity.dataBytes`` is
provided, which returns a ``Source[ByteString, Any]`` that allows access to the data of an entity regardless of its
concrete subtype.
.. note::
When to use which subtype?
- Use ``Strict`` if the amount of data is "small" and already available in memory (e.g. as a ``String`` or ``ByteString``)
- Use ``Default`` if the data is generated by a streaming data source and the size of the data is known
- Use ``Chunked`` for an entity of unknown length
- Use ``CloseDelimited`` for a response as a legacy alternative to ``Chunked`` if the client doesn't support
chunked transfer encoding. Otherwise use ``Chunked``!
- In a ``Multipart.BodyPart`` use ``IndefiniteLength`` for content of unknown length.
.. caution::
When you receive a non-strict message from a connection then additional data are only read from the network when you
request them by consuming the entity data stream. This means that, if you *don't* consume the entity stream then the
connection will effectively be stalled. In particular no subsequent message (request or response) will be read from
the connection as the entity of the current message "blocks" the stream.
Therefore you must make sure that you always consume the entity data, even in the case that you are not actually
interested in it!
Limiting message entity length
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
All message entities that Akka HTTP reads from the network automatically get a length verification check attached to
them. This check makes sure that the total entity size is less than or equal to the configured
``max-content-length`` [#]_, which is an important defense against certain Denial-of-Service attacks.
However, a single global limit for all requests (or responses) is often too inflexible for applications that need to
allow large limits for *some* requests (or responses) but want to clamp down on all messages not belonging to that
group.
In order to give you maximum flexibility in defining entity size limits according to your needs the ``HttpEntity``
features a ``withSizeLimit`` method, which lets you adjust the globally configured maximum size for this particular
entity, be it to increase or decrease any previously set value.
This means that your application will receive all requests (or responses) from the HTTP layer, even the ones whose
``Content-Length`` exceeds the configured limit (because you might want to increase the limit yourself).
Only when the actual data stream ``Source`` contained in the entity is materialized will the boundary checks
actually be applied. In case the length verification fails, the respective stream will be terminated with an
:class:`EntityStreamSizeException`, either directly at materialization time (if the ``Content-Length`` is known) or whenever more
data bytes than allowed have been read.
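The two failure modes can be sketched in plain Scala. The helper names below are hypothetical (Akka HTTP performs these checks inside the entity's data stream); only the exception name mirrors the real one:

```scala
// Thrown when an entity exceeds its size limit; actualSize is only known
// when the Content-Length was declared up front.
final case class EntityStreamSizeException(limit: Long, actualSize: Option[Long])
  extends RuntimeException

// Known Content-Length: fail directly "at materialization time".
def checkKnownLength(contentLength: Long, limit: Long): Unit =
  if (contentLength > limit)
    throw EntityStreamSizeException(limit, Some(contentLength))

// Unknown length (e.g. chunked): fail as soon as more bytes than allowed
// have actually been read from the stream.
def countWithLimit(chunks: Iterator[Array[Byte]], limit: Long): Long =
  chunks.foldLeft(0L) { (read, chunk) =>
    val total = read + chunk.length
    if (total > limit) throw EntityStreamSizeException(limit, None)
    total
  }

assert(countWithLimit(Iterator(Array[Byte](1, 2), Array[Byte](3)), limit = 10L) == 3L)
```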
When called on ``Strict`` entities the ``withSizeLimit`` method will return the entity itself if the length is within
the bound, otherwise a ``Default`` entity with a single element data stream. This allows for potential refinement of the
entity size limit at a later point (before materialization of the data stream).
By default all message entities produced by the HTTP layer automatically carry the limit that is defined in the
application's ``max-content-length`` config setting. If the entity is transformed in a way that changes the
content-length and then another limit is applied then this new limit will be evaluated against the new
content-length. If the entity is transformed in a way that changes the content-length and no new limit is applied
then the previous limit will be applied against the previous content-length.
Generally this behavior should be in line with your expectations.
.. [#] ``akka.http.parsing.max-content-length`` (applying to server- as well as client-side),
   ``akka.http.server.parsing.max-content-length`` (server-side only),
   ``akka.http.client.parsing.max-content-length`` (client-side only) or
   ``akka.http.host-connection-pool.client.parsing.max-content-length`` (only host-connection-pools)
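For example, the global limit and the client-side limit can be set independently in ``application.conf``; the setting names come from the footnote above, the values are illustrative:

```hocon
# Raise the limit for all parsers to 16 MiB ...
akka.http.parsing.max-content-length = 16m
# ... but keep a tighter limit for client-side responses.
akka.http.client.parsing.max-content-length = 1m
```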
Special processing for HEAD requests
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
`RFC 7230`_ defines very clear rules for the entity length of HTTP messages.
In particular, this rule requires special treatment in Akka HTTP:
Any response to a HEAD request and any response with a 1xx
(Informational), 204 (No Content), or 304 (Not Modified) status
code is always terminated by the first empty line after the
header fields, regardless of the header fields present in the
message, and thus cannot contain a message body.
Responses to HEAD requests introduce the complexity that `Content-Length` or `Transfer-Encoding` headers
can be present but the entity is empty. This is modeled by allowing `HttpEntity.Default` and `HttpEntity.Chunked`
to be used for HEAD responses with an empty data stream.
Also, when a HEAD response has an `HttpEntity.CloseDelimited` entity the Akka HTTP implementation will *not* close the
connection after the response has been sent. This allows the sending of HEAD responses without `Content-Length`
header across persistent HTTP connections.
.. _RFC 7230: http://tools.ietf.org/html/rfc7230#section-3.3.3
.. _header-model-scala:
Header Model
------------
Akka HTTP contains a rich model of the most common HTTP headers. Parsing and rendering is done automatically so that
applications don't need to care for the actual syntax of headers. Headers not modelled explicitly are represented
as a ``RawHeader`` (which is essentially a String/String name/value pair).
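Conceptually a ``RawHeader`` is nothing more than a name/value pair of strings; a self-contained sketch of the idea (not Akka HTTP's actual class):

```scala
final case class RawHeader(name: String, value: String) {
  // Header names are case-insensitive in HTTP, so matching is typically
  // done against the lowercased name.
  def lowercaseName: String = name.toLowerCase
}

val header = RawHeader("X-Correlation-Id", "abc-123")
assert(header.lowercaseName == "x-correlation-id")
assert(header.value == "abc-123")
```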
See these examples of how to deal with headers:
.. includecode:: ../../code/docs/http/scaladsl/ModelSpec.scala
:include: headers
HTTP Headers
------------
When the Akka HTTP server receives an HTTP request it tries to parse all its headers into their respective
model classes. Independently of whether this succeeds or not, the HTTP layer will
always pass on all received headers to the application. Unknown headers as well as ones with invalid syntax (according
to the header parser) will be made available as ``RawHeader`` instances. For the ones exhibiting parsing errors a
warning message is logged depending on the value of the ``illegal-header-warnings`` config setting.
Some headers have special status in HTTP and are therefore treated differently from "regular" headers:
Content-Type
The Content-Type of an HTTP message is modeled as the ``contentType`` field of the ``HttpEntity``.
The ``Content-Type`` header therefore doesn't appear in the ``headers`` sequence of a message.
Also, a ``Content-Type`` header instance that is explicitly added to the ``headers`` of a request or response will
not be rendered onto the wire; instead, a warning will be logged!
Transfer-Encoding
Messages with ``Transfer-Encoding: chunked`` are represented via the ``HttpEntity.Chunked`` entity.
As such chunked messages that do not have another deeper nested transfer encoding will not have a ``Transfer-Encoding``
header in their ``headers`` sequence.
Similarly, a ``Transfer-Encoding`` header instance that is explicitly added to the ``headers`` of a request or
response will not be rendered onto the wire; instead, a warning will be logged!
Content-Length
The content length of a message is modelled via its :ref:`HttpEntity-scala`. As such no ``Content-Length`` header will ever
be part of a message's ``headers`` sequence.
Similarly, a ``Content-Length`` header instance that is explicitly added to the ``headers`` of a request or
response will not be rendered onto the wire; instead, a warning will be logged!
Server
A ``Server`` header is usually added automatically to any response and its value can be configured via the
``akka.http.server.server-header`` setting. Additionally an application can override the configured header with a
custom one by adding it to the response's ``headers`` sequence.
User-Agent
A ``User-Agent`` header is usually added automatically to any request and its value can be configured via the
``akka.http.client.user-agent-header`` setting. Additionally an application can override the configured header with a
custom one by adding it to the request's ``headers`` sequence.
Date
The ``Date`` response header is added automatically but can be overridden by supplying it manually.
Connection
On the server-side Akka HTTP watches for explicitly added ``Connection: close`` response headers and as such honors
the potential wish of the application to close the connection after the respective response has been sent out.
The actual logic for determining whether to close the connection is quite involved. It takes into account the
request's method, protocol and potential ``Connection`` header as well as the response's protocol, entity and
potential ``Connection`` header. See `this test`__ for a full table of what happens when.
Strict-Transport-Security
HTTP Strict Transport Security (HSTS) is a web security policy mechanism which is communicated by the
``Strict-Transport-Security`` header. The most important security vulnerability that HSTS can fix is SSL-stripping
man-in-the-middle attacks. The SSL-stripping attack works by transparently converting a secure HTTPS connection into a
plain HTTP connection. The user can see that the connection is insecure, but crucially there is no way of knowing
whether the connection should be secure. HSTS addresses this problem by informing the browser that connections to the
site should always use TLS/SSL. See also `RFC 6797`_.
.. _RFC 6797: http://tools.ietf.org/html/rfc6797
__ @github@/akka-http-core/src/test/scala/akka/http/impl/engine/rendering/ResponseRendererSpec.scala#L422
.. _custom-headers-scala:
Custom Headers
--------------
Sometimes you may need to model a custom header type which is not part of HTTP and still be able to use it
as conveniently as possible, just like the built-in types.
Because of the number of ways one may interact with headers (i.e. trying to match a ``CustomHeader`` against a ``RawHeader``
or the other way around, etc.), a helper trait for custom header types and their companion classes is provided by Akka HTTP.
By extending :class:`ModeledCustomHeader` instead of the plain ``CustomHeader``, such a header can be matched:
.. includecode:: ../../../../../akka-http-tests/src/test/scala/akka/http/scaladsl/server/ModeledCustomHeaderSpec.scala
:include: modeled-api-key-custom-header
This allows the custom header to be used in the following scenarios:
.. includecode:: ../../../../../akka-http-tests/src/test/scala/akka/http/scaladsl/server/ModeledCustomHeaderSpec.scala
:include: matching-examples
This includes usage within header directives, like in the following :ref:`-headerValuePF-` example:
.. includecode:: ../../../../../akka-http-tests/src/test/scala/akka/http/scaladsl/server/ModeledCustomHeaderSpec.scala
:include: matching-in-routes
One can also directly extend :class:`CustomHeader`, which requires less boilerplate; however, this has the downside that
matching against :class:`RawHeader` instances does not work out-of-the-box, limiting its usefulness in the routing layer
of Akka HTTP. For merely rendering such a header, however, it is sufficient.
.. note::
When defining custom headers, prefer to extend :class:`ModeledCustomHeader` instead of :class:`CustomHeader` directly
as it will automatically make your header abide by all the expected pattern matching semantics one is accustomed to
when using built-in types (such as matching a custom header against a ``RawHeader`` as is often the case in routing
layers of Akka HTTP applications).
Parsing / Rendering
-------------------
Parsing and rendering of HTTP data structures is heavily optimized and for most types there's currently no public API
provided to parse (or render to) Strings or byte arrays.
.. note::
Various parsing and rendering settings are available to tweak in the configuration under ``akka.http.client[.parsing]``,
``akka.http.server[.parsing]`` and ``akka.http.host-connection-pool[.client.parsing]``, with defaults for all of these
being defined in the ``akka.http.parsing`` configuration section.
For example, if you want to change a parsing setting for all components, you can set ``akka.http.parsing.illegal-header-warnings = off``.
However, this setting can still be overridden by the more specific sections, for example with ``akka.http.server.parsing.illegal-header-warnings = on``.
In this case both the ``client`` and ``host-connection-pool`` APIs will see the setting ``off``, while the server will see ``on``.
In the case of ``akka.http.host-connection-pool.client`` settings, they default to settings set in ``akka.http.client``,
and can override them if needed. This is useful, since both ``client`` and ``host-connection-pool`` APIs,
such as the Client API ``Http().outgoingConnection`` or the Host Connection Pool APIs ``Http().singleRequest`` or ``Http().superPool``,
usually need the same settings, however the ``server`` most likely has a very different set of settings.
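In configuration form, the override example above reads (illustrative ``application.conf`` excerpt):

```hocon
# Applies to client, server and host-connection-pool parsers ...
akka.http.parsing.illegal-header-warnings = off
# ... except where a more specific section overrides it:
akka.http.server.parsing.illegal-header-warnings = on
```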
.. _registeringCustomMediaTypes:
Registering Custom Media Types
------------------------------
Akka HTTP `predefines`_ most commonly encountered media types and emits them in their well-typed form while parsing http messages.
Sometimes you may want to define a custom media type and inform the parser infrastructure about how to handle these custom
media types, e.g. that ``application/custom`` is to be treated as ``NonBinary`` with ``WithFixedCharset``. To achieve this you
need to register the custom media type in the server's settings by configuring ``ParserSettings`` like this:
.. includecode:: ../../../../../akka-http-tests/src/test/scala/akka/http/scaladsl/CustomMediaTypesSpec.scala
:include: application-custom
You may also want to read about MediaType `Registration trees`_, in order to register your vendor specific media types
in the right style / place.
.. _Registration trees: https://en.wikipedia.org/wiki/Media_type#Registration_trees
.. _predefines: https://github.com/akka/akka/blob/master/akka-http-core/src/main/scala/akka/http/scaladsl/model/MediaType.scala#L297
The URI model
-------------
Akka HTTP offers its own specialised URI model class which is tuned for both performance and idiomatic usage within
other types of the HTTP model. For example, an ``HttpRequest``'s target URI is parsed into this type, where all character
escaping and other URI specific semantics are applied.
Obtaining the Raw Request URI
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Sometimes it may be necessary to obtain the "raw" value of an incoming URI, without any escaping or parsing applied to it.
While this use case is rare, it comes up every once in a while. It is possible to obtain the "raw" request URI on the Akka
HTTP server side by turning on the ``akka.http.server.raw-request-uri-header`` flag.
When enabled, a ``Raw-Request-URI`` header will be added to each request. This header will hold the original raw request's
URI that was used. For an example check the reference configuration.
.. _http-scala-common-scala:
Common Abstractions (Client- and Server-Side)
=============================================
HTTP and related specifications define a great number of concepts and functionality that is not specific to either
HTTP's client- or server-side, since they are meaningful on both ends of an HTTP connection.
The documentation for their counterparts in Akka HTTP lives in this section rather than in the ones for the
:ref:`Client-Side API <http-client-side>`, :ref:`http-low-level-server-side-api` or :ref:`http-high-level-server-side-api`,
which are specific to one side only.
.. toctree::
:maxdepth: 2
http-model
marshalling
unmarshalling
de-coding
json-support
xml-support
timeouts
.. _akka-http-spray-json:
JSON Support
============
Akka HTTP's :ref:`marshalling <http-marshalling-scala>` and :ref:`unmarshalling <http-unmarshalling-scala>`
infrastructure makes it rather easy to seamlessly support specific wire representations of your data objects, like JSON,
XML or even binary encodings.
For JSON, Akka HTTP currently provides support for `spray-json`_ right out of the box through its
``akka-http-spray-json`` module.
Other JSON libraries are supported by the community.
See `the list of current community extensions for Akka HTTP`_.
.. _`the list of current community extensions for Akka HTTP`: http://akka.io/community/#extensions-to-akka-http
spray-json Support
------------------
The SprayJsonSupport_ trait provides a ``FromEntityUnmarshaller[T]`` and ``ToEntityMarshaller[T]`` for every type ``T``
that an implicit ``spray.json.RootJsonReader`` and/or ``spray.json.RootJsonWriter`` (respectively) is available for.
This is how you enable automatic support for (un)marshalling from and to JSON with `spray-json`_:
1. Add a library dependency onto ``"com.typesafe.akka" %% "akka-http-spray-json-experimental" % "@version@"``.
2. ``import akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport._`` or mix in the
``akka.http.scaladsl.marshallers.sprayjson.SprayJsonSupport`` trait.
3. Provide a ``RootJsonFormat[T]`` for your type and bring it into scope.
Check out the `spray-json`_ documentation for more info on how to do this.
Once you have done this (un)marshalling between JSON and your type ``T`` should work nicely and transparently.
.. includecode2:: ../../code/docs/http/scaladsl/SprayJsonExampleSpec.scala
:snippet: minimal-spray-json-example
4. By default, spray-json marshals your types to pretty-printed JSON by implicit conversion using ``PrettyPrinter``, as defined in
``implicit def sprayJsonMarshallerConverter[T](writer: RootJsonWriter[T])(implicit printer: JsonPrinter = PrettyPrinter): ToEntityMarshaller[T]``.
Alternatively, to marshal your types to compact-printed JSON, bring a ``CompactPrinter`` into scope to perform the implicit conversion.
.. includecode:: ../../code/docs/http/scaladsl/SprayJsonCompactMarshalSpec.scala
:include: example
To learn more about how spray-json works please refer to its `documentation <https://github.com/spray/spray-json>`_.
.. _spray-json: https://github.com/spray/spray-json
.. _SprayJsonSupport: @github@/akka-http-marshallers-scala/akka-http-spray-json/src/main/scala/akka/http/scaladsl/marshallers/sprayjson/SprayJsonSupport.scala
.. _http-marshalling-scala:
Marshalling
===========
"Marshalling" is the process of converting a higher-level (object) structure into some kind of lower-level
representation, often a "wire format". Other popular names for it are "Serialization" or "Pickling".
In Akka HTTP "Marshalling" means the conversion of an object of type ``T`` into a lower-level target type,
e.g. a ``MessageEntity`` (which forms the "entity body" of an HTTP request or response) or a full ``HttpRequest`` or
``HttpResponse``.
Basic Design
------------
Marshalling of instances of type ``A`` into instances of type ``B`` is performed by a ``Marshaller[A, B]``.
Akka HTTP also predefines a number of helpful aliases for the types of marshallers that you'll likely work with most:
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/marshalling/package.scala
:snippet: marshaller-aliases
Contrary to what you might initially expect, ``Marshaller[A, B]`` is not a plain function ``A => B`` but rather
essentially a function ``A => Future[List[Marshalling[B]]]``.
Let's dissect this rather complicated-looking signature piece by piece to understand why marshallers are designed this
way.
Given an instance of type ``A`` a ``Marshaller[A, B]`` produces:
1. A ``Future``: This is probably quite clear. Marshallers are not required to synchronously produce a result, so instead
they return a future, which allows for asynchronicity in the marshalling process.
2. of ``List``: Rather than only a single target representation for ``A``, marshallers can offer several. Which
one will be rendered onto the wire in the end is decided by content negotiation.
For example, the ``ToEntityMarshaller[OrderConfirmation]`` might offer a JSON as well as an XML representation.
The client can decide through the addition of an ``Accept`` request header which one is preferred. If the client doesn't
express a preference, the first representation is picked.
3. of ``Marshalling[B]``: Rather than returning an instance of ``B`` directly marshallers first produce a
``Marshalling[B]``. This allows for querying the ``MediaType`` and potentially the ``HttpCharset`` that the marshaller
will produce before the actual marshalling is triggered. Apart from enabling content negotiation this design allows for
delaying the actual construction of the marshalling target instance to the very last moment when it is really needed.
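The negotiation idea from point 2 can be sketched in self-contained Scala. This is a simplification, not Akka HTTP's actual content-negotiation algorithm, and ``Offer`` is a hypothetical stand-in for ``Marshalling[B]``:

```scala
// Each offer can report its media type before any rendering work is done;
// the render thunk is only forced for the winning representation.
final case class Offer[B](mediaType: String, render: () => B)

def negotiate[B](offers: List[Offer[B]], accepted: Option[String]): Offer[B] =
  accepted.flatMap(mt => offers.find(_.mediaType == mt))
    .getOrElse(offers.head) // no preference: the first representation wins

val offers = List(
  Offer("application/json", () => """{"confirmed":true}"""),
  Offer("text/xml",         () => "<confirmed>true</confirmed>")
)

assert(negotiate(offers, Some("text/xml")).mediaType == "text/xml")
assert(negotiate(offers, None).mediaType == "application/json")
```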
This is how ``Marshalling`` is defined:
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/marshalling/Marshaller.scala
:snippet: marshalling
Predefined Marshallers
----------------------
Akka HTTP already predefines a number of marshallers for the most common types.
Specifically these are:
- PredefinedToEntityMarshallers_
- ``Array[Byte]``
- ``ByteString``
- ``Array[Char]``
- ``String``
- ``akka.http.scaladsl.model.FormData``
- ``akka.http.scaladsl.model.MessageEntity``
- ``T <: akka.http.scaladsl.model.Multipart``
- PredefinedToResponseMarshallers_
- ``T``, if a ``ToEntityMarshaller[T]`` is available
- ``HttpResponse``
- ``StatusCode``
- ``(StatusCode, T)``, if a ``ToEntityMarshaller[T]`` is available
- ``(Int, T)``, if a ``ToEntityMarshaller[T]`` is available
- ``(StatusCode, immutable.Seq[HttpHeader], T)``, if a ``ToEntityMarshaller[T]`` is available
- ``(Int, immutable.Seq[HttpHeader], T)``, if a ``ToEntityMarshaller[T]`` is available
- PredefinedToRequestMarshallers_
- ``HttpRequest``
- ``Uri``
- ``(HttpMethod, Uri, T)``, if a ``ToEntityMarshaller[T]`` is available
- ``(HttpMethod, Uri, immutable.Seq[HttpHeader], T)``, if a ``ToEntityMarshaller[T]`` is available
- GenericMarshallers_
- ``Marshaller[Throwable, T]``
- ``Marshaller[Option[A], B]``, if a ``Marshaller[A, B]`` and an ``EmptyValue[B]`` is available
- ``Marshaller[Either[A1, A2], B]``, if a ``Marshaller[A1, B]`` and a ``Marshaller[A2, B]`` is available
- ``Marshaller[Future[A], B]``, if a ``Marshaller[A, B]`` is available
- ``Marshaller[Try[A], B]``, if a ``Marshaller[A, B]`` is available
.. _PredefinedToEntityMarshallers: @github@/akka-http/src/main/scala/akka/http/scaladsl/marshalling/PredefinedToEntityMarshallers.scala
.. _PredefinedToResponseMarshallers: @github@/akka-http/src/main/scala/akka/http/scaladsl/marshalling/PredefinedToResponseMarshallers.scala
.. _PredefinedToRequestMarshallers: @github@/akka-http/src/main/scala/akka/http/scaladsl/marshalling/PredefinedToRequestMarshallers.scala
.. _GenericMarshallers: @github@/akka-http/src/main/scala/akka/http/scaladsl/marshalling/GenericMarshallers.scala
Implicit Resolution
-------------------
The marshalling infrastructure of Akka HTTP relies on a type-class based approach, which means that ``Marshaller``
instances from a certain type ``A`` to a certain type ``B`` have to be available implicitly.
The implicits for most of the predefined marshallers in Akka HTTP are provided through the companion object of the
``Marshaller`` trait. This means that they are always available and never need to be explicitly imported.
Additionally, you can simply "override" them by bringing your own custom version into local scope.
Custom Marshallers
------------------
Akka HTTP gives you a few convenience tools for constructing marshallers for your own types.
Before you do that you need to think about what kind of marshaller you want to create.
If all your marshaller needs to produce is a ``MessageEntity`` then you should probably provide a
``ToEntityMarshaller[T]``. The advantage here is that it will work on both the client- as well as the server-side since
a ``ToResponseMarshaller[T]`` as well as a ``ToRequestMarshaller[T]`` can automatically be created if a
``ToEntityMarshaller[T]`` is available.
If, however, your marshaller also needs to set things like the response status code, the request method, the request URI
or any headers then a ``ToEntityMarshaller[T]`` won't work. You'll need to fall down to providing a
``ToResponseMarshaller[T]`` or a ``ToRequestMarshaller[T]`` directly.
For writing your own marshallers you won't have to "manually" implement the ``Marshaller`` trait directly.
Rather, it should be possible to use one of the convenience construction helpers defined on the ``Marshaller``
companion:
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/marshalling/Marshaller.scala
:snippet: marshaller-creation
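As a hedged sketch of one of these helpers in action, a ``ToEntityMarshaller`` for a hypothetical ``Person`` type could be built with ``Marshaller.withFixedContentType`` (the ``Person`` type and its hand-rolled JSON rendering are illustrative assumptions, not part of Akka HTTP)::

    import akka.http.scaladsl.marshalling.{ Marshaller, ToEntityMarshaller }
    import akka.http.scaladsl.model.{ ContentTypes, HttpEntity }

    // hypothetical domain type, used only for illustration
    case class Person(name: String, age: Int)

    // always renders Person as application/json
    implicit val personMarshaller: ToEntityMarshaller[Person] =
      Marshaller.withFixedContentType(ContentTypes.`application/json`) { person =>
        HttpEntity(ContentTypes.`application/json`,
          s"""{"name":"${person.name}","age":${person.age}}""")
      }

Because this is a ``ToEntityMarshaller``, the corresponding ``ToResponseMarshaller`` and ``ToRequestMarshaller`` are derived automatically.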
Deriving Marshallers
--------------------
Sometimes you can save yourself some work by reusing existing marshallers for your custom ones.
The idea is to "wrap" an existing marshaller with some logic to "re-target" it to your type.
In this regard wrapping a marshaller can mean one or both of the following two things:
- Transform the input before it reaches the wrapped marshaller
- Transform the output of the wrapped marshaller
For the latter (transforming the output) you can use ``baseMarshaller.map``, which works exactly as it does for functions.
For the former (transforming the input) you have four alternatives:
- ``baseMarshaller.compose``
- ``baseMarshaller.composeWithEC``
- ``baseMarshaller.wrap``
- ``baseMarshaller.wrapWithEC``
``compose`` works just like it does for functions.
``wrap`` is a compose that allows you to also change the ``ContentType`` that the marshaller marshals to.
The ``...WithEC`` variants allow you to receive an ``ExecutionContext`` internally if you need one, without having to
depend on one being available implicitly at the usage site.
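As a minimal sketch (the ``Temperature`` case class is a hypothetical example type), the predefined ``String`` marshaller can be re-targeted via ``compose``::

    import akka.http.scaladsl.marshalling.{ Marshaller, ToEntityMarshaller }

    case class Temperature(celsius: Double) // hypothetical

    // transform the input before it reaches the wrapped String marshaller
    val temperatureMarshaller: ToEntityMarshaller[Temperature] =
      Marshaller.StringMarshaller.compose[Temperature](t => s"${t.celsius} C")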
Using Marshallers
-----------------
In many places throughout Akka HTTP marshallers are used implicitly, e.g. when you define how to :ref:`-complete-` a
request using the :ref:`Routing DSL <http-high-level-server-side-api>`.
However, you can also use the marshalling infrastructure directly if you wish, which can be useful for example in tests.
The best entry point for this is the ``akka.http.scaladsl.marshalling.Marshal`` object, which you can use like this:
.. includecode2:: ../../code/docs/http/scaladsl/MarshalSpec.scala
:snippet: use marshal
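A hedged sketch of such direct usage (assuming an ``ActorSystem`` whose dispatcher serves as the ``ExecutionContext``)::

    import scala.concurrent.Future
    import akka.actor.ActorSystem
    import akka.http.scaladsl.marshalling.Marshal
    import akka.http.scaladsl.model._

    implicit val system = ActorSystem()
    import system.dispatcher // ExecutionContext for the marshalling Future

    val entityF: Future[MessageEntity]  = Marshal("Hello world").to[MessageEntity]
    val responseF: Future[HttpResponse] = Marshal(StatusCodes.OK -> "Hello world").to[HttpResponse]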
.. _http-timeouts-scala:
Akka HTTP Timeouts
==================
Akka HTTP comes with a variety of built-in timeout mechanisms to protect your servers from malicious attacks or
programming mistakes. Some of these are simply configuration options (which may be overridden in code) while others
are left to the streaming APIs and are easily implementable as patterns in user-code directly.
Common timeouts
---------------
.. _idle-timeouts-scala:
Idle timeouts
^^^^^^^^^^^^^
The ``idle-timeout`` is a global setting which sets the maximum inactivity time of a given connection.
In other words, if a connection is open but no request/response is being written to it for over ``idle-timeout`` time,
the connection will be automatically closed.
The setting works the same way for all connections, be it server-side or client-side, and it's configurable
independently for each of those using the following keys::
   akka.http.server.idle-timeout
   akka.http.client.idle-timeout
   akka.http.host-connection-pool.idle-timeout
   akka.http.host-connection-pool.client.idle-timeout
.. note::
For the connection pooled client side the idle period is counted only when the pool has no pending requests waiting.
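For example, an ``application.conf`` overriding some of these defaults might look like this (the values are illustrative, not the shipped defaults)::

    akka.http.server.idle-timeout = 120 s
    akka.http.client.idle-timeout = 30 s
    akka.http.host-connection-pool.client.idle-timeout = 15 s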
Server timeouts
---------------
.. _request-timeout-scala:
Request timeout
^^^^^^^^^^^^^^^
Request timeouts are a mechanism that limits the maximum time it may take to produce an ``HttpResponse`` from a route.
If that deadline is not met, the server will automatically inject a Service Unavailable HTTP response and close the connection
to prevent it from leaking and staying around indefinitely (for example if, by programming error, a ``Future`` would never complete
and thus never send the real response).
The default ``HttpResponse`` that is written when a request timeout is exceeded looks like this:
.. includecode2:: /../../akka-http-core/src/main/scala/akka/http/impl/engine/server/HttpServerBluePrint.scala
:snippet: default-request-timeout-httpresponse
A default request timeout is applied globally to all routes and can be configured using the
``akka.http.server.request-timeout`` setting (which defaults to 20 seconds).
.. note::
Please note that if multiple requests (``R1,R2,R3,...``) were sent by a client (see "HTTP pipelining")
using the same connection and the ``n-th`` request triggers a request timeout, the server will reply with an HTTP response
and close the connection, leaving the ``(n+1)-th`` (and subsequent requests on the same connection) unhandled.
The request timeout can be configured at run-time for a given route using any of the :ref:`TimeoutDirectives`.
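For example, a route that needs more time than the global default could be sketched like this (the route path and timeout value are illustrative)::

    import scala.concurrent.duration._
    import akka.http.scaladsl.server.Directives._

    val route =
      path("slow-computation") {
        withRequestTimeout(60.seconds) { // overrides akka.http.server.request-timeout for this route
          complete("finished")
        }
      }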
Bind timeout
^^^^^^^^^^^^
The bind timeout is the time period within which the TCP binding process must be completed (using any of the ``Http().bind*`` methods).
It can be configured using the ``akka.http.server.bind-timeout`` setting.
Client timeouts
---------------
Connecting timeout
^^^^^^^^^^^^^^^^^^
The connecting timeout is the time period within which the TCP connecting process must be completed.
Tweaking it should rarely be required, but it allows erroring out a connection that could not be
established within the given amount of time.
It can be configured using the ``akka.http.client.connecting-timeout`` setting.
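Both settings can be overridden in ``application.conf``, for example (illustrative values)::

    akka.http.server.bind-timeout = 1 s         # TCP binding must complete within this period
    akka.http.client.connecting-timeout = 5 s   # TCP connecting must complete within this period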
.. _http-unmarshalling-scala:
Unmarshalling
=============
"Unmarshalling" is the process of converting some kind of a lower-level representation, often a "wire format", into a
higher-level (object) structure. Other popular names for it are "Deserialization" or "Unpickling".
In Akka HTTP "Unmarshalling" means the conversion of a lower-level source object, e.g. a ``MessageEntity``
(which forms the "entity body" of an HTTP request or response) or a full ``HttpRequest`` or ``HttpResponse``,
into an instance of type ``T``.
Basic Design
------------
Unmarshalling of instances of type ``A`` into instances of type ``B`` is performed by an ``Unmarshaller[A, B]``.
Akka HTTP also predefines a number of helpful aliases for the types of unmarshallers that you'll likely work with most:
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/unmarshalling/package.scala
:snippet: unmarshaller-aliases
At its core an ``Unmarshaller[A, B]`` is very similar to a function ``A => Future[B]`` and as such quite a bit simpler
than its :ref:`marshalling <http-marshalling-scala>` counterpart. The process of unmarshalling does not have to support
content negotiation which saves two additional layers of indirection that are required on the marshalling side.
Predefined Unmarshallers
------------------------
Akka HTTP already predefines a number of unmarshallers for the most common types.
Specifically these are:
- PredefinedFromStringUnmarshallers_
- ``Byte``
- ``Short``
- ``Int``
- ``Long``
- ``Float``
- ``Double``
- ``Boolean``
- PredefinedFromEntityUnmarshallers_
- ``Array[Byte]``
- ``ByteString``
- ``Array[Char]``
- ``String``
- ``akka.http.scaladsl.model.FormData``
- GenericUnmarshallers_
- ``Unmarshaller[T, T]`` (identity unmarshaller)
- ``Unmarshaller[Option[A], B]``, if an ``Unmarshaller[A, B]`` is available
- ``Unmarshaller[A, Option[B]]``, if an ``Unmarshaller[A, B]`` is available
.. _PredefinedFromStringUnmarshallers: @github@/akka-http/src/main/scala/akka/http/scaladsl/unmarshalling/PredefinedFromStringUnmarshallers.scala
.. _PredefinedFromEntityUnmarshallers: @github@/akka-http/src/main/scala/akka/http/scaladsl/unmarshalling/PredefinedFromEntityUnmarshallers.scala
.. _GenericUnmarshallers: @github@/akka-http/src/main/scala/akka/http/scaladsl/unmarshalling/GenericUnmarshallers.scala
Implicit Resolution
-------------------
The unmarshalling infrastructure of Akka HTTP relies on a type-class based approach, which means that ``Unmarshaller``
instances from a certain type ``A`` to a certain type ``B`` have to be available implicitly.
The implicits for most of the predefined unmarshallers in Akka HTTP are provided through the companion object of the
``Unmarshaller`` trait. This means that they are always available and never need to be explicitly imported.
Additionally, you can simply "override" them by bringing your own custom version into local scope.
Custom Unmarshallers
--------------------
Akka HTTP gives you a few convenience tools for constructing unmarshallers for your own types.
Usually you won't have to "manually" implement the ``Unmarshaller`` trait directly.
Rather, it should be possible to use one of the convenience construction helpers defined on the ``Unmarshaller``
companion:
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/unmarshalling/Unmarshaller.scala
:snippet: unmarshaller-creation
Deriving Unmarshallers
----------------------
Sometimes you can save yourself some work by reusing existing unmarshallers for your custom ones.
The idea is to "wrap" an existing unmarshaller with some logic to "re-target" it to your type.
Usually what you want to do is to transform the output of some existing unmarshaller and convert it to your type.
For this type of unmarshaller transformation Akka HTTP defines these methods:
- ``baseUnmarshaller.transform``
- ``baseUnmarshaller.map``
- ``baseUnmarshaller.mapWithInput``
- ``baseUnmarshaller.flatMap``
- ``baseUnmarshaller.flatMapWithInput``
- ``baseUnmarshaller.recover``
- ``baseUnmarshaller.withDefaultValue``
- ``baseUnmarshaller.mapWithCharset`` (only available for FromEntityUnmarshallers)
- ``baseUnmarshaller.forContentTypes`` (only available for FromEntityUnmarshallers)
The method signatures should make their semantics relatively clear.
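As a minimal sketch, an unmarshaller for a hypothetical ``UserId`` wrapper type can be derived by mapping the output of a simple base unmarshaller (the ``UserId`` type is an assumption for illustration)::

    import akka.http.scaladsl.unmarshalling.{ FromStringUnmarshaller, Unmarshaller }

    final case class UserId(value: Long) // hypothetical

    // derive: parse to Long first, then re-target the output to UserId
    implicit val userIdUnmarshaller: FromStringUnmarshaller[UserId] =
      Unmarshaller.strict[String, Long](_.toLong).map(UserId(_))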
Using Unmarshallers
-------------------
In many places throughout Akka HTTP unmarshallers are used implicitly, e.g. when you want to access the :ref:`-entity-`
of a request using the :ref:`Routing DSL <http-high-level-server-side-api>`.
However, you can also use the unmarshalling infrastructure directly if you wish, which can be useful for example in tests.
The best entry point for this is the ``akka.http.scaladsl.unmarshalling.Unmarshal`` object, which you can use like this:
.. includecode2:: ../../code/docs/http/scaladsl/UnmarshalSpec.scala
:snippet: use unmarshal
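A hedged sketch of such direct usage (assuming the Akka HTTP of this era, with an ``ActorSystem`` and ``ActorMaterializer`` in scope)::

    import scala.concurrent.Future
    import akka.actor.ActorSystem
    import akka.stream.ActorMaterializer
    import akka.http.scaladsl.unmarshalling.Unmarshal

    implicit val system = ActorSystem()
    implicit val materializer = ActorMaterializer()
    import system.dispatcher

    val intF: Future[Int] = Unmarshal("42").to[Int]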
.. _akka-http-xml-marshalling:
XML Support
===========
Akka HTTP's :ref:`marshalling <http-marshalling-scala>` and :ref:`unmarshalling <http-unmarshalling-scala>`
infrastructure makes it rather easy to seamlessly support specific wire representations of your data objects, like JSON,
XML or even binary encodings.
For XML Akka HTTP currently provides support for `Scala XML`_ right out of the box through its
``akka-http-xml`` module.
Scala XML Support
-----------------
The ScalaXmlSupport_ trait provides a ``FromEntityUnmarshaller[NodeSeq]`` and ``ToEntityMarshaller[NodeSeq]`` that
you can use directly or build upon.
This is how you enable support for (un)marshalling from and to XML with `Scala XML`_ ``NodeSeq``:
1. Add a library dependency onto ``"com.typesafe.akka" %% "akka-http-xml-experimental" % "1.x"``.
2. ``import akka.http.scaladsl.marshallers.xml.ScalaXmlSupport._`` or mix in the
``akka.http.scaladsl.marshallers.xml.ScalaXmlSupport`` trait.
Once you have done this (un)marshalling between XML and ``NodeSeq`` instances should work nicely and transparently.
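A minimal sketch of what this enables (the route path and element names are illustrative)::

    import akka.http.scaladsl.marshallers.xml.ScalaXmlSupport._
    import akka.http.scaladsl.server.Directives._
    import scala.xml.NodeSeq

    // echo the incoming XML wrapped in a <response> element
    val route =
      path("xml-echo") {
        entity(as[NodeSeq]) { xml =>
          complete(<response>{xml}</response>)
        }
      }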
.. _Scala XML: https://github.com/scala/scala-xml
.. _ScalaXmlSupport: @github@/akka-http-marshallers-scala/akka-http-xml/src/main/scala/akka/http/scaladsl/marshallers/xml/ScalaXmlSupport.scala
.. _akka-http-configuration:
Configuration
=============
Just like any other Akka module Akka HTTP is configured via `Typesafe Config`_.
Usually this means that you provide an ``application.conf`` which contains all the application-specific settings that
differ from the default ones provided by the reference configuration files from the individual Akka modules.
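For example, a minimal ``application.conf`` overriding a few Akka HTTP defaults could look like this (the keys come from the reference configuration, the values are illustrative)::

    akka.http {
      server {
        request-timeout = 30 s
      }
      parsing {
        max-content-length = 16m
      }
    }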
These are the relevant default configuration values for the Akka HTTP modules.
akka-http-core
~~~~~~~~~~~~~~
.. literalinclude:: ../../../../akka-http-core/src/main/resources/reference.conf
:language: none
akka-http
~~~~~~~~~
.. literalinclude:: ../../../../akka-http/src/main/resources/reference.conf
:language: none
The other Akka HTTP modules do not offer any configuration via `Typesafe Config`_.
.. _Typesafe Config: https://github.com/typesafehub/config
.. _handling-blocking-in-http-routes-scala:
Handling blocking operations in Akka HTTP
=========================================
Sometimes it is difficult to avoid blocking operations, and there is a good
chance that the blocking happens inside a ``Future`` execution, which may
lead to problems. It is important to handle blocking operations correctly.
Problem
-------
Using ``context.dispatcher`` as the dispatcher on which the blocking ``Future``
executes can be a problem, since the same dispatcher is used by the routing
infrastructure to actually handle the incoming requests.
If all of the available threads are blocked, the routing infrastructure will end up *starving*.
Therefore, routing infrastructure should not be blocked. Instead, a dedicated dispatcher
for blocking operations should be used.
.. note::
Blocking APIs should also be avoided if possible. Try to find or build Reactive APIs,
such that blocking is minimised, or moved over to dedicated dispatchers.
Often, when integrating with existing libraries or systems, it is not possible to
avoid blocking APIs. The following solution explains how to handle blocking
operations properly.
Note that the same hints apply to managing blocking operations anywhere in Akka,
including in Actors etc.
In the below thread state diagrams the colours have the following meaning:
* Turquoise - Sleeping state
* Orange - Waiting state
* Green - Runnable state
The thread information was recorded using the YourKit profiler, however any good JVM profiler
has this feature (including the free VisualVM, bundled with the Oracle JDK, as well as Oracle Flight Recorder).
Problem example: blocking the default dispatcher
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. includecode2:: ../code/docs/http/scaladsl/server/BlockingInHttpExamplesSpec.scala
:snippet: blocking-example-in-default-dispatcher
Here the app is exposed to a load of continuous GET requests and a large number
of ``akka.actor.default-dispatcher`` threads are handling the requests. The orange
portion of the thread graph shows that they are idle. Idle threads are fine,
they're ready to accept new work. However, large amounts of turquoise (sleeping) threads are very bad!
.. image:: DispatcherBehaviourOnBadCode.png
After some time, the app is exposed to a load of POST requests,
which will block these threads. For example, "``default-akka.default-dispatcher2,3,4``"
are going into the blocking state after being idle before. It can be observed
that the number of new threads increases ("``default-akka.actor.default-dispatcher 18,19,20,...``"),
however they go to the sleeping state immediately, thus wasting
resources.
The number of such new threads depends on the default dispatcher configuration,
but will likely not exceed 50. Since many POST requests are being made, the entire
thread pool is starved. The blocking operations dominate such that the routing
infrastructure has no thread available to handle the other requests.
In essence, the ``Thread.sleep`` has dominated all threads and caused anything
executing on the default dispatcher to starve for resources (including any Actors
that you have not configured an explicit dispatcher for (sic!)).
Solution: Dedicated dispatcher for blocking operations
------------------------------------------------------
In ``application.conf``, the dispatcher dedicated to blocking behaviour should
be configured as follows::

    my-blocking-dispatcher {
      type = Dispatcher
      executor = "thread-pool-executor"
      thread-pool-executor {
        // or in Akka 2.4.2+
        fixed-pool-size = 16
      }
      throughput = 100
    }
There are many dispatcher options available which can be found in :ref:`dispatchers-scala`.
Here ``thread-pool-executor`` is used, which has a hard limit on the number of threads it can
keep available for blocking operations. The size settings should depend on the app's
functionality and the number of cores the server has.
Whenever blocking has to be done, use the above configured dispatcher
instead of the default one:
.. includecode2:: ../code/docs/http/scaladsl/server/BlockingInHttpExamplesSpec.scala
:snippet: blocking-example-in-dedicated-dispatcher
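The core of the pattern, sketched with a placeholder blocking call (``Thread.sleep`` stands in for real blocking work such as JDBC or legacy I/O)::

    import scala.concurrent.Future
    import akka.actor.ActorSystem

    val system = ActorSystem()

    // look up the dedicated dispatcher configured above and run blocking work on it
    implicit val blockingDispatcher = system.dispatchers.lookup("my-blocking-dispatcher")

    val result: Future[Int] = Future {
      Thread.sleep(5000) // stand-in for a real blocking operation
      42
    }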
Now the app is exposed to the same load, initially normal requests and then
the blocking requests. The thread pool behaviour is shown in the figure.
.. image:: DispatcherBehaviourOnGoodCode.png
Initially, the normal requests are easily handled by the default dispatcher, the
green lines, which represent the actual execution.
When blocking operations are issued, the ``my-blocking-dispatcher``
starts up to the number of configured threads. It handles the sleeping. After a
certain period of nothing happening to the threads, it shuts them down.
If another batch of operations has to be done, the pool will start new
threads that will take care of putting them into the sleeping state, but the
threads are not wasted.
In this case, the throughput of the normal GET requests was not impacted,
they were still served on the default dispatcher.
This is the recommended way of dealing with any kind of blocking in reactive
applications. It is referred to as "bulkheading" or "isolating" the badly behaving
parts of an app, in this case the bad behaviour of blocking operations.
There is good documentation available in the Akka docs section
`Blocking needs careful management <http://doc.akka.io/docs/akka/current/general/actor-systems.html#Blocking_Needs_Careful_Management>`_.
.. _implications-of-streaming-http-entities:
Implications of the streaming nature of Request/Response Entities
-----------------------------------------------------------------
Akka HTTP is streaming *all the way through*, which means that the back-pressure mechanisms enabled by Akka Streams
are exposed through all layers: from the TCP layer, through the HTTP server, all the way up to the user-facing ``HttpRequest``
and ``HttpResponse`` and their ``HttpEntity`` APIs.
This has surprising implications if you are used to non-streaming / non-reactive HTTP clients.
Specifically it means that: "*lack of consumption of the HTTP entity is signaled as back-pressure to the other
side of the connection*". This is a feature, as it allows one to consume the entity at one's own pace and to back-pressure
servers/clients from overwhelming our application, possibly causing unnecessary buffering of the entity in memory.
.. warning::
Consuming (or discarding) the Entity of a request is mandatory!
If *accidentally* left neither consumed nor discarded, Akka HTTP will
assume the incoming data should remain back-pressured, and will stall the incoming data via TCP back-pressure mechanisms.
A client should consume the Entity regardless of the status of the ``HttpResponse``.
Client-Side handling of streaming HTTP Entities
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Consuming the HTTP Response Entity (Client)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The most common use-case of course is consuming the response entity, which can be done by
running the underlying ``dataBytes`` Source
(or, on the server-side, by using directives such as ``BasicDirectives.extractDataBytes``).
It is encouraged to use various streaming techniques to utilise the underlying infrastructure to its fullest,
for example by framing the incoming chunks, parsing them line-by-line and then connecting the flow into another
destination Sink, such as a File or other Akka Streams connector:
.. includecode:: ../code/docs/http/scaladsl/HttpClientExampleSpec.scala
:include: manual-entity-consume-example-1
However, sometimes the need may arise to consume the entire entity as a ``Strict`` entity (which means that it is
completely loaded into memory). Akka HTTP provides a special ``toStrict(timeout)`` method which can be used to
eagerly consume the entity and make it available in memory:
.. includecode:: ../code/docs/http/scaladsl/HttpClientExampleSpec.scala
:include: manual-entity-consume-example-2
Discarding the HTTP Response Entity (Client)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sometimes when calling HTTP services we do not care about their response payload (e.g. all we care about is the response code),
yet, as explained above, the entity still has to be consumed in some way, otherwise we'll be exerting back-pressure on the
underlying TCP connection.
The ``discardEntityBytes`` convenience method serves the purpose of easily discarding the entity if it has no purpose for us.
It does so by piping the incoming bytes directly into a ``Sink.ignore``.
The two snippets below are equivalent, and work the same way on the server-side for incoming HTTP Requests:
.. includecode:: ../code/docs/http/scaladsl/HttpClientExampleSpec.scala
:include: manual-entity-discard-example-1
Or the equivalent low-level code achieving the same result:
.. includecode:: ../code/docs/http/scaladsl/HttpClientExampleSpec.scala
:include: manual-entity-discard-example-2
Server-Side handling of streaming HTTP Entities
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Similarly to the client-side, HTTP entities are directly linked to Streams which are fed by the underlying
TCP connection. Thus, if request entities remain unconsumed, the server will back-pressure the connection, expecting
that the user-code will eventually decide what to do with the incoming data.
Note that some directives force an implicit ``toStrict`` operation, such as ``entity(as[String])`` and similar ones.
Consuming the HTTP Request Entity (Server)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The simplest way of consuming the incoming request entity is to simply transform it into an actual domain object,
for example by using the :ref:`-entity-` directive:
.. includecode:: ../code/docs/http/scaladsl/HttpServerExampleSpec.scala
:include: consume-entity-directive
Of course you can access the raw ``dataBytes`` as well and run the underlying stream, for example piping it into a
``FileIO`` Sink that signals completion via a ``Future[IOResult]`` once all the data has been written into the file:
.. includecode:: ../code/docs/http/scaladsl/HttpServerExampleSpec.scala
:include: consume-raw-dataBytes
Discarding the HTTP Request Entity (Server)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sometimes, depending on some validation (e.g. checking if given user is allowed to perform uploads or not)
you may want to decide to discard the uploaded entity.
Please note that discarding means that the entire upload will still proceed, even though you are not interested in the data
being streamed to the server. This may be useful when you are simply not interested in the given entity but
don't want to abort the entire connection (which we'll demonstrate as well), since there may be more requests
pending on the same connection still.
In order to discard the data bytes explicitly you can invoke the ``discardEntityBytes`` method of the incoming ``HttpRequest``:
.. includecode:: ../code/docs/http/scaladsl/HttpServerExampleSpec.scala
:include: discard-discardEntityBytes
A related concept is *cancelling* the incoming ``entity.dataBytes`` stream, which results in Akka HTTP
*abruptly closing the connection from the Client*. This may be useful when you detect that the given user should not be allowed to make any
uploads at all, and you want to drop the connection (instead of reading and ignoring the incoming data).
This can be done by attaching the incoming ``entity.dataBytes`` to a ``Sink.cancelled`` which will cancel
the entity stream, which in turn will cause the underlying connection to be shut down by the server,
effectively hard-aborting the incoming request:
.. includecode:: ../code/docs/http/scaladsl/HttpServerExampleSpec.scala
:include: discard-close-connections
Closing connections is also explained in depth in the :ref:`http-closing-connection-low-level` section of the docs.
Pending: Automatic discarding of not used entities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Under certain conditions it is possible to detect that an entity is very unlikely to be used by the user for a given request,
and issue warnings or discard the entity automatically. This advanced feature has not been implemented yet, see the below
note and issues for further discussion and ideas.
.. note::
An advanced feature code named "auto draining" has been discussed and proposed for Akka HTTP, and we're hoping
to implement or help the community implement it.
You can read more about it in `issue #18716 <https://github.com/akka/akka/issues/18716>`_
as well as `issue #18540 <https://github.com/akka/akka/issues/18540>`_ ; as always, contributions are very welcome!
.. _http-scala:
Akka HTTP Documentation (Scala) moved!
======================================
Akka HTTP
=========
.. toctree::
:maxdepth: 2
introduction
configuration
common/index
implications-of-streaming-http-entity
low-level-server-side-api
routing-dsl/index
client-side/index
server-side-https-support
handling-blocking-operations-in-akka-http-routes
migration-from-spray
migration-from-old-http-javadsl
migration-guide-2.4.x-experimental
Akka HTTP has been released as an independent stable module (from Akka HTTP 3.x onwards).
The documentation is available under `doc.akka.io/akka-http/current/ <http://doc.akka.io/docs/akka-http/current/scala.html>`_.
.. _http-introduction-scala:
Introduction
============
The Akka HTTP modules implement a full server- and client-side HTTP stack on top of *akka-actor* and *akka-stream*. It's
not a web-framework but rather a more general toolkit for providing and consuming HTTP-based services. While interaction
with a browser is of course also in scope it is not the primary focus of Akka HTTP.
Akka HTTP follows a rather open design and many times offers several different API levels for "doing the same thing".
You get to pick the API level of abstraction that is most suitable for your application.
This means that, if you have trouble achieving something using a high-level API, there's a good chance that you can get
it done with a low-level API, which offers more flexibility but might require you to write more application code.
Philosophy
----------
Akka HTTP has been driven with a clear focus on providing tools for building integration layers rather than application cores. As such it regards itself as a suite of libraries rather than a framework.
A framework, as wed like to think of the term, gives you a “frame”, in which you build your application. It comes with a lot of decisions already pre-made and provides a foundation including support structures that lets you get started and deliver results quickly. In a way a framework is like a skeleton onto which you put the “flesh” of your application in order to have it come alive. As such frameworks work best if you choose them before you start application development and try to stick to the frameworks “way of doing things” as you go along.
For example, if you are building a browser-facing web application it makes sense to choose a web framework and build your application on top of it because the “core” of the application is the interaction of a browser with your code on the web-server. The framework makers have chosen one “proven” way of designing such applications and let you “fill in the blanks” of a more or less flexible “application-template”. Being able to rely on best-practice architecture like this can be a great asset for getting things done quickly.
However, if your application is not primarily a web application because its core is not browser-interaction but some specialized maybe complex business service and you are merely trying to connect it to the world via a REST/HTTP interface a web-framework might not be what you need. In this case the application architecture should be dictated by what makes sense for the core not the interface layer. Also, you probably wont benefit from the possibly existing browser-specific framework components like view templating, asset management, JavaScript- and CSS generation/manipulation/minification, localization support, AJAX support, etc.
Akka HTTP was designed specifically as “not-a-framework”, not because we dont like frameworks, but for use cases where a framework is not the right choice. Akka HTTP is made for building integration layers based on HTTP and as such tries to “stay on the sidelines”. Therefore you normally dont build your application “on top of” Akka HTTP, but you build your application on top of whatever makes sense and use Akka HTTP merely for the HTTP integration needs.
Using Akka HTTP
---------------
Akka HTTP is provided in a separate jar file; to use it, make sure to include the following dependency::
"com.typesafe.akka" %% "akka-http-experimental" % "@version@" @crossString@
Mind that ``akka-http`` comes in two modules: ``akka-http-experimental`` and ``akka-http-core``. Because ``akka-http-experimental``
depends on ``akka-http-core`` you don't need to bring the latter explicitly. Still, you may need to do this in case you rely
solely on the low-level API.
Routing DSL for HTTP servers
----------------------------
The high-level, routing API of Akka HTTP provides a DSL to describe HTTP "routes" and how they should be handled.
Each route is composed of one or more levels of ``Directive`` s that narrow it down to handling one specific type of
request.
For example one route might start with matching the ``path`` of the request, only matching if it is "/hello", then
narrowing it down to only handle HTTP ``get`` requests and then ``complete`` those with a string literal, which
will be sent back as an HTTP OK with the string as response body.
Transforming request and response bodies between over-the-wire formats and objects to be used in your application is
done separately from the route declarations, in marshallers, which are pulled in implicitly using the "magnet" pattern.
This means that you can ``complete`` a request with any kind of object as long as there is an implicit marshaller
available in scope.
Default marshallers are provided for simple objects like String or ByteString, and you can define your own for example
for JSON. An additional module provides JSON serialization using the spray-json library (see :ref:`akka-http-spray-json`
for details).
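As an illustration, a hand-written marshaller for a hypothetical ``Greeting`` type might look like this (a sketch; the type and the fixed content type are assumptions, not part of the library)::

    import akka.http.scaladsl.marshalling.{ Marshaller, ToEntityMarshaller }
    import akka.http.scaladsl.model._

    case class Greeting(message: String)

    // With this implicit in scope, `complete(Greeting("hi"))` works inside a route.
    implicit val greetingMarshaller: ToEntityMarshaller[Greeting] =
      Marshaller.withFixedContentType(ContentTypes.`text/plain(UTF-8)`) { g =>
        HttpEntity(ContentTypes.`text/plain(UTF-8)`, g.message)
      }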
The ``Route`` created using the Route DSL is then "bound" to a port to start serving HTTP requests:
.. includecode2:: ../code/docs/http/scaladsl/HttpServerExampleSpec.scala
:snippet: minimal-routing-example
A common use case is to reply to a request using a model object, having the marshaller transform it into JSON. This
is shown by two separate routes: the first route queries an asynchronous database and marshals the
``Future[Option[Item]]`` result into a JSON response; the second unmarshals an ``Order`` from the incoming request,
saves it to the database and replies with an OK when done.
.. includecode2:: ../code/docs/http/scaladsl/SprayJsonExampleSpec.scala
:snippet: second-spray-json-example
The logic for the marshalling and unmarshalling JSON in this example is provided by the "spray-json" library
(details on how to use that here: :ref:`akka-http-spray-json`).
One of the strengths of Akka HTTP is that streaming data is at its heart, meaning that both request and response bodies
can be streamed through the server, achieving constant memory usage even for very large requests or responses. Streaming
responses will be backpressured by the remote client so that the server will not push data faster than the client can
handle; streaming requests mean that the server decides how fast the remote client can push the data of the request
body.
Example that streams random numbers as long as the client accepts them:
.. includecode:: ../code/docs/http/scaladsl/HttpServerExampleSpec.scala
:include: stream-random-numbers
Connecting to this service with a slow HTTP client would backpressure so that the next random number is produced on
demand with constant memory usage on the server. This can be seen using curl and limiting the rate:
``curl --limit-rate 50b 127.0.0.1:8080/random``
Akka HTTP routes easily interact with actors. In this example one route allows for placing bids in a fire-and-forget
style while the second route contains a request-response interaction with an actor. The resulting response is rendered
as JSON and returned when the response arrives from the actor.
.. includecode:: ../code/docs/http/scaladsl/HttpServerExampleSpec.scala
:include: actor-interaction
Again the logic for the marshalling and unmarshalling JSON in this example is provided by the "spray-json" library
(details on how to use that here: :ref:`akka-http-spray-json`).
Read more about the details of the high level APIs in the section :ref:`http-high-level-server-side-api`.
Low-level HTTP server APIs
--------------------------
The low-level Akka HTTP server APIs allow for handling connections or individual requests by accepting
``HttpRequest`` s and answering them by producing ``HttpResponse`` s. This is provided by the ``akka-http-core`` module.
APIs for handling such request-responses as function calls and as a ``Flow[HttpRequest, HttpResponse, _]`` are available.
.. includecode2:: ../code/docs/http/scaladsl/HttpServerExampleSpec.scala
:snippet: low-level-server-example
Read more details about the low level APIs in the section :ref:`http-low-level-server-side-api`.
HTTP client API
---------------
The client APIs provide methods for calling an HTTP server using the same ``HttpRequest`` and ``HttpResponse`` abstractions
that the Akka HTTP server uses, but add the concept of connection pools to allow multiple requests to the same server to be
handled more efficiently by re-using TCP connections to the server.
A simple request example:
.. includecode:: ../code/docs/http/scaladsl/HttpClientExampleSpec.scala
:include: single-request-example
Read more about the details of the client APIs in the section :ref:`http-client-side`.
The modules that make up Akka HTTP
----------------------------------
Akka HTTP is structured into several modules:
akka-http
Higher-level functionality, like (un)marshalling, (de)compression, as well as a powerful DSL
for defining HTTP-based APIs on the server-side. This is the recommended way to write HTTP servers
with Akka HTTP. Details can be found in the section :ref:`http-high-level-server-side-api`.
akka-http-core
A complete, mostly low-level, server- and client-side implementation of HTTP (incl. WebSockets).
Details can be found in the sections :ref:`http-low-level-server-side-api` and :ref:`http-client-side`.
akka-http-testkit
A test harness and set of utilities for verifying server-side service implementations
akka-http-spray-json
Predefined glue-code for (de)serializing custom types from/to JSON with spray-json_
Details can be found here: :ref:`akka-http-spray-json`
akka-http-xml
Predefined glue-code for (de)serializing custom types from/to XML with scala-xml_
Details can be found here: :ref:`akka-http-xml-marshalling`
.. _spray-json: https://github.com/spray/spray-json
.. _scala-xml: https://github.com/scala/scala-xml
.. _http-low-level-server-side-api:
Low-Level Server-Side API
=========================
Apart from the :ref:`HTTP Client <http-client-side>` Akka HTTP also provides an embedded,
`Reactive-Streams`_-based, fully asynchronous HTTP/1.1 server implemented on top of :ref:`Akka Stream <streams-scala>`.
It sports the following features:
- Full support for `HTTP persistent connections`_
- Full support for `HTTP pipelining`_
- Full support for asynchronous HTTP streaming including "chunked" transfer encoding accessible through an idiomatic API
- Optional SSL/TLS encryption
- WebSocket support
.. _HTTP persistent connections: http://en.wikipedia.org/wiki/HTTP_persistent_connection
.. _HTTP pipelining: http://en.wikipedia.org/wiki/HTTP_pipelining
.. _Reactive-Streams: http://www.reactive-streams.org/
The server-side components of Akka HTTP are split into two layers:
1. The basic low-level server implementation in the ``akka-http-core`` module
2. Higher-level functionality in the ``akka-http`` module
The low-level server (1) is scoped with a clear focus on the essential functionality of an HTTP/1.1 server:
- Connection management
- Parsing and rendering of messages and headers
- Timeout management (for requests and connections)
- Response ordering (for transparent pipelining support)
All non-core features of typical HTTP servers (like request routing, file serving, compression, etc.) are left to
the higher layers; they are not implemented by the ``akka-http-core``-level server itself.
Apart from providing a clear focus, this design keeps the server core small and light-weight as well as easy to understand and
maintain.
Depending on your needs you can either use the low-level API directly or rely on the high-level
:ref:`Routing DSL <http-high-level-server-side-api>` which can make the definition of more complex service logic much
easier.
.. note::
It is recommended to read the :ref:`implications-of-streaming-http-entities` section,
as it explains the underlying full-stack streaming concepts, which may be unexpected when coming
from a background with non-"streaming first" HTTP Servers.
Streams and HTTP
----------------
The Akka HTTP server is implemented on top of :ref:`Akka Stream <streams-scala>` and makes heavy use of it - in its
implementation as well as on all levels of its API.
On the connection level Akka HTTP offers basically the same kind of interface as :ref:`Akka Stream IO <stream-io-scala>`:
A socket binding is represented as a stream of incoming connections. The application pulls connections from this stream
source and, for each of them, provides a ``Flow[HttpRequest, HttpResponse, _]`` to "translate" requests into responses.
Apart from regarding a socket bound on the server-side as a ``Source[IncomingConnection]`` and each connection as a
``Source[HttpRequest]`` with a ``Sink[HttpResponse]`` the stream abstraction is also present inside a single HTTP
message: The entities of HTTP requests and responses are generally modeled as a ``Source[ByteString]``. See also
the :ref:`http-model-scala` for more information on how HTTP messages are represented in Akka HTTP.
Starting and Stopping
---------------------
On the most basic level an Akka HTTP server is bound by invoking the ``bind`` method of the `akka.http.scaladsl.Http`_
extension:
.. includecode2:: ../code/docs/http/scaladsl/HttpServerExampleSpec.scala
:snippet: binding-example
Arguments to the ``Http().bind`` method specify the interface and port to bind to and register interest in handling
incoming HTTP connections. Additionally, the method also allows for the definition of socket options as well as a larger
number of settings for configuring the server according to your needs.
The result of the ``bind`` method is a ``Source[Http.IncomingConnection]`` which must be drained by the application in
order to accept incoming connections.
The actual binding is not performed before this source is materialized as part of a processing pipeline. In
case the bind fails (e.g. because the port is already busy) the materialized stream will immediately be terminated with
a respective exception.
The binding is released (i.e. the underlying socket unbound) when the subscriber of the incoming
connection source has cancelled its subscription. Alternatively one can use the ``unbind()`` method of the
``Http.ServerBinding`` instance that is created as part of the connection source's materialization process.
The ``Http.ServerBinding`` also provides a way to get a hold of the actual local address of the bound socket, which is
useful for example when binding to port zero (and thus letting the OS pick an available port).
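Binding to port zero and reading back the OS-assigned port can be sketched like this (a sketch; the handler is a placeholder)::

    import akka.actor.ActorSystem
    import akka.http.scaladsl.Http
    import akka.http.scaladsl.model.HttpResponse
    import akka.stream.ActorMaterializer

    implicit val system = ActorSystem()
    implicit val materializer = ActorMaterializer()
    import system.dispatcher // execution context for Future.foreach

    // Bind to port 0 and let the OS pick a free port:
    val bindingFuture =
      Http().bindAndHandleSync(_ => HttpResponse(entity = "OK"), "127.0.0.1", port = 0)

    // The actual address (including the chosen port) is available on the ServerBinding:
    bindingFuture.foreach(binding => println(s"Bound to ${binding.localAddress}"))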
.. _akka.http.scaladsl.Http: @github@/akka-http-core/src/main/scala/akka/http/scaladsl/Http.scala
Request-Response Cycle
----------------------
When a new connection has been accepted it will be published as an ``Http.IncomingConnection`` which consists
of the remote address and methods to provide a ``Flow[HttpRequest, HttpResponse, _]`` to handle requests coming in over
this connection.
Requests are handled by calling one of the ``handleWithXXX`` methods with a handler, which can either be
- a ``Flow[HttpRequest, HttpResponse, _]`` for ``handleWith``,
- a function ``HttpRequest => HttpResponse`` for ``handleWithSyncHandler``,
- a function ``HttpRequest => Future[HttpResponse]`` for ``handleWithAsyncHandler``.
Here is a complete example:
.. includecode2:: ../code/docs/http/scaladsl/HttpServerExampleSpec.scala
:snippet: full-server-example
In this example, a request is handled by transforming the request stream with a function ``HttpRequest => HttpResponse``
using ``handleWithSyncHandler`` (or equivalently, Akka Stream's ``map`` operator). Depending on the use case many
other ways of providing a request handler are conceivable using Akka Stream's combinators.
If the application provides a ``Flow`` it is also the responsibility of the application to generate exactly one response
for every request, and to make sure that the ordering of responses matches the ordering of the associated requests (which is relevant
if HTTP pipelining is enabled, where processing of multiple incoming requests may overlap). When relying on
``handleWithSyncHandler`` or ``handleWithAsyncHandler``, or the ``map`` or ``mapAsync`` stream operators, this
requirement will be automatically fulfilled.
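For instance, the ordering requirement is satisfied by ``mapAsync``, which emits results in the original request order even when the futures complete out of order (a sketch; the handler body is a placeholder)::

    import akka.NotUsed
    import akka.http.scaladsl.model.{ HttpRequest, HttpResponse }
    import akka.stream.scaladsl.Flow
    import scala.concurrent.Future

    // mapAsync preserves request order (unlike mapAsyncUnordered),
    // so responses are emitted in the order the requests arrived.
    val handler: Flow[HttpRequest, HttpResponse, NotUsed] =
      Flow[HttpRequest].mapAsync(parallelism = 4) { request =>
        Future.successful(HttpResponse(entity = "Hello!"))
      }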
Streaming Request/Response Entities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Streaming of HTTP message entities is supported through subclasses of ``HttpEntity``. The application needs to be able
to deal with streamed entities when receiving a request as well as, in many cases, when constructing responses.
See :ref:`HttpEntity-scala` for a description of the alternatives.
If you rely on the :ref:`http-marshalling-scala` and/or :ref:`http-unmarshalling-scala` facilities provided by
Akka HTTP then the conversion of custom types to and from streamed entities can be quite convenient.
.. _http-closing-connection-low-level:
Closing a connection
~~~~~~~~~~~~~~~~~~~~
The HTTP connection will be closed when the handling ``Flow`` cancels its upstream subscription or the peer closes the
connection. An often more convenient alternative is to explicitly add a ``Connection: close`` header to an
``HttpResponse``. This response will then be the last one on the connection and the server will actively close the
connection when it has been sent out.
The connection will also be closed if the request entity has been cancelled (e.g. by attaching it to ``Sink.cancelled``)
or consumed only partially (e.g. by using the ``take`` combinator). In order to prevent this behaviour the entity should be
explicitly drained by attaching it to ``Sink.ignore``.
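Both techniques can be sketched as follows (assuming an implicit materializer is in scope)::

    import akka.http.scaladsl.model._
    import akka.http.scaladsl.model.headers.Connection
    import akka.stream.scaladsl.Sink

    // Ask the server to close the connection after sending this response:
    val closingResponse =
      HttpResponse(headers = List(Connection("close")), entity = "Bye!")

    // Explicitly drain a request entity that is otherwise not consumed:
    def drain(request: HttpRequest)(implicit mat: akka.stream.Materializer) =
      request.entity.dataBytes.runWith(Sink.ignore)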
Configuring Server-side HTTPS
-----------------------------
For detailed documentation about configuring and using HTTPS on the server-side refer to :ref:`serverSideHTTPS-scala`.
.. _http-server-layer-scala:
Stand-Alone HTTP Layer Usage
----------------------------
Due to its Reactive-Streams-based nature the Akka HTTP layer is fully detachable from the underlying TCP
interface. While in most applications this "feature" will not be crucial, it can be useful in certain cases to be able
to "run" the HTTP layer (and, potentially, higher layers) against data that does not come from the network but rather
some other source. Potential scenarios where this might be useful include tests, debugging or low-level event-sourcing
(e.g. by replaying network traffic).
On the server-side the stand-alone HTTP layer forms a ``BidiFlow`` that is defined like this:
.. includecode2:: /../../akka-http-core/src/main/scala/akka/http/scaladsl/Http.scala
:snippet: server-layer
You create an instance of ``Http.ServerLayer`` by calling one of the two overloads of the ``Http().serverLayer`` method,
which also allows for varying degrees of configuration.
Controlling server parallelism
------------------------------
Request handling can be parallelized on two axes, by handling several connections in parallel and by
relying on HTTP pipelining to send several requests on one connection without waiting for a response first. In both
cases the client controls the number of ongoing requests. To prevent being overloaded by too many requests, Akka HTTP
can limit the number of requests it handles in parallel.
To limit the number of simultaneously open connections, use the ``akka.http.server.max-connections`` setting. This setting
applies to all ``Http.bindAndHandle*`` methods. If you use ``Http.bind``, incoming connections are represented by
a ``Source[IncomingConnection, ...]``. Use Akka Stream's combinators to apply backpressure to control the flow of
incoming connections, e.g. by using ``throttle`` or ``mapAsync``.
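For example, the incoming connection source can be throttled like this (a sketch, assuming an implicit system and materializer; the rate numbers are arbitrary)::

    import akka.http.scaladsl.Http
    import akka.http.scaladsl.model.HttpResponse
    import akka.stream.ThrottleMode
    import scala.concurrent.duration._

    // Accept at most 10 new connections per second:
    Http().bind(interface = "127.0.0.1", port = 8080)
      .throttle(10, per = 1.second, maximumBurst = 10, mode = ThrottleMode.Shaping)
      .runForeach { connection =>
        connection.handleWithSyncHandler(_ => HttpResponse(entity = "OK"))
      }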
HTTP pipelining is generally discouraged (and `disabled by most browsers <https://en.wikipedia.org/w/index.php?title=HTTP_pipelining&oldid=700966692#Implementation_in_web_browsers>`_) but
is nevertheless fully supported in Akka HTTP. The limit is applied on two levels. First, there's the
``akka.http.server.pipeline-limit`` config setting which prevents more than the given number of outstanding requests
from ever being given to the user-supplied handler-flow. On the other hand, the handler flow itself can apply any kind
of throttling. If you use one of the ``Http.bindAndHandleSync`` or ``Http.bindAndHandleAsync``
entry-points, you can specify the ``parallelism`` argument (default = 1, i.e. pipelining disabled) to control the
number of concurrent requests per connection. If you use ``Http.bindAndHandle`` or ``Http.bind``, the user-supplied handler
flow has full control over how many requests it accepts simultaneously by applying backpressure. In this case, you can
e.g. use Akka Stream's ``mapAsync`` combinator with a given parallelism to limit the number of concurrently handled requests.
Effectively, the more constraining one of these two measures, config setting and manual flow shaping, will determine
how parallel requests on one connection are handled.
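The relevant settings can be adjusted in ``application.conf`` (a sketch; the values shown are the library defaults)::

    akka.http.server {
      # maximum number of simultaneously open connections
      max-connections = 1024
      # maximum number of outstanding pipelined requests per connection
      pipeline-limit = 16
    }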
.. _handling-http-server-failures-low-level-scala:
Handling HTTP Server failures in the Low-Level API
--------------------------------------------------
There are various situations when failure may occur while initialising or running an Akka HTTP server.
Akka by default will log all these failures; however, sometimes one may want to react to failures in addition to them
just being logged, for example by shutting down the actor system or notifying some external monitoring end-point explicitly.
There are multiple things that can fail when creating and materializing an HTTP server (similarly, the same applies to
a plain streaming ``Tcp()`` server). Failures can happen on different layers of the stack, starting
from being unable to start the server and ending with failing to unmarshal an ``HttpRequest``. Examples of failures include
(from outer-most to inner-most):
- Failure to ``bind`` to the specified address/port,
- Failure while accepting new ``IncomingConnection`` s, for example when the OS has run out of file descriptors or memory,
- Failure while handling a connection, for example if the incoming ``HttpRequest`` is malformed.
This section describes how to handle each failure situation, and in which situations these failures may occur.
Bind failures
^^^^^^^^^^^^^
The first type of failure is when the server is unable to bind to the given port. For example when the port
is already taken by another application, or if the port is privileged (i.e. only usable by ``root``).
In this case the "binding future" will fail immediately, and we can react to it by listening on the Future's completion:
.. includecode2:: ../code/docs/http/scaladsl/HttpServerExampleSpec.scala
:snippet: binding-failure-handling
Once the server has successfully bound to a port, the ``Source[IncomingConnection, _]`` starts running and emitting
new incoming connections. This source technically can signal a failure as well, however this should only happen in very
dramatic situations such as running out of file descriptors or memory available to the system, such that it's not able
to accept a new incoming connection. Handling failures in Akka Streams is pretty straightforward, as failures are signaled
through the stream starting from the stage which failed, all the way downstream to the final stages.
Connections Source failures
^^^^^^^^^^^^^^^^^^^^^^^^^^^
In the example below we add a custom ``GraphStage`` (see :ref:`stream-customize-scala`) in order to react to the
stream's failure. We signal a ``failureMonitor`` actor with the cause of why the stream is going down, and let the actor
handle the rest; maybe it'll decide to restart the server or shut down the ActorSystem, but that is not our concern anymore.
.. includecode2:: ../code/docs/http/scaladsl/HttpServerExampleSpec.scala
:snippet: incoming-connections-source-failure-handling
Connection failures
^^^^^^^^^^^^^^^^^^^
The third type of failure that can occur is when the connection has been properly established,
but is afterwards terminated abruptly, for example by the client aborting the underlying TCP connection.
To handle this failure we can use the same pattern as in the previous snippet, however applied to the connection's Flow:
.. includecode2:: ../code/docs/http/scaladsl/HttpServerExampleSpec.scala
:snippet: connection-stream-failure-handling
These failures can be described as more or less infrastructure-related; they are failing bindings or connections.
Most of the time you won't need to dive into those very deeply, as Akka will simply log errors of this kind
anyway, which is a reasonable default for such problems.
In order to learn more about handling exceptions in the actual routing layer, which is where your application code
comes into the picture, refer to :ref:`exception-handling-scala` which focuses explicitly on explaining how exceptions
thrown in routes can be handled and transformed into :class:`HttpResponse` s with appropriate error codes and human-readable failure descriptions.
.. _http-javadsl-migration-guide:
Migration Guide from "old" HTTP JavaDSL
=======================================
The so-called "old" JavaDSL for Akka HTTP was initially developed during the project's experimental phase,
and thanks to multiple user comments and contributions we were able to come up with a more Java 8 "feel",
which at the same time is also closer to the existing ScalaDSL.
The previous DSL has been entirely removed and replaced with the so-called "new" one.
Upgrading to the new DSL is **highly encouraged** since the old one was not only rather hard to work with,
it actually made it impossible to express many typical use-cases.
The major changes include:
HttpApp is gone
---------------
``HttpApp`` (a helper class containing a ``main()`` implementation) is gone, as we would like to encourage understanding
how the various elements of the API fit together.
Instead developers should start applications "manually", by converting a ``Route`` to a ``Flow<HttpRequest, HttpResponse, ?>``
using the ``Route.flow`` method. For examples of full apps refer to :ref:`http-testkit-java`.
``RequestVal`` is gone
----------------------
The old API heavily relied on the concept of "request values" which could be used to extract a value from a request context.
Based on community feedback and our own experience we found them too hard to work with in more complex settings.
The concept of a request value has been completely removed, and replaced with proper "directives", exactly like in the ScalaDSL.
**Previously**::
RequestVal<Host> host = Headers.byClass(Host.class).instance();
final Route route =
route(
handleWith1(host, (ctx, h) ->
ctx.complete(String.format("Host header was: %s", h.host()))
)
);
**Now**::
final Route route =
headerValueByType(Host.class, host -> complete("Host was: " + host));
All of ScalaDSL routing has corresponding JavaDSL
-------------------------------------------------
``Route``, ``RouteResult`` and other important core concepts such as ``Rejections`` are now modeled 1:1 with Scala,
making it much simpler to understand one API based on the other, which is tremendously useful when learning about some nice
pattern from a blog post that used Scala yet needing to apply it in Java, and the other way around.
It is now possible to implement marshallers using Java. Refer to :ref:`marshalling-java` for details.
Some complete* overloads changed to completeOK*
-----------------------------------------------
In the JavaDSL, when ``complete`` is called with only an entity, the ``OK`` response code is *assumed*;
to make this more explicit these methods contain the word ``OK`` in them.
This has been made more consistent than previously, across all overloads and Future-versions of these APIs.
Migration help
--------------
As always, feel free to reach out via the `akka-user <https://groups.google.com/forum/#!searchin/akka-user/>`_ mailing list or gitter channels,
to seek help or guidance when migrating from the old APIs.
For Lightbend subscription owners it is possible to reach out to the core team for help in the migration by asking specific
questions via the `Lightbend customer portal <https://portal.lightbend.com/>`_.
Migration Guide from Spray
==========================
General notes
-------------
Features which have not been ported to Akka HTTP:
- ``respondWithStatus`` also known as ``overrideStatusCode`` has not been forward ported to Akka HTTP,
as it has been seen mostly as an anti-pattern. More information here: https://github.com/akka/akka/issues/18626
- ``respondWithMediaType`` was considered an anti-pattern in spray and is not ported to Akka HTTP.
Instead users should rely on content type negotiation as Akka HTTP implements it.
More information here: https://github.com/akka/akka/issues/18625
- :ref:`registeringCustomMediaTypes` changed from Spray in order not to rely on global state.
Removed HttpService
-------------------
Spray's ``HttpService`` was removed. This means that Scala code like this::
val service = system.actorOf(Props(new HttpServiceActor(routes)))
IO(Http)(system) ! Http.Bind(service, "0.0.0.0", port = 8080)
needs to be changed into::
Http().bindAndHandle(routes, "0.0.0.0", port = 8080)
Changes in Marshalling
----------------------
``Marshaller.of`` can be replaced with ``Marshaller.withFixedContentType``.
Was::
Marshaller.of[JsonApiObject](`application/json`) { (value, contentType, ctx) =>
ctx.marshalTo(HttpEntity(contentType, value.toJson.toString))
}
Replace with::
Marshaller.withFixedContentType(`application/json`) { obj =>
HttpEntity(`application/json`, obj.toJson.compactPrint)
}
Akka HTTP marshallers support content negotiation, so it is no longer necessary to specify the content type
when creating one “super” marshaller from other marshallers:
Before::
ToResponseMarshaller.oneOf(
`application/vnd.api+json`,
`application/json`
)(
jsonApiMarshaller,
jsonMarshaller
)
After::
Marshaller.oneOf(
jsonApiMarshaller,
jsonMarshaller
)
Changes in Unmarshalling
------------------------
Akka HTTP contains a set of predefined unmarshallers. This means that Scala code like this::
Unmarshaller[Entity](`application/json`) {
case HttpEntity.NonEmpty(contentType, data) =>
data.asString.parseJson.convertTo[Entity]
}
needs to be changed into::
Unmarshaller
.stringUnmarshaller
.forContentTypes(`application/json`)
.map(_.parseJson.convertTo[Entity])
Changes in MediaTypes
---------------------
``MediaType.custom`` can be replaced with specific methods in ``MediaType`` object.
Was::
MediaType.custom("application/vnd.acme+json")
Replace with::
MediaType.applicationWithFixedCharset("application/vnd.acme+json", HttpCharsets.`UTF-8`)
Changes in Rejection Handling
-----------------------------
``RejectionHandler`` now uses a builder pattern; see the example:
Before::
def rootRejectionHandler = RejectionHandler {
case Nil =>
requestUri { uri =>
logger.error("Route: {} does not exist.", uri)
complete((NotFound, mapErrorToRootObject(notFoundError)))
}
case AuthenticationFailedRejection(cause, challengeHeaders) :: _ => {
logger.error(s"Request is rejected with cause: $cause")
complete((Unauthorized, mapErrorToRootObject(unauthenticatedError)))
}
}
After::
RejectionHandler
.newBuilder()
.handle {
case AuthenticationFailedRejection(cause, challengeHeaders) =>
logger.error(s"Request is rejected with cause: $cause")
complete((Unauthorized, mapErrorToRootObject(unauthenticatedError)))
}
.handleNotFound { ctx =>
logger.error("Route: {} does not exist.", ctx.request.uri.toString())
ctx.complete((NotFound, mapErrorToRootObject(notFoundError)))
}
.result()
.withFallback(RejectionHandler.default)
Changes in HTTP Client
----------------------
The Spray-client pipeline was removed. ``Http().singleRequest`` should be used instead of ``sendReceive``::
// this will no longer work
val token = Authorization(OAuth2BearerToken(accessToken))
val pipeline: HttpRequest => Future[HttpResponse] = (addHeader(token) ~> sendReceive)
val patch: HttpRequest = Patch(uri, obj)
pipeline(patch).map { response ⇒
}
needs to be changed into::
val request = HttpRequest(
method = PATCH,
uri = Uri(uri),
headers = List(Authorization(OAuth2BearerToken(accessToken))),
entity = HttpEntity(MediaTypes.`application/json`, obj)
)
http.singleRequest(request).map {
case … => …
}
Changes in Headers
------------------
All HTTP headers have been moved to the ``akka.http.scaladsl.model.headers._`` package.
Changes in form fields and file upload directives
-------------------------------------------------
With the streaming nature of HTTP entities, it is important to have a strict HTTP entity before accessing
multiple form fields or using file upload directives.
One solution might be using the following directive before working with form fields::
val toStrict: Directive0 = extractRequest flatMap { request =>
onComplete(request.entity.toStrict(5.seconds)) flatMap {
case Success(strict) =>
mapRequest( req => req.copy(entity = strict))
case _ => reject
}
}
And one can use it like this::
toStrict {
formFields("name".as[String]) { name =>
...
}
}
Migration Guide between experimental builds of Akka HTTP (2.4.x)
================================================================
General notes
-------------
Please note that Akka HTTP consists of a number of modules, most notably ``akka-http-core``
which is **stable** and won't be breaking compatibility without a proper deprecation cycle,
and ``akka-http`` which contains the routing DSLs and is still **experimental**.
The following migration guide explains migration steps to be made between breaking
versions of the **experimental** part of Akka HTTP.
.. note::
Please note that experimental modules are allowed (and are expected to) break compatibility
in search of the best API we can offer, before the API is frozen in a stable release.
Please read :ref:`BinCompatRules` to understand in depth what bin-compat rules are, and where they are applied.
Akka HTTP 2.4.7 -> 2.4.8
------------------------
``SecurityDirectives#challengeFor`` has moved
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``challengeFor`` directive was actually more like a factory for ``HttpChallenge``,
thus it was moved to become such. It is now available as ``akka.http.javadsl.model.headers.HttpChallenge#create[Basic|OAuth2]``
for JavaDSL and ``akka.http.scaladsl.model.headers.HttpChallenges#[basic|oAuth2]`` for ScalaDSL.
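The new call sites look roughly like this (a sketch; the realm string is an assumption)::

    // ScalaDSL
    import akka.http.scaladsl.model.headers.HttpChallenges
    val challenge = HttpChallenges.basic("MyRealm")

    // JavaDSL
    // HttpChallenge challenge = HttpChallenge.createBasic("MyRealm");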
Akka HTTP 2.4.8 -> 2.4.9
------------------------
Java DSL Package structure changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We have aligned the package structure of the Java based DSL with the Scala based DSL
and moved classes that were in the wrong or unexpected places around a bit. This means
that Java DSL users must update their imports as follows:
Classes dealing with unmarshalling and marshalling used to reside in ``akka.http.javadsl.server``,
but are now available from the packages ``akka.http.javadsl.unmarshalling`` and ``akka.http.javadsl.marshalling``.
``akka.http.javadsl.server.Coder`` is now ``akka.http.javadsl.coding.Coder``.
``akka.http.javadsl.server.RegexConverters`` is now ``akka.http.javadsl.common.RegexConverters``.
.. _Case Class Extraction:
Case Class Extraction
=====================
The value extraction performed by :ref:`Directives` is a nice way of providing your route logic with interesting request
properties, all with proper type-safety and error handling. However, in some cases you might want even more.
Consider this example:
.. includecode2:: ../../code/docs/http/scaladsl/server/CaseClassExtractionExamplesSpec.scala
:snippet: example-1
Here the :ref:`-parameters-scala-` directive is employed to extract three ``Int`` values, which are then used to construct an
instance of the ``Color`` case class. So far so good. However, if the model classes we'd like to work with have more
than just a few parameters the overhead introduced by capturing the arguments as extractions only to feed them into the
model class constructor directly afterwards can somewhat clutter up your route definitions.
If your model classes are case classes, as in our example, Akka HTTP supports an even shorter and more concise
syntax. You can also write the example above like this:
.. includecode2:: ../../code/docs/http/scaladsl/server/CaseClassExtractionExamplesSpec.scala
:snippet: example-2
You can postfix any directive with extractions with an ``as(...)`` call. By simply passing the companion object of your
model case class to the ``as`` modifier method the underlying directive is transformed into an equivalent one, which
extracts only one value of the type of your model class. Note that there is no reflection involved and your case class
does not have to implement any special interfaces. The only requirement is that the directive you attach the ``as``
call to produces the right number of extractions, with the right types and in the right order.
If you'd like to construct a case class instance from extractions produced by *several* directives you can first join
the directives with the ``&`` operator before using the ``as`` call:
.. includecode2:: ../../code/docs/http/scaladsl/server/CaseClassExtractionExamplesSpec.scala
:snippet: example-3
Here the ``Color`` class has gotten another member, ``name``, which is supplied not as a parameter but as a path
element. By joining the ``path`` and ``parameters`` directives with ``&`` you create a directive extracting 4 values,
which directly fit the member list of the ``Color`` case class. Therefore you can use the ``as`` modifier to convert
the directive into one extracting only a single ``Color`` instance.
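The joined form can be sketched like this (again a hedged illustration, not the snippet referenced above):

```scala
import akka.http.scaladsl.server.Directives._

case class Color(name: String, red: Int, green: Int, blue: Int)

// `path` extracts the name segment, `parameters` extracts the three components;
// joined with `&` they produce exactly the 4 values Color's constructor needs.
val route =
  (path("color" / Segment) &
    parameters('red.as[Int], 'green.as[Int], 'blue.as[Int])).as(Color) { color =>
    complete(s"You asked for $color")
  }
```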
Generally, when you have routes that work with, say, more than 3 extractions it's a good idea to introduce a case class
for these and resort to case class extraction. Especially since it supports another nice feature: validation.
.. caution:: There is one quirk to look out for when using case class extraction: If you create an explicit companion
object for your case class, no matter whether you actually add any members to it or not, the syntax presented above
will not (quite) work anymore. Instead of ``as(Color)`` you will then have to say ``as(Color.apply)``. This behavior
appears as if it's not really intended, so this might be improved in future Scala versions.
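With an explicit companion object the call would then read as follows (sketch):

```scala
import akka.http.scaladsl.server.Directives._

case class Color(red: Int, green: Int, blue: Int)
object Color // explicit companion object, e.g. for adding custom members later

// `as(Color)` no longer works here; pass the synthesized apply method instead.
val route =
  parameters('red.as[Int], 'green.as[Int], 'blue.as[Int]).as(Color.apply) { color =>
    complete(s"You asked for $color")
  }
```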
Case Class Validation
---------------------
In many cases your web service needs to verify input parameters according to some logic before actually working with
them. E.g. in the example above the restriction might be that all color component values must be between 0 and 255.
You could get this done with a few :ref:`-validate-` directives but this would quickly become cumbersome and hard to
read.
If you use case class extraction you can put the verification logic into the constructor of your case class, where it
should be:
.. includecode2:: ../../code/docs/http/scaladsl/server/CaseClassExtractionExamplesSpec.scala
:snippet: example-4
If you write your validations like this Akka HTTP's case class extraction logic will properly pick up all error
messages and generate a ``ValidationRejection`` if something goes wrong. By default, ``ValidationRejections`` are
converted into a ``400 Bad Request`` error response by the default :ref:`RejectionHandler <The RejectionHandler>`, if no
subsequent route successfully handles the request.
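The constructor-side validation itself is plain Scala — a minimal sketch:

```scala
// require() throws an IllegalArgumentException with the given message;
// the case class extraction logic turns this into a ValidationRejection.
case class Color(red: Int, green: Int, blue: Int) {
  require(0 <= red && red <= 255, "red color component must be between 0 and 255")
  require(0 <= green && green <= 255, "green color component must be between 0 and 255")
  require(0 <= blue && blue <= 255, "blue color component must be between 0 and 255")
}
```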


@ -1,232 +0,0 @@
.. _Predefined Directives:
Predefined Directives (alphabetically)
======================================
=========================================== ============================================================================
Directive Description
=========================================== ============================================================================
:ref:`-authenticateBasic-` Wraps the inner route with Http Basic authentication support using a given
``Authenticator[T]``
:ref:`-authenticateBasicAsync-` Wraps the inner route with Http Basic authentication support using a given
``AsyncAuthenticator[T]``
:ref:`-authenticateBasicPF-` Wraps the inner route with Http Basic authentication support using a given
``AuthenticatorPF[T]``
:ref:`-authenticateBasicPFAsync-` Wraps the inner route with Http Basic authentication support using a given
``AsyncAuthenticatorPF[T]``
:ref:`-authenticateOAuth2-` Wraps the inner route with OAuth Bearer Token authentication support using
a given ``AuthenticatorPF[T]``
:ref:`-authenticateOAuth2Async-` Wraps the inner route with OAuth Bearer Token authentication support using
a given ``AsyncAuthenticator[T]``
:ref:`-authenticateOAuth2PF-` Wraps the inner route with OAuth Bearer Token authentication support using
a given ``AuthenticatorPF[T]``
:ref:`-authenticateOAuth2PFAsync-` Wraps the inner route with OAuth Bearer Token authentication support using
a given ``AsyncAuthenticatorPF[T]``
:ref:`-authenticateOrRejectWithChallenge-` Lifts an authenticator function into a directive
:ref:`-authorize-` Applies the given authorization check to the request
:ref:`-authorizeAsync-` Applies the given asynchronous authorization check to the request
:ref:`-cancelRejection-` Adds a ``TransformationRejection`` cancelling all rejections equal to the
given one to the rejections potentially coming back from the inner route.
:ref:`-cancelRejections-` Adds a ``TransformationRejection`` cancelling all matching rejections
to the rejections potentially coming back from the inner route
:ref:`-checkSameOrigin-` Checks that the request comes from the same origin
:ref:`-complete-` Completes the request using the given arguments
:ref:`-completeOrRecoverWith-` "Unwraps" a ``Future[T]`` and runs the inner route when the future has
failed with the error as an extraction of type ``Throwable``
:ref:`-completeWith-` Uses the marshaller for a given type to extract a completion function
:ref:`-conditional-` Wraps its inner route with support for conditional requests as defined
by http://tools.ietf.org/html/rfc7232
:ref:`-cookie-` Extracts the ``HttpCookie`` with the given name
:ref:`-decodeRequest-` Decompresses the request if it is ``gzip`` or ``deflate`` compressed
:ref:`-decodeRequestWith-` Decodes the incoming request using one of the given decoders
:ref:`-delete-` Rejects all non-DELETE requests
:ref:`-deleteCookie-` Adds a ``Set-Cookie`` response header expiring the given cookies
:ref:`-encodeResponse-` Encodes the response with the encoding that is requested by the client
via the ``Accept-Encoding`` header (``NoCoding``, ``Gzip`` and ``Deflate``)
:ref:`-encodeResponseWith-` Encodes the response with the encoding that is requested by the client
via the ``Accept-Encoding`` header (from a user-defined set)
:ref:`-entity-` Extracts the request entity unmarshalled to a given type
:ref:`-extract-` Extracts a single value using a ``RequestContext ⇒ T`` function
:ref:`-extractDataBytes-` Extracts the entity's data bytes as a stream ``Source[ByteString, Any]``
:ref:`-extractClientIP-` Extracts the client's IP from either the ``X-Forwarded-``,
``Remote-Address`` or ``X-Real-IP`` header
:ref:`-extractCredentials-` Extracts the potentially present ``HttpCredentials`` provided with the
request's ``Authorization`` header
:ref:`-extractExecutionContext-` Extracts the ``ExecutionContext`` from the ``RequestContext``
:ref:`-extractMaterializer-` Extracts the ``Materializer`` from the ``RequestContext``
:ref:`-extractHost-` Extracts the hostname part of the Host request header value
:ref:`-extractLog-` Extracts the ``LoggingAdapter`` from the ``RequestContext``
:ref:`-extractMethod-` Extracts the request method
:ref:`-extractRequest-` Extracts the current ``HttpRequest`` instance
:ref:`-extractRequestContext-` Extracts the ``RequestContext`` itself
:ref:`-extractRequestEntity-` Extracts the ``RequestEntity`` from the ``RequestContext``
:ref:`-extractScheme-` Extracts the URI scheme from the request
:ref:`-extractSettings-` Extracts the ``RoutingSettings`` from the ``RequestContext``
:ref:`-extractUnmatchedPath-` Extracts the yet unmatched path from the ``RequestContext``
:ref:`-extractUri-` Extracts the complete request URI
:ref:`-failWith-` Bubbles the given error up the response chain where it is dealt with by the
closest :ref:`-handleExceptions-` directive and its ``ExceptionHandler``
:ref:`-fileUpload-` Provides a stream of an uploaded file from a multipart request
:ref:`-formField-scala-` Extracts an HTTP form field from the request
:ref:`-formFieldMap-` Extracts a number of HTTP form fields from the request as
a ``Map[String, String]``
:ref:`-formFieldMultiMap-` Extracts a number of HTTP form fields from the request as
a ``Map[String, List[String]]``
:ref:`-formFields-` Extracts a number of HTTP form fields from the request
:ref:`-formFieldSeq-` Extracts a number of HTTP form fields from the request as
a ``Seq[(String, String)]``
:ref:`-get-` Rejects all non-GET requests
:ref:`-getFromBrowseableDirectories-` Serves the content of the given directories as a file-system browser, i.e.
files are sent and directories served as browseable listings
:ref:`-getFromBrowseableDirectory-` Serves the content of the given directory as a file-system browser, i.e.
files are sent and directories served as browseable listings
:ref:`-getFromDirectory-` Completes GET requests with the content of a file underneath a given
file-system directory
:ref:`-getFromFile-` Completes GET requests with the content of a given file
:ref:`-getFromResource-` Completes GET requests with the content of a given class-path resource
:ref:`-getFromResourceDirectory-` Completes GET requests with the content of a file underneath a given
"class-path resource directory"
:ref:`-handleExceptions-` Transforms exceptions thrown during evaluation of the inner route using the
given ``ExceptionHandler``
:ref:`-handleRejections-` Transforms rejections produced by the inner route using the given
``RejectionHandler``
:ref:`-handleWebSocketMessages-` Handles websocket requests with the given handler and rejects other requests
with an ``ExpectedWebSocketRequestRejection``
:ref:`-handleWebSocketMessagesForProtocol-` Handles websocket requests with the given handler if the subprotocol matches
and rejects other requests with an ``ExpectedWebSocketRequestRejection`` or
an ``UnsupportedWebSocketSubprotocolRejection``.
:ref:`-handleWith-` Completes the request using a given function
:ref:`-head-` Rejects all non-HEAD requests
:ref:`-headerValue-` Extracts an HTTP header value using a given ``HttpHeader ⇒ Option[T]``
function
:ref:`-headerValueByName-` Extracts the value of the first HTTP request header with a given name
:ref:`-headerValueByType-` Extracts the first HTTP request header of the given type
:ref:`-headerValuePF-` Extracts an HTTP header value using a given
``PartialFunction[HttpHeader, T]``
:ref:`-host-` Rejects all requests with a non-matching host name
:ref:`-listDirectoryContents-` Completes GET requests with a unified listing of the contents of all given
file-system directories
:ref:`-logRequest-` Produces a log entry for every incoming request
:ref:`-logRequestResult-` Produces a log entry for every incoming request and ``RouteResult``
:ref:`-logResult-` Produces a log entry for every ``RouteResult``
:ref:`-mapInnerRoute-` Transforms its inner ``Route`` with a ``Route => Route`` function
:ref:`-mapRejections-` Transforms rejections from a previous route with an
``immutable.Seq[Rejection] ⇒ immutable.Seq[Rejection]`` function
:ref:`-mapRequest-` Transforms the request with an ``HttpRequest => HttpRequest`` function
:ref:`-mapRequestContext-` Transforms the ``RequestContext`` with a
``RequestContext => RequestContext`` function
:ref:`-mapResponse-` Transforms the response with an ``HttpResponse => HttpResponse`` function
:ref:`-mapResponseEntity-` Transforms the response entity with an ``ResponseEntity ⇒ ResponseEntity``
function
:ref:`-mapResponseHeaders-` Transforms the response headers with an
``immutable.Seq[HttpHeader] ⇒ immutable.Seq[HttpHeader]`` function
:ref:`-mapRouteResult-` Transforms the ``RouteResult`` with a ``RouteResult ⇒ RouteResult``
function
:ref:`-mapRouteResultFuture-` Transforms the ``RouteResult`` future with a
``Future[RouteResult] ⇒ Future[RouteResult]`` function
:ref:`-mapRouteResultPF-` Transforms the ``RouteResult`` with a
``PartialFunction[RouteResult, RouteResult]``
:ref:`-mapRouteResultWith-` Transforms the ``RouteResult`` with a
``RouteResult ⇒ Future[RouteResult]`` function
:ref:`-mapRouteResultWithPF-` Transforms the ``RouteResult`` with a
``PartialFunction[RouteResult, Future[RouteResult]]``
:ref:`-mapSettings-` Transforms the ``RoutingSettings`` with a
``RoutingSettings ⇒ RoutingSettings`` function
:ref:`-mapUnmatchedPath-` Transforms the ``unmatchedPath`` of the ``RequestContext`` using a
``Uri.Path ⇒ Uri.Path`` function
:ref:`-method-` Rejects all requests whose HTTP method does not match the given one
:ref:`-onComplete-` "Unwraps" a ``Future[T]`` and runs the inner route after future completion
with the future's value as an extraction of type ``Try[T]``
:ref:`-onCompleteWithBreaker-` "Unwraps" a ``Future[T]`` inside a ``CircuitBreaker`` and runs the inner
route after future completion with the future's value as an extraction of
type ``Try[T]``
:ref:`-onSuccess-` "Unwraps" a ``Future[T]`` and runs the inner route after future completion
with the future's value as an extraction of type ``T``
:ref:`-optionalCookie-` Extracts the ``HttpCookiePair`` with the given name as an
``Option[HttpCookiePair]``
:ref:`-optionalHeaderValue-` Extracts an optional HTTP header value using a given
``HttpHeader ⇒ Option[T]`` function
:ref:`-optionalHeaderValueByName-` Extracts the value of the first optional HTTP request header with a given
name
:ref:`-optionalHeaderValueByType-` Extracts the first optional HTTP request header of the given type
:ref:`-optionalHeaderValuePF-` Extracts an optional HTTP header value using a given
``PartialFunction[HttpHeader, T]``
:ref:`-options-` Rejects all non-OPTIONS requests
:ref:`-overrideMethodWithParameter-` Changes the request method to the value of the specified query parameter
:ref:`-parameter-` Extracts a query parameter value from the request
:ref:`-parameterMap-` Extracts the request's query parameters as a ``Map[String, String]``
:ref:`-parameterMultiMap-` Extracts the request's query parameters as a ``Map[String, List[String]]``
:ref:`-parameters-scala-` Extracts a number of query parameter values from the request
:ref:`-parameterSeq-` Extracts the request's query parameters as a ``Seq[(String, String)]``
:ref:`-pass-` Always passes the request on to its inner route unchanged, i.e. modifies
neither the request nor the response
:ref:`-patch-` Rejects all non-PATCH requests
:ref:`-path-` Applies the given ``PathMatcher`` to the remaining unmatched path after
consuming a leading slash
:ref:`-pathEnd-` Only passes on the request to its inner route if the request path has been
matched completely
:ref:`-pathEndOrSingleSlash-` Only passes on the request to its inner route if the request path has been
matched completely or only consists of exactly one remaining slash
:ref:`-pathPrefix-` Applies the given ``PathMatcher`` to a prefix of the remaining unmatched
path after consuming a leading slash
:ref:`-pathPrefixTest-` Checks whether the unmatchedPath has a prefix matched by the given
``PathMatcher`` after implicitly consuming a leading slash
:ref:`-pathSingleSlash-` Only passes on the request to its inner route if the request path
consists of exactly one remaining slash
:ref:`-pathSuffix-` Applies the given ``PathMatcher`` to a suffix of the remaining unmatched
path (Caution: check scaladoc!)
:ref:`-pathSuffixTest-` Checks whether the unmatched path has a suffix matched by the given
``PathMatcher`` (Caution: check scaladoc!)
:ref:`-post-` Rejects all non-POST requests
:ref:`-provide-` Injects a given value into a directive
:ref:`-put-` Rejects all non-PUT requests
:ref:`-rawPathPrefix-` Applies the given matcher directly to a prefix of the unmatched path of the
``RequestContext``, without implicitly consuming a leading slash
:ref:`-rawPathPrefixTest-` Checks whether the unmatchedPath has a prefix matched by the given
``PathMatcher``
:ref:`-recoverRejections-` Transforms rejections from the inner route with an
``immutable.Seq[Rejection] ⇒ RouteResult`` function
:ref:`-recoverRejectionsWith-` Transforms rejections from the inner route with an
``immutable.Seq[Rejection] ⇒ Future[RouteResult]`` function
:ref:`-redirect-` Completes the request with redirection response of the given type to the
given URI
:ref:`-redirectToNoTrailingSlashIfPresent-` If the request path ends with a slash, redirects to the same uri without
trailing slash in the path
:ref:`-redirectToTrailingSlashIfMissing-` If the request path doesn't end with a slash, redirects to the same uri with
trailing slash in the path
:ref:`-reject-` Rejects the request with the given rejections
:ref:`-rejectEmptyResponse-` Converts responses with an empty entity into (empty) rejections
:ref:`-requestEncodedWith-` Rejects the request with an ``UnsupportedRequestEncodingRejection`` if its
encoding doesn't match the given one
:ref:`-requestEntityEmpty-` Rejects if the request entity is non-empty
:ref:`-requestEntityPresent-` Rejects with a ``RequestEntityExpectedRejection`` if the request entity is
empty
:ref:`-respondWithDefaultHeader-` Adds a given response header if the response doesn't already contain a
header with the same name
:ref:`-respondWithDefaultHeaders-` Adds the subset of the given headers to the response which doesn't already
have a header with the respective name present in the response
:ref:`-respondWithHeader-` Unconditionally adds a given header to the outgoing response
:ref:`-respondWithHeaders-` Unconditionally adds the given headers to the outgoing response
:ref:`-responseEncodingAccepted-` Rejects the request with an ``UnacceptedResponseEncodingRejection`` if the
given response encoding is not accepted by the client
:ref:`-scheme-` Rejects all requests whose URI scheme doesn't match the given one
:ref:`-selectPreferredLanguage-` Inspects the request's ``Accept-Language`` header and determines which of
a given set of language alternatives is preferred by the client
:ref:`-setCookie-` Adds a ``Set-Cookie`` response header with the given cookies
:ref:`-textract-` Extracts a number of values using a ``RequestContext ⇒ Tuple`` function
:ref:`-tprovide-` Injects a given tuple of values into a directive
:ref:`-uploadedFile-` Streams one uploaded file from a multipart request to a file on disk
:ref:`-validate-` Checks a given condition before running its inner route
:ref:`-withoutRequestTimeout-` Disables :ref:`request timeouts <request-timeout-scala>` for a given route.
:ref:`-withoutSizeLimit-` Skips request entity size check
:ref:`-withExecutionContext-` Runs its inner route with the given alternative ``ExecutionContext``
:ref:`-withMaterializer-` Runs its inner route with the given alternative ``Materializer``
:ref:`-withLog-` Runs its inner route with the given alternative ``LoggingAdapter``
:ref:`-withRangeSupport-` Adds ``Accept-Ranges: bytes`` to responses to GET requests, produces partial
responses if the initial request contained a valid ``Range`` header
:ref:`-withRequestTimeout-` Configures the :ref:`request timeouts <request-timeout-scala>` for a given route.
:ref:`-withRequestTimeoutResponse-` Prepares the ``HttpResponse`` that is emitted if a request timeout is triggered.
:ref:`-withSettings-` Runs its inner route with the given alternative ``RoutingSettings``
:ref:`-withSizeLimit-` Applies request entity size check
=========================================== ============================================================================


@ -1,27 +0,0 @@
.. _-cancelRejection-:
cancelRejection
===============
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: cancelRejection
Description
-----------
Adds a ``TransformationRejection`` cancelling all rejections equal to the
given one to the rejections potentially coming back from the inner route.
Read :ref:`rejections-scala` to learn more about rejections.
For more advanced handling of rejections refer to the :ref:`-handleRejections-` directive
which provides a nicer DSL for building rejection handlers.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: cancelRejection-example
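A hedged sketch of what such a usage can look like (illustrative, not the snippet referenced above):

```scala
import akka.http.scaladsl.model.HttpMethods
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.MethodRejection

// Cancel the MethodRejection(POST) that the `post` directive would otherwise
// leave behind for non-POST requests, so the rejection handler never sees it.
val route =
  cancelRejection(MethodRejection(HttpMethods.POST)) {
    post {
      complete("POST received")
    }
  }
```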


@ -1,29 +0,0 @@
.. _-cancelRejections-:
cancelRejections
================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: cancelRejections
Description
-----------
Adds a ``TransformationRejection`` cancelling all rejections created by the inner route for which
the condition argument function returns ``true``.
See also :ref:`-cancelRejection-`, for canceling a specific rejection.
Read :ref:`rejections-scala` to learn more about rejections.
For more advanced handling of rejections refer to the :ref:`-handleRejections-` directive
which provides a nicer DSL for building rejection handlers.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: cancelRejections-filter-example


@ -1,25 +0,0 @@
.. _-extract-:
extract
=======
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: extract
Description
-----------
The ``extract`` directive is used as a building block for :ref:`Custom Directives` to extract data from the
``RequestContext`` and provide it to the inner route. It is a special case for extracting one value of the more
general :ref:`-textract-` directive that can be used to extract more than one value.
See :ref:`ProvideDirectives` for an overview of similar directives.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: 0extract
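A minimal sketch of building such a custom extraction (assuming akka-http's routing DSL):

```scala
import akka.http.scaladsl.server.Directives._

// A custom Directive1[Int] built from a RequestContext => Int function.
val uriLength = extract(_.request.uri.toString.length)

val route =
  uriLength { len =>
    complete(s"The request URI is $len characters long")
  }
```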


@ -1,26 +0,0 @@
.. _-extractActorSystem-:
extractActorSystem
==================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: extractActorSystem
Description
-----------
Extracts the ``ActorSystem`` from the ``RequestContext``, which can be useful when the external API
in your route needs one.
.. warning::
This is only supported when the available Materializer is an ActorMaterializer.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: extractActorSystem-example


@ -1,24 +0,0 @@
.. _-extractDataBytes-:
extractDataBytes
================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: extractDataBytes
Description
-----------
Extracts the entity's data bytes as a ``Source[ByteString, Any]`` from the :class:`RequestContext`.
The directive returns a stream containing the request data bytes.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: extractDataBytes-example
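A sketch of consuming the stream (illustrative; the materializer needed to run the stream is extracted alongside the data bytes):

```scala
import akka.http.scaladsl.server.Directives._
import akka.stream.scaladsl.Sink

// Fold over the entity's byte chunks to count the total number of bytes.
val route =
  (extractDataBytes & extractMaterializer) { (data, mat) =>
    val sumF = data.runWith(Sink.fold(0L)(_ + _.length))(mat)
    onSuccess(sumF) { sum =>
      complete(s"Received $sum bytes")
    }
  }
```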


@ -1,25 +0,0 @@
.. _-extractExecutionContext-:
extractExecutionContext
=======================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: extractExecutionContext
Description
-----------
Extracts the ``ExecutionContext`` from the ``RequestContext``.
See :ref:`-withExecutionContext-` to see how to customise the execution context provided for an inner route.
See :ref:`-extract-` to learn more about how extractions work.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: extractExecutionContext-0


@ -1,26 +0,0 @@
.. _-extractLog-:
extractLog
==========
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: extractLog
Description
-----------
Extracts a :class:`LoggingAdapter` from the request context which can be used for logging inside the route.
The ``extractLog`` directive is used to provide logging to routes, so that they don't have to
close over a logger defined in the enclosing class body.
See :ref:`-extract-` and :ref:`ProvideDirectives` for an overview of similar directives.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: 0extractLog
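In its simplest form the usage can be sketched as:

```scala
import akka.http.scaladsl.server.Directives._

// The LoggingAdapter comes from the RequestContext, not from the enclosing class.
val route =
  extractLog { log =>
    log.debug("Handling a request")
    complete("logged")
  }
```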


@ -1,24 +0,0 @@
.. _-extractMaterializer-:
extractMaterializer
===================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: extractMaterializer
Description
-----------
Extracts the ``Materializer`` from the ``RequestContext``, which can be useful when you want to run an
Akka Stream directly in your route.
See also :ref:`-withMaterializer-` to see how to customise the used materializer for specific inner routes.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: extractMaterializer-0


@ -1,24 +0,0 @@
.. _-extractRequest-:
extractRequest
==============
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: extractRequest
Description
-----------
Extracts the complete ``HttpRequest`` instance.
Use ``extractRequest`` when you need access to the complete ``HttpRequest``. Usually there's little need to
extract the complete request, because most aspects of the ``HttpRequest`` are already covered by specialized
directives. See :ref:`Request Directives`.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: extractRequest-example
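A minimal sketch (illustrative, not the snippet referenced above):

```scala
import akka.http.scaladsl.server.Directives._

// The full HttpRequest is available, including method, URI, headers and entity.
val route =
  extractRequest { request =>
    complete(s"Method: ${request.method.name}, content type: ${request.entity.contentType}")
  }
```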


@ -1,27 +0,0 @@
.. _-extractRequestContext-:
extractRequestContext
=====================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: extractRequestContext
Description
-----------
Extracts the request's underlying :class:`RequestContext`.
This directive is used as a building block for most of the other directives,
which extract the context and, by inspecting some of its values, can decide
what to do with the request - for example provide a value, or reject the request.
See also :ref:`-extractRequest-` if only interested in the :class:`HttpRequest` instance itself.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: extractRequestContext-example


@ -1,25 +0,0 @@
.. _-extractRequestEntity-:
extractRequestEntity
====================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: extractRequestEntity
Description
-----------
Extracts the ``RequestEntity`` from the :class:`RequestContext`.
The directive returns the ``RequestEntity`` without unmarshalling the request. To extract a domain object,
the :ref:`-entity-` directive should be used instead.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: extractRequestEntity-example


@ -1,24 +0,0 @@
.. _-extractSettings-:
extractSettings
===============
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: extractSettings
Description
-----------
Extracts the ``RoutingSettings`` from the :class:`RequestContext`.
By default the settings of the ``Http()`` extension running the route will be returned.
It is possible to override the settings for specific sub-routes by using the :ref:`-withSettings-` directive.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: extractSettings-examples


@ -1,30 +0,0 @@
.. _-extractStrictEntity-:
extractStrictEntity
===================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: extractStrictEntity
Description
-----------
Extracts the strict http entity as ``HttpEntity.Strict`` from the :class:`RequestContext`.
A timeout parameter must be given; if the entity stream isn't completed within the timeout, the directive fails.
.. warning::
The directive will read the request entity into memory within the size limit (8 MB by default) and effectively disable streaming.
The size limit can be configured globally with ``akka.http.parsing.max-content-length`` or
overridden by wrapping the route with the :ref:`-withSizeLimit-` or :ref:`-withoutSizeLimit-` directive.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: extractStrictEntity-example
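A minimal sketch (illustrative; the 3-second timeout is an arbitrary choice):

```scala
import scala.concurrent.duration._
import akka.http.scaladsl.server.Directives._

// Buffers the entity in memory for up to 3 seconds; fails the route if it takes longer.
val route =
  extractStrictEntity(3.seconds) { strict =>
    complete(strict.data.utf8String)
  }
```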


@ -1,26 +0,0 @@
.. _-extractUnmatchedPath-:
extractUnmatchedPath
====================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: extractUnmatchedPath
Description
-----------
Extracts the unmatched path from the request context.
The ``extractUnmatchedPath`` directive extracts the remaining path that was not yet matched by any of the :ref:`PathDirectives`
(or any custom ones that change the unmatched path field of the request context). You can use it for building directives
that handle complete suffixes of paths (like the ``getFromDirectory`` directives and similar ones).
Use ``mapUnmatchedPath`` to change the value of the unmatched path.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: extractUnmatchedPath-example
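A minimal sketch (illustrative, not the snippet referenced above):

```scala
import akka.http.scaladsl.server.Directives._

// For a request to /hello/world the unmatched path inside the prefix is "/world".
val route =
  pathPrefix("hello") {
    extractUnmatchedPath { remaining =>
      complete(s"Unmatched: $remaining")
    }
  }
```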


@ -1,23 +0,0 @@
.. _-extractUri-:
extractUri
==========
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: extractUri
Description
-----------
Access the full URI of the request.
Use :ref:`SchemeDirectives`, :ref:`HostDirectives`, :ref:`PathDirectives`, and :ref:`ParameterDirectives` for more
targeted access to parts of the URI.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: extractUri-example


@ -1,137 +0,0 @@
.. _BasicDirectives:
BasicDirectives
===============
Basic directives are building blocks for building :ref:`Custom Directives`. As such they
usually aren't used in a route directly but rather in the definition of new directives.
.. _ProvideDirectives:
Providing Values to Inner Routes
--------------------------------
These directives provide values to the inner routes with extractions. They can be distinguished
on two axes: a) whether they provide a constant value or extract a value from the ``RequestContext``, and b) whether they provide
a single value or a tuple of values.
* :ref:`-extract-`
* :ref:`-extractActorSystem-`
* :ref:`-extractDataBytes-`
* :ref:`-extractExecutionContext-`
* :ref:`-extractMaterializer-`
* :ref:`-extractStrictEntity-`
* :ref:`-extractLog-`
* :ref:`-extractRequest-`
* :ref:`-extractRequestContext-`
* :ref:`-extractRequestEntity-`
* :ref:`-extractSettings-`
* :ref:`-extractUnmatchedPath-`
* :ref:`-extractUri-`
* :ref:`-textract-`
* :ref:`-provide-`
* :ref:`-tprovide-`
.. _Request Transforming Directives:
Transforming the Request(Context)
---------------------------------
* :ref:`-mapRequest-`
* :ref:`-mapRequestContext-`
* :ref:`-mapSettings-`
* :ref:`-mapUnmatchedPath-`
* :ref:`-withExecutionContext-`
* :ref:`-withMaterializer-`
* :ref:`-withLog-`
* :ref:`-withSettings-`
* :ref:`-toStrictEntity-`
.. _Response Transforming Directives:
Transforming the Response
-------------------------
These directives allow you to hook into the response path and transform the complete response,
parts of a response, or the list of rejections:
* :ref:`-mapResponse-`
* :ref:`-mapResponseEntity-`
* :ref:`-mapResponseHeaders-`
.. _Result Transformation Directives:
Transforming the RouteResult
----------------------------
These directives allow you to transform the ``RouteResult`` of the inner route.
* :ref:`-cancelRejection-`
* :ref:`-cancelRejections-`
* :ref:`-mapRejections-`
* :ref:`-mapRouteResult-`
* :ref:`-mapRouteResultFuture-`
* :ref:`-mapRouteResultPF-`
* :ref:`-mapRouteResultWith-`
* :ref:`-mapRouteResultWithPF-`
* :ref:`-recoverRejections-`
* :ref:`-recoverRejectionsWith-`
Other
-----
* :ref:`-mapInnerRoute-`
* :ref:`-pass-`
Alphabetically
--------------
.. toctree::
:maxdepth: 1
cancelRejection
cancelRejections
extract
extractActorSystem
extractDataBytes
extractExecutionContext
extractMaterializer
extractStrictEntity
extractLog
extractRequest
extractRequestContext
extractRequestEntity
extractSettings
extractUnmatchedPath
extractUri
mapInnerRoute
mapRejections
mapRequest
mapRequestContext
mapResponse
mapResponseEntity
mapResponseHeaders
mapRouteResult
mapRouteResultFuture
mapRouteResultPF
mapRouteResultWith
mapRouteResultWithPF
mapSettings
mapUnmatchedPath
pass
provide
recoverRejections
recoverRejectionsWith
textract
toStrictEntity
tprovide
withExecutionContext
withMaterializer
withLog
withSettings
.. _-mapInnerRoute-:
mapInnerRoute
=============
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: mapInnerRoute
Description
-----------
Changes the execution model of the inner route by wrapping it with arbitrary logic.
The ``mapInnerRoute`` directive is used as a building block for :ref:`Custom Directives` to replace the inner route
with any other route. Usually, the returned route wraps the original one with custom execution logic.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: mapInnerRoute
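To illustrate the mechanics outside of Akka HTTP, here is a minimal sketch with simplified stand-in types (``Request`` and ``Route`` below are illustrative, not the real Akka HTTP API) that models a route as a plain function and wraps it with custom logic:

```scala
// Simplified stand-ins for illustration only -- not the real Akka HTTP types.
case class Request(path: String)
type Route = Request => String

// mapInnerRoute-style combinator: replace or wrap the inner route with arbitrary logic.
def mapInnerRoute(wrap: Route => Route)(inner: Route): Route =
  wrap(inner)

val inner: Route = req => s"handled ${req.path}"

// Wrap the original route with extra before/after behaviour.
val wrapped: Route = mapInnerRoute(r => req => s"before; ${r(req)}; after")(inner)

println(wrapped(Request("/docs"))) // before; handled /docs; after
```

The wrapping function receives the original route and may invoke it, skip it, or replace it entirely.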

.. _-mapRejections-:
mapRejections
=============
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: mapRejections
Description
-----------
**Low-level directive:** unless you are sure you need to work at this low level, you might instead
want to try the :ref:`-handleRejections-` directive, which provides a nicer DSL for building rejection handlers.
The ``mapRejections`` directive is used as a building block for :ref:`Custom Directives` to transform a list
of rejections from the inner route to a new list of rejections.
See :ref:`Response Transforming Directives` for similar directives.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: mapRejections

.. _-mapRequest-:
mapRequest
==========
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: mapRequest
Description
-----------
Transforms the request before it is handled by the inner route.
The ``mapRequest`` directive is used as a building block for :ref:`Custom Directives` to transform a request before it
is handled by the inner route. Changing the ``request.uri`` parameter has no effect on path matching in the inner route
because the unmatched path is a separate field of the ``RequestContext`` value which is passed into routes. To change
the unmatched path or other fields of the ``RequestContext`` use the :ref:`-mapRequestContext-` directive.
See :ref:`Request Transforming Directives` for an overview of similar directives.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: 0mapRequest
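The transformation can be sketched with plain functions (all types below are simplified stand-ins, not the real Akka HTTP API; the ``X-User`` header is a made-up example):

```scala
// Simplified stand-ins for illustration only -- not the real Akka HTTP types.
case class Request(uri: String, headers: Map[String, String] = Map.empty)
type Route = Request => String

// mapRequest-style combinator: transform the request before the inner route sees it.
def mapRequest(f: Request => Request)(inner: Route): Route =
  req => inner(f(req))

val inner: Route = req => req.headers.getOrElse("X-User", "anonymous")

// Inject a header before the inner route runs.
val withUser: Route =
  mapRequest(req => req.copy(headers = req.headers + ("X-User" -> "alice")))(inner)

println(withUser(Request("/"))) // alice
```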

.. _-mapRequestContext-:
mapRequestContext
=================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: mapRequestContext
Description
-----------
Transforms the ``RequestContext`` before it is passed to the inner route.
The ``mapRequestContext`` directive is used as a building block for :ref:`Custom Directives` to transform
the request context before it is passed to the inner route. To change only the request value itself the
:ref:`-mapRequest-` directive can be used instead.
See :ref:`Request Transforming Directives` for an overview of similar directives.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: mapRequestContext

.. _-mapResponse-:
mapResponse
===========
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: mapResponse
Description
-----------
The ``mapResponse`` directive is used as a building block for :ref:`Custom Directives` to transform a response that
was generated by the inner route. This directive transforms complete responses.
See also :ref:`-mapResponseHeaders-` or :ref:`-mapResponseEntity-` for more specialized variants and
:ref:`Response Transforming Directives` for similar directives.
Example: Override status
------------------------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: 0mapResponse
Example: Default to empty JSON response on errors
-------------------------------------------------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: 1mapResponse-advanced
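As a rough sketch of the idea, assuming simplified stand-in types (not the real Akka HTTP model), a response transformer composes with the inner route like this:

```scala
// Simplified stand-ins for illustration only -- not the real Akka HTTP types.
case class Request(uri: String)
case class Response(status: Int, body: String)
type Route = Request => Response

// mapResponse-style combinator: transform the complete response from the inner route.
def mapResponse(f: Response => Response)(inner: Route): Route =
  req => f(inner(req))

val inner: Route = _ => Response(404, "not found")

// Default to an empty JSON body on error responses.
val jsonErrors: Route =
  mapResponse(res => if (res.status >= 400) res.copy(body = "{}") else res)(inner)

println(jsonErrors(Request("/missing"))) // Response(404,{})
```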

.. _-mapResponseEntity-:
mapResponseEntity
=================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: mapResponseEntity
Description
-----------
The ``mapResponseEntity`` directive is used as a building block for :ref:`Custom Directives` to transform a
response entity that was generated by the inner route.
See :ref:`Response Transforming Directives` for similar directives.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: mapResponseEntity

.. _-mapResponseHeaders-:
mapResponseHeaders
==================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: mapResponseHeaders
Description
-----------
Changes the list of response headers that was generated by the inner route.
The ``mapResponseHeaders`` directive is used as a building block for :ref:`Custom Directives` to transform the list of
response headers that was generated by the inner route.
See :ref:`Response Transforming Directives` for similar directives.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: mapResponseHeaders

.. _-mapRouteResult-:
mapRouteResult
==============
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: mapRouteResult
Description
-----------
Changes the message the inner route sends to the responder.
The ``mapRouteResult`` directive is used as a building block for :ref:`Custom Directives` to transform the
:ref:`RouteResult` coming back from the inner route.
See :ref:`Result Transformation Directives` for similar directives.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: 0mapRouteResult

.. _-mapRouteResultFuture-:
mapRouteResultFuture
====================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: mapRouteResultFuture
Description
-----------
Asynchronous version of :ref:`-mapRouteResult-`.
It's similar to :ref:`-mapRouteResultWith-`, however it operates on ``Future[RouteResult] ⇒ Future[RouteResult]``
rather than ``RouteResult ⇒ Future[RouteResult]``, which may be useful when combining multiple transformations
and/or when you want to ``recover`` from a failed route result.
See :ref:`Result Transformation Directives` for similar directives.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: mapRouteResultFuture

.. _-mapRouteResultPF-:
mapRouteResultPF
================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: mapRouteResultPF
Description
-----------
*Partial Function* version of :ref:`-mapRouteResult-`.
Changes the message the inner route sends to the responder.
The ``mapRouteResult`` directive is used as a building block for :ref:`Custom Directives` to transform the
:ref:`RouteResult` coming back from the inner route. It's similar to the :ref:`-mapRouteResult-` directive but allows you
to specify a partial function that doesn't have to handle all potential ``RouteResult`` instances.
See :ref:`Result Transformation Directives` for similar directives.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: mapRouteResultPF

.. _-mapRouteResultWith-:
mapRouteResultWith
==================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: mapRouteResultWith
Description
-----------
Changes the message the inner route sends to the responder.
The ``mapRouteResult`` directive is used as a building block for :ref:`Custom Directives` to transform the
:ref:`RouteResult` coming back from the inner route. It's similar to the :ref:`-mapRouteResult-` directive but
returns a ``Future`` instead of an immediate result, which may be useful for longer-running transformations.
See :ref:`Result Transformation Directives` for similar directives.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: mapRouteResultWith-0

.. _-mapRouteResultWithPF-:
mapRouteResultWithPF
====================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: mapRouteResultWithPF
Description
-----------
Asynchronous variant of :ref:`-mapRouteResultPF-`.
Changes the message the inner route sends to the responder.
The ``mapRouteResult`` directive is used as a building block for :ref:`Custom Directives` to transform the
:ref:`RouteResult` coming back from the inner route.
See :ref:`Result Transformation Directives` for similar directives.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: mapRouteResultWithPF-0

.. _-mapSettings-:
mapSettings
===========
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: mapSettings
Description
-----------
Transforms the ``RoutingSettings`` with a ``RoutingSettings ⇒ RoutingSettings`` function.
See also :ref:`-withSettings-` or :ref:`-extractSettings-`.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: withSettings-0

.. _-mapUnmatchedPath-:
mapUnmatchedPath
================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: mapUnmatchedPath
Description
-----------
Transforms the ``unmatchedPath`` field of the request context for inner routes.
The ``mapUnmatchedPath`` directive is used as a building block for writing :ref:`Custom Directives`. You can use it
for implementing custom path matching directives.
Use ``extractUnmatchedPath`` for extracting the current value of the unmatched path.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: mapUnmatchedPath-example

.. _-pass-:
pass
====
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: pass
Description
-----------
A directive that passes the request unchanged to its inner route.
It is usually used as a "neutral element" when combining directives generically.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: pass

.. _-provide-:
provide
=======
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: provide
Description
-----------
Provides a constant value to the inner route.
The ``provide`` directive is used as a building block for :ref:`Custom Directives` to provide a single value to the
inner route. To provide several values use the :ref:`-tprovide-` directive.
See :ref:`ProvideDirectives` for an overview of similar directives.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: 0provide
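Reduced to its essence (with a simplified stand-in ``Route`` type, not the real Akka HTTP API), providing a value is just passing a constant to the inner-route function:

```scala
// Simplified model for illustration: a "route" is just the produced response text.
type Route = String

// provide-style combinator: hand a constant value to the inner route.
def provide[T](value: T)(inner: T => Route): Route = inner(value)

val route: Route = provide(42)(x => s"The answer is $x")
println(route) // The answer is 42
```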

.. _-recoverRejections-:
recoverRejections
=================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: recoverRejections
Description
-----------
**Low-level directive:** unless you are sure you need to work at this low level, you might instead
want to try the :ref:`-handleRejections-` directive, which provides a nicer DSL for building rejection handlers.
Transforms rejections from the inner route with an ``immutable.Seq[Rejection] ⇒ RouteResult`` function.
A ``RouteResult`` is either a ``Complete(HttpResponse(...))`` or rejections ``Rejected(rejections)``.
.. note::
To learn more about how and why rejections work read the :ref:`rejections-scala` section of the documentation.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: recoverRejections
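The ``Complete``/``Rejected`` split described above can be sketched with simplified stand-in types (strings instead of real rejection and response objects; not the actual Akka HTTP model):

```scala
// Simplified RouteResult sketch: either a completed response or a list of rejections.
sealed trait RouteResult
case class Complete(response: String) extends RouteResult
case class Rejected(rejections: List[String]) extends RouteResult

type Route = String => RouteResult

// recoverRejections-style combinator: turn rejections into a definite RouteResult.
def recoverRejections(f: List[String] => RouteResult)(inner: Route): Route =
  req => inner(req) match {
    case Rejected(rejections) => f(rejections)   // recover from rejections
    case complete             => complete        // completed responses pass through
  }

val inner: Route = _ => Rejected(List("MethodRejection(GET)"))

val recovered: Route =
  recoverRejections(rs => Complete(s"rejected with: ${rs.mkString(", ")}"))(inner)

println(recovered("/")) // Complete(rejected with: MethodRejection(GET))
```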

.. _-recoverRejectionsWith-:
recoverRejectionsWith
=====================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: recoverRejectionsWith
Description
-----------
**Low-level directive:** unless you are sure you need to work at this low level, you might instead
want to try the :ref:`-handleRejections-` directive, which provides a nicer DSL for building rejection handlers.
Transforms rejections from the inner route with an ``immutable.Seq[Rejection] ⇒ Future[RouteResult]`` function.
Asynchronous version of :ref:`-recoverRejections-`.
See :ref:`-recoverRejections-` (the synchronous equivalent of this directive) for a detailed description.
.. note::
To learn more about how and why rejections work read the :ref:`rejections-scala` section of the documentation.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: recoverRejectionsWith

.. _-textract-:
textract
========
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: textract
Description
-----------
Extracts a tuple of values from the request context and provides them to the inner route.
The ``textract`` directive is used as a building block for :ref:`Custom Directives` to extract data from the
``RequestContext`` and provide it to the inner route. To extract just one value use the :ref:`-extract-` directive. To
provide a constant value independent of the ``RequestContext`` use the :ref:`-tprovide-` directive instead.
See :ref:`ProvideDirectives` for an overview of similar directives.
See also :ref:`-extract-` for extracting a single value.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: textract
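The tuple extraction can be sketched as follows, using simplified stand-in types (``RequestContext`` here is a plain case class, not the real Akka HTTP one):

```scala
// Simplified stand-ins for illustration only -- not the real Akka HTTP types.
case class RequestContext(method: String, uri: String)
type Route = RequestContext => String

// textract-style combinator: extract a tuple of values from the context
// and provide it to the inner route.
def textract[T](f: RequestContext => T)(inner: T => Route): Route =
  ctx => inner(f(ctx))(ctx)

val route: Route = textract(ctx => (ctx.method, ctx.uri)) { case (m, u) =>
  _ => s"$m $u"
}

println(route(RequestContext("GET", "/docs"))) // GET /docs
```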

.. _-toStrictEntity-:
toStrictEntity
==============
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: toStrictEntity
Description
-----------
Transforms the request entity into a strict entity before it is handled by the inner route.
A timeout parameter is given; if the entity stream isn't completed within the timeout, the directive fails.
.. warning::
The directive reads the request entity into memory, up to the size limit (8 MB by default), and effectively disables streaming.
The size limit can be configured globally with ``akka.http.parsing.max-content-length`` or
overridden by wrapping with :ref:`-withSizeLimit-` or :ref:`-withoutSizeLimit-` directive.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: toStrictEntity-example

.. _-tprovide-:
tprovide
========
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: tprovide
Description
-----------
Provides a tuple of values to the inner route.
The ``tprovide`` directive is used as a building block for :ref:`Custom Directives` to provide data to the inner route.
To provide just one value use the :ref:`-provide-` directive. If you want to provide values calculated from the
``RequestContext`` use the :ref:`-textract-` directive instead.
See :ref:`ProvideDirectives` for an overview of similar directives.
See also :ref:`-provide-` for providing a single value.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: tprovide

.. _-withExecutionContext-:
withExecutionContext
====================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: withExecutionContext
Description
-----------
Allows running an inner route using an alternative ``ExecutionContext`` in place of the default one.
The execution context can be extracted in an inner route using :ref:`-extractExecutionContext-` directly,
or used by directives which internally extract the execution context without surfacing this fact in the API.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: withExecutionContext-0

.. _-withLog-:
withLog
=======
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: withLog
Description
-----------
Allows running an inner route using an alternative :class:`LoggingAdapter` in place of the default one.
The logging adapter can be extracted in an inner route using :ref:`-extractLog-` directly,
or used by directives which internally extract the logging adapter without surfacing this fact in the API.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: 0withLog

.. _-withMaterializer-:
withMaterializer
================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: withMaterializer
Description
-----------
Allows running an inner route using an alternative ``Materializer`` in place of the default one.
The materializer can be extracted in an inner route using :ref:`-extractMaterializer-` directly,
or used by directives which internally extract the materializer without surfacing this fact in the API
(e.g. responding with a Chunked entity).
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: withMaterializer-0

.. _-withSettings-:
withSettings
============
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/BasicDirectives.scala
:snippet: withSettings
Description
-----------
Allows running an inner route using an alternative :class:`RoutingSettings` in place of the default one.
The routing settings can be extracted in an inner route using :ref:`-extractSettings-` directly,
or used by directives which internally extract the settings without surfacing this fact in the API.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/BasicDirectivesExamplesSpec.scala
:snippet: withSettings-0

Predefined Directives (by trait)
================================
All predefined directives are organized into traits that form one part of the overarching ``Directives`` trait.
.. _Request Directives:
Directives filtering or extracting from the request
---------------------------------------------------
:ref:`MethodDirectives`
Filter and extract based on the request method.
:ref:`HeaderDirectives`
Filter and extract based on request headers.
:ref:`PathDirectives`
Filter and extract from the request URI path.
:ref:`HostDirectives`
Filter and extract based on the target host.
:ref:`ParameterDirectives`, :ref:`FormFieldDirectives`
Filter and extract based on query parameters or form fields.
:ref:`CodingDirectives`
Filter and decode compressed request content.
:ref:`MarshallingDirectives`
Extract the request entity.
:ref:`SchemeDirectives`
Filter and extract based on the request scheme.
:ref:`SecurityDirectives`
Handle authentication data from the request.
:ref:`CookieDirectives`
Filter and extract cookies.
:ref:`BasicDirectives` and :ref:`MiscDirectives`
Directives handling request properties.
:ref:`FileUploadDirectives`
Handle file uploads.
.. _Response Directives:
Directives creating or transforming the response
------------------------------------------------
:ref:`CacheConditionDirectives`
Support for conditional requests (``304 Not Modified`` responses).
:ref:`CookieDirectives`
Set, modify, or delete cookies.
:ref:`CodingDirectives`
Compress responses.
:ref:`FileAndResourceDirectives`
Deliver responses from files and resources.
:ref:`RangeDirectives`
Support for range requests (``206 Partial Content`` responses).
:ref:`RespondWithDirectives`
Change response properties.
:ref:`RouteDirectives`
Complete or reject a request with a response.
:ref:`BasicDirectives` and :ref:`MiscDirectives`
Directives handling or transforming response properties.
:ref:`TimeoutDirectives`
Configure request timeouts and automatic timeout responses.
List of predefined directives by trait
--------------------------------------
.. toctree::
:maxdepth: 1
basic-directives/index
cache-condition-directives/index
coding-directives/index
cookie-directives/index
debugging-directives/index
execution-directives/index
file-and-resource-directives/index
file-upload-directives/index
form-field-directives/index
future-directives/index
header-directives/index
host-directives/index
marshalling-directives/index
method-directives/index
misc-directives/index
parameter-directives/index
path-directives/index
range-directives/index
respond-with-directives/index
route-directives/index
scheme-directives/index
security-directives/index
websocket-directives/index
timeout-directives/index

.. _-conditional-:
conditional
===========
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/CacheConditionDirectives.scala
:snippet: conditional
Description
-----------
Wraps its inner route with support for Conditional Requests as defined
by http://tools.ietf.org/html/draft-ietf-httpbis-p4-conditional-26.
Depending on the given ``eTag`` and ``lastModified`` values this directive immediately responds with
``304 Not Modified`` or ``412 Precondition Failed`` (without calling its inner route) if the request comes with the
respective conditional headers. Otherwise the request is simply passed on to its inner route.
The algorithm implemented by this directive closely follows what is defined in `this section`__ of the
`HTTPbis spec`__.
All responses (the ones produced by this directive itself as well as the ones coming back from the inner route) are
augmented with respective ``ETag`` and ``Last-Modified`` response headers.
Since this directive requires the ``EntityTag`` and ``lastModified`` time stamp for the resource as concrete arguments
it is usually used quite deep down in the route structure (i.e. close to the leaf-level), where the exact resource
targeted by the request has already been established and the respective ETag/Last-Modified values can be determined.
The :ref:`FileAndResourceDirectives` internally use the ``conditional`` directive for ETag and Last-Modified support
(if the ``akka.http.routing.file-get-conditional`` setting is enabled).
__ http://tools.ietf.org/html/draft-ietf-httpbis-p4-conditional-26#section-6
__ https://datatracker.ietf.org/wg/httpbis/
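The short-circuiting behaviour described above can be sketched like this (simplified stand-in types and a single ``If-None-Match`` check only; the real directive also handles ``If-Modified-Since``, ``If-Match`` and ``412`` responses):

```scala
// Simplified sketch of the conditional-request decision -- not the real header model.
case class Request(ifNoneMatch: Option[String])

def conditional(eTag: String)(inner: Request => String): Request => String =
  req =>
    if (req.ifNoneMatch.contains(eTag)) "304 Not Modified" // ETag matches: respond without inner route
    else inner(req)                                        // otherwise run the inner route

val route = conditional("v1")(_ => "200 OK")
println(route(Request(Some("v1")))) // 304 Not Modified
println(route(Request(None)))       // 200 OK
```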

.. _CacheConditionDirectives:
CacheConditionDirectives
========================
.. toctree::
:maxdepth: 1
conditional

.. _-decodeRequest-:
decodeRequest
=============
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/CodingDirectives.scala
:snippet: decodeRequest
Description
-----------
Decompresses the incoming request if it is ``gzip`` or ``deflate`` compressed. Uncompressed requests are passed through untouched. If the request is encoded with another encoding, it is rejected with an ``UnsupportedRequestEncodingRejection``.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/CodingDirectivesExamplesSpec.scala
:snippet: "decodeRequest"

.. _-decodeRequestWith-:
decodeRequestWith
=================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/CodingDirectives.scala
:snippet: decodeRequestWith
Description
-----------
Decodes the incoming request if it is encoded with one of the given decoders. If the request encoding doesn't match one of the given decoders, the request is rejected with an ``UnsupportedRequestEncodingRejection``. If no decoders are given, the defaults (``Gzip``, ``Deflate``, ``NoCoding``) are used.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/CodingDirectivesExamplesSpec.scala
:snippet: decodeRequestWith

.. _-encodeResponse-:
encodeResponse
==============
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/CodingDirectives.scala
:snippet: encodeResponse
Description
-----------
Encodes the response with the encoding that is requested by the client via the ``Accept-Encoding`` header or rejects the request with an ``UnacceptedResponseEncodingRejection(supportedEncodings)``.
The response encoding is determined by the rules specified in RFC7231_.
If the ``Accept-Encoding`` header is missing or empty or specifies an encoding other than identity, gzip or deflate then no encoding is used.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/CodingDirectivesExamplesSpec.scala
:snippet: "encodeResponse"
.. _RFC7231: http://tools.ietf.org/html/rfc7231#section-5.3.4

.. _-encodeResponseWith-:
encodeResponseWith
==================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/CodingDirectives.scala
:snippet: encodeResponseWith
Description
-----------
Encodes the response with the encoding that is requested by the client via the ``Accept-Encoding`` header if it is among the provided encoders, or rejects the request with an ``UnacceptedResponseEncodingRejection(supportedEncodings)``.
The response encoding is determined by the rules specified in RFC7231_.
If the ``Accept-Encoding`` header is missing then the response is encoded using the ``first`` encoder.
If the ``Accept-Encoding`` header is empty and ``NoCoding`` is part of the encoders then no
response encoding is used. Otherwise the request is rejected.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/CodingDirectivesExamplesSpec.scala
:snippet: encodeResponseWith
.. _RFC7231: http://tools.ietf.org/html/rfc7231#section-5.3.4
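The selection rules above can be summarized in a sketch (``selectEncoder`` is a hypothetical helper for illustration; it simplifies RFC 7231 content negotiation to exact-match lookups and is not part of the Akka HTTP API):

```scala
// Sketch of the encoder-selection rules described above (simplified).
def selectEncoder(acceptEncoding: Option[List[String]], encoders: List[String]): Option[String] =
  acceptEncoding match {
    case None => encoders.headOption // header missing: use the first encoder
    case Some(Nil) =>                // header empty: NoCoding ("identity") if offered, else reject
      if (encoders.contains("identity")) Some("identity") else None
    case Some(accepted) =>           // pick the first provided encoder the client accepts
      encoders.find(accepted.contains)
  }

println(selectEncoder(None, List("gzip", "identity")))      // Some(gzip)
println(selectEncoder(Some(Nil), List("gzip", "identity"))) // Some(identity)
println(selectEncoder(Some(List("deflate")), List("gzip"))) // None
```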

.. _CodingDirectives:
CodingDirectives
================
.. toctree::
:maxdepth: 1
decodeRequest
decodeRequestWith
encodeResponse
encodeResponseWith
requestEncodedWith
responseEncodingAccepted

.. _-requestEncodedWith-:
requestEncodedWith
==================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/CodingDirectives.scala
:snippet: requestEncodedWith
Description
-----------
Passes the request to the inner route if the request is encoded with the argument encoding. Otherwise, rejects the request with an ``UnacceptedRequestEncodingRejection(encoding)``.
This directive is the `building block`_ for ``decodeRequest`` to reject unsupported encodings.
.. _`building block`: @github@/akka-http/src/main/scala/akka/http/scaladsl/server/directives/CodingDirectives.scala

.. _-responseEncodingAccepted-:
responseEncodingAccepted
========================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/CodingDirectives.scala
:snippet: responseEncodingAccepted
Description
-----------
Passes the request to the inner route if the request accepts the argument encoding. Otherwise, rejects the request with an ``UnacceptedResponseEncodingRejection(encoding)``.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/CodingDirectivesExamplesSpec.scala
:snippet: responseEncodingAccepted

.. _-cookie-:
cookie
======
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/CookieDirectives.scala
:snippet: cookie
Description
-----------
Extracts a cookie with a given name from a request or otherwise rejects the request with a ``MissingCookieRejection`` if
the cookie is missing.
Use the :ref:`-optionalCookie-` directive instead if you want to support missing cookies in your inner route.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/CookieDirectivesExamplesSpec.scala
:snippet: cookie

.. _-deleteCookie-:
deleteCookie
============
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/CookieDirectives.scala
:snippet: deleteCookie
Description
-----------
Adds a header to the response to request the removal of the cookie with the given name on the client.
Use the :ref:`-setCookie-` directive to update a cookie.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/CookieDirectivesExamplesSpec.scala
:snippet: deleteCookie

.. _CookieDirectives:
CookieDirectives
================
.. toctree::
:maxdepth: 1
cookie
deleteCookie
optionalCookie
setCookie

.. _-optionalCookie-:
optionalCookie
==============
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/CookieDirectives.scala
:snippet: optionalCookie
Description
-----------
Extracts an optional cookie with a given name from a request.
Use the :ref:`-cookie-` directive instead if the inner route does not handle a missing cookie.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/CookieDirectivesExamplesSpec.scala
:snippet: optionalCookie

.. _-setCookie-:
setCookie
=========
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/CookieDirectives.scala
:snippet: setCookie
Description
-----------
Adds a header to the response to request the update of the cookie with the given name on the client.
Use the :ref:`-deleteCookie-` directive to delete a cookie.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/CookieDirectivesExamplesSpec.scala
:snippet: setCookie

.. _Custom Directives:
Custom Directives
=================
Part of the power of akka-http directives comes from the ease with which it's possible to define
custom directives at differing levels of abstraction.
There are essentially three ways of creating custom directives:
1. By introducing new “labels” for configurations of existing directives
2. By transforming existing directives
3. By writing a directive “from scratch”
Configuration Labeling
______________________
The easiest way to create a custom directive is to simply assign a new name for a certain configuration
of one or more existing directives. In fact, most of the predefined akka-http directives can be considered
named configurations of more low-level directives.
The basic technique is explained in the chapter about Composing Directives, where, for example, a new directive
``getOrPut`` is defined like this:
.. includecode2:: ../../../code/docs/http/scaladsl/server/directives/CustomDirectivesExamplesSpec.scala
:snippet: labeling
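For readers without access to the linked snippet, a minimal sketch of such a labeling might look like this (assuming the standard routing DSL imports):

```scala
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Directive0

// A new "label" for a configuration of two existing directives:
// getOrPut passes GET as well as PUT requests on to its inner route.
val getOrPut: Directive0 = get | put

val route = getOrPut { complete("GET or PUT!") }
```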
Another example is the :ref:`MethodDirectives` which are simply instances of a preconfigured :ref:`-method-` directive.
The low-level directives that most often form the basis of higher-level “named configuration” directives are grouped
together in the :ref:`BasicDirectives` trait.
Transforming Directives
_______________________
The second option for creating new directives is to transform an existing one using one of the
“transformation methods”, which are defined on the `Directive`__ class, the base class of all “regular” directives.
__ @github@/akka-http/src/main/scala/akka/http/scaladsl/server/Directive.scala
Apart from the combinator operators (``|`` and ``&``) and the case-class extractor (``as[T]``)
the following transformations are also defined on all ``Directive`` instances:
* :ref:`map/tmap`
* :ref:`flatMap/tflatMap`
* :ref:`require/trequire`
* :ref:`recover/recoverPF`
.. _map/tmap:
map and tmap
------------
If the Directive is a single-value ``Directive``, the ``map`` method allows
for simple transformations:
.. includecode2:: ../../../code/docs/http/scaladsl/server/directives/CustomDirectivesExamplesSpec.scala
:snippet: map-0
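A self-contained sketch of such a ``map`` transformation (the parameter name ``text`` is a hypothetical choice):

```scala
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Directive1

// Transform the value extracted by a single-value directive:
val textParam: Directive1[String] = parameter("text")
val textLength: Directive1[Int]   = textParam.map(text => text.length)

val route = textLength { len => complete(s"text length: $len") }
```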
One example of a predefined directive relying on ``map`` is the `optionalHeaderValue`__ directive.
__ @github@/akka-http/src/main/scala/akka/http/scaladsl/server/directives/HeaderDirectives.scala#L67
The tmap modifier has this signature (somewhat simplified)::
def tmap[R](f: L ⇒ R): Directive[Out]
It can be used to transform the ``Tuple`` of extractions into another ``Tuple``.
The number and/or types of the extractions can be changed arbitrarily. For example
if ``R`` is ``Tuple2[A, B]`` then the result will be a ``Directive[(A, B)]``. Here is a
somewhat contrived example:
.. includecode2:: ../../../code/docs/http/scaladsl/server/directives/CustomDirectivesExamplesSpec.scala
:snippet: tmap-1
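As a rough illustration (parameter names are hypothetical), ``tmap`` can also collapse a two-value extraction into a single value, with the ``Tupler`` machinery wrapping the plain result into a ``Tuple1``:

```scala
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.{ Directive, Directive1 }

// Two Int extractions, collapsed into one via tmap:
val twoInts: Directive[(Int, Int)] = parameters("a".as[Int], "b".as[Int])
val sum: Directive1[Int] = twoInts.tmap { case (a, b) => a + b }

val route = sum { s => complete(s"sum: $s") }
```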
.. _flatMap/tflatMap:
flatMap and tflatMap
--------------------
With map and tmap you can transform the values a directive extracts
but you cannot change the “extracting” nature of the directive.
For example, if you have a directive extracting an ``Int`` you can use map to turn
it into a directive that extracts that ``Int`` and doubles it, but you cannot transform
it into a directive, that doubles all positive ``Int`` values and rejects all others.
In order to do the latter you need ``flatMap`` or ``tflatMap``. The ``tflatMap``
modifier has this signature::
def tflatMap[R: Tuple](f: L ⇒ Directive[R]): Directive[R]
The given function produces a new directive depending on the Tuple of extractions
of the underlying one. As in the case of :ref:`map/tmap` there is also a single-value
variant called ``flatMap``, which simplifies the operation for Directives only extracting one single value.
Here is the (contrived) example from above, which doubles positive Int values and rejects all others:
.. includecode2:: ../../../code/docs/http/scaladsl/server/directives/CustomDirectivesExamplesSpec.scala
:snippet: flatMap-0
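A sketch of this contrived example, assuming a hypothetical ``amount`` query parameter:

```scala
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.Directive1

// Double positive Int values, reject everything else.
val amount: Directive1[Int] = parameter("amount".as[Int])
val doubledPositive: Directive1[Int] =
  amount.flatMap {
    case i if i > 0 => provide(i * 2)
    case _          => reject // implicitly lifted to a rejecting Directive
  }

val route = doubledPositive { d => complete(s"doubled: $d") }
```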
A common pattern that relies on flatMap is to first extract a value
from the RequestContext with the extract directive and then flatMap with
some kind of filtering logic. For example, this is the implementation
of the method directive:
.. includecode2:: ../../../../../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/MethodDirectives.scala
:snippet: method
The explicit type parameter ``[Unit]`` on the ``flatMap`` is needed in this case
because the result of the flatMap is directly concatenated with the
``cancelAllRejections`` directive, thereby preventing “outside-in”
inference of the type parameter value.
.. _require/trequire:
require and trequire
--------------------
The ``require`` modifier transforms a single-extraction directive into a directive
without extractions, which filters requests according to a predicate function.
All requests for which the predicate is false are rejected; all others pass unchanged.
The signature of require is this::
def require(predicate: T ⇒ Boolean, rejections: Rejection*): Directive0
One example of a predefined directive relying on ``require`` is the first overload of the ``host`` directive:
.. includecode2:: ../../../../../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/HostDirectives.scala
:snippet: require-host
You can only call ``require`` on single-extraction directives. The ``trequire`` modifier is the
more general variant, which takes a predicate of type ``Tuple => Boolean``.
It can therefore also be used on directives with several extractions.
.. _recover/recoverPF:
recover and recoverPF
---------------------
The ``recover`` modifier allows you “catch” rejections produced by the underlying
directive and, instead of rejecting, produce an alternative directive with the same type(s) of extractions.
The signature of recover is this::
    def recover[R >: L: Tuple](recovery: Seq[Rejection] ⇒ Directive[R]): Directive[R]
In many cases the very similar ``recoverPF`` modifier might be a little bit
easier to use since it doesn't require the handling of all rejections::
def recoverPF[R >: L: Tuple](
recovery: PartialFunction[Seq[Rejection], Directive[R]]): Directive[R]
One example of a predefined directive relying on ``recoverPF`` is the ``optionalHeaderValue`` directive:
.. includecode2:: ../../../../../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/HeaderDirectives.scala
:snippet: optional-header
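To give a rough idea of the technique (the header name is hypothetical and the real ``optionalHeaderValue`` implementation differs in detail), ``recoverPF`` can turn a required-header directive into an optional one:

```scala
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.{ Directive1, MissingHeaderRejection }

// Extract the header as Some(value), recovering a missing-header
// rejection into a None extraction instead of rejecting the request.
val optionalUserId: Directive1[Option[String]] =
  headerValueByName("X-User-Id")
    .map(value => Some(value): Option[String])
    .recoverPF {
      case Seq(MissingHeaderRejection("X-User-Id")) =>
        provide(Option.empty[String])
    }
```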
Directives from Scratch
_______________________
The third option for creating custom directives is to do it “from scratch”,
by directly subclassing the Directive class. The Directive is defined like this
(leaving away operators and modifiers):
.. includecode2:: ../../../../../../akka-http/src/main/scala/akka/http/scaladsl/server/Directive.scala
:snippet: basic
It only has one abstract member that you need to implement, the ``happly`` method, which creates
the ``Route`` the directive presents to the outside from its inner ``Route``-building function
(taking the extractions as parameter).
Extractions are kept as a Tuple. Here are a few examples:
A ``Directive[Unit]`` extracts nothing (like the get directive).
Because this type is used quite frequently akka-http defines a type alias for it::
type Directive0 = Directive[Unit]
A ``Directive[(String)]`` extracts one String value (like the hostName directive). The type alias for it is::
type Directive1[T] = Directive[Tuple1[T]]
A Directive[(Int, String)] extracts an ``Int`` value and a ``String`` value
(like a ``parameters('a.as[Int], 'b.as[String])`` directive).
Keeping extractions as ``Tuples`` has a lot of advantages, mainly great flexibility
while upholding full type safety and “inferability”. However, the number of times
where you'll really have to fall back to defining a directive from scratch should
be very small. In fact, if you find yourself in a position where a “from scratch”
directive is your only option, we'd like to hear about it,
so we can provide a higher-level “something” for other users.

.. _DebuggingDirectives:
DebuggingDirectives
===================
.. toctree::
:maxdepth: 1
logRequest
logRequestResult
logResult

.. _-logRequest-:
logRequest
==========
Signature
---------
::
def logRequest(marker: String)(implicit log: LoggingContext): Directive0
def logRequest(marker: String, level: LogLevel)(implicit log: LoggingContext): Directive0
def logRequest(show: HttpRequest => String)(implicit log: LoggingContext): Directive0
def logRequest(show: HttpRequest => LogEntry)(implicit log: LoggingContext): Directive0
def logRequest(magnet: LoggingMagnet[HttpRequest => Unit])(implicit log: LoggingContext): Directive0
The signature shown is simplified; the real signature uses magnets. [1]_
.. [1] See `The Magnet Pattern`_ for an explanation of magnet-based overloading.
.. _`The Magnet Pattern`: http://spray.io/blog/2012-12-13-the-magnet-pattern/
Description
-----------
Logs the request using the supplied ``LoggingMagnet[HttpRequest => Unit]``. This ``LoggingMagnet`` is a wrapped
function ``HttpRequest => Unit`` that can be implicitly created from the different constructors shown above. These
constructors build a ``LoggingMagnet`` from these components:
* A marker to prefix each log message with.
* A log level.
* A ``show`` function that calculates a string representation for a request.
* An implicit ``LoggingContext`` that is used to emit the log message.
* A function that creates a ``LogEntry`` which is a combination of the elements above.
It is also possible to use any other function ``HttpRequest => Unit`` for logging by wrapping it with ``LoggingMagnet``.
See the examples for ways to use the ``logRequest`` directive.
Use ``logResult`` for logging the response, or ``logRequestResult`` for logging both.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/DebuggingDirectivesExamplesSpec.scala
:snippet: logRequest-0
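A minimal sketch of the marker and log-level variants (the marker strings are arbitrary):

```scala
import akka.event.Logging
import akka.http.scaladsl.server.Directives._

// Log each request at the default debug level, prefixed with a marker:
val route1 = logRequest("incoming") { complete("logged") }

// Same, but at info level:
val route2 = logRequest("incoming", Logging.InfoLevel) { complete("logged") }
```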

.. _-logRequestResult-:
logRequestResult
================
Signature
---------
::
def logRequestResult(marker: String)(implicit log: LoggingContext): Directive0
def logRequestResult(marker: String, level: LogLevel)(implicit log: LoggingContext): Directive0
def logRequestResult(show: HttpRequest => RouteResult => Option[LogEntry])(implicit log: LoggingContext): Directive0
The signature shown is simplified; the real signature uses magnets. [1]_
.. [1] See `The Magnet Pattern`_ for an explanation of magnet-based overloading.
.. _`The Magnet Pattern`: http://spray.io/blog/2012-12-13-the-magnet-pattern/
Description
-----------
Logs both the request and the response.
This directive is a combination of :ref:`-logRequest-` and :ref:`-logResult-`.
See :ref:`-logRequest-` for a general description of how these directives work.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/DebuggingDirectivesExamplesSpec.scala
:snippet: logRequestResult
Building Advanced Directives
----------------------------
This example showcases advanced logging using the ``DebuggingDirectives``.
The custom-built ``logResponseTime`` directive will log the request time (or the rejection reason):
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/DebuggingDirectivesExamplesSpec.scala
:snippet: logRequestResultWithResponseTime

.. _-logResult-:
logResult
=========
Signature
---------
::
def logResult(marker: String)(implicit log: LoggingContext): Directive0
def logResult(marker: String, level: LogLevel)(implicit log: LoggingContext): Directive0
def logResult(show: RouteResult => String)(implicit log: LoggingContext): Directive0
def logResult(show: RouteResult => LogEntry)(implicit log: LoggingContext): Directive0
def logResult(magnet: LoggingMagnet[RouteResult => Unit])(implicit log: LoggingContext): Directive0
The signature shown is simplified; the real signature uses magnets. [1]_
.. [1] See `The Magnet Pattern`_ for an explanation of magnet-based overloading.
.. _`The Magnet Pattern`: http://spray.io/blog/2012-12-13-the-magnet-pattern/
Description
-----------
Logs the response.
See :ref:`-logRequest-` for a general description of how these directives work. This directive is different
as it requires a ``LoggingMagnet[RouteResult => Unit]``. Instead of just logging ``HttpResponses``, ``logResult`` is able to
log any :ref:`RouteResult` coming back from the inner route.
Use ``logRequest`` for logging the request, or ``logRequestResult`` for logging both.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/DebuggingDirectivesExamplesSpec.scala
:snippet: logResult

.. _-handleExceptions-:
handleExceptions
================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/ExecutionDirectives.scala
:snippet: handleExceptions
Description
-----------
Catches exceptions thrown by the inner route and handles them using the specified ``ExceptionHandler``.
Using this directive is an alternative to using a global implicitly defined ``ExceptionHandler`` that
applies to the complete route.
See :ref:`exception-handling-scala` for general information about options for handling exceptions.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/ExecutionDirectivesExamplesSpec.scala
:snippet: handleExceptions
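A self-contained sketch (the handler, message, and path are hypothetical):

```scala
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.ExceptionHandler

// Map an ArithmeticException thrown by the inner route to a 400 response.
val divByZeroHandler = ExceptionHandler {
  case _: ArithmeticException =>
    complete(StatusCodes.BadRequest, "You've got your arithmetic wrong!")
}

val route =
  handleExceptions(divByZeroHandler) {
    path("divide") {
      complete((10 / 0).toString) // throws, caught by the handler above
    }
  }
```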

.. _-handleRejections-:
handleRejections
================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/ExecutionDirectives.scala
:snippet: handleRejections
Description
-----------
Using this directive is an alternative to using a global implicitly defined ``RejectionHandler`` that
applies to the complete route.
See :ref:`rejections-scala` for general information about options for handling rejections.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/ExecutionDirectivesExamplesSpec.scala
:snippet: handleRejections
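A self-contained sketch (the status text and path are hypothetical):

```scala
import akka.http.scaladsl.model.StatusCodes
import akka.http.scaladsl.server.Directives._
import akka.http.scaladsl.server.RejectionHandler

// Respond with a custom 404 when nothing in the inner route matched.
val totallyMissingHandler = RejectionHandler.newBuilder()
  .handleNotFound(complete(StatusCodes.NotFound, "Oh man, what you are looking for is long gone."))
  .result()

val route =
  handleRejections(totallyMissingHandler) {
    path("existing")(complete("still here"))
  }
```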

.. _ExecutionDirectives:
ExecutionDirectives
===================
.. toctree::
:maxdepth: 1
handleExceptions
handleRejections

.. _-getFromBrowseableDirectories-:
getFromBrowseableDirectories
============================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/FileAndResourceDirectives.scala
:snippet: getFromBrowseableDirectories
Description
-----------
The ``getFromBrowseableDirectories`` directive is a combination of serving files from the specified directories
(like ``getFromDirectory``) and listing a browseable directory with ``listDirectoryContents``.
Nesting this directive beneath ``get`` is not necessary as this directive will only respond to ``GET`` requests.
Use ``getFromBrowseableDirectory`` to serve only one directory.
Use ``getFromDirectory`` if directory browsing isn't required.
For more details refer to :ref:`-getFromBrowseableDirectory-`.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/FileAndResourceDirectivesExamplesSpec.scala
:snippet: getFromBrowseableDirectories-examples

.. _-getFromBrowseableDirectory-:
getFromBrowseableDirectory
==========================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/FileAndResourceDirectives.scala
:snippet: getFromBrowseableDirectory
Description
-----------
The ``getFromBrowseableDirectory`` directive is a combination of serving files from the specified directory (like
``getFromDirectory``) and listing a browseable directory with ``listDirectoryContents``.
Nesting this directive beneath ``get`` is not necessary as this directive will only respond to ``GET`` requests.
Use ``getFromBrowseableDirectories`` to serve several directories.
Use ``getFromDirectory`` if directory browsing isn't required.
For more details refer to :ref:`-getFromBrowseableDirectories-`.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/FileAndResourceDirectivesExamplesSpec.scala
:snippet: getFromBrowseableDirectory-examples
Default file listing page example
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Directives which list directories (e.g. ``getFromBrowseableDirectory``) use an implicit ``DirectoryRenderer``
instance to perform the actual rendering of the file listing. This renderer can easily be overridden by
providing an instance in scope for the directives to use, so you can build your own custom directory listings.
The default renderer is ``akka.http.scaladsl.server.directives.FileAndResourceDirectives.defaultDirectoryRenderer``,
and renders a listing which looks like this:
.. figure:: ../../../../../images/akka-http-file-listing.png
:scale: 75%
:align: center
Example page rendered by the ``defaultDirectoryRenderer``.
It's possible to turn off rendering the footer stating which version of Akka HTTP is rendering this page by configuring
the ``akka.http.routing.render-vanity-footer`` configuration option to ``off``.

.. _-getFromDirectory-:
getFromDirectory
================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/FileAndResourceDirectives.scala
:snippet: getFromDirectory
Description
-----------
Allows exposing all files of a given directory in response to ``GET`` requests for their contents.
The ``unmatchedPath`` (see :ref:`-extractUnmatchedPath-`) of the ``RequestContext`` is first transformed by
the given ``pathRewriter`` function, before being appended to the given directory name to build the final file name.
To serve a single file use :ref:`-getFromFile-`.
To serve browsable directory listings use :ref:`-getFromBrowseableDirectories-`.
To serve files from a classpath directory use :ref:`-getFromResourceDirectory-` instead.
Note that it's not required to wrap this directive with ``get`` as this directive will only respond to ``GET`` requests.
.. note::
The file's contents will be read using an Akka Streams `Source` which *automatically uses
a pre-configured dedicated blocking io dispatcher*, which separates the blocking file operations from the rest of the stream.
Note also that thanks to using Akka Streams internally, the file will be served at the highest speed reachable by
the client, and not faster i.e. the file will *not* end up being loaded in full into memory before writing it to
the client.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/FileAndResourceDirectivesExamplesSpec.scala
:snippet: getFromDirectory-examples
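A sketch of typical usage (the directory path and URL prefix are hypothetical):

```scala
import akka.http.scaladsl.server.Directives._

// Serve e.g. GET /tmp/example.txt from the local file /tmp/example.txt:
val route =
  pathPrefix("tmp") {
    getFromDirectory("/tmp")
  }
```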

.. _-getFromFile-:
getFromFile
===========
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/FileAndResourceDirectives.scala
:snippet: getFromFile
Description
-----------
Allows exposing a file to be streamed to the client issuing the request.
The ``unmatchedPath`` (see :ref:`-extractUnmatchedPath-`) of the ``RequestContext`` is first transformed by
the given ``pathRewriter`` function, before being appended to the given directory name to build the final file name.
To serve files from a given directory use :ref:`-getFromDirectory-`.
To serve browsable directory listings use :ref:`-getFromBrowseableDirectories-`.
To serve files from a classpath directory use :ref:`-getFromResourceDirectory-` instead.
Note that it's not required to wrap this directive with ``get`` as this directive will only respond to ``GET`` requests.
.. note::
The file's contents will be read using an Akka Streams `Source` which *automatically uses
a pre-configured dedicated blocking io dispatcher*, which separates the blocking file operations from the rest of the stream.
Note also that thanks to using Akka Streams internally, the file will be served at the highest speed reachable by
the client, and not faster i.e. the file will *not* end up being loaded in full into memory before writing it to
the client.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/FileAndResourceDirectivesExamplesSpec.scala
:snippet: getFromFile-examples

.. _-getFromResource-:
getFromResource
===============
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/FileAndResourceDirectives.scala
:snippet: getFromResource
Description
-----------
Completes GET requests with the content of the given classpath resource.
For details refer to :ref:`-getFromFile-` which works the same way but obtains the file from the filesystem
instead of the application's classpath.
Note that it's not required to wrap this directive with ``get`` as this directive will only respond to ``GET`` requests.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/FileAndResourceDirectivesExamplesSpec.scala
:snippet: getFromResource-examples

.. _-getFromResourceDirectory-:
getFromResourceDirectory
========================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/FileAndResourceDirectives.scala
:snippet: getFromResourceDirectory
Description
-----------
Completes GET requests with the content of the given classpath resource directory.
For details refer to :ref:`-getFromDirectory-` which works the same way but obtains the files from the filesystem
instead of the application's classpath.
Note that it's not required to wrap this directive with ``get`` as this directive will only respond to ``GET`` requests.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/FileAndResourceDirectivesExamplesSpec.scala
:snippet: getFromResourceDirectory-examples

.. _FileAndResourceDirectives:
FileAndResourceDirectives
=========================
Like the :ref:`RouteDirectives` the ``FileAndResourceDirectives`` are somewhat special in akka-http's routing DSL.
Contrary to all other directives they do not produce instances of type ``Directive[L <: HList]`` but rather "plain"
routes of type ``Route``.
The reason is that they are not meant for wrapping an inner route (like most other directives, as intermediate-level
elements of a route structure, do) but rather form the actual route structure **leaves**.
So in most cases the inner-most element of a route structure branch is one of the :ref:`RouteDirectives` or
``FileAndResourceDirectives``.
.. toctree::
:maxdepth: 1
getFromBrowseableDirectories
getFromBrowseableDirectory
getFromDirectory
getFromFile
getFromResource
getFromResourceDirectory
listDirectoryContents

.. _-listDirectoryContents-:
listDirectoryContents
=====================
Signature
---------
.. includecode2:: /../../akka-http/src/main/scala/akka/http/scaladsl/server/directives/FileAndResourceDirectives.scala
:snippet: listDirectoryContents
Description
-----------
Completes GET requests with a unified listing of the contents of all given directories. The actual rendering of the
directory contents is performed by the in-scope ``Marshaller[DirectoryListing]``.
To just serve files use :ref:`-getFromDirectory-`.
To serve files and provide a browseable directory listing use :ref:`-getFromBrowseableDirectories-` instead.
The rendering can be overridden by providing a custom ``Marshaller[DirectoryListing]``; you can read more about it in
the documentation of :ref:`-getFromDirectory-`.
Note that it's not required to wrap this directive with ``get`` as this directive will only respond to ``GET`` requests.
Example
-------
.. includecode2:: ../../../../code/docs/http/scaladsl/server/directives/FileAndResourceDirectivesExamplesSpec.scala
:snippet: listDirectoryContents-examples
