* +doc #20192 explain need of draining entities in server/client HTTP
* missing javadsl for Connection header
* Update HttpClientExampleDocTest.java
parent 9683e4bc58, commit 60fb163331
20 changed files with 766 additions and 16 deletions

@@ -7,6 +7,10 @@ The connection-level API is the lowest-level client-side API Akka HTTP provides.
HTTP connections are opened and closed and how requests are to be sent across which connection. As such it offers the
highest flexibility at the cost of providing the least convenience.

.. note::
   It is recommended to first read the :ref:`implications-of-streaming-http-entities` section,
   as it explains the underlying full-stack streaming concepts, which may be unexpected when coming
   from a background with non-"streaming first" HTTP Clients.

Opening HTTP Connections
------------------------

@@ -90,4 +94,4 @@ On the client-side the stand-alone HTTP layer forms a ``BidiStage`` that is defi
   :snippet: client-layer

You create an instance of ``Http.ClientLayer`` by calling one of the two overloads of the ``Http().clientLayer`` method,
which also allows for varying degrees of configuration.

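For illustration, a minimal sketch of what obtaining such a layer could look like, assuming the scaladsl
overloads that take the target server's ``Host`` header (and, optionally, explicit connection settings):

.. code-block:: scala

   import akka.actor.ActorSystem
   import akka.http.scaladsl.Http
   import akka.http.scaladsl.model.headers.Host
   import akka.http.scaladsl.settings.ClientConnectionSettings

   implicit val system = ActorSystem()

   // simplest overload: only the Host header of the target server is given
   val layer = Http().clientLayer(Host("example.com"))

   // overload with explicit settings, e.g. derived from the ActorSystem's configuration
   val configuredLayer =
     Http().clientLayer(Host("example.com"), ClientConnectionSettings(system))
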
@@ -7,6 +7,10 @@ As opposed to the :ref:`connection-level-api` the host-level API relieves you fr
connections. It autonomously manages a configurable pool of connections to *one particular target endpoint* (i.e.
host/port combination).

.. note::
   It is recommended to first read the :ref:`implications-of-streaming-http-entities` section,
   as it explains the underlying full-stack streaming concepts, which may be unexpected when coming
   from a background with non-"streaming first" HTTP Clients.

Requesting a Host Connection Pool
---------------------------------

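For illustration, a minimal sketch of what requesting such a pool could look like, assuming the
``cachedHostConnectionPool`` variant of the API and an implicit ``ActorMaterializer`` in scope:

.. code-block:: scala

   import akka.actor.ActorSystem
   import akka.http.scaladsl.Http
   import akka.http.scaladsl.model._
   import akka.stream.ActorMaterializer
   import akka.stream.scaladsl.Flow

   import scala.util.Try

   implicit val system = ActorSystem()
   implicit val materializer = ActorMaterializer()

   // a pool of connections towards akka.io:80; the Int is a correlation token
   // that travels through the pool alongside each request
   val poolClientFlow: Flow[(HttpRequest, Int), (Try[HttpResponse], Int), Http.HostConnectionPool] =
     Http().cachedHostConnectionPool[Int]("akka.io")
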
@@ -153,4 +157,4 @@ Example
-------

.. includecode:: ../../code/docs/http/scaladsl/HttpClientExampleSpec.scala
   :include: host-level-example

@@ -6,6 +6,10 @@ Consuming HTTP-based Services (Client-Side)
All client-side functionality of Akka HTTP, for consuming HTTP-based services offered by other endpoints, is currently
provided by the ``akka-http-core`` module.

It is recommended to first read the :ref:`implications-of-streaming-http-entities` section,
as it explains the underlying full-stack streaming concepts, which may be unexpected when coming
from a background with non-"streaming first" HTTP Clients.

Depending on your application's specific needs you can choose from three different API levels:

:ref:`connection-level-api`

@@ -28,4 +32,4 @@ Akka HTTP will happily handle many thousand concurrent connections to a single o
   host-level
   request-level
   client-https-support
   websocket-support

@@ -7,6 +7,11 @@ The request-level API is the most convenient way of using Akka HTTP's client-sid
:ref:`host-level-api` to provide you with a simple and easy-to-use way of retrieving HTTP responses from remote servers.
Depending on your preference you can pick the flow-based or the future-based variant.

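For illustration, a minimal sketch of the future-based variant, assuming the ``Http().singleRequest`` call
and an implicit ``ActorMaterializer`` in scope:

.. code-block:: scala

   import akka.actor.ActorSystem
   import akka.http.scaladsl.Http
   import akka.http.scaladsl.model._
   import akka.stream.ActorMaterializer

   import scala.concurrent.Future

   implicit val system = ActorSystem()
   implicit val materializer = ActorMaterializer()

   // fire a single request; the connection pool behind it is managed for us
   val responseFuture: Future[HttpResponse] =
     Http().singleRequest(HttpRequest(uri = "http://example.com/"))

Keep in mind that the entity of the resulting response still has to be consumed or discarded,
as explained in :ref:`implications-of-streaming-http-entities`.
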
.. note::
   It is recommended to first read the :ref:`implications-of-streaming-http-entities` section,
   as it explains the underlying full-stack streaming concepts, which may be unexpected when coming
   from a background with non-"streaming first" HTTP Clients.

.. note::
   The request-level API is implemented on top of a connection pool that is shared inside the ActorSystem. A consequence of
   using a pool is that long-running requests block a connection while running and starve other requests. Make sure not to use

@@ -0,0 +1,129 @@
.. _implications-of-streaming-http-entities:

Implications of the streaming nature of Request/Response Entities
------------------------------------------------------------------

Akka HTTP is streaming *all the way through*, which means that the back-pressure mechanisms enabled by Akka Streams
are exposed through all layers, from the TCP layer, through the HTTP server, all the way up to the user-facing ``HttpRequest``
and ``HttpResponse`` and their ``HttpEntity`` APIs.

This has surprising implications if you are used to non-streaming / non-reactive HTTP clients.
Specifically it means that: "*lack of consumption of the HTTP Entity is signaled as back-pressure to the other
side of the connection*". This is a feature, as it allows one to consume the entity at one's own pace and to
back-pressure servers/clients, preventing them from overwhelming our application and from causing unnecessary
buffering of the entity in memory.

.. warning::
   Consuming (or discarding) the Entity of a request is mandatory!
   If *accidentally* left neither consumed nor discarded, Akka HTTP will
   assume the incoming data should remain back-pressured, and will stall the incoming data via TCP back-pressure mechanisms.

Client-Side handling of streaming HTTP Entities
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Consuming the HTTP Response Entity (Client)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The most common use-case of course is consuming the response entity, which can be done by
running the underlying ``dataBytes`` Source (or, on the server-side, by using directives such as the
:ref:`-entity-` directive described below).

It is encouraged to use various streaming techniques to utilise the underlying infrastructure to its fullest,
for example by framing the incoming chunks, parsing them line-by-line and then connecting the flow into another
destination Sink, such as a File or other Akka Streams connector:

.. includecode:: ../code/docs/http/scaladsl/HttpClientExampleSpec.scala
   :include: manual-entity-consume-example-1

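For illustration only, a minimal sketch of such a pipeline (not the snippet referenced above), assuming a
``response: HttpResponse`` and an implicit materializer in scope, and writing the framed lines to a
hypothetical ``/tmp/response-lines.txt``:

.. code-block:: scala

   import java.nio.file.Paths

   import akka.stream.scaladsl.{ FileIO, Framing }
   import akka.util.ByteString

   // frame the raw bytes into lines and stream them straight into a file
   val linesWritten =
     response.entity.dataBytes
       .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 1024, allowTruncation = true))
       .map(line => line ++ ByteString("\n")) // re-append the delimiter dropped by the framing stage
       .runWith(FileIO.toPath(Paths.get("/tmp/response-lines.txt")))
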
However, sometimes the need may arise to consume the entire entity as a ``Strict`` entity (which means that it is
completely loaded into memory). Akka HTTP provides a special ``toStrict(timeout)`` method which can be used to
eagerly consume the entity and make it available in memory:

.. includecode:: ../code/docs/http/scaladsl/HttpClientExampleSpec.scala
   :include: manual-entity-consume-example-2

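For illustration, a minimal sketch of using ``toStrict``, again assuming a ``response: HttpResponse`` and an
implicit materializer in scope:

.. code-block:: scala

   import scala.concurrent.Future
   import scala.concurrent.duration._

   import akka.http.scaladsl.model.HttpEntity

   // load the complete entity into memory, but give up after 3 seconds
   val strictEntity: Future[HttpEntity.Strict] = response.entity.toStrict(3.seconds)
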
Discarding the HTTP Response Entity (Client)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sometimes when calling HTTP services we do not care about their response payload (e.g. all we care about is the response code),
yet as explained above the entity still has to be consumed in some way, otherwise we'll be exerting back-pressure on the
underlying TCP connection.

The ``discardEntityBytes`` convenience method serves the purpose of easily discarding the entity if it has no purpose for us.
It does so by piping the incoming bytes directly into a ``Sink.ignore``.

The two snippets below are equivalent, and work the same way on the server-side for incoming HTTP Requests:

.. includecode:: ../code/docs/http/scaladsl/HttpClientExampleSpec.scala
   :include: manual-entity-discard-example-1

Or the equivalent low-level code achieving the same result:

.. includecode:: ../code/docs/http/scaladsl/HttpClientExampleSpec.scala
   :include: manual-entity-discard-example-2

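For illustration, minimal sketches of both approaches (not the snippets referenced above), assuming a
``response: HttpResponse`` and an implicit materializer in scope:

.. code-block:: scala

   import akka.Done
   import akka.http.scaladsl.model.HttpMessage.DiscardedEntity
   import akka.stream.scaladsl.Sink

   import scala.concurrent.Future

   // convenience method: drains the entity into Sink.ignore for us
   val discarded: DiscardedEntity = response.discardEntityBytes()
   val done1: Future[Done] = discarded.future

   // equivalent low-level variant: run the dataBytes Source into an ignoring Sink ourselves
   val done2: Future[Done] = response.entity.dataBytes.runWith(Sink.ignore)
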
Server-Side handling of streaming HTTP Entities
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Similarly to the Client-side, HTTP Entities are directly linked to Streams which are fed by the underlying
TCP connection. Thus, if request entities remain unconsumed, the server will back-pressure the connection, expecting
that the user-code will eventually decide what to do with the incoming data.

Note that some directives force an implicit ``toStrict`` operation, such as ``entity(as[String])`` and similar ones.

Consuming the HTTP Request Entity (Server)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The simplest way of consuming the incoming request entity is to transform it into an actual domain object,
for example by using the :ref:`-entity-` directive:

.. includecode:: ../code/docs/http/scaladsl/HttpServerExampleSpec.scala
   :include: consume-entity-directive

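For illustration, a minimal sketch of such a route (not the snippet referenced above), using the predefined
``String`` unmarshaller, which forces the implicit ``toStrict`` step mentioned above:

.. code-block:: scala

   import akka.http.scaladsl.server.Directives._

   // entity(as[String]) materializes (and thereby consumes) the incoming entity
   // and hands us the complete body as a String
   val route =
     post {
       entity(as[String]) { body =>
         complete("Received " + body.length + " characters")
       }
     }
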
Of course you can access the raw dataBytes as well and run the underlying stream, for example piping it into a
FileIO Sink that signals completion via a ``Future[IOResult]`` once all the data has been written into the file:

.. includecode:: ../code/docs/http/scaladsl/HttpServerExampleSpec.scala
   :include: consume-raw-dataBytes

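For illustration, a minimal sketch of such a route (not the snippet referenced above), assuming an implicit
materializer in scope and a hypothetical target file ``/tmp/upload.bin``:

.. code-block:: scala

   import java.nio.file.Paths

   import akka.http.scaladsl.server.Directives._
   import akka.stream.scaladsl.FileIO

   val route =
     (put & path("lines") & extractRequest) { request =>
       // stream the raw request bytes directly into the file
       val finishedWriting =
         request.entity.dataBytes.runWith(FileIO.toPath(Paths.get("/tmp/upload.bin")))

       // complete the request once the IOResult signals that all bytes have been written
       onComplete(finishedWriting) { ioResult =>
         complete("Finished writing data: " + ioResult)
       }
     }
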
Discarding the HTTP Request Entity (Server)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sometimes, depending on some validation (e.g. checking if a given user is allowed to perform uploads or not)
you may want to decide to discard the uploaded entity.

Please note that discarding means that the entire upload will proceed, even though you are not interested in the data
being streamed to the server - this may be useful if you are simply not interested in the given entity, however
you don't want to abort the entire connection (which we'll demonstrate as well), since there may be more requests
still pending on the same connection.

In order to discard the ``dataBytes`` explicitly you can invoke the ``discardEntityBytes`` method of the incoming ``HttpRequest``:

.. includecode:: ../code/docs/http/scaladsl/HttpServerExampleSpec.scala
   :include: discard-discardEntityBytes

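For illustration, a minimal sketch of such a route (not the snippet referenced above), assuming an implicit
materializer in scope:

.. code-block:: scala

   import akka.http.scaladsl.server.Directives._

   val route =
     (put & path("lines") & extractRequest) { request =>
       // the upload is not wanted, so drain the incoming bytes without buffering them,
       // keeping the connection alive for further requests
       val discarded = request.discardEntityBytes()

       onComplete(discarded.future) { done =>
         complete("Entity discarded completely!")
       }
     }
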
A related concept is *cancelling* the incoming ``entity.dataBytes`` stream, which results in Akka HTTP
*abruptly closing the connection from the Client*. This may be useful when you detect that the given user should not be allowed to make any
uploads at all, and you want to drop the connection (instead of reading and ignoring the incoming data).
This can be done by attaching the incoming ``entity.dataBytes`` to a ``Sink.cancelled`` which will cancel
the entity stream, which in turn will cause the underlying connection to be shut down by the server,
effectively hard-aborting the incoming request:

.. includecode:: ../code/docs/http/scaladsl/HttpServerExampleSpec.scala
   :include: discard-close-connections

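For illustration, a minimal sketch of such a route (not the snippet referenced above), assuming an implicit
materializer in scope:

.. code-block:: scala

   import akka.http.scaladsl.model.StatusCodes
   import akka.http.scaladsl.server.Directives._
   import akka.stream.scaladsl.Sink

   val route =
     (put & path("lines") & extractRequest) { request =>
       // cancelling the entity stream causes the server to abruptly close the connection
       request.entity.dataBytes.runWith(Sink.cancelled)
       complete(StatusCodes.Forbidden -> "Uploads not allowed for this user.")
     }
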
Closing connections is also explained in depth in the :ref:`http-closing-connection-low-level` section of the docs.

Pending: Automatic discarding of unused entities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Under certain conditions it is possible to detect that an entity is very unlikely to be used by the user for a given request,
and issue warnings or discard the entity automatically. This advanced feature has not been implemented yet; see the below
note and issues for further discussion and ideas.

.. note::
   An advanced feature, code named "auto draining", has been discussed and proposed for Akka HTTP, and we're hoping
   to implement or help the community implement it.

   You can read more about it in `issue #18716 <https://github.com/akka/akka/issues/18716>`_
   as well as `issue #18540 <https://github.com/akka/akka/issues/18540>`_; as always, contributions are very welcome!

@@ -9,6 +9,7 @@ Akka HTTP
   introduction
   configuration
   common/index
   implications-of-streaming-http-entity
   low-level-server-side-api
   routing-dsl/index
   client-side/index

@@ -40,6 +40,10 @@ Depending on your needs you can either use the low-level API directly or rely on
:ref:`Routing DSL <http-high-level-server-side-api>` which can make the definition of more complex service logic much
easier.

.. note::
   It is recommended to read the :ref:`implications-of-streaming-http-entities` section,
   as it explains the underlying full-stack streaming concepts, which may be unexpected when coming
   from a background with non-"streaming first" HTTP Servers.

Streams and HTTP
----------------

@@ -123,6 +127,7 @@ See :ref:`HttpEntity-scala` for a description of the alternatives.
If you rely on the :ref:`http-marshalling-scala` and/or :ref:`http-unmarshalling-scala` facilities provided by
Akka HTTP then the conversion of custom types to and from streamed entities can be quite convenient.

.. _http-closing-connection-low-level:

Closing a connection
~~~~~~~~~~~~~~~~~~~~

@@ -8,6 +8,11 @@ defining RESTful web services. It picks up where the low-level API leaves off an
functionality of typical web servers or frameworks, like deconstruction of URIs, content negotiation or
static content serving.

.. note::
   It is recommended to read the :ref:`implications-of-streaming-http-entities` section,
   as it explains the underlying full-stack streaming concepts, which may be unexpected when coming
   from a background with non-"streaming first" HTTP Servers.

.. toctree::
   :maxdepth: 1

@@ -100,4 +105,4 @@ and split each line before we send it to an actor for further processing:

Configuring Server-side HTTPS
-----------------------------

For detailed documentation about configuring and using HTTPS on the server-side refer to :ref:`serverSideHTTPS-scala`.