The best way to get a hold of a connection pool to a given target endpoint is the ``Http.get(system).cachedHostConnectionPool(...)``
method, which returns a ``Flow`` that can be "baked" into an application-level stream setup. This flow is also called
a "pool client flow".
The connection pool underlying a pool client flow is cached. For every ``ActorSystem``, target endpoint and pool
configuration there will never be more than a single pool live at any time.
Also, the HTTP layer transparently manages idle shutdown and restarting of connection pools as configured.
The client flow instances therefore remain valid throughout the lifetime of the application, i.e. they can be
materialized as often as required, and the time between individual materializations is of no importance.
When you request a pool client flow with ``Http.get(system).cachedHostConnectionPool(...)`` Akka HTTP will immediately start
the pool, even before the first client flow materialization. However, this running pool will not actually open the
first connection to the target endpoint until the first request has arrived.
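As an illustration, here is a minimal sketch of obtaining and materializing such a pool client flow. It assumes a recent
Akka HTTP javadsl API in which ``cachedHostConnectionPool`` accepts the target host as a plain string and the flow
elements are ``akka.japi.Pair`` instances (older releases take host, port and a ``Materializer`` explicitly and use
``Tuple2``, as noted at the end of this section); the target host ``akka.io`` and the correlation value ``42`` are
arbitrary placeholders::

  import akka.actor.ActorSystem;
  import akka.http.javadsl.Http;
  import akka.http.javadsl.HostConnectionPool;
  import akka.http.javadsl.model.HttpRequest;
  import akka.http.javadsl.model.HttpResponse;
  import akka.japi.Pair;
  import akka.stream.javadsl.Flow;
  import akka.stream.javadsl.Sink;
  import akka.stream.javadsl.Source;
  import scala.util.Try;

  import java.util.concurrent.CompletionStage;

  final ActorSystem system = ActorSystem.create("pool-example");

  // Obtain the (cached) pool client flow for the target endpoint.
  // The pool is started right away, but no connection is opened until
  // the first request flows through a materialization of this flow.
  final Flow<Pair<HttpRequest, Integer>, Pair<Try<HttpResponse>, Integer>, HostConnectionPool> poolClientFlow =
    Http.get(system).cachedHostConnectionPool("akka.io");

  // Materialize the client flow by running a single request, paired with
  // an application-level correlation value, through it.
  final CompletionStage<Pair<Try<HttpResponse>, Integer>> responseFuture =
    Source.single(Pair.create(HttpRequest.create("/"), 42))
      .via(poolClientFlow)
      .runWith(Sink.head(), system);

Every run of such a stream is a fresh materialization of the client flow, but all materializations with the same
endpoint and settings are served by the same cached pool.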
Configuring a Host Connection Pool
----------------------------------
Apart from the connection-level config settings and socket options there are a number of settings that allow you to
influence the behavior of the connection pool logic itself.
Check out the ``akka.http.host-connection-pool`` section of the Akka HTTP :ref:`akka-http-configuration-java` for
more information about which settings are available and what they mean.
Note that, if you request pools with different configurations for the same target host, you will get *independent* pools.
This means that, in total, your application might open more concurrent HTTP connections to the target endpoint than any
of the individual pools' ``max-connections`` settings allow!
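For example, a sketch of an ``application.conf`` fragment that overrides a few of these pool settings could look as
follows (the concrete values are arbitrary illustrations, not recommendations)::

  akka.http.host-connection-pool {
    # maximum number of parallel connections a single pool opens to the target endpoint
    max-connections = 8

    # hard limit on the number of requests that may be open for this pool at any time
    max-open-requests = 32

    # requests that may be sent on one connection before a response arrives;
    # 1 effectively disables pipelining
    pipelining-limit = 1
  }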
There is one setting that likely deserves a somewhat deeper explanation: ``max-open-requests``.
This setting limits the maximum number of requests that can be in-flight at any time for a single connection pool.
If an application calls ``Http.get(system).cachedHostConnectionPool(...)`` 3 times (with the same endpoint and settings) it will get
back ``3`` different client flow instances for the same pool. If each of these client flows is then materialized ``4`` times
(concurrently) the application will have 12 concurrently running client flow materializations.
All of these share the resources of the single pool.
This means that, if the pool's ``pipelining-limit`` is left at ``1`` (effectively disabling pipelining), no more than
12 requests can be open at any time.
With a ``pipelining-limit`` of ``8`` and 12 concurrent client flow materializations the theoretical open requests
maximum is ``96``.
The ``max-open-requests`` config setting allows for applying a hard limit which serves mainly as a protection against
erroneous connection pool use, e.g. because the application is materializing too many client flows that all compete for
the same pooled connections.
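To make the sharing concrete, here is a hedged sketch (reusing the imports and the ``system`` from the example above,
with made-up request paths): flow instances obtained for the same endpoint and settings, however often they are
materialized, all drive one underlying pool, and ``max-open-requests`` caps their combined in-flight requests::

  // Two calls with the same endpoint and settings return distinct flow
  // instances that are nevertheless backed by one and the same cached pool.
  final Flow<Pair<HttpRequest, Integer>, Pair<Try<HttpResponse>, Integer>, HostConnectionPool> flowA =
    Http.get(system).cachedHostConnectionPool("akka.io");
  final Flow<Pair<HttpRequest, Integer>, Pair<Try<HttpResponse>, Integer>, HostConnectionPool> flowB =
    Http.get(system).cachedHostConnectionPool("akka.io");

  // Each materialization below competes for the same pooled connections;
  // max-open-requests limits how many requests all of them together may
  // have open at any point in time.
  Source.single(Pair.create(HttpRequest.create("/a"), 1)).via(flowA).runWith(Sink.head(), system);
  Source.single(Pair.create(HttpRequest.create("/b"), 2)).via(flowB).runWith(Sink.head(), system);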
.. _using-a-host-connection-pool-java:
Using a Host Connection Pool
----------------------------
The "pool client flow" returned by ``Http.get(system).cachedHostConnectionPool(...)`` has the following type::
// TODO Tuple2 will be changed to be `akka.japi.Pair`