* fix memory leak in SystemMessageDelivery
* initial set of tests for idle outbound associations, credit to mboogerd
* close inbound compression when quarantined, #23967
* make sure compression tables for quarantined associations are removed in case they are lingering around
* this also means that advertisements will not be sent for quarantined associations
* remove tombstone in InboundCompressions
* simplify async callbacks by using invokeWithFeedback
* compression for old incarnation, #24400
* it was fixed by the previous changes
* also confirmed by running the SimpleClusterApp with TCP
as described in the ticket
* test with tcp and tls-tcp transport
* handle the stop signals differently for tcp transport because they
are converted to StreamTcpException
* cancel timers on shutdown
* share the top-level FR for all Association instances
* use linked queue for control and large streams, less memory usage
* remove quarantined idle Association completely after a configured delay
* note that shallow Association instances may still be lingering in the
heap because of cached references from RemoteActorRef, which may
be cached by LruBoundedCache (used when resolving actor refs).
Those are small, since the queues have been removed, and the cache
is bounded.
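The removal delay is configurable; a minimal HOCON snippet, assuming the setting lives in Artery's advanced section (treat the exact key name as an assumption):

```
akka.remote.artery.advanced {
  # delay before a quarantined idle association is removed completely
  # (key name assumed from the behavior described above)
  remove-quarantined-association-after = 1 h
}
```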
* Refactoring to separate the Aeron-specific things, ArteryAeronUdpTransport
* move Aeron-specific classes to the akka.remote.artery.aeron package
* move Version to ArterySettings, and describe strategy for envelope header changes
* Pass HandshakeReq in all inbound lanes, #23527
The HandshakeReq message must be passed in each inbound lane to
ensure that it arrives before any application message. Otherwise there is a risk
that an application message arrives in the InboundHandshake stage before the
handshake is completed, and would then be dropped.
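A minimal sketch of that per-lane behavior, with simplified stand-ins for the real InboundHandshake types:

```scala
// Sketch only: HandshakeReq/AppMessage are simplified stand-ins for the Artery messages.
sealed trait InboundMsg
final case class HandshakeReq(fromUid: Long) extends InboundMsg
final case class AppMessage(fromUid: Long, payload: AnyRef) extends InboundMsg

final class LaneHandshakeState {
  private var completedUids = Set.empty[Long]

  // Returns the message to pass downstream, or None when it is consumed or dropped.
  def onMessage(msg: InboundMsg): Option[InboundMsg] = msg match {
    case HandshakeReq(uid) =>
      completedUids += uid // handshake completed for this origin, in this lane
      None                 // consumed by the handshake stage
    case m @ AppMessage(uid, _) if completedUids(uid) =>
      Some(m)              // known origin, pass downstream
    case _ =>
      None                 // unknown origin would be dropped; passing HandshakeReq
                           // in every lane ensures it arrives first
  }
}
```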
* mima
* WIP early preview of moving compression ownership to Decoder
* Compression table created in transport, but owned by Decoder
Added test for restart of inbound stream
* =art snapshot not needed in HeavyHitters since owned by Decoder
* +str #18793 StageLogging that allows logger access in stages
Also, non-ActorMaterializers can opt into providing a logger here.
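A small example of the resulting API: an illustrative pass-through stage that logs each element via the `log` adapter provided by StageLogging:

```scala
import akka.stream.{ Attributes, FlowShape, Inlet, Outlet }
import akka.stream.stage.{ GraphStage, GraphStageLogic, InHandler, OutHandler, StageLogging }

final class LoggingIdentity[T] extends GraphStage[FlowShape[T, T]] {
  val in = Inlet[T]("LoggingIdentity.in")
  val out = Outlet[T]("LoggingIdentity.out")
  override val shape = FlowShape(in, out)

  override def createLogic(inheritedAttributes: Attributes): GraphStageLogic =
    new GraphStageLogic(shape) with StageLogging with InHandler with OutHandler {
      override def onPush(): Unit = {
        val elem = grab(in)
        log.debug("element: {}", elem) // `log` comes from StageLogging
        push(out, elem)
      }
      override def onPull(): Unit = pull(in)
      setHandlers(in, out, this)
    }
}
```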
* +str #18794 add javadsl for StageLogging
* fix missing test method on compile-only class
* they can't be stopped immediately because we want to send
some final message and we reply to inbound messages with `Quarantined`
* and improve logging
* comprehensive integration test that revealed many bugs
* confirmations of manifests were wrong in two places
* wrong tables were used when the system is restarted; fixed by including
originUid in the tables, with checks when receiving advertisements
* close (stop scheduling) of advertisements on new incarnation,
quarantine, or restart
* cleaned up how the deadLetters ref was treated, and made it more robust
* make Decoder tolerant to decompression failures, which can happen in
case of a system restart before the handshake completed (see the sketch after this list)
* give up resending an advertisement after a few attempts without confirmation,
to avoid keeping the outbound association open to a possibly dead system
* don't advertise a new table when there are no inbound messages,
to avoid keeping the outbound association open to a possibly dead system
* HeaderBuilder could use the manifest field from a previous message; added
resetMessageFields
* No compression for ArteryMessage, e.g. handshake messages, which must go
through without depending on compression tables being in sync
* improve debug logging, including originUid
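The Decoder tolerance referenced above, as a minimal sketch; `lookup` is a hypothetical stand-in for the real compression-table access:

```scala
import scala.util.control.NonFatal

object TolerantDecode {
  // A failed table lookup (e.g. after a restart, before the handshake completed)
  // drops the envelope instead of failing the whole inbound stream.
  def tolerantDecompress[T](lookup: Int => T, idx: Int): Option[T] =
    try Some(lookup(idx))
    catch {
      case NonFatal(_) => None // drop this envelope, keep the stream alive
    }
}
```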
* If the retried resolve isn't successful the ref is banned and
we will not do the delayed retry resolve again. The reason is that
if many messages are sent to such dead refs, the resolve process would slow
down other messages.
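A sketch of that ban policy under assumed names (`resolve`, the retry delay, and the ban set are illustrative, not the real internals):

```scala
import java.util.concurrent.ConcurrentHashMap
import scala.concurrent.ExecutionContext
import scala.concurrent.duration._
import akka.actor.{ ActorRef, Scheduler }

final class RetryResolveSketch(resolve: String => Option[ActorRef], scheduler: Scheduler)(
    implicit ec: ExecutionContext) {
  private val banned = ConcurrentHashMap.newKeySet[String]()

  def resolveWithRetry(path: String): Unit =
    if (!banned.contains(path)) // banned refs get no delayed retry, so dead refs
      resolve(path) match {     // cannot slow down resolution of other messages
        case Some(_) => () // resolved; deliver as usual
        case None =>
          scheduler.scheduleOnce(100.millis) {
            if (resolve(path).isEmpty) banned.add(path) // retry failed: ban the ref
          }
      }
}
```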
* for parallel serialization/deserialization
* MergeHub for the outbound lanes
* BroadcastHub + filter for the inbound lanes, until we
have a PartitionHub (see the sketch below)
* simplify materialization of test stage
* add RemoteSendConsistencyWithThreeLanesSpec
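The lane wiring sketched below uses placeholder envelope types; everything except the MergeHub/BroadcastHub API is illustrative:

```scala
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.{ BroadcastHub, MergeHub, Sink, Source }

object LanesSketch extends App {
  implicit val system: ActorSystem = ActorSystem("lanes")
  implicit val mat: ActorMaterializer = ActorMaterializer()

  final case class Envelope(recipient: String, payload: String)
  val lanes = 3
  def laneOf(e: Envelope): Int = (e.recipient.hashCode & Int.MaxValue) % lanes

  // MergeHub: several serialization lanes feed a single outbound sink.
  val outboundSink: Sink[Envelope, _] =
    MergeHub.source[Envelope](perProducerBufferSize = 16)
      .to(Sink.foreach(e => println(s"send $e")))
      .run()
  Source.single(Envelope("/user/a", "hello")).runWith(outboundSink)

  // BroadcastHub + filter: every inbound lane sees the full stream and keeps
  // only its own share, until a PartitionHub can route each element once.
  val inbound: Source[Envelope, _] =
    Source(List(Envelope("/user/a", "m1"), Envelope("/user/b", "m2")))
      .runWith(BroadcastHub.sink(bufferSize = 16))

  (0 until lanes).foreach { lane =>
    inbound.filter(e => laneOf(e) == lane).runWith(Sink.foreach(e => println(s"lane $lane: $e")))
  }
}
```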
* outbound compression is now immutable, by simply using
CompressionTable[ActorRef] and CompressionTable[String]
* immutable outbound compression will make it possible to use
the tables from multiple Encoder instances when we add several lanes
for parallel serialization (see the sketch below)
* outbound compression tables are no longer shared via AssociationState
* the advertised tables are sent to the Encoder stage via async
callback, no need to reference the tables in other places than
the Encoder stage, no more races via shared mutable state
* when outbound stream is started or restarted it can start out
without compression, until next advertisement is received
* ensure outbound compression is cleared before handshake is signaled complete
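As referenced above, a minimal sketch of the immutable-table idea (the real class is CompressionTable in akka.remote.artery.compress; the fields here are illustrative):

```scala
// An immutable dictionary is safe to hand to several Encoder lanes and to swap
// for a new version via an async callback, with no shared mutable state.
final case class CompressionTableSketch[T](version: Byte, dictionary: Map[T, Int]) {
  def compress(value: T): Int = dictionary.getOrElse(value, -1) // -1 = send uncompressed
}
```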
* when rate exceeds 1000 msg/s adaptive sampling of the
heavy hitters tracking is enabled by sampling every 256th message
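A sketch of that adaptive sampling; rate detection is simplified to a flag, the every-256th-message mask is the point:

```scala
final class HeavyHitterSampler {
  private var count = 0L
  private var highThroughput = false // set when the observed rate exceeds 1000 msg/s

  def setHighThroughput(on: Boolean): Unit = highThroughput = on

  // True when this message should be fed into the heavy-hitters tracking.
  def shouldSample(): Boolean = {
    count += 1
    !highThroughput || (count & 255) == 0 // every 256th message when sampling
  }
}
```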
* also fixed some bugs related to advertise in progress
* update InboundCompression state atomically
* enable compression in LatencySpec
* =art now correctly compresses, and the 2-table mode is working
* =art AGGRESSIVELY optimising hashing, not convinced about correctness yet
* fix HandshakeShouldDropCompressionTableSpec
There were two related problems with remote deployment when
using Artery.
* DaemonMsgCreate is not a SystemMessage, but must be sent over the control stream because
the remote deployment process depends on the ordering of DaemonMsgCreate and Watch messages.
It must also be sent over the ordinary message stream so that it arrives (and creates the
destination) before the first ordinary message arrives.
* The first point solves the creation of the remotely deployed actor, but it's not enough.
Resolving the recipient actor ref may still happen before the actor is created. This
is solved by retrying the resolve for the first message of a remotely deployed actor.
* reduce allocations with specialized ImmutableLongMap
* backed by arrays, allocation free lookups with binary search
* use it for UID -> Association Map
* pass the Association in the InboundEnvelope to reduce it to only
one lookup per incoming message
* use ImmutableLongMap instead of the QuarantinedUIDSet
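A condensed sketch of the ImmutableLongMap idea; the real implementation differs in detail, but the sorted parallel arrays and binary search are the essence:

```scala
import java.util.Arrays

final class ImmutableLongMapSketch[V <: AnyRef] private (keys: Array[Long], values: Array[AnyRef]) {

  /** Allocation-free lookup via binary search over the sorted key array;
    * returns null when absent (the real one uses OptionVal to stay unboxed). */
  def getOrNull(key: Long): V = {
    val i = Arrays.binarySearch(keys, key)
    if (i >= 0) values(i).asInstanceOf[V] else null.asInstanceOf[V]
  }

  /** Returns a new map with the entry added; the old instance is untouched. */
  def updated(key: Long, value: V): ImmutableLongMapSketch[V] = {
    val i = Arrays.binarySearch(keys, key)
    if (i >= 0) {
      val newValues = values.clone()
      newValues(i) = value
      new ImmutableLongMapSketch(keys, newValues)
    } else {
      val at = -(i + 1) // insertion point keeps the keys sorted
      val newKeys = new Array[Long](keys.length + 1)
      val newValues = new Array[AnyRef](values.length + 1)
      System.arraycopy(keys, 0, newKeys, 0, at)
      System.arraycopy(values, 0, newValues, 0, at)
      newKeys(at) = key
      newValues(at) = value
      System.arraycopy(keys, at, newKeys, at + 1, keys.length - at)
      System.arraycopy(values, at, newValues, at + 1, values.length - at)
      new ImmutableLongMapSketch(newKeys, newValues)
    }
  }
}

object ImmutableLongMapSketch {
  def empty[V <: AnyRef]: ImmutableLongMapSketch[V] =
    new ImmutableLongMapSketch(new Array[Long](0), new Array[AnyRef](0))
}
```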
* we don't want to include the full origin address in each message,
only the UID
* that means that the restarted receiving system can't initiate a
new handshake immediately when it sees a message from an unknown origin
* instead we inject HandshakeReq from the sending system once in a while
(1 per second) which will trigger the new handshake
* any messages that arrive before the HandshakeReq are dropped, but
that is fine since the system was just restarted anyway
* note that the injected handshake is only done for active connections,
when a message is sent
* also changed the UID to a Long, but there are more places in the old remoting
that must be changed before we can actually use a Long value
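A sketch of the injection bookkeeping on the sending side; the interval and names are illustrative:

```scala
import scala.concurrent.duration._

final class HandshakeInjector(interval: FiniteDuration = 1.second) {
  private var lastInjectedNanos = System.nanoTime() - interval.toNanos // inject on first send

  // Called for each outbound message on an active connection; when true, a
  // HandshakeReq is sent first so a restarted receiver can start a new handshake.
  def shouldInjectHandshakeReq(nowNanos: Long): Boolean =
    if (nowNanos - lastInjectedNanos >= interval.toNanos) {
      lastInjectedNanos = nowNanos
      true
    } else false
}
```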
* fix lost first message, #20566
* the first message was sometimes dropped by the InboundHandshake stage
because it came from unknown origin, i.e. the handshake had not completed
* that happened because the ordinary message arrived before the
first HandshakeReq, which may happen since we sent the HandshakeReq
over the control stream
* this changes it so that HandshakeReq is sent over the same stream, not
only over the control stream, and thereby the HandshakeReq will arrive
before any other message
* always send HandshakeReq as first message
* also when the handshake on sender side has been completed at startup
* moved code from preStart to onPull
* because FQCN can become a problem for rolling upgrade scenarios
where you want to rename serializer classes
* also renamed classManifest to manifest because it doesn't have
to be a class name
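An example of the resulting shape, using Akka's SerializerWithStringManifest; the Ping type, identifier, and manifest string are made up for illustration:

```scala
import akka.serialization.SerializerWithStringManifest

final case class Ping(seq: Long)

class PingSerializer extends SerializerWithStringManifest {
  override def identifier: Int = 9001 // stable numeric id, independent of the FQCN
  private val PingManifest = "ping"   // the manifest doesn't have to be a class name

  override def manifest(o: AnyRef): String = o match {
    case _: Ping => PingManifest
    case _       => throw new IllegalArgumentException(s"unexpected ${o.getClass}")
  }

  override def toBinary(o: AnyRef): Array[Byte] = o match {
    case Ping(seq) => BigInt(seq).toByteArray
    case _         => throw new IllegalArgumentException(s"unexpected ${o.getClass}")
  }

  override def fromBinary(bytes: Array[Byte], manifest: String): AnyRef = manifest match {
    case PingManifest => Ping(BigInt(bytes).toLong)
    case _            => throw new IllegalArgumentException(s"unexpected manifest $manifest")
  }
}
```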
* CodecBenchmark that tests encode, decode and combined
encode + decode
* refactoring of codec stages to make it possible to
run them without real ArteryTransport
* also fixed a bug in the inbound stream for large messages;
it was using the wrong envelope pool
* First stab at separate large message channel for Artery
* Full actor paths, no implicit "/user/" part
* Various small fixes after review
* Fixes to make it work after rebasing
* Use a separate EnvelopeBufferPool for the large message stream
* Docs for actorSelection not sending through large message stream
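Large-message destinations are selected via configuration; a minimal example (the actor path is illustrative):

```
akka.remote.artery {
  # messages to/from these paths go over the dedicated large-message stream
  large-message-destinations = [
    "/user/largeMessageEcho"
  ]
}
```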
* caching of actor refs in Encoder, Decoder
* dynamicAccess.getClassFor in Serialization is costly,
so a cache for the class manifests was introduced there
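A sketch of that memoization, with `loadClass` as a hypothetical stand-in for dynamicAccess.getClassFor:

```scala
import java.util.concurrent.ConcurrentHashMap
import scala.util.{ Success, Try }

final class ManifestClassCache(loadClass: String => Try[Class[_]]) {
  private val cache = new ConcurrentHashMap[String, Class[_]]()

  def classFor(manifest: String): Try[Class[_]] =
    cache.get(manifest) match {
      case null =>
        val result = loadClass(manifest)
        result.foreach(cache.putIfAbsent(manifest, _)) // cache only successful lookups
        result
      case c => Success(c)
    }
}
```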