silly serialization mistake, should have fixed serialize as well
stage actors can now have names, which helps a lot in debugging
thread weirdness
make sure to fail properly, actually go over remoting
issue with not receiving the SinkRef... what
initial working SinkRef over remoting
remote Sink failure must fail origin Source as well
cleaning up and adding failure handling
SinkRef now with low-watermark RequestStrategy
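The low/high-watermark idea behind the request strategy can be sketched standalone: only ask upstream for more elements once the buffered demand falls below the low watermark, then top it back up to the high watermark. This is an illustrative toy (the class and method names here are assumptions, not Akka's actual API):

```scala
// Minimal sketch of a low/high-watermark request strategy.
// `remainingRequested` is how many elements are still expected but
// not yet processed; demand is only re-issued once that drops below
// the low watermark, which batches requests instead of asking one
// element at a time. (Standalone illustration, not Akka's API.)
final case class WatermarkStrategy(highWatermark: Int, lowWatermark: Int) {
  def requestDemand(remainingRequested: Int): Int =
    if (remainingRequested < lowWatermark) highWatermark - remainingRequested
    else 0
}

val strategy = WatermarkStrategy(highWatermark = 16, lowWatermark = 4)
println(strategy.requestDemand(10)) // 0: still above the low watermark
println(strategy.requestDemand(2))  // 14: refill demand up to 16
```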
SourceRef works, though the code is still completely duplicated
* Utility for stashing, #22275
* The main reason for providing these utilities and promoting
a standardized way of doing the buffering is that monitoring
instrumentation can be added to these classes, which is not
possible if we just say "buffer in some collection".
* unstash a few at a time, which became rather complicated
* separate api and impl, and more tests
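The buffering utility above can be sketched as a small standalone class. This is a simplified toy, not the real typed `StashBuffer` API; the names and shape here are assumptions, but it shows why a dedicated class beats "buffer in some collection": instrumentation hooks have a natural home, and "unstash a few at a time" becomes an explicit operation.

```scala
import scala.collection.immutable.Queue

// Minimal sketch of a bounded stash buffer (illustrative only,
// not the real akka.actor.typed StashBuffer API).
final class StashSketch[T](capacity: Int) {
  private var queue = Queue.empty[T]

  def stash(msg: T): Unit = {
    if (queue.size >= capacity)
      throw new IllegalStateException(s"Stash capacity [$capacity] exceeded")
    queue = queue.enqueue(msg)
    // a monitoring/instrumentation hook could record the stash here
  }

  def size: Int = queue.size

  // "unstash a few at a time": hand at most `n` messages to `process`
  def unstash(n: Int)(process: T => Unit): Unit = {
    var remaining = n
    while (remaining > 0 && queue.nonEmpty) {
      val (msg, rest) = queue.dequeue
      queue = rest
      process(msg)
      remaining -= 1
    }
  }
}

val buf = new StashSketch[Int](capacity = 10)
(1 to 5).foreach(buf.stash)
var seen = List.empty[Int]
buf.unstash(3)(m => seen :+= m)
println(seen)     // List(1, 2, 3)
println(buf.size) // 2
```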
Untyped actor systems currently only support specifying dispatchers via a
name reference to the config. The other selectors can be revived when other
ways of configuring dispatchers are available for untyped actor systems
(see #17568).
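A dispatcher specified via a name reference to the config looks roughly like this; the dispatcher name and the tuning values below are made-up examples, not defaults:

```hocon
my-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 2
    parallelism-max = 8
  }
  throughput = 5
}
```

An actor is then pointed at it by name, e.g. `props.withDispatcher("my-dispatcher")` in the untyped API.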
* The technical reason for not naming it Behavior is that
it would cause import conflicts between
akka.actor.typed.Behavior and akka.actor.typed.scaladsl.Behavior
* Plural naming is pretty common for factories like this,
e.g. java.util.Collections
* Allow tagging in persistence typed (#23817)
* Use Set[String] for tags
* Documentation for persistence typed tagging
* Rename tagging parameter to tagger
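The tagger shape these commits settle on is a function from event to `Set[String]`. A minimal sketch (the event types below are made-up examples, not part of any persistence API):

```scala
// Illustrative event hierarchy (invented for this sketch)
sealed trait OrderEvent
final case class OrderPlaced(id: String) extends OrderEvent
final case class OrderCancelled(id: String) extends OrderEvent

// A "tagger": Event => Set[String]. Using a Set rather than a single
// string lets one event carry several tags, and an empty set means
// the event is not tagged at all.
val tagger: OrderEvent => Set[String] = {
  case _: OrderPlaced    => Set("order", "placed")
  case _: OrderCancelled => Set("order")
}

println(tagger(OrderPlaced("o-1")))    // Set(order, placed)
println(tagger(OrderCancelled("o-2"))) // Set(order)
```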
* More generous timeout in DaemonicSpec
Also:
* allow it to complete faster if possible
* avoid potential race with port number
* always shut the extra actor system down
* less dilation
This test was using the tcp ports assigned by the multi-jvm infra (classic remoting)
for the Aeron udp transport. Even though tcp and udp ports are independent and the same
port number can be used at the same time for tcp and udp, there is no guarantee that the
udp port is free just because the tcp port was.
* When leaving/downing the last node in a DC it would not
be removed in other DCs, since that was only done by the
leader in the owning DC (and that leader is gone).
* It should be ok to eagerly remove such nodes also by
leaders in other DCs.
* Note that gossip has already been sent out, so for the last node
the removal will be spread to other DCs, unless there is a network
partition. For that case we can't do anything. It will be replaced
if joining again.
* There might be one case where the singleton coordinator
hand-over might start before the graceful stop of the
region is completed on the other node.
* I think this is rare enough to just accept that a message
might be sent to the wrong location (we don't guarantee anything
more than best effort anyway).
* A safe rolling upgrade should keep the coordinator (oldest)
until last to avoid such races.
* Use expectNoMessage in typed testkit
Interestingly, most calls to expectNoMsg did not appear to expect the parameter
to be dilated.
It might be nice to allow configuring a variant that uses a (configurable) default
delay (other than the single-expect-default).
* Add a probe.expectNoMessage with no parameters
If you want to 'just wait a while' and not care too much about exactly
how long, the default timeout will be dilated.
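Dilation itself is just scaling a duration by a test-time factor so the same test tolerates slower CI machines. A minimal sketch; in the real testkit the factor comes from configuration rather than a parameter:

```scala
import scala.concurrent.duration._

// Sketch of timeout "dilation": scale a test timeout by a factor.
// Here the factor is an explicit parameter; the real testkit reads
// it from configuration instead.
def dilated(d: FiniteDuration, timeFactor: Double): FiniteDuration =
  Duration.fromNanos((d.toNanos * timeFactor).toLong)

println(dilated(3.seconds, 2.0)) // 6 seconds
```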