Note: this is NOT aimed at providing a micro-benchmarking solution.
The goal is to provide data for broad trend analysis. For benchmarks
that have to fight the inliner and need other specialised techniques,
refer to JMH.
+ custom console and Graphite reporters
- had to be custom because it is not possible to add custom metric
types to the existing reporters
+ initial HdrHistogram histogram() provider, see
http://latencyutils.github.io/LatencyUtils/
+ Not using the timers provided by Metrics; instead we use the above
histogram
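A minimal sketch of what such a provider boils down to, using HdrHistogram's `Histogram` directly (illustrative, not the actual benchmark code):
```
import org.HdrHistogram.Histogram

// Illustrative only: an HDR histogram tracking latencies up to 1 hour
// (in nanoseconds) with 3 significant decimal digits of precision.
val histogram = new Histogram(3600L * 1000 * 1000 * 1000, 3)

def record(latencyNanos: Long): Unit =
  histogram.recordValue(latencyNanos)

// Percentiles are read out at report time:
def p99Nanos: Long = histogram.getValueAtPercentile(99.0)
```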
+ Added average Actor size measurement
+ Measuring the "blocking time" when an actor is created, before we fire
off the async part of this process; measured in a loop, so it will
fluctuate a lot. Times are in `us` -- System.nanoTime should provide
good enough resolution.
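Roughly, the measurement loop looks like this sketch (the `EmptyActor` and the loop shape are illustrative, not the actual benchmark code):
```
import akka.actor.{ Actor, ActorSystem, Props }

class EmptyActor extends Actor { def receive = Actor.emptyBehavior }

object BlockingTimeBench extends App {
  val system = ActorSystem("bench")
  val props  = Props[EmptyActor]

  var i = 0
  while (i < 100000) {
    val start = System.nanoTime()
    system.actorOf(props)  // only the synchronous part of creation is timed
    val blockedMicros = (System.nanoTime() - start) / 1000
    // feed blockedMicros into the histogram from the sketch above
    i += 1
  }
  system.shutdown()
}
```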
+ Measuring total actor creation time using
`KnownOpsInTimespanTimer`, which, given a known number of ops
performed over a (large) timespan, roughly estimates the time per
operation.
// Yes, we are aware of the possibility of GC pauses and other horrors
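The arithmetic behind such a timer is simple; a sketch (the actual `KnownOpsInTimespanTimer` internals may differ):
```
// Given a known number of ops and the measured wall-clock timespan,
// derive the throughput and the average time per operation:
val ops          = 100000L
val startNanos   = System.nanoTime()
// ... create the 100000 actors ...
val elapsedNanos = System.nanoTime() - startNanos

val opsPerSecond   = ops.toDouble / (elapsedNanos / 1e9)
val avgMicrosPerOp = elapsedNanos / 1000.0 / ops
```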
+ All classes are `private[akka]`; we should not encourage people to use
this yet
+ Counters use Java 8's `LongAdder`, of which Metrics keeps a private
copy; the new trend in Java land will be copy-pasting this class ;)
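For reference, using the plain JDK 8 class is straightforward (sketch):
```
import java.util.concurrent.atomic.LongAdder

// Contention-friendly counter: cheap increments from many threads,
// with the (momentary) total summed up only when reporting.
val created = new LongAdder
created.increment()              // on each event, from any thread
val total: Long = created.sum()  // when the reporter fires
```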
+ Metrics are logged to Graphite, so we can analyse them long-term
+ Reporters are configurable using typesafe-config
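The configuration could be shaped like the following sketch; the key names here are purely hypothetical, not the actual ones:
```
import com.typesafe.config.ConfigFactory

// Hypothetical keys, for illustration only:
val config = ConfigFactory.parseString("""
  benchmark.reporters     = [console, graphite]
  benchmark.graphite.host = "graphite.example.com"
  benchmark.graphite.port = 2003
""")

val reporters = config.getStringList("benchmark.reporters")
val host      = config.getString("benchmark.graphite.host")
val port      = config.getInt("benchmark.graphite.port")
```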
! I'm not very happy about how I work around Metrics not being very open
to adding additional custom metrics. It seems like a hack in places.
I will consider removing the Metrics dependency altogether.
Example output:
```
-- KnownOpsInTimespanTimer-------------------------------------------
actor-creation.total.creating-100000-actors.Props|new-EmptyArgsActor|…||-same
ops = 100000
time = 1.969 s
ops/s = 50782.22
avg = 19.69 μs
-- AveragingGauge---------------------------------------------------
actor-creation.Props|new-EmptyArgsActor|…||-same.avg-mem-per-actor
avg = 439.67
```
* The reason for the problem with NoSuchElementException in ClusterSharding was
that actor references were not serialized with full address information. In
certain failover scenarios the references could not be resolved, and therefore
the ShardRegionTerminated did not match the corresponding ShardRegionRegistered.
* Wrap serialization with transport information from defaultAddress
(cherry picked from commit 3e73ae5925cf1293a9a5d61e48919b1708e84df2)
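A minimal sketch of the idea, using Akka's `Serialization.serializedActorPath`, which renders an actor path including the system's default (remote-ready) address; the surrounding helper names are illustrative:
```
import akka.actor.{ ActorRef, ExtendedActorSystem }
import akka.serialization.Serialization

// Serialize the ref with full address information instead of the bare
// local path, so it remains resolvable after a failover:
def serializeRef(ref: ActorRef): String =
  Serialization.serializedActorPath(ref)

// Resolving it back on any node:
def resolveRef(system: ExtendedActorSystem, path: String): ActorRef =
  system.provider.resolveActorRef(path)
```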
* Problem when using PersistentChannel from Processor
* When the seq numbers of the sending processor and the seq numbers
of the PersistentChannel were out of sync, the PersistentChannel
did not de-duplicate confirmed deliveries that were resent by
the processor (see the sketch after this list).
* There is a hand-off in the RequestWriter that confirms the
Processor seq number, and therefore the seq number of the
RequestWriter must be used in the ConfirmablePersistent from
the RequestReader
* More tests, covering this scenario
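Conceptually, the de-duplication boils down to a highest-confirmed-sequence-number check (illustrative sketch, not the actual channel internals):
```
// A delivery is processed only if its sequence number is above the
// highest one already confirmed; resent duplicates are dropped.
var highestConfirmedSeqNr = 0L

def deliver(seqNr: Long, msg: Any): Unit =
  if (seqNr > highestConfirmedSeqNr) {
    // ... deliver msg to the destination ...
    highestConfirmedSeqNr = seqNr
  } // else: a duplicate resent by the processor, drop it
```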
* Add a supervisor that will start the ShardCoordinator again after
a configurable backoff duration (sketched below)
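The pattern is roughly the following (a sketch, not the actual ClusterSharding code): the supervisor watches the coordinator and re-creates it after a scheduled delay once it terminates.
```
import scala.concurrent.duration._
import akka.actor.{ Actor, Props, Terminated }

class CoordinatorSupervisor(childProps: Props, backoff: FiniteDuration)
  extends Actor {
  import context.dispatcher
  private case object StartChild

  override def preStart(): Unit = startChild()

  private def startChild(): Unit =
    context.watch(context.actorOf(childProps))

  def receive = {
    case Terminated(_) =>
      // child crashed or stopped: schedule a restart after the backoff
      context.system.scheduler.scheduleOnce(backoff, self, StartChild)
    case StartChild =>
      startChild()
  }
}
```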
* Make the timeout of SharedLeveldbJournal configurable
* Include cause of PersistenceFailure in message of ActorKilledException
* Added a setter for Java lambda actors to "hide" the not-so-nice-looking type signature of the "receive" method.
* Updated docs to reflect the changes.
* Converted samples to use the new setter.
- Provided new interfaces for akka-persistence to be usable directly
through ReceiveBuilder/PartialFunction. Added a sample Java project to
showcase the usage of these APIs with akka-persistence.
- Fixed a minor comment block in a javadoc code snippet.
- Renamed the Java event persistor and fixed a documentation typo.
- Put back the Java event persistence methods in
UntypedEventsourcedProcessor and copied them into
AbstractEventsourcedProcessor for the sake of clarity in javadocs.
Also corrected some doc punctuation.
- Documentation for akka-persistence Java 8 lambda expression support.
- Moved code examples referred to from within lambda-persistence.rst to
the Java 8 compatible sample project.
- Removed remaining unwanted Java 8 compatible source files.
- Built-in redelivery mechanism for Channel and PersistentChannel
- redelivery counter on ConfirmablePersistent
- redeliveries out of initial message delivery order
- relative order of redelivered messages is preserved
- configurable redelivery policy (ChannelSettings; see the sketch after
this list)
- Major refactorings of channels (and channel tests)
- Throughput load test for PersistentChannel
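Configuring the redelivery policy when creating a channel, following the 2.3-era API shape (a sketch; `ChannelOwner` and `destination` are illustrative):
```
import scala.concurrent.duration._
import akka.actor.{ Actor, ActorPath }
import akka.persistence.{ Channel, ChannelSettings, Deliver, Persistent }

class ChannelOwner(destination: ActorPath) extends Actor {
  // Retry unconfirmed deliveries every 30 seconds, at most 15 times:
  val channel = context.actorOf(Channel.props(
    ChannelSettings(redeliverInterval = 30.seconds, redeliverMax = 15)))

  def receive = {
    case p: Persistent => channel ! Deliver(p, destination)
  }
}
```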
Todo:
- Paged/throttled replay (another pull request)
- Resequencer (another pull request)
- internal batching of individually received Persistent messages
- testing fault tolerance of Processor in the presence of random
* journaling failures
* processing failures
- single and bulk deletion of messages
- single and bulk deletion of snapshots
- run journal and snapshot store as system actors
- rename `physical` parameter in delete methods to `permanent`
- StashSupport.prepend docs and implementation enhancements
- Persistent channel
- ConfirmablePersistent message type delivered by channel
- Sender resolution performance improvements
* unstash() instead of unstashAll()
These enhancements required the following changes:
- Unified implementation of processor stash and user stash
- Persistence message plugin API separated from implementation
- Physical deletion of messages