* Config of node roles, cluster.roles
* Cluster router configurable with use-role
* RoleLeaderChanged event
* Cluster singleton per role
* Cluster does not start until the required number of members per role
  has joined, configured with role.<role-name>.min-nr-of-members
  (see the config sketch after this group)
* Update documentation and make use of the roles in the examples
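
A minimal configuration sketch tying the role-related items above together,
assuming the reference-config paths cluster.roles,
cluster.role.<role-name>.min-nr-of-members and the router deployment key
cluster.use-role; the /statsService/workerRouter deployment path is only an
example:

    import akka.actor.ActorSystem
    import com.typesafe.config.ConfigFactory

    object RolesConfigSketch extends App {
      val config = ConfigFactory.parseString("""
        akka.actor.provider = "akka.cluster.ClusterActorRefProvider"
        # roles of this node
        akka.cluster.roles = ["backend"]
        # leader does not move joining members to Up until at least
        # this many nodes with role "backend" have joined
        akka.cluster.role.backend.min-nr-of-members = 2
        # cluster-aware router that only deploys routees on "backend" nodes
        akka.actor.deployment {
          /statsService/workerRouter {
            router = consistent-hashing
            nr-of-instances = 10
            cluster {
              enabled = on
              use-role = backend
              allow-local-routees = off
            }
          }
        }
        """).withFallback(ConfigFactory.load())

      val system = ActorSystem("ClusterSystem", config)
    }
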
* ClusterSingletonManager
* ClusterSingletonManagerSpec multi-node test
* Use in cluster router with single master sample (see the usage sketch
  after this group)
* Extensive logging to be able to understand what is
going on
* Java API
* Add cluster dependency to contrib
* Add contrib dependency to sample
* Scaladoc
* reST docs in the contrib area, referenced from the cluster docs
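
A rough sketch of starting the ClusterSingletonManager on every node of a
role, in the spirit of the single master router sample above; the props(...)
parameter names are an assumption about the contrib API of this era, and the
Master actor is purely hypothetical:

    import akka.actor.{ Actor, ActorSystem, PoisonPill, Props }
    import akka.contrib.pattern.ClusterSingletonManager

    // Hypothetical master actor for the "single master" sample.
    class Master extends Actor {
      def receive = { case job => /* distribute work to workers */ }
    }

    object SingletonSketch extends App {
      val system = ActorSystem("ClusterSystem")
      // One manager per node with the role; the oldest node in the role hosts
      // the actual singleton and hands it over when it leaves the cluster.
      // NOTE: check the ClusterSingletonManager scaladoc for the exact
      // factory signature; the parameters below are assumptions.
      system.actorOf(
        ClusterSingletonManager.props(
          singletonProps = Props[Master],
          singletonName = "master",
          terminationMessage = PoisonPill,
          role = Some("compute")),
        name = "singleton")
    }
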
* We had the same issue in ticket 2820; the conclusion then was that the
  within timeout should be increased due to "publish on convergence",
  but that was not applied correctly in TransformationSampleJapiSpec,
  see 8bf7c03158
* Also fixed the wrong (but harmless) subscribe in TransformationBackend
* case object and case class for MixMetricsSelector
* Rename decay-half-life-duration to moving-average-half-life
* Clarification of decay-half-life-duration and collect-interval
* Removed Fields, Java compatibility issue
* Adapt for-yield variables
* Comment metrics collector constructor that takes system param
* Don't copy EWMA if not needed
* LogOf2 constant (ln 2 ≈ 0.69315)
* Don't use mapValues
* Remove RichInt conversion
* Sigar version replace tag in the docs
* createDeployer factory method to make it possible to override the
  deployer in a subclass
* Improve readability of MetricsListener (in sample)
* Better startup of factorial sample (no sleep)
* Many minor enhancements and cleanups
* Refactoring of standard metrics extractors and data structures
* Removed optional value in Metric, simplified a lot
* Configuration of EWMA by using half-life duration (see the EWMA sketch
  after this group)
* Renamed DataStream to EWMA
* Incorporate review feedback
* Use binarySearch for selecting weighted routees (sketched after this group)
* More metrics selectors for the router
* Removed network metrics, since they are not supported on Linux
* Configuration of router
* Rename to AdaptiveLoadBalancingRouter
* Removed the total cores metric, since it's the same as JMX
  getAvailableProcessors; tested on an Intel 24 core server, an AMD 48 core
  server, and a MBP
* API cleanup
* Java API additions
* Documentation of metrics and AdaptiveLoadBalancingRouter
* New cluster sample (factorial) to illustrate metrics in the
  documentation and to play around with
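
To make the half-life configuration concrete, here is an illustrative EWMA
sketch (not the actual Metric/EWMA implementation) showing how a
moving-average-half-life together with the collect-interval can be turned
into the smoothing constant alpha, using the LogOf2 constant mentioned above;
the :+ operator and helper names are chosen only for the example:

    import scala.concurrent.duration._

    // After `halfLife` has passed, an old observation contributes only half
    // of its original weight to the average.
    final case class EWMA(value: Double, alpha: Double) {
      require(alpha >= 0.0 && alpha <= 1.0, "alpha must be within [0.0, 1.0]")
      // fold a new observation into the moving average
      def :+(xn: Double): EWMA = copy(value = alpha * xn + (1 - alpha) * value)
    }

    object EWMA {
      private val LogOf2 = 0.69315 // ln 2

      // alpha solving (1 - alpha) ^ (halfLife / collectInterval) = 0.5
      def alpha(halfLife: FiniteDuration, collectInterval: FiniteDuration): Double =
        1.0 - math.exp(-LogOf2 * collectInterval.toMillis.toDouble / halfLife.toMillis)
    }

    // usage: a 12 s half-life sampled every 3 s
    // val a   = EWMA.alpha(12.seconds, 3.seconds)
    // var avg = EWMA(value = firstSample, alpha = a)
    // avg = avg :+ nextSample
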
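
A sketch of the binary-search based selection of weighted routees: build the
cumulative weights once, draw a random number below the total, and locate the
owning routee with java.util.Arrays.binarySearch. Class and method names are
illustrative only:

    import java.util.Arrays
    import java.util.concurrent.ThreadLocalRandom

    // Routee i is chosen with probability weights(i) / totalWeight
    // (weights are assumed to be positive).
    final class WeightedSelector[A](items: IndexedSeq[A], weights: IndexedSeq[Int]) {
      require(items.nonEmpty && items.size == weights.size)
      // buckets(i) = cumulative weight of items 0..i
      private val buckets: Array[Int] = weights.scanLeft(0)(_ + _).tail.toArray
      private val total = buckets.last

      def next(): A = {
        val target = ThreadLocalRandom.current.nextInt(total) + 1
        val i = Arrays.binarySearch(buckets, target) match {
          case exact if exact >= 0 => exact       // hit a bucket boundary exactly
          case miss                => -miss - 1   // insertion point = first bucket > target
        }
        items(i)
      }
    }

    // usage: "b" is selected twice as often as "a"
    // val selector = new WeightedSelector(Vector("a", "b"), Vector(1, 2))
    // selector.next()
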
* Cluster sample
* The problem was that the job was sometimes sent to deadLetters
* The reason was that statsService was not yet created when a facade
  on another node forwarded the job to it, so the forward ended up in
  deadLetters
* Solved by using ask and recover with JobFailed in case of timeout
  in the facade, so that the facade always sends back JobResult or
  JobFailed (see the facade sketch below)
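
A sketch of that facade fix, assuming hypothetical StatsJob/JobResult/JobFailed
payloads and a facade that already holds a reference to the service; the point
is only the ask-with-recover shape, which guarantees a reply even when the
service does not exist yet:

    import akka.actor.{ Actor, ActorRef }
    import akka.pattern.{ ask, pipe }
    import akka.util.Timeout
    import scala.concurrent.duration._

    final case class StatsJob(text: String)
    final case class JobResult(meanWordLength: Double)
    final case class JobFailed(reason: String)

    class StatsFacade(service: ActorRef) extends Actor {
      import context.dispatcher
      implicit val timeout: Timeout = Timeout(5.seconds)

      def receive = {
        case job: StatsJob =>
          // If the service is missing, the ask times out instead of the job
          // silently ending up in deadLetters, and the client still gets a reply.
          val reply = (service ? job).recover {
            case _ => JobFailed("Service unavailable, try again later")
          }
          reply.pipeTo(sender)
      }
    }
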
- add more cross references to ActorDSL docs
- improve SBT command line for running the multi-node test
- correct small error in Restart Hooks section of actors.rst