Commit graph

271 commits

Author SHA1 Message Date
Patrik Nordwall
ffe2992917 =con #15788 Harden ClusterShardingSpec 2014-11-03 08:09:32 +01:00
Patrik Nordwall
0e3dfde838 =doc Clarify cluster sharding docs 2014-10-31 10:06:51 +01:00
Patrik Nordwall
6a370ead48 =con #15577 Harden ReliableProxyDocSpec
* I couldn't find anything wrong
* Increased the test timeout; it takes 1.5 s for the reconnects,
  so the previous total of 3 s might not have been enough
  (for that run)
2014-08-29 12:09:41 +02:00
Patrik Nordwall
dd71de5f93 Merge pull request #15734 from akka/wip-harden-ClusterShardingSpec-patriknw
=con Harden ClusterShardingSpec some more
2014-08-28 09:53:51 +02:00
Patrik Nordwall
e5cd47279d =con Harden ClusterShardingSpec some more
* Replace sleep with awaitAssert
* Use separate probes for awaitAssert checks to avoid spill-over
  to the testActor
* Some additional cleanup
* Deliver buffered messages when HostShard is received
  Test failures showed that initial messages could be re-ordered otherwise
2014-08-28 08:32:16 +02:00
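The "replace sleep with awaitAssert" hardening above can be sketched without any Akka dependency. This is a minimal illustration of the idea only — the names and signature are stand-ins, not Akka TestKit's actual API: instead of sleeping a fixed time and hoping a condition holds, poll the assertion until it passes or the deadline expires, rethrowing the last failure on timeout.

```scala
import java.time.Duration

// Illustrative sketch of the awaitAssert idea (not Akka TestKit's real code):
// retry the assertion until it succeeds or the deadline passes.
def awaitAssert[A](max: Duration, interval: Duration)(assertion: => A): A = {
  val deadline = System.nanoTime() + max.toNanos
  def loop(): A =
    try assertion
    catch {
      case e: AssertionError =>
        // give up only once the total budget is spent
        if (System.nanoTime() >= deadline) throw e
        Thread.sleep(interval.toMillis)
        loop()
    }
  loop()
}
```

Unlike a fixed `Thread.sleep`, this returns as soon as the condition becomes true, which both speeds up the common case and tolerates slow runs up to `max`.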
Viktor Klang
cd8e97c060 +act - 15757 - Reworks implementation of ActorSystem shutdown
* deprecates awaitTermination, shutdown and isTerminated
  * introduces a terminate-method that returns a Future[Unit]
  * introduces a whenTerminated-method that returns a Future[Unit]
  * simplifies the implementation by removing blocking constructs
  * adds tests for terminate() and whenTerminated
2014-08-25 15:49:28 +02:00
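The reworked shutdown contract described above can be sketched as follows. This is an illustrative stand-in (`SystemLifecycle` is a made-up class, not Akka's implementation): `terminate()` starts shutdown and returns a `Future` that completes when the system is down, while `whenTerminated` exposes the same `Future` without triggering shutdown — together replacing the blocking `awaitTermination`/`shutdown`/`isTerminated` trio.

```scala
import scala.concurrent.{ Future, Promise }

// Hypothetical sketch of the terminate()/whenTerminated shape,
// not Akka's actual ActorSystem code.
final class SystemLifecycle {
  private val terminated = Promise[Unit]()

  def terminate(): Future[Unit] = {
    // a real system would stop its guardian actors before completing this
    terminated.trySuccess(())
    terminated.future
  }

  // observe termination without causing it
  def whenTerminated: Future[Unit] = terminated.future
}
```

Returning a `Future[Unit]` lets callers compose shutdown with other asynchronous work instead of blocking a thread, which is why the commit could drop the blocking constructs.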
Dominic Black
d4047a2e1f =con #15699 Fix race in Cluster Sharding tests 2014-08-22 14:04:32 +01:00
Dominic Black
af657880e2 !con #15496 Remember entries in cluster sharding
- Move all entry related logic out of the ShardRegion and into a
  new dedicated child `Shard` actor.
- Shard actor persists entry started and passivated messages.
- Non passivated entries get restarted on termination.
- Shard Coordinator restarts shards on other regions upon region failure or handoff
- Ensures shard rebalance restarts shards.
- Shard buffers messages after an EntryStarted is received until state persisted
- Shard buffers messages (still) after a Passivate is received until state persisted
- Shard will retry persisting state until success
- Shard will restart entries automatically (after a backoff) if not passivated and remembering entries
- Added Entry path change to the migration docs
2014-08-19 13:13:20 +01:00
Roland Kuhn
51062ff494 Merge pull request #1869 from ktonga/wip-3737-interceptors-chain-for-receive-ktonga
#3737 Interceptors chain for receive
2014-08-08 09:52:00 +02:00
Gaston M. Tonietti
b92e0c99b4 +con #13737 Interceptors chain for Actors' behavior.
* Add new pattern to akka-contrib.
* Mixin ReceivePipeline trait into Actors that have to be intercepted.
2014-08-06 15:50:59 -03:00
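The interceptor-chain idea behind `ReceivePipeline` can be sketched as plain function composition over incoming messages. The types and names below are illustrative assumptions, not the akka-contrib trait's real API: each interceptor may transform a message before the inner receive sees it.

```scala
// Hypothetical sketch of an interceptor chain for receive
// (not the actual ReceivePipeline implementation).
type Receive = PartialFunction[Any, Unit]

def pipeline(interceptors: List[Any => Any])(inner: Receive): Receive = {
  case msg =>
    // run the message through every interceptor, outermost first
    val transformed = interceptors.foldLeft(msg)((m, f) => f(m))
    if (inner.isDefinedAt(transformed)) inner(transformed)
}
```

An actor mixing in such a trait would keep its `receive` unchanged while interceptors handle cross-cutting concerns (unwrapping, logging, metrics) in front of it.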
Patrik Nordwall
4c4b1a1d27 =con #15613 Rename one of the DistributedPubSubMediatorSpec classes 2014-08-06 10:12:12 +02:00
LeonardMeyer
92041b0995 =con #15600 Fix config in PeekMailboxSpec 2014-07-30 10:21:31 +02:00
Konrad 'ktoso' Malawski
c76b8a0338 =con #15574 impr TimerBasedThrottlerSpec stability
* the lower bound was rather racy; it depends on where in its Tick the
  throttler currently was. In general the upper bound is also not
  exact, but "good enough", because the `.5` is an estimate of "the
  throttler must finish its previous tick, and then it sends the data"
2014-07-23 15:48:59 +02:00
Michal Knapik
7ca3a9699e +tes #12681 add EchoActor 2014-07-11 11:16:35 +02:00
Patrik Nordwall
a188099f91 =con #15440 Add invariant checks to ClusterSharding state
* I suspect that the issue #15440 happens because of replay of events
  in wrong order (ShardHomeAllocated received before ShardRegionRegistered)
  by the hbase journal
* This does not fix that issue, but the additional invariant checks and
  debug statements would perhaps make it easier for us to diagnose such
  issues
* These changes also ensures that the allocation strategy does not return
  the wrong thing.
* It also tightens a possible error if a region is terminated while a
  rebalance is in progress

(cherry picked from commit d07b9db4958236d580b8bfb8f92461969ff88cbc)
2014-06-30 13:19:00 +02:00
Konrad 'ktoso' Malawski
b1d1d87111 !per #15436 make persistenceId abstract in NEW classes
(cherry picked from commit de3249f7f4b859c3caa232e579d9a3bae7406803)

Conflicts:
	akka-samples/akka-sample-persistence-scala/src/main/scala/sample/persistence/PersistentActorExample.scala
2014-06-26 16:29:30 +02:00
Konrad 'ktoso' Malawski
3fd240384c +per #15424 Added PersistentView, deprecated View
A PersistentView works the same way as View did previously, except:

* it requires a `persistenceId` (no default is provided)
* messages given to `receive` are NOT wrapped in Persistent()

akka-streams not touched; will update them afterwards on a different branch

Also solves #15436 by making persistenceId in PersistentView abstract.

(cherry picked from commit dcafaf788236fe6d018388dd55d5bf9650ded696)

Conflicts:
	akka-docs/rst/java/lambda-persistence.rst
	akka-docs/rst/java/persistence.rst
	akka-docs/rst/scala/persistence.rst
	akka-persistence/src/main/scala/akka/persistence/Persistent.scala
	akka-persistence/src/main/scala/akka/persistence/View.scala
2014-06-26 10:10:09 +02:00
Marcin Kubala
f4793a399f =act,clu,con,doc,per,rem,sam #15114 append missing parens at Actor.sender() invocations 2014-06-20 23:05:51 +02:00
Martynas Mickevičius
0cd7252561 Merge pull request #15362 from 2m/pubsub-wrap-before-routing
=con #15285 wrap message in RouterEnvelope before routing (for validation)
2014-06-10 15:09:22 +03:00
Martynas Mickevicius
04d5cef3d9 =con #15285 wrap message in RouterEnvelope before routing
PubSubMediator uses a router which always unwraps RouterEnvelope messages.
However, unwrapping is undesirable if the user sends a message in a
ConsistentHashableEnvelope. Thus PubSubMediator should always wrap user
messages in a RouterEnvelope, which will be unwrapped by the router,
leaving the user message unchanged.

Also disallow consistent hashing routing logic in pub-sub mediator.
2014-06-10 12:46:52 +03:00
Konrad 'ktoso' Malawski
d51b79c95a !per persistAsync
Breaks binary compatibility because of adding new methods to the
Eventsourced trait. Since akka-persistence is experimental this is ok,
yet source-level compatibility has been preserved, thankfully :-)

Deprecates:
* Rename of EventsourcedProcessor -> PersistentActor
* Processor -> suggest using PersistentActor
* Migration guide for akka-persistence is separate, as we'll deprecate in minor versions (it's experimental)
* Persistent as well as ConfirmablePersistent - since Processor, their
  main user, will be removed soon.

Other changes:
* persistAsync works as expected when mixed with persist
* A counter must be kept for pending stashing invocations
* Uses only 1 shared list buffer for persist / persistAsync
* Includes small benchmark
* Docs also include info about not using Persistent() wrapper
* uses java LinkedList, for best performance of append / head on
  persistInvocations; the get(0) is safe, because these msgs only
  come in response to persistInvocations
* Renamed internal *MessagesSuccess/Failure messages because we kept
  making small mistakes, seeing the class "with s" and "without s" as the same
* Updated everything that referred to EventsourcedProcessor to
  PersistentActor, including samples

Refs #15227

Conflicts:
	akka-docs/rst/project/migration-guides.rst
	akka-persistence/src/main/scala/akka/persistence/JournalProtocol.scala
	akka-persistence/src/main/scala/akka/persistence/Persistent.scala
	akka-persistence/src/test/scala/akka/persistence/PersistentActorSpec.scala
	project/AkkaBuild.scala
2014-06-10 11:09:12 +02:00
Patrik Nordwall
ca4dda10ea =act #13678 #15149 Reply with ActorIdentity(None) from deadLetters
(cherry picked from commit b2f96668baf9efa77de5f97223f055c0a78d0cb8)
2014-06-05 08:28:40 +02:00
Jeroen Gordijn
619585c50e +con #15157 Changed ClusterSharding.start to return the shardRegion
ActorRef #15209

* Changed ClusterSharding.start to return the ActorRef to the shardRegion (#15157)

* Fixed indentation, and removed unused import

* Test for new API
* removed unused import

- Moved barrier outside of the runOn
2014-05-20 10:28:49 +02:00
Martynas Mickevicius
fdcd964165 =pro #15031 separate sbt build file for every module 2014-05-14 10:05:09 +02:00
Shikhar Bhushan
efc254db87 minor docfixes in ClusterSingletonManager.scala 2014-05-07 11:09:53 +05:30
Konrad Malawski
44f499f434 Merge pull request #15008 from ktoso/3986-cluster-singleton-may-become-doubleton-during-splits-ktoso
=doc #3986 Slight updates in wording on cluster singleton docs
2014-04-22 17:14:25 +02:00
Xingrun CHEN
f421e4260b +con #3972 Make DistributedPubSubMediator support consumer groups
1. allow Topic to have child topics, which are the groups
2. when publishing with the sendOneMessageToEachGroup flag, it will send to
one actor in each group
2014-04-15 18:54:07 +08:00
Konrad Malawski
990ad99ca3 =doc #3986 Slight updates in wording on cluster singleton docs 2014-04-15 10:22:53 +02:00
Ahmed Soliman
d9f0a1aac3 =con #3993 Send UnsubscribeAck in DistributedPubSubMediator to the right sender 2014-04-11 14:50:56 +03:00
Konrad Malawski
2173a037cb Merge pull request #2126 from ktoso/3986-cluster-singleton-may-become-doubleton-during-splits-ktoso
=doc #3986 Cluster Singleton should not be used with AutoDown
2014-04-10 15:39:31 +02:00
Konrad Malawski
08fd4c93fa =doc #3986 Cluster Singleton should not be used with AutoDown
unless you want each partition of the cluster (effectively new clusters)
to spin up its "own" singleton.
2014-04-10 15:38:22 +02:00
Patrik Nordwall
e860a94e33 Merge pull request #2119 from akka/wip-3974-sharding-NoSuchElementException-master-patriknw
=3974 per Persist (serialize) actor refs with transport info (forward port)
2014-04-09 14:04:01 +02:00
Patrik Nordwall
93d069fc8f =con #3975 Check for wrong id in cluster sharding
* Also, a watch leftover from ticket 3882

(cherry picked from commit cbc9dc535c0692a7df00bfb7292e62de1bed7e3f)

Conflicts:
	akka-contrib/src/main/scala/akka/contrib/pattern/DistributedPubSubMediator.scala
2014-04-08 11:51:31 +02:00
Patrik Nordwall
e4b2af3783 =3974 per Persist (serialize) actor refs with transport info
* The reason for the problem with NoSuchElementException in ClusterSharding was
  that actor references were not serialized with full address information. In
  certain fail over scenarios the references could not be resolved and therefore
  the ShardRegionTerminated did not match corresponding ShardRegionRegistered.
* Wrap serialization with transport information from defaultAddress

(cherry picked from commit 3e73ae5925cf1293a9a5d61e48919b1708e84df2)
2014-04-07 14:08:04 +02:00
Patrik Nordwall
d7757b90f6 Merge pull request #2099 from akka/wip-3933-persistent-channel-patriknw
=per #3933 Correction of seq number logic for persistent channel
2014-03-25 13:40:37 +01:00
Patrik Nordwall
c7e157121a =per #3933 Correction of seq number logic for persistent channel
* Problem when using PersistentChannel from Processor
* When the seq numbers of the sending processor and the seq numbers
  of the PersistentChannel was out of sync the PersistentChannel
  did not de-duplicate confirmed deliveries that were resent by
  the processor.
* There is a hand-off in the RequestWriter that confirms the
  Processor seq number, and therefore the seq number of the
  RequestWriter must be used in the ConfirmablePersistent from
  the RequestReader
* More tests, covering this scenario
2014-03-24 13:10:35 +01:00
Patrik Nordwall
8ebc413643 +con #3937 Start ShardCoordinator again after PersistenceFailure
* Add supervisor level that will start the ShardCoordinator again after
  a configurable backoff duration
* Make the timeout of SharedLeveldbJournal configurable
* Include cause of PersistenceFailure in message of ActorKilledException
2014-03-23 20:14:19 +01:00
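The restart-after-backoff supervision above can be sketched as a backoff calculator. Note the hedges: the commit only says the backoff duration is configurable, so the exponential growth and capping below are assumptions for illustration, not the actual supervisor's policy. The idea is that after the ShardCoordinator stops on a PersistenceFailure, the supervisor waits before starting it again rather than restarting immediately.

```scala
import scala.concurrent.duration._

// Illustrative backoff schedule (exponential with a cap is an assumption;
// the commit specifies only a configurable backoff duration).
def nextBackoff(restartCount: Int,
                minBackoff: FiniteDuration,
                maxBackoff: FiniteDuration): FiniteDuration = {
  // double the delay on every restart, never exceeding the cap
  val candidate = minBackoff * math.pow(2.0, restartCount.toDouble).toLong
  if (candidate > maxBackoff) maxBackoff else candidate
}
```

Capping at `maxBackoff` keeps a persistently failing coordinator from pushing the retry interval out indefinitely while still avoiding a tight restart loop.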
Roland Kuhn
95d27e3f82 Merge pull request #2067 from akka/wip-WhatIsAkka-∂π
=doc clean up what-is-akka.rst and switch to www.typesafe.com
2014-03-17 14:16:47 +01:00
Patrik Nordwall
f457e0a30c Merge pull request #1977 from giovannibotta/master
#3843 Add ClusterSingletonProxy
2014-03-14 14:33:27 +01:00
Patrik Nordwall
54671271e9 !con #3920 Remove JavaLoggingEventHandler 2014-03-14 14:14:31 +01:00
Roland Kuhn
98c282f115 =doc clean up what-is-akka.rst and switch to www.typesafe.com
the latter is necessary because of broken DNS requirements which make
apex domains brittle (since they must resolve to an A record with a
single IP)
2014-03-13 12:42:47 +01:00
Giovanni Botta
ee01a8dffe +con #3843 Add ClusterSingletonProxy 2014-03-12 17:42:26 -04:00
dario.rexin
2cbad298d6 =all #3858 Make case classes final 2014-03-07 13:20:01 +01:00
Patrik Nordwall
d1a7956d17 =doc Links to activator and some doc improvements 2014-02-21 11:24:01 +01:00
Patrik Nordwall
e3a7138991 Merge pull request #2020 from akka/wip-3882-sharding-watch-after-recovery-patriknw
=con #3882 Defer watch in ClusterSharding until after recovery
2014-02-19 14:57:44 +01:00
Patrik Nordwall
5d2761b81c =con #3882 Defer watch in ClusterSharding until after recovery
* To avoid unnecessary and costly watch/unwatch of non-existing systems.
* This avoids the problematic scenario revealed in ticket 3879
2014-02-19 08:33:35 +01:00
Patrik Nordwall
21e8f89f53 =con #3880 Keep track of all shards per region in ClusterSharding
* The problem was that ShardRegion actor only kept track of one shard
  id per region actor.  Therefore the Terminated message only removes
  one of the shards from its registry when there are multiple shards
  per region.
* Added failing test and solved the problem by keeping track of all
  shards per region
* Also, rebalance must not be done before any regions have been
  registered
2014-02-19 08:11:11 +01:00
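The registry fix above can be sketched with `String` stand-ins for actor references and shard ids (illustrative only, not the real ShardCoordinator state): tracking a `Set` of shards per region means a single `Terminated(region)` removes every shard hosted there, not just the last one registered.

```scala
// Hypothetical registry: region -> all its shard ids.
// Before the fix the value was effectively a single shard id,
// so termination dropped only one shard from the registry.
var regions = Map.empty[String, Set[String]]

def shardHomeAllocated(region: String, shard: String): Unit =
  regions = regions.updated(region, regions.getOrElse(region, Set.empty) + shard)

// on Terminated: forget the region and return every shard
// that must now be re-allocated elsewhere
def regionTerminated(region: String): Set[String] = {
  val removed = regions.getOrElse(region, Set.empty)
  regions -= region
  removed
}
```

With the old one-shard-per-region bookkeeping, `regionTerminated` would have returned a single shard, leaving the others pointing at a dead region.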
Patrik Nordwall
c2a932768b Merge pull request #1997 from akka/wip-3228-doc-TimerBasedThrottler-patriknw
=con #3228 Adjust structure of TimerBasedThrottler ScalaDoc
2014-02-13 12:27:30 +01:00
Patrik Nordwall
4b33cf98df =con #3865 Fix race in pub-sub when nodes are removed
* The race can happen if the MemberRemoved event is received followed by a Delta update from
  a node that has not yet got the MemberRemoved. That will make the bucket for the removed
  node to be added back in the registry.
2014-02-13 12:25:56 +01:00
Patrik Nordwall
89a5772e87 =con #3228 Adjust structure of TimerBasedThrottler ScalaDoc
* The documentation was good, but some parts were "hidden" by separating
  it into two places. I understand the original reason for the separation,
  but it might be easier for the user (as reported in the ticket) to have
  everything in one place.
2014-02-07 16:22:04 +01:00