You might try coordinating this activity outside of Riak if at all
possible. If there is a single point of origin for these events (i.e., a
dedicated master for each partition of writes), then you could maintain
reasonable guarantees that you don't need sibling processing on the Riak
end since data is b
Max,
This sounds a bit complex. What would need to happen if you didn't process an
event (or batch of events) in time? What about using time-based expiry for
your events, which is supported by the Bitcask backend? You could use
Multi-backend to set up a bucket that expires in N seconds. When
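For reference, a Multi-backend setup along these lines might look like the
following app.config fragment. This is only a sketch assuming a Riak 1.x-style
configuration: the backend names and data_root paths are placeholders, and
expiry_secs is Bitcask's automatic-expiry option.

```erlang
%% Sketch: route expiring event data through riak_kv_multi_backend
{riak_kv, [
    {storage_backend, riak_kv_multi_backend},
    {multi_backend_default, <<"default">>},
    {multi_backend, [
        %% ordinary, non-expiring data
        {<<"default">>, riak_kv_bitcask_backend, [
            {data_root, "/var/lib/riak/bitcask"}
        ]},
        %% entries dropped automatically N (here 60) seconds after write
        {<<"expiring">>, riak_kv_bitcask_backend, [
            {data_root, "/var/lib/riak/bitcask_expiring"},
            {expiry_secs, 60}
        ]}
    ]}
]}
```

You would then point the events bucket at the expiring backend via its
bucket properties.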
Hi,
what's the best approach to process a batch of events N seconds after the
latest event in a group happens? Events are grouped by key.
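The timing requirement here (flush a key's batch N seconds after that key's
latest event) can be prototyped outside Riak with a per-key timer that is
reset on every new event. This is only an illustrative sketch; KeyDebouncer
is a hypothetical name, not part of any Riak client:

```python
import threading
from collections import defaultdict

class KeyDebouncer:
    """Buffer events per key; flush a key's batch `delay` seconds
    after the latest event seen for that key."""

    def __init__(self, delay, flush):
        self.delay = delay
        self.flush = flush            # callback: flush(key, events)
        self.lock = threading.Lock()
        self.batches = defaultdict(list)
        self.timers = {}

    def add(self, key, event):
        with self.lock:
            self.batches[key].append(event)
            # restart the timer so the batch fires `delay` seconds
            # after the *latest* event for this key
            if key in self.timers:
                self.timers[key].cancel()
            timer = threading.Timer(self.delay, self._fire, args=(key,))
            self.timers[key] = timer
            timer.start()

    def _fire(self, key):
        with self.lock:
            events = self.batches.pop(key, [])
            self.timers.pop(key, None)
        self.flush(key, events)
```

Each call to add() for a key cancels that key's pending timer and starts a
fresh one, so the flush callback only runs once the key has been quiet for
the full delay.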
I am thinking about the following scheme:
1) events are recorded in a way that every write creates a new sibling,
to avoid multiple read/write cycles per event
2) with