Hi,

I'm considering using Cassandra as the backend for implementing a
distributed event queue, probably via an established framework such as
ActiveMQ, RabbitMQ, Spring Integration, etc.

I need a solution that can handle both high-throughput, short-lived
events (such as an outgoing email box) and longer-lived events
spanning days or weeks.

I have some open issues/questions and would appreciate some feedback /
a sanity check, as I'm pretty new to Cassandra.

1.
To avoid polling, I'm considering using Cassandra only as the data
store for the queue. The actual notification mechanism would be built
using a distributed coordination service such as ZooKeeper (which is
not suitable for holding "big" event data), for instance:
http://zookeeper.apache.org/doc/current/recipes.html#sc_recipes_Queues

The ZooKeeper znode representing the event could hold the Cassandra
<column family, row id, column id> 3-tuple.
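
A minimal sketch of the ZooKeeper side of this, assuming the standard
ZooKeeper Java client, a pre-created /queue parent znode, and a
hypothetical "cf|row|col" pointer encoding (all names here are mine,
for illustration only):

    import java.nio.charset.StandardCharsets;
    import java.util.Collections;
    import java.util.List;

    import org.apache.zookeeper.CreateMode;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooDefs;
    import org.apache.zookeeper.ZooKeeper;

    public class QueuePointers {

        // Producer: write the event payload to Cassandra first, then
        // publish a small pointer znode. PERSISTENT_SEQUENTIAL appends a
        // monotonically increasing sequence number, giving a global
        // enqueue order.
        static void enqueue(ZooKeeper zk, String cf, String rowId,
                String colId)
                throws KeeperException, InterruptedException {
            byte[] pointer = (cf + "|" + rowId + "|" + colId)
                    .getBytes(StandardCharsets.UTF_8);
            zk.create("/queue/event-", pointer,
                    ZooDefs.Ids.OPEN_ACL_UNSAFE,
                    CreateMode.PERSISTENT_SEQUENTIAL);
        }

        // Consumer: take the lowest-numbered child, resolve the pointer,
        // fetch the payload from Cassandra, then delete znode and column.
        static String dequeue(ZooKeeper zk)
                throws KeeperException, InterruptedException {
            List<String> children = zk.getChildren("/queue", false);
            if (children.isEmpty()) {
                return null; // empty; a real consumer would set a watch
            }
            Collections.sort(children); // sequence suffix = enqueue order
            String path = "/queue/" + children.get(0);
            byte[] pointer = zk.getData(path, false, null);
            zk.delete(path, -1); // -1 matches any version
            return new String(pointer, StandardCharsets.UTF_8);
        }
    }

With multiple consumers the delete races; the recipe linked above
handles that by catching NoNodeException and moving on to the next
child.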

2.
Data model in Cassandra (with inspiration from
http://www.slideshare.net/warpforge/cassandra-queuing); a sketch of
the candidate layouts follows the list.

a) Model each queue topic as one row, each event's data in one column
(+) simple model, maintains ordering of events
(-) limited by the max number of columns per row, 2 billion (probably
enough...). All event data for a topic must fit on disk on one node.

b) One event per row
(+) scales over the cluster, no size limitations
(+) order can be kept using an order-preserving partitioner and/or
ZooKeeper (sequential child znodes)
(-) no event order when using the random partitioner; less of an issue
if order is kept by ZooKeeper, but in that case range queries over
many events are not efficient

c) ??
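
To make the difference concrete, here is a sketch of the key/column
shapes for (a) and (b); no particular client API is implied, and all
names are made up:

    import java.util.UUID;

    public class QueueLayouts {
        public static void main(String[] args) {
            byte[] payload = "serialized event".getBytes();

            // (a) One row per topic: the topic is the row key, each event
            // is a column. Zero-padding the timestamp makes lexicographic
            // column ordering match enqueue time; the UUID suffix avoids
            // collisions between events in the same millisecond.
            String topicRow = "email-out";
            String columnName = String.format("%020d:%s",
                    System.currentTimeMillis(), UUID.randomUUID());
            // client.insert(topicRow, "Events", columnName, payload);

            // (b) One row per event: the row key hashes across the
            // cluster under the random partitioner, so ordering must come
            // from elsewhere (e.g. the ZooKeeper sequence numbers from
            // point 1).
            String eventRow = UUID.randomUUID().toString();
            // client.insert(eventRow, "Events", "payload", payload);

            System.out.println(columnName + " / " + eventRow);
        }
    }

In practice model (a) would more likely use TimeUUID column names with
a time-based comparator; the string encoding above just shows the
ordering idea.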

3.
Number of SSTables created, and the need to run compaction frequently?

Assume one has very high throughput with short-lived events. In this
case I guess an event would often be both created and deleted before
the memtables get flushed to disk as SSTables. How does this work?
Does Cassandra have logic during SSTable creation to see that a
specific value has been both created and deleted before the flush, so
that it can in fact skip writing it to the SSTable? My guess is no - a
"down" node could have missed the delete and, when back online,
falsely propagate the now-deleted event to other nodes...

What could one do to alleviate this problem (preferably per keyspace)?
- Reduce "gc_grace_seconds" for short-lived event queues
- Modify the SSTable flushing code to not write a "create/delete" pair
if some special flag such as "eventQueue=true" is set at the keyspace
level. This would give the queue global "deliver at least once"
behavior rather than "deliver exactly once" in some rare cases;
consumers could compensate by deduplicating on event id, as sketched
after this list.
- ??
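
On the "at least once" point, a minimal sketch of consumer-side
deduplication on a unique event id (in-memory only; a real consumer
would have to persist and expire the seen-set, and the class and
method names here are hypothetical):

    import java.util.Collections;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class DedupingConsumer {
        // Ids of events already processed. Kept in memory here; would
        // need to be persisted to survive consumer restarts.
        private final Set<String> seen = Collections.newSetFromMap(
                new ConcurrentHashMap<String, Boolean>());

        public void onEvent(String eventId, byte[] payload) {
            if (!seen.add(eventId)) {
                return; // redelivery of a resurrected event, skip it
            }
            process(payload);
        }

        private void process(byte[] payload) {
            // application-specific handling
        }
    }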

Regards,
-Martin
