Persistent [priority] queues are better suited to something like HornetQ
than Cassandra.
On Wed, May 25, 2011 at 9:10 PM, Dan Kuebrich wrote:
It sounds like the problem is that the row is getting filled up with
tombstones and becoming enormous? Another idea then, which might not be
worth the added complexity, is to progressively use new rows. Depending on
volume, this could mean having 5-minute-window rows, or 1-minute rows, or
whatever works.
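A minimal sketch of the time-window row idea (the key format and window size here are hypothetical, assuming items carry Unix timestamps):

```python
WINDOW_SECONDS = 300  # 5-minute buckets; tune to your write volume


def bucket_row_key(ts: float, window: int = WINDOW_SECONDS) -> str:
    """Map an item's timestamp to the row that should hold it."""
    return f"queue:{int(ts) // window * window}"


# Items less than `window` seconds apart share a row; older windows,
# once fully consumed, can be dropped with a single row delete instead
# of accumulating one column tombstone per popped item.
print(bucket_row_key(1306371000))
print(bucket_row_key(1306371299))  # same 5-minute bucket as above
print(bucket_row_key(1306371300))  # next bucket
```

The win is that deletion cost becomes one row tombstone per window rather than millions of column tombstones in a single ever-growing row.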
You're basically intentionally inflicting the worst-case scenario on
the Cassandra storage engine:
http://wiki.apache.org/cassandra/DistributedDeletes
You could play around with reducing gc_grace_seconds, but a PQ with
"millions" of items is something you should probably just do in memory
these days.
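For scale, the in-memory route is a few lines with Python's standard-library heapq (job names here are illustrative; durability and failover are deliberately left out of this sketch):

```python
import heapq

# Heap of (timestamp, item) pairs; the tuple ordering makes the
# earliest timestamp pop first.
pq = []
heapq.heappush(pq, (1306371050, "job-b"))
heapq.heappush(pq, (1306371000, "job-a"))
heapq.heappush(pq, (1306371100, "job-c"))

# Items come out in time order, and removal is O(log n) with no
# tombstone or compaction cost.
while pq:
    ts, item = heapq.heappop(pq)
    print(ts, item)
```

Millions of (timestamp, id) tuples fit comfortably in memory on commodity hardware, which is why the thread suggests keeping the queue itself out of Cassandra.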
Hi all,
I'm trying to implement a priority queue for holding a large number (millions)
of items that need to be processed in time order. My solution works, but it
gets slower and slower until performance becomes unacceptable, even with a
small number of items.
Each item essentially needs to be popped