+1 for Redis for this use case.

On Aug 16, 2013, at 10:54 AM, Robert Coli <rc...@eventbrite.com> wrote:

> On Fri, Aug 16, 2013 at 10:43 AM, Todd Nine <tn...@apigee.com> wrote:
>   We're using expiring columns as a means for locking.
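(Aside, since it matters for the tombstone discussion below: an expiring column is just a normal write with a TTL, and once the TTL passes the cell becomes a tombstone that stays on disk until gc_grace_seconds elapses and compaction removes it. A purely illustrative CQL3 sketch, with made-up table and column names rather than the actual Locks\HLocks schema:)

    -- Acquire by writing a cell that expires on its own after 30 seconds.
    INSERT INTO locks (resource, lock_id, owner)
    VALUES ('resource-1', 'lock-1', 'node-a')
    USING TTL 30;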
> 
> Perhaps a log-structured data store with immutable data files is not ideal 
> for your use case?
> 
> If I were you, I'd put this use case in Redis and be done with it instead of 
> trying to get Cassandra to do something that is the opposite of what it is 
> optimized for.
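For the Redis route, the usual pattern (assuming Redis 2.6.12+, where SET grew the NX and EX options) is a single command to acquire and a guarded delete to release; the key name and timeout below are made up for illustration:

    SET lock:resource-1 some-unique-token NX EX 30   # OK = acquired, (nil) = already held
    ... do the protected work ...
    DEL lock:resource-1                               # naive release; a safe release checks the
                                                      # token first (e.g. via a small Lua script)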
>  
> However, we're still seeing very wide rows in our sstables, which (I'm
> assuming) are due to tombstones since I only get < 10 columns back on
> a full range scan.
> 
> Don't assume; use tracing to show you how many tombstones are being read.
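In cqlsh (Cassandra 1.2+) that's roughly the two commands below; the trace output contains lines like "Read N live and M tombstoned cells" for each read, which tells you directly how much tombstone overhead every query is paying. Table and key names here are illustrative, not your actual schema:

    cqlsh> TRACING ON;
    cqlsh> SELECT * FROM "Locks" WHERE key = 'resource-1';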
>  
>  We're trying to completely eliminate the need to go to disk in the
> Locks\HLocks CF. Is there anything further we can do?
> 
> The populate_io_cache_on_flush option may help you.
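(If I'm remembering right, that's a per-table property as of 1.2.1, set roughly as below; double-check the exact syntax against the CQL docs for your version. It preloads newly written SSTables into the OS page cache on flush/compaction, so it only helps if the hot data actually fits in memory:)

    cqlsh> ALTER TABLE "Locks" WITH populate_io_cache_on_flush = true;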
> 
> =Rob 
