Dear Aaron,

Thank you for your suggestion. I'll be evaluating it.
Since all my other use cases are implemented in Cassandra, I started wondering whether the sorted set could be implemented in Cassandra as well :)

The problem here is that within a few hours I might be resolving more than 2M pages. It also seems that using Redis would cause a problem on deletion, whereas in Cassandra I could rely on the expiration (TTL) of the columns. And it looks like a sorted set can't be partitioned, so it won't be scalable at the end of the day.

Regards,
Utku

On Thu, Feb 10, 2011 at 9:54 AM, aaron morton <aa...@thelastpickle.com> wrote:

> FWIW, and depending on the size of the data, I would consider using sorted
> sets in Redis: http://redis.io/commands#sorted_set
> The member would be the page URL and the score the timestamp; use ZRANGE
> to get back the top 1,000 entries in the set.
>
> Would that work for you?
>
> Aaron
>
> On 9 Feb 2011, at 23:58, Utku Can Topçu wrote:
>
> > Hi All,
> >
> > I'm sure people here have tried to solve similar questions.
> > Say I'm tracking pages, and I want to access the 1,000 least recently
> > used unique pages (i.e. column names). How can I achieve this?
> >
> > Using a row with, say, ttl=60 seconds would solve the problem of
> > accessing the least recently used unique pages in the last minute.
> >
> > Thanks for any comments and help.
> >
> > Regards,
> > Utku
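[For readers following the thread: Aaron's sorted-set idea can be sketched in pure Python, simulating Redis's ZADD/ZRANGE with a plain dict so no Redis server is needed. The page names and timestamp scores below are invented for illustration; in real Redis these calls would go against a single sorted-set key.]

```python
# Minimal simulation of a Redis sorted set for LRU-page tracking.
# member -> score, where the score is a timestamp of the last access.
class SortedSet:
    def __init__(self):
        self._scores = {}

    def zadd(self, member, score):
        # Like Redis ZADD: overwrites the score if the member exists,
        # which is exactly the "touch page, refresh timestamp" update.
        self._scores[member] = score

    def zrange(self, start, stop):
        # Like Redis ZRANGE: members ordered by ascending score,
        # i.e. least recently used pages come first.
        ordered = sorted(self._scores, key=self._scores.get)
        # Redis treats stop as inclusive; -1 means "to the end".
        if stop == -1:
            return ordered[start:]
        return ordered[start:stop + 1]

pages = SortedSet()
pages.zadd("/home", 100)
pages.zadd("/about", 200)
pages.zadd("/home", 300)   # /home touched again: score refreshed
pages.zadd("/faq", 250)

# The 1,000 least recently used pages (here only three exist):
print(pages.zrange(0, 999))  # -> ['/about', '/faq', '/home']
```

Note that this captures the ordering behaviour only; it does not address the deletion/expiry concern raised above, which is what Cassandra's column TTL would handle.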