How do you do the deletes?

Aaron
On 20 Apr 2011, at 08:39, Héctor Izquierdo Seliva wrote:

> On Tue, 19-04-2011 at 23:33 +0300, shimi wrote:
>> You can use memtable_flush_after_mins instead of the cron
>>
>> Shimi
>>
> Good point! I'll try that.
>
> Wouldn't it be better to count a delete as a one-column operation, so
> that it contributes to the flush-by-operations threshold?
>
>> 2011/4/19 Héctor Izquierdo Seliva <izquie...@strands.com>
>>
>> On Wed, 20-04-2011 at 08:16 +1200, aaron morton wrote:
>>> I think there may be an issue here: we are counting the number of
>>> columns in the operation, and when deleting an entire row we do not
>>> have a column count.
>>>
>>> Can you let us know what version you are using and how you are
>>> doing the delete?
>>>
>>> Thanks
>>> Aaron
>>>
>> I'm using 0.7.4. I have a file with all the row keys I have to delete
>> (around 100 million) and I just go through the file and issue deletes
>> through Pelops.
>>
>> Should I manually issue flushes with a cron every x time?
>>
>>> On 20 Apr 2011, at 04:21, Héctor Izquierdo Seliva wrote:
>>>
>>>> Ok, I've read about gc_grace_seconds, but I'm not sure I understand
>>>> it fully. Until gc_grace_seconds have passed and there is a
>>>> compaction, do the tombstones live in memory? I have to delete 100
>>>> million rows and my insert rate is very low, so I don't have a lot
>>>> of compactions. What should I do in this case? Lower the major
>>>> compaction threshold and memtable_operations to some very low
>>>> number?
>>>>
>>>> Thanks
>>>>
>>>> On Tue, 19-04-2011 at 17:36 +0200, Héctor Izquierdo Seliva wrote:
>>>>> Hi everyone. I've configured memtable_operations = 0.02 in one of
>>>>> my column families and started deleting keys. I have already
>>>>> deleted 54k, but there hasn't been any flush of the memtable.
>>>>> Memory keeps piling up and eventually nodes start to do
>>>>> stop-the-world GCs. Is this the way this is supposed to work, or
>>>>> have I done something wrong?
>>>>>
>>>>> Thanks!
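For reference, the row deletes Héctor describes boil down to one Thrift remove() call per key in 0.7: a ColumnPath that names only the column family addresses the whole row, which is exactly the case where there is no column count to feed into memtable_operations. The sketch below is a minimal illustration of that call using the raw Thrift client rather than Pelops (whose code the thread does not show); the host, keyspace, column family and key-file names are made up for the example.

import java.io.BufferedReader;
import java.io.FileReader;
import java.nio.ByteBuffer;

import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.ColumnPath;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransport;

public class BulkRowDelete {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details and names, for illustration only.
        TTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
        transport.open();
        Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
        client.set_keyspace("MyKeyspace");

        // A ColumnPath with only the column family set addresses the entire row,
        // so remove() writes a row-level tombstone with no per-column count.
        ColumnPath wholeRow = new ColumnPath("MyColumnFamily");

        BufferedReader keys = new BufferedReader(new FileReader("keys_to_delete.txt"));
        String key;
        while ((key = keys.readLine()) != null) {
            client.remove(ByteBuffer.wrap(key.getBytes("UTF-8")),
                          wholeRow,
                          System.currentTimeMillis() * 1000L,  // microsecond timestamp
                          ConsistencyLevel.QUORUM);
        }
        keys.close();
        transport.close();
    }
}

Because each of these removes only writes a row-level tombstone, a flush threshold that counts columns never advances, which is why the thread falls back to memtable_flush_after_mins or a cron-driven flush to keep the memtable from growing until the JVM hits stop-the-world GCs.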