Thanks for the info guys. Regardless of the reason for using nodetool compact, it seems like the question still stands... but the impression I'm getting is that nodetool compact on JBOD, as I described it, will basically fall apart. Is that correct?
To answer Colin's question as an aside: we have a dataset with a fairly high insert load and periodic range reads (batch processing). We have a situation where we may want to rewrite some rows (changing the primary key) by deleting each row and re-inserting it as a new row. This is not something we would do on a regular basis, but during or after that process a major compaction would greatly help clear out the tombstones and rewritten data.

@Ryan Svihla it also sounds like your suggestion in this case would be: create a new column family, rewrite all the data into it, then truncate/remove the previous one and replace it with the new one (rough sketch of what I mean at the bottom of this mail).

On Tue, Jan 6, 2015 at 9:39 AM, Ryan Svihla <r...@foundev.pro> wrote:

> nodetool compact is the ultimate "running with scissors" solution: far
> more people manage to stab themselves in the eye than not, customers
> running with scissors successfully notwithstanding.
>
> My favorite discussions usually go like this:
>
> 1. "We still have tombstones" (so they set gc_grace_seconds to 0)
> 2. "We added a node after fixing it and now a bunch of records that were
> deleted have come back" (usually after setting gc_grace_seconds to 0 and
> then not wiping nodes that had been offline)
> 3. "Why are my read latencies so spiky?" (because they're on STCS and
> compacted everything into one giant SSTable, which worked fine while
> their data set was tiny; now new writes have piled up around it and
> they're looking at 100 SSTables on STCS, which means sllllloooowww reads)
> 4. "We still have tombstones" (yes, this one again, but usually after
> they've switched to LCS, on which nodetool compact is basically a no-op)
>
> All of this is manageable when you have a team that understands the
> trade-offs of nodetool compact, but I categorically reject that it's a
> good experience for new users; I've unfortunately had about a dozen fire
> drills this year as a result of nodetool compact alone.
>
> Data modeling around partitions that are truncated when they fall out of
> scope is typically far more manageable, works with any compaction
> strategy, and doesn't require operational awareness at the same scale.
>
> On Fri, Jan 2, 2015 at 2:15 PM, Robert Coli <rc...@eventbrite.com> wrote:
>
>> On Fri, Jan 2, 2015 at 11:28 AM, Colin <co...@clark.ws> wrote:
>>
>>> Forcing a major compaction is usually a bad idea. What is your reason
>>> for doing that?
>>
>> I'd say "often" rather than "usually". Lots of people have schemas that
>> create way too much garbage, and major compaction can be a good response,
>> the docs' historically incoherent FUD notwithstanding.
>>
>> =Rob
>
> --
> Thanks,
> Ryan Svihla

--
Dan Kinder
Senior Software Engineer
Turnitin – www.turnitin.com
dkin...@turnitin.com
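P.S. For concreteness, here's roughly the migration I have in mind. This is only a sketch: the table/column names (events_v1, events_v2, payload) and the derive_new_key() helper are made up for illustration, and the contact point and keyspace are placeholders, not our actual schema. It uses the DataStax Python driver; since CQL has no INSERT ... SELECT, the copy has to happen client-side.

    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    cluster = Cluster(['127.0.0.1'])          # placeholder contact point
    session = cluster.connect('my_keyspace')  # placeholder keyspace

    # New table ("column family") keyed on the new primary key.
    session.execute("""
        CREATE TABLE IF NOT EXISTS events_v2 (
            new_id  text PRIMARY KEY,
            payload text
        )
    """)

    insert = session.prepare(
        "INSERT INTO events_v2 (new_id, payload) VALUES (?, ?)")

    # Page through the old table and re-insert each row under its new key.
    scan = SimpleStatement("SELECT old_id, payload FROM events_v1",
                           fetch_size=1000)
    for row in session.execute(scan):
        new_id = derive_new_key(row.old_id)  # hypothetical key-rewrite helper
        session.execute(insert, (new_id, row.payload))

    # Once the copy is verified, drop the old table; its SSTables (data and
    # tombstones alike) go away with it, so no major compaction is needed.
    session.execute("DROP TABLE events_v1")
    cluster.shutdown()

The appeal over deleting and re-inserting in place is exactly what Ryan describes: dropping events_v1 removes the old rows and their tombstones wholesale, with no nodetool compact and no gc_grace_seconds games.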