From: Vitali Dyachuk [mailto:vdjat...@gmail.com]
Sent: Thursday, September 6, 2018 08:00
To: user@cassandra.apache.org
Subject: Re: Large sstables

What I have done is:
1) Added more disks, so the compaction can carry on.
2) When I switched from STCS to LCS, the STCS queues for processing the big sstables remained, so I stopped those queues with nodetool stop -id queue_id, and the LCS compaction started processing sstables. I'm using C* 3.0.17 with RF 3.

However, the question remains: if I use sstablesplit on a 200 GB sstable to split it into 200 MB files, will that help the LCS compaction? Or will LCS simply take some data from that big sstable and merge it with other sstables on L0 and the other levels, so that I just have to wait until the LCS compaction finishes?

On Sun, Sep 2, 2018 at 9:55 AM shalom sagges <shalomsag...@gmail.com> wrote:

If there are a lot of droppable tombstones, you could also run a User Defined Compaction on that (and other) SSTable(s). This blog post explains it well: http://thelastpickle.com/blog/2016/10/18/user-defined-compaction.html

On Fri, Aug 31, 2018 at 12:04 AM Mohamadreza Rostami <mohamadrezarosta...@gmail.com> wrote:

Hi, Dear Vitali,
The best option for you is to migrate the data to a new table and change the partition key pattern for a better distribution of data, so that your sstables become smaller. But if your data already has a good distribution and is simply very large, you must add a new server to your datacenter; changing the compaction strategy carries some risk.
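For reference, the steps described above (finding and stopping the stuck compaction, then splitting the oversized sstable) can be sketched roughly as follows. The compaction id and the sstable path below are placeholders; note that sstablesplit must be run while Cassandra is stopped on that node:

```shell
# List running compactions to find the id of the stuck STCS task.
nodetool compactionstats

# Stop a single compaction by its id (Cassandra 3.x syntax).
nodetool stop -id <compaction-uuid>

# With Cassandra STOPPED on this node, split the large sstable.
# -s / --size is the target chunk size in MB; path is illustrative.
sstablesplit --no-snapshot -s 200 \
  /var/lib/cassandra/data/<keyspace>/<table>-*/mc-1234-big-Data.db
```

By default sstablesplit snapshots the sstable first; --no-snapshot skips that, which saves disk space but removes the safety net, so only use it if the disk is nearly full.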
> On Shahrivar 8, 1397 AP, at 19:54, Jeff Jirsa <jji...@gmail.com> wrote:
>
> Either of those are options, but there's also sstablesplit to break it up a bit.
>
> Switching to LCS can be a problem depending on how many sstables/overlaps you have.
>
> --
> Jeff Jirsa
>
>> On Aug 30, 2018, at 8:05 AM, Vitali Dyachuk <vdjat...@gmail.com> wrote:
>>
>> Hi,
>> Some of the sstables got too big (100 GB and more), so they are not compacting any more and some of the disks are running out of space. I'm running C* 3.0.17, RF 3, with 10 disks/JBOD and STCS.
>> What are my options? Completely delete all data on this node and rejoin it to the cluster, change the compaction strategy to LCS, then run repair?
>> Vitali.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
> ---------------------------------------------------------------------
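For completeness, the strategy switch discussed in this thread is a per-table setting changed via CQL. A minimal sketch, with hypothetical keyspace/table names (sstable_size_in_mb defaults to 160 MB; 200 is shown only to match the sizes mentioned above):

```shell
cqlsh -e "ALTER TABLE my_keyspace.my_table WITH compaction = {
  'class': 'LeveledCompactionStrategy',
  'sstable_size_in_mb': 200
};"
```

As Jeff notes, expect heavy compaction activity after the switch: all existing STCS sstables land in L0, and heavily overlapping sstables make the initial leveling slow.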