If there are a lot of droppable tombstones, you could also run User Defined
Compaction on that SSTable (and on others).

This blog post explains it well:
http://thelastpickle.com/blog/2016/10/18/user-defined-compaction.html
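
On 3.0.x this has to be triggered over JMX; a rough sketch with jmxterm
(jar name, port and SSTable file name below are placeholders, not taken from
your cluster):

  java -jar jmxterm-1.0-uber.jar -l localhost:7199
  $> run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction mc-1234-big-Data.db

Newer releases (3.4+) also expose this as "nodetool compact --user-defined <sstable>".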

On Fri, Aug 31, 2018 at 12:04 AM Mohamadreza Rostami <
mohamadrezarosta...@gmail.com> wrote:

> Hi, dear Vitali,
> The best option for you is to migrate the data to a new table and change
> the partition key pattern so the data gets a better distribution and your
> SSTables become smaller. But if your data already has a good distribution
> and is really big, you need to add new servers to your datacenter. Changing
> the compaction strategy carries some risk.
>
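
If re-keying is the route, a rough sketch of what the new table could look
like (keyspace, table and column names and the bucket scheme are made up,
just to illustrate splitting overly wide partitions):

  CREATE TABLE my_ks.events_v2 (
      sensor_id text,
      bucket    int,        -- e.g. day number or hash(sensor_id) % N, keeps each partition bounded
      ts        timestamp,
      value     double,
      PRIMARY KEY ((sensor_id, bucket), ts)
  );
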
> > On Shahrivar 8, 1397 AP, at 19:54, Jeff Jirsa <jji...@gmail.com> wrote:
> >
> > Either of those are options, but there’s also sstablesplit to break it
> > up a bit
> >
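
For the archives: sstablesplit has to be run while the node is stopped;
something like this (path and target size in MB are illustrative):

  # stop Cassandra on the node first, then split the oversized file offline
  sstablesplit --no-snapshot -s 50 /var/lib/cassandra/data/my_ks/my_table-*/mc-1234-big-Data.db
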
> > Switching to LCS can be a problem depending on how many sstables/overlaps
> > you have
> >
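
For completeness, the switch itself is just a schema change (table name and
sstable_size_in_mb below are examples), so the risk Jeff mentions is the
re-levelling compaction load rather than the command:

  ALTER TABLE my_ks.my_table
    WITH compaction = {'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160};
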
> > --
> > Jeff Jirsa
> >
> >
> >> On Aug 30, 2018, at 8:05 AM, Vitali Dyachuk <vdjat...@gmail.com> wrote:
> >>
> >> Hi,
> >> Some of the sstables got too big (100 GB and more), so they are not
> >> compacting any more and some of the disks are running out of space. I'm
> >> running C* 3.0.17, RF=3, with 10 disks/JBOD and STCS.
> >> What are my options? Completely delete all data on this node and rejoin
> >> it to the cluster, or change the compaction strategy to LCS and then run
> >> repair?
> >> Vitali.
> >>