I solved this problem with the compaction sub-properties
(unchecked_tombstone_compaction, tombstone_threshold,
tombstone_compaction_interval).
It took time, but eventually the two datacenters were balanced again.
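For reference, the change was along these lines. This is only a minimal sketch: the keyspace/table name, the window settings, and the exact option values are illustrative (roughly the defaults), not necessarily what I ended up with.

-- unchecked_tombstone_compaction: allow single-sstable tombstone compactions
--   even when the sstable overlaps with others
-- tombstone_threshold: fraction of droppable tombstones that triggers one (0.2 is the default)
-- tombstone_compaction_interval: minimum seconds between such compactions (86400 is the default)
ALTER TABLE my_keyspace.my_table
  WITH compaction = {
    'class': 'TimeWindowCompactionStrategy',
    'compaction_window_unit': 'DAYS',
    'compaction_window_size': '1',
    'unchecked_tombstone_compaction': 'true',
    'tombstone_threshold': '0.2',
    'tombstone_compaction_interval': '86400'
  };

Note that setting compaction this way replaces the whole compaction options map, so the existing window settings have to be re-stated.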
Thank you.
> On 24 Dec 2018, at 3:48 PM, Eunsu Kim wrote:
Oh, I’m sorry.
It is marked as included in 3.11.1.
I got confused by the other comments in the middle of the page.
However, I am still not sure what to do based on this page.
> On 24 Dec 2018, at 3:35 PM, Eunsu Kim wrote:
Thank you for your response.
The patch from the issue page you linked may not be included in 3.11.3.
If I run repair -pr on all nodes, will both datacenters use the same amount of disk?
> On 24 Dec 2018, at 2:25 PM, Jeff Jirsa wrote:
Seems like this is getting asked more and more; that’s unfortunate. Wish I had
time to fix this by making flush smarter or having TWCS split old data, but I don’t.
You can search the list archives for more examples, but what’s probably
happening is that you have overlapping sstables, which prevents TWCS from
dropping fully expired sstables.
I’m using TimeWindowCompactionStrategy.
All consistency levels are ONE.
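In case it helps, the tables in question look roughly like this. This is just a sketch: the names, columns, and window settings are made up for illustration, and in practice the 14-day TTL may be applied per write rather than via default_time_to_live.

CREATE TABLE my_keyspace.metrics (
    sensor_id text,
    ts timestamp,
    value double,
    PRIMARY KEY (sensor_id, ts)
) WITH compaction = {
      'class': 'TimeWindowCompactionStrategy',
      'compaction_window_unit': 'DAYS',
      'compaction_window_size': '1'
  }
  AND default_time_to_live = 1209600;  -- 14 days (14 * 86400 seconds)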
> On 24 Dec 2018, at 2:01 PM, Jeff Jirsa wrote:
What compaction strategy are you using?
What consistency level do you use on writes? Reads?
--
Jeff Jirsa
> On Dec 23, 2018, at 11:53 PM, Eunsu Kim wrote:
>
> Merry Christmas
>
> The Cassandra cluster I operate consists of two datacenters.
>
> Most data has a TTL of 14 days and stores on