Indeed, sorry. I'm subscribed to both lists, so I missed which one this was.
Sent from my iPhone
> On 11 May 2017, at 19:56, Michael Kjellman wrote:
>
> This discussion should be on the C* user mailing list. Thanks!
>
> best,
> kjellman
>
>> On May 11, 2017, at 10:53 AM, Oskar Kjellin wrote:
>>
>> That seems way too low. […]
This discussion should be on the C* user mailing list. Thanks!
best,
kjellman
> On May 11, 2017, at 10:53 AM, Oskar Kjellin wrote:
>
> That seems way too low. Depending on what type of disk you have, it should be
> closer to 100-200 MB/s.
> That's probably causing your problems. It would still take a while for you to
> compact all your data, though.
Hi Oskar,
Thanks for the response.

Yes, we can see a lot of compaction threads. We are loading around 400 GB of
data per node on a 3-node Cassandra cluster.
Throttling was set to write around 7k TPS per node. The job ran fine for 2 days
and then we started getting mutation drops, longer GCs, and very high load.
*nodetool getcompactionthroughput*
./nodetool getcompactionthroughput
Current compaction throughput: 16 MB/s
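In case it is useful, the mutation drops and the compaction backlog can both be
seen from nodetool as well (a quick sketch; the CompactionExecutor pool is near
the top of the tpstats output, and dropped MUTATION counts are in the table at
the bottom):

./nodetool tpstats
./nodetool compactionstats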
Regards,
Varun Saluja
On 11 May 2017 at 23:18, varun saluja wrote:
> Hi,
>
> PFB results for same. Numbers are scary here.
>
> [root@WA-CASSDB2 bin]# ./nodetool compactionstats
> pending tasks: 137 […]
Hi,
PFB results for same. Numbers are scary here.
[root@WA-CASSDB2 bin]# ./nodetool compactionstats
pending tasks: 137
   compaction type   keyspace   table    completed     total   unit   progress
        Compaction     system   hints   5762711108   837522…
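The backlog can be watched over time with something like this (the 60-second
interval is arbitrary):

watch -n 60 ./nodetool compactionstats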
That seems way too low. Depending on what type of disk you have, it should be
closer to 100-200 MB/s.
That's probably causing your problems. It would still take a while for you to
compact all your data, though.
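If the disks can take it, you can raise the throttle on a live node without a
restart; a rough sketch, assuming 128 MB/s is a sensible target for your
hardware:

./nodetool setcompactionthroughput 128
./nodetool getcompactionthroughput

(A value of 0 disables the throttle entirely. Set
compaction_throughput_mb_per_sec in cassandra.yaml too so the change survives
a restart.)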
Sent from my iPhone
> On 11 May 2017, at 19:50, varun saluja wrote:
>
> nodetool getcompactionthroughput […]
What does nodetool compactionstats show?
I meant compaction throttling: nodetool getcompactionthroughput.
> On 11 May 2017, at 19:41, varun saluja wrote:
>
> Hi Oskar,
>
> Thanks for response.
>
> Yes, we can see a lot of compaction threads. We are loading around 400 GB of
> data per node on a 3-node Cassandra cluster. […]
Do you have a lot of compactions going on? It sounds like you might've built up
a huge backlog. Is your throttling configured properly?
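A quick way to check, assuming the default config location (adjust the path
for your install):

grep -E 'compaction_throughput_mb_per_sec|concurrent_compactors' /etc/cassandra/cassandra.yaml
./nodetool getcompactionthroughput

compaction_throughput_mb_per_sec defaults to 16 MB/s, which is usually too low
for a write-heavy load on decent disks.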
> On 11 May 2017, at 18:50, varun saluja wrote:
>
> Hi Experts,
>
> Seeking your help on a production issue. We were running a high
> write-intensive job on our 3-node Cassandra cluster (v2.1.7). […]
Hi Experts,
Seeking your help on a production issue. We were running a high write-intensive
job on our 3-node Cassandra cluster (v2.1.7).
TPS on the nodes was high. The job ran for more than 2 days, and thereafter the
load average on one of the nodes increased to a very high number (loadavg: 29).
System log repo