The repair results are as follows (we ran it on Friday): Cannot proceed on
repair because a neighbor (/192.168.61.201) is dead: session failed
But to be honest, the neighbor did not die. The repair seemed to trigger a series
of full GC events on the initiating node. The results from the logs are:
[2015-02-20 16:4
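A quick way to confirm long GC pauses on the initiating node is to grep the
system log for GCInspector entries (this assumes the default log location;
adjust the path for your install):

  grep -i "GCInspector" /var/log/cassandra/system.log | tail -20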
Hi,
2.1.3 is now the official latest release - I checked this morning and
got this nice surprise. Now it's update time - thanks to all the guys
involved; if I meet any of you, there's a beer from me :-)
The changelist is rather long:
https://git1-us-west.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=C
compactions must decrease as well...
>>>>> Cheers,
>>>>>
>>>>> Roni Balthazar
>>>>>
>>>>> On Wed, Feb 18, 2015 at 12:39 PM, Ja Sam wrote:
>>>> > We used Leveled compaction before. Last week we ALTERed the tables to
>>>> > STCS, because the guys from DataStax suggested that we should not use
>>>> > Leveled compaction since we don't have SSDs. After this change we did not
>>>> > run any repair. Anyway, I don't think it will change anything in the
>>>> > SSTable count - if I am wrong please let me know.
>>>> >
>>>> > 2) I did this. My tables are 99% write only. It is an audit system.
>>>> >
>>>> > 3) Yes, I am using default values.
>>>> >
>>>> > 4) In both operations I am using LOCAL_QUORUM.
disk space available on each node. Of course, it will
also make sure data is replicated to the right nodes in the process.
[]s
From: user@cassandra.apache.org
Subject: Re: Many pending compactions
Can you explain to me what the correlation is between growing SSTables and repair?
I was sure, until
I am almost sure that the READ timeouts happen because of too many SSTables.
Anyway, first I would like to fix the many pending compactions. I still
don't know how to speed them up.
On Wed, Feb 18, 2015 at 2:49 PM, Roni Balthazar wrote:
Are you running repairs within gc_grace_seconds? (default is 10 days)
http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_repair_nodes_c.html
Double check if you set cold_reads_to_omit to 0.0 on tables with STCS
that you do not read often.
Are you using default values for
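For completeness, cold_reads_to_omit is a sub-option of the STCS compaction
settings, so on 2.1 it can be set per table roughly like this (hypothetical
keyspace/table name):

  cqlsh -e "ALTER TABLE my_keyspace.audit_log WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'cold_reads_to_omit': 0.0};"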
I don't have problems with DC_B (the replica); only in DC_A (my system writes
only to it) do I have read timeouts.
I checked the SSTable count in OpsCenter and I have:
1) in DC_A roughly the same (+-10%) for the last week, with a small increase over
the last 24h (it is more than 15000-2 SSTables depending on the node)
2) in DC_B last 24h
Hi,
You can check if the number of SSTables is decreasing. Look for the
"SSTable count" information of your tables using "nodetool cfstats".
The compaction history can be viewed using "nodetool
compactionhistory".
About the timeouts, check this out:
http://www.datastax.com/dev/blog/how-cassandra-
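Concretely, those checks look something like this (the keyspace name is just an
example; run them on each node):

  nodetool cfstats my_keyspace | grep -i "SSTable count"
  nodetool compactionhistory | tail -20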
Hi,
Thanks for your "tip" it looks that something changed - I still don't know
if it is ok.
My nodes started to do more compaction, but it looks that some compactions
are really slow.
In IO we have idle, CPU is quite ok (30%-40%). We set compactionthrouput to
999, but I do not see difference.
Can
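For reference, the compaction throttle can be changed at runtime with nodetool;
the change is not persisted across restarts unless compaction_throughput_mb_per_sec
is also updated in cassandra.yaml (16 MB/s is the stock default, 0 disables
throttling entirely):

  nodetool setcompactionthroughput 999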
Hi,
Yes... I had the same issue, and setting cold_reads_to_omit to 0.0 was
the solution...
The number of SSTables decreased from many thousands to a number below
a hundred, and the SSTables are now much bigger, at several gigabytes
each (most of them).
Cheers,
Roni Balthazar
On Tue, Feb 17, 2015 at
After some diagnostics (we haven't set cold_reads_to_omit yet): compactions
are running, but VERY slowly, with "idle" IO.
We have a lot of "Data files" in Cassandra. In DC_A it is about ~12
(only xxx-Data.db files); DC_B has only ~4000.
I don't know if this changes anything, but:
1) in DC_A avg size of D
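One way to count the on-disk SSTables per node is simply (assuming the default
data directory; adjust if yours differs):

  find /var/lib/cassandra/data -name "*-Data.db" | wc -l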
I set setcompactionthroughput to 999 permanently and it doesn't change
anything. IO is still the same. CPU is idle.
On Tue, Feb 17, 2015 at 1:15 AM, Roni Balthazar wrote:
Hi,
You can run "nodetool compactionstats" to view statistics on compactions.
Setting cold_reads_to_omit to 0.0 can help to reduce the number of
SSTables when you use Size-Tiered compaction.
You can also create a cron job to increase the value of
setcompactionthroughput during the night or when yo
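A minimal crontab sketch of that idea (the times, the value of 999 and the
nodetool path are just examples; 16 MB/s is the stock default to restore):

  0 22 * * * /usr/bin/nodetool setcompactionthroughput 999
  0 6 * * * /usr/bin/nodetool setcompactionthroughput 16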
One thing I do not understand: in my case compaction is running
permanently. Is there a way to check which compactions are pending? The only
information I get is the total count.
On Monday, February 16, 2015, Ja Sam wrote:
Of course I made a mistake. I am using 2.1.2. Anyway, a nightly build is
available from
http://cassci.datastax.com/job/cassandra-2.1/
I read about cold_reads_to_omit. It looks promising. Should I also set
compaction throughput?
p.s. I am really sad that I didn't read this before:
https://engineering.eve
Hi, 100% in agreement with Roland.
The 2.1.x series is a pain! I would never recommend the current 2.1.x series
for production.
Clocks are a pain, and check your connectivity! Also check tpstats to see if
your threadpools are being overrun.
Regards,
Carlos Juzarte Rolo
Cassandra Consultant
Pythian -
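On the tpstats point, it is easy to keep an eye on the thread pools over time;
a Pending count that keeps growing, or any Dropped messages, are the warning
signs:

  watch -n 10 nodetool tpstats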
Hi,
1) Currently Cassandra 2.1.3; it was upgraded from 2.1.0 (suggested by Al
Tobey from DataStax)
7) minimal reads (usually none, sometimes a few)
those two points keep me repeating an answer I got. First, where did you
get 2.1.3 from? Maybe I missed it; I will have a look. But if it is
2.1.2 whi
*Environment*
1) Currently Cassandra 2.1.3; it was upgraded from 2.1.0 (suggested by Al
Tobey from DataStax)
2) not using vnodes
3) Two data centres: 5 nodes in one DC (DC_A), 4 nodes in the second DC (DC_B)
4) each node is set up on a physical box with two 16-core HT Xeon
processors (E5-2660), 64GB RAM an