Hi!

@Paulo
Yes, we are using vnodes, 256 per node, and we have 6 nodes in the cluster.
RF=3. The data is inserted using JMeter with consistency=LOCAL_ONE.
Because this is a test, we generate our data and insert it using JMeter.

After the repair finished, all the nodes seemed to be frozen: no compactions
were running, even though nodetool tpstats reported pending tasks for
CompactionExecutor:
CompactionExecutor    2        83           1424         0                 0

After I restarted Cassandra, the compactions started to run.
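For what it's worth, the Pending count in the tpstats line above is the third column, so it can be watched from a script. A rough sketch (the sample line is hard-coded from the numbers above; the column positions assume the standard tpstats layout, so adjust if your version differs):

```shell
# Sketch: extract the Active/Pending columns for CompactionExecutor from
# `nodetool tpstats` output. A saved sample line stands in for the live
# command here; in practice you would pipe `nodetool tpstats | grep Compaction`.
line="CompactionExecutor    2    83    1424    0    0"
active=$(echo "$line" | awk '{print $2}')
pending=$(echo "$line" | awk '{print $3}')
echo "active=$active pending=$pending"
```

If pending stays high while active stays at 0 for a long time, that matches the "frozen" behaviour described above.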

Saludos

Jean Carlo

"The best way to predict the future is to invent it" Alan Kay

On Thu, Feb 11, 2016 at 8:42 AM, Marcus Eriksson <krum...@gmail.com> wrote:

> The reason for this is probably
> https://issues.apache.org/jira/browse/CASSANDRA-10831 (which only affects
> 2.1)
>
> So, if you had problems with incremental repair and LCS before, upgrade to
> 2.1.13 and try again
>
> /Marcus
>
> On Wed, Feb 10, 2016 at 2:59 PM, horschi <hors...@gmail.com> wrote:
>
>> Hi Jean,
>>
>> we had the same issue, but on SizeTieredCompaction. During repair the
>> number of SSTables and pending compactions were exploding.
>>
>> It not only affected latencies; at some point Cassandra ran out of heap.
>>
>> After the upgrade to 2.2 things got much better.
>>
>> regards,
>> Christian
>>
>>
>> On Wed, Feb 10, 2016 at 2:46 PM, Jean Carlo <jean.jeancar...@gmail.com>
>> wrote:
>> > Hi Horschi !!!
>> >
>> > I have the 2.1.12, but I think it is something related to the Leveled
>> > compaction strategy. It is impressive that we went from 6 SSTables to
>> > 3k SSTables. I think this will affect latency in production because of
>> > the number of compactions going on.
>> >
>> >
>> >
>> > Best regards
>> >
>> > Jean Carlo
>> >
>> > "The best way to predict the future is to invent it" Alan Kay
>> >
>> > On Wed, Feb 10, 2016 at 2:37 PM, horschi <hors...@gmail.com> wrote:
>> >>
>> >> Hi Jean,
>> >>
>> >> which Cassandra version do you use?
>> >>
>> >> Incremental repair got much better in 2.2 (for us at least).
>> >>
>> >> kind regards,
>> >> Christian
>> >>
>> >> On Wed, Feb 10, 2016 at 2:33 PM, Jean Carlo <jean.jeancar...@gmail.com>
>> >> wrote:
>> >> > Hello guys!
>> >> >
>> >> > I am testing incremental repair in my Cassandra cluster. I am running
>> >> > my test on these tables:
>> >> >
>> >> > CREATE TABLE pns_nonreg_bench.cf3 (
>> >> >     s text,
>> >> >     sp int,
>> >> >     d text,
>> >> >     dp int,
>> >> >     m map<text, text>,
>> >> >     t timestamp,
>> >> >     PRIMARY KEY (s, sp, d, dp)
>> >> > ) WITH CLUSTERING ORDER BY (sp ASC, d ASC, dp ASC)
>> >> >     AND compaction = {'class':
>> >> > 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
>> >> >     AND compression = {'sstable_compression':
>> >> > 'org.apache.cassandra.io.compress.SnappyCompressor'};
>> >> >
>> >> > CREATE TABLE pns_nonreg_bench.cf1 (
>> >> >     ise text PRIMARY KEY,
>> >> >     int_col int,
>> >> >     text_col text,
>> >> >     ts_col timestamp,
>> >> >     uuid_col uuid
>> >> > ) WITH bloom_filter_fp_chance = 0.01
>> >> >     AND compaction = {'class':
>> >> > 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy'}
>> >> >     AND compression = {'sstable_compression':
>> >> > 'org.apache.cassandra.io.compress.SnappyCompressor'};
>> >> >
>> >> > table cf1
>> >> >         Space used (live): 665.7 MB
>> >> > table cf2
>> >> >         Space used (live): 697.03 MB
>> >> >
>> >> > It happens that when I do repair -inc -par on these tables, cf2 got a
>> >> > peak of 3k SSTables. When the repair finishes, it takes 30 min or more
>> >> > to finish all the compactions and return to 6 SSTables.
>> >> >
>> >> > I am a little concerned about whether this will happen in production.
>> >> > Is it normal?
>> >> >
>> >> > Saludos
>> >> >
>> >> > Jean Carlo
>> >> >
>> >> > "The best way to predict the future is to invent it" Alan Kay
>> >
>> >
>>
>
>
