Hi @Paulo!
Yes we are using vnodes, 256 per node and we have 6 nodes in the cluster.
RF=3. The data is inserted using jmeter with consistency=LOCAL_ONE.
Because this is a test, we generate our data and insert it using jmeter.
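Just as a sketch, the setup is roughly equivalent to the following (the keyspace statement and the insert are approximations of what jmeter does, not our exact statements; the real keyspace may use NetworkTopologyStrategy):

# keyspace with RF=3, as described above (sketch only)
cqlsh -e "CREATE KEYSPACE IF NOT EXISTS pns_nonreg_bench
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};"

# jmeter does the writes at LOCAL_ONE (set on the client side);
# a single equivalent row would be e.g.
cqlsh -e "INSERT INTO pns_nonreg_bench.cf3 (s, sp, d, dp, t)
  VALUES ('s1', 1, 'd1', 1, dateOf(now()));"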
After the repair finished, all the nodes seemed to be frozen,
The reason for this is probably
https://issues.apache.org/jira/browse/CASSANDRA-10831 (which only affects
2.1)
So, if you had problems with incremental repair and LCS before, upgrade to
2.1.13 and try again
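If you want to re-test after the upgrade, the same repair can be re-run with something like this (keyspace name taken from this thread):

# incremental, parallel repair of the keyspace, as used earlier in the thread
nodetool repair -inc -par pns_nonreg_bench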
/Marcus
On Wed, Feb 10, 2016 at 2:59 PM, horschi wrote:
> Hi Jean,
>
> we had the same
Are you using vnodes by any chance? If so, how many? How many nodes and
what's the replication factor? How was data inserted (at what consistency
level)?
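Most of that can be checked quickly with something like the following (the cassandra.yaml path below is just the common package default, adjust to your installation):

# nodes in the cluster and their ownership for the keyspace
nodetool status pns_nonreg_bench
# number of vnodes configured on a node
grep '^num_tokens' /etc/cassandra/cassandra.yaml
# replication factor of the keyspace
cqlsh -e "DESCRIBE KEYSPACE pns_nonreg_bench" | grep replication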
Streaming might create a large number of sstables with vnodes (see
CASSANDRA-10495), so in case data is inconsistent between nodes (detected
dur
Hello Kai
This is for *cf3*
nodetool cfstats pns_nonreg_bench.cf3 -H
Keyspace: pns_nonreg_bench
Read Count: 23594
Read Latency: 1.2980987539204882 ms.
Write Count: 148161
Write Latency: 0.04608940274431193 ms.
Pending Flushes: 0
Table: cf3
SSTable count: 11489
Jean,
What does your cfstats look like? Especially the "SSTables in each level" line.
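Something along these lines should show it (keyspace/table name taken from your first mail):

# overall and per-level sstable counts for the table in question
nodetool cfstats pns_nonreg_bench.cf3 | grep -E 'SSTable count|SSTables in each level'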
On Wed, Feb 10, 2016 at 8:33 AM, Jean Carlo
wrote:
> Hello guys!
>
> I am testing incremental repair in my Cassandra cluster. I am doing my test
> over these tables
>
> *CREATE TABLE pns_nonreg_bench.cf3* (
> s tex
Hello Horschi,
Yes I understand. Thx
Best regards
Jean Carlo
"The best way to predict the future is to invent it" Alan Kay
On Wed, Feb 10, 2016 at 3:00 PM, horschi wrote:
> btw: I am not saying incremental Repair in 2.1 is broken, but ... ;-)
>
> On Wed, Feb 10, 2016 at 2:59 PM, horschi
btw: I am not saying incremental Repair in 2.1 is broken, but ... ;-)
On Wed, Feb 10, 2016 at 2:59 PM, horschi wrote:
> Hi Jean,
>
> we had the same issue, but on SizeTieredCompaction. During repair the
> number of SSTables and pending compactions were exploding.
>
> It not only affected latencie
Hi Jean,
we had the same issue, but on SizeTieredCompaction. During repair the
number of SSTables and pending compactions were exploding.
It not only affected latencies; at some point Cassandra ran out of heap.
After the upgrade to 2.2 things got much better.
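If it helps, the pending compactions and heap usage can be watched on each node while the repair runs, for example:

# pending compaction tasks during the repair
watch -n 30 nodetool compactionstats
# heap usage as reported by the node
nodetool info | grep -i heap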
regards,
Christian
On Wed, Feb 10
Hi Horschi !!!
I have 2.1.12, but I think it is something related to the Level compaction
strategy. It is impressive that we went from 6 sstables to 3k sstables.
I think this will affect the latency in production because of the number of
compactions going on.
Best regards
Jean Carlo
"The best wa
Hi Jean,
which Cassandra version do you use?
Incremental repair got much better in 2.2 (for us at least).
kind regards,
Christian
On Wed, Feb 10, 2016 at 2:33 PM, Jean Carlo wrote:
> Hello guys!
>
> I am testing incremental repair in my Cassandra cluster. I am doing my test over
> these tables
>
>
Correction:
*table cf3*
*Space used (live): 697.03 MB*
It happens that when I run repair -inc -par on these tables, *cf3 hits a
peak of 3k sstables*. When the repair finishes, it takes 30 min or more to
finish all the compactions and get back to 6 sstables.
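An easy way to watch that is to count the data files directly (the data directory below is the default one, ours may differ):

# number of sstable data files for cf3 on disk
find /var/lib/cassandra/data/pns_nonreg_bench -path '*cf3*' -name '*Data.db' | wc -l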
Regards
Jean Carlo
"The best wa
Hello guys!
I am testing incremental repair in my Cassandra cluster. I am doing my test over
these tables
*CREATE TABLE pns_nonreg_bench.cf3* (
s text,
sp int,
d text,
dp int,
m map,
t timestamp,
PRIMARY KEY (s, sp, d, dp)
) WITH CLUSTERING ORDER BY (sp ASC, d ASC, dp ASC)