Hello,
Yes, we encountered the same issue. See CASSANDRA-13125, CASSANDRA-12144, and
CASSANDRA-14008.
The scrub helped us, but it took almost 4-5 hours on one table of 145 GB per
node. We are still scrubbing our tables to resolve this issue. The next step is
to upgrade to the next version as well.
Thank you
I think I narrowed down the constant Full GC to an accumulation of large
partitions as a result of the upgrade to 3.11.
How the large partitions were produced may be related to
https://issues.apache.org/jira/browse/CASSANDRA-11887
I started to see duplicate records in one of the tables, which probably
I upgraded from 2.1 to 3.11 and started to get a huge number of Digest
mismatch errors in the DEBUG log. I ran a full repair on the cluster, but
it did not seem to cut back on these.
My consistency is QUORUM, so my guess is this error is on reads, but would
this also affect my writes to all replicas?
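On the QUORUM question: with RF = 3 a quorum is 2 replicas, and any 2-replica write set shares at least one replica with any 2-replica read set, so a quorum read sees the latest quorum write. A minimal sketch of that overlap check (replica names are illustrative):

```python
from itertools import combinations

REPLICAS = {"r1", "r2", "r3"}        # RF = 3, names are placeholders
QUORUM = len(REPLICAS) // 2 + 1      # quorum of 3 replicas is 2

# Every possible quorum write set intersects every possible quorum read set.
overlap = all(
    set(w) & set(r)
    for w in combinations(REPLICAS, QUORUM)
    for r in combinations(REPLICAS, QUORUM)
)
print(overlap)  # True
```

Digest mismatches at QUORUM therefore usually point at replicas being out of sync (e.g. dropped mutations), not at the quorum math itself.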
Hi,
I have a cluster of 3 nodes using Cassandra 3.11.1, running on AWS with EBS
disks.
I am having an issue with a repair of a keyspace using the command
'nodetool repair my_ks':
Validation failed in /10.1.20.10 (progress: 0%)
[2018-08-31 15:01:30,566] Some repair failed
A TTL of 60 seconds is a small value (even smaller than the compaction window).
This means that even if all replicas are consistent, data is deleted so quickly
that results may differ even between 2 consecutive queries. How about this theory?
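The theory above can be sketched with a toy model: a row written with a 60-second TTL answers one query but is gone by the next, even though every replica agrees on the write. The function and timestamps here are illustrative, not the real read path:

```python
TTL = 60  # seconds, the small TTL discussed above

def read(row_written_at: float, now: float):
    """Return the row if its TTL has not yet expired, else None (expired)."""
    return "row" if now - row_written_at < TTL else None

write_time = 0.0
first_query = read(write_time, now=1.0)    # within the TTL -> "row"
second_query = read(write_time, now=61.0)  # 61 s later -> None, TTL expired
print(first_query, second_query)  # row None
```

So two consecutive queries disagreeing is expected behaviour here, with no replica divergence involved at all.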
The CL in your driver depends on which CL is the default for your pa
Hi Kyrylo,
I have already tried consistency QUORUM and ALL; still the same result.
The Java code that writes data to Cassandra does not set a CL; does this mean
the default CL is ONE?
The tpstats output is like below; there are some dropped mutations, but the
count doesn't grow over a very long time.
Pool Name
Looks like you're querying the table at CL = ONE, which is the default for cqlsh.
If you run cqlsh on nodeX, it doesn't mean you retrieve data from that node.
What it means is that nodeX will be the coordinator, whereas the actual data
will be retrieved from any replica, based on token range + dynamic snitch data.
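The coordinator-vs-replica routing described above can be sketched with a toy token ring (6 nodes and RF = 3, matching the cluster below; the token positions and the md5 hash are illustrative only, since Cassandra 3.x actually uses the Murmur3 partitioner):

```python
import hashlib
from bisect import bisect_right

# Toy ring: 6 nodes at evenly spaced, made-up token positions.
TOKENS = [(i * 43, f"node{i + 1}") for i in range(6)]
RF = 3  # replication factor, as in the keyspace below

def token_for(key: str) -> int:
    """Map a partition key onto the toy ring (md5 for illustration only)."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % 258

def replicas(key: str) -> list:
    """The RF nodes owning the key: the token-range owner, then the next nodes on the ring."""
    positions = [t for t, _ in TOKENS]
    start = bisect_right(positions, token_for(key)) % len(TOKENS)
    return [TOKENS[(start + i) % len(TOKENS)][1] for i in range(RF)]

# Whichever node cqlsh connects to is only the coordinator; the data
# itself comes from the replicas that own the key's token range.
coordinator = "node1"
print(coordinator, replicas("some_vin"))
```

The point of the sketch: `replicas(key)` is determined by the key's token, not by which node you happened to connect to, so the coordinator may or may not be one of the owners.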
Hi Experts,
I am using Cassandra 3.9 in a production environment. We have 6 nodes, and the
RF of the keyspace is 3. I have a table with the definition below:
CREATE TABLE nev_prod_tsp.heartbeat (
vin text PRIMARY KEY,
create_time timestamp,
pkg_sn text,
prot_ver text,
trace_time timestamp