Hi,
We have seen some odd behaviour in Cassandra 3.6.
One of our nodes went down for more than 10 hours. After that, we ran
nodetool repair multiple times, but tombstones are not getting synced
properly across the cluster. On a day-to-day basis, on expiry of every
grace period, deleted records start surfacing again.
Hi Atul,
could you be more specific on how you are running repair? What's the
precise command line for that, does it run on several nodes at the same
time, etc.?
What is your gc_grace_seconds?
Do you see errors in your logs that would be linked to repairs (validation
failure or failure to create
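For reference, a minimal sketch of how those answers can be gathered on
one node (keyspace/table names and the log path below are placeholders):

# the exact repair invocation being used so far, e.g.
nodetool repair my_keyspace

# gc_grace_seconds for the table in question
cqlsh -e "SELECT gc_grace_seconds FROM system_schema.tables WHERE keyspace_name = 'my_keyspace' AND table_name = 'my_table';"

# repair-related errors in the logs
grep -i -E 'validation|repair' /var/log/cassandra/system.log | grep -i -E 'fail|error'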
Hello,
I have created a column family for User File Management.
CREATE TABLE "UserFile" ("USERID" bigint,"FILEID" text,"FILETYPE"
int,"FOLDER_UID" text,"FILEPATHINFO" text,"JSONCOLUMN" text,PRIMARY KEY
("USERID","FILEID"));
Sample Entry
(4*003, 3f9**
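Since the column names were created with double quotes they are
case-sensitive, so any CQL against this table has to quote them as well.
A minimal sketch through cqlsh (keyspace and all values are made up):

cqlsh <<'EOF'
USE my_keyspace;  -- placeholder keyspace

-- insert one file entry for a user
INSERT INTO "UserFile" ("USERID", "FILEID", "FILETYPE", "FOLDER_UID", "FILEPATHINFO", "JSONCOLUMN")
VALUES (42, 'file-001', 1, 'folder-001', '/user/42/file-001', '{"name": "report.pdf"}');

-- read it back by full primary key
SELECT * FROM "UserFile" WHERE "USERID" = 42 AND "FILEID" = 'file-001';
EOF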
Hi Robert,
Materialized Views are regular C* tables underneath, so based on their PK
they can generate big partitions.
It is often advised to keep partition size under 100MB because larger
partitions are hard to read and compact. They usually put pressure on the
heap and lead to long GC pauses +
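A quick way to check whether a view (or its base table) has outgrown that
guideline is the per-table partition statistics, roughly (keyspace and
view names are placeholders):

# max / mean partition size in bytes
nodetool tablestats my_keyspace.my_view
# distribution of partition sizes and cell counts
nodetool tablehistograms my_keyspace my_view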
Hi,
We are not sure whether this issue is linked to that node or not. Our
application does frequent deletes and inserts.
Maybe our approach to nodetool repair is not correct. Yes, we generally
fire repair on all boxes at the same time. Till now, it was manual with the
default configuration (command: "no
Atul,
since you're using 3.6, by default you're running incremental repair, which
doesn't like concurrency very much.
Validation errors don't occur on a per-partition or per-partition-range
basis, but when you try to run both anticompaction and validation
compaction on the same SSTable.
Like adv
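If staying with manual repairs, one common workaround is to avoid running
them concurrently and to switch to full repairs, e.g. (keyspace name is a
placeholder):

# full (non-incremental) repair of this node's primary ranges
nodetool repair --full -pr my_keyspace
# let it finish before launching repair on the next node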
Thanks Alexander.
Will look into all these.
On Thu, Sep 29, 2016 at 4:39 PM, Alexander Dejanovski <
a...@thelastpickle.com> wrote:
> Atul,
>
> since you're using 3.6, by default you're running incremental repair,
> which doesn't like concurrency very much.
> Validation errors are not occurring o
Thanks!
Robert Sicoie
On Thu, Sep 29, 2016 at 12:49 PM, Alexander Dejanovski <
a...@thelastpickle.com> wrote:
> Hi Robert,
>
> Materialized Views are regular C* tables underneath, so based on their PK
> they can generate big partitions.
> It is often advised to keep partition size under 100MB be
Thanks Alexander,
After a rolling restart the blocked repair job stopped and I was able to
run repair again.
Regards,
Robert
Robert Sicoie
On Wed, Sep 28, 2016 at 6:46 PM, Alexander Dejanovski <
a...@thelastpickle.com> wrote:
> Robert,
>
> You can restart them in any order, that doesn't make a diff
Hi Alexander,
There is a compatibility issue raised against spotify/cassandra-reaper for
Cassandra version 3.x.
Is the thelastpickle/cassandra-reaper fork compatible with 3.6?
There are some suggestions mentioned by *brstgt* which we can try on our
side.
On Thu, Sep 29, 2016 at 5:42 PM, Atul Saroh
Hi Julian,
The problem with any deletes here is that you can *read* potentially many
tombstones. I mean you have two concerns:
1. How to avoid reading tombstones during a query
2. How to evict tombstones as quickly as possible to reclaim disk space
The first point is a data model consideration. Gene
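A common way to handle the first point is to bucket the data so that
queries only hit recent partitions and old, heavily deleted partitions
simply stop being read. A rough sketch of the idea (keyspace, table and
bucketing scheme are made up for illustration):

cqlsh <<'EOF'
USE my_keyspace;  -- placeholder keyspace

-- one partition per entity and per day: reads for "today" never walk
-- over tombstones left behind in older buckets
CREATE TABLE IF NOT EXISTS events_by_day (
    entity_id  text,
    day        date,
    event_time timestamp,
    payload    text,
    PRIMARY KEY ((entity_id, day), event_time)
);

-- queries always pin the bucket they want
SELECT * FROM events_by_day WHERE entity_id = 'abc' AND day = '2016-09-29';
EOF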
Hi,
@Edward > In older versions you can not control when this call will
timeout: truncate_request_timeout_in_ms has been available for many years,
starting from 1.2. Maybe you have another setting in mind?
@George Try to put the Cassandra logs in debug
Best,
Romain
On Wednesday, 28 September 2
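For completeness, both of those can be checked/changed on a node roughly
like this (the yaml path is the usual package default):

# current truncate timeout (cassandra.yaml, available since 1.2)
grep truncate_request_timeout_in_ms /etc/cassandra/cassandra.yaml
# switch the Cassandra logs to DEBUG without a restart...
nodetool setlogginglevel org.apache.cassandra DEBUG
# ...and back to INFO once done
nodetool setlogginglevel org.apache.cassandra INFO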
Atul,
our fork has been tested on 2.1 and 3.0.x clusters.
I've just tested with a CCM 3.6 cluster and it worked with no issue.
With Reaper, if you set incremental to false, it'll perform a full subrange
repair with no anticompaction.
You'll see this message in the logs: INFO [AntiEntropyStage:1
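If it helps reproducing that test locally, a CCM cluster like the one
mentioned can be thrown together roughly like this (cluster name and size
are arbitrary):

pip install ccm
# local 3-node Cassandra 3.6 cluster, started right away
ccm create reaper-test -v 3.6 -n 3 -s
ccm status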
Yes, we are using the token-aware policy but not shuffling replicas.
On Wed, Sep 21, 2016 at 10:04 AM, Romain Hardouin
wrote:
> Hi,
>
> Do you shuffle the replicas with TokenAwarePolicy?
> TokenAwarePolicy(LoadBalancingPolicy childPolicy, boolean
> shuffleReplicas)
>
> Best,
>
> Romain
> On Tuesday, 20 Septemb
Hi, we are taking backups using nodetool snapshots, but I occasionally see
that my script pauses while taking a snapshot of a CF. Is this because,
while the snapshot is being taken, the SSTables get compacted into a
different one, so it can't find the particular SSTable it is trying to
snapshot?
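For what it's worth, a snapshot is just hard links to the SSTables that
exist at that instant, so a backup script usually copies from the
snapshots directory rather than from the live data files, roughly like
this (keyspace name and paths are placeholders):

TAG=backup_$(date +%Y%m%d)
# near-instant: hard links the current SSTables of the keyspace
nodetool snapshot -t "$TAG" my_keyspace
# copy each table's snapshot directory; files later removed by
# compaction no longer matter because the hard links keep them alive
for d in /var/lib/cassandra/data/my_keyspace/*/snapshots/"$TAG"; do
    table_dir=$(basename "$(dirname "$(dirname "$d")")")
    mkdir -p /backups/"$TAG/$table_dir"
    cp -a "$d"/. /backups/"$TAG/$table_dir"/
done
# drop the snapshot (and its hard links) once the copy is off the node
nodetool clearsnapshot -t "$TAG" my_keyspace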
I am seeing mutation drops on one of my nodes in the cluster. The load is
low, with no GC pauses and no wide partitions either, so how can I debug
the reason for the mutation drops?
I ran nodetool tpstats: only one node out of 9 is dropping mutations; the
other 8 nodes in the cluster have 0 mutation drops.
How can
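A few things usually worth checking on the node that drops, as a starting
point (the log path is the usual default):

# which message types are dropped, and the thread pool backlog
nodetool tpstats
# the dropped-message log lines include the internal queue latencies
grep -i dropped /var/log/cassandra/system.log
# pending streams / hints that might point at the culprit
nodetool netstats
# compare coordinator latencies with the healthy nodes
nodetool proxyhistograms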
Romain,
I was trying what you mentioned, as below:
a. nodetool stop VALIDATION
b. echo run -b org.apache.cassandra.db:type=StorageService \
     forceTerminateAllRepairSessions | \
     java -jar /tmp/jmxterm/jmxterm-1.0-alpha-4-uber.jar -l 127.0.0.1:7199
to stop a seemingly forever-going repair, but seeing rea
The Cassandra team is pleased to announce the release of Apache
Cassandra version 3.8.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.
http://cassandra.apache.org/
Downloads of source and
The Cassandra team is pleased to announce the release of Apache
Cassandra version 3.9.
Apache Cassandra is a fully distributed database. It is the right choice
when you need scalability and high availability without compromising
performance.
http://cassandra.apache.org/
Downloads of source and
So how does documentation work? Example: I'm interested in Change Data
Capture.
*I do appreciate the work done.
On Thu, Sep 29, 2016 at 11:02 PM, Michael Shuler
wrote:
> The Cassandra team is pleased to announce the release of Apache
> Cassandra version 3.9.
>
> Apache Cassandra is a fully dist
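On the CDC example specifically, as far as I can tell it comes down to a
yaml switch plus a per-table flag, something like this (paths are the
usual defaults, keyspace/table names are placeholders):

# cassandra.yaml on every node: cdc_enabled must be set to true
grep cdc_enabled /etc/cassandra/cassandra.yaml
# then flag the table; its commit log segments are kept under cdc_raw
cqlsh -e "ALTER TABLE my_keyspace.my_table WITH cdc = true;"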
I have dc1 and dc2.
I want to keep a keyspace only on dc2.
But I only have my app on dc1.
And I want to write to dc1 (lower latency), which will not keep data
locally but just push it to dc2,
while reads will only go to dc2.
Since my app is mostly writes, my app ~will be faster while not having
On 09/29/2016 04:08 PM, Dorian Hoxha wrote:
> So how does documentation work? Example: I'm interested in Change Data
> Capture.
The documentation is in-tree, under doc/source, so create a patch and
upload it to a JIRA, just as any source change. :)
The docs on patches do have testing details, so
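In concrete terms that workflow is roughly (the page name below is just a
placeholder, the JIRA is whichever ticket you open for it):

# the in-tree docs are reStructuredText under doc/source
git clone https://github.com/apache/cassandra.git
cd cassandra
$EDITOR doc/source/operating/some_page.rst   # hypothetical page name
git diff > cassandra-docs.patch              # attach this to the JIRA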
You can do something like this, though your use of terminology like "queue"
really does not apply.
You can setup your keyspace with replication in only one data center.
CREATE KEYSPACE NTSkeyspace WITH REPLICATION = { 'class' :
'NetworkTopologyStrategy', 'dc2' : 3 };
This will make the NTSkeyspace
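One subtlety with that layout: the dc1 coordinators hold no replicas, so
LOCAL_* consistency levels won't work from dc1. A sketch of a write that
does (node address and table are made up):

cqlsh dc1-node-address <<'EOF'
-- assumes a table NTSkeyspace.events (id uuid PRIMARY KEY, payload text)
-- ONE rather than LOCAL_ONE: the dc1 coordinator just forwards the
-- mutation to the dc2 replicas and waits for one ack
CONSISTENCY ONE;
INSERT INTO NTSkeyspace.events (id, payload) VALUES (uuid(), 'example');
EOF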
Thanks!
For subrange repairs I have seen two approaches. For our specific requirement,
we want to do repairs on a small set of keyspaces.
1. Thrift describe_local_ring(keyspace), parse and get token ranges for a
given node, split token ranges for given keyspace + table using
describe_
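Whichever way the token ranges are obtained, each resulting slice then
gets repaired with the start/end token options, roughly (tokens, keyspace
and table names are placeholders):

# full repair of a single token subrange of one table
nodetool repair --full -st -9223372036854775808 -et -4611686018427387904 my_keyspace my_table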
Hi,
We are using Cassandra 3.6 and I have been facing this issue for a while.
When I connect to a Cassandra cluster using cqlsh and then disconnect the
network while keeping cqlsh open, I get really high CPU utilization on the
client from the cqlsh Python process. On network reconnect, things return
to normal.
On