> My guess is the initial query caused a read repair, so on subsequent
> queries there were replicas of the data on every node and it was still
> returned at consistency ONE.
got it
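In cqlsh terms, the pattern being described above is roughly this (keyspace,
table and key are made-up placeholders):

    # a first read at CL ONE may trigger a read repair that copies the row to
    # the remaining replicas; later CL ONE reads then find it on every node
    cqlsh -e "CONSISTENCY ONE; SELECT * FROM my_ks.my_table WHERE id = 42;"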
> There are a number of ways the data could have become inconsistent in the
> first place - e.g. badly overloaded or do
If you have successfully run a repair between the initial insert and running
the first select, then that should have ensured that all replicas are there.
Are you sure your repairs are completing successfully?
To check whether all replicas are being written during periods of high
load you can monit
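Two quick checks that may cover both points above (this is an assumption about
what the cut-off sentence was going to suggest; "my_ks" is a placeholder):

    # verify a full repair of the keyspace actually completes
    nodetool repair -full my_ks

    # look for writes dropped under load, which would leave replicas missing
    nodetool tpstats                                 # check the Dropped count for MUTATION
    grep -i dropped /var/log/cassandra/system.log    # default log path, may differ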
Thanks guys!
On Fri, Apr 26, 2019 at 1:17 PM Alain RODRIGUEZ wrote:
> Hello Ivan,
>
> Is there a way I can do one command to back up and one to restore a backup?
>
> Handling backups and restores automatically is not an easy task to work on.
> It's not straightforward. But it's doable and som
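For a single node, the building blocks are roughly the following; the hard part
the message refers to is orchestrating this safely across a whole cluster
(keyspace/table names are placeholders, paths are the defaults and may differ):

    # backup: take a snapshot (cheap hard links) of a keyspace
    nodetool snapshot -t my_backup my_ks

    # snapshots end up under each table's data directory, e.g.
    #   /var/lib/cassandra/data/my_ks/<table>/snapshots/my_backup/
    # restore: copy those SSTables back into the table directory, then
    nodetool refresh my_ks my_table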
Hi Experts,
I have a Cassandra cluster running with 5 nodes. I was creating a new
Cassandra cluster, but for some reason one of the nodes intended for the new
cluster had the same cassandra.yaml file as the existing cluster. This
resulted in the new node joining the existing cluster, making total no.
I would just stop the service on the joining node and then delete its data,
commit logs and saved caches.
After stopping the node while it is joining, the cluster will remove it from
the list (i.e. from nodetool status) without the need to decommission it.
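Concretely, on the node that joined by mistake, that would look something like
this (paths are the package-install defaults and may differ in your setup):

    sudo systemctl stop cassandra      # or: sudo service cassandra stop

    # remove what was streamed in, plus commit log and saved caches
    sudo rm -rf /var/lib/cassandra/data/* \
                /var/lib/cassandra/commitlog/* \
                /var/lib/cassandra/saved_caches/*

    # fix cluster_name/seeds in cassandra.yaml before starting it in the new cluster
    nodetool status                    # run on an existing node to confirm it is gone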
On Tue, Apr 30, 2019 at 2:44 PM Akshay Bhardwaj <
aks
Just stop the server/kill the C* process, as the node never fully joined the
cluster yet – that should be enough. You can safely remove the data (i.e. what
was streamed in on the new node) so you can use the node for the other, new cluster.
From: Akshay Bhardwaj [mailto:akshay.bhardwaj1...@gmail.com]
Sent: Tuesday, April
Thank you for the prompt replies. The solutions worked!
Akshay Bhardwaj
+91-97111-33849
On Tue, Apr 30, 2019 at 5:56 PM ZAIDI, ASAD A wrote:
> Just stop the server/kill C* process as node never fully joined the
> cluster yet – that should be enough. You can safely remove the data i.e.
> streamed-
Just curious - why are you using such large batches? Most of the time
when someone asks this question, it's because they're using batches as
they would in an RDBMS, because larger transactions improve
performance. That doesn't apply with Cassandra.
Batches are OK at keeping multiple tables in sy
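The case batches are good at is something like this (table names are made up):
a small logged batch that keeps two denormalised tables for the same entity in
step, rather than a large batch used to bulk-load unrelated rows:

    cqlsh -e "
    BEGIN BATCH
      INSERT INTO shop.orders_by_id   (order_id, user_id, total) VALUES (1001, 7, 25.00);
      INSERT INTO shop.orders_by_user (user_id, order_id, total) VALUES (7, 1001, 25.00);
    APPLY BATCH;"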
Hello -
I have a 48-node C* cluster spread across 4 AWS regions with RF=3. A few
months ago I started noticing disk usage on some nodes increasing
consistently. At first I solved the problem by destroying the nodes and
rebuilding them, but the problem returns.
I did some more investigation recent
We have a requirement to store blob data.
Sent from my iPhone
> On Apr 30, 2019, at 9:16 AM, Jon Haddad wrote:
>
> Just curious - why are you using such large batches? Most of the time
> when someone asks this question, it's because they're using batches as
> they would in an RDBMS, because l
Reviewing the documentation and based on my testing with C* 2.2.8, I was not
able to extend the cluster by adding multiple nodes simultaneously. I got an
error message …
Other bootstrapping/leaving/moving nodes detected, cannot bootstrap while
cassandra.consistent.rangemovement is true
I unde
When a new node joins the ring, it needs to own new token ranges. These should
be unique to the new node; we don’t want to end up in a situation where two
nodes joining simultaneously own the same range (and ideally ranges should be
evenly distributed). Cassandra has this 2-minute wait rule for gossip state to
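In practice that means either bootstrapping nodes one at a time (start one,
wait for it to finish joining, then start the next), or explicitly opting out
of the safety check when starting each new node, e.g.:

    # not generally recommended; accepts the risk described above
    JVM_OPTS="$JVM_OPTS -Dcassandra.consistent.rangemovement=false"   # e.g. in cassandra-env.sh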
Do you have search on the same nodes, or is it only Cassandra? In my case it
was due to a memory leak bug in DSE Search that consumed more memory,
resulting in an OOM.
On Tue, Apr 30, 2019, 2:58 AM yeomii...@gmail.com
wrote:
> Hello,
>
> I'm suffering from a similar problem with OSS Cassandra version 3.