Hi guys,
Cassandra 3.11.4
nodetool gossipinfo
/10.1.20.49
generation:1571694191
heartbeat:279800
STATUS:279798:LEFT,-1013739435631815991,1572225050446
LOAD:279791:3.4105213781E11
SCHEMA:12:5cad59d2-c3d0-3a12-ad10-7578d225b082
DC:8:live
RACK:10:us-east-1a
RELEASE_VERSION:4:3.11.4
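For reference, the STATUS line above shows 10.1.20.49 in the LEFT state. A quick
way to see which endpoints are in that state, and (only as a last resort, if the
entry never ages out of gossip on its own, which normally takes roughly three
days) the command to evict a stale entry:

nodetool gossipinfo | grep -E '^/|STATUS'
# nodetool assassinate 10.1.20.49    # last resort; only if the node is permanently gone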
Thanks Reid!
I agree with everything you said!
Best,
Sergio
On Thu, Oct 24, 2019 at 09:25 Reid Pinchback <rpinchb...@tripadvisor.com> wrote:
> Two different AWS AZs are in two different physical locations. Typically
> different cities. Which means that you’re trying
Thanks Jon!
This is very helpful - allow me to follow-up and ask a question.
(1) Yes, incremental repairs will never be used (unless it becomes viable
in Cassandra 4.x someday).
(2) I hear you on the JVM - will look into that.
(3) Been looking at Cassandra version 3.11.x though was unaware that 3
Hi Reid,
Many thanks - I have seen that article though will definitely give it
another read.
Note that nodetool scrub has been tried (no effect) and sstablescrub cannot
currently be run with the Cassandra image in use (though certainly a new
image that allows the server to be stopped but keeps th
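For what it's worth, once an image that allows the node to be stopped is
available, the offline scrub is run roughly like this (keyspace and table names
here are placeholders):

sstablescrub <keyspace> <table>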
There are some major warning signs for me with your environment. A 4GB heap is
too low, and Cassandra 3.7 isn't something I would put into production.
Your surface area for problems is massive right now. Things I'd do:
1. Never use incremental repair. Seems like you've already stopped doing
them,
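On the 4GB heap point above, a rough sketch of where that lives in 3.x (the 8G
figure is only an example; size it to the host, and keep Xms equal to Xmx):

grep -n 'Xm[sx]' conf/jvm.options    # locate the commented-out heap settings
# then set them explicitly, e.g.
#   -Xms8G
#   -Xmx8G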
Hi Sergio,
No, not at this time.
It was in use with this cluster previously, and while there were no
reaper-specific issues, it was removed to help simplify investigation of
the underlying repair issues I've described.
Thanks.
On Thu, Oct 24, 2019 at 4:21 PM Sergio wrote:
> Are you using Cass
Ben, you may find this helpful:
https://blog.pythian.com/so-you-have-a-broken-cassandra-sstable-file/
From: Ben Mills
Reply-To: "user@cassandra.apache.org"
Date: Thursday, October 24, 2019 at 3:31 PM
To: "user@cassandra.apache.org"
Subject: Repair Issues
Greetings,
Are you using Cassandra Reaper?
On Thu, Oct 24, 2019, 12:31 PM Ben Mills wrote:
> Greetings,
>
> Inherited a small Cassandra cluster with some repair issues and need some
> advice on recommended next steps. Apologies in advance for a long email.
>
> Issue:
>
> Intermittent repair failures on two
Greetings,
Inherited a small Cassandra cluster with some repair issues and need some
advice on recommended next steps. Apologies in advance for a long email.
Issue:
Intermittent repair failures on two non-system keyspaces.
- platform_users
- platform_management
Repair Type:
Full, parallel rep
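For reference, a full, parallel repair of those keyspaces is typically kicked
off roughly like this (in 3.x, --full forces a full rather than incremental
repair, and parallel validation should be the default unless -seq or -dcpar is
passed):

nodetool repair --full platform_users
nodetool repair --full platform_management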
Has anyone used this yet?
Any initial thoughts?
When will the full, final, production-ready version be coming out?
On Sunday, September 8, 2019, Michael Shuler wrote:
> The Cassandra team is pleased to announce the release of Apache Cassandra
> version 4.0-alpha1.
>
> Apache Cassandra is a fully dist
If the data was TTL'ed & tombstoned, shouldn't the select be returning just 3
elements from the list column?
Thanks,
Murali
On Thu, Oct 24, 2019 at 12:03 PM ZAIDI, ASAD wrote:
> Guess TTL'ed data is lurking around? If so, you can try to get rid of
> tombstones (by reducing gc_grace_seconds to zero?) a
Two different AWS AZs are in two different physical locations. Typically
different cities. Which means that you’re trying to manage the risk of an AZ
going dark, so you use more than one AZ just in case. The downside is that you
will have some degree of network performance difference between
Thanks Reid and Jon!
Yes I will stick with one rack per DC for sure and I will look at the
Vnodes problem later on.
What's the difference in terms of reliability between
A) spreading 2 datacenters across 3 AZs
B) having 2 datacenters in 2 separate AZs?
Best,
Sergio
On Thu, Oct 24, 2019, 7:36
Guess TTL'ed data is lurking around? If so, you can try to get rid of tombstones
(by reducing gc_grace_seconds to zero?) and let compaction take care of the
tombstones before the sstable migration. Do keep an eye on hinted handoffs
because of the zeroed gc_grace_seconds property.
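A rough sketch of that approach (keyspace/table names are placeholders; remember
to restore gc_grace_seconds afterwards, and be careful with RF > 1, since a zero
grace period lets tombstones be purged before they reach every replica):

cqlsh -e "ALTER TABLE my_ks.my_table WITH gc_grace_seconds = 0;"
nodetool compact my_ks my_table
cqlsh -e "ALTER TABLE my_ks.my_table WITH gc_grace_seconds = 864000;"   # back to the 10-day default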
From: Muralikrishna Gut
Thanks. The list datatype has been in use for this table for a few years now
and we never had issues. We ran into this issue recently when we did
the keyspace migration.
Thanks,
Murali
On Thu, Oct 24, 2019 at 11:36 AM ZAIDI, ASAD wrote:
> Have you chosen correct datatype to begin with, if you
Have you chosen the correct datatype to begin with, if you don't want duplicates?
Generally speaking:
A set and a list both represent multiple values but do so differently.
A set doesn't preserve insertion order; its values are kept sorted in ascending
order, and no duplicates are allowed.
A list saves ordering where y
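A small illustration of the difference (the keyspace, table, and column names
below are made up):

cqlsh -e "CREATE KEYSPACE IF NOT EXISTS demo WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};"
cqlsh -e "CREATE TABLE IF NOT EXISTS demo.t_set  (id int PRIMARY KEY, vals set<text>);"
cqlsh -e "CREATE TABLE IF NOT EXISTS demo.t_list (id int PRIMARY KEY, vals list<text>);"
# Appending 'a' twice: the set column ends up as {'a'}, the list column as ['a', 'a'].
cqlsh -e "UPDATE demo.t_set  SET vals = vals + {'a', 'a'} WHERE id = 1;"
cqlsh -e "UPDATE demo.t_list SET vals = vals + ['a', 'a'] WHERE id = 1;"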
Hello Guys,
We started noticing strange behavior after we migrated one keyspace from the
existing cluster to a new cluster.
We expanded our source cluster from 18 nodes to 36 nodes and didn't run
"nodetool cleanup".
We took sstable backups on the source cluster and restored them, which have
duplicate data and rest
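For reference, cleanup just drops the token ranges a node no longer owns after
the cluster grows, so the stale copies stay out of any backups taken afterwards.
A sketch, run on each of the original nodes (the keyspace name is a placeholder):

nodetool cleanup my_keyspace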
Hey Sergio,
Forgive me, but I'm at work and had to skim the info quickly.
When in doubt, simplify. So 1 rack per DC. Distributed systems get rapidly
harder to reason about the more complicated you make them. There’s more than
enough to learn about C* without jumping into the complexity too soon.
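A minimal sketch of the one-rack-per-DC idea with GossipingPropertyFileSnitch
(assuming that snitch is in use; the dc/rack values below are only examples, and
the path is for a tarball install, packages usually keep it under /etc/cassandra):

cat > conf/cassandra-rackdc.properties <<'EOF'
dc=live
rack=us-east-1a
EOF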
I tested dsbulk too. But there are many errors:
"[1710949318] Error writing cancel request. This is not critical (the
request will eventually time out server-side)."
"Forcing termination of Connection[/127.0.0.1:9042-14, inFlight=1,
closed=true]. This should not happen and is likely a bug, please