[…] satisfied)?
Sincerely,
Myron A. Semack
From: Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Monday, September 18, 2017 6:02 PM
To: cassandra
Subject: Re: Re[6]: Modify keyspace replication strategy and rebalance the nodes
Using CL:ALL basically forces you to always include the first replica in the
query. The first replica is the one whose placement doesn't change when the
replication strategy changes, so reads at ALL will still see the existing data
while the other replicas get repaired.
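For illustration only (the keyspace, table, and key below are made-up
placeholders), pinning reads to ALL from cqlsh would look like this:

    CONSISTENCY ALL;
    SELECT * FROM my_keyspace.my_table WHERE id = 42;

A read at ALL must get a response from every replica, including the first one,
so it can't miss data that only the first replica currently holds.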
So I haven't completely thought through this, so don't just go ahead and do
it. Definitely test first. Also, if anyone sees something terribly wrong,
don't be afraid to say so.
Seeing as you're only using SimpleStrategy and it doesn't care about racks,
you could change to SimpleSnitch, or GossipingPropertyFileSnitch […]
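If you went the GossipingPropertyFileSnitch route, it reads
cassandra-rackdc.properties on each node. A minimal sketch (values are
placeholders; to mimic SimpleStrategy's rack-blind placement, every node would
report the same dc and rack):

    # cassandra-rackdc.properties, identical on every node
    dc=datacenter1
    rack=rack1

Changing the snitch changes how nodes describe themselves, so it needs a
careful rolling restart, one node at a time.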
No worries, that makes both of us; my first contribution to this thread was
similarly going too fast and trying to remember things I don't use often (I
thought originally SimpleStrategy would consult the EC2 snitch, but it
doesn't).
- Jeff
On Mon, Sep 18, 2017 at 1:56 PM, Jon Haddad wrote:
Sorry, you’re right. This is what happens when you try to do two things at
once. Google too quickly, look like an idiot. Thanks for the correction.
On Sep 18, 2017, at 1:37 PM, Jeff Jirsa wrote:
For what it's worth, the problem isn't the snitch, it's the replication
strategy: he's using the right snitch, but SimpleStrategy ignores it.
That's the same reason that adding a new DC doesn't work. The replication
strategy is DC-agnostic, and changing it safely IS the problem.
--
Jeff Jirsa
For those of you who like trivia: SimpleSnitch is hard-coded to report every
node as being in datacenter “datacenter1” and rack “rack1”; there's no way around it.
https://github.com/apache/cassandra/blob/8b3a60b9a7dbefeecc06bace617279612ec7092d/src/java/org/apache/cassandra/locator/SimpleSnitch.java#L28
The hard part here is nobody's going to be able to tell you exactly what's
involved in fixing this because nobody sees your ring
And since you're using vnodes and have a nontrivial number of instances,
sharing that ring (and doing anything actionable with it) is nontrivial.
If you weren't using vnodes […]
@jeff what do you think is the best approach here to fix this problem?
Thank you all for helping me.
>Thursday, September 14, 2017 3:28 PM -07:00 from kurt greaves:
Sorry, that only applies if you're using NTS. You're right that SimpleStrategy
won't work very well in this case. To migrate you'll likely need to do a DC
migration to ensure no downtime, as replica placement will change even if RF
stays the same.
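As a rough sketch of that datacenter-migration approach (keyspace and DC names
here are placeholders, not from this cluster):

    -- 1. Switch the keyspace to NTS, keeping the existing DC
    --    and adding the new one:
    ALTER KEYSPACE my_keyspace WITH replication = {
      'class': 'NetworkTopologyStrategy',
      'existing_dc': 3,
      'new_dc': 3
    };

    # 2. On each node in the new DC, stream data from the old DC:
    nodetool rebuild -- existing_dc

Clients then switch to the new DC (using LOCAL_* consistency levels) before
the old DC is decommissioned.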
On 15 Sep. 2017 08:26, "kurt greaves" wrote:
If you have racks configured and you lose a node, you should replace it with
one from the same rack. You then need to repair, and definitely don't
decommission until you do.
Also 40 nodes with 256 vnodes is not a fun time for repair.
On 15 Sep. 2017 03:36, "Dominik Petrovic" wrote:
@jeff,
I'm using 3 availability zones. During the life of the cluster we lost some
nodes and retired others, and we ended up having some of the data
written/replicated in a single availability zone. We saw it with nodetool
getendpoints.
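For reference, that check looks like the following (keyspace, table, and
partition key are placeholders):

    nodetool getendpoints my_keyspace my_table some_partition_key

If all of the returned replica IPs sit in the same availability zone, that
partition has no cross-AZ redundancy.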
Regards
>Thursday, September 14, 2017 9:23 AM -07:00 from Jeff Jirsa
With one datacenter/region, what did you discover in an outage that you think
you'll solve with NetworkTopologyStrategy? It should be equivalent for a single DC.
--
Jeff Jirsa
On Sep 14, 2017, at 8:47 AM, Dominik Petrovic wrote:
Thank you for the replies!
@jeff my current cluster details are:
1 datacenter
40 nodes, with vnodes=256
RF=3
What is your advice? It is a production cluster, so I need to be very careful
with it.
Regards
>Thu, 14 Sep 2017 -2:47:52 -0700 from Jeff Jirsa:
The token distribution isn't going to change; the way Cassandra maps replicas
will change.
How many data centers/regions will you have when you're done? What's your RF
now? You definitely need to run repair before you ALTER, but you've got a bit
of a race here between the repairs and the ALTER.
Hi,
the steps are:
- ALTER KEYSPACE to change your replication strategy
- "nodetool repair -pr <keyspace>" on ALL nodes, or a full repair ("nodetool
repair <keyspace>") on enough replicas to distribute and rebalance your data
to the new replicas
- "nodetool cleanup" on every node to remove superfluous data
Please note that you'd be […]
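As a concrete sketch of those steps (the keyspace name is a placeholder; if
you move to NetworkTopologyStrategy under Ec2Snitch, the DC name must match
what the snitch reports, e.g. 'us-east' for us-east-1; verify it with
nodetool status):

    ALTER KEYSPACE my_keyspace WITH replication =
      {'class': 'NetworkTopologyStrategy', 'us-east': 3};

    # then, on ALL nodes, one at a time:
    nodetool repair -pr my_keyspace

    # only after all repairs have completed, on every node:
    nodetool cleanup my_keyspace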
Dear community,
I'd like to receive additional info on how to modify a keyspace replication
strategy.
My Cassandra cluster is on AWS, Cassandra 2.1.15 using vnodes. The cluster's
snitch is configured to Ec2Snitch, but the keyspace the developers created
uses replication class SimpleStrategy with a replication factor of 3.
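In other words, the keyspace was presumably created with something like this
(the keyspace name is a placeholder):

    CREATE KEYSPACE my_keyspace WITH replication =
      {'class': 'SimpleStrategy', 'replication_factor': 3};

which ignores the rack/DC information that Ec2Snitch provides.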