>
> Is the OP expecting a perfect 50%/50% split?
The best result I got was a 240GB/30GB split, which I think is not properly
balanced.
> Also, what are your outputs when you call out specific keyspaces? Do the
> numbers get more even?
I don't know what you mean by *call out specific keyspaces*? ca
Hi!
We're running a Cassandra cluster on AWS. I want to replace an old node
that uses EBS storage with a new one. I'm following the steps below and
want a second opinion on whether this is the right approach:
1. Remove the old node from gossip.
2. Run nodetool drain.
3. Stop Cassandra.
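Assuming a package-installed node where Cassandra runs as a service (the service name is an assumption), the three steps above might look like:

```shell
# Run on the old node. Removing it from gossip stops peers
# from routing requests to it.
nodetool disablegossip

# Flush memtables to disk and stop accepting new writes.
nodetool drain

# Stop the Cassandra process (service name assumed).
sudo service cassandra stop
```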
The steps are good, Rutvij. Step 1 is not mandatory.
We snapshotted the EBS volume and then restored it on the new node. How are you
re-attaching the EBS volume without a snapshot?
I
> On Jun 13, 2017, at 10:21 AM, Rutvij Bhatt wrote:
>
> Hi!
>
> We're running a Cassandra cluster on AWS. I want to replace an old no
Nitan,
Yes, that is what I've done. I snapshotted the volume after step 3 and will
create a new volume from that snapshot and attach it to the new instance.
Out of curiosity: if I am indeed replacing a node completely, is there any
logical difference between snapshot->create->attach vs. detach from old->attach to new?
Hello,
I think that’s not the optimal way to handle it.
If you are just attaching the same EBS volume to a new node, you can do it
like this:
1) nodetool drain on old
2) stop cassandra on old
3) Attach EBS to new node
4) Start Cassandra on new node
Cassandra automatically realizes that you have just effectively changed the IP address.
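A sketch of those four steps using the AWS CLI; the volume ID, instance ID, device name, and mount point here are hypothetical placeholders:

```shell
# 1) and 2): quiesce and stop Cassandra on the old node
nodetool drain
sudo service cassandra stop

# 3) Move the data volume to the new instance (IDs are placeholders;
#    both instances must be in the same availability zone).
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0fedcba9876543210 --device /dev/xvdf

# 4) Mount the volume at the data directory (path assumed) and
#    start Cassandra on the new node.
sudo mount /dev/xvdf /var/lib/cassandra
sudo service cassandra start
```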
Hannu,
"Cassandra automatically realizes that you have just effectively changed the IP
address" --> are you sure C* will take care of the IP change as is? How will it
know which token ranges to assign to this new IP address?
> On Jun 13, 2017, at 10:51 AM, Hannu Kröger wrote:
>
> Cassandra automatica
Hello,
So the local information about tokens is stored in the system keyspace,
along with the host ID and so on.
Also documented here:
https://support.datastax.com/hc/en-us/articles/204289959-Changing-IP-addresses-in-DSE
If for any reason that causes issues, you can also check this:
https://issues.
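One way to see the identity Cassandra persists on disk (standard cqlsh against the local node; column names as in recent 2.x/3.x versions):

```shell
# host_id and tokens travel with the data directory, which is why a new
# node starting on the old volume keeps the old node's token ranges.
cqlsh -e "SELECT host_id, tokens FROM system.local;"
```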
Thank you Hannu.
> On Jun 13, 2017, at 10:59 AM, Hannu Kröger wrote:
>
> Hello,
>
> So the local information about tokens is stored in the system keyspace. Also
> the host id and all that.
>
> Also documented here:
> https://support.datastax.com/hc/en-us/articles/204289959-Changing-IP-addres
Hannu/Nitan,
Thanks for your help so far! From what you said in your first response, I
can get away with just attaching the EBS volume to the new node and starting
Cassandra with the old node's private IP as my listen_address, because it will
take over the token assignment from the old node using the data files.
Nevermind, I misunderstood the first link. In this case, the replacement
would just be leaving the listen_address as is (i.e.
InetAddress.getLocalHost()) and starting the new instance up as you
pointed out in your original answer, Hannu.
Thanks.
On Tue, Jun 13, 2017 at 12:35 PM Rutvij Bhatt wrote
Simplest way of all: if you are using RF>=2, simply terminate the old
instance and create a new one.
Cheers.
On 13-06-2017 18:01, Rutvij Bhatt wrote:
> Nevermind, I misunderstood the first link. In this case, the
> replacement would just be leaving the listen_address as is (to
> InetAddress.getLoc
In AWS, I just grow the cluster 2x, then shrink away the old nodes via
decommission. Mind you, I am not dealing with TBs of data, just hundreds of
gigs. Also, I have deployment automated with CloudFormation and Priam.
YMMV.
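For reference, the shrink half of that grow-then-shrink approach is, per old node (standard nodetool commands):

```shell
# Run on each node being retired; streams its data to the remaining replicas
# before removing the node from the ring.
nodetool decommission

# Optionally watch streaming progress from any node.
nodetool netstats
```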
On Tue, Jun 13, 2017 at 2:22 PM Cogumelos Maravilha <
cogumelosmaravi...@s
OP, I was just looking at your original numbers and I have some questions:
270GB on one node and 414KB on the other, but something close to 50/50 on
"Owns(effective)".
What replication factor are your keyspaces set up with? 1x or 2x or ??
I would say you are seeing 50/50 because the tokens are al
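Two quick checks along those lines (the keyspace name is a hypothetical placeholder; system_schema exists in C* 3.x, older versions use system.schema_keyspaces):

```shell
# Show each keyspace's replication strategy and factor.
cqlsh -e "SELECT keyspace_name, replication FROM system_schema.keyspaces;"

# Per-keyspace effective ownership -- this is what "call out specific
# keyspaces" refers to (keyspace name is a placeholder).
nodetool status my_keyspace
```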
Hi,
I came across the following method (
https://github.com/apache/cassandra/blob/afd68abe60742c6deb6357ba4605268dfb3d06ea/src/java/org/apache/cassandra/service/StorageService.java#L5006-L5021).
It seems data is evenly split across disks according to local token ranges.
It might be that data stor
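As a rough illustration of what that method does (a simplified sketch, not the actual Cassandra code): each local token range is cut into equal sub-ranges, one per data directory.

```python
def split_range(start, end, n_disks):
    """Split a token range [start, end) into n_disks equal sub-ranges.

    Simplified sketch of per-disk boundary computation; Cassandra's real
    logic operates on its token ring types, not plain integers.
    """
    step = (end - start) // n_disks
    bounds = [start + step * i for i in range(1, n_disks)]
    return list(zip([start] + bounds, bounds + [end]))

# One 300-token range spread across 3 data directories
print(split_range(0, 300, 3))  # → [(0, 100), (100, 200), (200, 300)]
```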
Scratch that theory - the flamegraphs show that GC is only 3-4% of the two
newer machines' overall processing, compared to 18% on the slow machine.
I took that machine out of the cluster completely and recreated the
keyspaces. The ingest tests now run slightly faster (!). I would have
expected a li
Did you try adding more client stress nodes as Patrick recommended?
On Tue, Jun 13, 2017 at 9:31 PM Eric Pederson wrote:
> Scratch that theory - the flamegraphs show that GC is only 3-4% of two
> newer machine's overall processing, compared to 18% on the slow machine.
>
> I took that machine out
Shoot - I didn't see that one. I subscribe to the digest but was focusing
on the direct replies and accidentally missed Patrick and Jeff Jirsa's
messages. Sorry about that...
I've been using a combination of cassandra-stress, cqlsh COPY FROM and a
custom C++ application for my ingestion testing.
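For anyone following along, typical invocations of the first two look like this (node address, keyspace, table, and file names are placeholders):

```shell
# Built-in stress tool: one million writes with 200 client threads.
cassandra-stress write n=1000000 -rate threads=200 -node 10.0.0.1

# Bulk CSV load through cqlsh (schema and file are placeholders).
cqlsh -e "COPY myks.mytable (id, val) FROM 'data.csv' WITH HEADER = true;"
```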