Out of interest, why -100 and not -1 or +1? Any particular reason?
On 06/09/2012 19:17, Tyler Hobbs wrote:
To minimize the impact on the cluster, I would bootstrap a new 1d node
at (42535295865117307932921825928971026432 - 100), then decommission
the 1c node at 42535295865117307932921825928971026432 and run cleanup
on your us-east nodes.
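For concreteness, the token arithmetic behind this plan, as a minimal Python
sketch (only the token value comes from the thread; the names are illustrative):

    # Token math for the bootstrap-then-decommission plan.
    OLD_1C_TOKEN = 42535295865117307932921825928971026432

    # Bootstrapping the new 1d node just below the old token means it
    # takes over (almost) exactly the range the 1c node held, so very
    # little data moves when the 1c node is decommissioned afterwards.
    NEW_1D_TOKEN = OLD_1C_TOKEN - 100

    assert 0 <= NEW_1D_TOKEN < 2**127  # RandomPartitioner token space
    print(NEW_1D_TOKEN)  # 42535295865117307932921825928971026332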
On Thu, Sep 6, 2012 at 1:11 PM, William Oberman
<ober...@civicscience.com> wrote:
Didn't notice the racks! Of course....
If I change a 1c to a 1d, what would I have to do to make sure
data shuffles around correctly? Repair everywhere?
will
On Thu, Sep 6, 2012 at 2:09 PM, Tyler Hobbs <ty...@datastax.com> wrote:
The main issue is that one of your us-east nodes is in rack
1d, while the rest are in rack 1c. With NTS and multiple
racks, Cassandra will try to use one node from each rack as a
replica for a range until it either meets the RF for the DC
or runs out of racks, in which case it just picks nodes
sequentially going clockwise around the ring (starting from
the range being considered, not from the last node that was
chosen as a replica).
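To make that placement rule concrete, here is a rough Python model of the
selection described above; it is a sketch of the behavior for a single DC,
not Cassandra's actual implementation, and the function name and arguments
are invented for illustration:

    # Rough model of rack-aware replica selection within one DC.
    def nts_replicas(ring, start, rf):
        """ring: this DC's nodes as (token, node, rack), sorted by token.
        start: index of the node owning the range being placed."""
        n = len(ring)
        replicas, seen_racks = [], set()
        # First pass: walk clockwise, taking one node per distinct rack,
        # until the DC's RF is met or there are no unseen racks left.
        for i in range(n):
            _, node, rack = ring[(start + i) % n]
            if rack not in seen_racks:
                replicas.append(node)
                seen_racks.add(rack)
                if len(replicas) == rf:
                    return replicas
        # Out of racks: restart from the range itself (not from the last
        # replica chosen) and fill sequentially, skipping earlier picks.
        for i in range(n):
            _, node, _ = ring[(start + i) % n]
            if node not in replicas:
                replicas.append(node)
                if len(replicas) == rf:
                    break
        return replicas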
To fix this, you'll either need to make the 1d node a 1c node,
or make 42535295865117307932921825928971026432 a 1d node so
that you're alternating racks within that DC.
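Running the suggested fix through that model: with racks alternating 1c/1d
in us-east and RF=3, every node lands on 3 of the 4 ranges, i.e. an even 75%
each (the tokens 0-3 and node names are placeholders):

    ring = [(0, "n0", "1c"), (1, "n1", "1d"), (2, "n2", "1c"), (3, "n3", "1d")]
    for i in range(4):
        print(nts_replicas(ring, i, rf=3))
    # ['n0', 'n1', 'n2'], ['n1', 'n2', 'n3'],
    # ['n2', 'n3', 'n0'], ['n3', 'n0', 'n1']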
On Thu, Sep 6, 2012 at 12:54 PM, William Oberman
<ober...@civicscience.com> wrote:
Hi,
I recently upgraded from 0.8.x to 1.1.x (through 1.0
briefly), and nodetool ring seems to have changed from
"owns" to "effectively owns". "Effectively owns" seems to
account for replication factor (RF). I'm OK with all of
this, yet I still can't figure out what's up with my
cluster. I have a NetworkTopologyStrategy with two data
centers (DCs), with these RF / number-of-nodes combinations:
DC Name, RF, # in DC
analytics, 1, 2
us-east, 3, 4
So I'd expect 50% on each analytics node and 75% on each
us-east node. Instead, two of the us-east nodes show 50%
and 100% (the other two are at 75%, as expected).
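In a balanced ring, effective ownership per node works out to roughly RF
divided by the number of nodes in the DC, which is where those expected
figures come from; a quick check:

    for dc, rf, nodes in [("analytics", 1, 2), ("us-east", 3, 4)]:
        print(f"{dc}: {100 * rf / nodes:.0f}% per node")
    # analytics: 50% per node
    # us-east: 75% per node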
Here is the output of nodetool ring (all nodes report the
same thing):
Address  DC         Rack  Status  State   Load       Effective-Ownership  Token
                                                                          127605887595351923798765477786913079296
x.x.x.x  us-east    1c    Up      Normal  94.57 GB   75.00%               0
x.x.x.x  analytics  1c    Up      Normal  60.64 GB   50.00%               1
x.x.x.x  us-east    1c    Up      Normal  131.76 GB  75.00%               42535295865117307932921825928971026432
x.x.x.x  us-east    1c    Up      Normal  43.45 GB   50.00%               85070591730234615865843651857942052864
x.x.x.x  analytics  1d    Up      Normal  60.88 GB   50.00%               85070591730234615865843651857942052865
x.x.x.x  us-east    1d    Up      Normal  98.56 GB   100.00%              127605887595351923798765477786913079296
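As a sanity check, feeding this us-east layout (three nodes in rack 1c, one
in 1d, RF=3) into the model sketched earlier reproduces the skew exactly;
the node names a-d stand in for the x.x.x.x addresses:

    ring = [
        (0,                                       "a", "1c"),
        (42535295865117307932921825928971026432,  "b", "1c"),
        (85070591730234615865843651857942052864,  "c", "1c"),
        (127605887595351923798765477786913079296, "d", "1d"),
    ]
    counts = {node: 0 for _, node, _ in ring}
    for i in range(len(ring)):
        for node in nts_replicas(ring, i, rf=3):
            counts[node] += 1
    print({node: f"{25 * c}%" for node, c in counts.items()})
    # {'a': '75%', 'b': '75%', 'c': '50%', 'd': '100%'}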
If I use cassandra-cli to do "show keyspaces;" I get (and
again, all nodes report the same thing):
Keyspace: civicscience:
  Replication Strategy: org.apache.cassandra.locator.NetworkTopologyStrategy
  Durable Writes: true
    Options: [analytics:1, us-east:3]
I removed the output about all of my column families
(CFs), hopefully that doesn't matter.
Did I compute the tokens wrong? Is there a combination of
nodetool commands I can run to migrate the data around to
rebalance to 75/75/75/75? I routinely run repair already.
And as the release notes required, I ran upgradesstables
during the upgrade process.
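The tokens themselves appear correct: they match evenly spaced tokens per
DC, with the analytics tokens offset by +1 to avoid colliding with the
us-east ones (a reconstruction; the offset is inferred from the ring output
above):

    RING = 2**127  # RandomPartitioner token range

    us_east   = [i * RING // 4 for i in range(4)]
    analytics = [i * RING // 2 + 1 for i in range(2)]

    print(us_east[1])    # 42535295865117307932921825928971026432
    print(analytics[1])  # 85070591730234615865843651857942052865

So the token math checks out, consistent with the diagnosis above that the
rack assignment, not the tokens, causes the imbalance.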
Before the upgrade, I was getting analytics = 0% and
us-east = 25% on each node, which is what I expected for
"owns".
will
--
Tyler Hobbs
DataStax <http://datastax.com/>
--
Will Oberman
Civic Science, Inc.
3030 Penn Avenue., First Floor
Pittsburgh, PA 15201
(M) 412-480-7835
(E) ober...@civicscience.com
--
Tyler Hobbs
DataStax <http://datastax.com/>