> Our backup plan is to snapshot all data, bring up a completely fresh
> 6-node cluster, and stream the data in with sstableloader. Are there any
> objections to that plan from your point of view?
>
> Thanks in advance!
>
> Andi
>
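For what it's worth, the plan above could be sketched roughly like this; the keyspace/table names, host names, and paths are placeholders, not taken from the thread:

```shell
# Hedged sketch of the snapshot-and-reload plan. "my_ks", "my_table",
# host names, and paths are illustrative placeholders.

# 1. On each node of the old cluster: flush memtables, then snapshot.
nodetool flush my_ks
nodetool snapshot -t migration my_ks

# 2. Snapshots land under the data directory, e.g.
#    /var/lib/cassandra/data/my_ks/my_table/snapshots/migration/
#    Copy those directories somewhere that can reach the new cluster.

# 3. Recreate the schema on the fresh 6-node cluster, then stream each
#    table's sstables in (sstableloader expects a path ending in
#    <keyspace>/<table>).
sstableloader -d newnode1,newnode2 /staging/my_ks/my_table/
```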
From: Aaron Morton [aa...@thelastpickle.com]
Sent: Wednesday, December 18, 2013 3:14 AM
To: Cassandra User
Subject: Re: Unbalanced ring with C* 2.0
> Node: 4 CPU, 6 GB RAM, virtual appliance
>
> Cassandra: 3 GB Heap, vnodes 256
FWIW that’s a very low-powered node.
> Maybe we forgot necessary actions during or after the cluster expansion process.
> We are open to any ideas.
Were the nodes in the seed list when they joined the cluster? If so they
would not have bootstrapped, since seed nodes skip bootstrap.
Check the logs for messages about nodes going up and down, and also look at the
MessagingService MBean for timeouts. If the node in DR 2 times out replying to
DR1 the DR1 node will store a hint.
Also, when hints are stored they are TTL'd to the gc_grace_seconds of the CF
(IIRC). If that's low, hints can expire before they are ever delivered.
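A rough way to check for this from the command line (assuming a live cluster; the table name below is a placeholder):

```shell
# Hedged sketch: look for cross-DC timeouts and accumulating hints.

# Dropped message counts and the hinted-handoff thread pool:
nodetool tpstats

# Streaming / network state between nodes:
nodetool netstats

# gc_grace_seconds (which caps hint lifetime per CF) can be checked in
# cqlsh, e.g.:  DESCRIBE TABLE my_ks.my_table;
```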
Here is some more information.
I am running full repair on one of the nodes and I am observing strange
behavior.
Both DCs were up during the data load, but repair is reporting a lot of
out-of-sync data. Why would that be? Is there a way for me to tell
whether the WAN may be dropping hinted handoff messages?
Wanted to add one more thing:
I can also tell that the numbers are not consistent across DCs this way
-- I have a column family with really wide rows (a couple million
columns).
DC1 reports higher column counts than DC2. DC2 only becomes consistent
after I run the count a couple of times.
Actually, doing a nodetool ring always shows the current node as owning
99% of the ring. From db-1a-1:

Address    DC       Rack  Status  State  Load  Effective-Ownership  Token
                                                                    Token(bytes[eaa8])
10.0.4.22  us-east  1a    Up
Maybe people think that 1.2 = vnodes, when vnodes are actually not
mandatory; furthermore, the advice is to upgrade first and then, after a
while, once everything is running smoothly, eventually switch to vnodes...
2013/2/13 Brandon Williams
On Tue, Feb 12, 2013 at 6:13 PM, Edward Capriolo wrote:
> Are vnodes on by default? It seems that many on the list are using this feature
> with small clusters.
They are not.
-Brandon
I take that back. Vnodes are useful for any size cluster, but I do not see
them as a day-one requirement. It seems like many people are stumbling over
this.
On Tuesday, February 12, 2013, Edward Capriolo
wrote:
Are vnodes on by default? It seems that many on the list are using this feature
with small clusters.
I know these days anything named virtual is sexy, but they are not useful
for small clusters, are they? I do not see why people are using them.
On Monday, February 11, 2013, aaron morton wrote:
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Monday, February 11, 2013 12:51 PM
To: user@cassandra.apache.org
Subject: Re: unbalanced ring
The tokens are not right, not right at all.
> With rack-aware replication, your allocation is suspicious.
>
> I’m not sure what you mean by this.
>
> Steve
>
> -Original Message-
> From: Eric Evans [mailto:eev...@acunu.com]
> Sent: Thursday, February 07, 2013 9:56 AM
> To: user@cassandra.apache.org
> Subject: Re: unbalanced ring
> I have about 11M rows of data in this keyspace and none of them are
> exceptionally long … it’s data pulled from Oracle and didn’t include any
> BLOB, etc.
[ ... ]
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Tuesday, February 05, 2013 3:41 PM
To: user@cassandra.apache.org
Subject: Re: unbalanced ring
Use nodetool status with vnodes
http://www.datastax.com/dev/blog/upgrading-an-existing-cluster-to-vnodes
The different load can be caused by rack affinity; are all the nodes in the
same rack? Another simple check: have you created some very big rows?
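Concretely, the two checks above might look like this (assumes a live cluster):

```shell
# Hedged sketch of the suggested checks.

# With vnodes, status gives a readable per-node load/ownership view:
nodetool status

# Very big rows show up in cfstats as "Compacted row maximum size":
nodetool cfstats | grep -i "compacted row"
```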
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
From: Alain RODRIGUEZ [mailto:arodr...@gmail.com]
Sent: Thursday, October 11, 2012 09:17
To: user@cassandra.apache.org
Subject: Re: unbalanced ring
Tamar, be careful. Datastax doesn't recommend major compactions in
production environments.
If I got it right, performing a major compaction will merge all your
SSTables into one big one, substantially improving your read performance, at
least for a while... The problem is that the resulting big SSTable will
rarely qualify for minor compactions afterwards.
It should not have any other impact except increased usage of system
resources.
And I suppose cleanup would not have an effect (over normal compaction) if
all nodes contain the same data.
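The trade-off described above can be sketched with the relevant nodetool command; the keyspace name "my_ks" is a placeholder, not from the thread:

```shell
# Hedged sketch, assuming a keyspace named "my_ks" (a placeholder).

# A manual major compaction merges all SSTables of the keyspace into
# one large file -- a heavy, I/O-bound operation:
nodetool compact my_ks

# The single giant SSTable it leaves behind will rarely qualify for
# minor compactions afterwards, which is why routine major compactions
# are discouraged.
```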
On Wed, Oct 10, 2012 at 12:12 PM, Tamar Fraenkel wrote:
Hi!
Apart from being heavy load (the compact), will it have other effects?
Also, will cleanup help if I have replication factor = number of nodes?
Thanks
*Tamar Fraenkel *
Senior Software Engineer, TOK Media
ta...@tok-media.com
Tel: +972 2 6409736
Mob: +972 54 8356490
Fax: +972 2 5612956
Major compaction in production is fine; however, it is a heavy operation on
the node and will take I/O and some CPU.
The only time I have seen this happen is when I have changed the tokens in
the ring, like "nodetool movetoken". Cassandra does not auto-delete data
that it doesn't use anymore just because the ring changed; you have to run
cleanup.
Hi,
Same thing here:
2 nodes, RF = 2. RCL = 1, WCL = 1.
Like Tamar, I never ran a major compaction; I run repair once a week on each node.
10.59.21.241  eu-west  1b  Up  Normal  133.02 GB  50.00%  0
10.58.83.109  eu-west  1b  Up  Normal   98.12 GB  50.00%
Hi!
I am re-posting this, now that I have more data and still *unbalanced ring*:
3 nodes,
RF=3, RCL=WCL=QUORUM
Address  DC       Rack  Status  State   Load      Owns    Token
                                                          113427455640312821154458202477256070485
x.x.x.x  us-east  1c    Up      Normal  24.02 GB  33.33%
> Does cleanup only cleanup keys that no longer belong to that node.
Yes.
I guess it could be an artefact of the bulk load. It's not been reported
previously though. Try the cleanup and see how it goes.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
Nick, thanks for the response. Does cleanup only clean up keys that no
longer belong to that node? Just to add more color: when I bulk loaded all
my data into these 6 nodes, all of them had the same amount of data. After
the first nodetool repair, the first node started having more data than the
rest.
No. Cleanup will scan each sstable to remove data that is no longer
owned by that specific node. It won't compact the sstables together
however.
On Tue, Jun 19, 2012 at 11:11 PM, Raj N wrote:
But won't that also run a major compaction, which is not recommended anymore?
-Raj
On Sun, Jun 17, 2012 at 11:58 PM, aaron morton wrote:
Assuming you have been running repair, it can't hurt.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 17/06/2012, at 4:06 AM, Raj N wrote:
Nick, do you think I should still run cleanup on the first node.
-Rajesh
On Fri, Jun 15, 2012 at 3:47 PM, Raj N wrote:
I did run nodetool move. But that was when I was setting up the cluster
which means I didn't have any data at that time.
-Raj
On Fri, Jun 15, 2012 at 1:29 PM, Nick Bailey wrote:
Did you start all your nodes at the correct tokens or did you balance
by moving them? Moving nodes around won't delete unneeded data after
the move is done.
Try running 'nodetool cleanup' on all of your nodes.
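Something like the following, where the host names are placeholders:

```shell
# Hedged sketch: run cleanup on every node after token moves. Cleanup
# drops data a node no longer owns but does not merge SSTables together.
for host in node1 node2 node3 node4 node5 node6; do
  ssh "$host" nodetool cleanup
done
```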
On Fri, Jun 15, 2012 at 12:24 PM, Raj N wrote:
Actually, I am not worried about the percentage. It's the data I am concerned
about. Look at the first node: it has 102.07 GB of data, while the other nodes
have around 60 GB (one has 69, but let's ignore that one). I am not
understanding why the first node has almost double the data.
Thanks
-Raj
This is just a known problem with the nodetool output and multiple
DCs. Your configuration is correct. The problem with nodetool is fixed
in 1.1.1
https://issues.apache.org/jira/browse/CASSANDRA-3412
On Fri, Jun 15, 2012 at 9:59 AM, Raj N wrote:
> Hi experts,
> I have a 6 node cluster across
Thanks, I will wait and see as data accumulates.
Thanks,
On Tue, Mar 27, 2012 at 9:00 AM, R. Verlangen wrote:
Cassandra is built to store tons and tons of data. In my opinion, roughly
~6 MB per node is not enough data for it to become a fully balanced
cluster.
2012/3/27 Tamar Fraenkel
This morning I have
nodetool ring -h localhost
Address        DC       Rack  Status  State   Load     Owns    Token
                                                               113427455640312821154458202477256070485
10.34.158.33   us-east  1c    Up      Normal  5.78 MB  33.33%  0
10.38.175.131  us-east  1c    Up      Normal
What version are you using?
Anyway try nodetool repair & compact.
maki
2012/3/26 Tamar Fraenkel
> Hi!
> I created an Amazon ring using the Datastax image and started filling the db.
> The cluster seems unbalanced.
>
> nodetool ring returns:
> Address  DC  Rack  Status  State  Load
> How can I fix this?
Add more data; 1.5M is not enough to get reliable reports.