On 14 April 2013 00:56, Rustam Aliyev wrote:
> Just a followup on this issue. Due to the cost of shuffle, we decided
> not to do it. Recently, we added a new node and ended up with a poorly
> balanced cluster:
>
> [nodetool status output snipped; full message quoted below]
> How exactly does Cassandra with vnodes decide how many vnodes to move?
The num_tokens setting in the yaml file. What did you set this to?
256, same as on all other nodes.
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
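
(For reference, the setting discussed above is configured per node in cassandra.yaml; a minimal sketch of checking it cluster-wide, with the config path an assumption that varies by install:)

  # cassandra.yaml on every node should carry the same value, e.g.:
  #   num_tokens: 256
  # Quick check (path assumed; may be conf/cassandra.yaml in a tarball install):
  grep '^num_tokens' /etc/cassandra/cassandra.yaml
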
On 14/04/2013, at 11:56 AM, Rustam Aliyev wrote:
Just a followup on this issue. Due to the cost of shuffle, we decided
not to do it. Recently, we added a new node and ended up with a poorly
balanced cluster:
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Tokens  Owns   Host ID                               Rack
[per-node rows truncated in the archived message]
After 2 days of endless compactions and streaming I had to stop it and
cancel the shuffle. One of the nodes even complained that there was no
free disk space (its load grew from 30GB to 400GB). After all these
problems, the number of moved tokens was less than 40 (out of 1280!).
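
(For anyone retracing this, the ownership figures and the compaction/streaming backlog that made the shuffle so painful are all visible through standard nodetool subcommands; a minimal sketch, assuming a stock 1.2 install with nodetool on the PATH, run on each node:)

  # Token count and ownership per node (the table pasted above):
  nodetool status
  # Pending and active compactions on this node:
  nodetool compactionstats
  # Active streams; shuffle relocations show up here while they run:
  nodetool netstats
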
Now, when nodes start they
[ Rustam Aliyev ]
> Hi,
>
> After upgrading to vnodes I created and enabled the shuffle
> operation as suggested. After running for a couple of hours I had to
> disable it because nodes were not catching up with compactions. I
> repeated this process 3 times (enable/disable).
>
> I have 5 nodes and each of them had ~35GB.
I am not familiar with shuffle, but if you attempt a shuffle and it fails
it would be a good idea to let compaction die down, or even to trigger a
major compaction on the nodes where the size grew. The reason is that once
the data files are on disk, even if they are duplicates, Cassandra does not
know they are redundant until compaction merges them away.
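
(A sketch of what that advice amounts to with standard nodetool subcommands; the keyspace name is a placeholder, and cleanup is the usual extra step once token ranges have moved:)

  # Force a major compaction on a keyspace whose data files ballooned:
  nodetool compact my_keyspace
  # Drop data this node no longer owns after token ranges moved:
  nodetool cleanup my_keyspace
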
Hi,
After upgrading to vnodes I created and enabled the shuffle operation as
suggested. After running for a couple of hours I had to disable it
because nodes were not catching up with compactions. I repeated this
process 3 times (enable/disable).
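
(For context, a sketch of the shuffle sequence being described; the sub-command names are assumptions based on the cassandra-shuffle utility bundled with 1.2, so check the script's usage output before running anything:)

  # Schedule the token relocations:
  cassandra-shuffle create
  # Start or pause the background relocation:
  cassandra-shuffle enable
  cassandra-shuffle disable
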
I have 5 nodes and each of them had ~35GB. Af