Output from nodetool ring:

Address         DC          Rack        Status State   Load            Owns    Token
                                                                                85070591730234615865843651857942052864
110.82.155.2    datacenter1 rack1       Up     Normal  78.23 MB        50.00%  0
110.82.155.4    datacenter1 rack1       Up     Normal  67.21 MB        50.00%  85070591730234615865843651857942052864
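
(For reference, the even 50.00% ownership above is what evenly spaced initial
tokens look like under the RandomPartitioner, whose token range runs from 0 to
2**127. Here is a minimal Python sketch of that calculation, following the
formula on the Token_selection wiki page Aaron links below; the function name
is just for illustration:)

# Sketch: evenly spaced initial_token values for the RandomPartitioner.
# Node i of N gets i * 2**127 // N.
def balanced_tokens(node_count):
    return [i * (2 ** 127) // node_count for i in range(node_count)]

for node, token in enumerate(balanced_tokens(2), start=1):
    print("node %d initial_token: %d" % (node, token))
# For 2 nodes this prints 0 and 85070591730234615865843651857942052864,
# matching the tokens in the ring output above.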

On Wed, Dec 21, 2011 at 1:18 PM, aaron morton <aa...@thelastpickle.com> wrote:

> Post the output from nodetool ring and take a look at
> http://wiki.apache.org/cassandra/Operations#Token_selection
>
> Cheers
>
> -----------------
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 22/12/2011, at 5:21 AM, Blake Starkenburg wrote:
>
> Thank You!
>
> Could the lack of routine repair be why nodetool ring reports node(1)
> Load -> 78.24 MB and node(2) Load -> 67.21 MB? The load difference between
> the two nodes has been increasing ever so slowly...
>
> On Wed, Dec 21, 2011 at 1:00 AM, aaron morton <aa...@thelastpickle.com> wrote:
>
>> Here you go
>> http://wiki.apache.org/cassandra/Operations#Dealing_with_the_consequences_of_nodetool_repair_not_running_within_GCGraceSeconds
>>
>> Cheers
>>
>> -----------------
>> Aaron Morton
>> Freelance Developer
>> @aaronmorton
>> http://www.thelastpickle.com
>>
>> On 21/12/2011, at 2:44 PM, Blake Starkenburg wrote:
>>
>> I have been playing around with Cassandra for a few months now. I am
>> starting to explore the routine maintenance and backup strategies, and I
>> have a general question about nodetool repair. After reading the following
>> page: http://www.datastax.com/docs/0.8/operations/cluster_management it has
>> occurred to me that for these past few months I have NOT run any cleanup
>> or repair commands on a test 2-node cluster (and there have been quite a
>> few deletes, writes, etc.).
>>
>> For some reason I was under the assumption that Cassandra handled the
>> tombstone records from deletes automatically. Should I still run nodetool
>> repair, and if so, what about old deletes which occurred months ago?
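
(An aside on the tombstone question above: tombstones are only purged by
compaction after gc_grace_seconds, 10 days by default, and repair exists to
make sure every replica has seen a delete before that window closes. Below is
a minimal sketch of a cron-driven wrapper that runs repair per keyspace; the
keyspace name is a placeholder and it assumes nodetool is on the PATH:)

#!/usr/bin/env python
# Sketch: run anti-entropy repair for each keyspace. Schedule it from cron
# on every node more often than gc_grace_seconds so deletes reach all
# replicas before their tombstones become eligible for purging.
import subprocess

KEYSPACES = ["Keyspace1"]   # placeholder keyspace names

def repair(keyspace):
    # Equivalent to running: nodetool repair <keyspace>
    subprocess.check_call(["nodetool", "repair", keyspace])

if __name__ == "__main__":
    for ks in KEYSPACES:
        repair(ks)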
>>
>> Thank You!
>>
