You will need to run nodetool removetoken with the old node's token to
permanently remove it from the cluster.
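
For example (a sketch; the host and token values below are placeholders): find the dead node's token with nodetool ring, then run removetoken from any live node.

    # the dead node shows as Down in the ring output
    nodetool -h <live_node_ip> ring

    # re-replicate its ranges and drop it from the ring
    nodetool -h <live_node_ip> removetoken <old_node_token>

    # check on, or force, a removal that appears stuck
    nodetool -h <live_node_ip> removetoken status
    nodetool -h <live_node_ip> removetoken force

Once the removal completes, OpsCenter should stop showing the old node.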

On Fri, Sep 14, 2012 at 3:06 PM, rohit reddy <rohit.kommare...@gmail.com> wrote:

> Thanks for the inputs.
> The disk on the EC2 node failed, which led to the problem. I have now
> created a new Cassandra node and added it to the cluster.
>
> Do I need to do anything to delete the old node from the cluster, or will
> the cluster balance itself?
> I'm asking because DataStax OpsCenter is still showing the old node.
>
> Thanks
> Rohit
>
>
> On Fri, Sep 14, 2012 at 7:42 PM, Robin Verlangen <ro...@us2.nl> wrote:
>
>> Cassandra writes to memtables, which are flushed to disk when it's time.
>> That might be because memory is running low (the log message you just
>> posted), on shutdown, or at other times. That's why writes still use
>> memory.
>>
>> You seem to be running on AWS; are you sure your data location is on the
>> right disk? The default is /var/lib/cassandra/data.
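>>
>> For example (the paths below are just the stock defaults; adjust them to
>> your layout), check where cassandra.yaml points and make sure that path
>> sits on the RAID0 volume:
>>
>>   # conf/cassandra.yaml
>>   data_file_directories:
>>       - /var/lib/cassandra/data
>>   commitlog_directory: /var/lib/cassandra/commitlog
>>
>>   # verify which mount actually backs the data directory
>>   df -h /var/lib/cassandra/data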
>>
>> Best regards,
>>
>> Robin Verlangen
>> *Software engineer*
>> W http://www.robinverlangen.nl
>> E ro...@us2.nl
>>
>>
>>
>>
>> 2012/9/14 rohit reddy <rohit.kommare...@gmail.com>
>>
>>> Hi Robin,
>>>
>>> I had checked that. Our disk size is about 800GB, and the total data
>>> size is not more than 40GB. Even if all the data were stored on one node,
>>> the disk wouldn't fill up.
>>>
>>> I'll try to see if the disk failed.
>>>
>>> Could this be anything to do with JVM memory? This log message suggests so:
>>> Heap is 0.7515559786053904 full.  You may need to reduce memtable and/or
>>> cache sizes.  Cassandra will now flush up to the two largest memtables to
>>> free up memory.  Adjust flush_largest_memtables_at threshold in
>>> cassandra.yaml if you don't want Cassandra to do this automatically
>>>
>>> But I'm only testing writes; there are no reads on the cluster. Do
>>> writes require that much memory? A Large instance has 7.5GB of RAM, so by
>>> default Cassandra allocates about 3.75GB for the JVM.
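>>>
>>> For reference, these are the settings I'm looking at (the values below
>>> are only the stock defaults / placeholders, not a recommendation):
>>>
>>>   # conf/cassandra.yaml
>>>   flush_largest_memtables_at: 0.75
>>>   # memtable_total_space_in_mb defaults to 1/3 of the heap when left commented out
>>>
>>>   # conf/cassandra-env.sh, to pin the heap instead of letting it auto-size
>>>   MAX_HEAP_SIZE="4G"
>>>   HEAP_NEWSIZE="400M"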
>>>
>>>
>>>
>>> On Fri, Sep 14, 2012 at 6:58 PM, Robin Verlangen <ro...@us2.nl> wrote:
>>>
>>>> Hi Rohit,
>>>>
>>>> I think it's running out of disk space; please verify that (on Linux:
>>>> df -h).
>>>>
>>>> Best regards,
>>>>
>>>> Robin Verlangen
>>>> *Software engineer*
>>>> W http://www.robinverlangen.nl
>>>> E ro...@us2.nl
>>>>
>>>>
>>>>
>>>>
>>>> 2012/9/14 rohit reddy <rohit.kommare...@gmail.com>
>>>>
>>>>> Hi,
>>>>>
>>>>> I'm facing a problem in a Cassandra cluster deployed on EC2 where a
>>>>> node is going down under write load.
>>>>>
>>>>> I have configured a cluster of 4 Large EC2 nodes with an RF of 2.
>>>>> All nodes are backed by instance storage; the disk is RAID0 with 800GB.
>>>>>
>>>>> I'm pumping in write requests at about 4000 writes/sec. One of the
>>>>> nodes went down under this load. The total data size on each node was not
>>>>> more than 7GB.
>>>>> I got the following WARN messages in the log file:
>>>>>
>>>>> 1. setting live ratio to minimum of 1.0 instead of 0.9003153296009601
>>>>> 2. Heap is 0.7515559786053904 full.  You may need to reduce memtable
>>>>> and/or cache sizes.  Cassandra will now flush up to the two largest
>>>>> memtables to free up memory.  Adjust flush_largest_memtables_at threshold
>>>>> in cassandra.yaml if you don't want Cassandra to do
>>>>> this automatically
>>>>> 3. WARN [CompactionExecutor:570] 2012-09-14 11:45:12,024
>>>>> CompactionTask.java (line 84) insufficient space to compact all requested
>>>>> files
>>>>>
>>>>> All Cassandra settings are at their defaults.
>>>>> Do I need to tune anything to support this write rate?
>>>>>
>>>>> Thanks
>>>>> Rohit
>>>>>
>>>>>
>>>>
>>>
>>
>


-- 
Tyler Hobbs
DataStax <http://datastax.com/>
