If you don't want downtime, you can take the original data and use the bulk SSTable loader (sstableloader) to stream it back into the cluster. If you don't mind downtime, you can take all the files from both data folders and put them together: make sure there aren't any with the same names (rename them if there are), then start Cassandra and it will pick up all the files.
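Roughly, something along these lines. I'm assuming the EBS data lives under /mnt2/cassandra/data, the post-reboot data ended up in the default /var/lib/cassandra/data, and the keyspace is called MyKeyspace; all of those names are placeholders, so adjust for your setup:

    # --- No-downtime path: stream the old sstables back in ---
    # sstableloader (shipped in bin/ since 0.8.1) streams the *.db files
    # in a directory whose name matches the target keyspace. In 0.8 it
    # joins the ring as a gossip fat client, so run it from a machine
    # that isn't already running Cassandra, with cassandra.yaml pointing
    # at your cluster.
    mkdir -p /tmp/load/MyKeyspace
    cp /mnt2/cassandra/data/MyKeyspace/*.db /tmp/load/MyKeyspace/
    bin/sstableloader /tmp/load/MyKeyspace

    # --- Downtime path: merge the two data folders ---
    # With Cassandra stopped, check for filename collisions first:
    comm -12 <(ls /mnt2/cassandra/data/MyKeyspace | sort) \
             <(ls /var/lib/cassandra/data/MyKeyspace | sort)
    # Rename any colliding sstable to an unused generation number, keeping
    # all of its components (-Data.db, -Index.db, -Filter.db, ...) in step,
    # e.g. generation 3 -> 100 across the whole set. Then copy everything
    # over (-n refuses to overwrite) and start Cassandra:
    cp -n /var/lib/cassandra/data/MyKeyspace/* /mnt2/cassandra/data/MyKeyspace/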

-Jeremiah

On 12/12/2011 12:53 PM, Stephane Legay wrote:
Here's the situation. We're running a 2-node cluster on EC2 (v 0.8.6). Each node writes data to an EBS volume mounted on /mnt2.

On Dec. 9th, for some reason both instances were rebooted (not sure yet what triggered the reboot). But the EBS volumes were not added to /etc/fstab, and didn't mount upon reboot. Cassandra auto-started without any problems, created a new data folder on the system drive, and started writing there. We only found out about the issue today, when users reported missing data.
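(For the record, an fstab entry along these lines would have remounted the volume at boot; the device name /dev/xvdf and the ext3 filesystem are guesses about this setup:

    # nofail keeps boot from hanging if the volume is absent
    /dev/xvdf  /mnt2  ext3  defaults,nofail  0  2

)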

So, to recap:

- each node contains data created since 12-09-2011, stored on the system drive
- each node has access to data created on or before 12-09-2011 on an EBS volume
- we need to move the data stored on the system drive to the EBS volume and restart Cassandra into a stable state with all data available

What's the best way for me to do this?

Thanks
