On Monday, July 21, 2014, Cristian Falcas <cristi.fal...@gmail.com> wrote:

> Hello,
>
> We have a test project where we are using ceph+openstack.
>
> Today we had some problems with this setup and we had to force-reboot the
> server. After that, the partition where we keep the ceph journal could not
> be mounted.
>
> When we checked it, we got this:
>
> btrfsck /dev/mapper/vg_ssd-ceph_ssd
> Checking filesystem on /dev/mapper/vg_ssd-ceph_ssd
> UUID: 7121568d-3f6b-46b2-afaa-b2e543f31ba4
> checking extents
> checking fs roots
> root 5 inode 257 errors 80
> Segmentation fault
>
>
> Considering that we are running ceph on btrfs, could we reformat the
> journal partition and continue our work? Or would that kill the entire
> node? We don't care very much about the data from the last few minutes
> before the crash.
>
> Best regards,
> Cristian Falcas
>

Usually losing the journal like this is very unsafe, but with btrfs it
should be fine: the btrfs filestore backend takes periodic snapshots and
will roll back to the latest one on startup to get a consistent view of the
store. You can find help on reformatting the journals in the docs or the
help text. :)
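
For reference, the procedure is roughly the following. This is only a
sketch: the OSD id (0), the journal mount point (/var/lib/ceph/osd/ceph-0/journal),
and the init commands are assumptions — adjust them to your cluster, and
check `ceph-osd --help` and the docs first.

```shell
# Stop the affected OSD so nothing touches the store while we work
# (assumes sysvinit-style ceph service management and OSD id 0)
service ceph stop osd.0

# Recreate the filesystem on the corrupted journal partition
mkfs.btrfs -f /dev/mapper/vg_ssd-ceph_ssd

# Remount it where the journal lives (assumed path; check your config)
mount /dev/mapper/vg_ssd-ceph_ssd /var/lib/ceph/osd/ceph-0/journal

# Recreate a fresh journal for this OSD; on the next start the btrfs
# store rolls back to its last consistent snapshot
ceph-osd -i 0 --mkjournal

# Bring the OSD back and let it recover from its peers
service ceph start osd.0
```

You'll lose whatever was only in the journal at crash time, which matches
what you said you can afford.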
-Greg

-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com