Hi all,

I have a Ceph cluster with the following layout:

# ceph osd tree
# id    weight    type name    up/down    reweight
-1    11.13    root default
-2    8.14        host h1
 1    0.9             osd.1     up    1
 3    0.9             osd.3     up    1
 4    0.9             osd.4     up    1
 5    0.68            osd.5     up    1
 6    0.68            osd.6     up    1
 7    0.68            osd.7     up    1
 8    0.68            osd.8     up    1
 9    0.68            osd.9     up    1
10    0.68            osd.10    up    1
11    0.68            osd.11    up    1
12    0.68            osd.12    up    1
-3    0.45        host s3
 2    0.45            osd.2     up    1
-4    0.9         host s2
13    0.9             osd.13    up    1
-5    1.64        host s1
14    0.29            osd.14    up    1
 0    0.27            osd.0     up    1
15    0.27            osd.15    up    1
16    0.27            osd.16    up    1
17    0.27            osd.17    up    1
18    0.27            osd.18    up    1

s2 and s3 will get more drives in the future, but this is the setup for now.

I mount CephFS via /etc/fstab, and all seemed well for quite a few months.
Now, however, I am starting to see strange things, like directories with
corrupted file names in the file system.
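
The fstab entry is the usual kernel-client style, roughly like this (the
monitor address, mount point, and secret file path below are placeholders,
not the actual values):

10.0.0.1:6789:/    /mnt/cephfs    ceph    name=admin,secretfile=/etc/ceph/admin.secret,noatime    0 0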

My question is: how can the file system be checked for errors and repaired?
Or does it heal itself automatically?  The disks are all formatted with btrfs.
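
For what it's worth, the only consistency checks I know of are the
RADOS-level health and scrub commands, e.g. (the PG and OSD ids below are
just examples):

# ceph health detail
# ceph pg scrub 0.1f
# ceph osd deep-scrub 1

As far as I understand, these only compare object replicas across OSDs, so
I'm not sure they would detect or repair problems in the CephFS metadata
itself.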

Thanks,

Roland