>     We have a 23 node cluster and normally when we add OSDs they end up
>     mounting like this:
> 
>     /dev/sde1       3.7T  2.0T  1.8T  54% /var/lib/ceph/osd/ceph-15
>     /dev/sdj1       3.7T  2.0T  1.7T  55% /var/lib/ceph/osd/ceph-20
>     /dev/sdd1       3.7T  2.1T  1.6T  58% /var/lib/ceph/osd/ceph-14
>     /dev/sdc1       3.7T  1.8T  1.9T  49% /var/lib/ceph/osd/ceph-13

I'm pretty sure those OSDs were deployed with the Filestore backend, since the 
first partition of each device is the data partition and has to be mounted.
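
A quick way to double-check (taking osd.15 from your output as an example, adjust 
the id) is to read the 'type' file in the OSD data directory; on a Filestore OSD 
it should print "filestore":

  cat /var/lib/ceph/osd/ceph-15/type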

>     However I noticed this morning that the 3 new servers have the OSDs
>     mounted like this:
> 
>     tmpfs            47G   28K   47G   1% /var/lib/ceph/osd/ceph-246
>     tmpfs            47G   28K   47G   1% /var/lib/ceph/osd/ceph-240
>     tmpfs            47G   28K   47G   1% /var/lib/ceph/osd/ceph-248
>     tmpfs            47G   28K   47G   1% /var/lib/ceph/osd/ceph-237

And here it looks like those OSDs are using the Bluestore backend, which doesn't 
need to mount a data partition.
What you're seeing is the Bluestore metadata in that tmpfs.
In the mount point you should find some useful information (fsid, keyring, and 
symlinks to the data block and/or DB/WAL devices).
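
For example (taking osd.246 from your output), listing the mount point should 
show that metadata; you'd typically see a 'block' symlink pointing at the 
underlying device/LV, plus files like 'fsid', 'keyring', 'type' and 'whoami':

  ls -l /var/lib/ceph/osd/ceph-246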

I don't know whether you're using ceph-disk or ceph-volume, but you can get more 
information about this by running either:
  - ceph-disk list
  - ceph-volume lvm list
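
If you only want to know which backend each OSD is using, you can also ask the 
cluster directly; the OSD metadata should include an "osd_objectstore" field 
reporting "filestore" or "bluestore", e.g. (again using osd.246 as an example):

  ceph osd metadata 246 | grep osd_objectstore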
