> -rw-r--r-- 1 ceph ceph 4 Jun 30 2017 mkfs_done
> -rw-r--r-- 1 ceph ceph 6 Jun 30 2017 ready
> -rw-r--r-- 1 ceph ceph 3 Oct 19 2019 require_osd_release
> -rw-r--r-- 1 ceph ceph 0 Sep 26 2019 systemd
> -rw-r--r-- 1 ceph ceph 10 Jun 30 2017 type
> -rw-r--r-- 1 ceph ceph 2 Jun 30 2017 whoami
-----Original Message-----
To: ceph-users@ceph.io
Subject: [ceph-users] Re: OSDs and tmpfs
> We have a 23 node cluster, and normally when we add OSDs they end up
> mounting like this:
>
> /dev/sde1 3.7T 2.0T 1.8T 54% /var/lib/ceph/osd/ceph-15
>
> /dev/sdj1 3.7T 2.0T 1.7T 55% /var/lib/ceph/osd/ceph-20
>
> /dev/sdd1 3.7T 2.1T 1.6T 58% /var/li
I am going to attempt to answer my own question here, and someone can correct me
if I am wrong.
Looking at a few of the other OSDs we have replaced over the last year or so,
it looks like they are mounted using tmpfs as well, and that this is just a
result of switching from filestore to bluestore.
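
For what it's worth, here is a quick way to sanity-check that (just a sketch;
the OSD id 15 and paths below are placeholders for your own):

# Ask the cluster which object store backend the OSD reports
ceph osd metadata 15 | grep osd_objectstore

# A BlueStore OSD created by ceph-volume mounts a small tmpfs as its data dir
df -h /var/lib/ceph/osd/ceph-15

# The actual data lives on the device the 'block' symlink points to
ls -l /var/lib/ceph/osd/ceph-15/block

On a filestore OSD the data directory is the data partition itself (like the
/dev/sdX1 mounts above), while on a BlueStore OSD you should see tmpfs plus a
block symlink into the underlying LVM logical volume or partition.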