[ceph-users] osd marked down

2021-09-21 Thread Abdelillah Asraoui
Hi, one of the OSDs in the cluster went down; is there a workaround to bring this OSD back? Logs from the Ceph OSD pod show the following: kubectl -n rook-ceph logs rook-ceph-osd-3-6497bdc65b-pn7mg debug 2021-09-20T14:32:46.388+ 7f930fe9cf00 -1 auth: unable to find a keyring on /var/lib/ceph/o
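
A minimal sketch of how this "unable to find a keyring" error is typically triaged in a Rook-managed cluster; the OSD id (3) and pod name are taken from the log above, everything else is an assumption:

    # confirm whether the monitors still hold an auth entry for osd.3
    ceph auth get osd.3
    # list the cephx entries known to the cluster
    ceph auth ls
    # check the state of the OSD pod itself
    kubectl -n rook-ceph describe pod rook-ceph-osd-3-6497bdc65b-pn7mg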

[ceph-users] Re: osd marked down

2021-09-28 Thread Abdelillah Asraoui
at 8:34 AM Abdelillah Asraoui wrote: > Hi, > > one of the OSDs in the cluster went down, is there a workaround to bring > back this OSD? > > > logs from the Ceph OSD pod show the following: > > kubectl -n rook-ceph logs rook-ceph-osd-3-6497bdc65b-pn7mg > > d

[ceph-users] Re: osd marked down

2021-09-29 Thread Abdelillah Asraoui
to reflect the actual key of OSD.3, correct? If not, run > 'ceph auth get osd.3' first and set the key in the osd.3.export file > before importing it to Ceph. > > > Quoting Abdelillah Asraoui: > > > I have created a keyring for osd.3 but the pod is still no
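
A minimal sketch of the export/edit/import flow suggested above, assuming the export file is called osd.3.export as in the thread:

    # write the cluster's current auth entry for osd.3 to the export file
    ceph auth get osd.3 -o osd.3.export
    # edit osd.3.export so its 'key =' line matches the key in
    # /var/lib/ceph/osd/ceph-3/keyring, then import it back
    ceph auth import -i osd.3.export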

[ceph-users] Re: osd marked down

2021-09-30 Thread Abdelillah Asraoui
> /var/lib/ceph/osd/ceph-3/keyring > > Then update your osd.3.export file with the correct keyring and then > import the corrected file back to Ceph. > > > Quoting Abdelillah Asraoui: > > > I must have imported the osd.2 key instead, now osd.3 has the same key as >
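
A hedged sketch of checking that the on-disk keyring and the cluster's entry agree after the corrected export is imported (paths and file name taken from the thread):

    # key stored in the OSD's data directory
    cat /var/lib/ceph/osd/ceph-3/keyring
    # key the monitors hold for osd.3 -- should match the file above
    ceph auth get osd.3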

[ceph-users] Re: osd marked down

2021-10-04 Thread Abdelillah Asraoui
low *" > > > Make sure the file owner is ceph and try to restart the OSD. In this > case you wouldn't need to import anything. This just worked for me in > my lab environment, so give it a shot. > > > > Quoting Abdelillah Asraoui: > > > the /var/lib/c
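
A rough sketch of the ownership fix and restart suggested here; in a Rook deployment the restart is usually done by bouncing the OSD deployment rather than a systemd unit (the deployment name is an assumption based on the pod name in the thread):

    chown ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
    kubectl -n rook-ceph rollout restart deployment rook-ceph-osd-3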

[ceph-users] Re: osd marked down

2021-10-04 Thread Abdelillah Asraoui
ner? > > > Quoting Abdelillah Asraoui: > > > I have created the keyring file /var/lib/ceph/osd/ceph-3/keyring and > > chowned it to ceph, but I am still getting these errors in the OSD pod log: > > > > k -n rook-ceph logs rook-ceph-osd-3-6497bdc65b-5cvx3 > > > >
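
For reference, a keyring file at that path is normally expected to look roughly like the snippet below; the key value is a placeholder and must match what 'ceph auth get osd.3' returns:

    [osd.3]
            key = AQ...placeholder-key...==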

[ceph-users] 1 MDS report slow metadata IOs

2021-10-05 Thread Abdelillah Asraoui
Ceph is reporting a warning about slow metadata IOs on one of the MDS servers; this is a new cluster with no upgrades. Has anyone encountered this, and is there a workaround? ceph -s cluster: id: 801691e6xx-x-xx-xx-xx health: HEALTH_WARN 1 MDSs report slow metadata IOs
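
A few commands commonly used to narrow down this warning (a sketch, not specific to this cluster):

    ceph health detail   # shows which MDS reports the slow metadata IOs
    ceph osd tree        # shows whether any OSDs are down
    ceph pg stat         # shows inactive/degraded placement groups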

[ceph-users] Re: 1 MDS report slow metadata IOs

2021-10-05 Thread Abdelillah Asraoui
our PGs are inactive; if two of four OSDs are down and you > probably have a pool size of 3, then no IO can be served. You’d need at > least three up OSDs to resolve that. > > > Quoting Abdelillah Asraoui: > > > Ceph is reporting a warning about slow metadata IOs on one of
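
A hedged sketch of how to verify the pool-size reasoning above; the metadata pool name is an assumption and may differ in this cluster:

    # replication size of the CephFS metadata pool
    ceph osd pool get cephfs-metadata size
    # how many OSDs are up vs. in
    ceph osd stat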

[ceph-users] Re: 1 MDS report slow metadata IOs

2021-10-27 Thread Abdelillah Asraoui
shing OSD could help identify the issue. > > > Quoting Abdelillah Asraoui: > > > The OSDs are continuously flapping up/down due to the slow MDS metadata > IOs > > .. > > What is causing the slow MDS metadata IOs? > > Currently, there are 2 MDS and 3 monitors
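
A rough sketch of how the flapping history and the failing daemon's log can be inspected in a Rook cluster; the pod name is taken from earlier in the thread and may have changed since:

    # recent up/down state of osd.3 as the monitors see it
    ceph osd dump | grep '^osd.3'
    # log of the previous container instance of the flapping OSD
    kubectl -n rook-ceph logs rook-ceph-osd-3-6497bdc65b-5cvx3 --previous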