[ceph-users] Re: 5 pgs inactive, 5 pgs incomplete

2020-08-11 Thread Kevin Myers
Replica count of 2 is a sure-fire way to a crisis!

Sent from my iPad

> On 11 Aug 2020, at 18:45, Martin Palma wrote:
>
> Hello,
> after an unexpected power outage our production cluster has 5 PGs
> inactive and incomplete. The OSDs on which these 5 PGs are located all
> show "stuck requests a
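For anyone landing here with the same symptoms, the usual first read-only diagnostics come from the standard `ceph` CLI (these commands are not from the thread itself; the PG ID below is a placeholder):

```shell
# Read-only diagnostics for inactive/incomplete PGs after an outage.
ceph health detail            # lists the affected PGs by ID with their states
ceph pg dump_stuck inactive   # PGs that are not serving I/O
ceph pg dump_stuck unclean    # PGs that have not reached the desired replica count
# Query one affected PG (placeholder ID "2.5") for its peering state
# and any OSDs blocking recovery:
ceph pg 2.5 query
```

With `size=2` and `min_size=2`, losing a single OSD per PG is enough to block I/O, which is why replica count 2 draws the criticism above; `size=3` is the commonly recommended default for production pools.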

[ceph-users] Re: Understanding what ceph-volume does, with bootstrap-osd/ceph.keyring, tmpfs

2020-09-22 Thread Kevin Myers
Tbh, Ceph caused us more problems than it tried to fix; YMMV. Good luck.

> On 22 Sep 2020, at 13:04, t...@postix.net wrote:
>
> The key is stored in the ceph cluster config db. It can be retrieved by
>
> KEY=`/usr/bin/ceph --cluster ceph --name client.osd-lockbox.${OSD_FSID}
> --keyring $OSD_PATH
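The quoted command is cut off; a hedged sketch of its general shape, based on how ceph-volume stores dmcrypt keys in the cluster's config-key store (the lockbox keyring path, mount point, and config-key name below are assumptions, not taken from the truncated quote):

```shell
# Hypothetical reconstruction: fetch an encrypted OSD's dmcrypt secret
# using that OSD's per-OSD lockbox credentials.
OSD_FSID=...                                  # the OSD's fsid (placeholder)
OSD_PATH=/var/lib/ceph/osd/ceph-0             # assumed OSD mount point
KEY=$(/usr/bin/ceph --cluster ceph \
      --name client.osd-lockbox.${OSD_FSID} \
      --keyring ${OSD_PATH}/lockbox.keyring \
      config-key get dm-crypt/osd/${OSD_FSID}/luks)
```

The lockbox keyring is scoped so each OSD can read only its own key; this is why the command authenticates as `client.osd-lockbox.${OSD_FSID}` rather than as admin.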