The reason you moved to Ceph storage is that you do not want to have to do
such things. Remove the drive, and let Ceph recover from the surviving
replicas.


On May 6, 2019 11:06 PM, Florent B wrote:
> Hi,
>
> It seems that the OSD disk is dead (hardware problem); the badblocks
> command returns a lot of bad blocks.
>
> Is there any way to force Ceph to recover as many objects as it can on
> that disk? I know data will be lost, but I would like to recover as many
> as possible.
>
> Thank you.
>
> On 04/05/2019 18:56, Florent B wrote:
> > Hi,
> >
> > I have a problem with an OSD that stopped itself and then can't restart.
> >
> > I use Luminous 12.2.12.
> >
> > I can see these lines in the logs:
> >
> >     -2> 2019-05-04 18:48:33.687087 7f3aedfe1700  4 rocksdb: EVENT_LOG_v1
> > {"time_micros": 1556988513687079, "job": 3, "event": "compaction_started",
> > "files_L1": [7456], "files_L2": [7414, 7415, 7416, 7417, 7418, 7419, 7420],
> > "score": 1.25456, "input_data_size": 494084689}
> >     -1> 2019-05-04 18:48:33.947802 7f3b023bde00 -1 bdev(0x55ed83fb3200
> > /var/lib/ceph/osd/ceph-38/block) direct_read_unaligned 0x3cbb14c2b8~1c9631
> > error: (61) No data available
> >      0> 2019-05-04 18:48:33.952965 7f3b023bde00 -1
> > /build/ceph-12.2.12/src/os/bluestore/BlueFS.cc: In function 'int
> > BlueFS::_read_random(BlueFS::FileReader*, uint64_t, size_t, char*)'
> > thread 7f3b023bde00 time 2019-05-04 18:48:33.947917
> > /build/ceph-12.2.12/src/os/bluestore/BlueFS.cc: 936: FAILED assert(r == 0)
> >
> > Is there any way to recover from this problem?
> >
> > Thank you.
> >
> > Florent
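PS: with normal replication the other copies are enough and nothing needs to
be read from the dead disk. Only if some PGs would otherwise end up
incomplete (this disk held the only surviving copy) is it worth trying to
export them from the stopped OSD with ceph-objectstore-tool; reads will fail
on the bad sectors, but intact PGs may come out whole. A sketch, again
assuming osd.38 (the pgid 2.a and the target osd.40 are just placeholders):

    systemctl stop ceph-osd@38
    # list the PGs still present on the OSD
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-38 --op list-pgs
    # export one PG to a file (repeat per PG)
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-38 \
        --pgid 2.a --op export --file /tmp/2.a.export
    # import it into another, stopped, healthy OSD
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-40 \
        --op import --file /tmp/2.a.export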