[ceph-users] Re: Full OSDs on cephfs_metadata pool

2020-03-19 Thread Derek Yarnell
Hi Robert, Sorry to hear that this impacted you, but I feel a bit better that I wasn't alone. Did you have a lot of log segments to trim on the MDSs when you recovered? I would agree that this was a very odd, sudden onset of space consumption for us. We usually have about 600GB consumed of around …
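As a rough sketch of how to check whether an MDS is behind on trimming and how full the metadata pool is (the MDS name "cephfs-a" is a placeholder, and the exact perf counter layout can vary by release):

    ceph health detail                           # look for MDS_TRIM / "behind on trimming" warnings
    ceph df                                      # current usage of the cephfs_metadata pool
    ceph daemon mds.cephfs-a perf dump mds_log   # log segment counters for the active MDS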

[ceph-users] Re: bluefs enospc

2020-03-18 Thread Derek Yarnell
Hi Igor, I just want to thank you for taking the time to help with this issue. On 3/18/20 5:30 AM, Igor Fedotov wrote: >>> Most probably you will need additional 30GB of free space per each OSD >>> if going this way. So please let me know if you can afford this. >> Well I had already increased 70 …
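For reference, growing an LVM-backed bluestore OSD by roughly 30GB could look like the sketch below; the OSD id 709 and the VG/LV names are hypothetical, and the OSD should be stopped first:

    systemctl stop ceph-osd@709
    lvextend -L +30G /dev/ceph-vg/osd-block-709                               # grow the underlying LV (names are placeholders)
    ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-709   # confirm the device now reports the larger size
    ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-709  # let bluefs claim the new space
    systemctl start ceph-osd@709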

[ceph-users] Re: bluefs enospc

2020-03-16 Thread Derek Yarnell
Hi Igor, On 3/16/20 10:34 AM, Igor Fedotov wrote: > I can suggest the following non-straightforward way for now: > > 1) Check osd startup log for the following line: > > 2020-03-15 14:43:27.845 7f41bb6baa80  1 > bluestore(/var/lib/ceph/osd/ceph-681) _open_alloc loaded 23 GiB in 97 > extents > > …
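Finding that startup line without re-reading the whole boot is just a grep against the OSD's log (the default log path is assumed; use journalctl instead if the OSD logs to the systemd journal):

    grep '_open_alloc loaded' /var/log/ceph/ceph-osd.681.log
    # or, when logging to the systemd journal:
    journalctl -u ceph-osd@681 | grep '_open_alloc loaded'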

[ceph-users] Re: bluefs enospc

2020-03-16 Thread Derek Yarnell
Hi Igor, Thank you for the help. On 3/16/20 7:47 AM, Igor Fedotov wrote: > OSD-709 has been already expanded, right? Correct with 'ceph-bluestore-tool --log-level 30 --path /var/lib/ceph/osd/ceph-709 --command bluefs-bdev-expand'. Does this expand bluefs and the data allocation? Is there a way …
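One way to answer the "did it actually expand" question is to compare what bluefs reports before and after the expand; a sketch, with the same path as in the thread (the bluefs perf counter section is the usual one but may differ by release):

    ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-709   # device size vs. space bluefs has allocated
    # once the OSD is running again, the same information is visible live:
    ceph daemon osd.709 perf dump bluefs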

[ceph-users] bluefs enospc

2020-03-15 Thread Derek Yarnell
Hi, We have a production cluster that just suffered an issue with multiple of our NVMe OSDs. More than 12 of them died across 4 nodes with an 'ENOSPC from bluestore, misconfigured cluster' error indicating they no longer had space. These are all simple one-device bluestore OSDs. ceph versi …
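A rough first-pass triage for a batch of ENOSPC'd OSDs, assuming default paths and that the affected daemons are down (the OSD id NNN is a placeholder):

    ceph health detail                                                  # which OSDs are down/out and any nearfull/full flags
    ceph osd df tree                                                    # per-OSD utilization across the affected nodes
    ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-NNN   # size and layout recorded in the bluestore label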