[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-12-02 Thread John Jasen
… misaligned. The solution (included in 17.2.6) was to allow BlueFS to allocate 4k extents when it couldn't find 64k contiguous extents. However, it seems that even with this fix, these OSDs still can't boot up. Therefore, the recommendation is to extend the RocksDB
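For context, one way to gauge the allocator fragmentation discussed in this thread is the OSD admin socket (a minimal sketch; osd.0 is just an example id, and it only works on an OSD that still starts):

    # Fragmentation score of the main (block) device allocator:
    # 0 means no fragmentation, values approaching 1 mean heavy fragmentation.
    ceph daemon osd.0 bluestore allocator score block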

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-30 Thread Frédéric Nass
it's caused by high fragmentation and BlueFS's inability to use chunks smaller than 64K. In fact the fragmentation issue is fixed since 17.2.6 so I doubt that's the problem. Hi Igor, I wasn't a

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-30 Thread Frédéric Nass
Does this ability depend on near/full thresholds being reached or not? If so then increasing these thresholds by 1-2% may help avoid the crash, no? Also, if BlueFS is aware of these thresholds, shouldn't an OSD be able to start
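If the cluster-wide ratios are what is meant here, a minimal sketch of bumping them by a couple of percent could look like this (values shown are just the defaults plus 2%, not a recommendation):

    # Show the current thresholds
    ceph osd dump | grep ratio

    # Raise them slightly, e.g. defaults 0.85/0.90/0.95 -> +2%
    ceph osd set-nearfull-ratio 0.87
    ceph osd set-backfillfull-ratio 0.92
    ceph osd set-full-ratio 0.97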

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-29 Thread Martin Konold
Frédéric. Thanks, Igor. On 27.11.2024 4:01, Szabo, Istvan (Agoda) wrote: Hi, This issue should not happen anymore from 17.2.8, am I correct? In this

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-29 Thread Frédéric Nass
Hi John,

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-29 Thread Igor Fedotov
From: Frédéric Nass <frederic.n...@univ-lorraine.fr> Sent: Wednesday, November 27, 2024 6:12:46 AM To: John Jasen <jja...@gmail.com> Cc:

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-28 Thread Frédéric Nass
Thanks, Igor. On 27.11.2024 4:01, Szabo, Istvan (Agoda) wrote: Hi, This issue should not happen anymore from 17.2.8, am I correct? In this version all the fragmentation

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-28 Thread Igor Fedotov
https://github.com/rook/rook/issues/9885#issuecomment-1761076861 From: John Jasen <jja...@gmail.com> Sent: Tuesday, November 26, 2024 18:50 To: Igor Fedotov Cc: ceph-users Subject:

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-27 Thread Frédéric Nass
<jja...@gmail.com> Cc: Igor Fedotov <igor.fedo...@croit.io>; ceph-users <ceph-users@ceph.io> Subject: [ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-27 Thread Igor Fedotov
Sent: Wednesday, November 27, 2024 10:33 AM To: Frédéric Nass; John Jasen; Igor Fedotov Cc: ceph-users Subject: [ceph-users] Re: down OSDs, Bluestore out of space, unable to restart Got it, the perf dump can give information: ceph daemon osd.x perf dump | jq .bluefs
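A minimal illustration of that check (osd.0 is an example id; the counters listed in the comments are the usual BlueFS perf counters, and names may vary slightly between releases):

    # Dump only the BlueFS section of the OSD's perf counters
    ceph daemon osd.0 perf dump | jq .bluefs

    # Fields typically worth comparing:
    #   db_total_bytes / db_used_bytes      RocksDB space on the DB device
    #   slow_total_bytes / slow_used_bytes  BlueFS data spilled onto the main device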

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-27 Thread Igor Fedotov
with collocated wal+db+block. From: Frédéric Nass Sent: Wednesday, November 27, 2024 6:12:46 AM To: John Jasen Cc: Igor Fedotov; ceph-users Subject: [ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-27 Thread Igor Fedotov
Yep! But better try with a single OSD first. On 26.11.2024 20:48, John Jasen wrote: Let me see if I have the approach right-ish: scrounge some more disk for the servers with full/down OSDs, partition the new disks into LVs for each downed OSD, attach as an lvm new-db to the downed OSDs, restart

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-26 Thread Szabo, Istvan (Agoda)
To: John Jasen Cc: Igor Fedotov; ceph-users Subject: [ceph-users] Re: down OSDs, Bluestore out of space, unable to restart Hi John, That's about right

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-26 Thread Szabo, Istvan (Agoda)
Cc: Igor Fedotov; ceph-users Subject: [ceph-users] Re: down OSDs, Bluestore out of space, unable to restart Hi John, That's about right. Two potential solutions

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-26 Thread Szabo, Istvan (Agoda)
Subject: [ceph-users] Re: down OSDs, Bluestore out of space, unable to restart Hi John, That's about right. Two potential solutions exist: 1. Adding a new dr

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-26 Thread Szabo, Istvan (Agoda)
ceph-users Subject: [ceph-users] Re: down OSDs, Bluestore out of space, unable to restart Hi John, That's about right. Two potential solutions exist: 1. Adding

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-26 Thread Frédéric Nass
https://github.com/rook/rook/issues/9885#issuecomment-1761076861 From: John Jasen Sent: Tuesday, November 26, 2024 18:50 To: Igor Fedotov Cc: ceph-users Subject: [ceph-users] Re: down OSDs, Bluestore out of space, unable to restart Let me see if I have the

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-26 Thread John Jasen
Let me see if I have the approach right-ish: scrounge some more disk for the servers with full/down OSDs, partition the new disks into LVs for each downed OSD, attach as an lvm new-db to the downed OSDs, restart the OSDs. Profit. Is that about right? On Tue, Nov 26, 2024 at 11:28 AM Igor Fedotov
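A rough sketch of that sequence for one down OSD. All device names, sizes and ids below are placeholders, and it assumes a non-containerized LVM OSD; the one fixed piece is the ceph-volume lvm new-db command from the docs linked further down the thread:

    # Carve an LV for the new DB device out of the scrounged disk
    vgcreate ceph-db-vg /dev/sdX
    lvcreate -L 50G -n db-osd-12 ceph-db-vg

    # Attach it as a dedicated DB volume to the stopped OSD
    ceph-volume lvm new-db --osd-id 12 --osd-fsid <OSD_FSID> --target ceph-db-vg/db-osd-12

    # Bring the OSD back up
    systemctl start ceph-osd@12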

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-26 Thread Igor Fedotov
Well, so there is a single shared volume (disk) per OSD, right? If so one can add a dedicated DB volume to such an OSD - once done the OSD will have two underlying devices: main (which is the original shared disk) and the new dedicated DB one. And hence this will effectively provide additional space for BlueFS

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-26 Thread John Jasen
They're all bluefs_single_shared_device, if I understand your question. There's no room left on the devices to expand. We started at Quincy with this cluster, and didn't vary too much from the Red Hat Ceph Storage 6 documentation for setting it up. On Tue, Nov 26, 2024 at 4:48 AM Igor Fedotov wrote
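As an aside, one way to check that flag per OSD is the OSD metadata kept by the monitors (osd id 12 is an example; the exact key names here are assumptions based on typical OSD metadata output):

    # Last-reported metadata for the OSD, filtered to the BlueFS device layout flags
    ceph osd metadata 12 | grep -E 'bluefs_single_shared_device|bluefs_dedicated_db'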

[ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

2024-11-26 Thread Igor Fedotov
Hi John, you haven't described your OSD volume configuration, but you might want to try adding a standalone DB volume if the OSD uses LVM and has a single main device only. The 'ceph-volume lvm new-db' command is the preferred way of doing that, see https://docs.ceph.com/en/quincy/ceph-volume/lvm/newdb/
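For reference, the general shape of that command per the linked page, with a quick way to confirm the current layout first (the ids and the VG/LV name are placeholders):

    # Confirm the OSD currently has only a single (block) device
    ceph-volume lvm list

    # Attach a new LV as a dedicated DB device to the existing LVM OSD
    ceph-volume lvm new-db --osd-id <ID> --osd-fsid <FSID> --target <vg_name>/<db_lv_name>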