> misaligned. The solution (included in 17.2.6) was to allow BlueFS to
> allocate 4k extents when it couldn't find 64k contiguous extents. However,
> it seems that even with this fix, these OSDs still can't boot up.
>
> Therefore, the recommendation is to extend the RocksDB (DB) volume.
>>> > it's caused by high fragmentation and BlueFS's inability
>>> > to use chunks smaller than 64K. In fact the fragmentation
>>> > issue has been fixed since 17.2.6, so I doubt that's the problem.
>>> >
>>> > Hi Igor,
>>> >
>>> > I wasn't a...
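
To check how fragmented an OSD's main device actually is, the allocator score can be queried. This is only a sketch: the OSD id and data path are placeholders, the admin-socket form needs the OSD running, and the offline form needs a ceph-bluestore-tool release that includes free-score.

  # running OSD: fragmentation rating, roughly 0 (none) to 1 (heavily fragmented)
  ceph daemon osd.<id> bluestore allocator score block

  # stopped/down OSD: the same kind of score, computed offline
  ceph-bluestore-tool free-score --path /var/lib/ceph/osd/ceph-<id> --allocator block
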
>> > Does this ability depend on near/full thresholds
>> > being reached or not? If so, then increasing these thresholds by
>> > 1-2% may help avoid the crash, no?
>> > Also, if BlueFS is aware of these thresholds, shouldn't an
>> > OSD be able to start?
Frédéric.
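
For reference, this is roughly how the near/full ratios can be inspected and nudged up cluster-wide. The values below are only examples; raising them is a temporary escape hatch to get the OSDs booting, not a fix.

  # show the current nearfull / backfillfull / full ratios
  ceph osd dump | grep ratio

  # raise them by a couple of percent, as suggested above
  ceph osd set-nearfull-ratio 0.87
  ceph osd set-backfillfull-ratio 0.92
  ceph osd set-full-ratio 0.96
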
>
> Thanks,
>
> Igor
>
> On 27.11.2024 4:01, Szabo, Istvan (Agoda) wrote:
>
> Hi,
>
> This issue should not happen anymore from 17.2.8, am I
> correct? In this version all the fragmentation issues are fixed?
Sent: Wednesday, November 27, 2024 10:33 AM
To: Frédéric Nass; John Jasen; Igor Fedotov
Cc: ceph-users
Subject: [ceph-users] Re: down OSDs, Bluestore out of space, unable to restart
Got it, the perf dump can give this information:

ceph daemon osd.x perf dump | jq .bluefs
... with collocated wal+db+block.
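
For what it's worth, a jq filter along these lines pulls out the BlueFS counters that matter here. Treat it as a sketch: the OSD id is a placeholder and the exact counter names can vary a little between releases.

  # total vs. used bytes of the DB/WAL space as seen by BlueFS
  ceph daemon osd.<id> perf dump | jq '.bluefs | {db_total_bytes, db_used_bytes, wal_total_bytes, wal_used_bytes}'
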
Yep!
But better try with a single OSD first.
On 26.11.2024 20:48, John Jasen wrote:
Let me see if I have the approach right-ish:
scrounge some more disk for the servers with full/down OSDs.
partition the new disks into LVs for each downed OSD.
Attach as an LVM new-db to the downed OSDs.
Restart the OSDs.
From: Frédéric Nass
Sent: Wednesday, November 27, 2024 6:12:46 AM
To: John Jasen
Cc: Igor Fedotov; ceph-users
Subject: [ceph-users] Re: down OSDs, Bluestore out of space, unable to restart

Hi John,

That's about right. Two potential solutions exist:

1. Adding a new drive...
https://github.com/rook/rook/issues/9885#issuecomment-1761076861
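
If ceph-volume can't be used on a given setup, the lower-level ceph-bluestore-tool can attach a new DB device directly. A rough sketch with placeholder paths, to be run while the OSD is stopped; note that for LVM-based OSDs 'ceph-volume lvm new-db' is generally preferable because it also takes care of the LVM tags.

  # attach <new-device> to a stopped OSD as a dedicated DB device
  ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-<id> --dev-target <new-device>
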
From: John Jasen
Sent: Tuesday, November 26, 2024 18:50
To: Igor Fedotov
Cc: ceph-users
Subject: [ceph-users] Re: down OSDs, Bluestore out of space, unable to restart
Let me see if I have the approach right-ish:
scrounge some more disk for the servers with full/down OSDs.
partition the new disks into LVs for each downed OSD.
Attach as an LVM new-db to the downed OSDs.
Restart the OSDs.
Profit.
Is that about right?
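
For the record, those steps could look roughly like this for one OSD. Everything below is a placeholder sketch: disk, VG/LV names, LV size, OSD id and fsid all need adjusting, a separate LV is needed per downed OSD, and the systemd unit name assumes a non-containerized deployment.

  # carve an LV for osd.12 out of the scrounged disk
  pvcreate /dev/sdX
  vgcreate ceph-db /dev/sdX
  lvcreate -L 50G -n db-osd-12 ceph-db

  # with osd.12 stopped, attach the LV as its new DB volume
  # (the osd fsid can be found with 'ceph-volume lvm list')
  ceph-volume lvm new-db --osd-id 12 --osd-fsid <osd-fsid> --target ceph-db/db-osd-12

  # then try bringing the OSD back up
  systemctl start ceph-osd@12
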
On Tue, Nov 26, 2024 at 11:28 AM Igor Fedotov wrote:
Well, so there is a single shared volume (disk) per OSD, right?
If so, one can add a dedicated DB volume to such an OSD - once done, the OSD
will have two underlying devices: main (which is the original shared disk) and
the new dedicated DB one. And hence this will effectively provide additional
space for BlueFS.
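
One way to double-check afterwards that the OSD really ended up with two underlying devices; paths are placeholders and the OSD should be stopped while running ceph-bluestore-tool against it:

  # sizes of all devices BlueFS knows about for this OSD
  ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-<id>

  # label of the newly attached DB device
  ceph-bluestore-tool show-label --dev /dev/<vg>/<db_lv>
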
They're all bluefs_single_shared_device, if I understand your question.
There's no room left on the devices to expand.
We started at Quincy with this cluster, and didn't vary too much from the
Red Hat Ceph Storage 6 documentation for setting it up.
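
As an aside, the OSD metadata kept by the monitors reports this layout even while the OSD is down. A small sketch, assuming the field names used by recent releases and a placeholder OSD id:

  # "1" means everything sits on one shared device, with no dedicated DB/WAL
  ceph osd metadata 12 | jq '{bluefs_single_shared_device, bluefs_dedicated_db, bluefs_dedicated_wal}'
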
On Tue, Nov 26, 2024 at 4:48 AM Igor Fedotov wrote:
Hi John,
you haven't described your OSD volume configuration, but you might want
to try adding a standalone DB volume if the OSD uses LVM and has a single main
device only.
The 'ceph-volume lvm new-db' command is the preferred way of doing that, see
https://docs.ceph.com/en/quincy/ceph-volume/lvm/newdb/
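
Per that page, the invocation looks roughly like this; OSD id, fsid and VG/LV names are placeholders and the OSD has to be stopped first:

  # find the osd fsid of the stopped OSD
  ceph-volume lvm list

  # attach an existing LV as the new dedicated DB volume
  ceph-volume lvm new-db --osd-id 1 --osd-fsid <OSD_FSID> --target <vg>/<db_lv>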