Original Message
Subject: [ceph-users] crushmap rules :: host selection
From: Anthony D'Atri
To: Adrian Sevcenco
Date: 1/28/2024, 3:56:21 AM
First of all, thanks a lot for the info and for taking the time to help
a beginner :)
Pools are a logical name for a storage space, but how can
On 26.01.2024 22:08, Wesley Dillingham wrote:
I faced a similar issue. The PG just would never finish recovery. Changing
all OSDs in the PG to "osd_op_queue wpq" and then restarting them serially
ultimately allowed the PG to recover. Seemed to be some issue with mclock.
Thank you Wes, switchi
On 26.01.2024 23:09, Mark Nelson wrote:
For what it's worth, we saw this last week at Clyso on two separate
customer clusters on 17.2.7 and also solved it by moving back to wpq.
We've been traveling this week so haven't created an upstream tracker
for it yet, but we're back to recommending wpq
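For reference, switching the scheduler back to wpq usually looks something
like the sketch below (osd.3 is only a placeholder, and the orch command
assumes a cephadm-managed cluster; the option only takes effect after an OSD
restart):

  ceph config set osd osd_op_queue wpq      # cluster-wide default for OSDs
  ceph config show osd.3 osd_op_queue       # verify what a given OSD will use
  ceph orch daemon restart osd.3            # restart each affected OSD, one at a time

On non-cephadm deployments the restart would instead be something like
systemctl restart ceph-osd@3 on the OSD host.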
>
> First of all, thanks a lot for the info and for taking the time to help
> a beginner :)
Don't mention it. This is a community, it’s what we do. Next year you’ll
help someone else.
>>>
> Oh! So the device class is more like an arbitrary label, not an immutable
> defined property!
> looking at
OSD 22 shows up there more often than the others. Other operations may be
blocked because a deep-scrub has not finished yet. I would remove OSD 22, just
to be sure about this: ceph orch osd rm osd.22
If this does not help, just add it again.
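As a rough sketch (the host name and device path below are placeholders), the
removal can be watched and the OSD re-added afterwards:

  ceph orch osd rm status                    # watch the drain/removal progress
  ceph orch daemon add osd myhost:/dev/sdX   # re-add the OSD later if needed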
On Fri, Jan 26, 2024 at 08:05, Michel Niyoyita wrote <
mi
Hi
Just a continuation of this mail: could you help me understand the ceph
df output? PFA the screenshot with this mail.
1. Raw storage is 180 TB
2. Stored Value is 37 TB
3. Used Value is 112 TB
4. Available Value is 67 TB
5. Pool Max Available Value is 16 TB
Though the Available Value
>
> Just a continuation of this mail: could you help me understand the ceph
> df output? PFA the screenshot with this mail.
No idea what PFA means, but attachments usually don’t make it through on
mailing lists. Paste text instead.
> 1. Raw storage is 180 TB
The sum of OSD total capac
>> Oh! So the device class is more like an arbitrary label, not an immutable
>> defined property!
>> looking at
>> https://docs.ceph.com/en/reef/rados/operations/crush-map/#device-classes
>> this is not specified …
"By default, OSDs automatically set their class at startup to hdd, ssd, or
Hi,
You have 67 TB of raw space available. With a replication factor of 3,
which is what you seem to be using, that is ~22 TB of usable space under ideal
conditions.
The MAX AVAIL column shows the available space, taking into account the raw
space, the replication factor, and the CRUSH map, before the fi
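As a rough worked example from the numbers above: 67 TB raw / 3 replicas
≈ 22 TB, and 37 TB stored × 3 ≈ 112 TB used, which is why the replication
factor looks like 3. The 16 TB MAX AVAIL for the pool is lower still because,
roughly speaking, it also reflects how full the fullest OSDs already are and
the full-ratio safeguards, not just the raw total.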
Original Message
Subject: [ceph-users] Re: crushmap rules :: host selection
From: Anthony D'Atri
To: Adrian Sevcenco
Date: 1/28/2024, 6:03:21 PM
First of all, thanks a lot for the info and for taking the time to help
a beginner :)
Don't mention it. This is a community, it’s what we
>
>>> So it depends on the failure domain .. but with a host failure domain, if
>>> there is space on some other OSDs, will the missing OSDs be "healed" onto
>>> the available space on some other OSDs?
>> Yes, if you have enough hosts. When using 3x replication it is thus
>> advantageous to have at leas
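For context, a replicated CRUSH rule with host as the failure domain is
typically created along these lines (the rule and pool names are placeholders):

  ceph osd crush rule create-replicated rep_by_host default host
  ceph osd pool set mypool crush_rule rep_by_host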
Original Message
Subject: [ceph-users] crushmap rules :: host selection
From: Anthony D'Atri
To: Adrian Sevcenco
Date: 1/28/2024, 11:34:00 PM
So it depends on the failure domain .. but with a host failure domain, if
there is space on some other OSDs,
will the missing OSDs be "healed
>
> So .. in a PG there is no "file data", only pieces of "file data"?
Yes. Chapter 8 may help here, but be warned, it’s pretty dense and may confuse
more than help.
The foundation layer of Ceph is RADOS — services including block (RBD), file
(CephFS), and object (RGW) storage are built on top
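A quick way to see this in practice: each RADOS object maps to exactly one
PG, so a large file or image striped into many objects has its pieces spread
over many PGs. A minimal sketch, with pool and object names as placeholders:

  ceph osd map mypool some-object-name
  # prints the PG id and the acting set of OSDs for that single object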
On Sun, Jan 28, 2024 at 23:02, Adrian Sevcenco wrote:
>
> >> Is it wrong to think of PGs like a kind of object bucket (S3-like)?
> >
> > Mostly, yes.
> So .. in a PG there is no "file data", only pieces of "file data"?
> So a 100 GB file with 2x replication will be placed in more than 2 PGs?
> Is ther
Now they are increasing. On Friday I tried deep-scrubbing manually and they
completed successfully, but on Monday morning I found that they had increased
to 37. Is it best to deep-scrub manually while we are using the cluster? If
not, what is the best thing to do in order to address that?
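For what it's worth, a single PG can be deep-scrubbed on demand, and the scrub
deadline can be tuned instead of scrubbing by hand; a sketch, where the PG id
and interval are placeholders:

  ceph pg deep-scrub 2.1a                              # deep-scrub one PG now
  ceph config set osd osd_deep_scrub_interval 1209600  # e.g. 14 days, in seconds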
Hi all,
how can radosgw be deployed manually? For Ceph cluster deployment,
there is still (fortunately!) a documented method which works flawlessly
even in Reef:
https://docs.ceph.com/en/latest/install/manual-deployment/#monitor-bootstrapping
But as for radosgw, there is no such descript
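For what it's worth, the pre-cephadm, package-based approach looks roughly
like the sketch below; the instance name gw1, the caps, and the paths are
assumptions drawn from older releases, not a verified Reef recipe:

  mkdir -p /var/lib/ceph/radosgw/ceph-rgw.gw1
  ceph auth get-or-create client.rgw.gw1 mon 'allow rw' osd 'allow rwx' \
      -o /var/lib/ceph/radosgw/ceph-rgw.gw1/keyring
  # in ceph.conf:
  #   [client.rgw.gw1]
  #   rgw_frontends = beast port=8080
  systemctl enable --now ceph-radosgw@rgw.gw1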