[ceph-users] Automatic OSD activation after host reinstall

2025-02-13 Thread Eugen Block

Good morning,

this week I observed something new; at least I can't recall having seen
it before. Last week I upgraded a customer cluster to 18.2.4 (everything
worked fine except that the RGWs keep crashing [0]); this week I
reinstalled the OS on one of the hosts. After a successful registry
login, the OSDs were activated automatically, without me having to run
'ceph cephadm osd activate ' as documented in [1]. Zac improved the docs
just last week. Is that step obsolete now, or is the behaviour
version-specific? A few weeks ago we reinstalled our own Ceph servers as
well, but they still run Pacific 16.2.15, and there I had to trigger the
OSD activation manually. Can anyone confirm this?
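
For reference, a rough sketch of the manual step I mean (the host name is
a placeholder, as in the docs):

# documented manual activation after an OS reinstall [1]
ceph cephadm osd activate <host>

On a host without the orchestrator, running 'ceph-volume lvm activate --all'
locally would be the rough equivalent.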


Thanks!
Eugen

[0] https://tracker.ceph.com/issues/69885
[1]  
https://docs.ceph.com/en/latest/cephadm/services/osd/#activate-existing-osds

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: <---- breaks grouping of messages

2025-02-13 Thread Chris Palmer
And it is also the root cause of some of the DMARC problems that I
highlighted back in 2020 [0], which were sort of fudged around in 2023 [1].


If the list stopped changing the Subject line, these problems would go
away. Injecting a specific header that identifies the mailing list is a
better approach, and such a header is easy to match for automatic filing
of messages.


[0] https://www.mail-archive.com/ceph-users@ceph.io/msg05564.html
[1] https://www.mail-archive.com/ceph-users@ceph.io/msg20016.html

On 13/02/2025 11:48, Marc wrote:

It is annoying that the subject modification of putting [ceph-users] first is 
breaking grouping of messages.

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Radosgw log Custom Headers

2025-02-13 Thread Paul JURCO
The config reference specifies a comma-delimited list of headers, so
remove the spaces:

Comma-delimited list of HTTP headers to include with ops log entries.
> Header names are case insensitive, and use the full header name with words
> separated by underscores.
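
For example, a minimal sketch reusing the header names from Rok's snippet
(note: no spaces after the commas):

ceph config set global rgw_log_http_headers http_x_forwarded_for,http_expect,http_content_md5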



-- 
Paul Jurco


On Wed, Feb 12, 2025 at 9:43 PM Rok Jaklič  wrote:

> What about something like this in rgw section in ceph.conf?
>
> rgw_enable_ops_log = true
> rgw_log_http_headers = http_x_forwarded_for, http_expect, http_content_md5
> rgw_ops_log_file_path = /var/log/ceph/mon1.rgw-ops.log
>
> Rok
>
>
>
> On Wed, Feb 12, 2025 at 2:19 PM Paul JURCO  wrote:
>
>> Same here, it worked only after rgw service was restarted using this
>> config:
>> rgw_log_http_headers   http_x_forwarded_for
>>
>> --
>> Paul Jurco
>>
>>
>> On Wed, Feb 12, 2025 at 2:29 PM Ansgar Jazdzewski <
>> a.jazdzew...@googlemail.com> wrote:
>>
>> > Hi folks,
>> >
>> > i like to make sure that the RadosGW is using the X-Forwarded-For as
>> > source-ip for ACL's
>> > However i do not find the information in the logs:
>> >
>> > i have set (using beast):
>> > ceph config set global rgw_remote_addr_param http_x_forwarded_for
>> > ceph config set global rgw_log_http_headers http_x_forwarded_for
>> >
>> >
>> >
>> https://docs.ceph.com/en/quincy/radosgw/config-ref/#confval-rgw_log_http_headers
>> >
>> > I hope someone can point me in the right direction!
>> >
>> > Thanks,
>> > Ansgar
>>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Does the number of PGs affect the total usable size of a pool?

2025-02-13 Thread Anthony D'Atri
Does the new host show up under the proper CRUSH bucket?  Do its OSDs?  Send 
`ceph osd tree` please.


>> 
>> 
>>  > Hello guys,
>>  > Let's say I have a cluster with 4 nodes with 24 SSDs each, and a
>> single
>>  > pool that consumes all OSDs of all nodes. After adding another
>> host, I
>>  > noticed that no extra space was added. Can this be a result of
>> the
>>  > number
>>  > of PGs I am using?
>>  >
>>  > I mean, when adding more hosts/OSDs, should I always consider
>> increasing
>>  > the number of PGs from a pool?
>>  >
>> 
>>  ceph osd tree
>> 
>>  shows all up and with correct weight?
>> 
> 
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Does the number of PGs affect the total usable size of a pool?

2025-02-13 Thread Work Ceph
Thanks for the prompt reply.

Yes, it does. All of them are up, with the correct class that is used by
the CRUSH algorithm.

On Thu, Feb 13, 2025 at 7:47 AM Marc  wrote:

> > Hello guys,
> > Let's say I have a cluster with 4 nodes with 24 SSDs each, and a single
> > pool that consumes all OSDs of all nodes. After adding another host, I
> > noticed that no extra space was added. Can this be a result of the
> > number
> > of PGs I am using?
> >
> > I mean, when adding more hosts/OSDs, should I always consider increasing
> > the number of PGs from a pool?
> >
>
> ceph osd tree
>
> shows all up and with correct weight?
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Does the number of PGs affect the total usable size of a pool?

2025-02-13 Thread Work Ceph
Hello guys,
Let's say I have a cluster with 4 nodes with 24 SSDs each, and a single
pool that consumes all OSDs of all nodes. After adding another host, I
noticed that no extra space was added. Can this be a result of the number
of PGs I am using?

I mean, when adding more hosts/OSDs, should I always consider increasing
the number of PGs of the pool?
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Does the number of PGs affect the total usable size of a pool?

2025-02-13 Thread Marc
> Hello guys,
> Let's say I have a cluster with 4 nodes with 24 SSDs each, and a single
> pool that consumes all OSDs of all nodes. After adding another host, I
> noticed that no extra space was added. Can this be a result of the
> number
> of PGs I am using?
> 
> I mean, when adding more hosts/OSDs, should I always consider increasing
> the number of PGs from a pool?
>

ceph osd tree 

shows all up and with correct weight?
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Does the number of PGs affect the total usable size of a pool?

2025-02-13 Thread Marc
I still think they are not part of the cluster somehow. "ceph osd status" most
likely shows they are not being used. When you add even just one OSD, you should
see a change in your cluster capacity and some rebalancing. Is the status of the
cluster HEALTH_OK?
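
In concrete terms, something like:

ceph osd status    # per-OSD host, usage and state; unused OSDs stand out here
ceph -s            # overall health and any ongoing rebalancing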


> Thanks for the prompt reply.
> 
> Yes, it does. All of them are up, with the correct class that is used by
> the crush algorithm.
> 
> On Thu, Feb 13, 2025 at 7:47 AM Marc   > wrote:
> 
> 
>   > Hello guys,
>   > Let's say I have a cluster with 4 nodes with 24 SSDs each, and a
> single
>   > pool that consumes all OSDs of all nodes. After adding another
> host, I
>   > noticed that no extra space was added. Can this be a result of
> the
>   > number
>   > of PGs I am using?
>   >
>   > I mean, when adding more hosts/OSDs, should I always consider
> increasing
>   > the number of PGs from a pool?
>   >
> 
>   ceph osd tree
> 
>   shows all up and with correct weight?
> 

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] <---- breaks grouping of messages

2025-02-13 Thread Marc


It is annoying that the subject modification of putting [ceph-users] first is 
breaking grouping of messages. 
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Does the number of PGs affect the total usable size of a pool?

2025-02-13 Thread Work Ceph
Thanks for the feedback!

Yes, HEALTH_OK is there.

The OSD status shows all of them as "exists,up".

The interesting part is that "ceph df" shows the correct values in the "RAW
STORAGE" section. However, for the SSD pool I have, it still shows only the
previous value as the maximum usable space.
I had 384 TiB of raw space before. The SSD pool is a replicated pool with
replica size 3, so I had about 128 TiB of possible usable space for the pool.
Now that I have added a new node, I would expect 480 TiB of raw space, which
is what the RAW STORAGE section reports, but the usable space of the pool has
not changed. I would expect it to grow to about 160 TiB. I know these limits
will never actually be reached, as we have limits at 85%-90% for each OSD.
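
For reference, these are the outputs I am looking at (the pool name is a
placeholder):

ceph df detail                        # RAW STORAGE vs per-pool MAX AVAIL
ceph osd df tree                      # per-OSD utilisation and PG counts per host
ceph osd pool get <ssd-pool> pg_num   # current pg_num of the pool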

On Thu, Feb 13, 2025 at 8:41 AM Marc  wrote:

> I still think they are not part of the cluster somehow. "ceph osd status"
> shows most likely they are not used. When you add just 1 osd you should see
> something in your cluster capacity and some rebalancing. Status of ceph is
> HEALTH_OK?
>
>
> > Thanks for the prompt reply.
> >
> > Yes, it does. All of them are up, with the correct class that is used by
> > the crush algorithm.
> >
> > On Thu, Feb 13, 2025 at 7:47 AM Marc  >  > wrote:
> >
> >
> >   > Hello guys,
> >   > Let's say I have a cluster with 4 nodes with 24 SSDs each, and a
> > single
> >   > pool that consumes all OSDs of all nodes. After adding another
> > host, I
> >   > noticed that no extra space was added. Can this be a result of
> > the
> >   > number
> >   > of PGs I am using?
> >   >
> >   > I mean, when adding more hosts/OSDs, should I always consider
> > increasing
> >   > the number of PGs from a pool?
> >   >
> >
> >   ceph osd tree
> >
> >   shows all up and with correct weight?
> >
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Does the number of PGs affect the total usable size of a pool?

2025-02-13 Thread Work Ceph
Yes, the bucket that represents the new host is under the ROOT bucket like
the others, and the OSDs are in the right/expected bucket.

I am guessing that the problem is the number of PGs. I have 120 OSDs across
all hosts, and I suspect that the 512 PGs the pool is currently using are not
enough. I have not changed it yet, because I first wanted to understand the
effect of the PG number on a Ceph pool's usable volume.
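
If the PG count turns out to be the limiting factor, the change I have in mind
is roughly this (rule-of-thumb numbers only; the pool name is a placeholder):

# a common target is ~100 PGs per OSD: 120 OSDs * 100 / 3 replicas = 4000,
# next power of two is 4096
ceph osd pool autoscale-status             # see what the autoscaler would suggest
ceph osd pool set <ssd-pool> pg_num 4096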

On Thu, Feb 13, 2025 at 12:03 PM Anthony D'Atri 
wrote:

> Does the new host show up under the proper CRUSH bucket?  Do its OSDs?
> Send `ceph osd tree` please.
>
>
> >>
> >>
> >>  > Hello guys,
> >>  > Let's say I have a cluster with 4 nodes with 24 SSDs each, and a
> >> single
> >>  > pool that consumes all OSDs of all nodes. After adding another
> >> host, I
> >>  > noticed that no extra space was added. Can this be a result of
> >> the
> >>  > number
> >>  > of PGs I am using?
> >>  >
> >>  > I mean, when adding more hosts/OSDs, should I always consider
> >> increasing
> >>  > the number of PGs from a pool?
> >>  >
> >>
> >>  ceph osd tree
> >>
> >>  shows all up and with correct weight?
> >>
> >
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Does the number of PGs affect the total usable size of a pool?

2025-02-13 Thread Work Ceph
Yes, everything has finished converging already.

On Thu, Feb 13, 2025 at 12:33 PM Janne Johansson 
wrote:

> Den tors 13 feb. 2025 kl 12:54 skrev Work Ceph
> :
> > Thanks for the feedback!
> > Yes, the Heath_ok is there.]
> > The OSD status show all of them as "exists,up".
> >
> > The interesting part is that "ceph df" shows the correct values in the
> "RAW
> > STORAGE" section. However, for the SSD pool I have, it shows only the
> > previous value as the max usable value.
> > I had 384 TiB as the RAW space before. The SSD pool is a replicated pool
> > with replica as 3. Therefore, I had about 128TiB as possible usable space
> > for the pool before. Now that I added a new node, I would expect 480 RAW
> > space, which is what I have in the RAW STORAGE section, but the usable
> > space to be used in the pool has not changed. I would expect the usable
> > space to grow at about 160TiB. I know that these limits will never be
> > reached as we have locks in 85%-90% for each OSD.
>
> Has all PGs moved yet? If not, then you have to wait until the old
> OSDs have moved PGs over the the newly added ones.
>
> --
> May the most significant bit of your life be positive.
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Ceph Events Survey -- Your Input Wanted!

2025-02-13 Thread Dan van der Ster
Hi all,

The Ceph Foundation and Ambassadors are busy building a full calendar
of Ceph events for 2025!

As many of you know, Ceph has always thrived on its amazing community.
Events like Ceph Days and Cephalocon are key points of interaction
where we all can learn, connect, and share experience.

Following the success of Cephalocon at CERN and Ceph Days in India,
we’ve announced Ceph Days London & Silicon Valley -- check out
https://ceph.io/en/community/events/ to get involved.
And watch that space -- Ceph Days in Seattle, New York, and Berlin
will be announced soon!

Looking forward, we need your help to shape our future events...
and to plan our next Cephalocon!
If you have a moment, please share your thoughts in our Ceph Events
survey: https://forms.gle/Rm41d547Rb59S8xf9

Looking forward to seeing you at an event soon!

---
Dan van der Ster
Ceph Executive Council
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Does the number of PGs affect the total usable size of a pool?

2025-02-13 Thread Anthony D'Atri
Assuming that the pool is replicated, 512 PGs is pretty low if this is the only
substantial pool on the cluster. When you run `ceph osd df`, the PGS column on
the right would average around 12 or 13, which is super low.
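
For what it's worth, the arithmetic behind that estimate:

# PGs per OSD ~= pg_num * replica_size / number_of_OSDs = 512 * 3 / 120 ~= 12.8
# (a common target is on the order of ~100); the PGS column of the output below
# shows the actual per-OSD figure:
ceph osd df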

> On Feb 13, 2025, at 11:40 AM, Work Ceph  
> wrote:
> 
> Yes, the bucket that represents the new host is under the ROOT bucket as the 
> others. Also, the OSDs are in the right/expected bucket.
> 
> I am guessing that the problem is the number of PGs. I have 120 OSDs across 
> all hosts, and I guess that 512 PGS, which is what the pool is using, is not 
> enough. I did not change it yet, because I wanted to understand the effect on 
> PG number in Ceph pool usable volume.
> 
> On Thu, Feb 13, 2025 at 12:03 PM Anthony D'Atri  > wrote:
>> Does the new host show up under the proper CRUSH bucket?  Do its OSDs?  Send 
>> `ceph osd tree` please.
>> 
>> 
>> >> 
>> >> 
>> >>  > Hello guys,
>> >>  > Let's say I have a cluster with 4 nodes with 24 SSDs each, and a
>> >> single
>> >>  > pool that consumes all OSDs of all nodes. After adding another
>> >> host, I
>> >>  > noticed that no extra space was added. Can this be a result of
>> >> the
>> >>  > number
>> >>  > of PGs I am using?
>> >>  >
>> >>  > I mean, when adding more hosts/OSDs, should I always consider
>> >> increasing
>> >>  > the number of PGs from a pool?
>> >>  >
>> >> 
>> >>  ceph osd tree
>> >> 
>> >>  shows all up and with correct weight?
>> >> 
>> > 
>> 

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Does the number of PGs affect the total usable size of a pool?

2025-02-13 Thread Work Ceph
Exactly, that is what I am assuming. However, my question is: can I assume
that the PG number affects the maximum available space that a pool will be
able to use?

On Thu, Feb 13, 2025 at 3:09 PM Anthony D'Atri 
wrote:

> Assuming that the pool is replicated, 512 PGs is pretty low if this is the
> only substantial pool on the cluster.  When you do `ceph osd df`, if this
> is the only substantial pool, the PGS column at right would average around
> 12 or 13 which is sper low.
>
> On Feb 13, 2025, at 11:40 AM, Work Ceph 
> wrote:
>
> Yes, the bucket that represents the new host is under the ROOT bucket as
> the others. Also, the OSDs are in the right/expected bucket.
>
> I am guessing that the problem is the number of PGs. I have 120 OSDs
> across all hosts, and I guess that 512 PGS, which is what the pool is
> using, is not enough. I did not change it yet, because I wanted to
> understand the effect on PG number in Ceph pool usable volume.
>
> On Thu, Feb 13, 2025 at 12:03 PM Anthony D'Atri 
> wrote:
>
>> Does the new host show up under the proper CRUSH bucket?  Do its OSDs?
>> Send `ceph osd tree` please.
>>
>>
>> >>
>> >>
>> >>  > Hello guys,
>> >>  > Let's say I have a cluster with 4 nodes with 24 SSDs each, and a
>> >> single
>> >>  > pool that consumes all OSDs of all nodes. After adding another
>> >> host, I
>> >>  > noticed that no extra space was added. Can this be a result of
>> >> the
>> >>  > number
>> >>  > of PGs I am using?
>> >>  >
>> >>  > I mean, when adding more hosts/OSDs, should I always consider
>> >> increasing
>> >>  > the number of PGs from a pool?
>> >>  >
>> >>
>> >>  ceph osd tree
>> >>
>> >>  shows all up and with correct weight?
>> >>
>> >
>>
>>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Radosgw log Custom Headers

2025-02-13 Thread Ansgar Jazdzewski
Hi folks,

I got it to work with the mentioned changes.
My mistake was looking for the log in the `rgw_ops_log_file_path`.

However, I would love to have the custom headers as additional values in
the beast log output, so that we can verify the fields actually make their
way into the radosgw.
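
In the meantime, a way to double-check that the headers at least reach the
ops log (a sketch pieced together from this thread; the file path is only an
example, and the rgw daemons need a restart as Paul noted):

ceph config set global rgw_enable_ops_log true
ceph config set global rgw_log_http_headers http_x_forwarded_for
ceph config set global rgw_ops_log_file_path /var/log/ceph/rgw-ops.log
# restart the rgw daemons, then watch the ops log for the logged header
tail -f /var/log/ceph/rgw-ops.log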

I also made a suggestion to change the documentation for
rgw_remote_addr_param and rgw_log_http_headers.

cheers
Ansgar

Am Do., 13. Feb. 2025 um 11:30 Uhr schrieb Paul JURCO :
>
> Config reference specifies it should be a list of comma-delimited headers, so 
> remove spaces:
>
>> Comma-delimited list of HTTP headers to include with ops log entries. Header 
>> names are case insensitive, and use the full header name with words 
>> separated by underscores.
>
>
>
> --
> Paul Jurco
>
>
> On Wed, Feb 12, 2025 at 9:43 PM Rok Jaklič  wrote:
>>
>> What about something like this in rgw section in ceph.conf?
>>
>> rgw_enable_ops_log = true
>> rgw_log_http_headers = http_x_forwarded_for, http_expect, http_content_md5
>> rgw_ops_log_file_path = /var/log/ceph/mon1.rgw-ops.log
>>
>> Rok
>>
>>
>>
>> On Wed, Feb 12, 2025 at 2:19 PM Paul JURCO  wrote:
>>>
>>> Same here, it worked only after rgw service was restarted using this config:
>>> rgw_log_http_headers   http_x_forwarded_for
>>>
>>> --
>>> Paul Jurco
>>>
>>>
>>> On Wed, Feb 12, 2025 at 2:29 PM Ansgar Jazdzewski <
>>> a.jazdzew...@googlemail.com> wrote:
>>>
>>> > Hi folks,
>>> >
>>> > i like to make sure that the RadosGW is using the X-Forwarded-For as
>>> > source-ip for ACL's
>>> > However i do not find the information in the logs:
>>> >
>>> > i have set (using beast):
>>> > ceph config set global rgw_remote_addr_param http_x_forwarded_for
>>> > ceph config set global rgw_log_http_headers http_x_forwarded_for
>>> >
>>> >
>>> > https://docs.ceph.com/en/quincy/radosgw/config-ref/#confval-rgw_log_http_headers
>>> >
>>> > I hope someone can point me in the right direction!
>>> >
>>> > Thanks,
>>> > Ansgar
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Ceph calculator

2025-02-13 Thread Devender Singh
Hello all


Do we have a good cluster design calculator that can suggest the failure
domain, pool size and min_size according to the number of nodes and drives
and their sizes, for both replicated and EC pools?

Regards
Dev
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Does the number of PGs affect the total usable size of a pool?

2025-02-13 Thread Janne Johansson
Den tors 13 feb. 2025 kl 12:54 skrev Work Ceph
:
> Thanks for the feedback!
> Yes, the Heath_ok is there.]
> The OSD status show all of them as "exists,up".
>
> The interesting part is that "ceph df" shows the correct values in the "RAW
> STORAGE" section. However, for the SSD pool I have, it shows only the
> previous value as the max usable value.
> I had 384 TiB as the RAW space before. The SSD pool is a replicated pool
> with replica as 3. Therefore, I had about 128TiB as possible usable space
> for the pool before. Now that I added a new node, I would expect 480 RAW
> space, which is what I have in the RAW STORAGE section, but the usable
> space to be used in the pool has not changed. I would expect the usable
> space to grow at about 160TiB. I know that these limits will never be
> reached as we have locks in 85%-90% for each OSD.

Have all PGs moved yet? If not, you have to wait until the old OSDs have
moved PGs over to the newly added ones.
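
A quick way to check is, for example:

ceph -s          # shows remapped / backfilling PGs while data is still moving
ceph pg stat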

-- 
May the most significant bit of your life be positive.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph calculator

2025-02-13 Thread Marc
Yes, we have! I asked the same question not so long ago and got some nice
results. You can search for it on something like mail-archive.com

> Hello all
> 
> 
> Do we have a good cluster design calculator which can suggest failure
> domain and pool size and min size according the number of nodes and
> drive
> and their size for both replicated and EC pools.
> 
> Regards
> Dev
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Does the number of PGs affect the total usable size of a pool?

2025-02-13 Thread Anthony D'Atri
I think that would only happen if pg_num for a 3R pool were less than roughly
1/3 the number of OSDs, assuming aligned device classes, proper CRUSH rules,
topology, etc.

Mind you, if pg_num is low the balancer won't be able to do a great job of
distributing data uniformly. If you set pg_num to a non-power-of-two value,
that complicates things as well since you'd have PGs of very different sizes,
but that is rarely seen.

Sharing `ceph osd tree`, `ceph osd dump | grep pool` and `ceph osd df` would
help.
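
For completeness, that would be something like:

ceph osd tree                 # topology and weights
ceph osd dump | grep pool     # pg_num, size and crush rule per pool
ceph osd df                   # per-OSD PGS column and %USE spread
ceph balancer status          # whether the balancer is enabled and in which mode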

> On Feb 13, 2025, at 1:11 PM, Work Ceph  
> wrote:
> 
> Exactly, that is what I am assuming. However, my question is. Can I assume 
> that the PG number will affect the Max available space that a pool will be 
> able to use?
> 
> On Thu, Feb 13, 2025 at 3:09 PM Anthony D'Atri  > wrote:
>> Assuming that the pool is replicated, 512 PGs is pretty low if this is the 
>> only substantial pool on the cluster.  When you do `ceph osd df`, if this is 
>> the only substantial pool, the PGS column at right would average around 12 
>> or 13 which is sper low.  
>> 
>>> On Feb 13, 2025, at 11:40 AM, Work Ceph >> > wrote:
>>> 
>>> Yes, the bucket that represents the new host is under the ROOT bucket as 
>>> the others. Also, the OSDs are in the right/expected bucket.
>>> 
>>> I am guessing that the problem is the number of PGs. I have 120 OSDs across 
>>> all hosts, and I guess that 512 PGS, which is what the pool is using, is 
>>> not enough. I did not change it yet, because I wanted to understand the 
>>> effect on PG number in Ceph pool usable volume.
>>> 
>>> On Thu, Feb 13, 2025 at 12:03 PM Anthony D'Atri >> > wrote:
 Does the new host show up under the proper CRUSH bucket?  Do its OSDs?  
 Send `ceph osd tree` please.
 
 
 >> 
 >> 
 >>  > Hello guys,
 >>  > Let's say I have a cluster with 4 nodes with 24 SSDs each, and a
 >> single
 >>  > pool that consumes all OSDs of all nodes. After adding another
 >> host, I
 >>  > noticed that no extra space was added. Can this be a result of
 >> the
 >>  > number
 >>  > of PGs I am using?
 >>  >
 >>  > I mean, when adding more hosts/OSDs, should I always consider
 >> increasing
 >>  > the number of PGs from a pool?
 >>  >
 >> 
 >>  ceph osd tree
 >> 
 >>  shows all up and with correct weight?
 >> 
 > 
 
>> 

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io