Good morning,
this week I observed something new, I think; at least I can't recall
having seen it before. Last week I upgraded a customer cluster to
18.2.4 (everything worked fine except that the RGWs keep crashing [0]), this
week I reinstalled the OS on one of the hosts. And after a successful
reg
And it is also the root cause of some of the DMARC problems that I
highlighted back in 2020 [0], that were sort of fudged around in 2023 [1].
If the list stopped changing the Subject line these problems would go
away. Injecting a specific header identifying the mailing list name is a
better wa
The config reference specifies it should be a comma-delimited list of headers,
so remove the spaces:
> Comma-delimited list of HTTP headers to include with ops log entries.
> Header names are case insensitive, and use the full header name with words
> separated by underscores.
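Assuming the option in question is rgw_ops_log_http_headers (the description
above matches it), setting it without spaces would look something like this;
the header names here are only placeholders:

    ceph config set client.rgw rgw_ops_log_http_headers "http_x_forwarded_for,http_user_agent"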
--
Paul Jurco
On Wed, Fe
Does the new host show up under the proper CRUSH bucket? Do its OSDs? Send
`ceph osd tree` please.
>>
>>
>> > Hello guys,
>> > Let's say I have a cluster with 4 nodes with 24 SSDs each, and a
>> > single pool that consumes all OSDs of all nodes. After adding another
>> > host
Thanks for the prompt reply.
Yes, it does. All of them are up, with the correct class that is used by
the CRUSH algorithm.
On Thu, Feb 13, 2025 at 7:47 AM Marc wrote:
> > Hello guys,
> > Let's say I have a cluster with 4 nodes with 24 SSDs each, and a single
> > pool that consumes all OSDs of a
Hello guys,
Let's say I have a cluster with 4 nodes with 24 SSDs each, and a single
pool that consumes all OSDs of all nodes. After adding another host, I
noticed that no extra space was added. Can this be a result of the number
of PGs I am using?
I mean, when adding more hosts/OSDs, should I alwa
> Hello guys,
> Let's say I have a cluster with 4 nodes with 24 SSDs each, and a single
> pool that consumes all OSDs of all nodes. After adding another host, I
> noticed that no extra space was added. Can this be a result of the
> number
> of PGs I am using?
>
> I mean, when adding more hosts/OSD
I still think they are not part of the cluster somehow. Most likely "ceph osd
status" shows they are not being used. When you add even a single OSD you
should see a change in your cluster capacity and some rebalancing. Is the
status of the cluster HEALTH_OK?
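A quick way to check (a minimal sketch; the exact output layout varies by
release):

    ceph -s          # overall health and a capacity summary
    ceph osd tree    # is the new host under the expected CRUSH bucket?
    ceph osd df      # per-OSD utilization and PG counts
    ceph df          # pool-level usage and MAX AVAIL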
> Thanks for the prompt reply.
>
> Yes, it does. All of t
It is annoying that the subject modification of putting [ceph-users] first
breaks the grouping of messages.
Thanks for the feedback!
Yes, the HEALTH_OK is there.
The OSD status shows all of them as "exists,up".
The interesting part is that "ceph df" shows the correct values in the "RAW
STORAGE" section. However, for the SSD pool I have, it shows only the
previous value as the max usable value.
I had 38
Yes, the bucket that represents the new host is under the ROOT bucket as
the others. Also, the OSDs are in the right/expected bucket.
I am guessing that the problem is the number of PGs. I have 120 OSDs across
all hosts, and I guess that 512 PGs, which is what the pool is using, is
not enough. I d
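To double-check the current value and see whether the autoscaler has an
opinion on it, something like this should work (the pool name is a
placeholder):

    ceph osd pool get <pool> pg_num
    ceph osd pool autoscale-status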
Yes, everything has finished converging already.
On Thu, Feb 13, 2025 at 12:33 PM Janne Johansson
wrote:
> On Thu, 13 Feb 2025 at 12:54, Work Ceph wrote:
> > Thanks for the feedback!
> > Yes, the HEALTH_OK is there.
> > The OSD status shows all of them as "exists,up".
> >
> > The interesting
Hi all,
The Ceph Foundation and Ambassadors are busy building a full calendar
of Ceph events for 2025!
As many of you know, Ceph has always thrived on its amazing community.
Events like Ceph Days and Cephalocon are key points of interaction
where we all can learn, connect, and share experiences.
Assuming that the pool is replicated, 512 PGs is pretty low if this is the only
substantial pool on the cluster. When you do `ceph osd df`, the PGS column at
the right would average around 12 or 13, which is super low.
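For the numbers in this thread that works out to roughly
512 PGs x 3 replicas / 120 OSDs ≈ 12.8 PG replicas per OSD, versus the usual
rule-of-thumb target of around 100 per OSD.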
> On Feb 13, 2025, at 11:40 AM, Work
Exactly, that is what I am assuming. However, my question is: can I assume
that the PG number will affect the max available space that a pool will be
able to use?
On Thu, Feb 13, 2025 at 3:09 PM Anthony D'Atri
wrote:
> Assuming that the pool is replicated, 512 PGs is pretty low if this is the
>
Hi folks,
I got it to work with the mentioned changes.
My mistake was to look for the log in the `rgw_ops_log_file_path`.
However, I would love to have the custom headers as additional values
in the beast log output, so we can verify that the fields make their way
into radosgw.
I also made a sugges
Hello all,
Do we have a good cluster design calculator that can suggest the failure
domain, pool size, and min_size according to the number of nodes and drives
and their sizes, for both replicated and EC pools?
Regards
Dev
On Thu, 13 Feb 2025 at 12:54, Work Ceph wrote:
> Thanks for the feedback!
> Yes, the HEALTH_OK is there.
> The OSD status shows all of them as "exists,up".
>
> The interesting part is that "ceph df" shows the correct values in the "RAW
> STORAGE" section. However, for the SSD pool I have, it shows
Yes, we have! I asked the same question not so long ago, with some nice
results. You may search for it with something like mail-archive.com.
> Hello all
>
>
> Do we have a good cluster design calculator which can suggest failure
> domain and pool size and min size according to the number of nodes and
> drive
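Until that older thread turns up, here is a rough back-of-the-envelope
sketch rather than a real calculator; the node/drive counts, drive size,
pool parameters, and headroom factor below are all illustrative assumptions:

    # Rough usable-capacity estimate; every input here is a made-up example.
    def usable_capacity_tb(raw_tb, replicated=True, size=3, k=4, m=2,
                           headroom=0.85):
        if replicated:
            efficiency = 1.0 / size      # e.g. 3 replicas -> 1/3 of raw
        else:
            efficiency = k / (k + m)     # e.g. EC 4+2 -> 2/3 of raw
        return raw_tb * efficiency * headroom

    # Made-up example: 5 nodes x 24 SSDs x 1.92 TB each
    raw = 5 * 24 * 1.92
    print(f"3-replica: {usable_capacity_tb(raw):.1f} TB usable")
    print(f"EC 4+2:    {usable_capacity_tb(raw, replicated=False):.1f} TB usable")

For min_size, the common choices are 2 for a 3-replica pool and k+1 for EC
pools, but that deserves its own discussion.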
I think that would only happen if pg_num for a 3R pool were less than roughly
1/3 the number of OSDs, assuming aligned device classes, proper CRUSH rules,
topology, etc.
Mind you, if pg_num is low, the balancer won't be able to do a great job of
distributing data uniformly. If you set pg_num
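If you do end up raising it by hand, the usual approach is something like the
following (pool name and target are placeholders; pick a power of two that
lands you near ~100 PG replicas per OSD, e.g. 4096 x 3 / 120 ≈ 102 here, and
note that recent releases adjust pgp_num for you gradually):

    ceph osd pool set <pool> pg_num 4096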