> To: "Dan van der Ster", "ceph-users"
> Sent: Monday, 14 December, 2020 10:25:32
> Subject: [ceph-users] Re: osd_pglog memory hoarding - another case
> Hi all,
> Ok, so I have some updates on this.
>
> We noticed that we had a bucket with tons of RGW garbage collection p
> From: "Dan van der Ster"
> To: "Kalle Happonen"
> Cc: "ceph-users"
> Sent: Tuesday, 1 December, 2020 16:53:50
> Subject: Re: [ceph-users] Re: osd_pglog memory hoarding - another case
> Hi Kalle,
>
> Thanks for the update. Unfortunately I haven't made any progress on
> understanding the root cause of this issue.
> Kalle
>
> - Original Message -
> From: "Kalle Happonen"
> To: "Dan van der Ster"
> Cc: "ceph-users"
> Sent: Tuesday, 1 December, 2020 15:09:37
> Subject: [ceph-users] Re: osd_pglog memory hoarding - another case
> Hi All,
> back to this. Dan, it seems we're following exactly in your footsteps.
>
> - Original Message -
> From: "Kalle Happonen"
> To: "Dan van der Ster"
> Cc: "ceph-users"
> Sent: Thursday, 19 November, 2020 13:56:37
> Subject: [ceph-users] Re: osd_pglog memory hoarding - another case
> Hello,
> I thought I'd post an update.
>
> Setting the pg_log size [...] should hopefully not increase pg_log memory consumption.
Cheers,
Kalle
- Original Message -
> From: "Kalle Happonen"
> To: "Dan van der Ster"
> Cc: "ceph-users"
> Sent: Tuesday, 17 November, 2020 16:07:03
> Subject: [ceph-users] Re: osd_pglog memory hoarding - another case
[...] have issues with memory.
Cheers,
Kalle
- Original Message -
> From: "Kalle Happonen"
> To: "Dan van der Ster"
> Cc: "ceph-users"
> Sent: Tuesday, 17 November, 2020 12:45:25
> Subject: [ceph-users] Re: osd_pglog memory hoarding - another case
Hi Dan @ co.,
Thanks for the support (moral and technical).
That sounds like a good guess, but it seems like there is nothing alarming
here. In all our pools, some pgs are a bit over 3100, but not at any
exceptional values.
cat pgdumpfull.txt | jq '.pg_map.pg_stats[] | select(.ondisk_log_size > 3000)'
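As a sanity check on whether pg_logs of this length could explain OSD memory use, a back-of-the-envelope estimate helps; the PG count and the per-entry byte figure below are illustrative assumptions, not measured values:

```python
# Rough pg_log memory per OSD: entries/PG x PGs/OSD x bytes/entry.
entries_per_pg = 3100   # the largest values seen in the dump above
pgs_per_osd = 200       # assumed; check `ceph osd df` for real counts
bytes_per_entry = 250   # assumed order of magnitude; real entries vary

total_bytes = entries_per_pg * pgs_per_osd * bytes_per_entry
print(f"~{total_bytes / 2**20:.0f} MiB per OSD")  # ~148 MiB per OSD
```

At roughly 150 MiB per OSD this would be unremarkable, which is why logs of ~3100 entries are "not at any exceptional values" and the interesting signal is logs in the hundreds of thousands of entries.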
Hi Kalle,
Do you have active PGs now with huge pglogs?
You can do something like this to find them:
ceph pg dump -f json | jq '.pg_map.pg_stats[] |
select(.ondisk_log_size > 3000)'
If you find some, could you increase debug_osd to 10 and then share the osd log.
I am interested in the debug line
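For anyone scripting this triage across many PGs, the jq filter above can be reproduced in a few lines of Python; the miniature dump below is made up for illustration, but it follows the `.pg_map.pg_stats[].ondisk_log_size` layout the filter assumes:

```python
import json

# Made-up miniature of `ceph pg dump -f json` output; only the fields
# the filter touches are included.
dump = json.loads("""
{"pg_map": {"pg_stats": [
    {"pgid": "1.1f", "ondisk_log_size": 2900},
    {"pgid": "7.0a", "ondisk_log_size": 1250000}
]}}
""")

# Equivalent of: jq '.pg_map.pg_stats[] | select(.ondisk_log_size > 3000)'
suspect = [pg["pgid"] for pg in dump["pg_map"]["pg_stats"]
           if pg["ondisk_log_size"] > 3000]
print(suspect)  # only the PG with the runaway log remains: ['7.0a']
```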
Hi Xie,
On Tue, Nov 17, 2020 at 11:14 AM Xie wrote:
>
> Hi Dan,
>
>
> > Given that it adds a case where the pg_log is not trimmed, I wonder if
> > there could be an unforeseen condition where `last_update_ondisk`
> > isn't being updated correctly, and therefore the osd stops trimming
> > the pg_log altogether.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
Hi Kalle,
Strangely and luckily, in our case the memory explosion didn't reoccur
after that incident. So I can mostly only offer moral support.
But if this bug indeed appeared between 14.2.8 and 14.2.13, then I
think this is suspicious:
b670715eb4 osd/PeeringState: do not trim pg log past last_update_ondisk