Hi Thomas!
On Tue, Feb 4, 2025 at 10:22 PM Thomas Munro wrote:
>
> On Sun, Feb 2, 2025 at 3:44 AM Ants Aasma wrote:
> > The other direction is to split off WAL decoding, buffer lookup and maybe
> > even pinning to a separate process from the main redo loop.
>
> Hi Ants,
>
[..]
> An assumption I
On Wed, Feb 5, 2025 at 10:22 AM Thomas Munro wrote:
> (replaying LsnReadQueue)
s/replaying/replacing/
On Sun, Feb 2, 2025 at 3:44 AM Ants Aasma wrote:
> The other direction is to split off WAL decoding, buffer lookup and maybe
> even pinning to a separate process from the main redo loop.
Hi Ants,
FWIW I have a patch set that changes xlogprefetcher.c to use
read_stream.c, which I hope to propose
On Sat, 1 Feb 2025 at 16:55, Andres Freund wrote:
>
> Hi,
>
> On 2025-02-01 15:43:41 +0100, Ants Aasma wrote:
> > On Fri, Jan 31, 2025, 15:43 Andres Freund wrote:
> >
> > > > Maybe it's a red herring though, but it looks pretty suspicious.
> > >
> > > It's unfortunately not too surprising - our buffer mapping table is a
> > > pretty big bottleneck. Both because a hash table is just not a good fit
> > > for the buffer mapping table
Hi,
On 2025-02-01 15:43:41 +0100, Ants Aasma wrote:
> On Fri, Jan 31, 2025, 15:43 Andres Freund wrote:
>
> > > Maybe it's a red herring though, but it looks pretty suspicious.
> >
> > It's unfortunately not too surprising - our buffer mapping table is a
> > pretty big bottleneck. Both because a hash table is just not a good fit
> > for the buffer mapping table
Hi,
On 2025-02-01 03:46:33 -0800, Dmitry Koterov wrote:
> > It'd be interesting to see what the paths towards
> > hash_search_with_hash_value are.
>
> One of the popular paths is on the screenshot. They are all more or less
> the same when recovery_prefetch=on (and when it's off, the replica behaves
On Fri, Jan 31, 2025, 15:43 Andres Freund wrote:
> > Maybe it's a red herring though, but it looks pretty suspicious.
>
> It's unfortunately not too surprising - our buffer mapping table is a
> pretty big bottleneck. Both because a hash table is just not a good fit
> for the buffer mapping table
Hi,
On 2025-01-31 03:30:35 -0800, Dmitry Koterov wrote:
> Debugging some replication lag on a replica when the master node
> experiences heavy writes.
>
> PG "startup recovering" eats up a lot of CPU (like 65 %user and 30 %sys),
> which is a little surprising (what is it doing with all those CPU cycles?
On 31.01.2025 17:25, Álvaro Herrera wrote:
> On 2025-Jan-31, Dmitry Koterov wrote:
>
>> PG "startup recovering" eats up a lot of CPU (like 65 %user and 30 %sys),
>> which is a little surprising (what is it doing with all those CPU cycles?
>> it looked like WAL replay should be more IO bound than CPU bound?).
On 2025-Jan-31, Dmitry Koterov wrote:
> PG "startup recovering" eats up a lot of CPU (like 65 %user and 30 %sys),
> which is a little surprising (what is it doing with all those CPU cycles?
> it looked like WAL replay should be more IO bound than CPU bound?).
>
> Running "perf top -p ", it shows
On Fri, Jan 31, 2025 at 5:00 PM Dmitry Koterov wrote:
> Hi.
>
> Debugging some replication lag on a replica when the master node
> experiences heavy writes.
>
> PG "startup recovering" eats up a lot of CPU (like 65 %user and 30 %sys),
> which is a little surprising (what is it doing with all those CPU cycles?
> it looked like WAL replay should be more IO bound than CPU bound?).
Hi.
Debugging some replication lag on a replica when the master node
experiences heavy writes.
PG "startup recovering" eats up a lot of CPU (like 65 %user and 30 %sys),
which is a little surprising (what is it doing with all those CPU cycles?
it looked like WAL replay should be more IO bound than CPU bound?).