Our pg_stat_wal view currently doesn't expose the number of WAL segments
recycled, although this information is already logged by the checkpointer
in the database log. For example,
LOG: checkpoint complete: wrote 317 buffers (1.9%); 0 WAL file(s) added, 0
removed, 3 recycled; write=0.003 s, sync=
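To make the proposal concrete, here is a tiny self-contained C sketch of such a counter; the struct and field names are illustrative only (not the actual pgstat API), and the sample values mirror the log line above:

    #include <stdio.h>

    /* Hypothetical miniature: track per-checkpoint WAL segment activity
     * the way the checkpointer already counts it for its log line, so
     * the same numbers could be surfaced through a statistics view. */
    typedef struct WalSegStats
    {
        long    added;      /* new segments created from scratch */
        long    removed;    /* old segments unlinked */
        long    recycled;   /* old segments renamed for reuse */
    } WalSegStats;

    int
    main(void)
    {
        WalSegStats stats = {0, 0, 3};  /* the numbers from the log line */

        printf("%ld WAL file(s) added, %ld removed, %ld recycled\n",
               stats.added, stats.removed, stats.recycled);
        return 0;
    }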
On Wed, Jan 22, 2025 at 10:18 PM Robert Pang wrote:
>
> On Wed, Jan 15, 2025 at 12:05 PM Andres Freund wrote:
> >
> > If you have wal_recycle=true, this overhead will only be paid the first
> > time a WAL segment is used, of course, not after recycling.
>
> Today, our pg_stat_wal view [1] do
On Wed, Jan 15, 2025 at 12:05 PM Andres Freund wrote:
>
> If you have wal_recycle=true, this overhead will only be paid the first time a
> WAL segment is used, of course, not after recycling.
Today, our pg_stat_wal view [1] does not report the number of WAL
segments recycled. How about if we add a c
Hi @Andres Freund
> I'm not sure I understand the specifics here - did the high WAL generation
> rate result in the recycling taking too long? Or did checkpointer take too
> long to write out data, and because of that recycling didn't happen
> frequently enough?
If the WAL generation rate highl
Hi,
> On Fri, Jan 17, 2025 at 04:29:14PM -0500, Andres Freund wrote:
>> I think what we instead ought to do is to more aggressively initialize WAL
>> files ahead of time, so it doesn't happen while holding crucial locks. We
>> know the recent rate of WAL generation, and we could easily track u
On Fri, Jan 17, 2025 at 04:29:14PM -0500, Andres Freund wrote:
> I think what we instead ought to do is to more aggressively initialize WAL
> files ahead of time, so it doesn't happen while holding crucial locks. We
> know the recent rate of WAL generation, and we could easily track up to which
>
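To make the quoted idea concrete, a self-contained sketch of a rate-based pre-initialization heuristic; the function name, the fixed lookahead window, and the rounding are assumptions for illustration, not anything specified in the thread:

    #include <stdio.h>

    #define SEG_BYTES (16 * 1024 * 1024)    /* default 16MB WAL segment */

    /*
     * Given an estimate of the recent WAL generation rate (bytes/sec)
     * and how far ahead we want to stay (seconds), return how many
     * additional segments a background process should initialize now.
     */
    static int
    segments_to_preinit(double wal_bytes_per_sec, double lookahead_secs,
                        int already_ready)
    {
        double  need = wal_bytes_per_sec * lookahead_secs / SEG_BYTES;
        int     target = (int) (need + 0.5);    /* round to nearest */

        return target > already_ready ? target - already_ready : 0;
    }

    int
    main(void)
    {
        /* e.g. 40 MB/s of WAL, stay 5 seconds ahead, 2 segments ready */
        printf("pre-init %d segment(s)\n",
               segments_to_preinit(40.0 * 1024 * 1024, 5.0, 2));
        return 0;
    }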
Thinking back, I can now see why disabling WAL writes with
wal_level=minimal in COPY resulted in 3x better write performance
instead of the expected 2x: with wal_level=minimal only the heap page
writes were needed, whereas with WAL writes the same page was written
3x (heap + WAL zero-fill + WAL).
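Spelling the arithmetic out: the expected 2x assumed two writes per page (heap page + WAL record) versus one write (heap only) under wal_level=minimal, i.e. 2/1. With zero-filled WAL segments the WAL path actually writes each page three times (heap page + WAL zero-fill + WAL record), so the observed ratio is 3/1 = 3x.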
On Fri, Jan 17, 2025 at 10:29 PM Andres Freund wrote:
...
> > I see, PG once had fallocate [1] (which was reverted by [2] due to some
> > performance regression concern). The original OSS discussion was in [3].
> > The perf regression was reported in [4]. Looks like this was due to how
> > ext4 ha
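For reference, a self-contained sketch of what fallocate-style initialization looks like at the syscall level; the file name and segment size are arbitrary placeholders, not the reverted PostgreSQL patch itself:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define SEG_BYTES (16 * 1024 * 1024)    /* illustrative segment size */

    int
    main(void)
    {
        int fd = open("walseg.tmp", O_CREAT | O_EXCL | O_WRONLY, 0600);
        if (fd < 0) { perror("open"); return EXIT_FAILURE; }

        /* posix_fallocate() reserves blocks without writing zeros; note
         * it returns an errno value directly rather than -1 + errno.
         * On ext4 the reserved extents start out "unwritten", so the
         * first real write to each extent still pays a conversion cost,
         * which is the kind of regression discussed above. */
        int rc = posix_fallocate(fd, 0, SEG_BYTES);
        if (rc != 0)
            fprintf(stderr, "posix_fallocate: %s\n", strerror(rc));

        if (fsync(fd) != 0) perror("fsync");
        close(fd);
        return rc == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
    }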
Hi,
On 2025-01-16 14:50:57 +0530, Ritu Bhandari wrote:
> Adding to Andy Fan's point above:
>
> If we increase WAL segment size from 16MB to 64MB, initializing the 64MB
> WAL segment inline can cause several seconds of freeze on all write
> transactions when it happens. Writing out a newly zero-fil
On Thu, Jan 16, 2025 at 10:21 AM Ritu Bhandari wrote:
> Could we consider adding back fallocate?
Or, if not adding it back for all, then maybe have a 3-valued wal_init_zero:
wal_init_zero = on
wal_init_zero = off
wal_init_zero = fallocate
?
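A sketch of the shape such a setting could take, mirroring the config_enum_entry pattern PostgreSQL uses for enum GUCs; the enum, its values, and the option table are hypothetical, not an actual patch:

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical three-valued wal_init_zero; everything here is
     * illustrative.  The struct mirrors PostgreSQL's config_enum_entry. */
    typedef enum
    {
        WAL_INIT_ZERO_OFF,          /* no pre-allocation beyond the last byte */
        WAL_INIT_ZERO_ON,           /* zero-fill the whole segment (today's on) */
        WAL_INIT_ZERO_FALLOCATE     /* reserve space with posix_fallocate() */
    } WalInitZero;

    struct config_enum_entry
    {
        const char *name;
        int         val;
        bool        hidden;
    };

    static const struct config_enum_entry wal_init_zero_options[] = {
        {"off", WAL_INIT_ZERO_OFF, false},
        {"on", WAL_INIT_ZERO_ON, false},
        {"fallocate", WAL_INIT_ZERO_FALLOCATE, false},
        {NULL, 0, false}
    };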
Hi,
Adding to Andy Fan's point above:
If we increase WAL segment size from 16MB to 64MB, initializing the 64MB
WAL segment inline can cause several seconds of freeze on all write
transactions when it happens. Writing out a newly zero-filled 64MB WAL
segment takes several seconds for smaller disk
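To put rough numbers on "several seconds" (the throughput figure is an assumption, not from the thread): a small cloud disk capped at 16 MB/s of sustained writes needs 64 MB / 16 MB/s = 4 s to zero-fill one 64 MB segment, and any backend waiting on that new segment stalls for the duration.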
Hi,
>
> c=1 && \
> psql -c checkpoint -c 'select pg_switch_wal()' && \
> pgbench -n -M prepared -c$c -j$c -f <(echo "SELECT pg_logical_emit_message(true, 'test', repeat('0', 8192));";) -P1 -t 1
>
> wal_init_zero = 1: 885 TPS
> wal_init_zero = 0: 286 TPS.
Your theory looks clear and t
Hi,
On 2025-01-15 09:12:17 +0000, Andy Fan wrote:
> It is unclear to me why we need wal_init_zero. Per comments:
>
> /*
>  * Zero-fill the file. With this setting, we do this the hard way to
>  * ensure that all the file space has really been alloca
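For illustration, "the hard way" as a self-contained program; the segment size, block size, and file name are placeholders rather than PostgreSQL's actual WAL-creation code:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    #define SEG_BYTES (16 * 1024 * 1024)    /* illustrative segment size */
    #define BLCKSZ    8192                  /* write page-sized chunks */

    int
    main(void)
    {
        static const char zeros[BLCKSZ];    /* one block of real zeros */
        size_t  off;

        int fd = open("walseg.tmp", O_CREAT | O_EXCL | O_WRONLY, 0600);
        if (fd < 0) { perror("open"); return EXIT_FAILURE; }

        /* Write zeros over the whole file so every block is physically
         * allocated up front; later fdatasync() calls for WAL flushes
         * then never need to write file-extension metadata. */
        for (off = 0; off < SEG_BYTES; off += BLCKSZ)
            if (write(fd, zeros, BLCKSZ) != BLCKSZ)
            {
                perror("write");
                return EXIT_FAILURE;
            }

        if (fsync(fd) != 0) { perror("fsync"); return EXIT_FAILURE; }
        close(fd);
        return EXIT_SUCCESS;
    }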
Hi Michael,
> My understanding was that if we have pre-allocated WAL space (and
> recycle already-used WAL files), we can still write WAL records into
> that pre-allocated space and still issue changes to data files as long
> as we don't need to enlarge any. So an out-of-space situation is less
>
Hi,
On Wed, Jan 15, 2025 at 09:12:17AM +0000, Andy Fan wrote:
> I can understand that "the file space has really been allocated", but
> why do we care about this?
>
> One reason I can think of is that it has something to do with the
> "out-of-disk-space" situation, even though what's the benefit of it since we
Hi,
> Good catch. This comment is not 100% clear to me either.
> [...]
TWIMC zero-filling was added in 33cc5d8a4d0d (year 2001). The commit
doesn't reference the discussion behind this change and the comment
text hasn't changed since then.
--
Best regards,
Aleksander Alekseev
Hi,
>> I can understand that "the file space has really been allocated", but
>> why do we care about this?
>>
>> [...]
>
> Can you report the benchmark difference with false (disabled)?
> Maybe it's worth considering leaving false as the default.
Good catch. This comment is not 100% clear to me e
Hi.
On Wed, Jan 15, 2025 at 06:12, Andy Fan wrote:
>
> Hi,
>
> It is unclear to me why we need wal_init_zero. Per comments:
>
> /*
>  * Zero-fill the file. With this setting, we do this the hard way to
>  * ensure that all the file