On 7/20/20 6:02 AM, Fujii Masao wrote:


On 2020/07/20 13:48, Fujii Masao wrote:


On 2020/07/17 20:24, David Steele wrote:

On 7/17/20 5:11 AM, Fujii Masao wrote:


On 2020/07/14 20:30, David Steele wrote:
On 7/14/20 12:00 AM, Fujii Masao wrote:

The patch no longer applied cleanly because of a recent commit,
so I updated it. Attached.

Barring any objection, I will commit this patch.

This doesn't look right:

+   the <xref linkend="guc-wal-keep-size"/> most recent megabytes
+   WAL files plus one WAL file are

How about:

+   <xref linkend="guc-wal-keep-size"/> megabytes of
+   WAL files plus one WAL file are

Thanks for the comment! Isn't it better to keep the "most recent" part?
If so, what about either of the following?

1. <xref linkend="guc-wal-keep-size"/> megabytes of WAL files plus
     one WAL file that were most recently generated are kept at all times.

2. <xref linkend="guc-wal-keep-size"/> megabytes + <xref linkend="guc-wal-segment-size"/> bytes
     of WAL files that were most recently generated are kept at all times.

"most recent" seemed implied to me, but I see your point.

How about:

+   the most recent <xref linkend="guc-wal-keep-size"/> megabytes of
+   WAL files plus one additional WAL file are

I adopted this and pushed the patch. Thanks!
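For anyone trying this out, a minimal sketch of the new parameter in use (example value of 512MB is mine, not from the patch); it can be set like any other size GUC:

    # postgresql.conf on the primary
    wal_keep_size = 512MB    # retain the most recent 512MB of WAL, plus one additional segment

or changed at runtime without a restart:

    ALTER SYSTEM SET wal_keep_size = '512MB';
    SELECT pg_reload_conf();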

Also we need to update the release note for v13. What about adding the following?

------------------------------------
Rename configuration parameter wal_keep_segments to wal_keep_size.

This allows how much WAL files to retain for the standby server, by bytes instead of the number of files. If you previously used wal_keep_segments, the following formula will give you an approximately equivalent setting:

wal_keep_size = wal_keep_segments * wal_segment_size (typically 16MB)
------------------------------------
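For example, with the default 16MB wal_segment_size (illustrative numbers only):

    wal_keep_segments = 32  ->  wal_keep_size = 512MB   (32 * 16MB)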

I would rework that first sentence a bit. How about:

+ This determines how much WAL to retain for the standby server,
+ specified in megabytes rather than number of files.

The rest looks fine to me.

Regards,
--
-David
da...@pgmasters.net

