On 2020-11-25 20:19, Fujii Masao wrote:
On 2020/11/19 16:31, Masahiro Ikeda wrote:
On 2020-11-17 11:46, Fujii Masao wrote:
On 2020/11/16 16:35, Masahiro Ikeda wrote:
On 2020-11-12 14:58, Fujii Masao wrote:
On 2020/11/06 10:25, Masahiro Ikeda wrote:
On 2020-10-30 11:50, Fujii Masao wrote:
On 2020/10/29 17:03, Masahiro Ikeda wrote:
Hi,
Thanks for your comments and advice. I updated the patch.
On 2020-10-21 18:03, Kyotaro Horiguchi wrote:
At Tue, 20 Oct 2020 16:11:29 +0900, Masahiro Ikeda
<ikeda...@oss.nttdata.com> wrote in
On 2020-10-20 12:46, Amit Kapila wrote:
> I see that we also need to add extra code to capture these stats (some
> of which is in performance-critical path especially in
> XLogInsertRecord) which again makes me a bit uncomfortable. It might
> be that it is all fine as it is very important to collect these stats
> at cluster-level in spite that the same information can be gathered at
> statement-level to help customers but I don't see a very strong case
> for that in your proposal.
We should avoid that duplication as much as possible, even if both numbers are important.
Also, regarding performance, I think the impact is small because the stats are incremented in memory. If I can implement this so that it reuses pgWalUsage's values, which already collect these stats, there is no impact on XLogInsertRecord.
For example, how about having pg_stat_wal() calculate the accumulated values of wal_records, wal_fpi, and wal_bytes from pgWalUsage's values?
I don't think that works as-is, but it would work for pgstat_send_wal() to take the difference of those values between two successive calls.
WalUsage prevWalUsage;
...
pgstat_send_wal()
{
..
/* fill in some values using pgWalUsage */
WalStats.m_wal_bytes = pgWalUsage.wal_bytes - prevWalUsage.wal_bytes;
WalStats.m_wal_records = pgWalUsage.wal_records - prevWalUsage.wal_records;
WalStats.m_wal_fpi = pgWalUsage.wal_fpi - prevWalUsage.wal_fpi;
...
pgstat_send(&WalStats, sizeof(WalStats));
/* remember the current numbers */
prevWalUsage = pgWalUsage;
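The send-the-delta-and-resnapshot pattern sketched above can be illustrated with a self-contained C sketch; the struct and function names here are illustrative stand-ins, not the actual pgstat API:

```c
#include <stdint.h>

/* Illustrative stand-ins for pgWalUsage / WalStats; not the real structs. */
typedef struct WalUsageSketch
{
    uint64_t wal_records;
    uint64_t wal_fpi;
    uint64_t wal_bytes;
} WalUsageSketch;

static WalUsageSketch cur;   /* incremented continuously by the backend */
static WalUsageSketch prev;  /* snapshot taken at the previous send */

/* Return the counters accumulated since the last call, then resnapshot,
 * so each call reports only the new activity. */
static WalUsageSketch
send_wal_delta(void)
{
    WalUsageSketch d;

    d.wal_records = cur.wal_records - prev.wal_records;
    d.wal_fpi = cur.wal_fpi - prev.wal_fpi;
    d.wal_bytes = cur.wal_bytes - prev.wal_bytes;

    /* remember the current numbers */
    prev = cur;
    return d;
}
```

Because the sender only reads and snapshots its own in-memory counters, the hot path that increments them needs no extra instrumentation.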
Thanks to Horiguchi-san's advice, I changed the patch to reuse pgWalUsage, which is already defined, eliminating the extra overhead.
+ /* fill in some values using pgWalUsage */
+ WalStats.m_wal_bytes = pgWalUsage.wal_bytes - prevWalUsage.wal_bytes;
+ WalStats.m_wal_records = pgWalUsage.wal_records - prevWalUsage.wal_records;
+ WalStats.m_wal_fpi = pgWalUsage.wal_fpi - prevWalUsage.wal_fpi;
It's better to use WalUsageAccumDiff() here?
Yes, thanks. I fixed it.
prevWalUsage needs to be initialized with pgWalUsage?
+ if (AmWalWriterProcess()){
+ WalStats.m_wal_write_walwriter++;
+ }
+ else
+ {
+ WalStats.m_wal_write_backend++;
+ }
I think that it's better not to separate m_wal_write_xxx into two counters for walwriter and other processes. Instead, we can use one m_wal_write_xxx counter and make pgstat_send_wal() also send the process type to the stats collector. Then the stats collector can accumulate the counters per process type if necessary. If we adopt this approach, we can easily extend pg_stat_wal so that any fields can be reported per process type.
I'll remove the above source code because these counters are not useful.
On 2020-10-30 12:00, Fujii Masao wrote:
On 2020/10/20 11:31, Masahiro Ikeda wrote:
Hi,
I think we need to add some statistics to pg_stat_wal view.
Although there are some parameters related to WAL, there are few statistics for tuning them.
I think it's better to provide the following statistics.
Please let me know your comments.
```
postgres=# SELECT * from pg_stat_wal;
-[ RECORD 1 ]-------+------------------------------
wal_records | 2000224
wal_fpi | 47
wal_bytes | 248216337
wal_buffers_full | 20954
wal_init_file | 8
wal_write_backend | 20960
wal_write_walwriter | 46
wal_write_time | 51
wal_sync_backend | 7
wal_sync_walwriter | 8
wal_sync_time | 0
stats_reset | 2020-10-20 11:04:51.307771+09
```
1. Basic statistics of WAL activity
- wal_records: Total number of WAL records generated
- wal_fpi: Total number of WAL full page images generated
- wal_bytes: Total amount of WAL bytes generated
To understand a database's performance, we first check the performance trends for the entire database instance.
For example, if wal_fpi becomes higher, users may tune "wal_compression", "checkpoint_timeout", and so on.
Although users can currently check the above statistics via EXPLAIN, auto_explain, autovacuum, and pg_stat_statements, if they want to see performance trends for the entire database they must aggregate the statistics themselves.
I think it is useful to add the sums of these basic statistics.
2. WAL segment file creation
- wal_init_file: Total number of WAL segment files created.
Creating a new WAL file may have an impact on the performance of a write-heavy workload generating lots of WAL. If this number is reported as high, WAL-related parameters can be tuned so that more "recycled" WAL files are kept, reducing the number of these initializations.
3. Number of times WAL is flushed
- wal_write_backend : Total number of times WAL data was written to the disk by backends
- wal_write_walwriter : Total number of times WAL data was written to the disk by the walwriter
- wal_sync_backend : Total number of times WAL data was synced to the disk by backends
- wal_sync_walwriter : Total number of times WAL data was synced to the disk by the walwriter
I think these are useful for tuning "synchronous_commit" and "commit_delay" for query execution.
If the number of WAL flushes is high, users can tell that tuning "synchronous_commit" would be useful for the workload.
I just wonder how useful these counters are. Even without them, we already know synchronous_commit=off is likely to yield better performance (but carries the risk of data loss). So ISTM that these counters are not so useful when tuning synchronous_commit.
Thanks, my understanding was wrong.
I agree with your comments.
I merged the statistics of *_backend and *_walwriter.
I think their sum is useful for calculating the average time per write/sync.
For example, the average write time equals wal_write_time / wal_write.
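That per-write average can be sketched in C, with a guard for the case where no writes have been counted yet (the names here are illustrative, not from the patch):

```c
/* Average time per WAL write in milliseconds; returns 0.0 when no
 * writes have been recorded, to avoid dividing by zero. */
static double
avg_wal_write_time(double wal_write_time_ms, long wal_write_count)
{
    if (wal_write_count <= 0)
        return 0.0;
    return wal_write_time_ms / (double) wal_write_count;
}
```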
Understood.
Thanks for updating the patch!
Thanks for your comments.
patching file src/include/catalog/pg_proc.dat
Hunk #1 FAILED at 5491.
1 out of 1 hunk FAILED -- saving rejects to file src/include/catalog/pg_proc.dat.rej
I got this failure when applying the patch. Could you update the patch?
Thanks, I updated the patch.
- Number of times WAL data was written to the disk because WAL buffers got full
+ Total number of times WAL data written to the disk because WAL buffers got full
Isn't "was" necessary between "data" and "written"?
Yes, I fixed it.
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>wal_bytes</structfield> <type>bigint</type>
Shouldn't the type of wal_bytes be numeric, because the total number of WAL bytes can exceed the range of bigint? I think that the type of pg_stat_statements.wal_bytes is numeric for the same reason.
Thanks, I fixed it.
Since I changed the type of wal_bytes from PgStat_Counter to uint64, I changed the corresponding fields in PgStat_MsgWal and PgStat_WalStats too.
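The bigint-vs-numeric concern is plain 64-bit range: a signed bigint tops out at 2^63-1, while the counter is kept as uint64 and, as the attached patch does, rendered with snprintf into a decimal string for numeric_in(). A self-contained sketch of just that conversion step, without the backend's fmgr plumbing:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Render a uint64 byte counter as a decimal C string, the form the patch
 * passes on to numeric_in(); a plain int64 could not hold values above
 * INT64_MAX (about 9.2e18 bytes). */
static void
wal_bytes_to_cstring(uint64_t wal_bytes, char *buf, size_t len)
{
    snprintf(buf, len, "%" PRIu64, wal_bytes);
}
```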
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>wal_write_time</structfield> <type>bigint</type>
Shouldn't the type of wal_xxx_time be double precision, like pg_stat_database.blk_write_time?
Thanks, I changed it.
Even when fsync is set to off or wal_sync_method is set to open_sync, wal_sync is incremented. Isn't this behavior confusing?
What do you think about this comment?
Sorry, I'll change it to increment wal_sync and wal_sync_time only when a specific fsync method is actually called.
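The intended counting rule can be sketched as follows; the enum values and function here are hypothetical stand-ins (the real backend dispatches on wal_sync_method in issue_xlog_fsync()):

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the sync methods under discussion. */
typedef enum
{
    SKETCH_SYNC_FSYNC,      /* explicit fsync()/fdatasync()-style call */
    SKETCH_SYNC_OPEN_SYNC   /* flushed implicitly as part of write() */
} SketchSyncMethod;

static long wal_sync = 0;

/* Count a sync only when an explicit flush call is actually issued:
 * with fsync=off nothing runs, and with open_sync the write itself
 * does the flushing, so no separate sync call happens either. */
static void
count_wal_sync(SketchSyncMethod method, bool fsync_enabled)
{
    if (fsync_enabled && method == SKETCH_SYNC_FSYNC)
        wal_sync++;
}
```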
I found that we discussed a track-WAL-IO-timing feature in a past discussion about a similar feature [1]. But that feature was dropped from the proposed patch because there was a performance concern. So probably we need to revisit the past discussion and benchmark the performance. Thoughts?
If the track-WAL-IO-timing feature may cause a performance regression, it might be an idea to extract the wal_records, wal_fpi, and wal_bytes parts from the patch and commit them first.
[1]
https://postgr.es/m/cajrrpgc6apfugyncpe4qcnxpl8gxkyv1kst+vwjcftcscey...@mail.gmail.com
Thanks, I'll check the thread.
I agree to add basic statistics at first and I attached the patch.
Thanks!
+ /* Send WAL statistics */
+ pgstat_send_wal();
This is not necessary because walwriter generates no WAL data?
No, it's not necessary.
Thanks. I fixed it.
Regards,
--
Masahiro Ikeda
NTT DATA CORPORATION
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 98e1995453..6e661d6fae 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -3447,6 +3447,33 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i
</thead>
<tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>wal_records</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Total number of WAL records generated
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>wal_fpi</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Total number of WAL full page images generated
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>wal_bytes</structfield> <type>numeric</type>
+ </para>
+ <para>
+ Total amount of WAL bytes generated
+ </para></entry>
+ </row>
+
<row>
<entry role="catalog_table_entry"><para role="column_definition">
<structfield>wal_buffers_full</structfield> <type>bigint</type>
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index 2e4aa1c4b6..b140c210bc 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -993,6 +993,9 @@ CREATE VIEW pg_stat_bgwriter AS
CREATE VIEW pg_stat_wal AS
SELECT
+ w.wal_records,
+ w.wal_fpi,
+ w.wal_bytes,
w.wal_buffers_full,
w.stats_reset
FROM pg_stat_get_wal() w;
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index e76e627c6b..781f2a5248 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -41,6 +41,7 @@
#include "catalog/pg_database.h"
#include "catalog/pg_proc.h"
#include "common/ip.h"
+#include "executor/instrument.h"
#include "libpq/libpq.h"
#include "libpq/pqsignal.h"
#include "mb/pg_wchar.h"
@@ -142,6 +143,7 @@ char *pgstat_stat_tmpname = NULL;
*/
PgStat_MsgBgWriter BgWriterStats;
PgStat_MsgWal WalStats;
+static WalUsage prevWalUsage;
/*
* List of SLRU names that we keep stats for. There is no central registry of
@@ -3048,6 +3050,9 @@ pgstat_initialize(void)
MyBEEntry = &BackendStatusArray[MaxBackends + MyAuxProcType];
}
+ /* Initialize WAL usage status information */
+ prevWalUsage = pgWalUsage;
+
/* Set up a process-exit hook to clean up */
on_shmem_exit(pgstat_beshutdown_hook, 0);
}
@@ -4577,6 +4582,15 @@ pgstat_send_wal(void)
/* We assume this initializes to zeroes */
static const PgStat_MsgWal all_zeroes;
+ /* fill in some values using pgWalUsage */
+ WalUsage walusage;
+ memset(&walusage, 0, sizeof(WalUsage));
+ WalUsageAccumDiff(&walusage, &pgWalUsage, &prevWalUsage);
+
+ WalStats.m_wal_bytes = walusage.wal_bytes;
+ WalStats.m_wal_records = walusage.wal_records;
+ WalStats.m_wal_fpi = walusage.wal_fpi;
+
/*
* This function can be called even if nothing at all has happened. In
* this case, avoid sending a completely empty message to the stats
@@ -4591,6 +4605,9 @@ pgstat_send_wal(void)
pgstat_setheader(&WalStats.m_hdr, PGSTAT_MTYPE_WAL);
pgstat_send(&WalStats, sizeof(WalStats));
+ /* remember the current numbers */
+ prevWalUsage = pgWalUsage;
+
/*
* Clear out the statistics buffer, so it can be re-used.
*/
@@ -6759,6 +6776,9 @@ pgstat_recv_bgwriter(PgStat_MsgBgWriter *msg, int len)
static void
pgstat_recv_wal(PgStat_MsgWal *msg, int len)
{
+ walStats.wal_records += msg->m_wal_records;
+ walStats.wal_fpi += msg->m_wal_fpi;
+ walStats.wal_bytes += msg->m_wal_bytes;
walStats.wal_buffers_full += msg->m_wal_buffers_full;
}
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index a210fc93b4..6afe1b6f56 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -1703,10 +1703,11 @@ pg_stat_get_buf_alloc(PG_FUNCTION_ARGS)
Datum
pg_stat_get_wal(PG_FUNCTION_ARGS)
{
-#define PG_STAT_GET_WAL_COLS 2
+#define PG_STAT_GET_WAL_COLS 5
TupleDesc tupdesc;
Datum values[PG_STAT_GET_WAL_COLS];
bool nulls[PG_STAT_GET_WAL_COLS];
+ char buf[256];
PgStat_WalStats *wal_stats;
/* Initialise values and NULL flags arrays */
@@ -1715,9 +1716,15 @@ pg_stat_get_wal(PG_FUNCTION_ARGS)
/* Initialise attributes information in the tuple descriptor */
tupdesc = CreateTemplateTupleDesc(PG_STAT_GET_WAL_COLS);
- TupleDescInitEntry(tupdesc, (AttrNumber) 1, "wal_buffers_full",
+ TupleDescInitEntry(tupdesc, (AttrNumber) 1, "wal_records",
INT8OID, -1, 0);
- TupleDescInitEntry(tupdesc, (AttrNumber) 2, "stats_reset",
+ TupleDescInitEntry(tupdesc, (AttrNumber) 2, "wal_fpi",
+ INT8OID, -1, 0);
+ TupleDescInitEntry(tupdesc, (AttrNumber) 3, "wal_bytes",
+ NUMERICOID, -1, 0);
+ TupleDescInitEntry(tupdesc, (AttrNumber) 4, "wal_buffers_full",
+ INT8OID, -1, 0);
+ TupleDescInitEntry(tupdesc, (AttrNumber) 5, "stats_reset",
TIMESTAMPTZOID, -1, 0);
BlessTupleDesc(tupdesc);
@@ -1726,8 +1733,18 @@ pg_stat_get_wal(PG_FUNCTION_ARGS)
wal_stats = pgstat_fetch_stat_wal();
/* Fill values and NULLs */
- values[0] = Int64GetDatum(wal_stats->wal_buffers_full);
- values[1] = TimestampTzGetDatum(wal_stats->stat_reset_timestamp);
+ values[0] = Int64GetDatum(wal_stats->wal_records);
+ values[1] = Int64GetDatum(wal_stats->wal_fpi);
+
+ /* Convert to numeric. */
+ snprintf(buf, sizeof buf, UINT64_FORMAT, wal_stats->wal_bytes);
+ values[2] = DirectFunctionCall3(numeric_in,
+ CStringGetDatum(buf),
+ ObjectIdGetDatum(0),
+ Int32GetDatum(-1));
+
+ values[3] = Int64GetDatum(wal_stats->wal_buffers_full);
+ values[4] = TimestampTzGetDatum(wal_stats->stat_reset_timestamp);
/* Returns the record as Datum */
PG_RETURN_DATUM(HeapTupleGetDatum(heap_form_tuple(tupdesc, values, nulls)));
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index c01da4bf01..67ab861260 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5503,8 +5503,9 @@
{ oid => '1136', descr => 'statistics: information about WAL activity',
proname => 'pg_stat_get_wal', proisstrict => 'f', provolatile => 's',
proparallel => 'r', prorettype => 'record', proargtypes => '',
- proallargtypes => '{int8,timestamptz}', proargmodes => '{o,o}',
- proargnames => '{wal_buffers_full,stats_reset}',
+ proallargtypes => '{int8,int8,numeric,int8,timestamptz}',
+ proargmodes => '{o,o,o,o,o}',
+ proargnames => '{wal_records,wal_fpi,wal_bytes,wal_buffers_full,stats_reset}',
prosrc => 'pg_stat_get_wal' },
{ oid => '2306', descr => 'statistics: information about SLRU caches',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 257e515bfe..cee3d28a2d 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -459,6 +459,9 @@ typedef struct PgStat_MsgBgWriter
typedef struct PgStat_MsgWal
{
PgStat_MsgHdr m_hdr;
+ PgStat_Counter m_wal_records;
+ PgStat_Counter m_wal_fpi;
+ uint64 m_wal_bytes;
PgStat_Counter m_wal_buffers_full;
} PgStat_MsgWal;
@@ -798,6 +801,9 @@ typedef struct PgStat_GlobalStats
*/
typedef struct PgStat_WalStats
{
+ PgStat_Counter wal_records;
+ PgStat_Counter wal_fpi;
+ uint64 wal_bytes;
PgStat_Counter wal_buffers_full;
TimestampTz stat_reset_timestamp;
} PgStat_WalStats;
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 097ff5d111..6293ab57bc 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2138,9 +2138,12 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
pg_stat_all_tables.autoanalyze_count
FROM pg_stat_all_tables
WHERE ((pg_stat_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_all_tables.schemaname !~ '^pg_toast'::text));
-pg_stat_wal| SELECT w.wal_buffers_full,
+pg_stat_wal| SELECT w.wal_records,
+ w.wal_fpi,
+ w.wal_bytes,
+ w.wal_buffers_full,
w.stats_reset
- FROM pg_stat_get_wal() w(wal_buffers_full, stats_reset);
+ FROM pg_stat_get_wal() w(wal_records, wal_fpi, wal_bytes, wal_buffers_full, stats_reset);
pg_stat_wal_receiver| SELECT s.pid,
s.status,
s.receive_start_lsn,
119,132d118
< diff --git a/src/backend/postmaster/walwriter.c b/src/backend/postmaster/walwriter.c
< index a52832fe90..ce9f4b7cf7 100644
< --- a/src/backend/postmaster/walwriter.c
< +++ b/src/backend/postmaster/walwriter.c
< @@ -253,6 +253,9 @@ WalWriterMain(void)
< else if (left_till_hibernate > 0)
< left_till_hibernate--;
<
< + /* Send WAL statistics */
< + pgstat_send_wal();
< +
< /*
< * Sleep until we are signaled or WalWriterDelay has elapsed. If we
< * haven't done anything useful for quite some time, lengthen the