Thanks for the review and advice!
On 2020-09-03 16:05, Fujii Masao wrote:
On 2020/09/02 18:56, Masahiro Ikeda wrote:
+/* ----------
+ * Backend types
+ * ----------
You seem to have forgotten to add "*/" to the above comment.
This issue could cause the following compiler warning.
../../src/include/pgstat.h:761:1: warning: '/*' within block comment
[-Wcomment]
Thanks for the comment. I fixed it.
Thanks for the fix! But why are those comments necessary?
Sorry about that. This comment is not necessary.
I removed it.
The pg_stat_walwriter view is not security restricted now, so ordinary
users can access it.
It has the same security level as pg_stat_archiver. If you have any
comments, please let me know.
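For example, this kind of check works with the patch applied (the role name
below is just one I made up for the test):

    CREATE ROLE stats_reader LOGIN;    -- unprivileged test role
    SET ROLE stats_reader;
    SELECT * FROM pg_stat_archiver;    -- already readable by ordinary users
    SELECT * FROM pg_stat_walwriter;   -- the new view is readable in the same way
    RESET ROLE;
    DROP ROLE stats_reader;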
+ <structfield>dirty_writes</structfield> <type>bigint</type>
I guess that the column name "dirty_writes" was derived from
the DTrace probe name. Isn't this name confusing? Should we
rename it to "wal_buffers_full" or something?
I agree and renamed it to "wal_buffers_full".
+/* ----------
+ * PgStat_MsgWalWriter		Sent by the walwriter to update statistics.
This comment seems not accurate because backends also send it.
+/*
+ * WAL writes statistics counter is updated in XLogWrite function
+ */
+extern PgStat_MsgWalWriter WalWriterStats;
This comment seems not right because the counter is not updated in
XLogWrite().
Right. I fixed it to "Sent by each backend and background workers to
update WAL statistics." Since other statistics will be added in the
future, I removed the function's name from the comment.
+-- There will surely and maximum one record
+select count(*) = 1 as ok from pg_stat_walwriter;
What about changing this comment to "There must be only one record"?
Thanks, I fixed it.
+ WalWriterStats.m_xlog_dirty_writes++;
LWLockRelease(WALWriteLock);
Since WalWriterStats.m_xlog_dirty_writes doesn't need to be protected
with WALWriteLock, isn't it better to increment that after releasing
the lock?
Thanks, I fixed it.
+CREATE VIEW pg_stat_walwriter AS
+ SELECT
+ pg_stat_get_xlog_dirty_writes() AS dirty_writes,
+ pg_stat_get_walwriter_stat_reset_time() AS stats_reset;
+
CREATE VIEW pg_stat_progress_vacuum AS
In system_views.sql, the definition of pg_stat_walwriter should be
placed just after that of pg_stat_bgwriter, not pg_stat_progress_analyze.
OK, I fixed it.
}
-
/*
* We found an existing collector stats file. Read it and put all the
You seem to have accidentally removed the empty line here.
Sorry about that. I fixed it.
- errhint("Target must be \"archiver\" or
\"bgwriter\".")));
+ errhint("Target must be \"archiver\" or
\"bgwriter\" or
\"walwriter\".")));
There are two "or" in the message, but the former should be replaced
with ","?
Thanks, I fixed it.
On 2020-09-05 18:40, Magnus Hagander wrote:
On Fri, Sep 4, 2020 at 5:42 AM Fujii Masao
<masao.fu...@oss.nttdata.com> wrote:
On 2020/09/04 11:50, tsunakawa.ta...@fujitsu.com wrote:
From: Fujii Masao <masao.fu...@oss.nttdata.com>
I changed the view name from pg_stat_walwrites to pg_stat_walwriter.
I think it is better to match the naming scheme with other views like
pg_stat_bgwriter, which is for bgwriter statistics but also has
statistics related to backends.
I prefer the view name pg_stat_walwriter for consistency with
other view names. But we also have pg_stat_wal_receiver, which
makes me think that maybe pg_stat_wal_writer is better for
consistency. Thoughts? IMO either of them works for me.
I'd like to hear more opinions about this.
I think pg_stat_bgwriter is now a misnomer, because it contains
the backends' activity. Likewise, pg_stat_walwriter leads to
misunderstanding because its information is not limited to the WAL
writer.
How about simply pg_stat_wal? In the future, we may want to
include WAL reads in this view, e.g. reading undo logs in zheap.
Sounds reasonable.
+1.
pg_stat_bgwriter has had the "wrong name" for quite some time now --
it became even more apparent when the checkpointer was split out to
its own process, and that's not exactly a recent change. And it had
allocs in it from day one...
I think naming it for what the data in it is ("wal") rather than which
process deals with it ("walwriter") is correct as a general rule, unless
the statistics can be known to only *ever* affect one type of process.
(And then different processes can affect different columns in the view.)
And from what I can tell, that's exactly what's being proposed.
Thanks for your comments. I agree with your opinions.
I changed the view name to "pg_stat_wal" and fixed the code so that the
WAL statistics are sent not only by backends and the walwriter but also
by the checkpointer, walsender, and autovacuum workers.
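For example, with the attached patch the new view can be queried and its
counters reset like this (the counter value is only an illustration):

    SELECT * FROM pg_stat_wal;
     wal_buffers_full |          stats_reset
    ------------------+-------------------------------
                 1234 | 2020-09-07 10:00:00.000000+09
    (1 row)

    SELECT pg_stat_reset_shared('wal');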
Regards,
--
Masahiro Ikeda
NTT DATA CORPORATION
diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml
index 673a0e73e4..6d56912221 100644
--- a/doc/src/sgml/monitoring.sgml
+++ b/doc/src/sgml/monitoring.sgml
@@ -424,6 +424,13 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser
</entry>
</row>
+ <row>
+ <entry><structname>pg_stat_wal</structname><indexterm><primary>pg_stat_wal</primary></indexterm></entry>
+ <entry>One row only, showing statistics about the WAL writing activity. See
+ <xref linkend="monitoring-pg-stat-wal-view"/> for details.
+ </entry>
+ </row>
+
<row>
<entry><structname>pg_stat_database</structname><indexterm><primary>pg_stat_database</primary></indexterm></entry>
<entry>One row per database, showing database-wide statistics. See
@@ -3280,6 +3287,56 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i
</sect2>
+ <sect2 id="monitoring-pg-stat-wal-view">
+ <title><structname>pg_stat_wal</structname></title>
+
+ <indexterm>
+ <primary>pg_stat_wal</primary>
+ </indexterm>
+
+ <para>
+ The <structname>pg_stat_wal</structname> view will always have a
+ single row, containing data about the WAL writing activity of the cluster.
+ </para>
+
+ <table id="pg-stat-wal-view" xreflabel="pg_stat_wal">
+ <title><structname>pg_stat_wal</structname> View</title>
+ <tgroup cols="1">
+ <thead>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ Column Type
+ </para>
+ <para>
+ Description
+ </para></entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>wal_buffers_full</structfield> <type>bigint</type>
+ </para>
+ <para>
+ Number of WAL writes when the <xref linkend="guc-wal-buffers"/> are full
+ </para></entry>
+ </row>
+
+ <row>
+ <entry role="catalog_table_entry"><para role="column_definition">
+ <structfield>stats_reset</structfield> <type>timestamp with time zone</type>
+ </para>
+ <para>
+ Time at which these statistics were last reset
+ </para></entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+
+</sect2>
+
<sect2 id="monitoring-pg-stat-database-view">
<title><structname>pg_stat_database</structname></title>
@@ -4668,8 +4725,9 @@ SELECT pid, wait_event_type, wait_event FROM pg_stat_activity WHERE wait_event i
argument. The argument can be <literal>bgwriter</literal> to reset
all the counters shown in
the <structname>pg_stat_bgwriter</structname>
- view, or <literal>archiver</literal> to reset all the counters shown in
- the <structname>pg_stat_archiver</structname> view.
+ view, <literal>archiver</literal> to reset all the counters shown in
+ the <structname>pg_stat_archiver</structname> view, or <literal>wal</literal>
+ to reset all the counters shown in the <structname>pg_stat_wal</structname> view.
</para>
<para>
This function is restricted to superusers by default, but other users
diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c
index 92389e6666..5c97da49ae 100644
--- a/src/backend/access/heap/vacuumlazy.c
+++ b/src/backend/access/heap/vacuumlazy.c
@@ -604,6 +604,7 @@ heap_vacuum_rel(Relation onerel, VacuumParams *params,
onerel->rd_rel->relisshared,
Max(new_live_tuples, 0),
vacrelstats->new_dead_tuples);
+ pgstat_send_wal();
pgstat_progress_end_command();
/* and log the action if appropriate */
diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c
index 09c01ed4ae..b485ff49f9 100644
--- a/src/backend/access/transam/xlog.c
+++ b/src/backend/access/transam/xlog.c
@@ -2194,6 +2194,7 @@ AdvanceXLInsertBuffer(XLogRecPtr upto, bool opportunistic)
WriteRqst.Flush = 0;
XLogWrite(WriteRqst, false);
LWLockRelease(WALWriteLock);
+ WalStats.m_wal_buffers_full++;
TRACE_POSTGRESQL_WAL_BUFFER_WRITE_DIRTY_DONE();
}
/* Re-acquire WALBufMappingLock and retry */
diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql
index ed4f3f142d..643445c189 100644
--- a/src/backend/catalog/system_views.sql
+++ b/src/backend/catalog/system_views.sql
@@ -979,6 +979,11 @@ CREATE VIEW pg_stat_bgwriter AS
pg_stat_get_buf_alloc() AS buffers_alloc,
pg_stat_get_bgwriter_stat_reset_time() AS stats_reset;
+CREATE VIEW pg_stat_wal AS
+ SELECT
+ pg_stat_get_wal_buffers_full() AS wal_buffers_full,
+ pg_stat_get_wal_stat_reset_time() AS stats_reset;
+
CREATE VIEW pg_stat_progress_analyze AS
SELECT
S.pid AS pid, S.datid AS datid, D.datname AS datname,
diff --git a/src/backend/postmaster/bgwriter.c b/src/backend/postmaster/bgwriter.c
index 069e27e427..450c19968b 100644
--- a/src/backend/postmaster/bgwriter.c
+++ b/src/backend/postmaster/bgwriter.c
@@ -238,6 +238,9 @@ BackgroundWriterMain(void)
*/
pgstat_send_bgwriter();
+ /* Send wal statistics */
+ pgstat_send_wal();
+
if (FirstCallSinceLastCheckpoint())
{
/*
diff --git a/src/backend/postmaster/checkpointer.c b/src/backend/postmaster/checkpointer.c
index 624a3238b8..b82ba54523 100644
--- a/src/backend/postmaster/checkpointer.c
+++ b/src/backend/postmaster/checkpointer.c
@@ -494,6 +494,9 @@ CheckpointerMain(void)
*/
pgstat_send_bgwriter();
+ /* Send wal statistics to the stats collector. */
+ pgstat_send_wal();
+
/*
* If any checkpoint flags have been set, redo the loop to handle the
* checkpoint without sleeping.
diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c
index 5f4b168fd1..e23446179f 100644
--- a/src/backend/postmaster/pgstat.c
+++ b/src/backend/postmaster/pgstat.c
@@ -141,6 +141,13 @@ char *pgstat_stat_tmpname = NULL;
*/
PgStat_MsgBgWriter BgWriterStats;
+/*
+ * WAL global statistics counter.
+ * This counter is incremented by each backend and background worker,
+ * and then sent to the stats collector process.
+ */
+PgStat_MsgWal WalStats;
+
/*
* List of SLRU names that we keep stats for. There is no central registry of
* SLRUs, so we use this fixed list instead. The "other" entry is used for
@@ -281,6 +288,7 @@ static int localNumBackends = 0;
*/
static PgStat_ArchiverStats archiverStats;
static PgStat_GlobalStats globalStats;
+static PgStat_WalStats walStats;
static PgStat_SLRUStats slruStats[SLRU_NUM_ELEMENTS];
/*
@@ -353,6 +361,7 @@ static void pgstat_recv_vacuum(PgStat_MsgVacuum *msg, int len);
static void pgstat_recv_analyze(PgStat_MsgAnalyze *msg, int len);
static void pgstat_recv_archiver(PgStat_MsgArchiver *msg, int len);
static void pgstat_recv_bgwriter(PgStat_MsgBgWriter *msg, int len);
+static void pgstat_recv_wal(PgStat_MsgWal *msg, int len);
static void pgstat_recv_slru(PgStat_MsgSLRU *msg, int len);
static void pgstat_recv_funcstat(PgStat_MsgFuncstat *msg, int len);
static void pgstat_recv_funcpurge(PgStat_MsgFuncpurge *msg, int len);
@@ -938,6 +947,9 @@ pgstat_report_stat(bool force)
/* Now, send function statistics */
pgstat_send_funcstats();
+ /* Send wal statistics */
+ pgstat_send_wal();
+
/* Finally send SLRU statistics */
pgstat_send_slru();
}
@@ -1370,11 +1382,13 @@ pgstat_reset_shared_counters(const char *target)
msg.m_resettarget = RESET_ARCHIVER;
else if (strcmp(target, "bgwriter") == 0)
msg.m_resettarget = RESET_BGWRITER;
+ else if (strcmp(target, "wal") == 0)
+ msg.m_resettarget = RESET_WAL;
else
ereport(ERROR,
(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
errmsg("unrecognized reset target: \"%s\"", target),
- errhint("Target must be \"archiver\" or \"bgwriter\".")));
+ errhint("Target must be \"archiver\", \"bgwriter\" or \"wal\".")));
pgstat_setheader(&msg.m_hdr, PGSTAT_MTYPE_RESETSHAREDCOUNTER);
pgstat_send(&msg, sizeof(msg));
@@ -2674,6 +2688,21 @@ pgstat_fetch_global(void)
return &globalStats;
}
+/*
+ * ---------
+ * pgstat_fetch_stat_wal() -
+ *
+ * Support function for the SQL-callable pgstat* functions. Returns
+ * a pointer to the wal statistics struct.
+ * ---------
+ */
+PgStat_WalStats *
+pgstat_fetch_stat_wal(void)
+{
+ backend_read_statsfile();
+
+ return &walStats;
+}
/*
* ---------
@@ -4419,6 +4448,38 @@ pgstat_send_bgwriter(void)
MemSet(&BgWriterStats, 0, sizeof(BgWriterStats));
}
+/* ----------
+ * pgstat_send_wal() -
+ *
+ * Send wal statistics to the collector
+ * ----------
+ */
+void
+pgstat_send_wal(void)
+{
+ /* We assume this initializes to zeroes */
+ static const PgStat_MsgWal all_zeroes;
+
+ /*
+ * This function can be called even if nothing at all has happened. In
+ * this case, avoid sending a completely empty message to the stats
+ * collector.
+ */
+ if (memcmp(&WalStats, &all_zeroes, sizeof(PgStat_MsgWal)) == 0)
+ return;
+
+ /*
+ * Prepare and send the message
+ */
+ pgstat_setheader(&WalStats.m_hdr, PGSTAT_MTYPE_WAL);
+ pgstat_send(&WalStats, sizeof(WalStats));
+
+ /*
+ * Clear out the statistics buffer, so it can be re-used.
+ */
+ MemSet(&WalStats, 0, sizeof(WalStats));
+}
+
/* ----------
* pgstat_send_slru() -
*
@@ -4658,6 +4719,10 @@ PgstatCollectorMain(int argc, char *argv[])
pgstat_recv_bgwriter(&msg.msg_bgwriter, len);
break;
+ case PGSTAT_MTYPE_WAL:
+ pgstat_recv_wal(&msg.msg_wal, len);
+ break;
+
case PGSTAT_MTYPE_SLRU:
pgstat_recv_slru(&msg.msg_slru, len);
break;
@@ -4927,6 +4992,12 @@ pgstat_write_statsfiles(bool permanent, bool allDbs)
rc = fwrite(&archiverStats, sizeof(archiverStats), 1, fpout);
(void) rc; /* we'll check for error with ferror */
+ /*
+ * Write wal stats struct
+ */
+ rc = fwrite(&walStats, sizeof(walStats), 1, fpout);
+ (void) rc; /* we'll check for error with ferror */
+
/*
* Write SLRU stats struct
*/
@@ -5186,11 +5257,12 @@ pgstat_read_statsfiles(Oid onlydb, bool permanent, bool deep)
HASH_ELEM | HASH_BLOBS | HASH_CONTEXT);
/*
- * Clear out global and archiver statistics so they start from zero in
+ * Clear out global, archiver and wal statistics so they start from zero in
* case we can't load an existing statsfile.
*/
memset(&globalStats, 0, sizeof(globalStats));
memset(&archiverStats, 0, sizeof(archiverStats));
+ memset(&walStats, 0, sizeof(walStats));
memset(&slruStats, 0, sizeof(slruStats));
/*
@@ -5199,6 +5271,7 @@ pgstat_read_statsfiles(Oid onlydb, bool permanent, bool deep)
*/
globalStats.stat_reset_timestamp = GetCurrentTimestamp();
archiverStats.stat_reset_timestamp = globalStats.stat_reset_timestamp;
+ walStats.stat_reset_timestamp = globalStats.stat_reset_timestamp;
/*
* Set the same reset timestamp for all SLRU items too.
@@ -5268,6 +5341,17 @@ pgstat_read_statsfiles(Oid onlydb, bool permanent, bool deep)
goto done;
}
+ /*
+ * Read wal stats struct
+ */
+ if (fread(&walStats, 1, sizeof(walStats), fpin) != sizeof(walStats))
+ {
+ ereport(pgStatRunningInCollector ? LOG : WARNING,
+ (errmsg("corrupted statistics file \"%s\"", statfile)));
+ memset(&walStats, 0, sizeof(walStats));
+ goto done;
+ }
+
/*
* Read SLRU stats struct
*/
@@ -5632,6 +5716,17 @@ pgstat_read_db_statsfile_timestamp(Oid databaseid, bool permanent,
return false;
}
+ /*
+ * Read wal stats struct
+ */
+ if (fread(&walStats, 1, sizeof(walStats), fpin) != sizeof(walStats))
+ {
+ ereport(pgStatRunningInCollector ? LOG : WARNING,
+ (errmsg("corrupted statistics file \"%s\"", statfile)));
+ FreeFile(fpin);
+ return false;
+ }
+
/*
* Read SLRU stats struct
*/
@@ -6208,6 +6303,12 @@ pgstat_recv_resetsharedcounter(PgStat_MsgResetsharedcounter *msg, int len)
memset(&archiverStats, 0, sizeof(archiverStats));
archiverStats.stat_reset_timestamp = GetCurrentTimestamp();
}
+ else if (msg->m_resettarget == RESET_WAL)
+ {
+ /* Reset the wal statistics for the cluster. */
+ memset(&walStats, 0, sizeof(walStats));
+ walStats.stat_reset_timestamp = GetCurrentTimestamp();
+ }
/*
* Presumably the sender of this message validated the target, don't
@@ -6422,6 +6523,18 @@ pgstat_recv_bgwriter(PgStat_MsgBgWriter *msg, int len)
globalStats.buf_alloc += msg->m_buf_alloc;
}
+/* ----------
+ * pgstat_recv_wal() -
+ *
+ * Process a WAL message.
+ * ----------
+ */
+static void
+pgstat_recv_wal(PgStat_MsgWal *msg, int len)
+{
+ walStats.wal_buffers_full += msg->m_wal_buffers_full;
+}
+
/* ----------
* pgstat_recv_slru() -
*
diff --git a/src/backend/postmaster/walwriter.c b/src/backend/postmaster/walwriter.c
index 45a2757969..8fead4ca51 100644
--- a/src/backend/postmaster/walwriter.c
+++ b/src/backend/postmaster/walwriter.c
@@ -243,6 +243,9 @@ WalWriterMain(void)
else if (left_till_hibernate > 0)
left_till_hibernate--;
+ /* Send wal statistics */
+ pgstat_send_wal();
+
/*
* Sleep until we are signaled or WalWriterDelay has elapsed. If we
* haven't done anything useful for quite some time, lengthen the
diff --git a/src/backend/replication/walsender.c b/src/backend/replication/walsender.c
index 3f756b470a..9ae7b9d6e6 100644
--- a/src/backend/replication/walsender.c
+++ b/src/backend/replication/walsender.c
@@ -1430,6 +1430,9 @@ WalSndWaitForWal(XLogRecPtr loc)
else
RecentFlushPtr = GetXLogReplayRecPtr(NULL);
+ /* Send wal statistics */
+ pgstat_send_wal();
+
/*
* If postmaster asked us to stop, don't wait anymore.
*
diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c
index 95738a4e34..aa41330796 100644
--- a/src/backend/utils/adt/pgstatfuncs.c
+++ b/src/backend/utils/adt/pgstatfuncs.c
@@ -1697,6 +1697,18 @@ pg_stat_get_buf_alloc(PG_FUNCTION_ARGS)
PG_RETURN_INT64(pgstat_fetch_global()->buf_alloc);
}
+Datum
+pg_stat_get_wal_buffers_full(PG_FUNCTION_ARGS)
+{
+ PG_RETURN_INT64(pgstat_fetch_stat_wal()->wal_buffers_full);
+}
+
+Datum
+pg_stat_get_wal_stat_reset_time(PG_FUNCTION_ARGS)
+{
+ PG_RETURN_TIMESTAMPTZ(pgstat_fetch_stat_wal()->stat_reset_timestamp);
+}
+
/*
* Returns statistics of SLRU caches.
*/
diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat
index 687509ba92..13cc892abc 100644
--- a/src/include/catalog/pg_proc.dat
+++ b/src/include/catalog/pg_proc.dat
@@ -5484,6 +5484,14 @@
proname => 'pg_stat_get_buf_alloc', provolatile => 's', proparallel => 'r',
prorettype => 'int8', proargtypes => '', prosrc => 'pg_stat_get_buf_alloc' },
+{ oid => '8000', descr => 'statistics: number of WAL writes when the wal buffers are full',
+ proname => 'pg_stat_get_wal_buffers_full', provolatile => 's', proparallel => 'r',
+ prorettype => 'int8', proargtypes => '', prosrc => 'pg_stat_get_wal_buffers_full' },
+{ oid => '8001', descr => 'statistics: last reset for the WAL statistics',
+ proname => 'pg_stat_get_wal_stat_reset_time', provolatile => 's',
+ proparallel => 'r', prorettype => 'timestamptz', proargtypes => '',
+ prosrc => 'pg_stat_get_wal_stat_reset_time' },
+
{ oid => '2306', descr => 'statistics: information about SLRU caches',
proname => 'pg_stat_get_slru', prorows => '100', proisstrict => 'f',
proretset => 't', provolatile => 's', proparallel => 'r',
diff --git a/src/include/pgstat.h b/src/include/pgstat.h
index 0dfbac46b4..eb706068ba 100644
--- a/src/include/pgstat.h
+++ b/src/include/pgstat.h
@@ -61,6 +61,7 @@ typedef enum StatMsgType
PGSTAT_MTYPE_ANALYZE,
PGSTAT_MTYPE_ARCHIVER,
PGSTAT_MTYPE_BGWRITER,
+ PGSTAT_MTYPE_WAL,
PGSTAT_MTYPE_SLRU,
PGSTAT_MTYPE_FUNCSTAT,
PGSTAT_MTYPE_FUNCPURGE,
@@ -122,7 +123,8 @@ typedef struct PgStat_TableCounts
typedef enum PgStat_Shared_Reset_Target
{
RESET_ARCHIVER,
- RESET_BGWRITER
+ RESET_BGWRITER,
+ RESET_WAL
} PgStat_Shared_Reset_Target;
/* Possible object types for resetting single counters */
@@ -436,6 +438,16 @@ typedef struct PgStat_MsgBgWriter
PgStat_Counter m_checkpoint_sync_time;
} PgStat_MsgBgWriter;
+/* ----------
+ * PgStat_MsgWal Sent by each backend and background workers to update WAL statistics.
+ * ----------
+ */
+typedef struct PgStat_MsgWal
+{
+ PgStat_MsgHdr m_hdr;
+ PgStat_Counter m_wal_buffers_full; /* number of WAL writes caused by WAL buffers being full */
+} PgStat_MsgWal;
+
/* ----------
* PgStat_MsgSLRU Sent by a backend to update SLRU statistics.
* ----------
@@ -596,6 +608,7 @@ typedef union PgStat_Msg
PgStat_MsgAnalyze msg_analyze;
PgStat_MsgArchiver msg_archiver;
PgStat_MsgBgWriter msg_bgwriter;
+ PgStat_MsgWal msg_wal;
PgStat_MsgSLRU msg_slru;
PgStat_MsgFuncstat msg_funcstat;
PgStat_MsgFuncpurge msg_funcpurge;
@@ -745,6 +758,15 @@ typedef struct PgStat_GlobalStats
TimestampTz stat_reset_timestamp;
} PgStat_GlobalStats;
+/*
+ * WAL statistics kept in the stats collector
+ */
+typedef struct PgStat_WalStats
+{
+ PgStat_Counter wal_buffers_full; /* number of WAL writes caused by WAL buffers being full */
+ TimestampTz stat_reset_timestamp; /* time at which the stats were last reset */
+} PgStat_WalStats;
+
/*
* SLRU statistics kept in the stats collector
*/
@@ -1265,6 +1287,11 @@ extern char *pgstat_stat_filename;
*/
extern PgStat_MsgBgWriter BgWriterStats;
+/*
+ * WAL statistics counter, updated by backends and background processes
+ */
+extern PgStat_MsgWal WalStats;
+
/*
* Updated by pgstat_count_buffer_*_time macros
*/
@@ -1464,6 +1491,7 @@ extern void pgstat_twophase_postabort(TransactionId xid, uint16 info,
extern void pgstat_send_archiver(const char *xlog, bool failed);
extern void pgstat_send_bgwriter(void);
+extern void pgstat_send_wal(void);
/* ----------
* Support functions for the SQL-callable functions to
@@ -1478,6 +1506,7 @@ extern PgStat_StatFuncEntry *pgstat_fetch_stat_funcentry(Oid funcid);
extern int pgstat_fetch_stat_numbackends(void);
extern PgStat_ArchiverStats *pgstat_fetch_stat_archiver(void);
extern PgStat_GlobalStats *pgstat_fetch_global(void);
+extern PgStat_WalStats *pgstat_fetch_stat_wal(void);
extern PgStat_SLRUStats *pgstat_fetch_slru(void);
extern void pgstat_count_slru_page_zeroed(int slru_idx);
diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out
index 2a18dc423e..1e4ac4432e 100644
--- a/src/test/regress/expected/rules.out
+++ b/src/test/regress/expected/rules.out
@@ -2129,6 +2129,8 @@ pg_stat_user_tables| SELECT pg_stat_all_tables.relid,
pg_stat_all_tables.autoanalyze_count
FROM pg_stat_all_tables
WHERE ((pg_stat_all_tables.schemaname <> ALL (ARRAY['pg_catalog'::name, 'information_schema'::name])) AND (pg_stat_all_tables.schemaname !~ '^pg_toast'::text));
+pg_stat_wal| SELECT pg_stat_get_wal_buffers_full() AS wal_buffers_full,
+ pg_stat_get_wal_stat_reset_time() AS stats_reset;
pg_stat_wal_receiver| SELECT s.pid,
s.status,
s.receive_start_lsn,
diff --git a/src/test/regress/expected/sysviews.out b/src/test/regress/expected/sysviews.out
index 1cffc3349d..81bdacf59d 100644
--- a/src/test/regress/expected/sysviews.out
+++ b/src/test/regress/expected/sysviews.out
@@ -76,6 +76,13 @@ select count(*) >= 0 as ok from pg_prepared_xacts;
t
(1 row)
+-- There must be only one record
+select count(*) = 1 as ok from pg_stat_wal;
+ ok
+----
+ t
+(1 row)
+
-- This is to record the prevailing planner enable_foo settings during
-- a regression test run.
select name, setting from pg_settings where name like 'enable%';
diff --git a/src/test/regress/sql/sysviews.sql b/src/test/regress/sql/sysviews.sql
index ac4a0e1cbb..b9b875bc6a 100644
--- a/src/test/regress/sql/sysviews.sql
+++ b/src/test/regress/sql/sysviews.sql
@@ -37,6 +37,9 @@ select count(*) = 0 as ok from pg_prepared_statements;
-- See also prepared_xacts.sql
select count(*) >= 0 as ok from pg_prepared_xacts;
+-- There must be only one record
+select count(*) = 1 as ok from pg_stat_wal;
+
-- This is to record the prevailing planner enable_foo settings during
-- a regression test run.
select name, setting from pg_settings where name like 'enable%';