On Tue, Aug 3, 2021 at 2:13 PM Andres Freund <and...@anarazel.de> wrote:
>
> Hi,
>
> On 2021-08-02 18:25:56 -0400, Melanie Plageman wrote:
> > Thanks for the feedback!
> >
> > I agree it makes sense to count strategy writes separately.
> >
> > I thought about this some more, and I don't know if it makes sense to
> > only count "avoidable" strategy writes.
> >
> > This would mean that a backend writing out a buffer from the strategy
> > ring when no clean shared buffers (as well as no clean strategy buffers)
> > are available would not count that write as a strategy write (even
> > though it is writing out a buffer from its strategy ring). But, it
> > obviously doesn't make sense to count it as a regular buffer being
> > written out. So, I plan to change this code.
>
> What do you mean with "no clean shared buffers ... are available"?
I think I was talking about the scenario in which a backend using a strategy
does not find a clean buffer in the strategy ring, goes to look in the
freelist for a clean shared buffer, and doesn't find one. I was probably
talking in circles up there. I think the current patch counts the right
writes in the right way, though.

> > The most substantial missing piece of the patch right now is persisting
> > the data across reboots.
> >
> > The two places in the code I can see to persist the buffer action stats
> > data are:
> > 1) using the stats collector code (like in pgstat_read/write_statsfiles()
> > 2) using a before_shmem_exit() hook which writes the data structure to a
> > file and then read from it when making the shared memory array initially
>
> I think it's pretty clear that we should go for 1. Having two mechanisms for
> persisting stats data is a bad idea.

The new version uses the stats collector.

> > Also, I'm unsure how writing the buffer action stats out in
> > pgstat_write_statsfiles() will work, since I think that backends can
> > update their buffer action stats after we would have already persisted
> > the data from the BufferActionStatsArray -- causing us to lose those
> > updates.
>
> I was thinking it'd work differently. Whenever a connection ends, it reports
> its data up to pgstats.c (otherwise we'd loose those stats). By the time
> shutdown happens, they all need to have already have reported their stats - so
> we don't need to do anything to get the data to pgstats.c during shutdown
> time.

When you say "whenever a connection ends", what part of the code are you
referring to specifically? Also, when you say "shutdown", do you mean a
backend shutting down or all backends shutting down (including postmaster) --
like pg_ctl stop?

> > And, I don't think I can use pgstat_read_statsfiles() since the
> > BufferActionStatsArray should have the data from the file as soon as the
> > view containing the buffer action stats can be queried. Thus, it seems
> > like I would need to read the file while initializing the array in
> > CreateBufferActionStatsCounters().
>
> Why would backends need to read that data back?

To get totals across restarts -- but that doesn't matter now that I am using
the stats collector.

> > diff --git a/src/backend/catalog/system_views.sql
> > b/src/backend/catalog/system_views.sql
> > index 55f6e3711d..96cac0a74e 100644
> > --- a/src/backend/catalog/system_views.sql
> > +++ b/src/backend/catalog/system_views.sql
> > @@ -1067,9 +1067,6 @@ CREATE VIEW pg_stat_bgwriter AS
> >          pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint,
> >          pg_stat_get_bgwriter_buf_written_clean() AS buffers_clean,
> >          pg_stat_get_bgwriter_maxwritten_clean() AS maxwritten_clean,
> > -        pg_stat_get_buf_written_backend() AS buffers_backend,
> > -        pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync,
> > -        pg_stat_get_buf_alloc() AS buffers_alloc,
> >          pg_stat_get_bgwriter_stat_reset_time() AS stats_reset;
>
> Material for a separate patch, not this. But if we're going to break
> monitoring queries anyway, I think we should consider also renaming
> maxwritten_clean (and perhaps a few others), because nobody understands what
> that is supposed to mean.

Do you mean I shouldn't remove anything from the pg_stat_bgwriter view?
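To make the "whenever a connection ends" question above concrete: what v4
currently does is push each backend's counters up to the stats collector from
the backend-status shutdown hook, roughly as below (condensed from the
attached patch). Part of what I'm unsure about is whether that is the place
you meant, or whether it should instead go through pgstat_report_stat().

/* condensed from the attached patch (backend_status.c) */
static void
pgstat_beshutdown_hook(int code, Datum arg)
{
	...
	/* TODO: should this go in pgstat_report_stat() instead? */
	if (OidIsValid(MyDatabaseId))
		pgstat_record_dead_backend_buffer_actions();
}

void
pgstat_record_dead_backend_buffer_actions(void)
{
	volatile PgBackendStatus *beentry = MyBEEntry;

	/* copy this backend's counters into the outgoing pgstat message ... */
	BufferActionsStats.backend_type = beentry->st_backendType;
	BufferActionsStats.allocs = pg_atomic_read_u64(&beentry->buffer_action_stats.allocs);
	/* ... extends, fsyncs, writes, and writes_strat likewise ... */

	/* ... and ship it to the collector as a PGSTAT_MTYPE_BUFFER_ACTIONS message */
	pgstat_send_buffer_actions();
}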
> > @@ -1089,10 +1077,6 @@ ForwardSyncRequest(const FileTag *ftag, SyncRequestType type)
> >
> >  	LWLockAcquire(CheckpointerCommLock, LW_EXCLUSIVE);
> >
> > -	/* Count all backend writes regardless of if they fit in the queue */
> > -	if (!AmBackgroundWriterProcess())
> > -		CheckpointerShmem->num_backend_writes++;
> > -
> >  	/*
> >  	 * If the checkpointer isn't running or the request queue is full, the
> >  	 * backend will have to perform its own fsync request. But before forcing
> > @@ -1106,8 +1090,10 @@ ForwardSyncRequest(const FileTag *ftag, SyncRequestType type)
> >  		 * Count the subset of writes where backends have to do their own
> >  		 * fsync
> >  		 */
> > +		/* TODO: should we count fsyncs for all types of procs? */
> >  		if (!AmBackgroundWriterProcess())
> > -			CheckpointerShmem->num_backend_fsync++;
> > +			pgstat_increment_buffer_action(BA_Fsync);
> > +
>
> Yes, I think that'd make sense. Now that we can disambiguate the different
> types of syncs between procs, I don't see a point of having a process-type
> filter here. We just loose data...

Done

> >  		/* don't set checksum for all-zero page */
> > @@ -1229,11 +1234,60 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum,
> >  				if (XLogNeedsFlush(lsn) &&
> >  					StrategyRejectBuffer(strategy, buf))
> >  				{
> > +					/*
> > +					 * Unset the strat write flag, as we will not be writing
> > +					 * this particular buffer from our ring out and may end
> > +					 * up having to find a buffer from main shared buffers,
> > +					 * which, if it is dirty, we may have to write out, which
> > +					 * could have been prevented by checkpointing and background
> > +					 * writing
> > +					 */
> > +					StrategyUnChooseBufferFromRing(strategy);
> > +
> >  					/* Drop lock/pin and loop around for another buffer */
> >  					LWLockRelease(BufferDescriptorGetContentLock(buf));
> >  					UnpinBuffer(buf, true);
> >  					continue;
> >  				}
>
> Could we combine this with StrategyRejectBuffer()? It seems a bit wasteful to
> have two function calls into freelist.c when the second happens exactly when
> the first returns true?
>
> > +
> > +				/*
> > +				 * TODO: there is certainly a better way to write this
> > +				 * logic
> > +				 */
> > +
> > +				/*
> > +				 * The dirty buffer that will be written out was selected
> > +				 * from the ring and we did not bother checking the
> > +				 * freelist or doing a clock sweep to look for a clean
> > +				 * buffer to use, thus, this write will be counted as a
> > +				 * strategy write -- one that may be unnecessary without a
> > +				 * strategy
> > +				 */
> > +				if (StrategyIsBufferFromRing(strategy))
> > +				{
> > +					pgstat_increment_buffer_action(BA_Write_Strat);
> > +				}
> > +
> > +				/*
> > +				 * If the dirty buffer was one we grabbed from the
> > +				 * freelist or through a clock sweep, it could have been
> > +				 * written out by bgwriter or checkpointer, thus, we will
> > +				 * count it as a regular write
> > +				 */
> > +				else
> > +					pgstat_increment_buffer_action(BA_Write);
>
> It seems this would be better solved by having an "bool *from_ring" or
> GetBufferSource* parameter to StrategyGetBuffer().

I've addressed both of these in the new version.
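Concretely, in the attached v4 the ring-vs-shared-buffers information now
travels with the victim buffer as a flag, and the write is classified at the
point where we actually decide to flush it (condensed from the patch):

/* condensed from the attached patch (bufmgr.c BufferAlloc and freelist.c) */
bool		from_ring = false;
BufferActionType buffer_action;

/* StrategyGetBuffer() now reports whether the victim came from the strategy ring */
buf = StrategyGetBuffer(strategy, &buf_state, &from_ring);
...
if (XLogNeedsFlush(lsn) &&
	StrategyRejectBuffer(strategy, buf, &from_ring))
{
	/* buffer was dropped from the ring; StrategyRejectBuffer() has cleared from_ring */
	LWLockRelease(BufferDescriptorGetContentLock(buf));
	UnpinBuffer(buf, true);
	continue;
}

/* a dirty victim that came from the strategy ring counts as a "strategy write" */
buffer_action = from_ring ? BA_Write_Strat : BA_Write;
pgstat_increment_buffer_action(buffer_action);

Since StrategyRejectBuffer() clears the flag itself when it kicks the buffer
out of the ring, there is only the one call into freelist.c.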
> > @@ -2895,6 +2948,20 @@ FlushBuffer(BufferDesc *buf, SMgrRelation reln)
> >  	/*
> >  	 * bufToWrite is either the shared buffer or a copy, as appropriate.
> >  	 */
> > +
> > +	/*
> > +	 * TODO: consider that if we did not need to distinguish between a buffer
> > +	 * flushed that was grabbed from the ring buffer and written out as part
> > +	 * of a strategy which was not from main Shared Buffers (and thus
> > +	 * preventable by bgwriter or checkpointer), then we could move all calls
> > +	 * to pgstat_increment_buffer_action() here except for the one for
> > +	 * extends, which would remain in ReadBuffer_common() before smgrextend()
> > +	 * (unless we decide to start counting other extends). That includes the
> > +	 * call to count buffers written by bgwriter and checkpointer which go
> > +	 * through FlushBuffer() but not BufferAlloc(). That would make it
> > +	 * simpler. Perhaps instead we can find somewhere else to indicate that
> > +	 * the buffer is from the ring of buffers to reuse.
> > +	 */
> >  	smgrwrite(reln,
> >  			  buf->tag.forkNum,
> >  			  buf->tag.blockNum,
>
> Can we just add a parameter to FlushBuffer indicating what the source of the
> write is?

I only just noticed this comment, so I'll address it in the next version. I
rebased today and hit merge conflicts, so it looks like v5 will be on its way
soon anyway.

> > @@ -247,7 +257,7 @@ StrategyGetBuffer(BufferAccessStrategy strategy, uint32 *buf_state)
> >  	 * the rate of buffer consumption. Note that buffers recycled by a
> >  	 * strategy object are intentionally not counted here.
> >  	 */
> > -	pg_atomic_fetch_add_u32(&StrategyControl->numBufferAllocs, 1);
> > +	pgstat_increment_buffer_action(BA_Alloc);
> >
> >  	/*
> >  	 * First check, without acquiring the lock, whether there's buffers in the
> >
> > @@ -411,11 +421,6 @@ StrategySyncStart(uint32 *complete_passes, uint32 *num_buf_alloc)
> >  		 */
> >  		*complete_passes += nextVictimBuffer / NBuffers;
> >  	}
> > -
> > -	if (num_buf_alloc)
> > -	{
> > -		*num_buf_alloc = pg_atomic_exchange_u32(&StrategyControl->numBufferAllocs, 0);
> > -	}
> >  	SpinLockRelease(&StrategyControl->buffer_strategy_lock);
> >  	return result;
> >  }
>
> Hm. Isn't bgwriter using the *num_buf_alloc value to pace its activity? I
> suspect this patch shouldn't get rid of numBufferAllocs at the same time as
> overhauling the stats stuff. Perhaps we don't need both - but it's not obvious
> that that's the case / how we can make that work.

I initially meant to add a function to the patch like
pg_stat_get_buffer_actions(), but one that takes a BufferActionType and a
BackendType as parameters and returns a single value: the count of that
buffer action for that backend type. Let's say I defined it like this:

	uint64 pg_stat_get_backend_buffer_actions_stats(BackendType backend_type,
													BufferActionType ba_type)

Then I intended to use that in StrategySyncStart() to set *num_buf_alloc:
subtract the current value of StrategyControl->numBufferAllocs from the value
returned by pg_stat_get_backend_buffer_actions_stats(B_BG_WRITER, BA_Alloc)
to get a delta, val; return val in *num_buf_alloc; and then add val to
StrategyControl->numBufferAllocs. I think that would have the same behavior
as the current code, though I'm not sure whether the performance would end up
better or worse. It wouldn't be atomically incrementing
StrategyControl->numBufferAllocs on every buffer allocation anymore, but it
would do a few more atomic operations in StrategySyncStart() than before.
Also, we would do all the work done by pg_stat_get_buffer_actions() in
StrategySyncStart(). But that is called comparatively infrequently, right?
The sketch below spells out what I mean.
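This is a hypothetical sketch only -- it is not what the attached v4 does (v4
keeps the existing numBufferAllocs bookkeeping untouched) -- and it assumes
both the pg_stat_get_backend_buffer_actions_stats() helper described above
and that numBufferAllocs is widened to a pg_atomic_uint64 holding a running
total rather than a per-cycle delta:

/* hypothetical: inside StrategySyncStart(), replacing the removed block */
if (num_buf_alloc)
{
	uint64		total_allocs;
	uint64		prev_allocs;
	uint64		val;

	/* total BA_Alloc count accumulated via the new per-backend counters */
	total_allocs = pg_stat_get_backend_buffer_actions_stats(B_BG_WRITER, BA_Alloc);

	/* what previous StrategySyncStart() calls have already handed out */
	prev_allocs = pg_atomic_read_u64(&StrategyControl->numBufferAllocs);

	/* report only the delta, which is what BgBufferSync() paces itself on */
	val = total_allocs - prev_allocs;
	*num_buf_alloc = (uint32) val;

	/* remember the new total so the next call computes a fresh delta */
	pg_atomic_add_fetch_u64(&StrategyControl->numBufferAllocs, val);
}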
> > +void
> > +pgstat_increment_buffer_action(BufferActionType ba_type)
> > +{
> > +	volatile PgBackendStatus *beentry = MyBEEntry;
> > +
> > +	if (!beentry || !pgstat_track_activities)
> > +		return;
> > +
> > +	if (ba_type == BA_Alloc)
> > +		pg_atomic_add_fetch_u64(&beentry->buffer_action_stats.allocs, 1);
> > +	else if (ba_type == BA_Extend)
> > +		pg_atomic_add_fetch_u64(&beentry->buffer_action_stats.extends, 1);
> > +	else if (ba_type == BA_Fsync)
> > +		pg_atomic_add_fetch_u64(&beentry->buffer_action_stats.fsyncs, 1);
> > +	else if (ba_type == BA_Write)
> > +		pg_atomic_add_fetch_u64(&beentry->buffer_action_stats.writes, 1);
> > +	else if (ba_type == BA_Write_Strat)
> > +		pg_atomic_add_fetch_u64(&beentry->buffer_action_stats.writes_strat, 1);
> > +}
>
> I don't think we want to use atomic increments here - they're *slow*. And
> there only ever can be a single writer to a backend's stats. So just doing
> something like
>   pg_atomic_write_u64(&var, pg_atomic_read_u64(&var) + 1)
> should do the trick.

Done

> > +/*
> > + * Called for a single backend at the time of death to persist its I/O stats
> > + */
> > +void
> > +pgstat_record_dead_backend_buffer_actions(void)
> > +{
> > +	volatile PgBackendBufferActionStats *ba_stats;
> > +	volatile PgBackendStatus *beentry = MyBEEntry;
> > +
> > +	if (beentry->st_procpid != 0)
> > +		return;
> > +
> > +	// TODO: is this correct? could there be a data race? do I need a lock?
> > +	ba_stats = &BufferActionStatsArray[beentry->st_backendType];
> > +	pg_atomic_add_fetch_u64(&ba_stats->allocs, pg_atomic_read_u64(&beentry->buffer_action_stats.allocs));
> > +	pg_atomic_add_fetch_u64(&ba_stats->extends, pg_atomic_read_u64(&beentry->buffer_action_stats.extends));
> > +	pg_atomic_add_fetch_u64(&ba_stats->fsyncs, pg_atomic_read_u64(&beentry->buffer_action_stats.fsyncs));
> > +	pg_atomic_add_fetch_u64(&ba_stats->writes, pg_atomic_read_u64(&beentry->buffer_action_stats.writes));
> > +	pg_atomic_add_fetch_u64(&ba_stats->writes_strat, pg_atomic_read_u64(&beentry->buffer_action_stats.writes_strat));
> > +}
>
> I don't see a race, FWIW.
>
> This is where I propose that we instead report the values up to the stats
> collector, instead of having a separate array that we need to persist

Changed

> > +/*
> > + * Fill the provided values array with the accumulated counts of buffer actions
> > + * taken by all backends of type backend_type (input parameter), both alive and
> > + * dead. This is currently only used by pg_stat_get_buffer_actions() to create
> > + * the rows in the pg_stat_buffer_actions system view.
> > + */
> > +void
> > +pgstat_recount_all_buffer_actions(BackendType backend_type, Datum *values)
> > +{
> > +	int			i;
> > +	volatile PgBackendStatus *beentry;
> > +
> > +	/*
> > +	 * Add stats from all exited backends
> > +	 */
> > +	values[BA_Alloc] = pg_atomic_read_u64(&BufferActionStatsArray[backend_type].allocs);
> > +	values[BA_Extend] = pg_atomic_read_u64(&BufferActionStatsArray[backend_type].extends);
> > +	values[BA_Fsync] = pg_atomic_read_u64(&BufferActionStatsArray[backend_type].fsyncs);
> > +	values[BA_Write] = pg_atomic_read_u64(&BufferActionStatsArray[backend_type].writes);
> > +	values[BA_Write_Strat] = pg_atomic_read_u64(&BufferActionStatsArray[backend_type].writes_strat);
> > +
> > +	/*
> > +	 * Loop through all live backends and count their buffer actions
> > +	 */
> > +	// TODO: see note in pg_stat_get_buffer_actions() about inefficiency of this method
> > +
> > +	beentry = BackendStatusArray;
> > +	for (i = 1; i <= MaxBackends; i++)
> > +	{
> > +		/* Don't count dead backends. They should already be counted */
> > +		if (beentry->st_procpid == 0)
> > +			continue;
> > +		if (beentry->st_backendType != backend_type)
> > +			continue;
> > +
> > +		values[BA_Alloc] += pg_atomic_read_u64(&beentry->buffer_action_stats.allocs);
> > +		values[BA_Extend] += pg_atomic_read_u64(&beentry->buffer_action_stats.extends);
> > +		values[BA_Fsync] += pg_atomic_read_u64(&beentry->buffer_action_stats.fsyncs);
> > +		values[BA_Write] += pg_atomic_read_u64(&beentry->buffer_action_stats.writes);
> > +		values[BA_Write_Strat] += pg_atomic_read_u64(&beentry->buffer_action_stats.writes_strat);
> > +
> > +		beentry++;
> > +	}
> > +}
>
> It seems to make a bit more sense to have this sum up the stats for all
> backend types at once.

Changed.

> > +	/*
> > +	 * Currently, the only supported backend types for stats are the following.
> > +	 * If this were to change, pg_proc.dat would need to be changed as well
> > +	 * to reflect the new expected number of rows.
> > +	 */
> > +	Datum		values[BUFFER_ACTION_NUM_TYPES];
> > +	bool		nulls[BUFFER_ACTION_NUM_TYPES];
>
> Ah ;)

I just went ahead and made a row for each backend type.

- Melanie

From ab751bdbc96c8c52a341d9ced3f9e1fe929e2010 Mon Sep 17 00:00:00 2001 From: Melanie Plageman <melanieplage...@gmail.com> Date: Mon, 2 Aug 2021 17:56:07 -0400 Subject: [PATCH v4] Add system view tracking shared buffer actions Add a system view which tracks - number of shared buffers the checkpointer and bgwriter write out - number of shared buffers a regular backend is forced to flush - number of extends done by a regular backend through shared buffers - number of buffers flushed by a backend or autovacuum using a BufferAccessStrategy which, were they not to use this strategy, could perhaps have been avoided if a clean shared buffer was available - number of fsyncs done by a backend which could have been done by checkpointer if sync queue had not been full - number of buffers allocated by a regular backend or autovacuum worker for either a new block or an existing block of a relation which is not currently in a buffer All of these stats which were in the system view pg_stat_bgwriter have been removed from that view. All backends, on exit, will update a shared memory array with the buffers they wrote or extended. When the view is queried, add all live backend's statuses to the totals in the shared memory array and return that as the full total. Each row of the view is for a particular backend type and each column is the number of a particular kind of buffer action taken by the various backends. TODO: - Some kind of test? - Docs change --- src/backend/catalog/system_views.sql | 14 +++- src/backend/postmaster/checkpointer.c | 27 +----- src/backend/postmaster/pgstat.c | 40 ++++++++- src/backend/storage/buffer/bufmgr.c | 30 +++++-- src/backend/storage/buffer/freelist.c | 16 +++- src/backend/utils/activity/backend_status.c | 62 ++++++++++++++ src/backend/utils/adt/pgstatfuncs.c | 91 ++++++++++++++++++--- src/backend/utils/init/miscinit.c | 2 + src/include/catalog/pg_proc.dat | 21 ++--- src/include/miscadmin.h | 12 +++ src/include/pgstat.h | 26 ++++-- src/include/storage/buf_internals.h | 4 +- src/include/utils/backend_status.h | 15 +++- src/test/regress/expected/rules.out | 10 ++- src/test/regress/sql/stats.sql | 1 + 15 files changed, 297 insertions(+), 74 deletions(-) diff --git a/src/backend/catalog/system_views.sql b/src/backend/catalog/system_views.sql index 55f6e3711d..96cac0a74e 100644 --- a/src/backend/catalog/system_views.sql +++ b/src/backend/catalog/system_views.sql @@ -1067,9 +1067,6 @@ CREATE VIEW pg_stat_bgwriter AS pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint, pg_stat_get_bgwriter_buf_written_clean() AS buffers_clean, pg_stat_get_bgwriter_maxwritten_clean() AS maxwritten_clean, - pg_stat_get_buf_written_backend() AS buffers_backend, - pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync, - pg_stat_get_buf_alloc() AS buffers_alloc, pg_stat_get_bgwriter_stat_reset_time() AS stats_reset; CREATE VIEW pg_stat_wal AS @@ -1085,6 +1082,17 @@ CREATE VIEW pg_stat_wal AS w.stats_reset FROM pg_stat_get_wal() w; +CREATE VIEW pg_stat_buffer_actions AS +SELECT + b.backend_type, + b.buffers_alloc, + b.buffers_extend, + b.buffers_fsync, + b.buffers_write, + b.buffers_write_strat +FROM pg_stat_get_buffer_actions() b; + + CREATE VIEW pg_stat_progress_analyze AS SELECT S.pid AS pid, S.datid AS datid, D.datname AS datname, diff --git a/src/backend/postmaster/checkpointer.c b/src/backend/postmaster/checkpointer.c index bc9ac7ccfa..db1c6c45c2 100644 --- a/src/backend/postmaster/checkpointer.c +++ b/src/backend/postmaster/checkpointer.c @@ -90,17 +90,8 @@ * requesting backends 
since the last checkpoint start. The flags are * chosen so that OR'ing is the correct way to combine multiple requests. * - * num_backend_writes is used to count the number of buffer writes performed - * by user backend processes. This counter should be wide enough that it - * can't overflow during a single processing cycle. num_backend_fsync - * counts the subset of those writes that also had to do their own fsync, - * because the checkpointer failed to absorb their request. - * * The requests array holds fsync requests sent by backends and not yet * absorbed by the checkpointer. - * - * Unlike the checkpoint fields, num_backend_writes, num_backend_fsync, and - * the requests fields are protected by CheckpointerCommLock. *---------- */ typedef struct @@ -124,9 +115,6 @@ typedef struct ConditionVariable start_cv; /* signaled when ckpt_started advances */ ConditionVariable done_cv; /* signaled when ckpt_done advances */ - uint32 num_backend_writes; /* counts user backend buffer writes */ - uint32 num_backend_fsync; /* counts user backend fsync calls */ - int num_requests; /* current # of requests */ int max_requests; /* allocated array size */ CheckpointerRequest requests[FLEXIBLE_ARRAY_MEMBER]; @@ -1089,10 +1077,6 @@ ForwardSyncRequest(const FileTag *ftag, SyncRequestType type) LWLockAcquire(CheckpointerCommLock, LW_EXCLUSIVE); - /* Count all backend writes regardless of if they fit in the queue */ - if (!AmBackgroundWriterProcess()) - CheckpointerShmem->num_backend_writes++; - /* * If the checkpointer isn't running or the request queue is full, the * backend will have to perform its own fsync request. But before forcing @@ -1106,8 +1090,8 @@ ForwardSyncRequest(const FileTag *ftag, SyncRequestType type) * Count the subset of writes where backends have to do their own * fsync */ - if (!AmBackgroundWriterProcess()) - CheckpointerShmem->num_backend_fsync++; + pgstat_increment_buffer_action(BA_Fsync); + LWLockRelease(CheckpointerCommLock); return false; } @@ -1264,13 +1248,6 @@ AbsorbSyncRequests(void) LWLockAcquire(CheckpointerCommLock, LW_EXCLUSIVE); - /* Transfer stats counts into pending pgstats message */ - BgWriterStats.m_buf_written_backend += CheckpointerShmem->num_backend_writes; - BgWriterStats.m_buf_fsync_backend += CheckpointerShmem->num_backend_fsync; - - CheckpointerShmem->num_backend_writes = 0; - CheckpointerShmem->num_backend_fsync = 0; - /* * We try to avoid holding the lock for a long time by copying the request * array, and processing the requests after releasing the lock. diff --git a/src/backend/postmaster/pgstat.c b/src/backend/postmaster/pgstat.c index 11702f2a80..0db6cd0587 100644 --- a/src/backend/postmaster/pgstat.c +++ b/src/backend/postmaster/pgstat.c @@ -129,6 +129,7 @@ char *pgstat_stat_tmpname = NULL; * without needing to copy things around. We assume these init to zeroes. 
*/ PgStat_MsgBgWriter BgWriterStats; +PgStat_MsgBufferActions BufferActionsStats; PgStat_MsgWal WalStats; /* @@ -348,6 +349,7 @@ static void pgstat_recv_analyze(PgStat_MsgAnalyze *msg, int len); static void pgstat_recv_anl_ancestors(PgStat_MsgAnlAncestors *msg, int len); static void pgstat_recv_archiver(PgStat_MsgArchiver *msg, int len); static void pgstat_recv_bgwriter(PgStat_MsgBgWriter *msg, int len); +static void pgstat_recv_buffer_actions(PgStat_MsgBufferActions *msg, int len); static void pgstat_recv_wal(PgStat_MsgWal *msg, int len); static void pgstat_recv_slru(PgStat_MsgSLRU *msg, int len); static void pgstat_recv_funcstat(PgStat_MsgFuncstat *msg, int len); @@ -3040,6 +3042,16 @@ pgstat_send_bgwriter(void) MemSet(&BgWriterStats, 0, sizeof(BgWriterStats)); } +void +pgstat_send_buffer_actions(void) +{ + pgstat_setheader(&BufferActionsStats.m_hdr, PGSTAT_MTYPE_BUFFER_ACTIONS); + pgstat_send(&BufferActionsStats, sizeof(BufferActionsStats)); + + // TODO: not needed because backends only call this when exiting? + MemSet(&BufferActionsStats, 0, sizeof(BufferActionsStats)); +} + /* ---------- * pgstat_send_wal() - * @@ -3382,6 +3394,10 @@ PgstatCollectorMain(int argc, char *argv[]) pgstat_recv_bgwriter(&msg.msg_bgwriter, len); break; + case PGSTAT_MTYPE_BUFFER_ACTIONS: + pgstat_recv_buffer_actions(&msg.msg_buffer_actions, len); + break; + case PGSTAT_MTYPE_WAL: pgstat_recv_wal(&msg.msg_wal, len); break; @@ -4056,6 +4072,8 @@ pgstat_read_statsfiles(Oid onlydb, bool permanent, bool deep) goto done; } + + /* * We found an existing collector stats file. Read it and put all the * hashtable entries into place. @@ -5352,9 +5370,25 @@ pgstat_recv_bgwriter(PgStat_MsgBgWriter *msg, int len) globalStats.buf_written_checkpoints += msg->m_buf_written_checkpoints; globalStats.buf_written_clean += msg->m_buf_written_clean; globalStats.maxwritten_clean += msg->m_maxwritten_clean; - globalStats.buf_written_backend += msg->m_buf_written_backend; - globalStats.buf_fsync_backend += msg->m_buf_fsync_backend; - globalStats.buf_alloc += msg->m_buf_alloc; +} + + +static void +pgstat_recv_buffer_actions(PgStat_MsgBufferActions *msg, int len) +{ + globalStats.buffer_actions[msg->backend_type].backend_type = msg->backend_type; + globalStats.buffer_actions[msg->backend_type].allocs += msg->allocs; + globalStats.buffer_actions[msg->backend_type].extends += msg->extends; + globalStats.buffer_actions[msg->backend_type].fsyncs += msg->fsyncs; + globalStats.buffer_actions[msg->backend_type].writes += msg->writes; + globalStats.buffer_actions[msg->backend_type].writes_strat += msg->writes_strat; + +} + +PgStat_MsgBufferActions * +pgstat_get_buffer_action_stats(BackendType backend_type) +{ + return &globalStats.buffer_actions[backend_type]; } /* ---------- diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c index 33d99f604a..8bfdf848a4 100644 --- a/src/backend/storage/buffer/bufmgr.c +++ b/src/backend/storage/buffer/bufmgr.c @@ -963,6 +963,11 @@ ReadBuffer_common(SMgrRelation smgr, char relpersistence, ForkNumber forkNum, if (isExtend) { + /* + * Extends counted here are only those that go through shared buffers + */ + pgstat_increment_buffer_action(BA_Extend); + /* new buffers are zero-filled */ MemSet((char *) bufBlock, 0, BLCKSZ); /* don't set checksum for all-zero page */ @@ -1163,6 +1168,7 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum, /* Loop here in case we have to try another victim buffer */ for (;;) { + bool from_ring = false; /* * Ensure, while the 
spinlock's not yet held, that there's a free * refcount entry. @@ -1173,7 +1179,7 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum, * Select a victim buffer. The buffer is returned with its header * spinlock still held! */ - buf = StrategyGetBuffer(strategy, &buf_state); + buf = StrategyGetBuffer(strategy, &buf_state, &from_ring); Assert(BUF_STATE_GET_REFCOUNT(buf_state) == 0); @@ -1210,6 +1216,7 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum, if (LWLockConditionalAcquire(BufferDescriptorGetContentLock(buf), LW_SHARED)) { + BufferActionType buffer_action; /* * If using a nondefault strategy, and writing the buffer * would require a WAL flush, let the strategy decide whether @@ -1227,7 +1234,7 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum, UnlockBufHdr(buf, buf_state); if (XLogNeedsFlush(lsn) && - StrategyRejectBuffer(strategy, buf)) + StrategyRejectBuffer(strategy, buf, &from_ring)) { /* Drop lock/pin and loop around for another buffer */ LWLockRelease(BufferDescriptorGetContentLock(buf)); @@ -1236,6 +1243,20 @@ BufferAlloc(SMgrRelation smgr, char relpersistence, ForkNumber forkNum, } } + /* + * When a strategy is in use, if the dirty buffer was selected + * from the strategy ring and we did not bother checking the + * freelist or doing a clock sweep to look for a clean shared + * buffer to use, the write will be counted as a strategy + * write. However, if the dirty buffer was obtained from the + * freelist or a clock sweep, it is counted as a regular write. + * When a strategy is not in use, at this point, the write can + * only be a "regular" write of a dirty buffer. + */ + + buffer_action = from_ring ? BA_Write_Strat : BA_Write; + pgstat_increment_buffer_action(buffer_action); + /* OK, do the I/O */ TRACE_POSTGRESQL_BUFFER_WRITE_DIRTY_START(forkNum, blockNum, smgr->smgr_rnode.node.spcNode, @@ -2246,9 +2267,6 @@ BgBufferSync(WritebackContext *wb_context) */ strategy_buf_id = StrategySyncStart(&strategy_passes, &recent_alloc); - /* Report buffer alloc counts to pgstat */ - BgWriterStats.m_buf_alloc += recent_alloc; - /* * If we're not running the LRU scan, just stop after doing the stats * stuff. We mark the saved state invalid so that we can recover sanely @@ -2543,6 +2561,8 @@ SyncOneBuffer(int buf_id, bool skip_recently_used, WritebackContext *wb_context) * Pin it, share-lock it, write it. (FlushBuffer will do nothing if the * buffer is clean by the time we've locked it.) */ + pgstat_increment_buffer_action(BA_Write); + PinBuffer_Locked(bufHdr); LWLockAcquire(BufferDescriptorGetContentLock(bufHdr), LW_SHARED); diff --git a/src/backend/storage/buffer/freelist.c b/src/backend/storage/buffer/freelist.c index 6be80476db..e8a8d9f788 100644 --- a/src/backend/storage/buffer/freelist.c +++ b/src/backend/storage/buffer/freelist.c @@ -19,6 +19,7 @@ #include "storage/buf_internals.h" #include "storage/bufmgr.h" #include "storage/proc.h" +#include "utils/backend_status.h" #define INT_ACCESS_ONCE(var) ((int)(*((volatile int *)&(var)))) @@ -198,7 +199,7 @@ have_free_buffer(void) * return the buffer with the buffer header spinlock still held. 
*/ BufferDesc * -StrategyGetBuffer(BufferAccessStrategy strategy, uint32 *buf_state) +StrategyGetBuffer(BufferAccessStrategy strategy, uint32 *buf_state, bool *from_ring) { BufferDesc *buf; int bgwprocno; @@ -213,7 +214,10 @@ StrategyGetBuffer(BufferAccessStrategy strategy, uint32 *buf_state) { buf = GetBufferFromRing(strategy, buf_state); if (buf != NULL) + { + *from_ring = true; return buf; + } } /* @@ -247,6 +251,7 @@ StrategyGetBuffer(BufferAccessStrategy strategy, uint32 *buf_state) * the rate of buffer consumption. Note that buffers recycled by a * strategy object are intentionally not counted here. */ + pgstat_increment_buffer_action(BA_Alloc); pg_atomic_fetch_add_u32(&StrategyControl->numBufferAllocs, 1); /* @@ -683,7 +688,7 @@ AddBufferToRing(BufferAccessStrategy strategy, BufferDesc *buf) * if this buffer should be written and re-used. */ bool -StrategyRejectBuffer(BufferAccessStrategy strategy, BufferDesc *buf) +StrategyRejectBuffer(BufferAccessStrategy strategy, BufferDesc *buf, bool *from_ring) { /* We only do this in bulkread mode */ if (strategy->btype != BAS_BULKREAD) @@ -700,5 +705,12 @@ StrategyRejectBuffer(BufferAccessStrategy strategy, BufferDesc *buf) */ strategy->buffers[strategy->current] = InvalidBuffer; + /* + * Since we will not be writing out a dirty buffer from the ring, set + * from_ring to false so that we do not count this write as a "strategy + * write" and can do proper bookkeeping for pg_stat_buffer_actions. + */ + *from_ring = false; + return true; } diff --git a/src/backend/utils/activity/backend_status.c b/src/backend/utils/activity/backend_status.c index 2901f9f5a9..d720c73e70 100644 --- a/src/backend/utils/activity/backend_status.c +++ b/src/backend/utils/activity/backend_status.c @@ -75,6 +75,7 @@ static MemoryContext backendStatusSnapContext; static void pgstat_beshutdown_hook(int code, Datum arg); static void pgstat_read_current_status(void); static void pgstat_setup_backend_status_context(void); +static void pgstat_record_dead_backend_buffer_actions(void); /* @@ -399,6 +400,11 @@ pgstat_bestart(void) lbeentry.st_progress_command = PROGRESS_COMMAND_INVALID; lbeentry.st_progress_command_target = InvalidOid; lbeentry.st_query_id = UINT64CONST(0); + pg_atomic_init_u64(&lbeentry.buffer_action_stats.allocs, 0); + pg_atomic_init_u64(&lbeentry.buffer_action_stats.extends, 0); + pg_atomic_init_u64(&lbeentry.buffer_action_stats.fsyncs, 0); + pg_atomic_init_u64(&lbeentry.buffer_action_stats.writes, 0); + pg_atomic_init_u64(&lbeentry.buffer_action_stats.writes_strat, 0); /* * we don't zero st_progress_param here to save cycles; nobody should @@ -469,6 +475,11 @@ pgstat_beshutdown_hook(int code, Datum arg) beentry->st_procpid = 0; /* mark invalid */ PGSTAT_END_WRITE_ACTIVITY(beentry); + + // TODO: should this go in pgstat_report_stat() instead + // TODO: should this check be here? Is it possible that members were zero-initialized if database ID is not valid? 
+ if (OidIsValid(MyDatabaseId)) + pgstat_record_dead_backend_buffer_actions(); } /* @@ -1041,6 +1052,57 @@ pgstat_get_my_query_id(void) */ return MyBEEntry->st_query_id; } +void +pgstat_increment_buffer_action(BufferActionType ba_type) +{ + volatile PgBackendStatus *beentry = MyBEEntry; + + if (!beentry || !pgstat_track_activities) + return; + + if (ba_type == BA_Alloc) + pg_atomic_write_u64(&beentry->buffer_action_stats.allocs, pg_atomic_read_u64(&beentry->buffer_action_stats.allocs) + 1); + else if (ba_type == BA_Extend) + pg_atomic_write_u64(&beentry->buffer_action_stats.extends, pg_atomic_read_u64(&beentry->buffer_action_stats.extends) + 1); + else if (ba_type == BA_Fsync) + pg_atomic_write_u64(&beentry->buffer_action_stats.fsyncs, pg_atomic_read_u64(&beentry->buffer_action_stats.fsyncs) + 1); + else if (ba_type == BA_Write) + pg_atomic_write_u64(&beentry->buffer_action_stats.writes, pg_atomic_read_u64(&beentry->buffer_action_stats.writes) + 1); + else if (ba_type == BA_Write_Strat) + pg_atomic_write_u64(&beentry->buffer_action_stats.writes_strat, pg_atomic_read_u64(&beentry->buffer_action_stats.writes_strat) + 1); + +} + +/* + * Called for a single backend at the time of death to persist its I/O stats + */ +void +pgstat_record_dead_backend_buffer_actions(void) +{ + volatile PgBackendStatus *beentry = MyBEEntry; + + if (beentry->st_procpid != 0) + return; + + // TODO: should I add this or just set it -- seems like it would only happen once - + BufferActionsStats.backend_type = beentry->st_backendType; + BufferActionsStats.allocs = pg_atomic_read_u64(&beentry->buffer_action_stats.allocs); + BufferActionsStats.extends = pg_atomic_read_u64(&beentry->buffer_action_stats.extends); + BufferActionsStats.fsyncs = pg_atomic_read_u64(&beentry->buffer_action_stats.fsyncs); + BufferActionsStats.writes = pg_atomic_read_u64(&beentry->buffer_action_stats.writes); + BufferActionsStats.writes_strat = pg_atomic_read_u64(&beentry->buffer_action_stats.writes_strat); + pgstat_send_buffer_actions(); +} + +// TODO: this is clearly no good, but I'm not sure if I have to/want to/can use +// the below pgstat_fetch_stat_beentry and doing the loop that is in +// pg_stat_get_buffer_actions() into this file will likely mean having to pass a +// two-dimensional array as a parameter which is unappealing to me +volatile PgBackendStatus * +pgstat_access_backend_status_array(void) +{ + return BackendStatusArray; +} /* ---------- diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c index f0e09eae4d..163679f60b 100644 --- a/src/backend/utils/adt/pgstatfuncs.c +++ b/src/backend/utils/adt/pgstatfuncs.c @@ -1780,21 +1780,88 @@ pg_stat_get_bgwriter_stat_reset_time(PG_FUNCTION_ARGS) } Datum -pg_stat_get_buf_written_backend(PG_FUNCTION_ARGS) +pg_stat_get_buffer_actions(PG_FUNCTION_ARGS) { - PG_RETURN_INT64(pgstat_fetch_global()->buf_written_backend); -} + ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo; + TupleDesc tupdesc; + Tuplestorestate *tupstore; + MemoryContext per_query_ctx; + MemoryContext oldcontext; + PgStat_MsgBufferActions *buffer_actions; + int i; + volatile PgBackendStatus *beentry; + Datum all_values[BACKEND_NUM_TYPES][BUFFER_ACTION_NUM_TYPES]; + bool all_nulls[BACKEND_NUM_TYPES][BUFFER_ACTION_NUM_TYPES]; -Datum -pg_stat_get_buf_fsync_backend(PG_FUNCTION_ARGS) -{ - PG_RETURN_INT64(pgstat_fetch_global()->buf_fsync_backend); -} + /* Build a tuple descriptor for our result type */ + if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE) + elog(ERROR, 
"return type must be a row type"); -Datum -pg_stat_get_buf_alloc(PG_FUNCTION_ARGS) -{ - PG_RETURN_INT64(pgstat_fetch_global()->buf_alloc); + per_query_ctx = rsinfo->econtext->ecxt_per_query_memory; + oldcontext = MemoryContextSwitchTo(per_query_ctx); + + tupstore = tuplestore_begin_heap(true, false, work_mem); + rsinfo->returnMode = SFRM_Materialize; + rsinfo->setResult = tupstore; + rsinfo->setDesc = tupdesc; + + MemoryContextSwitchTo(oldcontext); + pgstat_fetch_global(); + for (i = 1; i < BACKEND_NUM_TYPES; i++) + { + Datum *values = all_values[i]; + bool *nulls = all_nulls[i]; + + MemSet(values, 0, sizeof(Datum[BUFFER_ACTION_NUM_TYPES])); + MemSet(nulls, 0, sizeof(Datum[BUFFER_ACTION_NUM_TYPES])); + + values[0] = CStringGetTextDatum(GetBackendTypeDesc(i)); + /* + * Add stats from all exited backends + */ + buffer_actions = pgstat_get_buffer_action_stats(i); + + values[BA_Alloc] += buffer_actions->allocs; + values[BA_Extend] += buffer_actions->extends; + values[BA_Fsync] += buffer_actions->fsyncs; + values[BA_Write] += buffer_actions->writes; + values[BA_Write_Strat] += buffer_actions->writes_strat; + } + + /* + * Loop through all live backends and count their buffer actions + */ + + beentry = pgstat_access_backend_status_array(); + for (i = 0; i <= MaxBackends; i++) + { + Datum *values; + beentry++; + /* Don't count dead backends. They should already be counted */ + if (beentry->st_procpid == 0) + continue; + values = all_values[beentry->st_backendType]; + + + values[BA_Alloc] += pg_atomic_read_u64(&beentry->buffer_action_stats.allocs); + values[BA_Extend] += pg_atomic_read_u64(&beentry->buffer_action_stats.extends); + values[BA_Fsync] += pg_atomic_read_u64(&beentry->buffer_action_stats.fsyncs); + values[BA_Write] += pg_atomic_read_u64(&beentry->buffer_action_stats.writes); + values[BA_Write_Strat] += pg_atomic_read_u64(&beentry->buffer_action_stats.writes_strat); + + } + + for (i = 1; i < BACKEND_NUM_TYPES; i++) + { + Datum *values = all_values[i]; + bool *nulls = all_nulls[i]; + tuplestore_putvalues(tupstore, tupdesc, values, nulls); + } + + /* clean up and return the tuplestore */ + tuplestore_donestoring(tupstore); + + return (Datum) 0; } /* diff --git a/src/backend/utils/init/miscinit.c b/src/backend/utils/init/miscinit.c index 8b73850d0d..d0923407ff 100644 --- a/src/backend/utils/init/miscinit.c +++ b/src/backend/utils/init/miscinit.c @@ -277,6 +277,8 @@ GetBackendTypeDesc(BackendType backendType) case B_LOGGER: backendDesc = "logger"; break; + case BACKEND_NUM_TYPES: + break; } return backendDesc; diff --git a/src/include/catalog/pg_proc.dat b/src/include/catalog/pg_proc.dat index 8cd0252082..32257dcde8 100644 --- a/src/include/catalog/pg_proc.dat +++ b/src/include/catalog/pg_proc.dat @@ -5565,18 +5565,15 @@ proname => 'pg_stat_get_checkpoint_sync_time', provolatile => 's', proparallel => 'r', prorettype => 'float8', proargtypes => '', prosrc => 'pg_stat_get_checkpoint_sync_time' }, -{ oid => '2775', descr => 'statistics: number of buffers written by backends', - proname => 'pg_stat_get_buf_written_backend', provolatile => 's', - proparallel => 'r', prorettype => 'int8', proargtypes => '', - prosrc => 'pg_stat_get_buf_written_backend' }, -{ oid => '3063', - descr => 'statistics: number of backend buffer writes that did their own fsync', - proname => 'pg_stat_get_buf_fsync_backend', provolatile => 's', - proparallel => 'r', prorettype => 'int8', proargtypes => '', - prosrc => 'pg_stat_get_buf_fsync_backend' }, -{ oid => '2859', descr => 'statistics: number of buffer allocations', 
- proname => 'pg_stat_get_buf_alloc', provolatile => 's', proparallel => 'r', - prorettype => 'int8', proargtypes => '', prosrc => 'pg_stat_get_buf_alloc' }, + + { oid => '8459', descr => 'statistics: counts of buffer actions taken by each backend type', + proname => 'pg_stat_get_buffer_actions', provolatile => 's', proisstrict => 'f', + prorows => '13', proretset => 't', + proparallel => 'r', prorettype => 'record', proargtypes => '', + proallargtypes => '{text,int8,int8,int8,int8,int8}', + proargmodes => '{o,o,o,o,o,o}', + proargnames => '{backend_type,buffers_alloc,buffers_extend,buffers_fsync,buffers_write,buffers_write_strat}', + prosrc => 'pg_stat_get_buffer_actions' }, { oid => '1136', descr => 'statistics: information about WAL activity', proname => 'pg_stat_get_wal', proisstrict => 'f', provolatile => 's', diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h index 68d840d699..74b18dad0f 100644 --- a/src/include/miscadmin.h +++ b/src/include/miscadmin.h @@ -336,8 +336,20 @@ typedef enum BackendType B_ARCHIVER, B_STATS_COLLECTOR, B_LOGGER, + BACKEND_NUM_TYPES, } BackendType; +typedef enum BufferActionType +{ + BA_Invalid = 0, + BA_Alloc, + BA_Extend, + BA_Fsync, + BA_Write, + BA_Write_Strat, + BUFFER_ACTION_NUM_TYPES, +} BufferActionType; + extern BackendType MyBackendType; extern const char *GetBackendTypeDesc(BackendType backendType); diff --git a/src/include/pgstat.h b/src/include/pgstat.h index 9612c0a6c2..ee545a9d63 100644 --- a/src/include/pgstat.h +++ b/src/include/pgstat.h @@ -72,6 +72,7 @@ typedef enum StatMsgType PGSTAT_MTYPE_ANL_ANCESTORS, PGSTAT_MTYPE_ARCHIVER, PGSTAT_MTYPE_BGWRITER, + PGSTAT_MTYPE_BUFFER_ACTIONS, PGSTAT_MTYPE_WAL, PGSTAT_MTYPE_SLRU, PGSTAT_MTYPE_FUNCSTAT, @@ -475,13 +476,22 @@ typedef struct PgStat_MsgBgWriter PgStat_Counter m_buf_written_checkpoints; PgStat_Counter m_buf_written_clean; PgStat_Counter m_maxwritten_clean; - PgStat_Counter m_buf_written_backend; - PgStat_Counter m_buf_fsync_backend; - PgStat_Counter m_buf_alloc; PgStat_Counter m_checkpoint_write_time; /* times in milliseconds */ PgStat_Counter m_checkpoint_sync_time; } PgStat_MsgBgWriter; +typedef struct PgStat_MsgBufferActions +{ + PgStat_MsgHdr m_hdr; + + BackendType backend_type; + uint64 allocs; + uint64 extends; + uint64 fsyncs; + uint64 writes; + uint64 writes_strat; +} PgStat_MsgBufferActions; + /* ---------- * PgStat_MsgWal Sent by backends and background processes to update WAL statistics. 
* ---------- @@ -700,6 +710,7 @@ typedef union PgStat_Msg PgStat_MsgAnlAncestors msg_anl_ancestors; PgStat_MsgArchiver msg_archiver; PgStat_MsgBgWriter msg_bgwriter; + PgStat_MsgBufferActions msg_buffer_actions; PgStat_MsgWal msg_wal; PgStat_MsgSLRU msg_slru; PgStat_MsgFuncstat msg_funcstat; @@ -854,9 +865,7 @@ typedef struct PgStat_GlobalStats PgStat_Counter buf_written_checkpoints; PgStat_Counter buf_written_clean; PgStat_Counter maxwritten_clean; - PgStat_Counter buf_written_backend; - PgStat_Counter buf_fsync_backend; - PgStat_Counter buf_alloc; + PgStat_MsgBufferActions buffer_actions[BACKEND_NUM_TYPES]; TimestampTz stat_reset_timestamp; } PgStat_GlobalStats; @@ -941,6 +950,8 @@ extern char *pgstat_stat_filename; */ extern PgStat_MsgBgWriter BgWriterStats; +extern PgStat_MsgBufferActions BufferActionsStats; + /* * WAL statistics counter is updated by backends and background processes */ @@ -1091,6 +1102,9 @@ extern void pgstat_twophase_postabort(TransactionId xid, uint16 info, extern void pgstat_send_archiver(const char *xlog, bool failed); extern void pgstat_send_bgwriter(void); +extern void pgstat_send_buffer_actions(void); + +extern PgStat_MsgBufferActions * pgstat_get_buffer_action_stats(BackendType backend_type); extern void pgstat_send_wal(bool force); /* ---------- diff --git a/src/include/storage/buf_internals.h b/src/include/storage/buf_internals.h index 33fcaf5c9a..7e385135db 100644 --- a/src/include/storage/buf_internals.h +++ b/src/include/storage/buf_internals.h @@ -310,10 +310,10 @@ extern void ScheduleBufferTagForWriteback(WritebackContext *context, BufferTag * /* freelist.c */ extern BufferDesc *StrategyGetBuffer(BufferAccessStrategy strategy, - uint32 *buf_state); + uint32 *buf_state, bool *from_ring); extern void StrategyFreeBuffer(BufferDesc *buf); extern bool StrategyRejectBuffer(BufferAccessStrategy strategy, - BufferDesc *buf); + BufferDesc *buf, bool *from_ring); extern int StrategySyncStart(uint32 *complete_passes, uint32 *num_buf_alloc); extern void StrategyNotifyBgWriter(int bgwprocno); diff --git a/src/include/utils/backend_status.h b/src/include/utils/backend_status.h index 8042b817df..0aeac79184 100644 --- a/src/include/utils/backend_status.h +++ b/src/include/utils/backend_status.h @@ -13,6 +13,7 @@ #include "datatype/timestamp.h" #include "libpq/pqcomm.h" #include "miscadmin.h" /* for BackendType */ +#include "port/atomics.h" #include "utils/backend_progress.h" @@ -79,6 +80,15 @@ typedef struct PgBackendGSSStatus } PgBackendGSSStatus; +typedef struct PgBackendBufferActionStats +{ + pg_atomic_uint64 allocs; + pg_atomic_uint64 extends; + pg_atomic_uint64 fsyncs; + pg_atomic_uint64 writes; + pg_atomic_uint64 writes_strat; +} PgBackendBufferActionStats; + /* ---------- * PgBackendStatus @@ -168,6 +178,7 @@ typedef struct PgBackendStatus /* query identifier, optionally computed using post_parse_analyze_hook */ uint64 st_query_id; + PgBackendBufferActionStats buffer_action_stats; } PgBackendStatus; @@ -282,7 +293,7 @@ extern PGDLLIMPORT PgBackendStatus *MyBEEntry; */ extern Size BackendStatusShmemSize(void); extern void CreateSharedBackendStatus(void); - +extern void CreateBufferActionStatsCounters(void); /* ---------- * Functions called from backends @@ -305,7 +316,9 @@ extern const char *pgstat_get_backend_current_activity(int pid, bool checkUser); extern const char *pgstat_get_crashed_backend_activity(int pid, char *buffer, int buflen); extern uint64 pgstat_get_my_query_id(void); +extern void pgstat_increment_buffer_action(BufferActionType ba_type); +extern 
volatile PgBackendStatus *pgstat_access_backend_status_array(void); /* ---------- * Support functions for the SQL-callable functions to diff --git a/src/test/regress/expected/rules.out b/src/test/regress/expected/rules.out index e5ab11275d..609ccf3b7b 100644 --- a/src/test/regress/expected/rules.out +++ b/src/test/regress/expected/rules.out @@ -1824,10 +1824,14 @@ pg_stat_bgwriter| SELECT pg_stat_get_bgwriter_timed_checkpoints() AS checkpoints pg_stat_get_bgwriter_buf_written_checkpoints() AS buffers_checkpoint, pg_stat_get_bgwriter_buf_written_clean() AS buffers_clean, pg_stat_get_bgwriter_maxwritten_clean() AS maxwritten_clean, - pg_stat_get_buf_written_backend() AS buffers_backend, - pg_stat_get_buf_fsync_backend() AS buffers_backend_fsync, - pg_stat_get_buf_alloc() AS buffers_alloc, pg_stat_get_bgwriter_stat_reset_time() AS stats_reset; +pg_stat_buffer_actions| SELECT b.backend_type, + b.buffers_alloc, + b.buffers_extend, + b.buffers_fsync, + b.buffers_write, + b.buffers_write_strat + FROM pg_stat_get_buffer_actions() b(backend_type, buffers_alloc, buffers_extend, buffers_fsync, buffers_write, buffers_write_strat); pg_stat_database| SELECT d.oid AS datid, d.datname, CASE diff --git a/src/test/regress/sql/stats.sql b/src/test/regress/sql/stats.sql index feaaee6326..fb4b613d4b 100644 --- a/src/test/regress/sql/stats.sql +++ b/src/test/regress/sql/stats.sql @@ -176,4 +176,5 @@ FROM prevstats AS pr; DROP TABLE trunc_stats_test, trunc_stats_test1, trunc_stats_test2, trunc_stats_test3, trunc_stats_test4; DROP TABLE prevstats; +SELECT * FROM pg_stat_buffer_actions; -- End of Stats Test -- 2.27.0