The attached patch is a first cut at what moving one contrib module (in
this case pg_buffercache) to a new directory structure might look like.
The idea is that src/extension could become a place for "first-class"
extensions to live. Those are the ones the community is committed to
providing in core, but which are just better implemented as extensions
than as in-database functions, for reasons that include security. This
idea has been shared by a lot of people for a while; the only problem
is that it wasn't really practical to implement cleanly until the
extensions code landed. I think it is now, and this patch attempts to
prove that.
Since patches involving file renames are clunky, the changes might be
easier to see at
https://github.com/greg2ndQuadrant/postgres/commit/507923e21e963c873a84f1b850d64e895776574f
which is where I just pushed this change. The install step for the
module now looks like this:
gsmith@grace:~/pgwork/src/move-contrib/src/extension/pg_buffercache$
make install
/bin/mkdir -p '/home/gsmith/pgwork/inst/move-contrib/lib/postgresql'
/bin/mkdir -p '/home/gsmith/pgwork/inst/move-contrib/share/postgresql/extension'
/bin/sh ../../../config/install-sh -c -m 755 pg_buffercache.so '/home/gsmith/pgwork/inst/move-contrib/lib/postgresql/pg_buffercache.so'
/bin/sh ../../../config/install-sh -c -m 644 ./pg_buffercache.control '/home/gsmith/pgwork/inst/move-contrib/share/postgresql/extension/'
/bin/sh ../../../config/install-sh -c -m 644 ./pg_buffercache--1.0.sql ./pg_buffercache--unpackaged--1.0.sql '/home/gsmith/pgwork/inst/move-contrib/share/postgresql/extension/'
$ psql -c "create extension pg_buffercache"
CREATE EXTENSION
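Once installed this way, the extension behaves exactly as before. As a
quick sanity check (just an illustration of typical pg_buffercache
usage, not part of the patch), a query along these lines shows which
relations in the current database are holding the most shared buffers:

-- Top relations by number of shared buffers held, current database only
SELECT c.relname, count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c ON b.relfilenode = pg_relation_filenode(c.oid)
 WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                             WHERE datname = current_database()))
 GROUP BY c.relname
 ORDER BY 2 DESC
 LIMIT 10;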
The only clunky bit I wasn't really happy with is the amount of code
duplication that comes from having a src/extension/Makefile that looks
almost, but not quite, identical to contrib/Makefile. The rest of the
changes don't seem too bad to me, and even that duplication is really
only 36 lines that aren't touched often. Yes, the paths are different,
so backports won't happen without an extra step. But the code changes
required were easier than I was expecting, thanks to the generally good
modularity of the extensions infrastructure. As long as the result ends
up in share/postgresql/extension/, whether the files started out in
contrib/<module> or src/extension/<module> doesn't really matter to
CREATE EXTENSION. But having them broken out this way makes it easy for
the default Makefile to build and install them all. (I recognize I
haven't done that last step yet, though.)
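For example (an illustration only, not part of the patch), once the
control file is in place the extension is visible to the server the
same way regardless of which source directory it was built from:

-- CREATE EXTENSION only looks at what is installed under
-- share/postgresql/extension/, not at the source tree layout
SELECT name, default_version, comment
  FROM pg_available_extensions
 WHERE name = 'pg_buffercache';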
I'll happily go convert pgstattuple and the rest of the internal
diagnostics modules to this scheme, and do the doc cleanups, this
upcoming week if it means I'll one day be able to use those things
without installing all of contrib. Ditto for proposing RPM and Debian
packaging changes to match. All that work will be paid back the first
time I don't have to fill out a bunch of paperwork (again) at a customer
site justifying why they need to install the contrib [RPM|deb] package
(which has some scary stuff in it) on all their servers, just so I can
get at some bloat or buffer inspection module.
--
Greg Smith 2ndQuadrant US g...@2ndquadrant.com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
diff --git a/contrib/Makefile b/contrib/Makefile
index 6967767..04cf330 100644
--- a/contrib/Makefile
+++ b/contrib/Makefile
@@ -30,7 +30,6 @@ SUBDIRS = \
pageinspect \
passwordcheck \
pg_archivecleanup \
- pg_buffercache \
pg_freespacemap \
pg_standby \
pg_stat_statements \
diff --git a/contrib/pg_buffercache/Makefile b/contrib/pg_buffercache/Makefile
deleted file mode 100644
index 323c0ac..0000000
--- a/contrib/pg_buffercache/Makefile
+++ /dev/null
@@ -1,18 +0,0 @@
-# contrib/pg_buffercache/Makefile
-
-MODULE_big = pg_buffercache
-OBJS = pg_buffercache_pages.o
-
-EXTENSION = pg_buffercache
-DATA = pg_buffercache--1.0.sql pg_buffercache--unpackaged--1.0.sql
-
-ifdef USE_PGXS
-PG_CONFIG = pg_config
-PGXS := $(shell $(PG_CONFIG) --pgxs)
-include $(PGXS)
-else
-subdir = contrib/pg_buffercache
-top_builddir = ../..
-include $(top_builddir)/src/Makefile.global
-include $(top_srcdir)/contrib/contrib-global.mk
-endif
diff --git a/contrib/pg_buffercache/pg_buffercache--1.0.sql b/contrib/pg_buffercache/pg_buffercache--1.0.sql
deleted file mode 100644
index 9407d21..0000000
--- a/contrib/pg_buffercache/pg_buffercache--1.0.sql
+++ /dev/null
@@ -1,17 +0,0 @@
-/* contrib/pg_buffercache/pg_buffercache--1.0.sql */
-
--- Register the function.
-CREATE FUNCTION pg_buffercache_pages()
-RETURNS SETOF RECORD
-AS 'MODULE_PATHNAME', 'pg_buffercache_pages'
-LANGUAGE C;
-
--- Create a view for convenient access.
-CREATE VIEW pg_buffercache AS
- SELECT P.* FROM pg_buffercache_pages() AS P
- (bufferid integer, relfilenode oid, reltablespace oid, reldatabase oid,
- relforknumber int2, relblocknumber int8, isdirty bool, usagecount int2);
-
--- Don't want these to be available to public.
-REVOKE ALL ON FUNCTION pg_buffercache_pages() FROM PUBLIC;
-REVOKE ALL ON pg_buffercache FROM PUBLIC;
diff --git a/contrib/pg_buffercache/pg_buffercache--unpackaged--1.0.sql b/contrib/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
deleted file mode 100644
index f00a954..0000000
--- a/contrib/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
+++ /dev/null
@@ -1,4 +0,0 @@
-/* contrib/pg_buffercache/pg_buffercache--unpackaged--1.0.sql */
-
-ALTER EXTENSION pg_buffercache ADD function pg_buffercache_pages();
-ALTER EXTENSION pg_buffercache ADD view pg_buffercache;
diff --git a/contrib/pg_buffercache/pg_buffercache.control b/contrib/pg_buffercache/pg_buffercache.control
deleted file mode 100644
index 709513c..0000000
--- a/contrib/pg_buffercache/pg_buffercache.control
+++ /dev/null
@@ -1,5 +0,0 @@
-# pg_buffercache extension
-comment = 'examine the shared buffer cache'
-default_version = '1.0'
-module_pathname = '$libdir/pg_buffercache'
-relocatable = true
diff --git a/contrib/pg_buffercache/pg_buffercache_pages.c b/contrib/pg_buffercache/pg_buffercache_pages.c
deleted file mode 100644
index ed88288..0000000
--- a/contrib/pg_buffercache/pg_buffercache_pages.c
+++ /dev/null
@@ -1,219 +0,0 @@
-/*-------------------------------------------------------------------------
- *
- * pg_buffercache_pages.c
- * display some contents of the buffer cache
- *
- * contrib/pg_buffercache/pg_buffercache_pages.c
- *-------------------------------------------------------------------------
- */
-#include "postgres.h"
-
-#include "access/heapam.h"
-#include "catalog/pg_type.h"
-#include "funcapi.h"
-#include "storage/buf_internals.h"
-#include "storage/bufmgr.h"
-#include "utils/relcache.h"
-
-
-#define NUM_BUFFERCACHE_PAGES_ELEM 8
-
-PG_MODULE_MAGIC;
-
-Datum pg_buffercache_pages(PG_FUNCTION_ARGS);
-
-
-/*
- * Record structure holding the to be exposed cache data.
- */
-typedef struct
-{
- uint32 bufferid;
- Oid relfilenode;
- Oid reltablespace;
- Oid reldatabase;
- ForkNumber forknum;
- BlockNumber blocknum;
- bool isvalid;
- bool isdirty;
- uint16 usagecount;
-} BufferCachePagesRec;
-
-
-/*
- * Function context for data persisting over repeated calls.
- */
-typedef struct
-{
- TupleDesc tupdesc;
- BufferCachePagesRec *record;
-} BufferCachePagesContext;
-
-
-/*
- * Function returning data from the shared buffer cache - buffer number,
- * relation node/tablespace/database/blocknum and dirty indicator.
- */
-PG_FUNCTION_INFO_V1(pg_buffercache_pages);
-
-Datum
-pg_buffercache_pages(PG_FUNCTION_ARGS)
-{
- FuncCallContext *funcctx;
- Datum result;
- MemoryContext oldcontext;
- BufferCachePagesContext *fctx; /* User function context. */
- TupleDesc tupledesc;
- HeapTuple tuple;
-
- if (SRF_IS_FIRSTCALL())
- {
- int i;
- volatile BufferDesc *bufHdr;
-
- funcctx = SRF_FIRSTCALL_INIT();
-
- /* Switch context when allocating stuff to be used in later calls */
- oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
-
- /* Create a user function context for cross-call persistence */
- fctx = (BufferCachePagesContext *) palloc(sizeof(BufferCachePagesContext));
-
- /* Construct a tuple descriptor for the result rows. */
- tupledesc = CreateTemplateTupleDesc(NUM_BUFFERCACHE_PAGES_ELEM, false);
- TupleDescInitEntry(tupledesc, (AttrNumber) 1, "bufferid",
- INT4OID, -1, 0);
- TupleDescInitEntry(tupledesc, (AttrNumber) 2, "relfilenode",
- OIDOID, -1, 0);
- TupleDescInitEntry(tupledesc, (AttrNumber) 3, "reltablespace",
- OIDOID, -1, 0);
- TupleDescInitEntry(tupledesc, (AttrNumber) 4, "reldatabase",
- OIDOID, -1, 0);
- TupleDescInitEntry(tupledesc, (AttrNumber) 5, "relforknumber",
- INT2OID, -1, 0);
- TupleDescInitEntry(tupledesc, (AttrNumber) 6, "relblocknumber",
- INT8OID, -1, 0);
- TupleDescInitEntry(tupledesc, (AttrNumber) 7, "isdirty",
- BOOLOID, -1, 0);
- TupleDescInitEntry(tupledesc, (AttrNumber) 8, "usage_count",
- INT2OID, -1, 0);
-
- fctx->tupdesc = BlessTupleDesc(tupledesc);
-
- /* Allocate NBuffers worth of BufferCachePagesRec records. */
- fctx->record = (BufferCachePagesRec *) palloc(sizeof(BufferCachePagesRec) * NBuffers);
-
- /* Set max calls and remember the user function context. */
- funcctx->max_calls = NBuffers;
- funcctx->user_fctx = fctx;
-
- /* Return to original context when allocating transient memory */
- MemoryContextSwitchTo(oldcontext);
-
- /*
- * To get a consistent picture of the buffer state, we must lock all
- * partitions of the buffer map. Needless to say, this is horrible
- * for concurrency. Must grab locks in increasing order to avoid
- * possible deadlocks.
- */
- for (i = 0; i < NUM_BUFFER_PARTITIONS; i++)
- LWLockAcquire(FirstBufMappingLock + i, LW_SHARED);
-
- /*
- * Scan though all the buffers, saving the relevant fields in the
- * fctx->record structure.
- */
- for (i = 0, bufHdr = BufferDescriptors; i < NBuffers; i++, bufHdr++)
- {
- /* Lock each buffer header before inspecting. */
- LockBufHdr(bufHdr);
-
- fctx->record[i].bufferid = BufferDescriptorGetBuffer(bufHdr);
- fctx->record[i].relfilenode = bufHdr->tag.rnode.relNode;
- fctx->record[i].reltablespace = bufHdr->tag.rnode.spcNode;
- fctx->record[i].reldatabase = bufHdr->tag.rnode.dbNode;
- fctx->record[i].forknum = bufHdr->tag.forkNum;
- fctx->record[i].blocknum = bufHdr->tag.blockNum;
- fctx->record[i].usagecount = bufHdr->usage_count;
-
- if (bufHdr->flags & BM_DIRTY)
- fctx->record[i].isdirty = true;
- else
- fctx->record[i].isdirty = false;
-
- /* Note if the buffer is valid, and has storage created */
- if ((bufHdr->flags & BM_VALID) && (bufHdr->flags & BM_TAG_VALID))
- fctx->record[i].isvalid = true;
- else
- fctx->record[i].isvalid = false;
-
- UnlockBufHdr(bufHdr);
- }
-
- /*
- * And release locks. We do this in reverse order for two reasons:
- * (1) Anyone else who needs more than one of the locks will be trying
- * to lock them in increasing order; we don't want to release the
- * other process until it can get all the locks it needs. (2) This
- * avoids O(N^2) behavior inside LWLockRelease.
- */
- for (i = NUM_BUFFER_PARTITIONS; --i >= 0;)
- LWLockRelease(FirstBufMappingLock + i);
- }
-
- funcctx = SRF_PERCALL_SETUP();
-
- /* Get the saved state */
- fctx = funcctx->user_fctx;
-
- if (funcctx->call_cntr < funcctx->max_calls)
- {
- uint32 i = funcctx->call_cntr;
- Datum values[NUM_BUFFERCACHE_PAGES_ELEM];
- bool nulls[NUM_BUFFERCACHE_PAGES_ELEM];
-
- values[0] = Int32GetDatum(fctx->record[i].bufferid);
- nulls[0] = false;
-
- /*
- * Set all fields except the bufferid to null if the buffer is unused
- * or not valid.
- */
- if (fctx->record[i].blocknum == InvalidBlockNumber ||
- fctx->record[i].isvalid == false)
- {
- nulls[1] = true;
- nulls[2] = true;
- nulls[3] = true;
- nulls[4] = true;
- nulls[5] = true;
- nulls[6] = true;
- nulls[7] = true;
- }
- else
- {
- values[1] = ObjectIdGetDatum(fctx->record[i].relfilenode);
- nulls[1] = false;
- values[2] = ObjectIdGetDatum(fctx->record[i].reltablespace);
- nulls[2] = false;
- values[3] = ObjectIdGetDatum(fctx->record[i].reldatabase);
- nulls[3] = false;
- values[4] = ObjectIdGetDatum(fctx->record[i].forknum);
- nulls[4] = false;
- values[5] = Int64GetDatum((int64) fctx->record[i].blocknum);
- nulls[5] = false;
- values[6] = BoolGetDatum(fctx->record[i].isdirty);
- nulls[6] = false;
- values[7] = Int16GetDatum(fctx->record[i].usagecount);
- nulls[7] = false;
- }
-
- /* Build and return the tuple. */
- tuple = heap_form_tuple(fctx->tupdesc, values, nulls);
- result = HeapTupleGetDatum(tuple);
-
- SRF_RETURN_NEXT(funcctx, result);
- }
- else
- SRF_RETURN_DONE(funcctx);
-}
diff --git a/src/extension/Makefile b/src/extension/Makefile
new file mode 100644
index 0000000..5d7a86a
--- /dev/null
+++ b/src/extension/Makefile
@@ -0,0 +1,36 @@
+# $PostgreSQL: pgsql/src/extension/Makefile $
+
+subdir = src/extension
+top_builddir = ../..
+include $(top_builddir)/src/Makefile.global
+
+SUBDIRS = \
+ pg_buffercache
+
+ifeq ($(with_openssl),yes)
+SUBDIRS += sslinfo
+endif
+
+ifeq ($(with_ossp_uuid),yes)
+SUBDIRS += uuid-ossp
+endif
+
+ifeq ($(with_libxml),yes)
+SUBDIRS += xml2
+endif
+
+# Missing:
+# start-scripts \ (does not have a makefile)
+
+
+all install installdirs uninstall distprep clean distclean maintainer-clean:
+ @for dir in $(SUBDIRS); do \
+ $(MAKE) -C $$dir $@ || exit; \
+ done
+
+# We'd like check operations to run all the subtests before failing.
+check installcheck:
+ @CHECKERR=0; for dir in $(SUBDIRS); do \
+ $(MAKE) -C $$dir $@ || CHECKERR=$$?; \
+ done; \
+ exit $$CHECKERR
diff --git a/src/extension/extension-global.mk b/src/extension/extension-global.mk
new file mode 100644
index 0000000..cc7643b
--- /dev/null
+++ b/src/extension/extension-global.mk
@@ -0,0 +1,5 @@
+# $PostgreSQL: pgsql/extension/extension-global.mk,v 1.10 2005/09/27 17:43:31 tgl Exp $
+
+NO_PGXS = 1
+MODULEDIR=extension
+include $(top_srcdir)/src/makefiles/pgxs.mk
diff --git a/src/extension/pg_buffercache/Makefile b/src/extension/pg_buffercache/Makefile
new file mode 100644
index 0000000..70d857f
--- /dev/null
+++ b/src/extension/pg_buffercache/Makefile
@@ -0,0 +1,19 @@
+# src/extension/pg_buffercache/Makefile
+
+MODULE_big = pg_buffercache
+OBJS = pg_buffercache_pages.o
+MODULEDIR=extension
+
+EXTENSION = pg_buffercache
+DATA = pg_buffercache--1.0.sql pg_buffercache--unpackaged--1.0.sql
+
+ifdef USE_PGXS
+PG_CONFIG = pg_config
+PGXS := $(shell $(PG_CONFIG) --pgxs)
+include $(PGXS)
+else
+subdir = src/extension/pg_buffercache
+top_builddir = ../../..
+include $(top_builddir)/src/Makefile.global
+include $(top_srcdir)/src/extension/extension-global.mk
+endif
diff --git a/src/extension/pg_buffercache/pg_buffercache--1.0.sql b/src/extension/pg_buffercache/pg_buffercache--1.0.sql
new file mode 100644
index 0000000..ceca6ae
--- /dev/null
+++ b/src/extension/pg_buffercache/pg_buffercache--1.0.sql
@@ -0,0 +1,17 @@
+/* src/extension/pg_buffercache/pg_buffercache--1.0.sql */
+
+-- Register the function.
+CREATE FUNCTION pg_buffercache_pages()
+RETURNS SETOF RECORD
+AS 'MODULE_PATHNAME', 'pg_buffercache_pages'
+LANGUAGE C;
+
+-- Create a view for convenient access.
+CREATE VIEW pg_buffercache AS
+ SELECT P.* FROM pg_buffercache_pages() AS P
+ (bufferid integer, relfilenode oid, reltablespace oid, reldatabase oid,
+ relforknumber int2, relblocknumber int8, isdirty bool, usagecount int2);
+
+-- Don't want these to be available to public.
+REVOKE ALL ON FUNCTION pg_buffercache_pages() FROM PUBLIC;
+REVOKE ALL ON pg_buffercache FROM PUBLIC;
diff --git a/src/extension/pg_buffercache/pg_buffercache--unpackaged--1.0.sql b/src/extension/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
new file mode 100644
index 0000000..0cfa317
--- /dev/null
+++ b/src/extension/pg_buffercache/pg_buffercache--unpackaged--1.0.sql
@@ -0,0 +1,4 @@
+/* src/extension/pg_buffercache/pg_buffercache--unpackaged--1.0.sql */
+
+ALTER EXTENSION pg_buffercache ADD function pg_buffercache_pages();
+ALTER EXTENSION pg_buffercache ADD view pg_buffercache;
diff --git a/src/extension/pg_buffercache/pg_buffercache.control b/src/extension/pg_buffercache/pg_buffercache.control
new file mode 100644
index 0000000..709513c
--- /dev/null
+++ b/src/extension/pg_buffercache/pg_buffercache.control
@@ -0,0 +1,5 @@
+# pg_buffercache extension
+comment = 'examine the shared buffer cache'
+default_version = '1.0'
+module_pathname = '$libdir/pg_buffercache'
+relocatable = true
diff --git a/src/extension/pg_buffercache/pg_buffercache_pages.c b/src/extension/pg_buffercache/pg_buffercache_pages.c
new file mode 100644
index 0000000..a44610f
--- /dev/null
+++ b/src/extension/pg_buffercache/pg_buffercache_pages.c
@@ -0,0 +1,219 @@
+/*-------------------------------------------------------------------------
+ *
+ * pg_buffercache_pages.c
+ * display some contents of the buffer cache
+ *
+ * src/extension/pg_buffercache/pg_buffercache_pages.c
+ *-------------------------------------------------------------------------
+ */
+#include "postgres.h"
+
+#include "access/heapam.h"
+#include "catalog/pg_type.h"
+#include "funcapi.h"
+#include "storage/buf_internals.h"
+#include "storage/bufmgr.h"
+#include "utils/relcache.h"
+
+
+#define NUM_BUFFERCACHE_PAGES_ELEM 8
+
+PG_MODULE_MAGIC;
+
+Datum pg_buffercache_pages(PG_FUNCTION_ARGS);
+
+
+/*
+ * Record structure holding the to be exposed cache data.
+ */
+typedef struct
+{
+ uint32 bufferid;
+ Oid relfilenode;
+ Oid reltablespace;
+ Oid reldatabase;
+ ForkNumber forknum;
+ BlockNumber blocknum;
+ bool isvalid;
+ bool isdirty;
+ uint16 usagecount;
+} BufferCachePagesRec;
+
+
+/*
+ * Function context for data persisting over repeated calls.
+ */
+typedef struct
+{
+ TupleDesc tupdesc;
+ BufferCachePagesRec *record;
+} BufferCachePagesContext;
+
+
+/*
+ * Function returning data from the shared buffer cache - buffer number,
+ * relation node/tablespace/database/blocknum and dirty indicator.
+ */
+PG_FUNCTION_INFO_V1(pg_buffercache_pages);
+
+Datum
+pg_buffercache_pages(PG_FUNCTION_ARGS)
+{
+ FuncCallContext *funcctx;
+ Datum result;
+ MemoryContext oldcontext;
+ BufferCachePagesContext *fctx; /* User function context. */
+ TupleDesc tupledesc;
+ HeapTuple tuple;
+
+ if (SRF_IS_FIRSTCALL())
+ {
+ int i;
+ volatile BufferDesc *bufHdr;
+
+ funcctx = SRF_FIRSTCALL_INIT();
+
+ /* Switch context when allocating stuff to be used in later calls */
+ oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
+
+ /* Create a user function context for cross-call persistence */
+ fctx = (BufferCachePagesContext *) palloc(sizeof(BufferCachePagesContext));
+
+ /* Construct a tuple descriptor for the result rows. */
+ tupledesc = CreateTemplateTupleDesc(NUM_BUFFERCACHE_PAGES_ELEM, false);
+ TupleDescInitEntry(tupledesc, (AttrNumber) 1, "bufferid",
+ INT4OID, -1, 0);
+ TupleDescInitEntry(tupledesc, (AttrNumber) 2, "relfilenode",
+ OIDOID, -1, 0);
+ TupleDescInitEntry(tupledesc, (AttrNumber) 3, "reltablespace",
+ OIDOID, -1, 0);
+ TupleDescInitEntry(tupledesc, (AttrNumber) 4, "reldatabase",
+ OIDOID, -1, 0);
+ TupleDescInitEntry(tupledesc, (AttrNumber) 5, "relforknumber",
+ INT2OID, -1, 0);
+ TupleDescInitEntry(tupledesc, (AttrNumber) 6, "relblocknumber",
+ INT8OID, -1, 0);
+ TupleDescInitEntry(tupledesc, (AttrNumber) 7, "isdirty",
+ BOOLOID, -1, 0);
+ TupleDescInitEntry(tupledesc, (AttrNumber) 8, "usage_count",
+ INT2OID, -1, 0);
+
+ fctx->tupdesc = BlessTupleDesc(tupledesc);
+
+ /* Allocate NBuffers worth of BufferCachePagesRec records. */
+ fctx->record = (BufferCachePagesRec *) palloc(sizeof(BufferCachePagesRec) * NBuffers);
+
+ /* Set max calls and remember the user function context. */
+ funcctx->max_calls = NBuffers;
+ funcctx->user_fctx = fctx;
+
+ /* Return to original context when allocating transient memory */
+ MemoryContextSwitchTo(oldcontext);
+
+ /*
+ * To get a consistent picture of the buffer state, we must lock all
+ * partitions of the buffer map. Needless to say, this is horrible
+ * for concurrency. Must grab locks in increasing order to avoid
+ * possible deadlocks.
+ */
+ for (i = 0; i < NUM_BUFFER_PARTITIONS; i++)
+ LWLockAcquire(FirstBufMappingLock + i, LW_SHARED);
+
+ /*
+ * Scan though all the buffers, saving the relevant fields in the
+ * fctx->record structure.
+ */
+ for (i = 0, bufHdr = BufferDescriptors; i < NBuffers; i++, bufHdr++)
+ {
+ /* Lock each buffer header before inspecting. */
+ LockBufHdr(bufHdr);
+
+ fctx->record[i].bufferid = BufferDescriptorGetBuffer(bufHdr);
+ fctx->record[i].relfilenode = bufHdr->tag.rnode.relNode;
+ fctx->record[i].reltablespace = bufHdr->tag.rnode.spcNode;
+ fctx->record[i].reldatabase = bufHdr->tag.rnode.dbNode;
+ fctx->record[i].forknum = bufHdr->tag.forkNum;
+ fctx->record[i].blocknum = bufHdr->tag.blockNum;
+ fctx->record[i].usagecount = bufHdr->usage_count;
+
+ if (bufHdr->flags & BM_DIRTY)
+ fctx->record[i].isdirty = true;
+ else
+ fctx->record[i].isdirty = false;
+
+ /* Note if the buffer is valid, and has storage created */
+ if ((bufHdr->flags & BM_VALID) && (bufHdr->flags & BM_TAG_VALID))
+ fctx->record[i].isvalid = true;
+ else
+ fctx->record[i].isvalid = false;
+
+ UnlockBufHdr(bufHdr);
+ }
+
+ /*
+ * And release locks. We do this in reverse order for two reasons:
+ * (1) Anyone else who needs more than one of the locks will be trying
+ * to lock them in increasing order; we don't want to release the
+ * other process until it can get all the locks it needs. (2) This
+ * avoids O(N^2) behavior inside LWLockRelease.
+ */
+ for (i = NUM_BUFFER_PARTITIONS; --i >= 0;)
+ LWLockRelease(FirstBufMappingLock + i);
+ }
+
+ funcctx = SRF_PERCALL_SETUP();
+
+ /* Get the saved state */
+ fctx = funcctx->user_fctx;
+
+ if (funcctx->call_cntr < funcctx->max_calls)
+ {
+ uint32 i = funcctx->call_cntr;
+ Datum values[NUM_BUFFERCACHE_PAGES_ELEM];
+ bool nulls[NUM_BUFFERCACHE_PAGES_ELEM];
+
+ values[0] = Int32GetDatum(fctx->record[i].bufferid);
+ nulls[0] = false;
+
+ /*
+ * Set all fields except the bufferid to null if the buffer is unused
+ * or not valid.
+ */
+ if (fctx->record[i].blocknum == InvalidBlockNumber ||
+ fctx->record[i].isvalid == false)
+ {
+ nulls[1] = true;
+ nulls[2] = true;
+ nulls[3] = true;
+ nulls[4] = true;
+ nulls[5] = true;
+ nulls[6] = true;
+ nulls[7] = true;
+ }
+ else
+ {
+ values[1] = ObjectIdGetDatum(fctx->record[i].relfilenode);
+ nulls[1] = false;
+ values[2] = ObjectIdGetDatum(fctx->record[i].reltablespace);
+ nulls[2] = false;
+ values[3] = ObjectIdGetDatum(fctx->record[i].reldatabase);
+ nulls[3] = false;
+ values[4] = ObjectIdGetDatum(fctx->record[i].forknum);
+ nulls[4] = false;
+ values[5] = Int64GetDatum((int64) fctx->record[i].blocknum);
+ nulls[5] = false;
+ values[6] = BoolGetDatum(fctx->record[i].isdirty);
+ nulls[6] = false;
+ values[7] = Int16GetDatum(fctx->record[i].usagecount);
+ nulls[7] = false;
+ }
+
+ /* Build and return the tuple. */
+ tuple = heap_form_tuple(fctx->tupdesc, values, nulls);
+ result = HeapTupleGetDatum(tuple);
+
+ SRF_RETURN_NEXT(funcctx, result);
+ }
+ else
+ SRF_RETURN_DONE(funcctx);
+}