Re: Commitfest app release on Feb 17 with many improvements

2025-02-01 Thread Jelte Fennema-Nio
On Fri, 31 Jan 2025 at 15:23, Jelte Fennema-Nio  wrote:
> You can try out these changes on the staging environment

It seems that someone tried it out, and found a 500 error when
searching by Message-ID. That's fixed now.

I also fixed some broken links to CirrusCI on the patch page.




Re: Proposal to CREATE FOREIGN TABLE LIKE

2025-02-01 Thread Álvaro Herrera
On 2025-Feb-01, Zhang Mingli wrote:

> Our DBA team uses foreign tables for ETL processes in Greenplum and 
> Cloudberry,
> and we often need to create foreign tables that match the column definitions 
> of local tables.
> 
> When dealing with wide tables and lots of those foreign tables, it can get 
> pretty tedious and mistakes happen easily.

Sure.  Did you consider IMPORT FOREIGN SCHEMA?

-- 
Álvaro Herrera   48°01'N 7°57'E  —  https://www.EnterpriseDB.com/
[…] in my life I have often thought of you, and of making you happy. Be so!
Heiligenstädter Testament, L. v. Beethoven, 1802
https://de.wikisource.org/wiki/Heiligenstädter_Testament




Cross-type index comparison support in contrib/btree_gin

2025-02-01 Thread Tom Lane
We've had multiple requests for $SUBJECT over the years
([1][2][3][4][5], and I'm sure my archive search missed some).
I finally decided to look into what it'd take to make that happen.
It's not as bad as I feared, and attached is a draft patch.

The thing that makes this sticky is that GIN itself doesn't support
any such thing as cross-type comparisons: all the Datums that it deals
with directly have to be of the same type as the stored index keys.
However, for the cases that btree_gin deals with, we can make use of
the "partial match" feature because all the entries we need to find
will be consecutive in the index.  And it turns out that the
comparePartial() method is only ever applied to compare the original
query value with an index entry, which means that internally to
comparePartial() we can apply the proper cross-type comparison
operator.  Our GIN index documentation about comparePartial() doesn't
quite say that in so many words, but btree_gin was already relying on
it --- in a very confusing and ill-explained way, if you ask me, but
it was relying on it.  (The 0001 patch below is mainly concerned with
making that reliance simpler and clearer.)

The other thing that has to be dealt with is that cross-type or not,
we need to somehow create a Datum of the index key type to perform
the initial index descent with.  But I realized that this isn't
that tough after all.  Aside from boring change-of-representation
work, there are these special cases:

* Query value is out of range for the index type.  We can simply
clamp it to the index type's range, so that GIN descends to one
end of the index or the other and then searches normally.  GIN
might falsely think that the endmost entry(s) of the index equal
the search datum, but it doesn't matter too much what GIN thinks
because comparePartial can filter away the false matches by
applying the correct comparison with the original query value.

* Query value falls between possible values of the index type
(possible in float8->float4 or timestamp->date cases, for example).
We can just use our usual conversion rules, though.  The critical
observation here is that it does not matter whether the conversion
rounds to the next lower or next higher possible value.  If we are
searching for equality, neither of those values will pass the
cross-type comparison so it doesn't matter.  If we are searching for
inequality, for example "indcol <= value", then only index entries
strictly less than the query value can match.  Rounding down clearly
doesn't hurt, while rounding up at worst makes the search include
some index entries just larger than the query value, which will be
correctly rejected by the cross-type comparison.

So basically all I had to do was write a bunch of non-error-throwing
conversion routines and set up some boilerplate infrastructure.
Patch series attached --- it's rather long, but a lot of it is
new test cases.

regards, tom lane

[1] 
https://www.postgresql.org/message-id/flat/58782480-ab75-4416-a177-ccf91be288a9%40app.fastmail.com
[2] 
https://www.postgresql.org/message-id/flat/17079-c5edf57c47debc2c%40postgresql.org
[3] 
https://www.postgresql.org/message-id/flat/20170207150420.1409.58748%40wrigleys.postgresql.org
[4] 
https://www.postgresql.org/message-id/flat/20160415185902.22924.77993%40wrigleys.postgresql.org
[5] 
https://www.postgresql.org/message-id/flat/VisenaEmail.42.91df4628bdf7755c.1537e96e852%40tc7-visena

From b6c502a8ed215b51dd26179194d9ddb6885f6e13 Mon Sep 17 00:00:00 2001
From: Tom Lane 
Date: Sat, 1 Feb 2025 17:59:54 -0500
Subject: [PATCH v1 1/5] Preliminary refactoring.

This step doesn't change any behavior.  It cleans the code up
slightly and documents it better.  In particular, the trick
being used by gin_btree_compare_prefix is better explained (IMO)
and there's now an Assert backing up the assumption it has to make.
---
 contrib/btree_gin/btree_gin.c| 85 +++-
 src/tools/pgindent/typedefs.list |  1 +
 2 files changed, 53 insertions(+), 33 deletions(-)

diff --git a/contrib/btree_gin/btree_gin.c b/contrib/btree_gin/btree_gin.c
index 533c55e9ea..d364e72226 100644
--- a/contrib/btree_gin/btree_gin.c
+++ b/contrib/btree_gin/btree_gin.c
@@ -16,14 +16,18 @@
 
 PG_MODULE_MAGIC;
 
+/* extra data passed from gin_btree_extract_query to gin_btree_compare_prefix */
 typedef struct QueryInfo
 {
-	StrategyNumber strategy;
-	Datum		datum;
-	bool		is_varlena;
-	Datum		(*typecmp) (FunctionCallInfo);
+	StrategyNumber strategy;	/* operator strategy number */
+	Datum		orig_datum;		/* original query (comparison) datum */
+	Datum		entry_datum;	/* datum we reported as the entry value */
+	PGFunction	typecmp;		/* appropriate btree comparison function */
 } QueryInfo;
 
+typedef Datum (*btree_gin_leftmost_function) (void);
+
+
 /*** GIN support functions shared by all datatypes ***/
 
 static Datum
@@ -33,6 +37,7 @@ gin_btree_extract_value(FunctionCallInfo fcinfo, bool is_varlena)
 	int32	   *nentries =

Re: Add a function to get the version of installed extension

2025-02-01 Thread Tom Lane
Yugo Nagata  writes:
> This might be avoidable if the binary is made carefully to check the existence
> of objects, but I think it is useful if an extension module function can check
> the current extension version. So, I would like to propose a new function to
> return the current extension version, get_extension_version. I've attached a
> patch.

While I don't say this is a bad idea, I do say you've not made a very
good case for it.  How would an extension know its own OID in order
to call the function?  If it did manage to call the function, what
exactly would it do with the result, and how would that be easier than
maintaining backwards compatibility with its old SQL definitions?
We've not found the backwards-compatibility requirement to be hugely
onerous in our contrib extensions.

A concrete example use-case would make this a lot more convincing.

> Also, as an application of this function, how about allowing to include 
> versions
> in the "requires" list in the control file?

That could be a good feature, but it doesn't follow that an
implementation of it would have any use for this exact function.
It might well be, for example, that the code would need to fetch
the pg_extension catalog entry for other purposes and so could
read the extversion field "for free" then.

BTW, the cfbot claims that this patch doesn't compile.  The
error is a little confusing, but I'm suspicious that it somehow
traces to the fact that you applied NameStr to a field that is
not of type "name".  See ExecAlterExtensionStmt for known-to-work
code for fetching extversion.

regards, tom lane




Re: NOT ENFORCED constraint feature

2025-02-01 Thread jian he
Hi.
After applying v11-0002 through v11-0006, there is a bug in
ATExecAlterConstrRecurse, I think.

In ATExecAlterConstrRecurse, after applying the patch, the code is

if (currcon->condeferrable != cmdcon->deferrable ||
currcon->condeferred != cmdcon->initdeferred ||
currcon->conenforced != cmdcon->is_enforced)
 {
 }
 if (currcon->conenforced != cmdcon->is_enforced)
 {
 ATExecAlterConstrEnforceability
 }
 else
 {
 AlterConstrTriggerDeferrability...
if (rel->rd_rel->relkind == RELKIND_PARTITIONED_TABLE ||
get_rel_relkind(refrelid) == RELKIND_PARTITIONED_TABLE)
ATExecAlterChildConstr(cmdcon, conrel, tgrel, fkrelid, pkrelid,
   contuple, otherrelids, lockmode);
 }


drop table if exists PKTABLE, fktable cascade;
CREATE TABLE PKTABLE (ptest1 int PRIMARY KEY, ptest2 text);
CREATE TABLE FKTABLE (ftest1 int REFERENCES PKTABLE MATCH FULL ON
DELETE CASCADE ON UPDATE CASCADE NOT ENFORCED,
   ftest2 int);
ALTER TABLE fktable ALTER CONSTRAINT fktable_ftest1_fkey deferrable;

\d fktable
  Table "public.fktable"
 Column |  Type   | Collation | Nullable | Default
+-+---+--+-
 ftest1 | integer |   |  |
 ftest2 | integer |   |  |
Foreign-key constraints:
"fktable_ftest1_fkey" FOREIGN KEY (ftest1) REFERENCES
pktable(ptest1) MATCH FULL ON UPDATE CASCADE ON DELETE CASCADE
DEFERRABLE NOT VALID

Currently "ALTER TABLE fktable ALTER CONSTRAINT fktable_ftest1_fkey deferrable;"
implies that the constraint fktable_ftest1_fkey is changing from "not
enforced" to "enforced",
but here we didn't explicitly mean to change the "enforced" status.
We only want to change the deferrability.

So the code should only call AlterConstrTriggerDeferrability,
not ATExecAlterConstrEnforceability?




Re: hash_search_with_hash_value is high in "perf top" on a replica

2025-02-01 Thread Andres Freund
Hi,

On 2025-02-01 03:46:33 -0800, Dmitry Koterov wrote:
> > It'd be interesting to see what the paths towards
> hash_search_with_hash_value
> are.
> 
> One of the popular paths is on the screenshot. They are all more or less
> the same when recovery_prefetch=on (and when it's off, the replica behaves
> worse, more replication lag).

Yea, I was hoping for a profile with recovery_prefetch=off, so we can see what
records are causing most of the "slow" lookups.

Greetings,

Andres Freund




Re: hash_search_with_hash_value is high in "perf top" on a replica

2025-02-01 Thread Ants Aasma
On Fri, Jan 31, 2025, 15:43 Andres Freund  wrote:

> > Maybe it's a red herring though, but it looks pretty suspicious.
>
> It's unfortunately not too surprising - our buffer mapping table is a
> pretty
> big bottleneck.  Both because a hash table is just not a good fit for the
> buffer mapping table due to the lack of locality and because dynahash is
> really poor hash table implementation.
>

I measured similar things when looking at apply throughput recently. For
in-cache workloads buffer lookup and locking was about half of the load.

One other direction is to extract more memory concurrency. Prefetcher could
batch multiple lookups together so CPU OoO execution has a chance to fire
off multiple memory accesses at the same time.

The other direction is to split off WAL decoding, buffer lookup and maybe
even pinning to a separate process from the main redo loop.

--
Ants Aasma



Re: Non-text mode for pg_dumpall

2025-02-01 Thread Srinath Reddy
Hi,
I think we have to change the pg_dumpall "--help" message, similar to
pg_dump's, specifying that pg_dumpall now dumps a cluster to other
non-text formats as well.
A similar "--help" message change is needed in pg_restore, to specify that
pg_restore now supports restoring a whole cluster from an archive created by
pg_dumpall.

diff --git a/src/bin/pg_dump/pg_dumpall.c b/src/bin/pg_dump/pg_dumpall.c
index 3e022ecdeb..728abe841c 100644
--- a/src/bin/pg_dump/pg_dumpall.c
+++ b/src/bin/pg_dump/pg_dumpall.c
@@ -667,7 +667,7 @@ main(int argc, char *argv[])
 static void
 help(void)
 {
- printf(_("%s extracts a PostgreSQL database cluster into an SQL script
file.\n\n"), progname);
+ printf(_("%s extracts a PostgreSQL database cluster into an SQL script
file or to other formats.\n\n"), progname);

diff --git a/src/bin/pg_dump/pg_restore.c b/src/bin/pg_dump/pg_restore.c
index fc248a441e..c4e58c1f3b 100644
--- a/src/bin/pg_dump/pg_restore.c
+++ b/src/bin/pg_dump/pg_restore.c
@@ -582,6 +582,8 @@ static void
 usage(const char *progname)
 {
  printf(_("%s restores a PostgreSQL database from an archive created by
pg_dump.\n\n"), progname);
+ printf(_("[or]\n"));
+ printf(_("%s restores a PostgreSQL entire cluster from an archive created
by pg_dumpall.\n\n"), progname);

Regards,
Srinath Reddy Sadipiralla, EDB: https://www.enterprisedb.com


Re: Proposal to CREATE FOREIGN TABLE LIKE

2025-02-01 Thread Zhang Mingli


Zhang Mingli
www.hashdata.xyz
On Feb 1, 2025 at 20:20 +0800, Álvaro Herrera , wrote:
>
> Sure. Did you consider IMPORT FOREIGN SCHEMA?
Hi, Álvaro

Thank you very much for your suggestion.

I've looked into it, and it certainly can be beneficial, especially for 
postgres_fdw.
However, I believe that not all FDWs support the concept of a schema or can be
used with the IMPORT FOREIGN SCHEMA command, do they?


For example, we use kafka_fdw to produce and consume data from a Kafka server.
In our scenario, we sometimes need to write records from a local table into 
Kafka. Here’s a brief outline of our process:

1. We already have a wide table, local_wide_table in Postgres.
2. We need to create a foreign table, foreign_table, with the same definition 
as local_wide_table.
3. Insert records into foreign_table by selecting from local_wide_table with
some quals.

In step 2, we currently have to manually create the foreign table using CREATE 
FOREIGN TABLE and copy the column definitions one by one.


Re: hash_search_with_hash_value is high in "perf top" on a replica

2025-02-01 Thread Andres Freund
Hi,

On 2025-02-01 15:43:41 +0100, Ants Aasma wrote:
> On Fri, Jan 31, 2025, 15:43 Andres Freund  wrote:
> 
> > > Maybe it's a red herring though, but it looks pretty suspicious.
> >
> > It's unfortunately not too surprising - our buffer mapping table is a
> > pretty
> > big bottleneck.  Both because a hash table is just not a good fit for the
> > buffer mapping table due to the lack of locality and because dynahash is
> > really poor hash table implementation.
> >
> 
> I measured similar things when looking at apply throughput recently. For
> in-cache workloads buffer lookup and locking was about half of the load.
> 
> One other direction is to extract more memory concurrency. Prefetcher could
> batch multiple lookups together so CPU OoO execution has a chance to fire
> off multiple memory accesses at the same time.

I think at the moment we have a *hilariously* cache-inefficient buffer lookup;
that's the first thing to address. A hash table for buffer mapping lookups imo
is a bad idea, due to losing all locality in a workload that exhibits a *lot*
of locality. But furthermore, dynahash.c is very far from a cache-efficient
hash table implementation.

The other aspect is that in many workloads we'll look up a small set of
buffers over and over, which a) wastes cycles b) wastes cache space for stuff
that could be elided much more efficiently.

We also do a lot of hash lookups for smgr, because we don't have any
cross-record caching infrastructure for that.


> The other direction is to split off WAL decoding, buffer lookup and maybe
> even pinning to a separate process from the main redo loop.

Maybe, but I think we're rather far away from those things being the most
productive thing to tackle.

Greetings,

Andres Freund




Re: Improving tracking/processing of buildfarm test failures

2025-02-01 Thread Alexander Lakhin

Hello hackers,

Please take a look at the January report on buildfarm failures:

# SELECT br, count(*) FROM failures WHERE dt >= '2025-01-01' AND
 dt < '2025-02-01' GROUP BY br;
REL_13_STABLE: 1
REL_14_STABLE: 1
REL_15_STABLE: 4
REL_16_STABLE: 12
REL_17_STABLE: 10
master: 64
-- Total: 92
(Counting test failures only, excluding indent-check, Configure, Build
errors.)

# SELECT COUNT(*) FROM (SELECT DISTINCT issue_link FROM failures WHERE
 dt >= '2025-01-01' AND dt < '2025-02-01');
19

# SELECT issue_link, count(*) FROM failures WHERE dt >= '2025-01-01' AND
 dt < '2025-02-01' GROUP BY issue_link ORDER BY 2 DESC LIMIT 4;
https://www.postgresql.org/message-id/559462.1737760111%40sss.pgh.pa.us : 17
-- Fixed

https://www.postgresql.org/message-id/a9a97e83-9ec8-5de5-bf69-80e9560f5...@gmail.com
 : 11

https://www.postgresql.org/message-id/657815a2-5a89-fcc1-1c9d-d77a6986b...@gmail.com
 : 9

https://www.postgresql.org/message-id/3611540.1737518430%40sss.pgh.pa.us : 7

# SELECT count(*) FROM failures WHERE dt >= '2025-01-01' AND
 dt < '2025-02-01' AND issue_link IS NULL; -- Unsorted/unhelpful failures
18

Short-lived failures:
30

I've also offloaded past year's content of the "Known Buildfarm Test Failures" 
page to:
https://wiki.postgresql.org/wiki/Known_Buildfarm_Test_Failures_-_Archive

Best regards,
Alexander Lakhin
Neon (https://neon.tech)




Re: more backtraces

2025-02-01 Thread Andrey Borodin


> On 5 Dec 2019, at 00:45, Peter Eisentraut  
> wrote:
> 
> <0001-Print-backtrace-on-SIGABRT-SIGBUS-SIGSEGV.patch>

Hello!

I find this patch very useful when hacking, so let's have a rebased version in
the archives.

And +1 for the feature. The backtrace could help collect diagnostics in case of 
production incidents.

Thanks!


Best regards, Andrey Borodin.


v2-0001-Print-backtrace-on-SIGABRT-SIGBUS-SIGSEGV.patch
Description: Binary data


[PATCH] Fix build on MINGW on ARM64

2025-02-01 Thread Lars Kanis
This patch limits the workaround of using __builtin_setjmp on the
Windows MINGW platform. The workaround is only necessary for the legacy
MSVCRT-based toolchain, not for UCRT-based ones. It is not available at
all with clang on ARM64, resulting in the following compiler error:

  error: __builtin_longjmp is not supported for the current target

This patch has been used for years in MSYS2 packages:
  
https://github.com/msys2/MINGW-packages/blob/master/mingw-w64-postgresql/postgresql-14.0-use-mingw-setjmp-on-ucrt.patch

It is also used in ruby-pg to allow compiling for
aarch64-w64-windows-gnu: https://github.com/ged/ruby-pg/pull/626/files

It would be nice if this patch could be merged upstream.

--
Regards, Lars


0001-Use-workaround-of-__builtin_setjmp-only-on-MINGW-on-.patch
Description: Binary data


Re: Add trim_trailing_whitespace to editorconfig file

2025-02-01 Thread Peter Eisentraut

On 26.12.24 00:52, Jelte Fennema-Nio wrote:

On Mon, 7 Oct 2024 at 22:21, Jelte Fennema-Nio  wrote:

Is this blocked on anything? I feel it's ready to merge at this point.
I'd really like to not have this problem with trailing whitespace in
sgml files anymore.


I noticed Peter added some additional rules to .gitattributes last
month. So I reran the script that's part of this patch for an up to
date .editorconfig file. See attached.


Committed.  (I made some small tweaks to the script to better satisfy
pycodestyle, except for the long-line warnings.)






Re: Make COPY format extendable: Extract COPY TO format implementations

2025-02-01 Thread Sutou Kouhei
Hi,

In 
  "Re: Make COPY format extendable: Extract COPY TO format implementations" on 
Fri, 31 Jan 2025 16:34:52 -0800,
  Masahiko Sawada  wrote:

> Again, what I'd like to avoid is that we end up adding everything
> (including new fields we add in the future) related to copy operation
> to copyapi.h. For example, with v28 that moves both CopyFromState and
> CopyToState to copyapi.h, file_fdw.c includes unrelated CopyToState
> struct via copyfrom_internal.h -> copyapi.h. In addition to that, both
> copyfrom.c and copyfrom_internal.h did the same, which made me think
> copyfrom_internal.h mostly no longer plays its role. I very much welcome
> other ideas too if they could achieve the same goal.

For that purpose, copyapi.h should not include
copy{to,from}_internal.h. If it does, copyto.c includes
CopyFromState and copyfrom*.c include CopyToState.

What do you think about the following change? Note that
extensions must include copy{to,from}_internal.h explicitly
in addition to copyapi.h.

-
diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c
index 10f80ef3654..a2dc2d04407 100644
--- a/src/backend/commands/copy.c
+++ b/src/backend/commands/copy.c
@@ -23,6 +23,8 @@
 #include "access/xact.h"
 #include "catalog/pg_authid.h"
 #include "commands/copyapi.h"
+#include "commands/copyto_internal.h"
+#include "commands/copyfrom_internal.h"
 #include "commands/defrem.h"
 #include "executor/executor.h"
 #include "mb/pg_wchar.h"
diff --git a/src/backend/commands/copyfrom.c b/src/backend/commands/copyfrom.c
index 3f6b0031d94..7bcf1c6544b 100644
--- a/src/backend/commands/copyfrom.c
+++ b/src/backend/commands/copyfrom.c
@@ -29,6 +29,7 @@
 #include "access/xact.h"
 #include "catalog/namespace.h"
 #include "commands/copyapi.h"
+#include "commands/copyfrom_internal.h"
 #include "commands/progress.h"
 #include "commands/trigger.h"
 #include "executor/execPartition.h"
diff --git a/src/backend/commands/copyfromparse.c 
b/src/backend/commands/copyfromparse.c
index b016f43a711..7296745d6d2 100644
--- a/src/backend/commands/copyfromparse.c
+++ b/src/backend/commands/copyfromparse.c
@@ -63,6 +63,7 @@
 #include 
 
 #include "commands/copyapi.h"
+#include "commands/copyfrom_internal.h"
 #include "commands/progress.h"
 #include "executor/executor.h"
 #include "libpq/libpq.h"
diff --git a/src/backend/commands/copyto.c b/src/backend/commands/copyto.c
index da281f32950..a69771ea6da 100644
--- a/src/backend/commands/copyto.c
+++ b/src/backend/commands/copyto.c
@@ -20,6 +20,7 @@
 
 #include "access/tableam.h"
 #include "commands/copyapi.h"
+#include "commands/copyto_internal.h"
 #include "commands/progress.h"
 #include "executor/execdesc.h"
 #include "executor/executor.h"
diff --git a/src/include/commands/copyapi.h b/src/include/commands/copyapi.h
index 389f887b2c1..dfab62372a7 100644
--- a/src/include/commands/copyapi.h
+++ b/src/include/commands/copyapi.h
@@ -14,8 +14,7 @@
 #ifndef COPYAPI_H
 #define COPYAPI_H
 
-#include "commands/copyto_internal.h"
-#include "commands/copyfrom_internal.h"
+#include "commands/copy.h"
 
 /*
  * API structure for a COPY TO format implementation. Note this must be
diff --git a/src/test/modules/test_copy_format/test_copy_format.c 
b/src/test/modules/test_copy_format/test_copy_format.c
index d72d5c33c1b..c05d65557a9 100644
--- a/src/test/modules/test_copy_format/test_copy_format.c
+++ b/src/test/modules/test_copy_format/test_copy_format.c
@@ -14,6 +14,8 @@
 #include "postgres.h"
 
 #include "commands/copyapi.h"
+#include "commands/copyfrom_internal.h"
+#include "commands/copyto_internal.h"
 #include "commands/defrem.h"
 
 PG_MODULE_MAGIC;
-

>> If we use the approach, we can't show error position when a
>> custom COPY format handler function returns invalid routine
>> because DefElem for the "format" option isn't available in
>> BeginCopyTo(). Is it acceptable? If it's acceptable, let's
>> use the approach.
> 
> I think we can live with it. All errors happening while processing the
> copy options don't necessarily show the error position.

OK. I attach the v31 patch set that uses this
approach. Mainly, 0003 and 0006 were changed. The v31 patch
set also includes the above
copyapi.h/copy{to,from}_internal.h related changes.

If we have a feature that returns a function name from an Oid,
we can improve the error messages by including the function name
(format name) when a custom format handler function returns
something other than a Copy{To,From}Routine...


Thanks,
-- 
kou

>From 7c9a6d7be003f5a63d12e4c3c3a30231c726c794 Mon Sep 17 00:00:00 2001
From: Sutou Kouhei 
Date: Sat, 28 Sep 2024 23:24:49 +0900
Subject: [PATCH v31 1/9] Refactor COPY TO to use format callback functions.

This commit introduces a new CopyToRoutine struct, which is a set of
callback routines to copy tuples in a specific format. It also makes
the existing formats (text, CSV, and binary) utilize these format
callbacks.

This change is a preliminary step towards making the COPY TO command
extensible in terms of output formats.

Additionally, thi