------- Original Message -------
On Thursday, January 19th, 2023 at 4:45 PM, Tomas Vondra 
<tomas.von...@enterprisedb.com> wrote:


> 
> 
> On 1/18/23 20:05, gkokola...@pm.me wrote:
> 
> > ------- Original Message -------
> > On Wednesday, January 18th, 2023 at 3:00 PM, Tomas Vondra 
> > tomas.von...@enterprisedb.com wrote:
> > 
> > > Hi,
> > > 
> > > On 1/16/23 16:14, gkokola...@pm.me wrote:
> > > 
> > > > Hi,
> > > > 
> > > > I admit I am completely at a loss as to what is expected of me anymore.
> > 
> > <snip>
> > 
> > > Unfortunately, this plays against this patch - I'm certainly in favor of
> > > adding lz4 (and other compression algos) into pg_dump, but if I commit
> > > 0001 we get little benefit, and the other parts actually adding lz4/zstd
> > > are treated as "WIP / for completeness" so it's unclear when we'd get to
> > > commit them.
> > 
> > Thank you for your kindness and for taking the time to explain.
> > 
> > > So if I could recommend one thing, it'd be to get at least one of those
> > > WIP patches into a shape that's likely committable right after 0001.
> > 
> > This was clearly my fault. I misunderstood a suggestion upthread to focus
> > on the first patch of the series and ignore documentation and comments on
> > the rest.
> > 
> > Please find v21 to contain 0002 and 0003 in a state which I no longer
> > consider as WIP but worthy of proper consideration. Some guidance on
> > where best to add documentation in 0002 for the function pointers in
> > CompressFileHandle would be welcomed.
> 
> 
> This is internal-only API, not meant for use by regular users and/or
> extension authors, so I don't think we need sgml docs. I'd just add
> regular code-level documentation to compress_io.h.
> 
> For inspiration see docs for "struct ReorderBuffer" in reorderbuffer.h,
> or "struct _archiveHandle" in pg_backup_archiver.h.
> 
> Or what other kind of documentation you had in mind?

This is exactly what I was after. I was torn between compress_io.c and
compress_io.h. Thank you.
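For what it's worth, the style I have in mind is something like the minimal sketch below, modeled loosely on the struct documentation in pg_backup_archiver.h. All names here are hypothetical stand-ins, not the actual CompressFileHandle fields; the point is only that each function pointer carries a short comment stating its contract.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * DemoFileHandle is a hypothetical stand-in for CompressFileHandle; the
 * point is the comment style for function-pointer members, not the API.
 */
typedef struct DemoFileHandle
{
	/*
	 * Write 'size' bytes from 'ptr' to the handle.
	 *
	 * Returns the number of bytes written; a short count signals an error.
	 */
	size_t		(*write_func) (const void *ptr, size_t size,
							   struct DemoFileHandle *h);

	void	   *private_data;	/* implementation-specific state */
} DemoFileHandle;

/* A trivial in-memory implementation, for demonstration only. */
typedef struct DemoBuffer
{
	char		data[64];
	size_t		len;
} DemoBuffer;

static size_t
demo_buffer_write(const void *ptr, size_t size, DemoFileHandle *h)
{
	DemoBuffer *buf = (DemoBuffer *) h->private_data;

	if (buf->len + size > sizeof(buf->data))
		return 0;				/* short count: out of space */
	memcpy(buf->data + buf->len, ptr, size);
	buf->len += size;
	return size;
}
```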

> > > > I had posted v19-0001 for a committer's consideration and v19-000{2,3} 
> > > > for completeness.
> > > > Please find a rebased v20 attached.
> > > 
> > > I took a quick look at 0001, so a couple comments (sorry if some of this
> > > was already discussed in the thread):
> > 
> > Much appreciated!
> > 
> > > 1) I don't think a "refactoring" patch should reference particular
> > > compression algorithms (lz4/zstd), and in particular I don't think we
> > > should have "not yet implemented" messages. We only have a couple other
> > > places doing that, when we didn't have a better choice. But here we can
> > > simply reject the algorithm when parsing the options, we don't need to
> > > do that in a dozen other places.
> > 
> > I have now removed lz4/zstd from where they were present with the exception
> > of pg_dump.c which is responsible for parsing.
> 
> 
> I'm not sure I understand why leave the lz4/zstd in this place?

You are right, it is not obvious. Those were added in 5e73a60488, which is
already committed in master, and I didn't want to backtrack. Of course, I am
not opposed to doing so if you wish.
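To make sure we mean the same thing: rejecting the algorithm once at option-parsing time could look roughly like the sketch below. The enum and function names here are hypothetical, not the real pg_compress_algorithm machinery; the idea is just that the single parse-time check removes the need for "not yet implemented" branches elsewhere.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical enum; mirrors the idea, not the real pg_compress_algorithm. */
typedef enum demo_algorithm
{
	DEMO_COMPRESSION_NONE,
	DEMO_COMPRESSION_GZIP,
	DEMO_COMPRESSION_LZ4
} demo_algorithm;

/*
 * Reject unsupported methods once, while parsing options, so the rest of
 * the code never needs "not yet implemented" messages.
 */
static bool
demo_parse_compression(const char *name, demo_algorithm *algorithm)
{
	if (strcmp(name, "none") == 0)
		*algorithm = DEMO_COMPRESSION_NONE;
	else if (strcmp(name, "gzip") == 0)
		*algorithm = DEMO_COMPRESSION_GZIP;
	else if (strcmp(name, "lz4") == 0)
		*algorithm = DEMO_COMPRESSION_LZ4;
	else
		return false;			/* e.g. "zstd": rejected here, once */
	return true;
}
```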

> 
> > > 2) I wouldn't reorder the cases in WriteDataToArchive, i.e. I'd keep
> > > "none" at the end. It might make backpatches harder.
> > 
> > Agreed. However a 'default' is needed in order to avoid compilation
> > warnings. Also note that 0002 completely does away with cases within
> > WriteDataToArchive.
> 
> 
> OK, although that's also a consequence of using a "switch" instead of
> plan "if" branches.
> 
> Furthermore, I'm not sure we really need the pg_fatal() about invalid
> compression method in these default blocks. I mean, how could we even
> get to these places when the build does not support the algorithm? All
> of this (ReadDataFromArchive, WriteDataToArchive, EndCompressor, ...)
> happens looong after the compressor was initialized and the method
> checked, no? So maybe either this should simply do Assert(false) or use
> a different error message.

I like Assert(false).
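Concretely, I read the suggestion as something like the following sketch (hypothetical names, plain C assert standing in for Assert): the switch handles every valid method, and the unreachable fall-through documents itself as a programming error rather than pretending the method could be invalid at this point.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical enum standing in for the real compression method enum. */
typedef enum demo_method
{
	DEMO_NONE,
	DEMO_GZIP,
	DEMO_LZ4
} demo_method;

static const char *
demo_method_name(demo_method method)
{
	switch (method)
	{
		case DEMO_NONE:
			return "none";
		case DEMO_GZIP:
			return "gzip";
		case DEMO_LZ4:
			return "lz4";
	}

	/*
	 * Unreachable: the method was validated when the compressor was
	 * initialized, so getting here indicates a programming error.
	 */
	assert(false);
	return NULL;
}
```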

> > > 3) While building, I get bunch of warnings about missing cfdopen()
> > > prototype and pg_backup_archiver.c not knowing about cfdopen() and
> > > adding an implicit prototype (so I doubt it actually works).
> > 
> > Fixed. cfdopen() got prematurely introduced in 5e73a6048 and then got
> > removed in 69fb29d1af. v20 failed to properly take 69fb29d1af into
> > consideration. Note that cfdopen is removed in 0002, which explains why
> > cfbot didn't complain.
> 
> 
> OK.
> 
> > > 4) "cfp" struct no longer wraps gzFile, but the comment was not updated.
> > > FWIW I'm not sure switching to "void *" is an improvement, maybe it'd be
> > > better to have a "union" of correct types?
> > 
> > Please find and updated comment and a union in place of the void *. Also
> > note that 0002 completely does away with cfp in favour of a new struct
> > CompressFileHandle. I maintained the void * there because it is used by
> > private methods of the compressors. 0003 contains such an example with
> > LZ4CompressorState.
> 
> 
> I wonder if this (and also the previous item) makes sense to keep 0001
> and 0002 or to combine them. The "intermediate" state is a bit annoying.

Agreed. It was initially submitted as one patch. Then it was requested that it
be split into two parts, one to expand the use of the existing API and one to
replace it with the new interface. Unfortunately, expanding the use of the
existing API requires some tweaking, but that is not a very good reason for
the current patch set. I should have done a better job there.

Please find v22 attached, which combines 0001 and 0002 back together. It is
missing the documentation discussed above, as I wanted to give quick feedback.
Let me know if you think the combined version is the one to move forward with.

Cheers,
//Georgios

> 
> > > 5) cfopen/cfdopen are missing comments. cfopen_internal has an updated
> > > comment, but that's a static function while cfopen/cfdopen are the
> > > actual API.
> > 
> > Added comments to cfopen/cfdopen.
> 
> 
> OK.
> 
> 
> regards
> 
> --
> Tomas Vondra
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
From 48451d2316a7016bfb2824c33ba38594ec03953e Mon Sep 17 00:00:00 2001
From: Georgios Kokolatos <gkokola...@pm.me>
Date: Wed, 18 Jan 2023 18:15:42 +0000
Subject: [PATCH v22 2/2] Add LZ4 compression to pg_dump

This is mostly done within pg_dump's compression streaming and file APIs.
It is confined to the newly introduced compress_lz4.{c,h} files.

The streaming API is aimed at inlined use cases, where simple lz4.h calls
can be used directly. The file API generates output, or parses input, that
can be read or generated via the lz4 utility.

Wherever the LZ4F API does not implement all the functionality corresponding
to fread(), fwrite(), fgets(), fgetc(), feof(), and fclose(), it has been
implemented locally.
---
 doc/src/sgml/ref/pg_dump.sgml        |  13 +-
 src/bin/pg_dump/Makefile             |   2 +
 src/bin/pg_dump/compress_io.c        |  22 +-
 src/bin/pg_dump/compress_lz4.c       | 618 +++++++++++++++++++++++++++
 src/bin/pg_dump/compress_lz4.h       |  22 +
 src/bin/pg_dump/meson.build          |   8 +-
 src/bin/pg_dump/pg_backup_archiver.c |  14 +-
 src/bin/pg_dump/pg_dump.c            |   5 +-
 src/bin/pg_dump/t/002_pg_dump.pl     |  82 +++-
 src/tools/pginclude/cpluspluscheck   |   1 +
 src/tools/pgindent/typedefs.list     |   1 +
 11 files changed, 770 insertions(+), 18 deletions(-)
 create mode 100644 src/bin/pg_dump/compress_lz4.c
 create mode 100644 src/bin/pg_dump/compress_lz4.h

diff --git a/doc/src/sgml/ref/pg_dump.sgml b/doc/src/sgml/ref/pg_dump.sgml
index 2c938cd7e1..49d218905f 100644
--- a/doc/src/sgml/ref/pg_dump.sgml
+++ b/doc/src/sgml/ref/pg_dump.sgml
@@ -330,9 +330,10 @@ PostgreSQL documentation
            machine-readable format that <application>pg_restore</application>
            can read. A directory format archive can be manipulated with
            standard Unix tools; for example, files in an uncompressed archive
-           can be compressed with the <application>gzip</application> tool.
-           This format is compressed by default and also supports parallel
-           dumps.
+           can be compressed with the <application>gzip</application> or
+           <application>lz4</application> tool.
+           This format is compressed by default using <literal>gzip</literal>
+           and also supports parallel dumps.
           </para>
          </listitem>
         </varlistentry>
@@ -654,7 +655,7 @@ PostgreSQL documentation
        <para>
         Specify the compression method and/or the compression level to use.
         The compression method can be set to <literal>gzip</literal> or
-        <literal>none</literal> for no compression.
+        <literal>lz4</literal> or <literal>none</literal> for no compression.
         A compression detail string can optionally be specified.  If the
         detail string is an integer, it specifies the compression level.
         Otherwise, it should be a comma-separated list of items, each of the
@@ -675,8 +676,8 @@ PostgreSQL documentation
         individual table-data segments, and the default is to compress using
         <literal>gzip</literal> at a moderate level. For plain text output,
         setting a nonzero compression level causes the entire output file to be compressed,
-        as though it had been fed through <application>gzip</application>; but the default
-        is not to compress.
+        as though it had been fed through <application>gzip</application> or
+        <application>lz4</application>; but the default is not to compress.
        </para>
        <para>
         The tar archive format currently does not support compression at all.
diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index 7a19f5d617..a1401377ab 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -17,6 +17,7 @@ top_builddir = ../../..
 include $(top_builddir)/src/Makefile.global
 
 export GZIP_PROGRAM=$(GZIP)
+export LZ4
 export with_icu
 
 override CPPFLAGS := -I$(libpq_srcdir) $(CPPFLAGS)
@@ -25,6 +26,7 @@ LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)
 OBJS = \
 	$(WIN32RES) \
 	compress_gzip.o \
+	compress_lz4.o \
 	compress_io.o \
 	dumputils.o \
 	parallel.o \
diff --git a/src/bin/pg_dump/compress_io.c b/src/bin/pg_dump/compress_io.c
index d60111b2b8..96947cd3ea 100644
--- a/src/bin/pg_dump/compress_io.c
+++ b/src/bin/pg_dump/compress_io.c
@@ -56,7 +56,7 @@
  *	InitDiscoverCompressFileHandle tries to deffer the compression by the
  *	filename suffix. If the suffix is not yet known, then it tries to simply
  *	open the file, and if it fails, it tries to open the same file with the .gz
- *	suffix.
+ *	suffix, and then again with the .lz4 suffix.
  *
  * IDENTIFICATION
  *	   src/bin/pg_dump/compress_io.c
@@ -70,6 +70,7 @@
 
 #include "compress_io.h"
 #include "compress_gzip.h"
+#include "compress_lz4.h"
 #include "pg_backup_utils.h"
 
 /*----------------------
@@ -146,6 +147,9 @@ AllocateCompressor(const pg_compress_specification compression_spec,
 		case PG_COMPRESSION_GZIP:
 			InitCompressorGzip(cs, compression_spec);
 			break;
+		case PG_COMPRESSION_LZ4:
+			InitCompressorLZ4(cs, compression_spec);
+			break;
 		default:
 			pg_fatal("invalid compression method \"%s\"",
 					 get_compress_algorithm_name(compression_spec.algorithm));
@@ -339,6 +343,9 @@ InitCompressFileHandle(const pg_compress_specification compression_spec)
 		case PG_COMPRESSION_GZIP:
 			InitCompressGzip(CFH, compression_spec);
 			break;
+		case PG_COMPRESSION_LZ4:
+			InitCompressLZ4(CFH, compression_spec);
+			break;
 		default:
 			pg_fatal("invalid compression method \"%s\"",
 					 get_compress_algorithm_name(compression_spec.algorithm));
@@ -357,7 +364,7 @@ InitCompressFileHandle(const pg_compress_specification compression_spec)
  * throughout. Otherwise the compression will be deferred by iteratively trying
  * to open the file at 'path', first as is, then by appending known compression
  * suffixes. So if you pass "foo" as 'path', this will open either "foo" or
- * "foo.gz", trying in that order.
+ * "foo.gz" or "foo.lz4", trying in that order.
  *
  * On failure, return NULL with an error code in errno.
  */
@@ -395,6 +402,17 @@ InitDiscoverCompressFileHandle(const char *path, const char *mode)
 			if (exists)
 				compression_spec.algorithm = PG_COMPRESSION_GZIP;
 		}
+#endif
+#ifdef USE_LZ4
+		if (!exists)
+		{
+			free_keep_errno(fname);
+			fname = psprintf("%s.lz4", path);
+			exists = (stat(fname, &st) == 0);
+
+			if (exists)
+				compression_spec.algorithm = PG_COMPRESSION_LZ4;
+		}
 #endif
 	}
 
diff --git a/src/bin/pg_dump/compress_lz4.c b/src/bin/pg_dump/compress_lz4.c
new file mode 100644
index 0000000000..c97e16187a
--- /dev/null
+++ b/src/bin/pg_dump/compress_lz4.c
@@ -0,0 +1,618 @@
+/*-------------------------------------------------------------------------
+ *
+ * compress_lz4.c
+ *	 Routines for archivers to read or write an LZ4 compressed data
+ *	 stream.
+ *
+ * Portions Copyright (c) 1996-2022, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	   src/bin/pg_dump/compress_lz4.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres_fe.h"
+#include "pg_backup_utils.h"
+
+#include "compress_lz4.h"
+
+#ifdef USE_LZ4
+#include <lz4.h>
+#include <lz4frame.h>
+
+#define LZ4_OUT_SIZE	(4 * 1024)
+#define LZ4_IN_SIZE		(16 * 1024)
+
+/*
+ * LZ4F_HEADER_SIZE_MAX first appeared in v1.7.5 of the library.
+ * Redefine it for installations with a lesser version.
+ */
+#ifndef LZ4F_HEADER_SIZE_MAX
+#define LZ4F_HEADER_SIZE_MAX	32
+#endif
+
+/*----------------------
+ * Compressor API
+ *----------------------
+ */
+
+typedef struct LZ4CompressorState
+{
+	char	   *outbuf;
+	size_t		outsize;
+} LZ4CompressorState;
+
+/* Private routines that support LZ4 compressed data I/O */
+static void ReadDataFromArchiveLZ4(ArchiveHandle *AH, CompressorState *cs);
+static void WriteDataToArchiveLZ4(ArchiveHandle *AH, CompressorState *cs,
+								  const void *data, size_t dLen);
+static void EndCompressorLZ4(ArchiveHandle *AH, CompressorState *cs);
+
+static void
+ReadDataFromArchiveLZ4(ArchiveHandle *AH, CompressorState *cs)
+{
+	LZ4_streamDecode_t lz4StreamDecode;
+	char	   *buf;
+	char	   *decbuf;
+	size_t		buflen;
+	size_t		cnt;
+
+	buflen = LZ4_IN_SIZE;
+	buf = pg_malloc(buflen);
+	decbuf = pg_malloc(buflen);
+
+	LZ4_setStreamDecode(&lz4StreamDecode, NULL, 0);
+
+	while ((cnt = cs->readF(AH, &buf, &buflen)))
+	{
+		int			decBytes = LZ4_decompress_safe_continue(&lz4StreamDecode,
+															buf, decbuf,
+															cnt, buflen);
+
+		ahwrite(decbuf, 1, decBytes, AH);
+	}
+
+	pg_free(buf);
+	pg_free(decbuf);
+}
+
+static void
+WriteDataToArchiveLZ4(ArchiveHandle *AH, CompressorState *cs,
+					  const void *data, size_t dLen)
+{
+	LZ4CompressorState *LZ4cs = (LZ4CompressorState *) cs->private_data;
+	size_t		compressed;
+	size_t		requiredsize = LZ4_compressBound(dLen);
+
+	if (requiredsize > LZ4cs->outsize)
+	{
+		LZ4cs->outbuf = pg_realloc(LZ4cs->outbuf, requiredsize);
+		LZ4cs->outsize = requiredsize;
+	}
+
+	compressed = LZ4_compress_default(data, LZ4cs->outbuf,
+									  dLen, LZ4cs->outsize);
+
+	if (compressed <= 0)
+		pg_fatal("failed to LZ4 compress data");
+
+	cs->writeF(AH, LZ4cs->outbuf, compressed);
+}
+
+static void
+EndCompressorLZ4(ArchiveHandle *AH, CompressorState *cs)
+{
+	LZ4CompressorState *LZ4cs;
+
+	LZ4cs = (LZ4CompressorState *) cs->private_data;
+	if (LZ4cs)
+	{
+		pg_free(LZ4cs->outbuf);
+		pg_free(LZ4cs);
+		cs->private_data = NULL;
+	}
+}
+
+
+/* Public routines that support LZ4 compressed data I/O */
+void
+InitCompressorLZ4(CompressorState *cs, const pg_compress_specification compression_spec)
+{
+	cs->readData = ReadDataFromArchiveLZ4;
+	cs->writeData = WriteDataToArchiveLZ4;
+	cs->end = EndCompressorLZ4;
+
+	cs->compression_spec = compression_spec;
+
+	/* Will be lazy init'd */
+	cs->private_data = pg_malloc0(sizeof(LZ4CompressorState));
+}
+
+/*----------------------
+ * Compress File API
+ *----------------------
+ */
+
+/*
+ * State needed for LZ4 (de)compression using the CompressFileHandle API.
+ */
+typedef struct LZ4File
+{
+	FILE	   *fp;
+
+	LZ4F_preferences_t prefs;
+
+	LZ4F_compressionContext_t ctx;
+	LZ4F_decompressionContext_t dtx;
+
+	bool		inited;
+	bool		compressing;
+
+	size_t		buflen;
+	char	   *buffer;
+
+	size_t		overflowalloclen;
+	size_t		overflowlen;
+	char	   *overflowbuf;
+
+	size_t		errcode;
+}			LZ4File;
+
+/*
+ * LZ4 equivalent to feof() or gzeof(). The end of file
+ * is reached if there is no decompressed output in the
+ * overflow buffer and the end of the file is reached.
+ */
+static int
+LZ4File_eof(CompressFileHandle *CFH)
+{
+	LZ4File    *fs = (LZ4File *) CFH->private_data;
+
+	return fs->overflowlen == 0 && feof(fs->fp);
+}
+
+static const char *
+LZ4File_get_error(CompressFileHandle *CFH)
+{
+	LZ4File    *fs = (LZ4File *) CFH->private_data;
+	const char *errmsg;
+
+	if (LZ4F_isError(fs->errcode))
+		errmsg = LZ4F_getErrorName(fs->errcode);
+	else
+		errmsg = strerror(errno);
+
+	return errmsg;
+}
+
+/*
+ * Prepare an already alloc'ed LZ4File struct for subsequent calls.
+ *
+ * It creates the necessary contexts for the operations. When compressing,
+ * it additionally writes the LZ4 header in the output stream.
+ */
+static int
+LZ4File_init(LZ4File * fs, int size, bool compressing)
+{
+	size_t		status;
+
+	if (fs->inited)
+		return 0;
+
+	fs->compressing = compressing;
+	fs->inited = true;
+
+	if (fs->compressing)
+	{
+		fs->buflen = LZ4F_compressBound(LZ4_IN_SIZE, &fs->prefs);
+		if (fs->buflen < LZ4F_HEADER_SIZE_MAX)
+			fs->buflen = LZ4F_HEADER_SIZE_MAX;
+
+		status = LZ4F_createCompressionContext(&fs->ctx, LZ4F_VERSION);
+		if (LZ4F_isError(status))
+		{
+			fs->errcode = status;
+			return 1;
+		}
+
+		fs->buffer = pg_malloc(fs->buflen);
+		status = LZ4F_compressBegin(fs->ctx, fs->buffer, fs->buflen,
+									&fs->prefs);
+
+		if (LZ4F_isError(status))
+		{
+			fs->errcode = status;
+			return 1;
+		}
+
+		if (fwrite(fs->buffer, 1, status, fs->fp) != status)
+		{
+			errno = errno ? : ENOSPC;
+			return 1;
+		}
+	}
+	else
+	{
+		status = LZ4F_createDecompressionContext(&fs->dtx, LZ4F_VERSION);
+		if (LZ4F_isError(status))
+		{
+			fs->errcode = status;
+			return 1;
+		}
+
+		fs->buflen = size > LZ4_OUT_SIZE ? size : LZ4_OUT_SIZE;
+		fs->buffer = pg_malloc(fs->buflen);
+
+		fs->overflowalloclen = fs->buflen;
+		fs->overflowbuf = pg_malloc(fs->overflowalloclen);
+		fs->overflowlen = 0;
+	}
+
+	return 0;
+}
+
+/*
+ * Read already decompressed content from the overflow buffer into 'ptr' up to
+ * 'size' bytes, if available. If the eol_flag is set, then stop at the first
+ * occurrence of the newline char prior to 'size' bytes.
+ *
+ * Any unread content in the overflow buffer is moved to the beginning.
+ */
+static int
+LZ4File_read_overflow(LZ4File * fs, void *ptr, int size, bool eol_flag)
+{
+	char	   *p;
+	int			readlen = 0;
+
+	if (fs->overflowlen == 0)
+		return 0;
+
+	if (fs->overflowlen >= size)
+		readlen = size;
+	else
+		readlen = fs->overflowlen;
+
+	if (eol_flag && (p = memchr(fs->overflowbuf, '\n', readlen)))
+		/* Include the line terminating char */
+		readlen = p - fs->overflowbuf + 1;
+
+	memcpy(ptr, fs->overflowbuf, readlen);
+	fs->overflowlen -= readlen;
+
+	if (fs->overflowlen > 0)
+		memmove(fs->overflowbuf, fs->overflowbuf + readlen, fs->overflowlen);
+
+	return readlen;
+}
+
+/*
+ * The workhorse for reading decompressed content out of an LZ4 compressed
+ * stream.
+ *
+ * It will read up to 'ptrsize' decompressed content, or up to the new line char
+ * if found first when the eol_flag is set. It is possible that the decompressed
+ * output generated by reading any compressed input via the LZ4F API, exceeds
+ * 'ptrsize'. Any exceeding decompressed content is stored at an overflow
+ * buffer within LZ4File. Of course, when the function is called, it will first
+ * try to consume any decompressed content already present in the overflow
+ * buffer, before decompressing new content.
+ */
+static int
+LZ4File_read_internal(LZ4File * fs, void *ptr, int ptrsize, bool eol_flag)
+{
+	size_t		dsize = 0;
+	size_t		rsize;
+	size_t		size = ptrsize;
+	bool		eol_found = false;
+
+	void	   *readbuf;
+
+	/* Lazy init */
+	if (!fs->inited && LZ4File_init(fs, size, false /* decompressing */ ))
+		return -1;
+
+	/* Verify that there is enough space in the outbuf */
+	if (size > fs->buflen)
+	{
+		fs->buflen = size;
+		fs->buffer = pg_realloc(fs->buffer, size);
+	}
+
+	/* use already decompressed content if available */
+	dsize = LZ4File_read_overflow(fs, ptr, size, eol_flag);
+	if (dsize == size || (eol_flag && memchr(ptr, '\n', dsize)))
+		return dsize;
+
+	readbuf = pg_malloc(size);
+
+	do
+	{
+		char	   *rp;
+		char	   *rend;
+
+		rsize = fread(readbuf, 1, size, fs->fp);
+		if (rsize < size && !feof(fs->fp))
+			return -1;
+
+		rp = (char *) readbuf;
+		rend = (char *) readbuf + rsize;
+
+		while (rp < rend)
+		{
+			size_t		status;
+			size_t		outlen = fs->buflen;
+			size_t		read_remain = rend - rp;
+
+			memset(fs->buffer, 0, outlen);
+			status = LZ4F_decompress(fs->dtx, fs->buffer, &outlen,
+									 rp, &read_remain, NULL);
+			if (LZ4F_isError(status))
+			{
+				fs->errcode = status;
+				return -1;
+			}
+
+			rp += read_remain;
+
+			/*
+			 * fill in what space is available in ptr if the eol flag is set,
+			 * either skip if one already found or fill up to EOL if present
+			 * in the outbuf
+			 */
+			if (outlen > 0 && dsize < size && eol_found == false)
+			{
+				char	   *p;
+				size_t		lib = (eol_flag == 0) ? size - dsize : size - 1 - dsize;
+				size_t		len = outlen < lib ? outlen : lib;
+
+				if (eol_flag == true &&
+					(p = memchr(fs->buffer, '\n', outlen)) &&
+					(size_t) (p - fs->buffer + 1) <= len)
+				{
+					len = p - fs->buffer + 1;
+					eol_found = true;
+				}
+
+				memcpy((char *) ptr + dsize, fs->buffer, len);
+				dsize += len;
+
+				/* move what did not fit, if any, to the beginning of the buf */
+				if (len < outlen)
+					memmove(fs->buffer, fs->buffer + len, outlen - len);
+				outlen -= len;
+			}
+
+			/* if there is available output, save it */
+			if (outlen > 0)
+			{
+				while (fs->overflowlen + outlen > fs->overflowalloclen)
+				{
+					fs->overflowalloclen *= 2;
+					fs->overflowbuf = pg_realloc(fs->overflowbuf,
+												 fs->overflowalloclen);
+				}
+
+				memcpy(fs->overflowbuf + fs->overflowlen, fs->buffer, outlen);
+				fs->overflowlen += outlen;
+			}
+		}
+	} while (rsize == size && dsize < size && eol_found == 0);
+
+	pg_free(readbuf);
+
+	return (int) dsize;
+}
+
+/*
+ * Compress size bytes from ptr and write them to the stream.
+ */
+static size_t
+LZ4File_write(const void *ptr, size_t size, CompressFileHandle *CFH)
+{
+	LZ4File    *fs = (LZ4File *) CFH->private_data;
+	size_t		status;
+	int			remaining = size;
+
+	if (!fs->inited && LZ4File_init(fs, size, true))
+		return -1;
+
+	while (remaining > 0)
+	{
+		int			chunk = remaining < LZ4_IN_SIZE ? remaining : LZ4_IN_SIZE;
+
+		remaining -= chunk;
+
+		status = LZ4F_compressUpdate(fs->ctx, fs->buffer, fs->buflen,
+									 ptr, chunk, NULL);
+		if (LZ4F_isError(status))
+		{
+			fs->errcode = status;
+			return -1;
+		}
+
+		if (fwrite(fs->buffer, 1, status, fs->fp) != status)
+		{
+			errno = errno ? : ENOSPC;
+			return -1;
+		}
+	}
+
+	return size;
+}
+
+/*
+ * fread() equivalent implementation for LZ4 compressed files.
+ */
+static size_t
+LZ4File_read(void *ptr, size_t size, CompressFileHandle *CFH)
+{
+	LZ4File    *fs = (LZ4File *) CFH->private_data;
+	int			ret;
+
+	ret = LZ4File_read_internal(fs, ptr, size, false);
+	if (ret != size && !LZ4File_eof(CFH))
+		pg_fatal("could not read from input file: %s", LZ4File_get_error(CFH));
+
+	return ret;
+}
+
+/*
+ * fgetc() equivalent implementation for LZ4 compressed files.
+ */
+static int
+LZ4File_getc(CompressFileHandle *CFH)
+{
+	LZ4File    *fs = (LZ4File *) CFH->private_data;
+	unsigned char c;
+
+	if (LZ4File_read_internal(fs, &c, 1, false) != 1)
+	{
+		if (!LZ4File_eof(CFH))
+			pg_fatal("could not read from input file: %s", LZ4File_get_error(CFH));
+		else
+			pg_fatal("could not read from input file: end of file");
+	}
+
+	return c;
+}
+
+/*
+ * fgets() equivalent implementation for LZ4 compressed files.
+ */
+static char *
+LZ4File_gets(char *ptr, int size, CompressFileHandle *CFH)
+{
+	LZ4File    *fs = (LZ4File *) CFH->private_data;
+	int			dsize;
+
+	dsize = LZ4File_read_internal(fs, ptr, size, true);
+	if (dsize < 0)
+		pg_fatal("could not read from input file: %s", LZ4File_get_error(CFH));
+
+	/* Done reading */
+	if (dsize == 0)
+		return NULL;
+
+	return ptr;
+}
+
+/*
+ * Finalize (de)compression of a stream. When compressing it will write any
+ * remaining content and/or generated footer from the LZ4 API.
+ */
+static int
+LZ4File_close(CompressFileHandle *CFH)
+{
+	FILE	   *fp;
+	LZ4File    *fs = (LZ4File *) CFH->private_data;
+	size_t		status;
+	int			ret;
+
+	fp = fs->fp;
+	if (fs->inited)
+	{
+		if (fs->compressing)
+		{
+			status = LZ4F_compressEnd(fs->ctx, fs->buffer, fs->buflen, NULL);
+			if (LZ4F_isError(status))
+				pg_fatal("failed to end compression: %s",
+						 LZ4F_getErrorName(status));
+			else if ((ret = fwrite(fs->buffer, 1, status, fs->fp)) != status)
+			{
+				errno = errno ? : ENOSPC;
+				WRITE_ERROR_EXIT;
+			}
+
+			status = LZ4F_freeCompressionContext(fs->ctx);
+			if (LZ4F_isError(status))
+				pg_fatal("failed to end compression: %s",
+						 LZ4F_getErrorName(status));
+		}
+		else
+		{
+			status = LZ4F_freeDecompressionContext(fs->dtx);
+			if (LZ4F_isError(status))
+				pg_fatal("failed to end decompression: %s",
+						 LZ4F_getErrorName(status));
+			pg_free(fs->overflowbuf);
+		}
+
+		pg_free(fs->buffer);
+	}
+
+	pg_free(fs);
+
+	return fclose(fp);
+}
+
+static int
+LZ4File_open(const char *path, int fd, const char *mode,
+			 CompressFileHandle *CFH)
+{
+	FILE	   *fp;
+	LZ4File    *lz4fp = (LZ4File *) CFH->private_data;
+
+	if (fd >= 0)
+		fp = fdopen(fd, mode);
+	else
+		fp = fopen(path, mode);
+	if (fp == NULL)
+	{
+		lz4fp->errcode = errno;
+		return 1;
+	}
+
+	lz4fp->fp = fp;
+
+	return 0;
+}
+
+static int
+LZ4File_open_write(const char *path, const char *mode, CompressFileHandle *CFH)
+{
+	char	   *fname;
+	int			ret;
+
+	fname = psprintf("%s.lz4", path);
+	ret = CFH->open_func(fname, -1, mode, CFH);
+	pg_free(fname);
+
+	return ret;
+}
+
+void
+InitCompressLZ4(CompressFileHandle *CFH, const pg_compress_specification compression_spec)
+{
+	LZ4File    *lz4fp;
+
+	CFH->open_func = LZ4File_open;
+	CFH->open_write_func = LZ4File_open_write;
+	CFH->read_func = LZ4File_read;
+	CFH->write_func = LZ4File_write;
+	CFH->gets_func = LZ4File_gets;
+	CFH->getc_func = LZ4File_getc;
+	CFH->eof_func = LZ4File_eof;
+	CFH->close_func = LZ4File_close;
+	CFH->get_error_func = LZ4File_get_error;
+
+	CFH->compression_spec = compression_spec;
+	lz4fp = pg_malloc0(sizeof(*lz4fp));
+	if (CFH->compression_spec.level >= 0)
+		lz4fp->prefs.compressionLevel = CFH->compression_spec.level;
+
+	CFH->private_data = lz4fp;
+}
+#else							/* USE_LZ4 */
+void
+InitCompressorLZ4(CompressorState *cs, const pg_compress_specification compression_spec)
+{
+	pg_fatal("this build does not support compression with %s", "LZ4");
+}
+
+void
+InitCompressLZ4(CompressFileHandle *CFH, const pg_compress_specification compression_spec)
+{
+	pg_fatal("this build does not support compression with %s", "LZ4");
+}
+#endif							/* USE_LZ4 */
diff --git a/src/bin/pg_dump/compress_lz4.h b/src/bin/pg_dump/compress_lz4.h
new file mode 100644
index 0000000000..74595db1b9
--- /dev/null
+++ b/src/bin/pg_dump/compress_lz4.h
@@ -0,0 +1,22 @@
+/*-------------------------------------------------------------------------
+ *
+ * compress_lz4.h
+ *	 LZ4 interface to compress_io.c routines
+ *
+ * Portions Copyright (c) 1996-2022, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	   src/bin/pg_dump/compress_lz4.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _COMPRESS_LZ4_H_
+#define _COMPRESS_LZ4_H_
+
+#include "compress_io.h"
+
+extern void InitCompressorLZ4(CompressorState *cs, const pg_compress_specification compression_spec);
+extern void InitCompressLZ4(CompressFileHandle *CFH, const pg_compress_specification compression_spec);
+
+#endif							/* _COMPRESS_LZ4_H_ */
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index aa2c91829c..473d40d456 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -3,6 +3,7 @@
 pg_dump_common_sources = files(
   'compress_io.c',
   'compress_gzip.c',
+  'compress_lz4.c',
   'dumputils.c',
   'parallel.c',
   'pg_backup_archiver.c',
@@ -17,7 +18,7 @@ pg_dump_common_sources = files(
 pg_dump_common = static_library('libpgdump_common',
   pg_dump_common_sources,
   c_pch: pch_postgres_fe_h,
-  dependencies: [frontend_code, libpq, zlib],
+  dependencies: [frontend_code, libpq, lz4, zlib],
   kwargs: internal_lib_args,
 )
 
@@ -85,7 +86,10 @@ tests += {
   'sd': meson.current_source_dir(),
   'bd': meson.current_build_dir(),
   'tap': {
-    'env': {'GZIP_PROGRAM': gzip.path()},
+    'env': {
+      'GZIP_PROGRAM': gzip.path(),
+      'LZ4': program_lz4.found() ? program_lz4.path() : '',
+    },
     'tests': [
       't/001_basic.pl',
       't/002_pg_dump.pl',
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index cc3b7f0992..7005ffcd5b 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -395,6 +395,10 @@ RestoreArchive(Archive *AHX)
 #ifndef HAVE_LIBZ
 				if (AH->compression_spec.algorithm == PG_COMPRESSION_GZIP)
 					supports_compression = false;
+#endif
+#ifndef USE_LZ4
+				if (AH->compression_spec.algorithm == PG_COMPRESSION_LZ4)
+					supports_compression = false;
 #endif
 				if (supports_compression == false)
 					pg_fatal("cannot restore from compressed archive (compression not supported in this installation)");
@@ -2074,7 +2078,7 @@ _discoverArchiveFormat(ArchiveHandle *AH)
 
 		/*
 		 * Check if the specified archive is a directory. If so, check if
-		 * there's a "toc.dat" (or "toc.dat.gz") file in it.
+		 * there's a "toc.dat" (or "toc.dat.{gz,lz4}") file in it.
 		 */
 		if (stat(AH->fSpec, &st) == 0 && S_ISDIR(st.st_mode))
 		{
@@ -2084,6 +2088,10 @@ _discoverArchiveFormat(ArchiveHandle *AH)
 #ifdef HAVE_LIBZ
 			if (_fileExistsInDirectory(AH->fSpec, "toc.dat.gz"))
 				return AH->format;
+#endif
+#ifdef USE_LZ4
+			if (_fileExistsInDirectory(AH->fSpec, "toc.dat.lz4"))
+				return AH->format;
 #endif
 			pg_fatal("directory \"%s\" does not appear to be a valid archive (\"toc.dat\" does not exist)",
 					 AH->fSpec);
@@ -3747,6 +3755,10 @@ ReadHead(ArchiveHandle *AH)
 #ifndef HAVE_LIBZ
 		if (AH->compression_spec.algorithm == PG_COMPRESSION_GZIP)
 			unsupported = true;
+#endif
+#ifndef USE_LZ4
+		if (AH->compression_spec.algorithm == PG_COMPRESSION_LZ4)
+			unsupported = true;
 #endif
 		if (unsupported)
 			pg_fatal("archive is compressed, but this installation does not support compression");
diff --git a/src/bin/pg_dump/pg_dump.c b/src/bin/pg_dump/pg_dump.c
index 527c7651ab..08105337b1 100644
--- a/src/bin/pg_dump/pg_dump.c
+++ b/src/bin/pg_dump/pg_dump.c
@@ -715,13 +715,12 @@ main(int argc, char **argv)
 		case PG_COMPRESSION_NONE:
 			/* fallthrough */
 		case PG_COMPRESSION_GZIP:
+			/* fallthrough */
+		case PG_COMPRESSION_LZ4:
 			break;
 		case PG_COMPRESSION_ZSTD:
 			pg_fatal("compression with %s is not yet supported", "ZSTD");
 			break;
-		case PG_COMPRESSION_LZ4:
-			pg_fatal("compression with %s is not yet supported", "LZ4");
-			break;
 	}
 
 	/*
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index f3ba926321..f497ec6040 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -139,6 +139,80 @@ my %pgdump_runs = (
 			args    => [ '-d', "$tempdir/compression_gzip_plain.sql.gz", ],
 		},
 	},
+
+	# Do not use --no-sync to give test coverage for data sync.
+	compression_lz4_custom => {
+		test_key       => 'compression',
+		compile_option => 'lz4',
+		dump_cmd       => [
+			'pg_dump',      '--format=custom',
+			'--compress=lz4', "--file=$tempdir/compression_lz4_custom.dump",
+			'postgres',
+		],
+		restore_cmd => [
+			'pg_restore',
+			"--file=$tempdir/compression_lz4_custom.sql",
+			"$tempdir/compression_lz4_custom.dump",
+		],
+		command_like => {
+			command => [
+				'pg_restore',
+				'-l', "$tempdir/compression_lz4_custom.dump",
+			],
+			expected => qr/Compression: lz4/,
+			name => 'data content is lz4 compressed'
+		},
+	},
+
+	# Do not use --no-sync to give test coverage for data sync.
+	compression_lz4_dir => {
+		test_key       => 'compression',
+		compile_option => 'lz4',
+		dump_cmd       => [
+			'pg_dump',                              '--jobs=2',
+			'--format=directory',                   '--compress=lz4:1',
+			"--file=$tempdir/compression_lz4_dir", 'postgres',
+		],
+		# Give coverage for manually compressed blobs.toc files during
+		# restore.
+		compress_cmd => {
+			program => $ENV{'LZ4'},
+			args    => [
+				'-z', '-f', '--rm',
+				"$tempdir/compression_lz4_dir/blobs.toc",
+				"$tempdir/compression_lz4_dir/blobs.toc.lz4",
+			],
+		},
+		# Verify that data files were compressed
+		glob_patterns => [
+			"$tempdir/compression_lz4_dir/toc.dat",
+			"$tempdir/compression_lz4_dir/*.dat.lz4",
+		],
+		restore_cmd => [
+			'pg_restore', '--jobs=2',
+			"--file=$tempdir/compression_lz4_dir.sql",
+			"$tempdir/compression_lz4_dir",
+		],
+	},
+
+	compression_lz4_plain => {
+		test_key       => 'compression',
+		compile_option => 'lz4',
+		dump_cmd       => [
+			'pg_dump', '--format=plain', '--compress=lz4',
+			"--file=$tempdir/compression_lz4_plain.sql.lz4", 'postgres',
+		],
+		# Decompress the generated file to run through the tests.
+		compress_cmd => {
+			program => $ENV{'LZ4'},
+			args    => [
+				'-d', '-f',
+				"$tempdir/compression_lz4_plain.sql.lz4",
+				"$tempdir/compression_lz4_plain.sql",
+			],
+		},
+	},
+
 	clean => {
 		dump_cmd => [
 			'pg_dump',
@@ -4175,11 +4249,11 @@ foreach my $run (sort keys %pgdump_runs)
 	my $run_db   = 'postgres';
 
 	# Skip command-level tests for gzip if there is no support for it.
-	if (   defined($pgdump_runs{$run}->{compile_option})
-		&& $pgdump_runs{$run}->{compile_option} eq 'gzip'
-		&& !$supports_gzip)
+	if (defined($pgdump_runs{$run}->{compile_option})
+		&& (($pgdump_runs{$run}->{compile_option} eq 'gzip' && !$supports_gzip)
+			|| ($pgdump_runs{$run}->{compile_option} eq 'lz4' && !$supports_lz4)))
 	{
-		note "$run: skipped due to no gzip support";
+		note "$run: skipped due to no $pgdump_runs{$run}->{compile_option} support";
 		next;
 	}
 
diff --git a/src/tools/pginclude/cpluspluscheck b/src/tools/pginclude/cpluspluscheck
index da8e4861f4..1f7f817e4d 100755
--- a/src/tools/pginclude/cpluspluscheck
+++ b/src/tools/pginclude/cpluspluscheck
@@ -152,6 +152,7 @@ do
 	# as field names, which is unfortunate but we won't change it now.
 	test "$f" = src/bin/pg_dump/compress_gzip.h && continue
 	test "$f" = src/bin/pg_dump/compress_io.h && continue
+	test "$f" = src/bin/pg_dump/compress_lz4.h && continue
 	test "$f" = src/bin/pg_dump/parallel.h && continue
 	test "$f" = src/bin/pg_dump/pg_backup_archiver.h && continue
 	test "$f" = src/bin/pg_dump/pg_dump.h && continue
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 840191d680..232228d427 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -1381,6 +1381,7 @@ LWLock
 LWLockHandle
 LWLockMode
 LWLockPadded
+LZ4CompressorState
 LZ4F_compressionContext_t
 LZ4F_decompressOptions_t
 LZ4F_decompressionContext_t
-- 
2.34.1

From 1fc195f9c37a8f175b7f2843ccd46953f6b19b0f Mon Sep 17 00:00:00 2001
From: Georgios Kokolatos <gkokola...@pm.me>
Date: Wed, 18 Jan 2023 18:20:43 +0000
Subject: [PATCH v22 1/2] Introduce Compressor API in pg_dump and use it
 throughout

The purpose of this API is to allow for easier addition of new compression
methods. CompressFileHandle replaces the cfp* family of functions with a
struct of function pointers for opening, writing, etc. The implementor of a
new compression method now only needs to supply those definitions.

Custom compressed archives now need to store the compression algorithm in their
header. This requires a bump in the version number. The level of compression
is no longer stored in the dump as it is now irrelevant.

Commit bf9aa490db introduced cfp in compress_io.{c,h} with the intent of
unifying compression-related code and allowing for the introduction of
additional archive formats. However, pg_backup_archiver.c was not using that
API. This commit teaches pg_backup_archiver.c about it.
---
 src/bin/pg_dump/Makefile              |   1 +
 src/bin/pg_dump/compress_gzip.c       | 406 +++++++++++++
 src/bin/pg_dump/compress_gzip.h       |  22 +
 src/bin/pg_dump/compress_io.c         | 815 +++++++++-----------------
 src/bin/pg_dump/compress_io.h         |  70 ++-
 src/bin/pg_dump/meson.build           |   1 +
 src/bin/pg_dump/pg_backup_archiver.c  | 198 +++----
 src/bin/pg_dump/pg_backup_archiver.h  |  32 +-
 src/bin/pg_dump/pg_backup_custom.c    |  23 +-
 src/bin/pg_dump/pg_backup_directory.c |  94 +--
 src/bin/pg_dump/t/002_pg_dump.pl      |  10 +-
 src/tools/pginclude/cpluspluscheck    |   1 +
 src/tools/pgindent/typedefs.list      |   2 +
 13 files changed, 916 insertions(+), 759 deletions(-)
 create mode 100644 src/bin/pg_dump/compress_gzip.c
 create mode 100644 src/bin/pg_dump/compress_gzip.h

diff --git a/src/bin/pg_dump/Makefile b/src/bin/pg_dump/Makefile
index ef1ed0f3e5..7a19f5d617 100644
--- a/src/bin/pg_dump/Makefile
+++ b/src/bin/pg_dump/Makefile
@@ -24,6 +24,7 @@ LDFLAGS_INTERNAL += -L$(top_builddir)/src/fe_utils -lpgfeutils $(libpq_pgport)
 
 OBJS = \
 	$(WIN32RES) \
+	compress_gzip.o \
 	compress_io.o \
 	dumputils.o \
 	parallel.o \
diff --git a/src/bin/pg_dump/compress_gzip.c b/src/bin/pg_dump/compress_gzip.c
new file mode 100644
index 0000000000..f365b93b76
--- /dev/null
+++ b/src/bin/pg_dump/compress_gzip.c
@@ -0,0 +1,406 @@
+/*-------------------------------------------------------------------------
+ *
+ * compress_gzip.c
+ *	 Routines for archivers to read or write a gzip compressed data
+ *	 stream.
+ *
+ * Portions Copyright (c) 1996-2022, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	   src/bin/pg_dump/compress_gzip.c
+ *
+ *-------------------------------------------------------------------------
+ */
+#include "postgres_fe.h"
+#include <unistd.h>
+
+#include "compress_gzip.h"
+#include "pg_backup_utils.h"
+
+#ifdef HAVE_LIBZ
+#include "zlib.h"
+
+/*----------------------
+ * Compressor API
+ *----------------------
+ */
+typedef struct GzipCompressorState
+{
+	z_streamp	zp;
+
+	void	   *outbuf;
+	size_t		outsize;
+} GzipCompressorState;
+
+/* Private routines that support gzip compressed data I/O */
+static void
+DeflateCompressorGzip(ArchiveHandle *AH, CompressorState *cs, bool flush)
+{
+	GzipCompressorState *gzipcs = (GzipCompressorState *) cs->private_data;
+	z_streamp	zp = gzipcs->zp;
+	void	   *out = gzipcs->outbuf;
+	int			res = Z_OK;
+
+	while (gzipcs->zp->avail_in != 0 || flush)
+	{
+		res = deflate(zp, flush ? Z_FINISH : Z_NO_FLUSH);
+		if (res == Z_STREAM_ERROR)
+			pg_fatal("could not compress data: %s", zp->msg);
+		if ((flush && (zp->avail_out < gzipcs->outsize))
+			|| (zp->avail_out == 0)
+			|| (zp->avail_in != 0)
+			)
+		{
+			/*
+			 * Extra paranoia: avoid zero-length chunks, since a zero length
+			 * chunk is the EOF marker in the custom format. This should never
+			 * happen but...
+			 */
+			if (zp->avail_out < gzipcs->outsize)
+			{
+				/*
+				 * Any write function should do its own error checking but to
+				 * make sure we do a check here as well...
+				 */
+				size_t		len = gzipcs->outsize - zp->avail_out;
+
+				cs->writeF(AH, (char *) out, len);
+			}
+			zp->next_out = out;
+			zp->avail_out = gzipcs->outsize;
+		}
+
+		if (res == Z_STREAM_END)
+			break;
+	}
+}
+
+static void
+EndCompressorGzip(ArchiveHandle *AH, CompressorState *cs)
+{
+	GzipCompressorState *gzipcs = (GzipCompressorState *) cs->private_data;
+	z_streamp	zp;
+
+	if (gzipcs->zp)
+	{
+		zp = gzipcs->zp;
+		zp->next_in = NULL;
+		zp->avail_in = 0;
+
+		/* Flush any remaining data from zlib buffer */
+		DeflateCompressorGzip(AH, cs, true);
+
+		if (deflateEnd(zp) != Z_OK)
+			pg_fatal("could not close compression stream: %s", zp->msg);
+
+		pg_free(gzipcs->outbuf);
+		pg_free(gzipcs->zp);
+	}
+
+	pg_free(gzipcs);
+	cs->private_data = NULL;
+}
+
+static void
+WriteDataToArchiveGzip(ArchiveHandle *AH, CompressorState *cs,
+					   const void *data, size_t dLen)
+{
+	GzipCompressorState *gzipcs = (GzipCompressorState *) cs->private_data;
+	z_streamp	zp;
+
+	if (!gzipcs->zp)
+	{
+		zp = gzipcs->zp = (z_streamp) pg_malloc(sizeof(z_stream));
+		zp->zalloc = Z_NULL;
+		zp->zfree = Z_NULL;
+		zp->opaque = Z_NULL;
+
+		/*
+		 * outsize is the buffer size we tell zlib it can output to.  We
+		 * actually allocate one extra byte because some routines want to
+		 * append a trailing zero byte to the zlib output.
+		 */
+		gzipcs->outbuf = pg_malloc(ZLIB_OUT_SIZE + 1);
+		gzipcs->outsize = ZLIB_OUT_SIZE;
+
+		/*
+		 * A level of zero simply copies the input one block at a time.
+		 * This is probably not what the user wanted when calling this
+		 * interface.
+		 */
+		if (cs->compression_spec.level == 0)
+			pg_fatal("requested to compress the archive yet no level was specified");
+
+		if (deflateInit(zp, cs->compression_spec.level) != Z_OK)
+			pg_fatal("could not initialize compression library: %s", zp->msg);
+
+		/* Just be paranoid - maybe End is called after Start, with no Write */
+		zp->next_out = gzipcs->outbuf;
+		zp->avail_out = gzipcs->outsize;
+	}
+
+	gzipcs->zp->next_in = (void *) unconstify(void *, data);
+	gzipcs->zp->avail_in = dLen;
+	DeflateCompressorGzip(AH, cs, false);
+}
+
+static void
+ReadDataFromArchiveGzip(ArchiveHandle *AH, CompressorState *cs)
+{
+	z_streamp	zp;
+	char	   *out;
+	int			res = Z_OK;
+	size_t		cnt;
+	char	   *buf;
+	size_t		buflen;
+
+	zp = (z_streamp) pg_malloc(sizeof(z_stream));
+	zp->zalloc = Z_NULL;
+	zp->zfree = Z_NULL;
+	zp->opaque = Z_NULL;
+
+	buf = pg_malloc(ZLIB_IN_SIZE);
+	buflen = ZLIB_IN_SIZE;
+
+	out = pg_malloc(ZLIB_OUT_SIZE + 1);
+
+	if (inflateInit(zp) != Z_OK)
+		pg_fatal("could not initialize compression library: %s",
+				 zp->msg);
+
+	/* no minimal chunk size for zlib */
+	while ((cnt = cs->readF(AH, &buf, &buflen)))
+	{
+		zp->next_in = (void *) buf;
+		zp->avail_in = cnt;
+
+		while (zp->avail_in > 0)
+		{
+			zp->next_out = (void *) out;
+			zp->avail_out = ZLIB_OUT_SIZE;
+
+			res = inflate(zp, 0);
+			if (res != Z_OK && res != Z_STREAM_END)
+				pg_fatal("could not uncompress data: %s", zp->msg);
+
+			out[ZLIB_OUT_SIZE - zp->avail_out] = '\0';
+			ahwrite(out, 1, ZLIB_OUT_SIZE - zp->avail_out, AH);
+		}
+	}
+
+	zp->next_in = NULL;
+	zp->avail_in = 0;
+	while (res != Z_STREAM_END)
+	{
+		zp->next_out = (void *) out;
+		zp->avail_out = ZLIB_OUT_SIZE;
+		res = inflate(zp, 0);
+		if (res != Z_OK && res != Z_STREAM_END)
+			pg_fatal("could not uncompress data: %s", zp->msg);
+
+		out[ZLIB_OUT_SIZE - zp->avail_out] = '\0';
+		ahwrite(out, 1, ZLIB_OUT_SIZE - zp->avail_out, AH);
+	}
+
+	if (inflateEnd(zp) != Z_OK)
+		pg_fatal("could not close compression library: %s", zp->msg);
+
+	free(buf);
+	free(out);
+	free(zp);
+}
+
+/* Public routines that support gzip compressed data I/O */
+void
+InitCompressorGzip(CompressorState *cs, const pg_compress_specification compression_spec)
+{
+	GzipCompressorState *gzipcs;
+
+	cs->readData = ReadDataFromArchiveGzip;
+	cs->writeData = WriteDataToArchiveGzip;
+	cs->end = EndCompressorGzip;
+
+	cs->compression_spec = compression_spec;
+
+	gzipcs = (GzipCompressorState *) pg_malloc0(sizeof(GzipCompressorState));
+
+	cs->private_data = gzipcs;
+}
+
+
+/*----------------------
+ * Compress File API
+ *----------------------
+ */
+
+static size_t
+Gzip_read(void *ptr, size_t size, CompressFileHandle *CFH)
+{
+	gzFile		gzfp = (gzFile) CFH->private_data;
+	size_t		ret;
+
+	ret = gzread(gzfp, ptr, size);
+	if (ret != size && !gzeof(gzfp))
+	{
+		int			errnum;
+		const char *errmsg = gzerror(gzfp, &errnum);
+
+		pg_fatal("could not read from input file: %s",
+				 errnum == Z_ERRNO ? strerror(errno) : errmsg);
+	}
+
+	return ret;
+}
+
+static size_t
+Gzip_write(const void *ptr, size_t size, CompressFileHandle *CFH)
+{
+	gzFile		gzfp = (gzFile) CFH->private_data;
+
+	return gzwrite(gzfp, ptr, size);
+}
+
+static int
+Gzip_getc(CompressFileHandle *CFH)
+{
+	gzFile		gzfp = (gzFile) CFH->private_data;
+	int			ret;
+
+	errno = 0;
+	ret = gzgetc(gzfp);
+	if (ret == EOF)
+	{
+		if (!gzeof(gzfp))
+			pg_fatal("could not read from input file: %s", strerror(errno));
+		else
+			pg_fatal("could not read from input file: end of file");
+	}
+
+	return ret;
+}
+
+static char *
+Gzip_gets(char *ptr, int size, CompressFileHandle *CFH)
+{
+	gzFile		gzfp = (gzFile) CFH->private_data;
+
+	return gzgets(gzfp, ptr, size);
+}
+
+static int
+Gzip_close(CompressFileHandle *CFH)
+{
+	gzFile		gzfp = (gzFile) CFH->private_data;
+
+	CFH->private_data = NULL;
+
+	return gzclose(gzfp);
+}
+
+static int
+Gzip_eof(CompressFileHandle *CFH)
+{
+	gzFile		gzfp = (gzFile) CFH->private_data;
+
+	return gzeof(gzfp);
+}
+
+static const char *
+Gzip_get_error(CompressFileHandle *CFH)
+{
+	gzFile		gzfp = (gzFile) CFH->private_data;
+	const char *errmsg;
+	int			errnum;
+
+	errmsg = gzerror(gzfp, &errnum);
+	if (errnum == Z_ERRNO)
+		errmsg = strerror(errno);
+
+	return errmsg;
+}
+
+static int
+Gzip_open(const char *path, int fd, const char *mode, CompressFileHandle *CFH)
+{
+	gzFile		gzfp;
+	char		mode_compression[32];
+
+	if (CFH->compression_spec.level != Z_DEFAULT_COMPRESSION)
+	{
+		/*
+		 * user has specified a compression level, so tell zlib to use it
+		 */
+		snprintf(mode_compression, sizeof(mode_compression), "%s%d",
+				 mode, CFH->compression_spec.level);
+	}
+	else
+		strcpy(mode_compression, mode);
+
+	if (fd >= 0)
+		gzfp = gzdopen(dup(fd), mode_compression);
+	else
+		gzfp = gzopen(path, mode_compression);
+
+	if (gzfp == NULL)
+		return 1;
+
+	CFH->private_data = gzfp;
+
+	return 0;
+}
+
+static int
+Gzip_open_write(const char *path, const char *mode, CompressFileHandle *CFH)
+{
+	char	   *fname;
+	int			ret;
+	int			save_errno;
+
+	fname = psprintf("%s.gz", path);
+	ret = CFH->open_func(fname, -1, mode, CFH);
+
+	save_errno = errno;
+	pg_free(fname);
+	errno = save_errno;
+
+	return ret;
+}
+
+void
+InitCompressGzip(CompressFileHandle *CFH, const pg_compress_specification compression_spec)
+{
+	CFH->open_func = Gzip_open;
+	CFH->open_write_func = Gzip_open_write;
+	CFH->read_func = Gzip_read;
+	CFH->write_func = Gzip_write;
+	CFH->gets_func = Gzip_gets;
+	CFH->getc_func = Gzip_getc;
+	CFH->close_func = Gzip_close;
+	CFH->eof_func = Gzip_eof;
+	CFH->get_error_func = Gzip_get_error;
+
+	CFH->compression_spec = compression_spec;
+
+	CFH->private_data = NULL;
+}
+#else							/* HAVE_LIBZ */
+void
+InitCompressorGzip(CompressorState *cs, const pg_compress_specification compression_spec)
+{
+	pg_fatal("this build does not support compression with %s", "gzip");
+}
+
+void
+InitCompressGzip(CompressFileHandle *CFH, const pg_compress_specification compression_spec)
+{
+	pg_fatal("this build does not support compression with %s", "gzip");
+}
+#endif							/* HAVE_LIBZ */
diff --git a/src/bin/pg_dump/compress_gzip.h b/src/bin/pg_dump/compress_gzip.h
new file mode 100644
index 0000000000..6dfd0eb04d
--- /dev/null
+++ b/src/bin/pg_dump/compress_gzip.h
@@ -0,0 +1,22 @@
+/*-------------------------------------------------------------------------
+ *
+ * compress_gzip.h
+ *	 Interface to compress_gzip.c routines
+ *
+ * Portions Copyright (c) 1996-2022, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ * IDENTIFICATION
+ *	   src/bin/pg_dump/compress_gzip.h
+ *
+ *-------------------------------------------------------------------------
+ */
+#ifndef _COMPRESS_GZIP_H_
+#define _COMPRESS_GZIP_H_
+
+#include "compress_io.h"
+
+extern void InitCompressorGzip(CompressorState *cs, const pg_compress_specification compression_spec);
+extern void InitCompressGzip(CompressFileHandle *CFH, const pg_compress_specification compression_spec);
+
+#endif							/* _COMPRESS_GZIP_H_ */
diff --git a/src/bin/pg_dump/compress_io.c b/src/bin/pg_dump/compress_io.c
index 7a2c80bbc4..d60111b2b8 100644
--- a/src/bin/pg_dump/compress_io.c
+++ b/src/bin/pg_dump/compress_io.c
@@ -9,42 +9,54 @@
  *
  * This file includes two APIs for dealing with compressed data. The first
  * provides more flexibility, using callbacks to read/write data from the
- * underlying stream. The second API is a wrapper around fopen/gzopen and
+ * underlying stream. The second API is a wrapper around fopen and
  * friends, providing an interface similar to those, but abstracts away
- * the possible compression. Both APIs use libz for the compression, but
- * the second API uses gzip headers, so the resulting files can be easily
- * manipulated with the gzip utility.
+ * the possible compression. The second API also aims to produce files
+ * that can be easily manipulated with an external compression utility
+ * program.
+ *
+ * This file also includes the implementation for the case of no
+ * compression, for both APIs.
  *
  * Compressor API
  * --------------
  *
  *	The interface for writing to an archive consists of three functions:
- *	AllocateCompressor, WriteDataToArchive and EndCompressor. First you call
- *	AllocateCompressor, then write all the data by calling WriteDataToArchive
- *	as many times as needed, and finally EndCompressor. WriteDataToArchive
- *	and EndCompressor will call the WriteFunc that was provided to
- *	AllocateCompressor for each chunk of compressed data.
- *
- *	The interface for reading an archive consists of just one function:
- *	ReadDataFromArchive. ReadDataFromArchive reads the whole compressed input
- *	stream, by repeatedly calling the given ReadFunc. ReadFunc returns the
- *	compressed data chunk at a time, and ReadDataFromArchive decompresses it
- *	and passes the decompressed data to ahwrite(), until ReadFunc returns 0
- *	to signal EOF.
+ *	AllocateCompressor, writeData, and EndCompressor. First you call
+ *	AllocateCompressor, then write all the data by calling writeData as many
+ *	times as needed, and finally EndCompressor. writeData will call the
+ *	WriteFunc that was provided to AllocateCompressor for each chunk of
+ *	compressed data.
  *
- *	The interface is the same for compressed and uncompressed streams.
+ *	The interface for reading an archive consists of the same three functions:
+ *	AllocateCompressor, readData, and EndCompressor. First you call
+ *	AllocateCompressor, then read the whole compressed stream by calling
+ *	readData, which repeatedly invokes the given ReadFunc. ReadFunc returns
+ *	the compressed data one chunk at a time, and readData decompresses it and
+ *	passes the decompressed data to ahwrite(), until ReadFunc returns 0 to
+ *	signal EOF. The interface is the same for compressed and uncompressed
+ *	streams.
  *
  * Compressed stream API
  * ----------------------
  *
- *	The compressed stream API is a wrapper around the C standard fopen() and
- *	libz's gzopen() APIs. It allows you to use the same functions for
- *	compressed and uncompressed streams. cfopen_read() first tries to open
- *	the file with given name, and if it fails, it tries to open the same
- *	file with the .gz suffix. cfopen_write() opens a file for writing, an
- *	extra argument specifies if the file should be compressed, and adds the
- *	.gz suffix to the filename if so. This allows you to easily handle both
- *	compressed and uncompressed files.
+ *	The compressed stream API provides a set of function pointers for
+ *	opening, reading, writing, and finally closing files. The implemented
+ *	function pointers are documented in the corresponding header file and
+ *	are common for all streams. This allows the caller to use the same
+ *	functions for both compressed and uncompressed streams.
+ *
+ *	The interface consists of three functions: InitCompressFileHandle,
+ *	InitDiscoverCompressFileHandle, and EndCompressFileHandle. If the
+ *	compression is known, start by calling InitCompressFileHandle;
+ *	otherwise discover it by using InitDiscoverCompressFileHandle. Then call
+ *	the function pointers as required for the read/write operations.
+ *	Finally, call EndCompressFileHandle to end the stream.
+ *
+ *	InitDiscoverCompressFileHandle tries to infer the compression from the
+ *	filename suffix. If the suffix is not yet known, it tries to simply
+ *	open the file and, if that fails, tries to open the same file with the
+ *	.gz suffix.
  *
  * IDENTIFICATION
  *	   src/bin/pg_dump/compress_io.c
@@ -53,7 +65,11 @@
  */
 #include "postgres_fe.h"
 
+#include <sys/stat.h>
+#include <unistd.h>
+
 #include "compress_io.h"
+#include "compress_gzip.h"
 #include "pg_backup_utils.h"
 
 /*----------------------
@@ -61,110 +77,82 @@
  *----------------------
  */
 
-/* typedef appears in compress_io.h */
-struct CompressorState
+/* Private routines */
+
+static void
+ReadDataFromArchiveNone(ArchiveHandle *AH, CompressorState *cs)
 {
-	pg_compress_specification compression_spec;
-	WriteFunc	writeF;
+	size_t		cnt;
+	char	   *buf;
+	size_t		buflen;
 
-#ifdef HAVE_LIBZ
-	z_streamp	zp;
-	char	   *zlibOut;
-	size_t		zlibOutSize;
-#endif
-};
+	buf = pg_malloc(ZLIB_OUT_SIZE);
+	buflen = ZLIB_OUT_SIZE;
 
-/* Routines that support zlib compressed data I/O */
-#ifdef HAVE_LIBZ
-static void InitCompressorZlib(CompressorState *cs, int level);
-static void DeflateCompressorZlib(ArchiveHandle *AH, CompressorState *cs,
-								  bool flush);
-static void ReadDataFromArchiveZlib(ArchiveHandle *AH, ReadFunc readF);
-static void WriteDataToArchiveZlib(ArchiveHandle *AH, CompressorState *cs,
-								   const char *data, size_t dLen);
-static void EndCompressorZlib(ArchiveHandle *AH, CompressorState *cs);
-#endif
+	while ((cnt = cs->readF(AH, &buf, &buflen)))
+	{
+		ahwrite(buf, 1, cnt, AH);
+	}
 
-/* Routines that support uncompressed data I/O */
-static void ReadDataFromArchiveNone(ArchiveHandle *AH, ReadFunc readF);
-static void WriteDataToArchiveNone(ArchiveHandle *AH, CompressorState *cs,
-								   const char *data, size_t dLen);
+	free(buf);
+}
 
-/* Public interface routines */
+/* Compressor API None implementation */
 
-/* Allocate a new compressor */
-CompressorState *
-AllocateCompressor(const pg_compress_specification compression_spec,
-				   WriteFunc writeF)
+static void
+WriteDataToArchiveNone(ArchiveHandle *AH, CompressorState *cs,
+					   const void *data, size_t dLen)
 {
-	CompressorState *cs;
-
-#ifndef HAVE_LIBZ
-	if (compression_spec.algorithm == PG_COMPRESSION_GZIP)
-		pg_fatal("this build does not support compression with %s", "gzip");
-#endif
-
-	cs = (CompressorState *) pg_malloc0(sizeof(CompressorState));
-	cs->writeF = writeF;
-	cs->compression_spec = compression_spec;
-
-	/*
-	 * Perform compression algorithm specific initialization.
-	 */
-#ifdef HAVE_LIBZ
-	if (cs->compression_spec.algorithm == PG_COMPRESSION_GZIP)
-		InitCompressorZlib(cs, cs->compression_spec.level);
-#endif
+	cs->writeF(AH, data, dLen);
+}
 
-	return cs;
+static void
+EndCompressorNone(ArchiveHandle *AH, CompressorState *cs)
+{
+	/* no op */
 }
 
-/*
- * Read all compressed data from the input stream (via readF) and print it
- * out with ahwrite().
- */
-void
-ReadDataFromArchive(ArchiveHandle *AH,
-					const pg_compress_specification compression_spec,
-					ReadFunc readF)
+static void
+InitCompressorNone(CompressorState *cs,
+				   const pg_compress_specification compression_spec)
 {
-	if (compression_spec.algorithm == PG_COMPRESSION_NONE)
-		ReadDataFromArchiveNone(AH, readF);
-	if (compression_spec.algorithm == PG_COMPRESSION_GZIP)
-	{
-#ifdef HAVE_LIBZ
-		ReadDataFromArchiveZlib(AH, readF);
-#else
-		pg_fatal("this build does not support compression with %s", "gzip");
-#endif
-	}
+	cs->readData = ReadDataFromArchiveNone;
+	cs->writeData = WriteDataToArchiveNone;
+	cs->end = EndCompressorNone;
+
+	cs->compression_spec = compression_spec;
 }
 
+/* Public interface routines */
+
 /*
- * Compress and write data to the output stream (via writeF).
+ * Allocate a new compressor.
  */
-void
-WriteDataToArchive(ArchiveHandle *AH, CompressorState *cs,
-				   const void *data, size_t dLen)
+CompressorState *
+AllocateCompressor(const pg_compress_specification compression_spec,
+				   ReadFunc readF, WriteFunc writeF)
 {
-	switch (cs->compression_spec.algorithm)
+	CompressorState *cs;
+
+	cs = (CompressorState *) pg_malloc0(sizeof(CompressorState));
+	cs->readF = readF;
+	cs->writeF = writeF;
+
+	switch (compression_spec.algorithm)
 	{
-		case PG_COMPRESSION_GZIP:
-#ifdef HAVE_LIBZ
-			WriteDataToArchiveZlib(AH, cs, data, dLen);
-#else
-			pg_fatal("this build does not support compression with %s", "gzip");
-#endif
-			break;
 		case PG_COMPRESSION_NONE:
-			WriteDataToArchiveNone(AH, cs, data, dLen);
+			InitCompressorNone(cs, compression_spec);
+			break;
+		case PG_COMPRESSION_GZIP:
+			InitCompressorGzip(cs, compression_spec);
 			break;
-		case PG_COMPRESSION_LZ4:
-			/* fallthrough */
-		case PG_COMPRESSION_ZSTD:
-			pg_fatal("invalid compression method");
+		default:
+			pg_fatal("invalid compression method \"%s\"",
+					 get_compress_algorithm_name(compression_spec.algorithm));
 			break;
 	}
+
+	return cs;
 }
 
 /*
@@ -173,527 +161,268 @@ WriteDataToArchive(ArchiveHandle *AH, CompressorState *cs,
 void
 EndCompressor(ArchiveHandle *AH, CompressorState *cs)
 {
-#ifdef HAVE_LIBZ
-	if (cs->compression_spec.algorithm == PG_COMPRESSION_GZIP)
-		EndCompressorZlib(AH, cs);
-#endif
-	free(cs);
+	cs->end(AH, cs);
+	pg_free(cs);
 }
 
-/* Private routines, specific to each compression method. */
-
-#ifdef HAVE_LIBZ
-/*
- * Functions for zlib compressed output.
+/*----------------------
+ * Compressed stream API
+ *----------------------
  */
 
-static void
-InitCompressorZlib(CompressorState *cs, int level)
+/* Private routines */
+
+static int
+hasSuffix(const char *filename, const char *suffix)
 {
-	z_streamp	zp;
-
-	zp = cs->zp = (z_streamp) pg_malloc(sizeof(z_stream));
-	zp->zalloc = Z_NULL;
-	zp->zfree = Z_NULL;
-	zp->opaque = Z_NULL;
-
-	/*
-	 * zlibOutSize is the buffer size we tell zlib it can output to.  We
-	 * actually allocate one extra byte because some routines want to append a
-	 * trailing zero byte to the zlib output.
-	 */
-	cs->zlibOut = (char *) pg_malloc(ZLIB_OUT_SIZE + 1);
-	cs->zlibOutSize = ZLIB_OUT_SIZE;
-
-	if (deflateInit(zp, level) != Z_OK)
-		pg_fatal("could not initialize compression library: %s",
-				 zp->msg);
-
-	/* Just be paranoid - maybe End is called after Start, with no Write */
-	zp->next_out = (void *) cs->zlibOut;
-	zp->avail_out = cs->zlibOutSize;
+	int			filenamelen = strlen(filename);
+	int			suffixlen = strlen(suffix);
+
+	if (filenamelen < suffixlen)
+		return 0;
+
+	return memcmp(&filename[filenamelen - suffixlen],
+				  suffix,
+				  suffixlen) == 0;
 }
 
+/* free() without changing errno; useful in several places below */
 static void
-EndCompressorZlib(ArchiveHandle *AH, CompressorState *cs)
+free_keep_errno(void *p)
 {
-	z_streamp	zp = cs->zp;
+	int			save_errno = errno;
 
-	zp->next_in = NULL;
-	zp->avail_in = 0;
+	free(p);
+	errno = save_errno;
+}
+
+/*
+ * Compressed stream API None implementation
+ */
+static size_t
+read_none(void *ptr, size_t size, CompressFileHandle *CFH)
+{
+	FILE	   *fp = (FILE *) CFH->private_data;
+	size_t		ret;
 
-	/* Flush any remaining data from zlib buffer */
-	DeflateCompressorZlib(AH, cs, true);
+	if (size == 0)
+		return 0;
 
-	if (deflateEnd(zp) != Z_OK)
-		pg_fatal("could not close compression stream: %s", zp->msg);
+	ret = fread(ptr, 1, size, fp);
+	if (ret != size && !feof(fp))
+		pg_fatal("could not read from input file: %s",
+				 strerror(errno));
 
-	free(cs->zlibOut);
-	free(cs->zp);
+	return ret;
 }
 
-static void
-DeflateCompressorZlib(ArchiveHandle *AH, CompressorState *cs, bool flush)
+static size_t
+write_none(const void *ptr, size_t size, CompressFileHandle *CFH)
 {
-	z_streamp	zp = cs->zp;
-	char	   *out = cs->zlibOut;
-	int			res = Z_OK;
-
-	while (cs->zp->avail_in != 0 || flush)
-	{
-		res = deflate(zp, flush ? Z_FINISH : Z_NO_FLUSH);
-		if (res == Z_STREAM_ERROR)
-			pg_fatal("could not compress data: %s", zp->msg);
-		if ((flush && (zp->avail_out < cs->zlibOutSize))
-			|| (zp->avail_out == 0)
-			|| (zp->avail_in != 0)
-			)
-		{
-			/*
-			 * Extra paranoia: avoid zero-length chunks, since a zero length
-			 * chunk is the EOF marker in the custom format. This should never
-			 * happen but...
-			 */
-			if (zp->avail_out < cs->zlibOutSize)
-			{
-				/*
-				 * Any write function should do its own error checking but to
-				 * make sure we do a check here as well...
-				 */
-				size_t		len = cs->zlibOutSize - zp->avail_out;
-
-				cs->writeF(AH, out, len);
-			}
-			zp->next_out = (void *) out;
-			zp->avail_out = cs->zlibOutSize;
-		}
+	return fwrite(ptr, 1, size, (FILE *) CFH->private_data);
+}
 
-		if (res == Z_STREAM_END)
-			break;
-	}
+static const char *
+get_error_none(CompressFileHandle *CFH)
+{
+	return strerror(errno);
 }
 
-static void
-WriteDataToArchiveZlib(ArchiveHandle *AH, CompressorState *cs,
-					   const char *data, size_t dLen)
+static char *
+gets_none(char *ptr, int size, CompressFileHandle *CFH)
 {
-	cs->zp->next_in = (void *) unconstify(char *, data);
-	cs->zp->avail_in = dLen;
-	DeflateCompressorZlib(AH, cs, false);
+	return fgets(ptr, size, (FILE *) CFH->private_data);
 }
 
-static void
-ReadDataFromArchiveZlib(ArchiveHandle *AH, ReadFunc readF)
+static int
+getc_none(CompressFileHandle *CFH)
 {
-	z_streamp	zp;
-	char	   *out;
-	int			res = Z_OK;
-	size_t		cnt;
-	char	   *buf;
-	size_t		buflen;
+	FILE	   *fp = (FILE *) CFH->private_data;
+	int			ret;
 
-	zp = (z_streamp) pg_malloc(sizeof(z_stream));
-	zp->zalloc = Z_NULL;
-	zp->zfree = Z_NULL;
-	zp->opaque = Z_NULL;
+	ret = fgetc(fp);
+	if (ret == EOF)
+	{
+		if (!feof(fp))
+			pg_fatal("could not read from input file: %s", strerror(errno));
+		else
+			pg_fatal("could not read from input file: end of file");
+	}
 
-	buf = pg_malloc(ZLIB_IN_SIZE);
-	buflen = ZLIB_IN_SIZE;
+	return ret;
+}
 
-	out = pg_malloc(ZLIB_OUT_SIZE + 1);
+static int
+close_none(CompressFileHandle *CFH)
+{
+	FILE	   *fp = (FILE *) CFH->private_data;
+	int			ret = 0;
 
-	if (inflateInit(zp) != Z_OK)
-		pg_fatal("could not initialize compression library: %s",
-				 zp->msg);
+	CFH->private_data = NULL;
 
-	/* no minimal chunk size for zlib */
-	while ((cnt = readF(AH, &buf, &buflen)))
-	{
-		zp->next_in = (void *) buf;
-		zp->avail_in = cnt;
+	if (fp)
+		ret = fclose(fp);
 
-		while (zp->avail_in > 0)
-		{
-			zp->next_out = (void *) out;
-			zp->avail_out = ZLIB_OUT_SIZE;
+	return ret;
+}
 
-			res = inflate(zp, 0);
-			if (res != Z_OK && res != Z_STREAM_END)
-				pg_fatal("could not uncompress data: %s", zp->msg);
+static int
+eof_none(CompressFileHandle *CFH)
+{
+	return feof((FILE *) CFH->private_data);
+}
 
-			out[ZLIB_OUT_SIZE - zp->avail_out] = '\0';
-			ahwrite(out, 1, ZLIB_OUT_SIZE - zp->avail_out, AH);
-		}
-	}
+static int
+open_none(const char *path, int fd, const char *mode, CompressFileHandle *CFH)
+{
+	Assert(CFH->private_data == NULL);
 
-	zp->next_in = NULL;
-	zp->avail_in = 0;
-	while (res != Z_STREAM_END)
-	{
-		zp->next_out = (void *) out;
-		zp->avail_out = ZLIB_OUT_SIZE;
-		res = inflate(zp, 0);
-		if (res != Z_OK && res != Z_STREAM_END)
-			pg_fatal("could not uncompress data: %s", zp->msg);
-
-		out[ZLIB_OUT_SIZE - zp->avail_out] = '\0';
-		ahwrite(out, 1, ZLIB_OUT_SIZE - zp->avail_out, AH);
-	}
+	if (fd >= 0)
+		CFH->private_data = fdopen(dup(fd), mode);
+	else
+		CFH->private_data = fopen(path, mode);
 
-	if (inflateEnd(zp) != Z_OK)
-		pg_fatal("could not close compression library: %s", zp->msg);
+	if (CFH->private_data == NULL)
+		return 1;
 
-	free(buf);
-	free(out);
-	free(zp);
+	return 0;
 }
-#endif							/* HAVE_LIBZ */
-
 
-/*
- * Functions for uncompressed output.
- */
-
-static void
-ReadDataFromArchiveNone(ArchiveHandle *AH, ReadFunc readF)
+static int
+open_write_none(const char *path, const char *mode, CompressFileHandle *CFH)
 {
-	size_t		cnt;
-	char	   *buf;
-	size_t		buflen;
-
-	buf = pg_malloc(ZLIB_OUT_SIZE);
-	buflen = ZLIB_OUT_SIZE;
+	Assert(CFH->private_data == NULL);
 
-	while ((cnt = readF(AH, &buf, &buflen)))
-	{
-		ahwrite(buf, 1, cnt, AH);
-	}
+	CFH->private_data = fopen(path, mode);
+	if (CFH->private_data == NULL)
+		return 1;
 
-	free(buf);
+	return 0;
 }
 
 static void
-WriteDataToArchiveNone(ArchiveHandle *AH, CompressorState *cs,
-					   const char *data, size_t dLen)
+InitCompressNone(CompressFileHandle *CFH,
+				 const pg_compress_specification compression_spec)
 {
-	cs->writeF(AH, data, dLen);
+	CFH->open_func = open_none;
+	CFH->open_write_func = open_write_none;
+	CFH->read_func = read_none;
+	CFH->write_func = write_none;
+	CFH->gets_func = gets_none;
+	CFH->getc_func = getc_none;
+	CFH->close_func = close_none;
+	CFH->eof_func = eof_none;
+	CFH->get_error_func = get_error_none;
+
+	CFH->private_data = NULL;
 }
 
-
-/*----------------------
- * Compressed stream API
- *----------------------
- */
-
 /*
- * cfp represents an open stream, wrapping the underlying FILE or gzFile
- * pointer. This is opaque to the callers.
+ * Public interface
  */
-struct cfp
-{
-	FILE	   *uncompressedfp;
-#ifdef HAVE_LIBZ
-	gzFile		compressedfp;
-#endif
-};
-
-#ifdef HAVE_LIBZ
-static int	hasSuffix(const char *filename, const char *suffix);
-#endif
-
-/* free() without changing errno; useful in several places below */
-static void
-free_keep_errno(void *p)
-{
-	int			save_errno = errno;
-
-	free(p);
-	errno = save_errno;
-}
 
 /*
- * Open a file for reading. 'path' is the file to open, and 'mode' should
- * be either "r" or "rb".
- *
- * If the file at 'path' does not exist, we append the ".gz" suffix (if 'path'
- * doesn't already have it) and try again. So if you pass "foo" as 'path',
- * this will open either "foo" or "foo.gz".
- *
- * On failure, return NULL with an error code in errno.
+ * Initialize a compress file handle for the specified compression algorithm.
  */
-cfp *
-cfopen_read(const char *path, const char *mode)
+CompressFileHandle *
+InitCompressFileHandle(const pg_compress_specification compression_spec)
 {
-	cfp		   *fp;
+	CompressFileHandle *CFH;
 
-	pg_compress_specification compression_spec = {0};
+	CFH = pg_malloc0(sizeof(CompressFileHandle));
 
-#ifdef HAVE_LIBZ
-	if (hasSuffix(path, ".gz"))
+	switch (compression_spec.algorithm)
 	{
-		compression_spec.algorithm = PG_COMPRESSION_GZIP;
-		fp = cfopen(path, mode, compression_spec);
+		case PG_COMPRESSION_NONE:
+			InitCompressNone(CFH, compression_spec);
+			break;
+		case PG_COMPRESSION_GZIP:
+			InitCompressGzip(CFH, compression_spec);
+			break;
+		default:
+			pg_fatal("invalid compression method \"%s\"",
+					 get_compress_algorithm_name(compression_spec.algorithm));
+			break;
 	}
-	else
-#endif
-	{
-		compression_spec.algorithm = PG_COMPRESSION_NONE;
-		fp = cfopen(path, mode, compression_spec);
-#ifdef HAVE_LIBZ
-		if (fp == NULL)
-		{
-			char	   *fname;
 
-			fname = psprintf("%s.gz", path);
-			compression_spec.algorithm = PG_COMPRESSION_GZIP;
-			fp = cfopen(fname, mode, compression_spec);
-			free_keep_errno(fname);
-		}
-#endif
-	}
-	return fp;
+	return CFH;
 }
 
 /*
- * Open a file for writing. 'path' indicates the path name, and 'mode' must
- * be a filemode as accepted by fopen() and gzopen() that indicates writing
- * ("w", "wb", "a", or "ab").
+ * Open a file for reading. 'path' is the file to open, and 'mode' should
+ * be either "r" or "rb".
  *
- * If 'compression_spec.algorithm' is GZIP, a gzip compressed stream is opened,
- * and 'compression_spec.level' used. The ".gz" suffix is automatically added to
- * 'path' in that case.
+ * If the file at 'path' ends with the suffix of a supported compression method
+ * (currently only ".gz"), then that compression method will be used
+ * throughout. Otherwise the compression will be inferred by iteratively trying
+ * to open the file at 'path', first as is, then by appending known compression
+ * suffixes. So if you pass "foo" as 'path', this will open either "foo" or
+ * "foo.gz", trying in that order.
  *
  * On failure, return NULL with an error code in errno.
  */
-cfp *
-cfopen_write(const char *path, const char *mode,
-			 const pg_compress_specification compression_spec)
+CompressFileHandle *
+InitDiscoverCompressFileHandle(const char *path, const char *mode)
 {
-	cfp		   *fp;
-
-	if (compression_spec.algorithm == PG_COMPRESSION_NONE)
-		fp = cfopen(path, mode, compression_spec);
-	else
-	{
-#ifdef HAVE_LIBZ
-		char	   *fname;
-
-		fname = psprintf("%s.gz", path);
-		fp = cfopen(fname, mode, compression_spec);
-		free_keep_errno(fname);
-#else
-		pg_fatal("this build does not support compression with %s", "gzip");
-		fp = NULL;				/* keep compiler quiet */
-#endif
-	}
-	return fp;
-}
+	CompressFileHandle *CFH = NULL;
+	struct stat st;
+	char	   *fname;
+	pg_compress_specification compression_spec = {0};
 
-/*
- * Opens file 'path' in 'mode'. If compression is GZIP, the file
- * is opened with libz gzopen(), otherwise with plain fopen().
- *
- * On failure, return NULL with an error code in errno.
- */
-cfp *
-cfopen(const char *path, const char *mode,
-	   const pg_compress_specification compression_spec)
-{
-	cfp		   *fp = pg_malloc(sizeof(cfp));
+	compression_spec.algorithm = PG_COMPRESSION_NONE;
 
-	if (compression_spec.algorithm == PG_COMPRESSION_GZIP)
-	{
-#ifdef HAVE_LIBZ
-		if (compression_spec.level != Z_DEFAULT_COMPRESSION)
-		{
-			/* user has specified a compression level, so tell zlib to use it */
-			char		mode_compression[32];
+	Assert(strcmp(mode, "r") == 0 || strcmp(mode, "rb") == 0);
 
-			snprintf(mode_compression, sizeof(mode_compression), "%s%d",
-					 mode, compression_spec.level);
-			fp->compressedfp = gzopen(path, mode_compression);
-		}
-		else
-		{
-			/* don't specify a level, just use the zlib default */
-			fp->compressedfp = gzopen(path, mode);
-		}
+	fname = pg_strdup(path);
 
-		fp->uncompressedfp = NULL;
-		if (fp->compressedfp == NULL)
-		{
-			free_keep_errno(fp);
-			fp = NULL;
-		}
-#else
-		pg_fatal("this build does not support compression with %s", "gzip");
-#endif
-	}
+	if (hasSuffix(fname, ".gz"))
+		compression_spec.algorithm = PG_COMPRESSION_GZIP;
 	else
 	{
-#ifdef HAVE_LIBZ
-		fp->compressedfp = NULL;
-#endif
-		fp->uncompressedfp = fopen(path, mode);
-		if (fp->uncompressedfp == NULL)
-		{
-			free_keep_errno(fp);
-			fp = NULL;
-		}
-	}
-
-	return fp;
-}
-
-
-int
-cfread(void *ptr, int size, cfp *fp)
-{
-	int			ret;
-
-	if (size == 0)
-		return 0;
+		bool		exists;
 
+		exists = (stat(path, &st) == 0);
+		/* avoid an unused-variable warning when built without compression */
+		if (exists)
+			compression_spec.algorithm = PG_COMPRESSION_NONE;
 #ifdef HAVE_LIBZ
-	if (fp->compressedfp)
-	{
-		ret = gzread(fp->compressedfp, ptr, size);
-		if (ret != size && !gzeof(fp->compressedfp))
+		if (!exists)
 		{
-			int			errnum;
-			const char *errmsg = gzerror(fp->compressedfp, &errnum);
+			free_keep_errno(fname);
+			fname = psprintf("%s.gz", path);
+			exists = (stat(fname, &st) == 0);
 
-			pg_fatal("could not read from input file: %s",
-					 errnum == Z_ERRNO ? strerror(errno) : errmsg);
+			if (exists)
+				compression_spec.algorithm = PG_COMPRESSION_GZIP;
 		}
-	}
-	else
 #endif
-	{
-		ret = fread(ptr, 1, size, fp->uncompressedfp);
-		if (ret != size && !feof(fp->uncompressedfp))
-			READ_ERROR_EXIT(fp->uncompressedfp);
 	}
-	return ret;
-}
 
-int
-cfwrite(const void *ptr, int size, cfp *fp)
-{
-#ifdef HAVE_LIBZ
-	if (fp->compressedfp)
-		return gzwrite(fp->compressedfp, ptr, size);
-	else
-#endif
-		return fwrite(ptr, 1, size, fp->uncompressedfp);
-}
-
-int
-cfgetc(cfp *fp)
-{
-	int			ret;
-
-#ifdef HAVE_LIBZ
-	if (fp->compressedfp)
-	{
-		ret = gzgetc(fp->compressedfp);
-		if (ret == EOF)
-		{
-			if (!gzeof(fp->compressedfp))
-				pg_fatal("could not read from input file: %s", strerror(errno));
-			else
-				pg_fatal("could not read from input file: end of file");
-		}
-	}
-	else
-#endif
+	CFH = InitCompressFileHandle(compression_spec);
+	if (CFH->open_func(fname, -1, mode, CFH))
 	{
-		ret = fgetc(fp->uncompressedfp);
-		if (ret == EOF)
-			READ_ERROR_EXIT(fp->uncompressedfp);
+		free_keep_errno(CFH);
+		CFH = NULL;
 	}
+	free_keep_errno(fname);
 
-	return ret;
-}
-
-char *
-cfgets(cfp *fp, char *buf, int len)
-{
-#ifdef HAVE_LIBZ
-	if (fp->compressedfp)
-		return gzgets(fp->compressedfp, buf, len);
-	else
-#endif
-		return fgets(buf, len, fp->uncompressedfp);
-}
-
-int
-cfclose(cfp *fp)
-{
-	int			result;
-
-	if (fp == NULL)
-	{
-		errno = EBADF;
-		return EOF;
-	}
-#ifdef HAVE_LIBZ
-	if (fp->compressedfp)
-	{
-		result = gzclose(fp->compressedfp);
-		fp->compressedfp = NULL;
-	}
-	else
-#endif
-	{
-		result = fclose(fp->uncompressedfp);
-		fp->uncompressedfp = NULL;
-	}
-	free_keep_errno(fp);
-
-	return result;
+	return CFH;
 }
 
+/*
+ * Close an open file handle and release its memory.
+ *
+ * On failure, returns an error value and sets errno appropriately.
+ */
 int
-cfeof(cfp *fp)
-{
-#ifdef HAVE_LIBZ
-	if (fp->compressedfp)
-		return gzeof(fp->compressedfp);
-	else
-#endif
-		return feof(fp->uncompressedfp);
-}
-
-const char *
-get_cfp_error(cfp *fp)
+EndCompressFileHandle(CompressFileHandle *CFH)
 {
-#ifdef HAVE_LIBZ
-	if (fp->compressedfp)
-	{
-		int			errnum;
-		const char *errmsg = gzerror(fp->compressedfp, &errnum);
-
-		if (errnum != Z_ERRNO)
-			return errmsg;
-	}
-#endif
-	return strerror(errno);
-}
+	int			ret = 0;
 
-#ifdef HAVE_LIBZ
-static int
-hasSuffix(const char *filename, const char *suffix)
-{
-	int			filenamelen = strlen(filename);
-	int			suffixlen = strlen(suffix);
+	if (CFH->private_data)
+		ret = CFH->close_func(CFH);
 
-	if (filenamelen < suffixlen)
-		return 0;
+	free_keep_errno(CFH);
 
-	return memcmp(&filename[filenamelen - suffixlen],
-				  suffix,
-				  suffixlen) == 0;
+	return ret;
 }
-
-#endif
diff --git a/src/bin/pg_dump/compress_io.h b/src/bin/pg_dump/compress_io.h
index a429dc4789..bd2b5623a5 100644
--- a/src/bin/pg_dump/compress_io.h
+++ b/src/bin/pg_dump/compress_io.h
@@ -37,32 +37,84 @@ typedef void (*WriteFunc) (ArchiveHandle *AH, const char *buf, size_t len);
  */
 typedef size_t (*ReadFunc) (ArchiveHandle *AH, char **buf, size_t *buflen);
 
-/* struct definition appears in compress_io.c */
 typedef struct CompressorState CompressorState;
+struct CompressorState
+{
+	/*
+	 * Read all compressed data from the input stream (via readF) and print it
+	 * out with ahwrite().
+	 */
+	void		(*readData) (ArchiveHandle *AH, CompressorState *cs);
+
+	/*
+	 * Compress and write data to the output stream (via writeF).
+	 */
+	void		(*writeData) (ArchiveHandle *AH, CompressorState *cs,
+							  const void *data, size_t dLen);
+
+	/*
+	 * End compression and flush internal buffers if any.
+	 */
+	void		(*end) (ArchiveHandle *AH, CompressorState *cs);
+
+	ReadFunc	readF;
+	WriteFunc	writeF;
+
+	pg_compress_specification compression_spec;
+	void	   *private_data;
+};
 
 extern CompressorState *AllocateCompressor(const pg_compress_specification compression_spec,
+										   ReadFunc readF,
 										   WriteFunc writeF);
-extern void ReadDataFromArchive(ArchiveHandle *AH,
-								const pg_compress_specification compression_spec,
-								ReadFunc readF);
-extern void WriteDataToArchive(ArchiveHandle *AH, CompressorState *cs,
-							   const void *data, size_t dLen);
 extern void EndCompressor(ArchiveHandle *AH, CompressorState *cs);
 
+/*
+ * Compress File Handle
+ */
+typedef struct CompressFileHandle CompressFileHandle;
 
-typedef struct cfp cfp;
+struct CompressFileHandle
+{
+	/*
+	 * Open a file in 'mode', either at 'path' or, if 'fd' is valid (>= 0),
+	 * from a duplicate of that file descriptor.  Returns non-zero on error.
+	 */
+	int			(*open_func) (const char *path, int fd, const char *mode,
+							  CompressFileHandle *CFH);
+
+	/* Open the file at 'path' for writing in 'mode'.  Non-zero on error. */
+	int			(*open_write_func) (const char *path, const char *mode,
+									CompressFileHandle *CFH);
+
+	/* Read up to 'size' bytes into 'ptr'.  Returns the number read. */
+	size_t		(*read_func) (void *ptr, size_t size, CompressFileHandle *CFH);
+
+	/* Write 'size' bytes from 'ptr'.  Returns the number written. */
+	size_t		(*write_func) (const void *ptr, size_t size,
+							   CompressFileHandle *CFH);
+
+	/* Read at most 'size' - 1 characters into 's', like fgets(3). */
+	char	   *(*gets_func) (char *s, int size, CompressFileHandle *CFH);
+
+	/* Read the next character, like fgetc(3); exits on failure. */
+	int			(*getc_func) (CompressFileHandle *CFH);
+
+	/* Return non-zero if at end of file, like feof(3). */
+	int			(*eof_func) (CompressFileHandle *CFH);
+
+	/* Close the file.  Returns non-zero on error. */
+	int			(*close_func) (CompressFileHandle *CFH);
+
+	/* Return a string describing the last error for this handle. */
+	const char *(*get_error_func) (CompressFileHandle *CFH);
 
-extern cfp *cfopen(const char *path, const char *mode,
-				   const pg_compress_specification compression_spec);
-extern cfp *cfopen_read(const char *path, const char *mode);
-extern cfp *cfopen_write(const char *path, const char *mode,
-						 const pg_compress_specification compression_spec);
-extern int	cfread(void *ptr, int size, cfp *fp);
-extern int	cfwrite(const void *ptr, int size, cfp *fp);
-extern int	cfgetc(cfp *fp);
-extern char *cfgets(cfp *fp, char *buf, int len);
-extern int	cfclose(cfp *fp);
-extern int	cfeof(cfp *fp);
-extern const char *get_cfp_error(cfp *fp);
+	pg_compress_specification compression_spec;
+	void	   *private_data;
+};
 
+extern CompressFileHandle *InitCompressFileHandle(const pg_compress_specification compression_spec);
+extern CompressFileHandle *InitDiscoverCompressFileHandle(const char *path,
+														  const char *mode);
+extern int	EndCompressFileHandle(CompressFileHandle *CFH);
 #endif
diff --git a/src/bin/pg_dump/meson.build b/src/bin/pg_dump/meson.build
index ca62f9a374..aa2c91829c 100644
--- a/src/bin/pg_dump/meson.build
+++ b/src/bin/pg_dump/meson.build
@@ -2,6 +2,7 @@
 
 pg_dump_common_sources = files(
   'compress_io.c',
+  'compress_gzip.c',
   'dumputils.c',
   'parallel.c',
   'pg_backup_archiver.c',
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 7f7a0f1ce7..cc3b7f0992 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -31,6 +31,7 @@
 #endif
 
 #include "common/string.h"
+#include "compress_io.h"
 #include "dumputils.h"
 #include "fe_utils/string_utils.h"
 #include "lib/stringinfo.h"
@@ -43,13 +44,6 @@
 #define TEXT_DUMP_HEADER "--\n-- PostgreSQL database dump\n--\n\n"
 #define TEXT_DUMPALL_HEADER "--\n-- PostgreSQL database cluster dump\n--\n\n"
 
-/* state needed to save/restore an archive's output target */
-typedef struct _outputContext
-{
-	void	   *OF;
-	int			gzOut;
-} OutputContext;
-
 /*
  * State for tracking TocEntrys that are ready to process during a parallel
  * restore.  (This used to be a list, and we still call it that, though now
@@ -101,8 +95,8 @@ static void dump_lo_buf(ArchiveHandle *AH);
 static void dumpTimestamp(ArchiveHandle *AH, const char *msg, time_t tim);
 static void SetOutput(ArchiveHandle *AH, const char *filename,
 					  const pg_compress_specification compression_spec);
-static OutputContext SaveOutput(ArchiveHandle *AH);
-static void RestoreOutput(ArchiveHandle *AH, OutputContext savedContext);
+static CompressFileHandle *SaveOutput(ArchiveHandle *AH);
+static void RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput);
 
 static int	restore_toc_entry(ArchiveHandle *AH, TocEntry *te, bool is_parallel);
 static void restore_toc_entries_prefork(ArchiveHandle *AH,
@@ -277,11 +271,8 @@ CloseArchive(Archive *AHX)
 	AH->ClosePtr(AH);
 
 	/* Close the output */
-	errno = 0;					/* in case gzclose() doesn't set it */
-	if (AH->gzOut)
-		res = GZCLOSE(AH->OF);
-	else if (AH->OF != stdout)
-		res = fclose(AH->OF);
+	errno = 0;
+	res = EndCompressFileHandle(AH->OF);
 
 	if (res != 0)
 		pg_fatal("could not close output file: %m");
@@ -362,8 +353,9 @@ RestoreArchive(Archive *AHX)
 	ArchiveHandle *AH = (ArchiveHandle *) AHX;
 	RestoreOptions *ropt = AH->public.ropt;
 	bool		parallel_mode;
+	bool		supports_compression;
 	TocEntry   *te;
-	OutputContext sav;
+	CompressFileHandle *sav;
 
 	AH->stage = STAGE_INITIALIZING;
 
@@ -391,17 +383,24 @@ RestoreArchive(Archive *AHX)
 	/*
 	 * Make sure we won't need (de)compression we haven't got
 	 */
-#ifndef HAVE_LIBZ
-	if (AH->compression_spec.algorithm == PG_COMPRESSION_GZIP &&
+	supports_compression = true;
+	if (AH->compression_spec.algorithm != PG_COMPRESSION_NONE &&
+		AH->compression_spec.algorithm == PG_COMPRESSION_GZIP &&
 		AH->PrintTocDataPtr != NULL)
 	{
 		for (te = AH->toc->next; te != AH->toc; te = te->next)
 		{
 			if (te->hadDumper && (te->reqs & REQ_DATA) != 0)
-				pg_fatal("cannot restore from compressed archive (compression not supported in this installation)");
+			{
+#ifndef HAVE_LIBZ
+				if (AH->compression_spec.algorithm == PG_COMPRESSION_GZIP)
+					supports_compression = false;
+#endif
+				if (!supports_compression)
+					pg_fatal("cannot restore from compressed archive (compression not supported in this installation)");
+			}
 		}
 	}
-#endif
 
 	/*
 	 * Prepare index arrays, so we can assume we have them throughout restore.
@@ -1128,7 +1127,7 @@ PrintTOCSummary(Archive *AHX)
 	TocEntry   *te;
 	pg_compress_specification out_compression_spec = {0};
 	teSection	curSection;
-	OutputContext sav;
+	CompressFileHandle *sav;
 	const char *fmtName;
 	char		stamp_str[64];
 
@@ -1144,9 +1143,10 @@ PrintTOCSummary(Archive *AHX)
 		strcpy(stamp_str, "[unknown]");
 
 	ahprintf(AH, ";\n; Archive created at %s\n", stamp_str);
-	ahprintf(AH, ";     dbname: %s\n;     TOC Entries: %d\n;     Compression: %d\n",
+	ahprintf(AH, ";     dbname: %s\n;     TOC Entries: %d\n;     Compression: %s\n",
 			 sanitize_line(AH->archdbname, false),
-			 AH->tocCount, AH->compression_spec.level);
+			 AH->tocCount,
+			 get_compress_algorithm_name(AH->compression_spec.algorithm));
 
 	switch (AH->format)
 	{
@@ -1503,95 +1503,60 @@ static void
 SetOutput(ArchiveHandle *AH, const char *filename,
 		  const pg_compress_specification compression_spec)
 {
-	int			fn;
+	CompressFileHandle *CFH;
+	const char *mode;
+	int			fn = -1;
 
 	if (filename)
 	{
 		if (strcmp(filename, "-") == 0)
 			fn = fileno(stdout);
-		else
-			fn = -1;
 	}
 	else if (AH->FH)
 		fn = fileno(AH->FH);
 	else if (AH->fSpec)
 	{
-		fn = -1;
 		filename = AH->fSpec;
 	}
 	else
 		fn = fileno(stdout);
 
-	/* If compression explicitly requested, use gzopen */
-#ifdef HAVE_LIBZ
-	if (compression_spec.algorithm == PG_COMPRESSION_GZIP)
-	{
-		char		fmode[14];
-
-		/* Don't use PG_BINARY_x since this is zlib */
-		sprintf(fmode, "wb%d", compression_spec.level);
-		if (fn >= 0)
-			AH->OF = gzdopen(dup(fn), fmode);
-		else
-			AH->OF = gzopen(filename, fmode);
-		AH->gzOut = 1;
-	}
+	if (AH->mode == archModeAppend)
+		mode = PG_BINARY_A;
 	else
-#endif
-	{							/* Use fopen */
-		if (AH->mode == archModeAppend)
-		{
-			if (fn >= 0)
-				AH->OF = fdopen(dup(fn), PG_BINARY_A);
-			else
-				AH->OF = fopen(filename, PG_BINARY_A);
-		}
-		else
-		{
-			if (fn >= 0)
-				AH->OF = fdopen(dup(fn), PG_BINARY_W);
-			else
-				AH->OF = fopen(filename, PG_BINARY_W);
-		}
-		AH->gzOut = 0;
-	}
+		mode = PG_BINARY_W;
 
-	if (!AH->OF)
+	CFH = InitCompressFileHandle(compression_spec);
+
+	if (CFH->open_func(filename, fn, mode, CFH))
 	{
 		if (filename)
 			pg_fatal("could not open output file \"%s\": %m", filename);
 		else
 			pg_fatal("could not open output file: %m");
 	}
+
+	AH->OF = CFH;
 }
 
-static OutputContext
+static CompressFileHandle *
 SaveOutput(ArchiveHandle *AH)
 {
-	OutputContext sav;
-
-	sav.OF = AH->OF;
-	sav.gzOut = AH->gzOut;
-
-	return sav;
+	return (CompressFileHandle *) AH->OF;
 }
 
 static void
-RestoreOutput(ArchiveHandle *AH, OutputContext savedContext)
+RestoreOutput(ArchiveHandle *AH, CompressFileHandle *savedOutput)
 {
 	int			res;
 
-	errno = 0;					/* in case gzclose() doesn't set it */
-	if (AH->gzOut)
-		res = GZCLOSE(AH->OF);
-	else
-		res = fclose(AH->OF);
+	errno = 0;
+	res = EndCompressFileHandle(AH->OF);
 
 	if (res != 0)
 		pg_fatal("could not close output file: %m");
 
-	AH->gzOut = savedContext.gzOut;
-	AH->OF = savedContext.OF;
+	AH->OF = savedOutput;
 }
 
 
@@ -1715,21 +1680,20 @@ ahwrite(const void *ptr, size_t size, size_t nmemb, ArchiveHandle *AH)
 
 		bytes_written = size * nmemb;
 	}
-	else if (AH->gzOut)
-		bytes_written = GZWRITE(ptr, size, nmemb, AH->OF);
 	else if (AH->CustomOutPtr)
 		bytes_written = AH->CustomOutPtr(AH, ptr, size * nmemb);
 
+	/*
+	 * If we're doing a restore, and it's direct to DB, and we're connected
+	 * then send it to the DB.
+	 */
+	else if (RestoringToDB(AH))
+		bytes_written = ExecuteSqlCommandBuf(&AH->public, (const char *) ptr, size * nmemb);
 	else
 	{
-		/*
-		 * If we're doing a restore, and it's direct to DB, and we're
-		 * connected then send it to the DB.
-		 */
-		if (RestoringToDB(AH))
-			bytes_written = ExecuteSqlCommandBuf(&AH->public, (const char *) ptr, size * nmemb);
-		else
-			bytes_written = fwrite(ptr, size, nmemb, AH->OF) * size;
+		CompressFileHandle *CFH = (CompressFileHandle *) AH->OF;
+
+		bytes_written = CFH->write_func(ptr, size * nmemb, CFH);
 	}
 
 	if (bytes_written != size * nmemb)
@@ -2072,6 +2036,18 @@ ReadStr(ArchiveHandle *AH)
 	return buf;
 }
 
+static bool
+_fileExistsInDirectory(const char *dir, const char *filename)
+{
+	struct stat st;
+	char		buf[MAXPGPATH];
+
+	if (snprintf(buf, MAXPGPATH, "%s/%s", dir, filename) >= MAXPGPATH)
+		pg_fatal("directory name too long: \"%s\"", dir);
+
+	return (stat(buf, &st) == 0 && S_ISREG(st.st_mode));
+}
+
 static int
 _discoverArchiveFormat(ArchiveHandle *AH)
 {
@@ -2102,26 +2078,12 @@ _discoverArchiveFormat(ArchiveHandle *AH)
 		 */
 		if (stat(AH->fSpec, &st) == 0 && S_ISDIR(st.st_mode))
 		{
-			char		buf[MAXPGPATH];
-
-			if (snprintf(buf, MAXPGPATH, "%s/toc.dat", AH->fSpec) >= MAXPGPATH)
-				pg_fatal("directory name too long: \"%s\"",
-						 AH->fSpec);
-			if (stat(buf, &st) == 0 && S_ISREG(st.st_mode))
-			{
-				AH->format = archDirectory;
+			AH->format = archDirectory;
+			if (_fileExistsInDirectory(AH->fSpec, "toc.dat"))
 				return AH->format;
-			}
-
 #ifdef HAVE_LIBZ
-			if (snprintf(buf, MAXPGPATH, "%s/toc.dat.gz", AH->fSpec) >= MAXPGPATH)
-				pg_fatal("directory name too long: \"%s\"",
-						 AH->fSpec);
-			if (stat(buf, &st) == 0 && S_ISREG(st.st_mode))
-			{
-				AH->format = archDirectory;
+			if (_fileExistsInDirectory(AH->fSpec, "toc.dat.gz"))
 				return AH->format;
-			}
 #endif
 			pg_fatal("directory \"%s\" does not appear to be a valid archive (\"toc.dat\" does not exist)",
 					 AH->fSpec);
@@ -2219,6 +2181,8 @@ _allocAH(const char *FileSpec, const ArchiveFormat fmt,
 		 SetupWorkerPtrType setupWorkerPtr)
 {
 	ArchiveHandle *AH;
+	CompressFileHandle *CFH;
+	pg_compress_specification out_compress_spec = {0};
 
 	pg_log_debug("allocating AH for %s, format %d",
 				 FileSpec ? FileSpec : "(stdio)", fmt);
@@ -2272,8 +2236,11 @@ _allocAH(const char *FileSpec, const ArchiveFormat fmt,
 	memset(&(AH->sqlparse), 0, sizeof(AH->sqlparse));
 
 	/* Open stdout with no compression for AH output handle */
-	AH->gzOut = 0;
-	AH->OF = stdout;
+	out_compress_spec.algorithm = PG_COMPRESSION_NONE;
+	CFH = InitCompressFileHandle(out_compress_spec);
+	if (CFH->open_func(NULL, fileno(stdout), PG_BINARY_A, CFH))
+		pg_fatal("could not open stdout for appending: %m");
+	AH->OF = CFH;
 
 	/*
 	 * On Windows, we need to use binary mode to read/write non-text files,
@@ -3686,12 +3653,7 @@ WriteHead(ArchiveHandle *AH)
 	AH->WriteBytePtr(AH, AH->intSize);
 	AH->WriteBytePtr(AH, AH->offSize);
 	AH->WriteBytePtr(AH, AH->format);
-	/*
-	 * For now the compression type is implied by the level.  This will need
-	 * to change once support for more compression algorithms is added,
-	 * requiring a format bump.
-	 */
-	WriteInt(AH, AH->compression_spec.level);
+	AH->WriteBytePtr(AH, AH->compression_spec.algorithm);
 	crtm = *localtime(&AH->createDate);
 	WriteInt(AH, crtm.tm_sec);
 	WriteInt(AH, crtm.tm_min);
@@ -3762,10 +3724,11 @@ ReadHead(ArchiveHandle *AH)
 		pg_fatal("expected format (%d) differs from format found in file (%d)",
 				 AH->format, fmt);
 
-	/* Guess the compression method based on the level */
-	AH->compression_spec.algorithm = PG_COMPRESSION_NONE;
-	if (AH->version >= K_VERS_1_2)
+	if (AH->version >= K_VERS_1_15)
+		AH->compression_spec.algorithm = AH->ReadBytePtr(AH);
+	else if (AH->version >= K_VERS_1_2)
 	{
+		/* Guess the compression method based on the level */
 		if (AH->version < K_VERS_1_4)
 			AH->compression_spec.level = AH->ReadBytePtr(AH);
 		else
@@ -3777,10 +3740,17 @@ ReadHead(ArchiveHandle *AH)
 	else
 		AH->compression_spec.algorithm = PG_COMPRESSION_GZIP;
 
+	if (AH->compression_spec.algorithm != PG_COMPRESSION_NONE)
+	{
+		bool		unsupported = false;
+
 #ifndef HAVE_LIBZ
-	if (AH->compression_spec.algorithm == PG_COMPRESSION_GZIP)
-		pg_fatal("archive is compressed, but this installation does not support compression");
+		if (AH->compression_spec.algorithm == PG_COMPRESSION_GZIP)
+			unsupported = true;
 #endif
+		if (unsupported)
+			pg_fatal("archive is compressed, but this installation does not support compression");
+	}
 
 	if (AH->version >= K_VERS_1_4)
 	{
diff --git a/src/bin/pg_dump/pg_backup_archiver.h b/src/bin/pg_dump/pg_backup_archiver.h
index f72446ed5b..18b38c17ab 100644
--- a/src/bin/pg_dump/pg_backup_archiver.h
+++ b/src/bin/pg_dump/pg_backup_archiver.h
@@ -32,30 +32,6 @@
 
 #define LOBBUFSIZE 16384
 
-#ifdef HAVE_LIBZ
-#include <zlib.h>
-#define GZCLOSE(fh) gzclose(fh)
-#define GZWRITE(p, s, n, fh) gzwrite(fh, p, (n) * (s))
-#define GZREAD(p, s, n, fh) gzread(fh, p, (n) * (s))
-#define GZEOF(fh)	gzeof(fh)
-#else
-#define GZCLOSE(fh) fclose(fh)
-#define GZWRITE(p, s, n, fh) (fwrite(p, s, n, fh) * (s))
-#define GZREAD(p, s, n, fh) fread(p, s, n, fh)
-#define GZEOF(fh)	feof(fh)
-/* this is just the redefinition of a libz constant */
-#define Z_DEFAULT_COMPRESSION (-1)
-
-typedef struct _z_stream
-{
-	void	   *next_in;
-	void	   *next_out;
-	size_t		avail_in;
-	size_t		avail_out;
-} z_stream;
-typedef z_stream *z_streamp;
-#endif
-
 /* Data block types */
 #define BLK_DATA 1
 #define BLK_BLOBS 3
@@ -89,10 +65,13 @@ typedef z_stream *z_streamp;
 #define K_VERS_1_13 MAKE_ARCHIVE_VERSION(1, 13, 0)	/* change search_path
 													 * behavior */
 #define K_VERS_1_14 MAKE_ARCHIVE_VERSION(1, 14, 0)	/* add tableam */
+#define K_VERS_1_15 MAKE_ARCHIVE_VERSION(1, 15, 0)	/* add
+													 * compression_algorithm
+													 * in header */
 
 /* Current archive version number (the format we can output) */
 #define K_VERS_MAJOR 1
-#define K_VERS_MINOR 14
+#define K_VERS_MINOR 15
 #define K_VERS_REV 0
 #define K_VERS_SELF MAKE_ARCHIVE_VERSION(K_VERS_MAJOR, K_VERS_MINOR, K_VERS_REV)
 
@@ -319,8 +298,7 @@ struct _archiveHandle
 
 	char	   *fSpec;			/* Archive File Spec */
 	FILE	   *FH;				/* General purpose file handle */
-	void	   *OF;
-	int			gzOut;			/* Output file */
+	void	   *OF;				/* Output file */
 
 	struct _tocEntry *toc;		/* Header of circular list of TOC entries */
 	int			tocCount;		/* Number of TOC entries */
diff --git a/src/bin/pg_dump/pg_backup_custom.c b/src/bin/pg_dump/pg_backup_custom.c
index d1e54644a9..512ab043af 100644
--- a/src/bin/pg_dump/pg_backup_custom.c
+++ b/src/bin/pg_dump/pg_backup_custom.c
@@ -298,7 +298,9 @@ _StartData(ArchiveHandle *AH, TocEntry *te)
 	_WriteByte(AH, BLK_DATA);	/* Block type */
 	WriteInt(AH, te->dumpId);	/* For sanity check */
 
-	ctx->cs = AllocateCompressor(AH->compression_spec, _CustomWriteFunc);
+	ctx->cs = AllocateCompressor(AH->compression_spec,
+								 NULL,
+								 _CustomWriteFunc);
 }
 
 /*
@@ -317,15 +319,15 @@ _WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
 	CompressorState *cs = ctx->cs;
 
 	if (dLen > 0)
-		/* WriteDataToArchive() internally throws write errors */
-		WriteDataToArchive(AH, cs, data, dLen);
+		/* writeData() internally throws write errors */
+		cs->writeData(AH, cs, data, dLen);
 }
 
 /*
  * Called by the archiver when a dumper's 'DataDumper' routine has
  * finished.
  *
- * Optional.
+ * Mandatory.
  */
 static void
 _EndData(ArchiveHandle *AH, TocEntry *te)
@@ -333,6 +335,8 @@ _EndData(ArchiveHandle *AH, TocEntry *te)
 	lclContext *ctx = (lclContext *) AH->formatData;
 
 	EndCompressor(AH, ctx->cs);
+	ctx->cs = NULL;
+
 	/* Send the end marker */
 	WriteInt(AH, 0);
 }
@@ -377,7 +381,9 @@ _StartLO(ArchiveHandle *AH, TocEntry *te, Oid oid)
 
 	WriteInt(AH, oid);
 
-	ctx->cs = AllocateCompressor(AH->compression_spec, _CustomWriteFunc);
+	ctx->cs = AllocateCompressor(AH->compression_spec,
+								 NULL,
+								 _CustomWriteFunc);
 }
 
 /*
@@ -566,7 +572,12 @@ _PrintTocData(ArchiveHandle *AH, TocEntry *te)
 static void
 _PrintData(ArchiveHandle *AH)
 {
-	ReadDataFromArchive(AH, AH->compression_spec, _CustomReadFunc);
+	CompressorState *cs;
+
+	cs = AllocateCompressor(AH->compression_spec,
+							_CustomReadFunc, NULL);
+	cs->readData(AH, cs);
+	EndCompressor(AH, cs);
 }
 
 static void
diff --git a/src/bin/pg_dump/pg_backup_directory.c b/src/bin/pg_dump/pg_backup_directory.c
index 6800c3ccee..a2f88995c0 100644
--- a/src/bin/pg_dump/pg_backup_directory.c
+++ b/src/bin/pg_dump/pg_backup_directory.c
@@ -50,9 +50,8 @@ typedef struct
 	 */
 	char	   *directory;
 
-	cfp		   *dataFH;			/* currently open data file */
-
-	cfp		   *LOsTocFH;		/* file handle for blobs.toc */
+	CompressFileHandle *dataFH; /* currently open data file */
+	CompressFileHandle *LOsTocFH; /* file handle for blobs.toc */
 	ParallelState *pstate;		/* for parallel backup / restore */
 } lclContext;
 
@@ -198,11 +197,11 @@ InitArchiveFmt_Directory(ArchiveHandle *AH)
 	else
 	{							/* Read Mode */
 		char		fname[MAXPGPATH];
-		cfp		   *tocFH;
+		CompressFileHandle *tocFH;
 
 		setFilePath(AH, fname, "toc.dat");
 
-		tocFH = cfopen_read(fname, PG_BINARY_R);
+		tocFH = InitDiscoverCompressFileHandle(fname, PG_BINARY_R);
 		if (tocFH == NULL)
 			pg_fatal("could not open input file \"%s\": %m", fname);
 
@@ -218,7 +217,7 @@ InitArchiveFmt_Directory(ArchiveHandle *AH)
 		ReadToc(AH);
 
 		/* Nothing else in the file, so close it again... */
-		if (cfclose(tocFH) != 0)
+		if (EndCompressFileHandle(tocFH) != 0)
 			pg_fatal("could not close TOC file: %m");
 		ctx->dataFH = NULL;
 	}
@@ -327,9 +326,9 @@ _StartData(ArchiveHandle *AH, TocEntry *te)
 
 	setFilePath(AH, fname, tctx->filename);
 
-	ctx->dataFH = cfopen_write(fname, PG_BINARY_W,
-							   AH->compression_spec);
-	if (ctx->dataFH == NULL)
+	ctx->dataFH = InitCompressFileHandle(AH->compression_spec);
+
+	if (ctx->dataFH->open_write_func(fname, PG_BINARY_W, ctx->dataFH))
 		pg_fatal("could not open output file \"%s\": %m", fname);
 }
 
@@ -346,15 +345,16 @@ static void
 _WriteData(ArchiveHandle *AH, const void *data, size_t dLen)
 {
 	lclContext *ctx = (lclContext *) AH->formatData;
+	CompressFileHandle *CFH = ctx->dataFH;
 
 	errno = 0;
-	if (dLen > 0 && cfwrite(data, dLen, ctx->dataFH) != dLen)
+	if (dLen > 0 && CFH->write_func(data, dLen, CFH) != dLen)
 	{
 		/* if write didn't set errno, assume problem is no disk space */
 		if (errno == 0)
 			errno = ENOSPC;
 		pg_fatal("could not write to output file: %s",
-				 get_cfp_error(ctx->dataFH));
+				 CFH->get_error_func(CFH));
 	}
 }
 
@@ -370,7 +370,7 @@ _EndData(ArchiveHandle *AH, TocEntry *te)
 	lclContext *ctx = (lclContext *) AH->formatData;
 
 	/* Close the file */
-	if (cfclose(ctx->dataFH) != 0)
+	if (EndCompressFileHandle(ctx->dataFH) != 0)
 		pg_fatal("could not close data file: %m");
 
 	ctx->dataFH = NULL;
@@ -385,26 +385,25 @@ _PrintFileData(ArchiveHandle *AH, char *filename)
 	size_t		cnt;
 	char	   *buf;
 	size_t		buflen;
-	cfp		   *cfp;
+	CompressFileHandle *CFH;
 
 	if (!filename)
 		return;
 
-	cfp = cfopen_read(filename, PG_BINARY_R);
-
-	if (!cfp)
+	CFH = InitDiscoverCompressFileHandle(filename, PG_BINARY_R);
+	if (!CFH)
 		pg_fatal("could not open input file \"%s\": %m", filename);
 
 	buf = pg_malloc(ZLIB_OUT_SIZE);
 	buflen = ZLIB_OUT_SIZE;
 
-	while ((cnt = cfread(buf, buflen, cfp)))
+	while ((cnt = CFH->read_func(buf, buflen, CFH)))
 	{
 		ahwrite(buf, 1, cnt, AH);
 	}
 
 	free(buf);
-	if (cfclose(cfp) != 0)
+	if (EndCompressFileHandle(CFH) != 0)
 		pg_fatal("could not close data file \"%s\": %m", filename);
 }
 
@@ -435,6 +434,7 @@ _LoadLOs(ArchiveHandle *AH)
 {
 	Oid			oid;
 	lclContext *ctx = (lclContext *) AH->formatData;
+	CompressFileHandle *CFH;
 	char		tocfname[MAXPGPATH];
 	char		line[MAXPGPATH];
 
@@ -442,14 +442,14 @@ _LoadLOs(ArchiveHandle *AH)
 
 	setFilePath(AH, tocfname, "blobs.toc");
 
-	ctx->LOsTocFH = cfopen_read(tocfname, PG_BINARY_R);
+	CFH = ctx->LOsTocFH = InitDiscoverCompressFileHandle(tocfname, PG_BINARY_R);
 
 	if (ctx->LOsTocFH == NULL)
 		pg_fatal("could not open large object TOC file \"%s\" for input: %m",
 				 tocfname);
 
 	/* Read the LOs TOC file line-by-line, and process each LO */
-	while ((cfgets(ctx->LOsTocFH, line, MAXPGPATH)) != NULL)
+	while ((CFH->gets_func(line, MAXPGPATH, CFH)) != NULL)
 	{
 		char		lofname[MAXPGPATH + 1];
 		char		path[MAXPGPATH];
@@ -464,11 +464,11 @@ _LoadLOs(ArchiveHandle *AH)
 		_PrintFileData(AH, path);
 		EndRestoreLO(AH, oid);
 	}
-	if (!cfeof(ctx->LOsTocFH))
+	if (!CFH->eof_func(CFH))
 		pg_fatal("error reading large object TOC file \"%s\"",
 				 tocfname);
 
-	if (cfclose(ctx->LOsTocFH) != 0)
+	if (EndCompressFileHandle(ctx->LOsTocFH) != 0)
 		pg_fatal("could not close large object TOC file \"%s\": %m",
 				 tocfname);
 
@@ -488,15 +488,16 @@ _WriteByte(ArchiveHandle *AH, const int i)
 {
 	unsigned char c = (unsigned char) i;
 	lclContext *ctx = (lclContext *) AH->formatData;
+	CompressFileHandle *CFH = ctx->dataFH;
 
 	errno = 0;
-	if (cfwrite(&c, 1, ctx->dataFH) != 1)
+	if (CFH->write_func(&c, 1, CFH) != 1)
 	{
 		/* if write didn't set errno, assume problem is no disk space */
 		if (errno == 0)
 			errno = ENOSPC;
 		pg_fatal("could not write to output file: %s",
-				 get_cfp_error(ctx->dataFH));
+				 CFH->get_error_func(CFH));
 	}
 
 	return 1;
@@ -512,8 +513,9 @@ static int
 _ReadByte(ArchiveHandle *AH)
 {
 	lclContext *ctx = (lclContext *) AH->formatData;
+	CompressFileHandle *CFH = ctx->dataFH;
 
-	return cfgetc(ctx->dataFH);
+	return CFH->getc_func(CFH);
 }
 
 /*
@@ -524,15 +526,16 @@ static void
 _WriteBuf(ArchiveHandle *AH, const void *buf, size_t len)
 {
 	lclContext *ctx = (lclContext *) AH->formatData;
+	CompressFileHandle *CFH = ctx->dataFH;
 
 	errno = 0;
-	if (cfwrite(buf, len, ctx->dataFH) != len)
+	if (CFH->write_func(buf, len, CFH) != len)
 	{
 		/* if write didn't set errno, assume problem is no disk space */
 		if (errno == 0)
 			errno = ENOSPC;
 		pg_fatal("could not write to output file: %s",
-				 get_cfp_error(ctx->dataFH));
+				 CFH->get_error_func(CFH));
 	}
 }
 
@@ -545,12 +548,13 @@ static void
 _ReadBuf(ArchiveHandle *AH, void *buf, size_t len)
 {
 	lclContext *ctx = (lclContext *) AH->formatData;
+	CompressFileHandle *CFH = ctx->dataFH;
 
 	/*
-	 * If there was an I/O error, we already exited in cfread(), so here we
+	 * If there was an I/O error, we already exited in read_func(), so here we
 	 * exit on short reads.
 	 */
-	if (cfread(buf, len, ctx->dataFH) != len)
+	if (CFH->read_func(buf, len, CFH) != len)
 		pg_fatal("could not read from input file: end of file");
 }
 
@@ -573,7 +577,7 @@ _CloseArchive(ArchiveHandle *AH)
 
 	if (AH->mode == archModeWrite)
 	{
-		cfp		   *tocFH;
+		CompressFileHandle *tocFH;
 		pg_compress_specification compression_spec = {0};
 		char		fname[MAXPGPATH];
 
@@ -584,8 +588,8 @@ _CloseArchive(ArchiveHandle *AH)
 
 		/* The TOC is always created uncompressed */
 		compression_spec.algorithm = PG_COMPRESSION_NONE;
-		tocFH = cfopen_write(fname, PG_BINARY_W, compression_spec);
-		if (tocFH == NULL)
+		tocFH = InitCompressFileHandle(compression_spec);
+		if (tocFH->open_write_func(fname, PG_BINARY_W, tocFH))
 			pg_fatal("could not open output file \"%s\": %m", fname);
 		ctx->dataFH = tocFH;
 
@@ -598,7 +602,7 @@ _CloseArchive(ArchiveHandle *AH)
 		WriteHead(AH);
 		AH->format = archDirectory;
 		WriteToc(AH);
-		if (cfclose(tocFH) != 0)
+		if (EndCompressFileHandle(tocFH) != 0)
 			pg_fatal("could not close TOC file: %m");
 		WriteDataChunks(AH, ctx->pstate);
 
@@ -649,8 +653,8 @@ _StartLOs(ArchiveHandle *AH, TocEntry *te)
 
 	/* The LO TOC file is never compressed */
 	compression_spec.algorithm = PG_COMPRESSION_NONE;
-	ctx->LOsTocFH = cfopen_write(fname, "ab", compression_spec);
-	if (ctx->LOsTocFH == NULL)
+	ctx->LOsTocFH = InitCompressFileHandle(compression_spec);
+	if (ctx->LOsTocFH->open_write_func(fname, "ab", ctx->LOsTocFH))
 		pg_fatal("could not open output file \"%s\": %m", fname);
 }
 
@@ -667,9 +671,8 @@ _StartLO(ArchiveHandle *AH, TocEntry *te, Oid oid)
 
 	snprintf(fname, MAXPGPATH, "%s/blob_%u.dat", ctx->directory, oid);
 
-	ctx->dataFH = cfopen_write(fname, PG_BINARY_W, AH->compression_spec);
-
-	if (ctx->dataFH == NULL)
+	ctx->dataFH = InitCompressFileHandle(AH->compression_spec);
+	if (ctx->dataFH->open_write_func(fname, PG_BINARY_W, ctx->dataFH))
 		pg_fatal("could not open output file \"%s\": %m", fname);
 }
 
@@ -682,18 +685,19 @@ static void
 _EndLO(ArchiveHandle *AH, TocEntry *te, Oid oid)
 {
 	lclContext *ctx = (lclContext *) AH->formatData;
+	CompressFileHandle *CFH = ctx->LOsTocFH;
 	char		buf[50];
 	int			len;
 
-	/* Close the LO data file itself */
-	if (cfclose(ctx->dataFH) != 0)
-		pg_fatal("could not close LO data file: %m");
+	/* Close the LO data file itself */
+	if (EndCompressFileHandle(ctx->dataFH) != 0)
+		pg_fatal("could not close LO data file: %m");
 	ctx->dataFH = NULL;
 
 	/* register the LO in blobs.toc */
 	len = snprintf(buf, sizeof(buf), "%u blob_%u.dat\n", oid, oid);
-	if (cfwrite(buf, len, ctx->LOsTocFH) != len)
-		pg_fatal("could not write to LOs TOC file");
+	if (CFH->write_func(buf, len, CFH) != len)
+		pg_fatal("could not write to LOs TOC file");
 }
 
 /*
@@ -706,8 +710,8 @@ _EndLOs(ArchiveHandle *AH, TocEntry *te)
 {
 	lclContext *ctx = (lclContext *) AH->formatData;
 
-	if (cfclose(ctx->LOsTocFH) != 0)
-		pg_fatal("could not close LOs TOC file: %m");
+	if (EndCompressFileHandle(ctx->LOsTocFH) != 0)
+		pg_fatal("could not close LOs TOC file: %m");
 	ctx->LOsTocFH = NULL;
 }
 
diff --git a/src/bin/pg_dump/t/002_pg_dump.pl b/src/bin/pg_dump/t/002_pg_dump.pl
index 2eeef2a478..f3ba926321 100644
--- a/src/bin/pg_dump/t/002_pg_dump.pl
+++ b/src/bin/pg_dump/t/002_pg_dump.pl
@@ -94,7 +94,7 @@ my %pgdump_runs = (
 			command => [
 				'pg_restore', '-l', "$tempdir/compression_gzip_custom.dump",
 			],
-			expected => qr/Compression: 1/,
+			expected => qr/Compression: gzip/,
 			name     => 'data content is gzip-compressed'
 		},
 	},
@@ -239,8 +239,8 @@ my %pgdump_runs = (
 			command =>
 			  [ 'pg_restore', '-l', "$tempdir/defaults_custom_format.dump", ],
 			expected => $supports_gzip ?
-			qr/Compression: -1/ :
-			qr/Compression: 0/,
+			qr/Compression: gzip/ :
+			qr/Compression: none/,
 			name => 'data content is gzip-compressed by default if available',
 		},
 	},
@@ -264,8 +264,8 @@ my %pgdump_runs = (
 			command =>
 			  [ 'pg_restore', '-l', "$tempdir/defaults_dir_format", ],
 			expected => $supports_gzip ?
-			qr/Compression: -1/ :
-			qr/Compression: 0/,
+			qr/Compression: gzip/ :
+			qr/Compression: none/,
 			name => 'data content is gzip-compressed by default',
 		},
 		glob_patterns => [
diff --git a/src/tools/pginclude/cpluspluscheck b/src/tools/pginclude/cpluspluscheck
index e52fe9f509..da8e4861f4 100755
--- a/src/tools/pginclude/cpluspluscheck
+++ b/src/tools/pginclude/cpluspluscheck
@@ -150,6 +150,7 @@ do
 
 	# pg_dump is not C++-clean because it uses "public" and "namespace"
 	# as field names, which is unfortunate but we won't change it now.
+	test "$f" = src/bin/pg_dump/compress_gzip.h && continue
 	test "$f" = src/bin/pg_dump/compress_io.h && continue
 	test "$f" = src/bin/pg_dump/parallel.h && continue
 	test "$f" = src/bin/pg_dump/pg_backup_archiver.h && continue
diff --git a/src/tools/pgindent/typedefs.list b/src/tools/pgindent/typedefs.list
index 23bafec5f7..840191d680 100644
--- a/src/tools/pgindent/typedefs.list
+++ b/src/tools/pgindent/typedefs.list
@@ -428,6 +428,7 @@ CompiledExprState
 CompositeIOData
 CompositeTypeStmt
 CompoundAffixFlag
+CompressFileHandle
 CompressionLocation
 CompressorState
 ComputeXidHorizonsResult
@@ -1034,6 +1035,7 @@ GucStack
 GucStackState
 GucStringAssignHook
 GucStringCheckHook
+GzipCompressorState
 HANDLE
 HASHACTION
 HASHBUCKET
-- 
2.34.1
