client backwards compatible with older servers

2021-04-30 Thread Rajesh Madiwale
Hi Team,

Do we have any reference documentation that describes the Postgres client's
backward compatibility?
Example: I have PG server 11.8 installed and PG client 13 installed; will I
face any issues, or are there any limitations if I use it?

Regards,
 Rajesh Madiwale.


Re: client backwards compatible with older servers

2021-04-30 Thread Vijaykumar Jain
I am not sure which PostgreSQL client is being referred to.
If it is the psql command line, then the notes section of the docs below
says:

https://www.postgresql.org/docs/current/app-psql.html


psql works best with servers of the same or an older major version.
Backslash commands are particularly likely to fail if the server is of a
newer version than psql itself. However, backslash commands of the \d family
should work with servers of versions back to 7.4, though not necessarily
with servers newer than psql itself. The general functionality of running
SQL commands and displaying query results should also work with servers of
a newer major version, but this cannot be guaranteed in all cases.

If you want to use psql to connect to several servers of different major
versions, it is recommended that you use the newest version of psql.
Alternatively, you can keep around a copy of psql from each major version
and be sure to use the version that matches the respective server. But in
practice, this additional complication should not be necessary.
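
As a hedged illustration of the advice quoted above (the helper names here
are invented for the example, not part of any PostgreSQL tooling), the
version-matching rule can be sketched like this:

```python
# Sketch of the docs' advice: psql works best when its major version is
# the same as, or newer than, the server's major version.
def psql_is_safe(client_major: int, server_major: int) -> bool:
    """True when psql's backslash commands are expected to work fully."""
    return client_major >= server_major

def pick_psql(available_majors, server_major):
    """From the locally installed psql versions, pick the newest one that
    is at least as new as the server; fall back to the newest available."""
    candidates = [v for v in available_majors if v >= server_major]
    return max(candidates) if candidates else max(available_majors)
```

So for the original question (server 11.8, client 13) the client is newer
than the server, which is the recommended combination.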

If it is a language-specific client, I guess you may have to talk to the
maintainers of those libraries/clients.

Thanks,
Vijay

On Fri, Apr 30, 2021, 1:21 PM Rajesh Madiwale wrote:

> Hi Team,
>
> Do we have any reference documentation which tells above postgres client's
> backward compatibility ?
> Example: I am having PG server installed is PG-11.8 and PG client
> installed is PG-13 , will I face any issue? or are there any limitations if
> I use it ?
>
> Regards,
>  Rajesh Madiwale.
>
>


Re: Oracle vs. PostgreSQL - a comment

2021-04-30 Thread Ludovico Caldara
On Thu, Apr 29, 2021 at 19:13, Paul Förster <paul.foers...@gmail.com> wrote:

> nothing of it was FUD. It was a comparison done on a single machine.
> Then, I drew my conclusions from that and added my personal view. You don't
> necessarily have to agree with my opinion, nor did I ask you to agree. But
> it's definitely not FUD!
>

Features are not an opinion. I am not trying to convince you that Oracle
is better than PostgreSQL (the official PostgreSQL mailing lists are not a
good place for that ^^).
But I can't stand it when people advocate against Oracle (or, FWIW, any
technology) using false claims alongside understandable arguments.
Oracle is "heavy" (but fast in application performance), it takes time to
install it. Stopping and starting the instances takes time. Patching can be
painful if you have encountered too many bugs in the past and need to merge
the patches. It is the most expensive database in the world (at least,
looking at what you pay and not what you get). It is complex for the DBAs
and the learning curve gets steeper and steeper with more and more features
added at every release. All these points are true.

Now, let's keep this momentum and continue with more incontestable truths:

> Online Datafile Movement has existed since 12cR1. 8 years!
> https://oracle-base.com/articles/12c/online-move-datafile-12cr1
>

> yes, I know. But did you try to move SYSTEM, UNDO or TEMP tablespace or
> online redo log files? Did you try to move the *whole* database? You can
> move all data/index tablespace files with that (one by one, which is
> tiresome with many files), but you can't move the essential tablespace
> files! Well, you can move the online redo log files by creating new ones
> and dropping the old ones, but that's about it. You still can't move the
> essential tablespace files. I admit that I didn't try that with 19.x, but it
> wasn't possible up to now.
>
>
Tried? I blogged about it 8 years ago:
http://www.ludovicocaldara.net/dba/oracle-database-12c-move-datafile-online/
And I use this feature extensively like tons of DBAs out there.

Some more examples:

--- UNDO, move online back and forth
SQL>  alter database move datafile
'+DATA/_MGMTDB/DATAFILE/undotbs1.279.1071175797' to '/tmp/undotbs1.dbf';

Database altered.

--- SYSTEM, move online back and forth
SQL> ALTER DATABASE MOVE DATAFILE
'/u02/app/oracle/oradata/cdb1/system01.dbf' TO '+DATA';

Database altered.

SQL> alter database move datafile '/tmp/system01.dbf' to '+DATA';

Database altered.

-- TEMPFILE: add a new one and drop the old one
SQL> alter tablespace temp add tempfile '/tmp/temp01.dbf' size 50M;

Tablespace altered.

SQL> alter database tempfile '+DATA/_MGMTDB/TEMPFILE/temp.284.1070901055'
offline;

Database altered.

SQL> alter database tempfile '+DATA/_MGMTDB/TEMPFILE/temp.284.1070901055'
drop including datafiles;

Database altered.

> Well, you can move the online redo log files by creating new ones and
> dropping the old ones but that's about it.

What do you mean, "but that's about it"? Redo logs are not datafiles: new
ones are created when the old ones are full, just like WAL files. You
decide where to put them and just archive the old ones.

And if I want to move an entire database... with ASM, 100% with Oracle
technology, you can switch from one storage to another without instance
downtime or brownouts. Or you can do some junior-level scripting and do
the online datafile movement automatically between different filesystems.
Control files are the only thing that you cannot move without bouncing.
Truth (it's in my blog as well). But with ASM you can change disks online.
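
The "junior-level scripting" mentioned above could be sketched like this
(a hedged illustration in Python that only generates the SQL statements
one would then run; the datafile paths are made up for the example):

```python
# Generate an Oracle "ALTER DATABASE MOVE DATAFILE" statement for each
# datafile, relocating it into a target directory on another filesystem.
def move_statements(datafiles, new_dir):
    stmts = []
    for path in datafiles:
        filename = path.rsplit("/", 1)[-1]  # keep the original file name
        stmts.append(
            f"ALTER DATABASE MOVE DATAFILE '{path}' TO '{new_dir}/{filename}';"
        )
    return stmts
```

In practice one would feed this from DBA_DATA_FILES and execute each
statement, letting the database copy the file online.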


> > PostgreSQL must be stopped in order to move the database to a new path,
> and if it is to a new filesystem, you need the time for a full copy of the
> data, unless you do it via backup and recovery to reduce the downtime.
>
> that's not true. pg_basebackup it while running to a new destination. Set
> up primary_conn_info and replication and start up the copy. Once it's in
> sync and you have a physical copy, change the port in postgresql.conf of
> the copy, stop both and then only launch the copy. Promote it then. The
> switch takes 2-3 secs of downtime.
>

I did tell the truth, and I quote myself again in case you skipped my
sentence: "unless you do it via backup and recovery to reduce the
downtime." That's what a replica is: a backup that you keep recovering
until you switch to it.
This applies to Oracle as well, BTW, in case you want to relocate to
another server. It might take longer to relocate, but there are technologies
that make it transparent to the application (complex to implement, yes, but
they exist).


> again no, you can't move SYSTEM, UNDO and TEMP! Also, what I mentioned was
> a quick and dirty hack and is not recommended.
>

Read above. Read the docs. Try it yourself. Ask your friends. Do something,
but stop saying that it's not possible.


"Skipping" BitmapHeapScan for simple counts

2021-04-30 Thread Maarten Mortier
Hello,

We have a table with raw jsonb data in a field "data".

We have a jsonb_ops GIN index on this field, because we query the following
"freeform" match:

SELECT COUNT(id)
FROM records
WHERE data @@ '$.**.label == "person"';

When analysing this plan, it seems the BitmapIndexScan for this is fairly
fast (as it uses the GIN index, which is quite powerful), but the
BitmapHeapScan that follows is slow when the matching records span a lot
of heap blocks.

I can understand why this happens: the BitmapIndexScan is supposed to be
somewhat optimistic, and rows need to be discarded by properly going
through the data in a BitmapHeapScan. This optimism can (for other types of
indexes) be somewhat tamed by vacuuming, which allows index-only scans
that can establish row visibility without visiting the heap.

However, in the case of jsonb queries above, there is no need to do the
BitmapHeapScan really: the BitmapIndexScan has the correct row count,
always.

So, I would like for us to just skip it. If we need the data, we can do
LIMIT/OFFSET windows to tame the BitmapHeapScan, but having a full count
would be nice.

Can we skip this in some way? Can I find the right spot in the code where
this could be skipped?

Thanks so much for any help,

-- 
Maarten


Access a newer Version of PGDB (v13) with an older libpq (v10 x86)

2021-04-30 Thread Wolfgang Rißler

Hello community,

we are (unfortunately) on Windows with our DB and our application.
Now we want to switch to a new version of Postgres, because we want to
use logical replication (to replace our old Postgres 8.3 with
Slony replication).

The problem is that our application (IDE MS Visual Studio, C++) has to
be 32-bit, because of some old 32-bit DLLs which we can't kick out at the
moment.

So I compiled libpqxx with the latest 32-bit libpq (which is v10).

I tried to access a 64-bit PostgreSQL v10 database, and it worked very
nicely. So my question is: could (or should) we rather use PostgreSQL v13
(to get the newest), or should we stay on PostgreSQL v10 (to be more
compatible with our libpq/libpqxx)?


Thank you,
woelfchen72

--

- may the source be with you -




Re: Access a newer Version of PGDB (v13) with an older libpq (v10 x86)

2021-04-30 Thread Laurenz Albe
On Fri, 2021-04-30 at 10:49 +0200, Wolfgang Rißler wrote:
> we are (unfortunately) on windows with our DB and our application.
> Now we want to switch to a new version of Postgres, because we want to 
> use logical replication (to replace our old Postgres 8.3 with 
> slony-replication).
> 
> The problem is, that our application (IDE MS-VisualStudio, C++) has to 
> be 32bit, because of some old 32bit-dll's, which we cant kick out at the 
> moment.
> So I compiled a libpqxx with the last 32bit libpq (which is v10).
> 
> I tried to access a PostgresDB 64bit v10, and it worked very nice.
> So my question is, could (or should) we rather use PostgresDB v13 (to 
> get the newest) or should we stay on PostgresDB v10 (to be more 
> compatible to our libpq/libpqxx).

Use the latest PostgreSQL client if you can, it shouldn't be a problem.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Streaming replica failure

2021-04-30 Thread Vijaykumar Jain
Were there any issues with hardware?
Memory/storage?
I am not sure, but it looks like data loss to me.

Do you have admin access?

Can you check dmesg, the system logs, hypervisor logs, etc. and see if there
are any errors related to memory corruption?
Because if there are hardware issues, this will happen again, IMHO.
But if this is not a hardware issue, then I guess one of the experts may
have to chime in.

Thanks,
Vijay



On Tue, Apr 27, 2021, 3:31 PM Aleš Zelený  wrote:

> Hello,
>
> we are using PostgreSQL 12.4 on CentOS 7. The hot standby failed:
>
> 2021-04-24 09:19:27 CEST [20956]: [747-1] user=,db=,host=,app= LOG:
>  recovery restart point at 3D8C/352B4CE8
> 2021-04-24 09:19:27 CEST [20956]: [748-1] user=,db=,host=,app= DETAIL:
>  Last completed transaction was at log time 2021-04-24 09:19:27.221313+02.
> 2021-04-24 09:20:57 CEST [20956]: [749-1] user=,db=,host=,app= LOG:
>  restartpoint starting: time
> 2021-04-24 09:24:27 CEST [20956]: [750-1] user=,db=,host=,app= LOG:
>  restartpoint complete: wrote 171233 buffers (13.4%); 0 WAL file(s) added,
> 68 removed, 0 recycled; write
> =209.663 s, sync=0.012 s, total=209.963 s; sync files=283, longest=0.001
> s, average=0.000 s; distance=684762 kB, estimate=684762 kB
> 2021-04-24 09:24:27 CEST [20956]: [751-1] user=,db=,host=,app= LOG:
>  recovery restart point at 3D8C/5EF6B858
> 2021-04-24 09:24:27 CEST [20956]: [752-1] user=,db=,host=,app= DETAIL:
>  Last completed transaction was at log time 2021-04-24 09:24:27.288115+02.
> 2021-04-24 09:25:15 CEST [20955]: [11-1] user=,db=,host=,app= WARNING:
>  page 366603 of relation base/20955/10143636 is uninitialized
> 2021-04-24 09:25:15 CEST [20955]: [12-1] user=,db=,host=,app= CONTEXT:
>  WAL redo at 3D8C/D79F6500 for Heap2/VISIBLE: cutoff xid 3806260577 flags
> 0x01
> 2021-04-24 09:25:15 CEST [20955]: [13-1] user=,db=,host=,app= PANIC:  WAL
> contains references to invalid pages
> 2021-04-24 09:25:15 CEST [20955]: [14-1] user=,db=,host=,app= CONTEXT:
>  WAL redo at 3D8C/D79F6500 for Heap2/VISIBLE: cutoff xid 3806260577 flags
> 0x01
> 2021-04-24 09:25:16 CEST [20953]: [11-1] user=,db=,host=,app= LOG:
>  startup process (PID 20955) was terminated by signal 6: Neúspěšně ukončen
> (SIGABRT)
> 2021-04-24 09:25:16 CEST [20953]: [12-1] user=,db=,host=,app= LOG:
>  terminating any other active server processes
>
> The relation base/20955/10143636 is a standard table.
>
> Version details:
> postgres=# select version();
>  version
>
>
> -
>  PostgreSQL 12.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5
> 20150623 (Red Hat 4.8.5-39), 64-bit
> (1 row)
>
> Settings:
> postgres=# select name, setting, category from pg_settings where category
> = 'Write-Ahead Log / Settings';
>   name  |  setting  |  category
> +---+
>  commit_delay   | 0 | Write-Ahead Log / Settings
>  commit_siblings| 5 | Write-Ahead Log / Settings
>  fsync  | on| Write-Ahead Log / Settings
>  full_page_writes   | on| Write-Ahead Log / Settings
>  synchronous_commit | on| Write-Ahead Log / Settings
>  wal_buffers| 2048  | Write-Ahead Log / Settings
>  wal_compression| off   | Write-Ahead Log / Settings
>  wal_init_zero  | on| Write-Ahead Log / Settings
>  wal_level  | logical   | Write-Ahead Log / Settings
>  wal_log_hints  | off   | Write-Ahead Log / Settings
>  wal_recycle| on| Write-Ahead Log / Settings
>  wal_sync_method| fdatasync | Write-Ahead Log / Settings
>  wal_writer_delay   | 200   | Write-Ahead Log / Settings
>  wal_writer_flush_after | 128   | Write-Ahead Log / Settings
> (14 rows)
>
> pg_waldump output:
> -bash-4.2$ /usr/pgsql-12/bin/pg_waldump 00013D8C00D7 2>&1 |
> tail
> rmgr: Btree       len (rec/tot): 64/64, tx: 3812802559, lsn: 3D8C/D7CF5A98, prev 3D8C/D7CF5A58, desc: INSERT_LEAF off 360, blkref #0: rel 1663/20955/11280092 blk 19509
> rmgr: Transaction len (rec/tot): 46/46, tx: 3812802557, lsn: 3D8C/D7CF5AD8, prev 3D8C/D7CF5A98, desc: COMMIT 2021-04-24 09:25:16.160687 CEST
> rmgr: Heap        len (rec/tot): 159/159, tx: 3812802559, lsn: 3D8C/D7CF5B08, prev 3D8C/D7CF5AD8, desc: INSERT off 8 flags 0x08, blkref #0: rel 1663/20955/11280066 blk 165603
> rmgr: Btree       len (rec/tot): 64/64, tx: 3812802559, lsn: 3D8C/D7CF5BA8, prev 3D8C/D7CF5B08, desc: INSERT_LEAF off 317, blkref #0: rel 1663/20955/11280073 blk 15340
> rmgr: Btree       len (rec/tot): 64/64, tx: 3812802559, lsn: 3D8C/D7CF5BE8, prev 3D8C/D7CF5BA8, desc: INSERT_LEAF off 130, blkref #0: rel 1663/20955/11280091 blk 22003
> rmgr: Heap        len (rec/tot): 80/80, tx: 38128025

Re: -1/0 virtualtransaction

2021-04-30 Thread Thomas Munro
On Wed, Apr 28, 2021 at 2:25 AM Mike Beachy  wrote:
> On Tue, Apr 27, 2021 at 2:56 AM Laurenz Albe  wrote:
>>
>> Not sure, but do you see prepared transactions in "pg_prepared_xacts"?
>
> No, the -1 in the virtualtransaction 
> (https://www.postgresql.org/docs/11/view-pg-locks.html) for pg_prepared_xacts 
> was another clue I saw! But, it seems more or less a dead end as I have 
> nothing in pg_prepared_xacts.
>
> Thanks for the idea, though.

There is another way to get a pg_locks row that shows -1/0 there: if we run
out of SERIALIZABLEXACT objects, we transfer the locks of the oldest
committed transaction to a single SERIALIZABLEXACT object called
OldCommittedSxact, and it has an invalid virtual xid.  You can see
this if you recompile with TEST_SUMMARIZE_SERIAL defined to force that
path, and then run three overlapping transactions like this:

tx1: BEGIN;
tx1: SELECT * FROM t WHERE id = 42;
tx2:   BEGIN;
tx2:   SELECT * FROM t WHERE id = 999;
tx1: COMMIT;

Even though it committed, at this point we still see tx1's locks, along
with tx2's.

tx3: BEGIN;
tx3: SELECT * FROM t WHERE id = 1234;

At this point we see tx1's locks still but their vxid has changed to -1/0.

> I still need to put more effort into Tom's idea about SIReadLock hanging out 
> after the transaction, but some evidence pointing in this direction is that 
> I've reduced the number of db connections and found that the '-1/0' locks 
> will eventually go away! I interpret this as the db needing to find time when 
> no overlapping read/write transactions are present. This doesn't seem 
> completely correct, as I don't have any long lived transactions running while 
> these locks are hanging out. Confusion still remains, for sure.

But do you have lots of short overlapping transactions so that there
is never a moment where there are zero transactions running?

As mentioned, locks (and transactions, and conflicts) hang around
after you commit.  That's because things that your finished
transaction saw can cause transactions that are still running to
become uncommittable, by forming part of a cycle.   The rules for when
the locks can be freed change once you reach the degraded "summarised"
mode, though, due to lack of bookkeeping space.  Not sure of the exact
details without rereading the source code 42 times with much coffee,
but it's something like: summarised locks can only be freed at times
when there are zero active read/write serializable transactions,
because that's when "CanPartialClearThrough" advances, while in the
normal case they can be freed as soon as there are no SSI snapshots
that were taken before it committed, because that's when
"SxactGlobalXmin" advances.

The number of SERIALIZABLEXACT objects is (max_connections +
max_prepared_transactions) * 10.  So, you could try increasing
max_connections (without increasing the actual number of connections)
to see if you can get to a point where you don't see these invalid
virtual xids, and then maybe it'll be able to clean up locks more
aggressively.
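
The sizing rule quoted above is simple enough to write down directly (a
sketch, with the function name invented for illustration):

```python
# PostgreSQL allocates a fixed pool of SERIALIZABLEXACT objects at startup
# for SSI bookkeeping: (max_connections + max_prepared_transactions) * 10.
def serializablexact_slots(max_connections, max_prepared_transactions):
    return (max_connections + max_prepared_transactions) * 10

# With the defaults (max_connections=100, max_prepared_transactions=0)
# that is 1000 slots; raising max_connections to 200 doubles the pool
# without requiring any additional real connections.
```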




Re: Access a newer Version of PGDB (v13) with an older libpq (v10 x86)

2021-04-30 Thread Tom Lane
Wolfgang Rißler  writes:
> The problem is, that our application (IDE MS-VisualStudio, C++) has to 
> be 32bit, because of some old 32bit-dll's, which we cant kick out at the 
> moment.
> So I compiled a libpqxx with the last 32bit libpq (which is v10).

Uh ... what's this about "last 32-bit libpq"?

I can believe that a particular packager (EDB, say) might not be shipping
prebuilt 32-bit binaries anymore.  But if you are in a position to compile
your own libraries then you can certainly build any release you want as
32-bit.

I would recommend trying to use a reasonably late-vintage libpq; we do
fix bugs in it on a regular basis.

The common stumbling block for cross-version situations is that the
client makes assumptions about system catalog contents that are not
valid in some other server release.  libpq proper doesn't really touch
the catalogs, so it's mostly impervious to that problem; but you'll need
to test your applications.

regards, tom lane




Re: "Skipping" BitmapHeapScan for simple counts

2021-04-30 Thread Tom Lane
Maarten Mortier  writes:
> We have a jsonb_ops GIN index on this field, because we query the following
> "freeform" match:

> SELECT COUNT(id)
> FROM records
> WHERE data @@ '$.**.label == "person"';

> However, in the case of jsonb queries above, there is no need to do the
> BitmapHeapScan really: the BitmapIndexScan has the correct row count,
> always.

This claim is wrong (in general) because of the need to verify visibility
of the heap row.  You might be able to look aside at the visibility map
and find out that a particular heap page is all-visible, but unless the
table is mostly static that can easily be a net waste of cycles.  The
reason we make a distinction between plain indexscans and index-only
scans is to encode in the plan whether the planner thought such checks
were likely to be a win.  To transpose the idea into bitmapscans, you'd
need similar infrastructure, which is not going to be a hack-it-in-one-
place project.

More to the point: I doubt this will help at all for the query above,
because I doubt that the index is lossless for this query.  From what
I recall of GIN for jsonb, what it will return is TIDs for tuples that
contain both "label" and "person".  Whether they have the relationship
of "person" being the value of a "label" tag can't be told without
looking at the actual JSON value.
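
Tom's lossiness point can be illustrated with a toy model (a hedged Python
sketch, not PostgreSQL's actual GIN machinery; both helper functions are
invented for the example):

```python
# Toy model of why the BitmapHeapScan is needed: a jsonb_ops GIN index only
# records which keys and scalar values occur *somewhere* in each document,
# so the index scan returns candidates that must be rechecked against the
# real JSON to verify the path relationship.

def gin_candidate(doc, terms):
    """Candidate row if every indexed term occurs anywhere in the document."""
    def tokens(v):
        if isinstance(v, dict):
            for k, sub in v.items():
                yield k
                yield from tokens(sub)
        elif isinstance(v, list):
            for sub in v:
                yield from tokens(sub)
        else:
            yield v
    found = set(tokens(doc))
    return all(t in found for t in terms)

def recheck(doc):
    """The real predicate from the query: $.**.label == "person"."""
    if isinstance(doc, dict):
        if doc.get("label") == "person":
            return True
        return any(recheck(v) for v in doc.values())
    if isinstance(doc, list):
        return any(recheck(v) for v in doc)
    return False

# This row contains both "label" and "person" (so it is an index candidate),
# but they are unrelated, so only the heap recheck can discard it:
false_positive = {"label": "animal", "name": "person"}
```

A count taken from the index alone would include such rows, which is why
the bitmap index scan's row count cannot be trusted as the final answer.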

regards, tom lane




Re: pg_upgrade and wraparound

2021-04-30 Thread Bruce Momjian
On Sat, Mar 13, 2021 at 08:43:54AM -0500, Jan Wieck wrote:
> On 3/12/21 8:30 PM, Michael Paquier wrote:
> > Hi Jan,
> > 
> > On Fri, Mar 12, 2021 at 06:13:33PM -0500, Jan Wieck wrote:
> > > One of the things in my way is that when using pg_resetwal to put the
> > > NextXID way into the future (to push the old cluster close to wraparound 
> > > for
> > > example), the postmaster won't start because it doesn't have the pg_xact
> > > files for that around. Should pg_resetwal create the files in the gap
> > > between the old NextXID and the new one?
> > 
> > I think that you should add this patch to the next commit fest to
> > track it properly:
> > https://commitfest.postgresql.org/33/
> > --
> > Michael
> > 
> 
> Actually this is the wrong patch (this one is for PG-12, not for HEAD). Will
> update later today.
> 
> But yes, putting it into the next commitfest after initial discussion is the
> plan.

Uh, were either of these things done?

-- 
  Bruce Momjian  https://momjian.us
  EDB  https://enterprisedb.com

  If only the physical world exists, free will is an illusion.





Re: -1/0 virtualtransaction

2021-04-30 Thread Mike Beachy
On Fri, Apr 30, 2021 at 7:12 AM Thomas Munro  wrote:
> But do you have lots of short overlapping transactions so that there
> is never a moment where there are zero transactions running?

Yeah, that almost certainly explains it.

Thanks very much for the explanation about the summarized locks.

> The number of SERIALIZABLEXACT objects is (max_connections +
> max_prepared_transactions) * 10.  So, you could try increasing
> max_connections (without increasing the actual number of connections)
> to see if you can get to a point where you don't see these invalid
> virtual xids, and then maybe it'll be able to clean up locks more
> aggressively.

Aha! I hadn't considered that some parameter besides
max_pred_locks_per_transaction would come into play.  I'll give this a
shot.

Thanks,
Mike




Re: Access a newer Version of PGDB (v13) with an older libpq (v10 x86)

2021-04-30 Thread Wolfgang Rißler

On 30.04.2021 at 13:00, Laurenz Albe wrote:

> On Fri, 2021-04-30 at 10:49 +0200, Wolfgang Rißler wrote:
> > we are (unfortunately) on windows with our DB and our application.
> > Now we want to switch to a new version of Postgres, because we want to
> > use logical replication (to replace our old Postgres 8.3 with
> > slony-replication).
> >
> > The problem is, that our application (IDE MS-VisualStudio, C++) has to
> > be 32bit, because of some old 32bit-dll's, which we cant kick out at the
> > moment.
> > So I compiled a libpqxx with the last 32bit libpq (which is v10).
> >
> > I tried to access a PostgresDB 64bit v10, and it worked very nice.
> > So my question is, could (or should) we rather use PostgresDB v13 (to
> > get the newest) or should we stay on PostgresDB v10 (to be more
> > compatible to our libpq/libpqxx).
>
> Use the latest PostgreSQL client if you can, it shouldn't be a problem.
>
> Yours,
> Laurenz Albe



Thanks for answering, but
I didn't ask about a PostgreSQL client. I asked about the C-language
library libpq (which is maybe used by some clients).
I would use the latest libpq, but I can't, because our C++ project is stuck
on a 32-bit architecture. The latest 32-bit libpq is the one from v10.

Can I access a PostgreSQL v13 database with this libpq safely?



--

Wolfgang Rißler
mailto: wolfgang.riss...@freenet.de
mobil: +49 1520 9191759
- may the source be with you -




Re: Access a newer Version of PGDB (v13) with an older libpq (v10 x86)

2021-04-30 Thread Wolfgang Rißler

On 30.04.2021 at 13:41, Laurenz Albe wrote:
> Please reply to the list.

Sorry, my mistake. I fixed this.

> On Fri, 2021-04-30 at 13:20 +0200, Wolfgang Rißler wrote:
>> Thanks for answering, but
>> I didn't ask for a PostgreSQL client. I asked for the c-language-lib
>> libpq. (which is maybe used by some clients).
>
> That *is* a client (in my book).
OK, I agree. Sorry for the admonition.

>
>> I would use the latest libpq, but I cant, because our c++-project stucks
>> in 32bit architecture. The latest 32bit pqlib is that from v10.
>
> You can build libpq on a 32-bit system.  No problem.
Yes, it is (a problem): I lack the experience. Until now I took libpq from
the current Windows installer from EDB. But they don't provide 32-bit
anymore; PG 10 is the last one.
Compiling libpqxx on that base with Visual Studio and CMake works like
a charm.

Can you give a hint how to build a 32-bit libpq v13 on Windows with Visual
Studio?

>
>> Can I access a PostgreSQL v13 Database with this pqlib safely?
>
> Yes.
This makes me happy.
Thank you.

[snip]


--

- may the source be with you -