PQconnect()

2024-07-31 Thread Igor Korot
Hi, ALL,
In the https://www.postgresql.org/docs/8.1/libpq.html#LIBPQ-CONNECT
it gives the "options" parameter, but it doesn't specify what options
are available.

Could someone please point me to a reference for this?

I am specifically looking for the "KeepAlive" option...

Thank you.


Re: PQconnect()

2024-07-31 Thread David G. Johnston
On Wed, Jul 31, 2024 at 9:28 AM Igor Korot  wrote:

> Hi, ALL,
> In the https://www.postgresql.org/docs/8.1/libpq.html#LIBPQ-CONNECT
> it gives the "options" parameter", but it doesn't specify what options
> are available.
>
You are looking at the 8.1 docs.  Don't do that.

Feel free to report back if you find a similar lack still exists in current
documentation.
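
For what it's worth, the current docs do list TCP keepalive settings among
the ordinary connection parameters: keepalives, keepalives_idle,
keepalives_interval, and keepalives_count. A minimal sketch of a connection
string using them (the host, database, user, and timing values here are
purely illustrative):

    host=db.example.com dbname=mydb user=app keepalives=1
    keepalives_idle=30 keepalives_interval=10 keepalives_count=3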

David J.


Fwd: Suggestions to overcome 'multixact "members" limit exceeded' in temporary tables

2024-07-31 Thread Jim Vanns
(resending to general since I believe I originally sent it to hackers by
mistake)

I've reached the limit of my understanding and of my attempts at correcting
my code's use of temporary tables in the face of multixact members, so I've
come to ask for your help! Here's a brief description of my software:

- A pool of N connection sessions, persistent for the duration of the
  program lifetime.
- Upon each session initialisation, a set of CREATE TEMPORARY TABLE ... ON
  COMMIT DELETE ROWS statements are made for bulk ingest.
- Each session is acquired by a thread for use when ingesting data, and
  therefore each temporary table remains until the session is terminated.
- The thread performs a COPY <temp table> FROM STDIN in binary format,
- then an INSERT INTO <main table> SELECT ... FROM <temp table> WHERE ...
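
To make that concrete, here is a minimal sketch of the per-session flow
(the table and column names are made up for illustration):

    -- Once per session:
    CREATE TEMPORARY TABLE staging (LIKE events INCLUDING DEFAULTS)
        ON COMMIT DELETE ROWS;

    -- Per ingest batch:
    BEGIN;
    COPY staging FROM STDIN (FORMAT binary);
    INSERT INTO events
    SELECT * FROM staging s
    WHERE NOT EXISTS (SELECT 1 FROM events e WHERE e.id = s.id);
    COMMIT;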

This has been working great for a while, with excellent throughput.
However, upon scaling up I eventually hit this error:

ERROR:  multixact "members" limit exceeded
DETAIL:  This command would create a multixact with 2 members, but the
remaining space is only enough for 0 members.
HINT:  Execute a database-wide VACUUM in database with OID 16467 with
reduced vacuum_multixact_freeze_min_age and
vacuum_multixact_freeze_table_age settings.
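
For reference, that hint boils down to running something like the following
(as a superuser, connected to the affected database):

    SET vacuum_multixact_freeze_min_age = 0;
    SET vacuum_multixact_freeze_table_age = 0;
    VACUUM;  -- database-wide, as the hint asks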

It took me quite a while to identify that the error appears to be coming
from the temporary tables (the other 'main' tables were being autovacuumed
OK). That makes sense, because the temporary tables have a long lifetime,
aren't autovacuumed, and are shared by successive transactions.

I first attempted to overcome this by introducing an initial step of always
creating the temporary table before the COPY (using ON COMMIT DROP), but
this led to a terrible performance degradation.
Next, I reverted that and instead introduced a VACUUM step every 100
(configurable) ingest operations.
Finally, I introduced a TRUNCATE step in addition to the occasional VACUUM,
since the TRUNCATE allowed the COPY option of FREEZE.
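
So each ingest batch now looks roughly like this (hypothetical names again;
note that COPY FREEZE requires the table to have been created or truncated
in the current subtransaction, which the TRUNCATE satisfies):

    BEGIN;
    TRUNCATE staging;
    COPY staging FROM STDIN (FORMAT binary, FREEZE true);
    INSERT INTO events SELECT * FROM staging;
    COMMIT;

    -- and, every 100 batches or so:
    VACUUM staging;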

The new overhead appears minimal until after several hours, when I again
hit a performance degradation, this time seemingly dominated by the TRUNCATE.

My questions are:

1) Is the VACUUM necessary if I use TRUNCATE + COPY FREEZE (on the
temporary table)?
2) Is there really any benefit to using FREEZE here, or is it best to just
VACUUM the temporary tables occasionally?
3) Is there a better way of managing all this!? Perhaps re-CREATING the
temporary tables every day or something?

I understand that I could create a Linux tmpfs partition as a tablespace
for the temporary tables, and that might speed up the TRUNCATE, but that
seems like a hack and I'd rather not do it at all if it's avoidable.

Thanks for your help,

Jim

PS. PG version in use is 15.4 if that matters here

-- 
Jim Vanns
Principal Production Engineer
Industrial Light & Magic, London


Re: Suggestions to overcome 'multixact "members" limit exceeded' in temporary tables

2024-07-31 Thread Jim Vanns
I've been able to observe that the performance degradation with TRUNCATE
appears to happen when other ancillary processes are running that are also
heavy users of temporary tables. If I used an exclusive tablespace, would
that improve things?
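
For concreteness, by an exclusive tablespace I mean something along these
lines (the path and name are hypothetical), with temp_tablespaces set so
that only our sessions' temporary tables land there:

    CREATE TABLESPACE ingest_temp LOCATION '/mnt/scratch/pg_temp';
    SET temp_tablespaces = 'ingest_temp';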

Cheers

Jim


Re: Monitoring DB size

2024-07-31 Thread semab tariq
Hi Manuel

Sorry for the late reply; I saw this email just today. Anyway, how about
using something like this?

postgres=# SELECT
postgres-#     pg_size_pretty(pg_database_size('postgres')) AS database_size,
postgres-#     pg_size_pretty(pg_total_relation_size('accounts')) AS table_size,
postgres-#     pg_size_pretty(tuple_len) AS live_tuple_size,
postgres-#     pg_size_pretty(dead_tuple_len) AS dead_tuple_size,
postgres-#     pg_size_pretty(free_space) AS free_space
postgres-# FROM
postgres-#     pgstattuple('accounts');
 database_size | table_size | live_tuple_size | dead_tuple_size | free_space
---------------+------------+-----------------+-----------------+------------
 8500 kB       | 40 kB      | 80 bytes        | 80 bytes        | 7988 bytes
(1 row)
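
(Note that pgstattuple ships as a contrib extension, so you may need to run
CREATE EXTENSION pgstattuple; first.)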


Thanks and regards
Semab


On Tue, Jul 16, 2024 at 4:38 PM Shenavai, Manuel 
wrote:

> Thanks for the suggestion. I think this will not help us to differentiate
> between live tuples, dead tuples and free space.
>
> Best regards,
>
> Manuel
>
> From: Torsten Förtsch
> Sent: 15 July 2024 18:59
> To: Shenavai, Manuel
> Cc: pgsql-general@lists.postgresql.org
> Subject: Re: Monitoring DB size
>
> A slightly different approach than you might expect. For larger DBs you'd
> likely want to exclude base from the du output and use pg_database_size()
> in addition.
>
> postgres(2454884) =# create temp table xx(dir text, sz bigint);
> CREATE TABLE
> Time: 2.587 ms
> postgres(2454884) =# copy xx(sz, dir) from program 'du -s *';
>
> COPY 21
> Time: 3.793 ms
> postgres(2454884) =# select * from xx;
>          dir          |  sz
> ----------------------+-------
>  base                 | 26280
>  global               |   568
>  pg_commit_ts         |    12
>  pg_dynshmem          |     4
>  pg_logical           |    16
>  pg_multixact         |    28
>  pg_notify            |     4
>  pg_replslot          |     4
>  pg_serial            |     4
>  pg_snapshots         |     4
>  pg_stat              |     4
>  pg_stat_tmp          |     4
>  pg_subtrans          |    12
>  pg_tblspc            |     4
>  pg_twophase          |     4
>  PG_VERSION           |     4
>  pg_wal               | 16392
>  pg_xact              |    12
>  postgresql.auto.conf |     4
>  postmaster.opts      |     4
>  postmaster.pid       |     4
> (21 rows)
>
> Time: 0.282 ms
>
> On Mon, Jul 15, 2024 at 4:42 PM Shenavai, Manuel 
> wrote:
>
> Hi everyone,
>
> We currently capture the DB size (pg_database_size), which gives the “Disk
> space used by the database with the specified name”. Is it possible to
> further split this into how much space is occupied by live tuples, dead
> tuples and free space?
>
> We would like to have something like:
>
> DB size: 200 MB, live tuples: 100 MB, dead tuples: 20 MB, free space: 80 MB
>
> Is this possible?
>
> Best regards,
>
> Manuel
>