Re: Implementing product-aggregate

2024-03-14 Thread Jan Kohnert
Hi again,

On Monday, 11 March 2024 at 10:16:33 CET, Jan Kohnert wrote:
> create aggregate prod(numeric) (sfunc = numeric_mul, stype = numeric)

my basic idea was creating a function

create function multiply(arg1 numeric, arg2 numeric)
returns numeric 
language sql
immutable
returns null on null input
return arg1 * arg2;

and use that function instead of the undocumented numeric_mul as the sfunc in 
the aggregate definition.
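Putting the pieces together, the whole thing might look like this (a sketch based on the definitions in this thread; the initcond is an addition, untested):

```sql
-- Plain SQL multiplication function, usable as the aggregate's sfunc.
create function multiply(arg1 numeric, arg2 numeric)
returns numeric
language sql
immutable
returns null on null input
return arg1 * arg2;

-- Product aggregate on top of it. With initcond = 1 the product of
-- zero rows is 1; since the sfunc is strict, NULL inputs are skipped.
-- Leaving initcond out also works: the first row then seeds the state,
-- and aggregating zero rows returns NULL instead.
create aggregate prod(numeric) (
    sfunc    = multiply,
    stype    = numeric,
    initcond = 1
);

-- Example usage:
-- select prod(x) from (values (2), (3), (4)) t(x);  -- 24
```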

Then again, this seems odd, too, since we're only reimplementing basic stuff 
that's already there.

I'm still undecided...

-- 
Kind regards, Jan


Re: Implementing product-aggregate

2024-03-14 Thread Tom Lane
Jan Kohnert  writes:
> On Monday, 11 March 2024 at 10:16:33 CET, Jan Kohnert wrote:
>> create aggregate prod(numeric) (sfunc = numeric_mul, stype = numeric)
> ...
> Then again, this seems odd, too, since we're only reimplementing basic stuff
> that's already there.

I wouldn't be concerned about relying on numeric_mul (or any of the
other functions underlying standard operators).  They're undocumented
only because documenting both the functions and the operators would
bloat the documentation to little purpose.  Using one makes your code
not so portable to non-Postgres DBMSes, but you already crossed that
bridge by deciding to use a custom aggregate.

A bigger question is whether this implementation actually has the
properties you want --- notably, maybe you should be using type
float8 not numeric.  Numeric would get pretty slow and be carrying
an awful lot of decimal places by the end of the query, I fear.
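The scale growth being described here is easy to see in a quick sketch (untested; exact float8 digits will vary):

```sql
-- numeric multiplication keeps every decimal place, so the scale of a
-- running product grows with each factor:
select 1.1::numeric * 1.1 * 1.1 * 1.1;   -- 1.4641 (exact, 4 decimal places)

-- float8 keeps a fixed binary precision instead, trading exactness
-- for constant size and speed:
select 1.1::float8 * 1.1 * 1.1 * 1.1;    -- roughly 1.4641, with rounding
                                         -- error in the last digits
```

After thousands of rows the numeric product can carry thousands of digits, which is where the slowdown comes from.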

regards, tom lane




select results on pg_class incomplete

2024-03-14 Thread Thiemo Kellner

Hi

I am trying to access PostgreSQL metadata, possibly in a vain attempt 
to get size data.


I use DbVis with a connection as shown in https://ibb.co/2SDzhXt . I am trying 
to get information on a regular table "umsaetze". When browsing the objects in 
DbVis, I can see the table - https://ibb.co/WxMnY2c . If I execute the following 
SQL query in DbVis's SQL Commander, the result set is empty - 
https://ibb.co/GngdWLH .


select *
  from PG_CLASS
 where RELNAME = 'umsaetze';

I noticed that the sessions producing the different results are not the 
same - https://ibb.co/wdKcCFc , but seem to connect to different 
databases. The "missing" table is indeed in the budget database.


The connection user is vanilla, apart from being a member of pg_monitor - 
https://ibb.co/DGs6sQz and https://ibb.co/8xzHrvP .


It seems that, as far as user databases are concerned, pg_class only lists 
what is in the database one connects to - https://ibb.co/dbbJVbJ.


template1=> select count(*)
  from PG_CLASS
 where RELNAME = 'umsaetze';
 count
-------
     0
(1 row)

template1=> \q

C:\Users\thiemo\AppData\Roaming\MOBAXT~1\home>psql -h hp-slimline-260 -p 5436 -U monitor budget

psql (11.2, server 16.1 (Debian 16.1-1.pgdg110+1))
WARNING: psql major version 11, server major version 16.
 Some psql features might not work.
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, 
bits: 256, compression: off)

Type "help" for help.

budget=> select count(*)
  from PG_CLASS
 where RELNAME = 'umsaetze';
 count
-------
     2
(1 row)

budget=> \q


Is there a possibility to make the user monitor see all the objects of 
the cluster? Background is that I was hoping to create a query to spit 
out the size of tables in the cluster.


Kind regards

Thiemo




Re: select results on pg_class incomplete

2024-03-14 Thread Adrian Klaver

On 3/14/24 09:41, Thiemo Kellner wrote:

> Hi
>
> I am trying to access PostgreSQL meta data, possibly in a vane attempt
> to get size data.
>
> I use DbVis with a connection as shown in https://ibb.co/2SDzhXt . I try
> to get information on a regular table "umsaetze". When doing the DbVis
> object I can see them - https://ibb.co/WxMnY2c . If I execute following
> SQL query in DbVis's SQL Commander, the result set is empty -
> https://ibb.co/GngdWLH .
>
> select *
>    from PG_CLASS
>   where RELNAME = 'umsaetze';
>
> I noticed that the sessions producing the different results are not the
> same - https://ibb.co/wdKcCFc , but seem to connect to different
> databases. The "missing" table is indeed in the budget database.
>
> The connection user is, apart from being member of pg_monitor vanilla -
> https://ibb.co/DGs6sQz and https://ibb.co/8xzHrvP .
>
> It seems, that in pg_class only is, with respect to custom databases,
> listed what is in the database one connects to - https://ibb.co/dbbJVbJ.


As listed on the tin:

https://www.postgresql.org/docs/current/catalogs-overview.html

"Most system catalogs are copied from the template database during 
database creation and are thereafter database-specific. A few catalogs 
are physically shared across all databases in a cluster; these are noted 
in the descriptions of the individual catalogs."


pg_class is not one of the global tables.
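For reference, the catalogs that *are* shared can be listed from pg_class itself, since shared relations are flagged there (a quick sketch):

```sql
-- Shared catalogs carry relisshared = true; pg_database, pg_authid,
-- pg_tablespace and friends show up here, pg_class itself does not.
select relname
  from pg_class
 where relisshared
   and relkind = 'r';
```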







--
Adrian Klaver
adrian.kla...@aklaver.com





Re: select results on pg_class incomplete

2024-03-14 Thread Thiemo Kellner
Thanks for the enlightenment. A pity. I suppose there is no way of working 
around this?


On 14.03.2024 at 18:01, Adrian Klaver wrote:

> As listed on the tin:
>
> https://www.postgresql.org/docs/current/catalogs-overview.html
>
> "Most system catalogs are copied from the template database during
> database creation and are thereafter database-specific. A few catalogs
> are physically shared across all databases in a cluster; these are noted
> in the descriptions of the individual catalogs."
>
> pg_class is not one of the global tables.








Re: select results on pg_class incomplete

2024-03-14 Thread David G. Johnston
On Thu, Mar 14, 2024, 11:08 Thiemo Kellner wrote:

> Thanks for the enlightenment. A pity. I suppose, there is no working
> around this?



Write a script to do the query in a loop on all databases - that catalog is
global.

David J.


Re: Implementing product-aggregate

2024-03-14 Thread Jan Kohnert
Hello,

On Thursday, 14 March 2024 at 15:17:58 CET, Tom Lane wrote:
> I wouldn't be concerned about relying on numeric_mul (or any of the
> other functions underlying standard operators).  They're undocumented
> only because documenting both the functions and the operators would
> bloat the documentation to little purpose.  Using one makes your code
> not so portable to non-Postgres DBMSes, but you already crossed that
> bridge by deciding to use a custom aggregate.

thank you for clarifying this. We're not too concerned about portability. 
Let's face the facts: porting a reasonably complex database and the 
application using it from one DBMS to another will almost certainly introduce 
an awful lot of portability issues (e.g. the pseudo-types (big)serial, 
upserts, differences in merge implementations, programmability, and so on). 
My main concern was that undocumented features sometimes change without 
notice, since users are not expected to rely on them.
 
> A bigger question is whether this implementation actually has the
> properties you want --- notably, maybe you should be using type
> float8 not numeric.  Numeric would get pretty slow and be carrying
> an awful lot of decimal places by the end of the query, I fear.

This needs to be checked on our side. I was expecting that using an aggregate 
this way would be significantly faster than using exp(sum(log())). We're not 
multiplying too many rows in a statement, though, so if the aggregate slows 
down performance, we should probably stick with the old way of doing it.

Best regards!

-- 
Kind regards, Jan


Re: Postgresql docker health check

2024-03-14 Thread Peter J. Holzer
On 2024-03-12 15:43:13 +0100, Amna Abdul Rehman wrote:
> When my application runs overnight, I receive the error message
> 'FATAL: Sorry, too many clients already.' To address this, I increased
> the number of max_connections to 3000, 5000, and also 1, but I
> still receive the error. Now, I'm considering implementing a health
> check. Specifically, I'd like to automatically restart the PostgreSQL
> container if this error occurs.

While implementing "emergency brakes" like this is generally a good
idea, you should investigate why you have so many connections. If you
have way more connections than you can reasonably expect, something is
wrong, and it is better to fix the root cause than to just hit
everything over the head with a hammer periodically.
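As a starting point for that investigation, something like the following (a sketch) shows who is holding the connections open:

```sql
-- Count sessions per database, user, application and state; a large
-- number of 'idle' or 'idle in transaction' rows usually points at a
-- connection leak in the application rather than genuine load.
select datname, usename, application_name, state, count(*)
  from pg_stat_activity
 group by datname, usename, application_name, state
 order by count(*) desc;
```

If the count is dominated by idle sessions from the application, a connection pooler such as PgBouncer is usually a better fix than raising max_connections.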

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Seeing high query planning time on Azure Postgres Single Server version 11.

2024-03-14 Thread hassan rafi
We have migrated to Postgres version 16.1, but due to very high update
activity on our DB we are still seeing elevated response times, though the
planning time is now lower.

catalog-v2=> explain (analyze, verbose, settings, buffers) SELECT
products_inventory_delta.upc FROM products_inventory_delta WHERE
products_inventory_delta.modality = 'pickup' AND
products_inventory_delta.store_id = '70600372' ORDER BY upc DESC LIMIT 51
OFFSET 3;


                            QUERY PLAN
--------------------------------------------------------------------------
 Limit  (cost=1450.68..1450.73 rows=1 width=14) (actual time=5049.115..5049.116 rows=0 loops=1)
   Output: upc
   Buffers: shared hit=33359 read=6590 dirtied=9379
   ->  Index Only Scan Backward using products_inventory_delta_pkey on public.products_inventory_delta  (cost=0.57..1450.68 rows=28606 width=14) (actual time=1.056..5047.472 rows=28299 loops=1)
         Output: upc
         Index Cond: ((products_inventory_delta.store_id = '70600372'::text) AND (products_inventory_delta.modality = 'pickup'::modality))
         Heap Fetches: 16840
         Buffers: shared hit=33359 read=6590 dirtied=9379
 Settings: effective_cache_size = '192GB', jit = 'off', random_page_cost = '2', work_mem = '2097151kB'
 Query Identifier: 220372279818787780
 Planning Time: 0.062 ms
 Execution Time: 5049.131 ms

Thanks,
Hassan

On Mon, Mar 11, 2024 at 12:00 PM hassan rafi wrote:

> Thanks all. Will try upgrading the postgres version.
>
> On Sun, Mar 10, 2024 at 11:44 PM Ron Johnson 
> wrote:
>
>> On Sun, Mar 10, 2024 at 1:34 PM Greg Sabino Mullane 
>> wrote:
>>
>>>
>>> On Sat, Mar 9, 2024 at 1:57 PM hassan rafi 
>>> wrote:
>>>
 Would upgrading to the latest version of Postgres potentially solve the
 issue?

>>>
>>> Potentially, yes, but the only one who can answer that for sure is you.
>>> Upgrade to 11.22 and re-run the query. Worst case scenario, it runs the
>>> same speed but you gained yourself a bunch of bugfixes and CVE resolutions.
>>> If the problem persists on 11.22, spin up a Postgres 16, load the data, and
>>> test it there.
>>>
>>
>> We have a similar situation with 9.6.24.  14.10 is
>> noticeably faster (between 10% and 80%, depending on the query).
>>
>>


Re: select results on pg_class incomplete

2024-03-14 Thread David Rowley
On Fri, 15 Mar 2024 at 07:13, David G. Johnston wrote:
> On Thu, Mar 14, 2024, 11:08 Thiemo Kellner wrote:
>>
>> Thanks for the enlightenment. A pity. I suppose, there is no working
>> around this?
>
> Write a script to do the query in a loop on all databases - that catalog is 
> global.

Yeah, maybe dblink and a LATERAL join might be an easy way. Something like:

create extension dblink;

select d.datname, c.relname
  from pg_database d,
       lateral (select *
                  from dblink('dbname=' || d.datname,
                              $$select relname from pg_class where relname = 'pg_class';$$)
                       as c(relname text)) c(relname)
 where d.datallowconn;

  datname   | relname
------------+----------
 postgres   | pg_class
 template1  | pg_class
 regression | pg_class
(3 rows)
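Adapting that to the original goal of table sizes across the cluster might look like this (an untested sketch; relkind = 'r' restricts it to plain tables):

```sql
-- Per-table total sizes across all connectable databases via dblink.
select d.datname, s.relname, s.bytes
  from pg_database d,
       lateral dblink(
           'dbname=' || d.datname,
           $$select relname, pg_total_relation_size(oid)
               from pg_class
              where relkind = 'r'$$
       ) as s(relname text, bytes bigint)
 where d.datallowconn
 order by s.bytes desc;
```

The connecting role still needs the usual privileges in each target database; pg_monitor membership grants pg_read_all_stats, which is enough for the size functions.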

David




Re: Seeing high query planning time on Azure Postgres Single Server version 11.

2024-03-14 Thread David Rowley
On Fri, 15 Mar 2024 at 08:01, hassan rafi  wrote:
> We have migrated to postgres version 16.1, but still due to very high update 
> activity on our DB, we are seeing elevated response times, though now the 
> planning time is less.

>Buffers: shared hit=33359 read=6590 dirtied=9379

>  Execution Time: 5049.131 ms

You could try "SET track_io_timing = 1;" to see how much of that time
is spent in reads.
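Concretely, re-running the query from upthread with that setting enabled would look something like this (a sketch); the plan output then gains I/O timing lines alongside the Buffers counters, separating time spent on actual reads from everything else:

```sql
set track_io_timing = on;

explain (analyze, verbose, settings, buffers)
select upc
  from products_inventory_delta
 where modality = 'pickup'
   and store_id = '70600372'
 order by upc desc
 limit 51 offset 3;
```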

David