Re: how to get value of parameter set in session for other user

2021-12-08 Thread Vikas Sharma
Hi Tom,

Thanks for the clarification.

Regards
Vikas

On Tue, 7 Dec 2021 at 14:45, Tom Lane  wrote:

> Vikas Sharma  writes:
> > Is it possible in postgres to get the value of the parameter currently
> set
> > in the session of the other user?
>
> No.  That information only exists within the other session's process.
>
> regards, tom lane
>


Re: Are Foreign Key Disabled During Logical Replication Initial Sync?

2021-12-08 Thread Peter Eisentraut

On 07.12.21 08:51, Avi Weinberg wrote:

Just to clarify: are they disabled during the initial sync only, or are they always disabled on the subscriber side?
Are all triggers disabled during the initial sync, or just foreign keys?


All triggers are by default disabled on replicas.  See the ALTER TABLE 
clauses DISABLE/ENABLE [ REPLICA | ALWAYS ] TRIGGER to change this.
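
For illustration, enabling a trigger on the subscriber even for replicated changes looks something like this (the table and trigger names are hypothetical):

    -- fire this trigger for replicated rows as well as local ones
    ALTER TABLE orders ENABLE ALWAYS TRIGGER orders_audit;

    -- revert to the default: fire only in "replica" sessions
    ALTER TABLE orders ENABLE REPLICA TRIGGER orders_audit;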



How can I know that the initial sync has completed for all tables?  Is it by checking whether 
pg_subscription_rel.srsubstate is 'i' or 'd' for any table, or is there a 
better way?


There are various ways to phrase this.  The test suite often uses this 
query:


SELECT count(1) = 0 FROM pg_subscription_rel WHERE srsubstate NOT IN ('r', 's');
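
A variation of that query can also report which tables are still pending (a sketch; 'r' means ready and 's' means synchronized, so anything else is still in progress):

    SELECT srrelid::regclass AS table_name, srsubstate
    FROM pg_subscription_rel
    WHERE srsubstate NOT IN ('r', 's');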





Re: error connecting to pgbouncer admin console

2021-12-08 Thread Peter Eisentraut

On 07.12.21 17:10, Zwettler Markus (OIZ) wrote:

I did a pgbouncer configuration using the following ini file:



[databases]
* = host=localhost port=5433 auth_user=pgbouncer


Using the name "pgbouncer" for auth_user is buggy.  Try using a 
different name.






Re: Working with fixed-point calculations in C

2021-12-08 Thread Peter J. Holzer
On 2021-12-07 22:41:38 -0500, Tom Lane wrote:
> I think what you are looking for is NUMERIC.  (The type name NUMBER
> is an Oracle-ism, I believe.  I'm not sure how closely that corresponds
> to our NUMERIC.)

Not very. Oracle NUMBER is a decimal floating point number with a 38
digit mantissa stored in a variable length format.

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




How to allow null as an option when using regexp_matches?

2021-12-08 Thread Shaozhong SHI
We can do this:

select count(*) from regexp_matches('Great London', 'Great London|Information Centre|Department for Transport', 'g');

Is it possible to allow null as an option?  Something like this:

select count(*) from regexp_matches('Great London', 'null|Great London|Information Centre|Department for Transport', 'g');

Regards,

David


Re: How to allow null as an option when using regexp_matches?

2021-12-08 Thread Karsten Hilbert
On Wed, Dec 08, 2021 at 12:07:13PM + Shaozhong SHI wrote:

> We can do this:
> select count(*) from regexp_matches('Great London', 'Great
> London|Information Centre|Department for Transport', 'g');
>
> Is it possible to allow null as an option?  something like this
> select count(*) from regexp_matches('Great London', 'null|Great
> London|Information Centre|Department for Transport', 'g');

You seem to want to apply coalesce() judiciously.

Best,
Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B




Re: error connecting to pgbouncer admin console

2021-12-08 Thread Zwettler Markus (OIZ)
Simply a bug.

https://github.com/pgbouncer/pgbouncer/issues/568

Cheers, Markus



> -Original Message-
> From: Zwettler Markus (OIZ) 
> Sent: Tuesday, 7 December 2021 17:10
> To: pgsql-general@lists.postgresql.org
> Subject: error connecting to pgbouncer admin console
> 
> I did a pgbouncer configuration using the following ini file:
> 
> 
> 
> [databases]
> * = host=localhost port=5433 auth_user=pgbouncer
> 
> [users]
> 
> [pgbouncer]
> logfile = /pgdata/pcl_l002/pgbouncer/pgbouncer_pcl_l002.log
> pidfile = /pgdata/pcl_l002/pgbouncer/pgbouncer_pcl_l002.pid
> listen_addr = *
> listen_port = 6433
> auth_type = hba
> auth_file = /pgdata/pcl_l002/pgbouncer/userlist_pcl_l002.txt
> auth_hba_file = /pgdata/pcl_l002/pg_hba.conf
> auth_user = pgbouncer
> auth_query = SELECT * FROM pgbouncer.get_auth($1)
> admin_users = monitor
> stats_users = stats, monitor
> max_client_conn = 500
> default_pool_size = 20
> log_connections = 1
> log_disconnections = 1
> log_pooler_errors = 1
> 
> 
> 
> Everything is fine as long as I connect to any database within the postgres 
> cluster
> as any user.
> 
> 
> 
> As soon as I try to connect to the pgbouncer admin console I get the following
> error:
> 
> $ psql -h localhost -p 6433 -U monitor pgbouncer
> psql: server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> 
> $ psql -h localhost -p 6433 -U stats pgbouncer
> psql: server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> 
> 
> 
> There is always the same error within the pgbouncer logfile:
> ...
> ...
> 2021-12-07 16:56:53.550 CET [73046] LOG process up: PgBouncer 1.16.1, libevent 2.0.21-stable (epoll), adns: libc-2.17, tls: OpenSSL 1.0.2k-fips  26 Jan 2017
> 2021-12-07 16:57:00.969 CET [73046] FATAL @src/objects.c:312 in function put_in_order(): put_in_order: found existing elem
> 
> 
> 
> $ pgbouncer --version
> PgBouncer 1.16.1
> libevent 2.0.21-stable
> adns: libc-2.17
> tls: OpenSSL 1.0.2k-fips  26 Jan 2017
> 
> 
> 
> Any idea?
> 
> 
> 
> Thanks, Markus
> 
> 





RE: Working with fixed-point calculations in C

2021-12-08 Thread Oskar Stenberg
Hi, thanks for your answer!


Yeah, that's correct, I meant NUMERIC, not NUMBER. It was late and I must 
have accidentally mixed up the names, but thanks for pointing that out! :) 
Just to clarify, is the data type NUMERIC a fixed-point number and not a 
floating-point number?
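
The distinction can be illustrated with Python's decimal module (an analogy only, not PostgreSQL code: NUMERIC likewise does exact decimal arithmetic, unlike binary floating point):

```python
from decimal import Decimal

# Binary floating point accumulates representation error:
print(0.1 + 0.2 == 0.3)  # False

# Exact decimal arithmetic, the behaviour NUMERIC also provides:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```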


Alright, so the documentation is just out of date; that's actually what I was 
hoping for, and it would be great if it could be updated. In the meantime, do 
you happen to know where the datatype NUMERIC is defined in the code? And is 
it the same name in C as in Postgres, or is it something else?


Best Regards

Oskar


 Original message 
From: Tom Lane 
Date: 2021-12-08 04:41 (GMT+01:00)
To: Oskar Stenberg 
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Working with fixed-point calculations in C

Oskar Stenberg  writes:
> I need to make some fixed-point calculations inside the C code that I'm
> developing as an extension to PostgreSQL. I was reading a bit, and if I
> understood the datatype NUMBER correctly, it seems to be just what I'm
> looking for, a fixed-point datatype. (I couldn't actually find any
> thing in the documentation that specifically mentions that it is a
> fixed point datatype. But it seems to work the same. So I might be
> wrong here and if so please tell me)

I think what you are looking for is NUMERIC.  (The type name NUMBER
is an Oracle-ism, I believe.  I'm not sure how closely that corresponds
to our NUMERIC.)

> Link to the documentation:
> https://www.postgresql.org/docs/current/xfunc-c.html#XFUNC-C-TYPE-TABLE

Hmm, that table seems a bit incomplete/out of date.  While it's
not really meant to cover every type, surely it should mention
bigint, numeric, and timestamptz ...

regards, tom lane


Re: How to allow null as an option when using regexp_matches?

2021-12-08 Thread Shaozhong SHI
Hi, Karsten,

That sounds interesting.

Any good example?

Regards,  David

On Wed, 8 Dec 2021 at 12:10, Karsten Hilbert 
wrote:

> Am Wed, Dec 08, 2021 at 12:07:13PM + schrieb Shaozhong SHI:
>
> > We can do this:
> > select count(*) from regexp_matches('Great London', 'Great
> > London|Information Centre|Department for Transport', 'g');
> >
> > Is it possible to allow null as an option?  something like this
> > select count(*) from regexp_matches('Great London', 'null|Great
> > London|Information Centre|Department for Transport', 'g');
>
> You seem to want to apply coalesce() judiciously.
>
> Best,
> Karsten
> --
> GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B
>
>
>


Re: How to allow null as an option when using regexp_matches?

2021-12-08 Thread Vincent Veyron
On Wed, 8 Dec 2021 17:07:27 +
Shaozhong SHI  wrote:
> 
> Any good example?
> 

Plenty of them in the fine documentation:

https://www.postgresql.org/search/?q=coalesce




--  Yours sincerely, Vincent Veyron 

https://compta.libremen.com
Free double-entry general accounting software





Re: How to allow null as an option when using regexp_matches?

2021-12-08 Thread Roxanne Reid-Bennett
On 12/8/2021 4:07 AM, Shaozhong SHI wrote:
> We can do this:
> select count(*) from regexp_matches('Great London', 'Great London|Information 
> Centre|Department for Transport', 'g');
>
> Is it possible to allow null as an option?  something like this
> select count(*) from regexp_matches('Great London', 'null|Great 
> London|Information Centre|Department for Transport', 'g');
>
> Regards,
>
> David
>
Hi David,

I'm assuming that 'Great London' is coming from some column value.
Given that NULL is a state, not a value, a regexp really cannot "find" it (or 
fail to find it).
You could COALESCE the source of 'Great London' to a predictable value that 
you CAN match on.

Or you could possibly construct your query something like this:

select CASE WHEN 'Great London' IS NULL THEN 1 ELSE 0 END + (SELECT count(*) from regexp_matches('Great London', 'Great London|Information Centre|Department for Transport', 'g'))

select CASE WHEN NULL IS NULL THEN 1 ELSE 0 END + (SELECT count(*) from regexp_matches(NULL, 'Great London|Information Centre|Department for Transport', 'g'))

Interestingly to me, the following returns 2 - possibly because an empty 
string matches anything?

select count(*) from regexp_matches('Great London', 'Great London||Information Centre|Department for Transport', 'g');
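
Python's re module shows the same effect, which makes it easy to inspect (an analogy, not PostgreSQL's regex engine): the empty alternative introduced by '||' matches the empty string, so after 'Great London' is consumed there is still one empty match left at the end of the input.

```python
import re

# The empty branch between the two '|' matches the empty string.
# 'Great London' is consumed first, then the empty branch matches
# once at the end of the input, giving two matches in total.
matches = re.findall('Great London||Information Centre', 'Great London')
print(matches)       # ['Great London', '']
print(len(matches))  # 2
```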

Roxanne





performance expectations for table(s) with 2B recs

2021-12-08 Thread David Gauthier
11.5 on linux
server = VM provided by our IT dept (IOW, can be grown if needed)
DB itself is on NFS

So far, the tables I have in my DB have relatively low numbers of records
(most are < 10K, all are < 10M).  Things have been running great in terms
of performance.  But a project is being brainstormed which may require some
tables to contain a couple billion records.

I'm familiar with the need to properly index columns in these tables which
will be used for table joining and query performance (candidates are
columns used in query predicate, etc...).  Let's assume that's done right.
And let's assume that the biggest table will have 20 varchars (<= 16 chars
per) and 20 ints.  No blobs or jsonb or anything like that.

What else should I be worried about?

I suspect that part of why things are running really well so far is that
the relatively small amounts of data in these tables end up in the DB
cache and disk I/O is kept to a minimum.  Will that no longer be the case
once queries start running on these big tables?

What about DBA stuff... vacuum and fragmentation and index maintenance,
etc.?

I don't want to step into this completely blind.  Any warnings/insights
would be appreciated.


Re: performance expectations for table(s) with 2B recs

2021-12-08 Thread Peter J. Holzer
On 2021-12-08 14:44:47 -0500, David Gauthier wrote:
> So far, the tables I have in my DB have relatively low numbers of records 
> (most
> are < 10K, all are < 10M).  Things have been running great in terms of
> performance.  But a project is being brainstormed which may require some 
> tables
> to contain a couple billion records.
[...]
> What else should I be worried about ?
> 
> I suspect that part of why things are running really well so far is that the
> relatively small amounts of data in these tables ends up in the DB cache and
> disk I/O is kept at a minimum.  Will that no longer be the case once queries
> start running on these big tables ?

Depends a lot on how good the locality of your queries is. If most read
only the same parts of the same indexes, those will still be in the
cache. If they are all over the place or if you have queries which need
to read large parts of your tables, cache misses will make your
performance a lot less predictable, yes. That stuff is also hard to
test, because when you are testing a query twice in a row, the second
time it will likely hit the cache and be quite fast.

But in my experience the biggest problem with large tables are unstable
execution plans - for most of the parameters the optimizer will choose
to use an index, but for some it will erroneously think that a full
table scan is faster. That can lead to a situation where a query
normally takes less than a second, but sometimes (seemingly at random)
it takes several minutes - users will understandably be upset about such
behaviour. It is in any case a good idea to monitor execution times to
find such problems (ideally before users complain), but each needs to be
treated on an individual basis, and sometimes there seems to be no good
solution.
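
One way to watch for such outliers is the pg_stat_statements extension (a sketch; the column names shown are the pre-v13 ones matching the 11.5 server mentioned above, and the extension must be preloaded and created first):

    -- queries whose runtime varies most around their mean are the
    -- likeliest candidates for unstable plans
    SELECT query, calls, mean_time, stddev_time, max_time
    FROM pg_stat_statements
    ORDER BY stddev_time DESC
    LIMIT 20;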

hp





Re: CTE Materialization

2021-12-08 Thread Дмитрий Иванов
Thank you!
--
Best regards, Dmitry!


On Wed, 8 Dec 2021 at 22:58, Paul van der Linden wrote:

> This one quite nicely explains it:
> https://stackoverflow.com/questions/14897816/how-can-i-prevent-postgres-from-inlining-a-subquery
>
> On Wed, Dec 8, 2021 at 3:14 AM David G. Johnston <
> david.g.johns...@gmail.com> wrote:
>
>> On Tue, Dec 7, 2021 at 6:40 PM Дмитрий Иванов 
>> wrote:
>>
>>> I beg your pardon.
>>> The problem is more or less clear to me, but the solution is not. What
>>> does the "hack is to add an "offset 0" to the query" suggest? Thank you.
>>>
>>>
>> A subquery with a LIMIT clause cannot have where clause expressions in
>> upper parts of the query tree pushed down it without changing the overall
>> query result - something the planner is not allowed to do.  For the hack,
>> since adding an actual LIMIT clause doesn't make sense you omit it, but
>> still add the related OFFSET clause so the planner still treats the
>> subquery as a LIMIT subquery.  And since you don't want to skip any rows
>> you specify 0 for the offset.
>>
>> David J.
>>
>>
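
The OFFSET 0 hack described above amounts to something like this (table and column names are hypothetical; on PostgreSQL 12 and later the same intent can be stated directly with a materialized CTE):

    -- OFFSET 0 makes the planner treat the subquery as a LIMIT subquery,
    -- so the outer WHERE clause is not pushed down into it
    SELECT *
    FROM (SELECT id, payload FROM big_table OFFSET 0) AS s
    WHERE s.payload = 'x';

    -- PostgreSQL 12+: say it explicitly instead
    WITH s AS MATERIALIZED (
        SELECT id, payload FROM big_table
    )
    SELECT * FROM s WHERE s.payload = 'x';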