Re: No primary key table

2019-09-13 Thread SERHAD ERDEM
Hi,
If you have not seen any benefit, you can of course remove the identity column from a DWH table; note that behind an identity/serial column there is a sequence (and, in some setups, a trigger) to clean up as well.
ID columns are generally used for base tables that are under end-user operations.
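A minimal sketch of the cleanup, assuming a hypothetical table dwh_sales with a bigint identity primary key id (the names are illustrative, not from the original post):

ALTER TABLE dwh_sales DROP CONSTRAINT dwh_sales_pkey;  -- dropping the constraint also drops the primary key index
ALTER TABLE dwh_sales DROP COLUMN id;                  -- dropping an identity/serial column also drops its backing sequence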



From: Ertan Küçükoglu 
Sent: Friday, September 13, 2019 3:34 PM
To: pgsql-general@lists.postgresql.org 
Subject: No primary key table

Hello,

We are using PostgreSQL 10.0 on a Windows VM.

There is one database on that server.
There are several tables that will be used for data warehouse purposes.

There are daily inserts and relatively heavy bulk reads (a whole month's data at once) at the end of each month. Reads will come from several hundred clients over the internet, with no local network connection.

The daily data-saving application checks for duplicate entries of a single record with an SQL query before each insert. That is the only SELECT activity during the days within the month.

End-of-month queries will select bulk data from the previous month's records and will filter on a GUID field and a varchar(25) field.

Each table has a single primary key on a bigint identity column. The primary key won't be used by any queries.

We wonder whether it would be advisable to remove the primary key index and column. There is no other unique index on these tables.
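One quick sanity check before dropping anything: the standard statistics view shows whether the index has ever been scanned (a sketch; the table name is a placeholder):

SELECT relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE relname = 'your_dwh_table';
-- idx_scan = 0 since the last stats reset suggests the index is never used for reads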

Thanks & regards,
Ertan Küçükoğlu

Sent from my iPhone



Re: SQL SERVER migration to PostgreSql

2019-11-08 Thread SERHAD ERDEM

CREATE OR REPLACE FUNCTION get_film (p_pattern VARCHAR)
RETURNS TABLE (
    film_title VARCHAR,
    film_release_year INT
)
AS $$
BEGIN
    RETURN QUERY
    SELECT
        title,
        CAST(release_year AS integer)
    FROM film
    WHERE title ILIKE p_pattern;
END; $$
LANGUAGE plpgsql;
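For example, assuming the sample film table from the common tutorial database this snippet is based on:

SELECT * FROM get_film('Al%');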






CREATE OR REPLACE FUNCTION get_film (p_pattern VARCHAR, p_year INT)
RETURNS TABLE (
    film_title VARCHAR,
    film_release_year INT
) AS $$
DECLARE
    var_r record;
BEGIN
    FOR var_r IN (
        SELECT title, release_year
        FROM film
        WHERE title ILIKE p_pattern
          AND release_year = p_year
    )
    LOOP
        film_title := upper(var_r.title);
        film_release_year := var_r.release_year;
        RETURN NEXT;
    END LOOP;
END; $$
LANGUAGE plpgsql;
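Called the same way, again assuming the sample film table:

SELECT * FROM get_film('%er', 2006);
-- each RETURN NEXT appends one row built from the RETURNS TABLE result columns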


From: İlyas Derse 
Sent: Thursday, November 7, 2019 1:28 PM
To: pgsql-general@lists.postgresql.org 
Subject: SQL SERVER migration to PostgreSql

I'm trying to migrate from SQL Server to PostgreSQL. I have stored procedures that have output parameters and return tables. But as far as I can tell, we cannot return tables from stored procedures in PostgreSQL, and we cannot use output parameters in functions in PostgreSQL.

I have not found a way to solve this problem. Does anybody have an idea?
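For what it's worth, PostgreSQL functions do accept OUT parameters, and procedures (PostgreSQL 11+) accept INOUT parameters, which is the closer match for SQL Server output parameters. A minimal sketch (the names are illustrative):

CREATE OR REPLACE PROCEDURE get_film_count(p_pattern VARCHAR, INOUT p_count INT)
LANGUAGE plpgsql
AS $$
BEGIN
    SELECT count(*) INTO p_count FROM film WHERE title ILIKE p_pattern;
END;
$$;

CALL get_film_count('Al%', NULL);  -- the INOUT value comes back as a one-row result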


Re: Cluster OID Limit

2022-06-09 Thread SERHAD ERDEM
Hi,
this is about XID (transaction ID) wraparound rather than OIDs. You may use the following SQL queries to check.


-- Transaction ID exhaustion analysis --

-- Per-database: age of the oldest unfrozen XID vs. the autovacuum trigger point
SELECT datname
     , age(datfrozenxid)
     , current_setting('autovacuum_freeze_max_age')
FROM pg_database
WHERE datallowconn = true
ORDER BY 2 DESC;



WITH max_age AS (
    SELECT 2000000000 AS max_old_xid  -- ~2 billion: conservative ceiling before wraparound protection kicks in
         , setting AS autovacuum_freeze_max_age
    FROM pg_catalog.pg_settings
    WHERE name = 'autovacuum_freeze_max_age' )
, per_database_stats AS (
    SELECT datname
         , m.max_old_xid::int
         , m.autovacuum_freeze_max_age::int
         , age(d.datfrozenxid) AS oldest_current_xid
    FROM pg_catalog.pg_database d
    JOIN max_age m ON (true)
    WHERE d.datallowconn )
SELECT max(oldest_current_xid) AS oldest_current_xid
     , max(ROUND(100 * (oldest_current_xid / max_old_xid::float))) AS percent_towards_wraparound
     , max(ROUND(100 * (oldest_current_xid / autovacuum_freeze_max_age::float))) AS percent_towards_emergency_autovac
FROM per_database_stats;





-- Oldest tables by relfrozenxid age, with their sizes
SELECT c.oid::regclass
     , age(c.relfrozenxid)
     , pg_size_pretty(pg_total_relation_size(c.oid))
FROM pg_class c
JOIN pg_namespace n ON c.relnamespace = n.oid
WHERE c.relkind IN ('r', 't', 'm')
  AND n.nspname NOT IN ('pg_toast')
ORDER BY 2 DESC LIMIT 100;
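If a table is getting close to the limit, a manual freeze brings its age back down (a sketch; the table name is a placeholder):

VACUUM (FREEZE, VERBOSE) some_big_table;  -- freezes old tuples so relfrozenxid advances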

From: Adrian Klaver 
Sent: Thursday, June 9, 2022 3:02 PM
To: Lucas ; pgsql-general@lists.postgresql.org 

Subject: Re: Cluster OID Limit

On 6/9/22 02:10, Lucas wrote:
> Hello,
>
> In the company I work for, some clusters reached the OID limit (2^32)
> and we had to reinstall the cluster.

Was this really about OIDs or XID wraparound?:

https://www.postgresql.org/docs/14/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND

>
> I was wondering if there is any discussion on:
> * "compress" the OID space
> * "warp around" the OID space
> * segment a OID range for temporary tables with "wrap around"
>
> --
> Lucas


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Different execution plans in PG17 and pgBouncer...

2025-05-05 Thread SERHAD ERDEM
Hi, I wish you good luck with "transaction mode" 🙂 If pgbouncer is not really needed, remove it and use plain connections. I ran pgbouncer in session mode for over two years, and it was a constant pain in the ass; I finally removed the bouncing layer.


From: Mladen Marinović 
Sent: Monday, May 5, 2025 1:26 PM
To: Efrain J. Berdecia 
Cc: SERHAD ERDEM; Achilleas Mantzios; pgsql-general@lists.postgresql.org

Subject: Re: Different execution plans in PG17 and pgBouncer...

Hi,

Mystery not solved... but identified. The pool is in transaction mode, and some connections use SET enable_mergejoin = off but do not set it back to on. Upon getting a connection from the pool, the parameter is still off, causing the planner not to use this kind of join, which results in different plans on such a tainted pgbouncer connection compared to a clean PG17 connection.

The problem is that server_reset_query is not applied when the pool is in transaction mode. Now we have to see how to fix this problem.
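One common workaround (a sketch, not from the thread): scope the setting to the transaction with SET LOCAL, so it reverts automatically and can never leak into a pooled connection:

BEGIN;
SET LOCAL enable_mergejoin = off;  -- reverts automatically at COMMIT/ROLLBACK
-- ... run the query that needs the tweak ...
COMMIT;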

Regards,
Mladen Marinović

On Mon, May 5, 2025 at 3:10 PM Efrain J. Berdecia <ejberde...@yahoo.com> wrote:
Is the query using parameter markers? Is the source executing the query forcing 
a "bad" data type casting?


On Mon, May 5, 2025 at 8:52 AM Mladen Marinović <ma...@kset.org> wrote:


On Mon, May 5, 2025 at 2:38 PM SERHAD ERDEM <serh...@hotmail.com> wrote:
Hi, you had better try VACUUM ANALYZE on the whole DB; the pgbouncer connection layer cannot cause slow queries.

I did that already. But the slow query is the consequence of the different 
plan, not the statistics.


From: Mladen Marinović <ma...@kset.org>
Sent: Monday, May 5, 2025 12:27 PM
To: Achilleas Mantzios <a.mantz...@cloud.gatewaynet.com>
Cc: pgsql-general@lists.postgresql.org
Subject: Re: Different execution plans in PG17 and pgBouncer...



On Mon, May 5, 2025 at 12:07 PM Achilleas Mantzios <a.mantz...@cloud.gatewaynet.com> wrote:


On 5/5/25 11:00, Mladen Marinović wrote:


On Mon, May 5, 2025 at 11:24 AM Achilleas Mantzios <a.mantz...@cloud.gatewaynet.com> wrote:


On 5/5/25 09:52, Mladen Marinović wrote:
Hi,

We recently migrated our production instances from PG11 to PG17. While doing so we upgraded our pgBouncer instances from 1.12 to 1.24. As everything worked on the test servers, we pushed this to production a few weeks ago. We did not notice any problems until a few days ago (but the problems were there from the start). The main manifestation is a service that runs a fixed query to get a backlog of unprocessed data (limited to 1000 rows). When testing the query using pgAdmin connected directly to the database, we get a result in cca. 20 seconds. The same query runs for 2 hours when using pgBouncer to connect to the same database.


That's a huge jump, I hope you guys did extensive testing of your app. In which 
language is your app written? If java, then define prepareThreshold=0 in your 
jdbc and set max_prepared_statements = 0 in pgbouncer.

Mainly Python, but the problem was noticed in a Java service.
The prepare threshold was already set to 0. We changed max_prepared_statements from the default (200) to 0, but no change was noticed.

How about search paths ? any difference on those between the two runs ? Do you 
set search_path in pgbouncer ? what is "cca." btw ?

The more interesting part is that when we issue an explain of the same query we 
get different plans. We did this a few seconds apart so there should be no 
difference in collected statistics. We ruled out prepared statements, as we 
suspected the generic plan might be the problem, but it is not. Is there any 
pgBouncer or PG17 parameter that might be the cause of this?


Does this spawn any connections (such as dblink) ? are there limits per user/db 
pool_size in pgbouncer ?

No additional connections, and no dblink. Just plain SQL (CTE, SELECT, INSERT, UPDATE, DELETE, ...). There are limits, but they are not hit. The query just uses a different plan and runs slower because of that.

Pgbouncer, in contrast to its old friend PgPool-II, is completely passive; it just passes SQL through to the server as fast as it can. But I am sure you know that. Good luck, keep us posted!

Re: Different execution plans in PG17 and pgBouncer...

2025-05-05 Thread SERHAD ERDEM
Hi, if you are sure that the exact plans are the same, try SELECT count(*) FROM ... instead of SELECT * FROM ... LIMIT ...; that way you can tell whether the problem is due to returning the rows or not.

From: Mladen Marinović 
Sent: Monday, May 5, 2025 12:52 PM
To: SERHAD ERDEM 
Cc: Achilleas Mantzios; pgsql-general@lists.postgresql.org
Subject: Re: Different execution plans in PG17 and pgBouncer...




Yes, that is what puzzles me.

What is the pgbouncer's timeout in the server connections ?

How about "idle in transaction" ? do you get any of those? What's the isolation 
level ?

How about the user ? is this the same user doing pgadmin queries VS via the app 
?

Can you identify the user under which the problem is manifested and :

ALTER user "unlucky_user" SET log_statement = 'all';

ALTER user "unlucky_user" SET log_min_duration_statement = 0; -- to help you 
debug the prepared statements .. just in case , and other stuff not printed by 
log_statement = all.

None of those parameters should affect the fact that issuing the EXPLAIN of the SELECT query (the statement is not prepared) directly from psql gives a different result than issuing it over the pgbouncer connection. The result is repeatable.

We have rolled back pgbouncer to 1.12, and it seems the problem persists. This is one of the weirdest things I have ever seen with PostgreSQL.

Regards,
Mladen Marinović


Re: Different execution plans in PG17 and pgBouncer...

2025-05-05 Thread SERHAD ERDEM
Hi, you had better try VACUUM ANALYZE on the whole DB; the pgbouncer connection layer cannot cause slow queries.




Re: Looking for pgbench Benchmark Results Across PostgreSQL Versions

2025-05-09 Thread SERHAD ERDEM
Hi, you may take a look at this:
https://smalldatum.blogspot.com/2023/10/postgres-versions-11-12-13-14-15-and-16.html
The post ("Postgres versions 11, 12, 13, 14, 15, and 16 vs sysbench with a medium server") benchmarked Postgres configured to cache all tables, using sysbench on a GCP c2-standard-30 server: 15 cores, hyperthreads disabled, 120G of RAM, Ubuntu 22.04, and 1.5TB of NVMe SSD with XFS (SW RAID 0 over 4 local devices).



From: Özkan Pakdil 
Sent: Thursday, May 8, 2025 8:07 PM
To: pgsql-general@lists.postgresql.org 
Subject: Looking for pgbench Benchmark Results Across PostgreSQL Versions


Hi everyone,

I’ve been searching for a website that provides pgbench results for different 
PostgreSQL versions, from 11 to 18, including the latest beta or alpha releases.

Does anyone know of such a site?

Thanks!