Re: walreceiver fails on asynchronous replica [SEC=UNOFFICIAL]

2024-02-26 Thread Kyotaro Horiguchi
At Fri, 23 Feb 2024 04:04:03 +, Mark Schloss  
wrote in 
> <2024-02-23 07:50:05.637 AEDT [1957121]: [1-1] user=,db= > LOG:  started 
> streaming WAL from primary at 6/B000 on timeline 5
> <2024-02-23 07:50:05.696 AEDT [1957117]: [6-1] user=,db= > LOG:  invalid 
> magic number  in log segment 0005000600B0, offset 0

This appears to suggest that the WAL file the standby fetched was
zero-filled on the primary side, which cannot happen in normal
operation. A preallocated WAL segment can be zero-filled, but such a
segment is never replicated under normal operation.

> <2024-02-22 14:20:23.383 AEDT [565231]: [6-1] user=,db= > FATAL:  terminating 
> walreceiver process due to administrator command

This may suggest a config reload with some parameter changes.

One possible scenario matching the log lines is that someone switched
primary_conninfo to a newly restored primary. If the new primary had
older data than the previously connected primary, the segment
0005000600B0 on it could have been a preallocated one
filled with zeros; the standby could then fetch that zero-filled WAL
segment (file) and fail this way. If this is the case, such operations
should be avoided.
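If the segment file is still available, this hypothesis could be checked by inspecting the first bytes of the segment, where the page header's xlp_magic lives. The path below is illustrative, a sketch rather than anything taken from the report:

```shell
# A preallocated, never-written WAL segment contains only zero bytes,
# so the first page header (including xlp_magic) reads as all 00s.
# The path is hypothetical; adjust to the actual data directory.
od -A d -t x1 -N 8 "$PGDATA/pg_wal/0005000600B0" | head -1
# A segment holding real WAL would start with a non-zero magic instead.
```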

Unfortunately, lacking a reproducer or details of the operations that
took place there, the best I can do for now is to guess at a possible
scenario, as described above. I am not sure how the situation actually
arose.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: Fastest way to clone schema ~1000x

2024-02-26 Thread Emiel Mols
We've experimented with shared buffers to no effect.

The 2048 we actually need for our test setup. The way this works is that we
have a single preforked backend for all tests, where each backend worker
maintains persistent connections *per test* (in database-per-test), so with,
say, 50 backend processes, they might each maintain ~10-50 connections to
postgres. That's the reason for looking into schema-per-test: it would
require only 1 persistent connection per worker, possibly only changing
the search_path to handle a request (and hence cheap initialization of
schema.sql becomes the new impediment).

But based on your answer, we will do some more research into pooling these
requests over pgbouncer with pool_mode=transaction. That should multiplex
all these backend->bouncer connections over a lot less connections to
postgres itself.
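For what it's worth, a minimal sketch of what such a pgbouncer setup could look like (all values are illustrative, not measured against this workload):

```ini
; pgbouncer.ini sketch: many client connections, few server connections
[databases]
* = host=127.0.0.1 port=5432          ; fallback entry for all test databases

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = transaction               ; server connection released at txn end
max_client_conn = 2048                ; the backends' many connections land here
default_pool_size = 50                ; only this many per db/user reach postgres
```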

Thanks!

On Mon, Feb 26, 2024 at 2:14 PM Pavel Stehule 
wrote:

>
>
> On Mon, Feb 26, 2024 at 8:08 AM, Emiel Mols wrote:
>
>> Thanks, as indicated we're using that right now. The 30% spinlock
>> overhead unfortunately persists.
>>
>
> try increasing shared_buffers
>
> 128MB can be too low
>
> max_connections = 2048 is unrealistically high
>
>
>> - Fsync was already disabled, too. Complete postgresql.conf used in
>> testing:
>> listen_addresses = ''
>> max_connections = 2048
>> unix_socket_directories = '..'
>> shared_buffers = 128MB
>> log_line_prefix = ''
>> synchronous_commit = 'off'
>> wal_level = 'minimal'
>>
>> - linux perf report comparing schema-per-test vs database-per-test:
>> https://ibb.co/CW5w2MW
>>
>> - Emiel
>>
>>
>> On Mon, Feb 26, 2024 at 1:36 PM Pavel Stehule 
>> wrote:
>>
>>> Hi
>>>
>>> On Mon, Feb 26, 2024 at 7:28 AM, Emiel Mols wrote:
>>>
 Hello,

 To improve our unit and end-to-end testing performance, we are looking
 to optimize initialization of around 500-1000 database *schemas* from a
 schema.sql file.

 Background: in postgres, you cannot change databases on
 existing connections, and using schemas would allow us to minimize the
 amount of active connections needed from our test runners, as we can reuse
 connections by updating search_path. In our current database-per-test setup
 we see that there is around a ~30% (!!) total CPU overhead in
 native_queued_spin_lock_slowpath (as profiled with linux perf), presumably
 because of the high connection count. We run ~200 tests in parallel to
 saturate a 128 core machine.

 In the schema-per-test setup, however, it becomes harder to cheaply
 create the schema. Before, we could `create database test001 with template
 testbase` to set up the database for a test, and this was reasonably fast.
 Re-inserting a schema with ~500 table/index definitions across 500 test
 schemas is prohibitively expensive (around 1000ms per schema insertion
 means we're wasting 500 cpu-seconds, and there seems to be quite some lock
 contention too). Linux perf shows that in this setup we're reducing the
 native_queued_spin_lock_slowpath overhead to around 10%, but total test
 time is still slower due to all the schema initialization being done. Also,
 it feels quite elegant that functions and types can be reused between tests.

 Does anyone have some insights or great ideas :)? Also pretty curious
 about the fundamental reason why high/changing connection counts to
 postgres result in this much (spin)lock contention (or perhaps we're doing
 something wrong in either our configuration or measurements?).

 An alternative we haven't explored yet is to see if we can use
 pgbouncer or another connection pooler to mitigate the 30% issue (set limits
 so there are only ever X connections to postgres in total, and perhaps max
 Y per test/database). This does add another piece of
 infrastructure/complexity, so it's not really preferred.

>>>
>>> For testing
>>>
>>> a) use templates - CREATE DATABASE test TEMPLATE some;
>>>
>>> b) disable fsync (only for testing!!!)
>>>
>>> Regards
>>>
>>> Pavel
>>>
>>>
 Best,

 Emiel

>>>


Re: Fastest way to clone schema ~1000x

2024-02-26 Thread Daniel Gustafsson
> On 26 Feb 2024, at 09:46, Emiel Mols  wrote:

> The 2048 we actually need for our test setup.

There is a measurable overhead per connection, regardless of whether it is
used or not.  If you are looking to squeeze out performance, then doing more
over already established connections, and reducing max_connections, is a good
place to start.

--
Daniel Gustafsson





Re: Fastest way to clone schema ~1000x

2024-02-26 Thread Emiel Mols
On Mon, Feb 26, 2024 at 3:50 PM Daniel Gustafsson  wrote:

> There is a measurable overhead per connection, regardless of whether it is
> used or not.  If you are looking to squeeze out performance, then doing
> more over already established connections, and reducing max_connections,
> is a good place to start.
>

Clear, but with database-per-test (and our backend setup), it would have
been *great* if we could have switched databases on the same connection
(similar to "USE xxx" in MySQL). That would limit the connections to the
number of workers, not multiplied by the number of tests.
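For comparison, the schema-per-test alternative needs no reconnect at all, only a search_path switch on the already-open connection (names are hypothetical):

```sql
-- once per test: create and populate the schema
CREATE SCHEMA test001;
-- (run schema.sql so its objects land in test001)

-- per request, on the long-lived connection:
SET search_path TO test001, public;
-- unqualified table references now resolve inside test001
```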

Even with a pooler, we're still going to be maintaining 1000s of
connections from the backend workers to the pooler. I would expect this to
be rather efficient, but still unnecessary. Also, neither pgbouncer nor
pgpool seems to support switching databases in-connection (they could have
implemented the aforementioned "USE" statement, I think!). [Additionally,
we're using PHP, which doesn't seem to have a good shared-memory pool
implementation -- pg_pconnect is pretty buggy.]

I'll continue with some more testing. Thanks for now!


Re: Performance issue debugging

2024-02-26 Thread veem v
On Fri, 23 Feb, 2024, 2:54 am Vick Khera,  wrote:

> On Thu, Feb 22, 2024 at 4:03 PM veem v  wrote:
>
>> Hi All,
>> As I understand, we have pg_stat_activity, which shows the real-time
>> activity of sessions currently running in the database, and
>> pg_stat_statements, which provides aggregated information about the
>> historical execution of all queries in the database. But I don't see any
>> sampling or timing information in those views -- for example, at a certain
>> point in time in the past, which queries were executing in the database,
>> the overall wait events, etc.
>>
>> So is there any other view that provides such information to dig into
>> the past to diagnose historical performance issues? Or should we
>> create our own table, flush the information from pg_stat_activity
>> into it with the current timestamp, and use that to analyze
>> performance issues? Or is there an extension available for this?
>>
>>
> Look at the auto_explain setting. Taking a random interval snapshot of
> running queries likely will not teach you anything useful.
>
>
>> Also, even explain analyze can only provide the exact run-time stats
>> of a completed query. If we want to see what's going on for a long-running
>> query -- at what step in the execution path the query is spending the most
>> resources and time while it keeps running in the database -- is there any
>> available option in postgres? For example, if in a SELECT query the index
>> access path is taking most of the time, or in an INSERT query an index block
>> is causing contention while inserting data into the table, how would we be
>> able to find that for a currently running query or for a historical query?
>>
>
> You can see locking contention in the pg_locks table. In my experience I
> rarely ever saw anything in there even when I was cranking tens of millions
> of inserts and updates per day. I don't think there's anything for
> historical queries or to probe anything more about a currently running
> query's progress.
>
> Take some time to think about your queries and how you can reduce any
> locking they need. If you do need some locking, consider using the FOR
> UPDATE clause in SELECT to limit what you do lock.
>
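Both suggestions above can be sketched as follows (the threshold, table, and id are hypothetical examples, not from this thread):

```sql
-- auto_explain: log plans of statements slower than 500 ms
LOAD 'auto_explain';
SET auto_explain.log_min_duration = '500ms';
SET auto_explain.log_analyze = on;

-- FOR UPDATE: lock only the rows actually being changed
BEGIN;
SELECT * FROM orders WHERE id = 42 FOR UPDATE;
UPDATE orders SET status = 'shipped' WHERE id = 42;
COMMIT;
```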

Thank you. Actually, Oracle Database has a view
(v$active_session_history) that collects stats on session activity
over time, and from it we used to be able to track and debug
historical performance issues at an exact point in time. So I was
thinking that if such a thing is not available by default in postgres,
we could do it manually, and that would be helpful in the long term.
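A minimal sketch of such a manual approach (the table name and sampling predicate are illustrative; something like pg_cron or an external scheduler would drive the periodic INSERT):

```sql
-- history table: pg_stat_activity's columns plus a sample timestamp
CREATE TABLE activity_history AS
SELECT now() AS sampled_at, *
FROM pg_stat_activity
WHERE false;

-- run every few seconds from a scheduler
INSERT INTO activity_history
SELECT now(), *
FROM pg_stat_activity
WHERE state IS DISTINCT FROM 'idle';
```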


Clarity regarding the procedures call in postgresql for public and non-public schema

2024-02-26 Thread Sasmit Utkarsh
Hi Postgresql Team,

***Facing the below issue while calling the below procedures in the public and
non-public schema *shc* for the *shc_uadmin* user:

[shc_user@cucmtpccu1 ~]$ export PGHOST=cucmpsgsu0.postgres.database.azure.com
[shc_user@cucmtpccu1 ~]$ export PGDATABASE=mshcd
[shc_user@cucmtpccu1 ~]$ *export PGUSER=shc_uadmin*
[shc_user@cucmtpccu1 ~]$ export PGPASSWORD=abc123
[shc_user@cucmtpccu1 ~]$ *export PGOPTIONS='--search_path=shc'*
[shc_user@cucmtpccu1 ~]$ .nix-profile/bin/test 'K-T-*' -q
<6>0164503{----.noterm}-[INFO] BUILD_TAG =
jenkins-Tpfasmbuild1-516
<6>0164503{----.noterm}-[INFO]
sql_tty_connect_db: tty_conninfo:'dbname=shc_tty_data'
<6>0164503{----.noterm}-[INFO]
process_name=10.166.29.36#164503
<7>0164503{----.noterm}-
<7>0164503{----.noterm}-Using
PostgreSQL database
<7>0164503{----.noterm}-

<7>0164503{----.noterm}-SQL_init_db_connection()
conninfo=dbname=shc_data
<7>0164503{----.noterm}-SQL_init_db_connection()
Connection to shc_data database SUCCESSFUL
<6>0164503{----.noterm}-[INFO] PostgreSQL
Server Version = 1500.4 protocol 3
<7>0164503{----.noterm}-SQL_init_db_connection()
PREPARE AdvisoryLockData PQresultStatus = PGRES_COMMAND_OK
<7>0164503{----.noterm}-SQL_init_db_connection()
PREPARE AdvisoryUnLockData PQresultStatus = PGRES_COMMAND_OK
<7>0164503{----.noterm}-SQL_init_db_connection()
PREPARE SelectSize PQresultStatus = PGRES_FATAL_ERROR



<3>0164503{----.noterm}-[ERROR] PREPARE
failed for RIAT! ERROR:  procedure sql_select_size_procedure(text, integer,
unknown) does not exist
LINE 1: CALL SQL_select_size_procedure($1, $2, NULL)
             ^
HINT:  No procedure matches the given name and argument types. You might
need to add explicit type casts.
 [../tpfasm.c:13961:SQL_init_db_connection]
<7>0164503{----.noterm}-ROLLBACK TRANSACTION

***For an admin user (*PGUSER=pgddb_admin*), I see the expected
messages. Please see below:

export PGUSER=pgddb_admin
export PGPASSWORD=xyz123
[shc_user@cucmtpccu1 ~]$ .nix-profile/bin/test 'K-T-*' -q
<6>3429958-[INFO] BUILD_TAG = jenkins-Tpfasmbuild1-515
<6>3429958-[INFO] sql_tty_connect_db: tty_conninfo:'dbname=shc_tty_data'
<6>3429958-[INFO] process_name=10.166.29.36#3429958
<7>3429958-
<7>3429958-Using PostgreSQL database
<7>3429958-

<7>3429958-SQL_init_db_connection() conninfo=dbname=shc_data
<7>3429958-SQL_init_db_connection() Connection to shc_data database
SUCCESSFUL
<6>3429958-[INFO] PostgreSQL Server Version = 1500.4 protocol 3
<7>3429958-SQL_init_db_connection() PREPARE AdvisoryLockData PQresultStatus
= PGRES_COMMAND_OK
<7>3429958-SQL_init_db_connection() PREPARE AdvisoryUnLockData
PQresultStatus = PGRES_COMMAND_OK
*<7>3429958-SQL_init_db_connection() PREPARE SelectSize PQresultStatus =
PGRES_COMMAND_OK*
<7>3429958-SQL_init_db_connection() PREPARE SelectData PQresultStatus =
PGRES_COMMAND_OK
<7>3429958-SQL_init_db_connection() PREPARE InsertData PQresultStatus =
PGRES_COMMAND_OK
<7>3429958-SQL_init_db_connection() PREPARE UpdateData PQresultStatus =
PGRES_COMMAND_OK

***When I list the procedures from the command line:

[shc_user@cucmtpccu1 ~]$ psql "host=cucmpsgsu0.postgres.database.azure.com
port=5432 dbname=mshcd user=shc_uadmin password=abc123 sslmode=require
options=--search_path=shc"
mshcd=> \df

                                 List of functions
 Schema |           Name            | Result data type | Argument data types                                          | Type
--------+---------------------------+------------------+--------------------------------------------------------------+------
 shc    | sql_insert_data_procedure |                  | IN fa integer, IN ft integer, IN ord integer, IN xaddr text, | proc
        |                           |                  | IN recid text, IN blk_size integer, IN indata bytea,         |
        |                           |                  | INOUT outdata bytea                                          |
 shc    | sql_select_data_procedure |                  | IN fa integer, IN hold boolean, INOUT blksize integer,       | proc
        |                           |                  | INOUT fadata bytea                                           |
 shc    | sql_select_size_procedure |                  | IN hexid text, IN rtp_in integer, INOUT size_data text       | proc
 shc    | sql_update_data_procedure |                  | IN indata bytea, IN unhold boolean, IN fa integer            | proc

Re: Clarity regarding the procedures call in postgresql for public and non-public schema

2024-02-26 Thread Sasmit Utkarsh
Hello Boris,

Please find the below snippets for sql_select_size_procedure

/** creation **/
res = PQexec(conn, "CREATE OR REPLACE PROCEDURE sql_select_size_procedure("
                   "hexid text, rtp_in integer, INOUT size_data text) "
                   "LANGUAGE plpgsql AS $$ BEGIN "
                   "SELECT size FROM riat WHERE id = hexid AND rtp = rtp_in "
                   "INTO size_data; END; $$;");
LOG_DEBUG("%s() CREATE sql_select_size_procedure PQresultStatus = %s",
          __func__, PQresStatus(PQresultStatus(res)));
if (PQresultStatus(res) != PGRES_COMMAND_OK)
{
    LOG_ERROR("CREATE sql_select_size_procedure failed! %s",
              PQerrorMessage(conn));
    SQL_exit_nicely(conn, res);
}

/** calling **/
sprintf(SelectSizeName, "%s", "SelectSize");
if (SQL_vsn10) {
    sprintf(SelectSizeCommand, "%s",
            "SELECT size FROM riat WHERE id = $1 AND rtp = $2");
} else {
    sprintf(SelectSizeCommand, "%s",
            "CALL SQL_select_size_procedure($1, $2, NULL)");
}
SelectSizeNParams       = 2;
SelectSizeParamTypes[0] = 25;  // {text}
SelectSizeParamTypes[1] = 23;  // {int}

The point I'm trying to make here is that the same procedure is called with
similar inputs as in the earlier mail, but it executes for one user,
"pgddb_admin" (the admin user), and not for *shc_uadmin*.


Regards,
Sasmit Utkarsh
+91-7674022625


On Mon, Feb 26, 2024 at 6:24 PM Boris Zentner  wrote:

> You call the procedure (*SQL_select_size_procedure*) with NULL as the last
> argument, but the procedure expects text. Either provide text or cast the
> null, like null::text. Or change the procedure's input.
> --
> Boris
>
>
> Am 26.02.2024 um 13:27 schrieb Sasmit Utkarsh :
>
> SQL_select_size_procedure
>
>


could not open file "global/pg_filenode.map": Operation not permitted

2024-02-26 Thread Nick Renders
Hello,

We have a Postgres server that intermittently logs the following:

2024-02-26 10:29:41.580 CET [63962] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:30:11.147 CET [90610] LOG:  could not open file "postmaster.pid": 
Operation not permitted; continuing anyway
2024-02-26 10:30:11.149 CET [63975] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:30:35.941 CET [63986] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:30:41.546 CET [63991] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:30:44.398 CET [63994] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:31:11.149 CET [90610] LOG:  could not open file "postmaster.pid": 
Operation not permitted; continuing anyway
2024-02-26 10:31:11.151 CET [64008] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:31:41.546 CET [64023] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:32:11.150 CET [90610] LOG:  could not open file "postmaster.pid": 
Operation not permitted; continuing anyway
2024-02-26 10:32:11.153 CET [64035] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:32:41.547 CET [64050] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:33:11.151 CET [90610] LOG:  could not open file "postmaster.pid": 
Operation not permitted; continuing anyway
2024-02-26 10:33:11.153 CET [64062] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted
2024-02-26 10:33:41.548 CET [64087] FATAL:  could not open file 
"global/pg_filenode.map": Operation not permitted


This has happened 3 times over the last 2 weeks now, without any indication 
of what caused it.
The privileges of those 2 files are all in order.
When this happens, the server is no longer accessible, and we need to restart 
the service (pg_ctl restart).
Once restarted, Postgres runs fine again for a couple of days.

We are running PostgreSQL 16.2 on macOS 14.3.1.

Any idea what might be causing this issue, or how to resolve it?


Best regards,

Nick Renders




Re: could not open file "global/pg_filenode.map": Operation not permitted

2024-02-26 Thread Laurenz Albe
On Mon, 2024-02-26 at 15:14 +0100, Nick Renders wrote:
> We have a Postgres server that intermittently logs the following:
> 
> 2024-02-26 10:29:41.580 CET [63962] FATAL:  could not open file 
> "global/pg_filenode.map": Operation not permitted
> 2024-02-26 10:30:11.147 CET [90610] LOG:  could not open file 
> "postmaster.pid": Operation not permitted; continuing anyway
> 
> This has happened 3 times over the last 2 weeks now, without any indication 
> what caused it.
> The privileges of those 2 files are all in order.
> When this happens, the server is no longer accessible, and we need to restart 
> the service (pg_ctl restart).
> Once restarted, Postgres runs fine again for a couple of days.
> 
> We are running PostgreSQL 16.2 on macOS 14.3.1.

Perhaps that is some kind of virus checker or something else that locks files.
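Next time it happens, it may be worth checking whether something on the macOS side has flagged or locked the files (these commands are macOS-specific, and the path is illustrative):

```shell
# BSD file flags (e.g. uchg) or quarantine attributes would explain EPERM
ls -lO "$PGDATA/global/pg_filenode.map"
xattr -l "$PGDATA/global/pg_filenode.map"
# and see which other processes have the file open
sudo lsof "$PGDATA/global/pg_filenode.map"
```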

Yours,
Laurenz Albe




PL/pgSQL restriction on characters for copying types?

2024-02-26 Thread Thiemo Kellner

Hi

My names can contain a special character (⠒), e.g. to separate the 
donator object from the name, or the name from the type. Typically, I use 
this with foreign keys: e.g., table PARENT has the surrogate key ID, and the 
table CHILD would then have the column PARENT⠒ID. That way, I can use 
the underscore to segment column names like yet_another_column_name. I 
do not like camel case for this because identifiers do not have to be 
case-sensitive.


However, I want to create a (trigger) function to impose data 
consistency. For that purpose, I try to copy the data type of a PL/pgSQL 
variable from the base object, a view in that case. Trying so, I get the 
following error on installation of the function.


V⠒NODE_TYPE⠒NAME    NODE⠒V.NODE_TYPE⠒NAME%type := null;

Syntax error at "%" … invalid type name. If I use the actual type of the 
column, all is fine.


V⠒NODE_TYPE⠒NAME text;

Please find attached script files of objects directly involved in the 
trigger function.


Is there something, I am doing wrongly?

Kind Regards

Thiemo

@echo Set up function NODE_GOOD⠒TR_B_IU_R;

set search_path = snowrunner, public;

create or replace function NODE_GOOD⠒TR_B_IU_R()
  returns trigger
  language plpgsql
  stable as
$body$
declare
-- V⠒NODE_TYPE⠒NAME text;
V⠒NODE_TYPE⠒NAME    NODE⠒V.NODE_TYPE⠒NAME%type := null;
begin
select NODE_TYPE⠒NAME into V⠒NODE_TYPE⠒NAME
  from NODE⠒V
 where 1 = 1
   and ID = new.NODE⠒ID
   and 1 = 1;
if (V⠒NODE_TYPE⠒NAME = 'Drop-off'
and new.TASK_NAME is null) then
raise exception 'A good connection to a drop-off node must have a 
task name!';
elsif (V⠒NODE_TYPE⠒NAME != 'Drop-off'
   and new.TASK_NAME is not null) then
raise exception 'A good connection to a non-drop-off node cannot 
have a task name!';
end if;
return null;
end;
$body$;


commit;
@echo Set view NODE⠒V up;

create or replace view NODE⠒V as
  select N.NAME as NODE⠒NAME,
 -- N.QUANTITY,
 T.NAME as NODE_TYPE⠒NAME,
 M.MAP⠒NAME,
 M.MAP⠒CODE,
 M.REGION⠒NAME,
 M.REGION⠒COMPLETION_PERCENTATGE,
 A.CENTRICITY⠒NAME,
 A.DIRECTION⠒CODE,
 N.DESCRIPTION as NODE⠒DESCRIPTION,
 T.DESCRIPTION as NODE_TYPE⠒DESCRIPTION,
 M.MAP⠒DESCRIPTION,
 M.REGION⠒DESCRIPTION,
 A.AREA⠒DESCRIPTION,
 A.CENTRICITY⠒DESCRIPTION,
 A.DIRECTION⠒DESCRIPTION,
 N.ID,
 N.NODE_TYPE⠒ID,
 N.MAP⠒ID,
 M.REGION⠒ID,
 N.AREA⠒ID,
 A.CENTRICITY⠒ID,
 A.DIRECTION⠒ID
from NODE N
  inner join NODE_TYPE T
  on N.NODE_TYPE⠒ID = T.ID
  inner join MAP⠒V M
  on N.MAP⠒ID = M.ID
  inner join AREA⠒V A
  on N.AREA⠒ID = A.ID;


create or replace rule NODE⠒R_I as
  on insert
  to NODE⠒V
  do instead
insert into NODE (MAP⠒ID,
  AREA⠒ID,
  NODE_TYPE⠒ID,
  NAME,
  -- QUANTITY,
  DESCRIPTION)
values (/* MAP⠒I */ case
            -- ID given
            when new.MAP⠒ID is not null then new.MAP⠒ID
            -- name or code and region given
            when (   new.MAP⠒CODE is not null
                  or new.MAP⠒NAME is not null)
             and (   new.REGION⠒ID is not null
                  or new.REGION⠒NAME is not null)
                then (select ID
                        from MAP⠒V
                       where (   MAP⠒CODE = new.MAP⠒CODE
                              or MAP⠒NAME = new.MAP⠒NAME)
                         and (   REGION⠒ID = new.REGION⠒ID
                              or REGION⠒NAME = new.REGION⠒NAME))
            else null
        end,
        /* AREA⠒ID */ case
            -- ID given
            when new.AREA⠒ID is not null then new.AREA⠒ID
            -- name given
            when (   new.DIRECTION⠒CODE is not null
                  or new.DIRECTION⠒ID is not null)

Postgres > 12 with Windows Server 2012

2024-02-26 Thread Markus Oley
Hello everyone,
I need to run Postgres on Windows Server 2012. Postgres 12 reaches end of
life in less than 12 months, so I'd like to use newer versions (I've got
Postgres 15 sorted) in this scenario. However, I have concerns because
Postgres 13 onwards is no longer specified for Windows Server 2012.
Is this just due to the fact that Windows Server 2012 is already end of life,
or are there really hard reasons (such as reliability degradation or
expected technical problems) why this combination should be avoided?

It would be nice if I could hear a well-founded opinion on this.
Thanks in advance
Regards
Markus


Re: PL/pgSQL restriction on characters for copying types?

2024-02-26 Thread Thiemo Kellner
Oh, I totally forgot to mention that I ran the scripts with DbVisualizer 
against a 16.1 (Debian 16.1-1.pgdg110+1) server using PostgreSQL JDBC 
Driver 42.6.0 .


Am 26.02.2024 um 16:51 schrieb Thiemo Kellner:

Hi

My names can contain a special character (⠒), e.g. to separate the 
donator object from the name or the name from the type. Typically, I use 
this with foreign keys, e.g. table PARENT has the surrogate key ID, the 
table CHILD would then have the column PARENT⠒ID. That way, I can use 
the underscore to segment column names like yet_another_column_name. I 
do not like camel-case in that case because names do not have to be case 
sensitive.


However, I want to create a (trigger) function to impose data 
consistency. For that purpose, I try to copy the data type of a PL/pgSQL 
variable from the base object, a view in that case. Trying so, I get the 
following error on installation of the function.


V⠒NODE_TYPE⠒NAME    NODE⠒V.NODE_TYPE⠒NAME%type := null;

Syntax error at "%" … invalid type name. If I use the actual type of the 
column, all is fine.


V⠒NODE_TYPE⠒NAME text;

Please find attached script files of objects directly involved in the 
trigger function.


Is there something, I am doing wrongly?

Kind Regards

Thiemo





Re: Fastest way to clone schema ~1000x

2024-02-26 Thread Adrian Klaver

On 2/26/24 01:06, Emiel Mols wrote:
On Mon, Feb 26, 2024 at 3:50 PM Daniel Gustafsson wrote:


There is a measurable overhead in connections, regardless of if they
are used
or not.  If you are looking to squeeze out performance then doing
more over
already established connections, and reducing max_connections, is a
good place
to start.


Clear, but with database-per-test (and our backend setup), it would have 
been *great* if we could have switched database on the same connection 
(similar to "USE xxx" in mysql). That would limit the connections to the 
amount of workers, not multiplied by tests.


That is because:

https://dev.mysql.com/doc/refman/8.3/en/glossary.html#glos_schema

"In MySQL, physically, a schema is synonymous with a database. You can 
substitute the keyword SCHEMA instead of DATABASE in MySQL SQL syntax, 
for example using CREATE SCHEMA instead of CREATE DATABASE. "





Even with a pooler, we're still going to be maintaining 1000s of 
connections from the backend workers to the pooler. I would expect this 
to be rather efficient, but still unnecessary. Also, both 
pgbouncer/pgpool don't seem to support switching database in-connection 
(they could have implemented the aforementioned "USE" statement I 
think!). [Additionally we're using PHP that doesn't seem to have a good 
shared memory pool implementation -- pg_pconnect is pretty buggy].


I'll continue with some more testing. Thanks for now!


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Clarity regarding the procedures call in postgresql for public and non-public schema

2024-02-26 Thread Adrian Klaver

On 2/26/24 04:26, Sasmit Utkarsh wrote:

Hi Postgresql Team,

***Facing the below issue, while calling the below procedures in public 
and non-public schema *shc* for ( *shc_uadmin* user)




*<3>0164503{----.noterm}-[ERROR] PREPARE 
failed for RIAT! ERROR:  procedure sql_select_size_procedure(text, 
integer, unknown) does not exist

LINE 1: CALL SQL_select_size_procedure($1, $2, NULL)
              ^
HINT:  No procedure matches the given name and argument types. You might 
need to add explicit type casts.*

  [../tpfasm.c:13961:SQL_init_db_connection]
<7>0164503{----.noterm}-ROLLBACK TRANSACTION


1) From the error message:

"ERROR:  procedure sql_select_size_procedure(text,
integer, unknown) does not exist"

"No procedure matches the given name and argument types"

From the call:

CALL SQL_select_size_procedure($1, $2, NULL)

You might try:

CALL SQL_select_size_procedure($1, $2, NULL::text)

2) Any clue as to where this:

"...failed for RIAT..."

is coming from?



--
Adrian Klaver
adrian.kla...@aklaver.com





Re: PL/pgSQL restriction on characters for copying types?

2024-02-26 Thread Tom Lane
Thiemo Kellner  writes:
> However, I want to create a (trigger) function to impose data 
> consistency. For that purpose, I try to copy the data type of a PL/pgSQL 
> variable from the base object, a view in that case. Trying so, I get the 
> following error on installation of the function.

> V⠒NODE_TYPE⠒NAME    NODE⠒V.NODE_TYPE⠒NAME%type := null;

> Syntax error at "%" … invalid type name. If I use the actual type of the 
> column, all is fine.

FWIW, I couldn't reproduce this with the fragmentary scripts you
provided.  I suspect the problem is not about the special characters
in the names, rather about search_path not including the NODE⠒V view.
Consider schema-qualifying the view name, or attaching a "SET
search_path" clause to the function.
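Either fix might look like this (a sketch against the poster's object names; the schema name snowrunner comes from the attached script):

```sql
-- option 1: schema-qualify the view in the %TYPE reference
-- V⠒NODE_TYPE⠒NAME  snowrunner.NODE⠒V.NODE_TYPE⠒NAME%type := null;

-- option 2: pin the path on the function itself
create or replace function NODE_GOOD⠒TR_B_IU_R()
  returns trigger
  language plpgsql
  set search_path = snowrunner, public
as $body$
begin
  -- original function body unchanged
  return null;
end;
$body$;
```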

regards, tom lane




Re: Clarity regarding the procedures call in postgresql for public and non-public schema

2024-02-26 Thread Tom Lane
Adrian Klaver  writes:
> On 2/26/24 04:26, Sasmit Utkarsh wrote:
>> ***Facing the below issue, while calling the below procedures in public 
>> and non-public schema *shc* for ( *shc_uadmin* user)

> 1) From the error message:

> "ERROR:  procedure sql_select_size_procedure(text,
> integer, unknown) does not exist"
> "No procedure matches the given name and argument types"

This perhaps would be better understood as "there is no matching
procedure in the current search_path".  The fact that it works as
one user and not as another could be explained if the users have
different search paths.
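That theory can be checked, and fixed, along these lines (role and schema names are taken from the thread; the statements are a sketch):

```sql
-- a per-role search_path override would show up in rolconfig
SELECT rolname, rolconfig
FROM pg_roles
WHERE rolname IN ('pgddb_admin', 'shc_uadmin');

-- make the shc schema visible for the failing role
ALTER ROLE shc_uadmin SET search_path = shc, public;
```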

regards, tom lane




Re: PL/pgSQL restriction on characters for copying types?

2024-02-26 Thread Thiemo Kellner
Thanks for the hint and care. The install script already has a set statement,
and I have now added the search_path clause, to no avail. Please find
the entire code attached, along with a screenshot of the error.


Am 26.02.2024 um 17:35 schrieb Tom Lane:

Thiemo Kellner  writes:

However, I want to create a (trigger) function to impose data
consistency. For that purpose, I try to copy the data type of a PL/pgSQL
variable from the base object, a view in that case. Trying so, I get the
following error on installation of the function.



V⠒NODE_TYPE⠒NAME    NODE⠒V.NODE_TYPE⠒NAME%type := null;



Syntax error at "%" … invalid type name. If I use the actual type of the
column, all is fine.


FWIW, I couldn't reproduce this with the fragmentary scripts you
provided.  I suspect the problem is not about the special characters
in the names, rather about search_path not including the NODE⠒V view.
Consider schema-qualifying the view name, or attaching a "SET
search_path" clause to the function.

regards, tom lane

snowrunner.7z
Description: application/compressed


Re: PL/pgSQL restriction on characters for copying types?

2024-02-26 Thread Adrian Klaver

On 2/26/24 08:53, Thiemo Kellner wrote:
Thanks for the hint and care. The install script has a set statement 
already and I now added the search_path clause to no avail. Please find 
the entire code attached and a screenshot from the error.


A quick test:

create table type_test(NODE_TYPE⠒NAME text);

 \d type_test
           Table "public.type_test"
     Column     | Type | Collation | Nullable | Default
----------------+------+-----------+----------+---------
 node_type⠒name | text |           |          |

CREATE OR REPLACE FUNCTION public.type_test_fnc()
 RETURNS void
 LANGUAGE plpgsql
AS $function$
DECLARE
type_name  type_test.node_type⠒name%TYPE :=NULL;
BEGIN
RAISE NOTICE 'TEST';
END;

$function$

select type_test_fnc();
NOTICE:  TEST
 type_test_fnc
---

This shows that it is not the name itself that is the problem, at least in my 
case. I'm going to say it as Tom Lane said: there is a search_path 
issue. I tried to follow all the set search_path calls in your code and 
got lost as to where they ended up. I would try a simple test case, 
using psql, like the one above, to verify that it is not the name in your 
case either. Assuming that works, you will then need to track down what the 
actual search_path is when you run the function.





Am 26.02.2024 um 17:35 schrieb Tom Lane:

Thiemo Kellner  writes:

However, I want to create a (trigger) function to impose data
consistency. For that purpose, I try to copy the data type of a PL/pgSQL
variable from the base object, a view in that case. Trying so, I get the
following error on installation of the function.



V⠒NODE_TYPE⠒NAME    NODE⠒V.NODE_TYPE⠒NAME%type := null;



Syntax error at "%" … invalid type name. If I use the actual type of the
column, all is fine.


FWIW, I couldn't reproduce this with the fragmentary scripts you
provided.  I suspect the problem is not about the special characters
in the names, rather about search_path not including the NODE⠒V view.
Consider schema-qualifying the view name, or attaching a "SET
search_path" clause to the function.

    regards, tom lane


--
Adrian Klaver
adrian.kla...@aklaver.com





Re: PL/pgSQL restriction on characters for copying types?

2024-02-26 Thread Thiemo Kellner
Shame on me. My bad. It was the order of installation that did not work. 
Sorry for that. I was misled by the error message. If an object is 
missing, I would not expect an "invalid type name" message.


Thanks

Am 26.02.2024 um 17:53 schrieb Thiemo Kellner:
Thanks for the hint and care. The install script has a set statement 
already and I now added the search_path clause to no avail. Please find 
the entire code attached and a screenshot from the error.


Am 26.02.2024 um 17:35 schrieb Tom Lane:

Thiemo Kellner  writes:

However, I want to create a (trigger) function to impose data
consistency. For that purpose, I try to copy the data type of a PL/pgSQL
variable from the base object, a view in that case. Trying so, I get the
following error on installation of the function.



V⠒NODE_TYPE⠒NAME    NODE⠒V.NODE_TYPE⠒NAME%type := null;



Syntax error at "%" … invalid type name. If I use the actual type of the
column, all is fine.


FWIW, I couldn't reproduce this with the fragmentary scripts you
provided.  I suspect the problem is not about the special characters
in the names, rather about search_path not including the NODE⠒V view.
Consider schema-qualifying the view name, or attaching a "SET
search_path" clause to the function.

    regards, tom lane





Re: PL/pgSQL restriction on characters for copying types?

2024-02-26 Thread Adrian Klaver

On 2/26/24 09:30, Thiemo Kellner wrote:
Shame on me. My bad. It was the order of installation that did not work. 
Sorry for that. I was misled by the error message. If an object is 
missing, I would not expect an "invalid type name" message.


For all the code knows it could be just a misspelling.



Thanks

Am 26.02.2024 um 17:53 schrieb Thiemo Kellner:
Thanks for the hint and care. The install script has a set statement 
already and I now added the search_path clause to no avail. Please 
find the entire code attached and a screenshot from the error.


Am 26.02.2024 um 17:35 schrieb Tom Lane:

Thiemo Kellner  writes:

However, I want to create a (trigger) function to impose data
consistency. For that purpose, I try to copy the data type of a 
PL/pgSQL
variable from the base object, a view in that case. Trying so, I get 
the

following error on installation of the function.



V⠒NODE_TYPE⠒NAME    NODE⠒V.NODE_TYPE⠒NAME%type := null;


Syntax error at "%" … invalid type name. If I use the actual type of 
the

column, all is fine.


FWIW, I couldn't reproduce this with the fragmentary scripts you
provided.  I suspect the problem is not about the special characters
in the names, rather about search_path not including the NODE⠒V view.
Consider schema-qualifying the view name, or attaching a "SET
search_path" clause to the function.

    regards, tom lane





--
Adrian Klaver
adrian.kla...@aklaver.com





Re: PL/pgSQL restriction on characters for copying types?

2024-02-26 Thread Tom Lane
Adrian Klaver  writes:
> On 2/26/24 09:30, Thiemo Kellner wrote:
>> Shame on me. My bad. It was the order of installation that did not work. 
>> Sorry for that. I was mislead by the error message. If an object is 
>> missing I would not expect an invalid type name message.

> For all the code knows it could be just a misspelling.

I think Thiemo's got a point: "invalid type name" isn't the sort
of phrasing we'd normally use.  Compare

regression=# select 0::foo;
ERROR:  type "foo" does not exist

regression=# create function f() returns foo.bar%type as 'select 1' language 
sql;
ERROR:  relation "foo" does not exist

regression=# create function f() returns void language plpgsql as
$$declare x foo.bar%type; begin end$$;
ERROR:  syntax error at or near "%"
LINE 2: $$declare x foo.bar%type; begin end$$;
   ^
CONTEXT:  invalid type name "foo.bar%type"

Digging in the plpgsql code, I notice that there's already a comment
complaining about how this is unhelpful:

 * If we have a simple or composite identifier, check for %TYPE and
 * %ROWTYPE constructs.  (Note that if plpgsql_parse_wordtype et al fail
 * to recognize the identifier, we'll fall through and pass the whole
 * string to parse_datatype, which will assuredly give an unhelpful
 * "syntax error".  Should we try to give a more specific error?)

which I believe I wrote not very long ago as part of an unrelated
change (digs ... yeah, see 5e8674dc8).  I'd not gone further than that
because the previous behavior was no better, but maybe it's time to
work harder.  The main problem is that this code doesn't know whether
the appropriate complaint is about a table not existing or a table
column not existing.  Maybe it's okay to let plpgsql_parse_wordtype
etc throw the error for themselves, though.

regards, tom lane




Re: Unable to get PostgreSQL 15 with Kerberos (GSS) working

2024-02-26 Thread Stephen Frost
Greetings,

* Matthew Dennison (m...@matty-uk.co.uk) wrote:
> No matter what I try I don't seem to be able to get the psl command locally 
> to work using Kerberos.  I receive for following message:
> FATAL:  GSSAPI authentication failed for user "postgres"
> FATAL:  GSSAPI authentication failed for user myad.usern...@mydomain.net
[...]
> kinit -kt /pgcluster/data/postgres.keytab 
> POSTGRES/hostname.mydomain@mydomain.net
> klist
> Ticket cache: KCM:0:20151
> Default principal: POSTGRES/hostname.mydomain@mydomain.net
> 
> Valid starting ExpiresService principal
> 23/02/24 10:19:12  23/02/24 20:19:12  krbtgt/mydomain@mydomain.net
> renew until 23/02/24 20:19:12

Doesn't look like you're actually getting a PG ticket ...

> psql -h localhost -U postgres -d postgres

And this might be why.  Don't use '-h localhost' because that'll, by
default anyway, cause the Kerberos library to try to do reverse DNS on
the address you are trying to connect to (::1/128, for example) ... and
that possibly just resolves to 'localhost', which isn't the server's
name that you're trying to connect to.  If the rDNS lookup fails then
we'll use what you provided- but again, that's just 'localhost' and
isn't the server's name in the AD realm.

Try doing:

psql -h hostname.mydomain.net -U postgres -d postgres

instead, and update your pg_hba.conf to match those connections which
are coming into the system's actual IP address instead of only matching
localhost connection attempts.
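
As a sketch of such an entry (the address range and realm below are assumptions for illustration, not taken from the original report), a pg_hba.conf line matching non-loopback GSSAPI connections might look like:

```
# Hypothetical example: match GSSAPI connections arriving on the host's
# real network address. Adjust the CIDR range to your network.
# include_realm=0 strips the realm, so "myad.username@MYDOMAIN.NET"
# maps to the database role "myad.username".
host    all    all    192.0.2.0/24    gss    include_realm=0 krb_realm=MYDOMAIN.NET
```

Reload the server (pg_ctl reload or SELECT pg_reload_conf()) after editing pg_hba.conf so the new rule takes effect.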

You're definitely not going to have any success until you can do a
'klist' and see a POSTGRES/hostname.mydomain.net ticket listed after
doing your psql attempt.

Thanks,

Stephen
