Re: A question about possible recovery inconsistency

2023-10-11 Thread Laurenz Albe
On Tue, 2023-10-10 at 11:46 -0400, Eugen Konkov wrote:
> [wants to avoid
>  FATAL:  recovery ended before configured recovery target was reached
>  that is issued in v13 and later]
>
> 1. Why here (in experiment2.txt) is redo done at 0/728 when the recovery
> target "2023-10-10 15:07:37" is at 0/790?
> I suppose 0/790 should be included, because
> `recovery_target_inclusive=true` by default.

Because there was no transaction at 0/790.

> 2. Is there any way to include a label into a base backup which I can
> use as `recovery_target_name`?
> This is not clear from the documentation:
> https://www.postgresql.org/docs/current/continuous-archiving.html#BACKUP-LOWLEVEL-BASE-BACKUP
> Is the backup 'label' usable as `recovery_target_name`, i.e. does
> `pg_backup_start` call `pg_create_restore_point`?

No.

> 3. Is there any way to get the latest time from a base backup which is
> reachable and could be used as the value for `recovery_target_time`?
> As a workaround for the XX000 error I inserted one additional record into
> the database, so a new WAL file is generated. Then I can use the t3
> value for `recovery_target_time`.
> This only works when archive_command/restore_command are configured.
> But without them it seems I cannot use the `recovery_target_time`
> option. Is this true?

Perhaps you could use the time from the "backup" file in the WAL archive,
not sure.

But why do you want to do that, if all that you have to do is specify
"recovery_target = 'immediate'" to recover to the end of the backup?

Yours,
Laurenz Albe
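
A minimal sketch of what Laurenz describes (untested; the archive path is
illustrative, and on v13+ an empty recovery.signal file in the data
directory triggers recovery):

# postgresql.conf on the server restored from the base backup:
restore_command = 'cp /wal_archive/%f "%p"'
recovery_target = 'immediate'    # stop as soon as the backup is consistent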




Subject: FATAL: cache lookup failed for relation 1247

2023-10-11 Thread Israr Khan
Hey PgSQL,

I am facing a PostgreSQL database access issue after running VACUUM on a
table.
Please help and suggest as soon as possible.

Error: FATAL: cache lookup failed for relation 1247


Thank you
Israr



Cannot drop partition if foreign keys exist

2023-10-11 Thread Олег Самойлов
Hi all.
There are two common practices for dropping a partition from a partitioned
table: a plain DROP, or detach-then-drop. But a plain DROP does not work if
a foreign key exists. An example script is attached.




$ psql -p 5416 -a -f test.sql
\setenv PSQL_EDITOR 'vim'
\setenv PSQL_EDITOR_LINENUMBER_ARG '+'
\set ON_ERROR_ROLLBACK 'interactive'
\set ON_ERROR_STOP 'on'
--\set SHOW_CONTEXT 'always'
\set PROMPT1 '%[%033[38;5;'`echo $PROMPT_COLOR`'m%]%x%n@%m:%>/%/\n%R%# %[%033[m%]'
\set PROMPT2 '%[%033[38;5;'`echo $PROMPT_COLOR`'m%]%R%# %[%033[m%]'
BEGIN;
BEGIN
CREATE TABLE parent (
id  int primary key
) PARTITION BY RANGE (id);
CREATE TABLE
CREATE TABLE parent_0 PARTITION OF parent
FOR VALUES FROM (0) TO (100);
CREATE TABLE
CREATE TABLE children (
id  int primary key references parent(id)
) PARTITION BY RANGE (id);
CREATE TABLE
CREATE TABLE children_0 PARTITION OF children
FOR VALUES FROM (0) TO (100);
CREATE TABLE
DROP TABLE children_0;
DROP TABLE
DROP TABLE parent_0;
psql:test.sql:15: ERROR:  cannot drop table parent_0 because other objects depend on it
DETAIL:  constraint children_id_fkey on table children depends on table parent_0
HINT:  Use DROP ... CASCADE to drop the dependent objects too.

This looks like a bug.

Re: Cannot drop partition if foreign keys exist

2023-10-11 Thread Alvaro Herrera
On 2023-Oct-11, Олег Самойлов wrote:

> There are two common practices for dropping a partition from a partitioned
> table: a plain DROP, or detach-then-drop. But a plain DROP does not work
> if a foreign key exists. An example script is attached.

Yeah.  Detach it first, then you should be able to drop it.

> psql:test.sql:15: ERROR:  cannot drop table parent_0 because other objects depend on it
> DETAIL:  constraint children_id_fkey on table children depends on table parent_0
> HINT:  Use DROP ... CASCADE to drop the dependent objects too.
> 
> This looks like a bug.

We tried to make DROP work, but we didn't find a way.  Patches welcome.

-- 
Álvaro Herrera




Re: Cannot drop partition if foreign keys exist

2023-10-11 Thread Laurenz Albe
On Wed, 2023-10-11 at 14:08 +0300, Олег Самойлов wrote:
> There are two common practices for dropping a partition from a partitioned
> table: a plain DROP, or detach-then-drop.
> But a plain DROP does not work if a foreign key exists. An example script is attached.

That is working as designed.  You cannot detach a partition of a table if a
foreign key points to it.

Create the foreign key constraints between the partitions instead.

Yours,
Laurenz Albe
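
A rough sketch of that arrangement, reusing the tables from the test script
(untested):

-- Give the child partition its own foreign key to the matching parent
-- partition, instead of one FK between the partitioned tables:
CREATE TABLE children (
    id  int primary key
) PARTITION BY RANGE (id);
CREATE TABLE children_0 PARTITION OF children
    FOR VALUES FROM (0) TO (100);
ALTER TABLE children_0
    ADD FOREIGN KEY (id) REFERENCES parent_0 (id);
-- Dropping children_0 now takes its foreign key with it, after which
-- parent_0 can be dropped without CASCADE.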




moving data with pg_dump/pg_restore between databases with different locales

2023-10-11 Thread Zwettler Markus (OIZ)
hi,

if you want to retrieve data by means of pg_dump / pg_restore from a
database with locale de_CH.UTF8 into a database with locale en_US.UTF8,
are there any other things to consider than the behavior implications
mentioned in the documentation?

https://www.postgresql.org/docs/current/locale.html#LOCALE-BEHAVIOR

Thanks,
Markus



Re: Subject: FATAL: cache lookup failed for relation 1247

2023-10-11 Thread Tom Lane
Israr Khan  writes:
> I am facing a PostgreSQL database access issue after running VACUUM on a
> table.

There has to have been more to it than that.

> Error : FATAL: cache lookup failed for relation 1247

1247 is pg_type, so this looks like rather nasty catalog corruption.
If you're really lucky, this might just be a messed-up index for
pg_class, but I'm afraid this might be something that calls for
professional data recovery help (if the data in that database is
valuable enough to you to justify that).

In any case, your first move should be to stop the postmaster and
make a physical backup (e.g. with tar) of the data directory.
Then at least you can get back to where you are if recovery
attempts make things worse.

After that, see if you can connect to that DB with system indexes
disabled, and if so try to issue REINDEX TABLE pg_class.

There's some other helpful info at

https://wiki.postgresql.org/wiki/Corruption

regards, tom lane
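
A rough transcription of those steps (paths and database name are
illustrative):

pg_ctl stop -D /var/lib/pgsql/data
tar -czf /safe/place/data-dir-copy.tar.gz /var/lib/pgsql/data
pg_ctl start -D /var/lib/pgsql/data
# connect with system indexes disabled (superuser only):
PGOPTIONS="-c ignore_system_indexes=on" psql -U postgres mydb
# then, inside psql:
REINDEX TABLE pg_class;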




Re: No Data Being Inserted

2023-10-11 Thread Adrian Klaver

On 10/11/23 04:46, Anthony Apollis wrote:

I have a table that I populate using joins.
One of the source tables' data doesn't get inserted. Tables are:
[schema screenshot not included]

My test to see if the data has been inserted:
[query screenshot not included]
I get blanks or no output for the above test.

My source tables have data:
[screenshot not included]
Can't seem to figure out where the issue is.



1) Verify that your client is not lying to you: use psql to see if there
is data in the table.


2) Look at the Postgres log to see what is happening. You would want
log_statement in postgresql.conf to be at least 'mod'.

--
Adrian Klaver
adrian.kla...@aklaver.com
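
A minimal sketch of the two checks above (table name illustrative):

-- 1) from psql, bypassing the client in question:
SELECT count(*) FROM my_target_table;

-- 2) in postgresql.conf, then reload and watch the log:
--    log_statement = 'mod'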





Re: A question about possible recovery inconsistency

2023-10-11 Thread Eugen Konkov
> But why do you want to do that, if all that you have to do is specify
> "recovery_target = 'immediate'" to recover to the end of the backup?

Because automation scripts do not know whether transactions exist
after some point in time or not. But automation scripts do know that
the backup completed successfully at that point.
For example, we want to supply a time to recover the database to:
1. The base backup is restored, and WAL files are applied successfully
if there is a transaction after the target time.
2. The base backup is restored, but WAL files are not applied
successfully, even though we have a valid WAL file after the target
time, e.g. one created via pg_create_restore_point / pg_switch_wal.

It looks inconsistent: we can restore to a saved point by name, but we
cannot restore to it by time, even if that time is earlier than when the
named point was created.

As a workaround we just insert a fake record into the database, but this
looks very questionable: why should we need to insert more records into
the database after a successful backup?
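
The named-restore-point route mentioned above looks roughly like this, run
on the primary right after the base backup completes (sketch, untested):

SELECT pg_create_restore_point('after_base_backup');
SELECT pg_switch_wal();  -- push the current WAL segment out to the archive
-- recovery can then target the name: recovery_target_name = 'after_base_backup'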





-- 
Eugen Konkov
DevOps Engineer, Planitar Inc.
M. 416-276-1715
ekonk...@planitar.com | goiguide.com
560 Parkside Drive, Unit 401
Waterloo, ON, Canada N2L 5Z4




log wal file transfer in error logs

2023-10-11 Thread Atul Kumar
Hi,

I need to log all wal files that are getting transferred to all standby
servers in the postgresql error logs.

I also need to log those wal files that are being applied in standbys.

My Postgres version is 12, running on CentOS 7.

I am unable to work out which parameters in postgresql.conf control this;
please help me achieve it.



Regards,
Atul


Re: moving data with pg_dump/pg_restore between databases with different locales

2023-10-11 Thread Laurenz Albe
On Wed, 2023-10-11 at 11:34 +, Zwettler Markus (OIZ) wrote: 
> if you want to retrieve data by means of pg_dump / pg_restore from a
> database with locale de_CH.UTF8 into a database with locale en_US.UTF8,
> are there any other things to consider than the behavior implications
> mentioned in the documentation?

The sort order will change, the language of error messages and the log file
will change (unless you set lc_messages differently), some format letters in
to_timestamp(), to_number() and to_char() will change semantics (unless you
set lc_numeric and lc_time differently).

That's about it.

Yours,
Laurenz Albe
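
For what it's worth, the session-level pieces Laurenz lists can also be
pinned per database; a sketch with an illustrative database name, keeping
the old de_CH behavior:

ALTER DATABASE target_db SET lc_messages = 'de_CH.UTF8';
ALTER DATABASE target_db SET lc_numeric  = 'de_CH.UTF8';
ALTER DATABASE target_db SET lc_time     = 'de_CH.UTF8';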




Re: log wal file transfer in error logs

2023-10-11 Thread Laurenz Albe
On Wed, 2023-10-11 at 22:15 +0530, Atul Kumar wrote:
> I need to log all wal files that are getting transferred to all standby 
> servers in the postgresql error logs.

WAL files are not transferred to standby servers...

WAL is streamed, so what would you want to log?

Yours,
Laurenz Albe




Re: A question about possible recovery inconsistency

2023-10-11 Thread Ron

On 10/11/23 09:52, Eugen Konkov wrote:
> > But why do you want to do that, if all that you have to do is specify
> > "recovery_target = 'immediate'" to recover to the end of the backup?
> 
> Because automation scripts do not know whether transactions exist
> after some point in time or not. But automation scripts do know that
> the backup completed successfully at that point.
> For example, we want to supply a time to recover the database to:
> 1. The base backup is restored, and WAL files are applied successfully
> if there is a transaction after the target time.


Doesn't "pg_basebackup --wal-method=stream" already do that?

Since the commands below automagically starts physical streaming, there must 
be a variation where all the wal data is applied but /doesn't/ start 
replication.


pg_basebackup --dbname=service=basebackup -D $PGDATA --progress 
--checkpoint=fast -v \
    --write-recovery-conf --wal-method=stream --create-slot 
--slot=pgstandby1 --compress=server-zstd

pg_ctl start -w

Maybe just remove the two "slot" options:
pg_basebackup --dbname=service=basebackup -D $PGDATA --progress 
--checkpoint=fast -v \

    --write-recovery-conf --wal-method=stream --compress=server-zstd

--
Born in Arizona, moved to Babylonia.

Re: log wal file transfer in error logs

2023-10-11 Thread Atul Kumar
Yes, I meant streamed. I need to log the WAL files that are getting
streamed and replayed on the standby servers, in the error logs of the
primary as well as the standby servers.

That way I can cross-check that whichever file was streamed has been
replayed on the standby.

I need to keep track of these WAL files in the database error logs.

Regards,
Atul



Re: log wal file transfer in error logs

2023-10-11 Thread Ron


The views pg_stat_replication (on the primary) and pg_stat_wal_receiver
(on the standby) do that for you.
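
For example (column names as of v12, where pg_stat_wal_receiver still
calls it received_lsn):

-- on the primary:
SELECT client_addr, sent_lsn, write_lsn, flush_lsn, replay_lsn
FROM pg_stat_replication;

-- on the standby:
SELECT status, received_lsn, last_msg_receipt_time
FROM pg_stat_wal_receiver;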





--
Born in Arizona, moved to Babylonia.

Postgresql HA cluster

2023-10-11 Thread Jason Grammenos
Hello,

I am new to PostgreSQL and have to migrate a Cassandra cluster to PostgreSQL.
I have a reasonable amount of experience with MySQL and use master -> master
MySQL replication quite successfully.

I am trying to understand which configuration/option to pick for PostgreSQL
that will provide the same or a similar level of operational ease as the
Cassandra cluster. What I mean by that is: backups, and patching (rebooting
nodes to patch).
With Cassandra any single node can be patched and rebooted without incurring
downtime. This is also true with MySQL master->master replication, which we
front with an HAProxy reverse proxy (we can fail over between the two MySQL
nodes, draining the connections as we do so, and then patch and reboot).

I have found the following documentation:
https://www.postgresql.org/docs/current/different-replication-solutions.html

and have heard of "Traktor", "pgEdge" and some other third-party tools.
My difficulty is that with very little knowledge of PostgreSQL I am having a 
hard time sorting through the various options and determining which are 
potential candidates for the kind of setup I am looking for (something 
approximating MySQL master->master replication).

Any advice or resources would be much appreciated.
Regards
Jason Grammenos

Jason Grammenos | Operations & Infrastructure Analyst
Pronouns: he/him
P: 613.232.7797 x1131
Toll-free: 866.545.3745 x1131
jason.gramme...@agilitypr.com
agilitypr.com



Assistance needed for the resolution of a memory leak

2023-10-11 Thread Sasmit Utkarsh
Hi Team,

I'm trying to insert multiple duplicate rows into a table using libpq in
C. I'm getting a memory leak reported by the AddressSanitizer tool. Please
let me know if I can share the code snippet.


Regards,
Sasmit Utkarsh
+91-7674022625


Re: Assistance needed for the resolution of a memory leak

2023-10-11 Thread Adrian Klaver

On 10/11/23 12:15, Sasmit Utkarsh wrote:

Hi Team,

I'm trying to insert multiple duplicate rows into a table using libpq
in C. I'm getting a memory leak reported by the AddressSanitizer tool.
Please let me know if I can share the code snippet.


That is the only way you are going to get an answer, so share away.

Also include the report information from AddressSanitizer.






--
Adrian Klaver
adrian.kla...@aklaver.com





Re: Assistance needed for the resolution of a memory leak

2023-10-11 Thread Sasmit Utkarsh
Hi Adrian,

I have a main program which parses the input data from files and inserts
it into a table in a Postgres DB. The input can have many duplicates and
we cannot avoid that for now, but we have to achieve the insertion of the
records as quickly as possible. Below are the functions which are called
from main to insert the records in a loop until it reaches the end of the
input file. If the input file has all unique records then there are no
leaks reported by the AddressSanitizer tool, but when there are duplicates
it reports leaks.

For example: I have provided the test run for a file which has 4 duplicate
entries in the input file, and below is the leak reported. I have
consistently used PQclear to free all PGresult objects. Not sure why it
references PQmakeEmptyPGresult and pqResultAlloc even though I have not
used them explicitly. Is anything missing here? How can I avoid memory
leaks? Please assist.

AddressSanitizer report:

 TOTAL rows inserted into fs_data table = 0
 TOTAL rows FAILED to insert into table = 4
 END TRANSACtION
 CLOSE shc_data
 FINISH

=================================================================
==336368==ERROR: LeakSanitizer: detected memory leaks

Direct leak of 640 byte(s) in 4 object(s) allocated from:
    #0 0xf7a1d0bb in __interceptor_malloc (/nix/store/3qz5qcx23d522i1fivjgh2nw0bs0pywf-gcc-10.3.0-lib/lib/libasan.so.6+0xaf0bb)
    #1 0xf792b7f3 in PQmakeEmptyPGresult (/nix/store/0rh8jnm4xgav5916nasd1d2ciy4nccsp-postgresql-13.7-lib/lib/libpq.so.5+0x117f3)

Indirect leak of 8192 byte(s) in 4 object(s) allocated from:
    #0 0xf7a1d0bb in __interceptor_malloc (/nix/store/3qz5qcx23d522i1fivjgh2nw0bs0pywf-gcc-10.3.0-lib/lib/libasan.so.6+0xaf0bb)
    #1 0xf792b3bc in pqResultAlloc (/nix/store/0rh8jnm4xgav5916nasd1d2ciy4nccsp-postgresql-13.7-lib/lib/libpq.so.5+0x113bc)

SUMMARY: AddressSanitizer: 8832 byte(s) leaked in 8 allocation(s).

Insert function:

int postgreSQL_insert(PGconn *conn, int fa, int ft, int ord, unsigned char *data, int blk_size, char *recid, char *xaddr)
{
PGresult  *res;
int rc = 0;
int rc2;
char *ErrorMsg;
char cur_recid[5] = "\0\0\0\0\0";
int ft3;
int ord3;
char Command[250];

uint32_t fa2 = htonl(fa);
uint32_t ft2 = htonl(ft);
uint32_t ord2 = htonl(ord);
uint32_t blk_size2 = htonl(blk_size);
uint32_t xaddr_size = strlen(xaddr);
uint32_t recid_size = strlen(recid);

char *fa_val  = (char *)&fa2;
char *ft_val  = (char *)&ft2;
char *ord_val = (char *)&ord2;
char *blk_size_val = (char *)&blk_size2;

int   nParams   = 7;
//char *command   = "INSERT INTO fs_data (file_address,
face_type, ordinal, xaddr, recid, blk_size, data) VALUES ($1, $2, $3, $4,
$5, $6, $7)";
//char *command   = "INSERT INTO fs_data (file_address,
face_type, ordinal, xaddr, recid, blk_size, data) VALUES ($1, $2, $3, $4,
$5, $6, $7) ON CONFLICT (file_address) DO NOTHING";
char *command   = "*INSERT INTO fs_data (file_address,
face_type, ordinal, xaddr, recid, blk_size, data) VALUES ($1, $2, $3, $4,
$5, $6, $7) ON CONFLICT (file_address) DO NOTHING RETURNING file_address";*
Oid   paramTypes[7] = {23, 23, 23, 25, 25, 23, 17};  // {int, int, int, text, text, int, bytea}
const char *paramValues[7]  = {fa_val, ft_val, ord_val, xaddr, recid, blk_size_val, data};
int   paramLengths[7]   = {sizeof(fa2), sizeof(ft2), sizeof(ord2), xaddr_size, recid_size, sizeof(blk_size2), blk_size};
int   paramFormats[7]   = {1, 1, 1, 1, 1, 1, 1};  // identify all parameters as binary
int   resultFormat  = 1;  // result will be in binary format

if(init_only == false)
{
rc = postgreSQL_delete(conn, fa);
}

if ( sql_debug ) { fprintf(stderr,"PostgreSQL_insert INPUT fa %i ft %i ord
%i xaddr %s id %s blk_size %i\n",fa,ft,ord,xaddr,recid,blk_size); }
/* INSERT row in table fs_data */

if ( sql_debug ) { fprintf(stderr,"Command : (%s)\n", command); }
if ( sql_debug ) { fprintf(stderr,"Param. 0: faddr   (%i)\n", fa); }
if ( sql_debug ) { fprintf(stderr,"Param. 1: face_type   (%i)\n", ft); }
if ( sql_debug ) { fprintf(stderr,"Param. 2: ordinal (%i)\n", ord); }
if ( sql_debug ) { fprintf(stderr,"Param. 3: xaddr   (%s)\n", xaddr); }
if ( sql_debug ) { fprintf(stderr,"Param. 4: recid   (%s)\n", recid); }
if ( sql_debug ) { fprintf(stderr,"Param. 5: blk_size(%i)\n",
blk_size); }
res = PQexecParams(conn, // shc_data database connection
  command, // "INSERT INTO fs_data ...)";
  nParams, // parameter count
  paramTypes, // integer,  integer,   integer, bytea
  paramValues, // file_address, face_type, ordinal, data
  paramLengths, // Length of each parameter
  paramFormats, // all parameters are binary format
  resultFormat); // result is binary


//fprintf(stderr," res = %s\n",PQresStatus(PQresultStatus(res))); // show
the value of res

if ( ( PQresultStatus(res) == PGRES_TUPLES_OK)
 && (PQntuples(res) == 0) ) // if tuples == 0, we hit the ON CONFLICT, so report the ERROR
{
rc = 1;
fprin

Re: Assistance needed for the resolution of a memory leak

2023-10-11 Thread Tom Lane
Sasmit Utkarsh  writes:
> For example: I have provided the test run for a file which has 4 duplicate
> entries in the input file, and below is the leak reported. I have
> consistently used PQclear to free all PGresult objects.

No you haven't: you've got two PQexec-something calls and
only one PQclear.  If control reaches here:

> res = PQexec(conn, Command);

that will overwrite your only pointer to the PQexecParams
result, and you won't clear it.

BTW, if this is what your code actually looks like in your editor,
it's no wonder you can't follow its basic control flow.  Indent
according to the brace structure, and your life will get easier.

regards, tom lane
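
A compact sketch of the fix Tom describes, with hypothetical helper and
statement names; the point is that every PGresult gets exactly one PQclear,
even when the pointer is reused for a second statement:

#include <libpq-fe.h>

/* Hypothetical helper: run an INSERT ... RETURNING, detect the ON CONFLICT
 * case (zero tuples returned), and run a follow-up statement.  Each
 * PGresult is cleared exactly once. */
static int
run_insert_with_fallback(PGconn *conn, const char *insert_sql,
                         const char *fallback_sql)
{
    PGresult *res = PQexec(conn, insert_sql);
    int hit_conflict = (PQresultStatus(res) == PGRES_TUPLES_OK &&
                        PQntuples(res) == 0);
    PQclear(res);                     /* free the first result... */
    if (hit_conflict)
    {
        res = PQexec(conn, fallback_sql);
        PQclear(res);                 /* ...and the second one too */
    }
    return hit_conflict;
}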




Re: Assistance needed for the resolution of a memory leak

2023-10-11 Thread Sasmit Utkarsh
Thanks Tom, it did work. And yes, I do have indentation in my editor; there
was some formatting issue due to the copy and paste of the code.
:)

Regards,
Sasmit Utkarsh
+91-7674022625




Re: Assistance needed for the resolution of a memory leak

2023-10-11 Thread Ron

That's why you /attach/ files, not paste text.




--
Born in Arizona, moved to Babylonia.

Re: A question about possible recovery inconsistency

2023-10-11 Thread Ron


After that pg_basebackup, do this to get a read/write copy of the database:
pg_ctl start
pg_ctl promote

--
Born in Arizona, moved to Babylonia.

logs are not getting logged in error logs

2023-10-11 Thread Atul Kumar
Hi,

I have Postgres 12 running on CentOS 7.

I have configured streaming replication with one primary and one standby.

It is strange to see that logs are not getting generated on the standby.

I have checked the log directory using the SHOW command and it shows the
correct output.

I intentionally created a few errors on the standby, and those do get
logged.

But apart from these error entries I can't see any additional logs on the
standby.

Please help me understand what exactly is happening and how I can
troubleshoot further.



Regards.


Re: logs are not getting logged in error logs

2023-10-11 Thread Adrian Klaver



Read this:

https://www.postgresql.org/docs/current/runtime-config-logging.html

and then see what is set in the standby postgresql.conf.
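
A quick way to check which logging settings are actually in effect on the
standby:

SELECT name, setting, source
FROM pg_settings
WHERE name LIKE 'log%';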







--
Adrian Klaver
adrian.kla...@aklaver.com





Change error code severity for syslog?

2023-10-11 Thread Abhishek Bhola
Hi

The Postgres errors when sent to syslog have their severity levels
translated as follows:
https://www.postgresql.org/docs/current/runtime-config-logging.html#RUNTIME-CONFIG-SEVERITY-LEVELS

I have a system which raises an alert every time syslog receives an ERROR
(or higher severity level).

For most of the Postgres errors, translating them to WARNING level in syslog
works; however, I want error codes XX000/1/2 in Postgres to be translated
to ERROR in syslog as well, so that they trigger the alert system and I
can be notified.

Is there any way to change the severity level for these specific error
codes in Postgres?

Thanks



Re: Change error code severity for syslog?

2023-10-11 Thread Michael Paquier
On Thu, Oct 12, 2023 at 09:42:47AM +0900, Abhishek Bhola wrote:
> For most of the Postgres errors, translating them to WARNING level in syslog
> works; however, I want error codes XX000/1/2 in Postgres to be translated
> to ERROR in syslog as well, so that they trigger the alert system and I
> can be notified.
> 
> Is there any way to change the severity level for these specific error
> codes in Postgres?

You could, hypothetically, use the elog hook to switch dynamically the
error data reports to something else depending on the types of filters
you would like to apply to them.  I have an example of this kind of
idea in one of my plugin modules here:
https://github.com/michaelpq/pg_plugins/tree/main/jsonlog
--
Michael
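
For reference, a bare-bones skeleton of such a hook as an extension module;
this is an assumption-laden sketch rather than a drop-in solution, and the
filtering policy in the middle is entirely up to you:

#include "postgres.h"
#include "fmgr.h"
#include "utils/elog.h"

PG_MODULE_MAGIC;

static emit_log_hook_type prev_emit_log_hook = NULL;

/* Inspect every log report; internal errors (XX000/XX001/XX002) can be
 * detected here and, e.g., tagged or handed to a custom alerting path
 * before the report reaches the server log / syslog. */
static void
filter_log_hook(ErrorData *edata)
{
    if (edata->elevel == ERROR &&
        (edata->sqlerrcode == ERRCODE_INTERNAL_ERROR ||
         edata->sqlerrcode == ERRCODE_DATA_CORRUPTED ||
         edata->sqlerrcode == ERRCODE_INDEX_CORRUPTED))
    {
        /* custom handling goes here */
    }
    if (prev_emit_log_hook)
        prev_emit_log_hook(edata);
}

void
_PG_init(void)
{
    prev_emit_log_hook = emit_log_hook;
    emit_log_hook = filter_log_hook;
}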

