Re: Support for dates before 4713 BC

2022-08-23 Thread Simon Riggs
On Mon, 22 Aug 2022 at 11:14, stefan eichert  wrote:

> I can also fully support what Alex has written. I am an archaeologist at the 
> Natural History Museum Vienna and PostgreSQL is a perfect Open Source 
> software and we really love working with it for our archaeological and 
> (pre)historical projects.

We are very glad to hear that and I would be happy to help you further.


> The limitation of dates before 4713 BC however is a bit of a bottleneck and 
> we need to use certain workarounds, that, as Alex has written, are error 
> prone and cumbersome.
> The need for dates before 4713 has various reasons:
>
> For example if we have certain dates, like dendrochronological ones, that in 
> some cases can give you a certain time span in which a tree has been cut, 
> like in autumn 6000 BC (so lets say between March and beginning of July), 
> then we would like to map this information in the database with an earliest 
> and latest timestamp that would in that case be 6000BC, March 1st and 6000BC, 
> June 30th.
>
> Radiocarbon dates are similar, even if they only provide a timespan in a 
> format of years before present with a certain +/- range.
> They would be mapped again with an earliest date, e.g. 6322 BC, Jan. 1st and 
> a latest possible one: 6262 BC, Dec. 31st.
>
> In many other cases we are using years as starting point for periodisations, 
> that of course are arbitrary concepts, but still are needed for statistically 
> working with data and for various calculations.
>
> In order to deal with all dates, historical and prehistoric ones, in a 
> consistent way, the implementation of timestamps/dates for before 4713 BC 
> would be very helpful, as we really do have dates before 4713 BC we are 
> working with, that in some cases also have information on months respectively 
> days.

One possibility is to store dates as the INTERVAL datatype, using the
convention for Before Present, rather than worrying about BC/AD.

create table arch (i interval year);
insert into arch values ('-5000 years');
select * from arch;

      i
-------------
 -5000 years

The interval can also be restricted in the column specification, e.g. INTERVAL
YEAR TO MONTH, which would store only years and months:

CREATE TABLE arch (age INTERVAL YEAR TO MONTH);
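
For example, with negative intervals for Before Present and month precision
(a sketch only; the table name is changed here to avoid clashing with the one
above):

CREATE TABLE arch2 (age INTERVAL YEAR TO MONTH);
INSERT INTO arch2 VALUES ('-5000 years -3 months');
SELECT age FROM arch2;   -- -5000 years -3 mons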


Will that be sufficient, or do you need or want more?

-- 
Simon Riggs        http://www.EnterpriseDB.com/




Db log error

2022-08-23 Thread eswar reddy
Hello sir,

I am getting log error like below

FATAL: unsupported frontend protocol 1234.5680: server supports 1.0 to 3.0

I am using PostgreSQL 9.6.3 on Red Hat 7.3, with EFM 4.1.

The above error is repeated in the DB log file. How can I identify the issue
and fix it?

Could you please suggest what the problem is and how to solve it?

Thanks

Eswar


Re: Support for dates before 4713 BC

2022-08-23 Thread Peter J. Holzer
On 2022-08-23 10:25:12 +0100, Simon Riggs wrote:
> On Mon, 22 Aug 2022 at 11:14, stefan eichert  wrote:
> > In order to deal with all dates, historical and prehistoric ones, in
> > a consistent way, the implementation of timestamps/dates for before
> > 4713 BC would be very helpful, as we really do have dates before
> > 4713 BC we are working with, that in some cases also have
> > information on months respectively days.
> 
> One possibility is to store dates as the INTERVAL datatype, using the
> convention for Before Present, rather than worrying about BC/AD.
> 
> create table arch (i interval year);
> insert into arch values ('-5000 years');
> select * from arch;
> 
>       i
> -------------
>  -5000 years

[Disclaimer: I am not an archeologist]

I think this works well if you know an approximate age. If something is
about 5000 years old now, it will still be about 5000 years old next
year and even in ten years.

But it breaks down if you already have a relatively precise date.

4980 years before now probably should be 4981 years before now next year,
and definitely 4990 years before now in ten years. So you would have to
continuously update those values.

You can get around this by defining an epoch (e.g. all relative dates
are relative to 2000 CE). So something which is 4980 years old now would
be stored as 4958 years before 2000.

(Of course this now starts to look awfully like time_t (seconds
before/after 1970). Store that in a float8 and you can probably cover
the age of the universe with sufficient precision.)

In any case you probably want to convert to regular dates for display,
and you would run into PostgreSQL's limitations if you use PostgreSQL's
DATE type for that (you may be fine if you do it in the application,
depending on the programming language).
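
To make that concrete, a minimal sketch (table and column names are invented
here; the fixed reference year 2000 CE follows the suggestion above):

create table finds (
  id                 serial primary key,
  years_before_2000  integer not null  -- e.g. 8000, i.e. the year 6001 BC (there is no year 0)
);

insert into finds (years_before_2000) values (8000);

-- Convert to a readable label for display without touching the DATE type.
-- The astronomical year is 2000 - years_before_2000; a value y <= 0 is year (1 - y) BC.
select id,
       case
         when 2000 - years_before_2000 >= 1
           then format('%s CE', 2000 - years_before_2000)
         else format('%s BC', 1 - (2000 - years_before_2000))
       end as approx_year
from finds;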

I'm also not sure whether one data type is sufficient for archeological
data. For example, what do you store if you know that something happened
in spring (because you found pollen or whatever), but the year has an
uncertainty of +/- 50 years?

I guess to really store "what do I know about when something happened"
you would have to be able to store a number of constraints (like
"between year x and y", "at least d years after event e", "between month
m and n", etc.)
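
Purely as a sketch of that idea (all names invented here), such constraints
could be kept as rows and combined per find at query time:

create table dating_constraints (
  find_id      int  not null,   -- the find this constraint applies to
  kind         text not null,   -- e.g. 'year-range', 'month-range', 'after-event'
  year_from    int,             -- astronomical year numbering, nullable
  year_to      int,
  month_from   int check (month_from between 1 and 12),
  month_to     int check (month_to between 1 and 12),
  after_event  text             -- event identifier for 'after-event' constraints
);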

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"




Re: Db log error

2022-08-23 Thread Tom Lane
eswar reddy  writes:
> FATAL: unsupported frontend protocol 1234.5680: server supports 1.0 to 3.0

This is from a client library trying to use GSSAPI encryption, which that
old server doesn't understand.  You could ignore these messages,
or you could update to a Postgres version that's not EOL.

regards, tom lane




Re: Support for dates before 4713 BC

2022-08-23 Thread Erik Wienhold


> On 23/08/2022 14:38 CEST Peter J. Holzer  wrote:
> 
>  
> On 2022-08-23 10:25:12 +0100, Simon Riggs wrote:
> > On Mon, 22 Aug 2022 at 11:14, stefan eichert  
> > wrote:
> > > In order to deal with all dates, historical and prehistoric ones, in
> > > a consistent way, the implementation of timestamps/dates for before
> > > 4713 BC would be very helpful, as we really do have dates before
> > > 4713 BC we are working with, that in some cases also have
> > > information on months respectively days.
> > 
> > One possibility is to store dates as the INTERVAL datatype, using the
> > convention for Before Present, rather than worrying about BC/AD.
> > 
> > create table arch (i interval year);
> > insert into arch values ('-5000 years');
> > select * from arch;
> > 
> >       i
> > -------------
> >  -5000 years
> 
> [Disclaimer: I am not an archeologist]

Me neither ;)

> I think this works well if you know an approximate age. If something is
> about 5000 years old now, it will still be about 5000 years old next
> year and even in ten years.
> 
> But it breaks down if you already have a relatively precise date.
> 
> 4980 years before now probably should be 4981 years before now next year
> and definitely 4990 years before now in ten years. So you would have to
> continuously update those values.

It would be easier to store the estimated age together with the assessment
date, so the age doesn't have to be updated until it is reassessed.  The time
from the assessment until now can then be handled as you described: either the
age is still considered roughly the same, or it increases as the time since the
assessment grows.
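
A minimal sketch of that idea (names invented here):

create table samples (
  id           serial primary key,
  est_age      interval not null,   -- e.g. '5000 years'
  assessed_on  date     not null    -- when the estimate was made
);

insert into samples (est_age, assessed_on) values ('5000 years', date '2022-08-23');

-- Age as of today = stored estimate + time elapsed since the assessment.
select id,
       est_age + (current_date - assessed_on) * interval '1 day' as age_now
from samples;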

--
Erik




RE: Unable to start replica after failover

2022-08-23 Thread Lahnov, Igor
We know what the problem is, but don't know how to solve it correctly.

After failover, the *new leader* was promoted and read the local partial WAL up
to LSN 54E/FB348118

->
2022-05-23 01:47:52.124 [12088] LOG: record with incorrect prev-link 0/100 at 
54E/FB348118
2022-05-23 01:47:52.124 [12088] LOG: redo done at 54E/FB3480F0 system usage: 
CPU: user: 321.21 s, system: 144.77 s, elapsed: 354977.44 s
rmgr: XLOG len (rec/tot): 42/ 42, tx: 0, lsn: 54E/FB348118, prev 54E/FB3480F0, 
desc: END_OF_RECOVERY tli 3; prev tli 2; time 2022-05-23 01:47:53.342100 
2022-05-23 01:47:52.900 [12088] LOG: selected new timeline ID: 3
2022-05-23 01:47:53.658 [12084] LOG: database system is ready to accept 
connections
<-

The *previous leader* started after the failure and read the local partial WAL
up to LSN 54E/FB348150

->
2022-05-23 01:50:12.945 [122891] LOG: redo starts at 54E/F97F4878
2022-05-23 01:50:13.010 [122891] LOG: invalid record length at 54E/FB348150: 
wanted 24, got 0
2022-05-23 01:50:13.011 [122891] LOG: consistent recovery state reached at 
54E/FB348150
2022-05-23 01:50:13.012 [122883] LOG: database system is ready to accept 
read-only connections
<-

Our 'restore_command' on the *previous leader* restores a partial file from the
archive (from the *new leader*):

->
2022-05-23 01:50:14 [123730]: [1-1]: INFO: pg_probackup archive-get WAL file: 
0002054E00FB, remote: ssh, threads: 1/1, batch: 1
<-

This file takes precedence over the original local partial file on the
*previous leader*, and this leads to the problem:

---
2022-05-23 01:50:14.448 [122891] LOG: new timeline 3 forked off current 
database system timeline 2 before current recovery point 54E/FB348150
---

pg_rewind then tries to use this file (from the *new leader*) to build a map of
pages that have changed since the last checkpoint:

---
2022-05-23 01:51:32,202 INFO: Lock owner: pg51; I am pg01
2022-05-23 01:51:32,209 INFO: Local timeline=2 lsn=54E/FB348150
2022-05-23 01:51:32,252 INFO: master_timeline=3
2022-05-23 01:51:32,254 INFO: master: history=1 53E/FD65D298 no recovery target 
specified
2 54E/FB348118 no recovery target specified
2022-05-23 01:51:32,499 INFO: running pg_rewind from pg51
2022-05-23 01:51:32,574 INFO: running pg_rewind from dbname=postgres 
user=postgres host= IP port=5432 target_session_attrs=read-write
2022-05-23 01:51:32,640 INFO: pg_rewind exit code=1
2022-05-23 01:51:32,640 INFO: stdout=
2022-05-23 01:51:32,640 INFO: stderr=pg_rewind: servers diverged at WAL 
location 54E/FB348118 on timeline 2
pg_rewind: fatal: could not find previous WAL record at 54E/FB348118: 
unexpected pageaddr 54E/7B34A000 in log segment 0002054E00FB, 
offset 3448832
---

We checked this WAL file (from the *new leader*). When reading it forward, the
output is:
---
record with incorrect prev-link 0/100 at 54E/FB348118

When reading it backward:
---
54E/FB348118: unexpected pageaddr 54E/7B34A000 in log segment 
0002054E00FB, offset 3448832
---

For now we have disabled restore_command, but I don't think that is the best
solution. What is the correct solution in this case?




Possible values of DATESTYLE / PGDATESTYLE

2022-08-23 Thread Sebastien Flaesch
Hi!

I can find possible values for the DATESTYLE parameter in an old 7.2 doc page:
https://www.postgresql.org/docs/7.2/sql-set.html

But when switching to the V14 docs, the values are no longer listed on the
equivalent page...

Where can I find the list of possible values for this DATESTYLE parameter, for 
V14?

BR,
Seb



Re: Possible values of DATESTYLE / PGDATESTYLE

2022-08-23 Thread Christoph Moench-Tegeder
## Sebastien Flaesch (sebastien.flae...@4js.com):

> Where can I find the list of possible values for this DATESTYLE parameter, 
> for V14?

https://www.postgresql.org/docs/14/runtime-config-client.html#GUC-DATESTYLE
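
In short (see that page for details): the value combines an output style of
ISO, Postgres, SQL, or German with a field ordering of DMY, MDY, or YMD, e.g.:

SET datestyle TO 'ISO, DMY';
SHOW datestyle;             -- ISO, DMY
SELECT date '23/08/2022';   -- parsed day-first because of DMY, shown as 2022-08-23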

Regards,
Christoph

-- 
Spare Space




Re: Possible values of DATESTYLE / PGDATESTYLE

2022-08-23 Thread Tom Lane
Sebastien Flaesch  writes:
> I can find possible values for the DATESTYLE parameter in an old 7.2 doc page:
> https://www.postgresql.org/docs/7.2/sql-set.html
> But when switching to V14 there are no more values listed in the equivalent 
> page...

All such info got moved to the per-setting documentation in chapter 20:

https://www.postgresql.org/docs/current/runtime-config-client.html#RUNTIME-CONFIG-CLIENT-FORMAT

regards, tom lane




Corrupted Postgresql Microsoft Binaries

2022-08-23 Thread Hillary Masha
Hello,

I downloaded the microsoft postgresql binaries for versions 14.5, 13.8,
12.12, 11.17, 10.22 from
https://www.enterprisedb.com/download-postgresql-binaries and found that
there was an error with opening the zip files. Does anyone else use these
files and found that they ran into the same issue? If so, where else can I
get the files?


Re: Corrupted Postgresql Microsoft Binaries

2022-08-23 Thread Adrian Klaver

On 8/23/22 11:58, Hillary Masha wrote:

> Hello,
>
> I downloaded the microsoft postgresql binaries for versions 14.5, 13.8,
> 12.12, 11.17, 10.22 from
> https://www.enterprisedb.com/download-postgresql-binaries and found
> that there was an error with opening the zip files. Does anyone else use

I could unzip the file on Linux with no errors.

What OS are you on, Windows or MacOS, and what version?

How did you download?

What software are you using to unzip the file?

> these files and found that they ran into the same issue? If so, where
> else can I get the files?

There are the installers:

https://www.enterprisedb.com/downloads/postgres-postgresql-downloads


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Setting up a server with previous day data

2022-08-23 Thread Stephen Frost
Greetings,

* Srinivasa T N (seen...@gmail.com) wrote:
>I have a primary postgresql 12 server which is being continuously used
> for transaction processing.  For reporting purposes, I want to set up a
> secondary server which has got previous day data.  Everyday night, I want
> the data from primary to be shifted to secondary.  I can achieve this
> manually using pg_basebackup on primary and pg_restore on secondary.  Is
> there any other automated efficient way to achieve the same?  Any relevant
> docs would be helpful.

You might consider checking out pgbackrest and the incremental backup
and delta restore options that it has.  Incremental backups will only
copy files from the PG server that have changed since the last backup,
and the delta restore option will only update the files that are
different between the backup and the files that are in place.
pgbackrest is also able to parallelize these operations.

(this use-case, more-or-less, was part of the original reason pgbackrest
was developed, btw)

Thanks,

Stephen




Question regarding failover behavior

2022-08-23 Thread Koen De Groote
Hello all,

I have a system that was originally set up on 9.3 and moved to 11.2 a few
years ago.

A shared mount is used to host the basebackup and wal archives.

The failover procedure was basically manual, as follows:

1/ Take out the primary, remove the IP from the primary machine/VM
2/ Create the trigger_file on the standby, add the IP to the new primary
machine/VM
3/ Create a basebackup from the new primary
4/ Redeploy the new standby, which will unpack this basebackup from step 3
5/ The new standby will have a restore_command to get synced back up.

I see there's more recent tools like pg_rewind these days. Haven't had time
to check that out and probably won't for a while.

My question is: is the basebackup actually necessary? Can I not simply
modify the configuration of the old primary, set it up like a standby,
start the process/container, and have it pick up the necessary
WAL archives with the restore_command?

Or is it more complicated than that? Something like a timeline jump or
something else that makes it impossible to simply set the old primary to
standby and start it again?

Kind regards,
Koen De Groote


Re: Surprising results from current_role in a "security invoker" trigger function in a "cascade delete via FK" scenario

2022-08-23 Thread Bryn Llewellyn
> b...@yugabyte.com wrote:
> 
>> david.g.johns...@gmail.com wrote:
>> 
>> You are correct that the behavior here is not explicitly documented [where] 
>> I would expect to find it.
>> 
>> My expectation is that the trigger owner is the context in which the trigger 
>> function is executed. Table owners can grant the ability to other roles to 
>> create triggers on the owned tables should the trigger owner and table owner 
>> require different roles. Triggers are like views in this regard.
>> 
>> I don't expect cascade update/cascade delete to be a factor here at all, 
>> other than making the process easier to perform. This extends from the prior 
>> expectation.
>> 
>> I expect [all this] not because I can point to a place where all that is 
>> said.
> 
> Good. We're converging. Thanks, David… So this is what I'll do:
> 
> I'll create a placeholder GitHub issue in "yugabyte/yugabyte-db" and send you 
> the URL. Anybody can access this repo, read the account of the issues, and 
> download an attached .zip of a testcase. I'll mention in my account that the 
> behavior that I observe in YugabyteDB reproduces exactly in PG 14.4, that the 
> YugabyteDB issue is filed for tracking purposes, and that I'll update the 
> account with more information from the PG folks in due course.

I've submitted the promised GitHub issue. It has an attached .zip of a 
self-contained, one-touch testcase. Here it is:

https://github.com/yugabyte/yugabyte-db/issues/13736 


Everything points to a bug in that, no matter what the intended behavior is, 
the same rules should apply in all scenarios—but they don't. Further, I agree 
with David's:

>> I don't expect cascade update/cascade delete to be a factor here at all.


My testcase tries a gazillion variations (including what happens with a 
function-based constraint and when DML to one table "cascades" to another 
because of trigger action). After all, in the presence of one bug where 
something unexpected happens in one scenario, who knows what unexpected 
outcomes might be seen in other comparable scenarios?

Here's my summary of my findings, copied from the issues report:

«
It very much does seem that the intended rule is this:

- The value that current_role will report in a security invoker trigger 
function for a DML trigger is the role that does the DML to the table that 
fires the trigger.

It seems, too, that this testcase has revealed a long-standing bug—present at 
least as far back as PostgreSQL Version 11:

- The value that current_role will report in a security invoker trigger 
function for a DML trigger on a "details" table will be the owner of that table 
when its rows are cascade-deleted as a consequence of deleting its parent 
"masters" row. However, this buggy outcome is seen only for "before delete" 
triggers, both at "statement" level and at "row" level.

- The bug has an obvious downstream consequence: any operations on other tables 
that are done by such a trigger function that sees the wrong current_role will 
be executed by that wrong role—and so on down the line.
»

W.r.t. David's 

>> My expectation is that the trigger owner is the context in which the trigger 
>> function is executed.


This can't be right because a trigger doesn't have an owner. You can see this 
from the "pg_trigger" table. It has its own "oid" and a "tgrelid". But it has 
no FK to "pg_roles" or to "pg_namespace". In other words, a trigger isn't a 
first-class schema object. Rather, from the user's P.o.V., it’s uniquely 
identified by the table to which it's attached. In this sense, it's like a 
constraint (and especially like one that's based on a function). Each of these 
two, trigger and function-based-constraint, is a device that associates a 
"call-back" function with a table so that the function is invoked in response 
to DML to the table.

The "pg_constraint" table, too, has no FK to "pg_roles". Mysteriously, though, 
it does have a "connamespace" column. Presumably this is a denormalization such 
that its value is always equal to "relnamespace" in "pg_class" for the table to 
which the constraint is attached.

It seems to me, therefore, that the role that creates the trigger is out of the 
picture once the trigger has been created. (There's no analogy here for a 
constraint because there's no "grant alter" on a table to correspond to "grant 
trigger" on a table.)

It seems, too, that the owner of the trigger function (and of the constraint 
function) is out of the picture at run-time (when these are "security invoker") 
for determining the value that "current_role" in such a function will report.

In other words, and as I see it, there are only two candidate answers: the role 
that does the DML that causes the function to be invoked; and the role that 
owns the table—DML to which causes the function to be invoked.

In most cases, current_role here shows who does the DML. But in those two rare 
corner cases that my testcase

Re: Setting up a server with previous day data

2022-08-23 Thread Ron

On 7/19/22 02:22, Srinivasa T N wrote:

> Hi All,
>    I have a primary postgresql 12 server which is being continuously used
> for transaction processing.  For reporting purposes, I want to set up a
> secondary server which has got previous day data.  Everyday night, I want
> the data from primary to be shifted to secondary.  I can achieve this
> manually using pg_basebackup on primary and pg_restore on secondary.  Is
> there any other automated efficient way to achieve the same?  Any relevant
> docs would be helpful.


We populated a "reporting" database from the OLTP database (not PostgreSQL) 
by creating "X_log1" and "X_log2" tables which had the same columns as the 
relevant "main" tables, plus an ACTION_CODE with values "I", "U" or "D", and 
a datetime field which defaulted to CURRENT_TIMESTAMP.


ON INSERT, ON UPDATE and ON DELETE triggers were added to the "main" tables 
which inserted into X_log2 on even days, and into X_log1 on odd days.


Soon after midnight, a cron job dumped "yesterday's" _log table, loaded it 
into the reporting table, and then truncated the _log table.
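
In PostgreSQL terms, a minimal sketch of that trigger-to-log-table pattern could
look like this (names invented; simplified to a single log table instead of the
odd/even pair):

create table main_t (
  id       int primary key,
  payload  text
);

create table main_t_log (
  id           int,
  payload      text,
  action_code  char(1) not null check (action_code in ('I', 'U', 'D')),
  logged_at    timestamptz not null default current_timestamp
);

create function main_t_audit() returns trigger
  language plpgsql
as $$
begin
  if tg_op = 'DELETE' then
    insert into main_t_log(id, payload, action_code) values (old.id, old.payload, 'D');
    return old;
  else
    insert into main_t_log(id, payload, action_code)
      values (new.id, new.payload, case tg_op when 'INSERT' then 'I' else 'U' end);
    return new;
  end if;
end;
$$;

create trigger main_t_audit_trg
  after insert or update or delete on main_t
  for each row execute function main_t_audit();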


That was before someone developed a utility to convert the roll-forward logs 
into INSERT, UPDATE and DELETE statements.


Such a utility for PostgreSQL that would convert yesterday's WAL files into 
SQL would /really/ solve your problem.


--
Angular momentum makes the world go 'round.

Re: Surprising results from current_role in a "security invoker" trigger function in a "cascade delete via FK" scenario

2022-08-23 Thread David G. Johnston
On Tue, Aug 23, 2022 at 5:12 PM Bryn Llewellyn  wrote:

> *What do you all think?*
>
>
That bug reports should be sent to the -bugs mailing list with a succinct
test case demonstrating the bug.

David J.


Re: Surprising results from current_role in a "security invoker" trigger function in a "cascade delete via FK" scenario

2022-08-23 Thread Bryn Llewellyn
> david.g.johns...@gmail.com wrote:
>
>> b...@yugabyte.com wrote:
>>
>> What do you all think?
>
> That bug reports should be sent to the -bugs mailing list with a succinct
> test case demonstrating the bug.

I explained that the bug doesn't allow a short testcase because there are many
degrees of freedom and you don't know, before trying them all, what pattern will
emerge.

It would be different if I could read a clear statement of expected behavior.
But you've already said that there is none.

The account of my GitHub issue includes a preliminary test that shows that
there's something to investigate further. I copied it below. Please tell me if
it meets your succinctness criteria. If it does, then I'll submit it to the
psql-bugs list as you suggest.

Notice, though, that, counting from « the testcase proper », it's ~70 lines long.

Moreover, it uses five different non-super users, all of whom can use and create
in a schema whose name doesn't matter but that must be first in each user's
search_path. That prelude (« drop and re-create the "bryn" database and the five
users that the testcase needs ») is ~45 lines long.

It produces this output:

current_role, table, operation, v: client, masters, INSERT, Mary
current_role, table, operation, v: client, details, INSERT, shampoo
current_role, table, operation, v: client, details, INSERT, soap
current_role, table, operation, v: client, details, DELETE, soap
current_role, table, operation, v: client, masters, DELETE, Mary
current_role, table, operation, v: d_owner, details, DELETE, shampoo

I've attached it as "t.zip". It unzips to the single file "t.sql". This can be
run and re-run time and again. I've proved to myself (again) that I can send
from my "b...@yugabyte.com" to other users with various email domains (like
"icloud.com" and "gmail.com"). But I've done nothing to try to solve why my
attachments don't make it to "pgsql-general@lists.postgresql.org". Please tell
me if you get it at your "gmail.com" address.
-- DROP AND RE-CREATE THE "bryn" DATABASE AND THE FIVE USERS THAT THE TESTCASE NEEDS.
\c postgres postgres
set client_min_messages = warning;
drop database if exists bryn;
create database bryn owner postgres;

\c bryn postgres
set client_min_messages = warning;
revoke all on database bryn from public;
drop schema public cascade;
create schema s authorization postgres;

create procedure s.create_role(name in text, can_create in boolean = false)
  security invoker
  set search_path = pg_catalog, pg_temp
  language plpgsql
as $body$
begin
  begin
    execute format('drop owned by %I cascade', name);
  exception
    when undefined_object then null;
  end;
  execute format('drop role if exists %I', name);
  execute format('create role %I login password ''p''', name);
  execute format('grant connect on database bryn to %I', name);
  execute format('grant usage on schema s to %I', name);
  execute format('alter user %I set search_path = s, pg_catalog, pg_temp', name);
  if can_create then
    execute format('grant create on database bryn to %I', name);
    execute format('grant create on schema s to %I', name);
  end if;
end;
$body$;

call s.create_role('m_owner',        true);
call s.create_role('d_owner',        true);
call s.create_role('trg_functions',  true);
call s.create_role('trg_creator',    true);
call s.create_role('client'              );

-- THE TESTCASE PROPER.
\c bryn m_owner
create table masters(
  mk  serial primary key,
  v   text not null unique);
grant all on table     masters         to public;
grant all on sequence  masters_mk_seq  to public;

\c bryn d_owner
create table details(
  mk  int,
  dk  serial,
  v   text not null unique,
  constraint details_pk primary key(mk, dk),
  constraint details_fk foreign key(mk)
    references masters(mk)
    on delete cascade);
grant all on table     details         to public;
grant all on sequence  details_dk_seq  to public;

\c bryn trg_functions
create function trg_fn()
  returns trigger
  security invoker
  set search_path = s, pg_catalog, pg_temp
  language plpgsql
as $body$
declare
  vv constant text not null :=
    case tg_op
      when 'INSERT' then new.v
      when 'DELETE' then old.v
    end;
begin
  raise info 'current_role, table, operation, v: %, %, %, %', current_role, tg_table_name, tg_op, vv;
  return case tg_op
    when 'INSERT' then new
    when 'DELETE' then old
  end;
end;
$body$;
grant all on function trg_fn() to public;

\c bryn trg_creator
create trigger masters_trg
  before insert or delete
  on masters
  for each row
  execute function trg_fn();

create trigger details_trg
  before insert or delete
  on details
  for each row
  execute function trg_fn();

\c bryn client
do $body$
declare
  new_mk int not null := 0;
begin
  insert into masters(v) values('Mary') returning mk into new_mk;
  insert into details(mk, v) values(new_mk, 'shampoo');
  insert into details(mk, v) values(new_mk, 'soap');
end;
$body$;

delete from details where v = 'soap';
delete from masters

Re: Surprising results from current_role in a "security invoker" trigger function in a "cascade delete via FK" scenario

2022-08-23 Thread David G. Johnston
On Tue, Aug 23, 2022 at 7:32 PM Bryn Llewellyn  wrote:

>
> The account of my GitHub issue includes a preliminary test that shows that
> there's something to investigate further. I copied it below. Please tell me
> if it meets your succinctness criteria. If it does, then I'll submit it to
> the psql-bugs list as you suggest.
>
>
Fair point, go ahead and just post to -bugs because I'm not going to be
diving that deeply into this right now.

David J.


Re: recovery_command has precedence over phisical slots?

2022-08-23 Thread Kyotaro Horiguchi
At Fri, 19 Aug 2022 18:37:53 +0200, Laurenz Albe  
wrote in 
> On Fri, 2022-08-19 at 16:54 +0200, Giovanni Biscontini wrote:
> > Hello everyone, 
> > I'm experiencing a behaviour I don't really understand if is a 
> > misconfiguration or a wanted behaviour:
> > 1) I set up a primary server (a.k.a. db1) with an archive_command to a 
> > storage
> > 2) I set up a replica (a.k.a. db2) that created a slot named as slot_2 and 
> > that has the recovery_command set to read archived wal on the storage.
> > If I shut down replica db2 during a pgbench run I see the safe_wal_size queried 
> > from pg_replication_slots on the primary decrease to a certain amount but 
> > still in the max_slot_wal_keep_size window: even
> > if I restart the replica db2 before the slot_state changes to unreserved or 
> > lost I see that the replica gets needed wals from the storage using 
> > recovery_command but doesn't use slot on primary.
> > Only if I comment the recovery command on the .conf of the replica then it 
> > uses slot.
> > If this is a wanted behaviour I can't understand the need of slots on 
> > primary.
> 
> This is normal behavior and is no problem.
> 
> After the standby has caught up using "restore_command", it will connect to
> the primary as defined in "primary_conninfo" and stream WAL from there.

The reason that db2 ran recovery beyond the slot LSN is the db2's
restore_command (I guess) points to db1's archive.  If db2 had its own
archive directory or no archive (that is, restore_command is empty),
archive recovery stops at (approximately) the slot LSN and replication
will start from there (from the beginning of the segment, to be
exact).

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




Re: Unable to start replica after failover

2022-08-23 Thread Alexander Kukushkin
Hi,

On Tue, 23 Aug 2022 at 16:31, Lahnov, Igor  wrote:

>
> Our 'restore_command' on *previous leader* restores a partial file from
> archive (from *new leader*)
>
> ->
> 2022-05-23 01:50:14 [123730]: [1-1]: INFO: pg_probackup archive-get WAL
> file: 0002054E00FB, remote: ssh, threads: 1/1, batch: 1
> <-
>
>
Does it really restore the *partial* file? And why would it do that? The
old primary didn't have a chance to archive 0002054E00FB, the
new primary archived it as 0002054E00FB.partial after promotion.
Postgres is requesting the 0002054E00FB file, but somehow gets
a 0002054E00FB.partial instead. Why?
IMO, pg_probackup does something weird.

Regards,
--
Alexander Kukushkin


Re: Setting up a server with previous day data

2022-08-23 Thread Peter J. Holzer
On 2022-08-23 19:15:58 -0500, Ron wrote:
> That was before someone developed a utility to convert the roll-forward logs
> into INSERT, UPDATE and DELETE statements.
> 
> Such a utility for PostgreSQL that would convert yesterday's WAL files into 
> SQL
> would really solve your problem.

Isn't that what logical replication basically does?

I also think I've seen other tools parsing the WAL stream and doing
something useful with the results.
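
For example, the built-in logical decoding interface already turns the WAL
stream into readable row changes (a sketch assuming wal_level = logical and the
contrib test_decoding plugin; the slot name is arbitrary):

SELECT pg_create_logical_replication_slot('daily_changes', 'test_decoding');

-- ... after a day's worth of activity on the primary:
SELECT lsn, xid, data
FROM pg_logical_slot_get_changes('daily_changes', NULL, NULL);

SELECT pg_drop_replication_slot('daily_changes');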

hp

-- 
   _  | Peter J. Holzer| Story must make more sense than reality.
|_|_) ||
| |   | h...@hjp.at |-- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |   challenge!"

