Re: pgbouncer with ldap

2019-09-16 Thread Achilleas Mantzios

Please don't top-post.

On 14/9/19 9:41 a.m., Ayub M wrote:

Yes I did set that, here is how pgbouncer looks like ---

-rwsrwsr-x. 1 root root 2087504 Sep 13 00:45 pgbouncer




If you set the same password for the user in the PostgreSQL server and for the
local Unix user on the pgbouncer machine, it should work.
What are the contents of your /etc/pam.d files?
What do /etc/pam.d/other, /etc/pam.d/common-auth and /etc/pam.d/common-account
look like?
How about data/pg_hba.conf ?

Also, run your tests while following (tail -f):
* the pgbouncer log
* the Linux auth.log or equivalent
* the pgsql log
and watch them after every Enter you press.

You might have to tweak data/pg_hba.conf as well, so that it asks for an md5
password for this user when connecting from the pgbouncer machine.
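
For example, a minimal sketch of such a pg_hba.conf line (the client address
is a placeholder to adapt; the db and user names are taken from this thread):

    # allow testuser from the pgbouncer host, with an md5 password
    host    db1    testuser    10.0.0.5/32    md5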


On Fri, Sep 13, 2019 at 6:50 AM Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:

On 13/9/19 10:19 a.m., Ayub M wrote:

Stumbled at the first step - PAM authentication via pgbouncer. After
compiling pgbouncer with the PAM plug-in, I am unable to log in to the db - it
throws a PAM error message. Please help.

User created with the same password as linux user --
localhost:~$ psql -h dbhost -p 3306 -U admin -W db1
db1=> create user testuser password 'hello123';
CREATE ROLE

[ec2-user@ip-1.1.1.1  pam.d]$ psql -h localhost 
-p 5432 testdb -U testuser
Password for user testuser:
psql: ERROR:  auth failed


ok, pgbouncer should be able to read /etc/pam* files.
Did you miss the
# chown root:staff ~pgbouncer/pgbouncer-1.9.0/pgbouncer
# chmod +s ~pgbouncer/pgbouncer-1.9.0/pgbouncer
part?



Log entries - pgbouncer.log
2019-09-13 06:51:47.180 UTC [5752] LOG C-0x1243020: 
testdb/testuser@[::1]:52408 login attempt: db=testdb user=testuser tls=no
2019-09-13 06:51:47.180 UTC [5752] NOISE safe_send(12, 9) = 9
2019-09-13 06:51:47.180 UTC [5752] NOISE resync(12): done=86, parse=86, 
recv=86
2019-09-13 06:51:47.180 UTC [5752] NOISE resync(12): done=0, parse=0, recv=0
2019-09-13 06:51:47.180 UTC [5752] NOISE safe_recv(12, 4096) = 14
2019-09-13 06:51:47.180 UTC [5752] NOISE C-0x1243020: 
testdb/testuser@[::1]:52408 read pkt='p' len=14
2019-09-13 06:51:47.180 UTC [5752] DEBUG C-0x1243020: 
testdb/testuser@[::1]:52408 pam_auth_begin(): pam_first_taken_slot=1, 
pam_first_free_slot=1
2019-09-13 06:51:47.180 UTC [5752] DEBUG pam_auth_worker(): processing slot 
1
2019-09-13 06:51:47.180 UTC [5752] WARNING pam_authenticate() failed: 
Authentication failure
2019-09-13 06:51:47.181 UTC [5752] DEBUG pam_auth_worker(): authorization 
completed, status=3
2019-09-13 06:51:47.386 UTC [5752] LOG C-0x1243020: 
testdb/testuser@[::1]:52408 closing because: auth failed (age=0s)
2019-09-13 06:51:47.386 UTC [5752] WARNING C-0x1243020: 
testdb/testuser@[::1]:52408 pooler error: auth failed

Able to log in as testuser
[ec2-user@ip-1.1.1.1  pam.d]$ su - testuser
Password:
Last login: Fri Sep 13 06:21:12 UTC 2019 on pts/1
[testuser@ip-1.1.1.1  ~]$ id
uid=1001(testuser) gid=1001(testuser) groups=1001(testuser) 
context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

The user was created as follows
[root@ip-1.1.1.1  ~]# adduser -p hello123 testuser
[root@ip-1.1.1.1  ~]# id testuser
uid=1001(testuser) gid=1001(testuser) groups=1001(testuser)

Here is the pgbouncer.ini config
[ec2-user@ip-1.1.1.1  etc]$ less pgbouncer.ini 
| grep -v '^$' | grep -v '^;'
[databases]
testdb = host=dbhost port=3306 dbname=db1
[users]
[pgbouncer]
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
listen_addr = *
listen_port = 5432
auth_type = pam

Am I missing something? Any permissions?

On Thu, Sep 12, 2019 at 4:54 AM Ayub M <hia...@gmail.com> wrote:

Okay, thanks for the response. Unfortunately Aurora does not expose these
files; or, I should say, there is no concept of these files in the AWS-managed
Aurora DB service. Anyway, I will give it a try and let you know.

On Thu, Sep 12, 2019 at 1:52 AM Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:

On 11/9/19 2:47 p.m., Ayub M wrote:

Achilleas, for this setup to work are changes to postgresql.conf 
and pg_hba.conf needed? I am trying to implement this for AWS rds Aurora where 
these files are not accessible.


Those files are needed in any case if you work with PostgreSQL.
Unfortunately I have no experience with Aurora. We have been building from
source for ages.

On Mon, Sep 9, 2019, 6:46 AM Achilleas Mantzios <ach...@matrix.gatewaynet.com> wrote:

On 9/9/19 12:41 p.m., Laurenz Albe wrote:
> Christoph Moench-Tegeder wrote:
>>> It has hba and via hba 

Querying nested relations

2019-09-16 Thread Natalia Ostapuk
Hi everyone,

My name is Natalia, I'm a PhD student and I'm currently looking into the
performance of queries on nested relations (expressed with arrays and UDT
in Postgres).

Now I'm trying to pin down the problem and find some good examples. If
you've ever had any problems with these kinds of queries (e.g. a join on an
element of an array), could you please share them with me?
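
For illustration, a minimal sketch of the kind of query I mean (table and
column names invented):

    CREATE TABLE orders (id int PRIMARY KEY, item_ids int[]);
    CREATE TABLE items  (id int PRIMARY KEY, name text);

    -- join on an element of the array via unnest()
    SELECT o.id, i.name
    FROM orders o
    CROSS JOIN LATERAL unnest(o.item_ids) AS elem(item_id)
    JOIN items i ON i.id = elem.item_id;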

Any help would be appreciated.

Best regards,
Natalia


Re: Cascade Trigger Not Firing

2019-09-16 Thread George Neuner
On Sat, 14 Sep 2019 10:00:18 -0500, Ron wrote:

>On 9/14/19 9:54 AM, Tom Lane wrote:
>[snip]
>> The only mention of this feature in 38.1 "Overview of Trigger Behavior"
>> is
>>
>>  UPDATE triggers *can* moreover be set to fire only if certain columns
>>  are mentioned in the SET clause of the UPDATE statement.
>>
>> which seems to me to be plenty specific enough --- it is carefully
>> *not* saying that the trigger will fire if the column changes value.
>> The CREATE TRIGGER man page never says that, either.
>
>Given that the UPDATE "*can* ... be set to fire only if certain columns are 
>mentioned in the SET clause of the UPDATE statement", it logically follows 
>that the default behavior is something else (for example, if the field value 
>changes for whatever reason).

But the default could be "any column mentioned", not necessarily any
value changed.
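
To make the distinction concrete, a sketch with invented names: an UPDATE OF
trigger fires when the column is mentioned in SET even if the value did not
change, while a WHEN clause tests for an actual change:

    -- fires whenever col appears in the SET list
    CREATE TRIGGER t_mentioned AFTER UPDATE OF col ON tab
    FOR EACH ROW EXECUTE FUNCTION f();

    -- fires only when the value actually changes
    CREATE TRIGGER t_changed AFTER UPDATE ON tab
    FOR EACH ROW WHEN (OLD.col IS DISTINCT FROM NEW.col)
    EXECUTE FUNCTION f();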

George





deadlock on declarative partitioned table (11.3)

2019-09-16 Thread Kevin Wilkinson
On Linux, PG 11.3, I have a (declarative) partitioned table with a 
deadlock that I do not understand. One process does a COPY to the 
partitioned table. Another process is executing a JDBC batch of commands 
to "atomically" replace one of the table partitions. It has the 
following commands (autocommit is off).


   lock table foo;
   alter table foo detach partition foo_nn;
   alter table foo_nn rename to foo_nn_old;
   alter table new_foo_nn rename to foo_nn;
   alter table foo attach partition foo_nn for values  from (...) to (...);
   commit;

The log says the deadlock is on the first alter table command, but I 
think that is misleading. I suspect what is happening is that the 
explicit lock command attempts to lock each partition of foo in turn 
rather than locking all partitions immediately, so it acquires some 
locks in some unknown order while the copy acquires locks as needed, and 
they deadlock.
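
If that is the cause, one sketch of a workaround is to take every needed lock
explicitly, in a fixed order, before any DDL runs (whether this resolves this
particular deadlock is exactly my open question):

   begin;
   -- lock the parent and the affected partition up front, always in this
   -- order, so nothing is acquired incrementally mid-batch
   lock table foo in access exclusive mode;
   lock table foo_nn in access exclusive mode;
   alter table foo detach partition foo_nn;
   -- ... remaining rename/attach commands as above ...
   commit;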


Or is something else going on? Is there a better way to atomically 
replace a table partition? The table is partitioned by timestamp, but I 
don't think that matters.


thanks,

kevin







Re: FW: Re: FW: Re: Shouldn't this trigger be called?

2019-09-16 Thread stan
On Mon, Sep 16, 2019 at 12:12:34AM -0400, Tom Lane wrote:
> stan  writes:
> > On Sun, Sep 15, 2019 at 12:27:14PM -0700, Adrian Klaver wrote:
> >> On 9/15/19 10:46 AM, stan wrote:
> >>> So, my tests tell me that the validity check is done BEFORE an attempt to
> >>> insert (thus firing the trigger) occurs.
> 
> >> What validity check?
> 
> > The check to see if it is the type enum.
> 
> Indeed, a trigger cannot fix an input-validity error, because that
> will happen while trying to form the row value that would be passed
> to the trigger.  So I guess that when you say "the trigger doesn't
> fire" you really mean "this other error is raised first".
> 
> However, I still don't understand your claim that it works the
> way you wanted in an INSERT statement.  The enum input function
> is going to complain in either context.
> 
> Generally you need to fix issues like this before trying to
> insert the data into your table.  You might try preprocessing
> the data file before feeding it to COPY.  Another way is to
> copy into a temporary table that has very lax column data types
> (all "text", perhaps) and then transform the data using
> INSERT ... SELECT from the temp table to the final storage table.
> 
>   regards, tom lane
Thanks for educating me. I thought I had tested and seen that this worked on
an INSERT, but once you told me it does not, I re-tested to convince myself
that my test was invalid. Let me show you what I was trying to do:



CREATE FUNCTION fix_customer_types_case()
RETURNS trigger AS $$
BEGIN
if NEW.c_type  IS NOT NULL
THEN
NEW.c_type := upper(cast( NEW.c_type AS TEXT));
END IF ;
if NEW.status  IS NOT NULL
THEN
/*
RAISE NOTICE 'Called With %', NEW.status;
*/
NEW.status := upper(cast( NEW.status AS TEXT));
END IF ;
/*
RAISE NOTICE 'Left With With NEW.status %', NEW.status;
RAISE NOTICE 'Left With With NEW.c_type %', NEW.c_type;
*/
return NEW;
END;
$$ 
LANGUAGE PLPGSQL;

CREATE TRIGGER fix_customer_types_case_trig BEFORE INSERT OR UPDATE ON customer
FOR EACH ROW EXECUTE FUNCTION fix_customer_types_case();

All of this is to deal with columns defined as this user-defined type.


CREATE TYPE activity_status AS ENUM ('ACTIVE' ,
'INACTIVE');


Can you think of a better way to make the system "human data entry friendly"?
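
For reference, a minimal sketch of the staging-table approach Tom describes
(the file path is a placeholder and the column list is cut down; the real
table has more NOT NULL columns):

    CREATE TEMP TABLE customer_stage (cust_no smallint, c_type text, status text);
    COPY customer_stage FROM '/tmp/customers.csv' WITH (FORMAT csv);
    INSERT INTO customer (cust_no, c_type, status)
    SELECT cust_no,
           upper(c_type)::customer_type,
           upper(status)::activity_status
    FROM customer_stage;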

-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Replication protocol vs replication functions

2019-09-16 Thread Eqbal Z
I am using PostgreSQL's logical replication in my application with the
test_decoding plugin. I am trying to understand the difference between the
replication protocol and the replication functions provided by PostgreSQL.
Specifically, the CREATE_REPLICATION_SLOT call returns a snapshot name, but
pg_create_logical_replication_slot does not. The driver I am using does not
support the replication protocol, so I am trying to use the functions. But how
can I retrieve the snapshot from the function call?

Can I use a combination of pg_export_snapshot with
pg_create_logical_replication_slot to get a snapshot of the open
transaction, which I can then use with pg_dump, for example?
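
For reference, the calls in question look like this (slot name invented);
whether the snapshot exported this way corresponds to the slot's consistent
point is exactly my open question:

    -- the SQL function, unlike the protocol command, returns no snapshot name
    SELECT * FROM pg_create_logical_replication_slot('my_slot', 'test_decoding');

    -- pg_export_snapshot() exports the *current transaction's* snapshot
    BEGIN ISOLATION LEVEL REPEATABLE READ;
    SELECT pg_export_snapshot();  -- id usable with pg_dump --snapshot=...
    -- keep this transaction open while the dump runs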

Thanks.


Re: FW: Re: FW: Re: Shouldn't this trigger be called?

2019-09-16 Thread Adrian Klaver

On 9/16/19 11:52 AM, stan wrote:

On Mon, Sep 16, 2019 at 12:12:34AM -0400, Tom Lane wrote:

stan  writes:

On Sun, Sep 15, 2019 at 12:27:14PM -0700, Adrian Klaver wrote:

On 9/15/19 10:46 AM, stan wrote:

So, my tests tell me that the validity check is done BEFORE an attempt to
insert (thus firing the trigger) occurs.



What validity check?



The check to see if it is the type enum.


Indeed, a trigger cannot fix an input-validity error, because that
will happen while trying to form the row value that would be passed
to the trigger.  So I guess that when you say "the trigger doesn't
fire" you really mean "this other error is raised first".

However, I still don't understand your claim that it works the
way you wanted in an INSERT statement.  The enum input function
is going to complain in either context.

Generally you need to fix issues like this before trying to
insert the data into your table.  You might try preprocessing
the data file before feeding it to COPY.  Another way is to
copy into a temporary table that has very lax column data types
(all "text", perhaps) and then transform the data using
INSERT ... SELECT from the temp table to the final storage table.

regards, tom lane

Thanks for educating me. I thought I had tested and seen that this worked on
an INSERT, but once you told me it does not, I re-tested to convince myself
that my test was invalid. Let me show you what I was trying to do:



So was it invalid?




CREATE FUNCTION fix_customer_types_case()
RETURNS trigger AS $$
BEGIN
 if NEW.c_type  IS NOT NULL
THEN
NEW.c_type := upper(cast( NEW.c_type AS TEXT));
END IF ;
 if NEW.status  IS NOT NULL
THEN
/*
RAISE NOTICE 'Called With %', NEW.status;
*/
NEW.status := upper(cast( NEW.status AS TEXT));
END IF ;
/*
RAISE NOTICE 'Left With With NEW.status %', NEW.status;
RAISE NOTICE 'Left With With NEW.c_type %', NEW.c_type;
*/
 return NEW;
END;
$$
LANGUAGE PLPGSQL;

CREATE TRIGGER fix_customer_types_case_trig BEFORE INSERT OR UPDATE ON customer
 FOR EACH ROW EXECUTE FUNCTION fix_customer_types_case();

All of this is to deal with columns defined as this user-defined type.


CREATE TYPE activity_status AS ENUM ('ACTIVE' ,
'INACTIVE');


Can you think of a better way to make the system "human data entry friendly"?




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: FW: Re: FW: Re: Shouldn't this trigger be called?

2019-09-16 Thread stan


On Mon, Sep 16, 2019 at 12:44:49PM -0700, Adrian Klaver wrote:
> On 9/16/19 11:53 AM, stan wrote:
> > On Sun, Sep 15, 2019 at 09:16:35PM -0700, Adrian Klaver wrote:
> > > On 9/15/19 6:04 PM, stan wrote:
> > > > On Sun, Sep 15, 2019 at 12:27:14PM -0700, Adrian Klaver wrote:
> > > > > On 9/15/19 10:46 AM, stan wrote:
> > > > > > Forgot to cc the list again. Have to look at settings in mutt.
> > > > > > 
> > > 
> > > > > What validity check?
> > > > > 
> > > > 
> > > > The check to see if it is the type enum.
> > > > 
> > > 
> > > This would get solved a lot quicker if full information was provided:
> > > 
> > > 1) Schema of the table.
> > >   Including associated triggers
> > > 
> > > 2) The actual check code.
> > > 
> > 
> > OK, please let me know if what I put in my reply to Tom Lane is not 
> > sufficient.
> > 
> 
> 
> It was not sufficient, you did not include the table schema or the check
> code.

OK, understood. Here is the table:

/* Contains one record for each customer */

CREATE TABLE customer (
    customer_key            integer DEFAULT nextval('customer_key_serial') PRIMARY KEY,
    cust_no                 smallint NOT NULL UNIQUE,
    name                    varchar UNIQUE,
    c_type                  customer_type,
    location                varchar,
    bill_address_1          varchar,
    bill_address_2          varchar,
    bill_city               varchar,
    bill_state              varchar(2),
    bill_zip                us_postal_code NOT NULL DEFAULT '0',
    bill_country            varchar,
    bill_attention          varchar,
    bill_addresse           varchar,
    ship_address_1          varchar,
    ship_address_2          varchar,
    ship_addresse           varchar,
    ship_attention          varchar,
    ship_city               varchar,
    ship_state              varchar(2),
    ship_zip                us_postal_code NOT NULL DEFAULT '0',
    office_phone_area_code  numeric(3),
    office_phone_exchange   numeric(3),
    office_phone_number     numeric(4),
    office_phone_extension  numeric(4),
    cell_phone_area_code    numeric(3),
    cell_phone_exchange     numeric(3),
    cell_phone_number       numeric(4),
    ship_phone_area_code    numeric(3),
    ship_phone_exchange     numeric(3),
    ship_phone_number       numeric(4),
    ship_phone_extension    numeric(4),
    fax_phone_area_code     numeric(3),
    fax_phone_exchange      numeric(3),
    fax_phone_number        numeric(4),
    status                  activity_status NOT NULL DEFAULT 'ACTIVE',
    modtime                 timestamptz NOT NULL DEFAULT current_timestamp
);

I am not certain what you mean by the check. As you can see, there is no
specific CHECK clause. I was referring to the built-in check of the data being
entered against the legal values for the user-defined type. Make sense?
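
For illustration, the built-in check in question is the enum input function,
which rejects the value before any BEFORE trigger can run; a hypothetical
failing statement would look like:

    INSERT INTO customer (cust_no, status) VALUES (1, 'active');
    -- ERROR:  invalid input value for enum activity_status: "active"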


-- 
"They that would give up essential liberty for temporary safety deserve
neither liberty nor safety."
-- Benjamin Franklin




Re: PostgreSQL License

2019-09-16 Thread Laurenz Albe
Please don't cross-post your question to more than one list!

Ashkar Dev wrote:
> Can anyone explain the PostgreSQL License to me?
> Can I create a database with it and sell the database also preventing
> the buyer from reselling it?
> Can I change in the logos of the PostgreSQL system and its name?

Yes.

You only have to make sure that the original license text is included
in your license.  This does not limit what you are allowed to do with
the software.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Debugging an intermittent foreign key insert error with csvlog

2019-09-16 Thread Jeff Ross

Hi all,

I've been debugging an intermittent foreign key insert error on our 
single database / multi-tenant server.  To help isolate traffic by 
tenant, I've switched to using csvlog for the duration and have set 
log_min_duration_statement to 0 to get *everything*.  Fortunately, the daily 
80G csvlogs compress nicely.


For lack of a readable way to paste in the 12 lines of relevant csvlog 
into an e-mail, I've uploaded a very small 3K csv file to my web server at


https://openvistas.net/hansens_error.csv

The bare bones of the issue involve inserting a row into a table named 
load_det, then getting the serial sequence of that table, getting the 
last_value of that sequence and then inserting into another table named 
cargo_det using that retrieved last_value as the foreign key that ties 
this row to the load_det table.  The vast majority of these succeed 
without issue but not all.


The csvlog snip shows what I believe are 2 simultaneous but separate 
sessions, and the session that attempts to insert into the cargo_det 
table is not the same session that inserted into the load_det table. 
That's my hunch, but what is unclear to me is whether those separate 
sessions are also in separate transactions.


csvlog has a couple of columns that I'm unclear about: 
session_start_time, virtual_transaction_id.  Is session_start_time the 
time inside a transaction block, as in beginning with a begin but before 
a commit or rollback?  Or is it maybe just how long this pgbouncer 
session has been connected?  virtual_transaction_id is defined in the 
docs as backendID/localXID--do separate backendIDs also represent 
separate transactions?  Is there a better way to determine separate 
transactions within csvlog?


Also, the app code that does this is legacy Perl using DBD::Pg, but the 
original code was written for Informix.  We've been in the process of 
moving off Informix for a while now and should be done within the month. 
I intend to re-work this to use RETURNING id (available in Postgres 
since 8.2!) instead of the serial sequence / last_value hack, but I'm not 
quite there yet.


Thanks,

Jeff Ross




Re: Debugging an intermittent foreign key insert error with csvlog

2019-09-16 Thread Adrian Klaver

On 9/16/19 1:46 PM, Jeff Ross wrote:

Hi all,

I've been debugging an intermittent foreign key insert error on our 
single database / multi-tenant server.  To help isolate traffic by 
tenant, I've switched to using csvlog for the duration and have set 
log_min_duration_statement to 0 to get *everything*.  Fortunately, the daily 
80G csvlogs compress nicely.


For lack of a readable way to paste in the 12 lines of relevant csvlog 
into an e-mail, I've uploaded a very small 3K csv file to my web server at


     https://openvistas.net/hansens_error.csv

The bare bones of the issue involve inserting a row into a table named 
load_det, then getting the serial sequence of that table, getting the 
last_value of that sequence and then inserting into another table named 
cargo_det using that retrieved last_value as the foreign key that ties 
this row to the load_det table.  The vast majority of these succeed 
without issue but not all.


The csvlog snip shows what I believe are 2 simultaneous but separate 
sessions, and the session that attempts to insert into the cargo_det 
table is not the same session that inserted into the load_det table. 
That's my hunch, but what is unclear to me is whether those separate 
sessions are also in separate transactions.


To me it looks like the INSERT into load_det and into cargo_det are 
occurring in the same transaction (934281062). The part that would 
concern me is that:


select last_value from load_det_id_seq

occurs in different transactions and sessions. From here:

https://www.postgresql.org/docs/11/sql-createsequence.html

"Also, last_value will reflect the latest value reserved by any session, 
whether or not it has yet been returned by nextval."


Especially as the error is coming from a different 
transaction (934281063) and session than the INSERTs. I'm guessing that 
there is cross-talk between the sequence-number fetch and its application to 
cargo_det.
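
A self-contained illustration of that cross-talk (demo table invented;
last_value is instance-wide, while currval() is scoped to the session):

    CREATE TABLE demo (id serial PRIMARY KEY);
    -- session A:
    INSERT INTO demo DEFAULT VALUES;     -- nextval() hands out 1
    -- session B, before A reads the sequence:
    INSERT INTO demo DEFAULT VALUES;     -- nextval() hands out 2
    -- session A then sees:
    SELECT last_value FROM demo_id_seq;  -- 2, session B's value
    SELECT currval('demo_id_seq');       -- 1, session A's own value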




csvlog has a couple of columns that I'm unclear about: 
session_start_time, virtual_transaction_id.  Is session_start_time the 
time inside a transaction block, as in beginning with a begin but before 
a commit or rollback?  Or is it maybe just how long this pgbouncer 
session has been connected?  virtual_transaction_id is defined in the 
docs as backendID/localXID--do separate backendIDs also represent 
separate transactions?  Is there a better way to determine separate 
transactions within csvlog?


Also, the app code that does this is legacy Perl using DBD::Pg, but the 
original code was written for Informix.  We've been in the process of 
moving off Informix for a while now and should be done within the month. 
I intend to re-work this to use RETURNING id (available in Postgres 
since 8.2!) instead of the serial sequence / last_value hack, but I'm not 
quite there yet.


Thanks,

Jeff Ross





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Debugging an intermittent foreign key insert error with csvlog

2019-09-16 Thread Jeff Ross

On 9/16/19 4:07 PM, Adrian Klaver wrote:

On 9/16/19 1:46 PM, Jeff Ross wrote:




The csvlog snip shows what I believe are 2 simultaneous but separate 
sessions, and the session that attempts to insert into the cargo_det 
table is not the same session that inserted into the load_det table. 
That's my hunch, but what is unclear to me is whether those separate 
sessions are also in separate transactions.


To me it looks like the INSERT into load_det and into cargo_det are 
occurring in the same transaction (934281062). The part that would 
concern me is that:


select last_value from load_det_id_seq

occurs in different transactions and sessions. From here:

https://www.postgresql.org/docs/11/sql-createsequence.html

"Also, last_value will reflect the latest value reserved by any session, 
whether or not it has yet been returned by nextval."


Especially as the error is coming from a different 
transaction (934281063) and session than the INSERTs. I'm guessing that 
there is cross-talk between the sequence-number fetch and its application to 
cargo_det.




Thank you Adrian--I think my hunch was basically correct then.  Now all 
I need to do is figure out why we have multiple sessions and 
transactions.  Or it might be time to skip ahead and get rid of the 
last_value query.


Jeff




Re: FW: Re: FW: Re: Shouldn't this trigger be called?

2019-09-16 Thread Adrian Klaver

On 9/16/19 12:55 PM, stan wrote:


On Mon, Sep 16, 2019 at 12:44:49PM -0700, Adrian Klaver wrote:

On 9/16/19 11:53 AM, stan wrote:

On Sun, Sep 15, 2019 at 09:16:35PM -0700, Adrian Klaver wrote:

On 9/15/19 6:04 PM, stan wrote:

On Sun, Sep 15, 2019 at 12:27:14PM -0700, Adrian Klaver wrote:

On 9/15/19 10:46 AM, stan wrote:

Forgot to cc the list again. Have to look at settings in mutt.






What validity check?



The check to see if it is the type enum.



This would get solved a lot quicker if full information was provided:

1) Schema of the table.
Including associated triggers

2) The actual check code.



OK, please let me know if what I put in my reply to Tom Lane is not sufficient.




It was not sufficient, you did not include the table schema or the check
code.


OK, understood. Here is the table:

/* Contains one record for each customer */




 status                  activity_status NOT NULL DEFAULT 'ACTIVE',
 modtime                 timestamptz NOT NULL DEFAULT current_timestamp
);

I am not certain what you mean by the check. As you can see, there is no
specific CHECK clause. I was referring to the built-in check of the data being
entered against the legal values for the user-defined type. Make sense?



To be clear, you are seeing an error because a value of, say, 'active' is 
being rejected before your trigger has a chance to upper-case it, correct?


Also, does this happen whether you use \copy or manually INSERT the values?





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Debugging an intermittent foreign key insert error with csvlog

2019-09-16 Thread Adrian Klaver

On 9/16/19 3:16 PM, Jeff Ross wrote:

On 9/16/19 4:07 PM, Adrian Klaver wrote:

On 9/16/19 1:46 PM, Jeff Ross wrote:




The csvlog snip shows what I believe are 2 simultaneous but separate 
sessions, and the session that attempts to insert into the cargo_det 
table is not the same session that inserted into the load_det table. 
That's my hunch, but what is unclear to me is whether those 
separate sessions are also in separate transactions.


To me it looks like the INSERT into load_det and into cargo_det are 
occurring in the same transaction (934281062). The part that would 
concern me is that:


select last_value from load_det_id_seq

occurs in different transactions and sessions. From here:

https://www.postgresql.org/docs/11/sql-createsequence.html

"Also, last_value will reflect the latest value reserved by any 
session, whether or not it has yet been returned by nextval."


Especially as the error is coming from a different 
transaction (934281063) and session than the INSERTs. I'm guessing that 
there is cross-talk between the sequence-number fetch and its application to 
cargo_det.




Thank you Adrian--I think my hunch was basically correct then.  Now all 
I need to do is figure out why we have multiple sessions and 
transactions.  Or it might be time to skip ahead and get rid of the 
last_value query.


Yes, RETURNING makes this sort of thing so much easier that it pays to invest 
the time in making it part of the query.
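
A minimal sketch of what that can look like (table and column names are
assumptions based on the thread), chaining both inserts in one statement:

    WITH new_load AS (
        INSERT INTO load_det (tenant) VALUES ('hansens')
        RETURNING id
    )
    INSERT INTO cargo_det (load_det_id, qty)
    SELECT id, 1 FROM new_load;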




Jeff





--
Adrian Klaver
adrian.kla...@aklaver.com




pgbackrest restore to new location?

2019-09-16 Thread Ron

Hi,

In order to do this, do I create a new stanza in the config file whose 
pg1-path points to the new/empty directory structure while still pointing at 
the existing backup repository, and restore that stanza?
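
(For reference, a hedged sketch with the stanza name and path invented: 
pgBackRest accepts configuration options on the command line, so overriding 
pg1-path just for the restore may avoid defining a second stanza; worth 
verifying against the pgBackRest docs.)

    pgbackrest --stanza=main --pg1-path=/var/lib/pgsql/11/data_new restore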


Thanks

--
Angular momentum makes the world go 'round.




pldbgapi extension

2019-09-16 Thread Prakash Ramakrishnan
Hi Team,

Not able to create the extension pldbgapi in the cluster. If I run make and
make install, the .sql files go into /usr/share/pgsql/extension, but I
actually need these files created in the path below
(/usr/pgsql-11/share/extension).

How can I do this? Please advise.

==> psql
psql (11.5)
Type "help" for help.

postgres=# CREATE EXTENSION pldbgapi;
ERROR:  could not open extension control file
"/usr/pgsql-11/share/extension/pldbgapi.control": No such file or
directory
postgres=#
postgres=#
postgres=# \q

[root@cvgrhehhsd006 pldebugger-master]# export
PATH=$PGHOME/bin:$PATH:$HOME/.local/bin:$HOME/bin:$PGBASE/edbmtk/bin:/usr/pgsql-11/bin
[root@cvgrhehhsd006 pldebugger-master]#  make USE_PGXS=1
make: Nothing to be done for `all'.
[root@cvgrhehhsd006 pldebugger-master]#
[root@cvgrhehhsd006 pldebugger-master]# make install USE_PGXS=1
/usr/bin/mkdir -p '/usr/lib64/pgsql'
/usr/bin/mkdir -p '/usr/share/pgsql/extension'
/usr/bin/mkdir -p '/usr/share/pgsql/extension'
/usr/bin/mkdir -p '/usr/share/doc/pgsql/extension'
/bin/sh /usr/lib64/pgsql/pgxs/src/makefiles/../../config/install-sh -c -m
755  plugin_debugger.so '/usr/lib64/pgsql/plugin_debugger.so'
/bin/sh /usr/lib64/pgsql/pgxs/src/makefiles/../../config/install-sh -c -m
644 ./pldbgapi.control '/usr/share/pgsql/extension/'
/bin/sh /usr/lib64/pgsql/pgxs/src/makefiles/../../config/install-sh -c -m
644 ./pldbgapi--1.0.sql ./pldbgapi--unpackaged--1.0.sql
 '/usr/share/pgsql/extension/'
/bin/sh /usr/lib64/pgsql/pgxs/src/makefiles/../../config/install-sh -c -m
644 ./README.pldebugger '/usr/share/doc/pgsql/extension/'
[root@cvgrhehhsd006 pldebugger-master]#
[root@cvgrhehhsd006 pldebugger-master]# cd /usr/share/pgsql/extension
[root@cvgrhehhsd006 extension]# ls
pldbgapi--1.0.sql  pldbgapi.control  pldbgapi--unpackaged--1.0.sql
[root@cvgrhehhsd006 extension]# ll
total 16
-rw-r--r--. 1 root root 7457 Sep 17 02:46 pldbgapi--1.0.sql
-rw-r--r--. 1 root root  181 Sep 17 02:46 pldbgapi.control
-rw-r--r--. 1 root root 2258 Sep 17 02:46 pldbgapi--unpackaged--1.0.sql
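
A hedged fix sketch (the PGDG path is an assumption): PGXS installs into
whatever installation the pg_config it finds belongs to, so pointing
PG_CONFIG at the PostgreSQL 11 install should place the files under
/usr/pgsql-11/share/extension:

    make USE_PGXS=1 PG_CONFIG=/usr/pgsql-11/bin/pg_config clean
    make USE_PGXS=1 PG_CONFIG=/usr/pgsql-11/bin/pg_config install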

-- 
Thanks,
Prakash.R
PostgreSQL - Offshore DBA support TCS / Nielsen Infrastructure Team On call
: +91-8939599426