Re: [EXT EMAIL] Re: First Time Starting Up PostgreSQL and Having Problems

2019-06-20 Thread p...@cmicdo.com
 Does the file have any  \015's in it?

On Wednesday, June 19, 2019, 7:03:25 PM EDT, Andrew Gierth wrote:
 
 > "Rob" == Rob Sargent  writes:

 >>> local  all            all                                    trust

 Rob> That line has four values and the header has 5.

That's standard for "local" lines, which lack an ADDRESS field.
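
For comparison (the METHOD values below are placeholders, not recommendations),
a "host" entry carries all five columns while a "local" entry simply omits ADDRESS:

# TYPE  DATABASE  USER  ADDRESS       METHOD
host    all       all   127.0.0.1/32  trust
local   all       all                 trust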

-- 
Andrew (irc:RhodiumToad)


  

Re: Row data is reflected in DETAIL message when constraints fail on insert/update

2019-06-20 Thread Shay Rojansky
Shay here, maintainer of the Npgsql driver for .NET.

>> Is there a setting where i can disable the DETAIL field being populated
>> with row data?
>
> See:
>
>
https://www.postgresql.org/docs/11/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT
>
> log_error_verbosity

While this is helpful, this does not seem to quite fit:

1. As this is about personal sensitive data (including conceivably
authentication information), the fact that the default is to log seems
problematic.
2. The TERSE setting also disables HINT, QUERY and CONTEXT.
3. There may be other information sent in the DETAIL messages which does
not contain sensitive data. There's no reason to have that disabled along
with the sensitive data.

 In other words, this isn't about verbosity, but about sensitive data. It
seems like a specific knob for sensitive information may be required, which
would be off by default and would potentially affect other fields as well
(if relevant).


Re: Row data is reflected in DETAIL message when constraints fail on insert/update

2019-06-20 Thread Karsten Hilbert
On Thu, Jun 20, 2019 at 01:26:23PM +0200, Shay Rojansky wrote:

>  In other words, this isn't about verbosity, but about sensitive data. It
> seems like a specific knob for sensitive information may be required, which
> would be off by default and would potentially affect other fields as well
> (if relevant).

A specific knob for "sensitive data" cannot be supplied by
PostgreSQL because it cannot know beforehand what information
will be considered sensitive under a given, future, usage
scenario.

Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B




Re: Row data is reflected in DETAIL message when constraints fail on insert/update

2019-06-20 Thread Adrian Klaver

On 6/20/19 4:26 AM, Shay Rojansky wrote:

Shay here, maintainer of the Npgsql driver for .NET.

 >> Is there a setting where i can disable the DETAIL field being populated
 >> with row data?
 >
 > See:
 >
 > 
https://www.postgresql.org/docs/11/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT

 >
 > log_error_verbosity

While this is helpful, this does not seem to quite fit:

1. As this is about personal sensitive data (including conceivably 
authentication information), the fact that the default is to log seems 
problematic.

2. The TERSE setting also disables HINT, QUERY and CONTEXT.
3. There may be other information sent in the DETAIL messages which does 
not contain sensitive data. There's no reason to have that disabled 
along with the sensitive data.


  In other words, this isn't about verbosity, but about sensitive data. 
It seems like a specific knob for sensitive information may be required, 
which would be off by default and would potentially affect other fields 
as well (if relevant).


As Karsten said, that is beyond the scope of Postgres logging. The 
prudent thing would be to prevent the log information from reaching the 
application logs, or to put it in a log that can only be seen by 
authorized personnel.


--
Adrian Klaver
adrian.kla...@aklaver.com




Detaching multiple partitions in 1 ALTER TABLE statement

2019-06-20 Thread Ben Hood
Hi,

I’m wondering what is the syntax for detaching multiple partitions in 1 ALTER 
TABLE statement.

I’ve tried this:


# ALTER TABLE movement_legs DETACH PARTITION movement_legs_2019_06_20_11_15, 
DETACH PARTITION movement_legs_2019_06_20_11_20;
psql: ERROR:  syntax error at or near ","
LINE 1: ...gs DETACH PARTITION movement_legs_2019_06_20_11_15, DETACH P…


The documentation suggests "DETACH PARTITION can be combined into a list of 
multiple alterations to be applied together”:

https://www.postgresql.org/docs/12/sql-altertable.html

But I haven’t found an example of where multiple partitions are detached in 1 
statement.

This is using version 12 beta 1.

Many thanks,

Ben





Re: Row data is reflected in DETAIL message when constraints fail on insert/update

2019-06-20 Thread Tom Lane
Karsten Hilbert  writes:
> On Thu, Jun 20, 2019 at 01:26:23PM +0200, Shay Rojansky wrote:
>> In other words, this isn't about verbosity, but about sensitive data. It
>> seems like a specific knob for sensitive information may be required, which
>> would be off by default and would potentially affect other fields as well
>> (if relevant).

> A specific knob for "sensitive data" cannot be supplied by
> PostgreSQL because it cannot know beforehand what information
> will be considered sensitive under a given, future, usage
> scenario.

Yeah, it's fairly hard to see how we could respond to this complaint
without lobotomizing our error messages to the point of near uselessness.
Almost any non-constant text in an error report could possibly be seen
as hazardous.

More generally: I find this complaint a little confusing.  We did not
consider reporting the "show row contents" DETAIL to the client to be a
security hazard when it was added, because one would think that that's
just data that the client already knows anyway.  I'd be interested to see
a plausible use-case in which the message would reflect PII that had not
been supplied by or available to the client.

regards, tom lane




Re: Inserts restricted to a trigger

2019-06-20 Thread Adrian Klaver

On 6/19/19 3:07 PM, Miles Elam wrote:

Hi Adrian, thanks for responding.

How would I restrict access to the SECURITY DEFINER function? If it can 
be called by the trigger, it can be called by the user as well I would 
think. Same issue as access to the table itself only now with a 
superuser intermediary, right?


We may need to back this up:

1) What is the audit table recording?

2) How much access do your users have to the database itself, outside 
the Web app?


3) What access do users have in the Web app?







On Tue, Jun 18, 2019 at 6:20 PM Adrian Klaver wrote:


On 6/18/19 10:14 AM, Miles Elam wrote:
 > Thanks for the suggestion. Unfortunately we only have a single login
 > role (it's a web app) and then we SET ROLE according to the
contents of
 > a JSON Web Token. So we end up with SESSION_USER as the logged in
user
 > and the active role as CURRENT_USER.

Have not tried it but nested function?:

1) Outer function runs as normal user and grabs the CURRENT_USER. This
is passed into 2)

2) Audit function that runs with SECURITY DEFINER.

Other option is to record the CURRENT_USER in the table the trigger is
on and just pass that to the audit function.

 >
 > It may be that we're just stuck with a gap and need to just try
and keep
 > track of our mutation points, such as limit what is accessible
through
 > REST or GraphQL, and there is no way to fundamentally lock this
down in
 > Postgres. I was checking the mailing list to see if I'd missed
anything.
 >
 >
 >  > On Tue, Jun 18, 2019 at 9:47 AM Torsten Förtsch <tfoertsch...@gmail.com> wrote:
 >
 >     Have you tried session_user?
 >
 >     create function xx() returns table (cur text, sess text)
 >     security definer language sql as $$
 >          select current_user::text, session_user::text;
 >     $$;
 >
 >     Then log in as different user and:
 >
 >     => select (xx()).*;
 >        cur    | sess
 >     ----------+-------
 >      postgres | write
 >
 >
 >     On Tue, Jun 18, 2019 at 6:30 PM Miles Elam <miles.e...@productops.com> wrote:
 >
 >         That seems straightforward. Unfortunately I also want to know
 >         the user/role that performed the operation. If I use SECURITY
 >         DEFINER, I get the superuser account back from
CURRENT_USER, not
 >         the actual user.
 >
 >         Sorry, should have included that in the original email.
How do I
 >         restrict access while still retaining info about the current
 >         user/role?
 >
 >
 >             On Mon, Jun 17, 2019 at 5:47 PM <r...@raf.org> wrote:
 >
 >             Adrian Klaver wrote:
 >
 >              > On 6/17/19 4:54 PM, Miles Elam wrote:
 >              > > Is there are way to restrict direct access to a
table
 >             for inserts but
 >              > > allow a trigger on another table to perform an
insert
 >             for that user?
 >              > >
 >              > > I'm trying to implement an audit table without
allowing
 >             user tampering
 >              > > with the audit information.
 >              >
 >              > Would the below not work?:
 >              > CREATE the table as superuser or other privileged user
 >              > Have trigger function run as above user(use
SECURITY DEFINER)
 >
 >             and make sure not to give any other users
insert/update/delete
 >             permissions on the audit table.
 >
 >              > > Thanks in advance,
 >              > >
 >              > > Miles Elam
 >              >
 >              > --
 >              > Adrian Klaver
 >              > adrian.kla...@aklaver.com
 >
 >
 >


-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com




RE: [EXT EMAIL] Re: First Time Starting Up PostgreSQL and Having Problems

2019-06-20 Thread Brent Bates
 I found the problem.  I cleared everything out and started from scratch, 
then slowly added my changes back to the configuration files.  The problem was 
in the postgresql.conf.  At the bottom of the file I had uncommented all the 
‘include’ lines, so they looked like this:

include_dir = ''            # include files ending in '.conf' from
                            # a directory, e.g., 'conf.d'
include_if_exists = ''      # include file only if it exists
include = ''                # include file

The first one is what caused all the headaches.  I recommented out these lines 
and everything works now.  Thanks for all the quick suggestions.

Brent L. Bates - UNIX System Administrator - Langley Federal Credit Union
Phone: (757) 848-5443, Mobile: (757) 419-0295



Re: Problem with row-level lock

2019-06-20 Thread Adrian Klaver

On 6/19/19 9:06 AM, Job wrote:

Hi guys,

I am struggling with a problem caused, sometimes, by two concurrent updates on 
the same row of a table.


What is the error message you get?



Since the client application cannot be controlled, I need to manage - in order to 
avoid this situation - the lock on a record caused by two updates at the same time.
Is there a way to force serialization (I don't know how) of the updates on the same 
row, in order to avoid these locks?

Thank you, cheers!

FC






--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [EXT EMAIL] Re: First Time Starting Up PostgreSQL and Having Problems

2019-06-20 Thread Tom Lane
Brent Bates  writes:
>  I found the problem.  I cleared everything out and started from scratch, 
> then slowly added my changes back to the configuration files.  The problem 
> was in the postgresql.conf.  At the bottom of the file I had uncommented all 
> the ‘include’ lines, so they looked like this:
> include_dir = ''            # include files ending in '.conf' from
>                             # a directory, e.g., 'conf.d'
> include_if_exists = ''      # include file only if it exists
> include = ''                # include file

> The first one is what caused all the headaches.  I recommented out these 
> lines and everything works now.  Thanks for all the quick suggestions.

Ah-hah!  I wonder if we should disallow empty values for these GUCs?
And/or put in some kind of test for recursive opening of the same
config file?  I don't think it'd occurred to anyone that it's this
easy to get the code to try to do that.

regards, tom lane




Re: Inserts restricted to a trigger

2019-06-20 Thread Adrian Klaver

On 6/19/19 3:07 PM, Miles Elam wrote:

Hi Adrian, thanks for responding.

How would I restrict access to the SECURITY DEFINER function? If it can 
be called by the trigger, it can be called by the user as well I would 
think. Same issue as access to the table itself only now with a 
superuser intermediary, right?




Should have also mentioned, if you are not averse to a third party 
solution there is PGAudit:


https://www.pgaudit.org/


--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Row data is reflected in DETAIL message when constraints fail on insert/update

2019-06-20 Thread Shay Rojansky
Karsten,

>>  In other words, this isn't about verbosity, but about sensitive data. It
>> seems like a specific knob for sensitive information may be required,
which
>> would be off by default and would potentially affect other fields as well
>> (if relevant).
>
> A specific knob for "sensitive data" cannot be supplied by
> PostgreSQL because it cannot know beforehand what information
> will be considered sensitive under a given, future, usage
> scenario.

It seems generally agreed that all data from the database should be
considered potentially sensitive and should therefore not be leaked in log
messages - unless an explicit, informed opt-in is done. It is extremely
easy to imagine a (poorly-written) UI or web application which simply
surfaces database exceptions, allowing attackers to potentially extract
data from the database. In the worst case, passwords and other auth
information may get exposed in this way, but even any sort of personal
information is a big problem.

It seems worth at least having a conversation about it...


Re: Row data is reflected in DETAIL message when constraints fail on insert/update

2019-06-20 Thread Karsten Hilbert
On Thu, Jun 20, 2019 at 05:22:20PM +0200, Shay Rojansky wrote:

> It seems generally agreed that all data from the database should be
> considered potentially sensitive and should therefore not be leaked in log
> messages - unless an explicit, informed opt-in is done. It is extremely
> easy to imagine a (poorly-written) UI or web application which simply
> surfaces database exceptions, allowing attackers to potentially extract
> data from the database. In the worst case, passwords and other auth
> information may get exposed in this way, but even any sort of personal
> information is a big problem.
>
> It seems worth at least having a conversation about it...

Sure, but we are currently exploring whether the database
reflects any values that it had not been given by the same
user beforehand.

There might be another scenario:

user enters value for column 1

app adds in secret-to-the-user value for column 2

UPDATE fails

error message reflects val 1 and secret val 2

app displays both values

user knows secret value 2

but I don't see how PostgreSQL can do anything *reasonable*
about that short of sitting tight-and-mum and not reflect
much of *anything* beyond "error". And even that can be a
side channel.

Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B




Re: [EXT EMAIL] Re: First Time Starting Up PostgreSQL and Having Problems

2019-06-20 Thread Ian Barwick

On 6/20/19 11:55 PM, Tom Lane wrote:

Brent Bates  writes:

  I found the problem.  I cleared everything out and started from scratch, 
then slowly added my changes back to the configuration files.  The problem was 
in the postgresql.conf.  At the bottom of the file I had uncommented all the 
‘include’ lines, so they looked like this:
 include_dir = ''            # include files ending in '.conf' from
                             # a directory, e.g., 'conf.d'
 include_if_exists = ''      # include file only if it exists
 include = ''                # include file



The first one is what caused all the headaches.  I recommented out these lines 
and everything works now.  Thanks for all the quick suggestions.


Ah-hah!  I wonder if we should disallow empty values for these GUCs?
And/or put in some kind of test for recursive opening of the same
config file?  I don't think it'd occurred to anyone that it's this
easy to get the code to try to do that.


Funnily enough I was looking at that code the other day for totally other
reasons and vaguely wondered if there was any potential for circular
inclusions and the like. There's a check for nesting depth, but I'm not
sure what else. I could take a look in the next few days if no-one else
wants to.


Regards

Ian Barwick

--
 Ian Barwick   https://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: Row data is reflected in DETAIL message when constraints fail on insert/update

2019-06-20 Thread Shay Rojansky
>> A specific knob for "sensitive data" cannot be supplied by
>> PostgreSQL because it cannot know beforehand what information
>> will be considered sensitive under a given, future, usage
>> scenario.

> Yeah, it's fairly hard to see how we could respond to this complaint
> without lobotomizing our error messages to the point of near uselessness.
> Almost any non-constant text in an error report could possibly be seen
> as hazardous.

I don't think that's true - schema information (table, column names)
definitely seems like it's in a different category than table contents.

> More generally: I find this complaint a little confusing.  We did not
> consider reporting the "show row contents" DETAIL to the client to be a
> security hazard when it was added, because one would think that that's
> just data that the client already knows anyway.  I'd be interested to see
> a plausible use-case in which the message would reflect PII that had not
> been supplied by or available to the client.

I'm imagining two main scenarios here.

First, there are many lazily-written applications out there which simply
show exception/error messages to users. So user A could interact with a
website in a way that triggers a unique constraint violation, and thereby
get access to the data in the row which caused the violation.

Second, such exceptions and errors frequently get logged (e.g. to disk).
Logs in general aren't kept as secure as database data itself (who knows
where they end up and who handles them).

In this day and age of increased attention to personal information it seems
quite risky to be sending row contents via error messages without an
opt-in...


Re: Detaching multiple partitions in 1 ALTER TABLE statement

2019-06-20 Thread Simon Riggs
On Thu, 20 Jun 2019 at 15:19, Ben Hood  wrote:

> Hi,
>
> I’m wondering what is the syntax for detaching multiple partitions in 1
> ALTER TABLE statement.
>
> I’ve tried this:
>
>
> # ALTER TABLE movement_legs DETACH PARTITION
> movement_legs_2019_06_20_11_15, DETACH PARTITION
> movement_legs_2019_06_20_11_20;
> psql: ERROR:  syntax error at or near ","
> LINE 1: ...gs DETACH PARTITION movement_legs_2019_06_20_11_15, DETACH P…
>
>
> The documentation suggests "DETACH PARTITION can be combined into a list
> of multiple alterations to be applied together”:
>

Docs say this: "All the forms of ALTER TABLE that act on a single table,
except RENAME, SET SCHEMA, ATTACH PARTITION, and DETACH PARTITION can be
combined into a list of multiple alterations to be applied together."

Perhaps a comma would be helpful, but its clear that you can do multiple
detaches in one command at present.

But it sounds like a great feature.
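
Until such a feature exists, the workaround is simply one DETACH per statement;
a sketch using the partition names from the original post (wrapping them in a
transaction is optional):

BEGIN;
ALTER TABLE movement_legs DETACH PARTITION movement_legs_2019_06_20_11_15;
ALTER TABLE movement_legs DETACH PARTITION movement_legs_2019_06_20_11_20;
COMMIT;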

-- 
Simon Riggs    http://www.2ndQuadrant.com/

PostgreSQL Solutions for the Enterprise


Re: Row data is reflected in DETAIL message when constraints fail on insert/update

2019-06-20 Thread Tom Lane
Karsten Hilbert  writes:
> Sure, but we are currently exploring whether the database
> reflects any values that it had not been given by the same
> user beforehand.

> There might be another scenario:
>   user enters value for column 1
>   app adds in secret-to-the-user value for column 2
>   UPDATE fails
>   error message reflects val 1 and secret val 2
>   app displays both values
>   user knows secret value 2

I wondered about this, but a test case did not show it to be possible,
and after some looking around I found the responsible code.  The
"Failing row contains ..." message is built by execMain.c's
ExecBuildSlotValueDescription() function, and it will only print columns
for which (a) the value was supplied in the current INSERT or UPDATE
command, or (b) the calling user has privileges to SELECT that column.
So we have expended at least some effort towards blocking that sort of
hole.

Admittedly, in your example there's a difference between what "the app"
should know and what "the user using the app" should know.  But I'm not
really seeing how Postgres could usefully model that situation.  We have
no idea about the structure of the client-side logic.

BTW, I notice that ExecBuildSlotValueDescription punts and never prints
anything if the table in question has RLS enabled.  So maybe a workable
kluge for the OP is to enable RLS?
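
For anyone wanting to try that, a minimal sketch against the person table from
this thread (the permissive policy is an assumption; whether owners and
superusers remain exempt from the suppression would need testing):

ALTER TABLE person ENABLE ROW LEVEL SECURITY;
-- keep existing access working for everyone; tighten as appropriate
CREATE POLICY person_all ON person FOR ALL USING (true) WITH CHECK (true);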

regards, tom lane




Re: Row data is reflected in DETAIL message when constraints fail on insert/update

2019-06-20 Thread Tom Lane
Shay Rojansky  writes:
>> Yeah, it's fairly hard to see how we could respond to this complaint
>> without lobotomizing our error messages to the point of near uselessness.
>> Almost any non-constant text in an error report could possibly be seen
>> as hazardous.

> I don't think that's true - schema information (table, column names)
> definitely seems like it's in a different category than table contents.

You're not visualizing this with the appropriate amount of paranoia: just
because the database thinks that something is a table or column name
doesn't mean that that's what it actually is.  A screw-up in query syntax
could lead to reporting back something that was really a data value.
Admittedly, that's not very likely given a well-debugged application
issuing the queries, but it's not something to ignore either.

> First, there are many lazily-written applications out there which simply
> show exception/error messages to users. So user A could interact with a
> website in a way that triggers a unique constraint violation, and thereby
> get access to the data in the row which caused the violation.

Well, that's just bad webapp design ...

> Second, such exceptions and errors frequently get logged (e.g. to disk).
> Logs in general aren't kept as secure as database data itself (who knows
> where they end up and who handles them).

Yeah.  We more commonly see this type of complaint in the alternate guise
of "you're logging sensitive information in the postmaster log!  I can't
accept that!".  We've basically established a project policy that the
postmaster log has to be secured to exactly the same extent as the
database files themselves, since otherwise there's no certainty that it
won't leak data you don't want leaked.  On the whole, I think the right
response to this complaint is that equal attention has to be given to
securing error logs on the client side.

regards, tom lane




Re: Row data is reflected in DETAIL message when constraints fail on insert/update

2019-06-20 Thread Ravi Krishna
> More generally: I find this complaint a little confusing.  We did not
> consider reporting the "show row contents" DETAIL to the client to be a
> security hazard when it was added, because one would think that that's
> just data that the client already knows anyway.  I'd be interested to see
> a plausible use-case in which the message would reflect PII that had not
> been supplied by or available to the client.

I had the same issue in pgaudit which was spilling PHI data in PG logs which we
were feeding to sumologic.  I had to write a python masking program to strip out
literal values from the PG log.




Re: Detaching multiple partitions in 1 ALTER TABLE statement

2019-06-20 Thread Adrian Klaver

On 6/20/19 9:00 AM, Simon Riggs wrote:
On Thu, 20 Jun 2019 at 15:19, Ben Hood wrote:


Hi,

I’m wondering what is the syntax for detaching multiple partitions
in 1 ALTER TABLE statement.

I’ve tried this:


# ALTER TABLE movement_legs DETACH PARTITION
movement_legs_2019_06_20_11_15, DETACH PARTITION
movement_legs_2019_06_20_11_20;
psql: ERROR:  syntax error at or near ","
LINE 1: ...gs DETACH PARTITION movement_legs_2019_06_20_11_15, DETACH P…


The documentation suggests "DETACH PARTITION can be combined into a
list of multiple alterations to be applied together”:


Docs say this: "All the forms of ALTER TABLE that act on a single table,
except RENAME, SET SCHEMA, ATTACH PARTITION, and DETACH PARTITION can be
combined into a list of multiple alterations to be applied together."



Perhaps a comma would be helpful, but its clear that you can do multiple 

  ^not
Or I am thoroughly confused:)


detaches in one command at present.

But it sounds like a great feature.

--
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Solutions for the Enterprise



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Row data is reflected in DETAIL message when constraints fail on insert/update

2019-06-20 Thread David G. Johnston
On Thu, Jun 20, 2019 at 9:17 AM Tom Lane  wrote:

> Karsten Hilbert  writes:
> > Sure, but we are currently exploring whether the database
> > reflects any values that it had not been given by the same
> > user beforehand.
>
> > There might be another scenario:
> >   user enters value for column 1
> >   app adds in secret-to-the-user value for column 2
> >   UPDATE fails
> >   error message reflects val 1 and secret val 2
> >   app displays both values
> >   user knows secret value 2
>
> I wondered about this, but a test case did not show it to be possible,
> and after some looking around I found the responsible code.  The
> "Failing row contains ..." message is built by execMain.c's
> ExecBuildSlotValueDescription() function, and it will only print columns
> for which (a) the value was supplied in the current INSERT or UPDATE
> command, or (b) the calling user has privileges to SELECT that column.
> So we have expended at least some effort towards blocking that sort of
> hole.
>
>
Just to be clear here, the OP provided the following query example:

test=# update person set email = null;
ERROR:  null value in column "email" violates not-null constraint
DETAIL:  Failing row contains (william, denton, null).

The presence of william and denton in the error detail was because the user
updating the table has select access on first and last name.  If they did
not those fields would not have been part of the error message?  I'm not in
a position to experiment right now but what does/should it show in the
restrictive case?

David J.


Re: Row data is reflected in DETAIL message when constraints fail on insert/update

2019-06-20 Thread Tom Lane
"David G. Johnston"  writes:
> Just to be clear here, the OP provided the following query example:

> test=# update person set email = null;
> ERROR:  null value in column "email" violates not-null constraint
> DETAIL:  Failing row contains (william, denton, null).

> The presence of william and denton in the error detail was because the user
> updating the table has select access on first and last name.  If they did
> not those fields would not have been part of the error message?  I'm not in
> a position to experiment right now but what does/should it show in the
> restrictive case?

regression=# create user joe;
CREATE ROLE
test=# create table person(first text, last text, email text not null);
CREATE TABLE
test=# grant select(email),update(email) on person to joe;
GRANT
test=# insert into person values('william','denton','w...@gmail.com');
INSERT 0 1
test=# \c - joe
You are now connected to database "test" as user "joe".
test=> update person set email = null;
psql: ERROR:  null value in column "email" violates not-null constraint
DETAIL:  Failing row contains (email) = (null).

The DETAIL in this case would be the same whether joe had select(email)
privilege or not; the email value is considered OK to report since it
came from the query not the table.

regards, tom lane




Why does the pg_dumpall command have a database option?

2019-06-20 Thread Espresso Beanies
I'm trying to see if someone could explain to me why the pg_dumpall command
has a database option when it's designed to dump all the databases in a
PostgreSQL server instance. I'm only asking because when I create a .pgpass
file and try to use the pg_dumpall command, I'm still required to specify a
specific database even though the command itself should be targeting all
databases. Can anyone explain this to me a bit better?

Thanks,


Re: Why does the pg_dumpall command have a database option?

2019-06-20 Thread Adrian Klaver

On 6/20/19 1:03 PM, Espresso Beanies wrote:
I'm trying to see if someone could answer to me why the pg_dumpall 
command has a database option when it's designed to dump all the 
databases in a PostgreSQL server instance. I'm only asking because when 
I create a .pgpass file and try to use the pg_dumpall command, I'm still 
required to specify a specific database even though the command itself 
should be targeting all databases. Can anyone explain this to me a bit 
better?


Thanks,


Because pg_dumpall is a client and needs to connect to a database to 
kick-start the process and fetch the global information:


https://www.postgresql.org/docs/11/app-pg-dumpall.html

-l dbname
--database=dbname

Specifies the name of the database to connect to for dumping global 
objects and discovering what other databases should be dumped. If not 
specified, the postgres database will be used, and if that does not 
exist, template1 will be used.
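
So a typical invocation only has to name that one bootstrap database; for
example (the host and output file below are placeholders):

pg_dumpall -h db.example.com -U postgres -l postgres -f all_databases.sql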



--
Adrian Klaver
adrian.kla...@aklaver.com




Odd Update Behaviour

2019-06-20 Thread Harry Ambrose
Hi,

I noticed some weird update behaviour today in one of our development
environments. In my opinion this appears to be a bug, but before reporting
it I thought I should seek the opinions of others in the community. Maybe
this is known and/or seen before?

The behaviour is visible when a subquery is used in an update, more
specifically when the subquery would be rejected by the parser if executed on
its own. This causes the where condition to be ignored entirely, and thus the
whole table is updated rather than the parser throwing an error.

Example below that can be use to replicate:
update_bug=# create table dummy_data (rowid bigserial primary key, type
int);
CREATE TABLE

update_bug=# insert into dummy_data (type) select 1 from
generate_series(1,1);
INSERT 0 1

update_bug=# insert into dummy_data (type) select 2 from
generate_series(1,1);
INSERT 0 1

update_bug=# select type, count(*) from dummy_data group by 1 order by 1;
 type | count
------+-------
    1 |     1
    2 |     1
(2 rows)

update_bug=# select rowid from (select rowid as idnumber from dummy_data
where type = 1) q;
ERROR:  column "rowid" does not exist
LINE 1: select rowid from (select rowid as idnumber from dummy_data ...

update_bug=# update dummy_data set type = 3 where rowid in (select rowid
from (select rowid as idnumber from dummy_data where type = 1) q);
UPDATE 2

update_bug=# select type, count(*) from dummy_data group by 1 order by 1;
 type | count
------+-------
    3 |     2
(1 row)

I have managed to replicate this behaviour on both 9.5.16 & 10.6 and will
continue to test further.

Any thoughts/opinions are obviously welcomed.

Best wishes,
Harry


Re: Odd Update Behaviour

2019-06-20 Thread David G. Johnston
On Thu, Jun 20, 2019 at 2:13 PM Harry Ambrose 
wrote:

> Hi,
>
> I noticed some weird update behaviour today in one of our development
> environments. In my opinion this appears to be a bug, but before reporting
> it I thought I should seek the opinions of others in the community. Maybe
> this is known and/or seen before?
>

Yep, seen before, even has an FAQ entry:

https://wiki.postgresql.org/wiki/FAQ#Why_doesn.27t_PostgreSQL_report_a_column_not_found_error_when_using_the_wrong_name_in_a_subquery.3F


> The behaviour is visible when a subquery is used in an update. More
> specifically when a subquery is rejected by the parser when executed on
> it's own. This causes the where condition to be ignored entirely and thus
> the whole table updated rather than the parser throwing an error.
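
The usual defence is to table-qualify column references inside the subquery.
With the example above, qualifying turns the silent correlation into an
immediate error, and the intended query then references the aliased column:

update_bug=# update dummy_data set type = 3 where rowid in
  (select q.rowid from (select rowid as idnumber from dummy_data where type = 1) q);
ERROR:  column q.rowid does not exist

update_bug=# update dummy_data set type = 3 where rowid in
  (select q.idnumber from (select rowid as idnumber from dummy_data where type = 1) q);
UPDATE 1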
>

David J.


Need create table statements from metadata

2019-06-20 Thread David Gauthier
psql (9.6.7, server 9.5.2) on linux.

I have 2 DBs, one for dev the other is live.  I want to recreate several
tables in the dev db using the same metadata found in the live db.  But I'm
too lazy to manually transcribe everything and that's prone to error anyway.

In the past, I would just run pg_dump and capture metadata only for
selected tables, then use that to (re)create that tables in the other DB.
But where I am now, they don't give me the privs to run pg_dump.

So I was wondering if there's a way to do this quickly and cleanly without
pg_dump.  Perhaps a stored procedure that pg_dump uses (or something like
that) ?

Thanks in Advance !


List tables for a specific schema

2019-06-20 Thread David Gauthier
psql (9.6.7, server 9.5.2)

I created a schema in my DB called "dvm".  Then I created a table a-la...
create table dvm.foo (col1 text); .  I see the schema with \dnS+.  But I
can't see my table using \d.  I can see the table with \d dvm.foo, so it's
in there.  The first column of the \d output is "Schema", so I would have
expected to see it there. But no luck.

What I would like is to see a list of all the tables in the dvm schema and
nothing else.  "\d dvm.*" works, but it also spills the details of every
table.  I just want the list of tables in the schema.

Is there a way to get what I want ?

Thanks !


Re: List tables for a specific schema

2019-06-20 Thread Adrian Klaver

On 6/20/19 2:43 PM, David Gauthier wrote:

psql (9.6.7, server 9.5.2)

I created a schema in my DB called "dvm".  Then I created a table a-la...
create table dvm.foo (col1 text); .  I see the schema with \dnS+.  But 
I can't see my table using \d.  I can see the table with \d dvm.foo, so 
it's in there.  The first column of the \d output is "Schema", so I 
would have expected to see it there. But no luck.


What I would like is to see a list of all the tables in the dvm schema 
and nothing else.  "\d dvm.*" works, but it also spills the details of 
every table.  I just want the list of tables in the schema.


Is there a way to get what I want ?


Looks like a search_path issue.

In psql:

1) show search_path
To see what your search path is.

2) Then take a look at:
https://www.postgresql.org/docs/11/ddl-schemas.html#DDL-SCHEMAS-PATH

to see how to change it.
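
For example, to put dvm first (the role name in the second statement is a
placeholder):

SET search_path TO dvm, public;                          -- current session only
ALTER ROLE my_login SET search_path TO dvm, public;      -- default for future sessions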



Thanks !



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Need create table statements from metadata

2019-06-20 Thread Adrian Klaver

On 6/20/19 2:32 PM, David Gauthier wrote:

psql (9.6.7, server 9.5.2) on linux.

I have 2 DBs, one for dev the other is live.  I want to recreate several 
tables in the dev db using the same metadata found in the live db.  But 
I'm too lazy to manually transcribe everything and that's prone to error 
anyway.


In the past, I would just run pg_dump and capture metadata only for 
selected tables, then use that to (re)create that tables in the other 
DB.  But where I am now, they don't give me the privs to run pg_dump.


When you say metadata are you talking about the DDL for an object?

Use FDW?:
https://www.postgresql.org/docs/9.6/postgres-fdw.html
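
A rough sketch of that approach, run from the dev database (server address,
credentials, schema and table names below are all made up). Note it gives you
foreign tables that still point at the live data, not detached local copies:

CREATE EXTENSION postgres_fdw;
CREATE SERVER live_db FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'live.example.com', dbname 'livedb', port '5432');
CREATE USER MAPPING FOR CURRENT_USER SERVER live_db
    OPTIONS (user 'dave', password 'secret');
CREATE SCHEMA IF NOT EXISTS live_mirror;
-- pulls in table definitions that mirror the live columns and types
IMPORT FOREIGN SCHEMA public LIMIT TO (tbl_a, tbl_b)
    FROM SERVER live_db INTO live_mirror;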



So I was wondering if there's a way to do this quickly and cleanly 
without pg_dump.  Perhaps a stored procedure that pg_dump uses (or 
something like that) ?


Thanks in Advance !



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: List tables for a specific schema

2019-06-20 Thread Tom Lane
David Gauthier  writes:
> What I would like is to see a list of all the tables in the dvm schema and
> nothing else.  "\d dvm.*" works, but it also spills the details of every
> table.  I just want the list of tables in the schema.

Try "\dt dvm.*".

regards, tom lane




Re: Need create table statements from metadata

2019-06-20 Thread Adrian Klaver

On 6/20/19 2:32 PM, David Gauthier wrote:

psql (9.6.7, server 9.5.2) on linux.

I have 2 DBs, one for dev the other is live.  I want to recreate several 
tables in the dev db using the same metadata found in the live db.  But 
I'm too lazy to manually transcribe everything and that's prone to error 
anyway.


In the past, I would just run pg_dump and capture metadata only for 
selected tables, then use that to (re)create that tables in the other 
DB.  But where I am now, they don't give me the privs to run pg_dump.


So I was wondering if there's a way to do this quickly and cleanly 
without pg_dump.  Perhaps a stored procedure that pg_dump uses (or 
something like that) ?


To add to my previous post you can use third party tools:

1) SQL Workbench/J
http://www.sql-workbench.eu/
http://www.sql-workbench.eu/TableSource_png.html

2) pgAdmin
https://www.pgadmin.org/
https://www.pgadmin.org/docs/pgadmin4/4.x/table_dialog.html



Thanks in Advance !



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Inserts restricted to a trigger

2019-06-20 Thread Miles Elam
Thanks for the reply, Adrian.

1. The audit tables (plural) are recording the historical data for a table,
ie., updates and deletes. All the same data as found in a given table along
with the role that performed the operation, the transaction id, and the
time range where this data was in active use.

2. Only through a web UI via an API service.

3. Should be limited to web app, but the data scientists may need direct
access in the near future.

PGAudit does not cover our use case. We are making a temporal table system
since PostgreSQL does not support one natively. For example: "What would
this query have returned yesterday at 4:27pm PT?" Access is as expected for
inserts but updates and deletes are logged to history tables. We cannot use
3rd party extensions because we are on AWS managed databases. We are
following the model detailed here (
https://wiki.postgresql.org/wiki/SQL2011Temporal) with some modifications.

Given the model listed in the link, it's not clear how we can prevent user
tampering with history inserts. (History updates and deletes are already
REVOKE restricted.) Since we are going through an API server via REST
and/or GraphQL, the possibility is very unlikely, but we would prefer a
defense in depth approach in case an oversight somehow allowed arbitrary
query access to the database with the web user. For the most part, we're
fairly well locked down, but I just can't quite see how to restrict
aforementioned query access from inserting to the history in an ad-hoc
manner rather than the trigger-based predetermined insert pattern.
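
For concreteness, the arrangement discussed up-thread looks roughly like this
(table, column and role names are all hypothetical):

-- the web role gets no direct write access to the history table
REVOKE INSERT, UPDATE, DELETE ON person_history FROM webapp;

CREATE OR REPLACE FUNCTION person_history_trg() RETURNS trigger
    LANGUAGE plpgsql
    SECURITY DEFINER                      -- runs with the function owner's privileges
    SET search_path = public, pg_temp     -- guard against search_path tricks
AS $$
BEGIN
    INSERT INTO person_history (person_id, first, last, email, valid_to)
    VALUES (OLD.id, OLD.first, OLD.last, OLD.email, now());
    RETURN OLD;
END;
$$;

CREATE TRIGGER person_history_audit
    AFTER UPDATE OR DELETE ON person
    FOR EACH ROW EXECUTE PROCEDURE person_history_trg();

Because the function returns trigger it cannot be invoked via SELECT, so
revoking write access on person_history closes the ad-hoc insert path.
Capturing the SET ROLE'd user is the separate problem discussed earlier in the
thread (e.g. record current_user in the base table row and copy it across in
the trigger).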



On Thu, Jun 20, 2019 at 8:01 AM Adrian Klaver 
wrote:

> On 6/19/19 3:07 PM, Miles Elam wrote:
> > Hi Adrian, thanks for responding.
> >
> > How would I restrict access to the SECURITY DEFINER function? If it can
> > be called by the trigger, it can be called by the user as well I would
> > think. Same issue as access to the table itself only now with a
> > superuser intermediary, right?
> >
>
> Should have also mentioned, if you are not adverse to a third party
> solution there is PGAudit:
>
> https://www.pgaudit.org/
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


perl extension error

2019-06-20 Thread Prakash Ramakrishnan
Hi Team,

We are using PostgreSQL 11 on one production server and, while creating an
extension, are getting the error below. Please do the needful:

A4.Italy_UAT=#
A4.Italy_UAT=# Create extension plperl;
ERROR:  could not open extension control file
"/usr/pgsql-11/share/extension/plperl.control": No such file or directory
A4.Italy_UAT=#



-- 
Thanks,
Prakash.R


Re: perl extension error

2019-06-20 Thread Ian Barwick
On Fri, 21 Jun 2019 at 14:22, Prakash Ramakrishnan <
prakash.ramakrishnan...@nielsen.com> wrote:

> Hi Team,
>
> We have using PostgreSQL 11 in one production server and while creating
> extension getting below error,please do the needful,
>
> A4.Italy_UAT=#
> A4.Italy_UAT=# Create extension plperl;
> ERROR:  could not open extension control file
> "/usr/pgsql-11/share/extension/plperl.control": No such file or directory
> A4.Italy_UAT=#
>
>
You need to install whatever package contains PL/Perl.
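
For example, the /usr/pgsql-11 path suggests the PGDG RPM packages; if that
assumption holds, something along these lines should work, after which
CREATE EXTENSION plperl; can be re-run in the target database:

sudo yum install postgresql11-plperl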


Regards

Ian Barwick

-- 
  Ian Barwick   https://www.2ndQuadrant.com/
  PostgreSQL Development, 24x7 Support, Training & Services