Re: Something else about Redo Logs disappearing

2020-06-13 Thread Peter
On Thu, Jun 11, 2020 at 10:53:15PM +0200, Laurenz Albe wrote:
! On Thu, 2020-06-11 at 22:35 +0200, Magnus Hagander wrote:
! > I believe somebody around that time also wrote a set of bash scripts that 
can be used in a pre/post-backup-job combination with the current APIs.
! 
! https://github.com/cybertec-postgresql/safe-backup

Ah, thank You, very nice.

I've never seen anybody coding bash - it is strongly shunned in the
Berkeley community. 

Some Questions:
 1. There are explicit error messages in loc-82 and -92 of pgpre.sh.
To where are these written?
 2. The result data from pg_stop_backup() are stored into the living
database. But, according to the docs, they should be placed into
the completed backup. Do I have a misunderstanding here?
 3. The most common cause of a failed backup might be a tape-side
malfunction. As far as I can see, the way to handle this currently is
to provide a timeout for pgpre.sh (which is problematic, because
we might as well have just reached end-of-tape and have to wait
until Monday for the operator to change it). May I suggest adding
a switch to pgpost.sh, in order to voluntarily fail out of the job?
 4. If, by misconfiguration and/or operator error, the backup system
happens to start a second backup in parallel to the first,
then do I correctly assume that both backups will be rendered
inconsistent while this may not be visible to the operator; and
the earlier backup would be flagged as apparently successful while
carrying the wrong (later) label?

BTW: what does, in general, happen if a backup_label file gets
accidentally swapped with one from a parallel, but slightly later,
backup? Do I correctly assume that such a mistake gets somehow detected,
as otherwise it would have just the same unwelcome effects
(i.e. silent data corruption) as no backup_label at all?

cheerio,
PMc




Re: Returning SELECTed rows immediately instead of all at the end?

2020-06-13 Thread Ron

On 6/13/20 12:10 AM, Tom Lane wrote:

Ron  writes:

I'm running amcheck on a set of indices (test machine, not prod) and want to
track the progress.  Is there a SELECT clause that makes rows display as
they are created, or do I have to explicitly call bt_index_check() from a
shell script or SQL function in order to see the output as each index is
checked?

If you're willing to code at the libpq level, see

https://www.postgresql.org/docs/current/libpq-single-row-mode.html


An option (maybe "-r") for psql to put it in single row mode would be useful 
for DBAs.


--
Angular momentum makes the world go 'round.




Re: Returning SELECTed rows immediately instead of all at the end?

2020-06-13 Thread Ron

On 6/13/20 12:04 AM, David G. Johnston wrote:
On Friday, June 12, 2020, Ron wrote:



I'm running amcheck on a set of indices (test machine, not prod) and
want to track the progress.  Is there a SELECT clause that makes rows
display as they are created,


No

or do I have to explicitly call bt_index_check() from a shell script
or SQL function in order to see the output as each index is checked?


You could wrap the function call in a custom plpgsql function (or just do 
a plpgsql loop) and side-channel the output via NOTICE, but I'd probably just 
do a shell script wrapper absent any other constraint.


Yeah, a shell script was my next option.  Thanks.
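For illustration, a minimal shell wrapper along those lines could look roughly
like this (a sketch only; it assumes the amcheck extension is installed and
that connection settings come from the usual libpq environment variables):

#!/bin/sh
# Check every btree index with one psql call each, so every result is
# printed as soon as that index has been verified.
psql -AtqX -c "SELECT c.oid::regclass
               FROM pg_class c JOIN pg_am a ON a.oid = c.relam
               WHERE c.relkind = 'i' AND a.amname = 'btree'" |
while read -r idx; do
    echo "checking $idx"
    psql -qX -c "SELECT bt_index_check('$idx')"
done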

--
Angular momentum makes the world go 'round.


Re: Something else about Redo Logs disappearing

2020-06-13 Thread Laurenz Albe
On Sat, 2020-06-13 at 05:24 +0200, Peter wrote:
> ! https://github.com/cybertec-postgresql/safe-backup
> 
> Ah, thank You, very nice.
> 
> I've never seen anybody coding bash - it is strongly shunned in the
> Berkeley community. 

Strange, but then I don't move in these circles.

> Some Questions:
>  1. There are explicit error messages in loc-82 and -92 of pgpre.sh.
> To where are these written?

Standard error.  It is up to the caller of the script to route that
somewhere useful.

>  2. The result data from pg_stop_backup() are stored into the living
> database. But, according to the docs, they should be placed into
> the completed backup. Do I have a misunderstanding here?

Right, but these scripts don't know anything about that backup itself.
They are designed to be called before and after the backup.
In between, you back up the data directory however you think fit.

It is the responsibility of the caller of the post-backup script
to add the "backup_label" file to the backup.
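For example, a caller could glue the pieces together roughly like this (a
sketch only; the tar invocation, paths and log file are illustrative, and
where pgpost.sh leaves the label data is specific to those scripts):

#!/bin/sh
set -e
# start the non-exclusive backup; stderr from the script goes to a log
./pgpre.sh  2>>/var/log/pg_backup.log

# back up the data directory with whatever tool the site uses
tar -cf /backup/base.tar -C /var/lib/pgsql data

# end the backup
./pgpost.sh 2>>/var/log/pg_backup.log

# caller's responsibility: take the backup_label content produced by
# pg_stop_backup() (wherever the scripts leave it) and add it to the
# completed backup, e.g.  tar -rf /backup/base.tar backup_label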

>  3. The most common cause of a failed backup might be a tape-side
> malfunction. As far as I can see, the way to handle this currently is
> to provide a timeout for pgpre.sh (which is problematic, because
> we might as well have just reached end-of-tape and have to wait
> until Monday for the operator to change it). May I suggest adding
> a switch to pgpost.sh, in order to voluntarily fail out of the job?

As said above, the backup itself is not the job of these scripts.

>  4. If, by misconfiguration and/or operator error, the backup system
> happens to start a second backup in parallel to the first,
> then do I correctly assume that both backups will be rendered
> inconsistent while this may not be visible to the operator; and
> the earlier backup would be flagged as apparently successful while
> carrying the wrong (later) label?

If you are using my scripts and start a second backup while the first
one is still running, the first backup will be interrupted.

This is specific to my scripts; PostgreSQL's non-exclusive backup
can perform more than one concurrent backup successfully.
I tried to keep things simple.

> BTW: what does, in general, happen if a backup_label file gets
> accidentally swapped with one from a parallel, but slightly later,
> backup? Do I correctly assume that such a mistake gets somehow detected,
> as otherwise it would have just the same unwelcome effects
> (i.e. silent data corruption) as no backup_label at all?

If you have the wrong "backup_label", you end up with silent data corruption.

That should not happen with my scripts, though.

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





A query in Streaming Replication

2020-06-13 Thread Sreerama Manoj
 Hello,
  I use streaming replication in async mode. When the master goes down,
the slave is promoted using a trigger file. During this process a ".partial"
file is created, and a WAL file with the same ID is created on a
different timeline on the slave. When the old master comes back as a slave, it is
synced to the current master. This is the normal procedure as far as I
understood. But in some cases the ".partial" file is not created, and the
peer DB that comes back as a slave is unable to sync when this happens.
Please suggest in which scenarios this can happen and how to overcome it.

Thanks & Regards:
Manoj.


Re: Fwd: not able to give usage access to public schema

2020-06-13 Thread Adrian Klaver

On 6/13/20 10:33 AM, sekhar chandra wrote:
Adrian - when I follow the same steps that you did, in my case the 
result is false.



grant usage on schema public to role_test ;
GRANT

SELECT rolname, has_schema_privilege('role_test', 'public', 'usage')
from pg_roles where rolname = 'role_test';
    rolname  | has_schema_privilege
---+--
   role_test | f


Postgres version and where are you running this e.g. cloud service?

What does \dn+ public show before and after you redo commands as asked 
below?


Can you start over and provide complete sequence for above including 
CREATE ROLE and what user you are doing the above as?






On Fri, Jun 12, 2020 at 10:11 PM Adrian Klaver wrote:


On 6/12/20 3:56 PM, sekhar chandra wrote:
 > I am not able to give usage permission to public schema. below
are the
 > steps.
 >
 >
 > Logged in as super user
 > created a new user as user1
 > grant usage on public to user1

Either the above is a cut and paste error or you got an error:

grant usage on public to role_test ;
ERROR:  relation "public" does not exist

grant usage on schema public to role_test ;
GRANT

SELECT rolname, has_schema_privilege('role_test', 'public', 'usage')
from pg_roles where rolname = 'role_test';
    rolname  | has_schema_privilege
---+--
   role_test | t


 >
 > command completed successfully, but the verification statement
 > shows he doesn't have usage permission.
 > SELECT rolname, has_schema_privilege(rolname, 'public', 'usage')
 > from pg_roles;
 >
 > this is strange. what could go wrong.


-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pg_dump of database with numerous objects

2020-06-13 Thread Bruce Momjian
On Wed, Jun  3, 2020 at 04:10:55PM -0400, Alvaro Herrera wrote:
> On 2020-May-31, t...@exquisiteimages.com wrote:
> 
> > I am now needing to upgrade to a new version of PostgreSQL and I am running
> > into problems when pg_upgrade calls pg_dump. pg_dump stalled at: "pg_dump:
> > saving database definition" for 24 hours before I killed the process.
> > 
> > My pg_class table contains 9,000,000 entries and I have 9004 schemas.
> 
> We've made a number of performance improvements to pg_dump so that it
> can dump databases that are "large" in several different dimensions, but
> evidently from your report it is not yet good enough when it comes to
> dumping millions of tables in thousands of schemas.  It will probably
> take some profiling of pg_dump to figure out where the bottleneck is,
> and some careful optimization work in order to make it faster.  Not a
> weekend job, I'm afraid :-(

FYI, we never actually found what version of pg_dump was being used,
since pg_upgrade uses the pg_dump version in the newer cluster.  We only
know the user is coming _from_ 9.3.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: pg_dump of database with numerous objects

2020-06-13 Thread Bruce Momjian
On Sun, May 31, 2020 at 02:02:08PM -0700, Adrian Klaver wrote:
> On 5/31/20 1:38 PM, Christophe Pettus wrote:
> > 
> > 
> > > On May 31, 2020, at 13:37, Adrian Klaver wrote:
> > > 
> > > Just a reminder that the OP's original issue was with using pg_upgrade.
> > 
> > True, although IIRC pg_upgrade uses pg_dump under the hood to do the schema 
> > migration.
> 
> Again true, but pg_upgrade will not work older to newer:
> 
> /usr/local/pgsql11/bin/pg_upgrade  --check -U postgres -d
> /usr/local/pgsql11/data -D /usr/local/pgsql12_up/data -b
> /usr/local/pgsql11/bin -B /usr/local/pgsql12/bin
> 
> Performing Consistency Checks
> -
> Checking cluster versions
> This utility can only upgrade to PostgreSQL version 11.
> Failure, exiting

This is saying you can only use pg_upgrade 11.X to upgrade _to_ Postgres
11.X.  If you want to upgrade to 12, you have to use pg_upgrade from 12.
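So, with the paths from the quoted command, the check would be run with the
version-12 binary, for example:

/usr/local/pgsql12/bin/pg_upgrade --check -U postgres \
    -b /usr/local/pgsql11/bin  -B /usr/local/pgsql12/bin \
    -d /usr/local/pgsql11/data -D /usr/local/pgsql12_up/data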

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Fwd: not able to give usage access to public schema

2020-06-13 Thread Adrian Klaver

On 6/13/20 10:33 AM, sekhar chandra wrote:
Adrian - when I follow the same steps that you did, in my case the 
result is false.



grant usage on schema public to role_test ;
GRANT

SELECT rolname, has_schema_privilege('role_test', 'public', 'usage')
from pg_roles where rolname = 'role_test';
    rolname  | has_schema_privilege
---+--
   role_test | f



Might be a good time to point out :) :

https://www.postgresql.org/docs/12/ddl-schemas.html#DDL-SCHEMAS-PUBLIC

" Note that by default, everyone has CREATE and USAGE privileges on the 
schema public. This allows all users that are able to connect to a given 
database to create objects in its public schema. "



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Oracle vs. PostgreSQL - a comment

2020-06-13 Thread Bruce Momjian
On Wed, Jun  3, 2020 at 08:53:45PM +0200, Andreas Joseph Krogh wrote:
> I agree these are all technical issues, but nevertheless - "implementation
> details", which DBAs don't care about. What's important from a DBA's
> perspective is not whether WAL is cluster-wide or database-wide, but whether
> it's possible to manage backups/PITR/restores of individual databases in a 
> more
> convenient manner, which other RDBMS-vendors seem to provide.
>  
> I love PG, have been using it professionally since 6.5, and our company 
> depends
> on it, but there are things other RDBMS-vendors do better...

The bigger issue is that while we _could_ do this, it would add more
problems and complexity, and ultimately, I think would make the
software less usable overall and would be a net-negative.  We know of no
way to do it without a ton of negatives.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Oracle vs. PostgreSQL - a comment

2020-06-13 Thread Bruce Momjian
On Tue, Jun  2, 2020 at 03:45:08PM -0400, Ravi Krishna wrote:
> 
> > 
> > Generally speaking, I discourage having lots of databases under one PG
> > cluster for exactly these kinds of reasons.  PG's individual clusters
> > are relatively lightweight, after all.
> > 
> 
> Plus PG does not directly support cross-database queries using a three-part name, 
> something
> SQL Server excels at.

We consider the lack of this ability to be a security benefit. 
Cross-container queries can be done using schemas.
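For example, what would be two databases elsewhere can live in two schemas of
one database and be joined with ordinary qualified names (the database, schema,
and table names below are made up):

psql -d appdb -c "SELECT o.id, c.name
                  FROM sales.orders AS o
                  JOIN billing.customers AS c ON c.id = o.customer_id;"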

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Something else about Redo Logs disappearing

2020-06-13 Thread Peter
On Sat, Jun 13, 2020 at 01:53:28PM +0200, Laurenz Albe wrote:

! > I've never seen anybody coding bash - it is strongly shunned in the
! > Berkeley community. 
! 
! Strange, but then I don't move in these circles.

Never mind.

! > Some Questions:
! >  1. There are explicit error messages in loc-82 and -92 of pgpre.sh.
! > To where are these written?
! 
! Standard error.  It is up to the caller of the script to route that
! somewhere useful.

Understood.

! >  2. The result data from pg_stop_backup() are stored into the living
! > database. But, according to the docs, they should be placed into
! > the completed backup. Do I have a misunderstanding here?
! 
! Right, but these scripts don't know anything about that backup itself.
! They are designed to be called before and after the backup.
! In between, you back up the data directory however you think fit.
! 
! It is the responsibility of the caller of the post-backup script
! to add the "backup_label" file to the backup.

I see.

! >  4. If, by misconfiguration and/or operator error, the backup system
! > happens to start a second backup in parallel to the first,
! > then do I correctly assume that both backups will be rendered
! > inconsistent while this may not be visible to the operator; and
! > the earlier backup would be flagged as apparently successful while
! > carrying the wrong (later) label?
! 
! If you are using my scripts and start a second backup while the first
! one is still running, the first backup will be interrupted.

This is not what I am asking. It appears correct to me, that, on
the database, the first backup will be interrupted. But on the
tape side, this might go unnoticed, and on completion it will
successfully receive the termination code from the *SECOND*
backup - which means that on tape we will have a seemingly
successful backup, which
 1. is corrupted, and
 2. carries a wrong label.

! This is specific to my scripts; PostgreSQL's non-exclusive backup
! can perform more than one concurrent backup successfully.
! I tried to keep things simple.

I understand. But the operator may not know that and/or accidentally
start a second backup while one is still running. And this will then
result in ...

! If you have the wrong "backup_label", you end up with silent data corruption.

 ... this.

Indeed this is difficult to avoid, because the high risk of silent
data corruption is an elementary architectural feature of the so-called
"new API". Which is why this is not going to run on my ship.

But you will have to wait - the to-be-expected inrush of service-sales
due to corrupted clusters will only happen after R.13 is active and
people are *forced* to cope with that "new API".

Thanks for the effort of answering my questions.

cheerio,
PMc




Re: Something else about Redo Logs disappearing

2020-06-13 Thread Peter
On Thu, Jun 11, 2020 at 10:35:13PM +0200, Magnus Hagander wrote:
! > Okay. So lets behave like professional people and figure how that
! > can be achieved:
! > At first, we drop that WAL requirement, because with WAL archiving
! > it is already guaranteed that an unbroken chain of WAL is always
! > present in the backup (except when we have a bug like the one that
! > lead to this discussion).
! > So this is **not part of the scope**.
! >
! 
! I would assume that anybody who deals with backups professionally wouldn't
! consider that out of scope,

I strongly disagree. I might suppose You haven't thought this to the
proper end. See:

The WAL backup is, for a couple of reasons, different to most other
backup objects:
 - it runs on demand, not per schedule.
 - it must be immediately repeated (or alerted), until success,
   otherwise the database might stop.
 - it might use a different storage (e.g. twofold/mirrored),
   for obvious reasons.
 - it gets deleted after backup, and that is a point-of-no-return.

For the data tree backup, on the contrary, all these attributes are
identical to those of any other incr/full filesystem backup.

You can see that all the major attributes (scheduling, error-handling,
signalling, ...) of a WAL backup are substantially different to that
of any usual backup.
This is a different *Class* of backup object, therefore it needs an
appropriate infrastructure that can handle these attributes correctly.

The WAL do *not belong* into the scope of the data tree backup, they
are to be handled by different schemes in a different way.
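For reference, continuous WAL archiving has its own hook, configured
independently of any base-backup schedule - roughly the stock example from
the documentation, with an illustrative archive directory:

# postgresql.conf
archive_mode = on
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'
# the command must return 0 only on success; on failure PostgreSQL retries,
# and the segment is not recycled until it has been archived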

But, if You never have considered *continuous* archiving, and only
intend to take a functional momentary backup of a cluster, then You
may well have never noticed these differences. I noticed them mainly
because I did *BUILD* such an infrastructure (the 20 lines of shell
script, you know).

And yes, I was indeed talking about *professional* approaches.


! There is *absolutely* no need for threading to use the current APIs. You
! need to run one query, go do something else, and then run another
! query.

Wrong. The point is, I don't want to "go do something else", I have to
exit() and get back to the initiator at that place.

This is also clearly visible in Laurenz' code: he utilizes two
unchecked background tasks (processes, in this case) with loose
coupling for the purpose, as it does not work otherwise.


! But then, lets think another step forward: for what purpose do we
! > actually need to call pg_start_backup() and pg_stop_backup() at all?
! > I couldn't find exhaustive information about that, only some partial
! > facts.
! >
! 
! Since you don't trust the documentation, I suggest you take a look at
! 
https://git.postgresql.org/gitweb/?p=postgresql.git;a=blob;f=src/backend/access/transam/xlog.c;h=55cac186dc71fcc2f4628f9974b30850bb51eb5d;hb=92c58fd94801dd5c81ee20e26c5bb71ad64552a8#l10438

Thanks, this indeed gives a most entertaining read. Honestly, I've
never seen such a source before - there is thrice as much storytime
as code. :)

The most interesting point in there appears to be this:
  > that the backup label and tablespace map files are not written to
  > disk. Instead, their would-be contents are returned in *labelfile
  > and *tblspcmapfile,

This is in do_pg_start_backup() - so we actually HAVE this data
already at the *START* time of the backup!

Then why in hell do we wait until the END of the backup before we
hand this data to the operator: at a time when the DVD with the
backup is already fixated and cannot be changed anymore, so that
we have to send this data to quarternary storage (spell: the line
printer) and have girl Friday sort them out (and probably mix them
up)??

Is this indeed Boeing-security: try to avoid possible mistake by
creating a much bigger risk?
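For context, with the non-exclusive API both calls must be issued from one and
the same database session, and the label text is only handed back by
pg_stop_backup() - roughly like this (a sketch; in practice the copy of the
data directory happens while that session is kept open):

psql -X <<'SQL'
SELECT pg_start_backup('example', false, false);    -- non-exclusive mode
-- ... the data directory would be copied here, while this session stays open ...
SELECT labelfile FROM pg_stop_backup(false, true);  -- returns the backup_label contents
SQL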


! Things that remain to be figured out:
! >  1. What does pg_start_backup actually do and why would that be
! > necessary? I could not find exhaustive information, but this can
! > probably figured from the source. Currently I know so much:
! >  - it writes a backup_label file. That is just a few lines of
! >ASCII and should not be difficult to produce.
! >
! 
! It does that only in exclusive mode, and doing that is one of the big
! problems with exclusive mode. So don't do that.

As I can read, there is no difference in the function requirements
between exclusive and non-exclusive mode, in that regard: the
backup-label file is NOT necessary in the running cluster data tree,
BUT it should get into the RESTORED data tree before starting it.

And I can't find a single one of those "big problems". What I do find
is just people whining that their cluster doesn't start and they can't
simply delete a file, even if told so. Like soldier complaining that
his gun doesn't shoot and he has no idea how to reload.

I could devise a dozen ways how to detect that situation automatically
during rc.d and remove the file - if it were of any concern to me.


! > I now hop

Re: A query in Streaming Replication

2020-06-13 Thread Sreerama Manoj
Hello,

Forgot to add the version details. I'm using Postgres 10.7.

On Sat, Jun 13, 2020, 20:26 Sreerama Manoj 
wrote:

> Hello,
>   I use streaming replication in async mode. When the master goes down,
> the slave is promoted using a trigger file. During this process a ".partial"
> file is created, and a WAL file with the same ID is created on a
> different timeline on the slave. When the old master comes back as a slave, it is
> synced to the current master. This is the normal procedure as far as I
> understood. But in some cases the ".partial" file is not created, and the
> peer DB that comes back as a slave is unable to sync when this happens.
> Please suggest in which scenarios this can happen and how to overcome it.
>
> Thanks & Regards:
> Manoj.
>
>
>


Re: Fwd: not able to give usage access to public schema

2020-06-13 Thread sekhar chandra
Adrian - when I follow the same steps that you did, in my case the
result is false.


grant usage on schema public to role_test ;
GRANT

SELECT rolname, has_schema_privilege('role_test', 'public', 'usage')
from pg_roles where rolname = 'role_test';
   rolname  | has_schema_privilege
---+--
  role_test | f

On Fri, Jun 12, 2020 at 10:11 PM Adrian Klaver 
wrote:

> On 6/12/20 3:56 PM, sekhar chandra wrote:
> > I am not able to give usage permission to public schema. below are the
> > steps.
> >
> >
> > Logged in as super user
> > created a new user as user1
> > grant usage on public to user1
>
> Either the above is a cut and paste error or you got an error:
>
> grant usage on public to role_test ;
> ERROR:  relation "public" does not exist
>
> grant usage on schema public to role_test ;
> GRANT
>
> SELECT rolname, has_schema_privilege('role_test', 'public', 'usage')
> from pg_roles where rolname = 'role_test';
>rolname  | has_schema_privilege
> ---+--
>   role_test | t
>
>
> >
> > command completed successfully, but the verification statement shows he
> > doesn't have usage permission.
> > SELECT rolname, has_schema_privilege(rolname, 'public', 'usage') from
> > pg_roles;
> >
> > this is strange. what could go wrong.
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Oracle vs. PostgreSQL - a comment

2020-06-13 Thread Ron

On 6/13/20 1:46 PM, Bruce Momjian wrote:

On Wed, Jun  3, 2020 at 08:53:45PM +0200, Andreas Joseph Krogh wrote:

I agree these are all technical issues, but nevertheless - "implementation
details", which DBAs don't care about. What's important from a DBA's
perspective is not whether WAL is cluster-wide or database-wide, but whether
it's possible to manage backups/PITR/restores of individual databases in a more
convenient manner, which other RDBMS-vendors seem to provide.
  
I love PG, have been using it professionally since 6.5, and our company depends

on it, but there are things other RDBMS-vendors do better...

The bigger issue is that while we _could_ do this, it would add more
problems and complexity, and ultimately, I think would make the
software less usable overall and would be a net-negative.  We know of no
way to do it without a ton of negatives.


How do other RDBMSs do it with ease?  (I know it's an architectural issue, 
but what's the architectural issue?)


--
Angular momentum makes the world go 'round.




Re: postgres issue

2020-06-13 Thread Adrian Klaver

On 6/13/20 2:55 PM, Shailesh Rangani wrote:



+ Community DL.





On Saturday, June 13, 2020, 04:39:01 PM CDT, Shailesh Rangani 
 wrote:



Hi Adrian,

Please find the attached sequence.



Please post output of command below as text.

What does \dn+ public show?


Regards
Shailesh





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: postgres issue

2020-06-13 Thread Adrian Klaver

On 6/13/20 3:28 PM, Shailesh Rangani wrote:

\dn+ public
                          List of schemas
  Name  |  Owner   |     Access privileges      |      Description
--------+----------+----------------------------+-------------------------
 public | rdsadmin | dailypay_root=UC/rdsadmin +| standard public schema
        |          | dailypay=U/rdsadmin        |



In the docx file you showed a CREATE TABLE permissions error for the 
public schema. Having USAGE will not fix that. To allow a user to do 
that they need CREATE(C) privilege on the schema.


For more information on privileges see:
https://www.postgresql.org/docs/12/ddl-priv.html
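For example (role name taken from the output above; the database name is
illustrative, and the GRANT has to be run by the schema owner or a superuser):

psql -U rdsadmin -d dailypay -c 'GRANT CREATE ON SCHEMA public TO dailypay;'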

What database in the cluster are you trying to GRANT schema privileges in?

Are you doing the GRANT as the rdsadmin user?

And are you in that database when you run has_schema_privilege()?

What does the below show?

\l+








On Saturday, June 13, 2020, 05:12:32 PM CDT, Adrian Klaver 
 wrote:



On 6/13/20 2:52 PM, Shailesh Rangani wrote:
 > + Community DL.
 >
 >
 >
 >
 >
 > On Saturday, June 13, 2020, 04:39:01 PM CDT, Shailesh Rangani wrote:
 >
 >
 > Hi Adrian,
 >
 > Please find the attached sequence.

Please post output of command below as text.

What does \dn+ public show?

 >
 > Regards
 > Shailesh
 >
 >
 >
 >
 >
 > -- Forwarded message -
 >
 > From: Adrian Klaver
 > Date: Sat, Jun 13, 2020 at 10:43 AM
 > Subject: Re: Fwd: not able to give usage access to public schema
 > To: sekhar chandra
 > Cc: pgsql-gene...@postgresql.org

 >
 >
 >
 > On 6/13/20 10:33 AM, sekhar chandra wrote:
 >  > Adrian - when I follow the same steps what you did . in my case , the
 >  > result is false.
 >  >
 >  >
 >  > grant usage on schema public to role_test ;
 >  > GRANT
 >  >
 >  > SELECT rolname, has_schema_privilege('role_test', 'public', 'usage')
 >  > from pg_roles where rolname = 'role_test';
 >  >     rolname  | has_schema_privilege
 >  > ---+--
 >  >    role_test | f
 >
 > Postgres version and where are you running this e.g. cloud service?
 >
 > What does \dn+ public show before and after you redo commands as asked
 > below?
 >
 > Can you start over and provide complete sequence for above including
 > CREATE ROLE and what user you are doing the above as?
 >
 >
 >
 >  >
 >  > On Fri, Jun 12, 2020 at 10:11 PM Adrian Klaver wrote:
 >  >
 >  >     On 6/12/20 3:56 PM, sekhar chandra wrote:
 >  >      > I am not able to give usage permission to public schema. below
 >  >     are the
 >  >      > steps.
 >  >      >
 >  >      >
 >  >      > Logged in as super user
 >  >      > created a new user as user1
 >  >      > grant usage on public to user1
 >  >
 >  >     Either the above is a cut and paste error or you got an error:
 >  >
 >  >     grant usage on public to role_test ;
 >  >     ERROR:  relation "public" does not exist
 >  >
 >  >     grant usage on schema public to role_test ;
 >  >     GRANT
 >  >
 >  >     SELECT rolname, has_schema_privilege('role_test', 'public', 'usage')
 >  >     from pg_roles where rolname = 'role_test';
 >  >         rolname  | has_schema_privilege
 >  >     ---+--
 >  >        role_test | t
 >  >
 >  >
 >  >      >
 >  >      > command completed successfully, but the verification statement
 >  >      > shows he doesn't have usage permission.
 >  >      > SELECT rolname, has_schema_privilege(rolname, 'public', 'usage')
 >  >      > from pg_roles;
 >  >      >
 >  >      > this is strange. what could go wrong.
 >  >
 >  >
 >  >     --
 >  >     Adrian Klaver
 >  > adrian.kla...@aklaver.com

 >  >
 >
 >
 > --
 > Adrian Klaver
 > adrian.kla...@aklaver.com

 >

Re: Oracle vs. PostgreSQL - a comment

2020-06-13 Thread Bruce Momjian
On Sat, Jun 13, 2020 at 05:06:37PM -0500, Ron wrote:
> On 6/13/20 1:46 PM, Bruce Momjian wrote:
> > On Wed, Jun  3, 2020 at 08:53:45PM +0200, Andreas Joseph Krogh wrote:
> > > I agree these are all technical issues, but nevertheless - "implementation
> > > details", which DBAs don't care about. What's important from a DBA's
> > > perspective is not whether WAL is cluster-wide or database-wide, but 
> > > whether
> > > it's possible to manage backups/PITR/restores of individual databases in 
> > > a more
> > > convenient manner, which other RDBMS-vendors seem to provide.
> > > I love PG, have been using it professionally since 6.5, and our company 
> > > depends
> > > on it, but there are things other RDBMS-vendors do better...
> > The bigger issue is that while we _could_ do this, it would add more
> > problems and complexity, and ultimately, I think would make the
> > software less usable overall and would be a net-negative.  We know of no
> > way to do it without a ton of negatives.
> 
> How do other RDBMSs do it with ease?  (I know it's an architectural issue,
> but what's the architectural issue?)

I don't know.

-- 
  Bruce Momjian  https://momjian.us
  EnterpriseDB https://enterprisedb.com

  The usefulness of a cup is in its emptiness, Bruce Lee





Re: Fwd: not able to give usage access to public schema

2020-06-13 Thread Tom Lane
sekhar chandra  writes:
> Adrian - when I follow the same steps that you did, in my case the
> result is false.

> grant usage on schema public to role_test ;
> GRANT

> SELECT rolname, has_schema_privilege('role_test', 'public', 'usage')
> from pg_roles where rolname = 'role_test';
>rolname  | has_schema_privilege
> ---+--
>   role_test | f

This is verging on impossible to believe.  To start with, in a standard
installation rights on the public schema are granted to PUBLIC, so that
any role should *already* have usage privilege as soon as it's created.
Thus:

regression=# create user role_test;
CREATE ROLE
regression=# select has_schema_privilege('role_test', 'public', 'usage');
 has_schema_privilege 
--
 t
(1 row)

Even if you'd revoked that public grant, manually granting should
certainly have worked.  So my thoughts are running towards maybe
you have created a nonstandard version of has_schema_privilege()
that doesn't do what you think.

Anyway, I'd suggest removing some variables from the equation by
looking directly at the catalog:

postgres=# table pg_namespace;
  oid  |  nspname   | nspowner |   nspacl
---++--+-
...
  2200 | public |   10 | {postgres=UC/postgres,=UC/postgres}
...

That's what I get in a default installation.  If I manually GRANT, it
changes to

   2200 | public |   10 | {postgres=UC/postgres,=UC/postgres,role_test=U/postgres}

What do you see?

regards, tom lane




Re: Fwd: not able to give usage access to public schema

2020-06-13 Thread Adrian Klaver

On 6/13/20 7:15 PM, Tom Lane wrote:

sekhar chandra  writes:

Adrian - when I follow the same steps that you did, in my case the
result is false.



grant usage on schema public to role_test ;
GRANT



SELECT rolname, has_schema_privilege('role_test', 'public', 'usage')
from pg_roles where rolname = 'role_test';
rolname  | has_schema_privilege
---+--
   role_test | f


This is verging on impossible to believe.  To start with, in a standard


I'm wrestling with the same thing. Also in the file that Sekhar 
sent was the fact that this is happening in an AWS Aurora instance. I've gone 
over the Aurora docs and can't see anything that says the behavior is 
different. Still, it is another data point.



installation rights on the public schema are granted to PUBLIC, so that
any role should *already* have usage privilege as soon as it's created.
Thus:

regression=# create user role_test;
CREATE ROLE
regression=# select has_schema_privilege('role_test', 'public', 'usage');
  has_schema_privilege
--
  t
(1 row)

Even if you'd revoked that public grant, manually granting should
certainly have worked.  So my thoughts are running towards maybe
you have created a nonstandard version of has_schema_privilege()
that doesn't do what you think.

Anyway, I'd suggest removing some variables from the equation by
looking directly at the catalog:

postgres=# table pg_namespace;
   oid  |  nspname   | nspowner |   nspacl
---++--+-
...
   2200 | public |   10 | {postgres=UC/postgres,=UC/postgres}
...

That's what I get in a default installation.  If I manually GRANT, it
changes to

   2200 | public |   10 | {postgres=UC/postgres,=UC/postgres,role_test=U/postgres}

What do you see?

regards, tom lane




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: postgres issue

2020-06-13 Thread Shailesh Rangani
\dn+ public
                          List of schemas
  Name  |  Owner   |     Access privileges      |      Description
--------+----------+----------------------------+-------------------------
 public | rdsadmin | dailypay_root=UC/rdsadmin +| standard public schema
        |          | dailypay=U/rdsadmin        |



 

On Saturday, June 13, 2020, 05:12:32 PM CDT, Adrian Klaver 
 wrote:  
 
 On 6/13/20 2:52 PM, Shailesh Rangani wrote:
> + Community DL.
> 
> 
> 
> 
> 
> On Saturday, June 13, 2020, 04:39:01 PM CDT, Shailesh Rangani 
>  wrote:
> 
> 
> Hi Adrian,
> 
> Please find the attached sequence.

Please post output of command below as text.

What does \dn+ public show?

> 
> Regards
> Shailesh
> 
> 
> 
> 
> 
> -- Forwarded message -
> 
> From: Adrian Klaver
> Date: Sat, Jun 13, 2020 at 10:43 AM
> Subject: Re: Fwd: not able to give usage access to public schema
> To: sekhar chandra
> Cc: pgsql-gene...@postgresql.org
> 
> 
> 
> On 6/13/20 10:33 AM, sekhar chandra wrote:
>  > Adrian - when I follow the same steps that you did, in my case the
>  > result is false.
>  >
>  >
>  > grant usage on schema public to role_test ;
>  > GRANT
>  >
>  > SELECT rolname, has_schema_privilege('role_test', 'public', 'usage')
>  > from pg_roles where rolname = 'role_test';
>  >     rolname  | has_schema_privilege
>  > ---+--
>  >    role_test | f
> 
> Postgres version and where are you running this e.g. cloud service?
> 
> What does \dn+ public show before and after you redo commands as asked
> below?
> 
> Can you start over and provide complete sequence for above including
> CREATE ROLE and what user you are doing the above as?
> 
> 
> 
>  >
>  > On Fri, Jun 12, 2020 at 10:11 PM Adrian Klaver wrote:
>  >
>  >     On 6/12/20 3:56 PM, sekhar chandra wrote:
>  >      > I am not able to give usage permission to public schema. below
>  >     are the
>  >      > steps.
>  >      >
>  >      >
>  >      > Logged in as super user
>  >      > created a new user as user1
>  >      > grant usage on public to user1
>  >
>  >     Either the above is a cut and paste error or you got an error:
>  >
>  >     grant usage on public to role_test ;
>  >     ERROR:  relation "public" does not exist
>  >
>  >     grant usage on schema public to role_test ;
>  >     GRANT
>  >
>  >     SELECT rolname, has_schema_privilege('role_test', 'public', 'usage')
>  >     from pg_roles where rolname = 'role_test';
>  >         rolname  | has_schema_privilege
>  >     ---+--
>  >        role_test | t
>  >
>  >
>  >      >
>  >      > command completed successfully, but the verification statement
>  >      > shows he doesn't have usage permission.
>  >      > SELECT rolname, has_schema_privilege(rolname, 'public', 'usage')
>  >      > from pg_roles;
>  >      >
>  >      > this is strange. what could go wrong.
>  >
>  >
>  >     --
>  >     Adrian Klaver
>  > adrian.kla...@aklaver.com
>  >
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com 
> 


-- 
Adrian Klaver
adrian.kla...@aklaver.com
  

pg_service.conf and client support

2020-06-13 Thread Niels Jespersen
Hello all

I just found out about the pg service file. 
https://www.postgresql.org/docs/current/libpq-pgservice.html

I don't know why it took me so long to find this. I have been looking for ways 
to abstract physical details of data location away for clients (much like 
Oracle's tnsnames).

Want to move a cluster to a new host. Sure, move it, edit pg_service.conf and 
clients will not know the difference.

It works great for at least psql and psycopg2.
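For anyone else discovering it, the mechanics are small (the host and database
names below are made up):

cat > ~/.pg_service.conf <<'EOF'
[proddb]
host=db1.example.org
port=5432
dbname=prod
EOF

# any libpq-based client can then connect by service name
psql "service=proddb"
# or via the environment
PGSERVICE=proddb psql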

But I cannot find anything on pg_service.conf and the Postgres ODBC driver and 
Npgsql for .Net

I know pg_service.conf support is implemented through libpq and support for 
drivers not using libpq is not a given thing.

But I think the need for the abstraction of connection details is a general one.

Can anyone shed some light on the ubiquity of support for pg_service.conf?

Are there any other mechanisms with broader support, that can be used instead 
of pg_service.conf (if support is scarce beyond what builds on libpq)?

Thank you.

Regards Niels Jespersen


Re: pg_service.conf and client support

2020-06-13 Thread David G. Johnston
On Saturday, June 13, 2020, Niels Jespersen  wrote:

> Can anyone shed some light on the ubiquity of support for
> pg_service.conf?
>
AFAIK same non-support for JDBC and Node.js

> Are there any other mechanisms with broader support, that can be used
> instead of pg_service.conf (if support is scarce beyond what builds on
> libpq)?
>
Not that I am aware, though programmatically accessed libraries are
somewhat able to be adapted using custom wrappers so that the pg_service
file contents can be leveraged.

David J.