Re: Fixing set-returning functions are not allowed in UPDATE after upgrade to Postgres 12

2020-04-03 Thread David G. Johnston
On Thu, Apr 2, 2020 at 11:38 PM Andrus  wrote:

> Hi!
>
> >Simply replace
> >   SET col = unnest(array_value)
> >with
> >
> >   SET col = array_value[1]
>
> I tried
>
>

> update temprid set
> ContactFirstName = xpath(
> '/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName/text()',x)[1]::text
>
>
Parens around the function call expression are required per the syntax
documentation for array subscript access:

https://www.postgresql.org/docs/current/sql-expressions.html#SQL-EXPRESSIONS-SUBSCRIPTS


update temprid set
ContactFirstName = (xpath(
    '/E-Document/Document/DocumentParties/BuyerParty/ContactData/ContactFirstName/text()', x)
)[1]::text

David J.


Re: Improve COPY performance into table with indexes.

2020-04-03 Thread Tim Cross


James Brauman  writes:

> I am using the COPY command to insert 10 million rows from a CSV file
> into a database table and I am finding the performance is
> unacceptable. When the COPY command is executed the disk I/O is
> extremely high which leads to degraded query performance for other
> queries being executed on the database.
>
> I have tried removing the indexes from the database table and this
> dramatically improved performance (both reducing the execution time
> and reducing disk I/O).
>
> The constraints I am working with are:
>  -  Indexes must be in place to ensure that other queries that are
> executed on the table while the COPY command is executing have
> acceptable performance.
>  - The table should not be locked for reads during the COPY (it is
> fine to lock for writes).
>
> I don't know how COPY is implemented, but it seems that the indexes
> are updated as data is inserted into the table. Ideally I would like
> to delay updating the indexes until the COPY command has completed and
> just update the indexes a single time, I think this would lead to much
> better performance. Is this possible?
>
> Another idea that I had was to start a transaction, drop the indexes,
> run the COPY command and then recreate the indexes. The problem with
> this approach is that DROP INDEX acquires an exclusive lock on the
> table, which is not acceptable in this system (otherwise other queries
> won't be able to execute). I tried using DROP INDEX CONCURRENTLY, but
> that cannot be executed inside a transaction.
>
> What are my options in this scenario?
>
> Thanks

I don't think there is any way you can disable/drop indexes just for one
transaction without impacting other activities, such as queries. What
options you have depends on a number of factors. Some
questions/suggestions.

The main problem with any solution which relies on dropping indexes is
that you still have to pay that cost at some point. If you drop the
indexes, you then have to recreate them and do an analyze to update the
stats. Depending on the table, amount of data and number/type of
indexes, this can be very slow and also impact your queries. 

We use the copy command a lot - inserting approx 400M records per day
(from Javascript rather than CSV file). Our tables are large and our
queries need to be very fast. Initial implementations did not perform
particularly well. Below are some of the things we did that helped. Note
that not all of these things always helped - it really depends a lot
on the tables, types of queries, indexes and column/row definitions.

- You don't indicate what current performance is and what would be an
  acceptable level of performance. How long does it take to insert your
  10M records? What is the acceptable performance time for queries while
  the update is occurring?

- Not all indexes are equal. Have you tried dropping some rather than
  all of the indexes? Have you verified that all the indexes are being used
  by the queries that run during the update? Sometimes, dropping or
  modifying just one or a couple of the indexes can have a significant
  impact. You may be able to get a better balance where inserts are
  faster and there is only a small drop in performance for queries. It is very
  common for there to be indexes which are unnecessary or unused,
  especially if you have a number of developers. As queries are
  refined/changed, people often forget to remove indexes which are no
  longer needed. Verifying index use is something I find to be a
  valuable maintenance task (see the sample query after this list).

- Have you tried using a temporary staging table? We were able to get
  a significant performance improvement by copying into a temporary table
  and then having a database function insert the data from the
  temp table into the target table. While the overall time for the
  process was slightly longer, the actual amount of time spent updating
  the target table was significantly less, reducing the impact on other
  activity like queries. This can be especially useful if you also need
  to do some other data transformation or validation prior to inserting
  the data, and it gives some protection against bad CSV data resulting in
  failed copies and having to roll back on a core prod table.

- Do you have any triggers on the table? Personally, I don't like
  triggers - they are a side effect that often has performance
  implications, is easily overlooked, and is a frequent source of bugs. It is
  rare that there isn't an alternative solution which is easier to
  maintain (I know this is a 'touchy' topic and others will disagree).
  At any rate, if there are triggers, make sure to analyze them and
  understand their impact on the process.

- CSV quality. Make sure the CSV files are of good quality and you are not
  doing any implicit data conversion. Consider whether breaking up your
  CSV into smaller units might help (sometimes, multiple shorter
  performance impacts can be more acceptable than one long one).

- Make sure you have good logging enabled and 
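
As a rough starting point for the index-use check suggested above, a query
like this against the statistics views lists indexes that have never been
used by an index scan (counts are cumulative since the last stats reset):

SELECT schemaname, relname, indexrelname, idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0          -- never used by an index scan
ORDER BY pg_relation_size(indexrelid) DESC;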

Re: Improve COPY performance into table with indexes.

2020-04-03 Thread Laurenz Albe
On Fri, 2020-04-03 at 18:27 +1100, Tim Cross wrote:
> > I am using the COPY command to insert 10 million rows from a CSV file
> > into a database table and I am finding the performance is
> > unacceptable. When the COPY command is executed the disk I/O is
> > extremely high which leads to degraded query performance for other
> > queries being executed on the database.
> > 
> > What are my options in this scenario?
> 
> I don't think there is any way you can disable/drop indexes just for one
> transaction without impacting other activities, such as queries. What
> options you have depends on a number of factors. Some
> questions/suggestions.

These are all good suggestions.

Let me add that there is a trade-off between fast queries and fast data
modifications.  You may drop those indexes that are not absolutely required
and accept that some queries become slower.

Another way to save one index is to partition on one column and drop the
index on that column.  Then queries can use a sequential scan on one
partition rather than an index scan, which might still be acceptable.

Speaking of partitioning: perhaps it is an option to use a partitioned
table and load the new data into a new partition. Then you can add the
indexes after loading, and only then turn the new table into a table
partition, as sketched below.
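
A rough sketch of that approach, with made-up table and column names and
assuming the target is range-partitioned on a timestamp column:

-- Load into a standalone table first: the COPY pays no index
-- maintenance cost and generates no I/O against the live table.
CREATE TABLE measurements_2020_04 (LIKE measurements INCLUDING DEFAULTS);
COPY measurements_2020_04 FROM '/tmp/batch.csv' WITH (FORMAT csv);

-- Build indexes once, after the bulk load.
CREATE INDEX ON measurements_2020_04 (created_at);

-- Finally attach it as a partition of the target table.
ALTER TABLE measurements ATTACH PARTITION measurements_2020_04
    FOR VALUES FROM ('2020-04-01') TO ('2020-05-01');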

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Backing out of privilege grants rabbit hole

2020-04-03 Thread Laurenz Albe
On Fri, 2020-04-03 at 00:59 -0400, AC Gomez wrote:
> Granted. But we are where we are, so I'm assuming this is going to be hand to 
> hand combat.

Yes, unfortunately there is no better way.
But DROP ROLE will produce messages that help you along.
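
For what it's worth, DROP OWNED also revokes any privileges granted to the
role, so the usual cleanup sequence looks like this (role name illustrative;
run it in each database the role has privileges in):

REASSIGN OWNED BY temp_employee TO postgres;  -- move objects it owns, if any
DROP OWNED BY temp_employee;  -- drops remaining owned objects and revokes
                              -- all privileges granted to the role
DROP ROLE temp_employee;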

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: Cstore_fdw issue.

2020-04-03 Thread Moses Mafusire
Hi Adrian,
Thanks for your response. Here is what I'm getting when I run pg_config:

$ pg_config
BINDIR = /usr/bin
DOCDIR = /usr/share/doc/pgsql
HTMLDIR = /usr/share/doc/pgsql
INCLUDEDIR = /usr/include
PKGINCLUDEDIR = /usr/include/pgsql
INCLUDEDIR-SERVER = /usr/include/pgsql/server
LIBDIR = /usr/lib64
PKGLIBDIR = /usr/lib64/pgsql
LOCALEDIR = /usr/share/locale
MANDIR = /usr/share/man
SHAREDIR = /usr/share/pgsql
SYSCONFDIR = /etc
PGXS = /usr/lib64/pgsql/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu'
'--program-prefix=' '--disable-dependency-tracking' '--prefix=/usr'
'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin'
'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include'
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var'
'--sharedstatedir=/var/lib' '--mandir=/usr/share/man'
'--infodir=/usr/share/info' '--disable-rpath' '--with-ldap' '--with-openssl'
'--with-gssapi' '--enable-nls' '--without-readline'
'--datadir=/usr/share/pgsql' 'build_alias=x86_64-redhat-linux-gnu'
'host_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS
-fexceptions -fstack-protector-strong -grecord-gcc-switches
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection'
'LDFLAGS=-Wl,-z,relro -Wl,-z,now
-specs=/usr/lib/rpm/redhat/redhat-hardened-ld' 'CXXFLAGS=-O2 -g -pipe -Wall
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS
-fexceptions -fstack-protector-strong -grecord-gcc-switches
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection'
'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'
CC = gcc
CPPFLAGS = -D_GNU_SOURCE
CFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Werror=vla -Wendif-labels
-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv
-fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation
-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2
-Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong
-grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection
CFLAGS_SL = -fPIC
LDFLAGS = -Wl,-z,relro -Wl,-z,now
-specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,--as-needed
LDFLAGS_EX =
LDFLAGS_SL =
LIBS = -lpgcommon -lpgport -lpthread -lssl -lcrypto -lgssapi_krb5 -lz -lrt -lcrypt -ldl -lm
VERSION = PostgreSQL 12.1




Moses Mafusire
MCITP DBA, MCTS SQL Developer, MCTS Windows 7 Fundamentals,
Certified Internet CIW Foundations Associate, Project+, A+, Security+, Linux+,
ITIL

On Thursday, April 2, 2020, 06:27:31 PM EDT, Adrian Klaver 
 wrote:  
 
 On 4/2/20 1:40 PM, Moses Mafusire wrote:
> Hi,
> I am new to PostgreSQL, successfully installed PGSql v12.2 on my CentOS 
> 7 machine and I am trying to install cstore_fdw.
> 
>      1. I have managed to run this command;
> sudo yum install protobuf-c-devel
>
>      2. Per the instructions I am following;
> Once you have protobuf-c installed on your machine, you are 
> ready to build cstore_fdw. For this, you need to include the pg_config 
> directory path in your make command. 
>
>      3. I have searched for the pg_config directory and found 2 locations;
>  /usr/bin/pg_config
>  /usr/pgsql-12/bin/pg_config
>
>      4. However, when I try to run the makefile in cstore_fdw I get this error;
>          [postgres@dart-centos cstore_fdw]$ sudo PATH=/usr/bin/:$PATH make
>          [sudo] password for postgres: 
>          Makefile:50: *** PostgreSQL 9.3 to 12 is required to compile 
> this extension.  Stop.
> 
>      I even tried the other path, same error;
>          [postgres@dart-centos cstore_fdw]$ PATH=/usr/pgsql-12/bin/:$PATH make
>          Makefile:50: *** PostgreSQL 9.3 to 12 is required to compile 
> this extension.  Stop.
> 
>      5. PostgreSQL Version
>          postgres=# show server_version;
>           server_version
>          ----------------
>           12.2
>          (1 row)
> 
>          postgres=# \! psql -V
>          psql (PostgreSQL) 12.2
> 
> May you please assist me with this issue?

What do you get if you run:

pg_config

from the command line.

An example from my machine:

aklaver@maura:~> pg_config
BINDIR = /usr/local/pgsql12/bin
DOCDIR = /usr/local/pgsql12/share/doc
HTMLDIR = /usr/local/pgsql12/share/doc
INCLUDEDIR = /usr/local/pgsql12/include
PKGINCLUDEDIR = /usr/local/pgsql12/include
INCLUDEDIR-SERVER = /usr/local/pgsql12/include/server
LIBDIR = /usr/local/pgsql12/lib64
PKGLIBDIR = /usr/local/pgsql12/lib64
LOCAL

Re: Backing out of privilege grants rabbit hole

2020-04-03 Thread Rob Sargent



> On Apr 3, 2020, at 2:50 AM, Laurenz Albe  wrote:
> 
> On Fri, 2020-04-03 at 00:59 -0400, AC Gomez wrote:
>> Granted. But we are where we are, so I'm assuming this is going to be hand 
>> to hand combat.
> 
> Yes, unfortunately there is no better way.
> But DROP ROLE will produce messages that help you along.
> 
> Yours,
> Laurenz Albe
> -- 
> Cybertec | https://www.cybertec-postgresql.com
> 
> 
Perhaps there is a script which has all the grants for a new hire?
I trust login has been disabled?




Re: Database Cache Hit Ratio (Warning)

2020-04-03 Thread David G. Johnston
On Fri, Apr 3, 2020 at 5:17 AM Rajiv Ranjan  wrote:

> Does this unnamed monitoring tool (and undefined query) really think
> higher percentages are worse or are you mis-communicating?
>
> Forget about the tool used for monitoring; what is important is whether
> monitoring the "cache hit ratio" is worthwhile, or whether we can ignore it.
>

The query at least would help, since it seems to be measuring something
other than "cache hit ratio".

I'm not in a position to judge whether monitoring "cache hit ratio" is
something you should be doing in your situation.  It provides information
and is fairly cheap to capture and store.  From the sounds of it you should
probably continue capturing the data but turn off the alert.  That way if
there is a problem the data still exists to be considered.  The metric
itself does not measure something of critical importance.
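
For reference, a common way to compute a database-level cache hit ratio
from the statistics views (whether this matches the tool's metric is
unknown; counts are cumulative since the last stats reset):

SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2)
         AS cache_hit_pct
FROM pg_stat_database
WHERE datname = current_database();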

David J.


Re: Cstore_fdw issue.

2020-04-03 Thread Adrian Klaver

On 4/3/20 5:53 AM, Moses Mafusire wrote:

Hi Adrian,
Thanks for your response. Here is what I'm getting when I run pg_config:



VERSION = PostgreSQL 12.1


So that is the correct version.

Just to be clear there is only one version of Postgres on this machine, 
correct?


Before you run make again, edit the Makefile so:

ifndef MAJORVERSION
MAJORVERSION := $(basename $(VERSION))
endif

becomes:

ifndef MAJORVERSION
MAJORVERSION := $(basename $(VERSION))
endif
$(info  MAJORVERSION is $(MAJORVERSION))


When I run the above I get:

aklaver@maura:~/test/cstore_fdw-1.7.0> PATH=/usr/local/pgsql/bin/:$PATH make

MAJORVERSION is 12

...

and the code compiles.



Moses Mafusire




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Improve COPY performance into table with indexes.

2020-04-03 Thread Adrian Klaver

On 4/2/20 9:42 PM, James Brauman wrote:

I am using the COPY command to insert 10 million rows from a CSV file
into a database table and I am finding the performance is
unacceptable. When the COPY command is executed the disk I/O is
extremely high which leads to degraded query performance for other
queries being executed on the database.

I have tried removing the indexes from the database table and this
dramatically improved performance (both reducing the execution time
and reducing disk I/O).


Is there the option of using tablespaces to move the index(es) to another
disk?:


https://www.postgresql.org/docs/12/sql-alterindex.html

"
SET TABLESPACE

This form changes the index's tablespace to the specified 
tablespace and moves the data file(s) associated with the index to the 
new tablespace.  ..."
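
A minimal example of that form, with made-up tablespace and index names
(note the move takes an exclusive lock on the index while its files are
copied):

-- Requires superuser and an existing, empty directory on the other disk.
CREATE TABLESPACE fast_disk LOCATION '/mnt/otherdisk/pgdata';
ALTER INDEX my_table_created_at_idx SET TABLESPACE fast_disk;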





The constraints I am working with are:
  -  Indexes must be in place to ensure that other queries that are
executed on the table while the COPY command is executing have
acceptable performance.
  - The table should not be locked for reads during the COPY (it is
fine to lock for writes).

I don't know how COPY is implemented, but it seems that the indexes
are updated as data is inserted into the table. Ideally I would like
to delay updating the indexes until the COPY command has completed and
just update the indexes a single time, I think this would lead to much
better performance. Is this possible?

Another idea that I had was to start a transaction, drop the indexes,
run the COPY command and then recreate the indexes. The problem with
this approach is that DROP INDEX acquires an exclusive lock on the
table, which is not acceptable in this system (otherwise other queries
won't be able to execute). I tried using DROP INDEX CONCURRENTLY, but
that cannot be executed inside a transaction.

What are my options in this scenario?

Thanks





--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Improve COPY performance into table with indexes.

2020-04-03 Thread Rob Sargent




On 4/3/20 9:52 AM, Adrian Klaver wrote:

On 4/2/20 9:42 PM, James Brauman wrote:

I am using the COPY command to insert 10 million rows from a CSV file
into a database table and I am finding the performance is
unacceptable. When the COPY command is executed the disk I/O is
extremely high which leads to degraded query performance for other
queries being executed on the database.

I have tried removing the indexes from the database table and this
dramatically improved performance (both reducing the execution time
and reducing disk I/O).


I've had excellent results dealing with similar data quantities using the 
earlier suggestion of writing to a staging table.  Simply splitting the 
insert statements so that each covered 1/16[1] of the data turned an 
untenable process into a real-time one.


[1] I split on an id column which is of UUID type.  Any arbitrary 
splitting that reduces the footprint sufficiently will do.
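
A minimal sketch of that staging approach, with illustrative names:

-- COPY lands in an unindexed temp table, so it causes no index churn
-- on the live target table.
CREATE TEMP TABLE staging (LIKE target_table INCLUDING DEFAULTS);
COPY staging FROM '/tmp/batch.csv' WITH (FORMAT csv);

-- Then move rows across in smaller slices so index maintenance on the
-- target happens in shorter bursts; the split column is arbitrary.
INSERT INTO target_table SELECT * FROM staging WHERE some_id % 16 = 0;
-- ... repeat for the remaining slices.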






Re: Cstore_fdw issue.

2020-04-03 Thread Moses Mafusire
Hi Adrian,
Yes, VERSION = PostgreSQL 12.1 is the only version I have. I am a
novice to Linux and Postgres; where should I run this

ifndef MAJORVERSION
    MAJORVERSION := $(basename $(VERSION))
endif
$(info  MAJORVERSION is $(MAJORVERSION))

and do I run it as is, or do I need to do something else? If so, may you
please assist.
Thanks
Moses
On Friday, April 3, 2020, 11:39:57 AM EDT, Adrian Klaver 
 wrote:  
 
 On 4/3/20 5:53 AM, Moses Mafusire wrote:
> Hi Adrian,
> Thanks for your response. Here is what I'm getting when I run pg_config:

> VERSION = PostgreSQL 12.1

So that is the correct version.

Just to be clear there is only one version of Postgres on this machine, 
correct?

Before you run make again, edit the Makefile so:

ifndef MAJORVERSION
    MAJORVERSION := $(basename $(VERSION))
endif

becomes:

ifndef MAJORVERSION
    MAJORVERSION := $(basename $(VERSION))
endif
$(info  MAJORVERSION is $(MAJORVERSION))


When I run the above I get:

aklaver@maura:~/test/cstore_fdw-1.7.0> PATH=/usr/local/pgsql/bin/:$PATH 
make
MAJORVERSION is 12

...

and the code compiles.


> Moses Mafusire



-- 
Adrian Klaver
adrian.kla...@aklaver.com
  

Re: Out of memory in big transactions after upgrade to 12.2

2020-04-03 Thread Michael Lewis
If you didn't turn it off, you have parallel workers on by default with
v12. If work_mem is set high, memory use may be much higher as each node in
a complex plan could end up executing in parallel.

Also, do you use a connection pooler such as pgbouncer or pgpool? What is
max_connections set to?
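
A quick way to inspect the settings in question (names only; suitable
values depend entirely on the workload):

SELECT name, setting
FROM pg_settings
WHERE name IN ('work_mem', 'max_connections',
               'max_parallel_workers_per_gather', 'max_parallel_workers');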



Re: Database Cache Hit Ratio (Warning)

2020-04-03 Thread Rajiv Ranjan
Does this unnamed monitoring tool (and undefined query) really think higher
percentages are worse or are you mis-communicating?

Forget about the tool used for monitoring; what is important is whether
monitoring the "cache hit ratio" is worthwhile, or whether we can ignore it.


On Wed, 1 Apr 2020 at 10:44, David G. Johnston 
wrote:

> On Tuesday, March 31, 2020, Rajiv Ranjan  wrote:
>
>> Hi,
>>
>> Received a Database cache hit ratio warning alert from one of the
>> monitoring tools, the threshold for the “database cache hit ratio %” is
>> 90% for a High and 95% for Critical
>>
>
> Does this unnamed monitoring tool (and undefined query) really think
> higher percentages are worse or are you mis-communicating?
>
> David J.
>


-- 
Thanks,
Rajiv Ranjan


Re: Cstore_fdw issue.

2020-04-03 Thread Adrian Klaver

On 4/3/20 9:26 AM, Moses Mafusire wrote:

Hi Adrian,
Yes, VERSION = PostgreSQL 12.1 is the only version I have. I am a
novice to Linux and Postgres; where should I run this

ifndef MAJORVERSION
     MAJORVERSION := $(basename $(VERSION))
endif
$(info  MAJORVERSION is $(MAJORVERSION))

and do I run it as is, or do I need to do something else? If so, may you
please assist.


No, you can't run the above by itself. In the directory where you uncompressed 
the files there will be a file named Makefile. In my case:


aklaver@maura:~/test/cstore_fdw-1.7.0> ls -al

...
-rw-r--r--  1 aklaver users  1814 Apr  3 08:37 Makefile
...

Use a text editor to open that file and add the line:

$(info  MAJORVERSION is $(MAJORVERSION))

Under the section:


ifndef MAJORVERSION
MAJORVERSION := $(basename $(VERSION))
endif

and save.

Then run:

PATH=/usr/pgsql-12/bin/:$PATH make




Thanks
Moses

On Friday, April 3, 2020, 11:39:57 AM EDT, Adrian Klaver 
 wrote:



On 4/3/20 5:53 AM, Moses Mafusire wrote:
 > Hi Adrian,
 > Thanks for your response. Here is what I'm getting when I run 
pg_config:


 > VERSION = PostgreSQL 12.1

So that is the correct version.

Just to be clear there is only one version of Postgres on this machine,
correct?

Before you run make again, edit the Makefile so:

ifndef MAJORVERSION
     MAJORVERSION := $(basename $(VERSION))
endif

becomes:

ifndef MAJORVERSION
     MAJORVERSION := $(basename $(VERSION))
endif
$(info  MAJORVERSION is $(MAJORVERSION))


When I run the above I get:

aklaver@maura:~/test/cstore_fdw-1.7.0> PATH=/usr/local/pgsql/bin/:$PATH make
MAJORVERSION is 12

...

and the code compiles.



 > Moses Mafusire



--
Adrian Klaver
adrian.kla...@aklaver.com 



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Backing out of privilege grants rabbit hole

2020-04-03 Thread Adrian Klaver

On 4/2/20 9:59 PM, AC Gomez wrote:
Granted. But we are where we are, so I'm assuming this is going to be 
hand to hand combat.


Well, you could even the odds somewhat by using the below as a starting 
point:



SELECT
relname,
pg_roles.rolname,
acl.*
FROM
pg_class,
aclexplode(relacl) AS acl
JOIN pg_roles ON acl.grantee = pg_roles.oid
WHERE
pg_roles.oid = 'some_role'::regrole;






On Fri, Apr 3, 2020, 12:57 AM raf <r...@raf.org> wrote:

It's probably more sensible to grant permissions to roles that
represent groups, and have roles for individual users that
inherit the permissions of the group roles. Then you don't
need to revoke the permissions just because an individual
has left.

cheers,
raf

AC Gomez wrote:

 > Thanks for the quick response. The problem is, in most cases the owner is
 > not the grantee. So if a role, let's say a temp employee, gets grants, then
 > leaves, I can't do a drop owned because that temp never owned those
 > objects, he just was granted access. Is there a "drop granted" kind of
 > thing?
 >
 > On Thu, Apr 2, 2020, 11:37 PM Guyren Howe <guy...@gmail.com> wrote:
 >
 > > https://www.postgresql.org/docs/12/sql-drop-owned.html
 > >
 > > On Apr 2, 2020, at 20:34, AC Gomez <ant...@gmail.com> wrote:
 > >
 > > Do I understand correctly that if a role was assigned countless object
 > > privileges and you want to delete that role you have to sift through a
 > > myriad of privilege grants in what amounts to a time-consuming trial and
 > > error exercise until you've got them all?
 > >
 > > Or is there a single command that will just delete the role and do a
 > > blanket grant removal at the same time?




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Backing out of privilege grants rabbit hole

2020-04-03 Thread Adrian Klaver

On 4/3/20 10:18 AM, Adrian Klaver wrote:

On 4/2/20 9:59 PM, AC Gomez wrote:
Granted. But we are where we are, so I'm assuming this is going to be 
hand to hand combat.


Well, you could even the odds somewhat by using the below as a starting 
point:



SELECT
     relname,
     pg_roles.rolname,
     acl.*
FROM
     pg_class,
     aclexplode(relacl) AS acl
     JOIN pg_roles ON acl.grantee = pg_roles.oid
WHERE
     pg_roles.oid = 'some_role'::regrole;



Possible flaw in the plan: the above only works with v12+.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Backing out of privilege grants rabbit hole

2020-04-03 Thread AC Gomez
Yeah, I'm on 9.5, but thanks for the info.
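
On 9.5, information_schema can provide a similar starting point for table
privileges (the view only shows grants visible to the current role, so run
it as a superuser or as the grantor):

SELECT table_schema, table_name, privilege_type
FROM information_schema.table_privileges
WHERE grantee = 'some_role';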

On Fri, Apr 3, 2020 at 1:24 PM Adrian Klaver 
wrote:

> On 4/3/20 10:18 AM, Adrian Klaver wrote:
> > On 4/2/20 9:59 PM, AC Gomez wrote:
> >> Granted. But we are where we are, so I'm assuming this is going to be
> >> hand to hand combat.
> >
> > Well you could even the odds somewhat by using the below as a starting
> > point:
> >
> >
> > SELECT
> >  relname,
> >  pg_roles.rolname,
> >  acl.*
> > FROM
> >  pg_class,
> >  aclexplode(relacl) AS acl
> >  JOIN pg_roles ON acl.grantee = pg_roles.oid
> > WHERE
> >  pg_roles.oid = 'some_role'::regrole;
> >
>
> Possible flaw in the plan: the above only works with v12+.
>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: Cstore_fdw issue.

2020-04-03 Thread Moses Mafusire
Adrian,
I really appreciate your help. I was able to open the file in a text
editor in CentOS, but it's read-only. I cannot even replace the file. Any
tricks up your sleeve for me to edit this file?
Thanks
Moses
On Friday, April 3, 2020, 12:40:19 PM EDT, Adrian Klaver 
 wrote:  
 
 On 4/3/20 9:26 AM, Moses Mafusire wrote:
> Hi Adrian,
> Yes, VERSION = PostgreSQL 12.1 is the only version I have. I am a
> novice to Linux and Postgres; where should I run this
> ifndef MAJORVERSION
>      MAJORVERSION := $(basename $(VERSION))
> endif
> $(info  MAJORVERSION is $(MAJORVERSION))
> 
> and do I run it as is, or do I need to do something else? If so, may you
> please assist.

No you can't run the above by itself. In the directory you uncompressed 
the files there will be a file named Makefile. In my case:

aklaver@maura:~/test/cstore_fdw-1.7.0> ls -al

...
-rw-r--r--  1 aklaver users  1814 Apr  3 08:37 Makefile
...

Use a text editor to open that file and add the line:

$(info  MAJORVERSION is $(MAJORVERSION))

Under the section:


ifndef MAJORVERSION
    MAJORVERSION := $(basename $(VERSION))
endif

and save.

Then run:

PATH=/usr/pgsql-12/bin/:$PATH make


> 
> Thanks
> Moses
> 
> On Friday, April 3, 2020, 11:39:57 AM EDT, Adrian Klaver 
>  wrote:
> 
> 
> On 4/3/20 5:53 AM, Moses Mafusire wrote:
>  > Hi Adrian,
>  > Thanks for your response. Here is what I'm getting when I run 
> pg_config:
> 
>  > VERSION = PostgreSQL 12.1
> 
> So that is the correct version.
> 
> Just to be clear there is only one version of Postgres on this machine,
> correct?
> 
> Before you run make again, edit the Makefile so:
> 
> ifndef MAJORVERSION
>      MAJORVERSION := $(basename $(VERSION))
> endif
> 
> becomes:
> 
> ifndef MAJORVERSION
>      MAJORVERSION := $(basename $(VERSION))
> endif
> $(info  MAJORVERSION is $(MAJORVERSION))
> 
> 
> When I run the above I get:
> 
> aklaver@maura:~/test/cstore_fdw-1.7.0> PATH=/usr/local/pgsql/bin/:$PATH make
> MAJORVERSION is 12
> 
> ...
> 
> and the code compiles.
> 
> 
> 
>  > Moses Mafusire
> 
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com 


-- 
Adrian Klaver
adrian.kla...@aklaver.com


  

Re: Cstore_fdw issue.

2020-04-03 Thread Adrian Klaver

On 4/3/20 10:48 AM, Moses Mafusire wrote:

Adrian,
I really appreciate your help. I was able to open the file in a text 
editor in CentOS, but it's read-only. I cannot even replace the file. 
Any tricks up your sleeve for me to edit this file?


Looks like a permissions issue.

In your first post you had:

[postgres@dart-centos cstore_fdw]$ PATH=/usr/pgsql-12/bin/:$PATH make

From above I am assuming you unpacked the cstore source as the postgres 
user into the cstore_fdw directory. Is that correct?


If that is the case then you will need to open the Makefile as the 
postgres user in order to edit it.


While you are in:

[postgres@dart-centos cstore_fdw]$

run:

/usr/bin/pg_config

and

/usr/pgsql-12/bin/pg_config

and confirm they are actually returning information.



Thanks

Moses



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Cstore_fdw issue.

2020-04-03 Thread Moses Mafusire
I actually started the installation process as my own user. I was logged in
as user postgres because, after trying and getting that version error message,
I thought that as user postgres I would be able to install (but that failed).
I get the same results either way;
[username@dart-centos ~]$ /usr/bin/pg_config
BINDIR = /usr/bin
DOCDIR = /usr/share/doc/pgsql
HTMLDIR = /usr/share/doc/pgsql
INCLUDEDIR = /usr/include
PKGINCLUDEDIR = /usr/include/pgsql
INCLUDEDIR-SERVER = /usr/include/pgsql/server
LIBDIR = /usr/lib64
PKGLIBDIR = /usr/lib64/pgsql
LOCALEDIR = /usr/share/locale
MANDIR = /usr/share/man
SHAREDIR = /usr/share/pgsql
SYSCONFDIR = /etc
PGXS = /usr/lib64/pgsql/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--build=x86_64-redhat-linux-gnu' '--host=x86_64-redhat-linux-gnu'
'--program-prefix=' '--disable-dependency-tracking' '--prefix=/usr'
'--exec-prefix=/usr' '--bindir=/usr/bin' '--sbindir=/usr/sbin'
'--sysconfdir=/etc' '--datadir=/usr/share' '--includedir=/usr/include'
'--libdir=/usr/lib64' '--libexecdir=/usr/libexec' '--localstatedir=/var'
'--sharedstatedir=/var/lib' '--mandir=/usr/share/man'
'--infodir=/usr/share/info' '--disable-rpath' '--with-ldap' '--with-openssl'
'--with-gssapi' '--enable-nls' '--without-readline'
'--datadir=/usr/share/pgsql' 'build_alias=x86_64-redhat-linux-gnu'
'host_alias=x86_64-redhat-linux-gnu' 'CFLAGS=-O2 -g -pipe -Wall
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS
-fexceptions -fstack-protector-strong -grecord-gcc-switches
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection'
'LDFLAGS=-Wl,-z,relro -Wl,-z,now
-specs=/usr/lib/rpm/redhat/redhat-hardened-ld' 'CXXFLAGS=-O2 -g -pipe -Wall
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS
-fexceptions -fstack-protector-strong -grecord-gcc-switches
-specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection'
'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'
CC = gcc
CPPFLAGS = -D_GNU_SOURCE
CFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Werror=vla -Wendif-labels
-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv
-fexcess-precision=standard -Wno-format-truncation -Wno-stringop-truncation
-O2 -g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2
-Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong
-grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection
CFLAGS_SL = -fPIC
LDFLAGS = -Wl,-z,relro -Wl,-z,now
-specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,--as-needed
LDFLAGS_EX =
LDFLAGS_SL =
LIBS = -lpgcommon -lpgport -lpthread -lssl -lcrypto -lgssapi_krb5 -lz -lrt -lcrypt -ldl -lm
VERSION = PostgreSQL 12.1



[username@dart-centos ~]$ /usr/pgsql-12/bin/pg_config
BINDIR = /usr/pgsql-12/bin
DOCDIR = /usr/pgsql-12/doc
HTMLDIR = /usr/pgsql-12/doc/html
INCLUDEDIR = /usr/pgsql-12/include
PKGINCLUDEDIR = /usr/pgsql-12/include
INCLUDEDIR-SERVER = /usr/pgsql-12/include/server
LIBDIR = /usr/pgsql-12/lib
PKGLIBDIR = /usr/pgsql-12/lib
LOCALEDIR = /usr/pgsql-12/share/locale
MANDIR = /usr/pgsql-12/share/man
SHAREDIR = /usr/pgsql-12/share
SYSCONFDIR = /etc/sysconfig/pgsql
PGXS = /usr/pgsql-12/lib/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--enable-rpath' '--prefix=/usr/pgsql-12'
'--includedir=/usr/pgsql-12/include' '--libdir=/usr/pgsql-12/lib'
'--mandir=/usr/pgsql-12/share/man' '--datadir=/usr/pgsql-12/share'
'--with-icu' '--with-llvm' '--with-perl' '--with-python' '--with-tcl'
'--with-tclconfig=/usr/lib64' '--with-openssl' '--with-pam' '--with-gssapi'
'--with-includes=/usr/include' '--with-libraries=/usr/lib64' '--enable-nls'
'--enable-dtrace' '--with-uuid=e2fs' '--with-libxml' '--with-libxslt'
'--with-ldap' '--with-selinux' '--with-systemd'
'--with-system-tzdata=/usr/share/zoneinfo' '--sysconfdir=/etc/sysconfig/pgsql'
'--docdir=/usr/pgsql-12/doc' '--htmldir=/usr/pgsql-12/doc/html' 'CFLAGS=-O2
-g -pipe -Wall -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2
-Wp,-D_GLIBCXX_ASSERTIONS -fexceptions -fstack-protector-strong
-grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
-specs=/usr/lib/rpm/redhat/redhat-annobin-cc1 -m64 -mtune=generic
-fasynchronous-unwind-tables -fstack-clash-protection -fcf-protection'
'LDFLAGS=-Wl,--as-needed'
'PKG_CONFIG_PATH=:/usr/lib64/pkgconfig:/usr/share/pkgconfig'
'PYTHON=/usr/bin/python2'
CC = gcc
CPPFLAGS = -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include
CFLAGS = -Wall -Wmissing-prototypes -Wpointer-arith
-Wdeclaration-after-statement -Werror=vla -Wendif-labels
-Wmissing-format-attribute -Wformat-security -fno-strict-aliasing -fwrapv
-fexcess-precision=standard -Wno-

Re: Cstore_fdw issue.

2020-04-03 Thread Adrian Klaver

On 4/3/20 11:52 AM, Moses Mafusire wrote:
I actually started the installation process as my own user. I was logged in
as user postgres because, after trying and getting that version error
message, I thought that as user postgres I would be able to install (but
that failed). I get the same results either way;


Yes, but you were doing the compile as the postgres user. That is where 
the error was being shown. In order to troubleshoot this you need to 
stick to a consistent set of operations. Jumping around as different users 
does not count :)


So:

1) Pick a user.

2) Unpack the source as that user in the users home directory. If not 
already done.


3) In the unpacked source directory run:

/usr/bin/pg_config

and

/usr/pgsql-12/bin/pg_config

4) Then:

PATH=/usr/bin/:$PATH make

Preferably with the modification to the Makefile I suggested upstream.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Cstore_fdw issue.

2020-04-03 Thread Adrian Klaver

On 4/3/20 11:52 AM, Moses Mafusire wrote:
I actually started the installation process as my own user. I was logged in
as user postgres because, after trying and getting that version error
message, I thought that as user postgres I would be able to install (but
that failed). I get the same results either way;


It dawned on me that I may be guilty of assuming. Have you installed the 
CentOS development packages?:


https://www.scalescale.com/tips/nginx/install-gcc-development-tools-centos-7/



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: EINTR while resizing dsm segment.

2020-04-03 Thread Thomas Munro
On Thu, Apr 2, 2020 at 9:25 PM Kyotaro Horiguchi
 wrote:
> I provided the subject, and added -hackers.
>
> > Hello,
> > I am running postgres 11.5 and we were having issues with shared segments.
> > So I increased the max_connection as suggested by you guys and reduced my
> > work_mem to 600M.
> >
> > Right now instead, it is the second time I see this error :
> >
> > ERROR:  could not resize shared memory segment "/PostgreSQL.2137675995" to
> > 33624064 bytes: Interrupted system call
>
> The function posix_fallocate is protected against EINTR.
>
> | do
> | {
> |   rc = posix_fallocate(fd, 0, size);
> | } while (rc == EINTR && !(ProcDiePending || QueryCancelPending));
>
> But not for ftruncate and write. Don't we need to protect them from
> EINTR, as in the attached?

We don't handle EINTR for write() generally because that's not
supposed to be necessary on local files (local disks are not "slow
devices", and we document that if you're using something like NFS you
should use its "hard" mount option so that it behaves that way too).
As for ftruncate(), you'd think it'd be similar, and I can't think of
a more local filesystem than tmpfs (where POSIX shmem lives on Linux),
but I can't seem to figure that out from reading man pages; maybe I'm
reading the wrong ones.  Perhaps in low memory situations, an I/O wait
path reached by ftruncate() can return EINTR here rather than entering
D state (uninterruptible sleep) or restarting due to our SA_RESTART
flag... anyone know?

Another thought: is there some way for the posix_fallocate() retry
loop to exit because (ProcDiePending || QueryCancelPending), but then
for CHECK_FOR_INTERRUPTS() to do nothing, so that we fall through to
reporting the EINTR?