pg_upgrade : 9.X to 11.X issue CentOS 7.6

2019-07-23 Thread Perumal Raj
Hi Team,

Please give me some pointers to resolve this issue.

-bash-4.2$ export OLDCLUSTER=/usr/pgsql-9.2
-bash-4.2$ export NEWCLUSTER=/usr/pgsql-11

-bash-4.2$ /usr/pgsql-11/bin/pg_upgrade --old-bindir=$OLDCLUSTER/bin
--new-bindir=$NEWCLUSTER/bin --old-datadir=/data/db/data
--new-datadir=/pgdata/11/data -p 5432 -P 5433 --check
Performing Consistency Checks
-----------------------------
Checking cluster versions   ok

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
the probable cause of the failure.

connection to database failed: could not connect to server: No such file or
directory
Is the server running locally and accepting
connections on Unix domain socket "/var/lib/pgsql/.s.PGSQL.5432"?

could not connect to source postmaster started with the command:
"/usr/pgsql-9.2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D
"/data/db/data" -o "-p 5432 -b  -c listen_addresses='' -c
unix_socket_permissions=0700 -c unix_socket_directory='/var/lib/pgsql'"
start
Failure, exiting
-bash-4.2$


+++

-bash-4.2$ /usr/pgsql-9.2/bin/pg_ctl -D "/data/db/data" start -l logfile
server starting
-bash-4.2$

+++
-bash-4.2$ /usr/pgsql-11/bin/pg_upgrade --old-bindir=$OLDCLUSTER/bin
--new-bindir=$NEWCLUSTER/bin --old-datadir=/data/db/data
--new-datadir=/pgdata/11/data -p 5432 -P 5433  --check
Performing Consistency Checks on Old Live Server
------------------------------------------------
Checking cluster versions   ok
Checking database user is the install user  ok
Checking database connection settings   ok
Checking for prepared transactions  ok
Checking for reg* data types in user tables ok
Checking for contrib/isn with bigint-passing mismatch   ok
Checking for invalid "unknown" user columns ok
Checking for hash indexes   ok
Checking for roles starting with "pg_"  ok
Checking for incompatible "line" data type  ok
Checking for presence of required libraries ok
Checking database user is the install user  ok
Checking for prepared transactions  ok

*Clusters are compatible*
-bash-4.2$

+++

-bash-4.2$ /usr/pgsql-11/bin/pg_upgrade --old-bindir=$OLDCLUSTER/bin
--new-bindir=$NEWCLUSTER/bin --old-datadir=/data/db/data
--new-datadir=/pgdata/11/data -p 5432 -P 5433

There seems to be a postmaster servicing the old cluster.
Please shutdown that postmaster and try again.
Failure, exiting
-bash-4.2$

-bash-4.2$ /usr/pgsql-9.2/bin/pg_ctl -D "/data/db/data" stop -l logfile
waiting for server to shut down.... done
server stopped
-bash-4.2$


-bash-4.2$ /usr/pgsql-11/bin/pg_upgrade --old-bindir=$OLDCLUSTER/bin
--new-bindir=$NEWCLUSTER/bin --old-datadir=/data/db/data
--new-datadir=/pgdata/11/data -p 5432 -P 5433
Performing Consistency Checks
-----------------------------
Checking cluster versions   ok

*failure*
Consult the last few lines of "pg_upgrade_server.log" for
the probable cause of the failure.

connection to database failed: could not connect to server: No such file or
directory
Is the server running locally and accepting
connections on Unix domain socket "/var/lib/pgsql/.s.PGSQL.5432"?

could not connect to source postmaster started with the command:
"/usr/pgsql-9.2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D
"/data/db/data" -o "-p 5432 -b  -c listen_addresses='' -c
unix_socket_permissions=0700 -c unix_socket_directory='/var/lib/pgsql'"
start
Failure, exiting
-bash-4.2$


Thanks,


Re: pg_upgrade : 9.X to 11.X issue CentOS 7.6

2019-07-23 Thread Luca Ferrari
On Tue, Jul 23, 2019 at 3:56 PM Perumal Raj  wrote:
> could not connect to source postmaster started with the command:
> "/usr/pgsql-9.2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D "/data/db/data" 
> -o "-p 5432 -b  -c listen_addresses='' -c unix_socket_permissions=0700 -c 
> unix_socket_directory='/var/lib/pgsql'" start
> Failure, exiting

Is the /var/lib/pgsql directory on the system?
Can you start the instance with the above command?


Luca




Re: pg_upgrade : 9.X to 11.X issue CentOS 7.6

2019-07-23 Thread Perumal Raj
Hi Luca

-bash-4.2$ "/usr/pgsql-9.2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D
"/data/db/data" -o "-p 5432 -b  -c listen_addresses='' -c
unix_socket_permissions=0700 -c unix_socket_directory='/var/lib/pgsql'"
start
waiting for server to start.... stopped waiting
pg_ctl: could not start server
Examine the log output.
-bash-4.2$

-bash-4.2$ pwd
/var/lib/pgsql
-bash-4.2$

upgrade log:


command: "/usr/pgsql-9.2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D
"/data/db/data" -o "-p 5432 -b  -c listen_addresses='' -c
unix_socket_permissions=0700 -c unix_socket_directory='/var/lib/pgsql'"
start >> "pg_upgrade_server.log" 2>&1
waiting for server to start....FATAL:  unrecognized configuration parameter "unix_socket_directory"
 stopped waiting
pg_ctl: could not start server
Examine the log output.


On Tue, Jul 23, 2019 at 7:15 AM Luca Ferrari  wrote:

> On Tue, Jul 23, 2019 at 3:56 PM Perumal Raj  wrote:
> > could not connect to source postmaster started with the command:
> > "/usr/pgsql-9.2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D
> "/data/db/data" -o "-p 5432 -b  -c listen_addresses='' -c
> unix_socket_permissions=0700 -c unix_socket_directory='/var/lib/pgsql'"
> start
> > Failure, exiting
>
> Is the /var/lib/pgsql directory on the system?
> Can you start the instance with the above command?
>
>
> Luca
>


Re: pg_upgrade : 9.X to 11.X issue CentOS 7.6

2019-07-23 Thread Adrian Klaver

On 7/23/19 7:17 AM, Perumal Raj wrote:

Hi Luca

-bash-4.2$ "/usr/pgsql-9.2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D 
"/data/db/data" -o "-p 5432 -b  -c listen_addresses='' -c 
unix_socket_permissions=0700 -c unix_socket_directory='/var/lib/pgsql'" 
start

waiting for server to start.... stopped waiting
pg_ctl: could not start server
Examine the log output.
-bash-4.2$

-bash-4.2$ pwd
/var/lib/pgsql
-bash-4.2$

upgrade log:


command: "/usr/pgsql-9.2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D 
"/data/db/data" -o "-p 5432 -b  -c listen_addresses='' -c 
unix_socket_permissions=0700 -c unix_socket_directory='/var/lib/pgsql'" 
start >> "pg_upgrade_server.log" 2>&1
waiting for server to start....FATAL:  unrecognized configuration parameter "unix_socket_directory"


Well unix_socket_directory changed to  unix_socket_directories in 9.3.

Are you sure that /usr/pgsql-9.2/bin/pg_ctl is really pointing to a 9.2 
binary?




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pg_upgrade : 9.X to 11.X issue CentOS 7.6

2019-07-23 Thread Tom Lane
Adrian Klaver  writes:
> On 7/23/19 7:17 AM, Perumal Raj wrote:
>> command: "/usr/pgsql-9.2/bin/pg_ctl" -w -l "pg_upgrade_server.log" -D 
>> "/data/db/data" -o "-p 5432 -b  -c listen_addresses='' -c 
>> unix_socket_permissions=0700 -c unix_socket_directory='/var/lib/pgsql'" 
>> start >> "pg_upgrade_server.log" 2>&1
>> waiting for server to start....FATAL:  unrecognized configuration parameter "unix_socket_directory"

> Well unix_socket_directory changed to  unix_socket_directories in 9.3.

Yeah, this is clearly a version-skew problem.  pg_upgrade knows it
should say unix_socket_directory not unix_socket_directories to a pre-9.3
server, but that's going wrong somehow.

> Are you sure that /usr/pgsql-9.2/bin/pg_ctl is really pointing to a 9.2 
> binary?

For quite some time, Red Hat shipped versions of 9.2.x that were patched
to understand unix_socket_directories not unix_socket_directory.  I would
be suspicious that the source server was one of those, except that the
cited path doesn't match where the Red Hat RPMs put it.

regards, tom lane
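A quick way to settle that question (a standard check, not something that was
run in the thread) is to ask the binaries themselves, since both pg_ctl and
postgres accept --version:

-bash-4.2$ /usr/pgsql-9.2/bin/pg_ctl --version
-bash-4.2$ /usr/pgsql-9.2/bin/postgres --version

A stock 9.2 build reports something like "pg_ctl (PostgreSQL) 9.2.24"; note
that a vendor-patched build reports the same version string, so the patch Tom
describes would only show up in the packager's changelog.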




Re: pg_upgrade : 9.X to 11.X issue CentOS 7.6

2019-07-23 Thread Perumal Raj
Hi All,

Yes, the bin file is correct; it shipped with CentOS 7.6.

Anyhow, thanks for the pointers, which helped me find a hackish
solution :-)

Currently the upgrade is running; I will keep you posted with the results.

Thanks,

On Tue, Jul 23, 2019 at 7:43 AM Tom Lane  wrote:

> [snip]


Default ordering option

2019-07-23 Thread Cyril Champier
Hi,

In this documentation, it is said:

> If sorting is not chosen, the rows will be returned in an unspecified
> order. The actual order in that case will depend on the scan and join plan
> types and the order on disk, but it must not be relied on.


I would like to know if there is any way to change that to have a "real"
random behaviour.

My use case:
At Doctolib, we do a lot of automatic tests.
Sometimes, people forget to order their queries. Most of the time, there is
no real problem in production. Let's say we display a user list without
order.
When a developer writes a test for this feature, he will create 2 users A
and B, then assert that they are displayed as "[A, B]".
99% of the time the test will be OK, but sometimes the displayed list will
be "[B, A]", and the test will fail.

One solution could be to ensure random order with an even distribution, so
that such failing test would be detected quicker.

Is that possible? Maybe with a plugin?

Thanks,
Cyril


Re: pg_upgrade : 9.X to 11.X issue CentOS 7.6

2019-07-23 Thread Perumal Raj
Hi All,

Finally, the upgrade completed successfully after implementing the following
workaround.

mv /usr/bin/pg_ctl{,-orig}
echo '#!/bin/bash' > /usr/bin/pg_ctl
echo '"$0"-orig "${@/unix_socket_directory/unix_socket_directories}"' >> /usr/bin/pg_ctl
chmod +x /usr/bin/pg_ctl

Special thanks to 'Ziggy Crueltyfree Zeitgeister'.
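For clarity, after those four commands /usr/bin/pg_ctl is a two-line shell
wrapper: the real binary has been renamed to pg_ctl-orig, and the wrapper
re-executes it with bash's "${@/pattern/replacement}" substitution applied to
the argument list, rewriting any unix_socket_directory option that pg_upgrade
passes into unix_socket_directories:

#!/bin/bash
"$0"-orig "${@/unix_socket_directory/unix_socket_directories}"

The path to wrap is whichever pg_ctl binary pg_upgrade actually invokes for
the old cluster (per the logs above, /usr/pgsql-9.2/bin/pg_ctl).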



On Tue, Jul 23, 2019 at 7:51 AM Perumal Raj  wrote:

> [snip]


Re: pg_upgrade : 9.X to 11.X issue CentOS 7.6

2019-07-23 Thread Tom Lane
Perumal Raj  writes:
> Finally upgrade completed successfully after implementing the following
> Workaround.

You didn't say where you got these executables from, but if the 9.2 and 11
packages were from the same packager, you should complain to them about
it.  If they're patching 9.2 to have unix_socket_directories, they should
also patch pg_upgrade in later versions to understand that.

regards, tom lane




LWLock optimization

2019-07-23 Thread Alexander Pyhalov
Hi.

I was asked to bring up this topic on the mailing lists after asking a
question on IRC.

The issue is the following.
On one of our servers (10.5), I see a lot of queries with 
wait_event_type='LWLock', wait_event='lock_manager' 

This is a stat gathering/analyzing application with tables which have > 100
partitions.
The queries themselves are mostly simple (select by pkey, update by primary
key, and so on).

 select count(*) from pg_locks

shows about 40K records (all granted), and

 select count(*)
 from (select distinct locktype, database, relation, page, tuple,
              virtualxid, transactionid, classid, objid, objsubid
       from pg_locks) s

varies from about 1K to 25K (I haven't stored this value in the monitoring
system, just observed it manually).

Currently I was advised to increase LOG2_NUM_LOCK_PARTITIONS at compile time.
If it bothers us enough, we will likely do it, but I have a question: should
this (or NUM_LOCK_PARTITIONS) value be increased by default, or made tunable?


Best regards,
Alexander Pyhalov,
programmer, telecommunications infrastructure department,
Information and Communication Infrastructure Office, Southern Federal University
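For reference, the constant under discussion lives in the server headers and
can only be raised by editing the source and rebuilding; a sketch against the
PostgreSQL 10 tree (the new value here is purely illustrative):

/* src/include/storage/lwlock.h -- the default LOG2 value is 4, i.e. 16 partitions */
#define LOG2_NUM_LOCK_PARTITIONS  6   /* hypothetical bump to 64 partitions */
#define NUM_LOCK_PARTITIONS  (1 << LOG2_NUM_LOCK_PARTITIONS)

Deriving NUM_LOCK_PARTITIONS from the log2 value keeps the partition count a
power of two, which the lock-table hashing relies on.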





Re: Default ordering option

2019-07-23 Thread Adrian Klaver

On 7/23/19 8:43 AM, Cyril Champier wrote:

Hi,

In this documentation, it is said:


If sorting is not chosen, the rows will be returned in an
unspecified order. The actual order in that case will depend on the
scan and join plan types and the order on disk, but it must not be
relied on.


I would like to know if there is any way to change that to have a "real" 
random behaviour.


My use case:
At Doctolib, we do a lot of automatic tests.
Sometimes, people forget to order their queries. Most of the time, there 
is no real problem in production. Let's say we display a user list 
without order.
When a developer writes a test for this feature, he will create 2 users 
A and B, then assert that they are displayed as "[A, B]".
99% of the time the test will be OK, but sometimes the displayed list 
will be "[B, A]", and the test will fail.


One solution could be to ensure random order with an even distribution, 
so that such failing test would be detected quicker.


Is that possible? Maybe with a plugin?


Not that I know of.

A possible solution is given below:

test_(postgres)> insert into t1 values (1, 'dog'), (2, 'cat'), (3, 'fish');
INSERT 0 3

test_(postgres)> select * from t1 ;
 a |  b
---+--
 1 | dog
 2 | cat
 3 | fish
(3 rows)

test_(postgres)> update  t1 set b = 'dogfish' where  a =1;
UPDATE 1

test_(postgres)> select * from t1 ;
 a |b
---+-
 2 | cat
 3 | fish
 1 | dogfish
(3 rows)

An UPDATE reorders the rows. Maybe throw an UPDATE into the test after 
creating the users to force an 'out of order' result?











--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pg_upgrade : 9.X to 11.X issue CentOS 7.6

2019-07-23 Thread Perumal Raj
Sorry, I missed including the link:
https://dba.stackexchange.com/questions/50135/pg-upgrade-unrecognized-configuration-parameter-unix-socket-directory


On Tue, Jul 23, 2019 at 9:20 AM Tom Lane  wrote:

> [snip]


Too slow to create new schema and their tables, functions, triggers.

2019-07-23 Thread PegoraroF10
We have in a single database 190 identical schemas. Now, when we create a new
one, with exactly same structure as the previous ones, it takes 20 or 30
minutes to finish. Usual time to finish that script was 30 seconds.

Basically, my script creates an entire structure for a new customer:
- Create schema TempSchema; --just to be sure that nobody will connect until
it finishes
- create tables (100), constraints, functions, etc.
- import data using restore only data.
- rename TempSchema to production name.

On second step, when creating tables, functions and so on, it takes
sometimes a minute just to create one table or one function.

So, does this happen because we have almost 200 schemas in that database?
I'm almost sure it does.
What do I need to do to make my script run as fast as before? Do I need to
reindex? Vacuum? Or am I reaching a limit on the number of schemas in a
Postgres database?







Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-23 Thread Adrian Klaver

On 7/23/19 12:20 PM, PegoraroF10 wrote:

We have in a single database 190 identical schemas. Now, when we create a new
one, with exactly same structure as the previous ones, it takes 20 or 30
minutes to finish. Usual time to finish that script was 30 seconds.

Basically, my script creates an entire structure for a new customer:
- Create schema TempSchema; --just to be sure that nobody will connect until
it finishes
- create tables (100), constraints, functions, etc.
- import data using restore only data.
- rename TempSchema to production name.

On second step, when creating tables, functions and so on, it takes
sometimes a minute just to create one table or one function.


Can you show example of portion of script?

Have you looked at the Postgres log during the above to see if there are any 
relevant messages?










--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Too slow to create new schema and their tables, functions, triggers.

2019-07-23 Thread Michael Lewis
On Tue, Jul 23, 2019 at 1:36 PM Adrian Klaver wrote:

> On 7/23/19 12:20 PM, PegoraroF10 wrote:
> > We have in a single database 190 identical schemas. Now, when we create
> a new
> > one, with exactly same structure as the previous ones, it takes 20 or 30
> > minutes to finish. Usual time to finish that script was 30 seconds.


 Can you create 200 schemas using your script without the data load step
and reproduce the issue? With 19,000 tables on that database, how
aggressive is autovacuum?
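One generic way to probe that (an illustrative query, assuming the slowdown is
catalog bloat from heavy object churn rather than a hard schema limit) is to
check autovacuum activity on the system catalogs that CREATE TABLE / CREATE
FUNCTION touch most:

select relname, n_dead_tup, last_autovacuum, last_autoanalyze
from pg_stat_sys_tables
where relname in ('pg_class', 'pg_attribute', 'pg_depend', 'pg_type')
order by n_dead_tup desc;

Large n_dead_tup counts with old or NULL last_autovacuum timestamps there
would point at catalog bloat as the cause of the slow DDL.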


Query plan: SELECT vs INSERT from same select

2019-07-23 Thread Alexander Voytsekhovskyy
I have a quite complicated query:

SELECT axis_x1, axis_y1, SUM(delivery_price)  as v_1 FROM (
SELECT to_char(delivery_data.delivery_date, '-MM') as axis_x1,
clients.id_client as axis_y1, delivery_data.amount * production_price.price
* groups.discount as delivery_price

FROM delivery_data
JOIN client_tt ON (client_tt.id_client_tt = delivery_data.id_client_tt)
JOIN clients ON (client_tt.id_client = clients.id_client)
JOIN production ON (production.id = delivery_data.id_product)
JOIN groups ON (groups.id = delivery_data.delivery_group_id AND
client_tt.id_group = groups.id AND groups.id = clients.id_group)
LEFT JOIN production_price on (delivery_data.id_product =
production_price.id_production AND groups.price_list_id =
production_price.price_list_id AND delivery_data.delivery_date BETWEEN
production_price.date_from AND production_price.date_to)

WHERE delivery_data.delivery_date between '2019-03-01' AND '2019-06-30'
AND delivery_data.delivery_group_id IN (...short list of values...)
AND delivery_data.id_product IN ()) AS tmpsource

WHERE TRUE
GROUP BY GROUPING SETS ((axis_x1, axis_y1), (axis_x1), (axis_y1), ())

It runs well, takes 1 s, and returns 4000 rows.

You can see explain analyze verbose here:
https://explain.depesz.com/s/AEWj

The problem is, when I wrap it in

A)
INSERT INTO norepl_1542_result (axis_x1, axis_y1, v_1)
SELECT  SAME QUERY

OR even

B)
WITH rows AS (
... SAME SELECT QUERY ...
)
INSERT INTO norepl_1542_result (axis_x1, axis_y1, v_1)
SELECT * FROM rows

The query time dramatically increases, to 500+ seconds.

You can see explain analyze verbose here
https://explain.depesz.com/s/AEWj

As you can see, 100% of the time goes to the same SELECT query; there are no
issues with the INSERT part.

I have played with it a lot, and it reproduces every time.

So my question is: why does wrapping the SELECT query in INSERT ... SELECT
dramatically change the query plan and make it 500x slower?


Re: Query plan: SELECT vs INSERT from same select

2019-07-23 Thread Igor Korot
Hi,

On Tue, Jul 23, 2019 at 3:29 PM Alexander Voytsekhovskyy wrote:
> [snip]

Which version of PostgreSQL do you have?
Which OS is it running on?

Thank you.




Re: Query plan: SELECT vs INSERT from same select

2019-07-23 Thread Tom Lane
Alexander Voytsekhovskyy  writes:
> You can see explain analyze verbose here:
> https://explain.depesz.com/s/AEWj

> The problem is, when I wrap it in

> A)
> INSERT INTO norepl_1542_result (axis_x1, axis_y1, v_1)
> SELECT  SAME QUERY

> OR even

> B)
> WITH rows AS (
> ... SAME SELECT QUERY ...
> )
> INSERT INTO norepl_1542_result (axis_x1, axis_y1, v_1)
> SELECT * FROM rows

> The query time dramatically increases, to 500+ seconds.

> You can see explain analyze verbose here
> https://explain.depesz.com/s/AEWj

That's the same link.

> As you can see, 100% of the time goes to the same SELECT query; there are
> no issues with the INSERT part.

We can't see any such thing from what you posted.

regards, tom lane




Re: Query plan: SELECT vs INSERT from same select

2019-07-23 Thread Tom Lane
[ please keep the list cc'd ]

Alexander Voytsekhovskyy  writes:
> Sorry again
> here is both links:
> https://explain.depesz.com/s/AEWj
> https://explain.depesz.com/s/CHwF

Don't think I believe that those are the same query --- there's a
CTE in the second one that doesn't appear in the first, and it is
eating a lot of time too.

I have a vague recollection that there are cases where optimizations
are possible in plain SELECT but not in data-modifying queries.
So maybe this isn't pilot error but something triggered by that.
We'd need to see a self-contained test case to verify that though.

regards, tom lane
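A self-contained test case along those lines (a hypothetical skeleton, not the
poster's schema, keeping only the grouping-sets shape) could look like:

create table src (id int, val numeric);
insert into src select g, g from generate_series(1, 100000) g;
create table dst (axis int, total numeric);

-- plain SELECT
explain (analyze, verbose)
select id % 10, sum(val) from src group by grouping sets ((id % 10), ());

-- the same query wrapped in INSERT ... SELECT
explain (analyze, verbose)
insert into dst
select id % 10, sum(val) from src group by grouping sets ((id % 10), ());

If the two plans diverge the same way on a scratch database, that would make
the SELECT-only optimization Tom mentions easy to pin down.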




pg_dump schema in pg11 without global permissions

2019-07-23 Thread Sergey Koposov
Hi, 

I'm trying to copy a schema from one PG database (ver 11) to PG 10. 
Previously the first database version was 9.6 and the way I did the copying was 

ssh -o Compression=no user@host '/opt/pgsql/bin/pg_dump --no-tablespaces -n 
schemaname -Fc  -U dbadmin dbname'  | pg_restore -U dbadmin -h localhost -1 -d 
dbnme

However after migrating from PG 9.6 to 11, when I did the same thing as before, 
I started getting a bunch of commands in the dump like this

GRANT CONNECT ON DATABASE dbname TO usernameXX;

which don't work for me because the list of users is different between 
different machines. 
It is clear that the change is related to the way pg_dump is now implemented in 
PG11, where global objects are dumped. 
But the question is: how do I reproduce the previous behaviour, i.e. dump just 
the schema and the permissions on the schema, not on the database?

I do know that I could grep the dump, but that's very annoying when dumping 
hundreds of millions of rows.

Thanks in advance for the help.

      Sergey


Re: pg_dump schema in pg11 without global permissions

2019-07-23 Thread Adrian Klaver

On 7/23/19 3:19 PM, Sergey Koposov wrote:

Hi,

I'm trying to copy a schema from one PG database (ver 11) to PG 10.
Previously the first database version was 9.6 and the way I did the copying was

ssh -o Compression=no user@host '/opt/pgsql/bin/pg_dump --no-tablespaces -n 
schemaname -Fc  -U dbadmin dbname'  | pg_restore -U dbadmin -h localhost -1 -d 
dbnme

However after migrating from PG 9.6 to 11, when I did the same thing as before, 
I started getting a bunch of commands in the dump like this

GRANT CONNECT ON DATABASE dbname TO usernameXX;

which don't work for me because the list of users is different between 
different machines.
It is clear that the change is related to the way pg_dump is now implemented in 
PG11, where global objects are dumped.
But the question is: how do I reproduce the previous behaviour, i.e. dump just 
the schema and the permissions on the schema, not on the database?


https://www.postgresql.org/docs/11/app-pgdump.html

-x
--no-privileges
--no-acl

Prevent dumping of access privileges (grant/revoke commands).







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pg_dump schema in pg11 without global permissions

2019-07-23 Thread Sergey Koposov
On Tue, 2019-07-23 at 15:21 -0700, Adrian Klaver wrote:
> [snip]
> https://www.postgresql.org/docs/11/app-pgdump.html
> 
> -x
> --no-privileges
> --no-acl
> 
>  Prevent dumping of access privileges (grant/revoke commands).
Yes I saw that, but that will not dump the privileges on the schema itself, 
which were dumped before, as far as I understand ... 

    S

Re: pg_dump schema in pg11 without global permissions

2019-07-23 Thread Adrian Klaver

On 7/23/19 3:23 PM, Sergey Koposov wrote:

[snip]

Yes I saw that, but that will not dump the privileges on the schema itself, 
which were dumped before, as far as I understand ...


So the roles for the schema don't change, but everything else does?







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pg_dump schema in pg11 without global permissions

2019-07-23 Thread Sergey Koposov
On Tue, 2019-07-23 at 15:37 -0700, Adrian Klaver wrote:
> [snip]
> So the roles for the schema don't change, but everything else does?

The schema permissions are granted to a generic user 'dbuser'. And a bunch of 
users are members of this role and that's how they access the schema.  
The database permissions on the other hand are granted specifically per 
individual user. 

I am not saying this is optimal, but it would be very annoying if now it became 
impossible to deal with this... 

      S

Re: pg_dump schema in pg11 without global permissions

2019-07-23 Thread Adrian Klaver

On 7/23/19 3:42 PM, Sergey Koposov wrote:

[snip]

The schema permissions are granted to a generic user 'dbuser'. And a bunch of 
users are members of this role and that's how they access the schema.
The database permissions on the other hand are granted specifically per 
individual user.


What version of pg_dump are you using?

When I dump a version 10 database using a version 11 pg_dump I do not 
see: GRANT CONNECT ON DATABASE


pg_dump -Fc -n utility  -s -d production -U postgres -p 5422 -f 
schema_test.out


pg_restore -f schema_test_fc.sql schema_test.out

aklaver@ranger:~> grep CONNECT schema_test_fc.sql
aklaver@ranger:~>



I am not saying this is optimal, but it would be very annoying if now it became 
impossible to deal with this...

       S




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pg_dump schema in pg11 without global permissions

2019-07-23 Thread Sergey Koposov
On Tue, 2019-07-23 at 15:52 -0700, Adrian Klaver wrote:
> [snip]
> What version of pg_dump are you using?
> 
> When I dump a version 10 database using a version 11 pg_dump I do not 
> see: GRANT CONNECT ON DATABASE
> 
> pg_dump -Fc -n utility  -s -d production -U postgres -p 5422 -f 
> schema_test.out
> 
> pg_restore -f schema_test_fc.sql schema_test.out

I'm dumping a version 11 database using version 11 pg_dump. I double-checked 
this.

I don't know if in your test case you have custom users who were granted 
connect permissions; I do have them.
Also, what I'm seeing matches perfectly the release notes on pg_dump, which 
explicitly mention dumping of the global properties:
 https://www.postgresql.org/docs/11/release-11.html#id-1.11.6.9.4


       S

Re: pg_dump schema in pg11 without global permissions

2019-07-23 Thread Adrian Klaver

On 7/23/19 3:58 PM, Sergey Koposov wrote:

[snip]

Also, what I'm seeing matches perfectly the release notes on pg_dump, which 
explicitly mention dumping of the global properties:
  https://www.postgresql.org/docs/11/release-11.html#id-1.11.6.9.4


That is only supposed to happen if you use -C(--create) and I am seeing 
that in your examples.









--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pg_dump schema in pg11 without global permissions

2019-07-23 Thread Adrian Klaver

On 7/23/19 4:04 PM, Adrian Klaver wrote:

[snip]


That is only supposed to happen if you use -C(--create) and I am seeing 

   ^not

that in your examples.











--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pg_dump schema in pg11 without global permissions

2019-07-23 Thread Sergey Koposov
On Tue, 2019-07-23 at 16:04 -0700, Adrian Klaver wrote:
> [snip]
> That is only supposed to happen if you use -C(--create) and I am seeing 
> that in your examples.

For some reason, when I just run a command like this, it doesn't have a GRANT 
CONNECT ON DATABASE:
~/soft/pgsql_install/bin/pg_dump -U skoposov -h localhost -n xx --no-tablespaces test1
but when I add the '-Fc' flag to pg_dump, I can see 'grant connect' inside the 
output (interspersed with binary stuff).

I don't get it.

       S 


Re: pg_dump schema in pg11 without global permissions

2019-07-23 Thread Tom Lane
[ hey guys, please trim your replies ]

Sergey Koposov  writes:
> On Tue, 2019-07-23 at 16:04 -0700, Adrian Klaver wrote:
>> That is only supposed to happen if you use -C(--create) and I am seeing 
>> that in your examples.

> For some reason, when I just run a command like this, it doesn't have a GRANT 
> CONNECT ON DATABASE:
> ~/soft/pgsql_install/bin/pg_dump -U skoposov -h localhost -n xx --no-tablespaces test1
> but when I add the '-Fc' flag to pg_dump, I can see 'grant connect' inside the 
> output (interspersed with binary stuff)

The GRANT CONNECT will be in the archive, but pg_restore should only
print it if you specified -C.

Experimenting, however, I see that that only works as intended if
I use v11 pg_restore.  I can replicate your result if I use v10
pg_restore.  So ... don't do that.

regards, tom lane
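Put together, the fixed pipeline from the original post looks like this (paths
illustrative; the key point is that the right-hand side is the v11 pg_restore,
with no -C):

ssh -o Compression=no user@host '/opt/pgsql/bin/pg_dump --no-tablespaces -n
schemaname -Fc -U dbadmin dbname' | /usr/pgsql-11/bin/pg_restore -U dbadmin
-h localhost -1 -d dbname

Without -C, the v11 pg_restore skips the database-level GRANTs stored in the
archive while still restoring the schema-level ones.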




Re: pg_dump schema in pg11 without global permissions

2019-07-23 Thread Sergey Koposov
On Tue, 2019-07-23 at 19:24 -0400, Tom Lane wrote:
> [ hey guys, please trim your replies ]
sorry
> 
> Experimenting, however, I see that that only works as intended if
> I use v11 pg_restore.  I can replicate your result if I use v10
> pg_restore.  So ... don't do that.

Thanks very much! 

I've started the pg_dump|pg_restore using the v11 pg_restore. I'll see if it 
works in a few hours.

      S

Re: pg_dump schema in pg11 without global permissions

2019-07-23 Thread Adrian Klaver

On 7/23/19 4:17 PM, Sergey Koposov wrote:



For some reason, when I just run a command like this, it doesn't have a GRANT 
CONNECT ON DATABASE:
~/soft/pgsql_install/bin/pg_dump -U skoposov -h localhost -n xx --no-tablespaces test1
but when I add the '-Fc' flag to pg_dump, I can see 'grant connect' inside the 
output (interspersed with binary stuff).

I don't get it.


As Tom explained, the custom format will include a lot of information 
that you can choose to retrieve later or not. For this case:


https://www.postgresql.org/docs/11/app-pgdump.html

"-C
--create

Begin the output with a command to create the database itself and 
reconnect to the created database. (With a script of this form, it 
doesn't matter which database in the destination installation you 
connect to before running the script.) If --clean is also specified, the 
script drops and recreates the target database before reconnecting to it.


With --create, the output also includes the database's comment if 
any, and any configuration variable settings that are specific to this 
database, that is, any ALTER DATABASE ... SET ... and ALTER ROLE ... IN 
DATABASE ... SET ... commands that mention this database. Access 
privileges for the database itself are also dumped, unless --no-acl is 
specified.


This option is only meaningful for the plain-text format. For the 
archive formats, you can specify the option when you call pg_restore.

"

NOTE the last paragraph.










--
Adrian Klaver
adrian.kla...@aklaver.com




Re: pg_dump schema in pg11 without global permissions

2019-07-23 Thread Adrian Klaver

On 7/23/19 4:32 PM, Sergey Koposov wrote:

On Tue, 2019-07-23 at 19:24 -0400, Tom Lane wrote:

[ hey guys, please trim your replies ]

sorry


Experimenting, however, I see that that only works as intended if
I use v11 pg_restore.  I can replicate your result if I use v10
pg_restore.  So ... don't do that.


Thanks very much!

I've started the pg_dump|pg_restore using the v11 pg_restore.  I'll see  it 
works in a few hours.


For future use: to test this sort of thing, use the -s switch to limit 
the restore to objects only (no data). This will speed up the process and 
confirm the permissions on the objects.
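For example (hypothetical paths, restoring objects only into a scratch
database):

/usr/pgsql-11/bin/pg_restore -s -1 -U dbadmin -d scratchdb schema_test.out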








--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Default ordering option

2019-07-23 Thread Ian Barwick

On 7/24/19 2:23 AM, Adrian Klaver wrote:

[snip]

An UPDATE reorders the rows. Maybe throw an UPDATE into the test after creating 
the users to force an 'out of order' result?


An UPDATE without changing any values should have the same effect, e.g.:

UPDATE t1 SET b = b WHERE a = 1;

Something like this

WITH x AS (SELECT * FROM t1 ORDER BY a DESC)
UPDATE t1 t
   SET a = t.a
  FROM x
 WHERE t.a = x.a

would shuffle the rows into reverse order, which might be enough to catch
out any missing ORDER BY (this assumes nothing else will touch the table
and reorder it before the test is run).

You could also order by RANDOM() but there would be a chance the rows would
end up in sequential order.

Regards


Ian Barwick

--
 Ian Barwick   https://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
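For completeness, a random-order variant of Ian's CTE trick (an untested
sketch on the same t1 table; like the original, it relies on the row rewrites
landing in shuffled order rather than reversed order):

WITH x AS (SELECT a FROM t1 ORDER BY random())
UPDATE t1 t
   SET a = t.a
  FROM x
 WHERE t.a = x.a;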




Re: LWLock optimization

2019-07-23 Thread Andres Freund
Hi,

On 2019-07-23 16:28:52 +, Alexander Pyhalov wrote:
> I was asked to bring up this topic on the mailing lists after asking a
> question on IRC.
> 
> The issue is the following.
> On one of our servers (10.5), I see a lot of queries with 
> wait_event_type='LWLock', wait_event='lock_manager' 

Could you qualify this a bit more?  What does "a lot" mean, in
comparison to the total number of queries/session?


> This is a stat gathering/analyzing application with tables which have
> > 100 partitions.

Hm, what kind of partitioning scheme is this? The "new"-ish partitioning
support isn't yet always that good to only perform the minimal amount of
metadata lookups (which in turn require locks). Especially in 10.

Are you using prepared statements?


> The queries itself are mostly simple (select by pkey, update by primary key 
> and so on).
> 
>  select count(*) from pg_locks shows about 40K records (all granted) and
> 
>  select count(*) from (select distinct locktype, database, relation, page,
>  tuple, virtualxid, transactionid, classid, objid, objsubid from pg_locks) s
> 
> is varying from about 1K to 25K (I haven't stored this value in monitoring 
> system, just observed manually).

I think to know what exactly is going on, we'd need a cpu profile, and
then perhaps a followup profile for also getting the callsites for lock
waits.  Could you take a perf profile?

https://wiki.postgresql.org/wiki/Profiling_with_perf
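(For reference, a typical invocation along the lines of that wiki page,
assuming perf and debug symbols are installed; run as root while the workload
is busy:

perf record -a -g -- sleep 30
perf report -g

The first command samples all CPUs with call graphs for 30 seconds; the
second opens the interactive report.)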

> Currently I was adviced to increase LOG2_NUM_LOCK_PARTITIONS at
> compile time. If it bothers us enough, we would likely do it, but I
> have a question, if this (or NUM_LOCK_PARTITIONS) value should be
> increased by default or set tunable?

I'm not quite sure what you're asking precisely here. Are you wondering
whether postgres should increase the default value of
LOG2_NUM_LOCK_PARTITIONS? And whether it should be runtime
configurable?  I'm fairly sure we don't want to make it runtime
configurable; that'd add overhead to some pretty central code.
Increasing the default would need a lot of benchmarks, to prove it
doesn't cause regressions in other common scenarios.

Greetings,

Andres Freund