Re: Creating Certificates

2018-10-06 Thread Tatsuo Ishii
After sending the message below to pgsql-docs, I noticed that if I follow
the steps described in the doc[1], the generated root.crt lacks the following.

X509v3 extensions:
X509v3 Subject Key Identifier: 
3B:16:EA:86:0B:7C:E4:7A:16:F2:4E:54:F5:9C:0E:0F:38:02:8C:CF
X509v3 Authority Key Identifier: 

keyid:3B:16:EA:86:0B:7C:E4:7A:16:F2:4E:54:F5:9C:0E:0F:38:02:8C:CF

X509v3 Basic Constraints: critical
CA:TRUE
Signature Algorithm: sha256WithRSAEncryption

This is present if I use the command[2]:
> openssl req -new -x509 -nodes -text -days 3650 \
>   -config /etc/ssl/openssl.cnf -extensions v3_ca \
>   -out root.crt -keyout root.key -subj "/CN=root.yourdomain.com"

I wonder if this is normal or not.
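
For reference, the generated certificate's extensions can be inspected with
the standard openssl text dump (the X509v3 sections quoted above come from
output of this form):

> openssl x509 -in root.crt -noout -text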

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp

From: Tatsuo Ishii 
Subject: Creating Certificates
Date: Sat, 06 Oct 2018 08:17:04 +0900 (JST)
Message-ID: <20181006.081704.1372328430253415862.t-is...@sraoss.co.jp>

> In "18.9.3. Creating Certificates",
> 
> --
[1]
> To create a server certificate whose identity can be validated by
> clients, first create a certificate signing request (CSR) and a
> public/private key file:
> 
> openssl req -new -nodes -text -out root.csr \
>   -keyout root.key -subj "/CN=root.yourdomain.com"
> chmod og-rwx root.key
> 
> Then, sign the request with the key to create a root certificate
> authority (using the default OpenSSL configuration file location on
> Linux):
> 
> openssl x509 -req -in root.csr -text -days 3650 \
>   -extfile /etc/ssl/openssl.cnf -extensions v3_ca \
>   -signkey root.key -out root.crt
> --
> 
> For me it seems the two-step procedure can be replaced with the following
> single command:
> 
[2]
> openssl req -new -x509 -nodes -text -days 3650 \
>   -config /etc/ssl/openssl.cnf -extensions v3_ca \
>   -out root.crt -keyout root.key -subj "/CN=root.yourdomain.com"
> 
> Is there any reason why our doc recommends the two-step procedure?
> 
> Best regards,
> --
> Tatsuo Ishii
> SRA OSS, Inc. Japan
> English: http://www.sraoss.co.jp/index_en.php
> Japanese:http://www.sraoss.co.jp
> 



Re: BUG #15307: Low numerical precision of (Co-) Variance

2018-10-06 Thread Dean Rasheed
On Wed, 3 Oct 2018 at 15:58, Madeleine Thompson  wrote:
> This diff looks good to me. Also, it applies cleanly against
> abd9ca377d669a6e0560e854d7e987438d0e612e and passes `make
> check-world`.
>
> I agree that this is not suitable for a patch release.
>

Pushed to master. Thanks for the review.

Regards,
Dean



Re: Problem while setting the fpw with SIGHUP

2018-10-06 Thread Amit Kapila
On Fri, Sep 28, 2018 at 5:16 PM Amit Kapila  wrote:
>
> On Fri, Sep 28, 2018 at 4:23 AM Michael Paquier  wrote:
> >
> > On Thu, Sep 27, 2018 at 07:38:31PM +0530, Amit Kapila wrote:
> > > Okay, I am planning to commit the attached patch tomorrow unless you
> > > or anybody else has any objections to it.
> >
> > None from here.  Thanks for taking care of it.
> >
>
> Thanks, pushed!  I have back-patched it to 9.5, as this was
> introduced by commit 2c03216d83, which added the XLOG machinery
> initialization in the RecoveryInProgress code path.
>

I have marked the originally reported issue as fixed in the open-items list.


--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: TAP tests for pg_verify_checksums

2018-10-06 Thread Michael Paquier
On Fri, Oct 05, 2018 at 01:38:05PM +0200, Michael Banck wrote:
> It's too late for v11 though at this point I guess?

Unfortunately yes.

> I think it would be easy to also test the -r command-line option, as we
> already create a table.

Good idea.  Let's add this test.

> That comment should read 'that checksums are enabled', right?

Indeed.  Fixed.

> Otherwise, LGTM and I've tested it without finding any problems.

What do you think about the updated version attached?
--
Michael
diff --git a/src/bin/initdb/t/001_initdb.pl b/src/bin/initdb/t/001_initdb.pl
index 5dc629fd5e..610887e5d4 100644
--- a/src/bin/initdb/t/001_initdb.pl
+++ b/src/bin/initdb/t/001_initdb.pl
@@ -8,7 +8,7 @@ use Fcntl ':mode';
 use File::stat qw{lstat};
 use PostgresNode;
 use TestLib;
-use Test::More tests => 18;
+use Test::More tests => 21;
 
 my $tempdir = TestLib::tempdir;
 my $xlogdir = "$tempdir/pgxlog";
@@ -58,6 +58,12 @@ mkdir $datadir;
 			"check PGDATA permissions");
 	}
 }
+
+# Control file should tell that data checksums are disabled by default.
+command_like(['pg_controldata', $datadir],
+			 qr/Data page checksum version:.*0/,
+			 'checksums are disabled in control file');
+
 command_ok([ 'initdb', '-S', $datadir ], 'sync only');
 command_fails([ 'initdb', $datadir ], 'existing data directory');
 
diff --git a/src/bin/pg_verify_checksums/.gitignore b/src/bin/pg_verify_checksums/.gitignore
index d1dcdaf0dd..0e5e569a54 100644
--- a/src/bin/pg_verify_checksums/.gitignore
+++ b/src/bin/pg_verify_checksums/.gitignore
@@ -1 +1,3 @@
 /pg_verify_checksums
+
+/tmp_check/
diff --git a/src/bin/pg_verify_checksums/Makefile b/src/bin/pg_verify_checksums/Makefile
index d16261571f..cfe4ab1b8b 100644
--- a/src/bin/pg_verify_checksums/Makefile
+++ b/src/bin/pg_verify_checksums/Makefile
@@ -34,3 +34,9 @@ uninstall:
 clean distclean maintainer-clean:
 	rm -f pg_verify_checksums$(X) $(OBJS)
 	rm -rf tmp_check
+
+check:
+	$(prove_check)
+
+installcheck:
+	$(prove_installcheck)
diff --git a/src/bin/pg_verify_checksums/t/001_basic.pl b/src/bin/pg_verify_checksums/t/001_basic.pl
new file mode 100644
index 0000000000..1fa2e12db2
--- /dev/null
+++ b/src/bin/pg_verify_checksums/t/001_basic.pl
@@ -0,0 +1,8 @@
+use strict;
+use warnings;
+use TestLib;
+use Test::More tests => 8;
+
+program_help_ok('pg_verify_checksums');
+program_version_ok('pg_verify_checksums');
+program_options_handling_ok('pg_verify_checksums');
diff --git a/src/bin/pg_verify_checksums/t/002_actions.pl b/src/bin/pg_verify_checksums/t/002_actions.pl
new file mode 100644
index 0000000000..2c329bbf8b
--- /dev/null
+++ b/src/bin/pg_verify_checksums/t/002_actions.pl
@@ -0,0 +1,69 @@
+# Do basic sanity checks supported by pg_verify_checksums using
+# an initialized cluster.
+
+use strict;
+use warnings;
+use PostgresNode;
+use TestLib;
+use Test::More tests => 12;
+
+# Initialize node with checksums enabled.
+my $node = get_new_node('node_checksum');
+$node->init(extra => ['--data-checksums']);
+my $pgdata = $node->data_dir;
+
+# Control file should know that checksums are enabled.
+command_like(['pg_controldata', $pgdata],
+	 qr/Data page checksum version:.*1/,
+		 'checksums enabled in control file');
+
+# Checksums pass on a newly-created cluster
+command_ok(['pg_verify_checksums',  '-D', $pgdata],
+		   "checksum checks done and passing");
+
+# Checks cannot happen for an online cluster
+$node->start;
+command_fails(['pg_verify_checksums',  '-D', $pgdata],
+			  "checksum checks not done");
+
+# Create table to corrupt and get its relfilenode
+$node->safe_psql('postgres',
+	"SELECT a INTO corrupt1 FROM generate_series(1,1) AS a;
+	ALTER TABLE corrupt1 SET (autovacuum_enabled=false);");
+
+my $file_corrupted = $node->safe_psql('postgres',
+	"SELECT pg_relation_filepath('corrupt1')");
+my $relfilenode_corrupted =  $node->safe_psql('postgres',
+	"SELECT relfilenode FROM pg_class WHERE relname = 'corrupt1';");
+
+# Set page header and block size
+my $pageheader_size = 24;
+my $block_size = $node->safe_psql('postgres', 'SHOW block_size;');
+$node->stop;
+
+# Checksums pass for single relfilenode as the table is not corrupted
+# yet.
+command_ok(['pg_verify_checksums',  '-D', $pgdata,
+	'-r', $relfilenode_corrupted],
+	"checksum checks for table relfilenode done and passing");
+
+# Time to create a corruption
+open my $file, '+<', "$pgdata/$file_corrupted";
+seek($file, $pageheader_size, 0);
+syswrite($file, '\0\0\0\0\0\0\0\0\0');
+close $file;
+
+# Checksum checks on single relfilenode fail
+$node->command_checks_all([ 'pg_verify_checksums', '-D', $pgdata, '-r',
+			$relfilenode_corrupted],
+		  1,
+		  [qr/Bad checksums:.*1/],
+		  [qr/checksum verification failed/],
+		  'pg_checksums reports checksum mismatch');
+
+# Global checksum checks fail
+$node->command_checks_all([ 'pg_verify_checksums', '-D', $pgdata],
+		  1,
+		  [qr/Bad checksums:.*1/],
+		  [qr/checksum verification failed/],
+		  'pg_checksums reports checksum mismatch');
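
For reference, with the Makefile hooks added above, the new tests can be run
through the usual TAP entry point, assuming a source tree configured with
--enable-tap-tests:

    make -C src/bin/pg_verify_checksums check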

Re: now() vs transaction_timestamp()

2018-10-06 Thread Amit Kapila
On Sat, Oct 6, 2018 at 2:55 AM Tom Lane  wrote:
>
> My initial thought was that we should just re-mark transaction_timestamp()
> as parallel-restricted and call it a day, but we'd then have to do the
> same for SQLValueFunction, which is not much fun because it does have
> variants that are parallel safe (and teaching max_parallel_hazard_walker
> which is which seems like a recipe for bugs).
>
> Also, while it might not be quite too late to force a catversion bump
> in v11, this is demonstrably also broken in v10, and we can't do that
> there.
>
> So maybe the right answer is to change the parallel mode infrastructure
> so it transmits xactStartTimestamp, making transaction_timestamp()
> retroactively safe, and then in HEAD only we could re-mark now() as
> safe.  We might as well do the same for statement_timestamp as well.
>

+1.  Sounds like a reasonable way to fix the problem.  I can take care
of it (though not immediately) if you want.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: now() vs transaction_timestamp()

2018-10-06 Thread Pavel Stehule
On Sat, Oct 6, 2018 at 13:47, Amit Kapila wrote:

> On Sat, Oct 6, 2018 at 2:55 AM Tom Lane  wrote:
> >
> > My initial thought was that we should just re-mark
> transaction_timestamp()
> > as parallel-restricted and call it a day, but we'd then have to do the
> > same for SQLValueFunction, which is not much fun because it does have
> > variants that are parallel safe (and teaching max_parallel_hazard_walker
> > which is which seems like a recipe for bugs).
> >
> > Also, while it might not be quite too late to force a catversion bump
> > in v11, this is demonstrably also broken in v10, and we can't do that
> > there.
> >
> > So maybe the right answer is to change the parallel mode infrastructure
> > so it transmits xactStartTimestamp, making transaction_timestamp()
> > retroactively safe, and then in HEAD only we could re-mark now() as
> > safe.  We might as well do the same for statement_timestamp as well.
> >
>
> +1.  Sounds like a reasonable way to fix the problem.  I can take care
> of it (though not immediately) if you want.
>
>
+1

Pavel


> --
> With Regards,
> Amit Kapila.
> EnterpriseDB: http://www.enterprisedb.com
>
>


Re: now() vs transaction_timestamp()

2018-10-06 Thread Konstantin Knizhnik



On 06.10.2018 00:25, Tom Lane wrote:

I wrote:

Konstantin Knizhnik  writes:

Postgres documentation says that "now() is a traditional PostgreSQL
equivalent to transaction_timestamp()".
Also both use the same implementation.

Right.

But they have different parallel safety properties:

That seems like a bug/thinko.  I am not sure which property setting is
correct though.  It'd only be safe if the parallel-query infrastructure
transfers the relevant timestamp to workers, and I don't know if it does.

The answer is that it doesn't; indeed, xactStartTimestamp is local inside
xact.c, and it's easy to see by inspection that there is nothing at all
that sets it except StartTransaction().  What is happening, if you do

set force_parallel_mode to 1;
select transaction_timestamp();

is that the value you get back is the time that the parallel worker did
StartTransaction().  It is easy to show that this is utterly broken:
transaction_timestamp() holds still within a transaction until you set
force_parallel_mode, and then it does not.

regression=# begin;
BEGIN
regression=# select transaction_timestamp();
     transaction_timestamp
-------------------------------
 2018-10-05 17:00:11.764019-04
(1 row)

regression=# select transaction_timestamp();
     transaction_timestamp
-------------------------------
 2018-10-05 17:00:11.764019-04
(1 row)

regression=# select transaction_timestamp();
     transaction_timestamp
-------------------------------
 2018-10-05 17:00:11.764019-04
(1 row)

regression=# set force_parallel_mode to 1;
SET
regression=# select transaction_timestamp();
     transaction_timestamp
-------------------------------
 2018-10-05 17:00:21.983122-04
(1 row)

Looking at the related functions, I see that now() and
statement_timestamp() are marked stable/restricted, which is OK, while
clock_timestamp() and timeofday() are marked volatile/safe which seems odd
but on reflection I think it's OK.  Their values wouldn't hold still in
the parent process either, so there's no reason to disallow workers from
running them.

So transaction_timestamp() is definitely buggy, but we're not out of the
woods yet: SQLValueFunction is treated as parallel-safe, but it also has
some instances that are equivalent to transaction_timestamp and so do not
work correctly.

regression=# begin;
BEGIN
regression=# select current_time;
    current_time
--------------------
 17:12:35.942968-04
(1 row)

regression=# select current_time;
    current_time
--------------------
 17:12:35.942968-04
(1 row)

regression=# set force_parallel_mode to 1;
SET
regression=# select current_time;
    current_time
--------------------
 17:12:55.462141-04
(1 row)

regression=# set force_parallel_mode to 0;
SET
regression=# select current_time;
    current_time
--------------------
 17:12:35.942968-04
(1 row)

Ain't that fun?

My initial thought was that we should just re-mark transaction_timestamp()
as parallel-restricted and call it a day, but we'd then have to do the
same for SQLValueFunction, which is not much fun because it does have
variants that are parallel safe (and teaching max_parallel_hazard_walker
which is which seems like a recipe for bugs).

Also, while it might not be quite too late to force a catversion bump
in v11, this is demonstrably also broken in v10, and we can't do that
there.

So maybe the right answer is to change the parallel mode infrastructure
so it transmits xactStartTimestamp, making transaction_timestamp()
retroactively safe, and then in HEAD only we could re-mark now() as
safe.  We might as well do the same for statement_timestamp as well.

Thoughts?

regards, tom lane
Attached please find a very small patch fixing the problem (propagating
transaction and statement timestamps to workers).
diff --git a/src/backend/access/transam/parallel.c b/src/backend/access/transam/parallel.c
index cdaa32e..abc41e7 100644
--- a/src/backend/access/transam/parallel.c
+++ b/src/backend/access/transam/parallel.c
@@ -87,6 +87,8 @@ typedef struct FixedParallelState
 	PGPROC	   *parallel_master_pgproc;
 	pid_t		parallel_master_pid;
 	BackendId	parallel_master_backend_id;
+	TimestampTz xact_ts;
+	TimestampTz stmt_ts;
 
 	/* Mutex protects remaining fields. */
 	slock_t		mutex;
@@ -318,6 +320,8 @@ InitializeParallelDSM(ParallelContext *pcxt)
 	fps->parallel_master_pgproc = MyProc;
 	fps->parallel_master_pid = MyProcPid;
 	fps->parallel_master_backend_id = MyBackendId;
+	fps->xact_ts = GetCurrentTransactionStartTimestamp();
+	fps->stmt_ts = GetCurrentStatementStartTimestamp();
 	SpinLockInit(&fps->mutex);
 	fps->last_xlog_end = 0;
 	shm_toc_insert(pcxt->toc, PARALLEL_KEY_FIXED, fps);
@@ -1351,6 +1355,9 @@ ParallelWorkerMain(Datum main_arg)
 	tstatespace = shm_toc_lookup(toc, PARALLEL_KEY_TRANSACTION_STATE, false);
 	StartParallelWorkerTransaction(tstatespace);
 
+	/* Restore current transaction and statement timestamps */
+	SetCurrentStartTimestamps(fps->xact_ts, fps->stmt_ts);
+
 	/* Restore combo CID s

Re: Relax transactional restrictions on ALTER ENUM ... ADD TYPE (redux)

2018-10-06 Thread Andrew Dunstan




On 10/05/2018 08:35 PM, Thomas Munro wrote:

On Sat, Oct 6, 2018 at 2:31 AM Andrew Dunstan
 wrote:

On Wed, Oct 3, 2018 at 2:24 PM Thomas Munro
 wrote:

Over the thread for bug #14825 I posted some draft code to show one
way to save/restore the enum blacklist for parallel workers.  Here's a
better version, and a new thread.  0001 is the code by Andrew Dunstan
and Tom Lane that was reverted in 93a1af0b, unchanged by me except for
resolving trivial conflicts on current master.  0002 fixes the failure
seen with make installcheck when postgresql.conf says
force_parallel_mode = regress.

Many thanks for doing this. Your solution seems simpler and cleaner than
what was previously proposed.

I have tested it, and confirm that without your 0002 patch there is an
error with force_parallel_mode=regress and with 0002 that error goes away.

Thanks.  Here is a version squashed into one commit, with a decent
commit message and a small improvement: the code to create the hash
table is moved into a static function, to avoid repetition.  I will
push this to master early next week, unless there is more feedback or
one of you would prefer to do that.



Go for it.

cheers

andrew

--
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




fine tune v11 release notes

2018-10-06 Thread Justin Pryzby
Find below various fixes to release notes for v11, for discussion purposes.

Note: I changed these:
 "B-tree indexes can now be built in parallel with" 
 "Allow btree indexes to be built in parallel"
...since they could be construed to mean building multiple indexes
simultaneously (perhaps like what's done by pg_restore --jobs or pg_repack).

Conversely, in create_index.sgml, change the pre-existing phrase "in parallel"
to "simultaneously", since "parallel" is now used to refer to parallel build of
a (single) index.

I think this needs to be rewritten, maybe in a separate commit, but I'm having
trouble writing something less impenetrable:
doc/src/sgml/ref/alter_index.sgml
 ATTACH PARTITION
 
  
   Causes the named index to become attached to the altered index.
   The named index must be on a partition of the table containing the
   index being altered, and have an equivalent definition.  An attached


Finally: should there be an entry for this?
fb466d7b Fully enforce uniqueness of constraint names.  (Tom Lane)
"Previously, multiple constraints could be created with same name.  A 
new
unique index on pg_constraint now avoids that situation."

That was never intended to be allowed, but could still be a pg_upgrade hazard,
which would mean disruptive downtime scheduled and executed, but failure late
in pg_upgrade due to a unique violation.  Maybe it'd be worth adding a check
early in pg_upgrade to detect that and exit(1) before stopping the old cluster.
I'm drawing a comparison with inconsistent nullability across an inheritance
hierarchy, as discussed at [0].  That affected pg_upgrades at several of our
customers (in our case, not due to adding primary keys).  The last 25% of this
thread is what's relevant.  It looks like that check was never implemented
though.

[0] 
https://www.postgresql.org/message-id/flat/CACQjQLoMsE%2B1pyLe98pi0KvPG2jQQ94LWJ%2BPTiLgVRK4B%3Di_jg%40mail.gmail.com#015ff524738be6cb9a2bd6875c87094a
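
A hypothetical form of such a pre-check would simply look for names that will
collide with the new unique index on pg_constraint; the catalog and columns
below are real, but the query itself is only a sketch:

    SELECT conrelid::regclass AS tab, contypid, conname, count(*)
    FROM pg_constraint
    GROUP BY conrelid, contypid, conname
    HAVING count(*) > 1;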


diff --git a/doc/src/sgml/ref/create_index.sgml b/doc/src/sgml/ref/create_index.sgml
index 3c1223b..7bd1da3 100644
--- a/doc/src/sgml/ref/create_index.sgml
+++ b/doc/src/sgml/ref/create_index.sgml
@@ -612,10 +612,10 @@ Indexes:
 

 Regular index builds permit other regular index builds on the
-same table to occur in parallel, but only one concurrent index build
-can occur on a table at a time.  In both cases, no other types of schema
-modification on the table are allowed meanwhile.  Another difference
-is that a regular CREATE INDEX command can be performed within
+same table to occur simultaneously, but only one concurrent index build
+can occur on a table at a time.  In either case, schema modification of the
+table is not allowed while the index is being built.  Another difference is
+that a regular CREATE INDEX command can be performed within
 a transaction block, but CREATE INDEX CONCURRENTLY cannot.

   
@@ -650,7 +650,7 @@ Indexes:
ordering. For example, we might want to sort a complex-number data
type either by absolute value or by real part. We could do this by
defining two operator classes for the data type and then selecting
-   the proper class when making an index.  More information about
+   the proper class when creating an index.  More information about
operator classes is in  and in .
   
@@ -673,7 +673,8 @@ Indexes:
valid, once all partitions acquire the index.)  Note, however, that
any partition that is created in the future using
CREATE TABLE ... PARTITION OF will automatically
-   contain the index regardless of whether this option was specified.
+   contain the index regardless of whether ONLY was
+   specified.
   
 
   
diff --git a/doc/src/sgml/release-11.sgml b/doc/src/sgml/release-11.sgml
index ca42f28..a945502 100644
--- a/doc/src/sgml/release-11.sgml
+++ b/doc/src/sgml/release-11.sgml
@@ -32,25 +32,25 @@

 
  UPDATE statements that change a partition key
- now move affected rows to the appropriate partitions
+ now cause affected rows to be moved to the appropriate partitions
 


 
  Improved SELECT performance from enhanced partition
- elimination strategies during query processing and execution
+ elimination strategies during query planning and execution
 


 
- Support for PRIMARY KEY, FOREIGN
- KEY, indexes, and triggers on partitioned tables
+ Added support on partitioned tables for PRIMARY
+ KEY, FOREIGN KEY, indexes, and triggers
 


 
- Having a "default" partition for storing data that does not match any
- of the remaining partitions
+Allow creation of a "default" partition for storing rows which are
+ excluded by bounds of all other partitions.
 

   
@@ -63,7 +63,7 @@
   

 
-  

Re: fine tune v11 release notes

2018-10-06 Thread Marko Tiikkaja
I like broccoli


Re: Creating Certificates

2018-10-06 Thread Andrew Dunstan




On 10/06/2018 05:46 AM, Tatsuo Ishii wrote:

After sending the message below to pgsql-docs, I noticed that if I follow
the steps described in the doc[1], the generated root.crt lacks the following.

 X509v3 extensions:
 X509v3 Subject Key Identifier:
 3B:16:EA:86:0B:7C:E4:7A:16:F2:4E:54:F5:9C:0E:0F:38:02:8C:CF
 X509v3 Authority Key Identifier:
 
keyid:3B:16:EA:86:0B:7C:E4:7A:16:F2:4E:54:F5:9C:0E:0F:38:02:8C:CF

 X509v3 Basic Constraints: critical
 CA:TRUE
 Signature Algorithm: sha256WithRSAEncryption

This is present if I use the command[2]:

openssl req -new -x509 -nodes -text -days 3650 \
   -config /etc/ssl/openssl.cnf -extensions v3_ca \
   -out root.crt -keyout root.key -subj "/CN=root.yourdomain.com"

I wonder if this is normal or not.




It will in fact be in the certificate:

   andrew@ad-c7:tmp $ openssl req -new -nodes  -out root.csr -keyout
   root.key -subj "/CN=root.yourdomain.com"
   Generating a 2048 bit RSA private key
   ...+++
   ...+++
   writing new private key to 'root.key'
   -----
   andrew@ad-c7:tmp $ openssl x509 -req -in root.csr -days 3650
   -extfile /etc/pki/tls/openssl.cnf -extensions v3_ca -signkey
   root.key -out root.crt
   Signature ok
   subject=/CN=root.yourdomain.com
   Getting Private key
   andrew@ad-c7:tmp $ openssl x509 -in root.crt -noout -text
   Certificate:
    Data:
    Version: 3 (0x2)
    Serial Number:
    b3:cf:16:ad:94:fa:69:d6
    Signature Algorithm: sha256WithRSAEncryption
    Issuer: CN=root.yourdomain.com
    Validity
    Not Before: Oct  6 14:44:05 2018 GMT
    Not After : Oct  3 14:44:05 2028 GMT
    Subject: CN=root.yourdomain.com
    Subject Public Key Info:
    Public Key Algorithm: rsaEncryption
    Public-Key: (2048 bit)
    Modulus:
    00:ea:37:82:84:45:b2:21:15:a6:bc:4c:00:9f:15:
    c4:8c:c2:0c:14:a2:1f:55:3b:5b:33:45:80:a4:47:
    c1:4b:31:f6:2d:a3:ff:e6:eb:fe:05:4a:8f:6d:24:
    ec:dc:ab:cf:b0:30:38:1e:1a:ba:32:31:98:e1:31:
    73:ab:7f:0c:aa:5c:33:f8:68:b1:c2:8b:eb:2a:60:
    88:4d:77:d9:65:b3:08:10:dd:3a:de:3b:ff:88:c9:
    f7:6d:e1:0a:8f:e0:cd:ac:67:40:76:0c:50:d3:ee:
    63:9b:23:25:87:ce:a4:2f:a4:46:4c:dc:8f:d6:98:
    55:75:bb:79:d2:21:57:bc:b3:72:fb:d0:7e:4e:f5:
    2d:97:34:82:89:4a:70:59:87:5e:e1:8a:5e:ce:15:
    ab:a1:83:c6:be:75:b2:70:12:88:87:89:4d:a1:ca:
    11:bf:3f:2e:0d:0e:e6:64:cf:8b:44:8a:d7:ba:15:
    66:85:16:87:6b:e4:22:cc:70:80:9a:a6:ef:8a:bf:
    e9:b5:0a:11:aa:b7:3f:91:ad:a1:37:5e:7e:29:a5:
    6e:7d:e9:1c:c4:53:23:fd:cf:e0:79:f9:eb:98:96:
    7d:38:10:78:d3:5b:c5:49:1f:76:c3:d6:2c:bb:00:
    6b:19:b1:1f:6f:d4:3d:41:85:34:c1:72:48:59:b1:
    0e:49
    Exponent: 65537 (0x10001)
    X509v3 extensions:
    X509v3 Subject Key Identifier:
   CD:9F:99:94:4E:3E:1D:B7:66:0D:65:6A:6E:C5:16:A8:04:20:16:6E
    X509v3 Authority Key Identifier:
   keyid:CD:9F:99:94:4E:3E:1D:B7:66:0D:65:6A:6E:C5:16:A8:04:20:16:6E

    X509v3 Basic Constraints:
    CA:TRUE
    Signature Algorithm: sha256WithRSAEncryption
 e7:9d:de:79:cf:c4:ce:fd:10:6c:47:3c:b1:75:0a:5b:a5:c8:
 5a:36:63:b6:d5:46:7a:f3:3f:6a:e5:4f:46:a7:25:05:6c:d3:
 14:1e:90:73:f1:8c:46:a8:ed:c3:da:34:c2:25:09:79:39:7f:
 23:9d:43:5e:a9:2b:8e:34:d2:da:fa:c2:b4:76:0a:3b:26:14:
 c6:72:3b:df:e5:f0:0c:27:48:ab:4a:72:74:f4:d5:31:a4:4d:
 9e:f9:fa:45:f8:50:5b:81:59:bc:22:c7:9f:dc:01:fe:29:41:
 40:ff:6b:a1:82:e8:50:11:92:60:2c:e2:3b:32:9f:cd:f4:d0:
 dc:04:96:5a:18:7d:86:9b:0c:81:d5:aa:14:2b:c2:c1:80:09:
 b3:05:37:87:62:fe:36:c8:5e:28:8e:fa:6a:56:00:fa:85:6f:
 28:f1:75:1b:1b:62:9e:36:c7:ad:ec:fd:05:e0:a9:9f:b2:29:
 e5:0b:5e:fc:9f:5a:18:dd:4f:c6:ed:24:a3:a0:6b:35:b0:de:
 f3:ab:e5:42:e6:ae:9b:c1:e1:70:66:64:5c:46:86:1b:ad:a8:
 e0:4b:47:28:37:e6:ec:99:8f:e4:a9:06:0d:53:a7:5b:7c:c2:
 5d:b4:d8:14:aa:10:d9:4e:6a:1a:6e:1c:7f:2e:3c:2a:61:73:
 fd:04:cb:c1



I'm not opposed to simplifying the instructions, however.

cheers

andrew


--
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Performance improvements for src/port/snprintf.c

2018-10-06 Thread Tom Lane
Andrew Gierth  writes:
> "Tom" == Tom Lane  writes:
>  Tom> Yeah, one would hope. But I wonder whether it always produces the
>  Tom> same low-order digits, and if not, whether people will complain.

> It won't produce the same low-order digits in general, since it has a
> different objective: rather than outputting a decimal value which is the
> true float value rounded to a fixed size by decimal rounding rules, it
> produces the shortest decimal value which falls within the binary float
> rounding interval of the true float value. i.e. the objective is to be
> able to round-trip back to float and get the identical result.

So I'm thinking that there are two, hopefully separable, issues here:

1. The algorithm for deciding how many digits to print.

2. The speed.

Now, "shortest value that converts back exactly" is technically cool,
but I am not sure that it solves any real-world problem that we have.
I'm also worried that introducing it would result in complaints like
https://www.postgresql.org/message-id/CANaXbVjw3Y8VmapWuZahtcRhpE61hsSUcjquip3HuXeuN8y4sg%40mail.gmail.com

As for #2, my *very* short once-over of the code led me to think that
the speed win comes mostly from use of wide integer arithmetic, and
maybe from throwing big lookup tables at the problem.  If so, it's very
likely possible that we could adopt those techniques without necessarily
buying into the shortest-exact rule for how many digits to print.
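
To illustrate the difference between the two rules with a naive standalone
sketch (this is not PostgreSQL or Ryu code; Ryu computes the shortest form
directly rather than by trial):

    #include <stdio.h>
    #include <stdlib.h>

    int
    main(void)
    {
        double  d = 0.1;
        char    buf[32];
        int     prec;

        /* fixed-precision rule: always round-trips, but noisy */
        printf("%.17g\n", d);       /* prints 0.10000000000000001 */

        /* shortest-round-trip rule, found naively by trying precisions */
        for (prec = 1; prec <= 17; prec++)
        {
            snprintf(buf, sizeof(buf), "%.*g", prec, d);
            if (strtod(buf, NULL) == d)
                break;
        }
        printf("%s\n", buf);        /* prints 0.1 */
        return 0;
    }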

> One option would be to stick with snprintf if extra_float_digits is less
> than 0 (or less than or equal to 0 and make the default 1) and use ryu
> otherwise, so that the option to get rounded floats is still there.
> (Apparently some people do use negative values of extra_float_digits.)
> Unlike other format-changing GUCs, this one already exists and is
> already used by people who want more or less precision, including by
> pg_dump where round-trip conversion is the requirement.

I wouldn't necessarily object to having some value of extra_float_digits
that selects the shortest-exact rule, but I'm thinking maybe it should
be a value we don't currently accept.

regards, tom lane



Re: now() vs transaction_timestamp()

2018-10-06 Thread Tom Lane
Konstantin Knizhnik  writes:
> On 06.10.2018 00:25, Tom Lane wrote:
>> So maybe the right answer is to change the parallel mode infrastructure
>> so it transmits xactStartTimestamp, making transaction_timestamp()
>> retroactively safe, and then in HEAD only we could re-mark now() as
>> safe.  We might as well do the same for statement_timestamp as well.

> Attached please find very small patch fixing the problem (propagating 
> transaction and statement timestamps to workers).

That's a bit too small ;-) ... one demonstrable problem with it is
that the parallel worker will report the wrong xactStartTimestamp
to pgstat_report_xact_timestamp(), since you aren't jamming the
transmitted value in soon enough.  Also, I found that ParallelWorkerMain
executes at least two transactions before it ever gets to the "main"
transaction that does real work, and I didn't much care for the fact
that those were running with worker-local values of xactStartTimestamp
and stmtStartTimestamp.  So I rearranged things a bit to ensure that
parallel workers wouldn't generate their own values for either
timestamp, and pushed it.

regards, tom lane



Re: Segfault when creating partition with a primary key and sql_drop trigger exists

2018-10-06 Thread Alvaro Herrera
here's my proposed patch.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
From 50f049f30875fd6fa9b8b3346c9da176725f37ff Mon Sep 17 00:00:00 2001
From: Alvaro Herrera 
Date: Sat, 6 Oct 2018 12:53:01 -0300
Subject: [PATCH] Fix event triggers for partitioned tables
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Index DDL cascading on partitioned tables introduced a way for ALTER
TABLE to be called reentrantly.  This caused an important deficiency
in event trigger support to be exposed: on exiting the reentrant call,
the alter table state object was clobbered, causing a crash when the
outer alter table tries to finalize its processing.  Fix the crash by
creating a stack of event trigger state objects.  There are still ways
to cause things to misbehave (and probably other crashers) with more
elaborate tricks, but at least it now doesn't crash in the obvious
scenario.

Backpatch to 9.5, where DDL deparsing of event triggers was introduced.

Reported-by: Marco Slot
Authors: Michaël Paquier, Álvaro Herrera
Discussion: https://postgr.es/m/CANNhMLCpi+HQ7M36uPfGbJZEQLyTy7XvX=5efkpr-b1bo0u...@mail.gmail.com
---
 src/backend/catalog/index.c  |  8 +++-
 src/backend/commands/event_trigger.c | 13 +++--
 src/backend/commands/indexcmds.c |  3 ++-
 src/backend/commands/tablecmds.c |  2 +-
 src/backend/commands/view.c  |  4 
 src/include/catalog/index.h  |  3 ++-
 src/include/tcop/deparse_utility.h   |  3 +++
 .../test_ddl_deparse/expected/alter_table.out| 12 
 .../modules/test_ddl_deparse/sql/alter_table.sql |  8 
 src/test/regress/expected/event_trigger.out  | 20 +++-
 src/test/regress/sql/event_trigger.sql   | 13 +
 11 files changed, 78 insertions(+), 11 deletions(-)

diff --git a/src/backend/catalog/index.c b/src/backend/catalog/index.c
index 39605732b1..339bb35f9b 100644
--- a/src/backend/catalog/index.c
+++ b/src/backend/catalog/index.c
@@ -50,6 +50,7 @@
 #include "catalog/pg_type.h"
 #include "catalog/storage.h"
 #include "commands/tablecmds.h"
+#include "commands/event_trigger.h"
 #include "commands/trigger.h"
 #include "executor/executor.h"
 #include "miscadmin.h"
@@ -212,7 +213,8 @@ relationHasPrimaryKey(Relation rel)
 void
 index_check_primary_key(Relation heapRel,
 		IndexInfo *indexInfo,
-		bool is_alter_table)
+		bool is_alter_table,
+		IndexStmt *stmt)
 {
 	List	   *cmds;
 	int			i;
@@ -280,7 +282,11 @@ index_check_primary_key(Relation heapRel,
 	 * unduly.
 	 */
 	if (cmds)
+	{
+		EventTriggerAlterTableStart((Node *) stmt);
 		AlterTableInternal(RelationGetRelid(heapRel), cmds, true);
+		EventTriggerAlterTableEnd();
+	}
 }
 
 /*
diff --git a/src/backend/commands/event_trigger.c b/src/backend/commands/event_trigger.c
index eecc85d14e..2c1dc47541 100644
--- a/src/backend/commands/event_trigger.c
+++ b/src/backend/commands/event_trigger.c
@@ -1696,11 +1696,6 @@ EventTriggerCollectSimpleCommand(ObjectAddress address,
  * Note we don't collect the command immediately; instead we keep it in
  * currentCommand, and only when we're done processing the subcommands we will
  * add it to the command list.
- *
- * XXX -- this API isn't considering the possibility of an ALTER TABLE command
- * being called reentrantly by an event trigger function.  Do we need stackable
- * commands at this level?	Perhaps at least we should detect the condition and
- * raise an error.
  */
 void
 EventTriggerAlterTableStart(Node *parsetree)
@@ -1725,6 +1720,7 @@ EventTriggerAlterTableStart(Node *parsetree)
 	command->d.alterTable.subcmds = NIL;
 	command->parsetree = copyObject(parsetree);
 
+	command->parent = currentEventTriggerState->currentCommand;
 	currentEventTriggerState->currentCommand = command;
 
 	MemoryContextSwitchTo(oldcxt);
@@ -1765,6 +1761,7 @@ EventTriggerCollectAlterTableSubcmd(Node *subcmd, ObjectAddress address)
 		return;
 
 	Assert(IsA(subcmd, AlterTableCmd));
+	Assert(OidIsValid(currentEventTriggerState->currentCommand));
 	Assert(OidIsValid(currentEventTriggerState->currentCommand->d.alterTable.objectId));
 
 	oldcxt = MemoryContextSwitchTo(currentEventTriggerState->cxt);
@@ -1790,11 +1787,15 @@ EventTriggerCollectAlterTableSubcmd(Node *subcmd, ObjectAddress address)
 void
 EventTriggerAlterTableEnd(void)
 {
+	CollectedCommand *parent;
+
 	/* ignore if event trigger context not set, or collection disabled */
 	if (!currentEventTriggerState ||
 		currentEventTriggerState->commandCollectionInhibited)
 		return;
 
+	parent = currentEventTriggerState->currentCommand->parent;
+
 	/* If no subcommands, don't collect */
 	if (list_length(currentEventTriggerState->currentCommand->d.alterTable.subcmds) != 0)
 	{
@@ -1805,7 +1806,7 @@ EventTri

Re: Relax transactional restrictions on ALTER ENUM ... ADD TYPE (redux)

2018-10-06 Thread Tom Lane
Thomas Munro  writes:
> Thanks.  Here is a version squashed into one commit, with a decent
> commit message and a small improvement: the code to create the hash
> table is moved into a static function, to avoid repetition.  I will
> push this to master early next week, unless there is more feedback or
> one of you would prefer to do that.

Nitpicky gripes:

* In the commit message's references to prior commits, I think it
should be standard to refer to master-branch commit hashes unless
there's some really good reason to do otherwise (and then you should
say that this commit is on branch X).  Your reference to the revert
commit is pointing to the REL_10_STABLE back-patch commit.

* In the (de)serialization code, it seems kinda ugly to me to use "Oid"
as the type of the initial count-holding value, rather than "int".
I suppose you did that to avoid alignment issues in case Oid should
someday be a different size than "int", but it would be a good thing
to have a comment explaining that and pointing out specifically that
the first item is a count not an OID.  (Right now, a reader has to
reverse-engineer that fact, which I do not think is polite to the
reader.)

* Also, I don't much like the lack of any cross-check that
SerializeEnumBlacklist has been passed the correct amount of space.
I think there should be at least an assert at the end that it hasn't
overrun the space the caller gave it.  As-is, there's exactly no
protection against the possibility that the hash traversal produced a
different number of entries than what EstimateEnumBlacklistSpace saw.

regards, tom lane



Re: executor relation handling

2018-10-06 Thread Tom Lane
Amit Langote  writes:
> On 2018/10/05 5:59, Tom Lane wrote:
>> So I'm inclined to just omit 0003.  AFAICS this would only mean that
>> we couldn't drop the global PlanRowMarks list from PlannedStmt, which
>> does not bother me much.

> To be honest, I too had begun to fail to see the point of this patch since
> yesterday.  In fact, getting this one to pass make check-world took a bit
> of head-scratching due to its interaction with EvalPlanQual's EState
> building, so I was almost ready to drop it from the series.  But I felt that it
> might still be a good idea to get rid of what was described as special
> case code.  Reading your argument against it based on how it messes up
> some of the assumptions regarding ExecRowMark design, I'm willing to let
> go of this one as more or less a cosmetic improvement.

OK.  We do need to fix the comments in InitPlan that talk about acquiring
locks and how that makes us need to do things in a particular order, but
I'll go take care of that.

> So, that leaves us with only the patch that revises the plan node fields
> in light of having relieved the executor of doing any locking of its own
> in *some* cases.  That patch's premise is that we don't need the fields
> that the patch removes because they're only referenced for the locking
> purposes.  But, if those plan nodes support parallel execution and hence
> will be passed to parallel workers for execution who will need to lock the
> tables contained in the plan nodes, then they better contain all the
> information needed for locking *every* affected tables, so we had better
> not removed it.

No, I think this is unduly pessimistic.  We need to make sure that a
parallel worker has lock on tables it's actually touching, but I don't
see why that should imply a requirement to hold lock on parent tables
it never touches.

The reasons why we need locks on tables not physically accessed by the
query are (a) to ensure that we've blocked, or received sinval messages
for, any DDL related to views or partition parent tables, in case that
would invalidate the plan; (b) to allow firing triggers safely, in
the case of partition parent tables.  Neither of these issues apply to
a parallel worker -- the plan is already frozen before it can ever
start, and it isn't going to be firing any triggers either.

In particular, I think it's fine to get rid of
ExecLockNonLeafAppendTables.  In the parent, that's clearly useless code
now: we have already locked *every* RTE_RELATION entry in the rangetable,
either when the parser/rewriter/planner added the RTE to the list to begin
with, or during AcquireExecutorLocks if the plan was retrieved from the
plancache.  In a child it seems unnecessary as long as we're locking the
leaf rels we actually touch.

Possibly, we might fix the problem of inadequate locking in worker
processes by having them do the equivalent of AcquireExecutorLocks, ie
just run through the whole RT list and lock everything.  I think we'd soon
end up doing that if we ever try to allow parallelization of non-read-only
queries; but that's a long way off AFAIK.  For read-only queries it seems
like it'll be fine if we just rejigger ExecGetRangeTableRelation to take a
lock when IsParallelWorker.
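
As a rough sketch of that last idea (names as in current master; purely
illustrative, not a worked-out patch), ExecGetRangeTableRelation could do
something like:

    RangeTblEntry *rte = exec_rt_fetch(rti, estate);
    Relation    rel;

    if (IsParallelWorker())
        rel = heap_open(rte->relid, rte->rellockmode); /* worker takes lock */
    else
        rel = heap_open(rte->relid, NoLock);    /* leader already holds it */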

regards, tom lane



Re: Unclear error message

2018-10-06 Thread Laurenz Albe
Michael Paquier wrote:
> On Thu, Sep 20, 2018 at 09:45:09PM +0200, Laurenz Albe wrote:
> > That message is wrong, because "rel" and not "pkrel" is the partitioned 
> > table.
> > I think it should be
> > 
> > ereport(ERROR,
> > (errcode(ERRCODE_WRONG_OBJECT_TYPE),
> >  errmsg("foreign key cannot be defined on ONLY \"%s\" for a 
> > partitioned table",
> > RelationGetRelationName(rel))));
> 
> Hmm...  There are no test cases for this error message, nor for the one
> below "cannot add NOT VALID foreign key to relation foo", which is a bad
> idea.
> 
> The referenced relation has to be RELKIND_RELATION.  Here is another
> idea for the error message:
> cannot use ONLY on partitioned table "foo" with foreign key

True; how about the attached?

I think this should go in before v11.

Yours,
Laurenz Albe
From d305cfe821decbe9ff113626c945a521fb55191c Mon Sep 17 00:00:00 2001
From: Laurenz Albe 
Date: Sat, 6 Oct 2018 23:13:39 +0200
Subject: [PATCH] Fix an error message

The message for creating a foreign key on a partitioned
table using ONLY was wrong.

Also, there were regression tests missing for this and another
error message.

Laurenz Albe, reviewed by Michael Paquier
---
 src/backend/commands/tablecmds.c  | 4 ++--
 src/test/regress/expected/foreign_key.out | 7 +++
 src/test/regress/sql/foreign_key.sql  | 5 +
 3 files changed, 14 insertions(+), 2 deletions(-)

diff --git a/src/backend/commands/tablecmds.c b/src/backend/commands/tablecmds.c
index c145385f84..b3050ee95a 100644
--- a/src/backend/commands/tablecmds.c
+++ b/src/backend/commands/tablecmds.c
@@ -7365,8 +7365,8 @@ ATAddForeignKeyConstraint(List **wqueue, AlteredTableInfo *tab, Relation rel,
 		if (!recurse)
 			ereport(ERROR,
 	(errcode(ERRCODE_WRONG_OBJECT_TYPE),
-	 errmsg("foreign key referencing partitioned table \"%s\" must not be ONLY",
-			RelationGetRelationName(pkrel))));
+	 errmsg("cannot define foreign key on partitioned table \"%s\" with ONLY",
+			RelationGetRelationName(rel))));
 		if (fkconstraint->skip_validation && !fkconstraint->initially_valid)
 			ereport(ERROR,
 	(errcode(ERRCODE_WRONG_OBJECT_TYPE),
diff --git a/src/test/regress/expected/foreign_key.out b/src/test/regress/expected/foreign_key.out
index fc3bbe4deb..e423b8ffb2 100644
--- a/src/test/regress/expected/foreign_key.out
+++ b/src/test/regress/expected/foreign_key.out
@@ -1465,6 +1465,13 @@ CREATE TABLE fk_partitioned_fk_3_0 PARTITION OF fk_partitioned_fk_3 FOR VALUES W
 CREATE TABLE fk_partitioned_fk_3_1 PARTITION OF fk_partitioned_fk_3 FOR VALUES WITH (MODULUS 5, REMAINDER 1);
 ALTER TABLE fk_partitioned_fk ATTACH PARTITION fk_partitioned_fk_3
   FOR VALUES FROM (2000,2000) TO (3000,3000);
+-- but creating a foreign key with ONLY on a partitioned table doesn't work
+ALTER TABLE ONLY fk_partitioned_fk ADD FOREIGN KEY (a, b) REFERENCES fk_notpartitioned_pk;
+ERROR:  cannot define foreign key on partitioned table "fk_partitioned_fk" with ONLY
+-- also, adding a NOT VALID foreign key should fail
+ALTER TABLE fk_partitioned_fk ADD FOREIGN KEY (a, b) REFERENCES fk_notpartitioned_pk NOT VALID;
+ERROR:  cannot add NOT VALID foreign key to relation "fk_notpartitioned_pk"
+DETAIL:  This feature is not yet supported on partitioned tables.
 -- these inserts, targetting both the partition directly as well as the
 -- partitioned table, should all fail
 INSERT INTO fk_partitioned_fk (a,b) VALUES (500, 501);
diff --git a/src/test/regress/sql/foreign_key.sql b/src/test/regress/sql/foreign_key.sql
index d2cecdf4eb..85540a0fae 100644
--- a/src/test/regress/sql/foreign_key.sql
+++ b/src/test/regress/sql/foreign_key.sql
@@ -1106,6 +1106,11 @@ CREATE TABLE fk_partitioned_fk_3_1 PARTITION OF fk_partitioned_fk_3 FOR VALUES W
 ALTER TABLE fk_partitioned_fk ATTACH PARTITION fk_partitioned_fk_3
   FOR VALUES FROM (2000,2000) TO (3000,3000);
 
+-- but creating a foreign key with ONLY on a partitioned table doesn't work
+ALTER TABLE ONLY fk_partitioned_fk ADD FOREIGN KEY (a, b) REFERENCES fk_notpartitioned_pk;
+-- also, adding a NOT VALID foreign key should fail
+ALTER TABLE fk_partitioned_fk ADD FOREIGN KEY (a, b) REFERENCES fk_notpartitioned_pk NOT VALID;
+
 -- these inserts, targetting both the partition directly as well as the
 -- partitioned table, should all fail
 INSERT INTO fk_partitioned_fk (a,b) VALUES (500, 501);
-- 
2.17.1



Re: Creating Certificates

2018-10-06 Thread Tatsuo Ishii
> It will in fact be in the certificate:

Oh, ok. That makes sense. Thanks for the explanation.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp



Re: Segfault when creating partition with a primary key and sql_drop trigger exists

2018-10-06 Thread Alvaro Herrera
On 2018-Oct-06, Alvaro Herrera wrote:

> here's my proposed patch.

Pushed a few hours ago.

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Segfault when creating partition with a primary key and sql_drop trigger exists

2018-10-06 Thread Michael Paquier
On Sat, Oct 06, 2018 at 09:32:02PM -0300, Alvaro Herrera wrote:
> On 2018-Oct-06, Alvaro Herrera wrote:
>> here's my proposed patch.
> 
> Pushed a few hours ago.

Thanks, Alvaro.
--
Michael




Re: pg_upgrade failed with ERROR: null relpartbound for relation 18159 error.

2018-10-06 Thread Alvaro Herrera
On 2018-Oct-05, Amit Langote wrote:

> On 2018/10/05 16:05, Michael Paquier wrote:
> >> As of commit 2fbdf1b38bc [1], which has been applied in 11 and HEAD
> >> branches, RelationBuildPartitionDesc emits an error if we don't find
> >> relpartbound set for a child found by scanning pg_inherits, instead of
> >> skipping such children.  While that commit switched the order of creating
> >> pg_inherits entry and checking a new bound against existing bounds in
> >> DefineRelation in light of aforementioned change, it didn't in
> >> ATExecAttachPartition, hence this error.
> >>
> >> Attached patch fixes that.
> > 
> > Could you please add a minimal regression test in your patch?  That's
> > the second bug related to ATTACH PARTITION I am looking at today..
> 
> OK, done, although I hope that's not bloating the tests.

Pushed to 11 and master.  I'm not convinced that it's a good idea to
mess with 10 at this point -- the partitioning code is rather completely
different already ...

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: Relax transactional restrictions on ALTER ENUM ... ADD TYPE (redux)

2018-10-06 Thread Thomas Munro
On Sun, Oct 7, 2018 at 5:53 AM Tom Lane  wrote:
> Thomas Munro  writes:
> > Thanks.  Here is a version squashed into one commit, with a decent
> > commit message and a small improvement: the code to create the hash
> > table is moved into a static function, to avoid repetition.  I will
> > push this to master early next week, unless there is more feedback or
> > one of you would prefer to do that.
>
> Nitpicky gripes:

Thanks for the review.

> * In the commit message's references to prior commits, I think it
> should be standard to refer to master-branch commit hashes unless
> there's some really good reason to do otherwise (and then you should
> say that this commit is on branch X).  Your reference to the revert
> commit is pointing to the REL_10_STABLE back-patch commit.

Oops, fixed.

> * In the (de)serialization code, it seems kinda ugly to me to use "Oid"
> as the type of the initial count-holding value, rather than "int".
> I suppose you did that to avoid alignment issues in case Oid should
> someday be a different size than "int", but it would be a good thing
> to have a comment explaining that and pointing out specifically that
> the first item is a count not an OID.  (Right now, a reader has to
> reverse-engineer that fact, which I do not think is polite to the
> reader.)

I got rid of the leading size and used InvalidOid as a terminator
instead, with comments to make that clear.  The alternative would be a
struct with a size and then a flexible array member, but there seems
to be no real advantage to that.
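
For illustration, the terminator-based layout amounts to something like the
following (the variable names here are made up; the iteration uses the
standard dynahash API):

    HASH_SEQ_STATUS status;
    Oid        *value;
    int         i = 0;

    hash_seq_init(&status, enum_blacklist);
    while ((value = (Oid *) hash_seq_search(&status)) != NULL)
        serialized[i++] = *value;
    serialized[i] = InvalidOid;     /* terminator, no leading count */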

> * Also, I don't much like the lack of any cross-check that
> SerializeEnumBlacklist has been passed the correct amount of space.
> I think there should be at least an assert at the end that it hasn't
> overrun the space the caller gave it.  As-is, there's exactly no
> protection against the possibility that the hash traversal produced a
> different number of entries than what EstimateEnumBlacklistSpace saw.

I added a couple of assertions.

-- 
Thomas Munro
http://www.enterprisedb.com


0001-Relax-transactional-restrictions-on-ALTER-TYPE-.--v4.patch
Description: Binary data


Re: now() vs transaction_timestamp()

2018-10-06 Thread Amit Kapila
On Sat, Oct 6, 2018 at 9:40 PM Tom Lane  wrote:
>
> Konstantin Knizhnik  writes:
> > On 06.10.2018 00:25, Tom Lane wrote:
> >> So maybe the right answer is to change the parallel mode infrastructure
> >> so it transmits xactStartTimestamp, making transaction_timestamp()
> >> retroactively safe, and then in HEAD only we could re-mark now() as
> >> safe.  We might as well do the same for statement_timestamp as well.
>
> > Attached please find very small patch fixing the problem (propagating
> > transaction and statement timestamps to workers).
>
> That's a bit too small ;-) ... one demonstrable problem with it is
> that the parallel worker will report the wrong xactStartTimestamp
> to pgstat_report_xact_timestamp(), since you aren't jamming the
> transmitted value in soon enough.  Also, I found that ParallelWorkerMain
> executes at least two transactions before it ever gets to the "main"
> transaction that does real work, and I didn't much care for the fact
> that those were running with worker-local values of xactStartTimestamp
> and stmtStartTimestamp.  So I rearranged things a bit to ensure that
> parallel workers wouldn't generate their own values for either
> timestamp, and pushed it.
>

Currently, we serialize the other transaction-related stuff via
PARALLEL_KEY_TRANSACTION_STATE.   However, this patch has serialized
xact_ts via PARALLEL_KEY_FIXED, which appears okay, but I think it
would have been easier for future readers of the code if all the
similar state variables had been serialized using the same key.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: now() vs transaction_timestamp()

2018-10-06 Thread Tom Lane
Amit Kapila  writes:
> On Sat, Oct 6, 2018 at 9:40 PM Tom Lane  wrote:
>> That's a bit too small ;-) ... one demonstrable problem with it is
>> that the parallel worker will report the wrong xactStartTimestamp
>> to pgstat_report_xact_timestamp(), since you aren't jamming the
>> transmitted value in soon enough.  Also, I found that ParallelWorkerMain
>> executes at least two transactions before it ever gets to the "main"
>> transaction that does real work, and I didn't much care for the fact
>> that those were running with worker-local values of xactStartTimestamp
>> and stmtStartTimestamp.  So I rearranged things a bit to ensure that
>> parallel workers wouldn't generate their own values for either
>> timestamp, and pushed it.

> Currently, we serialize the other transaction related stuff via
> PARALLEL_KEY_TRANSACTION_STATE.   However, this patch has serialized
> xact_ts via PARALLEL_KEY_FIXED which appears okay, but I think it
> would have been easier for future readers of the code if all the
> similar state variables have been serialized by using the same key.

That state is restored at least two transactions too late.

regards, tom lane



Re: now() vs transaction_timestamp()

2018-10-06 Thread Amit Kapila
On Sun, Oct 7, 2018 at 10:36 AM Tom Lane  wrote:
>
> Amit Kapila  writes:
> > On Sat, Oct 6, 2018 at 9:40 PM Tom Lane  wrote:
> >> That's a bit too small ;-) ... one demonstrable problem with it is
> >> that the parallel worker will report the wrong xactStartTimestamp
> >> to pgstat_report_xact_timestamp(), since you aren't jamming the
> >> transmitted value in soon enough.  Also, I found that ParallelWorkerMain
> >> executes at least two transactions before it ever gets to the "main"
> >> transaction that does real work, and I didn't much care for the fact
> >> that those were running with worker-local values of xactStartTimestamp
> >> and stmtStartTimestamp.  So I rearranged things a bit to ensure that
> >> parallel workers wouldn't generate their own values for either
> >> timestamp, and pushed it.
>
> > Currently, we serialize the other transaction related stuff via
> > PARALLEL_KEY_TRANSACTION_STATE.   However, this patch has serialized
> > xact_ts via PARALLEL_KEY_FIXED which appears okay, but I think it
> > would have been easier for future readers of the code if all the
> > similar state variables have been serialized by using the same key.
>
> That state is restored at least two transactions too late.
>

That is right, but I think we can perform shm_toc_lookup for
PARALLEL_KEY_TRANSACTION_STATE at some earlier point and then use the
variables from it to restore the respective state at different
points in time.

-- 
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com



Re: now() vs transaction_timestamp()

2018-10-06 Thread Konstantin Knizhnik




On 07.10.2018 07:58, Amit Kapila wrote:

On Sat, Oct 6, 2018 at 9:40 PM Tom Lane  wrote:

Konstantin Knizhnik  writes:

On 06.10.2018 00:25, Tom Lane wrote:

So maybe the right answer is to change the parallel mode infrastructure
so it transmits xactStartTimestamp, making transaction_timestamp()
retroactively safe, and then in HEAD only we could re-mark now() as
safe.  We might as well do the same for statement_timestamp as well.

Attached please find very small patch fixing the problem (propagating
transaction and statement timestamps to workers).

That's a bit too small ;-) ... one demonstrable problem with it is
that the parallel worker will report the wrong xactStartTimestamp
to pgstat_report_xact_timestamp(), since you aren't jamming the
transmitted value in soon enough.  Also, I found that ParallelWorkerMain
executes at least two transactions before it ever gets to the "main"
transaction that does real work, and I didn't much care for the fact
that those were running with worker-local values of xactStartTimestamp
and stmtStartTimestamp.  So I rearranged things a bit to ensure that
parallel workers wouldn't generate their own values for either
timestamp, and pushed it.


Currently, we serialize the other transaction related stuff via
PARALLEL_KEY_TRANSACTION_STATE.
Yes, my first thought was to add serialization of timestamps to
SerializeTransactionState.
But it performs serialization into an array of TransactionId, which is
32-bit (except in PGProEE), and so too small for TimestampTz.  Certainly it
is possible to store a timestamp in a pair of words, but frankly speaking
I do not like the approach used in SerializeTransactionState: IMHO a
serializer should either produce a stream of bytes or use some structure.
Serialization to an array of words combines the drawbacks of both approaches.
Also, using the type TransactionId for something very different from XIDs
does not seem like a good idea.
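
For illustration, the word-pair workaround I mean would look roughly like
this (entirely hypothetical code, with tstate being the TransactionId array):

    TimestampTz ts = GetCurrentTransactionStartTimestamp();

    tstate[i++] = (TransactionId) ((uint64) ts >> 32);          /* high word */
    tstate[i++] = (TransactionId) ((uint64) ts & 0xFFFFFFFF);   /* low word */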




However, this patch has serialized
xact_ts via PARALLEL_KEY_FIXED which appears okay, but I think it
would have been easier for future readers of the code if all the
similar state variables have been serialized by using the same key.






Re: now() vs transaction_timestamp()

2018-10-06 Thread Tom Lane
Amit Kapila  writes:
> On Sun, Oct 7, 2018 at 10:36 AM Tom Lane  wrote:
>> That state is restored at least two transactions too late.

> That is right, but I think we can perform shm_toc_lookup for
> PARALLEL_KEY_TRANSACTION_STATE at some earlier point and then use the
> variables from it to restore the respective state at the different
> point of times.

That hardly seems cleaner.

I think this is actually the right way, because
PARALLEL_KEY_TRANSACTION_STATE is (at least by the name) state associated
with the main transaction the worker is going to run.  But given what
I did to xact.c just now, xactStartTimestamp and stmtStartTimestamp are
*not* transaction-local state so far as the worker is concerned.  They
will persist throughout the lifetime of that process, just as the database
ID, user ID, etc, will.  So you might as well argue that none of
FixedParallelState should exist because it should all be in
PARALLEL_KEY_TRANSACTION_STATE, and that doesn't seem like much of
an improvement.

regards, tom lane



Defaulting to password_encryption = scram-sha-256

2018-10-06 Thread Andres Freund
Hi,

Now that we probably have shaken the worst issues out of scram,
shouldn't we change the default password_encryption to something that
doesn't scare people?   The only reason I can think of for not wanting
to do that is that we don't necessarily guarantee that we have a strong
random generator, but if that's the issue, we should change initdb to
default it to something safe if the platform provides something. Which
is just about any sane platform, no?
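
For anyone who wants to opt in ahead of any default change, it is already
a one-liner (the role name below is made up):

    SET password_encryption = 'scram-sha-256';
    ALTER ROLE alice PASSWORD 'secret';
    -- the stored verifier format can be confirmed in pg_authid:
    SELECT rolname, left(rolpassword, 13) FROM pg_authid
    WHERE rolname = 'alice';    -- shows SCRAM-SHA-256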

Greetings,

Andres Freund