[HACKERS] Re: [COMMITTERS] pgsql: Respect TEMP_CONFIG when running contrib regression tests.

2016-02-27 Thread Andrew Dunstan



On 02/26/2016 10:59 PM, Robert Haas wrote:

On Sat, Feb 27, 2016 at 9:00 AM, Andrew Dunstan  wrote:

Sure.  Saving three lines of Makefile duplication is hardly a
world-shattering event, so I thought there might be some other
purpose.  But I'm not against saving three lines of duplication
either, if it won't break anything.

The point is that we should do this for several other test sets as well as
contrib - isolation tests, PL tests and ecpg tests.

OK, I was wondering about that.  I can try to write a patch, or
someone else can, but if you already understand what needs to be done,
perhaps you should just go ahead.




What I had in mind was something like the attached.

In testing this seems to do the right thing, and the nice part is that 
it will be picked up by the buildfarm in the one case that's relevant, 
namely the ecpg tests.


The only fly in the ointment is that there are a few places that set 
--temp-config explicitly:


   ./contrib/test_decoding/Makefile:--temp-config
   $(top_srcdir)/contrib/test_decoding/logical.conf \
   ./contrib/test_decoding/Makefile:--temp-config
   $(top_srcdir)/contrib/test_decoding/logical.conf \
   ./src/test/modules/commit_ts/Makefile:REGRESS_OPTS =
   --temp-config=$(top_srcdir)/src/test/modules/commit_ts/commit_ts.conf
   ./src/test/modules/test_rls_hooks/Makefile:REGRESS_OPTS =
   --temp-config=$(top_srcdir)/src/test/modules/test_rls_hooks/rls_hooks.conf


Perhaps what we need to do is modify pg_regress.c slightly to allow more 
than one --temp-config argument. But that could be done later.



cheers

andrew
diff --git a/contrib/contrib-global.mk b/contrib/contrib-global.mk
index ba49610..6ac8e9b 100644
--- a/contrib/contrib-global.mk
+++ b/contrib/contrib-global.mk
@@ -1,9 +1,4 @@
 # contrib/contrib-global.mk
 
-# file with extra config for temp build
-ifdef TEMP_CONFIG
-REGRESS_OPTS += --temp-config=$(TEMP_CONFIG)
-endif
-
 NO_PGXS = 1
 include $(top_srcdir)/src/makefiles/pgxs.mk
diff --git a/src/Makefile.global.in b/src/Makefile.global.in
index e94d6a5..47b265e 100644
--- a/src/Makefile.global.in
+++ b/src/Makefile.global.in
@@ -524,14 +524,20 @@ ifdef NO_LOCALE
 NOLOCALE += --no-locale
 endif
 
+# file with extra config for temp build
+TEMP_CONF =
+ifdef TEMP_CONFIG
+TEMP_CONF += --temp-config=$(TEMP_CONFIG)
+endif
+
 pg_regress_locale_flags = $(if $(ENCODING),--encoding=$(ENCODING)) $(NOLOCALE)
 
-pg_regress_check = $(with_temp_install) $(top_builddir)/src/test/regress/pg_regress --inputdir=$(srcdir) --temp-instance=./tmp_check --bindir= $(pg_regress_locale_flags) $(EXTRA_REGRESS_OPTS)
+pg_regress_check = $(with_temp_install) $(top_builddir)/src/test/regress/pg_regress --inputdir=$(srcdir) --temp-instance=./tmp_check $(TEMP_CONF) --bindir= $(pg_regress_locale_flags) $(EXTRA_REGRESS_OPTS)
 pg_regress_installcheck = $(top_builddir)/src/test/regress/pg_regress --inputdir=$(srcdir) --bindir='$(bindir)' $(pg_regress_locale_flags) $(EXTRA_REGRESS_OPTS)
 
 pg_regress_clean_files = results/ regression.diffs regression.out tmp_check/ log/
 
-pg_isolation_regress_check = $(with_temp_install) $(top_builddir)/src/test/isolation/pg_isolation_regress --inputdir=$(srcdir) --temp-instance=./tmp_check --bindir= $(pg_regress_locale_flags) $(EXTRA_REGRESS_OPTS)
+pg_isolation_regress_check = $(with_temp_install) $(top_builddir)/src/test/isolation/pg_isolation_regress --inputdir=$(srcdir) --temp-instance=./tmp_check $(TEMP_CONF) --bindir= $(pg_regress_locale_flags) $(EXTRA_REGRESS_OPTS)
 pg_isolation_regress_installcheck = $(top_builddir)/src/test/isolation/pg_isolation_regress --inputdir=$(srcdir) $(pg_regress_locale_flags) $(EXTRA_REGRESS_OPTS)
 
 ##
diff --git a/src/interfaces/ecpg/test/Makefile b/src/interfaces/ecpg/test/Makefile
index a4ac021..4ed785b 100644
--- a/src/interfaces/ecpg/test/Makefile
+++ b/src/interfaces/ecpg/test/Makefile
@@ -78,11 +78,11 @@ endif
 REGRESS_OPTS = --dbname=regress1,connectdb --create-role=connectuser,connectdb $(EXTRA_REGRESS_OPTS)
 
 check: all
-	$(with_temp_install) ./pg_regress $(REGRESS_OPTS) --temp-instance=./tmp_check --bindir= $(pg_regress_locale_flags) $(THREAD) --schedule=$(srcdir)/ecpg_schedule
+	$(with_temp_install) ./pg_regress $(REGRESS_OPTS) --temp-instance=./tmp_check $(TEMP_CONF) --bindir= $(pg_regress_locale_flags) $(THREAD) --schedule=$(srcdir)/ecpg_schedule
 
 # the same options, but with --listen-on-tcp
 checktcp: all
-	$(with_temp_install) ./pg_regress $(REGRESS_OPTS) --temp-instance=./tmp_check --bindir= $(pg_regress_locale_flags) $(THREAD) --schedule=$(srcdir)/ecpg_schedule_tcp --host=localhost
+	$(with_temp_install) ./pg_regress $(REGRESS_OPTS) --temp-instance=./tmp_check $(TEMP_CONF) --bindir= $(pg_regress_locale_flags) $(THREAD) --schedule=$(srcdir)/ecpg_schedule_tcp --host=localhost
 
 installcheck: all
 	./pg_regress $(REGRESS_OPTS) --bindir='$(bindir)' $(pg_regress_locale_f

[HACKERS] Re: [COMMITTERS] pgsql: Respect TEMP_CONFIG when running contrib regression tests.

2016-02-27 Thread Andrew Dunstan



On 02/27/2016 09:25 AM, Robert Haas wrote:

On Sat, Feb 27, 2016 at 7:08 PM, Andrew Dunstan  wrote:

What I had in mind was something like the attached.

In testing this seems to do the right thing, and the nice part is that it
will be picked up by the buildfarm in the one case that's relevant, namely
the ecpg tests.

The only fly in the ointment is that there are a few places that set
--temp-config explicitly:

./contrib/test_decoding/Makefile:--temp-config
$(top_srcdir)/contrib/test_decoding/logical.conf \
./contrib/test_decoding/Makefile:--temp-config
$(top_srcdir)/contrib/test_decoding/logical.conf \
./src/test/modules/commit_ts/Makefile:REGRESS_OPTS =
--temp-config=$(top_srcdir)/src/test/modules/commit_ts/commit_ts.conf
./src/test/modules/test_rls_hooks/Makefile:REGRESS_OPTS =

--temp-config=$(top_srcdir)/src/test/modules/test_rls_hooks/rls_hooks.conf

Perhaps what we need to do is modify pg_regress.c slightly to allow more
than one --temp-config argument. But that could be done later.

Well, I'm pretty interested in using --temp-config for parallelism
testing; I want to be able to run the whole regression test suite with
a given --temp-config.  I'm in agreement with this change but if it
doesn't play well with that need, I suppose I'll be writing that
pg_regress.c patch sooner rather than later.



"doesn't meet your need" is probably a better way of putting it. The 
facility's use has grown beyond what I originally envisaged, so I think 
we will need that patch.


Would you like me to apply what I have?

cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Re: [COMMITTERS] pgsql: Respect TEMP_CONFIG when running contrib regression tests.

2016-02-28 Thread Andrew Dunstan



On 02/27/2016 01:24 PM, John Gorman wrote:


On Sat, Feb 27, 2016 at 9:25 AM, Robert Haas  wrote:


On Sat, Feb 27, 2016 at 7:08 PM, Andrew Dunstan  wrote:

> Perhaps what we need to do is modify pg_regress.c slightly to allow more
> than one --temp-config argument. But that could be done later.

Well, I'm pretty interested in using --temp-config for parallelism
testing; I want to be able to run the whole regression test suite with
a given --temp-config.  I'm in agreement with this change but if it
doesn't play well with that need, I suppose I'll be writing that
pg_regress.c patch sooner rather than later.


Here is a patch to allow pg_regress to include several --temp-config 
files.





Thanks, wonderfully small patch. Applied.

cheers

andrew




Re: [HACKERS] Equivalent of --enable-tap-tests in MSVC scripts

2016-03-01 Thread Andrew Dunstan



On 03/01/2016 08:00 AM, Michael Paquier wrote:

Hi all,

As of now the MSVC scripts control if TAP tests are enabled or not
using a boolean flag as $config->{tap_tests}. However, this flag is
just taken into account in vcregress.pl, with the following issues:
1) config_default.pl does not list tap_tests, so it is unclear to
users how to enable them. People need to look into vcregress.pl as a
starting point.
2) GetFakeConfigure() does not translate $config->{tap_tests} into
--enable-tap-tests, leading to pg_config not reporting it in
CONFIGURE. This is inconsistent with what is done in ./configure.

Attached is a patch to address those two issues.



Good work. There seem to be some unrelated whitespace changes. Shouldn't 
this just be two extra lines?


cheers

andrew




Re: [HACKERS] VS 2015 support in src/tools/msvc

2016-03-03 Thread Andrew Dunstan



On 03/03/2016 09:02 AM, Michael Paquier wrote:

Hi all,

Microsoft provides a set of VMs that one can use for testing and
Windows 10 is in the set:
https://dev.windows.com/en-us/microsoft-edge/tools/vms/windows/
I have grabbed one and installed the community version of Visual
Studio 2015 so I think that I am going to be able to compile Postgres
with VS2015 with a bit of black magic.

My goal is double:
1) As far as things have been discussed, VS2015 makes the compilation
of Postgres difficult, particularly for locales. So I'd like to
see what the problems behind that are and see if we can patch it
properly. This would facilitate the integration of cmake as well for
Windows.
2) src/tools/msvc stuff has support only up to VS2013. I think that it
would be nice to bump that a bit and get something for 9.5~.

So, would there be interest for a patch on the perl scripts in
src/tools/msvc or are they considered a lost cause? Having a look at
the failures could be done with the cmake work, but it seems a bit
premature to me to look at that for the moment, and having support for
VS2015 with 9.5 (support for new versions of VS has won a backpatch the
last couple of years) would be a good thing I think.




I am not holding my breath on cmake. Until we have something pretty 
solid on that front I'm going to assume it's not happening. If we're 
going to support VS2015 (and I think we should) then it should be 
supported for all live branches if possible. I'm assuming the changes 
would be pretty localized, at least in src/tools/msvc, and adding a new 
compiler shouldn't break anything with existing compilers.


cheers

andrew




Re: [HACKERS] VS 2015 support in src/tools/msvc

2016-03-05 Thread Andrew Dunstan



On 03/05/2016 12:46 AM, Tom Lane wrote:

Michael Paquier  writes:

On Sat, Mar 5, 2016 at 1:41 PM, Petr Jelinek  wrote:

I vote for just using sed considering we need flex and bison anyway.

OK cool, we could go with something else than sed to generate probes.h
but that seems sensible considering that this should definitely be
back-patched. Not sure what the others think about adding a new file
in the source tarball by default though.

AFAIK, sed, flex and bison originate from three separate source projects;
there is no reason to suppose that the presence of flex and bison on a
particular system guarantees the presence of sed.  I thought the proposal
to get rid of the psed dependence in favor of some more perl code was
pretty sane.




Here is a translation into perl of the sed script, courtesy of the s2p 
incarnation of psed: 
<https://gist.github.com/adunstan/d61b1261a4b91496bdc6> The sed script 
appears to have been stable for a long time, so I don't think we need to 
be too concerned about possibly maintaining two versions.


cheers

andrew





Re: [HACKERS] VS 2015 support in src/tools/msvc

2016-03-05 Thread Andrew Dunstan



On 03/05/2016 01:31 PM, Michael Paquier wrote:

On Sat, Mar 5, 2016 at 11:34 PM, Andrew Dunstan  wrote:

Here is a translation into perl of the sed script, courtesy of the s2p
incarnation of psed: <https://gist.github.com/adunstan/d61b1261a4b91496bdc6>
The sed script appears to have been stable for a long time, so I don't think
we need to be too concerned about possibly maintaining two versions.

That's 95% of the work already done, nice! If I finish wrapping up a
patch for this issue at least would you backpatch? It would be saner
to get rid of this dependency everywhere I think regarding compilation
with perl 5.22.


Sure.

cheers

andrew




Re: [HACKERS] Allowing to run a buildfarm animal under valgrind

2016-03-08 Thread Andrew Dunstan



On 03/07/2016 08:39 PM, Andres Freund wrote:

Hi,

I'm setting up a buildfarm animal that runs under
valgrind. Unfortunately there's not really any good solution to force
make check et al. to start postgres wrapped in valgrind.  For now I've
resorted to adding something like

sub replace_postgres
{
 my $srcdir=$use_vpath ? "../pgsql/" : ".";
 my $builddir=abs_path("$pgsql");
 $srcdir=abs_path("$pgsql/$srcdir");
 chdir "$pgsql/src/backend/";
 rename "postgres", "postgres.orig";
 sysopen my $fh, "postgres", O_CREAT|O_TRUNC|O_RDWR, 0700
 or die "Could not create postgres wrapper";
 print $fh <<"END";
#!/bin/bash
~/src/valgrind/vg-in-place \\
 --quiet \\
 --error-exitcode=128 \\
 --suppressions=$srcdir/src/tools/valgrind.supp \\
 --trace-children=yes --track-origins=yes --read-var-info=yes \\
 --leak-check=no \\
 $builddir/src/backend/postgres.orig \\
 "\$@"
END
 close $fh;
 chdir $branch_root;
}
to the buildfarm client.

i.e. a script that replaces the postgres binary with a wrapper that
invokes postgres via valgrind.

That's obviously not a very good approach though. It's buildfarm
specific and thus can't be invoked by developers and it doesn't really
support being installed somewhere.

Does anybody have a better idea about how to do this?





Why not just create a make target which does this? It could be run after 
'make' and before 'make check'. I would make it assume valgrind was 
installed and in the path rather than using vg-in-place.


If that doesn't work and you want to do something with the buildfarm, 
probably a buildfarm module would be the way to go. We might need to add 
a new module hook to support it, to run right after make(), but that 
would be a one line addition to run_build.pl.


cheers

andrew











Re: [HACKERS] VS 2015 support in src/tools/msvc

2016-03-08 Thread Andrew Dunstan



On 03/08/2016 08:32 AM, Michael Paquier wrote:

On Mon, Mar 7, 2016 at 10:40 PM, Michael Paquier
 wrote:

On Sun, Mar 6, 2016 at 5:55 AM, Andrew Dunstan  wrote:

On 03/05/2016 01:31 PM, Michael Paquier wrote:

On Sat, Mar 5, 2016 at 11:34 PM, Andrew Dunstan 
wrote:

Here is a translation into perl of the sed script, courtesy of the s2p
incarnation of psed:
<https://gist.github.com/adunstan/d61b1261a4b91496bdc6>
The sed script appears to have been stable for a long time, so I don't
think
we need to be too concerned about possibly maintaining two versions.

That's 95% of the work already done, nice! If I finish wrapping up a
patch for this issue at least would you backpatch? It would be saner
to get rid of this dependency everywhere I think regarding compilation
with perl 5.22.

Sure.

OK, so after re-reading the script and running it through perltidy, I
finished with the attached. How does that look?

Attached is a new set for the support of MS 2015 + the psed issue,
please use those ones for testing:
- 0001 is the replacement of psed by a dedicated perl script to
generate probes.h
- 0002 Fix of the declaration of TIMEZONE_GLOBAL and TZNAME_GLOBAL for WIN32
- 0003 addresses the issue with lc_codepage missing from _locale_t.
- 0004 adds support for MS2015 in src/tools/scripts/



Thanks for doing this work.

Do we already have a hard dependency on perl for all non-Windows builds? 
I know it's been discussed but I don't recall. If so, back to what version?


The comment in the codepage patch is a bit unclear. Can you reword it a bit?

cheers

andrew







Re: [HACKERS] VS 2015 support in src/tools/msvc

2016-03-08 Thread Andrew Dunstan



On 03/08/2016 10:43 AM, Tom Lane wrote:

Andrew Dunstan  writes:

Do we already have a hard dependency on perl for all non-Windows builds?
I know it's been discussed but I don't recall. If so, back to what version?

I think the policy is we require perl for building from a git pull,
but not for building from a tarball.  Thus, any files that perl is used
to produce have to be shipped in tarballs.

However, there definitely *is* a hard requirement on perl for Windows
builds, even from tarballs, and I thought this patch was only about
the Windows build?





Michael's patch proposes to replace the use of sed to generate probes.h 
with the perl equivalent everywhere. That has the advantage that we keep 
to one script to generate probes.h, but it does impose a perl dependency.


We could get around that by shipping probes.h in tarballs, which seems 
like a perfectly reasonable thing to do. If we don't like that we can 
fall back to using the perl script just for MSVC builds.


cheers

andrew




Re: [HACKERS] VS 2015 support in src/tools/msvc

2016-03-08 Thread Andrew Dunstan



On 03/08/2016 11:17 AM, Tom Lane wrote:

On the whole, I'd rather that this patch left the non-Windows side alone.



OK, that's why I raised the issue. We'll do it that way.

As I noted upthread, the sed script has been very stable so the overhead 
of having to maintain two scripts is pretty minimal.


cheers

andrew





Re: [HACKERS] Alter or rename enum value

2016-03-09 Thread Andrew Dunstan



On 03/09/2016 09:56 AM, Matthias Kurz wrote:

Hi!

Right now it is not possible to rename an enum value.
Are there plans to implement this anytime soon?
I had a bit of a discussion on the IRC channel and it seems it 
shouldn't be that hard to implement this.

Again, I am talking about renaming the values, not the enum itself.





I don't know of any plans, but it would be a useful thing. I agree it 
wouldn't be too hard. The workaround is to do an update on pg_enum 
directly, but proper SQL support would be much nicer.


cheers

andrew




Re: [HACKERS] Alter or rename enum value

2016-03-09 Thread Andrew Dunstan



On 03/09/2016 11:07 AM, Tom Lane wrote:

Andrew Dunstan  writes:

On 03/09/2016 09:56 AM, Matthias Kurz wrote:

Right now it is not possible to rename an enum value.
Are there plans to implement this anytime soon?

I don't know of any plans, but it would be a useful thing. I agree it
wouldn't be too hard. The workaround is to do an update on pg_enum
directly, but proper SQL support would be much nicer.

I have a vague recollection that we discussed this at the time the enum
stuff went in, and there are concurrency issues?  Don't recall details
though.





Rings a vague bell, but should it be any worse than adding new labels?

cheers

andrew




[HACKERS] enums and indexing

2016-03-09 Thread Andrew Dunstan
Currently we don't have a way to create a GIN index on an array of 
enums, or to use an enum field in a GIST index, so it can't be used in 
an exclusion constraint, among other things.


I'd like to work on fixing that if possible. Are there any insuperable 
barriers? If not, what do we need to do?


cheers

andrew




Re: [HACKERS] Perl's newSViv() versus 64-bit ints?

2016-03-11 Thread Andrew Dunstan



On 03/11/2016 06:49 PM, Tom Lane wrote:

Anybody know what will happen when passing a uint64 to newSViv()?





A perl IV is guaranteed large enough to hold a pointer, if that's any 
help. But for an unsigned value you might be better off calling newSVuv().


cheers

andrew





Re: [HACKERS] VS 2015 support in src/tools/msvc

2016-03-19 Thread Andrew Dunstan



On 03/08/2016 07:40 PM, Michael Paquier wrote:

On Wed, Mar 9, 2016 at 1:07 AM, Andrew Dunstan  wrote:

Michael's patch proposes to replace the use of sed to generate probes.h with
the perl equivalent everywhere. That has the advantage that we keep to one
script to generate probes.h, but it does impose a perl dependency.

Good point. It did not occur to me that this would bring a hard
dependency for non-Windows builds. Let's keep both scripts then. The
attached is changed to do so.



Actually, it wasn't, but I fixed it and committed it.

Still to do: the non-perl pieces.

cheers

andrew




Re: [HACKERS] btree_gin and btree_gist for enums

2016-03-19 Thread Andrew Dunstan



On 03/18/2016 09:54 AM, Robert Haas wrote:

On Thu, Mar 17, 2016 at 11:21 AM, Andrew Dunstan  wrote:

On 03/17/2016 06:44 AM, Andrew Dunstan wrote:

Here is a patch to add enum support to btree_gin and btree_gist. I didn't
include distance operations, as I didn't think it terribly important, and
there isn't a simple way to compute it sanely and efficiently, so no KNN
support.

This time including the data file for the gist regression tests.

Are you intending this as a 9.7 submission?  Because it's pretty late for 9.6.




I guess so. I would certainly find it hard to argue that it should be in 
9.6 unless there is a consensus on it.


cheers

andrew





Re: [HACKERS] VS 2015 support in src/tools/msvc

2016-03-20 Thread Andrew Dunstan



On 03/20/2016 12:02 AM, Michael Paquier wrote:

On Sun, Mar 20, 2016 at 8:06 AM, Andrew Dunstan  wrote:


Still to do: the non-perl pieces.

The patch to address locales is the sensitive part. The patch from
Petr is taking the correct approach though I think that we had better
simplify it a bit and if possible we had better not rely on the else
block it introduces.



OK, then please send an updated set of patches for what remains.

cheers

andrew




Re: [HACKERS] btree_gin and btree_gist for enums

2016-03-24 Thread Andrew Dunstan



On 03/24/2016 12:40 PM, Matt Wilmas wrote:



It would be *really* nice to have this in 9.6.  It seems it's simply 
filling out functionality that should already be there, right?  An 
oversight/bug fix so it works "as advertised?" :-)



I think that would be stretching the process a bit far. I'm certainly 
not prepared to commit it unless there is a general consensus to make an 
exception for it.



Is any other btree type-compatibility missing from these modules?  (I 
notice the btree_gin docs don't mention "numeric," but it works.)



uuid would be another type that should be fairly easily covered but isn't.





I just looked over the patch, and the actual code addition/changes 
seem pretty small and straightforward (or am I wrong?). 




Yes, sure, it's all fairly simple. Took me a little while to understand 
what I was doing, but once I did it was pretty much plain sailing.


cheers

andrew






Re: [HACKERS] VS 2015 support in src/tools/msvc

2016-03-25 Thread Andrew Dunstan



On 03/25/2016 08:31 AM, Michael Paquier wrote:

On Fri, Mar 25, 2016 at 9:09 PM, Robert Haas  wrote:

On Thu, Mar 24, 2016 at 1:07 PM, Petr Jelinek  wrote:

On 24/03/16 17:28, Robert Haas wrote:

On Wed, Mar 23, 2016 at 3:17 AM, Michael Paquier
 wrote:

- 0001 fixes the global declarations of TIMEZONE_GLOBAL and
TZNAME_GLOBAL to be WIN32-compliant. I got bitten by that in the ECPG
compilation.

So this isn't going to break other Windows builds?  I mean, if we've
got the names for those symbols wrong, how is this working right now?


We didn't older versions just defined the other variants as well, but the
_timezone and _tzname have been around since at least VS2003.

I am unable to parse this sentence.  Sorry.

Petr means that both _timezone and _tzname are objects defined in
Visual Studio since more or less its 2003 release
(https://msdn.microsoft.com/en-us/library/htb3tdkc%28v=vs.71%29.aspx).
The oldest version on the buildfarm is Visual Studio 2005, and I agree
with him that there is no need to worry about older versions than
VS2003. The issue is that VS2015 does *not* define timezone and tzname
(please note the prefix underscore missing in those variable names),
causing compilation failures. That's why I am suggesting such a change
in this patch: this will allow the code to compile on VS2015, and
that's compatible with VS2003~.



OK, sounds good. I don't have a spare machine on which to install 
VS2015, nor time to set one up, so I'm going to have to trust the two of 
you (Michael and Petr) that this works. Will either of you be setting up 
a buildfarm animal with VS2015?


cheers

andrew




Re: [HACKERS] Alter or rename enum value

2016-03-25 Thread Andrew Dunstan



On 03/25/2016 04:13 AM, Matthias Kurz wrote:


Hopefully at the commitfest at least the transaction limitation 
will/could be tackled - that would help us a lot already.





I don't believe anyone knows how to do that safely. Enums pose special 
problems here exactly because unlike all other types the set of valid 
values is mutable.


cheers

andrew





Re: [HACKERS] btree_gin and btree_gist for enums

2016-03-25 Thread Andrew Dunstan



On 03/24/2016 12:40 PM, Matt Wilmas wrote:

(I notice the btree_gin docs don't mention "numeric," but it works.)




Numeric does work - we have regression tests to prove it, so we should 
fix the docs. But I'm also curious to know why we apparently don't have 
distance operator support for numeric.


cheers

andrew




Re: [HACKERS] Alter or rename enum value

2016-03-25 Thread Andrew Dunstan



On 03/25/2016 03:22 PM, Christophe Pettus wrote:

On Mar 25, 2016, at 11:50 AM, Andrew Dunstan  wrote:


I don't believe anyone knows how to do that safely.

The core issue, for me, is that not being able to modify enum values in a 
transaction breaks a very wide variety of database migration tools.  Even a 
very brutal solution like marking indexes containing the altered type invalid 
on a ROLLBACK would be preferable to the current situation.




Well, let's see if we can think harder about when it will be safe to add 
new enum values and how we can better handle unsafe situations.


The danger AIUI is that the new value will get into an upper-level 
page of an index, and that we can't roll that back if the transaction 
aborts.


First, if there isn't an existing index using the type we should be safe 
- an index created in the current transaction is not a problem because 
if the transaction aborts the whole index will disappear.


Even if there is an existing index, if it isn't touched by the current 
transaction then we should still be safe. However, if it is touched we 
could end up with a corrupted index if the transaction aborts. Maybe we 
could provide an option to reindex those indexes if they have been 
touched in the transaction and it aborts. And if that option is not present 
we could instead abort the transaction as soon as it detects something 
that touches the index.


I quite understand that this is all hand-wavy, but I'm trying to get a 
discussion started that goes beyond acknowledging the pain that the 
current situation involves.


cheers

andrew




Re: [HACKERS] Alter or rename enum value

2016-03-26 Thread Andrew Dunstan



On 03/26/2016 12:35 AM, David G. Johnston wrote:
On Friday, March 25, 2016, Andrew Dunstan  wrote:



On 03/25/2016 04:13 AM, Matthias Kurz wrote:


Hopefully at the commitfest at least the transaction
limitation will/could be tackled - that would help us a lot
already.


I don't believe anyone knows how to do that safely. Enums pose
special problems here exactly because unlike all other types the
set of valid values is mutable.


Yeah, I'm not sure there is much blue sky here as long as the 
definition of an enum is considered system data.  It probably needs to 
be altered so that a user can create a table of class enum with a 
known layout that PostgreSQL can rely upon to perform optimizations 
and provide useful behaviors - at least internally.  The most visible 
behavior being displaying the label while ordering using its position.


The system, seeing a data type of that class, would have an implicit 
reference between columns of that type and the source table.
You have to use normal cascade update/delete/do-nothing while 
performing DML on the source table.


In some ways it would be a specialized composite type, and we could 
leverage that to give you all the syntax available for those - but without 
having a different function for each differently named enum-classed 
table, since they all would share a common structure, differing only in 
name.  But the tables would be in user space and not a preordained 
relation in pg_catalog.  Maybe require that they all inherit from some 
empty template table...





We don't have the luxury of being able to redesign this as a green 
fields development.


cheers

andrew





Re: [HACKERS] Alter or rename enum value

2016-03-26 Thread Andrew Dunstan



On 03/26/2016 10:25 AM, Tom Lane wrote:

Andrew Dunstan  writes:

We don't have the luxury of being able to redesign this as a green
fields development.

I'm not actually convinced that we need to do anything.  SQL already has a
perfectly good mechanism for enforcing that a column contains only values
of a mutable set defined in another table --- it's called a foreign key.
The point of inventing enums was to provide a lower-overhead solution
for cases where the allowed value set is *not* mutable.  So okay, if we
can allow certain cases of changing the value set without increasing
the overhead, great.  But when we can't do it without adding a whole
lot of complexity and overhead (and, no doubt, bugs), we need to just
say no.

Maybe the docs about enums need to be a little more explicit about
pointing out this tradeoff.




OK, but we (in fact, you and I, for the most part) have provided a way 
to add new values. The trouble people have is the transaction 
restriction on that facility, because all the other changes they make 
with migration tools can be nicely wrapped in a transaction. And the 
thing is, in my recent experience on a project using quite a few enums, 
none of the transactions I'd like to include these mutations of enums in 
does anything that would be at all dangerous. It would be nice if we 
could find a less broad brush approach to dealing with the issue.
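
For readers unfamiliar with the restriction being discussed, here is a
minimal illustration (a sketch; the type name "mood" and the exact error
wording are just examples):

```sql
-- ALTER TYPE ... ADD VALUE is rejected inside a transaction block,
-- so migration tools cannot wrap it together with their other DDL.
CREATE TYPE mood AS ENUM ('sad', 'ok', 'happy');

BEGIN;
ALTER TYPE mood ADD VALUE 'ecstatic';
-- ERROR:  ALTER TYPE ... ADD cannot run inside a transaction block
ROLLBACK;

-- Outside a transaction block the same statement succeeds.
ALTER TYPE mood ADD VALUE 'ecstatic';
```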


cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Alter or rename enum value

2016-03-27 Thread Andrew Dunstan



On 03/27/2016 12:43 AM, Christophe Pettus wrote:

On Mar 26, 2016, at 7:40 AM, Andrew Dunstan  wrote:

It would be nice if we could find a less broad brush approach to dealing with 
the issue.

I don't know how doable this is, but could we use the existing mechanism of 
marking an index invalid if it contains an enum type to which a value was 
added, and the transaction was rolled back?  For the 90% use case, that would 
be acceptable, I would expect.




The more I think about this the more I bump up against the fact that 
almost anything we might want to do to ameliorate the situation is 
going to be rolled back. The only approach I can think of that doesn't 
suffer from this is to abort if an insert/update will affect an index on 
a modified enum, i.e. we prevent the possible corruption from happening 
in the first place, as we do now, but in a much more fine-grained way.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] raw output from copy

2016-03-28 Thread Andrew Dunstan



On 03/28/2016 06:26 PM, Tom Lane wrote:

Pavel Stehule  writes:

[ copy-raw-format-20160227-03.patch ]

I looked at this patch.  I'm having a hard time accepting that it has
a use-case large enough to justify it, and here's the reason: it's
a protocol break.  Conveniently omitting to update protocol.sgml
doesn't make it not a protocol break.  (libpq.sgml also contains
assorted statements that are falsified by this patch.)

You could argue that it's the user's own fault if he tries to use
COPY RAW with client-side code that hasn't been updated to support it.
Maybe that's okay, but I wonder if we're opening ourselves up to
problems.  Maybe even security-grade problems.

In terms of specific code that hasn't been updated, ecpg is broken
by this patch, and I'm not very sure what libpq's PQbinaryTuples()
ought to do but probably something other than what it does today.

There's also a definitional question of what we think PQfformat() ought
to do; should it return "2" for the per-field format?  Or maybe the
per-field format is still "1", since it's after all the same binary data
format as for COPY BINARY, and only the overall copy format reported by
PQbinaryTuples() should change to "2".

BTW, I'm not really sure why the patch is trying to enforce single
row and column for the COPY OUT case.  I thought the idea for that
was that we'd just shove out the data without any delimiters, and
if it's more than one datum it's the user's problem whether he can
identify the boundaries.  On the input side we would have to insist
on one column since we're not going to attempt to identify boundaries
(and one row would fall out of the fact that we slurp the entire input
and treat it as one datum).

Anyway this is certainly not committable as-is, so I'm setting it back
to Waiting on Author.  But the fact that both libpq and ecpg would need
updates makes me question whether we can safely pretend that this isn't
a protocol break.





In that case I humbly submit that there is a case for reviving the psql 
patch I posted that kicked off this whole thing and lets you export a 
piece of binary data from psql quite easily. It should certainly not 
involve any protocol break.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] raw output from copy

2016-03-29 Thread Andrew Dunstan



On 03/28/2016 11:18 PM, Pavel Stehule wrote:




Anyway this is certainly not committable as-is, so I'm setting
it back
to Waiting on Author.  But the fact that both libpq and ecpg
would need
updates makes me question whether we can safely pretend that
this isn't
a protocol break.




In that case I humbly submit that there is a case for reviving the
psql patch I posted that kicked off this whole thing and lets you
export a piece of binary data from psql quite easily. It should
certainly not involve any protocol break.


The psql only solution can work only for output. Doesn't help with input.





Then I would suggest we try to invent something for psql which does help 
with it. I just don't see this as an SQL problem. Pretty much any driver 
library will have no difficulty in handling binary input and output. 
It's only psql that has an issue, ISTM, and therefore I believe that's 
where the fix should go. What else is going to use this? As an SQL 
change this seems like a solution in search of a problem. If someone can 
make a good case that this is going to be of general use I'll happily go 
along, but I haven't seen one so far.
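
In the meantime, one commonly used workaround for getting a single binary
datum out of psql without any protocol change is to hex-encode it
server-side and decode it client-side (a sketch; the table, column, and
file names here are hypothetical):

```sql
-- Export one bytea value as hex text, then decode it outside psql,
-- e.g. with:  xxd -r -p out.hex > out.bin
\copy (SELECT encode(data, 'hex') FROM blobs WHERE id = 1) TO 'out.hex'
```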


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Alter or rename enum value

2016-03-29 Thread Andrew Dunstan



On 03/27/2016 10:20 AM, Tom Lane wrote:

Andrew Dunstan  writes:

The more I think about this the more I bump up against the fact that
almost anything we might want to do to ameliorate the situation is
going to be rolled back. The only approach I can think of that doesn't
suffer from this is to abort if an insert/update will affect an index on
a modified enum, i.e. we prevent the possible corruption from happening
in the first place, as we do now, but in a much more fine-grained way.

Perhaps, instead of forbidding ALTER ENUM ADD in a transaction, we could
allow that, but not allow the new value to be *used* until it's committed?
This could be checked cheaply during enum value lookup (ie, is xmin of the
pg_enum row committed).

What you really need is to prevent the new value from being inserted
into any indexes, but checking that directly seems far more difficult,
ugly, and expensive than the above.

I do not know whether this would be a meaningful improvement for
common use-cases, though.  (It'd help if people were more specific
about the use-cases they need to work.)





I think this is a pretty promising approach, certainly well worth 
putting some resources into investigating. One thing I like about it is 
that it gives a nice cheap negative test: if the xmin is committed, we 
know we are safe. So we could start by rejecting anything where it's 
not, and later adopt more refined but expensive tests for 
cases where it isn't committed, without imposing a penalty on anything else.
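
From the user's side, the proposed check would look roughly like this
(a sketch of the intended behavior; the error wording is illustrative,
and the enum type "mood" is assumed to exist):

```sql
BEGIN;
ALTER TYPE mood ADD VALUE 'ecstatic';  -- now permitted in a transaction
SELECT 'ecstatic'::mood;               -- but lookup sees an uncommitted xmin:
-- ERROR:  unsafe use of new value "ecstatic" of enum type mood
COMMIT;

SELECT 'ecstatic'::mood;               -- fine once the ADD is committed
```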


cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] VS 2015 support in src/tools/msvc

2016-03-29 Thread Andrew Dunstan



On 03/29/2016 08:48 PM, Michael Paquier wrote:

On Wed, Mar 30, 2016 at 12:45 AM, Robert Haas  wrote:

On Tue, Mar 29, 2016 at 11:31 AM, Petr Jelinek  wrote:

I have machine ready, waiting for animal name and secret.

Great!

Nice. Thanks.


It will obviously
fail until we push the 0002 and 0004 though.

I think it would be a shame if we shipped 9.6 without MSVC 2015
support, but it'd be nice if Andrew or Magnus could do the actual
committing, 'cuz I really don't want to be responsible for breaking
the Windows build.

I'm fine to back you up as needed. That's not a committer guarantee,
still better than nothing I guess. And I make a specialty of looking
at VS-related bugs lately :)



I am currently travelling, but my intention is to deal with the 
remaining patches when I'm back home this weekend, unless someone beats 
me to it.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [postgresSQL] [bug] Two or more different types of constraints with same name creates ambiguity while drooping.

2016-03-30 Thread Andrew Dunstan



On 03/30/2016 10:21 AM, Tom Lane wrote:

Amit Langote  writes:

On 2016/03/30 15:16, Harshal Dhumal wrote:

If we create two different type of constrains (lets say primary key and
foreign key) on same table with same name (lets say 'key' ) then its shows
same drop query for both constrains.

I have a vague recollection that non-uniqueness of constraint names may
have been intentional at some point.  But yeah, the existence of the
ALTER TABLE DROP CONSTRAINT syntax seems to make that a pretty bad idea.


It seems that, whereas name uniqueness check occurs when creating a named
FK constraint, the same does not occur when creating a named PK constraint
or any index-based constraint for that matter (they are handled by
different code paths - in the latter's case, name conflicts with existing
relations are checked for when creating the constraint index)

I think that if we want to ensure uniqueness of constraint names, this
is really approaching it the wrong way, as it still fails to provide
any guarantees (consider concurrent index creation, for example).
What we need is a unique index on pg_constraint.

The problem with that is that pg_constraint contains both table-related
and type (domain) related constraints; but it strikes me that we could
probably create a unique index on (conrelid, contypid, conname).  Given
the convention that conrelid is zero in a type constraint and contypid
is zero in a table constraint, this should work to enforce per-table
or per-type constraint name uniqueness.  The cost of an extra index
is a bit annoying, but we could probably make it help pay for itself
by speeding up assorted searches.





+1, but does that mean people will have to change constraint names to be 
compliant before running pg_upgrade?
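
For reference, a hypothetical reproduction of the reported ambiguity
(an untested sketch based on Amit's analysis above; table and constraint
names are made up):

```sql
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE t (a int);

-- FK creation goes through the code path that checks for a name conflict...
ALTER TABLE t ADD CONSTRAINT key FOREIGN KEY (a) REFERENCES parent (id);

-- ...but the index-backed path only checks against relation names,
-- so a second constraint named "key" on the same table can be created:
ALTER TABLE t ADD CONSTRAINT key UNIQUE (a);

-- Now it is ambiguous which constraint this statement targets:
ALTER TABLE t DROP CONSTRAINT key;
```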


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgindent-polluted commits

2016-01-14 Thread Andrew Dunstan



On 01/13/2016 12:13 PM, Tom Lane wrote:

Simon Riggs  writes:

On 13 January 2016 at 14:48, Noah Misch  wrote:

I've noticed commits, from a few of you, carrying pgindent changes to lines
the patch would not otherwise change.

Could we review again why this matters?

Basically this is trading off convenience of the committer (all of the
alternatives Noah mentions are somewhat annoying) versus the convenience
of post-commit reviewers.  I'm not sure that his recommendation is the
best trade-off, nor that the situation is precisely comparable to
pre-commit review.  There definitely will be pre-commit review, there
may or may not be any post-commit review.

I'm willing to go with the "separate commit to reindent individual files"
approach if there's a consensus that that makes for a cleaner git history.
But I'm not 100% convinced it matters.






I do think it makes life easier when going through the git history if 
semantic changes are separated from formatting changes.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SET syntax in INSERT

2016-01-14 Thread Andrew Dunstan



On 01/14/2016 03:00 PM, Marko Tiikkaja wrote:

On 2016-01-14 20:50, Vitaly Burovoy wrote:

On 1/14/16, Tom Lane  wrote:

Assume a table with an int-array column, and consider

INSERT INTO foo SET arraycol[2] = 7, arraycol[4] = 11;


Right part is a column name, not an expression. Isn't it?
So "arraycol[2]" is not possible there.


I think the idea here was that it's allowed in UPDATE.  But I don't 
see the point of allowing that in an INSERT.







Right. Why not just forbid anything other than a plain column name on 
the LHS for INSERT, at least as a first cut.
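
For context, the proposal under discussion is roughly the following
(the SET form is hypothetical syntax, not something PostgreSQL accepts):

```sql
-- Proposed:   INSERT INTO foo SET a = 1, b = 2;
-- Equivalent standard form that PostgreSQL already accepts:
INSERT INTO foo (a, b) VALUES (1, 2);

-- Per the discussion above, the subscripted LHS allowed in UPDATE,
--   UPDATE foo SET arraycol[2] = 7;
-- would simply be rejected in INSERT ... SET, at least initially.
```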


cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Removing service-related code in pg_ctl for Cygwin

2016-01-18 Thread Andrew Dunstan



On 01/14/2016 12:38 AM, Michael Paquier wrote:

Hi all,

Beginning a new thread seems more adapted regarding $subject but
that's mentioned here as well:
http://www.postgresql.org/message-id/CAB7nPqQXghm_SdB5iniupz1atzMxk=95gv9a8ocdo83sxcn...@mail.gmail.com

It happens based on some investigation from Andrew that cygwin does
not need to use the service-related code, aka register/unregister
options or similar that are proper to WIN32 and rely instead on
cygrunsrv with a SYSV-like init file to manage the service. Based on
my tests with cygwin, I am able to see as well that a server started
within a cygwin session is able to persist even after it is closed,
which is kind of nice, and I think that it is an additional reason to
not use the service-related code paths. Hence what about the following
patch, that makes cygwin behave like any *nix OS when using pg_ctl?
This simplifies a bit the code.

Marco, as I think you do some packaging for Postgres in Cygwin, and
others, does that sound acceptable?





I think we can be a bit more adventurous and remove all the 
Cygwin-specific code. See attached patch, which builds fine on buildfarm 
cockatiel.


cheers

andrew



diff --git a/src/bin/pg_ctl/pg_ctl.c b/src/bin/pg_ctl/pg_ctl.c
index 919d764..98aa1a0 100644
--- a/src/bin/pg_ctl/pg_ctl.c
+++ b/src/bin/pg_ctl/pg_ctl.c
@@ -39,13 +39,6 @@
 #include "getopt_long.h"
 #include "miscadmin.h"
 
-#if defined(__CYGWIN__)
-#include 
-#include 
-/* Cygwin defines WIN32 in windows.h, but we don't want it. */
-#undef WIN32
-#endif
-
 /* PID can be negative for standalone backend */
 typedef long pgpid_t;
 
@@ -105,7 +98,7 @@ static char backup_file[MAXPGPATH];
 static char recovery_file[MAXPGPATH];
 static char promote_file[MAXPGPATH];
 
-#if defined(WIN32) || defined(__CYGWIN__)
+#ifdef WIN32
 static DWORD pgctl_start_type = SERVICE_AUTO_START;
 static SERVICE_STATUS status;
 static SERVICE_STATUS_HANDLE hStatus = (SERVICE_STATUS_HANDLE) 0;
@@ -133,7 +126,7 @@ static void do_kill(pgpid_t pid);
 static void print_msg(const char *msg);
 static void adjust_data_dir(void);
 
-#if defined(WIN32) || defined(__CYGWIN__)
+#ifdef WIN32
 #if (_MSC_VER >= 1800)
 #include 
 #else
@@ -165,7 +158,7 @@ static void unlimit_core_size(void);
 #endif
 
 
-#if defined(WIN32) || defined(__CYGWIN__)
+#ifdef WIN32
 static void
 write_eventlog(int level, const char *line)
 {
@@ -207,20 +200,11 @@ write_stderr(const char *fmt,...)
 	va_list		ap;
 
 	va_start(ap, fmt);
-#if !defined(WIN32) && !defined(__CYGWIN__)
+#ifndef WIN32
 	/* On Unix, we just fprintf to stderr */
 	vfprintf(stderr, fmt, ap);
 #else
 
-/*
- * On Cygwin, we don't yet have a reliable mechanism to detect when
- * we're being run as a service, so fall back to the old (and broken)
- * stderr test.
- */
-#ifdef __CYGWIN__
-#define	pgwin32_is_service()	(isatty(fileno(stderr)))
-#endif
-
 	/*
 	 * On Win32, we print to stderr if running on a console, or write to
 	 * eventlog if running as a service
@@ -718,7 +702,7 @@ test_postmaster_connection(pgpid_t pm_pid, bool do_checkpoint)
 #endif
 
 		/* No response, or startup still in process; wait */
-#if defined(WIN32)
+#ifdef WIN32
 		if (do_checkpoint)
 		{
 			/*
@@ -1342,7 +1326,7 @@ do_kill(pgpid_t pid)
 	}
 }
 
-#if defined(WIN32) || defined(__CYGWIN__)
+#ifdef WIN32
 
 #if (_MSC_VER < 1800)
 static bool
@@ -1408,20 +1392,6 @@ pgwin32_CommandLine(bool registration)
 		}
 	}
 
-#ifdef __CYGWIN__
-	/* need to convert to windows path */
-	{
-		char		buf[MAXPGPATH];
-
-#if CYGWIN_VERSION_DLL_MAJOR >= 1007
-		cygwin_conv_path(CCP_POSIX_TO_WIN_A, cmdPath, buf, sizeof(buf));
-#else
-		cygwin_conv_to_full_win32_path(cmdPath, buf);
-#endif
-		strcpy(cmdPath, buf);
-	}
-#endif
-
 	/* if path does not end in .exe, append it */
 	if (strlen(cmdPath) < 4 ||
 		pg_strcasecmp(cmdPath + strlen(cmdPath) - 4, ".exe") != 0)
@@ -1775,10 +1745,8 @@ CreateRestrictedProcess(char *cmd, PROCESS_INFORMATION *processInfo, bool as_ser
 	if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ALL_ACCESS, &origToken))
 	{
 		/*
-		 * Most Windows targets make DWORD a 32-bit unsigned long.  Cygwin
-		 * x86_64, an LP64 target, makes it a 32-bit unsigned int.  In code
-		 * built for Cygwin as well as for native Windows targets, cast DWORD
-		 * before printing.
+		 * Most Windows targets make DWORD a 32-bit unsigned long, but
+		 * in case it doesn't, cast DWORD before printing.
 		 */
 		write_stderr(_("%s: could not open process token: error code %lu\n"),
 	 progname, (unsigned long) GetLastError());
@@ -1819,10 +1787,6 @@ CreateRestrictedProcess(char *cmd, PROCESS_INFORMATION *processInfo, bool as_ser
 		return 0;
 	}
 
-#ifndef __CYGWIN__
-	AddUserToTokenDacl(restrictedToken);
-#endif
-
 	r = CreateProcessAsUser(restrictedToken, NULL, cmd, NULL, NULL, TRUE, CREATE_SUSPENDED, NULL, NULL, &si, processInfo);
 
 	Kernel32Handle = LoadLibrary("KERNEL32.DLL");
@@ -1926,7 +1890,7 @@ CreateRestrictedProcess(char *cmd, PROCESS_INFORMATION *processInfo, bool as_ser
 	 */
 	

Re: [HACKERS] Removing service-related code in pg_ctl for Cygwin

2016-01-18 Thread Andrew Dunstan



On 01/18/2016 12:38 PM, Alvaro Herrera wrote:

Andrew Dunstan wrote:


I think we can be a bit more adventurous and remove all the Cygwin-specific
code. See attached patch, which builds fine on buildfarm cockatiel.

Hopefully you also tested that it builds under MSVC :-)




Why would I? This isn't having the slightest effect on MSVC builds.

cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Removing service-related code in pg_ctl for Cygwin

2016-01-18 Thread Andrew Dunstan



On 01/18/2016 03:46 PM, Alvaro Herrera wrote:

Andrew Dunstan wrote:


On 01/18/2016 12:38 PM, Alvaro Herrera wrote:

Andrew Dunstan wrote:


I think we can be a bit more adventurous and remove all the Cygwin-specific
code. See attached patch, which builds fine on buildfarm cockatiel.

Hopefully you also tested that it builds under MSVC :-)

Why would I? This isn't having the slightest effect on MSVC builds.

You never know ... you might have inadvertently broken some WIN32 block.



If you can point out a line where that might be true I'll test it ;-)

cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] system mingw not recognized

2016-01-18 Thread Andrew Dunstan



On 01/18/2016 04:11 PM, Igal @ Lucee.org wrote:

I posted the error in the docs to pgsql-d...@postgresql.org

If it's possible to update it myself via git, or if it should be 
reported elsewhere -- please advise.



1. Please don't top-post on the PostgreSQL lists.


2. You've done all you need to do. We'll take care of it. Thanks for the 
report.


cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Buildfarm server move

2016-01-18 Thread Andrew Dunstan

People,

Apologies for the late notice.

Tomorrow, January 18th, at 4.00 pm US East Coast time (UT - 5.0) we will 
be moving the buildfarm server from its current home at CommandPrompt, 
where we have been ever since we started, to a machine that is part of 
the standard core infrastructure. In doing so we will be moving to a) a 
more modern and supported PostgreSQL version, and b) a machine with more 
disk space so that our current severe space shortage will be alleviated. 
In addition, the community would be much better placed to maintain the 
buildfarm if either JD or I were to fall under a bus.


The outage is expected to last about 4 hours or less, and we will send 
out notifications when this is complete.


Buildfarm owners who want to avoid getting reporting failures should 
disable their animals during that time. We don't have an avalanche of 
commits right now either, but it might also be nice if committers were 
to refrain from adding changes in the hours leading up to this and until 
we announce that we're back online, for the benefit of those owners who 
don't see this message in time.


Thanks in advance for your help and understanding.

And many thanks to CommandPrompt for their constant support over the 
many years we've been in operation.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Buildfarm server move

2016-01-18 Thread Andrew Dunstan



On 01/18/2016 05:20 PM, Andrew Dunstan wrote:

People,

Apologies for the late notice.

Tomorrow, January 18th, at 4.00 pm US East Coast time (UT - 5.0) we 
will be moving the buildfarm server from its current home at 
CommandPrompt, where we have been ever since we started, to a machine 
that is part of the standard core infrastructure. In doing so we will 
be moving to a) a more modern and supported PostgreSQL version, and b) 
a machine with more disk space so that our current severe space 
shortage will be alleviated. In addition, the community would be much 
better placed to maintain the buildfarm if either JD or I were to fall 
under a bus.


The outage is expected to last about 4 hours or less, and we will send 
out notifications when this is complete.


Buildfarm owners who want to avoid getting reporting failures should 
disable their animals during that time. We don't have an avalanche of 
commits right now either, but it might also be nice if committers were 
to refrain from adding changes in the hours leading up to this and 
until we announce that we're back online, for the benefit of those 
owners who don't see this message in time.


Thanks in advance for your help and understanding.

And many thanks to CommandPrompt for their constant support over the 
many years we've been in operation.






Yes, sorry about that. It will be on the 19th. Fat fingers strike again.

cheers

andrew



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] make error - libpqdll.def No such file or directory

2016-01-19 Thread Andrew Dunstan



On 01/19/2016 01:08 PM, Igal @ Lucee.org wrote:

On 1/17/2016 8:17 PM, Igal @ Lucee.org wrote:

On 1/17/2016 3:24 PM, Igal @ Lucee.org wrote:

When running make I encounter the following error:

gcc.exe: error: libpqdll.def: No such file or directory
/home/Admin/sources/postgresql-9.5.0/src/Makefile.shlib:393: recipe 
for target 'libpq.dll' failed

make[3]: *** [libpq.dll] Error 1
make[3]: Leaving directory 
'/home/Admin/builds/postgresql-9.5.0/src/interfaces/libpq'

Makefile:17: recipe for target 'all-libpq-recurse' failed
make[2]: *** [all-libpq-recurse] Error 2
make[2]: Leaving directory 
'/home/Admin/builds/postgresql-9.5.0/src/interfaces'

Makefile:34: recipe for target 'all-interfaces-recurse' failed
make[1]: *** [all-interfaces-recurse] Error 2
make[1]: Leaving directory '/home/Admin/builds/postgresql-9.5.0/src'
GNUmakefile:11: recipe for target 'all-src-recurse' failed
make: *** [all-src-recurse] Error 2

But /home/Admin/builds/postgresql-9.5.0/src/interfaces/libpq does 
contain the file libpqdll.def


The file /home/Admin/sources/postgresql-9.5.0/src/Makefile.shlib 
exists as well.


It's hard for me to decipher which file is reporting the error and 
which file is not found.


Any feedback would be greatly appreciated, thanks!
So when I try to run `make` I still get that error.  Please note that 
I am doing a VPATH build (building in a separate directory from the 
downloaded sources), which might play a role here:

x86_64-w64-mingw32-gcc.exe: error: libpqdll.def: No such file or directory
make[3]: *** [libpq.dll] Error 1
make[2]: *** [all-libpq-recurse] Error 2
make[1]: *** [all-interfaces-recurse] Error 2
make: *** [all-src-recurse] Error 2
I found a script that builds postgresql via MinGW-w64, and in it the 
author specifically creates symlinks to libpqdll.def
https://github.com/Alexpux/MINGW-packages/blob/master/mingw-w64-postgresql/PKGBUILD#L72 
-- excerpt below:


  # Make DLL definition file visible during each arch build
  ln -s 
"${srcdir}/postgresql-$pkgver/src/interfaces/libpq/libpqdll.def" 
src/interfaces/libpq/
  ln -s 
"${srcdir}/postgresql-$pkgver/src/interfaces/ecpg/ecpglib/libecpgdll.def" 
src/interfaces/ecpg/ecpglib/
  ln -s 
"${srcdir}/postgresql-$pkgver/src/interfaces/ecpg/pgtypeslib/libpgtypesdll.def" 
src/interfaces/ecpg/pgtypeslib/
  ln -s 
"${srcdir}/postgresql-$pkgver/src/interfaces/ecpg/compatlib/libecpg_compatdll.def" 
src/interfaces/ecpg/compatlib/


Why are the symlinks needed to make the definition files visible?

Is this an issue with VPATH builds?  It is not mentioned in the docs 
where VPATH builds are discussed (section 15.4 
http://www.postgresql.org/docs/9.5/static/install-procedure.html )





jacana does VPATH builds with pretty much this setup all the time. See 
for example 



cheers

andrew






--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] make error - libpqdll.def No such file or directory

2016-01-19 Thread Andrew Dunstan



On 01/19/2016 02:01 PM, Alvaro Herrera wrote:

Andrew Dunstan wrote:


jacana does VPATH builds with pretty much this setup all the time. See for
example 
<http://www.pgbuildfarm.org/cgi-bin/show_log.pl?nm=jacana&dt=2016-01-19%2013%3A00%3A09>

Yes, it builds a tree *once*, then deletes the result, and the next BF
run uses a completely new build directory.  So any issues resulting from
re-building an existing tree are never seen.



Oh, odd. I don't think I've seen that even when developing on Windows 
(e.g. parallel pg_restore). Maybe I always did that in-tree.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Buildfarm server move complete

2016-01-19 Thread Andrew Dunstan


The buildfarm server move is complete. Thanks to all who helped, 
especially Stephen Frost.


There might be some small performance regressions which we'll be digging 
into.


Next step: move the mailing lists off pgfoundry. The new lists have been 
set up, and I will be working on that migration next week.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Template for commit messages

2016-01-28 Thread Andrew Dunstan



On 01/28/2016 09:57 AM, Robert Haas wrote:

On Thu, Jan 28, 2016 at 9:52 AM, Tom Lane  wrote:

Robert Haas  writes:

On Thu, Jan 28, 2016 at 8:04 AM, Tomas Vondra
 wrote:

Why can't we do both? That is, have a free-form text with the nuances, and
then Reviewed-By listing the main reviewers? The first one is for humans,
the other one for automated tools.

I'm not objecting to or endorsing any specific proposal, just asking
what we want to do about this.  I think the trick if we do it that way
will be to avoid having it seem like too much duplication, but there
may be a way to manage that.

FWIW, I'm a bit suspicious of the idea that we need to make the commit
messages automated-tool-friendly.  What tools are there that would need
to extract this info, and would we trust them if they didn't understand
"nuances"?

I'm on board with Bruce's template as being a checklist of points to be
sure to cover when composing a commit message.  I'm not sure we need
fixed-format rules.

Well, I think what people are asking for is precisely a fixed format,
and I do think there is value in that.  It's nice to capture the
nuance, but the nuance is going to get flattened out anyway when the
release notes are created.  If we all agree to use a fixed format,
then a bunch of work there that probably has to be done manually can
be automated.  However, if we all agree to use a fixed format except
for you, we might as well just forget the whole thing, because the
percentage of commits that are done by you is quite high.




Yeah.

I have no prejudice in this area, other than one in favor of any rules 
being fairly precise. As for nuances, I guess they can be specified in 
the commit message. The one thing I do find annoying from time to time 
is the limit on subject size. Sometimes it's very difficult to be 
sufficiently communicative in, say, 50 characters.


cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: make NOTIFY list de-duplication optional

2016-02-06 Thread Andrew Dunstan



On 02/05/2016 08:49 PM, Tom Lane wrote:


Yeah, I agree that a GUC for this is quite unappetizing.


Agreed.



One idea would be to build a hashtable to aid with duplicate detection
(perhaps only once the pending-notify list gets long).

Another thought is that it's already the case that duplicate detection is
something of a "best effort" activity; note for example the comment in
AsyncExistsPendingNotify pointing out that we don't collapse duplicates
across subtransactions.  Would it be acceptable to relax the standards
a bit further?  For example, if we only checked for duplicates among the
last N notification list entries (for N say around 100), we'd probably
cover just about all the useful cases, and the runtime would stay linear.
The data structure isn't tremendously conducive to that, but it could be
done.





I like the hashtable idea if it can be made workable.
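
For context, the duplicate collapsing in question looks like this from
the client's side (a behavior sketch; channel and payload names are
arbitrary):

```sql
LISTEN ch;

BEGIN;
NOTIFY ch, 'x';
NOTIFY ch, 'x';   -- identical channel+payload in the same (sub)transaction:
                  -- collapsed, so the listener gets only one notification
NOTIFY ch, 'y';   -- different payload: delivered separately
COMMIT;
```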

cheers

andrew


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] proposal: schema PL session variables

2016-02-08 Thread Andrew Dunstan



On 02/08/2016 03:16 AM, Pavel Stehule wrote:

Hi

At Russian PgConf I had a talk with Oleg about missing features in 
PLpgSQL that can complicate migrations from Oracle to PostgreSQL. 
Currently I see only one blocker - missing protected session 
variables. PL/SQL has package variables with only package scope and 
session life cycle. Currently we cannot ensure/enforce schema-scope 
visibility - and we cannot implement this functionality in PL 
languages other than C.


I propose really basic functionality that can be enhanced in future, 
step by step. This proposal doesn't contain any controversial feature 
or syntax, I hope. It is related to PLpgSQL only, but the described 
feature can be used from any PL language with an implemented interface.



I think it would make sense to implement the interface in at least one 
of our other supported PLs. I'm not entirely clear how well this will 
match up with, say, plperl, but I'd be interested to see.



cheers

andrew




Re: [HACKERS] Tracing down buildfarm "postmaster does not shut down" failures

2016-02-08 Thread Andrew Dunstan



On 02/08/2016 02:15 PM, Tom Lane wrote:

Of late, by far the majority of the random-noise failures we see in the
buildfarm have come from failure to shut down the postmaster in a
reasonable timeframe.  An example is this current failure on hornet:

http://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=hornet&dt=2016-02-08%2013%3A41%3A55

waiting for server to shut down... failed
pg_ctl: server does not shut down
=== db log file ==
2016-02-08 15:14:35.783 UTC [56b8b00d.9e00e2:2] LOG:  received fast shutdown request
2016-02-08 15:14:35.783 UTC [56b8b00d.9e00e2:3] LOG:  aborting any active transactions
2016-02-08 15:14:35.783 UTC [56b8b00d.7e0028:2] LOG:  autovacuum launcher shutting down
2016-02-08 15:14:35.865 UTC [56b8b00d.2e5000a:1] LOG:  shutting down

The buildfarm script runs pg_ctl stop with -t 120, and counting the dots
shows that pg_ctl was honoring that, so apparently the postmaster took
more than 2 minutes to shut down.

Looking at other recent successful runs, stopdb-C-1 usually takes 30 to
40 or so seconds on this machine.  So it's possible that it was just so
overloaded that it took three times that much time on this run, but I'm
starting to think there may be more to it than that.  We've seen variants
on this theme on half a dozen machines just in the past week --- and it
seems to mostly happen in 9.5 and HEAD, which is fishy.

The fact that we got "shutting down" but not "database system is shut
down" indicates that the server was in ShutdownXLOG().  Normally the
only component of that that takes much time is buffer flushing, but
could something else be happening?  Or perhaps the checkpoint code
has somehow not gotten the word to do its flushing at full speed?

What I'd like to do to investigate this is put in a temporary HEAD-only
patch that makes ShutdownXLOG() and its subroutines much chattier about
how far they've gotten and what time it is, and also makes pg_ctl print
out the current time if it gives up waiting.  A few failed runs with
that in place will at least allow us to confirm or deny whether it's
just that the shutdown checkpoint is sometimes really slow, or whether
there's a bug lurking.

Any objections?  Anybody have another idea for data to collect?





I think that's an excellent idea. This has been a minor running sore for 
ages on slow machines.
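The instrumentation Tom describes amounts to appending a wall-clock timestamp to each progress message. A minimal sketch of such a helper (the name log_with_time and the exact format are illustrative, not the actual patch):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

/*
 * Format a shutdown-progress message with a timestamp appended,
 * in the spirit of the proposed ShutdownXLOG() chatter, e.g.
 * "LOG:  CheckPointGuts done at 2016-02-09 21:09:14 UTC".
 */
static void
log_with_time(char *buf, size_t buflen, const char *msg, time_t now)
{
    char        ts[64];
    struct tm  *tm = gmtime(&now);

    strftime(ts, sizeof(ts), "%Y-%m-%d %H:%M:%S", tm);
    snprintf(buf, buflen, "LOG:  %s at %s UTC", msg, ts);
}
```

Comparing consecutive timestamps in the log then shows exactly which shutdown phase is eating the time.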


cheers

andrew





Re: [HACKERS] Tracing down buildfarm "postmaster does not shut down" failures

2016-02-09 Thread Andrew Dunstan



On 02/08/2016 10:55 PM, Tom Lane wrote:

Noah Misch  writes:

On Mon, Feb 08, 2016 at 02:15:48PM -0500, Tom Lane wrote:

We've seen variants
on this theme on half a dozen machines just in the past week --- and it
seems to mostly happen in 9.5 and HEAD, which is fishy.

It has been affecting only the four AIX animals, which do share hardware.
(Back in 2015 and once in 2016-01, it did affect axolotl and shearwater.)

Certainly your AIX critters have shown this a bunch, but here's another
current example:
http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=axolotl&dt=2016-02-08%2014%3A49%3A23


That's reasonable.  If you would like higher-fidelity data, I can run loops of
"pg_ctl -w start; make installcheck; pg_ctl -t900 -w stop", and I could run
that for HEAD and 9.2 simultaneously.  A day of logs from that should show
clearly if HEAD is systematically worse than 9.2.

That sounds like a fine plan, please do it.


So, I wish to raise the timeout for those animals.  Using an environment
variable was a good idea; it's one less thing for test authors to remember.
Since the variable affects a performance-related fudge factor rather than
change behavior per se, I'm less skittish than usual about unintended
consequences of dynamic scope.  (With said unintended consequences in mind, I
made "pg_ctl register" ignore PGCTLTIMEOUT rather than embed its value into
the service created.)

While this isn't necessarily a bad idea in isolation, the current
buildfarm scripts explicitly specify a -t value to pg_ctl stop, which
I would not expect an environment variable to override.  So we need
to fix the buildfarm script to allow the timeout to be configurable.
I'm not sure if there would be any value-add in having pg_ctl answer
to an environment variable once we've done that.



The failure on axolotl was for the ECPG tests, which don't use the 
buildfarm's startup/stop db code. They don't honour TEMP_CONFIG either, 
which they probably should - then we might get better log traces.
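The PGCTLTIMEOUT mechanism under discussion is simple: an environment variable overrides a compiled-in default wait. A sketch of the lookup (illustrative only; the real patch lives in pg_ctl.c, and 60 seconds is pg_ctl's traditional default):

```c
#include <assert.h>
#include <stdlib.h>

#define DEFAULT_WAIT 60         /* seconds, pg_ctl's historical default */

/*
 * Return the shutdown/startup wait timeout: PGCTLTIMEOUT if set to a
 * positive integer, otherwise the built-in default.
 */
static int
get_wait_seconds(void)
{
    const char *env = getenv("PGCTLTIMEOUT");

    if (env != NULL)
    {
        int         v = atoi(env);

        if (v > 0)
            return v;           /* ignore zero or garbage values */
    }
    return DEFAULT_WAIT;
}
```

Because it is an environment variable, it is inherited by every pg_ctl invocation in a test run, which is why it suits the buildfarm better than threading a -t option through each call site.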


cheers

andrew




Re: [HACKERS] Tracing down buildfarm "postmaster does not shut down" failures

2016-02-09 Thread Andrew Dunstan



On 02/09/2016 03:05 PM, Tom Lane wrote:

I wrote:

In any case, we should proceed with fixing things so that buildfarm owners
can specify a higher shutdown timeout for especially slow critters.

I looked into doing this as I suggested yesterday, namely modifying the
buildfarm scripts, and soon decided that it would be a mess; there are
too many cases where "pg_ctl stop" is not invoked directly by the script.

I'm now in favor of applying the PGCTLTIMEOUT patch Noah proposed, and
*removing* the two existing hacks in run_build.pl that try to force -t 120.

The only real argument I can see against that approach is that we'd have
to back-patch the PGCTLTIMEOUT patch to all active branches if we want
to stop the buildfarm failures.  We don't usually back-patch feature
additions.  On the other hand, this wouldn't be the first time we've
back-patched something on grounds of helping the buildfarm, so I find
that argument pretty weak.






OK. I can put out a new release as required.

cheers

andrew




Re: [HACKERS] Tracing down buildfarm "postmaster does not shut down" failures

2016-02-09 Thread Andrew Dunstan



On 02/09/2016 05:53 PM, Tom Lane wrote:




Andrew, I wonder if I could prevail on you to make axolotl run "make
check" on HEAD in src/interfaces/ecpg/ until it fails, so that we can
see if the logging I added tells anything useful about this.





Will do.

cheers

andrew




Re: [HACKERS] Tracing down buildfarm "postmaster does not shut down" failures

2016-02-09 Thread Andrew Dunstan



On 02/09/2016 06:46 PM, Andrew Dunstan wrote:



On 02/09/2016 05:53 PM, Tom Lane wrote:




Andrew, I wonder if I could prevail on you to make axolotl run "make
check" on HEAD in src/interfaces/ecpg/ until it fails, so that we can
see if the logging I added tells anything useful about this.





Will do.



Incidentally, as I noted earlier, the ecpg tests don't honour 
TEMP_CONFIG, and in axolotl's case this could well make a difference, as 
it is set up like this:


extra_config => {
    DEFAULT => [
        q(log_line_prefix = '%m [%c:%l] '),
        "log_connections = 'true'",
        "log_disconnections = 'true'",
        "log_statement = 'all'",
        "fsync = off",
        "stats_temp_directory='/home/andrew/bf/root/stats_temp/$branch'"
    ],
},

So it's not running with fsync off or using the ramdisk for 
stats_temp_directory. Of course, that doesn't explain why we're not 
seeing it on branches earlier than 9.5, but it could explain why we're 
only seeing it on the ecpg tests.


cheers

andrew






Re: [HACKERS] Tracing down buildfarm "postmaster does not shut down" failures

2016-02-09 Thread Andrew Dunstan



On 02/09/2016 07:49 PM, Tom Lane wrote:

Andrew Dunstan  writes:

So running it's not running with fsync off or using the ramdisk for
stats_temp_directory. Of course, that doesn't explain why we're not
seeing it on branches earlier than 9.5, but it could explain why we're
only seeing it on the ecpg tests.

BTW, the evidence we already have from axolotl's last run is that
post-checkpoint shutdown in the ecpg test was pretty quick:

LOG:  database system is shut down at 2016-02-09 16:31:14.784 EST
LOG:  lock files all released at 2016-02-09 16:31:14.817 EST

However, I'd already noted from some other digging in the buildfarm
logs that axolotl's speed seems to vary tremendously.  I do not
know what else you typically run on that hardware, but putting it
under full load might help prove things.




Almost nothing. It's a Raspberry Pi 2B that I got mainly to run a 
buildfarm animal. About the only other thing of note is a very lightly 
configured Nagios instance.


Of course, I could put it under continuous load running a compilation or 
something.


cheers

andrew





Re: [HACKERS] Tracing down buildfarm "postmaster does not shut down" failures

2016-02-09 Thread Andrew Dunstan



On 02/09/2016 08:49 PM, Tom Lane wrote:

Andrew Dunstan  writes:

On 02/09/2016 07:49 PM, Tom Lane wrote:

However, I'd already noted from some other digging in the buildfarm
logs that axolotl's speed seems to vary tremendously.  I do not
know what else you typically run on that hardware, but putting it
under full load might help prove things.

Almost nothing. It's a Raspberry Pi 2B that I got mainly to run a
buildfarm animal. About the only other thing of note is a very lightly
configured Nagios instance.

Huh, that's quite strange.  There is one fairly recent report of
axolotl failing "make check" because of taking over a minute to shut down:

http://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=axolotl&dt=2015-12-14%2020%3A30%3A52

but the runs before and after that show shutdown times of only a second or
two.



No idea why.

Anyway, we got a failure pretty quickly:

   pg_ctl: server does not shut down at 2016-02-09 21:10:11.914 EST

   pg_regress: could not stop postmaster: exit code was 256
   Makefile:81: recipe for target 'check' failed
   make: *** [check] Error 2

The log traces from the shutdown are below.

cheers

andrew



LOG:  received fast shutdown request at 2016-02-09 21:09:11.824 EST
LOG:  aborting any active transactions
LOG:  autovacuum launcher shutting down at 2016-02-09 21:09:11.830 EST
LOG:  shutting down at 2016-02-09 21:09:11.839 EST
LOG:  checkpoint starting: shutdown immediate
LOG:  CheckPointGuts starting at 2016-02-09 21:09:11.840 EST
LOG:  CheckPointCLOG() done at 2016-02-09 21:09:11.840 EST
LOG:  CheckPointCommitTs() done at 2016-02-09 21:09:11.840 EST
LOG:  CheckPointSUBTRANS() done at 2016-02-09 21:09:11.841 EST
LOG:  CheckPointMultiXact() done at 2016-02-09 21:09:11.841 EST
LOG:  CheckPointPredicate() done at 2016-02-09 21:09:11.841 EST
LOG:  CheckPointRelationMap() done at 2016-02-09 21:09:11.841 EST
LOG:  CheckPointReplicationSlots() done at 2016-02-09 21:09:11.841 EST
LOG:  CheckPointSnapBuild() done at 2016-02-09 21:09:11.841 EST
LOG:  CheckPointLogicalRewriteHeap() done at 2016-02-09 21:09:11.842 EST
LOG:  BufferSync(5) beginning to write 172 buffers at 2016-02-09 21:09:11.845 EST
LOG:  BufferSync(5) done, wrote 172/172 buffers at 2016-02-09 21:09:14.635 EST

LOG:  CheckPointBuffers() done at 2016-02-09 21:09:14.636 EST
LOG:  CheckPointReplicationOrigin() done at 2016-02-09 21:09:14.636 EST
LOG:  CheckPointGuts done at 2016-02-09 21:09:14.636 EST
LOG:  checkpoint WAL record flushed at 2016-02-09 21:09:14.636 EST
LOG:  pg_control updated at 2016-02-09 21:09:14.637 EST
LOG:  smgrpostckpt() done at 2016-02-09 21:09:14.668 EST
LOG:  RemoveOldXlogFiles() done at 2016-02-09 21:09:14.668 EST
LOG:  TruncateSUBTRANS() done at 2016-02-09 21:09:14.669 EST
LOG:  checkpoint complete: wrote 172 buffers (1.0%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=2.794 s, sync=0.000 s, total=2.829 s; sync files=0, longest=0.000 s, average=0.000 s; distance=966 kB, estimate=966 kB

LOG:  shutdown checkpoint complete at 2016-02-09 21:09:14.669 EST
LOG:  ShutdownCLOG() complete at 2016-02-09 21:09:14.669 EST
LOG:  ShutdownCommitTs() complete at 2016-02-09 21:09:14.669 EST
LOG:  ShutdownSUBTRANS() complete at 2016-02-09 21:09:14.669 EST
LOG:  database system is shut down at 2016-02-09 21:09:14.669 EST
LOG:  doing before_shmem_exit 0 at 2016-02-09 21:09:14.670 EST
LOG:  doing on_shmem_exit 2 at 2016-02-09 21:09:14.670 EST
LOG:  doing on_shmem_exit 1 at 2016-02-09 21:09:14.670 EST
LOG:  doing on_shmem_exit 0 at 2016-02-09 21:09:14.670 EST
LOG:  doing on_proc_exit 1 at 2016-02-09 21:09:14.670 EST
LOG:  doing on_proc_exit 0 at 2016-02-09 21:09:14.671 EST
LOG:  calling exit(0) at 2016-02-09 21:09:14.671 EST
LOG:  checkpointer dead at 2016-02-09 21:09:14.683 EST
LOG:  all children dead at 2016-02-09 21:10:11.184 EST
LOG:  doing on_shmem_exit 3 at 2016-02-09 21:10:11.191 EST
LOG:  doing on_shmem_exit 2 at 2016-02-09 21:10:11.191 EST
LOG:  doing on_shmem_exit 1 at 2016-02-09 21:10:11.192 EST
LOG:  doing on_shmem_exit 0 at 2016-02-09 21:10:11.208 EST
LOG:  doing on_proc_exit 1 at 2016-02-09 21:10:11.209 EST
LOG:  doing on_proc_exit 0 at 2016-02-09 21:10:11.209 EST
LOG:  lock files all released at 2016-02-09 21:10:11.211 EST
LOG:  calling exit(0) at 2016-02-09 21:10:11.211 EST




Re: [HACKERS] Tracing down buildfarm "postmaster does not shut down" failures

2016-02-09 Thread Andrew Dunstan



On 02/09/2016 10:27 PM, Tom Lane wrote:

Noah Misch  writes:

On Tue, Feb 09, 2016 at 10:02:17PM -0500, Tom Lane wrote:

I wonder if it's worth sticking some instrumentation into stats
collector shutdown?

I wouldn't be surprised if the collector got backlogged during the main phase
of testing and took awhile to chew through its message queue before even
starting the write of the final stats.

But why would the ecpg tests show such an effect when the main regression
tests don't?  AFAIK the ecpg tests don't exactly stress the server ---
note the trivial amount of data written by the shutdown checkpoint,
for instance.



The main regression tests run with the stats file on the ramdisk.




The other weird thing is that it's only sometimes slow.  If you look at
the last buildfarm result from axolotl, for instance, the tail end of
the ecpg log is

LOG:  ShutdownSUBTRANS() complete at 2016-02-09 16:31:14.784 EST
LOG:  database system is shut down at 2016-02-09 16:31:14.784 EST
LOG:  lock files all released at 2016-02-09 16:31:14.817 EST

so we only spent ~50ms on stats write that time.



That part is puzzling.


The idea I was toying with is that previous filesystem activity (making
the temp install, the server's never-fsync'd writes, etc) has built up a
bunch of dirty kernel buffers, and at some point the kernel goes nuts
writing all that data.  So the issues we're seeing would come and go
depending on the timing of that I/O spike.  I'm not sure how to prove
such a theory from here.



Yeah. It's faintly possible that a kernel upgrade will help.
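One low-effort way to test the dirty-buffer theory from the machine itself is to sample the Dirty:/Writeback: counters in /proc/meminfo while the tests run and see whether shutdown stalls coincide with a large backlog. A tiny parser for one such line (hypothetical helper; Linux meminfo format assumed):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Given a /proc/meminfo line like "Dirty:            1444 kB",
 * return the kB figure if the line starts with key, else -1.
 */
static long
meminfo_kb(const char *line, const char *key)
{
    size_t      klen = strlen(key);
    long        kb;

    if (strncmp(line, key, klen) != 0 || line[klen] != ':')
        return -1;
    if (sscanf(line + klen + 1, " %ld", &kb) != 1)
        return -1;
    return kb;
}
```

Logging those two values once a second alongside the postmaster log would show whether an I/O spike lines up with the slow shutdowns.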

cheers

andrew






Re: [HACKERS] Tracing down buildfarm "postmaster does not shut down" failures

2016-02-10 Thread Andrew Dunstan



On 02/09/2016 11:21 PM, Andrew Dunstan wrote:





The idea I was toying with is that previous filesystem activity (making
the temp install, the server's never-fsync'd writes, etc) has built up a
bunch of dirty kernel buffers, and at some point the kernel goes nuts
writing all that data.  So the issues we're seeing would come and go
depending on the timing of that I/O spike.  I'm not sure how to prove
such a theory from here.



Yeah. It's faintly possible that a kernel upgrade will help.






Another data point. I have another RPi2B that is running Debian Wheezy 
rather than the Fedora remix. I'm running the same test on it we ran 
yesterday on axolotl. It seems to be running without having the same 
problems. So maybe the particular kernel port to ARM is what's tripping 
us up.



cheers

andrew





Re: [HACKERS] Tracing down buildfarm "postmaster does not shut down" failures

2016-02-10 Thread Andrew Dunstan



On 02/10/2016 12:53 PM, Tom Lane wrote:

Andrew Dunstan  writes:

Yeah. It's faintly possible that a kernel upgrade will help.

Another data point. I have another RPi2B that is running Debian Wheezy
rather than the Fedora remix. I'm running the same test on it we ran
yesterday on axolotl. It seems to be running without having the same
problems. So maybe the particular kernel port to ARM is what's tripping
us up.

I'd bet against it being a port-specific problem; it sounds more like a
filesystem performance-tuning issue, which could easily change from one
kernel version to another.  What are the kernel version numbers exactly?


axolotl: Linux pirouette 3.18.13-501.20150510gitf36e19f.sc20.armv7hl.bcm2709 #1 SMP PREEMPT

debian: Linux piratic 3.18.7-v7+ #755 SMP PREEMPT

cheers

andrew





Re: [HACKERS] postgres_fdw vs. force_parallel_mode on ppc

2016-02-16 Thread Andrew Dunstan



On 02/15/2016 07:57 PM, Tom Lane wrote:

Noah Misch  writes:

On Mon, Feb 15, 2016 at 07:31:40PM -0500, Robert Haas wrote:

Oh, crap.  I didn't realize that TEMP_CONFIG didn't affect the contrib
test suites.  Is there any reason for that, or is it just kinda where
we ended up?

To my knowledge, it's just the undesirable place we ended up.

Yeah.  +1 for fixing that, if it's not unreasonably painful.





+1 for fixing it everywhere.

Historical note: back when TEMP_CONFIG was implemented, the main 
regression set was just about the only set of tests the buildfarm ran 
using a temp install. That wasn't even available for contrib and the 
PLs, IIRC.


cheers

andrew







Re: [HACKERS] exposing pg_controldata and pg_config as functions

2016-02-17 Thread Andrew Dunstan



On 02/17/2016 05:14 PM, Tom Lane wrote:

Peter Eisentraut  writes:

On 2/17/16 12:15 PM, Joe Conway wrote:

Ok, removed the documentation on the function pg_config() and pushed.

I still have my serious doubts about this, especially not even requiring
superuser access for this information.  Could someone explain why we
need this?

I thought we'd agreed on requiring superuser access for this function.
I concur that letting just anyone see the config data is inappropriate.





I'm in favor, and don't really want to rehearse the argument. But I 
think I can live quite happily with it being superuser only. It's pretty 
hard to argue that exposing it to a superuser creates much risk, ISTM.



cheers

andrew





[HACKERS] Release 4.17 of the PostgreSQL Buildfarm client

2016-02-20 Thread Andrew Dunstan


I have just cut release 4.17 of the PostgreSQL Buildfarm client. It is 
available at .


Changes of note:

 * use PGCTLTIMEOUT instead of hardcoded timeout settings
 * shipped config file is renamed to build-farm.conf.sample to avoid
   overwriting a default config file
 * use configured make command in make_bin_installcheck
 * restrict restart sleeps to Windows platforms
 * fix trigger_exclude pattern in build-farm.conf
 * protect git checkout with a lock if using git_use_workdirs

For the most part, the thing of significance is the PGCTLTIMEOUT change, 
which follows up on some changes in core code. The new code defaults 
this setting to 120, but it can be set in the config file. If you don't 
need any of the other changes (most people probably won't) then just 
adding something like


PGCTLTIMEOUT => '120',

to the build_env stanza of your config file should do the trick, and you 
can happily wait for the next release.



cheers

andrew




Re: [HACKERS] [PATCH] Alter or rename enum value

2016-09-06 Thread Andrew Dunstan



On 09/06/2016 02:30 PM, Tom Lane wrote:

Robert Haas  writes:

On Mon, Sep 5, 2016 at 11:40 PM, Tom Lane  wrote:

... And again, it's
hard to get excited about having these options for RENAME VALUE when no
one has felt a need for them yet in RENAME COLUMN.  I'm especially dubious
about IF NOT EXISTS against the destination name, considering that there
isn't *any* variant of RENAME that has an equivalent of that.  If it's
really useful, why hasn't that happened?

Because Tom Lane keeps voting against every patch to expand IF [ NOT ]
EXISTS into a new area?  :-)

Well, I'm on record as not liking the squishy semantics of CREATE IF NOT
EXISTS, and you could certainly make the same argument against RENAME IF
NOT EXISTS: you don't really know what state you will have after the
command executes.  But that wasn't the point I was trying to make here.


We do have ALTER TABLE [ IF EXISTS ] .. ADD COLUMN [ IF NOT EXISTS ],
so if somebody wanted the [ IF NOT EXISTS ] clause to also apply to
the RENAME COLUMN case, they'd have a good argument for adding it.

If someone wanted to propose adding IF NOT EXISTS to our rename
commands across-the-board, that would be a sensible feature to discuss.
What I'm objecting to is this one niche-case command getting out in
front of far-more-widely-used commands in terms of having such features.
I think the fact that we don't already have it in other rename commands
is pretty strong evidence that this is a made-up feature rather than
something with actual field demand.  I'm also concerned about adding it
in just one place like this; we might find ourselves boxed in in terms of
hitting syntax conflicts when we try to duplicate the feature elsewhere,
if we haven't done the legwork to add it to all variants of RENAME at
the same time.






Are we also going to have an exists test for the original thing being 
renamed? Exists tests on renames do strike me as somewhat cumbersome, to 
say the least.


cheers

andrew





Re: [HACKERS] [PATCH] pgpassfile connection option

2016-09-22 Thread Andrew Dunstan



On 09/22/2016 10:44 AM, Julian Markwort wrote:

Hello pgsql-hackers!

We thought it would be advantageous to be able to specify a 'custom' 
pgpassfile within the connection string along the lines of the 
existing parameters sslkey and sslcert.


Which is exactly what this very compact patch does.
The patch is minimally invasive - when no pgpassfile attribute is 
provided in the connection string, the regular pgpassfile is used.
The security-measures (which are limited to checking the permissions 
for 0600) are kept; however, we could loosen that restriction to allow 
group access as well along the lines of the ssl key file , if this is 
preferred. (in case multiple users belonging to the same group would 
like to connect using the same file).


The patch applies cleanly to master and compiles and runs as expected 
(as there are no critical alterations).
I've not written any documentation as of now, but I'll follow up 
closely if there is any interest for this patch.


notes:
 - using ~ to denote the user's home directory in the path does not 
work, however $HOME works (as this is translated by bash beforehand).
 - the notation in the custom pgpassfile should follow the notation of 
the 'default' pgpass files:

hostname:port:database:username:password
 - this has only been tested on linux so far, however due to the 
nature of the changes I suspect that there is nothing that could go 
wrong in other environments, although I could test that as well, if 
deemed necessary.
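For reference, matching a connection against a line in that hostname:port:database:username:password format follows the usual ~/.pgpass rule: a field of "*" matches any value. A sketch of that rule (illustrative only; libpq's real parser also handles backslash escaping within fields):

```c
#include <assert.h>
#include <string.h>

/* "*" in a pgpass field matches any value; otherwise compare exactly. */
static int
field_matches(const char *pattern, const char *value)
{
    return strcmp(pattern, "*") == 0 || strcmp(pattern, value) == 0;
}

/* True if a parsed pgpass entry applies to the given connection. */
static int
pgpass_entry_matches(const char *host_pat, const char *port_pat,
                     const char *db_pat, const char *user_pat,
                     const char *host, const char *port,
                     const char *db, const char *user)
{
    return field_matches(host_pat, host) &&
        field_matches(port_pat, port) &&
        field_matches(db_pat, db) &&
        field_matches(user_pat, user);
}
```

The first matching line in the file supplies the password, which is why entry order matters when wildcards are used.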




I'm not necessarily opposed to this, but what is the advantage over the 
existing PGPASSFILE environment setting mechanism?



cheers

andrew





[HACKERS] pg_upgrade vs user created range type extension

2016-09-22 Thread Andrew Dunstan


I have just encountered an apparent bug in pg_upgrade (or possibly pg_dump).


To recreate, install the cranges extension - which can be obtained via 
"git clone https://bitbucket.org/adunstan/pg-closed-ranges.git" - for 
both 9.4 and 9.5. Create a fresh 9.4 instance, create a database and in 
it run "create extension cranges schema pg_catalog". Then create a fresh 
9.5 instance and try to pg_upgrade from the 9.4 instance to the 9.5 
instance. Here's the tail of the log:



   pg_restore: creating SCHEMA "public"
   pg_restore: creating COMMENT "SCHEMA "public""
   pg_restore: creating EXTENSION "cranges"
   pg_restore: creating COMMENT "EXTENSION "cranges""
   pg_restore: creating SHELL TYPE "pg_catalog.cdaterange"
   pg_restore: creating FUNCTION
   "pg_catalog.cdaterange_canonical("cdaterange")"
   pg_restore: creating TYPE "pg_catalog.cdaterange"
   pg_restore: creating SHELL TYPE "pg_catalog.cint4range"
   pg_restore: creating FUNCTION
   "pg_catalog.cint4range_canonical("cint4range")"
   pg_restore: creating TYPE "pg_catalog.cint4range"
   pg_restore: creating SHELL TYPE "pg_catalog.cint8range"
   pg_restore: creating FUNCTION
   "pg_catalog.cint8range_canonical("cint8range")"
   pg_restore: creating TYPE "pg_catalog.cint8range"
   pg_restore: creating FUNCTION "pg_catalog.cdaterange("date", "date")"
   pg_restore: [archiver (db)] Error while PROCESSING TOC:
   pg_restore: [archiver (db)] Error from TOC entry 191; 1255 16389
   FUNCTION cdaterange("date", "date") andrew
   pg_restore: [archiver (db)] could not execute query: ERROR: function
   "cdaterange" already exists with same argument types
Command was: CREATE FUNCTION "cdaterange"("date", "date")
   RETURNS "cdaterange"
LANGUAGE "internal" IMMUTABLE
AS $$range_construct...

The end result is that I currently can't upgrade a database using this 
extension, which is rather ugly.


Similar things happen if I put the extension in public instead of 
pg_catalog.


cheers

andrew





Re: [HACKERS] pg_upgrade vs user created range type extension

2016-09-22 Thread Andrew Dunstan



On 09/22/2016 07:33 PM, Tom Lane wrote:

Andrew Dunstan  writes:

I have just encountered an apparent bug in pg_upgrade (or possibly pg_dump).

Hmm, it sort of looks like pg_dump believes it should dump the range's
constructor function in binary-upgrade mode, while the backend is creating
the constructor function during CREATE TYPE anyway.  But if that's the
case, upgrade of user-defined range types would never have worked ...
seems like we should have noticed before now.

If that diagnosis is correct, we should either change pg_dump to not
dump that function, or change CREATE TYPE AS RANGE to not auto-create
the constructor functions in binary-upgrade mode.  The latter might be
more flexible in the long run.





Yeah, I think your diagnosis is correct. I'm not sure I see the point of 
the flexibility given that you can't specify a constructor function for 
range types (if that feature had been available I would probably have 
used it in this extension).


cheers

andrew





Re: [HACKERS] pg_upgrade vs user created range type extension

2016-09-23 Thread Andrew Dunstan



On 09/23/2016 11:55 AM, Tom Lane wrote:

Andrew Dunstan  writes:

On 09/22/2016 07:33 PM, Tom Lane wrote:

If that diagnosis is correct, we should either change pg_dump to not
dump that function, or change CREATE TYPE AS RANGE to not auto-create
the constructor functions in binary-upgrade mode.  The latter might be
more flexible in the long run.

Yeah, I think your diagnosis is correct. I'm not sure I see the point of
the flexibility given that you can't specify a constructor function for
range types (if that feature had been available I would probably have
used it in this extension).

It turns out that pg_dump already contains logic that's meant to exclude
constructor functions from getting dumped, but it's broken for binary
upgrades from pre-9.6 branches because somebody fat-fingered the AND/OR
nesting in the giant WHERE clauses in getFuncs().  Curiously, it's *not*
broken when dumping from >= 9.6, which explains why I couldn't reproduce
this in HEAD.  It looks like Stephen fixed this while adding the
pg_init_privs logic, while not realizing that he'd left it broken in the
existing cases.

The comment in front of all this is nearly as confused as the code is,
too.  Will fix.




Great, Thanks.

cheers

andrew





Re: [HACKERS] datetime.h defines like PM conflict with external libraries

2017-10-03 Thread Andrew Dunstan




On 10/03/2017 03:00 PM, Andres Freund wrote:
> Hi,
>
> In my llvm jit work I'd to
>
> #undef PM
> /* include some llvm headers */
> #define PM 1
>
> because llvm has a number of functions which have an argument named PM.
> Now that works, but it's fairly ugly. Perhaps it would be a good idea to
> name these defines in a manner that's slightly less likely to conflict?
>
> Alternatively we could use #pragma push_macro() around the includes, but
> that'd be a new dependency.
>
> Better ideas?
>


AFAICT at a quick glance these are only used in a couple of files. Maybe
the defs need to be floated off to a different header with more limited
inclusion?

cheers

andrew

-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] datetime.h defines like PM conflict with external libraries

2017-10-03 Thread Andrew Dunstan


On 10/03/2017 04:43 PM, Tom Lane wrote:
> Andres Freund  writes:
>> On 2017-10-03 16:34:38 -0400, Andrew Dunstan wrote:
>>> AFAICT at a quick glance these are only used in a couple of files. Maybe
>>> the defs need to be floated off to a different header with more limited
>>> inclusion?
>> Why not just rename them to PG_PM etc? If we force potential external
>> users to do some changes, we can use more unique names just as well -
>> the effort to adapt won't be meaningfully higher... IMNSHO there's not
>> much excuse for defining macros like PM globally.
> I like the new-header-file idea because it will result in minimal code
> churn and thus minimal back-patching hazards.
>
> I do *not* like "PG_PM".  For our own purposes that adds no uniqueness
> at all.  If we're to touch these symbols then I'd go for names like
> "DATETIME_PM".  Or maybe "DT_PM" ... there's a little bit of precedent
> for the DT_ prefix already.
>
>   


Yeah. If we use a prefix +1 for DT_. If we do that then I think they
should *all* be prefixed, not just the ones we know of conflicts for.

cheers

andrew

-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] [BUGS] BUG #14825: enum type: unsafe use?

2017-10-03 Thread Andrew Dunstan


On 09/27/2017 02:52 PM, Tom Lane wrote:
> Andrew Dunstan  writes:
>> At this stage on reflection I agree it should be pulled :-(
> That seems to be the consensus, so I'll go make it happen.
>
>> I'm not happy about the idea of marking an input function as not
>> parallel safe, certainly not without a good deal of thought and
>> discussion that we don't have time for this cycle.
> I think the way forward is to do what we had as of HEAD (984c92074),
> but add the ability to transmit the blacklist table to parallel
> workers.  Since we expect the blacklist table would be empty most of
> the time, this should be close to no overhead in practice.  I concur
> that the idea of marking the relevant functions parallel-restricted is
> probably not as safe a fix as I originally thought, and it's not a
> very desirable restriction even if it did fix the problem.
>
>   


Do you have any suggestion as to how we should transmit the blacklist to
parallel workers?

cheers

andrew

-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] Continuous integration on Windows?

2017-10-12 Thread Andrew Dunstan


On 10/11/2017 11:04 PM, Thomas Munro wrote:
> Hi hackers,
>
> I don't use Windows myself, but I'd rather avoid submitting patches
> that fail to build, build with horrible warnings or blow up on that
> fine operating system.  I think it would be neat to be able to have
> experimental branches of PostgreSQL built and tested on Windows
> automatically just by pushing them to a watched public git repo.  Just
> like Travis CI and several others do for the major GNU/Linux distros,
> it seems there is at least one Windows-based CI company that
> generously offers free testing to open source projects:
> https://www.appveyor.com (hat tip to Ilmari for the pointer).  I
> wonder... has anyone here with Microsoft know-how ever tried to
> produce an appveyor.yml file that would do a MSVC build and
> check-world?
>


Interesting.

I'm taking a look.

A couple of things not in their pre-built images that we'll need are
flex and bison. We might be able to overcome that with Chocolatey, which
is installed; I haven't tested that yet.

Getting a working appveyor.yml will take a little while, though.

cheers

andrew

-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] Continuous integration on Windows?

2017-10-12 Thread Andrew Dunstan


On 10/12/2017 04:14 PM, Andrew Dunstan wrote:
>
> On 10/11/2017 11:04 PM, Thomas Munro wrote:
>> Hi hackers,
>>
>> I don't use Windows myself, but I'd rather avoid submitting patches
>> that fail to build, build with horrible warnings or blow up on that
>> fine operating system.  I think it would be neat to be able to have
>> experimental branches of PostgreSQL built and tested on Windows
>> automatically just by pushing them to a watched public git repo.  Just
>> like Travis CI and several others do for the major GNU/Linux distros,
>> it seems there is at least one Windows-based CI company that
>> generously offers free testing to open source projects:
>> https://www.appveyor.com (hat tip to Ilmari for the pointer).  I
>> wonder... has anyone here with Microsoft know-how ever tried to
>> produce an appveyor.yml file that would do a MSVC build and
>> check-world?
>>
>
> Interesting.
>
> I'm taking a look.
>
> A couple of things not in their pre-built images that we'll need are
> flex and bison. We might be able to overcome that with chocolatey, which
> is installed, haven't tested yet.
>
> getting a working appveyor.yml will take a little while, though.



Actually, that didn't take too long.

No testing yet, but this runs a build successfully:
<https://gist.github.com/adunstan/7f18e5db33bb2d73f69ff8c9337a4e6c>

See results at <https://ci.appveyor.com/project/AndrewDunstan/pgdevel>

cheers

andrew

-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] Continuous integration on Windows?

2017-10-12 Thread Andrew Dunstan


On 10/12/2017 06:46 PM, Thomas Munro wrote:
> On Fri, Oct 13, 2017 at 10:57 AM, Andrew Dunstan
>  wrote:
>> Actually, that didn't take too long.
>>
>> No testing yet, but this runs a build successfully:
>> <https://gist.github.com/adunstan/7f18e5db33bb2d73f69ff8c9337a4e6c>
>>
>> See results at <https://ci.appveyor.com/project/AndrewDunstan/pgdevel>
> Excellent!  Thanks for looking at this, Andrew.  It's going to be
> really useful for removing surprises.
>
> It would be nice to finish up with a little library of control files
> like this for different CI vendors, possibly with some different
> options (different compilers, different word size, add valgrind, ...).
> I don't know if it would ever make sense to have standardised CI
> control files in the tree -- many projects do -- but it's very easy to
> carry a commit that adds them in development branches but drop it as
> part of the format-patch-and-post process.
>


Well, as you can see here the appveyor.yml file can live outside the
tree that's being built.


What would be good, though, would be to split build.pl into two so you
wouldn't need the auxiliary file I had to create from it.


cheers

andrew

-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] Still another race condition in recovery TAP tests

2017-10-13 Thread Andrew Dunstan


On 10/13/2017 01:04 AM, Noah Misch wrote:
> On Fri, Oct 06, 2017 at 05:57:24PM +0800, Craig Ringer wrote:
>> On 6 October 2017 at 14:03, Noah Misch  wrote:
>>> On Fri, Sep 08, 2017 at 10:32:03PM -0400, Tom Lane wrote:
>>>> (I do kinda wonder why we rolled our own RecursiveCopy; surely there's
>>>> a better implementation in CPAN?)
>>> Fewer people will test as we grow the list of modules they must first 
>>> install.
>> Meh, I don't buy that. At worst, all we have to do is provide a script
>> that fetches them, from distro repos if possible, and failing that
>> from CPAN.
>>
>> With cpanminus, that's pretty darn simple too.
> If the tree had such a script and it were reliable, then yes, it would matter
> little whether the script procured one module or five.
>
>



Not everyone has cpanminus installed either. My approach in the
buildfarm code is to lean over backwards in order to avoid non-standard
modules. For the problem at hand we use cp/xcopy, but the tree being
copied is stable so we don't run into the disappearing/changing file
problem.


cheers

andrew

-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] Continuous integration on Windows?

2017-10-13 Thread Andrew Dunstan


On 10/13/2017 08:09 AM, Thomas Munro wrote:
> On Fri, Oct 13, 2017 at 1:42 PM, Andrew Dunstan
>  wrote:
>> Well, as you can see here the appveyor.yml file can live outside the
>> tree that's being built.
> Here's a Wiki page where I hope we can collect how-to information on
> this general topic:
>
> https://wiki.postgresql.org/wiki/Continuous_Integration
>
> I tried your appveyor.yml, and added:
>
> test_script:
>   - cd src\tools\msvc && vcregress check
>
> That much I could figure out just by reading our manual and I could
> see that it worked first time, but to make this really useful I guess
> we'd have to teach it to dump out resulting regression.diffs files and
> backtraces from core files (as the Travis CI version accessible from
> that page does).
>


I'll add some info to the wiki. Unfortunately, the tests fail on the
tablespace test because they are running as a privileged user. We need
to find a way around that, still looking into it. (See
<https://blog.2ndquadrant.com/setting-build-machine-visual-studio-2017/>
which describes how I get around that when running by hand).

cheers

andrew



-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





[HACKERS] Re: [BUGS] BUG #14849: jsonb_build_object doesn't like VARIADIC calls very much

2017-10-22 Thread Andrew Dunstan


On 10/22/2017 12:11 PM, Andrew Dunstan wrote:
>
> On 10/21/2017 07:33 PM, Michael Paquier wrote:
>> On Sun, Oct 22, 2017 at 1:43 AM, Tom Lane  wrote:
>>> I don't think collecting all the arguments is particularly special ---
>>> format() or concat() for instance could possibly use this.  You might
>>> need an option to say what to do with unknown.
>> In this case, we could just use a boolean flag to decide if TEXTOID
>> should be enforced or not.
> A generic function is going to look a little more complicated than this,
> though. The functions as coded assume that the function has a single
> variadic argument. But for it to be useful generically it really needs
> to be able to work where there are both fixed and variadic arguments (a
> la printf style functions).
>
> I guess a simple way would be to make the caller tell the function where
> the variadic arguments start, or how many to skip, something like that.
>


Here's a patch that works that way, based on Michael's code.

cheers

andrew

-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





[HACKERS] Re: [BUGS] BUG #14849: jsonb_build_object doesn't like VARIADIC calls very much

2017-10-22 Thread Andrew Dunstan


On 10/22/2017 04:35 PM, Michael Paquier wrote:
> On Mon, Oct 23, 2017 at 1:44 AM, Andrew Dunstan
>  wrote:
>
>> here's a patch that works that way, based on Michael's code.
> Patch not attached :)
> I still have a patch half-cooked, that I can send if necessary, but
> you are on it, right?


Sorry. Here it is.

cheers

andrew

-- 
Andrew Dunstan  https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

diff --git a/src/backend/utils/fmgr/funcapi.c b/src/backend/utils/fmgr/funcapi.c
index 9c3f451..dfd5d8c 100644
--- a/src/backend/utils/fmgr/funcapi.c
+++ b/src/backend/utils/fmgr/funcapi.c
@@ -1400,3 +1400,120 @@ TypeGetTupleDesc(Oid typeoid, List *colaliases)
 
 	return tupdesc;
 }
+
+/*
+ * Extract a set of variadic argument values, types and NULL markers. This is
+ * useful for abstracting away whether or not the call has been done
+ * with an explicit VARIADIC array or a normal set of arguments, for a
+ * VARIADIC "any" function. When doing a VARIADIC call, the caller has
+ * provided one argument made of an array of keys, so deconstruct the
+ * array data before using it for the next processing.
+ * If no explicit VARIADIC call is used, just fill in the data based on
+ * all the arguments given by the caller.
+ * This function returns the number of arguments generated. In the event
+ * where the caller provided a NULL array input, return -1.
+ * nfixed is the number of non-variadic arguments to the function - these
+ * will be ignored by this function.
+ * If required by the caller, values of type "unknown" will be converted
+ * to text datums.
+ */
+int
+extract_variadic_args(FunctionCallInfo fcinfo, Datum **args,
+	  Oid **types, bool **nulls, int nfixed,
+	  bool convert_unknown_to_text)
+{
+	bool		variadic = get_fn_expr_variadic(fcinfo->flinfo);
+	Datum	   *args_res;
+	bool	   *nulls_res;
+	Oid		   *types_res;
+	int			nargs, i;
+
+	*args = NULL;
+	*types = NULL;
+	*nulls = NULL;
+
+	if (variadic)
+	{
+		ArrayType  *array_in;
+		Oid			element_type;
+		bool		typbyval;
+		char		typalign;
+		int16		typlen;
+
+		Assert(PG_NARGS() == nfixed + 1);
+
+		if (PG_ARGISNULL(nfixed))
+			return -1;
+
+		array_in = PG_GETARG_ARRAYTYPE_P(nfixed);
+		element_type = ARR_ELEMTYPE(array_in);
+
+		get_typlenbyvalalign(element_type,
+			 &typlen, &typbyval, &typalign);
+		deconstruct_array(array_in, element_type, typlen, typbyval,
+		  typalign, &args_res, &nulls_res,
+		  &nargs);
+
+		/* All the elements of the array have the same type */
+		types_res = (Oid *) palloc0(nargs * sizeof(Oid));
+		for (i = 0; i < nargs; i++)
+			types_res[i] = element_type;
+	}
+	else
+	{
+		nargs = PG_NARGS() - nfixed;
+		Assert (nargs > 0);
+		nulls_res = (bool *) palloc0(nargs * sizeof(bool));
+		args_res = (Datum *) palloc0(nargs * sizeof(Datum));
+		types_res = (Oid *) palloc0(nargs * sizeof(Oid));
+
+		for (i = 0; i < nargs; i++)
+		{
+			nulls_res[i] = PG_ARGISNULL(i + nfixed);
+			types_res[i] = get_fn_expr_argtype(fcinfo->flinfo, i + nfixed);
+
+			if (!OidIsValid(types_res[i]))
+ereport(ERROR,
+		(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+		 errmsg("could not determine data type for argument %d",
+i + nfixed + 1)));
+
+			/*
+			 * Turn a constant (more or less literal) value that's of unknown
+			 * type into text if required.  Unknowns come in as a cstring
+			 * pointer.
+			 */
+			if (convert_unknown_to_text && types_res[i] == UNKNOWNOID &&
+get_fn_expr_arg_stable(fcinfo->flinfo, i + nfixed))
+			{
+types_res[i] = TEXTOID;
+
+/* important for the value; a key cannot be NULL */
+if (PG_ARGISNULL(i + nfixed))
+	args_res[i] = (Datum) 0;
+else
+	args_res[i] =
+		CStringGetTextDatum(PG_GETARG_POINTER(i + nfixed));
+			}
+			else
+			{
+/* no conversion needed, just take the datum as given */
+args_res[i] = PG_GETARG_DATUM(i + nfixed);
+			}
+
+			if (!OidIsValid(types_res[i]) ||
+(convert_unknown_to_text && (types_res[i] == UNKNOWNOID)))
+ereport(ERROR,
+		(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
+		 errmsg("could not determine data type for argument %d",
+i + nfixed + 1)));
+		}
+	}
+
+	/* Fill in results */
+	*args = args_res;
+	*nulls = nulls_res;
+	*types = types_res;
+
+	return nargs;
+}
diff --git a/src/include/funcapi.h b/src/include/funcapi.h
index 951af2a..0550c07 100644
--- a/src/include/funcapi.h
+++ b/src/include/funcapi.h
@@ -281,6 +281,9 @@ extern TupleTableSlot *TupleDescGetSlot(TupleDesc tupdesc);
 extern FuncCallContext *init_MultiFuncCall(PG_FUNCTION_ARGS);
 extern FuncCallContext *per_MultiFuncCall(PG_FUNCTION_ARGS);
 extern void end_MultiFuncCall(PG_FUNCTION_ARGS, FuncCallContext *funcctx);
+extern int extract_variadic_args(FunctionCallInfo fcinfo, Datum **args,
+	  Oid **types, bool **nulls, int nfixed,
+	  bool convert_unknown_to_text);

[HACKERS] Re: [COMMITTERS] pgsql: Process variadic arguments consistently in json functions

2017-10-26 Thread Andrew Dunstan


On 10/26/2017 12:12 AM, Michael Paquier wrote:
> On Wed, Oct 25, 2017 at 5:24 AM, Andrew Dunstan  wrote:
>> Process variadic arguments consistently in json functions
>>
>> json_build_object and json_build_array and the jsonb equivalents did not
>> correctly process explicit VARIADIC arguments. They are modified to use
>> the new extract_variadic_args() utility function which abstracts away
>> the details of the call method.
>>
>> Michael Paquier, reviewed by Tom Lane and Dmitry Dolgov.
>>
>> Backpatch to 9.5 for the jsonb fixes and 9.4 for the json fixes, as
>> that's where they originated.
> - * Copyright (c) 2014-2017, PostgreSQL Global Development Group
> + * COPYRIGHT (c) 2014-2017, PostgreSQL Global Development Group
> Andrew, I have just noticed that this noise diff has crept in. You may
> want to fix that.


Argh! I see that in your v6 patch and I thought I'd caught all of it but
apparently not for master and REL_10. I wonder how that happened?

Will fix.

cheers

andrew






Re: [HACKERS] [GENERAL] pg_upgrade from 9.5 to 9.6 fails with "invalid argument"

2016-09-30 Thread Andrew Dunstan



On 09/29/2016 05:48 PM, Tom Lane wrote:

We might've caught these things earlier if the buildfarm testing
included cross-version upgrades, but of course that requires a
lot of test infrastructure that's not there ...





I have done quite a bit of work on this - see 
<https://github.com/PGBuildFarm/client-code/blob/master/PGBuild/Modules/TestUpgradeXversion.pm>



This actually runs on crake, and it has found problems in the past which 
we've fixed. However, it requires quite a deal of disk space (4GB or 
so), and the results are not stable, which is why I haven't enabled it.


What I am thinking of doing is making the diff tests fuzzy - if, say, we 
don't get a diff of more than 2000 lines we won't consider it a failure.


Among this module's advantages is that it tests upgrading quite a bit 
more than the standard pg_upgrade check - all the buildfarm's regression 
databases, not just the one created by "make check". For example, 
currently it is failing to upgrade from 9.6 to HEAD with this error:


   Could not load library "$libdir/hstore_plpython2"
   ERROR:  could not load library
   "/home/bf/bfr/root/upgrade/HEAD/inst/lib/postgresql/hstore_plpython2.so":
   /home/bf/bfr/root/upgrade/HEAD/inst/lib/postgresql/hstore_plpython2.so:
   undefined symbol: _Py_NoneStruct

I need to look into that.

cheers

andrew





Re: [HACKERS] [GENERAL] pg_upgrade from 9.5 to 9.6 fails with "invalid argument"

2016-09-30 Thread Andrew Dunstan



On 09/30/2016 10:25 AM, Tom Lane wrote:

Andrew Dunstan  writes:

On 09/29/2016 05:48 PM, Tom Lane wrote:

We might've caught these things earlier if the buildfarm testing
included cross-version upgrades, but of course that requires a
lot of test infrastructure that's not there ...

I have done quite a bit of work on this - see
<https://github.com/PGBuildFarm/client-code/blob/master/PGBuild/Modules/TestUpgradeXversion.pm>

Oh!  How would I enable or use that?



1. Pull the latest version of the module from git.
2. enable it in your buildfarm config file
3. do "run_branches.pl --run-all --verbose --test" and watch the output


cheers

andrew






Re: [HACKERS] [GENERAL] pg_upgrade from 9.5 to 9.6 fails with "invalid argument"

2016-09-30 Thread Andrew Dunstan



On 09/30/2016 12:24 PM, Tom Lane wrote:

Andrew Dunstan  writes:

On 09/30/2016 10:25 AM, Tom Lane wrote:

Oh!  How would I enable or use that?

1. Pull the latest version of the module from git.
2. enable it in your buildfarm config file
3. do "run_branches.pl --run-all --verbose --test" and watch the output

Seems to be some additional prep work needed somewhere ...

No upgrade_install_root at 
/home/bfarm/bf-scripts/build-farm-4.17/PGBuild/Modules/TestUpgradeXversion.pm 
line 51.
No upgrade_install_root at 
/home/bfarm/bf-scripts/build-farm-4.17/PGBuild/Modules/TestUpgradeXversion.pm 
line 51.
No upgrade_install_root at 
/home/bfarm/bf-scripts/build-farm-4.17/PGBuild/Modules/TestUpgradeXversion.pm 
line 51.





Oh sorry, you also need an entry for that in the config file. Mine has:

   upgrade_install_root => '/home/bf/bfr/root/upgrade',


cheers

andrew




Re: [HACKERS] [GENERAL] pg_upgrade from 9.5 to 9.6 fails with "invalid argument"

2016-10-02 Thread Andrew Dunstan



On 10/01/2016 02:01 PM, Tom Lane wrote:

Andrew Dunstan  writes:

On 09/30/2016 12:24 PM, Tom Lane wrote:

Seems to be some additional prep work needed somewhere ...
No upgrade_install_root at 
/home/bfarm/bf-scripts/build-farm-4.17/PGBuild/Modules/TestUpgradeXversion.pm 
line 51.

Oh sorry, you also need an entry for that in the config file. Mine has:
 upgrade_install_root => '/home/bf/bfr/root/upgrade',

Yesterday's runs of prairiedog had this turned on.  I'm not sure how
to interpret the output (because there isn't any, ahem), but it does
not appear that the test detected anything wrong.



I'm working on collecting the log files and making the module more 
useful. But you can see pretty much all the logs (lots of them!) after a 
run inside the upgrade directories.


cheers

andrew






[HACKERS] pg_upgade vs config

2016-10-02 Thread Andrew Dunstan


I'm working on updating and making production ready my experimental 
cross version pg_upgrade testing module for the buildfarm. A couple of 
things have emerged that are of concern. This module does a much more 
complete test than our normal test for pg_upgrade, which only checks 
upgrading the standard regression database. This tests all the databases 
the buildfarm creates for testing, including those made by modules in 
contrib.


The biggest issue is this: the upgrade fails completely on 
ltree-plpython and hstore-plpython, presumably because these modules 
rely on the plpython module being loaded first. pg_upgrade rather 
simple-mindedly calls LOAD on the object library to test if it's usable. 
It's a bit embarrassing that we can't upgrade a database using one of 
our own modules. At the very least we should hard-code a way around this 
(e.g. have it load the relevant plpython module first), but more 
generally I think we need a way to tell pg_upgrade that module X relies 
on module Y. In the past ISTR we've said we don't support having 
dependencies between loadable modules, but that ship now seems to have 
sailed.


Second, we get an unexpected difference between the pre-upgrade and 
post-upgrade dumps for the bloom module:


   --- /home/bf/bfr/root/upgrade/HEAD/origin-REL9_6_STABLE.sql
   2016-10-02 09:16:03.298341639 -0400
   +++
   /home/bf/bfr/root/upgrade/HEAD/converted-REL9_6_STABLE-to-HEAD.sql
   2016-10-02 09:16:54.889343991 -0400
   @@ -7413,6 +7413,20 @@
 COMMENT ON EXTENSION bloom IS 'bloom access method - signature
   file based index';


   +--
   +-- Name: bloom; Type: ACCESS METHOD; Schema: -; Owner:
   +--
   +
   +CREATE ACCESS METHOD bloom TYPE INDEX HANDLER public.blhandler;
   +
   +
   +--
   +-- Name: ACCESS METHOD bloom; Type: COMMENT; Schema: -; Owner:
   +--
   +
   +COMMENT ON ACCESS METHOD bloom IS 'bloom index access method';
   +
   +
 SET search_path = public, pg_catalog;

 SET default_tablespace = '';



It looks like we have some work to do to teach pg_dump about handling 
access methods in extensions. This doesn't look quite as bad as the 
first issue, but it's a pity 9.6 escaped into the wild with this issue.


cheers

andrew





Re: [HACKERS] pg_upgade vs config

2016-10-02 Thread Andrew Dunstan



On 10/02/2016 09:50 AM, Michael Paquier wrote:

On Sun, Oct 2, 2016 at 10:40 PM, Andrew Dunstan  wrote:

It looks like we have some work to do to teach pg_dump about handling access
methods in extensions. This doesn't look quite as bad as the first issue,
but it's a pity 9.6 escaped into the wild with this issue.

562f06f3 has addressed this issue 3 months ago, and there is a test in
src/test/modules/test_pg_dump.



So then why are the pre-upgrade and post-upgrade dumps different?

cheers

andrew




Re: [HACKERS] pg_upgade vs config

2016-10-02 Thread Andrew Dunstan



On 10/02/2016 01:53 PM, Tom Lane wrote:

So then why are the pre-upgrade and post-upgrade dumps different?

Because pg_dump with --binary-upgrade neglects to emit

ALTER EXTENSION bloom ADD ACCESS METHOD bloom;


That's what I suspected.



which it would need to do in order to make this work right.  The other
small problem is that there is no such ALTER EXTENSION syntax in the
backend.  This is a rather major oversight in the patch that added DDL
support for access methods, if you ask me.



I agree.

cheers

andrew





Re: [HACKERS] pg_upgade vs config

2016-10-02 Thread Andrew Dunstan



On 10/02/2016 12:54 PM, Tom Lane wrote:

Andrew Dunstan  writes:

The biggest issue is this: the upgrade fails completely on
ltree-plpython and hstore-plpython, presumably because these modules
rely on the plpython module being loaded first. pg_upgrade rather
simple-mindedly calls LOAD on the object library to test if it's usable.

FWIW, that seems to have worked fine yesterday on prairiedog.

I suspect the explanation is that macOS's dynamic linker is smart enough
to pull in plpython when one of those modules is LOAD'ed.  The ideal fix
would be to make that happen on all platforms.  I'm not actually sure
why it doesn't already; surely every dynamic linker in existence has
such a capability.

[ digs more deeply ... ]  Oh, weird: it looks like this succeeded in
every case except 9.6->HEAD upgrade.  Did we break something recently?



Yeah, my latest version of the test module (soon to hit github) also 
does a self upgrade, and these modules pass that on 9.5, whereas they 
fail on 9.6, as well as the 9.6->HEAD and HEAD self-tests failing. So 
indeed it looks like we've broken something. Yet another example of why 
I need to get this test module production ready :-)


cheers

andrew





Re: [HACKERS] pg_upgade vs config

2016-10-02 Thread Andrew Dunstan



On 10/02/2016 07:21 PM, Tom Lane wrote:

I wrote:

It occurs to me that a back-patchable workaround for this would be to
make get_loadable_libraries sort the library names in order by length
(and I guess we might as well sort same-length names alphabetically).
This would for example guarantee that hstore_plpython is probed after
both hstore and plpython.  Admittedly, this is a kluge of the first
water.  But I see no prospect of back-patching any real fix, and it
would definitely be better if pg_upgrade didn't fail on these modules.

I've tested the attached and verified that it allows pg_upgrade'ing
of the hstore_plpython regression DB --- or, if I reverse the sort
order, that it reproducibly fails.  I propose back-patching this
at least as far as 9.5, where the transform modules came in.  It might
be a good idea to go all the way back, just so that the behavior is
predictable.




Yeah, it's a really ugly kluge, but I don't have a better idea.

cheers

andrew




Re: [HACKERS] Misidentification of Python shared library

2016-10-04 Thread Andrew Dunstan



On 10/04/2016 10:43 AM, Tom Lane wrote:

While chasing down last night's failure on buildfarm member longfin,
I came across an interesting factoid.  If you just do

./configure --enable-shared
make
make install

in unmodified Python sources, what you will get is an install tree in
which libpython.so (or local equivalent such as .dylib) is installed
in /usr/local/lib, while libpython.a is installed in a directory named
like /usr/local/lib/python3.5/config-3.5m.  I've verified this behavior
on both Linux and macOS.



Wow, these modules have uncovered a number of cans of worms.





In short: I propose replacing all of this logic with "if there's something
in $python_libdir that has the right name to be a python shared library,
use that, else try the same in $python_configdir, else fail".  Thoughts?





Seems reasonable.

cheers

andrew




[HACKERS] cygwin64 assertion failure

2016-10-09 Thread Andrew Dunstan
lorikeet seems to be stuck running the parallel tests, after having 
failed an assertion.



Here's an excerpt from the log:


2016-10-09 11:52:35.751 EDT [57fa67c3.1148:11] LOG: statement: alter 
table tenk1 set (parallel_workers = 4);
2016-10-09 11:52:35.753 EDT [57fa67c3.1148:12] LOG:  statement: explain 
(verbose, costs off)

select parallel_restricted(unique1) from tenk1
  where stringu1 = 'GR' order by 1;
2016-10-09 11:52:35.756 EDT [57fa67c3.1148:13] LOG:  statement: explain 
(costs off)

select length(stringu1) from tenk1 group by length(stringu1);
2016-10-09 11:52:35.758 EDT [57fa67c3.1148:14] LOG:  statement: select 
length(stringu1) from tenk1 group by length(stringu1);
TRAP: FailedAssertion("!(vmq->mq_sender == ((void *)0))", File: 
"/home/andrew/bf64/root/HEAD/pgsql.build/../pgsql/src/backend/storage/ipc/shm_mq.c", 
Line: 221)
2016-10-09 11:53:54.260 EDT [57fa6795.2a7c:2] WARNING:  worker took too 
long to start; canceled



Since then almost the only thing on the log is repeated instances of 
that last message.



cheers


andrew





[HACKERS] buildfarm client release 4.18

2016-10-11 Thread Andrew Dunstan


I have just released buildfarm client version 4.18.


In addition to some minor fixes, there are two significant changes:


 * The client now makes a determined effort to clean up any left over
   build artefacts from previous runs at the start of a run. It also
   tries to clean away old socket files. That means that manual fixes
   are much less likely to be required, if the script has crashed or
   the machine has crashed or restarted.
 * There is a new config parameter "wait_timeout", which defaults to
   undefined. If it is set then any run that takes longer than the
   specified number of seconds is aborted. That means that runs that
   get stuck will time out, rather than leaving the whole animal stuck
   and requiring manual intervention. My test animal has a setting of
   3 * 3600, i.e. 3 hours. There is some possible danger in this, in
   that we might miss bugs that cause us to get stuck, so at least
   some animals should not use this feature.


The release can be downloaded from: 




cheers


andrew





Re: [HACKERS] [PATCH] Better logging of COPY queries if log_statement='all'

2016-10-17 Thread Andrew Dunstan



On 10/17/2016 09:57 AM, Aleksander Alekseev wrote:

Hello.

Sometimes it's useful to log the content of files used in COPY ... TO ... and
COPY ... FROM ... queries. Unfortunately PostgreSQL doesn't allow doing
this, even if log_statement='all'. The suggested patch fixes this.

Log example:

```
LOG:  statement: create table test (k int, v text);
LOG:  statement: insert into test values (111, 'aaa'), (222, 'bbb');
LOG:  statement: copy test to '/tmp/copy.txt';
LOG:  statement: 111aaa
LOG:  statement: 222bbb
LOG:  statement: delete from test;
LOG:  statement: copy test from '/tmp/copy.txt';
LOG:  statement: 111aaa
LOG:  statement: 222bbb
```



I'm not in favor of this, especially if it's not even optional. 
log_statement is about logging, er, statements, not logging data.


cheers

andrew




Re: [HACKERS] Speed of user-defined aggregates using array_append as transfn

2016-10-28 Thread Andrew Dunstan



On 10/28/2016 02:04 PM, Tom Lane wrote:

Comments, better ideas?





My initial admittedly ugly thought was why not have a second append 
function that doesn't use expanded arrays?


cheers

andrew





Re: [HACKERS] Speed of user-defined aggregates using array_append as transfn

2016-10-28 Thread Andrew Dunstan



On 10/28/2016 03:14 PM, Tom Lane wrote:

Andrew Dunstan  writes:

My initial admittedly ugly thought was why not have a second append
function that doesn't use expanded arrays?

That won't get us out of the O(N^2) behavior.  Also I don't see what's
better about it than my suggestion of making array_append itself do
that when called as an aggregate function.






Probably nothing, I was just thinking out loud. That suggestion seems 
like the most obviously back-patchable solution.


cheers

andrew




Re: [HACKERS] btree_gin and btree_gist for enums

2016-11-04 Thread Andrew Dunstan



On 11/04/2016 04:14 PM, Emre Hasegeli wrote:

Here is a patch to add enum support to btree_gin and btree_gist. I didn't
include distance operations, as I didn't think it terribly important, and
there isn't a simple way to compute it sanely and efficiently, so no KNN
support.

I don't think we can implement a meaningful distance operator for enums.



Probably.




This time including the data file for the gist regression tests.

It doesn't apply to HEAD anymore.  I have tested it on b12fd41.



I'll fix the bitrot.



The GiST part of it doesn't really work.  The current patch compares
oids.  We need to change it to compare enumsortorder.  I could make it
fail easily:



Ouch. OK, I'll work on this.



+ALTER OPERATOR FAMILY gist_enum_ops USING gist ADD

Why don't we just create it with those changes?



I'll take a look.






+ * Use a similar trick to that used for numeric for enums, since we don't

Do you mean "similar trick that is used" or "trick similar to numeric"?



This is perfectly legal English. It means "... a similar trick to the 
one used for numeric ... ". I'll change it to that if you think it's 
clearer.






+ * actually know the leftmost value of any enum without knowing the concrete
+ * type, so we use a dummy leftmost value of InvalidOid.
+return DatumGetBool(
+DirectFunctionCall2(enum_gt, ObjectIdGetDatum(*((const Oid *) a)), ObjectIdGetDatum(*((const Oid *) b)))
+);

Wouldn't it be nice to refactor enum_cmp_internal() to accept typcache
and use it there like the rangetypes?




Not sure. Will look.

Thanks for the review.

cheers

andrew






Re: [HACKERS] btree_gin and btree_gist for enums

2016-11-05 Thread Andrew Dunstan



On 11/04/2016 04:14 PM, Emre Hasegeli wrote:

The GiST part of it doesn't really work.  The current patch compares
oids.  We need to change it to compare enumsortorder.  I could make it
fail easily:


regression=# create type e as enum ('0', '2', '3');
CREATE TYPE
regression=# alter type e add value '1' after '0';
ALTER TYPE
regression=# create table t as select (i % 4)::text::e from generate_series(0, 
10) as i;
SELECT 11
regression=# create index on t using gist (e);
SEGFAULT



It calls the enum routines, which use the sort order if necessary. It's 
not necessary to use the sort order when the enum OIDs are all even, as 
we have carefully arranged for even OIDs to be directly comparable; all 
the initially allocated OIDs are even, a very nifty efficiency measure 
devised by Tom when we added enum extension.


The real problem here is that enum_cmp_internal assumes that 
fcinfo->flinfo has been set up, but DirectFunctionCallN doesn't do 
that; it sets flinfo to NULL.


The patch below cures the problem. I'm not sure if there is a better 
way. Thoughts?


cheers

andrew


diff --git a/src/backend/utils/adt/enum.c b/src/backend/utils/adt/enum.c
index 47d5355..64ba7a7 100644
--- a/src/backend/utils/adt/enum.c
+++ b/src/backend/utils/adt/enum.c
@@ -261,7 +261,7 @@ enum_send(PG_FUNCTION_ARGS)
 static int
 enum_cmp_internal(Oid arg1, Oid arg2, FunctionCallInfo fcinfo)
 {
-   TypeCacheEntry *tcache;
+   TypeCacheEntry *tcache = NULL;

/* Equal OIDs are equal no matter what */
if (arg1 == arg2)
@@ -277,7 +277,8 @@ enum_cmp_internal(Oid arg1, Oid arg2, FunctionCallInfo fcinfo)

}

/* Locate the typcache entry for the enum type */
-   tcache = (TypeCacheEntry *) fcinfo->flinfo->fn_extra;
+   if (fcinfo->flinfo != NULL)
+   tcache = (TypeCacheEntry *) fcinfo->flinfo->fn_extra;
if (tcache == NULL)
{
HeapTuple   enum_tup;
@@ -296,7 +297,8 @@ enum_cmp_internal(Oid arg1, Oid arg2, FunctionCallInfo fcinfo)

ReleaseSysCache(enum_tup);
/* Now locate and remember the typcache entry */
tcache = lookup_type_cache(typeoid, 0);
-   fcinfo->flinfo->fn_extra = (void *) tcache;
+   if (fcinfo->flinfo != NULL)
+   fcinfo->flinfo->fn_extra = (void *) tcache;
}

/* The remaining comparison logic is in typcache.c */







Re: [HACKERS] btree_gin and btree_gist for enums

2016-11-05 Thread Andrew Dunstan



On 11/05/2016 01:13 PM, Tom Lane wrote:

Andrew Dunstan  writes:

On 11/05/2016 11:46 AM, Tom Lane wrote:

That may be a good fix for robustness purposes, but it seems pretty horrid
from an efficiency standpoint.  Where is this call, and should we be
modifying it to provide a flinfo?

I thought of providing an flinfo, but I couldn't see a simple way to do
it that would provide something much longer lived than the function
call, in which case it seemed a bit pointless. That's why I asked for
assistance :-)

Hmm.  The problem is that the intermediate layer in btree_gist (and
probably btree_gin too, didn't look) does not pass through the
FunctionCallInfo data structure that's provided by the GIST AM.
That wasn't needed up to now, because none of the supported data types
are complex enough that any cached state would be useful, but trying
to extend it to enums exposes the shortcoming.

We could fix this, but it would be pretty invasive since it would require
touching just about every function in btree_gist/gin.  Not entirely sure
that it's worth it.  On the other hand, the problem might well come up
again in future, perhaps for a datatype where the penalty for lack of
a cache is greater --- eg, it would be pretty painful to support
record_cmp without caching.  And the changes ought to be relatively
mechanical, even if they are extensive.




Yeah. I think it's probably worth doing. btree_gin is probably a better 
place to start because it's largely macro-ized.


I don't have time right now to do this. I'll try to get to it if nobody 
else does over then next couple of months.





BTW, this patch would be a great deal shorter if you made use of the
work done in 40b449ae8.  That is, you no longer need to replace the
base extension script --- just add an update script and change the
default version in the .control file.  See fd321a1df for an example.


Oh, nice. My work was originally done in March, before this came in.


cheers

andrew




Re: [HACKERS] Do we need use more meaningful variables to replace 0 in catalog head files?

2016-11-11 Thread Andrew Dunstan



On 11/11/2016 03:03 AM, Magnus Hagander wrote:


On Nov 11, 2016 00:53, "Jan de Visser" wrote:

> On 2016-11-09 10:47 AM, Tom Lane wrote:
>
>> Amit Langote writes:
>>
>>> On Wed, Nov 9, 2016 at 11:47 PM, Tom Lane wrote:
>>>
 Hmm, that's from 2009.  I thought I remembered something much
 more recent, like last year or so.
>>>
>>> This perhaps:
>>> * Re: Bootstrap DATA is a pita *
>>> https://www.postgresql.org/message-id/flat/CAOjayEfKBL-_Q9m3Jsv6V-mK1q8h%3Dca5Hm0fecXGxZUhPDN9BA%40mail.gmail.com
>>
>> Yeah, that's the thread I remembered.  I think the basic conclusion was
>> that we needed a Perl script that would suck up a bunch of data from some
>> representation that's more edit-friendly than the DATA lines, expand
>> symbolic representations (regprocedure etc) into numeric OIDs, and write
>> out the .bki script from that.  I thought some people had volunteered to
>> work on that, but we've seen no results ...
>>
>> regards, tom lane
>
> Would a python script converting something like json or yaml be
> acceptable? I think right now only perl is used, so it would be a new
> build chain tool, albeit one that's in my (very humble) opinion much
> better suited to the task.

Python or perl is not what matters here really. For something as 
simple as this (for the script) it doesn't make a real difference. I 
personally prefer python over perl in most cases, but our standard is 
perl so we should stick to that.


The issue is coming up with a format that people like and think is an 
improvement.


If we have that and a python script for it, someone would surely 
volunteer to convert that part. But we need to start by solving the 
actual problem.






+1. If we come up with an agreed format I will undertake to produce the 
requisite perl script. So let's reopen the debate on the data format. I 
want something that doesn't consume large numbers of lines per entry. If 
we remove defaults in most cases we should be able to fit a set of 
key/value pairs on just a handful of lines.


cheers

andrew





Re: [HACKERS] Do we need use more meaningful variables to replace 0 in catalog head files?

2016-11-13 Thread Andrew Dunstan



On 11/13/2016 04:54 AM, Andres Freund wrote:

On 2016-11-12 12:30:45 -0500, Andrew Dunstan wrote:


On 11/11/2016 11:10 AM, Tom Lane wrote:

boolin: OID=1242 proname=boolin proargtypes="cstring" prorettype=bool
boolin: prosrc=boolin provolatile=i proparallel=s




I have written a little perl script to turn the pg_proc DATA lines into
something like the format suggested. In order to keep the space used as
small as possible, I used a prefix based on the OID. See attached result.

Still plenty of work to go, e.g. grabbing the DESCR lines, and turning this
all back into DATA/DESCR lines, but I wanted to get this out there before
going much further.

The defaults I used are below (commented out keys are not defaulted, they
are just there for completeness).

In the referenced thread I'd started to work on something like this,
until other people also said they'd be working on it.  I chose a
different output format (plain Data::Dumper), but I'd added the parsing
of DATA/DESCR and such to genbki.

Note that I found that initdb performance is greatly increased *and*
legibility is improvided, if types and such in the data files are
expanded, and converted to their oids when creating postgres.bki.



Yeah, I have the type name piece, it was close to trivial. I just read 
in pg_type.h and stored the names/oids in a hash.


Data::Dumper is too wasteful of space. The thing I like about Tom's 
format is that it's nice and concise.


I'm not convinced the line prefix part is necessary, though. What I'm 
thinking of is something like this:


PROCDATA( oid=1242 name=boolin isstrict=t volatile=i parallel=s nargs=1
rettype=bool argtypes="cstring" src=boolin );

Teaching Catalog.pm how to parse that and turn the type names back into 
oids won't be difficult. I already have code for the prefix version, and 
this would be easier since there is an end marker.


I'd actually like to roll up the DESCR lines in pg_proc.h into this too, 
they strike me as a bit of a wart. But I'm flexible on that.


If we can generalize this to other catalogs, then that will be good, but 
my inclination is to handle the elephant in the room (pg_proc.h) and 
worry about the gnats later.




I basically made genbki/catalog.pm accept text whenever a column is of
type regtype/regprocedure/. To then make use of that I converted a bunch
of plain oid columns to their reg* equivalent. That's also nice
for just plain querying of the catalogs ;)

I don't think the code is going to be much use for you directly, but it
might be worthwhile to crib some stuff from the 0002 of the attached
patches (based on 74811c4050921959d54d42e2c15bb79f0e2c37f3).



Thanks, I will take a look.

cheers

andrew






Re: [HACKERS] Do we need use more meaningful variables to replace 0 in catalog head files?

2016-11-14 Thread Andrew Dunstan



On 11/13/2016 11:11 AM, Tom Lane wrote:

Andrew Dunstan  writes:

I'm not convinced the line prefix part is necessary, though. What I'm
thinking of is something like this:
PROCDATA( oid=1242 name=boolin isstrict=t volatile=i parallel=s nargs=1
  rettype=bool argtypes="cstring" src=boolin );

We could go in that direction too, but the apparent flexibility to split
entries into multiple lines is an illusion, at least if you try to go
beyond a few lines; you'd end up with duplicated line sequences in
different entries and thus ambiguity for patch(1).  I don't have any
big objection to the above, but it's not obviously better either.



Yeah, I looked and there are too many cases where the name would be 
outside the normal 3 lines of context.





Some things we should try to resolve before settling definitively on
a data representation:

1. Are we going to try to keep these things in the .h files, or split
them out?  I'd like to get them out, as that eliminates both the need
to keep the things looking like macro calls, and the need for the data
within the macro call to be at least minimally parsable as C.



That would work fine for pg_proc.h, less so for pg_type.h where we have 
a whole lot of


   #define FOOOID nn

directives in among the data lines. Moving these somewhere remote from 
the catalog lines they relate to seems like quite a bad idea.





2. Andrew's example above implies some sort of mapping between the
keywords and the actual column names (or at least column positions).
Where and how is that specified?



There are several possibilities. The one I was leaning towards was to 
parse out the Anum_pg_foo_* definitions.





3. Also where are we going to provide the per-column default values?
How does the converter script know which columns to convert to type oids,
proc oids, etc?  Is it going to do any data validation beyond that, and
if so on what basis?



a) something like DATA_DEFAULTS( foo=bar );
b) something like DATA_TYPECONV ( rettype argtypes allargtypes );


Hadn't thought about procoids, but something similar.



4. What will we do about the #define's that some of the .h files provide
for (some of) their object OIDs?  I assume that we want to move in the
direction of autogenerating those macros a la fmgroids.h, but this needs
a concrete spec as well.  If we don't want this change to result in a big
hit to the source code, we're probably going to need to be able to specify
the macro names to generate in the data files.



Yeah, as I noted above, it's a bit messy.




5. One of the requirements that was mentioned in previous discussions
was to make it easier to add new columns to catalogs.  This format
does that only to the extent that you don't have to touch entries that
can use the default value for such a column.  Is that good enough, and
if not, what might we be able to do to make it better?



I think it is good enough, at least for a first cut.




I'd actually like to roll up the DESCR lines in pg_proc.h into this too,
they strike me as a bit of a wart. But I'm flexible on that.

+1, if we can come up with a better syntax.  This together with the
OID-macro issue suggests that there will be items in each data entry that
correspond to something other than columns of the target catalog.  But
that seems fine.


If we can generalize this to other catalogs, then that will be good, but
my inclination is to handle the elephant in the room (pg_proc.h) and
worry about the gnats later.

I think we want to do them all.  pg_proc.h is actually one of the easier
catalogs to work on presently, IMO, because the only kind of
cross-references it has are type OIDs.  Things like pg_amop are a mess.
And I really don't want to be dealing with multiple notations for catalog
data.  Also I think this will be subject to Polya's paradox: designing a
general solution will be easier and cleaner than a hack that works only
for one catalog.



I don't know that we need to handle everything at once, as long as the 
solution is sufficiently general.




cheers

andrew




[HACKERS] postgres_fdw and defaults

2016-11-15 Thread Andrew Dunstan


I know we've discussed this before, but I have just had the unpleasant 
experience of trying to get around the difficulty of inserting into a 
foreign table with a serial field, surely a common enough scenario that 
we should try to deal with it better. The solution of using a local 
sequence really doesn't work, as there might be multiple users of the 
table, as there will be in my scenario. I opted instead to get a value 
from the foreign sequence explicitly before inserting, but that's pretty 
ugly. So I am wondering (without having looked at all closely at it) if 
we could set an option to tell the FDW that we want the foreign default 
to be used instead of a local one. Is the difficulty that we don't know 
if a value has been explicitly supplied or not? Maybe we could have some 
magic value that we could use instead ('foreign_default'?). I'm just 
throwing out ideas here, but this is really a wart that could well do 
with attention.



cheers


andrew





Re: [HACKERS] postgres_fdw and defaults

2016-11-15 Thread Andrew Dunstan



On 11/15/2016 10:49 AM, Tom Lane wrote:

Andrew Dunstan  writes:

I know we've discussed this before, but I have just had the unpleasant
experience of trying to get around the difficulty of inserting into a
foreign table with a serial field, surely a common enough scenario that
we should try to deal with it better. The solution of using a local
sequence really doesn't work, as there might be multiple users of the
table, as there will be in my scenario. I opted instead to get a value
from the foreign sequence explicitly before inserting, but that's pretty
ugly. So I am wondering (without having looked at all closely at it) if
we could set an option to tell the FDW that we want the foreign default
to be used instead of a local one. Is the difficulty that we don't know
if a value has been explicitly supplied or not? Maybe we could have some
magic value that we could use instead ('foreign_default'?). I'm just
throwing out ideas here, but this is really a wart that could well do
with attention.

I'm not awake enough to recall the previous discussions of remote
default-value insertion in any detail, but they were extensive, and
no one has proposed solutions to the problems we hit.  Please consult
the archives.



I will look back further, but I see that in 2013 Stephen said this:


At first blush, with 'simple' writable views, perhaps that can just be a
view definition on the remote side which doesn't include that column and
therefore that column won't be sent to the remote side explicitly but,
but the view, running on the remote, would turn around and pick up the
default value for any fields which aren't in the view definition when
inserting into the table underneath.



and you replied


Yeah, I think the possibility of such a workaround was one of the
reasons we decided it was okay to support only locally-computed
defaults for now.



The trouble in my case is I actually need to know the serial column 
value, so using a view that hides it doesn't work.




cheers

andrew




Re: [HACKERS] Re: [COMMITTERS] pgsql: Build HTML documentation using XSLT stylesheets by default

2016-11-16 Thread Andrew Dunstan



On 11/16/2016 09:46 AM, Tom Lane wrote:

Erik Rijkers  writes:

This xslt build  takes  8+ minutes, compared to barely 1 minute for
'oldhtml'.

I'm just discovering the same.


I'd say that is a strong disadvantage.

I'd say that is flat out unacceptable.  I won't ever use this toolchain
if it's that much slower than the old way.  What was the improvement
we were hoping for, again?






On the buildfarm crake has gone from about 2 minutes to about 3.5 
minutes to run "make doc". That's not good but it's not an eight-fold 
increase either.


cheers

andrew





[HACKERS] Mail thread references in commits

2016-11-17 Thread Andrew Dunstan
I love seeing references to email threads in commit messages. It would 
make them a lot friendlier though if a full http URL were included 
instead of just a Message-ID, i.e. instead of <foo1...@bar.com> put 
<https://www.postgresql.org/message-id/foo1...@bar.com>. I know this is 
a bit more trouble, but not that much, and it would make it very easy to 
follow with just a mouse click.


cheers

andrew





Re: [HACKERS] Mail thread references in commits

2016-11-17 Thread Andrew Dunstan



On 11/17/2016 04:06 PM, Andres Freund wrote:


On November 17, 2016 1:02:38 PM PST, Andrew Dunstan  wrote:

I love seeing references to email threads in commit messages. It would
make them a lot friendlier though if a full http URL were included
instead of just a Message-ID, i.e. instead of <foo1...@bar.com> put
<https://www.postgresql.org/message-id/foo1...@bar.com>. I know this is
a bit more trouble, but not that much, and it would make it very easy to
follow with just a mouse click.

They're really hard to read though, because lines with e.g. gmail message IDs 
get very long even without that prefix.  Do you look at these in the e-mail or 
gitweb?




Mainly in Thunderbird. But it's cumbersome either way.

I agree gmail message IDs are ugly and insanely long. Personally I would 
rather pay the price of one line of ugliness for nice clickability.


To Tom's point that the URL might be ephemeral while the Message-ID is 
not, the URL would contain the Message-ID, so even if the base changed 
you could adjust to it.


cheers

andrew




  1   2   3   4   5   6   7   8   9   10   >