Re: [HACKERS] increasing the default WAL segment size

2017-04-07 Thread Beena Emerson
I ran the tests and the following are the details:

Machine details:
Architecture:          ppc64le
Byte Order:            Little Endian
CPU(s):                192
On-line CPU(s) list:   0-191
Thread(s) per core:    8
Core(s) per socket:    1
Socket(s):             24
NUMA node(s):          4
Model:                 IBM,8286-42A

size \ clients       16           32           64           128
16MB          18895.63486  28799.48759  37855.39521  27968.88309
32MB          18313.1461   29201.44954  40733.80051  32458.74147
64MB          18055.73141  30875.28687  42713.54447  38009.60542
128MB         18234.31424  33208.65419  48604.5593   45498.27689
256MB         19524.36498  35740.19032  54686.16898  54060.11168
512MB         20351.90719  37426.72174  55045.60719  56194.99349
1024MB        19667.67062  35696.19194  53666.60373  54353.0614

I did not see any degradation; in fact, larger segment sizes showed a
performance improvement at higher client counts.
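To quantify that observation, here is a small computation over the 128-client column of the table above, normalized to the 16MB default (numbers copied from the posted results; this is just a reader's aid, not part of the original test run):

```python
# TPS at 128 clients for each WAL segment size, taken from the table above,
# normalized to the 16MB baseline to show the relative improvement.
tps_128_clients = {
    "16MB": 27968.88309, "32MB": 32458.74147, "64MB": 38009.60542,
    "128MB": 45498.27689, "256MB": 54060.11168, "512MB": 56194.99349,
    "1024MB": 54353.0614,
}

baseline = tps_128_clients["16MB"]
speedup = {size: round(tps / baseline, 2)
           for size, tps in tps_128_clients.items()}
# Peak is at 512MB, roughly 2x the 16MB default at this client count.
```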

-- 

Beena Emerson

EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


[HACKERS] Comment typo in publicationcmd.c

2017-04-07 Thread Masahiko Sawada
Hi all,

The attached patch fixes a typo in the publicationcmds.c file.

s/om/on/

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center


fix_typo_in_publicationcmds_c.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] SCRAM authentication, take three

2017-04-07 Thread Heikki Linnakangas

On 04/07/2017 08:21 AM, Noah Misch wrote:

On Thu, Apr 06, 2017 at 09:46:29PM +0300, Heikki Linnakangas wrote:

On 04/06/2017 08:36 AM, Noah Misch wrote:

On Tue, Mar 07, 2017 at 02:36:13PM +0200, Heikki Linnakangas wrote:

I didn't include the last-minute changes to the way you specify this in
pg_hba.conf. So it's still just "scram". I agree in general that we should
think about how to extend that too, but I think the proposed syntax was
overly verbose for what we actually support right now. Let's discuss that as
a separate thread, as well.


[Action required within three days.  This is a generic notification.]

The above-described topic is currently a PostgreSQL 10 open item.


I don't think we will come up with anything better than what we have now, so
I have removed this from the open items list.


Michael shared[1] better pg_hba.conf syntax on 2016-11-05.  I agreed[2] with
his framing of the problem and provided two syntax alternatives, on
2017-01-18.  Michael implemented[3] a variation of one of those on 2017-02-20,
which you declined in your 2017-03-07 commit with just the explanation quoted
above.  I say Michael came up with something better five months ago.


OK. My feeling is that we should have a relatively short and 
easy-to-pronounce name for it. People editing pg_hba.conf with a text 
editor will need to type in the keyword, and "scram" is a lot easier to 
remember than "scram-sha-256". The word will also be used in 
conversations, "hey, Noah, can you add example.com to the hba file, with 
scram, please?" If md5 had a more difficult name, I think we would've 
come up with a shorthand for it back in the day, too.


I might be wrong, of course. I don't set up PostgreSQL installations for 
a living, so I might be out of touch with what's important.



Reserving, as HEAD does today, keyword "scram" to mean "type of SCRAM we
introduced first" will look ugly in 2027.  Cryptographic hash functions have a
short shelf life compared to PostgreSQL.


I don't think that's such a big deal. Firstly, I don't think it would be 
too bad for "scram" to mean "the type of SCRAM we introduced first". 
Secondly, we can add an alias later, if we add support for a new 
mechanism in the SCRAM family.


Our MD5 authentication method was introduced in 2001, I expect 
SCRAM-SHA-256 to also last about 15 years before we consider replacing 
it. Note that the big problem with our MD5 authentication is not 
actually the hash algorithm. There are still no practical pre-image 
attacks on MD5, even though it's thoroughly broken for collisions. If we 
had "SCRAM-MD5", it would still be OK. So I'd hazard a guess that 
whatever will eventually replace SCRAM-SHA-256, will not be SCRAM with a 
different hash algorithm, but something else entirely.


The channel binding aspect is actually more important to think about 
right now, as that is something we will hopefully implement in the next 
release or two.


In [1], Michael wrote:

There is also the channel binding to think about... So we could have a
list of keywords perhaps associated with SASL? Imagine for example:
sasl    $algo,$channel_binding
Giving potentially:
sasl    scram_sha256
sasl    scram_sha256,channel
sasl    scram_sha512
sasl    scram_sha512,channel
In the case of the patch of this thread just the first entry would
make sense, once channel binding support is added a second
keyword/option could be added. And there are of course other methods
that could replace SCRAM..


It should also be possible to somehow specify "use channel binding, if 
the client supports it".


I don't think "sasl" is interesting to a user, it's the actual 
mechanisms (e.g "scram-sha256") that matter. So I'd suggest that we 
allow a list of algorithms in the method field. If we go with the longer 
"scram-sha-256" name, it would look like this:


# TYPE  DATABASE  USER  ADDRESS      METHOD
host    all       all   example.com  scram-sha-256-plus, scram-sha-256


The problem again is that those names are quite long. Is that OK?
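A mechanism list in the METHOD field implies a negotiation step. As a rough sketch of how that could behave (the function name and data layout here are made up for illustration, not taken from the PostgreSQL sources): the server offers the configured mechanisms in order of preference, and the first one the client also supports wins.

```python
# Sketch of first-match mechanism negotiation for a METHOD field like
# "scram-sha-256-plus, scram-sha-256". Purely illustrative.

def select_mechanism(hba_methods, client_mechs):
    """Return the first server-preferred mechanism the client supports."""
    for mech in hba_methods:
        if mech in client_mechs:
            return mech
    return None  # no common mechanism: authentication fails

# Mechanism list from the example hba line above:
hba_methods = ["scram-sha-256-plus", "scram-sha-256"]
```

Under this scheme a channel-binding-capable client would end up on scram-sha-256-plus, while an older client falls back to plain scram-sha-256.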

In [2], you wrote:

The latest versions document this precisely, but I agree with Peter's concern
about plain "scram".  Suppose it's 2025 and PostgreSQL supports SASL mechanisms
OAUTHBEARER, SCRAM-SHA-256, SCRAM-SHA-256-PLUS, and SCRAM-SHA3-512.  What
should the pg_hba.conf options look like at that time?  I don't think having a
single "scram" option fits in such a world.  I see two strategies that fit:

1. Single "sasl" option, with a GUC, similar to ssl_ciphers, controlling the
   mechanisms to offer.
2. Separate options "scram_sha_256", "scram_sha3_512", "oauthbearer", etc.


The example I gave above is like option 2. The problem with option 1 is 
that different SASL mechanisms can have very different properties. You 
might want to allow "NTLM" from a trusted network, but require "OTP" 
from the public internet, for example. So I don't think a single GUC 
would be flexible enough.


That said, a GUC with a more narrow scope might be OK. If we cal

Re: [HACKERS] Comment typo in publicationcmd.c

2017-04-07 Thread Magnus Hagander
On Fri, Apr 7, 2017 at 9:00 AM, Masahiko Sawada 
wrote:

> Hi all,
>
> Attached fixes a typo in publicationcmd.c file.
>
> s/om/on/
>

Applied, thanks.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] SCRAM authentication, take three

2017-04-07 Thread Magnus Hagander
Jumping late into this one, apologies if these opinions have already been
up and discarded.

On Fri, Apr 7, 2017 at 9:28 AM, Heikki Linnakangas  wrote:

> On 04/07/2017 08:21 AM, Noah Misch wrote:
>
>> On Thu, Apr 06, 2017 at 09:46:29PM +0300, Heikki Linnakangas wrote:
>>
>>> On 04/06/2017 08:36 AM, Noah Misch wrote:
>>>
 On Tue, Mar 07, 2017 at 02:36:13PM +0200, Heikki Linnakangas wrote:

> I didn't include the last-minute changes to the way you specify this in
> pg_hba.conf. So it's still just "scram". I agree in general that we should
> think about how to extend that too, but I think the proposed syntax was
> overly verbose for what we actually support right now. Let's discuss that
> as a separate thread, as well.
>

 [Action required within three days.  This is a generic notification.]

 The above-described topic is currently a PostgreSQL 10 open item.

>>>
>>> I don't think we will come up with anything better than what we have
>>> now, so
>>> I have removed this from the open items list.
>>>
>>
>> Michael shared[1] better pg_hba.conf syntax on 2016-11-05.  I agreed[2]
>> with
>> his framing of the problem and provided two syntax alternatives, on
>> 2017-01-18.  Michael implemented[3] a variation of one of those on
>> 2017-02-20,
>> which you declined in your 2017-03-07 commit with just the explanation
>> quoted
>> above.  I say Michael came up with something better five months ago.
>>
>
> OK. My feeling is that we should have a relatively short and
> easy-to-pronounce name for it. People editing pg_hba.conf with a text
> editor will need to type in the keyword, and "scram" is a lot easier to
> remember than "scram-sha-256". The word will also be used in conversations,
> "hey, Noah, can you add example.com to the hba file, with scram, please?"
> If md5 had a more difficult name, I think we would've come up with a
> shorthand for it back in the day, too.
>
> I might be wrong, of course. I don't set up PostgreSQL installations for a
> living, so I might be out of touch with what's important.
>
> Reserving, as HEAD does today, keyword "scram" to mean "type of SCRAM we
>> introduced first" will look ugly in 2027.  Cryptographic hash functions
>> have a
>> short shelf life compared to PostgreSQL.
>>
>
> I don't think that's such a big deal. Firstly, I don't think it would be
> too bad for "scram" to mean "the type of SCRAM we introduced first".
> Secondly, we can add an alias later, if we add support for a new mechanism
> in the SCRAM family.
>
> Our MD5 authentication method was introduced in 2001, I expect
> SCRAM-SHA-256 to also last about 15 years before we consider replacing it.
> Note that the big problem with our MD5 authentication is not actually the
> hash algorithm. There are still no practical pre-image attacks on MD5, even
> though it's thoroughly broken for collisions. If we had "SCRAM-MD5", it
> would still be OK. So I'd hazard a guess that whatever will eventually
> replace SCRAM-SHA-256, will not be SCRAM with a different hash algorithm,
> but something else entirely.
>

So here's a wild idea. What if we just call it "sha256"? Does the user
actually care about it being scram, or is scram just an implementation
detail for them? That way when the next one shows up, it'll be sha512 or
whatever. It happens to use scram under the hood, but does the user have to
or does the user want to care about that?

(One could argue the same way that the user shouldn't have to or want to
care about the hashing algorithm -- but if that's the case then we should
only have one entry, it would be "scram", and the system would decide from
there. And I think this discussion already indicates we don't think this is
enough)




>
> The channel binding aspect is actually more important to think about right
> now, as that we will hopefully implement in the next release or two.
>
> In [1], Michael wrote:
>
>> There is also the channel binding to think about... So we could have a
>> list of keywords perhaps associated with SASL? Imagine for example:
>> sasl    $algo,$channel_binding
>> Giving potentially:
>> sasl    scram_sha256
>> sasl    scram_sha256,channel
>> sasl    scram_sha512
>> sasl    scram_sha512,channel
>> In the case of the patch of this thread just the first entry would
>> make sense, once channel binding support is added a second
>> keyword/option could be added. And there are of course other methods
>> that could replace SCRAM..
>>
>
> It should also be possible to somehow specify "use channel binding, if the
> client supports it".
>

Is that really a type of authentication? We already have the idea of
authentication method options, used for most other things except md5 which
doesn't have any. So it could be "sha256 channelbind=on", "sha256
channelbind=off" or "sha256 channelbind=negotiate" or something like that?



> I don't think "sasl" is interesting to a user, it's the actual mechanisms
> (e.g "scram-sha256") that matter. So I'd suggest t

Re: [HACKERS] Compiler warning in costsize.c

2017-04-07 Thread Michael Paquier
On Tue, Apr 4, 2017 at 9:42 PM, Michael Paquier
 wrote:
> On Wed, Apr 5, 2017 at 2:54 AM, Tom Lane  wrote:
>> (I'm personally not that much in love with PG_USED_FOR_ASSERTS_ONLY,
>> because it tends to confuse pgindent.)
>
> I would be inclined to just do that, any other solution I can think of
> is uglier than that.

Actually, no. Looking at this issue again, the warning is triggered
because the Assert() clause is only present under USE_ASSERT_CHECKING. So
instead of removing PG_USED_FOR_ASSERTS_ONLY, here is a simpler
patch that fixes the problem. No need to put the variable *rte within
ifdefs as well.
-- 
Michael


costsize-warning-fix-2.patch
Description: Binary data



Re: [HACKERS] Undefined psql variables

2017-04-07 Thread Fabien COELHO


Hello Corey,


\if defined varname
\if sql boolean expression to send to server
\if compare value operator value


I'm still thinking:-)

Independently of my aesthetic complaint against having a pretty
unusual keyword-prefix syntax, how would you envision a \set assignment
variant? Would \if have a different expression syntax somehow?


Any further thoughts?


My current opinion:

 - I'm fine if \set stays as it is, i.e. no expression.

 - I agree that some client-side expressions are needed, along the
   semantics suggested by Tom, i.e. definition and comparisons.

 - I'm really against the prefix syntax suggested by Tom


I wish I could get an explanation of why the :?varname (or some other 
variant) syntax I suggested has a "namespace" issue.


The advantage that I see is that although it is obviously ugly, it is ugly 
in the continuity of the various :["'?]varname syntaxes already offered, 
and it gets rid of "defined varname", which does not look like SQL. A 
second advantage is that with the "defined" proposal


   \if defined var1 and defined var2 or defined var3 and sqlrt() >= ..

would probably never work, as "defined" cannot be embedded in another 
expression, while it would work with


   \if :?var1 and :?var2 or :?var3 and ...


Moreover, I would like the condition syntax to be basically SQL & psql 
variables, without explicit prefixes, with a transparent decision whether 
it is evaluated client side or server side.


As client-side expressions are pretty simple, ISTM that some regex could 
be used for this purpose, eg for integer and boolean comparisons:


 ^\s*\d+\s*(=|<>|!=|<|<=|>|>=)\s*\d+\s*$
 ^\s*(bool...)\s*(=|<>|!=)\s*(bool...)\s*$
 ^\s*(NOT\s*)?(bool...)\s*$

So that one could just write the expressions without having to say where 
they are evaluated, eg


 \if :VERSION_NUM < 11

Would lead to

 \if 10 < 11

Caught by the first regex, and evaluated with a few lines of code.
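As a toy illustration of those "few lines of code" (this is not the psql implementation; the pattern and evaluator are a sketch of the scheme described above), the integer-comparison case could look like:

```python
import re

# After substituting :VERSION_NUM, "\if 10 < 11" matches the integer
# comparison regex and can be evaluated client-side; anything that does
# not match would be sent to the server instead.
INT_CMP = re.compile(r'^\s*(\d+)\s*(=|<>|!=|<=|>=|<|>)\s*(\d+)\s*$')

OPS = {
    '=':  lambda a, b: a == b,
    '<>': lambda a, b: a != b,
    '!=': lambda a, b: a != b,
    '<':  lambda a, b: a < b,
    '<=': lambda a, b: a <= b,
    '>':  lambda a, b: a > b,
    '>=': lambda a, b: a >= b,
}

def try_client_side(expr):
    """Evaluate an integer comparison locally; None means 'ask the server'."""
    m = INT_CMP.match(expr)
    if m is None:
        return None
    left, op, right = m.groups()
    return OPS[op](int(left), int(right))
```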

--
Fabien.




Re: [HACKERS] pgbench - allow to store select results into variables

2017-04-07 Thread Fabien COELHO


Hello Tatsuo,


Ok, I will move the patch to the next cf.


Done.


If I understand correctly, the patch is being moved because of the unrelated 
issue that variables cannot be UTF-8 in pgbench, and it is a condition for 
considering this patch that existing pgbench variables (set with \set) can be 
UTF-8?


--
Fabien.




Re: [HACKERS] Compiler warning in costsize.c

2017-04-07 Thread Michael Paquier
On Fri, Apr 7, 2017 at 12:38 AM, Michael Paquier
 wrote:
> On Tue, Apr 4, 2017 at 9:42 PM, Michael Paquier
>  wrote:
>> On Wed, Apr 5, 2017 at 2:54 AM, Tom Lane  wrote:
>>> (I'm personally not that much in love with PG_USED_FOR_ASSERTS_ONLY,
>>> because it tends to confuse pgindent.)
>>
>> I would be inclined to just do that, any other solution I can think of
>> is uglier than that.
>
> Actually, no. Looking at this issue again the warning is triggered
> because the Assert() clause is present in USE_ASSERT_CHECKING. So
> instead of removing PG_USED_FOR_ASSERTS_ONLY, here is a more simple
> patch that fixes the problem. No need to put the variable *rte within
> ifdefs as well.

Bah. This actually fixes nothing. Attached is a different patch that
really addresses the problem, by removing the variable because we
don't want planner_rt_fetch() to run for non-Assert builds.
-- 
Michael


costsize-warning-fix-3.patch
Description: Binary data



Re: [HACKERS] SCRAM authentication, take three

2017-04-07 Thread Heikki Linnakangas

On 04/07/2017 10:38 AM, Magnus Hagander wrote:

So here's a wild idea. What if we just call it "sha256"? Does the user
actually care about it being scram, or is scram just an implementation
detail for them? That way when the next one shows up, it'll be sha512 or
whatever. It happens to use scram under the hood, but does the user have to
or does the user want to care about that?

(One could argue the same way that the user shouldn't have to or want to
care about the hashing algorithm -- but if that's the case then we should
only have one entry, it would be "scram", and the system would decide from
there. And I think this discussion already indicates we don't think this is
enough)


I think the "SCRAM" part is more important than "SHA-256", so -1 on that.

The main argument against using just "scram" is that it's misleading, because we 
implement SCRAM-SHA-256, rather than SCRAM-SHA-1, which was the first 
SCRAM mechanism, commonly called just SCRAM. As long as that's the only 
SCRAM variant we have, that's not too bad, but it becomes more confusing 
if we ever implement SCRAM-SHA-512 or SCRAM-something-else in the 
future. That's the point Noah made, and it's a fair point, but the 
question is whether we consider that to be more important than having a 
short name for what we have now.



The channel binding aspect is actually more important to think about right
now, as that we will hopefully implement in the next release or two.

In [1], Michael wrote:


There is also the channel binding to think about... So we could have a
list of keywords perhaps associated with SASL? Imagine for example:
sasl    $algo,$channel_binding
Giving potentially:
sasl    scram_sha256
sasl    scram_sha256,channel
sasl    scram_sha512
sasl    scram_sha512,channel
In the case of the patch of this thread just the first entry would
make sense, once channel binding support is added a second
keyword/option could be added. And there are of course other methods
that could replace SCRAM..


It should also be possible to somehow specify "use channel binding, if the
client supports it".


Is that really a type of authentication? We already have the idea of
authentication method options, used for most other things except md5 which
doesn't have any. So it could be "sha256 channelbind=on", "sha256
channelbind=off" or "sha256 channelbind=negotiate" or something like that?


Technically, the channel-binding variant is a separate SASL mechanism, 
i.e. it has a separate name, SCRAM-SHA-256-PLUS. I'm not sure if 
users/admins think of it that way.



I don't think "sasl" is interesting to a user, it's the actual mechanisms
(e.g "scram-sha256") that matter. So I'd suggest that we allow a list of
algorithms in the method field. If we go with the longer "scram-sha-256"
name, it would look like this:

# TYPE  DATABASE  USER  ADDRESS      METHOD
host    all       all   example.com  scram-sha-256-plus, scram-sha-256

The problem again is that those names are quite long. Is that OK?


Not sure if it would be doable in the code, but we could also have:
host all all example.com scram method=sha256plus,sha256

or something like that. Which would fit within the current syntax of the
file. But I think it might not be enough, because then you couldn't have
two entries with different scram methods for the same combination of the
other fields -- the hba *matching* doesn't look at the options fields.


You can't have two entries with the same type+database+user+address 
combination, period. (Or if you do, the second one is ignored.)
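A toy model of that matching rule (field names simplified, not the real hba parser) makes the point concrete: only type+database+user+address participate in matching, so a second entry with the same four fields is dead code regardless of its options.

```python
# First-match pg_hba lookup: options (the METHOD column) are not part of
# the match key, so duplicate connection fields shadow later entries.

def find_hba_entry(entries, conn):
    """Return the first entry whose connection fields match; options ignored."""
    for entry in entries:
        if (entry["type"] == conn["type"]
                and entry["database"] in ("all", conn["database"])
                and entry["user"] in ("all", conn["user"])
                and entry["address"] == conn["address"]):
            return entry  # first match wins; later duplicates are unreachable
    return None

entries = [
    {"type": "host", "database": "all", "user": "all",
     "address": "example.com", "method": "scram method=sha256plus"},
    # Same four fields again: this entry can never be reached.
    {"type": "host", "database": "all", "user": "all",
     "address": "example.com", "method": "scram method=sha256"},
]
conn = {"type": "host", "database": "db1", "user": "alice",
        "address": "example.com"}
```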


- Heikki





Re: [HACKERS] pgbench - allow to store select results into variables

2017-04-07 Thread Tatsuo Ishii
> If I understand correctly, the patch is moved because of the unrelated
> issue that variables cannot be utf8 in pgbench, and it is a condition
> to consider this patch that existing pgbench variables (set with \set)
> can be utf8?

I'm not sure if it is "unrelated" because the new feature relies on
existing pgbench variable infrastructure.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp




Re: [HACKERS] Letting the client choose the protocol to use during a SASL exchange

2017-04-07 Thread Heikki Linnakangas

On 04/06/2017 11:16 PM, Simon Riggs wrote:

or it
can just ignore the list and send what it wants anyway, probably leading
to client disconnect.

It would need to follow one of the requested protocols, but mark the
request as doomed. Otherwise we'd be revealing information. That's
what SCRAM does now.


It's not a secret today, what authentication method the server requires. 
You can't really hide it, anyway, as the client could probe with 
different lists of supported methods, and see which method the server 
picks in each case.


- Heikki





Re: [HACKERS] Letting the client choose the protocol to use during a SASL exchange

2017-04-07 Thread Heikki Linnakangas

On 04/06/2017 11:05 PM, Tom Lane wrote:

Perhaps we could turn this around: have the client send (in the connection
request packet) a list of auth protocols it thinks it is able to handle.
(I'm envisioning this as being more or less fixed for any one version of
any one client, since it would basically mean "I have code to do X, Y, or
Z".)  Then the server can pick one that is allowed by pg_hba.conf, or it
can just ignore the list and send what it wants anyway, probably leading
to client disconnect.


That list of supported authentication methods would need to be included 
in the startup message. Unfortunately, there is no way to add options to 
the startup message, without breaking compatibility with old servers. If 
there is an option in the startup message that the server doesn't 
understand, it will treat it as a GUC, and you get an "unrecognized 
configuration parameter" after authentication.


It would be nice to change that, so that the server would ignore 
parameters it doesn't understand if they begin with an "optional_" 
prefix, for example. But it won't help us right now.


- Heikki





Re: [HACKERS] Letting the client choose the protocol to use during a SASL exchange

2017-04-07 Thread Craig Ringer
On 7 April 2017 at 16:33, Heikki Linnakangas  wrote:

> That list of supported authentication methods would need to be included in
> the startup message. Unfortunately, there is no way to add options to the
> startup message, without breaking compatibility with old servers. If there
> is an option in the startup message that the server doesn't understand, it
> will treat it as a GUC, and you get an "unrecognized configuration
> parameter" after authentication.

sasl.mechanisms = 'SCRAM_SHA256'

:p

No, I'm not seriously suggesting we abuse that.

Personally I think it's reasonable enough to let the server send a 'B'
message with supported auth modes. I'm not overly concerned about the
small information leak that provides. We're unlikely to be able to
convincingly fake execution of any and all SASL auth methods the
client may request, and since they may require an arbitrary number of
message exchanges, we'd basically end up blackholing clients that try
an unsupported auth method.

No thanks. It's one area I'd rather honestly say "nope, we don't
support that". In which case the client can easily enough probe for
all known methods, and we might as well just tell it up front.

This is hardly new. Most servers with negotiable auth, like SMTP, IMAP,
etc., have the server supply a list of auth mechanisms.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] SCRAM authentication, take three

2017-04-07 Thread Craig Ringer
On 7 April 2017 at 15:59, Heikki Linnakangas  wrote:
> On 04/07/2017 10:38 AM, Magnus Hagander wrote:

>> Not sure if it would be doable in the code, but we could also have:
>> host all all example.com scram method=sha256plus,sha256
>>
>> or something like that. Which would fit within the current syntax of the
>> file. But I think it might not be enough, because then you couldn't have
>> two entries with different scram methods for the same combination of the
>> other fields -- the hba *matching* doesn't look at the options fields.
>
> You can't have two entries with the same type+database+user+address
> combination, period. (Or if you do, the second one is ignored.)

So we need a methods= list for users who want to constrain the allowed
methods, accepting a list of methods. This is just how things like SSH
work; e.g. ssh_config might contain

Ciphers aes128-cbc,3des-cbc

if you feel like using the old dodgy stuff today.

If the user doesn't supply a methods= list, they get a full list of
supported methods by the server to choose from in the 'B' message, and
can auth with any one of them.

I'm aware there are some compat concerns there, but existing clients
will already have no idea what the scram method is, so now's our
chance to lock it in as containing a *list* of permitted methods, even
though to start with it's hard-coded.

-- 
 Craig Ringer   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] SCRAM authentication, take three

2017-04-07 Thread Magnus Hagander
On Fri, Apr 7, 2017 at 9:59 AM, Heikki Linnakangas  wrote:

> On 04/07/2017 10:38 AM, Magnus Hagander wrote:
>
>> So here's a wild idea. What if we just call it "sha256"? Does the user
>> actually care about it being scram, or is scram just an implementation
>> detail for them? That way when the next one shows up, it'll be sha512 or
>> whatever. It happens to use scram under the hood, but does the user have
>> to
>> or does the user want to care about that?
>>
>> (One could argue the same way that the user shouldn't have to or want to
>> care about the hashing algorithm -- but if that's the case then we should
>> only have one entry, it would be "scram", and the system would decide from
>> there. And I think this discussion already indicates we don't think this
>> is
>> enough)
>>
>
> I think the "SCRAM" part is more important than "SHA-256", so -1 on that.
>

If that is the important part, then I agree :) I am not entirely sure that
the scram part *is* more important though.

I think most users will be a lot more comfortable with "sha256" than
"scram" though. But I guess that says using scram-sha-256 is the correct
way.



> The main argument against using just "scram" is that it's misleading, because we
> implement SCRAM-SHA-256, rather than SCRAM-SHA-1, which was the first SCRAM
> mechanism, commonly called just SCRAM. As long as that's the only SCRAM
> variant we have, that's not too bad, but it becomes more confusing if we
> ever implement SCRAM-SHA-512 or SCRAM-something-else in the future. That's
> the point Noah made, and it's a fair point, but the question is whether we
> consider that to be more important than having a short name for what we
> have now.


Yeah, I agree we should be prepared for the future. And having "scram" and
"scram-sha-512" would definitely fall under confusing.


The channel binding aspect is actually more important to think about right
>> now, as that we will hopefully implement in the next release or two.
>>
>> In [1], Michael wrote:
>>
>> There is also the channel binding to think about... So we could have a
>>> list of keywords perhaps associated with SASL? Imagine for example:
>>> sasl    $algo,$channel_binding
>>> Giving potentially:
>>> sasl    scram_sha256
>>> sasl    scram_sha256,channel
>>> sasl    scram_sha512
>>> sasl    scram_sha512,channel
>>> In the case of the patch of this thread just the first entry would
>>> make sense, once channel binding support is added a second
>>> keyword/option could be added. And there are of course other methods
>>> that could replace SCRAM..
>>>
>>
>> It should also be possible to somehow specify "use channel binding, if the
>> client supports it".
>>
>
> Is that really a type of authentication? We already have the idea of
> authentication method options, used for most other things except md5 which
> doesn't have any. So it could be "sha256 channelbind=on", "sha256
> channelbind=off" or "sha256 channelbind=negotiate" or something like that?
>

> Technically, the channel-binding variant is a separate SASL mechanism,
> i.e. it has a separate name, SCRAM-SHA-256-PLUS. I'm not sure if
> users/admins think of it that way.


I bet they don't.



I don't think "sasl" is interesting to a user, it's the actual mechanisms
>> (e.g "scram-sha256") that matter. So I'd suggest that we allow a list of
>> algorithms in the method field. If we go with the longer "scram-sha-256"
>> name, it would look like this:
>>
>> # TYPE  DATABASE  USER  ADDRESS      METHOD
>> host    all       all   example.com  scram-sha-256-plus, scram-sha-256
>>
>> The problem again is that those names are quite long. Is that OK?
>>
>
> Not sure if it would be doable in the code, but we could also have:
> host all all example.com scram method=sha256plus,sha256
>
> or something like that. Which would fit within the current syntax of the
> file. But I think it might not be enough, because then you couldn't have
> two entries with different scram methods for the same combination of the
> other fields -- the hba *matching* doesn't look at the options fields.
>


> You can't have two entries with the same type+database+user+address
> combination, period. (Or if you do, the second one is ignored.)

That's exactly my point.

//Magnus


Re: [HACKERS] Proposal : For Auto-Prewarm.

2017-04-07 Thread Mithun Cy
On Thu, Apr 6, 2017 at 4:12 AM, Andres Freund  wrote:
> On 2017-03-13 18:45:00 +0530, Mithun Cy wrote:
>> I have implemented a similar logic now. The prewarm bgworker will
>> launch a sub-worker per database in the dump file. And, each
>> sub-worker will load its database block info. The sub-workers will be
>> launched only after previous one is finished. All of this will only
>> start if the database has reached a consistent state.
>
> Hm. For replay performance it'd possibly be good to start earlier,
> before reaching consistency.  Is there an issue starting earlier?

Thanks Andres for a detailed review. I will try to address them in my next
post. I thought it is important to reply to above comment before that.
Earlier patches used to start loading blocks before reaching a consistent
state. Then Robert while reviewing found a basic flaw in my approach[1].
The function DropRelFileNodesAllBuffers does not expect others to load
blocks concurrently while it is getting rid of buffered blocks. So loading has
to be delayed until the database reaches a consistent state, so that we can
connect to each database and take a relation lock before loading any of
its blocks.
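
The per-database ordering constraint can be sketched like this (an
illustrative Python model only, not the actual C implementation; the tuple
layout of the dump entries is an assumption):

```python
from itertools import groupby

# Entries from the dump file are grouped per database; each database's batch
# is handled by one sub-worker, launched only after the previous one finished.

def plan_prewarm(dump_entries):
    """dump_entries: (db_oid, rel_oid, fork, block) tuples, sorted by db_oid."""
    return [(db, [e[1:] for e in grp])
            for db, grp in groupby(dump_entries, key=lambda e: e[0])]

dump = [(16384, 1259, 0, 0), (16384, 1259, 0, 1), (16385, 2619, 0, 0)]
for db, blocks in plan_prewarm(dump):
    print(db, len(blocks))   # databases processed strictly one at a time
```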


[1] cannot load blocks without holding relation lock



-- 
Thanks and Regards
Mithun C Y
EnterpriseDB: http://www.enterprisedb.com


Re: [HACKERS] Letting the client choose the protocol to use during a SASL exchange

2017-04-07 Thread Heikki Linnakangas

On 04/07/2017 11:57 AM, Craig Ringer wrote:

On 7 April 2017 at 16:33, Heikki Linnakangas  wrote:


That list of supported authentication methods would need to be included in
the startup message. Unfortunately, there is no way to add options to the
startup message, without breaking compatibility with old servers. If there
is an option in the startup message that the server doesn't understand, it
will treat it as a GUC, and you get an "unrecognized configuration
parameter" after authentication.


sasl.mechanisms = 'SCRAM_SHA256'

:p

No, I'm not seriously suggesting we abuse that.


Hmm, that's not such a bad idea, actually. It only goes back to 9.2, 
though. Before that, the prefix needed to be listed in 
custom_variable_classes, or you got an error. 9.2 is the oldest 
supported version, but libpq should still be able to connect to older 
versions.


- Heikki



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] pgbench - allow to store select results into variables

2017-04-07 Thread Fabien COELHO



If I understand correctly, the patch is moved because of the unrelated
issue that variables cannot be utf8 in pgbench, and it is a condition
to consider this patch that existing pgbench variables (set with \set)
can be utf8?


I'm not sure if it is "unrelated" because the new feature relies on
existing pgbench variable infrastructure.


Sure. I meant that the constraint on variable names exists before the 
patch and the patch is not related to variable names, but the patch is 
about variables, obviously.


As "psql" variables can be utf8 and the same scanner is used, but the 
variable values are not strictly the same (they are typed in pgbench), 
I'm wondering whether the effort should be to share more code/abstraction 
between psql & pgbench or just to adjust/replicate the needed small 
functions/code excerpts.
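
The naming constraint under discussion can be illustrated like this (a hedged
sketch: the exact character classes accepted by pgbench's and psql's scanners
are assumptions here, not the authoritative grammar):

```python
import re

# Assume pgbench accepts only ASCII identifiers for \set variables, while
# psql's scanner is more permissive and also accepts non-ASCII letters.
PGBENCH_VAR = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def is_valid_pgbench_var(name):
    return bool(PGBENCH_VAR.match(name))

print(is_valid_pgbench_var("scale"))    # ASCII name: accepted
print(is_valid_pgbench_var("échelle"))  # utf8 name: rejected by this rule
```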


--
Fabien.




[HACKERS] ExecPrepareExprList and per-query context

2017-04-07 Thread Amit Langote
As of b8d7f053c5c, ExecPrepareExprList is (must be?) used instead of
ExecPrepareExpr when the caller wants to initialize expressions in a list,
for example, FormIndexDatum.  ExecPrepareExpr doesn't require the caller
to have switched to per-query context, because it itself will.  Same is
not however true for the new ExecPrepareExprList.  That means the List
node that it creates might be in a context that is not necessarily
per-query context, where it previously would be.  That breaks third-party
users of FormIndexDatum that rely on the list to have been created in
per-query context (pg_bulkload was broken by this).

Should ExecPrepareExprList also switch to estate->es_query_cxt?  Or maybe
ExecPrepareExpr could itself detect that passed-in node is a List and
create the list of ExprState nodes by itself.  I guess the reason to
separate list case is because ExecInitExpr() does not take Lists anymore.

Thanks,
Amit






Re: [HACKERS] Declarative partitioning - another take

2017-04-07 Thread Etsuro Fujita

On 2016/12/14 16:20, Etsuro Fujita wrote:

On 2016/12/09 19:46, Maksim Milyutin wrote:

I would like to work on two tasks:
 - insert (and eventually update) tuple routing for foreign partition.
 - the ability to create an index on the parent and have all of the
children inherit it;

The first one has been implemented in pg_pathman somehow, but the code
relies on dirty hacks, so the FDW API has to be improved. As for the
extended index support, it doesn't look like a super-hard task.



That would be great!  I'd like to help review the first one.


There seems to be no work on the first one, so I'd like to work on that.

Best regards,
Etsuro Fujita






[HACKERS] Performance issue with postgres9.6

2017-04-07 Thread Prakash Itnal
Hello,

We currently use postgres 9.3 in our products. Recently we upgraded to
postgres 9.6. But with 9.6 we have seen a drastic reduction in throughput.
After analyzing carefully I found that "planner time" in 9.6 is very high.
Below are the details:

Scenario:
1 Create a table with 10 rows.
2 Execute simple query: select * from subscriber where s_id = 100;
3 No update/delete/insert; tried vacuum, full vacuum; by default we enable
auto-vacuum

9.3: Avg of "Total runtime" : *0.24ms* [actual throughput: *650 TPS*]
9.6: Avg of Total time: *0.56ms* (Avg of "Planning time" : 0.38ms + Avg of
"Execution time" : 0.18ms) [actual throughput: *80 TPS*]

Check the attachments for more details.

Below is the configuration setting. Full configuration can be found in
attachment.
shared_buffers = 128MB
effective_cache_size = 256MB

Note that we use master-slave (one master - one slave) setup. I could see
no difference even when I take out slave.

I tried all possibilities of increasing shared memory, maitenance_work,
asynchronous commit etc. but, nothing showed any major improvements. Kindly
help to identify what is missing!

PS: We use postgres for small scale so the values are less. The size of the
DB is also just around 180MB.
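
A quick back-of-the-envelope check of the figures quoted above (the simple
per-query model ignores client-side and network overhead):

```python
# Values from the e-mail: 9.3 "Total runtime" vs 9.6 planning + execution.
t93_total = 0.24             # ms per query on 9.3
t96_plan, t96_exec = 0.38, 0.18
t96_total = t96_plan + t96_exec

slowdown = t96_total / t93_total        # per-query time ratio, 9.6 vs 9.3
plan_share = 100 * t96_plan / t96_total # planning's share of 9.6 total, in %
print(round(slowdown, 2), round(plan_share, 1))
```

Planning alone accounts for roughly two thirds of the 9.6 per-query time,
which is why reducing or amortizing planning (e.g. via prepared statements)
is the natural first thing to investigate.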

-- 
Cheers,
Prakash
psql (9.3.14)
Type "help" for help.

perftestdb=# select count(*) from subscriber ;
 count
-------
    10
(1 row)

perftestdb=# \d subscriber
        Table "public.subscriber"
    Column    |         Type          | Modifiers
--------------+-----------------------+-----------
 s_id | integer   | not null
 sub_nbr  | character varying(15) | not null
 bit_1| smallint  | 
 bit_2| smallint  | 
 bit_3| smallint  | 
 bit_4| smallint  | 
 bit_5| smallint  | 
 bit_6| smallint  | 
 bit_7| smallint  | 
 bit_8| smallint  | 
 bit_9| smallint  | 
 bit_10   | smallint  | 
 hex_1| smallint  | 
 hex_2| smallint  | 
 hex_3| smallint  | 
 hex_4| smallint  | 
 hex_5| smallint  | 
 hex_6| smallint  | 
 hex_7| smallint  | 
 hex_8| smallint  | 
 hex_9| smallint  | 
 hex_10   | smallint  | 
 byte2_1  | smallint  | 
 byte2_2  | smallint  | 
 byte2_3  | smallint  | 
 byte2_4  | smallint  | 
 byte2_5  | smallint  | 
 byte2_6  | smallint  | 
 byte2_7  | smallint  | 
 byte2_8  | smallint  | 
 byte2_9  | smallint  | 
 byte2_10 | smallint  | 
 msc_location | integer   | 
 vlr_location | integer   | 
Indexes:
"subscriber_pkey" PRIMARY KEY, btree (s_id)
"subscriber_by_sub_nbr" UNIQUE, btree (sub_nbr)
Referenced by:
    TABLE "access_info" CONSTRAINT "access_info_s_id_fkey" FOREIGN KEY (s_id) REFERENCES subscriber(s_id)
    TABLE "special_facility" CONSTRAINT "special_facility_s_id_fkey" FOREIGN KEY (s_id) REFERENCES subscriber(s_id)

perftestdb=#  explain analyze select * from subscriber where s_id = 100;
                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Index Scan using subscriber_pkey on subscriber  (cost=0.29..8.31 rows=1 width=88) (actual time=0.049..0.055 rows=1 loops=1)
   Index Cond: (s_id = 100)
 Total runtime: 0.231 ms
(3 rows)

perftestdb=#  explain analyze select * from subscriber where s_id = 100;
                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Index Scan using subscriber_pkey on subscriber  (cost=0.29..8.31 rows=1 width=88) (actual time=0.059..0.066 rows=1 loops=1)
   Index Cond: (s_id = 100)
 Total runtime: 0.246 ms
(3 rows)

perftestdb=#  explain analyze select * from subscriber where s_id = 100;
                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Index Scan using subscriber_pkey on subscriber  (cost=0.29..8.31 rows=1 width=88) (actual time=0.059..0.066 rows=1 loops=1)
   Index Cond: (s_id = 100)
 Total runtime: 0.249 ms
(3 rows)

perftestdb=#  explain analyze select * from subscriber where s_id = 100;
 QUERY PLAN 

[HACKERS] pg_export_snapshot doc

2017-04-07 Thread Tatsuo Ishii
pg_export_snapshot() cannot be used during recovery (i.e. on standby
servers), but it's not documented. IMO this is a bug and should be
fixed. Patch attached.

Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp
diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index cb0a36a..9923e67 100644
--- a/doc/src/sgml/func.sgml
+++ b/doc/src/sgml/func.sgml
@@ -18778,6 +18778,7 @@ postgres=# SELECT * FROM pg_walfile_name_offset(pg_stop_backup());
 Snapshots are exported with the pg_export_snapshot function,
 shown in , and
 imported with the  command.
+This function cannot be used during recovery.

 




Re: [HACKERS] Declarative partitioning - another take

2017-04-07 Thread Maksim Milyutin

On 07.04.2017 13:05, Etsuro Fujita wrote:

On 2016/12/14 16:20, Etsuro Fujita wrote:

On 2016/12/09 19:46, Maksim Milyutin wrote:

I would like to work on two tasks:
 - insert (and eventually update) tuple routing for foreign partition.
 - the ability to create an index on the parent and have all of the
children inherit it;

The first one has been implemented in pg_pathman somehow, but the code
relies on dirty hacks, so the FDW API has to be improved. As for the
extended index support, it doesn't look like a super-hard task.



That would be great!  I'd like to help review the first one.


There seems to be no work on the first one, so I'd like to work on that.


Yes, you can start to work on this, I'll join later as a reviewer.



--
Maksim Milyutin
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company




Re: [HACKERS] src/interfaces/libpq shipping nmake-related Makefiles

2017-04-07 Thread Magnus Hagander
On Fri, Apr 7, 2017 at 6:29 AM, Tom Lane  wrote:

> Andres Freund  writes:
> > On 2017-04-07 13:07:59 +0900, Michael Paquier wrote:
> >> On Fri, Apr 7, 2017 at 1:01 PM, Tom Lane  wrote:
> >>> Still, it's not very clear why we need to cater for building just libpq
> >>> rather than the whole distribution, and a user of win32.mak presumably
> >>> has the option to do the latter.
>
> >> Indeed. Those recent reports indicate that removing win32.c would be a
> >> bad move.
>
> > For me they indicate the contrary, that we're currently not properly
> > maintaining it so that longstanding errors crop up.
>


Is it broken in HEAD only or also in 9.6?

I think whoever uses win32.mak to build is quite unlikely to be tracking
HEAD. They would notice at release time. (Since we don't have a buildfarm
animal doing it.)



> Yeah.  For win32.mak, the key question is whether there is still anybody
> who'd have an insurmountable problem with building the whole distro via
> src/tools/msvc/ rather than just building libpq with win32.mak.  Given our
> lack of infrastructure for testing win32.mak, continuing support for it
> seems like quite an expensive proposition from the developer-time
> standpoint.  I don't really want to do that if it's only going to save
> somebody an occasional few minutes of build time.
>


Insurmountable, probably not. The big difference is that you don't need
*any* dependencies to build libpq using win32.mak, but you need many of
them (to start with, perl...) to build using the built-in one. For people
who want to build it, it certainly saves a lot more than "a few minutes".
For somebody who doesn't have ready scripts it takes a *long* time to set
up a build environment to do our regular msvc builds.

I think the question is more, is there any need for people to do that at
all, or are those people just going to be using the pre-built binaries
anyway? That question I can't say I know the answer to.


bcc32.mak is in a different category because it's basically the only
> solution if you want to build libpq in Borland C.  But the lack of
> user input suggests that maybe nobody cares about that anymore.
>

I think there's always been even fewer users of bcc32.mak than of the
win32.mak one.



> Borland C, per se, has been dead since the 90s according to wikipedia.
> There are successor products with different names, none of which I can
> recall anybody ever mentioning on the PG lists.  I speculate that
> people are taking libpq.dll built with MSVC and using it in those
> products, if they're using them with PG at all.
>

The compiler is still called bcc (bcc32c to be specific - see
https://www.embarcadero.com/free-tools/ccompiler).  Apparently it's clang
based now. I don't know if our mak file even works with that anymore
though, it wouldn't surprise me if it doesn't. But the non-change-of-name
could be why we're not seeing questions about it.

FWIW, I've suggested we drop it before, so no objections to that part from
me (and if we do, there's some #ifdefs around it in headers as well).

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] SCRAM authentication, take three

2017-04-07 Thread Álvaro Hernández Tortosa



On 07/04/17 11:05, Magnus Hagander wrote:

On Fri, Apr 7, 2017 at 9:59 AM, Heikki Linnakangas wrote:

On 04/07/2017 10:38 AM, Magnus Hagander wrote:

So here's a wild idea. What if we just call it "sha256"? Does the user
actually care about it being scram, or is scram just an implementation
detail for them? That way when the next one shows up, it'll be sha512 or
whatever. It happens to use scram under the hood, but does the user have
to or does the user want to care about that?

(One could argue the same way that the user shouldn't have to or want to
care about the hashing algorithm -- but if that's the case then we should
only have one entry, it would be "scram", and the system would decide from
there. And I think this discussion already indicates we don't think this
is enough.)

I think the "SCRAM" part is more important than "SHA-256", so -1
on that.


If that is the important part, then I agree :) I am not entirely sure 
that the scram part *is* more important though.


I agree it is much more important. Needed, I'd say. "SHA-256" could 
refer to other mechanisms that just simply hash the value (maybe with a 
salt, or not) with that hash algorithm. SCRAM is a different beast, with 
much more functionality than that. So yes, it matters a lot :)




I think most users will be a lot more comfortable with "sha256" than 
"scram" though. But I guess that says using scram-sha-256 is the 
correct way.


I don't like UPPERCASE, but the RFC links to the IANA registry 
where SCRAM methods are all uppercase and with dashes: SCRAM-SHA-256 and 
SCRAM-SHA-256-PLUS. I'd use those names, they are standardized.





The main argument against using just "scram" is that it's misleading,
because we implement SCRAM-SHA-256, rather than SCRAM-SHA-1, which
was the first SCRAM mechanism, commonly called just SCRAM. As long
as that's the only SCRAM variant we have, that's not too bad, but
it becomes more confusing if we ever implement SCRAM-SHA-512 or
SCRAM-something-else in the future. That's the point Noah made,
and it's a fair point, but the question is whether we consider
that to be more important than having a short name for what we
have now.


Yeah, I agree we should be prepared for the future. And having "scram" 
and "scram-sha-512" would definitely fall under confusing.


The channel binding aspect is actually more important to think about
right now, as that is something we will hopefully implement in the next
release or two.

In [1], Michael wrote:

There is also the channel binding to think about... So we could have a
list of keywords perhaps associated with SASL? Imagine for example:
sasl $algo,$channel_binding
Giving potentially:
sasl scram_sha256
sasl scram_sha256,channel
sasl scram_sha512
sasl scram_sha512,channel
In the case of the patch of this thread just the first entry would
make sense; once channel binding support is added a second
keyword/option could be added. And there are of course other methods
that could replace SCRAM...

It should also be possible to somehow specify "use channel binding, if
the client supports it".

Is that really a type of authentication? We already have the idea of
authentication method options, used for most other things except md5
which doesn't have any. So it could be "sha256 channelbind=on", "sha256
channelbind=off" or "sha256 channelbind=negotiate" or something
like that?


Technically, the channel-binding variant is a separate SASL
mechanism, i.e. it has a separate name, SCRAM-SHA-256-PLUS. I'm not
sure if users/admins think of it that way.



I bet they don't.


Probably. But let's not underestimate channel binding: it is the 
"greatest" feature of SCRAM, and where security really shines. I'd 
encourage the use of channel binding and would definitely make it explicit.


As for the options, there's no way to negotiate, the client picks. 
It could still be three-valued: on, off, only-channel-binding (or 
however you want to call them). That will only change what mechanisms 
the server will be advertising to clients.
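
The three-valued option just described can be modeled like this (a hedged
sketch: the option value names "off"/"on"/"require" and the mechanism list
are assumptions for illustration, not a settled design):

```python
# The setting only changes which SASL mechanisms the server advertises;
# the client then picks one -- there is no negotiation step.

def advertised_mechanisms(channel_binding):
    if channel_binding == "off":
        return ["SCRAM-SHA-256"]
    if channel_binding == "on":
        return ["SCRAM-SHA-256-PLUS", "SCRAM-SHA-256"]
    if channel_binding == "require":
        return ["SCRAM-SHA-256-PLUS"]
    raise ValueError(channel_binding)

for mode in ("off", "on", "require"):
    print(mode, advertised_mechanisms(mode))
```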




Álvaro



--

Álvaro Hernández Tortosa





Re: [HACKERS] Supporting huge pages on Windows

2017-04-07 Thread Magnus Hagander
On Wed, Apr 5, 2017 at 9:15 AM, Tsunakawa, Takayuki <
tsunakawa.ta...@jp.fujitsu.com> wrote:

> From: pgsql-hackers-ow...@postgresql.org
> > [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Andres Freund
> > As I asked before, why can't we delete all privs and add the explicitly
> > needed once back (using AdjustTokenPrivileges)?
>
> I tried it with pg_ctl.c attached to an earlier mail today, i.e. delete
> all privs with CreateRestrictedToken(DISABLE_ALL_PRIVILEGE) and enable
> Lock Pages in Memory with AdjustTokenPrivileges().  But it didn't work;
> AdjustTokenPrivileges() failed to enable the priv.  It's probably that
> CreateRestrictedToken() deletes (unassigns?) the privs from the access
> token, so subsequent AdjustTokenPrivileges() can no longer enable the priv.
>
>
Once you have used CreateRestrictedToken(), you can no longer add
*anything* to it. It's not just removed privileges, there's a special flag
on the token that says it's restricted (can be checked with
IsTokenRestricted()).

I think what you'd need to do is enumerate what privileges the user has
*before* calling CreateRestrictedToken(), using GetTokenInformation(). And
then pass those into PrivilegesToDelete (except for
SeChangeNotifyPrivilege) in the call to CreateRestrictedToken(), instead of
using DISABLE_MAX_PRIVILEGE. (and add the privilege needed for huge pages
before you start that whole process -- that needs to be added in the token
used *before* we create the restricted one).

At least that's my guess from reading the docs and trying to remember :)
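
The suggested approach can be modeled in plain Python (this is only a rough
model of the set computation, not the Win32 API; the privilege names in the
keep-list are assumptions for the sketch):

```python
# Enumerate the token's current privileges first, then delete all of them
# except SeChangeNotifyPrivilege -- after adding the huge-pages privilege
# (SeLockMemoryPrivilege) to the keep-list, since nothing can be added back
# once CreateRestrictedToken() has produced a restricted token.

def privileges_to_delete(current_privs,
                         keep=("SeChangeNotifyPrivilege",
                               "SeLockMemoryPrivilege")):
    return [p for p in current_privs if p not in keep]

current = ["SeShutdownPrivilege", "SeChangeNotifyPrivilege",
           "SeLockMemoryPrivilege", "SeUndockPrivilege"]
print(privileges_to_delete(current))
```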

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [HACKERS] Implementation of SASLprep for SCRAM-SHA-256

2017-04-07 Thread Heikki Linnakangas

On 04/07/2017 05:30 AM, Michael Paquier wrote:

On Fri, Apr 7, 2017 at 2:47 AM, Heikki Linnakangas  wrote:

On 04/06/2017 08:42 PM, Heikki Linnakangas wrote:

There is for example this portion in the new tables:
+static const Codepoint prohibited_output_chars[] =
+{
+   0xD800, 0xF8FF, /* C.3, C.5 */

   - Start Table C.5 -
   D800-DFFF; [SURROGATE CODES]
   - End Table C.5 -
This indicates a range of values. Wouldn't it be better to split this
table in two, one for the range of codepoints and another one with the
single entries?


I considered that, but there are relatively few singular codepoints in
the tables, so it wouldn't save much space. In this patch, singular
codepoints are represented by a range like "0x3000, 0x3000".


I am really wondering if this should not reflect the real range
reported by the RFC. I understand that you have grouped things to save
a couple of bytes, but that would protect from any updates of the
codepoints within those ranges (unlikely to happen I agree).


It just means that there will be some more work required to apply the 
changes to the current lists. I constructed the lists manually to begin 
with, copy-pasting the lists from the RFC, and moving and merging 
entries by hand. I wouldn't mind doing that by hand again, if the lists 
change. But as you said, it seems unlikely that they would change any 
time soon.
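
The merged-range representation can be sketched as follows (an illustrative
Python lookup only; the two entries below are examples, not the full
SASLprep tables):

```python
from bisect import bisect_right

# Singular codepoints are stored as degenerate ranges such as
# (0x3000, 0x3000); ranges are kept sorted and non-overlapping so a
# binary search over the start points finds the candidate range.
PROHIBITED = [(0x3000, 0x3000), (0xD800, 0xF8FF)]

def is_prohibited(cp):
    starts = [lo for lo, _ in PROHIBITED]
    i = bisect_right(starts, cp) - 1
    return i >= 0 and PROHIBITED[i][0] <= cp <= PROHIBITED[i][1]

print(is_prohibited(0x3000), is_prohibited(0xD800), is_prohibited(0x2FFF))
```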



You may want to add a .gitignore in src/common/unicode for norm_test
and norm_test_table.h.


Added, and pushed, with some more comment fixes.

Many thanks, Michael!

- Heikki





[HACKERS] [PATCH] Document the order of changing certain settings when using hot-standby servers

2017-04-07 Thread Yorick Peterse
The attached patch updates the hot-standby documentation (in the high
availability section) so it explicitly mentions that certain settings
need to be applied to servers in a particular order. For example, it
states that if you increase a certain setting (e.g. max_connections) you
need to do so on a primary first, before applying it to any secondaries.

Previously this was not explicitly mentioned and could lead to one
thinking they can just apply the setting to all servers in parallel,
resulting in standby servers refusing to start.

The exact phrasing currently used in the patch may be a bit rough, I'm
open to suggestions on how to best improve the writing.

The patch is based on the master branch and applies cleanly to it.

Yorick
diff --git a/doc/src/sgml/high-availability.sgml b/doc/src/sgml/high-availability.sgml
index 51359d6236..bbb0e1aab2 100644
*** a/doc/src/sgml/high-availability.sgml
--- b/doc/src/sgml/high-availability.sgml
***
*** 2098,2104  LOG:  database system is ready to accept read only connections
  be equal to or greater than the value on the primary. If these parameters
  are not set high enough then the standby will refuse to start.
  Higher values can then be supplied and the server
! restarted to begin recovery again.  These parameters are:
  

 
--- 2098,2108 
  be equal to or greater than the value on the primary. If these parameters
  are not set high enough then the standby will refuse to start.
  Higher values can then be supplied and the server
! restarted to begin recovery again. If you want to increase these values you
! should do so on the primary first, before applying the changes to any
! standby servers. If you instead want to decrease these values you should do
! so on the standbys first, before applying the changes to the primary. These
! parameters are:
  

 



Re: [HACKERS] Implementation of SASLprep for SCRAM-SHA-256

2017-04-07 Thread Michael Paquier
On Fri, Apr 7, 2017 at 8:58 PM, Heikki Linnakangas  wrote:
> On 04/07/2017 05:30 AM, Michael Paquier wrote:
>> I am really wondering if this should not reflect the real range
>> reported by the RFC. I understand that you have grouped things to save
>> a couple of bytes, but that would protect from any updates of the
>> codepoints within those ranges (unlikely to happen I agree).
>
> It just means that there will be some more work required to apply the
> changes to the current lists. I constructed the lists manually to begin
> with, copy-pasting the lists from the RFC, and moving and merging entries by
> hand. I wouldn't mind doing that by hand again, if the lists change. But as
> you said, it seems unlikely that they would change any time soon.

Yeah, I don't mind either. That's simple enough to change should that happen.

>> You may want to add a .gitignore in src/common/unicode for norm_test
>> and norm_test_table.h.
>
> Added, and pushed, with some more comment fixes.

Nice. There are still a couple of important items pending for SCRAM,
so I would think that it is better not to do the refactoring now (but
rework it in PG11), and instead polish the documentation a bit more.
Your thoughts on that?
-- 
Michael




Re: [HACKERS] [PATCH] Document the order of changing certain settings when using hot-standby servers

2017-04-07 Thread Aleksander Alekseev
Hi Yorick,

> The attached patch updates the hot-standby documentation (in the high
> availability section) so it explicitly mentions that certain settings
> need to be applied to servers in a particular order. For example, it
> states that if you increase a certain setting (e.g. max_connections)
> you need to do so on a primary first, before applying it to any
> secondaries.

I'm sorry to inform you that your description of max_connections is,
let's say, not quite accurate. I've just increased max_connections on a
standby without doing anything on the master and nothing wrong happened.

-- 
Best regards,
Aleksander Alekseev




Re: [HACKERS] [PATCH] Document the order of changing certain settings when using hot-standby servers

2017-04-07 Thread Stephen Frost
Aleksander, Yorick,

* Aleksander Alekseev (a.aleks...@postgrespro.ru) wrote:
> > The attached patch updates the hot-standby documentation (in the high
> > availability section) so it explicitly mentions that certain settings
> > need to be applied to servers in a particular order. For example, it
> > states that if you increase a certain setting (e.g. max_connections)
> > you need to do so on a primary first, before applying it to any
> > secondaries.
> 
> I'm sorry to inform you that your description of max_connection is,
> lets say, not quite accurate. I've just increased max_connections on a
> standby without doing anything on a master and nothing wrong happened.

Right, the logic there is reversed: a reduction has to be done on the
primary first and the WAL replayed on the replica, while an increase has
to be done on the standby first and then on the primary.
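
The ordering rule stated above can be modeled in a toy form (an illustrative
sketch only, using made-up values for a setting like max_connections):

```python
# A standby refuses to start (or to continue replay) if its value is below
# the primary's, so changes must roll out in the right direction.

def standby_ok(primary_value, standby_value):
    return standby_value >= primary_value

# Increasing 100 -> 200: bump the standby first, then the primary.
print(standby_ok(primary_value=100, standby_value=200))  # intermediate state OK
# Wrong order while increasing: primary first leaves the standby behind.
print(standby_ok(primary_value=200, standby_value=100))  # standby would refuse
```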

I do think that we should add the (correct!) information into the docs
explicitly, perhaps even as a 'Note', since it can be quite confusing
otherwise.

Thanks!

Stephen




Re: [HACKERS] [PATCH] Document the order of changing certain settings when using hot-standby servers

2017-04-07 Thread Yorick Peterse
Ha! It seems I indeed had it the other way around. I suppose that's what
happens when writing a patch late at night. Somewhat ironically I did
have the order correct in my Git commit message.

Attached is an updated version of the patch that corrects the order in
the documentation.

Yorick
diff --git a/doc/src/sgml/high-availability.sgml b/doc/src/sgml/high-availability.sgml
index 51359d6236..434afe5d43 100644
*** a/doc/src/sgml/high-availability.sgml
--- b/doc/src/sgml/high-availability.sgml
***
*** 2098,2104  LOG:  database system is ready to accept read only connections
  be equal to or greater than the value on the primary. If these parameters
  are not set high enough then the standby will refuse to start.
  Higher values can then be supplied and the server
! restarted to begin recovery again.  These parameters are:
  

 
--- 2098,2108 
  be equal to or greater than the value on the primary. If these parameters
  are not set high enough then the standby will refuse to start.
  Higher values can then be supplied and the server
! restarted to begin recovery again. If you want to increase these values you
! should do so on any standby servers first, before applying the changes to
! the primary. If you instead want to decrease these values you should do so
! on the primary first, before applying the changes to any standby servers.
! These parameters are:
  

 



Re: [HACKERS] src/interfaces/libpq shipping nmake-related Makefiles

2017-04-07 Thread Heikki Linnakangas

On 04/07/2017 02:00 PM, Magnus Hagander wrote:

On Fri, Apr 7, 2017 at 6:29 AM, Tom Lane  wrote:


Yeah.  For win32.mak, the key question is whether there is still anybody
who'd have an insurmountable problem with building the whole distro via
src/tools/msvc/ rather than just building libpq with win32.mak.  Given our
lack of infrastructure for testing win32.mak, continuing support for it
seems like quite an expensive proposition from the developer-time
standpoint.  I don't really want to do that if it's only going to save
somebody an occasional few minutes of build time.


Insurmountable, probably not. The big difference is that you don't need
*any* dependencies to build a libpq using win32.mak, but you need many of
them (to start with, perl...) to build using the built-in one. For people
who want to build it, it certainly save a lot more than "a few minutes".
For somebody who doesn't have ready scripts it takes a *long* time to set
up a build environment to do our regular msvc builds.

I think the question is more, is there any need for people to do that at
all, or are those people just going to be using the pre-built binaries
anyway? That question I can't say I know the answer to.


It does seem handy, if all you want is libpq. Clearly not many people 
use it, though.


I just tested it. After adding all the missing files to the makefile, 
I'm getting an error:



.\Release\libpq.dll.manifest : general error c1010070: Failed to load and parse
the manifest. The system cannot find the file specified.
NMAKE : fatal error U1077: '"C:\Program Files (x86)\Windows Kits\10\bin\x86\mt.
XE"' : return code '0x1f'
Stop.
NMAKE : fatal error U1077: '"C:\Program Files (x86)\Microsoft Visual Studio 14.
\VC\BIN\nmake.EXE"' : return code '0x2'
Stop.


This seems be the same as the 2nd error that was reported back in 2013: 
https://www.postgresql.org/message-id/CAJ2%3DPVQcW8UGNnSy%3DOw%3DvUK2zpjowTkzUS1B864REa7LOT140Q%40mail.gmail.com.


Despite the failure, it built a libpq.dll file, and it seems to work. I 
have no idea what the manifest file is.


I could easily add the missing files to win32.mak, but unless someone 
else steps up to the plate and fixes the manifest issue, I don't think 
we have much choice but to remove it.


- Heikki





Re: [HACKERS] postgres_fdw: support parameterized foreign joins

2017-04-07 Thread Arthur Zakirov

On 05.04.2017 12:20, Etsuro Fujita wrote:


Rebased.

Attached is an updated version created on top of the latest patch
"epqpath-for-foreignjoin" [1].

Other changes:
* Added a bit more regression tests with FOR UPDATE clause to see if
CreateLocalJoinPath works well for parameterized foreign join paths.
* Added/revised comments a bit.



Thank you!

> scan_clauses=NIL for a join relation.  So, for a join relation we use
> fpinfo->remote_conds and fpinfo->local_conds, instead.  (Note that those
> conditions are created at path creation time, ie,
> postgresGetForeignJoinPaths.  See foreign_join_ok.)

Oh, I see. I've missed that condition.

Marked the patch as "Ready for Committer". But the patch should be 
committed only after the patch [1].


1. 
https://www.postgresql.org/message-id/424933d7-d6bb-4b8f-4e44-1fea212af083%40lab.ntt.co.jp


--
Arthur Zakirov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company




Re: [HACKERS] src/interfaces/libpq shipping nmake-related Makefiles

2017-04-07 Thread Tom Lane
Heikki Linnakangas  writes:
> I just tested it. After adding all the missing files to the makefile, 
> I'm getting an error:

>> .\Release\libpq.dll.manifest : general error c1010070: Failed to load and 
>> parse
>> the manifest. The system cannot find the file specified.

> This seems be the same as the 2nd error that was reported back in 2013: 
> https://www.postgresql.org/message-id/CAJ2%3DPVQcW8UGNnSy%3DOw%3DvUK2zpjowTkzUS1B864REa7LOT140Q%40mail.gmail.com.

Well, if it's been broken since (at least) 2013, and we've had only one
complaint, then I'd say there are too few potential users to justify
the maintenance burden.

regards, tom lane




Re: [HACKERS] [PATCH] Document the order of changing certain settings when using hot-standby servers

2017-04-07 Thread Aleksander Alekseev
Hi Yorick,

> Attached is an updated version of the patch that corrects the order in
> the documentation.

Looks promising. I would recommend to add this patch to the next
commitfest [1]. Otherwise there is a chance that it will be lost.

[1] https://commitfest.postgresql.org/14/

-- 
Best regards,
Aleksander Alekseev




Re: [HACKERS] src/interfaces/libpq shipping nmake-related Makefiles

2017-04-07 Thread Andrew Dunstan


On 04/07/2017 09:58 AM, Tom Lane wrote:
>> This seems be the same as the 2nd error that was reported back in 2013: 
>> https://www.postgresql.org/message-id/CAJ2%3DPVQcW8UGNnSy%3DOw%3DvUK2zpjowTkzUS1B864REa7LOT140Q%40mail.gmail.com.
> Well, if it's been broken since (at least) 2013, and we've had only one
> complaint, then I'd say there are too few potential users to justify
> the maintenance burden.
>
>   


I agree.

cheers

andrew

-- 
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] Transactions involving multiple postgres foreign servers

2017-04-07 Thread Masahiko Sawada
On Wed, Mar 29, 2017 at 11:14 PM, Masahiko Sawada  wrote:
> On Wed, Mar 22, 2017 at 2:49 AM, Masahiko Sawada  
> wrote:
>> On Thu, Mar 16, 2017 at 2:37 PM, Vinayak Pokale
>>  wrote:
>>> The following review has been posted through the commitfest application:
>>> make installcheck-world:  tested, passed
>>> Implements feature:   tested, passed
>>> Spec compliant:   tested, passed
>>> Documentation:tested, passed
>>>
>>> I have tested the latest patch and it looks good to me,
>>> so I marked it "Ready for committer".
>>> Anyway, it would be great if anyone could also have a look at the patches 
>>> and send comments.
>>>
>>> The new status of this patch is: Ready for Committer
>>>
>>
>> Thank you for updating but I found a bug in 001 patch. Attached latest 
>> patches.
>> The differences are
>>   * Fixed a bug.
>>   * Ran pgindent.
>>   * Separated the patch supporting GetPrepareID API.
>>
>
> Since previous patches conflict with current HEAD, I attached latest
> set of patches.
>

Vinayak, why did you mark this patch as "Move to next CF"? AFAIU
there has been no discussion yet.

Regards,

--
Masahiko Sawada
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center




Re: [HACKERS] Parallel Append implementation

2017-04-07 Thread Andres Freund
Hi,

On 2017-04-07 11:44:39 +0530, Amit Khandekar wrote:
> On 6 April 2017 at 07:33, Andres Freund  wrote:
> > On 2017-04-05 14:52:38 +0530, Amit Khandekar wrote:
> >> This is what the earlier versions of my patch had done : just add up
> >> per-subplan parallel_workers (1 for non-partial subplan and
> >> subpath->parallel_workers for partial subplans) and set this total as
> >> the Append parallel_workers.
> >
> > I don't think that's great, consider e.g. the case that you have one
> > very expensive query, and a bunch of cheaper ones. Most of those workers
> > wouldn't do much while waiting for the the expensive query.  What I'm
> > basically thinking we should do is something like the following
> > pythonesque pseudocode:
> >
> > best_nonpartial_cost = -1
> > best_nonpartial_nworkers = -1
> >
> > for numworkers in 1...#max workers:
> >worker_work = [0 for x in range(0, numworkers)]
> >
> >nonpartial_cost += startup_cost * numworkers
> >
> ># distribute all nonpartial tasks over workers.  Assign tasks to the
> ># worker with the least amount of work already performed.
> >for task in all_nonpartial_subqueries:
> >least_busy_worker = worker_work.smallest()
> >least_busy_worker += task.total_nonpartial_cost
> >
> ># the nonpartial cost here is the largest amount any single worker
> ># has to perform.
> >nonpartial_cost += worker_work.largest()
> >
> >total_partial_cost = 0
> >for task in all_partial_subqueries:
> >total_partial_cost += task.total_nonpartial_cost
> >
> ># Compute resources needed by partial tasks. First compute how much
> ># cost we can distribute to workers that take shorter than the
> ># "busiest" worker doing non-partial tasks.
> >remaining_avail_work = 0
> >for i in range(0, numworkers):
> >remaining_avail_work += worker_work.largest() - worker_work[i]
> >
> ># Equally divide up remaining work over all workers
> >if remaining_avail_work < total_partial_cost:
> >   nonpartial_cost += (worker_work.largest - remaining_avail_work) / 
> > numworkers
> >
> ># check if this is the best number of workers
> >if best_nonpartial_cost == -1 or best_nonpartial_cost > nonpartial_cost:
> >   best_nonpartial_cost = worker_work.largest
> >   best_nonpartial_nworkers = nworkers
> >
> > Does that make sense?
> 
> Yeah, I gather what you are trying to achieve is : allocate number of
> workers such that the total cost does not exceed the cost of the first
> non-partial plan (i.e. the costliest one, because the plans are sorted
> by descending cost).
> 
> So for non-partial costs such as (20, 10, 5, 2) allocate only 2
> workers because the 2nd worker will execute (10, 5, 2) while 1st
> worker executes (20).
> 
> But for costs such as (4, 4, 4,   20 times), the logic would give
> us 20 workers because we want to finish the Append in 4 time units;
> and this is what we want to avoid when we go with
> don't-allocate-too-many-workers approach.
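The greedy least-loaded assignment described above can be checked with a small runnable sketch (purely illustrative, assuming the goal is to never exceed the cost of the costliest subplan; the function name and the 64-worker cap are made up here, and this is not actual planner code):

```python
import heapq

def min_workers(costs, max_workers=64):
    """Smallest worker count whose greedy least-loaded assignment of the
    non-partial subplan costs keeps the makespan at the cost of the
    single most expensive subplan.
    """
    costs = sorted(costs, reverse=True)
    target = costs[0]
    for n in range(1, max_workers + 1):
        loads = [0] * n  # per-worker accumulated cost, kept as a min-heap
        for c in costs:
            # assign each task to the currently least busy worker
            heapq.heappush(loads, heapq.heappop(loads) + c)
        if max(loads) <= target:
            return n
    return max_workers
```

For costs (20, 10, 5, 2) this yields 2 workers, while twenty tasks of cost 4 yield 20 workers.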

I guess my problem is that I don't agree with that as a goal in and of
itself.  If you actually want to run your query quickly, you *want* 20
workers here.  The issues of backend startup overhead (already via
parallel_setup_cost), concurrency and such cost should be modelled, not
buried in a formula the user can't change.  If we want to make it less
and less likely to start more workers we should make that configurable,
not the default.
I think there's some precedent taken from the parallel seqscan case,
that's not actually applicable here.  Parallel seqscans have a good
amount of shared state, both on the kernel and pg side, and that shared
state reduces gains of increasing the number of workers.  But with
non-partial scans such shared state largely doesn't exist.


> > especially if we get partitionwise joins.
> 
> About that I am not sure, because we already have support for parallel
> joins, so wouldn't the join subpaths corresponding to all of the
> partitions be partial paths ? I may be wrong about that.

We'll probably generate both, and then choose the cheaper one.  The
startup cost for partitionwise joins should usually be a lot cheaper
(because e.g. for hashtables we'll generate smaller hashtables), and we
should have less cost of concurrency.


> But if the subplans are foreign scans, then yes all would be
> non-partial plans. This may provoke off-topic discussion, but here
> instead of assigning so many workers to all these foreign plans and
> all those workers waiting for the results, a single asynchronous
> execution node (which is still in the making) would be desirable
> because it would do the job of all these workers.

That's something that probably shouldn't be modelled through a parallel
append, I agree - it shouldn't be too hard to develop a costing model
for that however.

Greetings,

Andres Freund



Re: [HACKERS] Supporting huge pages on Windows

2017-04-07 Thread Andres Freund
On 2017-04-07 13:57:07 +0200, Magnus Hagander wrote:
> On Wed, Apr 5, 2017 at 9:15 AM, Tsunakawa, Takayuki <
> tsunakawa.ta...@jp.fujitsu.com> wrote:
> 
> > From: pgsql-hackers-ow...@postgresql.org
> > > [mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Andres Freund
> > > As I asked before, why can't we delete all privs and add the explicitly
> > > needed once back (using AdjustTokenPrivileges)?
> >
> > I tried it with pg_ctl.c attached to an earlier mail today, i.e. delete
> > all privs with CreateRestrictedToken(DISABLE_ALL_PRIVILEGE) and enable
> > Lock Pages in Memory with AdjustTokenPrivileges().  But it didn't work;
> > AdjustTokenPrivileges() failed to enable the priv.  It's probably that
> > CreateRestrictedToken() deletes (unassigns?) the privs from the access
> > token, so subsequent AdjustTokenPrivileges() can no longer enable the priv.
> >
> >
> Once you have used CreateRestrictedToken(), you can no longer add
> *anything* to it. It's not just removed privileges, there's a special flag
> on the token that says it's restricted (can be checked with
> IsTokenRestricted()).

:/


> I think what you'd need to do is enumerate what privileges the user has
> *before* calling CreateRestrictedToken(), using GetTokenInformation(). And
> then pass those into PrivilegesToDelete (except for
> SeChangeNotifyPrivilege) in the call to CreateRestrictedToken(), instead of
> using DISABLE_MAX_PRIVILEGE. (and add the privilege needed for huge pages
> before you start that whole process -- that needs to be added in the token
> used *before* we create the restricted one).
> 
> At least that's my guess from reading the docs and trying to remember :)

Yea, seems that way.  Therefore I propose returning this patch with
feedback.

Andres




[HACKERS] [PATCH] Warn users about duplicate configuration parameters

2017-04-07 Thread Aleksander Alekseev
Hi.

Recently I've discovered that if there are multiple values of the same
parameter in postgresql.conf PostgreSQL will silently use the last one.
This doesn't look like the best approach to me. For instance, a user can find
the first value in the config file and expect that it will be used, etc.

I suggest warning users about duplicated parameters. Here is a
corresponding patch.

Thoughts?

-- 
Best regards,
Aleksander Alekseev
diff --git a/src/backend/utils/misc/guc-file.l b/src/backend/utils/misc/guc-file.l
index f01b814..6aa60a4 100644
--- a/src/backend/utils/misc/guc-file.l
+++ b/src/backend/utils/misc/guc-file.l
@@ -304,6 +304,13 @@ ProcessConfigFileInternal(GucContext context, bool applySettings, int elevel)
 			}
 			/* Now mark it as present in file */
 			record->status |= GUC_IS_IN_FILE;
+
+			/* Warn the user about duplicate configuration parameter */
+			ereport(elevel,
+(errcode(ERRCODE_DUPLICATE_OBJECT),
+errmsg("duplicate configuration parameter \"%s\" overrides previous value in file \"%s\" line %u",
+		item->name,
+		item->filename, item->sourceline)));
 		}
 		else if (strchr(item->name, GUC_QUALIFIER_SEPARATOR) == NULL)
 		{




Re: [HACKERS] src/interfaces/libpq shipping nmake-related Makefiles

2017-04-07 Thread Andres Freund
On 2017-04-07 13:00:39 +0200, Magnus Hagander wrote:
> Insurmountable, probably not. The big difference is that you don't need
> *any* dependencies to build a libpq using win32.mak, but you need many of
> them (to start with, perl...) to build using the built-in one. For people
> who want to build it, it certainly save a lot more than "a few minutes".
> For somebody who doesn't have ready scripts it takes a *long* time to set
> up a build environment to do our regular msvc builds.

For a useful libpq you actually need a good chunk of dependencies,
including openssl.

Greetings,

Andres Freund




Re: [HACKERS] [PATCH] Warn users about duplicate configuration parameters

2017-04-07 Thread Andres Freund
On 2017-04-07 18:14:27 +0300, Aleksander Alekseev wrote:
> Hi.
> 
> Recently I've discovered that if there are multiple values of the same
> parameter in postgresql.conf PostgreSQL will silently use the last one.
> It looks like not the best approach to me. For instance, user can find
> the first value in the config file and expect that it will be used, etc.
> 
> I suggest to warn users about duplicated parameters. Here is a
> corresponding patch.
> 
> Thoughts?

-1 - I frequently just override earlier parameters by adding an include
at the end of the file.  Also, with postgresql.auto.conf it's even more
common to override parameters.


Greetings,

Andres Freund




Re: [HACKERS] [PATCH] Warn users about duplicate configuration parameters

2017-04-07 Thread Tatsuo Ishii
>> Recently I've discovered that if there are multiple values of the same
>> parameter in postgresql.conf PostgreSQL will silently use the last one.
>> It looks like not the best approach to me. For instance, user can find
>> the first value in the config file and expect that it will be used, etc.
>> 
>> I suggest to warn users about duplicated parameters. Here is a
>> corresponding patch.
>> 
>> Thoughts?
> 
> -1 - I frequently just override earlier parameters by adding an include
> at the end of the file.  Also, with postgresql.auto.conf it's even more
> common to override parameters.

-1 from me too by the same reason Andres said.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese:http://www.sraoss.co.jp




Re: [HACKERS] [PATCH] Warn users about duplicate configuration parameters

2017-04-07 Thread Aleksander Alekseev
Andres, Tatsuo,

Thank you for sharing your thoughts.

> -1 - I frequently just override earlier parameters by adding an
> include at the end of the file.  Also, with postgresql.auto.conf it's
> even more common to override parameters.

> -1 from me too by the same reason Andres said.

I see no problem here. After all, it's just warnings. We can even add
a GUC that disables them, specifically for experienced users who know what
they are doing. And/or add a special case for postgresql.auto.conf.

-- 
Best regards,
Aleksander Alekseev




Re: [HACKERS] [PATCH] Warn users about duplicate configuration parameters

2017-04-07 Thread Andres Freund
On 2017-04-07 18:29:40 +0300, Aleksander Alekseev wrote:
> Andres, Tatsuo,
> 
> Thank you for sharing your thoughts.
> 
> > -1 - I frequently just override earlier parameters by adding an
> > include at the end of the file.  Also, with postgresql.auto.conf it's
> > even more common to override parameters.
> 
> > -1 from me too by the same reason Andres said.
> 
> I see no problem here. After all, it's just warnings. We can even add
> a GUC that disables them specially for experienced users who knows what
> she or he is doing. And/or add special case for postgresql.auto.conf.

More useless warnings make actual warnings less useful.

Greetings,

Andres Freund




Re: [HACKERS] [PATCH] Warn users about duplicate configuration parameters

2017-04-07 Thread David Steele
On 4/7/17 11:22 AM, Tatsuo Ishii wrote:
>>> Recently I've discovered that if there are multiple values of the same
>>> parameter in postgresql.conf PostgreSQL will silently use the last one.
>>> It looks like not the best approach to me. For instance, user can find
>>> the first value in the config file and expect that it will be used, etc.
>>>
>>> I suggest to warn users about duplicated parameters. Here is a
>>> corresponding patch.
>>>
>>> Thoughts?
>>
>> -1 - I frequently just override earlier parameters by adding an include
>> at the end of the file.  Also, with postgresql.auto.conf it's even more
>> common to override parameters.
> 
> -1 from me too by the same reason Andres said.

-1 from me for the same reason.

-- 
-David
da...@pgmasters.net




Re: [HACKERS] [PATCH] Warn users about duplicate configuration parameters

2017-04-07 Thread Euler Taveira
2017-04-07 12:14 GMT-03:00 Aleksander Alekseev :

> Recently I've discovered that if there are multiple values of the same
> parameter in postgresql.conf PostgreSQL will silently use the last one.
> It looks like not the best approach to me. For instance, user can find
> the first value in the config file and expect that it will be used, etc.
>

The Postgres configuration file concept is based on overriding parameter
values. It would be annoying if we emitted a warning for this feature. Also,
it is easy to find out which file/line a parameter value came from. You can
check for duplicates with a small script.
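For instance, such a script could look roughly like this (a hypothetical sketch, not part of PostgreSQL; it strips comments, skips include directives, and does not follow included files or handle quoting subtleties):

```python
import re
from collections import Counter

def duplicate_gucs(path):
    """Flag configuration parameters set more than once in a single file."""
    names = []
    with open(path) as f:
        for line in f:
            # drop comments and surrounding whitespace
            line = line.split("#", 1)[0].strip()
            # match "name = value" or "name value" style settings
            m = re.match(r"([A-Za-z_][\w.]*)\s*(?:=|\s)\s*\S", line)
            if m and not m.group(1).startswith("include"):
                names.append(m.group(1).lower())
    return [name for name, count in Counter(names).items() if count > 1]
```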


-- 
   Euler Taveira   Timbira - http://www.timbira.com.br/
   PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento



Re: [HACKERS] [PATCH] Warn users about duplicate configuration parameters

2017-04-07 Thread David G. Johnston
On Fri, Apr 7, 2017 at 8:29 AM, Aleksander Alekseev <
a.aleks...@postgrespro.ru> wrote:

> Andres, Tatsuo,
>
> Thank you for sharing your thoughts.
>
> > -1 - I frequently just override earlier parameters by adding an
> > include at the end of the file.  Also, with postgresql.auto.conf it's
> > even more common to override parameters.
>
> > -1 from me too by the same reason Andres said.
>
> I see no problem here. After all, it's just warnings. We can even add
> a GUC that disables them specially for experienced users who knows what
> she or he is doing. And/or add special case for postgresql.auto.conf.
>
>
-1 for learning how the configuration system works via warning messages.

We've recently added pg_file_settings to provide a holistic view and the
docs cover the topic quite well.

David J.


Re: [HACKERS] [PATCH] Document the order of changing certain settings when using hot-standby servers

2017-04-07 Thread Yorick Peterse
Done! It can be found at https://commitfest.postgresql.org/14/1110/

Thanks for reviewing thus far :)

Yorick




Re: [HACKERS] pg_basebackup: Allow use of arbitrary compression program

2017-04-07 Thread Jeff Janes
On Thu, Apr 6, 2017 at 7:04 PM, Michael Harris  wrote:

> Hello,
>
> Back in pg 9.2, we hacked a copy of pg_basebackup to add a command
> line option which would allow the user to specify an arbitrary
> external program (potentially including arguments) to be used to
> compress the tar backup.
>
> Our motivation was to be able to use pigz (parallel gzip
> implementation) to speed up the compression. It also allows using
> tools like bzip2, xz, etc instead of the inbuilt zlib.
>
> I never ended up submitting that upstream, but now it looks like I
> will have to repeat the exercise for 9.6, so I was wondering if such a
> feature would be welcomed.
>

I would welcome it.  I would really like to be able to use parallel pigz
and pxz.

You can stream the data into a compression tool of your choice as long as
you use tar mode and specify '-D -', but that is incompatible with table
spaces, and with xlog streaming, and so is not a very good solution.

Cheers,

Jeff


Re: [HACKERS] Duplicate assignment in Unicode/convutils.pm

2017-04-07 Thread Heikki Linnakangas

On 04/07/2017 09:32 AM, Kyotaro HORIGUCHI wrote:

Hi, I found that convutils.pl contains a harmless duplicate
assignment.


my $out = {f => $fname, l => $.,
   code => hex($1),
   ucs => hex($2),
   comment => $4,
   direction => BOTH,
   f => $fname,
   l => $.
};


Of course this is utterly harmless but wrong.

The attached patch fixes this, following the other perl files around it.
No similar mistake was found there.


Committed, thanks!

- Heikki





Re: [HACKERS] Undefined psql variables

2017-04-07 Thread Pavel Stehule
2017-04-07 9:52 GMT+02:00 Fabien COELHO :

>
> Hello Corey,
>
> \if defined varname
 \if sql boolean expression to send to server
 \if compare value operator value

>>>
>>> I'm still thinking:-)
>>>
>>> Independently of my aesthetic complaint against having a pretty
>>> unusual keyword prefix syntax, how would you envision a \set assignment
>>> variant? Would \if have a different expression syntax somehow?
>>>
>>
>> Any further thoughts?
>>
>
> My current opinion:
>
>  - I'm fine if \set stays as it is, i.e. no expression.
>
>  - I agree that some client-side expressions are needed, along the
>semantics suggested by Tom, i.e. definition and comparisons.
>
>  - I'm really against the prefix syntax suggested by Tom
>
>
> I wish I could have an explanation about why the :?varname (or some other
> variant) syntax I suggested has a "namespace" issue.
>
> The advantage that I see is that although it is obviously ugly, it is ugly
> in the continuity of the various :["'?]varname syntaxes already offered and
> it allows to get rid of "defined varname" which does not look like SQL. A
> second advantage is that with the "defined" proposal
>

I don't think this argument is valid - \if doesn't look like SQL either.


>
>\if defined var1 and defined var2 or defined var3 and sqlrt() >= ..
>
> would probably never work, as it cannot be embedded in another
> expression, while it would work with
>
>\if :?var1 and :?var2 or :?var3 and ...
>
>
I don't see any reason why the first should not work and the second should work


>
> Moreover, I would like the condition syntax to be basically SQL & psql
> variables, without explicit prefixes, with a transparent decision whether
> it is evaluated client side or server side.
>
> As client-side expressions are pretty simple, ISTM that some regex could
> be used for this purpose, eg for integer and boolean comparisons:
>
>  ^\s*\d+\s*(=|<>|!=|<|<=|>|>=)\s*\d+\s*$
>  ^\s*(bool...)\s*(=|<>|!=)\s*(bool...)\s*$
>  ^\s*(NOT\s*)?(bool...)\s*$
>
> So that one could just write the expressions without having to tell where
> it is executed, eg
>
>  \if :VERSION_NUM < 11
>
> Would lead to
>
>  \if 10 < 11
>
> Caught by the first regex, and evaluated with a few lines of code.
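The dispatch Fabien sketches could look like this in Python (an illustration only — psql itself is C, and the regex below is a simplified variant of the ones above, applied after :variable substitution has already produced e.g. "10 < 11"):

```python
import re

# Two-character operators are listed before their one-character prefixes
# so that "<=" is not matched as "<" followed by a stray "=".
INT_CMP = re.compile(r"^\s*(\d+)\s*(<>|!=|<=|>=|=|<|>)\s*(\d+)\s*$")

OPS = {
    "=": lambda a, b: a == b, "<>": lambda a, b: a != b,
    "!=": lambda a, b: a != b, "<": lambda a, b: a < b,
    "<=": lambda a, b: a <= b, ">": lambda a, b: a > b,
    ">=": lambda a, b: a >= b,
}

def eval_client_side(expr):
    """Return True/False for a simple integer comparison, or None to
    signal that the expression should be sent to the server instead."""
    m = INT_CMP.match(expr)
    if m is None:
        return None
    a, op, b = m.groups()
    return OPS[op](int(a), int(b))
```

So "10 < 11" evaluates client-side, while anything the regexes don't recognize falls through to the server.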


I have a different opinion - the condition expression doesn't necessarily
have to be SQL. This language is oriented toward client-side operations. What
is the benefit of server-side expressions?

Regards

Pavel


>
>
> --
> Fabien.
>
>
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>


Re: [HACKERS] Performance issue with postgres9.6

2017-04-07 Thread Merlin Moncure
On Fri, Apr 7, 2017 at 5:16 AM, Prakash Itnal  wrote:
> Hello,
>
> We currently use postgres 9.3 in our products. Recently we upgraded to
> postgres 9.6. But with 9.6 we have seen a drastic reduction in throughput.
> After analyzing carefully I found that "planner time" in 9.6 is very high.
> Below are the details:
>
> Scenario:
> 1 Create a table with 10 rows.
> 2 Execute simple query: select * from subscriber where s_id = 100;
> 3 No update/delete/insert; tried vacuum, full vacuum; by default we enable
> auto-vacuum
>
> 9.3: Avg of "Total runtime" : 0.24ms [actual throughput: 650 TPS]
> 9.6: Avg of Total time: 0.56ms (Avg of "Planning time" : 0.38ms + Avg of
> "Execution time" : 0.18ms) [actual throughput: 80 TPS]

I think your math is off.  Looking at your attachments, planning time
is 0.056ms, not 0.56ms.  This is in no way relevant to performance on
the order of your measured TPS.   How are you measuring TPS?
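As a back-of-the-envelope check (using the 0.056 ms planning and 0.18 ms execution figures as quoted from the attachments):

```python
# Values as quoted from the attachments: 0.056 ms planning, 0.18 ms execution.
plan_ms, exec_ms = 0.056, 0.18
per_query_ms = plan_ms + exec_ms
# Even one connection issuing queries strictly serially could sustain
# roughly 4200 queries/s at these times, so a drop to 80 TPS cannot be
# explained by planning time alone.
max_tps_single_connection = 1000.0 / per_query_ms
```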

merlin




Re: [HACKERS] Compiler warning in costsize.c

2017-04-07 Thread Tom Lane
Michael Paquier  writes:
> Bah. This actually fixes nothing. Attached is a different patch that
> really addresses the problem, by removing the variable because we
> don't want planner_rt_fetch() to run for non-Assert builds.

I don't really like any of these fixes, because they take the code
further away from the structure used by all the other similar functions
in costsize.c, and they will be hard to undo whenever these functions
grow a reason to look at the RTE normally (outside asserts).

I'd be happier with something along the line of

RangeTblEntry *rte;
ListCell   *lc;

/* Should only be applied to base relations that are subqueries */
Assert(rel->relid > 0);
rte = planner_rt_fetch(rel->relid, root);
#ifdef USE_ASSERT_CHECKING
Assert(rte->rtekind == RTE_SUBQUERY);
#else
(void) rte;  /* silence unreferenced-variable complaints */
#endif

assuming that that actually does silence the warning on MSVC.

BTW, is it really true that only these two places produce such warnings
on MSVC?  I see about three dozen uses of PG_USED_FOR_ASSERTS_ONLY in our
tree, and I'd have expected all of those places to be causing warnings on
a compiler that doesn't have a way to understand that annotation.

regards, tom lane




[HACKERS] valgrind errors around dsa.c

2017-04-07 Thread Andres Freund
Hi,

newly added tests exercise parallel bitmap scans.  And they trigger
valgrind errors:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skink&dt=2017-04-07%2007%3A10%3A01


==4567== VALGRINDERROR-BEGIN
==4567== Conditional jump or move depends on uninitialised value(s)
==4567==at 0x5FD62A: check_for_freed_segments (dsa.c:2219)
==4567==by 0x5FD97E: dsa_get_address (dsa.c:934)
==4567==by 0x5FDA2A: init_span (dsa.c:1339)
==4567==by 0x5FE6D1: ensure_active_superblock (dsa.c:1696)
==4567==by 0x5FEBBD: alloc_object (dsa.c:1452)
==4567==by 0x5FEBBD: dsa_allocate_extended (dsa.c:693)
==4567==by 0x3C7A83: pagetable_allocate (tidbitmap.c:1536)
==4567==by 0x3C7A83: pagetable_create (simplehash.h:342)
==4567==by 0x3C7A83: tbm_create_pagetable (tidbitmap.c:323)
==4567==by 0x3C8DAD: tbm_get_pageentry (tidbitmap.c:1246)
==4567==by 0x3C98A1: tbm_add_tuples (tidbitmap.c:432)
==4567==by 0x22510C: btgetbitmap (nbtree.c:460)
==4567==by 0x21A8D1: index_getbitmap (indexam.c:726)
==4567==by 0x38AD48: MultiExecBitmapIndexScan (nodeBitmapIndexscan.c:91)
==4567==by 0x37D353: MultiExecProcNode (execProcnode.c:621)
==4567==  Uninitialised value was created by a heap allocation
==4567==at 0x602FD5: palloc (mcxt.c:872)
==4567==by 0x5FF73B: create_internal (dsa.c:1242)
==4567==by 0x5FF8F5: dsa_create_in_place (dsa.c:473)
==4567==by 0x37CA32: ExecInitParallelPlan (execParallel.c:532)
==4567==by 0x38C324: ExecGather (nodeGather.c:152)
==4567==by 0x37D247: ExecProcNode (execProcnode.c:551)
==4567==by 0x39870F: ExecNestLoop (nodeNestloop.c:156)
==4567==by 0x37D1B7: ExecProcNode (execProcnode.c:512)
==4567==by 0x3849D4: fetch_input_tuple (nodeAgg.c:686)
==4567==by 0x387764: agg_retrieve_direct (nodeAgg.c:2306)
==4567==by 0x387A11: ExecAgg (nodeAgg.c:2117)
==4567==by 0x37D217: ExecProcNode (execProcnode.c:539)
==4567==

It could be that these are spurious due to shared memory - valgrind
doesn't track definedness across processes - but the fact that memory
allocated by palloc is the source of the undefined memory makes me doubt
that.

Greetings,

Andres Freund




Re: [HACKERS] ExecPrepareExprList and per-query context

2017-04-07 Thread Tom Lane
Amit Langote  writes:
> Should ExecPrepareExprList also switch to estate->es_query_cxt?

Good point; I'm surprised we haven't noted any failures from that.
We surely want the entire result data structure to be in the same
memory context.  There are not very many callers right now, and
I guess they are all in the right context already (or we aren't
testing them :-().

> Or maybe
> ExecPrepareExpr could itself detect that passed-in node is a List and
> create the list of ExprState nodes by itself.

-1.  That's just breaking the API of ExecPrepareExpr.

regards, tom lane




[HACKERS] recent deadlock regression test failures

2017-04-07 Thread Andres Freund
Hi,

There's two machines that recently report changes in deadlock detector
output:
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2017-04-05%2018%3A58%3A04
https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=friarbird&dt=2017-04-07%2004%3A20%3A01

both just failed twice in a row:
https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=hyrax&br=HEAD
https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=friarbird&br=HEAD
without previous errors of the same ilk.

I don't think any recent changes are supposed to affect deadlock
detector behaviour?

- Andres




[HACKERS] pgbench --progress-timestamp no longer works correctly

2017-04-07 Thread Jeff Janes
--progress-timestamp is supposed to make -P report a Unix Epoch time stamp,
for easy correlation with the entries in other log files (like the postgres
server log file using %n).

But that broke in this commit:

commit 1d63f7d2d180c8708bc12710254eb7b45823440f
Author: Tom Lane 
Date:   Mon Jan 2 13:41:51 2017 -0500

Use clock_gettime(), if available, in instr_time measurements.


The commit before that one changed pgbench to make it tolerate the change
in clock, but it overlooked --progress-timestamp.

Cheers,

Jeff


Re: [HACKERS] SCRAM authentication, take three

2017-04-07 Thread Robert Haas
On Fri, Apr 7, 2017 at 3:59 AM, Heikki Linnakangas  wrote:
> I think the "SCRAM" part is more important than "SHA-256", so -1 on that.

I agree.  The point here isn't that we're using a better hashing
method, even if a lot of people *think* that's the point.  The point
is we're using a modern algorithm that has nice properties like "you
can't impersonate the client by stealing the verifier, or even by
snooping the exchange".

But "sasl" might be even better.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] recent deadlock regression test failures

2017-04-07 Thread Andrew Dunstan


On 04/07/2017 12:57 PM, Andres Freund wrote:
> Hi,
>
> There's two machines that recently report changes in deadlock detector
> output:
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hyrax&dt=2017-04-05%2018%3A58%3A04
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=friarbird&dt=2017-04-07%2004%3A20%3A01
>
> both just failed twice in a row:
> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=hyrax&br=HEAD
> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=friarbird&br=HEAD
> without previous errors of the same ilk.
>
> I don't think any recent changes are supposed to affect deadlock
> detector behaviour?
>


Both these machines have CLOBBER_CACHE_ALWAYS set. And on both machines
recent changes have made the isolation tests run much much longer.

cheers

andrew

-- 
Andrew Dunstan    https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services





Re: [HACKERS] recent deadlock regression test failures

2017-04-07 Thread Tom Lane
Andrew Dunstan  writes:
> On 04/07/2017 12:57 PM, Andres Freund wrote:
>> I don't think any recent changes are supposed to affect deadlock
>> detector behaviour?

> Both these machines have CLOBBER_CACHE_ALWAYS set. And on both machines
> recent changes have made the isolation tests run much much longer.

Ouch.  I see friarbird's run time for the isolation tests has gone from an
hour and change to over 5 hours in one fell swoop.  hyrax not much better.
Oddly, non-CCA animals don't seem to have changed much.

Eyeing recent patches, it seems like the culprit must be Kevin's
addition to isolationtester's wait query:

@@ -231,6 +231,14 @@ main(int argc, char **argv)
appendPQExpBuffer(&wait_query, ",%s", backend_pids[i]);
appendPQExpBufferStr(&wait_query, "}'::integer[]");
 
+   /* Also detect certain wait events. */
+   appendPQExpBufferStr(&wait_query,
+" OR EXISTS ("
+"  SELECT * "
+"  FROM pg_catalog.pg_stat_activity "
+"  WHERE pid = $1 "
+"  AND wait_event IN ('SafeSnapshot'))");
+
res = PQprepare(conns[0], PREP_WAITING, wait_query.data, 0, NULL);
if (PQresultStatus(res) != PGRES_COMMAND_OK)
{

This seems a tad ill-considered.  We sweated a lot of blood not so long
ago to get the runtime of that query down, and this seems to have blown
it up again.  And done so for every isolation test case, not only the
ones where it could possibly matter.

regards, tom lane




Re: [HACKERS] Performance issue with postgres9.6

2017-04-07 Thread Tomas Vondra

On 04/07/2017 06:31 PM, Merlin Moncure wrote:

On Fri, Apr 7, 2017 at 5:16 AM, Prakash Itnal  wrote:

Hello,

We currently use postgres 9.3 in our products. Recently we upgraded to
postgres 9.6. But with 9.6 we have seen a drastic reduction in throughput.
After analyzing carefully I found that "planner time" in 9.6 is very high.
Below are the details:

Scenario:
1. Create a table with 10 rows.
2. Execute simple query: select * from subscriber where s_id = 100;
3. No update/delete/insert; tried vacuum, full vacuum; by default we enable
auto-vacuum

9.3: Avg of "Total runtime" : 0.24ms [actual throughput: 650 TPS]
9.6: Avg of Total time: 0.56ms (Avg of "Planning time" : 0.38ms + Avg of
"Execution time" : 0.18ms) [actual throughput: 80 TPS]


I think your math is off.  Looking at your attachments, planning time
is 0.056ms, not 0.56ms.  This is in no way relevant to performance on
the order of your measured TPS.   How are you measuring TPS?



Not sure where you got the 0.056ms from? What I see is this in the 9.3 
explains:


 Total runtime: 0.246 ms

and this in those from 9.6:

 Planning time: 0.396 ms

 Execution time: 0.181 ms


That is roughly 0.25ms vs. 0.6ms (0.4+0.2), as reported by Prakash.

Obviously, this is "just" a 2x slowdown, so it does not match the drop from 
650 to 80 tps. Also, 0.25ms would be ~4000 tps, so I guess this was just 
an example of a query that slowed down.
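
For reference, turning those quoted latencies into implied single-connection 
throughput is simple arithmetic (a rough sketch only; the latencies are the 
ones quoted from the EXPLAIN output above, and real TPS also depends on 
client and network overhead):

```python
# Implied single-connection throughput from the quoted EXPLAIN latencies.
# The 0.246/0.396/0.181 ms figures come from this thread; the rest is
# plain arithmetic, not a measurement.

ms_93 = 0.246            # 9.3 "Total runtime" (execution only)
ms_96 = 0.396 + 0.181    # 9.6 planning + execution

tps_93 = 1000.0 / ms_93  # ~4065 queries/sec
tps_96 = 1000.0 / ms_96  # ~1733 queries/sec

print(round(tps_93), round(tps_96))  # 4065 1733
```

Neither figure is anywhere near the reported 650 or 80 tps, which is 
exactly why the EXPLAIN output alone cannot explain the drop.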


Prakash, are you using packages (which ones?), or have you compiled from 
sources? Can you provide pg_config output from both versions, and also 
'select * from pg_settings' (the full config)?


It might also be useful to collect profiles, i.e. (1) install debug 
symbols (2) run the query in a loop and (3) collect profiles from that 
one backend using 'perf'.


I assume you're using the same hardware / machine for the tests?

regards

--
Tomas Vondra  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Speed up Clog Access by increasing CLOG buffers

2017-04-07 Thread Robert Haas
On Thu, Mar 9, 2017 at 5:49 PM, Robert Haas  wrote:
> However, I just realized that in
> both this case and in the case of group XID clearing, we weren't
> advertising a wait event for the PGSemaphoreLock calls that are part
> of the group locking machinery.  I think we should fix that, because a
> quick test shows that can happen fairly often -- not, I think, as
> often as we would have seen LWLock waits without these patches, but
> often enough that you'll want to know.  Patch attached.

I've pushed the portion of this that relates to ProcArrayLock.  (I
know this hasn't been discussed much, but there doesn't really seem to
be any reason for anybody to object, and looking at just the
LWLock/ProcArrayLock wait events gives a highly misleading answer.)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] recent deadlock regression test failures

2017-04-07 Thread Kevin Grittner
On Fri, Apr 7, 2017 at 12:28 PM, Tom Lane  wrote:
> Andrew Dunstan  writes:
>> On 04/07/2017 12:57 PM, Andres Freund wrote:
>>> I don't think any recent changes are supposed to affect deadlock
>>> detector behaviour?
>
>> Both these machines have CLOBBER_CACHE_ALWAYS set. And on both machines
>> recent changes have made the isolation tests run much much longer.
>
> Ouch.  I see friarbird's run time for the isolation tests has gone from an
> hour and change to over 5 hours in one fell swoop.  hyrax not much better.
> Oddly, non-CCA animals don't seem to have changed much.
>
> Eyeing recent patches, it seems like the culprit must be Kevin's
> addition to isolationtester's wait query:

Ouch.  Without this we don't have regression test coverage for the
SERIALIZABLE READ ONLY DEFERRABLE code, but it's probably not worth
adding 4 hours to any tests, even if it only shows up with
CLOBBER_CACHE_ALWAYS.  I assume the consensus is that I should revert
it?

--
Kevin Grittner




Re: [HACKERS] parallel bitmapscan isn't exercised in regression tests

2017-04-07 Thread Andres Freund
On 2017-04-06 13:43:55 -0700, Andres Freund wrote:
> On 2017-04-06 10:00:32 +0530, Dilip Kumar wrote:
> > On Tue, Apr 4, 2017 at 5:51 AM, Dilip Kumar  wrote:
> > > Sure I can do that, In attached patch, I only fixed the problem of not
> > > executing the bitmap test.  Now, I will add few cases to cover other
> > > parts especially rescan and prefetching logic.
> > 
> > I have added two test cases to cover rescan, prefetch and lossy pages
> > logic for parallel bitmap.  I have removed the existing case because
> > these two new cases will be enough to cover that part as well.
> > 
> > Now, nodeBitmapHeapScan.c has 95.5% of line coverage.
> 
> Great!  Pushed.

At some point it might also be a good idea to compare parallel and
non-parallel results.  It's obviously quite possible to break semantics
with parallelism...

- Andres




Re: [HACKERS] recent deadlock regression test failures

2017-04-07 Thread Andres Freund
On 2017-04-07 12:49:22 -0500, Kevin Grittner wrote:
> On Fri, Apr 7, 2017 at 12:28 PM, Tom Lane  wrote:
> > Andrew Dunstan  writes:
> >> On 04/07/2017 12:57 PM, Andres Freund wrote:
> >>> I don't think any recent changes are supposed to affect deadlock
> >>> detector behaviour?
> >
> >> Both these machines have CLOBBER_CACHE_ALWAYS set. And on both machines
> >> recent changes have made the isolation tests run much much longer.
> >
> > Ouch.  I see friarbird's run time for the isolation tests has gone from an
> > hour and change to over 5 hours in one fell swoop.  hyrax not much better.
> > Oddly, non-CCA animals don't seem to have changed much.
> >
> > Eyeing recent patches, it seems like the culprit must be Kevin's
> > addition to isolationtester's wait query:
> 
> Ouch.  Without this we don't have regression test coverage for the
> SERIALIZABLE READ ONLY DEFERRABLE code, but it's probably not worth
> adding 4 hours to any tests, even if it only shows up with
> CLOBBER_CACHE_ALWAYS.  I assume the consensus is that I should revert
> it?

I'd rather fix the issue than remove the tests entirely.  Seems quite
possible to handle blocking on SafeSnapshot in a similar manner as
pg_blocking_pids?

Greetings,

Andres Freund




Re: [HACKERS] Performance issue with postgres9.6

2017-04-07 Thread Tom Lane
Tomas Vondra  writes:
> On 04/07/2017 06:31 PM, Merlin Moncure wrote:
>> I think your math is off.  Looking at your attachments, planning time
>> is 0.056ms, not 0.56ms.  This is in no way relevant to performance on
>> the order of your measured TPS.   How are you measuring TPS?

> Not sure where you got the 0.056ms from?

I don't see that either, but:

> What I see is this in the 9.3 explains:
>   Total runtime: 0.246 ms
> and this in those from 9.6:
>   Planning time: 0.396 ms
>   Execution time: 0.181 ms
> That is roughly 0.25ms vs. 0.6ms (0.4+0.2), as reported by Prakash.

9.3's EXPLAIN did not measure planning time at all.  The "Total runtime"
it reports corresponds to "Execution time" in the newer version.  So
these numbers indicate that 9.6 is significantly *faster*, not slower,
than 9.3, at least so far as execution of this one example is concerned.

The OP may well be having some performance issue with 9.6, but the
presented material completely fails to demonstrate it.

regards, tom lane




Re: [HACKERS] UPDATE of partition key

2017-04-07 Thread Robert Haas
On Wed, Apr 5, 2017 at 5:54 AM, Amit Langote
 wrote:
> Marked as ready for committer.

Andres seems to have changed the status of this patch to "Needs
review" and then, 30 seconds later, to "Waiting on author", but
there's no actual email on the thread explaining what his concerns
were.  I'm going to set this back to "Ready for Committer" and push it
out to the next CommitFest.  I think this would be a great feature,
but I think it's not entirely clear that we have consensus on the
design, so let's revisit it for next release.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] UPDATE of partition key

2017-04-07 Thread Andres Freund
On 2017-04-07 13:55:51 -0400, Robert Haas wrote:
> On Wed, Apr 5, 2017 at 5:54 AM, Amit Langote
>  wrote:
> > Marked as ready for committer.
> 
> Andres seems to have changed the status of this patch to "Needs
> review" and then, 30 seconds later, to "Waiting on author"
> there's no actual email on the thread explaining what his concerns
> were.  I'm going to set this back to "Ready for Committer" and push it
> out to the next CommitFest.  I think this would be a great feature,
> but I think it's not entirely clear that we have consensus on the
> design, so let's revisit it for next release.

I was kind of looking for the appropriate status for "not entirely clear
that we have consensus on the design" - which isn't really
ready-for-committer, but not really waiting-on-author either...

Greetings,

Andres Freund




[HACKERS] Partitioned tables vs GRANT

2017-04-07 Thread Joe Conway
Apparently INSERT and SELECT on the parent partitioned table skip normal
acl checks on the partitions. Is that intended behavior?

8<---
test=# create user part_test;
CREATE ROLE
test=#
test=# create table t1 (id int) partition by range ((id % 4));
CREATE TABLE
test=# create table t1_0 partition of t1 for values from (0) to (1);
CREATE TABLE
test=# create table t1_1 partition of t1 for values from (1) to (2);
CREATE TABLE
test=# create table t1_2 partition of t1 for values from (2) to (3);
CREATE TABLE
test=# create table t1_3 partition of t1 for values from (3) to (4);
CREATE TABLE
test=# grant all on TABLE t1 to part_test;
GRANT
test=# set session authorization part_test ;
SET
test=> select current_user;
 current_user
--
 part_test
(1 row)

test=> insert into t1 values(0),(1),(2),(3);
INSERT 0 4
test=> insert into t1_0 values(0);
ERROR:  permission denied for relation t1_0
test=> insert into t1_1 values(1);
ERROR:  permission denied for relation t1_1
test=> insert into t1_2 values(2);
ERROR:  permission denied for relation t1_2
test=> insert into t1_3 values(3);
ERROR:  permission denied for relation t1_3
test=> select * from t1;
 id

  0
  1
  2
  3
(4 rows)

test=> select * from t1_0;
ERROR:  permission denied for relation t1_0
test=> select * from t1_1;
ERROR:  permission denied for relation t1_1
test=> select * from t1_2;
ERROR:  permission denied for relation t1_2
test=> select * from t1_3;
ERROR:  permission denied for relation t1_3
test=> reset session authorization;
RESET
test=# drop table if exists t1;
DROP TABLE
8<---

Joe

-- 
Crunchy Data - http://crunchydata.com
PostgreSQL Support for Secure Enterprises
Consulting, Training, & Open Source Development





Re: [HACKERS] [PATCH] Remove unused argument in btree_xlog_split

2017-04-07 Thread Robert Haas
On Thu, Apr 6, 2017 at 8:21 AM, Aleksander Alekseev
 wrote:
> Hi Robert,
>
>> Hmm.  I don't see anything wrong with that, particularly, but it seems
>> we also don't need the distinction between XLOG_BTREE_SPLIT_L and
>> XLOG_BTREE_SPLIT_L_ROOT or likewise between XLOG_BTREE_SPLIT_R and
>> XLOG_BTREE_SPLIT_R_ROOT -- in which case I think this patch should go
>> a little further and do all of that together.
>
> Thank you for sharing your thoughts on this patch. Here is a new
> version.

Thanks.  Please add this to the next CommitFest, as there seems to be
no urgency (and some risk) in committing it right before feature
freeze.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Partitioned tables vs GRANT

2017-04-07 Thread Keith Fiske
On Fri, Apr 7, 2017 at 2:05 PM, Joe Conway  wrote:

> Apparently INSERT and SELECT on the parent partitioned table skip normal
> acl checks on the partitions. Is that intended behavior?
>
> 8<---
> test=# create user part_test;
> CREATE ROLE
> test=#
> test=# create table t1 (id int) partition by range ((id % 4));
> CREATE TABLE
> test=# create table t1_0 partition of t1 for values from (0) to (1);
> CREATE TABLE
> test=# create table t1_1 partition of t1 for values from (1) to (2);
> CREATE TABLE
> test=# create table t1_2 partition of t1 for values from (2) to (3);
> CREATE TABLE
> test=# create table t1_3 partition of t1 for values from (3) to (4);
> CREATE TABLE
> test=# grant all on TABLE t1 to part_test;
> GRANT
> test=# set session authorization part_test ;
> SET
> test=> select current_user;
>  current_user
> --
>  part_test
> (1 row)
>
> test=> insert into t1 values(0),(1),(2),(3);
> INSERT 0 4
> test=> insert into t1_0 values(0);
> ERROR:  permission denied for relation t1_0
> test=> insert into t1_1 values(1);
> ERROR:  permission denied for relation t1_1
> test=> insert into t1_2 values(2);
> ERROR:  permission denied for relation t1_2
> test=> insert into t1_3 values(3);
> ERROR:  permission denied for relation t1_3
> test=> select * from t1;
>  id
> 
>   0
>   1
>   2
>   3
> (4 rows)
>
> test=> select * from t1_0;
> ERROR:  permission denied for relation t1_0
> test=> select * from t1_1;
> ERROR:  permission denied for relation t1_1
> test=> select * from t1_2;
> ERROR:  permission denied for relation t1_2
> test=> select * from t1_3;
> ERROR:  permission denied for relation t1_3
> test=> reset session authorization;
> RESET
> test=# drop table if exists t1;
> DROP TABLE
> 8<---
>
> Joe
>
> --
> Crunchy Data - http://crunchydata.com
> PostgreSQL Support for Secure Enterprises
> Consulting, Training, & Open Source Development
>
>
I encountered that as well when testing native partitioning support in
pg_partman. I had to include the code from the non-native implementation
that propagates ownership/privileges from the parent to the child.
Another question to ask: if you change privileges on the parent, does
that automatically change them for all children as well? Fixing this with
plpgsql methods turned out to be a rather expensive operation when the
child count grows high. That's why resetting all child table privileges
is a separate, manual function, and changes only apply to new partitions
automatically. Hopefully internally there's a more efficient way.
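
For what it's worth, the propagation logic described above can be sketched
generically. This is a hypothetical illustration, not pg_partman's actual
code (which is plpgsql): it only builds the GRANT statements for a known
list of children; in practice you would fetch the children from
pg_inherits and execute each statement through your driver. Table and role
names are taken from Joe's example.

```python
# Hypothetical sketch: build GRANT statements for a list of partitions.
# In a real system the child list would come from pg_inherits and each
# statement would be executed against the database.

def grant_statements(children, role, privs=("SELECT", "INSERT")):
    """Return one GRANT statement per child table."""
    return [
        f"GRANT {', '.join(privs)} ON {child} TO {role};"
        for child in children
    ]

stmts = grant_statements(["t1_0", "t1_1", "t1_2", "t1_3"], "part_test")
print(stmts[0])  # GRANT SELECT, INSERT ON t1_0 TO part_test;
```

The cost mentioned above comes from running one such statement per child;
with thousands of partitions that is thousands of ACL updates, which is
why applying changes lazily (only to new partitions) is attractive.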

Keith


Re: [HACKERS] postgres_fdw bug in 9.6

2017-04-07 Thread Robert Haas
On Mon, Apr 3, 2017 at 2:12 AM, Etsuro Fujita
 wrote:
> On 2017/04/01 1:32, Jeff Janes wrote:
>> On Thu, Mar 23, 2017 at 5:20 AM, Etsuro Fujita
>> mailto:fujita.ets...@lab.ntt.co.jp>> wrote:
>> Done.  Attached is a new version of the patch.
>> Is the fix for 9.6.3 going to be just a back port of this, or will it
>> look different?
>
> +1 for backporting; although that requires that GetForeignJoinPaths be
> updated so that the FDW uses a new function to create an alternative local
> join path (ie, CreateLocalJoinPath), that would make maintenance of the code
> easy.

Well, the problem here is that this breaks ABI compatibility.  If we
applied this to 9.6, and somebody tried to use a previously-compiled
FDW .so against a new server version, it would fail after the upgrade,
because the new server wouldn't have GetExistingLocalJoinPath and also
possibly because of the change to the structure of JoinPathExtraData.
Maybe there's no better alternative, and maybe nothing outside of
postgres_fdw is using this stuff anyway, but it seems like a concern.

Also, the CommitFest entry for this seems to be a bit sketchy.  It
claims that Tom Lane is a co-author of this patch which, AFAICS, is
not the case.  It is listed under Miscellaneous rather than "Bug
Fixes", which seems like a surprising decision.  And it uses a subject
line which is neither very clear nor the same as the (also not
particularly helpful) subject line of the email thread.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] recent deadlock regression test failures

2017-04-07 Thread Kevin Grittner
On Fri, Apr 7, 2017 at 12:52 PM, Andres Freund  wrote:

> I'd rather fix the issue than remove the tests entirely.  Seems quite
> possible to handle blocking on SafeSnapshot in a similar manner as
> pg_blocking_pids?

I'll see what I can figure out.

-- 
Kevin Grittner




Re: [HACKERS] partitioned tables and contrib/sepgsql

2017-04-07 Thread Mike Palmiotto
On Thu, Apr 6, 2017 at 5:52 PM, Joe Conway  wrote:
> On 04/06/2017 12:35 PM, Tom Lane wrote:
>> Joe Conway  writes:
>>> Any thoughts on whether 0001a and 0001b ought to be backpatched? I'm
>>> thinking not given the lack of past complaints but it might make sense
>>> to do.
>>
>> I think 0001a absolutely needs to be, because it is fixing what is really
>> an ABI violation: sepgsql_needs_fmgr_hook is supposed to return our notion
>> of bool, but as things stand it's returning _Bool (which is why the
>> compiler is complaining).  Now we might get away with that on most
>> hardware, but on platforms where those are different widths, it's possible
>> to imagine function-return conventions that would make it fail.
>>
>> 0001b seems to only be needed for compilers that aren't smart enough
>> to see that tclass won't be referenced for RELKIND_INDEX, so it's
>> just cosmetic.
>
> Ok, committed/pushed that way.
>
> I found some missing bits in the 0002 patch -- new version attached.
> Will wait on new regression tests before committing, but I expect we'll
> have those by end of today and be able to commit the rest tomorrow.

Attached are the regression test updates for partitioned tables.

Thanks,
-- 
Mike Palmiotto
Software Engineer
Crunchy Data Solutions
https://crunchydata.com




[HACKERS] Remaining 2017-03 CF entries

2017-04-07 Thread Andres Freund
Hi,

When I started writing this, there were the following remaining CF
items, minus bugfix ones, which aren't bound by the code freeze.

I think it makes sense to go through those and see whether it's
realistic to commit any of them.

Ready for Committer:

Add GUCs for predicate lock promotion thresholds:
- claimed by Kevin, should be easy enough

initdb configurable wal_segment_size
- parts have been committed
- significantly derailed by segment naming discussion
- possibly committable if we decide to skip the naming bit? But also a
  bit late given that it touches some quite sensitive code.

Create fdw_outerpath for foreign
- haven't really followed discussion
- only marked as ready-for-committer 2017-04-04

Vacuum: allow usage of more than 1GB of work mem
- hm, maybe?  Will take a look.

Unique Joins
- Tom's discussing things with David, not sure.

Push down more UPDATEs/DELETEs in postgres_fdw
- claimed by Robert?

postgres_fdw: support parameterized foreign joins
- think that depends on fdw_outerpath?


Waiting on Author:

SQL statements statistics counter view (pg_stat_sql)
- the code doesn't look quite ready
- don't think we quite have design agreement, e.g. I don't like where it's
  hooked into query execution

Write Amplification Reduction Method (WARM)
- fair number of people don't think it's ready for v10.
- can't move to next fest because it's waiting-on-author, which doesn't
  allow that.  Doesn't strike me as a useful restriction.

BRIN optimize memory allocation
- I think Alvaro has indicated that he wants to take care of that?

Indexes with Included Columns (was Covering + unique indexes)
- Don't think concerns about #columns on truncated tuples have been
  addressed.  Should imo be returned-with-feedback.


Needs-Review:

Better estimate merging for duplicate vars in clausesel.c
- has been submitted pretty late (2017-02-24) and discussion is ongoing
- I'm inclined to punt on this one to the next release, previous
  proposal along that line got some pushback

new plpgsql extra_checks
- Winner of the "most opaque CF title" award
- hasn't received a whole lot of review
- don't think we're even close to having design agreement

Generic type subscripting
- still some review back and forth
- probably should be punted


Any comments?

Greetings,

Andres




Re: [HACKERS] Remaining 2017-03 CF entries

2017-04-07 Thread Kevin Grittner
On Fri, Apr 7, 2017 at 1:37 PM, Andres Freund  wrote:

> I think it makes sense to go through those and see whether it's
> realistic to commit any of them.
>
> Ready for Committer:
>
> Add GUCs for predicate lock promotion thresholds:
> - claimed by Kevin, should be easy enough

I was planning on pushing this today.

-- 
Kevin Grittner




Re: [HACKERS] Remaining 2017-03 CF entries

2017-04-07 Thread Alvaro Herrera
Andres Freund wrote:

> Unique Joins
> - Tom's discussing things with David, not sure.

This one was already included-and-removed from 9.6; Tom had said he'd
give it priority during the current cycle, as I recall.  It seems unfair
that it's still waiting for review on the last day of pg10's last
commitfest.

> Write Amplification Reduction Method (WARM)
> - fair number of people don't think it's ready for v10.

I'm going over this one now with Pavan, with the intent of getting it in
committable shape.

I may be biased, but the claimed performance gains are so large that I
can't let it slip through without additional effort.

> BRIN optimize memory allocation
> - I think Alvaro has indicated that he wants to take care of that?

I am happy to see it move to pg11 to give priority to WARM.

-- 
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Remaining 2017-03 CF entries

2017-04-07 Thread Peter Geoghegan
On Fri, Apr 7, 2017 at 11:37 AM, Andres Freund  wrote:
> Write Amplification Reduction Method (WARM)
> - fair number of people don't think it's ready for v10.
> - can't move to next fest because it's waiting-on-author, which doesn't
>   allow that.  Doesn't strike me as a useful restriction.

I agree that that CF app restriction makes little sense.

> Indexes with Included Columns (was Covering + unique indexes)
> - Don't think concerns about #columns on truncated tuples have been
>   addressed.  Should imo be returned-with-feedback.

+1.

-- 
Peter Geoghegan




Re: [HACKERS] Partitioned tables vs GRANT

2017-04-07 Thread Tom Lane
Joe Conway  writes:
> Apparently INSERT and SELECT on the parent partitioned table skip normal
> acl checks on the partitions. Is that intended behavior?

Yes, this matches normal inheritance behavior.

regards, tom lane




Re: [HACKERS] Remaining 2017-03 CF entries

2017-04-07 Thread Tom Lane
Andres Freund  writes:
> I think it makes sense to go through those and see whether it's
> realistic to commit any of them.

> Unique Joins
> - Tom's discussing things with David, not sure.

Working on this one today.

> Generic type subscripting
> - still some review back and forth
> - probably should be punted

Yeah, I do not think we should hustle this in at the last minute.

regards, tom lane




Re: [HACKERS] Remaining 2017-03 CF entries

2017-04-07 Thread Andres Freund
On 2017-04-07 15:45:33 -0300, Alvaro Herrera wrote:
> > Write Amplification Reduction Method (WARM)
> > - fair number of people don't think it's ready for v10.
> 
> I'm going over this one now with Pavan, with the intent of getting it in
> committable shape.
> 
> I may be biased, but the claimed performance gains are so large that I
> can't let it slip through without additional effort.

I strongly object to pushing it into v10.  The potential negative impact
of a patch that touches the on-disk representation is also pretty
large.  I think we'll have to discuss among a few more committers
whether we're ok with pushing this one.

Andres




Re: [HACKERS] Performance issue with postgres9.6

2017-04-07 Thread Merlin Moncure
On Fri, Apr 7, 2017 at 12:54 PM, Tom Lane  wrote:
> Tomas Vondra  writes:
>> On 04/07/2017 06:31 PM, Merlin Moncure wrote:
>>> I think your math is off.  Looking at your attachments, planning time
>>> is 0.056ms, not 0.56ms.  This is in no way relevant to performance on
>>> the order of your measured TPS.   How are you measuring TPS?
>
>> Not sure where you got the 0.056ms from?
>
> I don't see that either, but:
>
>> What I see is this in the 9.3 explains:
>>   Total runtime: 0.246 ms
>> and this in those from 9.6:
>>   Planning time: 0.396 ms
>>   Execution time: 0.181 ms
>> That is roughly 0.25ms vs. 0.6ms (0.4+0.2), as reported by Prakash.
>
> 9.3's EXPLAIN did not measure planning time at all.  The "Total runtime"
> it reports corresponds to "Execution time" in the newer version.  So
> these numbers indicate that 9.6 is significantly *faster*, not slower,
> than 9.3, at least so far as execution of this one example is concerned.
>
> The OP may well be having some performance issue with 9.6, but the
> presented material completely fails to demonstrate it.

This smells like a problem with the test execution environment itself.
OP (if on linux), try:

pgbench -n -f <(echo "select * from subscriber where s_id = 100") -c 4 -T 10

...where pgbench is run from the database server (if pgbench is not in
the default path, you may have to qualify it).  This should give
apples to apples comparison, or at least rule out certain
environmental considerations like the network stack.

If your client is running on windows, one place to look is the TCP
stack.  In my experience tcp configuration issues are much more common
on windows.  Any reasonably modern hardware can handle thousands
and thousands of transactions per second for a simple indexed select.

merlin




Re: [HACKERS] Remaining 2017-03 CF entries

2017-04-07 Thread Tom Lane
Alvaro Herrera  writes:
> Andres Freund wrote:
>> Write Amplification Reduction Method (WARM)
>> - fair number of people don't think it's ready for v10.

> I'm going over this one now with Pavan, with the intent of getting it in
> committable shape.

I have to agree with Andres that this is not something to push in, on the
last day before feature freeze, when a number of people aren't comfortable
with it.  It looks much more like a feature to push at the start of a
development cycle.

regards, tom lane




Re: [HACKERS] Remaining 2017-03 CF entries

2017-04-07 Thread Robert Haas
On Fri, Apr 7, 2017 at 2:53 PM, Tom Lane  wrote:
> Alvaro Herrera  writes:
>> Andres Freund wrote:
>>> Write Amplification Reduction Method (WARM)
>>> - fair number of people don't think it's ready for v10.
>
>> I'm going over this one now with Pavan, with the intent of getting it in
>> committable shape.
>
> I have to agree with Andres that this is not something to push in, on the
> last day before feature freeze, when a number of people aren't comfortable
> with it.  It looks much more like a feature to push at the start of a
> development cycle.

I strongly agree.  Testing has found some noticeable regressions in
some cases as well, even if there were no outright bugs.  I'm frankly
astonished by the ongoing unwillingness to admit that the objections
(by multiple people) to this patch have any real merit.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Undefined psql variables

2017-04-07 Thread Fabien COELHO


Hello Pavel,


I wish I could have an explanation about why the :?varname (or some other
variant) syntax I suggested has a "namespace" issue.

The advantage that I see is that although it is obviously ugly, it is ugly
in the continuity of the various :["'?]varname syntaxes already offered, and
it allows us to get rid of "defined varname", which does not look like SQL. A
second advantage is that with the "defined" proposal

I don't think this argument is valid - \if doesn't look like SQL either.


Sure. I'm talking about the expressions after the "\if", which should be 
as close to SQL as possible, I think. At least that is what Tom required 
about the expression syntax in pgbench, and I tend to agree that psql 
should avoid mixing in another language if possible.



   \if defined var1 and defined var2 or defined var3 and sqlrt() >= ..

Would probably never work, as it cannot be embedded in another
expression, while it would work with

   \if :?var1 and :?var2 or :?var3 and ...


I don't see any reason why the first should not work and the second should work


Because of the mix of client-side and server-side stuff which needs to be 
interpreted. Let us consider:


  \if EXISTS (SELECT * FROM tbl WHERE id=3) AND defined foo

The "exists" is obviously executed server-side, but "defined foo" needs to 
be interpreted client-side, which means that some client-side parser 
would have to be able to catch it in the middle of everything else. This 
example also illustrates my "does not look like SQL" point, as the first 
part is clearly SQL and the part after AND is not.


With the second approach, ... "AND :?foo", the ":?foo" reference would be 
substituted directly by psql lexer and replaced on the fly by the answer, 
resulting in "AND TRUE" or "AND FALSE" depending, then the whole result 
(from EXISTS to TRUE/FALSE) could be interpreted server side to get an 
answer.


Basically, catching :?varname seems easier/safer than catching "defined 
varname". I think that Tom's intent is that the defined expressions could 
not be mixed with server-side SQL stuff, but I do not see why not; it is 
easy to imagine use cases where it would make sense.



I have a different opinion - the condition expression does not need to be
SQL. This language is oriented toward client-side operations. What is the
benefit of server-side expressions?


Because I think it is legitimate to be able to write things like:

  \if NOT pg_extension_is_loaded('units')
\echo 'this application requires the great units extension'
\q
  \endif

  \if (SELECT version FROM app_version) >= 2.0
\echo 'application already installed at 2.0'
\q
  \endif

--
Fabien.




Re: [HACKERS] postgres_fdw bug in 9.6

2017-04-07 Thread Robert Haas
On Fri, Apr 7, 2017 at 2:33 PM, Robert Haas  wrote:
> On Mon, Apr 3, 2017 at 2:12 AM, Etsuro Fujita
>  wrote:
>> On 2017/04/01 1:32, Jeff Janes wrote:
>>> On Thu, Mar 23, 2017 at 5:20 AM, Etsuro Fujita
>>> mailto:fujita.ets...@lab.ntt.co.jp>> wrote:
>>> Done.  Attached is a new version of the patch.
>>> Is the fix for 9.6.3 going to be just a back port of this, or will it
>>> look different?
>>
>> +1 for backporting; although that requires that GetForeignJoinPaths be
>> updated so that the FDW uses a new function to create an alternative local
>> join path (ie, CreateLocalJoinPath), that would make maintenance of the code
>> easy.
>
> Well, the problem here is that this breaks ABI compatibility.  If we
> applied this to 9.6, and somebody tried to use a previously-compiled
> FDW .so against a new server version, it would fail after the upgrade,
> because the new server wouldn't have GetExistingLocalJoinPath and also
> possibly because of the change to the structure of JoinPathExtraData.
> Maybe there's no better alternative, and maybe nothing outside of
> postgres_fdw is using this stuff anyway, but it seems like a concern.
>
> Also, the CommitFest entry for this seems to be a bit sketchy.  It
> claims that Tom Lane is a co-author of this patch which, AFAICS, is
> not the case.  It is listed under Miscellaneous rather than "Bug
> Fixes", which seems like a surprising decision.  And it uses a subject
> line which is neither very clear nor the same as the (also not
> particularly helpful) subject line of the email thread.

Looking at the code itself, I find the changes to joinpath.c rather alarming.

+/* Save hashclauses for possible use by the FDW */
+if (extra->consider_foreignjoin && hashclauses)
+extra->hashclauses = hashclauses;

A minor consideration is that this is fairly far away from where
hashclauses actually gets populated, so if someone later added an
early "return" statement to this function -- after creating some paths
-- it could subtly break join pushdown.  But I also think there's no
real need for this.  The loop that extracts hash clauses is simple
enough that we could just refactor it into a separate function, or if
necessary duplicate the logic.

+/* Save first mergejoin data for possible use by the FDW */
+if (extra->consider_foreignjoin && outerkeys == all_pathkeys)
+{
+extra->mergeclauses = cur_mergeclauses;
+extra->outersortkeys = outerkeys;
+extra->innersortkeys = innerkeys;
+}

Similarly here.  select_outer_pathkeys_for_merge(),
find_mergeclauses_for_pathkeys(), and make_inner_pathkeys_for_merge()
are all extern, so there's nothing to keep CreateLocalJoinPath() from
just doing that work itself instead of getting help from joinpath,
which I guess seems better to me.  I think it's just better if we
don't burden joinpath.c with keeping little bits of data around that
CreateLocalJoinPath() can easily get for itself.

There appears to be no regression test covering the case where we get
a Merge Full Join with a non-empty list of mergeclauses.  Hash Full
Join is tested, as is Merge Full Join without merge clauses, but
there's no test for Merge Full Join with mergeclauses, and since that
is a separate code path it seems like it should be tested.

-/*
- * If either inner or outer path is a ForeignPath corresponding to a
- * pushed down join, replace it with the fdw_outerpath, so that we
- * maintain path for EPQ checks built entirely of local join
- * strategies.
- */
-if (IsA(joinpath->outerjoinpath, ForeignPath))
-{
-ForeignPath *foreign_path;
-
-foreign_path = (ForeignPath *) joinpath->outerjoinpath;
-if (IS_JOIN_REL(foreign_path->path.parent))
-joinpath->outerjoinpath = foreign_path->fdw_outerpath;
-}
-
-if (IsA(joinpath->innerjoinpath, ForeignPath))
-{
-ForeignPath *foreign_path;
-
-foreign_path = (ForeignPath *) joinpath->innerjoinpath;
-if (IS_JOIN_REL(foreign_path->path.parent))
-joinpath->innerjoinpath = foreign_path->fdw_outerpath;
-}

This logic is removed and not replaced with anything, but I don't see
what keeps this code...

+Path   *outer_path = outerrel->cheapest_total_path;
+Path   *inner_path = innerrel->cheapest_total_path;

...from picking a ForeignPath?

There's probably more to think about here, but those are my question
on an initial read-through.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Remaining 2017-03 CF entries

2017-04-07 Thread Alvaro Herrera
Peter Geoghegan wrote:
> On Fri, Apr 7, 2017 at 11:37 AM, Andres Freund  wrote:
> > Write Amplification Reduction Method (WARM)
> > - fair number of people don't think it's ready for v10.

Given the number of votes against putting this on pg10, I am going to
back off from this patch now, with an eye towards putting it in pg11 as
soon as the tree opens.  Either I or Pavan are going to post another
version of this patch series, within the next couple of weeks, so that
others can base their testing, review and suggestions.


> > - can't move to next fest because it's waiting-on-author, which doesn't
> >   allow that.  Doesn't strike me as a useful restriction.
> 
> I agree that that CF app restriction makes little sense.

What the restriction means is that if a patch is in waiting-on-author,
the proper "close" action is to return-with-feedback.  There is no point
in moving the patch to the next commitfest if there is no further patch
version.

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] Remaining 2017-03 CF entries

2017-04-07 Thread Andres Freund
On 2017-04-07 16:28:03 -0300, Alvaro Herrera wrote:
> Peter Geoghegan wrote:
> > > - can't move to next fest because it's waiting-on-author, which doesn't
> > >   allow that.  Doesn't strike me as a useful restriction.
> > 
> > I agree that that CF app restriction makes little sense.
> 
> What the restriction means is that if a patch is in waiting-on-author,
> the proper "close" action is to return-with-feedback.  There is no point
> in moving the patch to the next commitfest if there is no further patch
> version.

That's true if the patch has been in that state for a while, but if you
find some relatively minor issues, and then move it soon after, it seems
to make sense to keep it open in the next CF.

- Andres




Re: [HACKERS] Remaining 2017-03 CF entries

2017-04-07 Thread Robert Haas
On Fri, Apr 7, 2017 at 3:30 PM, Andres Freund  wrote:
> On 2017-04-07 16:28:03 -0300, Alvaro Herrera wrote:
>> Peter Geoghegan wrote:
>> > > - can't move to next fest because it's waiting-on-author, which doesn't
>> > >   allow that.  Doesn't strike me as a useful restriction.
>> >
>> > I agree that that CF app restriction makes little sense.
>>
>> What the restriction means is that if a patch is in waiting-on-author,
>> the proper "close" action is to return-with-feedback.  There is no point
>> in moving the patch to the next commitfest if there is no further patch
>> version.
>
> That's true if the patch has been in that state for a while, but if you
> find some relatively minor issues, and then move it soon after, it seems
> to make sense to keep it open in the next CF.

In an ideal world, we wouldn't do that.  Of course, we do not live in
an ideal world...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] [sqlsmith] Unpinning error in parallel worker

2017-04-07 Thread Robert Haas
On Wed, Apr 5, 2017 at 10:18 AM, Robert Haas  wrote:
>>> Ugh, OK.  I committed this, but I think this whole file needs a visit
>>> from the message style police.
>>
>> Like this?
>
> I was thinking of maybe not creating two separate (translatable)
> messages, and just using "could not attach to dynamic shared area" for
> both.

Done that way.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] Remaining 2017-03 CF entries

2017-04-07 Thread Peter Geoghegan
On Fri, Apr 7, 2017 at 12:28 PM, Alvaro Herrera
 wrote:
> Peter Geoghegan wrote:
>> On Fri, Apr 7, 2017 at 11:37 AM, Andres Freund  wrote:
>> > Write Amplification Reduction Method (WARM)
>> > - fair number of people don't think it's ready for v10.
>
> Given the number of votes against putting this on pg10, I am going to
> back off from this patch now, with an eye towards putting it in pg11 as
> soon as the tree opens.  Either I or Pavan are going to post another
> version of this patch series, within the next couple of weeks, so that
> others can base their testing, review and suggestions.

My offer to work with you on amcheck verification of WARM invariants
remains open. If nothing else, structuring things so that verification
is possible may clarify your design. Formalizing the preconditions,
postconditions, and legal states for on-disk structures might just be
a useful exercise, even if verification never actually finds a
problem.

I anticipate that amcheck verification will become my main focus for
Postgres 11, in any case.

-- 
Peter Geoghegan

VMware vCenter Server
https://www.vmware.com/




Re: [HACKERS] Undefined psql variables

2017-04-07 Thread Pavel Stehule
2017-04-07 21:04 GMT+02:00 Fabien COELHO :

>
> Hello Pavel,
>
> I wish I could have an explanation about why the :?varname (or some other
>>> variant) syntax I suggested has a "namespace" issue.
>>>
>>> The advantage that I see is that although it is obviously ugly, it is
>>> ugly
>>> in the continuity of the various :["'?]varname syntaxes already offered
>>> and
>>> it allows to get rid of "defined varname" which does not look like SQL. A
>>> second advantage is that with the "defined" proposal
>>>
>>
>> I don't think so this argument is valid - \if doesn't look like SQL too.
>>
>
> Sure. I'm talking about the expressions after the "\if" which should be as
> close as SQL, I think. At least that is what Tom required about the
> expression syntax in pgbench, and I tend to agree that psql should avoid to
> mix in another language if possible.
>
>\if defined var1 and defined var2 or defined var3 and sqlrt() >= ..
>>>
>>> Would probably never work work, as it cannot be embedded in another
>>> expression, while it would work with
>>>
>>>\if :?var1 and :?var2 or :?var3 and ...
>>>
>>> I don't see any reason why first should not work and second should to
>> work
>>
>
> Because of the mix of client-side and server-side stuff which needs to be
> interpreted. Let us consider:
>
>   \if EXISTS (SELECT * FROM tbl WHERE id=3) AND defined foo
>
> The "exists" is obviously executed server-side, but "defined foo" needs to
> be interpreted client-side, and it means that some parser client side would
> have been able to catch it in the middle of everything else. This example
> also illustrate my "does not look like SQL" point, as the first part is
> clearly SQL and the part after AND is not.
>
> With the second approach, ... "AND :?foo", the ":?foo" reference would be
> substituted directly by psql lexer and replaced on the fly by the answer,
> resulting in "AND TRUE" or "AND FALSE" depending, then the whole result
> (from EXISTS to TRUE/FALSE) could be interpreted server side to get an
> answer.
>
> Basically, catching :?varname seems easier/safer than catching "defined
> varname". I think that Tom's intent is that the defined expressions could
> not be mixed with SQL server side stuff, but I do not see why not, it is
> easy to imagine some use case where it would make sense.
>
> I have a different opinion - the condition expression should not be SQL
>> necessary. This language is oriented on client side operations. What is
>> benefit from server side expression?
>>
>
> Because I think it is legitimate to be able to write things like:
>
>   \if NOT pg_extension_is_loaded('units')
> \echo 'this application requires the great units extension'
> \q
>   \endif
>
>   \if (SELECT version FROM app_version) >= 2.0
> \echo 'application already installed at 2.0'
> \q
>   \endif
>
>
Your proposal disallows client-side expressions. I agree that it is not
possible to mix server-side and client-side expressions. But I am skeptical
that the benefit of server-side expressions outweighs the loss of client-side
expressions. If we disallow server-side expressions, then your examples are
only two lines longer, but the implementation can be much simpler.

SELECT version FROM app_version
\gset
\if :version >= 2.0
 ...

Still, I don't think a server-side expression in \if is a good idea.

Regards

Pavel




> --
> Fabien.
>


Re: [HACKERS] Vacuum: allow usage of more than 1GB of work mem

2017-04-07 Thread Andres Freund
Hi,

I've *not* read the history of this thread.  So I really might be
missing some context.


> From e37d29c26210a0f23cd2e9fe18a264312fecd383 Mon Sep 17 00:00:00 2001
> From: Claudio Freire 
> Date: Mon, 12 Sep 2016 23:36:42 -0300
> Subject: [PATCH] Vacuum: allow using more than 1GB work mem
> 
> Turn the dead_tuples array into a structure composed of several
> exponentially bigger arrays, to enable usage of more than 1GB
> of work mem during vacuum and thus reduce the number of full
> index scans necessary to remove all dead tids when the memory is
> available.

>   * We are willing to use at most maintenance_work_mem (or perhaps
>   * autovacuum_work_mem) memory space to keep track of dead tuples.  We
> - * initially allocate an array of TIDs of that size, with an upper limit that
> + * initially allocate an array of TIDs of 128MB, or an upper limit that
>   * depends on table size (this limit ensures we don't allocate a huge area
> - * uselessly for vacuuming small tables).  If the array threatens to overflow,
> - * we suspend the heap scan phase and perform a pass of index cleanup and page
> - * compaction, then resume the heap scan with an empty TID array.
> + * uselessly for vacuuming small tables). Additional arrays of increasingly
> + * large sizes are allocated as they become necessary.
> + *
> + * The TID array is thus represented as a list of multiple segments of
> + * varying size, beginning with the initial size of up to 128MB, and growing
> + * exponentially until the whole budget of (autovacuum_)maintenance_work_mem
> + * is used up.

When the chunk size is 128MB, I'm a bit unconvinced that using
exponential growth is worth it. The allocator overhead can't be
meaningful in comparison to collecting 128MB of dead tuples, the potential
waste is pretty big, and it increases memory fragmentation.


> + * Lookup in that structure proceeds sequentially in the list of segments,
> + * and with a binary search within each segment. Since segment's size grows
> + * exponentially, this retains O(N log N) lookup complexity.

N log N is a horrible lookup complexity.  That's the complexity of
*sorting* an entire array.  I think you might be trying to argue that
it's log(N) * log(N)? One log(N) for the exponentially growing list of
segments, one for the binary search within a segment?

Afaics you could quite easily make it O(2 log(N)) by simply also doing
binary search over the segments.  Might not be worth it due to the small
constant involved normally.
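The two-level binary search suggested here can be sketched as follows. This is a Python illustration of the idea, not the patch's C code; it keys the outer search on each segment's largest stored TID (the `last_dead_tuple` field discussed below), which is an assumption about how one might implement it:

```python
import bisect

class DeadTuplesSegment:
    def __init__(self, tids):
        self.tids = sorted(tids)              # dt_tids, kept in TID order
        self.last_dead_tuple = self.tids[-1]  # upper bound for this segment

def tid_is_dead(segments, tid):
    """Two O(log N) steps instead of a linear scan over segments."""
    # Step 1: binary search over segments by their upper-bound TID.
    bounds = [seg.last_dead_tuple for seg in segments]
    i = bisect.bisect_left(bounds, tid)
    if i == len(segments):
        return False                          # larger than every stored TID
    # Step 2: binary search within the selected segment.
    seg = segments[i]
    j = bisect.bisect_left(seg.tids, tid)
    return j < len(seg.tids) and seg.tids[j] == tid

segments = [DeadTuplesSegment(range(0, 100)),
            DeadTuplesSegment(range(100, 300)),
            DeadTuplesSegment(range(300, 700))]
print(tid_is_dead(segments, 250))   # True
print(tid_is_dead(segments, 701))   # False
```

Since the number of segments is small (it grows logarithmically with memory use), the constant saved over a linear scan of segments may indeed not matter much, as noted above.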


> + * If the array threatens to overflow, we suspend the heap scan phase and
> + * perform a pass of index cleanup and page compaction, then resume the heap
> + * scan with an array of logically empty but already preallocated TID segments
> + * to be refilled with more dead tuple TIDs.

Hm, it's not really the array that overflows, it's m_w_m that'd be
exceeded, right?


>  /*
> + * Minimum (starting) size of the dead_tuples array segments. Will allocate
> + * space for 128MB worth of tid pointers in the first segment, further segments
> + * will grow in size exponentially. Don't make it too small or the segment list
> + * will grow bigger than the sweetspot for search efficiency on big vacuums.
> + */
> +#define LAZY_MIN_TUPLES  Max(MaxHeapTuplesPerPage, (128<<20) / sizeof(ItemPointerData))

That's not really the minimum, no? s/MIN/INIT/?


> +typedef struct DeadTuplesSegment
> +{
> + int num_dead_tuples;    /* # of entries in the segment */
> + int max_dead_tuples;    /* # of entries allocated in the segment */
> + ItemPointerData last_dead_tuple;    /* Copy of the last dead tuple (unset
> +                                      * until the segment is fully
> +                                      * populated) */
> + unsigned short padding;
> + ItemPointer dt_tids;/* Array of dead tuples */
> +}DeadTuplesSegment;

Whenever padding is needed, it should have an explanatory comment.  It's
certainly not obvious to me why it's needed here.


> @@ -1598,6 +1657,11 @@ lazy_vacuum_index(Relation indrel,
>   ivinfo.num_heap_tuples = vacrelstats->old_rel_tuples;
>   ivinfo.strategy = vac_strategy;
>  
> + /* Finalize the current segment by setting its upper bound dead tuple */
> + seg = DeadTuplesCurrentSegment(vacrelstats);
> + if (seg->num_dead_tuples > 0)
> + seg->last_dead_tuple = seg->dt_tids[seg->num_dead_tuples - 1];

Why don't we just maintain this here, for all of the segments?  Seems a
bit easier.


> @@ -1973,7 +2037,8 @@ count_nondeletable_pages(Relation onerel, LVRelStats *vacrelstats)
>  static void
>  lazy_space_alloc(LVRelStats *vacrelstats, BlockNumber relblocks)
>  {
> - longmaxtuples;
> + longmaxtuples,
> + mintuples;
>   int vac_work_mem = IsAutoV

Re: [HACKERS] Remaining 2017-03 CF entries

2017-04-07 Thread Alvaro Herrera
Peter Geoghegan wrote:

> My offer to work with you on amcheck verification of WARM invariants
> remains open. If nothing else, structuring things so that verification
> is possible may clarify your design. Formalizing the preconditions,
> postconditions, and legal states for on-disk structures might just be
> a useful exercise, even if verification never actually finds a
> problem.

Agreed.  Thanks much.

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] BRIN desummarization writes junk WAL records

2017-04-07 Thread Alvaro Herrera
Tom Lane wrote:

> The proximate cause of the exception seems to be that
> brinSetHeapBlockItemptr is being passed pagesPerRange = 0,
> which is problematic since HEAPBLK_TO_REVMAP_INDEX tries to
> divide by that.  Looking one level down, the bogus value
> seems to be coming out of an xl_brin_desummarize WAL record:
> 
> (gdb) f 1
> #1  0x00478cdc in brin_xlog_desummarize_page (record=0x2403ac8)
> at brin_xlog.c:274
> 274 brinSetHeapBlockItemptr(buffer, xlrec->pagesPerRange, 
> xlrec->heapBlk, iptr);
> (gdb) p *xlrec
> $1 = {pagesPerRange = 0, heapBlk = 0, regOffset = 1}
> 
> This is, perhaps, not unrelated to the fact that
> brinRevmapDesummarizeRange doesn't seem to be bothering to fill
> that field of the record.

Absolutely.

> BTW, is it actually sensible that xl_brin_desummarize's heapBlk
> is declared OffsetNumber and not BlockNumber?  If there's a reason
> why that's correct, the field name seems damn misleading.

Nah, just an oversight (against which the compiler doesn't protect).

Fixed both problems.

-- 
Álvaro Herrerahttps://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [HACKERS] recent deadlock regression test failures

2017-04-07 Thread Thomas Munro
On Sat, Apr 8, 2017 at 6:35 AM, Kevin Grittner  wrote:
> On Fri, Apr 7, 2017 at 12:52 PM, Andres Freund  wrote:
>
>> I'd rather fix the issue, than remove the tests entirely.  Seems quite
>> possible to handle blocking on Safesnapshot in a similar manner as 
>> pg_blocking_pids?
>
> I'll see what I can figure out.

Ouch.  These are the other ways I thought of to achieve this:

https://www.postgresql.org/message-id/CAEepm%3D1MR4Ug9YsLtOS4Q9KAU9aku0pZS4RhBN%3D0LY3pJ49Ksg%40mail.gmail.com

I'd be happy to write one of those, but it may take a day as I have
some other commitments.

-- 
Thomas Munro
http://www.enterprisedb.com




[HACKERS] monitoring.sgml missing tag

2017-04-07 Thread Erik Rijkers

monitoring.sgml has one  tag missing:

--- doc/src/sgml/monitoring.sgml.orig	2017-04-07 22:37:55.388708334 +0200
+++ doc/src/sgml/monitoring.sgml	2017-04-07 22:38:16.582047695 +0200
@@ -1275,6 +1275,7 @@
 
  ProcArrayGroupUpdate
  Waiting for group leader to clear transaction id at transaction end.
+
 
  SafeSnapshot
  Waiting for a snapshot for a READ ONLY DEFERRABLE transaction.



  1   2   >