[GENERAL] performance problems with bulk inserts/updates on tsrange with gist-based exclude constraints

2016-09-16 Thread Chris Withers

Hi All,

I have quite a few tables that follow a pattern like this:

      Table "public.my_model"
 Column |       Type        | Modifiers
--------+-------------------+-----------
 period | tsrange           | not null
 key    | character varying | not null
 value  | integer           |
Indexes:
    "my_model_pkey" PRIMARY KEY, btree (period, key)
    "my_model_period_key_excl" EXCLUDE USING gist (period WITH &&, key WITH =)
Check constraints:
    "my_model_period_check" CHECK (period <> 'empty'::tsrange)

So, a primary key of a period column and one or more other columns 
(usually int or string) and an exclude constraint to prevent overlaps, 
and a check constraint to prevent empty ranges.
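For concreteness, a minimal sketch of the DDL behind this pattern, reconstructed from the \d output above (note that using "key WITH =" on a varchar column in a GiST exclusion constraint requires the btree_gist extension):

```sql
-- Hypothetical reconstruction of the table pattern described above.
CREATE EXTENSION IF NOT EXISTS btree_gist;  -- provides = operator support in GiST

CREATE TABLE my_model (
    period tsrange NOT NULL,
    key    varchar NOT NULL,
    value  integer,
    PRIMARY KEY (period, key),
    CONSTRAINT my_model_period_key_excl
        EXCLUDE USING gist (period WITH &&, key WITH =),
    CONSTRAINT my_model_period_check
        CHECK (period <> 'empty'::tsrange)
);
```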


However, I'm hitting performance problems on moderately sized bulk inserts and 
updates: ~700k rows take around 13 minutes. Profiling my Python 
code suggests that most of the time is being spent in Postgres (9.4 in 
this case...)


What can I do to speed things up? Is there a different type of index I 
can use to achieve the same exclude constraint? Is there something I can 
do to have the index changes only done on the commit of the bulk batches?


cheers,

Chris


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] performance problems with bulk inserts/updates on tsrange with gist-based exclude constraints

2016-09-16 Thread John R Pierce

On 9/16/2016 2:01 AM, Chris Withers wrote:

Hi All,

I have quite a few tables that follow a pattern like this:

      Table "public.my_model"
 Column |       Type        | Modifiers
--------+-------------------+-----------
 period | tsrange           | not null
 key    | character varying | not null
 value  | integer           |
Indexes:
    "my_model_pkey" PRIMARY KEY, btree (period, key)
    "my_model_period_key_excl" EXCLUDE USING gist (period WITH &&, key WITH =)
Check constraints:
    "my_model_period_check" CHECK (period <> 'empty'::tsrange)

So, a primary key of a period column and one or more other columns 
(usually int or string) and an exclude constraint to prevent overlaps, 
and a check constraint to prevent empty ranges.


However, I'm hitting performance problems on moderate bulk inserts and 
updates, with ~700k rows taking around 13 minutes. Profiling my python 
code suggests that most of the time is being taken by Postgres (9.4 in 
this case...)


What can I do to speed things up? Is there a different type of index I 
can use to achieve the same exclude constraint? Is there something I 
can do to have the index changes only done on the commit of the bulk 
batches? 


if (period, key) is unique, by virtue of being the primary key, then 
what's the point of the exclusion?


I'm curious: how fast do your inserts/updates run if you remove the key 
exclusion and check constraint?  tsvector operations are a lot more 
complicated than simple matches in indexing
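One way to run that experiment (a sketch; the constraint names are taken from the \d output above, and note that re-adding the exclusion constraint rebuilds its GiST index and revalidates every existing row):

```sql
ALTER TABLE my_model DROP CONSTRAINT my_model_period_key_excl;
ALTER TABLE my_model DROP CONSTRAINT my_model_period_check;

-- ... run and time the bulk insert/update here ...

ALTER TABLE my_model ADD CONSTRAINT my_model_period_check
    CHECK (period <> 'empty'::tsrange);
ALTER TABLE my_model ADD CONSTRAINT my_model_period_key_excl
    EXCLUDE USING gist (period WITH &&, key WITH =);
```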





--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] performance problems with bulk inserts/updates on tsrange with gist-based exclude constraints

2016-09-16 Thread John R Pierce

On 9/16/2016 2:12 AM, John R Pierce wrote:

 Column |       Type        | Modifiers
--------+-------------------+-----------
 period | tsrange           | not null


wait, what is a tsrange?   the standard textsearch data types in 
postgres are tsvector and tsquery...



--
john r pierce, recycling bits in santa cruz





Re: [GENERAL] performance problems with bulk inserts/updates on tsrange with gist-based exclude constraints

2016-09-16 Thread John R Pierce

On 9/16/2016 2:23 AM, John R Pierce wrote:


wait, what is a tsrange?   the standard textsearch data types in 
postgres are tsvector and tsquery,


never mind,  I should have known: it's a timestamp range.   ...


when you do updates, are you changing any of the indexed fields, or just 
"value" ?







Re: [GENERAL] Unable to create oracle_fdw (foreign data wrapper) extension

2016-09-16 Thread Albe Laurenz
Arun Rangarajan wrote:
> But when I try to create the extension, I get the following error:
>
> postgres=# create extension oracle_fdw;
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.

> t=2016-09-15 16:16:08 PDT d= h= p=25327 a=LOG:  0: server process (PID 
> 20397) was terminated by signal 11: Segmentation fault

Well, as I told you, get a stack trace with debugging symbols.

Yours,
Laurenz Albe



Re: [GENERAL] performance problems with bulk inserts/updates on tsrange with gist-based exclude constraints

2016-09-16 Thread Chris Withers

On 16/09/2016 10:26, John R Pierce wrote:

On 9/16/2016 2:23 AM, John R Pierce wrote:


wait, what is a tsrange?   the standard textsearch data types in
postgres are tsvector and tsquery,


never mind,  I should have known, its a timestamp range.   ...


when you do updates, are you changing any of the indexed fields, or
just "value" ?
Yeah, it's a temporal table, so "updates" involve modifying the period 
column for a row to set its end ts, and then inserting a new row with a 
start ts running on from that.


Of course, the adds are just inserting new rows.
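That update pattern might look roughly like this (a hypothetical sketch: 'some-key' and the value 42 are placeholders, and upper_inf() is assumed to identify the currently open row):

```sql
BEGIN;
-- close the currently open period for this key
UPDATE my_model
   SET period = tsrange(lower(period), now()::timestamp)
 WHERE key = 'some-key'
   AND upper_inf(period);

-- open a new row running on from the same instant
INSERT INTO my_model (period, key, value)
VALUES (tsrange(now()::timestamp, NULL), 'some-key', 42);
COMMIT;
```

Since tsrange bounds default to '[)', the closed row and the new open-ended row meet without overlapping, so the exclusion constraint is satisfied.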

cheers,

Chris




Re: [GENERAL] performance problems with bulk inserts/updates on tsrange with gist-based exclude constraints

2016-09-16 Thread John R Pierce

On 9/16/2016 3:46 AM, Chris Withers wrote:


when you do updates, are you changing any of the indexed fields, or
just "value" ?
Yeah, it's a temporal table, so "updates" involve modifying the period 
column for a row to set its end ts, and then inserting a new row with 
a start ts running on from that. 


that's expensive, as it has to reindex that row.   and range indexes are 
more expensive than timestamp indexes


modifying the primary key is kind of a violation of one of the basic 
rules of relational databases, as it means the row can't be referenced by 
another table.


I expect the expensive one is the constraint that ensures no periods 
overlap for the given key.    I'm not sure how that can be done short of 
a full scan for each update/insert.   it might actually perform better 
if you write the index with the key first, as presumably the key is 
invariant?




--
john r pierce, recycling bits in santa cruz



Re: [GENERAL] performance problems with bulk inserts/updates on tsrange with gist-based exclude constraints

2016-09-16 Thread Chris Withers

On 16/09/2016 12:00, John R Pierce wrote:

On 9/16/2016 3:46 AM, Chris Withers wrote:


when you do updates, are you changing any of the indexed fields, or
just "value" ?

Yeah, it's a temporal table, so "updates" involve modifying the period
column for a row to set its end ts, and then inserting a new row with
a start ts running on from that.


that's expensive, as it has to reindex that row.   and range indexes are
more expensive than timestamp indexes

modifying the primary key is kind of a violation of one of the basic
rules of relational databases, as it means the row can't be referenced by
another table.


Right, but these rows have no natural primary key. Would it help if I 
just added an auto-incrementing integer key? Would that make a positive 
difference or would it just be a wasted column?



I expect the expensive one is the constraint that ensures no periods
overlap for the given key.    I'm not sure how that can be done short of
a full scan for each update/insert.


Indeed, I wonder if making the constraint deferrable might help for the 
bulk case?
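Exclusion constraints can indeed be declared deferrable (a sketch, reusing the constraint name from above; note that deferring only postpones the overlap check to commit time, it does not avoid the per-row GiST index maintenance):

```sql
ALTER TABLE my_model DROP CONSTRAINT my_model_period_key_excl;
ALTER TABLE my_model ADD CONSTRAINT my_model_period_key_excl
    EXCLUDE USING gist (period WITH &&, key WITH =)
    DEFERRABLE INITIALLY IMMEDIATE;

BEGIN;
SET CONSTRAINTS my_model_period_key_excl DEFERRED;
-- ... bulk inserts/updates ...
COMMIT;  -- the overlap check for all changed rows happens here
```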



it might actually perform better
if you write the index with the key first as presumably the key is
invariant ?


You mean:

PRIMARY KEY, btree (key1, key2, period)

as opposed to

PRIMARY KEY, btree (period, key)

Interesting, I'd assumed postgres would optimise that under the covers...
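For the exclusion constraint itself, the column order can be swapped the same way (a sketch; Postgres does not reorder the columns automatically, and the leading column largely determines how the GiST tree partitions entries):

```sql
ALTER TABLE my_model DROP CONSTRAINT my_model_period_key_excl;
ALTER TABLE my_model ADD CONSTRAINT my_model_period_key_excl
    EXCLUDE USING gist (key WITH =, period WITH &&);
```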

Chris




Re: [GENERAL] performance problems with bulk inserts/updates on tsrange with gist-based exclude constraints

2016-09-16 Thread Igor Neyman

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Chris Withers
Sent: Friday, September 16, 2016 6:47 AM
To: John R Pierce ; pgsql-general@postgresql.org
Subject: Re: [GENERAL] performance problems with bulk inserts/updates on 
tsrange with gist-based exclude constraints

On 16/09/2016 10:26, John R Pierce wrote:
> On 9/16/2016 2:23 AM, John R Pierce wrote:
>>
>> wait, what is a tsrange?   the standard textsearch data types in
>> postgres are tsvector and tsquery,
>
> never mind,  I should have known, its a timestamp range.   ...
>
>
> when you do updates, are you changing any of the indexed fields, or 
> just "value" ?
Yeah, it's a temporal table, so "updates" involve modifying the period column 
for a row to set its end ts, and then inserting a new row with a start ts 
running on from that.

Of course, the adds are just inserting new rows.

cheers,

Chris



So, what is the value for "end ts" when the record is inserted (the range 
has just started)?

Regards,
Igor Neyman



Re: [GENERAL] Unable to create oracle_fdw (foreign data wrapper) extension

2016-09-16 Thread Arun Rangarajan
Hi Laurenz,

Thanks for your reply.

Sorry for the double posting here and StackOverflow
http://stackoverflow.com/questions/39518417/unable-to-create-oracle-fdw-extension-on-postgres
.

I will update the details on StackOverflow since formatting and editing are
easier there.

On Fri, Sep 16, 2016 at 3:14 AM, Albe Laurenz wrote:

> Arun Rangarajan wrote:
> > But when I try to create the extension, I get the following error:
> >
> > postgres=# create extension oracle_fdw;
> > server closed the connection unexpectedly
> > This probably means the server terminated abnormally
> > before or while processing the request.
> > The connection to the server was lost. Attempting reset: Failed.
>
> > t=2016-09-15 16:16:08 PDT d= h= p=25327 a=LOG:  0: server process
> (PID 20397) was terminated by signal 11: Segmentation fault
>
> Well, as I told you, get a stack trace with debugging symbols.
>
> Yours,
> Laurenz Albe
>


Re: [GENERAL] performance problems with bulk inserts/updates on tsrange with gist-based exclude constraints

2016-09-16 Thread Chris Withers

On 16/09/2016 14:54, Igor Neyman wrote:


-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Chris Withers
Sent: Friday, September 16, 2016 6:47 AM
To: John R Pierce ; pgsql-general@postgresql.org
Subject: Re: [GENERAL] performance problems with bulk inserts/updates on 
tsrange with gist-based exclude constraints

On 16/09/2016 10:26, John R Pierce wrote:

On 9/16/2016 2:23 AM, John R Pierce wrote:


wait, what is a tsrange?   the standard textsearch data types in
postgres are tsvector and tsquery,


never mind,  I should have known, its a timestamp range.   ...


when you do updates, are you changing any of the indexed fields, or
just "value" ?

Yeah, it's a temporal table, so "updates" involve modifying the period column 
for a row to set its end ts, and then inserting a new row with a start ts running on from 
that.

Of course, the adds are just inserting new rows.

So, what is the value for "end ts", when the record is inserted (the range just 
started)?


It's open ended, so the period is [start_ts, )

Chris




Re: [GENERAL] performance problems with bulk inserts/updates on tsrange with gist-based exclude constraints

2016-09-16 Thread Tom Lane
Chris Withers  writes:
> On 16/09/2016 14:54, Igor Neyman wrote:
>> So, what is the value for "end ts", when the record is inserted (the range 
>> just started)?

> It's open ended, so the period is [start_ts, )

I've not looked at the GiST range opclass, but I would not be surprised if
having lots of those is pretty destructive to the index's ability to be
selective about && searches.

regards, tom lane






[GENERAL] PostgreSQL installation failure

2016-09-16 Thread Thomas . Deboben . ext
Hello,

I need some help with an installation issue with PostgreSQL 9.5.0.1.
I have written a WiX bootstrapper to install PostgreSQL for our 
application.
This has worked for different versions, but now we sometimes get trouble 
with the installer, as some SQL scripts can't be executed during install.
From the logs I see an error code 2 returned by psql.exe.

This error is found in the bitrock_installer.log from your original 
installer 
---
[14:58:49] Installing the adminpack module in the postgres database...
Executing C:\Program Files\PostgreSQL\9.5\bin\psql.exe -c "CREATE 
EXTENSION adminpack" -d postgres
Script exit code: 2
---

and as well in our log where we try to launch some more sql scripts to 
create a user, the database and the tables.
---
2016.08.02 14:59:07, Info, LaunchProcess(), Launch 'C:\Program 
Files\PostgreSQL\9.5\bin\psql.exe' with ' -U postgres --file "C:\Program 
Files\PostgreSQL\9.5\Contest\CreateContestUser.sql"'.
2016.08.02 14:59:09, Info, LaunchProcess(), Standard output: ''
2016.08.02 14:59:09, Info, LaunchProcess(), Standard error: 'psql: 
could not connect to server: Connection refused (0x274D/10061)
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Connection refused (0x274D/10061)
Is the server running on host "localhost" (127.0.0.1) and 
accepting
TCP/IP connections on port 5432?
'
2016.08.02 14:59:09, Info, LaunchProcess(), 'C:\Program 
Files\PostgreSQL\9.5\bin\psql.exe' returned: '2'.
2016.08.02 14:59:09, Error   , InstallContestDB(), Contestdb not restored 
-> create Contestdb and tables
2016.08.02 14:59:09, Info, LaunchProcess(), Launch 'C:\Program 
Files\PostgreSQL\9.5\bin\psql.exe' with ' -U postgres --file "C:\Program 
Files\PostgreSQL\9.5\Contest\CreateContestDB.sql"'.
2016.08.02 14:59:11, Info, LaunchProcess(), Standard output: ''
2016.08.02 14:59:11, Info, LaunchProcess(), Standard error: 'psql: 
could not connect to server: Connection refused (0x274D/10061)
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Connection refused (0x274D/10061)
Is the server running on host "localhost" (127.0.0.1) and 
accepting
TCP/IP connections on port 5432?
'
2016.08.02 14:59:11, Info, LaunchProcess(), 'C:\Program 
Files\PostgreSQL\9.5\bin\psql.exe' returned: '2'.
2016.08.02 14:59:11, Info, LaunchProcess(), Launch 'C:\Program 
Files\PostgreSQL\9.5\bin\psql.exe' with ' -U postgres --dbname Contestdb 
--file "C:\Program 
Files\PostgreSQL\9.5\Contest\CreateContestDBTables.sql"'.
2016.08.02 14:59:13, Info, LaunchProcess(), Standard output: ''
2016.08.02 14:59:13, Info, LaunchProcess(), Standard error: 'psql: 
could not connect to server: Connection refused (0x274D/10061)
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?
could not connect to server: Connection refused (0x274D/10061)
Is the server running on host "localhost" (127.0.0.1) and 
accepting
TCP/IP connections on port 5432?
'
2016.08.02 14:59:13, Info, LaunchProcess(), 'C:\Program 
Files\PostgreSQL\9.5\bin\psql.exe' returned: '2'.
---

So far I haven't been able to reproduce this error myself, but I know of 
about 5 occurrences within the last month.
As I'm not able to reproduce it, I would appreciate any help in finding 
out what causes this error.
All installations run on Windows 7 Pro x64.

Best regards,
   Thomas Deboben





Re: [GENERAL] Unable to create oracle_fdw (foreign data wrapper) extension

2016-09-16 Thread Alex Ignatov




On 16.09.2016 17:01, Arun Rangarajan wrote:

Hi Laurenz,

Thanks for your reply.

Sorry for the double posting here and StackOverflow
http://stackoverflow.com/questions/39518417/unable-to-create-oracle-fdw-extension-on-postgres
 .

I will update the details on StackOverflow since formatting and editing are 
easier there.

On Fri, Sep 16, 2016 at 3:14 AM, Albe Laurenz <laurenz.a...@wien.gv.at> wrote:

Arun Rangarajan wrote:
> But when I try to create the extension, I get the following error:
>
> postgres=# create extension oracle_fdw;
> server closed the connection unexpectedly
> This probably means the server terminated abnormally
> before or while processing the request.
> The connection to the server was lost. Attempting reset: Failed.

> t=2016-09-15 16:16:08 PDT d= h= p=25327 a=LOG:  0: server process 
(PID 20397) was terminated by signal 11: Segmentation fault

Well, as I told you, get a stack trace with debugging symbols.

Yours,
Laurenz Albe




Hi!
Maybe this helps:

"Don't add oracle_fdw to shared_preload_libraries!" (c) Laurenz Albe



Alex Ignatov
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company





Re: [GENERAL] PostgreSQL installation failure

2016-09-16 Thread John R Pierce

On 9/16/2016 1:53 AM, thomas.deboben@rohde-schwarz.com wrote:

psql: could not connect to server: Connection refused (0x274D/10061)
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 5432?


that says it all.   Is the postgres service running?  and if it is, 
what's listen_addresses set to in postgresql.conf?  your system has ipv6 
enabled, so listen_addresses should be 'localhost', not just 
'127.0.0.1'.  Or it could be '*' if you want to listen on all 
interfaces (eg, allow LAN connections).




--
john r pierce, recycling bits in santa cruz



[GENERAL] Nonblocking libpq + openssl = ?

2016-09-16 Thread Nikolai Zhubr

Hello all,

Not sure this is exactly the right list, so feel free to point me to some 
other as appropriate.


While working on a higher-level binding to the libpq library, I've 
(likely) discovered a problem with non-blocking operation in the case of 
using OpenSSL. And it looks striking enough that I'd like to share my observation.


For libpq, non-blocking operation is documented as a normal supported 
feature, e.g. [1].
Now, OpenSSL transport is also documented as a normal supported feature, 
e.g. [2].
I have not found anywhere in the documentation any clear warning that 
non-blocking operation and OpenSSL transport are mutually exclusive or 
might not quite work as specified.


From [1] we learn (through some intricate wording) that in order to 
avoid blocking in PQgetResult() one can employ PQsetnonblocking(), 
PQflush(), PQconsumeInput() and PQisBusy(), supposedly all of them 
non-blocking after calling PQsetnonblocking(). That is not stated 
explicitly, but otherwise it would make no sense whatsoever, right?


Now lets have a look at e.g. PQconsumeInput():

===
.
/*
 * Load more data, if available. We do this no matter what state we are
 * in, since we are probably getting called because the application wants
 * to get rid of a read-select condition. Note that we will NOT block
 * waiting for more input.
 */
if (pqReadData(conn) < 0)
return 0;

/* Parsing of the data waits till later. */
 return 1;
}
===

It is stated that pqReadData() will NOT block. Now let's get inside:

===
.
/* OK, try to read some data */
retry3:
nread = pqsecure_read(conn, conn->inBuffer + conn->inEnd,
 conn->inBufSize - conn->inEnd);
.
/*
 * Still not sure that it's EOF, because some data could have just
 * arrived.
 */
retry4:
nread = pqsecure_read(conn, conn->inBuffer + conn->inEnd,
conn->inBufSize - conn->inEnd);



Now, in the case of SSL, this pqsecure_read() is just a wrapper around 
pgtls_read(), so let's look further:



pgtls_read(PGconn *conn, void *ptr, size_t len)
{
.
rloop:
SOCK_ERRNO_SET(0);
n = SSL_read(conn->ssl, ptr, len);
err = SSL_get_error(conn->ssl, n);
switch (err)
{
..
break;
case SSL_ERROR_WANT_WRITE:
/* Returning 0 here would cause caller to wait for read-ready,
 * which is not correct since what SSL wants is wait for
 * write-ready.  The former could get us stuck in an infinite
 * wait, so don't risk it; busy-loop instead. */
goto rloop;
==

So, going PQconsumeInput() -> pqReadData() -> pqsecure_read() -> pgtls_read() 
in a supposedly non-blocking operation, we finally come to a tight 
busy-loop waiting for SSL_ERROR_WANT_WRITE to clear. How could such a 
thing ever be,


- with not even a sleep(1),
- no timeout,
- no diagnostics of any sort,
- and a comment implying that getting stuck in a (potentially) infinite 
sleepless loop deep inside a library is OK?


And looking more into this pgtls_read() function, it seems it just has 
an inadequate interface: it has no way to reliably indicate some 
important details to its caller, namely the need to wait for 
write-readiness. It's as if SSL support were a quick-and-dirty hack 
rather than a consistently integrated feature. Or do I read it all wrong?

Any thoughts?

[1] https://www.postgresql.org/docs/9.5/static/libpq-async.html
[2] https://www.postgresql.org/docs/9.5/static/libpq-ssl.html

Thank you,
Regards,

Nikolai




Re: [GENERAL] Nonblocking libpq + openssl = ?

2016-09-16 Thread Andres Freund
Hi,

> So going PQconsumeInput()->pqReadData()->pqsecure_read()->pgtls_read() in a
> supposedly non-blocking operation we finally come to a tight busy-loop
> waiting for SSL_ERROR_WANT_WRITE to go down! How could such thing ever be,
> 
> - with no even sleep(1),
> - no timeout,
> - no diagnostics of any sort,
> - a comment implying that getting stuck in a (potentially) infinite
> sleepless loop deep inside a library is OK.

> And looking more into this pgtls_read() function it seems it just has
> inadequate interface. So that it has really no way to reliably indicate some
> important details to its caller, namely the need to wait for
> write-readyness. It's like if ssl support was a quick-n-dirty hack rather
> than a consistently integrated feature. Or do I read it all wrong?
> Any thoughts?

Well, it's not pretty. I quite dislike this bit, and I've complained
about it before.  But it is noteworthy that it's nearly impossible to
hit these days, due to ssl-renegotiation support having been ripped out.
That's what could trigger openssl to require writes upon reads.

Greetings,

Andres Freund




Re: [GENERAL] Nonblocking libpq + openssl = ?

2016-09-16 Thread Nikolai Zhubr

17.09.2016 2:05, Andres Freund:
[...]

Well, it's not pretty. I quite dislike this bit, and I've complained
about it before.  But it is noteworthy that it's nearly impossible to
hit these days, due to ssl-renegotiation support having been ripped out.
That's what could trigger openssl to require writes upon reads.


It looks like it _usually_ happens that such interdependent reads and 
writes are unnecessary in the absence of renegotiations. But [1] still 
instructs to always check for both SSL_ERROR_WANT_READ and 
SSL_ERROR_WANT_WRITE in all cases, presumably for a reason. The 
way it is implemented in fe-secure-openssl.c just looks somewhat unfinished.

I'm wondering is there really something that prevents doing it properly?

[1] https://www.openssl.org/docs/manmaster/ssl/SSL_get_error.html

Thank you,
Regards,
Nikolai



Greetings,

Andres Freund








Re: [GENERAL] Nonblocking libpq + openssl = ?

2016-09-16 Thread Andres Freund
On 2016-09-17 03:12:53 +0300, Nikolai Zhubr wrote:
> 17.09.2016 2:05, Andres Freund:
> [...]
> > Well, it's not pretty. I quite dislike this bit, and I've complained
> > about it before.  But it is noteworthy that it's nearly impossible to
> > hit these days, due to ssl-renegotiation support having been ripped out.
> > That's what could trigger openssl to require writes upon reads.
> 
> Looks like it _usually_ happens so that such interdependent reads and writes
> are unnecessary in the absence of renegotiations. But still [1] instructs to
> always check for both SSL_ERROR_WANT_READ and SSL_ERROR_WANT_WRITE in all
> cases. Supposedly it is for a reason. The way it is implemented in
> fe-secure-openssl.c looks just somewhat unfinished.
> I'm wondering is there really something that prevents doing it properly?

The relevant user-level API of libpq (PQisBusy) doesn't have a way to
return "waiting for write". So we'd have to break API compatibility.

Greetings,

Andres Freund

