Re: PostgreSQL TDE encryption patch

2020-06-26 Thread Bhalodiya, Chirag
Ok, thanks for the clarification.

On Thu, Jun 25, 2020 at 11:35 PM Bruce Momjian  wrote:

> On Thu, Jun 25, 2020 at 04:20:06PM +0530, Bhalodiya, Chirag wrote:
> > Hi Patrick,
> >
> > Thanks for the information. I was looking for an out-of-the-box
> > PostgreSQL solution, so I wanted to know how to apply the following
> > patch on top of my PostgreSQL 12 installation:
> > https://www.postgresql.org/message-id/CAD21AoBjrbxvaMpTApX1cEsO%3D8N%3Dnc2xVZPB0d9e-VjJ%3DYaRnw%40mail.gmail.com
>
> The patch is for developers and not for production use.
>
> >
> >
> > Regards,
> > Chirag.
> >
> > On Thu, Jun 25, 2020 at 3:33 PM Patrick FICHE wrote:
> >
> >
> > Hi
> >
> >
> >
> > CYBERTEC provides a good installation guide
> > (https://www.cybertec-postgresql.com/en/products/postgresql-transparent-data-encryption/).
> >
> >
> >
> > Here is their answer to your question:
> >
> > Q: Can I upgrade to an encrypted database?
> > A: In place encryption of existing clusters is currently not
> supported. A
> > dump and reload to an encrypted instance is required, or logical
> > replication can be used to perform the migration online.
> >
> >
> >
> > Regards,
> >
> >
> >
> > Patrick Fiche
> >
> > Database Engineer, Aqsacom Sas.
> >
> > c. 33 6 82 80 69 96
> >
> >
> >
> >
> >
> >
> > From: Bhalodiya, Chirag 
> > Sent: Thursday, June 25, 2020 9:50 AM
> > To: pgsql-gene...@postgresql.org
> > Subject: PostgreSQL TDE encryption patch
> >
> >
> >
> > Hi,
> >
> >
> >
> > We are migrating our product to PostgreSQL from Oracle, and as part of the
> > HIPAA
> > (https://en.wikipedia.org/wiki/Health_Insurance_Portability_and_Accountability_Act)
> > guidelines, we have a requirement to encrypt entire tablespaces/specific
> > tables using Transparent Data Encryption (TDE).
> >
> >
> >
> > I was looking at TDE solutions in PostgreSQL and went through the
> > following wiki:
> >
> > https://wiki.postgresql.org/wiki/Transparent_Data_Encryption
> >
> >
> >
> > I found the following TDE patch from this wiki:
> >
> > https://www.postgresql.org/message-id/CAD21AoBjrbxvaMpTApX1cEsO%3D8N%3Dnc2xVZPB0d9e-VjJ%3DYaRnw%40mail.gmail.com
> >
> >
> >
> >
> > However, I am not sure how to apply this patch, and I had the
> > following questions:
> >
> > 1. We are using PostgreSQL 12. Is it possible to apply patches on top of
> > an existing PostgreSQL installation?
> >
> > 2. Will it be available any time soon in a major release like PostgreSQL
> > 13?
> >
> >
> >
> > Regards,
> >
> > Chirag.
> >
>
>
>
> --
>   Bruce Momjian  https://momjian.us
>   EnterpriseDB https://enterprisedb.com
>
>   The usefulness of a cup is in its emptiness, Bruce Lee
>
>


Re: EXTERNAL: Re: Netapp SnapCenter

2020-06-26 Thread Magnus Hagander
On Thu, Jun 25, 2020 at 5:23 PM Paul Förster 
wrote:

> Hi Ken,
>
> > On 25. Jun, 2020, at 17:15, Wolff, Ken L  wrote:
> > There's actually a lot of good stuff in that document about Postgres in
> general.  I'd be curious to hear what everyone thinks, though, and
> specifically about what NetApp recommends in Section 3.3 about putting data
> and WAL on separate volumes, which I believe contradicts what's been
> discussed in this email thread.
>
> Yes, I've read the part about different volumes, and I must say I don't
> agree, because I think it violates atomicity.
>

I believe NetApp does atomic snapshots across multiple volumes, if you have
them in the same consistency group. (If you don't, then you're definitely in
for a world of pain if you ever have to restore.)

Snapshotting multiple volumes in a consistency group will set up a write
fence across them, then snapshot, and AIUI guarantees correct write
ordering.

-- 
 Magnus Hagander
 Me: https://www.hagander.net/ 
 Work: https://www.redpill-linpro.com/ 


Re: EXTERNAL: Re: Netapp SnapCenter

2020-06-26 Thread Paul Förster
Hi Magnus,

> On 26. Jun, 2020, at 12:29, Magnus Hagander  wrote:
> I believe NetApp does atomic snapshots across multiple volumes, if you have 
> them in the same consistency group. (If you don't then you're definitely in 
> for a world of pain if you ever have to restore)
> 
> Snapshotting multiple volumes in a consistency group will set up a write 
> fence across them, then snapshot, and AIUI guarantees correct write ordering.

That's how I understood it too. But in our complex storage world, it will be
difficult to get the storage guys to create a consistency group.

Also, what I'm looking for is a plugin that can, as NetApp calls it,
"quiesce" the PostgreSQL cluster, i.e. issue start/stop backup.

I don't really trust the community plugin, as it is old and I'm not sure it
gets any development attention anymore. For example, what about the API
change (exclusive vs. non-exclusive)? Does it use the new API?

Cheers,
Paul



Re: Netapp SnapCenter

2020-06-26 Thread Paul Förster
Hi Stephen,

> On 22. Jun, 2020, at 19:00, Stephen Frost  wrote:
> ... and all tablespaces are also on that volume.  Basically, anything
> that PG might ever write to needs to all be included in that atomic
> write.

It turns out that tablespaces can be stored on different volumes if you use
consistency groups in NetApp to do the snapshots. But this is per the
documentation that Ken mentioned (see below)! I couldn't verify it, as I am
not the storage person...

https://www.netapp.com/us/media/tr-4770.pdf

Cheers,
Paul



RE: autovacuum failing on pg_largeobject and disk usage of the pg_largeobject growing unchecked

2020-06-26 Thread Jim Hurne
"Jim Hurne"  wrote on 06/24/2020 03:44:26 PM:
> That brings us to transactions. I looked for any long-running idle 
> transaction (transactions that have been idle for more than 15 or 5 
> minutes), but found none.

It turns out I didn't see any long-running transactions because I was
using a user who didn't have permission to view most of the data in the
pg_stat_activity view. When we looked with a more privileged user, we
found four transactions that had been idle for a long time. Killing those
transactions allowed vacuums to clean up the dead tuples.
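For the record, a minimal sketch of the check this amounts to (illustrative only: the row shape mimics the pg_stat_activity columns pid, state, and state_change, and the 15-minute threshold is the one mentioned earlier in the thread):

```python
from datetime import datetime, timedelta

# Illustrative filter over rows shaped like pg_stat_activity output
# (pid, state, state_change); in SQL this corresponds to something like
#   state = 'idle in transaction'
#   AND state_change < now() - interval '15 minutes'
def long_idle_transactions(rows, now, threshold=timedelta(minutes=15)):
    return [r for r in rows
            if r["state"] == "idle in transaction"
            and now - r["state_change"] > threshold]

now = datetime(2020, 6, 26, 12, 0)
rows = [
    {"pid": 101, "state": "active", "state_change": now},
    {"pid": 102, "state": "idle in transaction",
     "state_change": now - timedelta(hours=3)},
]
print([r["pid"] for r in long_idle_transactions(rows, now)])  # [102]
```

Note that running the real query requires a role allowed to see other sessions' rows in pg_stat_activity, which is exactly the trap described above.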

This is most likely the root of all of our problems, so we can essentially 
consider this mystery solved for now.

Thanks to all who gave us advice and information. It's been highly 
educational. I'm just sorry it turned out to be a stupid "long running 
transaction" problem. We'll know to look for that first in the future.






Connecting Powerbuilder / EAserver to PostgreSQL server 11 via ODBC or JDBC

2020-06-26 Thread Matthias Apitz


Hello,

After the very successful migration of our Library Management System
(which uses ESQL/C, DBI, JDBC) to PostgreSQL 11.4, we want to do the
same with another LMS written in PowerBuilder, running in an EAServer and
currently using Sybase ASE as its DBMS.

There is an error already in the connection phase: the ODBC debug logs
show that the connection establishment and authentication to the
PostgreSQL server are fine (as is the disconnect), but the EAServer makes
an error out of this and returns to the PowerBuilder software that the
connection is invalid, raising some error 999 without saying much in the
log files about what this could mean or what causes it.

I know this is difficult to analyze with all this proprietary software
stack, but my first question here is: has anybody out there managed to get
such an architecture working successfully?

And please do not send hints of the type "rewrite everything in Java or
Visual Basic", as this is not an option :-)

Thanks

matthias
-- 
Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/ +49-176-38902045
Public GnuPG key: http://www.unixarea.de/key.pub




Re: Connecting Powerbuilder / EAserver to PostgreSQL server 11 via ODBC or JDBC

2020-06-26 Thread Igor Korot
Hi,



On Fri, Jun 26, 2020, 8:31 AM Matthias Apitz  wrote:

>
> Hello,
>
> After the very successful migration of our Library Management System
> (which uses ESQL/C, DBI, JDBC) together with PostgreSQL 11.4, we want to
> do the
> same with another LMS written in Powerbuild, running in an EAServer and
> currently using Sybase ASE as DBS.
>
> There is an error situation already on the connection phase, the ODBC
> debug logs show that the connection establishment and authentication to
> the PostgreSQL server is fine (also the disconnect), but the EAServer
> makes an error out of this and returns to the Powerbuilder software that
> the connection is invalid,
> raising some error 999 without saying much in the log files what this
> could mean or is caused from.
>

So which operation is erroring out?
Can you try to isolate it?

Thank you.


> I know this is difficult to analyze with all this proprietary software
> stack, but my first question here is: anybody out here who could manage
> such an architecture successful working?
>
> And please do not send hints of the type, rewrite everything in Java or
> Visual Basic, as this is not an option :-)
>
> Thanks
>
> matthias
> --
> Matthias Apitz, ✉ g...@unixarea.de, http://www.unixarea.de/
> +49-176-38902045
> Public GnuPG key: http://www.unixarea.de/key.pub
>
>
>


Re: Connecting Powerbuilder / EAserver to PostgreSQL server 11 via ODBC or JDBC

2020-06-26 Thread Adrian Klaver

On 6/26/20 6:31 AM, Matthias Apitz wrote:


> Hello,
>
> After the very successful migration of our Library Management System
> (which uses ESQL/C, DBI, JDBC) together with PostgreSQL 11.4, we want to do the
> same with another LMS written in Powerbuild, running in an EAServer and
> currently using Sybase ASE as DBS.
>
> There is an error situation already on the connection phase, the ODBC
> debug logs show that the connection establishment and authentication to
> the PostgreSQL server is fine (also the disconnect), but the EAServer
> makes an error out of this and returns to the Powerbuilder software that
> the connection is invalid,
> raising some error 999 without saying much in the log files what this
> could mean or is caused from.


A quick search found that this code can be returned if the correct server
type is not specified in EAServer.




> I know this is difficult to analyze with all this proprietary software
> stack, but my first question here is: anybody out here who could manage
> such an architecture successful working?
>
> And please do not send hints of the type, rewrite everything in Java or
> Visual Basic, as this is not an option :-)
>
> Thanks
>
> matthias




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: timestamp - timestamp result

2020-06-26 Thread Tom Lane
Pavel Stehule  writes:
> On Fri, 26 Jun 2020 at 07:29, Thomas Kellerer wrote:
>> Is it safe to assume that "timestamp - timestamp" will never contain units
>> larger than days?

> Now, this operator internally calls only the interval_justify_hours function.

We should probably document that --- I'll go do so.

regards, tom lane
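As an illustrative aside (Python's datetime, not PostgreSQL itself): Python's timestamp arithmetic normalizes the same way, so a timestamp difference carries days plus sub-day components and never months or years:

```python
from datetime import datetime

# timedelta stores only days, seconds, and microseconds, so a timestamp
# difference never carries units larger than days -- analogous to the
# guarantee being discussed for PostgreSQL's timestamp - timestamp operator.
delta = datetime(2020, 8, 26, 12, 0) - datetime(2020, 6, 26, 6, 0)
print(delta.days, delta.seconds)  # 61 21600
```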




PG13 Trusted Extension usability issue

2020-06-26 Thread Brad Nicholson


First, as a long-time user of the pgextwlist extension, I'm happy to see
this functionality appearing in core.  However, as a long-term user of that
extension, I can say that the ability to create an extension as a
non-superuser is pretty limited by itself in a lot of cases.  Many
extensions, both in contrib and external ones (like PostGIS, for example),
don't grant the permissions needed to actually use the extension.

Taking postgres_fdw as an example.

I modify the control file to allow it to be trusted (I'd suggest
documenting how to do this as well):

cat /usr/pgsql-13/share/extension/postgres_fdw.control
# postgres_fdw extension
comment = 'foreign-data wrapper for remote PostgreSQL servers'
default_version = '1.0'
module_pathname = '$libdir/postgres_fdw'
relocatable = true
trusted = true

My current, non-super user:

\du admin
 List of roles
 Role name |   Attributes   | Member of
---++---
 admin | Create role, Create DB | {}


Create the extension:
test2=> create extension postgres_fdw ;
CREATE EXTENSION

Actually try and use it:

test2=> CREATE SERVER foreign_server
FOREIGN DATA WRAPPER postgres_fdw
OPTIONS (host '192.83.123.89', port '5432', dbname 'foreign_db');
ERROR:  permission denied for foreign-data wrapper postgres_fdw


To make it work, you need a superuser to run GRANT USAGE ... on the foreign
data wrapper for the user that creates the server. And while a superuser can
apply the needed grants, it's an awkward feature to allow a non-superuser to
create an extension, then require a superuser to grant things to allow it to
actually be used.

There are lots of other cases of such behavior: pg_stat_statements_reset(),
for example. Big external extensions like PostGIS also suffer from
permissions issues.

Brad.


Interpreting autovacuum logs (9.6)

2020-06-26 Thread Gabe Kopley
Hi all,

Please see this graph of data points extracted from autovacuum logs:
https://imgur.com/a/OCRKoDn . It's from a 9.6 instance with default params
for autovacuum with the exceptions of autovacuum_work_mem=1 and
log_autovacuum_min_duration=100.

1. How should we interpret the "# tuples remain" reported by the autovacuum
logs? The source code says it's the "estimated total # of tuples", which to
me means # dead + # live. But that is contradicted by the pattern here, where
the orange points (# tuples remain) are dramatically higher than # dead not
removable (blue points) + # dead removed (green points) + # live (which
never exceeded 1M during this entire interval, per a count query).

2. Beginning around the first 6/19 tick, what could be causing # tuples
remain to drop steeply after periods of growth when # tuples removed is 0?
I confirmed there was no truncation. And what could the recurrent asymptote
of # tuples remain at ~22M mean?

(further context for those curious: the discontinuity at 6/23 is due to an
individual autovacuum run getting stuck. After manually killing that run,
the next one succeeded and you see that reporting toward the right of the
graph)

Thanks!

Gabe


Re: PG13 Trusted Extension usability issue

2020-06-26 Thread Tom Lane
"Brad Nicholson"  writes:
> First, as a long time user of the pgextwlist extension, I'm happy to see
> this functionality appearing in core.  However, as a long term user of that
> extension, I can say that ability to create an extension as a non-super
> user is pretty limited in itself in a lot of cases.  Many extensions both
> in contrib and external ones (like PostGIS for example) don't give
> appropriate permissions to actually use the extension.

> Taking postgresql_fdw as an example.

I'm confused about your point here.  postgres_fdw has intentionally
*not* been marked trusted.  That's partly because it doesn't seem like
outside-the-database access is something we want to allow by default,
but it's also the case that there are inside-the-database permissions
issues.  So no, we have not solved those, but that is not a shortcoming
of the trusted-extensions feature AFAICS.  It is not the intent of
that feature that you can randomly mark unsafe extensions as trusted
and have every one of their permissions safety-checks vanish.

regards, tom lane




Re: Interpreting autovacuum logs (9.6)

2020-06-26 Thread Adrian Klaver

On 6/26/20 11:47 AM, Gabe Kopley wrote:

> Hi all,
>
> Please see this graph of data points extracted from autovacuum logs:
> https://imgur.com/a/OCRKoDn . It's from a 9.6 instance with default
> params for autovacuum with the exceptions of autovacuum_work_mem=1
> and log_autovacuum_min_duration=100.
>
> 1. How should we interpret the # tuples remain reported by the autovac
> logs? The source code says it's the "estimated total # of tuples" which
> to me means # dead + # live. But that is invalidated by the pattern here
> where the orange points (# tuples remain) are dramatically higher than #
> dead not removable (blue points) + # dead removed (green points) + #
> live (which never exceeded 1M during this entire interval, per count query).
>
> 2. Beginning around the first 6/19 tick, what could be causing # tuples
> remain to drop steeply after periods of growth when # tuples removed is
> 0? I confirmed there was no truncation. And what could the # tuples
> remain recurrent asymptote at ~22M mean?


AFAIK there is no dedicated autovacuum log, so something is pulling this
out of the Postgres log, correct?

What is the program that is doing that, and what is the raw output?



> (further context for those curious: the discontinuity at 6/23 is due to
> an individual autovacuum run getting stuck. After manually killing that
> run, the next one succeeded and you see that reporting toward the right
> of the graph)
>
> Thanks!
>
> Gabe





--
Adrian Klaver
adrian.kla...@aklaver.com




RE: PG13 Trusted Extension usability issue

2020-06-26 Thread Brad Nicholson

Tom Lane  wrote on 2020/06/26 02:47:25 PM:

> From: Tom Lane 
> To: "Brad Nicholson" 
> Cc: pgsql-general@lists.postgresql.org
> Date: 2020/06/26 02:51 PM
> Subject: [EXTERNAL] Re: PG13 Trusted Extension usability issue
>
> "Brad Nicholson"  writes:
> > First, as a long time user of the pgextwlist extension, I'm happy to
see
> > this functionality appearing in core.  However, as a long term user of
that
> > extension, I can say that ability to create an extension as a non-super
> > user is pretty limited in itself in a lot of cases.  Many extensions
both
> > in contrib and external ones (like PostGIS for example) don't give
> > appropriate permissions to actually use the extension.
>
> > Taking postgresql_fdw as an example.
>
> I'm confused about your point here.  postgresql_fdw has intentionally
> *not* been marked trusted.  That's partly because it doesn't seem like
> outside-the-database access is something we want to allow by default,
> but it's also the case that there are inside-the-database permissions
> issues.

Ah - I misread the docs.  Specifically I read this:

"For many extensions this means superuser privileges are needed. However,
if the extension is marked trusted in its control file, then it can be
installed by any user who has CREATE privilege on the current database"

to mean that you could mark any extension as trusted in the control file to
allow non-superuser installation.

Thanks,
Brad


Re: PG13 Trusted Extension usability issue

2020-06-26 Thread Adrian Klaver

On 6/26/20 12:03 PM, Brad Nicholson wrote:

> Tom Lane  wrote on 2020/06/26 02:47:25 PM:
>
>  > From: Tom Lane 
>  > To: "Brad Nicholson" 
>  > Cc: pgsql-general@lists.postgresql.org
>  > Date: 2020/06/26 02:51 PM
>  > Subject: [EXTERNAL] Re: PG13 Trusted Extension usability issue
>  >
>  > "Brad Nicholson"  writes:
>  > > First, as a long time user of the pgextwlist extension, I'm happy
>  > > to see this functionality appearing in core.  However, as a long
>  > > term user of that extension, I can say that ability to create an
>  > > extension as a non-super user is pretty limited in itself in a lot
>  > > of cases.  Many extensions both in contrib and external ones (like
>  > > PostGIS for example) don't give appropriate permissions to actually
>  > > use the extension.
>  >
>  > > Taking postgresql_fdw as an example.
>  >
>  > I'm confused about your point here.  postgresql_fdw has intentionally
>  > *not* been marked trusted.  That's partly because it doesn't seem like
>  > outside-the-database access is something we want to allow by default,
>  > but it's also the case that there are inside-the-database permissions
>  > issues.
>
> Ah - I misread the docs.  Specifically I read this:
>
> "For many extensions this means superuser privileges are needed.
> However, if the extension is marked trusted in its control file, then it
> can be installed by any user who has CREATE privilege on the current
> database"


The rest of that paragraph:

"In this case the extension object itself will be owned by the calling 
user, but the contained objects will be owned by the bootstrap superuser 
(unless the extension's script explicitly assigns them to the calling 
user). This configuration gives the calling user the right to drop the 
extension, but not to modify individual objects within it."




> To mean that you could mark any extension as trusted in the control file
> to allow non-superuser installation.
>
> Thanks,
> Brad




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: Interpreting autovacuum logs (9.6)

2020-06-26 Thread Gabe Kopley
That's right, Adrian: there is no dedicated log; rather, these are the
messages emitted to the Postgres log, I believe here:
https://github.com/postgres/postgres/blob/REL9_6_18/src/backend/commands/vacuumlazy.c#L382
I'm just using a regex to pull out those [removed, remain, dead not
removable] values and plot them.
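A minimal sketch of that extraction, assuming the 9.6 log wording quoted later in the thread ("tuples: N removed, N remain, N are dead but not yet removable"); the field names are mine:

```python
import re

# Matches the 9.6 autovacuum "tuples:" summary line; group names are my own.
TUPLES_RE = re.compile(
    r"tuples: (?P<removed>\d+) removed, (?P<remain>\d+) remain, "
    r"(?P<dead_not_removable>\d+) are dead but not yet removable"
)

def parse_autovacuum_line(line):
    """Return the three counters as ints, or None if the line doesn't match."""
    m = TUPLES_RE.search(line)
    return {k: int(v) for k, v in m.groupdict().items()} if m else None

sample = ("tuples: 120 removed, 22000000 remain, "
          "35000 are dead but not yet removable, oldest xmin: 12345")
print(parse_autovacuum_line(sample))
```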

On Fri, Jun 26, 2020 at 12:02 PM Adrian Klaver 
wrote:

> On 6/26/20 11:47 AM, Gabe Kopley wrote:
> > Hi all,
> >
> > Please see this graph of data points extracted from autovacuum logs:
> > https://imgur.com/a/OCRKoDn . It's from a 9.6 instance with default
> > params for autovacuum with the exceptions of autovacuum_work_mem=1
> > and log_autovacuum_min_duration=100.
> >
> > 1. How should we interpret the # tuples remain reported by the autovac
> > logs? The source code says it's the "estimated total # of tuples" which
> > to me means # dead + # live. But that is invalidated by the pattern here
> > where the orange points (# tuples remain) are dramatically higher than #
> > dead not removable (blue points) + # dead removed (green points) + #
> > live (which never exceeded 1M during this entire interval, per count
> query).
> >
> > 2. Beginning around the first 6/19 tick, what could be causing # tuples
> > remain to drop steeply after periods of growth when # tuples removed is
> > 0? I confirmed there was no truncation. And what could the # tuples
> > remain recurrent asymptote at ~22M mean?
>
> AFAIK there is not dedicated autovac log, so something is pulling this
> out of the Postgres log correct?
>
> What is the program that is doing that and what is the raw output?
>
> >
> > (further context for those curious: the discontinuity at 6/23 is due to
> > an individual autovacuum run getting stuck. After manually killing that
> > run, the next one succeeded and you see that reporting toward the right
> > of the graph)
> >
> > Thanks!
> >
> > Gabe
> >
> >
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>


Re: PG13 Trusted Extension usability issue

2020-06-26 Thread Tom Lane
"Brad Nicholson"  writes:
> Tom Lane  wrote on 2020/06/26 02:47:25 PM:
>> I'm confused about your point here.  postgresql_fdw has intentionally
>> *not* been marked trusted.  That's partly because it doesn't seem like
>> outside-the-database access is something we want to allow by default,
>> but it's also the case that there are inside-the-database permissions
>> issues.

> Ah - I misread the docs.  Specifically I read this:
> "For many extensions this means superuser privileges are needed. However,
> if the extension is marked trusted in its control file, then it can be
> installed by any user who has CREATE privilege on the current database"
> To mean that you could mark any extension as trusted in the control file to
> allow non-superuser installation.

Well, it's just like anything else in an open-source database: you can
change the code however you want, but the fallout from that is on you.

In the case at hand, you might be able to do what you want by adding
something along the lines of

GRANT USAGE ON FOREIGN DATA WRAPPER postgres_fdw
  TO @extowner@ WITH GRANT OPTION;

to the extension install script.  But nobody's researched whether that's
reasonable from a security standpoint, or whether it will play nicely
with dump/reload, etc etc.

regards, tom lane




Re: Interpreting autovacuum logs (9.6)

2020-06-26 Thread Adrian Klaver

On 6/26/20 12:13 PM, Gabe Kopley wrote:
> That's right Adrian, no dedicated log, but rather the messages to the
> Postgres log emitted I believe here:
> https://github.com/postgres/postgres/blob/REL9_6_18/src/backend/commands/vacuumlazy.c#L382
> . I'm just using a regex to pull out those [removed, remain, dead not
> removable] values and plot them.


Further on in the source there is:

_("tuples: %.0f removed, %.0f remain, %.0f are dead but not yet removable, oldest xmin: %u\n"),
vacrelstats->tuples_deleted,
vacrelstats->new_rel_tuples,
vacrelstats->new_dead_tuples,
OldestXmin);

To me that tracks with your graph. The orange line is the new live plus
dead tuples, and its difference from the blue line is the number of new
live tuples relative to the number of new dead tuples. The flat line for
tuples removed seems to track the fact that the other tuple lines increase
because the VACUUM process was not working. The two data points at the far
right, when the autovacuum kicked in, back that up.




On Fri, Jun 26, 2020 at 12:02 PM Adrian Klaver 
mailto:adrian.kla...@aklaver.com>> wrote:


On 6/26/20 11:47 AM, Gabe Kopley wrote:
 > Hi all,
 >
 > Please see this graph of data points extracted from autovacuum logs:
 > https://imgur.com/a/OCRKoDn . It's from a 9.6 instance with default
 > params for autovacuum with the exceptions of
autovacuum_work_mem=1
 > and log_autovacuum_min_duration=100.
 >
 > 1. How should we interpret the # tuples remain reported by the
autovac
 > logs? The source code says it's the "estimated total # of tuples"
which
 > to me means # dead + # live. But that is invalidated by the
pattern here
 > where the orange points (# tuples remain) are dramatically higher
than #
 > dead not removable (blue points) + # dead removed (green points) + #
 > live (which never exceeded 1M during this entire interval, per
count query).
 >
 > 2. Beginning around the first 6/19 tick, what could be causing #
tuples
 > remain to drop steeply after periods of growth when # tuples
removed is
 > 0? I confirmed there was no truncation. And what could the # tuples
 > remain recurrent asymptote at ~22M mean?

AFAIK there is not dedicated autovac log, so something is pulling this
out of the Postgres log correct?

What is the program that is doing that and what is the raw output?

 >
 > (further context for those curious: the discontinuity at 6/23 is
due to
 > an individual autovacuum run getting stuck. After manually
killing that
 > run, the next one succeeded and you see that reporting toward the
right
 > of the graph)
 >
 > Thanks!
 >
 > Gabe
 >
 >


-- 
Adrian Klaver

adrian.kla...@aklaver.com 




--
Adrian Klaver
adrian.kla...@aklaver.com




libpq pipelineing

2020-06-26 Thread Samuel Williams
Hello,

Using the asynchronous interface of libpq, is it possible to pipeline
multiple queries?

i.e.

PQsendQuery(query1)
PQsendQuery(query2)

followed by

query1_results = PQgetResult(...)
query2_results = PQgetResult(...)

I tried it but got "another command is already in progress" error.

So, maybe it's not supported, or maybe I'm doing something wrong.

Thanks
Samuel




Re: libpq pipelineing

2020-06-26 Thread David G. Johnston
On Friday, June 26, 2020, Samuel Williams 
wrote:

> Hello,
>
> Using the asynchronous interface of libpq, is it possible to pipeline
> multiple queries?
>
> i.e.
>
> PQsendQuery(query1)
> PQsendQuery(query2)
>
> followed by
>
> query1_results = PQgetResult(...)
> query2_results = PQgetResult(...)
>
> I tried it but got "another command is already in progress" error.
>

The documentation seems to leave zero ambiguity:

 After successfully calling PQsendQuery, call PQgetResult one or more times
to obtain the results. PQsendQuery cannot be called again (on the same
connection) until PQgetResult has returned a null pointer, indicating that
the command is done.

David J.


Re: libpq pipelineing

2020-06-26 Thread Samuel Williams
Thanks David,

You are correct.

I was giving an example of what I was hoping to achieve, not what I
expected to work with the current interface.

I found some past discussion of batch processing, which appears to support
some kind of pipelining:

https://2ndquadrant.github.io/postgres/libpq-batch-mode.html

However, it seems to have been abandoned.

Kind regards,
Samuel

On Sat, 27 Jun 2020 at 16:15, David G. Johnston
 wrote:
>
> On Friday, June 26, 2020, Samuel Williams  
> wrote:
>>
>> Hello,
>>
>> Using the asynchronous interface of libpq, is it possible to pipeline
>> multiple queries?
>>
>> i.e.
>>
>> PQsendQuery(query1)
>> PQsendQuery(query2)
>>
>> followed by
>>
>> query1_results = PQgetResult(...)
>> query2_results = PQgetResult(...)
>>
>> I tried it but got "another command is already in progress" error.
>
>
> The documentation seems to leave zero ambiguity:
>
>  After successfully calling PQsendQuery, call PQgetResult one or more times 
> to obtain the results. PQsendQuery cannot be called again (on the same 
> connection) until PQgetResult has returned a null pointer, indicating that 
> the command is done.
>
> David J.
>




Re: libpq pipelineing

2020-06-26 Thread David G. Johnston
On Friday, June 26, 2020, Samuel Williams 
wrote:

> Thanks David,
>
> You are correct.
>
> I was giving an example of what I was hoping to achieve, not what I
> expected to work with the current interface.
>

What about, as it says, sending multiple statements in a single PQsendQuery
and then polling for multiple results?

David J.


Re: libpq pipelineing

2020-06-26 Thread Samuel Williams
> What about, as it says, sending multiple statements in a single sendQuery and 
> then polling for multiple results?

I tried this, and even in single-row streaming mode, I found that
there are cases where the results would not be streamed until all the
queries were sent.

From the user's point of view, they may have a loop sending multiple
queries and not care about each result, so pipeline/batch processing is
ideal to avoid an RTT per loop iteration; if database access is slow, this
can be a significant source of latency.

Kind regards,
Samuel




Re: libpq pipelineing

2020-06-26 Thread David G. Johnston
On Friday, June 26, 2020, Samuel Williams 
wrote:

> > What about, as it says, sending multiple statements in a single
> sendQuery and then polling for multiple results?
>
> I tried this, and even in single row streaming mode, I found that
> there are cases where the results would not be streamed until all the
> queries were sent.
>
> From the users point of view, they may generate a loop sending
> multiple queries and don't care about the result, so a pipeline/batch
> processing is ideal to avoid RTT per loop iteration, if database
> access is slow, this can be a significant source of latency
>

I don't have any insight into the bigger picture, but I'd concur that no
other option is documented, so what you desire is not possible.

David J.