question on auto_explain

2023-08-03 Thread Karsten Hilbert
Dear list,

when debugging slow queries in a larger application (https://www.gnumed.de) I 
started to use auto_explain.

The "normal" EXPLAIN warns

  https://www.postgresql.org/docs/current/sql-explain.html

that ANALYZE on INSERT/UPDATE/DELETE will (of course, in hindsight) modify 
rows. Now, the
auto_explain docs

  https://www.postgresql.org/docs/current/auto-explain.html

don't explicitly state that it does so, too. Nor can I infer from them that
"normal" EXPLAIN is *run* by auto_explain.

Hence my question:

  Does auto_explain also modify rows on INSERT/UPDATE/DELETE if
auto_explain.log_analyze is TRUE?

If not, how so?

(I guess it would have to run a dance of "BEGIN; EXPLAIN ANALYZE ...; 
ROLLBACK;" just
 before any query is being run.)

Thanks,
Karsten




Re: question on auto_explain

2023-08-03 Thread David G. Johnston
On Thu, Aug 3, 2023 at 9:29 AM Karsten Hilbert wrote:

>   https://www.postgresql.org/docs/current/auto-explain.html
>
> don't explicitly state that it does so, too. Nor can I infer from them
> that
> "normal" EXPLAIN is *run* by auto_explain.
>

auto_explain automatically produces the explain output of a query that is
running for real.  The effect is identical to running EXPLAIN ANALYZE, except
that the output you see is whatever the query itself produces; the explain
output instead goes into the log.
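
To make that concrete, here is a minimal SQL sketch (the table "t" and its
columns are hypothetical) contrasting a session-level auto_explain setup with
a manual EXPLAIN ANALYZE; in both cases the statement really executes and the
row is modified unless the transaction is rolled back:

  -- session-level setup (requires superuser); a sketch, not a recommendation
  LOAD 'auto_explain';
  SET auto_explain.log_min_duration = 0;   -- log the plan of every statement
  SET auto_explain.log_analyze = true;     -- include run-time instrumentation

  UPDATE t SET x = x + 1 WHERE id = 1;     -- rows go to the client, the plan goes
                                           -- to the log, and the row really is updated

  BEGIN;
  EXPLAIN ANALYZE UPDATE t SET x = x + 1 WHERE id = 1;  -- manual equivalent: also updates the row
  ROLLBACK;                                             -- unless rolled back like this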

David J.


Aw: Re: question on auto_explain

2023-08-03 Thread Karsten Hilbert
> On Thu, Aug 3, 2023 at 9:29 AM Karsten Hilbert wrote:
>>
>>   https://www.postgresql.org/docs/current/auto-explain.html
>>
>> don't explicitly state that it does so, too. Nor can I infer from them
>> that
>> "normal" EXPLAIN is *run* by auto_explain.
>
> auto_explain automatically produces the explain output of a query that is
> running for real.  The effect is identical to running EXPLAIN ANALYZE, except
> that the output you see is whatever the query itself produces; the explain
> output instead goes into the log.

Thanks David. I take this to mean that auto_explain produces the same 
side-effects as manually running "explain analyze" does.

Would this warrant a documentation patch?  Like so:

  auto_explain.log_analyze causes EXPLAIN ANALYZE output, rather than just
  EXPLAIN output, to be printed when an execution plan is logged. Hence
  the same caveats apply for INSERT/UPDATE/DELETE queries.

Also, doesn't this make auto_explain.log_analyze = TRUE rather surprising, as it
can make any INSERT/UPDATE/DELETE fail when it is slow for some reason ...

Thanks,
Karsten




Runaway Initial Table Syncs in Logical Replication?

2023-08-03 Thread Don Seiler
Logical Rep question. Publisher is PG12 on Ubuntu 18.04, subscriber is PG15
on Ubuntu 22.04.

I bumped up some values to see how initial load times change. I set
max_logical_replication_workers to 20 and max_sync_workers_per_subscription
to 4. I’m using 3 subscriptions; 2 of the subscriptions have 3 tables each,
the other has a lot more, say 100 for the sake of example. What we noticed
was that the subscriber basically maxed out the wal senders and replication
slots on the publisher side, even when the publisher settings were bumped
up to 50 each. 3 replication slots were for the 3 subs and the rest (47)
were sync workers. Was it creating a sync worker for each table right away,
or at least trying to? The subscriber side was still complaining that it
couldn’t create more replication slots on the publisher.

I was expecting to see max_sync_workers_per_subscription (4) x # of subs
(3) = 12 sync workers in action so this was a big surprise. Is this
expected behavior?

-- 
Don Seiler
www.seiler.us


Aw: Re: question on auto_explain

2023-08-03 Thread Karsten Hilbert
> > auto_explain automatically produces the explain output of a query that is
> > running for real.  The effect is identical to running EXPLAIN ANALYZE,
> > except that the output you see is whatever the query itself produces; the
> > explain output instead goes into the log.
> 
> Thanks David. I take this to mean that auto_explain produces the same 
> side-effects as manually running "explain analyze" does.
> 
> Would this warrant a documentation patch ?  Like so:
> 
>   auto_explain.log_analyze causes EXPLAIN ANALYZE output, rather than just 
> EXPLAIN output, to be printed when an execution plan is logged. Hence
>   the same caveats apply for INSERT/UPDATE/DELETE queries.
> 
> Also, doesn't this makes auto_explain.log_analyze = TRUE rather surprising as 
> it
> can make any INSERT/UPDATE/DELETE fail when it is slow for some reason ...

Ah, wait, I think I've been a bit dense here. I thought it was a two-step
process of first running any queries "normally", somehow magically noticing
slow ones as per auto_explain.log_min_duration, and re-running those with
EXPLAIN ANALYZE prepended.

I think I understand better now: with auto_explain ALL queries are run with
EXPLAIN ANALYZE prepended, BUT the output is two-fold: query results are fed
to whatever client is on the wire protocol, and the EXPLAIN output is
re-routed to the log. Does that sound right?

I think I was misled by psql discarding (?) query results (the rows)
while displaying EXPLAIN output only.

The auto_explain docs might perhaps still benefit from a more
explicit hint towards write query side effects.

Karsten




Re: Installation Issue

2023-08-03 Thread Mark Atlantic



From: Adrian Klaver 
Sent: Sunday, July 16, 2023 11:35 AM
To: Baskar Muthu ; pgsql-general@lists.postgresql.org 

Subject: Re: Installation Issue

On 7/15/23 07:30, Baskar Muthu wrote:
> Hi sir/mam
>
> I installed PostgreSQL but it is not connected with the server. I
> tried many times but it's not working and it shows a 'connecting server
> error'.

This needs more information:

1) OS and version.

2) Postgresql version.

3) How was Postgresql installed?

4) Is the Postgresql server running?

5) What client are you connecting with and the connection parameters?

6) Is the client remote to the server?

7) The complete error message.


>
> Please resolve this issue, and if you have alternate ideas please let me know.
>
> Thanks & Regards
> Baskar Muthu

--
Adrian Klaver
adrian.kla...@aklaver.com





role "my_account" does not exist

2023-08-03 Thread Amn Ojee Uw

Hello!

Following this web page, I have tried to
create a new role/user, but to no avail.


After typing the following command: createuser --interactive --pwprompt
I get the following questions:


Enter name of role to add: company_name
Enter password for new role: xxx
Enter it again: xxx
Shall the new role be a superuser? (y/n) y

Then this error is displayed:
createuser: error: connection to server on socket
"/var/run/postgresql/.s.PGSQL.5432"
failed: FATAL:  role "my_debian_login_account_name" does not exist

However, this issue is not really relevant, per se, since I am able to
create roles once I log in to PostgreSQL-15. Nonetheless, if the service
exists, why not use it? And if the service exists but is not working, could
it be that there is a problem that will become apparent later on, when,
unlike now, it would be during a critical moment? Thus the question:
what can I do to resolve this issue?

Is there something I am missing?


Thanks



Re: role "my_account" does not exist

2023-08-03 Thread David G. Johnston
On Thu, Aug 3, 2023 at 5:18 PM Amn Ojee Uw  wrote:

> Hello!
>
> Following this web page, I have tried to
> create a new role/user, but to no avail.
>
> After typing the following command: createuser --interactive --pwprompt
> I get the following questions:
>
> Enter name of role to add: company_name
> Enter password for new role: xxx
> Enter it again: xxx
> Shall the new role be a superuser? (y/n) y
>
> Then this error is displayed:
> createuser: error: connection to server on socket
> "/var/run/postgresql/.s.PGSQL.5432"
> failed: FATAL:  role "my_debian_login_account_name" does not exist
>
> However, this issue is not really relevant, per se, since I am able to
> create roles once I log in to PostgreSQL-15. Nonetheless, if the service
> exists, why not use it? And if the service exists but is not working, could it
> be that there is a problem that will become apparent later on, when, unlike
> now, it would be during a critical moment? Thus the question, what can I do
> to resolve this issue?
> Is there something I am missing?
>

createuser has to log in to the server to do its work.  Whatever credentials
or method you use to connect via psql (usually via sudo and the postgres
o/s user), you need to use here as well.
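
As a rough illustration (the role name and password below are just the
placeholders from your example), the equivalent can be done from a psql
session opened with working credentials:

  -- open a session as the bootstrap superuser, typically: sudo -u postgres psql
  CREATE ROLE company_name LOGIN SUPERUSER PASSWORD 'xxx';

  -- or, from the shell with the same credentials:
  --   sudo -u postgres createuser --interactive --pwprompt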

David J.


Re: role "my_account" does not exist

2023-08-03 Thread Adrian Klaver

On 8/3/23 17:17, Amn Ojee Uw wrote:

Hello!

Following this web page, I have tried to
create a new role/user, but to no avail.


Why use random Web tutorials, most of which are full of errors, instead 
of the Postgres documentation:


https://www.postgresql.org/docs/current/app-createuser.html



After typing the following command: createuser --interactive --pwprompt
I get the following questions:


Enter name of role to add: company_name
Enter password for new role: xxx
Enter it again: xxx
Shall the new role be a superuser? (y/n) y

Then this error is displayed:
createuser: error: connection to server on socket
"/var/run/postgresql/.s.PGSQL.5432"
failed: FATAL:  role "my_debian_login_account_name" does not exist


Well in your example above the role name is 'company_name' not 
'my_debian_login_account_name', so the error message is correct.




However, this issue is not really relevant, per se, since I am able to
create roles once I log in to PostgreSQL-15. Nonetheless, if the service
exists, why not use it? And if the service exists but is not working, could


createuser works.

it be that there is a problem that will become apparent later on, when,
unlike now, it would be during a critical moment? Thus the question,
what can I do to resolve this issue?

Is there something I am missing?



Yes, reading the documentation and paying attention to what you are doing.




Thanks




--
Adrian Klaver
adrian.kla...@aklaver.com





Re: role "my_account" does not exist

2023-08-03 Thread Amn Ojee Uw
I came to understand PG a bit better and then realized where I had
gone wrong. Thanks folks, but please ignore this email.


My most sincere apologies.

On 8/3/23 8:17 p.m., Amn Ojee Uw wrote:


Hello!

Following this web page, I have tried
to create a new role/user, but to no avail.


After typing the following command: createuser --interactive --pwprompt
I get the following questions:


Enter name of role to add: company_name
Enter password for new role: xxx
Enter it again: xxx
Shall the new role be a superuser? (y/n) y

Then this error is displayed:
createuser: error: connection to server on socket
"/var/run/postgresql/.s.PGSQL.5432"
failed: FATAL:  role "my_debian_login_account_name" does not exist

However, this issue is not really relevant, per se, since I am able
to create roles once I log in to PostgreSQL-15. Nonetheless, if the
service exists, why not use it? And if the service exists but is not
working, could it be that there is a problem that will become apparent
later on, when, unlike now, it would be during a critical moment? Thus
the question, what can I do to resolve this issue?

Is there something I am missing?


Thanks



RE: Runaway Initial Table Syncs in Logical Replication?

2023-08-03 Thread Zhijie Hou (Fujitsu)
On Friday, August 4, 2023 3:24 AM Don Seiler wrote:

Hi,

Thanks for reporting.

> Logical Rep question. Publisher is PG12 on Ubuntu 18.04, subscriber is PG15 on
> Ubuntu 22.04.

Just to confirm, which minor version of PG15 are you using on the subscriber
side? Is it 15.3? I am asking because I recall we fixed a similar bug before in
commit [1].

> 
> I bumped up some values to see how initial load times change. I set
> max_logical_replication_workers to 20 and max_sync_workers_per_subscription to
> 4. I’m using 3 subscriptions, 2 of the subscriptions have 3 tables each, the
> other has a lot more, say 100 for the sake of example. What we noticed was that
> the subscriber basically maxed out the wal senders and replication slots on the
> publisher side, even when the publisher settings were bumped up to 50 each. 3
> replication slots were for the 3 subs and the rest (47) were sync workers. Was
> it creating a sync worker for each table right away, or at least trying to? The
> subscriber side was still complaining that it couldn’t create more replication
> slots on the publisher. 

Are there any ERRORs (including errors other than the ones about slot numbers)
in the logs of the publisher and subscriber?
It would be helpful to see the logs in case there are some other ERRORs that
could be causing the problem.
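
If it helps, here is a rough sketch of the queries I would use to watch what
is actually running (standard catalog views only, nothing assumed beyond them):

  -- on the subscriber: rows with a non-NULL relid are table synchronization workers
  SELECT subname, pid, relid, received_lsn
  FROM pg_stat_subscription;

  -- on the publisher: the per-subscription slots plus the extra slots
  -- created for the initial table copies
  SELECT slot_name, active, active_pid
  FROM pg_replication_slots;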

> 
> I was expecting to see max_sync_workers_per_subscription (4) x # of subs (3) =
> 12 sync workers in action so this was a big surprise. Is this expected
> behavior?


Best Regards,
Hou zj

[1] 
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=f6c5edb8abcac04eb3eac6da356e59d399b2bcef


org.postgresql.util.PSQLException: ERROR: invalid XML content. Huge Input lookup

2023-08-03 Thread Sai Teja
Hi team,

I am trying to migrate data from DB2 to PostgreSQL, and one of the
tables has XML data.
For one of the files (13MB) I'm facing an error:

ERROR: invalid XML content
Detail: line 418061: internal error: Huge input lookup
nested exception is org.postgresql.util.PSQLException: ERROR: invalid XML
content

I even tried with a larger XML payload (30MB), and in that case I was able
to migrate the data successfully.

It is only with that one file that I'm facing the issue.
Later I learned that there are some special characters in that file, but I'm
not sure whether that could be the cause or not.

I'm using java.sql.PreparedStatement, specifically
PreparedStatement.setSQLXML(iterator, xmlobject), to migrate the data from
DB2 to PostgreSQL.
When I tried against my local PostgreSQL (which is on Windows and located in
the APAC region) I was able to migrate successfully,
but when I try to migrate into Azure PostgreSQL (Linux OS, located in
Germany) I face this error.

I request you to please look into this and help me resolve this error.

Please let me know if you need any further details.

Looking forward to a resolution.

Thanks & regards,
Sai Teja


Re: org.postgresql.util.PSQLException: ERROR: invalid XML content. Huge Input lookup

2023-08-03 Thread Ron

On 8/3/23 21:22, Sai Teja wrote:

Hi team,

I am trying to migrate data from DB2 to PostgreSQL, and one of the
tables has XML data.

For one of the files (13MB) I'm facing an error:

ERROR: invalid XML content
Detail: line 418061: internal error: Huge input lookup
nested exception is org.postgresql.util.PSQLException: ERROR: invalid XML 
content


I even tried with a larger XML payload (30MB), and in that case I was able
to migrate the data successfully.


It is only with that one file that I'm facing the issue.
Later I learned that there are some special characters in that file, but I'm
not sure whether that could be the cause or not.


I'm using java.sql.PreparedStatement, specifically
PreparedStatement.setSQLXML(iterator, xmlobject), to migrate the data from
DB2 to PostgreSQL.
When I tried against my local PostgreSQL (which is on Windows and located in
the APAC region) I was able to migrate successfully,
but when I try to migrate into Azure PostgreSQL (Linux OS, located
in Germany) I face this error.


That sounds like it has to do with locale differences between the two systems.

*Exactly* what Pg version (including patch level) and locale are in each of 
the two systems?


Even if it's not that, *something* is configured differently between the two 
systems.
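
For reference, a quick way to collect that from each server (plain SQL, no
assumptions about either installation):

  SELECT version();       -- full server version, including the minor release
  SHOW server_encoding;
  SHOW lc_collate;
  SHOW lc_ctype;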


--
Born in Arizona, moved to Babylonia.

Re: org.postgresql.util.PSQLException: ERROR: invalid XML content. Huge Input lookup

2023-08-03 Thread Sai Teja
Thank you for the reply.

The pg version is 14.7 (Azure DB)
Locale:
LC_COLLATE & LC_CTYPE: en_US.utf_8

Local PostgreSQL version: 15.1-1 (windows-x64 binaries)

LC_COLLATE & LC_CTYPE:
English_United States.1252

Please let me know if any other information is needed.

Thanks & Best Regards,
Sai Teja






On Fri, 4 Aug, 2023, 8:03 am Ron wrote:

> On 8/3/23 21:22, Sai Teja wrote:
>
> Hi team,
>
> I am trying to migrate data from DB2 to PostgreSQL, and one of the
> tables has XML data.
> For one of the files (13MB) I'm facing an error:
>
> ERROR: invalid XML content
> Detail: line 418061: internal error: Huge input lookup
> nested exception is org.postgresql.util.PSQLException: ERROR: invalid XML
> content
>
> I even tried with a larger XML payload (30MB), and in that case I was able
> to migrate the data successfully.
>
> It is only with that one file that I'm facing the issue.
> Later I learned that there are some special characters in that file, but I'm
> not sure whether that could be the cause or not.
>
> I'm using java.sql.PreparedStatement, specifically
> PreparedStatement.setSQLXML(iterator, xmlobject), to migrate the data from
> DB2 to PostgreSQL.
> When I tried against my local PostgreSQL (which is on Windows and located in
> the APAC region) I was able to migrate successfully,
> but when I try to migrate into Azure PostgreSQL (Linux OS, located
> in Germany) I face this error.
>
>
> That sounds like it has to do with locale differences between the two
> systems.
>
> *Exactly* what Pg version (including patch level) and locale are in each
> of the two systems?
>
> Even if it's not that, *something* is configured differently between the
> two systems.
>
> --
> Born in Arizona, moved to Babylonia.
>


PITR based recovery failing due to difference in max_connections

2023-08-03 Thread Kalit Inani
Hi all,
During PITR based recovery of a postgres instance, we are getting the
following error -
'2023-06-21 23:52:52.232 PDT [24244] FATAL:  hot standby is not possible
because max_connections = 150 is a lower setting than on the master server
(its value was 500)'

Here are the steps we are following -

   1. We took a snapshot of the data disk from a Postgres leader node; let’s
      call this the source instance.

   2. Then, we modified the MAX_CONNECTIONS in that source instance to 500.

   3. Due to the modification, the following WAL record gets generated -
      rmgr: XLOG len (rec/tot): 54/54, tx: 0, lsn: 1/5EA0, prev
      1/5E28, desc: PARAMETER_CHANGE max_connections=500
      max_worker_processes=8 max_wal_senders=10 max_prepared_xacts=0
      max_locks_per_xact=64 wal_level=replica wal_log_hints=off
      track_commit_timestamp=off

   4. Next, we did a PITR based recovery on another instance. During the
      recovery we used a config file with MAX_CONNECTIONS as 150.

   5. However, the recovery fails with the following error -
      '2023-06-21 23:52:52.232 PDT [24244] FATAL:  hot standby is not possible
      because max_connections = 150 is a lower setting than on the master server
      (its value was 500)'


What are the probable solutions to fix this issue? One of the approaches we
tried was to set ‘hot_standby = off’ in postgresql.conf. By doing this, we
are successfully able to restore the source’s content on the destination
instance. However, is this the correct way to move forward?

We also read the postgres documentation for hot_standby -
https://www.postgresql.org/docs/current/hot-standby.html

It mentions -
“The settings of some parameters determine the size of shared memory for
tracking transaction IDs, locks, and prepared transactions. These shared
memory structures must be no smaller on a standby than on the primary in
order to ensure that the standby does not run out of shared memory during
recovery. For example, if the primary had used a prepared transaction but
the standby had not allocated any shared memory for tracking prepared
transactions, then recovery could not continue until the standby's
configuration is changed.”

Does this mean that turning off hot_standby and then performing a recovery
operation may lead to some unintended consequences? Do we always have to
keep these parameter (‘max_connections’) values greater than or equal to
those of the source instance?

Thank you,
Kalit.


Re: org.postgresql.util.PSQLException: ERROR: invalid XML content. Huge Input lookup

2023-08-03 Thread Ron
I would install 14.7 on your Windows box (multiversioning /is/ supported, at 
least on Linux; probably Windows too), making sure that it uses the 
en_US.utf_8 locale (or the closest Windows analog). *Then* I'd try the same 
migration on the Windows box.


After all, development boxes should run the same software version as
production...


On 8/3/23 22:46, Sai Teja wrote:

Thank you for the reply.

The pg version is 14.7 (Azure DB)
Locale:
LC_COLLATE & LC_CTYPE: en_US.utf_8

Local PostgreSQL version: 15.1-1 (windows-x64 binaries)

LC_COLLATE & LC_CTYPE:
English_United States.1252

Please let me know if any other information is needed.

Thanks & Best Regards,
Sai Teja






On Fri, 4 Aug, 2023, 8:03 am Ron wrote:

On 8/3/23 21:22, Sai Teja wrote:

Hi team,

I am trying to migrate data from DB2 to PostgreSQL, and one
of the tables has XML data.
For one of the files (13MB) I'm facing an error:

ERROR: invalid XML content
Detail: line 418061: internal error: Huge input lookup
nested exception is org.postgresql.util.PSQLException: ERROR: invalid
XML content

I even tried with a larger XML payload (30MB), and in that case I
was able to migrate the data successfully.

It is only with that one file that I'm facing the issue.
Later I learned that there are some special characters in that
file, but I'm not sure whether that could be the cause or not.

I'm using java.sql.PreparedStatement, specifically
PreparedStatement.setSQLXML(iterator, xmlobject), to migrate the data
from DB2 to PostgreSQL.
When I tried against my local PostgreSQL (which is on Windows and located
in the APAC region) I was able to migrate successfully,
but when I try to migrate into Azure PostgreSQL (Linux OS,
located in Germany) I face this error.


That sounds like it has to do with locale differences between the two
systems.

*Exactly* what Pg version (including patch level) and locale are in
each of the two systems?

Even if it's not that, *something* is configured differently between
the two systems.

-- 
Born in Arizona, moved to Babylonia.




--
Born in Arizona, moved to Babylonia.

Re: PITR based recovery failing due to difference in max_connections

2023-08-03 Thread Ron

On 8/3/23 23:47, Kalit Inani wrote:


Hi all,
During PITR based recovery of a postgres instance, we are getting the 
following error -
'2023-06-21 23:52:52.232 PDT [24244] FATAL:  hot standby is not possible 
because max_connections = 150 is a lower setting than on the master server 
(its value was 500)'



Here are the steps we are following -

1. We took a snapshot of the data disk from a Postgres leader node; let’s
   call this the source instance.

2. Then, we modified the MAX_CONNECTIONS in that source instance to 500.



Why did you do that?


3. Due to the modification, the following WAL record gets generated -
   rmgr: XLOG len (rec/tot): 54/54, tx: 0, lsn: 1/5EA0, prev
   1/5E28, desc: PARAMETER_CHANGE max_connections=500
   max_worker_processes=8 max_wal_senders=10 max_prepared_xacts=0
   max_locks_per_xact=64 wal_level=replica wal_log_hints=off
   track_commit_timestamp=off

4. Next, we did a PITR based recovery on another instance. During the
   recovery we used a config file with MAX_CONNECTIONS as 150.

5. However, the recovery fails with the following error - '2023-06-21
   23:52:52.232 PDT [24244] FATAL:  hot standby is not possible because
   max_connections = 150 is a lower setting than on the master server
   (its value was 500)'

What are the probable solutions to fix this issue?

Since it complains about a MAX_CONNECTIONS mismatch... *don't mismatch 
MAX_CONNECTIONS*.


Take a snapshot *after* setting MAX_CONNECTIONS = 500.
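
Put differently, a minimal sketch of the pre-flight check (these are the
parameters tracked in the PARAMETER_CHANGE record you showed; the GUC names
differ slightly from the record's field names, and the exact values are
whatever your source reports):

  -- on the source/primary, note the values the standby must match or exceed:
  SHOW max_connections;
  SHOW max_worker_processes;
  SHOW max_wal_senders;
  SHOW max_prepared_transactions;
  SHOW max_locks_per_transaction;

  -- the recovery instance's postgresql.conf then needs at least those values
  -- (e.g. max_connections = 500) before recovery with hot_standby = on will proceed.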

One of the approaches we tried was to set ‘hot_standby = off’ in 
postgresql.conf. By doing this, we are successfully able to restore the 
source’s content on the destination instance. However, is this the correct 
way to move forward?


We also read the postgres documentation for hot_standby - 
https://www.postgresql.org/docs/current/hot-standby.html 



It mentions -
“The settings of some parameters determine the size of shared memory for 
tracking transaction IDs, locks, and prepared transactions. These shared 
memory structures must be no smaller on a standby than on the primary in 
order to ensure that the standby does not run out of shared memory during 
recovery. For example, if the primary had used a prepared transaction but 
the standby had not allocated any shared memory for tracking prepared 
transactions, then recovery could not continue until the standby's 
configuration is changed.”



Does this mean that turning off hot_standby and then performing a recovery
operation may lead to some unintended consequences? Do we always have to
keep these parameter (‘max_connections’) values greater than or equal to
those of the source instance?



Thank you,
Kalit.


--
Born in Arizona, moved to Babylonia.

Re: question on auto_explain

2023-08-03 Thread Julien Rouhaud
Hi,

On Thu, Aug 03, 2023 at 09:45:39PM +0200, Karsten Hilbert wrote:
> > > auto_explain automatically produces the explain output of a query that is
> > > running for real.  The effect is identical to running EXPLAIN ANALYZE,
> > > except that the output you see is whatever the query itself produces; the
> > > explain output instead goes into the log.
> >
> > Thanks David. I take this to mean that auto_explain produces the same 
> > side-effects as manually running "explain analyze" does.
> >
> > Would this warrant a documentation patch ?  Like so:
> >
> >   auto_explain.log_analyze causes EXPLAIN ANALYZE output, rather than just 
> > EXPLAIN output, to be printed when an execution plan is logged. Hence
> >   the same caveats apply for INSERT/UPDATE/DELETE queries.
> >
> > Also, doesn't this make auto_explain.log_analyze = TRUE rather surprising,
> > as it
> > can make any INSERT/UPDATE/DELETE fail when it is slow for some reason ...
>
> Ah, wait, I think I've been a bit dense here. I thought it was a two-step
> process of
> first running any queries "normally", somehow magically noticing slow ones as
> per
> auto_explain.log_min_duration, and re-running those with EXPLAIN ANALYZE
> prepended.
>
> I think I understand better now: with auto_explain ALL queries are run with
> EXPLAIN ANALYZE
> prepended, BUT the output is two-fold: query results are fed to whatever
> client is on the wire protocol, and the EXPLAIN output is re-routed to the
> log. Does that sound right?

That's not how it's implemented, but that's the idea.  auto_explain indeed
doesn't run an extra EXPLAIN; it simply asks for the wanted instrumentation
data before the execution, and then outputs the actually used execution plan to
the logs.

> I think I was misled by psql discarding (?) query results (the rows)
> while displaying EXPLAIN output only.

Note that it's not psql discarding the rows but the EXPLAIN command.


> The auto_explain docs might perhaps still benefit from a more
> explicit hint towards write query side effects.

The docs say that it automatically shows the execution plans, not that it is
itself doing an EXPLAIN.  They also mention that some flags can lead to a
noticeable performance drop, since they influence all queries even if they
don't reach the configured thresholds, so that should be a strong hint that no
extra command is being run.

Now, there also seems to be some common misconception about how auto_explain
works, as it's not the first time I've heard people think or assume that it
executes extra EXPLAIN commands.  Maybe adding some extra clarification could
help, e.g. stating that the emitted plan is the one actually used during the
query execution.