On Fri, 2025-04-25 at 12:36 +1200, David Rowley wrote:
> On Fri, 25 Apr 2025 at 03:06, Laurenz Albe wrote:
> > On Fri, 2025-04-25 at 01:41 +1200, David Rowley wrote:
> > > The 79.3 seconds is the total time spent doing reads for all parallel
> > > workers. 52.6 seconds is the wall clock time elapsed to execute the
> > > query.
On Fri, 25 Apr 2025 at 03:06, Laurenz Albe wrote:
>
> On Fri, 2025-04-25 at 01:41 +1200, David Rowley wrote:
> > The 79.3 seconds is the total time spent doing reads for all parallel
> > workers. 52.6 seconds is the wall clock time elapsed to execute the
> > query.
>
> But wouldn't it read "loops=
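For anyone trying to reproduce this kind of output, a minimal session sketch (the table name is a placeholder; track_io_timing must be on for the "I/O Timings" line to appear):

```sql
-- Enable per-session I/O timing collection (needs sufficient privileges).
SET track_io_timing = on;

-- BUFFERS adds the shared hit/read counters; the "I/O Timings" line reports
-- time spent in reads, summed over the leader and all parallel workers, so it
-- can legitimately exceed the query's wall-clock execution time.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM some_large_table;  -- placeholder query
```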
On Fri, 2025-04-25 at 01:41 +1200, David Rowley wrote:
> On Fri, 25 Apr 2025 at 01:32, Marc Millas wrote:
> > Nested Loop Left Join (cost=941400.77..966327.57 rows=3 width=653) (actual
> > time=52655.694..62533.811 rows=346 loops=1)
> > Buffers: shared hit=10068265 read=396705 dirtied=1858 written=218, temp
> > read=429687 written=115187
Thanks!
Marc MILLAS
Senior Architect
+33607850334
www.mokadb.com
On Thu, Apr 24, 2025 at 3:41 PM David Rowley wrote:
> On Fri, 25 Apr 2025 at 01:32, Marc Millas wrote:
> > Nested Loop Left Join (cost=941400.77..966327.57 rows=3 width=653)
> (actual time=52655.694..62533.811 rows=346 loops=1)
On Fri, 25 Apr 2025 at 01:32, Marc Millas wrote:
> Nested Loop Left Join (cost=941400.77..966327.57 rows=3 width=653) (actual
> time=52655.694..62533.811 rows=346 loops=1)
> Buffers: shared hit=10068265 read=396705 dirtied=1858 written=218, temp
> read=429687 written=115187
> I/O Timings: r
hello,
there is something I seem missing in explain analyze buffers results when
track_io_timing is on:
this is the first lines of an explain: (the request is of no interest, such
a result is quite frequent)
Nested Loop Left Join (cost=941400.77..966327.57 rows=3 width=653) (actual
time=52655.694..62533.811 rows=346 loops=1)
> Running explain directly on a top-level query doesn't establish any of the
> needed hooks to produce these additional explains. While those hooks could
> be installed, the output for manual explain is the single query result sent
> to the client. How multiple explains could be combi
On Sat, 19 Oct 2024 at 23:48, Vijaykumar Jain <
vijaykumarjain.git...@gmail.com> wrote:
>
>
> ok, it makes sense for the reason of having auto_explain, but maybe I didn't
> ask it correctly:
> why do we not have the extended flags in auto_explain , in , explain wrt
> nested_
On Sat, Oct 19, 2024 at 10:43 AM Vijaykumar Jain <
vijaykumarjain.git...@gmail.com> wrote:
> i tried to check the code for auto_explain , there is nothing that helps
> understand why it was provided as a separate module.
>
Probably because output to log was easier than reworking the internals to
make output to client happen.
> Probably because output to log was easier than reworking the internals to
> make output to client happen.
The reason that auto_explain exists is to capture plans for queries
that are being issued by real applications --- which aren't programmed
to issue EXPLAIN for themselves, and likely don
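A minimal sketch of the usual auto_explain setup for exactly that purpose (thresholds are illustrative, not recommendations):

```sql
-- For production use, put auto_explain in shared_preload_libraries in
-- postgresql.conf; LOAD works for ad-hoc, per-session experiments.
LOAD 'auto_explain';

SET auto_explain.log_min_duration = '250ms';  -- log plans of statements slower than this
SET auto_explain.log_analyze = on;            -- include actual rows and timings
SET auto_explain.log_nested_statements = on;  -- also capture statements run inside functions/triggers
```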
> >> understand why it was provided as a separate .
>
> > Probably because output to log was easier than reworking the internals to
> > make output to client happen.
>
> The reason that auto_explain exists is to capture plans for queries
> that are being issue
On Sat, Oct 19, 2024 at 10:05 AM Vijaykumar Jain <
vijaykumarjain.git...@gmail.com> wrote:
> Why does auto_explain have extended flags to support nested statements and
> triggers whereas explain does not.
> the objects of concern have the same ownership and access, so it does not
> even sound like a security thing.
Hi,
Why does auto_explain have extended flags to support nested statements and
triggers whereas explain does not.
the objects of concern have the same ownership and access, so it does not
even sound like a security thing.
or maybe it is just a format/display issue ?
i tried to check the doc
On 2024-07-06 11:09:23 +0530, Krishnakant Mane wrote:
>
> On 7/5/24 21:10, Peter J. Holzer wrote:
> > If I understand https://github.com/sraoss/pg_ivm correctly, the
> > materialized view will be updated within the same transaction. So it's
> > just the same as any other change in the database:
>
On 7/5/24 21:10, Peter J. Holzer wrote:
If I understand https://github.com/sraoss/pg_ivm correctly, the
materialized view will be updated within the same transaction. So it's
just the same as any other change in the database:
Neither client will wait for the other. The first client will see ei
On 2024-07-05 18:58:21 +0530, Krishnakant Mane wrote:
> I have decided to use materialised views (more precisely IVM ).
[...]
> Given this setup I have a very specific question.
>
> if client 1 has asked for his balance sheet and the view is being queried,
> then what will happen if client 2 happ
Hello all.
I am an old timer postgresql user for last 17 years.
I have used it for my open source as well as enterprise cloud services.
In my current fintech solution, I need to do some performance optimisations.
I have decided to use materialised views (more precisely IVM ).
So on my postgre
Thomas Nyberg writes:
> Or put another way, does an EXPLAIN UPDATE on a table really require
> UPDATE permissions to run? Why can't it be done without UPDATE
> permissions?
IIRC, the reasoning is this: should you be allowed to run an EXPLAIN
on a table that you have no permissi
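The rule is easy to check directly; a small sketch with hypothetical names:

```sql
-- All names here are hypothetical.
CREATE TABLE accounts (id int PRIMARY KEY, balance numeric);
CREATE ROLE reporting;
GRANT SELECT ON accounts TO reporting;

SET ROLE reporting;
EXPLAIN SELECT * FROM accounts;                       -- allowed: SELECT is granted
EXPLAIN UPDATE accounts SET balance = 0 WHERE id = 1; -- rejected: no UPDATE privilege
RESET ROLE;
```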
Hello,
Or put another way, does an EXPLAIN UPDATE on a table really require
UPDATE permissions to run? Why can't it be done without UPDATE
permissions? I can understand EXPLAIN ANALYZE UPDATE requiring it, since
it actually executes the statement, but why would a regular EXPLAIN
U
Thank you very much for your quick answer and patch!
I tested and this is exactly what I was looking for :-)
Best
Chantal
Le 18/01/2024 à 18:55, Tom Lane a écrit :
Chantal Keller writes:
I would like "explain" to output formulas for filtering over
subplans. Is it possible?
Chantal Keller writes:
> I would like "explain" to output formulas for filtering over
> subplans. Is it possible?
No, and that's been a to-do item for a long time.
Currently, EXPLAIN just ignores the "testexpr" field of SubPlan
nodes, which is what you are after.
Hi
I would like "explain" to output formulas for filtering over
subplans. Is it possible?
Here is a minimal example. Consider the queries:
create table t(a int);
explain (format xml, verbose true) select * from t where a >= all
(select * from t);
I put the result of the second
execution plan you wish to see.
> >>>
>
> seems you can use it with SELECT INTO.
>
> explain (ANALYZE, BUFFERS, TIMING, VERBOSE ON, BUFFERS ON, WAL ON, SETTINGS
> ON)
> select count(*) as cnt INTO s from tenk1;
>
>
We try not to encourage such things. And CTAS is listed.
David J.
hi.
https://www.postgresql.org/docs/current/sql-explain.html
>>>
Any SELECT, INSERT, UPDATE, DELETE, MERGE, VALUES, EXECUTE, DECLARE,
CREATE TABLE AS, or CREATE MATERIALIZED VIEW AS statement, whose
execution plan you wish to see.
>>>
seems you can use it with SELECT INTO.
Hi Community!
I have already read the comments in proc.h, read some related questions about
it, and read chapter "74. Transaction Processing" of the PostgreSQL
documentation.
But I really don't know how PGPROC works during a transaction.
Can someone provide me some help? Thanks in advance!
Yours,
W
In the MemoryContext source code:
/* context_freelists[0] is for default params, [1] for small params */
static AllocSetFreeList context_freelists[2] =
{
    { 0, NULL },
    { 0, NULL }
};
I really don't know what this variable means; can someone share your
understanding?
Th
> ...Thus, filtering only
> occurs for your numfic column.
>
> I just wanted to clear up any confusion here. The above simply is not
> true. If you see the filter in EXPLAIN, then the executor *is*
> applying that filter.
>
> The planner could likely work a bit harder to prove which filters are
>
> Even though there is a filter, there is not. Thus, filtering only occurs
> for your numfic column.
I just wanted to clear up any confusion here. The above simply is not
true. If you see the filter in EXPLAIN, then the executor *is*
applying that filter.
The planner could likely work a bit harder
Hi,
@Marc, I think there is no problem. Even though it says it is filtered by
ladate, it is not. Because of the partition.
As you can see for each index scan it uses a different partition and those
partition boundaries are already specified logically.
For example; "Parallel Index Scan using table1_
Le mer. 28 juin 2023 à 22:46, Laurenz Albe a
écrit :
> On Wed, 2023-06-28 at 19:25 +0200, Marc Millas wrote:
> > Hi Laurenz, as said, in each partition there is only one value for
> ladate.
>
> The planner doesn't seem to take that into account.
>
Indeed. I did check values in pg_statistic. And r
On Wed, 2023-06-28 at 19:25 +0200, Marc Millas wrote:
> Hi Laurenz, as said, in each partition there is only one value for ladate.
The planner doesn't seem to take that into account.
Yours,
Laurenz Albe
On Wed, Jun 28, 2023 at 6:48 PM Laurenz Albe
wrote:
> On Wed, 2023-06-28 at 17:29 +0200, Marc Millas wrote:
> > https://explain.depesz.com/s/Opk0
> >
> > The big table (10 billion rows) is split into around 130 partitions, one
> by month.
> > the ladate column is the partition key, and it does hav
On Wed, 2023-06-28 at 17:29 +0200, Marc Millas wrote:
> https://explain.depesz.com/s/Opk0
>
> The big table (10 billion rows) is split into around 130 partitions, one per
> month.
> the ladate column is the partition key, and it does have 1 value for each
> partition.
> there is an index on the nu
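A cut-down sketch of the setup being described (names and dates are illustrative, not the poster's schema):

```sql
CREATE TABLE big (ladate date NOT NULL, numfic int) PARTITION BY RANGE (ladate);
CREATE TABLE big_2023_06 PARTITION OF big
    FOR VALUES FROM ('2023-06-01') TO ('2023-07-01');
CREATE TABLE big_2023_07 PARTITION OF big
    FOR VALUES FROM ('2023-07-01') TO ('2023-08-01');

-- A constant comparison on the partition key lets the planner prune at plan
-- time, so only the matching partition should appear in the plan:
EXPLAIN SELECT * FROM big WHERE ladate = DATE '2023-06-15';
```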
Sorry, Gmail sent an incomplete message.
Hi,
Postgres 14.2 (for one more month)
The explain plan and request is here:
https://explain.depesz.com/s/Opk0
The big table (10 billion rows) is split into around 130 partitions, one per
month.
The ladate column is the partition key, and it does have 1 value for each
partition.
Hi,
Postgres 14.2 (for one more month)
The explain plan and request is here:
https://explain.depesz.com/s/Opk0
The big table is split into around 130 partitions, one per month.
the ladate column is the partition key, and it does have 1 value for each
partition.
there is a
Marc MILLAS
Senior
On Mon, May 8, 2023 at 8:29 AM Kent Tong wrote:
> Hi,
>
> I have a complex query involving over 15 joins and a CTE query and it
> takes over 17s to complete. The output of EXPLAIN ANALYZE includes
> (somewhere deep inside):
>
> Index Scan using document_pkey on document d
Kent Tong writes:
> I have a complex query involving over 15 joins and a CTE query and it takes
> over 17s to complete.
If you want constructive comments about that, you're going to need to show
us the whole thing (and the underlying tables' schemas). Little bits in
isolation are not very intelligible.
On 5/8/23 07:29, Kent Tong wrote:
Hi,
I have a complex query involving over 15 joins and a CTE query and it
takes over 17s to complete. The output of EXPLAIN ANALYZE includes
(somewhere deep inside):
Index Scan using document_pkey on document document0_
(cost=0.29..8.31 rows=1
Hi,
I have a complex query involving over 15 joins and a CTE query and it takes
over 17s to complete. The output of EXPLAIN ANALYZE includes (somewhere
deep inside):
Index Scan using document_pkey on document document0_ (cost=0.29..8.31
rows=1 width=3027) (actual time=16243.959..16243.961 rows
On Thu, Jan 12, 2023 at 01:33:56AM -0600, Ron wrote:
> On 1/12/23 01:11, Tom Lane wrote:
> > Ron writes:
> > > On 1/12/23 00:07, Tom Lane wrote:
> > > > No, not directly, but you could look at EXPLAIN ANALYZE to see which
> > > > of the RI triggers is eat
On 1/12/23 01:11, Tom Lane wrote:
Ron writes:
On 1/12/23 00:07, Tom Lane wrote:
No, not directly, but you could look at EXPLAIN ANALYZE to see which
of the RI triggers is eating the time.
Good to know, but even deleting one day of data (90,000 rows using an index
scan on the date field
Ron writes:
> On 1/12/23 00:07, Tom Lane wrote:
>> No, not directly, but you could look at EXPLAIN ANALYZE to see which
>> of the RI triggers is eating the time.
> Good to know, but even deleting one day of data (90,000 rows using an index
> scan on the date field) takes
On 1/12/23 00:07, Tom Lane wrote:
Ron writes:
Deletes are slow in one table with many indices and FK references. That's
not surprising, but it's *VERY* slow, and I'm trying to figure out why.
Is there any EXPLAIN option which shows what "query plans" Pg is using when
checking FK references?
On Wed, Jan 11, 2023 at 10:13:11PM -0800, Christophe Pettus wrote:
>
> Also, IIRC, the SELECTs generated to do foreign key checks do appear in
> pg_stat_statements, so that might provide a guide to ones that are consuming
> an unusually large amount of resources.
Yes, but you need to have pg_stat_
> On Jan 11, 2023, at 22:07, Tom Lane wrote:
> No, not directly, but you could look at EXPLAIN ANALYZE to see which
> of the RI triggers is eating the time. It's not going to be hard to
> figure out which one(s) are using indexed plans and which are not.
Also, IIRC, the SEL
Ron writes:
> Deletes are slow in one table with many indices and FK references. That's
> not surprising, but it's *VERY* slow, and I'm trying to figure out why.
> Is there any EXPLAIN option which shows what "query plans" Pg is using when
> checking FK
Pg 12.11
Deletes are slow in one table with many indices and FK references. That's
not surprising, but it's *VERY* slow, and I'm trying to figure out why.
Is there any EXPLAIN option which shows what "query plans" Pg is using when
checking FK references (index sc
Mark Mizzi writes:
> So to confirm, EXPLAIN ANALYZE does not detoast rows?
Not values that would have been transmitted to the client, no.
regards, tom lane
Hi, thanks for your reply.
So to confirm, EXPLAIN ANALYZE does not detoast rows? The original goal of
these queries was to see the effect of fetching from toast tables on query
performance.
On Thu, 27 Oct 2022 at 15:43, Tom Lane wrote:
> Mark Mizzi writes:
> > When I run
>
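The effect is easy to demonstrate; a sketch against the thread's "unary" table, with a placeholder column name:

```sql
-- EXPLAIN ANALYZE executes the plan but discards the output, so wide values
-- that would only be detoasted when sent to the client are never fetched
-- from the TOAST table:
EXPLAIN ANALYZE SELECT * FROM unary;

-- Forcing a function to consume the column makes the executor detoast it,
-- exposing the real cost of the TOAST fetches ("payload" is a placeholder):
EXPLAIN ANALYZE SELECT sum(length(payload)) FROM unary;
```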
Mark Mizzi writes:
> When I run
> EXPLAIN ANALYZE SELECT * FROM unary;
> I get the following result:
> Seq Scan on unary (cost=0.00..1637.01 rows=11 width=18) (actual
> time=0.009..6.667 rows=11 loops=1)
> Planning Time: 0.105 ms
> Execution Time: 8.565 ms
>
Hi,
On Thu, Oct 27, 2022 at 03:28:14PM +0200, Mark Mizzi wrote:
>
> EXPLAIN ANALYZE SELECT * FROM unary;
>
> I get the following result:
>
> Seq Scan on unary (cost=0.00..1637.01 rows=11 width=18) (actual
> time=0.009..6.667 rows=11 loops=1)
> Planning Time: 0.105 ms
generate_series(0, 10);
-- update planner statistics on the unary table.
ANALYZE unary;
When I run
EXPLAIN ANALYZE SELECT * FROM unary;
I get the following result:
Seq Scan on unary (cost=0.00..1637.01 rows=11 width=18) (actual
time=0.009..6.667 rows=11 loops=1)
Planning Time: 0.105 ms
Execution Time: 8.565 ms
On Tue, Oct 25, 2022 at 7:54 AM jack...@gmail.com wrote:
> /*
> * In a "leaf" node representing a VALUES list, the above fields are all
> * null, and instead this field is set. Note that the elements of the
> * sublists are just expressions, without ResTarget decoration. Also note
> * that a
/*
* In a "leaf" node representing a VALUES list, the above fields are all
* null, and instead this field is set. Note that the elements of the
* sublists are just expressions, without ResTarget decoration. Also note
* that a list element can be DEFAULT (represented as a SetToDefault
* node),
Thanks for the clarification, Peter.
On Sat, Oct 22, 2022, 05:32 Peter J. Holzer wrote:
> On 2022-10-20 09:56:23 -0700, Christophe Pettus wrote:
> > On Oct 20, 2022, at 09:52, Vince McMahon
> wrote:
> > > The number of rows are different.
> >
> > This is
On 2022-10-20 09:56:23 -0700, Christophe Pettus wrote:
> On Oct 20, 2022, at 09:52, Vince McMahon
> wrote:
> > The number of rows are different.
>
> This isn't unexpected. EXPLAIN does not actually run the query and
> determine how many rows are returned; it calculates an estimate based on
> the current system statistics.
> On Oct 20, 2022, at 09:52, Vince McMahon
> wrote:
> The number of rows are different.
This isn't unexpected. EXPLAIN does not actually run the query and determine
how many rows are returned; it calculates an estimate based on the current
system statistics, which v
I didn't get a reply, so I am trying again.
I executed the following statements 3 times
> explain (analyze, buffers) select * from table1
>
> The number of rows are different. Is the table corrupted? How to confirm
> and how to fix it?
>
can you give me a link to study it?
Danny Shemesh writes:
> A tool I seem to be missing, and I wondered if such exists, is to have the
> planner output alternative plans for a given query, i.e. to say, give me
> the x top plans sorted by cost - I believe this would help shed some light
> on the internal state machine and subsequent
Hey all !
I'm currently optimizing queries and indices on a relatively large dataset;
one of the frequent questions I seem to ask myself is why the planner
chooses plan A over B.
Reading the docs, blogs, stack exchange posts, wiki, ... helps in trying to
tinker with the query or indices in a way
I found answer on https://www.unicode.org/reports/tr35/tr35-collation.html
and https://cldr.unicode.org/index/bcp47-extension
On Wed, May 25, 2022 at 1:52 PM jian he wrote:
>
> code from
> https://www.postgresql.org/message-id/flat/5e756dd6-0e91-d778-96fd-b1bcb06c161a%402ndquadrant.com
>
>> DRO
jian he wrote:
> So which website can I get the info about the "kf" and "u".
https://unicode.org/reports/tr35/#u_Extension
https://www.unicode.org/reports/tr35/tr35-collation.html#Collation_Settings
Best regards,
--
Daniel Vérité
https://postgresql.verite.pro/
Twitter: @DanielVerite
code from
https://www.postgresql.org/message-id/flat/5e756dd6-0e91-d778-96fd-b1bcb06c161a%402ndquadrant.com
> DROP DATABASE IF EXISTS dbicu;
> CREATE DATABASE dbicu LOCALE_PROVIDER icu LOCALE 'en_US' ICU_LOCALE
> 'en-u-kf-upper' template 'template0';
> \c dbicu
> CREATE COLLATION upperfirst (provi
Good day!
There's a query inside a DO block which -- because it's parameterized -- I'd
rather analyze while it's in the FOR loop of a DO block, instead of pulling
it out and hard-coding the parameters.
Is this possible?
No.
Why not use the auto_explain module?
postgres=# LOAD 'auto_explain';
On Thu, Jul 1, 2021 at 9:22 AM Michael Lewis wrote:
> It sounds like you are wanting to run 'explain analyze [query]' inside a
> loop inside a DO block. That isn't possible as far as I know, but
> auto_explain and log_nested_statements should be able to let you profile
> the whole thing.
One way to replicate the behavior of a plpgsql query is
to set it up as a PREPAREd statement (with parameters for any
plpgsql variables it references) and use EXPLAIN EXECUTE.
This also works if you're curious about the behavior of a
query issued via PQexecParams or the like.
It's recommendable to repea
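A sketch of that technique, with illustrative names standing in for the DO block's query and loop variable:

```sql
-- The loop variable becomes parameter $1 of a prepared statement.
PREPARE loop_query (int) AS
    SELECT * FROM some_table WHERE some_col = $1;

-- EXPLAIN EXECUTE (optionally with ANALYZE) shows the plan the way plpgsql
-- would see it, parameters and all:
EXPLAIN (ANALYZE, BUFFERS) EXECUTE loop_query (42);

DEALLOCATE loop_query;
```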
It sounds like you are wanting to run 'explain analyze [query]' inside a
loop inside a DO block. That isn't possible as far as I know, but
auto_explain and log_nested_statements should be able to let you profile
the whole thing and perhaps you can pick out the part you want from the
logs.
Postgresql 12.5
There's a query inside a DO block which -- because it's parameterized -- I'd
rather analyze while it's in the FOR loop of a DO block, instead of pulling
it out and hard-coding the parameters.
Is this possible? If so, where do I put the EXPLAIN statement?
> Laurenz Albe writes:
> > "generate_series" has a support function from v12 on:
> True, but I don't think it can do anything with non-constant inputs,
> as we have in the OP's case.
As confirmed by this:
https://dbfiddle.uk/?rdbms=postgres_12&fiddle=d0b70717faca3b094be8b4a096758b94
Plan:
=
Laurenz Albe writes:
> On Sat, 2021-06-05 at 00:12 +1200, David Rowley wrote:
>> As for the call to generate_series, you're not likely to ever get any
>> great estimation from that.
> "generate_series" has a support function from v12 on:
True, but I don't think it can do anything with non-constant inputs, as we
have in the OP's case.
On Sat, 5 Jun 2021 at 00:55, Laurenz Albe wrote:
>
> On Sat, 2021-06-05 at 00:12 +1200, David Rowley wrote:
> > As for the call to generate_series, you're not likely to ever get any
> > great estimation from that. The number of rows returned by a call to
> > that particular function are just what
> ...in this case, 1000. The other generate_series functions which take
> INT and BIGINT inputs do have a prosupport function.
"generate_series" has a support function from v12 on:
EXPLAIN SELECT * FROM generate_series(1, 25);
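A quick way to see the difference the support function makes (the row estimates noted below are what v12+ typically shows, not guaranteed output):

```sql
-- Constant arguments: the prosupport function can compute the exact count,
-- so the plan should estimate rows=25:
EXPLAIN SELECT * FROM generate_series(1, 25);

-- Non-constant arguments: the planner falls back to the default set-returning
-- function estimate (pg_proc.prorows, i.e. 1000 for generate_series):
EXPLAIN SELECT * FROM generate_series(1, (random() * 100)::int);
```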
On Fri, 4 Jun 2021 at 22:59, Pól Ua Laoínecháin wrote:
> Now, we've gone from 1,160,000 to 39,830 rows (progress? :-) ) and a
> cost of ~ 1M (compared with 168k for the first query).
The estimates are not that meaningful due to a lack of table
statistics on the "test" table. If you run ANALYZE o
>
> Query:
>
>
> SELECT
> id,
> GENERATE_SERIES
> (t.start_date, t.end_date, '1 DAY')::DATE AS sd,
> t.end_date
> FROM test t
> ORDER BY t.id, t.start_date;
> =
>
> Now,
SELECT id, GENERATE_SERIES(t.start_date, t.end_date, '1 DAY')::DATE AS sd,
t.end_date
FROM test t
ORDER BY t.id, t.start_date;
=
Now, the EXPLAIN (ANALYZE, BUFFERS) of this query is as follow - see
fiddle - with 7 records:
QUERY PLAN
Sort (cost=165708.62..168608.62 rows=116 width=10) (actual
time=0.083..0.087 rows
Il giorno mar 27 apr 2021 alle ore 11:38 David Rowley
ha scritto:
> On Tue, 27 Apr 2021 at 21:15, Matteo Bonardi wrote:
> > Explain plan: https://explain.depesz.com/s/BXGT
> >
> > Usually I'm looking for "external merge Disk" to see temp files usage
> b
On Tue, 27 Apr 2021 at 21:15, Matteo Bonardi wrote:
> Explain plan: https://explain.depesz.com/s/BXGT
>
> Usually I'm looking for "external merge Disk" to see temp files usage but, in
> this case, the only reference to that is 299,368kB in the last but one node
>
On Tue, 2021-04-27 at 11:15 +0200, Matteo Bonardi wrote:
> I have a query that creates a large number of temporary files, in this
> example ~ 45GB.
> Looking at the query plan I can't figure out where temporary files are being
> generated.
>
> Explain plan: https://ex
Hi everybody,
I have a query that creates a large number of temporary files, in this
example ~ 45GB.
Looking at the query plan I can't figure out where temporary files are
being generated.
Explain plan: https://explain.depesz.com/s/BXGT
Usually I'm looking for "external merge Disk" to see temp files usage
On 2021-02-21 10:14:04 -0700, Michael Lewis wrote:
> No issues for us. We have used a low sample rate of 1% or so and gotten some
> very useful data.
Oh, somehow I never noticed the auto_explain.sample_rate parameter in
the docs. Good to know.
hp
--
_ | Peter J. Holzer| Story mu
No issues for us. We have used a low sample rate of 1% or so and gotten
some very useful data. Particularly with logging nested statements so we
can profile stored procs more easily than adding a large number of raise
notice statements.
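A hedged sketch of that kind of configuration (values illustrative):

```sql
-- Assumes auto_explain is in shared_preload_libraries (or has been LOADed).
SET auto_explain.log_min_duration = 0;        -- consider every statement...
SET auto_explain.sample_rate = 0.01;          -- ...but explain only ~1% of them
SET auto_explain.log_nested_statements = on;  -- include statements inside stored procs
```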
With shared_buffers=16GB, pg_top shows:
last pid: 2613; load avg: 0.49, 0.45, 0.37; up 0+00:19:21
16:41:16
16 processes: 16 sleeping
CPU states: 9.2% user, 0.0% nice, 0.2% system, 90.4% idle, 0.1% iowait
Memory: 21G used, 42G free, 29M buffers, 18G cached
DB activity: 39 tps, 0
Thank you all, I will try at first
shared_buffers = 16GB and an
index on words_scores(uid, length(word) desc)
On 2021-02-20 06:43:28 -0700, Michael Lewis wrote:
> Using auto_explain may be needed to catch the plan when it is slow, if the
> other suggestions don't resolve things for you.
Yes, auto_explain is very helpful. Unfortunately, by default it only
does an explain, not the equivalent of
Using auto_explain may be needed to catch the plan when it is slow, if the
other suggestions don't resolve things for you.
> ...words played by a user (they are displayed at the player profile
> page).
>
> I have run the EXPLAIN ANALYZE here: https://explain.depesz.com/s/IcqN
Well, I wouldn't call a query with 8 joins "simple".
I notice two things:
1) It does a lot more than the description ("S
disks).
So it might be that when you executed the EXPLAIN the data just happened
to be in RAM (that's almost certainly the case if you executed the same
query without EXPLAIN before that) while in those cases where it was
slow, the data (or a large part of it) had to be loaded from disk first.
Yes, Michael, that I have noticed too, but should have written more in my
original mail.
The query when I try it does run in 15ms, but evening logs show the query
(I think only small percentage of it) running 1-3s.
At the same time my CentOS 8 server with 64 GB RAM is never loaded, the
load avera
Your explain analyze is showing an example that runs in less than 15ms, and
your logs indicate queries taking 2-3 seconds. I am missing part of the
picture here. Can you clarify?
by a user (they are displayed at the player profile
page).
I have run the EXPLAIN ANALYZE here: https://explain.depesz.com/s/IcqN
It is difficult for me to understand, what could be missing there, does
anybody please have an idea?
Thank you for any input
Alex
P.S. Here are the tables involved in
Tom,
Ok, but how about reading from temporary tables? We could mark the function as
"PARALLEL SAFE" instead of "PARALLEL RESTRICTED" in this case if it's
important. Actually, I rewrote the function without temp tables access but
that's not helpful - the function marked as "PARALLEL SAFE" is not
mobigroup writes:
>> Perhaps you didn't define it as PARALLEL SAFE.
> The function is marked as "PARALLEL RESTRICTED" because it uses temp tables
> (and I tested it as PARALLEL SAFE with the same result... parallelisation
> doesn't work anyway).
If it writes into temp tables then it can't be r
Thanks for the ideas, I tested them:
>
> The essential question here is when the function is executed, so you should
> use
> EXPLAIN (VERBOSE) to see that.
Append (cost=0.00..5.12 rows=4 width=32)
-> Gather (cost=0.00..1.26 rows=1 width=32)
Output: p
On Tue, 2020-12-01 at 14:38 +0700, mobigroup wrote:
> I have strange behaviour when EXPLAIN returns parallel execution plan but the
> query execution is not parallel. The query looks as
>
> SELECT
> plpgsql_function(...parameters…)
> FROM table as t
Hi,
I have strange behaviour when EXPLAIN returns parallel execution plan but the
query execution is not parallel. The query looks as
SELECT
plpgsql_function(...parameters…)
FROM table as t
WHERE id=1
UNION ALL
SELECT
...pulling tuples from workers, it's not dedicated to helping out.
>
> Sounds like it would help to set max_parallel_workers = 1 before running EXPLAIN
> ANALYZE in order to simplify the numbers, yes? Or is there a possibility that
> doing so would send the planner down an entirely diff
> On Aug 19, 2020, at 6:24 PM, David Rowley wrote:
>
> On Thu, 20 Aug 2020 at 09:55, Philip Semanchuk
> wrote:
>> I could use some help interpreting EXPLAIN ANALYZE output.
>>
>> -> Index Scan using ix_foo on t1 (cost=0.69..68.57 rows=3283 width=105)
On Thu, 20 Aug 2020 at 09:55, Philip Semanchuk
wrote:
> I could use some help interpreting EXPLAIN ANALYZE output.
>
> -> Index Scan using ix_foo on t1 (cost=0.69..68.57 rows=3283 width=105)
> (actual time=0.006..0.918 rows=3760 loops=94)
>
> The actual rows returned by t
Hi all,
I could use some help interpreting EXPLAIN ANALYZE output.
-> Index Scan using ix_foo on t1 (cost=0.69..68.57 rows=3283 width=105)
(actual time=0.006..0.918 rows=3760 loops=94)
The actual rows returned by this plan node ~= 3760 * 94 = 353,440. Did postgres
expect (estimate) 3283 rows