eer".
> any idea why it faced this error?
>
This means the TCP/IP connection got closed abruptly. It might be
because the client had some issue, or something on the network path
decided to block the connection (like a router or a firewall).
regards
--
Tomas Vondra
when not using
remote estimates (use_remote_estimate=false).
3) Could it be some sort of memory pressure/swapping? But that would
look different in the profile, AFAIK.
regards
[1] https://github.com/tds-fdw/tds_fdw/blob/master/ForeignTableCreation.md
--
Tomas Vondra
ote query is the same ...
Maybe try setting log_temp_files=0 before running the query. Writing the
foreign scan result set into a temp file could be happening on one of
the machines only.
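For example, a minimal sketch (log_temp_files is superuser-settable per
session; otherwise set it via ALTER SYSTEM / postgresql.conf and reload):

SET log_temp_files = 0;   -- log every temporary file, regardless of size
-- then re-run the query and compare the server logs on both machines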
regards
--
Tomas Vondra
On 11/9/24 15:05, Achilleas Mantzios wrote:
>
> On 9/11/24 12:49, Tomas Vondra wrote:
>> On 11/8/24 20:32, Achilleas Mantzios wrote:
>>> Dear All,
>>>
>>> we have hit a serious performance regression going from 10.23 → 16.4 as
>>> far
ce will be an indication
regarding what got that slower. It might be something in PG or in the
FDW, hard to say.
regards
--
Tomas Vondra
eries).
BTW you suggested each partition has ~250k rows, but the explain plan
snippet you shared does not seem to be consistent with that - it only
shows 2500-5000 rows per partition. If you run ANALYZE on the table,
does that change the plan?
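For example (hypothetical table name; on a partitioned table this collects
statistics for the parent and recurses into the partitions):

ANALYZE my_partitioned_table;   -- refreshes the row estimates used by the planner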
regards
--
Tomas Vondra
EnterpriseDB:
o investigate (or hopefully fix!) the issue?
>
A reproducer would be great - a script that creates user/roles, and
triggers the long login time would allow us to investigate that.
Another option would be to get a perf profile from the process busy with
logging the user in - assuming it
e extra complexity of
partitioning the data. If it happens only occasionally (a couple times a
year), it probably is not. You'll just delete the data and reuse the
space for new data.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
. Why do you think this would not be expected?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
har(3000),address varchar(3000),city varchar(900),salary
> smallint,
> pincode bigint,sales numeric,phone real,amount double precision,
> dob date,starttime timestamp,timezone TIMESTAMP WITH TIME ZONE,
> status boolean,timenow time,timelater TIME WITH TIME ZONE,col1 int,
> col2 char,col3 varchar(3000),col4 varchar(3000),col5 varchar(3000),
> col6 varchar(900),col7 smallint,col8 bigint,col9 numeric,col10 real,
> col11 double precision,col12 date,col13 timestamp,col14 TIMESTAMP WITH
> TIME ZONE,
> col15 boolean,col16 time,col17 TIME WITH TIME ZONE,primary
> key(col1,col7,col9));
>
> *Table Data:* 100 rows, each row with a size of 2.
>
Without the data we can't actually try running the query.
In general it's a good idea to show the "explain analyze" output for the
cases you're comparing. Not only does that show what the database is doing,
it also shows timings for different parts of the query, how many workers
were planned / actually started, etc.
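Something along these lines, for each case being compared (made-up query;
BUFFERS is optional but usually useful):

EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM some_table WHERE some_col = 42;   -- hypothetical query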
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
(cost=0.29..5.57 ...
Index Cond: (id = david.id)
...
And this is per-loop, of which there'll be 500 (because the small david
table has 500 rows).
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
s as an optimization fence
(which means the merge does not actually see the underlying table is
partitioned).
If you get rid of that and add the part_col to the join, it translates
to the first issue with setting costs to flip to the sequential scan at
the right point.
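As a side note, if partitionwise join is disabled, enabling the GUC from [1]
may be worth a try - a minimal sketch (it is off by default):

SET enable_partitionwise_join = on;
-- possibly also: SET enable_partitionwise_aggregate = on;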
[1]
https://www.postgresql.org/docs/15/runtime-config-query.html#GUC-ENABLE-PARTITIONWISE-JOIN
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
of information.
I'm willing to continue to investigate, but only if you prepare a
reproducer, i.e. a SQL script that demonstrates the issue - I don't
think preparing that should be difficult, something like the SQL script
I shared earlier today should do the trick.
I suggest you do that d
the partitionwise join. I'd bet that if Nicolas replaces
MERGE INTO "goliath" ca
USING (SELECT * FROM "david" ORDER BY "list_id") AS t
..
with
MERGE INTO "goliath" ca
USING "david" AS t
...
it'll start working much better.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
repro.sql
Description: application/sql
n was truncated and wrong, sorry for that).
>
None of the plans has estimates anywhere close to 17023331531230, so
where did that come from?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
t's see if there are other problems in the
query itself. If it's generating a cartesian product, it's pointless to
tune parameters.
>> What PostgreSQL version are you using, what hardware? Did you tune it
>> in
>> any way, or is everything just default?
>
> It is pg 15.3, on 2 cores / 8GO / 2TO ssds, with defaults cloud
> provider parameters (RDS).
>
I assume 2TO is 2TB?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
I suspect postgres has
> everything internally (indexes catalog, planner) to split the job itself,
> making David vs Goliath something trivial.
>
What PostgreSQL version are you using, what hardware? Did you tune it in
any way, or is everything just default?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
idth=115)
> Index Cond: ((xx_to > CURRENT_DATE) AND ((status)::text = ANY
> ('{Active,Inactive,Pending}'::text[])))
>
If you remove that condition, does the estimate improve?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
me sort of
"pushdown" that'd derive an array of values and push it down into a
parameterized path at once (instead of doing that for each value in a loop).
regards
[1] https://commitfest.postgresql.org/42/4187/
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ost) to make the difference larger.
regards
[1]
https://github.com/postgres/postgres/blob/master/src/backend/optimizer/util/pathnode.c#L51
[2]
https://github.com/postgres/postgres/blob/master/src/backend/optimizer/util/pathnode.c#L166
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ical drives
in the on-premise system.
If that does not explain this, I suggest picking a single query and
focus on it, instead of investigating all queries at once. There's a
nice wiki page explaining what info to provide:
https://wiki.postgresql.org/wiki/Slow_Query_Questions
rdware capability first, using tools like fio
to measure IOPS with different workloads.
Then try pgbench with a single client, and finally increase the number
of clients and see how it behaves and compare it to what you expect.
In any case, every system has a bottleneck. You're clearly hitting one,
otherwise the numbers would be higher. Usually, it's either CPU bound,
in which case "perf top" might tell us more, or it's I/O bound, in which
case try e.g. "iostat -x -k 1" or something similar.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
_dt IS NOT NULL)
Rows Removed by Filter: 160402
The filter is bound to be misestimated, and the error then snowballs.
Try replacing this part with a temporary table (with pre-aggregated
results) - you can run analyze on it, etc. I'd bet that'll make the
issue go away.
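A rough sketch of that idea, with made-up table/column names:

CREATE TEMP TABLE pre_agg AS
SELECT some_key, count(*) AS cnt     -- the pre-aggregated part of the query
FROM   some_table
WHERE  some_dt IS NOT NULL
GROUP  BY some_key;

ANALYZE pre_agg;                     -- gives the planner real statistics
-- then join pre_agg into the rest of the query instead of the subquery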
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
cache hit rate (from pg_stat_database
view) - if that's low, increase shared buffers. Then monitor and tune
slow queries - if a slow query benefits from higher work_mem values, do
increase that value. It's nonsense to just increase the parameters to
consume more memory.
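For example, the cache hit rate can be checked with something like:

SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
FROM pg_stat_database;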
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ine_item_acct_base
- ...
(in this order). I'd bet "lms_payment_check_request" is where things
start to go south.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
at it does help if you know
> what the intent of the query is (or for that matter, the text of the
> query--you attached the plan twice).
>
Right, simplify the query. Or maybe do it the other way around - start
with the simplest query (the inner-most part of the explain) and add
joins one
on theory.
I'd try two things:
1) define a covering index, so that the query can do an Index Only Scan
2) define a partial index, moving some of the filter conditions to the index
predicate (not sure if that's possible, it depends on which parameters of
the condition are static) - see the sketch below
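A rough sketch of both (table, columns and the "static" condition are made
up - adjust to the actual query):

-- 1) covering index, so the query can use an Index Only Scan
CREATE INDEX ON orders (customer_id, created_at) INCLUDE (status, amount);

-- 2) partial index, with a static condition moved into the predicate
CREATE INDEX ON orders (customer_id, created_at) WHERE status = 'active';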
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
. Or maybe there are more connections to the DB.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ies will be about the same. In that case you need to focus on those
queries, see if the plans are the same, do some profiling, etc.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
t, but some values have a
rather strong dependency, skewing the estimates. The MCV would help with
that, but those are in PG12 :-(
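For reference, on PG12+ that would look roughly like this (made-up
statistics/column names):

CREATE STATISTICS col_deps (mcv) ON col_a, col_b FROM some_table;
ANALYZE some_table;   -- builds the multi-column MCV list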
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ort/hash going to
disks, not to temporary objects (tables and indexes).
Right. log_temp_files does not cover temporary tables.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
identifying which of the cursors is causing the issues, show us the
explain analyze for that query (with and without the cursor), and that
should tell us more.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
about PostgreSQL 13. Does anyone know if it
would have a major performance impact?
This is a few years old, but Tomas Vondra did a presentation comparing major
Linux file systems for PostgreSQL:
https://www.slideshare.net/fuzzycz/postgresql-on-ext4-xfs-btrfs-and-zfs
That talk was age
ng
cheapest_total_path + sort.
3) Not sure if get_cheapest_fractional_path_for_pathkeys should worry
about require_parallel_safe too.
Doesn't seem like an urgent issue (has been there for a while, not sure
we even want to backpatch it). I'll add this to the next CF.
regards
--
Tomas Vondra
EnterpriseD
rly, nestloop (B) is pretty terrible for LIMIT, because
of the high total cost, and mergejoin (C) is what we end up with.
Not sure how to fix this without making generate_orderedappend_paths way
more complicated ...
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
optimizer_first_rows_simple.sql
Description: application/sql
e, there are two things
you can do to determine which part of the plan is causing trouble.
Firstly, you can profile the backend using perf or some other profiler,
and if we're lucky the function names will give us some hints about which
node type is using the CPU.
Secondly, you can "c
autovacuum is actually executed on the large table
(there's last_autovacuum in pg_stat_all_tables)
3) try running VACUUM VERBOSE on the large table, it may tell you that
the rows can't be cleaned up yet.
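For example (hypothetical table name):

SELECT relname, last_autovacuum, last_autoanalyze
FROM   pg_stat_all_tables
WHERE  relname = 'big_table';

VACUUM (VERBOSE) big_table;   -- the output mentions dead rows that cannot be removed yet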
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise
overall principles don't change much.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
estimate?
I don't think that's related - to hit that bug, there would have to be
implied conditions pushed-down to the scan level. And there's nothing
like that in this case.
FWIW I don't think this has anything to do with join cardinality
estimation - at least not
ng a lot of I/O, or maybe the f_sel_*
functions in the filter are expensive. Judging by how few rows are in
the tables (not sure how large the tables are), I'd guess it's the
latter ... Hard to say without knowing what the functions do etc.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
the time is spent in the sort, so the one thing you can do is
either increase work_mem, or add an index providing that ordering.
Even better if you include all necessary columns to allow an IOS (index-only scan).
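Roughly (made-up names; the index key should match the ORDER BY, and the
INCLUDE part covers the remaining output columns):

SET work_mem = '256MB';          -- only for the session running the query

CREATE INDEX ON events (created_at)
    INCLUDE (user_id, status);   -- allows an index-only scan without the sort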
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Supp
be sanitized before posting.
We need to see plans for the good and bad case, so that we can compare
them, look at the general plan shapes, costs, etc.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ed) and do
(gdb) p MemoryContextStats(TopMemoryContext)
(gdb) q
and show us the output printed into server log. If it's a backend
running a query, it'd help knowing the execution plan.
It would also help knowing the non-default configuration, i.e. stuff
tweaked in postgresql.conf.
rega
continues to use the wrong
plan (maybe stats don't work well on array fields like tags??).
We don't support this type of clause for extended statistics (yet).
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
will be
exactly the same. You need to do VACUUM FULL, but be careful as that
acquires an exclusive lock on the table.
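I.e. something like (hypothetical table name):

VACUUM (FULL, VERBOSE) bloated_table;   -- rewrites the table; holds ACCESS EXCLUSIVE for the duration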
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
reproduce the issue,
and investigate locally - a SQL script that creates the whole schema and
runs the query, for example.
What I'd like to see is a perf profile from the planning, so that we can
see where exactly the bottleneck is. Maybe there actually is a bug that
makes it much more expensive
On Mon, Nov 18, 2019 at 03:46:03PM +0100, Ekaterina Amez wrote:
On 18/11/19 at 15:16, Tomas Vondra wrote:
Not sure I understand. Whenever a PostgreSQL process needs memory it
requests it from the kernel by calling malloc(), and the amount of
available RAM is limited. So when the kernel can
On Mon, Nov 18, 2019 at 03:02:16PM +0100, Ekaterina Amez wrote:
On 18/11/19 at 13:25, Tomas Vondra wrote:
On Mon, Nov 18, 2019 at 12:41:27PM +0100, Ekaterina Amez wrote:
Hi all,
This morning I was checking postgres servers logs, looking for
errors (we've recently upgraded the
ws processes to use more memory when executing queries, i.e.
it increases memory pressure and makes OOM more likely.
So you need to watch system monitoring, see how much memory is being
used (excluding page cache) and consider reducing work_mem and/or
max_connections if it's too close.
regard
single process serially.
What does the explain analyze output look like without the HAVING clause?
Try setting parallel_setup_cost and parallel_tuple_cost to 0. That might
trigger parallel query.
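For example, in the session running the query (made-up aggregate query):

SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;
EXPLAIN (ANALYZE)
SELECT some_col, count(*) FROM some_table GROUP BY some_col;   -- check whether Gather nodes appear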
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Supp
On Wed, Nov 06, 2019 at 03:15:53PM -0800, Andres Freund wrote:
Hi,
On 2019-11-06 22:54:48 +0100, Tomas Vondra wrote:
If we're only talking about FPGA I/O acceleration, essentially FPGA
between the database and storage, it's likely possible to get that
working without any extensiv
ones
they show benchmarks for.
Interesting, and perhaps interesting for in-memory databases.
[1] https://escholarship.org/content/qt1ph2x5td/qt1ph2x5td.pdf?t=pkvkdm
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
pproach a university (which does not mean people
from pgsql-hackers can't be involved, of course). I've been involved in
a couple of such research projects in Europe, not sure what exactly the
situation/rules are in the US.
regards
[1] https://swarm64.com/netezza-replacement/
--
Tomas
io 1" in the barman docs, i.e.
pg_basebackup (backup_method = postgres) and WAL streaming.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
and even queries does not really help either.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
are causing this and optimize them.
Note: FWIW, a single snapshot of pg_stats* may be misleading, because
it's cumulative, so it's not clear how accurately it reflects the current
state. Next time take two snapshots and subtract them.
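A crude sketch of that, using pg_stat_user_tables as an example (the same
approach works for the other cumulative views):

CREATE TEMP TABLE snap1 AS SELECT now() AS ts, * FROM pg_stat_user_tables;
-- ... let the workload run for a while ...
CREATE TEMP TABLE snap2 AS SELECT now() AS ts, * FROM pg_stat_user_tables;

SELECT s2.relname,
       s2.seq_scan  - s1.seq_scan  AS seq_scans,
       s2.n_tup_ins - s1.n_tup_ins AS rows_inserted
FROM snap2 s2 JOIN snap1 s1 USING (relid);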
regards
--
Tomas Vondra http://www.2ndQuad
memory error.
As Tom already said, this seems like a quite independent issue. Next
time it'd be better to share it in a new thread, not to mix it up with
the old discussion.
Have any of the things discussed and proposed, especially this last
one by Tomas Vondra, been applied to t
query plan we can hardly make any guesses about what
might be the issue.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
le. That should delete
files in smaller chunks.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
e subquery
into the main (upper) one, and it probably happens to constrict the
choices so that the planner ends up picking a good plan. I guess adding
"OFFSET 0" to the subquery would have the same effect.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Devel
e table (or the indexes on it)
are getting bloated over time. Try looking at the sizes of those objects
(and maybe use pgstattuple to get more detailed statistics) before
rebuilding it next time.
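For example (pgstattuple is a contrib extension; table/index names are made up):

CREATE EXTENSION IF NOT EXISTS pgstattuple;

SELECT * FROM pgstattuple('some_table');        -- dead_tuple_percent, free_percent, ...
SELECT * FROM pgstatindex('some_table_pkey');   -- avg_leaf_density, leaf_fragmentation, ...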
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Supp
ew data type is sorted in a way that eliminates/reduces
random I/O against the index. But maybe that's not the case - hard to
say, because the linked results don't say how the data files were
generated ...
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
very clearly does not use the index-only scan, so it's not
surprising it's not any faster. You need to find out why the planner
makes that decision.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
check why
it did not help much and give you further advice.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Sat, Jul 06, 2019 at 05:35:33PM +0200, Ancoron Luciferis wrote:
On 06/07/2019 15:38, Tomas Vondra wrote:
On Sat, Jul 06, 2019 at 11:02:27AM +0200, Ancoron Luciferis wrote:
Hi,
I've been wondering whether it is possible somehow to have the standard
column statistics to respect a ce
ample demonstrating the issue?
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ws have been removed from, there
is nothing suggesting that this dataset was ever created (e.g. temp),
and these numbers definitely don't match what I was expecting, which is more or
less what I'm seeing in the 9.4 plan.
Obviously I've tested this more than once and this behaviour consist
ter.
Has anyone been through this type of problem?
Unfortunately, I don't think there's a much better solution than what you
initially described - dump/restore, and then creating a replica.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development,
dex pages (because we have to touch many of them).
The point of generating UUIDs in a more sequential way is to limit this
behavior by "concentrating" the index inserts into a smaller part of the
index. That's why indexes on sequential data (say, generated from a
SERIAL column)
On Sun, May 26, 2019 at 01:49:30AM +0200, Ancoron Luciferis wrote:
On 26/05/2019 00:14, Tomas Vondra wrote:
On Sat, May 25, 2019 at 05:54:15PM -0400, Tom Lane wrote:
Ancoron Luciferis writes:
On 25/05/2019 16:57, Tom Lane wrote:
(4) it in fact *wouldn't* do anything useful, because
On Sat, May 25, 2019 at 06:38:08PM -0400, Tom Lane wrote:
Tomas Vondra writes:
On Sat, May 25, 2019 at 05:54:15PM -0400, Tom Lane wrote:
What you might want to think about is creating a function that maps
UUIDs into an ordering that makes sense to you, and then creating
a unique index over
o older parts of
the index (so if you delete data, we'd never fill that space).
[1] https://www.2ndquadrant.com/en/blog/sequential-uuid-generators/
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
storage pages.
Right. And the largest page size is 64kB. But 8kB is a pretty good
trade-off, in most cases.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ce of code somewhere. I suggest you do a bit of profiling, for
example using perf [1], which would show where the time is spent.
[1] https://wiki.postgresql.org/wiki/Profiling_with_perf
And please stop top-posting, it makes it much harder to follow the
discussion.
regards
--
Tomas Vondra
e the
dump with pg_restore with 5 jobs (-j).
Well, presumably the third database has complexity in other places,
possibly spending a lot of time on CPU, while the other databases don't
have such an issue.
What would help is a CPU profile, e.g. from perf.
regards
--
Tomas Vondra
g now, as things seem
terribly broken.
Right. But my impression is there are no backups ...
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
are in the data).
The data files are pretty useless on their own. It might be possible to do
some manual recovery - say, you might create the same tables in the new
schema, and then guess which data files belong to them. But there are
various caveats, e.g. due to dropped columns, etc.
--
Tomas Vo
handling.
I am looking forward to some expert advice here when dealing with citext data
type.
It's generally a good idea to share explain analyze output for both
versions of the query - both with citext and text.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
Postg
ay it does not work, why do you think so? Does it print
some error, or what? Does it even get executed? It does not seem to be
the case, judging by the log (there's no archive_command message).
How was the "secondary machine" created? You said you have all the WAL
since then - ho
If you also get corrupted WAL, you have
bigger problems, I'm afraid.
Also, data corruption issues are one-off events, mostly unique. That
makes it rather difficult (~impossible) to write docs about recovering
from them. And it's why there are no magic tools.
regards
--
Tomas Vondra
On Wed, Apr 24, 2019 at 02:36:33AM +0200, Tomas Vondra wrote:
...
I still think the idea with an "overflow batch" is worth considering,
because it'd allow us to keep the memory usage within work_mem. And
after getting familiar with the hash join code again (haven't messed
since 9.5 or so) I think it should not be all that difficult.
I'll give it a try over the weekend if I get bored for a while.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Tue, Apr 23, 2019 at 04:37:50PM -0400, Gunther wrote:
On 4/21/2019 23:09, Tomas Vondra wrote:
What I think might work better is the attached v2 of the patch, with a
single top-level condition, comparing the combined memory usage
(spaceUsed + BufFile) against spaceAllowed. But
On Tue, Apr 23, 2019 at 03:43:48PM -0500, Justin Pryzby wrote:
On Tue, Apr 23, 2019 at 04:37:50PM -0400, Gunther wrote:
On 4/21/2019 23:09, Tomas Vondra wrote:
>What I think might work better is the attached v2 of the patch, with a
Thanks for this, and I am trying this now.
...
Aand, i
ched the
hash function, because it might move rows backwards (to the already
processed region).
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ng further. That is, I'm imagining adding a second independent
reason for shutting off growEnabled, along the lines of "increasing
nbatch any further will require an unreasonable amount of buffer memory".
The question then becomes how to define "unreasonable".
On Sun, Ap
ut 4GB might be allocated. And it seems ~1/2 of them did not
receive any tuples, so only about 2GB got allocated so far.
The second batch will probably make it fail much sooner, because it
allocates the BufFile stuff eagerly (both for inner and outer side).
regards
--
Tomas Vondra
w is this related to performance? Please send it to pgsql-general, and
include information about how you created the dump.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
mory used.
An alternative would be spilling the extra tuples into a special
overflow file, as I explained earlier. That would actually enforce
work_mem I think.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Sun, Apr 21, 2019 at 03:08:22AM -0500, Justin Pryzby wrote:
On Sun, Apr 21, 2019 at 01:03:50AM -0400, Gunther wrote:
On 4/20/2019 21:14, Tomas Vondra wrote:
>Maybe. But before wasting any more time on the memory leak investigation,
>I suggest you first try the patch moving the B
On Sat, Apr 20, 2019 at 08:33:46PM -0400, Gunther wrote:
On 4/20/2019 16:01, Tomas Vondra wrote:
For me, this did the trick:
update pg_class set (relpages, reltuples) = (100, 1) where
relname = 'tmp_r';
update pg_class set (relpages, reltuples) = (1, 100) where
relnam
ause merge join will win thanks
to being automatically cheaper? Not sure.
Also, I wonder if we really need 8kB buffers here. Would it make sense to
allow smaller buffers in some cases? Say, 1kB. It's not going to save us,
but it's still 8x better than now.
regards
--
Tomas Vondra
On Sat, Apr 20, 2019 at 10:36:50PM +0200, Tomas Vondra wrote:
On Sat, Apr 20, 2019 at 04:26:34PM -0400, Tom Lane wrote:
Tomas Vondra writes:
Considering how rare this issue likely is, we need to be looking for a
solution that does not break the common case.
Agreed. What I think we need to
On Sat, Apr 20, 2019 at 04:46:03PM -0400, Tom Lane wrote:
Tomas Vondra writes:
I think it's really a matter of underestimate, which convinces the planner
to hash the larger table. In this case, the table is 42GB, so it's
possible it actually works as expected. With work_mem = 4MB I
On Sat, Apr 20, 2019 at 04:26:34PM -0400, Tom Lane wrote:
Tomas Vondra writes:
Considering how rare this issue likely is, we need to be looking for a
solution that does not break the common case.
Agreed. What I think we need to focus on next is why the code keeps
increasing the number of
eded for
the hash table contents (i.e. rows) below work_mem. If you just cap the
number of batches, you'll keep the amount of memory for BufFile under
control, but the hash table may exceed work_mem.
Considering how rare this issue likely is, we need to be looking for a
solution that doe
hes but 128k virtual ones, which means we'd do 4 rounds of
this dance.
It's a bit inefficient, but situations like this should be rather rare,
and it's more graceful than just crashing with OOM.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Develo
nced (like better snapshotting, etc.) then
maybe ZFS is the right choice for you. It also allows various advanced
configurations with ZIL, L2ARC, ...
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services