't tried).
So something like
CREATE INDEX ON adroom ((groups->0->>'provider'));
for a query with a condition like
WHERE groups->0->>'provider' LIKE 'something%';
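Note that whether a plain btree expression index can serve a prefix LIKE
depends on the collation - in a non-C locale the index may need the
text_pattern_ops operator class, e.g. (a sketch building on the example
above):

CREATE INDEX ON adroom ((groups->0->>'provider') text_pattern_ops);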
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
der
postgres, which I assume is a superuser. The backend has full access to
the data directory, of course, so it may create extra files (using the
adminpack extension, for example).
If that's the case (and if it's indeed an attack), it means the
attacker likely already has access to all the data. So presumably
x330341 is doing something else at the OS level.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
duced it to 0.05.
The other question is why it's so CPU-intensive. Are you using the
default statistics_target value (100), or have you increased that too?
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
or you.
> How should I reduce the better fillfactor?
For example to change fillfactor to 75% (i.e. 25% free space):
ALTER TABLE t SET (fillfactor = 75);
But as I said, it's not a solution for you.
> What will be with WAL-files it this case?
Not sure what you mean.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
information that is confidential or legally protected. If you are not
> the intended recipient or have received this message in error, you
> are not authorized to copy, distribute, or otherwise use this message
> or its attachments. Please notify the sender immediately by return
> e-mail and permanently delete this message and any attachments.
> Tangoe makes no warranty that this e-mail or its attachments are
> error or virus free.
>
LOL
kind regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
the query plan.
So it's rather hard to say. You mentioned text format, but then you use
to_date() to query the partitioned table - I guess that might be the
cause, but it's hard to be sure.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>  stxname   | stxkeys | stxdependencies
> -----------+---------+------------------
>  cgm_stat2 | 2 6     | {"2 => 6": 1.00}
>  cgm_stat1 | 3 6     | {"3 => 6": 1.00}
> (2 rows)
>
> However, this does not help the planner. It still picks the bad plan.
CHDIR)
>
>
>
> Mar 05 23:57:24 vipgadmin systemd[1]: Started Pgadmin4 Service.
>
> Mar 05 23:57:24 vipgadmin systemd[1]: pgadmin4.service: Main process
> exited, code=exited, status=200/CHDIR
>
> Mar 05 23:57:24 vipgadmin systemd[1]: pgadmin4.service: Unit entered
>
the
queries. Try EXPLAIN (ANALYZE, BUFFERS) both for the slow and fast
executions, and show us the results.
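Schematically, for each of the two executions (the query itself is just
a placeholder here):

EXPLAIN (ANALYZE, BUFFERS)
SELECT ... ;   -- the exact query that is slow / fast

and post the complete output for both.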
FWIW you might also read this first:
https://wiki.postgresql.org/wiki/Slow_Query_Questions
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
how this.
Why is this?
It doesn't really matter on which side it shows up; it's more about a
generic plan being built without knowledge of the parameter value.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
y a general issue with planning EXISTS / LIMIT and
non-uniform data distribution.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On 04/17/2018 04:01 PM, Hackety Man wrote:
On Tue, Apr 17, 2018 at 6:49 AM, Tomas Vondra
<tomas.von...@2ndquadrant.com> wrote:
On 04/16/2018 10:42 PM, Hackety Man wrote:
...
The first thing I did was to run some baseline tests using
the
On 04/17/2018 05:43 PM, Hackety Man wrote:
>
>
> On Tue, Apr 17, 2018 at 10:23 AM, Tomas Vondra
> <tomas.von...@2ndquadrant.com> wrote:
>
>
>
> On 04/17/2018 04:01 PM, Hackety Man wrote:
>
> ...
> Right. I was more wondering
ents ;-)
That being said, parallelism opens an entirely new dimension of possible
plans and planning issues.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
at it does help if you know
> what the intent of the query is (or for that matter, the text of the
> query--you attached the plan twice).
>
Right, simplify the query. Or maybe do it the other way around - start
with the simplest query (the inner-most part of the explain) and add
joins one by one.
ine_item_acct_base
- ...
(in this order). I'd bet "lms_payment_check_request" is where things
start to go south.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
cache hit rate (from the pg_stat_database
view) - if that's low, increase shared_buffers. Then monitor and tune
slow queries - if a slow query benefits from higher work_mem values, do
increase that value. It makes no sense to increase the parameters just
to consume more memory.
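A simple way to check that is something like this, using the cumulative
counters in the pg_stat_database view:

SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
FROM pg_stat_database
ORDER BY blks_hit + blks_read DESC;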
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
_dt IS NOT NULL)
Rows Removed by Filter: 160402
The filter is bound to be misestimated, and the error then snowballs.
Try replacing this part with a temporary table (with pre-aggregated
results) - you can run analyze on it, etc. I'd bet that'll make the
issue go away.
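Roughly something like this (all the names here are made up, just to
illustrate the approach):

CREATE TEMPORARY TABLE pre_agg AS
SELECT some_key, count(*) AS cnt
  FROM some_table
 WHERE some_dt IS NOT NULL
 GROUP BY some_key;

ANALYZE pre_agg;

and then join pre_agg into the original query instead of the subquery.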
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
rdware capability first, using tools like fio
to measure IOPS with different workloads.
Then try pgbench with a single client, and finally increase the number
of clients and see how it behaves and compare it to what you expect.
In any case, every system has a bottleneck. You're clearly hitting one,
otherwise the numbers would be higher. Usually it's either CPU-bound,
in which case "perf top" might tell us more, or I/O-bound, in which
case try e.g. "iostat -x -k 1" or something similar.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
ical drives
in the on-premise system.
If that does not explain this, I suggest picking a single query and
focusing on it, instead of investigating all queries at once. There's a
nice wiki page explaining what info to provide:
https://wiki.postgresql.org/wiki/Slow_Query_Questions
ost) to make the difference larger.
regards
[1]
https://github.com/postgres/postgres/blob/master/src/backend/optimizer/util/pathnode.c#L51
[2]
https://github.com/postgres/postgres/blob/master/src/backend/optimizer/util/pathnode.c#L166
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
me sort of
"pushdown" that'd derive an array of values and push it down into a
parameterized path at once (instead of doing that for each value in a loop).
regards
[1] https://commitfest.postgresql.org/42/4187/
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
idth=115)
> Index Cond: ((xx_to > CURRENT_DATE) AND ((status)::text = ANY
> ('{Active,Inactive,Pending}'::text[])))
>
If you remove that condition, does the estimate improve?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
I suspect postgres has
> everything internally (indexes catalog, planner) to split itself the job,
> making David vs Goliath something trivial.
>
What PostgreSQL version are you using, what hardware? Did you tune it in
any way, or is everything just default?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
t's see if there are other problems in the
query itself. If it's generating a cartesian product, it's pointless to
tune parameters.
>> What PostgreSQL version are you using, what hardware? Did you tune it
>> in
>> any way, or is everything just default?
>
> It is pg 15.3, on 2 cores / 8GO / 2TO ssds, with defaults cloud
> provider parameters (RDS).
>
I assume 2TO is 2TB?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
n was truncated and wrong, sorry for that).
>
None of the plans has estimates anywhere close to 17023331531230, so
where did that come from?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
the partitionwise join. I'd bet that if Nicolas replaces
MERGE INTO "goliath" ca
USING (SELECT * FROM "david" ORDER BY "list_id") AS t
..
with
MERGE INTO "goliath" ca
USING "david" AS t
...
it'll start working much better.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
repro.sql
Description: application/sql
of information.
I'm willing to continue to investigate, but only if you prepare a
reproducer, i.e. a SQL script that demonstrates the issue - I don't
think preparing that should be difficult, something like the SQL script
I shared earlier today should do the trick.
I suggest you do that d
s as an optimization fence
(which means the merge does not actually see the underlying table is
partitioned).
If you get rid of that and add the part_col to the join, it translates
to the first issue with setting costs to flip to the sequential scan at
the right point.
[1]
https://www.postgresql.org/docs/15/runtime-config-query.html#GUC-ENABLE-PARTITIONWISE-JOIN
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
(cost=0.29..5.57 ...
Index Cond: (id = david.id)
...
And this is per-loop, of which there'll be 500 (because the small david
table has 500 rows).
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
har(3000),address varchar(3000),city varchar(900),salary
> smallint,
> pincode bigint,sales numeric,phone real,amount double precision,
> dob date,starttime timestamp,timezone TIMESTAMP WITH TIME ZONE,
> status boolean,timenow time,timelater TIME WITH TIME ZONE,col1 int,
> col2 char,col3 varchar(3000),col4 varchar(3000),col5 varchar(3000),
> col6 varchar(900),col7 smallint,col8 bigint,col9 numeric,col10 real,
> col11 double precision,col12 date,col13 timestamp,col14 TIMESTAMP WITH
> TIME ZONE,
> col15 boolean,col16 time,col17 TIME WITH TIME ZONE,primary
> key(col1,col7,col9));
>
> *Table Data:* 100 rows with each Row has a size of 2.
>
Without the data we can't actually try running the query.
In general it's a good idea to show the "explain analyze" output for the
cases you're comparing. Not only does that show what the database is doing,
it also shows timings for different parts of the query, how many workers
were planned / actually started, etc.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
. Why do you think this would not be expected?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
e extra complexity of
partitioning the data. If it happens only occasionally (a couple times a
year), it probably is not. You'll just delete the data and reuse the
space for new data.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
o investigate (or hopefully fix!) the issue?
>
A reproducer would be great - a script that creates user/roles, and
triggers the long login time would allow us to investigate that.
Another option would be to get a perf profile from the process busy with
logging the user in - assuming it
eries).
BTW you suggested each partition has ~250k rows, but the explain plan
snippet you shared does not seem to be consistent with that - it only
shows 2500-5000 rows per partition. If you run ANALYZE on the table,
does that change the plan?
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
wonder if it might work with BRIN indexes of some kind. If the range
summary is defined as OR of the values, that might help, depending on
variability within the page range. But that would probably require some
development.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
does not really solve the issues with having to pass the
password to the query, making it visible in pg_stat_activity, various
logs etc.
Which is why people generally use FDE for the whole disk, which is
transparent and provides the same level of protection.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
s discussion, it
does not seem very polished / resilient.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
f the query
planning process is slow here? That is:
1) make sure you have the debug symbols installed
2) do `perf record`
3) run the benchmark for a while (a minute or so)
4) stop the perf record using Ctrl-C
5) generate a profile using `perf report` and share the result
Possibly do the same thi
t
and if it crashes when Committed_AS hits the CommitLimit.
That doesn't explain where the memory leak is, though :-(
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
extra debugging to see where the memory is
allocated from? It's a bit heavy-handed, though.
Or maybe splitting es_query_ctx into smaller contexts. That might be
easier to evaluate than sifting through god-knows-how-many GBs of log.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
cumentation or FAQ somewhere.
Maybe read this famous paper by Jim Gray & Franco Putzolu. It's not
exactly about the thing you're asking about, but it's related. It
essentially deals with sizing memory vs. disk I/O, and page size plays
an important role in that too.
[1]
sensitive data, it may not be
possible, but perhaps you could hash it in some way.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
nced (like better snapshotting, etc.) then
maybe ZFS is the right choice for you. It also allows various advanced
configurations with ZIL, L2ARC, ...
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
hes but 128k virtual ones, which means we'd do 4 rounds of
this dance.
It's a bit inefficient, but situations like this should be rather rare,
and it's more graceful than just crashing with OOM.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
eded for
the hash table contents (i.e. rows) below work_mem. If you just cap the
number of batches, you'll keep the amount of memory for BufFile under
control, but the hash table may exceed work_mem.
Considering how rare this issue likely is, we need to be looking for a
solution that does not break the common case.
On Sat, Apr 20, 2019 at 04:26:34PM -0400, Tom Lane wrote:
Tomas Vondra writes:
Considering how rare this issue likely is, we need to be looking for a
solution that does not break the common case.
Agreed. What I think we need to focus on next is why the code keeps
increasing the number of
On Sat, Apr 20, 2019 at 04:46:03PM -0400, Tom Lane wrote:
Tomas Vondra writes:
I think it's really a matter of underestimate, which convinces the planner
to hash the larger table. In this case, the table is 42GB, so it's
possible it actually works as expected. With work_mem = 4MB I
On Sat, Apr 20, 2019 at 10:36:50PM +0200, Tomas Vondra wrote:
On Sat, Apr 20, 2019 at 04:26:34PM -0400, Tom Lane wrote:
Tomas Vondra writes:
Considering how rare this issue likely is, we need to be looking for a
solution that does not break the common case.
Agreed. What I think we need to
ause merge join will win thanks
to being automatically cheaper? Not sure.
Also, I wonder if we really need 8kB buffers here. Would it make sense to
allow smaller buffers in some cases? Say, 1kB. It's not going to save us,
but it's still 8x better than now.
regards
--
Tomas Vondra
On Sat, Apr 20, 2019 at 08:33:46PM -0400, Gunther wrote:
On 4/20/2019 16:01, Tomas Vondra wrote:
For me, this did the trick:
update pg_class set (relpages, reltuples) = (100, 1) where
relname = 'tmp_r';
update pg_class set (relpages, reltuples) = (1, 100) where
relnam
On Sun, Apr 21, 2019 at 03:08:22AM -0500, Justin Pryzby wrote:
On Sun, Apr 21, 2019 at 01:03:50AM -0400, Gunther wrote:
On 4/20/2019 21:14, Tomas Vondra wrote:
>Maybe. But before wasting any more time on the memory leak investigation,
>I suggest you first try the patch moving the B
mory used.
An alternative would be spilling the extra tuples into a special
overflow file, as I explained earlier. That would actually enforce
work_mem I think.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
w is this related to performance? Please send it to pgsql-general, and
include information about how you created the dump.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ut 4GB might be allocated. And it seems ~1/2 of them did not
receive any tuples, so only about 2GB got allocated so far.
The second batch will probably make it fail much sooner, because it
allocates the BufFile stuff eagerly (both for inner and outer side).
regards
--
Tomas Vondra
ng further. That is, I'm imagining adding a second independent
reason for shutting off growEnabled, along the lines of "increasing
nbatch any further will require an unreasonable amount of buffer memory".
The question then becomes how to define "unreasonable".
On Sun, Ap
ched the
hash function, because it might move rows backwards (to the already
processed region).
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Tue, Apr 23, 2019 at 03:43:48PM -0500, Justin Pryzby wrote:
On Tue, Apr 23, 2019 at 04:37:50PM -0400, Gunther wrote:
On 4/21/2019 23:09, Tomas Vondra wrote:
>What I think might work better is the attached v2 of the patch, with a
Thanks for this, and I am trying this now.
...
Aand, i
On Tue, Apr 23, 2019 at 04:37:50PM -0400, Gunther wrote:
On 4/21/2019 23:09, Tomas Vondra wrote:
What I think might work better is the attached v2 of the patch, with a
single top-level condition, comparing the combined memory usage
(spaceUsed + BufFile) against spaceAllowed. But
since 9.5 or so) I think it should not be all that difficult.
I'll give it a try over the weekend if I get bored for a while.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Apr 24, 2019 at 02:36:33AM +0200, Tomas Vondra wrote:
...
I still think the idea with an "overflow batch" is worth considering,
because it'd allow us to keep the memory usage within work_mem. And
after getting familiar with the hash join code again (haven't messed
If you also get corrupted WAL, you have
bigger problems, I'm afraid.
Also, data corruption issues are one-off events, mostly unique. That
makes it rather difficult (~impossible) to write docs about recovering
from them. And it's why there are no magic tools.
regards
--
Tomas Vondra
ay it does not work, why do you think so? Does it print
some error, or what? Does it even get executed? It does not seem to be
the case, judging by the log (there's no archive_command message).
How was the "secondary machine" created? You said you have all the WAL
since then - ho
handling.
I am looking forward to some expert advice here when dealing with citext data
type.
It's generally a good idea to share explain analyze output for both
versions of the query - both with citext and text.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
are in the data).
The data files are pretty useless on their own. It might be possible to do
some manual recovery - say, you might create the same tables in the new
schema, and then guess which data files belong to them. But there are
various caveats e.g. due to dropped columns, etc.
--
Tomas Vondra
g now, as things seem
terribly broken.
Right. But my impression is there are no backups ...
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
e the
dump with pg_restore with 5 jobs (-j).
Well, presumably the third database has complexity in other places,
possibly spending a lot of time on CPU, while the other databases don't
have such an issue.
What would help is a CPU profile, e.g. from perf.
regards
--
Tomas Vondra
ce of code somewhere. I suggest you do a bit of profiling, for
example using perf [1], which would show where the time is spent.
[1] https://wiki.postgresql.org/wiki/Profiling_with_perf
And please stop top-posting, it makes it much harder to follow the
discussion.
regards
--
Tomas Vondra
storage pages.
Right. And the largest page size is 64kB. But 8kB is a pretty good
trade-off, in most cases.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
o older parts of
the index (so if you delete data, we'd never fill that space).
[1] https://www.2ndquadrant.com/en/blog/sequential-uuid-generators/
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Sat, May 25, 2019 at 06:38:08PM -0400, Tom Lane wrote:
Tomas Vondra writes:
On Sat, May 25, 2019 at 05:54:15PM -0400, Tom Lane wrote:
What you might want to think about is creating a function that maps
UUIDs into an ordering that makes sense to you, and then creating
a unique index over
On Sun, May 26, 2019 at 01:49:30AM +0200, Ancoron Luciferis wrote:
On 26/05/2019 00:14, Tomas Vondra wrote:
On Sat, May 25, 2019 at 05:54:15PM -0400, Tom Lane wrote:
Ancoron Luciferis writes:
On 25/05/2019 16:57, Tom Lane wrote:
(4) it in fact *wouldn't* do anything useful, because
dex pages (because we have to touch many of them).
The point of generating UUIDs in a more sequential way is to limit this
behavior by "concentrating" the index inserts into a smaller part of the
index. That's why indexes on sequential data (say, generated from a
SERIAL column)
ter.
Has anyone been through this type of problem?
Unfortunately, I don't think there's a much better solution than what you
initially described - dump/restore, and then creating a replica.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ws have been removed from, there
is nothing suggesting that this dataset was ever created (eg. temp)
and these numbers definitely don't match what i was expecting, which is more or
less what i'm seeing in 9.4 plan.
Obviously i've tested this more than once and this behaviour consist
ample demonstrating the issue?
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Sat, Jul 06, 2019 at 05:35:33PM +0200, Ancoron Luciferis wrote:
On 06/07/2019 15:38, Tomas Vondra wrote:
On Sat, Jul 06, 2019 at 11:02:27AM +0200, Ancoron Luciferis wrote:
Hi,
I've been wondering whether it is possible somehow to have the standard
column statistics to respect a ce
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
check why
it did not help much and give you further advice.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
very clearly does not use the index-only scan, so it's not
surprising it's not any faster. You need to find out why the planner
makes that decision.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
ew data type is sorted in a way that eliminates/reduces
random I/O against the index. But maybe that's not the case - hard to
say, because the linked results don't say how the data files were
generated ...
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
e table (or the indexes on it)
are getting bloated over time. Try looking at the sizes of those objects
(and maybe use pgstattuple to get more detailed statistics) before
rebuilding it next time.
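For example (the table / index names are just placeholders):

CREATE EXTENSION IF NOT EXISTS pgstattuple;

SELECT pg_size_pretty(pg_relation_size('my_table'));
SELECT * FROM pgstattuple('my_table');        -- dead tuples, free space, ...
SELECT * FROM pgstatindex('my_table_pkey');   -- btree leaf density / fragmentation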
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
e subquery
into the main (upper) one, and it probably happens to constrict the
choices so that the planner ends up picking a good plan. I guess adding
"OFFSET 0" to the subquery would have the same effect.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
le. That should delete
files in smaller chunks.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
query plan we can hardly make any guesses about what
might be the issue.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
memory error.
As Tom already said, this seems like a quite independent issue. Next
time it'd be better to share it in a new thread, not to mix it up with
the old discussion.
Have any of the things discussed and proposed, especially this last
one by Tomas Vondra, been applied to t
are causing this and optimize them.
Note: FWIW, a single snapshot of the pg_stat_* views may be misleading,
because they're cumulative, so it's not clear how accurately they reflect
the current state. Next time take two snapshots and subtract them.
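For pg_stat_user_tables that might look like this (a sketch; the same
approach works for the other pg_stat_* views):

CREATE TEMPORARY TABLE snap1 AS SELECT now() AS ts, * FROM pg_stat_user_tables;
-- ... wait for the period you want to measure ...
CREATE TEMPORARY TABLE snap2 AS SELECT now() AS ts, * FROM pg_stat_user_tables;

SELECT s2.relname,
       s2.seq_scan  - s1.seq_scan  AS seq_scans,
       s2.idx_scan  - s1.idx_scan  AS idx_scans,
       s2.n_tup_ins - s1.n_tup_ins AS rows_inserted
  FROM snap2 s2 JOIN snap1 s1 USING (relid);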
regards
--
Tomas Vondra http://www.2ndQuadrant.com
and even queries does not really help either.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
io 1" in the barman docs, i.e.
pg_basebackup (backup_method = postgres) and WAL streaming.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
pproach a university (which does not mean people
from pgsql-hackers can't be involved, of course). I've been involved in
a couple of such research projects in Europe, not sure what exactly the
situation/rules are in the US.
regards
[1] https://swarm64.com/netezza-replacement/
--
Tomas Vondra
ones
they show benchmarks for.
Interesting, and perhaps especially relevant for in-memory databases.
[1] https://escholarship.org/content/qt1ph2x5td/qt1ph2x5td.pdf?t=pkvkdm
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
On Wed, Nov 06, 2019 at 03:15:53PM -0800, Andres Freund wrote:
Hi,
On 2019-11-06 22:54:48 +0100, Tomas Vondra wrote:
If we're only talking about FPGA I/O acceleration, essentially FPGA
between the database and storage, it's likely possible to get that
working without any extensiv
single process serially.
What does the explain analyze output look like without the HAVING clause?
Try setting parallel_setup_cost and parallel_tuple_cost to 0. That might
trigger parallel query.
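That is, in the session running the query:

SET parallel_setup_cost = 0;
SET parallel_tuple_cost = 0;

and then check the EXPLAIN (ANALYZE) output again to see whether a
parallel plan gets picked.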
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Supp
ws processes to use more memory when executing queries, i.e.
it increases memory pressure and makes OOM more likely.
So you need to watch system monitoring, see how much memory is being
used (excluding page cache) and consider reducing work_mem and/or
max_connections if it's too close.
regards
On Mon, Nov 18, 2019 at 03:02:16PM +0100, Ekaterina Amez wrote:
On 18/11/19 at 13:25, Tomas Vondra wrote:
On Mon, Nov 18, 2019 at 12:41:27PM +0100, Ekaterina Amez wrote:
Hi all,
This morning I was checking postgres servers logs, looking for
errors (we've recently upgraded the
On Mon, Nov 18, 2019 at 03:46:03PM +0100, Ekaterina Amez wrote:
On 18/11/19 at 15:16, Tomas Vondra wrote:
Not sure I understand. Whenever a PostgreSQL process needs memory it
requests it from the kernel by calling malloc(), and the amount of
available RAM is limited. So when the kernel can
reproduce the issue,
and investigate locally. A SQL script that creates the whole schema and
runs the query, for example.
What I'd like to see is a perf profile from the planning, so that we can
see where exactly the bottleneck is. Maybe there actually is a bug that
makes it much more expensive
will be
exactly the same. You need to do VACUUM FULL, but be careful as that
acquires an exclusive lock on the table.
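For example (the table name is just a placeholder):

VACUUM (FULL, VERBOSE) my_table;   -- rewrites the table, holds an ACCESS EXCLUSIVE lock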
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
continues to use the wrong
plan (maybe stats don't work well on array fields like tags??).
We don't support this type of clause in extended statistics (yet).
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services