Re: Query became very slow after 9.6 -> 10 upgrade

2017-11-22 Thread Tomas Vondra
't tried). So something like CREATE INDEX ON adroom ((groups->0->>'provider')); WHERE groups->0->>'provider' LIKE 'something%'; regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
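For illustration, a minimal sketch of the suggested expression index, assuming a jsonb column "groups" on table "adroom" (names taken from the snippet; the text_pattern_ops opclass and the sample predicate are assumptions added so a prefix LIKE can use a btree index under any collation):

  CREATE INDEX adroom_provider_idx
      ON adroom ((groups->0->>'provider') text_pattern_ops);

  -- a query of this shape should then be able to use the index:
  SELECT *
    FROM adroom
   WHERE groups->0->>'provider' LIKE 'something%';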

Re: CPU 100% usage caused by unknown postgres process..

2017-12-13 Thread Tomas Vondra
der postgres, which I assume is superuser. The backend has full access to the data directory, of course, so it may create extra files (using adminpack extension, for example). If that's the case (and if it's indeed an attack), it means the attacker likely already has access to all the data. So presumably x330341 is doing something else at the OS level. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Autoanalyze CPU usage

2017-12-19 Thread Tomas Vondra
duced it to 0.05. The other question is why it's so CPU-intensive. Are you using the default statistics_target value (100), or have you increased that too? regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
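A sketch of how those two knobs might be adjusted, purely as an illustration (the table and column names are hypothetical; the per-table setting overrides the global autovacuum_analyze_scale_factor, whose default is 0.1):

  -- analyze after ~5% of the table changes instead of the default 10%
  ALTER TABLE some_table SET (autovacuum_analyze_scale_factor = 0.05);

  -- check the statistics target, and lower it per column if analyze is too CPU-hungry
  SHOW default_statistics_target;
  ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 100;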

Re: Updating a large table

2017-12-23 Thread Tomas Vondra
or you. > How should I set a better fillfactor? For example, to change fillfactor to 75% (i.e. 25% free space): ALTER TABLE t SET (fillfactor = 75); But as I said, it's not a solution for you. > What will happen with WAL files in this case? Not sure what you mean. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
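A small sketch expanding on the ALTER TABLE above. A lowered fillfactor only applies to newly written pages, so a table rewrite is needed for existing data (the VACUUM FULL step and the catalog check are additions, not part of the original advice; VACUUM FULL takes an exclusive lock):

  ALTER TABLE t SET (fillfactor = 75);
  -- rewrite the table so existing pages also end up with ~25% free space
  VACUUM FULL t;
  -- verify the current per-table setting
  SELECT relname, reloptions FROM pg_class WHERE relname = 't';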

Re: pg_xlog unbounded growth

2018-01-25 Thread Tomas Vondra
information that is confidential or legally protected. If you are not > the intended recipient or have received this message in error, you > are not authorized to copy, distribute, or otherwise use this message > or its attachments. Please notify the sender immediately by return > e-mail and permanently delete this message and any attachments. > Tangoe makes no warranty that this e-mail or its attachments are > error or virus free. > LOL kind regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: postgresql 10.1 wrong plan in when using partitions bug

2018-02-04 Thread Tomas Vondra
the query plan. So it's rather hard to say. You mentioned text format, but then you use to_date() to query the partitioned table. Which I guess might be the cause, but it's hard to say for sure. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: SV: bad plan using nested loops

2018-02-05 Thread Tomas Vondra
xkeys |   stxdependencies > ---+-+-- >  cgm_stat2 | 2 6 | {"2 > => 6": 1.00} >  cgm_stat1 | 3 6 | {"3 => 6": 1.00} > (2 rows) > > However, this does not help the planner. It still picks the bad pla

Re: Please help

2018-03-06 Thread Tomas Vondra
CHDIR) > >   > > Mar 05 23:57:24 vipgadmin systemd[1]: Started Pgadmin4 Service. > > Mar 05 23:57:24 vipgadmin systemd[1]: pgadmin4.service: Main process > exited, code=exited, status=200/CHDIR > > Mar 05 23:57:24 vipgadmin systemd[1]: pgadmin4.service: Unit entered >

Re: Memory size

2018-03-11 Thread Tomas Vondra
the queries. Try EXPLAIN (ANALYZE, BUFFERS) both for the slow and fast executions, and show us the results. FWIW you might also read this first: https://wiki.postgresql.org/wiki/Slow_Query_Questions regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
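A minimal illustration of the suggested command — the table and filter are hypothetical placeholders for the actual slow and fast queries:

  EXPLAIN (ANALYZE, BUFFERS)
  SELECT * FROM some_table WHERE some_column = 42;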

Re: Unexplainable execution time difference between two test functions...one using IF (SELECT COUNT(*) FROM...) and the other using IF EXISTS (SELECT 1 FROM...)

2018-04-17 Thread Tomas Vondra
how this. Why is this? It doesn't really matter on which side it shows, it's more about a generic plan built without knowledge of the parameter value. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Unexplainable execution time difference between two test functions...one using IF (SELECT COUNT(*) FROM...) and the other using IF EXISTS (SELECT 1 FROM...)

2018-04-17 Thread Tomas Vondra
y a general issue with planning EXISTS / LIMIT and non-uniform data distribution. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Unexplainable execution time difference between two test functions...one using IF (SELECT COUNT(*) FROM...) and the other using IF EXISTS (SELECT 1 FROM...)

2018-04-17 Thread Tomas Vondra
On 04/17/2018 04:01 PM, Hackety Man wrote: On Tue, Apr 17, 2018 at 6:49 AM, Tomas Vondra <tomas.von...@2ndquadrant.com> wrote: On 04/16/2018 10:42 PM, Hackety Man wrote: ... The first thing I did was to run some baseline tests using the

Re: Unexplainable execution time difference between two test functions...one using IF (SELECT COUNT(*) FROM...) and the other using IF EXISTS (SELECT 1 FROM...)

2018-04-17 Thread Tomas Vondra
On 04/17/2018 05:43 PM, Hackety Man wrote: > On Tue, Apr 17, 2018 at 10:23 AM, Tomas Vondra <tomas.von...@2ndquadrant.com> wrote: > > On 04/17/2018 04:01 PM, Hackety Man wrote: > > ... > Right. I was more wond

Re: Unexplainable execution time difference between two test functions...one using IF (SELECT COUNT(*) FROM...) and the other using IF EXISTS (SELECT 1 FROM...)

2018-04-17 Thread Tomas Vondra
ents ;-) That being said, parallelism opens an entirely new dimension of possible plans and planning issues. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Never Ending query in PostgreSQL

2022-03-01 Thread Tomas Vondra
at it does help if you know > what the intent of the query is (or for that matter, the text of the > query--you attached the plan twice). > Right, simplify the query. Or maybe do it the other way around - start with the simplest query (the inner-most part of the explain) and add joins one

Re: Never Ending query in PostgreSQL

2022-03-01 Thread Tomas Vondra
ine_item_acct_base - ... (in this order). I'd bet "lms_payment_check_request" is where things start to go south. regards -- Tomas Vondra EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company

Re: Optimal configuration for server

2022-03-07 Thread Tomas Vondra
cache hit rate (from pg_stat_database view) - if that's low, increase shared buffers. Then monitor and tune slow queries - if a slow query benefits from higher work_mem values, do increase that value. It's nonsense to just increase the parameters to consume more memory. regards -- Tomas Vondra EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
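A sketch of the kind of check described — computing the buffer cache hit ratio for the current database from pg_stat_database (the query itself is an illustration, not from the original message):

  SELECT datname,
         blks_hit,
         blks_read,
         round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2) AS cache_hit_pct
    FROM pg_stat_database
   WHERE datname = current_database();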

Re: Performance issue post upgrade on Version 13 - Incorrect Estimation Cost choosing Hash Aggregate-Nested Left Loop Join

2022-03-26 Thread Tomas Vondra
_dt IS NOT NULL) Rows Removed by Filter: 160402 The filter is bound to be misestimated, and the error then snowballs. Try replacing this part with a temporary table (with pre-aggregated results) - you can run analyze on it, etc. I'd bet that'll make the issue go away. regards -- Tomas Vondra EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company
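One way the suggested workaround could look, with entirely made-up names — materialize the pre-aggregated rows, analyze them, then join against that instead of the misestimated subquery:

  CREATE TEMP TABLE preagg AS
  SELECT some_key, max(some_dt) AS last_dt
    FROM some_table
   WHERE some_dt IS NOT NULL
   GROUP BY some_key;

  ANALYZE preagg;
  -- then rewrite the original query to join against preagg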

Re: Postgresql TPS Bottleneck

2022-03-31 Thread Tomas Vondra
rdware capability first, using tools like fio to measure IOPS with different workloads. Then try pgbench with a single client, and finally increase the number of clients and see how it behaves and compare it to what you expect. In any case, every system has a bottleneck. You're clearly hitting one, otherwise the numbers would keep going up. Usually, it's either CPU bound, in which case "perf top" might tell us more, or it's IO bound, in which case try e.g. "iostat -x -k 1" or something. regards -- Tomas Vondra EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company

Re: Performance for SQL queries on Azure PostgreSQL PaaS instance

2022-04-12 Thread Tomas Vondra
ical drives in the on-premise system. If that does not explain this, I suggest picking a single query and focus on it, instead of investigating all queries at once. There's a nice wiki page explaining what info to provide: https://wiki.postgresql.org/wiki/Slow_Query_Questions

Re: why choosing an hash index instead of the btree version even if the cost is lower?

2022-11-18 Thread Tomas Vondra
ost) to make the difference larger. regards [1] https://github.com/postgres/postgres/blob/master/src/backend/optimizer/util/pathnode.c#L51 [2] https://github.com/postgres/postgres/blob/master/src/backend/optimizer/util/pathnode.c#L166 -- Tomas Vondra EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company

Re: BRIN index worse than sequential scan for large search set

2023-02-25 Thread Tomas Vondra
me sort of "pushdown" that'd derive an array of values and push it down into a parameterized path at once (instead of doing that for each value in a loop). regards [1] https://commitfest.postgresql.org/42/4187/ -- Tomas Vondra EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company

Re: wrong rows estimation by hash join

2023-06-10 Thread Tomas Vondra
idth=115) > Index Cond: ((xx_to > CURRENT_DATE) AND ((status)::text = ANY > ('{Active,Inactive,Pending}'::text[]))) > If you remove that condition, does the estimate improve? regards -- Tomas Vondra EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company

Re: Merge David and Goliath tables efficiently

2023-06-17 Thread Tomas Vondra
I suspect postgres has > everything internally (indexes catalog, planner) to split the job itself, > making David vs Goliath something trivial. > What PostgreSQL version are you using, what hardware? Did you tune it in any way, or is everything just default? regards -- Tomas Vondra EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company

Re: Merge David and Goliath tables efficiently

2023-06-17 Thread Tomas Vondra
t's see if there are other problems in the query itself. If it's generating a cartesian product, it's pointless to tune parameters. >> What PostgreSQL version are you using, what hardware? Did you tune it >> in >> any way, or is everything just default? > > It is pg 15.3, on 2 cores / 8GO / 2TO ssds, with defaults cloud > provider parameters (RDS). > I assume 2TO is 2TB? regards -- Tomas Vondra EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company

Re: Merge David and Goliath tables efficiently

2023-06-19 Thread Tomas Vondra
n was truncated and wrong, sorry for that).  > None of the plans has estimates anywhere close to 17023331531230, so where did that come from? regards -- Tomas Vondra EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company

Re: Merge David and Goliath tables efficiently

2023-06-19 Thread Tomas Vondra
the partitionwise join. I'd bet that if Nicolas replaces MERGE INTO "goliath" ca USING (SELECT * FROM "david" ORDER BY "list_id") AS t .. with MERGE INTO "goliath" ca USING "david" AS t ... it'll start working much better. regards -- Tomas Vondra EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company repro.sql Description: application/sql
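A fuller sketch of the rewritten statement, purely for illustration — the join column and the SET/INSERT lists are hypothetical, since the snippet does not show the real column names:

  MERGE INTO "goliath" ca
  USING "david" AS t
     ON ca.id = t.id
  WHEN MATCHED THEN
       UPDATE SET val = t.val
  WHEN NOT MATCHED THEN
       INSERT (id, val) VALUES (t.id, t.val);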

Re: Merge David and Goliath tables efficiently

2023-06-19 Thread Tomas Vondra
of information. I'm willing to continue to investigate, but only if you prepare a reproducer, i.e. a SQL script that demonstrates the issue - I don't think preparing that should be difficult, something like the SQL script I shared earlier today should do the trick. I suggest you do that d

Re: Merge David and Goliath tables efficiently

2023-06-19 Thread Tomas Vondra
s as an optimization fence (which means the merge does not actually see the underlying table is partitioned). If you get rid of that and add the part_col to the join, it translates to the first issue with setting costs to flip to the sequential scan at the right point. [1] https://www.postgresql.org/docs/15/runtime-config-query.html#GUC-ENABLE-PARTITIONWISE-JOIN -- Tomas Vondra EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company

Re: Merge David and Goliath tables efficiently

2023-06-20 Thread Tomas Vondra
(cost=0.29..5.57 ... Index Cond: (id = david.id) ... And this is per-loop, of which there'll be 500 (because the small david table has 500 rows). regards -- Tomas Vondra EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company

Re: Parallel hints in PostgreSQL with consistent perfromance

2023-12-27 Thread Tomas Vondra
har(3000),address varchar(3000),city varchar(900),salary > smallint, > pincode bigint,sales numeric,phone real,amount double precision, > dob date,starttime timestamp,timezone TIMESTAMP WITH TIME ZONE, > status boolean,timenow time,timelater TIME WITH TIME ZONE,col1 int, > col2 char,col3 varchar(3000),col4 varchar(3000),col5 varchar(3000), > col6 varchar(900),col7 smallint,col8 bigint,col9 numeric,col10 real, > col11 double precision,col12 date,col13 timestamp,col14 TIMESTAMP WITH > TIME ZONE, > col15 boolean,col16 time,col17 TIME WITH TIME ZONE,primary > key(col1,col7,col9));  > > *Table Data:* 100 rows, each row has a size of 2. > Without the data we can't actually try running the query. In general it's a good idea to show the "explain analyze" output for the cases you're comparing. Not only does that show what the database is doing, it also shows timings for different parts of the query, how many workers were planned / actually started, etc. regards -- Tomas Vondra EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company

Re: Fwd: extend statistics help reduce index scan a lot of shared buffer hits.

2024-02-28 Thread Tomas Vondra
. Why do you think this would not be expected? regards -- Tomas Vondra EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company

Re: Separate 100 M spatial data in 100 tables VS one big table

2024-03-05 Thread Tomas Vondra
e extra complexity of partitioning the data. If it happens only occasionally (a couple times a year), it probably is not. You'll just delete the data and reuse the space for new data. regards -- Tomas Vondra EnterpriseDB: http://www.enterprisedb.com The Enterprise PostgreSQL Company

Re: Extremely slow to establish connection when user has a high number of roles

2024-04-20 Thread Tomas Vondra
o investigate (or hopefully fix!) the issue? > A reproducer would be great - a script that creates user/roles, and triggers the long login time would allow us to investigate that. Another option would be to get a perf profile from the process busy with logging the user in - assuming it&#

Re: Hash Right join and seq scan

2024-07-03 Thread Tomas Vondra
eries). BTW you suggested each partition has ~250k rows, but the explain plan snippet you shared does not seem to be consistent with that - it only shows 2500-5000 rows per partition. If you run ANALYZE on the table, does that change the plan? regards -- Tomas Vondra EnterpriseDB:

Re: Special bloom index of INT, BIGINT, BIT, VARBIT for bitwise operation

2018-07-17 Thread Tomas Vondra
wonder if it might work with BRIN indexes of some kind. If the range summary is defined as OR of the values, that might help, depending on variability within the page range. But that would probably require some development. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Why the index is not used ?

2018-10-07 Thread Tomas Vondra
does not really solve the issues with having to pass the password to the query, making it visible in pg_stat_activity, various logs etc. Which is why people generally use FDE for the whole disk, which is transparent and provides the same level of protection. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Why the index is not used ?

2018-10-08 Thread Tomas Vondra
s discussion, it does not seem very polished / resilient. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Fwd: Query with high planning time compared to execution time

2018-11-02 Thread Tomas Vondra
f the query planning process is slow here? That is: 1) make sure you have the debug symbols installed 2) do `perf record` 3) run the benchmark for a while (a minute or so) 4) stop the perf record using Ctrl-C 5) generate a profile using `perf report` and share the result Possibly do the same thi

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Tomas Vondra
t and if it crashes when Committed_AS hits the CommitLimit. That doesn't explain where the memory leak is, though :-( regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Out of Memory errors are frustrating as heck!

2019-04-15 Thread Tomas Vondra
extra debugging to see where the memory is allocated from? It's a bit heavy-handed, though. Or maybe splitting es_query_ctx into smaller contexts. That might be easier to evaluate than sifting through god-knows-how-many-gbs of log. regards -- Tomas Vondra http://www.2ndQua

Re: Block / Page Size Optimization

2019-04-15 Thread Tomas Vondra
cumentation or FAQ somewhere. Maybe read this famous paper by Jim Gray & Franco Putzolu. It's not exactly about the thing you're asking about, but it's related. It essentially deals with sizing memory vs. disk I/O, and page size plays an important role in that too. [1]

Re: Out of Memory errors are frustrating as heck!

2019-04-18 Thread Tomas Vondra
sensitive data, it may not be possible, but perhaps you could hash it in some way. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Best Filesystem for PostgreSQL

2019-04-18 Thread Tomas Vondra
nced (like better snapshotting, etc.) then maybe ZFS is the right choice for you. It also allows various advanced configurations with ZIL, L2ARC, ... regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Out of Memory errors are frustrating as heck!

2019-04-20 Thread Tomas Vondra
hes but 128k virtual ones, which means we'd do 4 rounds of this dance. It's a bit inefficient, but situations like this should be rather rare, and it's more graceful than just crashing with OOM. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Develo

Re: Out of Memory errors are frustrating as heck!

2019-04-20 Thread Tomas Vondra
eded for the hash table contents (i.e. rows) below work_mem. If you just cap the number of batches, you'll keep the amount of memory for BufFile under control, but the hash table may exceed work_mem. Considering how rare this issue likely is, we need to be looking for a solution that doe

Re: Out of Memory errors are frustrating as heck!

2019-04-20 Thread Tomas Vondra
On Sat, Apr 20, 2019 at 04:26:34PM -0400, Tom Lane wrote: Tomas Vondra writes: Considering how rare this issue likely is, we need to be looking for a solution that does not break the common case. Agreed. What I think we need to focus on next is why the code keeps increasing the number of

Re: Out of Memory errors are frustrating as heck!

2019-04-20 Thread Tomas Vondra
On Sat, Apr 20, 2019 at 04:46:03PM -0400, Tom Lane wrote: Tomas Vondra writes: I think it's really a matter of underestimate, which convinces the planner to hash the larger table. In this case, the table is 42GB, so it's possible it actually works as expected. With work_mem = 4MB I&#

Re: Out of Memory errors are frustrating as heck!

2019-04-20 Thread Tomas Vondra
On Sat, Apr 20, 2019 at 10:36:50PM +0200, Tomas Vondra wrote: On Sat, Apr 20, 2019 at 04:26:34PM -0400, Tom Lane wrote: Tomas Vondra writes: Considering how rare this issue likely is, we need to be looking for a solution that does not break the common case. Agreed. What I think we need to

Re: Out of Memory errors are frustrating as heck!

2019-04-20 Thread Tomas Vondra
ause merge join will win thanks to being automatically cheaper? Not sure. Also, I wonder if we really need 8kB buffers here. Would it make sense to allow smaller buffers in some cases? Say, 1kB. It's not going to save us, but it's still 8x better than now. regards -- Tomas Vondra

Re: Out of Memory errors are frustrating as heck!

2019-04-20 Thread Tomas Vondra
On Sat, Apr 20, 2019 at 08:33:46PM -0400, Gunther wrote: On 4/20/2019 16:01, Tomas Vondra wrote: For me, this did the trick:  update pg_class set (relpages, reltuples) = (100, 1) where relname = 'tmp_r';  update pg_class set (relpages, reltuples) = (1, 100) where relnam

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Tomas Vondra
On Sun, Apr 21, 2019 at 03:08:22AM -0500, Justin Pryzby wrote: On Sun, Apr 21, 2019 at 01:03:50AM -0400, Gunther wrote: On 4/20/2019 21:14, Tomas Vondra wrote: >Maybe. But before wasting any more time on the memory leak investigation, >I suggest you first try the patch moving the B

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Tomas Vondra
mory used. An alternative would be spilling the extra tuples into a special overflow file, as I explained earlier. That would actually enforce work_mem I think. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Backup and Restore (pg_dump & pg_restore)

2019-04-21 Thread Tomas Vondra
w is this related to performance? Please send it to pgsql-general, and include information about how you created the dump. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Tomas Vondra
ut 4GB might be allocated. And it seems ~1/2 of them did not receive any tuples, so only about 2GB got allocated so far. The second batch will probably make it fail much sooner, because it allocates the BufFile stuff eagerly (both for inner and outer side). regards -- Tomas Vondra

Re: Out of Memory errors are frustrating as heck!

2019-04-21 Thread Tomas Vondra
ng further. That is, I'm imagining adding a second independent reason for shutting off growEnabled, along the lines of "increasing nbatch any further will require an unreasonable amount of buffer memory". The question then becomes how to define "unreasonable". On Sun, Ap

Re: Out of Memory errors are frustrating as heck!

2019-04-22 Thread Tomas Vondra
ched the hash function, because it might move rows backwards (to the already processed region). regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Out of Memory errors are frustrating as heck!

2019-04-23 Thread Tomas Vondra
On Tue, Apr 23, 2019 at 03:43:48PM -0500, Justin Pryzby wrote: On Tue, Apr 23, 2019 at 04:37:50PM -0400, Gunther wrote: On 4/21/2019 23:09, Tomas Vondra wrote: >What I think might work better is the attached v2 of the patch, with a Thanks for this, and I am trying this now. ... Aand, i

Re: Out of Memory errors are frustrating as heck!

2019-04-23 Thread Tomas Vondra
On Tue, Apr 23, 2019 at 04:37:50PM -0400, Gunther wrote: On 4/21/2019 23:09, Tomas Vondra wrote: What I think might work better is the attached v2 of the patch, with a single top-level condition, comparing the combined memory usage (spaceUsed + BufFile) against spaceAllowed. But

Re: Out of Memory errors are frustrating as heck!

2019-04-23 Thread Tomas Vondra
since 9.5 or so) I think it should not be all that difficult. I'll give it a try over the weekend if I get bored for a while. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Out of Memory errors are frustrating as heck!

2019-04-28 Thread Tomas Vondra
On Wed, Apr 24, 2019 at 02:36:33AM +0200, Tomas Vondra wrote: ... I still think the idea with an "overflow batch" is worth considering, because it'd allow us to keep the memory usage within work_mem. And after getting familiar with the hash join code again (haven't messed

Re: Trying to handle db corruption 9.6

2019-05-20 Thread Tomas Vondra
If you also get corrupted WAL, you have bigger problems, I'm afraid. Also, data corruption issues are one-off events, mostly unique. That makes it rather difficult (~impossible) to write docs about recovering from them. And it's why there are no magic tools. regards -- Tomas Vondra

Re: Trying to handle db corruption 9.6

2019-05-20 Thread Tomas Vondra
ay it does not work, why do you think so? Does it print some error, or what? Does it even get executed? It does not seem to be the case, judging by the log (there's no archive_command message). How was the "secondary machine" created? You said you have all the WAL since then - ho

Re: Generic Plans for Prepared Statement are 158155 times slower than Custom Plans

2019-05-20 Thread Tomas Vondra
handling. I am looking forward to some expert advice here when dealing with citext data type. It's generally a good idea to share explain analyze output for both versions of the query - both with citext and text. regards -- Tomas Vondra http://www.2ndQuadrant.com Postg

Re: Trying to handle db corruption 9.6

2019-05-21 Thread Tomas Vondra
are in the data). The data files are pretty useless on their own. It might be possible to do some manualy recovery - say, you might create the same tables in the new schema, and then guess which data files belong to them. But there are various caveats e.g. due to dropped columns, etc. -- Tomas Vo

Re: Trying to handle db corruption 9.6

2019-05-21 Thread Tomas Vondra
g now, as things seem terribly broken. Right. But my impression is there are no backups ... regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: pg_restore takes more time on creation of rules

2019-05-22 Thread Tomas Vondra
e the dump with pg_restore with 5 jobs (-j). Well, presumably the third database has complexity in other places, possibly spending a lot of time on CPU, while the other databases don't have such issue. What would help is a CPU profile, e.g. from perf. regards -- Tomas Vondra

Re: pg_restore takes more time on creation of rules

2019-05-22 Thread Tomas Vondra
ce of code somewhere. I suggest you do a bit of profiling, for example using perf [1], which would show where the time is spent. [1] https://wiki.postgresql.org/wiki/Profiling_with_perf And please stop top-posting, it makes it much harder to follow the discussion. regards -- Tomas Vondra

Re: Use Postgres as a column store by creating one table per column

2019-05-24 Thread Tomas Vondra
storage pages. Right. And the largest page size is 64kB. But 8kB is a pretty good trade-off, in most cases. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: UUID v1 optimizations...

2019-05-25 Thread Tomas Vondra
o older parts of the index (so if you delete data, we'd never fill that space). [1] https://www.2ndquadrant.com/en/blog/sequential-uuid-generators/ regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: UUID v1 optimizations...

2019-05-25 Thread Tomas Vondra
On Sat, May 25, 2019 at 06:38:08PM -0400, Tom Lane wrote: Tomas Vondra writes: On Sat, May 25, 2019 at 05:54:15PM -0400, Tom Lane wrote: What you might want to think about is creating a function that maps UUIDs into an ordering that makes sense to you, and then creating a unique index over

Re: UUID v1 optimizations...

2019-05-25 Thread Tomas Vondra
On Sun, May 26, 2019 at 01:49:30AM +0200, Ancoron Luciferis wrote: On 26/05/2019 00:14, Tomas Vondra wrote: On Sat, May 25, 2019 at 05:54:15PM -0400, Tom Lane wrote: Ancoron Luciferis writes: On 25/05/2019 16:57, Tom Lane wrote: (4) it in fact *wouldn't* do anything useful, because

Re: UUID v1 optimizations...

2019-05-26 Thread Tomas Vondra
dex pages (because we have to touch many of them). The point of generating UUIDs in a more sequential way is to limit this behavior by "concentrating" the index inserts into a smaller part of the index. That's why indexes on sequential data (say, generated from a SERIAL column)

Re: Shortest offline window on database migration

2019-05-30 Thread Tomas Vondra
ter. Has anyone been through this type of problem? Unfortunately, I don't think there's a much better solution than what you initially described - dump/restore, and then creating a replica. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development,

Re: Strange query behaviour between 9.4 and 12beta1

2019-06-04 Thread Tomas Vondra
ws have been removed from, there is nothing suggesting that this dataset was ever created (eg. temp) and these numbers definitely don't match what i was expecting, which is more or less what i'm seeing in 9.4 plan. Obviously i've tested this more than once and this behaviour consist

Re: Custom opclass for column statistics?

2019-07-06 Thread Tomas Vondra
ample demonstrating the issue? regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Custom opclass for column statistics?

2019-07-06 Thread Tomas Vondra
On Sat, Jul 06, 2019 at 05:35:33PM +0200, Ancoron Luciferis wrote: On 06/07/2019 15:38, Tomas Vondra wrote: On Sat, Jul 06, 2019 at 11:02:27AM +0200, Ancoron Luciferis wrote: Hi, I've been wondering whether it is possible somehow to have the standard column statistics to respect a ce

Re: Searching in varchar column having 100M records

2019-07-17 Thread Tomas Vondra
-- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Searching in varchar column having 100M records

2019-07-18 Thread Tomas Vondra
check why it did not help much and give you further advice. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Searching in varchar column having 100M records

2019-07-19 Thread Tomas Vondra
very clearly does not use the index-only scan, so it's not surprising it's not any faster. You need to find out why the planner makes that decision. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Standard uuid vs. custom data type uuid_v1

2019-07-27 Thread Tomas Vondra
ew data type is sorted in a way that eliminates/reduces random I/O against the index. But maybe that's not the case - hard to say, because the linked results don't say how the data files were generated ... regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Query went slow all of sudden. ON V 11.3

2019-10-06 Thread Tomas Vondra
e table (or the indexes on it) are getting bloated over time. Try looking at the sizes of those objects (and maybe use pgstattuple to get more detailed statistics) before rebuilding it next time. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Supp
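A quick sketch of using pgstattuple to quantify the bloat, with hypothetical object names (the extension ships with PostgreSQL but has to be created explicitly):

  CREATE EXTENSION IF NOT EXISTS pgstattuple;
  -- dead-tuple and free-space percentages for the table
  SELECT * FROM pgstattuple('some_table');
  -- leaf density / fragmentation for a btree index
  SELECT * FROM pgstatindex('some_table_pkey');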

Re: Slow PostgreSQL 10.6 query

2019-10-06 Thread Tomas Vondra
e subquery into the main (upper) one, and it probably happens to constrict the choices so that the planner ends up picking a good plan. I guess adding "OFFSET 0" to the subquery would have the same effect. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Devel
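To make the mentioned trick concrete, a hedged sketch with made-up table and column names — OFFSET 0 keeps the subquery from being flattened into the outer query:

  SELECT o.*
    FROM (SELECT id, val
            FROM some_table
           WHERE val > 10
          OFFSET 0) AS sub
    JOIN other_table o ON o.id = sub.id;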

Re: Delete huge Table under XFS

2019-10-06 Thread Tomas Vondra
le. That should delete files in smaller chunks. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Out of Memory errors are frustrating as heck!

2019-10-06 Thread Tomas Vondra
query plan we can hardly make any guesses about what might be the issue. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Out of Memory errors are frustrating as heck!

2019-10-06 Thread Tomas Vondra
memory error. As Tom already said, this seems like a quite independent issue. Next time it'd be better to share it in a new thread, not to mix it up with the old discussion. Have any of the things discussed and proposed, especially this last one by Tomas Vondra, been applied to t

Re: pg_stat_bgwriter

2019-10-14 Thread Tomas Vondra
are causing this and optimize them. Note: FWIW, a single snapshot of pg_stats* may be misleading, because it's cumulative, so it's not clear how accurately it reflects current state. Next time take two snapshots and subtract them. regards -- Tomas Vondra http://www.2ndQuad
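A possible way to do the two-snapshot subtraction, sketched with a scratch table named bgwriter_snap (the name and the selected counters are illustrative; the columns match the pre-v17 layout of pg_stat_bgwriter):

  CREATE TABLE bgwriter_snap AS
  SELECT now() AS snap_time, * FROM pg_stat_bgwriter;

  -- ... wait a while, then take a second snapshot ...
  INSERT INTO bgwriter_snap
  SELECT now(), * FROM pg_stat_bgwriter;

  -- difference between the two snapshots
  SELECT max(buffers_checkpoint) - min(buffers_checkpoint) AS buffers_checkpoint,
         max(buffers_clean)      - min(buffers_clean)      AS buffers_clean,
         max(buffers_backend)    - min(buffers_backend)    AS buffers_backend
    FROM bgwriter_snap;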

Re: pg_stat_bgwriter

2019-10-14 Thread Tomas Vondra
and even queries does not really help either. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Reading explain plans- row estimates/actuals on lower nodes vs next level up

2019-10-17 Thread Tomas Vondra
-- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Barman

2019-10-31 Thread Tomas Vondra
io 1" in the barman docs, i.e. pg_basebackup (backup_method = postgres) and WAL streaming. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: FPGA optimization ...

2019-11-04 Thread Tomas Vondra
pproach a university (which does not mean people from pgsql-hackers can't be involved, of course). I've been involved in a couple of such research projects in Europe, not sure what exactly is the situation/rules in US. regards [1] https://swarm64.com/netezza-replacement/ -- Tomas

Re: FPGA optimization ...

2019-11-06 Thread Tomas Vondra
ones they show benchmarks for. Interesting, and perhaps interesting for in-memory databases. [1] https://escholarship.org/content/qt1ph2x5td/qt1ph2x5td.pdf?t=pkvkdm -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: FPGA optimization ...

2019-11-06 Thread Tomas Vondra
On Wed, Nov 06, 2019 at 03:15:53PM -0800, Andres Freund wrote: Hi, On 2019-11-06 22:54:48 +0100, Tomas Vondra wrote: If we're only talking about FPGA I/O acceleration, essentially FPGA between the database and storage, it's likely possible to get that working without any extensiv

Re: Parallel Query

2019-11-13 Thread Tomas Vondra
single process serially. What does the explain analyze output look like without the HAVING clause? Try setting parallel_setup_cost and parallel_tuple_cost to 0. That might trigger parallel query. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Supp
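A minimal sketch of the suggested experiment — the query itself is a placeholder for the real one:

  SET parallel_setup_cost = 0;
  SET parallel_tuple_cost = 0;
  EXPLAIN (ANALYZE)
  SELECT some_column, count(*) FROM some_table GROUP BY some_column;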

Re: Out of memory error on automatic vacuum

2019-11-18 Thread Tomas Vondra
ws processes to use more memory when executing queries, i.e. it increases memory pressure and makes OOM more likely. So you need to watch system monitoring, see how much memory is being used (excluding page cache) and consider reducing work_mem and/or max_connections if it's too close. regard

Re: Out of memory error on automatic vacuum

2019-11-18 Thread Tomas Vondra
On Mon, Nov 18, 2019 at 03:02:16PM +0100, Ekaterina Amez wrote: El 18/11/19 a las 13:25, Tomas Vondra escribió: On Mon, Nov 18, 2019 at 12:41:27PM +0100, Ekaterina Amez wrote: Hi all, This morning I was checking postgres servers logs, looking for errors (we've recently upgraded the

Re: Out of memory error on automatic vacuum

2019-11-18 Thread Tomas Vondra
On Mon, Nov 18, 2019 at 03:46:03PM +0100, Ekaterina Amez wrote: El 18/11/19 a las 15:16, Tomas Vondra escribió: Not sure I understand. Whenever PostgreSQL process needs memory it requests it from the kernel by calling malloc(), and the amount of availabe RAM is limited.  So when kernel can&#

Re: Postgresql planning time too high

2019-11-22 Thread Tomas Vondra
reproduce the issue, and investigate locally. A SQL script that creates the whole schema and runs the query, for example. What I'd like to see is a perf profile from the planning, so that we can see where exactly the bottleneck is. Maybe there actually is a bug that makes it much more expensive

Re: Seeking reason behind performance gain in 12 with HashAggregate

2020-01-13 Thread Tomas Vondra
will be exactly the same. You need to do VACUUM FULL, but be careful as that acquires exclusive lock on the table. regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

Re: Bad query plan when you add many OR conditions

2020-01-13 Thread Tomas Vondra
continues to use the wrong plan (maybe stats don't work well on array fields like tags??). We don't support this type of clause for extended statistics (yet). regards -- Tomas Vondra http://www.2ndQuadrant.com PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
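For reference, this is roughly what defining extended statistics looks like (the table and column names here are hypothetical) — and, as noted above, such statistics do not yet cover clauses on array columns like tags:

  CREATE STATISTICS tag_stats (dependencies, ndistinct)
      ON col_a, col_b FROM some_table;
  ANALYZE some_table;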
