> -----Original Message-----
> From: pgsql-performance-ow...@postgresql.org [mailto:pgsql-performance-
> ow...@postgresql.org] On Behalf Of Gunther
> Sent: Wednesday, November 01, 2017 20:29
> To: pgsql-performance@postgresql.org
> Subject: [PERFORM] OLAP/reporting queries fall into nested loops o
From: gneun...@comcast.net
Sent: August 9, 2017 14:52
To: l...@laurent-hasson.com
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Unlogged tables
Please don't top post.
On 8/9/2017 2:30 PM, l...@laurent-hasson.com wrote:
Ok, I am not sure. I run Postgres as a service, and when my Windows machine
rebooted after a patch, the UNLOGGED tables were cleaned out... maybe the patch
process in Windows messed something up, I don't know.
From: gneun...@comcast.net
Sent: August 9, 2017 13:17
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Unlogged tables
Hello,
We have a fairly large static dataset that we load into Postgres. We made the
tables UNLOGGED and saw a pretty significant performance improvement for the
loading. This was all fantastic until the server crashed and we were surprised
to see during a follow-up demo that the data had disappeared.
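For context, this is documented behavior rather than a bug: unlogged tables
skip write-ahead logging, so PostgreSQL truncates them after any crash or
unclean shutdown, and a service killed during a Windows patch reboot counts as
one. A minimal sketch of the load-then-promote pattern (the table, column, and
file names are made up for illustration):

    -- Load without WAL overhead; the table is truncated if the server crashes.
    CREATE UNLOGGED TABLE staging_claims (
        claim_id   bigint,
        dgns_codes text[]
    );

    COPY staging_claims FROM '/tmp/claims.csv' WITH (FORMAT csv);

    -- Once the load succeeds, make the table crash-safe.
    -- ALTER TABLE ... SET LOGGED (PostgreSQL 9.5+) rewrites the table
    -- into the WAL, so it takes time proportional to the table size.
    ALTER TABLE staging_claims SET LOGGED;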
ditto here... much slower, and crashes too often. We run an evergreen shop
where I work, but everyone has moved back to III.
From: adambrusselb...@gmail.com
Sent: June 23, 2017 8:11 AM
To: t...@sss.pgh.pa.us
Cc: sumeet.k.shu...@gmail.com
Hello all,
I have a query with many joins, something like:
SELECT c1, c2, c3, sum(c5)
FROM V1
JOIN V2 ON ...
LEFT JOIN V3 ON ...
LEFT JOIN T4 ON ...
JOIN T5 ON ...
JOIN T6 ON ...
LEFT JOIN T7 ON ...
JOIN T8 ON ...
LEFT JOIN T9 ON ...
WHERE ...
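When a many-join reporting query like this falls into nested loops, the usual
cause is row-count misestimation deep in the join tree. A hedged sketch of
session-level settings worth experimenting with (the values are illustrative,
not recommendations):

    -- Let the planner reorder joins across more relations (both default to 8).
    SET join_collapse_limit = 12;
    SET from_collapse_limit = 12;

    -- Blunt diagnostic only: forbid nested loops for this session, then
    -- compare EXPLAIN ANALYZE output against the original plan.
    SET enable_nestloop = off;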
Are you already on SSDs? That will be the dominant factor, I think. Then
memory. After that, more cores are good for parallelism (especially with 9.6,
although that requires solid memory support). Faster cores will be better if
you expect complex calculations in memory, i.e., some analytics pe
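Since 9.6 parallelism comes up here, a minimal sketch of turning it on (the
table and column names are assumed for illustration):

    -- Allow up to 4 workers per parallelized plan node
    -- (max_parallel_workers_per_gather defaults to 0 in 9.6, i.e., off).
    SET max_parallel_workers_per_gather = 4;

    -- Confirm the plan actually contains a Gather node.
    EXPLAIN ANALYZE
    SELECT c1, sum(c5) FROM t5 GROUP BY c1;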
The first behavior is very likely just caching: the data pages the query
touches stay cached (in shared_buffers and the OS file cache), so the second
run reads them from memory rather than disk.
If you ran a bunch of other queries in the middle and effectively evicted those
pages from the cache, then went back to your query, you'd likely see the 'slow'
behavior again.
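A quick way to watch this happen, with an illustrative table name:

    -- Run twice. The first run reports mostly "shared read=..." (pages
    -- fetched from outside shared_buffers); the second run reports mostly
    -- "shared hit=..." (pages served from PostgreSQL's buffer cache).
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM claims;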
It'd be so nice to have some checks to guarantee the backup is trustworthy.
Restoring the db is imho not a very good option in general:
- large databases are a problem. My db is about 3TB. Time plus disk space is a
big blocker.
- also, what if the backup is incomplete? Just restoring the db suc
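One lightweight check that already exists for custom-format dumps: pg_restore
can read and list the archive's table of contents without restoring anything,
which at least catches unreadable or truncated files. A sketch, with a made-up
path:

    # List the archive TOC; a nonzero exit means the file isn't a readable dump.
    pg_restore --list /backups/mydb.dump > /dev/null && echo "TOC readable"

It doesn't prove the data inside is complete, but it's cheap enough to run
after every dump.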
4:59
To: l...@laurent-hasson.com
Cc: Marc Mamin; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Inlining of functions (doing LIKE on an array)
"l...@laurent-hasson.com" writes:
> I wish there were a way to force inlining, or some other mechanism as the
> performance diff
Rows Removed by Filter: 1851321
SubPlan 1
  ->  Function Scan on unnest a  (cost=0.00..1.25 rows=1 width=0) (actual time=0.001..0.001 rows=0 loops=1936953)
        Filter: (a ~~ '427%'::text)

Filter: (tilda."like"("SECONDARY_ICD9_DGNS_CD", '427%'::text) > 0)
Rows Removed by Filter: 1851321
Planning time: 0.166 ms
Execution time: 8169.676 ms
There is something fundamental here, it seems, but I am not so good at
Hello,
I am trying to implement an efficient "like" over a text[]. I see a lot of
people have tried before me, and I learnt a lot through the forums. The result
of my search is that a query like the following is optimal:

SELECT count(*)
FROM claims
WHERE (SELECT count(*)
       FROM unnest
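The snippet above is cut off in the archive; a sketch of the complete pattern
it appears to describe, borrowing the array column name from the plan output
earlier in this digest (an assumption, not necessarily the poster's exact
query):

    SELECT count(*)
    FROM claims c
    WHERE (SELECT count(*)
           FROM unnest(c."SECONDARY_ICD9_DGNS_CD") AS a(code)
           WHERE a.code LIKE '427%') > 0;

    -- An EXISTS variant of the same test lets the executor stop at the
    -- first matching array element instead of counting them all:
    SELECT count(*)
    FROM claims c
    WHERE EXISTS (SELECT 1
                  FROM unnest(c."SECONDARY_ICD9_DGNS_CD") AS a(code)
                  WHERE a.code LIKE '427%');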