Andy wrote:
> Are there any significant performance improvements or regressions from 8.4 to
> 9.0? If yes, which areas (inserts, updates, selects, etc.) are those in?
There were two major rounds of tinkering to the query planner/optimizer
that can impact the types of plans you get for some SELECT queries.
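One way to see whether those changes matter for a given workload is to run
the same query through EXPLAIN ANALYZE on an 8.4 and a 9.0 instance and
compare the plans. A minimal sketch (the table and column names here are
made up for illustration):

  -- Run on both versions and diff the output.
  EXPLAIN ANALYZE
  SELECT count(*)
  FROM orders                         -- hypothetical table
  WHERE created_at >= now() - interval '7 days';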
Hi,
Are there any significant performance improvements or regressions from 8.4 to
9.0? If yes, which areas (inserts, updates, selects, etc.) are those in?
In a related question, is there any public data comparing the performance
of various PostgreSQL versions?
Thanks
Fabrício dos Anjos Silva wrote:
> explain analyze select max(cnpj) from empresa where dtcriacao >=
> current_date-5;
> Result (cost=32.24..32.24 rows=1 width=0) (actual
> time=5223.937..5223.938 rows=1 loops=1)
>   InitPlan 1 (returns $0)
>     -> Limit (cost=0.00..32.24 rows=1 width=15)
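A plan like this (max() rewritten as an InitPlan doing a Limit over an
index) can spend most of its time walking the index on cnpj and skipping
rows that fail the date filter, which would explain the 5223 ms against a
tiny cost estimate. One thing that might help, sketched with a hypothetical
index name:

  -- Lets the recent rows be found by date first.
  CREATE INDEX empresa_dtcriacao_idx ON empresa (dtcriacao);

  EXPLAIN ANALYZE
  SELECT max(cnpj) FROM empresa WHERE dtcriacao >= current_date - 5;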
2010/9/29 Fabrício dos Anjos Silva
> When setting seq_page_cost and random_page_cost, do I have to consider
> the probability that data will be in memory? Or does seq_page_cost mean
> "sequential access on disk" and random_page_cost mean "random access on
> disk"?
The reason seq_page_cost is lower than random_page_cost by default is that
sequential reads from disk are much cheaper than random seeks.
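Both settings can be changed per session to experiment; the values below
are only illustrative, not recommendations:

  -- Defaults are seq_page_cost = 1.0 and random_page_cost = 4.0;
  -- narrowing the gap mimics a mostly-cached database.
  SET seq_page_cost = 1.0;
  SET random_page_cost = 2.0;

  EXPLAIN ANALYZE
  SELECT max(cnpj) FROM empresa WHERE dtcriacao >= current_date - 5;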
Hi,
I have this situation:
Database size: 7.6 GB (on disk)
Memory size: 7.7 GB
1 CPU: approx. 2 GHz Xeon
Number of tables: 1 (empresa)
CREATE TABLE "public"."empresa" (
"cdempresa" INTEGER NOT NULL,
"razaosocial" VARCHAR(180),
"cnpj" VARCHAR(14) NOT NULL,
"ie" VARCHAR(13),
"end
Fabrício dos Anjos Silva wrote:
> After reading lots of documentation, I still don't understand
> fully how PG knows if some needed data is in memory or in secondary
> storage.
> Does PG consider this?
No.
> When setting seq_page_cost and random_page_cost, do I have to
> consider the probability that data will be in memory?
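What the planner does use is effective_cache_size: a pure planning hint for
how much data the OS cache plus shared_buffers are likely to hold. It
allocates nothing and tracks nothing; it only shifts the cost estimates for
index scans. An illustrative setting (the value is hypothetical):

  SET effective_cache_size = '6GB';  -- no memory is actually reserved
  SHOW effective_cache_size;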
Hi,
After reading lots of documentation, I still don't understand fully how
PG knows if some needed data is in memory or in secondary storage.
While choosing the best query plan, the optimizer must take this into
account. Does PG consider this? If so, how does it know?
I presume it checks ...
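For what is actually sitting in shared_buffers (the OS cache is invisible
to PG), the contrib module pg_buffercache can be queried. A rough sketch,
assuming the module is installed (on 8.4/9.0 that means running its contrib
SQL script; CREATE EXTENSION only arrives in 9.1):

  -- Simplistic join: it ignores relation forks and can mismatch
  -- relfilenodes from other databases, but is fine for a quick look.
  SELECT c.relname, count(*) AS buffers
  FROM pg_buffercache b
  JOIN pg_class c ON c.relfilenode = b.relfilenode
  GROUP BY c.relname
  ORDER BY buffers DESC
  LIMIT 10;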
On 29 September 2010 10:03, Mark Kirkwood wrote:
> Yeah, I think the idea of trying to have a few smaller indexes for the
> 'hot' customers is a good idea. However, I am wondering if just using
> single-column indexes and seeing if the bitmap scan/merge of smaller
> indexes is actually more efficient is worthwhile.
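The single-column variant would look something like this (the index names
are hypothetical; the column names are taken from the thread):

  CREATE INDEX acc_trans_customer_id_idx ON acc_trans (customer_id);
  CREATE INDEX acc_trans_created_idx     ON acc_trans (created);

  -- The planner can then combine the two via a BitmapAnd:
  EXPLAIN
  SELECT * FROM acc_trans
  WHERE customer_id = 224885
    AND created >= now() - interval '1 month';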
On 29/09/10 19:41, Tobias Brox wrote:
> I just got this crazy, stupid, or maybe genius idea :-)
> Now, my idea is to drop that fat index and replace it with conditional
> indexes for a dozen heavy users - like these:
>   acc_trans(trans_type, created) where customer_id=224885;
>   acc_trans(trans_ty
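Spelled out as a complete statement, one of those conditional (partial)
indexes would be something like the following; the index name is made up:

  CREATE INDEX acc_trans_cust_224885_idx
    ON acc_trans (trans_type, created)
    WHERE customer_id = 224885;

  -- The WHERE clause keeps the index small: it only contains
  -- entries for that one customer.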