2012/1/31 Carlo Stonebanks :
> Pavel, thank you very much for your explanation.
>
> Is it possible to define under what conditions that sql procs will
> outperform plpgsql ones, and vice-versa?
yes, a little bit :)
when inlining is possible, then a SQL function will be faster - a typical
use case is si
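
For illustration only (not from the thread): a minimal sketch of the kind of
one-line SQL function the planner can inline into the calling query, next to a
plpgsql version it cannot; the function names and the column used are made up.

-- A trivial SQL wrapper: a single simple SELECT with STABLE volatility,
-- so the planner can expand it in place inside the calling query.
CREATE FUNCTION is_recent(timestamp) RETURNS boolean AS $$
    SELECT $1 > now() - interval '1 day';
$$ LANGUAGE sql STABLE;

-- The plpgsql equivalent is executed as a black box on every call and is
-- never inlined, so per-call overhead and lost optimizations add up.
CREATE FUNCTION is_recent_plpgsql(ts timestamp) RETURNS boolean AS $$
BEGIN
    RETURN ts > now() - interval '1 day';
END;
$$ LANGUAGE plpgsql STABLE;

-- With the SQL version, WHERE is_recent(created) behaves like writing the
-- expression directly, so indexes and other planner rewrites still apply.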
> Looking at
> http://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-RANDOM-PAGE-COST
> I wonder if I should try reducing random_page_cost?
Yes, and I should speak to Heroku about reducing it by default. RPC
represents the ratio between the cost of a sequential lookup of a single
page and the cost of a random one.
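
For reference, the two settings being compared (the values shown are the stock
defaults, included only as an illustration):

SHOW seq_page_cost;      -- 1.0 by default: cost of a sequentially fetched page
SHOW random_page_cost;   -- 4.0 by default: cost of a non-sequentially fetched page
-- Lowering the ratio tells the planner that random (index) access is
-- relatively cheap, which makes it favor index scans over sequential scans:
SET random_page_cost = 2.0;   -- session-local experiment only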
Pavel, thank you very much for your explanation.
Is it possible to define under what conditions that sql procs will
outperform plpgsql ones, and vice-versa?
-----Original Message-----
From: Pavel Stehule [mailto:pavel.steh...@gmail.com]
Sent: January 30, 2012 2:57 AM
To: Carlo Stonebanks
Cc: Mer
Got it (with a little bit of klutzing around). :) Thanks!
On Mon, Jan 30, 2012 at 2:24 PM, Scott Marlowe wrote:
> On Mon, Jan 30, 2012 at 3:19 PM, Scott Marlowe
> wrote:
> > On Mon, Jan 30, 2012 at 2:55 PM, Alessandro Gagliardi
> > wrote:
> >> I set random_page_cost to 2 (with enable_seqscan on
On Mon, Jan 30, 2012 at 3:19 PM, Scott Marlowe wrote:
> On Mon, Jan 30, 2012 at 2:55 PM, Alessandro Gagliardi
> wrote:
>> I set random_page_cost to 2 (with enable_seqscan on) and get the same
>> performance I got with enable_seqscan off.
>> So far so good. Now I just need to figure out how to set it globally. :-/
On Mon, Jan 30, 2012 at 2:55 PM, Alessandro Gagliardi
wrote:
> I set random_page_cost to 2 (with enable_seqscan on) and get the same
> performance I got with enable_seqscan off.
> So far so good. Now I just need to figure out how to set it globally. :-/
alter database your_database set random_page_cost=2.0;  -- "your_database" is a placeholder for the actual database name
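A sketch of applying and checking that; "your_database" and "your_role" are
placeholders, not names from the thread:

-- Per-database: takes effect for new connections to that database.
ALTER DATABASE your_database SET random_page_cost = 2.0;

-- Per-role alternative when you cannot edit postgresql.conf (e.g. a hosted service):
ALTER ROLE your_role SET random_page_cost = 2.0;

-- Reconnect, then verify:
SHOW random_page_cost;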
I set random_page_cost to 2 (with enable_seqscan on) and get the same
performance I got with enable_seqscan off.
So far so good. Now I just need to figure out how to set it globally. :-/
On Mon, Jan 30, 2012 at 1:45 PM, Scott Marlowe wrote:
> On Mon, Jan 30, 2012 at 2:39 PM, Alessandro Gagliardi
Pretty sure. I just ran the same query twice in a row
with enable_seqscan=true and the "actual time" was on the order of 42
seconds both times. With enable_seqscan=false, it was on the order 3
seconds. Going back to enable_seqscan=true, it's back to 42 seconds. Unless
you're saying that enable_seqs
On Mon, Jan 30, 2012 at 2:39 PM, Alessandro Gagliardi
wrote:
> On Mon, Jan 30, 2012 at 1:25 PM, Josh Berkus wrote:
>>
>> You can do "SHOW random_page_cost" yourself right now, too.
>>
> 4
>
> I also tried "SHOW seq_page_cost" and that's 1.
>
> Looking
> at http://www.postgresql.org/docs/current/s
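
A sketch of the before/after comparison being described, using the blocks
query quoted elsewhere in this digest (timings will obviously differ per machine):

-- Baseline: planner free to pick a sequential scan.
SET enable_seqscan = on;
EXPLAIN ANALYZE
SELECT private, COUNT(block_id) FROM blocks
WHERE created > 'yesterday' AND shared IS FALSE GROUP BY private;

-- Steer the planner away from seq scans and compare the "actual time" lines.
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT private, COUNT(block_id) FROM blocks
WHERE created > 'yesterday' AND shared IS FALSE GROUP BY private;

-- Put the session back to normal afterwards.
RESET enable_seqscan;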
On Mon, Jan 30, 2012 at 1:25 PM, Josh Berkus wrote:
> You can do "SHOW random_page_cost" yourself right now, too.
>
> 4
I also tried "SHOW seq_page_cost" and that's 1.
Looking at
http://www.postgresql.org/docs/current/static/runtime-config-query.html#GUC-RANDOM-PAGE-COSTI
wonder if I should try
On 1/30/12 12:59 PM, Claudio Freire wrote:
> On Mon, Jan 30, 2012 at 5:55 PM, Alessandro Gagliardi
> wrote:
>> Hm. Well, it looks like setting enable_seqscan=false is session specific, so
>> it seems like I can use it with this query alone; but it sounds like even if
>> that works, it's a bad practice. (Is that true?)
On Mon, Jan 30, 2012 at 17:35, Alessandro Gagliardi wrote:
> Well that was a *lot* faster:
>
> "HashAggregate (cost=156301.82..156301.83 rows=2 width=26) (actual
> time=2692.806..2692.807 rows=2 loops=1)"
> " -> Bitmap Heap Scan on blocks (cost=14810.54..155828.95 rows=472871
> width=26) (actual time=289.828..1593.893 rows=575186 loops=1)"
On Mon, Jan 30, 2012 at 5:55 PM, Alessandro Gagliardi
wrote:
> Hm. Well, it looks like setting enable_seqscan=false is session specific, so
> it seems like I can use it with this query alone; but it sounds like even if
> that works, it's a bad practice. (Is that true?)
Yep
> My effective_cache_size is 153kB
Hm. Well, it looks like setting enable_seqscan=false is session specific,
so it seems like I can use it with this query alone; but it sounds like
even if that works, it's a bad practice. (Is that true?)
My effective_cache_size is 153kB
On Mon, Jan 30, 2012 at 12:50 PM, Claudio Freire wrote:
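
If the override really is wanted for just this one query, one way to scope it
more tightly than the whole session (a sketch, using the same query):

BEGIN;
SET LOCAL enable_seqscan = off;   -- reverts automatically at COMMIT/ROLLBACK
SELECT private, COUNT(block_id) FROM blocks
WHERE created > 'yesterday' AND shared IS FALSE GROUP BY private;
COMMIT;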
On Mon, Jan 30, 2012 at 5:35 PM, Alessandro Gagliardi
wrote:
> To answer your (non-)question about Heroku, it's a cloud service, so I don't
> host PostgreSQL myself. I'm not sure how much I can mess with things like
> GUC since I don't even have access to the "postgres" database on the server.
> I
Well that was a *lot* faster:
"HashAggregate (cost=156301.82..156301.83 rows=2 width=26) (actual
time=2692.806..2692.807 rows=2 loops=1)"
" -> Bitmap Heap Scan on blocks (cost=14810.54..155828.95 rows=472871
width=26) (actual time=289.828..1593.893 rows=575186 loops=1)"
"Recheck Cond:
On Mon, Jan 30, 2012 at 9:46 AM, Saurabh wrote:
> Thank you for the information.
>
> Schema of table is:
>
> ID bigint
> company_name text
> data_set text
> time timestamp
> Date date
>
> Length of company_name is
On Mon, Jan 30, 2012 at 4:13 PM, Alessandro Gagliardi
wrote:
> So, here's the query:
>
> SELECT private, COUNT(block_id) FROM blocks WHERE created > 'yesterday' AND
> shared IS FALSE GROUP BY private
>
> What confuses me is that though this is a largish table (millions of rows)
> with constant wri
So, here's the query:
SELECT private, COUNT(block_id) FROM blocks WHERE created > 'yesterday' AND
shared IS FALSE GROUP BY private
What confuses me is that though this is a largish table (millions of rows)
with constant writes, the query is over indexed columns of types timestamp
and boolean so I
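
For context, a sketch of the indexes that sentence implies (the index names
are guesses, not taken from the thread) and how to see whether the planner
actually uses them:

CREATE INDEX blocks_created_idx ON blocks (created);   -- timestamp column
CREATE INDEX blocks_shared_idx  ON blocks (shared);    -- boolean column

EXPLAIN ANALYZE
SELECT private, COUNT(block_id) FROM blocks
WHERE created > 'yesterday' AND shared IS FALSE GROUP BY private;

Whether those indexes get used depends on the cost settings discussed above
(random_page_cost, enable_seqscan) and on how selective the predicates are.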
On 1/30/2012 3:27 AM, Saurabh wrote:
Hi all,
I am using Postgresql database for our project and doing some
performance testing. We need to insert millions of record with indexed
columns. We have 5 columns in table. I created index on integer only
then performance is good but when I created index
On Mon, Jan 30, 2012 at 2:46 PM, Saurabh wrote:
> max_connections = 100
> shared_buffers = 32MB
> wal_buffers = 1024KB
> checkpoint_segments = 3
That's a default config isn't it?
You'd do well to try and optimize it for your system. The defaults are
really, reeallly conservative.
You should als
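
Purely as an illustration of "less conservative" (the right values depend
entirely on your RAM and workload, so treat these numbers as placeholders):

shared_buffers = 512MB        # default 32MB is tiny; ~25% of RAM is a common rule of thumb
wal_buffers = 16MB            # up from 1024KB
checkpoint_segments = 32      # default 3 forces very frequent checkpoints during bulk loads
effective_cache_size = 2GB    # roughly shared_buffers plus the OS file cache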
Thank you for the information.
Schema of table is:
ID bigint
company_name text
data_set text
time timestamp
Date date
Length of company_name is not known so it is of datatype text. I need
to build the index on company_name.
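
That schema rendered as DDL (a sketch: the table name and index name are made
up, the columns follow the list above):

CREATE TABLE company_data (
    id           bigint,
    company_name text,
    data_set     text,
    "time"       timestamp,
    "Date"       date
);

CREATE INDEX company_data_company_name_idx ON company_data (company_name);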
On Mon, Jan 30, 2012 at 1:27 AM, Saurabh wrote:
> Hi all,
>
> I am using Postgresql database for our project and doing some
> performance testing. We need to insert millions of record with indexed
> columns. We have 5 columns in table. I created index on integer only
> then performance is good but
Update: The main stored function in question and all of its sub
sub-functions were recoded to new pure sql functions.
I then stub tested the sub functions sql vs. plpgsql.
Here were the results for new sql vs old plpgsql:
Individual sub functions tested 20-30% faster
But the main function call
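
A sketch of that kind of stub test; the function names are hypothetical,
since the thread does not give them:

\timing on
-- Old plpgsql sub-function vs. the pure-SQL rewrite, over the same input:
SELECT sum(my_subfunc_plpgsql(n)) FROM generate_series(1, 100000) AS n;
SELECT sum(my_subfunc_sql(n))     FROM generate_series(1, 100000) AS n;
\timing off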
On Mon, Jan 30, 2012 at 6:27 AM, Saurabh wrote:
> Hi all,
>
> I am using Postgresql database for our project and doing some
> performance testing. We need to insert millions of record with indexed
> columns. We have 5 columns in table. I created index on integer only
> then performance is good but
On Sat, Jan 28, 2012 at 19:11, Jayashankar K B
wrote:
> But we are stumped by the amount of CPU Postgres is eating up.
You still haven't told us *how* slow it actually is and how fast you
need it to be? What's your database layout like (tables, columns,
indexes, foreign keys)? What do the queries
Hi all,
I am using Postgresql database for our project and doing some
performance testing. We need to insert millions of record with indexed
columns. We have 5 columns in table. I created index on integer only
then performance is good but when I created index on text column as
well then the perfor