Hi, Rob.
I tried bumping the effective_cache_size. It made no difference.
My latest attempt at forcing PostgreSQL to use the indexes involved two
loops: one to loop over the stations, the other to extract the station data
from the measurement table. The outer loop executes in 1.5 seconds. The
inner
On Wed, May 12, 2010 at 1:45 AM, venu madhav wrote:
> [Venu] Yes, autovacuum is running every hour; I can see that in the log
> messages. All autovacuum settings are disabled except the hourly run. This
> application runs on an embedded box, so I can't change the para
On Tue, May 11, 2010 at 2:00 PM, Carlo Stonebanks
wrote:
> I am concerned that there is such a lag between all the index and function
> scans' start/complete times and the nested loops starting. I have
> reformatted the SLOW PLAN results below to make them easier to read. Can you
> tell me if t
Hi, Tom.
Yes, that is what happened, making the tests rather meaningless and giving
me the false impression that the indexes were being used. They were, but only
because of cached results. When multiple users make different queries,
performance will return to ~80 s per query.
I also tried Ke
On Tue, May 25, 2010 at 4:26 PM, David Jarvis wrote:
> shared_buffers = 1GB
> temp_buffers = 32MB
> work_mem = 32MB
> maintenance_work_mem = 64MB
> effective_cache_size = 256MB
Shouldn't effective_cache_size be significantly larger?
--
Rob Wultsch
wult...@gmail.com
--
Sent via pgsql-performance mailing list
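For context on Rob's question: effective_cache_size does not allocate anything; it only tells the planner how much of the database it may assume is cached between shared_buffers and the OS page cache. A sketch of testing a larger value per session (the 3GB figure is an assumption for a 4 GB box, not a value from the thread):

```sql
-- effective_cache_size is advisory: it changes plan choices, not memory use,
-- so it can safely be raised toward shared_buffers + expected OS cache.
SET effective_cache_size = '3GB';  -- assumed value for a 4 GB machine
SHOW effective_cache_size;         -- confirm the session setting
-- plans issued after this point are costed with the new estimate
```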
David Jarvis writes:
>> It sounds as though the active portion of your database is pretty
>> much cached in RAM. True?
> I would not have thought so; there are seven tables, each with 39 to 43
> million rows as: [ perhaps 64 bytes per row ]
> The machine has 4GB of RAM, donated to PG as follows:
Hi, Kevin.
Thanks for the response.
> It sounds as though the active portion of your database is pretty
> much cached in RAM. True?
>
I would not have thought so; there are seven tables, each with 39 to 43
million rows as:
CREATE TABLE climate.measurement (
id bigserial NOT NULL,
taken date
On Tue, May 25, 2010 at 5:58 AM, Konrad Garus wrote:
> 2010/5/24 Merlin Moncure :
>
>> *) a page fault to disk is a much bigger deal than a fault to pg cache
>> vs os/ cache.
>
> That was my impression. That's why I did not touch our 2/16 GB setting
> right away. I guess that 2 more gigabytes in OS cache is better than 2
> more (duplicated) gigabytes in PG shared buffers.
On Tue, May 25, 2010 at 11:27:08AM -0700, Scott Carey wrote:
> On May 21, 2010, at 8:26 PM, Matthew Wakeling wrote:
> > On Fri, 21 May 2010, Richard Yen wrote:
> >> Any ideas why the query planner chooses a different query plan when using
> >> prepared statements?
> >
> > This is a FAQ. Preparing
David Jarvis wrote:
> The value for *random_page_cost* was at 2.0; reducing it to 1.1
> had a massive performance improvement (nearly an order of
> magnitude). While the results now return in 5 seconds (down from
> ~85 seconds)
It sounds as though the active portion of your database is pretty
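Lowering random_page_cost tells the planner that a random page fetch costs barely more than a sequential one, which is realistic when the active data is cached, and it is what makes index scans win. A sketch of the session-level experiment David describes (the table and the taken column appear in the DDL quoted elsewhere in the thread; the date range is made up):

```sql
-- Default random_page_cost is 4.0; this thread started from 2.0.
-- 1.1 says "random I/O is nearly free", which favors index scans.
SET random_page_cost = 1.1;
EXPLAIN ANALYZE
SELECT count(*) FROM climate.measurement
WHERE taken BETWEEN date '1990-01-01' AND date '1990-12-31';
```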
On May 21, 2010, at 8:26 PM, Matthew Wakeling wrote:
> On Fri, 21 May 2010, Richard Yen wrote:
>> Any ideas why the query planner chooses a different query plan when using
>> prepared statements?
>
> This is a FAQ. Preparing a statement makes Postgres create a plan, without
> knowing the value
Sorry Jesper, I thought I had mentioned it: our dataset is 18 GB.
Pedro Axelrud
http://mailee.me
http://softa.com.br
http://flavors.me/pedroaxl
On Tue, May 25, 2010 at 03:21, Jesper Krogh wrote:
> Option 2:
>> App Server and Postgres: Dual Xeon 5520 quad core with 12GB ram and 2x
>> 146GB 15k
Have you read this?
http://blog.endpoint.com/2008/12/why-is-my-function-slow.html
99% of the 'function is slow' problems are caused by this.
Have you checked the difference between explain and prepare + explain execute?
>>> Tyler Hildebrandt 05/25/10 4:59 AM >>>
We're using a function that
On Tue, May 25, 2010 at 10:55 AM, Merlin Moncure wrote:
> On Tue, May 25, 2010 at 9:41 AM, Tyler Hildebrandt
> wrote:
>>> I think, your problem is here:
>>>
>>> SELECT INTO current_user * FROM
>>> fn_medirota_validate_rota_master(in_currentuser);
>>>
>>>
>>> The planner has no knowledge about how
On Tue, May 25, 2010 at 9:41 AM, Tyler Hildebrandt
wrote:
>> I think, your problem is here:
>>
>> SELECT INTO current_user * FROM
>> fn_medirota_validate_rota_master(in_currentuser);
>>
>>
>> The planner has no knowledge about how many rows this function returns
>> if it doesn't know the actual par
On Tue, May 25, 2010 at 02:04:07PM +, Juan Pablo Sandoval Rivera wrote:
> Please give me your recommendations on the configuration...
The subject line of this message said you're trying to run PostgreSQL 7.3.7. I
hope that's a typo, and you really mean 8.3.7, in which case this suggestion
boils
Good day list
I would appreciate some comments to the following:
I have a Dell PowerEdge SC1420 server with 2 GB of RAM, one 73 GB SCSI
Ultra320 hard disk, and 2 Xeon CPUs (4
cache), with PGSQL 7.3.7
running GNU/Linux Red Hat Enterprise 4.0 for 32-bit (kernel 2.6.9-5.ELsmp)
Nahant (ES)
and another server
> I think, your problem is here:
>
> SELECT INTO current_user * FROM
> fn_medirota_validate_rota_master(in_currentuser);
>
>
> The planner has no knowledge about how many rows this function returns
> if it doesn't know the actual parameter. Because of this, the query is
> forced into a seq-scan. Try to
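For set-returning functions, one common fix along these lines (a sketch; the function and table names are illustrative, not the originals from the thread) is to declare a row estimate, since without one the planner assumes a default of 1000 rows:

```sql
-- ROWS (available since 8.3) overrides the default estimate of 1000,
-- which is what pushes the surrounding query into a seq-scan.
CREATE OR REPLACE FUNCTION fn_get_user(p_name text)  -- hypothetical SRF
RETURNS SETOF users AS $$
    SELECT * FROM users WHERE username = p_name;
$$ LANGUAGE sql STABLE
ROWS 1;  -- tell the planner to expect a single row
```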
On Tuesday 25 May 2010 11:00:24 Joachim Worringen wrote:
> On 25.05.2010 10:49, Grzegorz Jaśkiewicz wrote:
> > Temporary tables are handled pretty much like regular tables. The
> > magic happens at the schema level: a new schema is set up per connection, so
> > that it can access its own temporary ta
In response to Tyler Hildebrandt :
> We're using a function that when run as a select statement outside of the
> function takes roughly 1.5s to complete whereas running an identical
> query within a function is taking around 55s to complete.
>
> select * from fn_medirota_get_staff_leave_summary(6
On 25.05.2010 11:38, Grzegorz Jaśkiewicz wrote:
> WAL does for the DB what journaling does for the FS.
> Plus it allows you to roll back (PITR).
> As for the RAM, it will stay in RAM as long as the OS decides to keep it in
> the RAM cache, and/or it's in the shared buffers memory.
Or until I commit the transaction
We're using a function that when run as a select statement outside of the
function takes roughly 1.5s to complete whereas running an identical
query within a function is taking around 55s to complete.
We are lost as to why placing this query within a function as opposed to
substituting the variab
2010/5/24 Merlin Moncure :
> *) a page fault to disk is a much bigger deal than a fault to pg cache
> vs os/ cache.
That was my impression. That's why I did not touch our 2/16 GB setting
right away. I guess that 2 more gigabytes in OS cache is better than 2
more (duplicated) gigabytes in PG shared buffers.
WAL does for the DB what journaling does for the FS.
Plus it allows you to roll back (PITR).
As for the RAM, it will stay in RAM as long as the OS decides to keep it in
the RAM cache, and/or it's in the shared buffers memory.
Unless you have a lot of doubt about the two, I don't think it makes
too much sense
On 25.05.2010 11:15, Thom Brown wrote:
2010/5/25 Joachim Worringen:
And, is there anything like RAM-only tables? I really don't care whether the
staging data is lost on the rare event of a machine crash, or whether the
query crashes due to lack of memory (I make sure there's enough w/o paging)
2010/5/25 Joachim Worringen :
> On 25.05.2010 10:49, Grzegorz Jaśkiewicz wrote:
>
>> Temporary tables are handled pretty much like regular tables. The
>> magic happens at the schema level: a new schema is set up per connection, so
>> that it can access its own temporary tables.
>> Temporary tables a
On 25.05.2010 10:49, Grzegorz Jaśkiewicz wrote:
> Temporary tables are handled pretty much like regular tables. The
> magic happens at the schema level: a new schema is set up per connection, so
> that it can access its own temporary tables.
> Temporary tables also are not autovacuumed.
And that's pretty m
Temporary tables are handled pretty much like regular tables. The
magic happens at the schema level: a new schema is set up per connection, so
that it can access its own temporary tables.
Temporary tables also are not autovacuumed.
And that's pretty much all of the differences.
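A practical consequence of the autovacuum point above: statistics on a temporary table must be gathered by hand. A sketch (the table and column names are illustrative):

```sql
-- Temporary tables are private to the session and skipped by autovacuum,
-- so run ANALYZE manually after loading to give the planner real statistics.
CREATE TEMP TABLE staging AS
    SELECT * FROM big_table WHERE created >= date '2010-01-01';
ANALYZE staging;
```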
Greetings,
in
http://archives.postgresql.org/message-id/1056648218.7041.11.ca...@jester,
it is stated that the performance of temporary tables is "the same as a
regular table but without
WAL on the table contents.".
I have a datamining-type application which makes heavy use of temporary
tables