On Mon, Aug 20, 2007 at 10:17:14PM -0700, Ben Perrault wrote:
> -> Nested Loop (cost=1.00..106559138.00 rows=336 width=137)
This sounds very much like you're trying to force the planner. Did you set
enable_nestloop=false or something? Are there any other non-default settings?
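(A quick way to answer that last question, as a general diagnostic rather than anything from the original reply: in 8.2 you can ask the server itself which settings differ from their defaults.)

    select name, setting, source
    from pg_settings
    where source not in ('default', 'override');

Anything that comes back with source = 'configuration file' or 'session' is worth including when posting a plan like the one above.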
On 8/21/07, Stephen Davies <[EMAIL PROTECTED]> wrote:
> I have a PostgreSQL 8.2.4 table with some seven million rows.
>
> The psql query:
>
> select count(rdate),rdate from reading where sensor_id in
> (1137,1138,1139) group by rdate order by rdate desc limit 1;
>
> takes a few seconds but:
>
> select count(rdate),rdate from reading where sensor_id in (1137,1138,1139) …
Hi,
I recently inherited a very old (PostgreSQL 7.0.3) database, and have
migrated it to 8.2.4 but have run into a performance issue.
Basically, I did a dump and import into the new database, vacuumed and
created fresh indexes, and everything is working great except the following
type of query (…
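(Worth ruling out after any dump-and-restore, as a general note rather than something this poster confirmed: plain VACUUM reclaims space but does not gather planner statistics, so a freshly imported 8.2 database can pick terrible plans until ANALYZE has run.)

    VACUUM ANALYZE;   -- whole database: reclaim space and refresh statistics
    -- or, if space is not a concern, just the statistics:
    ANALYZE;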
I have a PostgreSQL 8.2.4 table with some seven million rows.
The psql query:
select count(rdate),rdate from reading where sensor_id in
(1137,1138,1139) group by rdate order by rdate desc limit 1;
takes a few seconds but:
select count(rdate),rdate from reading where sensor_id in
(1137,1138,1139) …
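(A standard next step for a case like this, not from the thread itself: run both variants under EXPLAIN ANALYZE and compare the plans.)

    explain analyze
    select count(rdate),rdate from reading where sensor_id in
    (1137,1138,1139) group by rdate order by rdate desc limit 1;

The actual row counts and node types in the output usually show immediately whether the planner is, say, choosing a sequential scan where a bitmap index scan on sensor_id was expected.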
On 8/21/07, Sachchida Ojha <[EMAIL PROTECTED]> wrote:
> Thanks to all of you. I have changed the settings and reloaded the
> config. Let me run this system overnight. I will update this forum if
> the new settings work for me. I am also asking management to upgrade the
> hardware.
You need to run vacuum …
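(For reference, a minimal manual cleanup pass might look like the sketch below; some_table is a placeholder name, and note that in 8.2 VACUUM FULL takes an exclusive lock on the table.)

    VACUUM VERBOSE ANALYZE some_table;  -- routine: reclaim space in place, update stats
    VACUUM FULL some_table;             -- only if badly bloated; locks the table
    REINDEX TABLE some_table;           -- VACUUM FULL can bloat indexes, so reindex after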
On 8/21/07, Sachchida Ojha <[EMAIL PROTECTED]> wrote:
> We have only two disks (40GB each). One disk is used for the OS, app
> server, and application. The second disk is used for the PostgreSQL
> database. It's a dual-CPU machine with 2 GB of RAM.
Even a single disk, with a battery backed caching controller, …
We have only two disks (40GB each). One disk is used for the OS, app
server, and application. The second disk is used for the PostgreSQL
database. It's a dual-CPU machine with 2 GB of RAM.
Regards
Sachchida
-Original Message-
From: Scott Marlowe [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 21, 2007 …
On 8/21/07, Sachchida Ojha <[EMAIL PROTECTED]> wrote:
> Is there any data corruption/damage to the database if we forcefully
> kill autovacuum using a cron job (if it is running longer than a
> predefined time frame)?
Oh, and I'd look at your I/O subsystem. You might want to look at
putting $300 ha…
On 8/21/07, Sachchida Ojha <[EMAIL PROTECTED]> wrote:
> Is there any data corruption/damage to the database if we forcefully
> kill autovacuum using a cron job (if it is running longer than a
> predefined time frame)?
Not really, but vacuum will just have to run that long again, plus some,
the next time …
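(On the mechanics, as a sketch rather than anything from the reply: the safe way to stop a long-running vacuum is pg_cancel_backend(), which sends the same interrupt as Ctrl-C and aborts only the current command.)

    -- the pid comes from pg_stat_activity (see the query later in this digest);
    -- 12345 below is a placeholder
    select pg_cancel_backend(12345);

kill -INT on the process id does the same thing from the shell; what you must never do is kill -9, which makes the postmaster reset shared memory and disconnect every backend.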
On 8/21/07, Sachchida Ojha <[EMAIL PROTECTED]> wrote:
> vacuum_cost_delay = 200
> vacuum_cost_page_hit = 1
> vacuum_cost_page_miss = 10
> vacuum_cost_page_dirty = 20
> vacuum_cost_limit …
Is there any data corruption/damage to the database if we forcefully
kill autovacuum using a cron job (if it is running longer than a
predefined time frame)?
Regards
Sachchida
-Original Message-
From: Sachchida Ojha
Sent: Tuesday, August 21, 2007 5:46 PM
To: 'Michael Glaesemann'
Cc: 'pgs…
Sachchida Ojha wrote:
> vacuum_cost_delay = 200
That is absurdly high. A setting of 10 is more likely to be useful.
> autovacuum_naptime = 3600
That is probably too high as well; particularly so if you have "updates
and deletes every second".
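(Putting the two corrections together, a sketch of the kind of values being suggested; the 10 comes from the reply above, the naptime is illustrative.)

    vacuum_cost_delay = 10      # was 200; 10 throttles vacuum without crippling it
    autovacuum_naptime = 60     # seconds between autovacuum runs; was 3600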
Our model is to provide black box solutions to our customers.
By black box, I mean the application system, web server, and database all
run on the same machine.
We are running our sensor on 10 assets (Windows machines) and sending
asset data to the server every 15 minutes. There are some other user …
On 8/21/07, Sachchida Ojha <[EMAIL PROTECTED]> wrote:
>
>
> Is there any way to stop the autovacuum if it is running longer than 10 min
> or so?
>
> Is it a good idea to kill autovacuum if it is running longer than expected?
>
> In my OLTP system, we are inserting, updating and deleting the data every second.
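(One way to put a number on "longer than 10 min", as a sketch: 8.2 turns stats_command_string on by default, so a running vacuum, manual or, in most setups, the autovacuum worker, should show up in pg_stat_activity.)

    select procpid,
           now() - query_start as running_for,
           current_query
    from pg_stat_activity
    where current_query ilike '%vacuum%'
      and procpid <> pg_backend_pid();  -- exclude this monitoring query itself

A cron job could run this and cancel the backend only once running_for crosses the threshold, instead of killing blindly.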
Total RAM in the system is 2GB
#---------------------------------------------------------------------------
# AUTOVACUUM PARAMETERS
#---------------------------------------------------------------------------
vacuum_cost_delay = 200            # 0-1000 milliseconds
vacuum_cost_page_hit = 1 …
On Aug 21, 2007, at 16:26, Sachchida Ojha wrote:
In my OLTP system, we are inserting, updating and deleting the data
every second.
Autovacuum started and never finished, slowing down the whole system.
There's the possibility that your autovacuum settings aren't
aggressive enough for your system …
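(In 8.2 terms, "more aggressive" usually means waking more often and triggering at smaller bloat thresholds. A sketch with illustrative values, not settings taken from the thread:)

    autovacuum = on
    autovacuum_naptime = 60               # check once a minute, not once an hour
    autovacuum_vacuum_threshold = 500     # dead-row floor before a table qualifies...
    autovacuum_vacuum_scale_factor = 0.1  # ...plus 10% of the table's rows
    autovacuum_vacuum_cost_delay = 10     # keep per-page throttling mild

With small, frequent vacuums the work never accumulates into the hours-long run described above.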
Is there any way to stop the autovacuum if it is running longer than 10
min or so?
Is it a good idea to kill autovacuum if it is running longer than
expected?
In my OLTP system, we are inserting, updating and deleting the data
every second.
Autovacuum started and never finished, slowing down the whole system.
Hi Frederick,
There is an article about tuning the performance of PostgreSQL on
Solaris at
http://tweakers.net/reviews/649/9
which is not exactly what you wanted, but it might help you.
I think that Solaris has the disk cache turned off by default, which
Linux does not, due to possible …