Joe Uhl wrote:
I realize there are people who discourage looking at Dell, but i've been
very happy with a larger ball of equipment we ordered recently from
them. Our database servers consist of a PowerEdge 2950 connected to a
PowerVault MD1000 with a 1 meter SAS cable.
We have a similar piec
Alvaro Herrera wrote:
Ron St-Pierre wrote:
Okay, here's our system:
postgres 8.1.4
Upgrade to 8.1.10
Any particular fixes in 8.1.10 that would help with this?
Here's the table information:
The table has 140,000 rows, 130 columns (mostly NUMERIC), 60 indexes.
Gregory Stark wrote:
"Ron St-Pierre" <[EMAIL PROTECTED]> writes:
We vacuum only a few of our tables nightly, this one is the last one because it
takes longer to run. I'll probably re-index it soon, but I would appreciate any
advice on how to speed up the vacuum p
Tom Lane wrote:
Here is your problem:
vacuum_cost_delay = 200
If you are only vacuuming when nothing else is happening, you shouldn't
be using vacuum_cost_delay at all: set it to 0. In any case this value
is probably much too high. I would imagine that if you watch the
machine while
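Tom's advice amounts to a one-line configuration change. A minimal sketch (postgresql.conf; the alternative value of 10 is an assumption for the case where vacuums must coexist with light daytime load, not something Tom specified):

```
# postgresql.conf -- only if VACUUM runs in a dedicated maintenance window
vacuum_cost_delay = 0     # disable cost-based vacuum delay entirely
# If vacuums must run alongside light daytime load, a small value
# (hypothetical) is still far below the 200 quoted above:
# vacuum_cost_delay = 10
```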
Bill Moran wrote:
In response to Ron St-Pierre <[EMAIL PROTECTED]>:
We vacuum only a few of our tables nightly, this one is the last one
because it takes longer to run. I'll probably re-index it soon, but I
would appreciate any advice on how to speed up the vacuum process (and
the db in general).
Okay, here's our system:
postgres 8.1.4
Linux version 2.4.21
We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, raid 4,
RHEL, postgres 8.1) and ported our old database over to it (single cpu,
2GB RAM, no raid, postgres 7.4). Our apps perform great on it, however
some queries are super slow. One function in particular, which used to
take 15-
Jim C. Nasby wrote:
On Wed, Jun 21, 2006 at 02:27:41PM -0700, Ron St-Pierre wrote:
We just purchased a new Dell PowerEdge 2800 (dual xeon, 8GB RAM, raid 4,
RHEL, postgres 8.1) and ported our old database over to it (single cpu,
RAID *4*?
oops, raid 5 (but we are getting good io
Jeffrey W. Baker wrote:
All these replies are really interesting, but the point is not that my
RAIDs are too slow, or that my CPUs are too slow. My point is that, for
long stretches of time, my database doesn't come anywhere near using the
capacity of the hardware. And I think that's odd and woul
Thanks for everyone's comments (Thomas, Steinar, Frank, Matt, William).
Right now I'm benchmarking the time it takes for each step
at the end of the day's update process, and then I am going to test a few things:
- dropping most indexes, and checking the full processing time to see if
there is any notice
Steinar H. Gunderson wrote:
On Tue, Aug 31, 2004 at 11:11:02AM -0700, Ron St-Pierre wrote:
We have a web based application with data that is updated daily. The
biggest bottleneck occurs when we try to update
one of the tables. This table contains 58,000 rows and 62 columns, and
EVERY column
Thomas F. O'Connell wrote:
What is the datatype of the id column?
The id column is INTEGER.
Ron
We have a web based application with data that is updated daily. The
biggest bottleneck occurs when we try to update
one of the tables. This table contains 58,000 rows and 62 columns, and
EVERY column is indexed. Every column is
queryable (?) by the users through the web interface so we are
relu
Chris Hoover wrote:
On Friday 23 April 2004 14:57, Ron St-Pierre wrote:
Does this apply to 7.3.4 also?
No it doesn't, I didn't look back through the thread far enough to see
what you were running. I tried it on 7.3.4 and none of the summary info
listed below was returned. FWIW one
Josh Berkus wrote:
Chris,
Sorry for the confusion here. I can't run any sort of vacuum during the day
due to performance hits. However, I have run vacuums at night. Several
nights a week I run a vacuumdb -f -z on all of the clusters. It can take
several hours to complete, but it does compl
Tom Lane wrote:
Josh Berkus <[EMAIL PROTECTED]> writes:
A better way to set this would be to run VACUUM VERBOSE ANALYZE right after
doing one of your update batches, and see how many dead pages are being
reclaimed, and then set max_fsm_pages to that # + 50% (or more).
Actually, since he'
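The procedure Josh describes can be sketched as follows. The page count below is hypothetical; the real number comes from the tail of your own VACUUM VERBOSE output (8.1-era Postgres, matching the versions discussed in this thread):

```sql
-- Run right after one of the update batches:
VACUUM VERBOSE ANALYZE emp;
-- The output ends with a summary line along the lines of:
--   INFO:  free space map contains 41000 pages in 59 relations
-- Take the reported page count, add ~50%, and set it in postgresql.conf:
--   max_fsm_pages = 61500     -- 41000 * 1.5
-- (requires a server restart on 8.1)
```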
Josh Berkus wrote:
Ron,
The emp table has 60 columns, all indexed, about two-thirds are numeric,
but they are not affected by this update. The other 50+ columns are
updated in the middle of the night and the amount of time that update
takes isn't a concern.
Well, I'd say that you have
I am using postgres 7.4.1 and have a problem with a plpgsql function.
When I run the function on the production server it takes approx 33
minutes to run. I dumped the DB and copied it to a similarly configured
box and ran the function and it ran in about 10 minutes. Can anyone
offer advice on t
I need to get 200 sets of the most recent data from a table for further
processing, ordered by payDate. My
current solution is to use subselects to:
1 - get a list of unique data
2 - get the 200 most recent records (first 200 rows, sorted descending)
3 - sort them in ascending order
SELECT SSS.*
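A minimal sketch of the three steps above. Only the `SSS` alias and the `payDate` column come from the post; the table name `payments` and everything else here is assumed for illustration:

```sql
-- Hypothetical table "payments(id, payDate, ...)"
SELECT SSS.*
FROM (
    SELECT DISTINCT p.*             -- 1: a list of unique data
    FROM payments p
    ORDER BY p.payDate DESC         -- 2: most recent first,
    LIMIT 200                       --    keeping the first 200 rows
) AS SSS
ORDER BY SSS.payDate ASC;           -- 3: re-sort them in ascending order
```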