Hi,
Many thanks for all your thoughts and advice. With just 2GB of RAM, no
change to the hard disk (still SATA) but proper tuning of PostgreSQL
(still 7.4) and aggressive normalization to shrink row width, I have
managed to get suitable performance, with, when fully cached, queries on
a 5 million r
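A sketch of the kind of row-width normalization described above: repeated free-text columns are replaced by small integer keys into lookup tables. All table and column names here are hypothetical, not taken from the thread.

```sql
-- Hypothetical example: move a repeated text column out of the wide table.
CREATE TABLE city (
    id   serial PRIMARY KEY,
    name text NOT NULL UNIQUE
);

-- The main table now stores a 4-byte key instead of a variable-length string.
CREATE TABLE address (
    id      serial PRIMARY KEY,
    city_id integer NOT NULL REFERENCES city (id)
    -- ... remaining columns ...
);
```

Each row shrinks by roughly the average string length minus 4 bytes, so a larger fraction of the table fits in the buffer cache.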
On Tue, Sep 06, 2005 at 11:32:00AM -0400, Tom Lane wrote:
> "Brian Choate" <[EMAIL PROTECTED]> writes:
> > We are seeing a very strange behavior from postgres. For one of our very =
> > common tasks we have to delete records from a table of around 500,000 =
> > rows. The delete is by id which is th
On Thu, Sep 01, 2005 at 06:05:43PM -0400, Ron wrote:
> > Selection from the database is, hence the indexes.
>
> A DB _without_ indexes that fits into RAM during ordinary operation
> may actually be faster than a DB _with_ indexes that does
> not. Fitting the entire DB into RAM during ordinary o
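Whether tables and indexes fit in RAM can be estimated from the catalogs: `pg_class.relpages` (kept up to date by VACUUM/ANALYZE) counts 8 kB pages for tables and indexes alike. The `address%` pattern below is only an assumption matching the table discussed in this thread.

```sql
-- Approximate on-disk size of the address table and its indexes.
SELECT relname, relpages, relpages * 8 / 1024 AS approx_mb
FROM pg_class
WHERE relname LIKE 'address%'
ORDER BY relpages DESC;
```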
On Thu, Sep 01, 2005 at 11:52:45PM +0200, Steinar H. Gunderson wrote:
> On Thu, Sep 01, 2005 at 10:13:59PM +0100, Matthew Sackman wrote:
> > Well that's the thing - on the queries where it decides to use the index
> > it only reads at around 3MB/s and the CPU is maxed
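The standard way to confirm which plan is chosen and where the time goes is EXPLAIN ANALYZE; the query below is a hypothetical stand-in for the ones discussed.

```sql
EXPLAIN ANALYZE
SELECT * FROM address WHERE city = 'London';
-- Compare estimated vs. actual row counts and total runtime; to time the
-- alternative sequential-scan plan, try: SET enable_indexscan = off;
```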
On Thu, Sep 01, 2005 at 02:26:47PM -0700, Jeff Frost wrote:
> >Well I've got 1GB of RAM, but from analysis of its use, a fair amount
> >isn't being used. About 50% is actually in use by applications and about
> >half of the rest is cache and the rest isn't being used. Has this to do
> >with the max
On Thu, Sep 01, 2005 at 10:54:45PM +0200, Arjen van der Meijden wrote:
> On 1-9-2005 19:42, Matthew Sackman wrote:
> >Obviously, to me, this is a problem, I need these queries to be under a
> >second to complete. Is this unreasonable? What can I do to make this "go
> &g
On Thu, Sep 01, 2005 at 10:09:30PM +0200, Steinar H. Gunderson wrote:
> > "address_city_index" btree (city)
> > "address_county_index" btree (county)
> > "address_locality_1_index" btree (locality_1)
> > "address_locality_2_index" btree (locality_2)
> > "address_pc_bottom_index"
On Thu, Sep 01, 2005 at 02:04:54PM -0400, Merlin Moncure wrote:
> > Any help most gratefully received (even if it's to say that I should
> be
> > posting to a different mailing list!).
>
> this is the correct list. Did you run vacuum/analyze, etc.?
> Please post vacuum analyze times.
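The suggested maintenance step, as it would be run in psql (the relation name is assumed from later in the thread):

```sql
VACUUM ANALYZE address;
-- VERBOSE reports per-table and per-index page counts,
-- which is useful when posting timings to the list:
VACUUM VERBOSE ANALYZE address;
```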
On Thu, Sep 01, 2005 at 02:47:06PM -0400, Tom Lane wrote:
> Matthew Sackman <[EMAIL PROTECTED]> writes:
> > Obviously, to me, this is a problem, I need these queries to be under a
> > second to complete. Is this unreasonable?
>
> Yes. Pulling twenty thousand rows a
Hi,
I'm having performance issues with a table consisting of 2,043,133 rows. The
schema is:
\d address
      Table "public.address"
    Column    |   Type    | Modifiers
--------------+-----------+-----------
 postcode_top | character