Title: RE: [PERFORM] postgres 7.4 at 100%
Creating indexes on a table adds insert overhead: every index has to be updated on each insert, so the cost grows with the number of indexes that have to be populated. From a query standpoint, indexes are a godsend in most cases.
Duane
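Duane's trade-off is easy to see end-to-end. The sketch below is a stand-in, not the thread's setup: it uses SQLite via Python's stdlib sqlite3 (so it runs without a Postgres server), and the tables no_idx/with_idx are invented for the demonstration.

```python
# Demonstrate that every index on a table adds work to each INSERT:
# the same rows go into a table with no indexes and one with three.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE no_idx (a INTEGER, b TEXT, c TEXT)")
cur.execute("CREATE TABLE with_idx (a INTEGER, b TEXT, c TEXT)")
for col in ("a", "b", "c"):
    # Each of these must be maintained on every insert into with_idx.
    cur.execute("CREATE INDEX idx_%s ON with_idx (%s)" % (col, col))

rows = [(i, "b%d" % i, "c%d" % i) for i in range(50000)]

t0 = time.perf_counter()
cur.executemany("INSERT INTO no_idx VALUES (?, ?, ?)", rows)
t_plain = time.perf_counter() - t0

t0 = time.perf_counter()
cur.executemany("INSERT INTO with_idx VALUES (?, ?, ?)", rows)
t_indexed = time.perf_counter() - t0

print("no indexes:    %.3fs" % t_plain)
print("three indexes: %.3fs" % t_indexed)
```

On the query side the same indexes turn full scans into index searches, which is the "godsend" half of the trade.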
-Original Message-
From: Chris Cheston [mailto:[EMAIL PROTECTED]]
I see - thanks very much. I created an index for column 'oid' which I
was using in a WHERE. So rule of thumb - create an index for column(s)
which I use in WHERE queries.
So to speak. They can also sometimes assist in sorting. The OID column
is special. I suggest adding a unique index to that column.
I see - thanks very much. I created an index for column 'oid' which I
was using in a WHERE. So rule of thumb - create an index for column(s)
which I use in WHERE queries.
Thanks,
Chris
On Wed, 30 Jun 2004 15:30:52 +0800, Christopher Kings-Lynne
<[EMAIL PROTECTED]> wrote:
>
>
> > Why would I or would I not create multiple indexes in a table? I
> > created another index in the same table and it's improved performance
> > even more.
Why would I or would I not create multiple indexes in a table? I
created another index in the same table and it's improved performance
even more.
You create indexes when you need indexes. Indexes are most helpful when
they match the WHERE clause of your selects.
So, if you commonly do one query that filters on a particular column, an
index on that column is the first thing to try.
Oh my, creating an index has absolutely reduced the time it takes to
query from around 700 ms to less than 1 ms!
Thanks so much for all your help. You've saved me!
One question:
Why would I or would I not create multiple indexes in a table? I
created another index in the same table and it's improved performance
even more.
Is the from field nullable? If not, try create index calllogs_from on
calllogs ("from"); (from is a reserved word, so it needs the double
quotes) and then do an explain analyze of your query.
Gavin
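A runnable sketch of what Gavin is suggesting, with two caveats: it substitutes SQLite (via Python's stdlib sqlite3) for Postgres so it works standalone, and the rows are invented. It also shows why the column name has to be double-quoted: from is a reserved word in SQL, in both the CREATE INDEX and the WHERE clause.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# "from" is reserved, so the identifier is double-quoted throughout.
cur.execute('CREATE TABLE calllogs (id INTEGER, "from" TEXT)')
cur.executemany(
    "INSERT INTO calllogs VALUES (?, ?)",
    [(i, "caller%d" % (i % 100)) for i in range(10000)],
)

query = """SELECT id FROM calllogs WHERE "from" = 'caller7'"""

def plan(sql):
    # The last column of EXPLAIN QUERY PLAN describes the access path.
    return cur.execute("EXPLAIN QUERY PLAN " + sql).fetchone()[-1]

before = plan(query)  # a full table scan
cur.execute('CREATE INDEX calllogs_from ON calllogs ("from")')
after = plan(query)   # an index search
print(before)
print(after)
```

The before/after plans are SQLite's analogue of the Seq Scan vs. Index Scan difference EXPLAIN ANALYZE would show in Postgres.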
Chris Cheston wrote:
ok i just vacuumed it and it's taking slightly longer now to execute
(only about 8 ms longer, to around 701 ms).
Not using indexes for calllogs(from)... should I? The values for
calllogs(from) are not unique (sorry if I'm misunderstanding your
point).
Chris Cheston <[EMAIL PROTECTED]> writes:
> Wow, this simple query is taking 676.24 ms to execute! it only takes
> 18 ms on our other machine.
> This table has 150,000 rows. Is this normal?
> live=# explain analyze SELECT id FROM calllogs WHERE "from" = 'you';
>
On Tue, Jun 29, 2004 at 01:37:30 -0700,
Chris Cheston <[EMAIL PROTECTED]> wrote:
> ok i just vacuumed it and it's taking slightly longer now to execute
> (only about 8 ms longer, to around 701 ms).
>
> Not using indexes for calllogs(from)... should I? The values for
> calllogs(from) are not unique (sorry if I'm misunderstanding your
> point).
ok i just vacuumed it and it's taking slightly longer now to execute
(only about 8 ms longer, to around 701 ms).
Not using indexes for calllogs(from)... should I? The values for
calllogs(from) are not unique (sorry if I'm misunderstanding your
point).
Thanks,
Chris
On Tue, 29 Jun 2004 16:21:01
live=# explain analyze SELECT id FROM calllogs WHERE "from" = 'you';
QUERY PLAN
----------------------------------------------------------------
Seq Scan on calllogs (cost=0.00..136.11 rows=24 width=4) (actual
Wow, this simple query is taking 676.24 ms to execute! it only takes
18 ms on our other machine.
This table has 150,000 rows. Is this normal?
no, the machine is not running software RAID. Anyone have any ideas
next as to what I should do to debug this? I'm really wondering if the
Linux OS runnin
Frank,
> I understand tuning PG is almost an art form, yet it should be based on
> actual usage patterns, not just by system dimensions, don't you agree?
Well, it's both. It's more that available RAM determines your *upper* limit;
that is, on Linux, you don't really want to have more than 20%
On Mon, 2004-06-28 at 14:40, Josh Berkus wrote:
> As one of the writers of that article, let me point out:
>
> " -- Medium size data set and 256-512MB available RAM: 16-32MB (2048-4096)
> -- Large dataset and lots of available RAM (1-4GB): 64-256MB (8192-32768) "
>
> While this is probably a lit
Frank,
> Doug said the same, yet the PG Tuning article recommends not make this
> too large as it is just temporary used by the query queue or so. (I
> guess the system would benefit using more memory for file system cache)
As one of the writers of that article, let me point out:
" -- Medium size data set and 256-512MB available RAM: 16-32MB (2048-4096)
-- Large dataset and lots of available RAM (1-4GB): 64-256MB (8192-32768) "
Tom,
> So while he surely should not go back to 40, it seems there's another
> factor involved here that we've not recognized yet.
I'd agree. Actually, the first thing I'd do, were it my machine, is reboot it
and run memtest86 overnight. CPU thrashing like that may indicate bad RAM.
If the
Frank Knobbe <[EMAIL PROTECTED]> writes:
> On Sun, 2004-06-27 at 00:33, Christopher Kings-Lynne wrote:
>>> I upped effective_cache to 16000 KB and I could only up the
>>> shared_buffers to 3000. Anything more and postgres would not start.
>> You need to greatly increase the shared memory max setting on your
>> machine so that you can use, at the very least, 10000 shared buffers.
On Sun, 2004-06-27 at 00:33, Christopher Kings-Lynne wrote:
> > I upped effective_cache to 16000 KB and I could only up the
> > shared_buffers to 3000. Anything more and postgres would not start.
>
> You need to greatly increase the shared memory max setting on your
> machine so that you can use, at the very least, 10000 shared buffers.
I upped effective_cache to 16000 KB and I could only up the
shared_buffers to 3000. Anything more and postgres would not start.
You need to greatly increase the shared memory max setting on your
machine so that you can use, at the very least, 10000 shared buffers.
Chris
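The ceiling Christopher refers to is the kernel's SHMMAX limit on a single shared memory segment. A sketch of the two knobs involved on Linux; the sizes here are illustrative assumptions, not recommendations for this particular box:

```
# /etc/sysctl.conf -- raise the shared memory ceiling (value in bytes).
# 128MB shown; it must cover shared_buffers plus some overhead.
# Apply with: sysctl -p   (or echo the value into /proc/sys/kernel/shmmax)
kernel.shmmax = 134217728

# postgresql.conf -- shared_buffers is counted in 8KB pages in 7.4,
# so 10000 pages is roughly 80MB of shared memory.
shared_buffers = 10000
```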
---
Date: Sat, 26 Jun 2004 07:11:49 -0700
Subject: Re: [PERFORM] postgres 7.4 at 100%
To: Doug McNaught <[EMAIL PROTECTED]>
Cc: [EMAIL PROTECTED]
Hello,
Not to mention upping your effective_cache.
Doug McNaught wrote:
Chris Cheston <[EMAIL PROTECTED]> writes:
shared_buffers = 40 # min 16, at least max_connections*2, 8KB each
This is ridiculously low for any kind of production server. Try
something like 5000-10000 for a start.
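One footnote on "upping your effective_cache" (effective_cache_size): like shared_buffers, it is measured in 8KB pages rather than KB, so a setting of 16000 is really about 125MB. It also allocates nothing; it only tells the planner how much of the database the OS file cache can be expected to hold. An illustrative value, assuming roughly 512MB of RAM acting as file cache:

```
# postgresql.conf -- planner hint only, not an allocation;
# units are 8KB pages: 65536 x 8KB = 512MB
effective_cache_size = 65536
```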
Chris Cheston <[EMAIL PROTECTED]> writes:
> shared_buffers = 40 # min 16, at least max_connections*2, 8KB each
This is ridiculously low for any kind of production server. Try
something like 5000-10000 for a start.
-Doug
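To put Doug's numbers in context: shared_buffers in 7.4 is measured in 8KB disk pages (assuming the default block size), so the settings mentioned in this thread translate to memory like this:

```
shared_buffers = 40      #    40 x 8KB =  320KB  (the default -- tiny)
shared_buffers = 3000    #  3000 x 8KB =  ~24MB
shared_buffers = 10000   # 10000 x 8KB =  ~80MB
```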
Hi Richard,
Thanks so much for replying. Pls see below.
Thanks in advance for any advice,
Chris
> People are going to need more information. Are you talking about
> CPU/disk IO/memory?
## CPU is at 100%.
>
> > My settings are default on both boxes I think.
>
> Doubtful - PG crawls with the defaults.
Chris Cheston wrote:
Hi all,
I was running Postgres 7.3 and it was running at about 15% with my
application. On Postgres 7.4 on another box, it was running at 100%...
People are going to need more information. Are you talking about
CPU/disk IO/memory?
My settings are default on both boxes I think
Hi all,
I was running Postgres 7.3 and it was running at about 15% with my
application. On Postgres 7.4 on another box, it was running at 100%...
My settings are default on both boxes I think.
There are only about 20 inserts per second, which is really low.
Anyone have any ideas as to something I could try?