Rhaoni Chiu Pereira <[EMAIL PROTECTED]> writes:
>I still have performance problems with this sql:
It seems odd that all the joins are being done as nestloops. Perhaps
you shouldn't be forcing enable_seqscan off?
regards, tom lane
Hi List,
I still have performance problems with this sql:
SELECT /*+ */
ftnfco00.estado_cliente ,
ftcofi00.grupo_faturamento ,
SUM( DECODE( ftcofi00.atual_fatura, '-', -(NVL(ftnfpr00.qtde_duzias,0)),
'+', NVL(ftnfpr00.qtde_duzias,0), 0) ) ,
SUM( DECODE( ftcofi00.atual_fatura, '-'
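DECODE and NVL are Oracle built-ins, not PostgreSQL ones; a hedged sketch of the
usual translation for the fragment above, using CASE and COALESCE (the original
query is truncated, so the FROM/JOIN list is omitted here as well):

```sql
-- Illustrative translation only; column names taken from the fragment above.
SELECT ftnfco00.estado_cliente,
       ftcofi00.grupo_faturamento,
       SUM(CASE ftcofi00.atual_fatura
             WHEN '-' THEN -COALESCE(ftnfpr00.qtde_duzias, 0)
             WHEN '+' THEN  COALESCE(ftnfpr00.qtde_duzias, 0)
             ELSE 0
           END)
-- FROM/JOIN clauses as in the original (truncated above)
GROUP BY ftnfco00.estado_cliente, ftcofi00.grupo_faturamento;
```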
On Fri, Aug 29, 2003 at 01:19:35PM -0400, george young wrote:
> Does anyone know how and when the actual release will happen?
See the erserver project on gborg. It's out. There's a list, too;
any problems, send 'em there.
A
Andrew Sullivan 204-4141 Yonge Street
Lib
On Fri, 2003-08-29 at 11:33, William Yu wrote:
> Shridhar Daithankar wrote:
[snip]
> > I am sure. But in a 64-bit environment, Xeon is not the competition. It's
> > PA-RISC 8700, UltraSPARCs, Power series and, if possible, Itanium.
>
> Well, just because the Opteron is 64-bit doesn't mean it's direct
I was ecstatic to hear that postgresql.com is releasing the eRServer
replication project to postgresql.org as open source! I'm anxious
to get my hands on it -- actually I'm desperate: I'm under pressure to
produce a warm-failover server for our lab. I REALLY would like to
get my hands on this code
> >>I found I'm suffering from an effect detailed in a previous thread titled
> >>
> >>Does "correlation" mislead the optimizer on large tables?
> >
> >
> >I don't know about large tables, but this is a big problem and
> >something I'm going to spend some time validating later today. I
> >thin
Sean Chittenden wrote:
I found I'm suffering from an effect detailed in a previous thread titled
Does "correlation" mislead the optimizer on large tables?
I don't know about large tables, but this is a big problem and
something I'm going to spend some time validating later today. I
think Manfr
On Fri, Aug 29, 2003 at 05:13:52PM +0200, Alexander Priem wrote:
> Well, the intention is to hold every record that ever existed in the table.
> Therefore, records do not get deleted, but they get a date in the
> deleteddate field. This way, we can track what changes were made to the
> table(s).
>
Shridhar Daithankar wrote:
Just a guess here, but does a precompiled PostgreSQL for x86 vs. an
x86-64-optimized one make a difference?
>
> Opteron is one place on earth you can watch difference between 32/64
> bit on same machine. Can be handy at times..
I don't know yet. I tried building a 64-bit ke
> >If you want both the max and the min, then things are going to be a
> >bit more work. You are either going to want to do two separate
> >selects or join two selects or use subselects. If there aren't
> >enough prices per stock, the sequential scan might be fastest since
> >you only need to go th
Hi all,
I compared 2.6 with elevator=deadline. It did bring some improvement in
performance, but it still does not beat 2.4.
Attached are three files for details.
I also ran a simple insert benchmark to insert a million records in a simple
table with a small int and a varchar(30).
Here are th
Andrew Sullivan <[EMAIL PROTECTED]> writes:
> On Fri, Aug 29, 2003 at 11:34:13AM -0400, Bill Moran wrote:
>> Have any explanation as to why that function is so slow?
> Sorry, no. It might have to do with the planning, though.
Specifically, I'll bet he's getting an indexscan plan with one and not
Bruno Wolff III wrote:
If you want both the max and the min, then things are going to be a bit
more work. You are either going to want to do two separate selects
or join two selects or use subselects. If there aren't enough prices
per stock, the sequential scan might be fastest since you only need
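The "join two selects" option mentioned above can be sketched against the
day_ends table from this thread (a sketch, assuming the (stock_id, price_date)
schema discussed elsewhere in the thread):

```sql
-- One aggregate pass per extreme, joined back together on stock_id.
SELECT lo.stock_id, lo.min_date, hi.max_date
FROM (SELECT stock_id, min(price_date) AS min_date
      FROM day_ends GROUP BY stock_id) AS lo
JOIN (SELECT stock_id, max(price_date) AS max_date
      FROM day_ends GROUP BY stock_id) AS hi
     USING (stock_id);
```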
On Fri, Aug 29, 2003 at 11:34:13AM -0400, Bill Moran wrote:
> Have any explanation as to why that function is so slow?
Sorry, no. It might have to do with the planning, though. I believe
the function is planned the first time it is run. It may need to be
marked as "STABLE" in order to use any i
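Declaring volatility is part of CREATE FUNCTION as of 7.3. A minimal sketch,
with a hypothetical function name (the actual function in question is not shown
in this thread):

```sql
-- Hypothetical example: STABLE tells the planner the function returns the
-- same result for the same arguments within a single statement, which can
-- allow an indexscan plan instead of a pessimistic one.
CREATE FUNCTION lookup_user(varchar) RETURNS integer AS '
    SELECT id FROM user_login_table WHERE username = $1;
' LANGUAGE sql STABLE;
```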
Andrew Sullivan wrote:
On Fri, Aug 29, 2003 at 10:46:44AM -0400, Bill Moran wrote:
Postgres has to convert the text to a varchar before it can actually
do anything. It's possible (though I'm not sure) that it has to
do the conversion with each record it looks at.
It does? According to the docs,
Well, the intention is to hold every record that ever existed in the table.
Therefore, records do not get deleted, but they get a date in the
deleteddate field. This way, we can track what changes were made to the
table(s).
So if a record gets 'deleted', the field 'deleted' is set to today's date.
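A sketch of the soft-delete described above (table and key names are
hypothetical; only the deleted/deleteddate columns come from the thread):

```sql
-- Instead of DELETE: stamp the row. A NULL deleteddate marks a 'current' row.
UPDATE some_table
   SET deleted = true,
       deleteddate = CURRENT_DATE
 WHERE some_id = 42;  -- hypothetical primary key

-- 'Current' rows are then:
SELECT * FROM some_table WHERE deleteddate IS NULL;
```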
On Fri, 29 Aug 2003, Ken Geis wrote:
> Ken Geis wrote:
> > I went through the code (7.4 beta2) that estimates the cost of an index
> > scan path. What I need to be sure of is that when running a query in
> > pgsql that uses only the columns that are in an index, the underlying
> > table need not
Hi Bill,
On Friday 29 August 2003 16:46, you wrote:
> Postgres has to convert the text to a varchar before it can actually
> do anything. It's possible (though I'm not sure) that it has to
> do the conversion with each record it looks at.
Nope. I tested your function with the temporary varchar va
On Fri, Aug 29, 2003 at 10:46:44AM -0400, Bill Moran wrote:
>
> Postgres has to convert the text to a varchar before it can actually
> do anything. It's possible (though I'm not sure) that it has to
> do the conversion with each record it looks at.
It does? According to the docs, varchar is jus
Oliver Siegmar wrote:
Hi,
I'm using PostgreSQL 7.3.4 and noticed a heavy performance issue when
using the datatype text for PL/pgSQL functions instead of varchar.
This is the table:
CREATE TABLE user_login_table (
id serial,
username varchar(100),
PRIMARY KEY (id),
UNIQUE (username)
On Fri, Aug 29, 2003 at 02:52:10PM +0200, Fabian Kreitner wrote:
> Hi everyone,
>
> I have a sql request which on first invocation completes in ~12sec but then
> drops to ~3sec on the following runs. The 3 seconds would be acceptable but
> how can I make sure that the data is cached at all time
On Fri, Aug 29, 2003 at 12:05:03AM -0700, William Yu wrote:
> We should see a boost when we move to 64-bit Linux and hopefully another
> one when NUMA for Linux is production-stable.
According to the people who've worked with SGIs, NUMA actually seems
to make things worse. It has something to do
"Alexander Priem" <[EMAIL PROTECTED]> writes:
> Does anyone know whether it is bad practise to have two indexes on the
> primary key of a table? (one 'primary key' index and one partial index)
It's a little unusual, but if you get enough performance boost from it
to justify the maintenance cost of
Hi,
I'm using PostgreSQL 7.3.4 and noticed a heavy performance issue when
using the datatype text for PL/pgSQL functions instead of varchar.
This is the table:
CREATE TABLE user_login_table (
id serial,
username varchar(100),
PRIMARY KEY (id),
UNIQUE (username)
);
This table conta
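A minimal sketch of the kind of lookup function under discussion (the body is
an assumption; the point is that a varchar parameter matches the indexed
username column, where a text parameter forced a conversion):

```sql
-- Assumed shape of the function; the parameter is declared varchar, matching
-- user_login_table.username, so the unique index can be used directly.
CREATE FUNCTION get_user_id(varchar) RETURNS integer AS '
DECLARE
    v_id integer;
BEGIN
    SELECT INTO v_id id FROM user_login_table WHERE username = $1;
    RETURN v_id;
END;
' LANGUAGE 'plpgsql';
```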
On Fri, 29 Aug 2003, Ken Geis wrote:
> Some good news here. Doing the same as above on 7.4beta2 took 29
> minutes. Now, the 7.3.3 was on reiser and 7.4 on ext2, so take that as
> you will. 7.4's index selectivity estimate seems much better; 7.3.3's
> anticipated rows was ten times the actual; 7
Hi everyone,
I have a sql request which on first invocation completes in ~12sec but then
drops to ~3sec on the following runs. The 3 seconds would be acceptable but
how can I make sure that the data is cached at all times? Is it simply
enough to set shared_buffers high enough to hold the entir
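For the caching question, the knobs that matter on 7.3 are roughly these
(values are illustrative assumptions, not recommendations; sizing depends on
the data set and available RAM):

```
# postgresql.conf (7.3-era); shared_buffers is counted in 8 kB pages
shared_buffers = 16384          # ~128 MB managed by PostgreSQL itself
effective_cache_size = 65536    # ~512 MB; an estimate of OS cache size,
                                # used by the planner, not an allocation
```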
On Fri, 2003-08-29 at 03:18, Shridhar Daithankar wrote:
> On 29 Aug 2003 at 0:05, William Yu wrote:
>
> > Shridhar Daithankar wrote:
[snip]
> > As for performance, the scaling is magnificient -- even when just using
> > PAE instead of 64-bit addressing. At low transaction counts, it's only
> > ~
The first index is for sorting on orad_id, the second one for sorting on
orad_name. The first one would be useful for queries like 'select * from
orderadvice_edit where orad_id=100', the second one for queries like 'select
* from orderadvice_edit order by orad_name'. Right?
Does anyone know whethe
create index orad_id_index on orderadvice (orad_id) where orad_deleteddate
is null;
create index orad_name_index on orderadvice (orad_name) where
orad_deleteddate is null;
create view orderadvice_edit as select
orad_id,orad_name,orad_description,orad_value,orad_value_quan from
orderadvice where ora
I think I understand what you mean :)
Let's see if that's true :
The entire table WAS like this: (just one example table, I have many more)
create table orderadvice (
orad_id serial primary key,
orad_name varchar(25) unique not null,
orad_description varchar(50) default null,
orad_value intege
On 28 Aug 2003 at 20:16, Anders K. Pedersen wrote:
> Shridhar Daithankar wrote:
> > On 28 Aug 2003 at 1:07, Anders K. Pedersen wrote:
> >>We're running a set of Half-Life based game servers that lookup user
> >>privileges from a central PostgreSQL 7.3.4 database server (I recently
> >>ported the
Ken Geis wrote:
When run on 7.3.3, forcing an index scan by setting
enable_seqscan=false, the query took 55 minutes to run. The index is
about 660M in size, and the table is 1G. As I mentioned before, with
table scans enabled, it bombs, running out of temporary space.
Man, I should wait a whil
On 29 Aug 2003 at 0:05, William Yu wrote:
> Shridhar Daithankar wrote:
> >> Be careful here, we've seen that with the P4 Xeon's that are
> >>hyper-threaded and a system that has very high disk I/O causes the
> >>system to be sluggish and slow. But after disabling the hyper-threading
> >>itself, ou
> > I went through the code (7.4 beta2) that estimates the cost of an index
> > scan path. What I need to be sure of is that when running a query in
> > pgsql that uses only the columns that are in an index, the underlying
> > table need not be accessed. I know that Oracle does this.
PostgreSQL
> So if I understand correctly I could ditch the 'deleted' field entirely and
> use just the 'deleteddate' field. This 'deleteddate' field would be NULL by
> default. It would contain a date value if the record is considered
> 'deleted'.
>
> The index would be 'create index a on tablename(deletedd
Ken Geis wrote:
I went through the code (7.4 beta2) that estimates the cost of an index
scan path. What I need to be sure of is that when running a query in
pgsql that uses only the columns that are in an index, the underlying
table need not be accessed. I know that Oracle does this.
Thinking
So if I understand correctly I could ditch the 'deleted' field entirely and
use just the 'deleteddate' field. This 'deleteddate' field would be NULL by
default. It would contain a date value if the record is considered
'deleted'.
The index would be 'create index a on tablename(deleteddate) where
de
Sorry, all, to wipe out the context, but it was getting a little long.
Bruno Wolff III wrote:
I am calling it quits for tonight, but will check back tomorrow
to see how things turned out.
I went through the code (7.4 beta2) that estimates the cost of an index
scan path. What I need to be sure of
So if I understand correctly I could ditch the 'deleted' field entirely and
use just the 'deleteddate' field. This 'deleteddate' field would be NULL by
default. It would contain a date value if the record is considered
'deleted'.
The index would be 'create index a on tablename(deleteddate) where
d
> We should see a boost when we move to 64-bit Linux and hopefully another
> one when NUMA for Linux is production-stable.
Assuming SCO doesn't make them remove it :P
Chris
Shridhar Daithankar wrote:
Be careful here, we've seen that with the P4 Xeon's that are
hyper-threaded and a system that has very high disk I/O causes the
system to be sluggish and slow. But after disabling the hyper-threading
itself, our system flew..
Anybody have Opteron working? How's the perform
Hi all,
I have some tables (which can get pretty large) in which I want to
record 'current' data as well as 'historical' data. This table has
fields 'deleted' and 'deleteddate' (among other fields, of course). The
field 'deleted' is false by default. Every record that I want to delete
gets th
Remember to consider partial indexes:
eg. CREATE INDEX ON table (col) WHERE deleteddate IS NOT NULL
Chris
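Applied to the deleteddate scheme from this thread, the partial-index idea
might look like this (names from the orderadvice example; note the index
predicate must match the query's WHERE clause for the index to be usable):

```sql
-- Index only the live rows; far smaller than a full-table index when
-- most rows are historical.
CREATE INDEX orderadvice_live_name ON orderadvice (orad_name)
    WHERE orad_deleteddate IS NULL;

-- A query with the same predicate can use it:
SELECT * FROM orderadvice
WHERE orad_deleteddate IS NULL
ORDER BY orad_name;
```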
----- Original Message -----
From: Alexander Priem
To: [EMAIL PROTECTED]
Sent: Friday, August 29, 2003 2:52 PM
Subject: [PERFORM] Indexing question
Hi all,
Bruno Wolff III wrote:
Can you do a \d on the real table or is that too sensitive?
It was silly of me to think of this as particularly sensitive.
stocks=> \d day_ends
     Table "public.day_ends"
  Column   |  Type   | Modifiers
-----------+---------+-----------
 stock_id  | intege
Hi all,
I have some tables (which can get pretty large) in which I want to
record 'current' data as well as 'historical' data. This table has
fields 'deleted' and 'deleteddate' (among other fields, of course).
The field 'deleted' is false by default. Every record that I want to delete gets
On Thu, 28 Aug 2003, Vivek Khera wrote:
> > "sm" == scott marlowe writes:
>
> sm> My experience has been that once you get past 6 disks, RAID5 is faster
> sm> than RAID1+0.
>
> Any opinion on stripe size for the RAID?
That's more determined by what kind of data you're gonna be handling.
On Thu, Aug 28, 2003 at 21:09:00 -0700,
Ken Geis <[EMAIL PROTECTED]> wrote:
> Bruno Wolff III wrote:
>
> I am positive! I can send a log if you want, but I won't post it to the
> list.
Can you do a \d on the real table or is that too sensitive?
It still doesn't make sense that you have a pri
Bruno Wolff III wrote:
On Thu, Aug 28, 2003 at 20:46:00 -0700,
Ken Geis <[EMAIL PROTECTED]> wrote:
It is not the table or the query that is wrong. It is either the db
parameters or the optimizer itself.
...
It is still odd that you didn't get a big speed up for just the min though.
Your example
On Thu, Aug 28, 2003 at 20:46:00 -0700,
Ken Geis <[EMAIL PROTECTED]> wrote:
>
> A big problem is that the values I am working with are *only* the
> primary key and the optimizer is choosing a table scan over an index
> scan. That is why I titled the email "bad estimates." The table has
> (s
Bruno Wolff III wrote:
On Thu, Aug 28, 2003 at 20:00:32 -0700,
Ken Geis <[EMAIL PROTECTED]> wrote:
Bruno Wolff III wrote:
Not according to the optimizer! Plus, this is not guaranteed to return
the correct results.
For it to be fast you need an index on (stock_id, price_date) so that
you can us
How do I calculate how much disk space PostgreSQL will use per byte of
data, starting from a freshly initialized installation?
Regards,
Eko Pranoto
On Thu, Aug 28, 2003 at 20:00:32 -0700,
Ken Geis <[EMAIL PROTECTED]> wrote:
> Bruno Wolff III wrote:
> >>Not according to the optimizer! Plus, this is not guaranteed to return
> >>the correct results.
> >
> >For it to be fast you need an index on (stock_id, price_date) so that
> >you can use an
Bruno Wolff III wrote:
Not according to the optimizer! Plus, this is not guaranteed to return
the correct results.
For it to be fast you need an index on (stock_id, price_date) so that
you can use an index scan.
I already said that such an index existed. In fact, it is the primary
key of the ta
On Thu, Aug 28, 2003 at 19:50:38 -0700,
Ken Geis <[EMAIL PROTECTED]> wrote:
> Bruno Wolff III wrote:
> >On Thu, Aug 28, 2003 at 17:10:31 -0700,
> > Ken Geis <[EMAIL PROTECTED]> wrote:
> >
> >>The query I want to run is
> >>
> >>select stock_id, min(price_date) from day_ends group by stock_id;
>
Bruno Wolff III wrote:
On Thu, Aug 28, 2003 at 17:10:31 -0700,
Ken Geis <[EMAIL PROTECTED]> wrote:
The query I want to run is
select stock_id, min(price_date) from day_ends group by stock_id;
The fast way to do this is:
select distinct on (stock_id) stock_id, price_date
from day_ends
order by stock_id, pri
On Thu, Aug 28, 2003 at 17:10:31 -0700,
Ken Geis <[EMAIL PROTECTED]> wrote:
> The query I want to run is
>
> select stock_id, min(price_date) from day_ends group by stock_id;
The fast way to do this is:
select distinct on (stock_id) stock_id, price_date
from day_ends
order by stock_id, price_date;
> Also
Christopher Kings-Lynne wrote:
As with all performance tests/benchmarks, there are probably dozens or
more reasons why these results aren't as accurate or wonderful as they
should be. Take them for what they are and hopefully everyone can
learn a few things from them.
Intelligent feedback is welco
> "sm" == scott marlowe writes:
sm> My experience has been that once you get past 6 disks, RAID5 is faster
sm> than RAID1+0.
Any opinion on stripe size for the RAID?
> I'm likely going to make this the default for PostgreSQL on FreeBSD
> starting with 7.4 (just posted something to -hackers about this)f. If
> you'd like to do this in your testing, just apply the following patch.
>
> Right now PostgreSQL defaults to 8K blocks, but FreeBSD uses 16K
> blocks which
> > As with all performance tests/benchmarks, there are probably dozens or
> > more reasons why these results aren't as accurate or wonderful as they
> > should be. Take them for what they are and hopefully everyone can
> > learn a few things from them.
> >
> > Intelligent feedback is welcome.
> >
Shridhar Daithankar wrote:
On 26 Aug 2003 at 21:47, Bill Moran wrote:
Hey all.
I said I was going to do it, and I finally did it.
As with all performance tests/benchmarks, there are probably dozens or
more reasons why these results aren't as accurate or wonderful as they
should be. Take them f
Alright.
To anyone who didn't get the news the first time:
The first set of benchmarks were terribly skewed because FreeBSD
didn't properly work with the hardware I was using. Thanks to
those who pointed the problem out to me.
I have scrounged new hardware, and ensured that FreeBSD is working
prop
scott.marlowe wrote:
Just to add to the clutch here, also check your bdflush settings (if
you're on linux) or equivalent (if you're not.)
Many times the swapping algo in linux can be quite bursty if you have it
set to move too many pages at a time during cleanup / flush.
According to vmstat it d
I'm surprised at the effort pgsql requires to run one of my queries. I
don't know how to tune this query.
  Column   |     Type     | Modifiers
-----------+--------------+-----------
 the_id    | integer      | not null
 the_date  | date         | not null
 num1      | numeric(9,4) |
 num2