Re: [PERFORM] Joins and full index scans...mysql vs postgres?

2006-02-23 Thread Andreas Pflug
Christopher Kings-Lynne wrote: The pgAdmin query tool is known to give an answer about 5x the real answer - don't believe it! Please, everybody, forget the factor of 5 immediately. It is not a multiplicative factor at all, but GUI update time that is *added*, and it depends on rows*columns. ryan groth wrote:

[PERFORM] Created Index is not used

2006-02-23 Thread Kjeld Peters
Select and update statements are quite slow on a large table with more than 600,000 rows. The table consists of 11 columns (nothing special). The column "id" (int8) is primary key and has a btree index on it. The following select statement takes nearly 500ms: SELECT * FROM table WHERE id = 600
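
A common cause of this symptom on pre-8.0 servers (the message does not state the version) was that a bare integer literal is typed int4 and will not match an int8 index, so the planner falls back to a sequential scan. A minimal sketch of the classic check and workaround, with an illustrative table name and value:

EXPLAIN ANALYZE SELECT * FROM bigtable WHERE id = 600000;        -- may show a Seq Scan on 7.x
-- Workarounds: cast the literal to int8, or quote it so it is resolved against the column type.
EXPLAIN ANALYZE SELECT * FROM bigtable WHERE id = 600000::int8;
EXPLAIN ANALYZE SELECT * FROM bigtable WHERE id = '600000';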

Re: [PERFORM] LIKE query on indexes

2006-02-23 Thread Ibrahim Tekin
Hi, I ran a query with ILIKE but it doesn't use the index. I tried the following method instead, and it worked. There is the overhead of 3 extra lower() calls, but I don't think it will affect performance. CREATE INDEX index_name ON mytable (lower(column) varchar_pattern_ops); SELECT * FROM mytable WHERE lower(column)
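
A self-contained sketch of the approach described above (table and column names are illustrative): build the expression index with varchar_pattern_ops, then match against the same lower() expression with a left-anchored pattern so the planner can use it.

CREATE INDEX mytable_lower_col_idx ON mytable (lower(col_name) varchar_pattern_ops);
-- Left-anchored pattern on the indexed expression: the index can be used.
SELECT * FROM mytable WHERE lower(col_name) LIKE 'foo%';
-- An unanchored pattern such as LIKE '%foo%' still cannot use a btree index.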

Re: [PERFORM] Created Index is not used

2006-02-23 Thread Ragnar
On fim, 2006-02-23 at 13:35 +0100, Kjeld Peters wrote: > Select and update statements are quite slow on a large table with more > than 600,000 rows. The table consists of 11 columns (nothing special). > The column "id" (int8) is primary key and has a btree index on it. > > The following select s

Re: [PERFORM] Created Index is not used

2006-02-23 Thread Markus Schaber
Hi, Kjeld, Kjeld Peters wrote: > Select and update statements are quite slow on a large table with more > than 600,000 rows. The table consists of 11 columns (nothing special). > The column "id" (int8) is primary key and has a btree index on it. > > The following select statement takes nearly 500

Re: [PERFORM] Created Index is not used

2006-02-23 Thread Kjeld Peters
Hi Markus, first of all thanks for your quick reply! Markus Schaber wrote: Kjeld Peters wrote: Select and update statements are quite slow on a large table with more than 600,000 rows. The table consists of 11 columns (nothing special). The column "id" (int8) is primary key and has a btree ind

Re: [PERFORM]

2006-02-23 Thread Vivek Khera
On Feb 22, 2006, at 10:44 PM, Chethana, Rao ((IE10)) wrote: That is what I wanted to know, how do I tune it? If there were a simple formula for doing it, it would already have been written up as a program that runs once you install postgres. You have to monitor your usage, use your understanding of y

[PERFORM] how to interpret/improve bad row estimates

2006-02-23 Thread Robert Treat
PostgreSQL 8.1. I have two tables, both vacuumed and analyzed. On msg307 I have altered the entityid and msgid columns' statistics values to 400. dev20001=# explain analyze SELECT ewm.entity_id, m.agentname, m.filecreatedate AS versioninfo FROM msg307 m join entity_watch_map ewm on (ewm.
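
The statistics change Robert mentions is normally done per column and followed by a fresh ANALYZE; a sketch, assuming both columns live on msg307 as the message suggests:

ALTER TABLE msg307 ALTER COLUMN entityid SET STATISTICS 400;
ALTER TABLE msg307 ALTER COLUMN msgid SET STATISTICS 400;
ANALYZE msg307;  -- rebuild the histograms so the planner sees the larger sample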

[PERFORM] Looking for a tool to "*" pg tables as ERDs

2006-02-23 Thread Ron Peacetree
Where "*" == {print | save to PDF | save to format | display on screen} Anyone know of one? TiA Ron ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq

Re: [PERFORM] Looking for a tool to "*" pg tables as ERDs

2006-02-23 Thread Markus Schaber
Hi, Ron, Ron Peacetree wrote: > Where "*" == > {print | save to PDF | save to format | display on screen} > > Anyone know of one? psql with fancy output formatting comes to my mind, or "COPY table TO file" SQL command. HTH, Markus -- Markus Schaber | Logical Tracking&Tracing International AG
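
A quick sketch of the two suggestions above, using a hypothetical table name (one psql meta-command plus plain SQL, 8.x-era syntax):

\d mytable                           -- show the table definition in psql
COPY mytable TO '/tmp/mytable.txt';  -- dump the contents to a server-side file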

Re: [PERFORM] Looking for a tool to "*" pg tables as ERDs

2006-02-23 Thread Vivek Khera
On Feb 23, 2006, at 11:38 AM, Ron Peacetree wrote: Where "*" == {print | save to PDF | save to format | display on screen} Anyone know of one? There's a perl module, GraphViz::DBI::General, which does a rather nifty job of taking a schema and making a graphviz "dot" file from it, which

Re: [PERFORM] Slow query

2006-02-23 Thread Tom Lane
"Jeremy Haile" <[EMAIL PROTECTED]> writes: > I am running a query that joins against several large tables (~5 million > rows each). The query takes an exteremely long time to run, and the > explain output is a bit beyond my level of understanding. It is an > auto-generated query, so the aliases a

Re: [PERFORM] Good News re count(*) in 8.1

2006-02-23 Thread Kevin Grittner
>>> On Wed, Feb 22, 2006 at 9:52 pm, in message <[EMAIL PROTECTED]>, Greg Stark <[EMAIL PROTECTED]> wrote: > "Kevin Grittner" <[EMAIL PROTECTED]> writes: > >> There have been several times that I have run a SELECT COUNT(*) on an entire >> table on all central machines. On identical hardware, wi

Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and

2006-02-23 Thread Tomeh, Husam
Thanks for looking into this, Tom. Here's the output from the PostgreSQL log: *** Postgresql Log: TopMemoryContext: 32768 total in 4 blocks; 7232 free (9 chunks); 25536 used Operator class cache: 8192 total in 1 blocks; 4936 free (0 chunks); 3256 used TopTransactionContext: 8192 total in 1 blocks; 6

Re: [PERFORM]

2006-02-23 Thread Jim C. Nasby
On Thu, Feb 23, 2006 at 09:38:25AM -0500, Vivek Khera wrote: > > On Feb 22, 2006, at 10:44 PM, Chethana, Rao ((IE10)) wrote: > > >That is what I wanted to know, how do I tune it? > > If there were a simple formula for doing it, it would already have > been written up as a program that runs on

Re: [PERFORM] Good News re count(*) in 8.1

2006-02-23 Thread Jim C. Nasby
On Thu, Feb 23, 2006 at 12:54:52PM -0600, Kevin Grittner wrote: > >>> On Wed, Feb 22, 2006 at 9:52 pm, in message > <[EMAIL PROTECTED]>, Greg Stark <[EMAIL PROTECTED]> wrote: > > > > "Kevin Grittner" <[EMAIL PROTECTED]> writes: > > > >> There have been several times that I have run a SELECT COU

Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and

2006-02-23 Thread Tomeh, Husam
What's more interesting is this: When I first connect to the database via "psql" and issue the "create index" statement, of course, I get the "out of memory" error. If I don't quit my current session and re-run the same DDL statement again, the index gets created successfully!.. However, if aft

Re: [PERFORM] 0ut of Memory Error during Vacuum Analyze and

2006-02-23 Thread Tom Lane
"Tomeh, Husam" <[EMAIL PROTECTED]> writes: > When I first connect to the database via "psql" and issue the "create > index" statement, of course, I get the "out of memory" error. If I > don't quit my current session and re-ran the same DDL statement again, > the index gets created successfully!..

[PERFORM] Really really slow query. What's a better way?

2006-02-23 Thread Brendan Duddridge
Hi, We're executing a query that has the following plan, and we're wondering, given the size of the data set, what's a better way to write the query? It's been running since 2pm 2 days ago. explain DELETE FROM cds.cds_mspecxx WHERE ProdID not in (SELECT stage.ProdID FROM cds_stage.cds_Catalog stage whe

Re: [PERFORM] Really really slow query. What's a better way?

2006-02-23 Thread Christopher Kings-Lynne
how about something like: DELETE FROM cds.cds_mspecxx WHERE NOT EXISTS (SELECT 1 FROM cds_stage.cds_Catalog stage where stage.countryCode = 'us' and stage.ProdId=cds.cds_mspecxx.ProdId) and countryCode = 'us'; Run explain on it first to see how it will be planned. Both tables should have a
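
A hedged sketch of the indexes that would typically support this rewrite, using the column names visible in the query (whether they match the poster's actual schema is an assumption):

-- Lets the NOT EXISTS probe be an index lookup instead of a scan per outer row.
CREATE INDEX cds_catalog_prodid_cc_idx ON cds_stage.cds_Catalog (ProdId, countryCode);
-- Supports the outer countryCode = 'us' filter on the table being deleted from.
CREATE INDEX cds_mspecxx_cc_idx ON cds.cds_mspecxx (countryCode);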

Re: [PERFORM] Really really slow query. What's a better way?

2006-02-23 Thread Brendan Duddridge
Thanks Chris for the very quick response! Just after posting this message, we tried EXPLAIN on a query of the same form as the one you just posted: explain DELETE FROM cds.cds_mspecxx WHERE not exists (SELECT 'X' FROM cds_stage.cds_Catalog stage where stage.countryCode = 'us' and stage.prodid = cds.cds_ms