The parsing has turned out to be pretty intense: it takes about 10-20
minutes for any project, and while we are parsing data it really slows
down the site's response. I tested serving static web pages from
Apache and endless loops in PHP, but the choke point seems to be doing
any other query on Postgres.
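If the load is currently done as many small INSERTs from PHP, one common way to shrink the window where the database is busy is a single COPY inside one transaction. This is only a sketch under assumptions: the table name `project_data`, its columns, and the CSV path are all hypothetical.

```sql
-- Hypothetical table and file names: replace per-row INSERTs with
-- one bulk COPY so the load holds locks and does I/O for far less time.
BEGIN;

-- Load all parsed rows from a file the server can read, in one pass:
COPY project_data (project_id, node, value)
    FROM '/tmp/project_5.csv' WITH CSV;

-- Refresh planner statistics so queries against the new data plan well:
ANALYZE project_data;

COMMIT;
```

Wrapping the whole load in one transaction also means readers never see a half-imported project.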
Zlatko Matić wrote:
Hello, Tom.
I don't understand the relationship between constraints and indexes.
Using EMS PostgreSQL Manager Lite, I created indexes on columns, some
of them unique. But when I open the database in PgAdmin, all such
"unique" indexes are listed as constraints, and there are no indexes
listed.
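The two notions converge because PostgreSQL enforces a unique constraint by creating a unique index behind the scenes; admin tools then differ in which system catalog they show you. A sketch with hypothetical table and column names:

```sql
-- Two routes to the same on-disk enforcement mechanism.
CREATE TABLE t (a integer, b integer);

-- Route 1: declare a constraint; Postgres builds a unique index for it
-- and records the constraint in pg_constraint.
ALTER TABLE t ADD CONSTRAINT t_a_key UNIQUE (a);

-- Route 2: build the index directly; it enforces uniqueness just the
-- same, but no pg_constraint entry is created.
CREATE UNIQUE INDEX t_b_idx ON t (b);
```

A tool that groups constraint-backed indexes under "constraints" will show route 1 there, which may be why PgAdmin's listing looks different from what EMS created.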
From: <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc:
Sent: Friday, July 13, 2007 3:39 AM
Subject: Re: [GENERAL] optimizing postgres
[EMAIL PROTECTED] writes:
> It turned out he was right for our current set up. When I needed to
> empty the project table to re-parse data, doing a cascading delete
> could take up to 10 minutes!
You mean ON DELETE CASCADE foreign keys? Usually the reason that's
slow is you forgot to put an index on the referencing column.
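The asymmetry that bites here: PostgreSQL indexes the referenced primary key automatically, but not the foreign-key column on the child side, so each cascaded delete sequential-scans the child table. A sketch with hypothetical table names:

```sql
-- Hypothetical schema: deleting a project cascades to its items.
CREATE TABLE project (id integer PRIMARY KEY);
CREATE TABLE item (
    id         integer PRIMARY KEY,
    project_id integer REFERENCES project (id) ON DELETE CASCADE
);

-- project.id is indexed (it's a primary key), but item.project_id is
-- not; without this index, every DELETE FROM project scans item in full:
CREATE INDEX item_project_id_idx ON item (project_id);
```

If the goal is simply to empty a table for re-parsing, `TRUNCATE` is also far faster than a cascading `DELETE`, since it skips per-row work entirely.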
* [EMAIL PROTECTED] ([EMAIL PROTECTED]) wrote:
> Since I'm not an expert in Postgres database design, I'm assuming I've
> done something sub-optimal. Are there some common techniques for
> tuning postgres performance? Do we need beefier hardware?
Honestly, it sounds like the database design might need some work
before you blame the hardware.
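Before reaching for beefier hardware, it is usually worth asking the planner where the time goes. EXPLAIN ANALYZE executes the query and reports actual per-node timings; the table and column names below are placeholders, not from the original post:

```sql
-- Run the real query and measure where its time is spent.
EXPLAIN ANALYZE
SELECT *
FROM item
WHERE project_id = 5;

-- A "Seq Scan" node with a large actual row count in this output
-- usually points to a missing index, not underpowered hardware.
```

Comparing the planner's estimated rows against the actual rows in the same output also reveals stale statistics, which a plain `ANALYZE` fixes.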
Hello all -
I'm working on a Postgres project after coming from a MySQL background
(no flames, please :). We are importing fairly large XML datasets
(10-20 MB of XML files per 'project', currently 5 projects) into the
database for querying.
We are using PHP to create a web interface where users can query the
imported data.