Alan Hodgson wrote:
It's because everything is cached, in particular the relevant rows from
the "email" table (accessing which took 22 of the original 27 seconds).
The plan looks good for what it's doing.
I don't see that query getting much faster unless you could add a lot more
cache RAM; 30
I think I may have answered my own question partially, the problem may
be how I structure the query.
I always structured my tsearch queries as follows, per my initial
read of the tsearch2 instructions...
select email_id from email, to_tsquery('default', 'howard') as q where q @@ fts;
Ho
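One way to restructure it (a sketch of what I mean; the search term is just the example from above, and this is not necessarily the exact form settled on in the thread) is to drop the cross join against the function scan and put the tsquery inline in the WHERE clause:

```sql
-- Same search, expressed without joining against a function scan;
-- the planner can match the inline tsquery directly to the fts index.
SELECT email_id
FROM email
WHERE fts @@ to_tsquery('default', 'howard');
```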
On Tue, 17 Jun 2008, Howard Cole wrote:
Alan Hodgson wrote:
It's because everything is cached, in particular the relevant rows from the
"email" table (accessing which took 22 of the original 27 seconds).
Thanks Alan, I guessed that the caching was the difference, but I do not
understand why t
On Tue, 17 Jun 2008, Howard Cole wrote:
I think I may have answered my own question partially, the problem may be how
I structure the query.
Original statement:
Nested Loop (cost=4.40..65.08 rows=16 width=8)
  -> Function Scan on q (cost=0.00..0.01 rows=1 width=32)
  -> Bitmap Heap Sc
As far as I can see, that shouldn't make any difference. Both queries
still do the bitmap heap scan, and have almost exactly the same cost.
Matthew
You may have a point there Matthew, they both appear to do a scan on the
email table (Why?). But for whatever reason, I swear the second method
On Tue, 17 Jun 2008, Howard Cole wrote:
They both appear to do a scan on the email table (Why?).
The indexes don't contain copies of the row data. They only contain
pointers to the rows in the table. So once the index has been consulted,
Postgres still needs to look at the table to fetch the
Matthew Wakeling wrote:
On Tue, 17 Jun 2008, Howard Cole wrote:
They both appear to do a scan on the email table (Why?).
The indexes don't contain copies of the row data. They only contain
pointers to the rows in the table. So once the index has been
consulted, Postgres still needs to look a
On Tue, 17 Jun 2008, Howard Cole wrote:
If I do a query that uses another index, then it uses the index only and
does not scan the email table.
Not true. It only looks a little bit like that from the explain output.
However, if you look closely:
Index Scan using email_email_directory_id_idx
Dear All,
I am going to migrate a database from one version to another. Is there
an article or other document explaining the possibilities and related
issues?
Further Explanation:
I have a database in postgres X.Y which has around 90 tables and a lot
of data in it.
In the next version of
On Tue, 17 Jun 2008, sathiya psql wrote:
I have a database in postgres X.Y which has around 90 tables and a lot
of data in it.
In the next version of that product I have some more tables, so how do I
migrate that? There may be 150 tables; of the original 90 tables, 70 may
be the same, 20 got deleted, and
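For the table-list comparison itself, one sketch (assuming both versions' schemas can be loaded side by side on one server; the schema names old_public and new_public are hypothetical) is to diff information_schema:

```sql
-- Tables present in the old version but dropped in the new one
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'old_public'
EXCEPT
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'new_public';
```

Swapping the two halves of the EXCEPT lists the newly added tables instead; for moving the data itself, a pg_dump of the old database restored into the new one is the usual route.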
Actually, the index returns page numbers in the table on disc which
may contain one or more rows that are relevant. Postgres has to fetch
the whole row to find out the email_id and any other information,
including whether the row is visible in your current transaction
(concurrency control com
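That heap visit shows up if you run the query under EXPLAIN ANALYZE (a sketch; the column and search term come from the query earlier in the thread, and the exact plan will differ):

```sql
-- The Bitmap Heap Scan node in the resulting plan is the table visit:
-- the full-text index narrows the search to candidate pages, but each
-- row is still fetched to read email_id and check tuple visibility.
EXPLAIN ANALYZE
SELECT email_id
FROM email
WHERE fts @@ to_tsquery('default', 'howard');
```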
Hi,
I need to install an 8.3 database and was wondering which hardware would be
sufficient for good performance (less than 30s for the slowest select).
Database size: 25 GB/year, 5 years of history
One main table containing 40 million lines per year.
Batch inserts of 10 lines. Very very fe
On Tue, Jun 17, 2008 at 7:38 AM, Lionel <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I need to install an 8.3 database and was wondering which hardware would be
> sufficient for good performance (less than 30s for the slowest select).
>
> Database size: 25 GB/year, 5 years of history
> One main table con
On Tue, Jun 17, 2008 at 03:38:59PM +0200, Lionel wrote:
> Hi,
>
> I need to install an 8.3 database and was wondering which hardware would be
> sufficient for good performance (less than 30s for the slowest select).
> Statements will mainly do sums on the main table, grouped by whatever column
On Tue, Jun 17, 2008 at 8:25 AM, Andrew Sullivan <[EMAIL PROTECTED]> wrote:
> On Tue, Jun 17, 2008 at 03:38:59PM +0200, Lionel wrote:
>> Which OS would you use ? (knowing that there will be a JDK 1.6 installed
>> too)
>
> . . .I think this is the real mistake. Get a separate database box.
> It's
On Tue, 17 Jun 2008, Lionel wrote:
I need to install an 8.3 database and was wondering which hardware would be
sufficient for good performance (less than 30s for the slowest select).
It's almost impossible to predict what users will do via the web
application that queries this database: almost
Andrew Sullivan wrote:
> You won't need lots of processer, then.
I can't find anything less than quad core in this price range...
> How big's the database?
with 20 million rows, the main table is 3.5 GB on Win XP,
with 8 GB of indexes.
I estimate the whole database at around 30 GB/year
> If you can
On Tue, 17 Jun 2008, Andrew Sullivan wrote:
Which OS would you use ? (knowing that there will be a JDK 1.6 installed
too)
. . .I think this is the real mistake. Get a separate database box.
It's approximately impossible to tune a box correctly for both your
application and your database, in m
On Tue, Jun 17, 2008 at 04:49:17PM +0200, Lionel wrote:
> My tomcat webapp is well coded and consumes nearly nothing.
If I were ever inclined to say, "Nonsense," about code I've never
seen, this is probably the occasion on which I'd do it. A running JVM
is necessarily going to use some memory, a
On Tue, Jun 17, 2008 at 9:32 AM, Greg Smith <[EMAIL PROTECTED]> wrote:
> Considering that Lionel's system seems pretty overpowered for what he's
> doing--runs plenty fast on a much slower system, enough RAM to hold a large
> portion of the primary tables and database, all batch updates that don't
On Tuesday 17 June 2008, Howard Cole <[EMAIL PROTECTED]> wrote:
> This misses out the random access of the email table, turning my 27
> second query into 6 seconds.
It took less time because it retrieved a lot less data - it still has to
look at the table.
--
Alan
On Tuesday 17 June 2008, Howard Cole <[EMAIL PROTECTED]> wrote:
> Incidentally, how can I clear the cache in between queries?
Stop PostgreSQL, unmount the filesystem it's on, remount it, restart
PostgreSQL. Works under Linux.
If it's on a filesystem you can't unmount hot, you'll need to reboot.
On Tue, 17 Jun 2008, Alan Hodgson wrote:
On Tuesday 17 June 2008, Howard Cole <[EMAIL PROTECTED]> wrote:
Incidentally, how can I clear the cache in between queries?
Stop PostgreSQL, unmount the filesystem it's on, remount it, restart
PostgreSQL. Works under Linux.
If it's on a filesystem you
On Tue, Jun 17, 2008 at 9:42 AM, Andrew Sullivan <[EMAIL PROTECTED]> wrote:
> On Tue, Jun 17, 2008 at 04:49:17PM +0200, Lionel wrote:
>> My tomcat webapp is well coded and consumes nearly nothing.
>
> If I were ever inclined to say, "Nonsense," about code I've never
> seen, this is probably the oc
On Tue, 17 Jun 2008, Andrew Sullivan wrote:
A running JVM is necessarily going to use some memory, and that is
memory use that you won't be able to factor out properly when developing
models of your database system performance.
Now you've wandered into pure FUD. Tuning maximum memory usage o
On Tue, 17 Jun 2008, Scott Marlowe wrote:
We had a reporting server with about 80G of data on a machine with 4G
ram last place I worked, and it could take it a few extra seconds to
hit the old data, but the SW RAID-10 on it made it much faster at
reporting than it would have been with a single d
On Tue, Jun 17, 2008 at 10:56 AM, Greg Smith <[EMAIL PROTECTED]> wrote:
> On Tue, 17 Jun 2008, Scott Marlowe wrote:
>
>> We had a reporting server with about 80G of data on a machine with 4G
>> ram last place I worked, and it could take it a few extra seconds to
>> hit the old data, but the SW RAID
"Scott Marlowe" wrote:
> You're absolutely right though, we really need to know the value of
> fast performance here.
the main problem is that my customers are used to having their reports
after a few seconds.
They want to have 10 times more data but still have the same speed, which
is, I think, q
On Tue, Jun 17, 2008 at 11:59 AM, Lionel <[EMAIL PROTECTED]> wrote:
> "Scott Marlowe" wrote:
>> You're absolutely right though, we really need to know the value of
>> fast performance here.
>
> the main problem is that my customers are used to having their reports
> after a few seconds.
> They want d
On Tue, 17 Jun 2008, Lionel wrote:
I did some tests with a 20 million row database on a single-disk dual-core
2GB Win XP system (default postgresql config); most of the time is spent in
I/O: 50-100 secs for statements that scan 6 million lines, which will
happen. Almost no CPU activity.
> From: [EMAIL PROTECTED]
> Subject: [PERFORM] Which hardware ?
> Date: Tue, 17 Jun 2008 15:38:59 +0200
> To: pgsql-performance@postgresql.org
>
> Hi,
>
> I need to install a 8.3 database and was wondering which hardware would be
> sufficient to have go