From: [EMAIL PROTECTED]
Sent: Tuesday, January 16, 2007 19:12
To: Jan van der Weijde; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Performance with very large tables

On Tue, Jan 16, 2007 at 12:06:38 -0600,
Bruno Wolff III <[EMAIL PROTECTED]> wrote:
> On Mon, Jan 15, 2007 at 11:52:29 +0100,
> Jan van der Weijde <[EMAIL PROTECTED]> wrote:
> > Does anyone have a suggestion for this problem ? Is there for instance
> > an alternative to LIMIT/OFFSET so that SELECT on large tables has a good
> > performance ?
>
> Depending on exactly what you want to happen, you may be able to continue
> where you left off using a condition on the primary key, using the last
> primary key value for a row that you have viewed, rather than [...]
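Bruno's suggestion is what is now usually called keyset (or seek) pagination: filter on the last primary-key value seen instead of using OFFSET. A minimal, self-contained sketch of the idea, using Python's built-in sqlite3 so it runs without a PostgreSQL server; the table and column names are invented for illustration:

```python
import sqlite3

# In-memory stand-in for the customer's large table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(i, f"row-{i}") for i in range(1, 1001)])

def fetch_page(last_id, page_size=100):
    """Resume after the last primary-key value already viewed.

    Unlike OFFSET, the WHERE condition lets the primary-key index
    seek straight to the start of the page instead of scanning and
    discarding all earlier rows.
    """
    cur = conn.execute(
        "SELECT id, payload FROM items WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, page_size))
    return cur.fetchall()

page1 = fetch_page(0)
page2 = fetch_page(page1[-1][0])   # continue where we left off
print(page1[0][0], page1[-1][0], page2[0][0])  # -> 1 100 101
```

The same two queries work unchanged on PostgreSQL; the win grows with the offset, since `OFFSET n` still reads and throws away n rows.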
Jan van der Weijde wrote:
> That is exactly the problem I think. However I do not deliberately
> retrieve the entire table. I use the default settings of the PostgreSQL
> installation and just execute a simple SELECT * FROM table.

You will want to increase the default settings and let PostgreSQL use as
much RAM as you have - especially when retrieving a large [...]
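For reference, the memory settings being alluded to live in postgresql.conf. The values below are only an illustrative sketch, not recommendations; appropriate numbers depend on the machine's RAM and workload:

```ini
# postgresql.conf -- illustrative values only
shared_buffers = 512MB        # shared cache for table and index pages
work_mem = 32MB               # memory per sort/hash operation
effective_cache_size = 2GB    # planner hint: total cache (PG + OS) available
```

A restart (for shared_buffers) or reload (for the others) is needed for changes to take effect.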
Sent: Monday, January 15, 2007 13:49
To: Jan van der Weijde
Cc: Alban Hertroys; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Performance with very large tables

If you go with Java, you can make it faster by using setFetchSize (JDBC
functionality) from the client, and that will help you with the performance
in case of fetching large amounts of data.

--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
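The idea behind JDBC's setFetchSize is to stream the result set in batches instead of materializing all rows at once. Python's DB-API has the analogous cursor.fetchmany(); a self-contained sketch with the stdlib sqlite3 module (with PostgreSQL you would additionally want a server-side cursor so batching happens on the wire as well):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big (n INTEGER)")
conn.executemany("INSERT INTO big VALUES (?)", [(i,) for i in range(1000)])

cur = conn.execute("SELECT n FROM big ORDER BY n")
cur.arraysize = 200            # default batch size used by fetchmany()

batches = 0
total = 0
while True:
    rows = cur.fetchmany()     # one batch at a time, not the whole table
    if not rows:
        break
    batches += 1
    total += len(rows)

print(batches, total)  # -> 5 1000
```

The client's peak memory is bounded by one batch rather than by the full result set, which is exactly why the first rows arrive quickly.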
-----Original Message-----
From: Alban Hertroys [mailto:[EMAIL PROTECTED]]
Sent: Monday, January 15, 2007 12:49
To: Jan van der Weijde
Cc: Richard Huxton; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Performance with very large tables

Jan van der Weijde wrote:
> Thank you.
> [...]
Jan van der Weijde wrote:
That is exactly the problem I think. However I do not deliberately
retrieve the entire table. I use the default settings of the PostgreSQL
installation and just execute a simple SELECT * FROM table.
I am using a separate client and server (both XP in the test
environment) [...]
Cc: Alban Hertroys; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Performance with very large tables
If you go with Java, you can make it faster by using setFetchSize (JDBC
functionality) from client and that will help you with the performance
in case of fetching large amounts of data
Oh yes, you need to have a condition first for which you have partitioned
the tables. Only in that case will it work with partitions.

--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 1/15/07, Richard Huxton wrote:
> Shoaib Mir wrote:
> > You can also opt for partitioning the tables and this way select will
> > only get the data from the required partition.
Jan van der Weijde wrote:
Thank you.
It is true he wants to have the first few records quickly and then
continue with the next records. However, without LIMIT it already takes a
very long time before the first record is returned.
I reproduced this with a table with 1.1 million records on an XP machine [...]
Shoaib Mir wrote:
> You can also opt for partitioning the tables and this way select will
> only get the data from the required partition.

Not in the case of SELECT * FROM though. Unless you access the
specific partitioned table.
Sent: Monday, January 15, 2007 12:01
To: Jan van der Weijde
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Performance with very large tables

Jan van der Weijde wrote:
> Hello all,
>
> one of our customers is using PostgreSQL with tables containing
> millions of records. A simple [...]
You can also opt for partitioning the tables and this way select will only
get the data from the required partition.
--
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)
Jan van der Weijde wrote:
> Hello all,
>
> one of our customers is using PostgreSQL with tables containing millions
> of records. A simple 'SELECT * FROM <table>' takes way too much time in
> that case, so we have advised him to use the LIMIT and OFFSET clauses.

That won't reduce the time to fetch millions [...]