Hi all,
Chris Browne (one of my colleagues here) has posted some tests in the
past indicating that jfs may be the fastest filesystem for Postgres
use on Linux.
We have lately had a couple of cases where machines either locked up,
slowed down to the point of complete unusability, or died completely.
On Wed, 7 Jan 2004, Eric Jain wrote:
> Any tips for speeding up index creation?
>
> I need to bulk load a large table with 100M rows and several indexes,
> some of which span two columns.
>
> By dropping all indexes prior to issuing the 'copy from' command, the
> operation completes 10x as fast (1.5h vs 15h).
Does anyone have any data supporting a particular stripe size in
RAID-0? Do large stripe sizes allow drives to stream data more efficiently, or do they
defeat read-ahead?
--
greg
---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings
"D. Dante Lorenso" <[EMAIL PROTECTED]> writes:
> Any thoughts? Sure, the PHP function I'm using above 'works', but is it
> the most efficient? I hope I'm not actually pulling all 100,000 records
> across the wire when I only intend to show 10 at a time. See what I'm
> getting at?
I tend to do
On Wed, 7 Jan 2004 18:08:06 +0100
"Eric Jain" <[EMAIL PROTECTED]> wrote:
> Any tips for speeding up index creation?
>
> I need to bulk load a large table with 100M rows and several indexes,
> some of which span two columns.
>
> By dropping all indexes prior to issuing the 'copy from' command, the
> operation completes 10x as fast (1.5h vs 15h).
Any tips for speeding up index creation?
I need to bulk load a large table with 100M rows and several indexes,
some of which span two columns.
By dropping all indexes prior to issuing the 'copy from' command, the
operation completes 10x as fast (1.5h vs 15h).
Unfortunately, recreating a single i
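For readers who want to try the same pattern, a minimal sketch (the table and
index names are hypothetical; the thread does not show the actual schema):

```sql
-- Drop the indexes before the bulk load (hypothetical names).
DROP INDEX big_table_a_idx;
DROP INDEX big_table_a_b_idx;

-- Bulk load; COPY bypasses per-row INSERT overhead.
COPY big_table FROM '/path/to/data';

-- On recent PostgreSQL versions, raising maintenance_work_mem for the
-- session gives the index-build sorts more memory to work with.
SET maintenance_work_mem = '512MB';

-- Recreate the indexes after the load, including the two-column one.
CREATE INDEX big_table_a_idx ON big_table (a);
CREATE INDEX big_table_a_b_idx ON big_table (a, b);
```

Building an index over loaded data is one big sort rather than 100M
incremental b-tree insertions, which is where the 10x difference comes from.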
I need to know the original number of rows that WOULD have been returned
by a SELECT statement if the LIMIT / OFFSET were not present in the
statement.
Is there a way to get this data from PG ?
SELECT
... ;
-- returns 100,000 rows
but,
SELECT
...
LIMIT x
OFFSET y;
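One common approach (a sketch; the table and column names are hypothetical) is
to issue a separate count query with the same WHERE clause as the paged query.
On PostgreSQL 8.4 and later, a window function can also return the total
alongside each page row in a single query:

```sql
-- Two-query approach: same WHERE clause, no LIMIT/OFFSET.
SELECT count(*) FROM items WHERE category = 42;

SELECT * FROM items
WHERE category = 42
ORDER BY id
LIMIT 10 OFFSET 20;

-- Single-query approach (PostgreSQL 8.4+): count(*) OVER () attaches
-- the total row count to every row of the page.
SELECT *, count(*) OVER () AS total_rows
FROM items
WHERE category = 42
ORDER BY id
LIMIT 10 OFFSET 20;
```

Either way, only the page's rows cross the wire; the count is computed
server-side rather than by fetching all 100,000 records into PHP.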