-----Original Message-----
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]] On Behalf Of Ryan Hansen
Sent: September 11, 2008 1:14
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Improve COPY performance for large data sets
NEVERMIND!!
I found it. Turns out there was still a constraint on the table. Once
I dropped that, the time went down to 44 minutes.
On Wed, Sep 10, 2008 at 11:16 AM, Bill Moran
<[EMAIL PROTECTED]> wrote:
> There's a program called pgloader which supposedly is faster than copy.
> I've not used it so I can't say definitively how much faster it is.
I think you are thinking of pg_bulkloader...
On 10 Sept 2008, at 19:16, Bill Moran wrote:
> There's a program called pgloader which supposedly is faster than
> copy.
> I've not used it so I can't say definitively how much faster it is.
In fact pgloader is using COPY under the hood, and doing so v
Correction --
2 hours to read the whole disk.
> 1. It won't make a load take 12 hours unless we're talking a load that
> is, in total, similar to the size of the disk. A slow, newer SATA drive
> will read and write at ~50MB/sec at minimum, so the whole 320GB can be
> scanned at 3GB per minute.
A single SATA drive may not be the best performer, but:
1. It won't make a load take 12 hours unless we're talking a load that is,
in total, similar to the size of the disk. A slow, newer SATA drive will
read and write at ~50MB/sec at minimum, so the whole 320GB can be scanned
at 3GB per minute.
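Spelling out the arithmetic behind that estimate:

```
50 MB/s × 60 s/min = 3,000 MB/min ≈ 3 GB/min
320 GB ÷ 3 GB/min  ≈ 107 min     ≈ 1.8 hours
```

That is where the "2 hours to read the whole disk" correction comes
from: even a slow drive can sequentially scan the entire 320 GB in
under two hours, so raw disk bandwidth alone cannot explain a 12-hour
load.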
Hi,
On Wednesday, 10 September 2008, Ryan Hansen wrote:
> One thing I'm experiencing some trouble with is running a COPY of a
> large file (20+ million records) into a table in a reasonable amount of
> time. Currently it's taking about 12 hours to complete on a 64 bit
> server with 3 GB memory a
In response to Ryan Hansen <[EMAIL PROTECTED]>:
>
> I'm relatively new to PostgreSQL but I've been in the IT applications
> industry for a long time, mostly in the LAMP world.
>
> One thing I'm experiencing some trouble with is running a COPY of a
> large file (20+ million records) into a table
On Wednesday 10 September 2008, Ryan Hansen <[EMAIL PROTECTED]>
wrote:
> Currently it's taking about 12 hours to complete on a 64 bit
> server with 3 GB memory allocated (shared_buffer), single SATA 320 GB
> drive. I don't seem to get any improvement running the same operation
> on a dual opteron
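The 3 GB shared_buffers figure above is only part of the tuning
picture; for bulk loads on an 8.x server, a few other postgresql.conf
settings usually matter more. A hedged sketch — the values below are
illustrative assumptions, not figures taken from this thread:

```
# postgresql.conf -- illustrative values for a 2008-era 8.x server
shared_buffers = 1024MB         # very large values help less than expected
maintenance_work_mem = 512MB    # speeds index and constraint rebuilds
checkpoint_segments = 64        # fewer forced checkpoints mid-load
wal_buffers = 8MB               # larger WAL buffer for write-heavy work
```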
NEVERMIND!!
I found it. Turns out there was still a constraint on the table. Once
I dropped that, the time went down to 44 minutes.
Maybe I am an idiot after all. :)
-Ryan
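Ryan's fix generalizes: dropping constraints (and indexes) before a
big COPY and restoring them afterwards trades a per-row check during
the load for a single validation pass at the end. A minimal sketch —
the table and constraint names here are hypothetical:

```sql
-- Hypothetical names; list the real constraints with \d tablename in psql.
ALTER TABLE big_table DROP CONSTRAINT big_table_ref_fk;

-- ... run the COPY here ...

-- Re-adding the constraint validates the whole table in one pass.
ALTER TABLE big_table ADD CONSTRAINT big_table_ref_fk
  FOREIGN KEY (ref_id) REFERENCES ref_table (id);
```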
Greetings,
I'm relatively new to PostgreSQL but I've been in the IT applications
industry for a long time, mostly in the LAMP world.
One thing I'm experiencing some trouble with is running a COPY of a
large file (20+ million records) into a table in a reasonable amount of
time. Currently it's taking about 12 hours to complete on a 64 bit
server with 3 GB memory allocated (shared_buffer), single SATA 320 GB
drive.
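For reference, the kind of statement being described looks like this —
the file path and table name are hypothetical:

```sql
-- Server-side COPY: the file path is read by the backend process,
-- so it must be accessible on the database server itself.
COPY big_table FROM '/path/to/data.csv' WITH CSV;
```

The whole load runs as a single statement, and every constraint and
index on the target table is maintained row by row — which is why the
stray constraint in this thread proved so costly.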