On 2009-09-29, Alan Hodgson wrote:
> On Tuesday 29 September 2009, Sam Mason wrote:
>> ?? I'm not sure what you're implying about the semantics here, but it
>> doesn't seem right. COPY doesn't somehow break out of ACID semantics,
>> it's only an *optimization* that allows you to get large quantities of
>> data into the database faster.
On Tue, Sep 29, 2009 at 12:17:51PM -0400, Tom Lane wrote:
> Sam Mason writes:
> > On Tue, Sep 29, 2009 at 08:45:55AM -0700, Alan Hodgson wrote:
> >> I think a big reason is also that the client can stream the data without
> >> waiting for a network round trip ack on every statement.
>
> I don't think so. I'm pretty sure you can send multiple statements in a
> single round trip.
On Tue, Sep 29, 2009 at 09:11:19AM -0700, Alan Hodgson wrote:
> On Tuesday 29 September 2009, Sam Mason wrote:
> > I'm pretty sure you can send multiple statements in a
> > single round trip. libpq is defined to work in such cases anyway:
> >
> > http://www.postgresql.org/docs/current/static/li
Sam Mason writes:
> On Tue, Sep 29, 2009 at 08:45:55AM -0700, Alan Hodgson wrote:
>> I think a big reason is also that the client can stream the data without
>> waiting for a network round trip ack on every statement.
> I don't think so. I'm pretty sure you can send multiple statements in a
> single round trip. libpq is defined to work in such cases anyway.
On Tuesday 29 September 2009, Sam Mason wrote:
> > I think a big reason is also that the client can stream the data
> > without waiting for a network round trip ack on every statement.
>
> I don't think so. I'm pretty sure you can send multiple statements in a
> single round trip. libpq is defined to work in such cases anyway.
On Tue, Sep 29, 2009 at 08:45:55AM -0700, Alan Hodgson wrote:
> On Tuesday 29 September 2009, Sam Mason wrote:
> > it's faster is because
> > parsing CSV data is easier than parsing SQL.
> >
> > At least I think that's the only difference; anybody know better?
>
> I think a big reason is also that the client can stream the data without
> waiting for a network round trip ack on every statement.
On Tuesday 29 September 2009, Sam Mason wrote:
> ?? I'm not sure what you're implying about the semantics here, but it
> doesn't seem right. COPY doesn't somehow break out of ACID semantics,
> it's only an *optimization* that allows you to get large quantities of
> data into the database faster.
On Tue, Sep 29, 2009 at 3:31 PM, Dave Huber <dhu...@letourneautechnologies.com> wrote:
> All I have to say is wow! COPY works sooo much faster than the iterative
> method I was using. Even after having to read the entire binary file and
> reformat the data into the binary format that postgres needs, it is an
> order of magnitude faster than using a prepared INSERT.
All I have to say is wow! COPY works sooo much faster than the iterative method
I was using. Even after having to read the entire binary file and reformat the
data into the binary format that postgres needs, it is an order of magnitude
faster than using a prepared INSERT. At least that's what my
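Part of why COPY wins is how simple its text wire format is compared to parsing SQL. As an illustrative sketch (the helper name is mine, not from this thread; escaping follows COPY's documented text mode), this is roughly what each line sent to "COPY ... FROM STDIN" looks like:

```python
def copy_text_row(values):
    """Encode one row in COPY text format: tab-separated fields,
    \\N for NULL, and backslash escapes for special characters."""
    out = []
    for v in values:
        if v is None:
            out.append(r"\N")
        else:
            s = str(v)
            s = (s.replace("\\", "\\\\")
                  .replace("\t", "\\t")
                  .replace("\n", "\\n")
                  .replace("\r", "\\r"))
            out.append(s)
    return "\t".join(out) + "\n"

print(repr(copy_text_row([1, "hello\tworld", None])))
```

The server only has to split on tabs and newlines; no SQL parsing or per-row planning is involved.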
On Mon, Sep 28, 2009 at 08:33:45PM -0400, Martin Gainty wrote:
> INSERTs/UPDATEs are historically slow, especially with autocommit on
> (or implied autocommit): the database writer stops other processing and
> applies that one record to the database.
That seems to be overstating the issue somewhat.
> From: s...@samason.me.uk
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] bulk inserts
>
> On Mon, Sep 28, 2009 at 10:38:05AM -0500, Dave Huber wrote:
> > Using COPY is out of the question as the file is not formatted for
> > that and since other operations need to occur, the file needs to be
> > read sequentially anyway.
On Mon, Sep 28, 2009 at 04:35:53PM -0500, Dave Huber wrote:
> One assumption I am operating under right now is
> that the format of the binary file is the same as the buffer in
> PQputCopyData, including the header. If I am wrong, could someone
> please let me know? Thanks,
I've always used ASCII
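On the binary-format question: the buffer streamed through PQputCopyData in binary mode is not the raw data file; PostgreSQL's binary COPY format wraps tuples in its own framing, starting with a fixed header. A minimal sketch of that framing (layout per the documented PGCOPY format; the single-int4-column row helper is mine for illustration):

```python
import struct

# PGCOPY framing: 11-byte signature, int32 flags, int32 extension length
PGCOPY_SIGNATURE = b"PGCOPY\n\xff\r\n\x00"

def pgcopy_header():
    # flags = 0 (no OIDs), header extension length = 0
    return PGCOPY_SIGNATURE + struct.pack("!ii", 0, 0)

def pgcopy_int4_row(v):
    # one tuple: int16 field count, then per field int32 length + payload
    return struct.pack("!hii", 1, 4, v)

def pgcopy_trailer():
    return struct.pack("!h", -1)   # field count of -1 marks end of data

stream = pgcopy_header() + pgcopy_int4_row(42) + pgcopy_trailer()
print(len(stream))   # 19-byte header + 10-byte tuple + 2-byte trailer = 31
```

So a binary file is only directly streamable if it already carries this exact header and per-field framing; otherwise it has to be rewritten first, as Dave describes above.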
On Mon, Sep 28, 2009 at 10:38:05AM -0500, Dave Huber wrote:
> Using COPY is out of the question as the file is not formatted for
> that and since other operations need to occur, the file needs to be
> read sequentially anyway.
On Mon, Sep 28, 2009 at 10:38:05AM -0500, Dave Huber wrote:
> Using COPY is out of the question as the file is not formatted for
> that and since other operations need to occur, the file needs to be
> read sequentially anyway.
Just to expand on what Martin said; if you can generate a set of EXECUTE statements
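One way to read that suggestion (statement and table names below are hypothetical, not from the thread): prepare the INSERT once, then concatenate many EXECUTE statements into one string so the whole batch travels in a single round trip. Real code should prefer driver-level parameter binding; the quoting here is deliberately naive:

```python
def execute_batch(rows):
    # Build one multi-statement string: PREPARE once, EXECUTE per row.
    stmts = ["PREPARE bulk_ins (int, text) AS "
             "INSERT INTO mytable VALUES ($1, $2);"]
    for idd, tid in rows:
        quoted = tid.replace("'", "''")   # naive SQL string quoting
        stmts.append(f"EXECUTE bulk_ins ({idd}, '{quoted}');")
    return "\n".join(stmts)

batch = execute_batch([(1, "a"), (2, "o'b")])
print(batch.count("EXECUTE bulk_ins"))   # 2
```

This avoids a network ack per row while still using ordinary SQL, which is the middle ground between per-row INSERTs and COPY.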
On Mon, Sep 28, 2009 at 10:38:05AM -0500, Dave Huber wrote:
> Hi, I'm fairly new to postgres and am having trouble finding what I'm
> looking for. Is there a feature that allows bulk inserts into tables?
> My setup is Win XPe 2002 SP3 and PostgreSQL 8.3. I need to add
> entries from a file where ea
Thank you all for cluing me in on pg_putline and pg_endcopy. Much cleaner than my kluge.
kj
# Stream the rows through COPY via DBD::Pg's old putline/endcopy interface
$dbi->do("COPY mytable FROM stdin;");
for my $row (@rows) {                       # @rows holds [$idd, $tid] pairs
    my ($idd, $tid) = @$row;
    $dbi->func("$idd\t$tid\n", 'putline');  # tab-separated COPY text format
}
$dbi->func("\\.\n", 'putline');             # end-of-data marker
$dbi->func('endcopy');
I don't know about modern versions of DBI and DBD::Pg,
but it worked back in 2001 :)
Kynn Jones wrote:
I have a Perl script that is supposed to make a large number of inserts in a PostgreSQL database.
"Kynn Jones" <[EMAIL PROTECTED]> writes:
> So I'm back at the drawing board. How can I make fast bulk inserts into a
> PostgreSQL database from within a Perl script?
The simplest and most effective thing you can do is to wrap many inserts
into a single transaction block. After that, if you're us
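Tom's first suggestion is easy to demonstrate. The sketch below uses Python's built-in sqlite3 purely as a stand-in to show the shape of the pattern (open one transaction, insert many rows, commit once); with PostgreSQL the same structure applies through DBI/DBD::Pg or libpq, and avoids a commit (and its disk sync) per row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")

rows = [(i, f"row{i}") for i in range(1000)]
with conn:  # one transaction wraps all the inserts, not one per row
    conn.executemany("INSERT INTO t VALUES (?, ?)", rows)

count = conn.execute("SELECT count(*) FROM t").fetchone()[0]
print(count)   # 1000
```

The win comes from paying the commit overhead once per batch instead of once per record, which is also a large part of why autocommit-per-INSERT is slow.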
On Apr 18, 2006, at 4:03 PM, Kynn Jones wrote:
I have a Perl script that is supposed to make a large number of
inserts in a PostgreSQL database. Performing individual inserts
with SQL's INSERT command is too slow, however, I can use a
"COPY ... from stdin" approach that is fast enough. B