First of all, you should try to use the database's native tools for
loading tables this size...
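
For SQL Server, for instance, the usual native loader is the bcp utility.
Just as an illustration (the table, file, and server names below are made
up, and it assumes tab-delimited character data and Windows
authentication), you could drive it from Perl like this:

use strict;
use warnings;

# Hypothetical sketch: bulk-load a tab-delimited file with SQL Server's
# bcp utility. Table, file and server names are placeholders.
my @cmd = (
    'bcp', 'mydb.dbo.big_table', 'in', 'data.txt',
    '-S', 'myserver',   # server to connect to
    '-T',               # trusted (Windows) authentication
    '-c',               # character data
    "-t\t",             # tab as the field terminator
);
system(@cmd) == 0 or die "bcp failed with status $?";

For Access there is no bcp; there you would look at its built-in import
facilities instead.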

If it has to be Perl, which does happen sometimes, you should definitely
switch off autocommit. Try doing a commit every 50,000 lines or even every
100,000 lines (ask your friendly DBA how big the transaction log and
rollback segment are, and whether they will support the 100k or whether
50k or 25k would be better); the bigger the block, the less the overhead.
Also ask your DBA to consider table locks and so on when doing an insert
this big...
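
Something along these lines, assuming DBI with DBD::ODBC rather than the
Win32::ODBC module you are using now (the DSN, credentials, table and
column names are made up, and the commit interval is the 50k mentioned
above):

use strict;
use warnings;
use DBI;

# Sketch only: turn off AutoCommit and commit in big batches.
my $dbh = DBI->connect('dbi:ODBC:MyDSN', 'user', 'password',
                       { RaiseError => 1, AutoCommit => 0 });

my $batch_size = 50_000;   # check with your DBA what the log can handle
my $count      = 0;

open my $fh, '<', 'data.txt' or die "Cannot open data.txt: $!";
while (my $line = <$fh>) {
    chomp $line;
    my ($id, $name, $value) = split /\t/, $line;
    $dbh->do('INSERT INTO big_table (id, name, value) VALUES (?, ?, ?)',
             undef, $id, $name, $value);
    $dbh->commit if ++$count % $batch_size == 0;   # one commit per batch
}
close $fh;
$dbh->commit;      # commit the final partial batch
$dbh->disconnect;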

Then, if at all possible, you should prepare the insert statement so that
you only have to supply the variables before doing the execute; this will
save a lot of overhead.
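
In DBI terms that means calling prepare() once outside the loop and
execute() with the bind values inside it. Continuing from the connection
and open file handle in the sketch above (same made-up table and columns;
the do() in the previous snippet re-prepares the statement for every row,
which this avoids), the insert part becomes:

# Prepare once, execute many times.
my $sth = $dbh->prepare(
    'INSERT INTO big_table (id, name, value) VALUES (?, ?, ?)'
);

my $count = 0;
while (my $line = <$fh>) {
    chomp $line;
    my ($id, $name, $value) = split /\t/, $line;
    $sth->execute($id, $name, $value);          # just bind and go
    $dbh->commit if ++$count % 50_000 == 0;     # keep the batched commits
}
$dbh->commit;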

Then of course, if you have the memory, try to hold as much of the data
in memory as you can while the commit is being executed. You might be able
to use two threads: one to read a chunk from the file and one to push the
rows into the database while the other is reading the next chunk.
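
One way to sketch that in Perl is with ithreads and Thread::Queue: a
reader thread fills a queue while a writer thread drains it into the
database. This is only an outline (DSN, table and delimiter are again made
up), and note that the DBI handle is created inside the writer thread,
because DBI handles should not be shared between threads:

use strict;
use warnings;
use threads;
use Thread::Queue;
use DBI;

my $queue = Thread::Queue->new;

# Writer thread: inserts whatever the reader has queued up.
my $writer = threads->create(sub {
    my $dbh = DBI->connect('dbi:ODBC:MyDSN', 'user', 'password',
                           { RaiseError => 1, AutoCommit => 0 });
    my $sth = $dbh->prepare(
        'INSERT INTO big_table (id, name, value) VALUES (?, ?, ?)'
    );
    my $count = 0;
    while (defined(my $line = $queue->dequeue)) {
        my ($id, $name, $value) = split /\t/, $line;
        $sth->execute($id, $name, $value);
        $dbh->commit if ++$count % 50_000 == 0;
    }
    $dbh->commit;
    $dbh->disconnect;
});

# Main (reader) thread: read the file and feed the queue.
open my $fh, '<', 'data.txt' or die "Cannot open data.txt: $!";
while (my $line = <$fh>) {
    chomp $line;
    $queue->enqueue($line);
}
close $fh;
$queue->enqueue(undef);   # undef tells the writer there is no more work
$writer->join;

If memory is tight you would want to cap how far the reader can get ahead
of the writer, but that is the general shape.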

Regards,

Rob


On 9/22/07, Luke <[EMAIL PROTECTED]> wrote:
>
> Hello,
> I am looking for the proper, fastest and most reasonable way to insert
> data from a pretty big file (~1,000,000 lines) into a database. I am
> using the Win32::ODBC (ActiveState Perl) module to connect to an
> Access/MSSQL database and inserting line after line. I was wondering if
> there is a better way to do it...
> Maybe create a hash with part of the data (maybe all of it - what are
> the limitations?)
> What is another way to do it instead of an 'INSERT INTO...' statement
> after reading each line?
>
> Thanks for any help in advance...
>
> Luke
>
>
