You don't indicate how you're executing the copy. If you're using a
script/program, just add your own counter as the first field in the copy. I
do this 'all the time' from Perl scripts, and it works fine. If you're
using psql, I haven't a clue unless you massage the input data before doing
the COPY.
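For illustration, a minimal sketch of the script approach - the table and data here are invented, and the counter is just whatever running number your script emits as the first field (COPY's default delimiter is tab):

    CREATE TABLE obs (line_no INTEGER, star TEXT, mag FLOAT);

    -- the loading script writes its own counter as the first column
    COPY obs FROM stdin;
    1	Vega	0.03
    2	Deneb	1.25
    \.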
desc is a keyword - it's the DESC-ending in ORDER BY ... DESC, so PostgreSQL
won't accept it as a bare identifier.
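If the errors are from a column actually named desc, quoting the identifier is the usual workaround - a made-up example:

    CREATE TABLE items (id INTEGER, "desc" TEXT);

    SELECT * FROM items ORDER BY "desc";   -- quoted: sorts by the column
    SELECT * FROM items ORDER BY id DESC;  -- bare DESC: descending order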
Robert Creager
StorageTek
INFORMATION made POWERFUL
> -----Original Message-----
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
>
> I have 7.1
>
> Can someone take a look at the following
> and tell me why I'm getting errors?
> I'm completely baffled
I'm creating a script which will reclaim sequence numbers in a table by
'packing' the existing sequence numbers. My question is: if I lock the
table in ACCESS EXCLUSIVE mode, and an insert into that table occurs after
the lock, will the insert be blocked before or after the nextval is chosen?
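For context, the packing transaction would look roughly like this - the table and sequence names are hypothetical, and note the UPDATE renumbers the rows in no particular order:

    BEGIN;
    LOCK TABLE jobs IN ACCESS EXCLUSIVE MODE;
    -- renumber the existing rows from a scratch sequence starting at 1
    CREATE SEQUENCE pack_seq;
    UPDATE jobs SET id = nextval('pack_seq');
    -- resync the table's own sequence to the new maximum
    SELECT setval('jobs_id_seq', (SELECT max(id) FROM jobs));
    COMMIT;
    DROP SEQUENCE pack_seq;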
Hey all,
I have 3 tables - A refers to B and C, and A has ON DELETE CASCADE for
referring columns. I'm trying to delete from B, and through the CASCADE to
A's BEFORE DELETE TRIGGER, SELECT a value from B, and then UPDATE C. The
problem is that through this path, when I SELECT from B, the SELECT
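A stripped-down version of the schema, in case it helps - all the names are invented, but the path is the one described above:

    CREATE TABLE b (id INTEGER PRIMARY KEY, weight INTEGER);
    CREATE TABLE c (id INTEGER PRIMARY KEY, total INTEGER);
    CREATE TABLE a (
        b_id INTEGER REFERENCES b (id) ON DELETE CASCADE,
        c_id INTEGER REFERENCES c (id) ON DELETE CASCADE
    );

    CREATE FUNCTION a_before_del() RETURNS opaque AS '
    DECLARE
        w INTEGER;
    BEGIN
        -- read a value back from B for the row being cascaded away
        SELECT weight INTO w FROM b WHERE id = OLD.b_id;
        UPDATE c SET total = total - w WHERE id = OLD.c_id;
        RETURN OLD;
    END;
    ' LANGUAGE 'plpgsql';

    CREATE TRIGGER a_before_del BEFORE DELETE ON a
        FOR EACH ROW EXECUTE PROCEDURE a_before_del();

    DELETE FROM b WHERE id = 1;  -- the cascade to A fires the trigger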
Tom believes there may be a memory leak, which would be causing the
(strangely enough) memory problem. Didn't think about reducing the import
size. What I might try in that case would be to re-connect to the db
periodically, rather than splitting the file. The problem becomes
unmanageable after
If the file is truly CSV (comma separated values), you might want to change
DELIMITERS '\t' to DELIMITERS ','... Otherwise, include a couple of lines
of data...
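With the 7.1 syntax and a made-up file path, that would be:

    COPY expafh FROM '/tmp/expafh.csv' USING DELIMITERS ',';

Keep in mind plain DELIMITERS does no quote handling - if the file has quoted fields with embedded commas, you'll need to massage it into a clean single-character-delimited form first.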
Robert Creager
Senior Software Engineer
Client Server Library
303.673.2365 V
303.661.5379 F
888.912.4458 P
StorageTek
INFORMATION made POWERFUL
I think this is a question regarding the backend, but...
I'm in the process of changing 1 large table (column-wise) into 6 smaller
tables, and ran into a situation. I'm using PostgreSQL 7.1beta5, the Pg
Perl module as included, Perl 5.6, and Solaris 2.6 on an Ultra 5.
The new setup is 6 tables, the 'main' table
I just joined this list, so pardon if this has been suggested.
Have you tried 'COPY expafh FROM stdin', rather than inserting each record?
I'm managing a 2.5 million record import, creating a btree index on two
columns, and then vacuuming the db in 36 minutes (on an Ultra 5 - similar to
an AMD K6
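In outline, with hypothetical column and index names:

    COPY expafh FROM stdin;
    ...tab-separated data rows, terminated by \. ...
    CREATE INDEX expafh_idx ON expafh (obs_date, star_id);
    VACUUM ANALYZE expafh;

The one COPY replaces the per-record INSERTs, and the index build and vacuum happen once, after the load.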