The list approach for partial restore is also useful, thank you.
On another note, I used to take full backups (entire database), but I switched
to a table-by-table scheme to make it more VCS friendly, i.e. so that I only
check into GitHub the dumps of the tables that have actually changed.
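Roughly, the scheme is something like this (table and database names below are
just placeholders):

    # one plain-text dump per table, kept under version control
    pg_dump --table=customers --file=dumps/customers.sql mydb
    pg_dump --table=orders    --file=dumps/orders.sql    mydb

    # git only records new versions for the dump files that actually changed
    git add dumps/
    git commit -m "refresh table dumps"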
Seems that would be easier and less error-prone. Thanks,
--
I was dumping each table to a separate file so I could pick and choose when
restoring. However, it seems this was not a great idea, since two of my tables
happened to reference each other via FOREIGN KEYs, and I am not able to
restore them. Is there a way to do this without manually merging the dump
files?
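For reference, the "list approach" mentioned in the reply above can look
roughly like this (file and database names are placeholders):

    pg_dump -Fc -f mydb.dump mydb          # one custom-format dump of the whole database
    pg_restore -l mydb.dump > mydb.list    # write out its table of contents
    # edit mydb.list: put a ';' in front of the entries you do not want restored
    pg_restore -L mydb.list -d mydb mydb.dump

Because everything comes from a single archive, pg_restore loads the table data
before it adds the FOREIGN KEY constraints, so two tables that reference each
other restore cleanly.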
Yes, csvkit is what I decided to go with. Thank you all!
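For reference, the "row is too big" error in this thread means a single row no
longer fits on one 8 kB heap page, so the usual fix is to spread the columns
over two or more tables. The exact csvkit invocation used here isn't shown; one
way to do the split, using the 672-column file described later in the thread as
an example and made-up file and table names, is:

    csvcut -c 1-336   wide.csv > part1.csv    # first half of the columns
    csvcut -c 337-672 wide.csv > part2.csv    # second half

    # assuming tables part1 and part2 with matching columns already exist
    psql mydb -c "\copy part1 FROM 'part1.csv' WITH (FORMAT csv)"
    psql mydb -c "\copy part2 FROM 'part2.csv' WITH (FORMAT csv)"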
--
Yes, the delimiter was indeed ",". I fixed my original post; it seems I
carelessly copy/pasted from Excel.
--
BTW, we have PostgreSQL 9.5 running on Ubuntu.
--
I am piggy-backing on this thread because I have the same issue. I need to
import a CSV file that is 672 columns wide, where each column consists of 12
alphanumeric characters, such as:
SA03ARE1015DSA03ARE1S15NSB03ARE1015D ...
356412 275812 43106 ...
I am aware t