This problem concerns bulk loading of data. I use the following scripts to take data from a live DB and load it into a testing DB that has the current version of the schema:

# SCRIPT 1
pg_dump blogbog -a -D -f blogbog_data.sql                  # dump live data only, as INSERTs with column names
dropdb blogbogtemp
createdb blogbogtemp
psql blogbogtemp -f /www/htdocs/mrnaz.com/sql_tables.sql   # build a scratch DB from the current schema
pg_dump blogbogtemp -D -f blogbog_tables.sql               # dump the current schema (the scratch DB has no data)
cp blogbog_tables.sql blogbog_constraints.sql              # second copy, to be cut down to just the constraints


I edit the blogbog_tables.sql file to remove the constraints, and the blogbog_constraints.sql file to remove the tables, so that the constraints can be applied after the data has been loaded. I then run the following script:


# SCRIPT 2
dropdb blogbogdev
createdb blogbogdev
psql blogbogdev -f ./blogbog_tables.sql > ./blogbog_tables_inserted.log          # create the tables, no constraints
psql blogbogdev -f ./blogbog_data.sql > ./blogbog_data_inserted.log              # load the live data
psql blogbogdev -f ./blogbog_constraints.sql > ./blogbog_constraints_applied.log # apply the constraints last


Somewhere in the data load (the blogbog_data.sql line of script 2) there is a failure, as no data appears in the blogbogdev database. The likely cause is a column somewhere in the live data that has since been dropped from the schema: the inserts into that table fail, which in turn makes the inserts into every table referring to it fail, and so on cascading down the reference paths.
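
As a quick sanity check on that theory, one could compare row counts between the live and dev databases; a minimal sketch, where "posts" stands in for any real table name:

# compare a table's row count in the live and dev databases
# ("posts" is a hypothetical table name; substitute a real one)
psql blogbog -t -c "SELECT count(*) FROM posts"
psql blogbogdev -t -c "SELECT count(*) FROM posts"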

I really would prefer psql to halt on error instead of just continuing to plow right ahead, but IIRC there was a discussion about this and it was decided that continuing was the best behavior.
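
For what it's worth, a sketch of one way to get that behaviour anyway, assuming the psql version in use supports the ON_ERROR_STOP variable:

# stop at the first error instead of continuing; psql then exits with a
# non-zero status (assumes this psql version supports ON_ERROR_STOP)
psql blogbogdev -v ON_ERROR_STOP=1 -f ./blogbog_data.sql > ./blogbog_data_inserted.log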

I have grepped the .log files that the script produces for "ERROR", but there are none. How would one go about finding where the error in an SQL script is?
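
One possible explanation for the empty grep: psql writes error messages to stderr, while the plain ">" redirection in script 2 only captures stdout, so errors would never reach the .log files. A minimal sketch of capturing both streams and then searching them:

# capture stderr along with stdout so errors land in the log file,
# then search it case-insensitively, printing line numbers
psql blogbogdev -f ./blogbog_data.sql > ./blogbog_data_inserted.log 2>&1
grep -in error ./blogbog_data_inserted.log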
