On Feb 19, 9:23 am, Thomas Guettler <[EMAIL PROTECTED]> wrote:
> Yes, you can use "pg_dump production ... | psql testdb", but
> this can lead to deadlocks if you call this from a Python script
> which is in the middle of a transaction. The Python script locks
> a table, so that psql can't write to it.
Hrm? Deadlocks where? Have you considered a cooperative user lock? Are
you just copying data, i.e. no DDL or indexes? What is the script doing?
Updating a table with unique indexes?

> I don't think calling pg_dump and psql/pg_restore is faster.

Normally it will be. I've heard people cite COPY rates of about a
million records per second into nicely configured systems. However, if
psycopg2's COPY support is implemented in C, I'd imagine it could
achieve similar speeds: psql and psycopg2 are both libpq-based, so they
are bound to have similar capabilities, assuming interpreted Python code
is avoided when feeding the data to libpq.

> I know, but COPY is much faster.

Yessir.

--
http://mail.python.org/mailman/listinfo/python-list
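On the cooperative user lock idea: Postgres provides advisory locks
(pg_advisory_lock / pg_advisory_unlock) that both the Python script and a
wrapper around pg_dump/psql could take before touching the tables. A minimal
sketch with a psycopg2-style connection; the key value 42, the `conn` object,
and the `work` callback are all hypothetical examples, not anything from the
original thread:

```python
# Cooperative serialization via Postgres advisory locks (sketch).
# Advisory locks are purely application-defined: both sides must agree
# to take the same key before writing, or the lock protects nothing.

ADVISORY_KEY = 42  # arbitrary example key; pick one per protected resource


def lock_statements(key):
    """Build the acquire/release SQL for an advisory lock on `key`."""
    return ("SELECT pg_advisory_lock(%d)" % key,
            "SELECT pg_advisory_unlock(%d)" % key)


def run_locked(conn, key, work):
    """Run work(cursor) while holding the advisory lock identified by key.

    `conn` is assumed to be a DB-API connection (e.g. from psycopg2).
    pg_advisory_lock blocks until any other session holding the same
    key releases it, so the dump and the script never overlap.
    """
    acquire, release = lock_statements(key)
    cur = conn.cursor()
    cur.execute(acquire)
    try:
        work(cur)
    finally:
        cur.execute(release)
        cur.close()
```

A shell wrapper around pg_dump would take the same key (e.g. via
`psql -c "SELECT pg_advisory_lock(42)"` in one session) before dumping.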
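For reference, psycopg2 does expose COPY from Python via cursor.copy_expert
(and copy_from), which streams a file-like object through libpq's COPY
protocol. A rough sketch, assuming a hypothetical table name and connection;
the serialization below is deliberately naive (no escaping of tabs, newlines,
or NULLs) and is for illustration only:

```python
import io


def rows_to_copy_buffer(rows):
    """Serialize rows into the tab-separated text format COPY expects.

    Naive: assumes no column value contains a tab, newline, backslash,
    or NULL. Real code would escape these per the COPY text format.
    """
    buf = io.StringIO()
    for row in rows:
        buf.write("\t".join(str(col) for col in row) + "\n")
    buf.seek(0)
    return buf


def bulk_load(conn, table, rows):
    """Bulk-load `rows` into `table` using COPY FROM STDIN.

    `conn` is assumed to be a psycopg2 connection. COPY feeds the whole
    buffer through libpq in one stream, which is what makes it so much
    faster than row-by-row INSERTs.
    """
    cur = conn.cursor()
    cur.copy_expert("COPY %s FROM STDIN" % table, rows_to_copy_buffer(rows))
    conn.commit()
    cur.close()
```

Since the per-row work happens inside psycopg2's C extension and libpq, the
interpreted-Python overhead per row is small, supporting the point above.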