On 9/4/24 03:48, Sam Son wrote:
Hi Adrian, Muhammad,

Thanks for the quick response.

For now I cannot make changes in the old-version DB, since it is deployed remotely and I don't have any access. And it has to be done from multiple servers.

As a workaround I tried two solutions.

Both of these depend on the plpythonu functions running under plpython3u, in other words on them being Python 3 compatible. Have you verified that?


*Solution 1:*

After downloading and extracting the dump, convert the pgdump file to an editable SQL file.

*    pg_restore -f out_dump.sql dump.pgdump*

Replace all the plpythonu references with plpython3u.

Restore using the SQL file.

*    sudo -H -u postgres psql -p 5433 -d db_name <  out_dump.sql*
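A minimal sketch of solution 1 as a shell script, using the filenames, port, and database name from the commands above. The pg_restore/psql steps need the real dump and a live server, so they are shown as comments; the sed replacement is demonstrated on a sample line instead. Note that plpython3u does not contain the substring plpythonu, so rerunning the substitution is harmless:

```shell
# Server-dependent steps (need the real dump and target server):
#
#   pg_restore -f out_dump.sql dump.pgdump
#   sed -i 's/plpythonu/plpython3u/g' out_dump.sql
#   sudo -H -u postgres psql -p 5433 -d db_name < out_dump.sql

# Demonstrate the replacement on a sample line written to a local file
# (out_dump_sample.sql is a hypothetical stand-in for the real dump).
printf 'CREATE FUNCTION f() RETURNS int LANGUAGE plpythonu AS $$return 1$$;\n' > out_dump_sample.sql
sed -i 's/plpythonu/plpython3u/g' out_dump_sample.sql
cat out_dump_sample.sql
```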

I would suggest working on the schema portion separate from the data:

pg_restore -s -f out_dump_schema.sql dump.pgdump

Do your search and replace, restore it to the database, and then:

pg_restore -a ...  dump.pgdump

Where -a is data only.

In fact, if you have control of the pg_dump, break it into two parts:

pg_dump -s ...  (schema only)

pg_dump -a ...  (data only)



*Solution 2:*

After downloading and extracting the dump, get the list of items in the dump (schemas, tables, table data, indexes, functions, etc.).

*    pg_restore -l dump.pgdump > dump.txt*

Delete all the function references which have plpython3u.

I'm guessing you meant plpythonu above.
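A sketch of the solution 2 workflow. The -L (--use-list) restore step is not in the original message; it is added here as the standard pg_restore way to apply an edited list file. The server-dependent steps are shown as comments, and the list filtering is demonstrated on a sample list file (dump_sample.txt, with an approximate entry format, is a hypothetical stand-in):

```shell
# Server-dependent steps (need the real dump and target server):
#
#   pg_restore -l dump.pgdump > dump.txt
#   sed -i '/plpythonu/d' dump.txt          # drop the plpythonu entries
#   pg_restore -L dump.txt -p 5433 -d db_name dump.pgdump

# Demonstrate the filtering: lines mentioning plpythonu are removed,
# everything else in the list is kept.
printf '101; 1255 16384 FUNCTION public f() plpythonu postgres\n102; 1259 16385 TABLE public t postgres\n' > dump_sample.txt
sed -i '/plpythonu/d' dump_sample.txt
cat dump_sample.txt
```

Items left in the list file are restored in the order given, so deleting (or commenting out with a leading semicolon) only the unwanted function entries leaves the rest of the restore intact.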


*Question:*

Our database size is 500GB.

Will we see any performance impact with solution 1, since solution 1 loads an SQL file while solution 2 uses pg_restore directly?

Kindly recommend which to choose: solution 1, solution 2, or any other workaround for the restore.

Personally I would go with solution 1 with the modifications I suggested.



Thanks,
Samson G



--
Adrian Klaver
adrian.kla...@aklaver.com
