> On Dec 4, 2018, at 11:13 PM, Rene Romero Benavides <rene.romer...@gmail.com> wrote:
>
> I tend to believe that a backup (pg_dump) in custom format (-F c) using
> multiple jobs (parallel) -> restore (pg_restore) also with multiple
> concurrent jobs would be better.
>
>> On Tue., Dec 4, 2018 at 21:14, Rhys A.D. Stewart <rhys.stew...@gmail.com> wrote:
>>
>> Greetings Folks,
>>
>> I have a relatively large table (100m rows) that I want to move to a
>> new box with more resources. The table isn't doing anything, i.e. it's
>> not being updated or read from. Which approach would be faster to move
>> the data over:
>>
>> a). Use postgres_fdw and do "CREATE TABLE local_table AS SELECT * FROM foreign_table".
>> b). Set up logical replication between the two servers.
>>
>> Regards,
>>
>> Rhys
>> Peace & Love|Live Long & Prosper
>
> --
> Genius is 1% inspiration and 99% perspiration.
> Thomas Alva Edison
> http://pglearn.blogspot.mx/

Let's compromise. Copy out as described. Tell the auditors where the file is. Skip the copy in. If you truly don't need the data online going forward, this might actually pass muster.
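
For reference, the dump/restore route might look roughly like the following. One correction to the suggestion above: pg_dump's -j option requires the directory format (-F d), not the custom format; -F c only lets pg_restore parallelize. Host, database, and table names here are placeholders.

    # On the old box: dump just the one table (directory format, required for -j)
    pg_dump -F d -j 4 -t big_table -f /tmp/big_table.dump mydb

    # On the new box: parallel restore
    pg_restore -j 4 -d mydb /tmp/big_table.dump

With a single table the dump side gains little from -j, since each table is dumped by one worker; the parallelism mostly pays off on restore, where index and constraint rebuilds can run concurrently.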
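
A minimal sketch of option (a), assuming the old box is reachable as 'old-box' and the table's columns are known; all object names and credentials below are placeholders:

    -- On the new box
    CREATE EXTENSION postgres_fdw;

    CREATE SERVER old_box FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'old-box', port '5432', dbname 'mydb');

    CREATE USER MAPPING FOR CURRENT_USER SERVER old_box
        OPTIONS (user 'rhys', password 'secret');

    -- IMPORT FOREIGN SCHEMA could pull the definition in instead of spelling it out
    CREATE FOREIGN TABLE big_table_remote (id bigint, payload text)
        SERVER old_box OPTIONS (schema_name 'public', table_name 'big_table');

    CREATE TABLE local_table AS SELECT * FROM big_table_remote;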
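
And option (b), logical replication. This needs wal_level = logical on the old box and an empty copy of the table's definition already created on the new one (names again placeholders):

    -- On the old box (publisher)
    CREATE PUBLICATION big_table_pub FOR TABLE big_table;

    -- On the new box (subscriber); the initial table sync copies the 100m rows
    CREATE SUBSCRIPTION big_table_sub
        CONNECTION 'host=old-box dbname=mydb user=rhys'
        PUBLICATION big_table_pub;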
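
The compromise, spelled out; the COPY ... FROM half is the part being skipped. The path is a placeholder, and server-side COPY to or from a file needs superuser or the pg_write_server_files / pg_read_server_files roles:

    -- On the old box: copy out, then point the auditors at the file
    COPY big_table TO '/archive/big_table.csv' WITH (FORMAT csv, HEADER);

    -- Skipped on the new box:
    -- COPY big_table FROM '/archive/big_table.csv' WITH (FORMAT csv, HEADER);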