On Thu, Oct 15, 2015 at 3:55 PM, Guillaume Lelarge <guilla...@lelarge.info> wrote:
> 2015-10-15 20:40 GMT+02:00 anj patnaik <patn...@gmail.com>:
>
>> It's a Linux machine with 8 CPUs. I don't have the other details.
>>
>> I get "archive member too large for tar format".
>>
>> Is there a recommended command/options when dealing with very large
>> tables, e.g. 150K rows where half of the rows hold about 22MB of
>> inserted data each?
>>
> Don't use tar format? I never understood the interest of that one. You
> would be better off using the custom format.
>
+1. Use -F c

--
Scott Mead
Sr. Architect
*OpenSCG*
PostgreSQL, Java & Linux Experts
http://openscg.com

>> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t -w > /tmp/dump
>> pg_dump: [archiver (db)] connection to database "postgres" failed:
>> fe_sendauth: no password supplied
>> -bash-4.1$ ./pg_dump -t RECORDER postgres --format=t > /tmp/dump
>> Password:
>> pg_dump: [tar archiver] archive member too large for tar format
>> -bash-4.1$ pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
>> -bash: pg_dumpall: command not found
>> -bash: tmpdb.out-2015101510.gz: Permission denied
>> -bash-4.1$ ./pg_dumpall | gzip > \tmp\db.out-`date +\%Y\%m\%d\%H`.gz
>>
>> Thank you so much for replying and accepting my post to this NG.
>>
>> On Thu, Oct 15, 2015 at 11:17 AM, Melvin Davidson <melvin6...@gmail.com>
>> wrote:
>>
>>> In addition to explaining exactly what you mean by "a long time" to
>>> pg_dump 77K rows of your table:
>>>
>>> What is your O/S and how much memory is on your system?
>>> How many CPUs are in your system?
>>> Also, what is your hard disk configuration?
>>> What other applications are running simultaneously with pg_dump?
>>> What is the value of shared_memory & maintenance_work_mem in
>>> postgresql.conf?
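[To expand on the -F c suggestion above: the tar format stores each table's data as a tar archive member, and tar's size field caps a member at 8 GB, which is what triggers "archive member too large for tar format". The custom format has no such limit and is compressed by default. A minimal sketch, reusing the RECORDER table and postgres database from the transcript above; the commands are assembled as strings and echoed so the sketch stands alone without a live server:]

```shell
# Custom-format (-F c) dump of one table; no tar member size limit,
# compressed by default. Run these directly against a real server.
DUMP="pg_dump -F c -t RECORDER -f /tmp/recorder.dump postgres"

# A custom-format dump is not plain SQL; restore it with pg_restore:
RESTORE="pg_restore -d postgres /tmp/recorder.dump"

echo "$DUMP"
echo "$RESTORE"
```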
>>>
>>> On Thu, Oct 15, 2015 at 11:04 AM, Adrian Klaver <
>>> adrian.kla...@aklaver.com> wrote:
>>>
>>>> On 10/14/2015 06:39 PM, anj patnaik wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> I recently downloaded Postgres 9.4 and I have a client application
>>>>> written in Tcl that inserts into the db and fetches records.
>>>>>
>>>>> The majority of the time, the app will connect to the server to do
>>>>> insert/fetch.
>>>>>
>>>>> For occasional use, we want to remove the requirement to have a
>>>>> server db and just have the application retrieve data from a local
>>>>> file.
>>>>>
>>>>> I know I can use pg_dump to export the tables. The questions are:
>>>>>
>>>>> 1) Is there an in-memory or file-based db instance I can create that
>>>>> is loaded from the dump file? That way the app code doesn't have to
>>>>> change.
>>>>>
>>>> No.
>>>>
>>>>> 2) Does pg support an embedded db?
>>>>>
>>>> No.
>>>>
>>>>> 3) Or is my best option to convert the dump to SQLite, then import
>>>>> the SQLite file and have the app read that embedded db?
>>>>>
>>>> SQLite tends to follow Postgres conventions, so you might be able to
>>>> use the pg_dump output directly if you use --inserts or
>>>> --column-inserts:
>>>>
>>>> http://www.postgresql.org/docs/9.4/interactive/app-pgdump.html
>>>>
>>>>> Finally, I am noticing pg_dump takes a lot of time to create a dump
>>>>> of my table. Right now, the table has 77K rows. Are there any ways
>>>>> to create automated batch files to create dumps overnight and do so
>>>>> quickly?
>>>>>
>>>> Define "long time".
>>>>
>>>> What is the pg_dump command you are using?
>>>>
>>>> Sure, use a cron job.
>>>>
>>>>> Thanks for your inputs!
>>>>>
>>>>
>>>> --
>>>> Adrian Klaver
>>>> adrian.kla...@aklaver.com
>>>>
>>>>
>>>> --
>>>> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
>>>> To make changes to your subscription:
>>>> http://www.postgresql.org/mailpref/pgsql-general
>>>>
>>>
>>>
>>> --
>>> *Melvin Davidson*
>>> I reserve the right to fantasize. Whether or not you
>>> wish to share my fantasy is entirely up to you.
>>>
>>
>
>
> --
> Guillaume.
> http://blog.guillaume.lelarge.info
> http://www.dalibo.com
>
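[On the cron suggestion quoted above: the earlier `\tmp\db.out-...` attempt also shows why the "Permission denied" appeared; the shell strips the unquoted backslashes, so the redirection target collapses to the relative filename `tmpdb.out-...` in the current directory. A minimal sketch of a date-stamped nightly dump path using forward slashes; the output directory is illustrative, and the dump command itself is echoed rather than run so the sketch stands alone without a live server:]

```shell
# Build a date-stamped output path with forward slashes (not \tmp\...).
OUT="/tmp/db.out-$(date +%Y%m%d%H).gz"

# The actual nightly dump, e.g. from a crontab entry such as
#   0 2 * * * /usr/local/pgsql/bin/pg_dumpall | gzip > ...
# Echoed here instead of executed:
echo "pg_dumpall | gzip > $OUT"
```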