[GENERAL] autovacuum: 50% iowait for hours
Hello all,

I have a hard situation on my hands. My autovacuum does not seem to be able to get its job done.

The database is under active INSERTs/UPDATEs, and the CPU has been at approximately 50% iowait for the past 5 hours.

I've tried turning off autovacuum and the effect goes away; I turn it back on and it goes back to 50% iowait. My IO system is nothing special at all.

Besides turning autovacuum off and running vacuum by hand once in a while, what else can I do to get out of this situation? Below are some logs.

I'm seriously considering turning off autovacuum for good, but I'd like to get input concerning other approaches... I mean, if I don't turn it off, how can I be sure this will not happen again? We ship products with PG inside... I must be absolutely sure this will not ever happen at any of our customers. I'm a bit confused... sorry :) !

Joao

2010-05-13 20:55:00 WEST DEBUG: server process (PID 6197) exited with exit code 0
2010-05-13 20:55:00 WEST DEBUG: forked new backend, pid=6573 socket=6
2010-05-13 20:57:41 WEST ERROR: failed to re-find parent key in "timeslots_strs_var_ts_key"
2010-05-13 20:58:42 WEST LOG: autovacuum: processing database "postgres"
2010-05-13 20:59:35 WEST DEBUG: checkpoint starting
2010-05-13 20:59:35 WEST DEBUG: checkpoint complete; 0 transaction log file(s) added, 0 removed, 0 recycled
2010-05-13 20:59:42 WEST LOG: autovacuum: processing database "template1"
2010-05-13 21:00:00 WEST DEBUG: server process (PID 6573) exited with exit code 0
2010-05-13 21:00:00 WEST DEBUG: forked new backend, pid=6846 socket=6
2010-05-13 21:01:00 WEST LOG: autovacuum: processing database "egbert"
2010-05-13 21:01:00 WEST DEBUG: autovac: will VACUUM ANALYZE timeslots
2010-05-13 21:01:00 WEST DEBUG: vacuuming "public.timeslots"
2010-05-13 21:02:39 WEST DEBUG: index "timeslots_timestamp_index" now contains 3971156 row versions in 15338 pages
2010-05-13 21:02:39 WEST DETAIL: 351 index row versions were removed. 0 index pages have been deleted, 0 are currently reusable. CPU 0.67s/1.28u sec elapsed 71.06 sec.
2010-05-13 21:03:51 WEST DEBUG: index "timeslots_timeslot_index" now contains 3971159 row versions in 18724 pages
2010-05-13 21:03:51 WEST DETAIL: 351 index row versions were removed. 0 index pages have been deleted, 0 are currently reusable. CPU 0.86s/1.31u sec elapsed 71.65 sec.
2010-05-13 21:04:35 WEST DEBUG: checkpoint starting
2010-05-13 21:04:37 WEST DEBUG: checkpoint complete; 0 transaction log file(s) added, 0 removed, 0 recycled
2010-05-13 21:04:58 WEST DEBUG: index "timeslots_var_index" now contains 3971167 row versions in 23857 pages
2010-05-13 21:04:58 WEST DETAIL: 351 index row versions were removed. 0 index pages have been deleted, 0 are currently reusable. CPU 1.34s/1.36u sec elapsed 67.49 sec.
2010-05-13 21:05:00 WEST DEBUG: server process (PID 6846) exited with exit code 0
2010-05-13 21:05:01 WEST DEBUG: forked new backend, pid=7319 socket=6
2010-05-13 21:09:35 WEST DEBUG: checkpoint starting
2010-05-13 21:09:37 WEST DEBUG: checkpoint complete; 0 transaction log file(s) added, 0 removed, 0 recycled
2010-05-13 21:10:08 WEST DEBUG: server process (PID 7319) exited with exit code 0
2010-05-13 21:10:08 WEST DEBUG: forked new backend, pid=7845 socket=6
2010-05-13 21:14:35 WEST DEBUG: checkpoint starting
2010-05-13 21:14:37 WEST DEBUG: checkpoint complete; 0 transaction log file(s) added, 0 removed, 0 recycled
2010-05-13 21:15:00 WEST DEBUG: server process (PID 7845) exited with exit code 0
2010-05-13 21:15:00 WEST DEBUG: forked new backend, pid=8214 socket=6
2010-05-13 21:19:35 WEST DEBUG: checkpoint starting
2010-05-13 21:19:41 WEST DEBUG: checkpoint complete; 0 transaction log file(s) added, 0 removed, 0 recycled
2010-05-13 21:20:00 WEST DEBUG: server process (PID 8214) exited with exit code 0
2010-05-13 21:20:00 WEST DEBUG: forked new backend, pid=8521 socket=6
2010-05-13 21:24:35 WEST DEBUG: checkpoint starting
2010-05-13 21:24:38 WEST DEBUG: checkpoint complete; 0 transaction log file(s) added, 0 removed, 0 recycled
2010-05-13 21:25:00 WEST DEBUG: server process (PID 8521) exited with exit code 0
2010-05-13 21:25:00 WEST DEBUG: forked new backend, pid=8919 socket=6
2010-05-13 21:29:37 WEST DEBUG: checkpoint starting
2010-05-13 21:29:40 WEST DEBUG: checkpoint complete; 0 transaction log file(s) added, 0 removed, 0 recycled
2010-05-13 21:30:00 WEST DEBUG: server process (PID 8919) exited with exit code 0
2010-05-13 21:30:00 WEST DEBUG: forked new backend, pid=9239 socket=6
2010-05-13 21:34:35 WEST DEBUG: checkpoint starting
2010-05-13 21:34:38 WEST DEBUG: checkpoint complete; 0 transaction log file(s) added, 0 removed, 0 recycled
2010-05-13 21:35:00 WEST DEBUG: server process (PID 9239) exited with exit code 0
2010-05-13 21:35:00 WEST DEBUG: forked new backend, pid=9672
[GENERAL] text vs. varchar
Hello all,

I have a big database in which much information is stored in TEXT type columns (I did this initially because I did not want to limit the maximum size of the strings to be stored)... but...

... let's say I choose an upper limit (e.g. 200) for the string sizes and I start a fresh database with VARCHAR(200). What tradeoffs can I expect? Disk usage? Query execution times?

thx
Joao
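For reference: in PostgreSQL, text and varchar(n) share the same underlying storage, so disk usage and query times are essentially identical; varchar(200) only adds a length check on writes. A minimal sketch, with hypothetical table names, of the two equivalent ways to enforce such a limit:

-- varchar with a hard limit baked into the type
CREATE TABLE t_varchar (val varchar(200));

-- text plus a CHECK constraint: same storage, and the limit can
-- later be changed without altering the column type
CREATE TABLE t_text (val text CHECK (length(val) <= 200));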
Re: [GENERAL] speed up restore from dump
On Thu, 2008-10-30 at 13:08 -0700, Alan Hodgson wrote:
> On Thursday 30 October 2008, Joao Ferreira gmail
> > During restore:
> > # vmstat
> > procs memory--- ---swap-- -io -system-- cpu
> > r b swpd free buff cache si so bi bo in cs us sy id wa
> > 3 1 230204 4972 1352 110128 21 17 63 24 56 12 2 85 0
> > #
>
> Does that machine really have only 256MB of RAM? And it's over 200MB into
> swap?

well, see for yourself... (360 MB RAM, 524 MB swap). that's what it is... it's supposed to be somewhat of an embedded product...

#
# cat /proc/meminfo
MemTotal: 360392 kB
MemFree: 59548 kB
Buffers: 7392 kB
Cached: 62640 kB
SwapCached: 44724 kB
Active: 247892 kB
Inactive: 29936 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 360392 kB
LowFree: 59548 kB
SwapTotal: 524280 kB
SwapFree: 292532 kB
Dirty: 744 kB
Writeback: 0 kB
AnonPages: 190344 kB
Mapped: 42772 kB
Slab: 13176 kB
SReclaimable: 5116 kB
SUnreclaim: 8060 kB
PageTables: 2728 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
CommitLimit: 704476 kB
Committed_AS: 1035156 kB
VmallocTotal: 524280 kB
VmallocUsed: 4020 kB
VmallocChunk: 520164 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 4096 kB
#
# cat /proc/cpuinfo
processor : 0
vendor_id : CentaurHauls
cpu family : 6
model : 9
model name : VIA Nehemiah
stepping : 8
cpu MHz : 1002.309
cache size : 64 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu vme de pse tsc msr cx8 sep mtrr pge cmov pat mmx fxsr sse up rng rng_en ace ace_en
bogomips : 2009.04
clflush size : 32

> --
> Alan
Re: [GENERAL] perl-DBD-Pg package for CentOS 5?
On Fri, 2008-10-31 at 17:31 +0200, Devrim GÜNDÜZ wrote:
> Hi,

Have you considered installing directly from CPAN?

# perl -MCPAN -e 'install DBD::Pg;'

joao

> On Fri, 2008-10-31 at 09:20 -0400, Kevin Murphy wrote:
> > My life would be complete if it offered perl-DBD-Pg for CentOS 5!
>
> We had an up2date package, but it broke many apps inside RHEL/CentOS 5,
> so I removed EL-4 and EL-5 branches from SVN.
>
> If you want, you can grab Fedora 9 SRPM and rebuild it on your system --
> but it will probably be broken since it will try to grab some
> dependencies that RHEL/CentOS 5 does not have.
>
> Regards,
Re: [GENERAL] possible pg_dump bug
On Wed, 2009-01-28 at 09:09 -0800, David Miller wrote:
> pg_dump does not include the schema name on INSERT statements generated from
> pg_dump with the -d option when exporting data for a particular table using
> -t . in version 8.3. I believe this same bug exists in 8.4 but
> have not confirmed it. I believe pg_dump should have an option to retain
> schema information.

would --clean solve the problem?

Joao

> David Miller
> River Systems, Inc.
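One workaround if the generated INSERTs carry no schema qualification: set the search_path at the top of the restore session, so the unqualified statements resolve to the intended schema. A minimal sketch, with hypothetical schema and table names:

-- prepend this to the dump file, or run it in the same psql session
-- before replaying the INSERTs
SET search_path TO myschema;

-- unqualified statements from the dump now target myschema.mytable
INSERT INTO mytable (id, val) VALUES (1, 'example');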
Re: [GENERAL] Testing ... please reply
Coming in loud and clear!

joao

On Wed, 2009-04-22 at 13:21 -0400, Atul Chojar wrote:
> Could someone reply to this email? I am testing my subscription; joined over
> 2 months ago, but never get any response to questions
>
> Thanks!
> Atul
>
> -----Original Message-----
> From: pgsql-general-ow...@postgresql.org On Behalf Of Joshua D. Drake
> Sent: Wednesday, April 22, 2009 12:56 PM
> To: Alvaro Herrera
> Cc: S Arvind; pgsql-general@postgresql.org
> Subject: Re: [GENERAL] From 8.1 to 8.3
>
> On Wed, 2009-04-22 at 12:49 -0400, Alvaro Herrera wrote:
> > S Arvind escribió:
> > > Our company wants to move from 8.1 to 8.3 latest. In irc they told me to
> > > check the release notes for issues while upgrading. But there are lots of
> > > release notes. Can anyone tell some most noticeable change or
> > > place-of-error while upgrading?
> >
> > If you're too lazy to read them, we're too lazy to summarise them for
> > you ...
>
> And to actually be helpful, the number one issue people seem to run into
> is this one:
>
> Non-character data types are no longer automatically cast to
> TEXT (Peter, Tom)
>
> Previously, if a non-character value was supplied to an operator
> or function that requires text input, it was automatically cast
> to text, for most (though not all) built-in data types. This no
> longer happens: an explicit cast to text is now required for all
> non-character-string types. For example, these expressions
> formerly worked:
>
> substr(current_date, 1, 4)
> 23 LIKE '2%'
>
> but will now draw "function does not exist" and "operator does
> not exist" errors respectively. Use an explicit cast instead:
>
> substr(current_date::text, 1, 4)
> 23::text LIKE '2%'
>
> (Of course, you can use the more verbose CAST() syntax too.) The
> reason for the change is that these automatic casts too often
> caused surprising behavior. An example is that in previous
> releases, this expression was accepted but did not do what was
> expected:
>
> current_date < 2017-11-17
>
> This is actually comparing a date to an integer, which should be
> (and now is) rejected — but in the presence of automatic casts
> both sides were cast to text and a textual comparison was done,
> because the text < text operator was able to match the
> expression when no other < operator could.
>
> Types char(n) and varchar(n) still cast to text automatically.
> Also, automatic casting to text still works for inputs to the
> concatenation (||) operator, so long as at least one input is a
> character-string type.
>
> However Alvaro is right. You should read the entire incompatibilities
> section, and of course test.
>
> Sincerely,
>
> Joshua D. Drake
>
> --
> PostgreSQL - XMPP: jdr...@jabber.postgresql.org
> Consulting, Development, Support, Training
> 503-667-4564 - http://www.commandprompt.com/
> The PostgreSQL Company, serving since 1997
Re: [GENERAL] XML -> PG ?
On Wed, 2009-05-06 at 16:53 +0100, Joao Ferreira gmail wrote:
> hello,
>
> as the perl addict I am... I recommend checking this out:
>
> http://search.cpan.org/~cmungall/DBIx-DBStag/DBIx/DBStag/Cookbook.pm
>
> it's pretty flexible and allows you to specify to some extent just how
> the database structure is inferred from the XML...

maybe start here to get a faster grasp:

http://search.cpan.org/~cmungall/DBIx-DBStag/DBIx/DBStag.pm

j

> check it out
>
> Joao
>
> On Wed, 2009-05-06 at 11:31 -0400, Christophe wrote:
> > On May 6, 2009, at 10:47 AM, Gauthier, Dave wrote:
> > > Is there a way to read an XML file into a postgres DB? I'm thinking
> > > that it will create and relate whatever tables are necessary to
> > > reflect whatever's implied by the XML file structure.
> >
> > There's no built-in functionality that does what you describe,
> > although building such a thing would be very straight-forward. There
> > are many application-specific decisions you'd need to make (what level
> > of hierarchy in the XML file corresponds to a database/schema/table,
> > for example, and how to handle nested "fields" and missing ones).
Re: [GENERAL] text mode (modo texto)
On Thu, 2009-05-28 at 16:43 +0100, Grzegorz Jaśkiewicz wrote:
> On Thu, May 28, 2009 at 4:24 PM, wrote:
> > hi, sorry for my English
> > I need to copy a database from Windows to Linux. How can I save my
> > database from Windows with pg_dump, and where is the file?
> > And afterwards, how can I create the database on Linux without a graphic
> > environment, just text mode, with commands?
>
> yes, pg_dump is just the tool for it.
>
> when you run pg_dump, you decide where the file is going to be
> stored, by redirecting output to a file:
>
> pg_dump database > C:\tmp\foo.sql

take a look at:

a) the --clean option
and also
b) pg_dumpall

depending on what exactly you have at hand, these options may be helpful.

Joao

> --
> GJ
[GENERAL] list of databases in C ? libpq ?
Hello all,

I need to write an application in C to read the list of databases currently in the server... very much like "psql -l"... but I need it in C!

I have never used C to access PG before, and the libpq API seems a bit scary!

Is there anything, shipped with postgresql, other than libpq, that would make my life simpler?

thanks a lot
Joao
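libpq is the standard route here (PQconnectdb, PQexec, PQgetvalue), and the query itself is small. A sketch of roughly what "psql -l" asks the server for (running psql with -E echoes the exact internal queries, so this can be verified there first and then handed to PQexec):

-- database name, owner and encoding, roughly what "psql -l" prints
SELECT d.datname, u.usename AS owner,
       pg_encoding_to_char(d.encoding) AS encoding
FROM pg_database d
LEFT JOIN pg_user u ON d.datdba = u.usesysid
ORDER BY d.datname;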
Re: [GENERAL] autovacuum: 50% iowait for hours
Hello guys,

thx for your input. I consider your suggestions valid. We have hundreds or thousands of unreachable and unmaintained PG installations. I'm totally unable to experiment in each of them. The usage profile can range from 100 rows per hour to 1,000, 10,000, 50,000... sustained... for several days... or even forever... CPUs and IO subsystems also vary, from cheap IO to fast enterprise-grade hardware.

But I did try vacuum_cost_delay and the effect is as expected: IOWAIT goes down :) but vacuum time increases even more :( and it still does not leave the processor alone; autovacuum is still not able to finish its job... and I can't wait forever.

We are using pg 8.1.4, and yes, we a) reindex, b) vacuum full and c) reindex again, once a week up to once a month. This weekly/monthly maintenance script has been quite effective as a workaround for the "failed to re-find parent key" bug... we can eliminate heavy bloat just by running the script... but it also plays an important role in keeping the database correctly indexed (reindex) and responsive.

My experience with this scenario tells me (I may be wrong) that I don't really need autovacuum as long as I keep those periodic maintenance operations running (typically Saturdays or Sundays, during the night).

For several reasons I can _not_ upgrade pg. I must use 8.1.4 and just live with it, and learn to work around the issues it might bring me. Additionally, I cannot change the cheap storage we ship in some lower-end versions of our product.

So, I'm still considering turning off autovacuum. any thoughts?

I really appreciate the discussion. thx a lot.

Cheers
Joao

On Fri, 2010-05-14 at 06:47 -0400, Scott Mead wrote:
> On Thu, May 13, 2010 at 6:23 PM, Scott Marlowe wrote:
> > On Thu, May 13, 2010 at 4:05 PM, Joao Ferreira wrote:
> > > Hello all,
> > >
> > > I have a hard situation on my hands. My autovacuum does not seem to
> > > be able to get its job done;
> > >
> > > database is under active INSERTs/UPDATEs;
> > > CPU is at approx 50% iowait for the past 5 hours;
> > >
> > > I've tried turning off autovacuum and the effect goes away; I turn it
> > > back on and it goes back to 50% iowait; my IO system is nothing
> > > special at all;
> > >
> > > besides turning autovacuum off and running vacuum by hand once in a
> > > while, what else can I do to get out of this situation ?
> > >
> > > below some logs
> > >
> > > I'm seriously considering turning off autovacuum for good; but I'd
> > > like to get input concerning other approaches... I mean... if I don't
> > > turn it off, how can I be sure this will not happen again... we ship
> > > products with PG inside... I must be absolutely sure this will not
> > > ever happen at any of our customers. I'm a bit confused... sorry :) !
> >
> > Have you considered tuning autovacuum to use less IO so that it
> > has no serious impact on other running pg processes? it's pretty easy
> > to do, just don't go crazy (i.e. move autovacuum_vacuum_cost_delay
> > from 10 to 20 or 30 ms, not 2000ms)
>
> + 1 here, start with a 20ms delay, your vacuums may take a bit longer
> to run, but they'll have less impact on I/O.
>
> Just curious, what is your log_min_messages setting? I notice that you
> had 'DEBUG' in your logs, I'm guessing that you've just cranked up to
> DEBUG for your testing; make sure that you leave that at 'warning' or
> 'notice' for production, leaving those logs at DEBUG will also chew up
> I/O and get in the way of things like autovacuum.
>
> What version of Postgres are you using?
> The visibility map in 8.4
> should lower the amount of I/O that you're stuck with (in most cases)
> with vacuum. Although you'll still need a full table scan to avoid
> xid wrap, you should get away with only vacuuming changed blocks in
> the general case.
>
> --
> Scott Mead
> EnterpriseDB
> The Enterprise Postgres Company
> www.enterprisedb.com
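Since the poster is pinned to 8.1, one knob worth knowing: 8.1 has a pg_autovacuum system catalog for per-table autovacuum settings, so the cost delay can be raised for the one hot table instead of globally. A rough sketch (column order as per the 8.1 docs; verify with \d pg_autovacuum before relying on it, and note -1 means "use the system-wide default"):

-- one row per tuned table:
-- (vacrelid, enabled, vac_base_thresh, vac_scale_factor,
--  anl_base_thresh, anl_scale_factor, vac_cost_delay, vac_cost_limit)
INSERT INTO pg_autovacuum
VALUES ('public.timeslots'::regclass, true, 1000, 0.2, 500, 0.1, 30, -1);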
Re: [GENERAL] Facing Problem in Autovacuuming Daemon....
Hello,

On Tue, 2010-06-22 at 04:50 -0700, Deven wrote:
> Hi all,
>
> I am using the PostgreSQL database for our system storage and I am running
> the autovacuum daemon on my entire database. But one of the tables in my
> database never undergoes autovacuuming. I always need to do manual
> vacuuming on that table.

autovacuum can be tricky... the way autovacuum behaves on each table will depend very much on the INSERT/UPDATE profile of that specific table, so it's not necessarily a problem if a given table never gets vacuumed; maybe it does not need it.

can you be more specific on:

a) why do you say that that specific table does not get autovacuumed? what makes you say that?
b) is your filesystem experiencing bloat (bloat means: postgres seems to start eating up disk space like crazy after some days/weeks/months)? is it?
c) do the logs say things regarding max_fsm_pages?
d) what version of postgres is it?
e) have a closer look at the "autovacuum_*" configuration directives...

not an expert here... but these are important topics regarding autovacuum...

Cheers
Joao

> Can anyone kindly tell me the solution for the same problem. Do I need to do
> any configuration to come out of this problem?
>
> Waiting for reply...
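On point (a): from 8.2 onwards the stats collector records when each table was last (auto)vacuumed, which is more reliable than guessing. A quick check, assuming the stats collector is enabled (stats_row_level / track_counts, depending on version):

-- when did vacuum/autovacuum and analyze last touch each table?
SELECT relname, last_vacuum, last_autovacuum,
       last_analyze, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY relname;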
[GENERAL] running two instances of pg
Hello all,

we are using PostgreSQL in a situation where I think we could try to run two separate instances:

- one application can grow up to 10 GB of data; it needs 'frequent' vacuuming and reindexing (lots of INSERTs, UPDATEs and DELETEs per minute): we use it for 'near-real-time' application log storage; not a big problem if it runs at lower priority;

- one very small application with high responsiveness requirements (few updates, very small size on disk); it must run at high priority in order to prevent service failure to the end users;

So we are considering separating these two, in order to use 2 different pg instances. One of the instances will be reniced down in order to prevent user access failure during vacuuming and reindexing operations. (Currently, when we perform database maintenance (we are now at 2 GB and growing), several other user-related services become inoperative during the maintenance operation; but if we renice the postmaster, the users don't experience such a severe service outage.)

So... my questions are: is it a good approach to separate the 2 "worlds" into 2 different databases and 2 different postmasters? is it a good idea to "renice the postmaster"? what other measures could we take to prevent the postmaster of the big database from consuming so much CPU during maintenance operations?

thanks a lot for your feedback
Joao
[GENERAL] diagnostic, admin, maintenance tool
Hello all,

could you please recommend tools to make diagnostic, admin and maintenance work easier...

I imagine there are tools (maybe graphical, or browser based) that allow me to connect to postgres and receive diagnostic data and pointers/recommendations on how to solve specific problems or optimize some configuration parameters...

I'd like to receive your suggestions.

thanks in advance
Joao
[GENERAL] big database with very small dump !?
Hello all,

I'm finding it very strange that my pg takes 9 GB on disk but pg_dumpall produces a 250 MB dump. 'VACUUM FULL' was executed yesterday.

Is this normal? Should I be worried?

details below:
--
# pg_dumpall --oids --clean > pg_dumpall.sql
# ls -lh
total 232M
-rw-r--r-- 1 postgres postgres 231M Aug 11 15:46 pg_dumpall.sql
# du -sh /var/pgsql/data/
9.4G /var/pgsql/data
--

thx
joao
Re: [GENERAL] big database with very small dump !?
On Mon, 2008-08-11 at 12:43 -0700, Vlad Kosilov wrote:
> ./data/ you may want to exclude those. I find this query useful for
> something like this as well:
>
> select datname,pg_size_pretty(pg_database_size(oid)) from pg_database ;

Hello Vlad,

I ran your query and I got the 9 GB! I guess it should be related to index bloating, then. Do you agree?

thx
Joao

postgres=# select datname,pg_size_pretty(pg_database_size(oid)) from pg_database;
    datname    | pg_size_pretty
---------------+----------------
 postgres      | 3617 kB
 egbert        | 9585 MB
 asterisk      | 3993 kB
 turba         | 3673 kB
 edgereporting | 3617 kB
 template1     | 3617 kB
 template0     | 3537 kB
(7 rows)

postgres=#

> V.
>
> Joao Ferreira gmail wrote:
> > Hello all,
> >
> > I'm finding it very strange that my pg takes 9 GB on disk but
> > pg_dumpall produces a 250 MB dump. 'VACUUM FULL' was executed
> > yesterday.
> >
> > Is this normal? Should I be worried?
Re: [GENERAL] big database with very small dump !?
Hello Greg, Vlad, Scott and all,

thanks for the feedback. I forgot to mention that I execute REINDEX on all tables and indexes every week (right after executing VACUUM FULL). Is this enough to eliminate the possibility of "index bloat"?

and, yes, my database has some crazy indexes. I use these indexes, and I keep them REINDEXed to keep query execution time down. see below.

could these indexes be the real reason for taking up all that space?

thanks
joao

egbert=# \d timeslots;
       Table "public.timeslots"
  Column   |  Type   | Modifiers
-----------+---------+-----------
 str1      | text    |
 str2      | text    |
 ...
 str20     | text    |
 val1      | real    |
 ...
 val6      | real    |
 var       | text    |
 count     | integer |
 total     | real    |
 timeslot  | integer | not null
 timestamp | integer | not null
 tsws      | integer | not null
 tses      | integer | not null
Indexes:
    "timeslots_strs_var_ts_key" UNIQUE, btree (str1, str2, str3, str4, str5, str6, str7, str8, str9, str10, str11, str12, str13, str14, str15, str16, str17, str18, str19, str20, var, timeslot) CLUSTER
    "timeslots_timeslot_index" btree (timeslot)
    "timeslots_timestamp_index" btree ("timestamp")
    "timeslots_var_index" btree (var)

egbert=#

On Mon, 2008-08-11 at 12:45 -0400, Greg Smith wrote:
> On Mon, 11 Aug 2008, Joao Ferreira gmail wrote:
> > I'm finding it very strange that my pg takes 9 GB on disk but
> > pg_dumpall produces a 250 MB dump. 'VACUUM FULL' was executed
> > yesterday.
>
> If you've been running VACUUM FULL, it's probably so-called "index bloat".
> Try running the query at http://wiki.postgresql.org/wiki/Disk_Usage to
> figure out where all your space has gone inside the database.
>
> --
> * Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
Re: [GENERAL] big database with very small dump !?
On Mon, 2008-08-11 at 12:45 -0400, Greg Smith wrote:
> On Mon, 11 Aug 2008, Joao Ferreira gmail wrote:
> > I'm finding it very strange that my pg takes 9 GB on disk but
> > pg_dumpall produces a 250 MB dump. 'VACUUM FULL' was executed
> > yesterday.
>
> If you've been running VACUUM FULL, it's probably so-called "index bloat".
> Try running the query at http://wiki.postgresql.org/wiki/Disk_Usage to
> figure out where all your space has gone inside the database.

egbert=# SELECT nspname || '.' || relname AS "relation",
egbert-#     pg_size_pretty(pg_relation_size(nspname || '.' || relname)) AS "size"
egbert-#   FROM pg_class C
egbert-#   LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
egbert-#   WHERE nspname NOT IN ('pg_catalog', 'information_schema')
egbert-#     AND nspname !~ '^pg_toast'
egbert-#     AND pg_relation_size(nspname || '.' || relname) > 0
egbert-#   ORDER BY pg_relation_size(nspname || '.' || relname) DESC
egbert-#   LIMIT 20;

             relation             |  size
----------------------------------+---------
 public.timeslots_strs_var_ts_key | 5643 MB    # this is the UNIQUE index
 public.timeslots                 | 2660 MB    # this is the only table
 public.timeslots_timestamp_index | 583 MB     # this is an index
 public.timeslots_var_index       | 314 MB     # this is an index
 public.timeslots_timeslot_index  | 275 MB     # this is an index
(5 rows)

so it seems that the UNIQUE index is taking up more space than the data itself... still, I have 2660 MB of data but the dump is about 10x smaller!!!

any hints?

> --
> * Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
Re: [GENERAL] big database with very small dump !?
On Mon, 2008-08-11 at 10:58 -0600, Scott Marlowe wrote:
> It's likely you've got index bloat. If you reload a pg_dump of the
> database in question into another server how much space does that take
> up?

right. I just loaded the dump into a clean database and everything came down about 10 times...

--
NOW (injected dump into fresh pg):

             relation             |  size
----------------------------------+--------
 public.timeslots                 | 549 MB
 public.timeslots_strs_var_ts_key | 482 MB
 public.timeslots_var_index       | 59 MB
 public.timeslots_timeslot_index  | 37 MB
 public.timeslots_timestamp_index | 37 MB
(5 rows)

BEFORE:

             relation             |  size
----------------------------------+---------
 public.timeslots_strs_var_ts_key | 5643 MB
 public.timeslots                 | 2660 MB
 public.timeslots_timestamp_index | 583 MB
 public.timeslots_var_index       | 314 MB
 public.timeslots_timeslot_index  | 275 MB
--

I'm confused here: on the fresh database the whole set only takes 1.3 GB; on the original db, even after VACUUM FULL and REINDEX, it takes 9 GB.

can I really do anything about it? If I try CLUSTER, I'm guessing I'll choose the big index and forget about the smaller ones... is this right?

thanks
joao

> Look into using CLUSTER or REINDEX to fix the space usage.
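For reference, the table definition earlier in this thread already marks timeslots_strs_var_ts_key as the CLUSTER index, so the command is short. A sketch (as the follow-up in this thread notes, CLUSTER rewrites the table and rebuilds all of its indexes, not just the one clustered on, and it holds an exclusive lock while it runs):

-- rewrite the table in the physical order of the chosen index;
-- all other indexes on the table get rebuilt as part of the rewrite
CLUSTER timeslots_strs_var_ts_key ON timeslots;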
[GENERAL] automatic REINDEX-ing
Hello all,

[[[ while dealing with a disk size problem I realised my REINDEX cron script was not really being called every week :( so... ]]]

I executed REINDEX by hand and the disk occupation immediately dropped 6 GB...!!!

is there a way to configure postgres to automatically execute the needed REINDEXing (on indexes and tables) for a given database... something similar to autovacuum, I guess

thx
joao
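There is no autovacuum-style daemon for reindexing in these releases; the usual route is exactly the cron script that failed here. For reference, a sketch of the statements such a job would issue (table and index names taken from the earlier threads; REINDEX takes an exclusive lock, so schedule it off-peak):

-- rebuild every index on one table in a single command...
REINDEX TABLE timeslots;

-- ...or rebuild a single bloated index by name
REINDEX INDEX timeslots_strs_var_ts_key;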
Re: [GENERAL] automatic REINDEX-ing
On Tue, 2008-08-12 at 11:53 -0400, Tom Lane wrote:
> BTW, "more aggressive routine vacuuming" does NOT mean "use vacuum
> full".
> Vacuum full tends to make index bloat worse, not better.
>
> regards, tom lane

Ok. so what does it mean? I'm a bit lost here.

I'm currently executing VACUUM FULL _and_ REINDEX (tables & indexes) every week. Should I keep the REINDEX and drop VACUUM FULL? How do I iterate to a better approach?

thanks.
joao
Re: [GENERAL] big database with very small dump !? SOLVED
Hi guys,

I found the reason for this whole problem. Explanation: the vacuum/reindex cron scripts were not being executed. I executed the operations by hand and the values became normal.

thank you all for the fine discussion.

joao

On Tue, 2008-08-12 at 13:49 +0200, Tommy Gildseth wrote:
> Joao Ferreira gmail wrote:
> > On Mon, 2008-08-11 at 10:58 -0600, Scott Marlowe wrote:
> >
> > If I try cluster, I'm guessing I'll choose the big index and forget
> > about the smaller ones... is this right ?
>
> CLUSTER will sort out all the indexes, even though you're just
> clustering on one.
Re: [GENERAL] automatic REINDEX-ing
> Because VACUUM FULL needs to move stuff around in the table, which means it
> needs to mess around with the indexes (adding new entries). Ordinary
> VACUUM only needs to delete stuff, so it doesn't cause anywhere near as
> many problems.

so in the event that I really do end up running VACUUM FULL once in a while, it seems to me it is a good idea to follow it with REINDEX. Is this correct?

thx
joao

> Have a nice day,
[GENERAL] failed to re-find parent key in "..."
Hello all,

a few days ago I bumped into this:

-
# vacuumdb -f -z -a
vacuumdb: vacuuming database "postgres"
VACUUM
vacuumdb: vacuuming database "rtdata"
vacuumdb: vacuuming of database "rtdata" failed: ERROR: failed to re-find parent key in "timeslots_strs_var_ts_key"
-

We are using postgres 8.1.4 and I realise this has been fixed in 8.1.6, but...

...before updating all our servers I need to have an idea of the extent of the impact of this error. It seems to me that this error may cause the VACUUM, AUTOVACUUM and VACUUM FULL operations to fail partially or totally. Am I correct? Or does this only have impact on VACUUM FULL?

We run many servers with a database application that is very heavily updated. In this scenario, do I tell my boss that "updating postgres would be nice" or that "we _must_ update postgres"? What risks are we facing if we choose not to update postgresql?

thanks a lot for your advice.
joao
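For the immediate symptom (as opposed to the underlying 8.1.x bug), rebuilding the complained-about index usually lets vacuum complete again; the same approach is used as a workaround elsewhere in this archive. A sketch, using the names from the error message above:

-- rebuild just the index that vacuum stumbles over...
REINDEX INDEX timeslots_strs_var_ts_key;

-- ...or, if in doubt, rebuild everything on the table
REINDEX TABLE timeslots;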
[GENERAL] negative values for vacuum threshold check !!!
Hello all,

While debugging my autovacuum I increased the level of logging to "debug3" and got this:

# cat /var/pgsql/data/logfile | grep vac | egrep "mydb|mytable"
LOG: autovacuum: processing database "mydb"
DEBUG: mytbl: vac: 10409 (threshold 20), anl: -183366 (threshold 5)
LOG: autovacuum: processing database "mydb"
DEBUG: mytbl: vac: 10966 (threshold 20), anl: -182489 (threshold 5)
LOG: autovacuum: processing database "mydb"
DEBUG: mytbl: vac: 11545 (threshold 20), anl: -181592 (threshold 5)

Should I be worried about the negative values in 'anl'?

thx
joao
Re: [GENERAL] Restoring a database from a file system snapshot
> Any suggestions? Is my procedure correct? Would I need to also copy
> the transaction logs or something like that?

the 'by the book' procedure for this operation is to use

pg_dumpall > dump_file.sql

and later

psql -f dump_file.sql postgres

pg_dumpall gives you a transaction-aware dump.

I'm not sure what you'd do about that old snapshot, sorry.

joao
[GENERAL] upgrading from 8.1.4 to 8.3.3
Hello all,

I'm getting this error after installing pg 8.3.3 on a test system which had 8.1.4:

shell> su postgres -c "/usr/bin/postmaster -D /var/pgsql/data"
FATAL: database files are incompatible with server
DETAIL: The data directory was initialized by PostgreSQL version 8.1, which is not compatible with this version 8.3.3.

I guess I should take a pg_dumpall before upgrading and put the data back after the upgrade. Is this correct? Is there a more appropriate approach?

thanks
joao
Re: [GENERAL] Vacuum best practice: cronjob or autovacuum?
On Thu, 2008-08-28 at 19:53 +0800, Phoenix Kiula wrote:
> On our database of about 5GB we vacuum all of our 12 tables (only one
> is huge, all others have about 100,000 rows or so) every hour or so.

if you refer to manual VACUUM or VACUUM FULL, every hour is probably too much. You should aim your vacuum full at about once per week.

> But we also have autovacuum enabled. Is this okay? Do the two vacuum
> processes contradict each other, or add unnecessary load to the
> system?

read the manuals at www.postgresql.org; specifically read this section:

http://www.postgresql.org/docs/8.3/static/maintenance.html

you'll find that once in a while (start at once/week and build up or down from there) you can/should:

- vacuum full
- reindex your tables
- reindex your indexes

> The reason we introduced the cronjob we felt was that the autovacuum
> was not really doing its job.

how did you realise that?

turn off the cron job and wait a few days. In the meanwhile monitor your disk space occupation (du -sh /var/lib?/pgsql/base/). If you see that size growing while the total row count (select count(*) from whatever) isn't growing, you need external vacuums.

> I wonder if anyone can share some
> insight on whether these settings are good for a DB that is basically
> 24x7:

like someone said: it's not the 24x7. it's the: how many tuples get DELETEd or UPDATEd (for the case of autovacuum) in one day, for example. If you find that your db updates/deletes many tuples per hour...

> autovacuum = on
> autovacuum_vacuum_cost_delay = 20
> vacuum_cost_delay = 20
> autovacuum_naptime = 10
> stats_start_collector = on
> stats_row_level = on
> autovacuum_vacuum_threshold = 75
> autovacuum_analyze_threshold = 25
> autovacuum_analyze_scale_factor = 0.02
> autovacuum_vacuum_scale_factor = 0.01

these last two can be tricky: if your database is very big, then 2% of 'very big' is 'a lot of changes' before autovacuum even tries to vacuum. read the documentation about these two. you might want to consider using scale factors of 0 and adjusting both thresholds a bit; e.g.

autovacuum_vacuum_threshold = 2
autovacuum_analyze_threshold = 1

Autovacuum is something that you adjust to your needs.

Another tip: edit your postgresql.conf and bring the debug levels up to:

log_min_messages = debug3
log_min_error_statement = debug3

then just follow the logfile (location depends on distro; /var/log/postgresql; /var/pgsql/data/; whatever).

IMPORTANT: don't leave debug3 on forever: it is very verbose and _will_ eat up your disk in no time. You might want to do something like this:

tail -f logfile | grep vac

or

tail -f logfile | grep threshold

take your time :) autovacuum requires a bit of dedication but, in the end, it works fine (preventing disk space growth).

note also that VACUUM has nothing to do with REINDEXing, and REINDEXing also frees a considerable amount of disk space in certain cases.

very important: read the docs:
http://www.postgresql.org/docs/8.3/static/maintenance.html

In my case I have autovacuum with scale factors of 0 and naptime 600, plus a cron job to vacuum full and reindex everything once a week (during the night). It's working fine on a db of about 2 GB with, on average, 1 deletes a day and well above 20 INSERTs/UPDATEs per day.

cheers
joao

PS: I'm saying all this because I'm just going through this process myself and I think I'm on the right track. things are starting to work the way I want them to :)

> Thanks!
> I've read up some websites for this kind of tuning, but often
> the info is for older versions of DBs and some of the tweaks have
> changed since. I am on 8.2.9.
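On the "monitor your disk space occupation" advice above: this can also be done from inside the database, which is easier to cron than du. A sketch (the pg_total_relation_size / pg_size_pretty functions exist from 8.1 onwards; table names will be whatever your database contains):

-- the ten biggest tables, with indexes and TOAST data included
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;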
Re: [GENERAL] Vacuum best practice: cronjob or autovacuum?
Hello,

I'd like to apologise for my short knowledge of VACUUM FULL and REINDEX. I'm just stating what I do in my case; I do not know if it is a corner case or not.

I've been dealing with this specific application, which is very demanding on Postgres, for about 2 years. When autovacuum was introduced, I kept the weekly VACUUM FULL because it effectively brings disk occupation down, despite it growing back after a few hours. It's just reassuring to me to know that at least one of the vacuums is working when I see the weekly drop in disk occupation.

On Thu, 2008-08-28 at 11:27 -0400, Bill Moran wrote:
> In response to Joao Ferreira gmail <[EMAIL PROTECTED]>:
> > On Thu, 2008-08-28 at 19:53 +0800, Phoenix Kiula wrote:
> > > On our database of about 5GB we vacuum all of our 12 tables (only one
> > > is huge, all others have about 100,000 rows or so) every hour or so.
> >
> > if you refer to manual VACUUM or VACUUM FULL, every hour is probably too
> > much. You should aim your vacuum full at about once per week.
>
> This statement is ambiguous. Advice about VACUUM is _very_ different
> than advice about VACUUM FULL.
>
> Running a normal VACUUM once an hour may be exactly what you need, but
> you didn't give enough evidence one way or the other. Look into the
> various system catalog tables to see how much bloat your tables and
> indexes have and track that over time to see if autovac and/or your
> cron vacuum is keeping things in check.
>
> If your cron jobs are vacuuming enough, then autovac will simply
> take a few CPU cycles to realize that nothing needs to be done.
>
> > you'll find that once in a while (start at once/week and build up or
> > down from there) you can/should:
> >
> > - vacuum full
> > - reindex your tables
> > - reindex your indexes
>
> Whoah there. How on earth did you derive that from those documents?

I did not say I did. I just advised the original poster to read the docs. I'm not trying to say 'this is the way'. I'm trying to say 'read the docs; the docs are good'.

but, BTW, quoting http://www.postgresql.org/docs/8.3/static/routine-reindex.html :

- The potential for bloat in non-B-tree indexes has not been well characterized.

- It is a good idea to keep an eye on the index's physical size when using any non-B-tree index type. Also, for B-tree indexes a freshly-constructed index is somewhat faster to access than one that has been updated many times, because logically adjacent pages are usually also physically adjacent in a newly built index. (This consideration does not currently apply to non-B-tree indexes.) It might be worthwhile to reindex periodically just to improve access speed.

I did some tests (let's say about 50 INSERTs/UPDATEs per second and some thousands of DELETEs once in a while). Query execution time DROPs 10 times after REINDEXING and VACUUM. I've seen a 600 MB DROP in disk occupation just by reindexing, and an additional drop of about 500 MB by VACUUMing; this was on about 100 MB of useful data.

> That's a pretty lousy maintenance plan with lots of unneeded overhead,
> unless your database has a very edge-case access pattern.

!? Many people mention these 'edge-case access patterns' and 'corner cases'. I don't know if mine is such. My application executes patterns of varying INSERTs and UPDATEs, but it must overcome stress tests with about 100 UPDATEs mixed with INSERTs per second.
Additionally it DELETEs up to 1 records once every night. It's a 'near-realtime' log store which stores log information for 'you name it' logs in Linux based systems. It also removes old data once a night.

> VACUUM FULL really only needs done in extreme cases where massive data
> bloat is experienced, and not expected to happen again. If massive data
> bloat is routine, you're probably better off letting ordinary VACUUM
> maintain a consistent level of free space on tables.

well, I've seen massive data and index bloat in my application. In some cases REINDEXing and VACUUM FULL-ing was the only way to clear things up. but you're probably right. I'll give it a try.

thx

> REINDEXING seems nice at times, but I've yet to see any evidence...
[GENERAL] autovacuum settings
Hello all,

in which system tables can I find the effective run-time values of the autovacuum configuration parameters... naptime, thresholds, scale factors, etc.?

thx
joao
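For the record, the effective run-time values live in the pg_settings view (or can be read one at a time with SHOW). A quick sketch:

-- all autovacuum-related settings, their current values and where
-- each value came from (default, config file, ...)
SELECT name, setting, source
FROM pg_settings
WHERE name LIKE 'autovacuum%';

-- or one at a time
SHOW autovacuum_naptime;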
Re: [GENERAL] Changes for version 8.4
Is there a date for the release of 8.4?

joao

On Thu, 2008-09-04 at 10:09 -0400, Alvaro Herrera wrote:
> paul tilles wrote:
> > Where can I find a list of changes for Version 8.4 of postgres?
>
> It's not officially written anywhere. As a starting point you can look
> here:
> http://wiki.postgresql.org/wiki/Category:CommitFest
> Then look at each Commitfest:2008:xx page, and see the list of committed
> patches. Also, note that a certain number of patches have gone in
> without being listed there (most notably, a huge improvement in how
> EXISTS queries are handled).
>
> The definitive place, of course, is the CVS logs.
>
> --
> Alvaro Herrera http://www.CommandPrompt.com/
> PostgreSQL Replication, Consulting, Custom Development, 24x7 support
[GENERAL] about partitioning
Hello all,

my application is coming to a point at which 'partitioning' seems to be the solution for many problems:

- query speed up
- data elimination speed up

I'd like to get a feeling for it by talking to people who use partitioning, in general...

- good, bad,
- hard to manage, easy to manage,
- processing overhead during INSERT/UPDATE,
- stability/compatibility of pg_dump and restore operations,
- how many partitions would be reasonable for optimal read _and_ write speed;

thx
joao
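For readers new to the topic, the classic PostgreSQL recipe behind this discussion is constraint-exclusion partitioning over table inheritance. A minimal sketch with hypothetical table names (the CHECK ranges are epoch timestamps for 2008 and 2009):

-- parent table: holds no rows itself
CREATE TABLE measurements (
    t   integer NOT NULL,
    val real
);

-- one child per range, with a CHECK constraint the planner can use
-- to skip irrelevant partitions (requires constraint_exclusion = on)
CREATE TABLE measurements_2008 (
    CHECK (t >= 1199145600 AND t < 1230768000)
) INHERITS (measurements);

-- queries against the parent transparently cover all children, and
-- dropping old data becomes a cheap DROP TABLE of one child
SELECT count(*) FROM measurements WHERE t >= 1199145600;
DROP TABLE measurements_2008;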
Re: [GENERAL] about partitioning
Hello all,

still with partitioning... whether I use rules or triggers, is there a way for me _not_ to specify, field by field, all the fields I wish to be redirected to the child table...

as an example, instead of this:

-
create rule insert_t_1 AS ON INSERT TO teste
WHERE (t >= 1000 and t < 2000)
DO INSTEAD INSERT INTO t_1 VALUES (NEW.t, NEW.s1, NEW.s2, NEW.s3, NEW.s4, NEW.s5, NEW.s6, NEW.s7, NEW.s8);
-

something like this:

-
create rule insert_t_1 AS ON INSERT TO teste
WHERE (t >= 1000 and t < 2000)
DO INSTEAD INSERT INTO t_1 VALUES (__ALL__);
-

of course this assumes that the child table inherits all fields from the parent table _and_ has no extra fields, which is exactly my case.

any hints?

thx
j

On Sat, 2008-09-13 at 16:48 -0400, Robert Treat wrote:
> On Thursday 11 September 2008 07:47:00 Joao Ferreira gmail wrote:
> > Hello all,
> >
> > my application is coming to a point at which 'partitioning' seems to be
> > the solution for many problems:
> >
> > - query speed up
> > - data elimination speed up
> >
> > I'd like to get a feeling for it by talking to people who use
> > partitioning, in general...
> >
> > - good, bad,
>
> good :-)
>
> > - hard to manage, easy to manage,
>
> I think the upfront costs for managing a partitioning setup are higher with
> postgres than other systems, but there is nothing that you shouldn't be able
> to automate in a cron script (at which point management becomes easy), plus
> postgres gives you some interesting flexibility that is harder to find in
> other setups.
>
> > - processing overhead during INSERT/UPDATE,
>
> you can set up inserts to have relatively little overhead, but it requires
> more management/maintenance work up front. Updates within a partition also
> have relatively little extra overhead, especially if you put in a little
> application logic to figure out how to work on a partition directly. Updates
> where you are changing the partition key value are always more problematic
> though.
>
> > - stability/compatibility of pg_dump and restore operations,
>
> no real issues here as long as you're on recent enough versions to do
> wildcard table matching for individual tables.
>
> > - how many partitions would be reasonable for optimal read _and_ write
> > speed;
>
> again, this depends on how exactly you're working on the data. For example,
> we have tables with over a thousand partitions on them; in those scenarios
> all data is written into a single partition (with a new partition created
> daily), and the query patterns are really straightforward... last month gets
> a lot of queries, last three months not so much, last year barely any, and
> beyond that is pretty much just archive info. That said, we have other
> systems where that wouldn't work at all (for example, a static number of
> partitions, all of which are queried actively).
>
> For some more info, I've given at least one presentation on the topic, which
> seems to be missing from the omniti site, but I've uploaded it to
> slideshare...
> http://www.slideshare.net/xzilla/postgresql-partitioning-pgcon-2007-presentation
>
> HTH.
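One thing worth trying here (hedged: check it against your own release, since rule behaviour varied across 8.x versions): the whole incoming row can be referenced as NEW.*, which avoids spelling out every column when parent and child have identical column lists. A sketch on the example above:

create rule insert_t_1 AS ON INSERT TO teste
WHERE (t >= 1000 and t < 2000)
DO INSTEAD INSERT INTO t_1 VALUES (NEW.*);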
[GENERAL] match an IP address
hello all,

I'm unable to build a LIKE or SIMILAR TO expression for matching an IP address:

192.168.90.3
10.3.2.1

any help please...

thanks
joao
Re: [GENERAL] match an IP address
well... my IP addresses are stored in a TEXT type field. that field can actually contain usernames like 'joao' or 'scott', and it can contain IP addresses :(

joao

On Mon, 2008-09-22 at 11:13 -0600, Scott Marlowe wrote:
> On Mon, Sep 22, 2008 at 10:59 AM, Joao Ferreira gmail
> <[EMAIL PROTECTED]> wrote:
> > hello all,
> >
> > I'm unable to build a LIKE or SIMILAR TO expression for matching an ip
> > address
> >
> > 192.168.90.3
> > 10.3.2.1
>
> As already mentioned inet / cidr types should work. Example:
>
> postgres=# create table inettest (a inet);
> CREATE TABLE
> postgres=# insert into inettest values
> ('192.168.0.1'),('192.168.1.1'),('10.0.0.1');
> INSERT 0 3
> postgres=# select a from inettest where '192.168.0.1/16' >> a;
>      a
> -------------
>  192.168.0.1
>  192.168.1.1
> (2 rows)
> postgres=# select a from inettest where '192.168.0.1/24' >> a;
>      a
> -------------
>  192.168.0.1
> (1 row)
> postgres=# select a from inettest where '192.168.0.1/0' >> a;
>      a
> -------------
>  192.168.0.1
>  192.168.1.1
>  10.0.0.1
> (3 rows)
Re: [GENERAL] match an IP address
thank you depesz,

it seems a pretty good fix for my problem. Actually yesterday I came up with something similar, but yours is better.

cheers
joao

On Tue, 2008-09-23 at 09:26 +0200, hubert depesz lubaczewski wrote:
> On Mon, Sep 22, 2008 at 05:59:25PM +0100, Joao Ferreira gmail wrote:
> > I'm unable to build a LIKE or SIMILAR TO expression for matching an ip
> > address
> > 192.168.90.3
> > 10.3.2.1
> > any help please...
>
> use this regular expression:
>
> '^[0-9]{1,3}(\.[0-9]{1,3}){3}$'
>
> warning: do not use "like" or "similar to".
> proper way to use it:
>
> select * from table where field ~ '^[0-9]{1,3}(\.[0-9]{1,3}){3}$';
>
> this regexp is not 100% fault proof - it will happily return rows like:
> '300.999.998.7'
>
> but for most of the cases it should be enough. if you need it to match
> only ips, and filter out things like '300.999.999.999' - let me know.
>
> Best regards,
>
> depesz
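If the looser match ever becomes a problem, a tighter variant constrains each octet to 0-255. A sketch, same idea with a stricter per-octet alternation (table and column names are hypothetical):

select *
from mytable
where field ~ '^((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$';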
[GENERAL] databases list to file
Hello all,

I need to print to a file a simple list of all the databases on my postgresql server. I need to do this from a shell script, to be executed without human intervention.

I guess something like:

su postgres -c 'psql ...whatever > /tmp/my_databases.txt'

but I don't know exactly to what extent I can format the output of this in order to simplify the parser that will read that file.

any suggestions?

thank you
joao
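A sketch of the query side; combined with psql's -A (unaligned) and -t (tuples only) switches, it prints one bare database name per line, which needs essentially no parsing:

-- list databases, skipping templates; run it via something like:
-- su postgres -c "psql -At -c 'SELECT ...' > /tmp/my_databases.txt"
SELECT datname
FROM pg_database
WHERE NOT datistemplate
ORDER BY datname;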
[GENERAL] when COPY violates Primary Keys
Hello all,

I have an ascii dump file based on the COPY operation. Let's say I restore this dump into a live database with applications doing INSERTs and UPDATEs onto it.

In case the COPY of a record causes a primary key (or UNIQUE, or FK) violation, does the psql restore command try to continue the operation until it has parsed the whole dump, or does it abort on the first violation?

thx
Joao
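Two different behaviours are in play here (stated as the usual behaviour; worth verifying on your own version): each COPY statement is atomic, so one bad row discards that COPY's rows entirely, while plain psql keeps executing the rest of the script after the error unless told otherwise. A small sketch one can replay in psql:

-- suppose the live table already holds the key we are about to load
CREATE TABLE t (id integer PRIMARY KEY);
INSERT INTO t VALUES (1);

-- this COPY fails as a whole: the duplicate key aborts the statement
-- and none of its rows (not even the '2') are kept...
COPY t FROM stdin;
2
1
\.

-- ...but a script run with plain "psql -f dump.sql" carries on with
-- the next statement; set ON_ERROR_STOP to abort at the first error:
\set ON_ERROR_STOP on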
Re: [GENERAL] run postgres 8.3
Hello Eduardo,

On Tue, 2008-10-14 at 15:40 -0500, Eduardo Arévalo wrote:
> I installed postgres 8.3
> giving the command:
> bash-3.2$ /usr/local/postgres_8.3/bin/initdb -D /base/data

that command only initializes the underlying filesystem database files, directories and configurations.

the command that starts the database system (if you do not wish to use init scripts, as seems to be the case) is something like:

su - postgres -c '/usr/bin/pg_ctl start -w -D /var/pgsql/data \
-l /var/pgsql/data/logfile -o "-i"'
evaluate_retval
;;

> the result is:
>
> The files belonging to this database system will be owned by user
> "postgres".
> This user must also own the server process.
>
> The database cluster will be initialized with locale en_US.UTF-8.
> The default database encoding has accordingly been set to UTF8.
> The default text search configuration will be set to "english".
>
> initdb: directory "/base/data" exists but is not empty
> If you want to create a new database system, either remove or empty
> the directory "/base/data" or run initdb
> with an argument other than "/base/data".

you must remove that directory (or its contents) first. try one of these:

rm -rf /base/*
rm -rf /base/data/*

only after this removal, retry the initdb command. after that, try the start command.

BTW, use something like "/usr/bin/pg_ctl stop" to stop your DB.

BTW: why aren't you using your distribution's packages? they should work just fine :)

cheers
joao

> but do not raise your service
[GENERAL] speed up restore from dump
Hello all,

I've been trying to speed up the restore operation of my database, without success.

I have a 200 MB dump file obtained with 'pg_dumpall --clean --oids'. After restore it produces a database with one single table (1,000,000 rows). I also have some indexes on that table. That's it.

It always takes me about 20 minutes to reload the data, whatever settings I change. I have so far touched these settings:

- fsync = off
- shared_buffers = 24MB
- temp_buffers = 24MB
- maintenance_work_mem = 128MB
- full_page_writes = off
- wal_writer_delay = 1
- checkpoint_segments = 200
- checkpoint_timeout = 1800
- autovacuum = off

I started with a default installation. First I changed fsync to off, then I started touching other cfg params, but I always get around 20 minutes (21, 19, 18).

Can I expect these 20 minutes to be significantly reduced? What other cfg parameters should I touch? Can anyone shed some light on this? any faster approach to upgrade from 8.1 to 8.3?

thank you
Joao
Re: [GENERAL] speed up restore from dump
On Thu, 2008-10-30 at 11:39 -0700, Alan Hodgson wrote:
> On Thursday 30 October 2008, Joao Ferreira gmail
> <[EMAIL PROTECTED]> wrote:
> > What other cfg parameters should I touch ?
>
> work_mem set to most of your free memory might help.

I've raised work_mem to 128MB and I still get the same 20 minutes!

> You're probably just
> disk-bound, though. What does vmstat say during the restore?

During restore:

# vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 3  1 230204   4972   1352 110128   21   17    63    24   56   12  2 85  0
#

After restore has finished:

# vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 1  0 246864  59972   2276 186420    2  118    63    28   56   12  2 85  0
#

joao

> --
> Alan
[GENERAL] time interval format string
Hello,

I've been searching the docs for a simple way to convert a time _duration_ in seconds to the format dd:hh:mm:ss, but I can't find it:

90061 --> 1d 1h 1m 1s    (90061 = 24*3600 + 3600 + 60 + 1)

Any ideas? I've been using to_char and to_timestamp to format dates/timestamps, but this is different: I want to format time intervals, durations...

cheers
Joao
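P.S. The closest I have managed by hand is plain integer arithmetic; a sketch, with 90061 hard-coded where the real column would go:

-- 90061 s = 1 day 1 hour 1 minute 1 second
SELECT 90061 / 86400          AS days,
       (90061 % 86400) / 3600 AS hours,
       (90061 % 3600) / 60    AS minutes,
       90061 % 60             AS seconds;

but that still leaves the dd:hh:mm:ss string to be glued together by the caller.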
Re: [GENERAL] Upgrading Postgres question
On Wed, 2008-11-05 at 15:08 -0600, Tony Fernandez wrote:
> Hello all,
>
> I am in the process of updating my DB on Postgres 8.1.11 to 8.3.4. I
> also use Slony 1.2.14 for replication.
>
> Is there a safe path on how to accomplish this, please advise on what
> steps I will need to consider. Bear in mind that I am planning to
> skip from Postgres 8.1.x to 8.3.x and I use Slony to replicate my
> production DB into two more boxes simultaneously.

I don't know about Slony, but one way to do this is, roughly:

a) dump all contents of your pg with pg_dumpall
b) install pg 8.3 on a test server
c) restore the dump into the test server (su postgres; psql -f my_dump)
d) if all went well you can purge the 8.1 database from the disk, or you can skip to the next step and install the new db in some alternative directory, in case you need to revert to pg 8.1 later
e) install 8.3 on the main server (maybe you will have to use initdb)
f) restore the dump into the new 8.3 on the production server

This is how I do it :) Hope it fits your needs.

I also noticed some SQL parsing changes (like the need to explicitly cast from text to numeric), so you should try all your apps against the test server before purging the old db.

The main issue here is that, from 8.1 to 8.3, the underlying database file format has changed, so you need the dump/restore. You should read this:

http://www.postgresql.org/docs/current/static/install-upgrading.html

Joao
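P.S. For concreteness, steps a) to c) might look something like this as commands (paths invented):

# a) on the old 8.1 box
su - postgres -c 'pg_dumpall --clean > /tmp/my_dump.sql'

# b) + c) on the test box, after installing 8.3 and running initdb
su - postgres -c 'psql -f /tmp/my_dump.sql postgres'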
[GENERAL] merge 2 dumps
hello all,

I have 2 dumps of the same Pg database taken at different instants. I'd like to merge the two dumps into one single dump, in order to restore all the data at one time. Is this possible? Are there any helper tools to aid in dealing with text dump files?

thanks
Joao
Re: [GENERAL] merge 2 dumps
On Tue, 2008-11-11 at 11:16 +0000, Richard Huxton wrote:
> Joao Ferreira gmail wrote:
> > hello all,
> >
> > I have 2 dumps of the same Pg database taken at different instants.
> >
> > I'd like to merge the two dumps into one single dump in order to
> > restore all data at one time.
>
> Is there any overlap in the data?

No, I don't expect any overlaps. Any simpler solution in this case? Could I just take the "COPY" sections from the files and load them one after the other? I never tried this before...

j

> If so, simplest might be to restore dump1, rename all the tables,
> restore dump2 then run queries to rationalise your two sets of tables.
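P.S. If the COPY-section idea is sound, pulling just the data sections out of the second dump might be as simple as this (untested sketch; it assumes a plain-text dump where each data block runs from its COPY line down to the terminating \. line):

# print every block between a COPY statement and its closing \.
sed -n '/^COPY /,/^\\\.$/p' dump2.sql > data_only.sql
psql -f data_only.sql mydb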
[GENERAL] safely erasing dirs/files
Hello all,

How can I safely erase (with the rm command in Linux) files or dirs belonging to a specific database? Assume I wish to eliminate data belonging to database A without disturbing or causing any injury to database B.

Is there documentation on how to do this, or on what exactly I am erasing if I remove some specific file or dir?

Thanks
Joao
Re: [GENERAL] safely erasing dirs/files
ok, thx all for the explanation.

My problem is I have a heavily bloated pg database that has just filled up the partition. The data is in /var/pgsql/.. and /var is 100% full... vacuum/reindex is saying: I can't do it because I have no space :(

How do I un-bloat the /var partition? This is why I was considering rm.

thx
Joao

On Sat, 2009-11-14 at 14:35 -0500, Bill Moran wrote:
> Joao Ferreira gmail wrote:
> >
> > Hello all,
> >
> > How can I safely erase (with rm command in Linux) files or dirs
> > concerning a specific database ?
>
> What do you mean by this? The typical way would be to log into the
> DB server and issue DROP DATABASE x.
>
> However, if you're looking for secure data destruction (akin to rm's -P
> flag) I don't know if there is a canned method for doing so.
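P.S. Before touching anything with rm I have been looking at which relations actually eat the space; something like this should be safe since it is read-only:

-- top 10 tables by total on-disk size, including indexes and TOAST
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
  FROM pg_class
 WHERE relkind = 'r'
 ORDER BY pg_total_relation_size(oid) DESC
 LIMIT 10;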
Re: [GENERAL] Data Directory size increasing abnormally
> A strange behaviour is observed in the physical files with respect to
> this table. The size of the file is growing abnormally, in GBs. Suppose
> the file name (oid of relation) with respect to the table is "18924"; I
> could find entries of 1 GB files like 18924, 18924.1, 18924.2,
> 18924.3 ... 18924.40 in the data directory, though there are only 10k
> records in the table.

Hello,

your application is similar to mine and has similar problems. In mine I have on average 10 UPDATEs per second, with peaks of 100 UPDATEs per second. This kind of application is known to cause an undesirable behaviour in PostgreSQL called (I hope I'm not confusing things here) table bloat and, if you have indexes/primary keys, also index bloat.

The solution is somewhere in the correct balance of: autovacuum, periodic vacuuming, VACUUM FULL once in a while, and re-indexing. Of course, upgrading to 8.3.x or higher is very much recommended, but with this kind of application (very frequent updates) vacuum/autovacuum/REINDEX/CLUSTER will always be good friends to have around.

My best advice is to quickly read the documentation regarding those commands and start applying them to your test database; when you get the feel of it, configure autovacuum and plan weekly (to start with) vacuum/reindex operations. VACUUM FULL is typically not needed, but let experience tell you how it goes in your case.

I'm not a postgres expert, but my application, similar in behaviour to yours, taught me these things, so I'm sorry if I'm not being totally scientific in my suggestions.

cheers
Joao
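P.S. As a concrete starting point, the weekly maintenance I mean is just this (the table name here is hypothetical; use yours):

-- reclaim dead row versions and refresh planner statistics
VACUUM ANALYZE my_bloated_table;

-- rebuild the indexes, which plain VACUUM does not shrink
REINDEX TABLE my_bloated_table;

-- last resort when the table files themselves are huge: rewrites the
-- whole table and takes an exclusive lock, so plan a maintenance window
VACUUM FULL my_bloated_table;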
Re: [GENERAL] Creating new database
I'd suggest:

pg_dumpall --clean > dump.sql

Then edit the dump.sql file by hand, replacing database name, owners and so on, and reload into the new DB with:

psql -f dump.sql postgres

This does all the work, including creation of users and databases, and should give you an exact replica with all data inside. See "man pg_dumpall".

Joao

On Wed, 2009-11-18 at 12:48 +0100, Malcolm Warren wrote:
> Dear All,
>
> I've been using a single database for many years.
> I'd now like to create a new separate database with most of the same
> tables, so that I don't have to re-write my code; I'd like to just use
> a different Tomcat datasource to access it.
>
> I had imagined this would be as simple as:
> 1) using createdb, to create a database with a different name, maybe
> with a different postgres user assigned to it,
> 2) using a pg_dump from the original database to import the table
> creation.
>
> However it doesn't appear to be that simple.
>
> Let's say the old database is called database1, and I've created a new
> database2.
>
> When I import the pg_dump into database2, it tells me that the tables
> already exist (obviously it's talking about the tables in database1).
> But surely I've created a new database called database2? Why is it
> stopping me from creating the same tables in the new database?
>
> Same sort of problem with psql:
> If I run 'psql database2' then I see everything from database1 as well.
> What am I missing here?
>
> Thanks for your time.
>
> Malcolm Warren
Re: [GENERAL] Creating new database
On Wed, 2009-11-18 at 08:39 -0700, Scott Marlowe wrote:
> On Wed, Nov 18, 2009 at 8:12 AM, Joao Ferreira gmail wrote:
> > I'd suggest:
> >
> > pg_dumpall --clean > dump.sql
>
> I'd think he'd be much better off with pg_dump, not pg_dumpall.

Yes, agreed. Sorry.

joao

> pg_dump srcdb | psql destdb
>
> you can add -s as a pg_dump switch if all you want is the schema.
Re: [GENERAL] db size and VACUUM ANALYZE
On Fri, 2010-02-12 at 18:43 +0100, Marcin Krol wrote:
> Amitabh Kant wrote:
> > You need to do VACUUM FULL ANALYZE to reclaim the disk space, but this
> > creates an exclusive lock on the tables.
> >
> > See http://www.postgresql.org/docs/8.3/static/sql-vacuum.html
>
> Aha!
>
> OK but why did the performance degrade so much? The same reason -- lack
> of autovacuuming/vacuum full?

If the application makes use of INDEXes, then REINDEX will also play an important role. In that case, REINDEXing your indexes once in a while may give you immediate improvements in performance (may give, may not give... it depends).

Moreover, you should expect that in a few days/weeks/months the database size can (probably will) grow again... it's the way pg works.

Try using autovacuum. If you are already using it, you can make it more aggressive by decreasing the thresholds and so on.

Joao
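P.S. By "more aggressive" I mean something along these lines in postgresql.conf (illustrative values only, not a recommendation for any particular workload):

# defaults are 0.2 / 0.1; lower fractions mean a table is vacuumed or
# analyzed after a smaller share of its rows has changed
autovacuum = on
autovacuum_vacuum_scale_factor = 0.05
autovacuum_analyze_scale_factor = 0.025
autovacuum_vacuum_threshold = 50
autovacuum_analyze_threshold = 50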
[GENERAL] CLUSTER on 2 indexes ?
Hello,

Considering the CLUSTER operation on a frequently updated table: if I have 2 indexes on the table, how do I choose one of them? Or is it possible to have CLUSTER take both into consideration?

My table is read based on two columns: a 'timestamp' integer column (actually a UTC integer timestamp) and a text column called 'var' that separates raw data from each other, something like a 'product type' in a warehouse example. All queries that read from the table specify a time range and also the 'product type'. I have indexes on both.

Evidently, not CLUSTERing at all should be worse than clustering on either of them, but is there a way to use the two, or to choose the most profitable one?

thanks
Joao
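P.S. One idea I am toying with, in case it clarifies the question: a single composite index covering both columns, and clustering on that (names invented; CLUSTER ... USING is the 8.3 syntax):

-- one index serving the (var, time-range) access pattern
CREATE INDEX timeslots_var_ts_idx ON timeslots (var, "timestamp");

-- physically order the table by that index
CLUSTER timeslots USING timeslots_var_ts_idx;

But I don't know whether that beats clustering on either single-column index.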
Re: [GENERAL] Auto VACUUM
On Wed, 2010-03-03 at 12:46 -0500, akp geek wrote:
> Hi All -
>
> I need some help from you. This question is a follow-up to my earlier
> questions. I turned on autovacuum and restarted the db with the
> settings below. It seems the autovacuum process has not been turned
> on. It's been more than 3 hours since I restarted my DB with the
> following settings. I have done ps -ef to see the process list.
> Is there something I am doing wrong?
>
> Can you please help?

I'd suggest leaving the "naptime" at the default (60 seconds). Your value is very high... too high, I'd say. Use values around 60 seconds (never minutes).

> Regards
>
> # - Query/Index Statistics Collector -
> #track_activities = on
> track_counts = on
> #track_functions = none                 # none, pl, all
> #track_activity_query_size = 1024
> #update_process_title = on
> #stats_temp_directory = 'pg_stat_tmp'
>
> #------------------------------------------------------------------
> # AUTOVACUUM PARAMETERS
> #------------------------------------------------------------------
>
> autovacuum = on                         # Enable autovacuum subprocess? 'on'
>                                         # requires track_counts to also be on.
> #log_autovacuum_min_duration = -1       # -1 disables, 0 logs all actions and
>                                         # their durations, > 0 logs only
>                                         # actions running at least this number
>                                         # of milliseconds.
> autovacuum_max_workers = 10             # max number of autovacuum subprocesses
> autovacuum_naptime = 180min             # time between autovacuum runs
> #autovacuum_vacuum_threshold = 50       # min number of row updates before vacuum
> #autovacuum_analyze_threshold = 50      # min number of row updates before analyze
> #autovacuum_vacuum_scale_factor = 0.2   # fraction of table size before vacuum
> #autovacuum_analyze_scale_factor = 0.1  # fraction of table size before analyze
> #autovacuum_freeze_max_age = 200000000  # maximum XID age before forced vacuum
>                                         # (change requires restart)
> #autovacuum_vacuum_cost_delay = 20ms    # default vacuum cost delay for
>                                         # autovacuum, in milliseconds;
>                                         # -1 means use vacuum_cost_delay
> #autovacuum_vacuum_cost_limit = -1      # default vacuum cost limit for
>                                         # autovacuum, -1 means use
>                                         # vacuum_cost_limit
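P.S. Concretely, the one line I would change back in postgresql.conf:

autovacuum_naptime = 60s    # the default (1min); at 180min each database
                            # waits up to 3 hours between autovacuum passes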
Re: [GENERAL] Auto VACUUM
On Fri, 2010-03-05 at 10:03 -0500, akp geek wrote:
> Hi All -
>
> I am still having the issue, even after I turned on autovacuum.
> I have a quick question: how do I know that the autovacuum process
> is running? When I restarted my database I got the message
> "autovacuum launcher started", but is there a way to check that
> the process is really working?

You can "tail -f" the server logfile. In my case I use "tail -f /var/pgsql/data/logfile"; in your case the path may be different. Watch for lines containing the words "vacuum", "autovacuum", "will analyze", etc.

Joao
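P.S. A slightly more focused version of the same idea (the logfile path is from my setup; yours will differ):

# follow the server log, keeping only vacuum-related lines
tail -f /var/pgsql/data/logfile | grep -i vacuum

# or look for the launcher/worker processes directly
ps -ef | grep -i autovacuum | grep -v grep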
Re: [GENERAL] possible pg_dump bug
On Wed, 2009-01-28 at 09:09 -0800, David Miller wrote:
> pg_dump does not include the schema name on INSERT statements generated
> with the -d option when exporting data for a particular table using
> -t schema.table, in version 8.3. I believe this same bug exists in 8.4
> but have not confirmed it. I believe pg_dump should have an option to
> retain schema information.

Would the option --clean solve this problem?

pg_dump --clean

> David Miller
> River Systems, Inc.
Re: [GENERAL] psql 8.1.9 autovacuum safe?
On Sun, 2009-04-12 at 09:27 -0700, Irene Barg wrote:
> Hi,
>
> We are running postgresql-8.1.9 and plan to upgrade to 8.2 or even 8.3,
> but can't just yet. I need to run analyze periodically (like hourly),
> but before I write a script to loop through the tables in each schema
> and run analyze, I thought I would try autovacuum. I saw one post that
> said there was a bug with autovacuum in 8.1.x.
>
> Is autovacuum under 8.1.9 safe or should I wait until I upgrade?

There was a serious issue in 8.1.4 that, in certain situations, would make VACUUM and AUTOVACUUM totally ineffective. But, afaik, it has been fixed since 8.1.6.

joao

> Thanks in advance.
> -- irene
> -----------------------------------------------------------------
> Irene Barg                  Email: ib...@noao.edu
> NOAO/AURA Inc.              http://www.noao.edu/noao/staff/ibarg
> 950 N. Cherry Ave.          Voice: 520-318-8273
> Tucson, AZ 85726 USA        FAX:   520-318-8360
> -----------------------------------------------------------------
Re: [GENERAL] From 8.1 to 8.3
On Wed, 2009-04-22 at 22:12 +0530, S Arvind wrote:
> Our company wants to move from 8.1 to the latest 8.3. In IRC they told
> me to check the release notes for issues while upgrading. But there are
> lots of release notes. Can anyone tell me the most noticeable changes
> or places-of-error while upgrading?

One I had to solve was the need for explicit casting in SQL queries that used numeric comparison of REAL with TEXT... yes, this used to be possible on 8.1 and is no longer on 8.3, so if your applications have such queries you may bump into some problems.

I used stuff like this:

cast(instantin as numeric)

cheers
Joao
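P.S. A fuller before/after illustration of the kind of query I mean (table name invented; the column is the one from my own case):

-- worked on 8.1 via implicit casts; on 8.3 fails with an error along
-- the lines of "operator does not exist: text > numeric"
SELECT * FROM readings WHERE instantin > 10.5;

-- 8.3-safe version with the cast spelled out
SELECT * FROM readings WHERE cast(instantin as numeric) > 10.5;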
Re: [GENERAL] Selling an 8.1 to 8.3 upgrade
pg 8.1.4 has a very ugly bug which prevents VACUUM and AUTOVACUUM from performing well. In certain situations AUTOVACUUM will start failing, and any VACUUM operations will fail too. The solution I found was to periodically REINDEX my tables and indexes.

The major effect of this bug is that pg starts continuously taking up filesystem space for nothing: pg loses track of disk space that is no longer needed and simply requests more and more. Another side effect is that, in that situation, query execution time grows continuously too, until client applications simply time out.

This issue is documented and has been fixed starting with pg 8.1.6.

Joao

On Fri, 2009-04-24 at 12:01 -0500, Josh Trutwin wrote:
> I've been asked to put together a list of reasons to upgrade a db
> from 8.1 to 8.3 and I've looked over the changelog, but they want a
> bullet list of 4-5 top things. I'm curious what others would say the
> 5 most important updates from 8.1 to 8.3 are.
>
> I can say "performance improvements" but I'm not sure how to sell
> that better other than listing what's in the release notes. I also
> think autovac was greatly improved in these releases as well? Or
> maybe I'm thinking of 8.0 to 8.1?
>
> Sorry this is so vague, I'm frustrated with this request as I figured
> just the amount of bug-fixes alone would be adequate reasoning.
>
> Thanks,
>
> Josh
Re: [GENERAL] XML -> PG ?
hello,

as the perl addict I am... I recommend checking this out:

http://search.cpan.org/~cmungall/DBIx-DBStag/DBIx/DBStag/Cookbook.pm

It's pretty flexible and allows you to specify, to some extent, just how the database structure is inferred from the XML... check it out.

Joao

On Wed, 2009-05-06 at 11:31 -0400, Christophe wrote:
> On May 6, 2009, at 10:47 AM, Gauthier, Dave wrote:
>
> > Is there a way to read an XML file into a postgres DB? I'm thinking
> > that it will create and relate whatever tables are necessary to
> > reflect whatever's implied by the XML file structure.
>
> There's no built-in functionality that does what you describe,
> although building such a thing would be very straightforward. There
> are many application-specific decisions you'd need to make (what level
> of hierarchy in the XML file corresponds to a database/schema/table,
> for example, and how to handle nested "fields" and missing ones).
Re: [GENERAL] XML -> PG ?
On Wed, 2009-05-06 at 16:53 +0100, Joao Ferreira gmail wrote:
> hello,
>
> as the perl addict I am... I recommend checking this out:
>
> http://search.cpan.org/~cmungall/DBIx-DBStag/DBIx/DBStag/Cookbook.pm
>
> It's pretty flexible and allows you to specify, to some extent, just how
> the database structure is inferred from the XML...

... maybe start here to get a faster grasp:

http://search.cpan.org/~cmungall/DBIx-DBStag/DBIx/DBStag.pm

j

> check it out
>
> Joao