Re: [GENERAL] Replication and PITR
Csaba Nagy wrote:
> That's gone with 8.2: it will be possible to stream the last modifications, or force a WAL recycle periodically, whichever fits you better. There is some new infrastructure which allows these things, although I didn't have the time to play with them.

This sounds very nice. Where can I find more info about this, and when is 8.2 expected to be released? And is anyone else playing with building in replication for PG 8.2?

> The big improvement would indeed be to have the infrastructure to start up a standby by simply pointing it to the master server, no other setup needed. Implement that, make it reliable, and any beginner to postgres will be able to easily set up a WAL-shipping-based standby. Right now you still have to do some complicated scripting to make it work (no idea how much 8.2 will help here, didn't try yet).

Do you know what the motivation for these changes has been? Better PITR, or is replication a direct goal?

/BL
Re: [GENERAL] Replication and PITR
Bill Moran wrote:
> - No reliability. On slow days, WAL logs could take a long time to rotate, so small but important transactions might not be replicated for a long time.

So it is all right for backup, but for replication it could end up lagging too far behind, and a failover could be hours behind.

So PITR can be used, but one of the cons is the unpredictable delay of data. I thought one of the ideas behind the PITR system was to get reasonably recent backups of data ...

/BL
Re: [GENERAL] Replication and PITR
Chander Ganesan wrote:
> Keep in mind that while Slony-I provides you with a read-only replica (able to service queries), at present WAL log replication (in 8.2 or otherwise) would allow you to have a 'warm standby' type database - which would be somewhat in sync (pending the latest transactions), but would be unable to service queries (essentially, you'd have a server that was sitting with postmaster in a recovery state).

Interesting note ... do you know how far PG would be from being able to run in a "read-only" state while receiving PITR data? Is it a complex problem or a simple one to solve?

/BL
Re: [GENERAL] Replication and PITR
Jeff Davis wrote:
> 8.2 makes PITR much easier to use for the situation you'd like. In 8.1, a WAL might sit around for a while before it becomes full and then gets sent off. 8.2 allows you to force a WAL to be sent, and it also allows a standby mode.

This sounds really neat! To me this sounds like we may be able to replicate using different priorities, balancing performance and data integrity. But what do you mean by "standby mode"? That PG may be able to serve as a read-only DB and a replication client at the same time?

> Slony is a good system now, and it's nice because you can use different versions of PostgreSQL. PITR requires that it's the same version.

The PITR requirement makes sense, and I don't like the missing DDL updates and the fact that Slony is trigger-based.

> However, if you're working with a payment system or accounting system, you may need synchronous replication. With any asynchronous solution (Slony or PITR standby), there is a possibility (although not likely) of losing *committed* transactions.

Hmm, yes, but again this has to be balanced with performance, and at the moment I can live with a replication that is a little behind, but I will try to keep the overhead down.

> If you do need synchronous replication, consider using two-phase commit to prepare transactions on several machines before committing them. This ensures that the data will be on multiple machines before committing to any of them.

Hmm, I saw that feature announced in 8.1, and I am sure it will come in handy one day, but right now async is acceptable.

/BL
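For reference, the two-phase commit feature mentioned above looks roughly like this in 8.1. A minimal sketch only: 'tx1' is an arbitrary transaction identifier, the payments table is made up for illustration, max_prepared_transactions must be set above zero, and the application or middleware still has to drive every participating server itself:

    -- on each participating server:
    BEGIN;
    INSERT INTO payments (id, amount) VALUES (42, 100.00);
    PREPARE TRANSACTION 'tx1';    -- durably prepared on disk, but not yet visible

    -- once every server has prepared successfully:
    COMMIT PREPARED 'tx1';

    -- if any server failed to prepare:
    -- ROLLBACK PREPARED 'tx1';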
Re: [GENERAL] Backup / export DB
Simply dump the database with pg_dump, copy the dump to the other machine and restore the dump there. See the man pages of pg_dump for how to use the tool. If your database contains foreign keys or similar, make sure to include OIDs in the database dump.

-- Matthias

On Friday, September 22, 2006, Alain Roger wrote:
> Hi, I have a database (tables, stored procedures, accounts, ...) on one computer. For some reason I need to move this database to another computer, but not into the same folder name or onto the same HDD. How can I export (and afterwards import) all the related structure and data? Or maybe a backup of the DB is enough?
> thanks a lot, Alain
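As a minimal sketch of that workflow (the database name "mydb" and the superuser name are placeholders):

    # on the old machine:
    pg_dump -U postgres -Fc mydb > mydb.dump
    # copy mydb.dump to the new machine, then there:
    createdb -U postgres mydb
    pg_restore -U postgres -d mydb mydb.dump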
[GENERAL] Backup roles / users
Hi,

I've checked pg_dump and I did not find anything regarding backing up roles. When I migrate my DB to another computer, should I recreate all roles manually?

thx. Alain
Re: [GENERAL] Backup roles / users
Hi Alain!

To get all the users and so on to the other machine, you have to use the pg_dumpall tool with the -g switch. This will dump out all the users, groups and roles, I think.

-- Matthias

On Friday, September 22, 2006, Alain Roger wrote:
> Hi, I've checked pg_dump and I did not find anything regarding backing up roles. When I migrate my DB to another computer, should I recreate all roles manually?
> thx. Alain
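A minimal sketch of that (assuming the superuser is named postgres; the output file name is arbitrary):

    pg_dumpall -g -U postgres > globals.sql      # dumps roles/users/groups only
    psql -U postgres -f globals.sql postgres     # replay on the new server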
Re: [GENERAL] Strange database corruption with PostgreSQL 7.4.x o
Hello everyone!

Small update on this issue: our server has four 146GB disks as pairwise RAID 1, and one of these is affected by the bug mentioned in the HP support page. As a quick fix I moved our database to the other RAID device, built from unaffected disks. So far I haven't had any new database corruption, so I think the one disk with the firmware bug is the cause of our problems. Since only the database does a lot of I/O onto the disks, this will help us for the next days until we can upgrade or replace the buggy disk.

Thank you all for your hints and suggestions!

-- Matthias

On Wednesday, September 20, 2006, [EMAIL PROTECTED] wrote:
> Hello all!
>
> Ok, I found out some more information. According to
> http://h2.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&;taskId=110&prodSeriesId=397634&prodTypeId=15351&prodSeriesId=397634&objectID=PSD_EX050119_CW01
> one of our four disks in the server has a firmware issue.
> The problem is incomplete writes onto disk while under high I/O load...
> We will check this one first. If it won't help, we will try the hardware
> diagnostics and some other tests...
> Meanwhile thank you all for your suggestions :)
>
> -- Matthias
Re: [GENERAL] After Trigger
Bob Pawley wrote:
> I am seeking wording for a procedure that will initiate a trigger only after another trigger has completed its function. In this case I want to drop a table that is created and used by the first trigger.

You could just call a function from your trigger, or handle it inside the trigger.

An alternative approach would be to use a permanent table, fill it within your transaction and truncate it eventually. To other transactions there will never be any data in it, and you lose the overhead of creating and dropping the table (replacing it by truncating it).

-- Alban Hertroys [EMAIL PROTECTED]
magproductions b.v.
T: ++31(0)534346874 F: ++31(0)534346876
I: www.magproductions.nl
A: Postbus 416, 7500 AK Enschede
// Integrate Your World //
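A rough illustration of that permanent-scratch-table idea (table and column names are made up, and concurrent sessions would still share and contend for the same table):

    -- created once, up front:
    CREATE TABLE trigger_scratch (id integer, payload text);

    -- inside the transaction in which the triggers fire:
    BEGIN;
    INSERT INTO trigger_scratch VALUES (1, 'intermediate result');
    -- the follow-up step reads trigger_scratch here:
    SELECT id, payload FROM trigger_scratch;
    TRUNCATE trigger_scratch;    -- instead of dropping the table
    COMMIT;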
Re: [GENERAL] Can i switch to the use of persistent connections
On fim, 2006-09-21 at 23:40 -0700, Najib Abi Fadel wrote:
> i have an already running web application with a lot of users. All the connections to the postgresql database are not persistent. I was wondering if it is possible to switch to persistent connection without having problems in my current in production application in order to boost performance ?

take a look at pgpool
http://pgpool.projects.postgresql.org/

gnari
Re: [GENERAL] postgresql rising
On Wed, 2006-09-20 at 10:10 -0500, Tony Caduto wrote:
> Merlin Moncure wrote:
> > I have seen a steady progressive rise in the number of postgresql related jobs and the quality of those jobs. Major companies are apparently rolling out critical infrastructure on postgresql... Vonage is one example.
>
> That is good news, I wish there were some of those Postgresql jobs in the Milwaukee area :-) Don't want to move to NJ :-(
>
> It does seem to be a grass roots kind of thing as the major corp managers have no clue what Postgresql is. For a high level corp manager all they ever hear about is MS SQL Server, Oracle and DB2, and the more it costs the more they think it is what they need :-)

Hmmm... I'm broke.

I'd be happy to wrap a sexy, colorful, pointy-clicky installer around postgreSQL and then market the whole package to the corporate world with 5 and 6 figure annual site deployment fees. I'll even funnel half back into core devel.

It wouldn't even be the exploitation of profound stupidity, it would be catering to the needs and desires of the corporate customer. It's not about the product, it's about the marketing. Look what MS has done with the often Alpha Dung they produce. Imagine that machinery behind a sound and superior product. Boggles the mind, dunnit?

I wonder if the Advocacy crowd have thought about an outright sale of marketing rights.

Andy
Re: [GENERAL] Select Cast Error
On fim, 2006-09-21 at 21:57 -0400, Brad Budge wrote:
> I have a field that is varchar(15) type and an example of data I'm working with is (PROJ-0001-06)
>
> select cast((max(substring(test.test from 6 for 4))) as integer) + 1
> FROM test where max(substring(test.test from 11 for 2));

as someone already pointed out, this where clause is just equivalent to WHERE '06' and does not mean anything

> List below is a better idea of what my table looks like and the result I need.
> PROJ-0004-05
> PROJ-0001-06
> PROJ-0002-06
> PROJ-0003-06
> When I run my select statement I want to return the number 4. The idea is that I need the next highest number in the middle but take into consideration that the highest trailing numbers take precedence.

if all the values are really formatted like this, you could ORDER BY to get the value you want:

    SELECT 1 + CAST(substring(test.test from 6 for 4) AS INTEGER)
    FROM test
    ORDER BY substring(test.test from 11 for 2) DESC,
             substring(test.test from 6 for 4) DESC
    LIMIT 1;

gnari
Re: [GENERAL] dumping 8M bit fields
On Thu, Sep 21, 2006 at 08:14:49PM -0400, Rajarshi Guha wrote:
> Hi, I have a table with 8M rows and one of the fields is a bit (1024 bits) field. I am trying to dump the bit field for 8M rows to a file:
> out of memory for query result
>
> I am surprised since I did an equivalent query but selecting a varchar field for 6.9M rows which worked fine.

The memory needed is related to how much screen space it takes. So a 1024-bit field takes 1024 bytes to display. Times 8M rows, that is about 8GB of data.

If the intention is to get all the data out into a file, try COPY. That just dumps the data, without trying to store it first.

Hope this helps,
-- Martijn van Oosterhout http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.
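A minimal sketch of the COPY route from psql, using the table and column names from the original query (the output file name is a placeholder, and the space delimiter matches the original -F " " switch):

    \copy pubchem_compound (cid, gfp) to 'pubchem.fp' with delimiter ' '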
[GENERAL] PG_DUMP without asking password
Hi,

is there a way to back up the database via a command script, without PostgreSQL requesting the user password?

thanks a lot, Alain
Re: [GENERAL] PG_DUMP without asking password
Hm, depends on how the security settings of the database are set. For local users you could set ident as the authentication method in pg_hba.conf. Then they won't need a password anymore. See the pg_hba.conf documentation for more info.

Another way could be to execute the dump script under a privileged user. For my machines here, the user postgres can dump all databases without entering a password. If you back up your database via a cron job started by root, you can simply do a su postgres -c ... to run the backup job under user postgres.

-- Matthias

On Friday, September 22, 2006, Alain Roger wrote:
> Hi, is there a way to back up the database via a command script, without PostgreSQL requesting the user password?
> thanks a lot, Alain
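A minimal sketch of such a root crontab entry (the database name and backup path are placeholders, and it assumes the postgres user authenticates via local ident):

    # root's crontab: nightly dump at 02:00, run as the postgres OS user
    0 2 * * * su postgres -c "pg_dump -Fc mydb > /var/backups/mydb.dump"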
Re: [GENERAL] PG_DUMP without asking password
If you run it on the same server, with a user who has permission to access the db using ident (e.g. user "postgres"), i.e. you have a pg_hba.conf entry like:

    local   all   postgres   ident sameuser

then pg_dump and pg_dumpall will not require a password.

Alternatively, you can create a .pgpass file in the user's home directory (if ident is not available, but you *need* to think about the permissions of this file and the ramifications).

Terry

Terry Fielder [EMAIL PROTECTED]
Associate Director Software Development and Deployment
Great Gulf Homes / Ashton Woods Homes
Fax: (416) 441-9085

Alain Roger wrote:
> Hi, is there a way to back up the database via a command script, without PostgreSQL requesting the user password?
> thanks a lot, Alain
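For the .pgpass route, a minimal sketch (host, database, user and password are placeholders; if the file is readable by others, libpq will ignore it):

    # ~/.pgpass -- one line per connection: hostname:port:database:username:password
    localhost:5432:mydb:backupuser:secret

    chmod 600 ~/.pgpass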
Re: [GENERAL] Majordomo drops multi-line Subject:
>> I know, since some mailing lists I am on were migrated to Majordomo2.
>> Afaik Mailman does not support the "nomail" option.
>
> BTW, majordomo may not support 'nomail', but it does allow you to add additional email addresses to a primary email, which I find to be far easier for dealing with multiple accounts.

Just for the record, both Mailman and Majordomo support the "nomail" option. More specifically, all the Postgres lists do, and a number of subscribers are using this feature.

-- Greg Sabino Mullane [EMAIL PROTECTED]
End Point Corporation
PGP Key: 0x14964AC8 200609221021
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
Re: [GENERAL] ERROR: index "patient_pkey" is not a btree
"surabhi.ahuja" <[EMAIL PROTECTED]> writes: > i am having postgres version 8.0.0 installed, on my machine ^ The 8.0 branch is up to 8.0.8. You have just found one of the reasons why. Update. regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Replication and PITR
On Fri, 2006-09-22 at 08:12 +0200, Bo Lorentsen wrote:
> Jeff Davis wrote:
> > 8.2 makes PITR much easier to use for the situation you'd like. In 8.1, a WAL might sit around for a while before it becomes full and then gets sent off. 8.2 allows you to force a WAL to be sent, and it also allows a standby mode.
>
> This sounds really neat!
>
> To me this sounds like we may be able to replicate using different priorities, balancing performance and data integrity. But what do you mean by "standby mode"? That PG may be able to serve as a read-only DB and a replication client at the same time?

Standby mode means that the database is kept almost up to date with the master, but is not "up". When the master goes down, you can bring the standby machine up. Until then, you unfortunately can't even do read queries on that machine.

If you want more of a master/slave setup for performance, you should take a second look at Slony. A PITR standby doesn't help you with performance at all.

> > Slony is a good system now, and it's nice because you can use different versions of PostgreSQL. PITR requires that it's the same version.
>
> The PITR requirement makes sense, and I don't like the missing DDL updates and the fact that Slony is trigger-based.

Why don't you like the fact that Slony is trigger-based? Does that cause you a problem? And missing DDL is mainly a problem when you want to provide postgresql to many people, and you have no idea how they will use it. If that's the case, standby PITR might be a better solution for you. Slony has nice "execute script" functionality that is useful for making DDL changes on all machines.

> > However, if you're working with a payment system or accounting system, you may need synchronous replication. With any asynchronous solution (Slony or PITR standby), there is a possibility (although not likely) of losing *committed* transactions.
>
> Hmm, yes, but again this has to be balanced with performance, and at the moment I can live with a replication that is a little behind, but I will try to keep the overhead down.

Asynchronous does have very good performance.

> > If you do need synchronous replication, consider using two-phase commit to prepare transactions on several machines before committing them. This ensures that the data will be on multiple machines before committing to any of them.
>
> Hmm, I saw that feature announced in 8.1, and I am sure it will come in handy one day, but right now async is acceptable.

I prefer working with async when possible because it's easier to do well. I was just making sure you knew that it is possible to lose transactions.

By the way, no matter what you do, you probably do want to use PITR to at least do backups for you. It won't help to use replication if someone accidentally does an unqualified "DELETE FROM mytable".

Regards, Jeff Davis
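For context, a warm standby of this kind is driven by a recovery.conf on the standby whose restore_command is run once per WAL segment the server needs. A minimal sketch, with the archive path as a placeholder; a real standby setup would use a restore_command script that waits for each segment to arrive rather than a plain cp:

    # recovery.conf on the standby (8.x-era PITR recovery)
    restore_command = 'cp /mnt/wal_archive/%f "%p"'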
Re: [GENERAL] CLUSTERing on Insert
I believe there's a TODO item for index-organized tables/clustered tables. If not, there's certainly been discussion about it on the -hackers list.

On Sun, Sep 17, 2006 at 10:21:27PM -0700, CG wrote:
> As I'm waiting for a CLUSTER operation to finish, it occurs to me that in a lot of cases the performance benefits of having one's data stored on disk in index order can outweigh the overhead involved in inserting data on-disk in index order. Just an idea I thought I'd throw out. :)
>
> Also, the CLUSTER operation is about as straightforward as one can get. It basically reads each row, one-by-one, in index order over to the new table, reindexes, then renames the new table to preserve references. I've been thinking about how to speed up the copy process. Perhaps taking contiguous blocks of data and moving them into place would save some I/O time. Locking the table is another problem. Would it be impossible to perform the CLUSTER within the context of a READ COMMITTED transaction, and then pick up the leftover crud rows and put them at the end of the file? The existing code makes some assumptions that the table was not altered. There would be no more assumptions.
>
> I'm sure I'm not the first person to scratch his head thinking about CLUSTER. Maybe I just don't really understand the limitations that are out there preventing these things from being created. But, what else is there to do at 1AM on a Sunday night waiting for a 500MB table to CLUSTER? :)
>
> CG

-- Jim Nasby [EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com
512.569.9461 (cell)
Re: [GENERAL] dumping 8M bit fields
On Thu, 2006-09-21 at 20:14 -0400, Rajarshi Guha wrote:
> Hi, I have a table with 8M rows and one of the fields is a bit (1024 bits) field. I am trying to dump the bit field for 8M rows to a file:
>
> psql -U cicc3 -A -F " " -t -o pubchem.fp -c "select cid,gfp from pubchem_compound;"
>
> However I get
>
> out of memory for query result

psql is trying to load all of the data into RAM before outputting any of it. More specifically, it's trying to load all of the output representations of all the data into RAM before outputting it. For 1024 bits, the output representation will be 1kB. For 8M rows, that's a lot of RAM needed.

It would be better to use something like COPY or a cursor.

Regards, Jeff Davis
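A minimal sketch of the cursor route from psql, fetching in batches instead of pulling the whole result at once (run it with the same -A -F " " -t switches as the original command; the batch size and file name are arbitrary):

    \o pubchem.fp
    BEGIN;
    DECLARE fp_cur CURSOR FOR SELECT cid, gfp FROM pubchem_compound;
    FETCH 100000 FROM fp_cur;
    FETCH 100000 FROM fp_cur;
    -- ...repeat the FETCH until it returns no more rows...
    CLOSE fp_cur;
    COMMIT;
    \o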
Re: [GENERAL] postgresql rising
On Thu, Sep 21, 2006 at 10:48:47AM -0500, Scott Marlowe wrote:
> On Thu, 2006-09-21 at 08:47, Brad Nicholson wrote:
> > On Wed, 2006-09-20 at 16:38 -0500, Philip Hallstrom wrote:
> > > > On Wed, Sep 20, 2006 at 10:10:56AM -0500, Tony Caduto wrote:
> > > >> For a high level corp manager all they ever hear about is MS SQL Server, Oracle and DB2, and the more it costs the more they think it is what they need :-)
> > > >
> > > > I think that description is false. At a certain point in the management hierarchy, the only way anyone has the ability to evaluate something is on the basis of reputation.
> > >
> > > I think that description is false. At a certain point in the management hierarchy, the only way anyone has the ability to evaluate something is on the basis of
> > >
> > > - if there is someone they can sue.
> >
> > Good luck attempting to sue Microsoft, Oracle or IBM for deficiencies in their database products.
>
> I had a boss once who panned PostgreSQL because he wanted a company to be able to blame if things went wrong. I asked him if it wasn't more important to worry about preventing things from going wrong in the first place. I got a rather blank stare for a while. No answer.

And nowadays, there are at least two US companies you can pay for the right to blame when something goes wrong.

-- Jim Nasby [EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com
512.569.9461 (cell)
Re: [GENERAL] postgresql rising
On Wed, Sep 20, 2006 at 05:30:59PM -0700, CSN wrote:
> PostgreSQL doesn't have any booth babes? ;P

Berkus doesn't count??! He's got long hair! What more do you want?! :P

> csn
>
> On 09/20/06 16:38, Philip Hallstrom wrote:
> > [snip]
> > > I think that description is false. At a certain point in the management hierarchy, the only way anyone has the ability to evaluate something is on the basis of
> > >
> > > - if there is someone they can sue.
> > > - how attractive the sales rep is.
> >
> > Back in my youth, working for the family business (roofing/siding distributor, not many women, fewer attractive women), the most successful salespeople were always... young attractive women.

-- Jim Nasby [EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com
512.569.9461 (cell)
Re: [GENERAL] postgresql rising
On Fri, Sep 22, 2006 at 11:14:06AM +0200, Andrew Kelly wrote:
> On Wed, 2006-09-20 at 10:10 -0500, Tony Caduto wrote:
> > Merlin Moncure wrote:
> > > I have seen a steady progressive rise in the number of postgresql related jobs and the quality of those jobs. Major companies are apparently rolling out critical infrastructure on postgresql... Vonage is one example.
> >
> > That is good news, I wish there were some of those Postgresql jobs in the Milwaukee area :-) Don't want to move to NJ :-(
> >
> > It does seem to be a grass roots kind of thing as the major corp managers have no clue what Postgresql is. For a high level corp manager all they ever hear about is MS SQL Server, Oracle and DB2, and the more it costs the more they think it is what they need :-)
>
> Hmmm... I'm broke.
>
> I'd be happy to wrap a sexy, colorful, pointy-clicky installer around postgreSQL and then market the whole package to the corporate world with 5 and 6 figure annual site deployment fees. I'll even funnel half back into core devel.
>
> It wouldn't even be the exploitation of profound stupidity, it would be catering to the needs and desires of the corporate customer.

So why don't you?

-- Jim Nasby [EMAIL PROTECTED]
EnterpriseDB http://enterprisedb.com
512.569.9461 (cell)
Re: [GENERAL] Replication and PITR
On Fri, 2006-09-22 at 07:47 +0200, Bo Lorentsen wrote:
> Bill Moran wrote:
> > - No reliability. On slow days, WAL logs could take a long time to rotate, so small but important transactions might not be replicated for a long time.
>
> So it is all right for backup, but for replication it could end up lagging too far behind, and a failover could be hours behind.
>
> So PITR can be used, but one of the cons is the unpredictable delay of data. I thought one of the ideas behind the PITR system was to get reasonably recent backups of data ...

8.2 will fix this. You can send the WALs periodically even if they're not full.

In general, PITR will be substantially improved in 8.2 (thanks Simon!). The beta should be out soon enough. Download it (or the CVS) and try it out. It never hurts to actually simulate a failure and see how quickly and effectively you actually can recover. That is especially true in 8.1, where PITR is still somewhat rough around the edges.

Regards, Jeff Davis
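For reference, the 8.2 knob being referred to is archive_timeout, which forces a WAL segment switch after a maximum interval even when the segment isn't full. A minimal postgresql.conf sketch on the master (the archive destination path is a placeholder):

    archive_command = 'cp %p /mnt/wal_archive/%f'   # ship each completed WAL segment
    archive_timeout = 60                            # force a segment switch after 60 seconds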
Re: [GENERAL] postgresql rising
> Berkus doesn't count??! He's got long hair! What more do you want?!

banjos playing in background...

-- Scott Ribe [EMAIL PROTECTED]
http://www.killerbytes.com/
(303) 722-0567 voice
Re: [GENERAL] Backup Large Tables
On Sep 21, 2006, at 10:54 PM, Charles Ambrose wrote:
> I have fairly large database tables (say an average of 3Million to 4Million records). Using the pg_dump utility takes forever to dump the

Sounds like you're either woefully mis-configured or woefully underpowered, or have a short definition of "forever" :-)

Every night we take a dump of our several-hundred-million-row DB in about 49 minutes. We use the "pg_dump -Fc" format and that comes to a bit over 5GB of data compressed.
Re: [GENERAL] Backup / export DB
On Sep 22, 2006, at 3:04 AM, [EMAIL PROTECTED] wrote:
> If your database contains foreign keys or similar, make sure to include OIDs in the database dump.

On what basis did you make up this advice? I have no OIDs anywhere and FKs work just fine.

When you set up the new server, be sure to re-create any users you had on the old one, and do the dump as the postgres superuser to ensure everything gets copied. Do the restore as the superuser too, to ensure all ownerships and permissions are as before as well.
Re: [GENERAL] Backup roles / users
On Sep 22, 2006, at 3:20 AM, Alain Roger wrote:
> When I migrate my DB to another computer, should I recreate all roles manually?

Either that, or use this command to dump the roles as an SQL file you can feed into the new server. It will complain about trying to recreate the superuser, but that's ignorable:

    pg_dumpall --globals-only -U ${pguser} > users.sql

where pguser is defined as the postgres superuser you use.
Re: [GENERAL] postgresql rising
On Sep 21, 2006, at 10:27 PM, Christopher Browne wrote:
> In contrast, if a similar infringement were found with one of the products of, say, IBM, you might discover that you got some value for money out of those licensing fees in that the only folks sued are likely to be IBM...

That assumes their license indemnifies you of such liability. But you could be sued anyhow, and would then have to claim against that indemnification. I find this to be a mostly false argument.
Re: [GENERAL] postgresql rising
On Sep 22, 2006, at 1:03 PM, Jim C. Nasby wrote:
> Berkus doesn't count??! He's got long hair! What more do you want?!

Well, then based on volume he should count as two :-) No offense intended, Josh... *I'd* count as two, too.
[GENERAL] Is relpages always supposed to be right?
Is relpages always supposed to be right?

    trend=# select count(*) from result_entry;
     count
    -------
     59913
    (1 row)

    trend=# select relpages from pg_class where relname='result_entry';
     relpages
    ----------
            0
    (1 row)

Alex
Re: [GENERAL] Is relpages always supposed to be right?
Alex Turner wrote:
> Is relpages always supposed to be right?

No, it's only updated on VACUUM, ANALYZE, CREATE INDEX, and perhaps REINDEX.

-- Alvaro Herrera http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
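A quick illustration of that, using the table name from the original question:

    ANALYZE result_entry;
    SELECT relpages, reltuples FROM pg_class WHERE relname = 'result_entry';
    -- relpages/reltuples now hold the estimates gathered by ANALYZE,
    -- and will drift again until the next VACUUM/ANALYZE/CREATE INDEX.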
Re: [GENERAL] Postgres Team: Thank You All
# [EMAIL PROTECTED] / 2006-09-20 21:59:10 -0400:
> To all involved in this project,
>
> I just wanted to let you know how impressed and pleased I have been with postgres over the past 5 years.

Remember, this is an open source project. Satisfied users are similar to random victims in drive-by shootings.

-- How many Vietnam vets does it take to screw in a light bulb? You don't know, man. You don't KNOW. Cause you weren't THERE. http://bash.org/?255991
Re: [GENERAL] Can i switch to the use of persistent connections with my already in use application without worrying ?
Najib Abi Fadel wrote:
> i have an already running web application with a lot of users. All the connections to the postgresql database are not persistent. I was wondering if it is possible to switch to persistent connection without having problems in my current in production application in order to boost performance ?

You don't mention how your web application is constructed, but just about every runtime platform has some kind of connection pooling. For example, if you are using Tomcat, that has connection pools available. You can define them as part of your context, then just look them up when you need a connection.

-- Guy Rouillier
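For the Tomcat case, a pooled DataSource is typically declared in the webapp's context. A rough sketch only: the resource name, database, credentials and pool sizes are all placeholders, and the exact attributes depend on the Tomcat version and pooling library in use, so check your Tomcat docs:

    <!-- context.xml (Tomcat with a commons-dbcp style pool) -->
    <Context>
      <Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"
                driverClassName="org.postgresql.Driver"
                url="jdbc:postgresql://localhost:5432/mydb"
                username="webuser" password="secret"
                maxActive="20" maxIdle="5" maxWait="10000"/>
    </Context>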
[GENERAL] powerset?
Does anybody have a stored proc they'd like to share (preferably pl/pgsql) that generates the power set of an array? For instance:

    select powerset('{1,2,3}');

would give 8 rows:

    {}
    {1}
    {2}
    {3}
    {1,2}
    {2,3}
    {1,3}
    {1,2,3}
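No such function ships with PostgreSQL; as one possible starting point, an untested pl/pgsql sketch restricted to 1-based integer arrays (every bitmask from 0 to 2^n - 1 selects one subset, so the row order differs from the listing above):

    CREATE OR REPLACE FUNCTION powerset(arr int[]) RETURNS SETOF int[] AS $$
    DECLARE
        n      int := coalesce(array_upper(arr, 1), 0);  -- assumes lower bound 1
        subset int[];
        mask   int;
        i      int;
    BEGIN
        FOR mask IN 0 .. (1 << n) - 1 LOOP
            subset := '{}';
            FOR i IN 1 .. n LOOP
                -- include element i when bit (i-1) of the mask is set
                IF ((mask >> (i - 1)) & 1) = 1 THEN
                    subset := array_append(subset, arr[i]);
                END IF;
            END LOOP;
            RETURN NEXT subset;
        END LOOP;
        RETURN;
    END;
    $$ LANGUAGE plpgsql;

Called as select * from powerset('{1,2,3}'::int[]); it should return the 8 subsets, though not necessarily in the order shown above.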