[GENERAL] trying to track down postgres deaths
We have a problem with the postgres backend process terminating and dropping core. We believe it may be related to another problem we have where the postgres backend process cannot be properly terminated. Question is, when I see: #17 0x08151bc5 in ClosePostmasterPorts () in the stack trace of the process, is this an indicator that the backend process was trying to shut down? I'm assuming that if it's trying to 'close ports' then it would be shutting down the process. Are there multiple scenarios where this routine would be called? -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] trying to track down postgres deaths
Tom Lane wrote: Geoffrey <[EMAIL PROTECTED]> writes: Question is, when I see: #17 0x08151bc5 in ClosePostmasterPorts () in the stack trace of the process, is this an indicator that the backend process was trying to shut down? No; that's a function that's executed immediately after fork() to close postmaster sockets that the backend doesn't need to have open. It's highly unlikely that you'd get a crash in there, and even more improbable that it'd be 17 levels down the call stack. My bet is that you are trying to debug with a non-debug-enabled build, and gdb is giving you a less than accurate stack trace. This is highly likely, as you are correct: we are not running with debug turned on. It's on the list, but the high-availability requirements for these services have not permitted us to do this yet. We know we have a problem with third-party vendor code that we have built into the backend. It doesn't properly terminate when you run their application from a shell, so we are fairly certain this is causing the problem with the backend processes not properly terminating when our application terminates. Rebuild with --enable-debug, and maybe also --enable-cassert, and see what you can learn. Thanks; as noted, that is the plan. regards, tom lane
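For the archives, Tom's suggestion boils down to something like the sketch below. The source path, install prefix, and core-file name are assumptions for illustration; adjust to the local tree:

```shell
# Hypothetical paths; adjust to your source tree and install location.
cd /usr/local/src/postgresql-7.4.17

# Rebuild with debug symbols and assertions, and without optimization,
# so gdb reports a trustworthy call stack.
./configure --enable-debug --enable-cassert CFLAGS="-O0 -g"
make && make install

# After the next crash, load the core file against the new binary and
# capture a full backtrace non-interactively.
gdb /usr/local/pgsql/bin/postgres /usr/local/pgsql/data/core.12345 \
    -batch -ex 'bt full' > gdbbatch.txt
```

The -O0 matters as much as -g: optimized builds inline and reorder code, which is exactly what produces implausible frames like ClosePostmasterPorts seventeen levels deep.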
Re: [GENERAL] [PERFORM] [ADMIN] Postgres VS Oracle
Andrew Kelly wrote: On Mon, 2007-06-18 at 13:02 -0400, Jonah H. Harris wrote: On 6/18/07, Andreas Kostyrka <[EMAIL PROTECTED]> wrote: As a cynic, I might ask, what Oracle is fearing? As a realist, I might ask, how many times do we have to answer this type of anti-commercial-database flamewar-starting question? As a nudist, I think I have to answer, "About every 9 weeks, it would seem". Jeese! You could have warned us to shield our eyes!
Re: [GENERAL] Mugs 'n stuff
A. Kretschmer wrote: am Tue, dem 03.07.2007, um 23:37:57 +0100 mailte Raymond O'Donnell folgendes: Hi all, Is it still possible to get PostgreSQL merchandise? A friend of mine is looking for some, but I can't seem to find where it's available. Can you or your friend visit the pgday.it at Prato, Italy? http://www.pgday.it/en/ , maybe there. We, the German PostgreSQL user group, have some stuff like blue plush elephants, shirts and coffee cups, see also http://ads.wars-nicht.de/blog/ (scroll a little bit down); this will be available in Prato. I've always had luck finding such items at http://www.cafepress.com/ Here's the tinyurl to a search for postgresql, which found a few items as well as some semi-related: http://tinyurl.com/27onuq I found all kinds of clothing as well as coffee cups, license plate frames and clocks.
[GENERAL] upgrade to 8.2.? or implement Slony, which first?
We plan to upgrade from our current PostgreSQL 7.4.17 to 8.2.x in the near future. We also plan to implement a replication solution, most likely Slony. We know that the upgrade to 8.2.x will require some code changes. Question is, is there a particular order in which we should perform these changes that would require the least amount of additional work? That is, would it be more efficient to move to 8.2.x and then implement our Slony replication, or would there be less effort in implementing the Slony solution first and then upgrading to 8.2.x?
Re: [GENERAL] upgrade to 8.2.? or implement Slony, which first?
Joseph S wrote: If you don't mind the downtime, it would be simpler to upgrade to 8.2 and then worry about Slony. If you do mind, you can use Slony to do the upgrade, which needs much less downtime as you switch servers. Interesting point. It's a 24/7 environment, so downtime is unacceptable. Thanks for the response.
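For the record, the Slony-based upgrade path looks roughly like the sketch below: replicate from the 7.4 master to an 8.2 subscriber, wait for it to catch up, then switch over. Everything here (cluster name, hosts, user, table name) is invented for illustration, not taken from our setup:

```shell
# Hypothetical slonik script: node 1 is the old 7.4.17 master, node 2 the
# new 8.2 box that will take over.
slonik <<'EOF'
cluster name = upgrade;
node 1 admin conninfo = 'dbname=app host=old74 user=slony';
node 2 admin conninfo = 'dbname=app host=new82 user=slony';
init cluster (id = 1, comment = 'PostgreSQL 7.4.17 master');
store node (id = 2, comment = 'PostgreSQL 8.2 subscriber');
store path (server = 1, client = 2, conninfo = 'dbname=app host=old74 user=slony');
store path (server = 2, client = 1, conninfo = 'dbname=app host=new82 user=slony');
create set (id = 1, origin = 1, comment = 'application tables');
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'public.orders');
subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
EOF
# Once node 2 has caught up, a MOVE SET switchover promotes the 8.2 box
# with only seconds of write interruption.
```

The application code changes for 8.2 still have to be ready before the switchover, since the subscriber is the box that goes live.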
[GENERAL] plperl syntax highlighting for vi
Has anyone taken a stab at adding plperl syntax highlighting for vi?
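In case it helps anyone searching the archives, one stopgap is simply mapping a file extension onto vim's stock Perl highlighting for the function bodies you edit outside psql. A minimal sketch; the `.plperl` extension is a made-up convention, not a standard:

```shell
# Tell vim to use its built-in Perl syntax rules for PL/Perl source kept
# in files with a (hypothetical) .plperl extension.
mkdir -p ~/.vim/ftdetect
cat > ~/.vim/ftdetect/plperl.vim <<'EOF'
" PL/Perl function bodies are Perl, so reuse the stock Perl highlighting
au BufRead,BufNewFile *.plperl set filetype=perl
EOF
```

This obviously does nothing for the surrounding CREATE FUNCTION ... $$ quoting, but it covers the body while you edit it.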
Re: [GENERAL] backend process terminates
Tom Lane wrote: Geoffrey Myers <[EMAIL PROTECTED]> writes: The output from the gdb batch process may be found here: http://www.serioustechnology.com/gdbbatch.txt gdb isn't telling you the whole truth, evidently --- how'd control get from line 781 to 912 with nothing in between? Recompiling the backend with -O0 or at most -O1 would be a good idea to get a more trustworthy gdb trace. Well, there are some third-party libraries we've built into the backend that we don't have the source for. We think it may be that there's some memory corruption going on there.
Re: [GENERAL] backend process terminates
Tom Lane wrote: Geoffrey Myers <[EMAIL PROTECTED]> writes: The output from the gdb batch process may be found here: http://www.serioustechnology.com/gdbbatch.txt gdb isn't telling you the whole truth, evidently --- how'd control get from line 781 to 912 with nothing in between? Recompiling the backend with -O0 or at most -O1 would be a good idea to get a more trustworthy gdb trace. As previously noted, we are building some third-party code into the backend. We don't have the source code, so it's difficult to know what might be going on there. I don't know all the idiosyncrasies of how this works, so bear with me on this. The developer at the vendor indicated that he's narrowed the problem down to a set of wrapper routines in their code. They are named OpenFile(), CloseFile() and ReadFile(). He inquired as to whether there might be routines in the PostgreSQL code with the same names that might be causing a conflict. Sure enough, I searched the PostgreSQL source code and found routines with the same names. I don't see how this could pose a problem though, as it is my understanding that the compiler will properly address this issue. Anyone think this might be a problem?
Re: [GENERAL] backend process terminates
Martijn van Oosterhout wrote: On Tue, Aug 07, 2007 at 07:46:45AM -0400, Geoffrey wrote: I don't know all the idiosyncrasies of how this works, so bear with me on this. The developer at the vendor indicated that he's narrowed the problem down to a set of wrapper routines in their code. They are named OpenFile(), CloseFile() and ReadFile(). He inquired as to whether there might be routines in the PostgreSQL code with the same names that might be causing a conflict. Sure enough, I searched the PostgreSQL source code and found routines with the same names. I don't see how this could pose a problem though, as it is my understanding that the compiler will properly address this issue. Yes, this could cause a problem. In general, when loading a library, any external references are first resolved against the main executable, then already loaded libraries, then the library being loaded. It's all in the ELF standard, if you're interested. I will be checking them out. My compiler knowledge is a bit rusty, circa SVR4... ;) As for solutions: 1. In your third-party library, have the library built in such a way that the symbols are explicitly bound to the internal library version. There are various methods for dealing with that; it all depends on the toolchain used to build it. I suppose this product is actually several libraries that call each other? A namespace would help here. Correct on both counts. Many of the routines are wrapper routines used to assist in code portability. 2. Make sure that any externally visible symbols in libraries are always prefixed by a tag, like libpq does (almost all symbols are pq*). Running "nm -D" over the main postgres executable and your libraries should give you an idea of the scope of the problem. Hope this helps, It appears that the common routine names were causing the problem. We are currently testing new versions of these libraries where they have renamed the common routines with unique names. Thanks for the insights.
[GENERAL] reporting tools
We are looking for a reporting tool that will enable users to generate their own reports. Something like Crystal Reports. Anyone using something like this with Postgresql?
Re: [GENERAL] reporting tools
Joshua D. Drake wrote: Geoffrey wrote: We are looking for a reporting tool that will enable users to generate their own reports. Something like Crystal Reports. Anyone using something like this with Postgresql? Why not Crystal Reports? My bad. We are looking for an open source reporting tool that will enable users to generate their own reports. Something like Crystal Reports. ;)
Re: [GENERAL] reporting tools
Joshua D. Drake wrote: John DeSoi wrote: On Aug 22, 2007, at 7:21 PM, Geoffrey wrote: We are looking for an open source reporting tool that will enable users to generate their own reports. Something like Crystal Reports. ;) I was looking at a couple the other day: iReport (part of Jasper), OpenRPT, and DataVision (http://datavision.sourceforge.net/). The DataVision page has some links to other report writers. Hopefully you'll do better than I did -- I also wanted something that works on OS X. All of the above meet that criteria by using Java or GTK, but the user interfaces are hard to take if you want a typical Mac application. MS Access? Gag, cough, choke
Re: [GENERAL] reporting tools
Thanks for the various responses, I'll check them out and post my research results and our decision.
Re: [GENERAL] Recall: August Monthly techdata split fi
[EMAIL PROTECTED] wrote: Robert Wickert would like to recall the message, "August Monthly techdata split file printers for France and Denmark ". In my experience, attempting to 'recall' an email message is a fruitless endeavor. Seems to me that this is a 'Microsoft' creation. I really don't understand the purpose, because by the time you consider 'recalling' the email message, it's already sitting in 1000s of inboxes...
Re: [GENERAL] Recall: August Monthly techdata split fi
Ow Mun Heng wrote: On Tue, 2007-09-18 at 06:52 -0400, Geoffrey wrote: [EMAIL PROTECTED] wrote: Robert Wickert would like to recall the message, "August Monthly techdata split file printers for France and Denmark ". In my experience, attempting to 'recall' an email message is a fruitless endeavor. Seems to me that this is a 'Microsoft' creation. I really don't understand the purpose, because by the time you consider 'recalling' the email message, it's already sitting in 1000s of inboxes... It works for people that still have the message in their Exchange mailboxes. For those that don't, then... all is lost, unfortunately. My point exactly.
Re: [GENERAL] reporting tools
Andrus wrote: Use www.fyireporting.com Open source, uses the excellent PostgreSQL npgsql drivers. Uses the standard RDL format. I guess I should have noted that we will need to run this on Linux clients.
[GENERAL] good sql tutorial
Anyone have a recommendation for a good SQL tutorial? Looking for a book, but online would be useful as well. This is for a financial user who will need to have an understanding of SQL in order to generate reports with a report writer like OpenRPT. Thanks for any suggestions.
Re: [GENERAL] good sql tutorial
Rodrigo De León wrote: On 10/4/07, Geoffrey <[EMAIL PROTECTED]> wrote: Anyone have a recommendation for a good sql tutorial? Looking for a book, but online would be useful as well. This is for a financial user who will need to have an understanding of sql in order to generate reports with a report writer like openrpt. Thanks for any suggestions. http://www.google.com/search?q=sql+tutorial I know how to use google, I'm looking for recommendations. What an ass.
Re: [GENERAL] good sql tutorial
Ron Johnson wrote: On 10/04/07 10:28, Geoffrey wrote: Anyone have a recommendation for a good sql tutorial? Looking for a book, but online would be useful as well. This is for a financial user who will need to have an understanding of sql in order to generate reports with a report writer like openrpt. Thanks for any suggestions. My suggestion: tell him that the SQL interface is "broken". Then we have to write the reports for him, which is what we are trying to get away from.
Re: [GENERAL] good sql tutorial
[EMAIL PROTECTED] wrote: Scott Marlowe wrote: On 10/4/07, Geoffrey <[EMAIL PROTECTED]> wrote: Anyone have a recommendation for a good sql tutorial? Looking for a book, but online would be useful as well. I'd recommend The Art of SQL and Joe Celko's books. None are online that I know of, but they're affordable and quite informative. I have found this to be an excellent book: O'Reilly's Learning SQL http://www.amazon.com/Learning-SQL-Alan-Beaulieu/dp/0596007272 Thanks Ken, I'll check it out as well.
Re: [GENERAL] good sql tutorial
Rodrigo De León wrote: On 10/4/07, Geoffrey <[EMAIL PROTECTED]> wrote: I know how to use google, I'm looking for recommendations. What an ass. If you found my reply to be lacking, you can say so without being rude... Stating the obvious google search to me is just as rude. I was looking for recommendations based on others' experiences.
Re: [GENERAL] good sql tutorial
Ron Johnson wrote: On 10/04/07 11:06, Geoffrey wrote: Ron Johnson wrote: On 10/04/07 10:28, Geoffrey wrote: Anyone have a recommendation for a good sql tutorial? Looking for a book, but online would be useful as well. This is for a financial user who will need to have an understanding of sql in order to generate reports with a report writer like openrpt. Thanks for any suggestions. My suggestion: tell him that the SQL interface is "broken". Then we have to write the reports for him, which is what we are trying to get away from. You'll see his SQL, squeal like a girl and then forbid him from ever touching the database again. I'm just saving you the intermediary steps. I hear you, but then again, it's not my decision.. :)
Re: [GENERAL] good sql tutorial
Scott Marlowe wrote: On 10/4/07, Geoffrey <[EMAIL PROTECTED]> wrote: Anyone have a recommendation for a good sql tutorial? Looking for a book, but online would be useful as well. I'd recommend The Art of SQL and Joe Celko's books. None are online that I know of, but they're affordable and quite informative. Thanks Scott, I'll check it out.
Re: [GENERAL] good sql tutorial
Rodrigo De León wrote: On 10/4/07, Geoffrey <[EMAIL PROTECTED]> wrote: Stating the obvious google search to me is just as rude. I was looking for recommendations based on others' experiences. That was not my intention, so I'm sorry if you felt that way. My apologies then.
Re: [GENERAL] good sql tutorial
Scott Ribe wrote: I think it's out of print, unfortunately, but by far the best quick intro I've ever seen is: "The Essence of SQL: A Guide to Learning Most of SQL in the Least Amount of Time" by David Rozenshtein. Must be a pretty good book. Amazon has one used copy for $119. abebooks.com shows one used at $96, three used at $549.05. :) Thanks for the pointer though.
Re: [GENERAL] reporting tools
Andrus wrote: I guess I should have noted that we will need to run this on Linux clients. Geoffrey, You can run FYIReporting engine in Linux using MONO ( www.go-mono.com ) Thanks, we're looking for something that will run natively on Linux.
Re: [GENERAL] reporting tools
Bill Moran wrote: Geoffrey <[EMAIL PROTECTED]> wrote: Andrus wrote: I guess I should have noted that we will need to run this on Linux clients. Geoffrey, You can run FYIReporting engine in Linux using MONO ( www.go-mono.com ) Thanks, we're looking for something that will run natively on Linux. I read this, almost deleted it, read it again ... Just in case there's confusion, MONO + FYIReporting _is_ native on Linux. At least, as much so as Java on Linux is. We are not interested in using any Microsoft technologies, or technologies based on Microsoft technologies. It's a philosophical position.
Re: [GENERAL] reporting tools
Ned Lilly wrote: On 10/14/2007 6:41 PM Geoffrey wrote: We are not interested in using any Microsoft technologies, or technologies based on Microsoft technologies. It's a philosophical position. Geoffrey, have you looked at OpenRPT? http://sourceforge.net/projects/openrpt Yes, and it is looking like the app we will be going with. Thanks.
Re: [GENERAL] reporting tools
Robert James wrote: Can you clarify the need / value added for a reporting tool, over just running queries and packaging the output in HTML (with a little CSS for styling, you can get near PDF quality)? This can be done in SQL and a tad of PHP (or Ruby). We're looking at having someone generate reports with a GUI that requires no more than SQL knowledge.
Re: [GENERAL] 1 cluster on several servers
Willy-Bas Loos wrote: I'll take that as a "no". What I mean is to actually run exactly one cluster (no replicated copy) on more than one server. Of course, if that were possible, why would people bother with replication.. I guess it is irrational to suggest that it would be possible, since each server would at least need to have its own copy of the DBMS software etc., or it would cease to be a separate server. I think you need to better identify what you're trying to do. I can think of a couple of different solutions based on the limited info provided. You've already said you don't want replication. We have a scenario where we have a data silo that is shared between two servers, so the data exists in one place. To keep things simple, if one server fails, the postmasters running on that server are started on the other server. This is a hot/hot failover implementation, as we have multiple postmasters running. You could implement a hot/warm failover solution if you're running a single postmaster. Finally, you might be thinking of something like a Beowulf cluster, where multiple machines function as a single machine. I can't help you with that scenario, as I don't have any experience with it and PostgreSQL.
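For anyone curious, the failover step in the shared-silo setup described above boils down to something like this. Device, mount point, port, and log path are all invented for illustration; the real logic lives in HA scripts:

```shell
# Hypothetical failover for one postmaster: the surviving server mounts the
# shared silo and starts the same postmaster against the same data directory.
# A given data directory must only ever be served by one postmaster at a time,
# so the failed server must be fenced off before this runs.
mount /dev/silo/app1 /pgdata/app1
pg_ctl -D /pgdata/app1/data -o "-p 5433" -l /var/log/pgsql/app1.log start
```

The point is that the data never moves; only the postmaster process does.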
Re: [GENERAL] Slony replication
Pedro Doria Meunier wrote: Hi All, I would like to implement DB replication with Slony but with a slave that will not be always available. The master would have to check first of the slave's availability and then start syncing... Is this at all possible, or am I just raving? : ) I am quite new to Slony as well, but one of the first requirements the docs state is: Thus, examples of cases where Slony-I probably won't work out well would include: * Sites where connectivity is really "flakey" * Replication to nodes that are unpredictably connected. So I suspect Slony is not a solution for your effort. See: http://slony.info/documentation/slonyintro.html#INTRODUCTION
Re: [GENERAL] slony question
Scott Marlowe wrote: On Dec 10, 2007 2:19 PM, Chris Browne <[EMAIL PROTECTED]> wrote: [EMAIL PROTECTED] ("Josh Harrison") writes: Does slony support postgres major version upgrade ? ie., will it replicate between different major versions? Yes, that's one of the major "use cases" for Slony-I. Version 1.0 and 1.1 support (loosely) from PostgreSQL 7.3 to 8.1-ish. Version 1.2 dropped 7.3 support, but supports up to (just about; there's a patch not in a 1.2 release that is needed for late-breaking 8.3 changes) 8.3. Didn't 1.2 drop support for pg 7.4 as well? I thought I remembered reading that some time ago. Not according to the docs: http://slony.info/documentation/requirements.html Section 3.1, 3rd bullet: You also need a recent version of PostgreSQL source. Slony-I depends on namespace support so you must have PostgreSQL version 7.3.3 or newer to be able to build and use Slony-I. It says the same thing in the Slony 1.2.12 docs rpm.
Re: [GENERAL] Hijack!
Steve Atkins wrote: In the business world it's common to top-post and not cut previous content - and often appropriate, as it tends to be a communication between a smaller number of people, and the uncut content provides context for future reference. And it is quite common for tractor trailers to take wide right turns at intersections, but it's quite foolish for a car to do the same. There are approaches that are accepted depending on the vehicle and environment in use. Those who rant about anyone who top posts, or state that you should never top-post are mostly clueless or arrogant, or are making over-broad generalizations. Actually, it appears to me that those folks generally are clued into the acceptable approach in the environment they are posting. Also, they probably spend more time posting to technical lists and not in clueless corporate speak html email conversations.
Re: [GENERAL] top posting
Collin Kidder wrote: I have to suffer through dealing with people like the two of you quoted above. You can deal with people who'd like to top post. Anything else is just being a spoiled baby who can't deal with minor issues. If all the energy spent crying about top posting were used to fuel cities, none of us would be paying for power right now. Sorry to be so blunt, but it really irritates me when people cry like 4 year olds about top posting. It's not that bad, get over it.

If it's not brought to the attention of the masses, then it will simply grow, and it simply is not the way it's done on this list. Get used to it. Now who's doing the 4 year old crying??
Re: [GENERAL] top posting
Joshua D. Drake wrote: Geoffrey wrote: Collin Kidder wrote: I have to suffer through dealing with people like the two of you quoted above. You can deal with people who'd like to top post. Anything else is just being a spoiled baby who can't deal with minor issues. If all the energy spent crying about top posting were used to fuel cities, none of us would be paying for power right now. Sorry to be so blunt, but it really irritates me when people cry like 4 year olds about top posting. It's not that bad, get over it. If it's not brought to the attention of the masses, then it will simply grow, and it simply is not the way it's done on this list. Get used to it. Now who's doing the 4 year old crying?? There is no reason for this discussion to become rude. It has been productive on both sides thus far. Let's keep it that way.

I felt I was 'responding in kind' wrt the 'it really irritates me when people cry like 4 year olds about top posting. It's not that bad, get over it.' posting. My apologies if I've taken it to a level of rude that it had not already reached.
Re: [GENERAL] to realise the dark side of Microsoft SQL Server...?
Johan van Zyl wrote: Please elaborate! (Many a true word spoken in jest) "That is until I can convince my new employer to realise the dark side of Microsoft SQL Server."

It's the product of an evil company? Let's see:

1. They are virtually a monopoly.
2. They use their monopoly to tie folks to their product upgrade merry-go-round.
3. They purchase companies to destroy competition.
4. They generate FUD rather than innovate.
5. The Novell agreement?
6. They coerce computer makers not only to preload their operating system, but to promote it.
7. The OOXML debacle?

The list continues, but I suspect you get my point.
Re: [GENERAL] to realise the dark side of Microsoft SQL Server...?
Joshua D. Drake wrote: Geoffrey wrote: Johan van Zyl wrote: Please elaborate! (Many a true word spoken in jest) "That is until I can convince my new employer to realise the dark side of Microsoft SQL Server." It's the product of an evil company? Let's see: 1. they are virtually a monopoly 2. they use their monopoly to tie folks to their product upgrade merry-go-round. 3. They purchase companies to destroy competition. 4. They generate FUD rather than innovate. 5. The Novell agreement? 6. They coerce computer makers not only to preload their operating system, but to promote it. 7. the OOXML debacle? The list continues, but I suspect you get my point. You are assuming most people care.

No, I'm trying to educate people. I was answering the question of the previous poster. If people begin to understand the issues, they will begin to care. At least some of them.

1. You have to pay for the OS. 2. You have to pay for the database. 3. You have to pay for any "extra" feature. That is where you start.

Agreed.
Re: [GENERAL] [OT] Slony (initial) Replication - Slow
Ow Mun Heng wrote: {resend as don't see it on the list after 4 hours} I'm just wetting my hands with slony, and during the setup of the slave, I did a dump and restore of the master DB to the slave DB.

You don't need to do this.

However, during the startup of slony, I noticed that it issues a truncate command to the (to be) replicated table. Hence, this means that there's no such need for me to do a dump/restore in the 1st place. Can someone confirm this? It _is_ taking a long time (for slony) to do the \copy (~60GB in multiple tables being replicated, including (on the fly) index creation)

This is correct. You want an empty replication database. When you start replication, slony will bring the master and slave into sync.
Re: [GENERAL] Memory on 32bit machine
[EMAIL PROTECTED] wrote: On Monday, 7 January 2008 12:56, Florian Weimer wrote: Or would you rather vote for 64bit because there are no problems anymore and postgresql runs fine on 64bit debian. We haven't run into any trouble with iptables on Debian etch, running on amd64 hardware. But we use only fairly standard iptables functionality. Ok, thanks for this information. It was sarge when I used iptables with a 64bit kernel, and it even failed to start up, so I might give 64bit another try. Are there any other recommendations whether to use a 64bit or 32bit OS with postgresql? I just want to use 64bit if it is as stable as 32bit.

I'm curious as to why you would run iptables on a database server. My expectation would be that your database machine would be behind a dedicated firewall.
[GENERAL] shouldn't all tables have Object Descriptions?
We've stumbled upon an issue that we don't understand. I'm assuming that any given table should contain 'Object Descriptions.' For example, we have two tables as follows:

mwv=# \dd carr
            Object descriptions
 Schema | Name | Object |  Description
--------+------+--------+--------------
 public | carr | table  | Carrier File
(1 row)

mwv=# \dd avlds
           Object descriptions
 Schema | Name | Object | Description
--------+------+--------+-------------
(0 rows)

Question is, why does the avlds table not have an entry in the Object descriptions?
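For what it's worth, \dd only lists descriptions that someone has explicitly attached with COMMENT ON; a table with no comment simply shows no row there. A minimal sketch (the description text below is hypothetical):

```sql
-- attach a description; afterwards \dd avlds shows one row
COMMENT ON TABLE avlds IS 'Available loads';

-- setting the comment to NULL removes it again
COMMENT ON TABLE avlds IS NULL;
```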
[GENERAL] what is it that \d actually does
It seems that I recall there is a way to display the actual select statement that is executed when you execute the \d command. Anyone?
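For the record, psql will echo the catalog queries behind its backslash commands when started with -E (or by setting the ECHO_HIDDEN variable inside an existing session); the query text shown below is elided, not reproduced:

```
$ psql -E mwv
mwv=# \set ECHO_HIDDEN on
mwv=# \d carr
********* QUERY **********
SELECT ... FROM pg_catalog.pg_class c ...
**************************
```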
Re: [GENERAL] OT - pg perl DBI question
David Fetter wrote: On Tue, Jan 29, 2008 at 05:10:00AM -0800, Glyn Astill wrote: well its using the pgsql_replication_check.pl, which does: -- use Pg; I wouldn't trust that library or anything that depends on it if I were you. It's been unmaintained for a *very* long time.

Are you speaking from personal experience, or just of the lack of maintenance?
[GENERAL] ERROR: relation with OID 1322527 does not exist
We've just started seeing these errors. Research I've done seems to indicate that it's related to temp tables. Question is, we didn't start seeing these errors until we started using slony to replicate our data. The errors only showed up shortly after the initial replication of the data was complete. That is, once the replication server 'caught up' with the production server. I posted to the slony list about this issue, but didn't get any bites.
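For context, the classic temp-table route to this error is a PL/pgSQL function whose first execution cached a plan against a temp table that was later dropped and recreated; the cached plan still points at the old table's OID. The usual workaround on old releases is to route the statement through EXECUTE so it is re-planned on every call. A sketch in 7.4-era quoting (function and table names hypothetical):

```sql
CREATE OR REPLACE FUNCTION log_value(integer) RETURNS void AS '
BEGIN
    -- EXECUTE builds the plan at run time, so it does not pin the
    -- OID of a temp table created earlier in the session
    EXECUTE ''INSERT INTO scratch VALUES ('' || $1 || '')'';
    RETURN;
END;
' LANGUAGE plpgsql;
```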
[GENERAL] is a unique key on null field bad?
So, we are trying to track down some problems we're having with an implementation of slony on our database. I've posted to the slony list about this issue, but I wanted to get a more generic response from the perspective of postgresql. Is it a 'bad thing' to have a unique key on a field that is often times null? This application has been running along just fine for a couple of years now, but when we try to implement a slony replication solution, this one table consistently has inconsistent data between the primary node and the slave. The problem we are having with slony seems to be related to a table that has just such a key, so we are trying to figure out if this is causing the problem.
Re: [GENERAL] is a unique key on null field bad?
Peter Childs wrote: On 20/02/2008, Geoffrey <[EMAIL PROTECTED]> wrote: So, we are trying to track down some problems we're having with an implementation of slony on our database. I've posted to the slony list about this issue, but I wanted to get a more generic response from the perspective of postgresql. Is it a 'bad thing' to have a unique key on a field that is often times null? This application has been running along just fine for a couple of years now, but when we try to implement a slony replication solution, this one table consistently has inconsistent data between the primary node and the slave. The problem we are having with slony seems to be related to a table that has just such a key, so we are trying to figure out if this is causing the problem. It's not a problem as such, but it will not exactly be unique, as there could be multiple records with null values in that table. So it can't be the primary key (hence why Slony has a problem).

We aren't using this as the primary key, so would this still pose a problem for slony? (indexes on this table)

Indexes:
    "tract_pkey" primary key, btree (recid)
    "tract_order_num_key" unique, btree (order_num)
    "tract_assigned" btree (assigned)
    "tract_code" btree (code)
    "tract_comments" btree (comments)
    "tract_compound_1" btree (code, old_order_num)
    "tract_date_avail" btree (date_avail)
    "tract_dest_state" btree (dest_state)
    "tract_dest_zone" btree (dest_zone)
    "tract_driver" btree (driver)
    "tract_orig_state" btree (orig_state)
    "tract_orig_zone" btree (orig_zone)
    "tract_prebooked" btree (prebooked)
    "tract_tractor_num" btree (tractor_num)
    "tract_trailer_num" btree (trailer_num)

However, if you want to ensure that the field is either unique or null (i.e. not known), then this is a good way of doing it, for example with car number plates, where the details are not known yet but must be unique once they are known...
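A sketch of the behaviour under discussion, with the table and column names borrowed from the listing above: a unique btree index allows any number of NULLs to coexist, because NULL never compares equal to NULL.

```sql
CREATE TABLE tract (recid serial PRIMARY KEY, order_num text);
CREATE UNIQUE INDEX tract_order_num_key ON tract (order_num);

INSERT INTO tract (order_num) VALUES (NULL);  -- ok
INSERT INTO tract (order_num) VALUES (NULL);  -- also ok: NULLs don't collide
INSERT INTO tract (order_num) VALUES ('A1');  -- ok
INSERT INTO tract (order_num) VALUES ('A1');  -- fails: duplicate key
```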
Re: [GENERAL] Disable Triggers
Tom Lane wrote: Andrew Sullivan <[EMAIL PROTECTED]> writes: On Thu, Feb 21, 2008 at 11:44:25AM -0500, Terry Lee Tucker wrote: table where, when the given trigger does fire, it checks for an entry in the table at the top of the trigger and takes the appropriate action. The problem is that the solution for disabling all triggers is used in several utility programs and I'm trying to avoid changing a bunch of code. I appreciate your input. Well, you could try rewriting the function to disable all but the Slony trigger. But there's something else wrong here. I seem to recall that we found some code path where reltriggers wasn't checked properly anyway, so disabling triggers wouldn't work exactly as you are doing it. No, reltriggers is reliable as a disable-all-triggers mechanism; when it's zero the code won't even look in pg_trigger. But you can't use it to disable just some triggers. I think the bug you are remembering is that there's always been a pg_trigger.tgenabled field, but it wasn't always honored everywhere, so it was unreliable as a selective-disable mechanism until some recent release (I don't recall which, but I'm afraid 7.4 is too old).

How might we find out which release it was fixed in? Back patching 7.4.19 with the fix might be easier than trying to move up to the fixed version.
[GENERAL] disabling triggers, constaints and so on
We are still in a pickle with trying to resolve our trigger issues without affecting slony triggers. The point is, we need to be able to disable triggers, check constraints, and foreign-key constraints without affecting slony triggers in certain situations. This is all running on 7.4.19, thus it's our understanding that using tgenabled is not going to be a solution. Making our triggers smarter doesn't get us all the way there. Suggestions?
Re: [GENERAL] Survey: renaming/removing script binaries (createdb, createuser...)
Shane Ambler wrote: Greg Smith wrote: And if anybody suggests putting a "_" in something I have to type all the time, I will stick my fingers in my ears and start yelling until they stop. Bad enough I have to type pg_ctl a few times every day now. +10 on hating "_"

+20. If need be, I'd go with - (dash) over _ (underbar)
Re: [GENERAL] Disable Triggers
Greg Sabino Mullane wrote: I see the following in the documentation for pg_trigger related to tgenabled: "Controls in which session_replication_role modes the trigger fires. O = trigger fires in "origin" and "local" modes, D = trigger is disabled, R = trigger fires in "replica" mode, A = trigger fires always." My question is: When tgenabled is set to "D", how does that setting interact with session_replication_role, and is there a way to use tgenabled with a setting of "D" to prevent a particular trigger from firing? Using ALTER TABLE to disable the trigger won't work because the whole table is locked during the transaction, and I only want the disabled trigger to apply to the current transaction in the current session. If you simply want to ignore all triggers, just use a 'replica' role. When done, switch it back to 'origin' (or your default, which should be origin). If you want to fire only a single trigger, set it to 'always' mode and switch to 'replica'. If you want to fire all triggers *except* a certain trigger, set that trigger to replica mode and leave the session_replication_mode unchanged (default/origin). You should be using ALTER TABLE and not worry about changing tgenabled yourself, in case it wasn't obvious. You should be able to make permanent changes and then just use session_replication_role to control how it acts in a particular transaction.

The issue at hand (I work with the OP) is that our current application disables all triggers quite often. Enter Slony: we want to replicate. So, what we need to do is disable ALL triggers EXCEPT the slony triggers.
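Greg's "fire only a single trigger" recipe, sketched for an 8.3+ server (the table and trigger names here are hypothetical):

```sql
-- one-time setup: make the Slony log trigger fire regardless of role
ALTER TABLE mytable ENABLE ALWAYS TRIGGER _mycluster_logtrigger;

-- in the maintenance session only:
SET session_replication_role = replica;  -- ordinary triggers now skipped
-- ... bulk updates here still feed the Slony log trigger ...
SET session_replication_role = DEFAULT;
```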
Re: [GENERAL] Disable Triggers
Terry Lee Tucker wrote: Greetings: We have been working diligently toward integrating Slony into our production databases. We've been having trouble with various tables, although being replicated perfectly in the initial replication stage, afterwards getting out of sync. I have finally figured out what the problem is. We have a Perl process that continually updates certain columns across all databases. That Perl process calls a function we have written called disable_triggers which updates pg_class, setting reltriggers to 0 for the given table, and then later, after the work is complete, resetting reltriggers to the original value. Unfortunately, during this process, the Slony trigger is disabled as well, which is causing our problem. My question is this: how would I go about changing my function so that all the triggers EXCEPT the Slony trigger would be disabled? Any ideas? Version: PostgreSQL 7.4.19 on i686-redhat-linux-gnu, compiled by GCC gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-9)

Methinks you forgot to mention that you are working on implementing this on PostgreSQL 8.3.1.
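For readers following along, the disable_triggers pattern described above is the usual 7.4-era pg_class hack, sketched below with a hypothetical table name; zeroing reltriggers disables every trigger on the table, which is exactly why the Slony log trigger stops firing too:

```sql
-- disable all triggers on the table by zeroing the trigger count
UPDATE pg_class SET reltriggers = 0 WHERE relname = 'tract';

-- ... do the bulk work with no triggers firing (Slony's included!) ...

-- restore the count from pg_trigger
UPDATE pg_class
   SET reltriggers = (SELECT count(*) FROM pg_trigger
                       WHERE pg_trigger.tgrelid = pg_class.oid)
 WHERE relname = 'tract';
```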
Re: [GENERAL] PostgresSQL on a networked drive with multiple users
J Ottery wrote: Thanks Craig for making me look like an idiot. I feel bad now.

You don't look like an idiot, you look like someone who has just learned something. If we don't learn from our mistakes, then we are idiots... ;)
Re: [GENERAL] In the belly of the beast (MySQLCon)
Joshua D. Drake wrote: Hello, I am currently chilling at MySQLCon. If any other Elephant riders who are doing a little Dolphin hunting are about... I am in Ballroom E, about to give a talk on what MySQL can learn from PostgreSQL.

Check the room for barrels of tar and feathers before you start your presentation...
Re: [GENERAL] In the belly of the beast (MySQLCon)
Alan Hodgson wrote: On Friday 18 April 2008, Chris Browne <[EMAIL PROTECTED]> wrote: I note in the blog that the "in place upgrade" issue came up. (Interesting to observe that it *also* came up pretty prominently in the intro session at PG East... This is obviously a matter of Not Inconsiderable Interest...) Upgrades are certainly the biggest PostgreSQL operational issue for me. "Uhh, boss, I need to bring the database down for 3 days to do an upgrade. Yes, that means clients can't do anything during that time." Boss - [EMAIL PROTECTED]@#$# Hence why I'm still running 8.1

What about the: 8.1 -> slony -> 8.3 -> switch users to 8.3 databases solution?
Re: [GENERAL] In the belly of the beast (MySQLCon)
Joshua D. Drake wrote: On Fri, 18 Apr 2008 13:54:52 -0400 Geoffrey <[EMAIL PROTECTED]> wrote: What about the: 8.1 -> slony -> 8.3 switch users to 8.3 databases solution. Is it? What is your transactional velocity? How long will the initial sync transaction have to run? You know vacuum isn't working while that transaction is open, right? Are you going to have to set up a dozen different replicated sets in order to get it done? Slony works in "some" cases; it does not work in all, nor in the most critical (in terms of large sets with minimal downtime requirements).

I don't know that slony is the answer. It was more of a question than an answer. We are hoping to use that solution to migrate from 7.4.19 to 8.3.1. The primary reason is to reduce downtime.
Re: [GENERAL] In the belly of the beast (MySQLCon)
Joshua D. Drake wrote: On Fri, 18 Apr 2008 14:30:18 -0400 Geoffrey <[EMAIL PROTECTED]> wrote: I don't know that slony is the answer. It was more of a question than an answer. We are hoping to use that solution to migrate from 7.4.19 to 8.3.1. The primary reason is to reduce downtime. Well :) That is why I asked the questions I did. It very well may be the answer. It might not. For example, what size is your database? How many transactions per hour are you doing?

We have 13 separate databases, not terribly large; my WAG would be that our largest database might have 30 million rows. I couldn't honestly tell you how many trans/hour we are doing without doing a little research.
Re: [GENERAL] In the belly of the beast (MySQLCon)
Alvaro Herrera wrote: Geoffrey wrote: Joshua D. Drake wrote: On Fri, 18 Apr 2008 14:30:18 -0400 Geoffrey <[EMAIL PROTECTED]> wrote: I don't know that slony is the answer. It was more of a question than an answer. We are hoping to use that solution to migrate from 7.4.19 to 8.3.1. The primary reason is to reduce downtime. Well :) That is why I asked the questions I did. It very well may be the answer. It might not. For example, what size is your database? How many transactions per hour are you doing? We have 13 separate databases, not terribly large; my WAG would be that our largest database might have 30 million rows. I couldn't honestly tell you how many trans/hour we are doing without doing a little research. I find it pretty unlikely that Slony would be unable to help you upgrade here. Obviously you can upgrade one database at a time. Also, mostly it's not the database size that's a concern, but rather the size of the largest table.

Well, we have done some extensive testing and it seems feasible. We've got more testing to do, so we've not decided. We hope to use slony to replicate the data in order to have a hot spare solution as well.
Re: [GENERAL] In the belly of the beast (MySQLCon)
Joshua D. Drake wrote: On Fri, 18 Apr 2008 14:59:34 -0400 Alvaro Herrera <[EMAIL PROTECTED]> wrote: I find it pretty unlikely that Slony would be unable to help you upgrade here. Obviously you can upgrade one database at a time. Also, mostly it's not the database size that's a concern, but rather the size of the largest table. As I recall (I could be wrong), Slony syncs the whole set as a single transaction. So if he has his entire database as a set, he may have a problem regardless of the largest or smallest table. I would also agree that 30 million rows is likely not a problem, but he should still check his velocity.

We have actually replicated all 13 databases and there was no apparent impact on production. This was in the evening, but we are a 24/7 shop. I'm really not altogether sure what you mean by transaction velocity. I'm pretty sure the electrons are traveling pretty close to the speed of light. ;)
Re: [GENERAL] In the belly of the beast (MySQLCon)
Alvaro Herrera wrote: Geoffrey wrote: I'm really not altogether sure what you mean by transaction velocity. I'm pretty sure the electrons are traveling pretty close to the speed of light. ;) Actually, electrons themselves flow rather slowly -- millimeters per second according to Wikipedia. The signal propagation is a bit faster: "typically 75% of light speed", Wikipedia again.

Well, with respect to the velocity of my car, they're much closer to the speed of light. It's all relative. In that comparison, the difference is certainly imperceptible. Let's not take this to the quantum level, okay? :)

Anyway, he means "transaction rate". But the actual useful figure is further removed from that, because we're mostly only interested in write transactions, not read-only (though those still have an effect in versions earlier than 8.3.)

Understood. I don't think we'll have a problem, based on our testing thus far. We have to get to 8.3 first though, and that's on the front burner.
Re: [GENERAL] Advice for "hot-swapping" databases
Kynn Jones wrote: Hi. I'm trying to automate the updating of a database. This entails creating the new database from scratch (which takes a long time), under a different name, say mydb_tmp, and once this new database is ready, doing a "hot swap", i.e. renaming the existing database to something like mydb_20080613 and renaming the newly created database mydb. I have automated everything up to the creation of mydb_tmp. (This is done with an unholy mix of Perl and shell scripts.) I'm trying to figure out the best way to perform the hot swap.

One suggestion I would make is that, as long as your database schema hasn't changed, you should be able to keep a copy of the 'empty' database around and then, rather than building it from scratch, simply drop the backup in place. I've done this very recently when testing our Slony setup. I create the database structure (initdb, createdb, load schema), then, after shutting down the postmaster, create a tarball of the whole directory. This has saved me a good bit of time when having to drop databases and then recreate the empty ones. I'm by no means an expert, so others, please feel free to shoot holes in this solution if it is not viable.
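The swap step itself can be done with two renames (database names taken from the post); note that a rename fails if any session is still connected to the database being renamed, so both statements must run from a different database with the application paused:

```sql
ALTER DATABASE mydb RENAME TO mydb_20080613;
ALTER DATABASE mydb_tmp RENAME TO mydb;
```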
Re: [GENERAL] Losing data
Garry Saddington wrote: On Thursday 19 June 2008 18:15, Alan Hodgson wrote: On Thursday 19 June 2008, Garry Saddington <[EMAIL PROTECTED]> wrote: I read in a Postgres manual that the hard disk may report to the OS that a write has occurred when it actually has not; is this possible? Yeah. But unless the power suddenly turned off, that wouldn't cause data loss. Oh, and the problem has been intermittent. Another thing that happened this morning is that Postgres had today as 18/06/2008 when in fact it was 19/06/2008, and the OS reported this correctly. Restarting postgres sorted it; could this be the problem? I strongly suspect the problem is between the keyboard and the chair.

I'd love to agree, but I have seen this first hand as a user! In any case, however, if PostgreSQL reported the transaction complete and the machine didn't experience any hardware problems (like sudden power or disk failure), I would certainly not suspect PostgreSQL as the source of the problem.

What has happened to the reports then? I have used this combination of Zope and Postgres for 5 years with no problems like this before, and we have written one complete set of reports on this server in the past 6 weeks. The problem seems to have started last Friday, when reports started to go missing.

What has changed prior to Friday?
Re: [GENERAL] Please remove me from the list!
Marcelo Giovane wrote: Please, remove me from the list!

Generally, when you find your way into a location, you are expected to find your way out...
Re: [GENERAL] Please remove me from the list!
Raymond O'Donnell wrote: On 29/05/2009 00:14, Guy Rouillier wrote: You may remove yourself. At the bottom of *every* message sent from the mailing list server are these instructions: Actually, for some reason, the instructions don't seem to have come through on the OP's email. I was poised to say the same as you until I realised this. :-) They have come through on every other post, however. I don't really want to drag this out into a discussion, but the instructions are provided to you when you subscribe. -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] pg_usleep() location
I'm trying to build a customized pg_standby.c and I cannot locate pg_usleep(). Can anyone point me to the source file that contains this function? (or even better, the associated Red Hat RPM?) -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] pg_usleep() location
Richard Huxton wrote: Geoffrey wrote: I'm trying to build a customized pg_standby.c and I cannot locate pg_usleep(). Can anyone point me to the source file that contains this function? (or even better, the associated Red Hat RPM?) port/pgsleep.c If you're a perl guy, it might be worth getting a copy of "ack" too. perl would likely be preferred, can you point me to ack? -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] pg_usleep() location
Richard Huxton wrote: Geoffrey wrote: Richard Huxton wrote: Geoffrey wrote: I'm trying to build a customized pg_standby.c and I cannot locate pg_usleep(). Can anyone point me to the source file that contains this function? (or even better, the associated Red Hat RPM?) port/pgsleep.c If you're a perl guy, it might be worth getting a copy of "ack" too. perl would likely be preferred, can you point me to ack? http://search.cpan.org/~petdance/ack-1.88/ Probably an RPM for it too. Not sure what you'd search for. Searches code with Perl regexps, knows about code extensions, does nice colouring in terminal and switches to a simpler layout when piped/redirected to a file. Knows to skip .svn directories etc.; don't know how I ever lived without it :-) Thanks muchly, I'll check it out. -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] warm standby with WAL shipping
I'm trying to set up a warm standby via WAL shipping. I'm digging through the source of pg_standby.c to determine the proper method. Since we are using scp to access the archive files, pg_standby doesn't provide a solution to our problem. From reading the comments in pg_standby.c, I see: 'Check for initial history file: always the first file to be requested. It's OK if the file isn't there - all other files need to wait' My problem is, I never see a *.history file, thus my script sits in a loop looking for it. I see the WAL files showing up on the archive server, but I don't see a *.history file. What am I missing? -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] warm standby with WAL shipping
Joshua D. Drake wrote: On Tue, 2009-06-02 at 15:21 -0400, Geoffrey wrote: My problem is, I never see a *.history file, thus my script sits in a loop looking for it. I see the WAL files showing up on the archive server, but I don't see a *.history file. What am I missing? pg_standby itself isn't a solution for warm standby. It is a component thereof. Also don't use SCP. Use rsync. Take a look at walmgr or PITRTools; it will make your life easier. Okay, I'm attempting to check out PITRTools: svn co https://projects.commandprompt.com/public/pitrtools/repo/branches/1.2 svn: Unrecognized URL scheme for 'https://projects.commandprompt.com/public/pitrtools/repo/branches/1.2' What am I missing here? -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] warm standby with WAL shipping
Geoffrey wrote: Joshua D. Drake wrote: On Tue, 2009-06-02 at 15:21 -0400, Geoffrey wrote: My problem is, I never see a *.history file, thus my script sits in a loop looking for it. I see the WAL files showing up on the archive server, but I don't see a *.history file. What am I missing? pg_standby itself isn't a solution for warm standby. It is a component thereof. Also don't use SCP. Use rsync. Take a look at walmgr or PITRTools; it will make your life easier. Okay, I'm attempting to check out PITRTools: svn co https://projects.commandprompt.com/public/pitrtools/repo/branches/1.2 svn: Unrecognized URL scheme for 'https://projects.commandprompt.com/public/pitrtools/repo/branches/1.2' What am I missing here? Nevermind, ID 10T error. -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] warm standby with WAL shipping
Joshua D. Drake wrote: On Tue, 2009-06-02 at 15:21 -0400, Geoffrey wrote: My problem is, I never see a *.history file, thus my script sits in a loop looking for it. I see the WAL files showing up on the archive server, but I don't see a *.history file. What am I missing? pg_standby itself isn't a solution for warm standby. It is a component thereof. Also don't use SCP. Use rsync. Take a look at walmgr or PITRTools; it will make your life easier. So, I'm looking at the PITRTools stuff, but I really want to understand how this all works. I'm shipping my WAL files fine. I've been able to recreate my database from a backup and the accumulated WAL files. The problem with my current process is, as noted, my script keeps looking for the *.history file, but never sees it. I see the list of files on my archive machine growing, so I know WAL shipping is working. I've created my backup as instructed in the docs. I don't know how/where the *.history file is generated and/or why it's not showing up in the archive directory with the WAL files. -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] warm standby with WAL shipping
Joshua D. Drake wrote: On Tue, 2009-06-02 at 15:21 -0400, Geoffrey wrote: My problem is, I never see a *.history file, thus my script sits in a loop looking for it. I see the WAL files showing up on the archive server, but I don't see a *.history file. What am I missing? pg_standby itself isn't a solution for warm standby. It is a component thereof. Also don't use SCP. Use rsync. Take a look at walmgr or PITRTools; it will make your life easier. I still don't understand why the pg_standby code is looking for the *.history file. Apparently others have seen this same behavior (according to Google), yet I don't see any definitive answer. I don't mind looking at other tools, but I want to understand this process. I can't find what files are processed in what order, although according to the pg_standby.c code, there are at least three different file types it's looking for: *.history, *.backup, and the standard WAL files. Is this documented anywhere? It appears PITRTools uses the pg_standby binary, thus I'm still confused as to how these files are processed. -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] warm standby with WAL shipping
Joshua D. Drake wrote: On Tue, 2009-06-02 at 19:44 -0400, Geoffrey wrote: pg_standby itself isn't a solution for warm standby. It is a component thereof. Also don't use SCP. Use rsync. Take a look at walmgr or PITRTools; it will make your life easier. I still don't understand why the pg_standby code is looking for the *.history file. Apparently others have seen this same behavior (according to Google), yet I don't see any definitive answer. http://www.postgresql.org/docs/8.3/static/warm-standby.html http://www.postgresql.org/docs/8.3/static/continuous-archiving.html#BACKUP-ARCHIVING-WAL I think these will have what you want. I have read this documentation, unfortunately, a couple of times. Is the section '24.3.4 Timelines' referencing these *.history files? Is there any documentation out there that specifically lays out what the files are and how they are processed? Or, is that what I'm looking at in 24.3.4? It appears PITRTools uses the pg_standby binary, thus I'm still confused as to how these files are processed. Yes, it does use pg_standby. It just wraps everything that is missing for warm standby into a single utility. Looking at the pg_standby.c I have, I don't see a reference to scp or rsync. If I use PITRTools, is there a different version, or do I need to modify this version to use rsync? -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] warm standby with WAL shipping
Thank you Greg for taking the time to explain this as thoroughly as you have. I have found a logic problem in my code. I still don't know if we will use pg_standby as the wrapper code in PITRTools is Python and we are not a Python shop. Kinda want to stick with what we know (C, perl, shell). I'm certainly looking at rsync rather than scp, which really makes more sense. Greg Smith wrote: On Tue, 2 Jun 2009, Geoffrey wrote: The problem with my current process is as noted, my script keeps looking for the *.history file, but never sees it. From the restore_command section of the documentation: "The command will be asked for log files that are not present in the archive; it must return nonzero when so asked. This is not an error condition." So if you're asked for a .history file, and you don't have one, return an error state saying as much and the right thing will happen--recovery continues. More comments about the path everyone wanders down when trying to build their own tools here are at http://archives.postgresql.org/sydpug/2006-10/msg1.php , you'll probably get some more insight into the details here reading that early commentary. But you still want to know where they might come from, right? Those history files show up when you've started your backup server after recovering files from the original system. You need to bring the backup system out of standby before you'll see one. That results in a new timeline: http://www.postgresql.org/docs/8.3/static/continuous-archiving.html#BACKUP-TIMELINES Think about it for a second: if the original server is still running, but you've started the standby system too, there are two separate histories with a common ancestor possible. One history has the original data plus what happened afterwards on the master, the other has the originals plus what happened afterwards on the standby, after it was started. The fun part is that you can return to copying files from the master again, so that you've got both sets of files available. 
You then choose which history to follow by adjusting the recovery_target_timeline parameter in the recovery.conf file. Anyway, while getting your hands dirty so you understand what's happening is a good idea, trying to fully reinvent pg_standby is an exercise destined to have a whole stack of little issues like these. Don't do that; it's taken years to get that code as mature as it is, and while you'll progress faster because you can stare at its source it will still take you a while. Returning to your original motivation for doing that, I threw a suggestion for how to combine pg_standby with using scp as the transport mechanism into http://wiki.postgresql.org/wiki/Warm_Standby , you just need to buffer transfers into a holding area to get around the atomic copy issues here. This requires using a non-trivial archive_command process though, you'll need to call a real script there to handle the multiple steps involved rather than just getting away with a one-line command for that setting. I reinvent that wheel periodically for sites that can't or won't install rsync for the job instead (always some variant on "for security reasons"). Unfortunately those sites also don't like releasing the resulting code to the world at large, so I don't have a full sample to show you. -- * Greg Smith gsm...@gregsmith.com http://www.gregsmith.com Baltimore, MD -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
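Pulling the settings Greg mentions into one place: the standby side is driven by a short recovery.conf. A minimal sketch for 8.3-era syntax; the spool path and trigger file name are assumptions, not from the thread:

```
# recovery.conf on the warm standby
# pg_standby arguments: archivelocation nextwalfile xlogfilepath restartwalfile
restore_command = 'pg_standby -t /tmp/pgsql.trigger /var/lib/pgsql/spool %f %p %r'
# after bringing a standby up, choose which timeline's history to follow
recovery_target_timeline = 'latest'
```

The %r argument hands pg_standby the name of the last restart-point file, which is how it knows which older segments are safe to remove from the spool.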
Re: [GENERAL] warm standby with WAL shipping
Erik Jones wrote: On Jun 3, 2009, at 5:13 AM, Geoffrey wrote: Thank you Greg for taking the time to explain this as thoroughly as you have. I have found a logic problem in my code. I still don't know if we will use pg_standby as the wrapper code in PITRTools is Python and we are not a Python shop. Kinda want to stick with what we know (C, perl, shell). I'm certainly looking at rsync rather than scp, which really makes more sense. pg_standby is in no way dependent on PITRTools. PITRTools is, however, dependent on pg_standby. Put another way: you do not need to use PITRTools to use pg_standby. In fact, you also don't need any perl or shell scripts to use pg_standby, just use rsync directly in the archive_command on the master and pg_standby in the restore_command on the standby. The wiki link Greg provided (http://wiki.postgresql.org/wiki/Warm_Standby) has all of the info needed to set things up manually. Our current scenario is that we are archiving from machine A to machine B. Our hot spare is machine C, thus we are pulling the files via network from machine B to machine C, hence the reason I don't believe pg_standby will work as it has no facility (rsync, scp) to retrieve the files from another machine. -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] warm standby with WAL shipping
Joshua D. Drake wrote: On Wed, 2009-06-03 at 14:43 -0400, Geoffrey wrote: pg_standby is in no way dependent on PITRTools. PITRTools is, however, dependent on pg_standby. Put another way: you do not need to use PITRTools to use pg_standby. In fact, you also don't need any perl or shell scripts to use pg_standby, just use rsync directly in the archive_command on the master and pg_standby in the restore_command on the standby. The wiki link Greg provided (http://wiki.postgresql.org/wiki/Warm_Standby) has all of the info needed to set things up manually. Our current scenario is that we are archiving from machine A to machine B. Our hot spare is machine C, thus we are pulling the files via network from machine B to machine C, hence the reason I don't believe pg_standby will work as it has no facility (rsync, scp) to retrieve the files from another machine. The point that is being made is that pg_standby doesn't need pitrtools to do its job. That is all. It is also why I said that pg_standby is just a component of a PITR solution and not a PITR solution in itself. I understand his point very clearly. You are still going to need to either: A. Reinvent the wheel, by scripting it all yourself B. Use solutions that are already used by others such as walmgr or pitrtools My assumption was that since pg_standby does not have the scp/rsync functionality, I would have to either modify it, change the way we do things, or 'reinvent' a little different wheel. There is also an objection to using the python tools as we are a small shop and do not have anyone who is versed in python. I have not had a chance to look at walmgr; I will do that shortly. -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] warm standby with WAL shipping
Greg Smith wrote: On Wed, 3 Jun 2009, Geoffrey wrote: My assumption was that since pg_standby does not have the scp/rsync functionality, I would have to either modify it, change the way we do things, or 'reinvent' a little different wheel. There are three things to set up here: 1) archive_command on the master 2) Transport between master and standby(s) 3) restore_command. pg_standby is the reference implementation here. You can combine (1) and (2) by putting some sort of network copy command into the archive_command, but better practice here (and probably required practice in your case) is to write a script that does that instead. That's the part you need to worry about. I have a functioning script that accomplishes 1 and 2 together. The archive script copies the files to a second server (not the warm standby). There is no need for you to reinvent (3) just because you have different requirements than most for (2). As you've noticed, pg_standby doesn't actually do the network transport part, and that also means that it's decoupled from what choices you make for that layer. Focus on writing scripts to atomically copy the files into the right destination on the standbys, and pg_standby will take care of applying the shipped log files to the database. This is a good point. As it turns out, it appears that I've got a fully functional solution at this time. I squashed a few bugs in my restore script yesterday and ran a test where it continuously restored WAL files until it received the 'failover' indicator. I'm still looking at other solutions and how they can fit into our particular problem. I also heed your statement from yesterday: 'Anyway, while getting your hands dirty so you understand what's happening is a good idea, trying to fully reinvent pg_standby is an exercise destined to have a whole stack of little issues like these. 
Don't do that; it's taken years to get that code as mature as it is, and while you'll progress faster because you can stare at its source it will still take you a while.' There is much wisdom in this statement and I see that recommendation in Joshua's comments as well. At this point, we are pushing hard on a deadline to get this in, so my quandary is whether to use what appears to be a working home-grown solution, or continue researching other options. For now, I'm still looking at the other tools as well as attempting to verify that my current solution doesn't miss any 'little issues.' Thanks again for everyone's patience and input. -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] warm standby with WAL shipping
Joshua D. Drake wrote: On Wed, 2009-06-03 at 15:07 -0400, Geoffrey wrote: You are still going to need to either: A. Reinvent the wheel, by scripting it all yourself B. Use solutions that are already used by others such as walmgr or pitrtools My assumption was that since pg_standby does not have the scp/rsync functionality, I would have to either modify it, change the way we do things, or 'reinvent' a little different wheel. You wouldn't modify pg_standby. You would create a new utility that ships logs (or pulls logs) for pg_standby to utilize. I assume a script that pulls the logs to the warm standby and then calls pg_standby. That's a thought. There is also an objection to using the python tools as we are a small shop and do not have anyone who is versed in python. At best this seems like a very odd requirement. You aren't doing any development. You are using a utility which is written in a specific language. Well, we are an odd group. :) I have not had a chance to look at walmgr; I will do that shortly. I believe it is written in Python. Hmm, I guess I should brush up on my Python. Anyway good luck! Thank you. Joshua D. Drkae echo 'Joshua D. Drkae' | sed 's/ka/ak/' :) -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] warm standby with WAL shipping
Greg Smith wrote: On Thu, 4 Jun 2009, Geoffrey wrote: For now, I'm still looking at the other tools as well as attempting to verify that my current solution doesn't miss any 'little issues.' The main things you want to test are that it acts sanely when the network connection to the destination server is out, and that it doesn't go insane if either the source or destination server runs out of disk space. You should simulate both of those things. This has not happened as of yet; it's part of the test suite though. The important thing is to validate your script cannot say you've processed an archive file until you're absolutely positive it's stored somewhere safe. It's really not that hard. If you've got practice writing robust system scripts already, and it sounds like you do, I wouldn't hesitate to use a homegrown solution here instead of walmgr/pitrtools as long as you've done the tests I outline here. I wrote a language parser in AWK years ago... ;) (that was ugly) I assume a script that pulls the logs to the warm standby and then calls pg_standby. The way you say this makes me think you haven't really absorbed how pg_standby works yet. You don't call it; the database recovery script does. Your program's interaction with it is merely to drop files into the place it expects them to be (atomically); it's a polling solution that alternates between looking for files there/applying them to the database/sleeping when there's no more left. The script I mention above would be the recovery script. Since, as I understand it, pg_standby looks for files locally, my restore script would have to pull the files from the remote machine and drop them somewhere where pg_standby would be looking for them. That's my thought anyway. If you've already gone to the trouble of writing all the pieces here yourself, it really shouldn't be difficult to yank out the parts pg_standby does and use it for those instead. 
There's a few things in there you'll have a hard time implementing yourself that probably aren't even on your radar yet, but are nonetheless important. Being able to keep standby disk usage pruned easily with the restartwalfile feature comes to mind, that one is a subtle problem that doesn't sneak up on you until you've been in production a while. Still studying the pg_standby code. ;) -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
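A sketch of that pull-and-drop script for the A -> B -> C layout: run it on the standby (machine C) so it only fills the spool directory that pg_standby polls via restore_command. Host names, paths, and the sleep interval are illustrative assumptions:

```shell
#!/bin/sh
# pull_wal: copy any new WAL segments from the archive host into the
# local spool directory that pg_standby is watching.
pull_wal() {
    archive_host=$1    # machine B
    remote_dir=$2      # archive directory on machine B
    spool_dir=$3       # local directory named in restore_command
    rsync -a "$archive_host:$remote_dir/" "$spool_dir/"
}

# Daemon-style loop; run from an init script rather than cron so the
# polling interval can stay short.
pull_loop() {
    while :; do
        pull_wal "$@" || echo "pull failed, will retry" >&2
        sleep 5
    done
}
```

rsync's copy-to-a-temporary-name-then-rename behavior on the receiving side gives the atomic drop Greg describes; with plain scp you would have to copy to a staging name and mv into the spool yourself.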
Re: [GENERAL] Replication
Mike Christensen wrote: We need to stop this thread, you guys are making me want to ditch Postgres and get Oracle (after taking out a second mortgage on my house that is).. You must have a REALLY nice house with a LOT of equity... Mike On Wed, Jun 24, 2009 at 9:40 AM, Thomas Kellerer wrote: Craig Ringer wrote on 24.06.2009 04:07: Thomas Kellerer wrote: Mike Christensen wrote on 23.06.2009 19:37: Does anyone actually have that (any node can go down and the others still replicate amongst themselves)? I think this is what Oracle promises with their RAC technology. Isn't RAC a shared-storage cluster? As far as I know, it is also a cluster of servers which can even hand over the processing of a single statement "in-flight" if the node that is currently processing the statement goes down. Thomas -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] adding another node to our pitr config
We are currently using WAL shipping to have a hot spare of our databases. We want to add another node to this configuration. The question is, what is the best way to go about this? Currently, our script checks to see if the WAL file already exists on the target server; if not, then we scp the file over. This is a local machine, so the scp overhead is not considered to be an issue. So, the current approach is: ssh $1 "test ! -f $2/$4" && scp $3 $1:$2/$4 So, should I simply duplicate that line for the second server and place it below this one, or should they be dependent upon each other? That is: archive_command = 'archive.sh node1 /esc/master/pitr %p %f node2' ssh $1 "test ! -f $2/$4" && scp $3 $1:$2/$4 && ssh $5 "test ! -f $2/$4" && scp $3 $5:$2/$4 The second node will not be at the same location, thus the network reliability is less. Thanks for any insights. -- Until later, Geoffrey Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety. - Benjamin Franklin -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
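A sketch of how archive.sh might handle both nodes; the argument order matches the archive_command above, everything else is an assumption. Two points worth baking in: an already-shipped file should count as success (the one-liner's `test ! -f` makes the whole && chain fail in that case), and the script should ship to each node independently but only report success to PostgreSQL once both copies are stored, so a down remote node forces a retry of the whole segment rather than a silent skip:

```shell
#!/bin/sh
# archive.sh node1 dir %p %f node2   (matches the archive_command above)

# ship_to host dir path file: copy one WAL segment unless it is already there.
ship_to() {
    if ssh "$1" "test -f $2/$4"; then
        return 0                      # already shipped on a prior attempt
    fi
    scp "$3" "$1:$2/$4"
}

# Ship to both nodes; return nonzero unless BOTH copies succeeded, so
# PostgreSQL keeps the segment and retries the whole thing later.
archive_wal() {
    node1=$1; dir=$2; path=$3; file=$4; node2=$5
    ship_to "$node1" "$dir" "$path" "$file"
    rc1=$?
    ship_to "$node2" "$dir" "$path" "$file"
    rc2=$?
    [ "$rc1" -eq 0 ] && [ "$rc2" -eq 0 ]
}
```

Since the two transfers are independent, a flaky WAN link to node2 never blocks the local copy to node1; PostgreSQL simply re-runs archive.sh for that segment until both destinations have it.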
[GENERAL] using between with dates
I'm trying the following: ship_date between '04/30/2010' AND '04/30/2010' + 14 But this returns: ERROR: invalid input syntax for integer: "04/30/2010" Can I use between with dates? -- Until later, Geoffrey "I predict future happiness for America if they can prevent the government from wasting the labors of the people under the pretense of taking care of them." - Thomas Jefferson -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] using between with dates
Geoffrey wrote: I'm trying the following: ship_date between '04/30/2010' AND '04/30/2010' + 14 But this returns: ERROR: invalid input syntax for integer: "04/30/2010" Can I use between with dates? Got it: ship_date between '04/30/2010' and timestamp '04/30/2010' + interval '14 day' -- Until later, Geoffrey "I predict future happiness for America if they can prevent the government from wasting the labors of the people under the pretense of taking care of them." - Thomas Jefferson -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] using between with dates
Tom Lane wrote: Geoffrey writes: ship_date between '04/30/2010' AND '04/30/2010' + 14 ERROR: invalid input syntax for integer: "04/30/2010" Can I use between with dates? The problem with that is the parser has no reason to treat the strings as dates, at least not till it comes to consider the BETWEEN comparisons, which is too late to help in resolving the addition in the subexpression (data types are determined bottom-up). This'd work: ship_date between '04/30/2010' AND '04/30/2010'::date + 14 Thanks muchly, likely a better solution than my timestamp approach. -- Until later, Geoffrey "I predict future happiness for America if they can prevent the government from wasting the labors of the people under the pretense of taking care of them." - Thomas Jefferson -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
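For completeness, the fourteen-day endpoint can also be computed on the client side before the SQL is assembled; a sketch using GNU date (how the application actually builds its queries is an assumption):

```shell
#!/bin/sh
# Compute the end of a 14-day window with GNU date's calendar arithmetic,
# then interpolate both literals into the BETWEEN clause.
start='04/30/2010'
end=$(date -d "$start + 14 days" +%m/%d/%Y)
echo "ship_date between '$start' and '$end'"
# prints: ship_date between '04/30/2010' and '05/14/2010'
```

This keeps both endpoints as plain date literals, sidestepping the bottom-up typing issue Tom describes, at the cost of requiring GNU date's -d syntax.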
[GENERAL] temp tables
Do temp tables need to be explicitly dropped, or do they go away when the session that created them ends? -- Until later, Geoffrey "I predict future happiness for America if they can prevent the government from wasting the labors of the people under the pretense of taking care of them." - Thomas Jefferson -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Documentation availability as a single page of text
Bruce Momjian wrote: Bruce Momjian wrote: John Gage wrote: Is the documentation available anywhere as a single page text file? This would be enormously helpful for searching using regular expressions in Vim, for example, or excerpting pieces for future reference. Uh, no, and no one has ever asked for that. There must be some tool that will dump an HTML tree as a single text file. Or maybe convert the PDF file to text. On Linux: /usr/bin/pdftotext -- Until later, Geoffrey "I predict future happiness for America if they can prevent the government from wasting the labors of the people under the pretense of taking care of them." - Thomas Jefferson -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] calculating distance between longitude and latitude
Does postgresql have functions to calculate the distance between two longitude/latitude points? -- Until later, Geoffrey "I predict future happiness for America if they can prevent the government from wasting the labors of the people under the pretense of taking care of them." - Thomas Jefferson -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] pgpool
So, I've got it installed and have tweaked the configuration, but I simply cannot figure out how to connect to my databases via pgpool. Is this simply transparent? I don't see how. I have a postmaster running on port 5434; how do I connect to that database via pgpool? I cannot find this piece of information in the docs.
Re: [GENERAL] pgpool
Gerd Koenig wrote:

Hi Geoffrey, you do not need to connect to your database directly, just connect to pgpool itself. E.g.: your database runs on port 5434, pgpool runs on port 5432 =>

* pgpool has to be configured so that it connects to the database on port 5434
* you/your apps should connect to the server where pgpool is running, on port 5432 (the connection to the database is established by pgpool)

hth ...::GERD::...

So I've got 13 different databases on 13 different postmasters; how does pgpool know which database I'm trying to connect to?
Re: [GENERAL] pgpool
Gerd Koenig wrote: pgpool has to be configured so that it connects to the database on port 5434

What parameter in the config file is this? I can't seem to locate it.
Re: [GENERAL] pgpool
Geoffrey wrote: What parameter in the config file is this? I can't seem to locate it.

I mis-read the docs and figured this out, thanks.
Re: [GENERAL] pgpool
So I've got the following in my pgpool.conf file:

port =
.
.
backend_hostname0 = 'localhost'
backend_port0 = 5434
backend_weight0 = 1
backend_data_directory0 = '/data/pgsql/master'

backend_hostname1 = 'localhost'
backend_port1 = 5435
backend_weight1 = 1
backend_data_directory1 = '/data/pgsql/mwv'

and I've restarted the pgpool processes. I can connect to the first entry as follows:

psql -p master

But if I attempt to connect to the second postmaster as follows:

psql -p mwv

I cannot connect. What am I missing?
Re: [GENERAL] pgpool
Guillaume Lelarge wrote:

On 21/06/2010 15:52, Geoffrey wrote: But if I attempt to connect to the second postmaster, I cannot connect. What am I missing?

You can't. backend_hostnameN, with N greater than 0, is only used in replication and load-balancing modes. If you want to be able to connect to various databases on various servers, pgBouncer is what you're looking for.

Ouch, okay, thanks for the clarification. I simply did not pick up on that in the docs.
Re: [GENERAL] pgpool
Guillaume Lelarge wrote: If you want to be able to connect to various databases on various servers, pgBouncer is what you're looking for.

It does not appear that pgBouncer will 'limit exceeding connections' as pgpool does. So if I have a pool of 20 connections and all 20 are in use, what happens to the 21st connection attempt? Is it rejected, or is it queued to wait for the next available connection?
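For what it's worth, pgBouncer queues clients rather than rejecting them once the server-side pool is full; a pgbouncer.ini sketch (the database name, host, and limits below are made up for illustration):

```ini
[databases]
; Clients connect to pgBouncer on its listen_port; it maintains at most
; default_pool_size server connections per database/user pair.
mydb = host=127.0.0.1 port=5434 dbname=mydb

[pgbouncer]
listen_port = 6432
max_client_conn = 100   ; beyond this, new clients are refused outright
default_pool_size = 20  ; clients 21..100 wait in a queue for a free
                        ; server connection instead of being rejected
```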
Re: [GENERAL] pgpool
John R Pierce wrote: On 06/21/10 5:37 AM, Geoffrey wrote: So I've got 13 different databases on 13 different postmasters; how does pgpool know which database I'm trying to connect to?

you would need 13 different connection pools.

Can this be done?
Re: [GENERAL] pgpool
Scott Marlowe wrote: On Mon, Jun 21, 2010 at 1:45 PM, Geoffrey wrote: John R Pierce wrote: you would need 13 different connection pools. Can this be done?

Yes

Can you point me to any docs that explain how this is done? As there is only one pgpool.conf file, I'm wondering how to go about that process.
Re: [GENERAL] pgpool
Scott Marlowe wrote:

On Mon, Jun 21, 2010 at 2:42 PM, Geoffrey wrote: Can you point me to any docs that explain how this is done? As there is only one pgpool.conf file, I'm wondering how to go about that process.

sudo apt-get install pgpool2 (password, yes, wait...)

pgpool --help

Usage:
  pgpool [ -c ] [ -f CONFIG_FILE ] [ -F PCP_CONFIG_FILE ] [ -a HBA_CONFIG_FILE ] [ -n ] [ -d ]
  pgpool [ -f CONFIG_FILE ] [ -F PCP_CONFIG_FILE ] [ -a HBA_CONFIG_FILE ] [ -m SHUTDOWN-MODE ] stop
  pgpool [ -f CONFIG_FILE ] [ -F PCP_CONFIG_FILE ] [ -a HBA_CONFIG_FILE ] reload

Common options:
  -a HBA_CONFIG_FILE   Sets the path to the pool_hba.conf configuration file (default: /etc/pool_hba.conf)
  -f CONFIG_FILE       Sets the path to the pgpool.conf configuration file (default: /etc/pgpool.conf)
  -F PCP_CONFIG_FILE   Sets the path to the pcp.conf configuration file (default: /etc/pcp.conf)
  -h                   Prints this help

Looks like there are -f and -F options to use various files. Admittedly, you might not be starting it up with /etc/init.d/pgpool2 each time now. But the basics are there.

Thanks muchly, that should give me a good kick start.
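Building on that help output, one pgpool instance per cluster could look something like the sketch below (the file names are hypothetical; each config file would need its own listen port and backend settings so the instances don't collide):

```shell
# One pgpool process per backend cluster, each started with its own
# pgpool.conf (-f) and pcp.conf (-F).
pgpool -f /etc/pgpool/master.conf -F /etc/pgpool/master-pcp.conf
pgpool -f /etc/pgpool/mwv.conf    -F /etc/pgpool/mwv-pcp.conf
# ...and so on for the remaining clusters.
```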
Re: [GENERAL] pgpool
A couple of questions regarding pgpool:

Is there a problem with using multiple connection pools for the same database? The point being, we might want to give a higher number of connections to one group of users than another. I can see doing this by having separate connection pools, where the higher-priority users get a larger connection pool.

Is there a problem with using connection pooling and traditional connections to connect to the same database?
[GENERAL] weird initdb output
I wrote a script that creates a new database from an existing backup. It works great on my machine. Another user tries to use it and sees the following output from initdb:

could not change directory to "/root"
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
.
.

Why is it trying to change directory to /root? The script is running as the postgres user. Any assistance would be appreciated.
Re: [GENERAL] weird initdb output
Tom Lane wrote:

Geoffrey writes: could not change directory to "/root" ... Why is it trying to change directory to /root?

IIRC, part of the startup process involves chdir'ing to where the initdb executable is and then chdir'ing back to whatever directory had been current when you called initdb. I speculate the other guy was root and did "su postgres" not "su - postgres", so his cwd was still root's home directory. This might be harmless as long as you gave an absolute path for PGDATA to initdb, but I'd still recommend using su - not just su.

Appears this was exactly the case. I do specify an absolute path for PGDATA, thanks.
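The difference is easy to see: su - starts a login shell, which resets the working directory (and environment) to the target account's own. A sketch, run as root (the postgres home directory shown is just a common default):

```shell
su postgres -c pwd      # prints /root -- the cwd is inherited from root
su - postgres -c pwd    # prints the postgres home dir, e.g. /var/lib/pgsql

# So for scripts, prefer the login-shell form:
su - postgres -c '/usr/bin/initdb -D /absolute/path/to/data'
```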
[GENERAL] dropdb weirdness
So running the following command:

dropdb -p 5443 swr

I get:

dropdb: could not connect to database postgres: FATAL: database "postgres" does not exist

Why is it not 'seeing' the database name I'm passing to it? Why is it trying to drop a database named postgres?
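The likely explanation: dropdb must connect to some other database in order to issue DROP DATABASE, and by default it connects to the maintenance database named postgres. If that cluster has no postgres database, one workaround is to issue the command through template1 instead (port and database name taken from the message above):

```shell
# Connect to template1 and drop the target database from there.
psql -p 5443 -d template1 -c 'DROP DATABASE swr;'
```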