Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore
> Another testimonial to the stability of Postgres :) We may be arguing semantics here, but I would consider dump/restore an admin function. How do you handle a client restoring a database currently?

The database is 8.0 compliant. Currently the 8.4 pg_dump/pg_restore is used to dump and restore against any version 8 server without issues.

This is broken in 9.0: Postgres 9.0 emits an invalid "create procedural language plpgsql" command which does not work in any other version. How can this be fixed without distributing two copies of pg_dump/pg_restore? Is it reasonable to create the database and the plpgsql language manually before running pg_restore? In that case the invalid "create procedural language plpgsql" issued by pg_restore gets ignored and maybe the restore succeeds?

> I could see a client connecting to one of the system dbs and doing a DROP DATABASE. From your earlier messages the implication was that you used pg_restore to repopulate the db. My question then is how do the clients make sure that they are not doing this on an active database and keep it from going active during the process?

The application asks for a new database name and verifies that it does not exist before executing pg_restore.

Another requirement is to clone an existing database, with its data, on the server. I posted a question about this and it seems that PostgreSQL has no capability to do this quickly on the server side from plpgsql. So we need to use the slow and unsafe dump/restore over the internet for this as well.

Andrus.

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
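For illustration, a minimal sketch of that pre-creation step; the host, user, database and file names are only placeholders, and whether the rest of an 8.4-bound restore succeeds still has to be verified:

    createdb -h targethost -U postgres newdb
    psql -h targethost -U postgres -d newdb -c "CREATE LANGUAGE plpgsql;"

    # pg_restore reports the failing CREATE PROCEDURAL LANGUAGE statement from
    # the 9.0 archive as an error, but continues with the remaining objects
    pg_restore -h targethost -U postgres -d newdb backup.dump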
[GENERAL] plperl - caching prepared queries and cleanup
Hi, I am trying to use plperl for one of my triggers. The trigger executes a query, so I am preparing the query and caching it as below:

    if (!defined $_SHARED{'base_table_query'}) {
        my $base_columns_query = "select column_name from information_schema.columns where table_schema = '$_TD->{table_schema}' and table_name = \$1";
        $_SHARED{'base_table_query'} = spi_prepare($base_columns_query, 'text');
    }
    $result = spi_exec_prepared($_SHARED{'base_table_query'}, $base_table_name);

Now, in a batch update (involving thousands of records), the trigger fires for each updated row. I will get the maximum performance benefit if I can keep the prepared query in the global cache across trigger invocations; in effect, there is no logical point at which to call spi_freeplan(). Will the plan be freed when the db session/connection closes? Or will it lead to any memory leaks (typically my db connections are open for a long time)?

Thanks Anupama.

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
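For context, a minimal sketch of how that cached plan might sit inside a complete trigger function; the function name is invented and the return-value handling is just the generic plperl trigger pattern:

    CREATE OR REPLACE FUNCTION check_base_table() RETURNS trigger AS $$
        # prepare once per session and remember the plan in %_SHARED
        if (!defined $_SHARED{'base_table_query'}) {
            my $q = "select column_name from information_schema.columns "
                  . "where table_schema = '$_TD->{table_schema}' and table_name = \$1";
            $_SHARED{'base_table_query'} = spi_prepare($q, 'text');
        }

        # reuse the cached plan on every invocation
        my $result = spi_exec_prepared($_SHARED{'base_table_query'}, $_TD->{table_name});
        # ... work with $result->{rows} here ...

        return;    # let the triggering statement proceed unchanged
    $$ LANGUAGE plperl;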
Re: [GENERAL] [HACKERS] getting composite types info from libpq
On Dec 16, 2010, at 02:51, Daniele Varrazzo wrote:
> 1. do I get enough info in the PGresult to inspect anonymous composite types?

You just get the composite value, as you discovered. In text mode, that means only the composite string value, which contains no information about the individual fields' types. In binary mode, however, the structure of such a composite value appears to be

    <number of fields: 4 bytes>
    [for each field]
        <type OID of the field: 4 bytes>
        [if value is NULL]
            <-1: 4 bytes>
        [else]
            <length of value: 4 bytes> <value: length bytes>
        [end if]
    [end for]

according to a quick glance over record_send() in src/backend/utils/adt/rowtypes.c. You'll want to double-check this, it really was a *very* quick glance ;-)

The fields' values are, again, in binary format, not text! AFAIK you *can* decide whether to use text or binary mode on a per-field basis when you execute a query, but once you request a field of type "record" to be transferred as binary, you'll have to be able to deal with arbitrary types sent as binary, since you won't know which types the record might contain. Which isn't easy, because the binary representation of some types (like float, I think) is machine-dependent :-(

> 2. do I get such info for composite types for which I have schema info in the catalog, without issuing a second query? (which I don't feel is a driver's job)

No. Your only option is probably to query this information once and cache it. Knowing when to invalidate that cache isn't easy, though - but since types probably don't change too often, some compromise will hopefully do.

> 3. is there any libpq facility to split the string returned for a composite type into its single components, without having to write a parser to deal with commas and quotes?

Not that I'd know of. There is, however, a project called libpqtypes which I think deals with things like that. I've never used it, though, so I can't say whether it fits your needs or not.

> 4. is that info by any chance passed on the network, maybe available in an internal libpq structure, but then not accessible from the libpq interface?

Don't think so. FYI, there has been some discussion about providing SQL-accessible functions to inspect and modify fields of arbitrary records. There are two implementations of such a thing that I know of. One was written by me, and is available at https://github.com/fgp/pg_record_inspect The other was written by Pavel Stehule and is described in his blog here: http://okbob.blogspot.com/2010/12/iteration-over-record-in-plpgsql.html Neither of these helps much with doing things at the driver level, though, unless you're willing to tell your users to select record_inspect.fieldinfo(some_record) as "some_record.fieldinfo" alongside "some_record" if they want the record to be represented as something other than one large string.

best regards, Florian Pflug

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore
On 2010-12-16 09.16, Andrus Moor wrote:
> Another requirement is to clone an existing database, with its data, on the server. I posted a question about this and it seems that PostgreSQL has no capability to do this quickly on the server side from plpgsql.

I'm probably misunderstanding, but "CREATE DATABASE foo TEMPLATE bar" will clone bar as foo, including data. Of course this only works within the same cluster.

> So we need to use the slow and unsafe dump/restore over the internet for this as well.
> Andrus.

-- Regards, Robert "roppert" Gravsjö

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore
Robert,

> I'm probably misunderstanding, but "CREATE DATABASE foo TEMPLATE bar" will clone bar as foo, including data. Of course this only works within the same cluster.

Thank you. You are a genius; I had never thought about this.

Will this work if database bar is accessed by other users? Probably it fails, so it cannot be used.

Will this command create an exact copy?

Andrus.

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore
On 16/12/2010 10:12, Andrus Moor wrote:
> Robert,
>> I'm probably misunderstanding, but "CREATE DATABASE foo TEMPLATE bar" will clone bar as foo, including data. Of course this only works within the same cluster.
> Thank you. You are a genius; I had never thought about this.
> Will this work if database bar is accessed by other users? Probably it fails, so it cannot be used.

I don't know for sure, but I don't see why it should fail - it's only reading it, not writing data to it or making any changes.

> Will this command create an exact copy?

Why wouldn't it? :-)

Ray.

-- Raymond O'Donnell :: Galway :: Ireland r...@iol.ie

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore
Hello,

> I don't know for sure, but I don't see why it should fail - it's only reading it, not writing data to it or making any changes.

Probably it will fail... http://www.postgresql.org/docs/9.0/static/sql-createdatabase.html

"Although it is possible to copy a database other than template1 by specifying its name as the template, this is not (yet) intended as a general-purpose "COPY DATABASE" facility. The principal limitation is that no other sessions can be connected to the template database while it is being copied. CREATE DATABASE will fail if any other connection exists when it starts;"

Regards, Jayadevan

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
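For what it's worth, a quick way to see the sessions that would block the clone, assuming the template database is named bar (column names as of 8.4/9.0):

    -- any row returned here means CREATE DATABASE ... TEMPLATE bar will fail
    SELECT procpid, usename, current_query
    FROM pg_stat_activity
    WHERE datname = 'bar';

    -- once nothing is connected to bar:
    CREATE DATABASE foo TEMPLATE bar;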
Re: [GENERAL] Postgres Installation
2010/12/16 Adarsh Sharma:
> Dear all,
>
> Are there any useful links for installing Postgres 9.1 (the most recent version) on CentOS from its binaries?

The most recent stable version is 9.0.2. Here you will find yum instructions and links to RPM packages: http://yum.pgrpms.org/howtoyum.php

See also http://people.planetpostgresql.org/devrim/index.php?/archives/50-Upgrading-from-8.4-to-9.0-on-Fedora-Red-Hat-CentOS-using-RPMs.html

I see there was some temporary problem with the 9.0.1 packages, so if you can, wait for the 9.0.2 RPMs (they should be available soon, if not already).
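Roughly, the steps from the howtoyum page come down to something like this; the repository RPM file name and package names below are only indicative and should be checked against that page for your CentOS version:

    # install the PGDG repository package for your CentOS release (file name is a placeholder)
    rpm -ivh pgdg-centos90-9.0-2.noarch.rpm

    # install the server packages
    yum install postgresql90-server postgresql90-contrib

    # initialise the cluster and start the service
    service postgresql-9.0 initdb
    service postgresql-9.0 start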
Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore
On 2010-12-16 11.12, Andrus Moor wrote:
> Robert,
>> I'm probably misunderstanding, but "CREATE DATABASE foo TEMPLATE bar" will clone bar as foo, including data. Of course this only works within the same cluster.
> Thank you. You are a genius; I had never thought about this.
> Will this work if database bar is accessed by other users? Probably it fails, so it cannot be used.

Correct.

> Will this command create an exact copy?

Yes.

> Andrus.

-- Regards, Robert "roppert" Gravsjö

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore
On 2010-12-16 11.21, Jayadevan M wrote:
> Hello,
>> I don't know for sure, but I don't see why it should fail - it's only reading it, not writing data to it or making any changes.
> Probably it will fail... http://www.postgresql.org/docs/9.0/static/sql-createdatabase.html "Although it is possible to copy a database other than template1 by specifying its name as the template, this is not (yet) intended as a general-purpose "COPY DATABASE" facility. The principal limitation is that no other sessions can be connected to the template database while it is being copied. CREATE DATABASE will fail if any other connection exists when it starts;"

Are there any other side effects to this besides the CREATE DATABASE command failing?

-- Regards, Robert "roppert" Gravsjö

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] What is the name pseudo column
On 12/15/2010 5:43 PM, Adrian Klaver wrote:
> On Wednesday 15 December 2010 1:27:19 pm David Fetter wrote:
>> On Wed, Dec 15, 2010 at 01:50:54PM -0600, Jack Christensen wrote:
>>> I was just surprised that when accidentally selecting a non-existent name column there was no error -- instead something came back. select accounts.name from accounts limit 1 -> (1,65522,1,0.00,,"2010-07-22 09:57:26.281172-05",2) It appears it tries to return the entire row in an array (but longer rows get truncated). I've searched Google and the PG docs but I haven't had any luck.
>> What happened here is that you ran into PostgreSQL's "charming" habit of using the argument.function notation, so you called the "name" function, i.e. the one that casts to name, on the entire row from your accounts table. Cheers, David.
> In the for-what-it's-worth department, that behavior is going away in 9.1. See here for a detailed explanation: http://www.depesz.com/index.php/2010/11/08/waiting-for-9-1-removed-autocast-footgun/#more-1908

It makes sense now what's happening -- but I'm glad to see that "feature" is going away. Thanks.

-- Jack Christensen ja...@hylesanderson.edu

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
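A small illustration of the pre-9.1 behaviour described above, using a made-up table (a sketch, not taken from the original thread):

    CREATE TABLE accounts (id int, balance numeric);
    INSERT INTO accounts VALUES (1, 0.00);

    -- there is no "name" column, but before 9.1 this still works:
    -- accounts.name is treated as name(accounts), i.e. the whole row
    -- cast to type "name" (and therefore truncated to 63 bytes)
    SELECT accounts.name FROM accounts LIMIT 1;   -- returns (1,0.00)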
Re: [GENERAL] [HACKERS] getting composite types info from libpq
On Thu, Dec 16, 2010 at 5:03 AM, Florian Pflug wrote:
> On Dec 16, 2010, at 02:51, Daniele Varrazzo wrote:
>> 1. do I get enough info in the PGresult to inspect anonymous composite types?
> You just get the composite value, as you discovered. In text mode, that means only the composite string value, which contains no information about the individual fields' types. In binary mode, however, the structure of such a composite value appears to be
>
>     <number of fields: 4 bytes>
>     [for each field]
>         <type OID of the field: 4 bytes>
>         [if value is NULL]
>             <-1: 4 bytes>
>         [else]
>             <length of value: 4 bytes> <value: length bytes>
>         [end if]
>     [end for]
>
> according to a quick glance over record_send() in src/backend/utils/adt/rowtypes.c. You'll want to double-check this, it really was a *very* quick glance ;-)
>
> The fields' values are, again, in binary format, not text! AFAIK you *can* decide whether to use text or binary mode on a per-field basis when you execute a query, but once you request a field of type "record" to be transferred as binary, you'll have to be able to deal with arbitrary types sent as binary, since you won't know which types the record might contain. Which isn't easy, because the binary representation of some types (like float, I think) is machine-dependent :-(
>
>> 2. do I get such info for composite types for which I have schema info in the catalog, without issuing a second query? (which I don't feel is a driver's job)
> No. Your only option is probably to query this information once and cache it. Knowing when to invalidate that cache isn't easy, though - but since types probably don't change too often, some compromise will hopefully do.
>
>> 3. is there any libpq facility to split the string returned for a composite type into its single components, without having to write a parser to deal with commas and quotes?
> Not that I'd know of. There is, however, a project called libpqtypes which I think deals with things like that. I've never used it, though, so I can't say whether it fits your needs or not.

yeah -- what libpqtypes does is expose composites and arrays (and composites of arrays) as a 'result within a result'. You register the composite type by name, then you can create a PGresult that exposes the composite as if it itself were a returned set -- then you get to use the regular libpq access functions to get each field's value, its type oid, whether it is null, etc. This process can nest, of course. You might want to check it out.

libpqtypes also always requests data in binary. This would actually be counterproductive if you were to immediately convert it to a string. However, if you are moving data to some other binary structure, it's a lot faster and less work for the server.

merlin

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Fwd: Re: [GENERAL] Postgresql 9.1 pg_last_xact_replay_timestamp limitations
>>> We should return the timestamp of last valid checkpoint rather than NULL in that case?
>>
>> Well, I think this behavior would be more appreciated by postgresql users in general. The case where the slave can be restarted after a clean shutdown is rare, but we need to consider it nonetheless. In my case I implemented a custom function that reads the last returned timestamp from a new file on disk. This is not a perfect solution, since the value returned might be older than the actual state of the replication, but it's good enough for my needs.
>
> The second question is: what should be returned when the server has been started normally, without recovery? NULL? The timestamp of the last valid checkpoint?

NULL is fine if this server is not a hot standby.

> The third question is: what should be returned while replaying WAL records which exist between the REDO starting point and the checkpoint? In this case, it seems bad to return the timestamp of the checkpoint whenever there is no replayed transaction, since the result timestamp would go back once at least one transaction has been replayed before reaching the checkpoint record.

I'm not sure I get this, but it should probably be the highest timestamp value. The thing is, from my perspective, I need to know how up to date the replica is. Perhaps we are trying to squeeze too much into pg_last_xact_replay_timestamp() and a new function pg_replication_timestamp() is needed that would accurately tell me one simple piece of information: the time on the master db server when it was in the exact same state as this replica is in right now.

> Regards,

Regards, Gabi Julien

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
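For reference, the way the function is commonly used on a 9.1 standby to estimate how far behind it is (subject to the NULL caveat discussed above when nothing has been replayed yet):

    -- on the standby
    SELECT pg_last_xact_replay_timestamp();

    -- rough replay delay
    SELECT now() - pg_last_xact_replay_timestamp() AS replay_delay;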
Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore
On Thursday 16 December 2010 12:16:47 am Andrus Moor wrote:
>> Another testimonial to the stability of Postgres :) We may be arguing semantics here, but I would consider dump/restore an admin function. How do you handle a client restoring a database currently?
>
> The database is 8.0 compliant. Currently the 8.4 pg_dump/pg_restore is used to dump and restore against any version 8 server without issues.

First, from here: http://www.postgresql.org/docs/9.0/interactive/app-pgdump.html

"Also, it is not guaranteed that pg_dump's output can be loaded into a server of an older major version — not even if the dump was taken from a server of that version. Loading a dump file into an older server may require manual editing of the dump file to remove syntax not understood by the older server."

The fact that it worked without intervention before is as much luck as anything else. Depending on that behavior going forward is not prudent.

> This is broken in 9.0: Postgres 9.0 emits an invalid "create procedural language plpgsql" command which does not work in any other version. How can this be fixed without distributing two copies of pg_dump/pg_restore? Is it reasonable to create the database and plpgsql language manually before running pg_restore? In that case the invalid "create procedural language plpgsql" issued by pg_restore gets ignored and maybe the restore succeeds?

This is an example of the above. Two things are at work here. First, in 9.0 the plpgsql language is loaded automatically. Second, it is loaded using the CREATE OR REPLACE LANGUAGE form of SQL, which did not exist prior to 9.0. It is the REPLACE clause that is causing the problem. You could try creating the language ahead of time and see if it just skips over the error. Another option would be to use the -l switch to pg_restore to generate a list of the TOC entries for the dump and store it in a file. You then comment out the one for plpgsql and use the -L switch to read the edited file. See here for more detail: http://www.postgresql.org/docs/9.0/interactive/app-pgrestore.html

>> I could see a client connecting to one of the system dbs and doing a DROP DATABASE. From your earlier messages the implication was that you used pg_restore to repopulate the db. My question then is how do the clients make sure that they are not doing this on an active database and keep it from going active during the process?
>
> The application asks for a new database name and verifies that it does not exist before executing pg_restore.
>
> Another requirement is to clone an existing database, with its data, on the server. I posted a question about this and it seems that PostgreSQL has no capability to do this quickly on the server side from plpgsql. So we need to use the slow and unsafe dump/restore over the internet for this as well.

Yeah, the CREATE DATABASE ... TEMPLATE option is as close as it gets and you are already aware of its limitations.

> Andrus.

-- Adrian Klaver adrian.kla...@gmail.com

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
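A sketch of that -l/-L sequence, assuming a custom-format archive named backup.dump and a pre-created target database (the names are placeholders):

    pg_restore -l backup.dump > backup.list
    # edit backup.list and comment out (prefix with ';') the entry for the
    # plpgsql procedural language
    pg_restore -L backup.list -h targethost -U postgres -d newdb backup.dump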
Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore
> Loading a dump file into an older server may require manual editing of the dump file to remove syntax not understood by the older server.

pg_restore ignores errors during restore (it only returns exit code 1). So "manual editing of the dump file to remove syntax not understood" is never required. Why is this not mentioned in the docs?

> It is the REPLACE clause that is causing the problem

How can I force 9.0 to emit the CREATE LANGUAGE command without the REPLACE clause?

> Yeah, the CREATE DATABASE ... TEMPLATE option is as close as it gets and you are already aware of its limitations.

How can I make CREATE DATABASE ... TEMPLATE work if there are users connected to the template? Since pg_dump works while active users are connected, CREATE DATABASE ... TEMPLATE should also work.

Andrus.

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] When the trigger is called my application is awaiting the finish
Hello, I'm having a problem running an UPDATE command that invokes a trigger. The problem is that the function it executes takes about 45 minutes to finish, and my application is blocked waiting for it. Do you know how I can avoid waiting for it to finish? Thanks in advance.

-- Felipe Pamboukian Programmer Analyst Informídia Pesquisas Esportivas Ltda. Tel./Fax.: 55 11 3673 5566 Email: fel...@informidia.com.br Web: www.informidia.com.br Before printing, remember your responsibility and commitment to the environment
[GENERAL] Postgresql: Remove last char in text-field if the column ends with minus sign
Hello sir, I want to remove the last character in a column if it ends with a minus sign. How could I do this in PostgreSQL? For example:

sdfs-dfg4t-etze45z5z- => sdfs-dfg4t-etze45z5z
gsdhfhsfh-rgertggh => stays untouched

Is there an easy syntax I can use? Thanking you. With regards, Sarang Dave
Re: [GENERAL] Searching array fields - or should I redesign?
On Wednesday, December 15, 2010 at 19:12 +0100, Jan Kesten wrote:
> > eg, insert into logtable values ( 'vehicle123', now(), {{'voltage','13'},{'rpm','600'}});
> >
> > However, I am not sure how I can write a query - for example to read all records where the voltage field is less than 13. Performance in this case is not a real significant issue.
> >
> > Would I be better off redesigning and having a master / detail kind of structure? Where the master table would have the vehicle id, timestamp and a key to the detail table.
>
> The second approach would work quite well.
>
> table logentry
>   id primary unique
>   vehicleid int
>   logtime timestamp
>
> table logdetail
>   logid int
>   attribute varchar/int
>   value decimal
>   textvalue varchar
>
> You can retrieve logentries for specific vehicles, timeframes and attributes - and you can extend more log attributes without changing the database structure. I would suggest another table for the attributes where you can look up whether it is a text or numeric entry.

The problem with this approach is that you need to loop through your recordset in your code to collect all the values. If you only have one value per key to store per vehicle, it's much easier to have one big table with all the right columns, thus having just one line to process with all the information. So, from your example:

    create table logtable(
        id_vehicle text,
        date_purchased date,
        voltage integer,
        rpm integer);

the corresponding record being: vehicle123, now(), 13, 600

This will simplify your queries/code _a lot_. You can keep subclasses for details that have more than one value. Adding a column if you have to store new attributes is not a big problem.

-- Vincent Veyron http://marica.fr/ Management software for litigation and insurance case files for legal departments

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
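With that flat layout, the query from the original question ("all records where the voltage field is less than 13") becomes a plain single-table query, for example:

    SELECT id_vehicle, date_purchased, voltage, rpm
    FROM logtable
    WHERE voltage < 13;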
Re: [GENERAL] Postgresql: Remove last char in text-field if the column ends with minus sign
On 12/16/2010 06:52 AM, Sarang Dave wrote:
> Hello sir, I want to remove the last character in a column if it ends with a minus sign. How could I do this in PostgreSQL?

regexp_replace(your_field, '-+$', '');

Cheers, Steve
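Applied to the sample values from the question (the table and column names in the UPDATE are placeholders):

    SELECT regexp_replace('sdfs-dfg4t-etze45z5z-', '-+$', '');   -- sdfs-dfg4t-etze45z5z
    SELECT regexp_replace('gsdhfhsfh-rgertggh', '-+$', '');      -- unchanged

    -- to fix existing data in place:
    UPDATE your_table SET your_field = regexp_replace(your_field, '-+$', '');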
[GENERAL] PgEast 2011: NYC CFP
December 16th, 2010: Celebrating 15 years of PostgreSQL, early. Following on the smashing success of PostgreSQL Conference West, PostgreSQL Conference East, The PostgreSQL Conference for Developers, End Users and Decision Makers, is being held at the Hotel Pennsylvania in New York City from March 22nd through 25th, 2011. Please join us in continuing to make this the largest PostgreSQL Conference series!

Main site: http://www.postgresqlconference.org/
CFP: https://www.postgresqlconference.org/talk_types

Thank you to our sponsors:
Command Prompt: http://www.commandprompt.com/
EnterpriseDB: http://www.enterprisedb.com/

Timeline:
Dec 16th: Talk submission opens
Feb 10th: Talk submission closes
Feb 15th: Speaker notification

This year we will be continuing our trend of covering the entire PostgreSQL ecosystem. We would like to see talks and tutorials on the following topics:

* General PostgreSQL:
  * Administration
  * Performance
  * High Availability
  * Migration
  * GIS
  * Integration
  * Solutions and White Papers
* The Stack:
  * Python/Django/Pylons/TurboGears/Custom
  * Perl5/Catalyst/Bricolage
  * Ruby/Rails
  * Java (PLJava would be great)/Groovy/Grails
  * Operating System optimization (Linux/FBSD/Solaris/Windows)
  * Solutions and White Papers

-- PostgreSQL.org Major Contributor Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579 Consulting, Training, Support, Custom Development, Engineering http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore
On 16 Dec 2010, at 9:16, Andrus Moor wrote:
> How to fix this without distributing two copies of pg_dump/pg_restore? Is it reasonable to create database and plpgsql language manually before running pg_restore?

Are you sure that restoring dumps to your customers' sites is the best approach to install your software (if that's what you're doing)? Most people seem to write scripts to install databases for their products, which has a few benefits:

- You can put those scripts under version control.
- You can write them in such a way that you can do incremental updates of a database corresponding to version X of your product, to version Y or Z of your product. With pg_dump/restore you can only restore an entire database at a time.
- You don't need to rely on the availability or installation location of pg_dump/pg_restore at your customer's site.
- You are much more flexible in what data gets inserted into your database and how that happens.
- It's easy to extend those scripts with, for example, some simple unit-tests to verify that your database on site works as expected.

Of course, the big drawback is that you need to write them first and that (as with all home-cooked software) there will be bugs in them.

Alban Hertroys

-- If you can't see the forest for the trees, cut the trees and you'll see there is no forest.

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
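One common shape for such scripts is a version table plus per-version upgrade files; a minimal sketch with invented table and column names:

    -- created by the initial install script
    CREATE TABLE schema_version (version integer NOT NULL);
    INSERT INTO schema_version VALUES (1);

    -- upgrade_002.sql: run only when the recorded version is 1
    BEGIN;
    ALTER TABLE invoices ADD COLUMN due_date date;
    UPDATE schema_version SET version = 2;
    COMMIT;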
Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore
On 12/16/2010 08:47 AM, Andrus Moor wrote:
>> Loading a dump file into an older server may require manual editing of the dump file to remove syntax not understood by the older server.
>
> pg_restore ignores errors during restore (it only returns exit code 1). So "manual editing of the dump file to remove syntax not understood" is never required. Why is this not mentioned in the docs?

That begs the question: what is the problem then? If the 9.0 pg_restore is skipping over the error, there should be no problem. As to the docs, when I have seen this, besides skipping over the error pg_restore skipped over creating the object also. In the worst case this led to a cascading problem where subsequent objects dependent on the skipped object were not loaded either. Bottom line is you are depending on a behavior which is documented not to be guaranteed to work. That is not going to go away, so it would really be a good idea to come up with another solution. This has been mentioned multiple times and has now reached the point of beating a dead horse :)

>> It is the REPLACE clause that is causing the problem
>
> How can I force 9.0 to emit the CREATE LANGUAGE command without the REPLACE clause?

As far as I know you can't; it goes back to the whole backwards compatibility issue already covered. I gave you an option in the previous post re using the -l and -L switches to pg_restore.

>> Yeah, the CREATE DATABASE ... TEMPLATE option is as close as it gets and you are already aware of its limitations.
>
> How can I make CREATE DATABASE ... TEMPLATE work if there are users connected to the template? Since pg_dump works while active users are connected, CREATE DATABASE ... TEMPLATE should also work.
>
> Andrus.

At this point, as I see it, you have the following options:

1) Bundle version-specific pg_dump/pg_restore code with your application.
2) Give your users admin access. One option is phpPgAdmin: http://phppgadmin.sourceforge.net/
3) An off-the-top-of-my-head suggestion: use one of the untrusted languages to write functions that access the file system and run the pg_restore/pg_dump programs. This would require that language be included in each database, as well as the function(s). They could be put in template1 so subsequent databases created would pick them up. It does create a potential security risk, though.

-- Adrian Klaver adrian.kla...@gmail.com

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
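A rough illustration of option 3, assuming plperlu is available, pg_dump is on the server's PATH, and the backend's OS user may write to the target path; the function name, dump format and paths are invented, and the security caveat above applies:

    CREATE OR REPLACE FUNCTION dump_database(dbname text, target text)
    RETURNS integer AS $$
        my ($dbname, $target) = @_;
        # runs on the server, as the OS user the backend runs under
        my $rc = system('pg_dump', '-Fc', '-f', $target, $dbname);
        return $rc >> 8;    # pg_dump's exit status
    $$ LANGUAGE plperlu;

    -- example call
    SELECT dump_database('production', '/tmp/production.dump');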
Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore
Adrian Klaver writes: > On 12/16/2010 08:47 AM, Andrus Moor wrote: >>> Loading a dump file into an older server may require manual editing of >>> the dump file to remove syntax not understood by the older server. " >> pg_restore ignores erros during restore (it only returns exit code 1). >> So "manual editing of the dump file to remove syntax not understood" is >> never required. >> Why this is not mentioned in docs ? > That begs the question what is the problem then? The problem of course is that Andrus' statement is wrong. There is not, and never has been, any guarantee that pg_dump output would load into an older server without hand-editing to adjust uses of new syntax. Which is exactly what the documentation says. He's apparently only had experience with version combinations where no editing was needed, but he was just lucky (and pretty conservative with what DDL he used). We have changed the output in non-backward-compatible ways before now, and doubtless will again in the future. As an example, 9.0 did add OR REPLACE to its CREATE LANGUAGE commands, but the previous output wouldn't have worked before 8.1 anyway --- before that CREATE LANGUAGE had to fully specify all the language's options, because there was no pg_pltemplate. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Searching array fields - or should I redesign?
Thanks for the comments. Just to clarify, I gave these two values only as examples. The readings could range from a handful for one vehicle type up to 40 or more for another type of vehicle.

On Thu, Dec 16, 2010 at 12:26 PM, Vincent Veyron wrote:
> On Wednesday, December 15, 2010 at 19:12 +0100, Jan Kesten wrote:
> > > eg, insert into logtable values ( 'vehicle123', now(), {{'voltage','13'},{'rpm','600'}});
> > >
> > > However, I am not sure how I can write a query - for example to read all records where the voltage field is less than 13. Performance in this case is not a real significant issue.
> > >
> > > Would I be better off redesigning and having a master / detail kind of structure? Where the master table would have the vehicle id, timestamp and a key to the detail table.
> >
> > The second approach would work quite well.
> >
> > table logentry
> >   id primary unique
> >   vehicleid int
> >   logtime timestamp
> >
> > table logdetail
> >   logid int
> >   attribute varchar/int
> >   value decimal
> >   textvalue varchar
> >
> > You can retrieve logentries for specific vehicles, timeframes and attributes - and you can extend more log attributes without changing the database structure. I would suggest another table for the attributes where you can look up whether it is a text or numeric entry.
>
> The problem with this approach is that you need to loop through your recordset in your code to collect all the values. If you only have one value per key to store per vehicle, it's much easier to have one big table with all the right columns, thus having just one line to process with all the information. So, from your example:
>
> create table logtable(
>     id_vehicle text,
>     date_purchased date,
>     voltage integer,
>     rpm integer);
>
> the corresponding record being: vehicle123, now(), 13, 600
>
> This will simplify your queries/code _a lot_. You can keep subclasses for details that have more than one value. Adding a column if you have to store new attributes is not a big problem.
>
> -- Vincent Veyron
> http://marica.fr/
> Management software for litigation and insurance case files for legal departments
Re: [GENERAL] Plperl caching prepared queries and cleanup
On Thu, Dec 16, 2010 at 01:57, Anupama wrote:
> Will the plan be freed when the db session / connection closes OR

Yes. However, results may vary with a connection pooler. FYI, %_SHARED is global to the session, not global to the database. (Technically it's global per Perl interpreter, so plperl and plperlu have different %_SHARED.)

> Will it lead to any memory leaks ?

Not unless you consider the act of storing something in %_SHARED a leak.

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Searching array fields - or should I redesign?
On Thursday, December 16, 2010 at 15:18 -0500, Bryan Montgomery wrote:
> Thanks for the comments. Just to clarify, I gave these two values as examples. The readings could be between a handful for one vehicle type up to 40 or more for another type of vehicle.

I'm not sure what you call a reading; do you mean something like a parameter? If so, give it its own column with an appropriate type. Your table would then have 40 columns, some of them being empty for some types of vehicle.

If a reading really is a record, then you want a subclass, with a foreign key referencing the primary key of your vehicle table. You can then have an unlimited number of records (again, each record having a column for each of the parameters/characteristics measured).

-- Vincent Veyron http://marica.fr/ Management software for litigation and insurance case files for legal departments

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] PANIC: ERRORDATA_STACK_SIZE exceeded
hi! i have postgresql 8.3.12 and i got the eror: PANIC: ERRORDATA_STACK_SIZE exceeded This happens after insert 1000 registers on a table every 1 minute during 3 hours. any idea, why this happened??? # - # PostgreSQL configuration file # - # # This file consists of lines of the form: # # name = value # # (The "=" is optional.) Whitespace may be used. Comments are introduced with # "#" anywhere on a line. The complete list of parameter names and allowed # values can be found in the PostgreSQL documentation. # # The commented-out settings shown in this file represent the default values. # Re-commenting a setting is NOT sufficient to revert it to the default value; # you need to reload the server. # # This file is read on server startup and when the server receives a SIGHUP # signal. If you edit the file on a running system, you have to SIGHUP the # server for the changes to take effect, or use "pg_ctl reload". Some # parameters, which are marked below, require a server shutdown and restart to # take effect. # # Any parameter can also be given as a command-line option to the server, e.g., # "postgres -c log_connections=on". Some paramters can be changed at run time # with the "SET" SQL command. # # Memory units: kB = kilobytes MB = megabytes GB = gigabytes # Time units:ms = milliseconds s = seconds min = minutes h = hours d = days #-- # FILE LOCATIONS #-- # The default values of these variables are driven from the -D command-line # option or PGDATA environment variable, represented here as ConfigDir. #data_directory = 'ConfigDir' # use data in another directory # (change requires restart) #hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file # (change requires restart) #ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file # (change requires restart) # If external_pid_file is not explicitly set, no extra PID file is written. #external_pid_file = '(none)' # write an extra PID file # (change requires restart) #-- # CONNECTIONS AND AUTHENTICATION #-- # - Connection Settings - listen_addresses = '*' # what IP address(es) to listen on; # comma-separated list of addresses; # defaults to 'localhost', '*' = all # (change requires restart) port = 5432 # (change requires restart) max_connections = 512 # (change requires restart) # Note: Increasing max_connections costs ~400 bytes of shared memory per # connection slot, plus lock space (see max_locks_per_transaction). You might # also need to raise shared_buffers to support more connections. 
superuser_reserved_connections = 3 # (change requires restart) #unix_socket_directory = '' # (change requires restart) #unix_socket_group = '' # (change requires restart) #unix_socket_permissions = 0777 # begin with 0 to use octal notation # (change requires restart) #bonjour_name = '' # defaults to the computer name # (change requires restart) # - Security and Authentication - #authentication_timeout = 1min # 1s-600s #ssl = off # (change requires restart) #ssl_ciphers = 'ALL:!ADH:!LOW:!EXP:!MD5:@STRENGTH' # allowed SSL ciphers # (change requires restart) #password_encryption = on #db_user_namespace = off # Kerberos and GSSAPI #krb_server_keyfile = ''# (change requires restart) #krb_srvname = 'postgres' # (change requires restart, Kerberos only) #krb_server_hostname = '' # empty string matches any keytab entry # (change requires restart, Kerberos only) #krb_caseins_users = off# (change requires restart) #krb_realm = '' # (change requires restart) # - TCP Keepalives - # see "man 7 tcp" for details #tcp_keepalives_idle = 0# TCP_KEEPIDLE, in seconds; # 0 selects the system default #tcp_keepalives_interval = 0# TCP_KEEPINTVL, in seconds; # 0 selects the system default #tcp_keepalives_count = 0 # TCP_KEEPCNT; # 0
Re: [GENERAL] PANIC: ERRORDATA_STACK_SIZE exceeded
On Thu, Dec 16, 2010 at 06:33:25PM -0600, Edmundo Robles L. wrote:
> hi! I have postgresql 8.3.12 and I got the error: PANIC: ERRORDATA_STACK_SIZE exceeded

    if (++errordata_stack_depth >= ERRORDATA_STACK_SIZE)
    {
        /*
         * Wups, stack not big enough.  We treat this as a PANIC condition
         * because it suggests an infinite loop of errors during error
         * recovery.
         */
        errordata_stack_depth = -1;        /* make room on stack */
        ereport(PANIC, (errmsg_internal("ERRORDATA_STACK_SIZE exceeded")));
    }

max_stack_depth (integer)

Specifies the maximum safe depth of the server's execution stack. The ideal setting for this parameter is the actual stack size limit enforced by the kernel (as set by ulimit -s or local equivalent), less a safety margin of a megabyte or so. The safety margin is needed because the stack depth is not checked in every routine in the server, but only in key potentially-recursive routines such as expression evaluation. The default setting is two megabytes (2MB), which is conservatively small and unlikely to risk crashes. However, it may be too small to allow execution of complex functions. Only superusers can change this setting.

Setting max_stack_depth higher than the actual kernel limit will mean that a runaway recursive function can crash an individual backend process. On platforms where PostgreSQL can determine the kernel limit, it will not let you set this variable to an unsafe value. However, not all platforms provide the information, so caution is recommended in selecting a value.

> #max_stack_depth = 2MB    # min 100kB

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
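If you want to follow the quoted advice, first check the kernel limit with ulimit -s as the postgres user, then compare it with the server setting; the 6MB figure below is only an example and should stay a megabyte or so under that limit:

    SHOW max_stack_depth;            -- 2MB by default

    -- as a superuser, for the current session:
    SET max_stack_depth = '6MB';

    -- to make it permanent, set max_stack_depth = 6MB in postgresql.conf
    -- and reload the server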
Re: [GENERAL] How to obtain the maximum value of a date, between 3 tables...
On 2010-12-08, Andre Lopes wrote:
> Hi,
>
> I need to obtain the maximum value of a date, but that comparison will be made between 3 tables... I will explain better with a query...

you probably want greatest(d1,d2,d3) or possibly max(greatest(d1,d2,d3))

-- ⚂⚃ 100% natural

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
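For example, with three joined tables each contributing one date column (all names below are placeholders):

    SELECT greatest(a.created_at, b.updated_at, c.shipped_at) AS latest_date
    FROM a
    JOIN b ON b.a_id = a.id
    JOIN c ON c.a_id = a.id;

    -- or, the single largest date over all rows:
    SELECT max(greatest(a.created_at, b.updated_at, c.shipped_at))
    FROM a
    JOIN b ON b.a_id = a.id
    JOIN c ON c.a_id = a.id;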
Re: [GENERAL] if-clause to an exiting statement
On 2010-12-07, Kobi Biton wrote:
> Hi, I am a newbie to SQL statements. I am running Postgres 8.1 with an application called OpenNMS, version 1.8.5. Due to an application bug, queries that I execute against the DB which return a row count of 0 are being ignored and will not fire a certain trigger that I need processed.

I think you want this:

    ORIGINAL QUERY
    union
    select DUMMY ROW DATA where not exists ( ORIGINAL QUERY )

You need to return something to get a row count of 1; this is what the dummy row data provides. The "where not exists" part blocks the dummy row data when the main query returns something.

-- ⚂⚃ 100% natural

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
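Spelled out with invented table and column names, the pattern looks like this:

    -- original query
    SELECT nodeid, count(*) AS event_count
    FROM events
    WHERE severity > 5
    GROUP BY nodeid

    UNION

    -- dummy row, emitted only when the original query returns nothing,
    -- so the overall row count is never 0
    SELECT 0 AS nodeid, 0 AS event_count
    WHERE NOT EXISTS (SELECT 1 FROM events WHERE severity > 5);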
Re: [GENERAL] Simple, free PG GUI/query tool wanted
You could try SQL Workbench (http://www.sql-workbench.net/).