Re: [GENERAL] Replication
On 02/11/10 01:56, Scott Marlowe wrote: On Mon, Nov 1, 2010 at 4:39 PM, Jonathan Tripathy wrote: On 01/11/10 21:10, Vick Khera wrote: On Mon, Nov 1, 2010 at 3:53 PM, Jonathan Tripathy wrote: The standby must support INSERTS and UPDATES as well (once the master has failed) Are there any solutions like this? Looking on the Postgresql site, all the standby solutions seem to be read only.. If they are RO it is only while they are replicas, not masters. Once the server is upgraded to the master role, it becomes RW. So in the "Hot Standby" setup as described in http://www.postgresql.org/docs/current/static/hot-standby.html , how would I automatically make the slave a master? I think you're looking for this: http://www.postgresql.org/docs/current/static/warm-standby-failover.html What is the difference between the "Hot-Standby" and "Warm-Standby"? Is the only difference that the "Hot-Standby" standby servers are read-only, whereas the "Warm-Standby" standbys can't be queried at all? Thanks -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Connection Pool
On 11/02/2010 01:42 AM, Jonathan Tripathy wrote: I would like my application to try and get a connection from the pool, and if there are none free, wait until either one is free or time is up. Rather than rolling your own connection pool, consider using one of the well-established existing ones like DBCP. Alternately, you can use the pooling DataSource offered by the PostgreSQL driver, though it's pretty minimal as well. http://commons.apache.org/dbcp/ Alternately you could use server-side pooling with PgPool-II. -- Craig Ringer -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] JDBC Transactions
On Mon, 01 Nov 2010 20:02:30 +, Jonathan Tripathy wrote: > On 01/11/10 19:56, Andy Colson wrote: >> On 11/1/2010 2:29 PM, Jonathan Tripathy wrote: >>> >>> On 01/11/10 19:12, Andy Colson wrote: On 11/1/2010 2:01 PM, Jonathan Tripathy wrote: > >>> I'll give you the exact case where I'm worried: >>> >>> We have a table of customers, and each customer can have multiple >>> memberships (which are stored in the memberships table). We want our >>> deleteMembership(int membershipID) method to remove the membership, >>> then >>> check to see if there are no more memberships left for the >>> corresponding >>> customer, and if there are none, delete the corresponding >>> customer as >>> well. >>> >> >> Hum.. yeah, I can see a race condition there. but even with table >> locking I can see it. Not sure how your stuff works, but I'm thinking >> website: >> >> user1 goes to customer page, clicks on "add membership" and starts >> filling out info. >> >> user2 goes to customer page, clicks on "delete membership" of the >> last >> member ship, which blows away the membership, then the customer. >> >> user1 clicks save. >> >> Wouldnt matter for user2 if you locked the table or not, right? >> >> -Andy > > In the case described above, our code would throw an exception saying > "Customer no longer exists", prompting the user to create a fresh > customer - So I'm not worried about this (Although it may be > inconvenient for the user, I don't think much can be done in this > case). > Please let me know if I've missed something here. > > I'm more worried about the following situation (Where a bad > interleaving > sequence happens): > > user1 goes to customer page, clicks on "delete membership" of the last > member ship, which blows away the membership, > user2 goes to customer page, clicks on "add membership" and starts > filling out info. > user1 then blows away the customer. > > However I guess that if the relations are set up properly in the > database, an exception could be thrown to say that there are > corresponding memberships still exist... > yep, that sequence could be a problem too. It'll be a problem whenever more than one person gets to the customer page. Another user could cause that customer to go away at any time. with or without table locks: user1 and 2 go to customer page. user1 deletes last membership, and customer user2 does anything... cuz customer has gone away. Do you really need to delete the customer? Is leaving it around a problem? -Andy >>> Yeah, unfortunately leaving the customer round is a problem due to Data >>> Protection Policies in the EU. >>> >>> However, I'm not worried about the above situation, as if the user tries >>> to do anything with a customer that doesn't exist, an exception is >>> thrown which is, I believe, handled properly (i.e. the program doesn't >>> crash, but will simply tell the user to start again and create a new >>> customer). >>> >>> Do you think table relations are enough to solve the situation that I >>> gave above? I.e: >>> >>> user1 goes to customer page, clicks on "delete membership" of the last >>> membership, which blows away the membership, >>> user2 goes to customer page, clicks on "add membership" and starts >>> filling out info. >>> user1 then blows away the customer. >>> >>> Would my above problem be solved if the database refused to remove a >>> customer if it had remaining memberships? 
>>> >>> Another potential solution could be to leave the customer behind, but >>> run a script on a Saturday night or something to delete all customers >>> with no memberships... >>> >>> What do you think would be best? >>> >>> Thanks >>> >> >> I think we might be splitting hairs... What are the chances two people >> are editing the same customer at the exact same time? Plus the >> chances there is only one membership (which one user is deleting), >> plus the chances they are clicking the save button at the exact same >> time. >> >> In the PG world, I think it might go like: >> >> user1 clicks delete last membership: >> start transaction >> delete from memberships where id = 42; >> >> user2 has filled out new membership and clicks save >> start transaction >> insert into memebership where id = 100; >> >> user1 >> pg's default transaction level is read commited (which I learned >> in "[GENERAL] Can Postgres Not Do This Safely ?!?" thread) >> At this point both have a transaction open, neither commited. If >> user1 checked right now to see if customer had any more memberships, >> it would not see any and delete the customer which would be bad... but >> lets wait >> >> user2 >> commit >> >> user1 >> now user1 would see the new membership, and not dele
[GENERAL] index in desc order
Is it possible to create an index in descending order?
Re: [GENERAL] index in desc order
On 2 November 2010 12:36, AI Rumman wrote: > Is it possible to create an index in descending order? > Yes it is - http://www.postgresql.org/docs/current/interactive/indexes-ordering.html -- Sergey Konoplev Blog: http://gray-hemp.blogspot.com / Linkedin: http://ru.linkedin.com/in/grayhemp / JID/GTalk: gray...@gmail.com / Skype: gray-hemp -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] index in desc order
On 2 November 2010 10:36, AI Rumman wrote: > Is it possible to create an index in descending order? > yes... create index i on t(i desc); regards Szymon
[GENERAL] Dealing with locking on batch updates.
Hi, We are currently facing the following issue with an Oracle 10g database, but I just want to know how PostgreSQL would solve this problem. We are planning a migration to an open-source RDBMS in the future, so I wanted to clear up this issue. Let me describe the scenario, a real-life issue that we faced on a very large database. Our client is a well-known cell phone service provider. Our database has a table that manages records of the current balance left on the customer's cell phone account. Among the other columns of the table, one column stores the amount of recharge done and another manages the current active balance left. We have two independent PL/SQL scripts. One script is fired automatically when the customer recharges his phone and updates his balance. The second script deducts certain charges from the customer's account. This is a batch job, as it applies to all customers, and it is scheduled to run at certain intervals during the day. When this script runs, it loads 50,000 records into memory, updates certain columns and performs a bulk update back to the table. The issue happened like this: A customer, whose ID is 101, contacted his local shop to get his phone recharged. He pays the amount. But before his phone was recharged, the scheduled time arrived and the second script fired. The second script loaded the records of 50,000 customers into memory, including this customer's record. Before the second script's batch update finished, the first script successfully recharged the customer's account. What happened is that in the actual table the column "CurrentAccountBalance" got updated to 150, but the in-memory records on which the second script was working still had the customer's old balance, i.e. 100. The second script had to deduct 10 from "CurrentAccountBalance". Where the customer's "CurrentAccountBalance" should correctly have ended up at 140, this issue made his balance 90. Now how do we deal with this issue? I want to know how we can handle it in PostgreSQL. Regards, Rohit P. Khare
Re: [GENERAL] JDBC Transactions
On 11/02/2010 03:01 AM, Jonathan Tripathy wrote: user1 goes to customer page, clicks on "delete membership" of the last member ship, which blows away the membership, user2 goes to customer page, clicks on "add membership" and starts filling out info. user1 then blows away the customer. However I guess that if the relations are set up properly in the database, an exception could be thrown to say that there are corresponding memberships still exist... Yep. However, most webapps use short transactions and optimistic locking using a row change timestamp / counter. This sort of approach will detect conflicting writes but will NOT lock rows to prevent someone else deleting them. There are still races, you just find out if you lose rather than having data clobbered silently. It doesn't sound like you're using this kind of strategy; it's mostly popular with ORM systems and "highly scalable" webapps with high user counts. Beware if you are, though, as you have to design things differently, as you pretty much have to live with user 2 getting an error from your app saying that "the customer seems to have been deleted by somebody else". If you're holding database connections open with transactions open during user "think time", which I think you are, then you can use row-level locking in the database to handle the issue. Just obtain a row-level read lock on the customer row of interest before doing any addition/deletion/alteration of memberships. If your transaction will alter the customer record its self, obtain a write lock (FOR UPDATE) instead, because trying to get a SHARE lock then upgrading to an UPDATE lock is, like any other lock promotion, prone to deadlock. SELECT id FROM customer WHERE id = 'thecustomerid' FOR SHARE; INSERT INTO membership(...) You can do this with a BEFORE trigger on the table containing memberships, but doing it that way may make you more prone to deadlocks caused by lock ordering problems. If you do this, you have to be aware that other SELECT .. FOR UPDATE queries will block if a row is already locked by another transaction. You can use NOWAIT to prevent this, but have to be prepared to handle errors caused by another transaction having the row locked. See: http://www.postgresql.org/docs/current/static/sql-select.html#SQL-FOR-UPDATE-SHARE -- Craig Ringer -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
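As an illustration of the row-lock pattern described above, here is a minimal sketch; the customer/memberships table and column names are assumptions for illustration, not the poster's actual schema:

    BEGIN;

    -- Lock the parent row so a concurrent transaction cannot delete the
    -- customer while this one adds a membership. NOWAIT raises an error
    -- immediately instead of blocking if the row is already locked.
    SELECT id FROM customer WHERE id = 42 FOR SHARE NOWAIT;

    INSERT INTO memberships (customer_id, plan)
    VALUES (42, 'basic');

    COMMIT;

If the customer row has already been deleted, the SELECT simply returns no row, which the application can treat the same way as its existing "customer no longer exists" error.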
Re: [GENERAL] index in desc order
But I am using Postgresql 8.1. Is it possible here? On Tue, Nov 2, 2010 at 3:42 PM, Szymon Guz wrote: > > > On 2 November 2010 10:36, AI Rumman wrote: > >> Is it possible to create an index in descending order? >> > > yes... > > create index i on t(i desc); > > > regards > Szymon >
Re: [GENERAL] Dealing with locking on batch updates.
On 2 November 2010 10:47, RP Khare wrote: > Hi, > > Though the following issue we are facing at present with Oracle 10g > database, but I just want to know how PostgreSQL would solve this problem. > We are planning a migration to any open-source RDBMS in future, so just > wanted to clear this issue. > > Let me clear the scenario, the real-life issue that we faced on a very > large database. Our client is a well-known cell phone service provider. > Our database has a table that manages records of the current balance left > on the customer's cell phone account. Among the other columns of the table, > one column stores the amount of recharge done and one other column manages > the current active balance left. > > > We have two independent PL/SQL scripts. One script is automatically fired > when the customer recharges his phone and updates his balance. > The second script is about deduction certain charges from the customers > account. This is a batch job as it applies to all the customers. This script > is scheduled to run at certain intervals of a day. When this script is run, > it loads 50,000 records in the memory, updates certain columns and performs > bulk update back to the table. > > > The issue happened is like this: > > > A customer, whose ID is 101, contacted his local shop to get his phone > recharged. He pays the amount. But till the time his phone was about to > recharge, the scheduled time of the second script fired the second script. > The second script loaded the records of 50,000 customers in the memory. In > this in-memory records, one of the record of this customer too. > > > Till the time the second script's batch update finishes, the first script > successfully recharged the customer's account. > > > Now what happened is that is the actual table, the column: > "CurrentAccountBalance" gets updated to 150, but the in-memory records on > which the second script was working had the customer's old balance i.e, 100. > The second script had to deduct 10 from the column: > "CurrentAccountBalance". When, according to actual working, the customer's > "CurrentAccountBalance" should be 140, this issue made his balance 90. > Now how to deal with this issue. > > I want to know how we can handle this issue in PostgreSQL. > > Regards, > Rohit P. Khare > Maybe you should just lock the rows during those operations? Simple select for update should be enough I think. And Oracle can do that too. regards Szymon
Re: [GENERAL] index in desc order
On 2 November 2010 12:57, AI Rumman wrote: > But I am using Postgresql 8.1. Is it possible here? I am afraid not. You could try to do the index using kind of 1/field trick but I am not sure if it performs better than backward index scan in general. > > On Tue, Nov 2, 2010 at 3:42 PM, Szymon Guz wrote: >> >> >> On 2 November 2010 10:36, AI Rumman wrote: >>> >>> Is it possible to create an index in descending order? >> >> yes... >> create index i on t(i desc); >> >> regards >> Szymon > -- Sergey Konoplev Blog: http://gray-hemp.blogspot.com / Linkedin: http://ru.linkedin.com/in/grayhemp / JID/GTalk: gray...@gmail.com / Skype: gray-hemp -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] JDBC Transactions
On 02/11/10 09:53, Craig Ringer wrote: [...] Hi Craig, Thanks for the excellent reply. I don't have time to read it at the minute, but I'll read it later on today and get back to you. Just as a quick response, I'm not keeping any transactions open during user "think time" so row level locks aren't possible. However I'm happy enough with the user getting a message saying that "The customer has been deleted by somebody else". I don't really mind what happens, as long as the user is made aware of what has happened, and there aren't any memberships with no corresponding customers. Thanks Jonny -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] postgre...@fosdem 2011 - Call for talks
FOSDEM is one of the biggest Free and Open Source event held annually in Brussels, Belgium, and attended by around 4000 people. As in recent years, the PostgreSQL project will have a devroom where we will be presenting a number of talks. The event will be held on the 5 - 6th February 2011. We're looking for developers, users and contributors to submit talks for inclusion on the program. Any topic related to PostgreSQL is acceptable as long as it is non-commercial in nature. Suggested topics might include: * Migration of systems to PostgreSQL * Developing applications for PostgreSQL * Benchmarking and tuning * Administering PostgreSQL installations * Spatial applications * PostgreSQL hacking * Data warehousing * New features * Community & user groups * PostgreSQL tools and utilities * Tips and tricks * Replication * Case studies We will have a number of 45 minutes slots, and may split one or more into 3 back-to-back 15 minute slots if we receive suitable proposals. Please submit your proposals to: fos...@postgresql.eu and include the following information: * Your name * The title of your talk (please be descriptive, as titles will be listed with ~250 from other projects) * A short abstract of one to two paragraphs * A short biography introducing yourself * Links to related websites/blogs etc. The deadline for submissions is 20th December 2010. The proposals will be considered by committee. If your proposal has been accepted, you will be informed by email within one week of the submission deadline. -- Andreas 'ads' Scherbaum German PostgreSQL User Group European PostgreSQL User Group - Board of Directors Volunteer Regional Contact, Germany - PostgreSQL Project -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Replication
On Tue, Nov 2, 2010 at 2:59 AM, Jonathan Tripathy wrote: > What is the difference between the "Hot-Standby" and "Warm-Standby"? Is the > only different that the "Hot-Standby" standby servers are read-only, whereas > the "Warm-Standby" standbys can't be queried at all? > That's the general definition of those two terms as applied to a database server. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Replication
On Mon, Nov 1, 2010 at 6:39 PM, Jonathan Tripathy wrote: > So in the "Hot Standby" setup as described in > http://www.postgresql.org/docs/current/static/hot-standby.html , how would I > automatically make the slave a master? If you plan to make it automatic, be absolutely 100% sure that your automated criteria for doing the switchover is really 100% accurate, else you may end up switching when you didn't really want to for some temporary failure condition. I've never been able to define something that perfect so we still only ever do manual switchovers. Based on your questions, you perhaps should be seeking the advice of a paid consultant expert in such matters if you really value your data. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Dealing with locking on batch updates.
On Tue, Nov 2, 2010 at 5:47 AM, RP Khare wrote: > I want to know how we can handle this issue in PostgreSQL. > When you load your 50,000 record batch, do it inside a transaction block and "SELECT FOR UPDATE" the rows instead of plain "SELECT". That will lock those rows from being updated by other transactions, which will just wait for your transaction to finish, and conversely, your transaction will not proceed until the other transactions have released their update locks as well. This is the most efficient way you can do locking. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
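A sketch of that pattern, with invented table and column names (the original schema is not shown in the thread):

    BEGIN;

    -- Read and lock the batch in one step; a concurrent recharge of any
    -- of these customers will block until this transaction commits, so
    -- the values read here cannot go stale mid-batch.
    SELECT customer_id, current_account_balance
    FROM account_balances
    WHERE customer_id BETWEEN 1 AND 50000
    FOR UPDATE;

    -- The batch job computes the new balances from the rows it just read
    -- and writes them back; shown as a single UPDATE here for brevity.
    UPDATE account_balances
    SET current_account_balance = current_account_balance - 10
    WHERE customer_id BETWEEN 1 AND 50000;

    COMMIT;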
Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?
So I am still in the dark about the entire upgrade or step up process from 8.4.4-6 to 9.0.1-2. I have my 4 databases all backed up which I did when my server was 8.4.4-6 using the 'pg_dump' utility. That worked fine. So after I backed up my databases, I then upgraded the daemon to 9.0.1-2 and from there I don't understand the process. I obviously can't connect to the PostgreSQL database after I upgrade because I get the error: FATAL: database files are incompatible with server DETAIL: The data directory was initialized by PostgreSQL version 8.4, which is not compatible with this version 9.0.1. I have located the '/usr/bin/pg_upgrade' but I don't understand how I go about this. I am worried because I have 12 unique database users with unique / individual grants on specific databases. I don't want to start from scratch so I would assume this is where the 'pg_upgrade' script comes into play. How do I proceed from this point on? Obviously I can't access or connect to PostgreSQL once I upgrade to 9.0.1-2 so I assume at this time I need to execute the 'pg_upgrade' script to move forward, right? -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Implementing replace function
Hello, If you expect your data to reach some kind of "critical size" at some point (i.e. updates will be more likely than inserts at that point), you can optimize your UPSERT code by trying to UPDATE before INSERTing. Otherwise trying to INSERT first should decrease the average UPSERT execution time in the long run, since you are less likely to hit the exception and do some extra work on the table. Anyway, glad to hear this is going to be accounted for in 9.1. :) On 31/10/2010 11:07, Thomas Kellerer wrote: Alexander Farber wrote on 31.10.2010 09:22: Hello Postgres users, to mimic the MySQL-REPLACE statement I need to try to UPDATE a record and if that fails - INSERT it. There is actually an example of this in the PG manual ;) http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE Here is another solution based on triggers: http://database-programmer.blogspot.com/2009/06/approaches-to-upsert.html Regards Thomas -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
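A minimal sketch of the update-first approach, along the lines of the manual example linked above; the counters(key, value) table is invented for illustration:

    CREATE FUNCTION upsert_counter(k text, v integer) RETURNS void AS $$
    BEGIN
        LOOP
            -- Try the UPDATE first; if the row exists, this is all we need.
            UPDATE counters SET value = v WHERE key = k;
            IF found THEN
                RETURN;
            END IF;
            -- No row yet: try to INSERT it. If someone else inserted the
            -- same key concurrently, loop back and try the UPDATE again.
            BEGIN
                INSERT INTO counters (key, value) VALUES (k, v);
                RETURN;
            EXCEPTION WHEN unique_violation THEN
                -- do nothing; the next loop iteration retries the UPDATE
            END;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;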
Re: [GENERAL] index in desc order
Szymon Guz writes: > On 2 November 2010 10:36, AI Rumman wrote: >> Is it possible to create an index in descending order? > create index i on t(i desc); Note that there is actually no point at all in such a declaration. The planner is perfectly capable of using backwards indexscans at need, so the above index doesn't do anything you couldn't do with a regular ascending-order index. The cases where this feature is actually worth something is where you have a multi-column index and you need different sort orders for the components, for example create index xy on t (x asc, y desc); which could be used to satisfy SELECT ... ORDER BY x ASC, y DESC. The OP didn't say what he wanted to use the feature for, but unless it's something like that, there's probably a better way. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] index in desc order
On Tue, 2 Nov 2010 10:10:19 -0400, Tom Lane wrote: > Szymon Guz writes: > > On 2 November 2010 10:36, AI Rumman wrote: > >> Is it possible to create an index in descending order? > > > create index i on t(i desc); > > Note that there is actually no point at all in such a declaration. > The planner is perfectly capable of using backwards indexscans at > need, so the above index doesn't do anything you couldn't do with > a regular ascending-order index. Cannot there be a (system/hardware) setup where there is a perceptible performance difference between forward and backward index scans? -- Michał Politowski Talking has been known to lead to communication if practiced carelessly. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Replication
From: pgsql-general-ow...@postgresql.org on behalf of Vick Khera Sent: Tue 02/11/2010 13:18 To: pgsql-general Subject: Re: [GENERAL] Replication On Tue, Nov 2, 2010 at 2:59 AM, Jonathan Tripathy wrote: > What is the difference between the "Hot-Standby" and "Warm-Standby"? Is the > only different that the "Hot-Standby" standby servers are read-only, whereas > the "Warm-Standby" standbys can't be queried at all? > That's the general definition of those two terms as applied to a database server. -- Excellent! In terms of streaming "Warm-Standby" replication, how much data loss will occur? Are we talking seconds, minutes, or hours? Let's assume a lightly used database (maybe 50 update queries an hour) and the master and slave are connected by Gigabit ethernet Thanks
Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?
On Tuesday 02 November 2010 7:02:05 am Carlos Mennens wrote: > So I am still in the dark about the entire upgrade or step up process > from 8.4.4-6 to 9.0.1-2. I have my 4 databases all backed up which I > did when my server was 8.4.4-6 using the 'pg_dump' utility. That > worked fine. So after I backed up my databases, I then upgraded the > daemon to 9.0.1-2 and from there I don't understand the process. I > obviously can't connect to the PostgreSQL database after I upgrade > because I get the error: > > FATAL: database files are incompatible with server > DETAIL: The data directory was initialized by PostgreSQL version 8.4, > which is not compatible with this version 9.0.1. > > I have located the '/usr/bin/pg_upgrade' but I don't understand how I > go about this. I am worried because I have 12 unique database users > with unique / individual grants on specific databases. I don't want to > start from scratch so I would assume this is where the 'pg_upgrade' > script comes into play. How do I proceed from this point on? Obviously > I can't access or connect to PostgreSQL once I upgrade to 9.0.1-2 so I > assume at this time I need to execute the 'pg_upgrade' script to move > forward, right? pg_upgrade uses a different process then pg_dump/pg_restore.In order to use it you have to have both the old cluster and new cluster directories available at the same time, though not both servers running at the same time. See here for details: http://www.postgresql.org/docs/9.0/interactive/pgupgrade.html Since you already did the pg_dump, you also have the option of using psql/pg_restore to populate the 9.0 cluster. The choice of restore software depends on whether you used the plain text option to pg_dump(restore with psql) or the custom option(restore with pg_restore). Database users are global to cluster, in order to carry them over you need to do pg_dumpall -g where the -g specifies dump global objects only. If you have not already done that, it will be necessary to start up the 8.4 server again. -- Adrian Klaver adrian.kla...@gmail.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
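A rough outline of that dump/restore route, as a hedged sketch; the database and file names (mydb, globals.sql) are placeholders, and exact paths and init commands vary by distribution:

    # With the old 8.4 server still running, dump global objects (roles, grants):
    pg_dumpall -g > globals.sql

    # Stop the 8.4 server, initdb a fresh 9.0 data directory, start 9.0, then:
    psql -f globals.sql postgres    # restore roles and other global objects
    createdb mydb
    psql -d mydb -f mydb.sql        # a plain-text pg_dump of the old database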
[GENERAL] Group by and limit
Hey Folks - have I coded myself into a corner? I have a situation with a select count / group by / order by query where I need to limit each group to 500 entries. Not seeing a way to do this in a single query, do I need to use multiple queries? Group x has about 200 entries in it; group y has about 5-8k per x. select x, y, count(*) as counter from mytable group by x, y order by x, counter, y I only want the first 500 for each x. Any tips or tricks someone might know would be appreciated. I'm using postgres 8.3.7. Thanks, Bill
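For reference, one common way to express a per-group limit is with a window function. A hedged sketch, using the column names from the query above; note that window functions need PostgreSQL 8.4 or later, so this would not run on the 8.3.7 mentioned:

    SELECT x, y, counter
    FROM (
        SELECT x, y, count(*) AS counter,
               row_number() OVER (PARTITION BY x
                                  ORDER BY count(*), y) AS rn
        FROM mytable
        GROUP BY x, y
    ) sub
    WHERE rn <= 500
    ORDER BY x, counter, y;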
Re: [GENERAL] pg_get_procdef ?
hubert depesz lubaczewski writes: > and check the queries. getting function definition from there should be > simple. Check getddl to see how simple it is: https://labs.omniti.com/labs/pgtreats/wiki/getddl http://pgsql.tapoueh.org/getddl/ http://pgsql.tapoueh.org/getddl/sql/function.body.sql The Omniti version will output a single file with the objects in there, the other version will split the objects each into its own file in directories, to be svn / git friendly. With the python version: ./getddl.py -f -F fun_dir -d db -h host -p port -U user Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] how to get the height of index tree?
John R Pierce writes: >> 2010/10/22 sunpeng >> >> Hi, how to get the height of R* or B tree of created index ? You could use Gevel for GiST and GIN: http://www.sai.msu.su/~megera/wiki/Gevel Don't know if something similar exists for btree, but I guess it would be very useful. Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?
On Mon, Nov 1, 2010 at 1:36 PM, Carlos Mennens wrote: > I did an upgrade on my database server this past weekend and the > database fails to start. I checked /var/log/postgresql and found the > reason: > > [r...@slave ~]# ps aux | grep postgres > root 5189 0.0 0.0 8128 956 pts/0 S+ 12:28 0:00 grep postgres > > [r...@slave ~]# /etc/rc.d/postgresql start > :: Starting PostgreSQL > > [BUSY] server starting > > > [DONE] > [r...@slave ~]# ps aux | grep postgres > root 5205 0.0 0.0 8128 960 pts/0 R+ 12:28 0:00 grep postgres > > [r...@slave ~]# tail -n 50 /var/log/postgresql.log > FATAL: database files are incompatible with server > DETAIL: The data directory was initialized by PostgreSQL version 8.4, > which is not compatible with this version 9.0.1. > FATAL: database files are incompatible with server > DETAIL: The data directory was initialized by PostgreSQL version 8.4, > which is not compatible with this version 9.0.1. > FATAL: database files are incompatible with server > DETAIL: The data directory was initialized by PostgreSQL version 8.4, > which is not compatible with this version 9.0.1. > > Does anyone know if this is a issue with PostgreSQL or with the way > Arch Linux packages the upgrade? > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > Hi all, Just upgraded PostgreSQL yesterday, from 8.4.5 to 9.0.1 on Debian. This is not to start a flame on linux distributions but instead I just want to stand out that the hard work done over the years by Martin Pitt on the excellent PostgreSQL debian packages isn't sufficiently appreciated. Regards, diego -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Replication
Jonathan Tripathy writes: > What is the difference between the "Hot-Standby" and "Warm-Standby"? Is the > only different that the "Hot-Standby" standby servers are read-only, whereas > the "Warm-Standby" standbys can't be queried at all? That and the fact that running queries are not canceled at the time you flick the switch to have your standby a master. The ongoing read-only traffic is not affected. That's hot. Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] index in desc order
Michal Politowski writes: > Cannot there be a (system/hardware) setup where there is a perceptible > performance difference between forward and backward index scans? I think it's been reported already that backward index scans indeed can be much slower than forward index scan, but that how to model that is still unclear and undone in the cost estimations. You will have to crawl the pgsql-performance list yourself, though… Regards, -- Dimitri Fontaine http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Is this a known feature of 8.1 SSL connection?
On Tue, Nov 2, 2010 at 11:25 AM, Ray Stell wrote: > On Tue, Nov 02, 2010 at 09:03:59AM -0400, zhong ming wu wrote: >> On Mon, Nov 1, 2010 at 5:06 PM, Ray Stell wrote: >> > >> > no, that does not make sense to me, however, I don't have an 8.x to play >> with. >> > >> > In 9.0.1, >> > with hostnossl+md5 >> > ssl=on >> > no ~/.postgresql on the client >> > >> > $ psql -p 5498 template1 postgres >> > Password for user postgres: >> > psql (9.0.1) >> > Type "help" for help. >> > >> > template1=# \q >> > >> > what is the postmaster msg exactly? >> > >> >> psql 8.1 Client on 32 bit. 8.1 Server on 64 bit. Both centos 5.4. >> >> Client message: >> --- >> psql: could not open certificate file >> "/some/path/.postgresql/postgresql.crt" no such file ro directory >> -- >> Server log: >> -- >> Could not accept SSL connection: peer did not return a certificate. >> --- >> >> Now when a certificate was supplied the connection was made sucessfully with >> hostnossl >> >> And at the psql prompt, I do not get "ssl connection" details as expected >> for hostnossl connection. >> >> The psql command used >> >> psql -h 192.168.56.101 -U testuser test > > well, that is really strange. I wish I could help you by looking at an 8.x > install, but I don't have time right now. If we assume the code works the > same way in in 8 and 9, which I think they probably do, then I'd have to ask > if you are sure you are looking at the right config. Maybe you have more > than one test db? I'm sure you are not making that mistake. > > Good question. First, it's not easy to get confused like this because server logs lives in $PGDATA/pg_log/ and pg_hba.conf is in $PGDATA That is the production system where I first experienced the error. The errors I sent you above were generated on two freshly made virtualboxes with prepackaged 8.1 that comes with centos.
Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?
On 11/01/2010 02:49 PM, Carlos Mennens wrote: On Mon, Nov 1, 2010 at 2:27 PM, Steve Crawford wrote: I'm guessing you are missing an initdb. Move your old data directory somewhere else for now and do a new initdb so you can start up version 9. When you say 'old data' can you be more specific as to the path and possible files I need to move? I go to '/var/lib/postgres/data/' directory however I am not sure where from that folder structure I need to start moving files away without breaking basic server functionality& connection data. mv /var/lib/postgres/data /var/lib/postgres/data.old You will then have to do an initdb to create the basic 9.x databases. You can then use psql or pg_restore depending on how you dumped your data to restore your databases. With fedora you use either: /etc/init.d/postgresql initdb or service postgresql initdb to initialize the 9.x database system. HTH -- Stephen Clark *NetWolves* Sr. Software Engineer III Phone: 813-579-3200 Fax: 813-882-0209 Email: steve.cl...@netwolves.com http://www.netwolves.com
[GENERAL] Feature request for this mail list
I looked on the PG website but couldn't find an admin email for this list. I would like list emails to have an extra 'reply-to' header addressed to the list. Is it too much to ask? When replying from a mobile client it can be tricky without it, and even from a bigger client I most often forget.
Re: [GENERAL] can select contents of view but not view itself, despite indirect membership
> My guess is that it has something to do with the join to the table > 'mandate'. If your view definition includes a CASE WHEN... that would My new guess is something to do with permissions being cached somewhere, because this morning when I tried it (SET followed by SELECT from page_startup) from pgAdminIII, it worked. I had shut down pgAdminIII before leaving last night. Gah! Oh well, at least it's working now. I guess I should remember to always try restarting everything... Kev -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Feature request for this mail list
zhong ming wu writes: > I would like list emails to have extra 'reply-to' header addressed to the > list > Is it too much to ask? Yes. This is something that would break the messaging style that everyone on these lists uses. Reply-to-all is the standard custom around here. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Implementing replace function
On Tue, Nov 02, 2010 at 10:46:42AM +0100, Matthieu Huin wrote: > Hello, > > If you expect your data to reach some kind of "critical size" at some > point ( ie updates will be more likely than inserts at that point ), you > can optimize your UPSERT code by trying to UPDATE before INSERTing. > Otherwise trying to INSERT first should decrease the average UPSERT > execution time in the long run, since you are less likely to hit the > exception and do some extra work on the table. You'd almost think of using some kind of branch prediction techniques. You could track what happened the last two times and use that to predict which would be better. There are always pathological cases, but it could work well for normal workloads. Have a nice day, -- Martijn van Oosterhout http://svana.org/kleptog/ > Patriotism is when love of your own people comes first; nationalism, > when hate for people other than your own comes first. > - Charles de Gaulle
[GENERAL] pg_migrator segfault
I am stuck with a segmentation fault while running pg_upgrade, from 8.4.3 to 9.0.1 $ ./pg_upgrade -d /var/pgsql-8_4_3/data/ -D /var/pgsql-9_0_1/data/ -b /var/pgsql-8_4_3/bin/ -B /var/pgsql-9_0_1/bin/ --check -P 5433 -v -g -G debug Running in verbose mode Running in debug mode PerForming Consistency Checks on Old Live Server Checking old data directory (/var/pgsql-8_4_3/data) ok Checking old bin directory (/var/pgsql-8_4_3/bin) ok Checking new data directory (/var/pgsql-9_0_1/data) ok Checking new bin directory (/var/pgsql-9_0_1/bin) ok Segmentation fault In my logs : pg_upgrade[29259]: segfault at 474e4158 ip b7f444ed sp bfdd35c4 error 4 in libc-2.11.1.so[b7ed1000+16f000] Both postgres version were compiled from source. My environment: Linux Fedora Core 12 (kernel 2.6.30.5-xenU , glibc 2.11.1, gcc 4.4.3) No custom shared object files. Cluster encoding: LATIN9 Any clues? Hernán J. González
Re: [GENERAL] pg_migrator segfault
try gdb --args ./pg_upgrade -d /var/pgsql-8_4_3/data/ -D /var/pgsql-9_0_1/data/ -b /var/pgsql-8_4_3/bin/ -B /var/pgsql-9_0_1/bin/ --check -P 5433 -v -g -G debug and when it fails, type in 'bt' and paste it here please. -- GJ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Replication
On Tue, 2010-11-02 at 06:59 +, Jonathan Tripathy wrote: > >>> > >> So in the "Hot Standby" setup as described in > >> http://www.postgresql.org/docs/current/static/hot-standby.html , how would > >> I > >> automatically make the slave a master? > > I think you're looking for this: > > http://www.postgresql.org/docs/current/static/warm-standby-failover.html > > > What is the difference between the "Hot-Standby" and "Warm-Standby"? Is > the only different that the "Hot-Standby" standby servers are read-only, > whereas the "Warm-Standby" standbys can't be queried at all? The title of the second HTML page is now out of date. So there is no warm/hot confusion to worry about, just the name of the page and URL. -- Simon Riggs http://www.2ndQuadrant.com/books/ PostgreSQL Development, 24x7 Support, Training and Services -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Is this a known feature of 8.1 SSL connection?
On Tue, Nov 2, 2010 at 1:43 PM, Ray Stell wrote: >> >> Good question. First, it's not easy to get confused like this because server >> logs lives in $PGDATA/pg_log/ > > > this is configurable in postgresql.conf. you can set the logs to any dir > that exists and is writable by the software owner. Doing that wil make it > more clear what db you are really working with. I redirect to home: > > log_directory = '/home/postgres/log/atlassian' > Sure. What I meant was that it was configured to what I wrote above and that I was not looking in the wrong log.
Re: [GENERAL] pg_migrator segfault
2010/11/2 Grzegorz Jaśkiewicz > try gdb --args ./pg_upgrade -d /var/pgsql-8_4_3/data/ -D > /var/pgsql-9_0_1/data/ -b /var/pgsql-8_4_3/bin/ -B > /var/pgsql-9_0_1/bin/ --check -P 5433 -v -g -G debug > and when it fails, type in 'bt' and paste it here please. > > -- > GJ > Well, this is strange. I run the same command line with gbd, it does not throw the segfault. The first time it gave me some error, but with sensible info: === Starting program: /var/pgsql-9_0_1/bin/pg_upgrade -d /var/pgsql-8_4_3/data/ -D /var/pgsql-9_0_1/data/ -b /var/pgsql-8_4_3/bin/ -B /var/pgsql-9_0_1/bin/ --check -P 5433 -v -g -G debug [Thread debugging using libthread_db enabled] Running in verbose mode Running in debug mode Detaching after fork from child process 30334. Detaching after fork from child process 30335. PerForming Consistency Checks on Old Live Server Checking old data directory (/var/pgsql-8_4_3/data) ok Checking old bin directory (/var/pgsql-8_4_3/bin) ok Checking new data directory (/var/pgsql-9_0_1/data) ok Checking new bin directory (/var/pgsql-9_0_1/bin) ok Detaching after fork from child process 30336. Detaching after fork from child process 30337. Checking for reg* system oid user data typesok Checking for /contrib/isn with bigint-passing mismatch ok Checking for large objects ok "/var/pgsql-9_0_1/bin/pg_ctl" -l "/dev/null" -D "/var/pgsql-9_0_1/data" -o "-p 5433 -c autovacuum=off -c autovacuum_freeze_max_age=20" start >> "/dev/null" 2>&1 Detaching after fork from child process 30362. Trying to start new server .ok Unable to start new postmaster with the command: "/var/pgsql-9_0_1/bin/pg_ctl" -l "/dev/null" -D "/var/pgsql-9_0_1/data" -o "-p 5433 -c autovacuum=off -c autovacuum_freeze_max_age=20" start >> "/dev/null" 2>&1 Perhaps pg_hba.conf was not set to "trust". Program exited with code 01. Missing separate debuginfos, use: debuginfo-install glibc-2.11.1-4.i686 I found (by calling pg_ctl manually and redirecting the log somewhere) that there was a postmaster.opts not writable by postgres. I fixed that, run again, and all seemed well, but... === Reading symbols from /var/pgsql-9_0_1/bin/pg_upgrade...(no debugging symbols found)...done. (gdb) run Starting program: /var/pgsql-9_0_1/bin/pg_upgrade -d /var/pgsql-8_4_3/data/ -D /var/pgsql-9_0_1/data/ -b /var/pgsql-8_4_3/bin/ -B /var/pgsql-9_0_1/bin/ --check -P 5433 -v -g -G debug [Thread debugging using libthread_db enabled] Running in verbose mode Running in debug mode Detaching after fork from child process 30680. Detaching after fork from child process 30681. PerForming Consistency Checks on Old Live Server Checking old data directory (/var/pgsql-8_4_3/data) ok Checking old bin directory (/var/pgsql-8_4_3/bin) ok Checking new data directory (/var/pgsql-9_0_1/data) ok Checking new bin directory (/var/pgsql-9_0_1/bin) ok Detaching after fork from child process 30682. Detaching after fork from child process 30683. Checking for reg* system oid user data typesok Checking for /contrib/isn with bigint-passing mismatch ok Checking for large objects ok "/var/pgsql-9_0_1/bin/pg_ctl" -l "/dev/null" -D "/var/pgsql-9_0_1/data" -o "-p 5433 -c autovacuum=off -c autovacuum_freeze_max_age=20" start >> "/dev/null" 2>&1 Detaching after fork from child process 30708. Checking for presence of required libraries ok *Clusters are compatible* "/var/pgsql-9_0_1/bin/pg_ctl" -l "/dev/null" -D "/var/pgsql-9_0_1/data" stop >> "/dev/null" 2>&1 Detaching after fork from child process 30724. Program exited normally. 
Missing separate debuginfos, use: debuginfo-install glibc-2.11.1-4.i686 == But when I run the pg_upgrade outside gbd, again the segfault: === ./pg_upgrade -d /var/pgsql-8_4_3/data/ -D /var/pgsql-9_0_1/data/ -b /var/pgsql-8_4_3/bin/ -B /var/pgsql-9_0_1/bin/ --check -P 5433 -v -g -G debug Running in verbose mode Running in debug mode PerForming Consistency Checks on Old Live Server Checking old data directory (/var/pgsql-8_4_3/data) ok Checking old bin directory (/var/pgsql-8_4_3/bin) ok Checking new data directory (/var/pgsql-9_0_1/data) ok Checking new bin directory (/var/pgsql-9_0_1/bin) ok Segmentation fault == WTF? Hernán J. González
Re: [GENERAL] Feature request for this mail list
On Tue, Nov 02, 2010 at 11:58:35AM -0400, zhong ming wu wrote: > I looked on PG website but couldn't find admin email for this list > > I would like list emails to have extra 'reply-to' header addressed to the > list > > Is it too much to ask? When replying from a mobile client it can be tricky > without + even from a bigger client most often I forgot The mailing list manager has several options which may be relevant here. There is a reply-to option which may do what you want: http://mail.postgresql.org/mj/mj_wwwusr/domain=postgresql.org Have a nice day, -- Martijn van Oosterhout http://svana.org/kleptog/ > Patriotism is when love of your own people comes first; nationalism, > when hate for people other than your own comes first. > - Charles de Gaulle
Re: [GENERAL] pg_upgrade segfault (was: pg_migrator segfault)
2010/11/2 hernan gonzalez > 2010/11/2 Grzegorz Jaśkiewicz > > try gdb --args ./pg_upgrade -d /var/pgsql-8_4_3/data/ -D >> /var/pgsql-9_0_1/data/ -b /var/pgsql-8_4_3/bin/ -B >> /var/pgsql-9_0_1/bin/ --check -P 5433 -v -g -G debug >> and when it fails, type in 'bt' and paste it here please. >> >> -- >> GJ >> > I read somewhere that it can happen that a program segfaults because of an allocation problem which doesn't happen inside gdb (because some more memory is allocated there, or whatever). Running gdb with the core generated by the segfault, it outputs this: Program terminated with signal 11, Segmentation fault. #0 0xb7df84ed in _int_realloc () from /lib/libc.so.6 Missing separate debuginfos, use: debuginfo-install glibc-2.11.1-4.i686 (gdb) bt #0 0xb7df84ed in _int_realloc () from /lib/libc.so.6 #1 0xb7df88a0 in realloc () from /lib/libc.so.6 #2 0xb7db2a5e in __add_to_environ () from /lib/libc.so.6 #3 0xb7db27b7 in putenv () from /lib/libc.so.6 #4 0x0804aa11 in putenv2 () #5 0x0804af93 in get_control_data () #6 0x08049801 in check_cluster_compatibility () #7 0x0804eb88 in main () Hernán J. González
Re: [GENERAL] pg_migrator segfault
hernan gonzalez writes: > Well, this is strange. I run the same command line with gbd, it does not > throw the segfault. Interesting. Do "ulimit -c unlimited", then run pg_upgrade normally, and then try "gdb ./pg_upgrade core" (the name of the corefile might be something different depending on local configuration). regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Is this a known feature of 8.1 SSL connection?
Sorry to top post but it's this email client.. pg_hba.conf is bare bones. Besides, it asks for certs but it obviously does not do an SSL connection in the end. On Nov 2, 2010 2:12 PM, "Ray Stell" wrote: On Tue, Nov 02, 2010 at 01:54:34PM -0400, zhong ming wu wrote: > On Tue, Nov 2, 2010 at 1:43 PM, Ray... oh, well so the 8.1 pg is doing something strange then. the only other thing I can think of is that maybe the rules in pg_hba are top down and it hits a rule before the one you are reporting. I'm not exactly sure that's how it works, but that would explain the problem.
Re: [GENERAL] pg_upgrade segfault (was: pg_migrator segfault)
hernan gonzalez writes: > Running gbd with the core generated by the segfault, it outputs this: > Program terminated with signal 11, Segmentation fault. > #0 0xb7df84ed in _int_realloc () from /lib/libc.so.6 > Missing separate debuginfos, use: debuginfo-install glibc-2.11.1-4.i686 > (gdb) bt > #0 0xb7df84ed in _int_realloc () from /lib/libc.so.6 > #1 0xb7df88a0 in realloc () from /lib/libc.so.6 > #2 0xb7db2a5e in __add_to_environ () from /lib/libc.so.6 > #3 0xb7db27b7 in putenv () from /lib/libc.so.6 > #4 0x0804aa11 in putenv2 () > #5 0x0804af93 in get_control_data () > #6 0x08049801 in check_cluster_compatibility () > #7 0x0804eb88 in main () Hmm, this suggests that pg_upgrade has managed to clobber malloc's internal data structures, probably by writing past the end of an allocated chunk. You should be able to identify where if you can run pg_upgrade under valgrind or ElectricFence. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] pg_upgrade segfault (was: pg_migrator segfault)
In pg_upgrade/controldata.c, in the putenv2 function: char *envstr = (char *) pg_malloc(ctx, strlen(var) + strlen(val) + 1); sprintf(envstr, "%s=%s", var, val); Shouldn't it be "+ 2" instead of "+ 1"? (One for the '=', plus one for the null terminating char.) I think that fixes it. Hernán J. González http://hjg.com.ar/
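For illustration, a simplified C sketch of the corrected allocation (the real putenv2 takes a context argument and uses pg_malloc; this just shows why the size needs the extra byte):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Room is needed for var, '=', val, and the terminating NUL,
     * hence strlen(var) + strlen(val) + 2 rather than + 1. */
    static void putenv2(const char *var, const char *val)
    {
        char *envstr = malloc(strlen(var) + strlen(val) + 2);

        sprintf(envstr, "%s=%s", var, val);
        putenv(envstr);
        /* envstr is deliberately not freed: putenv() keeps the pointer. */
    }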
Re: [GENERAL] pg_upgrade segfault (was: pg_migrator segfault)
Replacing that 1 with 2 is enough to make it work for me, it seems. But it's not enough to make valgrind happy (it still reports 4 "definitely lost" blocks, all from that putenv2 function). Perhaps that's related to the comment: /* * Do not free envstr because it becomes part of the environment * on some operating systems. See port/unsetenv.c::unsetenv. */ Hernán J. González http://hjg.com.ar/
Re: [GENERAL] pg_upgrade segfault (was: pg_migrator segfault)
hernan gonzalez writes: > In pg_upgrade/controldata.c , putenv2 function : > char *envstr = (char *) pg_malloc(ctx, strlen(var) > + strlen(val) + 1); > sprintf(envstr, "%s=%s", var, val); > Shouldn't it be "+ 2 " instead of "+ 1" ? Yup, it sure should. So probably the reason you're the first one to see it is that the problem would depend on the exact lengths of the strings being used here :-( > But it's not enough to get valgrind happy (It still reports 4 "definitely > lost" blocks, all from that putenv2 function). That's expected; those blocks aren't supposed to get freed. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Return key from query
Hi everyone, When adding a new record, we run an insert query which auto-increments the primary key for the table. However the method (in java) which calls this query must return the newly created key. Any ideas on how to do this, preferably using a single transaction? Thanks -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Return key from query
On 11/02/2010 01:43 PM, Jonathan Tripathy wrote: Hi everyone, When adding a new record, we run an insert query which auto-increments the primary key for the table. However the method (in java) which calls this query must return the newly created key. Any ideas on how to do this, preferably using a single transaction? Thanks Use "returning": create table foo (bar serial, baz text); insert into foo (baz) values ('test') returning bar; bar - 1 (1 row) Cheers, Steve -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Return key from query
On 2 November 2010 21:43, Jonathan Tripathy wrote: > Hi everyone, > > When adding a new record, we run an insert query which auto-increments the > primary key for the table. However the method (in java) which calls this > query must return the newly created key. > > Any ideas on how to do this, preferably using a single transaction? > > Thanks > > Try insert returning, something like this: INSERT INTO distributors (did, dname) VALUES (DEFAULT, 'XYZ Widgets') RETURNING did; http://www.postgresql.org/docs/9.0/static/sql-insert.html regards Szymon
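From the Java side, one way this is commonly done with the PostgreSQL JDBC driver is to run the INSERT ... RETURNING statement through executeQuery() and read the key from the result set. A hedged sketch, reusing the foo(bar serial, baz text) table from the first reply above:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    class InsertExample {
        // Inserts a row and returns the key generated by the serial column.
        static long insertReturningKey(Connection conn, String baz)
                throws SQLException {
            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO foo (baz) VALUES (?) RETURNING bar");
            try {
                ps.setString(1, baz);
                ResultSet rs = ps.executeQuery(); // RETURNING yields a result set
                rs.next();
                return rs.getLong(1);             // the newly generated key
            } finally {
                ps.close();
            }
        }
    }

Recent driver versions also accept the standard JDBC route of Statement.RETURN_GENERATED_KEYS plus getGeneratedKeys(), which is built on the same RETURNING feature.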
Re: [GENERAL] Return key from query
On 11/02/2010 02:43 PM, Jonathan Tripathy wrote: > Hi everyone, > > When adding a new record, we run an insert query which auto-increments > the primary key for the table. However the method (in java) which calls > this query must return the newly created key. > > Any ideas on how to do this, preferably using a single transaction? > > Thanks > Ah yes have your cake and eat it too. If your app code (java) is making new instances and wants to have an id, then the table cannot have an auto-id. I'm big on java/class id generation (usually use UUID) but you do have to take complete responsibility for it. The hibernate folks frown on it but their retort is imho woeful. If you're not using uuid's you will need a generator. You can back it up with a persistent store so you know where you left off, but you will want to (auto-)increment that id table with a large value (say 5000) and have your generator dole them out as needed at the app level. When it has spent 5000 ids, it will go to the server and ask for another 5000 (separate tx). Please don't get hung up on losing some portion of the 5000 ids when you restart or whatever. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
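For illustration, the UUID route mentioned above is a one-liner in Java; how the value is stored (uuid column, text, etc.) is left open here:

    import java.util.UUID;

    class IdGenerator {
        // Generate a collision-resistant id on the application side,
        // before the row is ever sent to the database.
        static String newId() {
            return UUID.randomUUID().toString(); // e.g. for a uuid or text column
        }
    }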
Re: [GENERAL] Return key from query
On 2 November 2010 21:59, Rob Sargent wrote:
>
> On 11/02/2010 02:43 PM, Jonathan Tripathy wrote:
> > Hi everyone,
> >
> > When adding a new record, we run an insert query which auto-increments
> > the primary key for the table. However the method (in java) which calls
> > this query must return the newly created key.
> >
> > Any ideas on how to do this, preferably using a single transaction?
> >
> > Thanks
>
> Ah yes, have your cake and eat it too.
>
> If your app code (Java) is making new instances and wants them to have an id,
> then the table cannot have an auto-id.
>
why?

> I'm big on Java/class id generation (usually using UUIDs), but you do have
> to take complete responsibility for it. The Hibernate folks frown on it,
> but their retort is imho woeful. If you're not using UUIDs you will
> need a generator. You can back it up with a persistent store so you
> know where you left off, but you will want to (auto-)increment that id
> table by a large value (say 5000) and have your generator dole the ids out
> as needed at the app level. When it has spent 5000 ids, it goes back to
> the server and asks for another 5000 (separate tx). Please don't get
> hung up on losing some portion of the 5000 ids when you restart or
> whatever.

I'm not a fan of UUIDs, though I have to use them in many projects... nothing special, just another id, not better, not worse than a plain id, except maybe for the chance of collisions :) regards Szymon
[GENERAL] postgres videos - copyright issues or something else?
it's very hard to find postgres-related videos although i regularly read various blog posts mentioning some talk/conference. is this because of copyright issues or something else? i've tried finding a videos/talks page on the postgres site but didn't find anything (other than the fosdem videos). any links or comments appreciated. Aljosa Mohorovic -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] postgres videos - copyright issues or something else?
Maybe this link can help you http://vimeo.com/channels/postgres On Tue, 2 Nov 2010 22:11:34 +0100 Aljoša Mohorović wrote: > it's very hard to find postgres-related videos although i regularly > read various blog posts mentioning some talk/conference. > is this because of copyright issues or something else? > > i've tried finding a videos/talks page on the postgres site but didn't > find anything (other than the fosdem videos). > any links or comments appreciated. > > Aljosa Mohorovic > -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] postgres videos - copyright issues or something else?
On Tue, Nov 2, 2010 at 10:17 PM, Rodrigo Gonzalez wrote: > Maybe this link can help you http://vimeo.com/channels/postgres thanks, i was looking for something like that. somebody who has access to the postgres site should put this link (and similar ones) somewhere users can find them. Aljosa -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?
On Tue, Nov 2, 2010 at 10:53 AM, Steve Clark wrote: > mv /var/lib/postgres/data /var/lib/postgres/data.old Before I move or rename '/var/lib/postgres/data', what version of PostgreSQL should I be at? 8.4 or 9.0? > You will then have to do an initdb to create the basic 9.x databases. > You can then use psql or pg_restore depending on how you dumped > your data to restore your databases. I simply ran the following command: /usr/bin/pg_dump finance > finance.sql > With fedora you use either: > /etc/init.d/postgresql initdb > or > service postgresql initdb > to initialize the 9.x database system. Sadly that command didn't pan out for Arch Linux: # /etc/rc.d/postgresql initdb usage: /etc/rc.d/postgresql {start|stop|restart} -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?
On Tuesday 2. November 2010 22.37.33 Carlos Mennens wrote: > On Tue, Nov 2, 2010 at 10:53 AM, Steve Clark wrote: > > mv /var/lib/postgres/data /var/lib/postgres/data.old > > Before I move or rename '/var/lib/postgres/data', what version of > PostgreSQL should I be at? 8.4 or 9.0? You should be at 9.0. The important thing is that you must run the new installation's initdb against an empty data directory, or you'll inevitably get the complaints about incompatibility with the old files. It's quite logical, really, if you think about it. > > You will then have to do an initdb to create the basic 9.x databases. > > You can then use psql or pg_restore depending on how you dumped > > your data to restore your databases. > > I simply ran the following command: > > /usr/bin/pg_dump finance > finance.sql That's only the first part of the upgrade process. Then you must prepare an empty directory to accommodate your data, do an initdb, and then populate the new directory from your dump file. I think that the Gentoo method works great: In the /var/lib/postgres directory, we just create a new subdirectory named /8.3, /8.4, /9.0, and so on, and then a new /data directory under each one. That way, you can safely remove the old data when you've confirmed that the new version actually works. regards, Leif B. Kristensen -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
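Once the empty 9.0 cluster has been initialised and started, the restore step Leif mentions is just a matter of recreating the database and replaying the dump. A minimal sketch from psql, assuming the finance.sql file produced by Carlos's pg_dump is in the current directory and the connected role may create databases:

-- connected to the new 9.0 cluster (e.g. the postgres database)
CREATE DATABASE finance;
\c finance
\i finance.sql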
Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?
Carlos Mennens, 02.11.2010 22:37: Before I move or rename '/var/lib/postgres/data', what version of PostgreSQL should I be at? 8.4 or 9.0? Actually both, because pg_upgrade needs the binaries of the old *and* new version. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?
On 11/02/2010 02:54 PM, Thomas Kellerer wrote: Carlos Mennens, 02.11.2010 22:37: Before I move or rename '/var/lib/postgres/data', what version of PostgreSQL should I be at? 8.4 or 9.0? Actually both, because pg_upgrade needs the binaries of the old *and* new version. Part of the confusion Carlos is experiencing is that he is caught between two upgrade suggestions. At this point he is most of the way to doing it the traditional way, dump/restore. While pg_upgrade could be of use, it does not solve the immediate problem, which is how to restore the dump file :) -- Adrian Klaver adrian.kla...@gmail.com -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?
Adrian Klaver, 02.11.2010 23:23: Before I move or rename '/var/lib/postgres/data', what version of PostgreSQL should I be at? 8.4 or 9.0? Actually both, because pg_upgrade needs the binaries of the old *and* new version. Part of the confusion Carlos is experiencing is that he is caught between two upgrade suggestions. At this point he is most of the way to doing it the traditional way, dump/restore. While pg_upgrade could be of use, it does not solve the immediate problem, which is how to restore the dump file :) Ah, right. But on the other hand, he only dumped a single database, which will not include e.g. users and roles. So if he needs to restore users and privileges from the original 8.4 installation, there is no way around re-installing the 8.4 binaries. Regards Thomas -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Postgres connection errors
Hello. I have lots of ruby daemons running connected to postgres. Some of them start getting connection errors after about a day or two of running. The odd thing is that they don't all get the same error. Some get this error: PGError: lost synchronization with server: got message type "T" Others get this PGError: lost synchronization with server: got message type "e" And sometimes this PGError: lost synchronization with server: got message type "" What is postgres trying to tell me here? This error is most likely coming out of libpq I would think. Thanks. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?
On 02/11/2010 21:37, Carlos Mennens wrote: # /etc/rc.d/postgresql initdb usage: /etc/rc.d/postgresql {start|stop|restart} That's the daemon start/stop script. You need something like this: /path/to/pg/binaries/initdb /path/to/data/dir Ray. -- Raymond O'Donnell :: Galway :: Ireland r...@iol.ie -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Postgres connection errors
Tim Uckun writes: > I have lots of ruby daemons running connected to postgres. Some of > them start getting connection errors after about a day or two of > running. The odd thing is that they don't all get the same error. > Some get this error: PGError: lost synchronization with server: got > message type "T" > Others get this PGError: lost synchronization with server: > got message type "e" > And sometimes this PGError: lost synchronization with server: got > message type "" > What is postgres trying to tell me here? Most of the cases we've seen like that have been because multiple threads in the client application were trying to use the same PGconn connection object concurrently. There's no cross-thread synchronization built into libpq, so you have to provide the interlocks yourself if there's any possibility of multiple threads touching the same PGconn concurrently. And it will not support more than one query at a time in any case. But having said that ... usually apps that have made this type of mistake start falling over almost immediately. Maybe you have a case where it's mostly interlocked correctly, and you just missed one infrequent code path? regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] JDBC Transactions
On 02/11/10 18:29, Jonathan Tripathy wrote: > I don't really mind what happens, as > long as the user is made aware of what has happened, and there aren’t any > memberships with no corresponding customers. Well, that's taken care of by a referential integrity constraint. You don't need anything else. It sounded earlier like you also needed to ensure that there were no customers without corresponding memberships. -- Craig Ringer Tech-related writing: http://soapyfrogs.blogspot.com/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
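A minimal sketch of such a constraint, assuming obvious table and column names rather than Jonathan's actual schema:

-- a membership can never point at a missing customer, and deleting a
-- customer that still has memberships is rejected by the database
alter table memberships
  add constraint memberships_customer_fk
  foreign key (customer_id) references customers (id);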
Re: [GENERAL] Replication
On 02/11/10 21:21, Vick Khera wrote: > On Mon, Nov 1, 2010 at 6:39 PM, Jonathan Tripathy wrote: >> So in the "Hot Standby" setup as described in >> http://www.postgresql.org/docs/current/static/hot-standby.html , how would I >> automatically make the slave a master? > > If you plan to make it automatic, be absolutely 100% sure that > your automated criteria for doing the switchover is really 100% > accurate, else you may end up switching when you didn't really want to > for some temporary failure condition. I've never been able to define > something that perfect so we still only ever do manual switchovers. From what I've seen, the only way automatic switchovers ever work sanely is when the node that's promoting itself has a way to pull the plug on the master it's taking over from. A USB-controlled power board seems to be a popular cheap option, and isolation on a fibre-channel switch a more expensive option. Of course, even then you have to be sure your method for killing the old master will always work when the slave promotes itself to master, and will never trigger under any other circumstances. Good luck with that. (Reading the above paragraph, does anyone else find some IT terminology, when read out of context, kind of creepy? Unix's killing of children in particular.) > > Based on your questions, you perhaps should be seeking the advice of a > paid consultant expert in such matters if you really value your data. +1 There are many people on this list who do paid work. See the PostgreSQL website for a list of companies that work with PostgreSQL. -- Craig Ringer Tech-related writing: http://soapyfrogs.blogspot.com/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Postgres connection errors
> > Most of the cases we've seen like that have been because multiple > threads in the client application were trying to use the same PGconn > connection object concurrently. There's no cross-thread synchronization > built into libpq, so you have to provide the interlocks yourself if > there's any possibility of multiple threads touching the same PGconn > concurrently. And it will not support more than one query at a time > in any case. These are not threaded daemons but this does give me some sort of a clue to work on. I noticed that there is a call to clear stale connections which might be the culprit because in the case of these workers there is only one connection. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] JDBC Transactions
On 02/11/10 23:11, Craig Ringer wrote: On 02/11/10 18:29, Jonathan Tripathy wrote: I don't really mind what happens, as long as the user is made aware of what has happened, and there aren’t any memberships with no corresponding customers. Well, that's taken care of by a referential integrity constraint. You don't need anything else. It sounded earlier like you also needed to ensure that there were no customers without corresponding memberships. That would be bad as well; however, at least it wouldn’t crash the system. What interleaving sequence would cause that? -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Group by and limit
Reposting as I noticed that the original was in reply to a different subject. Hey Folks – have I coded myself into a corner yet? I have a situation with a select count / group by / order by query where I need to limit each group to 500 entries. Not seeing a way to do this in a single query, do I need to use multiple queries? x has about 200 distinct values; there are about 5-8k y entries per x.

select x, y, count(*) as counter
from mytable
group by x, y
order by x, counter, y

I only want the first 500 for each x. Any tips or tricks someone might know would be appreciated. I’m using postgres 8.3.7. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
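There is no window-function support before 8.4, so on 8.3.7 the usual workaround is one LIMIT 500 query per value of x (or a small PL/pgSQL loop). If an upgrade to 8.4 or later is an option, row_number() does it in a single query; a sketch against the columns named in the post:

select x, y, counter
from (select x, y, count(*) as counter,
             row_number() over (partition by x
                                order by count(*), y) as rn
      from mytable
      group by x, y) s
where rn <= 500
order by x, counter, y;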
[GENERAL] A strange SQL grammar issue with postgreSql 8.4.4 & Hibernate 3.5
Hi, This is the sql statement that causes the problem in Hibernate. It's actually generated by Hibernate. select this_.ENTITY_ID as ENTITY1_54_0_, this_.objecthandle_id as objectha2_54_0_, this_.remoteId as remoteId55_0_, this_.MANAGEMENTSERVER_ID as MANAGEME2_55_0_, this_.configFile as configFile66_0_, this_.runtimeGUID as runtimeG2_66_0_, this_.name as name66_0_, this_.runtimeGuestOS as runtimeG4_66_0_, this_.configGuestOS as configGu5_66_0_, this_.configGuestId as configGu6_66_0_, this_.numCPU as numCPU66_0_, this_.numNIC as numNIC66_0_, this_.memory as memory66_0_, this_.cpuReservation as cpuRese10_66_0_, this_.cpuLimit as cpuLimit66_0_, this_.cpuShares as cpuShares66_0_, this_.cpuSharesLevel as cpuShar13_66_0_, this_.memReservation as memRese14_66_0_, this_.memLimit as memLimit66_0_, this_.memShares as memShares66_0_, this_.memSharesLevel as memShar17_66_0_, this_.diskSpaceConsumed as diskSpa18_66_0_, this_.diskSpaceScanned as diskSpa19_66_0_, this_.diskSpaceCommitted as diskSpa20_66_0_, this_.toolStatus as toolStatus66_0_, this_.toolVersion as toolVer22_66_0_, this_.ipAddress as ipAddress66_0_, this_.dnsName as dnsName66_0_, this_.state as state66_0_, this_.connectionState as connect26_66_0_, this_.expiryState as expiryS27_66_0_, this_.printEmbedded as printEm28_66_0_, this_.existsInInventory as existsI29_66_0_, this_.existsOnDiskEnum as existsO30_66_0_, this_.customizableOS as customi31_66_0_, this_.PARENTFOLDER_ID as PARENTF32_66_0_, this_.RUNTIMESERVER_ID as RUNTIME33_66_0_, this_.RESOURCEPOOL_ID as RESOURC34_66_0_, this_.DSBACKING_ID as DSBACKING35_66_0_, this_.DATACENTER_ID as DATACENTER36_66_0_, this_.CLUSTER_ID as CLUSTER37_66_0_, this_.installedOs as install38_66_0_, this_.lastScanDate as lastSca39_66_0_, this_.lastLoginScanDate as lastLog40_66_0_, this_.poweredOffSince as powered41_66_0_, this_.lastLogonTime as lastLog42_66_0_, this_.lastLogonUser as lastLog43_66_0_, this_.containerSpaceUsed as contain44_66_0_, this_.logicalDiskSize as logical45_66_0_, this_.freeDiskSpace as freeDis46_66_0_, this_.freeDiskSpacePercent as freeDis47_66_0_, this_.cost as cost66_0_, this_.snapshots as snapshots66_0_, this_.earliestSnapshotDate as earlies50_66_0_, this_.earliestSnapshotName as earlies51_66_0_, this_.lastDatastoreScanDate as lastDat52_66_0_, this_.diskStorageTypeEnum as diskSto53_66_0_, this_.connectedMedia as connect54_66_0_, this_.compliant as compliant66_0_, this_.complianceIssue as complia56_66_0_, this_.nonCompliantTime as nonComp57_66_0_, (SELECT this_.cost/ 4 ) as formula4_0_, (SELECT this_.cost / 12 ) as formula5_0_ from INV_DEPLOYEDIMAGES this_ where (lower(this_.configGuestId) like ? escape '\') The last statement I traced up to is the executeQuery() in an NewProxyPreparedStatement instance (I use C3P0 connection pool with Hibernate), the exception from this statement is JDBCExceptionReporter - ERROR: unterminated quoted string at or near "'\')" The unwrapped SQL exception is pretty much the same, complaining about the escape '\' at the end of the statement. But if I use plain JDBC to connect to the same database, and run the same query, it's all fine! Please help.
[GENERAL] use wal archiving scheme from 8.1 on 9.0
Hello List, I have a nicely working system for maintaining a cold standby of a master 8.1.11 server. The system consists of a set of scripts to do a base backup to the standby, a scheme to send WAL files, and a procedure for restoring the cold standby should the master server break down. I have another, totally separate PostgreSQL server, 8.4.4, that stands alone (without any slave standby server). I wish to upgrade this 8.4.4 server to 9.0.1 and at the same time build a hot standby server for read-only queries. My question is: can I essentially steal the scripts and schemes from 8.1.11 to set up this 9.0.1 master/slave system, and then set up a section for hot standby in the slave? Thanks in advance mr.wu -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
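Broadly, yes: the archive_command/restore_command machinery and the base-backup dance have the same shape in 9.0, and the extra pieces for a queryable standby are wal_level = hot_standby on the master plus hot_standby = on in the standby's postgresql.conf (parameter names per the 9.0 documentation; see its hot-standby chapter for the full recipe). The SQL half of the base backup, for instance, is unchanged between 8.1 and 9.0; a sketch, with an arbitrary label:

-- run on the master, around the filesystem copy of the data directory
select pg_start_backup('base backup for standby');
-- ... copy $PGDATA to the standby here ...
select pg_stop_backup();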
Re: [GENERAL] A strange SQL grammar issue with postgreSql 8.4.4 & Hibernate 3.5
Sheng Hui writes: > the exception from this statement is > JDBCExceptionReporter - ERROR: unterminated quoted string at or near "'\')" > The unwrapped SQL exception is pretty much the same, complaining about the > escape '\' at the end of the statement. Looks to me like somebody's on the wrong page about whether standard_conforming_strings is set. regards, tom lane -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
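A quick psql illustration of what Tom is pointing at, on the assumption that the client and server disagree about the setting: with standard_conforming_strings off (the 8.4 default) the backslash escapes the closing quote and the server sees exactly the reported error, while with it on the literal is a single backslash and the query is fine.

-- sketch only; whether SET per session is enough, or postgresql.conf
-- needs changing, depends on how the connections are configured
set standard_conforming_strings = off;
-- here \ escapes the closing quote, so the server reports
-- "unterminated quoted string", just like the Hibernate log
select 'x' like 'x' escape '\';

set standard_conforming_strings = on;
-- now '\' is simply a one-character string and the query succeeds
select 'x' like 'x' escape '\';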
[GENERAL] timestamps in Australia
set timezone to 'Australia/Sydney';
set timezone_abbreviations to 'Australia';
set datestyle to 'SQL,DMY';
select '2011-04-03 01:00'::timestamptz + generate_series(0,3) * '1h'::interval, generate_series(0,3);

notice how the middle two look the same (this is Australia's DST change-back). This has the potential to cause all sorts of problems, especially in triggers where now() seems to return a string which gets reinterpreted... other than setting datestyle to 'ISO,DMY', is there a way to fix this? How do the Australians handle this? Apparently the zic database (where the string 'EST' originates) has been this way since 2004.

select '2011-04-03 02:00:00 EST'::timestamptz - '1h'::interval; -- invariant ?

-- ɹǝpun uʍop ɯoɹɟ sƃuıʇǝǝɹ⅁ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
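One way to sidestep the ambiguity on the input side, as a sketch: spell out the UTC offset, since Sydney is +11:00 during DST and +10:00 after the change-back, so each of the two look-alike local times has a distinct representation:

-- 02:30 occurs twice on 2011-04-03 in Australia/Sydney; the offset picks one
select '2011-04-03 02:30+11'::timestamptz;  -- still on DST (AEDT)
select '2011-04-03 02:30+10'::timestamptz;  -- after the clocks go back (AEST)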