Re: [GENERAL] how to avoid repeating expensive computation in select
On 2011-02-03 18:07, Bob Price wrote:
> I would like to know if there is a way in PostgreSQL to avoid repeating an
> expensive computation in a SELECT where the result is needed both as a
> returned value and as an expression in the WHERE clause.

I think I've seen it said here that PG avoids redundant multiple calculations
of an expression. Even so, have you thought about using subqueries?

SELECT id, expensivefunc(value) AS score
FROM mytable
WHERE id LIKE '%z%' AND expensivefunc(value) > 0.5;

SELECT id, expensivefunc(value)
FROM (SELECT id, value FROM mytable WHERE id LIKE '%z%') AS filtered
WHERE expensivefunc(value) > 0.5;

or even

SELECT id, score
FROM (
    SELECT id, expensivefunc(value) AS score
    FROM (SELECT id, value FROM mytable WHERE id LIKE '%z%') AS filtered
) AS scored
WHERE score > 0.5;

--
Orhan Kavrakoğlu
or...@tart.com.tr
Tart New Media
w : http://www.tart.com.tr
t : +90 212 263 0 666 / ext: 142
f : TBA
a : TBA
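If you are on 8.4 or later, a CTE is another way to get a single evaluation; a minimal sketch, reusing the hypothetical mytable and expensivefunc from the thread. PostgreSQL currently materializes a non-recursive WITH query, so expensivefunc runs only once per row that survives the LIKE filter, and that one result feeds both the output column and the filter:

WITH filtered AS (
    SELECT id, expensivefunc(value) AS score
    FROM mytable
    WHERE id LIKE '%z%'
)
SELECT id, score
FROM filtered
WHERE score > 0.5;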
[GENERAL] pg_catalog.pg_stat_activity and current_query
Good morning,

I'm trying to make a Postgres profiler by reading pg_catalog.pg_stat_activity.
But when I read that view... I always find my own query.

Example:

ResultSet rs = st.executeQuery("SELECT query_start,current_query FROM pg_catalog.pg_stat_activity where xact_start is not null");
while (rs.next()) {
    String sQuery = rs.getString("current_query");
    qStart = rs.getTimestamp("query_start");
    if (!qStart.equals(qStart_last)) {
        display.append(sQuery + '\n');
        qStart_last = (Timestamp) qStart.clone();
    }
}

It always prints "SELECT query_start,current_query FROM pg_catalog.pg_stat_activity where xact_start is not null" :)

I'd like to discard my own query... any ideas?

Thanks
Re: [GENERAL] pg_catalog.pg_stat_activity and current_query
On 1 March 2011 09:36, Alex wrote:
> Good morning,
>
> I'm trying to make a Postgres profiler by reading pg_catalog.pg_stat_activity.
> But when I read that view... I always find my own query.
> [...]
> It always prints "SELECT query_start,current_query FROM
> pg_catalog.pg_stat_activity where xact_start is not null" :)
>
> I'd like to discard my own query... any ideas?
>
> Thanks

If you just want to get rid of your own query, add this to it:

and procpid <> pg_backend_pid()

so you will have something like this:

SELECT query_start, current_query
FROM pg_catalog.pg_stat_activity
WHERE xact_start IS NOT NULL
  AND procpid <> pg_backend_pid();

regards
Szymon
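For a profiler it may also be worth skipping backends that are sitting in an open transaction but not running anything; on 8.x/9.0/9.1 those report '<IDLE>' or '<IDLE> in transaction' in current_query. A sketch combining both filters:

SELECT query_start, current_query
FROM pg_catalog.pg_stat_activity
WHERE xact_start IS NOT NULL
  AND procpid <> pg_backend_pid()
  AND current_query NOT LIKE '<IDLE>%';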
[GENERAL] Tool for shifting tables from Mysql to Postgresql
Dear all,

I want to convert some tables from a MySQL database to a PostgreSQL database
on Linux systems (Ubuntu 10.04, CentOS).

Can someone please suggest a tool that makes this easier?

I was able to do it with FW tools on a Windows system, but I want to achieve
it on a Linux (CentOS) system.

I researched a lot & tried the steps below:

1. mysqldump --compatible=postgresql wiki20100130 > /hdd4-1/wiki20100130_mysql108feb22.sql

2. sed "s/\\\'/\'\'/g" wiki20100130_mysql108feb22.sql

3. bin/psql -Upostgres wiki20100130 < /hdd4-1/wiki20100130_mysql108feb22.sql

invalid byte sequence for encoding "UTF8": 0xe3ba27
HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
ERROR: invalid byte sequence for encoding "UTF8": 0xee6c65
HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
ERROR: invalid byte sequence

I think a tool would ease that work.

Thanks & best regards,

Adarsh Sharma
Re: [GENERAL] restore a server backup
On 01/03/2011 06:37, Malm Paul wrote:
> Hi,
> I've used pgAdmin III to store a server backup. But I'm not able to restore
> it. Please, could anyone tell me how to do it? I'm using version 1.10.

Hi there,

Did you create a text or binary backup?

If binary, you either (i) use pg_restore on the command line, or (ii) use
pgAdmin's "restore" option from the menu you get when right-clicking on the
tree view (which just uses pg_restore anyway).

If you created a text backup, you'll need to feed it to psql on the command
line:

psql -f <backup-file> <database>

Ray.

--
Raymond O'Donnell :: Galway :: Ireland
r...@iol.ie
Re: [GENERAL] Tool for shifting tables from Mysql to Postgresql
Hi Adarsh,

There are very good tools out there for migrating from MySQL to PostgreSQL.
EnterpriseDB has a Migration Studio which will help you move from MySQL to
PostgreSQL:

http://www.enterprisedb.com/solutions/mysql-vs-postgresql/how-to-move-from-mysql-to-postgresql

Best Regards,
Raghavendra
EnterpriseDB Corporation

On Tue, Mar 1, 2011 at 3:57 PM, Adarsh Sharma wrote:
> Dear all,
>
> I want to convert some tables from Mysql database to Postgresql Database in
> Linux Systems ( Ubuntu-10.4, CentOS ).
>
> Can someone Please tell me tool for it that makes it easier.
> [...]
> I think a tool would ease that work.
>
> Thanks & best Regards,
>
> Adarsh Sharma
Re: [GENERAL] Tool for shifting tables from Mysql to Postgresql
By looking at the error, it seems that the database encoding is different in
the MySQL database. The default database encoding in PostgreSQL is UTF-8.

You can set the client encoding with:

set client_encoding = <encoding>;

You can check the MySQL database encoding using the following command:

show variables like "character_set_database";

--
Thanks & Regards
Dhaval Jaiswal

From: pgsql-general-ow...@postgresql.org on behalf of Adarsh Sharma
Sent: Tue 3/1/2011 3:57 PM
To: my...@lists.mysql.com
Cc: pgsql-general@postgresql.org
Subject: [GENERAL] Tool for shifting tables from Mysql to Postgresql

Dear all,

I want to convert some tables from Mysql database to Postgresql Database in
Linux Systems ( Ubuntu-10.4, CentOS ).

Can someone Please tell me tool for it that makes it easier.

[...]

I think a tool would ease that work.

Thanks & best Regards,

Adarsh Sharma
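To make that concrete (LATIN1 below is only an assumed source encoding; substitute whatever character_set_database reports on the MySQL side), the load could be run in one psql session like this:

SET client_encoding = 'LATIN1';   -- assumed; match the MySQL character_set_database
\i /hdd4-1/wiki20100130_mysql108feb22.sql

The SET only affects the current session, which is why loading the file with \i (or setting PGCLIENTENCODING before starting psql) is needed rather than redirecting the dump into a fresh psql invocation.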
[GENERAL] Postgresql not start during Startup
Dear all,

I have a problem related to startup of the Postgres server. When I start or
boot my system, the postgres service starts and I am able to issue all
database-related commands.

On another system (Ubuntu 10.04), the postgres service doesn't start, and
/etc/init.d/postgresql-8.4 status shows:

root@orkglo-dell:~# /etc/init.d/postgresql-8.4 status
pg_ctl: no server running

If I then manually execute /etc/init.d/postgresql-8.4 start, everything runs
properly. But I want it to start automatically after booting.

Thanks & best regards,

Adarsh Sharma
Re: [GENERAL] Postgresql not start during Startup
On Tue, Mar 01, 2011 at 06:37:35PM +0530, Adarsh Sharma wrote:
>
> But I want to start it after booting automatically.
>

http://embraceubuntu.com/2005/09/07/adding-a-startup-script-to-be-run-at-bootup/
Re: [GENERAL] Binary params in libpq
On Mon, Feb 28, 2011 at 8:42 AM, Merlin Moncure wrote:
> On Sun, Feb 27, 2011 at 7:39 PM, Craig Ringer wrote:
>> AFAIK, the `timestamp' type moved from a floating-point to an integer
>> representation internally, which would've affected the binary protocol
>> representation. That was even a compile-time config option, so it could be
>> different between two different Pg installs with the same version.
>
> Actually, this has always been a compile time option on the server as
> far as i remember and there is protocol support for it -- libpq tells
> you how it has been set... you've always had to deal with this

I don't see any libpq calls that can, at run-time, tell you things like what
format the timestamp is and what endianness the server is. Is there something
I'm missing? The only thing I could figure out is to do something like
'select 123::int4' or select a known date and determine the nature of the
server from what you get back.

-K
Re: [GENERAL] Lock ACCESS EXCLUSIVE and Select question !
Hi everyone,

Thanks for all your advice, I will take it in mind ^_^. Yes, it was a little
difficult to know which seats I can sell, but it was one of the client's
requests: some business constraints don't let me know how many seats a
specific bus has, even 5 minutes before departure. Sometimes I know,
sometimes I don't, and sometimes when I do know I have to change this
capacity on the fly (for example if my bus crashes just before departure), so
I have to use a default averaged capacity. A human must have the final word
about which bus departs, so the software must be very, very open to changes.

Meanwhile, I reduced my lock level and the CPU load is now even lower, LOL,
it's fantastic. Thanks for your help; clients are now working better and
faster than before ^_^. I still have a lot to read about Postgres.

Alan Acosta

On Mon, Feb 28, 2011 at 8:13 PM, David Johnston wrote:
> As mentioned, SELECT FOR UPDATE is likely your best option. As for an
> algorithm, if you can find an airline or sporting event case study, those
> two domains have this problem solved already. Barring that, the following
> comes to mind.
>
> Create a record for every "seat" that needs to be sold.
> You can list all unreserved seats at a given point in time, then at the time
> of attempted reservation you re-SELECT, but this time with FOR UPDATE, and
> then immediately mark the seat as reserved (and when it was reserved).
> Establish a policy that reservations last for "X minutes" and, using
> application code, reset the reservation to OPEN if that time elapses.
> If the application needs to restart, you can scan the table for the
> reservation time and reset any that have already expired while loading back
> into memory all those that are still valid.
>
> It really isn't that different from dispatching tasks to handlers (which is
> what I do), and FOR UPDATE works just fine. I recommend using a pl/pgsql
> function for the implementation. Return a reservationID if the seat has been
> reserved for a specific user, or return null if it could not be reserved.
> You also have access to "RAISE" events. Alternatively, you could output a
> multi-column row with a Boolean true/false as one of the fields for
> "reservation made" and have another message field for cases where it was
> not made.
>
> David J.
>
> -----Original Message-----
> From: pgsql-general-ow...@postgresql.org
> [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Andrew Sullivan
> Sent: Monday, February 28, 2011 4:28 PM
> To: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] Lock ACCESS EXCLUSIVE and Select question !
>
> On Mon, Feb 28, 2011 at 04:12:30PM -0500, Alan Acosta wrote:
>
>> My application is trying to generate a numbered place for a client
>> inside a bus, and to avoid selling place number "5" to two people, I
>> need to prevent two sellers from selling the same place at the same time.
>> When I started my project, I read about table locks and chose ACCESS
>> EXCLUSIVE, because it blocks everything; at the time that seemed safe :p.
>> But now I have more and more sellers and the application is throwing a
>> lot of deadlocks in simple SELECTs. I checked my logs and noticed that it
>> was because ACCESS EXCLUSIVE is taking a little more time now, and the
>> deadlocks arise!
Re: [GENERAL] Lock ACCESS EXCLUSIVE and Select question !
On Tue, Mar 01, 2011 at 09:23:49AM -0500, Alan Acosta wrote:
> seats a specific bus has, even 5 minutes before departure. Sometimes I know,
> sometimes I don't, and sometimes when I do know I have to change this
> capacity on the fly (for example if my bus crashes just before departure),
> so I have to use a default averaged capacity. A human must have the final
> word about which bus departs, so the software must be very, very open to
> changes.

This still sounds a little odd to me, but there is another way to do it, and
someone suggested it in this thread. If you're doing this only with INSERT,
then you just need to find some combination of columns that needs to be
unique (one of which is obviously the seat number). Then you'll get a unique
violation when two people try to insert the same data, and someone will lose.

Your application could catch this in a savepoint and try again with a
different seat number.

A

--
Andrew Sullivan
a...@crankycanuck.ca
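A minimal sketch of that INSERT-only approach, with hypothetical table and column names (none of these come from the thread):

CREATE TABLE seat_sale (
    departure_id integer     NOT NULL,
    seat_no      integer     NOT NULL,
    sold_to      integer     NOT NULL,
    sold_at      timestamptz NOT NULL DEFAULT now(),
    UNIQUE (departure_id, seat_no)   -- two sellers cannot both insert the same seat
);

-- Each seller simply inserts; the loser of a race gets a unique-violation
-- error, which the application catches (e.g. in a savepoint) and retries
-- with a different seat number.
INSERT INTO seat_sale (departure_id, seat_no, sold_to) VALUES (42, 5, 1001);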
Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)
On Monday, February 28, 2011 9:51:10 pm Aleksey Tsalolikhin wrote:
> On Sun, Feb 27, 2011 at 2:52 AM, Alban Hertroys wrote:
> Thank you for your kind replies.
>
>> I noticed in your table definition that you seem to store timestamps in
>> text-fields. Restoring those from text-fields shouldn't make any
>> difference, but perhaps your locales are set up differently between the
>> machines and cause some type of conversion to take place?
>
> OK, Alban, I'm game. How would I check how locales are set up?
>
> Adrian, I found pg_indexes_size() is only in 9 (I have 8.4) but I got
> the same information from a query based on
> http://www.issociate.de/board/post/478501/How_much_space_do_database_objects_take_up_in_data_files.html

Sorry about that, I was not paying attention. FYI 8.4 does have
pg_relation_size(), which can be applied against individual indexes.

> Here is what I see:
>
>  nspname | relname | tablesize | indexsize | toastsize | toastindexsize
> ---------+---------+-----------+-----------+-----------+----------------
>  public  | big     | 744 MB    | 737 MB    | 48 GB     | 278 MB
>  public  | big     | 503 MB    | 387 MB    | 99 GB     | 278 MB
>
> Check out that toastsize delta. What makes up TOAST? How can I
> compare the two TOAST tables in detail?

TOAST is best explained here:
http://www.postgresql.org/docs/8.4/interactive/storage-toast.html

Looks like the TOAST compression is not working on the second machine. Not
sure how that could come to be. Further investigation underway :)

> Thanks,
> Aleksey

--
Adrian Klaver
adrian.kla...@gmail.com
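As a quick way to look at the pieces on 8.4, something like this (assuming the table really is public.big and that it has a TOAST table) reports the heap and TOAST sizes separately on each server:

SELECT pg_size_pretty(pg_relation_size(c.oid))           AS table_size,
       pg_size_pretty(pg_relation_size(c.reltoastrelid)) AS toast_size
FROM pg_class c
WHERE c.oid = 'public.big'::regclass;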
Re: [GENERAL] Binary params in libpq
On Tue, Mar 1, 2011 at 8:19 AM, Kelly Burkhart wrote:
> On Mon, Feb 28, 2011 at 8:42 AM, Merlin Moncure wrote:
>> On Sun, Feb 27, 2011 at 7:39 PM, Craig Ringer wrote:
>>> AFAIK, the `timestamp' type moved from a floating-point to an integer
>>> representation internally, which would've affected the binary protocol
>>> representation. That was even a compile-time config option, so it could be
>>> different between two different Pg installs with the same version.
>>
>> Actually, this has always been a compile time option on the server as
>> far as i remember and there is protocol support for it -- libpq tells
>> you how it has been set... you've always had to deal with this
>
> I don't see any libpq calls that can, at run-time, tell you things
> like what format the timestamp is and what endianness the server is.
> Is there something I'm missing? The only thing I could figure out is
> to do something like 'select 123::int4' or select a known date and
> determine the nature of the server from what you get back.

You don't need to know endianness: it is always in network order (big
endian). As noted above, you can check the datetime format with
PQparameterStatus.

Really, my advice to you and the countless others who seem to continually
want to re-engineer this problem is to either use or crib from two libraries
that have completely solved it... namely libpqtypes and ecpg. Some of the
wire formats are non-trivial to convert to C native types.

merlin
[GENERAL] Screencasts for PostgreSQL
I recently viewed a screencast on PostgreSQL developed by Peepcode.com and
obtained a few really valuable insights respecting full text searches. These
were things that I was dimly aware of but that extensive reading had not
revealed to me (lacking as I am in the imagination necessary).

I was wondering if any here know of similar presentations on PostgreSQL usage
and administration that might be available to me. Free is good, but I am
willing to pay a reasonable fee for such things, as I did for the material
from Peepcode.

Any suggestions?

--
*** E-Mail is NOT a SECURE channel ***
James B. Byrne                  mailto:byrn...@harte-lyne.ca
Harte & Lyne Limited            http://www.harte-lyne.ca
9 Brockley Drive                vox: +1 905 561 1241
Hamilton, Ontario               fax: +1 905 561 0757
Canada  L8E 3C3
Re: [GENERAL] Lock ACCESS EXCLUSIVE and Select question !
Yep, I already have those columns and a unique constraint. My issue isn't
selling the seat two times; I was quite paranoid about that and used a lock
mode too restrictive for it.

I will check whether I can create rows for the seats before selling and use
UPDATE, so I can use SELECT FOR UPDATE instead of INSERT and avoid having to
lock the whole table to check if a seat is free or sold. But I have to solve
several issues in the meantime; for example, to sell a seat one month in the
future, will I have to create every single seat (row) in the database to be
able to know whether it is free or sold? Even so, it seems like a new and
good idea to me; it's the first time I've heard about SELECT FOR UPDATE.

Another question: can I combine SELECT FOR UPDATE and the LOCK command on
different tables in the same transaction?

Alan Acosta

On Tue, Mar 1, 2011 at 9:33 AM, Andrew Sullivan wrote:
> On Tue, Mar 01, 2011 at 09:23:49AM -0500, Alan Acosta wrote:
>
>> seats a specific bus has, even 5 minutes before departure. Sometimes I
>> know, sometimes I don't, and sometimes when I do know I have to change
>> this capacity on the fly (for example if my bus crashes just before
>> departure), so I have to use a default averaged capacity. A human must
>> have the final word about which bus departs, so the software must be
>> very, very open to changes.
>
> This still sounds a little odd to me, but there is another way to do it,
> and someone suggested it in this thread. If you're doing this only with
> INSERT, then you just need to find some combination of columns that needs
> to be unique (one of which is obviously the seat number). Then you'll get
> a unique violation when two people try to insert the same data, and
> someone will lose.
>
> Your application could catch this in a savepoint and try again with a
> different seat number.
>
> A
>
> --
> Andrew Sullivan
> a...@crankycanuck.ca
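A sketch of the pre-created-rows variant being discussed, again with hypothetical names and assuming seat rows exist up front with a nullable sold_to column:

BEGIN;

-- Lock just this seat's row; a concurrent seller of the same seat blocks
-- here instead of the whole table being locked.
SELECT seat_no
FROM seat_sale
WHERE departure_id = 42 AND seat_no = 5 AND sold_to IS NULL
FOR UPDATE;

-- If the row came back, the seat is still free: mark it sold. (The
-- application checks that the SELECT returned a row before doing this.)
UPDATE seat_sale
SET sold_to = 1001, sold_at = now()
WHERE departure_id = 42 AND seat_no = 5;

COMMIT;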
Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)
On Monday, February 28, 2011 9:51:10 pm Aleksey Tsalolikhin wrote:
> Here is what I see:
>
>  nspname | relname | tablesize | indexsize | toastsize | toastindexsize
> ---------+---------+-----------+-----------+-----------+----------------
>  public  | big     | 744 MB    | 737 MB    | 48 GB     | 278 MB
>  public  | big     | 503 MB    | 387 MB    | 99 GB     | 278 MB
>
> Check out that toastsize delta. What makes up TOAST? How can I
> compare the two TOAST tables in detail?

The compression/no compression thing tickled a memory. Run \d+ against the
table in question. It should show a storage column with values for each
field. Are any of those set to EXTERNAL instead of the default EXTENDED?

--
Adrian Klaver
adrian.kla...@gmail.com
Re: [GENERAL] Binary params in libpq
Kelly Burkhart writes:
> On Mon, Feb 28, 2011 at 8:42 AM, Merlin Moncure wrote:
>> Actually, this has always been a compile time option on the server as
>> far as i remember and there is protocol support for it -- libpq tells
>> you how it has been set... you've always had to deal with this
>
> I don't see any libpq calls that can, at run-time, tell you things
> like what format the timestamp is and what endianness the server is.
> Is there something I'm missing?

The timestamp format can be determined by querying PQparameterStatus for the
value of "integer_datetimes". Server endianness is irrelevant because values
are always sent big-endian.

                        regards, tom lane
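The same parameter is also visible from SQL, which can be handy when poking around interactively rather than through libpq:

SHOW integer_datetimes;   -- 'on' means 64-bit integer timestamps on the wire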
Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)
Adrian Klaver writes:
> Looks like the TOAST compression is not working on the second machine. Not
> sure how that could come to be. Further investigation underway :)

Somebody carelessly messed with the per-column SET STORAGE settings, perhaps?
Compare pg_attribute.attstorage settings ...

                        regards, tom lane
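A sketch of both checks, assuming the table in question is public.big (the column name in the ALTER is just a placeholder):

-- 'x' = EXTENDED (compressible), 'e' = EXTERNAL (out-of-line, no compression),
-- 'm' = MAIN, 'p' = PLAIN
SELECT attname, attstorage
FROM pg_attribute
WHERE attrelid = 'public.big'::regclass
  AND attnum > 0
  AND NOT attisdropped;

-- If a column was switched to EXTERNAL, compression can be re-enabled with:
ALTER TABLE public.big ALTER COLUMN some_text_column SET STORAGE EXTENDED;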
Re: [GENERAL] slow building index and reference after Sybase to Pg
On 02/28/11 19:30, Andres Freund wrote:
> Hi,
>
> On Wednesday 23 February 2011 19:31:58 Gary Fu wrote:
>> I'm testing converting a big Sybase db to Pg. It took about 45 hours to
>> convert all Sybase tables (bcp) to Pg (copy) without indexes and
>> references. After that I built the indexes (one by one, sequentially) and
>> it took about 25 hours, and then I started to add the references (one by
>> one); however, it has been more than 30 hours and there is still no sign
>> of it finishing.
>>
>> I wonder, is there any suggestion that may speed up the index and
>> reference building (on Pg)?
>
> I think some additional information would be useful:
> * pg version
> * kernel version
> * distribution
>
> Andres

Here is the information:

modaps_lads=> show server_version;
 server_version
----------------
 9.0.1

9:58am 32 gfu@moddblads:/dump/gfu> uname -a
Linux moddblads 2.6.18-194.17.1.el5 #1 SMP Wed Sep 29 12:50:31 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

9:58am 34 gfu@moddblads:/dump/gfu> cat /proc/version
Linux version 2.6.18-194.17.1.el5 (mockbu...@builder10.centos.org) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)) #1 SMP Wed Sep 29 12:50:31 EDT 2010

Also, we have RAID 10 with 600 GB SAS drives at 15000 RPM.

Another question here: why does building the reference lock the table for
reading? I mean, why can't I build two references at the same time on the
same referenced table? Doesn't the reference build just read?

Thanks,
Gary
Re: [GENERAL] Issues with imported blobs from Postgres 8 to 9
Thanks for the reply. What kind of detail can I provide?

Mahdi

On 2011-02-28, at 6:58 PM, Tom Lane wrote:

> Mahdi Mankai writes:
>> I created a database dump using pg_dump on PostgreSQL 8.3.6. After that I
>> tried to import the same database into a PostgreSQL 9.0 install. Everything
>> worked fine except some image blobs. Some of them seem to be corrupt.
>
> I doubt this is a bytea_escape problem. In the first place that would
> only affect output from the new server, not input into it; and in the
> second place, if that were the issue, it would probably result in *all*
> your blobs being messed up, not just a small number of them. We need
> a lot more details than this to offer any help.
>
> regards, tom lane
Re: [GENERAL] Issues with imported blobs from Postgres 8 to 9
Is there a way you can dump the same image in hex format (or even
PostgreSQL's own escape format) from both the 8.3.6 and 9.0.x setups (with
bytea output set to escape) and do a file comparison between the two, to at
least show that the results are different? As I have not actually ever done
this I am only guessing, but I would think it should work. If you have a
record that IS displaying correctly, you can use it as a control.

Also, you fail to indicate what GUI and/or middle-tier tools you are using to
retrieve and display the image (or to save the file locally to display in a
viewer). Along the same lines, where are you seeing errors related to these
images, and what are they? You also say "some", but is that because you have
only tested a few, or are there some that are coming across just fine?

David J.

-----Original Message-----
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Mahdi Mankai
Sent: Tuesday, March 01, 2011 6:08 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Issues with imported blobs from Postgres 8 to 9

Thanks for the reply. What kind of detail can I provide?

Mahdi

[...]
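A quick way to do that comparison from SQL, assuming the images live in a bytea column (the table and column names here are made up):

-- Run on both the 8.3.6 and the 9.0 server and compare the output.
SELECT photo_id,
       length(image_data) AS bytes,
       md5(image_data)    AS checksum
FROM photos
WHERE photo_id = 12345;

-- For a full byte-by-byte diff, dump the value as hex instead:
-- SELECT encode(image_data, 'hex') FROM photos WHERE photo_id = 12345;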
[GENERAL] Thousands of users using one schema -> ERROR: row is too big
Hi all,

I'm working on a database that will have a very large number of users, and
I'm running into a problem: when I grant more than about 2500 users access to
a schema, I get

my_db=# grant usage on schema my_schema to some_user;
ERROR:  row is too big: size 8168, maximum size 8164

This of course makes access control tricky on high user-count setups.

On IRC, linuxpoet and andres suggested that the problem is that the nspacl
column in pg_catalog.pg_namespace grows too large. A suggested fix by
linuxpoet adds a toast table to pg_namespace. A potentially dangerous
work-around suggested by andres is to alter the pg_namespace table while
temporarily having allow_system_table_mods on. That seems to have made the
symptom go away for me, but I'm not sure what consequences the change had.
Spontaneously, it seems to me that ACL entries could be stored as rows in a
table instead of as elements in an array, but I'm definitely not qualified to
comment on PostgreSQL implementation issues.

Do you agree with linuxpoet's fix? If so, when do you think it is reasonable
to include it?

Best Regards
Magnus Reftel
Re: [GENERAL] Thousands of users using one schema -> ERROR: row is too big
In response to Magnus Reftel:
>
> I'm working on a database that will have a very large number of users, and
> I'm running into a problem: when I grant more than about 2500 users access
> to a schema, I get
>
> my_db=# grant usage on schema my_schema to some_user;
> ERROR:  row is too big: size 8168, maximum size 8164
>
> This of course makes access control tricky on high user-count setups.
>
> On IRC, linuxpoet and andres suggested that the problem is that the nspacl
> column in pg_catalog.pg_namespace grows too large. A suggested fix by
> linuxpoet adds a toast table to pg_namespace. A potentially dangerous
> work-around suggested by andres is to alter the pg_namespace table while
> temporarily having allow_system_table_mods on. That seems to have made the
> symptom go away for me, but I'm not sure what consequences the change had.
> Spontaneously, it seems to me that ACL entries could be stored as rows in a
> table instead of as elements in an array, but I'm definitely not qualified
> to comment on PostgreSQL implementation issues.
>
> Do you agree with linuxpoet's fix? If so, when do you think it is
> reasonable to include it?

I would think that a better solution would be to follow best practices and
create roles and put users in those roles, so you don't have to have so many
grants on objects.

--
Bill Moran
http://www.potentialtech.com
http://people.collaborativefusion.com/~wmoran/
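A minimal sketch of that arrangement (the role and user names here are made up):

CREATE ROLE my_schema_users NOLOGIN;
GRANT USAGE ON SCHEMA my_schema TO my_schema_users;

-- Each login role just joins the group, so pg_namespace.nspacl stays at a
-- single entry no matter how many users there are.
GRANT my_schema_users TO some_user;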
Re: [GENERAL] Thousands of users using one schema -> ERROR: row is too big
On Mar 1, 2011, at 21:57, Bill Moran wrote:
> In response to Magnus Reftel:
>>
>> I'm working on a database that will have a very large number of users, and
>> I'm running into a problem: when I grant more than about 2500 users access
>> to a schema, I get
>>
>> my_db=# grant usage on schema my_schema to some_user;
>> ERROR:  row is too big: size 8168, maximum size 8164
>>
>> This of course makes access control tricky on high user-count setups.
>
> I would think that a better solution would be to follow best practices and
> create roles and put users in those roles, so you don't have to have so
> many grants on objects.

Right, that grant was actually useless, since the users already have usage
rights to the schema via a role. Problem solved. Sorry for the noise!

Thanks!
Magnus Reftel
[GENERAL] #PgEast Training Schedule Up
Hey folks,

The training (not session) schedule is now up. You can get it right off the
front page: https://www.postgresqlconference.org/. We are running 7 sessions
in parallel, with a total of 9 trainings.

Sincerely,

JD

--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt
Re: [GENERAL] Thousands of users using one schema -> ERROR: row is too big
On Tue, 2011-03-01 at 15:57 -0500, Bill Moran wrote:
> In response to Magnus Reftel:
>>
>> I'm working on a database that will have a very large number of users,
>> and I'm running into a problem: when I grant more than about 2500 users
>> access to a schema, I get
>>
>> my_db=# grant usage on schema my_schema to some_user;
>> ERROR:  row is too big: size 8168, maximum size 8164
>>
>> This of course makes access control tricky on high user-count setups.
>>
>> On IRC, linuxpoet and andres suggested that the problem is that the
>> nspacl column in pg_catalog.pg_namespace grows too large. A suggested fix
>> by linuxpoet adds a toast table to pg_namespace. A potentially dangerous
>> work-around suggested by andres is to alter the pg_namespace table while
>> temporarily having allow_system_table_mods on. That seems to have made
>> the symptom go away for me, but I'm not sure what consequences the change
>> had. Spontaneously, it seems to me that ACL entries could be stored as
>> rows in a table instead of as elements in an array, but I'm definitely
>> not qualified to comment on PostgreSQL implementation issues.
>>
>> Do you agree with linuxpoet's fix? If so, when do you think it is
>> reasonable to include it?
>
> I would think that a better solution would be to follow best practices and
> create roles and put users in those roles, so you don't have to have so
> many grants on objects.

Well, yes and no. There is no technical reason (that I know of) that we don't
toast those tables. It would be good for him to follow best practices, but
considering he did run into the bug/oversight, and it does appear to be
arbitrary, there is no reason not to fix it.

JD

--
PostgreSQL.org Major Contributor
Command Prompt, Inc: http://www.commandprompt.com/ - 509.416.6579
Consulting, Training, Support, Custom Development, Engineering
http://twitter.com/cmdpromptinc | http://identi.ca/commandprompt
Re: [GENERAL] Thousands of users using one schema -> ERROR: row is too big
Bill Moran writes:
> In response to Magnus Reftel:
>> On IRC, linuxpoet and andres suggested that the problem is that the
>> nspacl column in pg_catalog.pg_namespace grows too large. A suggested
>> fix by linuxpoet adds a toast table to pg_namespace.
>
> I would think that a better solution would be to follow best practices and
> create roles and put users in those roles, so you don't have to have so
> many grants on objects.

Yeah. You could probably get around it with the add-a-toast-table hack, but I
think performance would be a lot worse.

                        regards, tom lane
[GENERAL] What could cause sudden increase in "PARSE" stage of prepared statements?
Hi,

I have a system where we log every query that runs over 150 ms. Usually we
get 100-500 "parse ..." entries per minute, but there are cases where the
number exceeds 15000 per minute (it's a pretty busy system).

The interesting fact is that I/O is not taxed, there is no sudden network
traffic, and there are no more queries than usual. Everything looks normal,
but the parses start to take much longer.

It's 8.3.11 on x64 Linux.

What should I look into to solve this mystery?

Best regards,

depesz

--
The best thing about modern society is how easy it is to avoid contact with it.
http://depesz.com/
[GENERAL] Dynamic binding in plpgsql function
Hi,

I would like to write a generic plpgsql function with a text parameter being
a callback function name, so that my general function can call this callback
function. e.g.:

CREATE OR REPLACE FUNCTION ST_MyCallbackFunction(y int)
RETURNS int AS $$
DECLARE
BEGIN
    RETURN someCalculationBasedOnY;
END;
$$ LANGUAGE 'plpgsql';

CREATE OR REPLACE FUNCTION ST_MyGeneralFunction(callback text)
RETURNS SETOF geomval AS $$
DECLARE
    x integer;
    y integer;
BEGIN
    y := somecalculation;
    x := 'callback'(y); --This is what I need
    RETURN x;
END;
$$ LANGUAGE 'plpgsql';

I don't want to do an EXECUTE statement since I have no table to put after
the FROM clause. I want to assign the resulting value directly to a variable
like in my example.

Can I/How can I achieve this?

Thanks,

Pierre
Re: [GENERAL] Dynamic binding in plpgsql function
Hello

2011/3/2 Pierre Racine:
> Hi,
>
> I would like to write a generic plpgsql function with a text parameter
> being a callback function name, so that my general function can call this
> callback function. e.g.:
>
> CREATE OR REPLACE FUNCTION ST_MyCallbackFunction(y int)
> RETURNS int AS $$
> DECLARE
> BEGIN
>     RETURN someCalculationBasedOnY;
> END;
> $$ LANGUAGE 'plpgsql';
>
> CREATE OR REPLACE FUNCTION ST_MyGeneralFunction(callback text)
> RETURNS SETOF geomval AS $$
> DECLARE
>     x integer;
>     y integer;
> BEGIN
>     y := somecalculation;
>     x := 'callback'(y); --This is what I need

EXECUTE 'SELECT ' || callback || '($1)' USING y INTO x;

There is no other way than EXECUTE.

Attention - there is a SQL injection risk.

regards
Pavel Stehule

>     RETURN x;
> END;
> $$ LANGUAGE 'plpgsql';
>
> I don't want to do an EXECUTE statement since I have no table to put after
> the FROM clause. I want to assign the resulting value directly to a
> variable like in my example.
>
> Can I/How can I achieve this?
>
> Thanks,
>
> Pierre
Re: [GENERAL] Dynamic binding in plpgsql function
On Mar 2, 2011, at 4:31 AM, Pierre Racine wrote:
> CREATE OR REPLACE FUNCTION ST_MyGeneralFunction(callback text)
>    RETURNS SETOF geomval AS $$
>    DECLARE
>    x integer;
>    y integer;
>    BEGIN
>    y := somecalculation;
>    x := 'callback'(y); --This is what I need
>    RETURN x;
>    END;
>    $$ LANGUAGE 'plpgsql';
>
> I don't want to do an EXECUTE statement since I have no table to put after
> the FROM clause. I want to assign the resulting value directly to a
> variable like in my example.

You don't need any table to assign the value of a function to a variable, if
the function returns a single value. You can use:

EXECUTE 'SELECT '||$1||'(y)' into x;

Thanks & Regards,
Vibhor Kumar
EnterpriseDB Corporation
The Enterprise PostgreSQL Company
vibhor.ku...@enterprisedb.com
Blog: http://vibhork.blogspot.com
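Putting Pavel's EXECUTE suggestion back into the original function might look roughly like this on 8.4 or later, where EXECUTE ... INTO ... USING is available. This is only a sketch: it returns int instead of SETOF geomval for simplicity, 42 stands in for "somecalculation", and quote_ident() is added to reduce the injection risk:

CREATE OR REPLACE FUNCTION ST_MyGeneralFunction(callback text)
RETURNS int AS $$
DECLARE
    x integer;
    y integer := 42;  -- placeholder for the real calculation
BEGIN
    -- Build and run "SELECT callback($1)" dynamically, passing y as $1
    -- and capturing the single result into x.
    EXECUTE 'SELECT ' || quote_ident(callback) || '($1)' INTO x USING y;
    RETURN x;
END;
$$ LANGUAGE plpgsql;

-- Usage (assuming the callback was created with a lower-case name):
-- SELECT ST_MyGeneralFunction('st_mycallbackfunction');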
[GENERAL] Query should have failed, but didn't?
Hi all,

Got an odd one.

test=# select version();
                                                         version
--------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 8.4.5 on i386-apple-darwin, compiled by GCC i686-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 (Apple Computer, Inc. build 5370), 32-bit
(1 row)

test=# create temp table test1 (something integer);
CREATE TABLE
test=# create temp table test2 (id1 integer, charge numeric);
CREATE TABLE
test=# select * from test2 where id1 in (select id1 from test1) and charge=0.05;
 id1 | charge
-----+--------
(0 rows)

Notice the last query's subselect is selecting "id1" from test1, but there's
no such column on test1. Postgres correctly fails when you do:

test=# select id1 from test1;
ERROR:  column "id1" does not exist
LINE 1: select id1 from test1;

The plan for the query that I think should have failed:

test=# explain select * from test2 where id1 in (select id1 from test1) and charge=0.05;
                           QUERY PLAN
-----------------------------------------------------------------
 Seq Scan on test2  (cost=0.00..24628.45 rows=3 width=36)
   Filter: ((charge = 0.05) AND (SubPlan 1))
   SubPlan 1
     ->  Seq Scan on test1  (cost=0.00..34.00 rows=2400 width=0)
(4 rows)

Notice the subplan doesn't actually cause any filtering. This produces
incorrect results when there's data in the table:

test=# insert into test1 select 5;
INSERT 0 1
test=# insert into test2 select 10, 70;
INSERT 0 1
test=# select * from test2 where id1 in (select id1 from test1) and charge=70;
 id1 | charge
-----+--------
  10 |     70
(1 row)

test=# explain select * from test2 where id1 in (select id1 from test1) and charge=70;
                           QUERY PLAN
-----------------------------------------------------------------
 Seq Scan on test2  (cost=0.00..24628.45 rows=3 width=36)
   Filter: ((charge = 70::numeric) AND (SubPlan 1))
   SubPlan 1
     ->  Seq Scan on test1  (cost=0.00..34.00 rows=2400 width=0)
(4 rows)

Am I correct in thinking this is a bug in pg?

--Royce

--
Royce
Chief Engineer @ Inomial
03 3125 0417 954 640
Re: [GENERAL] Query should have failed, but didn't?
On 03/01/2011 07:50 PM, Royce Ausburn wrote:
> Hi all,
>
> Got an odd one.
>
> test=# create temp table test1 (something integer);
> CREATE TABLE
> test=# create temp table test2 (id1 integer, charge numeric);
> CREATE TABLE
> test=# select * from test2 where id1 in (select id1 from test1) and charge=0.05;
>  id1 | charge
> -----+--------
> (0 rows)
>
> [...]
>
> Am I correct in thinking this is a bug in pg?

It's getting id1 from the parent table (test2). You can use fields from the
parent table in subselects.

Try this and it'll complain:

select * from test2 where id1 in (select junk from test1) and charge=70;

Try:

select * from test2 where id1 in (select something from test1 where something = id1) and charge=70;

-Andy
Re: [GENERAL] Query should have failed, but didn't?
This is not a bug; given your test queries, whenever you reference id1 you
are ALWAYS referencing the column id1 in table test2.

>> test=# select * from test2 where id1 in (select id1 from test1) and charge=70;
>>  id1 | charge
>> -----+--------
>>   10 |     70
>> (1 row)

Hint: consider the results of:

SELECT literal FROM table1;

AND

SELECT t1.*, (SELECT t2.col2 FROM t2 WHERE t2.id = t1.id) FROM t1;

David J.
Re: [GENERAL] Query should have failed, but didn't?
On 02/03/2011, at 2:16 PM, Andy Colson wrote:
> It's getting id1 from the parent table (test2). You can use fields from the
> parent table in subselects.
>
> Try this and it'll complain:
>
> select * from test2 where id1 in (select junk from test1) and charge=70;

Oh! Of course! What a fool. Thanks =)

--Royce
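For what it's worth, the usual defence against this gotcha is to table-qualify columns inside subselects; with the same test tables, the qualified version fails immediately instead of silently becoming a correlated reference to test2.id1:

-- ERROR:  column test1.id1 does not exist
select * from test2 where id1 in (select test1.id1 from test1) and charge=70;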
[GENERAL] Comparing md5 hash with md5 password hash
Hi,

We are developing an application in Microsoft Access linked via ODBC with a
PostgreSQL 8.2 database. The ODBC data source is configured to log in using a
single user and password for all users logging in to our application in MS
Access.

Now, we need to check whether a user trying to log in has a valid user name
and password in the PostgreSQL database. We are querying the tables pg_authid
and/or pg_shadow for user names and passwords. Passwords in those tables are
stored as md5. We have the following statement to compare the password hash
with the password typed by the user:

SELECT 1
FROM pg_authid
WHERE rolename='<username>'
AND rolpassword='md5'||md5('<password>');

But the hash generated this way is different from the password hash stored in
pg_authid.

What are we doing wrong? Is there any possibility to check passwords against
the pg_authid table?

P.S. Sorry for my awful English.

--
Michal Koba
Re: [GENERAL] Comparing md5 hash with md5 password hash
2011/3/2 Michał Koba:
> Hi,
>
> We are developing an application in Microsoft Access linked via ODBC with a
> PostgreSQL 8.2 database. The ODBC data source is configured to log in using
> a single user and password for all users logging in to our application in
> MS Access.
>
> Now, we need to check whether a user trying to log in has a valid user name
> and password in the PostgreSQL database. We are querying the tables
> pg_authid and/or pg_shadow for user names and passwords. Passwords in those
> tables are stored as md5. We have the following statement to compare the
> password hash with the password typed by the user:
>
> SELECT 1
> FROM pg_authid
> WHERE rolename='<username>'
> AND rolpassword='md5'||md5('<password>');
>
> But the hash generated this way is different from the password hash stored
> in pg_authid.
>
> What are we doing wrong? Is there any possibility to check passwords
> against the pg_authid table?

s/rolename/rolname/

The password isn't hashed on its own; it's salted with the username, so you'd
really want:

rolpassword = 'md5' || md5('password' || rolname);

--
Thom Brown
Twitter: @darkixion
IRC (freenode): dark_ixion
Registered Linux user: #516935
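So the original check, with Thom's corrections folded in (the angle-bracketed values are placeholders for what the user typed; note that reading pg_authid requires superuser privileges):

SELECT 1
FROM pg_authid
WHERE rolname = '<username>'
  AND rolpassword = 'md5' || md5('<password>' || rolname);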