Re: [GENERAL] several postgres installations on the same machine?
Hi. If I understood correctly, this blog describes how to create a second instance that is linked to the first (the same service account user)? But I want to know whether it is possible to have a second instance that is completely independent, so that the two do not influence each other. Regards, Zlatko

- Original Message - From: Anoo Sivadasan Pillai To: Zlatko Matić Cc: pgsql-general@postgresql.org Sent: Friday, July 20, 2007 8:40 AM Subject: Re: [GENERAL] several postgres installations on the same machine?

If you mean multiple instances, then it can be done. Have a look at Paul's personal blog: http://people.planetpostgresql.org/paul/index.php?/archives/4-Running-a-Second-Instance-of-PostgreSQL-8.1.4-on-Windows-2003.html

Anoo S Pillai

-- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Zlatko Matic Sent: Friday, July 20, 2007 11:51 AM To: pgsql-general@postgresql.org Subject: [GENERAL] several postgres installations on the same machine?

Is it possible to have a few independent PostgreSQL 8.2 installations on the same PC, running Windows XP? Thanks, Zlatko
Re: [GENERAL] several postgres installations on the same machine?
Hello. PostgreSQL can run multiple independent clusters. These clusters can be served by one set of binary files or by several (different versions of PostgreSQL). Every cluster has its own configuration. Databases within one cluster share users, locales, etc. The Windows installer builds the first cluster automatically; the second and subsequent clusters have to be created manually with initdb. These clusters will share only the DLL files. If you want absolutely independent installations, then change the paths and ports. Regards, Pavel Stehule

2007/7/20, Zlatko Matić <[EMAIL PROTECTED]>: Hi. If I understood correctly, this blog describes how to create a second instance that is linked to the first (the same service account user)? But I want to know whether it is possible to have a second instance that is completely independent, so that the two do not influence each other? Regards, Zlatko

- Original Message - From: Anoo Sivadasan Pillai To: Zlatko Matić Cc: pgsql-general@postgresql.org Sent: Friday, July 20, 2007 8:40 AM Subject: Re: [GENERAL] several postgres installations on the same machine? If you mean multiple instances, then it can be done. Have a look at Paul's personal blog http://people.planetpostgresql.org/paul/index.php?/archives/4-Running-a-Second-Instance-of-PostgreSQL-8.1.4-on-Windows-2003.html Anoo S Pillai

From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Zlatko Matic Sent: Friday, July 20, 2007 11:51 AM To: pgsql-general@postgresql.org Subject: [GENERAL] several postgres installations on the same machine? Is it possible to have a few independent PostgreSQL 8.2 installations on the same PC, running Windows XP? Thanks, Zlatko

---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] several postgres installations on the same machine?
On Fri, Jul 20, 2007 at 09:24:05AM +0200, Zlatko Matić wrote:
> Hi.
> If I understood correctly, this blog describes how to create a second instance
> that is linked to the first (the same service account user)?
> But I want to know whether it is possible to have a second instance that is
> completely independent, not influencing each other?
> Regards,

It is, but you need to install it manually. The installer will only permit one complete installation per version. (You can have multiple servers running, of course, but they share the binaries if installed by the installer.)

//Magnus

---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] several postgres installations on the same machine?
Zlatko Matić wrote:
> Hi. If I understood correctly, this blog describes how to create a second
> instance that is linked to the first (the same service account user)?
> But I want to know whether it is possible to have a second instance that is
> completely independent, not influencing each other?

Yes, just make sure you give each its own user account and run it on a different port (typically the default on 5432, the next on 5433, etc.). They will of course all compete for resources on the machine, but otherwise I don't see why Windows should be different from *nix. Lots of people run multiple different versions on the various Linux/BSD/Mac OS X boxes they have, but there's no reason why they couldn't be the same version.

-- Richard Huxton Archonet Ltd

---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
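A quick way to confirm which of several instances a session is actually connected to is to ask the server itself from psql; a minimal sketch (data_directory should be readable with SHOW on 8.0 and later):

  SHOW port;
  SHOW data_directory;
  SELECT version();

If the clusters run on different ports, different data directories, or different versions, the three answers make it obvious which one you are talking to.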
[GENERAL] Retrieve the record ID
Hi all. I'm new to this list and, first of all, I'm a new user of PostgreSQL. The version I'm using is 8.2.3, and I need to retrieve, from an application, the ID of a modified (INSERT, UPDATE, DELETE) record of a triggered table. I wasn't able to find a way to obtain the required ID. The only information available outside of the DBMS and usable by my app (subscribed for a significant event) is the table name and the server process id. The application is running on Windows XP and the server is installed on Windows Server 2003. Does anyone know a way to notify my app with the ID of the modified record? Any idea is appreciated. Thanks in advance. Luca

---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Retrieve the record ID
Luca Ciciriello wrote:
> Hi all. I'm new to this list and, first of all, I'm a new user of PostgreSQL.
> The version I'm using is 8.2.3, and I need to retrieve, from an application,
> the ID of a modified (INSERT, UPDATE, DELETE) record of a triggered table.
> I wasn't able to find a way to obtain the required ID. The only information
> available outside of the DBMS and usable by my app (subscribed for a
> significant event) is the table name and the server process id.

The last insert is the easy one - SELECT currval('mtable_id_seq');

To get the ID of an UPDATE or DELETE you will need a trigger to be run on each event. If the trigger you refer to is your own, then I would integrate the change into what you have; if you are referring to cascading updates/deletes, then you will need to add a trigger to suit your needs. Within the trigger you have access to 'NEW' and 'OLD' copies of the row affected, which is where you can get the ID you are after (see chapter 37.10).

The information passed with a NOTIFY is simply an event name, so you may want a table to store the IDs you want, maybe with a timestamp that your app uses to find changes since it last looked. Polling this table can be an alternative to using NOTIFY.

> The application is running on Windows XP and the server is installed on
> Windows Server 2003. Does anyone know a way to notify my app with the ID of
> the modified record? Any idea is appreciated. Thanks in advance. Luca

-- Shane Ambler [EMAIL PROTECTED] Get Sheeky @ http://Sheeky.Biz

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
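A minimal sketch of the trigger-plus-log-table approach described above. All names here (mytable, its integer id column, and the mytable_changes log table) are made up for illustration; adapt them to the real schema:

  CREATE TABLE mytable_changes (
      id         integer   NOT NULL,
      operation  text      NOT NULL,
      changed_at timestamp NOT NULL DEFAULT now()
  );

  CREATE OR REPLACE FUNCTION log_mytable_change() RETURNS trigger AS $$
  BEGIN
      IF TG_OP = 'DELETE' THEN
          -- the row is gone, so record the old ID
          INSERT INTO mytable_changes (id, operation) VALUES (OLD.id, TG_OP);
          RETURN OLD;
      ELSE
          INSERT INTO mytable_changes (id, operation) VALUES (NEW.id, TG_OP);
          RETURN NEW;
      END IF;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER mytable_change_log
      AFTER INSERT OR UPDATE OR DELETE ON mytable
      FOR EACH ROW EXECUTE PROCEDURE log_mytable_change();

The application can then poll mytable_changes (optionally filtered by changed_at) to pick up the IDs of modified rows, as suggested above.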
[GENERAL] privileges for pg_class
Hello. I use the following statements to temporarily disable triggers and enable them again:

--Disable triggers.
UPDATE "pg_class" SET "reltriggers" = 0 WHERE "relname" = 'orders';

--Enable triggers.
UPDATE pg_class SET reltriggers = ( SELECT count(*) FROM pg_trigger where pg_class.oid = tgrelid) WHERE relname = 'orders';

This works when connected as a superuser. But when I try to execute it as a normal user, I get a "not enough privileges" error, although I granted all privileges on pg_class and pg_trigger. Am I missing something? Thanks.
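For what it's worth, 8.1 and later also provide a supported way to switch triggers off without updating the system catalog directly; it needs only ownership of the table rather than superuser rights (a hedged sketch, reusing the table name from the question):

  ALTER TABLE orders DISABLE TRIGGER USER;
  -- ... run the bulk operation with user triggers switched off ...
  ALTER TABLE orders ENABLE TRIGGER USER;

Note that DISABLE TRIGGER USER leaves the internally generated foreign-key triggers active; DISABLE TRIGGER ALL covers those too, but that form does require superuser privileges.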
Re: [GENERAL] pg_dump without blobs
I think I've found a solution. I will do this:

CREATE FUNCTION save_bytea_on_system(bytea) RETURNS varchar AS $$
  use File::Temp ();
  $fh = new File::Temp();
  binmode($fh);
  my $data = $_[0];
  $data =~ s{\\(\\|[0-7]{3})}{$1 eq "\\" ? $1 : chr(oct($1))}ge;
  $fname = $fh->filename;
  syswrite $fh, $data;
  close $fh;
  return $fname;
$$ LANGUAGE plperlu;

update my_table set new_field = lo_import(save_bytea_on_system(old_field));

I think that the postgres team should add a method for doing this conversion much more simply.

Sebastien

On 7/15/07, Sébastien Boutté <[EMAIL PROTECTED]> wrote: Hi, I have a PostgreSQL database with a table containing a bytea field. I would like to dump without this particular field in order to improve dump and restore time. I tried the option -n public (without -b), but it's always the same: the dump is huge. How can I do that? Do I have to convert my bytea field to oid? Thank you for your help, Sebastien

---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] CASE in ORDER BY clause
Uwe C. Schroeder wrote:
On Saturday 07 July 2007, Lew wrote:

So if your RDBMS sorts NULLs after all other values, then from

select start_date from show_date
order by
  case when start_date > CURRENT_DATE then start_date end desc,
  case when start_date <= CURRENT_DATE then start_date end asc;

all rows with start_date > CURRENT_DATE will appear first, in start_date descending order, then all rows with start_date <= CURRENT_DATE will appear, in start_date ascending order. Is CURRENT_DATE evaluated once for the query or twice for each row?

CURRENT_DATE is evaluated once per transaction. If you run in autocommit mode, then the single query is wrapped in a transaction by itself. Either way it's never evaluated per occurrence.

I'm coming in late on this but you might try something like...

select ... from (
  select ...,
    case when start_date > current_date then 1 || start_date - current_date
         else 0 || current_date - start_date
    end as "FOO"
  ...
) AS t
order by "FOO" desc

Or something like that...

---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] When is PostgreSQL 8.3 slated for release?
Keaton Adams wrote: I am being asked by management when PostgreSQL 8.3 will become generally available. Is there an updated timeline for 8.3?

You know the answer is supposed to be "when it's ready", but we all know management doesn't like that kind of answer. Of course, you could just pull the same thing a proprietary software company does: make up a date based on some WAG and then excuse the delays for years afterwards by citing all the really great features that are going to be included and how wonderful it's going to be when it's finally released.

Here's my WAG:

Release 8.0 Release date: 2005-01-19
Release 8.1 Release date: 2005-11-08
Release 8.2 Release date: 2006-12-05

I wouldn't expect anything earlier than, say, 2008-01-12.

I am purely guessing here, and I haven't sat down and read through all the change notes to see what really happened with PostgreSQL, but based on other projects, x.0 to x.1 is a lot of relatively easy things to address from a major point release. As time progresses, the time between these sub point releases (x.1 -> x.2 -> ...) will increase slightly as the level of effort to identify and resolve any issues increases, based on a "low hanging fruit" and "severity" approach. There may be a significant reduction in point releases at the end of the cycle, as there is a lot of "cleanup" activity before the next major release, so the x.9 to x.10 time might be reduced before you get to 9.0.0.

So I would expect the next release (8.3) to be no sooner than January 2008 at best. Anything before that could be considered a gift, and anything beyond that shouldn't be considered "bad". If management is the type that prefers to wait until the ".1" release, then it's going to be a bit longer. Me? I tend to be a ".1" guy much of the time. And this is just a guess.

---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Retrieve the record ID
We have a system that sends back information by using RAISE NOTICE. I've taken two lines out of it and put them here to illustrate the idea. You may have to play with it to get it right:

NotifyList = 'The OID is: ' || CAST(new.oid as varchar(10)) || ';';
RAISE NOTICE '%', NotifyList;

Luca Ciciriello wrote:
> Hi all. I'm new to this list and, first of all, I'm a new user of PostgreSQL.
> The version I'm using is 8.2.3, and I need to retrieve, from an application,
> the ID of a modified (INSERT, UPDATE, DELETE) record of a triggered table.
> I wasn't able to find a way to obtain the required ID. The only information
> available outside of the DBMS and usable by my app (subscribed for a
> significant event) is the table name and the server process id.
> The application is running on Windows XP and the server is installed on
> Windows Server 2003. Does anyone know a way to notify my app with the ID of
> the modified record? Any idea is appreciated. Thanks in advance. Luca

-- Kenneth Downs Secure Data Software, Inc. www.secdat.com www.andromeda-project.org 631-689-7200 Fax: 631-689-0527 cell: 631-379-0010

---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
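A hedged sketch of the same idea wired into a row trigger, assuming the table carries its own id column rather than relying on OIDs (which are not created by default on user tables in recent releases); all names here are made up:

  CREATE OR REPLACE FUNCTION notify_row_id() RETURNS trigger AS $$
  BEGIN
      RAISE NOTICE 'The ID is: %', NEW.id;
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER mytable_notify_id
      AFTER INSERT OR UPDATE ON mytable
      FOR EACH ROW EXECUTE PROCEDURE notify_row_id();

Keep in mind that the NOTICE text is delivered to the client that issued the INSERT or UPDATE, not to other sessions; for pushing IDs to a separate application, the log-table or LISTEN/NOTIFY approaches mentioned earlier in the thread are a better fit.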
[GENERAL] Foreign key constraint question
I want to do something like this:

ALTER TABLE companies ADD CONSTRAINT fk_companies_item_id
  FOREIGN KEY (item_id, 'Company')
  REFERENCES item_bases(item_id, item_type)
  INITIALLY DEFERRED

I could add a column to companies that is always set to "Company" but that seems like a waste. I tried the above and I got a syntax error. Any suggestions?

Thank you,

Perry Smith ( [EMAIL PROTECTED] ) Ease Software, Inc. ( http://www.easesoftware.com ) Low cost SATA Disk Systems for IBMs p5, pSeries, and RS/6000 AIX systems
Re: [GENERAL] Foreign key constraint question
On Fri, Jul 20, 2007 at 08:57:25AM -0500, Perry Smith wrote: > I want to do something like this: > > ALTER TABLE companies ADD CONSTRAINT fk_companies_item_id > FOREIGN KEY (item_id, 'Company') > REFERENCES item_bases(item_id, item_type) > INITIALLY DEFERRED > > I could add a column to companies that is always set to "Company" but > that seems like a waste. I tried the above and I got a syntax error. What purpose is the constraint intended to achieve? -- Michael Fuhr ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Foreign key constraint question
On Jul 20, 2007, at 9:06 AM, Michael Fuhr wrote:
> On Fri, Jul 20, 2007 at 08:57:25AM -0500, Perry Smith wrote:
>> I want to do something like this:
>>
>> ALTER TABLE companies ADD CONSTRAINT fk_companies_item_id
>>   FOREIGN KEY (item_id, 'Company')
>>   REFERENCES item_bases(item_id, item_type)
>>   INITIALLY DEFERRED
>>
>> I could add a column to companies that is always set to "Company" but
>> that seems like a waste. I tried the above and I got a syntax error.
>
> What purpose is the constraint intended to achieve?

I am trying to create a polymorphic item. item_bases holds the item_id and item_type. Different tables like companies, people, etc. will hold the data. This example is for companies. The item_type does not hold the actual table name but holds the "class" name.

The idea is that the item_id will be the same in item_bases and in companies (to pick an example). I want to make sure that the item_base points to an item and the item points back to the item_base.

I can add a check constraint to item_bases to make sure that, with the item_id and item_type, it references a record in the right table, but I cannot defer that. So I plan to add to the companies table first, then add to the item_bases table.

So, with the foreign key constraint above, I want to make sure that (item_id, item_type) of companies matches an (item_id, item_type) in item_bases, but in companies I do not have an item_type column since it will always be a constant. Can I add a column to the table somehow, like "Company" AS item_type? That would achieve the same effect.

Thank you,

Perry Smith ( [EMAIL PROTECTED] ) Ease Software, Inc. ( http://www.easesoftware.com ) Low cost SATA Disk Systems for IBMs p5, pSeries, and RS/6000 AIX systems

---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
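For reference, the "extra column" idea mentioned above can be kept honest with a DEFAULT plus a CHECK, after which the two-column foreign key is straightforward. This is only a sketch with assumed types; it also presumes a unique constraint on item_bases (item_id, item_type), and the constraint must be declared DEFERRABLE before it can be INITIALLY DEFERRED:

  -- constant column, enforced by a CHECK so it can never hold anything else
  ALTER TABLE companies
      ADD COLUMN item_type text NOT NULL DEFAULT 'Company'
          CHECK (item_type = 'Company');

  ALTER TABLE companies ADD CONSTRAINT fk_companies_item_id
      FOREIGN KEY (item_id, item_type)
      REFERENCES item_bases (item_id, item_type)
      DEFERRABLE INITIALLY DEFERRED;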
Re: [GENERAL] several postgres installations on the same machine?
I followed the instructions from the blog, but when applying the initdb command I get the following error: "initdb: file "C:/Program Files/PostgreSQL/8.2/share/postgres.bki" does not exist. This means you have a corrupted installation or identified the wrong directory with the invocation option -L." I tried to find that postgres.bki file, but it seems it doesn't exist. What now? Regards, Zlatko

- Original Message - From: Anoo Sivadasan Pillai To: Zlatko Matić Cc: pgsql-general@postgresql.org Sent: Friday, July 20, 2007 8:40 AM Subject: Re: [GENERAL] several postgres installations on the same machine?

If you mean multiple instances, then it can be done. Have a look at Paul's personal blog http://people.planetpostgresql.org/paul/index.php?/archives/4-Running-a-Second-Instance-of-PostgreSQL-8.1.4-on-Windows-2003.html Anoo S Pillai

-- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Zlatko Matic Sent: Friday, July 20, 2007 11:51 AM To: pgsql-general@postgresql.org Subject: [GENERAL] several postgres installations on the same machine? Is it possible to have a few independent PostgreSQL 8.2 installations on the same PC, running Windows XP? Thanks, Zlatko
[GENERAL] Postgres Performance Issue
Hi,

We're trying to figure out why we're getting poor query performance on a particular database running on a 64-bit Solaris box. The info for the poor database is:

Red Hat Enterprise Linux AS release 4 (Nahant Update 2)
Linux vl-sfv40z-001 2.6.9-22.0.2.ELsmp #1 SMP Thu Jan 5 17:11:56 EST 2006 x86_64 x86_64 x86_64 GNU/Linux
16Gb RAM.
Postgres version 8.1.2
Database size is about 7 Gigs.

Live lines in config:

max_connections = 500
shared_buffers = 21760
work_mem = 2048
max_fsm_pages = 5
checkpoint_segments = 125
effective_cache_size = 262144          # =2GB typically 8KB each
redirect_stderr = on                   # Enable capturing of stderr into log
log_directory = '/var/log/pglogs'
log_truncate_on_rotation = on          # If on, any existing log file of the same
log_rotation_size = 10240
log_min_duration_statement = 4000
stats_command_string = on
lc_messages = 'en_US.UTF-8'            # locale for system error message
lc_monetary = 'en_US.UTF-8'            # locale for monetary formatting
lc_numeric = 'en_US.UTF-8'             # locale for number formatting
lc_time = 'en_US.UTF-8'                # locale for time formatting

We've already ruled out an I/O issue. The disk is running FAST.

We know it's running poorly because when we put a copy of the database on a lesser-hardware 32-bit server, it runs TEN TIMES faster. Here are the relevant details for the FAST server:

Red Hat Enterprise Linux AS release 4 (Nahant Update 4)
Linux vl-filesrv-001 2.6.9-42.0.8.ELsmp #1 SMP Tue Jan 23 13:01:26 EST 2007 i686 i686 i386 GNU/Linux
Dell dual CPU
4GB RAM.
Postgres version: 8.2.3

Live config lines:

max_connections = 100                  # (change requires restart)
shared_buffers = 24MB                  # min 128kB or max_connections*16kB
max_fsm_pages = 153600                 # min max_fsm_relations*16, 6 bytes each
redirect_stderr = on                   # Enable capturing of stderr into log
log_directory = 'pg_log'               # Directory where log files are written
log_truncate_on_rotation = on          # If on, any existing log file of the same
log_rotation_age = 1d                  # Automatic rotation of logfiles will
log_rotation_size = 0                  # Automatic rotation of logfiles will
datestyle = 'iso, mdy'
lc_messages = 'en_US.UTF-8'            # locale for system error message
lc_monetary = 'en_US.UTF-8'            # locale for monetary formatting
lc_numeric = 'en_US.UTF-8'             # locale for number formatting
lc_time = 'en_US.UTF-8'                # locale for time formatting

I explain the characterization of fast and slow like this: slow is taking about ten times longer than fast to execute the same query.

If there's any gotcha here that we're not seeing, please point it out. I'm flummoxed.

---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Postgres Performance Issue
In response to Brian Maguire <[EMAIL PROTECTED]>:

> Hi,
>
> We're trying to figure out why we're getting poor query performance on a
> particular database running on a 64-bit Solaris box. The info for the poor
> database is:
>
> Red Hat Enterprise Linux AS release 4 (Nahant Update 2)
> Linux vl-sfv40z-001 2.6.9-22.0.2.ELsmp #1 SMP Thu Jan 5 17:11:56 EST 2006 x86_64 x86_64 x86_64 GNU/Linux
>
> 16Gb RAM.
>
> Postgres version 8.1.2

I don't know how you can expect older technology to run faster than newer, more optimized technology.

> Database size is about 7 Gigs.
>
> Live lines in config:
>
> max_connections = 500
> shared_buffers = 21760
                  ^
With 16G of RAM, this is so small as to be laughable.

> work_mem = 2048

This as well. There's a lot of detail missing, but I wouldn't even try to diagnose any more until you've tuned those numbers closer to sanity. Also, did you vacuum analyze prior to running the speed test?

> max_fsm_pages = 5
> checkpoint_segments = 125
> effective_cache_size = 262144          # =2GB typically 8KB each
> redirect_stderr = on                   # Enable capturing of stderr into log
> log_directory = '/var/log/pglogs'
> log_truncate_on_rotation = on          # If on, any existing log file of the same
> log_rotation_size = 10240
> log_min_duration_statement = 4000
> stats_command_string = on
> lc_messages = 'en_US.UTF-8'            # locale for system error message
> lc_monetary = 'en_US.UTF-8'            # locale for monetary formatting
> lc_numeric = 'en_US.UTF-8'             # locale for number formatting
> lc_time = 'en_US.UTF-8'                # locale for time formatting
>
> We've already ruled out an I/O issue. The disk is running FAST.
>
> We know it's running poorly because when we put a copy of the database on a
> lesser-hardware 32-bit server, it runs TEN TIMES faster.
>
> Here are the relevant details for the FAST server:
>
> Red Hat Enterprise Linux AS release 4 (Nahant Update 4)
> Linux vl-filesrv-001 2.6.9-42.0.8.ELsmp #1 SMP Tue Jan 23 13:01:26 EST 2007 i686 i686 i386 GNU/Linux
>
> Dell dual CPU
> 4GB RAM.
>
> Postgres version: 8.2.3
>
> Live config lines:
>
> max_connections = 100                  # (change requires restart)
> shared_buffers = 24MB                  # min 128kB or max_connections*16kB
> max_fsm_pages = 153600                 # min max_fsm_relations*16, 6 bytes each
> redirect_stderr = on                   # Enable capturing of stderr into log
> log_directory = 'pg_log'               # Directory where log files are written
> log_truncate_on_rotation = on          # If on, any existing log file of the same
> log_rotation_age = 1d                  # Automatic rotation of logfiles will
> log_rotation_size = 0                  # Automatic rotation of logfiles will
> datestyle = 'iso, mdy'
> lc_messages = 'en_US.UTF-8'            # locale for system error message
> lc_monetary = 'en_US.UTF-8'            # locale for monetary formatting
> lc_numeric = 'en_US.UTF-8'             # locale for number formatting
> lc_time = 'en_US.UTF-8'                # locale for time formatting
>
> I explain the characterization of fast and slow like this: slow is taking
> about ten times longer than fast to execute the same query.
>
> If there's any gotcha here that we're not seeing, please point it out. I'm
> flummoxed.

-- Bill Moran http://www.potentialtech.com

---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
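Before tuning anything it is worth confirming in psql what the slow server is actually running with, and refreshing the planner statistics; a minimal sketch:

  SHOW shared_buffers;
  SHOW work_mem;
  SHOW effective_cache_size;
  VACUUM ANALYZE;  -- refresh statistics, then re-run the slow query with EXPLAIN ANALYZE

Comparing the EXPLAIN ANALYZE output from the slow and fast servers is usually the quickest way to see where the extra time goes.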
[GENERAL] 8.2.4 signal 11 with large transaction
The attached PHP script is a derived test case based on an actual problem we've been seeing in our application. The result of this script is a crashed (sig 11) backend on PG 8.2.4. I've now reproduced this on two different systems: a large server with 1G of shared_buffers and many other performance tunings, and my workstation, which has a pretty much default install of PG. (The problem also occurred on 8.2.0, which led to an upgrade but no resolution.)

On 8.1, the query fails, but instead of the backend crashing I get a log message: "[warning] postgres[49484]: [3-1] ERROR: out of memory"

Oddly, the query succeeds if it's fed into psql.

I'm now full of mystery and wonder. It would appear as if the underlying problem has something to do with PHP, but why should this cause a backend process to crash?

0 ) {
      $sql .= "'" . $data[$j] . "'";
    } else {
      $sql .= "NULL";
    }
    if ( $j < count($data) - 1 ) {
      $sql .= ",";
    }
  }
  $sql .= $insert_suffix;
}
$sql .= "COMMIT WORK;\n";

echo "\nQuerying..\n";
pg_query($sql);
echo "\n";
?>

-- Bill Moran Collaborative Fusion Inc. http://people.collaborativefusion.com/~wmoran/ [EMAIL PROTECTED] Phone: 412-422-3463x4023

---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] 8.2.4 signal 11 with large transaction
Bill Moran <[EMAIL PROTECTED]> writes: > I'm now full of mystery and wonder. It would appear as if the > underlying problem has something to do with PHP, but why should this > cause a backend process to crash? I'd bet on PHP submitting the query via extended query protocol (PQexecParams or equivalent) instead of plain ol PQexec which is what psql uses. I don't speak PHP or have it installed here, so this example is hard for me to investigate. Can someone make a reproducer that uses PQexecParams? regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] 8.2.4 signal 11 with large transaction
On Friday 20 July 2007 12:51:47 Tom Lane wrote: > Can someone make a reproducer that uses > PQexecParams? Does JDBC apply? jan -- -- Jan de Visser [EMAIL PROTECTED] Baruk Khazad! Khazad ai-menu! -- ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] 8.2.4 signal 11 with large transaction
In response to Tom Lane <[EMAIL PROTECTED]>:
> Bill Moran <[EMAIL PROTECTED]> writes:
> > I'm now full of mystery and wonder. It would appear as if the
> > underlying problem has something to do with PHP, but why should this
> > cause a backend process to crash?
>
> I'd bet on PHP submitting the query via extended query protocol
> (PQexecParams or equivalent) instead of plain ol PQexec which is what
> psql uses.

Doesn't appear that way. The PHP source is somewhat cryptic, but I don't see much ambiguity here:

pgsql_result = PQexec(pgsql, Z_STRVAL_PP(query));

There're no conditional blocks around that, so it's the only possible choice when pg_query() gets called in a PHP script. PHP exposes a separate pg_query_params() that wraps PQexecParams().

> I don't speak PHP or have it installed here, so this example
> is hard for me to investigate. Can someone make a reproducer that uses
> PQexecParams?

Is there any way that this (or something similar) could still apply?

-- Bill Moran Collaborative Fusion Inc. http://people.collaborativefusion.com/~wmoran/ [EMAIL PROTECTED] Phone: 412-422-3463x4023

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] 8.2.4 signal 11 with large transaction
Bill Moran <[EMAIL PROTECTED]> writes: > In response to Tom Lane <[EMAIL PROTECTED]>: >> I'd bet on PHP submitting the query via extended query protocol >> (PQexecParams or equivalent) instead of plain ol PQexec which is what >> psql uses. > Doesn't appear that way. OK, it seemed like a good first guess. Can you get a stack trace from the crashed backend? regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
[GENERAL] psql-odbc configure error
This message bounced from psql-odbc (I suppose because I am not subscribed there), so I am hoping some kindly soul here will help me!

Hi developers,

In attempting to build psqlodbc from source, I receive the following error in configure after invoking it with ./configure --with-unixodbc=/usr/local

checking for ssize_t... yes
checking for long... yes
checking size of long... configure: error: cannot compute sizeof (long)
See `config.log' for more details.

I have attached the config.log in order to avoid snipping the important thing. I notice in the log that it fails reading libodbcinst.so.1, but

lrwxrwxrwx 1 root root 20 Jul 19 16:59 /usr/local/lib/libodbcinst.so.1 -> libodbcinst.so.1.0.0

[EMAIL PROTECTED] psqlodbc-08.02.0400]$ ls -l /usr/local/lib/libodbcinst.so.1.0.0

and

-rwxr-xr-x 1 root root 310919 Jul 19 16:59 /usr/local/lib/libodbcinst.so.1.0.0

and /usr/local/lib is in the ldconfig path. Just prior to this, I installed unixODBC 2.2.12 from source in /usr/local.

So, please tell me what I did wrong, and how to fix it. Thank you.

Don

---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] 8.2.4 signal 11 with large transaction
On 7/20/07, Bill Moran <[EMAIL PROTECTED]> wrote:
> In response to Tom Lane <[EMAIL PROTECTED]>:
> > Bill Moran <[EMAIL PROTECTED]> writes:
> > > I'm now full of mystery and wonder. It would appear as if the
> > > underlying problem has something to do with PHP, but why should this
> > > cause a backend process to crash?
> >
> > I'd bet on PHP submitting the query via extended query protocol
> > (PQexecParams or equivalent) instead of plain ol PQexec which is what
> > psql uses.
>
> Doesn't appear that way. The PHP source is somewhat cryptic, but I don't
> see much ambiguity here:
>
> pgsql_result = PQexec(pgsql, Z_STRVAL_PP(query));
>
> There're no conditional blocks around that, so it's the only possible
> choice when pg_query() gets called in a PHP script. PHP exposes a
> separate pg_query_params() that wraps PQexecParams().
>
> > I don't speak PHP or have it installed here, so this example
> > is hard for me to investigate. Can someone make a reproducer that uses
> > PQexecParams?
>
> Is there any way that this (or something similar) could still apply?

I just ran your script, and only changed the connect string to reflect my environment. It ran smoothly against my workstation's 8.1.8 pgsql install and against my reporting server's 8.2.4 installation, inserting 30001 rows in each.

I'm not familiar with the host=/tmp bit in the connect string; is that an explicit declaration of using local Unix sockets and the directory to find them? Does it work if you switch to TCP/IP sockets and use a hostname etc. in your pg_connect?

---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[GENERAL] Reminder: PostgreSQL PDXPGDay + PostgreSQL Party
Hello,

This is a reminder to everyone that we are having a PDXPGDay at the Oregon Convention Center on July 22nd. The schedule can be found here:

http://pdxgroups.pbwiki.com/PDXPUG%20PostgreSQL%20Day
http://developer.postgresql.org/index.php/OSCON2007#PostgreSQL_Day_July_22

We are also having a party directly after the day. Details can be found here:

http://www.postresqlparty.org/

Sincerely, Joshua D. Drake

-- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate PostgreSQL Replication: http://www.commandprompt.com/products/

---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[GENERAL] OSCON Booth volunteers
Hello, My current list for volunteers is: * Alvaro Herrera * Josh Berkus * Joshua D. Drake * Chris Travers * David Fetter * Michael Alan Brewer * Robert Bernier * Selena Decklemann (part time) (PDXPUG) * Gabrielle 1-2pm Wednesday (PDXPUG) Am I missing anyone? Joshua D. Drake -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate PostgreSQL Replication: http://www.commandprompt.com/products/ ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] two phase commit
On Thu, Jul 19, 2007 at 03:13:27PM -0700, Ben wrote: > What corner case reduces 2pc from "guaranteed" to "very high probability"? > Is the worry if somebody leaves transactions in a prepared state for > weeks, only to find that deadlock issues has arrisen at final commit time? That's not the worry, no. But something _else_ could happen. For instance, recently it turned out that there was a way, using 2PC, to lock everybody out of the database. The only remedy to that at the moment is to blow away all the PREPAREd transactions, which could mean you lose something that was already committed to. A -- Andrew Sullivan | [EMAIL PROTECTED] This work was visionary and imaginative, and goes to show that visionary and imaginative work need not end up well. --Dennis Ritchie ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Solved? Re: [GENERAL] 8.2.4 signal 11 with large transaction
In response to "Scott Marlowe" <[EMAIL PROTECTED]>: > On 7/20/07, Bill Moran <[EMAIL PROTECTED]> wrote: > > In response to Tom Lane <[EMAIL PROTECTED]>: > > > > > Bill Moran <[EMAIL PROTECTED]> writes: > > > > I'm now full of mystery and wonder. It would appear as if the > > > > underlying problem has something to do with PHP, but why should this > > > > cause a backend process to crash? > > > > > > I'd bet on PHP submitting the query via extended query protocol > > > (PQexecParams or equivalent) instead of plain ol PQexec which is what > > > psql uses. > > > > Doesn't appear that way. The PHP source is somewhat cryptic, but I > > don't seem much ambiguity here: > > > > pgsql_result = PQexec(pgsql, Z_STRVAL_PP(query)); > > > > There're no conditional blocks around that, so it's the only possible > > choice when pg_query() gets called in a PHP script. PHP exposes a > > seperate pg_query_params() that wraps PQexecParams(). > > > > > I don't speak PHP or have it installed here, so this example > > > is hard for me to investigate. Can someone make a reproducer that uses > > > PQexecParams? > > > > Is there any way that this (or something similar) could still apply? > > I just ran your script, and only changed the connect string to reflect > my environment. > > It ran smoothly against my workstations 8.1.8 pgsql install and > against my reporting server's 8.2.4 installation, inserting 30001 rows > in each. Just great. I love these "hard to pin down" issues ... > I'm not familiar with the host=/tmp bit in the connect string, is that > an explicit declaration of using unix local sockets and the directory > to find it? Yes. > Does it work if you go to tcp/ip sockets and use a > hostname etc... in your pg_connect? Behaviour is the same. We first experienced the problem in a development environment where the PHP server is separate from the PG server, thus we're using TCP sockets for that connection. I'm starting to wonder if the OS could be sending the sig 11? ... time warp ... Yup, that was it. The OS was limiting the amount of memory a single process could use via kern.maxdsiz (FreeBSD). This was evident with ulimit -d. So, the fact remains that PG 8.1 returns an out of memory error when it hits this, and PG 8.2 coredumps. Is the 8.2 behaviour expected/ planned? The out of memory error I would expect, the coredump I would not. -- Bill Moran Collaborative Fusion Inc. http://people.collaborativefusion.com/~wmoran/ [EMAIL PROTECTED] Phone: 412-422-3463x4023 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: Solved? Re: [GENERAL] 8.2.4 signal 11 with large transaction
In response to Bill Moran <[EMAIL PROTECTED]>: > In response to "Scott Marlowe" <[EMAIL PROTECTED]>: [snip] > I'm starting to wonder if the OS could be sending the sig 11? > > ... time warp ... > > Yup, that was it. The OS was limiting the amount of memory a single > process could use via kern.maxdsiz (FreeBSD). This was evident with > ulimit -d. > > So, the fact remains that PG 8.1 returns an out of memory error when > it hits this, and PG 8.2 coredumps. Is the 8.2 behaviour expected/ > planned? The out of memory error I would expect, the coredump I > would not. It just occurred to me that there's another wildcard in this one. The 8.1 system I tested was on FreeBSD 5.5, while both 8.2 systems were running on FreeBSD 6.2. I wonder if FreeBSD has changed which signal gets sent on memory exhaustion? -- Bill Moran Collaborative Fusion Inc. http://people.collaborativefusion.com/~wmoran/ [EMAIL PROTECTED] Phone: 412-422-3463x4023 ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] 8.2.4 signal 11 with large transaction
Bill Moran <[EMAIL PROTECTED]> writes: > Oddly, the query succeeds if it's fed into psql. > I'm now full of mystery and wonder. It would appear as if the > underlying problem has something to do with PHP, but why should this > cause a backend process to crash? Ah, I see it. Your PHP script is sending all 3 INSERT commands to the backend *in a single PQexec*, ie, one 37MB command string. psql won't do that, it splits the input at semicolons. Unsurprisingly, this runs the backend out of memory. (It's not the command string that's the problem, so much as the 3 parse and plan trees...) Unfortunately, in trying to prepare the error message, it tries to attach the command text as the STATEMENT field of the log message. All 37MB worth. And of course *that* gets an out-of-memory error. Presto, infinite recursion, broken only by stack overflow (= SIGSEGV). It looks like 8.1 and older are also vulnerable to this, it's just that they don't try to log error statement strings at the default logging level, whereas 8.2 does. If you cranked up log_min_error_statement I think they'd fail too. I guess what we need to do is hack the emergency-recovery path for error-during-error-processing such that it will prevent trying to print a very long debug_query_string. Maybe we should just not try to print the command at all in this case, or maybe there's some intermediate possibility like only printing the first 1K or so. Thoughts? regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Reminder: PostgreSQL PDXPGDay + PostgreSQL Party
Joshua D. Drake wrote: Hello, This is a reminder to everyone that we hare have a PDXPGDay at the Oregon Convention Center on July 22nd. The schedule can be found here: http://pdxgroups.pbwiki.com/PDXPUG%20PostgreSQL%20Day http://developer.postgresql.org/index.php/OSCON2007#PostgreSQL_Day_July_22 We are also having a party directly after the day. Details can be found here: http://www.postresqlparty.org/ err http://www.postgresqlparty.org J Sincerely, Joshua D. Drake -- === The PostgreSQL Company: Command Prompt, Inc. === Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240 Providing the most comprehensive PostgreSQL solutions since 1997 http://www.commandprompt.com/ Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate PostgreSQL Replication: http://www.commandprompt.com/products/ ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Feature request: Per database search_path
On Wed, Jul 18, 2007 at 11:02:51PM +0100, Richard Huxton wrote:
> Francisco Reyes wrote:
> > As far as I know, currently one can set the search path globally, or on
> > a per-role basis.
> >
> > I was wondering if it could be possible to have a per-database search_path.
> > I believe this would be not only convenient, but will add flexibility.
>
> ALTER DATABASE leia SET search_path = public,lookups;
>
> Seems to work for me on 8.2 - you'll need to disconnect and reconnect to
> see it take effect though.

BTW, in case it's not obvious, that works with pretty much anything that's session-settable. Same with ALTER ROLE ... SET.

-- Jim Nasby [EMAIL PROTECTED] EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
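A couple of concrete examples of the per-database and per-role forms; the role and schema names other than the ones quoted above are made up, and only new sessions pick the settings up:

  ALTER DATABASE leia SET search_path = public, lookups;
  ALTER ROLE reportuser SET search_path = reporting, public;

If both a database-level and a role-level setting exist, the role-level one wins, as far as I recall.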
[GENERAL] Difference between PRIMARY KEY index and UNIQUE-NOT NULL index
Hi all. Maybe mine is a stupid question, but I'd like to know the answer if possible.

In an inner join involving a 16M+ row table and a 100+ row table, performance improved drastically (by 100+ times) after replacing a UNIQUE-NOT NULL index with a PRIMARY KEY on the very same columns in the very same order. The query was not modified.

In the older case, thanks to the EXPLAIN command, I saw that the join was causing a sort on the index elements, while with the primary key it was not. So there's some difference for sure, but I'm missing it.

Any hint?

-- Vincenzo Romano -- Maybe Computer will never become as intelligent as Humans. For sure they won't ever become so stupid. [VR-1988]

---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Difference between PRIMARY KEY index and UNIQUE-NOT NULL index
On Jul 20, 2007, at 17:54 , Vincenzo Romano wrote:
> In an inner join involving a 16M+ row table and a 100+ row table, performance
> improved drastically (by 100+ times) after replacing a UNIQUE-NOT NULL index
> with a PRIMARY KEY on the very same columns in the very same order. The query
> was not modified.

There should be no difference in query performance, AIUI.

> In the older case, thanks to the EXPLAIN command, I saw that the join was
> causing a sort on the index elements, while with the primary key it was not.

Can you provide the actual EXPLAIN ANALYZE (not just EXPLAIN) output for us to look at? I suspect there's a difference wrt the size of the tables, the distribution of the values of the involved columns, index bloat, or how recently the tables have been analyzed. (Most likely the last.) Dropping the UNIQUE NOT NULL constraint and adding the PRIMARY KEY constraint will cause the index to be recreated, which could affect which plan is chosen and its efficacy. Without the EXPLAIN ANALYZE output, I don't think there's a lot of hope in understanding what's different.

Michael Glaesemann grzm seespotcode net

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] When is PostgreSQL 8.3 slated for release?
On Jul 20, 2007, at 6:37 , Tom Allison wrote: x.0 to x.1 is a lot of relatively easy things to address from a major point release. From 8.0 to 8.1 is a major release for PostgreSQL. 8.0, 8.1, and 8.2 are all major releases. "Point" releases for PostgreSQL are, for example, from 8.0.1 to 8.0.2. Michael Glaesemann grzm seespotcode net ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Foreign key constraint question
On Fri, 2007-07-20 at 09:27 -0500, Perry Smith wrote: > On Jul 20, 2007, at 9:06 AM, Michael Fuhr wrote: > > > On Fri, Jul 20, 2007 at 08:57:25AM -0500, Perry Smith wrote: > >> I want to do something like this: > >> > >> ALTER TABLE companies ADD CONSTRAINT fk_companies_item_id > >> FOREIGN KEY (item_id, 'Company') > >> REFERENCES item_bases(item_id, item_type) > >> INITIALLY DEFERRED > >> > >> I could add a column to companies that is always set to "Company" but > >> that seems like a waste. I tried the above and I got a syntax error. > > > > What purpose is the constraint intended to achieve? > > I am trying to create a polymorphic item. item_bases holds the > item_id and item_type. Different tables like companies, people, etc > will hold the data. This example is for companies. The item_type > does not hold the actual table name but holds the "class" name. Is there a reason you're storing the type (or, to be precise, a more specific type) of the entity as a _value_ in the table that holds attributes of the super type? That seems limiting, and means you can only have one extra level of specificity in your entity type. Better to just not include the type of the entity as a value at all. If you want only companies, join item_bases to companies and the join will eliminate any non-company entities from the result (because the non- company entities in item_bases will have no matching tuple in companies). The relational model handles inheritance and polymorphism very well if you don't store types as values. > The idea is that the item_id will be the same in item_bases and in > companies (to pick an example). I want to make sure that the > item_base points to an item and the item points back to the item_base. > > I can add a check constraint to item_base to make sure that with the > item_id and item_type it references a record in the right table. But > I can not defer that. So, I plan to add to the companies table > first, then add to item_bases table. > You can use a constraint trigger. The current docs say that's "not intended for general use," but it will be available for general use in 8.3. And those can be deferred. Regards, Jeff Davis ---(end of broadcast)--- TIP 6: explain analyze is your friend
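To make the join idea concrete, a hedged sketch using the table names from this thread (the column names are assumed): selecting through the join returns only the entities that are companies, without ever storing a type value anywhere:

  SELECT c.*
  FROM item_bases b
  JOIN companies c ON c.item_id = b.item_id;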
Re: [GENERAL] Foreign key constraint question
On Jul 20, 2007, at 7:01 PM, Jeff Davis wrote:

On Fri, 2007-07-20 at 09:27 -0500, Perry Smith wrote:
On Jul 20, 2007, at 9:06 AM, Michael Fuhr wrote:
On Fri, Jul 20, 2007 at 08:57:25AM -0500, Perry Smith wrote:

I want to do something like this:

ALTER TABLE companies ADD CONSTRAINT fk_companies_item_id
  FOREIGN KEY (item_id, 'Company')
  REFERENCES item_bases(item_id, item_type)
  INITIALLY DEFERRED

I could add a column to companies that is always set to "Company" but that seems like a waste. I tried the above and I got a syntax error.

What purpose is the constraint intended to achieve?

I am trying to create a polymorphic item. item_bases holds the item_id and item_type. Different tables like companies, people, etc. will hold the data. This example is for companies. The item_type does not hold the actual table name but holds the "class" name.

Is there a reason you're storing the type (or, to be precise, a more specific type) of the entity as a _value_ in the table that holds attributes of the super type? That seems limiting, and means you can only have one extra level of specificity in your entity type. Better to just not include the type of the entity as a value at all. If you want only companies, join item_bases to companies and the join will eliminate any non-company entities from the result (because the non-company entities in item_bases will have no matching tuple in companies). The relational model handles inheritance and polymorphism very well if you don't store types as values.

What if I have just an id for an item? This will happen when another table references an item. How do I know what type it is? Are you suggesting I look in companies, people, etc., etc. to find the type? It would seem better to have a table that tells me the type, then retrieve the item from the specified table.

The idea is that the item_id will be the same in item_bases and in companies (to pick an example). I want to make sure that the item_base points to an item and the item points back to the item_base.

I can add a check constraint to item_base to make sure that with the item_id and item_type it references a record in the right table. But I cannot defer that. So, I plan to add to the companies table first, then add to the item_bases table.

You can use a constraint trigger. The current docs say that's "not intended for general use," but it will be available for general use in 8.3. And those can be deferred.

I saw those but was scared away from them because of the "not for general use". With my somewhat limited experience, I figured that applied to me.

Thank you again for your help,

Perry

---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] two phase commit
On Fri, 2007-07-20 at 15:26 -0400, Andrew Sullivan wrote: > On Thu, Jul 19, 2007 at 03:13:27PM -0700, Ben wrote: > > What corner case reduces 2pc from "guaranteed" to "very high probability"? > > Is the worry if somebody leaves transactions in a prepared state for > > weeks, only to find that deadlock issues has arrisen at final commit time? > > That's not the worry, no. But something _else_ could happen. For > instance, recently it turned out that there was a way, using 2PC, to > lock everybody out of the database. The only remedy to that at the > moment is to blow away all the PREPAREd transactions, which could > mean you lose something that was already committed to. > To clarify, I think you're referring to this: http://archives.postgresql.org/pgsql-hackers/2007-07/msg00245.php which can only be done as superuser locking a system table. I would classify that as a "catastrophic" problem, since it involves manually modifying $PGDATA. Regards, Jeff Davis ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org/
Re: [GENERAL] Foreign key constraint question
On Fri, 2007-07-20 at 19:18 -0500, Perry Smith wrote: > > The relational model handles inheritance and polymorphism very well if > > you don't store types as values. > > What if I have just an id for an item? This will happen when another > table references an item. How do I know what type it is? Are you > suggesting I look in companies, people, etc, etc to find the type? > It would seem better to have a table that tells me the type. Then > retrieve the item from the specified table. Why do you need to know the type? The purpose of polymorphism is that, if you are looking to access a set of polygons, you don't care whether an individual shape is a triangle or a square, all you care is that it's a polygon. If you want to access triangles specifically, you join polygons to triangles. > > You can use a constraint trigger. The current docs say that's "not > > intended for general use," but it will be available for general use in > > 8.3. And those can be deferred. > > I saw those but where scared away from them because of the "not for > general use". > I only mentioned it because in 8.3 it will be useful for general use. I don't know what's changing about it between now and then, but it's becoming "un-deprecated". It's probably wise to stay away from them until 8.3. Regards, Jeff ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Solved? Re: 8.2.4 signal 11 with large transaction
On 2007-07-20, Bill Moran <[EMAIL PROTECTED]> wrote: > It just occurred to me that there's another wildcard in this one. > The 8.1 system I tested was on FreeBSD 5.5, while both 8.2 systems > were running on FreeBSD 6.2. I wonder if FreeBSD has changed > which signal gets sent on memory exhaustion? Neither 5.x nor 6.x send any sort of signal simply because maxdsize was reached; the only effect of reaching maxdsize is that further memory allocations fail. I'd guess that the most likely cause for a sig11 under those circumstances is a stack overflow resulting from recursive errors... -- Andrew, Supernews http://www.supernews.com - individual and corporate NNTP services ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] psql-odbc configure error
Hi. Umm, I don't understand your build environment well. However, the following examination may help to find the problem.

-- long_test.c - start ---
long longval () { return (long) (sizeof (long)); }
unsigned long ulongval () { return (long) (sizeof (long)); }

#include <stdio.h>
#include <stdlib.h>

int main ()
{
  long i = longval ();
  unsigned long ui = ulongval ();
  printf ("%ld\n", ((long) (sizeof (long))));
  printf ("%ld\n", i);
  printf ("%lu\n", ((long) (sizeof (long))));
  printf ("%lu\n", ui);
  return 0;
}
-- long_test.c - end ---

inet% gcc -o long_test long_test.c
inet% long_test
4
4
4
4
inet% ldd long_test
long_test: libc.so.6 => /lib/libc.so.6 (0x28074000)
inet% gcc -v
Using built-in specs.
Configured with: FreeBSD/i386 system compiler
Thread model: posix
gcc version 3.4.4 [FreeBSD] 20050518

I hope this helps you! Regards, Hiroshi Saito

From: "djisgitt"
This message bounced from psql-odbc (I suppose because I am not subscribed there), so I am hoping some kindly soul here will help me! Hi developers, In attempting to build psqlodbc from source, I receive the following error in configure after invoking it with ./configure --with-unixodbc=/usr/local

checking for ssize_t... yes
checking for long... yes
checking size of long... configure: error: cannot compute sizeof (long)
See `config.log' for more details.

I have attached the config.log in order to avoid snipping the important thing. I notice in the log that it fails reading libodbcinst.so.1, but

lrwxrwxrwx 1 root root 20 Jul 19 16:59 /usr/local/lib/libodbcinst.so.1 -> libodbcinst.so.1.0.0

[EMAIL PROTECTED] psqlodbc-08.02.0400]$ ls -l /usr/local/lib/libodbcinst.so.1.0.0

and

-rwxr-xr-x 1 root root 310919 Jul 19 16:59 /usr/local/lib/libodbcinst.so.1.0.0

and /usr/local/lib is in the ldconfig path. Just prior to this, I installed unixODBC 2.2.12 from source in /usr/local. So, please tell me what I did wrong, and how to fix it. Thank you. Don

---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Foreign key constraint question
On Jul 20, 2007, at 19:37 , Jeff Davis wrote: I only mentioned it because in 8.3 it will be useful for general use. I don't know what's changing about it between now and then, but it's becoming "un-deprecated". AFAIK, nothing's changed in the actual constraint trigger code: it's just a doc change. No reason not to use them in 8.2. Michael Glaesemann grzm seespotcode net ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] query optimizer
Luca Ferrari <[EMAIL PROTECTED]> writes:
>> src/backend/optimizer/README
> I read this yesterday, very interesting, but I'm looking for something
> similar related to geqo. I mean, is there any presentation/demo that
> explains step by step how geqo makes its decisions?

There's not a lot, but I just made a few improvements here and here:

http://developer.postgresql.org/pgdocs/postgres/planner-optimizer.html
http://developer.postgresql.org/pgdocs/postgres/geqo-pg-intro.html

Beyond that, the GEQO chapter provides several references, and IMHO you should not be all that resistant to looking into the source code. Even if you don't read C well, many of the files provide a wealth of info in the comments.

regards, tom lane

---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Difference between PRIMARY KEY index and UNIQUE-NOT NULL index
On 7/20/07, Michael Glaesemann <[EMAIL PROTECTED]> wrote: On Jul 20, 2007, at 17:54 , Vincenzo Romano wrote: > In an inner join involving a 16M+ rows table and a 100+ rows table > performances got drastically improved by 100+ times by replacing a > UNIQUE-NOT NULL index with a PRIMARY KEY on the very same columns in > the very same order. The query has not been modified. There should be no difference in query performance, AIUI. If I read the documentation correctly, PRIMARY KEY is simply syntactic sugar equivalent to UNIQUE + NOT NULL, the only difference being that a PRIMARY KEY is reported as such to someone looking at the table structure, which becomes more intuitive than seeing UNIQUE + NOT NULL. > In the older case, thanks to the EXPLAIN command, I saw that the join > was causing a sort on the index elements, while the primary key was > not. Might it just be that the original UNIQUE + NOT NULL index was bloated or otherwise degraded, and reindexing it would have resulted in the same performance gain? That's just a guess. -Josh ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
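One way to test the bloat theory on the original index (the index name here is hypothetical; pg_relation_size has been available since 8.1):

  SELECT pg_relation_size('my_table_unique_idx');  -- size before
  REINDEX INDEX my_table_unique_idx;
  SELECT pg_relation_size('my_table_unique_idx');  -- size after

If the rebuilt index comes out much smaller and the old plan comes back, index bloat rather than PRIMARY KEY vs. UNIQUE-NOT NULL was the real difference.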
Re: [GENERAL] Difference between PRIMARY KEY index and UNIQUE-NOT NULL index
"Josh Tolley" <[EMAIL PROTECTED]> writes: > Might it just be that the original UNIQUE + NOT NULL index was bloated > or otherwise degraded, and reindexing it would have resulted in the > same performance gain? That's just a guess. Yeah. There is precious little difference between UNIQUE+NOT NULL and PRIMARY KEY --- to be exact, the latter will allow another table to reference this one in FOREIGN KEY without specifying column names. The planner knows nothing of that little convenience. The interesting thing about this report is that the plan changed after creating the new index. That has to mean that some statistic visible to the planner changed. Creating an index does update the pg_class columns about the table's size and number of rows, but probably those weren't that far off to start with. My bet is that the new index is a lot smaller than the old because of bloat in the old index. If so, REINDEX would have had the same result. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[GENERAL] Char vs SmallInt
Which data type is smaller and will lead to better query performance - smallint or char?