[BUGS] BUG #2825: Installation doesn't let me create username
The following bug has been logged online:

Bug reference:      2825
Logged by:          Josh
Email address:      [EMAIL PROTECTED]
PostgreSQL version: 8.1
Operating system:   Windows XP
Description:        Installation doesn't let me create username
Details:

When I try to install Postgres as a service, it tells me that every name I try to use already exists. What do I do?
[BUGS] BUG #3823: Installing: permissions are set, but it still says they're not
The following bug has been logged online:

Bug reference:      3823
Logged by:          Josh
Email address:      [EMAIL PROTECTED]
PostgreSQL version: 8.2.5
Operating system:   Windows Server 2003
Description:        Installing: permissions are set, but it still says they're not
Details:

I tried installing PostgreSQL onto a GoDaddy dedicated server running Server 2003, with no luck. Even after setting the permissions on C:, C:\Program Files, C:\Program Files\Postgres, etc. so that Everyone is allowed to do anything to those directories, I still can't get Postgres to install. So I asked GoDaddy; they tried, and their IT people can't figure out what's wrong either :).
[BUGS] BUG #1670: pg_dump fails on CentOS 4
The following bug has been logged online:

Bug reference:      1670
Logged by:          Josh
Email address:      [EMAIL PROTECTED]
PostgreSQL version: 8.0.2, 8.0.3
Operating system:   CentOS 4
Description:        pg_dump fails on CentOS 4
Details:

pg_dump, as packaged in postgresql-8.0.3-1PGDG.i686.rpm, fails with no output whatsoever when run from a terminal window:

[EMAIL PROTECTED] ~]$ pg_dump -U username database > backup.sql
[EMAIL PROTECTED] ~]$

However, when run from within pgAdmin3, it runs as it should. When I download the source to 8.0.3 and compile it from scratch, the freshly compiled pg_dump works as it should from any source (pgAdmin3, command line).
Re: [BUGS] BUG #1670: pg_dump fails on CentOS 4
Tom,

> Try turning off SELinux enforcement, or better, update the selinux policy
> package and do a restorecon on all of /usr/bin.

I'll definitely give this a try. Does this policy also apply to binaries compiled on the local machine? When I compiled PG 8.0.3 from source, then copied pg_dump to /usr/bin as root, pg_dump worked fine.

Thanks for the help!

--Josh
Re: [BUGS] BUG #5911: pg_notify() function only works when channel name is lower case
Thank you both for clearing that up (and doing so quite quickly!). The behavior makes complete sense now that I understand what is happening behind the scenes.

Regards,
Josh

On 3/3/2011 11:24 AM, Tom Lane wrote:
> "Joshua McDougall" writes:
>> When using the pg_notify(text, text) function, the channel name MUST be
>> lower case, otherwise the message does not go through.
>
> It's not clear to me that this is a bug. The argument of NOTIFY is a SQL
> identifier, which is folded to lower case by the lexer if not double-quoted,
> but the argument of pg_notify is a string constant, which is a different
> matter altogether. We could have pg_notify lowercase its argument at
> runtime, but then we'd have to introduce quoting rules, so that you could do
>
>     select pg_notify('"IntentionallyMixedCase"', '...');
>
> This isn't a lot clearer than the current behavior, and it definitely
> wouldn't be backwards compatible. So I'm inclined to leave it alone.
>
> regards, tom lane
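To illustrate the folding rule Tom describes, a minimal sketch (hypothetical channel names, not taken from the original report):

-- NOTIFY takes an identifier: unquoted names are folded to lower case,
-- so both statements below refer to the channel "mychannel".
LISTEN MyChannel;
NOTIFY MyChannel;

-- pg_notify() takes a string constant: no folding happens, so this goes to
-- the channel "MyChannel", which the listener above will never see.
SELECT pg_notify('MyChannel', 'payload');

-- matching the folded name explicitly works as expected
SELECT pg_notify('mychannel', 'payload');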
[BUGS] Combination of Triggers and self-FKs produces inconsistent data
Version: 8.3.5
Install: self-compile on 64-bit Ubuntu Linux; also reproduced by AndrewSN on another platform
Summary: self-referential FKs are not enforced properly in the presence of BEFORE triggers

Test Case:

-- create two tables, one of which is the master table (reftable), the other of
-- which is a child which contains a tree structure (treetab):

create table reftable (
    refid int primary key,
    refname text
);

create table treetab (
    id int primary key,
    parent int,
    refid int not null references reftable(refid) on delete cascade,
    name text
);

-- now create a trigger function to maintain the integrity of the trees in treetab
-- by "pulling up" each node to its parent if intermediate nodes get deleted.
-- this trigger is inherently flawed and won't work with the FK below

create function treemaint () returns trigger as $t$
begin
    update treetab set parent = OLD.parent
        where parent = OLD.id;
    return OLD;
end; $t$ language plpgsql;

create trigger treemaint_trg before delete on treetab
    for each row execute procedure treemaint();

-- populate reftable
insert into reftable
    select i, ( 'Ref' || i::TEXT ) from generate_series(1,100) as g(i);

-- populate treetab with 10 rows each pointing to reftable
insert into treetab (id, refid)
    select i, (( i / 10::INT ) + 1 ) from generate_series(1,900) as g(i);

-- create trees in treetab. for this simple example each treeset is just a chain
-- with each child node pointing to one higher node
update treetab set parent = ( id - 1 )
    where id > ( select min(id) from treetab tt2 where tt2.refid = treetab.refid );

update treetab set "name" = ('tree' || parent::TEXT || '-' || id::TEXT);

-- now create a self-referential FK to enforce tree integrity. This logically
-- breaks the trigger
alter table treetab add constraint selfref
    foreign key (parent) references treetab (id);

-- show tree for id 45
select * from treetab where refid = 45;

 id  | parent | refid |    name
-----+--------+-------+-------------
 440 |        |    45 |
 441 |    440 |    45 | tree440-441
 442 |    441 |    45 | tree441-442
 443 |    442 |    45 | tree442-443
 444 |    443 |    45 | tree443-444
 445 |    444 |    45 | tree444-445
 446 |    445 |    45 | tree445-446
 447 |    446 |    45 | tree446-447
 448 |    447 |    45 | tree447-448
 449 |    448 |    45 | tree448-449

-- now, we're going to delete the tree. This delete should fail with an error
-- because the trigger will violate "selfref"
delete from reftable where refid = 45;

-- however, it doesn't fail. it reports success, and some but not all rows from
-- treetab are deleted, leaving the database in an inconsistent state.
select * from treetab where refid = 45;

 id  | parent | refid |    name
-----+--------+-------+-------------
 441 |        |    45 | tree440-441
 443 |    441 |    45 | tree442-443
 445 |    443 |    45 | tree444-445
 447 |    445 |    45 | tree446-447
 449 |    447 |    45 | tree448-449

-- this means we now have rows in the table which violate the FK to reftable.
postgres=# select * from reftable where refid = 45;
 refid | refname
-------+---------
(0 rows)
Re: [BUGS] Combination of Triggers and self-FKs produces inconsistent data
Tom Lane wrote:
> Josh Berkus writes:
>> Summary: self-referential FKs are not enforced properly in the presence
>> of BEFORE triggers
>
> This isn't a bug. If you create triggers that prevent the RI actions from
> being taken, it's your own problem.

Huh? Since when was it OK by us to have data which violates a declared FK under *any* circumstances? Where in our docs does it say that foreign keys are not enforced if the table has triggers on it?

--Josh
Re: [BUGS] Combination of Triggers and self-FKs produces inconsistent data
Tom,

> You can't have your cake and eat it too, Josh. If we make the RI mechanism
> operate at a level underneath triggers, then we'll lose all sorts of useful
> capability that people are depending on. A couple of examples:
> * the ability to log table changes caused by RI cascades
> * the ability to maintain row update timestamps when the update is caused
>   by an RI cascade

Yeah, I can see that there isn't an obvious fix. However, at the end of the day it means that RI in Postgres can be accidentally broken by user action without removing or disabling the constraint. This isn't a comfortable thought; it sounds an awful lot like another OSS-DB.

Or to put it another way: we don't allow triggers to break UNIQUE constraints or CHECK constraints. All of the other constraints operate at a level below triggers. Why are FKs different?

> It doesn't say that, because it isn't true. What is true is that if you make
> a trigger that prevents updates from happening, it breaks RI updates as well
> as directly-user-initiated updates.

Again, if we're going to retain this issue, then it needs to be in the documentation that RI isn't enforced on the results of triggers. Because, polling 5 people on IRC who each have more than 3 years of PostgreSQL experience ... and two of whom are code contributors ... this issue surprised *all* of them.

> Either way, you're going to need to fix the trigger.

If you read to the end of the example, you'd see that I'm saying that the trigger should *fail*, with an error. Not work. Throughout the history of the project, no functionality which ends in an inconsistent data state has ever been acceptable that I can recall. When did we change our policy?

--Josh Berkus
[BUGS] Weird quirk with pg_dump of complex types
All,

This is not so much a bug as a weird inconsistency, which ought to go on a list somewhere so that it gets cleaned up the next time someone overhauls pg_dump:

Summary: CREATE TYPE uses explicit schemas
Versions Tested: 8.2.9, 8.3.5
Platform: Linux

Description of Issue:

When doing pg_dump in text mode, complex types will be dumped like this:

CREATE TYPE complex_foo AS (
    var INT,
    gar TEXT,
    natch public.foo_type
);

That is, a custom type in a complex type declaration is explicitly schema-qualified, even when the schema in question is in the default search_path. This is inconsistent with all other database objects, which use "SET search_path" to qualify the correct schemas.

This is only a real problem in that it may interfere with backup and/or schema comparison automation (like I'm trying to write right now).

--Josh Berkus
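For contrast, a sketch of the style used for most other dumped objects, per the description above (hypothetical object names, not actual pg_dump output):

-- other objects rely on the search path set earlier in the dump ...
SET search_path = public, pg_catalog;

CREATE TABLE foo_table (
    natch foo_type          -- no explicit "public." prefix here
);

-- ... while the composite type above spells the schema out inline:
--     natch public.foo_type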
Re: [BUGS] Weird quirk with pg_dump of complex types
Jeff,

> Functions are similar, actually. The argument list needs to specify schema
> paths as well, if it's not in some expected place (I think it does so for
> all schemas other than pg_catalog).

Except that they don't appear to do so.

--Josh
Re: [BUGS] Weird quirk with pg_dump of complex types
Jeff,

> In the general case though, for any object that refers to multiple other
> objects, I don't see any way around explicit schema qualification. I suppose
> it could be smart and say "foo_type is unique in my search path, so I don't
> need to schema-qualify it".

Yeah, but for most other objects "public" is also excluded as well as pg_catalog. For CREATE TYPE, "public" is explicit.

> Have you considered working from the "custom" format rather than text? I'm
> not sure whether it solves your problem, but I think it provides the most
> information.

--Josh Berkus
Re: [BUGS] Weird quirk with pg_dump of complex types
Tom Lane wrote:
> Josh Berkus writes:
>> When doing pg_dump in text mode, complex types will be dumped like this:
>> CREATE TYPE complex_foo ( var INT, gar TEXT, natch public.foo_type );
>
> You didn't say which schema "complex_foo" is in?

Public.

--Josh
Re: [BUGS] Weird quirk with pg_dump of complex types
Jeff Davis wrote:
> On Fri, 2009-02-27 at 01:24 -0500, Tom Lane wrote:
>> Are you entirely sure that they don't?
>
> Oh, you're right, of course:
>
> postgres=# create type public.mytype as (i int);
> CREATE TYPE
> postgres=# create type public.mytype2 as (j mytype);
> CREATE TYPE
>
> -- pg_dump output:
> CREATE TYPE mytype2 AS (
>     j mytype
> );

Really? OK, I'll have to work up a reproducible case, because I'm definitely getting the "public" qualification in the create type.

--Josh
Re: [BUGS] MD5 checksum or RPM for PostgreSQL 8.7.3
Dafina,

> I am a member of a small group using PostgreSQL as a data backend, and I am
> writing to request an MD5 checksum for PostgreSQL 8.7.3. If an MD5 is not
> available, I would appreciate the location of an RPM for 8.7.3, if there is
> one available. Thank you very much for your assistance in this matter.

Um, there's no such thing as PostgreSQL 8.7.3. The latest stable version is *8.3.7*. Was that the version you meant?

If that's the case, then everything you want is here:

http://www.postgresql.org/download/linux
http://www.postgresql.org/ftp/source/v8.3.7/

--Josh Berkus
Re: [BUGS] Possible stability issue: permanent hang on dropdb
> It's too bad you didn't capture a stack backtrace at step #3 or step
> #6. If you manage to reproduce the problem, that would be a good
> thing to try to get.

It never actually crashed. And, of course, this was happening right when I needed to go home and the server needed to be up for that.

--Josh
Re: [BUGS] Possible stability issue: permanent hang on dropdb
On 2/6/10 8:20 PM, Robert Haas wrote:
> On Sat, Feb 6, 2010 at 7:43 PM, The Fuzzy Chef wrote:
>>> It's too bad you didn't capture a stack backtrace at step #3 or step
>>> #6. If you manage to reproduce the problem, that would be a good
>>> thing to try to get.
>>
>> Well, I never got an actual crash.
>
> That's OK - you can still attach gdb and see where it's hung up... it
> would have been really nice to see what that "hung" drop database
> thought it was doing...

Yep, sorry. Was trying to get the system working first, and then afterwards thought it might be worth reporting. Anything I can mine out of the logs or files?

--Josh Berkus
[BUGS] RETURNS TABLE returns NULL set when called by another RETURNS TABLE
Pavel, all:

Apparently if you use one RETURNS TABLE function to call a 2nd RETURNS TABLE function, it returns a recordset which consists entirely of NULLs. Here's the test case:

create table srf_data ( id serial, cat int, val text );

insert into srf_data ( cat, val )
values ( 1, 'josh' ), ( 1, 'selena' ), ( 2, 'bruce' ), ( 2, 'josh' ), ( 3, 'robert' );

create or replace function srf1 ( this_cat int )
returns table ( id1 int, val1 text )
language sql as $f$
    select id, val from srf_data where cat = $1;
$f$;

create or replace function srf2 ( )
returns table ( id1 int, val1 text )
language plpgsql as $f$
begin
    return query select id1, val1 from srf1(1);
    return;
end; $f$;

select * from srf2();
Re: [BUGS] RETURNS TABLE returns NULL set when called by another RETURNS TABLE
> val1 is just as ambiguous. I think you got bit by the name collision;
> the output parameters would start out NULLs and thus lead to the
> described behavior, in versions before 9.0.

Aha, yeah, that's probably it. Take this example as the reason we had to change the behavior ...

--Josh Berkus
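A sketch of one way to sidestep the collision in the test case above: rename srf2's output columns so they no longer shadow srf1's output columns (illustrative, untested):

create or replace function srf2_fixed ( )
returns table ( out_id int, out_val text )    -- no longer named id1/val1
language plpgsql as $f$
begin
    -- id1 and val1 now unambiguously refer to srf1's output columns,
    -- not to this function's own output parameters
    return query select id1, val1 from srf1(1);
    return;
end; $f$;

select * from srf2_fixed();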
[BUGS] [Fwd: [TESTERS] Numerics of different scales Raises Type Mismatch Error in a Set Returning Function]
All,

I tested Noel's test case and verified that it does, in fact, break. It works on 8.4, however.

--Josh Berkus

-------- Original Message --------
Subject: [TESTERS] Numerics of different scales Raises Type Mismatch Error in a Set Returning Function
Date: Tue, 2 Mar 2010 20:07:07 -0800
From: Noel Proffitt
To: pgsql-test...@postgresql.org

[TEST REPORT]
[Release]: 9.0 Alpha 4
[Test Type]: feature
[Test]: NUMERICS OF DIFFERENT SCALE UNABLE TO CAST TO RESULTS IN SET RETURNING FUNCTION
[Platform]: Linux RHEL/Fedora
[Parameters]:
[Failure]: Yes
[Results]:

ERROR: wrong record type supplied in RETURN NEXT
DETAIL: Returned type numeric does not match expected type numeric(14,2) in column 1.
CONTEXT: PL/pgSQL function "check_numeric" line 5 at RETURN NEXT

-- Test case

CREATE TABLE a_table ( val NUMERIC );
INSERT INTO a_table VALUES (42);

CREATE TABLE b_table ( val NUMERIC(14,2) );

CREATE OR REPLACE FUNCTION check_numeric() RETURNS SETOF b_table AS $$
DECLARE
    myrec RECORD;
BEGIN
    SELECT * INTO myrec FROM a_table;
    RETURN NEXT myrec;
    RETURN;
END;
$$ LANGUAGE 'plpgsql' IMMUTABLE;

SELECT * FROM check_numeric();

[Comments]: Works in Pg 8.3 and 8.4. Didn't see a change in the release notes notifying of the behavior change.
Re: [BUGS] [Fwd: [TESTERS] Numerics of different scales Raises Type Mismatch Error in a Set Returning Function]
On 3/6/10 5:45 PM, Tom Lane wrote:
> The reason for the behavioral change is that plpgsql, which formerly
> had really crummy tuple conversion logic with a whole bunch of other
> deficiencies besides this one, now shares the logic used by
> ConvertRowtypeExpr.

Oh, yes, of course. Should have thought of that.

--Josh Berkus
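For reference, a sketch of how the forwarded test case could be made to pass under the stricter conversion logic, by returning a value that already carries the expected type (untested; uses the tables from Noel's example):

CREATE OR REPLACE FUNCTION check_numeric() RETURNS SETOF b_table AS $$
BEGIN
    -- cast the unconstrained NUMERIC to the declared NUMERIC(14,2) explicitly
    RETURN QUERY SELECT val::numeric(14,2) FROM a_table;
    RETURN;
END;
$$ LANGUAGE plpgsql;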
[BUGS] PD_ALL_VISIBLE flag error on 9.0 alpha 4
All,

What I did:

1. Set up 9.0a4 doing SR replication with a 2nd 9.0a4
2. Ran pgbench for a while.
3. Aborted pgbench with Ctrl-C
4. Changed vacuum_defer_cleanup_age in postgresql.conf and reloaded
5. Ran pgbench again, and got:

Sidney-Stratton:pg90 josh$ pgbench -c 2 -T 300 bench
starting vacuum...WARNING: PD_ALL_VISIBLE flag was incorrectly set in relation "pgbench_branches" page 0
WARNING: PD_ALL_VISIBLE flag was incorrectly set in relation "pgbench_branches" page 1
WARNING: PD_ALL_VISIBLE flag was incorrectly set in relation "pgbench_tellers" page 0
WARNING: PD_ALL_VISIBLE flag was incorrectly set in relation "pgbench_tellers" page 1
...

... not one I'm familiar with. Issues?

--Josh Berkus
Re: [BUGS] PD_ALL_VISIBLE flag error on 9.0 alpha 4
On 3/10/10 3:26 PM, Simon Riggs wrote:
> OK, that's enough to not remove it. I was aware of more negative
> thoughts and conscious of my own feelings about it being a kluge.

Well, it *is* a kludge, but it may be the best one for people who want to use HS/SR to support web applications. So I think we should work on making it less kludgy. Ultimately we're going to need publish-XID-to-master, but that's not realistic for 9.0.

--Josh Berkus
Re: [BUGS] PD_ALL_VISIBLE flag error on 9.0 alpha 4
> That's better, I was worried you'd gone all complimentary on me.

Never fear that!

Was that setting originally part of your design for HS? If so, why did you back off from it?

--Josh
Re: [BUGS] PD_ALL_VISIBLE flag error on 9.0 alpha 4
> It's also my 3rd choice of solution behind fine-grained lock conflicts
> (1st) which would avoid many issues and master/standby in lock step
> (2nd).

Yeah, I just can't imagine you hunting down all of the corner cases for fine-grained lock conflicts in time for 9.0. Given what I've been looking at, it seems like a LOT of work.

--Josh Berkus
[BUGS] Error when lock conflict on REPLACE function
Severity: Annoyance
Versions Tested: 8.4.2
Platform: Linux RHEL 5.4
Reproducible: always

Steps to reproduce:

1. Create a function.
2. In one session, start an explicit transaction.
3. Do a CREATE OR REPLACE on the same function, but do not commit.
4. Open a 2nd session, and an explicit transaction in that session.
5. Do a CREATE OR REPLACE on the same function in the 2nd session.
6. COMMIT the 2nd session.
7. COMMIT the 1st session.
8. You get:

ERROR: duplicate key value violates unique constraint "pg_proc_proname_args_nsp_index"
SQL state: 23505

What should have happened: the 2nd replace should have succeeded. Or it should have given a user-friendly error message.

Opinions?

--Josh Berkus
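The same steps written out as SQL, for anyone trying to reproduce this (hypothetical function name; session labels in comments; the error text is the one reported above):

-- setup, in either session
CREATE FUNCTION f() RETURNS int AS $$ SELECT 1; $$ LANGUAGE sql;

-- session 1
BEGIN;
CREATE OR REPLACE FUNCTION f() RETURNS int AS $$ SELECT 2; $$ LANGUAGE sql;
-- (no commit yet)

-- session 2
BEGIN;
CREATE OR REPLACE FUNCTION f() RETURNS int AS $$ SELECT 3; $$ LANGUAGE sql;
COMMIT;

-- session 1
COMMIT;

-- one of the two sessions then fails with:
-- ERROR: duplicate key value violates unique constraint "pg_proc_proname_args_nsp_index"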
[BUGS] regexp_matches illegally restricts rows
Severity: major (data loss)
Versions Tested: 8.4.2, 9.0 HEAD

Test Case:

create table regex_test (
    id serial not null primary key,
    myname text
);

insert into regex_test ( myname )
values ( 'josh'),('joe'),('mary'),('stephen'),('jose'),('kelley'),('alejandro');

select id, regexp_matches(myname, $x$(j[\w]+)$x$) from regex_test;

The above will return 4 rows, not the 7 which are in the table.

I can't see how this is anything but a bug; as far as I know, nothing in the target list is allowed to restrict the number of rows which are returned by the query. We should get 7 rows, 3 of which have an empty array or a NULL in the 2nd column.

--
Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com
[BUGS] Re: regexp_matches illegally restricts rows -- just a documentation issue?
On 4/5/10 9:16 PM, Josh Berkus wrote:
> I can't see how this is anything but a bug; as far as I know, nothing in
> the target list is allowed to restrict the number of rows which are
> returned by the query. We should get 7 rows, 3 of which have an empty
> array or a NULL in the 2nd column.

Just noticed it's a SETOF[] function. Which makes it odd that I can call it in the target list at all, but explains the row restriction.

It's still confusing behavior (three regulars on IRC thought it was a bug too), and users should be warned in the documentation. Not sure exactly where, though ... maybe in section 9.7?

--Josh Berkus
Re: [BUGS] Re: regexp_matches illegally restricts rows -- just a documentation issue?
> While I understand why this is confusing, it's really very normal
> behavior for a SRF, and I don't really think it makes sense to
> document that this SRF behaves just like other SRFs...

It's likely to be used by people who do not otherwise use SRFs, and many would not be prepared for the consequences. It's not instinctive that a regexp function would be an SRF in any case; if someone is not looking closely at the docs, it would be easy to miss this entirely -- as 3 experienced PG people did yesterday.

Personally, I also think that PostgreSQL is wrong to allow an SRF in the target list to restrict the number of rows output. A subselect in the target list does not do so. However, that's completely another discussion.

--Josh Berkus
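For anyone bitten by this, a workaround sketch against the test case above (untested): wrap the SRF in a scalar subselect, which yields NULL instead of dropping the row when there is no match.

select id,
       ( select regexp_matches(myname, $x$(j[\w]+)$x$) ) as match
from regex_test;
-- returns all 7 rows; non-matching names get NULL in the match column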
[BUGS] Bug Report for 7.1 Beta 3
Folks,

Please ignore this if you've already fixed these bugs in 7.1 beta 4. Also, please note that I don't subscribe to the bugs list --- sorry, but pgsql-sql is all I can keep up with!

BUG REPORT
Version: 7.1 beta 3
Platform: SuSE Linux 7.0
Installation: Mixed; 7.0.3 & 7.1 beta parallel install
Urgency: Cosmetic
Location: psql and pgaccess

I've noticed a problem displaying functions and views via psql in 7.1b3. When one attempts to display views (\dv), one gets nothing. When one displays functions, one gets the opposite ... the command lists *all* functions, including all builtins, not just user-defined functions. Both views and functions operate normally when called via SQL, however.

Please note that I may, due to my parallel install, be accidentally using the ver. 7.0.3 psql with the 7.1b3 Postgres. If so, I'd like to know.

-Josh Berkus

--
__AGLIO DATABASE SOLUTIONS___
Josh Berkus
Complete information technology      [EMAIL PROTECTED]
and data management solutions        (415) 565-7293
for law firms, small businesses      fax 621-2533
and non-profit organizations.        San Francisco
Re: [BUGS] PgSQL 7.1 beta 3 breaks ODBC
Tom,

Thanks for the quick response.

> Try it with the beta version's ODBC driver ...

How do I get it? I looked through the development version FTP filetree, and can't find the ODBC driver anywhere. Link ... pretty please?

Second, I've also discovered that Postgres ODBC 6.50 and Microsoft's MDAC 2.5 are incompatible ... making it hard to connect from the same workstation to both Postgres and MS SQL Server servers. However, I don't know that you can do anything other than post an advisory on this; Microsith is hardly going to give us their proprietary code so that we can make compatible drivers. :-(

-Josh Berkus

__AGLIO DATABASE SOLUTIONS___
Josh Berkus
Complete information technology      [EMAIL PROTECTED]
and data management solutions        (415) 565-7293
for law firms, small businesses      fax 621-2533
and non-profit organizations.        San Francisco
Re: [BUGS] JDBC/JSP: Strange Problem
I'm also having problems with Postgres' JDBC driver and the tag libraries. I have a JSP page which selects everything from a table, and prints it out.

Workaround: back up to using "jdbc7.0-1.2.jar" (or "pgjdbc1.jar", if you don't need all the Java2 JDBC stuff.)

jdbc7.1-1.2.jar - fetches only the last row (I can tell this by using "limit n" to just select part of the table)
pgjdbc2.jar     - just repeats one row, over and over
jdbc7.0-1.2.jar - works
pgjdbc1.jar     - works

These are all taken from http://jdbc.postgresql.org/download.html.

I'm using:
- the dbtags.jar package from (IIRC) Netbeans 3.1 or so (in other words, not the latest version)
- running in Apache Tomcat 4.0
- on Red Hat 7.2, kernel 2.4.9-21

I'm not sure whether the bug is in Postgres' JDBC driver or in dbtags.jar. I'm not even sure how much dbtags.jar is supported these days (given that there are several SQL taglibs being developed at http://jakarta.apache.org/taglibs/), so I'm not sure how much of an issue this is...

Also, the link to the "Postgres BugTool" on www.us.postgresql.org is broken. (In the meantime, if you could just link to this e-mail address, or to a page with this e-mail address, that would be a good temporary thing...)

Thanks,
Josh
[EMAIL PROTECTED]
http://www.cis.upenn.edu/~jburdick
[BUGS] Handling of \ in array data display
Issue: \ is escaped oddly when displaying the contents of array fields.
Severity: annoyance
Affects: 8.1.3, 8.1.4, 8.0.3, possibly others.

Demonstration of bug:

When saving \ escaped values into text array fields, the \ is escaped when displaying the contents of the array, leading to an appearance that the correct data was not saved:

scratch=# create table test_arr ( tarr text[] );
CREATE TABLE
scratch=# insert into test_arr values ( array['x\y','x\\y','x y'] );
INSERT 5695623 1
scratch=# select * from test_arr;
       tarr
-------------------
 {xy,"x\\y","x y"}
(1 row)

scratch=# select tarr[1] from test_arr;
 tarr
------
 xy
(1 row)

scratch=# select tarr[2] from test_arr;
 tarr
------
 x\y
(1 row)

--
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco
Re: [BUGS] Handling of \ in array data display
Alvaro,

> tarr[1] does not have a \, because it was eaten by the parser (so \y is
> the same as a plain y). tarr[2] does have a single backslash, which for
> output purposes is shown escaped with another backslash when part of an
> array, but unescaped when not. I'm not sure if this qualifies as a bug
> or not.

I think it does. It's not consistent with how text values not in an array are displayed. The whole reason I reported it was because of a user thinking their data wasn't being saved correctly, so it's causing confusion.

FWIW, I personally think we should be using the ARRAY[] format for display anyway, but that would break some backwards compatibility ...

--
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco
Re: [BUGS] Handling of \ in array data display
Tom,

> This is documented behavior for arrays:
> http://developer.postgresql.org/docs/postgres/arrays.html#AEN5764
> and has been that way for a very long time. If we change it we will
> break every array-using application on the planet, because it will
> in fact be impossible to parse an array value unambiguously.

OK, so "yes, it's inconsistent, but we don't want to break backwards compatibility." I can buy that ...

--
--Josh

Josh Berkus
PostgreSQL @ Sun
San Francisco
[BUGS] BUG #2599: AM/PM doesn't work in to_timestamp in the middle of a string
The following bug has been logged online:

Bug reference:      2599
Logged by:          Josh Tolley
Email address:      [EMAIL PROTECTED]
PostgreSQL version: 8.1.4
Operating system:   Fedora Core 5
Description:        AM/PM doesn't work in to_timestamp in the middle of a string
Details:

eggyknap=# select to_timestamp('30 Aug 06:01:03.223 PM 2006', 'DD Mon HH:MI:SS.MS AM '),
                  to_timestamp('30 Aug 2006 06:01:03.223 PM', 'DD Mon HH:MI:SS.MS AM');
        to_timestamp        |        to_timestamp
----------------------------+-----------------------------
 0001-08-30 18:01:03-08 BC  | 2006-08-30 18:01:03.223-06

This appears to happen when AM/PM isn't the last element in the string.
Re: [BUGS] BUG #3583: IMPORT/EXPORT into PostgreSQL
On 8/28/07, Vova <[EMAIL PROTECTED]> wrote:
> The following bug has been logged online:
>
> Bug reference:      3583
> Logged by:          Vova
> Email address:      [EMAIL PROTECTED]
> PostgreSQL version: 8.0.7
> Operating system:   Linux Gentoo
> Description:        IMPORT/EXPORT into PostgreSQL
> Details:
>
> How do I carry out the import of information into PostgreSQL?

This isn't a bug, and should probably be asked on -novice or -general instead. That said, look at the COPY command:

http://www.postgresql.org/docs/current/static/sql-copy.html

- Josh
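A minimal sketch of COPY usage (hypothetical table and file names; server-side paths require appropriate permissions):

-- export a table to a file on the server
COPY mytable TO '/tmp/mytable.txt';

-- import it into a table with a matching column layout
COPY mytable FROM '/tmp/mytable.txt';

-- from psql, \copy does the same thing with a client-side file:
--   \copy mytable from 'mytable.txt'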
Re: [BUGS] to_date gives odd results
> On Thursday 30 August 2007 21:15, Tom Lane wrote:
>> to_date and friends are fairly awful in terms of not throwing errors
>> when the input doesn't really match the format. I think what you
>> shoulda got here is a bad-input error. However, somebody's going to
>> have to do a major rewrite of formatting.c to make it much better...

Any votes for making that formatting.c rewrite a TODO item?

-eggyknap
Re: [BUGS] to_date gives odd results
On 8/31/07, Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> Josh Tolley wrote:
>>> On Thursday 30 August 2007 21:15, Tom Lane wrote:
>>>> to_date and friends are fairly awful in terms of not throwing errors
>>>> when the input doesn't really match the format. I think what you
>>>> shoulda got here is a bad-input error. However, somebody's going to
>>>> have to do a major rewrite of formatting.c to make it much better...
>>
>> Any votes for making that formatting.c rewrite a TODO item?
>
> Well, there is already a to_char patch scheduled for 8.4. If you want
> to improve the to_date code, you are invited to do so -- no need to have
> a TODO item about it.

I figured as much.

> If what you expect is that having a TODO item will mean that somebody
> else will start working on it, I think you'll be disappointed :-)

I realize this chance is slim. The likelihood that I could get to it and make something useful of it seemed even more slim :)

> (OTOH maybe we should add it and put the % mark in it.)

- Josh
[BUGS] Error
The bug message was "cannot edit file, permissions not set" or something along those lines.
Re: [BUGS] help me please
On Dec 28, 2007 7:01 AM, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> hello
>
> Help me please...
> I have two "postgres" users. How do I delete the first one?
>
> # select * from pg_shadow where usesysid=1;
>  usename  | usesysid | usecreatedb | usesuper | usecatupd |                passwd                | valuntil | useconfig
> ----------+----------+-------------+----------+-----------+--------------------------------------+----------+-----------
>  postgres |        1 | t           | t        | t         |                                      |          |
>  postgres |        1 | t           | t        | t         | md53532747417351142d5270721fcf740ed5 | infinity |
> (2 rows)
>
> # select * from pg_user where usesysid=1;
>  usename  | usesysid | usecreatedb | usesuper | usecatupd | passwd | valuntil | useconfig
> ----------+----------+-------------+----------+-----------+--------+----------+-----------
>  postgres |        1 | t           | t        | t         |        |          |
>  postgres |        1 | t           | t        | t         |        | infinity |
> (2 rows)
>
> I tried
>   delete from pg_shadow where passwd is null;
> but it's not working.
>
> Help me please
> oleg

pg_shadow is a view based on pg_authid. Try deleting it from pg_authid. Note that pg_authid is defined with OIDs, which might also be useful for you.

- Josh / eggyknap
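A sketch of what that could look like (the ctid value below is hypothetical -- check it with the SELECT first; deleting directly from system catalogs is risky and should be done with care):

-- find the duplicate entries and something that distinguishes them
SELECT oid, ctid, rolname, rolpassword FROM pg_authid WHERE rolname = 'postgres';

-- remove the unwanted row by its physical row id (value shown is illustrative)
DELETE FROM pg_authid WHERE rolname = 'postgres' AND ctid = '(0,1)';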
[BUGS] Bug with Daylight Savings Time & Interval
Folks,

Found this interesting bug:

jwnet=> select version();
                            version
---------------------------------------------------------------
 PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.95.3
(1 row)

jwnet=> select ('2001-07-31 10:00:00 PST'::TIMESTAMP) + ('248 days'::INTERVAL);
        ?column?
------------------------
 2002-04-05 10:00:00-08
(1 row)

jwnet=> select ('2001-07-31 10:00:00 PST'::TIMESTAMP) + ('249 days'::INTERVAL);
        ?column?
------------------------
 2002-04-06 10:00:00-08
(1 row)

jwnet=> select ('2001-07-31 10:00:00 PST'::TIMESTAMP) + ('250 days'::INTERVAL);
        ?column?
------------------------
 2002-04-07 11:00:00-07

jwnet=> select ('2001-04-01 10:00:00 PST'::TIMESTAMP) + ('100 days'::INTERVAL);
        ?column?
------------------------
 2001-07-10 11:00:00-07

It appears that spring Daylight Savings Time causes PostgreSQL to change my time zone. Only the spring, mind you, and not the fall. This is potentially catastrophic for the application I'm developing; what can I do to see that it's fixed? Or am I misunderstanding the behavior here?

--
-Josh Berkus

P.S. I'm posting this here instead of the online bug form because I know that Bruce is on vacation.
Re: [BUGS] [SQL] Bug with Daylight Savings Time & Interval
Tom and Tom,

> This isn't a bug per the existing definition of INTERVAL. '250 days' is
> defined as '250*24 hours', exactly, no more no less. When you move
> across a DST boundary you get behavior like the above.

> I've opined several times that interval should account for three
> separate units: months, days, and seconds. But our time-meister
> Tom Lockhart doesn't seem to have taken any interest in the idea.

I beg to differ with Tom L. Even if there were justification for the addition of an hour to a calculation involving only days, which there is not, there are two bugs with the existing behavior:

1. You do not lose an hour with the end of DST, you just gain one with the beginning of it (until you wrap around a whole year, which is really confusing), which is inconsistent;

2. Even if you justify gaining or losing an hour through DST in a '+ days' operation, changing the TIMEZONE is a bizarre and confusing way to do it. I don't fly to Colorado on April 7th!

While this needs to be fixed eventually, I need a quick workaround; is there a way to "turn off" DST behavior in PostgreSQL?

Further, it seems that the whole "interval" section of Postgres, possibly one of our greatest strengths as a database, has languished in the realm of inconsistent behavior due to lack of interest. Is there anything I can do without learning C?

-Josh Berkus
Re: [BUGS] [SQL] Bug with Daylight Savings Time & Interval
Tom L,

Thanks for answering my pushy opinions!

> Not actually true (probably due to a cut and paste error in your test
> suite). Your example specified '2001-07-31 10:00:00 PST' which is
> actually within the PDT time of year. PostgreSQL took you at your word
> on this one and evaluated the time as though it were in PST. So you
> didn't see the 1 hour offset when adding days to another time zone.

Aha. I understand. That's consistent, even if it doesn't work the way I want it (life is difficult that way). However, I would assert that it is not at all intuitive, and we need to have it documented somewhere.

>> 2. Even if you justify gaining or losing an hour through DST in a
>> '+ days' operation, changing the TIMEZONE is a bizarre and confusing
>> way to do it. I don't fly to Colorado on April 7th!
>
> I'm not sure what you mean here.

My confusion, because of the default way of displaying time zones. It looked to me like Postgres was changing to CST on April 7th. Once again, consistent but not intuitive.

>> While this needs to be fixed eventually, I need a quick workaround; is
>> there a way to "turn off" DST behavior in PostgreSQL?
>
> Consider using TIMESTAMP WITHOUT TIME ZONE.

Damn. Doesn't work for me either. I do need to cast stuff into several time zones, as this is a New York/San Francisco calendar. Isn't there a version of GMT -8:00 I can use that doesn't involve DST? What does PostgreSQL do for Arizona (Arizona does not have DST)?

> You can continue to explore the current behavior and to form an opinion
> on what correct behavior should be.

Oliver and I are having a lively discussion regarding interval math on pgsql-sql. I would love to have you enter the discussion.

> I've resisted adding fields to the internal interval type for performance
> and design reasons.

I don't blame you. Data subtypes is a huge can o' crawdads.

> As previously mentioned, blind verbatim compliance with SQL9x may suggest
> breaking our INTERVAL type into a bunch of pieces corresponding to the
> different interval ranges specified in the standard. However, the SQL
> standard is choosing to cover a small subset of common usage to avoid
> dealing with the implementation complexities and usage patterns which are
> uncovered when trying to do more.

OK, so how should things work, then? While I agree that SQL92's spec is awkward and limited, we'd need a pretty good argument for breaking standards. Oliver is already wearing me down in this regard.

-Josh Berkus
[BUGS] referential integrity through inherited tables
This is probably more like an "undesired feature" than a software bug, but it was behaviour that I did not expect. Thanks!

Your name          : Josh Goldberg
Your email address : [EMAIL PROTECTED]

System Configuration
--------------------
Architecture (example: Intel Pentium)          : Intel P3
Operating System (example: Linux 2.0.26 ELF)   : Linux 2.2
PostgreSQL version (example: PostgreSQL-7.2.1) : PostgreSQL-7.2.1
Compiler used (example: gcc 2.95.2)            :

Please enter a FULL description of your problem:

Foreign key checks fail when the referenced row is in an inherited table. In the reproduction procedure below, "select * from foo" returns the record from table bar; however, a key referencing foo will fail when you want it to reference the record that was inserted into bar, even though that record appears as part of the foo table via inheritance.

Please describe a way to repeat the problem. Please try to provide a concise reproducible example, if at all possible:

CREATE TABLE foo(id1 int4, id2 int4, id3 int4);
CREATE TABLE bar(id4 int4) INHERITS (foo);
CREATE TABLE baz(id1 int4, troz int4);
ALTER TABLE baz ADD CONSTRAINT bazfk FOREIGN KEY (id1) REFERENCES foo(id1) MATCH FULL;
INSERT INTO bar VALUES (1,2,3,4);
INSERT INTO baz VALUES (1,5);
ERROR: bazfk referential integrity violation - key referenced from baz not found in foo

If you know how this problem might be fixed, list the solution below:

Add a way to denote that a constraint should check children tables in addition to the one named in the constraint, perhaps something like

ALTER TABLE baz ADD CONSTRAINT bazfk FOREIGN KEY (id1) REFERENCES foo*(id1) MATCH FULL;

or have it check children tables by default and do something like

ALTER TABLE baz ADD CONSTRAINT bazfk FOREIGN KEY (id1) REFERENCES ONLY foo(id1) MATCH FULL;

to only check foo -- similar syntax to SELECT statements.
[BUGS] Pg_dump Backup Drops a Few Things
Folks,

I am not subscribed to -bugs. Please e-mail me directly.

For some time on 7.2.1 I have suspected that one or two items from large, complex databases were not getting backed up. However, I could not say conclusively that this was the case, as it was always possible that I had missed something somewhere.

Today I just got done with a very painful restore. pg_dump had failed to back up a small view on which 6 other views and functions depended, and I had to spend several hours editing the 99 MB backup file by hand.

I'd like to work with someone on pinpointing the problem, as obviously this could be a critical issue for production databases. However, I'm not sure how to submit the files, as they are very large (> 90 MB) -- and how can I send the original database other than as a backup file?

--
-Josh Berkus
Aglio Database Solutions
San Francisco
Re: [BUGS] Pg_dump Backup Drops a Few Things
Tom,

> Do you still have the original database available? The obvious route to
> finding the problem is to watch pg_dump in action and see why it misses
> that view. How do you feel about letting someone else have access to
> your system to do this? (Or get out a debugger and do it yourself...)

OK, more specifics. The problem only seems to happen with views and functions that are part of unresolved dependencies. E.g., here's how I produced the problem:

1. Edited the view lock_users, on which 6 other views depended.
2. This broke the 6 other views.
3. Tried to re-load the other views and had problems finding them all. Decided to dump and restore to resolve the dependencies.
4. Did a text pg_dump (not binary).
5. Dropped the database and reloaded. Discovered that lock_users was not loaded; in fact, it wasn't part of the pg_dump file at all.
6. Hand-edited the pg_dump file (yay Joe text editor!) and re-inserted the lock_users view after its dependencies, but before the other views.
7. Re-loaded the database. After a couple of tries, it worked.

As the broken dependency problem no longer exists, further pg_dumps now back up lock_users correctly.

At a blind guess, I would hypothesize that the problem occurs because pg_dump is trying to back up objects in correct dependency order, but because of the broken links gets confused and drops the object entirely. However, this becomes a circular problem for Postgres db developers, as drop and restore is one of the primary ways of fixing broken dependency chains.

I will see if I can reproduce this on a sample database. lock_users is a view with 6 view dependencies, and itself depends on 2 tables and a custom function. So I can see how this would be a destruction test.

I do have the PostgreSQL log files for the last few days, but my mastery of command-line text parsing is not sufficient to find the relevant section of the log.

--
-Josh Berkus
Aglio Database Solutions
San Francisco
Re: [BUGS] Pg_dump Backup Drops a Few Things
Phillip,

> If Tom's not already on top of this, I'd be happy to help. There seem to be
> a few possibilities:

See my last e-mail. I'm not on -bugs, so my responses are delayed by the moderation process.

> We can remove (3) by just doing a schema-only dump of the original DB. If
> this works, we know it's data related.

Ugh. It's not data related ... the missing object is a view.

--
-Josh Berkus
Aglio Database Solutions
San Francisco
[BUGS] Bug in Function-Transactions?
Folks,

I just encountered a very interesting bug in ver 7.2.1.

As an experiment, I tried running a VACUUM ANALYZE on a table inside a function, after the function had made a large number of updates to that table. I assumed that this wouldn't work, but I thought I'd give it a try.

It did not work. What it did was interesting: Postgres terminated my back-end connection to the server when it reached the VACUUM statement.

Next, I reconnected. I was quite surprised to discover that Postgres had *not* rolled back the changes made by the function before it crashed.

I'm testing to see if I can reproduce this issue on 7.3b1. I'll e-mail you with a test database if I can.

-Josh Berkus
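A minimal sketch of the pattern described above (hypothetical table and function names, 7.2-style quoted function body):

CREATE FUNCTION update_and_vacuum() RETURNS integer AS '
BEGIN
    -- a large number of updates ...
    UPDATE big_table SET flag = 1;
    -- ... followed by the statement that terminated the backend connection
    VACUUM ANALYZE big_table;
    RETURN 1;
END;
' LANGUAGE 'plpgsql';

SELECT update_and_vacuum();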
[BUGS] PAM Authentication Bug
I found that PAM authentication in 7.2.3 doesn't appear to work properly. It appeared to me that the server wasn't waiting for the PAM conversation to complete before it rejected the request, so I did a little rewrite on the auth.c file to sort of force things to prompt for a password and then shove it into PAM. The one thing I see still is that it appears that you still have to create user accounts in the database for things to work. This is kind of disappointing as I'd like to not have to repeat creating users in PAM if I've already got them defined, say in an LDAP database somewhere, but at least I have PAM authentication working with passwords. Attached is the patch if the developers want to look at it, clean it up, and stick it into the next version. -jth *** src/backend/libpq/auth.cMon Feb 25 15:07:33 2002 --- auth.c Mon Oct 28 20:34:06 2002 *** *** 44,65 char *pg_krb_server_keyfile; #ifdef USE_PAM ! #include ! ! #define PGSQL_PAM_SERVICE "postgresql"/* Service name passed to PAM */ ! ! static intCheckPAMAuth(Port *port, char *user, char *password); ! static int pam_passwd_conv_proc(int num_msg, const struct pam_message ** msg, !struct pam_response ** resp, void *appdata_ptr); ! ! static struct pam_conv pam_passw_conv = { ! &pam_passwd_conv_proc, ! NULL ! }; ! ! static char *pam_passwd = NULL; /* Workaround for Solaris 2.6 brokenness */ ! static Port *pam_port_cludge; /* Workaround for passing "Port *port" !* into pam_passwd_conv_proc */ #endif /* USE_PAM */ #ifdef KRB4 --- 44,58 char *pg_krb_server_keyfile; #ifdef USE_PAM !#include ! !/* Constants */ !#define PGSQL_PAM_SERVICE "postgresql" /* Service name passed to PAM */ ! !/* PAM functions */ !static int doPAMAuth(Port *port, char *user, char *password); !static int doPAMConversation(int num_msg, const struct pam_message **msg, ! struct pam_response **resp, void *appdata_ptr); #endif /* USE_PAM */ #ifdef KRB4 *** *** 583,590 #ifdef USE_PAM case uaPAM: ! pam_port_cludge = port; ! status = CheckPAMAuth(port, port->user, ""); break; #endif /* USE_PAM */ --- 576,583 #ifdef USE_PAM case uaPAM: ! sendAuthRequest(port, AUTH_REQ_PASSWORD); ! status = recv_and_check_password_packet(port); break; #endif /* USE_PAM */ *** *** 625,823 #ifdef USE_PAM /* ! * PAM conversation function */ ! ! static int ! pam_passwd_conv_proc(int num_msg, const struct pam_message ** msg, struct pam_response ** resp, void *appdata_ptr) { ! StringInfoData buf; ! int32 len; ! ! if (num_msg != 1 || msg[0]->msg_style != PAM_PROMPT_ECHO_OFF) ! { ! switch (msg[0]->msg_style) ! { ! case PAM_ERROR_MSG: ! snprintf(PQerrormsg, PQERRORMSG_LENGTH, !"pam_passwd_conv_proc: Error from underlying PAM layer: '%s'\n", msg[0]->msg); ! fputs(PQerrormsg, stderr); ! pqdebug("%s", PQerrormsg); ! return PAM_CONV_ERR; ! default: ! snprintf(PQerrormsg, PQERRORMSG_LENGTH, !"pam_passwd_conv_proc: Unexpected PAM conversation %d/'%s'\n", !msg[0]->msg_style, msg[0]->msg); ! fputs(PQerrormsg, stderr); ! pqdebug("%s", PQerrormsg); ! return PAM_CONV_ERR; ! } ! } ! ! if (!appdata_ptr) ! { ! /* !* Workaround for Solaris 2.6 where the PAM library is broken and !* does not pass appdata_ptr to the conversation routine !*/ ! appdata_ptr = pam_passwd; ! } ! ! /* !* Password wasn't passed to PAM the first time around - let's go ask !* the client to send a password, which we then stuff into PAM. !*/ ! if (strlen(appdata_ptr) == 0) ! { ! sendAuthRequest(pam_port_cludge, AUTH_REQ_PASSWORD); ! if (pq_eof() == EOF || pq_getint(&len, 4) == EOF) ! { ! 
return PAM_CONV_ERR;/* client didn't want to send password */ ! } ! ! initStringInfo(&buf); ! pq_getstr(&buf); ! if (DebugLvl > 5) ! fprintf(stderr, "received PAM packet with len=%d, pw=%s\n", !
[BUGS] Cannot assign ROWTYPE, RECORD variables in PL/pgSQL
Bug: Cannot assign ROWTYPE, RECORD variables in PL/pgSQL
Affects: PL/pgSQL
Severity: Annoyance
Priority: Minor Enhancement
Confirmed On: 7.3beta2, Linux

Given the following function:

===
create or replace function rowtype_test () returns text as '
declare
    this_row candidates%rowtype;
    that_row candidates%rowtype;
begin
    select * into this_row from candidates;
    that_row := this_row;
    return that_row.first_name;
end;' language 'plpgsql';
===

... it will error out at the assignment "that_row := this_row". For that matter, any attempt to assign the contents of two ROWTYPE or RECORD variables directly to each other will error out:

that_record := this_record;
SELECT this_row INTO that_row;
SELECT * INTO that_row FROM this_row;

The only way to populate that_row with a copy of this_row is by re-querying the source table. While a relatively easy workaround, this behaviour is annoying and inconsistent. It would be nice to fix it in 7.3.1 or 7.4.

Thanks for your attention.

--
-Josh Berkus
Aglio Database Solutions
San Francisco
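For reference, a sketch of the re-querying workaround mentioned above, in the context of the same function body:

-- instead of:  that_row := this_row;
-- simply repeat the original query into the second variable:
select * into that_row from candidates;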
Re: [BUGS] Cannot assign ROWTYPE, RECORD variables in PL/pgSQL
Neil,

> Unless anyone sees a problem with this, I'll work on this. I
> definitely think it's inappropriate for 7.3.1 though.

Thank you!

-Josh Berkus
Re: [BUGS] PPTP + Cisco - is it possible for RADIUS server to
Ruslan,

I'm afraid that you sent your message to the PostgreSQL Bugs mailing list, where we cannot help you with Cisco problems. Please try an appropriate Cisco mailing list.

-Josh Berkus
Re: [BUGS] Bug #871: FW: How to solve the problem
Derek,

> Long Description
> NOTICE: Message from PostgreSQL backend:
>   The Postmaster has informed me that some other backend
>   died abnormally and possibly corrupted shared memory.
>   I have rolled back the current transaction and am
>   going to terminate your database system connection and exit.
>   Please reconnect to the database system and repeat your query.
> server closed the connection unexpectedly
>   This probably means the server terminated abnormally
>   before or while processing the request.

Did you KILL -9 one or more Postgres processes? Then you will get this message, and have to restart the PostgreSQL server to use it again.

IF YOU ARE STILL GETTING THIS ERROR AFTER RESTARTING, then please read the following:

This somewhat deceptive message, in my experience, is usually caused by a database crash which WAL is unable to easily recover from, such as HDD errors corrupting the Postgres files. Please do the following:

WARNING: The below is provided strictly as volunteer peer-to-peer advice. Follow AT YOUR OWN RISK.

1) Shut down PostgreSQL using "pg_ctl -m fast stop".
2) Check your process log to make sure that *all* postgres processes are halted. Give the system some time to shut everything down.
   2a) If you cannot shut down all postgres processes properly, even after 15-20 minutes, try restarting the system.
3) Re-start PostgreSQL. Give it some time to attempt to restore itself; 20-30 minutes may be necessary with a large database and a slow server.
4) Connect to PostgreSQL. If you get that message again, then you have some kind of serious hardware or OS related problem that Postgres can't deal with. If you connect normally, then everything is OK; skip the rest of the suggestions.
5) Shut down Postgres again.
6) Use diagnostic tools to examine your system for:
   a) hard drive/controller errors;
   b) bad RAM;
   c) OS errors;
   d) other hardware issues.
7) Correct any problems you find through (6).
8) Re-start Postgres and restore your database from backup (you do have a backup, yes?).
9) Start using PostgreSQL again.

Good luck!

--
-Josh Berkus
Aglio Database Solutions
San Francisco
[BUGS] WAL Recovery Bug in 7.2.3
Affects: 7.2.3 (possibly down to 7.1.0)
Frequency: Very Rare
Effect When Occurring: Database corruption
Difficulty of Fix: Trivial
Certainty of Diagnosis: about 50% according to Tom

This bug was analyzed by Tom Lane; I'm just writing it up.

In 7.2.3 (and possibly in earlier versions) these two lines in xlog.c are out of order:

    FlushBufferPool();
    CheckPointCLOG();

Per Tom's analysis:

"7.2.* does checkpoint operations in the wrong order: CreateCheckPoint does

    FlushBufferPool();
    CheckPointCLOG();
    ... create and write checkpoint WAL record ...

The reason this is the wrong order is that CheckPointCLOG() only issues write()s of dirty pg_clog pages; it does not fsync them. Thus, it is possible that the checkpoint WAL record will be flushed to disk while the clog page writes are still hanging about in kernel disk cache. If there is a system crash before the kernel gets around to sync'ing the dirty clog pages to disk, then we lose --- on restart, the WAL logic will only replay WAL entries after the latest checkpoint, and so any transaction commits occurring before the checkpoint would fail to be re-marked in pg_clog."

As an error scenario, this seems rather farfetched; Postgres would have to be killed, a second time, while in recovery mode at a moment between FlushBufferPool() and CheckPointCLOG(). A remote enough possibility to ignore. Except that it seems to have happened twice, to two different users.

The scenario under which this bug becomes critical is this:

1) In the middle of a large UPDATE statement, the Postgres server loses power from a general power outage or local building short.
2) This server is not buffered by a UPS.
3) Due to work on the power system or weather damage, power comes back on, then off after a few minutes, cycling off-and-on 4-5 times (this is not farfetched; during the California "power crisis" I saw it happen several times).
4) This has the possible effect of repeatedly downing Postgres while it is in recovery mode.
5) Sooner or later, the up-down effect "gets lucky" and Postgres goes down while FlushBufferCache() is finishing up.
6) The user ends up with two versions of one or more of their records marked as valid by Postgres.

Per Tom's analysis of one such problem:

"Well, here's what I've found so far. The two tuples in question have header data like so (as printed by pg_filedump):

Item 28 -- Length: 248  Offset: 7944 (0x1f08)  Flags: USED
  OID: 487894  CID: min(0) max(0)  XID: min(9776912) max(17920315)
  Block Id: 4664  linp Index: 1  Attributes: 31  Size: 36
  infomask: 0x0903 (HASNULL|HASVARLENA|XMIN_COMMITTED|XMAX_INVALID)
  t_bits: [0]: 0xff [1]: 0xfe [2]: 0xdf [3]: 0x7d

Item 2 -- Length: 248  Offset: 7944 (0x1f08)  Flags: USED
  OID: 487894  CID: min(0) max(0)  XID: min(9777615) max(10180711)
  Block Id: 4666  linp Index: 1  Attributes: 31  Size: 36
  infomask: 0x2903 (HASNULL|HASVARLENA|XMIN_COMMITTED|XMAX_INVALID|UPDATED)
  t_bits: [0]: 0xff [1]: 0xfe [2]: 0xdf [3]: 0x7d

What appears to have happened is this: transaction 9776912 created the row initially (the first of the two items is evidently the first incarnation of the row, since it does not have HEAP_UPDATED set). A little while later, transaction 9777615 updated the row, creating the second tuple. Our problem is that both tuples appear to be committed good --- both have XMIN_COMMITTED set.

Digging into the pg_clog data, I find that 9776912 is shown as "committed", as expected. But 9777615 is shown as "in progress" --- the clog entry has not been marked as either committed or aborted!"
Since this bug has been fixed in 7.3.1, it's not critical to release a patch. HOWEVER, given the triviality of the fix ... simply swapping those two lines in xlog.c ... does everyone think it would be a good idea to post a notice on the lists of the issue and the fix? While it is easy enough to tell users, "Upgrade, or get a UPS," this is not practical for everyone. And is there any potential issue with swapping those two lines? -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[BUGS] No migration path for MONEY
Folks, Bug reported off IRC: the MONEY type cannot be cast to any other type, preventing migration from this deprecated data type. Affects: 7.2.3, 7.3.1 Frequency: 100% Reproducible Effect When Occurring: Unable to convert, query data Difficulty of Fix: Easy, probably Certainty of Diagnosis: 100% On both 7.2.3 and 7.3.1 all of the following statements will fail: select cast('40.00'::MONEY as NUMERIC); select cast('40.00'::MONEY as DOUBLE); select cast('40.00'::MONEY as VARCHAR); select "numeric"('40.00'::MONEY); select to_char('40.00'::MONEY, '999.'); This means that someone who has inherited or upgraded a 6.5 database with MONEY columns has no way to migrate them to NUMERIC columns other than an external language script or dump and reload from COPY file. I propose that we need to restore the CAST(MONEY AS NUMERIC) function so that users can migrate old databases to the new data type. In later versions of postgres, I suggest that MONEY be abandoned as a true data type and instead become a DOMAIN of NUMERIC for those converting. -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
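To illustrate the migration this proposal would enable, here is a minimal sketch of what a conversion could look like once CAST(MONEY AS NUMERIC) is restored. The table and column names ("accounts", "balance") are hypothetical, DROP COLUMN requires 7.3 or later, and the UPDATE below is exactly the statement that fails today.

-- hypothetical migration, contingent on the proposed cast being restored
ALTER TABLE accounts ADD COLUMN balance_numeric NUMERIC(12,2);
UPDATE accounts SET balance_numeric = CAST(balance AS NUMERIC);
ALTER TABLE accounts DROP COLUMN balance;                        -- 7.3 or later only
ALTER TABLE accounts RENAME COLUMN balance_numeric TO balance;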
Re: [BUGS] No migration path for MONEY
Bruce, > They are probably better off just changing the column data type, _and_ > we need someone to get MONEY working as an extented NUMERIC type. Apparently D'Arcy McCain is going to do this. Go, D'arcy! -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [BUGS] Problem when adding an existing primary key
Ricardo, > When I try to insert via application a registry that already exists > Postgres shows an error "Unable to insert duplicate primary key on index > 'index' ". I think it's an error, because I would treat this error, just > like others. You're getting that message because you are trying to insert a value into the PK column which is a duplicate of a value already present. This is not permitted ... definitionally, primary keys must be unique. In other words, the above is a valid error message being sent to you because you violated a table constraint. If there is more to the story than that, I suggest that you post your problems to the PGSQL-NOVICE mailing list, where other users will help you fix your queries. For that matter, I believe that there are other Brazilians on some of the other mailing lists (you could try PGSQL-GENERAL as well), which would spare you the pain of translating your questions. -- Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
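A minimal example of the constraint at work, using a made-up table name; the second INSERT fails for the same reason described above:

CREATE TABLE registry (id INTEGER PRIMARY KEY, note TEXT);
INSERT INTO registry VALUES (1, 'first row');
INSERT INTO registry VALUES (1, 'second row');   -- rejected: duplicate value in the primary key index
INSERT INTO registry VALUES (2, 'second row');   -- accepted: the key value is unique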
[BUGS] Function will not back up on 7.2.3
Affects: 7.1.3, 7.2.1 to 7.2.3, not tested on 7.3.x or 7.4dev Frequency: 100% Reproducible Effect When Occurring: Object Missing from Backup Difficulty of Fix: Unknown For some time, I've been noticing that one of my database projects fails to back up a few functions every time I run pg_dump. I've seen this since 7.1.3. Finally, I have a sample of the database that invariably refuses to back up one function. Given that the database in question uses functions that call other functions that call views, it's probably some sort of dependency issue. What can I ship people so that we can resolve this? -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [BUGS] Function will not back up on 7.2.3
Tom, > > For some time, I've been noticing that one of my database projects fails to > > back up a few functions every time I run pg_dump.I've seen this since > > 7.1.3. Finally, I have a sample of the database that invariably refuses to > > back up one function. > > What do you mean by "refuses"? The function is silently dropped from the pg_dump file. This happens in both binary and sql-script modes, and I've tracked the log to see if pg_dump is reporting an error to postmaster. No luck. But I'll try later to see if 7.3.2 fixes this or 7.4 devel. -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [BUGS] Function will not back up on 7.2.3
Tom, > Is it possible that the function's owner has been dropped from pg_shadow? No, the function owner is the database owner ... and also the same user calling pg_dump. > How about dropped return type, etc? pg_dump used to use inner joins to > collect info about database objects, meaning it would silently miss > objects that were missing expected collateral objects. Return type is TEXT, so I think that's OK too. However, this database does have some pretty complex dependencies. I just tested. This is still a bug in 7.3.0. I will download and test 7.3.2 now. -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [BUGS] Function will not back up on 7.2.3
Folks, This bug in 7.2.3 and 7.3.0 seems to have been fixed as a side effect of some of the other fixes in 7.2.4 and 7.3.2. We're not sure exactly *how*, but the bug occurs on 7.2.3 and not on 7.2.4. Did anybody do anything to patch dependency tracking 7.2.3 ==> 7.2.4? -- -Josh Berkus __AGLIO DATABASE SOLUTIONS___ Josh Berkus Complete information technology [EMAIL PROTECTED] and data management solutions (415) 565-7293 for law firms, small businesses fax 621-2533 and non-profit organizations. San Francisco ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [SQL] [BUGS] 7.3 GROUP BY differs from 7.2
Guys, SQL spec aside, thinking about this from a strictly implementation/user point of view: (an keeping in mind that I think it's very important that we work out the spec-correct behaviour for 7.4 and/or 7.3.3) The particular case that Dan has raised is an issue for four reasons: 1) It looks to a human like it *should* work, and I think given a long weekend of relational calculus someone (not me) could define the cases where it is OK as opposed to the cases (probably the majority) where it is not. 2) That syntax *did* work in previous versions of PostgreSQL. 3) That syntax will be accepted by some other SQL databases. 4) The error message is rather confusing and could cause a developer to spend an hour or more hunting for the wrong error. I propose that, should we decide not to change the behaviour of the parser, that we do the following: 1) add the following to the FAQ or elsewhere: Q. I just got the message "ERROR: Attribute unnamed_join.[column name] must be GROUPed or used in an aggregate function" and my GROUP BY query won't run, even though all of the columns are in the GROUP BY clause. This query may have worked in PostgreSQL 7.2, or on another SQL database. What do I do? A. You are probably qualifying a column name differently in the SELECT clause than in the GROUP BY clause, causing the parser to be confused about what you really mean. For example, you may be referring to a column by its simple column name ("element_id") in the SELECT clause, and by its table-qualified name ("table1.element_id") in the GROUP BY clause, or you may be using an alias in one place but not the other. Please make sure that all columns in the SELECT clause match *exactly* the columns in the GROUP BY clause. 2) That we add a warning in the 7.3 release notes about the breaking of backward compatibility. Thoughts? -- Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
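A short, hedged illustration of the mismatch the proposed FAQ entry describes. The table names ("orders", "items") are made up, and the exact behavior depends on the 7.3 parser, but the pattern is the one to look for: the column is qualified one way in the SELECT list and another way in the GROUP BY clause.

-- may be rejected in 7.3 with the "unnamed_join" error quoted above:
SELECT element_id, count(*)
FROM orders JOIN items USING (element_id)
GROUP BY orders.element_id;

-- qualifying the column identically in both clauses avoids the ambiguity:
SELECT orders.element_id, count(*)
FROM orders JOIN items USING (element_id)
GROUP BY orders.element_id;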
[BUGS] Aliased SubSelect in HAVING clause bug -- in progress?
Folks, I don't think I'm the first to report this, but: SELECT a.id, b.type, max(b.number), (SELECT count(*) from c where c.b_type = b.type) as count_c FROM a, b WHERE a.id = b.a_id GROUP BY a.id, b.type HAVING count_c > 2; Will get a: ERROR: Attribute "count_c" not found. It seems that subselects aliased in the SELECT clause of a GROUP BY query cannot be referenced in the HAVING or ORDER BY clauses of any query. I'd guess that this is being worked on for 7.4/8.0? Thanks! -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [BUGS] Aliased SubSelect in HAVING clause bug -- in progress?
Tom, > No, because it's not a bug. The SELECT list is evaluated after HAVING, > so what you are asking for is an impossibility in the SQL semantic > model. > > (Yeah, I know there's some laxity in GROUP BY ... one of our worse > mistakes IMHO ...) Oh. I see what you mean. Given that I (along with at least a dozen posters to the SQL list) was confused that our HAVING/ORDER BY will accept column aliases but not sub-select aliases, would this be worthy of a FAQ item? -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
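For readers hitting the same limitation, one workaround sketch (using the tables a, b, and c from the earlier post) is to push the grouped query into a derived table and filter on the alias in the outer WHERE clause, which runs after the inner SELECT list has been computed:

SELECT *
FROM (
    SELECT a.id, b.type, max(b.number) AS max_number,
           (SELECT count(*) FROM c WHERE c.b_type = b.type) AS count_c
    FROM a, b
    WHERE a.id = b.a_id
    GROUP BY a.id, b.type
) AS grouped
WHERE count_c > 2;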
[BUGS] Vacuum going -D; crash or just impatience?
Folks, I've a 7.2.4 report-generation database that has been growing for some time, resulting in the nightly VACUUM FULL ANALYZE taking longer and longer. (most of the data is copied nightly from other systems, so use of FSM is not very effective). The problem is that the nightly admin scripts are programmed to check for a locked up nightly maintenance, and to "pg_ctl -m fast stop" it. As the VACUUM FULL now takes over an hour, it falsely detected a lockup and shut down the database in the middle of VACUUM. On restarting the database, I manually VACUUM FULLed it, and the VACUUM would speed through until hitting the spot where the database was shut down, at which point the VACUUM process went "D", and apparently locked up for 10 minutes. No error messages were written to the logs. Unfortunately, I could not give it longer to see if it recovered because this is a production system and I had to get it up and running from backup by 9am. Does this sound like a crash during VACUUM, or just like it needed more time? If anyone wants to analyze, I have a complete backup of the post-problem PGDATA directory. The host system is RH Linux 8.0. -- Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [BUGS] Vacuum going -D; crash or just impatience?
Tom, > I think it just needed more time. VACUUM goes to great lengths to be > crash-safe. I doubt that a "fast stop" could have left the database > in a corrupted state. OK, that's reassuring. I would have liked to give the process more time, but with users waiting ... One thing I am puzzled by is the "D" status on the VACUUM process. That would seem to indicate that VACUUM was waiting for some other process ... but I can't imagine what it could be. Suggestions? > Are you saying that you delete most or all of the rows, then vacuum? > You might consider TRUNCATE if you delete all the rows, or CLUSTER > if you delete most, as a substitute for VACUUM FULL. (You'd still want > to run ANALYZE, after you load fresh data.) VACUUM FULL is really > designed for the case where there are not a huge number of dead rows > --- it gets awfully slow if it has to move lots of data. There are several "holding" tables which are truncated and then re-built. But the tables that are holding up VACUUM are the permanent ones, which are experiencing up to 900,000 updates every night. > Also, I think you have probably not given the FSM enough chance. > If the FSM settings are adequate then it should work fine to do Well, the holdup is the indexes, which are recycling about 500,000 pages and in 7.2.4 FSM doesn't help me. Unfortunately, dropping the indexes during the data transformation isn't really an option, because the indexes support some of the data transform steps. I'm wondering if I need to REINDEX more often; I think I'll try that next. -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
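For reference, the alternatives Tom mentions look roughly like the sketch below. "reports_data" is a made-up table name, the old-style CLUSTER syntax shown is the form available in the 7.2 era, and CLUSTER's exact behavior (what it preserves besides the clustering index) differs between 7.2 and 7.3, so check the release's documentation before relying on it; all of these take exclusive locks and belong in a maintenance window.

TRUNCATE TABLE reports_data;                    -- if every row is reloaded nightly
-- or, if most (but not all) rows are replaced:
CLUSTER reports_data_pkey ON reports_data;      -- rewrites the table in index order, reclaiming dead space
REINDEX TABLE reports_data;                     -- rebuilds bloated indexes (7.2's FSM does not reclaim index pages)
ANALYZE reports_data;                           -- refresh planner statistics after the reload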
[BUGS] Implicit conversion bugaboo in beta2?
Guys, got this problem in 7.4 beta 2: treedemo=# SELECT LPAD ((team_name), (LENGTH(team_name) + (3*(tlevel-2 AS teams_display,team_id, lnode treedemo-# FROM teams treedemo-# WHERE lnode > 0 treedemo-# ORDER BY lnode; ERROR: function lpad(character varying, bigint) does not exist (the above query worked fine in 7.3.4, as I recall) treedemo=# \df lpad List of functions Result data type | Name | Argument data types --+--+- text | lpad | text, integer text | lpad | text, integer, text Now, I've been in favor of reducing problematic implicit conversions. But VARCHAR --> TEXT is one that needs to stay, as there's no possibility of ambiguity, and most users count on doing it transparently. Either that, or we need to build all string function for varchar. -- Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [BUGS] Implicit conversion bugaboo in beta2?
Tom, > regression=# select lpad('xyz'::varchar, 4::int8); > ERROR: Function lpad(character varying, bigint) does not exist > Unable to identify a function that satisfies the given argument > types You may need to add explicit typecasts Oops! Sorry. The problem is there, it's just something different from what I originally thought; the issue is the BIGINT. What confuses me is how the bigint got there; it's from this view: CREATE VIEW vw_teams AS SELECT teams_desc.team_id, team_name, team_code, notes, MIN(teams_tree.treeno) as lnode, MAX(teams_tree.treeno) as rnode, parent.team_id as parent_id, COUNT(*)/2 as tlevel FROM teams_desc JOIN teams_tree USING (team_id) JOIN teams_tree parent ON parent.treeno < teams_tree.treeno JOIN teams_tree parents ON parents.treeno < teams_tree.treeno WHERE parent.treeno = (SELECT max(p1.treeno) FROM teams_tree p1 WHERE p1.treeno < teams_tree.treeno AND EXISTS (select treeno from teams_tree p2 where p2.treeno > teams_tree.treeno and p2.team_id = p1.team_id)) AND EXISTS (select parents2.team_id from teams_tree parents2 where parents2.treeno > teams_tree.treeno AND parents2.team_id = parents.team_id) GROUP BY teams_desc.team_id, team_name, team_code, notes, parent.team_id; In 7.4 beta2, the "tlevel" column comes out as BIGINT, not INT as it certainly did in 7.2.4 and I think it did in 7.3.4. Are we now defaulting COUNT(*) to BIGINT? If so, that's going to be a *huge* backwards compatibility warning for people. -- Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
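A hedged workaround sketch for the original query: since tlevel now comes out of the view as BIGINT, casting the computed length back to INTEGER matches the existing lpad(text, integer) signature. This assumes the query is meant to run against the vw_teams view above (the source of tlevel); the column names are those quoted in the earlier message.

SELECT LPAD(team_name, (LENGTH(team_name) + 3 * (tlevel - 2))::integer) AS teams_display,
       team_id, lnode
FROM vw_teams
WHERE lnode > 0
ORDER BY lnode;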
Re: [BUGS] Implicit conversion bugaboo in beta2?
Tom, > Uh, yeah, but we did in 7.2 and 7.3 as well... count() hasn't been > int4 since 7.1... Hmmm ... can't be 7.2. The query is taken from a production database written for 7.2; I'd have noticed the BIGINT problem before now. Either that, or in 7.2 we were doing implicit conversion from BIGINT to INT for function calls? However, it's certainly possible it happend in 7.3, as this particular app was not ported to 7.3. -- Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [BUGS] Implicit conversion bugaboo in beta2?
Tom, > That could be --- I don't recall exactly when we decided implicit > bigint->int conversion was a bad idea ... Well, it is a bad idea, so I won't argue. Sorry for the false alarm. -- Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[BUGS] Can't access table to describe, drop, or select, but it does exist
Howdy Folks, I was dumping a database to test backups of the tsearch2 objects, and as I glanced through the output of pg_dump -Ft database > DBdata.bak, I found a table that I hadn't seen before in the table list. It's a table that's used in other databases, but not this one. Somehow it had gotten created and populated with 40,000 or so rows of data. No problem, I figured I'd drop it, and that's where things started getting bizarre. The reason I'd never noticed the table is because in doing a \d it doesn't show up in the table list. If I try to do a \d TABLE_NAME, I can use tab to autocomplete the name, but then it says the table doesn't exist. I can't select any of those 40,000 rows while I'm in the database, and I can't drop it, either. The only evidence of the table I can find while I'm actually in the database is by doing a select * from pg_tables, and it shows up as the following: schemaname | tablename | tableowner | hasindexes | hasrules | hastriggers: public | ROOT_U_QUICK_LOOKUP | cp | f | f | f Any \d on the table gives: Did not find any relation named "ROOT_U_QUICK_LOOKUP". and any select/drop on the table gives: ERROR: Relation "root_u_quick_lookup" does not exist So what's the deal? If the pg_dump wasn't giving me so much data I'd be tempted to just delete the row from pg_tables, but the rows are there, and I want to clobber them. Any ideas? Thanks, Josh Eno ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
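The error message is the clue: the unquoted name is being folded to lower case ("root_u_quick_lookup"), while pg_tables shows the table was created with a quoted, upper-case name. Double-quoting the identifier should make it reachable again; a short sketch:

SELECT count(*) FROM "ROOT_U_QUICK_LOOKUP";   -- quoted, so the name is not folded to lower case
DROP TABLE "ROOT_U_QUICK_LOOKUP";             -- the same quoting requirement applies to the DROP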
[BUGS] Bug or Feature? Subquery issue.
Folks, Came across this counter-intuitive behavior on IRC today: test1=> create table vhost(idvhost serial primary key, foo integer); NOTICE: CREATE TABLE will create implicit sequence "vhost_idvhost_seq" for "serial" column "vhost.idvhost" NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "vhost_pkey" for table "vhost" CREATE TABLE test1=> create table domain(iddomain serial primary key, bar integer); NOTICE: CREATE TABLE will create implicit sequence "domain_iddomain_seq" for "serial" column "domain.iddomain" NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "domain_pkey" for table "domain" CREATE TABLE test1=> create table forwarding(idforwarding serial primary key, iddomain integer references domain, baz integer); NOTICE: CREATE TABLE will create implicit sequence "forwarding_idforwarding_seq" for "serial" column "forwarding.idforwarding" NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "forwarding_pkey" for table "forwarding" ERROR: relation "forwarding_idforwarding_seq" already exists test1=> insert into domain test1-> values (100, 5); INSERT 147824 1 test1=> insert into forwarding test1-> values (1, 100, 15); INSERT 147825 1 test1=> insert into vhost values (100, 15); INSERT 147826 1 test1=> --this generates an error test1=> select iddomain from vhost where IDvhost = 100; ERROR: column "iddomain" does not exist test1=> -- This should generate an error, because IDdomain isn't a column of vhost test1=> --instead it deletes a row. test1=> delete from forwarding where iddomain in (select iddomain from vhost where idvhost = 100); DELETE 1 test1=> According to Neil, what's happening is that "select iddomain" in the subquery is grabbing the iddomain column from the forwarding table in the outer query. This is not intutive, for certain; however, what I don't know is if it's SQL Spec. So, my question: does the SQL spec allow for citing the outer query in the SELECT target list of a subquery? If yes, this is a feature, if no, a bug. -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [BUGS] Bug or Feature? Subquery issue.
Tom, > This is absolutely NOT an error. iddomain in the subquery is a > legitimate outer reference, if it's not otherwise known in the subquery. > There is no clause in the SQL spec that says that outer references are > invisible in any context ... even if it means you just deleted your > whole table, which is what I think will happen here... Yup, that's what happened. Wasn't sure. We're OK then. -- Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 8: explain analyze is your friend
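As a defensive habit suggested by this thread, a sketch using the tables from the earlier example: table-qualifying the column inside the subquery turns the silent outer reference into an error, because vhost has no iddomain column.

DELETE FROM forwarding
WHERE iddomain IN (SELECT vhost.iddomain FROM vhost WHERE idvhost = 100);
-- raises an error about vhost.iddomain not existing, instead of silently
-- matching forwarding.iddomain from the outer query and deleting rows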
[BUGS] ISM shared memory on solaris
I hope this is the right place to send this.. the FAQ in the distribution mentions http://www.PostgreSQL.org/bugs/bugs.php, which doesn't work. We've found that postgresql wasn't using ISM shared memory on solaris, which theoretically would cost performance. The root cause in our case was that the "solaris" define is not defined by our compilers or by postgresql itself. The patch below simple has it check SHM_SHARE_MMU instead, which should work fine. I verified (with 'pmap') that the database is now using ISM on its shared memory, after this patch was applied. --Josh --- sysv_shmem.c.orig 2002-09-04 13:31:24.0 -0700 +++ sysv_shmem.c2003-10-23 12:52:26.756765000 -0700 @@ -143,7 +143,7 @@ on_shmem_exit(IpcMemoryDelete, Int32GetDatum(shmid)); /* OK, should be able to attach to the segment */ -#if defined(solaris) && defined(__sparc__) +#if defined(SHM_SHARE_MMU) && defined(__sparc__) /* use intimate shared memory on SPARC Solaris */ memAddress = shmat(shmid, 0, SHM_SHARE_MMU); #else @@ -323,8 +323,8 @@ shmid = shmget(NextShmemSegID, sizeof(PGShmemHeader), 0); if (shmid < 0) continue; /* failed: must be some other app's */ - -#if defined(solaris) && defined(__sparc__) + +#if defined(SHM_SHARE_MMU) && defined(__sparc__) /* use intimate shared memory on SPARC Solaris */ memAddress = shmat(shmid, 0, SHM_SHARE_MMU); #else ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [BUGS] ISM shared memory on solaris
Bruce Momjian wrote: Josh Wilmes wrote: Nope, __solaris__ is not defined on our system either. I thought our configure defined __portname__ for every platform, but I don't see that anywhere, so it seems we rely on the compiler to supply defines for the cpu and OS. Does src/tools/ccsym show you your defines? I would like to have something that identifies Solaris rather than something that checks for ISM so that if the ISM define isn't found, we throw an error and we hear about it. That would be preferable- i didn't know what was safe to assume would always be defined. ccsym is pretty neat. Here's what it shows (gcc) __GNUC__=2 __GNUC_MINOR__=95 sparc sun unix __svr4__ __SVR4 __sparc__ __sun__ __unix__ __svr4__ __SVR4 __sparc __sun __unix system=unix system=svr4 __GCC_NEW_VARARGS__ cpu=sparc machine=sparc ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [BUGS] ISM shared memory on solaris
Nope, __solaris__ is not defined on our system either. --Josh Bruce Momjian wrote: Yikes! We thought we were already using ISM on Solaris. Would you test the attached patch? It uses _solaris_ rather than SHM_SHARE_MMU in the define test. Does that work too? --- Josh Wilmes wrote: I hope this is the right place to send this.. the FAQ in the distribution mentions http://www.PostgreSQL.org/bugs/bugs.php, which doesn't work. We've found that postgresql wasn't using ISM shared memory on solaris, which theoretically would cost performance. The root cause in our case was that the "solaris" define is not defined by our compilers or by postgresql itself. The patch below simple has it check SHM_SHARE_MMU instead, which should work fine. I verified (with 'pmap') that the database is now using ISM on its shared memory, after this patch was applied. --Josh --- sysv_shmem.c.orig 2002-09-04 13:31:24.0 -0700 +++ sysv_shmem.c2003-10-23 12:52:26.756765000 -0700 @@ -143,7 +143,7 @@ on_shmem_exit(IpcMemoryDelete, Int32GetDatum(shmid)); /* OK, should be able to attach to the segment */ -#if defined(solaris) && defined(__sparc__) +#if defined(SHM_SHARE_MMU) && defined(__sparc__) /* use intimate shared memory on SPARC Solaris */ memAddress = shmat(shmid, 0, SHM_SHARE_MMU); #else @@ -323,8 +323,8 @@ shmid = shmget(NextShmemSegID, sizeof(PGShmemHeader), 0); if (shmid < 0) continue; /* failed: must be some other app's */ - -#if defined(solaris) && defined(__sparc__) + +#if defined(SHM_SHARE_MMU) && defined(__sparc__) /* use intimate shared memory on SPARC Solaris */ memAddress = shmat(shmid, 0, SHM_SHARE_MMU); #else ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly Index: src/backend/port/sysv_shmem.c === RCS file: /cvsroot/pgsql-server/src/backend/port/sysv_shmem.c,v retrieving revision 1.21 diff -c -c -r1.21 sysv_shmem.c *** src/backend/port/sysv_shmem.c 13 Oct 2003 22:47:15 - 1.21 --- src/backend/port/sysv_shmem.c 24 Oct 2003 15:46:03 - *** *** 133,139 on_shmem_exit(IpcMemoryDelete, Int32GetDatum(shmid)); /* OK, should be able to attach to the segment */ ! #if defined(solaris) && defined(__sparc__) /* use intimate shared memory on SPARC Solaris */ memAddress = shmat(shmid, 0, SHM_SHARE_MMU); #else --- 133,139 on_shmem_exit(IpcMemoryDelete, Int32GetDatum(shmid)); /* OK, should be able to attach to the segment */ ! #if defined(__solaris__) && defined(__sparc__) /* use intimate shared memory on SPARC Solaris */ memAddress = shmat(shmid, 0, SHM_SHARE_MMU); #else *** *** 352,358 hdr = (PGShmemHeader *) shmat(*shmid, UsedShmemSegAddr, ! #if defined(solaris) && defined(__sparc__) /* use intimate shared memory on Solaris */ SHM_SHARE_MMU #else --- 352,358 hdr = (PGShmemHeader *) shmat(*shmid, UsedShmemSegAddr, ! #if defined(__solaris__) && defined(__sparc__) /* use intimate shared memory on Solaris */ SHM_SHARE_MMU #else ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [BUGS] ISM shared memory on solaris
Seems like the BEST case would be to have a configure test verify that it works and define something if it does, but i don't know what such a test would look like. --Josh Bruce Momjian wrote: Tom Lane wrote: Bruce Momjian <[EMAIL PROTECTED]> writes: ! #if defined(sun) && defined(__sparc__) /* use intimate shared memory on SPARC Solaris */ memAddress = shmat(shmid, 0, SHM_SHARE_MMU); #else I think this is going in the wrong direction. Why isn't the code just #if defined(SHM_SHARE_MMU) /* use intimate shared memory on Solaris */ memAddress = shmat(shmid, 0, SHM_SHARE_MMU); #else If the symbol is available I think we probably want to use it. It is an O/S issue, not a hardware issue, and so the test on __sparc__ seems quite wrongheaded ... What I was hoping to do with the define test was to throw an error if we don't find intimate shared memory on Solaris, but the define doesn't work fir i386/Solaris so we are probably better going with the define as you suggest --- I just hope we don't fail to include a file and somehow miss it on some version of Solaris. Change applied. ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[BUGS] Minor bug: Odd feedback on STDERR from PSQL for block comments
Guys, Here's a non-showstopper in 7.4RC1, but probably good to fix before RC2: (btw, tested & exists on beta4 as well, so I'm not sure when it was introduced) 1) Create a file with *only* a block comment in it, or with a block comment as the very last line. 2) run psql -f filename database You will get some odd feedback on STDERR: on Linux, file with only a block comment: [EMAIL PROTECTED]:~/Documents/oss/postgres> psql -f tempit2.sql -U postgres test1 psql:tempit2.sql:1: [EMAIL PROTECTED]:~/Documents/oss/postgres> on OSX: josh% /usr/local/pgsql/bin/psql -U postgres -f ~/test.sql sharky psql:/Users/josh/test.sql:1: [mercury:~] josh% on Linux, file with several commands, ending in a block comment: [EMAIL PROTECTED]:~/Documents/oss/postgres/phpcon> psql -f tempit.sql -U postgres test1 CREATE TABLE INSERT 17168 1 INSERT 17169 1 INSERT 17170 1 psql:tempit.sql:13: ERROR: column "iddomain" does not exist DELETE 1 psql:tempit.sql:18: [EMAIL PROTECTED]:~/Documents/oss/postgres/phpcon> I'm not sure why we're getting a line header ("psql:tempit2.sql:1: ") on an empty line, but it's annoying for any application writer (Bricolage) looking for failure messages on STDERR. -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [BUGS] Minor bug: Odd feedback on STDERR from PSQL for block comments
Guys, > I can confirm the problem. It happens in more cases than Josh mentions, > as well: > -- running the file with \i also shows the problem > -- there can be blank lines and whitespace after the > block comment, and it still shows up. This bug is still present in RC2. Are we going to fix it before release? According to David W., it really screws up the error-detection code on Bricolage, and I'd think that other applications might have the same problem. -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[BUGS] Weird MD5-authentication crash on Solaris 8
Severity: Core dump Frequency: Unusual PostgreSQL Version: 7.4.0 release Platform: Solaris 8 gmake 3.80 gcc 3.23 440MHz UltraSparc III Netra T1/105 Reproducibility: 100% on this machine and an identical one. Not yet tested on other machines running Solaris 8. Summary: attempting to connect via MD5 authentication as a user who has no password triggers a core dump of Postmaster. Steps to Reproduce: 1) set localhost authentication to MD5 2) set up a new user without setting a password. 3) attempt to connect as that user. Core dump file is available. Strace can also be done if desired. -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
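For step 2 of the reproduction, the minimal sketch below shows what "a user without a password" means here; the role name is made up, and the pg_hba.conf entry for local connections is assumed to already specify the md5 method.

CREATE USER nopass_user;   -- no PASSWORD clause, so no MD5 hash is stored for this user
-- then attempt: psql -U nopass_user <database>   (the connection attempt triggers the reported crash)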
Re: [BUGS] Weird MD5-authentication crash on Solaris 8
Neil, > Can you post a stacktrace? (Building the postmaster with debugging > symbols first would be nice.) I'll see what I can do. A regular strace should be easy. -- Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [BUGS] Weird MD5-authentication crash on Solaris 8
Tom, > Is this the bsearch-of-no-elements problem recently discussed? > (If you have other users who do have passwords, then it's not...) Actually, it could be. Is it patched in the current source code? -- Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [BUGS] Known issue with Reindex-based corruption?
Tom, > One question that comes to mind is were you reindexing a system or user > table? User. > Another is whether you were using disks that lie about write > completion (SCSI vs IDE)? First thing I thought of. Haven't been able to verify, yet. The basic symptoms are: 1) Machine started scheduled REINDEX. 2) Unexpected power-out. 3) On reboot, we have 2 different versions of the index file on disk, one with 0 bytes. Attempts to use the index (via SELECT) result in statement-fatal errors. I'm waiting on more data. For now, I was wondering whether there was a known issue with WAL recovery on indexes in 7.2.4. Neil thought there was. -- -Josh Berkus __AGLIO DATABASE SOLUTIONS___ Josh Berkus Complete information technology [EMAIL PROTECTED] and data management solutions (415) 565-7293 for law firms, small businesses fax 651-9224 and non-profit organizations. San Francisco ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [BUGS] Known issue with Reindex-based corruption?
Tom, > It'd be more productive for them to update to 7.4 ... It's a distributed app, meaning that they have boxes in the field which can not be practically updated by remote. They'll be using 7.4 for *new* boxes, sometime around November. Their requirements include 6 months of testing before release. -- -Josh Berkus __AGLIO DATABASE SOLUTIONS___ Josh Berkus Complete information technology [EMAIL PROTECTED] and data management solutions (415) 565-7293 for law firms, small businesses fax 651-9224 and non-profit organizations. San Francisco ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [BUGS] Known issue with Reindex-based corruption?
Tom, > That's a definite possibility. Before 7.4 we did not emit WAL records > for data written during index build. What we could have here is that > the transaction completed and synced to WAL, but none of the data-file > writes were sent to disk before power-out. On restart, WAL replay would > faithfully update the pg_class row, but the index file would still be > empty :-( Would this be back-patchable by a good PG hacker? The client has $$$. -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[BUGS] New Instance of Variable Not Found in Subplan Bug
Tom, I think I have a new instance of the "Variable not Found in Subplan Target List" bug, or at least one that was not patched in 7.4.1. Version: 7.4.1 from source Platform: RH Linux 7.3 running on Dual Athalon Severity: Showstopper Symptoms: Converted 7.2 databse to 7.4.1 three weeks ago. This view worked normally for those 3 weeks; in fact, it worked normally until a couple of hours ago (and was in heavy use all that time, being queried about 1000 times per day) It is still in use on a mirror server, with identical schema but slightly different data, where the error does NOT occur. Starting about 2 hours ago, we began to get this: net_test=# select * from sv_cases; ERROR: variable not found in subplan target lists The database is huge, proprietary, and very complex. I can't provide you with full schema on this list, but could provide more information privately. Here is the view: CREATE VIEW "sv_cases" as SELECT cases.case_id, cases.case_name, cases.docket, status.status_label, cases.opp_counsel_name, trial_groups.tgroup_name, cases.tgroup_id, cases.status, cases.lead_case_docket, cases.lead_case_id, cases.priority, tpr.rollup1 as pr_element, tpr.rollup2 as pr_label FROM status, ( SELECT vchar_to_int2(list_value) as priority, rollup1, rollup2 from text_list_values WHERE list_name = 'Case Priority' ) tpr, cases LEFT JOIN trial_groups on cases.tgroup_id = trial_groups.tgroup_id WHERE (cases.status = status.status AND status.relation = 'cases'::"varchar") AND cases.priority = tpr.priority; I cannot run an EXPLAIN, it errors out as well. And, per one of your previous e-mails, I tried forcing a change in the plan, but to no benefit: jwnet_test=# set enable_hashjoin=false; SET jwnet_test=# select * from sv_cases; ERROR: variable not found in subplan target lists jwnet_test=# set enable_hashjoin=true; SET jwnet_test=# set enable_mergejoin=false; SET jwnet_test=# select * from sv_cases; ERROR: variable not found in subplan target lists jwnet_test=# set enable_mergejoin=true; SET jwnet_test=# set enable_nestloop=false; SET jwnet_test=# select * from sv_cases; ERROR: variable not found in subplan target lists If there is a patch for this that isn't in 7.4.1, please let me know where I can grab it other than the archives, as the HTML formatting is messing me up with the 11/2003 patch. Thanks! -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [BUGS] New Instance of Variable Not Found in Subplan Bug
Tom, Further information: > CREATE VIEW "sv_cases" as > SELECT cases.case_id, cases.case_name, cases.docket, status.status_label, > cases.opp_counsel_name, trial_groups.tgroup_name, cases.tgroup_id, > cases.status, cases.lead_case_docket, cases.lead_case_id, > cases.priority, tpr.rollup1 as pr_element, tpr.rollup2 as pr_label > FROM status, > ( SELECT vchar_to_int2(list_value) as priority, rollup1, rollup2 > from text_list_values WHERE list_name = 'Case Priority' ) tpr, > cases LEFT JOIN trial_groups on cases.tgroup_id = trial_groups.tgroup_id > WHERE (cases.status = status.status AND status.relation = 'cases'::"varchar") > AND cases.priority = tpr.priority; In the above view, text_list_values is another, simple view. Removing that view from the equation fixed it, becuase it turns out that the issue is with the text_list_value view: CREATE VIEW text_list_values AS SELECT text_lists.list_id, text_lists.list_name, text_lists.list_group, text_lists.app_id, text_lists.status AS list_status, s1.status_label AS list_status_label, text_lists.list_format, text_lists.item_length, list_values.value_id, list_values.list_value, list_values.description, list_values.rollup1, list_values.rollup2, list_values.status AS value_status, s2.status AS value_status_label FROM text_lists JOIN list_values USING (list_id) JOIN status s1 ON text_lists.status = s1.status AND s1.relation::text = 'text_lists'::character varying::text JOIN status s2 ON list_values.status = s2.status AND s2.relation::text = 'list_values'::character varying::text; RELOADING the view fixed the error. Here's the EXPLAIN plan: QUERY PLAN Merge Join (cost=14.51..15.69 rows=66 width=130) Merge Cond: ("outer".status = "inner".status) -> Sort (cost=1.94..1.94 rows=3 width=2) Sort Key: s2.status -> Seq Scan on status s2 (cost=0.00..1.91 rows=3 width=2) Filter: ((relation)::text = 'list_values'::text) -> Sort (cost=12.57..12.83 rows=102 width=128) Sort Key: list_values.status -> Hash Join (cost=4.11..9.17 rows=102 width=128) Hash Cond: ("outer".list_id = "inner".list_id) -> Seq Scan on list_values (cost=0.00..3.36 rows=136 width=69) -> Hash (cost=4.06..4.06 rows=18 width=63) -> Merge Join (cost=3.74..4.06 rows=18 width=63) Merge Cond: ("outer".status = "inner".status) -> Sort (cost=1.95..1.96 rows=4 width=16) Sort Key: s1.status -> Seq Scan on status s1 (cost=0.00..1.91 rows=4 width=16) Filter: ((relation)::text = 'text_lists'::text) -> Sort (cost=1.79..1.85 rows=24 width=49) Sort Key: text_lists.status -> Seq Scan on text_lists (cost=0.00..1.24 rows=24 width=49) Suggestions on how to diagnose this, before I erase all evidence of it? -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [BUGS] New Instance of Variable Not Found in Subplan Bug
Tom, > There are several (two or three, I forget) post-7.4.1 fixes that resolve > bugs that all have that symptom. I can't tell with this much info > whether you have a new case or one of the known ones. > > I'd suggest pulling the tip of REL7_4_STABLE branch to see if it's > fixed. Hmmm ... problem is, per my last e-mail, the bug is not reproducable off of this particular database instance -- if I copy it to my laptop, the bug goes away. And even though it's not a production database, it *is* a production *server*. Which means that I can't apply CVS code to it ... -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [BUGS] New Instance of Variable Not Found in Subplan Bug
Tom, > It's presumably dependent on the contents of pg_statistic and the > relpages/reltuples counts in pg_class for the tables involved. > You could likely reproduce it by migrating that data to your laptop. > It would take a little bit of hacking to get the pg_statistic data > in (adjusting starelid for instance) but I think it's doable. > > Note that the planner control settings (eg effective_cache_size) might > also need to be copied. Hmmm ... could I do it through a binary file copy? I'm on a bit of a deadline here, and need to replace the bad view in the next hour or so. -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
Re: [BUGS] New Instance of Variable Not Found in Subplan Bug
Tom, > > RELOADING the view fixed the error. > > What do you mean by "reloading the view", exactly? I created the same view under a new name. The new view runs fine. I suspect that if I REPLACED the view, it would be fixed, but I don't want to do that if we want to analyze it further. > The cost numbers here are very small; are the tables themselves small, or > did you reload them too? The tables are quite small, the largest < 200 rows. This view just links a bunch of reference lists. -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [BUGS] New Instance of Variable Not Found in Subplan Bug
Tom, > [scratches head...] That doesn't make any sense to me at all ... > there must be some difference between the two view definitions. > The planner doesn't have any statistics associated with views, > only with underlying tables (in fact it never even sees the views). Unlikely, given that I created the second view by copying the \d output of the first view. However, here goes. First is \d for the bad view, and second is \d and 2nd for the good view. I can't see any difference. Can you? View "public.text_list_values" Column | Type | Modifiers ++--- list_id| integer| list_name | character varying(30) | list_group | character varying(30) | app_id | integer| list_status| integer| list_status_label | character varying(30) | list_format| character varying(30) | item_length| smallint | value_id | integer| list_value | character varying(50) | description| character varying(100) | rollup1| character varying(100) | rollup2| character varying(50) | value_status | integer| value_status_label | smallint | View definition: SELECT text_lists.list_id, text_lists.list_name, text_lists.list_group, text_lists.app_id, text_lists.status AS list_status, s1.status_label AS list_status_label, text_lists.list_format, text_lists.item_length, list_values.value_id, list_values.list_value, list_values.description, list_values.rollup1, list_values.rollup2, list_values.status AS value_status, s2.status AS value_status_label FROM text_lists JOIN list_values USING (list_id) JOIN status s1 ON text_lists.status = s1.status AND s1.relation::text = 'text_lists'::character varying::text JOIN status s2 ON list_values.status = s2.status AND s2.relation::text = 'list_values'::character varying::text; View "public.text_list_values_2" Column | Type | Modifiers ++--- list_id| integer| list_name | character varying(30) | list_group | character varying(30) | app_id | integer| list_status| integer| list_status_label | character varying(30) | list_format| character varying(30) | item_length| smallint | value_id | integer| list_value | character varying(50) | description| character varying(100) | rollup1| character varying(100) | rollup2| character varying(50) | value_status | integer| value_status_label | smallint | View definition: SELECT text_lists.list_id, text_lists.list_name, text_lists.list_group, text_lists.app_id, text_lists.status AS list_status, s1.status_label AS list_status_label, text_lists.list_format, text_lists.item_length, list_values.value_id, list_values.list_value, list_values.description, list_values.rollup1, list_values.rollup2, list_values.status AS value_status, s2.status AS value_status_label FROM text_lists JOIN list_values USING (list_id) JOIN status s1 ON text_lists.status = s1.status AND s1.relation::text = 'text_lists'::character varying::text JOIN status s2 ON list_values.status = s2.status AND s2.relation::text = 'list_values'::character varying::text; -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
[BUGS] Glitch: cannot use Arrays with Raise Notice
Bug: Cannot Use Arrays with Raise Notice in PL/pgSQL. Version Tested: 7.4.1 Severity: Annoyance Description: Attempting to pass an array element to Raise Notice in PL/pgSQL will produce a parse error: CREATE OR REPLACE FUNCTION if_exec_to_text ( TEXT, TEXT[], TEXT[] ) RETURNS text AS ' DECLARE qstring TEXT; r_params ALIAS for $2; r_values ALIAS for $3; param_loop INT; execrec RECORD; retval TEXT; BEGIN -- swaps in parameters and executes a query returning a single -- text value qstring := $1; param_loop := 1; raise notice '' param 1 %'', r_params[param_loop]; WHILE r_params[param_loop] <> '''' LOOP -- qstring := strswap(qstring, r_params[param_loop], r_values[param_loop] ); -- above line commented out for reproducability param_loop := param_loop + 1; END LOOP; FOR execrec IN EXECUTE qstring LOOP retval := execrec.col1; END LOOP; RETURN retval; END;' LANGUAGE plpgsql; Produces the following error: jwnet_test=# select if_exec_to_text ( 'select to_char(''#VALUE#''::DATE,''MM/ DD/'') as col1;', jwnet_test(# ARRAY[ '#VALUE#' ], ARRAY[ '2004-03-21' ]); ERROR: syntax error at or near "[" CONTEXT: compile of PL/pgSQL function "if_exec_to_text" near line 12 Removal of the Raise Notice statement will cause the procedure to execute. No biggie, just wanted to get it on the bug list. -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
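Until the parser accepts subscripted expressions there, one workaround sketch is to copy the array element into a plain variable before the RAISE. The function below is a made-up, minimal example (not the original if_exec_to_text function) and follows the same pre-8.0 quoting style:

CREATE OR REPLACE FUNCTION notice_first_param ( TEXT[] ) RETURNS TEXT AS '
DECLARE
    r_params ALIAS FOR $1;
    current_param TEXT;
BEGIN
    -- assign the subscripted element to a scalar first; RAISE then sees a simple variable
    current_param := r_params[1];
    RAISE NOTICE ''param 1 %'', current_param;
    RETURN current_param;
END;' LANGUAGE plpgsql;

-- usage: SELECT notice_first_param(ARRAY['#VALUE#']);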
[BUGS] Core Dump on SunOS + 7.3.3
Folks, Just had a client report a core dump of 7.3.3 on on SunOS 2.9 after the system was running for a couple of weeks. I'm inclined to tell them to upgrade to 7.3.6 and try again, but was wondering if this jogs anyone's memory. I've searched the archives and the HISTORY, and the only 7.3.3 --> 7.3.6 core dumping issues I see relate to pg_dump, or Solaris 7. -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [BUGS] Core Dump on SunOS + 7.3.3
Tom, > Can't do much without more detail than that. A stack trace would be > most helpful, but at the very least I'd want to know what query dumped > core... > > Is it reproducible? Apparently not ... just crops up randomly, after 2+ weeks of good service. Will dig for more information ... and get them to upgrade to 7.3.6. -- -Josh Berkus __AGLIO DATABASE SOLUTIONS___ Josh Berkus Complete information technology [EMAIL PROTECTED] and data management solutions (415) 565-7293 for law firms, small businesses fax 651-9224 and non-profit organizations. San Francisco ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [BUGS] Core Dump on SunOS + 7.3.3
Tom, > * src/backend/utils/adt/selfuncs.c (REL7_3_STABLE): Work around > buggy strxfrm() present in some Solaris releases. > > Don't recall now which are "some" Solaris releases. Turns out it is the "user with no password" bug, which was thought to affect only Solaris 7 or 8. However, it turns out that they are using a pre-release of Solaris 9, so they have the same issue. They've been told to upgrade to 7.3.6. -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
[BUGS] Known issue with Reindex-based corruption?
Folks, Are there any known issues with index file corruption in the event of a power-out during REINDEX with 7.2.4? I *think* the problem is this client's peculiar hardware, but wanted to eliminate any potential known issues. -- -Josh Berkus __AGLIO DATABASE SOLUTIONS___ Josh Berkus Complete information technology [EMAIL PROTECTED] and data management solutions (415) 565-7293 for law firms, small businesses fax 651-9224 and non-profit organizations. San Francisco ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [BUGS] Bug in concat operator for Char? -- More Info
Folks, Also: This behavior was different in 7.1: [11:02:45] darcy=# select '1'::char(4) || '-'::char(1); [11:02:45] ?column? [11:02:45] -- [11:02:45] 1 - [11:02:45] (1 row) [11:02:49] on 7.1 And there's apparently either an issue, or a change in behavior, in CHAR for 7.5: [11:03:25] darcy=# SELECT length('1'::char(4)); [11:03:25] length [11:03:25] [11:03:25]1 [11:03:25] (1 row) [11:03:29] is 7.5 pg743=> select length('1'::char(4)); length 4 (1 row) (on 7.4.3) Are these changes intentional, or are they bugs? -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
[BUGS] Bug in concat operator for Char?
People, Severity: Serious Annoyance Reproducible on: 7.4.1, 7.4.3, 7.5devel Summary: Concatenation of CHAR() data type field seems to result in a TEXT value instead of a CHAR value. Is there a reason for this? Example: webmergers=> select '"'::char(4) || ''::char(4) || '"'::char(4); ?column? -- "" (1 row) Depending on the spec, it seems to me that the above should result either in a char(4) of " " or a char(12) of " " . But we get a text value. Is this the SQL spec? Is there another reason for this behavior? -- -Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
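If the padded form is wanted despite this behavior, one workaround sketch is to build the padding explicitly rather than relying on char(n) blanks surviving the cast to text; rpad() returns a text value with the trailing spaces intact. (A minimal sketch, matching the 7.4 behavior reported here.)

SELECT '1'::char(4) || '-'::char(1);   -- yields '1-' in 7.4: the char blanks are dropped when cast to text
SELECT rpad('1', 4) || '-';            -- yields '1   -': the padding is part of the text value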
[BUGS] 8.0: Absolute path required for INITDB?
8.0 beta CVS of 8/8/2004: If a relative path is used for the -L option in initdb, the following fatal error happens: ./initdb -D ../data -L ../share/postgresql/ creating system views ... ok loading pg_description ... ERROR: could not open file "../share/postgresql/postgres.description" for reading: No such file or directory child process exited with exit code 1 Is this intentional? The first dozen or so commands work fine with a relative path, and this executes fine with an absolute path for both -D and -L. SuSE 9.1, GCC 3.3.3 -- Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [BUGS] 8.0: Absolute path required for INITDB?
Tom, > It might be worth absolut-izing this path in initdb before it's > passed down, but I can't get exceedingly excited about it. Well, once again let me check the docs to make sure people are warned about this -- Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])