[BUGS] BUG #1450: unknown symbols
The following bug has been logged online:

Bug reference: 1450
Logged by: Dirk
Email address: [EMAIL PROTECTED]
PostgreSQL version: ?
Operating system: QNX
Description: unknown symbols

Details:

Hi, I am trying to run some tests on a QNX-based system, but every command I use fails with an unknown symbol: wcwidth. PostgreSQL could not resolve all symbols. The installation was made from the standard repository used on QNX.

Greetings
Re: [BUGS] Small bug with numeric in 7.0 (also in 6.5.3)
Bruce Momjian writes:
> This is terrible. I can't imagine how people use this without terrible
> problems. Why don't we get more bug reports about this?

Maybe this relates to this too:

$ pg_dump cs1 -o > /tmp/x.dump
parseNumericArray: bogus number

pg_dump then aborts. This is on 7.0 on Linux 2.2.10. I can't get more information about this error, but I definitely cannot dump the db, even though the migration from 6.5.3 to 7.0 went smoothly. Any suggestions?

Dirk
[BUGS] ERROR: btree: index item size 3048 exceeds maximum 2717
ERROR: btree: index item size 3048 exceeds maximum 2717

This is on 7.0.2 while doing an INSERT. The INSERT aborts and returns this error. It just occurred for the first time and now happens on every attempt. There are several indexes defined on this table. Should I rebuild them all? What is this?

Regards, Dirk
Re: [BUGS] ERROR: btree: index item size 3048 exceeds maximum 2717
Dirk Lutzebaeck writes:
> ERROR: btree: index item size 3048 exceeds maximum 2717
>
> This is on 7.0.2 while doing an INSERT. The INSERT aborts and returns
> this error. It just occurred now for the first time and now everytime.
> There are several indexes defined on this table. Should I rebuild
> them all?
>
> What is this?

Ok, it looks like an index on a TEXT column can't cope with more than 2717 octets. I would have expected the 8k limit here, but it seems to be less. I understand that indexing such long fields may not be meaningful, but in my application 90% of the text data is very small while some entries are quite large.

Dirk
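[Editorial note] A common workaround (a sketch only, with hypothetical names: table "docs", TEXT column "body") is to index a fixed-size digest or prefix of the text instead of the raw value, so oversized rows no longer exceed the btree item limit. Expression-index syntax and the availability of md5() depend on the server version:

  -- Index only a prefix of the text; useful when lookups can filter on the prefix.
  CREATE INDEX docs_body_prefix_idx ON docs (substr(body, 1, 100));

  -- Or index a digest for exact-match lookups on arbitrarily long values.
  CREATE INDEX docs_body_md5_idx ON docs (md5(body));

  -- Queries must use the same expression for the index to apply:
  SELECT * FROM docs
  WHERE md5(body) = md5('...the long text...')
    AND body = '...the long text...';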
[BUGS] Backend crashes in 7.0.3
Hello, I observe occasional crashes on 7.0.3 under medium load:

Backend message type 0x49 arrived while idle
Backend message type 0x44 arrived while idle
Backend message type 0x54 arrived while idle

I recently upgraded from 7.0.2 to 7.0.3 on RH6.0, Linux 2.2.10, and I had not observed these messages before. I compiled the source myself (egcs 2.91.66). Can I downgrade from 7.0.3 to 7.0.2 without dump/restore?

Dirk
Re: [BUGS] Backend crashes in 7.0.3
Tom Lane writes:
> Dirk Lutzebaeck <[EMAIL PROTECTED]> writes:
> > I observe occasionaly crashes on 7.0.3 under medium load:
> >
> > Backend message type 0x49 arrived while idle
> > Backend message type 0x44 arrived while idle
> > Backend message type 0x54 arrived while idle
> >
> > I recently upgraded from 7.0.2 to 7.0.3 on RH6.0, Linux 2.2.10 and I
> > haven't observed these messages before. I have
> > compiled the source on my own (egcs 2.91.66).
>
> You can, but in the long run it'd be more useful to figure out what's
> going wrong. The above is not much info --- what are you doing when
> this happens, and what if anything appears in the postmaster log?

It may be that some kernel corruption is involved here. I'm using kernel nfs on Linux 2.2.10 with a Solaris 8 i86pc client. I saw some weird NFS error messages on the Linux system which are related to the Solaris client. I suspect the kernel nfs daemon is corrupting memory areas where the postgres shared memory resides. I'm currently trying to dig deeper into the problem. Could this be possible?

What is strange is that stopping and restarting the postmaster does not help; the crashes occur again. When I kill the children, some stay alive. Sending them SIGTERM again leaves them in a constant running state (R), and strace -p on such a child shows nothing. I can only kill it with SIGKILL. I haven't started the postmaster with debug on yet.

I have now shut off the Solaris client and restarted the machine. Currently it looks fine.

Dirk
Re: [BUGS] Backend crashes in 7.0.3
Tom Lane writes:
> Dirk Lutzebaeck <[EMAIL PROTECTED]> writes:
> > It may be that there is some kernel corruption appearing here. I'm
> > using kernel nfs on Linux 2.2.10 with a Solaris8 i86pc client. I saw
> > some weird NFS error messages on the Linux system which are related to
> > the solaris client. I suspect the kernel nfs daemon corrupting memory
> > areas where postgres shared mem resides. I'm currently trying to dig more into
> > the problem. Could this be possible?
>
> Seems like a bizarre theory. In particular, why would 7.0.3 be affected
> and not 7.0.2?

Sorry, it seems I was on the wrong track. It has nothing to do with NFS; I just panicked... I'm currently looking into Apache::DBI, a mod_perl utility for reusing open DB connections. In any case the backend messages are odd. It seems I mistook Apache::DBI losing a DB connection for a backend crash.

Dirk
Re: [BUGS] INSERT/SELECT with ORDER BY
Stephan Szabo writes:
> Unless I'm horribly misreading the spec, this is correct.
> ORDER BY is used in reference to cursor definitions and
> direct select statements, but is not part of query expressions,
> which is what the INSERT INTO references.
>
> <insert statement> ::=
>     INSERT INTO <table name> <insert columns and source>
>
> <insert columns and source> ::=
>     [ <left paren> <insert column list> <right paren> ] <query expression>
>     | DEFAULT VALUES
>
> <insert column list> ::= <column name list>
>
> But, this may be a useful extension to allow, at least with the
> existence of LIMIT.
>
> Stephan Szabo
> [EMAIL PROTECTED]
>
> On Thu, 17 Aug 2000, Bob Rivoir wrote:
>
> > Please enter a FULL description of your problem:
> >
> > I got the following error message when I tried to do an INSERT using
> > SELECT ... ORDER BY:
> >
> > ERROR: ORDER BY is not allowed in INSERT/SELECT

Is 7.1 able to do INSERT/SELECT with ORDER BY and LIMIT?

Dirk
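[Editorial note] Recent PostgreSQL releases accept ORDER BY and LIMIT directly in INSERT ... SELECT; on a version that rejects it, wrapping the ordered query in a subselect in FROM (available from 7.1 on) gives the same effect. A sketch with hypothetical table names:

  -- Copy the ten newest rows of "source" into "target"; ordering and limit
  -- are applied inside the subselect, so the INSERT itself needs no ORDER BY.
  INSERT INTO target (id, created_at)
  SELECT id, created_at
  FROM (SELECT id, created_at
        FROM source
        ORDER BY created_at DESC
        LIMIT 10) AS newest;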
[BUGS] cannot find attribute 1 of relation (of some user table) in 7.1.3
Hi, I have the following problem when vacuuming on 7.1.3:

db1=# vacuum;
ERROR: cannot find attribute 1 of relation docmatchsel

I cannot drop the table either, and the system does not seem to know the columns:

db1=# \d docmatchsel
Table "docmatchsel"
 Attribute | Type | Modifier
-----------+------+----------

db1=#

Then I tried to reindex the table in standalone mode, which gives:

backend> reindex table docmatchsel force
ERROR: catalog is missing 4 attributes for relid 117042636

I also reindexed the system tables, to no avail. docmatchsel was created as a temporary table in the application. No indexes were defined for this table. I cannot run either vacuum or pg_dump successfully. Is there any insight into this? I looked through the email archives but could not find a way to get rid of this problem.

Dirk
Re: [BUGS] cannot find attribute 1 of relation (of some user table) in 7.1.3
Tom Lane writes:
> Dirk Lutzebaeck <[EMAIL PROTECTED]> writes:
> > Hi, I have the follow problem when vacuum'ing on 7.1.3:
> > ERROR: cannot find attribute 1 of relation docmatchsel
> > Then I tried to reindex the table in standalone mode which gives:
> > backend> reindex table docmatchsel force
> > ERROR: catalog is missing 4 attributes for relid 117042636
>
> Bizarre. Were there four columns altogether?

Yes.

> > docmatchsel was created as a temporary table in the application.
>
> Since it's only a temporary table, my counsel is:
>
> 1. delete from pg_class where relname = 'docmatchsel';
>
> 2. pg_dump, initdb, reload.
>
> You might be okay after just step 1, but given that we don't know what
> caused this, an initdb seems like a good idea.

Ok, I will do this tonight.

> It would be interesting to go groveling through pg_attribute and see if
> we can find any traces of the missing attribute rows, but unless you're
> prepared to sift through the bits yourself (or give access to some
> developer to do it), that's not going to happen.

I would like to go with your fixes above this time. If it happens again we should take a closer look indeed.

Thanks, Dirk
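[Editorial note] For readers hitting the same error, a sketch of the two checks discussed above (run as superuser; the relname and relid are the ones from this thread and must be replaced with your own):

  -- Step 1 from Tom's advice: remove the damaged temp table's pg_class row.
  DELETE FROM pg_class WHERE relname = 'docmatchsel';

  -- Optional forensic check: see whether any pg_attribute rows survive
  -- for the relation id reported by REINDEX.
  SELECT attrelid, attnum, attname
  FROM pg_attribute
  WHERE attrelid = 117042636;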
[BUGS] ALTER TABLE Problem
Hi,

- I tried to execute the following SQL:

alter table PRICEGROUP add FOREIGN KEY ("PARENT") REFERENCES "PRICEGROUP" ON DELETE CASCADE

This gives an error:

ALTER TABLE: column "PARENT" referenced in foreign key constraint does not exist (7)

while I'm sure that the column exists and is non-null. I'm using version 7.2 on Linux Red Hat 7.2.

- I also tried some ODBC. There I've seen that the SQLSTATE result codes are not correct: most of the time it simply returns S1000, which is a general error code, not a specific one. E.g. when you try to create a table that already exists it should return the corresponding specific error (same for an index, etc.).

Regards, Dirk Jacobs
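[Editorial note] A likely explanation for the first problem (not confirmed for this report, just the most common cause of that error) is PostgreSQL's identifier case folding: unquoted names are folded to lowercase, while double-quoted names keep their exact case, so "PARENT" only matches a column that was created with quotes. A minimal sketch with a hypothetical table:

  -- The column below is created without quotes, so it is stored as lowercase "parent".
  CREATE TABLE pricegroup_demo (id integer PRIMARY KEY, PARENT integer);

  -- This fails, because quoted "PARENT" does not match the folded name:
  --   ALTER TABLE pricegroup_demo ADD FOREIGN KEY ("PARENT") REFERENCES pricegroup_demo;

  -- Referencing the column unquoted (or quoting it consistently everywhere) works:
  ALTER TABLE pricegroup_demo ADD FOREIGN KEY (parent) REFERENCES pricegroup_demo ON DELETE CASCADE;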
[BUGS] How to set the global OID counter? COPY WITH OIDS does not set global OID counter?
Hi, how can one set the global OID counter in 8.1.X? We think this worked in 8.0.X using the COPY WITH OIDS command, but it does not work in 8.1.X anymore. Our problem is that we made a dump using 'pg_dump -o' in 8.0.X, created a new database in 8.1.X and read the dump back in, but the global OID counter stayed at 40,000, so OIDs will be allocated again!

Thanks for any help, Dirk
Re: [BUGS] [ADMIN] How to set the global OID counter? COPY WITH OIDS does
This is not a large object. We are seeing rows with duplicate oids because the OID counter is not changed after the dump (exported with --oids) has been loaded. How does 8.1 prevent allocating duplicate OIDs?

Regards, Dirk

Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > Dirk Lutzebäck wrote:
> > > how can one set the global OID counter in 8.1.X? We think it would work
> > > in 8.0.X using the COPY WITH OIDS command but this does not work in
> > > 8.1.X anymore.
> >
> > pg_resetxlog -o (Postmaster stopped of course)
>
> Possibly more to the point: why do you think you need to mess with the
> counter? 8.1 is smart enough not to assign conflicting OIDs to large
> objects.
>
> regards, tom lane
Re: [BUGS] [ADMIN] How to set the global OID counter? COPY WITH OIDS does
True.

Dirk

Tom Lane wrote:
> Dirk Lutzebäck <[EMAIL PROTECTED]> writes:
> > This is not a large object. We are seeing rows with duplicate oids
> > because the OID counter is not changed after the dump (exported with
> > --oids) is being loaded. How does 8.1 prevent to allocate duplicate OIDs?
>
> If there's a unique index on the OID column then 8.1 will not allocate
> duplicate OIDs. If there's not such a unique index, you had no guarantee
> of no-duplicates before 8.1 either.
>
> regards, tom lane
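[Editorial note] A minimal sketch of the safeguard described above, with a hypothetical table name: with a unique index on the oid system column, 8.1 skips counter values that would collide. The counter itself can be forced with the pg_resetxlog -o option mentioned earlier in the thread (postmaster stopped).

  -- Create a table that keeps OIDs and enforce uniqueness of the oid column.
  CREATE TABLE items (val text) WITH OIDS;
  CREATE UNIQUE INDEX items_oid_idx ON items (oid);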
[BUGS] partial indexes not used on parameterized queries?
Hi, we are using postgresql 8.1.4 and found that partial indexes are not being used when the query is run with parameters, e.g. in a function. When the same query is run with the values written literally into the query string, the partial indexes are used.

Here is the index:

CREATE INDEX c_6012_index ON consumption (voi) WHERE code = 6012 AND val1 IS NULL;

Here is the query with the values given literally in the query string, which uses an index scan:

explain analyze UPDATE c SET val1=1784 WHERE code=6012 AND voi='1923328-8-0-0' AND val1 IS NULL;

                                                    QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
 Index Scan using c_6012_index on c (cost=0.00..4.71 rows=1 width=164) (actual time=0.196..0.196 rows=0 loops=1)
   Index Cond: (voi = '1923328-8-0-0'::text)
   Filter: ((code = 6012) AND (val1 IS NULL))
 Total runtime: 0.304 ms
(4 rows)

Now put it in a function with parameter passing and we get a seq scan:

CREATE FUNCTION setsize(integer, integer, text) RETURNS integer AS $BODY$
DECLARE
    v_size alias for $1;
    v_code alias for $2;
    v_voi alias for $3;
    r record;
BEGIN
    FOR r IN EXPLAIN UPDATE c SET val1=v_size WHERE code=v_code AND voi=v_voi AND val1 IS NULL LOOP
        RAISE NOTICE '%', r;
    END LOOP;
    RETURN 0;
END;
$BODY$ LANGUAGE plpgsql;

# select setsize(1784, 6012, '1923328-8-0-0');
NOTICE: ("Seq Scan on c (cost=0.00..344372.82 rows=1 width=164)")
NOTICE: (" Filter: ((code = $2) AND (voi = $3) AND (val1 IS NULL))")
 setsize
---------
       0
(1 row)

Bummer, a sequential scan is being run. Any clues? Has this behaviour changed at some point?

Regards, Dirk
Re: [BUGS] partial indexes not used on parameterized queries?
Hi Simon,

are you sure this has not been changed? I'm pretty sure my code worked a while ago. We are using Perl DBD::Pg on the client side and almost never pass parameters inside the SQL string, for security reasons. I can't say whether it broke for us from 8.0 -> 8.1 or in one of the minor 8.1 releases.

In any case I would see this as a security problem, because you cannot control SQL injection as easily (as you can with DBD::Pg placeholders) if you have to interpolate parameters into the SQL string to get partial indexes used.

Regards, Dirk

Simon Riggs wrote:
> On Mon, 2006-07-10 at 12:22 +0200, Dirk Lutzebäck wrote:
> > we are using postgresql 8.1.4 and found that partial indexes are not
> > being used when the query is run using parameters, eg. in a function.
> > When running the same query with arguments passed in the query string
> > partial indexes are being used.
> > Any clues? Has this behaviour changed for a while?
>
> No, it's always worked like this. The index can only be used if we know
> at plan time that the index predicate is true for all times that the
> query is executed. We cannot know this for the exact query and index
> combination you have requested. If we acted differently, your query
> would return the wrong answer in some circumstances. I can't find
> anything in the manual that explains this distinction.
>
> Here's an example that explains this more easily:
>
> If your index looked like this
>
> CREATE INDEX c_6000_index ON consumption (voi) WHERE code > 5000 AND val1 IS NULL;
>
> and your query like this
>
> UPDATE c SET val1=1784 WHERE ( code > 6000 AND val1 IS NULL ) AND code = ? AND voi = '1923328-8-0-0';
>
> ...then the index could be used, because the index predicate is implied
> by part of the query clause for all values of the parameter. So its best
> to look for some other static definition of the index.
>
> I'll submit a doc patch.
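[Editorial note] Following Simon's rule, a sketch of the usual workaround (names loosely based on the report and assumed here): keep the index predicate spelled out as constants inside the query so the planner can prove it at plan time, and parameterize only the indexed column.

  -- Partial index from the report:
  --   CREATE INDEX c_6012_index ON c (voi) WHERE code = 6012 AND val1 IS NULL;

  CREATE OR REPLACE FUNCTION setsize_6012(v_size integer, v_voi text) RETURNS void AS $$
  BEGIN
      UPDATE c SET val1 = v_size
      WHERE code = 6012        -- constant: matches the index predicate at plan time
        AND val1 IS NULL       -- constant: matches the index predicate at plan time
        AND voi = v_voi;       -- a parameter on the indexed column is fine
  END;
  $$ LANGUAGE plpgsql;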
Re: [BUGS] partial indexes not used on parameterized queries?
Ok, we checked our client code to eliminate this problem. Thanks for the doc patch.

Regards, Dirk

Simon Riggs wrote:
> On Mon, 2006-07-10 at 13:35 +0200, Dirk Lutzebäck wrote:
> > In any case I would see this as a security problem because you cannot
> > control sql code injection easily (as with using DBD::Pg) if you have
> > to pass parameters in the SQL string to use partial indexes.
>
> That's not what I said. I see no security problem as a result of this
> behaviour.
[BUGS] BUG #3516: Incomplete #ifdef statement in s_lock.h
The following bug has been logged online:

Bug reference: 3516
Logged by: Dirk Tilger
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.2.4
Operating system: Linux with Intel compiler on ia64
Description: Incomplete #ifdef statement in s_lock.h

Details:

I have been compiling postgresql 8.0, 8.1 and 8.2.4 with the Intel compiler in the past successfully. This time something went wrong, and although I can't tell precisely how I triggered it, I have found a fix.

We have an increasingly complex Makefile that compiles our third-party applications. While playing with the options of the compiler I must have activated something that triggered the #error on line 809 in file s_lock.h. I was able to fix the problem by changing line 81 of s_lock.h to:

| #if defined(__GNUC__) || defined(__INTEL_COMPILER) || defined(__ICC)

The icc 9.1.045 manual page says (reformatted):

-8<-
__ICC
  Value on IA-32 -- 910
  Value on EM64T -- 910
  Value on Itanium Architecture -- NA
  Notes -- Assigned value refers to the compiler (e.g., 800 is 8.00).
  Supported for legacy reasons. Use __INTEL_COMPILER instead.
-8<-

Best regards,
Dirk Tilger

PS: I'm not subscribed to the bugs list, put me in CC for communication.
Re: [BUGS] BUG #3516: Incomplete #ifdef statement in s_lock.h
On Sun, Aug 05, 2007 at 11:20:18AM -0400, Tom Lane wrote:
> "Dirk Tilger" <[EMAIL PROTECTED]> writes:
> > Operating system: Linux with Intel compiler on ia64
>
> > I have been compiling postgresql 8.0, 8.1 and 8.2.4 with the Intel compiler
> > in the past successfully. This time something went wrong and although I
> > can't tell precisely how I triggered it, I have found a fix.
>
> BTW, what I found in googling suggested that (1) icc never has defined
> __ICC on 64-bit machines, and (2) although it does define __GNUC__ by
> default, there is a way to turn that off. So I surmise that your
> relevant change was adding a compiler flag that disabled the definition
> of __GNUC__. That doesn't seem to have stopped it from accepting
> GNU-style asm directives, though, so I kinda wonder what compiler
> behavior does change and what was the point of your flag change.

The compiler seems to have been called with:

| icc -mp -no-gcc -mcpu=itanium2 -mtune=itanium2

The manual page about '-no-gcc' says:

  -no-gcc  Do not predefine the __GNUC__, __GNUC_MINOR__, and
           __GNUC_PATCHLEVEL__ macros.

FYI: we also apply the attached patch to postgres, so that the 'configure' script would not add -mp1 when -mp was already specified. From the manual:

| -mp   Maintain floating-point precision (disables some opti-
|       mizations). The -mp option restricts optimization to
|       maintain declared precision and to ensure that float-
|       ing-point arithmetic conforms more closely to the ANSI
|       and IEEE standards. For most programs, specifying this
|       option adversely affects performance. If you are not sure
|       whether your application needs this option, try compiling
|       and running your program both with and without it to
|       evaluate the effects on both performance and precision.
|
| -mp1  Improve floating-point precision. -mp1 disables fewer
|       optimizations and has less impact on performance than
|       -mp.

Best regards,
Dirk Tilger.

--- postgresql-8.1.5/configure.in	2006-10-12 23:41:27.0 +0400
+++ postgresql-8.1.5-patched/configure.in	2007-04-01 18:35:40.0 +0400
@@ -266,11 +266,6 @@
     PGAC_PROG_CC_CFLAGS_OPT([-Wdeclaration-after-statement])
     PGAC_PROG_CC_CFLAGS_OPT([-Wendif-labels])
-  else
-    # Intel compiler has a bug/misoptimization in checking for
-    # division by NAN (NaN == 0), -mp1 fixes it, so add it to the
-    # CFLAGS.
-    PGAC_PROG_CC_CFLAGS_OPT([-mp1])
   fi

 # Disable strict-aliasing rules; needed for gcc 3.3+
[BUGS] BUG #3693: PSQLException when using ResultSet.getLong(String) in turkish locale
The following bug has been logged online:

Bug reference: 3693
Logged by: Dirk Moebius
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.2.4
Operating system: Windows 2000
Description: PSQLException when using ResultSet.getLong(String) in Turkish locale

Details:

JDBC driver: postgresql-jdbc-8.2-506.zip
OS: Windows 2000, but applies to any other OS
Java: Sun JRE 1.5.11

Using one of the get...(String columnName) methods of ResultSet in a JVM running in a Turkish environment results in a PSQLException saying that the named column cannot be found. Here's the shortest testcase I could come up with:

import java.sql.*;
import java.util.Locale;

public class PostgresLocaleProblem {
    public static void main(String[] args) throws Exception {
        Locale.setDefault(new Locale("tr", "TR"));
        Class.forName("org.postgresql.Driver");
        String url = "jdbc:postgresql://localhost/?user=postgres&password=postgres";
        Connection conn = DriverManager.getConnection(url);
        try {
            conn.setAutoCommit(true);
            conn.createStatement().execute("drop table if exists test");
            conn.createStatement().execute("create table test ( id int )");
            conn.createStatement().execute("insert into test values (1)");
            ResultSet rs = conn.createStatement().executeQuery("select * from test");
            rs.next();
            System.out.println("id: " + rs.getLong("ID"));
            rs.close();
        } finally {
            conn.close();
        }
    }
}

This results in:

Exception in thread "main" org.postgresql.util.PSQLException: The column name ID was not found in this ResultSet.
    at org.postgresql.jdbc2.AbstractJdbc2ResultSet.findColumn(AbstractJdbc2ResultSet.java:2465)
    at org.postgresql.jdbc2.AbstractJdbc2ResultSet.getLong(AbstractJdbc2ResultSet.java:2332)
    at PostgresLocaleProblem.main(PostgresLocaleProblem.java:18)

(I translated the Turkish exception message text for you.)

The reason is that AbstractJdbc2ResultSet.findColumn() uses an internal HashMap for quick access to the field indices of the result set (for performance reasons, I guess). The problem is that the HashMap is keyed by toLowerCase() values of the field names. The lowercase variant of the column name "ID" in the Turkish locale is "ıd" (the i doesn't have a dot), so it is not "id", and the column is not found.

When using toLowerCase(), the developer must be well aware that Java applies the default locale's case-mapping rules, so lowercasing a char/String may not always produce the expected result. Note that the Swedish locale has exactly the same problem.

A correct solution would be to use locale-agnostic keys in the HashMap, e.g. by defining something like this:

class LocaleAgnosticCaseInsensitiveHashMap extends HashMap {
    public Object get(String key) {
        return super.get(transformKey(key));
    }
    public void put(String key, Object value) {
        super.put(transformKey(key), value);
    }
    private String transformKey(String key) {
        return key.toLowerCase(Locale.US);
    }
}

Hope this helps.

Dirk.
Re: [BUGS] BUG #1453: NULLs in UNION query
I've also come across this in 7.4. You could also use:

SELECT NULL AS Test
UNION ALL
SELECT NULL::int
UNION ALL
SELECT 0

Dirk

Tom Lane wrote:
> "" <[EMAIL PROTECTED]> writes:
> > The following query should not raise an error ("ERROR: UNION types text
> > and integer cannot be matched"):
> >
> > SELECT NULL AS Test UNION ALL SELECT NULL UNION ALL SELECT 0
>
> Hmm ... it works if you do
>
> SELECT NULL AS Test UNION ALL (SELECT NULL UNION ALL SELECT 0)
>
> The problem is that transformSetOperationTree() resolves the column
> datatypes one UNION pair at a time, and so the two NULLs default to
> "text" before we ever look at the zero. It's probably possible to
> rejigger it so that the common type is chosen considering all the
> set-operation arms in parallel, but it doesn't seem like a trivial
> change. (Translation: there will not be an immediate fix.)
>
> As a workaround, perhaps you could cast one or all of the nulls to int
> explicitly:
>
> SELECT NULL::int AS Test UNION ALL SELECT NULL UNION ALL SELECT 0
>
> regards, tom lane
[BUGS] BUG #1563: wrong week returned by date_trunc('week', ...)
The following bug has been logged online:

Bug reference: 1563
Logged by: Dirk Raetzel
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.0.1
Operating system: i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 3.4.2 (mingw-special)
Description: wrong week returned by date_trunc('week', ...)

Details:

date_trunc('week', ...) returns the wrong week for the first days in January if their calendar week belongs to the previous year.

Example:

select date_trunc('week', timestamp '2005-01-01');

gives: 2006-01-02 00:00:00
but 2004-12-27 00:00:00 would be right.

sample code:

create table weektest ( date timestamp );
copy weektest from STDIN;
'1999-01-01'
'2000-01-01'
'2001-01-01'
'2002-01-01'
'2003-01-01'
'2004-01-01'
'2005-01-01'
'2006-01-01'
'2007-01-01'
'2008-01-01'
'2009-01-01'
\.

        date         |       week_t        | week_p
---------------------+---------------------+--------
 1999-01-01 00:00:00 | 2000-01-03 00:00:00 |     53
 2000-01-01 00:00:00 | 2000-12-25 00:00:00 |     52
 2001-01-01 00:00:00 | 2001-01-01 00:00:00 |      1
 2002-01-01 00:00:00 | 2001-12-31 00:00:00 |      1
 2003-01-01 00:00:00 | 2002-12-30 00:00:00 |      1
 2004-01-01 00:00:00 | 2003-12-29 00:00:00 |      1
 2005-01-01 00:00:00 | 2006-01-02 00:00:00 |     53
 2006-01-01 00:00:00 | 2006-12-25 00:00:00 |     52
 2007-01-01 00:00:00 | 2007-01-01 00:00:00 |      1
 2008-01-01 00:00:00 | 2007-12-31 00:00:00 |      1
 2009-01-01 00:00:00 | 2008-12-29 00:00:00 |      1
(11 rows)
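[Editorial note] Until the underlying bug is fixed, a workaround sketch (assuming the intended semantics is the start of the ISO week, i.e. the preceding Monday) is to derive the week start from the day-of-week number instead of date_trunc:

  -- dow: 0 = Sunday .. 6 = Saturday; (dow + 6) % 7 is the number of days since Monday.
  SELECT d,
         d - (((extract(dow FROM d)::int + 6) % 7) || ' days')::interval AS week_start
  FROM (SELECT timestamp '2005-01-01' AS d) AS t;
  -- 2005-01-01 is a Saturday, so week_start comes out as 2004-12-27 00:00:00.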
[BUGS] BUG #1647: shows version 7.1, doesn't create tablespaces etc.
The following bug has been logged online:

Bug reference: 1647
Logged by: Dirk Bade
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.0.1
Operating system: SUSE LINUX 7.3
Description: shows version 7.1, doesn't create tablespaces etc.

Details:

postgresql-8.0.1.tar.gz installs fine and the tests run ok. But after initdb, PG_VERSION shows 7.1, and simple commands like CREATE TABLESPACE or ALTER TABLE return errors. What's wrong?
[BUGS] BUG #1774: ecpg preprocessor produces a wrong varchar struct
The following bug has been logged online:

Bug reference: 1774
Logged by: Dirk Jagdmann
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.0.3
Operating system: i686 Linux 2.6
Description: ecpg preprocessor produces a wrong varchar struct

Details:

The ecpg preprocessor produces wrong code which does not compile.

--- input.pgc ---
int main()
{
  EXEC SQL BEGIN DECLARE SECTION;
  VARCHAR v;
  EXEC SQL END DECLARE SECTION;
  return 0;
}
---

Now process it with "ecpg input.pgc", which results in:

--- input.c ---
/* Processed by ecpg (4.0.1) */
/* These include files are added by the preprocessor */
#include <ecpgtype.h>
#include <ecpglib.h>
#include <ecpgerrno.h>
#include <sqlca.h>
/* End of automatic include section */

#line 1 "input.pgc"

int main()
{
  /* exec sql begin declare section */

#line 4 "input.pgc"
  struct varchar_v { int len; char arr[ -1 ]; } v ;
  /* exec sql end declare section */
#line 5 "input.pgc"

  return 0;
}
---

The array size of the struct is -1 and this does not compile:

input.pgc: In function `main':
input.pgc:4: error: size of array `arr' is negative

I suppose the size should have been 0, so it can be malloced by the backend.
[BUGS] BUG #1975: Postgres 8.0.4 build fails on AIX
The following bug has been logged online: Bug reference: 1975 Logged by: Dirk Pirschel Email address: [EMAIL PROTECTED] PostgreSQL version: 8.0.4 Operating system: AIX Description:Postgres 8.0.4 build fails on AIX Details: Postgres 8.0.4 build fails on AIX. $cd /var/tmp/postgresql-8.0.4 $./configure --prefix=$HOME/software # [no errors] $gmake touch libpq.a ../../../src/backend/port/aix/mkldexport.sh libpq.a > libpq.exp gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Wendif-labels -fno-strict-aliasing -Wl,-bnoentry -Wl,-H512 -Wl,-bM:SRE -o libpq.so libpq.a -L../../../src/port -L/client/lib -L../../../src/port -L/client/lib -lnsl -Wl,-bI:../../../src/backend/postgres.imp -Wl,-bE:libpq.exp ld: 0711-224 WARNING: Duplicate symbol: .pqStrerror ld: 0711-224 WARNING: Duplicate symbol: .pqGetpwuid ld: 0711-224 WARNING: Duplicate symbol: .getnameinfo_all ld: 0711-224 WARNING: Duplicate symbol: .getaddrinfo_all ld: 0711-224 WARNING: Duplicate symbol: .freeaddrinfo_all ld: 0711-224 WARNING: Duplicate symbol: .rangeSockAddr ld: 0711-224 WARNING: Duplicate symbol: .SockAddr_cidr_mask ld: 0711-224 WARNING: Duplicate symbol: .promote_v4_to_v6_addr ld: 0711-224 WARNING: Duplicate symbol: .promote_v4_to_v6_mask ld: 0711-224 WARNING: Duplicate symbol: .pqsignal ld: 0711-224 WARNING: Duplicate symbol: .pg_char_to_encoding ld: 0711-224 WARNING: Duplicate symbol: .pg_valid_client_encoding ld: 0711-224 WARNING: Duplicate symbol: .pg_valid_server_encoding ld: 0711-224 WARNING: Duplicate symbol: .pg_char_to_encname_struct ld: 0711-224 WARNING: Duplicate symbol: .pg_encoding_to_char ld: 0711-224 WARNING: Duplicate symbol: pg_encname_tbl ld: 0711-224 WARNING: Duplicate symbol: pg_encname_tbl_sz ld: 0711-224 WARNING: Duplicate symbol: pg_enc2name_tbl ld: 0711-224 WARNING: Duplicate symbol: .pg_strcasecmp ld: 0711-224 WARNING: Duplicate symbol: .pg_strncasecmp ld: 0711-224 WARNING: Duplicate symbol: .pg_toupper ld: 0711-224 WARNING: Duplicate symbol: .pg_tolower ld: 0711-224 WARNING: Duplicate symbol: .pg_encoding_mblen ld: 0711-224 WARNING: Duplicate symbol: .pg_utf_mblen ld: 0711-224 WARNING: Duplicate symbol: .pg_mule_mblen ld: 0711-224 WARNING: Duplicate symbol: .pg_mic_mblen ld: 0711-224 WARNING: Duplicate symbol: .pg_encoding_dsplen ld: 0711-224 WARNING: Duplicate symbol: .pg_encoding_max_length ld: 0711-224 WARNING: Duplicate symbol: pg_wchar_table ld: 0711-224 WARNING: Duplicate symbol: pg_utf_mblen ld: 0711-224 WARNING: Duplicate symbol: pg_mule_mblen ld: 0711-224 WARNING: Duplicate symbol: .pg_set_noblock ld: 0711-224 WARNING: Duplicate symbol: .pg_set_block ld: 0711-224 WARNING: Duplicate symbol: .EncryptMD5 ld: 0711-224 WARNING: Duplicate symbol: .md5_hash ld: 0711-224 WARNING: Duplicate symbol: lo_open ld: 0711-224 WARNING: Duplicate symbol: .lo_close ld: 0711-224 WARNING: Duplicate symbol: .lo_read ld: 0711-224 WARNING: Duplicate symbol: .lo_write ld: 0711-224 WARNING: Duplicate symbol: .lo_lseek ld: 0711-224 WARNING: Duplicate symbol: .lo_creat ld: 0711-224 WARNING: Duplicate symbol: .lo_tell ld: 0711-224 WARNING: Duplicate symbol: .lo_unlink ld: 0711-224 WARNING: Duplicate symbol: .lo_import ld: 0711-224 WARNING: Duplicate symbol: .lo_export ld: 0711-224 WARNING: Duplicate symbol: lo_close ld: 0711-224 WARNING: Duplicate symbol: lo_read ld: 0711-224 WARNING: Duplicate symbol: lo_write ld: 0711-224 WARNING: Duplicate symbol: lo_lseek ld: 0711-224 WARNING: Duplicate symbol: lo_creat ld: 0711-224 WARNING: Duplicate symbol: lo_tell ld: 0711-224 WARNING: Duplicate symbol: lo_unlink ld: 0711-224 
WARNING: Duplicate symbol: lo_import ld: 0711-224 WARNING: Duplicate symbol: lo_export ld: 0711-224 WARNING: Duplicate symbol: pqsignal ld: 0711-224 WARNING: Duplicate symbol: md5_hash ld: 0711-224 WARNING: Duplicate symbol: EncryptMD5 ld: 0711-224 WARNING: Duplicate symbol: getaddrinfo_all ld: 0711-224 WARNING: Duplicate symbol: freeaddrinfo_all ld: 0711-224 WARNING: Duplicate symbol: getnameinfo_all ld: 0711-224 WARNING: Duplicate symbol: rangeSockAddr ld: 0711-224 WARNING: Duplicate symbol: SockAddr_cidr_mask ld: 0711-224 WARNING: Duplicate symbol: promote_v4_to_v6_addr ld: 0711-224 WARNING: Duplicate symbol: promote_v4_to_v6_mask ld: 0711-224 WARNING: Duplicate symbol: pg_mic_mblen ld: 0711-224 WARNING: Duplicate symbol: pg_encoding_mblen ld: 0711-224 WARNING: Duplicate symbol: pg_encoding_dsplen ld: 0711-224 WARNING: Duplicate symbol: pg_encoding_max_length ld: 0711-224 WARNING: Duplicate symbol: pg_valid_client_encoding ld: 0711-224 WARNING: Duplicate symbol: pg_valid_server_encoding ld: 0711-224 WARNING: Duplicate symbol: pg_char_to_encname_struct ld: 0711-224 WARNING: Duplicate symbol: pg_char_to_encoding ld: 0711-224 WARNING: Duplicate symbol: pg_encoding_to_char ld: 0711-224 WARNING: Duplicate symbol: pg_set_noblock ld: 0711-224 WARNING: Duplicate symbol: pg_set_block ld: 0711-224 WARNING: Duplicate symbol: pg_strcasecmp ld: 0711-224 WARNING: Duplicate symbol: pg_strncasecmp
[BUGS] pg_dump: schema with OID 559701082 does not exist
Hi, I get the following error when I call:

# pg_dump db
pg_dump: schema with OID 559701082 does not exist

This happens with 8.0.1 on RHEL 3.0. I cannot dump the database with either --schema-only or --data-only. What can I do?

Thanks for help, Dirk
Re: [BUGS] pg_dump: schema with OID 559701082 does not exist
Yes, I think so. What search path do you mean? These tables were all temporary tables.

Dirk

Tom Lane wrote:
> Dirk Lutzebäck <[EMAIL PROTECTED]> writes:
> > The problem I'm facing is the following:
> >
> > cs1=# select relname from pg_class where relname like 'bm%';
> >              relname
> > -----------------------------------
> >  bm_pagehits_1133271374_047421_941
> >  bm_pagehits_1133271378_920896_129
> >  bm_pagehits_1133282272_744243_536
> >  bm_pagehits_1133284780_743839_884
> > (4 rows)
> >
> > cs1=# drop table bm_pagehits_1133271374_047421_941;
> > ERROR: table "bm_pagehits_1133271374_047421_941" does not exist
>
> Uh, are you sure these are in a schema that's in your search path?
>
> regards, tom lane
Re: [BUGS] pg_dump: schema with OID 559701082 does not exist
I have found and deleted an entry with pg_class.relnamespace = 559701082, but found the OID nowhere else. I still cannot dump the schema. Is there something like a system catalog integrity checker?

The problem I'm facing is the following:

cs1=# select relname from pg_class where relname like 'bm%';
             relname
-----------------------------------
 bm_pagehits_1133271374_047421_941
 bm_pagehits_1133271378_920896_129
 bm_pagehits_1133282272_744243_536
 bm_pagehits_1133284780_743839_884
(4 rows)

cs1=# drop table bm_pagehits_1133271374_047421_941;
ERROR: table "bm_pagehits_1133271374_047421_941" does not exist
cs1=# drop table bm_pagehits_1133271378_920896_129;
ERROR: table "bm_pagehits_1133271378_920896_129" does not exist
cs1=# drop table bm_pagehits_1133282272_744243_536;
ERROR: table "bm_pagehits_1133282272_744243_536" does not exist
cs1=# drop table bm_pagehits_1133284780_743839_884;
ERROR: table "bm_pagehits_1133284780_743839_884" does not exist

Tom Lane wrote:
> Dirk Lutzebäck <[EMAIL PROTECTED]> writes:
> > I get the following error when I call:
> > # pg_dump db
> > pg_dump: schema with OID 559701082 does not exist
>
> I suspect a corrupted catalog entry. Look through your system catalogs
> for an entry referring to namespace 559701082 --- you want to look in
> pg_class.relnamespace, pg_proc.pronamespace, etc.
>
> regards, tom lane
Re: [BUGS] pg_dump: schema with OID 559701082 does not exist
Hi Tom,

I have now deleted every temp table I know of from pg_temp_nnn using your approach, but I still can't dump the schema:

pg_dump: schema with OID 559701082 does not exist

I couldn't find any reference to 559701082 in pg_class, pg_namespace or pg_proc.

Regards, Dirk

Tom Lane wrote:
> Dirk Lutzebäck <[EMAIL PROTECTED]> writes:
> > Yes, I think so. What search path do you mean? These tables were all
> > temporary tables.
>
> In that case they're most likely *not* in your search path, unless your
> session happens to have seized on the same pg_temp_nn schema they are in
> (in which case it'd have deleted them, so I guess it didn't). I'd suggest
> joining to pg_namespace to determine which schema(s) they are in, and
> then deleting with the explicit schema, ie,
>
> DROP TABLE pg_temp_nnn.bm_pagehits_...
>
> Note you'll probably need to be superuser to do this. Or you could just
> ignore them, because they'll get zapped automatically next time the
> containing pg_temp schema gets reused.
>
> regards, tom lane
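[Editorial note] A sketch of the qualified lookup Tom describes, using the table names from this thread: join pg_class to pg_namespace to see which pg_temp schema holds the leftovers, then drop them schema-qualified as superuser (the schema name in the DROP is only an example).

  -- Which schema do the leftover tables live in?
  SELECT n.nspname, c.relname
  FROM pg_class c
  JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE c.relname LIKE 'bm_pagehits%';

  -- Then drop them with the schema spelled out, e.g.:
  -- DROP TABLE pg_temp_7.bm_pagehits_1133271374_047421_941;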
Re: [BUGS] pg_dump: schema with OID 559701082 does not exist
Yes, I finally found the reference in pg_type. Thanks for your help!

Regards, Dirk

Tom Lane wrote:
> Dirk Lutzebäck <[EMAIL PROTECTED]> writes:
> > I have now deleted every temp table I know from pg_temp_nnn using your
> > approach but still can't dump the schema:
> > pg_dump: schema with OID 559701082 does not exist
> > I couldn't find any reference to 559701082 in pg_class, pg_namespace
> > or pg_proc.
>
> Better keep looking then. See
> http://www.postgresql.org/docs/8.1/static/catalogs.html
> (adjust link for your PG version) to find out which catalogs reference
> pg_namespace.
>
> regards, tom lane
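[Editorial note] For reference, a sketch of the kind of catalog sweep that turns up such a dangling reference (the OID is the one from the pg_dump error above; in this thread it was finally found in pg_type):

  SELECT 'pg_class' AS catalog, relname AS name FROM pg_class    WHERE relnamespace = 559701082
  UNION ALL
  SELECT 'pg_proc',             proname         FROM pg_proc     WHERE pronamespace = 559701082
  UNION ALL
  SELECT 'pg_type',             typname         FROM pg_type     WHERE typnamespace = 559701082
  UNION ALL
  SELECT 'pg_operator',         oprname         FROM pg_operator WHERE oprnamespace = 559701082;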
Re: [BUGS] BUG #2092: No answer to bug reports 1975 and 2055
Hi Tom,

* Tom Lane wrote on Sat, 03 Dec 2005 at 13:11 -0500:
> * Dirk Pirschel writes:
> > No answer to bug reports 1975 and 2055 yet. Are you going to fix these
> > issues, or is AIX currently unsupported?
>
> You seem to have a problem with missing SSL in the link, but I don't
> see why that should be; ecpg certainly tries to link to ssl. You'll
> need to dig into it a little bit for yourself. You haven't provided
> enough context to let anyone else reproduce the problem even if they
> had AIX which most of us don't

The first time, I posted all of the configure and make output, and received a "message size too large" from the mailing list server. Unfortunately, the www bug reporting form did not complain about the message size; only the mail server does, later. The second time I truncated the output to the compiler errors.

> for example, what configure arguments did you use?

Nothing relevant:

./configure --prefix=$HOME/software --with-includes=/client/include --with-libs=/client/lib
[...]
checking whether to build with OpenSSL support... no

Any hints on where to investigate?

-Dirk

--
Windoze is bootiful
Re: [BUGS] BUG #2092: No answer to bug reports 1975 and 2055
Hi Tom,

* Tom Lane wrote on Sat, 03 Dec 2005 at 15:02 -0500:
> looking at your command again, the -L/client/lib is in there twice:
> [...]
> Assuming there is a libpq in /client/lib, I bet this patch will help.

$ cd postgresql-8.1.0
$ patch -i ~/patch src/Makefile.shlib
$ ./configure --prefix=$HOME/software --with-includes=/client/include --with-libs=/client/lib
$ make
[...]
All of PostgreSQL successfully made. Ready to install.
$ make install
[...]
PostgreSQL installation complete.

Your patch works fine :-) Thanks!

Regards,
-Dirk

--
"If Microsoft can change and compete on quality, I've won." - Linus Torvalds
Re: [BUGS] BUG #2092: No answer to bug reports 1975 and 2055
Hi Tom,

* Tom Lane wrote on Sat, 03 Dec 2005 at 14:34 -0500:
> Well, *something* in your link is trying to pull in OpenSSL.
>
> What exactly is in /client/lib ... could it be that there is an existing
> SSL-dependent installation of libpq in there? In theory the link should
> find the libpq in ../../../../src/interfaces/libpq not the one in
> /client/lib, but we've seen bizarre linker search behavior before ...

$ cd /client/lib
$ ls -l *libpq*
lrwxrwxrwx 1 root system 41 Nov 18 2004 libpq.a -> /sw/rs_aix52/postgresql-7.4.6/lib/libpq.a
lrwxrwxrwx 1 root system 42 Nov 18 2004 libpq.so -> /sw/rs_aix52/postgresql-7.4.6/lib/libpq.so
lrwxrwxrwx 1 root system 44 Nov 18 2004 libpq.so.3 -> /sw/rs_aix52/postgresql-7.4.6/lib/libpq.so.3

Regards,
-Dirk

--
Close the windows - the penguin is freezing
[BUGS] BUG #5740: contrib/spi/moddatetime.c doesn't work with timezones.
The following bug has been logged online:

Bug reference: 5740
Logged by: Dirk Heinrichs
Email address: dirk.heinri...@altum.de
PostgreSQL version: 8.4.5
Operating system: Linux
Description: contrib/spi/moddatetime.c doesn't work with timezones.

Details:

The moddatetime function provided by this module only works on columns of type "timestamp without time zone". It would be nice if it could also provide an analogous function, moddatetime_tz, offering the same functionality for columns of type "timestamp with time zone".

Thanks...

Dirk
Re: [BUGS] BUG #5740: contrib/spi/moddatetime.c doesn't work with timezones.
Am 02.11.2010 23:09, schrieb Dimitri Fontaine: > So I guess that you need to modify very little code to get the trigger > to work for both types. Please find the patch attached. It's against 8.4.5. Bye... Dirk diff -u spi.old/moddatetime.c spi/moddatetime.c --- spi.old/moddatetime.c 2010-10-01 15:35:31.0 +0200 +++ spi/moddatetime.c 2010-11-03 20:08:38.030772830 +0100 @@ -22,6 +22,7 @@ PG_MODULE_MAGIC; extern Datum moddatetime(PG_FUNCTION_ARGS); +extern Datum moddatetimetz(PG_FUNCTION_ARGS); PG_FUNCTION_INFO_V1(moddatetime); @@ -111,7 +112,7 @@ /* 1 is the number of items in the arrays attnum and newdt. attnum is the positional number of the field to be updated. newdt is the new datetime stamp. - NOTE that attnum and newdt are not arrays, but then a 1 ellement array + NOTE that attnum and newdt are not arrays, but then a 1 element array is not an array any more then they are. Thus, they can be considered a one element array. */ @@ -123,6 +124,111 @@ relname, SPI_result); /* Clean up */ + pfree(relname); + + return PointerGetDatum(rettuple); +} + +PG_FUNCTION_INFO_V1(moddatetimetz); + +Datum +moddatetimetz(PG_FUNCTION_ARGS) +{ + TriggerData *trigdata = (TriggerData *) fcinfo->context; + Trigger*trigger; /* to get trigger name */ + int nargs; /* # of arguments */ + int attnum; /* positional number of field to change */ + Datum newdt; /* The current datetime. */ + char **args; /* arguments */ + char *relname; /* triggered relation name */ + Relation rel; /* triggered relation */ + HeapTuple rettuple = NULL; + TupleDesc tupdesc; /* tuple description */ + + if (!CALLED_AS_TRIGGER(fcinfo)) + /* internal error */ + elog(ERROR, "moddatetimetz: not fired by trigger manager"); + + if (TRIGGER_FIRED_FOR_STATEMENT(trigdata->tg_event)) + /* internal error */ + elog(ERROR, "moddatetimetz: cannot process STATEMENT events"); + + if (TRIGGER_FIRED_AFTER(trigdata->tg_event)) + /* internal error */ + elog(ERROR, "moddatetimetz: must be fired before event"); + + if (TRIGGER_FIRED_BY_INSERT(trigdata->tg_event)) + /* internal error */ + elog(ERROR, "moddatetimetz: must be fired before event"); + else if (TRIGGER_FIRED_BY_UPDATE(trigdata->tg_event)) + rettuple = trigdata->tg_newtuple; + else + /* internal error */ + elog(ERROR, "moddatetimetz: cannot process DELETE events"); + + rel = trigdata->tg_relation; + relname = SPI_getrelname(rel); + + trigger = trigdata->tg_trigger; + + nargs = trigger->tgnargs; + + if (nargs != 1) + /* internal error */ + elog(ERROR, "moddatetimetz (%s): A single argument was expected", relname); + + args = trigger->tgargs; + /* must be the field layout? */ + tupdesc = rel->rd_att; + + /* Get the current datetime. */ + newdt = DirectFunctionCall3(timestamptz_in, +CStringGetDatum("now"), +ObjectIdGetDatum(InvalidOid), +Int32GetDatum(-1)); + + /* + * This gets the position in the tuple of the field we want. args[0] being + * the name of the field to update, as passed in from the trigger. + */ + attnum = SPI_fnumber(tupdesc, args[0]); + + /* + * This is were we check to see if the field we are supposed to update + * even exits. The above function must return -1 if name not found? + */ + if (attnum < 0) + ereport(ERROR, +(errcode(ERRCODE_TRIGGERED_ACTION_EXCEPTION), + errmsg("\"%s\" has no attribute \"%s\"", + relname, args[0]))); + + /* + * OK, this is where we make sure the timestamp field that we are + * modifying is really a timestamptz field. 
Hay, error checking, what a + * novel idea !-) + */ + if (SPI_gettypeid(tupdesc, attnum) != TIMESTAMPTZOID) + ereport(ERROR, +(errcode(ERRCODE_TRIGGERED_ACTION_EXCEPTION), + errmsg("attribute \"%s\" of \"%s\" must be type TIMESTAMP WITH TIME ZONE", + args[0], relname))); + +/* 1 is the number of items in the arrays attnum and newdt. + attnum is the positional number of the field to be updated. + newdt is the new datetime stamp. + NOTE that attnum and newdt are not arrays, but then a 1 element array + is not an array any more then they are. Thus, they can be considered a + one element array. +*/ + rettuple = SPI_modifytuple(rel, rettuple, 1, &attnum, &newdt, NULL); + + if (rettuple == NULL) + /* internal error */ + elog(ERROR, "moddatetimetz (%s): %d returned by SPI_modifytuple", + relname, SPI_result); + +/* Clean up */ pfree(relname); return PointerGetDatum(rettuple); diff -u spi.old/moddatetime.sql.in spi/moddatetime.sql.in --- spi.old/moddatetime.sql.in 2010-10-01 15:35:31.0 +0200 +++ spi/moddatetime.sql.in 2010-11-03 19:41:51.806037512 +0100 @@ -7,3 +7,8 @@ RETURNS trigger AS 'MODULE_PATHNAME' LANGUAGE C; + +CREATE OR REPLACE FUNCTION moddatetimetz() +RETURNS trigger +AS 'MODULE_PATHNAME' +LANGUAGE C; signature.asc Description: OpenPGP digital signature
Re: [BUGS] BUG #5740: contrib/spi/moddatetime.c doesn't work with timezones.
On 04.11.2010 04:55, Tom Lane wrote:
> I don't actually see any point in having two functions at all. Since
> the trigger is examining the column type internally, it could perfectly
> well do the right thing at runtime depending on column type.

Got the point. Here's another patch, hope my limited C skills are sufficient... Works here, at least.

BTW: Is there a way to achieve the same in pure PL/pgSQL or PL/perl?

Bye...

Dirk

diff -u spi.old/moddatetime.c spi/moddatetime.c
--- spi.old/moddatetime.c	2010-10-01 15:35:31.0 +0200
+++ spi/moddatetime.c	2010-11-04 19:32:11.877235825 +0100
@@ -38,6 +38,7 @@
 	Relation	rel;			/* triggered relation */
 	HeapTuple	rettuple = NULL;
 	TupleDesc	tupdesc;		/* tuple description */
+	Oid			stampOid;

 	if (!CALLED_AS_TRIGGER(fcinfo))
 		/* internal error */
@@ -75,12 +76,6 @@
 	/* must be the field layout? */
 	tupdesc = rel->rd_att;

-	/* Get the current datetime. */
-	newdt = DirectFunctionCall3(timestamp_in,
-								CStringGetDatum("now"),
-								ObjectIdGetDatum(InvalidOid),
-								Int32GetDatum(-1));
-
 	/*
 	 * This gets the position in the tuple of the field we want. args[0] being
 	 * the name of the field to update, as passed in from the trigger.
@@ -102,16 +97,29 @@
 	 * modifying is really a timestamp field. Hay, error checking, what a
 	 * novel idea !-)
 	 */
-	if (SPI_gettypeid(tupdesc, attnum) != TIMESTAMPOID)
-		ereport(ERROR,
-				(errcode(ERRCODE_TRIGGERED_ACTION_EXCEPTION),
-				 errmsg("attribute \"%s\" of \"%s\" must be type TIMESTAMP",
-						args[0], relname)));
+	stampOid = SPI_gettypeid(tupdesc, attnum);
+	if (stampOid == TIMESTAMPOID)
+		/* Get the current datetime. */
+		newdt = DirectFunctionCall3(timestamp_in,
+									CStringGetDatum("now"),
+									ObjectIdGetDatum(InvalidOid),
+									Int32GetDatum(-1));
+	else if (stampOid == TIMESTAMPTZOID)
+		/* Get the current datetime. */
+		newdt = DirectFunctionCall3(timestamptz_in,
+									CStringGetDatum("now"),
+									ObjectIdGetDatum(InvalidOid),
+									Int32GetDatum(-1));
+	else
+		ereport(ERROR,
+				(errcode(ERRCODE_TRIGGERED_ACTION_EXCEPTION),
+				 errmsg("attribute \"%s\" of \"%s\" must be type TIMESTAMP",
+						args[0], relname)));

 /* 1 is the number of items in the arrays attnum and newdt.
    attnum is the positional number of the field to be updated.
    newdt is the new datetime stamp.
-   NOTE that attnum and newdt are not arrays, but then a 1 ellement array
+   NOTE that attnum and newdt are not arrays, but then a 1 element array
    is not an array any more then they are.  Thus, they can be considered a
    one element array.
 */
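[Editorial note] To the closing question: yes, the same effect can be had in pure PL/pgSQL, since assigning now() to a NEW column is coerced to whichever timestamp flavour that column has. A minimal sketch with the stamped column hard-coded as "mdate" and a hypothetical table name (a fully generic version would read the column name from TG_ARGV[0] and need dynamic SQL):

  CREATE OR REPLACE FUNCTION moddatetime_plpgsql() RETURNS trigger AS $$
  BEGIN
      -- Works for both timestamp and timestamptz columns: now() is cast on assignment.
      NEW.mdate := now();
      RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER mytable_moddatetime
      BEFORE UPDATE ON mytable
      FOR EACH ROW EXECUTE PROCEDURE moddatetime_plpgsql();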