[BUGS] BUG #5317: cannot install it
The following bug has been logged online:

Bug reference: 5317
Logged by: Marc
Email address: rasck...@gmail.com
PostgreSQL version: 8.4.2-1
Operating system: Windows XP
Description: cannot install it
Details:

It won't let me install the program, because it asks me for a Windows password that I don't know.

--
Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-bugs
[BUGS] BUG #6350: Delete a role which is still in use
The following bug has been logged on the website:

Bug reference: 6350
Logged by: Marc Balmer
Email address: m...@msys.ch
PostgreSQL version: 9.1.1
Operating system: NetBSD
Description:

A role can be deleted although it is still referenced in a column privilege. Here is how:

1. Create three new roles: "user", "admin", "new_admin".
2. Create a table, e.g. "bar", with one column, e.g. "foo"; the owner of the table is "admin".
3. Now grant "user" SELECT rights on the "foo" column in table "bar".
4. Now change the owner of the table "bar" to "new_admin".
5. Drop the role "admin". PostgreSQL won't complain.
6. Look at the privileges of table "bar" using \dp. The ownership change from "admin" to "new_admin" did not make it into the column privileges.
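The steps above can be written as a psql script. This is a sketch based on the report, not part of the original message; note that "user" is a reserved word, so it must be double-quoted as an identifier:

```sql
-- Sketch of the reproduction steps described in the report.
CREATE ROLE "user";
CREATE ROLE admin;
CREATE ROLE new_admin;

CREATE TABLE bar (foo integer);
ALTER TABLE bar OWNER TO admin;

GRANT SELECT (foo) ON bar TO "user";

ALTER TABLE bar OWNER TO new_admin;
DROP ROLE admin;   -- succeeds, even though "admin" is still referenced
                   -- as grantor in bar's column privileges
\dp bar            -- column privileges still mention the dropped role
```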
[BUGS] BUG #8144: Problem with rank window function and CTEs
The following bug has been logged on the website:

Bug reference: 8144
Logged by: Marc Munro
Email address: m...@bloodnok.com
PostgreSQL version: 9.2.4
Operating system: Linux 3.6.3 (Debian wheezy)
Description:

I have a query in which I want to use the result of a window function to isolate the most relevant results. While I was trying to develop and test the query, I discovered what looks like a bug in the results of the rank() function. This has been tried, with the same results, on 9.1.9 and 9.2.4.

I have two versions of the query; the first works as expected while the second produces incorrect rank values. I have tried various workarounds, but they all seem to trigger the same behaviour once a certain degree of complexity is reached.

This version of the query seems to work, producing the expected rank values:

with recursive parent_orgs(parent_org_id, org_id) as
  (
    select pr.first_party_id, pr.second_party_id
      from party_rels pr
     where pr.relationship_type_id in (1009, 1010, 1011, 1012, 1013)
  ),
ancestor_orgs(ancestor_org_id, org_id, depth) as
  (
    select org_id, org_id, 0
      from parent_orgs
    union all
    select p.parent_org_id, a.org_id, a.depth + 1
      from ancestor_orgs a
      join parent_orgs p on p.org_id = a.ancestor_org_id
  )
select ao.org_id, oi.item_id, oi.seq_id, oi.complete,
       ao.ancestor_org_id, ao.depth,
       rank() over (partition by oi.item_id order by ao.depth)
  from ancestor_orgs ao
  join oitems oi on oi.org_id = ao.ancestor_org_id
 where ao.org_id = 20150;

 org_id | item_id | seq_id | complete | ancestor_org_id | depth | rank
--------+---------+--------+----------+-----------------+-------+------
  20150 |       1 |      1 | t        |           20139 |     4 |    1
  20150 |       2 |      1 | t        |           20139 |     4 |    1
  20150 |  200146 |      1 | t        |           20146 |     3 |    1
  20150 |  200147 |      1 | t        |           20146 |     3 |    1
  20150 |  200148 |      1 | t        |           20146 |     3 |    1
  20150 |  200149 |      2 | t        |           20146 |     3 |    1
  20150 |  200150 |      1 | t        |           20146 |     3 |    1
  20150 |  200151 |      1 | t        |           20146 |     3 |    1
  20150 |  200152 |      1 | t        |           20146 |     3 |    1
  20150 |  200153 |      7 | t        |           20150 |     0 |    1
  20150 |  200153 |      1 | t        |           20146 |     3 |    2
  20150 |  200154 |      1 | t        |           20146 |     3 |    1
[ rows removed for brevity ]
(38 rows)

This version, which should be equivalent, yields crazy rank values:

with recursive parent_orgs(parent_org_id, org_id) as
  (
    select pr.first_party_id, pr.second_party_id
      from party_rels pr
     where pr.relationship_type_id in (1009, 1010, 1011, 1012, 1013)
  ),
ancestor_orgs(ancestor_org_id, org_id, depth) as
  (
    select org_id, org_id, 0
      from parent_orgs
    union all
    select p.parent_org_id, a.org_id, a.depth + 1
      from ancestor_orgs a
      join parent_orgs p on p.org_id = a.ancestor_org_id
  ),
visible_org_items(org_id, item_id, seq_id, complete,
                  ancestor_org_id, depth, rank) as
  (
    select ao.org_id, oi.item_id, oi.seq_id, oi.complete,
           ao.ancestor_org_id, ao.depth,
           rank() over (partition by oi.item_id order by ao.depth)
      from ancestor_orgs ao
      join oitems oi on oi.org_id = ao.ancestor_org_id
  )
select * from visible_org_items where org_id = 20150;

 org_id | item_id | seq_id | complete | ancestor_org_id | depth | rank
--------+---------+--------+----------+-----------------+-------+------
  20150 |       1 |      1 | t        |           20139 |     4 |   21
  20150 |       2 |      1 | t        |           20139 |     4 |   21
  20150 |  200146 |      1 | t        |           20146 |     3 |    9
  20150 |  200147 |      1 | t        |           20146 |     3 |    9
  20150 |  200148 |      1 | t        |           20146 |     3 |    9
  20150 |  200149 |      2 | t        |           20146 |     3 |    9
  20150 |  200150 |      1 | t        |           20146 |     3 |    9
  20150 |  200151 |      1 | t        |           20146 |     3 |    9
  20150 |  200152 |      1 | t        |           20146 |     3 |    9
  20150 |  200153 |      7 | t        |           20150 |     0 |    1
  20150 |  200153 |      1 | t        |           20146
[BUGS] BUG #8153: check constraint results in a lot of casts
The following bug has been logged on the website:

Bug reference: 8153
Logged by: Marc Mamin
Email address: m...@intershop.de
PostgreSQL version: 9.2.4
Operating system: Linux & Windows
Description:

Hello,

This is not a functional bug, but it appears (at least to me) that the following constraint gets defined with some unnecessary overweight:

create table test (a varchar);

(1) alter table test add constraint test_ck check ( a in ('a','b'));

I'd expect this to be resolved as CHECK (a = ANY (ARRAY['a'::character varying, 'b'::character varying])). I've tried further variants. (3) looks better, but there is always a cast on the table side.

(2) alter table test add constraint test_ck_2 check ( a = ANY (ARRAY['a'::character varying, 'b'::character varying] ));
(3) alter table test add constraint test_ck_3 check ( a = ANY (ARRAY['a'::text, 'b'] ));

\d+ test
                        Table "public.test"
 Column |       Type        | Modifiers | Storage  | Stats target | Description
--------+-------------------+-----------+----------+--------------+-------------
 a      | character varying |           | extended |              |
Check constraints:
    "test_ck" CHECK (a::text = ANY (ARRAY['a'::character varying, 'b'::character varying]::text[]))
    "test_ck_2" CHECK (a::text = ANY (ARRAY['a'::character varying, 'b'::character varying]::text[]))
    "test_ck_3" CHECK (a::text = ANY (ARRAY['a'::text, 'b'::text]))

best regards,

Marc Mamin
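For comparison (a sketch, not part of the original report), the table-side cast appears to come from the varchar-to-text comparison: when the column itself is of type text, the same constraint should deparse without any cast on the column.

```sql
-- Sketch: with a text column, "a" needs no cast, since text = text
-- resolves directly; only the literals get an explicit ::text.
create table test_text (a text);

alter table test_text add constraint test_text_ck
    check ( a in ('a','b') );

-- \d+ test_text is then expected to show the constraint without a
-- cast on the column side, along the lines of:
--   "test_text_ck" CHECK (a = ANY (ARRAY['a'::text, 'b'::text]))
```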
Re: [BUGS] BUG #4447: install failed to start; libintl3.dll was not found
We had the same problem last week when installing PG 8.3.5/8.3.4 on two machines running Win2k3 Server sr2 belonging to one of our customers. After some hours of trial and error we found out that this error only occurred when not installing to the system drive. If we chose the system drive for installation, everything worked fine.

We couldn't reproduce the error on our own machines though (a real Win2k3 Server, and some VMs with various versions of W2k3 installed). Our customer's machines are used for development and software evaluation, so there might be some third-party software involved, or it's a broken W2k3 installation. But we couldn't investigate any further. We used the MSI package for installation.

As a workaround, we installed Postgres without running initdb and initialized the database manually.

Marc

Magnus Hagander wrote:
> Jen McCann wrote:
>> The following bug has been logged online:
>>
>> Bug reference: 4447
>> Logged by: Jen McCann
>> Email address: [EMAIL PROTECTED]
>> PostgreSQL version: 8.3.4
>> Operating system: Win2k3 server sp1
>> Description: install failed to start; libintl3.dll was not found
>> Details:
>>
>> I have attempted to install PostgreSQL 8.3.4 on my Win2k3 Server SP1
>> (Enterprise Edition) machine, and it has failed with the following error:
>>
>> initdb.exe - Unable to Locate Component
>> This application has failed to start because libintl3.dll was not found.
>> Re-installing the application may fix this problem.
>>
>> Note: libintl3.dll was indeed present in the PostgreSQL 8.3\bin folder.
>>
>> I have been able to install version 8.2 without issue.
>> However, the same issue was observed when attempting to install v8.3.3.
>
> Strange. You are using the MSI installer from our ftp site? Or are you
> using the binaries-no-installer?
>
> Also, please verify using "depends.exe" that all the DLL files that
> libintl3.dll requires are also present.
>
> //Magnus

--
click:ware Informationstechnik GmbH
i. A.
Marc Schablewski
Hausanschrift: Kamekestraße 19, 50672 Köln
Postanschrift: Postfach 10 04 22, 50444 Köln
Tel: (+49) 0221 139988-0
Fax: (+49) 0221 139988-79
Geschäftsführer: Dipl.-Ing. Thomas Goerner
Handelsregisternr: HRB 31438 (Amtsgericht Köln)
Re: [BUGS] could not read block 77 of relation 1663/16385/388818775
I think both approaches (checksum and write protection) might contribute to finding this bug. If pages with bogus data but a correct checksum are ever found on disk, I think this would prove that there is no hardware / file system / OS issue. If an access violation resulting from writes to locked pages were hit, would it be possible to log a stack backtrace? Especially on our test systems, we can easily afford any performance degradation resulting from this.

Question: Who is responsible for maintaining this part of the Postgres code (buffer cache maintenance, the writer, etc.)? Could you provide the necessary patches?

Thanks in advance,
Thomas Goerner
Marc Schablewski

John R Pierce wrote:
> Gregory Stark wrote:
>> John R Pierce <[EMAIL PROTECTED]> writes:
>>> oracle has had an option for some time that uses read/only page
>>> protection for each page of the shared buffer area... when oracle
>>> knows it wants to modify a page, it un-protects it via a system call.
>>> this catches any wild writes into the shared buffer area as a memory
>>> protection fault.
>>
>> The problem with both of these approaches is that most bugs occur when
>> the code *thinks* it's doing the right thing. A bug in the buffer
>> management code which returns the wrong buffer or a real wild pointer
>> dereference. I don't remember ever having either of those.
>>
>> That said, the second option seems pretty trivial to implement. I think
>> the performance would be awful for a live database but for a read-only
>> database it might make more sense.
>
> FWIW, it has modest overhead on Oracle on Solaris on Sparc... EXCEPT on
> the "Niagara" aka 'Coolthreads' CPUs (the T1 processor); on that it was
> horribly slow on our write-intensive transactional system. Our
> environment is on very large scale servers where the shared buffers are
> often 32 or 64GB; I suspect this increases our exposure to bizarro-world
> writes.
>
> Believe me, especially in earlier Oracle releases (6, 7, 8), this
> caught/prevented many problems which otherwise would have ended in an
> Oracle fatal Block Corruption error, which would require many hours of
> DBA hackery before the database could be restarted.
[BUGS] BUG #4565: nextval not updated during wal replication, leading to pk violations
The following bug has been logged online:

Bug reference: 4565
Logged by: Marc Schablewski
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.3.5
Operating system: Debian, 2.6.24-19 kernel
Description: nextval not updated during WAL replication, leading to PK violations
Details:

Hi!

We are using "Continuous Archiving" of WAL to keep a warm standby database server. We recently switched over to that backup database for testing purposes. We first took an online backup of the master database, let the WAL shipping run for some days, and finally started the backup for normal use.

A job tried to insert some records into a few tables and complained about a PK violation. The PKs are of type bigint; the column is populated by a sequence and a "default nextval()" on the PK column. We found that the sequence's currval was lower than the maximum value in the table and that this value was already present. Further investigation showed us that the last records in the table were inserted on the former master server while taking the initial online backup for the replication. It seems that the records got replicated but not the currval/nextval of the sequence. When running "select nextval()" on the backup database, it returned the PK value of the first record inserted during that last run on the former master server.

I couldn't reproduce the problem with two different servers and a simple database containing one table. Right now, I don't have any idea how to narrow down the problem or what to check next, so any hint would be helpful.

Marc
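A minimal schema matching this description (a sketch; the table and sequence names are hypothetical, only the bigint PK with a default nextval() comes from the report):

```sql
-- Hypothetical names; the report only says the PK is bigint,
-- populated by a sequence via a "default nextval()".
CREATE SEQUENCE jobs_id_seq;

CREATE TABLE jobs (
    id      bigint PRIMARY KEY DEFAULT nextval('jobs_id_seq'),
    payload text
);

-- Symptom on the standby after switchover: the sequence lags behind
-- the table, so the next INSERT reuses an existing PK value:
--   SELECT max(id) FROM jobs;          -- e.g. higher than ...
--   SELECT nextval('jobs_id_seq');     -- ... the value returned here
```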
Re: [BUGS] BUG #4565: nextval not updated during wal replication, leading to pk violations
FYI: We found another anomaly in that database (a duplicated record with a unique index on a non-PK column), so we did some more research. As before, the record got replicated, but this time the index entry was not. Later, a job tried to check whether that record was already there by querying the column (thus using the index), couldn't find it, and so inserted it again.

We first thought it could be a WAL file that wasn't restored. If I understand pg_start_backup() and pg_stop_backup() correctly, the first WAL file that should have been requested after the online backup had finished should have been the one mentioned in the "START WAL LOCATION" line of the backup history file, but the first file that actually gets requested is the one in the "STOP WAL LOCATION" line.

0002004B0027.00396450.backup:
START WAL LOCATION: 4B/27396450 (file 0002004B0027)
STOP WAL LOCATION: 4B/28DDF968 (file 0002004B0028)
CHECKPOINT LOCATION: 4B/27396450
START TIME: 2008-11-28 16:29:17 CET
LABEL: backup-20081128-162916
STOP TIME: 2008-11-28 17:49:09 CET

Both WAL files were copied to the backup system. The log switch occurred before 17:06, because at that time the file was copied to the backup server by the archive command. But sadly, the records were all part of a single transaction which ran between 17:32 and 17:33, so obviously this is not part of the problem. Though it seems a bit strange to me that the ...27 WAL didn't get recovered, but maybe I just don't fully understand that part of the docs (chapter 24.3.2, fifth paragraph after the ordered list).

Right now, I'm running a fresh online backup to check whether we can reproduce the problem. I made some minor corrections to my scripts which prepare the backup database for replication, which might have caused the problem. I now omit the pg_xlog directory when taking the initial backup, after rereading chapter 24.3.2, which says there is a "risk of mistakes" if one copies those files, whatever that means. Maybe someone could clarify what the risks and mistakes could be?

Speaking of chapter 24.3.2: the last sentence of the second paragraph after the ordered list ("You must ensure that these steps are carried out in sequence without any possible overlap, or you will invalidate the backup.") seems a bit misplaced. As far as I understand, it refers to the ordered list ("steps"), but I had to read it several times to get it. I think a better place would be right after the list. ...But then, maybe it's just me, not having slept well last night. ;)

Marc

Marc Schablewski wrote:
> The following bug has been logged online:
>
> Bug reference: 4565
> Logged by: Marc Schablewski
> Email address: [EMAIL PROTECTED]
> PostgreSQL version: 8.3.5
> Operating system: Debian, 2.6.24-19 kernel
> Description: nextval not updated during WAL replication, leading to
> PK violations
> Details:
>
> Hi!
>
> We are using "Continuous Archiving" of WAL to keep a warm standby database
> server. We recently switched over to that backup database for testing
> purposes. We first took an online backup of the master database, let the
> WAL shipping run for some days, and finally started the backup for normal
> use.
>
> A job tried to insert some records into a few tables and complained about a
> PK violation. The PKs are of type bigint; the column is populated by a
> sequence and a "default nextval()" on the PK column. We found that the
> sequence's currval was lower than the maximum value in the table and that
> this value was already present. Further investigation showed us that the
> last records in the table were inserted on the former master server while
> taking the initial online backup for the replication. It seems that the
> records got replicated but not the currval/nextval of the sequence. When
> running "select nextval()" on the backup database, it returned the PK value
> of the first record inserted during that last run on the former master
> server.
>
> I couldn't reproduce the problem with two different servers and a simple
> database containing one table. Right now, I don't have any idea how to
> narrow down the problem or what to check next, so any hint would be
> helpful.
>
> Marc
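For reference, the online ("base") backup sequence being discussed uses the 8.x-era administration functions; a hedged sketch of the procedure, with the label taken from this report:

```sql
-- Sketch of the 8.x continuous-archiving base backup sequence.
SELECT pg_start_backup('backup-20081128-162916');
-- ... copy the cluster data directory with an external tool
--     (tar, rsync, ...); backup_label must be included, while the
--     contents of pg_xlog may be omitted ...
SELECT pg_stop_backup();
-- pg_stop_backup() ends the backup and creates the backup history
-- file (the "...0027.00396450.backup" file quoted above) in the
-- WAL archive.
```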
Re: [BUGS] BUG #4565: nextval not updated during wal replication, leading to pk violations
Tom Lane wrote:
> "Marc Schablewski" <[EMAIL PROTECTED]> writes:
>> A job tried to insert some records into a few tables and complained about
>> a PK violation. The PKs are of type bigint; the column is populated by a
>> sequence and a "default nextval()" on the PK column. We found that the
>> sequence's currval was lower than the maximum value in the table and that
>> this value was already present. Further investigation showed us that the
>> last records in the table were inserted on the former master server while
>> taking the initial online backup for the replication. It seems that the
>> records got replicated but not the currval/nextval of the sequence. When
>> running "select nextval()" on the backup database, it returned the PK
>> value of the first record inserted during that last run on the former
>> master server.
>
> How many sequence values "overlapped" here, exactly? If more than one,
> are you by any chance running that sequence with a CACHE setting larger
> than one? (See the sequence's cache_value column if you're unsure.)
>
> I can see what might be a race condition between nextval()'s WAL logging
> and checkpoint start, but it doesn't seem like it could account for more
> than the CACHE setting worth of overlap.
>
> regards, tom lane

Thanks for your reply.

cache_value is set to one for all sequences. As far as I can tell, they were all created by a plain CREATE SEQUENCE seq_name, with no other settings changed. And as we found out later, this "loss of information" hit some indexes as well.

We took a fresh backup yesterday, and again we saw that only the last WAL file generated during the backup run was restored on our backup system. I'm sure that this is the real problem.

START WAL LOCATION: 4E/36F2C280 (file 0002004E0036)
STOP WAL LOCATION: 4E/3944C988 (file 0002004E0039)
CHECKPOINT LOCATION: 4E/36F2C280
START TIME: 2008-12-08 17:17:34 CET
LABEL: backup-20081208-171733
STOP TIME: 2008-12-08 18:22:25 CET

File 0002004E0039 and all WAL files that were generated afterwards were restored. Files ...36 to ...38 weren't even requested. After starting up the database, we had a table with records that were not present in its PK index. But they were in the index on the former master system. I ran an explicit pg_switch_xlog() on the master database before switching to the backup to make sure all changes got transferred.

I'll check our scripts that do the backup/archiving/recovery again; there still might be some kind of bug. But besides that, I'd be thankful for any hints about what else I could check.

Marc
Re: [BUGS] BUG #4565: nextval not updated during wal replication, leading to pk violations
Tom Lane wrote:
> Marc Schablewski <[EMAIL PROTECTED]> writes:
>> cache_value is set to one for all sequences. As far as I can tell, they
>> were all created by a plain CREATE SEQUENCE seq_name, with no other
>> settings changed. And as we found out later, this "loss of information"
>> hit some indexes as well.
>>
>> We took a fresh backup yesterday, and again we saw that only the last WAL
>> file generated during the backup run was restored on our backup system.
>> I'm sure that this is the real problem.
>
> This is beginning to sound like an error in your backup/restore
> procedures. Please describe exactly what you're doing.
>
> regards, tom lane

I'm sorry, it was all my fault. I've been a bit too picky about the files I copied: I didn't copy the files directly under the cluster directory. I thought there were just the config files there, so what was the point? But this way, of course, I didn't copy the backup_label either. It was never there when I looked, because I looked either before or after running a backup. Reading, and especially understanding, the manual sometimes helps. D'oh!

Now everything seems to work fine, but we are still testing. At least what we see in the logs is more reasonable. Now the .backup file is requested first, then the WALs.

There is one strange thing left, though. The server first requests the second WAL, then the first one, then the second again, and then it processes them in order (second, third, fourth, ...). Is this normal?

I hope I didn't distract you from your regular work too much. Thanks again.

Marc
[BUGS] perl5 interface won't compile
POSTGRESQL BUG REPORT TEMPLATE

Your name                                     : Marc Lehmann
Your email address                            : [EMAIL PROTECTED]

System Configuration
--------------------
Architecture (example: Intel Pentium)         :
Operating System (example: Linux 2.0.26 ELF)  :
PostgreSQL version (example: PostgreSQL-6.5.1): PostgreSQL-7.0beta3
Compiler used (example: gcc 2.8.0)            :

Please enter a FULL description of your problem:

The perl interface does not compile with newer perl versions (5.006, and probably 5.005 without options). (Sorry, I just found out that plperl also won't compile, so I have "re-added" a second diff, against plperl. ;)

Please describe a way to repeat the problem. Please try to provide a concise reproducible example, if at all possible:

"make"

If you know how this problem might be fixed, list the solution below:

A diff against Pg.xs is attached; however, it will not compile with older perl versions (it is the preferred long-term solution). So, for the foreseeable future, it might be better to create the Makefile using

    perl Makefile.PL POLLUTE=1

which will enable some kind of compatibility mode. A better solution would be to use the Devel::PPPort module (on CPAN) to get rid of version-dependonitis (in which case you will need to apply both diffs and additionally include ppport.h, preferably after renaming it to something else).
===PATCH 1===

diff -r -u perl5o/Pg.c perl5/Pg.c
--- perl5o/Pg.c	Sat Mar 25 13:09:05 2000
+++ perl5/Pg.c	Sat Mar 25 13:10:38 2000
@@ -1407,7 +1407,7 @@
 	ps.caption = caption;
 	Newz(0, ps.fieldName, items + 1 - 11, char*);
 	for (i = 11; i < items; i++) {
-		ps.fieldName[i - 11] = (char *)SvPV(ST(i), na);
+		ps.fieldName[i - 11] = (char *)SvPV_nolen(ST(i));
 	}
 	PQprint(fout, res, &ps);
 	Safefree(ps.fieldName);
@@ -3182,7 +3182,7 @@
 	EXTEND(sp, cols);
 	while (col < cols) {
 		if (PQgetisnull(res->result, res->row, col)) {
-			PUSHs(&sv_undef);
+			PUSHs(&PL_sv_undef);
 		} else {
 			char *val = PQgetvalue(res->result, res->row, col);
 			PUSHs(sv_2mortal((SV*)newSVpv(val, 0)));
@@ -3238,7 +3238,7 @@
 	ps.caption = caption;
 	Newz(0, ps.fieldName, items + 1 - 11, char*);
 	for (i = 11; i < items; i++) {
-		ps.fieldName[i - 11] = (char *)SvPV(ST(i), na);
+		ps.fieldName[i - 11] = (char *)SvPV_nolen(ST(i));
 	}
 	PQprint(fout, res->result, &ps);
 	Safefree(ps.fieldName);

diff -r -u perl5o/Pg.xs perl5/Pg.xs
--- perl5o/Pg.xs	Sat Mar 11 04:08:37 2000
+++ perl5/Pg.xs	Sat Mar 25 13:10:36 2000
@@ -581,7 +581,7 @@
 	ps.caption = caption;
 	Newz(0, ps.fieldName, items + 1 - 11, char*);
 	for (i = 11; i < items; i++) {
-		ps.fieldName[i - 11] = (char *)SvPV(ST(i), na);
+		ps.fieldName[i - 11] = (char *)SvPV_nolen(ST(i));
 	}
 	PQprint(fout, res, &ps);
 	Safefree(ps.fieldName);
@@ -1252,7 +1252,7 @@
 	EXTEND(sp, cols);
 	while (col < cols) {
 		if (PQgetisnull(res->result, res->row, col)) {
-			PUSHs(&sv_undef);
+			PUSHs(&PL_sv_undef);
 		} else {
 			char *val = PQgetvalue(res->result, res->row, col);
 			PUSHs(sv_2mortal((SV*)newSVpv(val, 0)));
@@ -1292,7 +1292,7 @@
 	ps.caption = caption;
 	Newz(0, ps.fieldName, items + 1 - 11, char*);
 	for (i = 11; i < items; i++) {
-		ps.fieldName[i - 11] = (char *)SvPV(ST(i), na);
+		ps.fieldName[i - 11] = (char *)SvPV_nolen(ST(i));
 	}
[BUGS] initdb won't run due to syntax errors
POSTGRESQL BUG REPORT TEMPLATE

Your name                                     : Marc Lehmann
Your email address                            : [EMAIL PROTECTED]

System Configuration
--------------------
Architecture (example: Intel Pentium)         :
Operating System (example: Linux 2.0.26 ELF)  :
PostgreSQL version (example: PostgreSQL-6.5.1): PostgreSQL-7.0beta3
Compiler used (example: gcc 2.8.0)            :

Please enter a FULL description of your problem:

initdb stops with

    bin/initdb[483]: >&/dev/null : illegal file descriptor name

Of course, >&/dev/null is not valid shell syntax (it's probably a bash extension :()

Please describe a way to repeat the problem. Please try to provide a concise reproducible example, if at all possible:

Just run initdb.

If you know how this problem might be fixed, list the solution below:

The recommended way when doing _bash_ programming is to use /bin/bash or something similar in the shebang line. If the shell scripts are supposed to run with /bin/sh, they should use something like

    echo >/dev/null 2>&1

to get rid of both stdout and stderr.

--
Marc Lehmann <[EMAIL PROTECTED]>
The choice of a GNU generation
Re: [BUGS] perl5 interface won't compile
On Sat, Mar 25, 2000 at 11:49:09AM -0500, Tom Lane <[EMAIL PROTECTED]> wrote:
> compile against 5.005_03 --- without options --- and AFAICT that is
> still considered the current stable release of Perl.

I'm pretty sure 5.6 is out ;) Whether that is considered stable is another question, of course ;)

>> using
>>     perl Makefile.PL POLLUTE=1
>> which will enable some kind of compatibility mode.
>
> Interesting. I could not find anything about POLLUTE at www.perl.com.
> What does it do, and will it cause problems on pre-5.005 perls?

Apart from warnings, it should work. What it does is enable commonly used symbols to be available under the old names, i.e. sv_undef instead of PL_sv_undef, thereby "polluting" your namespace. I have no idea where it is documented ;)

> This looks like it could be the Right Thing To Do. Anyone have time to
> make it happen (and perhaps even access to a few different perl versions

I don't, but that module is basically a header file. Just rename it to something else (e.g. pgppport.h) and _run_ it against the interface files:

    perl -x pgppport.h *.c *.h *.xs foo/*.c [etc]

It will tell you about most things that need to be fixed.

--
Marc Lehmann <[EMAIL PROTECTED]>
The choice of a GNU generation
[BUGS] postmaster quits
If PostgreSQL failed to compile on your computer or you found a bug that is likely to be specific to one platform then please fill out this form and e-mail it to [EMAIL PROTECTED] To report any other bug, fill out the form below and e-mail it to [EMAIL PROTECTED] If you not only found the problem but solved it and generated a patch then e-mail it to [EMAIL PROTECTED] instead. Please use the command "diff -c" to generate the patch. You may also enter a bug report at http://www.postgresql.org/ instead of e-mailing this form.

POSTGRESQL BUG REPORT TEMPLATE

Your name                                    : Marc Prewitt
Your email address                           : [EMAIL PROTECTED]

System Configuration
--------------------
Architecture (example: Intel Pentium)        : Sparcstation 20/sun4m
Operating System (example: Linux 2.0.26 ELF) : Solaris 5.8
PostgreSQL version (example: PostgreSQL-7.1) : PostgreSQL-7.1beta4
Compiler used (example: gcc 2.8.0)           : gcc 2.95.3

Please enter a FULL description of your problem:

Our development server, on occasion, runs out of swap space, and inevitably when it does, the postmaster process stops. When restarted, the following info is in the log (the machine ran out of swap around 19:17):

DEBUG: starting up
DEBUG: database system was interrupted at 2001-08-27 19:17:33
DEBUG: CheckPoint record at (0, 218748312)
DEBUG: Redo record at (0, 218748312); Undo record at (0, 0); Shutdown FALSE
DEBUG: NextTransactionId: 213780; NextOid: 297035
DEBUG: database system was not properly shut down; automatic recovery in progress...
DEBUG: ReadRecord: record with zero len at (0, 218748376)
DEBUG: Formatting logfile 0 seg 13 block 78 at offset 5592
DEBUG: The last logId/logSeg is (0, 13)
DEBUG: redo is not required
DEBUG: database system is in production state

It is odd that PostgreSQL doesn't create a core file or retry its mallocs. We are running MySQL and Oracle on the same machine, and neither of them has had this problem during the temporary memory shortage.

Please describe a way to repeat the problem. Please try to provide a concise reproducible example, if at all possible:

Run out of swap space.

If you know how this problem might be fixed, list the solution below:
[BUGS] array_dims function is poorly behaved
Simply use array_dims() on any non-array type; the messages vary from:

SIS=# select array_dims(datdba) from pg_database;
pqReadData() -- backend closed the channel unexpectedly.
        This probably means the backend terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!# ;
You are currently not connected to a database.
!# \q

to:

SIS=# select array_dims(datname) from pg_database;
ERROR:  MemoryContextAlloc: invalid request size 1702125936

Reproduces consistently on Linux (i386) 2.4.16 running 7.1.3.

Cheers,
Marc L.

--
01010101010101010101010101010101010101010101010101
Marc P. Lavergne [wk:407-648-6996]
Product Development
richLAVA Corporation
--
"Anyone who slaps a 'this page is best viewed with Browser X' label on a Web page appears to be yearning for the bad old days, before the Web, when you had very little chance of reading a document written on another computer, another word processor, or another network." -Tim Berners-Lee (Technology Review, July 1996)
01010101010101010101010101010101010101010101010101
[BUGS] BLOB support is broken in the 7.2/7.3 JDBC driver, works in 7.1 driver java.sql.SQLException: ERROR: oidin: error in "\377\330\377\340...I'm on Linux 2.4.16 using JDK 1.4.0 but it fails in 1.3.1 as well. Here's a quick reproducible test case:-- start --package sample.java.jdbc;import java.sql.*;import java.io.*;import java.util.*;class testBlob{ public static void main(String args[]) { new testBlob(); System.exit(0); } public testBlob() { System.out.println("Let's begin the test ..."); String v_username = "postgre"; String v_password = "postgre"; String v_database = "jdbc:postgresql://127.0.0.1/TEST"; PreparedStatement v_pstmt = null; try { DriverManager.registerDriver(new org.postgresql.Driver()); Connection v_conn = DriverManager.getConnection( v_database, v_username, v_password); v_conn.setAutoCommit(false); File v_file = new File("/home/postgre/something.jpg"); FileInputStream v_fis = new FileInputStream(v_file); v_pstmt = v_conn.prepareStatement( "insert into lob_test (id,data) values (?,?)"); v_pstmt.setInt(1,2); v_pstmt.setBinaryStream(2,v_fis,(int)v_file.length()); System.out.println("now you see me"); v_pstmt.executeUpdate(); System.out.println("now you don't"); v_conn.commit(); v_fis.close(); v_pstmt.close(); } catch (IOException e_io) { System.out.println(e_io); } catch (SQLException e_sql) { System.out.println(e_sql); } System.out.println("You'll only get here with the 7.1 driver!"); }}-- end --01010101010101010101010101010101010101010101010101Marc P. Lavergne [wk:407-648-6996]Product DevelopmentrichLAVA Corporation--"Anyone who slaps a 'this page is best viewed withBrowser X' label on a Web page appears to beyearning for the bad old days, before the Web,when you had very little chance of reading adocument written on another computer, another wordprocessor, or another network."-Tim Berners-Lee (Technology Review, July 1996)01010101010101010101010101010101010101010101010101-- 01010101010101010101010101010101010101010101010101Marc P. 
BLOB support is broken in the 7.2/7.3 JDBC driver; it works in the 7.1 driver, though. Basically, it returns an error like the following on executeUpdate():

java.sql.SQLException: ERROR: oidin: error in "\377\330\377\340...

I'm on Linux 2.4.16 using JDK 1.4.0, but it fails in 1.3.1 as well. Here's a quick reproducible test case:

-- start --
package sample.java.jdbc;

import java.sql.*;
import java.io.*;
import java.util.*;

class testBlob {
    public static void main(String args[]) {
        new testBlob();
        System.exit(0);
    }

    public testBlob() {
        System.out.println("Let's begin the test ...");
        String v_username = "postgre";
        String v_password = "postgre";
        String v_database = "jdbc:postgresql://127.0.0.1/TEST";
        PreparedStatement v_pstmt = null;
        try {
            DriverManager.registerDriver(new org.postgresql.Driver());
            Connection v_conn = DriverManager.getConnection(
                v_database, v_username, v_password);
            v_conn.setAutoCommit(false);
            File v_file = new File("/home/postgre/something.jpg");
            FileInputStream v_fis = new FileInputStream(v_file);
            v_pstmt = v_conn.prepareStatement(
                "insert into lob_test (id,data) values (?,?)");
            v_pstmt.setInt(1, 2);
            v_pstmt.setBinaryStream(2, v_fis, (int) v_file.length());
            System.out.println("now you see me");
            v_pstmt.executeUpdate();
            System.out.println("now you don't");
            v_conn.commit();
            v_fis.close();
            v_pstmt.close();
        } catch (IOException e_io) {
            System.out.println(e_io);
        } catch (SQLException e_sql) {
            System.out.println(e_sql);
        }
        System.out.println("You'll only get here with the 7.1 driver!");
    }
}
-- end --

--
Marc P. Lavergne [wk:407-648-6996]
Product Development
richLAVA Corporation
--
"Anyone who slaps a 'this page is best viewed with Browser X' label on a Web page appears to be yearning for the bad old days, before the Web, when you had very little chance of reading a document written on another computer, another word processor, or another network."
-Tim Berners-Lee (Technology Review, July 1996)

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[BUGS] psqlODBC
I created a temporary table using a procedure called from VB. The table contains int8 and text data. I display the data contained in the temporary table using a datagrid, with the datasource set to a recordset that is opened through code (no data control or data environment used). The problem I have is that if I try to edit the data in the datagrid, the int8 values are read as text with trailing spaces, which causes an error. This does not happen when I use a permanent table. The workaround I have is to create the temporary table using varchar datatypes instead of int8, and it works fine (a little extra manipulation when updating the permanent table with the temporary data). I did not know if this was a known problem. Marc Breneiser
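The workaround described above can be sketched in SQL (table and column names here are hypothetical, not taken from the reporter's schema; the point is simply that the int8 values are staged as varchar and cast back when merging into the permanent table):

```sql
-- Hypothetical sketch of the varchar staging workaround.
CREATE TEMPORARY TABLE payment_stage (
    id   varchar(20),  -- would naturally be int8
    note text
);

INSERT INTO payment_stage VALUES ('42', 'example row');

-- When updating the permanent table, trim and cast back to int8.
INSERT INTO payment (id, note)
SELECT trim(id)::int8, note
FROM payment_stage;
```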
[BUGS] [Fwd: BLOB support is broken in the 7.2/7.3 JDBC driver, works in 7.1 driver]
A little more info ... it fails on any file. I tested with a 1-byte file containing a single byte (the value of that byte is irrelevant ... it fails in all cases).

Original Message

java.sql.SQLException: ERROR: oidin: error in "\377\330\377\340...

I'm on Linux 2.4.16 using JDK 1.4.0, but it fails in 1.3.1 as well. Here's a quick reproducible test case:

-- see prior post --

--
Marc P. Lavergne [wk:407-648-6996]
Product Development
richLAVA Corporation
--
"Anyone who slaps a 'this page is best viewed with Browser X' label on a Web page appears to be yearning for the bad old days, before the Web, when you had very little chance of reading a document written on another computer, another word processor, or another network." -Tim Berners-Lee (Technology Review, July 1996)

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[BUGS] BLOB support is broken in the 7.2/7.3 JDBC driver, works in 7.1 driver
(NOTE: This is a re-post ... my cut-and-paste went a little haywire on the last one.)

BLOB support is broken in the 7.2/7.3 JDBC driver; it works in the 7.1 driver, though. Basically, it returns an error like the following on executeUpdate():

java.sql.SQLException: ERROR: oidin: error in "x": can't parse "x"

Fails on any file. In this case, I created a 1-byte file with the character 'x' in it. I'm on Linux 2.4.16 using JDK 1.4.0, but it fails in 1.3.1 as well. Here's a quick reproducible test case:

-- start --
package sample.java.jdbc;

import java.sql.*;
import java.io.*;
import java.util.*;

class testBlob {
    public static void main(String args[]) {
        new testBlob();
        System.exit(0);
    }

    public testBlob() {
        System.out.println("Let's begin the test ...");
        String v_username = "postgre";
        String v_password = "postgre";
        String v_database = "jdbc:postgresql://127.0.0.1/TEST";
        PreparedStatement v_pstmt = null;
        try {
            DriverManager.registerDriver(new org.postgresql.Driver());
            Connection v_conn = DriverManager.getConnection(
                v_database, v_username, v_password);
            v_conn.setAutoCommit(false);
            File v_file = new File("/home/postgre/something.jpg");
            FileInputStream v_fis = new FileInputStream(v_file);
            v_pstmt = v_conn.prepareStatement(
                "insert into lob_test (id,data) values (?,?)");
            v_pstmt.setInt(1, 2);
            v_pstmt.setBinaryStream(2, v_fis, (int) v_file.length());
            System.out.println("now you see me");
            v_pstmt.executeUpdate();
            System.out.println("now you don't");
            v_conn.commit();
            v_fis.close();
            v_pstmt.close();
        } catch (IOException e_io) {
            System.out.println(e_io);
        } catch (SQLException e_sql) {
            System.out.println(e_sql);
        }
        System.out.println("You'll only get here with the 7.1 driver!");
    }
}
-- end --

--
Marc P. Lavergne [wk:407-648-6996]
Product Development
richLAVA Corporation

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
[BUGS] BUG #2606: (libpq) incorrect function declaration in libpq-fe.h
The following bug has been logged online: Bug reference: 2606 Logged by: Marc ROGLIANO Email address: [EMAIL PROTECTED] PostgreSQL version: 8.1.4 Operating system: Windows XP Description:(libpq) incorrect function declaration in libpq-fe.h Details: Hello, In libpq-fe.h, the function lo_write is declared: extern int lo_write(PGconn *conn, int fd, char *buf, size_t len); => 'const' is missing before char *buf: buf should not be writable here. Best regards, Marc ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
[BUGS] BUG #3484: Missing pg_clog file / corrupt index
The following bug has been logged online: Bug reference: 3484 Logged by: Marc Schablewski Email address: [EMAIL PROTECTED] PostgreSQL version: 8.1.8 Operating system: SuSE Linux 10.0 / Kernel 2.6.13-15.8-smp (x86-64) Description:Missing pg_clog file / corrupt index Details: Our application receives and processes payment information that comes in plain text files and stores the processed data into different tables in our database. There are about 5 tables involved, three of them with 35 million records so far. We get approximately 15 payments a day. Each payment is handled in a single transaction, because in case of an error, we want to store as many payments as possible. We have about 50 INSERT and a few UPDATE statements each day. The whole application runs on two servers (see specs below) which are nearly identical. One is the production system, the other is for testing. A few months ago we had some trouble with the test system. Postgres complained about a missing pg_clog file during the nightly routine VACUUM/ANALYZE. Some days later, the same error occurred on the production system, even on the same table! The corrupted table is one of those bigger ones involved in the file processing. After searching the web we found a hint that this problem could be related to a bug in 8.1.3, so we upgraded to 8.1.8 and restored the databases on both servers. This was around May, but now we discovered the same problem on our production system again. Actually, the error occurred four weeks ago, but it was not discovered until now – if you do error logging, you should look at it from time to time ;) When trying to backup or vacuum the database, we receive one of the following error messages:

**BACKUP**
pg_dump: ERROR: could not access status of transaction 2134240
DETAIL: could not open file "pg_clog/0002": Datei oder Verzeichnis nicht gefunden
pg_dump: SQL command to dump the contents of table "dateiblock" failed: PQendcopy() failed.
pg_dump: Error message from server: ERROR: could not access status of transaction 2134240
DETAIL: could not open file "pg_clog/0002": Datei oder Verzeichnis nicht gefunden
pg_dump: The command was: COPY public.dateiblock (id, eda_id, dbt_id, lfdnr, binaer_offset, laenge) TO stdout;

**VACUUM**
INFO: vacuuming "public.dateiblock"
ERROR: could not access status of transaction 2134240
DETAIL: could not open file "pg_clog/0002": Datei oder Verzeichnis nicht gefunden

(For those not familiar with the German language: "Datei oder Verzeichnis nicht gefunden" means "file or directory not found".) Current pg_clogs range from 005A to 008F. The oldest one is dated to April 30th. We narrowed it down to a few records in that table. Some records contain unreasonable values, others produce the same message about the missing pg_clog file when selected, and some are simply missing. But they must have existed, because there are still records in a second table referencing them. One strange thing about this is that the referencing records are about two and a half months old and shouldn't have been touched since then. We don't think this is a hardware issue, because we had it on two different servers and within a short period of time. Luckily, the loss of data is minimal. There are only about 30 records affected. Otherwise this would have been fatal, because as said before, our backup was not working either. In addition, we had another problem today. One of the indexes on a second table became corrupted, causing the database backend to roll back all transactions and restart the processes. A REINDEX fixed it, but it leaves me with a bad feeling about what will break next.
2007-07-25 08:07:00 CEST PANIC: right sibling's left-link doesn't match
2007-07-25 08:07:00 CEST STATEMENT: insert into transaktion (kaz_id, dbl_id, sta_id, kss_id, summe, zeitpunkt, tracenr, terminalid, status, kartennr, wae_id, kassenschnitt, freigabe, flo_id, importdatum) VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15)
2007-07-25 08:07:00 CEST LOG: server process (PID 5699) was terminated by signal 6
2007-07-25 08:07:00 CEST LOG: terminating any other active server processes
2007-07-25 08:07:00 CEST WARNING: terminating connection because of crash of another server process
2007-07-25 08:07:00 CEST DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.

Kind regards, Marc Schablewski

System:
OS: SUSE LINUX 10.0 (x86-64), 2.6.13-15.8-smp x86_64
System: 2x Intel(R) Xeon(TM) CPU 2.80GHz Dual Core, 4GB RAM (HP proliant server)
3Ware 9500S-4LP, 2xRAID1 (one for OS & one for database)
Postgres 8.1.8
Postgres was compiled manually but with default parameters. The configuration has been tuned to improve performance. shared_buff
Re: [BUGS] BUG #3484: Missing pg_clog file / corrupt index
I kept a copy of the data files in case it is needed, but I have to check first if I am allowed to give away that information. Some of the data is confidential. If you just need the files containing the damaged table, this won't be a big problem, because it does not contain any confidential information (as long as one data file only contains the data of one table). The other problem is the size of the files. The whole database is about 60GB, and the files belonging to that table are about 2.5GB. Maybe there is a way to pre-select the data you need?

Decibel! wrote:
> Actually, this does sound like a hardware problem to me. You only have 5
> tables that get hit heavily, so you've likely got somewhere around a 20%
> chance that corruption would hit the same table on two different machines.
>
> So far you haven't said anything that sounds unusual about how you're
> using the database, and the hardware certainly seems pretty
> common-place, so I'm rather doubtful that it's software. But if you
> still have copies of the bad database, someone might be able to help you.

--
Marc Schablewski
click:ware Informationstechnik GmbH

---(end of broadcast)---
TIP 6: explain analyze is your friend
[BUGS] BUG #3502: pg_ctl register translates \ to /
The following bug has been logged online: Bug reference: 3502 Logged by: Marc Frappier Email address: [EMAIL PROTECTED] PostgreSQL version: 8.2.4 Operating system: windows XP Description:pg_ctl register translates \ to / Details: I'm creating a windows service using the following command "C:\Program Files\PostgreSQL\8.2\bin\pg_ctl.exe" register -N "pgsql-8.2-test" -U bdpostgres -P "xx" -D "C:\data" The created service is defined using / instead of \; hence it does not work when I try to start it. Here is the command executed by the generated service C:/Program Files/PostgreSQL/8.2/bin/pg_ctl.exe runservice -N "pgsql-8.2-test" -D "C:/data" ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [BUGS] BUG #3484: Missing pg_clog file / corrupt index
I have a question that's somehow related to this bug. I hope it's ok to post it here, even if it's not a bug report. We are planning to set up a standby system, in case our production database system crashes again. Replication by WAL archiving is one possible solution. But the question is: would an error like the one we had appear in the WAL, and would it be replicated too? Or is there some kind of consistency check that prevents broken WAL from being restored?

Marc Schablewski
click:ware Informationstechnik GmbH

Decibel! wrote:
> On Jul 25, 2007, at 4:02 PM, Marc Schablewski wrote:
>> The following bug has been logged online:
>>
>> Bug reference: 3484
>> Logged by: Marc Schablewski
>> Email address: [EMAIL PROTECTED]
>> PostgreSQL version: 8.1.8
>> Operating system: SuSE Linux 10.0 / Kernel 2.6.13-15.8-smp (x86-64)
>> Description: Missing pg_clog file / corrupt index
>> Details:
>>
>> Our application receives and processes payment information that comes in plain text files and stores the processed data into different tables in our database. There are about 5 tables involved, three of them with 35 million records so far. We get approximately 15 payments a day. Each payment is handled in a single transaction, because in case of an error, we want to store as many payments as possible. We have about 50 INSERT and a few UPDATE statements each day. The whole application runs on two servers (see specs below) which are nearly identical. One is the production system, the other is for testing.
>>
>> A few months ago we had some trouble with the test system. Postgres complained about a missing pg_clog file during nightly routine VACUUM/ANALYZE. Some days later, the same error occurred on the production system, even on the same table! The corrupted table is one of those bigger ones involved in the file processing.
After searching the web we >> found a >> hint that this problem could be related to a bug in 8.1.3, so we >> upgraded to >> 8.1.8 and restored the databases on both servers. This was around >> May, but >> now we discovered the same problem on our production system again. >> Actually, >> the error occurred four weeks ago, but it was not discovered until now – >> if you do error logging, you should look at it from time to time ;) >> >> When trying to backup or vacuum the database, we receive one of the >> following error messages: >> >> **BACKUP** >> pg_dump: ERROR: could not access status of transaction 2134240 >> DETAIL: could not open file "pg_clog/0002": Datei oder Verzeichnis >> nicht >> gefunden >> pg_dump: SQL command to dump the contents of table "dateiblock" failed: >> PQendcopy() failed. >> pg_dump: Error message from server: ERROR: could not access status of >> transaction 2134240 >> DETAIL: could not open file "pg_clog/0002": Datei oder Verzeichnis >> nicht >> gefunden >> pg_dump: The command was: COPY public.dateiblock (id, eda_id, dbt_id, >> lfdnr, >> binaer_offset, laenge) TO stdout; >> >> **VACUUM** >> INFO: vacuuming "public.dateiblock" >> ERROR: could not access status of transaction 2134240 >> DETAIL: could not open file "pg_clog/0002": Datei oder Verzeichnis >> nicht >> gefunden >> >> (For those not familiar to the German language: ‘Datei oder Verzeichnis >> nicht gefunden’ means ‘file or directory not found’). >> >> Current pg_clogs range from 005A to 008F. The oldest one is dated to >> April >> 30th. >> >> We narrowed it down to a few records in that table. Some records contain >> unreasonable values, others produce the same message about the missing >> pg_clog file when selected and some are simply missing. But they must >> have >> existed, because there are still records in a second table >> referencing them. 
>> One strange thing about this is that the referencing records are about two and a half months old and shouldn't have been touched since then.
>>
>> We don't think this is a hardware issue, because we had it on two different servers and within a short period of time.
>>
>> Luckily, the loss of data is minimal. There are only about 30 records affected. Otherwise this would have been fatal, because as said before, our backup was not working either.
>>
>> In addition, we had another problem today. One of the indexes on a
[BUGS] BUG #3653: Database crash
The following bug has been logged online: Bug reference: 3653 Logged by: Marc Munro Email address: [EMAIL PROTECTED] PostgreSQL version: 8.2.5 Operating system: Linux 2.6.15.6 #4 SMP PREEMPT i686 Description:Database crash Details: Also occurs on 8.2.3 and 8.1.4. The following script causes a db server crash:

psql -d postgres <<'CLUSTEREOF'
create database "skittest" with encoding 'UTF8' connection limit = -1;
CLUSTEREOF

psql -d skittest <<'DBEOF'
create or replace function "public"."mycharin"(in "pg_catalog"."cstring")
  returns "public"."mychar" as 'charin' language internal immutable strict;

create or replace function "public"."mycharout"(in "public"."mychar")
  returns "pg_catalog"."cstring" as 'charout' language internal immutable strict;

create type "public"."mychar"(
  input = "public"."mycharin",
  output = "public"."mycharout",
  internallength = -1,
  alignment = char,
  storage = plain,
  delimiter = ',');

create domain "public"."postal3" as "public"."mychar" default 'xxx' not null;

comment on domain "public"."postal3" is 'wibble';
DBEOF

The following is from the log file:

2007-10-04 11:05:40 PDT NOTICE: type "public.mychar" is not yet defined
2007-10-04 11:05:40 PDT DETAIL: Creating a shell type definition.
2007-10-04 11:05:40 PDT NOTICE: argument type public.mychar is only a shell
2007-10-04 11:05:40 PDT LOG: server process (PID 13775) was terminated by signal 11
2007-10-04 11:05:40 PDT LOG: terminating any other active server processes
2007-10-04 11:05:40 PDT LOG: all server processes terminated; reinitializing
2007-10-04 11:05:40 PDT LOG: database system was interrupted at 2007-10-04 11:05:34 PDT

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
[BUGS] BUG #3672: ALTER TYPE change the underlying index tablespace to default
The following bug has been logged online: Bug reference: 3672 Logged by: Marc Mamin Email address: [EMAIL PROTECTED] PostgreSQL version: 8.2.4 Operating system: Linux Description:ALTER TYPE change the underlying index tablespace to default Details:

steps to repeat:

create table tbltest (
    id serial,
    constraint tbltest_pk primary key (id) USING INDEX TABLESPACE tblspc_idx_ciclocal
);

select tablespace from pg_indexes where indexname = 'tbltest_pk';
=> tblspc_idx_ciclocal

alter table tbltest ALTER id TYPE int2;

select tablespace from pg_indexes where indexname = 'tbltest_pk';
=> NULL

"tbltest_pk" has been moved from its original tablespace to the default one.

regards, Marc Mamin

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
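Until the underlying behavior is fixed, one plausible workaround (a sketch using the report's own object names; not verified against 8.2.4) is to move the rebuilt index back explicitly after the type change:

```sql
-- ALTER TABLE ... ALTER ... TYPE silently rebuilt tbltest_pk in the
-- default tablespace; move it back by hand.
ALTER INDEX tbltest_pk SET TABLESPACE tblspc_idx_ciclocal;

-- Verify the index is back in its original tablespace.
SELECT tablespace FROM pg_indexes WHERE indexname = 'tbltest_pk';
```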
[BUGS] BUG #3697: utf8 issue: can not reimport a table that was successfully exported.
The following bug has been logged online: Bug reference: 3697 Logged by: Marc Mamin Email address: [EMAIL PROTECTED] PostgreSQL version: 8.2.4 Operating system: SuSE Linux 9.1 (i586) Description:utf8 issue: can not reimport a table that was successfully exported. Details: Hello, I'm not sure this is a bug; the problem might be related to a client encoding issue. My database is on a Linux server which I connect to using putty from Windows. Both server and client are set to UTF8: client_encoding | UTF8 backslash_quote | safe_encoding server_encoding | UTF8 I stumbled on this issue while trying to import a "malicious" user agent string... I didn't check if all characters are valid UTF8... My concern is about database recovery. I'm using pg_dump to regularly export my users, but according to the example below, it seems that my dumps may be worthless! Maybe you should consider not publishing this before a fix exists, as this is a serious issue which could eventually be exploited to damage existing instances (in case this really is a bug)...
regards, Marc Mamin

steps to repeat:

CREATE TABLE utf8_test(s varchar);

CREATE OR REPLACE FUNCTION f_utf8_test(st VARCHAR) RETURNS INT AS $$
DECLARE
    quotedline varchar = quote_literal($1);
BEGIN
    INSERT INTO utf8_test (s) VALUES (quotedline);
    RETURN 0;
END;
$$ LANGUAGE plpgsql;

select f_utf8_test('(Mozilla/4.0 (compatible; MSIE 6.0; Wind \xE0\xF0\xF1\xF2\xE2\xE5\xED\xED\xFB\xE9 \xE2\xFB\xF1\xF8\9 \xE3\xEE\xF1\xF3\xE4 xE4\xE6 \xCD\xC1 \xD0\xC1")');

-- here the same statement, but with all backslashes duplicated for the case when the string was modified when posting this issue:
select f_utf8_test('(Mozilla/4.0 (compatible; MSIE 6.0; Wind \\xE0\\xF0\\xF1\\xF2\\xE2\\xE5\\xED\\xED\\xFB\\xE9 \\xE2\\xFB\\xF1\\xF8\\9 \\xE3\\xEE\\xF1\\xF3\\xE4 xE4\\xE6 \\xCD\\xC1 \\xD0\\xC1")');

WARNING: nonstandard use of escape in a string literal
LINE 1: select f_utf8_test('(Mozilla/4.0 (compatible; MSIE 6.0; Wind

COPY utf8_test TO '/tmp/utf8_test.txt';
COPY 1
COPY utf8_test FROM '/tmp/utf8_test.txt';
ERROR: invalid byte sequence for encoding "UTF8": 0xd3ce
HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
CONTEXT: COPY utf8_test, line 1

The same issue occurs when using pg_dump:

pg_dump -i -v -p 5433 -Uisdb2 -tutf8_test > /tmp/utf8_dump
pg_dump: server version: 8.2.4; pg_dump version: 8.2.1
pg_dump: proceeding despite version mismatch
psql -f"/tmp/utf8_dump"
=> psql:/tmp/utf8_dump:40: ERROR: invalid byte sequence for encoding "UTF8": 0xd3ce
HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
CONTEXT: COPY utf8_test, line 1

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster
Re: [BUGS] BUG #3697: utf8 issue: can not reimport a table that was successfully exported.
Thank you for your quick response,

> if you don't quote backslashes in untrusted input you'll have problems far worse than this one

I do it now, but have not since my db went live... So I probably have some invalid characters in it. Is this an issue that must be fixed before I can upgrade to 8.3? Is there a recommendation on how to clean these data (I know where to search for them)?

Thanks,
Marc Mamin

-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Thursday, October 25, 2007 6:08 PM
To: Marc Mamin
Cc: pgsql-bugs@postgresql.org
Subject: Re: [BUGS] BUG #3697: utf8 issue: can not reimport a table that was successfully exported.

"Marc Mamin" <[EMAIL PROTECTED]> writes:
> I didn't check if all characters are valid UTF8...

They aren't ...

> select f_utf8_test('(Mozilla/4.0 (compatible; MSIE 6.0; Wind
> \xE0\xF0\xF1\xF2\xE2\xE5\xED\xED\xFB\xE9 \xE2\xFB\xF1\xF8\9
> \xE3\xEE\xF1\xF3\xE4
> xE4\xE6 \xCD\xC1 \xD0\xC1")');

In 8.3 that will throw an error:

utf8=# select f_utf8_test('(Mozilla/4.0 (compatible; MSIE 6.0; Wind
utf8'# \xE0\xF0\xF1\xF2\xE2\xE5\xED\xED\xFB\xE9 \xE2\xFB\xF1\xF8\9
utf8'# \xE3\xEE\xF1\xF3\xE4
utf8'# xE4\xE6 \xCD\xC1 \xD0\xC1")');
WARNING: nonstandard use of escape in a string literal
LINE 1: select f_utf8_test('(Mozilla/4.0 (compatible; MSIE 6.0; Wind
HINT: Use the escape string syntax for escapes, e.g., E'\r\n'.
ERROR: invalid byte sequence for encoding "UTF8": 0xe0f0f1
HINT: This error can also happen if the byte sequence does not match the encoding expected by the server, which is controlled by "client_encoding".
utf8=#

However, since this behavior isn't backwards-compatible, there's not much appetite for back-patching it. I don't think this is a security issue --- if you don't quote backslashes in untrusted input you'll have problems far worse than this one.
regards, tom lane ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [BUGS] BUG #3787: configure: error: readline library not found
You probably don't have readline's header files installed. They are usually part of your linux distro, but not installed by default. Install the files and rerun configure. Marc paulo wrote: > The following bug has been logged online: > > Bug reference: 3787 > Logged by: paulo > Email address: [EMAIL PROTECTED] > PostgreSQL version: 8.0.14 > Operating system: MANDRIVA > Description:configure: error: readline library not found > Details: > > Help -me no install > > configure: error: readline library not found > If you have readline already installed, see config.log for details on the > failure. It is possible the compiler isn't looking in the proper > directory. > Use --without-readline to disable readline support. > > ---(end of broadcast)--- > TIP 4: Have you searched our list archives? > >http://archives.postgresql.org > > > -- Marc Schablewski click:ware Informationstechnik GmbH ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[BUGS] BUG #3861: cannot cast type smallint to bit
The following bug has been logged online: Bug reference: 3861 Logged by: Marc mamin Email address: [EMAIL PROTECTED] PostgreSQL version: 8.2.4 Operating system: Linux Description:cannot cast type smallint to bit Details: Hello, This is more a feature request than a bug, but I guess there is no reason why this should not be possible: select 1::int2::bit(16) => ERROR: cannot cast type smallint to bit I guess the correct way to workaround would be: select (1::int4::bit(32)<<16)::bit(16) which is a bit complicated :-) HTH, Marc Mamin ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
[BUGS] psql malloc problem
Postgres version 8.3.0 on Mac OSX Version 10.4.11 Built from source with no parameters to configure script (ie a simple default build). Error occurs every time I exit psql: marc:[prototype]$ psql -d postgres Welcome to psql 8.3.0, the PostgreSQL interactive terminal. Type: \copyright for distribution terms \h for help with SQL commands \? for help with psql commands \g or terminate with semicolon to execute query \q to quit postgres=# \q psql(3050) malloc: *** error for object 0x1807000: incorrect checksum for freed object - object was probably modified after being freed, break at szone_error to debug psql(3050) malloc: *** set a breakpoint in szone_error to debug Segmentation fault marc:[prototype]$ psql -l List of databases Name| Owner | Encoding ---+--+--- monkey| marc | SQL_ASCII postgres | postgres | SQL_ASCII template0 | postgres | SQL_ASCII template1 | postgres | SQL_ASCII (4 rows) marc:[prototype]$ gdb backtrace: monkey=# \q psql(2895) malloc: *** error for object 0x1807000: incorrect checksum for freed object - object was probably modified after being freed, break at szone_error to debug psql(2895) malloc: *** set a breakpoint in szone_error to debug Program received signal EXC_BAD_ACCESS, Could not access memory. Reason: KERN_INVALID_ADDRESS at address: 0x305c303c 0x900065ed in szone_free () (gdb) bt #0 0x900065ed in szone_free () #1 0x90005588 in free () #2 0x90010f5f in fclose () #3 0x964a891c in history () #4 0x964a9b05 in write_history () #5 0x9cbc in saveHistory () #6 0x9dd2 in finishInput () #7 0x900103ec in __cxa_finalize () #8 0x900102e8 in exit () #9 0x2422 in _start () #10 0x2341 in start () (gdb) Please let me know what else I can do to assist in tracking this down. __ Marc ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [BUGS] BUG #3958: Self-Join Group-By Clause Produces Incorrect Results
I respectfully challenge that the aggregation is correct. In the where clause, I specify A2.AUDIT_TYPE_CODE = CONTENT_2, thus returning only 2 rows for A2 and not all of the rows in A2 which happen to have a TXN_COUNT of 1 / row but could in fact be any positive number. I used 1 for simplicity. Similarly, if you take out A1 from the query you receive the following result: audit_date|content_policy_name|sum_2 2008-01-01|TEST POLICY|2 I do not see how/why a self-join changes the condition specified in the where clause and thus returns a sum of 8 rows that do not meet the specified condition in the query? Thanks in advance, Marcus Torres - Original Message From: Tom Lane <[EMAIL PROTECTED]> To: Heikki Linnakangas <[EMAIL PROTECTED]> Cc: Marcus Torres <[EMAIL PROTECTED]>; pgsql-bugs@postgresql.org Sent: Wednesday, February 13, 2008 7:40:32 AM Subject: Re: [BUGS] BUG #3958: Self-Join Group-By Clause Produces Incorrect Results "Heikki Linnakangas" <[EMAIL PROTECTED]> writes: > Marcus Torres wrote: >> I wrote a simple self-join query to sum the transaction count of different >> types of records in a audit table and the result set for the different sum >> totals was the same which is incorrect. > Looks perfectly correct to me. Me too. 
The underlying data before grouping/aggregation is regression=# select A1.AUDIT_DATE, P.CONTENT_POLICY_NAME, A1.TXN_COUNT, A2.TXN_COUNT FROM T_AUDIT A1, T_AUDIT A2, T_POLICY P WHERE P.ID = A1.POLICY_ID AND P.ID = A2.POLICY_ID AND A1.POLICY_ID = A2.POLICY_ID AND A1.AUDIT_DATE = A2.AUDIT_DATE AND A1.AUDIT_TYPE_CODE = 'CONTENT_1' AND A2.AUDIT_TYPE_CODE = 'CONTENT_2'; audit_date | content_policy_name | txn_count | txn_count +-+---+--- 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 (10 rows) from which it's clear that given all ones in txn_count, the sums *must* be the same because they're taken over the same number of rows. I suspect what the OP needs is two separate queries (perhaps union'ed together) not a self-join. regards, tom lane Never miss a thing. Make Yahoo your home page. http://www.yahoo.com/r/hs
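Tom's "two separate queries" suggestion can also be collapsed into a single scan with conditional aggregation, which avoids the row-multiplying self-join entirely. A sketch using the column names visible in the thread (untested against the OP's actual schema, so treat the names as assumptions):

```sql
-- One pass over T_AUDIT: each SUM only counts rows of its own type,
-- so the two totals can no longer contaminate each other.
SELECT a.audit_date,
       p.content_policy_name,
       SUM(CASE WHEN a.audit_type_code = 'CONTENT_1' THEN a.txn_count ELSE 0 END) AS sum_1,
       SUM(CASE WHEN a.audit_type_code = 'CONTENT_2' THEN a.txn_count ELSE 0 END) AS sum_2
FROM t_audit a
JOIN t_policy p ON p.id = a.policy_id
GROUP BY a.audit_date, p.content_policy_name;
```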
Re: [BUGS] BUG #3958: Self-Join Group-By Clause Produces Incorrect Results
Please disregard the previous email. After rereading what you sent, I realized that I need an outer join to A2 and not simply a self join...thanks and my apologies! - Original Message From: Tom Lane <[EMAIL PROTECTED]> To: Heikki Linnakangas <[EMAIL PROTECTED]> Cc: Marcus Torres <[EMAIL PROTECTED]>; pgsql-bugs@postgresql.org Sent: Wednesday, February 13, 2008 7:40:32 AM Subject: Re: [BUGS] BUG #3958: Self-Join Group-By Clause Produces Incorrect Results "Heikki Linnakangas" <[EMAIL PROTECTED]> writes: > Marcus Torres wrote: >> I wrote a simple self-join query to sum the transaction count of different >> types of records in a audit table and the result set for the different sum >> totals was the same which is incorrect. > Looks perfectly correct to me. Me too. The underlying data before grouping/aggregation is regression=# select A1.AUDIT_DATE, P.CONTENT_POLICY_NAME, A1.TXN_COUNT, A2.TXN_COUNT FROM T_AUDIT A1, T_AUDIT A2, T_POLICY P WHERE P.ID = A1.POLICY_ID AND P.ID = A2.POLICY_ID AND A1.POLICY_ID = A2.POLICY_ID AND A1.AUDIT_DATE = A2.AUDIT_DATE AND A1.AUDIT_TYPE_CODE = 'CONTENT_1' AND A2.AUDIT_TYPE_CODE = 'CONTENT_2'; audit_date | content_policy_name | txn_count | txn_count +-+---+--- 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 2008-01-01 | TEST POLICY | 1 | 1 (10 rows) from which it's clear that given all ones in txn_count, the sums *must* be the same because they're taken over the same number of rows. I suspect what the OP needs is two separate queries (perhaps union'ed together) not a self-join. regards, tom lane Never miss a thing. Make Yahoo your home page. http://www.yahoo.com/r/hs
[BUGS] Problem with sequence and rule
Forgive me if this is not a bug, but I have a problem with a rule on a table which has a column with a sequence. I'm using postgres 7.3.4.

I have a table named "album" with the following structure (part only):

CREATE TABLE album (
    id integer DEFAULT nextval('"album_id_seq"'::text) NOT NULL,
    isbn character varying(10),
    flags smallint DEFAULT 0,

and many more columns that are not relevant here. I have another table "album_edit_tst":

    alb_id integer NOT NULL,
    ed_ref character varying(30) NOT NULL,
    isbn character varying(30),
    flags smallint DEFAULT 0,

whose purpose is to gather additional information (only related to the "album" table by the alb_id, if its value is > 0). Currently I have some queries that query both tables (with a UNION) to get the complete relevant information. My main objective is to get all data from "album" inserted into "album_edit_tst" so that I can use a single select. Since I want to gain execution time by this method, views are not suited. So I've created the following rule to update "album_edit_tst" in conjunction with "album":

CREATE RULE albed_setalb_rl AS ON INSERT TO album
    DO INSERT INTO album_edit_tst (alb_id, ed_ref, isbn, flags)
    VALUES (NEW.id, '', NEW.isbn, NEW.flags);

Note: The insert queries on table "album" do not specify the "id" column. I leave it to PG.

When I insert new values the rule works, but the value for "id" is wrong. Instead of getting the same value used in the insert on "album" I get the next one. Example: id = '8225' in "album", but set to '8226' in the record inserted into "album_edit_tst".

Now if I play dumb with PG and use this rule instead:

CREATE RULE albed_setalb_rl AS ON INSERT TO album
    DO INSERT INTO album_edit_tst (alb_id, ed_ref, isbn, flags)
    VALUES (NEW.id, '', NEW.id, NEW.id);

(I place the "id" value in 3 columns.) I get this result: record in "album": id=8230, ... record in "album_edit_tst": alb_id=8231, isbn=8232, flags=8233.

Now my questions are:
- Is this expected behavior?
- How can I bypass this problem and ensure that I use the correct value, and that it's not incremented once more ? Thanks -- Marc ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [BUGS] Problem with sequence and rule
At 21:28 31/07/2004 -0400, Tom Lane wrote: >>> Now my questions are: >>> - Is this an expected behavior ? > It is. Rules are essentially macros and so you have all the usual > potential gotchas with multiple evaluations of their input arguments. I've understood what was done by the evaluation process. I was just expecting that the "NEW" variable would contain the inserted values (after all it contains correct values for non-sequence columns). > The recommended way to handle this type of problem is with a trigger > rather than a rule. I've changed this operation into a trigger, and it works like a charm. The function receives the correct values, even the oid (which "rule" doesn't provide). I've since modified my queries to use the unified table, and I've gained approx. 25-35% of execution time. Interesting on an admin page that takes seconds to generate (hundreds of table lookups). Thanks for your help. -- Marc ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faqs/FAQ.html
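For readers landing on this thread: a minimal sketch of what the trigger-based replacement for the original rule can look like. The function and trigger names are illustrative, not Marc's actual code; the key point is that inside a trigger, NEW.id already holds the value the sequence produced for the inserted row. (7.3 predates dollar quoting, hence the quoted function body.)

```sql
CREATE OR REPLACE FUNCTION albed_setalb_fn() RETURNS trigger AS '
BEGIN
    -- NEW.id is the sequence value already assigned to the row in "album";
    -- unlike in a rule, it is not re-evaluated as nextval() here.
    INSERT INTO album_edit_tst (alb_id, ed_ref, isbn, flags)
    VALUES (NEW.id, '''', NEW.isbn, NEW.flags);
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER albed_setalb_tg AFTER INSERT ON album
    FOR EACH ROW EXECUTE PROCEDURE albed_setalb_fn();
```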
[BUGS] Problem with rule and null value
This is a resend; wrong email account the first time.

---

I'm using PostgreSQL 7.3.4. I've set up a series of rules and triggers to populate an intermediate table with similar information from another table. This ensures better performance over a solution with views.

Intermediate table:

Table album_edit
Field    Type     Length  Not Null  Default
alb_id   int4     4       Yes
ed_ref   varchar  30      Yes
isbn     varchar  30      No
flags    int2     2       No
pls_id   int4     4       No

A set of rules is added to another table, "album". The general principle of the relation between "album" and "album_edit" is that each record of "album" is duplicated in "album_edit" (and these records are uniquely identified in "album_edit" with the use of column "ed_ref"). So inserts, updates and deletes on "album" are passed on to "album_edit". One of the update rules is this one:

CREATE OR REPLACE RULE albed_updalb3_rl AS ON UPDATE TO album
    WHERE new.pls_id != old.pls_id
    DO UPDATE album_edit SET pls_id = new.pls_id WHERE alb_id = new.id;

It works unless new.pls_id or old.pls_id is null. The rule is still called (based on my test), but the "DO" query is not executed correctly: the values in "album_edit" are not updated. To find the source of the problem I've modified the rule:

CREATE OR REPLACE RULE albed_updalb3_rl AS ON UPDATE TO album
    WHERE new.pls_id != old.pls_id
    DO select old.pls_id, new.pls_id;

Is this a bug or intended behavior? Here is a log with some tests.
--
bd=# CREATE OR REPLACE RULE albed_updalb3_rl AS ON UPDATE TO album WHERE new.pls_id != old.pls_id DO select old.pls_id, new.pls_id;
CREATE RULE
bd=# update album set pls_id='666' where id='8838';
 pls_id | pls_id
--------+--------
    100 |    666
(1 row)
bd=# update album set pls_id=null where id='8838';
 pls_id | pls_id
--------+--------
(0 rows)
bd=# update album set pls_id='666' where id='8838';
 pls_id | pls_id
--------+--------
(0 rows)
bd=# update album set pls_id='111' where id='8838';
 pls_id | pls_id
--------+--------
    666 |    111
(1 row)
bd=# update album set pls_id='0' where id='8838';
 pls_id | pls_id
--------+--------
    111 |      0
(1 row)
bd=# update album set pls_id='111' where id='8838';
 pls_id | pls_id
--------+--------
      0 |    111
(1 row)
bd=# update album set pls_id=null where id='8838';
 pls_id | pls_id
--------+--------
(0 rows)
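The pattern in the log is ordinary SQL three-valued logic rather than a rule bug: when either side is NULL, `new.pls_id != old.pls_id` evaluates to NULL, and the rule's WHERE clause treats that as false, so the DO action is skipped. The NULL-safe spelling uses IS DISTINCT FROM; a sketch of the rewritten rule (assuming the server version in use supports that operator):

```sql
-- True whenever the values differ, including changes to or from NULL.
CREATE OR REPLACE RULE albed_updalb3_rl AS ON UPDATE TO album
    WHERE new.pls_id IS DISTINCT FROM old.pls_id
    DO UPDATE album_edit SET pls_id = new.pls_id WHERE alb_id = new.id;
```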
[BUGS] BUG #1612: Problem with PGAPI_SetPos
The following bug has been logged online: Bug reference: 1612 Logged by: Marc Soleda Email address: [EMAIL PROTECTED] PostgreSQL version: 8.0.1 Operating system: Win XP SP2 Description:Problem with PGAPI_SetPos Details: Environment: WinXP SP2/Visual C++ 6/MFC PostgreSQL 8.0.1 db ODBC 8.00.01.01 I'm accessing the Postgres from a VC++ application using the ODBC driver mentioned. All works fine (open the db, SELECT, ...) until I try to insert a row using the CRecordset::Update() class method. This method is intended to insert to the db all the records previously added in the recordset object. Postgres returns the exception: "Only SQL_POSITION/REFRESH is supported for PGAPI_SetPos" Anybody has encountered with the same problem? ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[BUGS] BUG #1631: pg_autovacuum fails when creating as a windows service
The following bug has been logged online: Bug reference: 1631 Logged by: Marc Soleda Email address: [EMAIL PROTECTED] PostgreSQL version: 8.0.1 Operating system: WinXP SP2 Description:pg_autovacuum fails when creating as a windows service Details: I'm installing pg_autovacuum as a windows service. The windows user account used is a local one and it has admin privileges: pg_autovacuum -I -N winuser -W pwdwinuser -U pguser -P pwdpguser and I get: ERROR: The account name is invalid or does not exist, or the password is invalid for the account name specified. I've tried with different admin users but the answer is always the same. Any ideas? If I execute pg_autovacuum as standalone, all works fine. ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[BUGS] BUG #2185: function compilation error with "Create [TEMP] table?
The following bug has been logged online:

Bug reference: 2185
Logged by: marc mamin
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.1
Operating system: DB Server: Linux; Client: Windows XP
Description: function compilation error with "CREATE [TEMP] TABLE"
Details:

Within a function, when I:
- use create temp table,
- do anything with this table,
- drop that table,

the first call to that function works, but further calls fail. Rebuilding the function before each call fixes the issue. I guess that the function is not yet compiled at the first call, and that further calls use a compiled version.

Cheers,
Marc

Here are the steps to repeat the bug:

CREATE OR REPLACE FUNCTION bugtest() RETURNS int AS
$BODY$
BEGIN
    create temp table bugt(i int);
    insert into bugt values(1);
    drop table bugt;
    RETURN 0;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

select bugtest();
--> 0
select bugtest();
--> ERROR: relation with OID 52284 does not exist
--> CONTEXT: SQL statement "insert into bugt values(1)"
--> PL/pgSQL function "bugtest" line 9 at SQL statement
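This is the well-known pre-8.3 plan-caching behavior: plpgsql caches the statement plans, a cached plan stores the temp table's OID, and that OID changes when the table is dropped and recreated. The usual workaround on those versions is to route the statements through EXECUTE so they are re-planned on every call (8.3's plan invalidation later made this unnecessary). A sketch of the reworked function:

```sql
CREATE OR REPLACE FUNCTION bugtest() RETURNS int AS
$BODY$
BEGIN
    -- EXECUTE re-plans each statement at run time, so the cached OID of a
    -- previously dropped temp table is never reused.
    EXECUTE 'create temp table bugt(i int)';
    EXECUTE 'insert into bugt values(1)';
    EXECUTE 'drop table bugt';
    RETURN 0;
END;
$BODY$ LANGUAGE 'plpgsql' VOLATILE;
```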
[BUGS] BUG #5801: characters not encoded properly for column names
The following bug has been logged online: Bug reference: 5801 Logged by: Marc Cousin Email address: cousinm...@gmail.com PostgreSQL version: 9.0.2 Operating system: Windows XP Description:characters not encoded properly for column names Details: I get a different behaviour between a Linux and a Windows server, when a user creates an accentuated column name. All tests below were done with a linux psql client, the console being set on win1252 charset (so the input character is truly 'é' in win1252) With the Linux server : marc=# SET client_encoding TO 'win1252'; SET marc=# CREATE TABLE test (nom varchar, prénom varchar); CREATE TABLE marc=# \d test Table "public.test" Column | Type| Modifiers +---+--- nom| character varying | prénom | character varying | 'prénom' is also displayed correctly if client_encoding and console are UTF8, so the conversion is good. With the Windows server : test=# SET client_encoding TO 'win1252'; SET test=# CREATE TABLE test (nom varchar, prénom varchar); CREATE TABLE test=# \d test ERROR: invalid byte sequence for encoding "UTF8": 0xe3a96e test=# SELECT attname from pg_attribute where attrelid = (select oid from pg_class where relname = 'test'); ERROR: invalid byte sequence for encoding "UTF8": 0xe3a96e test=# select version(); version - PostgreSQL 9.0.2, compiled by Visual C++ build 1500, 32-bit (1 row) The main reason that this is a problem is that the table cannot be pg_dumped anymore because of this. -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] BUG #5801: characters not encoded properly for column names
No, from a Linux psql client (inside a kde konsole). You can setup any charset in it. I have exactly the same behaviour with psql under windows anyway, with a chcp 1252 in the cmd console. It's not a console charset problem, I've tripled checked that :) And anyway, this character shouldn't get into the database as UTF8, as it is 1252 (hence the error message). The Thursday 23 December 2010 11:24:21, Pavel Stehule wrote : > Hello Marc, > > It was entered from a windows cmd console? It doesn't use win > encodings as default. For example, it must be executed with parameter > /c chcp 1250 for win1250 encoding. > > Regards > > Pavel Stehule > > 2010/12/23 Marc Cousin : > > The following bug has been logged online: > > > > Bug reference: 5801 > > Logged by: Marc Cousin > > Email address: cousinm...@gmail.com > > PostgreSQL version: 9.0.2 > > Operating system: Windows XP > > Description:characters not encoded properly for column names > > Details: > > > > I get a different behaviour between a Linux and a Windows server, when a > > user creates an accentuated column name. > > > > All tests below were done with a linux psql client, the console being set > > on win1252 charset (so the input character is truly 'é' in win1252) > > > > With the Linux server : > > marc=# SET client_encoding TO 'win1252'; > > SET > > marc=# CREATE TABLE test (nom varchar, prénom varchar); > > CREATE TABLE > > > > > > marc=# \d test > > > > > > Table "public.test" > > > > > > Column | Type| Modifiers > > > > > > +---+--- > > > > > > nom| character varying | > > > > > > prénom | character varying | > > > > 'prénom' is also displayed correctly if client_encoding and console are > > UTF8, so the conversion is good. 
> > > > With the Windows server : > > test=# SET client_encoding TO 'win1252'; > > SET > > test=# CREATE TABLE test (nom varchar, prénom varchar); > > CREATE TABLE > > test=# \d test > > ERROR: invalid byte sequence for encoding "UTF8": 0xe3a96e > > test=# SELECT attname from pg_attribute where attrelid = (select oid from > > pg_class where relname = 'test'); > > ERROR: invalid byte sequence for encoding "UTF8": 0xe3a96e > > test=# select version(); > > version > > - > > PostgreSQL 9.0.2, compiled by Visual C++ build 1500, 32-bit > > (1 row) > > > > > > The main reason that this is a problem is that the table cannot be > > pg_dumped anymore because of this. > > > > -- > > Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) > > To make changes to your subscription: > > http://www.postgresql.org/mailpref/pgsql-bugs -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] BUG #5801: characters not encoded properly for column names
On Thursday, 23 December 2010 18:21:55, John R Pierce wrote:
> On 12/23/10 2:34 AM, Marc Cousin wrote:
>> No, from a Linux psql client (inside a kde konsole). You can setup any
>> charset in it. I have exactly the same behaviour with psql under windows
>> anyway, with a chcp 1252 in the cmd console. It's not a console charset
>> problem, I've triple-checked that :)
>>
>> And anyway, this character shouldn't get into the database as UTF8, as it
>> is 1252 (hence the error message).
>
> does client_encoding affect names? (I'm asking because I have no idea).

Yes (for the Linux server; for Windows it fails). I have exactly the same problem if I test with LATIN9 (except that the utf8 error message has a different value for the bad character).

> what encodings are the database clusters on the two platforms?

Oh. Both are UTF-8.
Re: [BUGS] BUG #5801: characters not encoded properly for column names
2010/12/27 Robert Haas : > On Thu, Dec 23, 2010 at 5:18 AM, Marc Cousin wrote: >> With the Windows server : >> test=# SET client_encoding TO 'win1252'; >> SET > > I have a vague recollection that the argument to SET client_encoding > isn't validated on Windows, and if you enter a value that it doesn't > like it simply silently doesn't work. Am I wrong? What happens if > you do: > > SET client_encoding TO > 'some_really_long_string_that_is_almost_certainly_not_a_valid_encoding'; Here it is… postgres=# SET client_encoding TO 'foo'; ERROR: invalid value for parameter "client_encoding": "foo" (It does the same with your really long string by the way :) ) Seems validated to me ? -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] BUG #5801: characters not encoded properly for column names
The Tuesday 28 December 2010 12:49:20, Robert Haas wrote : > On Tue, Dec 28, 2010 at 4:01 AM, Marc Cousin wrote: > > 2010/12/27 Robert Haas : > >> On Thu, Dec 23, 2010 at 5:18 AM, Marc Cousin wrote: > >>> With the Windows server : > >>> test=# SET client_encoding TO 'win1252'; > >>> SET > >> > >> I have a vague recollection that the argument to SET client_encoding > >> isn't validated on Windows, and if you enter a value that it doesn't > >> like it simply silently doesn't work. Am I wrong? What happens if > >> you do: > >> > >> SET client_encoding TO > >> 'some_really_long_string_that_is_almost_certainly_not_a_valid_encoding'; > > > > Here it is… > > > > postgres=# SET client_encoding TO 'foo'; > > ERROR: invalid value for parameter "client_encoding": "foo" > > > > (It does the same with your really long string by the way :) ) > > > > Seems validated to me ? > > Hrm, OK. Well, you just used up my one guess. :-( Sorry about that. Anyone else wanting to take a guess ? :) -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
[BUGS] collation problem on 9.1-beta1
Hi,

I've been starting to work on a 'what's new in 9.1' like I did last year, and am faced with what I feel is a bug, while building a demo case for collation. Here it is:

SELECT * from (values ('llegar'),('llorer'),('lugar')) as tmp order by 1 collate "es_ES.utf8";
ERROR:  collations are not supported by type integer at character 74
STATEMENT:  SELECT * from (values ('llegar'),('llorer'),('lugar')) as tmp order by 1 collate "es_ES.utf8";

marc=# SELECT * from (values ('llegar'),('llorer'),('lugar')) as tmp order by column1 collate "es_ES.utf8";
 column1
---------
 llegar
 llorer
 lugar
(3 rows)

Of course, without the collate keyword, the «order by 1» works as usual.

Regards,
Marc
Re: [BUGS] collation problem on 9.1-beta1
On 02/06/2011 14:09, Peter Eisentraut wrote:
> On ons, 2011-05-11 at 14:58 -0400, Tom Lane wrote:
>> Marc Cousin writes:
>>> I've been starting to work on a 'what's new in 9.1' like I did last
>>> year, and am faced with what I feel is a bug, while building a demo
>>> case for collation. Here it is:
>>> SELECT * from (values ('llegar'),('llorer'),('lugar')) as tmp order by
>>> 1 collate "es_ES.utf8";
>>> ERROR: collations are not supported by type integer at character 74
>>
>> This isn't a bug, or at least not one we're going to fix. ORDER BY
>> column-number is a legacy syntax that doesn't support many options, and
>> COLLATE is one of the ones that it doesn't support. (The actual
>> technical reason for this is that COLLATE turns the argument into a
>> general expression, not something we can special-case. You would get
>> the same error from writing "1 COLLATE something" anyplace else.)
>
> Well, I'm just counting how many more people are going to complain about
> this before fixing it. Possibly in 9.2.

I wasn't complaining at all, just wondering if this was intended or not :) But I'm sure that I won't be the only one to be caught by this, as it took me about ten minutes to fall into this trap. That was the main reason I reported this problem in the first place :)
Re: [BUGS] BUG #5741: syslog line length
On 02/11/2010 17:17, heasley wrote:
> The following bug has been logged online:
>
> Bug reference: 5741
> Logged by: heasley
> Email address: h...@shrubbery.net
> PostgreSQL version: 8.4
> Operating system: solaris
> Description: syslog line length
> Details:
>
> * Max string length to send to syslog(). Note that this doesn't count the
> * sequence-number prefix we add, and of course it doesn't count the prefix
> * added by syslog itself. On many implementations it seems that the hard
> * limit is approximately 2K bytes including both those prefixes.
> */
> #ifndef PG_SYSLOG_LIMIT
> #define PG_SYSLOG_LIMIT 1024
> #endif
>
> solaris' syslogd limits the line length to 1024; with a FQDN and its silly
> "msg ID", quite a bit is dropped by syslogd.

I've been having the exact same problem with CentOS 5.5 these days (and a customer's Red Hat 5.4). This same problem occurs with sysklogd, which has a

#define MAXLINE 1024    /* maximum line length */

Replacing sysklogd with rsyslog, metalog or syslog-ng solves the problem, as they all have 2048 for their buffer size. But CentOS and RedHat 5 both seem to have sysklogd as the default logger. I'll solve this by changing their logger, but I thought it would be worth mentioning.
[BUGS] BUG #6057: regexp_replace & back references
The following bug has been logged online:

Bug reference: 6057
Logged by: Marc Mamin
Email address: m...@intershop.de
PostgreSQL version: 9.0.4
Operating system: Windows
Description: regexp_replace & back references
Details:

select regexp_replace ('a','(a)','\\1'||substring('\\1',1,1)||'\\1','g')
=> a\1

I'd expect a\1a as the result. More generally, I miss the ability to use back references as function parameters. It is only possible with some operators (e.g. '\\1'||'\\1'), probably depending on their lexical precedence. Any way to work around this limitation would be very helpful.

HTH,
Marc Mamin
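What happens here is that the replacement argument is evaluated as an ordinary string expression before regexp_replace ever runs; back references are then parsed out of the final string. substring('\\1',1,1) yields a single backslash, which merges with the following \1 into an escaped backslash plus a literal 1 — hence the a\1 result. To get a\1a, the middle piece needs a doubled (escaped) backslash before the 1. A sketch, using an E'' literal so the backslash counting is explicit:

```sql
-- Replacement string as seen by regexp_replace: \1\\1\1
--   \1 -> back reference ('a'), \\ -> literal '\', 1 -> literal '1'
SELECT regexp_replace('a', '(a)', E'\\1\\\\1\\1', 'g');  -- a\1a
```

It remains true that a back reference cannot be passed as a function argument: the function sees only the literal two-character string '\1', never the matched text.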
[BUGS] BUG #6168: db_link may generate additional unformatted log entries in stderr
The following bug has been logged online:

Bug reference: 6168
Logged by: Marc Mamin
Email address: m...@intershop.de
PostgreSQL version: 9.1 Beta 3
Operating system: Linux
Description: db_link may generate additional unformatted log entries in stderr
Details:

Hello,

The problem occurs with 9.1 Beta 3 on Linux, but I suspect this is an older issue in db_link. Here is my logging configuration:

log_destination = 'stderr,csvlog'
logging_collector = on
log_line_prefix = '%t|%c|%p|%u|'
log_min_messages = warning

I've stumbled on a few entries in my stderr log where the line prefix was missing, e.g.:

2011-08-08 13:12:16 CEST|4e3fc490.17bc|6076|foo|DETAIL: Returned type character ...
2011-08-08 13:12:16 CEST|4e3fc490.17bc|6076|foo|CONTEXT: PL/pgSQL function "bm_ ...
2011-08-08 13:12:16 CEST|4e3fc490.17bc|6076|foo|STATEMENT: Select coalesce(colu ...
=> NOTICE: table "sessions_summary_raw_20110815" does not exist, skipping
=> NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index "pk_sessions_s ...
=> NOTICE: table "sessions_site_summary_raw_20110815" does not exist, skipping

Moreover, NOTICE entries should not be logged at all! Those entries are coming from db_link.
Here is a small test case to repeat:

CREATE FUNCTION my_warn (msg varchar) returns int AS $$
BEGIN
    raise WARNING '%', msg;
    return 0;
END;
$$ language 'PLPGSQL';

cic_db=# SELECT dblink_connect_u('1', 'hostaddr=127.0.0.1 port=5432 dbname=cic_db user=isdb1 password=foo');
 dblink_connect_u
------------------
 OK
(1 row)

cic_db=# SELECT dblink_send_query ('1', $$select my_warn ('my_warn test')$$);
 dblink_send_query
-------------------
 1
(1 row)

cic_db=# SELECT * FROM dblink_get_result('1') AS t1(f1 int);
 f1
----
  0
(1 row)

cic_db=# SELECT dblink_disconnect('1');
 dblink_disconnect
-------------------
 OK
(1 row)

DROP FUNCTION my_warn(msg varchar);

stderr log: the correct warning entry is followed by a repetition of the message. (In my real case I only see the unformatted NOTICE, probably because of log_min_messages set to warning.)

2011-08-19 12:44:11 CEST|4e4e3e7b.529d|21149|isdb1|WARNING: my_warn test
=> WARNING: my_warn test

The csvlog is clean; it does not contain these extra entries:

2011-08-19 12:44:11.799 CEST,"isdb1","cic_db",21149,"127.0.0.1:42769",4e4e3e7b.529d,1,"SELECT",2011-08-19 12:44:11 CEST,4/116414,0,WARNING,01000,"my_warn test",""

As far as I can see, this is just a cosmetic issue, but it might be problematic for third party applications that parse the stderr log.

HTH,
Marc Mamin
Re: [BUGS] BUG #6168: db_link may generate additional unformatted log entries in stderr
>> I've stumbled on a few entries in my stderr log where the line prefix where >> missing. e.g.: >> >>2011-08-08 13:12:16 CEST|4e3fc490.17bc|6076|foo|DETAIL: Returned type >> character ... >>2011-08-08 13:12:16 CEST|4e3fc490.17bc|6076|foo|CONTEXT: PL/pgSQL >> function "bm_ ... >>2011-08-08 13:12:16 CEST|4e3fc490.17bc|6076|foo|STATEMENT: Select >> coalesce(colu ... >> => NOTICE: table "sessions_summary_raw_20110815" does not exist, skipping >> => NOTICE: ALTER TABLE / ADD PRIMARY KEY will create implicit index >> "pk_sessions_s ... >> => NOTICE: table "sessions_site_summary_raw_20110815" does not exist, >> skipping >> >> NOTICE entries should moreover not be logged ! >> >> Those entries are coming from db_link. >> >I guess your test case is looping back to the same Postgres instance? Yes, I use db_link to trigger multithreading :) >But frankly this is an enhancement request, not a bug. The entire point >of the stderr logging mechanism is to capture random printouts to stderr >that might happen in code called by the backend, and it's doing exactly >what it's supposed to. Thanks for the explanation, this issue has no impact on our application. best regards, Marc Mamin
[BUGS] BUG #6209: Invalid subquery is accepted within a IN() clause
The following bug has been logged online: Bug reference: 6209 Logged by: Marc Mamin Email address: m...@intershop.de PostgreSQL version: 9.1beta3 Operating system: Linux Description: Invalid subquery is accepted within an IN() clause Details: Hello, This is somewhat similar to BUG #6154, but I don't yet have a 9.1 version to test and I'm not sure whether 9.1 already contains the fix. This issue can also be reproduced in 8.3.13. HTH, Marc Mamin Steps to repeat: CREATE TABLE test_f_files_steps ( id bigserial NOT NULL, file_id integer NOT NULL, class_id integer NOT NULL, step_id integer NOT NULL, "timestamp" bigint NOT NULL, infotext character varying, efm_uid integer, CONSTRAINT test_f_files_steps_pk PRIMARY KEY (id) ); CREATE TABLE test_f_files_status ( id serial NOT NULL, class_id integer NOT NULL, file_name character varying NOT NULL, last_step_id integer NOT NULL, runs smallint, size bigint, "timestamp" bigint, plainday integer, success boolean, linecount integer, rejected integer, efm_uid integer NOT NULL, CONSTRAINT test_f_files_status_pk PRIMARY KEY (id, class_id) ); This is not valid, but is accepted:
EXPLAIN analyze select * from test_f_files_steps where id in (select id from ( select file_id,class_id from test_f_files_steps EXCEPT select id,class_id from test_f_files_status )foo ) Seq Scan on test_f_files_steps (cost=0.00..26895.75 rows=430 width=64) (actual time=0.001..0.001 rows=0 loops=1) Filter: (SubPlan 1) SubPlan 1 -> Subquery Scan on foo (cost=0.00..62.00 rows=200 width=0) (never executed) -> HashSetOp Except (cost=0.00..60.00 rows=200 width=8) (never executed) -> Append (cost=0.00..52.00 rows=1600 width=8) (never executed) -> Subquery Scan on "*SELECT* 1" (cost=0.00..27.20 rows=860 width=8) (never executed) -> Seq Scan on test_f_files_steps (cost=0.00..18.60 rows=860 width=8) (never executed) -> Subquery Scan on "*SELECT* 2" (cost=0.00..24.80 rows=740 width=8) (never executed) -> Seq Scan on test_f_files_status (cost=0.00..17.40 rows=740 width=8) (never executed) Total runtime: 0.087 ms calling the IN subquery is correctly rejected: select id from ( select file_id,class_id from test_f_files_steps EXCEPT select id,class_id from test_f_files_status )foo ERROR: column "id" does not exist drop table test_f_files_steps; drop table test_f_files_status; -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
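For what it's worth, this behaviour matches standard SQL name resolution for correlated subqueries rather than being an obvious parser bug: an unqualified column name inside a subquery that matches no column of the subquery's own FROM list is resolved against the outer query. A minimal sketch of what the accepted query effectively means (the table-qualified form below is added purely for illustration):

```sql
-- Because "foo" has no column named "id", the inner reference binds to
-- the outer table, making the accepted query equivalent to:
select * from test_f_files_steps s
where s.id in (select s.id   -- outer reference, constant per outer row
               from ( select file_id, class_id from test_f_files_steps
                      EXCEPT
                      select id, class_id from test_f_files_status ) foo);
-- i.e. "s.id in (s.id, s.id, ...)", which is true whenever foo is non-empty.
```

Running the subquery standalone fails precisely because there is then no outer query to resolve "id" against.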
[BUGS] script for #6350
Attached script (run as psql -U postgres -f bug_6350.sql) shows the problem. Enjoy! create database problematic; \c problematic -- create three roles create role usr; create role adm; create role new_adm; create table foo (bar integer); alter table foo owner to adm; grant select(bar) on foo to usr; -- all ok so far, usr and adm are referenced in -- the foo.bar column privileges \dp foo -- now change the owner alter table foo owner to new_adm; -- the following drop succeeds, although role adm -- is still referenced in the foo.bar column -- privileges. this is a bug. drop role adm; \dp foo -- the column privileges can no longer be changed -- due to the stale reference to the deleted role. revoke select(bar) on foo from usr; \dp foo \c template1 drop database problematic; -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] Different error messages executing CREATE TABLE or ALTER TABLE to create a column "xmin"
Am 22.01.12 14:22, schrieb Giuseppe Sucameli: > Hi all, > > trying to create a table with a column xmin I get the > following error message: > > test=> create table lx (xmin int); > ERROR: column name "xmin" conflicts with a system > column name > > Instead I get a different (and less understandable) error > message if I try to add a column named xmin to an > existent table: > > test=> create table lx (i int); > CREATE TABLE > test=> alter table lx add xmin int; > ERROR: column "xmin" of relation "lx" already exists. > > The same problem occurs using "xmax" as column name. > > I'm on Ubuntu 11.04. > Tried on both PostgreSQL 8.4.10 and 9.1.2 That is not a bug, but a feature. See section 5.4 of the documentation "System Columns": "Every table has several system columns that are implicitly defined by the system. Therefore, these names cannot be used as names of user-defined columns. (Note that these restrictions are separate from whether the name is a key word or not; quoting a name will not allow you to escape these restrictions.) You do not really need to be concerned about these columns; just know they exist." and further down: "xmin The identity (transaction ID) of the inserting transaction for this row version. (A row version is an individual state of a row; each update of a row creates a new row version for the same logical row.)" -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
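To see the reserved names in action, the system columns can be selected explicitly (a quick sketch; any table works):

```sql
create table lx (i int);
insert into lx values (1);
-- xmin/xmax are implicitly defined on every table; they are hidden from
-- "select *" but can be referenced by name:
select xmin, xmax, ctid, i from lx;
```

This also explains the second error message: from ALTER TABLE's point of view, a column named "xmin" really does already exist on every relation.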
Re: [BUGS] BUG #6454: Latest x64 msi does not recognize admin account
On 13.02.12 00:09, the_r...@yahoo.com wrote: > The following bug has been logged on the website: > > Bug reference: 6454 > Logged by: Paul Peterson > Email address: the_r...@yahoo.com > PostgreSQL version: 9.1.2 > Operating system: Windows 7 home > Description: > > Latest x64 msi does not recognize admin account. There is only one account, > mine and it appears as though Windows took away the XP admin default > account. suggestions? Windows 7 deactivates the Administrator account by default. Well, it rather hides it. To make it visible, use the following steps (taken from our ticket/FAQ system): Control Panel > System and Security > Administrative Tools > Computer Management; select Local Users and Groups, then Users; right-click on Administrator, a Properties window appears; in this window, remove the check mark next to "Account is disabled"; confirm with OK; right-click on Administrator again and choose "Set Password" from the pop-up menu; a warning message appears, click Continue; enter the password, click OK, done. -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] BUG #6595: can't remote access
On 17.04.12 12:10, li...@nway.com.cn wrote: > The following bug has been logged on the website: > > Bug reference: 6595 > Logged by: lihao > Email address: li...@nway.com.cn > PostgreSQL version: 9.1.3 > Operating system: windows xp > Description: > > I have installed pg 9.1.3 on windows xp, but it can't be accessed by a remote > windows xp. That is not a bug, but a configuration error. I am sure that if you read the documentation, you will be able to properly configure your system. -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
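For reference, enabling remote connections normally takes two configuration changes plus a server restart (a sketch; the address range is an example, adjust it to your network):

```
# postgresql.conf -- listen on all interfaces, not just localhost
listen_addresses = '*'

# pg_hba.conf -- allow password-authenticated connections from the LAN
host    all    all    192.168.0.0/24    md5
```

Both files live in the data directory of a default Windows installation.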
Re: [BUGS] BUG #7670: BUG #7545: Unresponsive server with error log reporting: "poll() failed: Invalid argument"
On 18.11.2012 at 19:36, Peter Geoghegan wrote: > On 18 November 2012 18:18, Tom Lane wrote: >> Well, we have two reports of people trying such values (assuming that >> #7545 actually is the same thing), and it didn't work for either of >> them. I don't think it's a problem to restrict the value to something >> that will work rather than fail. > > Right. sizeof(int) is very probably 4 on all platforms that we > support. I see no problem with the proposal. Have you cross-checked this on a 64-bit platform vs. a 32-bit platform? E.g. on Linux i386 vs. Linux amd64? > > -- > Peter Geoghegan http://www.2ndQuadrant.com/ > PostgreSQL Development, 24x7 Support, Training and Services > > > -- > Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-bugs Kind regards, micro systems Marc Balmer -- Marc Balmer micro systems, Wiesendamm 2a, Postfach, 4019 Basel fon +41 61 383 05 10, fax +41 61 383 05 12, http://www.msys.ch/ -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] BUG #7820: Extension uuid-ossp cannot be installed on Windows - getting syntax error
On 22.01.2013 at 14:31, jan-peter.seif...@gmx.de wrote: > The following bug has been logged on the website: > > Bug reference: 7820 > Logged by: Jan-Peter Seifert > Email address: jan-peter.seif...@gmx.de > PostgreSQL version: 9.1.7 > Operating system: Windows 7 64-bit > Description: > > The statement: > 'CREATE EXTENSION uuid-ossp' > > just gives me a syntax error: > > ERROR: syntax error at or near "-" > LINE 1: CREATE EXTENSION uuid-ossp > ^ > ** Fehler ** > > ERROR: syntax error at or near "-" > SQL Status:42601 > Zeichen:22 > > Obviously "CREATE EXTENSION" expects underscores instead of hyphens. No, your syntax is wrong. > > I had to replace the hyphen in file names and in the scripts to make the > module work. > That is the wrong "fix". The hyphen has a meaning in SQL. So you need to properly enclose uuid-ossp in quotes. -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
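A minimal illustration (assuming the contrib module's files are installed as shipped): the hyphen is the SQL subtraction operator, so the hyphenated name must be double-quoted to form a single identifier:

```sql
-- fails: parsed as the identifier "uuid" followed by the operator "-"
-- CREATE EXTENSION uuid-ossp;

-- works: double quotes make uuid-ossp one identifier
CREATE EXTENSION "uuid-ossp";
SELECT uuid_generate_v4();  -- a function the extension provides
```

Renaming the extension's script files is never necessary and leaves an installation that no future release will upgrade cleanly.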
[BUGS] Optimizer problem with multi-column index
Hello, we have an issue concerning multi-column indexes not being used by the planner. version PostgreSQL 9.2.3 on x86_64-unknown-linux-gnu, compiled by gcc (SUSE Linux) 4.7.2 20130108 [gcc-4_7-branch revision 195012], 64-bit (same has been tested on postgres 8.4 with exactly the same results) default_statistics_target is set to 1000. The following is a detailed description, completely sufficient to reproduce the problem. consider the following table: create sequence plannertest_seq; create table plannertest ( id bigint not null default nextval( 'plannertest_seq' ), xyz_id bigint not null default random() * 9 + 1, datum timestamp not null ); with these 2 indexes: create index plannertest_datum on plannertest( datum ); create index plannertest_xyz_id_datum on plannertest( xyz_id, datum ); we now insert data into the table using the following statement: insert into plannertest( datum ) select datum from generate_series( '2010-01-01 00:00', now(), interval '1 minute' ); INSERT 0 1748655 analyze plannertest; then we issue the following select: select datum from plannertest where xyz_id = 3 and datum >= '2012-10-25 06:00:00' and Datum < '2013-01-27 06:00:00'; which yields the following plan: Index Scan using plannertest_datum on plannertest (cost=0.00..5444.83 rows=15164 width=24) Index Cond: ((datum >= '2012-10-25 06:00:00'::timestamp without time zone) AND (datum < '2013-01-27 06:00:00'::timestamp without time zone)) Filter: (xyz_id = 3) so the optimizer here chooses the 1-column index on column "datum" even though the 2-column index would be 10 times as selective. we now drop the 1-column index. drop index plannertest_datum; and then issue the same statement again. 
now we get the following plan: QUERY PLAN -- Bitmap Heap Scan on plannertest (cost=630.09..12623.71 rows=15164 width=24) Recheck Cond: ((xyz_id = 3) AND (datum >= '2012-10-25 06:00:00'::timestamp without time zone) AND (datum < '2013-01-27 06:00:00'::timestamp without time zone)) -> Bitmap Index Scan on plannertest_xyz_id_datum (cost=0.00..626.30 rows=15164 width=0) Index Cond: ((xyz_id = 3) AND (datum >= '2012-10-25 06:00:00'::timestamp without time zone) AND (datum < '2013-01-27 06:00:00'::timestamp without time zone)) This is really funny. Apparently, in the first case, where the 1-column index (which does NOT contain the column to be filtered) is used, the planner forgets about the necessity to visit every tuple in order to do the filtering on the "xyz_id" column. In the second case, where everything needed is included in the index, however, the planner thinks it has to use an absolutely unnecessary bitmap heap scan. A simple forward index scan would do here. BTW, exactly the same plans are output when I do a "select *".
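For anyone reproducing this, the per-column statistics the planner works from can be inspected directly (standard pg_stats view; exact output depends on the randomly generated data):

```sql
-- With about 10 distinct xyz_id values, the (xyz_id, datum) index
-- condition should be roughly ten times as selective as the
-- datum-only condition, which is what makes the first plan surprising.
SELECT attname, n_distinct, correlation
FROM pg_stats
WHERE tablename = 'plannertest';
```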
Re: [BUGS] ftp server symlink
ftp.postgresql.org syncs off of developer.postgresql.org every ... 6 hours, I believe it is ... On Thu, 23 Oct 2008, Bruce Momjian wrote: Bruce Momjian wrote: Thanks, fixed, sources -> source. Oops, sorry, I did not fix it. I fixed it on developer.postgresql.org, but that is not where ftp.postgresql.org is pointed to. Marc? --- --- hubert depesz lubaczewski wrote: there is a typo on postgresql ftp server: => lftp ftp.postgresql.org lftp ftp.postgresql.org:~> cd pub cd ok, cwd=/pub lftp ftp.postgresql.org:/pub> ls -l -rw-rw-r--1 258 70 1719 Dec 10 2007 README -rw-rw-r--1 258 70 1552 Feb 09 2008 README.dist-split drwxrwxr-x 28 258 70512 Sep 24 05:42 binary drwxrwxr-x3 258 70 1024 Mar 07 2008 dev lrwxr-xr-x1 1194 70 13 Sep 24 05:41 latest -> source/v8.3.4 drwxrwxr-x4 1017 70512 Nov 28 2005 odbc drwxrwxr-x4 1017 70512 Sep 01 2006 pgadmin3 drwxr-xr-x4 070512 Nov 28 2005 projects drwxr-xr-x4 258 70512 Mar 07 2008 snapshot drwxrwxr-x 20 258 70512 Sep 27 09:45 source -rw-r--r--1 070 24 Oct 18 19:05 sync_timestamp lrwxr-xr-x1 1194 70 15 Sep 24 05:41 v8.2.10 -> sources/v8.2.10 lrwxr-xr-x1 1194 70 13 Sep 24 05:41 v8.3.4 -> source/v8.3.4 lftp ftp.postgresql.org:/pub> cd v8.2.10 cd: Brak dostępu: 550 Failed to change directory. (/pub/v8.2.10) lftp ftp.postgresql.org:/pub> cd source/v8.2.10 cd ok, cwd=/pub/source/v8.2.10 Best regards, depesz -- Linkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/ jid/gtalk: [EMAIL PROTECTED] / aim:depeszhdl / skype:depesz_hdl / gg:6749007 -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs -- Bruce Momjian <[EMAIL PROTECTED]>http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. 
+ -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs -- Bruce Momjian <[EMAIL PROTECTED]>http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email . [EMAIL PROTECTED] MSN . [EMAIL PROTECTED] Yahoo . yscrappy Skype: hub.orgICQ . 7615664 -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
[BUGS] List archives moved and cleaned up ...
Finally figuring that enough is enough, I've been spending the past few days working on the list archives ... I've reformatted, so far, the following lists into a cleaner format: pgsql-hackers pgsql-sql pgsql-bugs pgsql-general pgadmin-hackers pgadmin-support With more lists to be worked on over the next few days ... Major changes include the following: Replaced the wide banner in the center with two smaller, 120x120 banners in the corners ... Provide a search facility incorporated into each page that searches the mhonarc pages themselves ... Change the colors to better match the main site ... Moved the archives to its own URL/Domain so that it is no longer part of the general mirror of the site ... There is still a lot of work that I'm planning on doing on this, but I want to get all of the archives moved first ... To access any of the archives that have been moved, go to: http://archives.postgresql.org/ I've been modifying the links from the main web site for those lists that I've moved, as I've moved them, so getting there through 'normal channels' should also work ... Once finished, there will also be links to the OpenFTS search facility that we have online, which uses a different way of formatting/displaying the messages, so you will have the choice of using either ... ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://www.postgresql.org/search.mpl
[BUGS] Majordomo being upgraded ...
Its been much much too long since I've upgraded Majordomo2 on the server, so this is the last email I'm sending out prior to upgrading her today ... if anyone notices the lists go suddenly quiet, or the way it works changing, please let me know ... My main worry is that in the past 6+ months, some of the defaults might have been reversed, so that they default to off instead of on, or vice versa ... just a heads up so that ppl are watching for it ... ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
[BUGS] Workaround for html output formatting feature?
There is a change in html output construction in the 7.1 series that I have had difficulty handling. In a prior version (7.0.x, I think) invoking psql with the -H command line switch and a select statement of the following form: select '' || 'link text ' from table where produced an html table with "clickable" links. In 7.1.3, if html output is selected, the angle brackets are output as &lt; or &gt; . I see the need for the change in this behaviour, but it hampers my use of postgresql to generate html tags. Modifying the select statement to include hex or octal character equivalents for the angle brackets produces the same results, as does use of the 〈 / 〉 constructs. Short of outputting the query results as text and generating the html on my own, is there an intrinsic postgresql solution to this issue? -- Marc Zuckman [EMAIL PROTECTED] ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/users-lounge/docs/faq.html
Re: [BUGS] CVS rejected me
go to anoncvs.postgresql.org, not postgresql.org .. On Tue, 18 Sep 2001, John Summerfield wrote: > This worked 2-3 days ago: what's changed? > > vs -z9 -q update > cvs update: authorization failed: server postgresql.org rejected access to >/home/projects/pgsql/cvsroot for user anoncvs > summer@dugite pgsql$ > > The entry's still in ~/.cvspass > > > > ---(end of broadcast)--- > TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED] > ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [BUGS] Website, mailing list
In the process of fixing ... should be good to go by end of day on Sunday ... On Sun, 23 Sep 2001, John Summerfield wrote: > On Fri, 21 Sep 2001, Vince Vielhaber wrote: > > > > On Fri, 21 Sep 2001, Tom Lane wrote: > > > > > John Summerfield <[EMAIL PROTECTED]> writes: > > > > I get response like this > > > > subscribe > > > > Illegal command! > > > > No valid commands processed. > > > > > > > when I send commands according to the instructions at > > > > http://developer.postgresql.org/subunsub.php > > > > > > It would seem that that page is out of date. Vince, can you fix it? > > > I have no idea where the website source files live... > > > > Actually the instructions are correct. Something happened in the > > last couple of weeks that broke things. I'm guessing Marc's recent > > upgrade. Marc?? Care to look into this? > > > > In the mean time, can someone tell me what works now, or alternatively enrol me in >hackers and bugs and jdbc and announcements? > > > > ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
Re: [BUGS] Website, mailing list
It used to work, recent upgrade broke it, it was/is a known bug that has been fixed since and I'm in the process of upgrading ... On Sun, 23 Sep 2001, John Summerfield wrote: > On Sat, 22 Sep 2001, Vince Vielhaber wrote: > > > > On Sat, 22 Sep 2001, Tom Lane wrote: > > > > > Vince Vielhaber <[EMAIL PROTECTED]> writes: > > > >> It would seem that that page is out of date. Vince, can you fix it? > > > > > > > Actually the instructions are correct. > > > > > > Are they? The last time I subscribed to a new list, the right > > > incantation was to mail > > > subscribe list-name my-address > > > to [EMAIL PROTECTED] This doesn't square with what the webpage > > > says to do ... > > > > Majordomo2 and a number of other maillist packages accept subscription > > requests to [EMAIL PROTECTED] I know the instructions worked > > when I subscribed to general, odbc and at least one other list. > > > Please, someone who can fix it test it. I have requests like that that didn't work, > > > > > > ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/users-lounge/docs/faq.html
Re: [BUGS] [HACKERS] [PG MAIL LISTS] SEND OUT ALL????
someone, either intentionally or accidentally, sent out a load to the lists ... On Fri, 16 Nov 2001, Mike Rogers wrote: > Why did it just send out tons of mail since September of this year- every > message? > -- > Mike > > > ---(end of broadcast)--- > TIP 4: Don't 'kill -9' the postmaster > ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [BUGS] time's running short for you to reserve 5432/tcp!!!!
being worked on ... thanks for the reminder ... On Sat, 22 Sep 2001, Greg A. Woods wrote: > It's LONG past time you asked IANA to officially register the default > PGPORT you've been using unofficially for a very long time now! > > You are no longer in the middle of an officially unassigned block -- > there's an assignment at 5431 now! > > Please visit this link and fill out the application A.S.A.P. > > http://www.iana.org/cgi-bin/usr-port-number.pl > > -- > Greg A. Woods > > +1 416 218-0098 VE3TCP <[EMAIL PROTECTED]> <[EMAIL PROTECTED]> > Planix, Inc. <[EMAIL PROTECTED]>; Secrets of the Weird <[EMAIL PROTECTED]> > > ---(end of broadcast)--- > TIP 3: if posting/reading through Usenet, please send an appropriate > subscribe-nomail command to [EMAIL PROTECTED] so that your > message can get through to the mailing list cleanly > ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [BUGS] Test suite fails on alpha architecture
Heya, I know I'm quite late with my answer, sorry. Frank Lichtenheld <[EMAIL PROTECTED]> writes: > On Sat, Nov 03, 2007 at 06:32:34PM -0400, Martin Pitt wrote: >>> Can you grant one of us access to the machine to work on it? >> I don't own any alpha machine, but maybe Frank, Steven, or anyone from >> the Debian alpha porter list can create a temporary account for you? > I'm not sure how we handle that for our experimental buildds. Admins? One of the alphas used in the experimental buildd network is actually in bdale's basement, so I'm not really happy to hand out access to it. The other one (digitalis), which is hosted at the University of Darmstadt and is under our full control, should actually be used as a porting machine if needed. Debian Developers [1] can get access to them by pinging either Andreas Barth, Martin Zobel-Helas or me. We have our own userdir-ldap setup, so please include a mail address and a verifiable GPG key in your ping, together with a short description of what you want to do. Marc Footnotes: [1] And Debian contributors, as long as there is some sort of trust relationship -- BOFH #357: I'd love to help you -- it's just that the Boss won't let me near the computer.
Re: [BUGS] postgresql in FreeBSD jails: proposal
[EMAIL PROTECTED] (Mischa Sandberg) writes: >Unfortunately, with multiple jails running PG servers and (due to app >limitations) all servers having same PGPORT, you get the situation that >when jail#2 (,jail#3,...) server comes up, it: >- detects that there is a shm seg with ipc key 5432001 >- checks whether the associated postmaster process exists (with kill -0) >- overwrites the segment created and being used by jail #1 Easiest fix: change the UID of the user running the postmaster (ie. pgsql) so that each runs as a distinct UID (instead of distinct PGPORT) ... been doing this since moving to FreeBSD 6.x ... no patches required ... -- Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email . [EMAIL PROTECTED] MSN . [EMAIL PROTECTED] Yahoo . yscrappy Skype: hub.orgICQ . 7615664 ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
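A sketch of that workaround (FreeBSD's pw utility; the UID values are illustrative, and the data directory must be re-owned and the postmaster restarted afterwards):

```
# in jail #1: give the pgsql user a UID that is unique across all jails
pw usermod pgsql -u 70001
chown -R 70001 /usr/local/pgsql/data

# in jail #2:
pw usermod pgsql -u 70002
chown -R 70002 /usr/local/pgsql/data
```

With distinct UIDs, one jail's postmaster can no longer mistake another jail's shared memory segment for an abandoned one of its own, so all servers can keep the same PGPORT.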
Re: [BUGS] Get a CD
fixed, thanks ... On Mon, 11 Nov 2002, web_admin wrote: > Regard must be paid to... > > This link is not available : > http://www.pgsql.com/cd-dist.html > > From url http://www3.ru.postgresql.org/software.html > > Best regards, > VVS. > > > > ---(end of broadcast)--- > TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED] > ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/users-lounge/docs/faq.html
[DOCS] Testing gateway
In theory, the news2mail gateway is back in place ... ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] [BUGS] postgresql-7.4RC1 - unrecognized privilege type
On Sat, 8 Nov 2003, Tom Lane wrote: > <[EMAIL PROTECTED]> writes: > >select * from information_schema.tables; > >ERROR: unrecognized privilege type: "RERERENCES" > > > Replacing the word "RERERENCES" with "REFERENCES" in > > the predicate "has_table_privilege(c.oid, > > 'RERERENCES'::text)" near the end of the view SQL > > seems to correct the problem. > > Good catch. There are two other places with the same typo :-( (all > copied and pasted no doubt). > > I've applied the patch but am loathe to force an initdb this late in > the beta cycle. Any opinions out there? Annoying as a spelling mistake is (and, from my read of the above, that is all it is?), I don't think it warrants forcing an initdb ... unless I'm missing a larger scope? ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [BUGS] Wrong PG_VERSION number in pg_config.h.win32
did you update it on the branch? I saw just the commit on HEAD :) On Wed, 12 Nov 2003, Bruce Momjian wrote: > Tom Lane wrote: > > Sebastien FLAESCH <[EMAIL PROTECTED]> writes: > > > In candiate release 7.4RC2 you still have > > > PG_VERSION "7.3" > > > > We don't generally bother with updating every last version number until > > we are branding the final release ... Bruce has a checklist of these > > things, it won't get missed ... > > Tom, I appreciate your confidence, but somehow I missed updating this > file --- not sure how I missed it. Updated now. Thanks for the report. > > -- > Bruce Momjian| http://candle.pha.pa.us > [EMAIL PROTECTED] | (610) 359-1001 > + If your life is a hard drive, | 13 Roberts Road > + Christ can be your backup.| Newtown Square, Pennsylvania 19073 > > ---(end of broadcast)--- > TIP 6: Have you searched our list archives? > >http://archives.postgresql.org > ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[BUGS] [8.0.0] out of memory on large UPDATE
The table contains ~10 million rows: # time psql -c "UPDATE xa_url SET url = url;" -U pgsql pareto ERROR: out of memory DETAIL: Failed on request of size 32. 0.000u 0.022s 2:41:14.76 0.0% 88+66k 12+0io 19pf+0w And the server is running: PostgreSQL 8.0.0 on i386-portbld-freebsd4.10, compiled by GCC 2.95.4 I haven't had a chance to upgrade it to 8.0.3 yet ... didn't realize we had any limits on stuff like this ... bug, or really a limit? Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email: [EMAIL PROTECTED] Yahoo!: yscrappy ICQ: 7615664 ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [BUGS] [8.0.0] out of memory on large UPDATE
On Thu, 11 Aug 2005, Tom Lane wrote: "Marc G. Fournier" <[EMAIL PROTECTED]> writes: The table contains ~10 million rows: # time psql -c "UPDATE xa_url SET url = url;" -U pgsql pareto ERROR: out of memory DETAIL: Failed on request of size 32. If you've got any AFTER UPDATE triggers on that table, you could be running out of memory for the pending-triggers list. Nope, only have a BEFORE UPDATE, or would that be similar except for at which point it runs out of memory? Triggers: xa_url_domain_b_i_u BEFORE INSERT OR UPDATE ON xa_url FOR EACH ROW EXECUTE PROCEDURE xa_url_domain() Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email: [EMAIL PROTECTED] Yahoo!: yscrappy ICQ: 7615664 ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [BUGS] [8.0.0] out of memory on large UPDATE
On Thu, 11 Aug 2005, Tom Lane wrote: "Marc G. Fournier" <[EMAIL PROTECTED]> writes: On Thu, 11 Aug 2005, Tom Lane wrote: If you've got any AFTER UPDATE triggers on that table, you could be running out of memory for the pending-triggers list. Nope, only have a BEFORE UPDATE, or would that be similar except for at which point it runs out of memory? Nope, BEFORE UPDATE shouldn't result in any permanent memory accumulation. An out-of-memory error should result in a long report in the postmaster log about how many bytes in each memory context --- can you post that? This is all I'm seeing in the logs: # grep "\[653\]" pgsql Aug 11 08:45:47 pgsql80 pg[653]: [34-1] ERROR: out of memory Aug 11 08:45:47 pgsql80 pg[653]: [34-2] DETAIL: Failed on request of size 32. Marc G. Fournier Hub.Org Networking Services (http://www.hub.org) Email: [EMAIL PROTECTED] Yahoo!: yscrappy ICQ: 7615664 ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [BUGS] [8.0.0] out of memory on large UPDATE
On Thu, 11 Aug 2005, Marc G. Fournier wrote:

On Thu, 11 Aug 2005, Tom Lane wrote:

"Marc G. Fournier" <[EMAIL PROTECTED]> writes:

On Thu, 11 Aug 2005, Tom Lane wrote:

If you've got any AFTER UPDATE triggers on that table, you could be running out of memory for the pending-triggers list.

Nope, only have a BEFORE UPDATE, or would that be similar except for at which point it runs out of memory?

Nope, BEFORE UPDATE shouldn't result in any permanent memory accumulation. An out-of-memory error should result in a long report in the postmaster log about how many bytes are in each memory context --- can you post that?

This is all I'm seeing in the logs:

# grep "\[653\]" pgsql
Aug 11 08:45:47 pgsql80 pg[653]: [34-1] ERROR: out of memory
Aug 11 08:45:47 pgsql80 pg[653]: [34-2] DETAIL: Failed on request of size 32.

'k, does this help any?

TopMemoryContext: 40960 total in 4 blocks; 8632 free (10 chunks); 32328 used
SPI Plan: 3072 total in 2 blocks; 1728 free (0 chunks); 1344 used
TopTransactionContext: 534765568 total in 74 blocks; 2144 free (68 chunks); 534763424 used
ExecutorState: 8192 total in 1 blocks; 7648 free (3 chunks); 544 used
MessageContext: 57344 total in 3 blocks; 30312 free (11 chunks); 27032 used
PortalMemory: 8192 total in 1 blocks; 8040 free (0 chunks); 152 used
PortalHeapMemory: 1024 total in 1 blocks; 896 free (0 chunks); 128 used
ExecutorState: 122880 total in 4 blocks; 98912 free (127 chunks); 23968 used
ExprContext: 8192 total in 1 blocks; 7656 free (0 chunks); 536 used
ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used
CacheMemoryContext: 516096 total in 6 blocks; 134272 free (3 chunks); 381824 used
xa_url_tuid_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
xa_url_spiderlite_signature_check_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
xa_url_priority_idx: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
urls_status: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
urls_logger_status: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
url_url: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
xa_url_id_pk: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_index_indrelid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_attrdef_adrelid_adnum_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used
pg_type_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_type_typname_nsp_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used
pg_statistic_relid_att_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used
pg_shadow_usesysid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_rewrite_rel_rulename_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used
pg_proc_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_proc_proname_args_nsp_index: 1024 total in 1 blocks; 712 free (0 chunks); 312 used
pg_operator_oprname_l_r_n_index: 1024 total in 1 blocks; 712 free (0 chunks); 312 used
pg_namespace_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_namespace_nspname_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_language_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_language_name_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_inherits_relid_seqno_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used
pg_group_sysid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_group_name_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_conversion_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_conversion_name_nsp_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used
pg_conversion_default_index: 1024 total in 1 blocks; 712 free (0 chunks); 312 used
pg_opclass_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_opclass_am_name_nsp_index: 1024 total in 1 blocks; 776 free (0 chunks); 248 used
pg_cast_source_target_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used
pg_attribute_relid_attnam_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used
pg_amop_opr_opc_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used
pg_aggregate_fnoid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_shadow_usename_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_trigger_tgrelid_tgname_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used
pg_operator_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_amproc_opc_proc_index: 1024 total in 1 blocks; 776 free (0 chunks); 248 used
pg_amop_opc_strat_index: 1024 total in 1 blocks; 776 free (0 chunks); 248 used
pg_index_indexrelid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_attribute_relid_attnum_index: 1024 total in 1 blocks; 848 free (0 chunks); 176 used
pg_class_oid_index: 1024 total in 1 blocks; 912 free (0 chunks); 112 used
pg_amproc_opc_p
Re: [BUGS] [8.0.0] out of memory on large UPDATE
Just as a reminder, this is an 8.0.0 install, so if you think this might have been fixed in later sub-releases, please let me know and I'll upgrade/test again ...

On Thu, 11 Aug 2005, Marc G. Fournier wrote:

This is all I'm seeing in the logs:

# grep "\[653\]" pgsql
Aug 11 08:45:47 pgsql80 pg[653]: [34-1] ERROR: out of memory
Aug 11 08:45:47 pgsql80 pg[653]: [34-2] DETAIL: Failed on request of size 32.

'k, does this help any?

[memory context dump trimmed; see the previous message]
Re: [BUGS] [8.0.0] out of memory on large UPDATE
On Thu, 11 Aug 2005, Tom Lane wrote:

"Marc G. Fournier" <[EMAIL PROTECTED]> writes:

'k, does this help any?

TopMemoryContext: 40960 total in 4 blocks; 8632 free (10 chunks); 32328 used
SPI Plan: 3072 total in 2 blocks; 1728 free (0 chunks); 1344 used
TopTransactionContext: 534765568 total in 74 blocks; 2144 free (68 chunks); 534763424 used

Yeah, the leak is clearly in TopTransactionContext. That doesn't let the trigger code off the hook, though, because the pending-triggers list is kept there. Are you *sure* there are no AFTER triggers here? (Don't forget foreign-key checking triggers.)

This is all of them ... nothing AFTER, just ON or BEFORE ...

Foreign-key constraints:
    "xa_classification_id_fk" FOREIGN KEY (classification_id) REFERENCES xa_classification(classification_id) ON UPDATE RESTRICT ON DELETE RESTRICT
    "xa_ip_address_id_fk" FOREIGN KEY (ip_address_id) REFERENCES xa_ip_addresses(ip_address_id) ON UPDATE RESTRICT ON DELETE RESTRICT
    "xa_logger_status_id_fk" FOREIGN KEY (logger_status_id) REFERENCES xa_logger_status(logger_status_id) ON UPDATE RESTRICT ON DELETE RESTRICT
    "xa_url_queue_id_fk" FOREIGN KEY (url_queue_id) REFERENCES xa_url_queue(url_queue_id) ON UPDATE RESTRICT ON DELETE SET NULL
Triggers:
    xa_url_domain_b_i_u BEFORE INSERT OR UPDATE ON xa_url FOR EACH ROW EXECUTE PROCEDURE xa_url_domain()

Marc G. Fournier           Hub.Org Networking Services (http://www.hub.org)
Email: [EMAIL PROTECTED]   Yahoo!: yscrappy   ICQ: 7615664

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
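[Editor's sketch: since the queued FK (AFTER) trigger events live in TopTransactionContext until commit, one way to bound them is to run the UPDATE in key-range batches, each in its own transaction. The table name comes from the thread; the key column and SET clause below are hypothetical.]

```sql
-- Sketch only: split the one big UPDATE into per-range transactions so
-- the pending FK (AFTER) trigger queue is freed at each COMMIT instead
-- of growing for the whole table.  "xa_url" is the table from the
-- thread; "url_id" and the SET clause are hypothetical.
BEGIN;
UPDATE xa_url SET priority = priority + 1
 WHERE url_id BETWEEN 1 AND 100000;
COMMIT;

BEGIN;
UPDATE xa_url SET priority = priority + 1
 WHERE url_id BETWEEN 100001 AND 200000;
COMMIT;
-- ... and so on for the remaining ranges.
```

Peak memory then stays proportional to the batch size rather than to the number of rows in the whole table.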
Re: [BUGS] [8.0.0] out of memory on large UPDATE
On Thu, 11 Aug 2005, Tom Lane wrote:

"Marc G. Fournier" <[EMAIL PROTECTED]> writes:

On Thu, 11 Aug 2005, Tom Lane wrote:

Are you *sure* there are no AFTER triggers here? (Don't forget foreign-key checking triggers.)

This is all of them ... nothing AFTER, just ON or BEFORE ...

Foreign-key constraints:
    "xa_classification_id_fk" FOREIGN KEY (classification_id) REFERENCES xa_classification(classification_id) ON UPDATE RESTRICT ON DELETE RESTRICT
    "xa_ip_address_id_fk" FOREIGN KEY (ip_address_id) REFERENCES xa_ip_addresses(ip_address_id) ON UPDATE RESTRICT ON DELETE RESTRICT
    "xa_logger_status_id_fk" FOREIGN KEY (logger_status_id) REFERENCES xa_logger_status(logger_status_id) ON UPDATE RESTRICT ON DELETE RESTRICT
    "xa_url_queue_id_fk" FOREIGN KEY (url_queue_id) REFERENCES xa_url_queue(url_queue_id) ON UPDATE RESTRICT ON DELETE SET NULL
Triggers:
    xa_url_domain_b_i_u BEFORE INSERT OR UPDATE ON xa_url FOR EACH ROW EXECUTE PROCEDURE xa_url_domain()

Um, foreign-key triggers are always AFTER.

Ah, k ... that would actually make sense had I thought of it too :(

Can you afford to drop the FK constraints while you do the update? I can't think of any other short-term workaround.

Not sure, but is there a way to do so temporarily? DarcyB and I were talking the other day about how slow things were for that UPDATE ... I figured a lot of the cause was the updating of the indices at the same time, so he suggested doing something they are apparently looking at for Slony: "temporarily disabling" the indices inside a transaction and then REINDEXing at the end ... i.e.

BEGIN;
UPDATE pg_catalog.pg_class SET relhasindex = 'f'
 WHERE pg_catalog.pg_class.oid = 'tableoid';
-- ... run the big UPDATE here ...
UPDATE pg_catalog.pg_class SET relhasindex = 't'
 WHERE pg_catalog.pg_class.oid = 'tableoid';
REINDEX;
END;

Could I do something similar by setting "relfkeys = 'f'"? Or is it more complicated than that?
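[Editor's sketch: Tom's suggestion of dropping the FK constraints for the duration can be done without touching pg_catalog, by dropping and re-creating the constraints around the UPDATE. Shown for one of the four constraints quoted above; the SET clause is hypothetical. Note that ADD CONSTRAINT re-validates every referencing row, so this trades per-row trigger-event memory for one validation pass at the end.]

```sql
-- Sketch only: drop an FK constraint, run the big UPDATE, then
-- re-create it (constraint definition taken from the thread; the
-- SET clause is hypothetical).  Repeat for the other constraints
-- if their keys are affected.
BEGIN;
ALTER TABLE xa_url DROP CONSTRAINT xa_classification_id_fk;

UPDATE xa_url SET priority = priority + 1;  -- the big UPDATE

ALTER TABLE xa_url ADD CONSTRAINT xa_classification_id_fk
    FOREIGN KEY (classification_id)
    REFERENCES xa_classification(classification_id)
    ON UPDATE RESTRICT ON DELETE RESTRICT;
COMMIT;
```

Doing all of this in one transaction keeps the table consistent to other sessions, at the cost of holding an exclusive lock on xa_url for the duration.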