[BUGS] ascii_and_mic.so
I cannot compile your latest release of PostgreSQL (7.3.2); I'm currently running 7.2.1 on Red Hat 7.3. Could you help me find the problem? `gmake check` fails with:

install: /var/tmp/postgresql-7.3.2/src/test/regress/./tmp_check/install/usr/lib/postgresql/ascii_and_mic.so does not exist

What is this ascii_and_mic.so, and where can I find it?

Patrick

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster
[BUGS] Could not receive data from client: Unknown winsock error 10061
Hi, the error message in the subject is my big problem. Log entries:

2010-02-24 12:22:25 LOG: XX000: could not receive data from client: Unknown winsock error 10061
2010-02-24 12:22:25 LOCATION: pq_recvbuf, pqcomm.c:738
2010-02-24 12:22:25 LOG: 08P01: unexpected EOF on client connection
2010-02-24 12:22:25 LOCATION: SocketBackend, postgres.c:307

After these messages our WebSphere Application Server 6.1.0.9 crashes with a database timeout. I have spent a lot of time on Google and in the PostgreSQL manual and found some possible causes, but none of them helped. The following post describes my problem and includes a patch, but the link seems to be down.

Post: http://archives.postgresql.org/pgsql-bugs/2009-07/msg00179.php
Patch: http://archives.postgresql.org/pgsql-bugs/2009-07/msg00078.php

Environment:
System 1: Windows 2003 SP2 on ESX, WAS 6
System 2: Windows 2003 SP2 on ESX, PostgreSQL 8.2.15

At the moment I'm a little helpless with debugging this error. Does someone have an idea what causes the problem, or advice on how to debug it?

Kind regards, Patrick Weimer
[BUGS] Alternate data locations?
Hi: I'm installing postgresql-7.0.2 from ftp.postgresql.org on a clean Red Hat 6.2 system. I'd like to create an alternate data location, similar to what pgsql 6.3 supported, so I do:

$ export ARC_P_DAT_01="/home/httpd/data/arc_p_dat_01"
$ initlocation $ARC_P_DAT_01
initlocation is complete
$ createdb -D $ARC_P_DAT_01 test
ERROR: The database path '/home/httpd/data/arc_p_dat_01' is invalid. This may be due to a character that is not allowed or because the chosen path isn't permitted for databases
createdb: database creation failed
$ sudo -u postgres ls -l $ARC_P_DAT_01
total 4
drwx------ 2 postgres http_adm 4096 Aug 18 23:10 base

So everything looks okay. Can anyone give me a hint about what might be happening? What can I do to debug this?

Thanks, -- Pat
[BUGS] Documentation Error -- JDBC
The programmer's guide states that in order to load the JDBC driver, you use the class name "postgresql.Driver": http://www.postgresql.org/users-lounge/docs/7.0/programmer/jdbc6424.htm This is only true if you compile the driver for the 1.1 JDK; if you compile it for the 1.2 JDK, the class will be "org.postgresql.Driver". I am using postgresql 7.0.2. Thanks for the good work, Patrick May
[BUGS] BUG #2465: Installation
The following bug has been logged online:

Bug reference: 2465
Logged by: Patrick Hayes
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.1
Operating system: Windows XP
Description: Installation
Details: I have installed version 8.1. I am trying to access the console, but it will not allow me access, even though I know I am entering the correct password.
[BUGS] BUG #2785: Exception Issue
The following bug has been logged online:

Bug reference: 2785
Logged by: Patrick Hayes
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.1
Operating system: Windows Professional
Description: Exception Issue
Details: I am defining exception blocks. When I put something other than OTHERS in the WHEN clause, it will not compile. The OTHERS clause works fine. The following works:

BEGIN
  PPSDATA.member_date_of_birth = date(substr(ROWDATA.inline,55,4)||'-'||substr(ROWDATA.inline,59,2)||'-'||substr(ROWDATA.inline,61,2));
EXCEPTION
  WHEN OTHERS THEN
    ERRORCODE := 'Y';
    PPSERROR.current_month_error := '*';
END;

The following does not work:

BEGIN
  select count(*) into cnt_county from county_code
  where county_code = PPSERROR.member_county_code;
EXCEPTION
  WHEN no_data then
    PPSERROR.member_county_code_error := '*';
    RAISE NOTICE ' ---county_code_error out %', PPSERROR.member_county_code;
    ERRORCODE := 'Y';
END;

I get the following error:

postgres-# language plpgsql;
ERROR: unrecognized exception condition "no_data"
CONTEXT: compile of PL/pgSQL function "chip_pps_data_check" near line 101

Thanks
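For what it's worth, the condition name PL/pgSQL recognizes for Oracle's no-data case is NO_DATA_FOUND, not "no_data" — but a plain SELECT ... INTO does not raise it anyway (it just leaves the target NULL), so an explicit test may be the simpler route. A hedged sketch reusing the names from the report:

```sql
-- Sketch only: the recognized condition name is NO_DATA_FOUND, not "no_data".
-- Since a plain SELECT ... INTO does not raise it, test the count directly:
SELECT count(*) INTO cnt_county
  FROM county_code
 WHERE county_code = PPSERROR.member_county_code;
IF cnt_county = 0 THEN
  PPSERROR.member_county_code_error := '*';
  RAISE NOTICE ' ---county_code_error out %', PPSERROR.member_county_code;
  ERRORCODE := 'Y';
END IF;
```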
[BUGS] Indexes not used for "min()"
Your name: Patrick
Your email address: Valsecchi
System Configuration:
Architecture (example: Intel Pentium): Intel Pentium
Operating System (example: Linux 2.0.26 ELF): Linux 2.4.20
PostgreSQL version (example: PostgreSQL-7.3.3): PostgreSQL-7.3.3
Compiler used (example: gcc 2.95.2): gcc 3.2

Please enter a FULL description of your problem:

When doing the following query, it's obvious that postgres should use the index:

stats=# explain select min(time) from call;
 Aggregate (cost=49779.82..49779.82 rows=1 width=8)
   -> Seq Scan on stb_call (cost=0.00..44622.06 rows=2063106 width=8)
(2 rows)

stats=# \d call
Table "public.call"
 Column | Type                     | Modifiers
--------+--------------------------+-----------
 a      | integer                  | not null
 b      | integer                  | not null
 time   | timestamp with time zone | not null
 d      | character varying(4)     |
 e      | character varying(4)     |
 f      | character varying(4)     |
 g      | character varying(15)    | not null
 h      | character varying(7)     |
 i      | smallint                 | not null
Indexes: call_time btree ("time")

stats=# select count(*) from call;
  count
---------
 2063106
(1 row)

If you have any question or comment, please contact me directly; I'm not subscribed to the list.
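Releases of that era do not rewrite min()/max() into an index probe (that optimization arrived later, in 8.1). The standard workaround, which can use the call_time btree index, is an ORDER BY/LIMIT query:

```sql
-- Equivalent to SELECT min("time") FROM call, but index-friendly on 7.3:
SELECT "time"
  FROM call
 ORDER BY "time" ASC
 LIMIT 1;
```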
[BUGS] EXIT <<label>> in function
Hi, it seems to me that a bug was introduced in the 8.0 version. I'm using the Beta3 version on the WIN32 platform.

CREATE OR REPLACE FUNCTION TEST() RETURNS int4 AS '
BEGIN
  <<LOOP1>>
  LOOP
    RAISE NOTICE ''Loop 1'';
    EXIT LOOP1;
  END LOOP;
  RETURN 0;
END' LANGUAGE 'plpgsql';

When I call SELECT TEST(), I get the error message:

ERROR: control reached end of function without RETURN
CONTEXT: PL/pgSQL function "test"

If I just change EXIT LOOP1 to EXIT, everything is OK.

Regards, Patrick

-------
Patrick Fiche
email: [EMAIL PROTECTED]
tél: 01 69 29 36 18
[BUGS] Disk space is consumed by UPDATE query
Your name: Patrick Clery
Your email address: [EMAIL PROTECTED]
System Configuration:
Architecture (example: Intel Pentium): Intel
Operating System (example: Linux 2.4.18): FreeBSD 4.10-stable
PostgreSQL version (example: PostgreSQL-8.0): PostgreSQL-8.0
Compiler used (example: gcc 2.95.2): gcc 2.95.4

Please enter a FULL description of your problem:

This query appears to enter an infinite loop and fill up my /usr partition (PGDATA=/usr/local/pgsql/data) at a rapid rate:

UPDATE people_locations
   SET postalcode = foo.pcode, city_id = foo.cid
  FROM (
    SELECT p.postalcode AS pcode, p.city_id AS cid, c.state_id AS sid
    FROM postalcodes p
    JOIN cities c USING (city_id)
  ) foo
 WHERE foo.sid = state_id
   AND old_postalcode = foo.pcode

psql:/usr/local/www/beano/datingsite/sql/import_people.sql:363: ERROR: could not write to hash-join temporary file: No space left on device

From when the query is first run (somehow the disk space goes up initially), watching with `while : ; do df -h /usr/; sleep 3; done` shows /dev/ad0s1g (15G) dip briefly from 5.8G used to 5.1G, then climb steadily through 5.2G, 5.3G, 5.4G, 5.5G, 5.6G, 5.7G used (38% to 42% capacity) ... and on and on until free space reaches zero. Here's the query plan:

 Hash Join (cost=18770.77..185690.90 rows=20626 width=140)
   Hash Cond: ((("outer".postalcode)::text = ("inner".old_postalcode)::text) AND ("outer".city_id = "inner".city_id))
   -> Seq Scan on postalcodes p (cost=0.00..14742.12 rows=825012 width=18)
   -> Hash (cost=9955.64..9955.64 rows=366625 width=126)
      -> Merge Join (cost=69.83..9955.64 rows=366625 width=126)
         Merge Cond: ("outer".state_id = "inner".state_id)
         -> Index Scan using cities_state_id on cities c (cost=0.00..4203.13 rows=73325 width=8)
         -> Sort (cost=69.83..72.33 rows=1000 width=122)
            Sort Key: people_locations.state_id
            -> Seq Scan on people_locations (cost=0.00..20.00 rows=1000 width=122)
(10 rows)

Here's the inner query by itself:

datingsite=> EXPLAIN ANALYZE SELECT
datingsite->   p.postalcode AS pcode,
datingsite->   p.city_id AS cid,
datingsite->   c.state_id AS sid
datingsite-> FROM postalcodes p
datingsite-> JOIN cities c USING (city_id);

 Hash Join (cost=2091.56..47451.98 rows=825012 width=22) (actual time=1132.994..16764.241 rows=825012 loops=1)
   Hash Cond: ("outer".city_id = "inner".city_id)
   -> Seq Scan on postalcodes p (cost=0.00..14742.12 rows=825012 width=18) (actual time=0.077..4657.842 rows=825012 loops=1)
   -> Hash (cost=1585.25..1585.25 rows=73325 width=8) (actual time=1131.010..1131.010 rows=0 loops=1)
      -> Seq Scan on cities c (cost=0.00..1585.25 rows=73325 width=8) (actual time=0.031..738.582 rows=73325 loops=1)
 Total runtime: 20475.610 ms
(6 rows)

Both tables are rather large: datingsite=> select count(*) from people_loc
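One detail worth noting in the outer plan: the seq scan on people_locations shows cost=0.00..20.00 rows=1000, which looks like default statistics for a table that has never been analyzed, so the hash join may be sizing its batches from bad estimates. A hedged first step (not a guaranteed fix) is to refresh the statistics before re-running the UPDATE:

```sql
-- The rows=1000 estimate on people_locations suggests missing statistics.
ANALYZE people_locations;
ANALYZE postalcodes;
ANALYZE cities;
```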
[BUGS] BUG #1539: Suggestion
The following bug has been logged online:

Bug reference: 1539
Logged by: Patrick Boulay
Email address: [EMAIL PROTECTED]
PostgreSQL version: 7.4.5
Operating system: Unix Solaris
Description: Suggestion
Details: Sometimes I start a transaction, run a couple of commands in the SQL shell, and forget to commit them before testing my software. Would it be possible to change the prompt character during a transaction? Instead of:

database=#

it could be:

database=>

or something like that. After the commit, the prompt would come back to database=#. Example:

database=# begin;
database=> insert into somewhere values(1);
...
database=> end;
database=#

It's clearer! :)

Patrick
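For what it's worth, newer psql releases have a prompt escape for exactly this: %x expands to '*' while a transaction block is open and '!' after a failed one. A sketch for ~/.psqlrc (the available escapes depend on the psql version):

```sql
-- In ~/.psqlrc: show transaction state in the prompt.
-- %x -> '' normally, '*' inside a transaction, '!' in a failed transaction.
\set PROMPT1 '%/%R%x%# '
```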
[BUGS] Strange behavior for boolean predicates and partial indexes
I have a partial index that contains a predicate to check whether the field deleted is false or not:

CREATE INDEX people_essays_any_essaytype_idx ON people_essays (person_id) WHERE NOT deleted;

The following query does NOT use the index:

EXPLAIN ANALYZE SELECT * FROM people_essays WHERE person_id = 1 AND deleted IS FALSE;
 Seq Scan on people_essays (cost=0.00..10225.85 rows=4 width=67) (actual time=110.205..417.113 rows=4 loops=1)
   Filter: ((person_id = 1) AND (deleted IS FALSE))
 Total runtime: 417.203 ms
(3 rows)

EXPLAIN ANALYZE SELECT * FROM people_essays WHERE person_id = 1 AND deleted = FALSE;
 Index Scan using people_essays_uniq on people_essays (cost=0.00..18.06 rows=4 width=67) (actual time=35.094..35.971 rows=4 loops=1)
   Index Cond: (person_id = 1)
   Filter: (deleted = false)
 Total runtime: 36.070 ms
(4 rows)

EXPLAIN ANALYZE SELECT * FROM people_essays WHERE person_id = 1 AND NOT deleted;
 Index Scan using people_essays_any_essaytype_idx on people_essays (cost=0.00..18.05 rows=4 width=67) (actual time=0.034..0.047 rows=4 loops=1)
   Index Cond: (person_id = 1)
   Filter: (NOT deleted)
 Total runtime: 0.136 ms
(4 rows)

Though the index was created with "NOT deleted", shouldn't the planner evaluate "IS FALSE" the same way, given that "= FALSE" works?
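A partial index is only used when the planner can prove the query's WHERE clause implies the index predicate, and its prover is deliberately simple: `deleted = false`, `deleted IS FALSE`, and `NOT deleted` are distinct expression trees even though they select the same rows in a WHERE clause (as bare expressions they differ on NULL). The dependable pattern is to repeat the predicate exactly as the index declares it:

```sql
-- Spell the condition exactly as in the index's WHERE clause:
SELECT *
  FROM people_essays
 WHERE person_id = 1
   AND NOT deleted;
```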
[BUGS] BUG #1997: Grammar error in phpPgAdmin 3.1
The following bug has been logged online:

Bug reference: 1997
Logged by: Patrick Kik
Email address: [EMAIL PROTECTED]
PostgreSQL version: 7.4.5
Operating system: SLES 9
Description: Grammar error in phpPgAdmin 3.1
Details: In Dutch, after deleting a column the text "Kolom verwijdert" appears. This should be "Kolom verwijderd".
[BUGS] BUG #2225: Backend crash -- BIG table
The following bug has been logged online:

Bug reference: 2225
Logged by: Patrick Rotsaert
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.1.2
Operating system: Linux
Description: Backend crash -- BIG table
Details:

Situation:
- Database with 1 table:

CREATE TABLE pptran (
    cchk int4,
    trid char(14),
    csnr char(13),
    amount int4,
    date date,
    "time" time,
    lane int4,
    "type" int4,
    data char(24),
    stat int4,
    skip int4,
    retry int4,
    points_bc int4,
    points_chip int4,
    trid_tid int2,
    trid_seq int2
);

This table contains approx. 36 million records!

CREATE INDEX pptran_trid ON pptran USING btree (trid);

Problem:
Executing a select query causes the backend to crash. This is the output from the psql frontend:

pointspp=# select trid, count(*) from pptran group by trid having count(*) > 1;
server closed the connection unexpectedly
This probably means the server terminated abnormally before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!>

This error appears after several minutes. During execution, 'top' shows about 4% CPU usage and 98% memory usage for the postmaster process.
At the time of the crash, the server logs:

LOG: server process (PID 21815) was terminated by signal 9
LOG: terminating any other active server processes
FATAL: the database system is in recovery mode
LOG: all server processes terminated; reinitializing
LOG: database system was interrupted at 2006-01-30 15:04:31 CET
LOG: checkpoint record is at 3/275944AC
LOG: redo record is at 3/275944AC; undo record is at 0/0; shutdown TRUE
LOG: next transaction ID: 5415; next OID: 16444
LOG: next MultiXactId: 1; next MultiXactOffset: 0
LOG: database system was not properly shut down; automatic recovery in progress
LOG: record with zero length at 3/275944F0
LOG: redo is not required
LOG: database system is ready
LOG: transaction ID wrap limit is 2147484146, limited by database "postgres"

Platform info:
- PostgreSQL version 8.1.2
- Linux Slackware 9.1.0, Kernel 2.4.22
- CPU (/proc/cpuinfo): Intel(R) Pentium(R) 4 CPU 2.20GHz, 512 KB cache, 2192.973 MHz
- Memory (/proc/meminfo): MemTotal 904512 kB, MemFree 872140 kB, SwapTotal 2000368 kB, SwapFree 1982500 kB
Re: [BUGS] BUG #2225: Backend crash -- BIG table
on 31/01/2006 16:18 Tom Lane wrote:
> "Patrick Rotsaert" <[EMAIL PROTECTED]> writes:
>> At the time of the crash, the server logs:
>> LOG: server process (PID 21815) was terminated by signal 9
>
> You're running on a Linux machine with memory overcommit enabled. Turn that off, or nothing will ever work very reliably --- the OOM killer is entirely capable of zapping innocent processes that have nothing to do with the one eating too much memory; and even when it kills the right process, "kill -9" is not IMHO an acceptable way for the system to tell a process it can't have any more memory. See http://www.postgresql.org/docs/8.1/static/kernel-resources.html#AEN18105
>
> regards, tom lane

Syslog indeed shows:

kernel: Out of Memory: Killed process 21815 (postmaster).

Looking at the kernel source mm/mmap.c, the function `int vm_enough_memory(long pages)` does one simple test:

/* Sometimes we want to use more memory than we have. */
if (sysctl_overcommit_memory)
    return 1;

But /proc/sys/vm/overcommit_memory reads `0', so my guess is that overcommit is not enabled... right? Any hints?

Thanks, Patrick Rotsaert
Re: [BUGS] BUG #2225: Backend crash -- BIG table
Tom Lane wrote:
> Patrick Rotsaert <[EMAIL PROTECTED]> writes:
>> on 31/01/2006 16:18 Tom Lane wrote:
>>> http://www.postgresql.org/docs/8.1/static/kernel-resources.html#AEN18105
>> But /proc/sys/vm/overcommit_memory reads `0', so my guess is that overcommit is not enabled... right?
>
> Please read the reference I pointed you to.
>
> regards, tom lane

I did read it, very carefully. The proposed fix will only work on 2.6 kernels. Mine is a 2.4, and upgrading it is not an option. The document suggests looking at the kernel source for 2.4 kernels. I did that, as I wrote in the previous mail. Setting the overcommit parameter to '2', or any value for that matter, won't do any good, because in this kernel it is only tested for being non-zero. On my system the parameter is 0, so overcommit is *not* enabled. I don't know what else I can do. The other proposed option is to install more memory. Sorry, not an option; 1GB has to be sufficient. Apart from the overcommit subject, why is postgres consuming so much memory? Shouldn't the solution to this problem be sought there?

Thanks, Patrick Rotsaert
Re: [BUGS] BUG #2225: Backend crash -- BIG table
> Is there some reason you can't add more swap space?

Yes, disk space. I have about 2 GB of swap space enabled.

> How do you know it is Postgres that is using lots of memory? The OOM killer doesn't just kill off memory hogs, so you can't just assume the processes being killed tell you which processes were using lots of memory.

I keep 'top' running while I launch the query. One single postmaster climbs to the top, claiming 98% MEM and about 8% CPU.

> The memory that Postgres uses is controlled in postgresql.conf. One particular gotcha is that sortmem is per sort, so if you have a number of concurrent sorts you might be using more memory than you expected.

I am very sure there are no other queries running at the same time. This is a development machine over which I have full control.
Re: [BUGS] BUG #2225: Backend crash -- BIG table
> 0 means overcommit is enabled. You want to set it to something other than 0 to prevent overcommitting and the consequent surprising process deaths. Exactly what other values are accepted varies, but 0 isn't the one for you.

I do not understand how 0 could mean overcommit is enabled. I do not know how it is in recent kernels, but the source code of the 2.4 kernel I use is this:

int vm_enough_memory(long pages)
{
    unsigned long free;

    /* Sometimes we want to use more memory than we have. */
    if (sysctl_overcommit_memory)
        return 1;
    // ...
}

This seems pretty straightforward to me. I also did a recursive grep through all of the kernel source, and this is the only place where this parameter is used. I tried setting the parameter to 1, but it did not make any difference.
Re: [BUGS] BUG #2225: Backend crash -- BIG table
> One question is what does the explain (without analyze) plan look like for the above, and are the row estimates valid in the case of one of the hash plans?

pointspp=# explain select trid, count(*) from pptran group by trid having count(*) > 1;
 HashAggregate (cost=1311899.28..1311902.78 rows=200 width=18)
   Filter: (count(*) > 1)
   -> Seq Scan on pptran (cost=0.00..1039731.02 rows=36289102 width=18)
(3 rows)

> Failing that, how many rows should the above return?

That is exactly what I am trying to find out. I can only guess, but it should not be more than a couple of tens of thousands of rows.
Re: [BUGS] BUG #2225: Backend crash -- BIG table
> Hmm, if you do an enable_hashagg=false and then run the query (without explain), does it work then?

pointspp=# set enable_hashagg = false;
SET
pointspp=# select trid, count(*) from pptran group by trid having count(*) > 1;
ERROR: could not write block 661582 of temporary file: No space left on device
HINT: Perhaps out of disk space?

It still does not work, but it no longer consumes the same amount of memory.
Re: [BUGS] BUG #2225: Backend crash -- BIG table
> The problem is that the HashAgg will have to maintain a counter for every distinct value of trid, not just those that occur more than once. So if there are a huge number of one-time-only values you could still blow out memory (and HashAgg doesn't currently know how to spill to disk).

One-time-only values are in my case more probable, so it will use a lot of counters.

> That "rows=200" estimate looks suspiciously like a default. Has this table been ANALYZEd recently? I'd expect the planner not to choose HashAgg if it has a more realistic estimate of the number of groups.
>
> regards, tom lane

I did a vacuum analyze; now the explain gives different results.

pointspp=# vacuum analyze;
VACUUM
pointspp=# explain select trid, count(*) from pptran group by trid having count(*) > 1;
 GroupAggregate (cost=9842885.29..10840821.57 rows=36288592 width=18)
   Filter: (count(*) > 1)
   -> Sort (cost=9842885.29..9933606.77 rows=36288592 width=18)
      Sort Key: trid
      -> Seq Scan on pptran (cost=0.00..1039725.92 rows=36288592 width=18)
(5 rows)

pointspp=# select trid, count(*) from pptran group by trid having count(*) > 1;
ERROR: could not write block 661572 of temporary file: No space left on device
HINT: Perhaps out of disk space?

I have 5.1GB of free disk space. If this is the cause, I have a problem... or is there another way to extract (and remove) duplicate rows?
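If the full 36M-row sort will not fit in 5.1GB of temporary space, one workaround is to hunt duplicates in slices of the trid key space, so each sort handles only a fraction of the table; the existing pptran_trid index makes each slice reasonably cheap to scan. The range boundaries below are hypothetical:

```sql
-- Sketch: check one key range at a time; repeat with successive ranges
-- until the whole trid domain is covered.
SELECT trid, count(*)
  FROM pptran
 WHERE trid >= '00000000000000' AND trid < '20000000000000'
 GROUP BY trid
HAVING count(*) > 1;
```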
[BUGS] BUG #2334: WHERE IN (SUBSELECT) fails when column is null
The following bug has been logged online:

Bug reference: 2334
Logged by: Patrick Narkinsky
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.1.3
Operating system: Mac OS X
Description: WHERE IN (SUBSELECT) fails when column is null
Details: This may be expected behavior, but it certainly doesn't seem right to me, and it works as expected in SQLite. The database is as follows:

BEGIN TRANSACTION;
create table a ( id integer, text varchar(20) );
INSERT INTO a VALUES(0,'test');
INSERT INTO a VALUES(1,'test2');
create table b ( id integer, a_id integer);
INSERT INTO b VALUES(0,NULL);
INSERT INTO b VALUES(1,NULL);
INSERT INTO b VALUES(2,NULL);
COMMIT;

The following query returns everything in a in SQLite, but returns nothing in PostgreSQL:

select * from a where a.id not in (select a_id from b);

On PostgreSQL, it works as expected when a_id has a non-null value. I'm not expert enough on SQL to say which is wrong, but it appears to me that the SQLite behavior makes a lot more sense.
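PostgreSQL's behavior here is the one the SQL standard specifies: under three-valued logic a comparison with NULL yields unknown, so once the subselect contains a NULL, `id NOT IN (...)` can never be true for any row. The usual workarounds:

```sql
-- Exclude NULLs from the subselect...
select * from a
 where a.id not in (select a_id from b where a_id is not null);

-- ...or use NOT EXISTS, where NULL rows simply fail to match.
select * from a
 where not exists (select 1 from b where b.a_id = a.id);
```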
[BUGS] BUG #2386: pg_restore doesn't restore large objects
The following bug has been logged online:

Bug reference: 2386
Logged by: Patrick Headley
Email address: [EMAIL PROTECTED]
PostgreSQL version: 8.0.4 +
Operating system: Windows XP
Description: pg_restore doesn't restore large objects
Details: I have been trying to restore some PostgreSQL databases with a single large object in them. By single I mean a single table with one lo and only one record. Using the pg_restore utility that ships with the Windows version of PgAdmin III v1.4.1 and v1.4.2 didn't work. I was finally able to restore the databases with the pg_restore on the Mac OS X machine hosting the PostgreSQL server. Postgres there was compiled and installed using the Fink project; the version on that machine is 8.1.x.

Backups don't seem to be the problem, as I was able to make a backup using the Windows version of pg_backup shipped with PgAdmin III v1.4.2. I also tried v1.4.1, and though I didn't realize at the time that it was a restore problem, the backups didn't error out in any way; the restore operation erred out in the same way. I tried restoring with the pg_restore that ships with v1.4.1 and v1.4.2 of PgAdmin III from a Windows XP machine to a Mac OS X 10.4 server hosting PostgreSQL v8.0.7, on a Windows Server 2003 machine hosting its own Postgres server v8.0.4, and from the Windows XP machine to the Windows Server 2003 machine. None of those combinations worked. What finally worked was logging onto the Mac OS X machine and running pg_restore from the bin directory; that machine has a Fink compiled and installed version of PostgreSQL. I don't know yet if the restore will work on the G4 machines, but I suspect it will. It just seems to be something to do with the Windows dll. I thought I saw something about a custom pg_restore, but I don't remember where; maybe I was using a custom pg_restore without knowing it. If so, and if this issue isn't a bug, I apologize. However, I searched the Internet for several hours while trying to figure out what to do and cannot find anything regarding this problem. Please let me know if there is something I may have done wrong, or if you can reproduce the symptom. Patrick Headley.
Re: [BUGS] BUG #2386: pg_restore doesn't restore large objects
I'm a bit hurt by your statement that what I sent was just about useless :( The problem here is that I am new to PostgreSQL and PgAdmin III, and so, in my confusion about what's normal and what's not, I am unable to provide you with all the details that would help you resolve the problem. However, I tried to be clear about which actions didn't work and which did. Just as a point of reference, I was essentially thrown into a world of PostgreSQL installations that were incomplete and databases that were poorly designed, so the learning curve has been short and steep.

So, let me try to explain this again. I recently added an LO object to a database using Peter Mount's LO type. So far, that's working. Yesterday, I made a backup of the database in order to restore it onto my test server. I used PgAdmin III to do the backup and it worked OK. Due to the problems I'm having with the restore, I tried the backup from two Mac OS X G4 servers and one Mac OS X Intel Duo server. All the backups were run from PgAdmin III and they all seemed to work. I didn't attempt to restore every backup from every machine, but they all ran the same and no error messages appeared.

When I try to restore the backup using PgAdmin III, the log window begins to fill up. Near the end, when it should say it's restoring the BLOBs, an error message appears stating the BLOBs couldn't be restored. I don't have the exact text of the message, but I could get it for you if needed. I even created a test database with one table and two fields, recordid and logo (the LO type field). I couldn't even get this database to restore using PgAdmin III. The point here is that it doesn't matter which server I tried to restore to or which database I used (as long as it had at least one large object stored in it): if I used PgAdmin III, the same error message appeared at the same place in the process.

However, if I restored the backup by opening a command or terminal window and running the command from the command line, it worked. You should have no problem reproducing the same error message that I received. If you don't see the same problem, let me know, and the next time I do a restore I'll get the details for you.

By the way, when I put the backup file on one of the Macs and ran the restore using the command line from the Mac Terminal window, I was only prompted for a password once. However, when restoring the backup onto the Windows 2003 server, I was prompted for the password at the beginning of the process and then again just before restoring the BLOBs. I don't know how this might be related, but I thought I would let you know. If you are unable to reproduce the problem by simply attempting to restore a backup of a database that has some LO data stored in it, let me know and I'll start from scratch and send you all the details that I can come up with.

Patrick Headley
Linx Consulting, Inc.
(303) 916-5522
[EMAIL PROTECTED]
www.linxco-inc.com

-----Original Message-----
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Tuesday, April 11, 2006 2:14 PM
To: Patrick Headley
Cc: pgsql-bugs@postgresql.org
Subject: Re: [BUGS] BUG #2386: pg_restore doesn't restore large objects

"Patrick Headley" <[EMAIL PROTECTED]> writes:
> Description: pg_restore doesn't restore large objects

At no point did you show us exactly what you did or exactly what went wrong, so even though this report has a lot of version-number details, it's just about useless :-(. Please see the reporting suggestions at http://www.postgresql.org/docs/8.1/static/bug-reporting.html

regards, tom lane
Re: [BUGS] BUG #2386: pg_restore doesn't restore large objects
Andreas, I had another opportunity to do a restore for a client today. Both restore attempts that I'll be describing were to a remote Mac OS X machine with PostgreSQL 8.0.7 installed. Knowing that it wasn't going to work using PgAdmin III, I went directly to a command window on the machine with PgAdmin III v1.4.2 installed. The version of pg_restore on that machine is 8.1.3.6044. That machine is running Windows XP SP2. The following command, which didn't work, was taken directly from the command window. I've removed the IP address and password for security reasons.

C:\Program Files\pgAdmin III\1.4>pg_restore -i -h xx.xx.xx.xx -p 5432 -U password -d mtviewDEV -v C:\mtviewDEV.backup

The restore errored out with the following lines:

pg_restore: restoring large object data
pg_restore: [archiver] could not create large object 28305
pg_restore: *** aborted because of error

I then switched over to a Windows Server 2003 machine with PostgreSQL 8.0.4 installed and used pg_restore from the PostgreSQL bin folder. The version of pg_restore on that machine is v8.0.4.5277. On the Windows 2003 machine the restore worked. The only thing is that I lost a lot of the output from pg_restore that appeared in the command window, because the lines scrolled out of the display buffer. However, at the point where the error occurred on the Windows XP machine, the Windows 2003 machine prompted me for the password a second time. It then reported that it was restoring the large objects and proceeded to restore other items in the database. Hope that provides some insight into the cause of the problem.

Patrick Headley
Linx Consulting, Inc.
(303) 916-5522
[EMAIL PROTECTED]
www.linxco-inc.com

-----Original Message-----
From: Andreas Pflug [mailto:[EMAIL PROTECTED]
Sent: Friday, April 14, 2006 3:52 AM
To: Dave Page
Cc: Bruce Momjian; Patrick Headley; Tom Lane; pgsql-bugs@postgresql.org
Subject: Re: [BUGS] BUG #2386: pg_restore doesn't restore large objects

Dave Page wrote:
> pgAdmin just uses pg_dump/pg_restore to handle the heavy lifting.
>
>>> When I try to restore the backup using PGAdmin III, the log
>>> window begins to fill up. Near the end, when it should say it's
>>> restoring the BLOBS an error message appears stating the BLOBS
>>> couldn't be restored.

This is the original output from pg_restore. pgAdmin will also log the precise parameters used to restore in the first log line; it might help to see those.

Regards, Andreas
[BUGS] Problem PostgreSQL version 7.4.1
Hello, Since January 3, 2012 I have been having a problem with the IWSS database, which is apparently corrupted. When I try to access the database:

[root@srv-proxy bin]# ./psql -U sa -d iwss
psql: FATAL: could not open relation "pg_trigger":

As a result, the antivirus part of the proxy does not work:

Database access module initialization failed, please check the setting in Database tab.

And HTTP access to Trend Micro IWSS does not work either. Can you help me?

Patrick Pileri
patrick.pil...@fr.loomis.com
Re: [BUGS] BUG #6727: Column does not exist exception has no error code
BTW I'm using postgresql-9.1-901.jdbc4.jar Thanks again, Pat On Wed, Jul 11, 2012 at 2:10 PM, Patrick Tucker wrote: > Interesting, the test case that I ran was performing a simple SQL > statement like the following: select "a" from table > > I wonder if the version of the driver I am using needs to be updated or > even the database? > Thanks, > Pat > On Wed, Jul 11, 2012 at 1:43 PM, Tom Lane wrote: > >> tucker...@gmail.com writes: >> > The SQLException that is thrown when performing a query that has a >> column >> > name that is not valid, does not exist, does not contain an error >> > code. >> >> Could you provide a specific example? When I try this I see an error >> code, 42703: >> >> regression=# \set VERBOSITY verbose >> regression=# select nosuchcol from int8_tbl; >> ERROR: 42703: column "nosuchcol" does not exist >> LINE 1: select nosuchcol from int8_tbl; >>^ >> LOCATION: transformColumnRef, parse_expr.c:766 >> >> There may indeed be someplace where the errcode was forgotten, but >> without a concrete example it's hard to find where. >> >> regards, tom lane > >
Re: [BUGS] BUG #6727: Column does not exist exception has no error code
Interesting, the test case that I ran was performing a simple SQL statement like the following: select "a" from table I wonder if the version of the driver I am using needs to be updated or even the database? Thanks, Pat On Wed, Jul 11, 2012 at 1:43 PM, Tom Lane wrote: > tucker...@gmail.com writes: > > The SQLException that is thrown when performing a query that has a column > > name that is not valid, does not exist, does not contain an error > > code. > > Could you provide a specific example? When I try this I see an error > code, 42703: > > regression=# \set VERBOSITY verbose > regression=# select nosuchcol from int8_tbl; > ERROR: 42703: column "nosuchcol" does not exist > LINE 1: select nosuchcol from int8_tbl; >^ > LOCATION: transformColumnRef, parse_expr.c:766 > > There may indeed be someplace where the errcode was forgotten, but > without a concrete example it's hard to find where. > > regards, tom lane >