Re: [BUGS] BUG #4307: INSERT fails with primary key constraint
"Oskars Ozols" <[EMAIL PROTECTED]> writes: > id bigint NOT NULL DEFAULT > nextval(('public.event_log_id_seq'::text)::regclass), > 2008-07-15 12:32:03 EEST STATEMENT: INSERT INTO public.event_log > (date_time, ip_address, action_type, severity, parameters, web_address, > server, user_id, id) VALUES ('2008-07-15 12:28:50.00', > '123.123.123.123', 'WebServices.SomeService:LogError', 7, 'error text', > 'http://123.123.123.123/WebServices/SomeService.asmx', '4', 75, 156112) There's something strange here. Your SQL statement includes the id as a literal constant 156112. This isn't the normal way to write this query. This is defeating the point of the DEFAULT you see in the table definition. Postgres guarantees that the nextval() function will only return each value once. But it's not clear from this log how your application is generating the 156112 value which it is explicitly putting in the query. If it's getting it by calling nextval() then it's somehow using it twice. It's also possible someone has written code to pick primary key values by calling "select max(id)+1". That is guaranteed to have race conditions like this. The safest thing to do is to just leave out the id column from your INSERT statement. Just let the DEFAULT expression generate a value for you. Then you can use curval('event_log_id_seq') to find out what value it generated. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] Psql or test application hangs when interface is down for the DB server
"K, Niranjan (NSN - IN/Bangalore)" <[EMAIL PROTECTED]> writes: > Is there any other workaround or alternative so that the situation about > the interface is down is known and based on that the 'PQexec' does not > get blocked for ~15 minutes. Absent threads I think you have to use alarm() and a SIGALRM signal handler. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's Slony Replication support! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] Psql or test application hangs when interface is down for the DB server
"Valentin Bogdanov" <[EMAIL PROTECTED]> writes: > I have noticed this as well. Blocks in poll(), timeout parameter -1, Oh good point. non-blocking sockets and poll/select let you control the timeout too. > meaning infinite then after 4 minutes on my system poll() returns 1 and > getsockopt() is called with SO_ERROR. SYN packets are tried only for the > default tcp timeout of 20 seconds. Uhm, 20 seconds would be an unreasonably low default. I think the RFCs mandate timeouts closer to the 4 minutes you describe. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] BUG #4313: Strange optimizer behaviour
"Daniel Podlejski" <[EMAIL PROTECTED]> writes: > I think there is no sense to cast too big value to numeric when field type > is integer. > On really big table this "bug" cause unnecessary io load. Well, for example, the same logic doesn't hold for < where all the records would satisfy the inequality but only numeric.< will be able to handle the argument. I think you could get the behaviour you're looking for by using an untyped quoted constant like '11' instead of using an integer constant. The fact that these two cases behave differently is a bit confusing too. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's On-Demand Production Tuning -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] Hmm, nodeUnique doesn't really support backwards scan too well
"Tom Lane" <[EMAIL PROTECTED]> writes: > We could probably fix this by complicating the logic in ExecUnique, > but I wonder whether it wouldn't be better to just stop treating > Unique nodes as backwards-scannable. The only reason for that > node type to exist (as opposed to using Group nodes) is that it's > simple and low-overhead. So complicating it to support a corner case > that no one has noticed in many years might be counterproductive. > Thoughts? Hm, that has the nasty side effect that someone who uses SCROLL but doesn't fetch backwards much or at all suddenly gets a much more expensive plan than if they didn't. On the other hand someone who does actually use the scrollability of the cursor to fetch forward and backwards a lot, repeatedly fetching the same records, would actually get significantly better performance out of a materialized result than having to skip over the duplicates repeatedly. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] Hmm, nodeUnique doesn't really support backwards scan too well
"Tom Lane" <[EMAIL PROTECTED]> writes: > Gregory Stark <[EMAIL PROTECTED]> writes: >> Hm, that has the nasty side effect that someone who uses SCROLL but doesn't >> fetch backwards much or at all suddenly gets a much more expensive plan than >> if they didn't. > > Well, what are they using SCROLL for if they don't need it? > > A more plausible objection is that previously, (some) cursors using > SELECT DISTINCT would support backwards fetch even if you hadn't said > SCROLL. The bug I'm concerned about is only manifest with SELECT > DISTINCT ON, so someone might well be happily using DISTINCT in a way > that is affected. So there might be apps out there that are working > today and will stop working after this change. I must be missing something. Couldn't we just make the paths non-reversible if there's a DISTINCT ON? > But they are very clearly breaking the rules so I don't have a huge amount > of sympathy for them. If we were to take this argument seriously, it would > mean that we'd have to not only complicate ExecUnique but back-patch the > result clear back to 7.4. I'm not even sure how to fix it (the nasty case is > changing directions partway through the scan); let alone how to fix it in a > way that's obviously enough right to make me feel comfortable in > back-patching. It seems like the obvious fix is to just reverse the behaviour -- keep reading backwards until you see the level break then return the previous record from a second slot. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] Hmm, nodeUnique doesn't really support backwards scan too well
"Tom Lane" <[EMAIL PROTECTED]> writes: > Gregory Stark <[EMAIL PROTECTED]> writes: >> "Tom Lane" <[EMAIL PROTECTED]> writes: >>> ... I'm not even sure how to fix it (the nasty case is >>> changing directions partway through the scan); let alone how to fix it in a >>> way that's obviously enough right to make me feel comfortable in >>> back-patching. > >> It seems like the obvious fix is to just reverse the behaviour -- keep >> reading backwards until you see the level break then return the >> previous record from a second slot. > > Well, if you think it's easy, the best form of criticism is a patch. > The change-of-direction problem seems to me to be messy --- not > insoluble, but messy enough to need beta testing. Hm, I must have misunderstood the bug because there's a comment in nodeUnique which claims it already does precisely what I was suggesting: * We return the first tuple from each group of duplicates (or the last * tuple of each group, when moving backwards). At either end of the * subplan, clear the result slot so that we correctly return the * first/last tuple when reversing direction. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] Hmm, nodeUnique doesn't really support backwards scan too well
"Tom Lane" <[EMAIL PROTECTED]> writes: > Gregory Stark <[EMAIL PROTECTED]> writes: >> "Tom Lane" <[EMAIL PROTECTED]> writes: >>> Well, if you think it's easy, the best form of criticism is a patch. >>> The change-of-direction problem seems to me to be messy --- not >>> insoluble, but messy enough to need beta testing. > >> Hm, I must have misunderstood the bug because there's a comment in nodeUnique >> which claims it already does precisely what I was suggesting: > >> * We return the first tuple from each group of duplicates (or the last >> * tuple of each group, when moving backwards). At either end of the >> * subplan, clear the result slot so that we correctly return the >> * first/last tuple when reversing direction. > > That's what it *used* to say. But the problem is that that's the wrong > behavior, because you get different tuples returned depending on which way > you are traveling. It's only workable if the tuples in a group are > completely indistinguishable. Oh egads. I see what it's trying to say now. I assumed it meant it worked *properly* meaning it returned the "last tuple of each group" returned by the child node as it scanned backwards. What it actually means it say is that it is *intentionally* behaving incorrectly! It's returning the last tuple of the set as it scans backward meaning the first tuple that comes out scanning backwards. Sigh. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's Slony Replication support! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] ALTER TABLE name RENAME TO new_name; does not work immediately
<[EMAIL PROTECTED]> writes:

> ALTER TABLE name RENAME TO new_name;
> ALTER TABLE new_name
> RENAME COLUMN x TO y;
> ...
> I think the issue might be reproducible by creating table with a lot
> of rows, setting high work_mem and issuing the commands, but I did not
> try to reproduce it this way.

The commands you described should take the same length of time regardless of
the size of the table, and the memory settings are not relevant. I suspect
you're actually running some different commands?

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning
Re: [BUGS] ALTER TABLE name RENAME TO new_name; does not work immediately
<[EMAIL PROTECTED]> writes:

> Here is a script that is able to reproduce the problem (on my machine
> anyway), you may need to play with the number of rows or workmem
> settings to be able to reproduce it:

Egads, I can confirm that this script reproduces the problem on my machine
as well:

    ALTER TABLE xxx RENAME TO yyy;
    ALTER TABLE
    ALTER TABLE yyy RENAME COLUMN col1 TO colA;
    ERROR:  relation "yyy" does not exist

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!
Re: [BUGS] ALTER TABLE name RENAME TO new_name; does not work immediately
"Alvaro Herrera" <[EMAIL PROTECTED]> writes: > Gregory Stark escribió: >> >> <[EMAIL PROTECTED]> writes: >> >> > Here is a script that is able to reproduce the problem (on my machine >> > anyway), you may need to play with the number of rows or workmem >> > settings to be able to reproduce it: >> >> Egads, i can confirm that this script reproduces the problem on my machine as >> well: >> >> ALTER TABLE xxx RENAME TO yyy; >> ALTER TABLE >> ALTER TABLE yyy RENAME COLUMN col1 TO colA; >> ERROR: relation "yyy" does not exist > > On what version? Sounds like a problem in sinval message processing. My first suspicion was in the same area. It's two commands within the same session so it wouldn't actually be receiving any sinval messages just processing the local list. The two commands are within the same transaction so only the end-of-command list processed during CommandCounterIncrement() is happening (or should be happening). Either a) we're not actually issuing a CommandCounterIncrement() or b) the command is not being properly marked as non-read-only when the update happens or, uh, c) something else is broken with generating or executing those invalidation messages. > I could not reproduce it here, but didn't try very hard. Did you try running the script he provided? It happened for me first try. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] ALTER TABLE name RENAME TO new_name; does not work immediately
"Alvaro Herrera" <[EMAIL PROTECTED]> writes: >> > I could not reproduce it here, but didn't try very hard. >> >> Did you try running the script he provided? It happened for me first try. > > Yep. No luck. I didn't try changing the args in generate_series though. > > How many CPUs are you running this on? My system has 2. Huh, I've been able to simplify the reproducing script somewhat and still trigger the bug. It is *not* necessary to run the table rename and the column rename in the same transaction. That means it's *not* a problem with a missing CommandCounterIncrement() or anything like that. Even at transaction end the cache entry isn't being invalidated up if that's the cause. Also, as expected the size of the table is irrelevant. On the other hand the problem does not occur if the CREATE TABLE is in the same session. Nor if the REINDEX DATABASE is skipped. #!/bin/sh /usr/local/pgsql/bin/dropdb bug /usr/local/pgsql/bin/createdb bug /usr/local/pgsql/bin/psql -X bug << EOF CREATE TABLE xxx (id SERIAL, col1 TEXT, col2 TEXT); EOF /usr/local/pgsql/bin/psql -e -X bug << EOF REINDEX DATABASE bug; ALTER TABLE xxx RENAME TO yyy; ALTER TABLE yyy RENAME COLUMN col1 TO colA; ALTER TABLE yyy RENAME COLUMN col2 TO colB; EOF -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] ALTER TABLE name RENAME TO new_name; does not work immediately
"Tom Lane" <[EMAIL PROTECTED]> writes: > Also, I can still reproduce it with just REINDEX TABLE pg_class instead > of REINDEX DATABASE. Ah, I had tried just a reindex xxx but not a reindex pg_class. * reindex_index will attempt to update the pg_class rows for the relation * and index. If we are processing pg_class itself, we want to make sure * that the updates do not try to insert index entries into indexes we * have not processed yet. (When we are trying to recover from corrupted * indexes, that could easily cause a crash.) We can accomplish this * because CatalogUpdateIndexes will use the relcache's index list to know * which indexes to update. We just force the index list to be only the * stuff we've processed. Uhm. Is it possible we're mistakenly doing a HOT update because we're lying about what indexes exist? -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] ALTER TABLE name RENAME TO new_name; does not work immediately
"Tom Lane" <[EMAIL PROTECTED]> writes: > Apparently, rd_indexattr needs to get cleared someplace where it > isn't being cleared right now. I'm sure this is related to the > fancy dance we do for pg_class reindexing, but not sure yet where > to fix it. Yeah, doing the equivalent of this in gdb seems to fix it (haven't rebuilt yet to test that I haven't done something silly in the C code though) --- index.c 30 Jul 2008 21:56:59 +0100 1.300 +++ index.c 10 Aug 2008 19:55:21 +0100 @@ -2382,6 +2382,11 @@ is_pg_class = (RelationGetRelid(rel) == RelationRelationId); doneIndexes = NIL; + /* have to build the HOT indexed column bitmap before we start lying about +* what indexes exist... */ + if (is_pg_class && !rel->rd_indexattr) + (void)RelationGetIndexAttrBitmap(rel); + /* Reindex all the indexes. */ foreach(indexId, indexIds) { -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] BUG #4340: SECURITY: Is SSL Doing Anything?
"Tom Lane" <[EMAIL PROTECTED]> writes: > Actually, I had missed that the OP was looking at 7.3 rather than 8.3. > There was a "verify_peer()" in 7.3 but it was #ifdef'd out. The > question remains whether there's a reason to have it. It would be good > if the discussion were based on a non-obsolete PG version ... Well in theory SSL without at least one-way authentication is actually worthless. It's susceptible to man-in-the-middle attacks meaning someone can sniff all the contents or even inject into or take over connections. It is proof against passive attacks but active attacks are known in the field so that's cold comfort these days. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] BUG #4281: some types of errors do not log statements
"Peter Eisentraut" <[EMAIL PROTECTED]> writes: > Thomas H. wrote: >> maybe its by design (to not insert badly encoded characters into the >> utf8 encoded logs)? nevertheless to debug those faulty programm/codes, >> it would help to see what query provokes the error... > > Well, the problem is mainly that there is no query, because the bytes > arriving > are garbage. A human observer could make sense of it in some cases, but not > a computer in the general case. How is that different from any other syntax error? -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] BUG #4465: GROUP BY is not to SQL standard
"Tony Marston" <[EMAIL PROTECTED]> writes: > I think your definition of "Feature T301 Functional Dependencies" is > extremely questionable. ... If you support both key and non-key columns on a > table then you support functional dependencies whether you like it or not. An ISO/IEC 9075 conformant implementation must list in its documentation which optional features it claims to support. Postgres does so at: http://www.postgresql.org/docs/8.3/static/features.html In particular note that T301 is listed partway down this list: http://www.postgresql.org/docs/8.3/static/unsupported-features-sql-standard.html > As soon as I point out an SQL standard that you DON'T follow I get a barrage > of weasel words and pathetic excuses. Well then. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's 24x7 Postgres support! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] BUG #4513: VACUUM FULL fails with "out of memory" error
"Alexey Bashtanov" <[EMAIL PROTECTED]> writes: > The following bug has been logged online: > > Bug reference: 4513 > Logged by: Alexey Bashtanov > Email address: [EMAIL PROTECTED] > PostgreSQL version: 8.3.1 > Operating system: Red Hat Linux 2.6.20-1.2320.fc5smp i686 > Description:VACUUM FULL fails with "out of memory" error Are you sure this is 8.3.1 and not 8.3.0? There is a bug fixed in 8.3.1 which sounds a lot like this: Another VACUUM FULL bug introduced in 8.3 could result in a crash or out-of-memory report when dealing with pages containing no live tuples. > Here's the part of serverlog: > == > TopMemoryContext: 49416 total in 6 blocks; 8776 free (46 chunks); 40640 > used > TopTransactionContext: 2986336304 total in 367 blocks; 192312 free (1088 > chunks); 2986143992 used It does look like you have a memory leak in VACUUM FULL. There have been three memory leaks fixed in bug-fix releases since 8.3.1 but none should be related to VACUUM FULL. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's Slony Replication support! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] could not read block 77 of relation 1663/16385/388818775
Alexandra Nitzschke <[EMAIL PROTECTED]> writes:

> This monday I updated postgres to 8.3.5 on the standby server.
> After that I intialized the database
> ( copy once the database from the primary system: removing data/* on
> stand-by, setting the database on primary in backup-modus and then copy
> the database files )

Uhm, just to be sure: you did pg_start_backup() on the primary *before* you
started copying the data files across, right?

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's On-Demand Production Tuning
Re: [BUGS] could not read block 77 of relation 1663/16385/388818775
John R Pierce <[EMAIL PROTECTED]> writes:

> Alexandra Nitzschke wrote:
>> BTW... how about a block checksum that is checked just before writing a
>> block and just after reading it? I know this would degrade performance,
>> but I think we can afford that. Would it be possible to incorporate such
>> code without having to do too much patching?
>
> oracle has had an option for some time that uses read/only page protection
> for each page of the shared buffer area... when oracle knows it wants to
> modify a page, it un-protects it via a system call. this catches any wild
> writes into the shared buffer area as a memory protection fault.

The problem with both of these approaches is that most bugs occur when the
code *thinks* it's doing the right thing: a bug in the buffer management
code which returns the wrong buffer, or a real wild pointer dereference. I
don't remember us ever having either of those.

That said, the second option seems pretty trivial to implement. I think the
performance would be awful for a live database, but for a read-only database
it might make more sense.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's Slony Replication support!
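A toy demonstration of the page-protection idea (standalone C, not
PostgreSQL code): keep a buffer PROT_READ so any wild write traps
immediately, and unprotect it only around an intentional modification.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define BLCKSZ 8192

    int
    main(void)
    {
        /* mmap gives page-aligned memory, which mprotect requires */
        char *buf = mmap(NULL, BLCKSZ, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (buf == MAP_FAILED)
            return 1;

        mprotect(buf, BLCKSZ, PROT_READ);               /* buffer "at rest" */

        mprotect(buf, BLCKSZ, PROT_READ | PROT_WRITE);  /* legitimate update */
        memcpy(buf, "hello", 6);
        mprotect(buf, BLCKSZ, PROT_READ);

        printf("%s\n", buf);    /* reads are always fine */

        /* buf[0] = 'x'; */     /* a wild write here would SIGSEGV at once */
        return 0;
    }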
Re: [BUGS] Combination of Triggers and self-FKs produces inconsistent data
Tom Lane writes:

> It doesn't say that, because it isn't true. What is true is that if you
> make a trigger that prevents updates from happening, it breaks RI
> updates as well as directly-user-initiated updates.

Can we detect that this happened and throw an error? I suspect not, though,
since we have no way to actually determine whether the user trigger didn't
do something else equivalent.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!
Re: [BUGS] BUG #4640: Drop leading zero in EXECUTE
"Eduard Deacoon" writes: > If $4 has leading zero it does`n work :( please expand on "doesn't work" > Context replace with ''0 || '' || $4 work only if $4 have one leading zero. > ''00 || '' || $4 does`n work, ''0 || 0 ||'' || $4 - work I'm sorry I'm not following this part. What parameters did you call this function with? What did you expect to happen? What actually happened? -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's 24x7 Postgres support! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] could not read block 77 of relation 1663/16385/388818775
Darren Reed writes:

> ERROR: could not read block 1 of relation 1664/0/1233: read only 0 of
> 8192 bytes

FWIW, this is pg_shdepend_reference_index, which is actually a bit special.
It's a "shared" relation, which means it spans all your databases, and your
reindex didn't rebuild it. To reindex it you would have to shut down
Postgres and run REINDEX in a "standalone" backend.

> I suppose the logical thing for me to do is go back to 8.2.6.

I think it would be more interesting to know how you got into this
situation. When you ran initdb did anything unusual happen? Is it possible
anything later truncated these files?

There's no reason I can imagine 8.3 would be any more susceptible to this
than 8.2. And certainly no reason you would want to use a year-old release
of 8.2 missing a year's worth of bug fixes and security fixes. The current
release of 8.2 is 8.2.12.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!
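A sketch of the standalone-backend procedure (paths hypothetical; the
database name is arbitrary since shared catalogs are reachable from any
database):

    pg_ctl -D /path/to/data stop -m fast

    # -P makes the backend ignore system indexes, which is what allows a
    # damaged one to be rebuilt:
    postgres --single -P -D /path/to/data postgres
    # backend> REINDEX INDEX pg_shdepend_reference_index;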
Re: [BUGS] (some) Indexes ignored after long-running UPDATE and REINDEX at the same time (8.3.6)
Marinos Yannikos writes:

> Hi,
>
> I had a strange problem this morning - I started a long-running UPDATE on a
> heavily indexed table with about 8m rows last night to test a trigger-based
> queue (PgQ):
>
> UPDATE eintrag SET mtime=mtime;

I think you were bitten by a gotcha with newly created indexes and
"heap-only" updates. If a table has any "heap-only" updates, then a newly
created index cannot be used by any queries which need to be able to see
older versions of those records. Once your older transactions had all
finished, the index would have suddenly started being used.

This is not very common in practice because usually index builds take a
while, and once they're done any transactions which were started earlier
have long since expired. But if you were running any long-lived transactions
at the same time, they could prevent any other transaction from being able
to use the index until they commit (and you start a new transaction to run
the query in).

Normally I would not recommend running nightly REINDEXes, though in this
case, because you had done a massive UPDATE against the table, it was
probably helpful.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's 24x7 Postgres support!
Re: [BUGS] (some) Indexes ignored after long-running UPDATE and REINDEX at the same time (8.3.6)
Tom Lane writes:

> Gregory Stark writes:
>> Marinos Yannikos writes:
>>> I had a strange problem this morning - I started a long-running UPDATE on a
>>> heavily indexed table with about 8m rows last night to test a trigger-based
>>> queue (PgQ):
>
>> I think you were bitten by a gotcha with newly created indexes and
>> "heap-only" updates.
>
> This would only be possible if it were 8.3 and the reindex script used
> REINDEX CONCURRENTLY. Neither fact is in evidence at this point.

Well, it does say "8.3.6" in the subject. We don't actually support REINDEX
CONCURRENTLY... But if my other post is right then it would still happen
(unnecessarily) if the index was originally built with CREATE INDEX
CONCURRENTLY and then reindexed.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Ask me about EnterpriseDB's PostGIS support!
Re: [BUGS] BUG #2401: spinlocks not available on amd64
Theo Schlossnagle wrote:

> Tom Lane wrote:
>
>> There is no reason for the i386 or AMD64 code to be different from what's
>> already tested on Linux --- the hardware's the same and the OS surely
>> doesn't make a difference at this level.
>
> On linux you use gcc, which allows for inline assembly. So, the code is
> already very different.

How does this interact with binary builds such as RPMs? If someone installs
an amd64 binary on an x86 machine or vice versa, does this assembly do the
right thing at all? Does it perform slowly?

Ideally we would compile both and pick the right one at run-time, but that
might have annoying overhead if there's a branch before every pg_atomic_cas
call. Perhaps a minimal thing to do would be to detect a mismatch on startup
and log a message about it.

--
  Gregory Stark
  http://mit.edu/~gsstark/resume.pdf
Re: [BUGS] BUG #3320: Error when using INSERT...RETURNING as a subquery
"David Fetter" <[EMAIL PROTECTED]> writes: > On Tue, May 29, 2007 at 09:41:38AM -0400, Tom Lane wrote: >> "Jan Szumiec" <[EMAIL PROTECTED]> writes: >> > UPDATE items SET (type, post_id) = (INSERT INTO efforts (effort) VALUES >> > (667) RETURNING 'Item', id) WHERE id = 1937 >> >> Sorry, RETURNING is only supported at the top level of a query. > > What would be involved with making this possible? What we have at the > moment is a pretty clear POLA violation because unlike the rest of the > row-returning objects (tables, views, SRFs and VALUES() clauses), only > RETURNING can't be used in a subquery. It has the same problem that SELECT triggers have. How many rows should you expect that subquery to insert, update, or delete if it's used in a join clause? Or in the where clause of another insert/update/delete statement? -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [BUGS] BUG #3387: mod on non-integer returns bad result
"Tom Lane" <[EMAIL PROTECTED]> writes: > "Filip Krska" <[EMAIL PROTECTED]> writes: >> select mod (70.0,70) from dual; >> returns >> 70.0 >> instead of >> 0.0 > > PG 8.0 gets this right. I think this demonstrates that Bruce's 8.1 patch > http://archives.postgresql.org/pgsql-committers/2005-06/msg00045.php > didn't actually fix anything, merely move the failure cases around. Well I think that patch is right in itself. The source of the problem is the floating point arithmetic which is used to do the individual steps in the long division. It does seem odd to me that it isn't using integer arithmetic for that step. It says it has to do avoid overflow though? I would start with this though. 1/70 isn't exactly representable so 70.0 * 1.0/70 doesn't work properly whereas 70.0 / 70.0 has at least a chance of working. But I do think switching this one way or another to integer math would be the real solution. Index: numeric.c === RCS file: /home/stark/src/REPOSITORY/pgsql/src/backend/utils/adt/numeric.c,v retrieving revision 1.104 diff -u -r1.104 numeric.c --- numeric.c 9 Jun 2007 15:52:30 - 1.104 +++ numeric.c 15 Jun 2007 16:22:11 - @@ -4052,7 +4052,6 @@ NumericDigit *res_digits; double fdividend, fdivisor, - fdivisorinverse, fquotient; int qi; int i; @@ -4128,7 +4127,6 @@ if (i < var2ndigits) fdivisor += (double) var2digits[i]; } - fdivisorinverse = 1.0 / fdivisor; /* * maxdiv tracks the maximum possible absolute value of any div[] entry; @@ -4152,7 +4150,7 @@ fdividend += (double) div[qi + i]; } /* Compute the (approximate) quotient digit */ - fquotient = fdividend * fdivisorinverse; + fquotient = fdividend / fdivisor; qdigit = (fquotient >= 0.0) ? ((int) fquotient) : (((int) fquotient) - 1);/* truncate towards -infinity */ @@ -4203,7 +4201,7 @@ fdividend += (double) div[qi + i]; } /* Compute the (approximate) quotient digit */ - fquotient = fdividend * fdivisorinverse; + fquotient = fdividend / fdivisor; qdigit = (fquotient >= 0.0) ? ((int) fquotient) : (((int) fquotient) - 1);/* truncate towards -infinity */ maxdiv += Abs(qdigit); @@ -4236,7 +4234,7 @@ fdividend = (double) div[qi]; for (i = 1; i < 4; i++) fdividend *= NBASE; - fquotient = fdividend * fdivisorinverse; + fquotient = fdividend / fdivisor; qdigit = (fquotient >= 0.0) ? ((int) fquotient) : (((int) fquotient) - 1);/* truncate towards -infinity */ div[qi] = qdigit; -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [BUGS] BUG #3387: mod on non-integer returns bad result
"Tom Lane" <[EMAIL PROTECTED]> writes: > Gregory Stark <[EMAIL PROTECTED]> writes: >> The source of the problem is the floating point arithmetic which is used to >> do >> the individual steps in the long division. > > I don't think so. The ultimate source of the problem is that div_var > can only report a finite number of digits. In the case reported div_var was getting 70/70 = 0.9. Which is really just wrong. The only reason was because 1.0/70 isn't representable so (1.0/70) * 70 is slightly more than 1 which ... div_var really ought not have any trouble representing an integer even in its finite number of digits. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [BUGS] Error message that is a bit misleading / weird result from || null
"Frank van Vugt" <[EMAIL PROTECTED]> writes: > Sidenote: since it does solve my problem, I can now build an index based on > such an interpretation of xmin, I'm a bit anxious as to how this will > be 'fixed', if at all ;) Actually this was noticed only recently but that was precisely because it was related to some significant changes that were being made. Because of those changes 8.3 behaves markedly different in this area: postgres=# select xmin || 'x' from w limit 1; ?column? -- 1679x (1 row) -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [BUGS] BUG #3417: Foreign key constraint violation occurs unexpectedly
"David Boesch" <[EMAIL PROTECTED]> writes: > I add data to reference > > select * from reference shows as > > id | name | description > ++ > 11 | rd | road > 12 | st | street > 13 | way| way > 14 | close | close > 15 | bend | bend > 3 | vic| victoria > 4 | nsw| new south wales > 5 | qld| queensland > 6 | nt | northern territory > 7 | sa | south australia How did you add this data? Given that the ids are out of order I assume you've updated or deleted and re-inserted records a few times? That shouldn't break anything but it's possible the insert on table a doesn't see the same version of this table that you're looking at with the select. Also, just to check that there's nothing wrong with the inex, what do you get if you do: enable_seqscan = off; select * from reference where id = 7; -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [BUGS] BUG #3455: order of rows should not significant
"YaYo" <[EMAIL PROTECTED]> writes: > test=# select * from t9; > var > - > 3 > 5 > (2 rows) > > test=# update t9 set var=var+2; > ERROR: duplicate key violates unique constraint "t9_var_key" /*we get the > same conclusion*/ Sorry, Postgres does not support deferred unique constraints which is what you would need to get this to work. This is a TODO item but nobody has indicated they wish to (or know how to) do it yet. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [BUGS] BUG #3479: contraint exclusion and locks
"Tiago Jacobs" <[EMAIL PROTECTED]> writes: > oh-ow... It waits while the table (That is not used) is locked. > > Even that the final plan dont use tab_2007_07, it wait for unlock of table > for make the plan. Well one of the reasons exclusive locks are taken are to make changes to constraints. > So, if I`m running a VACUUM on specific table, all the querys on the > "master" table don't work. VACUUM doesn't take an exclusive lock. VACUUM is designed to be run regularly without interfering with full use of the table. You're not running VACUUM FULL are you? That's much more intrusive and shouldn't be needed in regular operation. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [BUGS] BUG #3479: contraint exclusion and locks
"Tiago Daniel Jacobs" <[EMAIL PROTECTED]> writes: > Tom and Gregory. U're right! The problem is > that we're using constraints for partitioned tables and by definition, > a partition never, absolutely never, can affect the entire system. > > But I think that we have nothing to do about this. There are plans to > another kind of partitioning? If yes, I would like to contribute. If > not, i`m okay. There are lots of ideas of where to go with partitioning including possibly ditching the use of constraints. But I don't think there's any settled plans yet. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [BUGS] BUG #3484: Missing pg_clog file / corrupt index
"Marc Schablewski" <[EMAIL PROTECTED]> writes: > I kept a copy of the data files in case it is needed, but I have to > check first, if I am allowed to give away that information. Some of the > data is confidential. If you just need the files containing the dammaged > table, this won't be a big problem, because it does not contain any > confidential information (as long as one data file only contains the > data of one table). The other problem is the size of the files. The > whole database is about 60GB and the files belonging to that table are > about 2.5GB. Mayby there is a way to pre-select the data you need? Perhaps. You could find the records with unreasonable values. But I don't think there's any convenient way to find the records which produce the clog error or which are missing unless they turn out to be on the same page. Out of curiosity, what do the unreasonable values look like? Earlier you said: > We narrowed it down to a few records in that table. Some records contain > unreasonable values, others produce the same message about the missing > pg_clog file when selected and some are simply missing. But they must have > existed, because there are still records in a second table referencing them. If you still have a live database with this data then if you can do SELECT ctid FROM tab WHERE ... for the records with unreasonable values that might tell you what blocks are corrupted. The value before the comma is the block number, which when multiplied by 8192 (assuming you're using 8k blocks) will tell you what file offset to look for the page. To find the file to look for the block in do: postgres=# select relfilenode from pg_class where relname = 'tablename'; relfilenode - 16384 (1 row) Note that if the file offset is over 1G then you would be looking for a file named 16384.N where N is which gigabyte chunk. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [BUGS] Oddities with ANYARRAY
"Decibel!" <[EMAIL PROTECTED]> writes: > David Fetter and I were just looking at something on IRC... > > decibel=# select most_common_vals[1] from pg_stats where > tablename='pg_depend' and attname='classid'; > ERROR: cannot subscript type anyarray because it is not an array > decibel=# select most_common_freqs[1] from pg_stats where > tablename='pg_depend' and attname='classid'; > most_common_freqs > --- > 0.566 > (1 row) > > ISTM you'd want to be able to reference an individual element of an > ANYARRAY... but this gets even more odd... And what type would the result be? But this is an odd coincidence as I was just looking at this myself to do those histogram charts I was talking about. The solution I was going to propose was to allow casting from anyarray to a normal array, then allow subscripting the normal array. I would be fine requiring the cast to be to the correct array type with a run-time error if the type doesn't match. Or it could use the VIAIO cast which would work as long as the input format matched. So you could always cast to text[] even if it was an integer[] or something else originally. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [BUGS] Oddities with ANYARRAY
"Decibel!" <[EMAIL PROTECTED]> writes: > On Jul 31, 2007, at 11:55 PM, Gregory Stark wrote: >> >> And what type would the result be? > > ANYELEMENT? I know that'd still have to be casted to something normal > eventually; do we have support for that? There isn't really any such thing. There isn't really any such thing as anyarray either, the actual arrays are normal arrays of a real data type. anyarray and anyelement are things the parser and labels things it doesn't know better. Normally that's just parameters of polymorphic functions since you can't define columns of type anyarray normally. pg_statistic is a magic exception. > I'd expected that the 'ANY' types had additional information somewhere that > told > them what the original data type actually was, but I guess that's not the > case. > Maybe it'd be worth adding? Well arrays do. That's the only reason we can output the arrays from pg_statistic. So we could cast an anyarray to an array of a specific data type. The parser would be able to make sense of (histogram_bounds::text[])[1] since it's obviously a text. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: RE : RE : [BUGS] BUG #3519: Postgres takes the wrong query plan resulting in performance issues
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes: > Have you played with enable_seqscan=off or enable_hashjoin=off? That's > not a good long term solution, but it would be interesting to see what > happens. I think this is a case where Postgres just doesn't know it can re-order near an outer join. Outer joins often can't be re-ordered and Postgres isn't a general theorem prover, it can't always figure out whether it's safe to re-order them. The structure of your query is a whole series of left outer joins, the result of which is then (inner) joined with one more table. The outer joins return a whole lot of records but the inner join is only going to match a few of them. The only hope you have of a reasonable plan here is if Postgres can figure out that it can do the inner join first so that it only has to perform the outer join on the resulting records. I think it could actually re-order the inner query to happen first in this case. But I'm not certain, it's tricky to tell. But the fact that Oracle finds a way to execute it quickly gives me some confidence that it ought to be possible since I think Oracle does get join orderings reasonably right. I'm not so sure about Informix. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: RE : RE : [BUGS] BUG #3519: Postgres takes the wrong query plan resulting in performance issues
"Tom Lane" <[EMAIL PROTECTED]> writes: > Gregory Stark <[EMAIL PROTECTED]> writes: >> The structure of your query is a whole series of left outer joins, the result >> of which is then (inner) joined with one more table. The outer joins return a >> whole lot of records but the inner join is only going to match a few of them. > > Hmmm ... actually I see 6 tables inside the join-tree and four more > loose in the FROM-clause, ten relations altogether. Which means the OP > is falling foul of from_collapse_limit, and it's not investigating every > possible join order. Try setting from_collapse_limit to more than 10. The three other loose ones are attached to a table inside outer joins though. Doesn't that prevent any possibility of them being done earlier? But the first one looks like it ought to be driving the join. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [BUGS] BUG #3520: insert causing error "invalid memory alloc request size 2147483648"
"Brodie Thiesfield" <[EMAIL PROTECTED]> writes: > (at time of final INSERT) > ERROR: XX000: invalid memory alloc request size 2147483648 > LOCATION: MemoryContextAlloc, mcxt.c:504 Some questions: what encoding was the database created in? And what locale is your server running in? The commands which would be useful to answer the above are: postgres=# \l List of databases Name| Owner | Encoding ---+---+--- postgres | stark | SQL_ASCII template0 | stark | SQL_ASCII template1 | stark | SQL_ASCII (3 rows) postgres=# show all; name |setting | description -++--- add_missing_from| off| Automatically adds missing table references to FROM clauses. ... -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [BUGS] ON DELETE SET NULL clauses do error when more than two columns are referenced to one table
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes: > I'm not sure what to do about this. We could change the order the > triggers are fired to breadth-first. If all the setnull triggers were > executed first, there would be no problem. But that seems like a pretty > big change, and I'm afraid it might have other unintended consequences. We could also check the queued triggers to see if there really is a trigger which will be invalidated by the second update and therefore needs to be rechecked (or even just change the tid of the existing queued check). I don't know what it would take to make that efficient though. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [BUGS] error while starting database
"Douglas Toltzman" <[EMAIL PROTECTED]> writes: > In any case, the server specifically calls out an error creating > "/tmp/.s.PGSQL.5432". I would start by creating a world-writable /tmp > folder. FWIW the correct permissions for /tmp on Unix machines is 1777. It should look like this (note the "t" in place of the last "x"): $ ls -ld /tmp drwxrwxrwt 8 root root 12288 Aug 18 16:30 /tmp -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [BUGS] BUG #3561: CHAR(x) blank stripped. Server or documentation bug?
"Alexander Kirpa" <[EMAIL PROTECTED]> writes: > select '['||a||']' as aa,'['||b||']' as bb from test1; > aa|bb > --+-- > [ok] | [ok] > [ bad] | [ bad ] > [ poor] | [ poor] > (3 rows) > It is easy to see that char(x) field really blank stripped. > This is bug of server or documentation? >From the documentation: > Values of type character are physically padded with spaces to the specified > width n, and are stored and displayed that way. However, the padding spaces > are treated as semantically insignificant. Trailing spaces are disregarded > when comparing two values of type character, and they will be removed when > converting a character value to one of the other string types. Note that > trailing spaces are semantically significant in character varying and text > values. Concatenation is an example of an operation where the padding spaces are treated as semantically insignificant, so they get removed before the concatenation. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [BUGS] BUG #3484: Missing pg_clog file / corrupt index
"Tom Lane" <[EMAIL PROTECTED]> writes: > Remember that the xmin/xmax fields are basically the first thing we can > check with any degree of strictness when examining a tuple. This means that > if a page is partially clobbered, but not in a way that sets off the > invalid-page-header checks, then the odds are very high that the first > detectable sign of trouble will be references to transaction numbers that > are far away from what the system is really using. I'm increasingly thinking that one of the first things I'll suggest putting into 8.4 is a per-page checksum after all. It was talked about a while back and people thought it was pointless but I think the number of reports of hardware and kernel bugs resulting in zeroed and corrupted pages has been steadily going up. If not in total than as a percentage of the total problems. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [BUGS] BUG #3484: Missing pg_clog file / corrupt index
"Tom Lane" <[EMAIL PROTECTED]> writes: > It's still pointless; a checksum does nothing to prevent data > corruption. The error report might be slightly more obvious to a novice > but it doesn't bring your data back. Well if it's a single bit error from bad memory or a torn page from having full_page_writes turned off then the resulting page could be entirely valid. The user might not find out about the corrupt data before it's had time to migrate elsewhere. Also, the sooner the corrupt data is reported the sooner the user can restore from backups and avoid further data loss. The last discussion of this feature concentrated on beingg able to detect torn page corruption with full_page_writes turned off during recovery. > Something we could possibly do now is to modify these error messages: > if the transaction number we're trying to check is obviously bogus > (beyond the current XID counter or older than the current freeze > horizon) we could report it as a corrupted XID rather than exposing > the "no such clog segment" condition. That would be clever. I take it you mean the invalid values would be those values older than the actual relfrozenxid. Certainly they should throw some sort of error instead of trying to find the transaction in the clog. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [BUGS] Memory Allocation Error
"zhuge xiao" <[EMAIL PROTECTED]> writes: > I wrote and compiled and linked a C function, add_float, which returns > float in C like following: > > PG_FUNCTION_INFO_V1(add_float); > > Datum > add_float(PG_FUNCTION_ARGS) > { > float arg = PG_GETARG_FLOAT8(0); > > PG_RETURN_FLOAT8(arg + 10); > } > > After having loaded it into database, executed the SQL command, "select > ad_float(2.90)", but one error occured. The error message is "Error: > invalid memory alloc request size 4294967290" . float8 and float aren't the same thing. "float" is a C data type which is not wide enough to hold a float8. The first line actually works by shortening it but the return doesn't work because it returns a pointer to the float and claims it's a pointer to the float8. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [BUGS] BUG #3638: UTF8 Character encoding does NOT work
"Fil Matthews" <[EMAIL PROTECTED]> writes: > In Short A Postgres UTF8 database .. PGCLIENENCODING=UTF8 ... > Insert into test values ( chr(146));; Can you explain what you expected to happen here? Did you, for example, expect the character with Unicode code point 146 to be inserted? Because the single byte 146 isn't a valid UTF8 character. In PostgreSQL 8.2 I don't think there's any function to generate an arbitrary Unicode code point. You'll have to do that on the client end and encode it in UTF8 before sending. In PostgreSQL 8.3 chr() will in fact be modified to do this. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [BUGS] PQmakeEmptyPGresult makes my application dump core?
"Adam PAPAI" <[EMAIL PROTECTED]> writes: > PostgreSQL version: (PostgreSQL) 7.4.7 Incidentally there have been 11 bug-fix releases to 7.4 since this one. Several of those cause crashes or data corruption and you would be well-advised to install 7.4.18 asap. Normally you don't need a dump/restore but there were a couple bug-fix releases in that branch which might require one depending on what locale you're using. But that's not what's causing your problem I don't think: > Usually our program dumps core several times a day, but the reason is unknown. > All I know from the core file is the lines below. > > during PQexec(db->conn, query) it dumps core. > > Should it happen due to a connection problem between the pgsql server and our > program? What could be the resolution to avoid the coredumps? > > (gdb) up > #1 0xb7df4cff in malloc () from /lib/tls/libc.so.6 > (gdb) up > #2 0xb7eed753 in PQmakeEmptyPGresult () from /usr/lib/libpq.so.3 This core dump is from the client, not the database server. That it's coming from malloc makes me think it's likely you've done something wrong with your memory allocations previously. Double-freeing a pointer, freeing a pointer which didn't come from malloc, writing past the end or beginning of the allocated memory, etc. Any bug like this can cause random core dumps in malloc or free later. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [BUGS] PQmakeEmptyPGresult makes my application dump core?
"Tom Lane" <[EMAIL PROTECTED]> writes: > Adam PAPAI <[EMAIL PROTECTED]> writes: >> Usually our program dumps core several times a day, but the reason is >> unknown. All I know from the core file is the lines below. > > That failure is inside malloc, not PQmakeEmptyPQresult, and the odds > are extremely high that the reason is some part of your program > clobbering memory that doesn't belong to it (and thereby damaging > malloc's internal data structures). Try running your program under > Electric Fence or some other debugging malloc package. Incidentally glic comes with such a debugging malloc which you can get by defining the environment variable MALLOC_CHECK_ before starting your program. In bash you can do this by running your program with something like: MALLOC_CHECK_=3 ./myprogram -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [BUGS] Yet another problem with ILIKE and UTF-8
"Gergely Bor" <[EMAIL PROTECTED]> writes: > I have a nasty-looking problem case. Shortly described as follows: > > INSERT INTO mytable (id, value) VALUES (4242, 'úabcdú'); > SELECT id FROM mytable WHERE value ILIKE '%abc%'; > > In environment A, the row of the ID just inserted is returned > correctly, but in environment B no rows are found. Uh! (Sadly > environment B is the productive environment... :/) > > Notice the UTF-8 chars in the inserted sting and the _lack_ of UTF-8 > chars in the searched string. > > Environment A: Win2000, psql 8.2.4, lc_* is C, all encondings (client, > server, DB) are UTF-8. > Environment B: Debian lenny/sid ^[1], kernel version 2.6.20.1, glibc > 2.6.1-5, psql 8.2.5, lc_* is hu_HU, all encondings (client, server, > DB) are UTF-8. I'm not sure this is the right answer but what happens if you initdb a database on the Debian box with lc_* set to hu_HU.UTF-8 ? (You may have to add it to /etc/locale.gen and rerun locale-gen) Also, what does lower('úabcdú') return in that locale? -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [BUGS] Yet another problem with ILIKE and UTF-8
"Gergely Bor" <[EMAIL PROTECTED]> writes: > We'll google the initdb stuff and try it ASAP. > > What I've tried is LOWER and UPPER, and they seem to return trash for > Hungarian UTF-8 characters, but they handle ASCII well. (H... > maybe ILIKE requires LOWER and UPPER to work? Would not be > illogical...) It does. I think it works by just downcasing both strings. It's possible to do better but tricky. I think 8.3 has an optimization for that for single-byte encodings but it had to be disabled for utf-8 in the end. If it's returning trash for those characters then it's not prepared to handle UTF-8 data. You have to use an encoding compatible with your locale and vice-versa. If you want to store UTF-8 data I suggest you . add hu_HU.UTF-8 to /etc/locale.gen, . rerun /usr/sbin/locale-gen . pg_dump your database . re-initdb with the locale set to hu_HU.UTF-8 . pg_restore your data. Unfortunately that'll take quite a while and involve down-time. You should probably do this in a second directory aside from your existing database just in case you've created any invalidly encoded utf-8 strings. You'll have to fix them before restoring. (Actually I don't recall which version got strict about that.) -- Gregory Stark EnterpriseDB http://www.enterprisedb.com ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [BUGS] I incrementally altered my database into a state where backups couldn't be restored.
"Adam Tomjack" <[EMAIL PROTECTED]> writes: > -- This will succeed: > SELECT * FROM v_a_b_c; > > -- But, this will fail with > -- ERROR: common column name "aid" appears more than once in left table > -- SQL state: 42702 > SELECT * > FROM b > JOIN c USING (bid) > JOIN a USING (aid) > > -- It is now possible to make a backup with pg_dump that cannot be fully > restored. > -- When restoring, this will fail: > CREATE OR REPLACE VIEW v_a_b_c AS > SELECT * > FROM b > JOIN c USING (bid) > JOIN a USING (aid) > ; That does really suck. But I'm not sure what we can do about it. There's no SQL which is entirely equivalent to the resulting view. I think the closest you could get would be something like SELECT * FROM (SELECT bid,cid FROM b) AS b JOIN c USING (bid) JOIN a USING (aid) But it's not clear to me that we could generate that easily. AFAIK the information about which columns were in the table at the time the view was defined isn't readily available. And checking for conflicts might be hard since they could happen much higher up in the join tree. We can't just output the USING as an ON clause which would let pg_dump specify precisely which column to join against because ON doesn't merge the two columns. The resulting records would have two bid columns. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's On-Demand Production Tuning ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [BUGS] I incrementally altered my database into a state where backups couldn't be restored.
"Tom Lane" <[EMAIL PROTECTED]> writes: > Gregory Stark <[EMAIL PROTECTED]> writes: >> That does really suck. But I'm not sure what we can do about it. There's no >> SQL which is entirely equivalent to the resulting view. > > If we were to do anything about it, I think it would have to be to > forbid the original ALTER. But I don't see any good way to detect > the situation, either. Hm, we could call pg_get_viewdef() on dependent views and then verify that the resulting view compiles without error. For bonus points we could verify that it produces the same parsed view and throw warning if it doesn't. That would give us a warning if you have a view with natural joins which changed meanings. I'm not sure it's worth that much extra work on every ALTER TABLE though. I suppose pg_dump() could have an option to check these things itself but by that point it's too late to do anything about it. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's Slony Replication support! ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [BUGS] BUG #3737: lower/upper fails to match extended chars in LATIN1
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes: > Jonas Forsman wrote: > >> Try: >> select * from table where lower(address) like '%Ã¥%' >> >> This select fails to find addresses including capital Ã… and similars in >> LATIN1 (like Ã…, Ä, Ö). Isn't à an upper-case letter? In which case lower(address) will never match it since by definition lower(address) will only contain characters which are lower case. That is if address contains a "Ã" then lower(address) will contain an "ã" instead which won't match the "Ã" in the pattern. I think you either need to put a lower() on both sides of the LIKE or use ILIKE. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training! ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [BUGS] Planner problems in 8.2.4 and 8.2.5
"Tom Lane" <[EMAIL PROTECTED]> writes: > It's been clear for quite awhile that a stats target of 10 is often > too low, but no one has done the legwork to establish what a more > reasonable tradeoff point would be. Any ideas on what measurements would be interesting for this? -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's On-Demand Production Tuning ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [BUGS] BUG #3790: pg_restore error canceling statement due to user request
"Mike C." <[EMAIL PROTECTED]> writes: > I don't know if this is either a wording change, or a more serious bug, but > when I do a pg_restore (from a 8.1.9 setup) to a fresh 8.3beta3 created > database (createdb command only), I repeatedly see: > > ERROR: canceling statement due to user request > CONTEXT: automatic analyze of table "dbs.public.entity_event" This is intentional, though perhaps the wording is confusing. What impression does the wording give you? Does it make you think something has gone wrong? -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's On-Demand Production Tuning ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [BUGS] BUG #3791: Bug in aritmethic calculations
"Michailas" <[EMAIL PROTECTED]> writes: > postgres=# SELECT abs(300 - 200)/200*100 as bad, abs(300-200)*100/200 as > good; > > bad | good > -+-- >0 | 50 > (1 row) That's how integer arithmetic works. Try just 1/2 and you'll see you get 0. But if you do floating point arithmetic with 1.0/2 you'll get 0.5. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services! ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [BUGS] BUG #3790: pg_restore error canceling statement due touser request
"Alvaro Herrera" <[EMAIL PROTECTED]> writes: > Bruce Momjian escribió: >> Magnus Hagander wrote: >> > On Fri, Nov 30, 2007 at 10:13:53AM +, Gregory Stark wrote: >> > > >> > > "Mike C." <[EMAIL PROTECTED]> writes: >> > > >> > > > ERROR: canceling statement due to user request >> > > > CONTEXT: automatic analyze of table "dbs.public.entity_event" >> > > >> > > This is intentional, though perhaps the wording is confusing. What >> > > impression >> > > does the wording give you? Does it make you think something has gone >> > > wrong? >> > >> > The fact that it says ERROR kind of hints that something has gone wrong, >> > no? (so yes, I agree the wording isn't very good) >> >> What is causing this? Statement_timeout? I see different wording for >> that behavior. Is the postmaster getting a signal from somewhere on the >> system? > > It's the new autovacuum cancel stuff. I guess we should capture this error with a PG_TRY and silently abort instead. Just a NOTICE or INFO should be sufficient. Other errors should of course be rethrown. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's On-Demand Production Tuning ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [BUGS] BUG #3790: pg_restore error canceling statement due touser request
"Tom Lane" <[EMAIL PROTECTED]> writes: > This falls in the category of "destabilizing the code for purely > cosmetic reasons", and would be a foolish change to make at RC1 time. I suppose. Expect to have more bug reports like this one then though. > We could change the text of the ERROR message reasonably easily, > but changing the basic transaction abort method is right out. I fear having a message saying "ERROR This is not an error" is going to get us laughed at. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services! ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [BUGS] BUG #3809: SSL "unsafe" private key permissions bug
"Simon Arlott" <[EMAIL PROTECTED]> writes: > On 08/12/07 15:31, Tom Lane wrote: >> "Simon Arlott" <[EMAIL PROTECTED]> writes: >>> FATAL: unsafe permissions on private key file "server.key" >>> DETAIL: File must be owned by the database user and must have no >>> permissions for "group" or "other". >> >>> It should be possible to disable this check in the configuration, so those >>> of us capable of deciding what's unsafe can do so. >> >> You haven't given any reason to think that you are smarter than this >> check. > > The directory containing the SSL keys has appropriate permissions, I > shouldn't have to make a separate copy of them for every application. Another case where it's important to be able to disable this check is when you're using a file system which doesn't use the unix bits for permission checks either at all or in the traditional way. So for example if the key directory lay on an FAT filesystem which doesn't have unix bit per file the only way to satisfy the check would be to mount the filesystem with the option to make every file in the filesystem have those bits. Storing your keys on a usb stick (which usually use fat filesystems) isn't really such a crazy idea either. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's PostGIS support! ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [BUGS] BUG #3809: SSL "unsafe" private key permissions bug
"Tom Lane" <[EMAIL PROTECTED]> writes: > Alvaro Herrera <[EMAIL PROTECTED]> writes: >> Gregory Stark wrote: >>> Storing your keys on a usb stick (which usually use fat filesystems) >>> isn't really such a crazy idea either. > >> Storing a server SSL key on a USB stick is not crazy? I don't follow. >> What use case do you have for that? Sure, private keys are often more sensitive than the data they protect. You might want them not to be included in backups or to ever live on spinning disks that you'll have to wipe in case of a disk crash. A stick can be moved to a backup server when failing over. Once upon a time people used to use floppies for this purpose (which also use fat filesystems incidentally). > It's worth pointing out also that we require server.key to be directly > in the $PGDATA directory, which means that any filesystem limitations on > its permissions info are going to apply to the $PGDATA directory itself. > > Curiously enough, the access-permission checks on both $PGDATA and > $PGDATA/server.key are diked out in WIN32 builds, but I consider that > a bug we should fix, not a feature to be extended. Another filesystem where people get bit by tools which assume they can look directly at unix permission bits instead of using access() and impose fascist rules on what they expect to see there is AFS. The unix bits are mostly meaningless on AFS. So you get users complaining that they're following the instructions on setting permission and the occasional tool is still complaining about problems. I think looking at the unix permission bits and imposing policy is usually a bad idea but in those few cases where it makes any sense there should always be a switch to disable it. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training! ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [BUGS] BUG #3811: Getting multiple values from a sequence generator
"Michael Glaesemann" <[EMAIL PROTECTED]> writes: > On Dec 9, 2007, at 9:24 , Adriaan van Os wrote: > >> 3. nextval doesn't have an optional "increase" parameter (the increase is >> always one). > > Not true. Please read the documentation at > > http://www.postgresql.org/docs/8.2/static/sql-createsequence.html > http://www.postgresql.org/docs/8.2/static/sql-altersequence.html > > In particular, the INCREMENT BY option (and perhaps CACHE). I think he's looking for a an option to increase a sequence which normally increments by 1 by a larger number for a single transaction. You would want to do this if you were doing an exceptional bulk operation. If you set the "increment by" then if another transaction happens to come along while you've modified it you'll waste N sequence numbers. Seems like a reasonable feature request. But I do wonder if the OP has actually tried just incrementing it one by one for each of the records being inserted. Incrementing sequences is pretty damn quick and I doubt it would actually be a bottleneck. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's PostGIS support! ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [BUGS] BUG #3811: Getting multiple values from a sequence generator
"Adriaan van Os" <[EMAIL PROTECTED]> writes: > Right, I want to use it with a bulk operation, say importing a million records > with COPY. Calling nextval one million times looks to me like an enormous > waste > of resources. Suppose, you are on an ADSL line: it will cost one million times > the ping time of the ADSL line (say 10 milliseconds per call). Well OK, one > could write a server function that does this, but then the one million result > values must be transported back to the client, because they are not guaranteed > to be contiguous. Unneeded complexity compared to a simple nextval increment > parameter. The usual way to use nextval() is to use it on the server as an expression in an INSERT or DEFAULT. If you're using COPY and don't have a column default set up then, hm, I guess you're kind of stuck. That would make a good use case for a one-time nextval(increment) or something like that. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's PostGIS support! ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [BUGS] BUG #3811: Getting multiple values from a sequence generator
"NikhilS" <[EMAIL PROTECTED]> writes: > Coincidently, I very briefly discussed (offline) about supporting > expressions while doing loads using COPY FROM with Heikki a while back. From > the above mail exchanges, it does appear that adding this kind of > functionality will be useful while doing bulk imports into tables using > COPY. > > Heikki's initial suggestion was as follows: > > COPY FROM USING > > Where query could be any SELECT query, executed once for row using the > values from the input data file. For example: Another direction to head would be to take away COPY's special logic to insert into tables and instead have something like: COPY FROM USING where is an *INSERT* statement. Or for that matter a DELETE or an UPDATE. It would prepare the query then execute it once per line read from the streamed copy data. It would be much more general but perhaps be harder to optimize the our current COPY can be optimized. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's PostGIS support! ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [BUGS] BUG #3816: Timezone bug
"Geert Bijloos" <[EMAIL PROTECTED]> writes: > The following bug has been logged online: > PostgreSQL version: 8.1.3 and 8.2.4 > Description:Timezone bug Without actually checking whether the results are wrong I'll note that several of the changes in the bug-fix releases post 8.1.3 and 8.2.4 are timezone updates. (And since 8.1.3 there were several crashing and data eating bugs fixed in those bug-fix releases) -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's PostGIS support! ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [BUGS] BUG #3824: Query hangs when result set empty using sort and limit
"Tom Lane" <[EMAIL PROTECTED]> writes: > "Gregor Roessle" <[EMAIL PROTECTED]> writes: >> The Server hangs when executing a query like this: > >> select * from messwerte where pknr = 28315178 and isproducing = 't' order by >> timestamp desc limit 1; > > I seriously doubt that it's hung. Takes a long time, maybe so. Is it > trying to use an indexscan on timestamp to substitute for a sort step? Surely it wouldn't make 32ms become long enough to qualify as "hung". Unless perhaps it's otherwise using an index on pknr and/or isproducing and with the "ORDER BY timestamp desc LIMIT 1" it's using a different large index. I think we have to see EXPLAIN ANALYZE select * from messwerte where pknr = 28315178 and isproducing = 't' order by timestamp; EXPLAIN ANALYZE select * from messwerte where pknr = 28315178 and isproducing = 't' limit 1; EXPLAIN select * from messwerte where pknr = 28315178 and isproducing = 't' order by timestamp limit 1; -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's On-Demand Production Tuning ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [BUGS] Bug (#3484) - Invalid page header again
"Zdenek Kotala" <[EMAIL PROTECTED]> writes: > I got dump of affected two blocks from Alex and it seems that both blocks were > overwritten together with some 128bytes length structure (there some pattern) > and complete damaged size is 9728bytes (first block is overwritten completely > and second one only at the beginning), but another buffer from another > relation > could be overwritten too. I don't understand this 9728 bytes. Postgres blocks are 8192 bytes. Are you saying one whole block is trashed and then part, but not all, of the next block? What's the block size of the ZFS filesystem? And what exactly does the trash data look like? -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's On-Demand Production Tuning ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [BUGS] BUG #3826: Very Slow Execution of examplequery (wrong plan?)
"Alexander Steffens" <[EMAIL PROTECTED]> writes: > Hello, I have found an Query (with data) > that need to execute on MS-SQL 2005 < 9sec, > on Postgresql I will stop it now after more than 30 mins: > insert into t1 > select distinct (t1.a + t2.a)*2 > from t1,t2 > where not exists ( > select * from t1 tt where tt.a = (t1.a + t2.a)*2 > ) What plan does MS-SQL use to complete this? I wonder whether it's producing the same answer Postgres is. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's PostGIS support! ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [BUGS] BUG #3826: Very Slow Execution of examplequery (wrong plan?)
"Tom Lane" <[EMAIL PROTECTED]> writes: > It's possible that MS-SQL is doing something analogous to the > hashed-subplan approach (hopefully with suitable tweaking for the NULL > case) but even then it's hard to see how it could take only 9 sec. > The cartesian product is too big. Fwiw it seems MS-SQL is doing something funny. The three plans posted in the screenshots for the "small", "mediu", and "large" cases are: Top Sort (Distinct Sort) Nested Loop (Left Anti Semi Join) Nested Loop Table Scan Table Scan Top Table Scan [cut off by the screenshot] Match Hash Match (Right Anti Semi Join) Table Scan Nested Loop Table Scan Table Scan Hash Match (Right Anti Semi Join) Parallelism (Repartition Streams) Table Scan Parallelism (Repartition Streams) Nested Loop (Inner Join) Table Scan Table Spool (Lazy Spool) Table Scan Postgres is doing something equivalent to the first plan. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training! ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [BUGS] Duplicate values found when reindexing unique index
"Tom Lane" <[EMAIL PROTECTED]> writes: >> Here's the system column data you requested. > >> id | ctid | xmin | xmax | cmin | cmax >> ---+--+--+--+--+-- >> 151341072 | (1508573,11) |2 |0 | 19 |0 >> 151341072 | (1818219,11) |2 |0 | 19 |0 >> (2 rows) > > I wonder whether it's just a coincidence that these have the same offset > number... I can't imagine any Postgres bug which would depend on the offsets being the same. But what I could imagine is filesystem corruption which copied the block to someplace else in the table or possibly has even mapped the same block into two different places in the table. Can you unmount the filesystem and run "fsck -v -n" on it? Is this the only duplicate record in the table? Are there any other records on either of these blocks? To answer the latter question I found a handy trick of converting the tid to a "point" so I could refer to the block or offset. In 8.3 this looks like: select ctid from foo where (ctid::text::point)[0] = 0; But in 8.2 iirc you had to call the tid output function explicitly because there was no cast to text. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services! ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [BUGS] why provide cross type arithmetic operators
"ykhuang" <[EMAIL PROTECTED]> writes: > there are many cross type arithmetic operators, like int2 + int4, int8 + > int4, I think these can be deleted. Here are the reasons, after deleted, > int2 + int4 will choose the operator int4 + int4, int8 + int4 choose int8 + > int8, Is that ok? Thanks. Then the system wouldn't be able to use indexes as flexibly. For example if you have an index on an int2 column and perform a query with a restriction like "int2col = 1" the system wouldn't find a matching =(int2,int4) operator and would instead have to do a sequential scan casting the int2 column to an int4 when checking each row. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training! ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [BUGS] Bugs
"Abhay Kumar" <[EMAIL PROTECTED]> writes: > Hi, > I am installing the Postgis 2.2.1 on PostgreSQL. I think you would be better off speaking to this mailing list: http://postgis.refractions.net/mailman/listinfo/postgis-users -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services! ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [BUGS] BUG #3921: CREATE TABLE / INCLUDING INDEXES fails with permission denied
"Tom Lane" <[EMAIL PROTECTED]> writes: > 1. DefineIndex() treats an omitted tablespace clause differently from > explicitly specifying the tablespace that is the database's default: > if you explicitly specify the space then you must have permissions on > it, otherwise you don't need any. (This is the same behavior as in > DefineRelation incidentally.) Maybe this isn't such a hot idea, and > we should treat the two cases interchangeably? I always thought that was absolutely bizarre. Security should never depend on *how* you refer to an object. You should either have access to the object or not regardless of how you refer to it. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's Slony Replication support! ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [BUGS] Adding new columns - bug
"Michael Andreasen" <[EMAIL PROTECTED]> writes: > There is nothing sepcial about the product table other than it has a couple > of trigger rules to post to other tables on updates of some fields (audit > log), nothing to cause this behavior. I have only noticed this happening of > this table, so I am guessing it's related to it having rules, since it's one > of the few that does. Perhaps you should post the triggers or rules you have on this table. SQL generally expands things like "select *" when you define things so it may well have stored a definition which is out of date and causes this problem. 8.3 may actually fix it for you because if they're plpgsql functions then they will replan any cached query plans. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services! ---(end of broadcast)--- TIP 7: You can help support the PostgreSQL project by donating at http://www.postgresql.org/about/donate
Re: [BUGS] BUG #3954: Duplicate Primary Keys
[Sorry I think I may have sent this out already with a typo in the version numbers. I may have cancelled in time but I'm not sure. This one is corrected] "Tapin Agarwal" <[EMAIL PROTECTED]> writes: > The following bug has been logged online: > > Bug reference: 3954 > Logged by: Tapin Agarwal > Email address: [EMAIL PROTECTED] > PostgreSQL version: 8.1.2 > Operating system: Suse > Description:Duplicate Primary Keys > Details: > > Hi, > > We are using postgres-8.1.2 and observed the degradation in postgres > performance. Select query was taking very long time to complete. While on a > freshly installed postgres, with same number of records select query was > getting completed within seconds. The most likely reason was having lots of dead tuples in the table due to vacuum not being run frequently enough. autovacuum has been improved dramatically in both 8.2 and 8.3 which would help avoid the need for additional manual vacuums. > Our tables has indexes created on them and were running auto vacuum. Also we > are not using any table inheritance. When tried to perform re-indexing, > found that there were duplicate primary key entries in few of the tables. > This problem has occurred several times on our database setup and every time > different set of tables get affected. > > Can you please let us know if this issue is already resolved in any of the > postgres releases. If it is then please let us know the release number. It does sound familiar though I don't find a specific changelog entry which describes that symptom. The version you're on is over 2 years old and there have been 9 subsequent bug-fix and security releases for that version. I would suggest you try to go to 8.1.11 immediately and plan to upgrade to 8.2 or 8.3 when possible to help with your vacuuming problems. Generally 8.1.11 is compatible with 8.1.x without a dump/restore or any semantic changes. However there were a couple gotchas in some of the intervening releases. Notably if you have any invalid UTF8 data in a UTF8-encoded database 8.1.11 will refuse to load it as it can be a security issue. The release notes for the 8.1.x bug-fix and security releases are at: http://www.postgresql.org/docs/8.1/static/release.html#RELEASE-8-1-11 Click "Next" or scroll up to the list to get the previous releases. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services! ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [BUGS] BUG #3965: UNIQUE constraint fails on long column values
"Juho Saarikko" <[EMAIL PROTECTED]> writes: > It is propably impossible to fix this in a simple way, since it is an > inherent result of the underlying storage specification rather than a mere > programming error, so the documentation needs to be updated to warn about > this. Point taken. > I suggest implementing unique hash indexes and automatically creating one > (and turning the b-tree index into a non-unique one) when a large value is > inserted to fix this. Alternatively, fix b-trees so they can handle large > values; however, a hash index should be far more efficient for this specific > case, since the size of a hash is independent of pre-hash data size. With expression indexes you can do this yourself with something like CREATE INDEX pk_hash on tab ((hashtext(safe))) We can't do this automatically since it wouldn't enforce the UNIQUE constraint. Conceivably we could actually do something about that but there's nothing like that now. We have hash indexes too but in practice a btree over a hash seems to work just as well or better. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's PostGIS support! ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [BUGS] BUG #3965: UNIQUE constraint fails on long column values
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes: > As others have pointed out, CREATE UNIQUE INDEX i ON ((md5(column)) is a > pretty > good work-around. Unless you need cryptographic security I would not suggest using MD5. MD5 is intentionally designed to take a substantial amount of CPU resources to calculate. Postgres's internal hash method is exposed for most data types as hashtext() hashfloat8(), hashint4(), etc. These functions were chosen for their lightweight design. Cryptographic security is important only if you're concerned with people being able to intentionally create collisions. In this scenario that's probably not a top threat. Conceivably someone could create a denial-of-service attack slowing down your server by causing your indexes to become unbalanced. But it would be fairly challenging to engineer. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services! ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [BUGS] BUG #3965: UNIQUE constraint fails on long column values
"Heikki Linnakangas" <[EMAIL PROTECTED]> writes: "Heikki Linnakangas" <[EMAIL PROTECTED]> writes: > Gregory Stark wrote: >> "Heikki Linnakangas" <[EMAIL PROTECTED]> writes: >> >>> As others have pointed out, CREATE UNIQUE INDEX i ON ((md5(column)) is a >>> pretty >>> good work-around. >> >> Unless you need cryptographic security I would not suggest using MD5. MD5 is >> intentionally designed to take a substantial amount of CPU resources to >> calculate. > > Return type of hash* functions is just 32 bits. I wonder if that's wide enough > to avoid accidental collisions? Depends on the application of course... Oh, I missed that you were suggesting a UNIQUE index. That seems unsafe to me even for md5 or its ilk. But that would depend on the application too. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services! ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [BUGS] BUG #3965: UNIQUE constraint fails on long column values
"Michael Fuhr" <[EMAIL PROTECTED]> writes: > On Wed, Feb 20, 2008 at 12:21:03PM +0100, Francisco Olarte Sanz wrote: >> On Wednesday 20 February 2008, Gregory Stark wrote: >> >> > Unless you need cryptographic security I would not suggest using MD5. MD5 >> > is intentionally designed to take a substantial amount of CPU resources to >> > calculate. >> >> I thought it was the exact opposite, quoting from RFC1321: Hm, ok, strike "intentionally". Nonetheless MD5 is quite computationally intensive compared to quick hashes like the ones Postgres uses or CRC hashes (which we ought to have functions for, but we don't seem to). SHA-1 is even more computationally intensive and SHA-256 far more again. For purposes of speeding up access a simple hash with the possibility of a few collisions is normally fine. You add an additional clause to recheck the original constraint. For purposes of enforcing uniqueness I would be leery of depending on any hash. The decision would depend on the application and the consequences of a spurious error. The chances are slim but it's not impossible. > And if you *do* need cryptographic security then don't use MD5, and > consider using SHA-256 instead of SHA-1. See RFC 4270 for discussion. > > ftp://ftp.rfc-editor.org/in-notes/rfc4270.txt One of the factors in deciding between cryptographic algorithms is the longevity required. MD5 has not been cracked but some suspicious weaknesses have been discovered which might lead to a crack sometime in the future where an attacker might be able to construct new plaintexts with identical hashes. If you just need something secure for session keys then that's not going to be a concern. If you need to distinguish user-provided documents from other user-provided documents you're keeping for decades then it is. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's 24x7 Postgres support! ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [BUGS] BUG #4069: Wrong tip
"Dave Page" <[EMAIL PROTECTED]> writes: >> Wrong tip at the beginning of the administrator: "The answer to the >> question of Life, the Universe and Everything is 42." The answer is "God >> created them." > > I'm not so sure - can you define exactly what 'the question' is? That > will probably help figure out if the answer is correct. IIRC the question is "What is six times eight" -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's Slony Replication support! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] The problem with FULL JOIN
<[EMAIL PROTECTED]> writes: > PROBLEM: > How to FULL JOIN groups=1 from table 'a' with groups=2 from table 'b' > and exclude original NULL groups, not those which the FULL JOIN produces? ... SELECT * FROM (select * from a where a.groups = 1) AS a FULL OUTER JOIN (select * from b where b.groups = 2) AS b ON (a.num1=b.num1) -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] BUG #4096: PG 8.3.1. confused about remaining disk space
"Zdenek Kotala" <[EMAIL PROTECTED]> writes: > [EMAIL PROTECTED] napsal(a): >> relation 1663/19693/634162: No space left on device ... >> Disk saturation, you think ? No at all, as in the meantime : >> >> FilesystemSize Used Avail Use% Mounted on >> /dev/sda3 9.9G 8.5G 874M 91% / >> udev 125M 88K 125M 1% /dev >> /dev/sda1 54M 15M 37M 29% /boot >> /dev/sda4 8.3G 7.4G 966M 89% /home >> /dev/sdb1 38G 31G 7.2G 81% /var/lib/pgsql >> >> As you can see, there is some 200 times the active DB size that remains >> available. > > User quota or super user space reservation could take affect. Could you create > file as a postgres user on pgsql filesystem? Also check "df -i" -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] BUG #4096: PG 8.3.1. confused about remaining disk space
Sorry, this is the URL I meant to send: http://www.mail-archive.com/[EMAIL PROTECTED]/msg19905.html -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's RemoteDBA services! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] BUG #4096: PG 8.3.1. confused about remaining disk space
What version of Linux and reiserfs? See also: http://osdir.com/ml/file-systems.reiserfs.general/2004-01/msg00116.html -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's 24x7 Postgres support! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] BUG #4124: Error message: "You local administrators group contains 'Service users' this is a common configurat
"Blake Lovely" <[EMAIL PROTECTED]> writes: > PostgreSQL version: 8.2 > Operating system: Vista home premium 8.2 was not supported on Vista as it came out long before Vista did. Try 8.3. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's 24x7 Postgres support! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] BUG #4190: sparc64 test suite fails
"Tom Lane" <[EMAIL PROTECTED]> writes: > IMHO, what you have here is a compiler bug, or at least the gcc boys are > going to need to provide a damn good excuse why it's not. You don't, perhaps have any of -ffast-math or these two suboptions which -ffast-math enables turned on? `-funsafe-math-optimizations' Allow optimizations for floating-point arithmetic that (a) assume that arguments and results are valid and (b) may violate IEEE or ANSI standards. When used at link-time, it may include libraries or startup files that change the default FPU control word or other similar optimizations. This option should never be turned on by any `-O' option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO rules/specifications for math functions. The default is `-fno-unsafe-math-optimizations'. `-fno-trapping-math' Compile code assuming that floating-point operations cannot generate user-visible traps. These traps include division by zero, overflow, underflow, inexact result and invalid operation. This option implies `-fno-signaling-nans'. Setting this option may allow faster code if one relies on "non-stop" IEEE arithmetic, for example. This option should never be turned on by any `-O' option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO rules/specifications for math functions. The default is `-ftrapping-math'. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's Slony Replication support! -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs
Re: [BUGS] BUG #4198: The bugreport form has an encoding problem
"Tom Lane" <[EMAIL PROTECTED]> writes: > Alvaro Herrera <[EMAIL PROTECTED]> writes: >> It does have a problem, because the email sent does not contain a >> charset header. It does look OK for me too -- as long as I use a UTF8 >> terminal. The fix is easy, just add this line to the message headers: > >> Content-Type: text/plain; charset=utf-8 > > What happens if someone pastes text into the form that is *not* in UTF-8? http form submissions include a content-type header too. I don't remember if PHP (or is it mod_python?) automatically converts incoming strings to the server encoding or if you're expected to do that yourself? Or if it isn't being done for us we could just put that encoding in the email headers. -- Gregory Stark EnterpriseDB http://www.enterprisedb.com Ask me about EnterpriseDB's On-Demand Production Tuning -- Sent via pgsql-bugs mailing list (pgsql-bugs@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-bugs