Re: [BUGS] auto type casting bug
Matthew Manuel wrote:
> Hello,
>
> I found a strange error which occurs when I run a query from PHP4: if a
> column consists of an arbitrary text literal for all rows of a sub-select,
> and you attempt to sort by that column, PostgreSQL cannot determine the
> type of that column in order to do the sort.

I think it's actually a little more subtle than that, because of what you're saying about psql.

> - If you UNION more than one of these rows, the UNION seems to determine
>   the data type for the column, so the sort works.
> - If the values are integers, the error does not occur.

Yep - the UNION will coerce the "unknown" type, or give an error.

> - If each arbitrary text value is explicitly cast, the error does not
>   occur. I have not tested other data types.

That's correct behaviour.

> - This error does NOT occur when the query is run in the psql command line.

Now that puzzles me, since I get the error on the version I'm currently logged into here (7.4.x). Could you just test it again?

> query1:
>
> SELECT a.col1, a.col2
> FROM ( SELECT 'test row' AS col1, 1 AS col2 ) AS a
> ORDER BY a.col1
>
> Running...
>
> *Warning*: pg_query(): Query failed: ERROR: failed to find conversion
> function from "unknown" to text in */home3/manuel.ca/test/pgtest.php*
> on line *14*

I don't suppose you could be running a locale of "C" in psql and something else via PHP? I don't see how that could make the error go away, but it's the only thing I can think of.

The heart of the problem is that the type is actually "unknown" and not text. For example, if I had the values '3 Jan 2005' and ' 3 Oct 2004', how should they be sorted? Well, it depends on whether they are text or dates. How does PG know which I want? It doesn't. Actually, if we decide they are text then it depends on locale too, since the "C" locale will do a char-by-char sort whereas others will ignore the leading space in the second example.

You can get similar problems with numeric literals if you want floating-point or int8 instead of int4. PostgreSQL is flexible about its types, but that does mean you sometimes need to be more precise in defining what you mean.

HTH
--
Richard Huxton
Archonet Ltd
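A minimal sketch of the explicit-cast workaround described above, reusing the query from the report: casting the literal gives the column a definite type, so the ORDER BY can choose a sort operator.

    -- cast the literal so the column is text rather than "unknown"
    SELECT a.col1, a.col2
    FROM ( SELECT 'test row'::text AS col1, 1 AS col2 ) AS a
    ORDER BY a.col1;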
[BUGS] BUG #1849: Is PgOleDb still alive?
The following bug has been logged online:

Bug reference:      1849
Logged by:          Bernard Henry Voynet
Email address:      [EMAIL PROTECTED]
PostgreSQL version: 8.0.3
Operating system:   Windows XP
Description:        Is PgOleDb still alive?
Details:

Since this project has had no new release since March, even though the bugs that have been reported make it unusable, I was wondering whether it is still alive or not, and whether it is still part of PostgreSQL?
[BUGS] Serialization errors on single threaded request stream
I have an odd one here. I was unable to find it with a search of the mailing lists. I've spent a few hours trying to create a simple test case, but so far these simple cases aren't showing the problem. I want to make sure this isn't a known problem before investing more time trying to come up with a test case sufficiently complex to expose it.

The problem is this: a single thread is submitting database updates through a middle tier which has a pool of connections. There are no guarantees of which connection will be used for any request. Each request is committed as its own database transaction before the middle tier responds to the requester, which then immediately submits the next request. Nothing else is hitting the database. We are getting serialization errors.

If we add a 1 ms delay on the client side between requests to the middle tier, the frequency of these errors drops by about two orders of magnitude. With a 100 ms delay, we haven't seen any.

The pattern of activity which causes the problem involves a single database transaction with inserts and updates to many tables, including one with a potentially large blob, followed by an update to a numeric column in a row which tracks progress. The serialization errors are happening on this final update. My simple test cases use a single thread on two JDBC connections emulating just this final update, and the problem does not show up.

We have the same behavior on 8.0.3 and the development snapshot from yesterday. (I haven't gotten a test run from today's beta release yet -- I need to coordinate the test with someone else who's not here right now. I'll follow up if the beta release changes this behavior.)

The server is SuSE 9.3 with dual Xeons and xfs on a SAN. The client and middle tier for these tests have been on Windows XP. The requests are going through JDBC.

Does this behavior sound familiar to anyone?

-Kevin
Re: [BUGS] Serialization errors on single threaded request stream
"Kevin Grittner" <[EMAIL PROTECTED]> writes: > The problem is this: a single thread is submitting database updates through > a middle tier which has a pool of connections. There are no guarantees of > which connection will be used for any request. Each request is commited as > its own database transaction before the middle tier responds to the > requester, which then immediately submits the next request. Nothing else it > hitting the database. We are getting serialization errors. Hm. Are you sure your middle tier is actually waiting for the commit to come back before it claims the transaction is done? regards, tom lane ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [BUGS] Serialization errors on single threaded request
I am absolutely sure that the database transaction is always terminated by invoking commit or rollback, and waiting for that method to return, before the middle tier returns control to the client.

A couple of other potentially relevant facts are that these connections are doing all this work at the SERIALIZABLE transaction isolation level, and that the updates are done through ResultSet objects from prepared statements which SELECT * on the appropriate rows.

I read through the documentation of the error message, and of the way PostgreSQL handles the isolation levels. This is behaving as though the time the PostgreSQL server assigns to the commit is sometimes later than the time of the subsequent transaction start, so I totally understand why you would ask the question you did. It is also why I checked this very carefully before posting.

What happens if the timestamp of the commit is an exact match for the timestamp of the next transaction start? What is the resolution of the time sampling? It may be possible that we could submit several of these, on different connections, within the space of a millisecond. Could that be a problem? (It doesn't appear to be in my simple test cases.)

I don't trust the clock on the Windows client, but I wouldn't think that has anything to do with the issue.

-Kevin

>>> Tom Lane <[EMAIL PROTECTED]> 08/26/05 11:10 AM >>>
"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> The problem is this: a single thread is submitting database updates
> through a middle tier which has a pool of connections. There are no
> guarantees of which connection will be used for any request. Each
> request is committed as its own database transaction before the middle
> tier responds to the requester, which then immediately submits the next
> request. Nothing else is hitting the database. We are getting
> serialization errors.

Hm. Are you sure your middle tier is actually waiting for the commit to come back before it claims the transaction is done?

			regards, tom lane
Re: [BUGS] Serialization errors on single threaded request stream
"Kevin Grittner" <[EMAIL PROTECTED]> writes: > What happens if the timestamp of the commit is an exact match for the > timestamp of the next transaction start? What is the resolution of > the time sampling? It's not done via timestamps: rather, each transaction takes a census of the transaction XIDs that are running in other backends when it starts (there is an array in shared memory that lets it get this information cheaply). Reliability of the system clock is not a factor. Are you sure the server is 8.0.3? There was a bug in prior releases that might possibly be related: 2005-05-07 17:22 tgl * src/backend/utils/time/: tqual.c (REL7_3_STABLE), tqual.c (REL7_4_STABLE), tqual.c (REL7_2_STABLE), tqual.c (REL8_0_STABLE), tqual.c: Adjust time qual checking code so that we always check TransactionIdIsInProgress before we check commit/abort status. Formerly this was done in some paths but not all, with the result that a transaction might be considered committed for some purposes before it became committed for others. Per example found by Jan Wieck. My recollection though is that this only affected applications that were using SELECT FOR UPDATE. In any case, it's pretty hard to see how this would affect an application that is in fact waiting for the backend to report commit-done before it launches the next transaction; the race-condition window we were concerned about no longer exists by the time the backend sends CommandComplete. So my suspicion remains fixed on that point. Do you have any way of sniffing the network traffic of the middle-tier to confirm that it's doing what it's supposed to? regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [BUGS] Serialization errors on single threaded request
Unfortunately, the original test environment has been blown away in favor of testing the 8.1 beta release. I can confirm that the problem exists on a build of the 8.1 beta. If it would be helpful I could set it up again on 8.0.3 to confirm. I THINK it was actually the tip of the 8.0 stable branch as opposed to the 8.0.3 release proper.

We have a little more information about the failure pattern -- when we get these, it is always after there has been a rollback on the thread which eventually generates the serialization error. So I think the pattern is:

ConnectionA:
- A series of inserts/updates/deletes (on tables OTHER than the progress table).
- Update the progress table.
- Commit the transaction.

ConnectionB:
- A series of inserts/updates/deletes (on tables OTHER than the progress table) fails.
- Roll back the transaction.
- Attempt each insert/update/delete individually, committing or rolling back each as we go.
- Attempt to update the progress table -- fail on a serialization error.

To avoid any ambiguity in my former posts -- introducing even a very small delay between the operations on ConnectionA and ConnectionB makes the serialization error very infrequent; introducing a larger delay seems to make it go away. I hate to consider that as a solution, however.

I'm afraid I'm not familiar with a good way to capture the stream of communications with the database server. If you could point me in the right direction, I'll give it my best shot.

I did just have a thought, though -- is there any chance that the JDBC Connection.commit is returning once the command is written to the TCP buffer, and I'm getting hurt by some network latency issues -- the Nagle algorithm or some such? (I assume that the driver is waiting for a response from the server before returning, so this shouldn't be the issue.) At the point that the commit confirmation is sent by the server, I assume the shared memory changes are visible to the other processes?

-Kevin

>>> Tom Lane <[EMAIL PROTECTED]> 08/26/05 12:16 PM >>>
"Kevin Grittner" <[EMAIL PROTECTED]> writes:
> What happens if the timestamp of the commit is an exact match for the
> timestamp of the next transaction start? What is the resolution of
> the time sampling?

It's not done via timestamps: rather, each transaction takes a census of the transaction XIDs that are running in other backends when it starts (there is an array in shared memory that lets it get this information cheaply). Reliability of the system clock is not a factor.

Are you sure the server is 8.0.3? There was a bug in prior releases that might possibly be related:

2005-05-07 17:22  tgl

	* src/backend/utils/time/: tqual.c (REL7_3_STABLE), tqual.c
	(REL7_4_STABLE), tqual.c (REL7_2_STABLE), tqual.c (REL8_0_STABLE),
	tqual.c: Adjust time qual checking code so that we always check
	TransactionIdIsInProgress before we check commit/abort status.
	Formerly this was done in some paths but not all, with the result
	that a transaction might be considered committed for some purposes
	before it became committed for others. Per example found by Jan
	Wieck.

My recollection though is that this only affected applications that were using SELECT FOR UPDATE.

In any case, it's pretty hard to see how this would affect an application that is in fact waiting for the backend to report commit-done before it launches the next transaction; the race-condition window we were concerned about no longer exists by the time the backend sends CommandComplete. So my suspicion remains fixed on that point. Do you have any way of sniffing the network traffic of the middle tier to confirm that it's doing what it's supposed to?

			regards, tom lane
[BUGS] BUG #1850: parameter WITH HOLD (of DECLARE CURSOR) not accepted inside CREATE FUNCTION
The following bug has been logged online:

Bug reference:      1850
Logged by:          Danilo Barbosa
Email address:      [EMAIL PROTECTED]
PostgreSQL version: 8.1
Operating system:   LINUX
Description:        parameter WITH HOLD (of DECLARE CURSOR) not accepted inside CREATE FUNCTION
Details:

CREATE OR REPLACE FUNCTION teste2() RETURNS refcursor AS
$BODY$
DECLARE
    ref CURSOR WITH HOLD FOR SELECT * FROM tb_ponto;
BEGIN
    OPEN ref;
    RETURN ref;
END;
$BODY$
LANGUAGE 'plpgsql' VOLATILE;

ALTER FUNCTION teste2() OWNER TO aes_jacui;

The same DECLARE works outside the CREATE FUNCTION.

Thank you!
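For reference, PL/pgSQL's bound cursor declarations do not take the WITH HOLD clause, which is why the same DECLARE is rejected inside the function. A sketch of a variant that should parse, with the limitation that the returned cursor is only usable within the calling transaction, is:

    CREATE OR REPLACE FUNCTION teste2() RETURNS refcursor AS
    $BODY$
    DECLARE
        ref refcursor;   -- unbound cursor variable, opened explicitly below
    BEGIN
        OPEN ref FOR SELECT * FROM tb_ponto;
        RETURN ref;
    END;
    $BODY$
    LANGUAGE plpgsql VOLATILE;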
[BUGS] BUG #1851: Performance reduction from 8.0.3
The following bug has been logged online:

Bug reference:      1851
Logged by:          Steve Wormley
Email address:      [EMAIL PROTECTED]
PostgreSQL version: 8.1beta1
Operating system:   RedHat Linux, Kernel: 2.6.12.2
Description:        Performance reduction from 8.0.3
Details:

Did a dump and restore of my DB to 8.1beta1 (running concurrently with 8.0.3) and performance on the same query is about 50% slower with the beta. Ran query twice on each instance in sequence to make sure all data was in OS cache.

8.0.3:

explain analyze
select 7208, avg(data.sl), mcc, mnc
from grid_geography_link as gglink, meta_data as meta, signal_level_stats as data
where data.ogc_fid = meta.ogc_fid
  AND data.isvalid = true
  AND meta.isvalid = true
  AND meta.gridid = gglink.gridid
  and 7208 = gglink.geographyid
group by mcc, mnc;

                                   QUERY PLAN
HashAggregate  (cost=36735.67..36736.24 rows=228 width=16) (actual time=3834.630..3834.668 rows=8 loops=1)
  ->  Nested Loop  (cost=16.19..36722.89 rows=1704 width=16) (actual time=141.628..3526.391 rows=67824 loops=1)
        ->  Merge Join  (cost=16.19..30737.86 rows=1766 width=16) (actual time=141.565..1566.594 rows=68547 loops=1)
              Merge Cond: ("outer".gridid = "inner".gridid)
              ->  Index Scan using meta_data_gridid on meta_data meta  (cost=0.00..35362.89 rows=312214 width=20) (actual time=0.103..499.071 rows=89081 loops=1)
                    Filter: (isvalid = true)
              ->  Sort  (cost=16.19..16.76 rows=229 width=4) (actual time=7.291..219.951 rows=68567 loops=1)
                    Sort Key: gglink.gridid
                    ->  Index Scan using grid_geography_link_geographyid on grid_geography_link gglink  (cost=0.00..7.22 rows=229 width=4) (actual time=0.077..3.865 rows=995 loops=1)
                          Index Cond: (7208 = geographyid)
        ->  Index Scan using signal_level_stats_pkey on signal_level_stats data  (cost=0.00..3.38 rows=1 width=16) (actual time=0.011..0.015 rows=1 loops=68547)
              Index Cond: (data.ogc_fid = "outer".ogc_fid)
              Filter: (isvalid = true)
Total runtime: 3834.868 ms
(14 rows)

8.1beta1:

explain analyze
select 7208, avg(data.sl), mcc, mnc
from grid_geography_link as gglink, meta_data as meta, signal_level_stats as data
where data.ogc_fid = meta.ogc_fid
  AND data.isvalid = true
  AND meta.isvalid = true
  AND meta.gridid = gglink.gridid
  and 7208 = gglink.geographyid
group by mcc, mnc;

                                   QUERY PLAN
HashAggregate  (cost=41180.34..41230.34 rows=2 width=16) (actual time=8834.251..8834.372 rows=8 loops=1)
  ->  Merge Join  (cost=31059.37..37808.18 rows=449621 width=16) (actual time=7969.032..8614.804 rows=67823 loops=1)
        Merge Cond: ("outer".gridid = "inner".gridid)
        ->  Sort  (cost=971.33..973.57 rows=896 width=4) (actual time=7.344..10.284 rows=995 loops=1)
              Sort Key: gglink.gridid
              ->  Bitmap Heap Scan on grid_geography_link gglink  (cost=6.13..927.39 rows=896 width=4) (actual time=0.289..3.687 rows=995 loops=1)
                    Recheck Cond: (7208 = geographyid)
                    ->  Bitmap Index Scan on grid_geography_link_geographyid  (cost=0.00..6.13 rows=896 width=0) (actual time=0.260..0.260 rows=995 loops=1)
                          Index Cond: (7208 = geographyid)
        ->  Sort  (cost=30088.04..30338.95 rows=100362 width=20) (actual time=7864.261..8124.173 rows=88143 loops=1)
              Sort Key: meta.gridid
              ->  Hash Join  (cost=8165.88..21750.54 rows=100362 width=20) (actual time=2863.617..6363.921 rows=309036 loops=1)
                    Hash Cond: ("outer".ogc_fid = "inner".ogc_fid)
                    ->  Bitmap Heap Scan on meta_data meta  (cost=1166.30..10165.86 rows=204656 width=20) (actual time=61.030..1271.910 rows=314108 loops=1)
                          Filter: isvalid
                          ->  Bitmap Index Scan on meta_data_isvalid  (cost=0.00..1166.30 rows=204656 width=0) (actual time=58.417..58.417 rows=314108 loops=1)
                                Index Cond: (isvalid = true)
                    ->  Hash  (cost=6497.77..6497.77 rows=200724 width=16) (actual time=2802.385..2802.385 rows=394772 loops=1)
                          ->  Bitmap Heap Scan on signal_level_stats data  (cost=1144.5
Re: [BUGS] Serialization errors on single threaded request
Kevin Grittner wrote:

> I'm afraid I'm not familiar with a good way to capture the stream of
> communications with the database server. If you could point me in the
> right direction, I'll give it my best shot.

tcpdump will do the trick (something like 'tcpdump -n -w some.output.file -s 1514 -i any tcp port 5432').

Or you can pass '&loglevel=2' as part of the JDBC connection URL to have the JDBC driver generate a log of all the messages it sends/receives (in less detail than a full network-level capture would give you, though).

> I did just have a thought, though -- is there any chance that the JDBC
> Connection.commit is returning once the command is written to the TCP
> buffer, and I'm getting hurt by some network latency issues

No, the JDBC driver waits for ReadyForQuery from the backend before returning.

-O
[BUGS] Installing postgres 8.0.3 on Windows
I just installed Postgres from the postgresql-8.0.3.zip file, and I'm running XP Pro. The installation seemed to go OK, but when I try to log in as postgres it tells me:

psql -Upostgres -W
(I enter my password)

psql: server closed the connection unexpectedly
        this probably means the server terminated abnormally
        before or while processing the request
Re: [BUGS] BUG #1851: Performance reduction from 8.0.3
"Steve Wormley" <[EMAIL PROTECTED]> writes: > Did a dump and restore of my DB to 8.1beta1 (running concurrently with > 8.0.3) and performance on the same query is about 50% slower with the beta. > Ran query twice on each instance in sequence to make sure all data was in OS > cache. Did you vacuum analyze (or at least analyze) both databases? The rowcount estimates seem quite a bit different, which suggests the two planners aren't working with the same stats. regards, tom lane ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [BUGS] BUG #1851: Performance reduction from 8.0.3
On 2005-08-26 16:17, "Tom Lane" <[EMAIL PROTECTED]> wrote:

> "Steve Wormley" <[EMAIL PROTECTED]> writes:
>> Did a dump and restore of my DB to 8.1beta1 (running concurrently with
>> 8.0.3) and performance on the same query is about 50% slower with the
>> beta. Ran query twice on each instance in sequence to make sure all
>> data was in OS cache.
>
> Did you vacuum analyze (or at least analyze) both databases? The
> rowcount estimates seem quite a bit different, which suggests the
> two planners aren't working with the same stats.

Sigh, I swore I remembered to at least analyze both, but I guess not. After a vacuum analyze, 8.1beta1 is fine and happy, using a virtually identical plan and coming in a few hundred milliseconds faster.

Thanks,
-Steve Wormley
Re: [BUGS] [PATCHES] insert into table (column) values (nullif('',''));
Since the column is boolean, you need a boolean result:

insert into table (column) values (nullif(true,true));

> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]] On Behalf Of Matt A.
> Sent: Monday, August 22, 2005 3:22 AM
> To: pgsql-bugs@postgresql.org; pgsql-patches@postgresql.org
> Subject: [PATCHES] insert into table (column) values (nullif('',''));
>
> insert into table (column) values (nullif('',''));
> ERROR: column "column" is of type boolean but expression is of type text.
>
> Inserting NULL works. Shouldn't nullif('','') return NULL if both values
> are equal? It works in MSSQL.
>
> Is there a different function to accomplish an insert with a
> nullif('','') test?
>
> Thanks,
> Matt
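A small sketch against a hypothetical table with a boolean column, showing the two forms that work according to the thread: inserting NULL directly, or giving NULLIF boolean arguments so the result already has the right type.

    -- hypothetical table mirroring the report
    CREATE TEMP TABLE t (col boolean);
    INSERT INTO t (col) VALUES (NULL);                 -- inserting NULL works
    INSERT INTO t (col) VALUES (nullif(true, true));   -- boolean arguments yield a boolean NULL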