Re: [BUGS] can't link the libpq.dll for bcc32.mak
Ping-Hua Shao wrote:
> Dear:
> I tried to compile the libpq library (in the 7.3.4 & 7.4 beta1 src
> folders) with bcc32.mak under bcc55 and bcb6, but got some problems
> when linking. The problems are with two symbols:
>   1. _pqGethostbyname
>   2. _pqStrerror
> which cannot be resolved.

There are already patches for this on the way; please look at
pgsql-patches from the last few days.

Regards,
Andreas
[BUGS] ODBC, SQLExecute and HY010
Hello,

I have been getting intermittent errors with SQLExecute. Using
SQLGetDiagRec, I get:

    "HY010\nError Code: 3\nConnection is already in use."

I am running:
  - the latest PostgreSQL ODBC driver
  - Windows 2000
  - VC++ 6.0

The application is multi-threaded; functions are called via CRecordset.
I have wrapped the queries in critical sections, which seemed to
minimize the problem but not eliminate it.

The problem seems to occur in the function:

    SC_execute(StatementClass *self)

at the line:

    conn->status = CONN_EXECUTING;

and the lines:

    if (CONN_DOWN != conn->status)
        conn->status = oldstatus;

If I set the next execution line for the SQLExecute function in the
debugger, the command executes just fine. I have tried to generate a
log file, but it doesn't seem too helpful.

Thanks.

Log file:

[1036]CopyCommonAttributes: A7=100;A8=4096;A9=0;B0=254;B1=8190;B2=0;B3=0;B4=1;B5=1;B6=0;B7=1;B8=0;B9=1;C0=0;C1=0;C2=dd_;
[1036]attribute = 'UID', value = 'account'
[1036]CopyCommonAttributes: A7=100;A8=4096;A9=0;B0=254;B1=8190;B2=0;B3=0;B4=1;B5=1;B6=0;B7=1;B8=0;B9=1;C0=0;C1=0;C2=dd_;
[1036]attribute = 'Servername', value = 'host'
[1036]CopyCommonAttributes: A7=100;A8=4096;A9=0;B0=254;B1=8190;B2=0;B3=0;B4=1;B5=1;B6=0;B7=1;B8=0;B9=1;C0=0;C1=0;C2=dd_;
[1036]attribute = 'Password', value = 'x'
[1036]CopyCommonAttributes: A7=100;A8=4096;A9=0;B0=254;B1=8190;B2=0;B3=0;B4=1;B5=1;B6=0;B7=1;B8=0;B9=1;C0=0;C1=0;C2=dd_;
[1036]attribute = 'Database', value = 'database'
[1036]CopyCommonAttributes: A7=100;A8=4096;A9=0;B0=254;B1=8190;B2=0;B3=0;B4=1;B5=1;B6=0;B7=1;B8=0;B9=1;C0=0;C1=0;C2=dd_;
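SQLSTATE HY010 is ODBC's "function sequence error", the classic symptom
of two threads driving the same connection handle concurrently, which
matches the unguarded conn->status transition quoted above. Below is a
minimal C sketch of the critical-section workaround the poster
describes, at a plain ODBC call site; the names init_locking and
execute_serialized are made up for illustration and are not from the
original application or the driver:

    #include <windows.h>
    #include <sql.h>
    #include <sqlext.h>

    /* Serialize every statement execution on the shared connection so
     * that only one thread can be inside the driver at a time.
     * Illustrative sketch only. */

    static CRITICAL_SECTION conn_lock;   /* guards the shared connection */

    void init_locking(void)
    {
        InitializeCriticalSection(&conn_lock);
    }

    SQLRETURN execute_serialized(SQLHSTMT hstmt)
    {
        SQLRETURN rc;

        EnterCriticalSection(&conn_lock);
        rc = SQLExecute(hstmt);          /* one thread at a time */
        LeaveCriticalSection(&conn_lock);
        return rc;
    }

Note that with CRecordset the lock would have to cover every call that
touches the connection (opening, requerying, and closing recordsets),
not just the queries themselves; locking only the queries leaves exactly
the kind of window that would make the error intermittent rather than
constant, as described above.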
Re: [BUGS] index not used after VACUUM ANALYZE
On Tue, 26 Aug 2003, mike wrote:

> On Mon, 25 Aug 2003, Stephan Szabo wrote:
>
> > On Mon, 25 Aug 2003, mike wrote:
> >
> > > On Mon, 25 Aug 2003, Stephan Szabo wrote:
> > >
> > > > On Thu, 21 Aug 2003, mike wrote:
> > > >
> > > > > Hi,
> > > > > I have a db as specified in init.sql.
> > > > > flows has 763488 entries.
> > > > >
> > > > > After dropping/creating/loading the db and running auswert.sh I
> > > > > get the attached result in query1.txt. After 'VACUUM ANALYZE' I
> > > > > get the results in query2.txt.
> > > > >
> > > > > As you can see, the indexes are not used any longer. Why?
> > > >
> > > > It looks like the row estimates changed to say that a large % of
> > > > the rows match the condition. Is that true? In any case, what
> > > > does EXPLAIN
> > >
> > > Partially. I have 763488 statistical records (various IP traffic)
> > > collected over one month. After collection I try to condense the
> > > data into daily statistics.
> > >
> > > The EXPLAIN ANALYZE output is attached: a1.txt is before, a2.txt
> > > after the VACUUM ANALYZE run.
> >
> > There are two things that jump out at me: first, the group aggregate
> > estimates afterwards are way higher than reality, and second, it
> > looks to me like the sort before the group aggregate is taking longer
> > than expected. What do you have sort_mem set to? That will affect
> > whether sorts are done in memory and, I believe, whether the planner
> > thinks it can use a hash aggregate on that number of rows.
>
> sort_mem was at the default. But setting it to 10240 doesn't seem to
> change the seqscan on flows.

But does it change the amount of time the query actually takes to run?
Seqscans are not always slower, nor are they necessarily the actual
problem here. The problem seems to be the choice of a group aggregate
plus sort, which is taking a lot of time; if you look at the real time
on the steps below that, it's approximately the same for seqscan or
index scan.
Re: [BUGS] postgresql 7.3.2 bug on date '1901-12-13' and '1901-12
On Mon, 25 Aug 2003, Tom Lane wrote:

> Stephan Szabo <[EMAIL PROTECTED]> writes:
> > On Thu, 21 Aug 2003, Tom Lane wrote:
> >> Stephan Szabo <[EMAIL PROTECTED]> writes:
> >>> Wait, he's in Australia; what if he's getting the edge case the
> >>> other way?
> >>
> >> I'm inclined to fix to_date by decomposing the code differently ---
> >> it should avoid the coercion to timestamp, which is a waste of
> >> cycles anyway. But is to_timestamp (and more generally timestamp's
> >> input converter) broken? If so, how can we do better? I don't think
> >> we can entirely avoid the problem of a transition between local and
> >> GMT time.
> >
> > Yes. Timestamp with time zone is broken on the same boundaries in
> > general. I'm not really sure how to do better without some work; it
> > seems we end up with multiple different input values getting the
> > same internal representation, so we can't differentiate which
> > version of the input was used to get there (whether the user said
> > 1901-12-13 23:00 or 1901-12-14).
>
> I've fixed to_date() along the above lines, but the general problem of
> how timestamp I/O should behave remains.
>
> I've come to the conclusion that there isn't any really consistent
> behavior if we want to stick with the current definition that
> "timestamps outside the Unix date range are always UTC". If we do
> that, then there is a set of timestamps at one end of the date range
> that are ambiguous (they could be taken as either UTC or local), while
> at the other end of the range there is a set of timestamps that can't
> be validly converted as either one. This is essentially the same
> problem we have during daylight-savings transition hours: when you
> "spring forward" there is no local time 02:30, and when you "fall
> back" there are two of 'em.
>
> The solution we've adopted for DST transitions is to interpret invalid
> or ambiguous local times as "always standard time". We could possibly
> do the same for the questionable times at the ends of the Unix date
> range, ie, always interpret them as UTC (although I've been fooling
> with the code for a couple of hours now trying to get it to do that,
> without much success).

Yeah, it seemed like the rules involved in doing that might be
complicated to get right.

> Plan B would be to get rid of the discontinuity by abandoning the rule
> that timestamps outside the Unix range are UTC. We could instead say
> that the local time zone offset that mktime() reports for the first
> date of the Unix range applies to all prior dates, and similarly that
> the offset for the last date of the range applies to all later dates.
>
> I'm unsure which of these is a better answer. Any thoughts?

Generally, I think B is best, since it keeps the values more continuous
and doesn't require complicated trickery, although I'm not sure whether
that might change the observable behavior for people currently using
timestamps outside the boundaries. I'm not one of them, so maybe we
should continue on -general?
Re: [BUGS] postgresql 7.3.2 bug on date '1901-12-13' and '1901-12
Stephan Szabo <[EMAIL PROTECTED]> writes:
> On Thu, 21 Aug 2003, Tom Lane wrote:
>> Stephan Szabo <[EMAIL PROTECTED]> writes:
>>> Wait, he's in Australia; what if he's getting the edge case the
>>> other way?
>>
>> I'm inclined to fix to_date by decomposing the code differently ---
>> it should avoid the coercion to timestamp, which is a waste of cycles
>> anyway. But is to_timestamp (and more generally timestamp's input
>> converter) broken? If so, how can we do better? I don't think we can
>> entirely avoid the problem of a transition between local and GMT
>> time.

> Yes. Timestamp with time zone is broken on the same boundaries in
> general. I'm not really sure how to do better without some work; it
> seems we end up with multiple different input values getting the same
> internal representation, so we can't differentiate which version of
> the input was used to get there (whether the user said
> 1901-12-13 23:00 or 1901-12-14).

I've fixed to_date() along the above lines, but the general problem of
how timestamp I/O should behave remains.

I've come to the conclusion that there isn't any really consistent
behavior if we want to stick with the current definition that
"timestamps outside the Unix date range are always UTC". If we do that,
then there is a set of timestamps at one end of the date range that are
ambiguous (they could be taken as either UTC or local), while at the
other end of the range there is a set of timestamps that can't be
validly converted as either one. This is essentially the same problem
we have during daylight-savings transition hours: when you "spring
forward" there is no local time 02:30, and when you "fall back" there
are two of 'em.

The solution we've adopted for DST transitions is to interpret invalid
or ambiguous local times as "always standard time". We could possibly
do the same for the questionable times at the ends of the Unix date
range, ie, always interpret them as UTC (although I've been fooling
with the code for a couple of hours now trying to get it to do that,
without much success).

Plan B would be to get rid of the discontinuity by abandoning the rule
that timestamps outside the Unix range are UTC. We could instead say
that the local time zone offset that mktime() reports for the first
date of the Unix range applies to all prior dates, and similarly that
the offset for the last date of the range applies to all later dates.

I'm unsure which of these is a better answer. Any thoughts?

			regards, tom lane
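For concreteness, here is a minimal C sketch of what Plan B amounts to
at the C-library level. The helper names and the clamping function are
made up for illustration; this is not the actual PostgreSQL timestamp
code, and it assumes a platform whose mktime() can handle the 32-bit
boundary dates:

    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    /* Illustrative sketch of "Plan B": instead of treating timestamps
     * outside the Unix range as UTC, reuse the local-zone offset that
     * mktime() reports at the nearest end of the range. */

    #define UNIX_MIN ((time_t) INT32_MIN)   /* ~1901-12-13 20:45:52 UTC */
    #define UNIX_MAX ((time_t) INT32_MAX)   /* ~2038-01-19 03:14:07 UTC */

    /* Local-zone offset (seconds east of UTC) in effect at instant t. */
    static long zone_offset(time_t t)
    {
        struct tm tm = *gmtime(&t);     /* UTC fields for t */
        tm.tm_isdst = -1;               /* let mktime() decide DST */
        /* mktime() reads those fields as local time, so the difference
         * between t and its result is exactly the zone offset. */
        return (long) difftime(t, mktime(&tm));
    }

    /* Plan B: clamp out-of-range instants to the nearest boundary when
     * asking the C library for the zone offset. */
    static long plan_b_offset(long long secs)
    {
        if (secs < (long long) UNIX_MIN)
            return zone_offset(UNIX_MIN);   /* earlier dates: 1901's offset */
        if (secs > (long long) UNIX_MAX)
            return zone_offset(UNIX_MAX);   /* later dates: 2038's offset */
        return zone_offset((time_t) secs);
    }

    int main(void)
    {
        printf("offset one day before range: %ld seconds\n",
               plan_b_offset((long long) INT32_MIN - 86400));
        printf("offset one day after range:  %ld seconds\n",
               plan_b_offset((long long) INT32_MAX + 86400));
        return 0;
    }

The appeal of this scheme, per the discussion above, is continuity: the
offset stays constant beyond each boundary instead of jumping to UTC,
at the cost of extrapolating 1901-era and 2038-era zone rules to all
earlier and later dates.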